author     alc <alc@FreeBSD.org>  2010-05-26 18:00:44 +0000
committer  alc <alc@FreeBSD.org>  2010-05-26 18:00:44 +0000
commit     3f1d4b057cf7217e3fe56dcd1bd29db76406ae5f (patch)
tree       95cb060d8736de18caf3932cd92f06f852f69659 /sys/sparc64
parent     8dd88ee72437269f6edd3612fb57c46b8adfc020 (diff)
Push down page queues lock acquisition in pmap_enter_object() and
pmap_is_referenced(). Eliminate the corresponding page queues lock
acquisitions from vm_map_pmap_enter() and mincore(), respectively. In
mincore(), this allows some additional cases to complete without ever
acquiring the page queues lock.
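The "push down the lock" pattern the commit describes can be seen in a minimal user-space sketch. This is not the kernel code: a pthread mutex stands in for the page queues mutex, and `struct page` and `page_is_referenced()` are illustrative names. The point is that the callee now acquires the lock only around the work that needs it, and the caller holds nothing:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel's page queues mutex. */
static pthread_mutex_t queues_lock = PTHREAD_MUTEX_INITIALIZER;

/* Minimal stand-in for vm_page_t; only the field this sketch needs. */
struct page {
	bool referenced;
};

/*
 * After the push-down, the query takes the lock itself and returns
 * through a single exit point via a local rv, so the unlock can never
 * be skipped; callers no longer wrap the call in the lock.
 */
bool
page_is_referenced(struct page *m)
{
	bool rv;

	rv = false;
	pthread_mutex_lock(&queues_lock);
	if (m->referenced)
		rv = true;
	pthread_mutex_unlock(&queues_lock);
	return (rv);
}
```

The single-exit shape with a local `rv` mirrors the sparc64 `pmap_is_referenced()` change in the diff below, where early `return` statements had to go once an unlock was required on every path.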
Assert that the page is managed in pmap_is_referenced().
On powerpc/aim, push down the page queues lock acquisition from
moea*_is_modified() and moea*_is_referenced() into moea*_query_bit().
Again, this will allow some additional cases to complete without ever
acquiring the page queues lock.
Reorder a few statements in vm_page_dontneed() so that a race can't lead
to an old reference persisting. This scenario is described in detail by a
comment.
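The commit message describes the race only abstractly. As a generic, user-space illustration (not the `vm_page_dontneed()` code) of why the ordering of a read and a clear of a shared reference bit matters, compare a split load/store with a single atomic exchange:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Shared flag standing in for a page's referenced bit. */
static atomic_bool referenced;

/*
 * Racy form: another thread can set the flag between the load and the
 * store, and that update is then wiped by the unconditional clear.
 */
bool
test_and_clear_racy(void)
{
	bool was = atomic_load(&referenced);

	atomic_store(&referenced, false);
	return (was);
}

/*
 * Ordered form: sampling and clearing happen as one atomic step, so a
 * concurrent setter either lands before the exchange (and is observed)
 * or after it (and survives the clear).
 */
bool
test_and_clear_atomic(void)
{
	return (atomic_exchange(&referenced, false));
}
```

The kernel fix achieves the same effect by reordering plain statements under the appropriate lock rather than by using atomics, but the hazard is the same: a window between observing state and updating it lets a stale value persist.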
Correct a spelling error in vm_page_dontneed().
Assert that the object is locked in vm_page_clear_dirty(), and restrict the
page queues lock assertion to just those cases in which the page is
currently writeable.
Add object locking to vnode_pager_generic_putpages(). This was the one
and only place where vm_page_clear_dirty() was being called without the
object being locked.
Eliminate an unnecessary vm_page_lock() around vnode_pager_setsize()'s call
to vm_page_clear_dirty().
Change vnode_pager_generic_putpages() to the modern style of function
definition. Also, rename one of the parameters to follow virtual
memory system naming conventions.
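For readers unfamiliar with the terminology, "modern style" here means an ANSI C definition with a typed parameter list, as opposed to the old K&R form. A minimal illustration with a stand-in function (not the vnode pager routine):

```c
/*
 * The old K&R form declared parameter types between the parameter list
 * and the body:
 *
 *	int
 *	sum(a, b)
 *		int a;
 *		int b;
 *	{
 *		return (a + b);
 *	}
 *
 * The modern (ANSI) form puts the types in the parameter list itself,
 * so the compiler can type-check every call site against the
 * definition:
 */
int
sum(int a, int b)
{
	return (a + b);
}
```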
Reviewed by: kib
Diffstat (limited to 'sys/sparc64')
-rw-r--r--  sys/sparc64/sparc64/pmap.c | 19
1 file changed, 13 insertions, 6 deletions
diff --git a/sys/sparc64/sparc64/pmap.c b/sys/sparc64/sparc64/pmap.c
index 6b4ae41..719fb3b 100644
--- a/sys/sparc64/sparc64/pmap.c
+++ b/sys/sparc64/sparc64/pmap.c
@@ -1492,12 +1492,14 @@ pmap_enter_object(pmap_t pm, vm_offset_t start, vm_offset_t end,
 	psize = atop(end - start);
 	m = m_start;
+	vm_page_lock_queues();
 	PMAP_LOCK(pm);
 	while (m != NULL && (diff = m->pindex - m_start->pindex) < psize) {
 		pmap_enter_locked(pm, start + ptoa(diff), m, prot &
 		    (VM_PROT_READ | VM_PROT_EXECUTE), FALSE);
 		m = TAILQ_NEXT(m, listq);
 	}
+	vm_page_unlock_queues();
 	PMAP_UNLOCK(pm);
 }
 
@@ -1947,17 +1949,22 @@ boolean_t
 pmap_is_referenced(vm_page_t m)
 {
 	struct tte *tp;
+	boolean_t rv;
 
-	mtx_assert(&vm_page_queue_mtx, MA_OWNED);
-	if ((m->flags & (PG_FICTITIOUS | PG_UNMANAGED)) != 0)
-		return (FALSE);
+	KASSERT((m->flags & (PG_FICTITIOUS | PG_UNMANAGED)) == 0,
+	    ("pmap_is_referenced: page %p is not managed", m));
+	rv = FALSE;
+	vm_page_lock_queues();
 	TAILQ_FOREACH(tp, &m->md.tte_list, tte_link) {
 		if ((tp->tte_data & TD_PV) == 0)
 			continue;
-		if ((tp->tte_data & TD_REF) != 0)
-			return (TRUE);
+		if ((tp->tte_data & TD_REF) != 0) {
+			rv = TRUE;
+			break;
+		}
 	}
-	return (FALSE);
+	vm_page_unlock_queues();
+	return (rv);
 }
 
 void