author     alc <alc@FreeBSD.org>	2012-05-18 05:36:04 +0000
committer  alc <alc@FreeBSD.org>	2012-05-18 05:36:04 +0000
commit     eba132f415a1222ad568372f289b15e1d838fb05 (patch)
tree       3ef3ba31bad2506ab41ce284c292b71ee0b9d718 /sys/amd64/include
parent     6fce315734ae68ff89003ff88fc8c19be17e97a8 (diff)
Rename pmap_collect() to pmap_pv_reclaim() and rewrite it such that it no
longer uses the active and inactive paging queues.  Instead, the pmap now
maintains an LRU-ordered list of pv entry pages, and pmap_pv_reclaim() uses
this list to select pv entries for reclamation.

Note: The old pmap_collect() tried to avoid reclaiming mappings for pages
that have either a hold_count or a busy field that is non-zero.  However,
this isn't necessary for correctness, and the locking in pmap_collect()
was insufficient to guarantee that such mappings weren't reclaimed.  The
new pmap_pv_reclaim() doesn't even try.

Reviewed by:	kib
MFC after:	6 weeks
Diffstat (limited to 'sys/amd64/include')
 sys/amd64/include/pmap.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/sys/amd64/include/pmap.h b/sys/amd64/include/pmap.h
index 1b8108a..48758ef 100644
--- a/sys/amd64/include/pmap.h
+++ b/sys/amd64/include/pmap.h
@@ -295,7 +295,7 @@ struct pv_chunk {
 	pmap_t			pc_pmap;
 	TAILQ_ENTRY(pv_chunk)	pc_list;
 	uint64_t		pc_map[_NPCM];	/* bitmap; 1 = free */
-	uint64_t		pc_spare[2];
+	TAILQ_ENTRY(pv_chunk)	pc_lru;
 	struct pv_entry		pc_pventry[_NPCPV];
 };