author    dg <dg@FreeBSD.org>  1995-07-13 08:48:48 +0000
committer dg <dg@FreeBSD.org>  1995-07-13 08:48:48 +0000
commit    c8b0a7332c667c4216e12358b63e61fad9031a55 (patch)
tree      c6f2eefb41eadd82d51ecb0deced0d6d361765ee /sys/vm/vm_page.c
parent    f4ec3663dfda604a39dab484f7714e57488bc2c4 (diff)
NOTE: libkvm, w, ps, 'top', and any other utility which depends on struct
proc or any VM system structure will have to be rebuilt!!!
Much needed overhaul of the VM system. Included in this first round of
changes:
1) Improved pager interfaces: init, alloc, dealloc, getpages, putpages,
haspage, and sync operations are supported. The haspage interface now
provides information about clusterability. All pager routines now take
struct vm_object's instead of "pagers".
2) Improved data structures. In the previous paradigm, there was constant
confusion caused by pagers being both a data structure ("allocate a
pager") and a collection of routines. The idea of a pager structure has
essentially been eliminated. Objects now have types, and this type is
used to index the appropriate pager. In most cases, items in the pager
structure were duplicated in the object data structure and thus were
unnecessary. In the few cases that remained, an un_pager structure union
was created in the object to contain these items. (Items 1 and 2 are
illustrated in the first sketch following this list.)
3) Because of the cleanup of #1 & #2, a lot of unnecessary layering can now
be removed. For instance, vm_object_enter(), vm_object_lookup(),
vm_object_remove(), and the associated object hash list were some of the
things that were removed.
4) simple_lock's removed. Discussion with several people revealed that the
SMP locking primitives used in the VM system are unlikely to be the
mechanism that we'll be adopting. Even if they were, the locking that was
in the code was inadequate and would have to be mostly redone anyway. The
locking in a uni-processor kernel was a no-op but went a long way toward
making the code difficult to read and debug.
5) Places that attempted to kludge around the fact that we don't have
kernel thread support have been fixed to reflect the reality that we are
really dealing with processes, not threads. The VM system didn't have
complete thread support, so the comments and mis-named routines were just
wrong. We now use tsleep and wakeup directly in the lock routines, for
instance (see the lock sketch following this list).
6) Where appropriate, the pagers have been improved, especially in the
pager_alloc routines. Most of the pager_allocs have been rewritten and
are now faster and easier to maintain.
7) The pagedaemon pageout clustering algorithm has been rewritten and
now tries harder to output an equal number of pages before and after
the requested page. This is sort of the reverse of the ideal pagein
algorithm and should provide better overall performance (a sketch of
the balanced clustering follows this list).
8) Unnecessary (incorrect) casts to caddr_t in calls to tsleep & wakeup
have been removed. Some other unnecessary casts have also been removed.
9) Some almost useless debugging code removed.
10) Terminology of shadow objects vs. backing objects straightened out.
The fact that the vm_object data structure essentially had this
backwards really confused things. The use of "shadow" and "backing
object" throughout the code is now internally consistent and correct
in the Mach terminology.
11) Several minor bug fixes, including one in the vm daemon that caused
zero-RSS objects not to be purged as intended.
12) A "default pager" has now been created which cleans up the transition
of objects to the "swap" type. The previous checks throughout the code
for swp->pg_data != NULL were really ugly. This change also provides
the rudiments for future backing of "anonymous" memory by something
other than the swap pager (via the vnode pager, for example), and it
allows the decision about which of these pagers to use to be made
dynamically (although it will need some additional decision code to do
this, of course). A sketch of the conversion follows this list.
13) (dyson) MAP_COPY has been deprecated and the corresponding "copy
object" code has been removed. MAP_COPY was undocumented and non-
standard. It was furthermore broken in several ways which caused its
behavior to degrade to MAP_PRIVATE. Binaries that use MAP_COPY will
continue to work correctly, but via the slightly different semantics
of MAP_PRIVATE.
14) (dyson) Sharing maps have been removed. Its marginal usefulness in a
threads design can be worked around in other ways. Both #13 and #14
were done to simplify the code and improve readability and
maintainability. (As were nearly all of these changes.)
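To make items 1 and 2 concrete, here is a minimal sketch of a typed
object dispatching through a table of pager operations. The field
names, OBJT_* constants, and pagertab layout are illustrative
assumptions about the new interfaces (assuming the usual VM headers),
not the exact source:

```c
typedef struct vm_object *vm_object_t;	/* forward, as in the vm headers */

/* Illustrative only: names approximate the post-overhaul design. */
enum obj_type { OBJT_DEFAULT, OBJT_SWAP, OBJT_VNODE, OBJT_DEVICE };

struct pagerops {
	void		(*pgo_init)(void);
	vm_object_t	(*pgo_alloc)(void *handle, vm_size_t size,
			    vm_prot_t prot, vm_offset_t off);
	void		(*pgo_dealloc)(vm_object_t object);
	int		(*pgo_getpages)(vm_object_t object, vm_page_t *m,
			    int count, int reqpage);
	int		(*pgo_putpages)(vm_object_t object, vm_page_t *m,
			    int count, boolean_t sync, int *rtvals);
	boolean_t	(*pgo_haspage)(vm_object_t object, vm_offset_t offset,
			    int *before, int *after);	/* clusterability */
	void		(*pgo_sync)(void);
};

extern struct pagerops *pagertab[];	/* indexed by the object's type */

struct vm_object {
	enum obj_type	type;		/* selects the entry in pagertab[] */
	/* ... */
	union {				/* type-specific pager state */
		struct { struct vnode *vnp_vp; } vnp;	/* vnode pager */
		struct { void *swp_data; } swp;		/* swap pager */
	} un_pager;
};

/*
 * All pager calls then dispatch through the object's type, e.g.:
 *	(*pagertab[object->type]->pgo_haspage)(object, offset, &before, &after);
 */
```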
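Item 5 in practice: with processes rather than threads, a lock routine
can sleep and wake directly. This is a hedged sketch, not the actual
routines; vm_object_lock, OBJ_LOCKED, and OBJ_WANTED are placeholder
names:

```c
/* Uniprocessor sketch: a lock built directly on tsleep()/wakeup(). */
static void
vm_object_lock(vm_object_t object)
{
	while (object->flags & OBJ_LOCKED) {
		object->flags |= OBJ_WANTED;	/* note a waiting process */
		tsleep(object, PVM, "objlk", 0);	/* sleep as a process */
	}
	object->flags |= OBJ_LOCKED;
}

static void
vm_object_unlock(vm_object_t object)
{
	object->flags &= ~OBJ_LOCKED;
	if (object->flags & OBJ_WANTED) {
		object->flags &= ~OBJ_WANTED;
		wakeup(object);		/* no caddr_t cast needed; see item 8 */
	}
}
```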
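A sketch of item 7's balanced clustering (not the pagedaemon's actual
code; vm_page_present_and_dirty() is a hypothetical stand-in for the
real eligibility tests, and bounds checks are omitted):

```c
/* Gather up to n eligible pages on each side of the target page. */
static int
collect_pageout_cluster(vm_object_t object, vm_offset_t offset,
    vm_page_t *ma, int n)
{
	vm_offset_t base;
	vm_page_t p;
	int ahead, behind, count, i;

	/* Count contiguous eligible pages directly behind the target... */
	for (behind = 0; behind < n; behind++) {
		p = vm_page_lookup(object, offset - (behind + 1) * PAGE_SIZE);
		if (p == NULL || !vm_page_present_and_dirty(p))
			break;
	}
	/* ...and directly ahead of it. */
	for (ahead = 0; ahead < n; ahead++) {
		p = vm_page_lookup(object, offset + (ahead + 1) * PAGE_SIZE);
		if (p == NULL || !vm_page_present_and_dirty(p))
			break;
	}
	/* Emit the cluster in ascending order: behind, target, ahead. */
	base = offset - behind * PAGE_SIZE;
	count = 0;
	for (i = 0; i < behind + 1 + ahead; i++)
		ma[count++] = vm_page_lookup(object, base + i * PAGE_SIZE);
	return (count);
}
```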
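And the conversion step behind item 12, again as a sketch reusing the
hypothetical OBJT_* names above; default_pager_convert() is not the
shipped routine:

```c
/*
 * Anonymous memory starts life under the default pager and is only
 * committed to a real pager when it first needs backing store.
 */
static void
default_pager_convert(vm_object_t object)
{
	if (object->type != OBJT_DEFAULT)
		return;
	/*
	 * The dynamic decision point: today anonymous memory goes to
	 * the swap pager, but a vnode-backed pager could be chosen
	 * here once the additional decision code exists.
	 */
	object->type = OBJT_SWAP;
}
```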
TODO:
1) Rewrite most of the vnode pager to use VOP_GETPAGES/PUTPAGES. Doing
this will reduce the vnode pager to a mere fraction of its current size.
2) Rewrite vm_fault and the swap/vnode pagers to use the clustering
information provided by the new haspage pager interface. This will
substantially reduce the overhead by eliminating a large number of
VOP_BMAP() calls. The VOP_BMAP() filesystem interface should be
improved to provide both a "behind" and an "ahead" indication of
contiguity (a sketch follows this TODO list).
3) Implement the extended features of pager_haspage in swap_pager_haspage().
It currently just says 0 pages ahead/behind.
4) Re-implement the swap device (swstrategy) in a more elegant way, perhaps
via a much more general mechanism that could also be used for disk
striping of regular filesystems.
5) Do something to improve the architecture of vm_object_collapse(). The
fact that it makes calls into the swap pager and knows too much about
how the swap pager operates really bothers me. It also doesn't allow
for collapsing of non-swap pager objects ("unnamed" objects backed by
other pagers).
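For TODO #2, the haspage counts could size a pagein cluster along
these lines. vm_pager_has_page() mirrors the pgo_haspage shape
sketched earlier, and MAX_CLUSTER_BEHIND/AHEAD are made-up tunables;
this is an assumption about the eventual code, not the implementation:

```c
/* Size a pagein cluster from behind/ahead counts instead of VOP_BMAP(). */
#define MAX_CLUSTER_BEHIND	4
#define MAX_CLUSTER_AHEAD	8

static int
pagein_cluster_bounds(vm_object_t object, vm_offset_t offset,
    vm_offset_t *first, vm_offset_t *last)
{
	int ahead, behind;

	if (!vm_pager_has_page(object, offset, &behind, &ahead))
		return (0);	/* no backing store for this page */
	if (behind > MAX_CLUSTER_BEHIND)
		behind = MAX_CLUSTER_BEHIND;
	if (ahead > MAX_CLUSTER_AHEAD)
		ahead = MAX_CLUSTER_AHEAD;
	*first = offset - behind * PAGE_SIZE;
	*last = offset + ahead * PAGE_SIZE;
	return (1);
}
```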
Diffstat (limited to 'sys/vm/vm_page.c')
-rw-r--r--  sys/vm/vm_page.c | 84
1 file changed, 17 insertions(+), 67 deletions(-)
```diff
diff --git a/sys/vm/vm_page.c b/sys/vm/vm_page.c
index 30983e1..2f6b8a0 100644
--- a/sys/vm/vm_page.c
+++ b/sys/vm/vm_page.c
@@ -34,7 +34,7 @@
  * SUCH DAMAGE.
  *
  * from: @(#)vm_page.c 7.4 (Berkeley) 5/7/91
- * $Id: vm_page.c,v 1.31 1995/04/16 12:56:21 davidg Exp $
+ * $Id: vm_page.c,v 1.32 1995/05/30 08:16:15 rgrimes Exp $
  */

 /*
@@ -86,14 +86,11 @@
 struct pglist *vm_page_buckets; /* Array of buckets */
 int vm_page_bucket_count;       /* How big is array? */
 int vm_page_hash_mask;          /* Mask for hash function */
-simple_lock_data_t bucket_lock; /* lock for all buckets XXX */

 struct pglist vm_page_queue_free;
 struct pglist vm_page_queue_active;
 struct pglist vm_page_queue_inactive;
 struct pglist vm_page_queue_cache;
-simple_lock_data_t vm_page_queue_lock;
-simple_lock_data_t vm_page_queue_free_lock;

 /* has physical page allocation been initialized? */
 boolean_t vm_page_startup_initialized;
@@ -196,14 +193,6 @@ vm_page_startup(starta, enda, vaddr)
        start = phys_avail[biggestone];
-
-       /*
-        * Initialize the locks
-        */
-
-       simple_lock_init(&vm_page_queue_free_lock);
-       simple_lock_init(&vm_page_queue_lock);
-
        /*
         * Initialize the queue headers for the free queue, the active queue
         * and the inactive queue.
@@ -250,8 +239,6 @@ vm_page_startup(starta, enda, vaddr)
                bucket++;
        }

-       simple_lock_init(&bucket_lock);
-
        /*
         * round (or truncate) the addresses to our page size.
         */
@@ -290,8 +277,6 @@ vm_page_startup(starta, enda, vaddr)
         */
        first_page = phys_avail[0] / PAGE_SIZE;
-
-       /* for VM_PAGE_CHECK() */
        last_page = phys_avail[(nblocks - 1) * 2 + 1] / PAGE_SIZE;

        page_range = last_page - (phys_avail[0] / PAGE_SIZE);
@@ -342,12 +327,6 @@ vm_page_startup(starta, enda, vaddr)
                }
        }

-       /*
-        * Initialize vm_pages_needed lock here - don't wait for pageout
-        * daemon XXX
-        */
-       simple_lock_init(&vm_pages_needed_lock);
-
        return (mapped);
 }
@@ -383,8 +362,6 @@ vm_page_insert(mem, object, offset)
 {
        register struct pglist *bucket;

-       VM_PAGE_CHECK(mem);
-
        if (mem->flags & PG_TABLED)
                panic("vm_page_insert: already inserted");
@@ -400,9 +377,7 @@ vm_page_insert(mem, object, offset)
         */

        bucket = &vm_page_buckets[vm_page_hash(object, offset)];
-       simple_lock(&bucket_lock);
        TAILQ_INSERT_TAIL(bucket, mem, hashq);
-       simple_unlock(&bucket_lock);

        /*
         * Now link into the object's list of backed pages.
@@ -434,8 +409,6 @@ vm_page_remove(mem)
 {
        register struct pglist *bucket;

-       VM_PAGE_CHECK(mem);
-
        if (!(mem->flags & PG_TABLED))
                return;
@@ -444,9 +417,7 @@ vm_page_remove(mem)
         */

        bucket = &vm_page_buckets[vm_page_hash(mem->object, mem->offset)];
-       simple_lock(&bucket_lock);
        TAILQ_REMOVE(bucket, mem, hashq);
-       simple_unlock(&bucket_lock);

        /*
         * Now remove from the object's list of backed pages.
@@ -488,17 +459,13 @@ vm_page_lookup(object, offset)
        bucket = &vm_page_buckets[vm_page_hash(object, offset)];

        s = splhigh();
-       simple_lock(&bucket_lock);
        for (mem = bucket->tqh_first; mem != NULL; mem = mem->hashq.tqe_next) {
-               VM_PAGE_CHECK(mem);
                if ((mem->object == object) && (mem->offset == offset)) {
-                       simple_unlock(&bucket_lock);
                        splx(s);
                        return (mem);
                }
        }
-       simple_unlock(&bucket_lock);
        splx(s);
        return (NULL);
 }
@@ -522,12 +489,10 @@ vm_page_rename(mem, new_object, new_offset)
        if (mem->object == new_object)
                return;

-       vm_page_lock_queues();  /* keep page from moving out from under pageout daemon */
        s = splhigh();
        vm_page_remove(mem);
        vm_page_insert(mem, new_object, new_offset);
        splx(s);
-       vm_page_unlock_queues();
 }
@@ -583,12 +548,19 @@ vm_page_alloc(object, offset, page_req)
        register vm_page_t mem;
        int s;

+#ifdef DIAGNOSTIC
+       if (offset != trunc_page(offset))
+               panic("vm_page_alloc: offset not page aligned");
+
+       mem = vm_page_lookup(object, offset);
+       if (mem)
+               panic("vm_page_alloc: page already allocated");
+#endif
+
        if ((curproc == pageproc) && (page_req != VM_ALLOC_INTERRUPT)) {
                page_req = VM_ALLOC_SYSTEM;
        };

-       simple_lock(&vm_page_queue_free_lock);
-
        s = splhigh();

        mem = vm_page_queue_free.tqh_first;
@@ -605,7 +577,6 @@ vm_page_alloc(object, offset, page_req)
                        vm_page_remove(mem);
                        cnt.v_cache_count--;
                } else {
-                       simple_unlock(&vm_page_queue_free_lock);
                        splx(s);
                        pagedaemon_wakeup();
                        return (NULL);
@@ -626,7 +597,6 @@ vm_page_alloc(object, offset, page_req)
                        vm_page_remove(mem);
                        cnt.v_cache_count--;
                } else {
-                       simple_unlock(&vm_page_queue_free_lock);
                        splx(s);
                        pagedaemon_wakeup();
                        return (NULL);
@@ -639,7 +609,6 @@ vm_page_alloc(object, offset, page_req)
                        TAILQ_REMOVE(&vm_page_queue_free, mem, pageq);
                        cnt.v_free_count--;
                } else {
-                       simple_unlock(&vm_page_queue_free_lock);
                        splx(s);
                        pagedaemon_wakeup();
                        return NULL;
@@ -650,8 +619,6 @@ vm_page_alloc(object, offset, page_req)
                panic("vm_page_alloc: invalid allocation class");
        }

-       simple_unlock(&vm_page_queue_free_lock);
-
        mem->flags = PG_BUSY;
        mem->wire_count = 0;
        mem->hold_count = 0;
@@ -784,10 +751,8 @@ vm_page_free(mem)
        }
        if ((flags & PG_WANTED) != 0)
-               wakeup((caddr_t) mem);
+               wakeup(mem);

        if ((flags & PG_FICTITIOUS) == 0) {
-
-               simple_lock(&vm_page_queue_free_lock);
                if (mem->wire_count) {
                        if (mem->wire_count > 1) {
                                printf("vm_page_free: wire count > 1 (%d)", mem->wire_count);
@@ -798,15 +763,13 @@ vm_page_free(mem)
                }
                mem->flags |= PG_FREE;
                TAILQ_INSERT_TAIL(&vm_page_queue_free, mem, pageq);
-
-               simple_unlock(&vm_page_queue_free_lock);
                splx(s);

                /*
                 * if pageout daemon needs pages, then tell it that there are
                 * some free.
                 */
                if (vm_pageout_pages_needed) {
-                       wakeup((caddr_t) &vm_pageout_pages_needed);
+                       wakeup(&vm_pageout_pages_needed);
                        vm_pageout_pages_needed = 0;
                }
@@ -817,8 +780,8 @@ vm_page_free(mem)
                 * lots of memory. this process will swapin processes.
                 */
                if ((cnt.v_free_count + cnt.v_cache_count) == cnt.v_free_min) {
-                       wakeup((caddr_t) &cnt.v_free_count);
-                       wakeup((caddr_t) &proc0);
+                       wakeup(&cnt.v_free_count);
+                       wakeup(&proc0);
                }
        } else {
                splx(s);
@@ -841,7 +804,6 @@ vm_page_wire(mem)
        register vm_page_t mem;
 {
        int s;
-       VM_PAGE_CHECK(mem);

        if (mem->wire_count == 0) {
                s = splhigh();
@@ -867,8 +829,6 @@ vm_page_unwire(mem)
 {
        int s;

-       VM_PAGE_CHECK(mem);
-
        s = splhigh();

        if (mem->wire_count)
@@ -895,8 +855,6 @@ vm_page_activate(m)
 {
        int s;

-       VM_PAGE_CHECK(m);
-
        s = splhigh();
        if (m->flags & PG_ACTIVE)
                panic("vm_page_activate: already active");
@@ -933,8 +891,6 @@ vm_page_deactivate(m)
 {
        int spl;

-       VM_PAGE_CHECK(m);
-
        /*
         * Only move active pages -- ignore locked or already inactive ones.
         *
@@ -969,7 +925,6 @@ vm_page_cache(m)
 {
        int s;

-       VM_PAGE_CHECK(m);
        if ((m->flags & (PG_CACHE | PG_BUSY)) || m->busy || m->wire_count || m->bmapped)
                return;
@@ -982,11 +937,11 @@ vm_page_cache(m)
        m->flags |= PG_CACHE;
        cnt.v_cache_count++;
        if ((cnt.v_free_count + cnt.v_cache_count) == cnt.v_free_min) {
-               wakeup((caddr_t) &cnt.v_free_count);
-               wakeup((caddr_t) &proc0);
+               wakeup(&cnt.v_free_count);
+               wakeup(&proc0);
        }
        if (vm_pageout_pages_needed) {
-               wakeup((caddr_t) &vm_pageout_pages_needed);
+               wakeup(&vm_pageout_pages_needed);
                vm_pageout_pages_needed = 0;
        }
@@ -1004,8 +959,6 @@ boolean_t
 vm_page_zero_fill(m)
        vm_page_t m;
 {
-       VM_PAGE_CHECK(m);
-
        pmap_zero_page(VM_PAGE_TO_PHYS(m));
        m->valid = VM_PAGE_BITS_ALL;
        return (TRUE);
@@ -1021,9 +974,6 @@ vm_page_copy(src_m, dest_m)
        vm_page_t src_m;
        vm_page_t dest_m;
 {
-       VM_PAGE_CHECK(src_m);
-       VM_PAGE_CHECK(dest_m);
-
        pmap_copy_page(VM_PAGE_TO_PHYS(src_m), VM_PAGE_TO_PHYS(dest_m));
        dest_m->valid = VM_PAGE_BITS_ALL;
 }
```