path: root/sys/vm/swap_pager.c
author     dg <dg@FreeBSD.org>  1995-07-13 08:48:48 +0000
committer  dg <dg@FreeBSD.org>  1995-07-13 08:48:48 +0000
commit     c8b0a7332c667c4216e12358b63e61fad9031a55 (patch)
tree       c6f2eefb41eadd82d51ecb0deced0d6d361765ee /sys/vm/swap_pager.c
parent     f4ec3663dfda604a39dab484f7714e57488bc2c4 (diff)
NOTE: libkvm, w, ps, 'top', and any other utility which depends on struct
proc or any VM system structure will have to be rebuilt!!!

Much needed overhaul of the VM system. Included in this first round of
changes:

1) Improved pager interfaces: init, alloc, dealloc, getpages, putpages,
   haspage, and sync operations are supported. The haspage interface now
   provides information about clusterability. All pager routines now take
   struct vm_object's instead of "pagers". (A small usage sketch follows
   the TODO list below.)

2) Improved data structures. In the previous paradigm, there was constant
   confusion caused by pagers being both a data structure ("allocate a
   pager") and a collection of routines. The idea of a pager structure has
   essentially been eliminated. Objects now have types, and this type is
   used to index the appropriate pager. In most cases, items in the pager
   structure were duplicated in the object data structure and thus were
   unnecessary. In the few cases that remained, an un_pager structure
   union was created in the object to contain these items. (A dispatch
   sketch also follows the TODO list below.)

3) Because of the cleanup of #1 & #2, a lot of unnecessary layering can
   now be removed. For instance, vm_object_enter(), vm_object_lookup(),
   vm_object_remove(), and the associated object hash list were some of
   the things that were removed.

4) simple_lock's removed. Discussion with several people reveals that the
   SMP locking primitives used in the VM system aren't likely the
   mechanism that we'll be adopting. Even if they were, the locking that
   was in the code was very inadequate and would have to be mostly redone
   anyway. The locking in a uni-processor kernel was a no-op but went a
   long way toward making the code difficult to read and debug.

5) Places that attempted to kludge up the fact that we don't have kernel
   thread support have been fixed to reflect the reality that we are
   really dealing with processes, not threads. The VM system didn't have
   complete thread support, so the comments and mis-named routines were
   just wrong. We now use tsleep and wakeup directly in the lock routines,
   for instance.

6) Where appropriate, the pagers have been improved, especially in the
   pager_alloc routines. Most of the pager_allocs have been rewritten and
   are now faster and easier to maintain.

7) The pagedaemon pageout clustering algorithm has been rewritten and now
   tries harder to output an even number of pages before and after the
   requested page. This is sort of the reverse of the ideal pagein
   algorithm and should provide better overall performance.

8) Unnecessary (incorrect) casts to caddr_t in calls to tsleep & wakeup
   have been removed. Some other unnecessary casts have also been removed.

9) Some almost useless debugging code removed.

10) Terminology of shadow objects vs. backing objects straightened out.
    The fact that the vm_object data structure essentially had this
    backwards really confused things. The use of "shadow" and "backing
    object" throughout the code is now internally consistent and correct
    in the Mach terminology.

11) Several minor bug fixes, including one in the vm daemon that caused
    0 RSS objects to not get purged as intended.

12) A "default pager" has now been created which cleans up the transition
    of objects to the "swap" type. The previous checks throughout the code
    for swp->pg_data != NULL were really ugly. This change also provides
    the rudiments for future backing of "anonymous" memory by something
    other than the swap pager (via the vnode pager, for example), and it
    allows the decision about which of these pagers to use to be made
    dynamically (although it will need some additional decision code to do
    this, of course).
13) (dyson) MAP_COPY has been deprecated and the corresponding "copy
    object" code has been removed. MAP_COPY was undocumented and
    non-standard. It was furthermore broken in several ways which caused
    its behavior to degrade to MAP_PRIVATE. Binaries that use MAP_COPY
    will continue to work correctly, but via the slightly different
    semantics of MAP_PRIVATE.

14) (dyson) Sharing maps have been removed. Their marginal usefulness in a
    threads design can be worked around in other ways. Both #13 and #14
    were done to simplify the code and improve readability and
    maintainability. (As were most all of these changes.)

TODO:

1) Rewrite most of the vnode pager to use VOP_GETPAGES/PUTPAGES. Doing
   this will reduce the vnode pager to a mere fraction of its current
   size.

2) Rewrite vm_fault and the swap/vnode pagers to use the clustering
   information provided by the new haspage pager interface. This will
   substantially reduce the overhead by eliminating a large number of
   VOP_BMAP() calls. The VOP_BMAP() filesystem interface should be
   improved to provide both a "behind" and "ahead" indication of
   contiguousness.

3) Implement the extended features of pager_haspage in
   swap_pager_haspage(). It currently just says 0 pages ahead/behind.

4) Re-implement the swap device (swstrategy) in a more elegant way,
   perhaps via a much more general mechanism that could also be used for
   disk striping of regular filesystems.

5) Do something to improve the architecture of vm_object_collapse(). The
   fact that it makes calls into the swap pager and knows too much about
   how the swap pager operates really bothers me. It also doesn't allow
   for collapsing of non-swap pager objects ("unnamed" objects backed by
   other pagers).
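
As a concrete illustration of the reworked interface in item 1, the sketch
below shows one way a caller could use the new haspage entry point. The
swap_pager_haspage() signature and its before/after arguments are taken
from the diff that follows; the caller itself, its variable names, and the
clustering arithmetic are illustrative assumptions, not code from this
commit.

	/*
	 * Sketch only: query the swap pager for clusterability through
	 * the new haspage interface.  Assumes the vm_object_t and
	 * vm_offset_t types from the VM headers (vm/vm.h and friends).
	 */
	static void
	example_cluster_bounds(object, offset)
		vm_object_t object;
		vm_offset_t offset;
	{
		int before, after;
		vm_offset_t first, last;

		if (swap_pager_haspage(object, offset, &before, &after)) {
			/*
			 * The page at 'offset' has been written to swap.
			 * 'before' and 'after' report how many contiguous
			 * pages precede and follow it; the swap pager
			 * currently always reports 0/0 (see TODO item 3).
			 */
			first = offset - before * PAGE_SIZE;
			last = offset + after * PAGE_SIZE;
			printf("swap run: %lx .. %lx\n",
			    (u_long) first, (u_long) last);
		}
	}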
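
Item 2's type-indexed pager dispatch can be pictured with a second sketch.
Only OBJT_SWAP and swappagerops are visible in the diff below; the pagertab
table, the pgo_getpages member name, the object->type field, and the
OBJT_DEFAULT/OBJT_VNODE entries are assumptions made here for illustration.

	/*
	 * Sketch only: with pagers selected by object type, a generic
	 * getpages call dispatches through a per-type table rather than
	 * through a per-object "pager" structure.  Declarations are
	 * assumed to come from the VM headers.
	 */
	extern struct pagerops defaultpagerops, swappagerops, vnodepagerops;

	static struct pagerops *pagertab[] = {
		&defaultpagerops,	/* OBJT_DEFAULT */
		&swappagerops,		/* OBJT_SWAP */
		&vnodepagerops,		/* OBJT_VNODE */
	};

	int
	vm_pager_get_pages(object, m, count, reqpage)
		vm_object_t object;
		vm_page_t *m;
		int count, reqpage;
	{
		return ((*pagertab[object->type]->pgo_getpages)(object, m,
		    count, reqpage));
	}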
Diffstat (limited to 'sys/vm/swap_pager.c')
-rw-r--r--  sys/vm/swap_pager.c  503
1 file changed, 199 insertions, 304 deletions
diff --git a/sys/vm/swap_pager.c b/sys/vm/swap_pager.c
index 16ec7bb..2f3b268 100644
--- a/sys/vm/swap_pager.c
+++ b/sys/vm/swap_pager.c
@@ -39,7 +39,7 @@
* from: Utah $Hdr: swap_pager.c 1.4 91/04/30$
*
* @(#)swap_pager.c 8.9 (Berkeley) 3/21/94
- * $Id: swap_pager.c,v 1.40 1995/05/18 02:59:20 davidg Exp $
+ * $Id: swap_pager.c,v 1.41 1995/05/30 08:15:55 rgrimes Exp $
*/
/*
@@ -71,9 +71,6 @@
#define NPENDINGIO 10
#endif
-int swap_pager_input __P((sw_pager_t, vm_page_t *, int, int));
-int swap_pager_output __P((sw_pager_t, vm_page_t *, int, int, int *));
-
int nswiodone;
int swap_pager_full;
extern int vm_swap_size;
@@ -106,35 +103,35 @@ struct swpagerclean {
struct swpclean swap_pager_done; /* list of completed page cleans */
struct swpclean swap_pager_inuse; /* list of pending page cleans */
struct swpclean swap_pager_free; /* list of free pager clean structs */
-struct pagerlst swap_pager_list; /* list of "named" anon regions */
-struct pagerlst swap_pager_un_list; /* list of "unnamed" anon pagers */
+struct pagerlst swap_pager_object_list; /* list of "named" anon region objects */
+struct pagerlst swap_pager_un_object_list; /* list of "unnamed" anon region objects */
#define SWAP_FREE_NEEDED 0x1 /* need a swap block */
#define SWAP_FREE_NEEDED_BY_PAGEOUT 0x2
int swap_pager_needflags;
struct pagerlst *swp_qs[] = {
- &swap_pager_list, &swap_pager_un_list, (struct pagerlst *) 0
+ &swap_pager_object_list, &swap_pager_un_object_list, (struct pagerlst *) 0
};
-int swap_pager_putmulti();
-
+/*
+ * pagerops for OBJT_SWAP - "swap pager".
+ */
struct pagerops swappagerops = {
swap_pager_init,
swap_pager_alloc,
swap_pager_dealloc,
- swap_pager_getpage,
- swap_pager_getmulti,
- swap_pager_putpage,
- swap_pager_putmulti,
- swap_pager_haspage
+ swap_pager_getpages,
+ swap_pager_putpages,
+ swap_pager_haspage,
+ swap_pager_sync
};
int npendingio = NPENDINGIO;
-int require_swap_init;
void swap_pager_finish();
int dmmin, dmmax;
+
static inline void
swapsizecheck()
{
@@ -149,10 +146,8 @@ swapsizecheck()
void
swap_pager_init()
{
- dfltpagerops = &swappagerops;
-
- TAILQ_INIT(&swap_pager_list);
- TAILQ_INIT(&swap_pager_un_list);
+ TAILQ_INIT(&swap_pager_object_list);
+ TAILQ_INIT(&swap_pager_un_object_list);
/*
* Initialize clean lists
@@ -161,8 +156,6 @@ swap_pager_init()
TAILQ_INIT(&swap_pager_done);
TAILQ_INIT(&swap_pager_free);
- require_swap_init = 1;
-
/*
* Calculate the swap allocation constants.
*/
@@ -172,88 +165,56 @@ swap_pager_init()
}
-/*
- * Allocate a pager structure and associated resources.
- * Note that if we are called from the pageout daemon (handle == NULL)
- * we should not wait for memory as it could resulting in deadlock.
- */
-vm_pager_t
-swap_pager_alloc(handle, size, prot, offset)
- void *handle;
- register vm_size_t size;
- vm_prot_t prot;
- vm_offset_t offset;
+void
+swap_pager_swap_init()
{
- register vm_pager_t pager;
- register sw_pager_t swp;
- int waitok;
- int i, j;
-
- if (require_swap_init) {
- swp_clean_t spc;
- struct buf *bp;
+ swp_clean_t spc;
+ struct buf *bp;
+ int i;
- /*
- * kva's are allocated here so that we dont need to keep doing
- * kmem_alloc pageables at runtime
- */
- for (i = 0, spc = swcleanlist; i < npendingio; i++, spc++) {
- spc->spc_kva = kmem_alloc_pageable(pager_map, PAGE_SIZE * MAX_PAGEOUT_CLUSTER);
- if (!spc->spc_kva) {
- break;
- }
- spc->spc_bp = malloc(sizeof(*bp), M_TEMP, M_KERNEL);
- if (!spc->spc_bp) {
- kmem_free_wakeup(pager_map, spc->spc_kva, PAGE_SIZE);
- break;
- }
- spc->spc_flags = 0;
- TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
- }
- require_swap_init = 0;
- if (size == 0)
- return (NULL);
- }
/*
- * If this is a "named" anonymous region, look it up and return the
- * appropriate pager if it exists.
+ * kva's are allocated here so that we dont need to keep doing
+ * kmem_alloc pageables at runtime
*/
- if (handle) {
- pager = vm_pager_lookup(&swap_pager_list, handle);
- if (pager != NULL) {
- /*
- * Use vm_object_lookup to gain a reference to the
- * object and also to remove from the object cache.
- */
- if (vm_object_lookup(pager) == NULL)
- panic("swap_pager_alloc: bad object");
- return (pager);
+ for (i = 0, spc = swcleanlist; i < npendingio; i++, spc++) {
+ spc->spc_kva = kmem_alloc_pageable(pager_map, PAGE_SIZE * MAX_PAGEOUT_CLUSTER);
+ if (!spc->spc_kva) {
+ break;
}
+ spc->spc_bp = malloc(sizeof(*bp), M_TEMP, M_KERNEL);
+ if (!spc->spc_bp) {
+ kmem_free_wakeup(pager_map, spc->spc_kva, PAGE_SIZE);
+ break;
+ }
+ spc->spc_flags = 0;
+ TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
}
- /*
- * Pager doesn't exist, allocate swap management resources and
- * initialize.
- */
- waitok = handle ? M_WAITOK : M_KERNEL;
- pager = (vm_pager_t) malloc(sizeof *pager, M_VMPAGER, waitok);
- if (pager == NULL)
- return (NULL);
- swp = (sw_pager_t) malloc(sizeof *swp, M_VMPGDATA, waitok);
+}
+
+int
+swap_pager_swp_alloc(object, wait)
+ vm_object_t object;
+ int wait;
+{
+ register sw_pager_t swp;
+ int i, j;
+
+ if (object->pg_data != NULL)
+ panic("swap_pager_swp_alloc: swp already allocated");
+
+ swp = (sw_pager_t) malloc(sizeof *swp, M_VMPGDATA, wait);
if (swp == NULL) {
- free((caddr_t) pager, M_VMPAGER);
- return (NULL);
+ return 1;
}
- size = round_page(size);
- swp->sw_osize = size;
- swp->sw_nblocks = (btodb(size) + btodb(SWB_NPAGES * PAGE_SIZE) - 1) / btodb(SWB_NPAGES * PAGE_SIZE);
- swp->sw_blocks = (sw_blk_t)
- malloc(swp->sw_nblocks * sizeof(*swp->sw_blocks),
- M_VMPGDATA, waitok);
+
+ swp->sw_nblocks = (btodb(object->size) + btodb(SWB_NPAGES * PAGE_SIZE) - 1) / btodb(SWB_NPAGES * PAGE_SIZE);
+
+ swp->sw_blocks = (sw_blk_t) malloc(swp->sw_nblocks * sizeof(*swp->sw_blocks), M_VMPGDATA, wait);
if (swp->sw_blocks == NULL) {
free((caddr_t) swp, M_VMPGDATA);
- free((caddr_t) pager, M_VMPAGER);
- return (NULL);
+ return 1;
}
+
for (i = 0; i < swp->sw_nblocks; i++) {
swp->sw_blocks[i].swb_valid = 0;
swp->sw_blocks[i].swb_locked = 0;
@@ -263,30 +224,59 @@ swap_pager_alloc(handle, size, prot, offset)
swp->sw_poip = 0;
swp->sw_allocsize = 0;
- if (handle) {
- vm_object_t object;
- swp->sw_flags = SW_NAMED;
- TAILQ_INSERT_TAIL(&swap_pager_list, pager, pg_list);
- /*
- * Consistant with other pagers: return with object
- * referenced. Can't do this with handle == NULL since it
- * might be the pageout daemon calling.
- */
- object = vm_object_allocate(offset + size);
- object->flags &= ~OBJ_INTERNAL;
- vm_object_enter(object, pager);
- object->pager = pager;
+ object->pg_data = swp;
+
+ if (object->handle != NULL) {
+ TAILQ_INSERT_TAIL(&swap_pager_object_list, object, pager_object_list);
+ } else {
+ TAILQ_INSERT_TAIL(&swap_pager_un_object_list, object, pager_object_list);
+ }
+
+ return 0;
+}
+
+/*
+ * Allocate a pager structure and associated resources.
+ * Note that if we are called from the pageout daemon (handle == NULL)
+ * we should not wait for memory as it could resulting in deadlock.
+ */
+vm_object_t
+swap_pager_alloc(handle, size, prot, offset)
+ void *handle;
+ register vm_size_t size;
+ vm_prot_t prot;
+ vm_offset_t offset;
+{
+ vm_object_t object;
+ int i;
+
+ /*
+ * If this is a "named" anonymous region, look it up and use the
+ * object if it exists, otherwise allocate a new one.
+ */
+ if (handle) {
+ object = vm_pager_object_lookup(&swap_pager_object_list, handle);
+ if (object != NULL) {
+ vm_object_reference(object);
+ } else {
+ /*
+ * XXX - there is a race condition here. Two processes
+ * can request the same named object simultaneuously,
+ * and if one blocks for memory, the result is a disaster.
+ * Probably quite rare, but is yet another reason to just
+ * rip support of "named anonymous regions" out altogether.
+ */
+ object = vm_object_allocate(OBJT_SWAP, offset + size);
+ object->handle = handle;
+ (void) swap_pager_swp_alloc(object, M_WAITOK);
+ }
} else {
- swp->sw_flags = 0;
- TAILQ_INSERT_TAIL(&swap_pager_un_list, pager, pg_list);
+ object = vm_object_allocate(OBJT_SWAP, offset + size);
+ (void) swap_pager_swp_alloc(object, M_WAITOK);
}
- pager->pg_handle = handle;
- pager->pg_ops = &swappagerops;
- pager->pg_type = PG_SWAP;
- pager->pg_data = (caddr_t) swp;
- return (pager);
+ return (object);
}
/*
@@ -296,11 +286,12 @@ swap_pager_alloc(handle, size, prot, offset)
*/
inline static int *
-swap_pager_diskaddr(swp, offset, valid)
- sw_pager_t swp;
+swap_pager_diskaddr(object, offset, valid)
+ vm_object_t object;
vm_offset_t offset;
int *valid;
{
+ sw_pager_t swp = object->pg_data;
register sw_blk_t swb;
int ix;
@@ -308,7 +299,7 @@ swap_pager_diskaddr(swp, offset, valid)
*valid = 0;
ix = offset / (SWB_NPAGES * PAGE_SIZE);
if ((swp->sw_blocks == NULL) || (ix >= swp->sw_nblocks) ||
- (offset >= swp->sw_osize)) {
+ (offset >= object->size)) {
return (FALSE);
}
swb = &swp->sw_blocks[ix];
@@ -378,18 +369,19 @@ swap_pager_freeswapspace(sw_pager_t swp, unsigned from, unsigned to)
* this routine frees swap blocks from a specified pager
*/
void
-_swap_pager_freespace(swp, start, size)
- sw_pager_t swp;
+swap_pager_freespace(object, start, size)
+ vm_object_t object;
vm_offset_t start;
vm_offset_t size;
{
+ sw_pager_t swp = object->pg_data;
vm_offset_t i;
int s;
s = splbio();
for (i = start; i < round_page(start + size); i += PAGE_SIZE) {
int valid;
- int *addr = swap_pager_diskaddr(swp, i, &valid);
+ int *addr = swap_pager_diskaddr(object, i, &valid);
if (addr && *addr != SWB_EMPTY) {
swap_pager_freeswapspace(swp, *addr, *addr + btodb(PAGE_SIZE) - 1);
@@ -402,15 +394,6 @@ _swap_pager_freespace(swp, start, size)
splx(s);
}
-void
-swap_pager_freespace(pager, start, size)
- vm_pager_t pager;
- vm_offset_t start;
- vm_offset_t size;
-{
- _swap_pager_freespace((sw_pager_t) pager->pg_data, start, size);
-}
-
static void
swap_pager_free_swap(swp)
sw_pager_t swp;
@@ -477,7 +460,7 @@ swap_pager_free_swap(swp)
void
swap_pager_reclaim()
{
- vm_pager_t p;
+ vm_object_t object;
sw_pager_t swp;
int i, j, k;
int s;
@@ -493,7 +476,7 @@ swap_pager_reclaim()
*/
s = splbio();
if (in_reclaim) {
- tsleep((caddr_t) &in_reclaim, PSWP, "swrclm", 0);
+ tsleep(&in_reclaim, PSWP, "swrclm", 0);
splx(s);
return;
}
@@ -503,14 +486,14 @@ swap_pager_reclaim()
/* for each pager queue */
for (k = 0; swp_qs[k]; k++) {
- p = swp_qs[k]->tqh_first;
- while (p && (reclaimcount < MAXRECLAIM)) {
+ object = swp_qs[k]->tqh_first;
+ while (object && (reclaimcount < MAXRECLAIM)) {
/*
* see if any blocks associated with a pager has been
* allocated but not used (written)
*/
- swp = (sw_pager_t) p->pg_data;
+ swp = (sw_pager_t) object->pg_data;
for (i = 0; i < swp->sw_nblocks; i++) {
sw_blk_t swb = &swp->sw_blocks[i];
@@ -527,7 +510,7 @@ swap_pager_reclaim()
}
}
}
- p = p->pg_list.tqe_next;
+ object = object->pager_object_list.tqe_next;
}
}
@@ -541,7 +524,7 @@ rfinished:
}
splx(s);
in_reclaim = 0;
- wakeup((caddr_t) &in_reclaim);
+ wakeup(&in_reclaim);
}
@@ -551,10 +534,10 @@ rfinished:
*/
void
-swap_pager_copy(srcpager, srcoffset, dstpager, dstoffset, offset)
- vm_pager_t srcpager;
+swap_pager_copy(srcobject, srcoffset, dstobject, dstoffset, offset)
+ vm_object_t srcobject;
vm_offset_t srcoffset;
- vm_pager_t dstpager;
+ vm_object_t dstobject;
vm_offset_t dstoffset;
vm_offset_t offset;
{
@@ -566,41 +549,37 @@ swap_pager_copy(srcpager, srcoffset, dstpager, dstoffset, offset)
if (vm_swap_size)
no_swap_space = 0;
- if (no_swap_space)
- return;
-
- srcswp = (sw_pager_t) srcpager->pg_data;
+ srcswp = (sw_pager_t) srcobject->pg_data;
origsize = srcswp->sw_allocsize;
- dstswp = (sw_pager_t) dstpager->pg_data;
+ dstswp = (sw_pager_t) dstobject->pg_data;
/*
- * remove the source pager from the swap_pager internal queue
+ * remove the source object from the swap_pager internal queue
*/
- s = splbio();
- if (srcswp->sw_flags & SW_NAMED) {
- TAILQ_REMOVE(&swap_pager_list, srcpager, pg_list);
- srcswp->sw_flags &= ~SW_NAMED;
+ if (srcobject->handle == NULL) {
+ TAILQ_REMOVE(&swap_pager_un_object_list, srcobject, pager_object_list);
} else {
- TAILQ_REMOVE(&swap_pager_un_list, srcpager, pg_list);
+ TAILQ_REMOVE(&swap_pager_object_list, srcobject, pager_object_list);
}
+ s = splbio();
while (srcswp->sw_poip) {
- tsleep((caddr_t) srcswp, PVM, "spgout", 0);
+ tsleep(srcswp, PVM, "spgout", 0);
}
splx(s);
/*
* clean all of the pages that are currently active and finished
*/
- (void) swap_pager_clean();
+ swap_pager_sync();
s = splbio();
/*
* transfer source to destination
*/
- for (i = 0; i < dstswp->sw_osize; i += PAGE_SIZE) {
+ for (i = 0; i < dstobject->size; i += PAGE_SIZE) {
int srcvalid, dstvalid;
- int *srcaddrp = swap_pager_diskaddr(srcswp, i + offset + srcoffset,
+ int *srcaddrp = swap_pager_diskaddr(srcobject, i + offset + srcoffset,
&srcvalid);
int *dstaddrp;
@@ -614,7 +593,7 @@ swap_pager_copy(srcpager, srcoffset, dstpager, dstoffset, offset)
* dest.
*/
if (srcvalid) {
- dstaddrp = swap_pager_diskaddr(dstswp, i + dstoffset,
+ dstaddrp = swap_pager_diskaddr(dstobject, i + dstoffset,
&dstvalid);
/*
* if the dest already has a valid block,
@@ -657,43 +636,47 @@ swap_pager_copy(srcpager, srcoffset, dstpager, dstoffset, offset)
free((caddr_t) srcswp->sw_blocks, M_VMPGDATA);
srcswp->sw_blocks = 0;
free((caddr_t) srcswp, M_VMPGDATA);
- srcpager->pg_data = 0;
- free((caddr_t) srcpager, M_VMPAGER);
+ srcobject->pg_data = NULL;
return;
}
void
-swap_pager_dealloc(pager)
- vm_pager_t pager;
+swap_pager_dealloc(object)
+ vm_object_t object;
{
register sw_pager_t swp;
int s;
+ swp = (sw_pager_t) object->pg_data;
+
+ /* "Can't" happen. */
+ if (swp == NULL)
+ panic("swap_pager_dealloc: no swp data");
+
/*
* Remove from list right away so lookups will fail if we block for
* pageout completion.
*/
- s = splbio();
- swp = (sw_pager_t) pager->pg_data;
- if (swp->sw_flags & SW_NAMED) {
- TAILQ_REMOVE(&swap_pager_list, pager, pg_list);
- swp->sw_flags &= ~SW_NAMED;
+ if (object->handle == NULL) {
+ TAILQ_REMOVE(&swap_pager_un_object_list, object, pager_object_list);
} else {
- TAILQ_REMOVE(&swap_pager_un_list, pager, pg_list);
+ TAILQ_REMOVE(&swap_pager_object_list, object, pager_object_list);
}
+
/*
* Wait for all pageouts to finish and remove all entries from
* cleaning list.
*/
+ s = splbio();
while (swp->sw_poip) {
- tsleep((caddr_t) swp, PVM, "swpout", 0);
+ tsleep(swp, PVM, "swpout", 0);
}
splx(s);
- (void) swap_pager_clean();
+ swap_pager_sync();
/*
* Free left over swap blocks
@@ -708,88 +691,7 @@ swap_pager_dealloc(pager)
free((caddr_t) swp->sw_blocks, M_VMPGDATA);
swp->sw_blocks = 0;
free((caddr_t) swp, M_VMPGDATA);
- pager->pg_data = 0;
- free((caddr_t) pager, M_VMPAGER);
-}
-
-/*
- * swap_pager_getmulti can get multiple pages.
- */
-int
-swap_pager_getmulti(pager, m, count, reqpage, sync)
- vm_pager_t pager;
- vm_page_t *m;
- int count;
- int reqpage;
- boolean_t sync;
-{
- if (reqpage >= count)
- panic("swap_pager_getmulti: reqpage >= count");
- return swap_pager_input((sw_pager_t) pager->pg_data, m, count, reqpage);
-}
-
-/*
- * swap_pager_getpage gets individual pages
- */
-int
-swap_pager_getpage(pager, m, sync)
- vm_pager_t pager;
- vm_page_t m;
- boolean_t sync;
-{
- vm_page_t marray[1];
-
- marray[0] = m;
- return swap_pager_input((sw_pager_t) pager->pg_data, marray, 1, 0);
-}
-
-int
-swap_pager_putmulti(pager, m, c, sync, rtvals)
- vm_pager_t pager;
- vm_page_t *m;
- int c;
- boolean_t sync;
- int *rtvals;
-{
- int flags;
-
- if (pager == NULL) {
- (void) swap_pager_clean();
- return VM_PAGER_OK;
- }
- flags = B_WRITE;
- if (!sync)
- flags |= B_ASYNC;
-
- return swap_pager_output((sw_pager_t) pager->pg_data, m, c, flags, rtvals);
-}
-
-/*
- * swap_pager_putpage writes individual pages
- */
-int
-swap_pager_putpage(pager, m, sync)
- vm_pager_t pager;
- vm_page_t m;
- boolean_t sync;
-{
- int flags;
- vm_page_t marray[1];
- int rtvals[1];
-
-
- if (pager == NULL) {
- (void) swap_pager_clean();
- return VM_PAGER_OK;
- }
- marray[0] = m;
- flags = B_WRITE;
- if (!sync)
- flags |= B_ASYNC;
-
- swap_pager_output((sw_pager_t) pager->pg_data, marray, 1, flags, rtvals);
-
- return rtvals[0];
+ object->pg_data = 0;
}
static inline int
@@ -811,17 +713,24 @@ swap_pager_block_offset(swp, offset)
}
/*
- * _swap_pager_haspage returns TRUE if the pager has data that has
+ * swap_pager_haspage returns TRUE if the pager has data that has
* been written out.
*/
-static boolean_t
-_swap_pager_haspage(swp, offset)
- sw_pager_t swp;
+boolean_t
+swap_pager_haspage(object, offset, before, after)
+ vm_object_t object;
vm_offset_t offset;
+ int *before;
+ int *after;
{
+ sw_pager_t swp = object->pg_data;
register sw_blk_t swb;
int ix;
+ if (before != NULL)
+ *before = 0;
+ if (after != NULL)
+ *after = 0;
ix = offset / (SWB_NPAGES * PAGE_SIZE);
if (swp->sw_blocks == NULL || ix >= swp->sw_nblocks) {
return (FALSE);
@@ -836,19 +745,6 @@ _swap_pager_haspage(swp, offset)
}
/*
- * swap_pager_haspage is the externally accessible version of
- * _swap_pager_haspage above. this routine takes a vm_pager_t
- * for an argument instead of sw_pager_t.
- */
-boolean_t
-swap_pager_haspage(pager, offset)
- vm_pager_t pager;
- vm_offset_t offset;
-{
- return _swap_pager_haspage((sw_pager_t) pager->pg_data, offset);
-}
-
-/*
* swap_pager_freepage is a convienience routine that clears the busy
* bit and deallocates a page.
*/
@@ -887,16 +783,17 @@ swap_pager_iodone1(bp)
{
bp->b_flags |= B_DONE;
bp->b_flags &= ~B_ASYNC;
- wakeup((caddr_t) bp);
+ wakeup(bp);
}
int
-swap_pager_input(swp, m, count, reqpage)
- register sw_pager_t swp;
+swap_pager_getpages(object, m, count, reqpage)
+ vm_object_t object;
vm_page_t *m;
int count, reqpage;
{
+ register sw_pager_t swp = object->pg_data;
register struct buf *bp;
sw_blk_t swb[count];
register int s;
@@ -905,7 +802,6 @@ swap_pager_input(swp, m, count, reqpage)
vm_offset_t kva, off[count];
swp_clean_t spc;
vm_offset_t paging_offset;
- vm_object_t object;
int reqaddr[count];
int sequential;
@@ -1029,17 +925,17 @@ swap_pager_input(swp, m, count, reqpage)
if (swap_pager_free.tqh_first == NULL) {
s = splbio();
if (curproc == pageproc)
- (void) swap_pager_clean();
+ swap_pager_sync();
else
pagedaemon_wakeup();
while (swap_pager_free.tqh_first == NULL) {
swap_pager_needflags |= SWAP_FREE_NEEDED;
if (curproc == pageproc)
swap_pager_needflags |= SWAP_FREE_NEEDED_BY_PAGEOUT;
- tsleep((caddr_t) &swap_pager_free,
+ tsleep(&swap_pager_free,
PVM, "swpfre", 0);
if (curproc == pageproc)
- (void) swap_pager_clean();
+ swap_pager_sync();
else
pagedaemon_wakeup();
}
@@ -1091,7 +987,7 @@ swap_pager_input(swp, m, count, reqpage)
*/
s = splbio();
while ((bp->b_flags & B_DONE) == 0) {
- tsleep((caddr_t) bp, PVM, "swread", 0);
+ tsleep(bp, PVM, "swread", 0);
}
if (bp->b_flags & B_ERROR) {
@@ -1104,7 +1000,7 @@ swap_pager_input(swp, m, count, reqpage)
--swp->sw_piip;
if (swp->sw_piip == 0)
- wakeup((caddr_t) swp);
+ wakeup(swp);
/*
@@ -1124,7 +1020,7 @@ swap_pager_input(swp, m, count, reqpage)
if (spc) {
m[reqpage]->object->last_read = m[reqpage]->offset;
if (bp->b_flags & B_WANTED)
- wakeup((caddr_t) bp);
+ wakeup(bp);
/*
* if we have used an spc, we need to free it.
*/
@@ -1134,7 +1030,7 @@ swap_pager_input(swp, m, count, reqpage)
crfree(bp->b_wcred);
TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
if (swap_pager_needflags & SWAP_FREE_NEEDED) {
- wakeup((caddr_t) &swap_pager_free);
+ wakeup(&swap_pager_free);
}
if( swap_pager_needflags & SWAP_FREE_NEEDED_BY_PAGEOUT)
pagedaemon_wakeup();
@@ -1185,7 +1081,7 @@ swap_pager_input(swp, m, count, reqpage)
for (i = 0; i < count; i++) {
m[i]->dirty = VM_PAGE_BITS_ALL;
}
- _swap_pager_freespace(swp, m[0]->offset + paging_offset, count * PAGE_SIZE);
+ swap_pager_freespace(object, m[0]->offset + paging_offset, count * PAGE_SIZE);
}
} else {
swap_pager_ridpages(m, count, reqpage);
@@ -1195,13 +1091,14 @@ swap_pager_input(swp, m, count, reqpage)
}
int
-swap_pager_output(swp, m, count, flags, rtvals)
- register sw_pager_t swp;
+swap_pager_putpages(object, m, count, sync, rtvals)
+ vm_object_t object;
vm_page_t *m;
int count;
- int flags;
+ boolean_t sync;
int *rtvals;
{
+ register sw_pager_t swp = object->pg_data;
register struct buf *bp;
sw_blk_t swb[count];
register int s;
@@ -1210,7 +1107,6 @@ swap_pager_output(swp, m, count, flags, rtvals)
vm_offset_t kva, off, foff;
swp_clean_t spc;
vm_offset_t paging_offset;
- vm_object_t object;
int reqaddr[count];
int failed;
@@ -1341,8 +1237,8 @@ swap_pager_output(swp, m, count, flags, rtvals)
/*
* For synchronous writes, we clean up all completed async pageouts.
*/
- if ((flags & B_ASYNC) == 0) {
- swap_pager_clean();
+ if (sync == TRUE) {
+ swap_pager_sync();
}
kva = 0;
@@ -1354,7 +1250,7 @@ swap_pager_output(swp, m, count, flags, rtvals)
swap_pager_free.tqh_first->spc_list.tqe_next->spc_list.tqe_next == NULL) {
s = splbio();
if (curproc == pageproc) {
- (void) swap_pager_clean();
+ swap_pager_sync();
#if 0
splx(s);
return VM_PAGER_AGAIN;
@@ -1367,14 +1263,13 @@ swap_pager_output(swp, m, count, flags, rtvals)
if (curproc == pageproc) {
swap_pager_needflags |= SWAP_FREE_NEEDED_BY_PAGEOUT;
if((cnt.v_free_count + cnt.v_cache_count) > cnt.v_free_reserved)
- wakeup((caddr_t) &cnt.v_free_count);
+ wakeup(&cnt.v_free_count);
}
swap_pager_needflags |= SWAP_FREE_NEEDED;
- tsleep((caddr_t) &swap_pager_free,
- PVM, "swpfre", 0);
+ tsleep(&swap_pager_free, PVM, "swpfre", 0);
if (curproc == pageproc)
- (void) swap_pager_clean();
+ swap_pager_sync();
else
pagedaemon_wakeup();
}
@@ -1434,7 +1329,7 @@ swap_pager_output(swp, m, count, flags, rtvals)
* place a "cleaning" entry on the inuse queue.
*/
s = splbio();
- if (flags & B_ASYNC) {
+ if (sync == FALSE) {
spc->spc_flags = 0;
spc->spc_swp = swp;
for (i = 0; i < count; i++)
@@ -1461,9 +1356,9 @@ swap_pager_output(swp, m, count, flags, rtvals)
* perform the I/O
*/
VOP_STRATEGY(bp);
- if ((flags & (B_READ | B_ASYNC)) == B_ASYNC) {
+ if (sync == FALSE) {
if ((bp->b_flags & B_DONE) == B_DONE) {
- swap_pager_clean();
+ swap_pager_sync();
}
splx(s);
for (i = 0; i < count; i++) {
@@ -1475,7 +1370,7 @@ swap_pager_output(swp, m, count, flags, rtvals)
* wait for the sync I/O to complete
*/
while ((bp->b_flags & B_DONE) == 0) {
- tsleep((caddr_t) bp, PVM, "swwrt", 0);
+ tsleep(bp, PVM, "swwrt", 0);
}
if (bp->b_flags & B_ERROR) {
printf("swap_pager: I/O error - pageout failed; blkno %d, size %d, error %d\n",
@@ -1487,12 +1382,12 @@ swap_pager_output(swp, m, count, flags, rtvals)
--swp->sw_poip;
if (swp->sw_poip == 0)
- wakeup((caddr_t) swp);
+ wakeup(swp);
if (bp->b_vp)
pbrelvp(bp);
if (bp->b_flags & B_WANTED)
- wakeup((caddr_t) bp);
+ wakeup(bp);
splx(s);
@@ -1532,7 +1427,7 @@ swap_pager_output(swp, m, count, flags, rtvals)
crfree(bp->b_wcred);
TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
if (swap_pager_needflags & SWAP_FREE_NEEDED) {
- wakeup((caddr_t) &swap_pager_free);
+ wakeup(&swap_pager_free);
}
if( swap_pager_needflags & SWAP_FREE_NEEDED_BY_PAGEOUT)
pagedaemon_wakeup();
@@ -1540,15 +1435,15 @@ swap_pager_output(swp, m, count, flags, rtvals)
return (rv);
}
-boolean_t
-swap_pager_clean()
+void
+swap_pager_sync()
{
register swp_clean_t spc, tspc;
register int s;
tspc = NULL;
if (swap_pager_done.tqh_first == NULL)
- return FALSE;
+ return;
for (;;) {
s = splbio();
/*
@@ -1580,7 +1475,7 @@ doclean:
spc->spc_flags = 0;
TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
if (swap_pager_needflags & SWAP_FREE_NEEDED) {
- wakeup((caddr_t) &swap_pager_free);
+ wakeup(&swap_pager_free);
}
if( swap_pager_needflags & SWAP_FREE_NEEDED_BY_PAGEOUT)
pagedaemon_wakeup();
@@ -1588,7 +1483,7 @@ doclean:
splx(s);
}
- return (tspc ? TRUE : FALSE);
+ return;
}
void
@@ -1602,7 +1497,7 @@ swap_pager_finish(spc)
if ((object->paging_in_progress == 0) &&
(object->flags & OBJ_PIPWNT)) {
object->flags &= ~OBJ_PIPWNT;
- thread_wakeup((int) object);
+ wakeup(object);
}
/*
@@ -1662,7 +1557,7 @@ swap_pager_iodone(bp)
pbrelvp(bp);
if (bp->b_flags & B_WANTED)
- wakeup((caddr_t) bp);
+ wakeup(bp);
if (bp->b_rcred != NOCRED)
crfree(bp->b_rcred);
@@ -1671,12 +1566,12 @@ swap_pager_iodone(bp)
nswiodone += spc->spc_count;
if (--spc->spc_swp->sw_poip == 0) {
- wakeup((caddr_t) spc->spc_swp);
+ wakeup(spc->spc_swp);
}
if ((swap_pager_needflags & SWAP_FREE_NEEDED) ||
swap_pager_inuse.tqh_first == 0) {
swap_pager_needflags &= ~SWAP_FREE_NEEDED;
- wakeup((caddr_t) &swap_pager_free);
+ wakeup(&swap_pager_free);
}
if( swap_pager_needflags & SWAP_FREE_NEEDED_BY_PAGEOUT) {
@@ -1685,7 +1580,7 @@ swap_pager_iodone(bp)
}
if (vm_pageout_pages_needed) {
- wakeup((caddr_t) &vm_pageout_pages_needed);
+ wakeup(&vm_pageout_pages_needed);
vm_pageout_pages_needed = 0;
}
if ((swap_pager_inuse.tqh_first == NULL) ||