path: root/sys/vm/vm_pageout.c
* alc  2004-02-14  (1 file, -3/+2)
  - Correct a long-standing race condition in vm_page_try_to_cache() that
    could result in a panic "vm_page_cache: caching a dirty page, ...":
    Access to the page must be restricted or removed before calling
    vm_page_cache().  This race condition is identical in nature to that
    which was addressed by vm_pageout.c's revision 1.251.
  - Simplify the code surrounding the fix to this same race condition in
    vm_pageout.c's revision 1.251.  There should be no behavioral change.
  Reviewed by: tegge
  MFC after: 7 days
* alc  2004-02-10  (1 file, -0/+15)
  Correct a long-standing race condition in the inactive queue scan.  (See
  the added comment for low-level details.)  The effect of this race
  condition is a panic "vm_page_cache: caching a dirty page, ..."
  Reviewed by: tegge
  MFC after: 7 days
* jhb  2004-02-04  (1 file, -2/+3)
  Locking for the per-process resource limits structure.
  - struct plimit includes a mutex to protect a reference count.  The
    plimit structure is treated similarly to struct ucred in that it is
    always copy-on-write, so having a reference to the structure is
    sufficient to read from it without needing a further lock.
  - The proc lock protects the p_limit pointer and must be held while
    reading limits from a process to keep the limit structure from
    changing out from under you while reading from it.
  - Various global limits that are ints are not protected by a lock since
    int writes are atomic on all the archs we support and thus a lock
    wouldn't buy us anything.
  - All accesses to individual resource limits from a process are
    abstracted behind a simple lim_rlimit(), lim_max(), and lim_cur() API
    that return either an rlimit, or the current or max individual limit
    of the specified resource from a process.
  - dosetrlimit() was renamed to kern_setrlimit() to match the existing
    style of other similar syscall helper functions.
  - The alpha OSF/1 compat layer no longer calls getrlimit() and
    setrlimit() (it didn't use the stackgap when it should have) but uses
    lim_rlimit() and kern_setrlimit() instead.
  - The svr4 compat no longer uses the stackgap for resource limits
    calls, but uses lim_rlimit() and kern_setrlimit() instead.
  - The ibcs2 compat no longer uses the stackgap for resource limits.  It
    also no longer uses the stackgap for accessing sysctls for the
    ibcs2_sysconf() syscall but uses kernel_sysctl() instead.  As a
    result, ibcs2_sysconf() no longer needs Giant.
  - The p_rlimit macro no longer exists.
  Submitted by: mtm (mostly, I only did a few cleanups and catchups)
  Tested on: i386
  Compiled on: alpha, amd64
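The copy-on-write, reference-counted pattern this entry describes can be sketched in userland C. Everything below (struct xlimit, lim_hold, lim_copy, lim_free, lim_cur_ex) is a hypothetical stand-in for illustration, not the kernel's actual struct plimit API: the point is only that a mutex guards the reference count alone, while holding a reference is enough to read, and writers duplicate rather than mutate.

```c
/*
 * Illustrative userland model of a copy-on-write, reference-counted
 * limit structure.  All names here are invented for the sketch.
 */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define NLIMITS 4

struct xlimit {
    pthread_mutex_t xl_lock;   /* protects xl_refcnt only */
    int             xl_refcnt; /* shared references to this copy */
    long            xl_cur[NLIMITS];
    long            xl_max[NLIMITS];
};

/* Allocate a limit structure holding a single reference. */
struct xlimit *lim_alloc(void) {
    struct xlimit *l = calloc(1, sizeof(*l));
    pthread_mutex_init(&l->xl_lock, NULL);
    l->xl_refcnt = 1;
    return l;
}

/* Take an additional shared reference; readers need no further lock. */
struct xlimit *lim_hold(struct xlimit *l) {
    pthread_mutex_lock(&l->xl_lock);
    l->xl_refcnt++;
    pthread_mutex_unlock(&l->xl_lock);
    return l;
}

/* Drop a reference, freeing the structure when the last one goes away. */
void lim_free(struct xlimit *l) {
    pthread_mutex_lock(&l->xl_lock);
    int last = (--l->xl_refcnt == 0);
    pthread_mutex_unlock(&l->xl_lock);
    if (last) {
        pthread_mutex_destroy(&l->xl_lock);
        free(l);
    }
}

/* Copy-on-write: a writer duplicates the structure instead of mutating
 * a shared copy, so existing readers keep a stable snapshot. */
struct xlimit *lim_copy(const struct xlimit *src) {
    struct xlimit *l = lim_alloc();
    memcpy(l->xl_cur, src->xl_cur, sizeof(l->xl_cur));
    memcpy(l->xl_max, src->xl_max, sizeof(l->xl_max));
    return l;
}

/* Reading a limit requires only a held reference, no lock. */
long lim_cur_ex(const struct xlimit *l, int which) {
    return l->xl_cur[which];
}
```

The design mirrors struct ucred handling: the mutex never covers the limit values themselves, only the lifetime bookkeeping.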
* alc  2003-10-24  (1 file, -7/+4)
  - Push down Giant from vm_pageout() to vm_pageout_scan(), freeing
    vm_pageout_page_stats() from Giant.
  - Modify vm_pager_put_pages() and vm_pager_page_unswapped() to expect
    the vm object to be locked on entry.  (All of the pager routines now
    expect this.)
* alc  2003-10-22  (1 file, -40/+12)
  - Retire vm_pageout_page_free().  Instead, use vm_page_select_cache()
    from vm_pageout_scan().  Rationale: I don't like leaving a busy page
    in the cache queue with neither the vm object nor the vm page queues
    lock held.
  - Assert that the page is active in vm_pageout_page_stats().
* alc  2003-10-22  (1 file, -7/+2)
  - Assert that every page found in the active queue is an active page.
* alc  2003-10-18  (1 file, -11/+5)
  - Increase the object lock's scope in vm_contig_launder() so that
    access to the object's type field and the call to vm_pageout_flush()
    are synchronized.
  - The above change allows for the elimination of the last parameter to
    vm_pageout_flush().
  - Synchronize access to the page's valid field in vm_pageout_flush()
    using the containing object's lock.
* alc  2003-10-17  (1 file, -14/+19)
  - Synchronize access to a vm page's valid field using the containing
    vm object's lock.
  - Release the vm object and vm page queues locks around vput().
* alc  2003-09-19  (1 file, -26/+18)
  Merge vm_pageout_free_page_calc() into vm_pageout(), eliminating some
  unneeded code.
* alc  2003-09-17  (1 file, -2/+3)
  When calling vget() on a vnode-backed vm object, acquire the vnode
  interlock before releasing the vm object's lock.
* alc  2003-08-31  (1 file, -20/+16)
  - Add vm object locking to the part of vm_pageout_scan() that launders
    dirty pages.
  - Remove some unused variables.
* alc  2003-08-15  (1 file, -14/+4)
  Extend the scope of the page queues lock in vm_pageout_scan() to cover
  the traversal of the PQ_INACTIVE queue.
* phk  2003-08-03  (1 file, -2/+2)
  Change the layout policy of the swap_pager from hardcoded-width
  striping to a per-device round-robin algorithm.  Because of the policy
  of not attempting to retain previous swap allocation on page-out, this
  means that a newly added swap device almost instantly takes its 1/N
  share of the I/O load, but it takes somewhat longer for it to assume
  its 1/N share of the pages if there is plenty of space on the other
  devices.

  Change the 8G total swapspace limitation to 8G per device instead by
  using a per-device blist rather than one global blist.  This reduces
  the memory footprint by 75% (typically a couple hundred kilobytes) for
  the common case with one swapdevice but NSWAPDEV=4.

  Remove the compile-time constant limit on the number of swap devices;
  there is no limit now.  Instead of a fixed-size array, store the
  per-swapdev structure in a TAILQ.

  Total swap space is still addressed by a 32-bit page number and
  therefore the upper limit is now 2^42 bytes = 16TB (for i386).

  We still do not allocate the first page of each device in order to
  give some amount of protection to any bsdlabel at the start of the
  device.

  A new device is appended after the existing devices in the swap space;
  no attempt is made to fill in holes left behind by swapoff (this can
  trivially be changed should it ever become a problem).

  The sysctl vm.nswapdev now reflects the number of currently configured
  swap devices.

  Rename vm_swap_size to swap_pager_avail for consistency with other
  exported names.

  Change the argument type for vm_proc_swapin_all() and
  swap_pager_isswapped() to be a struct swdevt pointer rather than an
  index.

  Not changed: we are still using blists to manage the free space, but
  since the swapspace is no longer fragmented by the striping, different
  resource managers might fare better.
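The per-device round-robin policy described above can be modeled with a few lines of C. This is a toy sketch: swdev and swap_rr_pick are invented names, and the real allocator works on struct swdevt entries kept in a TAILQ, handing out extents from per-device blists rather than single pages.

```c
/* Toy model of per-device round-robin swap allocation: a rotor walks a
 * circular ring of devices and takes one page from the first device
 * with free space, so each device converges on its 1/N share. */
#include <stddef.h>

struct swdev {
    struct swdev *sw_next;   /* next device in the (circular) ring */
    int           sw_free;   /* free pages remaining on this device */
    int           sw_used;   /* pages handed out, for accounting */
};

/* Allocate one page round-robin; returns the device used, or NULL if
 * every device is full.  *rotor remembers where the next scan starts. */
struct swdev *swap_rr_pick(struct swdev **rotor) {
    struct swdev *start = *rotor, *d = start;
    do {
        if (d->sw_free > 0) {
            d->sw_free--;
            d->sw_used++;
            *rotor = d->sw_next;   /* next allocation starts after us */
            return d;
        }
        d = d->sw_next;            /* this device is full: try the next */
    } while (d != start);
    return NULL;
}
```

Because the rotor advances past each device it uses, the I/O load spreads across devices immediately, matching the commit's observation that a new device "almost instantly takes its 1/N share of the I/O load".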
* alc  2003-07-07  (1 file, -21/+27)
  - Complete the vm object locking in vm_pageout_object_deactivate_pages().
  - Change vm_pageout_object_deactivate_pages()'s first parameter from a
    vm_map_t to a pmap_t.
  - Change vm_pageout_object_deactivate_pages()'s and
    vm_pageout_map_deactivate_pages()'s last parameter from a vm_pindex_t
    to a long.  Since the number of pages in an address space doesn't
    require 64 bits on an i386, vm_pindex_t is overkill.
* alc  2003-06-29  (1 file, -9/+18)
  Add vm object locking to vm_pageout_map_deactivate_pages().
* alc  2003-06-28  (1 file, -5/+7)
  - Add vm object locking to vm_pageout_clean().
* obrien  2003-06-11  (1 file, -2/+3)
  Use __FBSDID().
* das  2003-05-19  (1 file, -7/+8)
  If we seem to be out of VM, don't allow the pagedaemon to kill
  processes in the first pass.  Among other things, this will give us a
  chance to launder vnode-backed pages before concluding that we need
  more swap.  This is particularly useful for systems that have no swap.
  While here, update a comment and remove some long-unused code.
  Reported by: Lucky Green <shamrock@cypherpunks.to>
  Suggested by: dillon
  Approved by: re (rwatson)
* alc  2003-05-04  (1 file, -8/+8)
  Avoid a lock-order reversal and implement vm_object locking in
  vm_pageout_page_free().
* alc  2003-04-30  (1 file, -6/+5)
  Eliminate an unused parameter from vm_pageout_object_deactivate_pages().
* alc  2003-04-24  (1 file, -4/+7)
  - Acquire the vm_object's lock when performing vm_object_page_clean().
  - Add a parameter to vm_pageout_flush() that tells vm_pageout_flush()
    whether its caller has locked the vm_object.  (This is a temporary
    measure to bootstrap vm_object locking.)
* jhb  2003-04-22  (1 file, -2/+5)
  Lock the proc to check p_flag and several other related tests in
  vm_daemon().  We don't need to hold sched_lock as long now as a result.
* alc  2003-04-20  (1 file, -2/+2)
  - Lock the vm_object when performing vm_object_pip_wakeup().
  - Merge two identical cases in a switch statement.
* alc  2003-04-20  (1 file, -0/+2)
  - Lock the vm_object when performing vm_object_pip_add().
* wes  2003-03-31  (1 file, -1/+2)
  Add a facility allowing processes to inform the VM subsystem they are
  critical and should not be killed when pageout is looking for more
  memory pages in all the wrong places.
  Reviewed by: arch@
  Sponsored by: St. Bernard Software
* das  2003-03-12  (1 file, -2/+7)
  - When the VM daemon is out of swap space and looking for a process to
    kill, don't block on a map lock while holding the process lock.
    Instead, skip processes whose map locks are held and find something
    else to kill.
  - Add vm_map_trylock_read() to support the above.
  Reviewed by: alc, mike (mentor)
* alc  2003-02-09  (1 file, -0/+6)
  Add a comment describing how pagedaemon_wakeup() should be used and
  synchronized.
  Suggested by: tegge
* alc  2003-02-02  (1 file, -2/+3)
  - It's more accurate to say that vm_paging_needed() returns TRUE than a
    positive number.
  - In pagedaemon_wakeup(), set vm_pages_needed to 1 rather than
    incrementing it to accomplish the same.
* alc  2003-02-02  (1 file, -2/+5)
  - Convert vm_pageout()'s tsleep()s to msleep()s with the page queue lock.
* alc  2003-02-01  (1 file, -8/+5)
  - Remove (some) unnecessary explicit initializations to zero.
  - Style changes to vm_pageout(): declarations and white-space.
* alc  2003-01-14  (1 file, -3/+1)
  - Update vm_pageout_deficit using atomic operations.  It's a simple
    counter outside the scope of existing locks.
  - Eliminate a redundant clearing of vm_pageout_deficit.
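A simple counter outside any lock's scope, as described above, can be maintained safely with atomic operations. The kernel uses its own atomic(9) primitives (e.g. atomic_add_int()); this minimal userland sketch shows the same idea with C11 <stdatomic.h>, with invented function names.

```c
/* Lock-free counter maintained with atomic operations, in the spirit
 * of vm_pageout_deficit.  note_deficit/take_deficit are illustrative
 * names, not the kernel's API. */
#include <stdatomic.h>

static atomic_int deficit;

/* Record that 'pages' more pages are needed; no lock protects this
 * counter, so the increment must be atomic. */
void note_deficit(int pages) {
    atomic_fetch_add(&deficit, pages);
}

/* Read and clear the counter in one indivisible step, so concurrent
 * increments are never lost between the read and the clear. */
int take_deficit(void) {
    return atomic_exchange(&deficit, 0);
}
```

The read-and-clear in take_deficit() is the key detail: a separate load followed by a store of zero would race with a concurrent note_deficit() and drop its update.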
* alc  2003-01-14  (1 file, -1/+2)
  Make vm_pageout_page_free() static.
* phk  2003-01-03  (1 file, -1/+0)
  Avoid extern decls in .c files by putting them in the vm/swap_pager.h
  include file where they belong.  Share the dmmax_mask variable.
* dillon  2002-12-28  (1 file, -1/+1)
  vm_pager_put_pages() takes VM_PAGER_* flags, not OBJPC_* flags.  It
  just so happens that OBJPC_SYNC has the same value as VM_PAGER_PUT_SYNC
  so no harm done.  But fix it :-)
  No operational changes.
  MFC after: 1 day
* alc  2002-12-18  (1 file, -0/+2)
  Hold the page queues lock when performing vm_page_flag_set().
* alc  2002-12-01  (1 file, -3/+5)
  Hold the page queues lock when calling pmap_protect(); it updates
  fields of the vm_page structure.  Nearby, remove an unnecessary
  semicolon and return statement.
  Approved by: re (blanket)
* alc  2002-12-01  (1 file, -2/+2)
  Increase the scope of the page queue lock in vm_pageout_scan().
  Approved by: re (blanket)
* alc  2002-11-23  (1 file, -2/+3)
  Assert that the page queues lock rather than Giant is held in
  vm_pageout_page_free().
  Approved by: re
* jeff  2002-11-21  (1 file, -1/+9)
  - Add an event that is triggered when the system is low on memory.
    This is intended to be used by significant memory consumers so that
    they may drain some of their caches.
  Inspired by: phk
  Approved by: re
  Tested on: x86, alpha
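The low-memory event above is a classic callback-registry pattern: consumers register a hook, and the pageout path fires all hooks when memory runs short. FreeBSD implements this via the EVENTHANDLER(9) mechanism (the vm_lowmem event); the sketch below is a hypothetical, simplified model with invented names (lowmem_subscribe, lowmem_fire).

```c
/* Toy model of a "low memory" event hook list: memory consumers
 * register a callback, and the pageout path invokes them all when the
 * system is short on pages. */
#include <stddef.h>

#define MAXHOOKS 8

typedef void (*lowmem_fn)(void *arg);

static struct { lowmem_fn fn; void *arg; } hooks[MAXHOOKS];
static int nhooks;

/* Register a cache-draining callback; returns 0, or -1 if full. */
int lowmem_subscribe(lowmem_fn fn, void *arg) {
    if (nhooks == MAXHOOKS)
        return -1;
    hooks[nhooks].fn = fn;
    hooks[nhooks].arg = arg;
    nhooks++;
    return 0;
}

/* Called by the pageout path when the system is low on memory. */
void lowmem_fire(void) {
    for (int i = 0; i < nhooks; i++)
        hooks[i].fn(hooks[i].arg);
}

/* Example consumer: a cache that drops its entries under pressure. */
static void drain_cache(void *arg) {
    *(int *)arg = 0;   /* pretend to free everything cached */
}
```

The design keeps the pageout path decoupled from its consumers: the VM never needs to know which subsystems cache memory, only that each registered hook can give some back.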
* alc  2002-11-18  (1 file, -2/+2)
  Remove vm_page_protect().  Instead, use pmap_page_protect() directly.
* alc  2002-11-16  (1 file, -5/+5)
  Now that pmap_remove_all() is exported by our pmap implementations,
  use it directly.
* alc  2002-11-13  (1 file, -1/+31)
  Move pmap_collect() out of the machine-dependent code, rename it to
  reflect its new location, and add page queue and flag locking.
  Notes: (1) alpha, i386, and ia64 had identical implementations of
  pmap_collect() in terms of machine-independent interfaces; (2) sparc64
  doesn't require it; (3) powerpc had it as a TODO.
* alc  2002-11-10  (1 file, -5/+5)
  When prot is VM_PROT_NONE, call pmap_page_protect() directly rather
  than indirectly through vm_page_protect().  The one remaining page
  flag that is updated by vm_page_protect() is already being updated by
  our various pmap implementations.
  Note: A later commit will similarly change the VM_PROT_READ case and
  eliminate vm_page_protect().
* jeff  2002-10-12  (1 file, -3/+2)
  - Create a new scheduler api that is defined in sys/sched.h
  - Begin moving scheduler specific functionality into sched_4bsd.c
  - Replace direct manipulation of scheduler data with hooks provided by
    the new api.
  - Remove KSE specific state modifications and single runq assumptions
    from kern_switch.c
  Reviewed by: -arch
* jeff  2002-09-25  (1 file, -1/+1)
  - Get rid of the unused LK_NOOBJ.
* jake  2002-09-21  (1 file, -2/+2)
  Use the fields in the sysentvec and in the vm map header in place of
  the constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and
  PS_STRINGS.  This is mainly so that they can be variable even for the
  native abi, based on different machine types.  Get stack protections
  from the sysentvec too.  This makes it trivial to map the stack
  non-executable for certain abis, on machines that support it.
* julian  2002-09-11  (1 file, -6/+6)
  Completely redo thread states.
  Reviewed by: davidxu@freebsd.org
* alc  2002-08-10  (1 file, -0/+4)
  o Lock page queue accesses by vm_page_activate().
* alc  2002-07-28  (1 file, -2/+3)
  o Lock page queue accesses by vm_page_free().
  o Increment cnt.v_dfree inside vm_pageout_page_free() rather than at
    each call.
* alc  2002-07-27  (1 file, -4/+5)
  o Require that the page queues lock is held on entry to
    vm_pageout_clean() and vm_pageout_flush().
  o Acquire the page queues lock before calling vm_pageout_clean() or
    vm_pageout_flush().