path: root/sys/vm/vm_pageout.c
Commit history (each entry ends with [author, date, files changed, lines -/+])
* Avoid extern decls in .c files by putting them in the vm/swap_pager.h
  include file where they belong. Share the dmmax_mask variable.
  [phk, 2003-01-03, 1 file, -1/+0]
* vm_pager_put_pages() takes VM_PAGER_* flags, not OBJPC_* flags. It just
  so happens that OBJPC_SYNC has the same value as VM_PAGER_PUT_SYNC, so
  no harm done. But fix it :-) No operational changes.
  MFC after: 1 day
  [dillon, 2002-12-28, 1 file, -1/+1]
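  For illustration, a minimal sketch of a call using the correct flag
  namespace (the wrapper function, its names, and the error handling are
  hypothetical; object locking requirements are elided):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <vm/vm.h>
      #include <vm/vm_object.h>
      #include <vm/vm_page.h>
      #include <vm/vm_pager.h>

      /*
       * Synchronously write back `count' dirty pages of `object'.  The
       * flags argument is in the VM_PAGER_* namespace; OBJPC_SYNC merely
       * happened to share VM_PAGER_PUT_SYNC's numeric value.
       */
      static void
      flush_pages_sync(vm_object_t object, vm_page_t *ma, int count,
          int *rtvals)
      {
              int i;

              vm_pager_put_pages(object, ma, count, VM_PAGER_PUT_SYNC,
                  rtvals);
              for (i = 0; i < count; i++)
                      if (rtvals[i] != VM_PAGER_OK)
                              printf("page %d not written, status %d\n",
                                  i, rtvals[i]);
      }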
* Hold the page queues lock when performing vm_page_flag_set().
  [alc, 2002-12-18, 1 file, -0/+2]
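  This and the many page-queues commits below share one pattern; a minimal
  sketch, assuming the vm_page_lock_queues()/vm_page_unlock_queues()
  interface of the FreeBSD 5.x era (the helper function is hypothetical):

      #include <vm/vm.h>
      #include <vm/vm_page.h>

      /*
       * Page flag and queue manipulation must happen with the global
       * page queues lock held.
       */
      static void
      mark_page_referenced(vm_page_t m)
      {
              vm_page_lock_queues();
              vm_page_flag_set(m, PG_REFERENCED);
              vm_page_unlock_queues();
      }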
* Hold the page queues lock when calling pmap_protect(); it updates fields
  of the vm_page structure. Nearby, remove an unnecessary semicolon and
  return statement.
  Approved by: re (blanket)
  [alc, 2002-12-01, 1 file, -3/+5]
* Increase the scope of the page queue lock in vm_pageout_scan().
  Approved by: re (blanket)
  [alc, 2002-12-01, 1 file, -2/+2]
* Assert that the page queues lock rather than Giant is held in
  vm_pageout_page_free().
  Approved by: re
  [alc, 2002-11-23, 1 file, -2/+3]
* Add an event that is triggered when the system is low on memory. This
  is intended to be used by significant memory consumers so that they may
  drain some of their caches.
  Inspired by: phk
  Approved by: re
  Tested on: x86, alpha
  [jeff, 2002-11-21, 1 file, -1/+9]
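  This became the vm_lowmem event. A sketch of how a cache-heavy subsystem
  might hook it via EVENTHANDLER(9) (the consumer names are hypothetical,
  and the handler signature in the 2002 tree may differ slightly):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/eventhandler.h>

      /* Drop part of a private cache when the VM signals pressure. */
      static void
      mycache_lowmem(void *arg, int flags)
      {
              /* ... free some fraction of the cache here ... */
      }

      static void
      mycache_init(void)
      {
              EVENTHANDLER_REGISTER(vm_lowmem, mycache_lowmem, NULL,
                  EVENTHANDLER_PRI_FIRST);
      }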
* Remove vm_page_protect(). Instead, use pmap_page_protect() directly.
  [alc, 2002-11-18, 1 file, -2/+2]
* Now that pmap_remove_all() is exported by our pmap implementations,
  use it directly.
  [alc, 2002-11-16, 1 file, -5/+5]
* Move pmap_collect() out of the machine-dependent code, rename it
  to reflect its new location, and add page queue and flag locking.
  Notes: (1) alpha, i386, and ia64 had identical implementations of
  pmap_collect() in terms of machine-independent interfaces; (2) sparc64
  doesn't require it; (3) powerpc had it as a TODO.
  [alc, 2002-11-13, 1 file, -1/+31]
* When prot is VM_PROT_NONE, call pmap_page_protect() directly rather than
  indirectly through vm_page_protect(). The one remaining page flag that
  is updated by vm_page_protect() is already being updated by our various
  pmap implementations.
  Note: A later commit will similarly change the VM_PROT_READ case and
  eliminate vm_page_protect().
  [alc, 2002-11-10, 1 file, -5/+5]
* - Create a new scheduler api that is defined in sys/sched.h.
  - Begin moving scheduler specific functionality into sched_4bsd.c.
  - Replace direct manipulation of scheduler data with hooks provided by
    the new api.
  - Remove KSE specific state modifications and single runq assumptions
    from kern_switch.c.
  Reviewed by: -arch
  [jeff, 2002-10-12, 1 file, -3/+2]
* Get rid of the unused LK_NOOBJ.
  [jeff, 2002-09-25, 1 file, -1/+1]
* Use the fields in the sysentvec and in the vm map header in place of the
  constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and PS_STRINGS.
  This is mainly so that they can be variable even for the native abi,
  based on different machine types. Get stack protections from the
  sysentvec too. This makes it trivial to map the stack non-executable
  for certain abis, on machines that support it.
  [jake, 2002-09-21, 1 file, -2/+2]
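  A sketch of the substitution this describes; the sysentvec field names
  (sv_usrstack, sv_stackprot) and vm_map_min() are as in later trees, and
  the helper is hypothetical:

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <sys/sysent.h>
      #include <vm/vm.h>
      #include <vm/vm_map.h>

      /*
       * Before: hard-coded constants, identical for every ABI, e.g.
       *      stack = USRSTACK;  min = VM_MIN_ADDRESS;
       * After (sketch): per-ABI values from the process's sysentvec and
       * per-map bounds from the vm map header.
       */
      static void
      stack_params(struct proc *p, vm_map_t map,
          vm_offset_t *stackp, vm_prot_t *protp, vm_offset_t *minp)
      {
              *stackp = p->p_sysent->sv_usrstack;  /* replaces USRSTACK */
              *protp = p->p_sysent->sv_stackprot;  /* per-ABI stack prot */
              *minp = vm_map_min(map);         /* replaces VM_MIN_ADDRESS */
      }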
* Completely redo thread states.
  Reviewed by: davidxu@freebsd.org
  [julian, 2002-09-11, 1 file, -6/+6]
* Lock page queue accesses by vm_page_activate().
  [alc, 2002-08-10, 1 file, -0/+4]
* o Lock page queue accesses by vm_page_free().
  o Increment cnt.v_dfree inside vm_pageout_page_free() rather than at
    each call.
  [alc, 2002-07-28, 1 file, -2/+3]
* o Require that the page queues lock is held on entry to
    vm_pageout_clean() and vm_pageout_flush().
  o Acquire the page queues lock before calling vm_pageout_clean() or
    vm_pageout_flush().
  [alc, 2002-07-27, 1 file, -4/+5]
* o Lock page queue accesses by vm_page_activate() and
    vm_page_deactivate() in vm_pageout_object_deactivate_pages().
  o Apply some style fixes to vm_pageout_object_deactivate_pages().
  [alc, 2002-07-27, 1 file, -7/+6]
* Extend the scope of the page queues lock in vm_pageout_scan()
  to cover the traversal of the cache queue.
  [alc, 2002-07-23, 1 file, -2/+1]
* o Lock page queue accesses by vm_page_try_to_cache(). (The accesses
    in kern/vfs_bio.c are already locked.)
  o Assert that the page queues lock is held in vm_page_try_to_cache().
  [alc, 2002-07-20, 1 file, -0/+2]
* o Lock page queue accesses by vm_page_cache() in vm_fault() and
    vm_pageout_scan(). (The others are already locked.)
  o Assert that the page queues lock is held in vm_page_cache().
  [alc, 2002-07-20, 1 file, -0/+2]
* Lock accesses to the active page queue in vm_pageout_scan() and
  vm_pageout_page_stats().
  [alc, 2002-07-20, 1 file, -2/+4]
* Part 1 of KSE-III
  The ability to schedule multiple threads per process (on one cpu) by
  making ALL system calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in
  tools).
  Reviewed by: Almost everyone who counts (at various times, peter, jhb,
  matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still Beta code, and contains lots of debugging stuff.
  Expect slight instability in signals.
  [julian, 2002-06-29, 1 file, -4/+26]
* o Introduce and use vm_map_trylock() to replace several direct uses
    of lockmgr().
  o Add missing synchronization to vmspace_swap_count(): Obtain a read
    lock on the vm_map before traversing it.
  [alc, 2002-04-28, 1 file, -2/+1]
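  A minimal sketch of the trylock pattern this introduces (the helper is
  hypothetical; vm_map_trylock() is assumed to return nonzero on success,
  as in later trees):

      #include <vm/vm.h>
      #include <vm/vm_map.h>

      /*
       * Opportunistically process a map only if its lock can be taken
       * without sleeping; callers that previously used raw lockmgr()
       * for this now go through vm_map_trylock().
       */
      static int
      try_process_map(vm_map_t map)
      {
              if (!vm_map_trylock(map))
                      return (0);     /* busy; caller retries later */
              /* ... traverse or modify the map ... */
              vm_map_unlock(map);
              return (1);
      }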
* Remove references to vm_zone.h and switch over to the new uma API.
  [jeff, 2002-03-20, 1 file, -1/+1]
* Remove __P.
  [alfred, 2002-03-19, 1 file, -8/+8]
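  For readers unfamiliar with the old compatibility macro: __P() wrapped
  prototype argument lists so headers could degrade to K&R C. The change
  is mechanical (the function shown is hypothetical):

      #include <sys/cdefs.h>
      #include <vm/vm.h>
      #include <vm/vm_page.h>

      /*
       * Before: K&R-compatible prototype via the __P() macro, which
       * expands to the argument list on ANSI compilers and to () on
       * pre-ANSI ones.
       */
      static void example_scan __P((int pass, vm_page_t m));

      /* After: plain ANSI C prototype. */
      static void example_scan(int pass, vm_page_t m);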
* This is the first part of the new kernel memory allocator. This replaces
  malloc(9) and vm_zone with a slab-like allocator.
  Reviewed by: arch@
  [jeff, 2002-03-19, 1 file, -0/+1]
* Back out the modification of vm_map locks from lockmgr to sx locks. The
  best path forward now is likely to change the lockmgr locks to simple
  sleep mutexes, then see if any extra contention it generates is greater
  than the removed overhead of managing local locking state information,
  cost of extra calls into lockmgr, etc. Additionally, making the vm_map
  lock a mutex and respecting it properly will put us much closer to not
  needing Giant magic in vm.
  [green, 2002-03-18, 1 file, -1/+2]
* Rename SI_SUB_MUTEX to SI_SUB_MTX_POOL to make the name at all accurate.
  While doing this, move it earlier in the sysinit boot process so that
  the VM system can use it.
  After that, the system is now able to use sx locks instead of lockmgr
  locks in the VM system. To accomplish this, some of the more
  questionable uses of the locks (such as testing whether they are owned
  or not, as well as allowing shared+exclusive recursion) are removed,
  and simpler logic throughout is used so locks should also be easier to
  understand.
  This has been tested on my laptop for months, and has not shown any
  problems on SMP systems, either, so appears quite safe. One more user
  of lockmgr down, many more to go :)
  [green, 2002-03-13, 1 file, -2/+1]
* - Remove a number of extra newlines that do not belong here according
    to style(9).
  - Minor space adjustment in cases where we have "( ", " )", if(),
    return(), while(), for(), etc.
  - Add /* SYMBOL */ after a few #endifs.
  Reviewed by: alc
  [eivind, 2002-03-10, 1 file, -19/+6]
* Fix a horribly suboptimal algorithm in the vm_daemon.
  In order to determine what to page out, the vm_daemon checks reference
  bits on all pages belonging to all processes. Unfortunately, the
  algorithm used reacted badly with shared pages; each shared page would
  be checked once per process sharing it; this caused an O(N^2) growth of
  tlb invalidations. The algorithm has been changed so that each page
  will be checked only 16 times.
  Prior to this change, a fork/sleepbomb of 1300 processes could cause
  the vm_daemon to take over 60 seconds to complete, effectively freezing
  the system for that time period. With this change in place, the
  vm_daemon completes in less than a second. Any system with hundreds of
  processes sharing pages should benefit from this change.
  Note that the vm_daemon is only run when the system is under extreme
  memory pressure. It is likely that many people with loaded systems saw
  no symptoms of this problem until they reached the point where swapping
  began.
  Special thanks go to dillon, peter, and Chuck Cranor, who helped me get
  up to speed with vm internals.
  PR: 33542, 20393
  Reviewed by: dillon
  MFC after: 1 week
  [silby, 2002-02-27, 1 file, -1/+1]
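  A schematic sketch of the capping idea (not the committed code: the
  per-page check counter and helper are hypothetical; pmap_ts_referenced()
  is the real read-and-clear interface for reference bits):

      #include <sys/param.h>
      #include <vm/vm.h>
      #include <vm/vm_page.h>
      #include <vm/pmap.h>

      #define MAX_REF_CHECKS  16      /* cap from the commit message */

      /*
       * A page shared by N processes used to have its reference bits
       * (and hence TLB entries) probed N times per vm_daemon pass;
       * capping the probes bounds the cost per page regardless of how
       * widely it is shared.
       */
      static int
      check_page_referenced(vm_page_t m, int *check_count)
      {
              if (*check_count >= MAX_REF_CHECKS)
                      return (0);     /* sampled enough this pass */
              (*check_count)++;
              return (pmap_ts_referenced(m));
      }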
* Changes to make the OOM killer much more effective:
  - Allow the OOM killer to target processes currently locked in memory.
    These very often are the ones doing the memory hogging.
  - Drop the wakeup priority of processes currently sleeping while
    waiting for their page fault to complete. In order for the OOM killer
    to work well, the killed process and other system processes waiting
    on memory must be allowed to wakeup first.
  Reviewed by: dillon
  MFC after: 1 week
  [silby, 2002-02-19, 1 file, -2/+1]
* GC P_BUFEXHAUST leftovers; we've had a new mechanism to avoid buffer
  cache lockups for over a year now.
  MFC after: 0 days
  [dillon, 2002-01-31, 1 file, -3/+0]
* Fix a BUF_TIMELOCK race against BUF_LOCK, and fix a deadlock in vget()
  against VM_WAIT in the pageout code. Both fixes involve adjusting the
  lockmgr's timeout capability so that locks obtained with timeouts do
  not interfere with locks obtained without a timeout.
  Hopefully MFC: before the 4.5 release
  [dillon, 2001-12-20, 1 file, -4/+11]
* Syntax cleanup and documentation, no operational changes.
  MFC after: 1 day
  [dillon, 2001-10-21, 1 file, -5/+9]
* Don't remove all mappings of a swapped-out process if the vm map
  contained wired entries. vm_fault_unwire() depends on the mapping being
  intact.
  Reviewed by: dillon
  [tegge, 2001-10-14, 1 file, -1/+5]
* KSE Milestone 2
  Note: ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there
  are smaller units of scheduling than the process (but only allow one
  thread per process at this time). This is functionally equivalent to
  the previous -current except that there is a thread associated with
  each process.
  Sorry john! (your next MFC will be a doozy!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  [julian, 2001-09-12, 1 file, -9/+12]
* Reorg vm_page.c into vm_page.c, vm_pageq.c, and vm_contig.c (for
  contigmalloc). Also removed some spl's and added some VM mutexes, but
  they are not actually used yet, so this commit does not really make any
  operational changes to the system.
  vm_page.c relates to vm_page_t manipulation, including high level
  deactivation, activation, etc... vm_pageq.c relates to finding free
  pages and acquiring exclusive access to a page queue (exclusivity part
  not yet implemented). And the world still builds... :-)
  [dillon, 2001-07-04, 1 file, -51/+14]
* Whitespace / register cleanup.
  [dillon, 2001-07-04, 1 file, -3/+3]
* With Alfred's permission, remove vm_mtx in favor of a fine-grained
  approach (this commit is just the first stage). Also add various
  GIANT_ macros to formalize the removal of Giant, making it easy to test
  in a more piecemeal fashion. These macros will allow us to test
  fine-grained locks to a degree before removing Giant, and also after,
  and to remove Giant in a piecemeal fashion via sysctl's on those
  subsystems which the authors believe can operate without Giant.
  [dillon, 2001-07-04, 1 file, -35/+11]
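  The best known of these is GIANT_REQUIRED; a minimal sketch of its use
  (the function is hypothetical; with INVARIANTS/WITNESS the macro turns
  the documented requirement into a runtime check):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      /*
       * A function not yet converted to fine-grained locking documents,
       * and on debug kernels enforces, its dependence on Giant.
       */
      static void
      not_yet_giant_free(void)
      {
              GIANT_REQUIRED;
              /* ... code that still relies on Giant ... */
      }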
* Don't lock around swap_pager_swap_init(), which is only called once
  during the pagedaemon's startup code, since it calls malloc, which
  results in lock order reversals.
  [jhb, 2001-06-20, 1 file, -0/+2]
* Put the scheduler, vmdaemon, and pagedaemon kthreads back under Giant
  for now. The proc locking isn't actually safe yet and won't be until
  the proc locking is finished.
  [jhb, 2001-06-20, 1 file, -16/+1]
* Two fixes to the out-of-swap process termination code. First, start
  killing processes a little earlier to avoid a deadlock. Second, when
  calculating the 'largest process' do not just count RSS. Instead count
  the RSS + SWAP used by the process. Without this the code tended to
  kill small inconsequential processes like, oh, sshd, rather than one
  of the many 'eatmem 200MB' I run on a whim :-). This fix has been
  extensively tested on -stable and somewhat tested on -current and will
  be MFCd in a few days.
  Shamed into fixing this by: ps
  [dillon, 2001-06-09, 1 file, -3/+8]
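  A sketch of the size metric described here, assuming the
  vmspace_resident_count() and vmspace_swap_count() helpers (both present
  in trees of this era; counts are in pages, and the wrapper is
  hypothetical):

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <vm/vm.h>
      #include <vm/vm_map.h>

      /*
       * Size a process for OOM selection by resident pages plus
       * swapped-out pages, so a mostly-swapped memory hog still ranks
       * above a small resident daemon such as sshd.
       */
      static long
      oom_process_size(struct proc *p)
      {
              struct vmspace *vm = p->p_vmspace;

              return (vmspace_resident_count(vm) + vmspace_swap_count(vm));
      }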
* - Add in several asserts of vm_mtx.
  - Assert Giant in vm_pageout_scan() for the vnode hacking that it does.
  - Don't hold vm_mtx around vget() or vput().
  - Lock Giant when calling vm_pageout_scan() from the pagedaemon. Also,
    lock curproc while setting the P_BUFEXHAUST flag.
  - For now we still hold Giant for all of the vm_daemon. When process
    limits are locked we will only need Giant for swapout_procs().
  [jhb, 2001-05-23, 1 file, -5/+42]
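  A minimal sketch of such an assertion using the stock mtx_assert()
  interface (the helper is hypothetical; vm_mtx is the short-lived global
  VM lock introduced in the next entry below):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      extern struct mtx vm_mtx;       /* global VM lock of this era */

      /*
       * Document a locking precondition so INVARIANTS/WITNESS kernels
       * catch callers that enter without the VM lock held.
       */
      static void
      vm_helper(void)
      {
              mtx_assert(&vm_mtx, MA_OWNED);
              /* ... manipulate VM structures ... */
      }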
* Introduce a global lock for the vm subsystem (vm_mtx).
  vm_mtx does not recurse and is required for most low level vm
  operations. Faults can not be taken without holding Giant. Memory
  subsystems can now call the base page allocators safely. Almost all
  atomic ops were removed as they are covered under the vm mutex.
  Alpha and ia64 now need to catch up to i386's trap handlers.
  FFS and NFS have been tested; other filesystems will need minor changes
  (grabbing the vm lock when twiddling page properties).
  Reviewed (partially) by: jake, jhb
  [alfred, 2001-05-19, 1 file, -5/+14]
* During the code to pick a process to kill when memory is exhausted,
  keep the process in question locked as soon as we find it and determine
  it to be eligible, until we actually kill it. To avoid deadlock, we
  don't block on the process lock but skip any process that is already
  locked during our search.
  [jhb, 2001-05-17, 1 file, -3/+18]
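  A sketch of the non-blocking search pattern, assuming the
  PROC_TRYLOCK()/PROC_UNLOCK() wrappers around the per-process mutex (the
  eligibility predicate is hypothetical; acquiring the allproc lock is
  elided):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/proc.h>

      /*
       * Scan for a victim without ever blocking on a process lock; an
       * already-locked process is skipped, trading completeness of the
       * search for deadlock avoidance.  Caller holds the allproc lock.
       */
      static struct proc *
      find_victim(int (*is_eligible)(struct proc *))
      {
              struct proc *p;

              LIST_FOREACH(p, &allproc, p_list) {
                      if (PROC_TRYLOCK(p) == 0)
                              continue;       /* busy: skip, don't block */
                      if (is_eligible(p))
                              return (p);     /* returned locked */
                      PROC_UNLOCK(p);
              }
              return (NULL);
      }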
* Undo part of the tangle of having sys/lock.h and sys/mutex.h included
  in other "system" header files. Also help the deprecation of lockmgr.h
  by making it a sub-include of sys/lock.h and removing sys/lockmgr.h
  from kernel .c files. Sort sys/*.h includes where possible in affected
  files.
  OK'ed by: bde (with reservations)
  [markm, 2001-05-01, 1 file, -1/+2]
* Convert the allproc and proctree locks from lockmgr locks to sx locks.
  [jhb, 2001-03-28, 1 file, -4/+5]
* Change and clean the mutex lock interface.
  mtx_enter(lock, type) becomes:
    mtx_lock(lock)       for sleep locks (MTX_DEF-initialized locks)
    mtx_lock_spin(lock)  for spin locks (MTX_SPIN-initialized)
  Similarly, for releasing a lock, we now have:
    mtx_unlock(lock)      for MTX_DEF
    mtx_unlock_spin(lock) for MTX_SPIN
  We change the caller interface for the two different types of locks
  because the semantics are entirely different for each case, and this
  makes it explicitly clear and, at the same time, it rids us of the
  extra `type' argument.
  The enter->lock and exit->unlock change has been made with the idea
  that we're "locking data" and not "entering locked code" in mind.
  Further, remove all additional "flags" previously passed to the lock
  acquire/release routines with the exception of two: MTX_QUIET and
  MTX_NOSWITCH. The functionality of these flags is preserved and they
  can be passed to the lock/unlock routines by calling the corresponding
  wrappers:
    mtx_{lock, unlock}_flags(lock, flag(s))
    mtx_{lock, unlock}_spin_flags(lock, flag(s))
  for MTX_DEF and MTX_SPIN locks, respectively.
  Re-inline some lock acq/rel code; in the sleep lock case, we only
  inline the _obtain_lock()s in order to ensure that the inlined code
  fits into a cache line. In the spin lock case, we inline recursion and
  actually only perform a function call if we need to spin. This change
  has been made with the idea that we generally tend to avoid spin locks
  and that also the spin locks that we do have and are heavily used
  (i.e. sched_lock) do recurse, and therefore, in an effort to reduce
  function call overhead for some architectures (such as alpha), we
  inline recursion for this case.
  Create a new malloc type for the witness code and retire from using
  the M_DEV type. The new type is called M_WITNESS and is only declared
  if WITNESS is enabled.
  Begin cleaning up some machdep/mutex.h code - specifically updated the
  "optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN and
  MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently need
  those.
  Finally, caught up to the interface changes in all sys code.
  Contributors: jake, jhb, jasone (in no particular order)
  [bmilekic, 2001-02-09, 1 file, -10/+10]
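  A minimal before/after sketch of the renaming described above (the lock
  and function are illustrative; initialization via mtx_init() is elided):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      static struct mtx data_mtx;     /* an MTX_DEF lock, initialized
                                         elsewhere with mtx_init() */

      static void
      update_data(void)
      {
              /*
               * Old interface: the lock type was repeated at every
               * call site.
               *      mtx_enter(&data_mtx, MTX_DEF);
               *      ...
               *      mtx_exit(&data_mtx, MTX_DEF);
               */

              /* New interface: the call name encodes the lock class. */
              mtx_lock(&data_mtx);
              /* ... modify the protected data ... */
              mtx_unlock(&data_mtx);
      }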