path: root/sys/vm
* Use `struct uma_zone *' instead of uma_zone_t, so that <sys/uma.h> isn't
  a prerequisite.
  (bde, 2002-09-05; 1 file, -1/+1)
* s/SGNL/SIG/
  s/SNGL/SINGLE/
  s/SNGLE/SINGLE/
  Fix the abbreviations for the P_STOPPED_* etc. flags; in the original
  code they were inconsistent and difficult to tell apart.
  Approved by: julian (mentor)
  (davidxu, 2002-09-05; 1 file, -1/+2)
* o Synchronize updates to struct vm_page::cow with the page queues lock.
  (alc, 2002-09-02; 1 file, -6/+5)
* Reduce the maximum KVA reserved for swap meta structures from 70 to 32 MB.
  Reduce the swap meta calculation by a factor of 2; it's still massive
  overkill.
  X-MFC after: immediately
  (dillon, 2002-08-31; 1 file, -2/+2)
* Change hw.physmem and hw.usermem to unsigned long like they used to be
  in the original hardwired sysctl implementation.
  The buf size calculator still overflows an integer on machines with
  large KVA (eg: ia64) where the number of pages does not fit into an
  int.  Use 'long' there.
  Change Maxmem and physmem and related variables to 'long', mostly for
  completeness.  Machines are not likely to overflow 'int' pages in the
  near term, but then again, 640K ought to be enough for anybody.  This
  comes for free on 32 bit machines, so why not?
  (peter, 2002-08-30; 1 file, -2/+2)
* o Retire pmap_pageable().  It's an advisory routine that none of our
  platforms implements.
  (alc, 2002-08-25; 2 files, -13/+0)
* o Retire vm_page_zero_fill() and vm_page_zero_fill_area().  Ever since
  pmap_zero_page() and pmap_zero_page_area() were modified to accept a
  struct vm_page * instead of a physical address, vm_page_zero_fill()
  and vm_page_zero_fill_area() have served no purpose.
  (alc, 2002-08-25; 6 files, -32/+5)
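  A minimal sketch of why the wrapper became redundant, reconstructed
  from the commit text (the actual function body may have differed):

      /*
       * Old interface: pmap_zero_page() took a physical address, so
       * the wrapper existed only to do the page-to-paddr conversion.
       */
      void
      vm_page_zero_fill(vm_page_t m)
      {
              pmap_zero_page(VM_PAGE_TO_PHYS(m));
      }

      /*
       * New interface: pmap_zero_page() takes the page itself, so
       * callers call it directly and the wrapper has no work left.
       */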
* o Use vm_object_lock() in place of directly locking Giant.
  Reviewed by: md5
  (alc, 2002-08-24; 1 file, -12/+12)
* o Use vm_object_lock() in place of Giant when manipulating a vm object
  in vm_map_insert().
  (alc, 2002-08-24; 1 file, -2/+2)
* o Resurrect vm_object_lock() and vm_object_unlock() from revision 1.19.
  (For now, they simply acquire and release Giant.)
  (alc, 2002-08-24; 1 file, -0/+6)
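  A plausible sketch of the resurrected interface as the commit describes
  it (the exact spelling in vm_object.h may differ): the names document
  intent while the implementation is still just the Giant mutex, which is
  what makes the two substitution commits above purely mechanical.

      #define vm_object_lock(object)          mtx_lock(&Giant)
      #define vm_object_unlock(object)        mtx_unlock(&Giant)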
* Don't use "NULL" when "0" is really meant.archie2002-08-211-2/+2
|
* o Assert that the page queues lock is held in vm_page_activate().
  (alc, 2002-08-11; 1 file, -1/+1)
* o Lock page queue accesses by vm_page_activate().
  (alc, 2002-08-11; 1 file, -0/+2)
* o Lock page queue accesses by vm_page_activate().
  (alc, 2002-08-10; 1 file, -0/+4)
* o Move a call to vm_page_wakeup() inside the scope of the page queues
  lock.
  (alc, 2002-08-10; 1 file, -1/+1)
* o Remove the setting and clearing of the PG_MAPPED flag from the alpha
  and ia64 pmap.
  o Remove the PG_MAPPED flag's declaration.
  (alc, 2002-08-10; 1 file, -1/+0)
* o Remove the setting and clearing of the PG_MAPPED flag.  (This flag
  is obsolete.)
  (alc, 2002-08-10; 3 files, -4/+4)
* o Use pmap_page_is_mapped() in vm_page_protect() rather than the
  PG_MAPPED flag.  (This is the only place in the entire kernel where
  the PG_MAPPED flag is tested.  It will be removed soon.)
  (alc, 2002-08-08; 1 file, -1/+1)
* o Acquire the page queues lock before checking the page's busy status
  in vm_page_grab().  Also, replace the nearby tsleep() with an msleep()
  on the page queues lock.
  (alc, 2002-08-04; 1 file, -2/+4)
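  A sketch of the pattern this describes, using the vm_page API names of
  the era (the actual diff may differ in detail).  msleep() atomically
  releases the given mutex while sleeping and reacquires it on wakeup,
  so the busy check and the sleep cannot race with a missed wakeup:

      vm_page_lock_queues();
      while ((m->flags & PG_BUSY) || m->busy) {
              /* Ask whoever unbusies the page to wake us. */
              vm_page_flag_set(m, PG_WANTED | PG_REFERENCED);
              msleep(m, &vm_page_queue_mtx, PVM, "pgrbwt", 0);
      }
      vm_page_unlock_queues();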
* - Replace v_flag with v_iflag and v_vflag (see the sketch below):
  - v_vflag is protected by the vnode lock and is used when
    synchronization with VOP calls is needed.
  - v_iflag is protected by the interlock and is used for dealing with
    vnode management issues.  These flags include X/O LOCK, FREE,
    DOOMED, etc.
  - All accesses to v_iflag and v_vflag have either been locked or
    marked with mp_fixme's.
  - Many ASSERT_VOP_LOCKED calls have been added where the locking was
    not clear.
  - Many functions in vfs_subr.c were restructured to provide for
    stronger locking.
  Idea stolen from: BSD/OS
  (jeff, 2002-08-04; 4 files, -24/+37)
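  A sketch of the split as described above (field names from the commit;
  the surrounding struct layout and exact types are assumptions):

      struct vnode {
              /* ... */
              u_long  v_iflag;  /* vnode-management flags; interlock held */
              u_long  v_vflag;  /* VOP-related flags; vnode lock held */
              /* ... */
      };

  Splitting one flag word by locking regime lets code assert the specific
  lock that protects the bit it touches, instead of sharing one word with
  mixed rules.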
* o Extend the scope of the page queues lock in contigmalloc1().
  o Replace vm_page_sleep_busy() with vm_page_sleep_if_busy() in
  vm_contig_launder().
  (alc, 2002-08-04; 1 file, -8/+8)
* o Remove the setting of PG_MAPPED from vm_page_wire() and
  vm_page_alloc(VM_ALLOC_WIRED).
  (alc, 2002-08-03; 1 file, -2/+0)
* o Convert two instances of vm_page_sleep_busy() into
  vm_page_sleep_if_busy() with appropriate page queue locking.
  (alc, 2002-08-02; 1 file, -6/+9)
* o Lock page queue accesses in nwfs and smbfs.
  o Assert that the page queues lock is held in vm_page_deactivate().
  (alc, 2002-08-02; 1 file, -1/+1)
* o Lock page queue accesses by vm_page_deactivate().
  (alc, 2002-08-02; 1 file, -0/+2)
* o Acquire the page queues lock before calling vm_page_io_finish().
  o Assert that the page queues lock is held in vm_page_io_finish().
  (alc, 2002-08-01; 1 file, -1/+2)
* o Setting PG_MAPPED and PG_WRITEABLE on pages that are mapped and
  unmapped by pmap_qenter() and pmap_qremove() is pointless.  In fact,
  it probably leads to unnecessary pmap_page_protect() calls if one of
  these pages is paged out after unwiring.
  Note: setting PG_MAPPED asserts that the page's pv list may be
  non-empty.  Since checking the status of the page's pv list isn't any
  harder than checking this flag, the flag should probably be
  eliminated.  Alternatively, PG_MAPPED could be set by pmap_enter()
  exclusively rather than various places throughout the kernel.
  (alc, 2002-07-31; 1 file, -2/+0)
* o Lock page accesses by vm_page_io_start() with the page queues lock.
  o Assert that the page queues lock is held in vm_page_io_start().
  (alc, 2002-07-31; 1 file, -1/+2)
* o In vm_object_madvise() and vm_object_page_remove() replace
  vm_page_sleep_busy() with vm_page_sleep_if_busy().  At the same time,
  increase the scope of the page queues lock.  (This should
  significantly reduce the locking overhead in vm_object_page_remove().)
  o Apply some style fixes.
  (alc, 2002-07-30; 1 file, -15/+10)
* - Optimize wakeup() and its friends; if a thread being woken up is
    already being swapped in, we do not have to ask the scheduler thread
    to do that.
  - Assert that a process is not swapped out in runq functions and
    swapout().
  - Introduce thread_safetoswapout() for readability.
  - In swapout_procs(), perform a test that may block (check of a thread
    working on its vm map) first.  This lets us call swapout() with the
    sched_lock held, providing better atomicity.
  (tanimura, 2002-07-30; 1 file, -64/+65)
* o Introduce vm_page_sleep_if_busy() as an eventual replacement for
  vm_page_sleep_busy().  vm_page_sleep_if_busy() uses the page queues
  lock.
  (alc, 2002-07-29; 2 files, -0/+23)
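  A sketch of the new helper based on the commit text (the committed
  signature and body may differ): sleep only if the page is busy, and
  use the page queues lock for a race-free check-and-sleep.  PDROP makes
  msleep() return with the lock already released:

      int
      vm_page_sleep_if_busy(vm_page_t m, int also_m_busy, const char *msg)
      {
              if ((m->flags & PG_BUSY) || (also_m_busy && m->busy)) {
                      vm_page_lock_queues();
                      vm_page_flag_set(m, PG_WANTED | PG_REFERENCED);
                      msleep(m, &vm_page_queue_mtx, PDROP | PVM, msg, 0);
                      return (TRUE);  /* slept; caller must recheck */
              }
              return (FALSE);
      }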
* Remove an XXXKSE comment; the code is no longer a problem.
  (julian, 2002-07-29; 1 file, -1/+1)
* Create a new thread state to describe threads that would be ready to
  run except for the fact that they are presently swapped out.  Also add
  a process flag to indicate that the process has started the struggle
  to swap back in.  This will be needed for the case where multiple
  threads start the swapin action, to stop a collision.  Also add code
  to stop a process from being swapped out if one of the threads in this
  process is actually off running on another CPU..  that might hurt...
  Submitted by: Seigo Tanimura <tanimura@r.dl.itc.u-tokyo.ac.jp>
  (julian, 2002-07-29; 1 file, -16/+66)
* o Pass VM_ALLOC_WIRED to vm_page_grab() rather than calling
  vm_page_wire() in pmap_new_thread(), pmap_pinit(), and vm_proc_new().
  o Lock page queue accesses by vm_page_free() in pmap_object_init_pt().
  (alc, 2002-07-29; 1 file, -7/+2)
* o Modify vm_page_grab() to accept VM_ALLOC_WIRED.
  (alc, 2002-07-28; 2 files, -1/+5)
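  A hypothetical caller (cf. the pmap_new_thread() entry above), showing
  the point of the change: the page comes back already wired, so the
  separate vm_page_wire() call and its page queues locking disappear:

      vm_page_t m;

      m = vm_page_grab(object, pindex,
          VM_ALLOC_NORMAL | VM_ALLOC_RETRY | VM_ALLOC_WIRED);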
* o Lock page queue accesses by vm_page_free().
  o Apply some style fixes.
  (alc, 2002-07-28; 1 file, -14/+19)
* o Lock page queue accesses by vm_page_free().
  (alc, 2002-07-28; 1 file, -2/+8)
* o Lock page queue accesses by vm_page_free().
  o Increment cnt.v_dfree inside vm_pageout_page_free() rather than at
  each call.
  (alc, 2002-07-28; 1 file, -2/+3)
* o Lock page queue accesses by vm_page_free().
  (alc, 2002-07-28; 1 file, -0/+2)
* o Require that the page queues lock is held on entry to
  vm_pageout_clean() and vm_pageout_flush().
  o Acquire the page queues lock before calling vm_pageout_clean() or
  vm_pageout_flush().
  (alc, 2002-07-27; 3 files, -5/+9)
* o Lock page queue accesses by vm_page_activate().
  (alc, 2002-07-27; 1 file, -0/+6)
* o Lock page queue accesses by vm_page_activate() and
  vm_page_deactivate() in vm_pageout_object_deactivate_pages().
  o Apply some style fixes to vm_pageout_object_deactivate_pages().
  (alc, 2002-07-27; 1 file, -7/+6)
* o Lock page queue accesses by vm_page_activate() and
  vm_page_deactivate().
  (alc, 2002-07-27; 1 file, -0/+2)
* o Remove a vm_page_deactivate() that is immediately followed by a
  vm_page_rename() from vm_object_backing_scan().  vm_page_rename()
  also performs vm_page_deactivate() on pages in the cache queues,
  making the removed vm_page_deactivate() redundant.
  (alc, 2002-07-25; 1 file, -3/+0)
* o Merge vm_fault_wire() and vm_fault_user_wire() by adding a new
  parameter, user_wire.
  (alc, 2002-07-24; 3 files, -56/+11)
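  A sketch of the merged interface (the committed prototype may differ):
  one function, with the old vm_fault_user_wire() behavior selected by
  the flag instead of a nearly duplicate function body:

      int vm_fault_wire(vm_map_t map, vm_offset_t start, vm_offset_t end,
          boolean_t user_wire);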
* o Lock page queue accesses by vm_page_dontneed().
  o Assert that the page queue lock is held in vm_page_dontneed().
  (alc, 2002-07-23; 2 files, -4/+5)
* o Extend the scope of the page queues lock in vm_pageout_scan() to
  cover the traversal of the cache queue.
  (alc, 2002-07-23; 1 file, -2/+1)
* Change struct vmspace->vm_shm from void * to struct shmmap_state *;
  this removes the need for casts in several cases.
  (alfred, 2002-07-22; 1 file, -1/+1)
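  A sketch of how the header carries the real type without new include
  dependencies (field placement and comment are assumptions): a forward
  declaration suffices for a pointer member, and callers then use the
  field without casting:

      struct shmmap_state;            /* forward declaration is enough */

      struct vmspace {
              /* ... */
              struct shmmap_state *vm_shm;    /* SysV shared memory data */
              /* ... */
      };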
* Remove caddr_t.
  (alfred, 2002-07-22; 1 file, -1/+1)
* o Lock page queue accesses by vm_page_free() and vm_page_deactivate().
  (alc, 2002-07-21; 1 file, -0/+12)