path: root/sys/vm
Commits (author, date, files changed, -removed/+added lines, commit message):
* alc, 2004-02-23, 1 file, -23/+4:
  - Substitute bdone() and bwait() from vfs_bio.c for swap_pager_putpages()'s
    buffer completion code. Note: the only difference between
    swp_pager_sync_iodone() and bdone(), aside from the locking in the latter,
    was the unnecessary clearing of B_ASYNC.
  - Remove an unnecessary pmap_page_protect() from swp_pager_async_iodone().
  Reviewed by: tegge
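
A minimal sketch of the bdone()/bwait() pairing that replaces the private
completion handler above, assuming the bdone(struct buf *) and
bwait(struct buf *, pri, wchan) interfaces from vfs_bio.c; the handler name
and the "swwrt" wait channel are illustrative only:

    /* Completion side: where swp_pager_sync_iodone() used to clear flags and
     * wake the waiter by hand, the generic helper is enough. */
    static void
    example_putpages_iodone(struct buf *bp)
    {
            bdone(bp);              /* mark the buffer done, wake any bwait()er */
    }

    /* Submission side: issue the synchronous write, then block on the buffer. */
    bp->b_iodone = example_putpages_iodone;
    /* ... queue the I/O described by bp ... */
    bwait(bp, PVM, "swwrt");        /* sleep until example_putpages_iodone() runs */
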
* alc, 2004-02-22, 1 file, -1/+1:
  Correct a long-standing race condition in vm_object_page_remove() that
  could result in a dirty page being unintentionally freed.
  Reviewed by: tegge
  MFC after: 7 days
* alc, 2004-02-21, 1 file, -2/+4:
  Eliminate the second, unnecessary call to pmap_page_protect() near the end
  of vm_pageout_flush(). Instead, assert that the page is still write
  protected.
  Discussed with: tegge
* alc, 2004-02-19, 1 file, -4/+3:
  - Correct a long-standing race condition in vm_page_try_to_free() that
    could result in a dirty page being unintentionally freed.
  - Simplify the dirty page check in vm_page_dontneed().
  Reviewed by: tegge
  MFC after: 7 days
* des, 2004-02-16, 1 file, -2/+0:
  Back out previous commit due to objections.
* des, 2004-02-16, 1 file, -0/+2:
  Don't panic if we fail to satisfy an M_WAITOK request; return 0 instead.
  The calling code will either handle that gracefully or cause a page fault.
* alc, 2004-02-16, 1 file, -0/+2:
  Correct a long-standing race condition in vm_contig_launder() that could
  result in a panic "vm_page_cache: caching a dirty page, ...": Access to the
  page must be restricted or removed before calling vm_page_cache(). This
  race condition is identical in nature to that which was addressed by
  vm_pageout.c's revision 1.251 and vm_page.c's revision 1.275.
  MFC after: 7 days
* alc, 2004-02-15, 1 file, -3/+1:
  Correct a long-standing race condition in vm_fault() that could result in a
  panic "vm_page_cache: caching a dirty page, ...": Access to the page must
  be restricted or removed before calling vm_page_cache(). This race
  condition is identical in nature to that which was addressed by
  vm_pageout.c's revision 1.251 and vm_page.c's revision 1.275.
  Reviewed by: tegge
  MFC after: 7 days
* alc, 2004-02-14, 2 files, -4/+3:
  - Correct a long-standing race condition in vm_page_try_to_cache() that
    could result in a panic "vm_page_cache: caching a dirty page, ...":
    Access to the page must be restricted or removed before calling
    vm_page_cache(). This race condition is identical in nature to that
    which was addressed by vm_pageout.c's revision 1.251.
  - Simplify the code surrounding the fix to this same race condition in
    vm_pageout.c's revision 1.251. There should be no behavioral change.
  Reviewed by: tegge
  MFC after: 7 days
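
Several of the entries above fix the same pattern: a page's mappings must be
removed before the page is moved to the cache queue, so that a stale mapping
cannot dirty it between the cleanliness check and the move. A minimal sketch
of that ordering, assuming the pmap_remove_all() and vm_page_cache()
interfaces these files already use, with the page queues and object locking
elided:

    /* Strip every mapping first; any modified bits are folded into m->dirty
     * as the mappings are torn down, so the check below cannot be raced. */
    pmap_remove_all(m);
    if (m->dirty == 0)
            vm_page_cache(m);       /* safe: nothing can write to the page now */
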
* phk, 2004-02-12, 1 file, -2/+2:
  Remove the absolute count g_access_abs() function since experience has
  shown that it is not useful.
  Rename the relative count g_access_rel() function to g_access(); only the
  name has changed.
  Change all g_access_rel() calls in our CVS tree to call g_access() instead.
  Add an #ifndef BURN_BRIDGES #define of g_access_rel() for source code
  compatibility.
* alc, 2004-02-12, 1 file, -2/+2:
  Further reduce the use of Giant in vm_map_delete(): Perform pmap_remove()
  on system maps, besides the kmem_map, without Giant.
  In collaboration with: tegge
* alc, 2004-02-10, 1 file, -0/+15:
  Correct a long-standing race condition in the inactive queue scan. (See
  the added comment for low-level details.) The effect of this race
  condition is a panic "vm_page_cache: caching a dirty page, ..."
  Reviewed by: tegge
  MFC after: 7 days
* alc, 2004-02-07, 1 file, -3/+0:
  swp_pager_async_iodone() no longer requires Giant. Modify bufdone() and
  swapgeom_done() to perform swp_pager_async_iodone() without Giant.
  Reviewed by: tegge
* alc, 2004-02-05, 2 files, -5/+1:
  - Locking for the per-process resource limits structure has eliminated the
    need for Giant in vm_map_growstack().
  - Use the proc * that is passed to vm_map_growstack() rather than
    curthread->td_proc.
* jhb, 2004-02-04, 5 files, -32/+59:
  Locking for the per-process resource limits structure.
  - struct plimit includes a mutex to protect a reference count. The plimit
    structure is treated similarly to struct ucred in that it is always copy
    on write, so having a reference to a structure is sufficient to read from
    it without needing a further lock.
  - The proc lock protects the p_limit pointer and must be held while reading
    limits from a process to keep the limit structure from changing out from
    under you while reading from it.
  - Various global limits that are ints are not protected by a lock since int
    writes are atomic on all the archs we support and thus a lock wouldn't
    buy us anything.
  - All accesses to individual resource limits from a process are abstracted
    behind a simple lim_rlimit(), lim_max(), and lim_cur() API that return
    either an rlimit, or the current or max individual limit of the specified
    resource from a process.
  - dosetrlimit() was renamed to kern_setrlimit() to match the existing style
    of other similar syscall helper functions.
  - The alpha OSF/1 compat layer no longer calls getrlimit() and setrlimit()
    (it didn't use the stackgap when it should have) but uses lim_rlimit()
    and kern_setrlimit() instead.
  - The svr4 compat no longer uses the stackgap for resource limits calls,
    but uses lim_rlimit() and kern_setrlimit() instead.
  - The ibcs2 compat no longer uses the stackgap for resource limits. It also
    no longer uses the stackgap for accessing sysctls for the
    ibcs2_sysconf() syscall but uses kernel_sysctl() instead. As a result,
    ibcs2_sysconf() no longer needs Giant.
  - The p_rlimit macro no longer exists.
  Submitted by: mtm (mostly, I only did a few cleanups and catchups)
  Tested on: i386
  Compiled on: alpha, amd64
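
A minimal sketch of the read side of the API described above, assuming the
lim_cur() form and the proc-lock rule stated in the entry; the function name
is illustrative only:

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/resourcevar.h>

    /* Read a process's current data-size limit.  The proc lock pins the
     * p_limit pointer for the duration of the read; the copy-on-write
     * plimit structure itself needs no further locking for readers. */
    static rlim_t
    example_read_data_limit(struct proc *p)
    {
            rlim_t datalim;

            PROC_LOCK(p);
            datalim = lim_cur(p, RLIMIT_DATA);
            PROC_UNLOCK(p);
            return (datalim);
    }
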
* jhb, 2004-02-02, 1 file, -2/+2:
  Drop the reference count on the old vmspace after fully switching the
  current thread to the new vmspace.
  Suggested by: dillon
* phk, 2004-02-02, 1 file, -0/+11:
  Check error return from g_clone_bio(). (netchild@)
  Add XXX comment about why this is still not optimal. (phk@)
  Submitted by: netchild@
* jeff, 2004-02-02, 1 file, -10/+23:
  - Use a separate startup function for the zeroidle kthread. Use this to
    set P_NOLOAD prior to running the thread.
* jeff, 2004-02-01, 1 file, -8/+21:
  - Fix a problem where we did not drain the cache of buckets in the zone
    when uma_reclaim() was called. This was introduced when the zone
    working-set algorithm was removed in favor of using the per cpu caches
    as the working set.
* des, 2004-01-30, 1 file, -41/+41:
  Mechanical whitespace cleanup.
* bde, 2004-01-29, 1 file, -1/+1:
  Fixed breakage of scheduling in rev.1.29 of sched_4bsd.c. The "scheduler"
  here has very little to do with scheduling. It is actually the swapper,
  and it really must be the last SYSINIT'ed item like its comment says,
  since proc0 metamorphoses into the swapper by calling scheduler() last in
  mi_startup(), and scheduler() never returns.
  Rev.1.29 of sched_4bsd.c broke this by adding another SI_ORDER_FIRST item
  (kproc_start() for schedcpu_thread()) onto the SI_SUB_RUN_SCHEDULER list.
  The sorting of SYSINITs with identical orders (at all levels) is apparently
  nondeterministic, so this resulted in scheduler() sometimes being called
  second last and schedcpu_thread() not being called at all.
  This quick fix just changes the code to almost match the comment
  (SI_ORDER_FIRST -> SI_ORDER_ANY). "LAST" is misspelled "ANY", and there is
  no way to ensure that there is only 1 very last SYSINIT. A more complete
  fix would remove the SYSINIT obfuscation.
* jeff, 2004-01-25, 1 file, -2/+1:
  - Add a flags parameter to mi_switch. The value of flags may be SW_VOL or
    SW_INVOL. Assert that one of these is set in mi_switch() and properly
    adjust the rusage statistics. This is to simplify the large number of
    users of this interface which were previously all required to adjust the
    proper counter prior to calling mi_switch(). This also facilitates more
    switch and locking optimizations.
  - Change all callers of mi_switch() to pass the appropriate parameter and
    remove direct references to the process statistics.
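
A minimal sketch of what a caller looks like after this change, assuming the
single-flag form of mi_switch() described in the entry and that sched_lock is
held around the switch, as it was for callers of this era:

    /* Before: each caller bumped the context-switch counter itself, e.g.
     *     td->td_proc->p_stats->p_ru.ru_nvcsw++;
     *     mi_switch();
     * After: the rusage accounting moves inside mi_switch(), keyed off the flag. */
    mtx_lock_spin(&sched_lock);
    mi_switch(SW_VOL);              /* voluntary context switch */
    mtx_unlock_spin(&sched_lock);
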
* alc, 2004-01-24, 1 file, -2/+6:
  1. Statically initialize swap_pager_full and swap_pager_almost_full to the
     full state. (When swap is added their state will change appropriately.)
  2. Set swap_pager_full and swap_pager_almost_full to the full state when
     the last swap device is removed.
  Combined, these changes eliminate nonsense messages from the kernel on
  swap-less machines.
  Item 2 submitted by: Divacky Roman <xdivac02@stud.fit.vutbr.cz>
  Prodding by: phk
* alc, 2004-01-18, 1 file, -1/+1:
  Increase UMA_BOOT_PAGES because of changes to pv entry initialization in
  revision 1.457 of i386/i386/pmap.c.
* alc, 2004-01-18, 1 file, -8/+12:
  Don't acquire Giant in vm_object_deallocate() unless the object is
  vnode-backed.
* alc, 2004-01-14, 2 files, -16/+0:
  Remove vm_page_alloc_contig(). It's now unused.
* alc, 2004-01-11, 1 file, -1/+0:
  Remove long dead code, specifically, code related to munmapfd().
  (See also vm/vm_mmap.c revision 1.173.)
* alc, 2004-01-10, 1 file, -6/+2:
  - Unmanage pages allocated by contigmalloc1(). (There is no point in
    having PV entries for these pages.)
  - Remove splvm() and splx() calls.
* alc, 2004-01-10, 1 file, -0/+1:
  Unmanage pages allocated by kmem_alloc(). (There is no point in having PV
  entries for these pages.)
* alc, 2004-01-08, 2 files, -8/+13:
  - Enable recursive acquisition of the mutex synchronizing access to the
    free pages queue. This is presently needed by contigmalloc1().
  - Move a sanity check against attempted double allocation of two pages to
    the same vm object offset from vm_page_alloc() to vm_page_insert(). This
    provides better protection because double allocation could occur through
    a direct call to vm_page_insert(), such as that by vm_page_rename().
  - Modify contigmalloc1() to hold the mutex synchronizing access to the
    free pages queue while it scans vm_page_array in search of free pages.
  - Correct a potential leak of pages by contigmalloc1() that I introduced
    in revision 1.20: We must convert all cache queue pages to free pages
    before we begin removing free pages from the free queue. Otherwise, if
    we have to restart the scan because we are unable to acquire the vm
    object lock that is necessary to convert a cache queue page to a free
    page, we leak those free pages already removed from the free queue.
* alc, 2004-01-06, 2 files, -3/+0:
  Don't bother clearing PG_ZERO in contigmalloc1(), kmem_alloc(), or
  kmem_malloc(). It serves no purpose.
* alc, 2004-01-04, 3 files, -15/+16:
  Simplify the various pager allocation routines by computing the desired
  object size once and assigning that value to a local variable.
* alc, 2004-01-04, 1 file, -2/+0:
  Eliminate the acquisition and release of Giant from vnode_pager_alloc().
  The vm object and vnode locking should suffice.
  Discussed with: jeff
* alc, 2004-01-03, 1 file, -2/+2:
  Reduce the scope of Giant in swap_pager_alloc().
* alc, 2004-01-02, 1 file, -2/+0:
  Revision 1.74 of vm_meter.c ("Avoid lock-order reversal") makes the release
  and subsequent reacquisition of the same vm object lock in
  vm_object_collapse() unnecessary.
* alc, 2004-01-02, 1 file, -5/+15:
  Avoid lock-order reversal between the vm object list mutex and the vm
  object mutex.
* alc, 2004-01-01, 1 file, -2/+7:
  - Increase the scope of the kmem_object's lock in kmem_malloc(). Add a
    comment explaining why a further increase is not possible.
* alc, 2003-12-31, 1 file, -3/+5:
  In vm_page_lookup() check the root of the vm object's splay tree for the
  desired page before calling vm_page_splay().
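
A sketch of the shape of that fast path, assuming the object->root splay-tree
field and the vm_page_splay() helper referred to elsewhere in this log; the
function name is illustrative, not the committed code:

    /* Look up (object, pindex).  Most lookups hit the page that a previous
     * operation left at the root, so splay only when the root misses. */
    vm_page_t
    example_page_lookup(vm_object_t object, vm_pindex_t pindex)
    {
            vm_page_t m;

            m = object->root;
            if (m != NULL && m->pindex != pindex) {
                    m = vm_page_splay(pindex, m);
                    object->root = m;
                    if (m->pindex != pindex)
                            m = NULL;
            }
            return (m);
    }
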
* alc, 2003-12-31, 1 file, -18/+6:
  Simplify vm_page_grab(): Don't bother with the generation check. If the
  vm object hasn't changed, the desired page will be at or near the root of
  the vm object's splay tree, making vm_page_lookup() cheap. (The only lock
  required for vm_page_lookup() is already held.) If, however, the vm object
  has changed and retry was requested, eliminating the generation check also
  eliminates a pointless acquisition and release of the page queues lock.
* alc, 2003-12-30, 2 files, -17/+10:
  - Modify vm_object_split() to expect a locked vm object on entry and
    return with a locked vm object on exit. Remove GIANT_REQUIRED.
  - Eliminate some unnecessary local variables from vm_object_split().
* alc, 2003-12-29, 1 file, -24/+8:
  Remove swap_pager_un_object_list; it is unused.
* alc, 2003-12-28, 1 file, -2/+0:
  Remove GIANT_REQUIRED from kmem_suballoc().
* alc, 2003-12-26, 1 file, -14/+10:
  - Reduce Giant's scope in vm_fault().
  - Use vm_object_reference_locked() instead of vm_object_reference() in
    vm_fault().
* alc, 2003-12-26, 1 file, -2/+1:
  Minor correction to revision 1.258: Use the proc pointer that is passed to
  vm_map_growstack() in the RLIMIT_VMEM check rather than curthread.
* alc, 2003-12-22, 1 file, -0/+5:
  - Create an unmapped guard page to trap access to vm_page_array[-1]. This
    guard page would have trapped the problems with the MFC of the PAE
    support to RELENG_4 at an earlier point in the sequence of events.
  Submitted by: tegge
* alc, 2003-12-22, 1 file, -1/+1:
  - Significantly reduce the number of preallocated pv entries in
    pmap_init(). Such a large preallocation is unnecessary and wastes nearly
    eight megabytes of kernel virtual address space per gigabyte of managed
    physical memory.
  - Increase UMA_BOOT_PAGES by two. This enables the removal of
    pmap_pv_allocf(). (Note: this function was only used during
    initialization, specifically, after pmap_init() but before pmap_init2().
    During pmap_init2(), a new allocator is installed.)
* alc, 2003-12-21, 1 file, -1/+1:
  - Correct an error in mincore(2) that has existed since its introduction:
    mincore(2) should check that the page is valid, not just allocated.
    Otherwise, it can return a false positive for a page that is not yet
    resident because it is being read from disk.
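
A minimal sketch of the distinction this fix draws, assuming the
vm_page_lookup() interface and the page's valid field; the variable names are
illustrative only:

    /* A vm_page_t can exist in the object (allocated) while its contents are
     * still being read in from disk.  Only a page with valid contents should
     * be reported as resident. */
    m = vm_page_lookup(object, pindex);
    if (m != NULL && m->valid != 0)
            mincoreinfo |= MINCORE_INCORE;
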
* kan, 2003-12-08, 1 file, -7/+7:
  Remove trailing whitespace.
* alc, 2003-12-08, 1 file, -2/+6:
  Addendum to revision 1.174: In the case where vm_pager_allocate() is called
  to create a vnode-backed object, the vnode lock must be held by the caller.
  Reported by: truckman
  Discussed with: kan
* alc, 2003-12-06, 1 file, -4/+13:
  Fix a deadlock between vm_fault() and vm_mmap(): The expected lock ordering
  between vm_map and vnode locks is that vm_map locks are acquired first. In
  revision 1.150 mmap(2) was changed to pass a locked vnode into vm_mmap().
  This creates a lock-order reversal when vm_mmap() calls one of the vm_map
  routines that acquires a vm_map lock. The solution implemented herein is to
  release the vnode lock in mmap() before calling vm_mmap() and reacquire
  this lock if necessary in vm_mmap().
  Approved by: re (scottl)
  Reviewed by: jeff, kan, rwatson