path: root/sys/vm/vm_page.c
* alc, 2004-11-03 (1 file, -14/+5):
  The synchronization provided by vm object locking has eliminated the
  need for most calls to vm_page_busy(). Specifically, most calls to
  vm_page_busy() occur immediately prior to a call to vm_page_remove().
  In such cases, the containing vm object is locked across both calls.
  Consequently, the setting of the vm page's PG_BUSY flag is not even
  visible to other threads that are following the synchronization
  protocol. This change (1) eliminates the calls to vm_page_busy() that
  immediately precede a call to vm_page_remove() or functions, such as
  vm_page_free() and vm_page_rename(), that call it and (2) relaxes the
  requirement in vm_page_remove() that the vm page's PG_BUSY flag is set.
  Now, the vm page's PG_BUSY flag is set only when the vm object lock is
  released while the vm page is still in transition. Typically, this is
  when it is undergoing I/O.
* alc, 2004-10-28 (1 file, -0/+2):
  Assert that the containing vm object is locked in vm_page_cache() and
  vm_page_try_to_cache().
* alc, 2004-10-25 (1 file, -0/+2):
  Assert that the containing vm object is locked in vm_page_flash().
* alc, 2004-10-24 (1 file, -0/+4):
  Assert that the containing vm object is locked in vm_page_busy() and
  vm_page_wakeup().
* alc, 2004-10-24 (1 file, -2/+3):
  Introduce VM_ALLOC_NOBUSY, an option to vm_page_alloc() and
  vm_page_grab() that indicates that the caller does not want a page with
  its busy flag set. In many places, the global page queues lock is
  acquired and released just to clear the busy flag on a just allocated
  page. Both the allocation of the page and the clearing of the busy flag
  occur while the containing vm object is locked. So, the busy flag might
  as well never be set.
* alc, 2004-10-18 (1 file, -2/+1):
  Correct two errors in PG_BUSY management by vm_page_cowfault(). Both
  errors are in rarely executed paths.
  1. Each time the retry_alloc path is taken, PG_BUSY must be set again.
     Otherwise vm_page_remove() panics.
  2. There is no need to set PG_BUSY on the newly allocated page before
     freeing it. The page already has PG_BUSY set by vm_page_alloc().
     Setting it again could cause an assertion failure.
  MFC after: 2 weeks
* alc, 2004-10-17 (1 file, -0/+2):
  Assert that the containing object is locked in vm_page_io_start() and
  vm_page_io_finish(). The motivation is to transition synchronization of
  the vm_page's busy field from the global page queues lock to the
  per-object lock.
* phk, 2004-09-15 (1 file, -1/+1):
  Add a new function isa_dma_init() which returns an errno when it fails
  and which takes a M_WAITOK/M_NOWAIT flag argument. Add a compatibility
  isa_dmainit() macro which whines loudly if isa_dma_init() fails.
  Problem uncovered by: tegge
* alc, 2004-07-29 (1 file, -6/+3):
  Advance the state of pmap locking on alpha, amd64, and i386.
  - Enable recursion on the page queues lock. This allows calls to
    vm_page_alloc(VM_ALLOC_NORMAL) and UMA's obj_alloc() with the page
    queues lock held. Such calls are made to allocate page table pages
    and pv entries.
  - The previous change enables a partial reversion of vm/vm_page.c
    revision 1.216, i.e., the call to vm_page_alloc() by
    vm_page_cowfault() now specifies VM_ALLOC_NORMAL rather than
    VM_ALLOC_INTERRUPT.
  - Add partial locking to pmap_copy(). (As a side-effect, pmap_copy()
    should now be faster on i386 SMP because it no longer generates IPIs
    for TLB shootdown on the other processors.)
  - Complete the locking of pmap_enter() and pmap_enter_quick(). (As of
    now, all changes to a user-level pmap on alpha, amd64, and i386 are
    performed with appropriate locking.)
* green, 2004-07-21 (1 file, -4/+12):
  Fix a race in vm_page_sleep_if_busy(). Due to vm_object locking being
  incomplete, it currently has to know how to drop and pick back up the
  vm_object's mutex if it has to sleep and drop the page queue mutex.
  The problem with this is that if the page is busy, while we are
  sleeping, the page can be freed and the object can disappear. When
  trying to lock m->object, we'd get a stale or NULL pointer and crash.
  The object is now cached, but this makes the assumption that the object
  is referenced in some manner and will not itself disappear while it is
  unlocked. Since this only happens if the object is locked, I had to
  remove an assumption earlier in contigmalloc() that reversed the order
  of locking the object and doing vm_page_sleep_if_busy(), not the normal
  order.
* alc, 2004-07-19 (1 file, -2/+0):
  - Eliminate the pte object from the pmap. Instead, page table pages are
    allocated as "no object" pages. Similar changes were made to the
    amd64 and i386 pmap last year. The primary reason being that
    maintaining a pte object leads to lock order violations. A secondary
    reason being that the pte object is redundant, i.e., the page table
    itself can be used to lookup page table pages. (Historical note: The
    pte object predates our ability to allocate "no object" pages. Thus,
    the pte object was a necessary evil.)
  - Unconditionally check the vm object lock's status in
    vm_page_remove(). Previously, this assertion could not be made on
    Alpha due to its use of a pte object.
* alc, 2004-07-10 (1 file, -1/+1):
  Increase the scope of the page queues lock in vm_page_alloc() to cover
  a diagnostic check that accesses the cache queue count.
* alc, 2004-06-19 (1 file, -53/+8):
  Remove spl() calls. Update comments to reflect the removal of spl()
  calls. Remove '\n' from panic() format strings. Remove some blank
  lines.
* alc, 2004-06-17 (1 file, -0/+2):
  Do not preset PG_BUSY on VM_ALLOC_NOOBJ pages. Such pages are not
  accessible through an object. Thus, PG_BUSY serves no purpose.
* alc, 2004-05-22 (1 file, -0/+4):
  To date, unwiring a fictitious page has produced a panic. The reason
  being that PHYS_TO_VM_PAGE() returns the wrong vm_page for fictitious
  pages but unwiring uses PHYS_TO_VM_PAGE(). The resulting panic reported
  an unexpected wired count. Rather than attempting to fix
  PHYS_TO_VM_PAGE(), this fix takes advantage of the properties of
  fictitious pages. Specifically, fictitious pages will never be
  completely unwired. Therefore, we can keep a fictitious page's wired
  count forever set to one and thereby avoid the use of
  PHYS_TO_VM_PAGE() when we know that we're working with a fictitious
  page, just not which one.
  In collaboration with: green@, tegge@
  PR: kern/29915
* alc, 2004-05-12 (1 file, -10/+15):
  Restructure vm_page_select_cache() so that adding assertions is easy.
  Some of the conditions that caused vm_page_select_cache() to deactivate
  a page were wrong. For example, deactivating an unmanaged or wired page
  is a nop. Thus, if vm_page_select_cache() had ever encountered an
  unmanaged or wired page, it would have looped forever. Now, we assert
  that the page is neither unmanaged nor wired.
* alc, 2004-05-09 (1 file, -1/+0):
  Cache queue pages are not mapped. Thus, the pmap_remove_all() by
  vm_page_alloc() is unnecessary.
* alc, 2004-04-24 (1 file, -6/+5):
  Update the comment describing vm_page_grab() to reflect the previous
  revision and correct some of its style errors.
* alc, 2004-04-24 (1 file, -0/+2):
  Push down the responsibility for zeroing a physical page from the
  caller to vm_page_grab(). Although this gives VM_ALLOC_ZERO a different
  meaning for vm_page_grab() than for vm_page_alloc(), I feel such a
  change is necessary to accomplish other goals. Specifically, I want to
  make the PG_ZERO flag immutable between the time it is allocated by
  vm_page_alloc() and freed by vm_page_free() or vm_page_free_zero() to
  avoid locking overheads. Once we gave up on the ability to
  automatically recognize a zeroed page upon entry to vm_page_free(), the
  ability to mutate the PG_ZERO flag became useless. Instead, I would
  like to say that "Once a page becomes valid, its PG_ZERO flag must be
  ignored."
* imp, 2004-04-06 (1 file, -4/+0):
  Remove the advertising clause from the University of California
  Regents' license, per letter dated July 22, 1999.
  Approved by: core
* alc, 2004-04-04 (1 file, -1/+1):
  Eliminate unused arguments from vm_page_startup().
* alc, 2004-03-02 (1 file, -3/+2):
  Modify contigmalloc1() so that the free page queues lock is not held
  when vm_page_free() is called. The problem with holding this lock is
  that it is a spin lock and vm_page_free() may attempt the acquisition
  of a different default-type lock.
* alc, 2004-02-19 (1 file, -4/+3):
  - Correct a long-standing race condition in vm_page_try_to_free() that
    could result in a dirty page being unintentionally freed.
  - Simplify the dirty page check in vm_page_dontneed().
  Reviewed by: tegge
  MFC after: 7 days
* alc, 2004-02-14 (1 file, -1/+1):
  - Correct a long-standing race condition in vm_page_try_to_cache() that
    could result in a panic "vm_page_cache: caching a dirty page, ...":
    Access to the page must be restricted or removed before calling
    vm_page_cache(). This race condition is identical in nature to that
    which was addressed by vm_pageout.c's revision 1.251.
  - Simplify the code surrounding the fix to this same race condition in
    vm_pageout.c's revision 1.251. There should be no behavioral change.
  Reviewed by: tegge
  MFC after: 7 days
* alc, 2004-01-08 (1 file, -6/+7):
  - Enable recursive acquisition of the mutex synchronizing access to the
    free pages queue. This is presently needed by contigmalloc1().
  - Move a sanity check against attempted double allocation of two pages
    to the same vm object offset from vm_page_alloc() to
    vm_page_insert(). This provides better protection because double
    allocation could occur through a direct call to vm_page_insert(),
    such as that by vm_page_rename().
  - Modify contigmalloc1() to hold the mutex synchronizing access to the
    free pages queue while it scans vm_page_array in search of free
    pages.
  - Correct a potential leak of pages by contigmalloc1() that I
    introduced in revision 1.20: We must convert all cache queue pages to
    free pages before we begin removing free pages from the free queue.
    Otherwise, if we have to restart the scan because we are unable to
    acquire the vm object lock that is necessary to convert a cache queue
    page to a free page, we leak those free pages already removed from
    the free queue.
* alc, 2003-12-31 (1 file, -3/+5):
  In vm_page_lookup(), check the root of the vm object's splay tree for
  the desired page before calling vm_page_splay().
* alc, 2003-12-31 (1 file, -18/+6):
  Simplify vm_page_grab(): Don't bother with the generation check. If the
  vm object hasn't changed, the desired page will be at or near the root
  of the vm object's splay tree, making vm_page_lookup() cheap. (The only
  lock required for vm_page_lookup() is already held.) If, however, the
  vm object has changed and retry was requested, eliminating the
  generation check also eliminates a pointless acquisition and release of
  the page queues lock.
* alc, 2003-12-22 (1 file, -0/+5):
  - Create an unmapped guard page to trap access to vm_page_array[-1].
    This guard page would have trapped the problems with the MFC of the
    PAE support to RELENG_4 at an earlier point in the sequence of
    events.
  Submitted by: tegge
* alc, 2003-11-01 (1 file, -2/+1):
  - Additional vm object locking in vm_object_split().
  - New vm object locking assertions in vm_page_insert() and
    vm_object_set_writeable_dirty().
* alc, 2003-10-22 (1 file, -1/+1):
  - Retire vm_pageout_page_free(). Instead, use vm_page_select_cache()
    from vm_pageout_scan(). Rationale: I don't like leaving a busy page
    in the cache queue with neither the vm object nor the vm page queues
    lock held.
  - Assert that the page is active in vm_pageout_page_stats().
* alc, 2003-10-21 (1 file, -0/+1):
  - Assert that the containing vm object is locked in
    vm_page_set_validclean(). (This function reads and modifies the vm
    page's valid field, which is synchronized by the lock on the
    containing vm object.)
* alc, 2003-10-20 (1 file, -10/+0):
  - Remove some long unused code.
* alc, 2003-10-08 (1 file, -17/+3):
  Retire vm_page_copy(). Its reason for being ended when peter@ modified
  pmap_copy_page() et al. to accept a vm_page_t rather than a physical
  address. Also, this change will facilitate locking access to the vm
  page's valid field.
* alc, 2003-10-05 (1 file, -0/+1):
  Assert that the containing vm object's lock is held in
  vm_page_set_invalid().
* alc, 2003-10-04 (1 file, -0/+1):
  Assert that the containing vm object's lock is held in
  vm_page_zero_invalid().
* alc, 2003-10-04 (1 file, -0/+1):
  - Extend the scope of the vm object lock to cover calls to
    vm_page_is_valid().
  - Assert that the lock on the containing vm object is held in
    vm_page_is_valid().
* alc, 2003-09-28 (1 file, -2/+3):
  In vm_page_remove(), assert that the vm object is locked, unless on
  Alpha. (The Alpha still requires updates to its pmap.)
* alc, 2003-09-22 (1 file, -0/+2):
  Initialize the page's pindex field even for VM_ALLOC_NOOBJ allocations.
  (This field is useful for implementing sanity checks even if the page
  does not belong to an object.)
* alc, 2003-08-28 (1 file, -2/+1):
  Recent pmap changes permit the use of a more precise locking assertion
  in vm_page_lookup().
* alc, 2003-08-23 (1 file, -1/+2):
  Held pages, just like wired pages, should not be added to the cache
  queues.
  Submitted by: tegge
* alc, 2003-08-23 (1 file, -2/+3):
  Hold the page queues lock when performing vm_page_clear_dirty() and
  vm_page_set_invalid().
* alc, 2003-08-21 (1 file, -14/+6):
  Assert that the vm object's lock is held on entry to vm_page_grab();
  remove code from this function that was needed when vm object locking
  was incomplete.
* alc, 2003-08-20 (1 file, -0/+1):
  Assert that the vm object lock is held in vm_page_alloc().
* alc, 2003-07-01 (1 file, -2/+4):
  Modify vm_page_alloc() and vm_page_select_cache() to allow the page
  that is returned by vm_page_select_cache() to belong to the object that
  is already locked by the caller to vm_page_alloc().
* alc, 2003-06-28 (1 file, -24/+6):
  - Use an int rather than a vm_pindex_t to represent the desired page
    color in vm_page_alloc(). (This also has small performance benefits.)
  - Eliminate vm_page_select_free(); vm_page_alloc() might as well call
    vm_pageq_find() directly.
* alc, 2003-06-26 (1 file, -1/+6):
  vm_page_select_cache() enforces a number of conditions on the returned
  page. Add the ability to lock the containing object to those
  conditions.
* alc, 2003-06-22 (1 file, -0/+2):
  Maintain a lock on the vm object of interest throughout vm_fault(),
  releasing the lock only if we are about to sleep (e.g., in
  vm_pager_get_pages() or vm_pager_has_pages()). If we sleep, we have
  marked the vm object with the paging-in-progress flag.
* alc, 2003-06-19 (1 file, -0/+2):
  Assert that the vm object is locked in vm_page_try_to_free().
* obrien, 2003-06-11 (1 file, -1/+3):
  Use __FBSDID().
* alc, 2003-06-07 (1 file, -4/+16):
  Teach vm_page_grab() how to handle the vm object's lock.