path: root/sys/vm/vm_fault.c
Commit log, most recent first. Each entry ends with [author, date, files changed, -lines/+lines].
* Make pmap_enter() responsible for setting PG_WRITEABLE instead of its
  caller. (As a beneficial side-effect, a high-contention acquisition of
  the page queues lock in vm_fault() is eliminated.)
  [alc, 2006-11-12, 1 file, -7/+2]
* The page queues lock is no longer required by vm_page_wakeup().
  [alc, 2006-10-23, 1 file, -3/+3]
* Replace PG_BUSY with VPO_BUSY. In other words, changes to the page's
  busy flag, i.e., VPO_BUSY, are now synchronized by the per-vm object
  lock instead of the global page queues lock.
  [alc, 2006-10-22, 1 file, -7/+8]
* Eliminate unnecessary PG_BUSY tests. They originally served a purpose
  that is now handled by vm object locking.
  [alc, 2006-10-21, 1 file, -1/+1]
* Reimplement the page's NOSYNC flag as an object-synchronized instead
  of a page queues-synchronized flag. Reduce the scope of the page
  queues lock in vm_fault() accordingly. Move vm_fault()'s call to
  vm_object_set_writeable_dirty() outside of the scope of the page
  queues lock.
  Reviewed by: tegge
  Additionally, eliminate an unnecessary dereference in computing the
  argument that is passed to vm_object_set_writeable_dirty().
  [alc, 2006-08-13, 1 file, -5/+5]
* Eliminate the acquisition and release of the page queues lock around
  a call to vm_page_sleep_if_busy().
  [alc, 2006-08-06, 1 file, -4/+2]
* Retire debug.mpsafevm. None of the architectures supported in CVS
  require it any longer.
  [alc, 2006-07-21, 1 file, -7/+1]
* Remove mpte optimization from pmap_enter_quick(). There is a race
  with the current locking scheme and removing it should have no
  measurable performance impact. This fixes page faults leading to
  panics in pmap_enter_quick_locked() on amd64/i386.
  Reviewed by: alc, jhb, peter, ps
  [ups, 2006-06-15, 1 file, -4/+2]
* Simplify the implementation of vm_fault_additional_pages() based upon
  the object's memq being ordered. Specifically, replace repeated calls
  to vm_page_lookup() by two simple constant-time operations.
  Reviewed by: tegge
  [alc, 2006-05-13, 1 file, -12/+5]
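  A hedged sketch of the constant-time check described above, assuming
  the era's vm_page layout (the "pglist" head and "listq" linkage); the
  committed code may differ in detail:

      vm_page_t prev, next;

      /*
       * The object's memq is kept sorted by pindex, so the nearest
       * resident neighbors of "m" are its list neighbors; no
       * per-index vm_page_lookup() is needed.
       */
      prev = TAILQ_PREV(m, pglist, listq);
      if (prev != NULL && prev->pindex + rbehind >= m->pindex)
              rbehind = m->pindex - prev->pindex - 1; /* clip read-behind */
      next = TAILQ_NEXT(m, listq);
      if (next != NULL && next->pindex <= m->pindex + rahead)
              rahead = next->pindex - m->pindex - 1;  /* clip read-ahead */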
* Remove leading __ from __(inline|const|signed|volatile). They are
  obsolete. This should reduce diffs to NetBSD as well.
  [imp, 2006-03-08, 1 file, -2/+2]
* Adjust old comment (present in rev 1.1) to match changes in rev 1.82.
  PR: kern/92509
  Submitted by: "Bryan Venteicher" <bryanv@daemoninthecloset.org>
  [tegge, 2006-02-02, 1 file, -1/+1]
* Use the new macros abstracting the page coloring/queues
  implementation. (There are no functional changes.)
  [alc, 2006-01-27, 1 file, -2/+2]
* MI changes:
  - provide an interface (macros) to the page coloring part of the VM
    system, this allows to try different coloring algorithms without
    the need to touch every file [1]
  - make the page queue tuning values readable: sysctl
    vm.stats.pagequeue
  - autotuning of the page coloring values based upon the cache size
    instead of options in the kernel config (disabling of the page
    coloring as a kernel option is still possible)
  MD changes:
  - detection of the cache size: only IA32 and AMD64 (untested)
    contains cache size detection code, every other arch just comes
    with a dummy function (this results in the use of default values
    like it was the case without the autotuning of the page coloring)
  - print some more info on Intel CPU's (like we do on AMD and
    Transmeta CPU's)
  Note to AMD owners (IA32 and AMD64): please run "sysctl
  vm.stats.pagequeue" and report if the cache* values are zero (= bug
  in the cache detection code) or not.
  Based upon work by: Chad David <davidc@acns.ab.ca> [1]
  Reviewed by: alc, arch (in 2004)
  Discussed with: alc, Chad David, arch (in 2004)
  [netchild, 2005-12-31, 1 file, -2/+3]
* Don't access fs->first_object after dropping reference to it. The
  result could be a missed or extra giant unlock.
  Reviewed by: alc
  [tegge, 2005-12-20, 1 file, -1/+3]
* Remove unneeded calls to pmap_remove_all(). The given page is not
  mapped.
  Reviewed by: tegge
  [alc, 2005-12-11, 1 file, -1/+0]
* Eliminate an incorrect cast.
  [alc, 2005-09-07, 1 file, -1/+1]
* Pass a value of type vm_prot_t to pmap_enter_quick() so that it
  determines whether the mapping should permit execute access.
  [alc, 2005-09-03, 1 file, -1/+2]
* Convert a remaining !fs.map->system_map test to an
  fs.first_object->flags & OBJ_NEEDGIANT test that was missed in an
  earlier revision. This fixes mutex assertion failures in the
  debug.mpsafevm=0 case.
  Reported by: ps
  MFC after: 3 days
  [jhb, 2005-07-14, 1 file, -1/+1]
* The final test in unlock_and_deallocate() to determine if GIANT needs
  to be unlocked wasn't updated to check for OBJ_NEEDGIANT. This caused
  a WITNESS panic when debug_mpsafevm was set to 0.
  Approved by: jeffr
  [grehan, 2005-05-12, 1 file, -1/+1]
* - Add a new object flag "OBJ_NEEDSGIANT". We set this flag if the
    underlying vnode requires Giant.
  - In vm_fault only acquire Giant if the underlying object has
    NEEDSGIANT set.
  - In vm_object_shadow inherit the NEEDSGIANT flag from the backing
    object.
  [jeff, 2005-05-03, 1 file, -4/+9]
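  A minimal sketch of the pattern this entry describes (illustrative,
  not the committed diff; later entries in this log spell the flag
  OBJ_NEEDGIANT):

      if (fs.first_object->flags & OBJ_NEEDGIANT)
              mtx_lock(&Giant);       /* backing vnode's fs needs Giant */
      /* ... handle the fault ... */
      if (fs.first_object->flags & OBJ_NEEDGIANT)
              mtx_unlock(&Giant);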
* - Remove GIANT_REQUIRED where giant is no longer required.
  - Use VFS_LOCK_GIANT() rather than directly acquiring giant in places
    where giant is only held because vfs requires it.
  Sponsored by: Isilon Systems, Inc.
  [jeff, 2005-01-24, 1 file, -2/+6]
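  The VFS_LOCK_GIANT() idiom referred to above works roughly like this
  (sketch; vp is a vnode reference held by the caller):

      int vfslocked;

      vfslocked = VFS_LOCK_GIANT(vp->v_mount); /* Giant iff fs not MPSAFE */
      vput(vp);                                /* VFS call that may need it */
      VFS_UNLOCK_GIANT(vfslocked);             /* drop Giant iff taken */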
* /* -> /*- for license, minor formatting changes
  [imp, 2005-01-07, 1 file, -1/+1]
* Continue the transition from synchronizing access to the page's
  PG_BUSY flag and busy field with the global page queues lock to
  synchronizing their access with the containing object's lock.
  Specifically, acquire the containing object's lock before reading the
  page's PG_BUSY flag and busy field in vm_fault().
  Reviewed by: tegge@
  [alc, 2004-12-24, 1 file, -11/+29]
* Modify pmap_enter_quick() so that it expects the page queues to be
  locked on entry and it assumes the responsibility for releasing the
  page queues lock if it must sleep. Remove a bogus comment from
  pmap_enter_quick(). Using the first change, modify
  vm_map_pmap_enter() so that the page queues lock is acquired and
  released once, rather than each time that a page is mapped.
  [alc, 2004-12-23, 1 file, -4/+3]
* In the common case, pmap_enter_quick() completes without sleeping. In
  such cases, the busying of the page and the unlocking of the
  containing object by vm_map_pmap_enter() and vm_fault_prefault() is
  unnecessary overhead. To eliminate this overhead, this change
  modifies pmap_enter_quick() so that it expects the object to be
  locked on entry and it assumes the responsibility for busying the
  page and unlocking the object if it must sleep.
  Note: alpha, amd64, i386 and ia64 are the only implementations
  optimized by this change; arm, powerpc, and sparc64 still
  conservatively busy the page and unlock the object within every
  pmap_enter_quick() call.
  Additionally, this change is the first case where we synchronize
  access to the page's PG_BUSY flag and busy field using the containing
  object's lock rather than the global page queues lock. (Modifications
  to the page's PG_BUSY flag and busy field have asserted both locks
  for several weeks, enabling an incremental transition.)
  [alc, 2004-12-15, 1 file, -8/+2]
* Remove unnecessary check for curthread == NULL.
  [alc, 2004-10-17, 1 file, -1/+1]
* System maps are prohibited from mapping vnode-backed objects. Take
  advantage of this restriction to avoid acquiring and releasing Giant
  when wiring pages within a system map.
  In collaboration with: tegge@
  [alc, 2004-09-11, 1 file, -8/+8]
* Push Giant deep into vm_forkproc(), acquiring it only if the process
  has mapped System V shared memory segments (see shmfork_myhook()) or
  requires the allocation of an ldt (see vm_fault_wire()).
  [alc, 2004-09-03, 1 file, -0/+4]
* In vm_fault_unwire() eliminate the acquisition and release of Giant
  in the case of non-kernel pmaps.
  [alc, 2004-09-01, 1 file, -4/+0]
* In the previous revision, I failed to condition an early release of
  Giant in vm_fault() on debug_mpsafevm. If debug_mpsafevm was not set,
  the result was an assertion failure early in the boot process.
  Reported by: green@
  [alc, 2004-08-22, 1 file, -1/+2]
* Further reduce the use of Giant by vm_fault(): Giant is held only
  when manipulating a vnode, e.g., calling vput(). This reduces
  contention for Giant during many copy-on-write faults, resulting in
  some additional speedup on SMPs.
  Note: debug_mpsafevm must be enabled for this optimization to take
  effect.
  [alc, 2004-08-21, 1 file, -4/+3]
* - Introduce and use a new tunable "debug.mpsafevm". At present,
    setting "debug.mpsafevm" results in (almost) Giant-free execution
    of zero-fill page faults. (Giant is held only briefly, just long
    enough to determine if there is a vnode backing the faulting
    address.) Also, condition the acquisition and release of Giant
    around calls to pmap_remove() on "debug.mpsafevm".
    The effect on performance is significant. On my dual Opteron, I see
    a 3.6% reduction in "buildworld" time.
  - Use atomic operations to update several counters in vm_fault().
  [alc, 2004-08-16, 1 file, -7/+11]
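  A hedged sketch of the tunable and the atomic counter updates this
  entry describes (the exact sysctl wiring and description string are
  assumptions, not taken from the diff):

      static int debug_mpsafevm;
      TUNABLE_INT("debug.mpsafevm", &debug_mpsafevm);
      SYSCTL_INT(_debug, OID_AUTO, mpsafevm, CTLFLAG_RD,
          &debug_mpsafevm, 0, "Enable Giant-free zero-fill page faults");

      /* In vm_fault(), counters are no longer serialized by Giant: */
      atomic_add_int(&cnt.v_vm_faults, 1);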
* The vm map lock is needed in vm_fault() after the page has been
  found, to avoid later changes before pmap_enter() and
  vm_fault_prefault() have completed. Simplify deadlock avoidance by
  not blocking on vm map relookup.
  In collaboration with: alc
  [tegge, 2004-08-12, 1 file, -51/+37]
* Make two changes to vm_fault().
  1. Move a comment to its proper place, updating it. (Except for
     white-space, this comment had been unchanged since revision 1.1!)
  2. Remove spl calls.
  [alc, 2004-08-09, 1 file, -16/+7]
* Make two changes to vm_fault().
  1. Retain the map lock until after the calls to pmap_enter() and
     vm_fault_prefault().
  2. Remove a stale comment.
  Submitted by: tegge@
  [alc, 2004-08-09, 1 file, -11/+6]
* To date, unwiring a fictitious page has produced a panic. The reason
  being that PHYS_TO_VM_PAGE() returns the wrong vm_page for fictitious
  pages but unwiring uses PHYS_TO_VM_PAGE(). The resulting panic
  reported an unexpected wired count. Rather than attempting to fix
  PHYS_TO_VM_PAGE(), this fix takes advantage of the properties of
  fictitious pages. Specifically, fictitious pages will never be
  completely unwired. Therefore, we can keep a fictitious page's wired
  count forever set to one and thereby avoid the use of
  PHYS_TO_VM_PAGE() when we know that we're working with a fictitious
  page, just not which one.
  In collaboration with: green@, tegge@
  PR: kern/29915
  [alc, 2004-05-22, 1 file, -11/+10]
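  A sketch of the vm_fault_unwire() logic this implies; "fictitious" is
  assumed here to be derived from the mapping's object (e.g., an
  OBJT_DEVICE object), and the locking follows the conventions of the
  time:

      vm_paddr_t pa;

      pa = pmap_extract(pmap, va);
      if (pa != 0) {
              pmap_change_wiring(pmap, va, FALSE);
              if (!fictitious) {
                      /*
                       * Safe only for real pages: PHYS_TO_VM_PAGE()
                       * returns the wrong vm_page for fictitious ones,
                       * whose wire_count instead stays pinned at one.
                       */
                      vm_page_lock_queues();
                      vm_page_unwire(PHYS_TO_VM_PAGE(pa), 1);
                      vm_page_unlock_queues();
              }
      }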
* Make vm_page's PG_ZERO flag immutable between the time of the page's
  allocation and deallocation. This flag's principal use is shortly
  after allocation. For such cases, clearing the flag is pointless. The
  only unusual use of PG_ZERO is in vfs_bio_clrbuf(). However,
  allocbuf() never requests a prezeroed page. So, vfs_bio_clrbuf()
  never sees a prezeroed page.
  Reviewed by: tegge@
  [alc, 2004-05-06, 1 file, -1/+0]
* - Make the acquisition of Giant in vm_fault_unwire() conditional on
    the pmap. For the kernel pmap, Giant is not required. In general,
    for other pmaps, Giant is required by i386's pmap_pte()
    implementation. Specifically, the use of PMAP2/PADDR2 is
    synchronized by Giant.
    Note: In principle, updates to the kernel pmap's wired count could
    be lost without Giant. However, in practice, we never use the
    kernel pmap's wired count. This will be resolved when pmap locking
    appears.
  - With the above change, cpu_thread_clean() and uma_large_free() need
    not acquire Giant. (The first case is simply the revival of
    i386/i386/vm_machdep.c's revision 1.226 by peter.)
  [alc, 2004-03-10, 1 file, -2/+4]
* Correct a long-standing race condition in vm_fault() that could
  result in a panic "vm_page_cache: caching a dirty page, ...": Access
  to the page must be restricted or removed before calling
  vm_page_cache(). This race condition is identical in nature to that
  which was addressed by vm_pageout.c's revision 1.251 and vm_page.c's
  revision 1.275.
  Reviewed by: tegge
  MFC after: 7 days
  [alc, 2004-02-15, 1 file, -3/+1]
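  In sketch form, the ordering the fix enforces (assuming the page
  queues lock conventions of the time; the committed diff may differ):

      vm_page_lock_queues();
      pmap_remove_all(m);     /* revoke every mapping first... */
      vm_page_cache(m);       /* ...so no CPU can redirty m after the
                                 cleanliness check inside vm_page_cache() */
      vm_page_unlock_queues();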
* - Locking for the per-process resource limits structure has
    eliminated the need for Giant in vm_map_growstack().
  - Use the proc * that is passed to vm_map_growstack() rather than
    curthread->td_proc.
  [alc, 2004-02-05, 1 file, -2/+0]
* - Reduce Giant's scope in vm_fault().
  - Use vm_object_reference_locked() instead of vm_object_reference()
    in vm_fault().
  [alc, 2003-12-26, 1 file, -14/+10]
* NFC: Update stale comments.
  Reviewed by: alc
  [mini, 2003-11-10, 1 file, -1/+1]
* - vm_fault_copy_entry() should not assume that the source object
    contains every page. If the source entry was read-only, one or more
    wired pages could be in backing objects.
  - vm_fault_copy_entry() should not set the PG_WRITEABLE flag on the
    page unless the destination entry is, in fact, writeable.
  [alc, 2003-10-15, 1 file, -5/+19]
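  A minimal sketch of the lookup the first point implies (field names
  from the era's vm_object; simplified, with locking omitted):

      object = src_object;
      pindex = src_pindex + i;
      while ((m = vm_page_lookup(object, pindex)) == NULL &&
          object->backing_object != NULL) {
              /*
               * A read-only entry may leave its wired pages in a
               * backing object; follow the shadow chain.
               */
              pindex += OFF_TO_IDX(object->backing_object_offset);
              object = object->backing_object;
      }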
* Lock the destination object in vm_fault_copy_entry().
  [alc, 2003-10-08, 1 file, -2/+7]
* Retire vm_page_copy(). Its reason for being ended when peter@
  modified pmap_copy_page() et al. to accept a vm_page_t rather than a
  physical address. Also, this change will facilitate locking access to
  the vm page's valid field.
  [alc, 2003-10-08, 1 file, -2/+4]
* Synchronize access to a vm page's valid field using the containing
  vm object's lock.
  [alc, 2003-10-04, 1 file, -4/+8]
* Migrate pmap_prefault() into the machine-independent virtual memory
  layer. A small helper function pmap_is_prefaultable() is added. This
  function encapsulates the few lines of pmap_prefault() that actually
  vary from machine to machine.
  Note: pmap_is_prefaultable() and pmap_mincore() have much in common.
  Going forward, it's worth considering their merger.
  [alc, 2003-10-03, 1 file, -1/+91]
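  The MD helper's shape, plus a greatly simplified view of the MI loop
  that calls it (the real vm_fault_prefault() walks a precomputed page
  order rather than a linear range):

      /* MD layer: can a mapping at addr be created cheaply right now? */
      boolean_t pmap_is_prefaultable(pmap_t pmap, vm_offset_t addr);

      /* MI layer, simplified: */
      for (addr = addra - PFBAK * PAGE_SIZE;
          addr < addra + PFFOR * PAGE_SIZE; addr += PAGE_SIZE) {
              if (addr < entry->start || addr >= entry->end ||
                  !pmap_is_prefaultable(pmap, addr))
                      continue;
              /* look up the resident page, if any, and map it cheaply */
      }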
* Add vm object locking to vnode_pager_lock(). (This triggers the
  movement of a VM_OBJECT_LOCK() in vm_fault().)
  [alc, 2003-09-18, 1 file, -1/+1]
* To implement the sequential access optimization, vm_fault() may need
  to reacquire the "first" object's lock while a backing object's lock
  is held. Since this is a lock-order reversal, vm_fault() uses trylock
  to acquire the first object's lock, skipping the sequential access
  optimization in the unlikely event that the trylock fails.
  [alc, 2003-08-23, 1 file, -8/+10]
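  In sketch form (VM_OBJECT_TRYLOCK is assumed to be the era's trylock
  wrapper for the object mutex):

      if (fs.object != fs.first_object) {
              if (VM_OBJECT_TRYLOCK(fs.first_object)) {
                      /* ... sequential-access bookkeeping ... */
                      VM_OBJECT_UNLOCK(fs.first_object);
              }
              /*
               * On trylock failure, skip the optimization: blocking
               * on first_object while a backing object's lock is
               * held would reverse the established lock order.
               */
      }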
* Maintain a lock on the vm object of interest throughout vm_fault(),
  releasing the lock only if we are about to sleep (e.g.,
  vm_pager_get_pages() or vm_pager_has_pages()). If we sleep, we have
  marked the vm object with the paging-in-progress flag.
  [alc, 2003-06-22, 1 file, -10/+9]