path: root/sys/vm/vm_fault.c
Commit log for sys/vm/vm_fault.c (each entry: author, date, files changed, lines -/+)
* jeff, 2005-01-24 (1 file, -2/+6)
  - Remove GIANT_REQUIRED where Giant is no longer required.
  - Use VFS_LOCK_GIANT() rather than directly acquiring Giant in places
    where Giant is only held because VFS requires it.  (See the sketch
    below.)
  Sponsored by: Isilon Systems, Inc.
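For context, a minimal sketch of the VFS_LOCK_GIANT() idiom this entry refers to; the enclosing code and the vnode pointer `vp` are hypothetical, but the lock/unlock pairing follows the FreeBSD convention of that era:

    /* Hypothetical caller; vp is a vnode pointer in scope. */
    int vfslocked;

    vfslocked = VFS_LOCK_GIANT(vp->v_mount); /* takes Giant only if the fs needs it */
    /* ... operate on the vnode ... */
    VFS_UNLOCK_GIANT(vfslocked);             /* drops Giant iff it was taken */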
* imp, 2005-01-07 (1 file, -1/+1)
  /* -> /*- for license; minor formatting changes.
* alc, 2004-12-24 (1 file, -11/+29)
  Continue the transition from synchronizing access to the page's PG_BUSY
  flag and busy field with the global page queues lock to synchronizing
  their access with the containing object's lock.  Specifically, acquire
  the containing object's lock before reading the page's PG_BUSY flag and
  busy field in vm_fault().  (See the sketch below.)
  Reviewed by: tegge@
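A hedged sketch of the discipline this entry describes, using the VM_OBJECT_LOCK() and vm_page_sleep_if_busy() interfaces of that period; the fault-handler shape and the retry label are illustrative, not the actual vm_fault() code:

    VM_OBJECT_LOCK(fs.object);
    m = vm_page_lookup(fs.object, fs.pindex);
    if (m != NULL && ((m->flags & PG_BUSY) || m->busy != 0)) {
            /*
             * PG_BUSY and the busy field are now read under the
             * containing object's lock rather than the global page
             * queues lock; wait for the page and retry the fault.
             */
            vm_page_sleep_if_busy(m, TRUE, "vmfbsy");
            goto RetryFault;
    }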
* alc, 2004-12-23 (1 file, -4/+3)
  Modify pmap_enter_quick() so that it expects the page queues to be
  locked on entry, and it assumes the responsibility for releasing the
  page queues lock if it must sleep.  Remove a bogus comment from
  pmap_enter_quick().  Using the first change, modify vm_map_pmap_enter()
  so that the page queues lock is acquired and released once, rather than
  each time a page is mapped.
* alc, 2004-12-15 (1 file, -8/+2)
  In the common case, pmap_enter_quick() completes without sleeping.  In
  such cases, the busying of the page and the unlocking of the containing
  object by vm_map_pmap_enter() and vm_fault_prefault() is unnecessary
  overhead.  To eliminate this overhead, this change modifies
  pmap_enter_quick() so that it expects the object to be locked on entry,
  and it assumes the responsibility for busying the page and unlocking
  the object if it must sleep.  Note: alpha, amd64, i386, and ia64 are
  the only implementations optimized by this change; arm, powerpc, and
  sparc64 still conservatively busy the page and unlock the object within
  every pmap_enter_quick() call.  Additionally, this change is the first
  case in which access to the page's PG_BUSY flag and busy field is
  synchronized by the containing object's lock rather than the global
  page queues lock.  (Modifications to the page's PG_BUSY flag and busy
  field have asserted both locks for several weeks, enabling an
  incremental transition.)  A caller-side sketch follows this entry.
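A caller-side sketch combining this entry's contract (object locked on entry) with the 2004-12-23 entry above (page queues locked on entry).  The function name and loop structure are hypothetical, modeled loosely on vm_map_pmap_enter(), and sleep handling is simplified away:

    static void
    enter_resident_pages(pmap_t pmap, vm_offset_t addr, vm_object_t object,
        vm_pindex_t pindex, vm_pindex_t npages)
    {
            vm_page_t m, mpte;
            vm_pindex_t i;

            mpte = NULL;
            vm_page_lock_queues();
            VM_OBJECT_LOCK(object);
            for (i = 0; i < npages; i++) {
                    m = vm_page_lookup(object, pindex + i);
                    if (m == NULL)
                            continue;
                    /*
                     * pmap_enter_quick() busies the page, and releases
                     * the locks it owns, only if it must sleep.
                     */
                    mpte = pmap_enter_quick(pmap, addr + ptoa(i), m, mpte);
            }
            VM_OBJECT_UNLOCK(object);
            vm_page_unlock_queues();
    }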
* alc, 2004-10-17 (1 file, -1/+1)
  Remove an unnecessary check for curthread == NULL.
* alc, 2004-09-11 (1 file, -8/+8)
  System maps are prohibited from mapping vnode-backed objects.  Take
  advantage of this restriction to avoid acquiring and releasing Giant
  when wiring pages within a system map.
  In collaboration with: tegge@
* alc, 2004-09-03 (1 file, -0/+4)
  Push Giant deep into vm_forkproc(), acquiring it only if the process
  has mapped System V shared memory segments (see shmfork_myhook()) or
  requires the allocation of an ldt (see vm_fault_wire()).
* alc, 2004-09-01 (1 file, -4/+0)
  In vm_fault_unwire(), eliminate the acquisition and release of Giant in
  the case of non-kernel pmaps.
* alc, 2004-08-22 (1 file, -1/+2)
  In the previous revision, I failed to condition an early release of
  Giant in vm_fault() on debug_mpsafevm.  If debug_mpsafevm was not set,
  the result was an assertion failure early in the boot process.
  Reported by: green@
* alc, 2004-08-21 (1 file, -4/+3)
  Further reduce the use of Giant by vm_fault(): Giant is held only when
  manipulating a vnode, e.g., calling vput().  This reduces contention
  for Giant during many copy-on-write faults, resulting in some
  additional speedup on SMPs.  Note: debug_mpsafevm must be enabled for
  this optimization to take effect.
* alc, 2004-08-16 (1 file, -7/+11)
  - Introduce and use a new tunable, "debug.mpsafevm".  At present,
    setting "debug.mpsafevm" results in (almost) Giant-free execution of
    zero-fill page faults.  (Giant is held only briefly, just long enough
    to determine if there is a vnode backing the faulting address.)
    Also, condition the acquisition and release of Giant around calls to
    pmap_remove() on "debug.mpsafevm".  The effect on performance is
    significant: on my dual Opteron, I see a 3.6% reduction in
    "buildworld" time.
  - Use atomic operations to update several counters in vm_fault().
  (A sketch of both pieces follows this entry.)
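A hedged sketch of the two pieces: a loader tunable exposed as a read-only sysctl, and a Giant-free counter update.  The exact sysctl declaration and the `cnt` vmmeter field are assumptions based on the kernel interfaces of that period:

    static int debug_mpsafevm = 0;
    TUNABLE_INT("debug.mpsafevm", &debug_mpsafevm);
    SYSCTL_INT(_debug, OID_AUTO, mpsafevm, CTLFLAG_RD, &debug_mpsafevm, 0,
        "Enable (almost) Giant-free virtual memory handling");

    /* In vm_fault(): bump statistics without Giant serializing them. */
    atomic_add_int(&cnt.v_vm_faults, 1);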
* tegge, 2004-08-12 (1 file, -51/+37)
  The vm map lock is needed in vm_fault() after the page has been found,
  to avoid later changes before pmap_enter() and vm_fault_prefault() have
  completed.  Simplify deadlock avoidance by not blocking on vm map
  relookup.
  In collaboration with: alc
* alc, 2004-08-09 (1 file, -16/+7)
  Make two changes to vm_fault():
  1. Move a comment to its proper place, updating it.  (Except for
     whitespace, this comment had been unchanged since revision 1.1!)
  2. Remove spl calls.
* alc, 2004-08-09 (1 file, -11/+6)
  Make two changes to vm_fault():
  1. Retain the map lock until after the calls to pmap_enter() and
     vm_fault_prefault().
  2. Remove a stale comment.
  Submitted by: tegge@
* alc, 2004-05-22 (1 file, -11/+10)
  To date, unwiring a fictitious page has produced a panic.  The reason
  is that PHYS_TO_VM_PAGE() returns the wrong vm_page for fictitious
  pages, but unwiring uses PHYS_TO_VM_PAGE().  The resulting panic
  reported an unexpected wired count.  Rather than attempting to fix
  PHYS_TO_VM_PAGE(), this fix takes advantage of the properties of
  fictitious pages: specifically, fictitious pages will never be
  completely unwired.  Therefore, we can keep a fictitious page's wired
  count forever set to one and thereby avoid the use of
  PHYS_TO_VM_PAGE() when we know that we're working with a fictitious
  page, just not which one.  (A sketch follows this entry.)
  In collaboration with: green@, tegge@
  PR: kern/29915
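A hedged sketch of the resulting rule on the unwire side; the function is hypothetical and only illustrates why PHYS_TO_VM_PAGE() is no longer needed for fictitious pages:

    /*
     * Hypothetical unwire path: a fictitious page's wire_count stays
     * pinned at one, so we never have to map a physical address back
     * to its vm_page (which PHYS_TO_VM_PAGE() gets wrong for
     * fictitious pages).
     */
    static void
    unwire_page(vm_page_t m)
    {
            if (m->flags & PG_FICTITIOUS)
                    return;                 /* wired count stays at one */
            vm_page_lock_queues();
            vm_page_unwire(m, 1);           /* 1: reactivate the page */
            vm_page_unlock_queues();
    }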
* alc, 2004-05-06 (1 file, -1/+0)
  Make vm_page's PG_ZERO flag immutable between the time of the page's
  allocation and deallocation.  This flag's principal use is shortly
  after allocation.  For such cases, clearing the flag is pointless.  The
  only unusual use of PG_ZERO is in vfs_bio_clrbuf().  However,
  allocbuf() never requests a prezeroed page, so vfs_bio_clrbuf() never
  sees a prezeroed page.
  Reviewed by: tegge@
* alc, 2004-03-10 (1 file, -2/+4)
  - Make the acquisition of Giant in vm_fault_unwire() conditional on the
    pmap.  For the kernel pmap, Giant is not required.  In general, for
    other pmaps, Giant is required by i386's pmap_pte() implementation;
    specifically, the use of PMAP2/PADDR2 is synchronized by Giant.
    Note: in principle, updates to the kernel pmap's wired count could be
    lost without Giant.  However, in practice, we never use the kernel
    pmap's wired count.  This will be resolved when pmap locking appears.
    (A sketch of the conditional acquisition follows this entry.)
  - With the above change, cpu_thread_clean() and uma_large_free() need
    not acquire Giant.  (The first case is simply the revival of
    i386/i386/vm_machdep.c's revision 1.226 by peter.)
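A minimal sketch of the conditional acquisition described in the first item; pmap_change_wiring() stands in for the per-page unwire work:

    /* Only user pmaps need Giant (i386 pmap_pte()'s PMAP2/PADDR2). */
    if (pmap != kernel_pmap)
            mtx_lock(&Giant);
    for (va = start; va < end; va += PAGE_SIZE)
            pmap_change_wiring(pmap, va, FALSE);
    if (pmap != kernel_pmap)
            mtx_unlock(&Giant);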
* alc, 2004-02-15 (1 file, -3/+1)
  Correct a long-standing race condition in vm_fault() that could result
  in a panic "vm_page_cache: caching a dirty page, ...": access to the
  page must be restricted or removed before calling vm_page_cache().
  This race condition is identical in nature to the one addressed by
  vm_pageout.c's revision 1.251 and vm_page.c's revision 1.275.
  Reviewed by: tegge
  MFC after: 7 days
* alc, 2004-02-05 (1 file, -2/+0)
  - Locking for the per-process resource limits structure has eliminated
    the need for Giant in vm_map_growstack().
  - Use the proc * that is passed to vm_map_growstack() rather than
    curthread->td_proc.
* alc, 2003-12-26 (1 file, -14/+10)
  - Reduce Giant's scope in vm_fault().
  - Use vm_object_reference_locked() instead of vm_object_reference() in
    vm_fault().
* mini, 2003-11-10 (1 file, -1/+1)
  NFC: Update stale comments.
  Reviewed by: alc
* alc, 2003-10-15 (1 file, -5/+19)
  - vm_fault_copy_entry() should not assume that the source object
    contains every page.  If the source entry was read-only, one or more
    wired pages could be in backing objects.
  - vm_fault_copy_entry() should not set the PG_WRITEABLE flag on the
    page unless the destination entry is, in fact, writeable.
* alc, 2003-10-08 (1 file, -2/+7)
  Lock the destination object in vm_fault_copy_entry().
* alc, 2003-10-08 (1 file, -2/+4)
  Retire vm_page_copy().  Its reason for being ended when peter@ modified
  pmap_copy_page() et al. to accept a vm_page_t rather than a physical
  address.  Also, this change will facilitate locking access to the vm
  page's valid field.
* alc, 2003-10-04 (1 file, -4/+8)
  Synchronize access to a vm page's valid field using the containing vm
  object's lock.
* alc, 2003-10-03 (1 file, -1/+91)
  Migrate pmap_prefault() into the machine-independent virtual memory
  layer.  A small helper function, pmap_is_prefaultable(), is added.
  This function encapsulates the few lines of pmap_prefault() that
  actually vary from machine to machine.  Note: pmap_is_prefaultable()
  and pmap_mincore() have much in common; going forward, their merger is
  worth considering.  (A sketch of the helper's use follows this entry.)
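A sketch of how the machine-independent prefault loop can use the new helper; the resident/valid checks are illustrative, and the helper's semantics are paraphrased from the commit:

    /*
     * pmap_is_prefaultable(pmap, addr) answers the one MD question:
     * could a mapping be entered at addr cheaply, without a fault?
     */
    if (pmap_is_prefaultable(pmap, addr)) {
            m = vm_page_lookup(lobject, pindex);
            if (m != NULL &&
                (m->valid & VM_PAGE_BITS_ALL) == VM_PAGE_BITS_ALL)
                    pmap_enter_quick(pmap, addr, m, NULL);
    }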
* alc, 2003-09-18 (1 file, -1/+1)
  Add vm object locking to vnode_pager_lock().  (This triggers the
  movement of a VM_OBJECT_LOCK() in vm_fault().)
* alc, 2003-08-23 (1 file, -8/+10)
  To implement the sequential access optimization, vm_fault() may need to
  reacquire the "first" object's lock while a backing object's lock is
  held.  Since this is a lock-order reversal, vm_fault() uses trylock to
  acquire the first object's lock, skipping the sequential access
  optimization in the unlikely event that the trylock fails.  (A sketch
  follows this entry.)
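A hedged sketch of the trylock pattern; the field updates are elided since only the locking shape matters here:

    /*
     * fs.object's lock is held; locking fs.first_object here would
     * invert the usual order, so only try for it.
     */
    if (fs.object != fs.first_object) {
            if (VM_OBJECT_TRYLOCK(fs.first_object)) {
                    /* ... update the sequential-access (read-ahead)
                     * state on the first object ... */
                    VM_OBJECT_UNLOCK(fs.first_object);
            }
            /* On trylock failure, skip the optimization this time. */
    }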
* alc, 2003-06-22 (1 file, -10/+9)
  Maintain a lock on the vm object of interest throughout vm_fault(),
  releasing the lock only if we are about to sleep (e.g., in
  vm_pager_get_pages() or vm_pager_has_pages()).  If we sleep, we have
  marked the vm object with the paging-in-progress flag.
* alc, 2003-06-22 (1 file, -8/+8)
  As vm_fault() descends the chain of backing objects, set
  paging-in-progress on the next object before clearing it on the current
  object.
* alc, 2003-06-22 (1 file, -10/+5)
  Make some style and white-space changes to the copy-on-write path
  through vm_fault(); remove a pointless assignment statement from that
  path.
* alc, 2003-06-21 (1 file, -2/+5)
  Lock one of the vm objects involved in an optimized copy-on-write
  fault.
* The so-called "optimized copy-on-write fault" case should not requirealc2003-06-201-9/+2
| | | | | | | the vm map lock. What's really needed is vm object locking, which is (for the moment) provided Giant. Reviewed by: tegge
* alc, 2003-06-19 (1 file, -1/+1)
  Fix a vm object reference leak in the page-based copy-on-write
  mechanism used by the zero-copy sockets implementation.
  Reviewed by: gallatin
* obrien, 2003-06-11 (1 file, -2/+4)
  Use __FBSDID().
* jhb, 2003-04-22 (1 file, -3/+4)
  Prefer the proc lock to sched_lock when testing PS_INMEM now that it is
  safe to do so.  (A sketch follows this entry.)
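A sketch of the preferred test, assuming the p_sflag/PS_INMEM names of that era:

    int inmem;

    PROC_LOCK(p);                   /* the proc lock now suffices here */
    inmem = (p->p_sflag & PS_INMEM) != 0;
    PROC_UNLOCK(p);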
* alc, 2003-04-20 (1 file, -0/+10)
  - Lock the vm_object when performing vm_object_pip_wakeup().
* alc, 2003-04-20 (1 file, -0/+4)
  - Lock the vm_object when performing vm_object_pip_add().
* jake, 2003-03-25 (1 file, -2/+3)
  - Add vm_paddr_t, a physical address type.  This is required for
    systems where physical addresses are larger than virtual addresses,
    such as i386s with PAE.
  - Use this to represent physical addresses in the MI vm system and in
    the i386 pmap code.  This also changes the paddr parameter to
    d_mmap_t.
  - Fix printf formats to handle physical addresses >4G in the i386
    memory detection code, and due to kvtop returning vm_paddr_t instead
    of u_long.
  Note that this is a name change only; vm_paddr_t is still the same as
  vm_offset_t on all currently supported platforms.  (A sketch follows
  this entry.)
  Sponsored by: DARPA, Network Associates Laboratories
  Discussed with: re, phk (cdevsw change)
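A sketch of the idea behind vm_paddr_t and the printf fix; the PAE typedef mirrors the i386 arrangement described above, and casting through uintmax_t keeps the format portable:

    #ifdef PAE
    typedef uint64_t    vm_paddr_t;     /* physical wider than virtual */
    #else
    typedef vm_offset_t vm_paddr_t;     /* same width as before */
    #endif

    /* Print a physical address even when it exceeds 4 GB: */
    vm_paddr_t pa = kvtop(addr);
    printf("physical address %#jx\n", (uintmax_t)pa);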
* ken, 2003-03-08 (1 file, -2/+9)
  Zero copy send and receive fixes:
  - On receive, vm_map_lookup() needs to trigger the creation of a shadow
    object.  To make that happen, call vm_map_lookup() with PROT_WRITE
    instead of PROT_READ in vm_pgmoveco().
  - On send, a shadow object will be created by the vm_map_lookup() in
    vm_fault(), but vm_page_cowfault() will delete the original page from
    the backing object rather than simply letting the legacy COW
    mechanism take over.  In other words, the new page should be added to
    the shadow object rather than replacing the old page in the backing
    object (i.e., vm_page_cowfault() should not be called in this case).
    We accomplish this by making sure fs.object == fs.first_object before
    calling vm_page_cowfault() in vm_fault().
  Submitted by: gallatin, alc
  Tested by: ken
* alc, 2003-03-06 (1 file, -7/+0)
  Remove ENABLE_VFS_IOOPT.  It is a long-unfinished work-in-progress.
  Discussed on: arch@
* dillon, 2003-01-16 (1 file, -0/+18)
  Merge all the various copies of vm_fault_quick() into a single portable
  copy.  (A sketch of the helper's shape follows this entry.)
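The merged helper is small; this sketch mirrors the long-standing per-platform pattern of touching the byte to force a fault (fubyte()/subyte() are the user-address access primitives), though the committed version may differ in detail:

    int
    vm_fault_quick(caddr_t v, int prot)
    {
            int r;

            if (prot & VM_PROT_WRITE)
                    r = subyte(v, fubyte(v));   /* read then write back: forces a write fault */
            else
                    r = fubyte(v);              /* forces a read fault */
            return (r);
    }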
* alc, 2003-01-12 (1 file, -3/+0)
  vm_fault_copy_entry() needn't clear PG_ZERO because it didn't pass
  VM_ALLOC_ZERO to vm_page_alloc().
* alc, 2002-12-29 (1 file, -2/+0)
  Reduce the number of times that we acquire and release the page queues
  lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
  responsible for acquiring it.
* alc, 2002-12-24 (1 file, -0/+2)
  - Hold the page queues lock around calls to vm_page_flag_clear().
* alc, 2002-12-19 (1 file, -1/+7)
  - Hold the page queues lock when performing vm_page_busy() or
    vm_page_flag_set().
  - Replace vm_page_sleep_busy() with proper page queues locking and
    vm_page_sleep_if_busy().
* alc, 2002-11-16 (1 file, -2/+2)
  Now that pmap_remove_all() is exported by our pmap implementations, use
  it directly.
* alc, 2002-11-10 (1 file, -2/+2)
  When prot is VM_PROT_NONE, call pmap_page_protect() directly rather
  than indirectly through vm_page_protect().  The one remaining page flag
  that is updated by vm_page_protect() is already being updated by our
  various pmap implementations.  Note: a later commit will similarly
  change the VM_PROT_READ case and eliminate vm_page_protect().
* alc, 2002-10-19 (1 file, -2/+2)
  Complete the page queues locking needed for the page-based
  copy-on-write (COW) mechanism.  (This mechanism is used by the
  zero-copy TCP/IP implementation.)
  - Extend the scope of the page queues lock in vm_fault() to cover
    vm_page_cowfault().
  - Modify vm_page_cowfault() to release the page queues lock if it
    sleeps.
  (A sketch of the extended lock scope follows this entry.)
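A hedged sketch of the extended lock scope; the vm_page `cow` field and the may-release-on-sleep behavior are taken from the commit text, and the surrounding control flow is simplified:

    vm_page_lock_queues();
    if (fs.m->cow > 0 && (fault_type & VM_PROT_WRITE) != 0) {
            /*
             * vm_page_cowfault() now runs under the page queues lock;
             * it releases the lock itself if it must sleep.
             */
            vm_page_cowfault(fs.m);
    }
    vm_page_unlock_queues();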