path: root/sys/vm
...
* Add a needed #include.  (alc, 2003-01-01; 1 file, -0/+1)
  Reported by: ia64 tinderbox
* Implement a variant locking scheme for vm maps.  (alc, 2002-12-31; 2 files, -16/+39)
  Access to system maps is now synchronized by a mutex, whereas access to
  user maps is still synchronized by a lockmgr()-based lock. Why? No single
  type of lock, including sx locks, meets the requirements of both types of
  vm map. Sometimes we sleep while holding the lock on a user map; thus, a
  mutex isn't appropriate. On the other hand, both lockmgr()-based and sx
  locks release Giant when a thread/process blocks during contention for a
  lock. This could lead to a race condition in a legacy driver (that relies
  on Giant for synchronization) if it attempts to kmem_malloc() and fails
  to immediately obtain the lock. Fortunately, we never sleep while holding
  a system map lock.
* - Mark the kernel_map as a system map immediately after its creation.  (alc, 2002-12-30; 1 file, -2/+2)
  - Correct a cast.
* - Increment the vm_map's timestamp if _vm_map_trylock() succeeds.  (alc, 2002-12-30; 1 file, -10/+11)
  - Introduce map_sleep_mtx and use it to replace Giant in
    vm_map_unlock_and_wait() and vm_map_wakeup().
  (Original version by: tegge.)
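The first item fixes an invariant: every successful write-lock acquisition, including the try-lock path, must bump the map's timestamp so that code which dropped the lock can detect intervening changes. A minimal userland sketch of that rule (names here are hypothetical, not FreeBSD's):

```c
#include <assert.h>
#include <pthread.h>

/* Toy map whose timestamp is bumped each time the lock is taken. */
struct sketch_map {
    pthread_mutex_t lock;
    unsigned        timestamp;
};

/* Returns 1 on success (and bumps the timestamp), 0 if the lock is busy. */
int
sketch_map_trylock(struct sketch_map *map)
{
    if (pthread_mutex_trylock(&map->lock) != 0)
        return (0);
    map->timestamp++;   /* the fix: the try path must count, too */
    return (1);
}

void
sketch_map_unlock(struct sketch_map *map)
{
    pthread_mutex_unlock(&map->lock);
}
```

A caller that records the timestamp, drops the lock, and reacquires it can compare timestamps to learn whether the map may have changed in between.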
* - Remove vm_object_init2(). It is unused.  (alc, 2002-12-29; 3 files, -8/+3)
  - Add a mtx_destroy() to vm_object_collapse(). (This allows a bzero() to
    migrate from _vm_object_allocate() to vm_object_zinit(), where it will
    be performed less often.)
* Reduce the number of times that we acquire and release the page queues
  lock by making vm_page_rename()'s caller, rather than vm_page_rename(),
  responsible for acquiring it.  (alc, 2002-12-29; 3 files, -6/+2)
* Assert that the page queues lock rather than Giant is held in
  vm_page_flag_clear().  (alc, 2002-12-28; 1 file, -1/+2)
* vm_pager_put_pages() takes VM_PAGER_* flags, not OBJPC_* flags.  (dillon, 2002-12-28; 1 file, -1/+1)
  It just so happens that OBJPC_SYNC has the same value as VM_PAGER_PUT_SYNC
  so no harm done. But fix it :-) No operational changes.
  MFC after: 1 day
* Allow the VM object flushing code to cluster.  (dillon, 2002-12-28; 3 files, -7/+22)
  When the filesystem syncer comes along and flushes a file which has been
  mmap()'d SHARED/RW, with dirty pages, it was flushing the underlying VM
  object asynchronously, resulting in thousands of 8K writes. With this
  change the VM object flushing code will cluster dirty pages in 64K
  blocks. Note that until the low memory deadlock issue is reviewed, it is
  not safe to allow the pageout daemon to use this feature. Forced pageouts
  still use fs block size'd ops for the moment.
  MFC after: 3 days
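The win from clustering can be illustrated with a toy calculation (not FreeBSD's implementation; all names here are made up): instead of one write per dirty 8K page, gather runs of consecutive dirty pages and issue one I/O per run, capped at 64K.

```c
#include <assert.h>

#define SK_PAGE_SIZE   8192
#define SK_CLUSTER_MAX (65536 / SK_PAGE_SIZE)   /* 8 pages per 64K write */

/*
 * dirty[] marks which of npages pages are dirty.  Returns the number of
 * write operations a clustered flush would issue; an unclustered flush
 * issues one write per dirty page.
 */
int
count_clustered_writes(const int *dirty, int npages)
{
    int i = 0, writes = 0;

    while (i < npages) {
        if (!dirty[i]) {
            i++;
            continue;
        }
        /* Extend the run over consecutive dirty pages, up to 64K. */
        int run = 0;
        while (i < npages && dirty[i] && run < SK_CLUSTER_MAX) {
            run++;
            i++;
        }
        writes++;   /* one I/O covers the whole run */
    }
    return (writes);
}
```

For a fully dirty 128K mapping this is 2 writes instead of 16, which is where the "thousands of 8K writes" collapse into a handful of 64K ones.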
* Two changes to kmem_malloc():  (alc, 2002-12-28; 1 file, -6/+4)
  - Use VM_ALLOC_WIRED.
  - Perform vm_page_wakeup() after pmap_enter(), like we do everywhere else.
* - Change vm_object_page_collect_flush() to assert rather than acquire
    the page queues lock.  (alc, 2002-12-27; 1 file, -6/+5)
  - Acquire the page queues lock in vm_object_page_clean().
* Increase the scope of the page queues lock in phys_pager_getpages().  (alc, 2002-12-27; 1 file, -4/+7)
* - Hold the page queues lock around calls to vm_page_flag_clear().  (alc, 2002-12-24; 2 files, -0/+4)
* - Hold the page queues lock around vm_page_wakeup().  (alc, 2002-12-24; 3 files, -3/+10)
* - Hold the kernel_object's lock around vm_page_insert(..., kernel_object,
    ...).  (alc, 2002-12-23; 1 file, -0/+2)
* Eliminate some dead code.  (alc, 2002-12-23; 1 file, -4/+0)
  (Any possible use for this code died with vm/vm_page.c revision 1.220.)
  Submitted by: bde
* The UP -current was not properly counting the per-cpu VM stats in the
  sysctl code.  (dillon, 2002-12-22; 1 file, -0/+3)
  This makes 'systat -vm 1's syscall count work again.
  Submitted by: Michal Mertl <mime@traveller.cz>
  Note: also slated for 5.0
* Increase the scope of the kmem_object locking in kmem_malloc().  (alc, 2002-12-20; 1 file, -3/+5)
* Add a mutex to struct vm_object.  (alc, 2002-12-20; 2 files, -2/+11)
  Initialize and destroy that mutex at appropriate times. For the moment,
  the mutex is only used on the kmem_object.
* Remove the hash_rand field from struct vm_object.  (alc, 2002-12-19; 2 files, -13/+1)
  As of revision 1.215 of vm/vm_page.c, it is unused.
* - Remove vm_page_sleep_busy(). The transition to vm_page_sleep_if_busy(),
    which incorporates page queue and field locking, is complete.  (alc, 2002-12-19; 2 files, -35/+2)
  - Assert that the page queue lock rather than Giant is held in
    vm_page_flag_set().
* - Hold the page queues lock when performing vm_page_busy() or
    vm_page_flag_set().  (alc, 2002-12-19; 1 file, -1/+7)
  - Replace vm_page_sleep_busy() with proper page queues locking and
    vm_page_sleep_if_busy().
* - Hold the page queues lock when performing vm_page_busy().  (alc, 2002-12-18; 1 file, -2/+5)
  - Replace vm_page_sleep_busy() with proper page queues locking and
    vm_page_sleep_if_busy().
* Hold the page queues lock when performing vm_page_flag_set().  (alc, 2002-12-18; 2 files, -1/+9)
* Hold the page queues lock when performing vm_page_flag_set().  (alc, 2002-12-17; 2 files, -1/+3)
* Change the way ELF coredumps are handled.  (dillon, 2002-12-16; 1 file, -0/+1)
  Instead of unconditionally skipping read-only pages, which can result in
  valuable non-text-related data not getting dumped, the ELF loader and the
  dynamic loader now mark read-only text pages NOCORE and the coredump code
  only checks (primarily) for complete inaccessibility of the page or
  NOCORE being set. Certain applications which map large amounts of
  read-only data will produce much larger cores. A new sysctl has been
  added, debug.elf_legacy_coredump, which will revert to the old behavior.
  This commit represents collaborative work by all parties involved. The PR
  contains a program demonstrating the problem.
  PR: kern/45994
  Submitted by: "Peter Edwards" <pmedwards@eircom.net>, Archie Cobbs <archie@dellroad.org>
  Reviewed by: jdp, dillon
  MFC after: 7 days
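The new eligibility rule can be condensed into a small predicate. This is a hypothetical sketch of the logic described above, not the kernel's code; the constants and the function name are invented for illustration:

```c
#include <assert.h>

#define SK_PROT_READ   0x1
#define SK_PROT_WRITE  0x2
#define SK_FLAG_NOCORE 0x4   /* set by the ELF/dynamic loader on r/o text */

/*
 * Returns 1 if a page should be written to the core file.  Old behavior:
 * skip every read-only page.  New behavior: skip only pages that are
 * completely inaccessible or explicitly marked NOCORE.
 */
int
page_dumpable(int prot, int flags, int legacy_coredump)
{
    if (legacy_coredump)
        return ((prot & SK_PROT_WRITE) != 0);   /* old: read-only => skip */
    if (prot == 0)
        return (0);             /* completely inaccessible */
    if (flags & SK_FLAG_NOCORE)
        return (0);             /* read-only text marked by the loader */
    return (1);
}
```

The difference shows up on read-only data mappings: they are dumped under the new rule but were silently dropped under the old one.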
* Perform vm_object_lock() and vm_object_unlock() on kmem_object
  around vm_page_lookup() and vm_page_free().  (alc, 2002-12-15; 1 file, -0/+4)
* This is David Schultz's swapoff code which I am finally able to commit.  (dillon, 2002-12-15; 5 files, -8/+339)
  This should be considered highly experimental for the moment.
  Submitted by: David Schultz <dschultz@uclink.Berkeley.EDU>
  MFC after: 3 weeks
* Fix a refcount race with the vmspace structure.  (dillon, 2002-12-15; 2 files, -8/+19)
  In order to prevent resource starvation we clean up as much of the
  vmspace structure as we can when the last process using it exits. The
  rest of the structure is cleaned up when it is reaped. But since exit1()
  decrements the ref count, it is possible for a double-free to occur if
  someone else, such as the process swapout code, references and then
  dereferences the structure. Additionally, the final cleanup of the
  structure should not occur until the last process referencing it is
  reaped. This commit solves the problem by introducing a secondary
  reference count, called 'vm_exitingcnt'. The normal reference count is
  decremented on exit and vm_exitingcnt is incremented. vm_exitingcnt is
  decremented when the process is reaped. When both vm_exitingcnt and
  vm_refcnt are 0, the structure is freed for real.
  MFC after: 3 weeks
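The dual-counter scheme can be sketched in userland C. All names below (struct vmspace_sketch, vmspace_exit, vmspace_reap, ...) are illustrative stand-ins for the kernel's, and the "free" is modeled as setting a flag so the invariant can be observed:

```c
#include <assert.h>

struct vmspace_sketch {
    int vm_refcnt;      /* normal references */
    int vm_exitingcnt;  /* processes that exited but are not yet reaped */
    int freed;          /* set when the structure would be freed for real */
};

static void
maybe_free(struct vmspace_sketch *vm)
{
    /* Freed only when BOTH counts reach zero. */
    if (vm->vm_refcnt == 0 && vm->vm_exitingcnt == 0)
        vm->freed = 1;
}

/* Called from exit1(): the normal ref becomes an exiting ref. */
void
vmspace_exit(struct vmspace_sketch *vm)
{
    vm->vm_exitingcnt++;
    vm->vm_refcnt--;
    maybe_free(vm);
}

/* Called when the process is reaped: drop the exiting ref. */
void
vmspace_reap(struct vmspace_sketch *vm)
{
    vm->vm_exitingcnt--;
    maybe_free(vm);
}

/* Drop a normal reference (e.g. one taken by the swapout code). */
void
vmspace_release(struct vmspace_sketch *vm)
{
    vm->vm_refcnt--;
    maybe_free(vm);
}
```

Because exit converts its reference into an exiting reference instead of simply dropping it, a transient reference taken by the swapout code can no longer trigger the final free early, and the real free is deferred until reap time.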
* As per the comments, vm_object_page_remove() now expects its caller to
  lock the object (i.e., acquire Giant).  (alc, 2002-12-15; 1 file, -8/+2)
* Perform vm_object_lock() and vm_object_unlock() around
  vm_object_page_remove().  (alc, 2002-12-15; 1 file, -2/+8)
* Perform vm_object_lock() and vm_object_unlock() around
  vm_object_page_remove().  (alc, 2002-12-15; 1 file, -0/+2)
* Assert that the page queues lock is held in vm_page_unhold(),
  vm_page_remove(), and vm_page_free_toq().  (alc, 2002-12-15; 1 file, -2/+4)
* Hold the page queues lock when calling pmap_protect(); it updates fields
  of the vm_page structure.  (alc, 2002-12-01; 1 file, -7/+22)
  Make the style of the pmap_protect() calls consistent.
  Approved by: re (blanket)
* Hold the page queues lock when calling pmap_protect(); it updates fields
  of the vm_page structure.  (alc, 2002-12-01; 1 file, -3/+5)
  Nearby, remove an unnecessary semicolon and return statement.
  Approved by: re (blanket)
* Increase the scope of the page queue lock in vm_pageout_scan().  (alc, 2002-12-01; 1 file, -2/+2)
  Approved by: re (blanket)
* Lock page field accesses in mincore().  (alc, 2002-11-28; 1 file, -0/+2)
  Approved by: re (blanket)
* Hold the page queues lock when performing pmap_clear_modify().  (alc, 2002-11-27; 1 file, -0/+4)
  Approved by: re (blanket)
* Hold the page queues lock while performing pmap_page_protect().  (alc, 2002-11-27; 1 file, -2/+4)
  Approved by: re (blanket)
* Acquire and release the page queues lock around calls to pmap_protect()
  because it updates flags within the vm page.  (alc, 2002-11-25; 1 file, -0/+4)
  Approved by: re (blanket)
* Extend the scope of the page queues/fields locking in vm_freeze_copyopts()
  to cover pmap_remove_all().  (alc, 2002-11-24; 1 file, -1/+3)
  Approved by: re
* Hold the page queues/flags lock when calling vm_page_set_validclean().  (alc, 2002-11-23; 2 files, -1/+5)
  Approved by: re
* Assert that the page queues lock rather than Giant is held in
  vm_pageout_page_free().  (alc, 2002-11-23; 1 file, -2/+3)
  Approved by: re
* Add page queue and flag locking in vnode_pager_setsize().  (alc, 2002-11-23; 1 file, -0/+2)
  Approved by: re
* - Add an event that is triggered when the system is low on memory.  (jeff, 2002-11-21; 1 file, -1/+9)
    This is intended to be used by significant memory consumers so that
    they may drain some of their caches.
  Inspired by: phk
  Approved by: re
  Tested on: x86, alpha
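The consumer-notification pattern behind this event can be sketched as a plain callback list. This models the idea only; FreeBSD's real mechanism is EVENTHANDLER(9), and every name here (lowmem_subscribe, lowmem_fire, ...) is hypothetical:

```c
#include <assert.h>

#define MAX_HANDLERS 8

static void (*lowmem_handlers[MAX_HANDLERS])(void *);
static void  *lowmem_args[MAX_HANDLERS];
static int    lowmem_count;

/* A memory consumer registers a callback that drains its cache. */
void
lowmem_subscribe(void (*fn)(void *), void *arg)
{
    if (lowmem_count < MAX_HANDLERS) {
        lowmem_handlers[lowmem_count] = fn;
        lowmem_args[lowmem_count] = arg;
        lowmem_count++;
    }
}

/* Invoked by the pageout path when free pages run short. */
void
lowmem_fire(void)
{
    for (int i = 0; i < lowmem_count; i++)
        lowmem_handlers[i](lowmem_args[i]);
}

/* Example consumer: counts how many times it was asked to drain. */
static void
drain_counter(void *arg)
{
    (*(int *)arg)++;
}
```

The design choice is that the VM system stays ignorant of who the consumers are; each cache owner opts in, and the pageout path broadcasts once rather than knowing about every cache.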
* - Wakeup the correct address when a zone is no longer full.  (jeff, 2002-11-18; 1 file, -1/+1)
  Spotted by: jake
* Remove vm_page_protect(). Instead, use pmap_page_protect() directly.  (alc, 2002-11-18; 5 files, -30/+8)
* - Don't forget the flags value when using boot pages.  (jeff, 2002-11-16; 1 file, -0/+1)
  Reported by: grehan
* Now that pmap_remove_all() is exported by our pmap implementations,
  use it directly.  (alc, 2002-11-16; 5 files, -19/+19)
* Remove dead code that hasn't been needed since the demise of share maps
  in various revisions of vm/vm_map.c between 1.148 and 1.153.  (alc, 2002-11-13; 2 files, -26/+0)