path: root/sys/vm/vm_map.c
Commit message (author, date, files changed, lines -/+)
* Back out M_* changes, per decision of the TRB. (imp, 2003-02-19, 1 file, -3/+3)
  Approved by: trb
* Remove the acquisition and release of Giant around pmap_growkernel(). (alc, 2003-02-15, 1 file, -2/+0)
  It's unnecessary for two reasons: (1) Giant is at present already held in
  such cases and (2) our various implementations of pmap_growkernel() look
  to be MP safe. (For example, for sparc64 the proof of (2) is trivial.)
* Add MTX_DUPOK to the initialization of system map locks. (alc, 2003-01-25, 1 file, -2/+2)
* Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0. (alfred, 2003-01-21, 1 file, -3/+3)
  Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
* Close the remaining user address mapping races for physical I/O, CAM, and AIO. (dillon, 2003-01-20, 1 file, -3/+8)
  Still TODO: streamline useracc() checks.
  Reviewed by: alc, tegge
  MFC after: 7 days
* It is possible for an active aio to prevent shared memory from being dereferenced when a process exits, due to the vmspace ref-count being bumped. (dillon, 2003-01-13, 1 file, -0/+8)
  Change shmexit() and shmexit_myhook() to take a vmspace instead of a
  process and call it in vmspace_dofree(). This way, if it is missed in
  exit1()'s early resource free, it will still be caught when the zombie
  is reaped.
  Also fix a potential race in shmexit_myhook() by NULLing out
  vmspace->vm_shm prior to calling shm_delete_mapping() and free().
  MFC after: 7 days
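  A minimal sketch of the NULL-before-teardown pattern described above,
  using invented stand-ins (vmspace_sketch, shm_delete_mapping_sketch)
  rather than the real kernel definitions:

      #include <stdlib.h>

      /* Illustrative stand-ins; not the real kernel structures. */
      struct shm_state;
      struct vmspace_sketch {
          struct shm_state *vm_shm;
      };

      void shm_delete_mapping_sketch(struct vmspace_sketch *,
          struct shm_state *);

      static void
      shmexit_sketch(struct vmspace_sketch *vm)
      {
          struct shm_state *shm = vm->vm_shm;

          if (shm == NULL)
              return;
          vm->vm_shm = NULL;  /* detach first: a racing caller sees NULL */
          shm_delete_mapping_sketch(vm, shm);
          free(shm);
      }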
* Lock the vm object when performing vm_object_clear_flag(). (alc, 2003-01-03, 1 file, -0/+4)
* Implement a variant locking scheme for vm maps: access to system maps is now synchronized by a mutex, whereas access to user maps is still synchronized by a lockmgr()-based lock. (alc, 2002-12-31, 1 file, -16/+38)
  Why? No single type of lock, including sx locks, meets the requirements
  of both types of vm map. Sometimes we sleep while holding the lock on a
  user map, so a mutex isn't appropriate. On the other hand, both
  lockmgr()-based and sx locks release Giant when a thread/process blocks
  during contention for a lock. This could lead to a race condition in a
  legacy driver (one that relies on Giant for synchronization) if it
  attempts kmem_malloc() and fails to immediately obtain the lock.
  Fortunately, we never sleep while holding a system map lock.
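  A minimal userland sketch of the dual-scheme idea; the field names are
  invented, and a pthread mutex and rwlock stand in for the kernel mutex
  and the lockmgr()-based lock:

      #include <pthread.h>

      /* Hypothetical shape of a map with two possible lock types. */
      struct map_sketch {
          int system_map;               /* nonzero: system map */
          pthread_mutex_t system_mtx;   /* stand-in for a kernel mutex */
          pthread_rwlock_t lk;          /* stand-in for a lockmgr() lock */
      };

      /*
       * One entry point, two primitives: system maps take the mutex
       * (holders never sleep); user maps take the sleepable lock
       * (holders may sleep).
       */
      static void
      map_lock_sketch(struct map_sketch *map)
      {
          if (map->system_map)
              pthread_mutex_lock(&map->system_mtx);
          else
              pthread_rwlock_wrlock(&map->lk);
      }

      static void
      map_unlock_sketch(struct map_sketch *map)
      {
          if (map->system_map)
              pthread_mutex_unlock(&map->system_mtx);
          else
              pthread_rwlock_unlock(&map->lk);
      }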
* - Increment the vm_map's timestamp if _vm_map_trylock() succeeds. (alc, 2002-12-30, 1 file, -10/+11)
  - Introduce map_sleep_mtx and use it to replace Giant in
    vm_map_unlock_and_wait() and vm_map_wakeup(). (Original version by:
    tegge.)
* - Remove vm_object_init2(). It is unused. (alc, 2002-12-29, 1 file, -1/+0)
  - Add a mtx_destroy() to vm_object_collapse(). (This allows a bzero()
    to migrate from _vm_object_allocate() to vm_object_zinit(), where it
    will be performed less often.)
* Fix a refcount race with the vmspace structure. (dillon, 2002-12-15, 1 file, -6/+17)
  In order to prevent resource starvation, we clean up as much of the
  vmspace structure as we can when the last process using it exits. The
  rest of the structure is cleaned up when it is reaped. But since exit1()
  decrements the ref count, it is possible for a double-free to occur if
  someone else, such as the process swapout code, references and then
  dereferences the structure. Additionally, the final cleanup of the
  structure should not occur until the last process referencing it is
  reaped.
  This commit solves the problem by introducing a secondary reference
  count, called 'vm_exitingcnt'. The normal reference count is decremented
  on exit and vm_exitingcnt is incremented. vm_exitingcnt is decremented
  when the process is reaped. When both vm_exitingcnt and vm_refcnt are 0,
  the structure is freed for real.
  MFC after: 3 weeks
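  A sketch of the two-counter lifetime rule; the field names follow the
  commit text, but the bodies are illustrative, not the actual kernel
  code:

      #include <stdlib.h>

      /* Illustrative stand-in for the vmspace structure. */
      struct vmspace_counts {
          int vm_refcnt;      /* normal references */
          int vm_exitingcnt;  /* exited-but-unreaped processes */
      };

      /* exit1() path: trade a normal reference for an exiting one. */
      static void
      vmspace_exit_sketch(struct vmspace_counts *vm)
      {
          vm->vm_exitingcnt++;
          if (--vm->vm_refcnt == 0) {
              /* partial cleanup only; vm_exitingcnt still pins it */
          }
      }

      /* reap path: drop the exiting reference; free when both are 0. */
      static void
      vmspace_reap_sketch(struct vmspace_counts *vm)
      {
          if (--vm->vm_exitingcnt == 0 && vm->vm_refcnt == 0)
              free(vm);   /* freed for real, exactly once */
      }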
* Perform vm_object_lock() and vm_object_unlock() around vm_object_page_remove(). (alc, 2002-12-15, 1 file, -2/+8)
* Hold the page queues lock when calling pmap_protect(); it updates fields of the vm_page structure. (alc, 2002-12-01, 1 file, -7/+22)
  Make the style of the pmap_protect() calls consistent.
  Approved by: re (blanket)
* Acquire and release the page queues lock around calls to pmap_protect(), because it updates flags within the vm page. (alc, 2002-11-25, 1 file, -0/+4)
  Approved by: re (blanket)
* Fix an error case in vm_map_wire(): unwiring of an entry during cleanup after a user wire error fails when the entry is already system wired. (alc, 2002-11-09, 1 file, -2/+2)
  Reported by: tegge
* Correctly print vm_offset_t types. (mux, 2002-11-07, 1 file, -6/+5)
* Properly put macro args in (). (phk, 2002-10-16, 1 file, -2/+2)
  Spotted by: FlexeLint.
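  The class of bug this guards against, shown with a hypothetical macro:
  an argument expanded without its own parentheses lets operator
  precedence change the result:

      #define PAGE_SIZE 4096

      /* Unsafe: the argument expands unparenthesized. */
      #define BYTES_BAD(n)   (n * PAGE_SIZE)
      /* Safe: every use of the argument is wrapped in (). */
      #define BYTES_GOOD(n)  ((n) * PAGE_SIZE)

      /*
       * BYTES_BAD(2 + 1)  expands to (2 + 1 * 4096)   == 4098   (wrong)
       * BYTES_GOOD(2 + 1) expands to ((2 + 1) * 4096) == 12288  (intended)
       */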
* Modify vm_map_clean() (and thus the msync(2) system call) to support invalidation of cached pages for objects of type OBJT_DEVICE. (mdodd, 2002-09-22, 1 file, -7/+10)
  Submitted by: Christian Zander <zander@minion.de>
  Approved by: alc
* Use the fields in the sysentvec and in the vm map header in place of the constants VM_MIN_ADDRESS, VM_MAXUSER_ADDRESS, USRSTACK and PS_STRINGS. (jake, 2002-09-21, 1 file, -4/+3)
  This is mainly so that they can be variable even for the native ABI,
  based on different machine types. Get stack protections from the
  sysentvec too. This makes it trivial to map the stack non-executable
  for certain ABIs, on machines that support it.
* o Use vm_object_lock() in place of Giant when manipulating a vm object in vm_map_insert(). (alc, 2002-08-24, 1 file, -2/+2)
* o Merge vm_fault_wire() and vm_fault_user_wire() by adding a new parameter, user_wire. (alc, 2002-07-24, 1 file, -5/+2)
* Infrastructure tweaks to allow having both an Elf32 and an Elf64 executable handler in the kernel at the same time. (peter, 2002-07-20, 1 file, -3/+2)
  Also, allow for the exec_new_vmspace() code to build a different sized
  vmspace depending on the executable environment. This is a big help for
  execing i386 binaries on ia64. The ELF exec code grows the ability to
  map partial pages when there is a page size difference, e.g. emulating
  4K pages on 8K or 16K hardware pages.
  Flesh out the i386 emulation support for ia64. At this point, the only
  binary that I know of that fails is cvsup, because the cvsup runtime
  tries to execute code in pages not marked executable.
  Obtained from: dfr (mostly, many tweaks from me).
* (VM_MAX_KERNEL_ADDRESS - KERNBASE) / PAGE_SIZE may not fit in an integer. (peter, 2002-07-18, 1 file, -1/+1)
  Use lmin(long, long), not min(u_int, u_int). This is a problem here on
  ia64, which has *way* more than 2^32 pages of KVA: 281474976710655
  pages, to be precise.
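  A runnable demonstration of the truncation hazard: a u_int min()
  silently drops the high bits of a 64-bit page count, while a long
  lmin() preserves them (the page count matches the commit's example;
  the clamp value is arbitrary):

      #include <stdio.h>

      static unsigned int
      umin(unsigned int a, unsigned int b) { return (a < b ? a : b); }

      static long
      lmin(long a, long b) { return (a < b ? a : b); }

      int
      main(void)
      {
          long pages = 281474976710655L;  /* 2^48 - 1 pages of KVA */
          long limit = 1L << 40;

          /* Arguments truncate to 32 bits; prints 0. */
          printf("min(u_int): %u\n", umin(pages, limit));
          /* Full-width comparison; prints 1099511627776. */
          printf("lmin(long): %ld\n", lmin(pages, limit));
          return (0);
      }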
* o Assert GIANT_REQUIRED on system maps in _vm_map_lock(), _vm_map_lock_read(), and _vm_map_trylock(). (alc, 2002-07-12, 1 file, -0/+6)
    Submitted by: tegge
  o Remove GIANT_REQUIRED from kmem_alloc_wait() and kmem_free_wakeup().
    (This clears the way for exec_map accesses to move outside of Giant.
    The exec_map is not a system map.)
  o Remove some premature MPSAFE comments.
  Reviewed by: tegge
* o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()alc2002-07-111-2/+3
| | | | | | | and kmem_free_wakeup(). Previously, kmem_free_wakeup() always called wakeup(). In general, no one was sleeping. o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c for use in vm_kern.c.
* o Make the reservation of KVA space for kernel map entries a function of the KVA space's size in addition to the amount of physical memory, and reduce it by a factor of two. (alc, 2002-07-03, 1 file, -1/+2)
  Under the old formula, our reservation amounted to one kernel map entry
  per virtual page in the KVA space on a 4GB i386.
* Avoid using the 64-bit vm_pindex_t in a few places where 64-bit types are not required, as the overhead is unnecessary. (iedowse, 2002-06-26, 1 file, -2/+3)
  o In the i386 pmap_protect(), `sindex' and `eindex' represent page
    indices within the 32-bit virtual address space.
  o In swp_pager_meta_build() and swp_pager_meta_ctl(), use a temporary
    variable to store the low few bits of a vm_pindex_t that gets used as
    an array index.
  o vm_uiomove() uses `osize' and `idx' for page offsets within a map
    entry.
  o In vm_object_split(), `idx' is a page offset within a map entry.
* Enforce RLIMIT_VMEM on growable mappings (aka the primary stack or any MAP_STACK mapping). (dillon, 2002-06-26, 1 file, -0/+14)
  Suggested by: alc
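  A hedged sketch of the kind of check this implies, in userland terms;
  the function and its parameters are invented, and RLIMIT_AS is used as
  the portable spelling of the address-space limit:

      #include <stdbool.h>
      #include <stddef.h>
      #include <sys/resource.h>

      /* Would growing a mapping by grow_bytes stay within the limit? */
      static bool
      grow_within_limit(size_t current_vm_bytes, size_t grow_bytes)
      {
          struct rlimit rl;

          if (getrlimit(RLIMIT_AS, &rl) != 0)
              return false;             /* be conservative on error */
          if (rl.rlim_cur == RLIM_INFINITY)
              return true;
          return current_vm_bytes + grow_bytes <= rl.rlim_cur;
      }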
* o In vm_map_insert(), replace GIANT_REQUIRED by the acquisition and release of Giant around the direct manipulation of the vm_object and the optional call to pmap_object_init_pt(). (alc, 2002-06-22, 1 file, -5/+6)
  o In vm_map_findspace(), remove GIANT_REQUIRED. Instead, acquire and
    release Giant around the occasional call to pmap_growkernel().
  o In vm_map_find(), remove GIANT_REQUIRED.
* o Remove GIANT_REQUIRED from vm_map_stack(). (alc, 2002-06-21, 1 file, -2/+0)
* o Replace GIANT_REQUIRED in vm_object_coalesce() by the acquisition and release of Giant. (alc, 2002-06-19, 1 file, -2/+2)
  o Reduce the scope of GIANT_REQUIRED in vm_map_insert().
  These changes will enable us to remove the acquisition and release of
  Giant from obreak().
* o Remove LK_CANRECURSE from the vm_map lock. (alc, 2002-06-18, 1 file, -2/+2)
* - Introduce the new M_NOVM option, which tells uma to only check the currently allocated slabs and bucket caches for free items. (jeff, 2002-06-17, 1 file, -1/+2)
    It will not go ask the vm for pages. This differs from M_NOWAIT in
    that it not only doesn't block, it doesn't even ask.
  - Add a new zcreate option, ZONE_VM, that sets the BUCKETCACHE zflag.
    This tells uma that it should only allocate buckets out of the bucket
    cache, and not from the VM. It does this by using the M_NOVM option
    to zalloc when getting a new bucket. This is so that the VM doesn't
    recursively enter itself while trying to allocate buckets for
    vm_map_entry zones. If there are already allocated buckets when we
    get here, we'll still use them, but otherwise we'll skip it.
  - Use the ZONE_VM flag on vm map entries and pv entries on x86.
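  A sketch of the recursion-avoidance idea in allocator-neutral C; every
  name here is hypothetical, mirroring only the semantics described
  above:

      #include <stddef.h>

      #define ALLOC_NOWAIT 0x01  /* may not sleep waiting for pages */
      #define ALLOC_NOVM   0x02  /* may not ask the VM for pages at all */

      struct zone_sketch;
      void *take_from_existing_slabs(struct zone_sketch *);
      void *allocate_new_slab(struct zone_sketch *, int);

      static void *
      cache_alloc_sketch(struct zone_sketch *z, int flags)
      {
          void *item;

          /* First try slabs/buckets we already own: no VM involved. */
          if ((item = take_from_existing_slabs(z)) != NULL)
              return item;
          /* NOWAIT would ask without blocking; NOVM doesn't even ask. */
          if (flags & ALLOC_NOVM)
              return NULL;
          return allocate_new_slab(z, flags);
      }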
* o Acquire and release Giant in vm_map_wakeup() to prevent a lost wakeup(). (alc, 2002-06-17, 1 file, -0/+7)
  Reviewed by: tegge
* o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and vm_map_user_pageable(). (alc, 2002-06-14, 1 file, -397/+0)
  o Remove vm_map_pageable() and vm_map_user_pageable().
  o Remove vm_map_clear_recursive() and vm_map_set_recursive(). (They
    were only used by vm_map_pageable() and vm_map_user_pageable().)
  Reviewed by: tegge
* o Acquire and release Giant in vm_map_unlock_and_wait(). (alc, 2002-06-12, 1 file, -3/+5)
  Submitted by: tegge
* o Properly handle a failure by vm_fault_wire() or vm_fault_user_wire() in vm_map_wire(). (alc, 2002-06-11, 1 file, -4/+20)
  o Make two white-space changes in vm_map_wire().
  Reviewed by: tegge
* o Teach vm_map_delete() to respect the "in-transition" flag on a vm_map_entry by sleeping until the flag is cleared. (alc, 2002-06-11, 1 file, -0/+31)
  Submitted by: tegge
* o In vm_map_entry_create(), call uma_zalloc() with M_NOWAIT on system maps. (alc, 2002-06-10, 1 file, -5/+6)
    Submitted by: tegge
  o Eliminate the "!mapentzone" check from vm_map_entry_create() and
    vm_map_entry_dispose(). Reviewed by: tegge
  o Fix white-space usage in vm_map_entry_create().
* o Add vm_map_wire() for wiring contiguous regions of either kernel or user vm_maps. (alc, 2002-06-09, 1 file, -1/+159)
  This implementation has two key benefits when compared to
  vm_map_{user_,}pageable(): (1) it avoids a race condition through the
  use of "in-transition" vm_map entries and (2) it eliminates lock
  recursion on the vm_map.
  Note: there is still an error case that requires clean up.
  Reviewed by: tegge
* o Simplify vm_map_unwire() by merging the second and third passes over the caller-specified region. (alc, 2002-06-08, 1 file, -17/+11)
* o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire(). (alc, 2002-06-08, 1 file, -6/+15)
  o Add a stub for vm_map_wire().
  Note: the description of the previous commit had an error. The
  in-transition flag actually blocks the deallocation of a vm_map_entry
  by vm_map_delete() and vm_map_simplify_entry().
* o Add vm_map_unwire() for unwiring contiguous regions of either kernel or user vm_maps. (alc, 2002-06-07, 1 file, -1/+163)
    In accordance with the standards for munlock(2), and in contrast to
    vm_map_user_pageable(), this implementation does not allow holes in
    the specified region. This implementation uses the "in transition"
    flag described below.
  o Introduce a new flag, "in transition," to the vm_map_entry.
    Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
    this flag by deallocating in-transition vm_map_entrys, allowing the
    vm_map lock to be safely released in vm_map_unwire() and (the
    forthcoming) vm_map_wire().
  o Modify vm_map_simplify_entry() to respect the in-transition flag.
  In collaboration with: tegge
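  A sketch of the in-transition protocol under assumed names; the flag
  bits, structures, and the unlock-and-wait pair are stand-ins for the
  real vm_map machinery:

      /* Assumed flag bits; illustrative values only. */
      #define ENTRY_IN_TRANSITION 0x0100
      #define ENTRY_NEEDS_WAKEUP  0x0200

      struct map_handle;
      struct entry_sketch {
          int eflags;
      };

      void map_lock_handle(struct map_handle *);
      void map_unlock_and_wait(struct map_handle *); /* drops lock, sleeps */

      /*
       * The wiring code marks entries IN_TRANSITION before it drops the
       * map lock to fault pages in; any other thread that meets a marked
       * entry waits it out instead of deallocating or simplifying it.
       */
      static void
      wait_for_transition(struct map_handle *map, struct entry_sketch *e)
      {
          while (e->eflags & ENTRY_IN_TRANSITION) {
              e->eflags |= ENTRY_NEEDS_WAKEUP;  /* request a wakeup */
              map_unlock_and_wait(map);         /* sleep without the lock */
              map_lock_handle(map);             /* reacquire, re-check */
          }
      }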
* o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it to vm_object_split(). (alc, 2002-06-02, 1 file, -90/+1)
  Its interface should still be changed to resemble vm_object_shadow().
* o Style fixes to vm_map_split(), including the elimination of one variable declaration that shadows another. (alc, 2002-06-02, 1 file, -8/+1)
  Note: this function should really be vm_object_split(), not
  vm_map_split().
  Reviewed by: md5
* o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(), vm_map_create(), and vm_map_submap(). (alc, 2002-06-01, 1 file, -24/+7)
  o Make further use of a local variable in vm_map_entry_splay() that
    caches a reference to one of a vm_map_entry's children. (This reduces
    code size somewhat.)
  o Revert a part of revision 1.66, deinlining vmspace_pmap(). (This
    function is MPSAFE.)
* o Revert a part of revision 1.66: contrary to what that commit message says, deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior() actually increases the kernel's size. (alc, 2002-06-01, 1 file, -14/+13)
  o Make vm_map_entry_set_behavior() static and add a comment describing
    its purpose.
  o Remove an unnecessary initialization statement from
    vm_map_entry_splay().
* Further work on pushing Giant out of the vm_map layer and down into the vm_object layer: (alc, 2002-05-31, 1 file, -3/+2)
  o Acquire and release Giant in vm_object_shadow() and
    vm_object_page_remove().
  o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s call
    to vm_object_page_remove().
  o Remove the acquisition and release of Giant around vm_map_lookup()'s
    call to vm_object_shadow().
* o Acquire and release Giant around pmap operations in vm_fault_unwire() and vm_map_delete(). (alc, 2002-05-26, 1 file, -4/+3)
    Assert GIANT_REQUIRED in vm_map_delete() only if operating on the
    kernel_object or the kmem_object.
  o Remove GIANT_REQUIRED from vm_map_remove().
  o Remove the acquisition and release of Giant from munmap().
* o Replace the vm_map's hint by the root of a splay tree. (alc, 2002-05-24, 1 file, -80/+103)
  By design, the last accessed datum is moved to the root of the splay
  tree. Therefore, on lookups in which the hint resulted in O(1) access,
  the splay tree still achieves O(1) access. In contrast, on lookups in
  which the hint failed miserably, the splay tree achieves amortized
  logarithmic complexity, resulting in dramatic improvements on vm_maps
  with a large number of entries. For example, the execution time for
  replaying an access log from www.cs.rice.edu against the thttpd web
  server was reduced by 23.5% due to the large number of files
  simultaneously mmap()ed by this server. (The machine in question has
  enough memory to cache most of this workload.)
  Nothing comes for free: at present, I see a 0.2% slowdown on
  "buildworld" due to the overhead of maintaining the splay tree. I
  believe that some or all of this can be eliminated through
  optimizations to the code.
  Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
  Reviewed by: jeff
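  A self-contained top-down splay sketch over address-keyed nodes,
  illustrating the access pattern described above (the node layout is
  simplified; the real tree splays vm_map entries on their start
  addresses):

      #include <stddef.h>

      struct ent {
          unsigned long start;
          struct ent *left, *right;
      };

      /* Top-down splay: bring the node nearest `addr' to the root. */
      static struct ent *
      splay(struct ent *root, unsigned long addr)
      {
          struct ent dummy = { 0, NULL, NULL };
          struct ent *ltree = &dummy, *rtree = &dummy, *y;

          if (root == NULL)
              return NULL;
          for (;;) {
              if (addr < root->start) {
                  if (root->left == NULL)
                      break;
                  if (addr < root->left->start) {  /* rotate right */
                      y = root->left;
                      root->left = y->right;
                      y->right = root;
                      root = y;
                      if (root->left == NULL)
                          break;
                  }
                  rtree->left = root;              /* link right */
                  rtree = root;
                  root = root->left;
              } else if (addr > root->start) {
                  if (root->right == NULL)
                      break;
                  if (addr > root->right->start) { /* rotate left */
                      y = root->right;
                      root->right = y->left;
                      y->left = root;
                      root = y;
                      if (root->right == NULL)
                          break;
                  }
                  ltree->right = root;             /* link left */
                  ltree = root;
                  root = root->right;
              } else
                  break;                           /* exact match */
          }
          ltree->right = root->left;               /* reassemble */
          rtree->left = root->right;
          root->left = dummy.right;
          root->right = dummy.left;
          return root;
      }

  After splay(root, addr), the node nearest addr is the root, so a repeat
  lookup in the same region costs O(1), matching the old hint's best case
  while bounding the worst case at amortized O(log n).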