path: root/sys/vm/vm_map.c
* Avoid a lock-order reversal between Giant and a system map mutex that
  occurs when kmem_malloc() fails to allocate a sufficient number of vm
  pages. Specifically, we avoid the lock-order reversal by not grabbing
  Giant around pmap_remove() if the map is the kmem_map.
  Approved by: re (jhb)
  Reported by: Eugene <eugene3@web.de>
  (alc, 2003-11-19, 1 file, -2/+4)
* Changes to msync(2)
  - Return EBUSY if the region was wired by mlock(2) and MS_INVALIDATE
    is specified to msync(2). This is required by the Open Group Base
    Specifications Issue 6.
  - vm_map_sync() doesn't return KERN_FAILURE. Thus, msync(2) can't
    possibly return EIO.
  - The second major loop in vm_map_sync() handles sub maps. Thus,
    failing on sub maps in the first major loop isn't necessary.
  (alc, 2003-11-14, 1 file, -2/+2)
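  A minimal userland check of the EBUSY behavior described above (a
  hypothetical test program, not part of the commit; note that mlock(2)
  may itself fail with EPERM or ENOMEM depending on privileges and
  resource limits):

      #include <sys/mman.h>
      #include <err.h>
      #include <errno.h>
      #include <stdio.h>
      #include <unistd.h>

      int
      main(void)
      {
          size_t len = (size_t)getpagesize();
          void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
              MAP_ANON | MAP_PRIVATE, -1, 0);
          if (p == MAP_FAILED)
              err(1, "mmap");
          if (mlock(p, len) == -1)
              err(1, "mlock");
          /* MS_INVALIDATE on a wired region must now fail with EBUSY. */
          if (msync(p, len, MS_INVALIDATE) == -1 && errno == EBUSY)
              printf("msync: EBUSY, as required by POSIX\n");
          return (0);
      }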
* - The Open Group Base Specifications Issue 6 specifies that munmap(2)
    must return EINVAL if size is zero.
    Submitted by: tegge
  - In order to avoid a race condition in multithreaded applications,
    the check and removal operations performed by munmap(2) must be in
    the same critical section. To accommodate this,
    vm_map_check_protection() is modified to require its caller to
    obtain at least a read lock on the map.
  (alc, 2003-11-10, 1 file, -14/+6)
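  A quick userland demonstration of the zero-size rule (a hypothetical
  test, not part of the commit):

      #include <sys/mman.h>
      #include <errno.h>
      #include <stdio.h>

      int
      main(void)
      {
          /* Per Issue 6, munmap() must reject a zero-length region
           * with EINVAL, regardless of the address passed. */
          if (munmap(NULL, 0) == -1 && errno == EINVAL)
              printf("munmap(addr, 0) -> EINVAL, as specified\n");
          return (0);
      }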
* - Remove Giant from msync(2). Giant is still acquired by the lower
    layers if we drop into the pmap or vnode layers.
  - Migrate the handling of zero-length msync(2)s into vm_map_sync() so
    that multithreaded applications can't change the map between
    implementing the zero-length hack in msync(2) and reacquiring the
    map lock in vm_map_sync().
  Reviewed by: tegge
  (alc, 2003-11-09, 1 file, -0/+10)
* - Rename vm_map_clean() to vm_map_sync(). This better reflects the
    fact that msync(2) is its only caller.
  - Migrate the parts of the old vm_map_clean() that examined the
    internals of a vm object to a new function vm_object_sync() that is
    implemented in vm_object.c. At the same time, introduce the
    necessary vm object locking so that vm_map_sync() and
    vm_object_sync() can be called without Giant.
  Reviewed by: tegge
  (alc, 2003-11-09, 1 file, -59/+5)
* Move the implementation of OBJ_ONEMAPPING from vm_map_delete() to
  vm_map_entry_delete() so that all of the vm object manipulation is
  performed in one place.
  (alc, 2003-11-05, 1 file, -30/+24)
* Update avail_ssize for rstacks after growing them.
  (marcel, 2003-11-04, 1 file, -0/+1)
* Whitespace cleanup.
  (des, 2003-11-03, 1 file, -29/+29)
* Increase the scope of the source object lock in vm_map_copy_entry().
  (alc, 2003-11-03, 1 file, -5/+3)
* Introduce and use vm_object_reference_locked(). Unlike
  vm_object_reference(), this function must not be used to reanimate
  dead vm objects. This restriction simplifies locking.
  Reviewed by: tegge
  (alc, 2003-11-02, 1 file, -1/+1)
* Fix two bugs introduced with the rstack functionality and specific to
  the rstack functionality:
  1. Fix a KASSERT that tests for the address to be above the upward
     growable stack. Typically for rstack, the faulting address can be
     identical to the record end of the upward growable entry, and very
     likely is on ia64. The KASSERT tested for greater than, not
     greater than or equal, so whenever the register stack had to be
     grown the assertion fired.
  2. When we grow the upward growable stack entry and adjust the
     underlying object, don't forget to adjust the size of the VM map.
     Not doing so would trigger an assert in vm_map_zdtor().
  Pointy hat: marcel (for not testing with INVARIANTS).
  (marcel, 2003-10-31, 1 file, -1/+2)
* Corrections to revision 1.305:
  - Specifying VM_MAP_WIRE_HOLESOK should not assume that the start
    address is the beginning of the map. Instead, move to the first
    entry after the start address.
  - The implementation of VM_MAP_WIRE_HOLESOK was incomplete. This
    caused the failure of mlockall(2) in some circumstances.
  (alc, 2003-10-18, 1 file, -22/+36)
* Move pmap_resident_count() from the MD pmap.h to the MI pmap.h.
  Add a definition of pmap_wired_count(). Add a definition of
  vmspace_wired_count().
  Reviewed by: truckman
  Discussed with: peter
  (bms, 2003-10-06, 1 file, -0/+6)
* Part 2 of implementing rstacks: add the ability to create rstacks and
  use the ability on ia64 to map the register stack. The orientation of
  the stack (i.e. its grow direction) is passed to vm_map_stack() in
  the overloaded cow argument. Since the grow direction is represented
  by bits, it is possible and allowed to create bi-directional stacks.
  This is not an advertised feature, more of a side-effect.
  Fix a bug in vm_map_growstack() that's specific to rstacks and which
  we could only find by having the ability to create rstacks: when the
  mapped stack ends at the faulting address, we have not actually
  mapped the faulting address. We need to include or cover the faulting
  address.
  Note that at this time mmap(2) has not been extended to allow the
  creation of rstacks by processes. If such a need arises, this can be
  done.
  Tested on: alpha, i386, ia64, sparc64
  (marcel, 2003-09-27, 1 file, -39/+55)
* Adjust the kmapentzone limit so that it takes into account the size
  of maxproc and maxfiles, as procs, pipes, and other structures cause
  allocations from kmapentzone.
  Submitted by: tegge
  (silby, 2003-09-23, 1 file, -1/+3)
* Change the handling of the kernel and kmem objects in
  vm_map_delete(): In order to use "unmanaged" pages in the kmem
  object, vm_map_delete() must unconditionally perform pmap_remove().
  Otherwise, sparc64 has problems.
  Tested by: jake
  (alc, 2003-09-23, 1 file, -23/+18)
* Introduce MAP_ENTRY_GROWS_DOWN and MAP_ENTRY_GROWS_UP to allow for
  growable (stack) entries that not only grow down, but also grow up.
  Have vm_map_growstack() take these flags into account when growing
  an entry.
  This is the first step in adding support for upward growable stacks.
  It is a required feature on ia64 to support the register stack (or
  rstack as I like to call it -- it also means reverse stack). We do
  not currently create rstacks, so the upward growing is not exercised
  and the change should be a functional no-op.
  Reviewed by: alc
  (marcel, 2003-08-30, 1 file, -80/+144)
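  To illustrate the idea, a minimal sketch of how direction flags like
  these might gate a growth check. The flag names come from the commit;
  the bit values, the entry_can_grow() helper, and its logic are
  hypothetical, and the eflags/start/end fields are assumed from the
  vm_map_entry layout of this era:

      #define MAP_ENTRY_GROWS_DOWN  0x1000  /* assumed value */
      #define MAP_ENTRY_GROWS_UP    0x2000  /* assumed value */

      /* Hypothetical helper: can a fault at addr be satisfied by
       * growing this entry in one of its permitted directions? */
      static int
      entry_can_grow(const struct vm_map_entry *entry, vm_offset_t addr)
      {
          /* Down-growing entries extend below their current start. */
          if ((entry->eflags & MAP_ENTRY_GROWS_DOWN) &&
              addr < entry->start)
                  return (1);
          /* Up-growing entries (rstacks) extend past their end. */
          if ((entry->eflags & MAP_ENTRY_GROWS_UP) &&
              addr >= entry->end)
                  return (1);
          return (0);
      }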
* Remove GIANT_REQUIRED from vmspace_alloc().
  (alc, 2003-08-13, 1 file, -1/+0)
* Add the mlockall() and munlockall() system calls.
  - All those diffs to syscalls.master for each architecture *are*
    necessary. This needed clarification; the stub code generation for
    mlockall() was disabled, which would prevent applications from
    linking to this API (suggested by mux).
  - Giant has been quashed. It is no longer held by the code, as the
    required locking has been pushed down within vm_map.c.
  - Callers must specify VM_MAP_WIRE_HOLESOK or VM_MAP_WIRE_NOHOLES to
    express their intention explicitly.
  - Inspected at the vmstat, top and vm pager sysctl stats level.
    Paging-in activity is occurring correctly, using a test harness.
  - The RES size for a process may appear to be greater than its SIZE.
    This is believed to be due to mappings of the same shared library
    page being wired twice. Further exploration is needed.
  - Believed to back out of allocations and locks correctly (tested
    with WITNESS, MUTEX_PROFILING, INVARIANTS and DIAGNOSTIC).
  PR: kern/43426, standards/54223
  Reviewed by: jake, alc
  Approved by: jake (mentor)
  MFC after: 2 weeks
  (bms, 2003-08-11, 1 file, -12/+39)
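  Typical userland use of the new calls (a hypothetical example, not
  from the commit; mlockall() can fail with ENOMEM or EPERM depending
  on resource limits and privileges):

      #include <sys/mman.h>
      #include <err.h>
      #include <stdio.h>

      int
      main(void)
      {
          /* Wire every current and future mapping of the process. */
          if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
              err(1, "mlockall");
          printf("address space wired\n");
          /* ... latency-sensitive work; these pages cannot be paged
           * out until they are unwired ... */
          if (munlockall() == -1)
              err(1, "munlockall");
          return (0);
      }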
* Move the implementation of vmspace_swap_count() (used only in the
  "toss the largest process" emergency handling) from vm_map.c to
  swap_pager.c. The quantity calculated depends strongly on the
  internals of the swap_pager and by moving it, we no longer need to
  expose the internal metrics of the swap_pager to the world.
  (phk, 2003-07-18, 1 file, -37/+0)
* Background: pmap_object_init_pt() premaps the pages of an object in
  order to avoid the overhead of later page faults. In general, it
  implements two cases: one for vnode-backed objects and one for
  device-backed objects. Only the device-backed case is really
  machine-dependent, belonging in the pmap.
  This commit moves the vnode-backed case into the (relatively) new
  function vm_map_pmap_enter(). On amd64 and i386, this commit only
  amounts to code rearrangement. On alpha and ia64, the new machine
  independent (MI) implementation of the vnode case is smaller and
  more efficient than their pmap-based implementations. (The MI
  implementation takes advantage of the fact that objects in -CURRENT
  are ordered collections of pages.) On sparc64,
  pmap_object_init_pt() hadn't (yet) been implemented.
  (alc, 2003-07-03, 1 file, -1/+74)
* Check the address provided to vm_map_stack() against the vm map's
  maximum, returning an error if the address is too high.
  (alc, 2003-07-01, 1 file, -1/+2)
* Introduce vm_map_pmap_enter(). Presently, this is a stub calling the
  MD pmap_object_init_pt().
  (alc, 2003-06-29, 1 file, -7/+19)
* Simple read-modify-write operations on a vm object's flags,
  ref_count, and shadow_count can now rely on its mutex for
  synchronization. Remove one use of Giant from vm_map_insert().
  (alc, 2003-06-27, 1 file, -4/+0)
* Remove a GIANT_REQUIRED on the kernel object that we no longer need.
  (alc, 2003-06-25, 1 file, -2/+0)
* Use __FBSDID().
  (obrien, 2003-06-11, 1 file, -2/+3)
* Pass the vm object to vm_object_collapse() with its lock held.
  (alc, 2003-06-07, 1 file, -2/+2)
* Increase the scope of the vm_object lock in vm_map_delete().
  (alc, 2003-04-30, 1 file, -12/+13)
* Add vm_object locking to vmspace_swap_count().
  (alc, 2003-04-30, 1 file, -5/+6)
* Extend the scope of two existing vm_object locks to cover
  swap_pager_freespace().
  (alc, 2003-04-26, 1 file, -1/+1)
* Acquire the vm_object's lock when performing vm_object_page_clean().
  Add a parameter to vm_pageout_flush() that tells vm_pageout_flush()
  whether its caller has locked the vm_object. (This is a temporary
  measure to bootstrap vm_object locking.)
  (alc, 2003-04-24, 1 file, -0/+2)
* Update the vm_object locking in vm_map_insert().
  (alc, 2003-04-20, 1 file, -8/+13)
* Update vm_object locking in vm_map_delete().
  (alc, 2003-04-20, 1 file, -5/+9)
* Update locking around vm_object_page_remove() in vm_map_clean() to
  use the new macros. Remove an unnecessary increment and decrement of
  the vm_object's reference count in vm_map_clean().
  (alc, 2003-04-19, 1 file, -4/+2)
* Lock some manipulations of the vm object's flags.
  (alc, 2003-04-13, 1 file, -4/+4)
* Including <sys/stdint.h> is (almost?) universally only to be able to
  use %j in printfs, so put a nested include in <sys/systm.h> where the
  printf prototype lives and save everybody else the trouble.
  (phk, 2003-03-18, 1 file, -1/+0)
* When the VM daemon is out of swap space and looking for a process to
  kill, don't block on a map lock while holding the process lock.
  Instead, skip processes whose map locks are held and find something
  else to kill. Add vm_map_trylock_read() to support the above.
  Reviewed by: alc, mike (mentor)
  (das, 2003-03-12, 1 file, -2/+13)
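  A sketch of the skip-on-contention pattern this describes.
  vm_map_trylock_read() is the new primitive; the victim-selection
  loop, its bookkeeping variables, and the exact locking order shown
  are an illustrative reconstruction of the pageout scan, not the
  committed code (the real scan also holds allproc_lock across the
  traversal):

      struct proc *p, *bigproc = NULL;
      long size, bigsize = 0;

      FOREACH_PROC_IN_SYSTEM(p) {
          PROC_LOCK(p);
          /* Never sleep on a map lock while holding the process
           * lock; a busy map just disqualifies this candidate. */
          if (!vm_map_trylock_read(&p->p_vmspace->vm_map)) {
              PROC_UNLOCK(p);
              continue;
          }
          size = vmspace_resident_count(p->p_vmspace);
          vm_map_unlock_read(&p->p_vmspace->vm_map);
          if (size > bigsize) {
              if (bigproc != NULL)
                  PROC_UNLOCK(bigproc);
              bigproc = p;      /* keep the current victim locked */
              bigsize = size;
          } else
              PROC_UNLOCK(p);
      }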
* Remove ENABLE_VFS_IOOPT. It is a long unfinished work-in-progress.
  Discussed on: arch@
  (alc, 2003-03-06, 1 file, -249/+0)
* Back out M_* changes, per decision of the TRB.
  Approved by: trb
  (imp, 2003-02-19, 1 file, -3/+3)
* Remove the acquisition and release of Giant around pmap_growkernel().
  It's unnecessary for two reasons: (1) Giant is at present already
  held in such cases and (2) our various implementations of
  pmap_growkernel() look to be MP safe. (For example, for sparc64 the
  proof of (2) is trivial.)
  (alc, 2003-02-15, 1 file, -2/+0)
* Add MTX_DUPOK to the initialization of system map locks.
  (alc, 2003-01-25, 1 file, -2/+2)
* Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
  Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
  (alfred, 2003-01-21, 1 file, -3/+3)
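  After this change, allocation call sites look roughly like the
  following sketch (malloc(9) with the surviving M_NOWAIT flag; the
  example_alloc() wrapper and the choice of the M_TEMP type are
  illustrative only):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/errno.h>
      #include <sys/malloc.h>

      static int
      example_alloc(size_t len, void **bufp)
      {
          void *buf;

          /* M_NOWAIT: return NULL rather than sleeping when memory
           * is short; per this commit, callers willing to sleep pass
           * 0 instead of the removed M_WAITOK. */
          buf = malloc(len, M_TEMP, M_NOWAIT);
          if (buf == NULL)
              return (ENOMEM);
          *bufp = buf;
          return (0);
      }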
* Close the remaining user address mapping races for physical I/O,
  CAM, and AIO. Still TODO: streamline useracc() checks.
  Reviewed by: alc, tegge
  MFC after: 7 days
  (dillon, 2003-01-20, 1 file, -3/+8)
* It is possible for an active aio to prevent shared memory from being
  dereferenced when a process exits, due to the vmspace ref-count being
  bumped. Change shmexit() and shmexit_myhook() to take a vmspace
  instead of a process, and call it in vmspace_dofree(). This way, if
  it is missed in exit1()'s early-resource-free, it will still be
  caught when the zombie is reaped.
  Also fix a potential race in shmexit_myhook() by NULLing out
  vmspace->vm_shm prior to calling shm_delete_mapping() and free().
  MFC after: 7 days
  (dillon, 2003-01-13, 1 file, -0/+8)
* Lock the vm object when performing vm_object_clear_flag().
  (alc, 2003-01-03, 1 file, -0/+4)
* Implement a variant locking scheme for vm maps: access to system
  maps is now synchronized by a mutex, whereas access to user maps is
  still synchronized by a lockmgr()-based lock. Why? No single type of
  lock, including sx locks, meets the requirements of both types of vm
  map. Sometimes we sleep while holding the lock on a user map; thus,
  a mutex isn't appropriate. On the other hand, both lockmgr()-based
  and sx locks release Giant when a thread/process blocks during
  contention for a lock. This could lead to a race condition in a
  legacy driver (that relies on Giant for synchronization) if it
  attempts to kmem_malloc() and fails to immediately obtain the lock.
  Fortunately, we never sleep while holding a system map lock.
  (alc, 2002-12-31, 1 file, -16/+38)
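  The dual path can be pictured with a sketch along these lines (the
  system_map test reflects the scheme the message describes; the field
  names and exact calls are an approximation, not the committed code):

      /* Hypothetical sketch: system maps take a mutex (they are never
       * held across a sleep); user maps take a sleepable lockmgr
       * lock. */
      void
      _vm_map_lock(vm_map_t map)
      {
          if (map->system_map)
              mtx_lock(&map->system_mtx);
          else
              lockmgr(&map->lock, LK_EXCLUSIVE, NULL, curthread);
      }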
* - Increment the vm_map's timestamp if _vm_map_trylock() succeeds.
  - Introduce map_sleep_mtx and use it to replace Giant in
    vm_map_unlock_and_wait() and vm_map_wakeup(). (Original version
    by: tegge.)
  (alc, 2002-12-30, 1 file, -10/+11)
* - Remove vm_object_init2(). It is unused.
  - Add a mtx_destroy() to vm_object_collapse(). (This allows a
    bzero() to migrate from _vm_object_allocate() to
    vm_object_zinit(), where it will be performed less often.)
  (alc, 2002-12-29, 1 file, -1/+0)
* Fix a refcount race with the vmspace structure. In order to prevent
  resource starvation, we clean up as much of the vmspace structure as
  we can when the last process using it exits. The rest of the
  structure is cleaned up when it is reaped. But since exit1()
  decrements the ref count, it is possible for a double-free to occur
  if someone else, such as the process swapout code, references and
  then dereferences the structure. Additionally, the final cleanup of
  the structure should not occur until the last process referencing it
  is reaped.
  This commit solves the problem by introducing a secondary reference
  count, called 'vm_exitingcnt'. The normal reference count is
  decremented on exit and vm_exitingcnt is incremented. vm_exitingcnt
  is decremented when the process is reaped. When both vm_exitingcnt
  and vm_refcnt are 0, the structure is freed for real.
  MFC after: 3 weeks
  (dillon, 2002-12-15, 1 file, -6/+17)
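  A condensed sketch of the two-counter lifetime (the field names and
  vmspace_dofree() come from the commit messages; the function names,
  their bodies, and the vmspace_release_resources() helper are an
  illustrative reconstruction, not the committed code):

      /* Called from exit1(): drop the normal reference, but keep the
       * struct alive for the reaper via vm_exitingcnt. */
      static void
      vmspace_exit(struct vmspace *vm)
      {
          vm->vm_exitingcnt++;
          if (--vm->vm_refcnt == 0)
              vmspace_release_resources(vm);  /* partial cleanup only */
      }

      /* Called when the zombie is reaped: free for real only once
       * both counters have dropped to zero. */
      static void
      vmspace_exitfree(struct vmspace *vm)
      {
          if (--vm->vm_exitingcnt == 0 && vm->vm_refcnt == 0)
              vmspace_dofree(vm);             /* full teardown */
      }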
* Perform vm_object_lock() and vm_object_unlock() around
  vm_object_page_remove().
  (alc, 2002-12-15, 1 file, -2/+8)