path: root/sys/vm/vm_map.c
Commit message [author, date, files changed, lines -deleted/+added]
...
* Correct two error cases in vm_map_unwire(): [alc, 2004-05-25, 1 file, -4/+5]

  1. Contrary to the Single Unix Specification, our implementation of
     munlock(2) returned an error when performed on an unwired virtual
     address range. Correct this. Note, however, that the behavior of
     "system" unwiring is unchanged; only "user" unwiring is changed. If
     "system" unwiring is performed on an unwired virtual address range,
     an error is still returned.
  2. Performing an errant "system" unwiring on a virtual address range
     that was "user" (i.e., mlock(2)) but not "system" wired would
     incorrectly undo the "user" wiring instead of returning an error.
     Correct this.

  Discussed with: green@
  Reviewed by: tegge@
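  A minimal userland check of the fixed behavior (hypothetical test, not
  part of the commit): munlock(2) on a range that was never mlock(2)ed
  should now succeed rather than fail.

      #include <sys/mman.h>
      #include <err.h>
      #include <stdio.h>
      #include <unistd.h>

      int
      main(void)
      {
              size_t len = getpagesize();
              void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                  MAP_ANON | MAP_PRIVATE, -1, 0);

              if (p == MAP_FAILED)
                      err(1, "mmap");
              /* Never mlock(2)ed; per SUSv3 this is not an error. */
              if (munlock(p, len) == 0)
                      printf("munlock of an unwired range: OK\n");
              return (0);
      }
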
* To date, unwiring a fictitious page has produced a panic. The reason [alc, 2004-05-22, 1 file, -5/+13]

  being that PHYS_TO_VM_PAGE() returns the wrong vm_page for fictitious
  pages, but unwiring uses PHYS_TO_VM_PAGE(). The resulting panic
  reported an unexpected wired count. Rather than attempting to fix
  PHYS_TO_VM_PAGE(), this fix takes advantage of the properties of
  fictitious pages. Specifically, fictitious pages will never be
  completely unwired. Therefore, we can keep a fictitious page's wired
  count forever set to one and thereby avoid the use of
  PHYS_TO_VM_PAGE() when we know that we're working with a fictitious
  page, just not which one.

  In collaboration with: green@, tegge@
  PR: kern/29915
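  A sketch of the unwire path this describes (assumed shape, paraphrasing
  the commit message; the fictitious flag is a stand-in for however the
  caller knows the page type):

      /*
       * Fictitious pages keep a wired count pinned at one, so only
       * real pages go through the PHYS_TO_VM_PAGE() lookup.
       */
      pmap_change_wiring(pmap, va, FALSE);
      if (!fictitious)
              vm_page_unwire(PHYS_TO_VM_PAGE(pa), 1);
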
* Properly remove MAP_FUTUREWIRE when a vm_map_entry gets torn down. [green, 2004-05-07, 1 file, -0/+1]

  Previously, mlockall(2) usage would leak MAP_FUTUREWIRE of the
  process's vmspace::vm_map, and subsequent processes would wire all of
  their memory. Coupled with a wired-page leak in vm_fault_unwire(),
  this would run the system out of free pages and cause programs to
  randomly SIGBUS when faulting in new pages. (Note that this is not
  the fix for the latter part; pages are still leaked when a wired area
  is unmapped in some cases.)

  Reviewed by: alc
  PR: kern/62930
* In cases where a file was resident in memory, mmap(..., PROT_NONE, ...) [alc, 2004-04-24, 1 file, -4/+5]

  would actually map the file with read access enabled. According to
  http://www.opengroup.org/onlinepubs/007904975/functions/mmap.html this
  is an error. Similarly, an madvise(..., MADV_WILLNEED) would enable
  read access on a virtual address range that was PROT_NONE.

  The solution implemented herein is (1) to pass a vm_prot_t to
  vm_map_pmap_enter() describing the allowed access and (2) to make
  vm_map_pmap_enter() responsible for understanding the limitations of
  pmap_enter_quick().

  Submitted by: "Mark W. Krentel" <krentel@dreamscape.com>
  PR: kern/64573
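  A hypothetical demonstration (not from the commit) of the corrected
  semantics: reading through a PROT_NONE mapping of a memory-resident
  file must fault instead of succeeding.

      #include <sys/mman.h>
      #include <err.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int
      main(void)
      {
              int fd = open("/etc/motd", O_RDONLY);  /* any cached file */
              char *p;

              if (fd == -1)
                      err(1, "open");
              p = mmap(NULL, getpagesize(), PROT_NONE, MAP_PRIVATE, fd, 0);
              if (p == MAP_FAILED)
                      err(1, "mmap");
              printf("%c\n", p[0]);  /* must die with SIGSEGV, not print */
              return (0);
      }
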
* Remove advertising clause from University of California Regents' license, [imp, 2004-04-06, 1 file, -4/+0]

  per letter dated July 22, 1999.

  Approved by: core
* Do not copy vm_exitingcnt to the new vmspace in vmspace_exec(). Copying [tjr, 2004-03-23, 1 file, -1/+2]

  it led to impossibly high values in the new vmspace, causing it to
  never drop to 0 and be freed.
* Retire pmap_pinit2(). Alpha was the last platform that used it. However, [alc, 2004-03-07, 1 file, -2/+0]

  ever since alpha/alpha/pmap.c revision 1.81 introduced the list
  allpmaps, there has been no reason for having this function on Alpha.
  Briefly, when pmap_growkernel() relied upon the list of all processes
  to find and update the various pmaps to reflect a growth in the
  kernel's valid address space, pmap_pinit2() served to avoid a race
  between pmap initialization and pmap_growkernel(). Specifically,
  pmap_pinit2() was responsible for initializing the kernel portions of
  the pmap, and pmap_pinit2() was called after the process structure
  contained a pointer to the new pmap for use by pmap_growkernel().
  Thus, an update to the kernel's address space might be applied to the
  new pmap unnecessarily, but an update would never be lost.
* Further reduce the use of Giant in vm_map_delete(): Perform pmap_remove() [alc, 2004-02-12, 1 file, -2/+2]

  on system maps, besides the kmem_map, without Giant.

  In collaboration with: tegge
* - Locking for the per-process resource limits structure has eliminated [alc, 2004-02-05, 1 file, -3/+1]

  the need for Giant in vm_map_growstack().
  - Use the proc * that is passed to vm_map_growstack() rather than
    curthread->td_proc.
* Locking for the per-process resource limits structure. [jhb, 2004-02-04, 1 file, -9/+16]

  - struct plimit includes a mutex to protect a reference count. The
    plimit structure is treated similarly to struct ucred in that it is
    always copy-on-write, so having a reference to a structure is
    sufficient to read from it without needing a further lock.
  - The proc lock protects the p_limit pointer and must be held while
    reading limits from a process to keep the limit structure from
    changing out from under you while reading from it.
  - Various global limits that are ints are not protected by a lock
    since int writes are atomic on all the archs we support and thus a
    lock wouldn't buy us anything.
  - All accesses to individual resource limits from a process are
    abstracted behind a simple lim_rlimit(), lim_max(), and lim_cur()
    API that return either an rlimit, or the current or max individual
    limit of the specified resource from a process.
  - dosetrlimit() was renamed to kern_setrlimit() to match the existing
    style of other similar syscall helper functions.
  - The alpha OSF/1 compat layer no longer calls getrlimit() and
    setrlimit() (it didn't use the stackgap when it should have) but
    uses lim_rlimit() and kern_setrlimit() instead.
  - The svr4 compat no longer uses the stackgap for resource limits
    calls, but uses lim_rlimit() and kern_setrlimit() instead.
  - The ibcs2 compat no longer uses the stackgap for resource limits.
    It also no longer uses the stackgap for accessing sysctl's for the
    ibcs2_sysconf() syscall but uses kernel_sysctl() instead. As a
    result, ibcs2_sysconf() no longer needs Giant.
  - The p_rlimit macro no longer exists.

  Submitted by: mtm (mostly, I only did a few cleanups and catchups)
  Tested on: i386
  Compiled on: alpha, amd64
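  A sketch of the consumer-side pattern (assumed, based only on the API
  names and locking rules stated above; the signatures are paraphrased,
  not copied from the tree):

      rlim_t vmemlim;

      /* The proc lock keeps p_limit stable while we read from it. */
      PROC_LOCK(p);
      vmemlim = lim_cur(p, RLIMIT_VMEM);
      PROC_UNLOCK(p);

      if (new_size > vmemlim)
              return (ENOMEM);
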
* Drop the reference count on the old vmspace after fully switching the [jhb, 2004-02-02, 1 file, -2/+2]

  current thread to the new vmspace.

  Suggested by: dillon
* - Modify vm_object_split() to expect a locked vm object on entry and [alc, 2003-12-30, 1 file, -2/+0]

  return with a locked vm object on exit. Remove GIANT_REQUIRED.
  - Eliminate some unnecessary local variables from vm_object_split().
* Minor correction to revision 1.258: Use the proc pointer that is passed to [alc, 2003-12-26, 1 file, -2/+1]

  vm_map_growstack() in the RLIMIT_VMEM check rather than curthread.
* - Avoid a lock-order reversal between Giant and a system map mutex that [alc, 2003-11-19, 1 file, -2/+4]

  occurs when kmem_malloc() fails to allocate a sufficient number of vm
  pages. Specifically, we avoid the lock-order reversal by not grabbing
  Giant around pmap_remove() if the map is the kmem_map.

  Approved by: re (jhb)
  Reported by: Eugene <eugene3@web.de>
* Changes to msync(2) [alc, 2003-11-14, 1 file, -2/+2]

  - Return EBUSY if the region was wired by mlock(2) and MS_INVALIDATE
    is specified to msync(2). This is required by the Open Group Base
    Specifications Issue 6.
  - vm_map_sync() doesn't return KERN_FAILURE. Thus, msync(2) can't
    possibly return EIO.
  - The second major loop in vm_map_sync() handles sub maps. Thus,
    failing on sub maps in the first major loop isn't necessary.
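  A hypothetical check of the first item (not part of the commit):
  MS_INVALIDATE on an mlock(2)ed region should now fail with EBUSY.

      #include <sys/mman.h>
      #include <err.h>
      #include <errno.h>
      #include <stdio.h>
      #include <unistd.h>

      int
      main(void)
      {
              size_t len = getpagesize();
              void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                  MAP_ANON | MAP_PRIVATE, -1, 0);

              if (p == MAP_FAILED || mlock(p, len) != 0)
                      err(1, "setup");
              if (msync(p, len, MS_INVALIDATE) == -1 && errno == EBUSY)
                      printf("EBUSY, as POSIX requires\n");
              return (0);
      }
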
* - The Open Group Base Specifications Issue 6 specifies that an munmap(2) [alc, 2003-11-10, 1 file, -14/+6]

  must return EINVAL if size is zero.
  Submitted by: tegge
  - In order to avoid a race condition in multithreaded applications,
    the check and removal operations by munmap(2) must be in the same
    critical section. To accommodate this, vm_map_check_protection() is
    modified to require its caller to obtain at least a read lock on
    the map.
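  A hypothetical check (not part of the commit) of the new munmap(2)
  behavior:

      #include <sys/mman.h>
      #include <errno.h>
      #include <stdio.h>

      int
      main(void)
      {
              if (munmap(NULL, 0) == -1 && errno == EINVAL)
                      printf("zero-length munmap returns EINVAL\n");
              return (0);
      }
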
* - Remove Giant from msync(2). Giant is still acquired by the lower layers [alc, 2003-11-09, 1 file, -0/+10]

  if we drop into the pmap or vnode layers.
  - Migrate the handling of zero-length msync(2)s into vm_map_sync() so
    that multithreaded applications can't change the map between
    implementing the zero-length hack in msync(2) and reacquiring the
    map lock in vm_map_sync().

  Reviewed by: tegge
* - Rename vm_map_clean() to vm_map_sync(). This better reflects the fact [alc, 2003-11-09, 1 file, -59/+5]

  that msync(2) is its only caller.
  - Migrate the parts of the old vm_map_clean() that examined the
    internals of a vm object to a new function vm_object_sync() that is
    implemented in vm_object.c. At the same time, introduce the
    necessary vm object locking so that vm_map_sync() and
    vm_object_sync() can be called without Giant.

  Reviewed by: tegge
* - Move the implementation of OBJ_ONEMAPPING from vm_map_delete() to [alc, 2003-11-05, 1 file, -30/+24]

  vm_map_entry_delete() so that all of the vm object manipulation is
  performed in one place.
* Update avail_ssize for rstacks after growing them. [marcel, 2003-11-04, 1 file, -0/+1]
* Whitespace cleanup. [des, 2003-11-03, 1 file, -29/+29]
* - Increase the scope of the source object lock in vm_map_copy_entry(). [alc, 2003-11-03, 1 file, -5/+3]
* - Introduce and use vm_object_reference_locked(). Unlike [alc, 2003-11-02, 1 file, -1/+1]

  vm_object_reference(), this function must not be used to reanimate
  dead vm objects. This restriction simplifies locking.

  Reviewed by: tegge
* Fix two bugs introduced with the rstack functionality and specific to [marcel, 2003-10-31, 1 file, -1/+2]

  the rstack functionality:

  1. Fix a KASSERT that tests for the address to be above the upward
     growable stack. Typically for rstack, the faulting address can be
     identical to the record end of the upward growable entry, and very
     likely is on ia64. The KASSERT tested for greater than, not
     greater equal, so whenever the register stack had to be grown the
     assertion fired.
  2. When we grow the upward growable stack entry and adjust the
     underlying object, don't forget to adjust the size of the VM map.
     Not doing so would trigger an assert in vm_map_zdtor().

  Pointy hat: marcel (for not testing with INVARIANTS).
* Corrections to revision 1.305 [alc, 2003-10-18, 1 file, -22/+36]

  - Specifying VM_MAP_WIRE_HOLESOK should not assume that the start
    address is the beginning of the map. Instead, move to the first
    entry after the start address.
  - The implementation of VM_MAP_WIRE_HOLESOK was incomplete. This
    caused the failure of mlockall(2) in some circumstances.
* Move pmap_resident_count() from the MD pmap.h to the MI pmap.h. [bms, 2003-10-06, 1 file, -0/+6]

  Add a definition of pmap_wired_count().
  Add a definition of vmspace_wired_count().

  Reviewed by: truckman
  Discussed with: peter
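  A sketch of what such MI definitions plausibly look like (assumed; the
  bodies below paraphrase the idea that both counts come from per-pmap
  statistics, and are not copied from the tree):

      #define pmap_resident_count(pmap) ((pmap)->pm_stats.resident_count)
      #define pmap_wired_count(pmap)    ((pmap)->pm_stats.wired_count)

      /* vmspace_wired_count() then reduces to the vmspace's pmap count. */
      #define vmspace_wired_count(vm)   pmap_wired_count(vmspace_pmap(vm))
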
* Part 2 of implementing rstacks: add the ability to create rstacks and [marcel, 2003-09-27, 1 file, -39/+55]

  use the ability on ia64 to map the register stack. The orientation of
  the stack (i.e. its grow direction) is passed to vm_map_stack() in
  the overloaded cow argument. Since the grow direction is represented
  by bits, it is possible and allowed to create bi-directional stacks.
  This is not an advertised feature, more of a side-effect.

  Fix a bug in vm_map_growstack() that's specific to rstacks and which
  we could only find by having the ability to create rstacks: when the
  mapped stack ends at the faulting address, we have not actually
  mapped the faulting address. We need to include or cover the faulting
  address.

  Note that at this time mmap(2) has not been extended to allow the
  creation of rstacks by processes. If such a need arises, this can be
  done.

  Tested on: alpha, i386, ia64, sparc64
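  A sketch of the overloaded cow argument described above (assumed; the
  MAP_STACK_GROWS_* flag names are inferred from the MAP_ENTRY_GROWS_*
  work in the 2003-08-30 entry below, and the call sites are
  illustrative only):

      /* ia64 register stack: grows upward from its base. */
      error = vm_map_stack(map, rsbase, max_ssize, prot, maxprot,
          cow | MAP_STACK_GROWS_UP);

      /* Conventional stack: grows downward from its top. */
      error = vm_map_stack(map, stktop, max_ssize, prot, maxprot,
          cow | MAP_STACK_GROWS_DOWN);
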
* Adjust the kmapentzone limit so that it takes into account the size of [silby, 2003-09-23, 1 file, -1/+3]

  maxproc and maxfiles, as procs, pipes, and other structures cause
  allocations from kmapentzone.

  Submitted by: tegge
* Change the handling of the kernel and kmem objects in vm_map_delete(): In [alc, 2003-09-23, 1 file, -23/+18]

  order to use "unmanaged" pages in the kmem object, vm_map_delete()
  must unconditionally perform pmap_remove(). Otherwise, sparc64 has
  problems.

  Tested by: jake
* Introduce MAP_ENTRY_GROWS_DOWN and MAP_ENTRY_GROWS_UP to allow for [marcel, 2003-08-30, 1 file, -80/+144]

  growable (stack) entries that not only grow down, but also grow up.
  Have vm_map_growstack() take these flags into account when growing an
  entry.

  This is the first step in adding support for upward growable stacks.
  It is a required feature on ia64 to support the register stack (or
  rstack as I like to call it -- it also means reverse stack). We do
  not currently create rstacks, so the upward growing is not exercised
  and the change should be a functional no-op.

  Reviewed by: alc
* Remove GIANT_REQUIRED from vmspace_alloc(). [alc, 2003-08-13, 1 file, -1/+0]
* Add the mlockall() and munlockall() system calls. [bms, 2003-08-11, 1 file, -12/+39]

  - All those diffs to syscalls.master for each architecture *are*
    necessary. This needed clarification; the stub code generation for
    mlockall() was disabled, which would prevent applications from
    linking to this API (suggested by mux).
  - Giant has been quashed. It is no longer held by the code, as the
    required locking has been pushed down within vm_map.c.
  - Callers must specify VM_MAP_WIRE_HOLESOK or VM_MAP_WIRE_NOHOLES to
    express their intention explicitly.
  - Inspected at the vmstat, top and vm pager sysctl stats level.
    Paging-in activity is occurring correctly, using a test harness.
  - The RES size for a process may appear to be greater than its SIZE.
    This is believed to be due to mappings of the same shared library
    page being wired twice. Further exploration is needed.
  - Believed to back out of allocations and locks correctly (tested
    with WITNESS, MUTEX_PROFILING, INVARIANTS and DIAGNOSTIC).

  PR: kern/43426, standards/54223
  Reviewed by: jake, alc
  Approved by: jake (mentor)
  MFC after: 2 weeks
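  Hypothetical usage of the new system calls (not from the commit): wire
  everything mapped now and in the future, then release it all.

      #include <sys/mman.h>
      #include <err.h>

      int
      main(void)
      {
              if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
                      err(1, "mlockall");   /* privilege/limits apply */
              /* ... latency-critical work with no page faults ... */
              if (munlockall() == -1)
                      err(1, "munlockall");
              return (0);
      }
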
* Move the implementation of the vmspace_swap_count() (used only in [phk, 2003-07-18, 1 file, -37/+0]

  the "toss the largest process" emergency handling) from vm_map.c to
  swap_pager.c.

  The quantity calculated depends strongly on the internals of the
  swap_pager, and by moving it, we no longer need to expose the
  internal metrics of the swap_pager to the world.
* Background: pmap_object_init_pt() premaps the pages of an object in [alc, 2003-07-03, 1 file, -1/+74]

  order to avoid the overhead of later page faults. In general, it
  implements two cases: one for vnode-backed objects and one for
  device-backed objects. Only the device-backed case is really
  machine-dependent, belonging in the pmap.

  This commit moves the vnode-backed case into the (relatively) new
  function vm_map_pmap_enter(). On amd64 and i386, this commit only
  amounts to code rearrangement. On alpha and ia64, the new machine
  independent (MI) implementation of the vnode case is smaller and more
  efficient than their pmap-based implementations. (The MI
  implementation takes advantage of the fact that objects in -CURRENT
  are ordered collections of pages.) On sparc64,
  pmap_object_init_pt() hadn't (yet) been implemented.
* Check the address provided to vm_map_stack() against the vm map's maximum, [alc, 2003-07-01, 1 file, -1/+2]

  returning an error if the address is too high.
* Introduce vm_map_pmap_enter(). Presently, this is a stub calling the MD [alc, 2003-06-29, 1 file, -7/+19]

  pmap_object_init_pt().
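  The stub plausibly amounts to something like this (assumed; the
  parameter list is paraphrased from the surrounding entries, not
  copied from the tree):

      void
      vm_map_pmap_enter(vm_map_t map, vm_offset_t addr,
          vm_object_t object, vm_pindex_t pindex, vm_size_t size)
      {

              /* Hand the whole range to the MD premapping code. */
              pmap_object_init_pt(map->pmap, addr, object, pindex, size);
      }
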
* Simple read-modify-write operations on a vm object's flags, ref_count, and [alc, 2003-06-27, 1 file, -4/+0]

  shadow_count can now rely on its mutex for synchronization. Remove
  one use of Giant from vm_map_insert().
* Remove a GIANT_REQUIRED on the kernel object that we no longer need. [alc, 2003-06-25, 1 file, -2/+0]
* Use __FBSDID(). [obrien, 2003-06-11, 1 file, -2/+3]
* Pass the vm object to vm_object_collapse() with its lock held. [alc, 2003-06-07, 1 file, -2/+2]
* Increase the scope of the vm_object lock in vm_map_delete(). [alc, 2003-04-30, 1 file, -12/+13]
* Add vm_object locking to vmspace_swap_count(). [alc, 2003-04-30, 1 file, -5/+6]
* - Extend the scope of two existing vm_object locks to cover [alc, 2003-04-26, 1 file, -1/+1]

  swap_pager_freespace().
* - Acquire the vm_object's lock when performing vm_object_page_clean(). [alc, 2003-04-24, 1 file, -0/+2]

  - Add a parameter to vm_pageout_flush() that tells vm_pageout_flush()
    whether its caller has locked the vm_object. (This is a temporary
    measure to bootstrap vm_object locking.)
* - Update the vm_object locking in vm_map_insert(). [alc, 2003-04-20, 1 file, -8/+13]
* Update vm_object locking in vm_map_delete(). [alc, 2003-04-20, 1 file, -5/+9]
* o Update locking around vm_object_page_remove() in vm_map_clean() [alc, 2003-04-19, 1 file, -4/+2]

  to use the new macros.
  o Remove unnecessary increment and decrement of the vm_object's
    reference count in vm_map_clean().
* Lock some manipulations of the vm object's flags. [alc, 2003-04-13, 1 file, -4/+4]
* Including <sys/stdint.h> is (almost?) universally only to be able to use [phk, 2003-03-18, 1 file, -1/+0]

  %j in printfs, so put a nested include in <sys/systm.h> where the
  printf prototype lives and save everybody else the trouble.
* - When the VM daemon is out of swap space and looking for a [das, 2003-03-12, 1 file, -2/+13]

  process to kill, don't block on a map lock while holding the process
  lock. Instead, skip processes whose map locks are held and find
  something else to kill.
  - Add vm_map_trylock_read() to support the above.

  Reviewed by: alc, mike (mentor)
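  A sketch of the pattern this enables in the out-of-swap scan (assumed,
  not the commit's code):

      vm = p->p_vmspace;
      if (!vm_map_trylock_read(&vm->vm_map)) {
              /* Map lock contended: don't sleep with the proc lock held. */
              PROC_UNLOCK(p);
              continue;       /* find something else to kill */
      }
      /* ... evaluate the candidate while the map is stable ... */
      vm_map_unlock_read(&vm->vm_map);
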