path: root/sys/vm/vm_map.h
Each entry below shows the commit message, followed by (author, date, files changed, lines -removed/+added).
* o Update some comments.  (alc, 2002-09-22, 1 file, -7/+9)
* Change struct vmspace->vm_shm from void * to struct shmmap_state *, which
  removes the need for casts in several cases.
  (alfred, 2002-07-22, 1 file, -1/+1)
* Remove caddr_t.  (alfred, 2002-07-22, 1 file, -1/+1)
* o Add a "needs wakeup" flag to the vm_map for use by kmem_alloc_wait()
    and kmem_free_wakeup().  Previously, kmem_free_wakeup() always called
    wakeup().  In general, no one was sleeping.
  o Export vm_map_unlock_and_wait() and vm_map_wakeup() from vm_map.c for
    use in vm_kern.c.
  (alc, 2002-07-11, 1 file, -0/+3)
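The idea behind the flag is to make wakeup() conditional on there actually being a sleeper. A rough kernel-style sketch of that pattern, with illustrative structure, flag, and function names (not the actual vm_map fields):

    struct kmap_sketch {
        int flags;
    #define KMAP_NEEDS_WAKEUP   0x01
        /* ... map contents ... */
    };

    /* Waiter: record that a wakeup is wanted before going to sleep. */
    static void
    kmap_wait_sketch(struct kmap_sketch *m)
    {
        m->flags |= KMAP_NEEDS_WAKEUP;
        /* unlock the map and sleep on it until space is freed ... */
    }

    /* Freer: only pay for wakeup() when somebody is actually waiting. */
    static void
    kmap_free_done_sketch(struct kmap_sketch *m)
    {
        if (m->flags & KMAP_NEEDS_WAKEUP) {
            m->flags &= ~KMAP_NEEDS_WAKEUP;
            wakeup(m);
        }
    }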
* o Eliminate vmspace::vm_minsaddr.  It's initialized but never used.
  o Replace stale comments in vmspace by "const until freed" annotations on
    some fields.
  (alc, 2002-06-25, 1 file, -5/+5)
* o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
    vm_map_user_pageable().
  o Remove vm_map_pageable() and vm_map_user_pageable().
  o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
    only used by vm_map_pageable() and vm_map_user_pageable().)
  Reviewed by: tegge
  (alc, 2002-06-14, 1 file, -8/+0)
* o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire().
  o Add a stub for vm_map_wire().
  Note: the description of the previous commit had an error.  The
  in-transition flag actually blocks the deallocation of a vm_map_entry by
  vm_map_delete() and vm_map_simplify_entry().
  (alc, 2002-06-08, 1 file, -0/+2)
* o Add vm_map_unwire() for unwiring contiguous regions of either kernel or
    user vm_maps.  In accordance with the standards for munlock(2), and in
    contrast to vm_map_user_pageable(), this implementation does not allow
    holes in the specified region.  This implementation uses the "in
    transition" flag described below.
  o Introduce a new flag, "in transition", to the vm_map_entry.  Eventually,
    vm_map_delete() and vm_map_simplify_entry() will respect this flag by
    deallocating in-transition vm_map_entrys, allowing the vm_map lock to be
    safely released in vm_map_unwire() and (the forthcoming) vm_map_wire().
  o Modify vm_map_simplify_entry() to respect the in-transition flag.
  In collaboration with: tegge
  (alc, 2002-06-07, 1 file, -0/+4)
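The point of the flag is to let a long-running wire/unwire drop the map lock without its marked entries being freed or merged underneath it. A rough sketch of that protocol, with illustrative names (the exact identifiers in this revision may differ):

    static void
    unwire_entry_sketch(vm_map_t map, vm_map_entry_t entry)
    {
        vm_map_lock(map);
        entry->eflags |= MAP_ENTRY_IN_TRANSITION;   /* pin the entry */
        vm_map_unlock(map);

        /* ... unwire the entry's pages; this may sleep ... */

        vm_map_lock(map);
        entry->eflags &= ~MAP_ENTRY_IN_TRANSITION;  /* unpin */
        vm_map_unlock(map);
    }

    /* Deletion/simplification code skips pinned entries, roughly: */
    /*     if (entry->eflags & MAP_ENTRY_IN_TRANSITION) return;    */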
* o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
    vm_map_create(), and vm_map_submap().
  o Make further use of a local variable in vm_map_entry_splay() that caches
    a reference to one of a vm_map_entry's children.  (This reduces code
    size somewhat.)
  o Revert a part of revision 1.66, deinlining vmspace_pmap().  (This
    function is MPSAFE.)
  (alc, 2002-06-01, 1 file, -1/+8)
* o Revert a part of revision 1.66; contrary to what that commit message
    says, deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
    actually increases the kernel's size.
  o Make vm_map_entry_set_behavior() static and add a comment describing its
    purpose.
  o Remove an unnecessary initialization statement from vm_map_entry_splay().
  (alc, 2002-06-01, 1 file, -3/+8)
* o Replace the vm_map's hint by the root of a splay tree.  By design, the
    last accessed datum is moved to the root of the splay tree.  Therefore,
    on lookups in which the hint resulted in O(1) access, the splay tree
    still achieves O(1) access.  In contrast, on lookups in which the hint
    failed miserably, the splay tree achieves amortized logarithmic
    complexity, resulting in dramatic improvements on vm_maps with a large
    number of entries.  For example, the execution time for replaying an
    access log from www.cs.rice.edu against the thttpd web server was
    reduced by 23.5% due to the large number of files simultaneously
    mmap()ed by this server.  (The machine in question has enough memory to
    cache most of this workload.)
    Nothing comes for free: at present, I see a 0.2% slowdown on
    "buildworld" due to the overhead of maintaining the splay tree.  I
    believe that some or all of this can be eliminated through optimizations
    to the code.
  Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
  Reviewed by: jeff
  (alc, 2002-05-24, 1 file, -1/+3)
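The O(1)-for-repeated-lookups property comes from every lookup rotating the entry it finds to the root. Below is a minimal, self-contained top-down splay over a tree keyed by a start address; the node layout and names are illustrative, not the actual vm_map_entry fields.

    struct node {
        unsigned long start;            /* key: start of the region */
        struct node *left, *right;
    };

    /* Move the node whose key is closest to "addr" to the root. */
    static struct node *
    splay_sketch(struct node *root, unsigned long addr)
    {
        struct node N, *l, *r, *y;

        if (root == NULL)
            return (NULL);
        N.left = N.right = NULL;
        l = r = &N;
        for (;;) {
            if (addr < root->start) {
                if (root->left == NULL)
                    break;
                if (addr < root->left->start) {     /* rotate right */
                    y = root->left;
                    root->left = y->right;
                    y->right = root;
                    root = y;
                    if (root->left == NULL)
                        break;
                }
                r->left = root;                     /* link right */
                r = root;
                root = root->left;
            } else if (addr > root->start) {
                if (root->right == NULL)
                    break;
                if (addr > root->right->start) {    /* rotate left */
                    y = root->right;
                    root->right = y->left;
                    y->left = root;
                    root = y;
                    if (root->right == NULL)
                        break;
                }
                l->right = root;                    /* link left */
                l = root;
                root = root->right;
            } else
                break;
        }
        l->right = root->left;                      /* reassemble */
        r->left = root->right;
        root->left = N.right;
        root->right = N.left;
        return (root);
    }

A lookup for the same address as the previous one finds it at the root immediately; a bad-case lookup pays a logarithmic amortized cost instead of the long walk a stale hint could cause.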
* o Header files shouldn't depend on options: provide prototypes for
    uiomoveco(), uioread(), and vm_uiomove() regardless of whether
    ENABLE_VFS_IOOPT is defined or not.
  Submitted by: bde
  (alc, 2002-05-06, 1 file, -3/+0)
* o Move vm_freeze_copyopts() from vm_map.{c,h} to vm_object.{c,h}.  It's
    plainly an operation on a vm_object and belongs in the latter place.
  (alc, 2002-05-06, 1 file, -1/+0)
* o Condition the compilation of uiomoveco() and vm_uiomove() on
    ENABLE_VFS_IOOPT.
  o Add a comment to the effect that this code is experimental support for
    zero-copy I/O.
  (alc, 2002-05-05, 1 file, -1/+3)
* o Remove dead and lockmgr()-specific debugging code.  (alc, 2002-05-02, 1 file, -17/+0)
* Pass the caller's file name and line number to the vm_map locking
  functions.  (alc, 2002-04-28, 1 file, -9/+24)
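A common way to thread the call site through to the locking code is a wrapper macro; the sketch below uses illustrative names rather than the exact macros added by this commit:

    void _vm_map_lock_sketch(vm_map_t map, const char *file, int line);

    /* Each call site records where the lock was taken, which makes lock
     * diagnostics far more useful than a bare return address. */
    #define vm_map_lock_sketch(map) \
            _vm_map_lock_sketch((map), __FILE__, __LINE__)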
* o Introduce and use vm_map_trylock() to replace several direct uses of
    lockmgr().
  o Add missing synchronization to vmspace_swap_count(): obtain a read lock
    on the vm_map before traversing it.
  (alc, 2002-04-28, 1 file, -0/+1)
* o Begin documenting the (existing) locking protocol on the vm_map in the
    same style as sys/proc.h.
  o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
    (Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
    revision 1.206, de-inlining these methods increased the kernel's size.)
  (alc, 2002-04-27, 1 file, -6/+26)
* Remove an unused option, VM_FAULT_HOLD, to vm_fault().  (alc, 2002-04-17, 1 file, -1/+0)
* This is the first part of the new kernel memory allocator.  This replaces
  malloc(9) and vm_zone with a slab-like allocator.
  Reviewed by: arch@
  (jeff, 2002-03-19, 1 file, -1/+0)
* Back out the modification of vm_map locks from lockmgr to sx locks.  The
  best path forward now is likely to change the lockmgr locks to simple
  sleep mutexes, then see whether any extra contention they generate is
  greater than the removed overhead of managing local locking state
  information, the cost of extra calls into lockmgr, etc.  Additionally,
  making the vm_map lock a mutex and respecting it properly will put us much
  closer to not needing Giant magic in vm.
  (green, 2002-03-18, 1 file, -20/+10)
* Rename SI_SUB_MUTEX to SI_SUB_MTX_POOL to make the name at all accurate.
  While doing this, move it earlier in the sysinit boot process so that the
  VM system can use it.  After that, the system is able to use sx locks
  instead of lockmgr locks in the VM system.
  To accomplish this, some of the more questionable uses of the locks (such
  as testing whether they are owned or not, as well as allowing
  shared+exclusive recursion) are removed, and simpler logic is used
  throughout, so the locks should also be easier to understand.
  This has been tested on my laptop for months and has not shown any
  problems on SMP systems either, so it appears quite safe.  One more user
  of lockmgr down, many more to go :)
  (green, 2002-03-13, 1 file, -10/+20)
* - Remove a number of extra newlines that do not belong here according to
    style(9).
  - Minor space adjustments in cases where we have "( ", " )", if(),
    return(), while(), for(), etc.
  - Add /* SYMBOL */ after a few #endifs.
  Reviewed by: alc
  (eivind, 2002-03-10, 1 file, -6/+2)
* Fix a race with freeing vmspaces at process exit when vmspaces are shared.
  Also introduce vm_endcopy instead of using pointer tricks when
  initializing new vmspaces.
  The race occurred because of how the reference was used: test the vmspace
  reference, possibly block, then decrement the reference.  When sharing a
  vmspace between multiple processes, it was possible for two processes
  exiting at the same time to test the reference count, possibly block, and
  neither one free the vmspace because neither saw the other's update.
  Submitted by: green
  (alfred, 2002-02-05, 1 file, -0/+2)
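An illustrative reconstruction of the racy pattern and the safe shape; this is not the actual exit-path code, and vmspace_release_sketch() is a hypothetical helper:

    /* Racy: with two sharers, both exiting processes can see
     * vm_refcnt > 1 here, both skip the free, and both decrement
     * afterwards, so nobody frees the vmspace. */
    if (vm->vm_refcnt == 1)
        vmspace_release_sketch(vm);     /* may block before this runs */
    vm->vm_refcnt--;

    /* Safe shape: the check and the decrement are one step, so exactly
     * one of the exiting processes observes the count reaching zero. */
    if (--vm->vm_refcnt == 0)
        vmspace_release_sketch(vm);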
* Don't let pmap_object_init_pt() exhaust all available free pages
  (allocating pv entries w/ zalloci) when called in a loop due to an
  madvise().  It is possible to completely exhaust the free page list and
  cause a system panic when an expected allocation fails.
  (dillon, 2001-10-31, 1 file, -0/+1)
* KSE Milestone 2
  Note: ALL MODULES MUST BE RECOMPILED.
  Make the kernel aware that there are smaller units of scheduling than the
  process (but only allow one thread per process at this time).  This is
  functionally equivalent to the previous -current except that there is a
  thread associated with each process.
  Sorry john!  (your next MFC will be a doosie!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  (julian, 2001-09-12, 1 file, -1/+1)
* Change inlines back into mainline code in preparation for mutexing.  Also,
  most of these inlines had been bloated in -current far beyond their
  original intent.  Normalize prototypes and function declarations to be
  ANSI only (half already were).  And do some general cleanup.  (Kernel size
  was also reduced by 50-100K, but that isn't the prime intent.)
  (dillon, 2001-07-04, 1 file, -125/+54)
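The ANSI-only normalization replaces old K&R-style definitions with prototyped ones. A schematic before/after, using a hypothetical function name:

    /* K&R (old style): parameter types are declared separately and the
     * compiler cannot check arguments at call sites. */
    vm_offset_t
    map_helper_sketch(map, size)
        vm_map_t map;
        vm_size_t size;
    {
        /* ... */
    }

    /* ANSI: the prototype and the definition carry the types. */
    vm_offset_t map_helper_sketch(vm_map_t map, vm_size_t size);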
* With Alfred's permission, remove vm_mtx in favor of a fine-grained
  approach (this commit is just the first stage).  Also add various GIANT_
  macros to formalize the removal of Giant, making it easy to test in a more
  piecemeal fashion.  These macros will allow us to test fine-grained locks
  to a degree before removing Giant, and also after, and to remove Giant in
  a piecemeal fashion via sysctls on those subsystems which the authors
  believe can operate without Giant.
  (dillon, 2001-07-04, 1 file, -11/+5)
* Two fixes to the out-of-swap process termination code.  First, start
  killing processes a little earlier to avoid a deadlock.  Second, when
  calculating the "largest process", do not just count RSS; instead count
  the RSS + swap used by the process.  Without this, the code tended to kill
  small, inconsequential processes like, oh, sshd, rather than one of the
  many "eatmem 200MB" processes I run on a whim :-).  This fix has been
  extensively tested on -stable and somewhat tested on -current and will be
  MFCd in a few days.
  Shamed into fixing this by: ps
  (dillon, 2001-06-09, 1 file, -0/+1)
* Introduce a global lock for the vm subsystem (vm_mtx).
  vm_mtx does not recurse and is required for most low-level vm operations.
  Faults cannot be taken without holding Giant.
  Memory subsystems can now call the base page allocators safely.
  Almost all atomic ops were removed, as they are covered by the vm mutex.
  Alpha and ia64 now need to catch up to i386's trap handlers.
  FFS and NFS have been tested; other filesystems will need minor changes
  (grabbing the vm lock when twiddling page properties).
  Reviewed (partially) by: jake, jhb
  (alfred, 2001-05-19, 1 file, -6/+16)
* Putting sys/lockmgr.h in here allows us to depollute userland includes a
  bit.
  OK'ed by: bde
  (markm, 2001-05-03, 1 file, -0/+2)
* Fix the botched rev 1.59 where I made it such that without INVARIANTS the
  map is never locked.
  Submitted by: tegge
  (alfred, 2001-04-18, 1 file, -2/+2)
* Use %p for printing pointers; include sys/systm.h for the printf()
  prototype.  (alfred, 2001-04-13, 1 file, -6/+7)
* Use a macro wrapper over printf along with KASSERT to reduce the amount of
  code here.  (alfred, 2001-04-13, 1 file, -40/+15)
* Fix a lock reversal problem in the VM subsystem related to threaded
  programs.  There is a case during a fork() which can cause a deadlock.
  From Tor: the workaround consists of setting a flag in the vm_map that
  indicates that a fork is in progress and using that mark in the page fault
  handling to force a revalidation failure.  That change will only affect
  (pessimize) page fault handling during fork for threaded (linuxthreads
  style) applications and applications using aio_*().
  Submitted by: tegge
  (dillon, 2001-03-14, 1 file, -0/+1)
* Change and clean the mutex lock interface.
  mtx_enter(lock, type) becomes:
    mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
    mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
  Similarly, for releasing a lock, we now have mtx_unlock(lock) for MTX_DEF
  and mtx_unlock_spin(lock) for MTX_SPIN.  We change the caller interface
  for the two different types of locks because the semantics are entirely
  different for each case; this makes it explicitly clear and, at the same
  time, rids us of the extra "type" argument.
  The enter->lock and exit->unlock change has been made with the idea that
  we're "locking data" and not "entering locked code" in mind.
  Further, remove all additional "flags" previously passed to the lock
  acquire/release routines with the exception of two: MTX_QUIET and
  MTX_NOSWITCH.  The functionality of these flags is preserved and they can
  be passed to the lock/unlock routines by calling the corresponding
  wrappers: mtx_{lock, unlock}_flags(lock, flag(s)) and
  mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
  locks, respectively.
  Re-inline some lock acq/rel code; in the sleep lock case, we only inline
  the _obtain_lock()s in order to ensure that the inlined code fits into a
  cache line.  In the spin lock case, we inline recursion and actually only
  perform a function call if we need to spin.  This change has been made
  with the idea that we generally tend to avoid spin locks and that the spin
  locks we do have and use heavily (i.e. sched_lock) do recurse; therefore,
  in an effort to reduce function call overhead on some architectures (such
  as alpha), we inline recursion for this case.
  Create a new malloc type for the witness code and retire from using the
  M_DEV type.  The new type is called M_WITNESS and is only declared if
  WITNESS is enabled.
  Begin cleaning up some machdep/mutex.h code - specifically, update the
  "optimized" inlined code in alpha/mutex.h and write MTX_LOCK_SPIN and
  MTX_UNLOCK_SPIN asm macros for i386/mutex.h, as we presently need those.
  Finally, catch up to the interface changes in all sys code.
  Contributors: jake, jhb, jasone (in no particular order)
  (bmilekic, 2001-02-09, 1 file, -4/+4)
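In code, the change looks roughly like this (the mutex name below is a placeholder; sched_lock is the commit's own example of a heavily used spin lock):

    /* Before: one entry point, with the lock type repeated at every call. */
    mtx_enter(&some_mtx, MTX_DEF);
    /* ... critical section ... */
    mtx_exit(&some_mtx, MTX_DEF);

    /* After: the type is fixed when the mutex is initialized, so sleep and
     * spin mutexes get distinct lock/unlock calls with no type argument. */
    mtx_lock(&some_mtx);
    /* ... critical section ... */
    mtx_unlock(&some_mtx);

    mtx_lock_spin(&sched_lock);
    /* ... very short critical section ... */
    mtx_unlock_spin(&sched_lock);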
* For lockmgr mutex protection, use an array of mutexes that are allocated
  and initialized during boot.  This avoids bloating sizeof(struct lock).
  As a side effect, it is no longer necessary to enforce the assumption that
  lockinit()/lockdestroy() calls are paired, so the LK_VALID flag has been
  removed.
  Idea taken from: BSD/OS.
  (jasone, 2000-10-12, 1 file, -4/+4)
* Convert lockmgr locks from using simple locks to using mutexes.
  Add lockdestroy() and appropriate invocations; it corresponds to
  lockinit() and must be called to clean up after a lockmgr lock is no
  longer needed.
  (jasone, 2000-10-04, 1 file, -4/+6)
* Add MAP_NOCORE to mmap(2), and MADV_NOCORE and MADV_CORE to madvise(2).
  This feature allows you to specify whether mmap'd data is included in an
  application's corefile.
  Change the type of eflags in struct vm_map_entry from u_char to
  vm_eflags_t (an unsigned int).
  Reviewed by: dillon, jdp, alfred
  Approved by: jkh
  (ps, 2000-02-28, 1 file, -18/+23)
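A minimal userland sketch of how these flags are used (the mapping size is arbitrary):

    #include <sys/mman.h>
    #include <err.h>
    #include <stdlib.h>

    int
    main(void)
    {
        size_t len = 1024 * 1024;
        char *secret;

        /* Anonymous memory that will be excluded from core dumps. */
        secret = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_NOCORE, -1, 0);
        if (secret == MAP_FAILED)
            err(1, "mmap");

        /* The same property can be toggled on an existing mapping. */
        if (madvise(secret, len, MADV_NOCORE) == -1)
            err(1, "madvise");
        /* madvise(secret, len, MADV_CORE) would include it again. */

        munmap(secret, len);
        return (0);
    }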
* Change #ifdef KERNEL to #ifdef _KERNEL in the public headers.  "KERNEL" is
  an application-space macro and applications are supposed to be free to use
  it as they please (but cannot).  This is consistent with the other BSDs,
  which made this change quite some time ago.  More commits to come.
  (peter, 1999-12-29, 1 file, -1/+1)
* Add the MAP_NOSYNC feature to mmap(), and MADV_NOSYNC and MADV_AUTOSYNC to
  madvise().
  This feature prevents the update daemon from gratuitously flushing dirty
  pages associated with a mapped file-backed region of memory.  The system
  pager will still page the memory as necessary and the VM system will still
  be fully coherent with the filesystem.  Modifications made by other means
  to the same area of memory, for example by write(), are unaffected.  The
  feature works on a page-granularity basis.
  MAP_NOSYNC allows one to use mmap() to share memory between processes
  without incurring any significant filesystem overhead, putting it in the
  same performance category as SysV shared memory and anonymous memory.
  Reviewed by: julian, alc, dg
  (dillon, 1999-12-12, 1 file, -1/+2)
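A short userland sketch of the mmap() side of this; "shared.dat" and the mapping size are placeholders, and the file is assumed to exist and be at least one page long:

    #include <sys/mman.h>
    #include <err.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t len = 1024 * 1024;
        int fd;
        char *p;

        fd = open("shared.dat", O_RDWR);
        if (fd == -1)
            err(1, "open");

        /* The syncer will not periodically flush dirty pages in this
         * mapping; the data stays coherent with the file, and msync(2)
         * still writes it back on request. */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_SHARED | MAP_NOSYNC, fd, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");

        p[0] = 1;               /* cheap shared-memory style update */

        munmap(p, len);
        close(fd);
        return (0);
    }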
* Clean up the madvise code; add a few more sanity checks.
  Reviewed by: Alan Cox <alc@cs.rice.edu>, dg@root.com
  (dillon, 1999-09-21, 1 file, -1/+1)
* Add a "lastr" field to vm_map_entry in preparation for its removal from
  the vnode.  (The changeover is undergoing final testing and will be
  committed soon.)
  Reviewed by: Alan Cox <alc@cs.rice.edu>, David Greenman <dg@root.com>
  (dillon, 1999-09-17, 1 file, -0/+1)
* $Id$ -> $FreeBSD$  (peter, 1999-08-28, 1 file, -1/+1)
* Correct the inconsistent formatting in struct vm_map.
  Addendum to rev 1.47: submitted by dillon.
  (alc, 1999-08-23, 1 file, -2/+2)
* struct vm_map: the lock structure cannot be the first element of the
  vm_map, because this can result in livelock between two or more system
  processes trying to kmem_alloc_wait.
  (alc, 1999-08-23, 1 file, -2/+9)
* Fix breakage: an extra brace got inserted where DIAGNOSTIC was defined but
  MAP_LOCK_DIAGNOSTIC wasn't.
  (mjacob, 1999-08-18, 1 file, -2/+1)
* vm_map_lock*: remove semicolons or add "do { } while (0)" as necessary to
  enable the use of these macros in arbitrary statements.  (There are no
  functional changes.)
  Submitted by: dillon
  (alc, 1999-08-16, 1 file, -41/+52)
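The reason for the do { } while (0) idiom: a multi-statement macro expands badly inside an unbraced if/else unless its whole body behaves as a single statement. A self-contained illustration with made-up names (not the actual vm_map_lock macros):

    struct map_sketch { int lk_count; };

    static void lock_it(struct map_sketch *m) { (void)m; /* take the lock */ }

    /* Broken: only the first statement is guarded by the if below. */
    #define MAP_LOCK_BAD(map)   (map)->lk_count++; lock_it(map)

    /* Correct: the body expands as one statement and consumes the
     * caller's trailing semicolon, so if/else still pairs up. */
    #define MAP_LOCK_GOOD(map) do {     \
            (map)->lk_count++;          \
            lock_it(map);               \
    } while (0)

    static void
    example(struct map_sketch *map, int need_lock)
    {
        if (need_lock)
            MAP_LOCK_GOOD(map);
        else
            map->lk_count = 0;
    }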
* Move the memory access behavior information provided by madvise from the
  vm_object to the vm_map.
  Submitted by: dillon
  (alc, 1999-08-01, 1 file, -1/+21)
* Remove unused function prototypes.  (alc, 1999-07-10, 1 file, -3/+1)