path: root/sys/vm/vm_map.c
Commit message | Author | Age | Files | Lines
* o Remove LK_CANRECURSE from the vm_map lock.
  (alc, 2002-06-18; 1 file, -2/+2)
* - Introduce the new M_NOVM option which tells uma to only check the
    currently allocated slabs and bucket caches for free items.  It will not
    go ask the vm for pages.  This differs from M_NOWAIT in that it not only
    doesn't block, it doesn't even ask.
  - Add a new zcreate option ZONE_VM, that sets the BUCKETCACHE zflag.  This
    tells uma that it should only allocate buckets out of the bucket cache,
    and not from the VM.  It does this by using the M_NOVM option to zalloc
    when getting a new bucket.  This is so that the VM doesn't recursively
    enter itself while trying to allocate buckets for vm_map_entry zones.  If
    there are already allocated buckets when we get here we'll still use them
    but otherwise we'll skip it.
  - Use the ZONE_VM flag on vm map entries and pv entries on x86.
  (jeff, 2002-06-17; 1 file, -1/+2)
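The recursion-avoidance idea behind M_NOVM can be sketched in userland C. This is illustrative only: `ALLOC_NOVM`, `zone_alloc`, and `zone_free` are stand-ins, not the uma API. A cache-only allocation mode consumes items the zone already holds but refuses to call back into the page allocator, which is exactly what breaks the loop when the VM needs map entries to service its own allocations.

```c
#include <stddef.h>
#include <stdlib.h>

#define ALLOC_NOVM 0x01   /* hypothetical flag: cache hits only, never recurse */

struct item { struct item *next; };

struct zone {
    struct item *cache;   /* free items already backed by pages */
};

void *zone_alloc(struct zone *z, int flags)
{
    struct item *it = z->cache;

    if (it != NULL) {               /* cache hit: always allowed */
        z->cache = it->next;
        return (it);
    }
    if (flags & ALLOC_NOVM)         /* cache miss: refuse to re-enter the VM */
        return (NULL);
    return (malloc(64));            /* stand-in for asking the VM for a page */
}

void zone_free(struct zone *z, void *p)
{
    struct item *it = p;

    it->next = z->cache;            /* freed items refill the cache */
    z->cache = it;
}
```

A caller deep inside the VM passes ALLOC_NOVM and treats NULL as "try again later", while ordinary callers fall through to the page allocator.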
* o Acquire and release Giant in vm_map_wakeup() to prevent a lost wakeup().
  Reviewed by: tegge
  (alc, 2002-06-17; 1 file, -0/+7)
* o Use vm_map_wire() and vm_map_unwire() in place of vm_map_pageable() and
    vm_map_user_pageable().
  o Remove vm_map_pageable() and vm_map_user_pageable().
  o Remove vm_map_clear_recursive() and vm_map_set_recursive().  (They were
    only used by vm_map_pageable() and vm_map_user_pageable().)
  Reviewed by: tegge
  (alc, 2002-06-14; 1 file, -397/+0)
* o Acquire and release Giant in vm_map_unlock_and_wait().
  Submitted by: tegge
  (alc, 2002-06-12; 1 file, -3/+5)
* o Properly handle a failure by vm_fault_wire() or vm_fault_user_wire() in
    vm_map_wire().
  o Make two white-space changes in vm_map_wire().
  Reviewed by: tegge
  (alc, 2002-06-11; 1 file, -4/+20)
* o Teach vm_map_delete() to respect the "in-transition" flag on a
    vm_map_entry by sleeping until the flag is cleared.
  Submitted by: tegge
  (alc, 2002-06-11; 1 file, -0/+31)
* o In vm_map_entry_create(), call uma_zalloc() with M_NOWAIT on system maps.
    Submitted by: tegge
  o Eliminate the "!mapentzone" check from vm_map_entry_create() and
    vm_map_entry_dispose().
    Reviewed by: tegge
  o Fix white-space usage in vm_map_entry_create().
  (alc, 2002-06-10; 1 file, -5/+6)
* o Add vm_map_wire() for wiring contiguous regions of either kernel or user
    vm_maps.  This implementation has two key benefits when compared to
    vm_map_{user_,}pageable(): (1) it avoids a race condition through the use
    of "in-transition" vm_map entries and (2) it eliminates lock recursion on
    the vm_map.
  Note: there is still an error case that requires clean up.
  Reviewed by: tegge
  (alc, 2002-06-09; 1 file, -1/+159)
* o Simplify vm_map_unwire() by merging the second and third passes over the
    caller-specified region.
  (alc, 2002-06-08; 1 file, -17/+11)
* o Remove an unnecessary call to vm_map_wakeup() from vm_map_unwire().
  o Add a stub for vm_map_wire().
  Note: the description of the previous commit had an error.  The
  in-transition flag actually blocks the deallocation of a vm_map_entry by
  vm_map_delete() and vm_map_simplify_entry().
  (alc, 2002-06-08; 1 file, -6/+15)
* o Add vm_map_unwire() for unwiring contiguous regions of either kernel or
    user vm_maps.  In accordance with the standards for munlock(2), and in
    contrast to vm_map_user_pageable(), this implementation does not allow
    holes in the specified region.  This implementation uses the "in
    transition" flag described below.
  o Introduce a new flag, "in transition," to the vm_map_entry.  Eventually,
    vm_map_delete() and vm_map_simplify_entry() will respect this flag by
    deallocating in-transition vm_map_entrys, allowing the vm_map lock to be
    safely released in vm_map_unwire() and (the forthcoming) vm_map_wire().
  o Modify vm_map_simplify_entry() to respect the in-transition flag.
  In collaboration with: tegge
  (alc, 2002-06-07; 1 file, -1/+163)
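The "in transition" protocol above can be sketched in userland C, using a pthread mutex/condvar pair in place of the vm_map lock and vm_map_unlock_and_wait(). All names are illustrative, not the kernel's: a deleter that finds an entry marked in-transition sleeps until the wiring code clears the flag and wakes it, instead of deallocating an entry another thread is still working on.

```c
#include <pthread.h>
#include <stdbool.h>

struct map_entry {
    bool in_transition;     /* set while wiring code works on the entry */
    bool deleted;
};

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  map_cv   = PTHREAD_COND_INITIALIZER;

/* Deleter side: sleep until the entry is no longer in transition. */
void entry_delete(struct map_entry *e)
{
    pthread_mutex_lock(&map_lock);
    while (e->in_transition)
        pthread_cond_wait(&map_cv, &map_lock);  /* drops and re-takes map_lock */
    e->deleted = true;      /* stand-in for deallocating the entry */
    pthread_mutex_unlock(&map_lock);
}

/* Wiring side: clear the flag and wake any sleeping deleter. */
void entry_done_wiring(struct map_entry *e)
{
    pthread_mutex_lock(&map_lock);
    e->in_transition = false;
    pthread_cond_broadcast(&map_cv);
    pthread_mutex_unlock(&map_lock);
}
```

The key design point is that the wait loop re-checks the flag after every wakeup, so the lock can be released while the entry is in transition without the deleter ever freeing an entry out from under the wiring code.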
* o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it to
    vm_object_split().  Its interface should still be changed to resemble
    vm_object_shadow().
  (alc, 2002-06-02; 1 file, -90/+1)
* o Style fixes to vm_map_split(), including the elimination of one variable
    declaration that shadows another.
  Note: This function should really be vm_object_split(), not vm_map_split().
  Reviewed by: md5
  (alc, 2002-06-02; 1 file, -8/+1)
* o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
    vm_map_create(), and vm_map_submap().
  o Make further use of a local variable in vm_map_entry_splay() that caches
    a reference to one of a vm_map_entry's children.  (This reduces code size
    somewhat.)
  o Revert a part of revision 1.66, deinlining vmspace_pmap().  (This
    function is MPSAFE.)
  (alc, 2002-06-01; 1 file, -24/+7)
* o Revert a part of revision 1.66: contrary to what that commit message
    says, deinlining vm_map_entry_behavior() and vm_map_entry_set_behavior()
    actually increases the kernel's size.
  o Make vm_map_entry_set_behavior() static and add a comment describing its
    purpose.
  o Remove an unnecessary initialization statement from vm_map_entry_splay().
  (alc, 2002-06-01; 1 file, -14/+13)
* Further work on pushing Giant out of the vm_map layer and down into the
  vm_object layer:
  o Acquire and release Giant in vm_object_shadow() and
    vm_object_page_remove().
  o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s call to
    vm_object_page_remove().
  o Remove the acquisition and release of Giant around vm_map_lookup()'s
    call to vm_object_shadow().
  (alc, 2002-05-31; 1 file, -3/+2)
* o Acquire and release Giant around pmap operations in vm_fault_unwire()
    and vm_map_delete().  Assert GIANT_REQUIRED in vm_map_delete() only if
    operating on the kernel_object or the kmem_object.
  o Remove GIANT_REQUIRED from vm_map_remove().
  o Remove the acquisition and release of Giant from munmap().
  (alc, 2002-05-26; 1 file, -4/+3)
* o Replace the vm_map's hint by the root of a splay tree.  By design, the
    last accessed datum is moved to the root of the splay tree.  Therefore,
    on lookups in which the hint resulted in O(1) access, the splay tree
    still achieves O(1) access.  In contrast, on lookups in which the hint
    failed miserably, the splay tree achieves amortized logarithmic
    complexity, resulting in dramatic improvements on vm_maps with a large
    number of entries.  For example, the execution time for replaying an
    access log from www.cs.rice.edu against the thttpd web server was
    reduced by 23.5% due to the large number of files simultaneously
    mmap()ed by this server.  (The machine in question has enough memory to
    cache most of this workload.)
    Nothing comes for free: At present, I see a 0.2% slowdown on
    "buildworld" due to the overhead of maintaining the splay tree.  I
    believe that some or all of this can be eliminated through
    optimizations to the code.
  Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
  Reviewed by: jeff
  (alc, 2002-05-24; 1 file, -80/+103)
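The splay idea behind this commit can be sketched with the canonical top-down splay (after Sleator and Tarjan). This is not the kernel's vm_map_entry_splay(); `struct entry` here carries only a start address and child pointers. Splaying on every lookup moves the accessed entry to the root, so a lookup that repeats the last access costs O(1), like the old hint, while arbitrary lookups cost amortized O(log n).

```c
#include <stddef.h>

struct entry {
    unsigned long start;            /* stand-in for a map entry's start address */
    struct entry *left, *right;
};

/* Top-down splay: bring the entry whose key is closest to `key` to the root. */
struct entry *splay(struct entry *root, unsigned long key)
{
    struct entry N, *l, *r, *y;

    if (root == NULL)
        return (NULL);
    N.left = N.right = NULL;
    l = r = &N;
    for (;;) {
        if (key < root->start) {
            if (root->left == NULL)
                break;
            if (key < root->left->start) {      /* rotate right */
                y = root->left;
                root->left = y->right;
                y->right = root;
                root = y;
                if (root->left == NULL)
                    break;
            }
            r->left = root;                     /* link right */
            r = root;
            root = root->left;
        } else if (key > root->start) {
            if (root->right == NULL)
                break;
            if (key > root->right->start) {     /* rotate left */
                y = root->right;
                root->right = y->left;
                y->left = root;
                root = y;
                if (root->right == NULL)
                    break;
            }
            l->right = root;                    /* link left */
            l = root;
            root = root->right;
        } else
            break;
    }
    l->right = root->left;                      /* reassemble the three trees */
    r->left = root->right;
    root->left = N.right;
    root->right = N.left;
    return (root);
}

/* Insert a new entry and leave it at the root. */
struct entry *tree_insert(struct entry *root, struct entry *e)
{
    e->left = e->right = NULL;
    if (root == NULL)
        return (e);
    root = splay(root, e->start);
    if (e->start < root->start) {
        e->left = root->left;
        e->right = root;
        root->left = NULL;
    } else {
        e->right = root->right;
        e->left = root;
        root->right = NULL;
    }
    return (e);
}
```

The 0.2% buildworld slowdown mentioned above is the price of the rotations performed on every lookup, even ones the hint would have satisfied anyway.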
* o Remove GIANT_REQUIRED from vm_map_madvise().  Instead, acquire and
    release Giant around vm_map_madvise()'s call to pmap_object_init_pt().
  o Replace GIANT_REQUIRED in vm_object_madvise() with the acquisition and
    release of Giant.
  o Remove the acquisition and release of Giant from madvise().
  (alc, 2002-05-18; 1 file, -2/+2)
* o Remove GIANT_REQUIRED and an excessive number of blank lines from
    vm_map_inherit().  (minherit() need not acquire Giant anymore.)
  (alc, 2002-05-12; 1 file, -10/+0)
* o Acquire and release Giant in vm_object_reference() and
    vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
  o Remove GIANT_REQUIRED from vm_map_protect() and vm_map_simplify_entry().
  o Acquire and release Giant around vm_map_protect()'s call to
    pmap_protect().
  Altogether, these changes eliminate the need for mprotect() to acquire
  and release Giant.
  (alc, 2002-05-12; 1 file, -3/+2)
* o Move vm_freeze_copyopts() from vm_map.{c,h} to vm_object.{c,h}.  It's
    plainly an operation on a vm_object and belongs in the latter place.
  (alc, 2002-05-06; 1 file, -77/+0)
* o Condition the compilation of uiomoveco() and vm_uiomove() on
    ENABLE_VFS_IOOPT.
  o Add a comment to the effect that this code is experimental support for
    zero-copy I/O.
  (alc, 2002-05-05; 1 file, -0/+4)
* o Remove GIANT_REQUIRED from vm_map_lookup() and vm_map_lookup_done().
  o Acquire and release Giant around vm_map_lookup()'s call to
    vm_object_shadow().
  (alc, 2002-05-05; 1 file, -2/+2)
* o Remove GIANT_REQUIRED from vm_map_lookup_entry() and
    vm_map_check_protection().
  o Call vm_map_check_protection() without Giant held in munmap().
  (alc, 2002-05-04; 1 file, -3/+0)
* o Change the implementation of vm_map locking to use exclusive locks
    exclusively.  The interface still, however, distinguishes between a
    shared lock and an exclusive lock.
  (alc, 2002-05-02; 1 file, -26/+24)
* o Remove dead and lockmgr()-specific debugging code.
  (alc, 2002-05-02; 1 file, -6/+0)
* Add a new zone flag UMA_ZONE_MTXCLASS.  This puts the zone in its own
  mutex class.  Currently this is only used for kmapentzone because kmapents
  are potentially allocated when freeing memory.  This is not dangerous
  though because no other allocations will be done while holding the
  kmapentzone lock.
  (jeff, 2002-04-29; 1 file, -1/+1)
* Pass the caller's file name and line number to the vm_map locking
  functions.
  (alc, 2002-04-28; 1 file, -11/+11)
* o Introduce and use vm_map_trylock() to replace several direct uses of
    lockmgr().
  o Add missing synchronization to vmspace_swap_count(): Obtain a read lock
    on the vm_map before traversing it.
  (alc, 2002-04-28; 1 file, -0/+10)
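The shape of a trylock wrapper like the one introduced above can be sketched in userland with pthreads (illustrative names; the real vm_map_trylock() wraps the map's lockmgr lock, not a pthread mutex): attempt the lock without sleeping and report success or failure, so callers that must not block can bail out instead.

```c
#include <pthread.h>

/* Illustrative stand-in for a vm_map with its lock. */
struct map {
    pthread_mutex_t lock;
};

/* Try to take the map lock without sleeping; nonzero on success. */
int map_trylock(struct map *m)
{
    return (pthread_mutex_trylock(&m->lock) == 0);
}

void map_unlock(struct map *m)
{
    pthread_mutex_unlock(&m->lock);
}
```

A caller in a context that cannot sleep writes `if (!map_trylock(m)) return;` and retries later, rather than risking a deadlock by blocking on the lock.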
* o Begin documenting the (existing) locking protocol on the vm_map in the
    same style as sys/proc.h.
  o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
    (Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
    revision 1.206, de-inlining these methods increased the kernel's size.)
  (alc, 2002-04-27; 1 file, -19/+0)
* Do not free the vmspace until p->p_vmspace is set to null.  Otherwise
  statclock can access it in the tail end of statclock_process() at an
  unfortunate time.  This bit me several times on an SMP alpha (UP2000) and
  the problem went away with this change.  I'm not sure why it doesn't
  break x86 as well.  Maybe it's because the clocks are much faster on
  alpha (HZ=1024 by default).
  (peter, 2002-04-17; 1 file, -3/+7)
* Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]()
  and pmap_copy_page().  This gets rid of a couple more physical addresses
  in upper layers, with the eventual aim of supporting PAE and dealing with
  the physical addressing mostly within pmap.  (We will need either 64 bit
  physical addresses or page indexes, possibly both depending on the
  circumstances.  Leaving this to pmap itself gives more flexibility.)
  Reviewed by: jake
  Tested on: i386, ia64 and (I believe) sparc64.  (my alpha was hosed)
  (peter, 2002-04-15; 1 file, -1/+1)
* Remove references to vm_zone.h and switch over to the new uma API.
  (jeff, 2002-03-20; 1 file, -2/+6)
* Quiet a warning introduced by UMA.  This only occurs on machines where
  vm_size_t != unsigned long.
  Reviewed by: phk
  (jeff, 2002-03-19; 1 file, -1/+1)
* This is the first part of the new kernel memory allocator.  This replaces
  malloc(9) and vm_zone with a slab-like allocator.
  Reviewed by: arch@
  (jeff, 2002-03-19; 1 file, -43/+122)
* Back out the modification of vm_map locks from lockmgr to sx locks.  The
  best path forward now is likely to change the lockmgr locks to simple
  sleep mutexes, then see if any extra contention it generates is greater
  than removed overhead of managing local locking state information, cost
  of extra calls into lockmgr, etc.  Additionally, making the vm_map lock a
  mutex and respecting it properly will put us much closer to not needing
  Giant magic in vm.
  (green, 2002-03-18; 1 file, -60/+58)
* Acquire a read lock on the map inside of vm_map_check_protection() rather
  than expecting the caller to do so.  This (1) eliminates duplicated code
  in kernacc() and useracc() and (2) fixes missing synchronization in
  munmap().
  (alc, 2002-03-17; 1 file, -0/+6)
* Rename SI_SUB_MUTEX to SI_SUB_MTX_POOL to make the name at all accurate.
  While doing this, move it earlier in the sysinit boot process so that the
  VM system can use it.
  After that, the system is now able to use sx locks instead of lockmgr
  locks in the VM system.  To accomplish this, some of the more
  questionable uses of the locks (such as testing whether they are owned or
  not, as well as allowing shared+exclusive recursion) are removed, and
  simpler logic throughout is used so locks should also be easier to
  understand.
  This has been tested on my laptop for months, and has not shown any
  problems on SMP systems, either, so appears quite safe.  One more user of
  lockmgr down, many more to go :)
  (green, 2002-03-13; 1 file, -58/+60)
* - Remove a number of extra newlines that do not belong here according to
    style(9)
  - Minor space adjustment in cases where we have "( ", " )", if(),
    return(), while(), for(), etc.
  - Add /* SYMBOL */ after a few #endifs.
  Reviewed by: alc
  (eivind, 2002-03-10; 1 file, -78/+18)
* Fix a bug in the vm_map_clean() procedure.  msync()ing an area of memory
  that has just been mapped MAP_ANON|MAP_NOSYNC and has not yet been
  accessed will panic the machine.
  MFC after: 1 day
  (dillon, 2002-03-07; 1 file, -1/+4)
* Fix a race with free'ing vmspaces at process exit when vmspaces are
  shared.  Also introduce vm_endcopy instead of using pointer tricks when
  initializing new vmspaces.
  The race occurred because of how the reference was utilized:
      test vmspace reference, possibly block, decrement reference
  When sharing a vmspace between multiple processes it was possible for two
  processes exiting at the same time to test the reference count, possibly
  block, and neither one free because they wouldn't see the other's update.
  Submitted by: green
  (alfred, 2002-02-05; 1 file, -16/+29)
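The refcount race described above, and the usual shape of the fix, can be sketched with C11 atomics (illustrative code, not the kernel's vmspace handling): decrement atomically and let exactly one caller observe the transition to zero, rather than "test, maybe block, then decrement", where two exiting processes can each see a count of 2 and neither frees.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct vmspace_sketch {
    atomic_int refcnt;
    bool freed;
};

/* Returns true in exactly one releasing caller; that caller tears down. */
bool vmspace_release(struct vmspace_sketch *vm)
{
    /* fetch_sub returns the value *before* the decrement, so only the
     * caller that takes the count from 1 to 0 sees 1 here. */
    if (atomic_fetch_sub(&vm->refcnt, 1) == 1) {
        vm->freed = true;   /* stand-in for the actual vmspace teardown */
        return true;
    }
    return false;
}
```

Because the test and the decrement are one indivisible operation, there is no window in which both exiting processes read the old count and both decline to free.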
* Don't let pmap_object_init_pt() exhaust all available free pages
  (allocating pv entries w/ zalloci) when called in a loop due to an
  madvise().  It is possible to completely exhaust the free page list and
  cause a system panic when an expected allocation fails.
  (dillon, 2001-10-31; 1 file, -1/+1)
* Fix locking violations during page wiring:
  - vm map entries are not valid after the map has been unlocked.
  - An exclusive lock on the map is needed before calling
    vm_map_simplify_entry().
  Fix cleanup after page wiring failure to unwire all pages that had been
  successfully wired before the failure was detected.
  Reviewed by: dillon
  (tegge, 2001-10-14; 1 file, -3/+32)
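The cleanup rule fixed above can be sketched in plain C (illustrative only; `wire_one` is a stand-in for vm_fault_wire() and the bool array stands in for per-page wire counts): if wiring fails partway through a range, walk back and unwire every page that was wired before the failure, so no page is left accounted as wired on the error path.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for vm_fault_wire(): wires page i, failing at a chosen index. */
static bool wire_one(bool *wired, size_t i, size_t fail_at)
{
    if (i == fail_at)
        return false;       /* simulated wiring failure */
    wired[i] = true;
    return true;
}

/* Wire a range of n pages; on failure, roll back and return an error. */
int wire_range(bool *wired, size_t n, size_t fail_at)
{
    size_t i;

    for (i = 0; i < n; i++) {
        if (!wire_one(wired, i, fail_at)) {
            while (i-- > 0)         /* unwire everything wired so far */
                wired[i] = false;
            return -1;              /* report the failure to the caller */
        }
    }
    return 0;
}
```

Without the rollback loop, a mid-range failure leaves the earlier pages wired with no owner to ever unwire them, which is exactly the leak the commit closes.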
* Add missing includes of sys/ktr.h.
  (jhb, 2001-10-11; 1 file, -0/+1)
* Make MAXTSIZ, DFLDSIZ, MAXDSIZ, DFLSSIZ, MAXSSIZ, SGROWSIZ loader
  tunable.
  Reviewed by: peter
  MFC after: 2 weeks
  (ps, 2001-10-10; 1 file, -3/+3)
* KSE Milestone 2
  Note: ALL MODULES MUST BE RECOMPILED.
  Make the kernel aware that there are smaller units of scheduling than the
  process (but only allow one thread per process at this time).  This is
  functionally equivalent to the previous -current except that there is a
  thread associated with each process.
  Sorry, John!  (Your next MFC will be a doozy!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  (julian, 2001-09-12; 1 file, -14/+14)
* Change inlines back into mainline code in preparation for mutexing.
  Also, most of these inlines had been bloated in -current far beyond their
  original intent.  Normalize prototypes and function declarations to be
  ANSI only (half already were).  And do some general cleanup.  (kernel
  size also reduced by 50-100K, but that isn't the prime intent)
  (dillon, 2001-07-04; 1 file, -106/+183)
* With Alfred's permission, remove vm_mtx in favor of a fine-grained
  approach (this commit is just the first stage).  Also add various GIANT_
  macros to formalize the removal of Giant, making it easy to test in a
  more piecemeal fashion.  These macros will allow us to test fine-grained
  locks to a degree before removing Giant, and also after, and to remove
  Giant in a piecemeal fashion via sysctl's on those subsystems which the
  authors believe can operate without Giant.
  (dillon, 2001-07-04; 1 file, -60/+53)