path: root/sys/vm
Commit message  [author, date; files changed, -deleted/+added lines]
* o Add vm_map_unwire() for unwiring contiguous regions of either kernel
    or user vm_maps.  In accordance with the standards for munlock(2),
    and in contrast to vm_map_user_pageable(), this implementation does
    not allow holes in the specified region.  This implementation uses
    the "in transition" flag described below.
  o Introduce a new flag, "in transition," to the vm_map_entry.
    Eventually, vm_map_delete() and vm_map_simplify_entry() will respect
    this flag by deallocating in-transition vm_map_entries, allowing the
    vm_map lock to be safely released in vm_map_unwire() and (the
    forthcoming) vm_map_wire().
  o Modify vm_map_simplify_entry() to respect the in-transition flag.
  In collaboration with: tegge
  [alc, 2002-06-07; 2 files, -1/+167]
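For context, the no-holes rule is observable from userland.  A minimal
sketch, assuming a 4 KB page size and a sufficient RLIMIT_MEMLOCK;
addresses and error handling are illustrative only:

    #include <sys/mman.h>
    #include <err.h>

    int
    main(void)
    {
        size_t len = 3 * 4096;              /* assumes 4 KB pages */
        char *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED)
            err(1, "mmap");
        if (mlock(p, len) == -1)            /* wire all three pages */
            err(1, "mlock");
        if (munmap(p + 4096, 4096) == -1)   /* punch a hole in the middle */
            err(1, "munmap");
        /*
         * vm_map_unwire() enforces the munlock(2) standard: unwiring
         * a range that contains an unmapped hole fails (ENOMEM)
         * instead of silently skipping the hole.
         */
        if (munlock(p, len) == -1)
            warn("munlock across the hole fails, as required");
        return (0);
    }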
* Fix typo in the _SYS_SYSPROTO_H_ case: s/mlockall_args/munlockall_args/
  Submitted by: Mark Santcroos <marks@ripe.net>
  [alfred, 2002-06-06; 1 file, -1/+1]
* Add a comment describing a resource leak that occurs during a failure
  case in obj_alloc.
  [jeff, 2002-06-03; 1 file, -0/+3]
* o Migrate vm_map_split() from vm_map.c to vm_object.c, renaming it
    to vm_object_split().  Its interface should still be changed to
    resemble vm_object_shadow().
  [alc, 2002-06-02; 3 files, -90/+93]
* o Style fixes to vm_map_split(), including the elimination of one
    variable declaration that shadows another.
  Note: This function should really be vm_object_split(), not
  vm_map_split().
  Reviewed by: md5
  [alc, 2002-06-02; 1 file, -8/+1]
* o Condition vm_object_pmap_copy_1()'s compilation on the kernel
    option ENABLE_VFS_IOOPT.  Unless this option is in effect,
    vm_object_pmap_copy_1() is not used.
  [alc, 2002-06-02; 1 file, -0/+2]
* o Remove GIANT_REQUIRED from vm_map_zfini(), vm_map_zinit(),
    vm_map_create(), and vm_map_submap().
  o Make further use of a local variable in vm_map_entry_splay() that
    caches a reference to one of a vm_map_entry's children.  (This
    reduces code size somewhat.)
  o Revert a part of revision 1.66, deinlining vmspace_pmap().  (This
    function is MPSAFE.)
  [alc, 2002-06-01; 2 files, -25/+15]
* o Revert a part of revision 1.66; contrary to what that commit
    message says, deinlining vm_map_entry_behavior() and
    vm_map_entry_set_behavior() actually increases the kernel's size.
  o Make vm_map_entry_set_behavior() static and add a comment
    describing its purpose.
  o Remove an unnecessary initialization statement from
    vm_map_entry_splay().
  [alc, 2002-06-01; 2 files, -17/+21]
* Export nswapdev through sysctl(8).
  Sponsored by: DARPA, NAI Labs
  [des, 2002-05-31; 1 file, -0/+2]
* Further work on pushing Giant out of the vm_map layer and down into
  the vm_object layer:
  o Acquire and release Giant in vm_object_shadow() and
    vm_object_page_remove().
  o Remove the GIANT_REQUIRED assertion preceding vm_map_delete()'s
    call to vm_object_page_remove().
  o Remove the acquisition and release of Giant around
    vm_map_lookup()'s call to vm_object_shadow().
  [alc, 2002-05-31; 2 files, -9/+15]
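The recurring pattern in this Giant push-down work: replace a blanket
GIANT_REQUIRED assertion with a narrow acquire/release around the one
call that still needs it.  A hedged sketch (the function and its
arguments are illustrative, not the actual kernel code):

    static void
    example_remove_pages(vm_object_t object, vm_pindex_t start,
        vm_pindex_t end)
    {
        /*
         * Instead of requiring the caller to hold Giant for the whole
         * operation (GIANT_REQUIRED), take it only around the call
         * that still depends on it.
         */
        mtx_lock(&Giant);
        vm_object_page_remove(object, start, end, FALSE);
        mtx_unlock(&Giant);
    }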
* Check for defined(__i386__) instead of just defined(i386) since the
  compiler will be updated to define only __i386__ for ANSI cleanliness.
  [alfred, 2002-05-30; 1 file, -3/+3]
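A minimal illustration of the change (the guarded code is hypothetical):

    #if defined(__i386__)               /* was: defined(i386) */
        /*
         * __i386__ is in the implementation namespace, so an
         * ANSI-conforming compiler may predefine it; the bare "i386"
         * spelling will no longer be defined.
         */
    #endif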
* The kernel printf does not have %i.
  [peter, 2002-05-29; 1 file, -1/+1]
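The fix for such a case is mechanical; a sketch (the variable name is
illustrative):

    printf("count: %i\n", value);   /* wrong: kernel printf lacks %i */
    printf("count: %d\n", value);   /* right: use %d for a signed int */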
* o Remove unused #defines.
  [alc, 2002-05-27; 1 file, -9/+0]
* o Acquire and release Giant around pmap operations in
    vm_fault_unwire() and vm_map_delete().  Assert GIANT_REQUIRED in
    vm_map_delete() only if operating on the kernel_object or the
    kmem_object.
  o Remove GIANT_REQUIRED from vm_map_remove().
  o Remove the acquisition and release of Giant from munmap().
  [alc, 2002-05-26; 3 files, -7/+5]
* o Replace the vm_map's hint by the root of a splay tree.  By design,
    the last accessed datum is moved to the root of the splay tree.
    Therefore, on lookups in which the hint resulted in O(1) access,
    the splay tree still achieves O(1) access.  In contrast, on lookups
    in which the hint failed miserably, the splay tree achieves
    amortized logarithmic complexity, resulting in dramatic
    improvements on vm_maps with a large number of entries.  For
    example, the execution time for replaying an access log from
    www.cs.rice.edu against the thttpd web server was reduced by 23.5%
    due to the large number of files simultaneously mmap()ed by this
    server.  (The machine in question has enough memory to cache most
    of this workload.)
    Nothing comes for free: At present, I see a 0.2% slowdown on
    "buildworld" due to the overhead of maintaining the splay tree.  I
    believe that some or all of this can be eliminated through
    optimizations to the code.
  Developed in collaboration with: Juan E Navarro <jnavarro@cs.rice.edu>
  Reviewed by: jeff
  [alc, 2002-05-24; 2 files, -81/+106]
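For readers unfamiliar with the data structure, a hedged, self-contained
sketch of a top-down splay keyed on address ranges (simplified from the
classic Sleator-Tarjan formulation; all names are illustrative, not the
kernel's):

    #include <stddef.h>
    #include <stdint.h>

    typedef uintptr_t vm_offset_t;      /* stand-in for the kernel type */

    struct entry {
        vm_offset_t  start, end;        /* covers [start, end) */
        struct entry *left, *right;
    };

    /*
     * Rotate the entry containing addr (or a neighbor of it) to the
     * root.  The last entry looked up ends at the root, so repeating
     * the same lookup is O(1), like the old hint, while arbitrary
     * lookups are amortized O(log n).
     */
    static struct entry *
    splay(struct entry *root, vm_offset_t addr)
    {
        struct entry dummy = { 0, 0, NULL, NULL };
        struct entry *l = &dummy, *r = &dummy, *y;

        if (root == NULL)
            return (NULL);
        for (;;) {
            if (addr < root->start) {
                if (root->left == NULL)
                    break;
                if (addr < root->left->start) {     /* zig-zig */
                    y = root->left;
                    root->left = y->right;
                    y->right = root;
                    root = y;
                    if (root->left == NULL)
                        break;
                }
                r->left = root;                     /* link right */
                r = root;
                root = root->left;
            } else if (addr >= root->end) {
                if (root->right == NULL)
                    break;
                if (addr >= root->right->end) {     /* zig-zig */
                    y = root->right;
                    root->right = y->left;
                    y->left = root;
                    root = y;
                    if (root->right == NULL)
                        break;
                }
                l->right = root;                    /* link left */
                l = root;
                root = root->right;
            } else
                break;                  /* root contains addr */
        }
        l->right = root->left;          /* reassemble the three trees */
        r->left = root->right;
        root->left = dummy.right;
        root->right = dummy.left;
        return (root);
    }

A lookup then just checks whether the new root contains the address; a
repeat of the previous lookup never descends the tree, which preserves
the O(1) hint behavior the entry describes.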
* o Make contigmalloc1() static.
  [alc, 2002-05-22; 2 files, -5/+1]
* In uma_zalloc_arg(), if we are performing an M_WAITOK allocation,
  ensure that td_intr_nesting_level is 0 (like malloc() does).  Since
  malloc() calls uma, we can probably remove the check in malloc() for
  this now.  Also, perform an extra witness check in that case to make
  sure we don't hold any locks when performing an M_WAITOK allocation.
  [jhb, 2002-05-20; 1 file, -0/+7]
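A hedged sketch of the shape of such a check inside uma_zalloc_arg()
(the assertion text is illustrative; KASSERT is the standard kernel
assertion, and WITNESS_WARN is the modern spelling of the witness lock
check):

    if (flags & M_WAITOK) {
        /* Sleeping allocations are illegal in interrupt context. */
        KASSERT(curthread->td_intr_nesting_level == 0,
            ("uma_zalloc_arg: M_WAITOK allocation in interrupt context"));
        /* Warn if any non-sleepable locks are held at this point. */
        WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
            "uma_zalloc_arg: M_WAITOK with locks held");
    }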
* o Eliminate the acquisition and release of Giant from minherit(2).
    (vm_map_inherit() no longer requires Giant to be held.)
  [alc, 2002-05-18; 1 file, -7/+2]
* o Remove GIANT_REQUIRED from vm_map_madvise().  Instead, acquire and
    release Giant around vm_map_madvise()'s call to
    pmap_object_init_pt().
  o Replace GIANT_REQUIRED in vm_object_madvise() with the acquisition
    and release of Giant.
  o Remove the acquisition and release of Giant from madvise().
  [alc, 2002-05-18; 3 files, -8/+9]
* o Remove the acquisition and release of Giant from mprotect().
  [alc, 2002-05-18; 1 file, -6/+2]
* More s/file system/filesystem/g
  [trhodes, 2002-05-16; 1 file, -1/+1]
* Make daddr_t and u_daddr_t 64 bits wide.  Retire daddr64_t and use
  daddr_t instead.
  Sponsored by: DARPA & NAI Labs.
  [phk, 2002-05-14; 1 file, -2/+2]
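The effect on the type definitions, sketched (the exact header and
spelling in the tree may differ):

    typedef int64_t     daddr_t;        /* disk address, now 64 bits */
    typedef uint64_t    u_daddr_t;      /* unsigned disk address */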
* Don't call the uz free function while the zone lock is held.  This
  can lead to lock order reversals.  uma_reclaim now builds a list of
  freeable slabs and then unlocks the zones to do all of the frees.
  [jeff, 2002-05-13; 1 file, -14/+21]
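A hedged sketch of the deferred-free pattern (struct zone, its fields,
and slab_free() are hypothetical): collect the freeable items while the
lock is held, then drop the lock before calling the free routine, which
may acquire other locks and would otherwise risk a lock order reversal.

    #include <sys/queue.h>

    struct slab {
        LIST_ENTRY(slab) link;
        /* slab contents elided */
    };
    LIST_HEAD(slablist, slab);

    static void
    zone_reclaim(struct zone *zone)
    {
        struct slablist freelist = LIST_HEAD_INITIALIZER(freelist);
        struct slab *slab;

        mtx_lock(&zone->lock);
        while ((slab = LIST_FIRST(&zone->free_slabs)) != NULL) {
            LIST_REMOVE(slab, link);
            LIST_INSERT_HEAD(&freelist, slab, link);
        }
        mtx_unlock(&zone->lock);

        /* Free with no zone lock held. */
        while ((slab = LIST_FIRST(&freelist)) != NULL) {
            LIST_REMOVE(slab, link);
            slab_free(slab);            /* hypothetical backend free */
        }
    }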
* Remove the hash_free() lock order reversal.  This could have happened
  for several reasons before.  Fixing it involved restructuring the
  generic hash code to require calling code to handle locking,
  unlocking, and freeing hashes on error conditions.
  [jeff, 2002-05-13; 1 file, -69/+72]
* o Remove GIANT_REQUIRED and an excessive number of blank lines from
    vm_map_inherit().  (minherit() need not acquire Giant anymore.)
  [alc, 2002-05-12; 1 file, -10/+0]
* o Acquire and release Giant in vm_object_reference() and
    vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
  o Remove GIANT_REQUIRED from vm_map_protect() and
    vm_map_simplify_entry().
  o Acquire and release Giant around vm_map_protect()'s call to
    pmap_protect().
  Altogether, these changes eliminate the need for mprotect() to
  acquire and release Giant.
  [alc, 2002-05-12; 2 files, -11/+9]
* o Header files shouldn't depend on options: Provide prototypes for
    uiomoveco(), uioread(), and vm_uiomove() regardless of whether
    ENABLE_VFS_IOOPT is defined or not.
  Submitted by: bde
  [alc, 2002-05-06; 1 file, -3/+0]
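The principle, sketched (the parameter list is illustrative of the
pattern, not copied from the header):

    /* In the header: declare unconditionally, so every consumer sees
       the same prototypes whether or not the option is configured. */
    int uiomoveco(void *cp, int n, struct uio *uio, void *obj);

    /* In the implementation file: condition only the definition. */
    #ifdef ENABLE_VFS_IOOPT
    int
    uiomoveco(void *cp, int n, struct uio *uio, void *obj)
    {
        /* experimental zero-copy path elided */
        return (0);
    }
    #endif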
* o Condition the compilation and use of vm_freeze_copyopts() on
    ENABLE_VFS_IOOPT.
  [alc, 2002-05-06; 3 files, -2/+14]
* o Some improvements to the page coloring of vm objects, particularly
    for shadow objects.
  Submitted by: bde
  [alc, 2002-05-06; 1 file, -9/+17]
* o Move vm_freeze_copyopts() from vm_map.{c,h} to vm_object.{c,h}.
    It's plainly an operation on a vm_object and belongs in the latter
    place.
  [alc, 2002-05-06; 4 files, -78/+78]
* o Condition the compilation of uiomoveco() and vm_uiomove() on
    ENABLE_VFS_IOOPT.
  o Add a comment to the effect that this code is experimental support
    for zero-copy I/O.
  [alc, 2002-05-05; 2 files, -1/+7]
* Expand the one-line function pbreassignbuf() in the only place it is
  or could be used.
  [phk, 2002-05-05; 1 file, -1/+1]
* o Remove GIANT_REQUIRED from vm_map_lookup() and vm_map_lookup_done().
  o Acquire and release Giant around vm_map_lookup()'s call to
    vm_object_shadow().
  [alc, 2002-05-05; 1 file, -2/+2]
* Use pages instead of uz_maxpages, which has not been initialized yet,
  when creating the vm_object.  This was broken after the code was
  rearranged to grab Giant itself.
  Spotted by: alc
  [jeff, 2002-05-04; 1 file, -2/+2]
* o Make _vm_object_allocate() and vm_object_allocate() callable
    without holding Giant.
  o Begin documenting the trivial cases of the locking protocol on
    vm_object.
  [alc, 2002-05-04; 2 files, -22/+21]
* o Remove GIANT_REQUIRED from vm_map_lookup_entry() and
    vm_map_check_protection().
  o Call vm_map_check_protection() without Giant held in munmap().
  [alc, 2002-05-04; 2 files, -7/+3]
* o Change the implementation of vm_map locking to use exclusive locks
    exclusively.  The interface still, however, distinguishes between
    a shared lock and an exclusive lock.
  [alc, 2002-05-02; 1 file, -26/+24]
* Hide a pointer to the malloc_type bucket at the end of the freed
  memory.  If this memory is modified after it has been freed, we can
  now report its previous owner.
  [jeff, 2002-05-02; 2 files, -2/+84]
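A hedged sketch of the idea (names and layout are illustrative):
allocations carry slack space past the requested size, and stashing the
owning malloc_type there lets a later corruption check name the
previous owner.

    #include <stddef.h>

    struct malloc_type;                 /* opaque here */

    /* Store the owner just past the caller-visible bytes; assumes the
       underlying item leaves room for the pointer. */
    static void
    mtp_stash(void *mem, size_t size, struct malloc_type *mtp)
    {
        *(struct malloc_type **)((char *)mem + size) = mtp;
    }

    /* On a detected write-after-free, recover the previous owner. */
    static struct malloc_type *
    mtp_prev_owner(void *mem, size_t size)
    {
        return (*(struct malloc_type **)((char *)mem + size));
    }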
* Move around the dbg code a bit so it's always under a lock.  This
  stops a weird potential race if we were preempted right as we were
  doing the dbg checks.
  [jeff, 2002-05-02; 1 file, -8/+7]
* - Changed the size element of uma_zctor_args to be size_t instead
    of int.
  - Changed uma_zcreate to accept the size argument as a size_t instead
    of int.
  Approved by: jeff
  [arr, 2002-05-02; 2 files, -4/+4]
* malloc/free(9) no longer require Giant.  Use the malloc_mtx to
  protect the mallochash.  Mallochash is going to go away as soon as I
  introduce the kfree/kmalloc API and partially overhaul the malloc
  wrapper.  This can't happen until all users of the malloc API that
  expect memory to be aligned on the size of the allocation are fixed.
  [jeff, 2002-05-02; 2 files, -8/+19]
* o Remove dead and lockmgr()-specific debugging code.
  [alc, 2002-05-02; 2 files, -23/+0]
* Remove the temporary alignment check in free().
  Implement the following checks on freed memory in the bucket path:
  - Slab membership
  - Alignment
  - Duplicate free
  This previously was only done if we skipped the buckets.  This code
  will slow down INVARIANTS a bit, but it is SMP safe.  The checks were
  moved out of the normal path and into hooks supplied in uma_dbg.
  [jeff, 2002-05-02; 3 files, -19/+118]
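A hedged sketch of the three checks (slab layout and names are
hypothetical; the real hooks live in uma_dbg):

    #include <stdint.h>

    #define SLAB_SIZE   4096            /* hypothetical slab size */

    struct slab {
        char    *data;                      /* first item in the slab */
        uint8_t  free_map[SLAB_SIZE / 8];   /* 1 bit per item; 1 = free */
    };

    static void
    check_free(struct slab *slab, void *item, size_t item_size)
    {
        size_t off = (char *)item - slab->data;
        size_t idx = off / item_size;

        if (off >= SLAB_SIZE)               /* slab membership */
            panic("free: %p not from this slab", item);
        if (off % item_size != 0)           /* alignment */
            panic("free: misaligned pointer %p", item);
        if (slab->free_map[idx / 8] & (1 << (idx % 8)))
            panic("free: duplicate free of %p", item);
        slab->free_map[idx / 8] |= 1 << (idx % 8);  /* mark free */
    }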
* o Convert the vm_page buckets mutex to a spin lock.  (This resolves
    an issue on the Alpha platform found by jeff@.)
  o Simplify vm_page_lookup().
  Reviewed by: jhb
  [alc, 2002-04-30; 1 file, -14/+11]
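Sketch of the conversion (mutex and bucket names illustrative; the
modern four-argument mtx_init() is shown).  A spin mutex is initialized
with MTX_SPIN and must be taken with the _spin lock variants:

    /* Initialization, e.g. at boot: */
    mtx_init(&vm_buckets_mtx, "vm page buckets", NULL, MTX_SPIN);

    /* Use: */
    mtx_lock_spin(&vm_buckets_mtx);
    bucket = &vm_page_buckets[hash];    /* hash-bucket lookup */
    mtx_unlock_spin(&vm_buckets_mtx);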
* Add a new UMA debugging facility.  This will overwrite freed memory
  with 0xdeadc0de and then check for it just before memory is handed
  off as part of a new request.  This will catch any post-free/pre-alloc
  modification of memory, as well as introduce errors for anything that
  tries to dereference it as a pointer.
  This code takes the form of special init, fini, ctor and dtor
  routines that are specifically used by malloc.  It is in a separate
  file because additional debugging aids will want to live here as
  well.
  [jeff, 2002-04-30; 2 files, -0/+159]
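A hedged sketch of the fill-and-check pair (simplified; assumes the
item size is a multiple of 4 and a kernel panic() for reporting):

    #include <stdint.h>

    #define TRASH_PATTERN   0xdeadc0deU

    /* Used as a dtor: stomp the item as it is freed. */
    static void
    trash_fill(void *mem, int size)
    {
        uint32_t *p;

        for (p = mem; (char *)p < (char *)mem + size; p++)
            *p = TRASH_PATTERN;
    }

    /* Used as a ctor: verify the pattern survived until reallocation;
       any mismatch means something wrote to the memory after free. */
    static void
    trash_check(void *mem, int size)
    {
        uint32_t *p;

        for (p = mem; (char *)p < (char *)mem + size; p++)
            if (*p != TRASH_PATTERN)
                panic("memory modified after free: %p", mem);
    }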
* Move the implementation of M_ZERO into UMA so that it can be passed
  to uma_zalloc and friends.  Remove this functionality from the malloc
  wrapper.  Document this change in uma.h and adjust variable names in
  uma_core.
  [jeff, 2002-04-30; 2 files, -16/+21]
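After this change a zone allocation can request zeroing directly; a
minimal usage sketch (zone creation elided, my_zone hypothetical):

    /* M_ZERO is now honored by UMA itself, not just the malloc()
       wrapper; the returned item arrives zeroed. */
    item = uma_zalloc(my_zone, M_WAITOK | M_ZERO);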
* o Revert vm_fault1() to its original name vm_fault(), eliminating
    the wrapper that took its place for the purposes of acquiring and
    releasing Giant.
  [alc, 2002-04-30; 1 file, -16/+11]
* Add a new zone flag UMA_ZONE_MTXCLASS.  This puts the zone in its
  own mutex class.  Currently this is only used for kmapentzone,
  because kmapents are potentially allocated when freeing memory.
  This is not dangerous though, because no other allocations will be
  done while holding the kmapentzone lock.
  [jeff, 2002-04-29; 4 files, -10/+34]
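A hedged sketch of creating such a zone (the argument list follows the
uma_zcreate() interface; ctor/dtor/init/fini are omitted here):

    kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry),
        NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_MTXCLASS);

Giving the zone its own mutex class keeps witness from flagging the
zone-within-zone lock acquisition described in the entry above.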
* Tidy up some loose ends.
  i386/ia64/alpha - catch up to sparc64/ppc:
  - replace pmap_kernel() with refs to kernel_pmap
  - change kernel_pmap pointer to (&kernel_pmap_store)
    (this is a speedup since ld can set these at compile/link time)
  all platforms (as suggested by jake):
  - gc unused pmap_reference
  - gc unused pmap_destroy
  - gc unused struct pmap.pm_count
    (we never used pm_count; we track address space sharing at the
    vmspace)
  [peter, 2002-04-29; 2 files, -3/+0]
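The pointer change, sketched (header arrangement illustrative): making
kernel_pmap a constant expression lets ld resolve references at link
time instead of loading a run-time pointer.

    /* Before: a pointer variable, filled in at run time. */
    extern struct pmap *kernel_pmap;

    /* After: static storage plus a constant "pointer". */
    extern struct pmap kernel_pmap_store;
    #define kernel_pmap (&kernel_pmap_store)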
* Document three synchronization issues in vm_fault().
  [alc, 2002-04-29; 1 file, -0/+8]