path: root/sys/vm
Commit message | Author | Date | Files | Lines
* More s/file system/filesystem/g | trhodes | 2002-05-16 | 1 file | -1/+1
* Make daddr_t and u_daddr_t 64 bits wide. | phk | 2002-05-14 | 1 file | -2/+2
  Retire daddr64_t and use daddr_t instead.
  Sponsored by: DARPA & NAI Labs.
* Don't call the uz free function while the zone lock is held. This can lead | jeff | 2002-05-13 | 1 file | -14/+21
  to lock order reversals. uma_reclaim now builds a list of freeable slabs
  and then unlocks the zones to do all of the frees.
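  A minimal sketch of the collect-under-the-lock, free-outside-the-lock
  pattern this describes; the zone/slab structures, field names, and
  zone_reclaim() are hypothetical stand-ins, not the actual UMA code:

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/queue.h>

      struct slab {
              LIST_ENTRY(slab) s_link;
              int              s_nfree;       /* free items in this slab */
              int              s_nitems;      /* total items in this slab */
      };

      struct zone {
              struct mtx        z_lock;
              LIST_HEAD(, slab) z_slabs;
              void            (*z_freef)(struct slab *);
      };

      static void
      zone_reclaim(struct zone *z)
      {
              LIST_HEAD(, slab) freeable = LIST_HEAD_INITIALIZER(freeable);
              struct slab *s, *tmp;

              /* Pass 1: unlink fully free slabs while holding the zone lock. */
              mtx_lock(&z->z_lock);
              LIST_FOREACH_SAFE(s, &z->z_slabs, s_link, tmp) {
                      if (s->s_nfree == s->s_nitems) {
                              LIST_REMOVE(s, s_link);
                              LIST_INSERT_HEAD(&freeable, s, s_link);
                      }
              }
              mtx_unlock(&z->z_lock);

              /* Pass 2: hand the slabs to the free routine with no locks held. */
              while ((s = LIST_FIRST(&freeable)) != NULL) {
                      LIST_REMOVE(s, s_link);
                      z->z_freef(s);
              }
      }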
* Remove the hash_free() lock order reversal. This could have happened for | jeff | 2002-05-13 | 1 file | -69/+72
  several reasons before. Fixing it involved restructuring the generic hash
  code to require calling code to handle locking, unlocking, and freeing
  hashes on error conditions.
* o Remove GIANT_REQUIRED and an excessive number of blank lines | alc | 2002-05-12 | 1 file | -10/+0
  from vm_map_inherit(). (minherit() need not acquire Giant anymore.)
* o Acquire and release Giant in vm_object_reference() and | alc | 2002-05-12 | 2 files | -11/+9
  vm_object_deallocate(), replacing the assertion GIANT_REQUIRED.
  o Remove GIANT_REQUIRED from vm_map_protect() and vm_map_simplify_entry().
  o Acquire and release Giant around vm_map_protect()'s call to
  pmap_protect().
  Altogether, these changes eliminate the need for mprotect() to acquire and
  release Giant.
* o Header files shouldn't depend on options: Provide prototypes | alc | 2002-05-06 | 1 file | -3/+0
  for uiomoveco(), uioread(), and vm_uiomove() regardless of whether
  ENABLE_VFS_IOOPT is defined or not.
  Submitted by: bde
* o Condition the compilation and use of vm_freeze_copyopts() | alc | 2002-05-06 | 3 files | -2/+14
  on ENABLE_VFS_IOOPT.
* o Some improvements to the page coloring of vm objects, particularly | alc | 2002-05-06 | 1 file | -9/+17
  for shadow objects.
  Submitted by: bde
* o Move vm_freeze_copyopts() from vm_map.{c,h} to vm_object.{c,h}. It's plainly | alc | 2002-05-06 | 4 files | -78/+78
  an operation on a vm_object and belongs in the latter place.
* o Condition the compilation of uiomoveco() and vm_uiomove() | alc | 2002-05-05 | 2 files | -1/+7
  on ENABLE_VFS_IOOPT.
  o Add a comment to the effect that this code is experimental support for
  zero-copy I/O.
* Expand the one-line function pbreassignbuf() in the only place it is or could | phk | 2002-05-05 | 1 file | -1/+1
  be used.
* o Remove GIANT_REQUIRED from vm_map_lookup() and vm_map_lookup_done(). | alc | 2002-05-05 | 1 file | -2/+2
  o Acquire and release Giant around vm_map_lookup()'s call to
  vm_object_shadow().
* Use pages instead of uz_maxpages, which has not been initialized yet, when | jeff | 2002-05-04 | 1 file | -2/+2
  creating the vm_object. This was broken after the code was rearranged to
  grab giant itself.
  Spotted by: alc
* o Make _vm_object_allocate() and vm_object_allocate() callable | alc | 2002-05-04 | 2 files | -22/+21
  without holding Giant.
  o Begin documenting the trivial cases of the locking protocol on
  vm_object.
* o Remove GIANT_REQUIRED from vm_map_lookup_entry() and | alc | 2002-05-04 | 2 files | -7/+3
  vm_map_check_protection().
  o Call vm_map_check_protection() without Giant held in munmap().
* o Change the implementation of vm_map locking to use exclusive locks | alc | 2002-05-02 | 1 file | -26/+24
  exclusively. The interface still, however, distinguishes between a shared
  lock and an exclusive lock.
* Hide a pointer to the malloc_type bucket at the end of the freed memory. If | jeff | 2002-05-02 | 2 files | -2/+84
  this memory is modified after it has been freed we can now report its
  previous owner.
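  Roughly, the free path stashes the owning malloc_type pointer in the last
  pointer-sized slot of the freed item, and the corruption check reads it
  back to name the previous owner. A sketch with hypothetical function
  names, not the actual malloc(9)/UMA code:

      #include <sys/param.h>

      struct malloc_type;             /* opaque owner tag */

      /* On free: record which malloc type owned this item. */
      static void
      free_save_owner(void *mem, size_t size, struct malloc_type *type)
      {
              struct malloc_type **typep;

              typep = (struct malloc_type **)((char *)mem + size) - 1;
              *typep = type;
      }

      /* On a later check: recover the previous owner for the diagnostic. */
      static struct malloc_type *
      prev_owner(void *mem, size_t size)
      {
              struct malloc_type **typep;

              typep = (struct malloc_type **)((char *)mem + size) - 1;
              return (*typep);
      }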
* Move around the dbg code a bit so it's always under a lock. This stops a | jeff | 2002-05-02 | 1 file | -8/+7
  weird potential race if we were preempted right as we were doing the dbg
  checks.
* - Changed the size element of uma_zctor_args to be size_t instead of int. | arr | 2002-05-02 | 2 files | -4/+4
  - Changed uma_zcreate to accept the size argument as a size_t instead of
  int.
  Approved by: jeff
* malloc/free(9) no longer require Giant. Use the malloc_mtx to protect the | jeff | 2002-05-02 | 2 files | -8/+19
  mallochash. Mallochash is going to go away as soon as I introduce the
  kfree/kmalloc api and partially overhaul the malloc wrapper. This can't
  happen until all users of the malloc api that expect memory to be aligned
  on the size of the allocation are fixed.
* o Remove dead and lockmgr()-specific debugging code. | alc | 2002-05-02 | 2 files | -23/+0
* Remove the temporary alignment check in free(). | jeff | 2002-05-02 | 3 files | -19/+118
  Implement the following checks on freed memory in the bucket path:
  - Slab membership
  - Alignment
  - Duplicate free
  This previously was only done if we skipped the buckets. This code will
  slow down INVARIANTS a bit, but it is SMP safe. The checks were moved out
  of the normal path and into hooks supplied in uma_dbg.
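  A sketch of what the alignment and duplicate-free checks can look like on
  a per-slab basis (the slab-membership lookup is omitted); the dbg_slab
  structure and dbg_check_free() are illustrative names only:

      #include <sys/param.h>
      #include <sys/systm.h>

      /* Toy slab descriptor; assumes at most 32 items per slab for brevity. */
      struct dbg_slab {
              char            *ds_base;       /* address of the first item */
              size_t           ds_itemsize;   /* size of each item */
              uint32_t         ds_freebits;   /* one bit per item: already free? */
      };

      static void
      dbg_check_free(struct dbg_slab *sl, void *item)
      {
              size_t off = (char *)item - sl->ds_base;
              u_int idx;

              /* Alignment: the pointer must fall on an item boundary. */
              if (off % sl->ds_itemsize != 0)
                      panic("free: %p is not on an item boundary", item);

              /* Duplicate free: the item must not already be marked free. */
              idx = off / sl->ds_itemsize;
              if (sl->ds_freebits & (1u << idx))
                      panic("free: duplicate free of %p", item);
              sl->ds_freebits |= (1u << idx);
      }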
* o Convert the vm_page buckets mutex to a spin lock. (This resolves | alc | 2002-04-30 | 1 file | -14/+11
  an issue on the Alpha platform found by jeff@.)
  o Simplify vm_page_lookup().
  Reviewed by: jhb
* Add a new UMA debugging facility. This will overwrite freed memory with | jeff | 2002-04-30 | 2 files | -0/+159
  0xdeadc0de and then check for it just before memory is handed off as part
  of a new request. This will catch any post-free/pre-alloc modification of
  memory, as well as introduce errors for anything that tries to dereference
  it as a pointer. This code takes the form of special init, fini, ctor and
  dtor routines that are specifically used by malloc. It is in a separate
  file because additional debugging aids will want to live here as well.
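  The mechanism, roughly: a dtor-style hook paints freed items with the
  poison value and a ctor-style hook verifies the pattern just before the
  item is reused. A sketch with hypothetical hook names (the real routines
  are the uma_dbg init/fini/ctor/dtor hooks mentioned above):

      #include <sys/param.h>
      #include <sys/systm.h>

      #define DBG_POISON      0xdeadc0de

      /* Fill a freed item so stale pointers dereference garbage. */
      static void
      dbg_trash(void *mem, int size)
      {
              uint32_t *p = mem;
              int i;

              for (i = 0; i < size / (int)sizeof(*p); i++)
                      p[i] = DBG_POISON;
      }

      /* Verify the pattern survived until the item is handed out again. */
      static void
      dbg_check_trash(void *mem, int size)
      {
              uint32_t *p = mem;
              int i;

              for (i = 0; i < size / (int)sizeof(*p); i++)
                      if (p[i] != DBG_POISON)
                              panic("item %p modified after free at offset %d",
                                  mem, i * (int)sizeof(*p));
      }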
* Move the implementation of M_ZERO into UMA so that it can be passed to | jeff | 2002-04-30 | 2 files | -16/+21
  uma_zalloc and friends. Remove this functionality from the malloc wrapper.
  Document this change in uma.h and adjust variable names in uma_core.
* o Revert vm_fault1() to its original name vm_fault(), eliminating the wrapper | alc | 2002-04-30 | 1 file | -16/+11
  that took its place for the purposes of acquiring and releasing Giant.
* Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own | jeff | 2002-04-29 | 4 files | -10/+34
  mutex class. Currently this is only used for kmapentzone because kmapents
  are potentially allocated when freeing memory. This is not dangerous,
  though, because no other allocations will be done while holding the
  kmapentzone lock.
* Tidy up some loose ends. | peter | 2002-04-29 | 2 files | -3/+0
  i386/ia64/alpha - catch up to sparc64/ppc:
  - replace pmap_kernel() with refs to kernel_pmap
  - change kernel_pmap pointer to (&kernel_pmap_store)
    (this is a speedup since ld can set these at compile/link time)
  all platforms (as suggested by jake):
  - gc unused pmap_reference
  - gc unused pmap_destroy
  - gc unused struct pmap.pm_count
    (we never used pm_count - we track address space sharing at the vmspace)
* Document three synchronization issues in vm_fault(). | alc | 2002-04-29 | 1 file | -0/+8
* Pass the caller's file name and line number to the vm_map locking functions. | alc | 2002-04-28 | 2 files | -20/+35
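  A common shape for this kind of change: the public locking call becomes a
  macro that forwards __FILE__ and __LINE__ to an internal function, so lock
  diagnostics can report where the lock was taken. The names below are
  illustrative and the real vm_map interface may differ in detail:

      struct vm_map;

      /* Internal function that records (or asserts with) the call site. */
      void    _demo_map_lock(struct vm_map *map, const char *file, int line);

      /* Callers keep using the familiar name; the macro adds the call site. */
      #define demo_map_lock(map)      _demo_map_lock((map), __FILE__, __LINE__)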
* o Introduce and use vm_map_trylock() to replace several direct uses | alc | 2002-04-28 | 5 files | -8/+14
  of lockmgr().
  o Add missing synchronization to vmspace_swap_count(): Obtain a read lock
  on the vm_map before traversing it.
* We do not necessarily need to map/unmap pages to zero parts of them. | peter | 2002-04-28 | 3 files | -4/+14
  On systems where physical memory is also direct mapped (alpha, sparc,
  ia64 etc) this is slightly harmful.
* o Begin documenting the (existing) locking protocol on the vm_map | alc | 2002-04-27 | 2 files | -25/+26
  in the same style as sys/proc.h.
  o Undo the de-inlining of several trivial, MPSAFE methods on the vm_map.
  (Contrary to the commit message for vm_map.h revision 1.66 and vm_map.c
  revision 1.206, de-inlining these methods increased the kernel's size.)
* o Control access to the vm_page_buckets with a mutex. | alc | 2002-04-26 | 1 file | -33/+17
  o Fix some style(9) bugs.
* - Fix a round down bogon in uma_zone_set_max(). | arr | 2002-04-25 | 1 file | -0/+2
  Submitted by: jeff@
* Reintroduce locking on accesses to vm_object_list. | alc | 2002-04-20 | 3 files | -1/+10
* o Move the acquisition of Giant from vm_fault() to the point | alc | 2002-04-19 | 1 file | -12/+8
  after initialization in vm_fault1().
  o Fix some style problems in vm_fault1().
* Add a comment documenting a race condition in vm_fault(): Specifically, a | alc | 2002-04-18 | 1 file | -0/+3
  modification is made to the vm_map while only a read lock is held.
* o Call vm_map_growstack() from vm_fault() if vm_map_lookup() has failed | alc | 2002-04-18 | 1 file | -1/+10
  due to conditions that suggest the possible need for stack growth. This
  has two beneficial effects: (1) we can now remove calls to
  vm_map_growstack() from the MD trap handlers and (2) simple page faults
  are faster because we no longer unnecessarily perform vm_map_growstack()
  on every page fault.
  o Remove vm_map_growstack() from the i386's trap_pfault().
  o Remove the acquisition and release of Giant from i386's trap_pfault().
  (vm_fault() still acquires it.)
* Do not free the vmspace until p->p_vmspace is set to null. Otherwise | peter | 2002-04-17 | 1 file | -3/+7
  statclock can access it in the tail end of statclock_process() at an
  unfortunate time. This bit me several times on an SMP alpha (UP2000) and
  the problem went away with this change. I'm not sure why it doesn't break
  x86 as well. Maybe it's because the clocks are much faster on alpha
  (HZ=1024 by default).
* Remove an unused option, VM_FAULT_HOLD, to vm_fault(). | alc | 2002-04-17 | 2 files | -3/+0
* Pass vm_page_t instead of physical addresses to pmap_zero_page[_area]() | peter | 2002-04-15 | 4 files | -28/+14
  and pmap_copy_page(). This gets rid of a couple more physical addresses in
  upper layers, with the eventual aim of supporting PAE and dealing with the
  physical addressing mostly within pmap. (We will need either 64-bit
  physical addresses or page indexes, possibly both depending on the
  circumstances. Leaving this to pmap itself gives more flexibility.)
  Reviewed by: jake
  Tested on: i386, ia64 and (I believe) sparc64. (my alpha was hosed)
* Fix a witness warning when expanding a hash table. We were allocating the new | jeff | 2002-04-14 | 1 file | -38/+79
  hash while holding the lock on a zone. Fix this by doing the allocation
  separately from the actual hash expansion. The lock is dropped before the
  allocation and reacquired before the expansion. The expansion code checks
  to see if we lost the race and frees the new hash if we do. We really
  never will lose this race because the hash expansion is single-threaded
  via the timeout mechanism.
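  The drop-the-lock-to-allocate pattern described here looks roughly like
  the sketch below; the demo_zone structure, M_HASHDEMO malloc type, and
  zone_hash_expand() are made-up names, not the real uma_core code:

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/malloc.h>

      MALLOC_DEFINE(M_HASHDEMO, "hashdemo", "demo hash table");

      struct demo_zone {
              struct mtx        dz_lock;
              void            **dz_hash;
              int               dz_hashsize;
      };

      /* Called with dz_lock held; returns with it held. */
      static void
      zone_hash_expand(struct demo_zone *z, int newsize)
      {
              void **newhash;

              /* Allocate with the zone lock dropped to keep lock order sane. */
              mtx_unlock(&z->dz_lock);
              newhash = malloc(newsize * sizeof(void *), M_HASHDEMO,
                  M_WAITOK | M_ZERO);
              mtx_lock(&z->dz_lock);

              /* Re-check: did someone else expand the hash while we slept? */
              if (z->dz_hashsize >= newsize) {
                      mtx_unlock(&z->dz_lock);
                      free(newhash, M_HASHDEMO);
                      mtx_lock(&z->dz_lock);
                      return;
              }

              /* ... rehash the old entries from dz_hash into newhash ...
                 (freeing the old table is likewise deferred until unlocked) */
              z->dz_hash = newhash;
              z->dz_hashsize = newsize;
      }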
* Protect the initial list traversal in sysctl_vm_zone() with the uma_mtx. | jeff | 2002-04-14 | 1 file | -0/+2
* Fix the calculation that determines uz_maxpages. It was off for large zones. | jeff | 2002-04-14 | 2 files | -28/+53
  Fortunately we have no large zones with maximums specified yet, so it
  wasn't breaking anything.
  Implement blocking when a zone exceeds the maximum and M_WAITOK is
  specified. Previously this just failed like the old zone allocator did.
  The old zone allocator didn't support WAITOK/NOWAIT though so we should do
  what we advertise.
  While I was in there I cleaned up some more zalloc logic to further
  simplify that code path and reduce redundant code. This was needed to make
  the blocking work properly anyway.
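  A sketch of what blocking on a zone limit can look like: M_WAITOK callers
  sleep on the zone until the free path wakes them, while M_NOWAIT callers
  fail immediately. The demo_zone fields and zone_reserve_page() are
  illustrative, not the real uma_core logic:

      #include <sys/param.h>
      #include <sys/errno.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/malloc.h>
      #include <sys/priority.h>
      #include <sys/systm.h>

      struct demo_zone {
              struct mtx      dz_lock;
              int             dz_pages;       /* pages currently in use */
              int             dz_maxpages;    /* configured limit */
      };

      /* Reserve one page; sleeps for M_WAITOK, returns ENOMEM for M_NOWAIT. */
      static int
      zone_reserve_page(struct demo_zone *z, int flags)
      {
              mtx_lock(&z->dz_lock);
              while (z->dz_pages >= z->dz_maxpages) {
                      if ((flags & M_WAITOK) == 0) {
                              mtx_unlock(&z->dz_lock);
                              return (ENOMEM);
                      }
                      /* Sleep until the free path does a wakeup(z). */
                      msleep(z, &z->dz_lock, PVM, "zonefull", 0);
              }
              z->dz_pages++;
              mtx_unlock(&z->dz_lock);
              return (0);
      }

      /* The matching release path: dz_pages--; wakeup(z); under dz_lock. */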
* Remember to unlock the zone if the fill count is too high. | jeff | 2002-04-10 | 1 file | -3/+4
  Pointed out by: pete, jake, jhb
* Quiet witness warnings about acquiring several zone locks. In the case that | jeff | 2002-04-08 | 1 file | -1/+2
  this happens it is OK.
* Add a mechanism to disable buckets when the v_free_count drops below | jeff | 2002-04-08 | 1 file | -6/+29
  v_free_min. This should help performance in memory starved situations.
* Don't release the zone lock until after the dtor has been called. As far as I | jeff | 2002-04-08 | 1 file | -3/+3
  can tell this could not have caused any problems yet because UMA is still
  called with giant.
  Pointy hat to: jeff
  Noticed by: jake