path: root/sys/vm/uma_core.c
Commit log, newest first; each entry ends with (author, date, -lines removed/+lines added).
* Be consistent about "static" functions: if the function is marked
  static in its prototype, mark it static at the definition too.
  Inspired by: FlexeLint warning #512
  (phk, 2002-09-28, -1/+1)
* - Use my freebsd email alias in the copyright.
  - Remove redundant instances of my email alias in the file summary.
  (jeff, 2002-09-19, -1/+1)
* - Split UMA_ZFLAG_OFFPAGE into UMA_ZFLAG_OFFPAGE and UMA_ZFLAG_HASH.
  - Remove all instances of the mallochash.
  - Stash the slab pointer in the vm page's object pointer when
    allocating from the kmem_obj.
  - Use the overloaded object pointer to find slabs for malloced memory.
  (jeff, 2002-09-18, -78/+49)
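  The "overloaded object pointer" trick above amounts to reusing a spare
  per-page field to record slab ownership while a page backs kernel
  memory.  A minimal sketch of the idea, with hypothetical structure and
  function names (the real vm_page and uma_slab types differ):

      /* Hypothetical types; illustrative only. */
      struct slab;

      struct page {
          void *object;       /* normally points to the owning VM object */
      };

      /* While a page backs kmem, its object field can carry the slab. */
      static void
      page_set_slab(struct page *p, struct slab *s)
      {
          p->object = s;
      }

      static struct slab *
      page_get_slab(struct page *p)
      {
          return ((struct slab *)p->object);
      }

  This is why the mallochash could go away: the page itself now answers
  "which slab does this address belong to" in constant time.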
* Don't use "NULL" when "0" is really meant.archie2002-08-211-2/+2
|
* Fix a lock order reversal in uma_zdestroy.  The uma_mtx needs to be
  held across calls to zone_drain().
  Noticed by: scottl
  (jeff, 2002-07-05, -4/+4)
* Remove unnecessary includes.
  (jeff, 2002-07-05, -2/+0)
* Actually use the fini callback.
  Pointy hat to: me :-(
  Noticed by: Julian
  (jeff, 2002-07-03, -0/+1)
* Reduce the amount of code that runs with the zone lock held in
  slab_zalloc().  This allows us to run the zone initialization
  functions without any locks held.
  (jeff, 2002-06-25, -6/+8)
* - Remove bogus use of kmem_alloc that was inherited from the old zone
    allocator.
  - Properly set M_ZERO when talking to the back-end page allocators
    for non-malloc zones.  This forces us to zero-fill pages when they
    are first brought into a cache.
  - Properly handle M_ZERO in uma_zalloc_internal.  This fixes a
    problem where per-CPU buckets weren't always getting zeroed.
  (jeff, 2002-06-19, -16/+18)
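  The back-end contract described here is simple: whoever hands out
  fresh pages must honor M_ZERO.  A userland sketch of the pattern; the
  flag value and allocator name are illustrative, not the kernel's:

      #include <stdlib.h>
      #include <string.h>

      #define M_ZERO 0x0100        /* illustrative value */

      /* Stand-in for a back-end page allocator that honors M_ZERO. */
      static void *
      page_alloc(size_t bytes, int flags)
      {
          void *p = malloc(bytes);

          if (p != NULL && (flags & M_ZERO))
              memset(p, 0, bytes);
          return (p);
      }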
* Honor the BUCKETCACHE flag on free as well.
  (jeff, 2002-06-17, -4/+9)
* - Introduce the new M_NOVM option, which tells UMA to only check the
    currently allocated slabs and bucket caches for free items.  It
    will not go ask the VM for pages.  This differs from M_NOWAIT in
    that it not only doesn't block, it doesn't even ask.
  - Add a new zcreate option, ZONE_VM, that sets the BUCKETCACHE zflag.
    This tells UMA that it should only allocate buckets out of the
    bucket cache, and not from the VM.  It does this by using the
    M_NOVM option to zalloc when getting a new bucket.  This is so that
    the VM doesn't recursively enter itself while trying to allocate
    buckets for vm_map_entry zones.  If there are already allocated
    buckets when we get here, we'll still use them, but otherwise we'll
    skip the allocation.
  - Use the ZONE_VM flag on vm map entries and pv entries on x86.
  (jeff, 2002-06-17, -3/+17)
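  The M_NOVM semantics reduce to one early-out in the allocation path:
  consult the caches, then refuse to call into the VM.  A compressed
  sketch, assuming hypothetical bucket_pop()/vm_get_pages() helpers and
  an illustrative flag value:

      #define M_NOVM 0x0200                 /* illustrative value */

      struct zone;
      void *bucket_pop(struct zone *);      /* assumed: cached items only */
      void *vm_get_pages(struct zone *);    /* assumed: may enter the VM */

      static void *
      zone_fetch(struct zone *z, int flags)
      {
          void *item;

          /* Check already-allocated buckets and slabs first. */
          if ((item = bucket_pop(z)) != NULL)
              return (item);
          /* With M_NOVM we don't block and we don't even ask the VM. */
          if (flags & M_NOVM)
              return (NULL);
          return (vm_get_pages(z));
      }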
* Correct the logic for determining whether the per-CPU locks need to
  be destroyed.  This fixes a problem where destroying a UMA zone would
  fail to destroy all zone mutexes.
  Reviewed by: jeff
  (iedowse, 2002-06-10, -1/+1)
* Add a comment describing a resource leak that occurs during a failure
  case in obj_alloc.
  (jeff, 2002-06-03, -0/+3)
* In uma_zalloc_arg(), if we are performing an M_WAITOK allocation,
  ensure that td_intr_nesting_level is 0 (like malloc() does).  Since
  malloc() calls uma, we can probably remove the check in malloc() for
  this now.  Also, perform an extra witness check in that case to make
  sure we don't hold any locks when performing an M_WAITOK allocation.
  (jhb, 2002-05-20, -0/+7)
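  The invariant being enforced: a blocking (M_WAITOK) allocation is only
  legal from a context that may sleep, i.e. not nested inside an
  interrupt handler.  A userland analogue using assert(); the flag value
  is illustrative, and td_intr_nesting_level is passed as a plain
  parameter rather than read from the real thread structure:

      #include <assert.h>

      #define M_WAITOK 0x0002      /* illustrative value */

      static void
      assert_can_sleep(int flags, int td_intr_nesting_level)
      {
          /* Blocking allocations require a sleepable context. */
          if (flags & M_WAITOK)
              assert(td_intr_nesting_level == 0);
      }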
* Don't call the uz free function while the zone lock is held.  This
  can lead to lock order reversals.  uma_reclaim now builds a list of
  freeable slabs and then unlocks the zones to do all of the frees.
  (jeff, 2002-05-13, -14/+21)
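  The fix is the classic "collect under the lock, free after unlocking"
  pattern.  A self-contained userland sketch using pthreads and
  <sys/queue.h>; slab_release() stands in for the zone's free routine:

      #include <pthread.h>
      #include <sys/queue.h>

      struct slab {
          LIST_ENTRY(slab) link;
      };
      LIST_HEAD(slablist, slab);

      struct zone {
          pthread_mutex_t lock;
          struct slablist free_slabs;
      };

      void slab_release(struct slab *);    /* assumed; may block */

      static void
      zone_reclaim(struct zone *z)
      {
          struct slablist tofree = LIST_HEAD_INITIALIZER(tofree);
          struct slab *s;

          /* Move the freeable slabs to a private list under the lock. */
          pthread_mutex_lock(&z->lock);
          while ((s = LIST_FIRST(&z->free_slabs)) != NULL) {
              LIST_REMOVE(s, link);
              LIST_INSERT_HEAD(&tofree, s, link);
          }
          pthread_mutex_unlock(&z->lock);

          /* No zone lock held here, so frees can't invert lock order. */
          while ((s = LIST_FIRST(&tofree)) != NULL) {
              LIST_REMOVE(s, link);
              slab_release(s);
          }
      }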
* Remove the hash_free() lock order reversal.  This could have happened
  for several reasons before.  Fixing it involved restructuring the
  generic hash code to require calling code to handle locking,
  unlocking, and freeing hashes on error conditions.
  (jeff, 2002-05-13, -69/+72)
* Use pages instead of uz_maxpages, which has not been initialized yet,
  when creating the vm_object.  This was broken after the code was
  rearranged to grab Giant itself.
  Spotted by: alc
  (jeff, 2002-05-04, -2/+2)
* Move around the dbg code a bit so it's always under a lock.  This
  stops a weird potential race if we were preempted right as we were
  doing the dbg checks.
  (jeff, 2002-05-02, -8/+7)
* - Changed the size element of uma_zctor_args to be size_t instead of
    int.
  - Changed uma_zcreate to accept the size argument as a size_t instead
    of int.
  Approved by: jeff
  (arr, 2002-05-02, -3/+3)
* malloc/free(9) no longer require Giant.  Use the malloc_mtx to
  protect the mallochash.  Mallochash is going to go away as soon as I
  introduce the kfree/kmalloc api and partially overhaul the malloc
  wrapper.  This can't happen until all users of the malloc api that
  expect memory to be aligned on the size of the allocation are fixed.
  (jeff, 2002-05-02, -8/+17)
* Remove the temporary alignment check in free().
  Implement the following checks on freed memory in the bucket path:
  - Slab membership
  - Alignment
  - Duplicate free
  This was previously only done if we skipped the buckets.  This code
  will slow down INVARIANTS a bit, but it is SMP safe.  The checks were
  moved out of the normal path and into hooks supplied in uma_dbg.
  (jeff, 2002-05-02, -19/+21)
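  All three checks fall out of simple arithmetic on the item's offset
  within its slab, plus one bit of "already free" state per item.  A
  sketch with a hypothetical debug-only slab descriptor (the real
  uma_dbg bookkeeping differs):

      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      struct slab_dbg {
          uintptr_t base;           /* address of the slab's first item */
          size_t    itemsize;
          size_t    nitems;
          uint8_t   freebits[32];   /* 1 bit per item; 1 = already free */
      };

      static void
      dbg_check_free(struct slab_dbg *s, void *item)
      {
          uintptr_t off = (uintptr_t)item - s->base;
          size_t idx;

          /* Alignment: the pointer must sit on an item boundary. */
          assert(off % s->itemsize == 0);

          /* Slab membership: the index must fall inside this slab. */
          idx = off / s->itemsize;
          assert(idx < s->nitems);

          /* Duplicate free: the item must not already be marked free. */
          assert((s->freebits[idx / 8] & (1u << (idx % 8))) == 0);
          s->freebits[idx / 8] |= (1u << (idx % 8));
      }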
* Move the implementation of M_ZERO into UMA so that it can be passed
  to uma_zalloc and friends.  Remove this functionality from the malloc
  wrapper.  Document this change in uma.h and adjust variable names in
  uma_core.
  (jeff, 2002-04-30, -10/+16)
* Add a new zone flag, UMA_ZONE_MTXCLASS.  This puts the zone in its
  own mutex class.  Currently this is only used for kmapentzone,
  because kmapents are potentially allocated when freeing memory.  This
  is not dangerous, though, because no other allocations will be done
  while holding the kmapentzone lock.
  (jeff, 2002-04-29, -3/+11)
* - Fix a round down bogon in uma_zone_set_max().
  Submitted by: jeff@
  (arr, 2002-04-25, -0/+2)
* Fix a witness warning when expanding a hash table.  We were
  allocating the new hash while holding the lock on a zone.  Fix this
  by doing the allocation separately from the actual hash expansion.
  The lock is dropped before the allocation and reacquired before the
  expansion.  The expansion code checks to see if we lost the race and
  frees the new hash if we did.  We will really never lose this race,
  because the hash expansion is single-threaded via the timeout
  mechanism.
  (jeff, 2002-04-14, -38/+79)
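  The shape of that fix is a standard unlock-allocate-relock-recheck
  sequence.  A sketch of the lock handling only; the rehash itself is
  elided, and all names are illustrative rather than UMA's:

      #include <pthread.h>
      #include <stdlib.h>

      struct hash {
          pthread_mutex_t lock;
          void  **slots;
          size_t  size;
      };

      /* Called with h->lock held; returns with it held. */
      static int
      hash_expand_sketch(struct hash *h, size_t newsize)
      {
          void **newslots;

          pthread_mutex_unlock(&h->lock);
          newslots = calloc(newsize, sizeof(*newslots));   /* may sleep */
          pthread_mutex_lock(&h->lock);

          if (newslots == NULL)
              return (-1);
          if (h->size >= newsize) {
              /* Lost the race: someone expanded while we allocated. */
              free(newslots);
              return (0);
          }
          /* ... move entries from h->slots into newslots ... */
          free(h->slots);
          h->slots = newslots;
          h->size = newsize;
          return (0);
      }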
* Protect the initial list traversal in sysctl_vm_zone() with the
  uma_mtx.
  (jeff, 2002-04-14, -0/+2)
* Fix the calculation that determines uz_maxpages.  It was off for
  large zones.  Fortunately we have no large zones with maximums
  specified yet, so it wasn't breaking anything.
  Implement blocking when a zone exceeds the maximum and M_WAITOK is
  specified.  Previously this just failed like the old zone allocator
  did.  The old zone allocator didn't support WAITOK/NOWAIT though, so
  we should do what we advertise.
  While I was in there I cleaned up some more zalloc logic to further
  simplify that code path and reduce redundant code.  This was needed
  to make the blocking work properly anyway.
  (jeff, 2002-04-14, -28/+51)
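  Blocking at a zone limit is a bounded-resource wait: M_WAITOK sleeps
  until a free releases a page, M_NOWAIT fails immediately.  A userland
  sketch using a condition variable in place of the kernel's
  sleep/wakeup; names and flag values are illustrative:

      #include <pthread.h>

      #define M_WAITOK 0x0002              /* illustrative value */

      struct limit_zone {
          pthread_mutex_t lock;
          pthread_cond_t  wait;
          unsigned        pages, maxpages;
      };

      static int
      zone_reserve_page(struct limit_zone *z, int flags)
      {
          pthread_mutex_lock(&z->lock);
          while (z->pages >= z->maxpages) {
              if (!(flags & M_WAITOK)) {
                  pthread_mutex_unlock(&z->lock);
                  return (-1);             /* M_NOWAIT: fail as advertised */
              }
              pthread_cond_wait(&z->wait, &z->lock);
          }
          z->pages++;
          pthread_mutex_unlock(&z->lock);
          return (0);
      }

      static void
      zone_release_page(struct limit_zone *z)
      {
          pthread_mutex_lock(&z->lock);
          z->pages--;
          pthread_cond_signal(&z->wait);   /* wake one blocked allocator */
          pthread_mutex_unlock(&z->lock);
      }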
* Remember to unlock the zone if the fill count is too high.
  Pointed out by: pete, jake, jhb
  (jeff, 2002-04-10, -3/+4)
* Add a mechanism to disable buckets when the v_free_count drops below
  v_free_min.  This should help performance in memory-starved
  situations.
  (jeff, 2002-04-08, -6/+29)
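  The mechanism is just a gate on the bucket path, so per-CPU caches
  stop hoarding memory once the system drops below its free-page floor.
  A one-function sketch; the two globals are stand-ins for the VM's
  real counters:

      /* Stand-ins for the VM's free-page counters. */
      extern unsigned int v_free_count, v_free_min;

      /* Skip per-CPU buckets when free pages are scarce. */
      static int
      buckets_enabled(void)
      {
          return (v_free_count > v_free_min);
      }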
* Don't release the zone lock until after the dtor has been called.  As
  far as I can tell this could not have caused any problems yet,
  because UMA is still called with Giant.
  Pointy hat to: jeff
  Noticed by: jake
  (jeff, 2002-04-08, -3/+3)
* Implement uma_zdestroy().  Its prototype changed slightly.  I decided
  that I didn't like the wait argument and that if you were removing a
  zone it had better be empty.  Also, I broke out part of hash_expand
  and made a separate hash_free() for use in uma_zdestroy.
  (jeff, 2002-04-08, -23/+76)
* Rework most of the bucket allocation and free code so that per-CPU
  locks are never held across blocking operations.  Also, fix two other
  lock order reversals that were exposed by jhb's witness change.
  The free path previously had a bug that would cause it to skip the
  free bucket list in some cases and go straight to allocating a new
  bucket.  This has been fixed as well.
  These changes made the bucket handling code much cleaner and removed
  quite a few lock operations.  This should be marginally faster now.
  It is now possible to call malloc() without Giant and avoid any
  witness warnings.  This still isn't entirely safe, though, because
  malloc_type statistics are not protected by any lock.
  (jeff, 2002-04-08, -214/+191)
* This fixes a bug where isitem never got set to 1 if a certain chain
  of events relating to extreme low-memory situations occurred.  This
  was only ever seen on the port build cluster, so many thanks to kris
  for helping me debug this.
  Tested by: kris
  (jeff, 2002-04-07, -0/+2)
* Change callers of mtx_init() to pass in an appropriate lock type
  name.  In most cases NULL is passed, but in some cases, such as
  network driver locks (which use the MTX_NETWORK_LOCK macro) and UMA
  zone locks, a name is used.
  Tested on: i386, alpha, sparc64
  (jhb, 2002-04-04, -1/+1)
* fix comment typo, s/neccisary/necessary/g
  (alfred, 2002-04-02, -2/+2)
* Reset the cachefree statistics after draining the cache.  This fixes
  a bug where a sysctl within 20 seconds of a cache_drain could yield
  negative "USED" counts.  Also, grab the uma_mtx while in the sysctl
  handler.  This hadn't caused problems yet because Giant is held all
  the time.
  Reported by: kkenn
  (jeff, 2002-03-24, -0/+4)
* Add uma_zone_set_max() to add enforced limits to zones that are not
  backed by a vm object.
  (jeff, 2002-03-20, -10/+15)
* This is the first part of the new kernel memory allocator.  This
  replaces malloc(9) and vm_zone with a slab-like allocator.
  Reviewed by: arch@
  (jeff, 2002-03-19, -0/+1900)