path: root/sys/vm/uma_core.c
Each entry below shows the commit message, followed by [author, date, files changed, lines -deleted/+added].
* Move the pcpu lock out of the uma_cache and instead have a single set
  of pcpu locks. This makes uma_zone somewhat smaller (by (LOCKNAME_LEN *
  sizeof(char) + sizeof(struct mtx) * maxcpu) bytes, to be exact).
  No Objections from jeff.
  [bmilekic, 2003-06-25, 1 file, -28/+18]
* Make sure that the zone destructor doesn't get called twice in
  certain free paths.
  [bmilekic, 2003-06-25, 1 file, -2/+6]
* Use __FBSDID().
  [obrien, 2003-06-11, 1 file, -4/+3]
* Revert last commit, I have no idea what happened.
  [phk, 2003-06-09, 1 file, -1/+1]
* A white-space nit I noticed.
  [phk, 2003-06-09, 1 file, -1/+1]
* uma_zone_set_obj() must perform VM_OBJECT_LOCK_INIT() if the caller
  provides storage for the vm_object.
  [alc, 2003-04-28, 1 file, -2/+3]
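  A minimal sketch of a zone backed by caller-supplied vm_object storage. The
  item type, zone name, and page limit below are hypothetical, and the
  uma_zone_set_obj() prototype assumed here (zone, object storage, maximum
  pages) reflects the interface of this era; after this commit the function
  initializes the object's lock itself, so the caller only passes in storage.

      #include <sys/param.h>
      #include <vm/vm.h>
      #include <vm/vm_object.h>
      #include <vm/uma.h>

      struct my_item { long payload; };       /* hypothetical item type */
      #define MY_OBJ_PAGES 32                 /* hypothetical page limit */

      static struct vm_object my_obj;         /* storage supplied by the caller */
      static uma_zone_t my_zone;

      static void
      my_zone_setup(void)
      {
              my_zone = uma_zcreate("my items", sizeof(struct my_item),
                  NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
              /*
               * Back the zone with my_obj; uma_zone_set_obj() now calls
               * VM_OBJECT_LOCK_INIT() on the caller-provided storage.
               */
              uma_zone_set_obj(my_zone, &my_obj, MY_OBJ_PAGES);
      }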
* Remove an XXX comment. It is no longer a problem.
  [alc, 2003-04-26, 1 file, -4/+1]
* Lock the vm_object in obj_alloc().
  [alc, 2003-04-19, 1 file, -0/+2]
* Don't grab Giant in slab_zalloc() if M_NOWAIT is specified. This
  should allow the use of INTR_MPSAFE network drivers.
  Tested by: njl
  Glanced at by: jeff
  [gallatin, 2003-04-18, 1 file, -4/+9]
* Obtain Giant before calling kmem_alloc without M_NOWAIT and before
  calling kmem_free if Giant isn't already held.
  [tegge, 2003-03-26, 1 file, -2/+21]
* Replace calls to WITNESS_SLEEP() and witness_list() with equivalent
  calls to WITNESS_WARN().
  [jhb, 2003-03-04, 1 file, -1/+2]
* Back out M_* changes, per decision of the TRB.
  Approved by: trb
  [imp, 2003-02-19, 1 file, -6/+6]
* Change a printf to also tell how many items were left in the zone.
  [phk, 2003-02-04, 1 file, -2/+2]
* Remove M_TRYWAIT/M_WAITOK/M_WAIT. Callers should use 0.
  Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.
  [alfred, 2003-01-21, 1 file, -6/+6]
* - M_WAITOK is 0 and not a real flag. Test for this properly.
  Submitted by: tmm
  Pointy hat to: jeff
  [jeff, 2003-01-20, 1 file, -5/+4]
* Correct typos, mostly s/ a / an / where appropriate. Some whitespace
  cleanup, especially in troff files.
  [schweikh, 2003-01-01, 1 file, -1/+1]
* - Wakeup the correct address when a zone is no longer full.
  Spotted by: jake
  [jeff, 2002-11-18, 1 file, -1/+1]
* - Don't forget the flags value when using boot pages.
  Reported by: grehan
  [jeff, 2002-11-16, 1 file, -0/+1]
* atomic_set_8 isn't MI. Instead, follow Jake's suggestions about
  ZONE_LOCK.
  [mjacob, 2002-11-11, 1 file, -0/+4]
* - Add support for machine-dependent page allocation routines. MD code
  may define UMA_MD_SMALL_ALLOC to make use of this feature.
  Reviewed by: peter, jake
  [jeff, 2002-11-01, 1 file, -2/+20]
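  A hedged sketch of how a platform opts in. The prototypes shown are only an
  approximation of the interface as it existed at the time and have changed in
  later FreeBSD versions; treat them as assumptions and check vm/uma_int.h in
  the tree in question.

      /* In the machine-dependent <machine/vmparam.h>: */
      #define UMA_MD_SMALL_ALLOC

      /*
       * MD code then supplies the routines UMA uses for single-page
       * slab allocations (approximate prototypes, see note above).
       */
      void *uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *pflag,
                  int wait);
      void  uma_small_free(void *mem, int size, u_int8_t flags);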
* - Now that uma_zalloc_internal is not the fast path don't be so fussy
  about extra function calls. Refactor uma_zalloc_internal into separate
  functions for finding the most appropriate slab, filling buckets,
  allocating single items, and pulling items off of slabs. This makes
  the code significantly cleaner.
  - This also fixes the "Returning an empty bucket." panic that a few
  people have seen.
  Tested On: alpha, x86
  [jeff, 2002-10-24, 1 file, -179/+198]
* - Move the destructor calls so that they are not called with the zone
  lock held. This avoids a lock order reversal when destroying zones.
  Unfortunately, this also means that the free checks are not done
  before the destructor is called.
  Reported by: phk
  [jeff, 2002-10-24, 1 file, -5/+6]
* Be consistent about "static" functions: if the function is marked
  static in its prototype, mark it static at the definition too.
  Inspired by: FlexeLint warning #512
  [phk, 2002-09-28, 1 file, -1/+1]
* - Use my freebsd email alias in the copyright.
  - Remove redundant instances of my email alias in the file summary.
  [jeff, 2002-09-19, 1 file, -1/+1]
* - Split UMA_ZFLAG_OFFPAGE into UMA_ZFLAG_OFFPAGE and UMA_ZFLAG_HASH.
  - Remove all instances of the mallochash.
  - Stash the slab pointer in the vm page's object pointer when
  allocating from the kmem_obj.
  - Use the overloaded object pointer to find slabs for malloced memory.
  [jeff, 2002-09-18, 1 file, -78/+49]
* Don't use "NULL" when "0" is really meant.
  [archie, 2002-08-21, 1 file, -2/+2]
* Fix a lock order reversal in uma_zdestroy. The uma_mtx needs to be
  held across calls to zone_drain().
  Noticed by: scottl
  [jeff, 2002-07-05, 1 file, -4/+4]
* Remove unnecessary includes.
  [jeff, 2002-07-05, 1 file, -2/+0]
* Actually use the fini callback.
  Pointy hat to: me :-(
  Noticed By: Julian
  [jeff, 2002-07-03, 1 file, -0/+1]
* Reduce the amount of code that runs with the zone lock held in
  slab_zalloc(). This allows us to run the zone initialization functions
  without any locks held.
  [jeff, 2002-06-25, 1 file, -6/+8]
* - Remove bogus use of kmem_alloc that was inherited from the old zone
  allocator.
  - Properly set M_ZERO when talking to the back end page allocators for
  non malloc zones. This forces us to zero fill pages when they are
  first brought into a cache.
  - Properly handle M_ZERO in uma_zalloc_internal. This fixes a problem
  where per cpu buckets weren't always getting zeroed.
  [jeff, 2002-06-19, 1 file, -16/+18]
* Honor the BUCKETCACHE flag on free as well.
  [jeff, 2002-06-17, 1 file, -4/+9]
* - Introduce the new M_NOVM option which tells uma to only check the
  currently allocated slabs and bucket caches for free items. It will
  not go ask the vm for pages. This differs from M_NOWAIT in that it not
  only doesn't block, it doesn't even ask.
  - Add a new zcreate option ZONE_VM, that sets the BUCKETCACHE zflag.
  This tells uma that it should only allocate buckets out of the bucket
  cache, and not from the VM. It does this by using the M_NOVM option to
  zalloc when getting a new bucket. This is so that the VM doesn't
  recursively enter itself while trying to allocate buckets for
  vm_map_entry zones. If there are already allocated buckets when we get
  here we'll still use them but otherwise we'll skip it.
  - Use the ZONE_VM flag on vm map entries and pv entries on x86.
  [jeff, 2002-06-17, 1 file, -3/+17]
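  A hedged usage sketch. The zone and item type are illustrative, and the
  zcreate option referred to above as ZONE_VM is spelled UMA_ZONE_VM in uma.h.

      #include <sys/param.h>
      #include <sys/malloc.h>
      #include <vm/uma.h>

      struct vm_helper { void *ptr; };        /* hypothetical item type */
      static uma_zone_t helper_zone;

      static void
      helper_zone_setup(void)
      {
              /*
               * UMA_ZONE_VM marks a zone used by the VM itself: its
               * buckets are refilled only from the bucket cache (via
               * M_NOVM internally), so the allocator never recurses
               * into the VM for fresh pages.
               */
              helper_zone = uma_zcreate("vm helper", sizeof(struct vm_helper),
                  NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_VM);
      }

      void *
      grab_cached_item(uma_zone_t zone)
      {
              /*
               * M_NOVM can also be passed directly: unlike M_NOWAIT,
               * which merely refuses to sleep, this refuses even to ask
               * the VM for pages and is satisfied only from cached
               * slabs and buckets.
               */
              return (uma_zalloc(zone, M_NOWAIT | M_NOVM));
      }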
* Correct the logic for determining whether the per-CPU locks need to be
  destroyed. This fixes a problem where destroying a UMA zone would fail
  to destroy all zone mutexes.
  Reviewed by: jeff
  [iedowse, 2002-06-10, 1 file, -1/+1]
* Add a comment describing a resource leak that occurs during a failure
  case in obj_alloc.
  [jeff, 2002-06-03, 1 file, -0/+3]
* In uma_zalloc_arg(), if we are performing an M_WAITOK allocation,
  ensure that td_intr_nesting_level is 0 (like malloc() does). Since
  malloc() calls uma we can probably remove the check in malloc() for
  this now. Also, perform an extra witness check in that case to make
  sure we don't hold any locks when performing an M_WAITOK allocation.
  [jhb, 2002-05-20, 1 file, -0/+7]
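  An approximation of the checks described above as they might appear inside
  uma_zalloc_arg(); the assertion text is invented, and the witness call shown
  is WITNESS_SLEEP(), which the 2003-03-04 commit higher up in this log later
  replaces with WITNESS_WARN().

      /* Sketch, inside uma_zalloc_arg(); 'flags' is the caller's malloc flags. */
      if (flags & M_WAITOK) {
              KASSERT(curthread->td_intr_nesting_level == 0,
                  ("uma_zalloc_arg: M_WAITOK allocation in interrupt context"));
              /* Witness check: no locks may be held if we might sleep. */
              WITNESS_SLEEP(1, NULL);
      }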
* Don't call the uz free function while the zone lock is held. This can
  lead to lock order reversals. uma_reclaim now builds a list of
  freeable slabs and then unlocks the zones to do all of the frees.
  [jeff, 2002-05-13, 1 file, -14/+21]
* Remove the hash_free() lock order reversal. This could have happened
  for several reasons before. Fixing it involved restructuring the
  generic hash code to require calling code to handle locking,
  unlocking, and freeing hashes on error conditions.
  [jeff, 2002-05-13, 1 file, -69/+72]
* Use pages instead of uz_maxpages, which has not been initialized yet,
  when creating the vm_object. This was broken after the code was
  rearranged to grab giant itself.
  Spotted by: alc
  [jeff, 2002-05-04, 1 file, -2/+2]
* Move around the dbg code a bit so it's always under a lock. This stops
  a weird potential race if we were preempted right as we were doing the
  dbg checks.
  [jeff, 2002-05-02, 1 file, -8/+7]
* - Changed the size element of uma_zctor_args to be size_t instead of
  int.
  - Changed uma_zcreate to accept the size argument as a size_t instead
  of int.
  Approved by: jeff
  [arr, 2002-05-02, 1 file, -3/+3]
* malloc/free(9) no longer require Giant. Use the malloc_mtx to protect
  the mallochash. Mallochash is going to go away as soon as I introduce
  the kfree/kmalloc api and partially overhaul the malloc wrapper. This
  can't happen until all users of the malloc api that expect memory to
  be aligned on the size of the allocation are fixed.
  [jeff, 2002-05-02, 1 file, -8/+17]
* Remove the temporary alignment check in free().
  Implement the following checks on freed memory in the bucket path:
  - Slab membership
  - Alignment
  - Duplicate free
  This previously was only done if we skipped the buckets. This code
  will slow down INVARIANTS a bit, but it is smp safe. The checks were
  moved out of the normal path and into hooks supplied in uma_dbg.
  [jeff, 2002-05-02, 1 file, -19/+21]
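  The checks above are compiled in only with INVARIANTS. A purely illustrative
  sequence of the bug classes they are meant to catch on the bucket free path;
  the zones and the item here are hypothetical.

      #include <sys/param.h>
      #include <sys/malloc.h>
      #include <vm/uma.h>

      static void
      bugs_now_caught(uma_zone_t zone, uma_zone_t otherzone)
      {
              void *p;

              p = uma_zalloc(zone, M_WAITOK);
              uma_zfree(zone, p);

              uma_zfree(zone, p);                 /* duplicate free */
              uma_zfree(zone, (char *)p + 1);     /* item misaligned for this zone */
              uma_zfree(otherzone, p);            /* item not in a slab of that zone */
      }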
* Move the implementation of M_ZERO into UMA so that it can be passed to
  uma_zalloc and friends. Remove this functionality from the malloc
  wrapper. Document this change in uma.h and adjust variable names in
  uma_core.
  [jeff, 2002-04-30, 1 file, -10/+16]
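  A minimal usage sketch, assuming a previously created zone and a hypothetical
  item type; with this change zero-filling is requested directly on a zone
  allocation rather than only through the malloc(9) wrapper.

      #include <sys/param.h>
      #include <sys/malloc.h>
      #include <vm/uma.h>

      struct my_softc;                        /* hypothetical driver structure */

      struct my_softc *
      alloc_cleared_softc(uma_zone_t softc_zone)
      {
              /* The returned item arrives zero-filled thanks to M_ZERO. */
              return (uma_zalloc(softc_zone, M_WAITOK | M_ZERO));
      }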
* Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own
  mutex class. Currently this is only used for kmapentzone because
  kmapents are potentially allocated when freeing memory. This is not
  dangerous though because no other allocations will be done while
  holding the kmapentzone lock.
  [jeff, 2002-04-29, 1 file, -3/+11]
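  A hedged sketch of creating a zone with its own mutex class, along the lines
  of the kmapentzone case described above; the zone name and item type here
  are illustrative.

      #include <sys/param.h>
      #include <vm/uma.h>

      struct map_helper { void *p; };         /* hypothetical item type */
      static uma_zone_t map_helper_zone;

      static void
      map_helper_zone_setup(void)
      {
              /*
               * UMA_ZONE_MTXCLASS gives this zone's lock its own witness
               * class, so allocating from it while another zone's lock
               * is held (as can happen when freeing memory allocates a
               * map entry) does not trip a false lock-order warning.
               */
              map_helper_zone = uma_zcreate("map helper",
                  sizeof(struct map_helper), NULL, NULL, NULL, NULL,
                  UMA_ALIGN_PTR, UMA_ZONE_MTXCLASS);
      }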
* - Fix a round down bogon in uma_zone_set_max().
  Submitted by: jeff@
  [arr, 2002-04-25, 1 file, -0/+2]
* Fix a witness warning when expanding a hash table. We were allocating
  the new hash while holding the lock on a zone. Fix this by doing the
  allocation separately from the actual hash expansion. The lock is
  dropped before the allocation and reacquired before the expansion. The
  expansion code checks to see if we lost the race and frees the new
  hash if we do. We really never will lose this race because the hash
  expansion is single threaded via the timeout mechanism.
  [jeff, 2002-04-14, 1 file, -38/+79]
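  The fix follows the usual drop-the-lock-and-recheck pattern: allocate while
  the lock is released, then relock and verify the work is still needed. A
  generic sketch of that pattern with illustrative names (this is not the
  actual uma_core.c hash code).

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/malloc.h>

      struct xtable {                         /* illustrative table, not UMA's */
              struct mtx      lock;
              void            **buckets;
              int             size;
      };

      static void
      xtable_grow(struct xtable *tbl, int newsize)
      {
              void **new;

              mtx_assert(&tbl->lock, MA_OWNED);
              mtx_unlock(&tbl->lock);
              /* The allocation may sleep, so it runs with the lock dropped. */
              new = malloc(newsize * sizeof(*new), M_TEMP, M_WAITOK | M_ZERO);
              mtx_lock(&tbl->lock);
              if (tbl->size >= newsize) {
                      /* Lost the (theoretical) race; discard our copy. */
                      mtx_unlock(&tbl->lock);
                      free(new, M_TEMP);
                      mtx_lock(&tbl->lock);
                      return;
              }
              /* Migration of the old buckets into 'new' would go here. */
              free(tbl->buckets, M_TEMP);
              tbl->buckets = new;
              tbl->size = newsize;
      }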
* Protect the initial list traversal in sysctl_vm_zone() with the
  uma_mtx.
  [jeff, 2002-04-14, 1 file, -0/+2]
* Fix the calculation that determines uz_maxpages. It was off for large
  zones. Fortunately we have no large zones with maximums specified yet,
  so it wasn't breaking anything.
  Implement blocking when a zone exceeds the maximum and M_WAITOK is
  specified. Previously this just failed like the old zone allocator
  did. The old zone allocator didn't support WAITOK/NOWAIT though so we
  should do what we advertise.
  While I was in there I cleaned up some more zalloc logic to further
  simplify that code path and reduce redundant code. This was needed to
  make the blocking work properly anyway.
  [jeff, 2002-04-14, 1 file, -28/+51]
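  A small sketch of the behavior described above, with an assumed zone, item
  type, and limit; once the cap is hit, an M_WAITOK allocation now sleeps
  until another thread frees an item, while M_NOWAIT still returns NULL.

      #include <sys/param.h>
      #include <sys/malloc.h>
      #include <vm/uma.h>

      struct ticket { int id; };              /* hypothetical item type */
      static uma_zone_t ticket_zone;

      static void
      ticket_zone_setup(void)
      {
              ticket_zone = uma_zcreate("tickets", sizeof(struct ticket),
                  NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
              /* Cap the zone; uz_maxpages is derived from this item limit. */
              uma_zone_set_max(ticket_zone, 128);
      }

      struct ticket *
      ticket_get(void)
      {
              /* Sleeps at the limit instead of failing, per this commit. */
              return (uma_zalloc(ticket_zone, M_WAITOK));
      }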
* Remember to unlock the zone if the fill count is too high.
  Pointed out by: pete, jake, jhb
  [jeff, 2002-04-10, 1 file, -3/+4]