of pcpu locks. This makes uma_zone somewhat smaller (by (LOCKNAME_LEN *
sizeof(char) + sizeof(struct mtx) * maxcpu) bytes, to be exact).
No objections from jeff.

certain free paths.

provides storage for the vm_object.

should allow the use of INTR_MPSAFE network drivers.
Tested by: njl
Glanced at by: jeff

kmem_free if Giant isn't already held.

to WITNESS_WARN().

Approved by: trb

Merge M_NOWAIT/M_DONTWAIT into a single flag M_NOWAIT.

Submitted by: tmm
Pointy hat to: jeff

especially in troff files.

Spotted by: jake

Reported by: grehan

ZONE_LOCK.

may define UMA_MD_SMALL_ALLOC to make use of this feature.
Reviewed by: peter, jake

extra function calls. Refactor uma_zalloc_internal into separate functions
for finding the most appropriate slab, filling buckets, allocating single
items, and pulling items off of slabs. This makes the code significantly
cleaner.
- This also fixes the "Returning an empty bucket." panic that a few people
have seen.
Tested on: alpha, x86

held. This avoids a lock order reversal when destroying zones.
Unfortunately, this also means that the free checks are not done before
the destructor is called.
Reported by: phk

static in its prototype, mark it static at the definition too.
Inspired by: FlexeLint warning #512

- Remove redundant instances of my email alias in the file summary.

- Remove all instances of the mallochash.
- Stash the slab pointer in the vm page's object pointer when allocating from
the kmem_obj.
- Use the overloaded object pointer to find slabs for malloced memory.

calls to zone_drain().
Noticed by: scottl

Pointy hat to: me :-(
Noticed by: Julian

This allows us to run the zone initialization functions without any locks held.

allocator.
- Properly set M_ZERO when talking to the back-end page allocators for
non-malloc zones. This forces us to zero-fill pages when they are first
brought into a cache.
- Properly handle M_ZERO in uma_zalloc_internal. This fixes a problem where
per-CPU buckets weren't always getting zeroed.

allocated slabs and bucket caches for free items. It will not go ask the VM
for pages. This differs from M_NOWAIT in that it not only doesn't block, it
doesn't even ask.
- Add a new zcreate option ZONE_VM that sets the BUCKETCACHE zflag. This
tells uma that it should only allocate buckets out of the bucket cache, and
not from the VM. It does this by using the M_NOVM option to zalloc when
getting a new bucket. This is so that the VM doesn't recursively enter
itself while trying to allocate buckets for vm_map_entry zones. If there
are already allocated buckets when we get here, we'll still use them, but
otherwise we'll skip it.
- Use the ZONE_VM flag on vm map entries and pv entries on x86.

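The "doesn't even ask" semantics of M_NOVM can be sketched in userland C. Everything below (the demo_* names, the flag values, the fixed-size cache) is invented for illustration; it is not the real UMA API, just the shape of the decision: reuse a cached item if one exists, and with the no-VM flag set, fail immediately rather than fall back to the page allocator.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical flag names; the real flags live in sys/malloc.h / sys/vm/uma.h. */
#define DEMO_M_NOWAIT 0x0001 /* don't sleep, but may still ask the VM */
#define DEMO_M_NOVM   0x0002 /* only reuse cached items; never ask the VM */

struct demo_cache {
	void *items[8]; /* previously freed items kept for reuse */
	int   nitems;
};

/* Return a cached item if one exists.  With DEMO_M_NOVM set, never fall
 * back to the backing allocator, mirroring the "doesn't even ask" rule. */
void *
demo_alloc(struct demo_cache *c, int flags)
{
	if (c->nitems > 0)
		return (c->items[--c->nitems]);
	if (flags & DEMO_M_NOVM)
		return (NULL);		/* cache empty: give up immediately */
	return (malloc(64));		/* stand-in for asking the VM for pages */
}
```

This is why the recursion described above stops: when the vm_map_entry zone needs a new bucket mid-allocation, the no-VM path either reuses an existing bucket or fails, and never re-enters the VM.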
to be destroyed. This fixes a problem where destroying a UMA zone
would fail to destroy all zone mutexes.
Reviewed by: jeff

in obj_alloc.

that td_intr_nesting_level is 0 (like malloc() does). Since malloc() calls
uma, we can probably remove the check in malloc() for this now. Also,
perform an extra witness check in that case to make sure we don't hold
any locks when performing a M_WAITOK allocation.

to lock order reversals. uma_reclaim now builds a list of freeable slabs and
then unlocks the zones to do all of the frees.

several reasons before. Fixing it involved restructuring the generic hash
code to require calling code to handle locking, unlocking, and freeing hashes
on error conditions.

creating the vm_object. This was broken after the code was rearranged to
grab Giant itself.
Spotted by: alc

weird potential race if we were preempted right as we were doing the dbg
checks.

- Changed uma_zcreate to accept the size argument as a size_t instead of
an int.
Approved by: jeff

mallochash. Mallochash is going to go away as soon as I introduce the
kfree/kmalloc API and partially overhaul the malloc wrapper. This can't happen
until all users of the malloc API that expect memory to be aligned on the size
of the allocation are fixed.

Implement the following checks on freed memory in the bucket path:
- Slab membership
- Alignment
- Duplicate free
This previously was only done if we skipped the buckets. This code will slow
down INVARIANTS a bit, but it is SMP-safe. The checks were moved out of the
normal path and into hooks supplied in uma_dbg.

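The three free-path checks listed above follow a pattern that can be sketched outside the kernel. The demo_* names, item size, and bitmap below are invented stand-ins, not the uma_dbg code itself; the point is the order of the tests: membership first, then alignment, then duplicate free.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define DEMO_NITEMS    8
#define DEMO_ITEM_SIZE 64

/* Toy slab: a contiguous run of fixed-size items plus a free bitmap. */
struct demo_slab {
	char          data[DEMO_NITEMS * DEMO_ITEM_SIZE];
	unsigned char isfree[DEMO_NITEMS];
};

/* Validate a pointer being freed: slab membership, alignment within the
 * slab, and duplicate free.  Returns 1 if the free looks sane. */
int
demo_check_free(struct demo_slab *s, void *item)
{
	char *p = item;
	ptrdiff_t off = p - s->data;

	if (off < 0 || off >= (ptrdiff_t)sizeof(s->data))
		return (0);			/* not from this slab */
	if (off % DEMO_ITEM_SIZE != 0)
		return (0);			/* misaligned free */
	if (s->isfree[off / DEMO_ITEM_SIZE])
		return (0);			/* duplicate free */
	s->isfree[off / DEMO_ITEM_SIZE] = 1;
	return (1);
}
```

A bitmap per slab keeps the duplicate-free test O(1), which matters once the checks run on every bucket free rather than only on the slow path.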
uma_zalloc and friends. Remove this functionality from the malloc wrapper.
Document this change in uma.h and adjust variable names in uma_core.

mutex class. Currently this is only used for kmapentzone because kmapents
are potentially allocated when freeing memory. This is not dangerous
though because no other allocations will be done while holding the
kmapentzone lock.

Submitted by: jeff@

hash while holding the lock on a zone. Fix this by doing the allocation
separately from the actual hash expansion.
The lock is dropped before the allocation and reacquired before the expansion.
The expansion code checks to see if we lost the race and frees the new hash
if we do. We really never will lose this race because the hash expansion is
single-threaded via the timeout mechanism.

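The drop-lock/allocate/reacquire/check-for-lost-race sequence described above is a general pattern, sketched here with pthreads. The demo_zone structure and names are hypothetical, not the actual zone hash code; the race check uses the hash size as the "did someone beat us" marker, as the commit describes.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

/* Simplified zone with a resizable hash; names are illustrative only. */
struct demo_zone {
	pthread_mutex_t lock;
	void           *hash;		/* current hash table */
	size_t          hash_size;
};

/* Grow the hash without allocating while the zone lock is held:
 * drop the lock, allocate, reacquire, and back out if we lost the race. */
int
demo_hash_expand(struct demo_zone *z, size_t new_size)
{
	size_t old_size;
	void *new_hash;

	pthread_mutex_lock(&z->lock);
	old_size = z->hash_size;
	pthread_mutex_unlock(&z->lock);		/* drop before allocating */

	new_hash = calloc(new_size, 1);
	if (new_hash == NULL)
		return (0);

	pthread_mutex_lock(&z->lock);
	if (z->hash_size != old_size) {		/* someone expanded first */
		pthread_mutex_unlock(&z->lock);
		free(new_hash);
		return (0);
	}
	free(z->hash);
	z->hash = new_hash;
	z->hash_size = new_size;
	pthread_mutex_unlock(&z->lock);
	return (1);
}
```

As the commit notes, the back-out branch is effectively dead when expansion is serialized through a single timeout, but keeping it makes the function safe regardless of the caller.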
Fortunately we have no large zones with maximums specified yet, so it wasn't
breaking anything.
Implement blocking when a zone exceeds the maximum and M_WAITOK is specified.
Previously this just failed like the old zone allocator did. The old zone
allocator didn't support WAITOK/NOWAIT though, so we should do what we
advertise.
While I was in there I cleaned up some more zalloc logic to further simplify
that code path and reduce redundant code. This was needed to make the blocking
work properly anyway.

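The advertised WAITOK/NOWAIT split at a zone maximum can be sketched with a condition variable: NOWAIT fails immediately at the limit, WAITOK sleeps until a release makes room. DEMO_M_WAITOK/DEMO_M_NOWAIT and the demo_* helpers are invented stand-ins for the real flags and zone internals.

```c
#include <assert.h>
#include <pthread.h>

#define DEMO_M_WAITOK 0x1
#define DEMO_M_NOWAIT 0x2

struct demo_zone {
	pthread_mutex_t lock;
	pthread_cond_t  cv;
	int             count, max;
};

/* Reserve one item.  With DEMO_M_NOWAIT the call fails at the limit;
 * with DEMO_M_WAITOK it sleeps until demo_release() makes room. */
int
demo_reserve(struct demo_zone *z, int flags)
{
	pthread_mutex_lock(&z->lock);
	while (z->count >= z->max) {
		if (flags & DEMO_M_NOWAIT) {
			pthread_mutex_unlock(&z->lock);
			return (0);	/* advertise failure, don't sleep */
		}
		pthread_cond_wait(&z->cv, &z->lock);
	}
	z->count++;
	pthread_mutex_unlock(&z->lock);
	return (1);
}

void
demo_release(struct demo_zone *z)
{
	pthread_mutex_lock(&z->lock);
	z->count--;
	pthread_cond_signal(&z->cv);
	pthread_mutex_unlock(&z->lock);
}
```

The while loop (rather than a single if) matters: condition variables can wake spuriously, and another waiter may have taken the freed slot first.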
Pointed out by: pete, jake, jhb