path: root/sys/vm/uma_int.h
* - Remove the working-set algorithm. Instead, use the per cpu buckets as the
    working set cache. This has several advantages. Firstly, we never touch
    the per cpu queues now in the timeout handler. This removes one more
    reason for having per cpu locks. Secondly, it reduces the size of the
    zone by 8 bytes, bringing it under 200 bytes for a single proc x86 box.
    This tidies up other logic as well.
  - The 'destroy' flag no longer needs to be passed to zone_drain() since it
    always frees everything in the zone's slabs.
  - cache_drain() is now only called from zone_dtor() and so it destroys by
    default. It also does not need the destroy parameter now.
  (jeff, 2003-09-19, 1 file, -6/+1)
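  A hedged sketch of the interface simplification this describes; the
  function names come from the message, everything else is illustrative:

      /* Before: callers had to say whether draining should destroy. */
      static void zone_drain(uma_zone_t zone, int destroy);
      static void cache_drain(uma_zone_t zone, int destroy);

      /*
       * After: zone_drain() always frees everything in the zone's slabs,
       * and cache_drain() is reached only from zone_dtor(), so both can
       * drop the flag.
       */
      static void zone_drain(uma_zone_t zone);
      static void cache_drain(uma_zone_t zone);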
* - Remove the cache colorization code. We can't use it due to all of the
    broken consumers of the malloc interface who assume that the allocated
    address will be an even multiple of the size.
  - Remove disabled time delay code on uma_reclaim(). The comment there said
    it all. It was not an effective strategy and it should not be left
    #if 0'd for all eternity.
  (jeff, 2003-09-19, 1 file, -4/+0)
* - Fix the silly flag situation in UMA. Remove redundant ZFLAG/ZONE flags
    by accepting the user-supplied flags directly. Previously this was not
    done so that flags for the same field would not be defined in two
    different files. Add comments in each header instructing future
    developers on how not to shoot their feet.
  - Fix a test for !OFFPAGE which should have been a test for HASH. This
    would have caused a panic if we had ever destroyed a malloc zone. This
    also opens up the possibility that other zones could use the vsetobj()
    method rather than a hash.
  (jeff, 2003-09-19, 1 file, -11/+7)
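  The OFFPAGE/HASH fix amounts to choosing the right predicate when looking
  a slab up; a sketch, with the helper names assumed rather than copied from
  the source:

      /*
       * Sketch of the fixed test. OFFPAGE only means the slab header
       * lives outside the slab's page; only HASH zones must consult the
       * slab hash table, so testing !OFFPAGE picked the wrong path for
       * zones that stash the slab pointer in the page (vsetobj() style).
       */
      if (zone->uz_flags & UMA_ZFLAG_HASH)
              slab = hash_sfind(&zone->uz_hash, mem);   /* assumed helper */
      else
              slab = vtoslab((vm_offset_t)mem);         /* assumed helper */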
* - Initialize a pool of bucket zones so that we waste less space on zones
    that don't cache as many items.
  - Introduce the bucket_alloc(), bucket_free() functions to wrap bucket
    allocation. These functions select the appropriate bucket zone to
    allocate from or free to.
  - Rename ub_ptr to ub_cnt to reflect a change in its use. ub_cnt now
    reflects the count of free items in the bucket. This gets rid of many
    unnatural subtractions by 1 throughout the code.
  - Add ub_entries which reflects the number of entries possibly held in a
    bucket.
  (jeff, 2003-09-19, 1 file, -10/+3)
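  The bucket-zone pool is a size-class idea; a small self-contained model
  (the capacities below are made up for illustration, not UMA's values):

      #include <stdio.h>

      /* Hypothetical capacities of the pool of bucket zones. */
      static const int bucket_sizes[] = { 16, 32, 64, 128 };

      /* Index of the smallest bucket zone holding >= entries items. */
      static int
      bucket_select(int entries)
      {
              for (int i = 0; i < 4; i++)
                      if (bucket_sizes[i] >= entries)
                              return (i);
              return (3);     /* fall back to the largest bucket zone */
      }

      int
      main(void)
      {
              /* A zone caching ~20 items gets a 32-entry bucket, not 128. */
              printf("zone wants 20 -> bucket of %d\n",
                  bucket_sizes[bucket_select(20)]);
              return (0);
      }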
* - When deciding whether to init the zone with small_init or large_init,
    compare the zone element size (+1 for the byte of linkage) against
    UMA_SLAB_SIZE - sizeof(struct uma_slab), and not just UMA_SLAB_SIZE.
    Add a KASSERT in zone_small_init to make sure that the computed ipers
    (items per slab) for the zone is not zero, despite the addition of the
    check, just to be sure. (this part submitted by: silby)
  - UMA_ZONE_VM used to imply BUCKETCACHE. Now it implies CACHEONLY instead.
    CACHEONLY is like BUCKETCACHE in the case of bucket allocations, but in
    addition to that also ensures that we don't set up the zone with OFFPAGE
    slab headers allocated from the slabzone. This means that we're not
    allowed to have a UMA_ZONE_VM zone initialized for large items
    (zone_large_init) because it would require the slab headers to be
    allocated from slabzone, and hence kmem_map. Some of the zones init'd
    with UMA_ZONE_VM are so init'd before kmem_map is suballoc'd from
    kernel_map, which is why this change is necessary.
  (bmilekic, 2003-08-11, 1 file, -1/+1)
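  The sizing fix can be modeled in a few lines: items per slab must be
  computed against the space left after the slab header, and must come out
  nonzero (constants here are stand-ins, not the real UMA values):

      #include <assert.h>
      #include <stdio.h>

      #define SLAB_SIZE 4096   /* stand-in for UMA_SLAB_SIZE */
      #define SLAB_HDR    64   /* stand-in for sizeof(struct uma_slab) */

      int
      main(void)
      {
              int size = 2047;            /* zone element size */
              int memused = size + 1;     /* +1 byte of free-list linkage */

              /* Wrong: divides into the raw slab size and overcounts. */
              int bad_ipers = SLAB_SIZE / memused;
              /* Right: only the space after the header holds items. */
              int ipers = (SLAB_SIZE - SLAB_HDR) / memused;

              assert(ipers != 0);         /* the KASSERT from the commit */
              printf("naive ipers %d, real ipers %d\n", bad_ipers, ipers);
              return (0);
      }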
* - Get rid of the ill-conceived uz_cachefree member of uma_zone.
  - In sysctl_vm_zone use the per cpu locks to read the current cache
    statistics; this makes them more accurate while under heavy load.
  Submitted by: tegge
  (jeff, 2003-07-30, 1 file, -1/+0)
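  A sketch of the stats path after the change: with uz_cachefree gone, the
  sysctl sums the per cpu caches under their locks on demand (field and
  macro names are modeled on UMA and should be treated as assumptions):

      cachefree = 0;
      for (cpu = 0; cpu <= mp_maxid; cpu++) {
              CPU_LOCK(cpu);                  /* assumed per-cpu lock macro */
              cache = &zone->uz_cpu[cpu];
              if (cache->uc_allocbucket != NULL)
                      cachefree += cache->uc_allocbucket->ub_cnt;
              if (cache->uc_freebucket != NULL)
                      cachefree += cache->uc_freebucket->ub_cnt;
              CPU_UNLOCK(cpu);
      }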
* Move the pcpu lock out of the uma_cache and instead have a single set of
  pcpu locks. This makes uma_zone somewhat smaller (by (LOCKNAME_LEN *
  sizeof(char) + sizeof(struct mtx) * maxcpu) bytes, to be exact).
  No objections from jeff.
  (bmilekic, 2003-06-25, 1 file, -22/+7)
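  A sketch of the layout change: one system-wide array of per-cpu locks
  shared by all zones, rather than a named mutex embedded in every zone's
  per-cpu cache (names here are illustrative):

      /* One global set of per-cpu locks, shared across all zones. */
      static struct mtx uma_pcpu_mtx[MAXCPU];

      #define CPU_LOCK(cpu)   mtx_lock(&uma_pcpu_mtx[(cpu)])
      #define CPU_UNLOCK(cpu) mtx_unlock(&uma_pcpu_mtx[(cpu)])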
* Prepend _ to internal union members to avoid ambiguity.
  Found by: FlexeLint
  (phk, 2003-05-31, 1 file, -4/+4)
* - Add support for machine-dependent page allocation routines. MD code may
    define UMA_MD_SMALL_ALLOC to make use of this feature.
  Reviewed by: peter, jake
  (jeff, 2002-11-01, 1 file, -0/+8)
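  A sketch of the MD hook: an architecture that can satisfy small
  allocations directly (e.g. out of a direct map) defines the macro and
  supplies allocators with UMA's page-allocation shape. The prototypes
  below are modeled on FreeBSD's and may differ by branch:

      #ifdef UMA_MD_SMALL_ALLOC
      /* MD code supplies page-sized allocations, bypassing kmem_map. */
      void   *uma_small_alloc(uma_zone_t zone, int bytes, u_int8_t *flags,
                  int wait);
      void    uma_small_free(void *mem, int size, u_int8_t flags);
      #endif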
* - Use my freebsd email alias in the copyright.
  - Remove redundant instances of my email alias in the file summary.
  (jeff, 2002-09-19, 1 file, -5/+1)
* - Split UMA_ZFLAG_OFFPAGE into UMA_ZFLAG_OFFPAGE and UMA_ZFLAG_HASH.
  - Remove all instances of the mallochash.
  - Stash the slab pointer in the vm page's object pointer when allocating
    from the kmem_obj.
  - Use the overloaded object pointer to find slabs for malloced memory.
  (jeff, 2002-09-18, 1 file, -4/+35)
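  The pointer-overloading trick fits in two tiny helpers: stash the slab
  pointer in the vm_page's object field at allocation time and read it back
  at free time. This sketch is modeled on UMA's vsetslab()/vtoslab()-style
  helpers; treat the details as assumptions, not the source:

      static void
      vsetslab(vm_offset_t va, uma_slab_t slab)
      {
              vm_page_t p;

              p = PHYS_TO_VM_PAGE(pmap_kextract(va));
              p->object = (vm_object_t)slab;  /* overloaded, not a real ref */
      }

      static uma_slab_t
      vtoslab(vm_offset_t va)
      {
              vm_page_t p;

              p = PHYS_TO_VM_PAGE(pmap_kextract(va));
              return ((uma_slab_t)p->object); /* replaces the mallochash */
      }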
* Part 1 of KSE-III: the ability to schedule multiple threads per process
  (on one cpu) by making ALL system calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in
  tools).
  Reviewed by: almost everyone who counts (at various times: peter, jhb,
  matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still Beta code and contains lots of debugging stuff.
  Expect slight instability in signals.
  (julian, 2002-06-29, 1 file, -1/+1)
* - Introduce the new M_NOVM option which tells uma to only check the
    currently allocated slabs and bucket caches for free items. It will not
    go ask the vm for pages. This differs from M_NOWAIT in that it not only
    doesn't block, it doesn't even ask.
  - Add a new zcreate option ZONE_VM, that sets the BUCKETCACHE zflag. This
    tells uma that it should only allocate buckets out of the bucket cache,
    and not from the VM. It does this by using the M_NOVM option to zalloc
    when getting a new bucket. This is so that the VM doesn't recursively
    enter itself while trying to allocate buckets for vm_map_entry zones. If
    there are already allocated buckets when we get here we'll still use
    them but otherwise we'll skip it.
  - Use the ZONE_VM flag on vm map entries and pv entries on x86.
  (jeff, 2002-06-17, 1 file, -0/+1)
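  A hedged sketch of where M_NOVM bites when a BUCKETCACHE zone refills a
  bucket (flag names follow the message; the internal-allocator call and
  control flow are simplified assumptions):

      /* A BUCKETCACHE zone refills buckets cache-only, so the VM never
       * recurses into itself via vm_map_entry allocations. */
      bflags = M_NOWAIT;
      if (zone->uz_flags & UMA_ZFLAG_BUCKETCACHE)
              bflags |= M_NOVM;   /* use existing slabs; never ask the VM */
      bucket = uma_zalloc_internal(bucketzone, NULL, bflags);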
* Add a new zone flag UMA_ZONE_MTXCLASS. This puts the zone in its own
  mutex class. Currently this is only used for kmapentzone, because
  kmapents are potentially allocated when freeing memory. This is not
  dangerous, though, because no other allocations will be done while
  holding the kmapentzone lock.
  (jeff, 2002-04-29, 1 file, -6/+21)
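  An illustrative use of the flag, modeled on the kmapentzone creation in
  vm_map.c (the exact argument list is an assumption for that era):

      /* kmapentzone gets its own mutex class so witness tolerates the
       * allocate-while-freeing nesting the message describes. */
      kmapentzone = uma_zcreate("KMAP ENTRY", sizeof(struct vm_map_entry),
          NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_MTXCLASS);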
* Fix the calculation that determines uz_maxpages. It was off for large
  zones. Fortunately we have no large zones with maximums specified yet, so
  it wasn't breaking anything.
  Implement blocking when a zone exceeds the maximum and M_WAITOK is
  specified. Previously this just failed like the old zone allocator did.
  The old zone allocator didn't support WAITOK/NOWAIT though, so we should
  do what we advertise.
  While I was in there I cleaned up some more zalloc logic to further
  simplify that code path and reduce redundant code. This was needed to
  make the blocking work properly anyway.
  (jeff, 2002-04-14, 1 file, -0/+2)
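  A hedged sketch of the new blocking behavior (the sleep channel, wmesg,
  and field names are assumptions; the real code sits in the slab
  allocation path):

      /* Block instead of failing when the zone is at its limit. */
      while (zone->uz_pages >= zone->uz_maxpages) {
              if ((flags & M_WAITOK) == 0)
                      return (NULL);  /* M_NOWAIT keeps the old behavior */
              msleep(zone, &zone->uz_lock, PVM, "zonelimit", 0);
      }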
* Quiet witness warnings about acquiring several zone locks. In the case
  that this happens it is OK.
  (jeff, 2002-04-08, 1 file, -1/+2)
* Rework most of the bucket allocation and free code so that per cpu locks
  are never held across blocking operations. Also, fix two other lock order
  reversals that were exposed by jhb's witness change.
  The free path previously had a bug that would cause it to skip the free
  bucket list in some cases and go straight to allocating a new bucket.
  This has been fixed as well.
  These changes made the bucket handling code much cleaner and removed
  quite a few lock operations. This should be marginally faster now.
  It is now possible to call malloc w/o Giant and avoid any witness
  warnings. This still isn't entirely safe though because malloc_type
  statistics are not protected by any lock.
  (jeff, 2002-04-08, 1 file, -1/+2)
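  The lock-ordering fix follows a standard pattern: drop the per cpu lock
  before anything that can block, then re-acquire and revalidate. A generic
  sketch, not the actual UMA code:

      CPU_UNLOCK(cpu);    /* never hold a pcpu lock across blocking ops */
      bucket = uma_zalloc_internal(bucketzone, NULL, M_NOWAIT);
      CPU_LOCK(cpu);
      /* The cache may have changed while unlocked; recheck before use. */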
* Spelling correction; s/seperate/separate/g
  Submitted by: eric
  (jeff, 2002-04-07, 1 file, -1/+1)
* Change callers of mtx_init() to pass in an appropriate lock type name. In
  most cases NULL is passed, but in some cases such as network driver locks
  (which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.
  Tested on: i386, alpha, sparc64
  (jhb, 2002-04-04, 1 file, -2/+4)
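  An example of the two calling styles the message describes, using the
  four-argument mtx_init() of that era (the UMA type string below is an
  assumption):

      /* Typical driver: NULL type, so witness keys off the name alone. */
      mtx_init(&sc->sc_mtx, device_get_nameunit(dev), NULL, MTX_DEF);

      /* UMA zone locks: unique name per zone, one shared type string. */
      mtx_init(&zone->uz_lock, zone->uz_name, "uma zone", MTX_DEF);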
* Add a new mtx_init option "MTX_DUPOK" which allows duplicate acquires of
  locks with this flag. Remove the dup_list and dup_ok code from
  subr_witness. Now we just check for the flag instead of doing string
  compares.
  Also, switch the process lock, process group lock, and uma per cpu locks
  over to this interface. The original mechanism did not work well for uma
  because per cpu lock names are unique to each zone.
  Approved by: jhb
  (jeff, 2002-03-27, 1 file, -1/+1)
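  An illustrative use of the flag, shown with the later four-argument
  mtx_init() (field names are assumptions):

      /* Per cpu cache locks have per-zone names; MTX_DUPOK tells witness
       * that holding two locks of this type at once is expected. */
      mtx_init(&cache->uc_lock, cache->uc_name, NULL, MTX_DEF | MTX_DUPOK);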
* This is the first part of the new kernel memory allocator. This replaces
  malloc(9) and vm_zone with a slab-like allocator.
  Reviewed by: arch@
  (jeff, 2002-03-19, 1 file, -0/+328)