path: root/lib/libc/stdlib/malloc.c
Commit log (each entry shows: commit message [author, date, files changed, lines -/+])
* Fix a lock order reversal bug that could cause deadlock during fork(2). [jasone, 2008-12-01, 1 file, -11/+37]
    Reported by: kib
* Adjust an assertion to handle the case where a lock is contested, but spinning is avoided due to running on a single-CPU system. [jasone, 2008-11-30, 1 file, -1/+1]
    Reported by: stefanf
* Do not spin when trying to lock on a single-CPU system. [jasone, 2008-11-30, 1 file, -11/+13]
    Reported by: davidxu
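    A minimal sketch of the idea, assuming a CPU count detected at startup; the names (my_lock_acquire, my_ncpus, MY_SPIN_LIMIT) are hypothetical and not the symbols used in malloc.c:

        #include <stdatomic.h>
        #include <sched.h>

        static int my_ncpus;            /* assumed to be detected at startup */
        #define MY_SPIN_LIMIT 1000      /* arbitrary bound, for illustration only */

        static void
        my_lock_acquire(atomic_flag *lock)
        {
                /*
                 * Busy-waiting only helps if another CPU can release the lock
                 * while we spin; on a single-CPU system, skip straight to
                 * yielding so the lock holder gets a chance to run.
                 */
                if (my_ncpus > 1) {
                        for (int i = 0; i < MY_SPIN_LIMIT; i++) {
                                if (!atomic_flag_test_and_set(lock))
                                        return;         /* acquired while spinning */
                        }
                }
                while (atomic_flag_test_and_set(lock))
                        sched_yield();                  /* blocked: yield the CPU */
        }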
* Revert to preferring mmap(2) over sbrk(2) when mapping memory, due to potential extreme contention in the kernel for multi-threaded applications on SMP systems. [jasone, 2008-11-03, 1 file, -12/+17]
    Reported by: kris
* Use PAGE_{SIZE,MASK,SHIFT} from machine/param.h rather than hard-coding page size and using sysconf(3). [jasone, 2008-09-10, 1 file, -120/+88]
    Suggested by: marcel
* Unbreak ia64: pages are 8KB. [marcel, 2008-09-06, 1 file, -1/+1]
* Add thread-specific caching for small size classes, based on magazines. [jasone, 2008-08-27, 1 file, -231/+1080]
    This caching allows for completely lock-free allocation/deallocation in the steady state, at the expense of likely increased memory use and fragmentation.
    Reduce the default number of arenas to 2*ncpus, since thread-specific caching typically reduces arena contention.
    Modify size class spacing to include ranges of 2^n-spaced, quantum-spaced, cacheline-spaced, and subpage-spaced size classes. The advantages are: fewer size classes, reduced false cacheline sharing, and reduced internal fragmentation for allocations that are slightly over 512, 1024, etc.
    Increase RUN_MAX_SMALL, in order to limit fragmentation for the subpage-spaced size classes.
    Add a size-->bin lookup table for small sizes to simplify translating sizes to size classes. Include a hard-coded constant table that is used unless custom size class spacing is specified at run time.
    Add the ability to disable tiny size classes at compile time via MALLOC_TINY.
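    The size-->bin lookup table boils down to something like the sketch below. It assumes quantum-only spacing and a 512-byte small-size cutoff for brevity; the real table mixes tiny/quantum/cacheline/subpage spacing, and all names here are illustrative:

        #include <stddef.h>
        #include <stdint.h>

        #define SMALL_MAX       512     /* assumed small-size cutoff */
        #define QUANTUM         8       /* assumed spacing */

        /* One byte per possible small size: request size -> bin index. */
        static uint8_t size2bin[SMALL_MAX + 1];

        static void
        size2bin_init(void)
        {
                for (size_t s = 1; s <= SMALL_MAX; s++)
                        size2bin[s] = (uint8_t)((s + QUANTUM - 1) / QUANTUM - 1);
        }

        static inline unsigned
        small_size_to_bin(size_t size)
        {
                /* Caller guarantees 0 < size <= SMALL_MAX: one load, no branches
                 * on size-class boundaries in the allocation fast path. */
                return size2bin[size];
        }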
* Move CPU_SPINWAIT into the innermost spin loop, in order to allow faster preemption while busy-waiting. [jasone, 2008-08-14, 1 file, -2/+3]
    Submitted by: Mike Schuster <schuster@adobe.com>
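    A sketch of where the hint ends up, assuming a simple test-and-test-and-set spin lock; the lock itself and the CPU_SPINWAIT definition shown are illustrative, not the actual malloc.c code:

        #include <stdatomic.h>

        #ifndef CPU_SPINWAIT
        #define CPU_SPINWAIT __asm__ volatile("pause")  /* x86 flavor; arch-specific */
        #endif

        static void
        my_spin_lock(atomic_int *lock)
        {
                while (atomic_exchange(lock, 1) != 0) {
                        /* Innermost loop: the pause hint is issued on every
                         * busy-wait iteration, not once per acquisition attempt. */
                        while (atomic_load(lock) != 0)
                                CPU_SPINWAIT;
                }
        }

        static void
        my_spin_unlock(atomic_int *lock)
        {
                atomic_store(lock, 0);
        }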
* Re-order the terms of an expression in arena_run_reg_dalloc() to correctly detect whether the integer division table is large enough to handle the divisor. [jasone, 2008-08-14, 1 file, -2/+2]
    Before this change, the last two table elements were never used, thus causing the slow path to be used for those divisors.
* Remove variables which are assigned values and never used thereafter. [cperciva, 2008-08-08, 1 file, -5/+1]
    Found by: LLVM/Clang Static Checker
    Approved by: jasone
* Enhance arena_chunk_map_t to directly support run coalescing, and use the chunk map instead of red-black trees where possible. [jasone, 2008-07-18, 1 file, -394/+338]
    Remove the red-black trees and node objects that are obsoleted by this change. The net result is a ~1-2% memory savings, and a substantial allocation speed improvement.
* In the error path through base_alloc(), release base_mtx. [1] [jasone, 2008-06-10, 1 file, -3/+7]
    Fix bit vector initialization for run headers.
    Submitted by: [1] Mike Schuster <schuster@adobe.com>
* Add a separate tree to track arena chunks that contain dirty pages. [jasone, 2008-05-01, 1 file, -157/+133]
    This substantially improves worst case allocation performance, since O(lg n) tree search can be used instead of O(n) tree iteration.
    Use rb_wrap() instead of directly calling rb_*() macros.
* Set QUANTUM_2POW_MIN and SIZEOF_PTR_2POW parameters for MIPS. [gonzo, 2008-04-29, 1 file, -0/+5]
    Approved by: imp
* Check for integer overflow before calling sbrk(2), since it uses a signed increment argument, but the size is an unsigned integer. [jasone, 2008-04-29, 1 file, -0/+7]
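    The guard amounts to something like the following sketch (my_sbrk_alloc is a hypothetical wrapper, not the actual code): a size_t request larger than INTPTR_MAX would be reinterpreted by sbrk(2) as a negative increment, so it is rejected up front.

        #include <stdint.h>
        #include <unistd.h>
        #include <errno.h>

        static void *
        my_sbrk_alloc(size_t size)
        {
                if (size > (size_t)INTPTR_MAX) {
                        /* Would wrap to a negative increment; refuse. */
                        errno = ENOMEM;
                        return NULL;
                }
                void *p = sbrk((intptr_t)size);
                return (p == (void *)-1) ? NULL : p;
        }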
* Implement red-black trees without using parent pointers, and store the color bit in the least significant bit of the right child pointer, in order to reduce red-black tree linkage overhead by ~2X as compared to sys/tree.h. [jasone, 2008-04-23, 1 file, -116/+171]
    Use the new red-black tree implementation in malloc, which drops memory usage by ~0.5 or ~1%, for 32- and 64-bit systems, respectively.
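    The pointer-tagging trick can be sketched as below, assuming nodes are at least pointer-aligned so bit 0 of any node pointer is free; the real implementation is macro-generated (rb.h) and these names are hypothetical:

        #include <stdint.h>
        #include <stddef.h>

        typedef struct my_rb_node {
                struct my_rb_node *left;
                uintptr_t          right_and_color;  /* right child pointer | color bit */
        } my_rb_node_t;

        #define MY_RB_RED ((uintptr_t)1)

        static inline my_rb_node_t *
        my_rb_right(const my_rb_node_t *n)
        {
                return (my_rb_node_t *)(n->right_and_color & ~MY_RB_RED);
        }

        static inline int
        my_rb_is_red(const my_rb_node_t *n)
        {
                return (int)(n->right_and_color & MY_RB_RED);
        }

        static inline void
        my_rb_set_right(my_rb_node_t *n, my_rb_node_t *right)
        {
                /* Replace the right child, preserving the color bit. */
                n->right_and_color = (uintptr_t)right |
                    (n->right_and_color & MY_RB_RED);
        }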
* Remove stale #include <machine/atomic.h>, which was needed by lazy deallocation. [jasone, 2008-03-07, 1 file, -4/+4]
* Fix a race condition in arena_ralloc() for shrinking in-place large reallocation, when junk filling is enabled. [jasone, 2008-02-17, 1 file, -25/+41]
    Junk filling must occur prior to shrinking, since any deallocated trailing pages are immediately available for use by other threads.
    Reported by: Mats Palmgren <mats.palmgren@bredband.net>
* Remove support for lazy deallocation. [jasone, 2008-02-17, 1 file, -209/+3]
    Benchmarks across a wide range of allocation patterns, number of CPUs, and MALLOC_OPTIONS settings indicate that lazy deallocation has the potential to worsen throughput dramatically. Performance degradation occurs when multiple threads try to clear the lazy free cache simultaneously. Various experiments to avoid this bottleneck failed to completely solve this problem, while adding yet more complexity.
* Fix a bug in lazy deallocation that was introduced when arena_dalloc_lazy_hard() was split out of arena_dalloc_lazy() in revision 1.162. [jasone, 2008-02-08, 1 file, -7/+10]
    Reduce thundering herd problems in lazy deallocation by randomly varying how many probes a thread does before taking the slow path.
* Clean up manipulation of chunk page map elements to remove some tenuous assumptions about whether bits are set at various times. This makes adding other flags safe. [jasone, 2008-02-08, 1 file, -362/+357]
    Reorganize functions in order to inline i{m,c,p,s,re}alloc(). This allows the entire fast-path call chains for malloc() and free() to be inlined. [1]
    Suggested by: [1] Stuart Parmenter <stuart@mozilla.com>
* Track dirty unused pages so that they can be purged if they exceed a threshold, according to the 'F' MALLOC_OPTIONS flag. This obsoletes the 'H' flag. [jasone, 2008-02-06, 1 file, -664/+956]
    Try to realloc() large objects in place. This substantially speeds up incremental large reallocations in the common case.
    Fix a bug in arena_ralloc() that caused relocation of sub-page objects even if the old and new sizes were in the same size class.
    Maintain trees of runs and simplify the per-chunk page map. This allows logarithmic-time searching for sufficiently large runs in arena_run_alloc(), whereas the previous algorithm required linear time in the worst case.
    Break various large functions into smaller sub-functions, and inline only the functions that are in the fast path for small object allocation/deallocation.
    Remove an unnecessary check in base_pages_alloc_mmap().
    Avoid integer division in choose_arena() for the NO_TLS case on single-CPU systems.
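    The purge-on-threshold policy looks roughly like the sketch below. Everything here is illustrative: the structure, the names, and the use of a single madvise(2) call stand in for the real dirty-run tree walk and 'F'-option handling.

        #include <sys/mman.h>
        #include <stddef.h>

        struct my_arena {
                size_t ndirty;          /* dirty (unused but resident) pages */
                size_t dirty_max;       /* threshold derived from the 'F' option */
        };

        static void
        my_arena_maybe_purge(struct my_arena *arena, void *run, size_t npages,
            size_t pagesize)
        {
                arena->ndirty += npages;
                if (arena->ndirty <= arena->dirty_max)
                        return;         /* below threshold: keep pages cached */
                /*
                 * Over threshold: let the kernel reclaim these pages.  A real
                 * implementation walks its dirty-run tree and purges enough
                 * runs to drop back under the threshold.
                 */
                madvise(run, npages * pagesize, MADV_FREE);
                arena->ndirty -= npages;
        }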
* Enable both sbrk(2)- and mmap(2)-based memory acquisition methods by default. [jasone, 2008-01-03, 1 file, -7/+8]
    This has the disadvantage of rendering the datasize resource limit irrelevant, but without this change, legitimate uses of more memory than will fit in the data segment are thwarted by default.
    Fix chunk_alloc_mmap() to work correctly if initial mapping is not chunk-aligned and mapping extension fails.
* Fix a major chunk-related memory leak in chunk_dealloc_dss_record(). [1] [jasone, 2007-12-31, 1 file, -65/+56]
    Clean up DSS-related locking and protect all pertinent variables with dss_mtx (remove dss_chunks_mtx). This fixes race conditions that could cause chunk leaks.
    Reported by: [1] kris
* Fix a bug related to sbrk() calls that could cause address space leaks. [jasone, 2007-12-31, 1 file, -186/+268]
    This is a long-standing bug, but until recent changes it was difficult to trigger, and even then its impact was non-catastrophic, with the exception of revision 1.157.
    Optimize chunk_alloc_mmap() to avoid the need for unmapping pages in the common case. Thanks go to Kris Kennaway for a patch that inspired this change.
    Do not maintain a record of previously mmap'ed chunk address ranges. The original intent was to avoid the extra system call overhead in chunk_alloc_mmap(), which is no longer a concern. This also allows some simplifications for the tree of unused DSS chunks.
    Introduce huge_mtx and dss_chunks_mtx to replace chunks_mtx. There was no compelling reason to use the same mutex for these disjoint purposes.
    Avoid memset() for huge allocations when possible.
    Maintain two trees instead of one for tracking unused DSS address ranges. This allows scalable allocation of multi-chunk huge objects in the DSS. Previously, multi-chunk huge allocation requests failed if the DSS could not be extended.
* Back out premature commit of previous version. [jasone, 2007-12-28, 1 file, -183/+113]
* Maintain two trees instead of one (old_chunks --> old_chunks_{ad,szad}) in order to support re-use of multi-chunk unused regions within the DSS for huge allocations. This generalization is important to correct function when mmap-based allocation is disabled. [jasone, 2007-12-28, 1 file, -113/+183]
    Avoid zeroing re-used memory in the DSS unless it really needs to be zeroed.
* Release chunks_mtx for all paths through chunk_dealloc(). [jasone, 2007-12-28, 1 file, -1/+4]
    Reported by: kris
* Add the 'D' and 'M' run time options, and use them to control whether memory is acquired from the system via sbrk(2) and/or mmap(2). [jasone, 2007-12-27, 1 file, -291/+435]
    By default, use sbrk(2) only, in order to support traditional use of resource limits. Additionally, when both options are enabled, prefer the data segment to anonymous mappings, in order to coexist better with large file mappings in applications on 32-bit platforms. This change has the potential to increase memory fragmentation due to the linear nature of the data segment, but from a performance perspective this is mitigated by the use of madvise(2). [1]
    Add the ability to interpret integer prefixes in MALLOC_OPTIONS processing. For example, MALLOC_OPTIONS=lllllllll can now be specified as MALLOC_OPTIONS=9l.
    Reported by: [1] rwatson
    Design review: [1] alc, peter, rwatson
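    The integer-prefix syntax can be parsed with a loop along these lines; my_parse_malloc_options and the apply callback are hypothetical stand-ins for the real option dispatch, which handles specific letters inline.

        #include <ctype.h>
        #include <stdlib.h>

        static void
        my_parse_malloc_options(const char *opts, void (*apply)(char opt))
        {
                while (*opts != '\0') {
                        unsigned long nreps = 1;
                        if (isdigit((unsigned char)*opts)) {
                                char *end;
                                /* A decimal prefix repeats the next option
                                 * letter, so "9l" behaves like "lllllllll". */
                                nreps = strtoul(opts, &end, 10);
                                opts = end;
                                if (*opts == '\0')
                                        break;  /* trailing count, no option */
                        }
                        for (unsigned long i = 0; i < nreps; i++)
                                apply(*opts);
                        opts++;
                }
        }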
* Use fixed point integer math instead of floating point math when calculating run sizes. [jasone, 2007-12-18, 1 file, -42/+47]
    Use of the floating point unit was a potential pessimization to context switching for applications that do not otherwise use floating point math. [1]
    Reformat cpp macro-related comments to improve consistency.
    Submitted by: das
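    The general fixed-point idea, shown here on a hypothetical header-overhead check (a 0.5% bound is used purely as an example): scale the ratio by a power of two and compare integers, so the FPU is never touched at run time.

        #include <stddef.h>

        #define FIXED_SHIFT 12                                  /* scale by 2^12 */
        #define MAX_OVERHEAD_FIXED (((size_t)1 << FIXED_SHIFT) / 200)  /* 0.5% in fixed point */

        static int
        my_overhead_ok(size_t hdr_size, size_t run_size)
        {
                /* hdr/run <= max  <=>  hdr * 2^shift <= max_fixed * run,
                 * evaluated entirely in integer arithmetic. */
                return (hdr_size << FIXED_SHIFT) <= MAX_OVERHEAD_FIXED * run_size;
        }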
* Refactor features a bit in order to make it possible to disable lazy deallocation and dynamic load balancing via the MALLOC_LAZY_FREE and MALLOC_BALANCE knobs. This is a non-functional change, since these features are still enabled when possible. [jasone, 2007-12-17, 1 file, -52/+127]
    Clean up a few things that more pedantic compiler settings would cause complaints over.
* Only zero large allocations when necessary (for calloc()). [jasone, 2007-11-28, 1 file, -1/+1]
* Implement dynamic load balancing of thread-->arena mapping, based on lock contention. The intent is to dynamically adjust to load imbalances, which can cause severe contention. [jasone, 2007-11-27, 1 file, -58/+297]
    Use pthread mutexes where possible instead of libc "spinlocks" (they aren't actually spin locks). Conceptually, this change is meant only to support the dynamic load balancing code by enabling the use of spin locks, but it has the added apparent benefit of substantially improving performance due to reduced context switches when there is moderate arena lock contention.
    Proper tuning parameter configuration for this change is a finicky business, and it is very much machine-dependent. One seemingly promising solution would be to run a tuning program during operating system installation that computes appropriate settings for load balancing. (The pthreads adaptive spin locks should probably be similarly tuned.)
* Implement lazy deallocation of small objects. [jasone, 2007-11-27, 1 file, -0/+218]
    For each arena, maintain a vector of slots for lazily freed objects. For each deallocation, before doing the hard work of locking the arena and deallocating, try several times to randomly insert the object into the vector using atomic operations.
    This approach is particularly effective at reducing contention for multi-threaded applications that use the producer-consumer model, wherein one producer thread allocates objects, then multiple consumer threads deallocate those objects.
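    A compressed sketch of that fast path; slot count, probe count, and the use of rand() are all stand-ins (the real code would use a cheaper, lock-free random source), and the names are hypothetical.

        #include <stdatomic.h>
        #include <stdlib.h>

        #define LAZY_SLOTS  256
        #define LAZY_PROBES 4

        struct my_lazy_cache {
                _Atomic(void *) slot[LAZY_SLOTS];
        };

        /* Returns 1 if the pointer was parked without taking the arena lock;
         * 0 means the caller must lock the arena and free normally. */
        static int
        my_lazy_free_try(struct my_lazy_cache *cache, void *ptr)
        {
                for (int i = 0; i < LAZY_PROBES; i++) {
                        unsigned idx = (unsigned)rand() % LAZY_SLOTS;
                        void *expected = NULL;
                        if (atomic_compare_exchange_strong(&cache->slot[idx],
                            &expected, ptr))
                                return 1;
                }
                return 0;
        }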
* Avoid re-zeroing memory in calloc() when possible. [jasone, 2007-11-27, 1 file, -143/+218]
* Fix stats printing of the amount of memory currently consumed by huge allocations. [1] [jasone, 2007-11-27, 1 file, -36/+37]
    Fix calculation of the number of arenas when 'n' is specified via MALLOC_OPTIONS.
    Clean up various style inconsistencies.
    Obtained from: [1] NetBSD
* Fix junk/zero filling for realloc(). [jasone, 2007-06-15, 1 file, -36/+48]
    Junk filling was missing in one case, and zero filling was broken in a way that could cause memory corruption.
    Update comments.
* Use size_t instead of unsigned for pagesize-related values, in order to avoid downcasting issues. In particular, this change fixes posix_memalign(3) for alignments greater than 2^31 on LP64 systems. [jasone, 2007-03-29, 1 file, -4/+8]
    Make sure that NDEBUG is always set to be compatible with MALLOC_DEBUG. [1]
    Reported by: [1] Lee Hyo geol <hyogeollee@gmail.com>
* Remove the run promotion/demotion machinery. Replace it with red-black trees that track all non-full runs for each bin. [jasone, 2007-03-28, 1 file, -430/+219]
    Use the red-black trees to be able to guarantee that each new allocation is placed in the lowest address available in any non-full run. This change completes the transition to allocating from low addresses in order to reduce the retention of sparsely used chunks.
    If the run in current use by a bin becomes empty, deallocate the run rather than retaining it for later use. The previous behavior had the tendency to spread empty runs across multiple chunks, thus preventing the release of chunks that were completely unused.
    Generalize base_chunk_alloc() (and rename it to base_pages_alloc()) to handle allocation sizes larger than the chunk size, so that it is possible to support chunk sizes that are smaller than an arena object.
    Reduce the minimum chunk size from 64kB to 8kB.
    Optimize tracking of addresses for deleted chunks.
    Fix a statistics bug for huge allocations.
* Fix some subtle bugs for posix_memalign() having to do with integer rounding and overflow. Carefully document what the various overflow tests actually detect. [jasone, 2007-03-24, 1 file, -18/+43]
    The bugs mostly canceled out, such that the worst possible failure cases resulted in non-fatal over-allocations.
* Fix posix_memalign() for large objects. [jasone, 2007-03-23, 1 file, -151/+297]
    Now that runs are extents rather than binary buddies, the alignment guarantees are weaker, which requires a more complex aligned allocation algorithm, similar to that used for alignment greater than the chunk size.
    Reported by: matteo
* Use extents rather than binary buddies to track free pages within chunks. This allows runs to be any multiple of the page size. [jasone, 2007-03-23, 1 file, -323/+332]
    The primary advantage is that large objects are no longer constrained to be 2^n pages, which can dramatically decrease internal fragmentation for large objects. This also allows the sizes for runs that back small objects to be more finely tuned.
    Free runs are searched for linearly using the chunk page map (with the help of some heuristic optimizations). This changes the allocation policy from "first best fit" to "first fit". A prototype red-black tree implementation for tracking free runs that implemented "first best fit" did not cause a measurable speed or memory usage difference for realistic chunk sizes (though of course it is possible to construct benchmarks that favor one allocation policy over another).
    Refine the handling of fullness constraints for small runs to be more tunable.
    Restructure the per chunk page map to contain only two fields per entry, rather than four. Also, increase each entry from 4 to 8 bytes, since it allows for 32-bit integers, without increasing the number of chunk header pages.
    Relax the maximum chunk size constraint. This is of no practical interest; it is merely fallout from the chunk page map restructuring.
    Revamp statistics gathering and reporting to be faster, clearer and more informative. Statistics gathering is fast enough now to have little to no impact on application speed, but it still requires approximately two extra pages of memory per arena (per process). This memory overhead may be acceptable for most systems, but we still need to leave statistics gathering disabled by default in RELENG branches.
    Rename NO_MALLOC_EXTRAS to MALLOC_PRODUCTION in order to make its intent clearer (i.e. it should be defined in RELENG branches).
* Avoid using vsnprintf(3) unless MALLOC_STATS is defined, in order to avoid substantial potential bloat for static binaries that do not otherwise use any printf(3)-family functions. [1] [jasone, 2007-03-20, 1 file, -152/+233]
    Rearrange arena_run_t so that the region bitmask can be minimally sized according to constraints related to each bin's size class. Previously, the region bitmask was the same size for all run headers, which wasted a measurable amount of memory.
    Rather than making runs for small objects as large as possible, make runs as small as possible such that header overhead stays below a certain bound. There are two exceptions that override the header overhead bound:
    1) If the bound is impossible to honor, it is relaxed on a per-size-class basis. Since there is one bit of header overhead per object (plus a constant), it is impossible to achieve a header overhead less than or equal to 1/(# of bits per object). For the current setting of maximum 0.5% header overhead, this relaxation comes into play for {2, 4, 8, 16}-byte objects, for which header overhead is (on 64-bit systems) {7.1, 4.3, 2.2, 1.2}%, respectively.
    2) There is still a cap on small run size, still set to 64kB. This comes into play for {1024, 2048}-byte objects, for which header overhead is {1.6, 3.1}%, respectively.
    In practice, this reduces the run sizes, which makes worst case low-water memory usage due to fragmentation less bad. It also reduces worst case high-water run fragmentation due to non-full runs, but this is only a constant improvement (most important to small short-lived processes).
    Reduce the default chunk size from 2MB to 1MB. Benchmarks indicate that the external fragmentation reduction makes 1MB the new sweet spot (as small as possible without adversely affecting performance).
    Reported by: [1] kientzle
* Modify chunk_alloc() to prefer mmap()ed memory over sbrk()ed memory. [jasone, 2007-02-22, 1 file, -36/+40]
    This has no impact unless USE_BRK is defined (32-bit platforms), in which case user allocations are allocated via mmap() if at all possible, in order to avoid the possibility of unreclaimable chunks in the data segment.
    Fix an obscure bug in base_alloc() that could have allowed undefined behavior if an application were to use sbrk() in conjunction with a USE_BRK-enabled malloc.
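    The preference order reduces to a sketch like this one, which omits chunk-alignment handling entirely; my_chunk_alloc is a hypothetical name, not the actual chunk_alloc().

        #include <sys/mman.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <unistd.h>

        static void *
        my_chunk_alloc(size_t size)
        {
                /* Prefer an anonymous mapping: it can be unmapped later, unlike
                 * memory obtained by growing the data segment. */
                void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON, -1, 0);
                if (p != MAP_FAILED)
                        return p;
        #ifdef USE_BRK
                /* Fall back to the data segment only if mmap() failed. */
                if (size <= (size_t)INTPTR_MAX) {
                        p = sbrk((intptr_t)size);
                        if (p != (void *)-1)
                                return p;
                }
        #endif
                return NULL;
        }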
* Fix a utrace(2)-related bug in calloc(3). [jasone, 2007-01-31, 1 file, -44/+56]
    Integrate various pedantic cleanups.
    Submitted by: Andrew Doran <ad@netbsd.org>
* Implement chunk allocation/deallocation hysteresis by caching one spare chunk per arena, rather than immediately deallocating all unused chunks. [jasone, 2006-12-23, 1 file, -51/+86]
    This fixes a potential performance issue when allocating/deallocating an object of size (4kB..1MB] in a loop.
    Reported by: davidxu
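    The hysteresis is essentially a one-slot cache in front of the chunk mapping/unmapping routines, roughly as below; the struct, the names, and the map/unmap helpers (assumed to wrap mmap/munmap, not shown) are illustrative only.

        #include <stddef.h>

        void *my_chunk_map(size_t);             /* assumed: maps a new chunk */
        void  my_chunk_unmap(void *, size_t);   /* assumed: returns it to the OS */

        struct my_arena {
                void *spare;                    /* at most one cached empty chunk */
        };

        static void *
        my_chunk_get(struct my_arena *arena, size_t chunksize)
        {
                if (arena->spare != NULL) {
                        void *chunk = arena->spare;
                        arena->spare = NULL;
                        return chunk;           /* reuse: no system call */
                }
                return my_chunk_map(chunksize);
        }

        static void
        my_chunk_put(struct my_arena *arena, void *chunk, size_t chunksize)
        {
                if (arena->spare == NULL) {
                        arena->spare = chunk;   /* keep one spare for next time */
                        return;
                }
                my_chunk_unmap(chunk, chunksize);
        }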
* Change the way base allocation is done for internal malloc data structures, in order to avoid the possibility of attempted recursive lock acquisition for chunks_mtx. [jasone, 2006-09-08, 1 file, -56/+93]
    Reported by: Slawa Olhovchenkov <slw@zxy.spb.ru>
* Enable TLS on PowerPC. [marcel, 2006-09-01, 1 file, -1/+0]
* Enable TLS on ia64. [marcel, 2006-09-01, 1 file, -1/+0]
* Correctly handle the case in calloc(num, size) where (size_t)(num * size) == 0 but both num and size are nonzero. [cperciva, 2006-08-13, 1 file, -1/+1]
    Reported by: Ilja van Sprundel
    Approved by: jasone
    Security: Integer overflow; calloc was allocating 1 byte in response to a request for a multiple of 2^32 (or 2^64) bytes instead of returning NULL.
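    The standard shape of the check (my_calloc is a hypothetical wrapper, not the libc code): detect wraparound of num * size by dividing back, and fail with ENOMEM instead of allocating the truncated product.

        #include <stddef.h>
        #include <stdlib.h>
        #include <string.h>
        #include <errno.h>

        void *
        my_calloc(size_t num, size_t size)
        {
                size_t total = num * size;
                if (num != 0 && total / num != size) {
                        errno = ENOMEM;
                        return NULL;    /* num * size overflowed size_t */
                }
                void *p = malloc(total);
                if (p != NULL)
                        memset(p, 0, total);
                return p;
        }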