path: root/lib/libc/stdlib/malloc.c
Commit messages (most recent first); each entry ends with (author, date; files changed, lines -removed/+added)
* Change the way base allocation is done for internal malloc data structures,
  in order to avoid the possibility of attempted recursive lock acquisition
  for chunks_mtx.
  Reported by: Slawa Olhovchenkov <slw@zxy.spb.ru>
  (jasone, 2006-09-08; 1 file changed, -56/+93)
* Enable TLS on PowerPC.
  (marcel, 2006-09-01; 1 file changed, -1/+0)
* Enable TLS on ia64.
  (marcel, 2006-09-01; 1 file changed, -1/+0)
* Correctly handle the case in calloc(num, size) where
  (size_t)(num * size) == 0 but both num and size are nonzero.
  Reported by: Ilja van Sprundel
  Approved by: jasone
  Security: Integer overflow; calloc was allocating 1 byte in response to a
  request for a multiple of 2^32 (or 2^64) bytes instead of returning NULL.
  (cperciva, 2006-08-13; 1 file changed, -1/+1)
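The overflow described above can be caught with a single check before the multiplication. Below is a minimal sketch of that kind of guard, using a hypothetical wrapper named checked_calloc (illustrative only, not the actual malloc.c code):

    #include <errno.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Sketch only: detect num * size overflow before allocating, so that a
     * request for a multiple of 2^32 (or 2^64) bytes fails with NULL rather
     * than silently allocating a tiny buffer.
     */
    void *
    checked_calloc(size_t num, size_t size)
    {
        size_t total;
        void *p;

        if (num != 0 && size != 0 && num > SIZE_MAX / size) {
            errno = ENOMEM;             /* multiplication would overflow */
            return (NULL);
        }
        total = num * size;             /* cannot overflow at this point */
        p = malloc(total != 0 ? total : 1);
        if (p != NULL)
            memset(p, 0, total);
        return (p);
    }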
* Define NO_TLS on PowerPC.
  See also: PR ia64/91846
  (marcel, 2006-08-09; 1 file changed, -0/+1)
* Conditionally expand the size_invs lookup table in arena_run_reg_dalloc()
  so that architectures with a quantum of 8 (rather than 16) work. Restore
  arm's quantum to 8.
  Submitted by: jmg
  (jasone, 2006-07-27; 1 file changed, -1/+12)
* Use 4 as QUANTUM_2POW_MIN on arm, as on the other architectures, to avoid
  triggering an assertion later.
  (cognet, 2006-07-27; 1 file changed, -1/+1)
* Fix cpp logic in arena_malloc() to adjust size when assertions are enabled,
  even if stats gathering is disabled. [1]
  Remove 'size' parameter from several functions that do not use it.
  Reported by: [1] ache
  (jasone, 2006-07-27; 1 file changed, -23/+19)
* Use some math tricks in arena_run_reg_dalloc() to avoid actual division, as
  well as avoiding a switch statement. This change has no significant impact
  on performance when branch prediction is successful at predicting the sizes
  of objects passed to free(), but in the case that the object sizes are
  semi-random, this change has the potential to prevent many branch
  prediction misses, thus improving performance substantially.
  Take advantage of alignment guarantees in ipalloc(), and pad object sizes
  to something less than a power of two when possible. This has the potential
  to substantially reduce internal fragmentation for objects allocated via
  posix_memalign().
  Avoid an unnecessary pow2_ceil() call in arena_ralloc().
  Submitted by: djam8193ah@hotmail.com
  (jasone, 2006-07-01; 1 file changed, -83/+90)
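The "math trick" referred to in this entry is the standard replacement of an integer division by a multiplication with a precomputed fixed-point reciprocal. The sketch below demonstrates the idea under assumed constants (SIZE_INV_SHIFT and the loop bounds are hypothetical, not the values used in malloc.c); the assertion checks that the multiply-and-shift agrees with plain division for exact multiples within the tested range:

    #include <assert.h>
    #include <stdio.h>

    /*
     * Fixed-point reciprocal of a size class: adding 1 makes the estimate an
     * over-approximation whose error stays below 1 for small multiples.
     */
    #define SIZE_INV_SHIFT  21
    #define SIZE_INV(s)     (((1U << SIZE_INV_SHIFT) / (s)) + 1)

    int
    main(void)
    {
        unsigned size, k, diff, inv, regind;

        for (size = 8; size <= 512; size += 8) {
            inv = SIZE_INV(size);
            for (k = 0; k < 512; k++) {
                diff = k * size;                        /* offset of a region in a run */
                regind = (diff * inv) >> SIZE_INV_SHIFT;
                assert(regind == diff / size);          /* matches diff / size exactly */
            }
        }
        printf("reciprocal multiplication matches integer division\n");
        return (0);
    }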
* Make the behavior of malloc(0) standards-compliant by getting rid of nil,
  and instead creating a small allocation for each malloc(0) call. The
  optional SysV compatibility behavior remains unchanged.
  Add a couple of assertions.
  Fix a couple of typos in error message strings.
  (jasone, 2006-06-30; 1 file changed, -48/+46)
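As a rough illustration of the malloc(0) behavior described above, a zero-byte request either becomes a minimal allocation with a unique address or, when SysV compatibility is enabled, fails with NULL. The names opt_sysv and my_malloc are hypothetical stand-ins, not the actual malloc.c symbols:

    #include <stdbool.h>
    #include <stdlib.h>

    static bool opt_sysv = false;       /* stand-in for the optional SysV behavior */

    void *
    my_malloc(size_t size)
    {
        if (size == 0) {
            if (opt_sysv)
                return (NULL);          /* SysV compatibility: malloc(0) fails */
            size = 1;                   /* standards mode: unique, freeable pointer */
        }
        return (malloc(size));
    }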
* Add a missing case for the switch statement in arena_run_reg_dalloc(). [1]
  Fix a leak in chunk_dealloc(). [2]
  Reported by: [1] djam8193ah@hotmail.com,
               [2] Ville-Pertti Keinonen <will@exomi.com>
  (jasone, 2006-06-20; 1 file changed, -8/+20)
* Increase the minimum chunk size by a power of two (32kB --> 64kB, assuming
  4kB pages), in order to avoid dangerous rounding error when calculating
  fullness limits during run promotion/demotion.
  Convert a structure bitfield to a normal field in arena_run_t. This should
  have been changed along with the other fields in revision 1.120.
  (jasone, 2006-05-10; 1 file changed, -2/+2)
* Change the semantics of brk_max to dynamically deal with data segment
  bounds. [1]
  Modify logic for utilizing the data segment, such that it is possible to
  create huge allocations there.
  Shrink the data segment when deallocating a chunk, if it is at the end of
  the data segment.
  Rename chunk_size to csize in huge_malloc(), in order to avoid masking a
  static variable of the same name. [1]
  Reported by: Paul Allen <nospam@ugcs.caltech.edu>
  (jasone, 2006-04-27; 1 file changed, -71/+83)
* Add an unreachable return statement, in order to avoid a compiler warning
  for non-standard optimization levels.
  Reported by: Michael Zach <zach@webges.com>
  (jasone, 2006-04-05; 1 file changed, -0/+1)
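The pattern being described looks roughly like the following hypothetical function (not the actual malloc.c code): every switch case returns, yet at some optimization levels the compiler still warns that control can reach the end of a non-void function, so a never-taken return is appended after the switch:

    #include <assert.h>
    #include <stdbool.h>

    enum run_state { RUN_EMPTY, RUN_PARTIAL, RUN_FULL };

    static int
    run_priority(enum run_state state)
    {
        switch (state) {
        case RUN_EMPTY:
            return (0);
        case RUN_PARTIAL:
            return (1);
        case RUN_FULL:
            return (2);
        }
        assert(false);
        return (0);     /* not reached; silences the compiler warning */
    }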
* Only initialize the first per-chunk page map element for free runs. This
  makes run split/coalesce operations of complexity lg(n) rather than n.
  (jasone, 2006-04-05; 1 file changed, -31/+16)
* Add init_lock, and use it to protect against allocator initialization
  races. This isn't currently necessary for libpthread or libthr, but without
  it external threads libraries like the linuxthreads port are not safe to
  use.
  Reported by: ganbold@micom.mng.net
  (jasone, 2006-04-04; 1 file changed, -8/+21)
* Refactor per-run bitmap manipulation functions so that bitmap offsets only
  have to be calculated once per allocator operation.
  Make nil const.
  Update various comments.
  Remove/avoid division where possible. For the one division operation that
  remains in the critical path, add a switch statement that has a case for
  each small size class, and do division with a constant divisor in each
  case. This allows the compiler to generate optimized code that does not use
  hardware division. [1]
  Obtained from: peter [1]
  (jasone, 2006-04-04; 1 file changed, -69/+131)
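The constant-divisor switch mentioned above works because the compiler can strength-reduce a division by a compile-time constant into a multiply-and-shift sequence. A minimal sketch of the idea, with illustrative size classes and a hypothetical function name:

    #include <stddef.h>

    /*
     * Each case divides by a literal constant, so the compiler can replace
     * the division with a multiply-and-shift instead of a hardware divide.
     */
    static unsigned
    region_index(size_t diff, size_t size)
    {
        switch (size) {
        case 16:  return (diff / 16);
        case 32:  return (diff / 32);
        case 48:  return (diff / 48);
        case 64:  return (diff / 64);
        default:  return (diff / size); /* uncommon sizes fall back to division */
        }
    }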
* Optimize runtime performance, primarily using the following techniques:
  * Avoid choosing an arena until it's certain that an arena is needed for
    allocation.
  * Convert division/multiplication to bitshifting where possible.
  * Avoid accessing TLS variables in single-threaded code.
  * Reduce the amount of pointer dereferencing.
  * Move lock acquisition in critical paths to only protect the code that
    requires synchronization, and completely remove locking where possible.
  (jasone, 2006-03-30; 1 file changed, -285/+294)
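For power-of-two constants such as the allocation quantum, the division/multiplication-to-bitshift conversion mentioned in the list above is mechanical; the following sketch (with assumed constants) shows the equivalences:

    #include <assert.h>
    #include <stddef.h>

    #define QUANTUM         16
    #define QUANTUM_2POW    4       /* log2(QUANTUM) */

    int
    main(void)
    {
        size_t n = 1000;

        assert(n / QUANTUM == n >> QUANTUM_2POW);       /* division */
        assert(n * QUANTUM == n << QUANTUM_2POW);       /* multiplication */
        assert(n % QUANTUM == (n & (QUANTUM - 1)));     /* modulo */
        return (0);
    }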
* Add malloc_usable_size(3).
  Discussed with: arch@
  (jasone, 2006-03-28; 1 file changed, -0/+20)
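malloc_usable_size() reports how many bytes are actually usable in an allocation, which may exceed the requested size because of size-class rounding. A brief usage sketch; on FreeBSD the prototype is typically provided by <malloc_np.h> (adjust the header on other platforms):

    #include <malloc_np.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        void *p = malloc(100);

        if (p == NULL)
            return (1);
        /* Likely prints a value >= 100: the size class the request rounded up to. */
        printf("requested 100, usable %zu\n", malloc_usable_size(p));
        free(p);
        return (0);
    }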
* Allow the 'n' option to decrease the number of arenas below the default,
  to as little as one arena. Also, limit the number of arenas to avoid a
  potential invariant violation in base_alloc().
  (jasone, 2006-03-26; 1 file changed, -2/+16)
* Add comments and reformat/rearrange code. There are no significant
  functional changes in this commit.
  (jasone, 2006-03-26; 1 file changed, -208/+224)
* Convert TINY_MIN_2POW from a cpp macro to tiny_min_2pow (a variable), and
  determine its value at run time according to other relevant values. This
  avoids the creation of runs that are incompletely utilized, as long as
  pagesize isn't too large (>32kB, given the current RUN_MIN_REGS_2POW
  setting).
  Increase the size of several structure bitfields in arena_run_t in order to
  avoid integer overflow in the case that a run's header does not overlap
  with the space that is usable as application allocation regions. Given the
  tiny_min_2pow change, this fix has no additional impact unless pagesize is
  >32kB.
  Reported by: kris
  (jasone, 2006-03-24; 1 file changed, -21/+37)
* Add USE_BRK-specific code in malloc_init_hard() to allow the first
  internally used chunk to start at the beginning of the heap, rather than at
  a chunk-aligned address. This reduces mapped memory somewhat for 32-bit
  architectures.
  Add the arena_run_link_t type and use it wherever a run object is only used
  as a ring 'header'. This saves approximately 40 kB of memory per arena.
  Remove an obsolete (no longer used) code path from base_alloc(), which
  supported the internal allocation of objects larger than the chunk size.
  Enhance chunk_dealloc() to cache chunk addresses for all deallocated
  chunks. This has no impact for most programs, but has the potential to
  reduce VM map fragmentation for programs that use huge allocations.
  (jasone, 2006-03-24; 1 file changed, -65/+110)
* Separate completely full runs from runs that are merely almost full, so
  that no linear searching is necessary if we resort to allocating from a run
  that is known to be mostly full. There are pathological edge cases that
  could have caused severely degraded performance, and this change fixes
  that.
  (jasone, 2006-03-20; 1 file changed, -61/+71)
* Optimize realloc() to reallocate in place if the old and new sizes are
  close enough to each other that reallocation would allocate a new region of
  the same size. This improves the performance of repeated incremental
  reallocations by up to three orders of magnitude. [1]
  Fix arena_new() to properly constrain run size if a small chunk size was
  specified during runtime configuration.
  Suggested by: se [1]
  (jasone, 2006-03-19; 1 file changed, -105/+167)
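The in-place optimization boils down to noticing that the old and new requests land in the same size class before copying anything. The sketch below approximates that test with malloc_usable_size() and a crude half-size heuristic; it is an illustration of the idea, not the actual malloc.c logic, which consults its internal region metadata:

    #include <malloc_np.h>
    #include <stdlib.h>
    #include <string.h>

    void *
    realloc_lazy(void *ptr, size_t size)
    {
        void *newptr;
        size_t oldusable;

        if (ptr == NULL)
            return (malloc(size));
        oldusable = malloc_usable_size(ptr);
        if (size <= oldusable && size >= oldusable / 2)
            return (ptr);               /* existing region still fits; keep it */
        newptr = malloc(size);
        if (newptr == NULL)
            return (NULL);
        memcpy(newptr, ptr, size < oldusable ? size : oldusable);
        free(ptr);
        return (newptr);
    }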
* Modify allocation policy, in order to avoid excessive fragmentation for
  allocation patterns that involve a relatively even mixture of many
  different size classes.
  Reduce the chunk size from 16 MB to 2 MB. Since chunks are now carved up
  using an address-ordered first best fit policy, VM map fragmentation is
  much less likely, which makes smaller chunks not as much of a risk. This
  reduces the virtual memory size of most applications.
  Remove redzones, since program buffer overruns are no longer as likely to
  corrupt malloc data structures.
  Remove the C MALLOC_OPTIONS flag, and add H and S.
  (jasone, 2006-03-17; 1 file changed, -2453/+1018)
* Fix calculation of the number of arenas to use on multi-processor systems.
  (jasone, 2006-02-04; 1 file changed, -1/+1)
* Remove unwarranted uses of 'goto'.
  (jasone, 2006-01-27; 1 file changed, -203/+153)
* Add NO_MALLOC_EXTRAS, so that various extra features that can cause
  performance degradation can be disabled via something like the following in
  /etc/make.conf:
  CFLAGS+=-DNO_MALLOC_EXTRAS
  Suggested by: deischen
  (jasone, 2006-01-27; 1 file changed, -3/+16)
* Fix the type of a statistics counter (unsigned --> unsigned long).
  (jasone, 2006-01-27; 1 file changed, -1/+1)
* Clean up statistics gathering and printing.
  (jasone, 2006-01-27; 1 file changed, -71/+64)
* Optimize arena_bin_pop() to reduce the number of separator operations.
  Remove the block of code that tries to use delayed regions in LIFO order,
  since from a policy perspective, it conflicts with LRU caching of newly
  coalesced regions in arena_undelay(). There are numerous policy
  alternatives, and it isn't readily obvious which (if any) is superior; this
  change at least has the virtue of being consistent with policy.
  (jasone, 2006-01-26; 1 file changed, -13/+10)
* Remove a redundant variable assignment in arena_reg_frag_alloc().
  (jasone, 2006-01-25; 1 file changed, -1/+0)
* If no coalesced exact-fit small regions are available, but delayed
  exact-fit regions are available, use the delayed regions in LIFO order, in
  order to increase locality of reference. We might expect this to cause
  delayed regions to be removed from the delay ring buffer more often (since
  we're now re-using more recently buffered regions), but numerous tests
  indicate that the overall impact on memory usage tends to be good (reduced
  fragmentation).
  Re-work arena_frag_reg_alloc() so that when large free regions are
  exhausted, it uses small regions in a way that favors contiguous allocation
  of sequentially allocated small regions. Use arena_frag_reg_alloc() in this
  capacity, rather than directly attempting over-fitting of small requests
  when no large regions are available.
  Remove the bin overfit statistic, since it is no longer relevant due to the
  arena_frag_reg_alloc() changes.
  Do not specify arena_frag_reg_alloc() as an inline function. It is too
  large to benefit much from being inlined, and it is also called in two
  places, only one of which is in the critical path (the other call bloated
  arena_reg_alloc()).
  Call arena_coalesce() for a region before caching it with
  arena_mru_cache(). Add assertions that detect the attempted caching of
  adjacent free regions, so that we notice this problem when it is first
  created, rather than in arena_coalesce(), when it's too late to know how
  the problem arose.
  Reported by: Hans Blancke
  (jasone, 2006-01-25; 1 file changed, -173/+186)
* Make the 'C' and 'c' malloc options consistent with other options; 'C'
  doubles the cache size, and 'c' halves the cache size.
  (jasone, 2006-01-23; 1 file changed, -2/+2)
* In arena_chunk_reg_alloc(), try to avoid touching the last page in the
  chunk during initialization, in order to avoid physically backing the page
  unless data are allocated there.
  (jasone, 2006-01-23; 1 file changed, -7/+24)
* Use uintptr_t rather than size_t when casting pointers to integers. Also,
  fix the few casting style(9) errors that remained after the functional
  change.
  Reported by: jmallett
  (jasone, 2006-01-20; 1 file changed, -44/+45)
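uintptr_t is the integer type intended to round-trip a pointer value, whereas size_t only needs to be wide enough for object sizes, so uintptr_t is the safer choice for pointer arithmetic such as chunk masking. A small sketch with a hypothetical chunk size:

    #include <stdint.h>
    #include <stdlib.h>

    #define CHUNK_SIZE      (2U << 20)      /* example: 2 MB chunks */

    /* Round a pointer down to the start of the chunk that contains it. */
    static void *
    chunk_base(void *ptr)
    {
        return ((void *)((uintptr_t)ptr & ~((uintptr_t)CHUNK_SIZE - 1)));
    }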
* Revert addition of assertions in revision 1.99. These assertions cause
  problems in cases where regions are faked up for the purposes of red-black
  tree searches, since those faked region headers reside on the stack, rather
  than in a malloc chunk.
  (jasone, 2006-01-19; 1 file changed, -7/+0)
* Add assertions that detect some forms of region separator corruption.
  (jasone, 2006-01-19; 1 file changed, -0/+7)
* Remove loops in arena_coalesce(). They are no longer necessary, now that
  internal allocation does not rely on recursive arena use (base_arena was
  removed in revision 1.95).
  (jasone, 2006-01-19; 1 file changed, -4/+5)
* Make all internal variables and functions static.
  Reported by: ache
  (jasone, 2006-01-19; 1 file changed, -12/+15)
* Return NULL if there is an OOM error during initialization, rather than
  allowing the error to be fatal.
  Move a label in order to make sure to properly handle errors in malloc(0).
  Reported by: Alastair D'Silva, Saneto Takanori
  (jasone, 2006-01-19; 1 file changed, -35/+50)
* Add a separate simple internal base allocator and remove base_arena, so
  that there is never any need to recursively call the main allocation
  functions.
  Remove recursive spinlock support, since it is no longer needed.
  Allow chunks to be as small as the page size.
  Correctly propagate OOM errors from arena_new().
  (jasone, 2006-01-16; 1 file changed, -151/+175)
* Define NO_TLS on ia64. The dynamic TLS implementation on ia64 is broken for
  non-threaded shared processes in that __tls_get_addr() assumes the thread
  pointer is always initialized. This is not the case. When arenas_map is
  referenced in choose_arena() and it is defined as a thread-local variable,
  it will result in a SIGSEGV.
  PR: ia64/91846 (describes the TLS/ia64 bug).
  (marcel, 2006-01-16; 1 file changed, -0/+1)
* Replace malloc(), calloc(), posix_memalign(), realloc(), and free() with a
  scalable concurrent allocator implementation.
  Reviewed by: current@
  Approved by: phk, markm (mentor)
  (jasone, 2006-01-13; 1 file changed, -927/+4481)
* Fix a bitwise logic error in posix_memalign().
  Reported by: glebius
  (jasone, 2006-01-12; 1 file changed, -2/+2)
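posix_memalign() must validate its alignment argument with bitwise tests, which is the general area the fix above touches. The following generic sketch (not the actual corrected line) rejects alignments that are smaller than sizeof(void *) or are not a power of two:

    #include <errno.h>
    #include <stddef.h>

    static int
    alignment_ok(size_t alignment)
    {
        /* Power-of-two test: exactly one bit set. */
        if (alignment < sizeof(void *) || (alignment & (alignment - 1)) != 0)
            return (EINVAL);
        return (0);
    }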
* In preparation for a new malloc implementation:
  * Add posix_memalign().
  * Move calloc() from calloc.c to malloc.c. Add a calloc() implementation in
    rtld-elf in order to make the loader happy (even though calloc() isn't
    used in rtld-elf).
  * Add _malloc_prefork() and _malloc_postfork(), and use them instead of
    directly manipulating __malloc_lock.
  Approved by: phk, markm (mentor)
  (jasone, 2006-01-12; 1 file changed, -0/+64)
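Unlike malloc(), posix_memalign() returns an error number directly rather than setting errno, and hands the allocation back through its first argument. A brief usage example:

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        void *buf;
        int error;

        /* Request 1 kB aligned to a 64-byte boundary (e.g. a cache line). */
        error = posix_memalign(&buf, 64, 1024);
        if (error != 0) {
            fprintf(stderr, "posix_memalign: error %d\n", error);
            return (1);
        }
        printf("aligned allocation at %p\n", buf);
        free(buf);
        return (0);
    }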
* Remove the check about whether MALLOC_EXTRA_SANITY is defined that
  surrounded the #undef of it. It is not necessary to test whether a symbol
  is defined before #undef'ing it, and gcc does not complain about #undef'ing
  a symbol that was never defined.
  Spotted by: mingyanguo via ChinaUnix.net forum
  Reviewed by: phk
  (delphij, 2005-02-27; 1 file changed, -2/+0)
* Consistently use __inline instead of __inline__ as the former is an empty
  macro in <sys/cdefs.h> for compilers without support for inline.
  (stefanf, 2004-07-04; 1 file changed, -3/+3)
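The fallback being relied on is roughly of the following shape (a paraphrase, not the verbatim <sys/cdefs.h> contents): when the compiler offers no inline support, __inline expands to nothing, so the same source still compiles:

    /* Paraphrased sketch of the <sys/cdefs.h> fallback described above. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L && !defined(__GNUC__)
    #define __inline inline         /* C99: map onto the standard keyword */
    #elif !defined(__GNUC__)
    #define __inline                /* no inline support: drop the hint entirely */
    #endif

    static __inline int
    twice(int x)
    {
        return (x * 2);
    }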
* Define malloc_pageshift and malloc_minsize for arm.
  (cognet, 2004-05-14; 1 file changed, -0/+4)