Commit messages:
- ... to as little as one arena. Also, limit the number of arenas to avoid a
  potential invariant violation in base_alloc().
- ... functional changes in this commit.
- ... determine its value at run time according to other relevant values. This
  avoids the creation of runs that are incompletely utilized, as long as
  pagesize isn't too large (>32kB, given the current RUN_MIN_REGS_2POW
  setting).
  Increase the size of several structure bitfields in arena_run_t in order
  to avoid integer overflow in the case that a run's header does not overlap
  with the space that is usable as application allocation regions. Given
  the tiny_min_2pow change, this fix has no additional impact unless
  pagesize is >32kB.
  Reported by: kris
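As an aside on the overflow concern above: the number of regions in a run grows with the page size, so any per-run counter or bitfield must be wide enough for the worst case. The sketch below is purely illustrative arithmetic; the field width shown is an assumption, not the width used in malloc.c.

```c
#include <stdio.h>

int
main(void)
{
	unsigned run_size = 64 * 1024;	/* one 64 kB page-sized run */
	unsigned reg_size = 2;		/* a 2-byte "tiny" region */
	unsigned nregs = run_size / reg_size;
	unsigned field_bits = 14;	/* hypothetical bitfield width */
	unsigned max_count = (1u << field_bits) - 1;

	printf("regions per run: %u (max representable in %u bits: %u)\n",
	    nregs, field_bits, max_count);
	if (nregs > max_count)
		printf("-> a %u-bit field would overflow\n", field_bits);
	return (0);
}
```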
- ... internally used chunk to start at the beginning of the heap, rather
  than at a chunk-aligned address. This reduces mapped memory somewhat
  for 32-bit architectures.
  Add the arena_run_link_t type and use it wherever a run object is only
  used as a ring 'header'. This saves approximately 40 kB of memory per
  arena.
  Remove an obsolete (no longer used) code path from base_alloc(), which
  supported the internal allocation of objects larger than the chunk
  size.
  Enhance chunk_dealloc() to cache chunk addresses for all deallocated
  chunks. This has no impact for most programs, but has the potential
  to reduce VM map fragmentation for programs that use huge
  allocations.
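A toy sketch of the chunk-address caching idea: remembering recently deallocated chunk addresses lets a later chunk allocation try them before asking the kernel for fresh mappings. The fixed-size LIFO and the names below are assumptions for the example, not the data structure malloc.c uses.

```c
#include <stddef.h>

#define CHUNK_CACHE_SLOTS 128		/* illustrative capacity */

static void	*chunk_cache[CHUNK_CACHE_SLOTS];
static size_t	 chunk_cache_len;

/* Remember a deallocated chunk's address instead of unmapping it. */
static int
chunk_cache_store(void *chunk)
{
	if (chunk_cache_len < CHUNK_CACHE_SLOTS) {
		chunk_cache[chunk_cache_len++] = chunk;
		return (1);
	}
	return (0);	/* cache full: caller unmaps as before */
}

/* Prefer a cached address when allocating a new chunk. */
static void *
chunk_cache_take(void)
{
	if (chunk_cache_len > 0)
		return (chunk_cache[--chunk_cache_len]);
	return (NULL);	/* caller falls back to fresh mappings */
}
```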
- ... that no linear searching is necessary if we resort to allocating from a
  run that is known to be mostly full. There are pathological edge cases
  that could have caused severely degraded performance; this change fixes
  them.
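One common way to pick a free region without a region-by-region scan is to keep an availability bitmask and consult ffs(); the sketch below only illustrates that general idea under assumed names, not the bookkeeping this commit actually uses.

```c
#include <strings.h>	/* ffs() */

/*
 * Return the index of the first free region recorded in a bitmask (one set
 * bit per free region), or -1 if every region is in use.  Each word covers
 * 32 regions (32-bit unsigned int assumed for brevity), so a mostly-full
 * run is handled with a few word tests rather than a per-region walk.
 */
static int
first_free_region(const unsigned int *mask, int nwords)
{
	for (int i = 0; i < nwords; i++) {
		if (mask[i] != 0)
			return (i * 32 + ffs((int)mask[i]) - 1);
	}
	return (-1);
}
```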
- ... close enough to each other that reallocation would allocate a new region
  of the same size. This improves the performance of repeated incremental
  reallocations by up to three orders of magnitude. [1]
  Fix arena_new() to properly constrain run size if a small chunk size was
  specified during runtime configuration.
  Suggested by: se [1]
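A minimal sketch of that short-circuit, assuming a power-of-two size-class policy purely to make the example concrete (the allocator's real size classes differ): if both sizes round to the same class, the original pointer can be returned with no allocation and no copy.

```c
#include <stddef.h>
#include <stdint.h>

/* Round a request up to its (illustrative) size class. */
static size_t
size_class(size_t size)
{
	size_t s;

	if (size <= 1)
		return (1);
	s = size - 1;
	s |= s >> 1;  s |= s >> 2;  s |= s >> 4;
	s |= s >> 8;  s |= s >> 16;
#if SIZE_MAX > 0xffffffffu
	s |= s >> 32;
#endif
	return (s + 1);
}

/* Returns ptr unchanged when no reallocation would be needed. */
static void *
realloc_fastpath(void *ptr, size_t oldsize, size_t newsize)
{
	if (ptr != NULL && size_class(oldsize) == size_class(newsize))
		return (ptr);	/* same class: no move, no copy */
	return (NULL);		/* caller falls back to the slow path */
}
```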
- ... allocation patterns that involve a relatively even mixture of many
  different size classes.
  Reduce the chunk size from 16 MB to 2 MB. Since chunks are now carved up
  using an address-ordered first best fit policy, VM map fragmentation is
  much less likely, which makes smaller chunks not as much of a risk. This
  reduces the virtual memory size of most applications.
  Remove redzones, since program buffer overruns are no longer as likely to
  corrupt malloc data structures.
  Remove the C MALLOC_OPTIONS flag, and add H and S.
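For reference, flag strings like these are supplied through the MALLOC_OPTIONS environment variable, the /etc/malloc.conf symlink, or the _malloc_options variable documented in malloc(3). The snippet below shows only the last of those, and uses "H" merely as an example of one of the newly added letters.

```c
/*
 * Compile-time selection of malloc option flags for this program.
 * "H" is only an example letter; see malloc(3) for what each flag means.
 */
const char *_malloc_options = "H";
```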
- ... Pointed out by: ceri, ru, delphij
- ... it first.
  Approved by: andre
- ... providing proper error checking and other improvements.
  Obtained from: OpenBSD
  Requested by: flz (to port Open[BGP|OSPF]D)
  MFC after: 3 days
- ... Reviewed by: davidxu
- ... Approved by: cognet (mentor)
  MFC after: 3 days
- ... and if so, use the pts system.
  Suggested by: rwatson
- ... performance degradation can be disabled via something like the following
  in /etc/make.conf:
  CFLAGS+=-DNO_MALLOC_EXTRAS
  Suggested by: deischen
- ... Remove the block of code that tries to use delayed regions in LIFO order,
  since from a policy perspective, it conflicts with LRU caching of newly
  coalesced regions in arena_undelay(). There are numerous policy
  alternatives, and it isn't readily obvious which (if any) is superior;
  this change at least has the virtue of being consistent with policy.
- ... fit regions are available, use the delayed regions in LIFO order, in order
  to increase locality of reference. We might expect this to cause delayed
  regions to be removed from the delay ring buffer more often (since we're
  now re-using more recently buffered regions), but numerous tests indicate
  that the overall impact on memory usage tends to be good (reduced
  fragmentation).
  Re-work arena_frag_reg_alloc() so that when large free regions are
  exhausted, it uses small regions in a way that favors contiguous allocation
  of sequentially allocated small regions. Use arena_frag_reg_alloc() in
  this capacity, rather than directly attempting over-fitting of small
  requests when no large regions are available.
  Remove the bin overfit statistic, since it is no longer relevant due to
  the arena_frag_reg_alloc() changes.
  Do not specify arena_frag_reg_alloc() as an inline function. It is too
  large to benefit much from being inlined, and it is also called in two
  places, only one of which is in the critical path (the other call bloated
  arena_reg_alloc()).
  Call arena_coalesce() for a region before caching it with
  arena_mru_cache().
  Add assertions that detect the attempted caching of adjacent free regions,
  so that we notice this problem when it is first created, rather than in
  arena_coalesce(), when it's too late to know how the problem arose.
  Reported by: Hans Blancke
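A self-contained sketch of the "coalesce before caching" rule, using a sorted array of free extents so the example compiles on its own; the allocator itself tracks regions through headers, rings, and red-black trees. The closing assertion corresponds to the new checks against adjacent free regions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical free-extent record used only for this example. */
struct extent {
	uintptr_t	addr;
	size_t		size;
};

/*
 * Insert a newly freed extent into a sorted, non-overlapping array of free
 * extents, merging it with adjacent neighbors first.  'exts' must have room
 * for n + 1 entries.  Returns the new number of entries.
 */
static size_t
coalesce_insert(struct extent *exts, size_t n, struct extent freed)
{
	size_t i = 0;

	while (i < n && exts[i].addr < freed.addr)
		i++;

	/* Merge with the predecessor if it ends where 'freed' begins. */
	if (i > 0 && exts[i - 1].addr + exts[i - 1].size == freed.addr) {
		freed.addr = exts[i - 1].addr;
		freed.size += exts[i - 1].size;
		i--;
		for (size_t j = i; j + 1 < n; j++)
			exts[j] = exts[j + 1];
		n--;
	}
	/* Merge with the successor if 'freed' ends where it begins. */
	if (i < n && freed.addr + freed.size == exts[i].addr) {
		freed.size += exts[i].size;
		for (size_t j = i; j + 1 < n; j++)
			exts[j] = exts[j + 1];
		n--;
	}
	/* Insert the (possibly merged) extent at position i. */
	for (size_t j = n; j > i; j--)
		exts[j] = exts[j - 1];
	exts[i] = freed;
	n++;

	/* The invariant the assertions guard: no adjacent free extents. */
	for (size_t j = 0; j + 1 < n; j++)
		assert(exts[j].addr + exts[j].size < exts[j + 1].addr);

	return (n);
}
```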
- ... doubles the cache size, and 'c' halves the cache size.
- ... chunk during initialization, in order to avoid physically backing the
  page unless data are allocated there.
- ... fix the few casting style(9) errors that remained after the functional
  change.
  Reported by: jmallett
- ... problems in cases where regions are faked up for the purposes of red-black
  tree searches, since those faked region headers reside on the stack, rather
  than in a malloc chunk.
- ... internal allocation does not rely on recursive arena use (base_arena was
  removed in revision 1.95).
- ... Reported by: ache
- ... allowing the error to be fatal.
  Move a label so that errors in malloc(0) are properly handled.
  Reported by: Alastair D'Silva, Saneto Takanori
- ... there is never any need to recursively call the main allocation functions.
  Remove recursive spinlock support, since it is no longer needed.
  Allow chunks to be as small as the page size.
  Correctly propagate OOM errors from arena_new().
- ... broken for non-threaded shared processes in that __tls_get_addr()
  assumes the thread pointer is always initialized, which is not the
  case. When arenas_map is defined as a thread-local variable and is
  referenced in choose_arena(), the result is a SIGSEGV.
  PR: ia64/91846 (describes the TLS/ia64 bug).
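A compile-time sketch of the workaround pattern this bug calls for, with the guard macro name treated as an assumption: where a non-threaded dynamic executable can call malloc before the thread pointer is set up, reaching a __thread variable goes through __tls_get_addr() and faults, so a plain global is used instead.

```c
/*
 * NO_TLS is used here as an assumed build knob; only the shape of the
 * fallback matters for the example.
 */
#ifdef NO_TLS
static void		*arenas_map;	/* plain global: always safe */
#else
static __thread void	*arenas_map;	/* fast path, needs a valid TP */
#endif
```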
- ... a scalable concurrent allocator implementation.
  Reviewed by: current@
  Approved by: phk, markm (mentor)
- ... Reported by: glebius
- ...
  * Add posix_memalign().
  * Move calloc() from calloc.c to malloc.c. Add a calloc() implementation in
    rtld-elf in order to make the loader happy (even though calloc() isn't
    used in rtld-elf).
  * Add _malloc_prefork() and _malloc_postfork(), and use them instead of
    directly manipulating __malloc_lock.
  Approved by: phk, markm (mentor)
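A short usage example for the new interface, relying only on its standard POSIX contract: the alignment must be a power of two that is a multiple of sizeof(void *), and failure is reported through the return value rather than errno.

```c
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	void *p;
	int error = posix_memalign(&p, 64, 1024);	/* 1 kB, 64-byte aligned */

	if (error != 0) {
		fprintf(stderr, "posix_memalign: error %d\n", error);
		return (1);
	}
	printf("aligned allocation at %p\n", p);
	free(p);
	return (0);
}
```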
- ... between a 32-bit integer and a radix-64 ASCII string. The l64a_r() function
  is a NetBSD addition.
  PR: 51209 (based on submission, but very different)
  Reviewed by: bde, ru
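A round-trip usage example for the conversion pair; note that l64a() returns a pointer to a static buffer, which is why the re-entrant l64a_r() variant is useful.

```c
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	long value = 123456L;
	char *digits = l64a(value);	/* static storage */
	long back = a64l(digits);

	printf("%ld -> \"%s\" -> %ld\n", value, digits, back);
	return (0);
}
```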
- ... the function definition.
- ... stdio/ and stdlib/. Don't define __cleanup twice.
- ... a tty device instead of the legacy minor number approach. This is known to
  fix gnome-vfs' sftp module as well as kio_sftp and kdesu on -CURRENT.
  Thanks to scottl for the snprintf() approach idea.
  Reviewed by: phk
  Tested by: pav, mich
  Approved by: re (scottl)
- ... surrounding the #undef. It is not necessary to check whether a symbol
  exists before #undef'ing it, and gcc does not complain when #undef is
  applied to a symbol that is not defined.
  Spotted by: mingyanguo via ChinaUnix.net forum
  Reviewed by: phk
- ... is not portable.
  Asked by: joerg
- ... Noted by: bde
- ... really so.
  "If the value of base is 16, the characters 0x or 0X may optionally
  precede the sequence of letters and digits, following the sign if
  present."
  Found by: joerg
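The quoted requirement in action, as a small usage example: with base 16, strtol() accepts and consumes an optional 0x/0X prefix after any sign.

```c
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	char *end;
	long a = strtol("0x1A", &end, 16);	/* prefix consumed: a == 26 */
	long b = strtol("-0X10", &end, 16);	/* b == -16 */

	printf("%ld %ld\n", a, b);
	return (0);
}
```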