author | jasone <jasone@FreeBSD.org> | 2008-02-06 02:59:54 +0000
---|---|---
committer | jasone <jasone@FreeBSD.org> | 2008-02-06 02:59:54 +0000
commit | 44c343f8fa20d86c54fbf7b2b90d931648a29f95 | (patch)
tree | ed1b3ba2f9f69f641eed2a388dccf785d4de022d | /lib/libc/stdlib/malloc.3
parent | 0635509b37504b63d5eb26d757055029065f8597 | (diff)
Track dirty unused pages so that they can be purged if they exceed a
threshold, according to the 'F' MALLOC_OPTIONS flag. This obsoletes the
'H' flag.
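The 'F' mechanism is a counter-plus-threshold scheme. Below is a minimal C sketch of the idea, assuming a hypothetical per-arena `ndirty` counter and `opt_dirty_max` limit; the real malloc.c tracks dirty runs individually and purges at least half of them once the limit is crossed.

```c
#include <sys/mman.h>
#include <stddef.h>

/* Hypothetical per-arena dirty-page accounting (names illustrative). */
static size_t ndirty;			/* dirty unused pages in this arena */
static size_t opt_dirty_max = 512;	/* halved/doubled by 'f'/'F' */

/*
 * Called when a run of 'npages' pages becomes unused.  Once the dirty
 * count crosses the threshold, tell the kernel it may reclaim the pages.
 */
static void
arena_pages_dirty(void *addr, size_t npages, size_t pagesize)
{
	ndirty += npages;
	if (ndirty > opt_dirty_max) {
		/* Purge: the kernel may now recycle these pages. */
		madvise(addr, npages * pagesize, MADV_FREE);
		ndirty -= npages;
	}
}
```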
Try to realloc() large objects in place. This substantially speeds up
incremental large reallocations in the common case.
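Large objects are backed by whole page runs, so a reallocation that rounds to the same number of pages needs no copy at all. A sketch under that assumption, with a hypothetical `page_ceiling()` helper (the real code also tries to grow or trim the run in place before falling back to copying):

```c
#include <stdlib.h>
#include <string.h>

static const size_t pagesize = 4096;	/* assumed page size */

static size_t
page_ceiling(size_t s)
{
	return ((s + pagesize - 1) & ~(pagesize - 1));
}

/*
 * Sketch of in-place large realloc: if the old and new sizes round to
 * the same number of pages, the existing run already fits and nothing
 * moves.
 */
static void *
large_ralloc(void *ptr, size_t oldsize, size_t size)
{
	void *ret;

	if (page_ceiling(size) == page_ceiling(oldsize))
		return (ptr);	/* same page count: no copy, no move */

	/* (Growing/trimming the run in place is omitted here.)
	 * Last resort: allocate, copy, free. */
	if ((ret = malloc(size)) == NULL)
		return (NULL);
	memcpy(ret, ptr, oldsize < size ? oldsize : size);
	free(ptr);
	return (ret);
}
```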
Fix a bug in arena_ralloc() that caused relocation of sub-page objects
even if the old and new sizes were in the same size class.
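In other words, the test must compare rounded size classes, not raw request sizes. An illustrative version of the corrected check, with made-up class boundaries (the real class spacing in malloc.c differs):

```c
#include <stddef.h>

/*
 * Illustrative size-class rounding: 16-byte quantum spacing for small
 * requests, power-of-two spacing for sub-page requests.
 */
static size_t
size2class(size_t size)
{
	if (size <= 512)
		return ((size + 15) & ~(size_t)15);
	size_t c = 1024;
	while (c < size)
		c <<= 1;
	return (c);
}

/* The fixed check: relocate only when the size class actually changes. */
static int
ralloc_must_move(size_t oldsize, size_t newsize)
{
	return (size2class(oldsize) != size2class(newsize));
}
```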
Maintain trees of runs and simplify the per-chunk page map. This allows
logarithmic-time searching for sufficiently large runs in
arena_run_alloc(), whereas the previous algorithm required linear time
in the worst case.
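One natural FreeBSD implementation is the red-black tree macros from tree(3), ordering runs by size with address as a tiebreaker so a best-fit lookup is a single O(log n) `RB_NFIND`. The structure and field names below are hypothetical, not the actual malloc.c internals:

```c
#include <sys/tree.h>	/* FreeBSD tree(3) red-black tree macros */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical run header; the real page-map bookkeeping differs. */
struct run {
	RB_ENTRY(run) link;
	void *addr;	/* start of the run */
	size_t npages;	/* contiguous unused pages in the run */
};

/* Order runs by size, then by address, so the tree supports both
 * best-fit lookups and duplicate sizes. */
static int
run_cmp(struct run *a, struct run *b)
{
	if (a->npages != b->npages)
		return (a->npages < b->npages ? -1 : 1);
	if (a->addr != b->addr)
		return ((uintptr_t)a->addr < (uintptr_t)b->addr ? -1 : 1);
	return (0);
}

RB_HEAD(run_tree, run);
RB_GENERATE_STATIC(run_tree, run, link, run_cmp)

/*
 * O(log n) best fit: RB_NFIND returns the least element >= the key,
 * i.e. the lowest-addressed smallest run with at least 'need' pages.
 */
static struct run *
run_alloc_search(struct run_tree *runs, size_t need)
{
	struct run key = { .addr = NULL, .npages = need };

	return (RB_NFIND(run_tree, runs, &key));
}
```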
Break various large functions into smaller sub-functions, and inline
only the functions that are in the fast path for small object
allocation/deallocation.
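Schematically, the split looks like the following; the function names and the small/large boundary are illustrative:

```c
#include <stddef.h>

void	*arena_malloc_small(size_t size);	/* inlined in the fast path */
void	*arena_malloc_large(size_t size);	/* deliberately out of line */

#define	SMALL_MAX	2048	/* illustrative small/large boundary */

/*
 * Only the small-object path is pulled into callers such as malloc();
 * larger requests branch to an out-of-line function, keeping the hot
 * code compact in the instruction cache.
 */
static inline void *
imalloc(size_t size)
{
	if (size <= SMALL_MAX)
		return (arena_malloc_small(size));	/* fast path */
	return (arena_malloc_large(size));		/* slow path */
}
```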
Remove an unnecessary check in base_pages_alloc_mmap().
Avoid integer division in choose_arena() for the NO_TLS case on
single-CPU systems.
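The idea is simply to special-case the one-arena configuration. A sketch with hypothetical `ncpus`/`narenas` globals and a thread-ID hash standing in for the real arena selection:

```c
#include <pthread.h>
#include <stdint.h>

struct arena;				/* opaque in this sketch */
extern struct arena *arenas[];		/* hypothetical arena table */
extern unsigned ncpus, narenas;

static struct arena *
choose_arena(void)
{
	unsigned ind;

	if (ncpus == 1) {
		/* One CPU, one useful arena: skip the division. */
		ind = 0;
	} else {
		/* NO_TLS fallback: hash the thread onto an arena.
		 * (pthread_t is a pointer on FreeBSD.) */
		ind = (unsigned)((uintptr_t)pthread_self() % narenas);
	}
	return (arenas[ind]);
}
```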
Diffstat (limited to 'lib/libc/stdlib/malloc.3')
-rw-r--r-- | lib/libc/stdlib/malloc.3 | 32 |
1 file changed, 17 insertions(+), 15 deletions(-)
diff --git a/lib/libc/stdlib/malloc.3 b/lib/libc/stdlib/malloc.3
index 466accc..1d33e6a 100644
--- a/lib/libc/stdlib/malloc.3
+++ b/lib/libc/stdlib/malloc.3
@@ -32,7 +32,7 @@
 .\" @(#)malloc.3 8.1 (Berkeley) 6/4/93
 .\" $FreeBSD$
 .\"
-.Dd January 3, 2008
+.Dd February 5, 2008
 .Dt MALLOC 3
 .Os
 .Sh NAME
@@ -95,7 +95,7 @@ bytes.
 The contents of the memory are unchanged up to the lesser of the new and
 old sizes.
 If the new size is larger,
-the value of the newly allocated portion of the memory is undefined.
+the contents of the newly allocated portion of the memory are undefined.
 Upon success, the memory referenced by
 .Fa ptr
 is freed and a pointer to the newly allocated memory is returned.
@@ -204,15 +204,16 @@ This option is enabled by default.
 See the
 .Dq M
 option for related information and interactions.
-.It H
-Use
-.Xr madvise 2
-when pages within a chunk are no longer in use, but the chunk as a whole cannot
-yet be deallocated.
-This is primarily of use when swapping is a real possibility, due to the high
-overhead of the
-.Fn madvise
-system call.
+.It F
+Double/halve the per-arena maximum number of dirty unused pages that are
+allowed to accumulate before informing the kernel about at least half of those
+pages via
+.Xr madvise 2 .
+This provides the kernel with sufficient information to recycle dirty pages if
+physical memory becomes scarce and the pages remain unused.
+The default is 512 pages per arena;
+.Ev MALLOC_OPTIONS=10f
+will prevent any dirty unused pages from accumulating.
 .It J
 Each byte of new memory allocated by
 .Fn malloc ,
@@ -384,11 +385,12 @@ separately in a single data structure that is shared by all threads.
 Huge objects are used by applications infrequently enough that this single
 data structure is not a scalability issue.
 .Pp
-Each chunk that is managed by an arena tracks its contents in a page map as
-runs of contiguous pages (unused, backing a set of small objects, or backing
-one large object).
+Each chunk that is managed by an arena tracks its contents as runs of
+contiguous pages (unused, backing a set of small objects, or backing one large
+object).
 The combination of chunk alignment and chunk page maps makes it possible to
-determine all metadata regarding small and large allocations in constant time.
+determine all metadata regarding small and large allocations in
+constant and logarithmic time, respectively.
 .Pp
 Small objects are managed in groups by page runs.
 Each run maintains a bitmap that tracks which regions are in use.
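As a usage note on the new option: the default limit is 512 (2^9) pages and lowercase 'f' halves it, so the man page's MALLOC_OPTIONS=10f example applies ten halvings to reach zero. A program can request the same behavior through the _malloc_options global that malloc(3) documents:

```c
#include <stdlib.h>

/*
 * Per the documentation added above: 'f' halves the per-arena
 * dirty-page maximum, and "10f" applies it ten times, taking the
 * 512-page default to zero so dirty unused pages are purged as soon
 * as they appear.
 */
const char *_malloc_options = "10f";

int
main(void)
{
	void *p = malloc(1 << 20);

	free(p);	/* the freed pages are handed back via madvise(2) */
	return (0);
}
```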