Following are change highlights associated with official releases.  Important
bug fixes are all mentioned, but some internal enhancements are omitted here for
brevity.  Much more detail can be found in the git revision history:

    https://github.com/jemalloc/jemalloc

* 4.0.0 (August 17, 2015)

  This version contains many speed and space optimizations, both minor and
  major.  The major themes are generalization, unification, and simplification.
  Although many of these optimizations cause no visible behavior change, their
  cumulative effect is substantial.

  New features:
  - Normalize size class spacing to be consistent across the complete size
    range.  By default there are four size classes per size doubling, but this
    is now configurable via the --with-lg-size-class-group option.  Also add the
    --with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and
    --with-lg-tiny-min options, which can be used to tweak page and size class
    settings.  Impacts:
    + Worst case performance for incrementally growing/shrinking reallocation
      is improved because there are far fewer size classes, and therefore
      copying happens less often.
    + Internal fragmentation is limited to 20% for all but the smallest size
    classes (those less than four times the quantum).  For example, requests
    of (1 B + 4 KiB) and (1 B + 4 MiB) previously suffered nearly 50% internal
    fragmentation.
    + Chunk fragmentation tends to be lower because there are fewer distinct run
      sizes to pack.
  - Add support for explicit tcaches.  The "tcache.create", "tcache.flush", and
    "tcache.destroy" mallctls control tcache lifetime and flushing, and the
    MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API
    control which tcache is used for each operation (a usage sketch follows
    this list).
  - Implement per thread heap profiling, as well as the ability to
    enable/disable heap profiling on a per thread basis.  Add the "prof.reset",
    "prof.lg_sample", "thread.prof.name", "thread.prof.active",
    "opt.prof_thread_active_init", "prof.thread_active_init", and
    "thread.prof.active" mallctls.
  - Add support for per arena application-specified chunk allocators, configured
    via the "arena.<i>.chunk_hooks" mallctl.
  - Refactor huge allocation to be managed by arenas, so that arenas now
    function as general purpose independent allocators.  This is important in
    the context of user-specified chunk allocators, aside from the scalability
    benefits.  Related new statistics:
    + The "stats.arenas.<i>.huge.allocated", "stats.arenas.<i>.huge.nmalloc",
      "stats.arenas.<i>.huge.ndalloc", and "stats.arenas.<i>.huge.nrequests"
      mallctls provide high level per arena huge allocation statistics.
    + The "arenas.nhchunks", "arenas.hchunk.<i>.size",
      "stats.arenas.<i>.hchunks.<j>.nmalloc",
      "stats.arenas.<i>.hchunks.<j>.ndalloc",
      "stats.arenas.<i>.hchunks.<j>.nrequests", and
      "stats.arenas.<i>.hchunks.<j>.curhchunks" mallctls provide per size class
      statistics.
  - Add the 'util' column to malloc_stats_print() output, which reports the
    proportion of available regions that are currently in use for each small
    size class.
  - Add "alloc" and "free" modes for for junk filling (see the "opt.junk"
    mallctl), so that it is possible to separately enable junk filling for
    allocation versus deallocation.
  - Add the jemalloc-config script, which provides information about how
    jemalloc was configured, and how to integrate it into application builds.
  - Add metadata statistics, which are accessible via the "stats.metadata",
    "stats.arenas.<i>.metadata.mapped", and
    "stats.arenas.<i>.metadata.allocated" mallctls.
  - Add the "stats.resident" mallctl, which reports the upper limit of
    physically resident memory mapped by the allocator.
  - Add per arena control over unused dirty page purging, via the
    "arenas.lg_dirty_mult", "arena.<i>.lg_dirty_mult", and
    "stats.arenas.<i>.lg_dirty_mult" mallctls.
  - Add the "prof.gdump" mallctl, which makes it possible to toggle the gdump
    feature on/off during program execution.
  - Add sdallocx(), which implements sized deallocation.  The primary
    optimization over dallocx() is the removal of a metadata read, which often
    suffers an L1 cache miss.
  - Add missing header includes in jemalloc/jemalloc.h, so that applications
    only have to #include <jemalloc/jemalloc.h>.
  - Add support for additional platforms:
    + Bitrig
    + Cygwin
    + DragonFlyBSD
    + iOS
    + OpenBSD
    + OpenRISC/or1k
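
  The following is a minimal sketch of the explicit tcache API and sized
  deallocation described above (it assumes a jemalloc 4.0.0 build and the
  <jemalloc/jemalloc.h> header; error handling is omitted and the function
  name is illustrative only):

      #include <jemalloc/jemalloc.h>

      static void
      tcache_example(void)
      {
          unsigned tc;
          size_t sz = sizeof(tc);

          /* Create an explicit tcache; its index is returned via oldp. */
          mallctl("tcache.create", &tc, &sz, NULL, 0);

          /* Allocate and deallocate through that tcache. */
          void *p = mallocx(4096, MALLOCX_TCACHE(tc));
          sdallocx(p, 4096, MALLOCX_TCACHE(tc)); /* sized deallocation */

          /* Flush and destroy the tcache once it is no longer needed. */
          mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
      }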

  Optimizations:
  - Maintain dirty runs in per arena LRUs rather than in per arena trees of
    dirty-run-containing chunks.  In practice this change significantly reduces
    dirty page purging volume.
  - Integrate whole chunks into the unused dirty page purging machinery.  This
    reduces the cost of repeated huge allocation/deallocation, because it
    effectively introduces a cache of chunks.
  - Split the arena chunk map into two separate arrays, in order to increase
    cache locality for the frequently accessed bits.
  - Move small run metadata out of runs, into arena chunk headers.  This reduces
    run fragmentation, smaller runs reduce external fragmentation for small size
    classes, and packed (less uniformly aligned) metadata layout improves CPU
    cache set distribution.
  - Randomly distribute large allocation base pointer alignment relative to page
    boundaries in order to more uniformly utilize CPU cache sets.  This can be
    disabled via the --disable-cache-oblivious configure option, and queried via
    the "config.cache_oblivious" mallctl.
  - Micro-optimize the fast paths for the public API functions.
  - Refactor thread-specific data to reside in a single structure.  This assures
    that only a single TLS read is necessary per call into the public API.
  - Implement in-place huge allocation growing and shrinking.
  - Refactor rtree (radix tree for chunk lookups) to be lock-free, and make
    additional optimizations that reduce maximum lookup depth to one or two
    levels.  This resolves what was a concurrency bottleneck for per arena huge
    allocation, because a global data structure is critical for determining
    which arenas own which huge allocations.

  Incompatible changes:
  - Replace --enable-cc-silence with --disable-cc-silence to suppress spurious
    warnings by default.
  - Assure that the constness of malloc_usable_size()'s return type matches that
    of the system implementation.
  - Change the heap profile dump format to support per thread heap profiling,
    rename pprof to jeprof, and enhance it with the --thread=<n> option.  As a
    result, the bundled jeprof must now be used rather than the upstream
    (gperftools) pprof.
  - Disable "opt.prof_final" by default, in order to avoid atexit(3), which can
    internally deadlock on some platforms.
  - Change the "arenas.nlruns" mallctl type from size_t to unsigned.
  - Replace the "stats.arenas.<i>.bins.<j>.allocated" mallctl with
    "stats.arenas.<i>.bins.<j>.curregs".
  - Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
  - Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the
    MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.

  Removed features:
  - Remove the *allocm() API, which is superseded by the *allocx() API.
  - Remove the --enable-dss option, and make dss non-optional on all platforms
    which support sbrk(2).
  - Remove the "arenas.purge" mallctl, which was obsoleted by the
    "arena.<i>.purge" mallctl in 3.1.0.
  - Remove the unnecessary "opt.valgrind" mallctl; jemalloc automatically
    detects whether it is running inside Valgrind.
  - Remove the "stats.huge.allocated", "stats.huge.nmalloc", and
    "stats.huge.ndalloc" mallctls.
  - Remove the --enable-mremap option.
  - Remove the "stats.chunks.current", "stats.chunks.total", and
    "stats.chunks.high" mallctls.

  Bug fixes:
  - Fix the cactive statistic to decrease (rather than increase) when active
    memory decreases.  This regression was first released in 3.5.0.
  - Fix OOM handling in memalign() and valloc().  A variant of this bug existed
    in all releases since 2.0.0, which introduced these functions.
  - Fix an OOM-related regression in arena_tcache_fill_small(), which could
    cause cache corruption on OOM.  This regression was present in all releases
    from 2.2.0 through 3.6.0.
  - Fix size class overflow handling for malloc(), posix_memalign(), memalign(),
    calloc(), and realloc() when profiling is enabled.
  - Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
    "secondary" precedence is specified, but sbrk(2) is not supported.
  - Fix fallback lg_floor() implementations to handle extremely large inputs.
  - Ensure the default purgeable zone is after the default zone on OS X.
  - Fix latent bugs in atomic_*().
  - Fix the "arena.<i>.dss" mallctl to handle read-only calls.
  - Fix tls_model configuration to enable the initial-exec model when possible.
  - Mark malloc_conf as a weak symbol so that the application can override it.
  - Correctly detect glibc's adaptive pthread mutexes.
  - Fix the --without-export configure option.

* 3.6.0 (March 31, 2014)

  This version contains a critical bug fix for a regression present in 3.5.0 and
  3.5.1.

  Bug fixes:
  - Fix a regression in arena_chunk_alloc() that caused crashes during
    small/large allocation if chunk allocation failed.  In the absence of this
    bug, chunk allocation failure would result in allocation failure, e.g.  NULL
    return from malloc().  This regression was introduced in 3.5.0.
  - Fix gcc intrinsics-based backtracing by specifying -fno-omit-frame-pointer
    to gcc.  Note that the application (and all the libraries it links to)
    must also be compiled with this option for backtracing to be reliable.
  - Use dss allocation precedence for huge allocations as well as small/large
    allocations.
  - Fix test assertion failure message formatting.  This bug did not manifest on
    x86_64 systems because of implementation subtleties in va_list.
  - Fix inconsequential test failures for hash and SFMT code.

  New features:
  - Support heap profiling on FreeBSD.  This feature depends on the proc
    filesystem being mounted during heap profile dumping.

* 3.5.1 (February 25, 2014)

  This version primarily addresses minor bugs in test code.

  Bug fixes:
  - Configure Solaris/Illumos to use MADV_FREE.
  - Fix junk filling for mremap(2)-based huge reallocation.  This is only
    relevant if configuring with the --enable-mremap option specified.
  - Avoid compilation failure if the C99 'restrict' keyword is not supported
    by the compiler.
  - Add a configure test for SSE2 rather than assuming it is usable on i686
    systems.  This fixes test compilation errors, especially on 32-bit Linux
    systems.
  - Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats unit
    test.
  - Fix/remove flawed alignment-related overflow tests.
  - Prevent compiler optimizations that could change backtraces in the
    prof_accum unit test.

* 3.5.0 (January 22, 2014)

  This version focuses on refactoring and automated testing, though it also
  includes some non-trivial heap profiling optimizations not mentioned below.

  New features:
  - Add the *allocx() API, which is a successor to the experimental *allocm()
    API.  The *allocx() functions are slightly simpler to use because they have
    fewer parameters, they directly return the results of primary interest, and
    mallocx()/rallocx() avoid the strict aliasing pitfall that
    allocm()/rallocm() share with posix_memalign() (a sketch follows this
    list).  Note that *allocm() is slated for removal in the next non-bugfix
    release.
  - Add support for LinuxThreads.
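
  A short usage sketch of the *allocx() functions described above (assuming a
  jemalloc 3.5.0 build; the sizes, alignment, and function name are
  illustrative, and error handling is abbreviated):

      #include <jemalloc/jemalloc.h>

      static void
      allocx_example(void)
      {
          /* 64-byte aligned, zero-filled allocation. */
          void *p = mallocx(1024, MALLOCX_ALIGN(64) | MALLOCX_ZERO);
          if (p == NULL)
              return;

          /* rallocx() returns the (possibly moved) pointer directly. */
          p = rallocx(p, 4096, MALLOCX_ALIGN(64));

          size_t usable = sallocx(p, 0); /* query usable size */
          (void)usable;
          dallocx(p, 0);
      }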

  Bug fixes:
  - Unless heap profiling is enabled, disable floating point code and don't link
    with libm.  This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64
    systems, makes it possible to completely disable floating point register
    use.  Some versions of glibc neglect to save/restore caller-saved floating
    point registers during dynamic lazy symbol loading, and the symbol loading
    code uses whatever malloc the application happens to have linked/loaded
    with; the result can be floating point register corruption.
  - Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling
    backtrace creation in imemalign().  This bug impacted posix_memalign() and
    aligned_alloc().
  - Fix a file descriptor leak in a prof_dump_maps() error path.
  - Fix prof_dump() to close the dump file descriptor for all relevant error
    paths.
  - Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for
    allocation, not just deallocation.
  - Fix a data race for large allocation stats counters.
  - Fix a potential infinite loop during thread exit.  This bug occurred on
    Solaris, and could affect other platforms with similar pthreads TSD
    implementations.
  - Don't junk-fill reallocations unless usable size changes.  This fixes a
    violation of the *allocx()/*allocm() semantics.
  - Fix growing large reallocation to junk fill new space.
  - Fix huge deallocation to junk fill when munmap is disabled.
  - Change the default private namespace prefix from empty to je_, and change
    --with-private-namespace-prefix so that it prepends an additional prefix
    rather than replacing je_.  This reduces the likelihood of applications
    which statically link jemalloc experiencing symbol name collisions.
  - Add missing private namespace mangling (relevant when
    --with-private-namespace is specified).
  - Add and use JEMALLOC_INLINE_C so that static inline functions are marked as
    static even for debug builds.
  - Add a missing mutex unlock in a malloc_init_hard() error path.  In practice
    this error path is never executed.
  - Fix numerous bugs in malloc_strtoumax() error handling/reporting.  These
    bugs had no impact except for malformed inputs.
  - Fix numerous bugs in malloc_snprintf().  These bugs were not exercised by
    existing calls, so they had no impact.

* 3.4.1 (October 20, 2013)

  Bug fixes:
  - Fix a race in the "arenas.extend" mallctl that could cause memory corruption
    of internal data structures and subsequent crashes.
  - Fix Valgrind integration flaws that caused Valgrind warnings about reads of
    uninitialized memory in:
    + arena chunk headers
    + internal zero-initialized data structures (relevant to tcache and prof
      code)
  - Preserve errno during the first allocation.  A readlink(2) call during
    initialization fails unless /etc/malloc.conf exists, so errno was typically
    set during the first allocation prior to this fix.
  - Fix compilation warnings reported by gcc 4.8.1.

* 3.4.0 (June 2, 2013)

  This version is essentially a small bugfix release, but the addition of
  aarch64 support requires that the minor version be incremented.

  Bug fixes:
  - Fix race-triggered deadlocks in chunk_record().  These deadlocks were
    typically triggered by multiple threads concurrently deallocating huge
    objects.

  New features:
  - Add support for the aarch64 architecture.

* 3.3.1 (March 6, 2013)

  This version fixes bugs that are typically encountered only when utilizing
  custom run-time options.

  Bug fixes:
  - Fix a locking order bug that could cause deadlock during fork if heap
    profiling were enabled.
  - Fix a chunk recycling bug that could cause the allocator to lose track of
    whether a chunk was zeroed.  On FreeBSD, NetBSD, and OS X, it could cause
    corruption if allocating via sbrk(2) (unlikely unless running with the
    "dss:primary" option specified).  This was completely harmless on Linux
    unless using mlockall(2) (and unlikely even then, unless the
    --disable-munmap configure option or the "dss:primary" option was
    specified).  This regression was introduced in 3.1.0 by the
    mlockall(2)/madvise(2) interaction fix.
  - Fix TLS-related memory corruption that could occur during thread exit if the
    thread never allocated memory.  Only the quarantine and prof facilities were
    susceptible.
  - Fix two quarantine bugs:
    + Internal reallocation of the quarantined object array leaked the old
      array.
    + Reallocation failure for internal reallocation of the quarantined object
      array (very unlikely) resulted in memory corruption.
  - Fix Valgrind integration to annotate all internally allocated memory in a
    way that keeps Valgrind happy about internal data structure access.
  - Fix building for s390 systems.

* 3.3.0 (January 23, 2013)

  This version includes a few minor performance improvements in addition to the
  listed new features and bug fixes.

  New features:
  - Add clipping support to lg_chunk option processing.
  - Add the --enable-ivsalloc option.
  - Add the --without-export option.
  - Add the --disable-zone-allocator option.

  Bug fixes:
  - Fix "arenas.extend" mallctl to output the number of arenas.
  - Fix chunk_recycle() to unconditionally inform Valgrind that returned memory
    is undefined.
  - Fix build break on FreeBSD related to alloca.h.

* 3.2.0 (November 9, 2012)

  In addition to a couple of bug fixes, this version modifies page run
  allocation and dirty page purging algorithms in order to better control
  page-level virtual memory fragmentation.

  Incompatible changes:
  - Change the "opt.lg_dirty_mult" default from 5 to 3 (32:1 to 8:1).

  Bug fixes:
  - Fix dss/mmap allocation precedence code to use recyclable mmap memory only
    after primary dss allocation fails.
  - Fix deadlock in the "arenas.purge" mallctl.  This regression was introduced
    in 3.1.0 by the addition of the "arena.<i>.purge" mallctl.

* 3.1.0 (October 16, 2012)

  New features:
  - Auto-detect whether running inside Valgrind, thus removing the need to
    manually specify MALLOC_CONF=valgrind:true.
  - Add the "arenas.extend" mallctl, which allows applications to create
    manually managed arenas (see the sketch after this list).
  - Add the ALLOCM_ARENA() flag for {,r,d}allocm().
  - Add the "opt.dss", "arena.<i>.dss", and "stats.arenas.<i>.dss" mallctls,
    which provide control over dss/mmap precedence.
  - Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
  - Define LG_QUANTUM for hppa.
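
  A minimal sketch of creating a manually managed arena via "arenas.extend"
  and allocating from it with the experimental allocm() API (assuming a
  jemalloc 3.1.0 build with the experimental API enabled; error handling is
  abbreviated and the function name is illustrative):

      #include <jemalloc/jemalloc.h>

      static void
      arena_example(void)
      {
          unsigned arena_ind;
          size_t sz = sizeof(arena_ind);
          void *p;

          /* Create a new arena and obtain its index. */
          mallctl("arenas.extend", &arena_ind, &sz, NULL, 0);

          /* Allocate 4 KiB from that arena. */
          int err = allocm(&p, NULL, 4096, ALLOCM_ARENA(arena_ind));
          if (err == ALLOCM_SUCCESS)
              dallocm(p, 0);
      }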

  Incompatible changes:
  - Disable tcache by default if running inside Valgrind, in order to avoid
    making unallocated objects appear reachable to Valgrind.
  - Drop const from malloc_usable_size() argument on Linux.

  Bug fixes:
  - Fix heap profiling crash if sampled object is freed via realloc(p, 0).
  - Remove const from __*_hook variable declarations, so that glibc can modify
    them during process forking.
  - Fix mlockall(2)/madvise(2) interaction.
  - Fix fork(2)-related deadlocks.
  - Fix error return value for "thread.tcache.enabled" mallctl.

* 3.0.0 (May 11, 2012)

  Although this version adds some major new features, the primary focus is on
  internal code cleanup that facilitates maintainability and portability, most
  of which is not reflected in the ChangeLog.  This is the first release to
  incorporate substantial contributions from numerous other developers, and the
  result is a more broadly useful allocator (see the git revision history for
  contribution details).  Note that the license has been unified, thanks to
  Facebook granting a license under the same terms as the other copyright
  holders (see COPYING).

  New features:
  - Implement Valgrind support, redzones, and quarantine.
  - Add support for additional platforms:
    + FreeBSD
    + Mac OS X Lion
    + MinGW
    + Windows (no support yet for replacing the system malloc)
  - Add support for additional architectures:
    + MIPS
    + SH4
    + Tilera
  - Add support for cross compiling.
  - Add nallocm(), which rounds a request size up to the nearest size class
    without actually allocating.
  - Implement aligned_alloc() (blame C11).
  - Add the "thread.tcache.enabled" mallctl.
  - Add the "opt.prof_final" mallctl.
  - Update pprof (from gperftools 2.0).
  - Add the --with-mangling option.
  - Add the --disable-experimental option.
  - Add the --disable-munmap option, and make it the default on Linux.
  - Add the --enable-mremap option; use of mremap(2) is now disabled by default.

  Incompatible changes:
  - Enable stats by default.
  - Enable fill by default.
  - Disable lazy locking by default.
  - Rename the "tcache.flush" mallctl to "thread.tcache.flush".
  - Rename the "arenas.pagesize" mallctl to "arenas.page".
  - Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
  - Change the "opt.prof_accum" default from true to false.

  Removed features:
  - Remove the swap feature, including the "config.swap", "swap.avail",
    "swap.prezeroed", "swap.nfds", and "swap.fds" mallctls.
  - Remove highruns statistics, including the
    "stats.arenas.<i>.bins.<j>.highruns" and
    "stats.arenas.<i>.lruns.<j>.highruns" mallctls.
  - As part of small size class refactoring, remove the "opt.lg_[qc]space_max",
    "arenas.cacheline", "arenas.subpage", "arenas.[tqcs]space_{min,max}", and
    "arenas.[tqcs]bins" mallctls.
  - Remove the "arenas.chunksize" mallctl.
  - Remove the "opt.lg_prof_tcmax" option.
  - Remove the "opt.lg_prof_bt_max" option.
  - Remove the "opt.lg_tcache_gc_sweep" option.
  - Remove the --disable-tiny option, including the "config.tiny" mallctl.
  - Remove the --enable-dynamic-page-shift configure option.
  - Remove the --enable-sysv configure option.

  Bug fixes:
  - Fix a statistics-related bug in the "thread.arena" mallctl that could cause
    invalid statistics and crashes.
  - Work around TLS deallocation via free() on Linux.  This bug could cause
    write-after-free memory corruption.
  - Fix a potential deadlock that could occur during interval- and
    growth-triggered heap profile dumps.
  - Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags.
  - Fix chunk_alloc_dss() to stop claiming memory is zeroed.  This bug could
    cause memory corruption and crashes with --enable-dss specified.
  - Fix fork-related bugs that could cause deadlock in children between fork
    and exec.
  - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.
  - Fix realloc(p, 0) to act like free(p).
  - Do not enforce minimum alignment in memalign().
  - Check for NULL pointer in malloc_usable_size().
  - Fix an off-by-one heap profile statistics bug that could be observed in
    interval- and growth-triggered heap profiles.
  - Fix the "epoch" mallctl to update cached stats even if the passed in epoch
    is 0.
  - Fix bin->runcur management to correct a layout policy bug.  This bug did
    not affect correctness.
  - Fix a bug in choose_arena_hard() that potentially caused more arenas to be
    initialized than necessary.
  - Add missing "opt.lg_tcache_max" mallctl implementation.
  - Use glibc allocator hooks to make mixed allocator usage less likely.
  - Fix build issues for --disable-tcache.
  - Don't mangle pthread_create() when --with-private-namespace is specified.

* 2.2.5 (November 14, 2011)

  Bug fixes:
  - Fix huge_ralloc() race when using mremap(2).  This is a serious bug that
    could cause memory corruption and/or crashes.
  - Fix huge_ralloc() to maintain chunk statistics.
  - Fix malloc_stats_print(..., "a") output.

* 2.2.4 (November 5, 2011)

  Bug fixes:
  - Initialize arenas_tsd before using it.  This bug existed for 2.2.[0-3], as
    well as for --disable-tls builds in earlier releases.
  - Do not assume a 4 KiB page size in test/rallocm.c.

* 2.2.3 (August 31, 2011)

  This version fixes numerous bugs related to heap profiling.

  Bug fixes:
  - Fix a prof-related race condition.  This bug could cause memory corruption,
    but only occurred in non-default configurations (prof_accum:false).
  - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is
    excluded from backtraces).
  - Fix a prof-related bug in realloc() (only triggered by OOM errors).
  - Fix prof-related bugs in allocm() and rallocm().
  - Fix prof_tdata_cleanup() for --disable-tls builds.
  - Fix a relative include path, to fix objdir builds.

* 2.2.2 (July 30, 2011)

  Bug fixes:
  - Fix a build error for --disable-tcache.
  - Fix assertions in arena_purge() (for real this time).
  - Add the --with-private-namespace option.  This is a workaround for symbol
    conflicts that can inadvertently arise when using static libraries.

* 2.2.1 (March 30, 2011)

  Bug fixes:
  - Implement atomic operations for x86/x64.  This fixes compilation failures
    for versions of gcc that are still in wide use.
  - Fix an assertion in arena_purge().

* 2.2.0 (March 22, 2011)

  This version incorporates several improvements to algorithms and data
  structures that tend to reduce fragmentation and increase speed.

  New features:
  - Add the "stats.cactive" mallctl.
  - Update pprof (from google-perftools 1.7).
  - Improve backtracing-related configuration logic, and add the
    --disable-prof-libgcc option.

  Bug fixes:
  - Change default symbol visibility from "internal" to "hidden", which
    decreases the overhead of library-internal function calls.
  - Fix symbol visibility so that it is also set on OS X.
  - Fix a build dependency regression caused by the introduction of the .pic.o
    suffix for PIC object files.
  - Add missing checks for mutex initialization failures.
  - Don't use libgcc-based backtracing except on x64, where it is known to work.
  - Fix deadlocks on OS X that were due to memory allocation in
    pthread_mutex_lock().
  - Heap profiling-specific fixes:
    + Fix memory corruption due to integer overflow in small region index
      computation, when using a small enough sample interval that profiling
      context pointers are stored in small run headers.
    + Fix a bootstrap ordering bug that only occurred with TLS disabled.
    + Fix a rallocm() rsize bug.
    + Fix error detection bugs for aligned memory allocation.

* 2.1.3 (March 14, 2011)

  Bug fixes:
  - Fix a cpp logic regression (due to the "thread.{de,}allocatedp" mallctl fix
    for OS X in 2.1.2).
  - Fix a "thread.arena" mallctl bug.
  - Fix a thread cache stats merging bug.

* 2.1.2 (March 2, 2011)

  Bug fixes:
  - Fix "thread.{de,}allocatedp" mallctl for OS X.
  - Add missing jemalloc.a to build system.

* 2.1.1 (January 31, 2011)

  Bug fixes:
  - Fix aligned huge reallocation (affected allocm()).
  - Fix the ALLOCM_LG_ALIGN macro definition.
  - Fix a heap dumping deadlock.
  - Fix a "thread.arena" mallctl bug.

* 2.1.0 (December 3, 2010)

  This version incorporates some optimizations that can't quite be considered
  bug fixes.

  New features:
  - Use Linux's mremap(2) for huge object reallocation when possible.
  - Avoid locking in mallctl*() when possible.
  - Add the "thread.[de]allocatedp" mallctl's.
  - Convert the manual page source from roff to DocBook, and generate both roff
    and HTML manuals.

  Bug fixes:
  - Fix a crash due to incorrect bootstrap ordering.  This only impacted
    --enable-debug --enable-dss configurations.
  - Fix a minor statistics bug for mallctl("swap.avail", ...).

* 2.0.1 (October 29, 2010)

  Bug fixes:
  - Fix a race condition in heap profiling that could cause undefined behavior
    if "opt.prof_accum" were disabled.
  - Add missing mutex unlocks for some OOM error paths in the heap profiling
    code.
  - Fix a compilation error for non-C99 builds.

* 2.0.0 (October 24, 2010)

  This version focuses on the experimental *allocm() API, and on improved
  run-time configuration/introspection.  Nonetheless, numerous performance
  improvements are also included.

  New features:
  - Implement the experimental {,r,s,d}allocm() API, which provides a superset
    of the functionality available via malloc(), calloc(), posix_memalign(),
    realloc(), malloc_usable_size(), and free().  These functions can be used to
    allocate/reallocate aligned zeroed memory, ask for optional extra memory
    during reallocation, prevent object movement during reallocation, etc.
  - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is
    more human-readable, and more flexible.  For example:
      JEMALLOC_OPTIONS=AJP
    is now:
      MALLOC_CONF=abort:true,fill:true,stats_print:true
  - Port to Apple OS X.  Sponsored by Mozilla.
  - Make it possible for the application to control thread-->arena mappings via
    the "thread.arena" mallctl.
  - Add compile-time support for all TLS-related functionality via pthreads TSD.
    This is mainly of interest for OS X, which does not support TLS, but has a
    TSD implementation with similar performance.
  - Override memalign() and valloc() if they are provided by the system.
  - Add the "arenas.purge" mallctl, which can be used to synchronously purge all
    dirty unused pages.
  - Make cumulative heap profiling data optional, so that it is possible to
    limit the amount of memory consumed by heap profiling data structures.
  - Add per thread allocation counters that can be accessed via the
    "thread.allocated" and "thread.deallocated" mallctls.

  Incompatible changes:
  - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above).
  - Increase default backtrace depth from 4 to 128 for heap profiling.
  - Disable interval-based profile dumps by default.

  Bug fixes:
  - Remove bad assertions in fork handler functions.  These assertions could
    cause aborts for some combinations of configure settings.
  - Fix strerror_r() usage to deal with non-standard semantics in GNU libc.
  - Fix leak context reporting.  This bug tended to cause the number of contexts
    to be underreported (though the reported number of objects and bytes were
    correct).
  - Fix a realloc() bug for large in-place growing reallocation.  This bug could
    cause memory corruption, but it was hard to trigger.
  - Fix an allocation bug for small allocations that could be triggered if
    multiple threads raced to create a new run of backing pages.
  - Enhance the heap profiler to trigger samples based on usable size, rather
    than request size.
  - Fix a heap profiling bug due to sometimes losing track of requested object
    size for sampled objects.

* 1.0.3 (August 12, 2010)

  Bug fixes:
  - Fix the libunwind-based implementation of stack backtracing (used for heap
    profiling).  This bug could cause zero-length backtraces to be reported.
  - Add a missing mutex unlock in library initialization code.  If multiple
    threads raced to initialize malloc, some of them could end up permanently
    blocked.

* 1.0.2 (May 11, 2010)

  Bug fixes:
  - Fix junk filling of large objects, which could cause memory corruption.
  - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual
    memory limits could cause swap file configuration to fail.  Contributed by
    Jordan DeLong.

* 1.0.1 (April 14, 2010)

  Bug fixes:
  - Fix compilation when --enable-fill is specified.
  - Fix threads-related profiling bugs that affected accuracy and caused memory
    to be leaked during thread exit.
  - Fix dirty page purging race conditions that could cause crashes.
  - Fix crash in tcache flushing code during thread destruction.

* 1.0.0 (April 11, 2010)

  This release focuses on speed and run-time introspection.  Numerous
  algorithmic improvements make this release substantially faster than its
  predecessors.

  New features:
  - Implement autoconf-based configuration system.
  - Add mallctl*(), for the purposes of introspection and run-time
    configuration.
  - Make it possible for the application to manually flush a thread's cache, via
    the "tcache.flush" mallctl.
  - Base maximum dirty page count on proportion of active memory.
  - Compute various additional run-time statistics, including per size class
    statistics for large objects.
  - Expose malloc_stats_print(), which can be called repeatedly by the
    application.
  - Simplify the malloc_message() signature to only take one string argument,
    and incorporate an opaque data pointer argument for use by the application
    in combination with malloc_stats_print() (see the sketch below).
  - Add support for allocation backed by one or more swap files, and allow the
    application to disable over-commit if swap files are in use.
  - Implement allocation profiling and leak checking.
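
  A small sketch of pairing malloc_stats_print() with an application-supplied
  write callback and opaque data pointer, as described above (assuming the
  write-callback/opaque-pointer/options signature and a jemalloc 1.0.0 build;
  the callback name and FILE target are illustrative):

      #include <stdio.h>
      #include <jemalloc/jemalloc.h>

      /* Receives output; opaque is the pointer passed as the second argument
       * to malloc_stats_print(). */
      static void
      write_cb(void *opaque, const char *s)
      {
          fputs(s, (FILE *)opaque);
      }

      static void
      stats_example(void)
      {
          /* A NULL callback would fall back to malloc_message(). */
          malloc_stats_print(write_cb, stderr, NULL);
      }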

  Removed features:
  - Remove the dynamic arena rebalancing code, since thread-specific caching
    reduces its utility.

  Bug fixes:
  - Modify chunk allocation to work when address space layout randomization
    (ASLR) is in use.
  - Fix thread cleanup bugs related to TLS destruction.
  - Handle 0-size allocation requests in posix_memalign().
  - Fix a chunk leak.  The leaked chunks were never touched, so this impacted
    virtual memory usage, but not physical memory usage.

* linux_2008082[78]a (August 27/28, 2008)

  These snapshot releases are the simple result of incorporating Linux-specific
  support into the FreeBSD malloc sources.

--------------------------------------------------------------------------------
vim:filetype=text:textwidth=80