path: root/include/linux/slub_def.h
Commit message | Author | Age | Files | Lines
* slub: Add statistics for this_cmpxchg_double failures | Christoph Lameter | 2011-03-22 | 1 | -0/+1
* Merge branch 'slub/lockless' into for-linus | Pekka Enberg | 2011-03-20 | 1 | -2/+5
|\
| * Lockless (and preemptless) fastpaths for slub | Christoph Lameter | 2011-03-11 | 1 | -1/+4
| * slub: min_partial needs to be in first cacheline | Christoph Lameter | 2011-03-11 | 1 | -1/+1
* | slub: automatically reserve bytes at the end of slab | Lai Jiangshan | 2011-03-11 | 1 | -0/+1
|/
* slub tracing: move trace calls out of always inlined functions to reduce kern... | Richard Kennedy | 2010-11-06 | 1 | -29/+26
* slub: Enable sysfs support for !CONFIG_SLUB_DEBUG | Christoph Lameter | 2010-10-06 | 1 | -1/+1
* slub: reduce differences between SMP and NUMA | Christoph Lameter | 2010-10-02 | 1 | -4/+1
* slub: Dynamically size kmalloc cache allocations | Christoph Lameter | 2010-10-02 | 1 | -5/+2
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/pen... | Linus Torvalds | 2010-08-22 | 1 | -1/+1
|\
| * slub: add missing __percpu markup in mm/slub_def.h | Namhyung Kim | 2010-08-09 | 1 | -1/+1
* | dma-mapping: rename ARCH_KMALLOC_MINALIGN to ARCH_DMA_MINALIGN | FUJITA Tomonori | 2010-08-11 | 1 | -3/+5
|/
* Merge branch 'perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/fre... | Ingo Molnar | 2010-06-09 | 1 | -1/+2
|\
| * tracing: Remove kmemtrace ftrace plugin | Li Zefan | 2010-06-09 | 1 | -1/+2
* | SLUB: Allow full duplication of kmalloc array for 390 | Christoph Lameter | 2010-05-30 | 1 | -1/+1
* | slub: move kmem_cache_node into it's own cacheline | Alexander Duyck | 2010-05-24 | 1 | -6/+3
|/
* mm: Move ARCH_SLAB_MINALIGN and ARCH_KMALLOC_MINALIGN to <linux/slub_def.h> | David Woodhouse | 2010-05-19 | 1 | -0/+8
* SLUB: this_cpu: Remove slub kmem_cache fields | Christoph Lameter | 2009-12-20 | 1 | -2/+0
* SLUB: Get rid of dynamic DMA kmalloc cache allocation | Christoph Lameter | 2009-12-20 | 1 | -8/+11
* SLUB: Use this_cpu operations in slub | Christoph Lameter | 2009-12-20 | 1 | -5/+1
* tracing, slab: Define kmem_cache_alloc_notrace ifdef CONFIG_TRACING | Li Zefan | 2009-12-11 | 1 | -2/+2
*-. Merge branches 'slab/cleanups' and 'slab/fixes' into for-linus | Pekka Enberg | 2009-09-14 | 1 | -6/+2
|\ \
| | * SLUB: fix ARCH_KMALLOC_MINALIGN cases 64 and 256 | Aaro Koskinen | 2009-08-30 | 1 | -4/+2
| * | slab: remove duplicate kmem_cache_init_late() declarations | Wu Fengguang | 2009-08-06 | 1 | -2/+0
|/ /
* | kmemleak: Trace the kmalloc_large* functions in slub | Catalin Marinas | 2009-07-08 | 1 | -0/+2
|/
* slab,slub: don't enable interrupts during early boot | Pekka Enberg | 2009-06-12 | 1 | -0/+2
* tracing, kmemtrace: Separate include/trace/kmemtrace.h to kmemtrace part and ... | Zhaolei | 2009-04-12 | 1 | -1/+1
* kmemtrace: use tracepoints | Eduard - Gabriel Munteanu | 2009-04-03 | 1 | -8/+4
* Merge branch 'tracing/core-v2' into tracing-for-linus | Ingo Molnar | 2009-04-02 | 1 | -3/+50
|\
| * Merge branch 'for-ingo' of git://git.kernel.org/pub/scm/linux/kernel/git/penb... | Ingo Molnar | 2009-02-20 | 1 | -3/+16
| |\
| | * SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants | Christoph Lameter | 2009-02-20 | 1 | -3/+16
| * | tracing/kmemtrace: normalize the raw tracer event to the unified tracing API | Frederic Weisbecker | 2008-12-30 | 1 | -1/+1
| * | kmemtrace: SLUB hooks. | Eduard - Gabriel Munteanu | 2008-12-29 | 1 | -3/+50
| |/
| |
| \
*-. \ Merge branches 'topic/slob/cleanups', 'topic/slob/fixes', 'topic/slub/core', ... | Pekka Enberg | 2009-03-24 | 1 | -4/+17
|\ \ \
| |_|/
|/| |
| | * SLUB: Do not pass 8k objects through to the page allocator | Pekka Enberg | 2009-02-20 | 1 | -2/+2
| | * SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants | Christoph Lameter | 2009-02-20 | 1 | -3/+16
| |/
|/|
| * slub: move min_partial to struct kmem_cache | David Rientjes | 2009-02-23 | 1 | -1/+1
|/
* SLUB: dynamic per-cache MIN_PARTIAL | Pekka Enberg | 2008-08-05 | 1 | -0/+1
* SL*B: drop kmem cache argument from constructor | Alexey Dobriyan | 2008-07-26 | 1 | -1/+1
* Christoph has moved | Christoph Lameter | 2008-07-04 | 1 | -1/+1
* slub: Do not use 192 byte sized cache if minimum alignment is 128 byte | Christoph Lameter | 2008-07-03 | 1 | -0/+2
* slub: Fallback to minimal order during slab page allocation | Christoph Lameter | 2008-04-27 | 1 | -0/+2
* slub: Update statistics handling for variable order slabs | Christoph Lameter | 2008-04-27 | 1 | -0/+2
* slub: Add kmem_cache_order_objects struct | Christoph Lameter | 2008-04-27 | 1 | -2/+10
* slub: No need for per node slab counters if !SLUB_DEBUG | Christoph Lameter | 2008-04-14 | 1 | -1/+1
* slub: Fix up comments | Christoph Lameter | 2008-03-03 | 1 | -2/+2
* slub: Support 4k kmallocs again to compensate for page allocator slowness | Christoph Lameter | 2008-02-14 | 1 | -3/+3
* slub: Determine gfpflags once and not every time a slab is allocated | Christoph Lameter | 2008-02-14 | 1 | -0/+1
* slub: kmalloc page allocator pass-through cleanup | Pekka Enberg | 2008-02-14 | 1 | -2/+6
* SLUB: Support for performance statistics | Christoph Lameter | 2008-02-07 | 1 | -0/+23