path: root/include/linux/slub_def.h
Commit message (Author, Date, Files, Lines -/+)
* SLUB: this_cpu: Remove slub kmem_cache fields (Christoph Lameter, 2009-12-20, 1 file, -2/+0)
* SLUB: Get rid of dynamic DMA kmalloc cache allocation (Christoph Lameter, 2009-12-20, 1 file, -8/+11)
* SLUB: Use this_cpu operations in slub (Christoph Lameter, 2009-12-20, 1 file, -5/+1)
* tracing, slab: Define kmem_cache_alloc_notrace ifdef CONFIG_TRACING (Li Zefan, 2009-12-11, 1 file, -2/+2)
*-. Merge branches 'slab/cleanups' and 'slab/fixes' into for-linus (Pekka Enberg, 2009-09-14, 1 file, -6/+2)
|\ \
| | * SLUB: fix ARCH_KMALLOC_MINALIGN cases 64 and 256 (Aaro Koskinen, 2009-08-30, 1 file, -4/+2)
| * | slab: remove duplicate kmem_cache_init_late() declarations (Wu Fengguang, 2009-08-06, 1 file, -2/+0)
|/ /
* | kmemleak: Trace the kmalloc_large* functions in slub (Catalin Marinas, 2009-07-08, 1 file, -0/+2)
|/
* slab,slub: don't enable interrupts during early boot (Pekka Enberg, 2009-06-12, 1 file, -0/+2)
* tracing, kmemtrace: Separate include/trace/kmemtrace.h to kmemtrace part and ... (Zhaolei, 2009-04-12, 1 file, -1/+1)
* kmemtrace: use tracepoints (Eduard - Gabriel Munteanu, 2009-04-03, 1 file, -8/+4)
* Merge branch 'tracing/core-v2' into tracing-for-linus (Ingo Molnar, 2009-04-02, 1 file, -3/+50)
|\
| * Merge branch 'for-ingo' of git://git.kernel.org/pub/scm/linux/kernel/git/penb... (Ingo Molnar, 2009-02-20, 1 file, -3/+16)
| |\
| | * SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants (Christoph Lameter, 2009-02-20, 1 file, -3/+16)
| * | tracing/kmemtrace: normalize the raw tracer event to the unified tracing API (Frederic Weisbecker, 2008-12-30, 1 file, -1/+1)
| * | kmemtrace: SLUB hooks. (Eduard - Gabriel Munteanu, 2008-12-29, 1 file, -3/+50)
| |/
| |
| \
*-. \ Merge branches 'topic/slob/cleanups', 'topic/slob/fixes', 'topic/slub/core', ... (Pekka Enberg, 2009-03-24, 1 file, -4/+17)
|\ \ \
| |_|/
|/| |
| | * SLUB: Do not pass 8k objects through to the page allocator (Pekka Enberg, 2009-02-20, 1 file, -2/+2)
| | * SLUB: Introduce and use SLUB_MAX_SIZE and SLUB_PAGE_SHIFT constants (Christoph Lameter, 2009-02-20, 1 file, -3/+16)
| |/
|/|
| * slub: move min_partial to struct kmem_cache (David Rientjes, 2009-02-23, 1 file, -1/+1)
|/
* SLUB: dynamic per-cache MIN_PARTIAL (Pekka Enberg, 2008-08-05, 1 file, -0/+1)
* SL*B: drop kmem cache argument from constructor (Alexey Dobriyan, 2008-07-26, 1 file, -1/+1)
* Christoph has moved (Christoph Lameter, 2008-07-04, 1 file, -1/+1)
* slub: Do not use 192 byte sized cache if minimum alignment is 128 byte (Christoph Lameter, 2008-07-03, 1 file, -0/+2)
* slub: Fallback to minimal order during slab page allocation (Christoph Lameter, 2008-04-27, 1 file, -0/+2)
* slub: Update statistics handling for variable order slabs (Christoph Lameter, 2008-04-27, 1 file, -0/+2)
* slub: Add kmem_cache_order_objects struct (Christoph Lameter, 2008-04-27, 1 file, -2/+10)
* slub: No need for per node slab counters if !SLUB_DEBUG (Christoph Lameter, 2008-04-14, 1 file, -1/+1)
* slub: Fix up comments (Christoph Lameter, 2008-03-03, 1 file, -2/+2)
* slub: Support 4k kmallocs again to compensate for page allocator slowness (Christoph Lameter, 2008-02-14, 1 file, -3/+3)
* slub: Determine gfpflags once and not every time a slab is allocated (Christoph Lameter, 2008-02-14, 1 file, -0/+1)
* slub: kmalloc page allocator pass-through cleanup (Pekka Enberg, 2008-02-14, 1 file, -2/+6)
* SLUB: Support for performance statistics (Christoph Lameter, 2008-02-07, 1 file, -0/+23)
* Explain kmem_cache_cpu fields (Christoph Lameter, 2008-02-04, 1 file, -5/+5)
* SLUB: rename defrag to remote_node_defrag_ratio (Christoph Lameter, 2008-02-04, 1 file, -1/+4)
* Unify /proc/slabinfo configuration (Linus Torvalds, 2008-01-02, 1 file, -2/+0)
* slub: provide /proc/slabinfo (Pekka J Enberg, 2008-01-01, 1 file, -0/+2)
* Slab API: remove useless ctor parameter and reorder parameters (Christoph Lameter, 2007-10-17, 1 file, -1/+1)
* SLUB: Optimize cacheline use for zeroing (Christoph Lameter, 2007-10-16, 1 file, -0/+1)
* SLUB: Place kmem_cache_cpu structures in a NUMA aware way (Christoph Lameter, 2007-10-16, 1 file, -3/+6)
* SLUB: Move page->offset to kmem_cache_cpu->offset (Christoph Lameter, 2007-10-16, 1 file, -0/+1)
* SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab (Christoph Lameter, 2007-10-16, 1 file, -1/+8)
* SLUB: direct pass through of page size or higher kmalloc requests (Christoph Lameter, 2007-10-16, 1 file, -33/+24)
* SLUB: Force inlining for functions in slub_def.h (Christoph Lameter, 2007-08-31, 1 file, -4/+4)
* fix gfp_t annotations for slub (Al Viro, 2007-07-20, 1 file, -1/+1)
* Slab allocators: Cleanup zeroing allocations (Christoph Lameter, 2007-07-17, 1 file, -13/+0)
* SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG (Christoph Lameter, 2007-07-17, 1 file, -0/+4)
* Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics (Christoph Lameter, 2007-07-17, 1 file, -12/+0)
* slob: initial NUMA support (Paul Mundt, 2007-07-16, 1 file, -1/+5)
* SLUB: minimum alignment fixes (Christoph Lameter, 2007-06-16, 1 file, -2/+11)
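A listing like the one above is essentially the output of `git log --graph` restricted to a single path. The sketch below builds a throwaway repository so it runs anywhere; the repository contents and commit subjects are invented for illustration. To get the real history, run the final `git log` command inside a kernel clone against `include/linux/slub_def.h` instead.

```shell
#!/bin/sh
# Sketch: reproduce a per-file, one-line-per-commit history similar to the
# listing above. The throwaway repo and its commits are hypothetical; only
# the git flags matter.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name "Example Author"
git config user.email "author@example.com"

echo "struct kmem_cache;" > slub_def.h
git add slub_def.h
git commit -qm "SLUB: add slub_def.h"

echo "/* updated */" >> slub_def.h
git commit -qam "SLUB: update slub_def.h"

# --graph draws the branch art, --follow tracks the file across renames,
# --date=short gives the YYYY-MM-DD column seen in the listing.
git log --graph --follow --date=short \
    --pretty='format:%s  (%an, %ad)' -- slub_def.h
```

The `-- slub_def.h` pathspec is what narrows the log to one file, matching how the listing tracks only `include/linux/slub_def.h`.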