path: root/include/linux/slub_def.h
Commit message | Author | Age | Files | Lines
* mm/slub.c: wrap kmem_cache->cpu_partial in config CONFIG_SLUB_CPU_PARTIAL | Wei Yang | 2017-07-06 | 1 | -0/+13
* mm/slub.c: wrap cpu_slab->partial in CONFIG_SLUB_CPU_PARTIAL | Wei Yang | 2017-07-06 | 1 | -0/+19
* mm/slub.c: pack red_left_pad with another int to save a word | Wei Yang | 2017-07-06 | 1 | -1/+1
* slub: make sysfs file removal asynchronous | Tejun Heo | 2017-06-23 | 1 | -0/+1
* slub: separate out sysfs_slab_release() from sysfs_slab_remove() | Tejun Heo | 2017-02-22 | 1 | -2/+2
* mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB | Alexander Potapenko | 2016-07-28 | 1 | -0/+4
* mm, kasan: account for object redzone in SLUB's nearest_obj() | Alexander Potapenko | 2016-07-28 | 1 | -4/+6
* mm: SLUB freelist randomization | Thomas Garnier | 2016-07-26 | 1 | -0/+5
* mm: slub: remove unused virt_to_obj() | Andrey Ryabinin | 2016-05-26 | 1 | -16/+0
* mm, kasan: SLAB support | Alexander Potapenko | 2016-03-25 | 1 | -0/+11
* mm/slub: support left redzone | Joonsoo Kim | 2016-03-15 | 1 | -0/+1
* mm: memcontrol: move kmem accounting code to CONFIG_MEMCG | Johannes Weiner | 2016-01-20 | 1 | -1/+1
* mm: slub: share object_err function | Andrey Ryabinin | 2015-02-13 | 1 | -0/+3
* mm: slub: introduce virt_to_obj function | Andrey Ryabinin | 2015-02-13 | 1 | -0/+16
* slab: embed memcg_cache_params to kmem_cache | Vladimir Davydov | 2015-02-12 | 1 | -1/+1
* slub: use sysfs'es release mechanism for kmem_cache | Christoph Lameter | 2014-05-06 | 1 | -0/+9
* slub: rework sysfs layout for memcg caches | Vladimir Davydov | 2014-04-07 | 1 | -0/+3
* Merge branch 'slab/next' of git://git.kernel.org/pub/scm/linux/kernel/git/pen... | Linus Torvalds | 2013-11-22 | 1 | -1/+1
|\
| * mm, slub: fix the typo in include/linux/slub_def.h | Zhi Yong Wu | 2013-11-11 | 1 | -1/+1
* | slub: remove verify_mem_not_deleted() | Christoph Lameter | 2013-09-04 | 1 | -13/+0
* | mm/sl[aou]b: Move kmallocXXX functions to common code | Christoph Lameter | 2013-09-04 | 1 | -97/+0
|/
* slab: Common definition for kmem_cache_node | Christoph Lameter | 2013-02-01 | 1 | -11/+0
* slab: Common Kmalloc cache determination | Christoph Lameter | 2013-02-01 | 1 | -31/+10
* slab: Common definition for the array of kmalloc caches | Christoph Lameter | 2013-02-01 | 1 | -6/+0
* slab: Common constants for kmalloc boundaries | Christoph Lameter | 2013-02-01 | 1 | -16/+3
* slab: Common kmalloc slab index determination | Christoph Lameter | 2013-02-01 | 1 | -63/+0
* slub: slub-specific propagation changes | Glauber Costa | 2012-12-18 | 1 | -0/+1
* sl[au]b: allocate objects from memcg cache | Glauber Costa | 2012-12-18 | 1 | -1/+4
* slab/slub: struct memcg_params | Glauber Costa | 2012-12-18 | 1 | -0/+3
* mm, sl[aou]b: Extract common fields from struct kmem_cache | Christoph Lameter | 2012-06-14 | 1 | -1/+1
* slub: Get rid of the node field | Christoph Lameter | 2012-06-01 | 1 | -1/+0
* Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi... | Linus Torvalds | 2012-03-28 | 1 | -2/+4
|\
| * slub: per cpu partial statistics change | Alex Shi | 2012-02-18 | 1 | -2/+4
* | BUG: headers with BUG/BUG_ON etc. need linux/bug.h | Paul Gortmaker | 2012-03-04 | 1 | -0/+1
|/
* slub: correct comments error for per cpu partial | Alex Shi | 2011-09-27 | 1 | -1/+1
* slub: per cpu cache for partial pages | Christoph Lameter | 2011-08-19 | 1 | -0/+4
* Merge branch 'slub/lockless' of git://git.kernel.org/pub/scm/linux/kernel/git... | Linus Torvalds | 2011-07-30 | 1 | -0/+3
|\
| * slub: fast release on full slab | Christoph Lameter | 2011-07-02 | 1 | -0/+1
| * slub: Add statistics for the case that the current slab does not match the node | Christoph Lameter | 2011-07-02 | 1 | -0/+1
| * slub: Add cmpxchg_double_slab() | Christoph Lameter | 2011-07-02 | 1 | -0/+1
* | slub: Add method to verify memory is not freed | Ben Greear | 2011-07-07 | 1 | -0/+13
* | slab, slub, slob: Unify alignment definition | Christoph Lameter | 2011-06-16 | 1 | -10/+0
|/
* slub: Deal with hyperthetical case of PAGE_SIZE > 2M | Christoph Lameter | 2011-05-21 | 1 | -2/+4
* slub: Remove CONFIG_CMPXCHG_LOCAL ifdeffery | Christoph Lameter | 2011-05-07 | 1 | -2/+0
* slub: Add statistics for this_cmpxchg_double failures | Christoph Lameter | 2011-03-22 | 1 | -0/+1
* Merge branch 'slub/lockless' into for-linus | Pekka Enberg | 2011-03-20 | 1 | -2/+5
|\
| * Lockless (and preemptless) fastpaths for slub | Christoph Lameter | 2011-03-11 | 1 | -1/+4
| * slub: min_partial needs to be in first cacheline | Christoph Lameter | 2011-03-11 | 1 | -1/+1
* | slub: automatically reserve bytes at the end of slab | Lai Jiangshan | 2011-03-11 | 1 | -0/+1
|/
* slub tracing: move trace calls out of always inlined functions to reduce kern... | Richard Kennedy | 2010-11-06 | 1 | -29/+26