path: root/mm/slub.c
Commit message  (Author, Date, Files, Lines -/+)
...
| * slub: Simplify control flow in __slab_alloc()  (Christoph Lameter, 2012-06-01, 1 file, -8/+6)
| * slub: Acquire_slab() avoid loop  (Christoph Lameter, 2012-06-01, 1 file, -13/+15)
| * slub: Add frozen check in __slab_alloc  (Christoph Lameter, 2012-06-01, 1 file, -0/+6)
| * slub: Use freelist instead of "object" in __slab_alloc  (Christoph Lameter, 2012-06-01, 1 file, -18/+20)
* | Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi...  (Linus Torvalds, 2012-06-01, 1 file, -10/+13)
|\ \
| |/
|/|
| * slub: use __SetPageSlab function to set PG_slab flag  (Joonsoo Kim, 2012-05-18, 1 file, -1/+1)
| * slub: fix a memory leak in get_partial_node()  (Joonsoo Kim, 2012-05-18, 1 file, -3/+6)
| * slub: remove unused argument of init_kmem_cache_node()  (Joonsoo Kim, 2012-05-16, 1 file, -4/+4)
| * slub: fix a possible memory leak  (Joonsoo Kim, 2012-05-16, 1 file, -1/+1)
| * slub: fix incorrect return type of get_any_partial()  (Joonsoo Kim, 2012-05-08, 1 file, -1/+1)
* | slub: missing test for partial pages flush work in flush_all()  (majianpeng, 2012-05-17, 1 file, -1/+1)
|/
* Merge branch 'akpm' (Andrew's patch-bomb)  (Linus Torvalds, 2012-03-28, 1 file, -1/+9)
|\
| * slub: only IPI CPUs that have per cpu obj to flush  (Gilad Ben-Yossef, 2012-03-28, 1 file, -1/+9)
* | Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi...  (Linus Torvalds, 2012-03-28, 1 file, -5/+21)
|\ \
| |/
|/|
| * slub: per cpu partial statistics change  (Alex Shi, 2012-02-18, 1 file, -3/+9)
| * slub: include include for prefetch  (Christoph Lameter, 2012-02-10, 1 file, -0/+1)
| * slub: Do not hold slub_lock when calling sysfs_slab_add()  (Christoph Lameter, 2012-02-06, 1 file, -1/+2)
| * slub: prefetch next freelist pointer in slab_alloc()  (Eric Dumazet, 2012-01-24, 1 file, -1/+9)
* | cpuset: mm: reduce large amounts of memory barrier related damage v3  (Mel Gorman, 2012-03-21, 1 file, -15/+25)
|/
* mm,x86,um: move CMPXCHG_DOUBLE config option  (Heiko Carstens, 2012-01-12, 1 file, -3/+6)
* mm,slub,x86: decouple size of struct page from CONFIG_CMPXCHG_LOCAL  (Heiko Carstens, 2012-01-12, 1 file, -3/+3)
* Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi...  (Linus Torvalds, 2012-01-11, 1 file, -29/+48)
|\
| * Merge branch 'slab/urgent' into slab/for-linus  (Pekka Enberg, 2012-01-11, 1 file, -1/+3)
| |\
| | * slub: add missed accounting  (Shaohua Li, 2011-12-13, 1 file, -2/+5)
| | * slub: Switch per cpu partial page support off for debugging  (Christoph Lameter, 2011-12-13, 1 file, -1/+3)
| * | slub: disallow changing cpu_partial from userspace for debug caches  (David Rientjes, 2012-01-10, 1 file, -0/+2)
| * | slub: Extract get_freelist from __slab_alloc  (Christoph Lameter, 2011-12-13, 1 file, -25/+32)
| * | slub: fix a possible memleak in __slab_alloc()  (Eric Dumazet, 2011-12-13, 1 file, -0/+5)
| * | slub: add missed accounting  (Shaohua Li, 2011-11-27, 1 file, -2/+5)
| * | Merge branch 'slab/urgent' into slab/next  (Pekka Enberg, 2011-11-27, 1 file, -16/+26)
| |\ \
| | |/
| * | slub: add taint flag outputting to debug paths  (Dave Jones, 2011-11-16, 1 file, -1/+1)
* | | slub: min order when debug_guardpage_minorder > 0  (Stanislaw Gruszka, 2012-01-10, 1 file, -0/+3)
* | | Merge branch 'for-3.3' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/pe...  (Linus Torvalds, 2012-01-09, 1 file, -3/+3)
|\ \ \
| * | | percpu: Remove irqsafe_cpu_xxx variants  (Christoph Lameter, 2011-12-22, 1 file, -3/+3)
| | |/
| |/|
* | | x86: Fix and improve cmpxchg_double{,_local}()  (Jan Beulich, 2012-01-04, 1 file, -2/+2)
|/ /
* | slub: avoid potential NULL dereference or corruption  (Eric Dumazet, 2011-11-24, 1 file, -10/+11)
* | slub: use irqsafe_cpu_cmpxchg for put_cpu_partial  (Christoph Lameter, 2011-11-24, 1 file, -1/+1)
* | slub: move discard_slab out of node lock  (Shaohua Li, 2011-11-15, 1 file, -4/+12)
* | slub: use correct parameter to add a page to partial list tail  (Shaohua Li, 2011-11-15, 1 file, -1/+2)
|/
* lib/string.c: introduce memchr_inv()  (Akinobu Mita, 2011-10-31, 1 file, -45/+2)
*-. Merge branches 'slab/next' and 'slub/partial' into slab/for-linus  (Pekka Enberg, 2011-10-26, 1 file, -166/+392)
|\ \
| | * slub: Discard slab page when node partial > minimum partial number  (Alex Shi, 2011-09-27, 1 file, -1/+1)
| | * slub: correct comments error for per cpu partial  (Alex Shi, 2011-09-27, 1 file, -1/+1)
| | * slub: Code optimization in get_partial_node()  (Alex Shi, 2011-09-13, 1 file, -4/+2)
| | * slub: per cpu cache for partial pages  (Christoph Lameter, 2011-08-19, 1 file, -47/+292)
| | * slub: return object pointer from get_partial() / new_slab().  (Christoph Lameter, 2011-08-19, 1 file, -60/+73)
| | * slub: pass kmem_cache_cpu pointer to get_partial()  (Christoph Lameter, 2011-08-19, 1 file, -15/+15)
| | * slub: Prepare inuse field in new_slab()  (Christoph Lameter, 2011-08-19, 1 file, -3/+2)
| | * slub: Remove useless statements in __slab_alloc  (Christoph Lameter, 2011-08-19, 1 file, -4/+0)
| | * slub: free slabs without holding locks  (Christoph Lameter, 2011-08-19, 1 file, -13/+13)