author | Shaohua Li <shaohua.li@intel.com> | 2011-11-11 14:54:14 +0800
committer | Pekka Enberg <penberg@kernel.org> | 2011-11-27 22:08:15 +0200
commit | 4c493a5a5c0bab6c434af2723328edd79c49aa0c (patch)
tree | 184c48e7c1759127de931d903bdbbdcc786acac6 /mm
parent | 42616cacf8bf898b1bc734b88a76cbaadffb8eb7 (diff)
slub: add missed accounting
With the per-cpu partial list, a slab is added to the partial list first and only
later moved to the node list. The add/remove_partial accounting in the
__slab_free() code path is therefore almost dead (it still fires only for slub
debug). But we forgot to account for add/remove_partial when moving per-cpu
partial pages to the node list, so the statistics for those events always read 0.
Add the corresponding accounting.
This is against the patch "slub: use correct parameter to add a page to
partial list tail"
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
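The accounting being added follows slub's usual pattern: every structural change to a partial list is paired with a statistics event bump. Below is a minimal userspace sketch of that pattern, not kernel code; the stat_inc() helper and the plain array of counters are illustrative stand-ins for slub's per-cpu statistics machinery:

```c
#include <stdio.h>

/* Illustrative stand-ins for slub's enum stat_item events. */
enum stat_item { FREE_ADD_PARTIAL, FREE_REMOVE_PARTIAL, NR_STATS };

static unsigned long stats[NR_STATS];

/* Models the kernel's stat(): increment one event counter.
 * (In slub it is a per-cpu counter and is compiled out
 * entirely without CONFIG_SLUB_STATS.) */
static void stat_inc(enum stat_item si)
{
	stats[si]++;
}

/* Models the fixed branch of unfreeze_partials(): whenever a page
 * is added to or removed from the node partial list, the matching
 * event is now counted alongside the list operation. */
static void move_page(int currently_on_list, int should_be_on_list)
{
	if (currently_on_list == should_be_on_list)
		return;

	if (currently_on_list) {
		/* remove_partial(n, page); */
		stat_inc(FREE_REMOVE_PARTIAL);
	} else {
		/* add_partial(n, page, DEACTIVATE_TO_TAIL); */
		stat_inc(FREE_ADD_PARTIAL);
	}
}

int main(void)
{
	move_page(0, 1);	/* unfrozen page still has free objects */
	move_page(1, 0);	/* fully drained page leaves the list */

	printf("free_add_partial=%lu free_remove_partial=%lu\n",
	       stats[FREE_ADD_PARTIAL], stats[FREE_REMOVE_PARTIAL]);
	return 0;
}
```

Before this patch, the equivalent of move_page() in unfreeze_partials() performed the list operations without the stat_inc() calls, which is why the two counters stayed at 0.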
Diffstat (limited to 'mm')
-rw-r--r-- | mm/slub.c | 7
1 file changed, 5 insertions, 2 deletions
@@ -1901,11 +1901,14 @@ static void unfreeze_partials(struct kmem_cache *s)
 			}
 
 			if (l != m) {
-				if (l == M_PARTIAL)
+				if (l == M_PARTIAL) {
 					remove_partial(n, page);
-				else
+					stat(s, FREE_REMOVE_PARTIAL);
+				} else {
 					add_partial(n, page,
 						DEACTIVATE_TO_TAIL);
+					stat(s, FREE_ADD_PARTIAL);
+				}
 
 				l = m;
 			}
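For context, stat() is a no-op unless CONFIG_SLUB_STATS is enabled; its definition in mm/slub.c of this era is approximately:

```c
static inline void stat(const struct kmem_cache *s, enum stat_item si)
{
#ifdef CONFIG_SLUB_STATS
	__this_cpu_inc(s->cpu_slab->stat[si]);
#endif
}
```

With the patch applied, the free_add_partial and free_remove_partial counters exposed under /sys/kernel/slab/<cache>/ should reflect pages migrated off the per-cpu partial list instead of staying at 0.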