author    Jesper Dangaard Brouer <brouer@redhat.com>    2015-11-20 15:57:55 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>    2015-11-22 11:58:44 -0800
commit    033745189b1bae3fc931beeaf48604ee7c259309 (patch)
tree      af0d474640b5c70e74507a8195b580f47faf52f0 /mm/slub.c
parent    03ec0ed57ffc77720b811dbb6d44733b58360d9f (diff)
slub: add missing kmem cgroup support to kmem_cache_free_bulk
The initial implementation missed kmem cgroup support in the kmem_cache_free_bulk() call; add it. If CONFIG_MEMCG_KMEM is not enabled, the compiler should be smart enough to not add any asm code.

Incoming bulk-free objects can belong to different kmem cgroups, and the object free call can happen at a later point outside the memcg context. Thus, we need to keep the original kmem_cache, to correctly verify whether a memcg object matches against its "root_cache" (s->memcg_params.root_cache).

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
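The key idea is that the caller hands in the root cache, while each freed object may actually live in a per-cgroup child cache that points back at that root. The standalone C model below illustrates that lookup-and-verify step; the struct layout, the resolve_cache() helper, and all names in it are illustrative stand-ins for this sketch, not the kernel's cache_from_obj()/memcg_params code.

/* cache_model.c - standalone model of the per-object cache lookup.
 * Everything here is an illustrative stand-in, not kernel code. */
#include <stdio.h>
#include <stddef.h>

/* Minimal stand-in for struct kmem_cache: a memcg child cache points
 * back at its root cache; a root cache has root_cache == NULL. */
struct kmem_cache {
	const char *name;
	struct kmem_cache *root_cache;	/* models s->memcg_params.root_cache */
};

/* Models the bulk-free lookup: the caller passes the root cache, but the
 * real cache is taken from the object and verified against that root. */
static struct kmem_cache *resolve_cache(struct kmem_cache *orig_s,
					struct kmem_cache *obj_cache)
{
	if (obj_cache == orig_s || obj_cache->root_cache == orig_s)
		return obj_cache;	/* same cache, or a memcg child of it */

	fprintf(stderr, "wrong cache: expected %s, object is from %s\n",
		orig_s->name, obj_cache->name);
	return orig_s;
}

int main(void)
{
	struct kmem_cache root = { "kmalloc-64", NULL };
	struct kmem_cache child = { "kmalloc-64(memcg)", &root };

	/* One bulk free can mix objects from different cgroups. */
	struct kmem_cache *per_obj[] = { &root, &child, &child };

	for (size_t i = 0; i < sizeof(per_obj) / sizeof(per_obj[0]); i++)
		printf("object %zu freed via %s\n", i,
		       resolve_cache(&root, per_obj[i])->name);
	return 0;
}

Freeing through the child cache rather than the root matters because the per-memcg caches carry their own accounting; the model only demonstrates the match-against-root check that makes that safe.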
Diffstat (limited to 'mm/slub.c')
-rw-r--r--    mm/slub.c    6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index ce17976..3484704 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2887,13 +2887,17 @@ static int build_detached_freelist(struct kmem_cache *s, size_t size,
 /* Note that interrupts must be enabled when calling this function. */
-void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
+void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
 {
 	if (WARN_ON(!size))
 		return;
 	do {
 		struct detached_freelist df;
+		struct kmem_cache *s;
+
+		/* Support for memcg */
+		s = cache_from_obj(orig_s, p[size - 1]);
 		size = build_detached_freelist(s, size, p, &df);
 		if (unlikely(!df.page))
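The hunk above is cut off by the page footer. For orientation, here is a sketch of how the reworked loop fits together; the lines after the df.page check (the slab_free() call and the loop condition) are not part of the quoted diff, so that tail is an assumption based on the pre-existing function, not a verbatim copy.

void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p)
{
	if (WARN_ON(!size))
		return;

	do {
		struct detached_freelist df;
		struct kmem_cache *s;

		/* Each object may sit in a memcg child cache of the root
		 * cache the caller passed in; look it up per iteration. */
		s = cache_from_obj(orig_s, p[size - 1]);

		/* Groups objects sharing one page into a detached freelist
		 * and shrinks size accordingly. */
		size = build_detached_freelist(s, size, p, &df);
		if (unlikely(!df.page))
			continue;

		/* Assumed tail, as in the function before this patch: hand
		 * the detached list back to the page it came from. */
		slab_free(s, df.page, df.freelist, df.tail, df.cnt, _RET_IP_);
	} while (likely(size));
}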