author		Tejun Heo <tj@kernel.org>		2017-02-22 15:41:33 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2017-02-22 16:41:27 -0800
commit		50862ce711b3e9cf8511df7a356892e128b037d3 (patch)
tree		a65cd9b2c8fe319bd8cce533c0977e7932d3442e
parent		01fb58bcba63f8fba37581c24c99e9a515dd0335 (diff)
slab: remove slub sysfs interface files early for empty memcg caches
With kmem cgroup support enabled, kmem_caches can be created and destroyed
frequently and a great number of near empty kmem_caches can accumulate if
there are a lot of transient cgroups and the system is not under memory
pressure.  When memory reclaim starts under such conditions, it can lead to
consecutive deactivation and destruction of many kmem_caches, easily hundreds
of thousands on moderately large systems, exposing scalability issues in the
current slab management code.  This is one of the patches to address the
issue.

Each cache has a number of sysfs interface files under /sys/kernel/slab.  On
a system with a lot of memory and transient memcgs, the number of interface
files which have to be removed once memory reclaim kicks in can reach
millions.  Remove the sysfs interface files for a memcg cache as soon as it
is shrunk to empty during deactivation, instead of keeping them around until
the cache itself is finally destroyed.

Link: http://lkml.kernel.org/r/20170117235411.9408-10-tj@kernel.org
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Jay Vana <jsvana@fb.com>
Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
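To get a feel for the per-cache file count the message refers to, a rough
userspace sketch like the one below can walk /sys/kernel/slab and count the
interface files that sysfs_slab_remove() eventually has to tear down.  This
is only an illustration, not part of the patch; it assumes a SLUB kernel with
sysfs enabled, and the file name count_slab_sysfs.c is made up here.

/*
 * count_slab_sysfs.c - rough illustration only, not part of this patch.
 * Counts slab caches and their sysfs attribute files under /sys/kernel/slab,
 * assuming a SLUB kernel with sysfs support enabled.
 */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
	const char *base = "/sys/kernel/slab";
	DIR *top = opendir(base);
	struct dirent *cache;
	long caches = 0, files = 0;

	if (!top) {
		perror(base);
		return 1;
	}
	while ((cache = readdir(top))) {
		char path[4096];
		DIR *dir;
		struct dirent *attr;

		if (cache->d_name[0] == '.')
			continue;
		caches++;
		snprintf(path, sizeof(path), "%s/%s", base, cache->d_name);
		dir = opendir(path);
		if (!dir)
			continue;	/* skip entries we cannot open */
		while ((attr = readdir(dir)))
			if (attr->d_name[0] != '.')
				files++;	/* one sysfs interface file */
		closedir(dir);
	}
	closedir(top);
	printf("%ld caches, %ld interface files\n", caches, files);
	return 0;
}

Each cache directory holds a few dozen attribute files; multiplied across the
hundreds of thousands of memcg caches described above, that per-cache count
is where the millions of files come from.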
 mm/slub.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 62d0b55..af38aaa 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3959,8 +3959,20 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 #ifdef CONFIG_MEMCG
 static void kmemcg_cache_deact_after_rcu(struct kmem_cache *s)
 {
-        /* called with all the locks held after a sched RCU grace period */
-        __kmem_cache_shrink(s);
+        /*
+         * Called with all the locks held after a sched RCU grace period.
+         * Even if @s becomes empty after shrinking, we can't know that @s
+         * doesn't have allocations already in-flight and thus can't
+         * destroy @s until the associated memcg is released.
+         *
+         * However, let's remove the sysfs files for empty caches here.
+         * Each cache has a lot of interface files which aren't
+         * particularly useful for empty draining caches; otherwise, we can
+         * easily end up with millions of unnecessary sysfs files on
+         * systems which have a lot of memory and transient cgroups.
+         */
+        if (!__kmem_cache_shrink(s))
+                sysfs_slab_remove(s);
 }
 
 void __kmemcg_cache_deactivate(struct kmem_cache *s)
@@ -5659,6 +5671,15 @@ static void sysfs_slab_remove(struct kmem_cache *s)
                  */
                 return;
 
+        if (!s->kobj.state_in_sysfs)
+                /*
+                 * For a memcg cache, this may be called during
+                 * deactivation and again on shutdown.  Remove only once.
+                 * A cache is never shut down before deactivation is
+                 * complete, so no need to worry about synchronization.
+                 */
+                return;
+
 #ifdef CONFIG_MEMCG
         kset_unregister(s->memcg_kset);
 #endif
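The state_in_sysfs test added above makes the removal idempotent:
deactivation may remove the interface files first, and the eventual cache
shutdown path may call sysfs_slab_remove() again.  A minimal userspace sketch
of the same "remove only once" pattern is shown below; the struct and
function names are invented for the sketch and are not kernel code.

/*
 * Toy model of the guard used above: the second removal call is a no-op
 * because the flag (standing in for kobj.state_in_sysfs) is already clear.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_cache {
	const char *name;
	bool in_sysfs;		/* stands in for kobj.state_in_sysfs */
};

static void fake_sysfs_remove(struct fake_cache *c)
{
	if (!c->in_sysfs)
		/* Already removed during deactivation; nothing to do. */
		return;
	c->in_sysfs = false;
	printf("removed sysfs files for %s\n", c->name);
}

int main(void)
{
	struct fake_cache c = { .name = "dentry(42:foo.slice)", .in_sysfs = true };

	fake_sysfs_remove(&c);	/* deactivation: cache shrank to empty */
	fake_sysfs_remove(&c);	/* shutdown: second call does nothing */
	return 0;
}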