field | value | date |
---|---|---|
author | Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp> | 2008-09-22 13:57:52 -0700 |
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2008-09-23 08:09:14 -0700 |
commit | a10cebf56ca7e7c034d1b6646230c6553e478967 (patch) | |
tree | 80f92bd693b7a6079be2814b01c14649b5f82217 /fs/sysfs | |
parent | b4d19cc84e8e6838f4aa0b26b3afcdc8c7f71505 (diff) | |
download | op-kernel-dev-a10cebf56ca7e7c034d1b6646230c6553e478967.zip op-kernel-dev-a10cebf56ca7e7c034d1b6646230c6553e478967.tar.gz | |
memcg: check under limit at shrink_usage
The current memory cgroup (both in mainline and -mm) doesn't account swap
caches as memory (swap cache support has been dropped temporarily for now).
So try_to_free_mem_cgroup_pages doesn't reflect the count of pages that
have been moved to the swap cache.
But this makes mem_cgroup_shrink_usage fail easily when most of the pages
are anon/shmem: shmem_getpage then returns -ENOMEM and the process gets
killed.
This patch adds a res_counter_check_under_limit check to avoid these cases.
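As a rough illustration of the idea (a minimal sketch only, not the verbatim patch: the helper names try_to_free_mem_cgroup_pages and res_counter_check_under_limit come from the description above, but the surrounding loop structure and the retry constant are simplified assumptions), the retry loop in mem_cgroup_shrink_usage can treat "the group is already under its limit" as progress, so the caller is not failed with -ENOMEM merely because reclaim found nothing to free:

```c
/*
 * Simplified sketch of the retry loop in mem_cgroup_shrink_usage()
 * (the real code lives in mm/memcontrol.c; MEMCG_RECLAIM_RETRIES is
 * an assumed retry budget, not the kernel's constant).
 */
#define MEMCG_RECLAIM_RETRIES	5

static int mem_cgroup_shrink_usage_sketch(struct mem_cgroup *mem,
					  gfp_t gfp_mask)
{
	int retry = MEMCG_RECLAIM_RETRIES;
	int progress;

	do {
		progress = try_to_free_mem_cgroup_pages(mem, gfp_mask);
		/* the added check: already under the limit counts as progress */
		progress += res_counter_check_under_limit(&mem->res);
	} while (!progress && --retry);

	return retry ? 0 : -ENOMEM;
}
```

With the extra check, the "most pages are anon/shmem, nothing reclaimable" case described above no longer bubbles up as -ENOMEM from shmem_getpage as long as the cgroup is under its limit.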
BTW, even if swap cache support is enabled again, shrink_usage may still
fail: if a process is moved to another, newly created cgroup between the
precharge and the shrink_usage call in shmem_getpage, there are simply no
pages to reclaim in that cgroup.
So this change makes sense anyway.
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>