author     KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>   2012-05-29 15:06:51 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>       2012-05-29 16:22:24 -0700
commit     4b91355e9dc9ac1eb3d69e56de093899ff2677ef
tree       b6d18618013eadd3b3e3144c90b16d0d5f07b3af   /include/linux/swap.h
parent     181eb39425f2b9275afcb015eaa547d11f71a02f
memcg: fix/change behavior of shared anon at moving task
This patch changes memcg's behavior at task_move().

At task_move(), the kernel scans the task's page table and moves the charges
for mapped pages from the source cgroup to the target cgroup.  The handling of
shared anonymous pages has been buggy for a long time.
Before the patch:

- The spec said 'shared anonymous pages are not moved.'
- The implementation was 'shared anonymous pages may be moved':
  if page_mapcount() <= 2, a shared anonymous page's charge was moved.

After the patch:

- The spec says 'all anonymous pages are moved.'
- The implementation is 'all anonymous pages are moved' (see the sketch below).
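
To make the change concrete, here is a minimal, self-contained sketch of the
old versus new decision.  The names (fake_page, move_anon_charge_old/new) are
hypothetical; this is an illustration of the condition described above, not
the actual mm/memcontrol.c code:

#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	int mapcount;		/* stand-in for page_mapcount(page) */
};

/* Before the patch: a shared anon charge was moved only when mapcount <= 2. */
static bool move_anon_charge_old(const struct fake_page *page)
{
	return page->mapcount <= 2;
}

/* After the patch: every anonymous page's charge is moved, no mapcount test. */
static bool move_anon_charge_new(const struct fake_page *page)
{
	(void)page;		/* the page's sharing level no longer matters */
	return true;
}

int main(void)
{
	struct fake_page heavily_shared = { .mapcount = 5 };

	/* Old behavior skips this page; new behavior moves its charge. */
	printf("old: %d, new: %d\n",
	       move_anon_charge_old(&heavily_shared),
	       move_anon_charge_new(&heavily_shared));
	return 0;
}
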
Considering how memcg is used, this will not affect users' experience.
Shared anonymous pages only exist within a tree of processes that never
calls exec(), and moving just one process of such a tree does not seem like
a sane use case.  For example, libcgroup will not be affected by this
change.  (In any case, nobody noticed the implementation's behavior for a
long time...)
Below is a discussion log:

- The current spec/implementation is complex.
- Shared file caches are already moved.
- The page_mapcount() test is an unclear check; a correct check would also
  have to count swap users, etc.
- Nobody noticed this implementation behavior, so nobody benefits from the
  design.
- In general, once a task has been moved to a cgroup to run there, it is
  not moved again.
- Finally, we have a control knob, memory.move_charge_at_immigrate (a usage
  sketch follows this list).
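
For reference, charge moving at task move only happens when it is enabled on
the destination group.  A minimal userspace sketch, assuming a cgroup-v1 memcg
hierarchy mounted at /sys/fs/cgroup/memory and an existing destination group
named 'dst' (the paths and the default PID below are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Write a short string to a cgroup control file. */
static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	if (fputs(val, f) == EOF) {
		fclose(f);
		return -1;
	}
	return fclose(f);
}

int main(int argc, char **argv)
{
	const char *pid = argc > 1 ? argv[1] : "1234";	/* task to move */

	/* Bit 0 enables moving anon charges, bit 1 file charges; 3 = both. */
	if (write_str("/sys/fs/cgroup/memory/dst/memory.move_charge_at_immigrate", "3"))
		return EXIT_FAILURE;

	/* Writing the PID into the destination's cgroup.procs triggers the move. */
	if (write_str("/sys/fs/cgroup/memory/dst/cgroup.procs", pid))
		return EXIT_FAILURE;

	return EXIT_SUCCESS;
}
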
Here is a patch that allows moving shared anonymous pages completely.  This
makes memcg simpler and fixes the currently broken code.
Suggested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Glauber Costa <glommer@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/swap.h')
-rw-r--r--  include/linux/swap.h | 9
1 file changed, 0 insertions, 9 deletions
diff --git a/include/linux/swap.h b/include/linux/swap.h
index d965c4b..49c0fa9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -359,7 +359,6 @@ struct backing_dev_info;
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR
 extern void
 mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent, bool swapout);
-extern int mem_cgroup_count_swap_user(swp_entry_t ent, struct page **pagep);
 #else
 static inline void
 mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent, bool swapout)
@@ -470,14 +469,6 @@ mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
 {
 }
 
-#ifdef CONFIG_CGROUP_MEM_RES_CTLR
-static inline int
-mem_cgroup_count_swap_user(swp_entry_t ent, struct page **pagep)
-{
-	return 0;
-}
-#endif
-
 #endif /* CONFIG_SWAP */
 #endif /* __KERNEL__*/
 #endif /* _LINUX_SWAP_H */