| author | Kirill A. Shutemov <kirill.shutemov@linux.intel.com> | 2014-02-25 15:01:42 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-02-25 15:25:44 -0800 |
| commit | 9845cbbd113fbb5b769a45d8e88dc47bc12df4e0 (patch) | |
| tree | 6ceaa19094138fe27cc6be0009dea1ef770c762b /mm/kmemleak-test.c | |
| parent | 01412886b735ef241f9a41adf9f707ce1522eb61 (diff) | |
mm, thp: fix infinite loop on memcg OOM
Masayoshi Mizuma reported an application hanging under the memcg limit.
It happens on a write-protection fault to the huge zero page: if we
successfully allocate a huge page to replace the zero page but hit the
memcg limit, we need to split the zero page with split_huge_page_pmd()
and fall back to small pages.
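A minimal userspace sketch of that fallback decision, with
memcg_charge_huge_page() and split_huge_zero_pmd() as made-up stand-ins
for the real kernel internals (an illustration of the idea only, not the
patch itself):

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative fault code; only the name is taken from the kernel. */
#define VM_FAULT_FALLBACK 0x0800

/* Hypothetical stand-ins: pretend the memcg limit is already hit. */
static bool memcg_charge_huge_page(void) { return false; }
static void split_huge_zero_pmd(void)    { puts("split huge zero pmd"); }

/*
 * Write-protection fault on the huge zero page: try to replace it with
 * a real huge page.  If the new page cannot be charged to the memcg,
 * split the zero-page pmd and tell the caller to retry with small pages.
 */
static int wp_huge_zero_page(void)
{
	if (!memcg_charge_huge_page()) {
		split_huge_zero_pmd();
		return VM_FAULT_FALLBACK;	/* fall back to small pages */
	}
	/* ... map the freshly charged huge page writable ... */
	return 0;
}

int main(void)
{
	printf("fault returned 0x%x\n", wp_huge_zero_page());
	return 0;
}
```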
The other part of the problem is that VM_FAULT_OOM has a special
meaning in the do_huge_pmd_wp_page() context: __handle_mm_fault()
expects the page to have been split if it sees VM_FAULT_OOM and will
retry the page fault. This causes an infinite loop if the page was not
split.
do_huge_pmd_wp_zero_page_fallback() can return VM_FAULT_OOM if it
failed to allocate even one small page, so falling back to small pages
will not help. The solution for this part is to return
VM_FAULT_FALLBACK instead of VM_FAULT_OOM when a fallback is required.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/kmemleak-test.c')
0 files changed, 0 insertions, 0 deletions