author | Andrea Arcangeli <aarcange@redhat.com> | 2011-01-13 15:47:16 -0800
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2011-01-13 17:32:47 -0800
commit | 91600e9e592e48736e630851c83da2ad6bf0e91f (patch)
tree | 5b17c58699eed9c7440812f0f2f29f4c80972973 /mm
parent | 14d1a55cd26f1860f837f37ae42520c7c13b1347 (diff)
thp: fix memory-failure hugetlbfs vs THP collision
hugetlbfs was changed to allow memory failure to migrate hugetlbfs pages, and
that broke THP because split_huge_page() was then called on hugetlbfs pages
too.
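
The fix hinges on the distinction between the two kinds of compound page:
PageHuge() is true only for hugetlbfs pages, while PageTransHuge() is true for
any compound head page, including hugetlbfs ones (which is why the assertion
in try_to_unmap() also needs a !PageHuge() qualifier). A minimal sketch of the
guard pattern, illustrative only and not the literal patch context:

```c
/*
 * Illustrative sketch, not the exact hunk from memory-failure.c:
 * hugetlbfs pages are compound pages but not THP, so they must never
 * be handed to split_huge_page().  Check PageHuge() before splitting.
 */
if (!PageHuge(page) && unlikely(split_huge_page(page)))
	return;	/* page is THP and the split failed: give up on it */
```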
compound_head/order was also run unsafely on THP pages, which can be split at
any time.
All compound_head() invocations in memory-failure.c that run on pages that
aren't pinned, and that can therefore be freed and reused from under us while
compound_head() is running, are buggy because compound_head() can return a
dangling pointer. I'm not fixing that here: it is a generic memory-failure bug,
not specific to THP, and it applies to hugetlbfs too, so it can be fixed later,
after THP is merged upstream.
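
For clarity, the racy pattern warned about above looks roughly like this
(illustrative sketch under that assumption; this is not code taken from
memory-failure.c):

```c
/*
 * Illustrative sketch of the race, not actual memory-failure.c code.
 * The page is not pinned here, so between looking the page up and
 * using the head pointer the compound page can be split (THP) or
 * freed and reallocated.
 */
struct page *head = compound_head(page);	/* can race with a split/free */
/*
 * By the time `head` is used, `page` may no longer be a tail page of
 * it: `head` can be a dangling pointer or point at a recycled page.
 */
```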
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/memory-failure.c | 2
-rw-r--r-- | mm/rmap.c | 2
2 files changed, 2 insertions, 2 deletions
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 6a283cc..1b43d0f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -386,7 +386,7 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 	struct task_struct *tsk;
 	struct anon_vma *av;
 
-	if (unlikely(split_huge_page(page)))
+	if (!PageHuge(page) && unlikely(split_huge_page(page)))
 		return;
 	read_lock(&tasklist_lock);
 	av = page_lock_anon_vma(page);
diff --git a/mm/rmap.c b/mm/rmap.c
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1430,7 +1430,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 	int ret;
 
 	BUG_ON(!PageLocked(page));
-	BUG_ON(PageTransHuge(page));
+	VM_BUG_ON(!PageHuge(page) && PageTransHuge(page));
 
 	if (unlikely(PageKsm(page)))
 		ret = try_to_unmap_ksm(page, flags);