author		Wu Fengguang <fengguang.wu@intel.com>	2009-09-16 11:50:12 +0200
committer	Andi Kleen <ak@linux.intel.com>	2009-09-16 11:50:12 +0200
commit		2a7684a23e9c263c2a1e8b2c0027ad1836a0f9df
tree		b9769d2f391d76d9c84c687aa771d36cc539025e /mm
parent		888b9f7c58ebe8303bad817cd554df887a683957
HWPOISON: check and isolate corrupted free pages v2
If memory corruption hits a free buddy page, we can safely ignore it:
no one will access it until page allocation time, at which point
prep_new_page() will automatically check and isolate PG_hwpoison pages
for us (for 0-order allocations).
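For reference, this is roughly the bad-state test that the allocation
path applies; a sketch based on the mm/page_alloc.c of this era (the
_count and flags terms come from the surrounding code, not from the
hunks below). Since PAGE_FLAGS_CHECK_AT_PREP covers all page flags
here, a PG_hwpoison bit left on a free page trips the same test that
catches other corrupted page state:

	/* Sketch of the allocation-time check, simplified */
	if (unlikely(page_mapcount(page) |
		(page->mapping != NULL)  |
		(atomic_read(&page->_count) != 0)  |
		(page->flags & PAGE_FLAGS_CHECK_AT_PREP))) {
		bad_page(page);	/* stays quiet for hwpoisoned pages */
		return 1;	/* tell the allocator to discard the page */
	}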
This patch expands prep_new_page() to check every component page in a
high-order page allocation, in order to completely stop PG_hwpoison
pages from being recirculated.
Note that the common case -- allocating a single page -- does no more
work than before. Allocating an order > 0 page does a bit more work,
but that's relatively uncommon.
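As a concrete illustration (a hypothetical order-2 allocation), the new
loop visits 1 << 2 = 4 component pages; for order 0 it runs exactly
once, i.e. the same single check as before:

	/* Illustration: order = 2, so the loop checks 4 component pages */
	for (i = 0; i < (1 << order); i++) {
		struct page *p = page + i;	/* component pages are contiguous */
		if (unlikely(check_new_page(p)))
			return 1;		/* reject the whole 4-page block */
	}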
This simple implementation may drop some innocent neighbor pages;
hopefully that is not a big problem, because the event should be rare
enough.
This patch adds some runtime cost for high-order page users.
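What "drop" means on the caller side: when prep_new_page() returns
nonzero, the allocator simply retries with a fresh block and never
hands out or frees the rejected one. A minimal sketch, assuming the
buffered_rmqueue() retry loop of this era (simplified, details
elided):

again:
	page = __rmqueue(zone, order, migratetype);	/* grab a free block */
	if (!page)
		goto failed;
	if (prep_new_page(page, order, gfp_flags))
		goto again;	/* poisoned block is leaked, not reused */
	return page;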
[AK: Improved description]
v2: Andi Kleen:
Port to -mm code
Move check into separate function.
Don't dump stack in bad_page() for hwpoisoned pages.
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Diffstat (limited to 'mm')
-rw-r--r--	mm/page_alloc.c | 20
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a0de15f..9faa7ad 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -234,6 +234,12 @@ static void bad_page(struct page *page)
 	static unsigned long nr_shown;
 	static unsigned long nr_unshown;
 
+	/* Don't complain about poisoned pages */
+	if (PageHWPoison(page)) {
+		__ClearPageBuddy(page);
+		return;
+	}
+
 	/*
 	 * Allow a burst of 60 reports, then keep quiet for that minute;
 	 * or allow a steady drip of one report per second.
@@ -646,7 +652,7 @@ static inline void expand(struct zone *zone, struct page *page,
 /*
  * This page is about to be returned from the page allocator
  */
-static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
+static inline int check_new_page(struct page *page)
 {
 	if (unlikely(page_mapcount(page) |
 		(page->mapping != NULL)  |
@@ -655,6 +661,18 @@ static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
 		bad_page(page);
 		return 1;
 	}
+	return 0;
+}
+
+static int prep_new_page(struct page *page, int order, gfp_t gfp_flags)
+{
+	int i;
+
+	for (i = 0; i < (1 << order); i++) {
+		struct page *p = page + i;
+		if (unlikely(check_new_page(p)))
+			return 1;
+	}
 
 	set_page_private(page, 0);
 	set_page_refcounted(page);
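A note on the bad_page() hunk: without the early return, a hwpoisoned
page reaching this point would produce the "Bad page state" report and
a stack dump for a corruption that is already known and handled. The
same hunk, with the intent spelled out in comments (my reading of the
change, not the author's wording):

	/* Don't complain about poisoned pages */
	if (PageHWPoison(page)) {
		__ClearPageBuddy(page);	/* ensure the buddy allocator no
					 * longer treats the page as free */
		return;			/* skip the "Bad page state" splat
					 * and the stack dump */
	}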