commit 12710a56cb56e81bd8f457cc2f50c2ebfc0cb390
author	Bob Picco <bob.picco@hp.com>	2007-06-08 13:47:00 -0700
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2007-06-08 17:23:34 -0700
tree	c71714196a6b3cc2f801387016bf2a1dd1c6726c /arch/x86_64
parent	778e9a9c3e7193ea9f434f382947155ffb59c755
fix sysrq-m oops
We aren't checking for holes in memory. Thus we encounter a section hole
with an empty section map pointer for SPARSEMEM, and oops in show_mem.
This issue has been seen in 2.6.21, current git and current -mm. The
patch below is for mainline and -mm. It was boot tested for SPARSEMEM,
Andy's current VMEMMAP in -mm, and DISCONTIGMEM. A slightly different
patch will be posted to -stable for 2.6.21.
Prior to commit f0a5a58aa812b31fd9f197c4ba48245942364eae, memory_present
was called for node_start_pfn through node_end_pfn. This covered the
hole(s) with reserved pages and valid sections. Most SPARSEMEM-supporting
arches do a pfn_valid() check in show_mem() before computing the page
structure's address.
This issue was brought to my attention on IRC by Arnaldo Carvalho de Melo.
Thanks to Arnaldo for testing.
Signed-off-by: Bob Picco <bob.picco@hp.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'arch/x86_64')
-rw-r--r--	arch/x86_64/mm/init.c	2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/x86_64/mm/init.c b/arch/x86_64/mm/init.c
index 1ad5111..efb6e84 100644
--- a/arch/x86_64/mm/init.c
+++ b/arch/x86_64/mm/init.c
@@ -79,6 +79,8 @@ void show_mem(void)
 		if (unlikely(i % MAX_ORDER_NR_PAGES == 0)) {
 			touch_nmi_watchdog();
 		}
+		if (!pfn_valid(pgdat->node_start_pfn + i))
+			continue;
 		page = pfn_to_page(pgdat->node_start_pfn + i);
 		total++;
 		if (PageReserved(page))