path: root/mm
author    NeilBrown <neilb@suse.de>              2006-06-23 02:05:48 -0700
committer Linus Torvalds <torvalds@g5.osdl.org>  2006-06-23 07:43:07 -0700
commit    e0f23603fb2607315ce52432cc4225df410828cf (patch)
tree      7a32b42a64a64488aee0e91ab016d53c353f9482 /mm
parent    57ae2508610d50893cb3e3bbb869ff70ff724a2a (diff)
[PATCH] Remove semi-softlockup from invalidate_mapping_pages
If invalidate_mapping_pages is called to invalidate a very large mapping
(e.g. a very large block device) and if the only active page in that
device is near the end (or at least, at a very large index), such as,
say, the superblock of an md array, and if that page happens to be
locked when invalidate_mapping_pages is called, then pagevec_lookup will
return this page and, as it is locked, 'next' will be incremented and
pagevec_lookup will be called again, and again, and again, while we
count from 0 up to a very large number.

We should really always set 'next' to 'page->index + 1' before going
around the loop again, not just when the page isn't locked.

Cc: "Steinar H. Gunderson" <sgunderson@bigfoot.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
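To make the failure mode concrete, here is a small userspace reduction of
the pre-patch loop. It is only a sketch: stub_lookup() and LOCKED_INDEX are
invented stand-ins for pagevec_lookup() and the locked page's index, not
kernel interfaces.

#include <stdio.h>
#include <stdbool.h>

#define LOCKED_INDEX 10000000UL		/* e.g. an md superblock page, currently locked */

/* Stand-in for pagevec_lookup(): the only page at or above 'next' is the locked one. */
static bool stub_lookup(unsigned long next, unsigned long *index)
{
	if (next > LOCKED_INDEX)
		return false;
	*index = LOCKED_INDEX;
	return true;
}

int main(void)
{
	unsigned long next = 0, index, calls = 0;

	while (stub_lookup(next, &index)) {
		calls++;
		/* old loop body: the trylock fails on the locked page, so only next++ */
		next++;
	}
	printf("lookups needed to get past one locked page: %lu\n", calls);
	return 0;
}

Run to completion, this prints 10000001: one lookup per index value, which
is the busy loop the patch title calls a semi-softlockup.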
Diffstat (limited to 'mm')
-rw-r--r--  mm/truncate.c | 22 ++++++++++++++++------
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/mm/truncate.c b/mm/truncate.c
index 6cb3fff..cf1b015 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -230,14 +230,24 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping,
 			pagevec_lookup(&pvec, mapping, next, PAGEVEC_SIZE)) {
 		for (i = 0; i < pagevec_count(&pvec); i++) {
 			struct page *page = pvec.pages[i];
+			pgoff_t index;
+			int lock_failed;
 
-			if (TestSetPageLocked(page)) {
-				next++;
-				continue;
-			}
-			if (page->index > next)
-				next = page->index;
+			lock_failed = TestSetPageLocked(page);
+
+			/*
+			 * We really shouldn't be looking at the ->index of an
+			 * unlocked page.  But we're not allowed to lock these
+			 * pages.  So we rely upon nobody altering the ->index
+			 * of this (pinned-by-us) page.
+			 */
+			index = page->index;
+			if (index > next)
+				next = index;
 			next++;
+			if (lock_failed)
+				continue;
+
 			if (PageDirty(page) || PageWriteback(page))
 				goto unlock;
 			if (page_mapped(page))
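The net effect of the patched loop body, reduced to just the index
arithmetic, is 'next = max(next, page->index) + 1' whether or not the
trylock succeeded. A minimal sketch of that arithmetic (illustrative only,
not the kernel code):

#include <stdio.h>

/* Mirrors the patched sequence: index = page->index; if (index > next) next = index; next++; */
static unsigned long advance(unsigned long next, unsigned long index)
{
	if (index > next)
		next = index;
	return next + 1;
}

int main(void)
{
	unsigned long next = 0;
	unsigned long locked_index = 10000000UL;	/* the locked page near the end of the mapping */

	next = advance(next, locked_index);
	printf("next after a single pass: %lu\n", next);	/* 10000001, already past the locked page */
	return 0;
}

With this, the following pagevec_lookup() starts beyond the locked page, so
the scan moves on after one pass instead of millions.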