author	Nick Piggin <npiggin@suse.de>	2007-07-19 01:47:22 -0700
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2007-07-19 10:04:41 -0700
commit	79352894b28550ee0eee919149f57626ec1b3572 (patch)
tree	849e6aa148c69b9df3920199255ca14792eeffa2 /mm
parent	83c54070ee1a2d05c89793884bea1a03f2851ed4 (diff)
mm: fix clear_page_dirty_for_io vs fault race
Fix msync data loss and (less importantly) dirty page accounting
inaccuracies due to the race remaining in clear_page_dirty_for_io().
The deleted comment explains what the race was, and the added comments
explain how it is fixed.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/memory.c         |  9 +++++++++
-rw-r--r--	mm/page-writeback.c | 17 ++++++++++++-----
2 files changed, 21 insertions(+), 5 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 61d51da..50dd3d1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1765,6 +1765,15 @@ gotten:
 unlock:
        pte_unmap_unlock(page_table, ptl);
        if (dirty_page) {
+               /*
+                * Yes, Virginia, this is actually required to prevent a race
+                * with clear_page_dirty_for_io() from clearing the page dirty
+                * bit after it clear all dirty ptes, but before a racing
+                * do_wp_page installs a dirty pte.
+                *
+                * do_no_page is protected similarly.
+                */
+               wait_on_page_locked(dirty_page);
                set_page_dirty_balance(dirty_page);
                put_page(dirty_page);
        }
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 886ea0d..e624827 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -918,6 +918,8 @@ int clear_page_dirty_for_io(struct page *page)
 {
        struct address_space *mapping = page_mapping(page);
 
+       BUG_ON(!PageLocked(page));
+
        if (mapping && mapping_cap_account_dirty(mapping)) {
                /*
                 * Yes, Virginia, this is indeed insane.
@@ -943,14 +945,19 @@ int clear_page_dirty_for_io(struct page *page)
                 * We basically use the page "master dirty bit"
                 * as a serialization point for all the different
                 * threads doing their things.
-                *
-                * FIXME! We still have a race here: if somebody
-                * adds the page back to the page tables in
-                * between the "page_mkclean()" and the "TestClearPageDirty()",
-                * we might have it mapped without the dirty bit set.
                 */
                if (page_mkclean(page))
                        set_page_dirty(page);
+               /*
+                * We carefully synchronise fault handlers against
+                * installing a dirty pte and marking the page dirty
+                * at this point. We do this by having them hold the
+                * page lock at some point after installing their
+                * pte, but before marking the page dirty.
+                * Pages are always locked coming in here, so we get
+                * the desired exclusion. See mm/memory.c:do_wp_page()
+                * for more comments.
+                */
                if (TestClearPageDirty(page)) {
                        dec_zone_page_state(page, NR_FILE_DIRTY);
                        return 1;
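
To make the ordering the patch relies on easier to see, here is a rough userspace model of it (plain C with pthreads, not kernel code). A mutex stands in for the page lock, two atomic flags stand in for the pte dirty bit and PG_dirty, and the two threads play the roles of do_wp_page() and a writeback caller of clear_page_dirty_for_io(). Every name in the model (page_model, model_page_mkclean() and so on) is invented for illustration; only the ordering it demonstrates comes from the patch above.

/*
 * race_model.c - userspace sketch of the ordering added by this patch.
 * Not kernel code: the mutex models the page lock, the two atomic flags
 * model the pte dirty bit and PG_dirty, and the threads model
 * do_wp_page() and a writeback caller of clear_page_dirty_for_io().
 * Build with: cc -pthread race_model.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct page_model {
        pthread_mutex_t lock;           /* stands in for the page lock */
        atomic_bool pte_dirty;          /* dirty bit in the pte mapping the page */
        atomic_bool page_dirty;         /* stands in for PG_dirty */
};

static struct page_model pg = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* page_mkclean(): write-protect the mapping and collect its dirty bit. */
static bool model_page_mkclean(struct page_model *p)
{
        return atomic_exchange(&p->pte_dirty, false);
}

/* clear_page_dirty_for_io(): caller must hold the page lock. */
static bool model_clear_page_dirty_for_io(struct page_model *p)
{
        if (model_page_mkclean(p))
                atomic_store(&p->page_dirty, true);
        /* TestClearPageDirty(): the page goes to I/O only if this wins. */
        return atomic_exchange(&p->page_dirty, false);
}

static void *writeback_thread(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&pg.lock);           /* writeback locks the page */
        if (model_clear_page_dirty_for_io(&pg))
                puts("writeback: page submitted for I/O");
        pthread_mutex_unlock(&pg.lock);
        return NULL;
}

static void *fault_thread(void *arg)
{
        (void)arg;

        /* do_wp_page(): install a writable, dirty pte... */
        atomic_store(&pg.pte_dirty, true);

        /*
         * ...then wait_on_page_locked(): if clear_page_dirty_for_io() is
         * running right now, wait for it to finish before re-dirtying the
         * page, so this set_page_dirty cannot be swallowed between its
         * page_mkclean() and TestClearPageDirty().
         */
        pthread_mutex_lock(&pg.lock);
        pthread_mutex_unlock(&pg.lock);

        atomic_store(&pg.page_dirty, true);     /* set_page_dirty_balance() */
        return NULL;
}

int main(void)
{
        pthread_t wb, fault;

        atomic_store(&pg.pte_dirty, true);      /* userspace dirtied the page */

        pthread_create(&wb, NULL, writeback_thread, NULL);
        pthread_create(&fault, NULL, fault_thread, NULL);
        pthread_join(wb, NULL);
        pthread_join(fault, NULL);

        printf("final: pte_dirty=%d page_dirty=%d\n",
               (int)atomic_load(&pg.pte_dirty),
               (int)atomic_load(&pg.page_dirty));
        return 0;
}

In this toy model, whatever the interleaving, the fault thread's dirtying is never lost: it either remains recorded in one of the dirty flags or has just been picked up by the writeback thread. Dropping the lock/unlock pair in fault_thread() (the stand-in for wait_on_page_locked()) reopens the window the deleted FIXME comment described, where the run can end with the page mapped by a dirty pte while PG_dirty has already been cleared.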