path: root/mm/page_alloc.c
author    Hugh Dickins <hugh@veritas.com>    2007-03-29 01:20:36 -0700
committer Linus Torvalds <torvalds@woody.linux-foundation.org>    2007-03-29 08:22:25 -0700
commit    1ae7000630e3c05b6f7e3dfc76472f1bca6c1788 (patch)
tree      805d97820dae82a5141f0b1aefc1383bd794e956 /mm/page_alloc.c
parent    a2646d1e6c8d2239d8054a7d342eb9775a1d273a (diff)
download  op-kernel-dev-1ae7000630e3c05b6f7e3dfc76472f1bca6c1788.zip
          op-kernel-dev-1ae7000630e3c05b6f7e3dfc76472f1bca6c1788.tar.gz
[PATCH] holepunch: fix shmem_truncate_range punch locking
Miklos Szeredi observes that during truncation of shmem page directories, info->lock is released to improve latency (after lowering i_size and next_index to exclude races); but this is quite wrong for holepunching, which receives no such protection from i_size or next_index, and is left vulnerable to races with shmem_unuse, shmem_getpage and shmem_writepage.

Hold info->lock throughout when holepunching? No, any user could prevent rescheduling for far too long. Instead take info->lock just when needed: in shmem_free_swp when removing the swap entries, and whenever removing a directory page from the level above. But so long as we remove before scanning, we can safely skip taking the lock at the lower levels, except at misaligned start and end of the hole.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Miklos Szeredi <mszeredi@suse.cz>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
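The locking pattern the message describes can be sketched roughly as below. This is an illustrative simplification, not the actual mm/shmem.c diff: the punch_lock parameter, the lazy single acquisition, and the re-check of the entry after taking the lock are assumptions modelled on the description above (take info->lock only when a swap entry is actually about to be freed, and hold it only for the remainder of that scan).

	#include <linux/spinlock.h>
	#include <linux/swap.h>

	/*
	 * Sketch only: free the swap entries in [dir, edir).
	 * punch_lock is the inode's info->lock when this level of the
	 * hole needs race protection (misaligned start/end), or NULL
	 * when the caller has already detached the directory page and
	 * the scan is safe without it.
	 */
	static int shmem_free_swp(swp_entry_t *dir, swp_entry_t *edir,
				  spinlock_t *punch_lock)
	{
		spinlock_t *punch_unlock = NULL;
		swp_entry_t *ptr;
		int freed = 0;

		for (ptr = dir; ptr < edir; ptr++) {
			if (!ptr->val)
				continue;
			if (unlikely(punch_lock)) {
				/* Take the lock once, on first live entry */
				punch_unlock = punch_lock;
				punch_lock = NULL;
				spin_lock(punch_unlock);
				/* Entry may have raced away while unlocked */
				if (!ptr->val)
					continue;
			}
			free_swap_and_cache(*ptr);
			*ptr = (swp_entry_t){0};
			freed++;
		}
		if (punch_unlock)
			spin_unlock(punch_unlock);
		return freed;
	}

Under this scheme the common case (a fully covered, already-detached directory page) never touches info->lock at all, so holepunching a large range cannot stall other lockers, while the racy boundary cases still serialize against shmem_unuse, shmem_getpage and shmem_writepage.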
Diffstat (limited to 'mm/page_alloc.c')
0 files changed, 0 insertions, 0 deletions