author    Hugh Dickins <hugh@veritas.com>    2005-09-03 15:54:41 -0700
committer Linus Torvalds <torvalds@evo.osdl.org>    2005-09-05 00:05:42 -0700
commit    5d337b9194b1ce3b6fd5f3cb2799455ed2f9a3d1 (patch)
tree      91ed9ef6f4cb5f6a1832f2baaaabd53fcd83513e /Documentation/vm
parent    048c27fd72816b44e096997d1c6901c3abbfd45b (diff)
[PATCH] swap: swap_lock replace list+device
The idea of a swap_device_lock per device, and a swap_list_lock over them all, is appealing; but in practice almost every holder of swap_device_lock must already hold swap_list_lock, which defeats the purpose of the split.

The only exceptions have been swap_duplicate, valid_swaphandles and an untrodden path in try_to_unuse (plus a few places added in this series). valid_swaphandles doesn't show up high in profiles, but swap_duplicate does demand attention. However, with the hold time in get_swap_pages so much reduced, I've not yet found a load and set of swap device priorities to show even swap_duplicate benefitting from the split. Certainly the split is mere overhead in the common case of a single swap device.

So, replace swap_list_lock and swap_device_lock by spinlock_t swap_lock (generally we seem to prefer an _ in the name, and not hide in a macro).

If someone can show a regression in swap_duplicate, then probably we should add a hashlock for the swap_map entries alone (shorts being anatomic), so as to help the case of the single swap device too.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
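A minimal sketch of the consolidation described above, in kernel-style C. Only swap_lock, swap_map and the old lock names come from this commit message; the struct and function here are hypothetical stand-ins, not the real code in mm/swapfile.c:

#include <linux/spinlock.h>

/*
 * Before this patch a typical path nested two locks:
 *
 *	swap_list_lock();
 *	swap_device_lock(p);		(per-device, rarely held alone)
 *	... update p->swap_map[] ...
 *	swap_device_unlock(p);
 *	swap_list_unlock();
 *
 * After it, the single swap_lock covers the swap list and every
 * device's swap_map.  The struct below is an illustrative stand-in,
 * not the real swap_info_struct.
 */
static DEFINE_SPINLOCK(swap_lock);

struct swap_info_sketch {
	unsigned short *swap_map;	/* per-handle reference counts */
};

/* Hypothetical analogue of swap_duplicate(): bump one handle's count. */
static void swap_duplicate_sketch(struct swap_info_sketch *p,
				  unsigned long offset)
{
	spin_lock(&swap_lock);		/* one lock; no ordering to respect */
	p->swap_map[offset]++;
	spin_unlock(&swap_lock);
}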
Diffstat (limited to 'Documentation/vm')
-rw-r--r-- Documentation/vm/locking | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/Documentation/vm/locking b/Documentation/vm/locking
index c3ef09a..f366fa9 100644
--- a/Documentation/vm/locking
+++ b/Documentation/vm/locking
@@ -83,19 +83,18 @@ single address space optimization, so that the zap_page_range (from
vmtruncate) does not lose sending ipi's to cloned threads that might
be spawned underneath it and go to user mode to drag in pte's into tlbs.
-swap_list_lock/swap_device_lock
--------------------------------
+swap_lock
+--------------
The swap devices are chained in priority order from the "swap_list" header.
The "swap_list" is used for the round-robin swaphandle allocation strategy.
The #free swaphandles is maintained in "nr_swap_pages". These two together
-are protected by the swap_list_lock.
+are protected by the swap_lock.
-The swap_device_lock, which is per swap device, protects the reference
-counts on the corresponding swaphandles, maintained in the "swap_map"
-array, and the "highest_bit" and "lowest_bit" fields.
+The swap_lock also protects all the device reference counts on the
+corresponding swaphandles, maintained in the "swap_map" array, and the
+"highest_bit" and "lowest_bit" fields.
-Both of these are spinlocks, and are never acquired from intr level. The
-locking hierarchy is swap_list_lock -> swap_device_lock.
+The swap_lock is a spinlock, and is never acquired from intr level.
To prevent races between swap space deletion or async readahead swapins
deciding whether a swap handle is being used, ie worthy of being read in
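Since the patched text above says swap_lock is never acquired from intr level, the plain spin_lock()/spin_unlock() pair suffices. A hedged sketch of that point follows; nr_swap_pages comes from the patched text, while the function is a hypothetical illustration, not mm/swapfile.c:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(swap_lock);
static long nr_swap_pages;		/* #free swaphandles, as in the text */

/*
 * Hypothetical free path: since no interrupt handler ever takes
 * swap_lock, plain spin_lock() is enough; the costlier spin_lock_irq()
 * or spin_lock_irqsave() variants are not needed.
 */
static void swap_handle_free_sketch(void)
{
	spin_lock(&swap_lock);
	nr_swap_pages++;
	spin_unlock(&swap_lock);
}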