author    Bo Liu <bo-liu@hotmail.com>  2009-11-02 16:50:33 +0000
committer Linus Torvalds <torvalds@linux-foundation.org>  2009-11-02 09:44:41 -0800
commit    32c5fc10e79a7053ac5728b01a0bff55cbcb9d49 (patch)
tree      7a392ac3196770c49622d5d5cb41f77c46a35f83
parent    c9354c85c1c7bac788ce57d3c17f2016c1c45b1d (diff)
mm: remove incorrect swap_count() from try_to_unuse()
In try_to_unuse(), swcount is a local copy of *swap_map, including the
SWAP_HAS_CACHE bit; but a wrong comparison against swap_count(*swap_map),
which masks off the SWAP_HAS_CACHE bit, succeeded where it should fail.

That had the effect of resetting the mm from which to start searching for
the next swap page, to an irrelevant mm instead of to an mm in which this
swap page had been found: which may increase search time by ~20%. But
we're used to swapoff being slow, so never noticed the slowdown.

Remove that one spurious use of swap_count(): Bo Liu thought it merely
redundant, Hugh rewrote the description since it was measurably wrong.

Signed-off-by: Bo Liu <bo-liu@hotmail.com>
Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--  mm/swapfile.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a1bc6b9..9c590ee 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1151,8 +1151,7 @@ static int try_to_unuse(unsigned int type)
 		} else
 			retval = unuse_mm(mm, entry, page);
-		if (set_start_mm &&
-		    swap_count(*swap_map) < swcount) {
+		if (set_start_mm && *swap_map < swcount) {
 			mmput(new_start_mm);
 			atomic_inc(&mm->mm_users);
 			new_start_mm = mm;