author    | Zhihui Zhang <zzhsuny@gmail.com> | 2015-06-24 16:56:42 -0700
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2015-06-24 17:49:42 -0700
commit    | 95bbc0c7210a7397fec1cd219f896ca95bf29e3e (patch)
tree      | a2dc0ec161c0d15d9393136384c0f2351793a66b /mm/vmscan.c
parent    | f012a84aff7a7f1d50b060e8b205ad68ffb86045 (diff)
download  | op-kernel-dev-95bbc0c7210a7397fec1cd219f896ca95bf29e3e.zip
          | op-kernel-dev-95bbc0c7210a7397fec1cd219f896ca95bf29e3e.tar.gz
mm: rename RECLAIM_SWAP to RECLAIM_UNMAP
The name SWAP implies that we are dealing with anonymous pages only. In
fact, the original patch that introduced the min_unmapped_ratio logic
was meant to fix an issue related to file pages. Rename the flag to
RECLAIM_UNMAP to match what it does.
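For context, zone_reclaim_mode is a bitmask (exposed as the vm.zone_reclaim_mode
sysctl), and the renamed bit permits unmapping of any mapped page, file-backed or
anonymous, which is why the SWAP name undersold it. A minimal sketch follows; the
defines mirror those in the diff below, while reclaim_may_unmap() is a hypothetical
helper, not kernel code (the real gating happens via scan_control.may_unmap):

/* Illustrative only: flag layout as in mm/vmscan.c after this patch. */
#define RECLAIM_OFF   0
#define RECLAIM_ZONE  (1 << 0)	/* run zone reclaim at all */
#define RECLAIM_WRITE (1 << 1)	/* allow writing out dirty pages */
#define RECLAIM_UNMAP (1 << 2)	/* allow unmapping mapped pages (file or anon) */

/* Hypothetical helper: this is the value that feeds scan_control.may_unmap. */
static int reclaim_may_unmap(int zone_reclaim_mode)
{
	return !!(zone_reclaim_mode & RECLAIM_UNMAP);
}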
Historically, commit a6dc60f8975a ("vmscan: rename sc.may_swap to
may_unmap") renamed .may_swap to .may_unmap, leaving RECLAIM_SWAP
behind. commit 2e2e42598908 ("vmscan,memcg: reintroduce sc->may_swap")
later reintroduced .may_swap for the memory controller.
Signed-off-by: Zhihui Zhang <zzhsuny@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r-- | mm/vmscan.c | 12
1 files changed, 6 insertions, 6 deletions
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c627fa4..19ef01e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3597,7 +3597,7 @@ int zone_reclaim_mode __read_mostly;
 #define RECLAIM_OFF 0
 #define RECLAIM_ZONE (1<<0)	/* Run shrink_inactive_list on the zone */
 #define RECLAIM_WRITE (1<<1)	/* Writeout pages during reclaim */
-#define RECLAIM_SWAP (1<<2)	/* Swap pages out during reclaim */
+#define RECLAIM_UNMAP (1<<2)	/* Unmap pages during reclaim */
 
 /*
  * Priority for ZONE_RECLAIM. This determines the fraction of pages
@@ -3639,12 +3639,12 @@ static long zone_pagecache_reclaimable(struct zone *zone)
 	long delta = 0;
 
 	/*
-	 * If RECLAIM_SWAP is set, then all file pages are considered
+	 * If RECLAIM_UNMAP is set, then all file pages are considered
 	 * potentially reclaimable. Otherwise, we have to worry about
 	 * pages like swapcache and zone_unmapped_file_pages() provides
 	 * a better estimate
 	 */
-	if (zone_reclaim_mode & RECLAIM_SWAP)
+	if (zone_reclaim_mode & RECLAIM_UNMAP)
 		nr_pagecache_reclaimable = zone_page_state(zone, NR_FILE_PAGES);
 	else
 		nr_pagecache_reclaimable = zone_unmapped_file_pages(zone);
@@ -3675,15 +3675,15 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
 		.order = order,
 		.priority = ZONE_RECLAIM_PRIORITY,
 		.may_writepage = !!(zone_reclaim_mode & RECLAIM_WRITE),
-		.may_unmap = !!(zone_reclaim_mode & RECLAIM_SWAP),
+		.may_unmap = !!(zone_reclaim_mode & RECLAIM_UNMAP),
 		.may_swap = 1,
 	};
 
 	cond_resched();
 	/*
-	 * We need to be able to allocate from the reserves for RECLAIM_SWAP
+	 * We need to be able to allocate from the reserves for RECLAIM_UNMAP
 	 * and we also need to be able to write out pages for RECLAIM_WRITE
-	 * and RECLAIM_SWAP.
+	 * and RECLAIM_UNMAP.
 	 */
 	p->flags |= PF_MEMALLOC | PF_SWAPWRITE;
 	lockdep_set_current_reclaim_state(gfp_mask);
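As a usage note, the same bit values are what administrators write to
/proc/sys/vm/zone_reclaim_mode (1 = RECLAIM_ZONE, 2 = RECLAIM_WRITE,
4 = RECLAIM_UNMAP). A minimal userspace sketch, assuming a kernel with
zone reclaim support; the file name is real, but the program itself and
its error handling are illustrative only:

/* enable_zone_reclaim.c: turn on zone reclaim with unmapping allowed.
 * Writes RECLAIM_ZONE | RECLAIM_UNMAP (1 | 4 = 5) to the sysctl file.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "w");

	if (!f) {
		perror("zone_reclaim_mode");
		return 1;
	}
	fprintf(f, "%d\n", 1 | 4);	/* RECLAIM_ZONE | RECLAIM_UNMAP */
	fclose(f);
	return 0;
}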