author		Mel Gorman <mgorman@suse.de>			2013-02-22 16:34:27 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2013-02-23 17:50:16 -0800
commit		3abef4e6c23feef4aa9ab161ae138d6d39ae69f3
tree		af0648448caa7715fca89ba78c9bca606a1d9e74 /mm/migrate.c
parent		34f0315adb58af3b01f59d05b2bce267474e71cb
mm: numa: take THP into account when migrating pages for NUMA balancing
Wanpeng Li pointed out that numamigrate_isolate_page() assumes that only
one base page is being migrated when in fact it can also be handling a
transparent huge page (THP).
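
For scale: compound_order() returns 0 for a base page, so 1UL <<
compound_order(page) is 1 in the common case, while for a THP it returns
HPAGE_PMD_ORDER, which is 9 on x86-64 with 4KB base pages and 2MB huge
pages, giving 512 base pages. A minimal standalone sketch of that
arithmetic (ordinary userspace C, not kernel code; the order value is an
assumption for that configuration):

/*
 * Standalone sketch (not kernel code) of the page-count arithmetic the
 * fix relies on.  compound_order() is 0 for a base page, so the old
 * hard-coded "1" was only correct for that case; for a THP the order is
 * HPAGE_PMD_ORDER, assumed here to be 9 (2MB THP over 4KB base pages
 * on x86-64), giving 512 base pages.
 */
#include <stdio.h>

#define HPAGE_PMD_ORDER 9	/* assumed x86-64 value: 2MB / 4KB */

/* A compound page of a given order spans 2^order base pages. */
static unsigned long nr_base_pages(unsigned int order)
{
	return 1UL << order;
}

int main(void)
{
	printf("base page (order 0): %lu page(s)\n", nr_base_pages(0));
	printf("THP (order %d):      %lu page(s)\n",
	       HPAGE_PMD_ORDER, nr_base_pages(HPAGE_PMD_ORDER));
	return 0;
}
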
The consequence is that a migration may be attempted when a target node
is nearly full, only to fail later. This is unlikely to be user-visible,
but it should be fixed. While we are at it, migrate_balanced_pgdat()
should treat nr_migrate_pages as an unsigned long, since it is used as a
watermark quantity.
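
The type change can be sketched the same way: watermarks and free-page
counts are unsigned long page counts, and nr_migrate_pages is added to a
watermark before the comparison, so it should share that type. The
minimal model below uses a hypothetical zone_model struct and
illustrative numbers; it is not the kernel's migrate_balanced_pgdat():

/*
 * Minimal model (assumed names and illustrative numbers, not the
 * kernel's migrate_balanced_pgdat()) of why nr_migrate_pages should be
 * unsigned long: it is added to watermark values, which are unsigned
 * long page counts, before being compared against free pages.
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the zone state the real check consults. */
struct zone_model {
	unsigned long free_pages;
	unsigned long high_wmark;
};

/* True if migrating nr_migrate_pages pages leaves the zone balanced. */
static bool watermark_ok(const struct zone_model *z,
			 unsigned long nr_migrate_pages)
{
	return z->free_pages > z->high_wmark + nr_migrate_pages;
}

int main(void)
{
	struct zone_model z = { .free_pages = 800, .high_wmark = 400 };

	/* One base page fits comfortably above the high watermark ... */
	printf("1 page ok:    %d\n", watermark_ok(&z, 1));
	/* ... but a 512-page THP would push the zone below it. */
	printf("512 pages ok: %d\n", watermark_ok(&z, 512));
	return 0;
}
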
Signed-off-by: Mel Gorman <mgorman@suse.de>
Suggested-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Simon Jeons <simon.jeons@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/migrate.c')
-rw-r--r--	mm/migrate.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 2fd8b4af..77f4e70 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1461,7 +1461,7 @@ int migrate_vmas(struct mm_struct *mm, const nodemask_t *to,
  * pages. Currently it only checks the watermarks which crude
  */
 static bool migrate_balanced_pgdat(struct pglist_data *pgdat,
-				   int nr_migrate_pages)
+				   unsigned long nr_migrate_pages)
 {
 	int z;
 	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
@@ -1559,8 +1559,10 @@ int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
 	int ret = 0;
 
+	VM_BUG_ON(compound_order(page) && !PageTransHuge(page));
+
 	/* Avoid migrating to a node that is nearly full */
-	if (migrate_balanced_pgdat(pgdat, 1)) {
+	if (migrate_balanced_pgdat(pgdat, 1UL << compound_order(page))) {
 		int page_lru;
 
 		if (isolate_lru_page(page)) {
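
The two hunks work together: the new VM_BUG_ON() documents the invariant
that any compound page reaching numamigrate_isolate_page() is a THP, the
watermark check now reserves the full 1UL << compound_order(page) base
pages rather than a hard-coded 1, and widening nr_migrate_pages to
unsigned long keeps that count the same type as the watermark values it
is added to.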