author		Rik van Riel <riel@redhat.com>	2014-04-11 13:00:28 -0400
committer	Ingo Molnar <mingo@kernel.org>	2014-05-07 13:33:46 +0200
commit		5085e2a328849bdee6650b32d52c87c3788ab01c (patch)
tree		315d11c9247c532c7e9800ca23380218cee64f87 /kernel/sched
parent		792568ec6a31ca560ca4d528782cbc6cd2cea8b0 (diff)
sched/numa: Retry placement more frequently when misplaced
When tasks have not converged on their preferred nodes yet, we want
to retry fairly often, to make sure we do not migrate a task's memory
to an undesirable location, only to have to move it again later.
This patch reduces the interval at which migration is retried
when the task's numa_scan_period is small.
Signed-off-by: Rik van Riel <riel@redhat.com>
Tested-by: Vinod Chegu <chegu_vinod@hp.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1397235629-16328-3-git-send-email-riel@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/fair.c	5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f6457b6..ecea8d9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1326,12 +1326,15 @@ static int task_numa_migrate(struct task_struct *p)
 /* Attempt to migrate a task to a CPU on the preferred node. */
 static void numa_migrate_preferred(struct task_struct *p)
 {
+	unsigned long interval = HZ;
+
 	/* This task has no NUMA fault statistics yet */
 	if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults_memory))
 		return;
 
 	/* Periodically retry migrating the task to the preferred node */
-	p->numa_migrate_retry = jiffies + HZ;
+	interval = min(interval, msecs_to_jiffies(p->numa_scan_period) / 16);
+	p->numa_migrate_retry = jiffies + interval;
 
 	/* Success if task is already running on preferred CPU */
 	if (task_node(p) == p->numa_preferred_nid)
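For a rough feel of what the new formula does, the sketch below mimics the retry-interval computation in plain userspace C. It is only an illustration of this patch's arithmetic, not kernel code: it assumes CONFIG_HZ=1000 (so one jiffy is one millisecond and a local msecs_to_jiffies() stand-in is the identity), and the scan-period values fed to it are made up for demonstration.

/*
 * Userspace sketch of the retry-interval calculation in this patch.
 * Assumes HZ == 1000; real kernels may be built with other HZ values,
 * in which case msecs_to_jiffies() is not simply the identity.
 */
#include <stdio.h>

#define HZ 1000	/* assumed CONFIG_HZ for this sketch */

static unsigned long msecs_to_jiffies(unsigned long ms)
{
	return ms;	/* only true when HZ == 1000 */
}

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Mirror of the new logic: retry after min(1 second, numa_scan_period / 16). */
static unsigned long numa_retry_interval(unsigned long numa_scan_period_ms)
{
	unsigned long interval = HZ;

	interval = min_ul(interval, msecs_to_jiffies(numa_scan_period_ms) / 16);
	return interval;
}

int main(void)
{
	/* Illustrative scan periods in milliseconds, not taken from the patch. */
	unsigned long periods[] = { 1000, 5000, 20000, 60000 };

	for (unsigned int i = 0; i < sizeof(periods) / sizeof(periods[0]); i++)
		printf("numa_scan_period=%5lu ms -> retry in %4lu jiffies (old code: %d)\n",
		       periods[i], numa_retry_interval(periods[i]), HZ);
	return 0;
}

With a short scan period of 1000 ms this retries placement roughly every 62 jiffies (about 62 ms at HZ=1000) instead of once per second, while tasks whose scan period has grown to 16 seconds or more keep the old 1-second cap.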