| author | Gregory Haskins <ghaskins@novell.com> | 2008-12-29 09:39:50 -0500 |
|---|---|---|
| committer | Gregory Haskins <ghaskins@novell.com> | 2008-12-29 09:39:50 -0500 |
| commit | 7e96fa5875d4a9be18d74d3ca7b90518d05bc426 (patch) | |
| tree | 3556aaa97bcd2dd71bd673d48b5ce4197d588fee /kernel/sched.c | |
| parent | 777c2f389e463428fd7e2871051a84d7fe84b172 (diff) | |
sched: pull only one task during NEWIDLE balancing to limit critical section
git-id c4acb2c0669c5c5c9b28e9d02a34b5c67edf7092 attempted to limit the
newidle critical section length by stopping after at least one task
was moved. Further investigation has shown that other paths nested
deeper inside the algorithm still allow long latencies to occur with
newidle balancing. This patch applies the same technique inside
balance_tasks() to limit the duration of this optional balancing
operation.
Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Nick Piggin <npiggin@suse.de>
Diffstat (limited to 'kernel/sched.c')
 kernel/sched.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched.c b/kernel/sched.c
index 7729f9a..94d9a6c 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2984,6 +2984,16 @@ next:
 	pulled++;
 	rem_load_move -= p->se.load.weight;
 
+#ifdef CONFIG_PREEMPT
+	/*
+	 * NEWIDLE balancing is a source of latency, so preemptible kernels
+	 * will stop after the first task is pulled to minimize the critical
+	 * section.
+	 */
+	if (idle == CPU_NEWLY_IDLE)
+		goto out;
+#endif
+
 	/*
 	 * We only want to steal up to the prescribed amount of weighted load.
 	 */
@@ -3030,9 +3040,15 @@ static int move_tasks(struct rq *this_rq, int this_cpu, struct rq *busiest,
 				sd, idle, all_pinned, &this_best_prio);
 		class = class->next;
 
+#ifdef CONFIG_PREEMPT
+		/*
+		 * NEWIDLE balancing is a source of latency, so preemptible
+		 * kernels will stop after the first task is pulled to minimize
+		 * the critical section.
+		 */
 		if (idle == CPU_NEWLY_IDLE && this_rq->nr_running)
 			break;
-
+#endif
 	} while (class && max_load_move > total_load_moved);
 
 	return total_load_moved > 0;
```
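For readers outside the scheduler code, the following is a minimal standalone sketch of the early-exit pattern this patch applies. The names here (`pull_tasks()`, `struct task`) are hypothetical simplifications, not the kernel's API; the real logic lives in balance_tasks() and move_tasks() in kernel/sched.c, as shown in the diff above.

```c
/*
 * Minimal sketch of the NEWIDLE early-exit pattern. All names are
 * hypothetical stand-ins for the scheduler's internals.
 */
enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE, CPU_NEWLY_IDLE };

struct task {
	long weight;		/* weighted load this task contributes */
	struct task *next;	/* next candidate on the source queue */
};

/* Pull tasks until rem_load is satisfied; returns the number pulled. */
static int pull_tasks(struct task *src, long rem_load,
		      enum cpu_idle_type idle)
{
	int pulled = 0;
	struct task *p;

	for (p = src; p && rem_load > 0; p = p->next) {
		/* ...move p to the destination queue (elided)... */
		pulled++;
		rem_load -= p->weight;

#ifdef CONFIG_PREEMPT
		/*
		 * NEWIDLE balancing is opportunistic: on a preemptible
		 * kernel, stopping after the first pulled task caps the
		 * length of the critical section.
		 */
		if (idle == CPU_NEWLY_IDLE)
			break;
#endif
	}
	return pulled;
}
```

The CONFIG_PREEMPT guard reflects a deliberate trade-off: preemptible kernels, where scheduling latency matters most, give up some balancing throughput to keep the run-queue locks held briefly, while non-preemptible kernels retain the more aggressive multi-task pull.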