author     Wanpeng Li <wanpeng.li@linux.intel.com>      2015-05-13 14:01:01 +0800
committer  Ingo Molnar <mingo@kernel.org>               2015-06-19 10:06:45 +0200
commit     8b5e770ed7c05a65ffd2d33a83c14572696236dc (patch)
tree       861a74f878283f7c2f2b23bcdf7f365667275b7b /kernel/sched
parent     1cde2930e15473cb4dd7e5a07d83e605a969bd6e (diff)
sched/deadline: Optimize pull_dl_task()
pull_dl_task() uses pick_next_earliest_dl_task() to select a migration
candidate; this is sub-optimal since the next earliest task -- as per
the regular runqueue -- might not be migratable at all. This could
result in iterating the entire runqueue looking for a task.
Instead, iterate the pushable queue -- this queue contains only tasks
that have at least 2 CPUs set in their cpus_allowed mask.
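
To make the cost difference concrete, here is a minimal, self-contained
sketch (toy singly-linked lists and hypothetical names such as toy_task,
pick_old() and pick_new(); the actual kernel uses rb-trees of struct
task_struct). It only illustrates why scanning a pre-filtered pushable list
beats filtering the full queue on every pull:

/*
 * Toy model, not kernel code: "old" walks every queued task and filters
 * out the non-migratable ones; "new" walks a pre-filtered pushable list,
 * so it never touches tasks pinned to a single CPU.
 */
#include <stdio.h>

struct toy_task {
        const char *name;
        int nr_cpus_allowed;            /* 1 == pinned, cannot be pulled */
        struct toy_task *next;          /* next task in deadline order   */
};

/* Old style: scan the full queue, skipping non-migratable tasks. */
static struct toy_task *pick_old(struct toy_task *all_tasks)
{
        struct toy_task *p;

        for (p = all_tasks; p; p = p->next)
                if (p->nr_cpus_allowed > 1)
                        return p;
        return NULL;
}

/* New style: scan only tasks that were enqueued as pushable. */
static struct toy_task *pick_new(struct toy_task *pushable_tasks)
{
        /* every entry is migratable; the head has the earliest deadline */
        return pushable_tasks;
}

int main(void)
{
        /* three pinned tasks queued ahead of the one migratable task */
        struct toy_task d = { "D", 4, NULL };
        struct toy_task c = { "C", 1, &d };
        struct toy_task b = { "B", 1, &c };
        struct toy_task a = { "A", 1, &b };

        struct toy_task *old_pick = pick_old(&a);       /* visits A, B, C, D  */
        struct toy_task *new_pick = pick_new(&d);       /* D is the list head */

        printf("old pick: %s, new pick: %s\n",
               old_pick ? old_pick->name : "none",
               new_pick ? new_pick->name : "none");
        return 0;
}

In the worst case the old scan visits every queued deadline task before
finding (or failing to find) a migratable one; the new scan never visits a
task that is pinned to a single CPU.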
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
[ Improved the changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1431496867-4194-1-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched')
-rw-r--r--   kernel/sched/deadline.c   28
1 file changed, 27 insertions, 1 deletion
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 890ce95..9cbe1c7 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1230,6 +1230,32 @@ next_node:
         return NULL;
 }
 
+/*
+ * Return the earliest pushable rq's task, which is suitable to be executed
+ * on the CPU, NULL otherwise:
+ */
+static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu)
+{
+        struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
+        struct task_struct *p = NULL;
+
+        if (!has_pushable_dl_tasks(rq))
+                return NULL;
+
+next_node:
+        if (next_node) {
+                p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
+
+                if (pick_dl_task(rq, p, cpu))
+                        return p;
+
+                next_node = rb_next(next_node);
+                goto next_node;
+        }
+
+        return NULL;
+}
+
 static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
 
 static int find_later_rq(struct task_struct *task)
@@ -1514,7 +1540,7 @@ static int pull_dl_task(struct rq *this_rq)
                 if (src_rq->dl.dl_nr_running <= 1)
                         goto skip;
 
-                p = pick_next_earliest_dl_task(src_rq, this_cpu);
+                p = pick_earliest_pushable_dl_task(src_rq, this_cpu);
 
                 /*
                  * We found a task to be pulled if:
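
For reference, the helper added above can be read as the following
equivalent loop (an illustrative rewrite, not part of the applied patch; it
assumes the same kernel/sched/deadline.c context and uses only identifiers
that already appear in the diff):

/*
 * Loop form of pick_earliest_pushable_dl_task(): walk the pushable rb-tree
 * from its leftmost (earliest deadline) node and return the first task
 * that can run on 'cpu', or NULL if there is none.
 */
static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu)
{
        struct rb_node *next_node;
        struct task_struct *p;

        if (!has_pushable_dl_tasks(rq))
                return NULL;

        for (next_node = rq->dl.pushable_dl_tasks_leftmost; next_node;
             next_node = rb_next(next_node)) {
                p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
                if (pick_dl_task(rq, p, cpu))
                        return p;
        }

        return NULL;
}

Either form walks the pushable rb-tree starting from its cached leftmost
node, i.e. in increasing deadline order, and stops at the first task that is
allowed to run on the target CPU.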