| Field | Value | Date |
|---|---|---|
| author | Xunlei Pang <xlpang@redhat.com> | 2016-05-09 12:11:31 +0800 |
| committer | Ingo Molnar <mingo@kernel.org> | 2016-05-10 10:02:46 +0200 |
| commit | 13b5ab02ae118fc8dfdc2b8597688ec4a11d5b53 (patch) | |
| tree | a13090d099c6721c710ed2e5f3a5ba8719c21a2e /kernel/sched/rt.c | |
| parent | 536bd00cdbb7b908573e5a93bae67b64cbae60d8 (diff) | |
sched/rt, sched/dl: Don't push if task's scheduling class was changed
We got this warning:
    WARNING: CPU: 1 PID: 2468 at kernel/sched/core.c:1161 set_task_cpu+0x1af/0x1c0
    [...]
    Call Trace:
     dump_stack+0x63/0x87
     __warn+0xd1/0xf0
     warn_slowpath_null+0x1d/0x20
     set_task_cpu+0x1af/0x1c0
     push_dl_task.part.34+0xea/0x180
     push_dl_tasks+0x17/0x30
     __balance_callback+0x45/0x5c
     __sched_setscheduler+0x906/0xb90
     SyS_sched_setattr+0x150/0x190
     do_syscall_64+0x62/0x110
     entry_SYSCALL64_slow_path+0x25/0x25
This corresponds to:
    WARN_ON_ONCE(p->state == TASK_RUNNING &&
                 p->sched_class == &fair_sched_class &&
                 (p->on_rq && !task_on_rq_migrating(p)))
It happens because, in find_lock_later_rq(), a task whose scheduling
class has been changed to the fair class is still pushed away as if it
were a deadline task ...
So, in find_lock_later_rq(), check after double_lock_balance() whether
the scheduling class of the deadline task has changed; if it has, break
and retry.
Apply the same logic to RT tasks in find_lock_lowest_rq().
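The deadline-side hunk lives in find_lock_later_rq() in kernel/sched/deadline.c and is not shown on this page (the diffstat below is limited to kernel/sched/rt.c). As a rough sketch of what the description above implies there — the surrounding context is paraphrased, only the added !dl_task(task) test is the point — the revalidation after double_lock_balance() gains a class check:

```c
/* Sketch of the check in find_lock_later_rq(), kernel/sched/deadline.c (context paraphrased). */
		if (unlikely(task_rq(task) != rq ||
			     !cpumask_test_cpu(later_rq->cpu,
					       tsk_cpus_allowed(task)) ||
			     task_running(rq, task) ||
			     !dl_task(task) ||	/* class changed under us: bail out */
			     !task_on_rq_queued(task))) {
			double_unlock_balance(rq, later_rq);
			later_rq = NULL;
			break;
		}
```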
Signed-off-by: Xunlei Pang <xlpang@redhat.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Juri Lelli <juri.lelli@arm.com>
Link: http://lkml.kernel.org/r/1462767091-1215-1-git-send-email-xlpang@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/rt.c')
-rw-r--r-- | kernel/sched/rt.c | 1 |
1 file changed, 1 insertion, 0 deletions
    diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
    index c41ea7a..ec4f538d 100644
    --- a/kernel/sched/rt.c
    +++ b/kernel/sched/rt.c
    @@ -1729,6 +1729,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
     				     !cpumask_test_cpu(lowest_rq->cpu,
     						       tsk_cpus_allowed(task)) ||
     				     task_running(rq, task) ||
    +				     !rt_task(task) ||
     				     !task_on_rq_queued(task))) {
     
     				double_unlock_balance(rq, lowest_rq);
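For context, the new !rt_task(task) test sits inside the bounded retry loop of find_lock_lowest_rq(). The condensed sketch below (paraphrased, not the verbatim source of this tree) shows why the check is needed only on the double_lock_balance() path — that call can drop rq->lock, so everything about the task must be revalidated — and how failing the check releases the target runqueue and returns NULL, so the caller no longer pushes a task that has left the RT class:

```c
/* Condensed, paraphrased sketch of find_lock_lowest_rq() in kernel/sched/rt.c. */
static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
{
	struct rq *lowest_rq = NULL;
	int tries, cpu;

	for (tries = 0; tries < RT_MAX_TRIES; tries++) {
		cpu = find_lowest_rq(task);
		if (cpu == -1 || cpu == rq->cpu)
			break;				/* no suitable CPU found */

		lowest_rq = cpu_rq(cpu);

		/* Locking both runqueues may force us to drop rq->lock ... */
		if (double_lock_balance(rq, lowest_rq)) {
			/*
			 * ... so revalidate: the task may have migrated, had
			 * its affinity changed, started running, been dequeued,
			 * or (the new check) left the RT class entirely.
			 */
			if (unlikely(task_rq(task) != rq ||
				     !cpumask_test_cpu(lowest_rq->cpu,
						       tsk_cpus_allowed(task)) ||
				     task_running(rq, task) ||
				     !rt_task(task) ||
				     !task_on_rq_queued(task))) {
				double_unlock_balance(rq, lowest_rq);
				lowest_rq = NULL;
				break;		/* caller revalidates and may retry */
			}
		}

		/* If the target runqueue is still lower priority, use it. */
		if (lowest_rq->rt.highest_prio.curr > task->prio)
			break;

		/* Priority changed under us: unlock and try another CPU. */
		double_unlock_balance(rq, lowest_rq);
		lowest_rq = NULL;
	}

	return lowest_rq;
}
```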