| author | Peter Zijlstra <peterz@infradead.org> | 2009-12-22 15:43:19 +0100 |
|---|---|---|
| committer | Ingo Molnar <mingo@elte.hu> | 2009-12-23 10:04:10 +0100 |
| commit | 0c69774e6ce94364cfaa8bdeb18061edc414bc5a (patch) | |
| tree | b83fdb55af2f9b9dddcab4a273739ebe1c810594 /kernel/sched.c | |
| parent | f7b84a6ba7eaeba4e1df8feddca1473a7db369a5 (diff) | |
sched: Revert 738d2be, simplify set_task_cpu()
Effectively reverts 738d2be4301007f054541c5c4bf7fb6a361c9b3a.
As demonstrated by Eric, we really need to call __set_task_cpu()
early in the fork() path to properly initialize the various task
state -- specifically the cgroup state through set_task_rq().
[ we could probably fix this by explicitly calling
__set_task_cpu() from sched_fork(), but let's try that for the
next cycle and simply revert to the old behaviour for now. ]
Reported-by: Eric Paris <eparis@redhat.com>
Tested-by: Eric Paris <eparis@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: efault@gmx.de
LKML-Reference: <1261492999.4937.36.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
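
For context on the "cgroup state through set_task_rq()" remark: __set_task_cpu() is the helper that rebinds a task to the per-CPU, per-cgroup runqueues, so skipping it on the fork path leaves that state uninitialized. The sketch below is a simplified approximation of those helpers as they looked in kernel/sched.c around 2.6.32; exact field names and barriers may differ slightly from the real sources.

```c
/*
 * Simplified sketch (not a verbatim copy) of the ~2.6.32-era helpers the
 * commit message refers to.  __set_task_cpu() always calls set_task_rq(),
 * which is what points a freshly forked task at the cfs_rq/rt_rq of its
 * cgroup on the chosen CPU.
 */
static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
{
#ifdef CONFIG_FAIR_GROUP_SCHED
	p->se.cfs_rq = task_group(p)->cfs_rq[cpu];	/* per-cgroup CFS runqueue */
	p->se.parent = task_group(p)->se[cpu];
#endif
#ifdef CONFIG_RT_GROUP_SCHED
	p->rt.rt_rq  = task_group(p)->rt_rq[cpu];	/* per-cgroup RT runqueue */
	p->rt.parent = task_group(p)->rt_se[cpu];
#endif
}

static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
{
	set_task_rq(p, cpu);
#ifdef CONFIG_SMP
	smp_wmb();			/* order per-task updates before publishing ->cpu */
	task_thread_info(p)->cpu = cpu;
#endif
}
```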
Diffstat (limited to 'kernel/sched.c')

| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | kernel/sched.c | 9 |

1 file changed, 4 insertions(+), 5 deletions(-)
```diff
diff --git a/kernel/sched.c b/kernel/sched.c
index 87f1f47..c535cc4 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2045,11 +2045,10 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 
 	trace_sched_migrate_task(p, new_cpu);
 
-	if (task_cpu(p) == new_cpu)
-		return;
-
-	p->se.nr_migrations++;
-	perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 1, NULL, 0);
+	if (task_cpu(p) != new_cpu) {
+		p->se.nr_migrations++;
+		perf_sw_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 1, NULL, 0);
+	}
 
 	__set_task_cpu(p, new_cpu);
 }
```
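
The bracketed note in the commit message floats an alternative for a later cycle: call __set_task_cpu() explicitly from sched_fork() instead of relying on set_task_cpu() always reaching it. A minimal sketch of what that could look like, assuming the 2.6.32-era sched_fork() signature; the body is heavily abbreviated and the exact placement of the call is an assumption, not the author's patch:

```c
/*
 * Hypothetical sketch of the alternative mentioned in the commit message:
 * initialize the child's CPU binding (and, via set_task_rq(), its cgroup
 * runqueue pointers) directly in the fork path.
 */
void sched_fork(struct task_struct *p, int clone_flags)
{
	int cpu = get_cpu();

	__sched_fork(p);		/* reset per-task scheduler state */

	/*
	 * Bind the child to a CPU without going through set_task_cpu(),
	 * so the initialization no longer depends on whether that helper
	 * short-circuits when the CPU is unchanged.
	 */
	__set_task_cpu(p, cpu);

	/* ... remainder of sched_fork() unchanged ... */

	put_cpu();
}
```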