author     Paul Turner <pjt@google.com>     2010-11-29 16:55:40 -0800
committer  Ingo Molnar <mingo@elte.hu>      2010-11-30 10:07:10 +0100
commit     822bc180a7f7a7bc5fcaaea195f41b487cc8cae8 (patch)
tree       875bc605db4996e5e008f73fde1daf27fd49a53c
parent     b7a2b39d9b7703ccf068f549c8dc3465fc41d015 (diff)
sched: Fix unregister_fair_sched_group()
In the flipping and flopping between calling
unregister_fair_sched_group() on a per-cpu versus per-group basis
we ended up in a bad state: the removal indexed tg->cfs_rq[] with a
stale local 'i' instead of the cpu that was passed in.
Remove from the list for the passed cpu as opposed to some
arbitrary index. (A sketch of the resulting function follows the
diff below.)
( This fixes explosions with autogroup as well as a group
creation/destruction stress test. )
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20101130005740.080828123@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
 kernel/sched.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 35a6373..66ef579 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -8085,7 +8085,6 @@ static inline void unregister_fair_sched_group(struct task_group *tg, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
-	int i;
 
 	/*
 	 * Only empty task groups can be destroyed; so we can speculatively
@@ -8095,7 +8094,7 @@ static inline void unregister_fair_sched_group(struct task_group *tg, int cpu)
 		return;
 
 	raw_spin_lock_irqsave(&rq->lock, flags);
-	list_del_leaf_cfs_rq(tg->cfs_rq[i]);
+	list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
 #else /* !CONFIG_FAIR_GROUP_SCHED */
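For context, below is a minimal sketch of the whole function as it reads
after this patch, stitched together from the two hunks above. The
early-return guard between the hunks is not part of the diff; the
on_list test shown there is an assumption inferred from the surrounding
context lines, not text taken from this commit.

/*
 * Sketch of unregister_fair_sched_group() after this fix, reconstructed
 * from the hunks above. The on_list guard is assumed: only its leading
 * comment and the bare "return;" appear as diff context.
 */
static inline void unregister_fair_sched_group(struct task_group *tg, int cpu)
{
	struct rq *rq = cpu_rq(cpu);	/* runqueue of the cpu being passed in */
	unsigned long flags;

	/*
	 * Only empty task groups can be destroyed; so we can speculatively
	 * check the group's state without holding the lock (assumed guard).
	 */
	if (!tg->cfs_rq[cpu]->on_list)
		return;

	raw_spin_lock_irqsave(&rq->lock, flags);
	/* Unlink the cfs_rq for the passed cpu, not some stale index 'i'. */
	list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
	raw_spin_unlock_irqrestore(&rq->lock, flags);
}

Indexing with the function argument also keeps the unlink under the
correct lock: rq and the cfs_rq being removed now refer to the same cpu,
which is exactly what the one-line change in the second hunk restores.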