author     Peter Zijlstra <peterz@infradead.org>    2014-07-29 17:00:21 +0200
committer  Ingo Molnar <mingo@kernel.org>           2014-08-12 12:48:18 +0200
commit     743cb1ff191f00fee653212bdbcee1e56086d6ce (patch)
tree       0a6213fa99f27404f07e2e38f29ce93ae9bbf588 /kernel/sched
parent     98a96f202203fecad65b44449077c695686ad4db (diff)
download   op-kernel-dev-743cb1ff191f00fee653212bdbcee1e56086d6ce.zip
           op-kernel-dev-743cb1ff191f00fee653212bdbcee1e56086d6ce.tar.gz
sched/fair: Make calculate_imbalance() independent
Rik noticed that calculate_imbalance() relies on update_sd_pick_busiest() to
guarantee that busiest->sum_nr_running > busiest->group_capacity_factor.

Break this implicit assumption (with the intent of not providing it anymore)
by having calculate_imbalance() verify it and not rely on others.

Reported-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20140729152631.GW12054@laptop.lan
Signed-off-by: Ingo Molnar <mingo@kernel.org>
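To illustrate why the local check matters, here is a minimal, self-contained C
sketch. It is not the kernel's code: the struct name, helper names and main()
are made up, only the two field names mirror the hunk below, and the
unsigned-subtraction hazard is an assumption about what the guarded branch
computes. The point it shows is the one the changelog states: instead of
relying on update_sd_pick_busiest() to hand over a group with
sum_nr_running > group_capacity_factor, calculate_imbalance() now verifies
the condition itself.

    /*
     * Simplified, hypothetical illustration -- not kernel code.
     * Only the two field names come from the patch; the struct name,
     * helpers and main() are invented for the example.
     */
    #include <stdio.h>

    struct group_stats {
            unsigned int sum_nr_running;        /* tasks running in the group */
            unsigned int group_capacity_factor; /* tasks the group can hold   */
    };

    /*
     * Old shape: silently assumes the caller only ever passes a group with
     * sum_nr_running > group_capacity_factor (the guarantee that
     * update_sd_pick_busiest() used to provide).
     */
    static unsigned int overload_old(const struct group_stats *busiest)
    {
            /* wraps to a huge value if the implicit assumption breaks */
            return busiest->sum_nr_running - busiest->group_capacity_factor;
    }

    /*
     * New shape: verify the condition locally instead of relying on others,
     * which is the intent of the patch.
     */
    static unsigned int overload_new(const struct group_stats *busiest)
    {
            if (busiest->sum_nr_running > busiest->group_capacity_factor)
                    return busiest->sum_nr_running -
                           busiest->group_capacity_factor;
            return 0;
    }

    int main(void)
    {
            /* A group that is NOT over capacity, violating the old assumption. */
            struct group_stats busiest = {
                    .sum_nr_running         = 1,
                    .group_capacity_factor  = 2,
            };

            printf("old: %u\n", overload_old(&busiest)); /* UINT_MAX: wrapped */
            printf("new: %u\n", overload_new(&busiest)); /* 0 */
            return 0;
    }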
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/fair.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bfa3c86..e9477e6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6248,7 +6248,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
                 return fix_small_imbalance(env, sds);
         }
 
-        if (!busiest->group_imb) {
+        if (busiest->sum_nr_running > busiest->group_capacity_factor) {
                 /*
                  * Don't want to pull so many tasks that a group would go idle.
                  * Except of course for the group_imb case, since then we might