author | Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com> | 2008-06-27 13:41:20 +0200 |
---|---|---|
committer | Ingo Molnar <mingo@elte.hu> | 2008-06-27 14:31:33 +0200 |
commit | 53fecd8ae1900fb571086f54f664051004665b55 (patch) | |
tree | 2dfd5aba9d974f0f114e96cbdc2aef82a32078a9 /kernel/sched_fair.c | |
parent | 4d8d595dfa69e1c807bf928f364668a7f30da5dc (diff) | |
sched: kill task_group balancing
The idea was to balance groups until we reached the global goal; however,
Vatsa rightly pointed out that we might never reach that goal this way -
hence take out this logic.
[ the initial rationale for this 'feature' was to promote max concurrency
within a group - it does not, however, affect fairness ]
Reported-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_fair.c')
-rw-r--r-- | kernel/sched_fair.c | 15 |
1 file changed, 2 insertions(+), 13 deletions(-)
```diff
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 40cf24a..b10c0d6 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1422,9 +1422,7 @@ load_balance_fair(struct rq *this_rq, int this_cpu, struct rq *busiest,
 
 	rcu_read_lock();
 	list_for_each_entry(tg, &task_groups, list) {
-		long imbalance;
-		unsigned long this_weight, busiest_weight;
-		long rem_load, max_load, moved_load;
+		long rem_load, moved_load;
 
 		/*
 		 * empty group
@@ -1435,17 +1433,8 @@ load_balance_fair(struct rq *this_rq, int this_cpu, struct rq *busiest,
 		rem_load = rem_load_move * aggregate(tg, this_cpu)->rq_weight;
 		rem_load /= aggregate(tg, this_cpu)->load + 1;
 
-		this_weight = tg->cfs_rq[this_cpu]->task_weight;
-		busiest_weight = tg->cfs_rq[busiest_cpu]->task_weight;
-
-		imbalance = (busiest_weight - this_weight) / 2;
-
-		if (imbalance < 0)
-			imbalance = busiest_weight;
-
-		max_load = max(rem_load, imbalance);
 		moved_load = __load_balance_fair(this_rq, this_cpu, busiest,
-				max_load, sd, idle, all_pinned, this_best_prio,
+				rem_load, sd, idle, all_pinned, this_best_prio,
 				tg->cfs_rq[busiest_cpu]);
 
 		if (!moved_load)
```
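To see why the removed heuristic could fight the global goal, here is a minimal standalone sketch (plain userspace C, not kernel code; the helper `old_target()` and all numbers are made up for illustration, only the variable names mirror the diff above): a large in-group weight gap could inflate the per-group target past `rem_load`, i.e. past this group's share of the remaining global imbalance `rem_load_move`, so a single pass could try to move far more load than the global goal asked for.

```c
#include <stdio.h>

/*
 * Toy model of the heuristic removed above (hypothetical helper, not
 * kernel code): the old per-group target was max(rem_load, imbalance),
 * where 'imbalance' is half the raw task-weight gap between the two
 * CPUs within one group.
 */
static long old_target(long rem_load, long this_weight, long busiest_weight)
{
	long imbalance = (busiest_weight - this_weight) / 2;

	if (imbalance < 0)
		imbalance = busiest_weight;

	return rem_load > imbalance ? rem_load : imbalance;
}

int main(void)
{
	long rem_load = 100;		/* this group's share of the global goal */
	long this_weight = 0;		/* group's task weight on this_cpu */
	long busiest_weight = 4096;	/* group's task weight on busiest_cpu */

	/* Old code targets 2048 here, well past the 100 the goal allows. */
	printf("old target: %ld\n",
	       old_target(rem_load, this_weight, busiest_weight));

	/* New code simply caps the move at the group's share. */
	printf("new target: %ld\n", rem_load);
	return 0;
}
```

Under these assumed numbers the old target is max(100, 2048) = 2048 against a global allowance of 100; overshooting like this group after group is, presumably, how the balancer could keep missing the global goal, whereas capping each group at `rem_load` keeps every pass within the remaining imbalance.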