author    Peter Zijlstra <peterz@infradead.org>  2013-08-19 12:41:09 +0200
committer Ingo Molnar <mingo@kernel.org>  2013-09-12 19:14:42 +0200
commit 6263322c5e8ffdaf5eaaa29e9d02d84a786aa970 (patch)
tree   b11fbbf69279eec14bd80b9c3a30dc19475b5832 /kernel/sched/sched.h
parent 3b524d60943a2f9ee1194323ff9d5ee01a4d1ce1 (diff)
sched/fair: Rewrite group_imb trigger
Change the group_imb detection from the old 'load-spike' detector to an actual imbalance detector. We set it from the lower domain balance pass when it fails to create a balance in the presence of task affinities.

The advantage is that this should no longer generate the false positive group_imb conditions generated by transient load spikes from the normal balancing/bulk-wakeup etc. behaviour. While I haven't actually observed those, they could happen.

I'm not entirely happy with this patch; it somehow feels a little fragile. Nor does it solve the biggest issue I have with the group_imb code: it is still a fragile construct in that once we 'fixed' the imbalance we'll not detect the group_imb again and could end up re-creating it.

That said, this patch does seem to preserve behaviour for the described degenerate case. In particular, on my 2*6*2 wsm-ep:

  taskset -c 3-11 bash -c 'for ((i=0;i<9;i++)) do while :; do :; done & done'

ends up with 9 spinners, each on their own CPU; whereas if you disable the group_imb code that typically doesn't happen (you'll get one pair sharing a CPU most of the time).

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-36fpbgl39dv4u51b6yz2ypz5@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
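[Editor's note] To make the setter side of this mechanism concrete, here is a minimal sketch of what the end of a failed lower-domain balance pass in load_balance() (kernel/sched/fair.c) could look like: the parent domain's shared group state is flagged when pinned tasks blocked the balance, and cleared once balancing succeeds again. The identifiers used (sd_parent, env.flags, env.imbalance, LBF_SOME_PINNED) are assumptions modelled on the surrounding scheduler code, not a verbatim quote of the patch body:

	/*
	 * Sketch: inside load_balance(), after the pull attempt.
	 * If some tasks could not be moved because of cpu affinity
	 * (LBF_SOME_PINNED) and an imbalance remains, record that in
	 * the parent domain's shared group state so the next higher
	 * balance pass sees a real, affinity-induced imbalance;
	 * clear the flag again once this level manages to balance.
	 */
	if (sd_parent) {
		int *group_imbalance = &sd_parent->groups->sgp->imbalance;

		if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0)
			*group_imbalance = 1;
		else if (*group_imbalance)
			*group_imbalance = 0;
	}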
Diffstat (limited to 'kernel/sched/sched.h')
-rw-r--r--  kernel/sched/sched.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b3c5653..0d7544c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -605,6 +605,7 @@ struct sched_group_power {
*/
unsigned int power, power_orig;
unsigned long next_update;
+ int imbalance; /* XXX unrelated to power but shared group state */
/*
* Number of busy cpus in this group.
*/