author    Paul Turner <pjt@google.com>    2012-10-04 13:18:30 +0200
committer Ingo Molnar <mingo@kernel.org>  2012-10-24 10:27:23 +0200
commit    aff3e498844441fa71c5ee1bbc470e1dff9548d9 (patch)
tree      78085232ff0200ad8247d1948bbe6131b6f504ab /kernel/sched/sched.h
parent    0a74bef8bed18dc6889e9bc37ea1050a50c86c89 (diff)
download  op-kernel-dev-aff3e498844441fa71c5ee1bbc470e1dff9548d9.zip
          op-kernel-dev-aff3e498844441fa71c5ee1bbc470e1dff9548d9.tar.gz
sched: Account for blocked load waking back up
When a running entity blocks we migrate its tracked load to
cfs_rq->blocked_load_avg. In the sleep case this occurs while holding
rq->lock and so is a natural transition. Wake-ups, however, are potentially
asynchronous in the presence of migration and so special care must be taken.

We use an atomic counter to track such migrated load, taking care to match
this with the previously introduced decay counters so that we don't migrate
too much load.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.726077467@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
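As context for the hunk below, here is a minimal sketch of the producer side of this scheme. It is illustrative only, not the literal patch: the helper name account_entity_removal() is invented, and se->avg.load_avg_contrib follows the naming of the per-entity load-tracking series in kernel/sched/fair.c and should be treated as approximate. The point it shows is that the migration path cannot rely on holding the old runqueue's lock, so it publishes the departing entity's contribution through the new atomic removed_load counter rather than touching blocked_load_avg directly.

	/*
	 * Illustrative fragment (hypothetical helper): on the migration
	 * path the old rq->lock may not be held, so the departing
	 * entity's blocked contribution is handed over via the atomic
	 * removed_load counter instead of being subtracted from
	 * blocked_load_avg here.
	 */
	static void account_entity_removal(struct cfs_rq *cfs_rq,
					   struct sched_entity *se)
	{
		/*
		 * load_avg_contrib is the load this entity currently
		 * charges to cfs_rq; the cfs_rq owner will fold it out of
		 * blocked_load_avg the next time it updates under rq->lock.
		 */
		atomic64_add(se->avg.load_avg_contrib, &cfs_rq->removed_load);
	}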
Diffstat (limited to 'kernel/sched/sched.h')
-rw-r--r--  kernel/sched/sched.h  2
1 file changed, 1 insertion, 1 deletion
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 664ff39..30236ab 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -230,7 +230,7 @@ struct cfs_rq {
 	 * the FAIR_GROUP_SCHED case).
 	 */
 	u64 runnable_load_avg, blocked_load_avg;
-	atomic64_t decay_counter;
+	atomic64_t decay_counter, removed_load;
 	u64 last_decay;
 #endif
 #ifdef CONFIG_FAIR_GROUP_SCHED
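And a hedged sketch of the consumer side, again illustrative rather than the actual code (the real work is done by the blocked-load update path in kernel/sched/fair.c, and the decay handling is omitted here): while holding rq->lock, the cfs_rq owner drains removed_load with an atomic exchange and subtracts it from blocked_load_avg, clamping so the average never underflows.

	/*
	 * Illustrative fragment (hypothetical helper): fold asynchronously
	 * removed load back out of blocked_load_avg. This runs under
	 * rq->lock, so blocked_load_avg itself needs no atomics; only the
	 * cross-CPU hand-off through removed_load does.
	 */
	static void drain_removed_load(struct cfs_rq *cfs_rq)
	{
		if (atomic64_read(&cfs_rq->removed_load)) {
			u64 removed = atomic64_xchg(&cfs_rq->removed_load, 0);

			if (likely(removed < cfs_rq->blocked_load_avg))
				cfs_rq->blocked_load_avg -= removed;
			else
				cfs_rq->blocked_load_avg = 0;
		}
	}

Per the commit message, the matching decay counters ensure the removed contribution has been decayed to the same point as blocked_load_avg before the subtraction, so no more load is removed than was actually charged.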