path: root/kernel/sched_fair.c
author    Peter Zijlstra <a.p.zijlstra@chello.nl>  2008-05-03 18:29:28 +0200
committer Ingo Molnar <mingo@elte.hu>  2008-05-05 23:56:18 +0200
commit    3e51f33fcc7f55e6df25d15b55ed10c8b4da84cd (patch)
tree      3752f9ea8e014ec40e95a1b197b0a3d18e1056a8 /kernel/sched_fair.c
parent    a5574cf65b5f03ce9ade3918764fe22e5e2371e3 (diff)
sched: add optional support for CONFIG_HAVE_UNSTABLE_SCHED_CLOCK
this replaces the rq->clock stuff (and possibly cpu_clock()).

- architectures that have an 'imperfect' hardware clock can set
  CONFIG_HAVE_UNSTABLE_SCHED_CLOCK

- the 'jiffie' window might be superfluous when we update tick_gtod
  before the __update_sched_clock() call in sched_clock_tick()

- cpu_clock() might be implemented as:
    sched_clock_cpu(smp_processor_id())
  if the accuracy proves good enough - how far can TSC drift in a
  single jiffie when considering the filtering and idle hooks?

[ mingo@elte.hu: various fixes and cleanups ]

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
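As a rough illustration of the cpu_clock() idea floated in the message above
(a sketch only, not code from this patch; wrapping smp_processor_id() in a
preempt_disable()/preempt_enable() pair is an assumption about how a caller
would keep the CPU id stable):

	/* Sketch: cpu_clock() forwarding to the new per-CPU scheduler clock.
	 * Assumes sched_clock_cpu() returns nanoseconds for the given CPU. */
	unsigned long long cpu_clock(int cpu)
	{
		unsigned long long now;

		preempt_disable();
		now = sched_clock_cpu(smp_processor_id());
		preempt_enable();

		return now;
	}

Whether this is acceptable depends on the open question in the message: how far
the TSC can drift within a single jiffie once the filtering and idle hooks are
taken into account.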
Diffstat (limited to 'kernel/sched_fair.c')
-rw-r--r--  kernel/sched_fair.c  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index d99e01f..c863663 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -959,7 +959,7 @@ static void yield_task_fair(struct rq *rq)
 		return;
 
 	if (likely(!sysctl_sched_compat_yield) && curr->policy != SCHED_BATCH) {
-		__update_rq_clock(rq);
+		update_rq_clock(rq);
 		/*
 		 * Update run-time statistics of the 'current'.
 		 */
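For context on the one-line change above: with this series the runqueue clock
update in yield_task_fair() goes through update_rq_clock() instead of the old
__update_rq_clock() helper. A minimal sketch of what the new helper boils down
to, assuming it simply samples the new per-CPU clock (an assumption based on
the commit description, not a quote of the corresponding kernel/sched.c hunk):

	/* Sketch (assumed shape, see above): refresh rq->clock from the
	 * per-CPU scheduler clock owning this runqueue. */
	static inline void update_rq_clock(struct rq *rq)
	{
		rq->clock = sched_clock_cpu(cpu_of(rq));
	}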