path: root/kernel/time
author     David Engraf <david.engraf@sysgo.com>   2017-02-17 08:51:03 +0100
committer  John Stultz <john.stultz@linaro.org>    2017-03-23 12:30:27 -0700
commit   1b8955bc5ac575009835e371ae55e7f3af2197a9 (patch)
tree     525fa655709d9a8102621280951dab517e007f20 /kernel/time
parent   e1c09219af364d17bcc432d86ad342bec1653dc5 (diff)
timers, sched_clock: Update timeout for clock wrap
The scheduler clock framework may not use the correct timeout for the clock
wrap. This happens when a new clock driver calls sched_clock_register() after
the kernel has already called sched_clock_postinit(). In this case the clock
wrap timeout is too long, so sched_clock_poll() runs too late and the clock
has already wrapped.

On my ARM system the scheduler no longer scheduled any task other than the
idle task, because sched_clock() had wrapped.

Signed-off-by: David Engraf <david.engraf@sysgo.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
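The poll timeout armed by sched_clock_postinit() is derived from the counter
width and rate of whichever clock is current at that moment, so a clock
registered later with a narrower or faster counter can wrap long before that
timeout fires. The stand-alone C sketch below is illustrative only (wrap_ns()
and the example widths/rates are assumptions for demonstration, not kernel
code); it shows how strongly the wrap interval depends on those two
parameters:

/*
 * Minimal user-space sketch: approximate how long an unsigned free-running
 * counter of a given width runs before it wraps.  A 32-bit counter at the
 * same rate wraps orders of magnitude sooner than a 56-bit one, which is
 * why a late-registered clock needs the poll timer rearmed.
 */
#include <stdint.h>
#include <stdio.h>

/* Approximate nanoseconds until a counter of 'bits' width wraps at rate_hz. */
static uint64_t wrap_ns(unsigned int bits, uint64_t rate_hz)
{
	uint64_t max_ticks = (bits >= 64) ? UINT64_MAX : ((1ULL << bits) - 1);

	return max_ticks / rate_hz * 1000000000ULL;
}

int main(void)
{
	/* Example values only: a wide counter vs. a 32-bit one at 24 MHz. */
	printf("56-bit @ 24 MHz wraps after ~%llu s\n",
	       (unsigned long long)(wrap_ns(56, 24000000) / 1000000000ULL));
	printf("32-bit @ 24 MHz wraps after ~%llu s\n",
	       (unsigned long long)(wrap_ns(32, 24000000) / 1000000000ULL));
	return 0;
}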
Diffstat (limited to 'kernel/time')
-rw-r--r--  kernel/time/sched_clock.c  |  5 +++++
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index ea6b610..2d8f05a 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -206,6 +206,11 @@ sched_clock_register(u64 (*read)(void), int bits, unsigned long rate)
 
 	update_clock_read_data(&rd);
 
+	if (sched_clock_timer.function != NULL) {
+		/* update timeout for clock wrap */
+		hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
+	}
+
 	r = rate;
 	if (r >= 4000000) {
 		r /= 1000000;
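Design note: sched_clock_timer.function is only set once sched_clock_postinit()
has initialized and armed the hrtimer, so the NULL check effectively separates
early registrations (the timer will later be started with the correct
cd.wrap_kt) from late ones, where restarting the already-armed timer moves its
expiry to the new, possibly much shorter, wrap interval.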