author     Ingo Molnar <mingo@elte.hu>  2009-06-03 22:19:36 +0200
committer  Ingo Molnar <mingo@elte.hu>  2009-06-03 23:39:51 +0200
commit     128f048f0f0d2a477ad2555e7acd2ad15a1b6061 (patch)
tree       5927a26da343af2189e47b497eec38680f19294f /kernel
parent     233f0b95ca3a0d1dcbd70bc7e519069a8e10d23e (diff)
perf_counter: Fix throttling lock-up
The throttling logic is broken and we can lock up with too small
hw sampling intervals.

Make the throttling code more robust: keep disabling counters even
if we already disabled them.

( Also clean up whitespace damage I noticed while reading
  various pieces of code related to throttling. )
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/perf_counter.c  19
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
index ab44554..0bb03f1 100644
--- a/kernel/perf_counter.c
+++ b/kernel/perf_counter.c
@@ -2822,11 +2822,20 @@ int perf_counter_overflow(struct perf_counter *counter,
 
 	if (!throttle) {
 		counter->hw.interrupts++;
-	} else if (counter->hw.interrupts != MAX_INTERRUPTS) {
-		counter->hw.interrupts++;
-		if (HZ*counter->hw.interrupts > (u64)sysctl_perf_counter_limit) {
-			counter->hw.interrupts = MAX_INTERRUPTS;
-			perf_log_throttle(counter, 0);
+	} else {
+		if (counter->hw.interrupts != MAX_INTERRUPTS) {
+			counter->hw.interrupts++;
+			if (HZ*counter->hw.interrupts > (u64)sysctl_perf_counter_limit) {
+				counter->hw.interrupts = MAX_INTERRUPTS;
+				perf_log_throttle(counter, 0);
+				ret = 1;
+			}
+		} else {
+			/*
+			 * Keep re-disabling counters even though on the previous
+			 * pass we disabled it - just in case we raced with a
+			 * sched-in and the counter got enabled again:
+			 */
 			ret = 1;
 		}
 	}
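To illustrate the decision this patch changes, here is a minimal userspace
sketch of the fixed throttling logic. It is not the kernel code itself:
HZ, MAX_INTERRUPTS, the sysctl limit and struct hw_state are simplified
stand-ins invented for this example.

/* throttle_sketch.c - standalone model of the fixed overflow throttling */
#include <stdio.h>
#include <stdint.h>

#define HZ             1000            /* stand-in for the kernel's HZ */
#define MAX_INTERRUPTS (~0ULL)         /* stand-in "throttled" marker */

static const uint64_t sysctl_perf_counter_limit = 100000; /* stand-in */

struct hw_state {                      /* hypothetical, simplified hw state */
	uint64_t interrupts;
};

/* Returns 1 when the caller must (re-)disable the counter. */
static int overflow_throttle(struct hw_state *hw)
{
	if (hw->interrupts != MAX_INTERRUPTS) {
		hw->interrupts++;
		if (HZ * hw->interrupts > sysctl_perf_counter_limit) {
			hw->interrupts = MAX_INTERRUPTS;
			return 1;       /* crossed the limit: throttle now */
		}
		return 0;               /* under the limit: keep counting */
	}
	/*
	 * Already throttled: still return 1, so the counter gets disabled
	 * again in case a sched-in re-enabled it meanwhile. Before the fix
	 * this case fell through with ret == 0, and with very small sampling
	 * intervals the still-enabled counter could keep firing and lock up
	 * the machine.
	 */
	return 1;
}

int main(void)
{
	struct hw_state hw = { .interrupts = 0 };

	for (int i = 0; i < 200; i++) {
		if (overflow_throttle(&hw))
			printf("overflow %d: disable counter\n", i);
	}
	return 0;
}

In this model the first ~100 overflows pass through unthrottled, the one
that crosses the limit disables the counter, and every later overflow keeps
returning 1 - mirroring how the patched kernel path keeps re-disabling an
already-throttled counter instead of assuming the first disable stuck.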