|  |  |  |
| --- | --- | --- |
| author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2011-02-15 22:26:07 +0100 |
| committer | Ingo Molnar <mingo@elte.hu> | 2011-02-16 13:25:29 +0100 |
| commit | 4fe757dd48a9e95e1a071291f15dda5421dacb66 (patch) |  |
| tree | 9981eaf986d477d096cdb0388e0f95a80eeb2c38 /kernel |  |
| parent | 7d44ec193d95416d1342cdd86392a1eeb7461186 (diff) |  |
perf: Fix throttle logic
It was possible to call pmu::start() on an already running event. In
particular, this led to some wreckage, as the hrtimer events would
re-initialize active timers.
This was due to throttled events being activated again by scheduling.
Scheduling in a context would add and force-start its events, which
could leave events running while still marked throttled. The next tick
to hit that task would then try to unthrottle the event and call
->start() on an already running event.
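
The ordering problem is easier to see in compressed form. Below is a minimal sketch of the two paths involved; it is plain, compilable C that models the logic, not kernel source: `struct perf_event`, `pmu_start()`, `perf_tick_unthrottle()` and `main()` are simplified stand-ins, while `MAX_INTERRUPTS`, `hw.interrupts` and `perf_log_throttle()` mirror the identifiers in the diff below.

```c
#include <stdbool.h>

#define MAX_INTERRUPTS (~0ULL)

struct hw_perf_event {
	unsigned long long interrupts;   /* MAX_INTERRUPTS marks "throttled" */
};

struct perf_event {
	bool running;                    /* stand-in for the PMU started state */
	struct hw_perf_event hw;
};

/* Stand-ins for the real kernel helpers. */
static void perf_log_throttle(struct perf_event *event, int enable)
{
	(void)event; (void)enable;       /* would log an (un)throttle record */
}

static void pmu_start(struct perf_event *event)
{
	event->running = true;           /* starting a running event = wreckage */
}

/*
 * Sched-in path. With the fix, a throttled event is unthrottled *before*
 * it is started, so it never runs while still marked throttled.
 */
static void event_sched_in(struct perf_event *event)
{
	if (event->hw.interrupts == MAX_INTERRUPTS) {
		perf_log_throttle(event, 1);
		event->hw.interrupts = 0;
	}
	pmu_start(event);
}

/*
 * Tick path. Before the fix this was the only place the throttle was
 * cleared, so it would start an event that sched-in had already
 * force-started: the double ->start() described above.
 */
static void perf_tick_unthrottle(struct perf_event *event)
{
	if (event->hw.interrupts == MAX_INTERRUPTS) {
		event->hw.interrupts = 0;
		perf_log_throttle(event, 1);
		pmu_start(event);
	}
}

int main(void)
{
	/* An event that was throttled, then scheduled back in. */
	struct perf_event ev = { .running = false,
				 .hw = { .interrupts = MAX_INTERRUPTS } };

	event_sched_in(&ev);       /* fix: throttle cleared here, then started */
	perf_tick_unthrottle(&ev); /* now a no-op instead of a second start */
	return 0;
}
```

With the unthrottle folded into the sched-in path, the tick only ever starts events that are genuinely stopped, which is what the diff below implements in event_sched_in().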
Reported-by: Jeff Moyer <jmoyer@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
| mode | path | lines |
| --- | --- | --- |
| -rw-r--r-- | kernel/perf_event.c | 19 |

1 file changed, 15 insertions(+), 4 deletions(-)
```diff
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 999835b..656222f 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -782,6 +782,10 @@ retry:
 	raw_spin_unlock_irq(&ctx->lock);
 }
 
+#define MAX_INTERRUPTS (~0ULL)
+
+static void perf_log_throttle(struct perf_event *event, int enable);
+
 static int
 event_sched_in(struct perf_event *event,
 		 struct perf_cpu_context *cpuctx,
@@ -794,6 +798,17 @@ event_sched_in(struct perf_event *event,
 
 	event->state = PERF_EVENT_STATE_ACTIVE;
 	event->oncpu = smp_processor_id();
+
+	/*
+	 * Unthrottle events, since we scheduled we might have missed several
+	 * ticks already, also for a heavily scheduling task there is little
+	 * guarantee it'll get a tick in a timely manner.
+	 */
+	if (unlikely(event->hw.interrupts == MAX_INTERRUPTS)) {
+		perf_log_throttle(event, 1);
+		event->hw.interrupts = 0;
+	}
+
 	/*
 	 * The new state must be visible before we turn it on in the hardware:
 	 */
@@ -1596,10 +1611,6 @@ void __perf_event_task_sched_in(struct task_struct *task)
 	}
 }
 
-#define MAX_INTERRUPTS (~0ULL)
-
-static void perf_log_throttle(struct perf_event *event, int enable);
-
 static u64 perf_calculate_period(struct perf_event *event, u64 nsec, u64 count)
 {
 	u64 frequency = event->attr.sample_freq;
```