author | Steven Rostedt <srostedt@redhat.com> | 2009-04-20 13:24:21 -0400
committer | Steven Rostedt <rostedt@goodmis.org> | 2009-04-20 13:24:21 -0400
commit | 17487bfeb6cfb05920e6a9d5a54f345f2917b4e7 (patch)
tree | 23064df0a7814788823484098450cb2924ff9a34 /kernel
parent | 23de29de2d8b227943be191d59fb6d983996d55e (diff)
tracing: fix recursive test level calculation
The recursive tests to detect same-level recursion in the ring buffers
did not account for hardirq_count() and softirq_count() being shifted
bit fields within preempt_count() rather than plain counts. Thus the
numbers could be larger than the mask to be tested.
This patch adds the shift to the calculation of the irq depth.
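For illustration (not part of the original commit), here is a minimal
user-space sketch of the failure mode. The shift and mask constants are
assumptions modeled on the preempt_count() bit layout of the time, with
the softirq count at bits 8-15 and the hardirq count above it:

#include <stdio.h>

/* Assumed layout, modeled on the kernel's preempt_count() encoding:
 * softirq count in bits 8-15, hardirq count in bits 16-25. */
#define SOFTIRQ_SHIFT 8
#define HARDIRQ_SHIFT 16
#define SOFTIRQ_MASK  (0xffU  << SOFTIRQ_SHIFT)
#define HARDIRQ_MASK  (0x3ffU << HARDIRQ_SHIFT)

static unsigned int count;	/* stand-in for preempt_count() */

static unsigned int hardirq_count(void) { return count & HARDIRQ_MASK; }
static unsigned int softirq_count(void) { return count & SOFTIRQ_MASK; }

int main(void)
{
	/* simulate one hardirq nested on top of one softirq */
	count = (1U << HARDIRQ_SHIFT) | (1U << SOFTIRQ_SHIFT);

	/* old calculation: masked fields summed without shifting */
	unsigned int broken = hardirq_count() + softirq_count();

	/* fixed calculation: shift each field down to a plain depth */
	unsigned int fixed  = (hardirq_count() >> HARDIRQ_SHIFT) +
			      (softirq_count() >> SOFTIRQ_SHIFT);

	printf("broken: 0x%05x   fixed: %u\n", broken, fixed);
	return 0;
}

The unshifted sum yields 0x10100 here, so comparing it against a small
per-level recursion mask can never classify the level correctly; after
the shift the result is a plain depth (here 2).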
[ Impact: stop false positives in trace recursion detection ]
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/trace/ring_buffer.c | 4
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index e145969..aa40ae9 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1483,7 +1483,9 @@ rb_reserve_next_event(struct ring_buffer_per_cpu *cpu_buffer,
 
 static int trace_irq_level(void)
 {
-	return hardirq_count() + softirq_count() + in_nmi();
+	return (hardirq_count() >> HARDIRQ_SHIFT) +
+		(softirq_count() >> SOFTIRQ_SHIFT) +
+		!!in_nmi();
 }
 
 static int trace_recursive_lock(void)