author		Andrew Morton <akpm@linux-foundation.org>	2009-11-25 23:01:50 -0800
committer	Ingo Molnar <mingo@elte.hu>			2009-11-26 09:34:04 +0100
commit		11e6635763bdc0e24b39a38876574660755acffc (patch)
tree		9020eb9a4a527803e42b5770ca7a2e81b29fe425 /kernel
parent		2c31b7958fd21df9fa04e5c36cda0f063ac70b27 (diff)
kernel/hw_breakpoint.c: Fix local/global shadowing
If the new percpu tree is combined with the perf events tree
the following new warning triggers:
kernel/hw_breakpoint.c: In function 'toggle_bp_task_slot':
kernel/hw_breakpoint.c:151: warning: 'task_bp_pinned' is used uninitialized in this function
This is because it is no longer valid to define a local variable
and a percpu variable with the same name (even if the percpu
variable is file-scope static).
Rename the local variable to resolve this.
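For illustration, a rough sketch of the clash. The percpu declaration
is the one in kernel/hw_breakpoint.c; the macro expansion shown is
approximate. The new percpu tree drops the per_cpu__ name prefix, so
per_cpu(var, cpu) now references 'var' directly, and a same-named
local variable shadows the percpu symbol inside the expansion:

	static DEFINE_PER_CPU(unsigned int, task_bp_pinned[HBP_NUM]);

	static void toggle_bp_task_slot(struct task_struct *tsk, int cpu,
					bool enable)
	{
		unsigned int *task_bp_pinned;	/* shadows the percpu symbol */
		...
		/*
		 * per_cpu() now expands to roughly
		 *
		 *	*SHIFT_PERCPU_PTR(&(task_bp_pinned), per_cpu_offset(cpu))
		 *
		 * and &(task_bp_pinned) binds to the uninitialized local,
		 * not to the percpu array -- hence the warning.
		 */
		task_bp_pinned = per_cpu(task_bp_pinned, cpu);
	}

With the old naming scheme the macro expanded to the prefixed symbol
per_cpu__task_bp_pinned, which the local variable could not shadow,
so the construct used to be harmless.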
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <200911260701.nAQ71owx016356@imap1.linux-foundation.org>
[ v2: added changelog ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
 kernel/hw_breakpoint.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/hw_breakpoint.c b/kernel/hw_breakpoint.c
index dd3fb4a..32e10181 100644
--- a/kernel/hw_breakpoint.c
+++ b/kernel/hw_breakpoint.c
@@ -121,7 +121,7 @@ static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable)
 	int count = 0;
 	struct perf_event *bp;
 	struct perf_event_context *ctx = tsk->perf_event_ctxp;
-	unsigned int *task_bp_pinned;
+	unsigned int *tsk_pinned;
 	struct list_head *list;
 	unsigned long flags;
 
@@ -146,15 +146,15 @@ static void toggle_bp_task_slot(struct task_struct *tsk, int cpu, bool enable)
 	if (WARN_ONCE(count < 0, "No breakpoint counter found in the counter list"))
 		return;
 
-	task_bp_pinned = per_cpu(task_bp_pinned, cpu);
+	tsk_pinned = per_cpu(task_bp_pinned, cpu);
 	if (enable) {
-		task_bp_pinned[count]++;
+		tsk_pinned[count]++;
 		if (count > 0)
-			task_bp_pinned[count-1]--;
+			tsk_pinned[count-1]--;
 	} else {
-		task_bp_pinned[count]--;
+		tsk_pinned[count]--;
 		if (count > 0)
-			task_bp_pinned[count-1]++;
+			tsk_pinned[count-1]++;
 	}
 }