Commit message (Author, Date; files changed, lines -removed/+added)
* perf callchains: Use zalloc to allocate objects (Arnaldo Carvalho de Melo, 2010-05-10; 1 file, -3/+3)
    Cc: Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
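    For reference, zalloc() is the usual zero-initialising allocation wrapper; a minimal illustrative definition (perf keeps its own helper in util, so this is a sketch of the pattern, not the tool's source):
        #include <stdlib.h>

        static inline void *zalloc(size_t size)
        {
                return calloc(1, size);   /* malloc() semantics, memory zeroed */
        }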
* perf newt: Use newtAddComponent() (Arnaldo Carvalho de Melo, 2010-05-10; 1 file, -3/+3)
    Use newtAddComponent() instead of newtAddComponents(just-one-entry, NULL); the variadic form is unnecessary when, as in this browser, we add just one component at a time.
    Cc: Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* Merge branch 'perf/test' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing into perf/core (Ingo Molnar, 2010-05-10; 7 files, -145/+257)
| * perf lock: Drop "-a" option from cmd_record() default arguments set (Hitoshi Mitake, 2010-05-09; 1 file, -1/+0)
    This patch drops "-a" from the default arguments that perf lock passes to perf record. If a user wants a system-wide record of lock events,
        perf lock record -a <program> <argument> ...
    is enough for that purpose. This can reduce the size of the perf.data file:
        % sudo ./perf lock record whoami
        root
        [ perf record: Woken up 1 times to write data ]
        [ perf record: Captured and wrote 0.439 MB perf.data (~19170 samples) ]
        % sudo ./perf lock record -a whoami    # with -a option
        root
        [ perf record: Woken up 0 times to write data ]
        [ perf record: Captured and wrote 48.962 MB perf.data (~2139197 samples) ]
    Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Paul Mackerras <paulus@samba.org>, Arnaldo Carvalho de Melo <acme@redhat.com>, Jens Axboe <jens.axboe@oracle.com>, Jason Baron <jbaron@redhat.com>, Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    LKML-Reference: <1273306229-5216-1-git-send-email-mitake@dcl.info.waseda.ac.jp>
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
| * perf/live-mode: Handle payload-less events (Tom Zanussi, 2010-05-09; 1 file, -8/+11)
    Some events, such as the PERF_RECORD_FINISHED_ROUND event, consist of only an event header and no data. In this case a 0-length payload will be read, and the 0 return value would be wrongly interpreted as an 'unexpected end of event stream'. This patch allows proper handling of data-less events by skipping 0-length reads.
    Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>, Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Paul Mackerras <paulus@samba.org>, Masami Hiramatsu <mhiramat@redhat.com>
    LKML-Reference: <1273038527.6383.51.camel@tropicana>
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
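    The shape of the fix, as a fragment with assumed variable names (the actual read loop in perf differs in detail):
        size_t payload = event->header.size - sizeof(event->header);

        if (payload > 0) {                        /* only read when there is data */
                if (do_read(fd, buf, payload) <= 0)
                        return -1;                /* a genuine truncation */
        }
        /* payload == 0 (e.g. PERF_RECORD_FINISHED_ROUND): header-only event,
         * fall through and process it normally */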
| * tracing: Factorize lock events in a lock class (Frederic Weisbecker, 2010-05-09; 1 file, -33/+15)
    lock_acquired, lock_contended and lock_release now share the same prototype and format. Let's factorize them into a lock event class.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>, Steven Rostedt <rostedt@goodmis.org>
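    The tracepoint API expresses this factorization with DECLARE_EVENT_CLASS()/DEFINE_EVENT(); a sketch of the shape (the field list here is illustrative, not quoted from the patch):
        DECLARE_EVENT_CLASS(lock,
                TP_PROTO(struct lockdep_map *lock, unsigned long ip),
                TP_ARGS(lock, ip),
                TP_STRUCT__entry(
                        __string(name, lock->name)
                ),
                TP_fast_assign(
                        __assign_str(name, lock->name);
                ),
                TP_printk("%s", __get_str(name))
        );

        /* one line per event instead of three full TRACE_EVENT() bodies */
        DEFINE_EVENT(lock, lock_acquired,
                TP_PROTO(struct lockdep_map *lock, unsigned long ip),
                TP_ARGS(lock, ip));
        DEFINE_EVENT(lock, lock_contended,
                TP_PROTO(struct lockdep_map *lock, unsigned long ip),
                TP_ARGS(lock, ip));
        DEFINE_EVENT(lock, lock_release,
                TP_PROTO(struct lockdep_map *lock, unsigned long ip),
                TP_ARGS(lock, ip));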
| * tracing: Drop the nested field from lock_release event (Frederic Weisbecker, 2010-05-09; 2 files, -3/+3)
    Drop the nested field, as we don't use it: every nested state can already be computed from a state machine during post-processing.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>, Steven Rostedt <rostedt@goodmis.org>
| * tracing: Drop lock_acquired waittime field (Frederic Weisbecker, 2010-05-09; 2 files, -8/+5)
    Drop the waittime field from the lock_acquired event; we can calculate it by subtracting the matching lock_acquire timestamp from the lock_acquired one. The field is redundant and wastes space in the traces.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Arnaldo Carvalho de Melo <acme@redhat.com>, Paul Mackerras <paulus@samba.org>, Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>, Steven Rostedt <rostedt@goodmis.org>
| * perf lock: Always check min AND max wait time (Frederic Weisbecker, 2010-05-09; 1 file, -1/+1)
    When a lock is acquired after being contended, we update the wait time statistics for the given lock. But when the min wait time was updated, we didn't check the max wait time. This is wrong because the first time we update the wait time, we want to update both the min and the max.
    Before:
        Name   acquired  contended  total wait (ns)  max wait (ns)  min wait (ns)
        key           8          1            21656              0          21656
    After:
        Name   acquired  contended  total wait (ns)  max wait (ns)  min wait (ns)
        key           8          1            21656          21656          21656
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Arnaldo Carvalho de Melo <acme@redhat.com>, Paul Mackerras <paulus@samba.org>, Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
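    A self-contained sketch of the corrected accounting, with assumed names (the wait itself is the timestamp difference described in the waittime-field commit above):
        #include <stdint.h>

        struct lock_stat {
                uint64_t wait_total;
                uint64_t wait_min;   /* assumed initialised to UINT64_MAX */
                uint64_t wait_max;   /* assumed initialised to 0 */
        };

        static void account_wait(struct lock_stat *ls,
                                 uint64_t acquired_ts, uint64_t acquire_ts)
        {
                uint64_t wait = acquired_ts - acquire_ts;

                ls->wait_total += wait;
                if (wait > ls->wait_max)   /* checked unconditionally, not only */
                        ls->wait_max = wait;   /* inside the min branch */
                if (wait < ls->wait_min)
                        ls->wait_min = wait;
        }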
| * perf: Fix perf lock bad rate (Frederic Weisbecker, 2010-05-09; 1 file, -2/+2)
    Fix the cast used to compute the bad rate: it was applied to the result instead of the operands. The operands need to be cast to double, otherwise the integer division always yields zero.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Arnaldo Carvalho de Melo <acme@redhat.com>, Paul Mackerras <paulus@samba.org>, Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
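    The difference, as a runnable toy example (the values are invented):
        #include <stdio.h>

        int main(void)
        {
                unsigned long bad = 3, total = 200;

                double wrong = (double)(bad / total);        /* 3/200 truncates to 0 first */
                double right = (double)bad / (double)total;  /* promotes, then divides */

                printf("%f %f\n", wrong, right);             /* prints: 0.000000 0.015000 */
                return 0;
        }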
| * perf: Humanize lock flags in perf lock (Frederic Weisbecker, 2010-05-09; 1 file, -3/+8)
    Use an enum instead of plain constants for lock flags.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Arnaldo Carvalho de Melo <acme@redhat.com>, Paul Mackerras <paulus@samba.org>, Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
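    The pattern, sketched (the names and values here are assumptions, not quoted from the patch):
        enum acquire_flags {
                TRY_LOCK  = 1,
                READ_LOCK = 2,
        };

        /* ...so decoding reads as prose instead of magic numbers: */
        if (flag & TRY_LOCK)    /* was: if (flag & 1) */
                handle_trylock();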
| * perf: Cleanup perf lock broken states (Frederic Weisbecker, 2010-05-09; 1 file, -20/+29)
    Use an enum to give human-readable names to the bad_hist indexes, and move the bad-histogram output into its own function.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Arnaldo Carvalho de Melo <acme@redhat.com>, Paul Mackerras <paulus@samba.org>, Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
| * perf lock: Add "info" subcommand for dumping misc information (Hitoshi Mitake, 2010-05-09; 1 file, -23/+73)
    This adds the "info" subcommand to perf lock, which can be used to dump metadata such as threads or the addresses of lock instances. "map" was removed because info now does that work. This will be useful not only for debugging but also for ordinary analysis.
    v2: added usage examples
        % sudo ./perf lock info -t
        | Thread ID: comm
        |         0: swapper
        |         1: init
        |        18: migration/5
        |        29: events/2
        |        32: events/5
        |        33: events/6
        ...
        % sudo ./perf lock info -m
        | Address of instance: name of class
        | 0xffff8800b95adae0: &(&sighand->siglock)->rlock
        | 0xffff8800bbb41ae0: &(&sighand->siglock)->rlock
        | 0xffff8800bf165ae0: &(&sighand->siglock)->rlock
        | 0xffff8800b9576a98: &p->cred_guard_mutex
        | 0xffff8800bb890a08: &(&p->alloc_lock)->rlock
        | 0xffff8800b9522a08: &(&p->alloc_lock)->rlock
        | 0xffff8800bb8aaa08: &(&p->alloc_lock)->rlock
        | 0xffff8800bba72a08: &(&p->alloc_lock)->rlock
        | 0xffff8800bf18ea08: &(&p->alloc_lock)->rlock
        | 0xffff8800b8a0d8a0: &(&ip->i_lock)->mr_lock
        | 0xffff88009bf818a0: &(&ip->i_lock)->mr_lock
        | 0xffff88004c66b8a0: &(&ip->i_lock)->mr_lock
        | 0xffff8800bb6478a0: &(shost->host_lock)->rlock
    v3: fixed some problems Frederic pointed out:
        * better rbtree tracking in dump_threads()
        * removed printf() and used pr_info() and pr_debug()
    Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Paul Mackerras <paulus@samba.org>, Arnaldo Carvalho de Melo <acme@redhat.com>, Jens Axboe <jens.axboe@oracle.com>, Jason Baron <jbaron@redhat.com>, Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
    LKML-Reference: <1272863520-16179-1-git-send-email-mitake@dcl.info.waseda.ac.jp>
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
| * perf: Provide a new deterministic events reordering algorithm (Frederic Weisbecker, 2010-05-09; 2 files, -45/+97)
    The current events reordering algorithm is based on a heuristic that breaks once we deal with a very fast flow of events. The time-period-based flushing is not suitable anymore in the following case, assuming we have a flush period of two seconds:
            CPU 0           |   CPU 1
        cnt1 timestamps     |   cnt1 timestamps
                 0          |        0
                 1          |        1
                 2          |        2
                 3          |        3
              [...]         |     [...]
                        4 seconds later
    If we spend too much time reading the buffers (many events to record in each buffer, or many CPU buffers to read), then on the next pass the CPU 0 buffer can contain a slice of several seconds of events. We read them all and notice we've reached the flush period, so we flush the first half of the CPU 0 buffer, then read the CPU 1 buffer, where we find events that were on the flush slice, and the reordering fails. It's simple to reproduce with:
        perf lock record perf bench sched messaging
    To solve this, the new algorithm doesn't rely on a heuristic time slice anymore but on a deterministic basis that follows how perf record does its job. perf record saves the buffers in passes; a pass is a tour over every buffer from every CPU, made in order (for each CPU we read the buffers of every counter), so the more buffers we visit, the later the timestamps of their events will be. When perf record finishes a pass, it records a PERF_RECORD_FINISHED_ROUND pseudo event. We record the max timestamp t found in pass n. Assuming these timestamps are monotonic across CPUs, we know that if a buffer still has events with timestamps below t, they will all be available, and thus read, in pass n + 1. Hence when we start to read pass n + 2, we can safely flush every event with a timestamp below t:
        ============ PASS n =================
            CPU 0         |   CPU 1
        cnt1 timestamps   |   cnt2 timestamps
              1           |        2
              2           |        3
              -           |        4  <--- max recorded
        ============ PASS n + 1 ==============
            CPU 0         |   CPU 1
        cnt1 timestamps   |   cnt2 timestamps
              3           |        5
              4           |        6
              5           |        7  <--- max recorded
        Flush every event below timestamp 4
        ============ PASS n + 2 ==============
            CPU 0         |   CPU 1
        cnt1 timestamps   |   cnt2 timestamps
              6           |        8
              7           |        9
              -           |       10
        Flush every event below timestamp 7
        etc...
    It also works on perf.data versions that don't have PERF_RECORD_FINISHED_ROUND pseudo events; the difference is that the events are then flushed only at the end of the perf.data processing, which consumes more memory and scales less well to large perf.data files.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Arnaldo Carvalho de Melo <acme@redhat.com>, Paul Mackerras <paulus@samba.org>, Tom Zanussi <tzanussi@gmail.com>, Masami Hiramatsu <mhiramat@redhat.com>
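    The flush rule reduces to a small amount of state; a sketch with assumed names (the real code keeps this inside the session's ordered-samples queue):
        struct reorder_state {
                u64 next_flush;      /* max timestamp recorded in pass n-1 */
                u64 max_timestamp;   /* max timestamp seen in the current pass */
        };

        /* called each time a PERF_RECORD_FINISHED_ROUND event is met */
        static void finish_round(struct reorder_state *st)
        {
                flush_events_below(st->next_flush);  /* safe: two passes old */
                st->next_flush = st->max_timestamp;  /* becomes safe next round */
        }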
| * perf: Introduce a new "round of buffers read" pseudo event (Frederic Weisbecker, 2010-05-09; 2 files, -11/+26)
    In order to provide a more robust and deterministic reordering algorithm, we need to know when we reach a point where we have just done a pass over every counter buffer and read everything it had. This patch introduces a new PERF_RECORD_FINISHED_ROUND pseudo event that consists only of an event header and carries no payload.
    Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@elte.hu>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Arnaldo Carvalho de Melo <acme@redhat.com>, Paul Mackerras <paulus@samba.org>, Tom Zanussi <tzanussi@gmail.com>, Masami Hiramatsu <mhiramat@redhat.com>
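    Emitting such an event amounts to writing a bare header; a sketch (the surrounding record path is elided, the header layout is the real struct perf_event_header):
        struct perf_event_header round = {
                .type = PERF_RECORD_FINISHED_ROUND,
                .misc = 0,
                .size = sizeof(round),   /* header only, no payload */
        };

        write(output_fd, &round, sizeof(round));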
* | perf report: Allow limiting the number of entries to print in callchains (Arnaldo Carvalho de Melo, 2010-05-09; 3 files, -1/+15)
    Works by adding a third parameter to the '-g' argument, after the graph type and minimum percentage. For example:
        [root@doppio linux-2.6-tip]# perf report -g fractal,0.5,2
    will show only the first two symbols where at least 0.5% of the samples took place. All the other symbols, the ones that fall outside these constraints, are put together in the last entry, prefixed with "[...]" and their total percentage.
    Suggested-by: Arjan van de Ven <arjan@linux.intel.com>
    Acked-by: Arjan van de Ven <arjan@linux.intel.com>
    Cc: Arjan van de Ven <arjan@linux.intel.com>, Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | perf session: Embed the host machine data on perf_session (Arnaldo Carvalho de Melo, 2010-05-09; 6 files, -31/+29)
    We have just one host in a given session, and that is the most common setup right now, so embed a ->host_machine struct machine instance directly in the perf_session class, and check whether that is the machine being looked up before going to the rb_tree. This also fixes a problem found when processing old perf.data files that have no MMAP events for the kernel and modules and thus never create the kernel maps: do it in event__preprocess_sample if it wasn't done already.
    Reported-by: Ingo Molnar <mingo@elte.hu>
    Cc: Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>, Zhang, Yanmin <yanmin_zhang@linux.intel.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | perf symbols: Check if a struct machine instance was found (Arnaldo Carvalho de Melo, 2010-05-09; 1 file, -2/+8)
    Not finding one can happen when processing old files that had no fake kernel MMAP events. Those files shouldn't result in perf_session__create_kernel_maps not being called; that will be fixed in a follow-up patch. For now, do these checks to avoid segfaulting.
    Cc: Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | perf symbols: Consider unresolved DSOs in the dso__col_width calculation (Arnaldo Carvalho de Melo, 2010-05-09; 1 file, -0/+7)
    Use BITS_PER_LONG / 4 for them, which is the number of characters used as the DSO "name" in such cases: the raw address is printed in hex, each hex digit encodes four bits, so 16 characters on 64-bit.
    Cc: Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | perf hist: Simplify the insertion of new hist_entry instances (Arnaldo Carvalho de Melo, 2010-05-09; 5 files, -44/+32)
    And with that, fix at least one bug: the first hit for an entry (the one that calls malloc to create a new instance in __perf_session__add_hist_entry) wasn't adding the count to the per-cpumode (PERF_RECORD_MISC_USER, etc.) total variable.
    Cc: Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | perf report: Fix leak of resolved callchains array on error path (Arnaldo Carvalho de Melo, 2010-05-09; 1 file, -9/+7)
    Cc: Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | perf callchain: Move validate_callchain to callchain lib (Arnaldo Carvalho de Melo, 2010-05-09; 3 files, -14/+11)
    Cc: Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* perf report: Document '--call-graph' better for usage (Pekka Enberg, 2010-05-08; 1 file, -1/+1)
    This patch improves the 'perf report -h' output for the '--call-graph' command line option by enumerating the different output types.
    Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>, Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1273332783-4268-1-git-send-email-penberg@cs.helsinki.fi>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* kprobes: Move enable/disable_kprobe() out from debugfs code (Masami Hiramatsu, 2010-05-08; 1 file, -66/+66)
    Move the enable/disable_kprobe() API out of the debugfs-related code, because these interfaces are not related to the debugfs interface. This also fixes a compiler warning.
    Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
    Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
    Acked-by: Tony Luck <tony.luck@intel.com>
    Cc: systemtap <systemtap@sources.redhat.com>, DLE <dle-develop@lists.sourceforge.net>
    LKML-Reference: <20100427223312.2322.60512.stgit@localhost6.localdomain6>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, perf: P4 PMU -- check for proper event index in RAW events (Cyrill Gorcunov, 2010-05-08; 1 file, -1/+10)
    RAW events are special, and we should be ready for users passing in insane event index values.
    Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
    Cc: Peter Zijlstra <peterz@infradead.org>, Frederic Weisbecker <fweisbec@gmail.com>, Lin Ming <ming.m.lin@intel.com>
    LKML-Reference: <20100508112717.315897547@openvz.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
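    A sketch of the kind of validation involved; the identifiers follow the P4 PMU code, but the fragment is reconstructed from the description, not quoted from the patch:
        unsigned int v = p4_config_unpack_event(event->attr.config);

        /* user-supplied RAW config: the unpacked index must stay in bounds */
        if (v >= ARRAY_SIZE(p4_event_bind_map))
                return -EINVAL;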
* x86, perf: P4 PMU -- Get rid of redundant check for array index (Cyrill Gorcunov, 2010-05-08; 1 file, -5/+0)
    The caller has already done such a check. And the check was wrong anyway: it had to be '>=' rather than '>'.
    Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
    Cc: Peter Zijlstra <peterz@infradead.org>, Frederic Weisbecker <fweisbec@gmail.com>, Lin Ming <ming.m.lin@intel.com>
    LKML-Reference: <20100508112717.130386882@openvz.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, perf: P4 PMU -- protect sensible procedures from preemption (Cyrill Gorcunov, 2010-05-08; 1 file, -2/+6)
    Steven reported:
    | I'm getting:
    |
    | Pid: 3477, comm: perf Not tainted 2.6.34-rc6 #2727
    | Call Trace:
    |  [<ffffffff811c7565>] debug_smp_processor_id+0xd5/0xf0
    |  [<ffffffff81019874>] p4_hw_config+0x2b/0x15c
    |  [<ffffffff8107acbc>] ? trace_hardirqs_on_caller+0x12b/0x14f
    |  [<ffffffff81019143>] hw_perf_event_init+0x468/0x7be
    |  [<ffffffff810782fd>] ? debug_mutex_init+0x31/0x3c
    |  [<ffffffff810c68b2>] T.850+0x273/0x42e
    |  [<ffffffff810c6cab>] sys_perf_event_open+0x23e/0x3f1
    |  [<ffffffff81009e6a>] ? sysret_check+0x2e/0x69
    |  [<ffffffff81009e32>] system_call_fastpath+0x16/0x1b
    |
    | when running perf record in latest tip/perf/core
    Because p4 counters are shared between HT threads, we synthetically divide the whole set of counters into two non-intersecting subsets. While we're "borrowing" counters from these subsets we must not be preempted (well, strictly speaking, in p4_hw_config we just pre-set a reference to the subset, which saves some cycles in the schedule routine if it happens on the same cpu), so use a get_cpu/put_cpu pair. Also, p4_pmu_schedule_events should use smp_processor_id rather than the raw_ version; this lets us catch preemption issues (if there ever are any).
    Reported-by: Steven Rostedt <rostedt@goodmis.org>
    Tested-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
    Cc: Steven Rostedt <rostedt@goodmis.org>, Peter Zijlstra <peterz@infradead.org>, Frederic Weisbecker <fweisbec@gmail.com>, Lin Ming <ming.m.lin@intel.com>
    LKML-Reference: <20100508112716.963478928@openvz.org>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
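    The pattern, in sketch form (the surrounding PMU logic is elided):
        int cpu = get_cpu();    /* disables preemption, returns the current CPU */

        /* ... pick/borrow the counter subset for this HT sibling ... */

        put_cpu();              /* re-enables preemption */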
* x86, perf: P4 PMU -- configure predefined events (Cyrill Gorcunov, 2010-05-08; 1 file, -15/+14)
    If an event is not RAW, we should not exit p4_hw_config early but call x86_setup_perfctr as well.
    Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
    Cc: Peter Zijlstra <peterz@infradead.org>, Frederic Weisbecker <fweisbec@gmail.com>, Lin Ming <ming.m.lin@intel.com>, Robert Richter <robert.richter@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* perf_event: Make software events work again (Paul Mackerras, 2010-05-08; 1 file, -6/+6)
    Commit 6bde9b6ce0127e2a56228a2071536d422be31336 ("perf: Add group scheduling transactional APIs") added code to allow a group to be scheduled in a single transaction. However, it introduced a bug in handling events whose pmu does not implement transactions: at the end of scheduling in the events in the group, the non-transactional case now falls through to the group_error label and proceeds to unschedule all the events in the group and return failure. Fix this by returning 0 (success) in the non-transactional case.
    Signed-off-by: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <peterz@infradead.org>, Lin Ming <ming.m.lin@intel.com>, Frederic Weisbecker <fweisbec@gmail.com>, eranian@gmail.com
    LKML-Reference: <20100508105800.GB10650@brick.ozlabs.ibm.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Merge branch 'perf' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux-2.6 into perf/core (Ingo Molnar, 2010-05-08; 2 files, -4/+18)
| * perf list: Improve the raw hw event descriptor documentation (Arnaldo Carvalho de Melo, 2010-05-07; 2 files, -4/+18)
    It was x86-specific and incomplete at that. Improve the situation by making it clear where the provided example applies, and by adding the URLs of the Intel and AMD manuals where this is discussed in depth.
    Acked-by: Robert Richter <robert.richter@amd.com>
    Reported-by: Robert Richter <robert.richter@amd.com>
    Cc: Cyrill Gorcunov <gorcunov@gmail.com>, Frédéric Weisbecker <fweisbec@gmail.com>, Mike Galbraith <efault@gmx.de>, Paul Mackerras <paulus@samba.org>, Peter Zijlstra <a.p.zijlstra@chello.nl>, Tom Zanussi <tzanussi@gmail.com>, Robert Richter <robert.richter@amd.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
* | perf, x86: implement group scheduling transactional APIs (Lin Ming, 2010-05-07; 1 file, -113/+67)
    Convert to the transactional PMU API and remove the duplication of group_sched_in().
    Reviewed-by: Stephane Eranian <eranian@google.com>
    Signed-off-by: Lin Ming <ming.m.lin@intel.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: David Miller <davem@davemloft.net>, Paul Mackerras <paulus@samba.org>
    LKML-Reference: <1272002172.5707.61.camel@minggr.sh.intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | perf: Add group scheduling transactional APIs (Lin Ming, 2010-05-07; 2 files, -16/+32)
    Add group scheduling transactional APIs to struct pmu. These APIs will be implemented in arch code, based on Peter's idea as below:
    > the idea behind hw_perf_group_sched_in() is to not perform
    > schedulability tests on each event in the group, but to add the group
    > as a whole and then perform one test.
    >
    > Of course, when that test fails, you'll have to roll-back the whole
    > group again.
    >
    > So start_txn (or a better name) would simply toggle a flag in the pmu
    > implementation that will make pmu::enable() not perform the
    > schedulability test.
    >
    > Then commit_txn() will perform the schedulability test (so note the
    > method has to have a !void return value).
    >
    > This will allow us to use the regular
    > kernel/perf_event.c::group_sched_in() and all the rollback code.
    > Currently each hw_perf_group_sched_in() implementation duplicates all
    > the rollback code (with various bugs).
    ->start_txn:  Start a group-event scheduling transaction; set a flag so pmu::enable() skips the schedulability test, which is deferred to commit time.
    ->commit_txn: Commit the transaction; perform the schedulability test for the group as a whole.
    ->cancel_txn: Abort the transaction; clear the flag so pmu::enable() performs the schedulability test again.
    Reviewed-by: Stephane Eranian <eranian@google.com>
    Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
    Signed-off-by: Lin Ming <ming.m.lin@intel.com>
    Cc: David Miller <davem@davemloft.net>, Paul Mackerras <paulus@samba.org>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1272002160.5707.60.camel@minggr.sh.intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
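    How the core scheduling side would use the three hooks, sketched under the convention (implied by the commit text) that commit_txn returns nonzero on failure; the loop and rollback details are simplified, not quoted:
        pmu->start_txn(pmu);             /* pmu::enable() now skips the test */

        list_for_each_entry(event, &group->sibling_list, group_entry) {
                if (event->pmu->enable(event))
                        goto rollback;
        }

        if (!pmu->commit_txn(pmu))       /* one schedulability test for all */
                return 0;                /* success */
rollback:
        /* ... disable the events enabled so far ... */
        pmu->cancel_txn(pmu);
        return -EAGAIN;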
* | perf, x86: Improve the PEBS ABI (Peter Zijlstra, 2010-05-07; 6 files, -23/+60)
    Rename perf_event_attr::precise to perf_event_attr::precise_ip and widen it to 2 bits. This new field describes the required precision of the PERF_SAMPLE_IP field:
        0 - SAMPLE_IP can have arbitrary skid
        1 - SAMPLE_IP must have constant skid
        2 - SAMPLE_IP requested to have 0 skid
        3 - SAMPLE_IP must have 0 skid
    And modify the Intel PEBS code accordingly. The PEBS implementation now supports up to precise_ip == 2, where we perform the IP fixup. Also rename PERF_RECORD_MISC_EXACT to PERF_RECORD_MISC_EXACT_IP to clarify its meaning: this bit should be set for each PERF_SAMPLE_IP field known to match the actual instruction triggering the event. This new scheme allows for a PEBS mode that uses the buffer for more than a single event.
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Paul Mackerras <paulus@samba.org>, Stephane Eranian <eranian@google.com>
    LKML-Reference: <new-submission>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
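    From the user side, requesting zero-skid samples then looks like this sketch (other attr fields omitted):
        struct perf_event_attr attr = {
                .type        = PERF_TYPE_HARDWARE,
                .config      = PERF_COUNT_HW_CPU_CYCLES,
                .sample_type = PERF_SAMPLE_IP,
                .precise_ip  = 2,   /* "requested to have 0 skid": PEBS + IP fixup */
        };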
* | perf, x86: Consolidate some code repetition (Peter Zijlstra, 2010-05-07; 1 file, -53/+44)
    Remove some duplicated logic.
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <new-submission>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | perf, x86: Remove PEBS SAMPLE_RAW support (Peter Zijlstra, 2010-05-07; 1 file, -14/+0)
    It's broken; we really should get PERF_SAMPLE_REGS sorted.
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <new-submission>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | perf, x86: Use weight instead of cmask in for_each_event_constraint() (Robert Richter, 2010-05-07; 1 file, -1/+1)
    There may exist constraints with a cmask set to zero, and in that case for_each_event_constraint() does not work properly. Use weight instead of cmask for loop-exit detection: weight is always nonzero, since the default contains the HWEIGHT of the counter mask, and in the other cases a value of zero does not fit either. This is in preparation for IBS event constraints, which won't have a cmask.
    Signed-off-by: Robert Richter <robert.richter@amd.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1271190201-25705-7-git-send-email-robert.richter@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
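    The change is essentially one macro; a before/after sketch, reconstructed from the description rather than quoted:
        /* before: stops early on a constraint whose cmask is 0 */
        #define for_each_event_constraint(e, c) \
                for ((e) = (c); (e)->cmask; (e)++)

        /* after: weight is guaranteed nonzero for every real constraint */
        #define for_each_event_constraint(e, c) \
                for ((e) = (c); (e)->weight; (e)++)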
* | perf, x86: Pass enable bit mask to __x86_pmu_enable_event() (Robert Richter, 2010-05-07; 2 files, -6/+8)
    To reuse this function for events with different enable bit masks, the mask is now part of the function's argument list. The function will later be used to control IBS events too.
    Signed-off-by: Robert Richter <robert.richter@amd.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1271190201-25705-6-git-send-email-robert.richter@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | perf, x86: Call x86_setup_perfctr() from .hw_config() (Robert Richter, 2010-05-07; 2 files, -8/+3)
    The perfctr setup calls are now in the corresponding .hw_config() functions. This makes it possible to introduce config functions for other pmu events that are not perfctr-specific. Also, all of a sudden the code looks much nicer.
    Signed-off-by: Robert Richter <robert.richter@amd.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1271190201-25705-4-git-send-email-robert.richter@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | perf, x86: Move x86_setup_perfctr() (Robert Richter, 2010-05-07; 1 file, -61/+59)
    Move x86_setup_perfctr(); no other changes made.
    Signed-off-by: Robert Richter <robert.richter@amd.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1271190201-25705-3-git-send-email-robert.richter@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | perf, x86: Move perfctr init code to x86_setup_perfctr() (Robert Richter, 2010-05-07; 1 file, -6/+14)
    Split __hw_perf_event_init() so that pmu events other than perfctrs can be configured; the perfctr code moves to a separate function, x86_setup_perfctr(). This and the following patches refactor the code, split into multiple patches for easier review.
    Signed-off-by: Robert Richter <robert.richter@amd.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1271190201-25705-2-git-send-email-robert.richter@amd.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | perf: Annotate perf_event_read_group() vs perf_event_release_kernel() (Peter Zijlstra, 2010-05-07; 1 file, -2/+14)
    Stephane reported a lockdep warning while using PERF_FORMAT_GROUP. The issue is that perf_event_read_group() takes faults while holding the ctx->mutex, while perf_event_release_kernel() can be called from munmap(), which makes for an AB-BA deadlock. Except we can never actually establish that deadlock, because we only ever call perf_event_release_kernel() after all file descriptors are dead, so no concurrency is possible.
    Reported-by: Stephane Eranian <eranian@google.com>
    Cc: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <new-submission>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | Merge branch 'perf/urgent' into perf/core (Ingo Molnar, 2010-05-07; 179 files, -724/+1742)
    Merge reason: resolve patch dependency.
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | perf: Fix exit() vs PERF_FORMAT_GROUP (Peter Zijlstra, 2010-05-07; 2 files, -0/+6)
    Both Stephane and Corey reported that PERF_FORMAT_GROUP didn't work as expected if the task the counters were attached to quit before the read() call. The cause is that we unconditionally destroy the grouping when we remove counters from their context. Fix this by only doing so when we free the counter itself.
    Reported-by: Corey Ashford <cjashfor@linux.vnet.ibm.com>
    Reported-by: Stephane Eranian <eranian@google.com>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    LKML-Reference: <1273160566.5605.404.camel@twins>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
| * | Merge branch 'zerolen' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/misc-2.6 (Linus Torvalds, 2010-05-05; 2 files, -0/+0)
    * 'zerolen' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/misc-2.6:
      [MTD] Remove zero-length files mtdbdi.c and internal.ho
| | * | [MTD] Remove zero-length files mtdbdi.c and internal.ho (Jeff Garzik, 2010-05-05; 2 files, -0/+0)
    Both were "removed" in commit a33eb6b91034c95b9b08576f68be170f995b2c7d.
    Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
| * | | Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev (Linus Torvalds, 2010-05-05; 3 files, -37/+20)
    * 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev:
      pata_pcmcia / ide-cs: Fix bad hashes for Transcend and kingston IDs
      libata: Fix several inaccuracies in developer's guide
| | * | | pata_pcmcia / ide-cs: Fix bad hashes for Transcend and kingston IDs (Kristoffer Ericson, 2010-05-05; 2 files, -4/+4)
    This patch fixes the bad hashes for one Kingston and one Transcend card. Thanks to komuro for pointing this out.
    Signed-off-by: Kristoffer Ericson <kristoffer.ericson@gmail.com>
    Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
| | * | | libata: Fix several inaccuracies in developer's guide (Sergei Shtylyov, 2010-05-05; 1 file, -33/+16)
    Commit 6bfff31e77cfa1b13490337e5a4dbaa3407e83ac (libata: kill probe_ent and related helpers) killed ata_device_add() but didn't remove references to it from the libata developer's guide. Commits 9363c3825ea9ad76561eb48a395349dd29211ed6 (libata: rename SFF functions) and 5682ed33aae05d10a25c95633ef9d9c062825888 (libata: rename SFF port ops) renamed the taskfile access methods but didn't update the developer's guide. Commit c9f75b04ed5ed65a058d18a8a8dda50632a96de8 (libata: kill ata_noop_dev_select()) didn't update the developer's guide either. The guide also refers to the long-gone ata_pio_data_xfer_noirq(), ata_pio_data_xfer(), and ata_mmio_data_xfer(); replace those with the modern ata_sff_data_xfer_noirq(), ata_sff_data_xfer(), and ata_sff_data_xfer32(). Also, remove the reference to the non-existent ata_port_stop().
    Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
    Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
| * | | Merge branch 'slab-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6 (Linus Torvalds, 2010-05-05; 1 file, -1/+1)
    * 'slab-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
      slub: Fix bad boundary check in init_kmem_cache_nodes()