path: root/kernel/trace
Commit log for kernel/trace; each entry shows the commit message, author, date, files changed, and lines -/+.
* tracing: switch function prints from %pf to %ps (Steven Rostedt, 2009-09-17; 3 files, -7/+7)
  For direct function pointers (like what mcount provides) PowerPC64 requires the use
  of %ps, otherwise nothing is printed. This patch converts all prints of functions
  retrieved through mcount to use the %ps format instead of %pf.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
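  As a rough illustration only (hypothetical call site, not from the patch): %pf treats
  its argument as a function descriptor and dereferences it, so a raw text address such
  as the one mcount records prints nothing useful on PowerPC64, while %ps resolves the
  address directly.

      /* sketch: ip is a raw text address recorded via mcount */
      printk(KERN_DEBUG "calling %pf\n", (void *)ip); /* dereferenced as a descriptor on ppc64 */
      printk(KERN_DEBUG "calling %ps\n", (void *)ip); /* resolved directly to a symbol name */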
* Merge branch 'linus' into tracing/core (Ingo Molnar, 2009-09-17; 1 file, -3/+3)
  Merge reason: Pick up kernel/softirq.c update for dependent fix.
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu (Linus Torvalds, 2009-09-15; 1 file, -3/+3)
  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (46 commits)
      powerpc64: convert to dynamic percpu allocator
      sparc64: use embedding percpu first chunk allocator
      percpu: kill lpage first chunk allocator
      x86,percpu: use embedding for 64bit NUMA and page for 32bit NUMA
      percpu: update embedding first chunk allocator to handle sparse units
      percpu: use group information to allocate vmap areas sparsely
      vmalloc: implement pcpu_get_vm_areas()
      vmalloc: separate out insert_vmalloc_vm()
      percpu: add chunk->base_addr
      percpu: add pcpu_unit_offsets[]
      percpu: introduce pcpu_alloc_info and pcpu_group_info
      percpu: move pcpu_lpage_build_unit_map() and pcpul_lpage_dump_cfg() upward
      percpu: add @align to pcpu_fc_alloc_fn_t
      percpu: make @dyn_size mandatory for pcpu_setup_first_chunk()
      percpu: drop @static_size from first chunk allocators
      percpu: generalize first chunk allocator selection
      percpu: build first chunk allocators selectively
      percpu: rename 4k first chunk allocator to page
      percpu: improve boot messages
      percpu: fix pcpu_reclaim() locking
      ...
  Fix trivial conflict as by Tejun Heo in kernel/sched.c
* Merge branch 'percpu-for-linus' into percpu-for-next (Tejun Heo, 2009-08-14; 16 files, -73/+99)
  Conflicts:
      arch/sparc/kernel/smp_64.c
      arch/x86/kernel/cpu/perf_counter.c
      arch/x86/kernel/setup_percpu.c
      drivers/cpufreq/cpufreq_ondemand.c
      mm/percpu.c
  Conflicts in core and arch percpu code are mostly from commit
  ed78e1e078dd44249f88b1dd8c76dafb39567161, which substituted many num_possible_cpus()
  with nr_cpu_ids. As the for-next branch has moved all the first chunk allocators into
  mm/percpu.c, the changes are moved from arch code to mm/percpu.c.
  Signed-off-by: Tejun Heo <tj@kernel.org>
* Merge branch 'master' into for-next (Tejun Heo, 2009-07-04; 8 files, -74/+86)
  Pull linus#master to merge PER_CPU_DEF_ATTRIBUTES and alpha build fix changes. As
  alpha in the percpu tree uses the 'weak' attribute instead of inline assembly, there
  is no need for the __used attribute.
  Conflicts:
      arch/alpha/include/asm/percpu.h
      arch/mn10300/kernel/vmlinux.lds.S
      include/linux/percpu-defs.h
* percpu: clean up percpu variable definitions (Tejun Heo, 2009-06-24; 1 file, -3/+3)
  Percpu variable definition is about to be updated such that all percpu symbols
  including the static ones must be unique. Update percpu variable definitions
  accordingly.
  * as,cfq: rename ioc_count uniquely
  * cpufreq: rename cpu_dbs_info uniquely
  * xen: move nesting_count out of xen_evtchn_do_upcall() and rename it
  * mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and rename it
  * ipv4,6: rename cookie_scratch uniquely
  * x86 perf_counter: rename prev_left to pmc_prev_left, irq_entry to pmc_irq_entry
    and nmi_entry to pmc_nmi_entry
  * perf_counter: rename disable_count to perf_disable_count
  * ftrace: rename test_event_disable to ftrace_test_event_disable
  * kmemleak: rename test_pointer to kmemleak_test_pointer
  * mce: rename next_interval to mce_next_interval
  [ Impact: percpu usage cleanups, no duplicate static percpu var names ]
  Signed-off-by: Tejun Heo <tj@kernel.org>
  Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
  Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>, Jens Axboe <jens.axboe@oracle.com>,
      Dave Jones <davej@redhat.com>, Jeremy Fitzhardinge <jeremy@xensource.com>,
      linux-mm <linux-mm@kvack.org>, David S. Miller <davem@davemloft.net>,
      Peter Zijlstra <a.p.zijlstra@chello.nl>, Steven Rostedt <srostedt@redhat.com>,
      Li Zefan <lizf@cn.fujitsu.com>, Catalin Marinas <catalin.marinas@arm.com>,
      Andi Kleen <andi@firstfloor.org>
* tracing: Fix minor bugs for __unregister_ftrace_function_probe (Atsushi Tsuji, 2009-09-16; 1 file, -2/+2)
  Fix the condition of strcmp for "*". Also fix a NULL pointer dereference when glob is NULL.
  Signed-off-by: Atsushi Tsuji <a-tsuji@bk.jp.nec.com>
  LKML-Reference: <4AAF6726.5090905@bk.jp.nec.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
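  A minimal sketch of the intended semantics (the helper name is hypothetical, this is
  not the actual hunk): a NULL glob, an empty glob, or a bare "*" should all mean
  "match every probe" rather than being handed to strcmp()/strlen() unchecked.

      /* sketch only; in kernel context linux/string.h provides strcmp/strlen */
      static bool probe_glob_matches_all(const char *glob)
      {
              return !glob || !strlen(glob) || strcmp(glob, "*") == 0;
      }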
* tracing: optimize global_trace_clock cachelines (Steven Rostedt, 2009-09-15; 1 file, -10/+14)
  The prev_trace_clock_time is only read or written to when the trace_clock_lock is
  taken. For better performance, they should share the same cache line.
  Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* ftrace: document function and function graph implementation (Mike Frysinger, 2009-09-14; 1 file, -3/+13)
  While implementing function tracer and function tracer graph support, I found the
  exact arch implementation details to be a bit lacking (and my x86 foo ain't great).
  So after pounding out support for the Blackfin arch, start documenting the
  requirements/details.
  Signed-off-by: Mike Frysinger <vapier@gentoo.org>
  LKML-Reference: <1252973415-21264-1-git-send-email-vapier@gentoo.org>
  Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: make testing syscall events a separate configuration (Steven Rostedt, 2009-09-14; 2 files, -0/+24)
  Parag noticed that the number of event tests has increased tremendously:
      grep "Testing event" dmesg.31rc9 | wc -l
      100
      grep "Testing event" dmesg.31git | wc -l
      1172
  This is due to the testing of every syscall event when the ftrace self test is
  enabled. This adds a bit more time to kernel boot up and can affect development by
  slowing down the time it takes between reboots.
  This option makes the testing of the syscall events into a separate config, to still
  be able to test most of ftrace internals at boot up but not have to wait for all the
  syscall events to be tested. The syscall event testing only tests the enabling and
  disabling of the trace point, since the syscalls are not executed. What really needs
  to be done is to somehow have a userspace tool test the syscall tracepoints as well.
  Reported-by: Parag Warudkar <parag.lkml@gmail.com>
  LKML-Reference: <f7848160909130815l3e768a30n3b28808bbe5c254b@mail.gmail.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: remove some unused macros (Li Zefan, 2009-09-14; 2 files, -35/+0)
  - remove FTRACE_ENTRY_STRUCT_ONLY()
  - remove TRACE_XXX() macros
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  LKML-Reference: <4AADF6E6.3080606@cn.fujitsu.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* ftrace: add compile-time check on F_printk() (Li Zefan, 2009-09-14; 1 file, -4/+41)
  Make sure F_printk() has the correct format and args, and make sure changes in
  F_STRUCT() won't break F_printk().
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  LKML-Reference: <4AADF6CC.1060809@cn.fujitsu.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
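  One common way to get such a compile-time check, sketched only to illustrate the
  idea (the function below is hypothetical, not the actual macro expansion): expand
  the F_printk() format and arguments inside a never-called function, so gcc's
  printf-format checking runs over them against the real entry fields.

      /* sketch: a dead function whose only purpose is format checking */
      static inline void __attribute__((unused))
      ftrace_check_printk_fn_entry(struct ftrace_entry *field)
      {
              printk(KERN_DEBUG "%lx <-- %lx", field->ip, field->parent_ip);
      }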
* tracing: fix F_printk() typos (Li Zefan, 2009-09-14; 1 file, -9/+9)
  I found some typos in F_printk(), so I wrote a compile-time check for it, and
  triggered some compile errors and warnings. I've fixed them on x86_32, but I have no
  x86_64 in my hand, so there may still be some compile warnings on 64 bits.
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  LKML-Reference: <4AADF60B.5070407@cn.fujitsu.com>
  [ tested on x86_64, and works fine ]
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: add static to generated TRACE_EVENT functions (Steven Rostedt, 2009-09-14; 1 file, -1/+1)
  Some of the generated functions used in the TRACE_EVENT macros are not declared
  static, even though they are not used globally. Discovered by sparse.
  Reported-by: Jaswinder Singh Rajput <jaswinder@kernel.org>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* ring-buffer: typecast cmpxchg to fix PowerPC warning (Steven Rostedt, 2009-09-14; 1 file, -3/+3)
  The cmpxchg used by PowerPC does the following:
      ({                                                                    \
           __typeof__(*(ptr)) _o_ = (o);                                    \
           __typeof__(*(ptr)) _n_ = (n);                                    \
           (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,        \
                                          (unsigned long)_n_, sizeof(*(ptr))); \
      })
  This does a type check of *ptr against both o and n. Unfortunately, the code in
  ring-buffer.c assigns longs to pointers and pointers to longs and causes a warning
  on PowerPC:
      ring_buffer.c: In function 'rb_head_page_set':
      ring_buffer.c:704: warning: initialization makes pointer from integer without a cast
      ring_buffer.c:704: warning: initialization makes pointer from integer without a cast
      ring_buffer.c: In function 'rb_head_page_replace':
      ring_buffer.c:797: warning: initialization makes integer from pointer without a cast
  This patch adds the typecasts inside cmpxchg to annotate that a long is being cast
  to a pointer and a pointer is being cast to a long, and this removes the PowerPC
  warnings.
  Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
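  The shape of the fix, as a hedged sketch only (the function and its field usage are
  approximated, not the exact hunk): the casts go on the cmpxchg() operands so that
  *ptr, o and n all have the same type as far as the macro's type check is concerned.

      /* sketch: a pointer slot updated with flag bits carried in a long */
      static int rb_set_flag(struct list_head *list, unsigned long old_val,
                             unsigned long new_val)
      {
              unsigned long ret;

              /* casts keep cmpxchg()'s __typeof__(*(ptr)) check consistent */
              ret = cmpxchg((unsigned long *)&list->next, old_val, new_val);
              return ret == old_val;
      }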
* tracing: add filter event logic to special, mmiotrace and boot tracers (Steven Rostedt, 2009-09-12; 3 files, -5/+18)
  Now that the plugin tracers use macros to create the structures and automate the
  exporting of their formats to the format files, they also automatically get a filter
  file. This patch adds the code to implement the filter logic in the trace recordings.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: remove trace_event_types.h (Steven Rostedt, 2009-09-12; 1 file, -178/+0)
  The macros in trace_entries.h have made the code in trace_event_types.h obsolete.
  The file is no longer used, so this patch removes it.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: use the new trace_entries.h to create format files (Steven Rostedt, 2009-09-12; 3 files, -118/+136)
  This patch changes the way the format files in debugfs/tracing/events/ftrace/*/format
  are created. It uses the new trace_entries.h file to automate the creation of the
  format files to ensure that they are always in sync with the actual structures. This
  is the same methodology used to create the format files for the TRACE_EVENT macro.
  This also updates the filter creation that was built on the creation of the format
  files.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: show details of structures within the ftrace structures (Steven Rostedt, 2009-09-12; 2 files, -7/+66)
  Some of the internal ftrace structures contain other structures nested within them.
  An output field that merely says "this is a structure" is useless for a format file;
  a binary reader of the ring buffer needs to know more about how the fields are
  broken up. This patch adds new fields to the ftrace structure macros to describe the
  structures inside a structure.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: use macros to create internal ftrace entry ring buffer structures (Steven Rostedt, 2009-09-12; 2 files, -133/+369)
  The entries used by ftrace internal code (plugins) currently have their formats
  manually exported to userspace. That is, the format files in
  debugfs/tracing/events/ftrace/*/format are currently created by hand. This is a
  maintenance nightmare, and can easily become out of sync with what is actually shown.
  This patch uses the methodology of the TRACE_EVENT macros to build the structures so
  that their formats can be automated and this will keep the structures in sync with
  what users can see. This patch only changes the way the structures are created.
  Further patches will build off of this to automate the format files.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing/profile: fix profile_disable vs module_unload (Li Zefan, 2009-09-12; 1 file, -1/+4)
  If the corresponding module is unloaded before ftrace_profile_disable() is called,
  event->profile_disable() won't be called, which can cause an oops:
      # insmod trace-events-sample.ko
      # perf record -f -a -e sample:foo_bar sleep 3 &
      # sleep 1
      # rmmod trace_events_sample
      # insmod trace-events-sample.ko
      OOPS!
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  LKML-Reference: <4A9214E3.2070807@cn.fujitsu.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: do not update tracing_max_latency when tracer is stopped (Carsten Emde, 2009-09-12; 4 files, -4/+14)
  The state of the function pair tracing_stop()/tracing_start() is correctly considered
  when tracer data are updated. However, the global and externally accessible variable
  tracing_max_latency is always updated - even when tracing is stopped. The update
  should only occur if tracing was not stopped.
  Signed-off-by: Carsten Emde <C.Emde@osadl.org>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: remove unused local variables in tracer probe functions (Carsten Emde, 2009-09-12; 2 files, -21/+1)
  When the nsecs_to_usecs() conversion in probe_wakeup_sched_switch() and
  check_critical_timing() was moved to a later stage in order to avoid unnecessary
  computing, the original variables, assignments and comments were overlooked and left
  in place.
  Signed-off-by: Carsten Emde <C.Emde@osadl.org>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: trace parser support for function and graph (jolsa@redhat.com, 2009-09-11; 1 file, -110/+40)
  Convert the writing to 'set_graph_function', 'set_ftrace_filter' and
  'set_ftrace_notrace' to use the generic trace_parser 'trace_get_user' function.
  Removed FTRACE_ITER_CONT flag, since it's not needed after this change. Minor fix in
  set_graph_function display - g_show function.
  Signed-off-by: Jiri Olsa <jolsa@redhat.com>
  LKML-Reference: <1252682969-3366-4-git-send-email-jolsa@redhat.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: trace parser support for set_event (jolsa@redhat.com, 2009-09-11; 1 file, -47/+13)
  Convert the parsing of the file 'set_event' to use the generic trace_parser
  'trace_get_user' function.
  Signed-off-by: Jiri Olsa <jolsa@redhat.com>
  LKML-Reference: <1252682969-3366-3-git-send-email-jolsa@redhat.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: create generic trace parser (jolsa@redhat.com, 2009-09-11; 2 files, -0/+141)
  Create a "trace_parser" that can parse the user space input for separate words.
  struct trace_parser is the descriptor. A generic "trace_get_user" function can be
  used as a helper to read multiple words passed in by user space.
  Signed-off-by: Jiri Olsa <jolsa@redhat.com>
  LKML-Reference: <1252682969-3366-2-git-send-email-jolsa@redhat.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
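  A hedged sketch of how a write handler might use the new helpers (signatures
  approximated from the description above; process_one_word() is a hypothetical
  consumer, not part of the patch):

      static ssize_t sketch_write(struct file *filp, const char __user *ubuf,
                                  size_t cnt, loff_t *ppos)
      {
              struct trace_parser parser;
              ssize_t read, ret = 0;

              if (trace_parser_get_init(&parser, PAGE_SIZE))
                      return -ENOMEM;

              read = trace_get_user(&parser, ubuf, cnt, ppos);
              if (read >= 0 && trace_parser_loaded(&parser)) {
                      parser.buffer[parser.idx] = 0;         /* one whitespace-delimited word */
                      ret = process_one_word(parser.buffer); /* hypothetical consumer */
              }

              trace_parser_put(&parser);
              return ret ? ret : read;
      }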
* tracing: consolidate code between trace_output.c and trace_function_graph.c (Steven Rostedt, 2009-09-11; 3 files, -30/+28)
  Both trace_output.c and trace_function_graph.c do basically the same thing to handle
  the printing of the latency-format. This patch moves the code into one function that
  both can use.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: add lock depth to entries (Steven Rostedt, 2009-09-11; 4 files, -10/+30)
  This patch adds the lock depth of the big kernel lock to the generic entry header.
  This way we can see the depth of the lock and help in removing the BKL.
  Example:
      #                  _------=> CPU#
      #                 / _-----=> irqs-off
      #                | / _----=> need-resched
      #                || / _---=> hardirq/softirq
      #                ||| / _--=> preempt-depth
      #                |||| /_--=> lock-depth
      #                |||||/     delay
      #  cmd     pid   |||||| time  |   caller
      #     \   /      ||||||   \   |   /
      <idle>-0       2.N..3 5902255250us+: lock_acquire: read rcu_read_lock
      <idle>-0       2.N..3 5902255253us+: lock_release: rcu_read_lock
      <idle>-0       2dN..3 5902255257us+: lock_acquire: xtime_lock
      <idle>-0       2dN..4 5902255259us : lock_acquire: clocksource_lock
      <idle>-0       2dN..4 5902255261us+: lock_release: clocksource_lock
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: move tgid out of generic entry and into userstack (Steven Rostedt, 2009-09-11; 4 files, -6/+4)
  The userstack trace required the recording of the tgid entry. Unfortunately, it was
  added to the generic entry where it wasted 4 bytes of every entry and was only used
  by one entry. This patch moves it out of the generic field and moves it into the
  only user (userstack_entry).
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: add latency format to function_graph tracer (Steven Rostedt, 2009-09-11; 1 file, -6/+68)
  While debugging something with the function_graph tracer, I found the need to see
  the preempt count of the traces. Unfortunately, since the function graph tracer has
  its own output formatting, it does not honor the latency-format option. This patch
  makes the function_graph tracer honor the latency-format option, but still keeps
  control of the output. But now we have the same details that the latency-format
  supplies.
      # tracer: function_graph
      #
      #      _-----=> irqs-off
      #     / _----=> need-resched
      #    | / _---=> hardirq/softirq
      #    || / _--=> preempt-depth
      #    ||| /
      #    ||||
      # CPU||||  DURATION                  FUNCTION CALLS
      # |  ||||   |   |                     |   |   |   |
       3)  d..1  1.333 us   |        idle_cpu();
       3)  d.h1             |        tick_check_idle() {
       3)  d.h1  0.550 us   |          tick_check_oneshot_broadcast();
       3)  d.h1             |          tick_nohz_stop_idle() {
       3)  d.h1             |            ktime_get() {
       3)  d.h1             |              ktime_get_ts() {
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: move PRED macros to trace_events_filter.c (Li Zefan, 2009-09-09; 2 files, -41/+41)
  Move DEFINE_COMPARISON_PRED() and DEFINE_EQUALITY_PRED() to
  kernel/trace/trace_events_filter.c
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  LKML-Reference: <4AA8579B.4020706@cn.fujitsu.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: remove stats from struct tracer (Li Zefan, 2009-09-09; 1 file, -1/+0)
  Remove unused field @stats from struct tracer.
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  LKML-Reference: <4AA8579B.4020706@cn.fujitsu.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: format clean ups (Li Zefan, 2009-09-09; 1 file, -11/+11)
  Fix white-space formatting.
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  LKML-Reference: <4AA8579B.4020706@cn.fujitsu.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: remove dead code (Li Zefan, 2009-09-09; 1 file, -22/+0)
  Removes unreachable code.
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  LKML-Reference: <4AA8579B.4020706@cn.fujitsu.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: do not grab lock in wakeup latency function tracing (Steven Rostedt, 2009-09-09; 1 file, -14/+22)
  The wakeup tracer, when enabled, has its own function tracer. It only traces the
  functions on the CPU where the task it is following is on. If a task is woken on one
  CPU but then migrates to another CPU before it wakes up, the latency tracer will
  then start tracing functions on the other CPU.
  To find which CPU the task is on, the wakeup function tracer performs a
  task_cpu(wakeup_task). But to make sure the task does not disappear it grabs the
  wakeup_lock, which is also taken when the task wakes up. By taking this lock, the
  function tracer does not need to worry about the task being freed as it checks its
  cpu.
  Jan Blunck found a problem with this approach on his 32 CPU box. When a task is
  being traced by the wakeup tracer, all functions take this lock. That means that on
  all 32 CPUs, each function call is taking this one lock to see if the task is on
  that CPU. This lock has just serialized all functions on all 32 CPUs. Needless to
  say, this caused major issues on that box. It would even lock up.
  This patch changes the wakeup latency to insert a probe on the migrate task
  tracepoint. When a task changes the CPU that it will run on, the probe will take
  note. Now the wakeup function tracer no longer needs to take the lock. It only
  compares the current CPU with a variable that holds the current CPU the task is on.
  We don't worry about races since it is OK to add or miss a function trace.
  Reported-by: Jan Blunck <jblunck@suse.de>
  Tested-by: Jan Blunck <jblunck@suse.de>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
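  A sketch of the lock-free check described above (variable and function names
  approximated, not the actual hunk): the probe on the migrate-task tracepoint keeps a
  cached CPU number up to date, so the per-function callback only does a plain compare
  instead of taking wakeup_lock.

      static int wakeup_current_cpu;  /* updated from the migrate-task probe */

      static void wakeup_tracer_call(unsigned long ip, unsigned long parent_ip)
      {
              if (likely(!wakeup_task))
                      return;
              if (raw_smp_processor_id() != wakeup_current_cpu)
                      return; /* racing with a migration just adds or drops one entry */
              /* ... record the function entry for the traced task ... */
      }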
* ring-buffer: consolidate interface of rb_buffer_peek() (Robert Richter, 2009-09-09; 1 file, -7/+4)
  rb_buffer_peek() operates with struct ring_buffer_per_cpu *cpu_buffer only. Thus,
  instead of passing variables buffer and cpu it is better to use cpu_buffer directly.
  This also reduces the risk of races since cpu_buffer is not calculated twice.
  Signed-off-by: Robert Richter <robert.richter@amd.com>
  LKML-Reference: <1249045084-3028-1-git-send-email-robert.richter@amd.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* Merge branch 'tracing/core' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing into tracing/core (Ingo Molnar, 2009-09-06; 1 file, -5/+6)
* tracing/filters: Defer pred allocation, fix memory leak (Li Zefan, 2009-09-04; 1 file, -5/+6)
  The predicates of an event and their filter structure are allocated when we create
  an event filter for the first time. These objects must be created once, but each
  time we come with a new filter, we overwrite such pre-existing allocation, if any.
  Thus, this patch checks if the filter has already been allocated before going ahead.
  Spotted-by: Frederic Weisbecker <fweisbec@gmail.com>
  Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
  Cc: Steven Rostedt <rostedt@goodmis.org>, Tom Zanussi <tzanussi@gmail.com>,
      Masami Hiramatsu <mhiramat@redhat.com>
  LKML-Reference: <4A9CB1BA.3060402@cn.fujitsu.com>
  Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
* Merge commit 'v2.6.31-rc9' into tracing/core (Ingo Molnar, 2009-09-06; 3 files, -27/+14)
  Merge reason: move from -rc5 to -rc9.
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* tracing: handle broken names in ftrace filter (Jiri Olsa, 2009-08-18; 1 file, -6/+11)
  If one filter item (for set_ftrace_filter and set_ftrace_notrace) is being set up by
  more than one consecutive write (FTRACE_ITER_CONT flag), it won't be handled
  correctly.
  I used the following program to test/verify:
  [snip]
      #include <stdio.h>
      #include <sys/types.h>
      #include <sys/stat.h>
      #include <fcntl.h>
      #include <string.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
              int fd, i;
              char *file = argv[1];

              if (-1 == (fd = open(file, O_WRONLY))) {
                      perror("open failed");
                      return -1;
              }

              for (i = 0; i < (argc - 2); i++) {
                      int len = strlen(argv[2+i]);
                      int cnt, off = 0;

                      while (len) {
                              cnt = write(fd, argv[2+i] + off, len);
                              len -= cnt;
                              off += cnt;
                      }
              }

              close(fd);
              return 0;
      }
  [snip]
  before change:
      sh-4.0# echo > ./set_ftrace_filter
      sh-4.0# /test ./set_ftrace_filter "sys" "_open "
      sh-4.0# cat ./set_ftrace_filter
      #### all functions enabled ####
      sh-4.0#
  after change:
      sh-4.0# echo > ./set_ftrace_notrace
      sh-4.0# test ./set_ftrace_notrace "sys" "_open "
      sh-4.0# cat ./set_ftrace_notrace
      sys_open
      sh-4.0#
  Signed-off-by: Jiri Olsa <jolsa@redhat.com>
  LKML-Reference: <20090811152904.GA26065@jolsa.lab.eng.brq.redhat.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* ftrace: Unify effect of writing to trace_options and option/* (Zhaolei, 2009-08-18; 1 file, -10/+2)
  "echo noglobal-clock > trace_options" can be used to change the trace clock, but
  "echo 0 > options/global-clock" can't. The flag toggling will be silently accepted
  without actually changing the clock callback. We can fix it by using
  set_tracer_flags() in trace_options_core_write().
  Changelog:
      v1->v2: Simplified switch() after Li Zefan <lizf@cn.fujitsu.com>'s suggestion
  Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
  Cc: Steven Rostedt <srostedt@redhat.com>, Li Zefan <lizf@cn.fujitsu.com>
  Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
* Remove double removal of blktrace directory (Alan D. Brunelle, 2009-08-12; 1 file, -11/+1)
  commit fd51d251e4cdb21f68e9dbc4336514d64a105a79
  Author: Stefan Raspl <raspl@linux.vnet.ibm.com>
  Date:   Tue May 19 09:59:08 2009 +0200
      blktrace: remove debugfs entries on bad path
  added an explicit invocation of debugfs_remove for bt->dir, but in
  blk_remove_buf_file_callback we are also getting the directory removed. On occasion
  I am seeing memory corruption that I have bisected down to this commit. [The testing
  involves a (long) series of I/O benchmarks with blktrace invoked around the actual
  runs.] I believe that this committed patch is correct, but the problem actually lies
  in the code in blk_remove_buf_file_callback.
  With this patch I am able to consistently get complete runs whereas previously I
  could not get a single run to complete.
  The first part of the patch simply moves the debugfs_remove below the relay_close:
  the relay_close call will remove files under bt->dir, and so we should not remove
  the directory until all the files we created have been removed. (Note: This is not
  sufficient to fix the problem - the file system code has ref counts on the
  directory, so our invocation does not cause the directory to actually be removed.
  Nonetheless, we should not rely upon that feature.)
  Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
  Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* ring-buffer: only enable ring_buffer_swap_cpu when needed (Steven Rostedt, 2009-09-04; 2 files, -0/+12)
  Since the ability to swap the cpu buffers adds a small overhead to the recording of
  a trace, we only want to add it when needed. Only the irqsoff and preemptoff tracers
  use this feature, and both are not recommended for production kernels. This patch
  disables its use when neither irqsoff nor preemptoff is configured.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
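  A hedged sketch of the conditional-compilation shape this implies (the config symbol
  and the stub's return value are assumptions, not verified against the patch): the
  real swap implementation is only built when a tracer that needs it is enabled, and a
  stub returning an error is used otherwise.

      #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
      int ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
                               struct ring_buffer *buffer_b, int cpu);
      #else
      static inline int
      ring_buffer_swap_cpu(struct ring_buffer *buffer_a,
                           struct ring_buffer *buffer_b, int cpu)
      {
              return -ENODEV;   /* swapping not built in */
      }
      #endif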
* ring-buffer: check for swapped buffers in start of committing (Steven Rostedt, 2009-09-04; 1 file, -3/+17)
  Because the irqsoff tracer can swap an internal CPU buffer, it is possible that a
  swap happens between the start of the write and before the committing bit is set
  (the committing bit will disable swapping). This patch adds a check for this and
  will fail the write if it detects it.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: report error in trace if we fail to swap latency buffer (Steven Rostedt, 2009-09-04; 1 file, -1/+12)
  The irqsoff tracer will fail to swap the cpu buffer with the max buffer if it
  preempts a commit. Instead of ignoring this, this patch makes the tracer report it
  if the last max latency failed due to preempting a current commit.
  The output of the latency tracer will look like this:
      # tracer: irqsoff
      #
      # irqsoff latency trace v1.1.5 on 2.6.31-rc5
      # --------------------------------------------------------------------
      # latency: 112 us, #1/1, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
      #    -----------------
      #    | task: -4281 (uid:0 nice:0 policy:0 rt_prio:0)
      #    -----------------
      #  => started at: save_args
      #  => ended at:   __do_softirq
      #
      #
      #                  _------=> CPU#
      #                 / _-----=> irqs-off
      #                | / _----=> need-resched
      #                || / _---=> hardirq/softirq
      #                ||| / _--=> preempt-depth
      #                |||| /
      #                |||||     delay
      #  cmd     pid   ||||| time  |   caller
      #     \   /      |||||   \   |   /
          bash-4281    1d.s6  265us : update_max_tr_single: Failed to swap buffers due to commit in progress
  Note the latency time and the functions that disabled the irqs or preemption will
  still be listed.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: add trace_array_printk for internal tracers to use (Steven Rostedt, 2009-09-04; 2 files, -2/+27)
  This patch adds a trace_array_printk to allow a tracer to use the trace_printk on
  its own trace array.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: pass around ring buffer instead of tracer (Steven Rostedt, 2009-09-04; 10 files, -99/+143)
  The latency tracers (irqsoff and wakeup) can swap trace buffers on the fly. If an
  event is happening and has reserved data on one of the buffers, and the latency
  tracer swaps the global buffer with the max buffer, the result is that the event may
  commit the data to the wrong buffer.
  This patch changes the trace recording API so that the caller receives the buffer
  that was used to reserve a commit. That same buffer can then be passed in to the
  commit.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
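  A hedged sketch of the resulting call shape (signatures approximated from the
  description above, not copied from the patch): the reserve step hands back the ring
  buffer it actually used, and that same buffer is what the commit takes, so a swap
  between the two steps cannot strand the data in the wrong buffer.

      static void trace_function_sketch(struct trace_array *tr, unsigned long ip,
                                        unsigned long parent_ip, unsigned long flags,
                                        int pc)
      {
              struct ring_buffer *buffer = tr->buffer; /* snapshot the buffer being written */
              struct ring_buffer_event *event;
              struct ftrace_entry *entry;

              event = trace_buffer_lock_reserve(buffer, TRACE_FN, sizeof(*entry),
                                                flags, pc);
              if (!event)
                      return;
              entry = ring_buffer_event_data(event);
              entry->ip = ip;
              entry->parent_ip = parent_ip;
              ring_buffer_unlock_commit(buffer, event); /* same buffer we reserved from */
      }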
* tracing: make tracing_reset safe for external use (Steven Rostedt, 2009-09-04; 1 file, -2/+15)
  Resetting the trace buffer without first disabling the buffer and waiting for any
  writers to complete can corrupt the ring buffer. This patch makes the external
  version of tracing_reset safe from corruption by disabling the ring buffer and
  calling synchronize_sched. This version can no longer be called from interrupt
  context. But all those callers have been removed.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
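  A sketch of the "safe" sequence described above, using ring buffer calls that exist
  for this purpose (the exact call site is approximated, not the actual hunk): disable
  recording, wait for in-flight writers with synchronize_sched(), then reset.

      void tracing_reset(struct trace_array *tr, int cpu)
      {
              struct ring_buffer *buffer = tr->buffer;

              ring_buffer_record_disable(buffer);
              /* make sure all commits in flight have finished */
              synchronize_sched();
              ring_buffer_reset_cpu(buffer, cpu);
              ring_buffer_record_enable(buffer);
      }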
* tracing: use timestamp to determine start of latency traces (Steven Rostedt, 2009-09-04; 4 files, -24/+67)
  Currently the latency tracers reset the ring buffer. Unfortunately if a commit is in
  process (due to a trace event), this can corrupt the ring buffer. When this happens,
  the ring buffer will detect the corruption and then permanently disable the ring
  buffer. The bug does not crash the system, but it does prevent further tracing after
  the bug is hit.
  Instead of resetting the trace buffers, the timestamp of the start of the trace is
  used instead. The buffers will still contain the previous data, but the output will
  not count any data that is before the timestamp of the trace.
  Note, this only affects the static trace output (trace) and not the runtime trace
  output (trace_pipe). The runtime trace output does not make sense for the latency
  tracers anyway.
  Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* tracing: remove users of tracing_reset (Steven Rostedt, 2009-09-04; 4 files, -14/+5)
  The function tracing_reset is deprecated for use outside of trace.c. The new
  function to reset the buffers is tracing_reset_online_cpus.
  The reason for this is that resetting the buffers while the event trace points are
  active can corrupt the buffers, because they may be writing at the time of reset.
  The tracing_reset_online_cpus disables writes and waits for current writers to
  finish.
  This patch replaces all users of tracing_reset except for the latency tracers. Those
  remaining users require more work and will be removed in the following patches.
  Signed-off-by: Steven Rostedt <rostedt@goodmis.org>