...
* staging: line6: Filter on UX2 interfaces  (Chris Rorvick, 2015-01-12, 1 file, -16/+4)

    The driver only supports interface 0 of the TonePort UX2 and POD Studio UX2
    devices. Use the device table to filter on this.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

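    For reference, a hedged sketch of matching a single interface through the USB
    device table; USB_DEVICE_INTERFACE_NUMBER() is the generic USB-core helper, and
    the product ID below is a placeholder, not necessarily what this driver uses:

        #include <linux/usb.h>

        static const struct usb_device_id sketch_id_table[] = {
            /* 0x0e41 is the Line 6 vendor ID; the product ID here is made up */
            { USB_DEVICE_INTERFACE_NUMBER(0x0e41, 0x4142, 0) },
            { }
        };
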
* staging: line6: Filter on Pocket POD interface  (Chris Rorvick, 2015-01-12, 1 file, -10/+2)

    The driver only supports interface 1 of the Pocket POD. Use the device table
    to filter on this.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Split out POD HD500 interfaces  (Chris Rorvick, 2015-01-12, 3 files, -19/+28)

    The driver uses a different altsetting depending on the interface. Add device
    type entries for each of these.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Split out PODxt Live interfaces  (Chris Rorvick, 2015-01-12, 4 files, -81/+48)

    The PODxt Live device has both a POD and a Variax interface. Add device type
    entries for each of these.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: List out capabilities individually  (Chris Rorvick, 2015-01-12, 2 files, -22/+36)

    The `LINE6_CAP_CTRL_PCM_HW' macro combines three capabilities to save
    horizontal space when defining the properties entries. Now that these are no
    longer limited to single lines this is not such a concern. Specify
    capabilities individually when defining each property for better clarity.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Use explicit indexes when defining properties  (Chris Rorvick, 2015-01-12, 1 file, -22/+90)

    Specify the index of the properties entry explicitly to define this structure
    more robustly. Also, drop the `L6PROP' macro in favor of initializing each
    member explicitly on its own line since horizontal space is limited and more
    attributes will be added later.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

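    An illustrative sketch of the designated-initializer style described above;
    the entry name, fields, and capability flags are placeholders rather than the
    driver's actual definitions:

        static const struct line6_properties sketch_properties_table[] = {
            [LINE6_PODXT] = {
                .id = "PODxt",
                .name = "PODxt",
                .capabilities = LINE6_CAP_CONTROL
                              | LINE6_CAP_PCM,
            },
            /* further entries indexed by their enum value ... */
        };
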
* staging: line6: Rename capability macros  (Chris Rorvick, 2015-01-12, 6 files, -14/+14)

    Including "BIT" in the macro name is pointless. Replace with "CAP" to provide
    some context for what its value represents.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Remove useless comments  (Chris Rorvick, 2015-01-12, 1 file, -2/+0)

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Remove idVendor and idProduct macros  (Chris Rorvick, 2015-01-12, 2 files, -61/+19)

    These are now only used to build the device table. Each entry in this table
    is already clearly documented as to what device it maps to, so the macros
    become unnecessary indirection.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Key off of device type  (Chris Rorvick, 2015-01-12, 4 files, -143/+144)

    The driver currently uses the device's idProduct as input to several switch
    statements. In some cases this is not sufficiently granular and the interface
    number must be taken into account. Store the device type in `usb_line6' and
    key off of it instead. New types can then be added that map to specific
    interfaces on the device so that this conditional logic can be flattened out.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Index properties array with device type  (Chris Rorvick, 2015-01-12, 1 file, -15/+2)

    The current logic uses the index of the matched entry from the device table
    as an offset to the corresponding properties entry. The values of the new
    `line6_device_type' enum are ordered such that they can be used as an index
    into either of these arrays. Drop the device entry lookup logic and use the
    device type (via the .driver_info member) instead.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Define a device type enum  (Chris Rorvick, 2015-01-12, 2 files, -18/+57)

    Define an enum containing the supported devices and associate each entry in
    the device table to the respective value.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

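    A hedged sketch of the shape this takes: an enum of device types and a device
    table whose .driver_info carries the matching enum value (names and product
    IDs here are illustrative only):

        enum line6_device_type {
            LINE6_PODXT,
            LINE6_TONEPORT_UX2,
            /* ... one value per supported device (or interface) */
        };

        static const struct usb_device_id sketch_id_table[] = {
            { USB_DEVICE(0x0e41, 0x5044), .driver_info = LINE6_PODXT },
            { USB_DEVICE(0x0e41, 0x4142), .driver_info = LINE6_TONEPORT_UX2 },
            { }
        };
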
* staging: line6: Cleanup device table  (Chris Rorvick, 2015-01-12, 1 file, -19/+21)

    Wrap USB_DEVICE to avoid repeating the Line 6 vendor ID.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

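    A sketch of the kind of wrapper described; the macro name and product IDs are
    illustrative, not the driver's actual definitions (0x0e41 is the Line 6 USB
    vendor ID):

        #define LINE6_VENDOR_ID   0x0e41
        #define LINE6_DEVICE(prod) USB_DEVICE(LINE6_VENDOR_ID, prod)

        static const struct usb_device_id sketch_id_table[] = {
            { LINE6_DEVICE(0x4250) },   /* placeholder product IDs */
            { LINE6_DEVICE(0x5044) },
            { }
        };
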
* staging: line6: Remove unsupported X3 devices  (Chris Rorvick, 2015-01-12, 3 files, -30/+0)

    Support for these devices appears to have never been completed. Remove them
    from the device table along with a minimal amount of supporting code.

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Remove line6_pod_transmit_paramter()  (Chris Rorvick, 2015-01-12, 2 files, -11/+0)

    This function was no longer used as of commit 2807904441d4 (staging: line6:
    drop MIDI parameter sysfs attrs).

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: Remove `device_bit' from properties  (Chris Rorvick, 2015-01-12, 4 files, -86/+24)

    The `device_bit' member was no longer used as of commit 2807904441d4
    (staging: line6: drop MIDI parameter sysfs attrs).

    Signed-off-by: Chris Rorvick <chris@rorvick.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* Staging: line6: remove spaces before commas.  (Jonas Lundqvist, 2015-01-12, 1 file, -3/+3)

    Fix three "space prohibited" errors in pcm.h found by checkpatch.pl.

    Signed-off-by: Jonas Lundqvist <jonas@gannon.se>
    Reviewed-by: Jeremiah Mahler <jmmahler@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

* staging: line6: toneport.c: Fix for possible null pointer dereference  (Rickard Strandqvist, 2015-01-12, 1 file, -7/+8)

    The NULL check was done too late, so there was a risk of a possible NULL
    pointer dereference. This was partially found by using a static code analysis
    program called cppcheck.

    Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se>
    Reviewed-by: Stefan Hajnoczi <stefanha@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Takashi Iwai <tiwai@suse.de>

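    The pattern being fixed, in generic form (illustrative code only, not the
    driver's actual functions):

        #include <linux/errno.h>

        struct sketch_dev { int refcount; };

        /* Buggy shape: the pointer is used before it is validated. */
        static int open_buggy(struct sketch_dev *dev)
        {
            dev->refcount++;        /* possible NULL dereference */
            if (!dev)
                return -ENODEV;
            return 0;
        }

        /* Fixed shape: validate first, then use. */
        static int open_fixed(struct sketch_dev *dev)
        {
            if (!dev)
                return -ENODEV;
            dev->refcount++;
            return 0;
        }
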
* linux 3.19-rc4 (tag: v3.19-rc4)  (Linus Torvalds, 2015-01-11, 1 file, -1/+1)

* Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm  (Linus Torvalds, 2015-01-11, 5 files, -11/+8)

    Pull ARM fixes from Russell King:
     "Three small fixes from over the Christmas period, and wiring up the new
      execveat syscall for ARM"

    * 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
      ARM: 8275/1: mm: fix PMD_SECT_RDONLY undeclared compile error
      ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region
      ARM: 8249/1: mm: dump: don't skip regions
      ARM: wire up execveat syscall

| * ARM: 8275/1: mm: fix PMD_SECT_RDONLY undeclared compile error  (Victor Kamensky, 2015-01-09, 1 file, -2/+2)

    In the v3.19-rc3 tree, when CONFIG_ARM_LPAE and CONFIG_DEBUG_RODATA are
    enabled, the image fails to compile with the following error:

        arch/arm/mm/init.c:661:14: error: ‘PMD_SECT_RDONLY’ undeclared here (not in a function)

    It seems that '80d6b0c ARM: mm: allow text and rodata sections to be
    read-only' and 'ded9477 ARM: 8109/1: mm: Modify pte_write and pmd_write logic
    for LPAE' commits crossed. 80d6b0c uses the PMD_SECT_RDONLY macro, but
    ded9477 renames it and uses the software bit L_PMD_SECT_RDONLY instead.

    The fix is to use L_PMD_SECT_RDONLY instead of PMD_SECT_RDONLY, as ded9477
    does in other places.

    Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region  (Grygorii Strashko, 2015-01-07, 1 file, -2/+2)

    The local variables kernel_x_start and kernel_x_end are currently defined
    using the 'unsigned long' type, which is wrong because they represent a
    physical memory range and will be calculated wrongly if LPAE is enabled.
    As a result, all following code in map_lowmem() will not work correctly.

    For example, Keystone 2 boot is broken because

        kernel_x_start == 0x0000 0000
        kernel_x_end   == 0x0080 0000

    instead of

        kernel_x_start == 0x0000 0008 0000 0000
        kernel_x_end   == 0x0000 0008 0080 0000

    and as a result the whole of low memory will be mapped with MT_MEMORY_RW
    permissions by this code (start > kernel_x_end):

        } else if (start >= kernel_x_end) {
                map.pfn = __phys_to_pfn(start);
                map.virtual = __phys_to_virt(start);
                map.length = end - start;
                map.type = MT_MEMORY_RW;
                create_mapping(&map);
        }

    Hence, fix it by using the phys_addr_t type for the variables kernel_x_start
    and kernel_x_end.

    Tested-by: Murali Karicheri <m-karicheri2@ti.com>
    Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

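    A small user-space illustration of the truncation described above, assuming a
    32-bit unsigned long as on ARM (uint32_t stands in for that type here):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t phys_start = 0x800000000ULL;      /* Keystone 2 style >4GiB address */
            uint32_t truncated  = (uint32_t)phys_start; /* what a 32-bit unsigned long keeps */

            /* prints "full: 0x800000000  truncated: 0" */
            printf("full: %#llx  truncated: %#x\n",
                   (unsigned long long)phys_start, truncated);
            return 0;
        }
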
| * ARM: 8249/1: mm: dump: don't skip regions  (Mark Rutland, 2015-01-07, 1 file, -7/+2)

    Currently the arm page table dumping code starts dumping page tables from
    USER_PGTABLES_CEILING. This is unnecessary for skipping any entries related
    to userspace as the swapper_pg_dir does not contain such entries, and results
    in a couple of unfortunate side effects.

    Firstly, any kernel mappings which might exist below USER_PGTABLES_CEILING
    will not be accounted in the dump output. This masks any entries erroneously
    created below this address.

    Secondly, if the final page table entry walked is part of a valid mapping the
    page table dumping code will not log the region this entry is part of, as the
    final note_page call in walk_pgd will trigger an early return when
    0 < USER_PGTABLES_CEILING. Luckily this isn't seen on contemporary systems as
    they typically don't have enough RAM to extend the linear mapping right to
    the end of the address space.

    Due to the way addr is constructed in the walk_* functions, it can never be
    less than USER_PGTABLES_CEILING when walking the page tables, so it is not
    necessary to avoid dereferencing invalid table addresses. The existing checks
    for st->current_prot and st->marker[1].start_address are sufficient to ensure
    we will not print and/or dereference garbage when trying to log information.

    This patch removes both problematic uses of USER_PGTABLES_CEILING from the
    arm page table dumping code, preventing both of these issues. We will now
    report any low mappings, and the final note_page call will not return early,
    ensuring all regions are logged.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Steve Capper <steve.capper@linaro.org>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * ARM: wire up execveat syscall  (Russell King, 2015-01-07, 2 files, -0/+2)

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* | Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2015-01-11, 6 files, -39/+53)

    Pull x86 fixes from Ingo Molnar:
     "Misc fixes: two vdso fixes, two kbuild fixes and a boot failure fix with
      certain odd memory mappings"

    * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86, vdso: Use asm volatile in __getcpu
      x86/build: Clean auto-generated processor feature files
      x86: Fix mkcapflags.sh bash-ism
      x86: Fix step size adjustment during initial memory mapping
      x86_64, vdso: Fix the vdso address randomization algorithm

| * Merge tag 'pr-20141223-x86-vdso' of git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux into x86/urgent  (Ingo Molnar, 2015-01-01, 1 file, -2/+4)

    Pull VDSO fix from Andy Lutomirski:

     "This is hopefully the last vdso fix for 3.19. It should be very safe (it
      just adds a volatile).

      I don't think it fixes an actual bug (the __getcpu calls in the pvclock
      code may not have been needed in the first place), but discussion on that
      point is ongoing.

      It also fixes a big performance issue in 3.18 and earlier in which the lsl
      instructions in vclock_gettime got hoisted so far up the function that they
      happened even when the function they were in was never called. In 3.19, the
      performance issue seems to be gone due to the whims of my compiler and some
      interaction with a branch that's now gone.

      I'll hopefully have a much bigger overhaul of the pvclock code for 3.20,
      but it needs careful review."

    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| | * | x86, vdso: Use asm volatile in __getcpu  (Andy Lutomirski, 2014-12-23, 1 file, -2/+4)

    In Linux 3.18 and below, GCC hoists the lsl instructions in the pvclock code
    all the way to the beginning of __vdso_clock_gettime, slowing the non-paravirt
    case significantly. For unknown reasons, presumably related to the removal of
    a branch, the performance issue is gone as of

        e76b027e6408 x86,vdso: Use LSL unconditionally for vgetcpu

    but I don't trust GCC enough to expect the problem to stay fixed.

    There should be no correctness issue, because the __getcpu calls in
    __vdso_vlock_gettime were never necessary in the first place.

    Note to stable maintainers: In 3.18 and below, depending on configuration,
    gcc 4.9.2 generates code like this:

        9c3:  44 0f 03 e8   lsl %ax,%r13d
        9c7:  45 89 eb      mov %r13d,%r11d
        9ca:  0f 03 d8      lsl %ax,%ebx

    This patch won't apply as is to any released kernel, but I'll send a trivial
    backported version if needed.

    Fixes: 51c19b4f5927 x86: vdso: pvclock gettime support
    Cc: stable@vger.kernel.org # 3.8+
    Cc: Marcelo Tosatti <mtosatti@redhat.com>
    Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Andy Lutomirski <luto@amacapital.net>

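    A generic illustration of the difference "volatile" makes on an inline asm
    with outputs (not the vdso code itself): without it, GCC may treat the asm as
    a pure computation and hoist or merge it; with it, the asm stays where it is
    written.

        #include <stdint.h>

        static inline uint64_t read_tsc(void)
        {
            uint32_t lo, hi;

            /* An output-only asm without "volatile" can be moved or CSE'd by the
             * compiler; "volatile" pins each execution to its source location. */
            asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
            return ((uint64_t)hi << 32) | lo;
        }
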
| * | | x86/build: Clean auto-generated processor feature files  (Bjørn Mork, 2014-12-23, 2 files, -0/+2)

    Commit 9def39be4e96 ("x86: Support compiling out human-friendly processor
    feature names") made two source file targets conditional. Such conditional
    targets will not be cleaned automatically by make mrproper.

    Fix by adding explicit clean-files targets for the two files.

    Fixes: 9def39be4e96 ("x86: Support compiling out human-friendly processor feature names")
    Signed-off-by: Bjørn Mork <bjorn@mork.no>
    Cc: Josh Triplett <josh@joshtriplett.org>
    Link: http://lkml.kernel.org/r/1419335863-10608-1-git-send-email-bjorn@mork.no
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | x86: Fix mkcapflags.sh bash-ism  (Sylvain BERTRAND, 2014-12-23, 1 file, -1/+1)

    Choked while compiling linux with the dash shell instead of the bash shell.
    See: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html

    Signed-off-by: Sylvain BERTRAND <sylvain.bertrand@gmail.com>
    Link: http://lkml.kernel.org/r/20141223123912.GA1386@localhost.localdomain
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | x86: Fix step size adjustment during initial memory mapping  (Jan Beulich, 2014-12-23, 1 file, -20/+17)

    The old scheme can lead to failure in certain cases - the problem is that
    after bumping step_size the next (non-final) iteration is only guaranteed to
    make available a memory block the size of what step_size was before. E.g. for
    a memory block [0,3004600000) we'd have:

        iter  start       end         step  amount
        1     3004400000  30045fffff    2M      2M
        2     3004000000  30043fffff   64M      4M
        3     3000000000  3003ffffff    2G     64M
        4     2000000000  2fffffffff   64G     64G

    Yet to map 64G with 4k pages (as happens e.g. under PV Xen) we need slightly
    over 128M, but the first three iterations made only about 70M available.

    The condition (new_mapped_ram_size > mapped_ram_size) for bumping step_size
    is just not suitable. Instead we want to bump it when we know we have enough
    memory available to cover a block of the new step_size. And rather than
    making that condition more complicated than needed, simply adjust step_size
    by the largest possible factor we know we can cover at that point - which is
    shifting it left by one less than the difference between page table level
    shifts. (Interestingly the original STEP_SIZE_SHIFT definition had a comment
    hinting at that having been the intention, just that it should have been
    PUD_SHIFT-PMD_SHIFT-1 instead of (PUD_SHIFT-PMD_SHIFT)/2, and of course for
    non-PAE 32-bit we can't really use these two constants as they're equal
    there.)

    Furthermore the comment in get_new_step_size() didn't get updated when the
    bottom-up mapping logic got added. Yet while an overflow (flushing step_size
    to zero) of the shift doesn't matter for the top-down method, it does for
    bottom-up because round_up(x, 0) = 0, and an upper range boundary of zero
    can't really work well.

    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Yinghai Lu <yinghai@kernel.org>
    Link: http://lkml.kernel.org/r/54945C1E020000780005114E@mail.emea.novell.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | Merge tag 'pr-20141220-x86-vdso' of git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux into x86/urgent  (Ingo Molnar, 2014-12-21, 1 file, -16/+29)

    Pull a VDSO fix from Andy Lutomirski:

     "One vdso fix for a longstanding ASLR bug that's been in the news lately.
      The vdso base address has always been randomized, and I don't think there's
      anything particularly wrong with the range over which it's randomized, but
      the implementation seems to have been buggy since the very beginning.

      This fixes the implementation to remove a large bias that caused a small
      fraction of possible vdso load addresses to be vastly more likely than the
      rest of the possible addresses."

    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| | * x86_64, vdso: Fix the vdso address randomization algorithm  (Andy Lutomirski, 2014-12-20, 1 file, -16/+29)

    The theory behind vdso randomization is that it's mapped at a random offset
    above the top of the stack. To avoid wasting a page of memory for an extra
    page table, the vdso isn't supposed to extend past the lowest PMD into which
    it can fit. Other than that, the address should be a uniformly distributed
    address that meets all of the alignment requirements.

    The current algorithm is buggy: the vdso has about a 50% probability of being
    at the very end of a PMD. The current algorithm also has a decent chance of
    failing outright due to incorrect handling of the case where the top of the
    stack is near the top of its PMD.

    This fixes the implementation. The paxtest estimate of vdso "randomisation"
    improves from 11 bits to 18 bits. (Disclaimer: I don't know what the paxtest
    code is actually calculating.)

    It's worth noting that this algorithm is inherently biased: the vdso is more
    likely to end up near the end of its PMD than near the beginning. Ideally we
    would either nix the PMD sharing requirement or jointly randomize the vdso
    and the stack to reduce the bias.

    In the mean time, this is a considerable improvement with basically no risk
    of compatibility issues, since the allowed outputs of the algorithm are
    unchanged.

    As an easy test, doing this:

        for i in `seq 10000`
          do grep -P vdso /proc/self/maps |cut -d- -f1
        done |sort |uniq -d

    used to produce lots of output (1445 lines on my most recent run). A tiny
    subset looks like this:

        7fffdfffe000
        7fffe01fe000
        7fffe05fe000
        7fffe07fe000
        7fffe09fe000
        7fffe0bfe000
        7fffe0dfe000

    Note the suspicious fe000 endings. With the fix, I get a much more palatable
    76 repeated addresses.

    Reviewed-by: Kees Cook <keescook@chromium.org>
    Cc: stable@vger.kernel.org
    Signed-off-by: Andy Lutomirski <luto@amacapital.net>

* | | Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2015-01-11, 4 files, -35/+19)

    Pull scheduler fixes from Ingo Molnar:
     "Misc fixes: group scheduling corner case fix, two deadline scheduler fixes,
      effective_load() overflow fix, nested sleep fix, 6144 CPUs system fix"

    * 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      sched/fair: Fix RCU stall upon -ENOMEM in sched_create_group()
      sched/deadline: Avoid double-accounting in case of missed deadlines
      sched/deadline: Fix migration of SCHED_DEADLINE tasks
      sched: Fix odd values in effective_load() calculations
      sched, fanotify: Deal with nested sleeps
      sched: Fix KMALLOC_MAX_SIZE overflow during cpumask allocation

| * | | sched/fair: Fix RCU stall upon -ENOMEM in sched_create_group()  (Tetsuo Handa, 2015-01-09, 1 file, -0/+4)

    When alloc_fair_sched_group() in sched_create_group() fails,
    free_sched_group() is called, and free_fair_sched_group() is called by
    free_sched_group(). Since destroy_cfs_bandwidth() is called by
    free_fair_sched_group() without calling init_cfs_bandwidth(), RCU stall
    occurs at hrtimer_cancel():

        INFO: rcu_sched self-detected stall on CPU { 1}  (t=60000 jiffies g=13074 c=13073 q=0)
        Task dump for CPU 1:
        (fprintd)       R  running task        0  6249      1 0x00000088
        ...
        Call Trace:
         <IRQ>  [<ffffffff81094988>] sched_show_task+0xa8/0x110
         [<ffffffff81097acd>] dump_cpu_task+0x3d/0x50
         [<ffffffff810c3a80>] rcu_dump_cpu_stacks+0x90/0xd0
         [<ffffffff810c7751>] rcu_check_callbacks+0x491/0x700
         [<ffffffff810cbf2b>] update_process_times+0x4b/0x80
         [<ffffffff810db046>] tick_sched_handle.isra.20+0x36/0x50
         [<ffffffff810db0a2>] tick_sched_timer+0x42/0x70
         [<ffffffff810ccb19>] __run_hrtimer+0x69/0x1a0
         [<ffffffff810db060>] ? tick_sched_handle.isra.20+0x50/0x50
         [<ffffffff810ccedf>] hrtimer_interrupt+0xef/0x230
         [<ffffffff810452cb>] local_apic_timer_interrupt+0x3b/0x70
         [<ffffffff8164a465>] smp_apic_timer_interrupt+0x45/0x60
         [<ffffffff816485bd>] apic_timer_interrupt+0x6d/0x80
         <EOI>  [<ffffffff810cc588>] ? lock_hrtimer_base.isra.23+0x18/0x50
         [<ffffffff81193cf1>] ? __kmalloc+0x211/0x230
         [<ffffffff810cc9d2>] hrtimer_try_to_cancel+0x22/0xd0
         [<ffffffff81193cf1>] ? __kmalloc+0x211/0x230
         [<ffffffff810ccaa2>] hrtimer_cancel+0x22/0x30
         [<ffffffff810a3cb5>] free_fair_sched_group+0x25/0xd0
         [<ffffffff8108df46>] free_sched_group+0x16/0x40
         [<ffffffff810971bb>] sched_create_group+0x4b/0x80
         [<ffffffff810aa383>] sched_autogroup_create_attach+0x43/0x1c0
         [<ffffffff8107dc9c>] sys_setsid+0x7c/0x110
         [<ffffffff81647729>] system_call_fastpath+0x12/0x17

    Check whether init_cfs_bandwidth() was called before calling
    destroy_cfs_bandwidth().

    Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
    [ Move the check into destroy_cfs_bandwidth() to aid compilability. ]
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Paul Turner <pjt@google.com>
    Cc: Ben Segall <bsegall@google.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: http://lkml.kernel.org/r/201412252210.GCC30204.SOMVFFOtQJFLOH@I-love.SAKURA.ne.jp
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | sched/deadline: Avoid double-accounting in case of missed deadlines  (Luca Abeni, 2015-01-09, 1 file, -18/+1)

    The dl_runtime_exceeded() function is supposed to check if a SCHED_DEADLINE
    task must be throttled, by checking if its current runtime is <= 0. However,
    it also checks if the scheduling deadline has been missed (the current time
    is larger than the current scheduling deadline), further decreasing the
    runtime if this happens.

    This "double accounting" is wrong:

    - In case of partitioned scheduling (or single CPU), this happens if
      task_tick_dl() has been called later than expected (due to small HZ values).
      In this case, the current runtime is also negative, and
      replenish_dl_entity() can take care of the deadline miss by recharging the
      current runtime to a value smaller than dl_runtime

    - In case of global scheduling on multiple CPUs, scheduling deadlines can be
      missed even if the task did not consume more runtime than expected, hence
      penalizing the task is wrong

    This patch fixes the problem by throttling a SCHED_DEADLINE task only when
    its runtime becomes negative, and not modifying the runtime.

    Signed-off-by: Luca Abeni <luca.abeni@unitn.it>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Juri Lelli <juri.lelli@gmail.com>
    Cc: <stable@vger.kernel.org>
    Cc: Dario Faggioli <raistlin@linux.it>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: http://lkml.kernel.org/r/1418813432-20797-3-git-send-email-luca.abeni@unitn.it
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | sched/deadline: Fix migration of SCHED_DEADLINE tasks  (Luca Abeni, 2015-01-09, 1 file, -3/+3)

    According to global EDF, tasks should be migrated between runqueues without
    checking if their scheduling deadlines and runtimes are valid. However,
    SCHED_DEADLINE currently performs such a check: a migration happens doing:

        deactivate_task(rq, next_task, 0);
        set_task_cpu(next_task, later_rq->cpu);
        activate_task(later_rq, next_task, 0);

    which ends up calling dequeue_task_dl(), setting the new CPU, and then
    calling enqueue_task_dl(). enqueue_task_dl() then calls enqueue_dl_entity(),
    which calls update_dl_entity(), which can modify scheduling deadline and
    runtime, breaking global EDF scheduling.

    As a result, some of the properties of global EDF are not respected: for
    example, a taskset {(30, 80), (40, 80), (120, 170)} scheduled on two cores
    can have unbounded response times for the third task even if

        30/80 + 40/80 + 120/170 = 1.5809 < 2

    This can be fixed by invoking update_dl_entity() only in case of wakeup, or
    if this is a new SCHED_DEADLINE task.

    Signed-off-by: Luca Abeni <luca.abeni@unitn.it>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Juri Lelli <juri.lelli@gmail.com>
    Cc: <stable@vger.kernel.org>
    Cc: Dario Faggioli <raistlin@linux.it>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: http://lkml.kernel.org/r/1418813432-20797-2-git-send-email-luca.abeni@unitn.it
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | sched: Fix odd values in effective_load() calculations  (Yuyang Du, 2015-01-09, 1 file, -1/+1)

    In effective_load() we have (long w * unsigned long tg->shares) / long W;
    when w is negative, it is cast to unsigned long and hence the product is
    insanely large. Fix this by casting tg->shares to long.

    Reported-by: Sasha Levin <sasha.levin@oracle.com>
    Signed-off-by: Yuyang Du <yuyang.du@intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Dave Jones <davej@redhat.com>
    Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: http://lkml.kernel.org/r/20141219002956.GA25405@intel.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

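    A user-space illustration of the integer-promotion problem described above;
    the variable names mirror the commit text but the program is not kernel code:

        #include <stdio.h>

        int main(void)
        {
            long w = -512;                 /* negative weight delta */
            unsigned long shares = 1024;   /* tg->shares is unsigned long */

            /* w is converted to unsigned long, so the product is insanely large */
            printf("without cast: %lu\n", w * shares);

            /* casting shares to long keeps the arithmetic signed, as the fix does */
            printf("with cast:    %ld\n", w * (long)shares);
            return 0;
        }
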
| * | | sched, fanotify: Deal with nested sleeps  (Peter Zijlstra, 2015-01-09, 1 file, -5/+5)

    As per e23738a7300a ("sched, inotify: Deal with nested sleeps").

    fanotify_read is a wait loop with sleeps in. Wait loops rely on
    task_struct::state and sleeps do too, since that's the only means of actually
    sleeping. Therefore the nested sleeps destroy the wait loop state and the
    wait loop breaks the sleep functions that assume TASK_RUNNING (mutex_lock).

    Fix this by using the new woken_wake_function and wait_woken() stuff, which
    registers wakeups in wait and thereby allows shrinking the task_state::state
    changes to the actual sleep part.

    Reported-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
    Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Takashi Iwai <tiwai@suse.de>
    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Eric Paris <eparis@redhat.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Eric Paris <eparis@redhat.com>
    Link: http://lkml.kernel.org/r/20141216152838.GZ3337@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

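    The wait_woken() idiom referred to above looks roughly like this (a sketch of
    the generic kernel pattern, not the fanotify code; the wait queue and the
    condition are placeholders):

        DEFINE_WAIT_FUNC(wait, woken_wake_function);

        add_wait_queue(&waitq, &wait);
        while (!condition) {
            /* The task state is only touched inside wait_woken(), so sleeping
             * primitives (e.g. mutex_lock()) used elsewhere in the loop body no
             * longer clobber the wait loop's state. */
            wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
        }
        remove_wait_queue(&waitq, &wait);
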
| * | | sched: Fix KMALLOC_MAX_SIZE overflow during cpumask allocation  (Alex Thorlton, 2014-12-23, 1 file, -8/+5)

    When allocating space for load_balance_mask, in sched_init, when
    CPUMASK_OFFSTACK is set, we've managed to spill over KMALLOC_MAX_SIZE on our
    6144 core machine. The patch below breaks up the allocations so that they
    don't overflow the max alloc size. It also allocates the masks on the node
    from which they'll most commonly be accessed, to minimize remote accesses on
    NUMA machines.

    Suggested-by: George Beshers <gbeshers@sgi.com>
    Signed-off-by: Alex Thorlton <athorlton@sgi.com>
    Cc: George Beshers <gbeshers@sgi.com>
    Cc: Russ Anderson <rja@sgi.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: http://lkml.kernel.org/r/1418928270-148543-1-git-send-email-athorlton@sgi.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

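    Roughly the shape of the approach described (a hedged sketch; the per-CPU
    variable name is illustrative): many small per-CPU allocations placed on each
    CPU's own node instead of one large kzalloc().

        int i;

        for_each_possible_cpu(i) {
            per_cpu(load_balance_mask, i) = (cpumask_var_t)kzalloc_node(
                    cpumask_size(), GFP_KERNEL, cpu_to_node(i));
        }
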
* | | | Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 2015-01-11, 26 files, -77/+371)

    Pull perf fixes from Ingo Molnar:
     "Mostly tooling fixes, but also some kernel side fixes: uncore PMU driver
      fix, user regs sampling fix and an instruction decoder fix that unbreaks
      PEBS precise sampling"

    * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      perf/x86/uncore/hsw-ep: Handle systems with only two SBOXes
      perf/x86_64: Improve user regs sampling
      perf: Move task_pt_regs sampling into arch code
      x86: Fix off-by-one in instruction decoder
      perf hists browser: Fix segfault when showing callchain
      perf callchain: Free callchains when hist entries are deleted
      perf hists: Fix children sort key behavior
      perf diff: Fix to sort by baseline field by default
      perf list: Fix --raw-dump option
      perf probe: Fix crash in dwarf_getcfi_elf
      perf probe: Fix to fall back to find probe point in symbols
      perf callchain: Append callchains only when requested
      perf ui/tui: Print backtrace symbols when segfault occurs
      perf report: Show progress bar for output resorting

| * | | | perf/x86/uncore/hsw-ep: Handle systems with only two SBOXes  (Andi Kleen, 2015-01-09, 2 files, -1/+18)

    There was another report of a boot failure with a #GP fault in the uncore
    SBOX initialization. The earlier work around was not enough for this system.
    The boot was failing while trying to initialize the third SBOX.

    This patch detects parts with only two SBOXes and limits the number of SBOX
    units to two there.

    Stable material, as it affects boot problems on 3.18.

    Tested-by: Andreas Oehler <andreas@oehler-net.de>
    Signed-off-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: <stable@vger.kernel.org>
    Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Yan, Zheng <zheng.z.yan@intel.com>
    Link: http://lkml.kernel.org/r/1420583675-9163-1-git-send-email-andi@firstfloor.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | | perf/x86_64: Improve user regs sampling  (Andy Lutomirski, 2015-01-09, 1 file, -2/+76)

    Perf reports user regs for kernel-mode samples so that samples can be
    backtraced through user code. The old code was very broken in syscall
    context, resulting in useless backtraces. The new code, in contrast, is still
    dangerously racy, but it should at least work most of the time.

    Tested-by: Jiri Olsa <jolsa@kernel.org>
    Signed-off-by: Andy Lutomirski <luto@amacapital.net>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: chenggang.qcg@taobao.com
    Cc: Wu Fengguang <fengguang.wu@intel.com>
    Cc: Namhyung Kim <namhyung@gmail.com>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: David Ahern <dsahern@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Link: http://lkml.kernel.org/r/243560c26ff0f739978e2459e203f6515367634d.1420396372.git.luto@amacapital.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | | perf: Move task_pt_regs sampling into arch code  (Andy Lutomirski, 2015-01-09, 6 files, -16/+63)

    On x86_64, at least, task_pt_regs may be only partially initialized in many
    contexts, so x86_64 should not use it without extra care from interrupt
    context, let alone NMI context.

    This will allow x86_64 to override the logic and will supply some scratch
    space to use to make a cleaner copy of user regs.

    Tested-by: Jiri Olsa <jolsa@kernel.org>
    Signed-off-by: Andy Lutomirski <luto@amacapital.net>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Stephane Eranian <eranian@google.com>
    Cc: chenggang.qcg@taobao.com
    Cc: Wu Fengguang <fengguang.wu@intel.com>
    Cc: Namhyung Kim <namhyung@gmail.com>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Arjan van de Ven <arjan@linux.intel.com>
    Cc: David Ahern <dsahern@gmail.com>
    Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Jean Pihet <jean.pihet@linaro.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mark Salter <msalter@redhat.com>
    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-arm-kernel@lists.infradead.org
    Link: http://lkml.kernel.org/r/e431cd4c18c2e1c44c774f10758527fb2d1025c4.1420396372.git.luto@amacapital.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | | x86: Fix off-by-one in instruction decoder  (Peter Zijlstra, 2015-01-09, 1 file, -1/+1)

    Stephane reported that the PEBS fixup was broken by the recent commit to the
    instruction decoder. The thing had an off-by-one which resulted in not being
    able to decode the last instruction and always bail.

    Reported-by: Stephane Eranian <eranian@google.com>
    Fixes: 6ba48ff46f76 ("x86: Remove arbitrary instruction size limit in instruction decoder")
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: stable@vger.kernel.org # 3.18
    Cc: <ak@linux.intel.com>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Cc: Liang Kan <kan.liang@intel.com>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Jim Keniston <jkenisto@us.ibm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
    Link: http://lkml.kernel.org/r/20141216104614.GV3337@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| * | | | Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent  (Ingo Molnar, 2015-01-09, 6 files, -33/+69)

    Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

    - Free callchains when hist entries are deleted, plugging a massive leak in
      'top -g', where hist_entries (and its callchains) are decayed over time.
      (Namhyung Kim)

    - Fix segfault when showing callchain in the hists browser (report & top)
      (Namhyung Kim)

    - Fix children sort key behavior, and also the 'perf test 32' test that was
      failing due to reliance on undefined behaviour (Namhyung Kim)

    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| | * | | | perf hists browser: Fix segfault when showing callchain  (Namhyung Kim, 2015-01-08, 1 file, -1/+1)

    When perf report on TUI shows a callchain it first checks whether the node
    has siblings, to determine whether it needs to print a percentage value. But
    it missed the case where the first node is NULL, so sometimes it segfaults
    like below:

        $ perf top -g
        perf: Segmentation fault
        -------- backtrace --------
        perf[0x4fcefb]
        /usr/lib/libc.so.6(+0x33b20)[0x7f2a35839b20]
        perf(rb_next+0x8)[0x47d3d8]
        perf[0x4f6058]
        perf[0x4f833b]
        perf[0x4f8610]
        perf[0x4f209e]
        perf(ui_browser__run+0x3a)[0x4f2e6a]
        perf[0x4f94ee]
        perf(perf_evlist__tui_browse_hists+0x94)[0x4fbbf4]
        perf[0x444d10]
        /usr/lib/libpthread.so.0(+0x7314)[0x7f2a37070314]
        /usr/lib/libc.so.6(clone+0x6d)[0x7f2a358ee5bd]

        $ addr2line -e `which perf` 0x4f6058
        /home/namhyung/project/linux/tools/perf/ui/browsers/hists.c:553

    I don't know why the backtrace didn't print some symbols.

    Signed-off-by: Namhyung Kim <namhyung@kernel.org>
    Fixes: 4087d11cd945 ("perf hists browser: Print overhead percent value for first-level callchain")
    Cc: Adrian Hunter <adrian.hunter@intel.com>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Stephane Eranian <eranian@google.com>
    Link: http://lkml.kernel.org/r/1419401076-21700-1-git-send-email-namhyung@kernel.org
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

| | * | | | perf callchain: Free callchains when hist entries are deleted  (Namhyung Kim, 2015-01-08, 3 files, -0/+33)

    Markus reported that "perf top -g" can leak ~300MB per second on his machine.
    This is partly because it failed to free callchains when hist entries are
    deleted. Fix it.

    Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
    Signed-off-by: Namhyung Kim <namhyung@kernel.org>
    Cc: David Ahern <dsahern@gmail.com>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Markus Trippelsdorf <markus@trippelsdorf.de>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Link: http://lkml.kernel.org/r/20141230053813.GD6081@sejong
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

| | * | | | perf hists: Fix children sort key behavior  (Namhyung Kim, 2015-01-08, 2 files, -32/+35)

    When perf report --children resorts output fields, it tries to put the caller
    above the callee. But this is only meaningful within the same thread, and
    doing it requires callchains to be enabled. So fix its check before comparing
    the callchain depth.

    This also changes the hist accumulation tests: In test 3, xmalloc in the bash
    thread should be above the other perf threads due to the alphabetical order
    of the comm string. It is also under page_fault in the bash thread due to the
    alphabetical order of the DSO name. The sys_perf_event_open in the perf
    thread is put on the last line since its self overhead is 0.

    In test 4, sys_perf_event_open is put above the other perf entries that have
    the same children overhead since its callchain depth is smaller.

    Signed-off-by: Namhyung Kim <namhyung@kernel.org>
    Cc: Adrian Hunter <adrian.hunter@intel.com>
    Cc: David Ahern <dsahern@gmail.com>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Link: http://lkml.kernel.org/r/1419309381-2593-1-git-send-email-namhyung@kernel.org
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>

| * | | | Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent  (Ingo Molnar, 2015-01-08, 4 files, -6/+75)

    Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

     "- 'perf probe' should fall back to find probe point in symbols when failing
        to do so in a debuginfo file (Masami Hiramatsu)

      - Fix 'perf probe' crash in dwarf_getcfi_elf (Namhyung Kim)

      - Fix shell completion with 'perf list' --raw-dump option (Taesoo Kim)

      - Fix 'perf diff' to sort by baseline field by default (Namhyung Kim)"

    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

| | * | | | perf diff: Fix to sort by baseline field by default  (Namhyung Kim, 2015-01-02, 1 file, -0/+44)

    Currently perf diff does not add the baseline and delta (or other computed)
    fields to the sort list, so output is sorted by other fields, e.g. the
    alphabetical order of DSO or symbol, as in the example below. Fix it by
    adding hpp formats for the fields and providing default compare functions.

    Before:

        $ perf diff
        # Event 'cycles'
        #
        # Baseline    Delta  Shared Object       Symbol
        # ........  .......  ..................  ...............................
        #
                             [bridge]            [k] ip_sabotage_in
                             [btrfs]             [k] __etree_search.constprop.47
            0.01%            [btrfs]             [k] btrfs_file_mmap
            0.01%    -0.01%  [btrfs]             [k] btrfs_getattr
                             [e1000e]            [k] e1000_watchdog
            0.00%            [kernel.vmlinux]    [k] PageHuge
            0.00%            [kernel.vmlinux]    [k] __acct_update_integrals
            0.00%            [kernel.vmlinux]    [k] __activate_page
                             [kernel.vmlinux]    [k] __alloc_fd
            0.02%    +0.02%  [kernel.vmlinux]    [k] __alloc_pages_nodemask
        ...

    After:

        # Baseline    Delta  Shared Object       Symbol
        # ........  .......  ..................  ................................
        #
           24.73%   -4.62%   perf                [.] append_chain_children
            7.96%   -1.29%   perf                [.] dso__find_symbol
            6.97%   -2.07%   libc-2.20.so        [.] vfprintf
            4.61%   +0.88%   libc-2.20.so        [.] __fprintf_chk
            4.41%   +2.43%   perf                [.] sort__comm_cmp
            4.10%   -0.16%   perf                [.] comm__str
            4.03%   -0.93%   perf                [.] machine__findnew_thread_time
            3.82%   +3.09%   perf                [.] __hists__add_entry
            2.95%   -0.18%   perf                [.] sort__dso_cmp
        ...

    Signed-off-by: Namhyung Kim <namhyung@kernel.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Cc: Jiri Olsa <jolsa@redhat.com>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Link: http://lkml.kernel.org/r/1419656793-32756-1-git-send-email-namhyung@kernel.org
    Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
