path: root/arch/arm64/kernel/perf_event.c
Commit history for this file, most recent first. Each entry lists the commit subject, author, date, files changed, and lines removed/added.
* arm_pmu: simplify arm_pmu::handle_irq (Mark Rutland, 2018-05-21, 1 file changed, -2/+1)
  The arm_pmu::handle_irq() callback has the same prototype as a generic IRQ handler, taking the IRQ number and a void pointer argument which it must convert to an arm_pmu pointer. This means that all arm_pmu::handle_irq() implementations take an IRQ number they never use, and all must explicitly cast the void pointer to an arm_pmu pointer. Instead, let's change arm_pmu::handle_irq to take an arm_pmu pointer, allowing these casts to be removed. The redundant IRQ number parameter is also removed.
  Suggested-by: Hoeun Ryu <hoeun.ryu@lge.com>
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
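A minimal user-space sketch of the prototype change described above; the struct and typedef are simplified stand-ins for the kernel's types, shown only to illustrate why the cast and the unused IRQ number disappear:

    #include <stdio.h>

    typedef int irqreturn_t;
    #define IRQ_HANDLED 1

    struct arm_pmu { const char *name; };

    /* Old shape: generic IRQ handler signature, so every PMU backend had to
     * cast the void pointer back and ignore the IRQ number. */
    static irqreturn_t old_handle_irq(int irq_num, void *dev)
    {
        struct arm_pmu *pmu = (struct arm_pmu *)dev;  /* boilerplate cast */
        (void)irq_num;                                /* never used */
        printf("old handler for %s\n", pmu->name);
        return IRQ_HANDLED;
    }

    /* New shape: the callback receives the arm_pmu directly. */
    static irqreturn_t new_handle_irq(struct arm_pmu *pmu)
    {
        printf("new handler for %s\n", pmu->name);
        return IRQ_HANDLED;
    }

    int main(void)
    {
        struct arm_pmu pmu = { .name = "armv8_pmuv3" };
        old_handle_irq(23, &pmu);
        new_handle_irq(&pmu);
        return 0;
    }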
* arm64: perf: correct PMUVer probing (Mark Rutland, 2018-02-20, 1 file changed, -2/+2)
  The ID_AA64DFR0_EL1.PMUVer field doesn't follow the usual ID registers scheme. While value 0xf indicates a non-architected PMU is implemented, values 0x1 to 0xe indicate an increasingly featureful architected PMU, as if the field were unsigned. For more details, see ARM DDI 0487C.a, D10.1.4, "Alternative ID scheme used for the Performance Monitors Extension version". Currently, we treat the field as signed, and erroneously bail out for values 0x8 to 0xe. Let's correct that.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Reviewed-by: Robin Murphy <robin.murphy@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
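A user-space sketch of the decoding rule described above (PMUVer is bits [11:8] of ID_AA64DFR0_EL1 per ARM DDI 0487); treating the field as unsigned keeps values 0x8 to 0xe valid:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ID_AA64DFR0_PMUVER_SHIFT  8
    #define ID_AA64DFR0_PMUVER_MASK   0xfULL

    static bool pmuv3_implemented(uint64_t id_aa64dfr0)
    {
        unsigned int pmuver = (id_aa64dfr0 >> ID_AA64DFR0_PMUVER_SHIFT) &
                              ID_AA64DFR0_PMUVER_MASK;

        /* 0x0: no PMU; 0xf: IMPLEMENTATION DEFINED, non-architected PMU.
         * 0x1..0xe: architected (PMUv3-compatible) PMU, increasingly
         * featureful -- so 0x8..0xe must not be rejected. */
        return pmuver != 0x0 && pmuver != 0xf;
    }

    int main(void)
    {
        printf("pmuver=0x4 -> %d\n", pmuv3_implemented(0x4ULL << 8));
        printf("pmuver=0xb -> %d\n", pmuv3_implemented(0xbULL << 8));
        printf("pmuver=0xf -> %d\n", pmuv3_implemented(0xfULL << 8));
        return 0;
    }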
* bitmap: replace bitmap_{from,to}_u32array (Yury Norov, 2018-02-06, 1 file changed, -3/+2)
  Replace bitmap_{from,to}_u32array with bitmap_{from,to}_arr32 across the kernel. In addition:
  * __check_eq_bitmap() now takes a single nbits argument.
  * __check_eq_u32_array is not used in the new test but may be used in the future, so it is not removed here, only annotated as __used.
  Tested on arm64 and 32-bit BE mips.
  [arnd@arndb.de: perf: arm_dsu_pmu: convert to bitmap_from_arr32]
  Link: http://lkml.kernel.org/r/20180201172508.5739-2-ynorov@caviumnetworks.com
  [ynorov@caviumnetworks.com: fix net/core/ethtool.c]
  Link: http://lkml.kernel.org/r/20180205071747.4ekxtsbgxkj5b2fz@yury-thinkpad
  Link: http://lkml.kernel.org/r/20171228150019.27953-2-ynorov@caviumnetworks.com
  Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Cc: Ben Hutchings <ben@decadent.org.uk>
  Cc: David Decotigny <decot@googlers.com>
  Cc: David S. Miller <davem@davemloft.net>
  Cc: Geert Uytterhoeven <geert@linux-m68k.org>
  Cc: Matthew Wilcox <mawilcox@microsoft.com>
  Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Cc: Heiner Kallweit <hkallweit1@gmail.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* arm64: perf: remove unsupported events for Cortex-A73 (Xu YiPing, 2017-12-01, 1 file changed, -6/+0)
  Bus access read/write events are not supported in A73, based on the Cortex-A73 TRM r0p2, section 11.9 Events (pages 11-457 to 11-460).
  Fixes: 5561b6c5e981 ("arm64: perf: add support for Cortex-A73")
  Acked-by: Julien Thierry <julien.thierry@arm.com>
  Signed-off-by: Xu YiPing <xuyiping@hisilicon.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: add support for Cortex-A35 (Julien Thierry, 2017-08-10, 1 file changed, -0/+17)
  The Cortex-A35 uses some implementation defined perf events. The Cortex-A35 derives from the Cortex-A53 core and uses the same event mappings, based on the Cortex-A35 TRM r0p2, section C2.3 - Performance monitoring events (pages C2-562 to C2-565).
  Signed-off-by: Julien Thierry <julien.thierry@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Cc: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: add support for Cortex-A73 (Julien Thierry, 2017-08-10, 1 file changed, -0/+37)
  The Cortex-A73 uses some implementation defined perf events. This patch sets up the necessary mapping for Cortex-A73. Mappings are based on Cortex-A73 TRM r0p2, section 11.9 Events (pages 11-457 to 11-460).
  Signed-off-by: Julien Thierry <julien.thierry@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Cc: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Remove redundant entries from CPU-specific event maps (Will Deacon, 2017-08-10, 1 file changed, -110/+4)
  Now that the event mapping code always looks into the PMUv3 events before any extended mappings, the extended mappings can be reduced to only those events that are not discoverable through the PMCEID registers.
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Connect additional events to pmu counters (Julien Thierry, 2017-08-10, 1 file changed, -0/+11)
  Last level cache and node events were almost never connected in the currently supported cores. Connect last level cache events to the actual last level within the core, and connect node events to bus accesses.
  Signed-off-by: Julien Thierry <julien.thierry@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Cc: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Allow standard PMUv3 events to be extended by the CPU type (Will Deacon, 2017-08-08, 1 file changed, -20/+26)
  Rather than continue adding CPU-specific event maps, instead look up by default in the PMUv3 event map and only fall back to the CPU-specific maps if either the event isn't described by PMUv3, or it is described but the PMCEID registers say that it is unsupported by the current CPU.
  Signed-off-by: Will Deacon <will.deacon@arm.com>
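An illustrative user-space sketch of that lookup order; the tables, sizes and the supported() hook are made up for the example (the real code consults the armv8_pmuv3 map and the PMCEID bitmaps):

    #include <stdbool.h>
    #include <stdio.h>

    #define HW_OP_UNSUPPORTED 0xffff
    #define NR_HW_EVENTS      10

    /* Architected PMUv3 mapping, consulted first. */
    static const unsigned int pmuv3_map[NR_HW_EVENTS] = {
        [0 ... NR_HW_EVENTS - 1] = HW_OP_UNSUPPORTED,
        [0] = 0x11,   /* cycles */
        [1] = 0x08,   /* instructions */
    };

    /* CPU-specific extension, only used as a fallback. */
    static const unsigned int cpu_ext_map[NR_HW_EVENTS] = {
        [0 ... NR_HW_EVENTS - 1] = HW_OP_UNSUPPORTED,
        [3] = 0xc2,   /* an IMPLEMENTATION DEFINED event */
    };

    /* Pretend PMCEID only advertises the cycles event. */
    static bool pmceid_supported(unsigned int hw_event)
    {
        return hw_event == 0x11;
    }

    static unsigned int map_event(unsigned int idx)
    {
        unsigned int hw_event = pmuv3_map[idx];

        /* Use the PMUv3 mapping when it exists and the CPU advertises it... */
        if (hw_event != HW_OP_UNSUPPORTED && pmceid_supported(hw_event))
            return hw_event;

        /* ...otherwise fall back to the CPU-specific map. */
        return cpu_ext_map[idx];
    }

    int main(void)
    {
        printf("event 0 -> 0x%x\n", map_event(0)); /* from the PMUv3 map */
        printf("event 1 -> 0x%x\n", map_event(1)); /* falls back: not in PMCEID */
        printf("event 3 -> 0x%x\n", map_event(3)); /* only in the CPU map */
        return 0;
    }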
* arm64: perf: Allow more than one cycle counter to be used (Pratyush Anand, 2017-08-08, 1 file changed, -7/+4)
  Currently:
    $ perf stat -e cycles:u -e cycles:k true
     Performance counter stats for 'true':
            2,24,699      cycles:u
       <not counted>      cycles:k       (0.00%)
         0.000788087 seconds time elapsed
  We cannot count more than one cycle event at a time, because the cycle counter is only ever mapped to PMCCNTR_EL0. However, the specification does not prohibit using PMEVCNTR<n>_EL0 for the cycle count as well. Modify the code so that it still prefers PMCCNTR_EL0 for the cycle counter, but allows PMEVCNTR<n>_EL0 to be used when PMCCNTR_EL0 is already in use. After this patch:
    $ perf stat -e cycles:u -e cycles:k true
     Performance counter stats for 'true':
            2,17,310      cycles:u
            7,40,009      cycles:k
         0.000764149 seconds time elapsed
  Signed-off-by: Pratyush Anand <panand@redhat.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
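A user-space sketch of the counter-allocation policy described above: prefer the dedicated cycle counter, but fall back to a generic PMEVCNTR<n> slot when it is already taken. Indices and counter count are illustrative:

    #include <stdbool.h>
    #include <stdio.h>

    #define IDX_CYCLE_COUNTER 0
    #define NUM_COUNTERS      7     /* cycle counter + 6 event counters */
    #define EVT_CPU_CYCLES    0x11

    static bool used[NUM_COUNTERS];

    static int get_event_idx(unsigned int evtype)
    {
        int idx;

        /* Always prefer PMCCNTR_EL0 for the cycle count... */
        if (evtype == EVT_CPU_CYCLES && !used[IDX_CYCLE_COUNTER]) {
            used[IDX_CYCLE_COUNTER] = true;
            return IDX_CYCLE_COUNTER;
        }

        /* ...but any PMEVCNTR<n>_EL0 may count cycles too. */
        for (idx = 1; idx < NUM_COUNTERS; idx++) {
            if (!used[idx]) {
                used[idx] = true;
                return idx;
            }
        }
        return -1;      /* -EAGAIN in the kernel */
    }

    int main(void)
    {
        printf("cycles:u -> idx %d\n", get_event_idx(EVT_CPU_CYCLES));
        printf("cycles:k -> idx %d\n", get_event_idx(EVT_CPU_CYCLES));
        return 0;
    }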
* arm64: perf: Extend event config for ARMv8.1 (Shaokun Zhang, 2017-05-30, 1 file changed, -1/+1)
  Perf has supported the ARMv8.1 feature with a 16-bit evtCount field [see c210ae8 arm64: perf: Extend event mask for ARMv8.1], so the event config should be extended to 16 bits too. Otherwise, using -e event_name with an event code greater than 0x3ff makes pmu_config_term return -EINVAL, because pmu_format_max_value depends on the event config. This patch extends the event config to 16 bits.
  Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
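A kernel-context sketch of the sysfs format change implied above (PMU_FORMAT_ATTR comes from <linux/perf_event.h>; the exact lines are illustrative, not a quote of the patch):

    /* Before: only bits 0-9 of attr.config were advertised, so perf's
     * format parsing capped raw event codes at 0x3ff. */
    PMU_FORMAT_ATTR(event, "config:0-9");

    /* After: ARMv8.1 widens evtCount to 16 bits, so the full field is
     * exposed and codes above 0x3ff no longer make pmu_config_term fail. */
    PMU_FORMAT_ATTR(event, "config:0-15");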
* arm64: perf: Ignore exclude_hv when kernel is running in HYP (Ganapatrao Kulkarni, 2017-05-15, 1 file changed, -7/+16)
  Commit d98ecdaca296 ("arm64: perf: Count EL2 events if the kernel is running in HYP") returns -EINVAL when the perf system call perf_event_open is called with exclude_hv != exclude_kernel. This change breaks applications on VHE-enabled ARMv8.1 platforms. The issue was observed with the HHVM application, which calls perf_event_open with exclude_hv = 1 and exclude_kernel = 0. There is no separate hypervisor privilege level when VHE is enabled; the host kernel runs at EL2. So when VHE is enabled, we should ignore exclude_hv from the application. This behaviour is consistent with PowerPC, where exclude_hv is ignored when the hypervisor is not present, and with x86, where this flag is ignored.
  Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
  [will: added comment to justify the behaviour of exclude_hv]
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
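An illustrative user-space sketch of the policy change (struct and return values simplified; the real check lives in the arm64 PMU event-filter setup):

    #include <stdbool.h>
    #include <stdio.h>

    struct attr_flags {
        bool exclude_kernel;
        bool exclude_hv;
    };

    /* Old behaviour when the kernel runs in HYP (VHE): mismatched flags were
     * rejected, breaking e.g. HHVM's exclude_hv = 1, exclude_kernel = 0. */
    static int filter_old(const struct attr_flags *attr)
    {
        if (attr->exclude_kernel != attr->exclude_hv)
            return -1;              /* -EINVAL in the kernel */
        return 0;
    }

    /* New behaviour: with VHE the host kernel runs at EL2 and there is no
     * separate hypervisor privilege level, so exclude_hv is simply ignored. */
    static int filter_new(const struct attr_flags *attr)
    {
        (void)attr->exclude_hv;
        return 0;
    }

    int main(void)
    {
        struct attr_flags hhvm = { .exclude_kernel = false, .exclude_hv = true };

        printf("old: %d\n", filter_old(&hhvm));   /* rejected */
        printf("new: %d\n", filter_new(&hhvm));   /* accepted */
        return 0;
    }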
* arm64: pmu: Wire-up Cortex A53 L2 cache events and DTLB refills (Florian Fainelli, 2017-04-28, 1 file changed, -0/+6)
  Add missing L2 cache events: read/write accesses and misses, as well as the DTLB refills.
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: pmuv3: handle pmuv3+ (Mark Rutland, 2017-04-25, 1 file changed, -3/+4)
  Commit f1b36dcb5c316c27 ("arm64: pmuv3: handle !PMUv3 when probing") is a little too restrictive, and prevents the use of backwards-compatible PMUv3 extensions, which have a PMUVer value other than 1. For instance, the ARMv8.1 PMU extensions (as implemented by ThunderX2) are reported with PMUVer value 4. Per the usual ID register principles, at least 0x1-0x7 imply a PMUv3-compatible PMU. It's not currently clear whether 0x8-0xe imply the same. For the time being, treat the value as signed, with 0x1-0x7 treated as meaning PMUv3 is implemented. This may be relaxed by future patches.
  Reported-by: Jayachandran C <jnair@caviumnetworks.com>
  Tested-by: Jayachandran C <jnair@caviumnetworks.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: pmuv3: use arm_pmu ACPI framework (Mark Rutland, 2017-04-11, 1 file changed, -17/+9)
  Now that we have a framework to handle the ACPI bits, make the PMUv3 code use this. The framework is a little different to what was originally envisaged, and we can drop some unused support code in the process of moving over to it.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Tested-by: Jeremy Linton <jeremy.linton@arm.com>
  [will: make armv8_pmu_driver_init static]
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: pmuv3: handle !PMUv3 when probing (Mark Rutland, 2017-04-11, 1 file changed, -16/+71)
  When probing via ACPI, we won't know up-front whether a CPU has a PMUv3-compatible PMU. Thus we need to consult ID registers during probe time. This patch updates our PMUv3 probing code to test for the presence of PMUv3 functionality before touching any PMUv3-specific registers, and before updating the struct arm_pmu with PMUv3 data. When a PMUv3-compatible PMU is not present, probing will return -ENODEV.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* KVM: arm64: Fix the issues when guest PMCCFILTR is configured (Wei Huang, 2016-11-18, 1 file changed, -9/+1)
  KVM calls kvm_pmu_set_counter_event_type() when PMCCFILTR is configured. But this function can't deal with PMCCFILTR correctly because the evtCount bits of PMCCFILTR, which are reserved 0, conflict with the SW_INCR event type of the other PMXEVTYPER<n> registers. To fix it, when eventsel == 0, this function shouldn't return immediately; instead it needs to check further whether select_idx is ARMV8_PMU_CYCLE_IDX. Another issue is that KVM shouldn't copy the eventsel bits of PMCCFILTR blindly to attr.config. Instead it ought to convert the request to the "cpu cycle" event type (i.e. 0x11). To support this patch and to prevent duplicated definitions, a limited set of ARMv8 perf event types were relocated from perf_event.c to asm/perf_event.h.
  Cc: stable@vger.kernel.org # 4.6+
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Wei Huang <wei@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
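A user-space sketch of the two fixes described above (constants follow the ARMv8 PMU: SW_INCR is event 0x00, CPU_CYCLES is event 0x11, and the cycle counter has its own index; field extraction and index value are illustrative, not KVM's code):

    #include <stdio.h>

    #define ARMV8_PMU_CYCLE_IDX             31
    #define ARMV8_PMUV3_PERFCTR_SW_INCR     0x00
    #define ARMV8_PMUV3_PERFCTR_CPU_CYCLES  0x11

    static void set_counter_event_type(unsigned long data, int select_idx)
    {
        unsigned long eventsel = data & 0x3ff;  /* evtCount field (sketch) */
        unsigned long config;

        /* Only skip programming for a real SW_INCR request; PMCCFILTR's
         * evtCount is reserved 0, so the cycle counter must not bail out. */
        if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
            select_idx != ARMV8_PMU_CYCLE_IDX)
            return;

        /* For the cycle counter, request the "cpu cycle" event instead of
         * blindly copying the (reserved) evtCount bits. */
        config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
                 ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;

        printf("idx %d -> config 0x%lx\n", select_idx, config);
    }

    int main(void)
    {
        set_counter_event_type(0x0, ARMV8_PMU_CYCLE_IDX);  /* PMCCFILTR */
        set_counter_event_type(0x0, 3);                    /* real SW_INCR */
        return 0;
    }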
* arm64: pmu: Hoist pmu platform device name (Jeremy Linton, 2016-09-16, 1 file changed, -1/+1)
  Move the PMU name into a common header file so it may be referenced by other users.
  Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: pmu: Probe default hw/cache counters (Jeremy Linton, 2016-09-16, 1 file changed, -4/+41)
  ARMv8 machines can identify the micro/arch defined counters that are available on a machine. Add all these counters to the default armv8 perf map. At run-time disable the counters which are not available on the given PMU.
  Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: pmu: add fallback probe table (Mark Salter, 2016-09-16, 1 file changed, -1/+12)
  In preparation for ACPI support, add a pmu_probe_info table to the arm_pmu_device_probe() call. This table gets used when probing in the absence of a devicetree node for the PMU.
  Signed-off-by: Mark Salter <msalter@redhat.com>
  Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: move to common attr_group fields (Mark Rutland, 2016-09-09, 1 file changed, -12/+24)
  By using a common attr_groups array, the common arm_pmu code can set up common files (e.g. cpumask) for us in subsequent patches.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Use the builtin_platform_driver (Kefeng Wang, 2016-08-22, 1 file changed, -5/+1)
  Use the builtin_platform_driver() to simplify code.
  Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: don't expose CHAIN event in sysfs (Will Deacon, 2016-04-25, 1 file changed, -2/+1)
  The CHAIN event allows two 32-bit counters to be treated as a single 64-bit counter, under certain allocation restrictions on the PMU. Whilst userspace could theoretically create CHAIN events using the raw event syntax, we don't really want to advertise this in sysfs, since it's useless in isolation. This patch removes the event from our /sys entries.
  Reported-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64/perf: Add Broadcom Vulcan PMU support (Ashok Kumar, 2016-04-25, 1 file changed, -0/+60)
  Broadcom Vulcan uses ARMv8 PMUv3 and supports most of the ARMv8 recommended implementation defined events. Add Vulcan event mappings for the perf and perf_cache maps.
  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64/perf: Filter common events based on PMCEIDn_EL0 (Ashok Kumar, 2016-04-25, 1 file changed, -15/+55)
  The complete set of common architectural and micro-architectural event numbers is filtered based on PMCEIDn_EL0 and exposed to /sys using the is_visible function pointer in the events attribute_group. To filter the events in the is_visible function, a PMCEID-based bitmap is stored in the arm_pmu structure, and the id field from perf_pmu_events_attr is checked against that bitmap. The function which derives the event bitmap from PMCEIDn_EL0 is executed on the CPUs that have the PMU being initialized, for heterogeneous PMU support.
  Acked-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
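A user-space sketch of the filtering idea described above: an is_visible-style hook consults a bitmap derived from PMCEID0/1_EL0 and hides events the CPU does not advertise (names and register values are illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_COMMON_EVENTS 64

    static unsigned long long pmceid_bitmap;  /* bit n set => event n present */

    static void fill_pmceid_bitmap(unsigned int pmceid0, unsigned int pmceid1)
    {
        pmceid_bitmap = (unsigned long long)pmceid1 << 32 | pmceid0;
    }

    static bool event_is_visible(unsigned int event_id)
    {
        if (event_id >= MAX_COMMON_EVENTS)
            return false;
        return pmceid_bitmap & (1ULL << event_id);
    }

    int main(void)
    {
        /* Pretend the CPU advertises only events 0x11 (cycles) and 0x08. */
        fill_pmceid_bitmap((1u << 0x11) | (1u << 0x08), 0);

        printf("0x11 visible: %d\n", event_is_visible(0x11));
        printf("0x1b visible: %d\n", event_is_visible(0x1b));
        return 0;
    }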
* arm64/perf: Access pmu register using <read/write>_sys_reg (Ashok Kumar, 2016-04-25, 1 file changed, -17/+16)
  Change PMU register accesses to make use of <read/write>_sys_reg from sysreg.h instead of accessing the registers directly.
  Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
  Reviewed-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64/perf: Define complete ARMv8 recommended implementation defined events (Ashok Kumar, 2016-04-25, 1 file changed, -0/+79)
  Define all the ARMv8 recommended implementation defined events from J3 - "ARM recommendations for IMPLEMENTATION DEFINED event numbers" in ARM DDI 0487A.g.
  Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
  Reviewed-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64/perf: Changed events naming as per the ARM ARM (Ashok Kumar, 2016-04-25, 1 file changed, -153/+153)
  Change all the common event name definitions to match ARM DDI 0487A.g. SoC-specific event names follow the general naming style in the file and don't reflect any document. Change ARMV8_A53_PERFCTR_PREFETCH_LINEFILL to ARMV8_A53_PERFCTR_PREF_LINEFILL to match the other SoC-specific event names, which use the _PREF_ style. Also correct the typo l21 to l2i.
  Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
  Reviewed-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Move PMU register related defines to asm/perf_event.h (Shannon Zhao, 2016-03-29, 1 file changed, -53/+19)
  To use the ARMv8 PMU related register defines from the KVM code, we move the relevant definitions to the asm/perf_event.h header file and rename them with the prefix ARMV8_PMU_. This allows us to get rid of kvm_perf_event.h.
  Signed-off-by: Anup Patel <anup.patel@linaro.org>
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Acked-by: Marc Zyngier <marc.zyngier@arm.com>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* Merge tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2016-03-21, 1 file changed, -22/+100)
  Pull arm[64] perf updates from Will Deacon:
  "I have another mixed bag of ARM-related perf patches here. It's about 25% CPU and 75% interconnect, but with drivers/bus/ languishing without an obvious maintainer or tree, Olof and I agreed to keep all of these PMU patches together. I suspect a whole load of code from drivers/bus/arm-* can be moved under drivers/perf/, so that's on the radar for the future.
  Summary:
  - Initial support for ARMv8.1 CPU PMUs
  - Support for the CPU PMU in Cavium ThunderX
  - CPU PMU support for systems running 32-bit Linux in secure mode
  - Support for the system PMU in ARM CCI-550 (Cache Coherent Interconnect)"
  * tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (26 commits)
    drivers/perf: arm_pmu: avoid NULL dereference when not using devicetree
    arm64: perf: Extend ARMV8_EVTYPE_MASK to include PMCR.LC
    arm-cci: remove unused variable
    arm-cci: don't return value from void function
    arm-cci: make private functions static
    arm-cci: CoreLink CCI-550 PMU driver
    arm-cci500: Rearrange PMU driver for code sharing with CCI-550 PMU
    arm-cci: CCI-500: Work around PMU counter writes
    arm-cci: Provide hook for writing to PMU counters
    arm-cci: Add helper to enable PMU without synchornising counters
    arm-cci: Add routines to save/restore all counters
    arm-cci: Get the status of a counter
    arm-cci: write_counter: Remove redundant check
    arm-cci: Delay PMU counter writes to pmu::pmu_enable
    arm-cci: Refactor CCI PMU enable/disable methods
    arm-cci: Group writes to counter
    arm-cci: fix handling cpumask_any_but return value
    arm-cci: simplify sysfs attr handling
    drivers/perf: arm_pmu: implement CPU_PM notifier
    arm64: dts: Add Cavium ThunderX specific PMU
    ...
| * arm64: perf: Extend ARMV8_EVTYPE_MASK to include PMCR.LC (Will Deacon, 2016-02-29, 1 file changed, -1/+1)
  Commit 7175f0591eb9 ("arm64: perf: Enable PMCR long cycle counter bit") added initial support for a 64-bit cycle counter enabled using PMCR.LC. Unfortunately, that patch doesn't extend ARMV8_EVTYPE_MASK, so any attempts to set the enable bit are ignored by armv8pmu_pmcr_write. This patch extends the mask to include the new bit.
  Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: perf: Extend event mask for ARMv8.1 (Jan Glauber, 2016-02-18, 1 file changed, -2/+2)
  ARMv8.1 increases the PMU event number space to 16 bits, so increase the EVTYPE mask.
  Signed-off-by: Jan Glauber <jglauber@cavium.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: perf: Enable PMCR long cycle counter bit (Jan Glauber, 2016-02-18, 1 file changed, -5/+16)
  With the long cycle counter bit (LC) disabled, the cycle counter does not work on the ThunderX SoC (ThunderX only implements AArch64). Also, according to the documentation, LC == 0 is deprecated. To keep the code simple, the patch does not introduce 64-bit wide counter functions. Instead, writing the cycle counter always sets the upper 32 bits, so overflow interrupts are generated as before.
  Original patch from Andrew Pinksi <Andrew.Pinksi@caviumnetworks.com>
  Signed-off-by: Jan Glauber <jglauber@cavium.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
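A user-space sketch of the trick described above: with PMCR.LC set the cycle counter is 64 bits wide, but always setting the upper 32 bits when (re)programming it preserves the 32-bit overflow-interrupt cadence (the variable stands in for PMCCNTR_EL0; illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t pmccntr;    /* stands in for PMCCNTR_EL0 */

    static void write_cycle_counter(uint32_t value)
    {
        /* Setting bits [63:32] means the 64-bit counter still overflows
         * (and interrupts) after 2^32 - value increments, exactly as it
         * did when only the low 32 bits were used. */
        pmccntr = 0xffffffff00000000ULL | value;
    }

    int main(void)
    {
        write_cycle_counter(0x80000000u);   /* overflow after 2^31 cycles */
        printf("PMCCNTR = 0x%016llx\n", (unsigned long long)pmccntr);
        return 0;
    }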
| * arm64/perf: Add Cavium ThunderX PMU support (Jan Glauber, 2016-02-18, 1 file changed, -1/+68)
  Support PMU events on Cavium's ThunderX SoC. ThunderX supports some additional counters compared to the default ARMv8 PMUv3:
  - branch instructions counter
  - stall frontend & backend counters
  - L1 dcache load & store counters
  - L1 icache counters
  - iTLB & dTLB counters
  - L1 dcache & icache prefetch counters
  Signed-off-by: Jan Glauber <jglauber@cavium.com>
  [will: capitalisation]
  Signed-off-by: Will Deacon <will.deacon@arm.com>
| * arm64: perf: Rename Cortex A57 events (Jan Glauber, 2016-02-18, 1 file changed, -14/+14)
  The implemented Cortex A57 events are, strictly speaking, not A57 specific. They are ARM recommended implementation defined events and can be found on other ARMv8 SoCs like Cavium ThunderX too. Therefore rename these events to allow using them in other implementations too.
  Signed-off-by: Jan Glauber <jglauber@cavium.com>
  [will: capitalisation and ordering]
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* | arm64: perf: Count EL2 events if the kernel is running in HYP (Marc Zyngier, 2016-02-29, 1 file changed, -1/+5)
  When the kernel is running in HYP (with VHE), it is necessary to include EL2 events if the user requests counting kernel or hypervisor events.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Acked-by: Catalin Marinas <catalin.marinas@arm.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: perf: add support for Cortex-A72 (Will Deacon, 2015-12-22, 1 file changed, -1/+12)
  Cortex-A72 has a PMUv3 implementation that is compatible with the PMU implemented by Cortex-A57. This patch hooks up the new compatible string so that the Cortex-A57 event mappings are used.
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: add format entry to describe event -> config mapping (Will Deacon, 2015-12-22, 1 file changed, -2/+16)
  It's all very well providing an events directory to userspace that details our events in terms of "event=0xNN", but if we don't define how to encode the "event" field in the perf attr.config, then it's a waste of time. This patch adds a single format entry to describe that the event field occupies the bottom 10 bits of our config field on ARMv8 (PMUv3).
  Signed-off-by: Will Deacon <will.deacon@arm.com>
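A runnable user-space sketch of the encoding the format entry documents: the PMU event number goes in the low bits of perf_event_attr.config. The raw perf_event_open(2) call below is only an illustration (0x11 is the architected CPU cycles event); real tooling would normally go through the sysfs format files instead:

    #include <linux/perf_event.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct perf_event_attr attr;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_RAW;   /* or the armv8_pmuv3 dynamic PMU type */
        attr.config = 0x11;          /* "event" field: low bits of config */

        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0)
            perror("perf_event_open");
        else
            close(fd);
        return 0;
    }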
* arm64: kernel: enforce pmuserenr_el0 initialization and restore (Lorenzo Pieralisi, 2015-12-21, 1 file changed, -3/+0)
  The pmuserenr_el0 register value is architecturally UNKNOWN on reset. Current kernel code resets that register value iff the core pmu device is correctly probed in the kernel. On platforms with missing DT pmu nodes (or disabled perf events in the kernel), the pmu is not probed, therefore the pmuserenr_el0 register is not reset in the kernel, which means that its value retains the reset value that is architecturally UNKNOWN (the system may run with e.g. pmuserenr_el0 == 0x1, which means that PMU counter access is available at EL0, which must be disallowed). This patch adds code that resets pmuserenr_el0 on cold boot and restores it on core resume from shutdown, so that the pmuserenr_el0 setup is always enforced in the kernel.
  Cc: <stable@vger.kernel.org>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
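A kernel-context sketch of the reset described above, written as the equivalent sysreg write (assumes write_sysreg() from <asm/sysreg.h>; the actual change touched the low-level CPU setup and resume paths rather than a standalone helper like this):

    static void reset_pmuserenr_el0(void)
    {
        /* PMUSERENR_EL0 is architecturally UNKNOWN out of reset; clear it
         * so EL0 cannot access the PMU unless perf explicitly enables it. */
        write_sysreg(0, pmuserenr_el0);
    }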
* arm64: perf: Add event descriptions (Drew Richardson, 2015-11-16, 1 file changed, -0/+138)
  Add additional information about the ARM architected hardware events to make counters self describing. This makes the hardware PMUs easier to use as perf list contains possible events instead of users having to refer to documentation like the ARM TRMs.
  Signed-off-by: Drew Richardson <drew.richardson@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Convert event enums to #defines (Drew Richardson, 2015-11-16, 1 file changed, -50/+45)
  The enums are not necessary and this allows the event values to be used to construct static strings at compile time.
  Signed-off-by: Drew Richardson <drew.richardson@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: add Cortex-A57 support (Mark Rutland, 2015-10-07, 1 file changed, -0/+61)
  The Cortex-A57 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: perf: add Cortex-A53 support (Mark Rutland, 2015-10-07, 1 file changed, -2/+62)
  The Cortex-A53 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: perf: move to shared arm_pmu framework (Mark Rutland, 2015-10-07, 1 file changed, -861/+90)
  Now that the arm_pmu framework has been factored out to drivers/perf we can make use of it for arm64, gaining support for heterogeneous PMUs and unifying the two codebases before they diverge further. The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching the style previously applied to the 32-bit PMUs.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: perf: condense event number maps (Mark Rutland, 2015-07-27, 1 file changed, -102/+22)
  Most of the cache events an architecture might support do not map well to those provided by the ARM architecture, and as such most entries in the event number maps are *_UNSUPPORTED. Unfortunately, as 0 is a valid physical event identifier, the *_UNSUPPORTED macros expand to a non-zero value and thus each unsupported event must be explicitly initialised as such. This leads to large diffs when adding support for a new CPU, and makes it difficult to spot the important information. This patch follows arch/arm/ in making use of PERF_*_ALL_UNSUPPORTED macros to initialise all entries to *_UNSUPPORTED before overriding this for the specific events we actually support, resulting in a significant source code reduction.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
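A user-space sketch of the *_ALL_UNSUPPORTED idiom: default every slot with a ranged designated initializer (a GCC extension, as used in the kernel), then override only the events actually supported. Sizes, macro name and event codes are illustrative:

    #include <stdio.h>

    #define HW_OP_UNSUPPORTED  0xffff
    #define NR_HW_EVENTS       10

    #define PERF_MAP_ALL_UNSUPPORTED \
        [0 ... NR_HW_EVENTS - 1] = HW_OP_UNSUPPORTED

    static const unsigned int example_perf_map[NR_HW_EVENTS] = {
        PERF_MAP_ALL_UNSUPPORTED,
        [0] = 0x11,     /* cycles */
        [1] = 0x08,     /* instructions */
    };

    int main(void)
    {
        int i;

        for (i = 0; i < NR_HW_EVENTS; i++)
            printf("hw event %d -> 0x%x\n", i, example_perf_map[i]);
        return 0;
    }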
* arm64: perf: factor out callchain code (Mark Rutland, 2015-07-27, 1 file changed, -178/+0)
  We currently bundle the callchain handling code with the PMU code, despite the fact the two are distinct, and the former can be useful even in the absence of the latter. Follow the example of arch/arm and factor the callchain handling into its own file dependent on CONFIG_PERF_EVENTS rather than CONFIG_HW_PERF_EVENTS.
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Cc: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: replace arch_find_n_match_cpu_physical_id with of_cpu_device_node_get (Sudeep Holla, 2015-07-27, 1 file changed, -2/+2)
  arch_find_n_match_cpu_physical_id parses the device tree to get the device node for a given logical cpu index. However, since ARM PMUs get probed after the CPU device nodes are stashed while registering the cpus, we can use of_cpu_device_node_get to avoid another DT parse. This patch replaces arch_find_n_match_cpu_physical_id with of_cpu_device_node_get to reuse the stashed value directly instead.
  Cc: Will Deacon <will.deacon@arm.com>
  Cc: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Remove unnecessary printk (Suzuki K. Poulose, 2015-07-27, 1 file changed, -3/+1)
  The ARM64 pmu prints an error message in event_init() when no hardware PMU is available. This is pretty annoying as it keeps printing the message for every single attempt, unnecessarily flooding the kernel logs. The return code is sufficient for the user to figure out the reason.
  Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: fix unassigned cpu_pmu->plat_device when probing PMU PPIs (Shannon Zhao, 2015-06-30, 1 file changed, -1/+2)
  Commit d795ef9aa831 ("arm64: perf: don't warn about missing interrupt-affinity property for PPIs") added a check for PPIs so that we avoid parsing the interrupt-affinity property for these naturally affine interrupts. Unfortunately, this check can trigger an early (successful) return and we will not assign the value of cpu_pmu->plat_device. This patch fixes the issue.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Don't use of_node after putting it (Stephen Boyd, 2015-06-30, 1 file changed, -1/+2)
  It's possible, albeit unlikely, that using the of_node here will reference freed memory. Call of_node_put() after printing the name to be safe.
  Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
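A kernel-context sketch of the hazard and the fix (assumes <linux/of.h>; the variable name and the message text are illustrative, not quotes of the driver):

    /* Racy: the reference is dropped before the node's name is used. */
    of_node_put(node);
    pr_warn("failed to parse interrupt affinity for %s\n", node->name);

    /* Safe: use the node first, then drop the reference. */
    pr_warn("failed to parse interrupt affinity for %s\n", node->name);
    of_node_put(node);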