path: root/arch/arm/kernel
Commit message | Author | Age | Files | Lines
...
| | | * | | | | | ARM: perf: move arm_pmu into <asm/pmu.h> | Mark Rutland | 2011-08-31 | 1 | -50/+3
    Currently, struct arm_pmu and related functions are only visible to
    {,arch/arm/}/kernel/perf_event.c. This prevents new drivers from using the
    framework. This patch moves declarations to asm/pmu.h, allowing new PMU
    drivers to use the framework.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: remove cpu-related misnomers | Mark Rutland | 2011-08-31 | 4 | -78/+78
    Currently struct cpu_hw_events stores data on events running on a PMU
    associated with a CPU. As this data is general enough to be used for system
    PMUs, this name is a misnomer, and may cause confusion when it is used for
    system PMUs.

    Additionally, 'armpmu' is commonly used as a parameter name for an instance
    of struct arm_pmu. The name is also used for a global instance which
    represents the CPU's PMU.

    As cpu_hw_events is now not tied to CPU PMUs, it is renamed to
    pmu_hw_events, with instances of it renamed similarly. As the global
    'armpmu' is CPU-specific, it is renamed to cpu_pmu. This should make it
    clearer which code is generic, and which is coupled with the CPU.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: remove event limit from pmu_hw_events | Mark Rutland | 2011-08-31 | 1 | -2/+7
    Currently the event accounting data in pmu_hw_events is stored in
    fixed-sized arrays within the structure. This patch refactors the
    accounting data to allow any number of events to be managed.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: add support for multiple PMUs | Mark Rutland | 2011-08-31 | 1 | -22/+41
    Currently, a single static instance of struct pmu is used when registering
    an ARM PMU with the main perf subsystem. This limits the ARM perf code to
    supporting a single PMU.

    This patch replaces the static struct pmu instance with a member variable
    on struct arm_pmu. This provides bidirectional mapping between the two
    structs, and therefore allows for support of multiple PMUs. The function
    'to_arm_pmu' is provided for convenience.

    PMU-generic functions are also updated to use the new mapping, and
    PMU-generic initialisation of the member variables is moved into a new
    function: armpmu_init.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
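    The member/subject mapping described in this commit is the standard
    container_of idiom. A minimal standalone C sketch of the pattern (the
    struct layout, field names and values are illustrative, not the kernel's
    actual definitions):

        #include <stddef.h>
        #include <stdio.h>

        struct pmu {                    /* stand-in for the core perf struct */
            const char *name;
        };

        struct arm_pmu {
            struct pmu pmu;             /* member instance, one per ARM PMU */
            int num_events;
        };

        #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

        static struct arm_pmu *to_arm_pmu(struct pmu *p)
        {
            /* recover the outer arm_pmu from the embedded struct pmu */
            return container_of(p, struct arm_pmu, pmu);
        }

        int main(void)
        {
            struct arm_pmu cpu_pmu = { .pmu = { .name = "cpu" }, .num_events = 6 };
            struct pmu *p = &cpu_pmu.pmu;    /* what core perf code hands back */

            printf("%s has %d counters\n", p->name, to_arm_pmu(p)->num_events);
            return 0;
        }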
| | | * | | | | | ARM: perf: refactor event mapping | Mark Rutland | 2011-08-31 | 4 | -52/+87
    Currently mapping an event type to a hardware configuration value depends
    on the data being pointed to from struct arm_pmu. These fields (cache_map,
    event_map, raw_event_mask) are currently specific to CPU PMUs, and do not
    serve the general case well.

    This patch replaces the event map pointers on struct arm_pmu with a new
    'map_event' function pointer. Small shim functions are used to reuse the
    existing common code.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
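    A rough sketch of what a per-instance 'map_event' callback looks like in
    isolation; the event numbers, encodings and function names below are
    placeholders, not the real ARM event mappings:

        #include <stdio.h>

        enum { EV_CYCLES, EV_INSTRUCTIONS, EV_MAX };

        struct arm_pmu {
            const char *name;
            /* each PMU supplies its own generic-event -> hw-config mapping */
            int (*map_event)(int event);
        };

        static int cpu_map_event(int event)
        {
            static const int cpu_map[EV_MAX] = {
                [EV_CYCLES]       = 0x11,    /* placeholder encodings */
                [EV_INSTRUCTIONS] = 0x08,
            };
            return (event >= 0 && event < EV_MAX) ? cpu_map[event] : -1;
        }

        static int armpmu_event_init(struct arm_pmu *pmu, int event)
        {
            int config = pmu->map_event(event);  /* generic code, PMU-specific map */
            if (config < 0)
                return -1;
            printf("%s: event %d -> config 0x%x\n", pmu->name, event, (unsigned)config);
            return 0;
        }

        int main(void)
        {
            struct arm_pmu cpu_pmu = { .name = "cpu", .map_event = cpu_map_event };
            return armpmu_event_init(&cpu_pmu, EV_CYCLES);
        }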
| | | * | | | | | ARM: perf: add type field to struct arm_pmu | Mark Rutland | 2011-08-31 | 1 | -2/+4
    Currently, the ARM perf code assumes all PMUs it will handle are CPU PMUs,
    having ARM_PMU_DEVICE_CPU hardcoded when reserving or releasing hardware.
    This means that currently, the ARM perf code can't support system PMUs.

    This patch adds a 'type' field to struct arm_pmu, which allows the code to
    reserve & release the hardware regardless of the PMU type.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: lock PMU registers per-CPU | Mark Rutland | 2011-08-31 | 4 | -40/+62
    Currently, a single lock serialises access to CPU PMU registers. This
    global locking is unnecessary as PMU registers are local to the CPU they
    monitor.

    This patch replaces the global lock with a per-CPU lock. As the lock is in
    struct cpu_hw_events, PMUs providing a single cpu_hw_events instance can be
    locked globally.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
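    A userspace approximation of the locking change, with one lock per CPU kept
    inside the per-CPU hardware-events structure; pthread mutexes stand in for
    raw spinlocks and the structure layout is invented for illustration:

        #include <pthread.h>
        #include <stdio.h>

        #define NR_CPUS 4

        struct pmu_hw_events {
            pthread_mutex_t pmu_lock;   /* protects this CPU's PMU state only */
            int enabled;
        };

        static struct pmu_hw_events hw_events[NR_CPUS];

        static void pmu_enable_on(int cpu)
        {
            struct pmu_hw_events *events = &hw_events[cpu];

            pthread_mutex_lock(&events->pmu_lock);   /* no cross-CPU contention */
            events->enabled = 1;
            pthread_mutex_unlock(&events->pmu_lock);
        }

        int main(void)
        {
            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                pthread_mutex_init(&hw_events[cpu].pmu_lock, NULL);
                pmu_enable_on(cpu);
            }
            printf("per-CPU locks initialised and exercised\n");
            return 0;
        }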
| | | * | | | | | ARM: perf: remove unnecessary armpmu->stop | Mark Rutland | 2011-08-31 | 1 | -1/+0
    As armpmu_disable will call armpmu->stop when the last event has been
    removed, this extra call is pointless and simply adds to the noise when
    debugging. Additionally, due to this call occurring in a preemptible
    context, it is problematic for per-cpu locking of PMU registers (where we
    will attempt to access a per-cpu spinlock for use with
    raw_spin_lock_irqsave).

    This patch removes the call to armpmu->stop.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: indirect access to cpu_hw_events | Mark Rutland | 2011-08-31 | 1 | -3/+15
    Currently, cpu_hw_events is a global per-CPU variable. To enable support
    for multiple PMUs, there needs to be a mapping from an instance of arm_pmu
    to its cpu_hw_events. Additionally, as system PMUs are not CPU-affine, they
    should not have this stored per-CPU.

    This patch moves access to the hardware events data behind an accessor
    function (arm_pmu::get_hw_events). This allows each instance to have its
    own hardware event data, which can be stored per-CPU or globally as
    required.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
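    A simplified sketch of the accessor idea: generic code asks the PMU
    instance where its hardware-events data lives instead of touching a global.
    All names and the single-slot "per-CPU" storage are illustrative only:

        #include <stdio.h>

        struct pmu_hw_events {
            int nr_active;
        };

        struct arm_pmu {
            const char *name;
            struct pmu_hw_events *(*get_hw_events)(void);
        };

        /* CPU PMU: data that would normally be per-CPU (one slot for brevity) */
        static struct pmu_hw_events cpu_hw_events;
        static struct pmu_hw_events *cpu_pmu_get_hw_events(void)
        {
            return &cpu_hw_events;
        }

        /* system (uncore) PMU: a single global instance */
        static struct pmu_hw_events sys_hw_events;
        static struct pmu_hw_events *sys_pmu_get_hw_events(void)
        {
            return &sys_hw_events;
        }

        static void armpmu_add(struct arm_pmu *pmu)
        {
            struct pmu_hw_events *events = pmu->get_hw_events();

            events->nr_active++;            /* generic code, instance-owned data */
            printf("%s: %d active\n", pmu->name, events->nr_active);
        }

        int main(void)
        {
            struct arm_pmu cpu = { "cpu", cpu_pmu_get_hw_events };
            struct arm_pmu sys = { "sys", sys_pmu_get_hw_events };

            armpmu_add(&cpu);
            armpmu_add(&sys);
            return 0;
        }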
| | | * | | | | | ARM: perf: move platform device to struct arm_pmu | Mark Rutland | 2011-08-31 | 1 | -4/+6
    Currently the ARM perf code supports having a single struct platform_device
    to supply IRQ numbers, limiting it to supporting a single PMU.

    This patch makes the platform_device an instance variable on struct
    arm_pmu. This should allow for multiple PMUs to be supported in future.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: move active_events into struct arm_pmu | Mark Rutland | 2011-08-31 | 1 | -11/+20
    This patch moves the active_events counter into struct arm_pmu, in
    preparation for supporting multiple PMUs. This also moves
    pmu_reserve_mutex, as it is used to guard accesses to active_events.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: remove active_mask | Mark Rutland | 2011-08-31 | 4 | -18/+18
    Currently, pmu_hw_events::active_mask is used to keep track of which events
    are active in hardware. As we can stop counters and their interrupts, this
    is unnecessary.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: clean up event group validation | Mark Rutland | 2011-08-31 | 1 | -1/+2
    Currently, event group validation compares each event's 'pmu' pointer
    against the static 'pmu' pointer. This limits the code to supporting only
    1 PMU.

    This patch changes the behaviour to consider an event's group leader's
    'pmu' pointer as canonical for validation. This should ease later
    generalisation of the code to support multiple PMUs at once.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: only register a CPU PMU when present | Mark Rutland | 2011-08-31 | 1 | -16/+2
    Currently, an "empty" struct pmu is registered as the CPU PMU, regardless
    of whether there is a physical PMU. This burdens the accessor functions
    with checks to see whether a PMU is actually present.

    This patch changes initialisation to register a PMU only if there is a
    supported PMU present, and removes the checks that this change makes
    redundant.

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Ashwin Chaugule <ashwinc@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: add mode exclusion for Cortex-A15 PMU | Will Deacon | 2011-08-31 | 1 | -9/+49
    The Cortex-A15 PMU implements the PMUv2 specification and therefore has
    support for some mode exclusion. This patch adds support for excluding
    user, kernel and hypervisor counts from a given event.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: allow armpmu to implement mode exclusion | Will Deacon | 2011-08-31 | 1 | -19/+25
    Modern PMUs allow for mode exclusion, so we no longer wish to return -EPERM
    if it is requested. This patch provides a hook in the armpmu structure for
    implementing mode exclusion. The hw_perf_event initialisation is slightly
    delayed so that the backend code can update the structure if required.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: index PMU registers from zero | Will Deacon | 2011-08-31 | 1 | -5/+4
    ARM PMU code used to use 1-based indices for PMU registers. This caused
    several data structures (pmu_hw_events::{active_events, used_mask, events})
    to have an unused element at index zero. ARMPMU_MAX_HWEVENTS still takes
    this indexing into account, and currently equates to 33.

    This patch updates the core ARM perf code to use the 0th index again.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: index Xscale and ARMv6 event counters starting from zero | Will Deacon | 2011-08-31 | 2 | -5/+5
    Now that the ARMv7 PMU backend indexes event counters from zero, follow
    suit and do the same for ARMv6 and Xscale.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: index ARMv7 event counters starting from zero | Will Deacon | 2011-08-31 | 1 | -151/+88
    The current ARMv7 PMU backend indexes event counters from two, with index
    zero being reserved and index one being used to represent the cycle
    counter. This patch tidies up the code by indexing from one instead (with
    zero for the cycle counter). This allows us to remove many of the accessor
    macros along with the counter enumeration and makes the code much more
    readable.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: use integers for ARMv7 event indices | Will Deacon | 2011-08-31 | 1 | -6/+6
    This patch ensures that integers are used to represent event indices in the
    ARMv7 PMU backend. This ensures consistency between functions and also with
    the arm_pmu structure.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: use u32 instead of unsigned long for PMNC register | Will Deacon | 2011-08-31 | 1 | -6/+6
    The ARMv7 perf backend mixes up u32 and unsigned long, which is rather
    ugly. This patch makes the ARMv7 PMU code consistently use the u32 type
    instead.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: use cpumask_t to record active IRQs | Will Deacon | 2011-08-31 | 1 | -33/+31
    Commit 5dfc54e0 ("ARM: GIC: avoid routing interrupts to offline CPUs")
    prevents the GIC from setting the affinity of an IRQ to a CPU with id >=
    nr_cpu_ids. This was previously abused by perf on some platforms where more
    IRQs were registered than possible CPUs.

    This patch fixes the problem by using a cpumask_t to keep track of the
    active (requested) interrupts in perf. The same effect could be achieved by
    limiting the number of IRQs to the number of CPUs, but using a mask instead
    will be useful for adding extended CPU hotplug support in the future.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
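    The bookkeeping described here boils down to a per-CPU bitmask of
    successfully requested IRQs, consulted again on release. A toy sketch using
    a plain bitmask in place of cpumask_t (the CPU count, failure rule and
    names are made up for illustration):

        #include <stdio.h>

        #define NR_CPUS 4

        static unsigned long active_irqs;  /* bit n set => IRQ requested on CPU n */

        static int request_pmu_irq(int cpu)
        {
            /* pretend request_irq() only succeeds on CPUs 0 and 1 */
            if (cpu > 1)
                return -1;
            active_irqs |= 1UL << cpu;
            return 0;
        }

        static void release_pmu_irqs(void)
        {
            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                if (active_irqs & (1UL << cpu)) {
                    printf("freeing IRQ for cpu %d\n", cpu);
                    active_irqs &= ~(1UL << cpu);
                }
            }
        }

        int main(void)
        {
            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                if (request_pmu_irq(cpu) != 0)
                    printf("no IRQ requested for cpu %d\n", cpu);
            }
            release_pmu_irqs();
            return 0;
        }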
| | | * | | | | | ARM: PMU: move CPU PMU platform device handling and init into perf | Will Deacon | 2011-08-31 | 2 | -186/+70
    Once upon a time, OProfile and Perf fought hard over who could play with
    the PMU. To stop all hell from breaking loose, pmu.c offered an internal
    reserve/release API and took care of parsing PMU platform data passed in
    from board support code.

    Now that Perf has ingested OProfile, let's move the platform device
    handling into the Perf driver and out of the PMU locking code.
    Unfortunately, the lock has to remain to prevent Perf being bitten by
    out-of-tree modules such as LTTng, which still claim a right to the PMU
    when Perf isn't looking.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | * | | | | | ARM: perf: de-const struct arm_pmu | Mark Rutland | 2011-08-31 | 4 | -21/+21
    This patch removes const qualifiers from instances of struct arm_pmu, and
    functions initialising them, in preparation for generalising arm_pmu usage
    to system (AKA uncore) PMUs. This will allow for dynamically modifiable
    structures (locks, struct pmu) to be added as members of struct arm_pmu.

    Acked-by: Jamie Iles <jamie@jamieiles.com>
    Reviewed-by: Jean Pihet <j-pihet@ti.com>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | | | | ARM: hw_breakpoint: reduce the number of WARN_ONCE invocations | Will Deacon | 2011-08-31 | 1 | -8/+8
    The ARM hw_breakpoint backend is currently a bit too noisy when things
    start to go awry. This patch removes a couple of over-zealous WARN_ONCE
    invocations and replaces them with pr_warnings instead.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | | | | ARM: hw_breakpoint: trap undef instruction exceptions in reset_ctrl_regs | Will Deacon | 2011-08-31 | 1 | -9/+37
    The ARM debug registers can only be accessed if the DBGSWENABLE signal to
    the core is driven HIGH by the DAP. The architecture does not provide a way
    to detect the value of this signal, so the best we can do is register an
    undef_hook to trap debug register co-processor accesses and then fail if
    the trap is taken.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | | | | ARM: hw_breakpoint: add support for multiple watchpoints | Will Deacon | 2011-08-31 | 1 | -35/+65
    ARM debug architecture 7.1 mandates that the DFAR is updated on a
    watchpoint debug exception to contain the faulting virtual address of the
    memory access. This allows us to determine which watchpoints have fired and
    therefore report useful information to userspace.

    This patch adds support for using the DFAR in the watchpoint handler, which
    allows us to support multiple watchpoints on CPUs implementing the new
    debug architecture.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | | | | ARM: hw_breakpoint: reserve one breakpoint for watchpoint stepping | Will Deacon | 2011-08-31 | 1 | -39/+24
    The current hw_breakpoint code on ARM reserves 1 breakpoint for each
    watchpoint that is available. Since debug architectures prior to 7.1 are
    restricted to 1 watchpoint anyway, only one breakpoint was ever reserved.

    This patch changes the reservation strategy so that a single breakpoint is
    reserved, regardless of the number of watchpoints. This is in preparation
    for multiple-watchpoint support on debug architectures from 7.1 onwards.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | * | | | | | ARM: hw_breakpoint: add initial Cortex-A15 (debug v7.1) support | Will Deacon | 2011-08-31 | 1 | -19/+36
    This patch adds initial support for Cortex-A15 (debug architecture v7.1)
    to the hw_breakpoint ARM backend.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| * | | | | | Merge branch 'zImage_DTB_append' of git://git.linaro.org/people/nico/linux into devel-stable | Russell King | 2011-09-15 | 1 | -1/+1
| * | | | | | ARM: remove boot_params from struct machine_desc | Nicolas Pitre | 2011-08-21 | 1 | -19/+0
    Now that there are no more users, we can remove it from the kernel.

    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
| * | | | | | ARM: introduce atag_offset to replace boot_params | Nicolas Pitre | 2011-08-21 | 1 | -0/+2
    The boot_params member of the mdesc structure is used to provide a default
    physical address for the ATAG list. Since this value is fixed at compile
    time and sometimes based on constants such as ARCH_PHYS_OFFSET, it gets in
    the way of runtime PHYS_OFFSET and CONFIG_ARM_PATCH_PHYS_VIRT usage.

    Let's introduce atag_offset, which should contain only the relative offset
    from PHYS_OFFSET instead of an absolute value, in preparation to move all
    instances of boot_params over to it.

    Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
    Tested-by: Petr Štetiar <ynezz@true.cz>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
* | | | | | | Merge branch 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2011-10-26 | 3 | -14/+14
    * 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
      rtmutex: Add missing rcu_read_unlock() in debug_rt_mutex_print_deadlock()
      lockdep: Comment all warnings
      lib: atomic64: Change the type of local lock to raw_spinlock_t
      locking, lib/atomic64: Annotate atomic64_lock::lock as raw
      locking, x86, iommu: Annotate qi->q_lock as raw
      locking, x86, iommu: Annotate irq_2_ir_lock as raw
      locking, x86, iommu: Annotate iommu->register_lock as raw
      locking, dma, ipu: Annotate bank_lock as raw
      locking, ARM: Annotate low level hw locks as raw
      locking, drivers/dca: Annotate dca_lock as raw
      locking, powerpc: Annotate uic->lock as raw
      locking, x86: mce: Annotate cmci_discover_lock as raw
      locking, ACPI: Annotate c3_lock as raw
      locking, oprofile: Annotate oprofilefs lock as raw
      locking, video: Annotate vga console lock as raw
      locking, latencytop: Annotate latency_lock as raw
      locking, timer_stats: Annotate table_lock as raw
      locking, rwsem: Annotate inner lock as raw
      locking, semaphores: Annotate inner lock as raw
      locking, sched: Annotate thread_group_cputimer as raw
      ...

    Fix up conflicts in kernel/posix-cpu-timers.c manually: making
    cputimer->cputime a raw lock conflicted with the ABBA fix in commit
    bcd5cff7216f ("cputimer: Cure lock inversion").
| * | | | | | | locking, ARM: Annotate low level hw locks as raw | Thomas Gleixner | 2011-09-13 | 3 | -14/+14
    Annotate the low level hardware locks which must not be preempted.

    In mainline this change documents the low level nature of the lock -
    otherwise there's no functional difference. Lockdep and Sparse checking
    will work as usual.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | | | | | | Merge branch 'misc' into for-linus | Russell King | 2011-10-25 | 9 | -53/+129
    Conflicts:
        arch/arm/mach-integrator/integrator_ap.c
| * | | | | | | ARM: 7133/1: SMP: fix per cpu timer setup before the cpu is marked online | Thomas Gleixner | 2011-10-23 | 1 | -10/+13
    The problem is related to the early enabling of interrupts and the per cpu
    timer setup before the cpu is marked online. This doesn't need to be done
    in order to call calibrate_delay().

    calibrate_delay() monitors jiffies, which are updated from the CPU which is
    waiting for the new CPU to set the online bit. So calibrate_delay() can
    simply be called on the new CPU from the interrupt-disabled region, and the
    local timer setup moved to after the cpu data has been stored and before
    interrupts are enabled.

    This solves both the cpu_online vs. cpu_active problem and the affinity
    setting of the per cpu timers.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | | | | ARM: 7137/1: Fix error upon adding LL debug | Afzal Mohammed | 2011-10-20 | 1 | -2/+2
    Upon adding LL debug support for a new board, if the resulting code
    addition makes the PC-relative offset of "hexbuf" from the "adr r2, hexbuf"
    (+2) instruction no longer representable as a shifted 8-bit value (which
    indirectly places a higher alignment requirement on larger offsets), the
    following error occurs:

        arch/arm/kernel/debug.S: Assembler messages:
        arch/arm/kernel/debug.S:138: Error: invalid constant (428) after fixup

    Fix it by bringing "hexbuf" closer so that "adr" can have the offset.

    Signed-off-by: Afzal Mohammed <afzal@ti.com>
    Acked-by: Tony Lindgren <tony@atomide.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | | | | ARM: 7098/1: kdump: copy kernel relocation code at the kexec prepare stage | Lei Wen | 2011-10-17 | 1 | -17/+18
    This copy really doesn't need to be done at the last moment before the
    kernel crashes.

    Signed-off-by: Lei Wen <leiwen@marvell.com>
    Acked-by: Simon Horman <horms@verge.net.au>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | | | | ARM: 7062/1: cache: detect PIPT I-cache using CTR | Will Deacon | 2011-10-17 | 1 | -2/+13
    The Cache Type Register L1Ip field identifies I-caches with a PIPT policy
    using the encoding 11b.

    This patch extends the cache policy parsing to identify PIPT I-caches
    correctly and prevent them from being treated as VIPT aliasing in cases
    where they are sufficiently large.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
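    For reference, the L1Ip field lives in CTR bits [15:14]; a small sketch of
    decoding it (the register value below is a made-up example rather than one
    read from hardware, and the enum names are illustrative):

        #include <stdint.h>
        #include <stdio.h>

        enum icache_policy { ICACHE_RESERVED, ICACHE_AIVIVT, ICACHE_VIPT, ICACHE_PIPT };

        static enum icache_policy ctr_l1ip(uint32_t ctr)
        {
            /* L1Ip is bits [15:14]; 0b11 identifies a PIPT I-cache */
            return (enum icache_policy)((ctr >> 14) & 0x3);
        }

        int main(void)
        {
            uint32_t ctr = 0x8444c004;   /* example value with L1Ip == 0b11 */

            printf("PIPT I-cache: %s\n",
                   ctr_l1ip(ctr) == ICACHE_PIPT ? "yes" : "no");
            return 0;
        }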
| * | | | | | | ARM: platform fixups: remove mdesc argument to fixup function | Russell King | 2011-10-17 | 1 | -1/+1
    Get rid of the mdesc pointer in the fixup function call. No one uses the
    mdesc pointer, it shouldn't be modified anyway, and we can't wrap it, so
    let's remove it.

    Platform files found by:

        $ regexp=$(git grep -h '\.fixup.*=' arch/arm | sed 's!.*= *\([^,]*\),* *!\1!' | sort -u | tr '\n' '|' | sed 's,|$,,;s,|,\\|,g')
        $ git grep $regexp arch/arm

    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | | | | ARM: 7017/1: Use generic BUG() handler | Simon Glass | 2011-10-17 | 2 | -11/+23
    ARM uses its own BUG() handler which makes its output slightly different
    from other architectures.

    One of the problems is that the ARM implementation doesn't report the
    function with the BUG() in it, but always reports the PC being in __bug().
    The generic implementation doesn't have this problem.

    Currently we get something like:

        kernel BUG at fs/proc/breakme.c:35!
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
        ...
        PC is at __bug+0x20/0x2c

    With this patch it displays:

        kernel BUG at fs/proc/breakme.c:35!
        Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
        ...
        PC is at write_breakme+0xd0/0x1b4

    This implementation uses an undefined instruction to implement BUG, and
    sets up a bug table containing the relevant information. Many versions of
    gcc do not support %c properly for ARM (inserting a # when they shouldn't)
    so we work around this using distasteful macro magic.

    v1: Initial version to replace existing ARM BUG() implementation with
        something more similar to other architectures.

    v2: Add Thumb support, remove backtrace whitespace output changes. Change
        to use macros instead of requiring the asm %d flag to work (thanks to
        Dave Martin <dave.martin@linaro.org>)

    v3: Remove old BUG() implementation in favor of this one. Remove the
        Backtrace: message (will submit this separately). Use ARM_EXIT_KEEP()
        so that some architectures can dump exit text at link time thanks to
        Stephen Boyd <sboyd@codeaurora.org> (although since we always define
        GENERIC_BUG this might be academic.) Rebase to linux-2.6.git master.

    v4: Allow BUGs in modules (these were not reported correctly in v3)
        (thanks to Stephen Boyd <sboyd@codeaurora.org> for suggesting that.)
        Remove __bug() as this is no longer needed.

    v5: Add %progbits as the section flags.

    Signed-off-by: Simon Glass <sjg@chromium.org>
    Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
    Tested-by: Stephen Boyd <sboyd@codeaurora.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | | | | ARM: 7068/1: process: change from __backtrace to dump_stack in show_regs | Laura Abbott | 2011-10-17 | 2 | -4/+1
    Currently, show_regs calls __backtrace which does nothing if
    CONFIG_FRAME_POINTER is not set. Switch to dump_stack which handles both
    CONFIG_FRAME_POINTER and CONFIG_ARM_UNWIND correctly. __backtrace is now
    superseded by dump_stack in general and show_regs was the last caller so
    remove __backtrace as well.

    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | | | | ARM: 7031/1: entry: Fix Thumb-2 undef handling for multi-CPU kernels | Dave Martin | 2011-10-17 | 1 | -1/+37
    When v6 and >=v7 boards are supported in the same kernel, the __und_usr
    code currently makes a build-time assumption that Thumb-2 instructions
    occurring in userspace don't need to be supported. Strictly speaking this
    is incorrect.

    This patch fixes the above case by doing a run-time check on the CPU
    architecture in these cases. This only affects kernels which support v6 and
    >=v7 CPUs together: plain v6 and plain v7 kernels are unaffected.

    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Reviewed-by: Jon Medhurst <tixy@yxit.co.uk>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| * | | | | | | ARM: 7030/1: entry: Remove unnecessary masking when decoding Thumb-2 instructions | Dave Martin | 2011-10-17 | 1 | -4/+2
    When testing whether a Thumb-2 instruction is 32 bits long or not, the
    masking done in order to test bits 11-15 of the first instruction halfword
    won't affect the result of the comparison, so remove it.

    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Reviewed-by: Jon Medhurst <tixy@yxit.co.uk>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
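    The width test in question relies on the fact that the first halfword of
    every 32-bit Thumb-2 encoding has bits [15:11] in {0b11101, 0b11110,
    0b11111}, i.e. the halfword is >= 0xe800, so masking before the compare
    changes nothing. A small C sketch of the check (example opcodes chosen for
    illustration):

        #include <stdint.h>
        #include <stdio.h>

        static int thumb_insn_is_32bit(uint16_t first_halfword)
        {
            /* (hw & 0xf800) >= 0xe800 and hw >= 0xe800 give the same answer,
             * because the threshold's low 11 bits are all zero. */
            return first_halfword >= 0xe800;
        }

        int main(void)
        {
            printf("0x4770 (bx lr)  -> %d-bit\n", thumb_insn_is_32bit(0x4770) ? 32 : 16);
            printf("0xf000 (bl ...) -> %d-bit\n", thumb_insn_is_32bit(0xf000) ? 32 : 16);
            return 0;
        }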
| * | | | | | | ARM: 7029/1: Make cpu_architecture into a global variable | Dave Martin | 2011-10-17 | 1 | -1/+19
    The CPU architecture really should not be changing at runtime, so make it a
    global variable instead of a function.

    The cpu_architecture() function declared in <asm/system.h> remains the
    correct way to read this variable from C code.

    Signed-off-by: Dave Martin <dave.martin@linaro.org>
    Reviewed-by: Jon Medhurst <tixy@yxit.co.uk>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
*-----------. Merge branches 'arnd-randcfg-fixes', 'debug', 'io' (early part), 'l2x0', 'p2v', 'pgt' (early part) and 'smp' into for-linus | Russell King | 2011-10-25 | 12 | -120/+268
| | | | | | | * | | | ARM: 7115/4: move __exception and friends to asm/exception.h | Jamie Iles | 2011-10-17 | 3 | -2/+3
    The definition of __exception_irq_entry for CONFIG_FUNCTION_GRAPH_TRACER=y
    needs linux/ftrace.h, but this creates a circular dependency with its
    current home in asm/system.h. Create asm/exception.h and update all current
    users.

    v4: - rebase to rmk/for-next
    v3: - remove redundant includes of linux/ftrace.h
    v2: - document the usage restrictions of __exception*

    Cc: Zoltan Devai <zdevai@gmail.com>
    Signed-off-by: Jamie Iles <jamie@jamieiles.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | | | * | | | ARM: 7124/1: smp: Add a localtimer handler callable from C code | Shawn Guo | 2011-10-17 | 1 | -0/+5
    In order to be able to handle the local timer directly from C code instead
    of assembly code, introduce handle_local_timer(), which is modeled after
    handle_IRQ().

    Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | | | * | | | ARM: 7123/1: smp: Add an IPI handler callable from C code | Shawn Guo | 2011-10-17 | 1 | -0/+5
    In order to be able to handle IPIs directly from C code instead of assembly
    code, introduce handle_IPI(), which is modeled after handle_IRQ().

    Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
| | | | | | | * | | | ARM: 7100/1: smp_scu: remove __init annotation from scu_enable() | Shawn Guo | 2011-10-17 | 1 | -1/+1
    When Cortex-A9 MPCore resumes from Dormant or Shutdown modes, the SCU needs
    to be re-enabled. This patch removes the __init annotation from the
    scu_enable() function, so that the platform resume procedure can call it to
    re-enable the SCU.

    Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>