* arm64: KVM: vgic-v3: Only wipe LRs on vcpu exit (Marc Zyngier, 2016-03-09, 1 file, -5/+4)
  So far, we're always writing all possible LRs, setting the empty ones with a zero value. This is obviously doing a lot of work for nothing, and we're better off clearing only those we've actually dirtied on the exit path (it is very rare to inject more than one interrupt at a time anyway).
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: vgic-v3: Reset LRs at boot time (Marc Zyngier, 2016-03-09, 3 files, -0/+17)
  In order to let the GICv3 code be lazier in the way it accesses the LRs, it is necessary to start with a clean slate. Let's reset the LRs on each CPU when the vgic is probed (which includes a round trip to EL2...).
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: vgic-v3: Do not save an LR known to be empty (Marc Zyngier, 2016-03-09, 1 file, -2/+9)
  On exit, any empty LR will be signaled in ICH_ELRSR_EL2, which means that we do not have to save it, and can simply clear its state in the in-memory copy.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: vgic-v3: Save maintenance interrupt state only if required (Marc Zyngier, 2016-03-09, 1 file, -2/+31)
  Next on our list of useless accesses is the maintenance interrupt status registers (ICH_MISR_EL2, ICH_EISR_EL2). It is pointless to save them if we haven't asked for a maintenance interrupt in the first place, which can only happen for two reasons:
  - Underflow: ICH_HCR_UIE will be set,
  - EOI: ICH_LR_EOI will be set.
  These conditions can be checked on the in-memory copies of the registers. Should either of these two conditions be true, we must read ICH_MISR_EL2. We can then check for ICH_MISR_EOI, and only when it is set read ICH_EISR_EL2. This means that in most cases, we don't have to save them at all.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: vgic-v3: Avoid accessing ICH registers (Marc Zyngier, 2016-03-09, 3 files, -121/+182)
  Just like on GICv2, we're a bit hammer-happy with GICv3, and access its registers more often than we should. Adopt a policy similar to what we do for GICv2, only saving/restoring the minimal set of registers. As we don't access the registers linearly anymore (we may skip some), the convoluted accessors become slightly simpler, and we can drop the ugly indexing macro that tended to confuse the reviewers.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* KVM: arm/arm64: vgic-v2: Make GICD_SGIR quicker to hit (Marc Zyngier, 2016-03-09, 1 file, -5/+5)
  The GICD_SGIR register lives a long way from the beginning of the handler array, which is searched linearly. As this register is hit pretty often, let's move it up. This saves us some precious cycles when the guest is generating IPIs.
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* KVM: arm/arm64: vgic-v2: Only wipe LRs on vcpu exit (Marc Zyngier, 2016-03-09, 1 file, -5/+5)
  So far, we're always writing all possible LRs, setting the empty ones with a zero value. This is obviously doing a lot of work for nothing, and we're better off clearing only those we've actually dirtied on the exit path (it is very rare to inject more than one interrupt at a time anyway).
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* KVM: arm/arm64: vgic-v2: Reset LRs at boot time (Marc Zyngier, 2016-03-09, 1 file, -0/+12)
  In order to make the GICv2 code lazier in the way it accesses the LRs, it is necessary to start with a clean slate. Let's reset the LRs on each CPU when the vgic is probed.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* KVM: arm/arm64: vgic-v2: Do not save an LR known to be empty (Marc Zyngier, 2016-03-09, 1 file, -6/+20)
  On exit, any empty LR will be signaled in GICH_ELRSR*, which means that we do not have to save it, and can simply clear its state in the in-memory copy. Take this opportunity to move the LR saving code into its own function.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
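  A minimal C sketch of the save path described above (illustrative types and a hypothetical MMIO accessor, not the actual kvm/hyp code): the GICH_ELRSR bits read on exit tell us which LRs are empty, so only the non-empty ones are read back and the rest are simply zeroed in memory.

      #include <linux/types.h>
      #include <linux/io.h>

      /* Simplified view of the per-cpu GICv2 interface state. */
      struct vgic_v2_cpu_if_sketch {
              u64 vgic_elrsr;         /* GICH_ELRSR0/1, read once on exit     */
              u32 vgic_lr[64];        /* in-memory copy of the List Registers */
      };

      /* Assumed accessor standing in for readl_relaxed(base + GICH_LR0 + 4 * i). */
      u32 read_gich_lr(void __iomem *base, int i);

      static void save_lrs_sketch(struct vgic_v2_cpu_if_sketch *cpu_if,
                                  void __iomem *base, int nr_lr)
      {
              int i;

              for (i = 0; i < nr_lr; i++) {
                      if (cpu_if->vgic_elrsr & (1ULL << i))
                              cpu_if->vgic_lr[i] = 0; /* empty: nothing to read back */
                      else
                              cpu_if->vgic_lr[i] = read_gich_lr(base, i);
              }
      }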
* KVM: arm/arm64: vgic-v2: Move GICH_ELRSR saving to its own function (Marc Zyngier, 2016-03-09, 1 file, -15/+21)
  In order to make the saving path slightly more readable and prepare for some more optimizations, let's move the GICH_ELRSR saving to its own function. No functional change.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* KVM: arm/arm64: vgic-v2: Save maintenance interrupt state only if required (Marc Zyngier, 2016-03-09, 1 file, -7/+47)
  Next on our list of useless accesses is the maintenance interrupt status registers (GICH_MISR, GICH_EISR{0,1}). It is pointless to save them if we haven't asked for a maintenance interrupt in the first place, which can only happen for two reasons:
  - Underflow: GICH_HCR_UIE will be set,
  - EOI: GICH_LR_EOI will be set.
  These conditions can be checked on the in-memory copies of the regs. Should either of these two conditions be true, we must read GICH_MISR. We can then check for GICH_MISR_EOI, and only when it is set read GICH_EISR*. This means that in most cases, we don't have to save them at all.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
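  A C sketch of that check, with the relevant GICH bit definitions written out locally (illustrative, not the real hyp code): only when this returns true is it worth reading GICH_MISR, and GICH_EISR* is only read if GICH_MISR then reports an EOI.

      #include <linux/types.h>

      #define GICH_HCR_UIE    (1 << 1)        /* underflow interrupt enabled  */
      #define GICH_LR_EOI     (1 << 19)       /* maintenance interrupt on EOI */

      /* True if we may have asked for a maintenance interrupt at all. */
      static bool need_maint_int_state(u32 hcr, const u32 *lr, int nr_lr)
      {
              int i;

              if (hcr & GICH_HCR_UIE)
                      return true;

              for (i = 0; i < nr_lr; i++)
                      if (lr[i] & GICH_LR_EOI)
                              return true;

              return false;
      }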
* KVM: arm/arm64: vgic-v2: Avoid accessing GICH registers (Marc Zyngier, 2016-03-09, 2 files, -22/+52)
  GICv2 registers are *slow*. As in "terrifyingly slow". Which is bad. But we're equally bad, as we make a point of accessing them even if we don't have any interrupt in flight. A good solution is to first find out if we have anything useful to write into the GIC, and if we don't, to simply not do it. This involves tracking which LRs actually have something valid there.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
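  A sketch of the "anything in flight?" tracking (hypothetical field and helper names): a bitmap records which LRs were populated on entry, and the restore path bails out entirely when it is empty, avoiding the slow MMIO accesses.

      #include <linux/types.h>
      #include <linux/bitops.h>
      #include <linux/io.h>

      struct vgic_cpu_sketch {
              u64 live_lrs;           /* bit n set => LR n was programmed on entry */
              u32 vgic_lr[64];
      };

      /* Assumed accessor standing in for writel_relaxed(val, base + GICH_LR0 + 4 * i). */
      void write_gich_lr(void __iomem *base, int i, u32 val);

      static void restore_lrs_sketch(struct vgic_cpu_sketch *cpu, void __iomem *base)
      {
              int i;

              if (!cpu->live_lrs)
                      return;         /* nothing in flight: skip the GIC entirely */

              for (i = 0; i < 64; i++)
                      if (cpu->live_lrs & BIT_ULL(i))
                              write_gich_lr(base, i, cpu->vgic_lr[i]);
      }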
* KVM: arm/arm64: timer: Add active state caching (Marc Zyngier, 2016-02-29, 3 files, -0/+37)
  Programming the active state in the (re)distributor can be an expensive operation, so it makes some sense to try and reduce the number of accesses as much as possible. So far, we program the active state on each VM entry, but there is some opportunity to do less. An obvious solution is to cache the active state in memory, and only program it in the HW when conditions change. But because the HW can also change things under our feet (the active state can transition from 1 to 0 when the guest does an EOI), some precautions have to be taken, which amount to only caching an "inactive" state, and always programming it otherwise. With this in place, we observe a reduction of around 700 cycles on a 2GHz GICv2 platform for a NULL hypercall.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
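  The caching rule in miniature (field and helper names are illustrative): the hardware can only clear the active bit behind our back, never set it, so a cached "inactive" value stays trustworthy and the write can be skipped only when both the cached and the desired state are inactive.

      #include <linux/types.h>

      struct arch_timer_sketch {
              bool active_cleared_last;       /* last value we programmed was "inactive" */
      };

      /* Assumed helper that performs the expensive (re)distributor access. */
      void set_timer_irq_active(int irq, bool active);

      static void timer_sync_active_sketch(struct arch_timer_sketch *t, int irq,
                                           bool want_active)
      {
              /*
               * Cached inactive + wanted inactive: the HW cannot have made the
               * interrupt active on its own, so skip the access. Everything
               * else must be programmed.
               */
              if (t->active_cleared_last && !want_active)
                      return;

              set_timer_irq_active(irq, want_active);
              t->active_cleared_last = !want_active;
      }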
* ARM: KVM: Switch the CP reg search to be a binary search (Marc Zyngier, 2016-02-29, 1 file, -18/+23)
  Doing a linear search is a bit silly when we can do a binary search. Not that we trap so many things that it has become a burden yet, but it makes sense to align it with the arm64 code.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* ARM: KVM: Rename struct coproc_reg::is_64 to is_64bit (Marc Zyngier, 2016-02-29, 2 files, -6/+6)
  As we're going to play some tricks on the struct coproc_reg, make sure its 64bit indicator field matches that of coproc_params.
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* ARM: KVM: Enforce sorting of all CP tables (Marc Zyngier, 2016-02-29, 1 file, -8/+17)
  Since we're obviously terrible at sorting the CP tables, make sure we're going to do it properly (or fail to boot). arm64 has had the same mechanism for a while, and nobody ever broke it...
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
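  The enforcement amounts to a boot-time walk over each table, checking that every entry sorts strictly after its predecessor; a sketch, with a hypothetical comparison helper mirroring the lookup's sort key:

      #include <linux/kernel.h>

      struct coproc_reg_sketch {
              unsigned int CRn, CRm, Op1, Op2;
              bool is_64bit;
      };

      /* Assumed total order over the coprocessor encoding, matching the search. */
      int cmp_reg(const struct coproc_reg_sketch *a, const struct coproc_reg_sketch *b);

      /* Returns the number of misordered neighbours; non-zero means refuse to boot. */
      static unsigned int check_reg_table(const struct coproc_reg_sketch *table,
                                          unsigned int n)
      {
              unsigned int i, errors = 0;

              for (i = 1; i < n; i++)
                      if (cmp_reg(&table[i - 1], &table[i]) >= 0) {
                              pr_err("reg table entry %u is out of order\n", i);
                              errors++;
                      }

              return errors;
      }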
* ARM: KVM: Properly sort the invariant table (Marc Zyngier, 2016-02-29, 1 file, -3/+3)
  Not having the invariant table properly sorted is an oddity, and may get in the way of future optimisations. Let's fix it.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Switch the sys_reg search to be a binary search (Marc Zyngier, 2016-02-29, 1 file, -18/+22)
  Our 64bit sys_reg table is about 90 entries long (so far, and the PMU support is likely to increase this). This means that on average, it takes 45 comparisons to find the right entry (and actually the full 90 if we have to search the invariant table). Not the most efficient thing, especially when you consider that this table is already sorted. Switching to a binary search effectively reduces the search to about 7 comparisons. Slightly better! As an added bonus, the comparison is done by comparing all the fields at once, instead of one at a time.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
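  A sketch of the two tricks mentioned (field layout and names are illustrative, not the kernel's exact encoding): the Op0/Op1/CRn/CRm/Op2 fields are packed into a single integer so one subtraction compares them all, and bsearch() then does the lookup over the already-sorted table.

      #include <linux/bsearch.h>
      #include <linux/types.h>

      struct sys_reg_desc_sketch {
              u8 Op0, Op1, CRn, CRm, Op2;
      };

      /* Pack the encoding into one value; the table is sorted on this key. */
      static u32 reg_to_index(const struct sys_reg_desc_sketch *r)
      {
              return ((u32)r->Op0 << 16) | ((u32)r->Op1 << 12) |
                     ((u32)r->CRn << 8)  | ((u32)r->CRm << 4)  | r->Op2;
      }

      static int match_sys_reg(const void *key, const void *elt)
      {
              const struct sys_reg_desc_sketch *k = key, *e = elt;

              return (int)reg_to_index(k) - (int)reg_to_index(e);
      }

      static const struct sys_reg_desc_sketch *
      find_reg(const struct sys_reg_desc_sketch *key,
               const struct sys_reg_desc_sketch table[], unsigned int num)
      {
              return bsearch(key, table, num, sizeof(table[0]), match_sys_reg);
      }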
* arm64: KVM: Add a new vcpu device control group for PMUv3 (Shannon Zhao, 2016-02-29, 8 files, -0/+240)
  To configure the virtual PMUv3 overflow interrupt number, we use the vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_PMU_V3_IRQ attribute within the KVM_ARM_VCPU_PMU_V3_CTRL group. After configuring the PMUv3, call the vcpu ioctl with attribute KVM_ARM_VCPU_PMU_V3_INIT to initialize the PMUv3.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Acked-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
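  From userspace, the resulting flow looks roughly like this (error handling trimmed; the constants come from the kernel UAPI headers, and vcpu_fd is assumed to be a file descriptor obtained via KVM_CREATE_VCPU):

      #include <linux/kvm.h>
      #include <sys/ioctl.h>

      /* Configure the PMUv3 overflow interrupt number, then initialize the PMU. */
      static int setup_guest_pmu(int vcpu_fd, int irq)
      {
              struct kvm_device_attr attr = {
                      .group = KVM_ARM_VCPU_PMU_V3_CTRL,
                      .attr  = KVM_ARM_VCPU_PMU_V3_IRQ,
                      .addr  = (__u64)(unsigned long)&irq,
              };

              if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
                      return -1;

              attr.attr = KVM_ARM_VCPU_PMU_V3_INIT;
              attr.addr = 0;
              return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
      }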
* arm64: KVM: Introduce per-vcpu kvm device controls (Shannon Zhao, 2016-02-29, 5 files, -4/+71)
  In some cases we need to get/set attributes specific to a vcpu, and so need something other than ONE_REG. Let's copy the KVM_DEVICE approach, and define the respective ioctls for the vcpu file descriptor.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Acked-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add a new feature bit for PMUv3 (Shannon Zhao, 2016-02-29, 7 files, -1/+20)
  To support guest PMUv3, use one bit of the VCPU INIT feature array. Initialize the PMU when initializing the vcpu with that bit and the PMU overflow interrupt set.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Acked-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Free perf event of PMU when destroying vcpu (Shannon Zhao, 2016-02-29, 3 files, -0/+24)
  When KVM frees a VCPU, it needs to free the perf_events of the PMU.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Reset PMU state when resetting vcpu (Shannon Zhao, 2016-02-29, 3 files, -0/+22)
  When resetting the vcpu, we need to reset the PMU state to its initial status.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add PMU overflow interrupt routing (Shannon Zhao, 2016-02-29, 3 files, -3/+79)
  When calling perf_event_create_kernel_counter to create a perf_event, assign an overflow handler. Then when the perf event overflows, set the corresponding bit of the guest PMOVSSET register. If this counter is enabled and its interrupt is enabled as well, kick the vcpu to sync the interrupt. On VM entry, if a counter has overflowed and the interrupt level has changed, inject the interrupt with the corresponding level. On VM exit, sync the interrupt level as well if it has been changed.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
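  A sketch of the overflow path (helper names are hypothetical; the callback itself is registered via perf_event_create_kernel_counter): the perf handler sets the counter's bit in the emulated PMOVSSET state and only kicks the vcpu if that counter and its interrupt are both enabled.

      #include <linux/perf_event.h>
      #include <linux/kvm_host.h>
      #include <linux/bitops.h>

      /* Assumed accessors into the emulated PMU state. */
      struct kvm_vcpu *counter_to_vcpu(struct perf_event *event);
      int  counter_index(struct perf_event *event);
      void guest_pmovsset_or(struct kvm_vcpu *vcpu, u64 mask);
      u64  guest_pmcntenset(struct kvm_vcpu *vcpu);
      u64  guest_pmintenset(struct kvm_vcpu *vcpu);

      static void pmu_overflow_sketch(struct perf_event *event,
                                      struct perf_sample_data *data,
                                      struct pt_regs *regs)
      {
              struct kvm_vcpu *vcpu = counter_to_vcpu(event);
              u64 mask = BIT_ULL(counter_index(event));

              guest_pmovsset_or(vcpu, mask);  /* record the overflow */

              /* Only bother the vcpu if this counter can actually interrupt. */
              if ((guest_pmcntenset(vcpu) & mask) && (guest_pmintenset(vcpu) & mask))
                      kvm_vcpu_kick(vcpu);
      }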
* arm64: KVM: Add access handler for PMUSERENR register (Shannon Zhao, 2016-02-29, 5 files, -5/+110)
  This register resets as UNKNOWN in 64bit mode while it resets as zero in 32bit mode. Here we choose to reset it as zero for consistency. PMUSERENR_EL0 holds some bits which decide whether PMU registers can be accessed from EL0. Add some check helpers to handle the access from EL0. When these bits are zero, only reading PMUSERENR will trap to EL2; writing PMUSERENR or reading/writing other PMU registers will trap to EL1 rather than EL2 when HCR.TGE==0. With the current KVM configuration (HCR.TGE==0) there is no way to get these traps. Here we write 0xf to the physical PMUSERENR register on VM entry, so that PMU accesses from EL0 will trap to EL2. Within the register access handler we check the real value of the guest PMUSERENR register to decide whether this access is allowed. If not allowed, return false to inject an UND into the guest.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add helper to handle PMCR register bits (Shannon Zhao, 2016-02-29, 4 files, -1/+40)
  According to the ARMv8 spec, writing 1 to PMCR.E enables all counters that are set in PMCNTENSET, while writing 0 to PMCR.E disables all counters. Writing 1 to PMCR.P resets all event counters (not including PMCCNTR) to zero. Writing 1 to PMCR.C resets PMCCNTR to zero.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
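  A sketch of that helper (bit positions follow the architectural PMCR_EL0 layout; the opaque PMU type and the operations on it are assumptions):

      #include <linux/types.h>
      #include <linux/bitops.h>

      #define PMCR_E  BIT(0)  /* global enable       */
      #define PMCR_P  BIT(1)  /* event counter reset */
      #define PMCR_C  BIT(2)  /* cycle counter reset */

      struct kvm_pmu_sketch;

      /* Assumed operations on the emulated counters. */
      void enable_counters_in_pmcntenset(struct kvm_pmu_sketch *pmu);
      void disable_all_counters(struct kvm_pmu_sketch *pmu);
      void reset_event_counters(struct kvm_pmu_sketch *pmu);  /* all but PMCCNTR */
      void reset_cycle_counter(struct kvm_pmu_sketch *pmu);

      static void handle_pmcr_write_sketch(struct kvm_pmu_sketch *pmu, u64 val)
      {
              if (val & PMCR_E)
                      enable_counters_in_pmcntenset(pmu); /* only those set in PMCNTENSET */
              else
                      disable_all_counters(pmu);

              if (val & PMCR_P)
                      reset_event_counters(pmu);
              if (val & PMCR_C)
                      reset_cycle_counter(pmu);
      }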
* arm64: KVM: Add access handler for PMSWINC register (Shannon Zhao, 2016-02-29, 5 files, -1/+58)
  Add an access handler which emulates writing and reading the PMSWINC register, and add support for creating the software increment event.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add access handler for PMOVSSET and PMOVSCLR register (Shannon Zhao, 2016-02-29, 4 files, -3/+60)
  Since the reset value of PMOVSSET and PMOVSCLR is UNKNOWN, use reset_unknown for their reset handler. Add a handler to emulate writing the PMOVSSET or PMOVSCLR register. When writing a non-zero value to PMOVSSET, if the counter and its interrupt are enabled, kick this vcpu to sync the PMU interrupt.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add access handler for PMINTENSET and PMINTENCLR register (Shannon Zhao, 2016-02-29, 2 files, -4/+29)
  Since the reset value of PMINTENSET and PMINTENCLR is UNKNOWN, use reset_unknown for their reset handler. Add a handler to emulate writing the PMINTENSET or PMINTENCLR register.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add access handler for event type register (Shannon Zhao, 2016-02-29, 2 files, -2/+127)
  These kinds of registers include PMEVTYPERn, PMCCFILTR and PMXEVTYPER, the last of which is mapped to PMEVTYPERn or PMCCFILTR. The access handler translates all aarch32 register offsets to aarch64 ones and uses vcpu_sys_reg() to access their values, to avoid having to deal with big-endian issues. When writing to these registers, create a perf_event for the selected event type.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: PMU: Add perf event map and introduce perf event creating function (Shannon Zhao, 2016-02-29, 2 files, -0/+78)
  When we use tools like perf on the host, perf passes the event type and the id of this event type category to the kernel, which then maps them to a hardware event number and writes this number to the PMU PMEVTYPER<n>_EL0 register. When KVM gets the event number from the guest, it directly uses the raw event type to create a perf_event for it.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
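  A sketch of the creation step, feeding the raw event number straight into perf (the attribute choices and the callback name are illustrative):

      #include <linux/perf_event.h>
      #include <linux/sched.h>

      /* Assumed overflow callback, e.g. the one registered for PMOVSSET routing. */
      void pmu_overflow_sketch(struct perf_event *event,
                               struct perf_sample_data *data,
                               struct pt_regs *regs);

      static struct perf_event *create_guest_counter(u64 eventsel, void *ctx)
      {
              struct perf_event_attr attr = {
                      .type           = PERF_TYPE_RAW,
                      .size           = sizeof(attr),
                      .config         = eventsel,     /* raw event from PMEVTYPER<n>_EL0 */
                      .pinned         = 1,
                      .disabled       = 1,            /* enabled later via PMCNTENSET */
                      .exclude_hv     = 1,            /* don't count hypervisor work */
              };

              /* Follow the current task (the vcpu thread) on whichever CPU it runs. */
              return perf_event_create_kernel_counter(&attr, -1, current,
                                                      pmu_overflow_sketch, ctx);
      }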
* arm64: KVM: Add access handler for PMCNTENSET and PMCNTENCLR register (Shannon Zhao, 2016-02-29, 4 files, -4/+107)
  Since the reset value of PMCNTENSET and PMCNTENCLR is UNKNOWN, use reset_unknown for their reset handler. Add a handler to emulate writing the PMCNTENSET or PMCNTENCLR register. When writing to PMCNTENSET, call perf_event_enable to enable the perf event. When writing to PMCNTENCLR, call perf_event_disable to disable the perf event.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add access handler for event counter register (Shannon Zhao, 2016-02-29, 5 files, -4/+213)
  These kinds of registers include PMEVCNTRn, PMCCNTR and PMXEVCNTR, the last of which is mapped to PMEVCNTRn. The access handler translates all aarch32 register offsets to aarch64 ones and uses vcpu_sys_reg() to access their values, to avoid having to deal with big-endian issues. When reading these registers, return the sum of the register value and the value the perf event has counted.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
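  The read path in sketch form (opaque vcpu type and accessor names are assumptions): the architectural register holds the base value and the live perf count is added on top.

      #include <linux/perf_event.h>
      #include <linux/types.h>

      struct kvm_vcpu_sketch;

      /* Assumed accessors into the emulated register file and its backing event. */
      u64 guest_counter_base(struct kvm_vcpu_sketch *vcpu, int idx);
      struct perf_event *guest_counter_event(struct kvm_vcpu_sketch *vcpu, int idx);

      static u64 read_guest_counter(struct kvm_vcpu_sketch *vcpu, int idx)
      {
              struct perf_event *event = guest_counter_event(vcpu, idx);
              u64 enabled, running;
              u64 value = guest_counter_base(vcpu, idx);

              /* Add whatever the perf event has counted since it was programmed. */
              if (event)
                      value += perf_event_read_value(event, &enabled, &running);

              return value;
      }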
* arm64: KVM: Add access handler for PMCEID0 and PMCEID1 register (Shannon Zhao, 2016-02-29, 1 file, -4/+24)
  Add an access handler which returns the host value of PMCEID0 or PMCEID1 when the guest accesses these registers. Writes to PMCEID0 or PMCEID1 are UNDEFINED.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add access handler for PMSELR register (Shannon Zhao, 2016-02-29, 2 files, -2/+19)
  Since the reset value of PMSELR_EL0 is UNKNOWN, use reset_unknown for its reset handler. When reading PMSELR, return the PMSELR.SEL field to the guest.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Add access handler for PMCR register (Shannon Zhao, 2016-02-29, 3 files, -2/+47)
  Add a reset handler which gets the host value of PMCR_EL0 and makes the writable bits architecturally UNKNOWN, except for PMCR.E, which is reset to zero. Also add an access handler for PMCR.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Define PMU data structure for each vcpu (Shannon Zhao, 2016-02-29, 3 files, -0/+51)
  Here we plan to support a virtual PMU for the guest by full software emulation, so define some basic structs and functions in preparation for further steps. Define struct kvm_pmc for a performance monitor counter and struct kvm_pmu for the performance monitor unit of each vcpu. According to the ARMv8 spec, the PMU contains at most 32 (ARMV8_PMU_MAX_COUNTERS) counters. Since this only supports ARM64 (or PMUv3), add a separate config symbol for it.
  Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
  Acked-by: Marc Zyngier <marc.zyngier@arm.com>
  Reviewed-by: Andrew Jones <drjones@redhat.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
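  Roughly the shape of the per-vcpu state being added (treat this as a sketch rather than the exact definition from the series):

      #include <linux/perf_event.h>
      #include <linux/types.h>

      #define ARMV8_PMU_MAX_COUNTERS  32      /* 31 event counters plus the cycle counter */

      struct kvm_pmc {
              u8 idx;                         /* counter index                   */
              struct perf_event *perf_event;  /* backing host perf event, if any */
              u64 bitmask;                    /* counter width mask              */
      };

      struct kvm_pmu {
              int irq_num;                    /* overflow interrupt number       */
              struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
              bool ready;                     /* PMU fully configured            */
      };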
* arm64: KVM: Add temporary kvm_perf_event.h (Marc Zyngier, 2016-02-29, 2 files, -0/+56)
  In order to merge the KVM/ARM PMU patches without creating a conflict mess, let's have a temporary include file that won't conflict with anything. Subsequent patches will clean that up.
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Move __cpu_init_stage2 after kvm_call_hyp (Marc Zyngier, 2016-02-29, 1 file, -5/+7)
  In order to ease the merge with the rest of the arm64 tree, move the definition of __cpu_init_stage2() after what will be the new kvm_call_hyp. Hopefully the resolution of the merge conflict will be obvious.
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* ARM: KVM: Use common version of timer-sr.c (Marc Zyngier, 2016-02-29, 3 files, -70/+10)
  Using the common HYP timer code is a bit more tricky, since we use system register names. Nothing a set of macros cannot work around...
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* ARM: KVM: Use common version of vgic-v2-sr.c (Marc Zyngier, 2016-02-29, 2 files, -83/+4)
  No need to keep our own private version, the common one is strictly identical.
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* ARM: KVM: Move kvm/hyp/hyp.h to include/asm/kvm_hyp.h (Marc Zyngier, 2016-02-29, 8 files, -12/+7)
  In order to be able to use the code located in virt/kvm/arm/hyp, we need to make the global hyp.h file accessible from include/asm, similar to what we did for arm64.
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Move vgic-v2 and timer save/restore to virt/kvm/arm/hyp (Marc Zyngier, 2016-02-29, 3 files, -2/+5)
  We already have virt/kvm/arm/ containing timer and vgic stuff. Add yet another subdirectory to contain the hyp-specific files (timer and vgic again).
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Move kvm/hyp/hyp.h to include/asm/kvm_hyp.h (Marc Zyngier, 2016-02-29, 9 files, -20/+8)
  In order to be able to move code outside of kvm/hyp, we need to make the global hyp.h file accessible from a standard location. include/asm/kvm_hyp.h seems good enough.
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: VHE: Add support for running Linux in EL2 mode (Marc Zyngier, 2016-02-29, 2 files, -1/+40)
  With ARMv8.1 VHE, the architecture is able to (almost) transparently run the kernel at EL2, despite being written for EL1. This patch takes care of the "almost" part, mostly preventing the kernel from dropping from EL2 to EL1, and setting up the HYP configuration.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Acked-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: hw_breakpoint: Allow EL2 breakpoints if running in HYP (Marc Zyngier, 2016-02-29, 1 file, -5/+13)
  With VHE, we place kernel {watch,break}-points at EL2 to get things like kgdb and "perf -e mem:..." working. This requires a bit of repainting in the low-level encode/decode, but is otherwise pretty simple.
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: perf: Count EL2 events if the kernel is running in HYP (Marc Zyngier, 2016-02-29, 1 file, -1/+5)
  When the kernel is running in HYP (with VHE), it is necessary to include EL2 events if the user requests counting kernel or hypervisor events.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Acked-by: Catalin Marinas <catalin.marinas@arm.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: Move most of the fault decoding to C (Marc Zyngier, 2016-02-29, 3 files, -67/+90)
  The fault decoding process (including computing the IPA in the case of a permission fault) would be much better done in C code, as we have a reasonable infrastructure to deal with the VHE/non-VHE differences. Let's move the whole thing to C, including the workaround for erratum 834220, and just patch the odd ESR_EL2 access remaining in hyp-entry.S.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Acked-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: VHE: Add alternative panic handling (Marc Zyngier, 2016-02-29, 1 file, -8/+27)
  As the kernel fully runs in HYP when VHE is enabled, we can directly branch to the kernel's panic() implementation, and not perform an exception return. Add the alternative code to deal with this.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
* arm64: KVM: VHE: Add fpsimd enabling on guest access (Marc Zyngier, 2016-02-29, 1 file, -0/+6)
  Despite the fact that a VHE enabled kernel runs at EL2, it uses CPACR_EL1 to trap FPSIMD access. Add the required alternative code to re-enable guest FPSIMD access when it has trapped to EL2.
  Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>