path: root/target-arm/translate.c
Each entry below: commit subject (Author, Date, Files changed, Lines removed/added)
...
* target-arm: Add support for A32 and T32 HVC and SMC insns (Peter Maydell, 2014-10-24, 1 file, -11/+92)
  Add support for HVC and SMC instructions to the A32 and T32 decoder. Using these for real exceptions to EL2 or EL3 is currently not supported (the do_interrupt routine does not handle them) but we require the instruction support to implement PSCI. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1412865028-17725-6-git-send-email-peter.maydell@linaro.org
* target-arm: Don't handle c15_cpar changes via tb_flush() (Peter Maydell, 2014-09-29, 1 file, -19/+21)
  At the moment we try to handle c15_cpar with the strategy of:
    * emit generated code which makes assumptions about its value
    * when the register value changes call tb_flush() to throw away the now-invalid generated code
  This works because XScale CPUs are always uniprocessor, but it's confusing because it suggests that the same approach can be taken for other registers. It also means we do a tb_flush() on CPU reset, which makes multithreaded linux-user binaries even more likely to fail than would otherwise be the case. Replace it with a combination of TB flags for the access checks done on cp0/cp1 for the XScale and iwMMXt instructions, plus a runtime check for cp2..cp13 coprocessor accesses. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1411056959-23070-1-git-send-email-peter.maydell@linaro.org
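  A hedged sketch of the translate-time pattern this commit moves to; the type, field and macro names below are stand-ins invented for illustration, not the actual QEMU definitions:

      #include <stdbool.h>
      #include <stdint.h>

      /* Minimal stand-in for the real DisasContext. */
      typedef struct ExampleDisasContext {
          uint32_t c15_cpar;  /* copy of the XScale CPAR bits taken from tb->flags */
      } ExampleDisasContext;

      #define EXAMPLE_CPAR_SHIFT 20   /* hypothetical bit position in the TB flags */

      /* At translation start: cache the coprocessor access rights that were
       * baked into the TB flags when the TB was looked up. */
      static void example_tb_start(ExampleDisasContext *dc, uint32_t tb_flags)
      {
          dc->c15_cpar = (tb_flags >> EXAMPLE_CPAR_SHIFT) & 0x3;
      }

      /* cp0/cp1 access check for XScale/iwMMXt instructions, done against the
       * cached copy: a change to c15_cpar now just selects TBs generated with
       * different flags, so no tb_flush() is needed. */
      static bool example_xscale_cp_enabled(const ExampleDisasContext *dc, int cpnum)
      {
          return (dc->c15_cpar & (1u << cpnum)) != 0;
      }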
* target-arm: Implement ARMv8 single-stepping for AArch32 code (Peter Maydell, 2014-08-19, 1 file, -2/+74)
  ARMv8 single-stepping requires the exception level that controls the single-stepping to be in AArch64 execution state, but the code being stepped may be in AArch64 or AArch32. Implement the necessary support code for single-stepping AArch32 code. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* target-arm: Don't allow AArch32 to access RES0 CPSR bits (Peter Maydell, 2014-08-19, 1 file, -6/+7)
  The CPSR has a new-in-v8 execution state bit (IL), and also some state which has effects in AArch32 but appears only in the SPSR format (SS) and is RES0 in the CPSR. Add the IL bit to CPSR_EXEC, and enforce that guest direct reads and writes to the CPSR can't read or write the RES0 bits, so the guest can't get at the SS bit which we store in uncached_cpsr. This includes not permitting exception returns to copy reserved bits from an SPSR into the CPSR. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
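  As a small standalone illustration of the filtering described above (the mask value is a placeholder, not the authoritative CPSR layout):

      #include <stdint.h>

      /* Placeholder for the bits that are RES0 in the guest-visible CPSR. */
      #define EXAMPLE_CPSR_RES0  0x0fc00000u

      /* Guest reads never see state (such as the SS bit kept in uncached_cpsr)
       * that only exists in the SPSR format. */
      static uint32_t example_cpsr_read(uint32_t uncached_cpsr)
      {
          return uncached_cpsr & ~EXAMPLE_CPSR_RES0;
      }

      /* Guest writes (including exception returns copying an SPSR) cannot set
       * the RES0 bits, whatever mask the caller asked for. */
      static uint32_t example_cpsr_write(uint32_t old, uint32_t val, uint32_t mask)
      {
          mask &= ~EXAMPLE_CPSR_RES0;
          return (old & ~mask) | (val & mask);
      }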
* trace: [tcg] Include TCG-tracing header on all targets (Lluís Vilanova, 2014-08-12, 1 file, -0/+3)
  Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* target-arm: Delete unused iwmmxt_msadb helper (Peter Maydell, 2014-06-09, 1 file, -2/+0)
  The iwmmxt_msadb helper and its corresponding gen function are unused; delete them. (This function appears to have never been used right back to the initial implementation of iwMMXt; it is identical to iwmmxt_madduq, and is presumably an accidental remnant from the initial development.) Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1401822125-1822-1-git-send-email-peter.maydell@linaro.org
* target-arm: A32/T32: Mask CRC value in calling code, not helper (Peter Maydell, 2014-06-09, 1 file, -0/+10)
  Bring the 32-bit CRC helper functions into line with the A64 ones, by masking the high bytes of the value in the calling code rather than the helper. This is more efficient since we can determine the mask at translation time. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1401458125-27977-7-git-send-email-peter.maydell@linaro.org
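  The shape of the change, as an illustrative sketch (this assumes it sits inside translate.c where the TCG headers are already included; the function name is invented):

      /* The operand width is fixed by the decoded instruction, so the mask is a
       * translate-time constant and can be folded into the generated code
       * instead of being applied inside the helper on every execution. */
      static void example_mask_crc_operand(TCGv_i32 val, int size)
      {
          uint32_t mask = (size == 0) ? 0x000000ff     /* CRC32B / CRC32CB */
                        : (size == 1) ? 0x0000ffff     /* CRC32H / CRC32CH */
                        : 0xffffffff;                  /* CRC32W / CRC32CW */

          if (mask != 0xffffffff) {
              tcg_gen_andi_i32(val, val, mask);  /* emitted once, at translate time */
          }
          /* ... the (now pre-masked) value is then passed to the CRC helper ... */
      }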
* target-arm: add support for v8 VMULL.P64 instruction (Peter Maydell, 2014-06-09, 1 file, -1/+25)
  Add support for the VMULL.P64 polynomial 64x64 to 128 bit multiplication instruction in the A32/T32 instruction sets; this is part of the v8 Crypto Extensions. To do this we have to move the neon_pmull_64_{lo,hi} helpers from helper-a64.c into neon_helper.c so they can be used by the AArch32 translator. Inspired-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1401386724-26529-4-git-send-email-peter.maydell@linaro.org
* target-arm: Allow 3reg_wide undefreq to encode more bad size options (Peter Maydell, 2014-06-09, 1 file, -12/+12)
  The current undefreq field in the neon_3reg_wide handling allows us to encode "UNDEF if size != 0" and "UNDEF if size == 0". This is no longer sufficient with the advent of 64-bit polynomial VMULL, which means we want to UNDEF if size == 1. Change the undefreq encoding to use separate bits for all of "UNDEF if size == 0", "UNDEF if size == 1" and "UNDEF if size == 2". Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1401386724-26529-3-git-send-email-peter.maydell@linaro.org
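  Conceptually the new encoding gives each disallowed size its own bit; a sketch with hypothetical constant names:

      #include <stdbool.h>

      #define EX_UNDEF_SIZE_0  (1 << 0)   /* UNDEF if size == 0 */
      #define EX_UNDEF_SIZE_1  (1 << 1)   /* UNDEF if size == 1 */
      #define EX_UNDEF_SIZE_2  (1 << 2)   /* UNDEF if size == 2 */

      /* With one bit per size, "UNDEF if size == 1" (needed for the 64-bit
       * polynomial VMULL) is expressible without special-casing, and the check
       * collapses to a single mask test. */
      static bool example_size_is_undef(int undefreq, int size)
      {
          return (undefreq & (1 << size)) != 0;
      }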
* target-arm: add support for v8 SHA1 and SHA256 instructions (Ard Biesheuvel, 2014-06-09, 1 file, -0/+84)
  This adds support for the SHA1 and SHA256 instructions that are available on some v8 implementations of AArch32. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1401386724-26529-2-git-send-email-peter.maydell@linaro.org
  [PMM:
    * rebase
    * fix bad indent
    * add a missing UNDEF check for Q!=1 in the 3-reg SHA1/SHA256 case
    * use g_assert_not_reached()
    * don't re-extract bit 6 for the 2-reg-misc encodings
    * set the ELF HWCAP2 bits for the new features]
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: move arm_*_code to a separate file (Paolo Bonzini, 2014-06-05, 1 file, -0/+1)
  These will soon require cpu_ldst.h, so move them out of cpu.h. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* tcg: Invert the inclusion of helper.h (Richard Henderson, 2014-05-28, 1 file, -3/+2)
  Rather than include helper.h with N values of GEN_HELPER, include a secondary file that sets up the macros to include helper.h. This minimizes the files that must be rebuilt when changing the macros for file N. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-arm: Add SPSR entries for EL2/HYP and EL3/MON (Edgar E. Iglesias, 2014-05-27, 1 file, -2/+2)
  Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Message-id: 1400980132-25949-12-git-send-email-edgar.iglesias@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: A32: Use get_mem_index for load/stores (Edgar E. Iglesias, 2014-05-27, 1 file, -106/+106)
  Avoid using IS_USER directly as the MMU-idx to simplify future changes to the MMU layout. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1400980132-25949-5-git-send-email-edgar.iglesias@gmail.com Message-id: 1400805738-11889-6-git-send-email-edgar.iglesias@gmail.com [PMM: parts relating to LDRT/STRT moved into earlier patches] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm/translate.c: Use get_mem_index() for SRS memory accesses (Peter Maydell, 2014-05-27, 1 file, -2/+2)
  The SRS instruction was using a hardcoded 0 for the memory accesses. This happens to be OK since the SRS instruction is UNPREDICTABLE in User and System modes, but is awkward if we want to rearrange the MMU index uses. Switch to using get_mem_index() like all the other accesses. Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1400980132-25949-4-git-send-email-edgar.iglesias@gmail.com
* target-arm/translate.c: Clean up mmu index handling for ldrt/strt (Peter Maydell, 2014-05-27, 1 file, -12/+17)
  Clean up the mmu index handling for ldrt/strt insns: instead of a flag 'user' indicating whether to treat the store as user mode or not, use 'memidx' to indicate the correct memory index to use. Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1400980132-25949-3-git-send-email-edgar.iglesias@gmail.com
* arm: translate.c: Fix smlald instruction (Peter Crosthwaite, 2014-04-17, 1 file, -11/+23)
  The smlald (and probably smlsld) instruction was doing incorrect sign extension of the operands during the 64-bit result calculation. The instruction pseudo-code is:
    operand2 = if m_swap then ROR(R[m],16) else R[m];
    product1 = SInt(R[n]<15:0>) * SInt(operand2<15:0>);
    product2 = SInt(R[n]<31:16>) * SInt(operand2<31:16>);
    result = product1 + product2 + SInt(R[dHi]:R[dLo]);
    R[dHi] = result<63:32>;
    R[dLo] = result<31:0>;
  The result calculation should be done in 64-bit arithmetic, and hence product1 and product2 should be sign-extended to 64 bits before the calculation. The current implementation was adding product1 and product2 together and then sign-extending the intermediate result, leading to false negatives: e.g. if product1 = product2 = 0x40000000, their sum is 0x80000000, which is incorrectly interpreted as negative on sign extension. We fix this by doing the 64-bit extensions on both product1 and product2 before any addition/subtraction happens. We also fix a case where we were possibly incorrectly setting the Q saturation flag for SMLSLD, which the ARM ARM specifically says is not set. Reported-by: Christina Smith <christina.smith@xilinx.com> Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 2cddb6f5a15be4ab8d2160f3499d128ae93d304d.1397704570.git.peter.crosthwaite@xilinx.com Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
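  A minimal C model of the corrected arithmetic (a standalone illustration of the fix, not the TCG code itself): each product is widened to 64 bits before the accumulation, so two products of 0x40000000 sum to 0x80000000 without being misread as negative.

      #include <stdint.h>

      /* Reference model of SMLALD: sign-extend the two 16x16 products to
       * 64 bits first, then do all addition in 64-bit arithmetic. */
      static uint64_t smlald_model(uint32_t rn, uint32_t rm, uint64_t acc,
                                   int m_swap)
      {
          uint32_t op2 = m_swap ? (rm >> 16) | (rm << 16) : rm;  /* ROR(R[m],16) */

          int64_t product1 = (int64_t)(int16_t)rn * (int16_t)op2;
          int64_t product2 = (int64_t)(int16_t)(rn >> 16) * (int16_t)(op2 >> 16);

          /* No intermediate 32-bit sum exists, so there is nothing to
           * sign-extend incorrectly. */
          return acc + (uint64_t)(product1 + product2);
      }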
* target-arm: Dump 32-bit CPU state if 64 bit CPU is in AArch32 (Peter Maydell, 2014-04-17, 1 file, -0/+5)
  For system mode, we may have a 64 bit CPU which is currently executing in AArch32 state; if we're dumping CPU state to the logs we should therefore show the correct state for the current execution state, rather than hardwiring it based on the type of the CPU. For consistency with how we handle translation, we leave the 32 bit dump function as the default, and have it hand off control to the 64 bit dump code if we're in AArch64 mode. Reported-by: Rob Herring <rob.herring@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Implement ARMv8 MVFR registers (Peter Maydell, 2014-04-17, 1 file, -2/+8)
  For ARMv8 there are two changes to the MVFR media feature registers:
    * there is a new MVFR2 which is accessible from 32 bit code
    * 64 bit code accesses these via the usual sysreg instructions rather than with a floating-point specific instruction
  Implement this. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
* target-arm: Fix VFP enables for AArch32 EL0 under AArch64 EL1 (Peter Maydell, 2014-04-17, 1 file, -0/+31)
  The current A32/T32 decoder bases its "is VFP/Neon enabled?" check on the FPSCR.EN bit. This is correct if EL1 is AArch32, but for an AArch64 EL1 the logic is different: it must act as if FPSCR.EN is always set. Instead, trapping must happen according to CPACR bits for cp10/cp11; these cover all of FP/Neon, including the FPSCR/FPSID/MVFR register accesses which FPSCR.EN does not affect. Add support for CPACR checks (which are also required for ARMv7, but were unimplemented because Linux happens not to use them) and make sure they generate exceptions with the correct syndrome. We actually return incorrect syndrome information for cases where FP is disabled but the specific instruction bit pattern is unallocated: strictly these should be the Uncategorized exception, not a "SIMD disabled" exception. This should be mostly harmless, and the structure of the A32/T32 VFP/Neon decoder makes it painful to put the 'FP disabled?' checks in the right places. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
* target-arm: Add support for generating exceptions with syndrome information (Peter Maydell, 2014-04-17, 1 file, -38/+65)
  Add new helpers exception_with_syndrome (for generating an exception with syndrome information) and exception_uncategorized (for generating an exception with "Unknown or Uncategorized Reason", which have a syndrome register value of zero), and use them to generate the correct syndrome information for exceptions which are raised directly from generated code. This patch includes moving the A32/T32 gen_exception_insn functions further up in the source file; they will be needed for "VFP/Neon disabled" exception generation later. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
* target-arm: Provide correct syndrome information for cpreg access traps (Peter Maydell, 2014-04-17, 1 file, -1/+44)
  For exceptions taken to AArch64, if a coprocessor/system register access fails due to a trap or enable bit then the syndrome information must include details of the failing instruction (crn/crm/opc1/opc2 fields, etc). Make the decoder construct the syndrome information at translate time so it can be passed at runtime to the access-check helper function and used as required. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
* target-arm: Split out private-to-target functions into internals.h (Peter Maydell, 2014-04-17, 1 file, -0/+1)
  Currently cpu.h defines a mixture of functions and types needed by the rest of QEMU and those needed only by files within target-arm/. Split the latter out into a new header so they aren't needlessly exposed further than required. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
* target-arm: A64: Add [UF]RSQRTE (reciprocal root estimate) (Alex Bennée, 2014-03-17, 1 file, -2/+10)
  This adds support for the [UF]RSQRTE instructions. It utilises the existing NEON helpers with some changes. The changes include explicit passing of fpstatus (so the correct one is used between arm32 and aarch64), denormalization, more correct error handling, and proper scaling of the fraction going into the estimate. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-id: 1394822294-14837-25-git-send-email-peter.maydell@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: A64: Implement AdvSIMD reciprocal estimate insns URECPE, FRECPE (Alex Bennée, 2014-03-17, 1 file, -2/+10)
  Implement URECPE and FRECPE instructions in both scalar and vector forms. The actual reciprocal estimate function is shared with the A32/T32 Neon code. However in A64 we aren't using the Neon "standard FPSCR value" so extra checks are necessary to handle non-squashed denormal inputs which can never happen for A32/T32. Calling conventions for the helpers are thus modified to pass the fpst directly; we mark the helpers as TCG_CALL_NO_RWG since we're changing the declarations anyway. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-id: 1394822294-14837-21-git-send-email-peter.maydell@linaro.org
* target-arm: A64: Implement PMULL instruction (Peter Maydell, 2014-03-17, 1 file, -0/+1)
  Implement the PMULL instruction; this is the last unimplemented insn in the three-reg-diff group. Note that PMULL with size 3 is considered part of the AES part of the crypto extensions (see the ID_AA64ISAR0_EL1 register definition in the v8 ARM ARM), so it isn't necessary to burn an extra feature bit on it, even though we're using more feature bits than a single "crypto extension present/not present" toggle. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Message-id: 1394822294-14837-2-git-send-email-peter.maydell@linaro.org
* exec: Change cpu_abort() argument to CPUState (Andreas Färber, 2014-03-13, 1 file, -1/+1)
  Signed-off-by: Andreas Färber <afaerber@suse.de>
* cpu: Move breakpoints field from CPU_COMMON to CPUState (Andreas Färber, 2014-03-13, 1 file, -2/+2)
  Most targets were using offsetof(CPUFooState, breakpoints) to determine how much of CPUFooState to clear on reset. Use the next field after CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise. Signed-off-by: Andreas Färber <afaerber@suse.de>
* target-arm: Implement WFE as a yield operation (Peter Maydell, 2014-03-10, 1 file, -0/+6)
  Implement WFE to yield our timeslice to the next CPU. This avoids slowdowns in multicore configurations caused by one core busy-waiting on a spinlock which can't possibly be unlocked until the other core has an opportunity to run. This speeds up my test case A15 dual-core boot by a factor of three (though it is still four or five times slower than a single-core boot). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1393339545-22111-1-git-send-email-peter.maydell@linaro.org Reviewed-by: Richard Henderson <rth@twiddle.net> Tested-by: Rob Herring <rob.herring@linaro.org>
* target-arm: Add support for AArch32 ARMv8 CRC32 instructions (Will Newton, 2014-02-26, 1 file, -0/+56)
  Add support for the AArch32 CRC32 and CRC32C instructions added in ARMv8, and add a CPU feature flag to enable them. The CRC32C implementation used is the built-in QEMU implementation, while the CRC-32 implementation comes from zlib. This requires adding zlib to LIBS to ensure it is linked for the linux-user binary. Signed-off-by: Will Newton <will.newton@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1393411566-24104-3-git-send-email-will.newton@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
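  A sketch of what the zlib-backed CRC-32 helper can look like (illustrative; the exact helper name and calling convention belong to target-arm/helper.c, and the inversion handling shown here is an assumption to verify against it):

      #include <stdint.h>
      #include <zlib.h>   /* crc32(); this is why zlib is added to LIBS */

      /* CRC-32 over the low 1, 2 or 4 bytes of 'val', as CRC32B/CRC32H/CRC32W
       * would need.  zlib's crc32() one's-complements the accumulator and the
       * output, which the ARM instruction semantics do not, so the value is
       * inverted on the way in and out to compensate. */
      static uint32_t example_crc32_helper(uint32_t acc, uint32_t val,
                                           uint32_t bytes)
      {
          uint8_t buf[4];

          buf[0] = (uint8_t)val;           /* little-endian byte image */
          buf[1] = (uint8_t)(val >> 8);
          buf[2] = (uint8_t)(val >> 16);
          buf[3] = (uint8_t)(val >> 24);

          return crc32(acc ^ 0xffffffff, buf, bytes) ^ 0xffffffff;
      }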
* target-arm: Remove unnecessary code now read/write fns can't fail (Peter Maydell, 2014-02-20, 1 file, -4/+0)
  Now that cpreg read and write functions can't fail and throw an exception, we can remove the code from the translator that synchronises the guest PC in case an exception is thrown. Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Split cpreg access checks out from read/write functions (Peter Maydell, 2014-02-20, 1 file, -0/+11)
  Several of the system registers handled via the ARMCPRegInfo mechanism have access trap control bits controlling whether the registers are accessible to lower privilege levels. Replace the existing mechanism (allowing the read and write functions to return EXCP_UDEF if access is denied) with a dedicated "check access rights" function pointer in the ARMCPRegInfo. This will allow us to simplify some of the register definitions, which no longer need read/write functions purely to handle the access checks. We take the opportunity to define the return value from the access checking function in a way that allows us to set the correct exception syndrome information for exceptions taken to AArch64 (which may need to distinguish access failures due to a configurable trap or enable from other kinds of access failure). This commit defines the new mechanism but does not move any of the registers across to use it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
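  In outline, the mechanism looks something like the sketch below; the type and enumerator names are stand-ins, and the real definitions (CPAccessResult, ARMCPRegInfo) in the QEMU source are authoritative:

      #include <stdint.h>

      typedef struct ExampleCPUState ExampleCPUState;        /* opaque here */
      typedef struct ExampleCPRegInfo ExampleCPRegInfo;

      /* Result of the access check, kept distinct so the right AArch64
       * syndrome can be chosen later. */
      typedef enum {
          EX_ACCESS_OK,                  /* access permitted */
          EX_ACCESS_TRAP,                /* configurable trap or enable denied it */
          EX_ACCESS_TRAP_UNCATEGORIZED,  /* report as Unknown/Uncategorized */
      } ExampleCPAccessResult;

      struct ExampleCPRegInfo {
          const char *name;
          /* Dedicated access-rights hook, called before readfn/writefn, which
           * can therefore assume the access is legal and never need to signal
           * EXCP_UDEF themselves. */
          ExampleCPAccessResult (*accessfn)(ExampleCPUState *env,
                                            const ExampleCPRegInfo *ri);
          uint64_t (*readfn)(ExampleCPUState *env, const ExampleCPRegInfo *ri);
          void (*writefn)(ExampleCPUState *env, const ExampleCPRegInfo *ri,
                          uint64_t value);
      };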
* target-arm: Log bad system register accesses with LOG_UNIMP (Peter Maydell, 2014-02-20, 1 file, -0/+13)
  Log guest attempts to access unimplemented system registers via the LOG_UNIMP reporting mechanism (for both the 32 bit and 64 bit instruction sets). This is particularly useful for debugging problems where the guest is trying to use a system register that QEMU doesn't implement. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
* target-arm: Add support for AArch32 64bit VCVTB and VCVTT (Will Newton, 2014-02-08, 1 file, -22/+61)
  Add support for the AArch32 floating-point half-precision to double-precision conversion VCVTB and VCVTT instructions. Signed-off-by: Will Newton <will.newton@linaro.org> [PMM: fixed a minor missing-braces style issue] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Add AArch32 SIMD VCVTA, VCVTN, VCVTP and VCVTM (Will Newton, 2014-01-31, 1 file, -1/+52)
  Add support for the AArch32 Advanced SIMD VCVTA, VCVTN, VCVTP and VCVTM instructions. Signed-off-by: Will Newton <will.newton@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Add AArch32 FP VCVTA, VCVTN, VCVTP and VCVTM (Will Newton, 2014-01-31, 1 file, -0/+61)
  Add support for the AArch32 floating-point VCVTA, VCVTN, VCVTP and VCVTM instructions. Signed-off-by: Will Newton <will.newton@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Add AArch32 SIMD VRINTA, VRINTN, VRINTP, VRINTM, VRINTZ (Will Newton, 2014-01-31, 1 file, -1/+39)
  Add support for the AArch32 Advanced SIMD VRINTA, VRINTN, VRINTP, VRINTM and VRINTZ instructions. Signed-off-by: Will Newton <will.newton@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Add support for AArch32 SIMD VRINTX (Will Newton, 2014-01-31, 1 file, -1/+10)
  Add support for the AArch32 Advanced SIMD VRINTX instruction. Signed-off-by: Will Newton <will.newton@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Add support for AArch32 FP VRINTX (Will Newton, 2014-01-31, 1 file, -0/+11)
  Add support for the AArch32 floating-point VRINTX instruction. Signed-off-by: Will Newton <will.newton@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Add support for AArch32 FP VRINTZ (Will Newton, 2014-01-31, 1 file, -0/+16)
  Add support for the AArch32 floating-point VRINTZ instruction. Signed-off-by: Will Newton <will.newton@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Add support for AArch32 FP VRINTR (Will Newton, 2014-01-31, 1 file, -0/+11)
  Add support for the AArch32 floating-point VRINTR instruction. Signed-off-by: Will Newton <will.newton@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Add AArch32 FP VRINTA, VRINTN, VRINTP and VRINTM (Will Newton, 2014-01-31, 1 file, -0/+54)
  Add support for AArch32 ARMv8 FP VRINTA, VRINTN, VRINTP and VRINTM instructions. Signed-off-by: Will Newton <will.newton@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Rename A32 VFP conversion helpers (Will Newton, 2014-01-08, 1 file, -11/+13)
  The VFP conversion helpers for A32 round to zero as this is the only rounding mode supported. Rename these helpers to make it clear that they round to zero and are not suitable for use in the AArch64 code. Signed-off-by: Will Newton <will.newton@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
* target-arm: Use VFP_BINOP macro for min, max, minnum, maxnum (Peter Maydell, 2014-01-08, 1 file, -8/+8)
  Use the VFP_BINOP macro to provide helpers for min, max, minnum and maxnum, rather than hand-rolling them. (The float64 max version is not used by A32 but will be needed for A64.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
* target-arm: Widen exclusive-access support struct fields to 64 bits (Peter Maydell, 2014-01-08, 1 file, -26/+39)
  In preparation for adding support for A64 load/store exclusive instructions, widen the fields in the CPU state struct that deal with address and data values for exclusives from 32 to 64 bits. Although in practice AArch64 and AArch32 exclusive accesses will be generally separate there are some odd theoretical corner cases (eg you should be able to do the exclusive load in AArch32, take an exception to AArch64 and successfully do the store exclusive there), and it's also easier to reason about. The changes in semantics for the variables are:
    exclusive_addr -> extended to 64 bits; -1ULL for "monitor lost", otherwise always < 2^32 for AArch32
    exclusive_val -> extended to 64 bits. 64 bit exclusives in AArch32 now use the high half of exclusive_val instead of a separate exclusive_high
    exclusive_high -> is no longer used in AArch32; extended to 64 bits as it will be needed for AArch64's pair-of-64-bit-values exclusives.
    exclusive_test -> extended to 64 bits, as it is an address. Since this is a linux-user-only field, in arm-linux-user it will always have the top 32 bits zero.
    exclusive_info -> stays 32 bits, as it is neither data nor address, but simply holds register indexes etc. AArch64 will be able to fit all its information into 32 bits as well.
  Note that the refactoring of gen_store_exclusive() coincidentally fixes a minor bug where ldrexd would incorrectly update the first CPU register even if the load for the second register faulted. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
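  The resulting CPU-state fields, restated as a sketch directly from the semantics listed above (the surrounding CPUARMState struct and its exact comments are elided):

      #include <stdint.h>

      typedef struct ExampleExclusiveState {
          uint64_t exclusive_addr;  /* -1ULL means "monitor lost"; always < 2^32
                                       for AArch32 */
          uint64_t exclusive_val;   /* AArch32 64-bit exclusives keep their high
                                       word here instead of in exclusive_high */
          uint64_t exclusive_high;  /* unused by AArch32; second value of an
                                       AArch64 pair-of-64-bit exclusive */
      #ifdef CONFIG_USER_ONLY
          uint64_t exclusive_test;  /* an address; top 32 bits zero in
                                       arm-linux-user */
          uint32_t exclusive_info;  /* register indexes etc., still 32 bits */
      #endif
      } ExampleExclusiveState;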
* target-arm: Remove ARMCPU/CPUARMState from cpregs APIs used by decoder (Peter Maydell, 2014-01-07, 1 file, -3/+4)
  The cpregs APIs used by the decoder (get_arm_cp_reginfo() and cp_access_ok()) currently take either a CPUARMState* or an ARMCPU*. This is problematic for the A64 decoder, which doesn't pass the environment pointer around everywhere the way the 32 bit decoder does. Adjust the parameters these functions take so that we can copy only the relevant info from the CPUARMState into the DisasContext and then use that. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
* target-arm: A64: add support for conditional branches (Alexander Graf, 2013-12-17, 1 file, -5/+9)
  This patch adds emulation for the conditional branch (b.cond) instruction. Signed-off-by: Alexander Graf <agraf@suse.de> [claudio: adapted to new decoder structure, reused arm infrastructure for checking the flags] Signed-off-by: Claudio Fontana <claudio.fontana@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
* target-arm: Split A64 from A32/T32 gen_intermediate_code_internal() (Peter Maydell, 2013-12-17, 1 file, -38/+24)
  The A32/T32 gen_intermediate_code_internal() is complicated because it has to deal with:
    * conditionally executed instructions
    * Thumb IT blocks
    * kernel helper page
    * M profile exception-exit special casing
  None of these apply to A64, so putting the "this is A64 so call the A64 decoder" check in the middle of the A32/T32 loop is confusing and means the A64 decoder's handling of things like conditional jump and singlestepping has to take account of the conditional-execution jumps the main loop might emit. Refactor the code to give A64 its own gen_intermediate_code_internal function instead. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
* target-arm: add support for v8 AES instructions (Ard Biesheuvel, 2013-12-17, 1 file, -0/+26)
  This adds support for the AESE/AESD/AESMC/AESIMC instructions that are available on some v8 implementations of AArch32. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Message-id: 1386266078-6976-1-git-send-email-ard.biesheuvel@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Use new qemu_ld/st opcodes (Richard Henderson, 2013-12-10, 1 file, -31/+25)
  Retain the existing gen_aa32_* inlines, to aid compilation for A64. Cc: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net> Message-id: 1386628626-21627-1-git-send-email-rth@twiddle.net Signed-off-by: Peter Maydell <peter.maydell@linaro.org>