path: root/target-arm/translate-a64.c
Commit log, newest first. Each entry: subject (author, date, files changed, lines -/+).
* target-arm: Report S/NS status in the CPU debug logs (Peter Maydell, 2015-11-03, 1 file, -1/+10)
  If this CPU supports EL3, enhance the printing of the current CPU mode in debug logging to distinguish S from NS modes as appropriate.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 1445883178-576-3-git-send-email-peter.maydell@linaro.org
* target-arm: Bring AArch64 debug CPU display of PSTATE into line with AArch32 (Peter Maydell, 2015-11-03, 1 file, -3/+5)
  The AArch64 debug CPU display of PSTATE as "PSTATE=200003c5 (flags --C-)", tacked onto the end of the same line as the last of the general-purpose registers, is unnecessarily different from the AArch32 display of PSR as "PSR=200001d3 --C- A svc32" on its own line. Update the AArch64 code to put PSTATE on its own line and in the same format, including printing the exception level (mode).
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Message-id: 1445883178-576-2-git-send-email-peter.maydell@linaro.org
* target-*: Advance pc after recognizing a breakpoint (Richard Henderson, 2015-10-28, 1 file, -2/+5)
  Some targets already had this within their logic, but make sure it's present for all targets.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-arm: Fix CPU breakpoint handling (Sergey Fedorov, 2015-10-16, 1 file, -5/+12)
  A QEMU breakpoint match is not necessarily an architectural breakpoint match. If an exception is generated unconditionally during translation, it is effectively impossible to ignore in the debug exception handler. Generate a call to a helper that checks CPU breakpoints, and raise an exception only if a breakpoint matches architecturally.
  Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Break the TB after ISB to execute self-modified code correctly (Sergey Sorokin, 2015-10-16, 1 file, -1/+7)
  If a store instruction modifies code later in the same TB, execution of the TB must be stopped so that the new code is executed correctly. As described in the ARMv8 ARM (D3.4.6), self-modifying code must perform an IC invalidation to be valid, followed by an ISB, so it is enough to end the TB after an ISB instruction during translation. This TB break is also necessary to take any pending interrupts immediately after an ISB (as required by ARMv8 ARM D1.14.4).
  Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
  [PMM: tweaked commit message and comments slightly]
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* tcg: Remove gen_intermediate_code_pc (Richard Henderson, 2015-10-07, 1 file, -27/+3)
  It is no longer used, so tidy up everything reached by it. This includes the gen_opc_* arrays, the search_pc parameter and the inline gen_intermediate_code_internal functions.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add TCG_MAX_INSNS (Richard Henderson, 2015-10-07, 1 file, -0/+3)
  Adjust all translators to respect it.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-arm: Add condexec state to insn_start (Richard Henderson, 2015-10-07, 1 file, -1/+1)
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-*: Introduce and use cpu_breakpoint_test (Richard Henderson, 2015-10-07, 1 file, -13/+13)
  Reduce the boilerplate required for each target. At the same time, move the breakpoint test to after the call to tcg_gen_insn_start. Note that arm and aarch64 do not use cpu_breakpoint_test, but still move the inline test down after tcg_gen_insn_start.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-*: Increment num_insns immediately after tcg_gen_insn_start (Richard Henderson, 2015-10-07, 1 file, -3/+3)
  This does tidy the icount test common to all targets.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-*: Unconditionally emit tcg_gen_insn_start (Richard Henderson, 2015-10-07, 1 file, -4/+1)
  While we're at it, emit the opcode adjacent to where we currently record data for search_pc. This puts gen_io_start et al on the "correct" side of the marker.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Rename debug_insn_start to insn_start (Richard Henderson, 2015-10-07, 1 file, -1/+1)
  With an eye toward making it mandatory.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-arm: Use tcg_gen_extrh_i64_i32 (Richard Henderson, 2015-09-14, 1 file, -25/+9)
  In most cases this eliminates an operation from the translator by combining a shift with an extract. In the case of gen_set_NZ64, we don't need a boolean value for cpu_ZF, merely a non-zero value. Given that we can extract both halves of a 64-bit input in one call, this simplifies the code.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-12-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
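  Aside: a plain-C sketch of the flag derivation described above, not QEMU's actual TCG code; the helper name set_nz64 and its pointer interface are illustrative. Extracting the low and high halves once (what extrl/extrh provide) is enough, because cpu_NF only needs the correct sign bit and cpu_ZF only needs to be non-zero exactly when the 64-bit result is non-zero.

    #include <stdint.h>
    #include <stdio.h>

    /* Derive N and Z flag values from the two 32-bit halves of a 64-bit result. */
    static void set_nz64(uint64_t result, uint32_t *nf, uint32_t *zf)
    {
        uint32_t lo = (uint32_t)result;         /* low half: what extrl yields */
        uint32_t hi = (uint32_t)(result >> 32); /* high half: what extrh yields */

        *nf = hi;      /* bit 31 of hi is bit 63 of the result, i.e. the sign */
        *zf = lo | hi; /* non-zero iff the 64-bit result is non-zero */
    }

    int main(void)
    {
        uint32_t nf, zf;
        set_nz64(0x8000000000000000ull, &nf, &zf);
        printf("N=%d Z=%d\n", (int)(nf >> 31), zf == 0); /* N=1 Z=0 */
        set_nz64(0, &nf, &zf);
        printf("N=%d Z=%d\n", (int)(nf >> 31), zf == 0); /* N=0 Z=1 */
        return 0;
    }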
* target-arm: Recognize ROR (Richard Henderson, 2015-09-14, 1 file, -12/+21)
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-11-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Eliminate unnecessary zero-extend in disas_bitfield (Richard Henderson, 2015-09-14, 1 file, -1/+5)
  For !SF, this initial ext32u can't be optimized away by the current TCG code generator (it would require backward bit-liveness propagation). But since the range of bits for !SF is already constrained by unallocated_encoding, we'll never reference the high bits anyway.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-10-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Recognize UXTB, UXTH, LSR, LSL (Richard Henderson, 2015-09-14, 1 file, -0/+17)
  These are all special case aliases of UBFM.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-9-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
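  Aside: a sketch of the alias conditions this entry relies on (the next entry covers the analogous SBFM aliases), written as standalone C rather than as the QEMU decoder. The conditions follow the ARM ARM's preferred-alias rules for UBFM as I understand them; the function and its return strings are illustrative only.

    #include <stdbool.h>
    #include <stdio.h>

    /* Classify a UBFM encoding by its immediates; sf selects 64-bit (1) or 32-bit (0). */
    static const char *ubfm_alias(bool sf, unsigned immr, unsigned imms)
    {
        unsigned datasize = sf ? 64 : 32;

        if (!sf && immr == 0 && imms == 7)  return "UXTB";
        if (!sf && immr == 0 && imms == 15) return "UXTH";
        if (imms == datasize - 1)           return "LSR #immr";
        if (imms + 1 == immr)               return "LSL #(datasize - 1 - imms)";
        return "UBFM (general case)";
    }

    int main(void)
    {
        printf("%s\n", ubfm_alias(true, 4, 63));  /* LSR Xd, Xn, #4 */
        printf("%s\n", ubfm_alias(true, 60, 59)); /* LSL Xd, Xn, #4 */
        printf("%s\n", ubfm_alias(false, 0, 7));  /* UXTB */
        return 0;
    }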
* target-arm: Recognize SXTB, SXTH, SXTW, ASR (Richard Henderson, 2015-09-14, 1 file, -1/+23)
  These are all special case aliases of SBFM.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-8-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Implement fcsel with movcond (Richard Henderson, 2015-09-14, 1 file, -28/+17)
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-7-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Implement ccmp branchless (Richard Henderson, 2015-09-14, 1 file, -16/+58)
  This can allow much of a ccmp to be elided when particular flags are subsequently dead.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-6-git-send-email-rth@twiddle.net
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Use setcond and movcond for csel (Richard Henderson, 2015-09-14, 1 file, -36/+49)
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-5-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Share all common TCG temporaries (Richard Henderson, 2015-09-14, 1 file, -22/+0)
  This is a bug fix for aarch64. At present, we have branches using the 32-bit (translate.c) versions of cpu_[NZCV]F, but we set the flags using the 64-bit (translate-a64.c) versions of cpu_[NZCV]F. From the view of the TCG code generator, these are unrelated variables.
  The bug is hard to see because we currently only read these variables from branches, and upon reaching a branch TCG will first spill live variables and then reload the arguments of the branch. Since the 32-bit versions were never live until reaching the branch, we'd re-read the data that had just been spilled from the 64-bit versions.
  There is currently no such problem with the cpu_exclusive_* variables, but there's no point in tempting fate.
  Cc: qemu-stable@nongnu.org
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-id: 1441909103-24666-2-git-send-email-rth@twiddle.net
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Fix default_exception_el() function for the case when EL3 is not supported (Sergey Sorokin, 2015-09-08, 1 file, -1/+5)
  If EL3 is not supported in the current configuration, we should not try to get the EL3 bitness.
  Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
  Message-id: 1441208342-10601-2-git-send-email-afarallax@yandex.ru
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Wire up HLT 0xf000 as the A64 semihosting instruction (Peter Maydell, 2015-09-07, 1 file, -2/+22)
  For the A64 instruction set, the semihosting call instruction is 'HLT 0xf000'. Wire this up to call do_arm_semihosting() if semihosting is enabled.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Christopher Covington <christopher.covington@linaro.org>
  Tested-by: Christopher Covington <cov@codeaurora.org>
  Message-id: 1439483745-28752-10-git-send-email-peter.maydell@linaro.org
* tcg: Remove tcg_gen_trunc_i64_i32 (Richard Henderson, 2015-08-24, 1 file, -30/+30)
  Replacing it with tcg_gen_extrl_i64_i32.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-arm: Split DISAS_YIELD from DISAS_WFE (Peter Maydell, 2015-07-06, 1 file, -0/+6)
  Currently we use DISAS_WFE for both WFE and YIELD instructions. This is functionally correct because at the moment both of them are implemented as "yield this CPU back to the top level loop so another CPU has a chance to run". However it's rather confusing that YIELD ends up calling HELPER(wfe), and if we ever want to implement real behaviour for WFE and SEV it's likely to trip us up.
  Split out the yield codepath to use DISAS_YIELD and a new HELPER(yield) function, and have HELPER(wfe) call HELPER(yield).
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 1435672316-3311-2-git-send-email-peter.maydell@linaro.org
  Reviewed-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
* disas: Remove uses of CPU env (Peter Crosthwaite, 2015-06-22, 1 file, -1/+1)
  disas does not need to access the CPU env for any reason. Change the APIs to accept CPU pointers instead. A small change pattern needs to be applied to every target's translate.c. This brings us closer to making disas.o a common-obj and less architecture-specific in general.
  Cc: Richard Henderson <rth@twiddle.net>
  Cc: Peter Maydell <peter.maydell@linaro.org>
  Cc: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
  Cc: Paolo Bonzini <pbonzini@redhat.com>
  Cc: Eduardo Habkost <ehabkost@redhat.com>
  Cc: Michael Walle <michael@walle.cc>
  Cc: Aurelien Jarno <aurelien@aurel32.net>
  Cc: Leon Alrae <leon.alrae@imgtec.com>
  Cc: Jia Liu <proljc@gmail.com>
  Cc: Alexander Graf <agraf@suse.de>
  Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
  Cc: Max Filippov <jcmvbkbc@gmail.com>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
  Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
  Acked-by: Luiz Capitulino <lcapitulino@redhat.com>
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
* target-arm: Don't halt on WFI unless we don't have any work (Peter Maydell, 2015-05-29, 1 file, -0/+4)
  Just NOP the WFI instruction if we have work to do. This doesn't make much difference currently (though it does avoid jumping out to the top level loop and immediately restarting), but the distinction between "halt" and "don't halt" will become more important when the decision to halt requires us to trap to a higher exception level instead.
  Suggested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* target-arm: Extend FP checks to use an EL (Greg Bellows, 2015-05-29, 1 file, -4/+4)
  Extend the ARM disassemble context to take a target exception EL instead of a boolean enable. This change reverses the polarity of the check, making a value of 0 indicate floating point enabled (no exception).
  Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
  [PMM: Use a common TB flag field for AArch32 and AArch64; CPTR_EL2 exists in v7; CPTR_EL2 should trap for EL2 accesses; CPTR_EL2 should not trap for secure accesses; CPTR_EL3 should trap for EL3 accesses; CPACR traps for secure accesses should trap to EL3 if EL3 is AArch32]
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* target-arm: Make singlestate TB flags common between AArch32/64 (Peter Maydell, 2015-05-29, 1 file, -2/+2)
  Currently we keep the TB flags PSTATE_SS and SS_ACTIVE in different bit positions for AArch64 and AArch32. Replace these separate definitions with a single common flag in the upper part of the flags word.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* target-arm: Add exception target el infrastructure (Greg Bellows, 2015-05-29, 1 file, -12/+22)
  Add a CPU state exception target EL field that will be used for communicating the EL to which an exception should be routed. Add a disassembly context field for tracking the EL3 architecture needed for determining the target exception EL. Add a target EL argument to the generic exception helper for callers to specify the EL to which the exception should be routed, and extend the helper to set the newly added CPU state exception target EL. Add a function for setting the target exception EL and update calls to helpers to use it.
  Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
  Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Message-id: 1429722561-12651-2-git-send-email-greg.bellows@linaro.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* tcg: Change translator-side labels to a pointer (Richard Henderson, 2015-03-13, 1 file, -13/+13)
  This is improved type checking for the translators -- it's no longer possible to accidentally swap arguments to the branch functions. Note that the code generating backends still manipulate labels as int.
  With notable exceptions, the scope of the change is just a few lines for each target, so it's not worth building extra machinery to do this change in per-target increments.
  Cc: Peter Maydell <peter.maydell@linaro.org>
  Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
  Cc: Michael Walle <michael@walle.cc>
  Cc: Leon Alrae <leon.alrae@imgtec.com>
  Cc: Anthony Green <green@moxielogic.com>
  Cc: Jia Liu <proljc@gmail.com>
  Cc: Alexander Graf <agraf@suse.de>
  Cc: Aurelien Jarno <aurelien@aurel32.net>
  Cc: Blue Swirl <blauwirbel@gmail.com>
  Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
  Cc: Paolo Bonzini <pbonzini@redhat.com>
  Cc: Max Filippov <jcmvbkbc@gmail.com>
  Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20150212' into staging (Peter Maydell, 2015-02-13, 1 file, -7/+3)
  Convert to linked list.
  # gpg: Signature made Fri 13 Feb 2015 05:40:41 GMT using RSA key ID 4DD0279B
  # gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
  # gpg:                 aka "Richard Henderson <rth@redhat.com>"
  # gpg:                 aka "Richard Henderson <rth@twiddle.net>"
  * remotes/rth/tags/pull-tcg-20150212:
    tcg: Remove unused opcodes
    tcg: Implement insert_op_before
    tcg: Remove opcodes instead of noping them out
    tcg: Put opcodes in a linked list
    tcg: Introduce tcg_op_buf_count and tcg_op_buf_full
    tcg: Move emit of INDEX_op_end into gen_tb_end
    tcg: Reduce ifdefs in tcg-op.c
    tcg: Move some opcode generation functions out of line
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  * tcg: Introduce tcg_op_buf_count and tcg_op_buf_full (Richard Henderson, 2015-02-12, 1 file, -6/+3)
    The method by which we count the number of ops emitted is going to change. Abstract that away into some inlines.
    Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
  * tcg: Move emit of INDEX_op_end into gen_tb_end (Richard Henderson, 2015-02-12, 1 file, -1/+0)
    Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
    Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-arm: A64: Avoid signed shifts in disas_ldst_pair() (Peter Maydell, 2015-02-13, 1 file, -1/+1)
  Avoid shifting potentially negative signed offset values in disas_ldst_pair() by keeping the offset in a uint64_t rather than an int64_t.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 1423233250-15853-5-git-send-email-peter.maydell@linaro.org
* target-arm: A64: Avoid left shifting negative integers in disas_pc_rel_addr (Peter Maydell, 2015-02-13, 1 file, -2/+3)
  Shifting a negative integer left is undefined behaviour in C. Avoid it by assembling and shifting the offset fields as unsigned values and then sign extending as the final action.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 1423233250-15853-4-git-send-email-peter.maydell@linaro.org
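  Aside: a self-contained C sketch of the pattern used by this fix and the disas_ldst_pair() fix above; the field positions and widths below are illustrative stand-ins, not the exact A64 encodings. All assembly of the immediate happens in an unsigned type and the sign extension is done once at the end, so no negative value is ever left-shifted.

    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extend the low 'width' bits of an unsigned value. */
    static int64_t sign_extend(uint64_t value, unsigned width)
    {
        uint64_t sign = 1ull << (width - 1);
        return (int64_t)(((value & (sign + (sign - 1))) ^ sign) - sign);
    }

    int main(void)
    {
        uint32_t insn = 0x90fffff0u;             /* placeholder instruction word */
        uint64_t immhi = (insn >> 5) & 0x7ffff;  /* example 19-bit field */
        uint64_t immlo = (insn >> 29) & 0x3;     /* example 2-bit field */
        uint64_t raw = (immhi << 2) | immlo;     /* unsigned shifts only: no UB */
        int64_t offset = sign_extend(raw, 21);   /* sign-extend as the final step */
        printf("offset = %lld\n", (long long)offset);
        return 0;
    }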
* target-arm: A64: Fix handling of rotate in logic_imm_decode_wmask (Peter Maydell, 2015-02-13, 1 file, -1/+4)
  The code in logic_imm_decode_wmask attempts to rotate a mask value within the bottom 'e' bits of the value with
    mask = (mask >> r) | (mask << (e - r));
  This has two issues:
  * if the element size is 64 then a rotate by zero results in a shift left by 64, which is undefined behaviour
  * if the element size is smaller than 64 then this will leave junk in the value at bit 'e' and above, which is not valid input to bitfield_replicate(). As it happens, the bits at bit 'e' to '2e - r' are exactly the ones which bitfield_replicate is going to copy in there, so this isn't a "wrong code generated" bug, but it's confusing and if we ever put an assert in bitfield_replicate it would fire on valid guest code.
  Fix the former by not doing anything if r is zero, and the latter by masking with bitmask64(e).
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 1423233250-15853-3-git-send-email-peter.maydell@linaro.org
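  Aside: a minimal sketch of the corrected rotate step, assuming bitmask64(width) means "all ones in the low 'width' bits" like QEMU's helper of that name; the function below is illustrative, not the actual decoder.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t bitmask64(unsigned width)
    {
        return width == 64 ? ~0ull : (1ull << width) - 1;
    }

    /* Rotate 'mask' right by 'r' within its bottom 'e' bits. */
    static uint64_t rotate_within(uint64_t mask, unsigned e, unsigned r)
    {
        if (r != 0) {   /* r == 0: nothing to rotate, and skipping avoids a shift by 'e' */
            mask = (mask >> r) | (mask << (e - r));
            mask &= bitmask64(e);   /* drop the junk at bit 'e' and above */
        }
        return mask;
    }

    int main(void)
    {
        /* rotate the 8-bit element 0x0f right by 4 within e = 8 bits: expect 0xf0 */
        printf("0x%llx\n", (unsigned long long)rotate_within(0x0f, 8, 4));
        return 0;
    }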
* target-arm: A64: Fix shifts into sign bit (Peter Maydell, 2015-02-13, 1 file, -3/+3)
  Fix attempts to shift into the sign bit of an int, which is undefined behaviour in C and warned about by the clang sanitizer.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 1423233250-15853-2-git-send-email-peter.maydell@linaro.org
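  Aside: a tiny illustration of the class of problem being fixed; the constants are examples, not the exact expressions changed by the patch.

    #include <stdint.h>

    int main(void)
    {
        /* int bad = 1 << 31;  -- undefined: shifts into the sign bit of an int */
        uint32_t good = 1u << 31;            /* well-defined unsigned shift */
        uint64_t wide = (uint64_t)1 << 63;   /* same idea for 64-bit values */
        return (int)(good >> 31) + (int)(wide >> 63) - 2;  /* 0 on success */
    }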
* target-arm: Use correct mmu_idx for unprivileged loads and stores (Peter Maydell, 2015-02-05, 1 file, -1/+18)
  The MMU index to use for unprivileged loads and stores is more complicated than we currently implement:
  * for A64, it should be "if at EL1, access as if EL0; otherwise access at current EL"
  * for A32/T32, it should be "if EL2, UNPREDICTABLE; otherwise access as if at EL0".
  In both cases, if we want to make the access for Secure EL0 this is not the same mmu_idx as for Non-Secure EL0.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
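  Aside: a compilable C sketch of the selection rule quoted above. The enum values and function name are illustrative placeholders, not QEMU's ARMMMUIdx definitions; the Secure/Non-Secure distinction is deliberately elided, and the UNPREDICTABLE A32/T32-at-EL2 case is simply mapped to the current index.

    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { IDX_EL0, IDX_EL1, IDX_EL2, IDX_EL3 } MMUIdxExample;

    static MMUIdxExample unpriv_access_idx(bool is_a64, int current_el,
                                           MMUIdxExample current_idx)
    {
        if (is_a64) {
            /* A64: at EL1 access as if EL0; otherwise access at the current EL. */
            return current_el == 1 ? IDX_EL0 : current_idx;
        }
        /* A32/T32: UNPREDICTABLE at EL2; otherwise access as if at EL0. */
        return current_el == 2 ? current_idx : IDX_EL0;
    }

    int main(void)
    {
        printf("%d\n", unpriv_access_idx(true, 1, IDX_EL1)); /* 0: EL1 uses the EL0 index */
        printf("%d\n", unpriv_access_idx(true, 2, IDX_EL2)); /* 2: other ELs keep their own */
        return 0;
    }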
* target-arm: Define correct mmu_idx values and pass them in TB flags (Peter Maydell, 2015-02-05, 1 file, -2/+3)
  We currently claim that for ARM the mmu_idx should simply be the current exception level. However this isn't actually correct -- secure EL0 and EL1 should have separate indexes from non-secure EL0 and EL1, since their VA->PA mappings may differ. We also will want an index for stage 2 translations when we properly support EL2.
  Define and document all seven mmu index values that we require, and pass the mmu index in the TB flags rather than the exception level or priv/user bit.
  This change doesn't update the get_phys_addr() code, so our page table walking still assumes a simplistic "user or priv?" model for the moment.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
  ---
  This leaves some odd gaps in the TB flags usage. I will circle back and clean this up later (including moving the other common flags like the singlestep ones to the top of the flags word), but I didn't want to bloat this patch series further.
* target-arm/translate-a64: Fix wrong mmu_idx usage for LDT/STT (Peter Maydell, 2015-02-05, 1 file, -1/+1)
  The LDT/STT (load/store unprivileged) instruction decode was using the wrong MMU index value. This meant that instead of these insns being "always access as if user-mode regardless of current privilege" they were "always access as if kernel-mode regardless of current privilege". This went unnoticed because AArch64 Linux doesn't use these instructions.
  Cc: qemu-stable@nongnu.org
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Greg Bellows <greg.bellows@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  ---
  I'm not counting this as a security issue because I'm assuming nobody treats TCG guests as a security boundary (certainly I would not recommend doing so...)
* gen-icount: check cflags instead of use_icount global (Paolo Bonzini, 2015-01-03, 1 file, -1/+1)
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* translate: check cflags instead of use_icount global (Paolo Bonzini, 2015-01-03, 1 file, -2/+2)
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* target-arm: A64: remove redundant store (Alex Bennée, 2014-11-02, 1 file, -1/+0)
  There is not much point storing the same value twice in a row.
  Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
  Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
  Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* target-arm: rename arm_current_pl to arm_current_el (Greg Bellows, 2014-10-24, 1 file, -8/+8)
  Rename the arm_current_pl CPU function to reflect more accurately that it returns the ARMv8 EL rather than the ARMv7 PL.
  Signed-off-by: Greg Bellows <greg.bellows@linaro.org>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 1413910544-20150-5-git-send-email-greg.bellows@linaro.org
  [PMM: fixed a minor merge resolution error in a couple of hunks]
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Handle SMC/HVC undef-if-no-ELx in pre_* helpers (Peter Maydell, 2014-10-24, 1 file, -2/+2)
  SMC must UNDEF if EL3 is not implemented; similarly HVC UNDEFs if EL2 is not implemented. Move the handling of this from translate-a64.c into the pre_smc and pre_hvc helper functions. This is necessary because use of these instructions for PSCI takes precedence over this UNDEF case, and we can't tell if this is a PSCI call until runtime.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Message-id: 1412865028-17725-5-git-send-email-peter.maydell@linaro.org
* target-arm: A64: Emulate the SMC insn (Edgar E. Iglesias, 2014-09-29, 1 file, -0/+13)
  Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Message-id: 1411718914-6608-10-git-send-email-edgar.iglesias@gmail.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: A64: Emulate the HVC insn (Edgar E. Iglesias, 2014-09-29, 1 file, -9/+22)
  Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Message-id: 1411718914-6608-8-git-send-email-edgar.iglesias@gmail.com
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* target-arm: Implement ARMv8 single-step handling for A64 code (Peter Maydell, 2014-08-19, 1 file, -5/+86)
  Implement ARMv8 software single-step handling for A64 code: correctly update the single-step state machine and generate debug exceptions when stepping A64 code. This patch has no behavioural change since MDSCR_EL1.SS can't be set by the guest yet.
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
* target-arm: A64: Avoid duplicate exit_tb(0) in non-linked goto_tb (Peter Maydell, 2014-08-19, 1 file, -2/+3)
  If gen_goto_tb() decides not to link the two TBs, then the fallback path generates unnecessary code:
  * if singlestep is enabled then we generate unreachable code after the gen_exception_internal(EXCP_DEBUG)
  * if singlestep is disabled then we will generate exit_tb(0) twice, once in gen_goto_tb() and once coming out of the main loop with is_jmp set to DISAS_JUMP
  Correct these deficiencies by only emitting exit_tb() in the non-singlestep case, in which case we can use DISAS_TB_JUMP to suppress the main-loop exit_tb().
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>