path: root/tcg/aarch64
* tcg-aarch64: Use 32-bit loads for qemu_ld_i32 (Richard Henderson, 2014-09-29, 1 file, -12/+15)
  The "old" qemu_ld opcode did not specify the size of the result, and so
  we had to assume full register width. With the new opcodes, we can
  narrow the result.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
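  The width trick this relies on: any 32-bit register write on AArch64
  clears the upper 32 bits, so an i32 load result needs no extra
  extension. A minimal C analogue of that register behavior (purely
  illustrative, not backend code):

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint64_t x = 0xdeadbeefcafef00dULL;
          /* Assigning a 32-bit value models "ldr w0, [...]": the upper
             32 bits of the 64-bit register are cleared for free. */
          x = (uint32_t)0x12345678;
          printf("%#llx\n", (unsigned long long)x);  /* 0x12345678 */
          return 0;
      }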
* tcg: Remove TCG_TARGET_HAS_new_ldst (Richard Henderson, 2014-06-04, 1 file, -2/+0)
  Since all backends have been converted, remove the compatibility code.
  Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Make debug_frame const (Richard Henderson, 2014-05-28, 1 file, -13/+9)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Remove unreachable code in tcg_out_op and op_defs (Richard Henderson, 2014-05-12, 1 file, -19/+3)
  The INDEX_op_call case has just been obsoleted; the mov and movi cases
  have not been reachable for years. Attempt to document this both in
  each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT. Because of the
  TCG_OPF_NOT_PRESENT change, this must be done for all targets in a
  single commit.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Define TCG_TARGET_INSN_UNIT_SIZE (Richard Henderson, 2014-05-12, 2 files, -69/+53)
  And use tcg pointer differencing functions as appropriate.
  Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
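  A minimal sketch of what a 4-byte insn unit buys: with code pointers
  typed as a 32-bit unit, plain pointer subtraction already yields the
  displacement in instruction units that AArch64 branch encodings want
  (typedef modeled on TCG's tcg_insn_unit; the helper name is
  illustrative):

      #include <stddef.h>
      #include <stdint.h>

      typedef uint32_t tcg_insn_unit;   /* TCG_TARGET_INSN_UNIT_SIZE == 4 */

      /* B/BL take a signed displacement counted in instructions, which
         is exactly what subtracting two tcg_insn_unit pointers gives. */
      static ptrdiff_t branch_disp(const tcg_insn_unit *target,
                                   const tcg_insn_unit *code_ptr)
      {
          return target - code_ptr;
      }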
* tcg: Add INDEX_op_trunc_shr_i32 (Richard Henderson, 2014-04-28, 1 file, -0/+1)
  Let the backend do something special for truncation.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Use HOST_WORDS_BIGENDIAN (Richard Henderson, 2014-04-18, 1 file, -1/+0)
  Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Remove w constraint (Richard Henderson, 2014-04-18, 1 file, -22/+18)
  Now redundant with the type parameter to tcg_target_const_match.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add TCGType parameter to tcg_target_const_match (Richard Henderson, 2014-04-18, 1 file, -1/+1)
  Most 64-bit targets need to be able to ignore the high bits of a
  TCG_TYPE_I32 value.
  Suggested-by: Stuart Brady <sdb@zubnet.me.uk>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
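  A sketch of the idea against AArch64's 12-bit ADD immediate (the real
  tcg_target_const_match dispatches on constraint letters; the matcher
  below is made up for illustration):

      #include <stdbool.h>
      #include <stdint.h>

      /* For TCG_TYPE_I32 the high 32 bits are don't-care, so normalize
         by sign-extending before testing the immediate forms. */
      static bool match_aimm(int64_t val, bool type_is_i32)
      {
          if (type_is_i32) {
              val = (int32_t)val;
          }
          return val >= 0 && val <= 0xfff;  /* 12-bit ADD/SUB immediate */
      }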
* tcg: Fix warning (1-bit signed bitfield entry) and replace int by bool (Stefan Weil, 2014-04-18, 1 file, -3/+3)
  Static code analyzers complain about signed bitfields with only a
  single bit. is_ld is used as a boolean value, so make it bool. ppc64
  already used bool for the 2nd argument is_ld of the local function
  add_qemu_ldst_label. Modify all other TCG targets to follow this
  example.
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
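  The warning in question: a 1-bit signed bitfield can only represent
  0 and -1, so storing 1 silently reads back as -1 on compilers where a
  plain int bitfield is signed. A self-contained demonstration (not
  QEMU code):

      #include <stdbool.h>
      #include <stdio.h>

      struct before { int  is_ld : 1; };  /* signedness trips analyzers */
      struct after  { bool is_ld : 1; };  /* holds exactly 0 or 1 */

      int main(void)
      {
          struct before b = { .is_ld = 1 };
          struct after  a = { .is_ld = true };
          /* Typically prints -1 for the int:1 field, 1 for bool:1. */
          printf("int:1 = %d, bool:1 = %d\n", b.is_ld, a.is_ld);
          return 0;
      }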
* tcg-aarch64: Use tcg_out_mov in preference to tcg_out_movr (Richard Henderson, 2014-04-16, 1 file, -9/+7)
  It's the more canonical interface.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Prefer unsigned offsets before signed offsets for ldst (Richard Henderson, 2014-04-16, 1 file, -5/+6)
  The assembler seems to prefer them, perhaps we should too.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
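  For context, the two offset forms being chosen between, sketched as
  standalone predicates (field widths are from the ARMv8 encodings;
  function names are illustrative):

      #include <stdbool.h>
      #include <stdint.h>

      /* LDR/STR (unsigned immediate): 12-bit offset scaled by the access
         size, so e.g. an 8-byte load reaches [0, 32760] in steps of 8. */
      static bool fits_unsigned_scaled(int64_t off, int size_log2)
      {
          return off >= 0
              && (off & ((1 << size_log2) - 1)) == 0
              && (off >> size_log2) <= 0xfff;
      }

      /* LDUR/STUR (unscaled): signed 9-bit byte offset, [-256, 255]. */
      static bool fits_signed_unscaled(int64_t off)
      {
          return off >= -256 && off <= 255;
      }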
* tcg-aarch64: Introduce tcg_out_insn_3312, _3310, _3313 (Richard Henderson, 2014-04-16, 1 file, -87/+89)
  Replace aarch64_ldst_op_data with AArch64LdstType, as it wasn't
  encoded for the proper shift for the field and was confusing. Merge
  aarch64_ldst_op_data, AArch64LdstType, and a few stray opcode bits
  into a single I3312_* argument, eliminating some magic numbers from
  the helper functions.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Merge aarch64_ldst_get_data/type into tcg_out_op (Richard Henderson, 2014-04-16, 1 file, -83/+32)
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Introduce tcg_out_insn_3507 (Richard Henderson, 2014-04-16, 1 file, -24/+33)
  Cleaning up the implementation of REV and REV16 at the same time.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Support stores of zero (Richard Henderson, 2014-04-16, 1 file, -16/+19)
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Implement TCG_TARGET_HAS_new_ldst (Richard Henderson, 2014-04-16, 2 files, -60/+31)
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Pass qemu_ld/st arguments directly (Richard Henderson, 2014-04-16, 1 file, -32/+17)
  Instead of passing them in the "args" array.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Use TCGMemOp in qemu_ld/st (Richard Henderson, 2014-04-16, 1 file, -68/+63)
  Making the bswap conditional on the memop instead of a compile-time
  test.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Use ADR to pass the return address to the ld/st helpers (Richard Henderson, 2014-04-16, 1 file, -2/+9)
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Use tcg_out_call for qemu_ld/st (Richard Henderson, 2014-04-16, 1 file, -4/+2)
  In some cases, a direct branch will be in range.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Avoid add with zero in tlb load (Richard Henderson, 2014-04-16, 1 file, -9/+19)
  Some guest env are small enough to reach the tlb with only a 12-bit
  addition.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
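  The relevant encoding fact: AArch64's ADD (immediate) takes a 12-bit
  unsigned value, optionally shifted left by 12. A standalone predicate
  for when the TLB's offset inside env fits in one ADD (name
  illustrative):

      #include <stdbool.h>
      #include <stdint.h>

      static bool add_imm_fits(uint64_t off)
      {
          return off <= 0xfff                           /* add xd, xn, #imm */
              || ((off & 0xfff) == 0 && off <= 0xfff000); /* ..., lsl #12 */
      }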
* tcg-aarch64: Implement tcg_register_jit (Richard Henderson, 2014-04-16, 1 file, -15/+69)
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Introduce tcg_out_insn_3314 (Richard Henderson, 2014-04-16, 1 file, -67/+33)
  Combines 4 other inline functions and tidies the prologue.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Reuse LR in translated code (Richard Henderson, 2014-04-16, 2 files, -33/+33)
  It's obviously call-clobbered, but is otherwise unused. Repurpose it
  as the TCG temporary.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Use CBZ and CBNZ (Richard Henderson, 2014-04-16, 1 file, -2/+24)
  A compare and branch against zero happens at the start of every
  single TB.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
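  A sketch of the selection this enables: an equality or inequality
  brcond against constant zero fuses into one CBZ/CBNZ instead of CMP
  plus B.cond (the enum and helper are illustrative, not the backend's
  types):

      #include <stdbool.h>
      #include <stdint.h>

      typedef enum { COND_EQ, COND_NE, COND_LT /* ... */ } Cond;

      static bool can_fuse_cbz(Cond c, bool arg_is_const, int64_t arg_val)
      {
          return arg_is_const && arg_val == 0
              && (c == COND_EQ || c == COND_NE);
          /* true  -> emit cbz/cbnz reg, label
             false -> emit cmp reg, ...; b.cond label */
      }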
* tcg-aarch64: Create tcg_out_brcond (Richard Henderson, 2014-04-16, 1 file, -20/+14)
  Rearrange code to put the compare and branch in the same place.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Use symbolic names for branches (Richard Henderson, 2014-04-16, 1 file, -31/+43)
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Use adrp in tcg_out_movi (Richard Henderson, 2014-04-16, 1 file, -0/+23)
  Loading a qemu pointer as an immediate happens often. E.g.

   - exit_tb $0x7fa8140013
   + exit_tb $0x7f81ee0013
  ...
   - : d2800260  mov  x0, #0x13
   - : f2b50280  movk x0, #0xa814, lsl #16
   - : f2c00fe0  movk x0, #0x7f, lsl #32
   + : 90ff1000  adrp x0, 0x7f81ee0000
   + : 91004c00  add  x0, x0, #0x13

  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
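  ADRP forms a 4 KiB-page-aligned address within roughly plus or minus
  4 GiB of the instruction itself, so the test is whether the page
  displacement fits the 21-bit signed immediate. A standalone sketch
  (names illustrative):

      #include <stdbool.h>
      #include <stdint.h>

      static bool adrp_reachable(uint64_t value, uint64_t code_addr)
      {
          int64_t disp = (int64_t)(value >> 12) - (int64_t)(code_addr >> 12);
          return disp >= -(1 << 20) && disp < (1 << 20);
          /* if true: adrp xd, page(value); add xd, xd, #(value & 0xfff) */
      }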
* tcg-aarch64: Special case small constants in tcg_out_movi (Richard Henderson, 2014-04-16, 1 file, -0/+10)
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Use ORRI in tcg_out_movi (Richard Henderson, 2014-04-16, 1 file, -31/+39)
  The subset of logical immediates that we support is quite quick to
  test, and such constants are quite common to want to load.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
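  A sketch of the kind of quick test meant here: accept one contiguous
  run of ones (or its inverse), ignoring the replicated sub-field
  patterns the full logical-immediate encoding also allows. Based on
  the simplified subset this series describes; treat as illustrative:

      #include <stdbool.h>
      #include <stdint.h>

      static bool is_limm(uint64_t v)
      {
          if ((int64_t)v < 0) {
              v = ~v;                /* normalize so the msb is clear */
          }
          if (v == 0) {
              return false;          /* 0 and ~0 are not encodable */
          }
          v += v & -v;               /* adding the lowest set bit collapses
                                        a contiguous run of ones */
          return (v & (v - 1)) == 0; /* power of two => exactly one run */
      }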
* tcg-aarch64: Use MOVN in tcg_out_movi (Richard Henderson, 2014-04-16, 1 file, -13/+50)
  When profitable, initialize the register with MOVN instead of MOVZ,
  before setting the remaining lanes with MOVK.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
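  "Profitable" comes down to counting 16-bit lanes: MOVZ leaves
  untouched lanes as 0x0000, MOVN as 0xffff, and each lane differing
  from that background costs one MOVK. A standalone sketch:

      #include <stdint.h>

      /* Number of MOVK instructions needed after seeding the register
         with the given 16-bit background lane value. */
      static int movk_count(uint64_t v, uint16_t background)
      {
          int n = 0;
          for (int i = 0; i < 64; i += 16) {
              if (((v >> i) & 0xffff) != background) {
                  n++;
              }
          }
          return n;
      }
      /* Prefer MOVN when movk_count(v, 0xffff) < movk_count(v, 0). */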
* tcg-aarch64: Use TCGType and TCGMemOp constants (Richard Henderson, 2014-04-16, 1 file, -35/+38)
  Rather than raw constants that could mean anything.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Use intptr_t appropriately (Richard Henderson, 2014-04-16, 1 file, -5/+5)
  As opposed to tcg_target_long.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Introduce tcg_out_insn_3405 (Richard Henderson, 2014-03-14, 1 file, -21/+27)
  Cleaning up the implementation of tcg_out_movi at the same time.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Support div, rem (Richard Henderson, 2014-03-14, 2 files, -13/+45)
  Clean up multiply at the same time. For remainder, generic code will
  produce mul+sub, whereas we can implement with msub.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
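  MSUB computes Rd = Ra - Rn * Rm in a single instruction, so remainder
  becomes a two-instruction sequence. A C model of the emitted pair
  (illustrative, not the emitter itself):

      #include <stdint.h>

      /* Assumes b != 0; sdiv itself yields 0 on division by zero,
         but the C division here would be undefined. */
      static int64_t rem_via_msub(int64_t a, int64_t b)
      {
          int64_t q = a / b;   /* sdiv q, a, b */
          return a - q * b;    /* msub r, q, b, a  (one insn, not mul+sub) */
      }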
* tcg-aarch64: Support muluh, mulsh (Richard Henderson, 2014-03-14, 2 files, -2/+14)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Support add2, sub2 (Richard Henderson, 2014-03-14, 2 files, -4/+80)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Support deposit (Richard Henderson, 2014-03-14, 2 files, -21/+49)
  Also tidy the implementation of ubfm, sbfm, extr in order to share
  code.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
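  For reference, the operation BFM implements here, written out as the
  portable bit manipulation it replaces (this matches the shape of
  QEMU's generic deposit helper; shown standalone):

      #include <stdint.h>

      /* Insert the low `len` bits of `val` into `dst` at bit `pos`.
         Assumes 0 < len <= 64 and pos + len <= 64. */
      static uint64_t deposit64(uint64_t dst, int pos, int len, uint64_t val)
      {
          uint64_t mask = (~0ULL >> (64 - len)) << pos;
          return (dst & ~mask) | ((val << pos) & mask);
      }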
* tcg-aarch64: Use tcg_out_insn for setcond (Richard Henderson, 2014-03-14, 1 file, -9/+3)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Support movcond (Richard Henderson, 2014-03-14, 2 files, -2/+36)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Support andc, orc, eqv, not, neg (Richard Henderson, 2014-03-14, 2 files, -10/+67)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Handle constant operands to and, or, xor (Richard Henderson, 2014-03-14, 1 file, -49/+107)
  Handle a simplified set of logical immediates for the moment. The way
  gcc and binutils do it, with 52k worth of tables, and a binary search
  depth of log2(5334) = 13, seems slow for the most common cases. (The
  quick test sketched after the ORRI entry above is the kind of check
  this refers to.)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Handle constant operands to add, sub, and compare (Richard Henderson, 2014-03-14, 1 file, -22/+78)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Implement mov with tcg_out_insn (Richard Henderson, 2014-03-14, 1 file, -15/+9)
  Avoid the magic numbers in the current implementation.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Introduce tcg_out_insn_3401 (Richard Henderson, 2014-03-14, 1 file, -46/+26)
  This merges the implementation of tcg_out_addi and tcg_out_subi.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
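  A sketch of how one helper can cover both: flip between the
  ADD-immediate and SUB-immediate encodings on the sign of the constant.
  The opcode values are the standard ARMv8 32-bit ones; the function
  name and emission style are illustrative:

      #include <stdint.h>

      /* Assumes |aimm| fits in the 12-bit immediate field. */
      static uint32_t encode_addsubi(unsigned rd, unsigned rn, int64_t aimm)
      {
          uint32_t base = aimm >= 0 ? 0x11000000u   /* add wd, wn, #imm12 */
                                    : 0x51000000u;  /* sub wd, wn, #imm12 */
          uint64_t imm = aimm >= 0 ? (uint64_t)aimm : (uint64_t)-aimm;
          return base | (uint32_t)(imm & 0xfff) << 10 | rn << 5 | rd;
      }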
* tcg-aarch64: Convert shift insns to tcg_out_insn (Richard Henderson, 2014-03-14, 1 file, -31/+21)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
* tcg-aarch64: Introduce tcg_out_insn (Richard Henderson, 2014-03-14, 1 file, -36/+58)
  Converting the add/sub (3.5.2) and logical shifted (3.5.10)
  instruction groups to the new scheme.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
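  The scheme groups opcodes by encoding format (the section numbers
  come from the ARMv8 ISA manual) and gives each format one emitter. A
  standalone sketch of the 3.5.2 add/sub shifted-register group (opcode
  values from the spec; names modeled on the commit, details assumed):

      #include <stdint.h>

      typedef enum {
          I3502_ADD = 0x0b000000u,   /* add (shifted register) */
          I3502_SUB = 0x4b000000u,   /* sub (shifted register) */
      } AArch64Insn;

      /* One emitter per format: 3.5.2 packs sf | Rm | Rn | Rd, with
         the sf bit selecting the 64-bit form. */
      static uint32_t insn_3502(AArch64Insn insn, unsigned sf,
                                unsigned rd, unsigned rn, unsigned rm)
      {
          return (uint32_t)insn | (uint32_t)sf << 31
               | rm << 16 | rn << 5 | rd;
      }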
* tcg-aarch64: Remove nop from qemu_st slow path (Richard Henderson, 2014-03-08, 1 file, -7/+0)
  Commit 023261ef851b22a04f6c5d76da870051031757a6 failed to remove a
  nop that's no longer required.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-aarch64: Simplify tcg_out_ldst_9 encoding (Richard Henderson, 2014-03-08, 1 file, -12/+2)
  At first glance the code appears to be using 1's complement encoding,
  a la AArch32. Except that the constant is "off", creating a
  complicated split-field 2's complement encoding. Much clearer to just
  use a normal mask and shift.
  Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
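  The simplification in question, sketched standalone: the signed 9-bit
  offset of the unscaled load/store form occupies bits [20:12], and
  two's complement means a plain mask-and-shift encodes negative
  offsets correctly (function name illustrative):

      #include <stdint.h>

      static uint32_t encode_ldst9(uint32_t insn, unsigned rd, unsigned rn,
                                   int32_t offset) /* -256 <= offset <= 255 */
      {
          return insn | rd | rn << 5 | ((uint32_t)offset & 0x1ff) << 12;
      }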