path: root/tcg/arm
* linux-user: remove useless macros GUEST_BASE and RESERVED_VA (Laurent Vivier, 2015-08-24; 1 file, -4/+4)
  As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base and the macros GUEST_BASE and RESERVED_VA become useless: replace them by their values.
  Reviewed-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Mask TCGMemOp appropriately for indexing (Richard Henderson, 2015-06-09; 1 file, -3/+3)
  The addition of MO_AMASK means that places that used inverted masks need to be changed to use positive masks, and places that failed to mask the intended bits need updating.
  Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
  Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS (Paolo Bonzini, 2015-06-03; 1 file, -0/+1)
  This will be used to size the TLB when more than 8 MMU modes are used by the target. Limitations come from the limited size of the immediate fields (which sometimes, as in the case of Aarch64, extend to instructions that shift the immediate).
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Alexander Graf <agraf@suse.de>
* tcg: Push merged memop+mmu_idx parameter to softmmu routines (Richard Henderson, 2015-05-14; 1 file, -13/+14)
  The extra information is not yet used but it is now available. This requires minor changes through all of the tcg backends.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Merge memop and mmu_idx parameters to qemu_ld/st (Richard Henderson, 2015-05-14; 1 file, -4/+8)
  At the tcg opcode level, not at the tcg-op.h generator level. This requires minor changes through all of the tcg backends, but none of the cpu translators.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
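  As an illustration of the packing idea only (the field width and helper names below are assumptions for this sketch, not QEMU's actual encoding), merging the two parameters amounts to something like:

    #include <stdint.h>

    /* Carry the memory operation and the MMU index in one opcode operand. */
    typedef uint32_t demo_memop_idx;

    static demo_memop_idx demo_make_memop_idx(unsigned memop, unsigned mmu_idx)
    {
        return (memop << 4) | (mmu_idx & 0xf);   /* 4-bit index assumed */
    }

    static unsigned demo_get_memop(demo_memop_idx oi)  { return oi >> 4; }
    static unsigned demo_get_mmuidx(demo_memop_idx oi) { return oi & 0xf; }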
* tcg: Change generator-side labels to a pointer (Richard Henderson, 2015-03-13; 1 file, -7/+7)
  This is less about improved type checking than enabling a subsequent change to the representation of labels.
  Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
  Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
  Cc: Andrzej Zaborowski <balrogg@gmail.com>
  Cc: Peter Maydell <peter.maydell@linaro.org>
  Cc: Aurelien Jarno <aurelien@aurel32.net>
  Cc: Blue Swirl <blauwirbel@gmail.com>
  Cc: Stefan Weil <sw@weilnetz.de>
  Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Remove TCG_TARGET_HAS_new_ldst (Richard Henderson, 2014-06-04; 1 file, -2/+0)
  Since all backends have been converted, remove the compatibility code.
  Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Make debug_frame const (Richard Henderson, 2014-05-28; 1 file, -13/+9)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Remove unreachable code in tcg_out_op and op_defs (Richard Henderson, 2014-05-12; 1 file, -29/+3)
  The INDEX_op_call case has just been obsoleted; the mov and movi cases have not been reachable for years. Attempt to document this both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT. Because of the TCG_OPF_NOT_PRESENT change, this must be done for all targets in a single commit.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Define TCG_TARGET_INSN_UNIT_SIZE (Richard Henderson, 2014-05-12; 2 files, -96/+55)
  And use tcg pointer differencing functions as appropriate.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Use HOST_WORDS_BIGENDIAN (Richard Henderson, 2014-04-18; 1 file, -1/+0)
  Instead of rolling a local TCG_TARGET_WORDS_BIGENDIAN.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Relax requirement for mulu2_i32 on 32-bit hosts (Richard Henderson, 2014-04-18; 1 file, -0/+1)
  Instead require either mulu2_i32 or muluh_i32. The code in tcg-op.h already supports looking for both.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add TCGType parameter to tcg_target_const_match (Richard Henderson, 2014-04-18; 1 file, -1/+1)
  Most 64-bit targets need to be able to ignore the high bits of a TCG_TYPE_I32 value.
  Suggested-by: Stuart Brady <sdb@zubnet.me.uk>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool (Stefan Weil, 2014-04-18; 1 file, -3/+3)
  Static code analyzers complain about signed bitfields with only a single bit. is_ld is used as a boolean value, so make it bool. ppc64 already used bool for the 2nd argument is_ld of the local function add_qemu_ldst_label. Modify all other TCG targets to follow this example.
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
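  For context, a minimal C illustration of the warning class being fixed; the struct and field below are hypothetical stand-ins, not QEMU's actual TCGLabelQemuLdst:

    #include <stdbool.h>
    #include <stdio.h>

    struct demo_old { int  is_ld : 1; };  /* signed 1-bit field: holds only 0 or -1 */
    struct demo_new { bool is_ld;     };  /* used as a flag, so say so in the type  */

    int main(void)
    {
        struct demo_old o = { .is_ld = 1 };
        struct demo_new n = { .is_ld = true };
        /* On typical compilers the signed 1-bit field reads back as -1. */
        printf("old: %d, new: %d\n", o.is_ld, n.is_ld);
        return 0;
    }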
* tcg-arm: Avoid ldrd/strd for user-only emulation (Richard Henderson, 2014-03-27; 1 file, -4/+17)
  The arm ldrd/strd insns must cause alignment traps, whereas at least for armv7 ldr/str must handle unaligned operations. While this is hardly the only problem facing user-only emu, this solves one problem for i386 on armv7 emulation.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Reported-by: Huw Davies <huw@codeweavers.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: The shift count of op_rotl_i32 is in args[2] not args[1] (Huw Davies, 2014-02-17; 1 file, -1/+1)
  It's this that should be subtracted from 0x20 when converting to a right rotate.
  Cc: qemu-stable@nongnu.org
  Signed-off-by: Huw Davies <huw@codeweavers.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
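  A C model of the conversion (illustrative only; the backend emits the host rotate-right instruction rather than this expression), with the rotate count coming from args[2]:

    #include <stdint.h>

    /* rotl(x, n) is emitted as a rotate-right by (0x20 - n). */
    static uint32_t demo_rotl_via_rotr(uint32_t x, unsigned n)
    {
        unsigned rshift = (0x20 - (n & 31)) & 31;
        return (x >> rshift) | (x << ((32 - rshift) & 31));
    }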
* tcg-arm: Use qemu_getauxval (Richard Henderson, 2013-11-30; 1 file, -9/+5)
  Allow host detection on linux systems without glibc 2.16 or later.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Improve GUEST_BASE qemu_ld/st (Richard Henderson, 2013-10-12; 1 file, -104/+116)
  If we pull the code to emit the actual load/store into a subroutine, we can share the reg+reg addressing mode code between softmmu and usermode. This lets us load GUEST_BASE into a temporary register rather than attempting to add it piece-wise to the address. Which lets us use movw+movt for armv7, rather than (up to) 4 adds. Code size for pre-armv7 stays the same.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Convert to new ldst opcodes (Richard Henderson, 2013-10-12; 2 files, -71/+38)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Tidy variable naming convention in qemu_ld/st (Richard Henderson, 2013-10-12; 1 file, -115/+115)
  s/addr_reg2/addrhi/
  s/addr_reg/addrlo/
  s/data_reg2/datahi/
  s/data_reg/datalo/
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Convert to le/be ldst helpers (Richard Henderson, 2013-10-12; 1 file, -21/+29)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Use TCGMemOp within qemu_ldst routines (Richard Henderson, 2013-10-12; 1 file, -64/+61)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add qemu_ld_st_i32/64 (Richard Henderson, 2013-10-10; 1 file, -0/+2)
  Step two in the transition, adding the new ldst opcodes. Keep the old opcodes around until all backends support the new opcodes.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add tcg-be-ldst.h (Richard Henderson, 2013-10-10; 1 file, -24/+3)
  Move TCGLabelQemuLdst and related stuff out of tcg.h.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Move the tlb addend load earlier (Richard Henderson, 2013-10-01; 1 file, -5/+6)
  There are free scheduling slots between the sequence of comparison instructions. This requires changing the register in use to avoid conflict with those compares.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Remove restriction on qemu_ld output register (Richard Henderson, 2013-10-01; 1 file, -24/+34)
  The main intent of the patch is to allow the tlb addend register to be changed, without tying that change to the constraint. But the most common side-effect seems to be to enable usage of ldrd with the r0,r1 pair.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Return register containing tlb addend (Richard Henderson, 2013-10-01; 1 file, -29/+30)
  Preparatory to rescheduling the tlb load, and changing said register. Continues to use R1 for now.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Move load of tlb addend into tcg_out_tlb_read (Richard Henderson, 2013-10-01; 1 file, -37/+23)
  This allows us to make more intelligent decisions about the relative offsets of the tlb comparator and the addend, avoiding any need of writeback addressing.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Use QEMU_BUILD_BUG_ON to verify constraints on tlb (Richard Henderson, 2013-10-01; 1 file, -5/+10)
  One of the two constraints we already checked via #if, but the tlb offset distance was only checked at runtime.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
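  A sketch of the compile-time check being added; the type, macro and the 12-bit limit below are stand-ins for illustration, while the real code uses QEMU_BUILD_BUG_ON against the actual TLB layout:

    #include <stddef.h>
    #include <stdint.h>

    #define DEMO_BUILD_BUG_ON(x) _Static_assert(!(x), #x)

    typedef struct {
        uintptr_t addr_read;
        uintptr_t addr_write;
        uintptr_t addend;
    } demo_tlb_entry;

    /* The addend must be reachable from the comparator with a small
     * immediate offset, e.g. the 12-bit offset of an ARM ldr. */
    DEMO_BUILD_BUG_ON(offsetof(demo_tlb_entry, addend)
                      - offsetof(demo_tlb_entry, addr_read) > 0xfff);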
* tcg-arm: Use strd for tcg_out_arg_reg64 (Richard Henderson, 2013-10-01; 1 file, -3/+10)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Rearrange slow-path qemu_ld/st (Richard Henderson, 2013-10-01; 1 file, -90/+87)
  Use the new helper_ret_*_mmu routines. Use a conditional call to arrange for a tail-call from the store path, and to load the return address for the helper for the load path.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Use ldrd/strd for appropriate qemu_ld/st64 (Richard Henderson, 2013-10-01; 1 file, -5/+43)
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* exec: Split softmmu_defs.h (Richard Henderson, 2013-09-02; 1 file, -2/+0)
  The _cmmu helpers can be moved to exec-all.h. The helpers that are used from TCG will shortly need access to tcg_target_long so move their declarations into tcg.h. This requires minor include adjustments to all TCG backends.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Change tcg_out_ld/st offset to intptr_t (Richard Henderson, 2013-09-02; 1 file, -2/+2)
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Change relocation offsets to intptr_t (Richard Henderson, 2013-09-02; 1 file, -4/+4)
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Change flush_icache_range arguments to uintptr_t (Richard Henderson, 2013-09-02; 1 file, -5/+4)
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add muluh and mulsh opcodes (Richard Henderson, 2013-09-02; 1 file, -0/+2)
  Use them in places where mulu2 and muls2 are used. Optimize mulx2 with dead low part to mulxh.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
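  What the two opcodes compute, modelled in C for the 32-bit case (the helper names below are illustrative, mirroring the opcode names; the C only describes the semantics, not how a backend implements them):

    #include <stdint.h>

    static uint32_t demo_muluh_i32(uint32_t a, uint32_t b)
    {
        return (uint32_t)(((uint64_t)a * b) >> 32);   /* unsigned high half */
    }

    static int32_t demo_mulsh_i32(int32_t a, int32_t b)
    {
        return (int32_t)(((int64_t)a * b) >> 32);     /* signed high half */
    }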
* tcg-arm: Implement tcg_register_jit (Richard Henderson, 2013-07-09; 1 file, -9/+67)
  Allows unwinding past the code_gen_buffer.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Use AT_PLATFORM to detect the host ISA (Richard Henderson, 2013-07-09; 1 file, -4/+16)
  With this we can generate armv7 insns even when the OS compiles for a lower common denominator. The macros are arranged so that when we do compile for a given ISA, all of the runtime checks for that ISA are optimized away.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
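  A sketch of the detection idea, assuming an ARM Linux host where the AT_PLATFORM string looks like "v5l", "v6l" or "v7l"; the function name and parsing details are illustrative, not the backend's actual code:

    #include <sys/auxv.h>   /* getauxval, AT_PLATFORM (glibc 2.16+) */

    static int demo_detect_host_arm_arch(int compiled_for)
    {
        const char *pl = (const char *)getauxval(AT_PLATFORM);
        if (pl && pl[0] == 'v' && pl[1] >= '4' && pl[1] <= '9') {
            return pl[1] - '0';      /* runtime architecture level */
        }
        return compiled_for;         /* fall back to the compile-time value */
    }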
* tcg-arm: Simplify logic in detecting the ARM ISA in use (Richard Henderson, 2013-07-09; 1 file, -39/+23)
  GCC 4.8 defines a handy __ARM_ARCH symbol that we can use, which will make us nicely forward compatible with ARMv8 AArch32.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
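  The shape of the simplification, sketched with an abbreviated fallback list (the real backend checks more of the per-architecture macros):

    /* Prefer the compiler-provided __ARM_ARCH (GCC 4.8+, also defined for
     * ARMv8 AArch32); synthesize it from older macros when it is absent. */
    #ifndef __ARM_ARCH
    # if defined(__ARM_ARCH_7__) || defined(__ARM_ARCH_7A__) \
         || defined(__ARM_ARCH_7R__) || defined(__ARM_ARCH_7M__)
    #  define __ARM_ARCH 7
    # elif defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6K__)
    #  define __ARM_ARCH 6
    # else
    #  define __ARM_ARCH 5
    # endif
    #endif

    static const int demo_arm_arch = __ARM_ARCH;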
* tcg-arm: Rename use_armv5_instructions to use_armv5t_instructions (Richard Henderson, 2013-07-09; 1 file, -6/+6)
  As it really controls the availability of a thumb interworking instruction on armv5t.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Make use of conditional availability of opcodes for divide (Richard Henderson, 2013-07-09; 2 files, -8/+22)
  We can now detect and use divide instructions at runtime, rather than having to restrict their availability to compile-time.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Don't implement rem (Richard Henderson, 2013-07-09; 2 files, -16/+1)
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Split rem requirement from div requirement (Richard Henderson, 2013-07-09; 1 file, -0/+2)
  There are several hosts with only a "div" insn. Remainder is computed manually from the quotient and inputs. We can do this generically.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
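  The generic expansion this split enables, modelled in C (helper name is illustrative): on a host with only a divide opcode, the remainder falls out of the quotient.

    #include <stdint.h>

    static int32_t demo_rem_from_div(int32_t a, int32_t b)
    {
        int32_t q = a / b;      /* the host's div instruction */
        return a - q * b;       /* remainder recovered generically */
    }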
* tcg: Remove redundant tcg_target_init checks (Richard Henderson, 2013-06-05; 1 file, -6/+0)
  We've got a compile-time check for the condition in exec/cpu-defs.h.
  Reviewed-by: Andreas Färber <afaerber@suse.de>
  Reviewed-by: liguang <lig.fnst@cn.fujitsu.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Use movi32 in exit_tb (Richard Henderson, 2013-05-03; 1 file, -9/+7)
  Avoid the mini constant pool for armv7, and avoid replicating the test for pre-v7.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* tcg-arm: Fix 64-bit tlb load for pre-v6 (Richard Henderson, 2013-05-03; 1 file, -1/+1)
  Found by inspection, since the effect of the bug was simply to send all memory ops through the slow path.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* tcg-arm: Remove long jump from tcg_out_goto_label (Richard Henderson, 2013-04-27; 1 file, -6/+1)
  Branches within a TB will always be within 16MB.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Convert to CONFIG_QEMU_LDST_OPTIMIZATION (Richard Henderson, 2013-04-27; 1 file, -107/+202)
  Move the slow path out of line, as the TODO's mention. This allows the fast path to be unconditional, which can speed up the fast path as well, depending on the core.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-arm: Use movi32 + blx for calls on v7 (Richard Henderson, 2013-04-27; 1 file, -0/+3)
  Work better with branch prediction when we have movw+movt, as the size of the code is the same. Perhaps re-evaluate when we have a proper constant pool.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>