path: root/tcg/ia64
Commit message | Author | Age | Files | Lines
* tcg: Remove TCG_TARGET_HAS_new_ldst | Richard Henderson | 2014-06-04 | 1 | -2/+0
Since all backends have been converted, remove the compatibility code. Acked-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Remove unreachable code in tcg_out_op and op_defs | Richard Henderson | 2014-05-12 | 1 | -35/+5
The INDEX_op_call case has just been obsoleted; the mov and movi cases have not been reachable for years. Attempt to document this both in each tcg_out_op switch, and via TCG_OPF_NOT_PRESENT. Because of the TCG_OPF_NOT_PRESENT change, this must be done for all targets in a single commit. Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Define TCG_TARGET_INSN_UNIT_SIZE | Richard Henderson | 2014-05-12 | 2 | -145/+78
Using a 16-byte aligned structure achieves best results, both for code cleanliness and compiled code size. However, this means that we can't use the trick of encoding the slot number into the low 2 bits. Thankfully, we only ever use slot2, so make that explicit in the names of the relocation functions, and drop the code for other slots. Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add INDEX_op_trunc_shr_i32 | Richard Henderson | 2014-04-28 | 1 | -0/+1
Let the backend do something special for truncation. Signed-off-by: Richard Henderson <rth@twiddle.net>
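For reference, the semantics of the new opcode amount to the following; a minimal C sketch of the operation as described, not the backend implementation:

    #include <stdint.h>

    /* trunc_shr_i32: truncate a 64-bit value to 32 bits after a constant right shift. */
    static uint32_t trunc_shr_i32(uint64_t src, unsigned shift)
    {
        return (uint32_t)(src >> shift);   /* shift is a constant, typically 0 or 32 */
    }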
* Merge remote-tracking branch 'remotes/rth/tags/tcg-next-20140422' into staging | Peter Maydell | 2014-04-24 | 1 | -1/+1
|\ Pull tcg 2014-04-22 # gpg: Signature made Tue 22 Apr 2014 22:00:04 BST using RSA key ID 4DD0279B # gpg: Can't check signature: public key not found * remotes/rth/tags/tcg-next-20140422: tcg: Use HOST_WORDS_BIGENDIAN tcg: Fix fallback from muls2_i64 to mulu2_i64 tcg: Use tcg_gen_mulu2_i32 in tcg_gen_muls2_i32 tcg: Relax requirement for mulu2_i32 on 32-bit hosts tcg-s390: Remove W constraint tcg-sparc: Use the type parameter to tcg_target_const_match tcg-ppc64: Use the type parameter to tcg_target_const_match tcg-aarch64: Remove w constraint tcg: Add TCGType parameter to tcg_target_const_match tcg: Fix out of range shift in deposit optimizations tci: Mask shift counts to avoid undefined behavior tcg: Mask shift quantities while folding tcg: Use "unspecified behavior" for shifts tcg: Fix warning (1 bit signed bitfield entry) and replace int by bool Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
| * tcg: Add TCGType parameter to tcg_target_const_match | Richard Henderson | 2014-04-18 | 1 | -1/+1
Most 64-bit targets need to be able to ignore the high bits of a TCG_TYPE_I32 value. Suggested-by: Stuart Brady <sdb@zubnet.me.uk> Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg-ia64: Convert to new ldst opcodes | Richard Henderson | 2014-04-17 | 2 | -67/+35
Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg-ia64: Move part of softmmu slow path out of line | Richard Henderson | 2014-04-17 | 1 | -62/+114
Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg-ia64: Convert to new ldst helpers | Richard Henderson | 2014-04-17 | 1 | -62/+80
Still inline, but updated to the new routines. Always use the LE helpers, reusing the bswap between the fast and slow paths. Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg-ia64: Reduce code duplication in tcg_out_qemu_ld | Richard Henderson | 2014-04-17 | 1 | -37/+24
The only differences were in the bswap insns emitted. Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg-ia64: Move tlb addend load into tlb read | Richard Henderson | 2014-04-17 | 1 | -12/+12
Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg-ia64: Move bswap for store into tlb load | Richard Henderson | 2014-04-17 | 1 | -63/+31
Saving at least two cycles per store, and cleaning up the code. Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg-ia64: Re-bundle the tlb load | Richard Henderson | 2014-04-17 | 1 | -23/+54
This sequencing requires 5 stop bits instead of 6, and has room left over to pre-load the tlb addend, and bswap data prior to being stored. Signed-off-by: Richard Henderson <rth@twiddle.net>
* | tcg-ia64: Optimize small arguments to exit_tb | Richard Henderson | 2014-04-17 | 1 | -3/+9
|/ Saves one bundle for the common case of exit_tb 0. Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Introduce tcg_opc_bswap64_i | Richard Henderson | 2013-11-18 | 1 | -35/+28
Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Introduce tcg_opc_ext_i | Richard Henderson | 2013-11-18 | 1 | -30/+24
Being able to "extend" from 64-bits (with a mov) simplifies a few places where the conditional breaks the train of thought. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Introduce tcg_opc_movi_a | Richard Henderson | 2013-11-18 | 1 | -16/+16
Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Introduce tcg_opc_mov_a | Richard Henderson | 2013-11-18 | 1 | -19/+16
Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Use A3 form of logical operations | Richard Henderson | 2013-11-18 | 1 | -30/+34
We can and/or/xor/andcm small constants, saving one cycle. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Use SUB_A3 and ADDS_A4 for subtraction | Richard Henderson | 2013-11-18 | 1 | -2/+23
We can subtract from more small constants than just 0 with one insn, and we can add the negative for most small constants. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Use ADDS for small addition | Richard Henderson | 2013-11-18 | 1 | -4/+16
Avoids a wasted cycle loading up small constants. Simplify the code, assuming the tcg optimizer is going to work, and don't expect the first operand of the add to be constant. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
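Together with the subtraction change above, the gain is that constants fitting the short immediate forms no longer need a separate constant load. A hedged sketch of the ranges involved, assuming the A4 "adds" form takes a 14-bit signed immediate and the A3 "sub" form an 8-bit signed immediate (the helper names below are illustrative, not the backend's):

    #include <stdbool.h>
    #include <stdint.h>

    static bool fits_imm14(int64_t v) { return v >= -8192 && v <= 8191; }  /* ADDS A4 */
    static bool fits_imm8(int64_t v)  { return v >= -128  && v <= 127;  }  /* SUB A3  */

    /* reg + C (and reg - C, via the negated constant) is a single adds when the
       constant fits in 14 bits; C - reg is a single sub when C fits in 8 bits. */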
* tcg-ia64: Avoid unnecessary stop bit in tcg_out_alu | Richard Henderson | 2013-11-18 | 1 | -11/+6
When performing an operation with two input registers, we'd leave in the stop bit (and thus an extra cycle) that is only needed when one or the other input is a constant. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Move AREG0 to R32 | Richard Henderson | 2013-11-18 | 2 | -9/+8
Since the move away from the global areg0, we're no longer globally reserving areg0, which means our use of R7 clobbers a call-saved register. Shift areg0 into the windowed registers; indeed, choose the incoming parameter register that it comes to us by. This requires moving the register holding the return address elsewhere; choose R33 for tidiness. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Simplify brcond | Richard Henderson | 2013-11-18 | 1 | -34/+10
There was a misconception that a stop bit is required between a compare and the branch that uses the predicate set by the compare. This led to the use of an extra bundle in which to perform the compare. The extra bundle left room for constants to be loaded for use with the compare insn. If we pack the compare and the branch together in the same bundle, then there's no longer any room for non-zero constants. At which point we can eliminate half the function by not handling them. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Handle constant calls | Richard Henderson | 2013-11-18 | 1 | -3/+35
Using only indirect calls results in 3 bundles (one to load the descriptor address), and 4 stop bits. By looking through the descriptor to the constants, we can perform the call with 2 bundles and only 1 stop bit. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
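For context, an ia64 function pointer refers to a function descriptor holding the entry address and the gp value, and an indirect call has to load both at run time. For a call target that is a constant at code-generation time, the backend can read the descriptor then and emit the gp setup and branch directly. A hedged sketch, with the struct and field names chosen for illustration:

    #include <stdint.h>

    /* ia64 function descriptor as assumed here: code entry point plus global pointer. */
    struct ia64_fdesc {
        uint64_t ip;   /* entry address */
        uint64_t gp;   /* callee's global pointer */
    };

    /* With a constant callee, read desc->ip and desc->gp while generating code,
       instead of emitting the loads an indirect call performs at run time. */
    static void use_const_call_target(const struct ia64_fdesc *desc,
                                      uint64_t *entry, uint64_t *gp)
    {
        *entry = desc->ip;
        *gp = desc->gp;
    }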
* tcg-ia64: Use shortcuts for nop insns | Richard Henderson | 2013-11-18 | 1 | -124/+127
There's no need to go through the full opcode-to-insn function call to generate nops. This makes the source a bit more readable. Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg-ia64: Use TCGMemOp within qemu_ldst routines | Richard Henderson | 2013-11-18 | 1 | -82/+91
Acked-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add qemu_ld_st_i32/64 | Richard Henderson | 2013-10-10 | 1 | -0/+2
Step two in the transition, adding the new ldst opcodes. Keep the old opcodes around until all backends support the new opcodes. Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add tcg-be-null.h | Richard Henderson | 2013-10-10 | 1 | -0/+2
This is a no-op backend data implementation, for those targets that are not currently using the load/store optimization path. This is preparatory to always requiring these functions in all backends. Signed-off-by: Richard Henderson <rth@twiddle.net>
* exec: Split softmmu_defs.h | Richard Henderson | 2013-09-02 | 1 | -3/+0
The _cmmu helpers can be moved to exec-all.h. The helpers that are used from TCG will shortly need access to tcg_target_long so move their declarations into tcg.h. This requires minor include adjustments to all TCG backends. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Change tcg_out_ld/st offset to intptr_t | Richard Henderson | 2013-09-02 | 1 | -2/+2
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Change relocation offsets to intptr_t | Richard Henderson | 2013-09-02 | 1 | -7/+7
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Change flush_icache_range arguments to uintptr_t | Richard Henderson | 2013-09-02 | 1 | -2/+1
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Add muluh and mulsh opcodes | Richard Henderson | 2013-09-02 | 1 | -0/+4
Use them in places where mulu2 and muls2 are used. Optimize mulx2 with dead low part to mulxh. Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
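The new opcodes return only the high half of a double-width product. As a hedged reference for the 64-bit variants, a C sketch of the semantics using a 128-bit intermediate (assumes a compiler with __int128; this is the operation, not the TCG implementation):

    #include <stdint.h>

    /* muluh_i64 / mulsh_i64: high 64 bits of a 64x64 -> 128-bit product. */
    static uint64_t muluh_i64(uint64_t a, uint64_t b)
    {
        return (uint64_t)(((unsigned __int128)a * b) >> 64);
    }

    static int64_t mulsh_i64(int64_t a, int64_t b)
    {
        return (int64_t)(((__int128)a * b) >> 64);
    }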
* tcg: Split rem requirement from div requirement | Richard Henderson | 2013-07-09 | 1 | -0/+2
There are several hosts with only a "div" insn. Remainder is computed manually from the quotient and inputs. We can do this generically. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
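A minimal sketch of the generic fallback described here, assuming the truncating division that TCG's div/rem opcodes use:

    #include <stdint.h>

    /* rem = a - (a / b) * b, with the quotient coming from the host's div insn. */
    static int64_t rem_from_div(int64_t a, int64_t b)
    {
        int64_t q = a / b;     /* truncating quotient */
        return a - q * b;
    }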
* tcg: Add signed multiword multiplication operations | Richard Henderson | 2013-02-23 | 1 | -0/+2
Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* tcg: Add 64-bit multiword arithmetic operations | Richard Henderson | 2013-02-23 | 1 | -0/+3
Matching the 32-bit multiword arithmetic that we already have. Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* tcg: Make 32-bit multiword operations optional for 64-bit hosts | Richard Henderson | 2013-02-23 | 1 | -0/+3
Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* exec: move include files to include/exec/ | Paolo Bonzini | 2012-12-19 | 1 | -1/+1
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* janitor: add guards to headers | Paolo Bonzini | 2012-12-19 | 1 | -0/+3
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* Merge branch 'linux-user-for-upstream' of git://git.linaro.org/people/rikuvoipio/qemu | Aurelien Jarno | 2012-10-19 | 1 | -3/+0
|\ * 'linux-user-for-upstream' of git://git.linaro.org/people/rikuvoipio/qemu: linux-user: register align p{read, write}64 linux-user: ppc: mark as long long aligned tcg: Remove TCG_TARGET_HAS_GUEST_BASE define configure: Remove unnecessary host_guest_base code linux-user: If loading fails, print error as string, not number linux-user: Fix siginfo handling alpha-linux-user: Fix sigaltstack structure definition linux-user: Implement gethostname linux-user: Perform more checks on iovec lists linux-user: fix multi-threaded /proc/self/maps linux-user: fix statfs
| * tcg: Remove TCG_TARGET_HAS_GUEST_BASE define | Peter Maydell | 2012-10-12 | 1 | -3/+0
GUEST_BASE is now supported by all TCG backends, and is now mandatory. Drop the now-pointless TCG_TARGET_HAS_GUEST_BASE define (set by every backend) and the error if it is unset. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
* | tcg-ia64: Implement deposit | Richard Henderson | 2012-10-17 | 2 | -2/+58
Note that in the general reg=reg,reg case we're restricted to 16-bit insertions. This makes it easy to allow "any" constant as input, as post-truncation it will fit into the constant load insn for which we have room in the bundle. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
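TCG's deposit operation inserts a bit field from one operand into another; the 16-bit limit above concerns the register,register form of the ia64 insn used to implement it. A minimal C sketch of the operation itself (semantics only, not the backend code):

    #include <stdint.h>

    /* deposit_i64: replace len bits of dest, starting at bit pos, with the low bits of src. */
    static uint64_t deposit_i64(uint64_t dest, uint64_t src, unsigned pos, unsigned len)
    {
        uint64_t mask = (len == 64) ? ~0ull : ((1ull << len) - 1);
        return (dest & ~(mask << pos)) | ((src & mask) << pos);
    }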
* | tcg/ia64: slightly optimize TLB access code | Aurelien Jarno | 2012-10-17 | 1 | -5/+17
It is possible to slightly optimize the TLB access code by replacing the movi + and instructions with a deposit instruction. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
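For context, the softmmu fast path derives a TLB index from the page-number bits of the guest address; a deposit-style insn can extract those bits and place them at the required offset in one step, where the movi + and sequence first has to materialize the mask. A hedged C sketch of the index computation, with the constants picked purely for illustration:

    #include <stdint.h>

    #define TARGET_PAGE_BITS 12   /* assumed 4K pages */
    #define CPU_TLB_BITS     8    /* assumed log2 of the number of TLB entries */

    /* TLB index for a guest address: shift out the page offset, mask to the table size. */
    static uint64_t tlb_index(uint64_t addr)
    {
        return (addr >> TARGET_PAGE_BITS) & ((1u << CPU_TLB_BITS) - 1);
    }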
* | tcg/ia64: remove suboptimal register shifting in qemu_ld/st ops | Aurelien Jarno | 2012-10-17 | 1 | -39/+37
Remove the suboptimal register shifting in the qemu_ld/st ops that was introduced at the time of CONFIG_TCG_PASS_AREG0. As mem_idx is now loaded into register R58/R59 for the slow path, we have to make sure to do it last, so as not to add additional register constraints. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* | tcg/ia64: implement movcond_i32/64 | Aurelien Jarno | 2012-10-17 | 2 | -2/+40
Implement movcond_i32/64 on ia64 hosts. It is not possible to have immediate compare arguments without adding a new bundle, but it is possible to have 22-bit immediate value arguments. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
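movcond selects between two values based on a comparison, without a branch at the TCG level; the 22-bit limit above applies to the value operands. A minimal sketch of the i64 semantics for one condition (illustrative only):

    #include <stdint.h>

    /* movcond_i64 with an EQ condition: dest = (c1 == c2) ? v_true : v_false. */
    static int64_t movcond_eq_i64(int64_t c1, int64_t c2, int64_t v_true, int64_t v_false)
    {
        return (c1 == c2) ? v_true : v_false;
    }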
* | tcg/ia64: use stack for TCG temps | Blue Swirl | 2012-10-17 | 1 | -3/+4
|/ Use stack instead of temp_buf array in CPUState for TCG temps. Signed-off-by: Blue Swirl <blauwirbel@gmail.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* tcg: remove obsolete jmp op | Aurelien Jarno | 2012-10-06 | 1 | -4/+0
The TCG jmp operation doesn't really make sense in the QEMU context, it is unused, it is not implemented by some targets, and it is wrongly implemented by some others. This patch simply removes it. Reviewed-by: Richard Henderson <rth@twiddle.net> Acked-by: Blue Swirl <blauwirbel@gmail.com> Acked-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* tcg: Remove tcg_target_get_call_iarg_regs_count | Stefan Weil | 2012-09-22 | 1 | -6/+0
The TCG targets no longer need individual implementations. Since commit 6a18ae2d2947532d5c26439548afa0481c4529f9, 'flags' is no longer used in tcg_target_get_call_iarg_regs_count. The remaining tcg_target_get_call_iarg_regs_count is trivial and only called once. Therefore the patch eliminates it completely. Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* tcg: Introduce movcond | Richard Henderson | 2012-09-21 | 1 | -0/+2
Implemented with setcond if the target does not provide the optional opcode. Signed-off-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>