path: root/tcg
Commit message (Author, Age, Files, Lines)
* tcg/i386: omit a few REXW prefixes in softmmu code  (Aurelien Jarno, 2015-09-02, 1 file, -6/+9)
  When computing the TLB address we are likely to mask out the high 32 bits by using shr + and. We can use 32-bit instructions in that case. This saves 2 bytes per TLB access.
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1437306632-20655-1-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
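  A standalone sketch of why the prefixes can be dropped (constants are illustrative, not QEMU's actual TLB layout): once the mask fits in 32 bits, truncating the shifted address before the AND cannot change the result, so the host may use the one-byte-shorter 32-bit instruction form.

      #include <assert.h>
      #include <stdint.h>

      /* 64-bit form: the and carries a REXW prefix. */
      static uint64_t tlb_ofs_64(uint64_t addr, int shift, uint64_t mask)
      {
          return (addr >> shift) & mask;
      }

      /* 32-bit form: the mask only keeps low-order bits, so a 32-bit
         andl (no REXW prefix) computes exactly the same value. */
      static uint32_t tlb_ofs_32(uint64_t addr, int shift, uint32_t mask)
      {
          return (uint32_t)(addr >> shift) & mask;
      }

      int main(void)
      {
          uint64_t addr = 0x123456789abcdef0ULL;
          assert(tlb_ofs_32(addr, 7, 0x1fe0) == tlb_ofs_64(addr, 7, 0x1fe0));
          return 0;
      }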
* tcg/aarch64: Fix tcg_out_qemu_{ld,st} for guest_base == 0  (Richard Henderson, 2015-09-02, 1 file, -7/+20)
  In ffc6372851d8631a9f9fa56ec613b3244dc635b9, we swapped the guest base to the address base register from the address index register. Except that 31 in the base slot is SP, not XZR, so we need to be more intelligent about which reg gets placed in which slot.
  Cc: qemu-stable@nongnu.org (v2.4.0)
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reported-by: Andreas Färber <afaerber@suse.de>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
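  A hedged model of the slot choice (a sketch, not the backend's actual code): in the A64 load/store encoding, register number 31 means SP in the base slot but XZR in the index slot, so a zero guest base has to be expressed as index = XZR, never as base = 31.

      #include <stdio.h>

      enum { REG31 = 31 };   /* SP as a base, XZR as an index */

      /* Hypothetical helper: pick base/index registers for a guest access. */
      static void pick_slots(unsigned long guest_base, int addr_reg,
                             int guest_base_reg, int *base, int *index)
      {
          if (guest_base != 0) {
              *base = guest_base_reg;  /* a real register, safe as base */
              *index = addr_reg;
          } else {
              *base = addr_reg;        /* 31 here would mean SP: avoid it */
              *index = REG31;          /* 31 here means XZR, i.e. +0 */
          }
      }

      int main(void)
      {
          int base, index;
          pick_slots(0, 0, 28, &base, &index);
          printf("base=x%d index=%s\n", base, index == REG31 ? "xzr" : "x?");
          return 0;
      }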
* s390: fix softmmu compilation  (Laurent Vivier, 2015-08-28, 1 file, -2/+2)
  guest_base must be used only in linux-user mode.
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Message-id: 1440757421-9674-1-git-send-email-laurent@vivier.eu
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* linux-user: remove useless macros GUEST_BASE and RESERVED_VA  (Laurent Vivier, 2015-08-24, 8 files, -61/+49)
  As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base and the macros GUEST_BASE and RESERVED_VA become useless: replace them by their values.
  Reviewed-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* linux-user: remove --enable-guest-base/--disable-guest-base  (Laurent Vivier, 2015-08-24, 5 files, -18/+8)
  All tcg host architectures now support the guest base and, as there is no real performance loss, it can always be enabled. In any case, use of the guest base can still be disabled at run time by setting it to 0. CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY); it should be replaced by CONFIG_USER_ONLY in non-CONFIG_USER_ONLY parts, but as some other parts are using !CONFIG_SOFTMMU I have chosen to use !CONFIG_SOFTMMU instead.
  Reviewed-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/aarch64: Use softmmu fast path for unaligned accesses  (Richard Henderson, 2015-08-24, 1 file, -13/+24)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/s390: Use softmmu fast path for unaligned accesses  (Richard Henderson, 2015-08-24, 1 file, -5/+21)
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/ppc: Improve unaligned load/store handling on 64-bit backend  (Benjamin Herrenschmidt, 2015-08-24, 1 file, -10/+31)
  Currently, we get to the slow path for any unaligned access in the backend, because we effectively preserve the bottom address bits below the alignment requirement when comparing with the TLB entry, so any non-0 bit there will cause the compare to fail. For the same number of instructions, we can instead add the access size - 1 to the address and stick to clearing all the bottom bits. That means that normal unaligned accesses will not fall back (the HW will handle them fine). Only when crossing a page boundary will we end up with a mismatch, because we'll be pointing to the next page, which cannot possibly be in that same TLB entry.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Message-Id: <1437455978.5809.2.camel@kernel.crashing.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
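  A worked sketch of the new comparison (page size illustrative): adding size - 1 before clearing the low bits means an unaligned access only mismatches the TLB tag when it genuinely crosses into the next page.

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_BITS 12
      #define PAGE_MASK (~(uint32_t)((1u << PAGE_BITS) - 1))

      /* Value compared against the TLB tag for an access of 'size' bytes. */
      static uint32_t tlb_compare_value(uint32_t addr, int size)
      {
          return (addr + size - 1) & PAGE_MASK;
      }

      int main(void)
      {
          /* Unaligned but page-local 4-byte load: still matches page 0x1000. */
          printf("%#x\n", tlb_compare_value(0x1002, 4));   /* 0x1000 */
          /* Crosses into the next page: yields 0x2000, forcing the slow path. */
          printf("%#x\n", tlb_compare_value(0x1ffe, 4));   /* 0x2000 */
          return 0;
      }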
* tcg/i386: use softmmu fast path for unaligned accesses  (Aurelien Jarno, 2015-08-24, 1 file, -9/+13)
  Softmmu unaligned loads/stores currently go through the slow path for two reasons:
    - to support unaligned access on hosts with strict alignment
    - to correctly handle accesses crossing pages
  x86 is only concerned by the second reason. Unaligned accesses are avoided by compilers, but are not uncommon. We therefore would like to see them going through the fast path if they don't cross pages. For that we can use the fact that two adjacent TLB entries can't contain the same page. Therefore accessing the TLB entry corresponding to the first byte, but comparing its content to the page address of the last byte, ensures that we don't cross pages. We can do this check without adding more instructions in the TLB code (but increasing its length by one byte) by using the LEA instruction to combine the existing move with the size addition. On an x86-64 host, this gives a 3% boot time improvement for a powerpc guest and 4% for an x86-64 guest.
  [rth: Tidied calculation of the offset mask]
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1436467197-2183-1-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
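  A small sketch of the LEA fusion (registers illustrative): one lea both copies the address and adds size - 1, so the last-byte page check costs no extra instruction, only one extra byte of encoding.

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uintptr_t addr = 0x1ffe;
          /* What a single "lea 0x3(%addr),%tmp" leaves in tmp for a 4-byte
             access: the copy and the size - 1 addition fused into one op. */
          uintptr_t tmp = addr + (4 - 1);
          printf("compare against the page of %#lx\n", (unsigned long)tmp);
          return 0;
      }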
* tcg: Remove tcg_gen_trunc_i64_i32  (Richard Henderson, 2015-08-24, 1 file, -7/+2)
  Replacing it with tcg_gen_extrl_i64_i32.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32  (Richard Henderson, 2015-08-24, 14 files, -53/+71)
  Rather than allow arbitrary shift+trunc, only concern ourselves with low and high parts. This is all that was being used anyway.
  Signed-off-by: Richard Henderson <rth@twiddle.net>
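  The semantics of the two resulting ops, written as plain C (a sketch of the definitions, not the generator code):

      #include <assert.h>
      #include <stdint.h>

      static uint32_t extrl_i64_i32(uint64_t v) { return (uint32_t)v; }         /* low half  */
      static uint32_t extrh_i64_i32(uint64_t v) { return (uint32_t)(v >> 32); } /* high half */

      int main(void)
      {
          assert(extrl_i64_i32(0x1122334455667788ULL) == 0x55667788u);
          assert(extrh_i64_i32(0x1122334455667788ULL) == 0x11223344u);
          return 0;
      }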
* tcg: update README about size changing ops  (Aurelien Jarno, 2015-08-24, 1 file, -3/+15)
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: add optimizations for ext_i32_i64 and extu_i32_i64 ops  (Aurelien Jarno, 2015-08-24, 1 file, -0/+13)
  They behave the same as ext32s_i64 and ext32u_i64 from the constant folding and zero propagation point of view, except that they can't be replaced by a mov, so we don't compute the affected value.
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: implement real ext_i32_i64 and extu_i32_i64 ops  (Aurelien Jarno, 2015-08-24, 9 files, -8/+41)
  Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a 32-bit value is always converted to a 64-bit value and not propagated through the register allocator or the optimizer.
  Cc: Andrzej Zaborowski <balrogg@gmail.com>
  Cc: Alexander Graf <agraf@suse.de>
  Cc: Blue Swirl <blauwirbel@gmail.com>
  Cc: Stefan Weil <sw@weilnetz.de>
  Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
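  And the converse direction, again as a C sketch of the semantics: ext_i32_i64 sign-extends, extu_i32_i64 zero-extends.

      #include <assert.h>
      #include <stdint.h>

      static int64_t  ext_i32_i64(int32_t v)   { return v; }  /* sign-extend */
      static uint64_t extu_i32_i64(uint32_t v) { return v; }  /* zero-extend */

      int main(void)
      {
          assert(ext_i32_i64(-1) == -1);                      /* 0xffffffffffffffff */
          assert(extu_i32_i64(0xffffffffu) == 0xffffffffULL); /* high bits clear */
          return 0;
      }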
* tcg: don't abuse TCG type in tcg_gen_trunc_shr_i64_i32  (Aurelien Jarno, 2015-08-24, 1 file, -2/+2)
  The tcg_gen_trunc_shr_i64_i32 function takes a 64-bit argument and returns a 32-bit value. Directly call tcg_gen_op3 with the correct types instead of calling tcg_gen_op3i_i32 and abusing the TCG types.
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: rename trunc_shr_i32 into trunc_shr_i64_i32  (Aurelien Jarno, 2015-08-24, 13 files, -18/+18)
  The op is sometimes named trunc_shr_i32 and sometimes trunc_shr_i64_i32, and the name in the README doesn't match the name offered to the frontends. Always use the long name to make it clear it is a size changing op.
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: allow constant to have copies  (Aurelien Jarno, 2015-08-24, 1 file, -8/+2)
  Now that copies and constants are tracked separately, we can allow constants to have copies, deferring the choice between using a register or a constant to the register allocation pass. This prevents this kind of repeated constant reloading:

  -OUT: [size=338]
  +OUT: [size=298]
     mov    -0x4(%r14),%ebp
     test   %ebp,%ebp
     jne    0x7ffbe9cb0ed6
     mov    $0x40002219f8,%rbp
     mov    %rbp,(%r14)
  -  mov    $0x40002219f8,%rbp
     mov    $0x4000221a20,%rbx
     mov    %rbp,(%rbx)
     mov    $0x4000000000,%rbp
     mov    %rbp,(%r14)
  -  mov    $0x4000000000,%rbp
     mov    $0x4000221d38,%rbx
     mov    %rbp,(%rbx)
     mov    $0x40002221a8,%rbp
     mov    %rbp,(%r14)
  -  mov    $0x40002221a8,%rbp
     mov    $0x4000221d40,%rbx
     mov    %rbp,(%rbx)
     mov    $0x4000019170,%rbp
     mov    %rbp,(%r14)
  -  mov    $0x4000019170,%rbp
     mov    $0x4000221d48,%rbx
     mov    %rbp,(%rbx)
     mov    $0x40000049ee,%rbp
     mov    %rbp,0x80(%r14)
     mov    %r14,%rdi
     callq  0x7ffbe99924d0
     mov    $0x4000001680,%rbp
     mov    %rbp,0x30(%r14)
     mov    0x10(%r14),%rbp
     mov    $0x4000001680,%rbp
     mov    %rbp,0x30(%r14)
     mov    0x10(%r14),%rbp
     shl    $0x20,%rbp
     mov    (%r14),%rbx
     mov    %ebx,%ebx
     mov    %rbx,(%r14)
     or     %rbx,%rbp
     mov    %rbp,0x10(%r14)
     mov    %rbp,0x90(%r14)
     mov    0x60(%r14),%rbx
     mov    %rbx,0x38(%r14)
     mov    0x28(%r14),%rbx
     mov    $0x4000220e60,%r12
     mov    %rbx,(%r12)
     mov    $0x40002219c8,%rbx
     mov    %rbp,(%rbx)
     mov    0x20(%r14),%rbp
     sub    $0x8,%rbp
     mov    $0x4000004a16,%rbx
     mov    %rbx,0x0(%rbp)
     mov    %rbp,0x20(%r14)
     mov    $0x19,%ebp
     mov    %ebp,0xa8(%r14)
     mov    $0x4000015110,%rbp
     mov    %rbp,0x80(%r14)
     xor    %eax,%eax
     jmpq   0x7ffbebcae426
     lea    -0x5f6d72a(%rip),%rax        # 0x7ffbe3d437b3
     jmpq   0x7ffbebcae426

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: track const/copy status separately  (Aurelien Jarno, 2015-08-24, 1 file, -28/+14)
  Instead of using an enum which could be either a copy or a const, track them separately. This will be used in the next patch. Constants are tracked through a bool. Copies are tracked by initializing a temp's next_copy and prev_copy to itself, allowing the code to be simplified a bit.
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: add temp_is_const and temp_is_copy functions  (Aurelien Jarno, 2015-08-24, 1 file, -71/+60)
  Add two accessor functions, temp_is_const and temp_is_copy, to make the code more readable and make code changes easier.
  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: optimize temps tracking  (Aurelien Jarno, 2015-08-24, 1 file, -11/+32)
  The tcg_temp_info structure uses 24 bytes per temp. Now that we emulate vector registers on most guests, it's not uncommon to have more than 100 used temps. This means we have to initialize more than 2kB at least twice per TB, often more when there are a few goto_tb. Instead, use a TCGTempSet bit array to track which temps are in use in the current basic block. This means there are only around 16 bytes to initialize. This improves the boot time of a MIPS guest on an x86-64 host by around 7% and moves tcg_optimize out of the top of the profiler list.
  [rth: Handle TCG_CALL_DUMMY_ARG]
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
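  A rough sketch of the bit-array approach (sizes and names illustrative, not QEMU's): the per-TB reset shrinks to a small memset over one bit per temp instead of a 24-byte struct per temp, and per-temp info is initialized lazily on first use.

      #include <string.h>

      #define MAX_TEMPS 512
      #define BITS_PER_WORD (8 * sizeof(unsigned long))
      typedef unsigned long TempSet[MAX_TEMPS / BITS_PER_WORD];

      static void temps_reset(TempSet s)      { memset(s, 0, sizeof(TempSet)); }
      static void temp_mark(TempSet s, int i) { s[i / BITS_PER_WORD] |= 1UL << (i % BITS_PER_WORD); }
      static int  temp_used(TempSet s, int i) { return (s[i / BITS_PER_WORD] >> (i % BITS_PER_WORD)) & 1; }

      int main(void)
      {
          TempSet used;
          temps_reset(used);             /* MAX_TEMPS/8 bytes, done per TB */
          temp_mark(used, 42);           /* lazily initialize temp 42's info */
          return !temp_used(used, 42);   /* exits 0 */
      }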
* tcg/optimize: fix constant signedness  (Aurelien Jarno, 2015-08-24, 1 file, -5/+5)
  By convention, on a 64-bit host TCG internally stores 32-bit constants as sign-extended. This is not the case in the optimizer when a 32-bit constant is folded. This doesn't seem to have more consequences than suboptimal code generation. For instance the x86 backend assumes sign-extended constants, and in some rare cases uses a 32-bit unsigned immediate 0xffffffff instead of an 8-bit signed immediate 0xff for the constant -1. This is with a ppc guest:

  before
  ------
   ---- 0x9f29cc
   movi_i32 tmp1,$0xffffffff
   movi_i32 tmp2,$0x0
   add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
   add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
   mov_i32 r10,tmp0

  0x7fd8c7dfe90c:  xor    %ebp,%ebp
  0x7fd8c7dfe90e:  mov    %ebp,%r11d
  0x7fd8c7dfe911:  mov    0x18(%r14),%r9d
  0x7fd8c7dfe915:  add    %r9d,%r10d
  0x7fd8c7dfe918:  adc    %ebp,%r11d
  0x7fd8c7dfe91b:  add    $0xffffffff,%r10d
  0x7fd8c7dfe922:  adc    %ebp,%r11d
  0x7fd8c7dfe925:  mov    %r11d,0x134(%r14)
  0x7fd8c7dfe92c:  mov    %r10d,0x28(%r14)

  after
  -----
   ---- 0x9f29cc
   movi_i32 tmp1,$0xffffffffffffffff
   movi_i32 tmp2,$0x0
   add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
   add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
   mov_i32 r10,tmp0

  0x7f37010d490c:  xor    %ebp,%ebp
  0x7f37010d490e:  mov    %ebp,%r11d
  0x7f37010d4911:  mov    0x18(%r14),%r9d
  0x7f37010d4915:  add    %r9d,%r10d
  0x7f37010d4918:  adc    %ebp,%r11d
  0x7f37010d491b:  add    $0xffffffffffffffff,%r10d
  0x7f37010d491f:  adc    %ebp,%r11d
  0x7f37010d4922:  mov    %r11d,0x134(%r14)
  0x7f37010d4929:  mov    %r10d,0x28(%r14)

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1436544211-2769-2-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
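  A minimal sketch of the canonicalization, worked for the -1 case above: after folding a 32-bit op, re-sign-extend the result so the backend recognizes the short immediate form.

      #include <assert.h>
      #include <stdint.h>

      /* Re-canonicalize a folded 32-bit constant as sign-extended 64-bit. */
      static uint64_t canonicalize_i32(uint64_t folded)
      {
          return (uint64_t)(int64_t)(int32_t)folded;
      }

      int main(void)
      {
          /* 0xffffffff (-1 as 32 bits) becomes 0xffffffffffffffff, which
             the x86 backend can emit as an 8-bit signed immediate. */
          assert(canonicalize_i32(0xffffffffULL) == 0xffffffffffffffffULL);
          return 0;
      }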
* tcg/mips: fix add2  (Aurelien Jarno, 2015-08-01, 1 file, -0/+3)
  The add2 code in the tcg_out_addsub2 function doesn't take into account the case where rl == al == bl. In that case we can't compute the carry after the addition. As it corresponds to a multiplication by 2, the carry bit is bit 31. While this is a corner case, it prevents x86-64 guests from booting on a MIPS host.
  Cc: qemu-stable@nongnu.org
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
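  A worked sketch of the corner case: with rl == al == bl, the addition clobbers its inputs before the usual unsigned-compare carry check could run, but al + al doubles al, so the carry out is simply al's original bit 31, readable beforehand.

      #include <assert.h>
      #include <stdint.h>

      int main(void)
      {
          uint32_t al = 0x80000001u;
          uint32_t carry = al >> 31;   /* save bit 31 before it is clobbered */
          uint32_t rl = al + al;       /* rl aliases al in the buggy case */
          assert(carry == 1 && rl == 0x00000002u);
          return 0;
      }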
* tcg/s390x: Mask TCGMemOp appropriately for indexing  (Aurelien Jarno, 2015-08-01, 1 file, -2/+2)
  Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK addition, but two cases were forgotten in the TCG S390 backend.
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* tcg/mips: Mask TCGMemOp appropriately for indexing  (Aurelien Jarno, 2015-08-01, 1 file, -2/+2)
  Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK addition, but two cases were forgotten in the TCG MIPS backend.
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* tcg/mips: fix TLB loading for BE host with 32-bit guests  (Aurelien Jarno, 2015-08-01, 1 file, -1/+3)
  For a 32-bit guest, we load a 32-bit address from the TLB, so there is no need to compensate for the low or high part. This fixes 32-bit guests on big-endian hosts.
  Cc: qemu-stable@nongnu.org
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
* tcg: mark temps as mem_coherent = 0 for mov with a constant  (Aurelien Jarno, 2015-07-27, 1 file, -0/+1)
  When a constant has to be loaded in a mov op, we fail to set mem_coherent = 0. This patch fixes that.
  Cc: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1437994568-7825-3-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: correctly mark dead inputs for mov with a constant  (Aurelien Jarno, 2015-07-27, 1 file, -0/+3)
  When tcg_reg_alloc_mov propagates a constant, it fails to correctly mark a temp as dead when the liveness analysis hints so. This fixes the following assert when configured with --enable-debug-tcg:

  qemu-x86_64: tcg/tcg.c:1827: tcg_reg_alloc_bb_end: Assertion `ts->val_type == TEMP_VAL_DEAD' failed.

  Cc: Richard Henderson <rth@twiddle.net>
  Reported-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1437994568-7825-2-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: fix tcg_opt_gen_movi  (Aurelien Jarno, 2015-07-23, 1 file, -1/+1)
  Due to a copy-and-paste error, the new op value is tested against mov_i32 instead of movi_i32. The test is therefore always false. Fix that.
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1436544211-2769-1-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/aarch64: use 32-bit offset for 32-bit softmmu emulation  (Richard Henderson, 2015-07-23, 1 file, -6/+6)
  Similar to the same fix for user-mode, except this instance occurs on the softmmu path. Again, the tlb addend must be the base register, while the guest address is the index.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/aarch64: use 32-bit offset for 32-bit user-mode emulation  (Paolo Bonzini, 2015-07-23, 1 file, -10/+16)
  Thanks to the previous patch, it is now easy for tcg_out_qemu_ld and tcg_out_qemu_st to use a 32-bit zero-extended offset. However, the guest base register x28 must be the base and addr_reg must be the index.
  Reported-by: Leon Alrae <leon.alrae@imgtec.com>
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Message-Id: <1436974021-28978-3-git-send-email-pbonzini@redhat.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
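  The addressing-mode semantics being relied on, as a C sketch: with a uxtw extension the 32-bit index register is zero-extended before being added to the 64-bit base, which is exactly what a 32-bit guest address needs.

      #include <stdint.h>
      #include <stdio.h>

      /* ldr w0, [base, windex, uxtw]  loads from  base + zext32(index) */
      static uint64_t ea_uxtw(uint64_t base, uint32_t index)
      {
          return base + (uint64_t)index;   /* never sign-extended */
      }

      int main(void)
      {
          /* An index with bit 31 set still produces a positive offset. */
          printf("%#llx\n", (unsigned long long)ea_uxtw(0x550000000000ULL, 0x80000000u));
          return 0;
      }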
* tcg/aarch64: add ext argument to tcg_out_insn_3310  (Paolo Bonzini, 2015-07-23, 1 file, -19/+22)
  The new argument lets you pick uxtw or uxtx mode for the offset register. For now, all callers pass TCG_TYPE_I64 so that uxtx is generated. The bits for uxtx are removed from I3312_TO_I3310.
  Reported-by: Leon Alrae <leon.alrae@imgtec.com>
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Message-Id: <1436974021-28978-2-git-send-email-pbonzini@redhat.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/i386: Extend addresses for 32-bit guests  (Richard Henderson, 2015-07-23, 1 file, -42/+72)
  Removing the ??? comment explaining why it (mostly) worked.
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <1437081950-7206-2-git-send-email-rth@twiddle.net>
* tci: Fix regression with INDEX_op_qemu_st_i32, INDEX_op_qemu_st_i64  (Stefan Weil, 2015-07-13, 1 file, -6/+0)
  Commit 59227d5d45bb3c31dc2118011691c35b3c00879c did not update the code in tcg/tci/tcg-target.c for those two cases.
  Signed-off-by: Stefan Weil <sw@weilnetz.de>
  Message-id: 1436556159-3002-1-git-send-email-sw@weilnetz.de
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* tcg/mips: Fix build error from merged memop+mmu_idx parameter  (James Hogan, 2015-07-09, 1 file, -2/+2)
  Commit 3972ef6f830d ("tcg: Push merged memop+mmu_idx parameter to softmmu routines") caused the following build errors when building TCG for MIPS:

  In file included from tcg/tcg.c:258:0:
  tcg/mips/tcg-target.c: In function ‘tcg_out_qemu_ld_slow_path’:
  tcg/mips/tcg-target.c:1015:22: error: ‘lb’ undeclared (first use in this function)
  tcg/mips/tcg-target.c: In function ‘tcg_out_qemu_st_slow_path’:
  tcg/mips/tcg-target.c:1058:22: error: ‘lb’ undeclared (first use in this function)

  It looks like lb was meant to refer to the TCGLabelQemuLdst *l parameter, so fix both references to lb to refer to just l.
  Fixes: 3972ef6f830d ("tcg: Push merged memop+mmu_idx parameter to softmmu routines")
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
  Message-id: 1436433435-24898-2-git-send-email-james.hogan@imgtec.com
  Cc: Aurelien Jarno <aurelien@aurel32.net>
  Cc: Leon Alrae <leon.alrae@imgtec.com>
  Cc: Richard Henderson <rth@twiddle.net>
  Cc: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* tcg/s390: fix branch target change during code retranslation  (Aurelien Jarno, 2015-07-07, 1 file, -4/+8)
  Make sure not to modify the branch target. This ensures that the branch target is not corrupted during partial retranslation.
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Tested-by: Alexander Graf <agraf@suse.de>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Alexander Graf <agraf@suse.de>
* cpu-defs: Move CPU_TEMP_BUF_NLONGS to tcg  (Peter Crosthwaite, 2015-06-26, 1 file, -0/+2)
  The usages of this define are pure TCG and there is no architecture specific variation of the value. Localise it to the TCG engine to remove another architecture agnostic piece from cpu-defs.h. This follows on from a28177820a868eafda8fab007561cc19f41941f4, where temp_buf was moved out of the CPU_COMMON, obsoleting the need for the super early definition.
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
  Message-Id: <498e8e5325c1a1aff79e5bcfc28cb760ef6b214e.1433052532.git.crosthwaite.peter@gmail.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* tcg/optimize: rename tcg_constant_folding  (Aurelien Jarno, 2015-06-09, 1 file, -6/+1)
  The tcg_constant_folding function ends up doing all the optimizations (which is a good thing, to avoid looping over all ops multiple times), so make that clear and just rename it tcg_optimize.
  Cc: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1433447607-31184-6-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: fold constant test in tcg_opt_gen_mov  (Aurelien Jarno, 2015-06-09, 1 file, -53/+36)
  Most of the calls to tcg_opt_gen_mov are preceded by a test to check if the source temp is a constant. Fold that into the tcg_opt_gen_mov function.
  Cc: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1433495958-9508-1-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: fold temp copies test in tcg_opt_gen_mov  (Aurelien Jarno, 2015-06-09, 1 file, -18/+9)
  Each call to tcg_opt_gen_mov is preceded by a test to check if the source and destination temps are copies. Fold that into the tcg_opt_gen_mov function.
  Cc: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1433447607-31184-4-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: remove opc argument from tcg_opt_gen_mov  (Aurelien Jarno, 2015-06-09, 1 file, -7/+7)
  We can get the opcode using the TCGOp pointer. It needs to be dereferenced, but that is done a few lines below anyway, to write the new value.
  Cc: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1433447607-31184-3-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg/optimize: remove opc argument from tcg_opt_gen_movi  (Aurelien Jarno, 2015-06-09, 1 file, -20/+20)
  We can get the opcode using the TCGOp pointer. It needs to be dereferenced, but that is done a few lines below anyway, to write the new value.
  Cc: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1433447607-31184-2-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: fix dead computation for repeated input arguments  (Aurelien Jarno, 2015-06-09, 1 file, -3/+11)
  When the same temp is used twice or more as an input argument to a TCG instruction, the dead-computation code doesn't recognize the second use as a dead temp. This is because the temp is marked as live in the same loop where dead inputs are checked. The fix is to split the loop in two parts. This avoids emitting a move and using a register for the movcond instruction when used as "move if true" on x86-64. It might bring more improvements on RISC TCG targets, which don't have outputs aliased to inputs.
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1433447228-29425-3-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
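  A hedged sketch of the loop split (illustrative names, not the actual tcg.c code): record the deadness of every input first, and only then mark inputs live, so a temp appearing twice is observed as dying at both uses.

      #include <stdint.h>

      static uint16_t mark_input_deaths(uint8_t *dead_temps, const int *args,
                                        int nb_oargs, int nb_iargs)
      {
          uint16_t dead_args = 0;
          int i;
          /* pass 1: observe which inputs are dead before touching any state */
          for (i = nb_oargs; i < nb_oargs + nb_iargs; i++) {
              if (dead_temps[args[i]]) {
                  dead_args |= 1u << i;
              }
          }
          /* pass 2: only now mark all inputs live for the preceding code */
          for (i = nb_oargs; i < nb_oargs + nb_iargs; i++) {
              dead_temps[args[i]] = 0;
          }
          return dead_args;
      }

      int main(void)
      {
          uint8_t dead_temps[4] = { 0, 0, 1, 0 };   /* temp 2 is dead */
          const int args[3] = { 0, 2, 2 };          /* out, in, in (repeated) */
          /* both input uses (bits 1 and 2) are now flagged as dead */
          return mark_input_deaths(dead_temps, args, 1, 2) == 0x6 ? 0 : 1;
      }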
* tcg: fix register allocation with two aliased dead inputs  (Aurelien Jarno, 2015-06-09, 1 file, -0/+10)
  For TCG ops with two output registers (add2, sub2, div2, div2u), when the same input temp is used for the two inputs aliased to the two outputs, and when these inputs are both dead, the register allocation code wrongly assigned the same register to both outputs. This happens for example with sub2 t1, t2, t3, t3, t4, t5 when t3 is not used anymore after the TCG op. In that case the same register is used for t1, t2 and t3. The fix is to look for an already-allocated aliased input when allocating a dead aliased input, and check that the register is not already used.
  Cc: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1433447228-29425-2-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Handle MO_AMASK in tcg_dump_ops  (Richard Henderson, 2015-06-09, 1 file, -3/+12)
  Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
  Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Mask TCGMemOp appropriately for indexing  (Richard Henderson, 2015-06-09, 7 files, -30/+30)
  The addition of MO_AMASK means that places that used inverted masks need to be changed to use positive masks, and places that failed to mask the intended bits need updating.
  Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
  Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: add TCG_TARGET_TLB_DISPLACEMENT_BITS  (Paolo Bonzini, 2015-06-03, 9 files, -0/+10)
  This will be used to size the TLB when more than 8 MMU modes are used by the target. Limitations come from the limited size of the immediate fields (which sometimes, as in the case of Aarch64, extend to instructions that shift the immediate).
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Message-Id: <1424436345-37924-2-git-send-email-pbonzini@redhat.com>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Alexander Graf <agraf@suse.de>
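  What the new define might look like in a backend header (the value below is illustrative, not any real backend's): it advertises how many bits of TLB-structure displacement the backend's addressing immediates can reach, letting common code bound the TLB size accordingly.

      /* hypothetical tcg/foo/tcg-target.h */
      #define TCG_TARGET_TLB_DISPLACEMENT_BITS 16   /* illustrative value */

      /* common code can then check, roughly (sketch):
         build_bug_on(sizeof_tlb > 1 << TCG_TARGET_TLB_DISPLACEMENT_BITS); */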
* tci: do not use CPUArchState in tcg-target.h  (Paolo Bonzini, 2015-06-03, 2 files, -3/+4)
  tcg-target.h does not use any QEMU-specific symbols, save for tci's usage of CPUArchState. Pull that up to tcg/tcg.h. This will make it possible to include tcg-target.h in cpu-defs.h.
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Alexander Graf <agraf@suse.de>
* tcg: Add MO_ALIGN, MO_UNALN  (Richard Henderson, 2015-05-14, 1 file, -0/+13)
  These modifiers control, on a per-memory-op basis, whether unaligned memory accesses are allowed. The default setting reflects the target's definition of ALIGNED_ONLY.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
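  A sketch of per-op use in a target translator (the flags are the real ones; the surrounding call follows the tcg-op API of this period, so treat the exact context as an assumption): OR the modifier into the TCGMemOp of a guest access.

      /* A 32-bit little-endian load that must trap if unaligned,
         regardless of the target's ALIGNED_ONLY default: */
      tcg_gen_qemu_ld_i32(val, addr, mem_idx, MO_LEUL | MO_ALIGN);

      /* And one explicitly allowed to be unaligned: */
      tcg_gen_qemu_ld_i32(val, addr, mem_idx, MO_LEUL | MO_UNALN);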
* tcg: Push merged memop+mmu_idx parameter to softmmu routines  (Richard Henderson, 2015-05-14, 10 files, -113/+112)
  The extra information is not yet used but it is now available. This requires minor changes through all of the tcg backends.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
* tcg: Merge memop and mmu_idx parameters to qemu_ld/st  (Richard Henderson, 2015-05-14, 14 files, -61/+130)
  At the tcg opcode level, not at the tcg-op.h generator level. This requires minor changes through all of the tcg backends, but none of the cpu translators.
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
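  A sketch of the packing convention (this follows the shape of the TCGMemOpIdx helpers in tcg/tcg.h of this period, so treat the exact layout as an assumption): the memop sits in the high bits and the mmu index in the low four.

      #include <assert.h>

      typedef unsigned TCGMemOpIdx;

      static TCGMemOpIdx make_memop_idx(unsigned op, unsigned idx)
      {
          assert(idx <= 15);
          return (op << 4) | idx;       /* one opcode parameter for both */
      }

      static unsigned get_memop(TCGMemOpIdx oi)  { return oi >> 4; }
      static unsigned get_mmuidx(TCGMemOpIdx oi) { return oi & 15; }

      int main(void)
      {
          TCGMemOpIdx oi = make_memop_idx(0x2a, 3);
          assert(get_memop(oi) == 0x2a && get_mmuidx(oi) == 3);
          return 0;
      }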