path: root/tcg
* tcg/mips: Support r6 SEL{NE,EQ}Z instead of MOVN/MOVZ
  James Hogan, 2015-10-19 (1 file, +37/-6)

  Extend the MIPS movcond implementation to support the SELNEZ/SELEQZ
  instructions introduced in MIPS r6 (where MOVN/MOVZ have been removed).

  Whereas the "MOVN/MOVZ rd, rs, rt" instructions have the following
  semantics:
      rd = [!]rt ? rs : rd
  the "SELNEZ/SELEQZ rd, rs, rt" instructions are slightly different:
      rd = [!]rt ? rs : 0

  First we ensure that if one of the movcond input values is zero, it
  comes last (we can swap the input arguments if we invert the
  condition). This is so that it can exactly match one of the
  SELNEZ/SELEQZ instructions and avoid the need to emit the other one.
  Otherwise we emit the opposite instruction first into a temporary
  register, and OR that into the result:
      SELNEZ/SELEQZ TMP1, v2, c1
      SELEQZ/SELNEZ ret,  v1, c1
      OR            ret,  ret, TMP1
  which does the following:
      ret = cond ? v1 : v2

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <1443788657-14537-7-git-send-email-james.hogan@imgtec.com>

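A minimal C model of the select-and-OR trick above may make the data flow clearer (selnez, seleqz, and movcond here are plain functions of our own naming, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    /* r6 semantics: rd = [!]rt ? rs : 0 */
    static uint32_t selnez(uint32_t rs, uint32_t rt) { return rt != 0 ? rs : 0; }
    static uint32_t seleqz(uint32_t rs, uint32_t rt) { return rt == 0 ? rs : 0; }

    /* movcond rebuilt from the two selects: ret = cond ? v1 : v2. */
    static uint32_t movcond(uint32_t cond, uint32_t v1, uint32_t v2)
    {
        uint32_t tmp = seleqz(v2, cond);  /* v2 when cond == 0, else 0 */
        uint32_t ret = selnez(v1, cond);  /* v1 when cond != 0, else 0 */
        return ret | tmp;                 /* one operand is always 0 */
    }

    int main(void)
    {
        printf("%u %u\n", movcond(1, 10, 20), movcond(0, 10, 20)); /* 10 20 */
        return 0;
    }

When v2 is the constant zero, the OR and the temporary disappear, which is why the code swaps the operands to put a zero last.
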
* tcg/mips: Support r6 multiply/divide encodings
  James Hogan, 2015-10-19 (2 files, +43/-3)

  MIPSr6 adds several new integer multiply, divide, and modulo
  instructions, and removes several pre-r6 encodings, along with the
  HI/LO registers which were the implicit operands of some of those
  instructions. Update TCG to use the new instructions when built for
  r6.

  The new instructions actually map much more directly to the TCG ops,
  as each provides only a single 32-bit half of the result, in a normal
  general purpose register instead of HI or LO.

  The mulu2_i32 and muls2_i32 operations are no longer appropriate for
  r6, so they are removed from the TCG opcode table. They would need to
  emit two separate host instructions anyway (one for each half of the
  result), which TCG can arrange automatically in the absence of
  mulu2_i32/muls2_i32 by splitting the operation into mul_i32 and
  mul*h_i32 TCG ops.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <1443788657-14537-6-git-send-email-james.hogan@imgtec.com>

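A rough sketch of the split described above (function names are ours, not TCG's): the full 32x32->64 product falls out of the two single-output operations that the r6 encodings provide.

    #include <stdint.h>
    #include <stdio.h>

    /* mul_i32 / muluh_i32: each r6 instruction yields one half. */
    static uint32_t mul_lo(uint32_t a, uint32_t b) { return a * b; }
    static uint32_t mul_hi(uint32_t a, uint32_t b)
    {
        return (uint32_t)(((uint64_t)a * b) >> 32);
    }

    int main(void)
    {
        uint32_t a = 0xdeadbeef, b = 0x12345678;
        uint64_t full = (uint64_t)a * b;
        /* the mulu2_i32 result, reassembled from the two halves */
        printf("%d\n", full == (((uint64_t)mul_hi(a, b) << 32) | mul_lo(a, b)));
        return 0;
    }
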
* tcg/mips: Support r6 JR encoding
  James Hogan, 2015-10-19 (1 file, +4/-1)

  MIPSr6 encodes JR as JALR with zero as the link register, and the
  pre-r6 JR encoding is removed. Update TCG to use the new encoding when
  built for r6. We still use the old encoding for pre-r6, so as not to
  confuse return prediction stack hardware, which may detect only
  particular encodings of the return instruction.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <1443788657-14537-5-git-send-email-james.hogan@imgtec.com>

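For illustration only — a hedged sketch from the MIPS encoding tables, not QEMU code: JR rs is SPECIAL-opcode function 0x08, while the r6 replacement is JALR (function 0x09) with rd = $zero, so the link write is discarded.

    #include <stdint.h>

    /* Encode "jalr $zero, rs": opcode SPECIAL (0), rd = 0, funct = 0x09.
       With rd = $zero this behaves as a plain jump-register. */
    static uint32_t encode_jr_r6(unsigned rs)
    {
        return (0u << 26) | ((rs & 31) << 21) | (0u << 11) | 0x09;
    }
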
* tcg/mips: Add use_mips32r6_instructions definition
  James Hogan, 2015-10-19 (1 file, +7/-0)

  Add the definition use_mips32r6_instructions to the MIPS TCG backend;
  it is the constant 1 when built for MIPS release 6. It will be used to
  decide between pre-r6 and r6 instruction encodings.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <1443788657-14537-4-git-send-email-james.hogan@imgtec.com>

* tcg-opc.h: Simplify insn_start def
  James Hogan, 2015-10-19 (1 file, +5/-8)

  We already have a TLADDR_ARGS definition, so rearrange the order
  slightly and use it in the definition of insn_start, instead of having
  an #ifdef.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Signed-off-by: Richard Henderson <rth@twiddle.net>
  Message-Id: <1443788657-14537-2-git-send-email-james.hogan@imgtec.com>

* tcg/ppc: Prefer mask over andi.
  Richard Henderson, 2015-10-19 (1 file, +10/-10)

  Prefer the instruction that isn't required to modify cr0.

  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/ppc: Revise goto_tb implementation
  Richard Henderson, 2015-10-19 (1 file, +38/-11)

  Restrict the size of code_gen_buffer to 2GB on ppc64, which lets us
  assert that everything is reachable with addis+addi from tb_ret_addr.
  This lets us use a maximum of 4 insns for goto_tb instead of 7.

  Emit the indirect-branch portion of goto_tb up front, which means we
  only have to update two insns to update any link. With a 64-bit store,
  we can update the link atomically, which may be required in future.

  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/ppc: Adjust exit_tb for change in prologue placement
  Richard Henderson, 2015-10-19 (1 file, +4/-6)

  Moving the prologue to the beginning of the code_gen_buffer changes
  the direction of the "return" branch, so the logic needs to change to
  match.

  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Check for overflow via highwater mark
  Richard Henderson, 2015-10-07 (2 files, +14/-5)

  We currently pre-compute a worst-case code size for any TB, which
  works out to be 122kB. Since the average TB size is near 1kB, this
  wastes quite a lot of storage. Instead, check for overflow in between
  generating code for each opcode. The overhead of the check isn't
  measurable and the wastage is minimized.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

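A hedged sketch of the highwater idea (MAX_OP_BYTES and the struct names are our assumptions): reserve a worst-case margin for a single opcode at the end of the buffer, then a one-compare check between opcodes detects overflow.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    enum { MAX_OP_BYTES = 380 };   /* assumed per-opcode worst case */

    typedef struct {
        uint8_t *ptr;              /* current emission point */
        uint8_t *highwater;        /* end of buffer minus the margin */
    } CodeBuf;

    static void codebuf_init(CodeBuf *cb, uint8_t *buf, size_t len)
    {
        cb->ptr = buf;
        cb->highwater = buf + len - MAX_OP_BYTES;
    }

    /* Checked after emitting each opcode; true means abort and retry
       with a fresh/larger buffer rather than overrun. */
    static bool codebuf_overflow(const CodeBuf *cb)
    {
        return cb->ptr > cb->highwater;
    }
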
* tcg: Emit prologue to the beginning of code_gen_buffer
  Richard Henderson, 2015-10-07 (1 file, +28/-7)

  By putting the prologue at the end, we risk overwriting it should our
  estimate of the maximum TB size prove too small. Given the two
  different placements of the call to tcg_prologue_init, move the
  high-water-mark computation into tcg_prologue_init.

  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Remove tcg_gen_code_search_pc
  Richard Henderson, 2015-10-07 (2 files, +19/-42)

  It's no longer used, so tidy up everything reached by it.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Remove gen_intermediate_code_pc
  Richard Henderson, 2015-10-07 (1 file, +0/-4)

  It is no longer used, so tidy up everything reached by it. This
  includes the gen_opc_* arrays, the search_pc parameter and the inline
  gen_intermediate_code_internal functions.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Save insn data and use it in cpu_restore_state_from_tb
  Richard Henderson, 2015-10-07 (2 files, +29/-15)

  We can now restore state without retranslation.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Pass data argument to restore_state_to_opc
  Richard Henderson, 2015-10-07 (2 files, +12/-1)

  The gen_opc_* arrays are already redundant with the data stored in the
  insn_start arguments. Transition restore_state_to_opc to use data from
  the latter.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Add TCG_MAX_INSNS
  Richard Henderson, 2015-10-07 (1 file, +1/-0)

  Adjust all translators to respect it.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Allow extra data to be attached to insn_start
  Richard Henderson, 2015-10-07 (4 files, +59/-16)

  With an eye toward having this data replace the gen_opc_* arrays that
  each target collects in order to enable restore_state_from_tb.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Rename debug_insn_start to insn_start
  Richard Henderson, 2015-10-07 (3 files, +8/-8)

  With an eye toward making it mandatory.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/mips: pass oi to tcg_out_tlb_load
  Aurelien Jarno, 2015-09-19 (1 file, +5/-15)

  Instead of computing mem_index and s_bits in both the tcg_out_qemu_ld
  and tcg_out_qemu_st functions and passing them to tcg_out_tlb_load,
  directly pass oi to tcg_out_tlb_load and compute mem_index and s_bits
  there.

  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

* tcg/mips: move tcg_out_addsub2
  Aurelien Jarno, 2015-09-19 (1 file, +49/-49)

  Somehow the tcg_out_addsub2 function ended up in the middle of the
  qemu_ld/st related functions. Move it with the other arithmetic
  related functions.

  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

* tcg/mips: Fix clobbering of qemu_ld inputs
  James Hogan, 2015-09-19 (1 file, +15/-11)

  The MIPS TCG backend implements qemu_ld with 64-bit targets using the
  v0 register (base) as a temporary to load the upper half of the QEMU
  TLB comparator (see line 5 below), however this happens before the
  input address is used (line 8 to mask off the low bits for the TLB
  comparison, and line 12 to add the host-guest offset). If the input
  address (addrl) also happens to have been placed in v0 (as in the
  second column below), it gets clobbered before it is used.

         addrl in t2                addrl in v0
      1  srl   a0,t2,0x7            srl   a0,v0,0x7
      2  andi  a0,a0,0x1fe0         andi  a0,a0,0x1fe0
      3  addu  a0,a0,s0             addu  a0,a0,s0
      4  lw    at,9136(a0)          lw    at,9136(a0)      set TCG_TMP0 (at)
      5  lw    v0,9140(a0)          lw    v0,9140(a0)      set base (v0)
      6  li    t9,-4093             li    t9,-4093
      7  lw    a0,9160(a0)          lw    a0,9160(a0)      set addend (a0)
      8  and   t9,t9,t2             and   t9,t9,v0         use addrl
      9  bne   at,t9,0x836d8c8      bne   at,t9,0x836d838  use TCG_TMP0
     10  nop                        nop
     11  bne   v0,t8,0x836d8c8      bne   v0,a1,0x836d838  use base
     12  addu  v0,a0,t2             addu  v0,a0,v0         use addrl, addend
     13  lw    t0,0(v0)             lw    t0,0(v0)

  Fix by using TCG_TMP0 (at) as the temporary instead of v0 (base),
  pushing the load on line 5 forward into the delay slot of the low
  comparison (line 10). The early load of the addend on line 7 also
  needs pushing even further for 64-bit targets, or it will clobber a0
  before we're done with it. The output for 32-bit targets is
  unaffected.

      srl   a0,v0,0x7
      andi  a0,a0,0x1fe0
      addu  a0,a0,s0
      lw    at,9136(a0)
     -lw    v0,9140(a0)            load high comparator
      li    t9,-4093
     -lw    a0,9160(a0)            load addend
      and   t9,t9,v0
      bne   at,t9,0x836d838
     - nop
     + lw   at,9140(a0)            load high comparator
     +lw    a0,9160(a0)            load addend
     -bne   v0,a1,0x836d838
     +bne   at,a1,0x836d838
      addu  v0,a0,v0
      lw    t0,0(v0)

  Cc: qemu-stable@nongnu.org
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: James Hogan <james.hogan@imgtec.com>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

* tcg: Move tci_tb_ptr to -common
  Peter Crosthwaite, 2015-09-16 (1 file, +4/-0)

  This requires global visibility to common code. Move to tcg-common.

  Cc: Stefan Weil <sw@weilnetz.de>
  Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
  Message-Id: <cb0340eba225ab4945aa6cf7c9013f33aa05bcf8.1441614289.git.crosthwaite.peter@gmail.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* tcg: split tcg_op_defs to -common
  Peter Crosthwaite, 2015-09-16 (3 files, +35/-8)

  tcg_op_defs (and the _max) are both needed by the TCI disassembler.
  For multi-arch, tcg.c will be multiple-compiled (arch-obj) with its
  symbols hidden from common code. So split the definition off to a new
  file, tcg-common.c, which will remain a regular obj-y for use by both
  the TCI disas as well as the multiple tcg.c's.

  Cc: Stefan Weil <sw@weilnetz.de>
  Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
  Message-Id: <4b607425886d85aee65878e4935dfad46b3e6085.1441614289.git.crosthwaite.peter@gmail.com>
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
  Peter Maydell, 2015-09-14 (1 file, +4/-0)

  * Support for jemalloc
  * qemu_mutex_lock_iothread "No such process" fix
  * cutils: qemu_strto* wrappers
  * iohandler.c simplification
  * Many other fixes and misc patches.

  And some MTTCG work (with Emilio's fixes squashed):
  * Signal-free TCG kick
  * Removing spinlock in favor of QemuMutex
  * User-mode emulation multi-threading fixes/docs

  # gpg: Signature made Thu 10 Sep 2015 09:03:07 BST using RSA key ID 78C7AE83
  # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
  # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"

  * remotes/bonzini/tags/for-upstream: (44 commits)
    cutils: work around platform differences in strto{l,ul,ll,ull}
    cpu-exec: fix lock hierarchy for user-mode emulation
    exec: make mmap_lock/mmap_unlock globally available
    tcg: comment on which functions have to be called with mmap_lock held
    tcg: add memory barriers in page_find_alloc accesses
    remove unused spinlock.
    replace spinlock by QemuMutex.
    cpus: remove tcg_halt_cond and tcg_cpu_thread globals
    cpus: protect work list with work_mutex
    scripts/dump-guest-memory.py: fix after RAMBlock change
    configure: Add support for jemalloc
    add macro file for coccinelle
    configure: factor out adding disas configure
    vhost-scsi: fix wrong vhost-scsi firmware path
    checkpatch: remove tests that are not relevant outside the kernel
    checkpatch: adapt some tests to QEMU CODING_STYLE
    CODING_STYLE: update mixed declaration rules
    qmp: Add example usage of strto*l() qemu wrapper
    cutils: Add qemu_strtoull() wrapper
    cutils: Add qemu_strtoll() wrapper
    ...

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* replace spinlock by QemuMutex.
  KONRAD Frederic, 2015-09-09 (1 file, +4/-0)

  spinlock is only used in two cases:
  * cpu-exec.c: to protect TranslationBlock
  * mem_helper.c: for the lock helper in target-i386 (which seems broken).

  It's a pthread_mutex_t in user-mode, so we can use QemuMutex directly,
  with an #ifdef. The #ifdef will be removed when multithreaded TCG
  needs the mutex as well.

  Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
  Message-Id: <1439220437-23957-5-git-send-email-fred.konrad@greensocs.com>
  Signed-off-by: Emilio G. Cota <cota@braap.org>
  [Merge Emilio G. Cota's patch to remove volatile. - Paolo]
  Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

* softmmu: add helper function to pass through retaddr
  Pavel Dovgalyuk, 2015-09-11 (1 file, +23/-0)

  This patch introduces several helpers to pass a return address which
  points into the TB. A correct return address allows correct restoring
  of the guest PC and icount. These functions should be used when
  helpers embedded into a TB invoke memory operations.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
  Message-Id: <20150710095650.13280.32255.stgit@PASHA-ISP>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* typofixes - v4
  Veres Lajos, 2015-09-11 (2 files, +3/-3)

  Signed-off-by: Veres Lajos <vlajos@gmail.com>
  Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

* tcg/i386: omit a few REXW prefixes in softmmu code
  Aurelien Jarno, 2015-09-02 (1 file, +9/-6)

  When computing the TLB address we are likely to mask out the high
  32 bits by using shr + and. We can use 32-bit instructions in that
  case. This saves 2 bytes per TLB access.

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1437306632-20655-1-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/aarch64: Fix tcg_out_qemu_{ld,st} for guest_base == 0
  Richard Henderson, 2015-09-02 (1 file, +20/-7)

  In ffc6372851d8631a9f9fa56ec613b3244dc635b9, we swapped the guest base
  to the address base register from the address index register. Except
  that 31 in the base slot is SP not XZR, so we need to be more
  intelligent about which reg gets placed in which slot.

  Cc: qemu-stable@nongnu.org (v2.4.0)
  Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
  Reported-by: Andreas Färber <afaerber@suse.de>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* s390: fix softmmu compilation
  Laurent Vivier, 2015-08-28 (1 file, +2/-2)

  guest_base must be used only in linux-user mode.

  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Message-id: 1440757421-9674-1-git-send-email-laurent@vivier.eu
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

* linux-user: remove useless macros GUEST_BASE and RESERVED_VA
  Laurent Vivier, 2015-08-24 (8 files, +49/-61)

  As we have removed CONFIG_USE_GUEST_BASE, we always use a guest base
  and the macros GUEST_BASE and RESERVED_VA become useless: replace them
  by their values.

  Reviewed-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Message-Id: <1440420834-8388-1-git-send-email-laurent@vivier.eu>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* linux-user: remove --enable-guest-base/--disable-guest-base
  Laurent Vivier, 2015-08-24 (5 files, +8/-18)

  All TCG host architectures now support the guest base, and as there is
  no real performance loss, it can always be enabled. In any case, use
  of the guest base can effectively be disabled by setting it to 0.

  CONFIG_USE_GUEST_BASE is defined as (USE_GUEST_BASE && USER_ONLY); it
  would have to be replaced by CONFIG_USER_ONLY in non-CONFIG_USER_ONLY
  parts, but as some other parts already use !CONFIG_SOFTMMU, I have
  chosen to use !CONFIG_SOFTMMU instead.

  Reviewed-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  Message-Id: <1440373328-9788-2-git-send-email-laurent@vivier.eu>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/aarch64: Use softmmu fast path for unaligned accesses
  Richard Henderson, 2015-08-24 (1 file, +24/-13)

  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/s390: Use softmmu fast path for unaligned accesses
  Richard Henderson, 2015-08-24 (1 file, +21/-5)

  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/ppc: Improve unaligned load/store handling on 64-bit backend
  Benjamin Herrenschmidt, 2015-08-24 (1 file, +31/-10)

  Currently, we get to the slow path for any unaligned access in the
  backend, because we effectively preserve the bottom address bits below
  the alignment requirement when comparing with the TLB entry, so any
  non-zero bit there will cause the compare to fail.

  For the same number of instructions, we can instead add the access
  size - 1 to the address and stick to clearing all the bottom bits.
  That means that normal unaligned accesses will not fall back (the
  hardware will handle them fine). Only when crossing a page boundary
  will we end up with a mismatch, because we'll then be pointing to the
  next page, which cannot possibly be in that same TLB entry.

  Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Message-Id: <1437455978.5809.2.camel@kernel.crashing.org>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

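The arithmetic behind that check, as a small C model (PAGE_BITS and the function names are our assumptions): with size - 1 added first, the masked value only changes when the access spans into the next page.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12
    #define PAGE_MASK (~(((uint64_t)1 << PAGE_BITS) - 1))

    /* Value compared against the TLB entry: the page of the last byte. */
    static uint64_t tlb_compare_tag(uint64_t addr, unsigned size)
    {
        return (addr + size - 1) & PAGE_MASK;
    }

    int main(void)
    {
        /* unaligned but within the page: tag still matches -> fast path */
        printf("%d\n", tlb_compare_tag(0x1003, 4) == 0x1000);  /* 1 */
        /* crosses into the next page: tag mismatches -> slow path */
        printf("%d\n", tlb_compare_tag(0x1ffe, 4) == 0x1000);  /* 0 */
        return 0;
    }
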
* tcg/i386: use softmmu fast path for unaligned accesses
  Aurelien Jarno, 2015-08-24 (1 file, +13/-9)

  Softmmu unaligned loads/stores currently go through the slow path for
  two reasons:
  - to support unaligned access on hosts with strict alignment
  - to correctly handle accesses crossing pages

  x86 is only concerned by the second reason. Unaligned accesses are
  avoided by compilers, but are not uncommon. We therefore would like to
  see them go through the fast path if they don't cross pages. For that
  we can use the fact that two adjacent TLB entries can't contain the
  same page. Therefore, accessing the TLB entry corresponding to the
  first byte, but comparing its content to the page address of the last
  byte, ensures that we don't cross pages. We can do this check without
  adding more instructions to the TLB code (though increasing its length
  by one byte) by using the LEA instruction to combine the existing move
  with the size addition.

  On an x86-64 host, this gives a 3% boot time improvement for a powerpc
  guest and 4% for an x86-64 guest.

  [rth: Tidied calculation of the offset mask]
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1436467197-2183-1-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

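The x86 variant leans on the same last-byte comparison; here is a hedged model of the hit test under assumed names (the host-side point being that the +size-1 folds into an existing register move via LEA):

    #include <stdint.h>

    #define PAGE_BITS 12
    #define PAGE_MASK (~(((uint64_t)1 << PAGE_BITS) - 1))

    /* entry_page is from the TLB entry indexed by the FIRST byte; since
       two adjacent entries never hold the same page, comparing it with
       the page of the LAST byte rejects exactly the page-crossers. */
    static int tlb_hit(uint64_t entry_page, uint64_t addr, unsigned size)
    {
        uint64_t last = addr + size - 1;   /* one LEA on the host */
        return (last & PAGE_MASK) == entry_page;
    }
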
* tcg: Remove tcg_gen_trunc_i64_i32
  Richard Henderson, 2015-08-24 (1 file, +2/-7)

  Replacing it with tcg_gen_extrl_i64_i32.

  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: Split trunc_shr_i32 opcode into extr[lh]_i64_i32
  Richard Henderson, 2015-08-24 (14 files, +71/-53)

  Rather than allow arbitrary shift+trunc, only concern ourselves with
  low and high parts. This is all that was being used anyway.

  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: update README about size changing ops
  Aurelien Jarno, 2015-08-24 (1 file, +15/-3)

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/optimize: add optimizations for ext_i32_i64 and extu_i32_i64 ops
  Aurelien Jarno, 2015-08-24 (1 file, +13/-0)

  They behave the same as ext32s_i64 and ext32u_i64 from the constant
  folding and zero propagation point of view, except that they can't be
  replaced by a mov, so we don't compute the affected value.

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: implement real ext_i32_i64 and extu_i32_i64 ops
  Aurelien Jarno, 2015-08-24 (9 files, +41/-8)

  Implement real ext_i32_i64 and extu_i32_i64 ops. They ensure that a
  32-bit value is always converted to a 64-bit value and not propagated
  through the register allocator or the optimizer.

  Cc: Andrzej Zaborowski <balrogg@gmail.com>
  Cc: Alexander Graf <agraf@suse.de>
  Cc: Blue Swirl <blauwirbel@gmail.com>
  Cc: Stefan Weil <sw@weilnetz.de>
  Acked-by: Claudio Fontana <claudio.fontana@huawei.com>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: don't abuse TCG type in tcg_gen_trunc_shr_i64_i32
  Aurelien Jarno, 2015-08-24 (1 file, +2/-2)

  The tcg_gen_trunc_shr_i64_i32 function takes a 64-bit argument and
  returns a 32-bit value. Directly call tcg_gen_op3 with the correct
  types instead of calling tcg_gen_op3i_i32 and abusing the TCG types.

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg: rename trunc_shr_i32 into trunc_shr_i64_i32
  Aurelien Jarno, 2015-08-24 (13 files, +18/-18)

  The op is sometimes named trunc_shr_i32 and sometimes
  trunc_shr_i64_i32, and the name in the README doesn't match the name
  offered to the frontends. Always use the long name to make it clear it
  is a size-changing op.

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/optimize: allow constant to have copies
  Aurelien Jarno, 2015-08-24 (1 file, +2/-8)

  Now that copies and constants are tracked separately, we can allow a
  constant to have copies, deferring the choice to use a register or a
  constant to the register allocation pass. This prevents this kind of
  regular constant reloading:

  -OUT: [size=338]
  +OUT: [size=298]
     mov    -0x4(%r14),%ebp
     test   %ebp,%ebp
     jne    0x7ffbe9cb0ed6
     mov    $0x40002219f8,%rbp
     mov    %rbp,(%r14)
  -  mov    $0x40002219f8,%rbp
     mov    $0x4000221a20,%rbx
     mov    %rbp,(%rbx)
     mov    $0x4000000000,%rbp
     mov    %rbp,(%r14)
  -  mov    $0x4000000000,%rbp
     mov    $0x4000221d38,%rbx
     mov    %rbp,(%rbx)
     mov    $0x40002221a8,%rbp
     mov    %rbp,(%r14)
  -  mov    $0x40002221a8,%rbp
     mov    $0x4000221d40,%rbx
     mov    %rbp,(%rbx)
     mov    $0x4000019170,%rbp
     mov    %rbp,(%r14)
  -  mov    $0x4000019170,%rbp
     mov    $0x4000221d48,%rbx
     mov    %rbp,(%rbx)
     mov    $0x40000049ee,%rbp
     mov    %rbp,0x80(%r14)
     mov    %r14,%rdi
     callq  0x7ffbe99924d0
     mov    $0x4000001680,%rbp
     mov    %rbp,0x30(%r14)
     mov    0x10(%r14),%rbp
     mov    $0x4000001680,%rbp
     mov    %rbp,0x30(%r14)
     mov    0x10(%r14),%rbp
     shl    $0x20,%rbp
     mov    (%r14),%rbx
     mov    %ebx,%ebx
     mov    %rbx,(%r14)
     or     %rbx,%rbp
     mov    %rbp,0x10(%r14)
     mov    %rbp,0x90(%r14)
     mov    0x60(%r14),%rbx
     mov    %rbx,0x38(%r14)
     mov    0x28(%r14),%rbx
     mov    $0x4000220e60,%r12
     mov    %rbx,(%r12)
     mov    $0x40002219c8,%rbx
     mov    %rbp,(%rbx)
     mov    0x20(%r14),%rbp
     sub    $0x8,%rbp
     mov    $0x4000004a16,%rbx
     mov    %rbx,0x0(%rbp)
     mov    %rbp,0x20(%r14)
     mov    $0x19,%ebp
     mov    %ebp,0xa8(%r14)
     mov    $0x4000015110,%rbp
     mov    %rbp,0x80(%r14)
     xor    %eax,%eax
     jmpq   0x7ffbebcae426
     lea    -0x5f6d72a(%rip),%rax        # 0x7ffbe3d437b3
     jmpq   0x7ffbebcae426

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/optimize: track const/copy status separately
  Aurelien Jarno, 2015-08-24 (1 file, +14/-28)

  Instead of using an enum which could be either a copy or a const,
  track them separately. This will be used in the next patch.

  Constants are tracked through a bool. Copies are tracked by
  initializing a temp's next_copy and prev_copy to itself, which allows
  the code to be simplified a bit.

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

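A loose C sketch of that bookkeeping (struct and field names are ours, not the optimizer's): the constant flag is a plain bool, and the copy list is circular and doubly linked, with a self-link meaning "no copies".

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct TempInfo {
        bool is_const;
        uint64_t val;                  /* valid when is_const */
        struct TempInfo *next_copy;
        struct TempInfo *prev_copy;
    } TempInfo;

    static void temp_init(TempInfo *t)
    {
        t->is_const = false;
        t->next_copy = t;              /* self-link: not a copy */
        t->prev_copy = t;
    }

    static bool temp_is_copy(const TempInfo *t)
    {
        return t->next_copy != t;      /* no NULL/empty special case */
    }
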
* tcg/optimize: add temp_is_const and temp_is_copy functions
  Aurelien Jarno, 2015-08-24 (1 file, +60/-71)

  Add two accessor functions, temp_is_const and temp_is_copy, to make
  the code more readable and future changes easier.

  Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

* tcg/optimize: optimize temps tracking
  Aurelien Jarno, 2015-08-24 (1 file, +32/-11)

  The tcg_temp_info structure uses 24 bytes per temp. Now that we
  emulate vector registers on most guests, it's not uncommon to have
  more than 100 used temps. This means we have to initialize more than
  2kB at least twice per TB, often more when there are a few goto_tbs.

  Instead, use a TCGTempSet bit array to track which temps are in use in
  the current basic block. This means there are only around 16 bytes to
  initialize.

  This improves the boot time of a MIPS guest on an x86-64 host by
  around 7% and moves tcg_optimize out of the top of the profiler list.

  [rth: Handle TCG_CALL_DUMMY_ARG]
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

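Why the bit array is cheap to reset, in a hedged sketch (sizes and names assumed, not the TCG definitions): marking a temp used is one OR, and the per-TB reset is a small memset instead of re-initializing an array of structs.

    #include <string.h>

    #define MAX_TEMPS     512
    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    typedef struct {
        unsigned long l[MAX_TEMPS / BITS_PER_LONG];
    } TempSet;

    static void temps_reset(TempSet *s)   /* touches sizeof(*s) bytes only */
    {
        memset(s->l, 0, sizeof(s->l));
    }

    static void temp_mark_used(TempSet *s, unsigned idx)
    {
        s->l[idx / BITS_PER_LONG] |= 1ul << (idx % BITS_PER_LONG);
    }

    static int temp_is_used(const TempSet *s, unsigned idx)
    {
        return (s->l[idx / BITS_PER_LONG] >> (idx % BITS_PER_LONG)) & 1;
    }
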
* tcg/optimize: fix constant signedness
  Aurelien Jarno, 2015-08-24 (1 file, +5/-5)

  By convention, on a 64-bit host TCG internally stores 32-bit constants
  as sign-extended. This is not the case in the optimizer when a 32-bit
  constant is folded.

  This doesn't seem to have more consequences than suboptimal code
  generation. For instance the x86 backend assumes sign-extended
  constants, and in some rare cases uses a 32-bit unsigned immediate
  0xffffffff instead of an 8-bit signed immediate 0xff for the constant
  -1. This is with a ppc guest:

  before
  ------
   ---- 0x9f29cc
   movi_i32 tmp1,$0xffffffff
   movi_i32 tmp2,$0x0
   add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
   add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
   mov_i32 r10,tmp0

   0x7fd8c7dfe90c:  xor    %ebp,%ebp
   0x7fd8c7dfe90e:  mov    %ebp,%r11d
   0x7fd8c7dfe911:  mov    0x18(%r14),%r9d
   0x7fd8c7dfe915:  add    %r9d,%r10d
   0x7fd8c7dfe918:  adc    %ebp,%r11d
   0x7fd8c7dfe91b:  add    $0xffffffff,%r10d
   0x7fd8c7dfe922:  adc    %ebp,%r11d
   0x7fd8c7dfe925:  mov    %r11d,0x134(%r14)
   0x7fd8c7dfe92c:  mov    %r10d,0x28(%r14)

  after
  -----
   ---- 0x9f29cc
   movi_i32 tmp1,$0xffffffffffffffff
   movi_i32 tmp2,$0x0
   add2_i32 tmp0,CA,CA,tmp2,r6,tmp2
   add2_i32 tmp0,CA,tmp0,CA,tmp1,tmp2
   mov_i32 r10,tmp0

   0x7f37010d490c:  xor    %ebp,%ebp
   0x7f37010d490e:  mov    %ebp,%r11d
   0x7f37010d4911:  mov    0x18(%r14),%r9d
   0x7f37010d4915:  add    %r9d,%r10d
   0x7f37010d4918:  adc    %ebp,%r11d
   0x7f37010d491b:  add    $0xffffffffffffffff,%r10d
   0x7f37010d491f:  adc    %ebp,%r11d
   0x7f37010d4922:  mov    %r11d,0x134(%r14)
   0x7f37010d4929:  mov    %r10d,0x28(%r14)

  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  Message-Id: <1436544211-2769-2-git-send-email-aurelien@aurel32.net>
  Signed-off-by: Richard Henderson <rth@twiddle.net>

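The convention, reduced to one cast (a sketch, not the optimizer code): folding a 32-bit result on a 64-bit host must go through int32_t so the stored constant is sign-extended.

    #include <stdint.h>
    #include <stdio.h>

    /* Store a folded 32-bit result the way TCG expects on 64-bit hosts. */
    static uint64_t fold_const_i32(uint32_t result)
    {
        return (uint64_t)(int64_t)(int32_t)result;  /* sign-extend */
    }

    int main(void)
    {
        /* -1 must be stored as 0xffffffffffffffff, not 0xffffffff */
        printf("0x%llx\n", (unsigned long long)fold_const_i32(0xffffffffu));
        return 0;
    }
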
* tcg/mips: fix add2
  Aurelien Jarno, 2015-08-01 (1 file, +3/-0)

  The add2 code in the tcg_out_addsub2 function doesn't take into
  account the case where rl == al == bl. In that case we can't compute
  the carry after the addition. As it corresponds to a multiplication by
  2, the carry bit is bit 31. While this is a corner case, it prevents
  x86-64 guests from booting on a MIPS host.

  Cc: qemu-stable@nongnu.org
  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

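The corner case in miniature (a C model, not the backend code): when the output register aliases both inputs, the usual compare-with-input carry test reads an already-overwritten value, but since the operation is a doubling, the carry is simply the old bit 31.

    #include <stdint.h>
    #include <stdio.h>

    /* add2's low-part addition with carry out; rl may alias al and bl. */
    static uint32_t add_lo(uint32_t *rl, uint32_t al, uint32_t bl,
                           int rl_is_al_is_bl)
    {
        if (rl_is_al_is_bl) {          /* rl == al == bl: *rl = 2 * al */
            uint32_t carry = al >> 31; /* carry out is the old bit 31 */
            *rl = al + al;
            return carry;
        }
        *rl = al + bl;
        return *rl < bl;               /* usual unsigned-overflow test */
    }

    int main(void)
    {
        uint32_t r;
        printf("%u\n", add_lo(&r, 0x80000000u, 0x80000000u, 1)); /* 1 */
        return 0;
    }
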
* tcg/s390x: Mask TCGMemOp appropriately for indexing
  Aurelien Jarno, 2015-08-01 (1 file, +2/-2)

  Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK
  addition, but two cases were forgotten in the TCG S390 backend.

  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>

* tcg/mips: Mask TCGMemOp appropriately for indexing
  Aurelien Jarno, 2015-08-01 (1 file, +2/-2)

  Commit 2b7ec66f fixed TCGMemOp masking following the MO_AMASK
  addition, but two cases were forgotten in the TCG MIPS backend.

  Reviewed-by: Richard Henderson <rth@twiddle.net>
  Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>