* Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* We even had the encoding of smull already handy...
Cc: Andrzej Zaborowski <balrogg@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* We're going to have use for this shortly in implementing other helpers.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Matching the 32-bit multiword arithmetic that we already have.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
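The entry does not show the interface it adds, but the idea it names, multiword arithmetic, is carry propagation across word-sized halves. A minimal standalone C sketch (add128 is a hypothetical name, not from the patch):

    #include <stdint.h>

    /* Hypothetical illustration: a 128-bit addition built from two 64-bit
     * word additions with explicit carry propagation, the same shape as
     * two-word (add2-style) operations in a code generator. */
    static void add128(uint64_t *rl, uint64_t *rh,
                       uint64_t al, uint64_t ah,
                       uint64_t bl, uint64_t bh)
    {
        uint64_t lo = al + bl;
        uint64_t carry = lo < al;   /* low word wrapped around iff carry out */
        *rl = lo;
        *rh = ah + bh + carry;      /* fold the carry into the high word */
    }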
* Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
* 'eflags3' of git://github.com/rth7680/qemu: (61 commits)
  target-i386: Use movcond to implement shiftd.
  target-i386: Discard CC_OP computation in set_cc_op also
  target-i386: Use movcond to implement rotate flags.
  target-i386: Use movcond to implement shift flags.
  target-i386: Add CC_OP_CLR
  target-i386: Implement tzcnt and fix lzcnt
  target-i386: Use clz/ctz for bsf/bsr helpers
  target-i386: Implement ADX extension
  target-i386: Implement RORX
  target-i386: Implement SHLX, SARX, SHRX
  target-i386: Implement PDEP, PEXT
  target-i386: Implement MULX
  target-i386: Implement BZHI
  target-i386: Implement BLSR, BLSMSK, BLSI
  target-i386: Implement BEXTR
  target-i386: Implement ANDN
  target-i386: Implement MOVBE
  target-i386: Decode the VEX prefixes
  target-i386: Tidy prefix parsing
  target-i386: Use CC_SRC2 for ADC and SBB
  ...
* target-i386: Use movcond to implement shiftd.
With this being all straight-line code, it can get deleted
when the cc variables die.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Discard CC_OP computation in set_cc_op also
The shift and rotate insns use movcond to set CC_OP, and thus
achieve a conditional EFLAGS setting. By discarding CC_OP in
a later flags setting insn, we can discard that movcond.
Signed-off-by: Richard Henderson <rth@twiddle.net>
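A paraphrased sketch of the mechanism, not the actual patch (names follow target-i386 translate.c conventions such as cpu_cc_op):

    /* Inside a set_cc_op()-style function: when leaving a dynamic CC_OP
     * for a known one, the value the shift's movcond computed into
     * cpu_cc_op can never be read again; telling TCG so lets dead-code
     * elimination delete the movcond itself. */
    if (s->cc_op == CC_OP_DYNAMIC) {
        tcg_gen_discard_i32(cpu_cc_op);  /* computed CC_OP value is dead */
    }
    s->cc_op = op;            /* track the now-known CC_OP at translate time */
    s->cc_op_dirty = true;    /* write the constant back lazily */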
* target-i386: Use movcond to implement rotate flags.
With this being all straight-line code, it can get deleted
when the cc variables die.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Use movcond to implement shift flags.
With this being all straight-line code, it can get deleted
when the cc variables die.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Add CC_OP_CLR
Special case xor with self. We need not even store the known
zero into cc_src.
Signed-off-by: Richard Henderson <rth@twiddle.net>
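Why nothing needs storing: after xor of a register with itself, every arithmetic flag is an architectural constant. A standalone illustration using the x86 flag bit positions:

    #include <stdint.h>

    /* x86 EFLAGS bit positions (architectural constants). */
    #define CC_C 0x0001   /* carry */
    #define CC_P 0x0004   /* parity */
    #define CC_Z 0x0040   /* zero */
    #define CC_S 0x0080   /* sign */
    #define CC_O 0x0800   /* overflow */

    /* After xor %reg,%reg the result is always 0: ZF and PF are set,
     * CF/SF/OF are clear -- no operand values are needed to know this. */
    static uint32_t flags_after_xor_self(void)
    {
        return CC_Z | CC_P;
    }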
* target-i386: Implement tzcnt and fix lzcnt
We weren't computing flags for lzcnt at all. At the same time,
adjust the implementation of bsf/bsr to avoid the local branch,
using movcond instead.
Signed-off-by: Richard Henderson <rth@twiddle.net>
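In C terms, the movcond pattern is a branchless select between the unconditionally computed count and a fallback value. A standalone sketch (bsf-style; keeping the old destination on zero input is one reading of the architecturally undefined case):

    #include <stdint.h>

    /* bsf via a movcond-style select instead of a branch: compute the
     * count unconditionally, then pick the result based on src == 0. */
    static uint32_t bsf32(uint32_t src, uint32_t old_dst)
    {
        uint32_t count = 0;
        uint32_t x = src | 0x80000000u;  /* guarantee a set bit to stop on */
        while (!(x & 1)) {
            x >>= 1;
            count++;
        }
        return src == 0 ? old_dst : count;   /* the "movcond" */
    }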
* target-i386: Use clz/ctz for bsf/bsr helpers
And mark the helpers as NO_RWG_SE.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement ADX extension
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement RORX
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement SHLX, SARX, SHRX
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement PDEP, PEXT
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement MULX
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement BZHI
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement BLSR, BLSMSK, BLSI
Do all of group 17 at one time for ease.
Signed-off-by: Richard Henderson <rth@twiddle.net>
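For reference, the group-17 (BMI1) operations reduce to one-line identities; the subtraction borrows through the lowest set bit, which is what makes a combined implementation convenient:

    #include <stdint.h>

    /* The VEX group-17 operations as plain C identities. */
    static uint64_t blsr(uint64_t x)   { return x & (x - 1); } /* reset lowest set bit   */
    static uint64_t blsmsk(uint64_t x) { return x ^ (x - 1); } /* mask through that bit  */
    static uint64_t blsi(uint64_t x)   { return x & (0 - x); } /* isolate lowest set bit */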
* target-i386: Implement BEXTR
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement ANDN
As this is the first of the BMI insns to be implemented,
this carries quite a bit more baggage than normal.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Implement MOVBE
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Decode the VEX prefixes
No actual required uses of these encodings yet.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Tidy prefix parsing
Avoid duplicating switch statement between 32 and 64-bit modes.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* target-i386: Use CC_SRC2 for ADC and SBB
Add another slot in ENV and store two of the three inputs. This lets us
do less work when carry-out is not needed, and avoids the unpredictable
CC_OP after translating these insns.
Signed-off-by: Richard Henderson <rth@twiddle.net>
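The arithmetic that lets carry-out be recomputed on demand: for dst = src1 + src2 + ci, the carry out is recoverable from dst, src1, and ci alone. A standalone statement of the standard identity (the helper's exact formulation may differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* With ci == 0 the sum wrapped iff dst < src1; with ci == 1 the
     * extra +1 makes the equal case wrap too, so the test is dst <= src1. */
    static bool adc_carry_out(uint64_t dst, uint64_t src1, bool ci)
    {
        return ci ? dst <= src1 : dst < src1;
    }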
* Pass the data in explicitly, rather than indirectly via env.
This avoids all sorts of unnecessary register spillage.
Signed-off-by: Richard Henderson <rth@twiddle.net>
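The difference, sketched with hypothetical helper signatures (names invented for illustration):

    /* Indirect: the helper digs its inputs out of env, so the translator
     * must spill globals back into env around the call. */
    uint64_t helper_op_env(CPUX86State *env);        /* hypothetical */

    /* Explicit: inputs arrive as ordinary arguments, so TCG can keep
     * them in host registers across the call site. */
    uint64_t helper_op(uint64_t a, uint64_t b);      /* hypothetical */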
* In preparation for making this a const helper.
By using the proper types in the parameters to the helper functions,
we get to avoid quite a lot of subsequent casting.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* After a comparison or subtraction, the original value of the LHS will
currently be reconstructed using an addition. However, in most cases
it is already available: store it in a temp-local variable and save 1
or 2 TCG ops (2 if the result of the addition needs to be extended).
The temp-local can be declared dead as soon as the cc_op changes again,
or also before the translation block ends because gen_prepare_cc will
always make a copy before returning it. All this magic, plus copy
propagation and dead-code elimination, ensures that the temp local will
(almost) never be spilled.
Example (cmp $0x21,%rax + jbe), TCG opcodes before:

    movi_i64   tmp1,$0x21
    movi_i64   cc_src,$0x21
    sub_i64    cc_dst,rax,tmp1
    add_i64    tmp7,cc_dst,cc_src
    movi_i32   cc_op,$0x11
    brcond_i64 tmp7,cc_src,leu,$0x0

and after:

    movi_i64   tmp1,$0x21
    movi_i64   cc_src,$0x21
    sub_i64    cc_dst,rax,tmp1
    movi_i32   cc_op,$0x11
    discard    loc11
    brcond_i64 rax,cc_src,leu,$0x0

Generated host code before:

    mov    (%r14),%rbp
    mov    %rbp,%rbx
    sub    $0x21,%rbx
    lea    0x21(%rbx),%r12
    movl   $0x11,0xa0(%r14)
    movq   $0x21,0x90(%r14)
    mov    %rbx,0x98(%r14)
    cmp    $0x21,%r12
    jbe    ...

and after:

    mov    (%r14),%rbp
    mov    %rbp,%rbx
    sub    $0x21,%rbx
    movl   $0x11,0xa0(%r14)
    movq   $0x21,0x90(%r14)
    mov    %rbx,0x98(%r14)
    cmp    $0x21,%rbp
    jbe    ...
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
* Placing the CC_OP_DYNAMIC at the join is less effective than
before the branch, as the branch will have forced global registers
to their home locations. This way we have a chance to discard
CC_SRC2 before it gets stored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
* A jump that ends a basic block or otherwise falls back to CC_OP_DYNAMIC
will always have to call gen_op_set_cc_op. However, not all jumps end
a basic block, so introduce a variant that does not do this.
This was partially undone earlier (i386: drop cc_op argument of gen_jcc1);
redo it now, also to prepare for the introduction of src2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
* Replace low-level ops with a higher-level "cmp %al, (A0)" in the case
of scas, and "cmp T0, (A0)" in the case of cmps.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
* It is almost unused, and it is simpler to pass a TCG value directly
to gen_shiftd_rm_T1_T3. This value is then written to t2 without
going through a temporary register.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
* Signed-off-by: Richard Henderson <rth@twiddle.net>
* Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
* Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
* This simplifies all the jump generation code. CCPrepare allows the
code to create an efficient brcond always, so there is no need to
duplicate the setcc and jcc code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
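A sketch of the shape such a descriptor can take (the field set here is illustrative, not necessarily the patch's exact definition): it describes a comparison that has not been emitted yet, so callers can lower it either way.

    /* A pending condition "cond(reg, reg2-or-imm)" that callers can
     * lower to a setcond (for setcc/cmov) or a brcond (for jcc). */
    typedef struct CCPrepare {
        TCGCond cond;      /* comparison to apply */
        TCGv reg;          /* first operand */
        TCGv reg2;         /* second operand when use_reg2 is true */
        target_ulong imm;  /* immediate second operand otherwise */
        bool use_reg2;     /* select reg2 vs. imm */
        bool no_setcond;   /* value already 0/1; setcond can be skipped */
    } CCPrepare;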