Convert existing users of KVM_ENABLE_CAP to new helper.
Reviewed-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
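
For illustration, a hedged sketch of what such a conversion typically looks like. It assumes a kvm_vcpu_enable_cap()-style varargs wrapper is the "new helper" meant here; check include/sysemu/kvm.h for the real name and signature.

    /* Before: open-coded KVM_ENABLE_CAP ioctl (sketch). */
    static int enable_osi_old(CPUState *cs)
    {
        struct kvm_enable_cap cap = {};

        cap.cap = KVM_CAP_PPC_OSI;                 /* capability to turn on */
        return kvm_vcpu_ioctl(cs, KVM_ENABLE_CAP, &cap);
    }

    /* After: the helper builds the struct and issues the ioctl internally. */
    static int enable_osi_new(CPUState *cs)
    {
        return kvm_vcpu_enable_cap(cs, KVM_CAP_PPC_OSI, 0);
    }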
Book3s_64 guests expect the L1 cache sizes in the device tree, so let's give
them proper values for all CPU types we support.
This fixes a "not compliant" warning with sles11 guests on -M pseries for me.
Signed-off-by: Alexander Graf <agraf@suse.de>
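
A hedged sketch of how such values typically reach the guest device tree; "d-cache-size"/"i-cache-size" are the usual property names, and the l1_dcache_size/l1_icache_size class fields are assumed here rather than quoted from the tree.

    /* Illustrative only: advertise per-CPU L1 cache sizes in the device tree. */
    if (pcc->l1_dcache_size) {
        fdt_setprop_cell(fdt, offset, "d-cache-size", pcc->l1_dcache_size);
    }
    if (pcc->l1_icache_size) {
        fdt_setprop_cell(fdt, offset, "i-cache-size", pcc->l1_icache_size);
    }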
We were entering the power saving state even when interrupts (like an
external interrupt or a decrementer interrupt) were still in flight.
If we find a pending interrupt, don't enter the power saving state.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Tom Musta <tmusta@gmail.com>
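
A minimal sketch of the idea, not the actual patch; it assumes the usual CPUPPCState/CPUState field names and leaves the surrounding wait/doze handling out.

    /* Only halt the vCPU if nothing is pending; otherwise keep it running
     * so the in-flight interrupt can be delivered. */
    if (env->pending_interrupts != 0) {
        cs->halted = 0;
    } else {
        cs->halted = 1;
        cs->exception_index = EXCP_HLT;
    }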
There are three different variants of the decrementer for BookE and BookS.
The BookE variant sets TSR[DIS] to 1 when the DEC value becomes 1 or 0. TSR[DIS]
then indicates whether the decrementer interrupt line is asserted or not.
The old BookS variant treats DEC as an edge-triggered interrupt that fires when
the DEC value's top bit flips from 0 to 1.
The new BookS variant maintains the assertion bit inside DEC itself. Whenever
the DEC value becomes negative (top bit set), the DEC interrupt line is asserted.
So far we mostly implemented the old BookS variant. Let's implement them all
properly.
This fixes booting pseries ppc64 guest images in TCG mode for me.
Signed-off-by: Alexander Graf <agraf@suse.de>
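
A condensed sketch of the three triggering rules described above; the flag and helper names follow common QEMU usage, but the surrounding timer code is omitted and the structure is illustrative.

    /* old_decr/decr are the previous and new decrementer values. */
    if (is_booke) {
        /* BookE: latch TSR[DIS]; that bit drives the interrupt line. */
        if (decr == 1 || decr == 0) {
            env->spr[SPR_BOOKE_TSR] |= TSR_DIS;
        }
    } else if (is_old_books) {
        /* Old BookS: edge-triggered when the top bit flips from 0 to 1. */
        if (!(old_decr >> 31) && (decr >> 31)) {
            ppc_set_irq(cpu, PPC_INTERRUPT_DECR, 1);
        }
    } else {
        /* New BookS: level-triggered while DEC is negative (top bit set). */
        ppc_set_irq(cpu, PPC_INTERRUPT_DECR, (decr >> 31) != 0);
    }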
This patch corrects the VSX integer to floating point conversion instructions
by using the endian-correct accessors. The auxiliary "j" index used by the
existing macros is now obsolete and is removed. The JOFFSET preprocessor
macro is also obsolete and removed.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch corrects the VSX floating point to integer conversion
instructions by using the endian-correct accessors. The auxiliary
"j" index used by the existing macros is now obsolete and is removed.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This change corrects the VSX double precision to single precision and
single precision to double precision conversion routines. The
endian-correct accessors are now used. The auxiliary "j" index is no longer
necessary and is eliminated.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This change fixes the VSX scalar compare instructions. The existing usage of "x.f64[0]"
is changed to "x.VsrD(0)".
Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
A common pattern in the VSX helper code macros is the use of "x.fld[i]" where
"x" is a VSR and "fld" is an argument to a macro ("f64" or "f32" is passed).
This is not always correct on LE hosts.
This change addresses all instances of this pattern to be "x.fld" where "fld" is:
- "VsrD(0)" for scalar instructions accessing 64-bit numbers
- "VsrD(i)" for vector instructions accessing 64-bit numbers
- "VsrW(i)" for vector instructions accessing 32-bit numbers
Note that there are no instances of this pattern where a scalar instruction
accesses a 32-bit number.
Note also that it would be correct to use "VsrD(i)" for scalar instructions, since
the loop index is only ever "0". I have chosen to use "VsrD(0)" instead; it
seems a little clearer.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
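
As an illustration of the pattern change, a made-up helper macro (not one from the tree):

    /* Old shape: the macro indexed the raw union array, which picks the
     * wrong doubleword on little-endian hosts. */
    #define VSX_ADD_OLD(op, fld)                                            \
        xt.fld[i] = float64_add(xa.fld[i], xb.fld[i], &env->fp_status)

    /* New shape: the endian-aware accessor is part of the argument, so
     * VSX_ADD_NEW(xsadddp, VsrD(0)) always means doubleword 0 in Power ISA
     * numbering, and VSX_ADD_NEW(xvadddp, VsrD(i)) the i-th element. */
    #define VSX_ADD_NEW(op, fld)                                            \
        xt.fld = float64_add(xa.fld, xb.fld, &env->fp_status)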
This change properly orders the doublewords of the VSRs 0-31. Because these
registers are constructed from separate doublewords, they must be inverted
on Little Endian hosts. The inversion is performed both when the VSR is read
and when it is written.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This change defines accessors for VSR doubleword and word fields that
are correct from a host-endian perspective. This allows code to
use the Power ISA indexing numbers directly.
For example, the xscvdpsxws instruction has a target VSR that looks
like this:
0           32       64                    127
+-----------+--------+-----------+-----------+
| undefined |   SW   | undefined | undefined |
+-----------+--------+-----------+-----------+
VSX helper code will use VsrW(1) to access this field.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
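
A sketch of what such accessors look like, following the usual HOST_WORDS_BIGENDIAN pattern; the exact union field names (u64[2], u32[4]) are assumed.

    #if defined(HOST_WORDS_BIGENDIAN)
    #define VsrD(i) u64[i]          /* ISA doubleword i == host element i */
    #define VsrW(i) u32[i]          /* ISA word i == host element i       */
    #else
    #define VsrD(i) u64[1 - (i)]    /* doublewords stored in reverse      */
    #define VsrW(i) u32[3 - (i)]    /* words stored in reverse            */
    #endif

    /* e.g. the xscvdpsxws result word above: xt.VsrW(1) = ...; */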
The various VSX Convert to Integer instructions should truncate the
floating point number to an integer value, which is equivalent to
a round-to-zero rounding mode. The existing VSX floating point to
integer conversion helpers are erroneously using the rounding mode set
in the PowerPC Floating Point Status and Control Register (FPSCR).
This change corrects this defect by using the appropriate
float*_to_*_round_to_zero() routines from the softfloat library.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
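
For example, the truncating softfloat conversions look like this (the surrounding helper and the VsrD accessor use are illustrative; the softfloat routines themselves are real):

    /* Truncate toward zero regardless of FPSCR[RN] ... */
    int32_t  w = float64_to_int32_round_to_zero(xb.VsrD(0), &env->fp_status);
    uint64_t d = float64_to_uint64_round_to_zero(xb.VsrD(0), &env->fp_status);

    /* ... instead of the variants that honour the current rounding mode: */
    int32_t  w_rounded = float64_to_int32(xb.VsrD(0), &env->fp_status);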
Remove MSR_POW from the msr_mask for POWER7/7P/8.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Cédric Le Goater <clg@fr.ibm.com>
Tested-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Without MSR_VSX we die early during a Linux boot.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Cédric Le Goater <clg@fr.ibm.com>
Tested-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add PPC_ISEL to insns_flags.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Cédric Le Goater <clg@fr.ibm.com>
Tested-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add MSR_LE to the msr_mask for POWER8.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Cédric Le Goater <clg@fr.ibm.com>
Tested-by: Cédric Le Goater <clg@fr.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This flag will be used to decide whether to emulate some bits of the
H_SET_MODE hypercall, because some are POWER8-only.
While we are here, add the 2.05 flag to the POWER8 family too. POWER7/7+
already have it.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The PowerPC kernel expects the number of SMT threads in a core to be a power
of 2. Since QEMU doesn't enforce this, an invalid thread count leads to an
early guest kernel crash.
Prevent this crash and make it a graceful exit from QEMU itself by
validating the user-supplied thread count.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
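
The check itself is tiny; a hedged sketch of the validation (message text and placement are illustrative):

    /* A power of 2 has exactly one bit set, so n & (n - 1) must be zero. */
    if (smp_threads & (smp_threads - 1)) {
        error_report("Cannot support %d threads per core: it must be a "
                     "power of 2", smp_threads);
        exit(1);
    }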
create_new_table() should allocate 0x20 opc_handler_t pointers, but
actually allocates 0x20 opc_handler_t structs. Fix this.
Signed-off-by: Stuart Brady <sdb@zubnet.me.uk>
Reviewed-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
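
The difference is only the element type used to size the allocation; an illustrative sketch:

    opc_handler_t **table;

    /* Buggy: sizes the buffer for 0x20 full structs (over-allocates). */
    table = g_malloc(0x20 * sizeof(opc_handler_t));
    /* Fixed: 0x20 pointers, which is what the table actually holds.   */
    table = g_new(opc_handler_t *, 0x20);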
This resets SPR values to their defaults on CPU reset. This should help
with little-endian guest reboot issues.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This fixes warnings from the static code analysis (smatch).
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Codespell found and fixed these new typos:
* doesnt -> doesn't
* funtion -> function
* perfomance -> performance
* remaing -> remaining
A coding style issue (line too long) was fixed manually.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This lets us drop some local variables in tlb_fill() functions.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Most targets were using offsetof(CPUFooState, breakpoints) to determine
how much of CPUFooState to clear on reset. Use the next field after
CPU_COMMON instead, if any, or sizeof(CPUFooState) otherwise.
Signed-off-by: Andreas Färber <afaerber@suse.de>
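
A hedged sketch of the pattern, using the placeholder names from the commit text rather than a real target:

    /* Old boundary: the first CPU_COMMON field.                            */
    memset(env, 0, offsetof(CPUFooState, breakpoints));

    /* New boundary: the first field following CPU_COMMON, if there is one. */
    memset(env, 0, offsetof(CPUFooState, field_after_cpu_common));
    /* ... otherwise simply: memset(env, 0, sizeof(CPUFooState));           */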
Signed-off-by: Andreas Färber <afaerber@suse.de>
Note that while such functions may exist both for *-user and softmmu,
only *-user uses the CPUState hook, while softmmu reuses the prototype
for calling it directly.
Signed-off-by: Andreas Färber <afaerber@suse.de>
All targets using it gain the ability to set -cpu name,key=value,...
options via the default TYPE_CPU CPUClass::parse_features() implementation.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Default to false.
Tidy variable naming and inline cast uses while at it.
Tested-by: Jia Liu <proljc@gmail.com> (or32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
Commits fdfba1a298ae26dd44bcfdb0429314139a0bc55a,
ab1da85791340e504d10487e1add81b9988afa98,
f606604f1c10b60ef294f1b9b229426521a365e3 and
2c17449b3022ca9623c4a7e2a504a4150ac4ad30 added usages of ENV_GET_CPU()
macro in target-specific code.
Use ppc_env_get_cpu() instead.
Cc: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This makes use of @cpu_dt_id and the related API in:
1. emulated XICS hypercall handlers, as they receive fixed CPU indexes;
2. XICS-KVM, to enable in-kernel XICS on the right CPU;
3. the device-tree renderer.
This removes the @cpu_index fixup, as @cpu_dt_id is used instead, so the QEMU
monitor can accept command-line CPU indexes again.
This changes kvm_arch_vcpu_id() to use ppc_get_vcpu_dt_id(), as at the moment
the KVM CPU id and the device tree ID are calculated using the same algorithm.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Normally CPUState::cpu_index is used to pick the right CPU for various
operations. However, the default consecutive numbering does not always work
for PowerPC.
These indexes are reflected in /proc/device-tree/cpus/PowerPC,POWER7@XX
and are used to call KVM VCPU ioctls. In order to achieve this,
kvmppc_fixup_cpu() was introduced. Roughly speaking, it multiplies
cpu_index by the number of threads per core.
This approach has disadvantages such as:
1. NUMA configuration stays broken after the fixup;
2. CPU-targeted commands from the QEMU Monitor do not work properly as
CPU indexes have been fixed and there is no clear way for the user to
know what the new CPU indexes are.
This introduces a @cpu_dt_id field in the CPUPPCState struct which
is initialized from @cpu_index by default and can be fixed later
to meet the device tree requirements.
This adds an API to handle @cpu_dt_id.
This removes kvmppc_fixup_cpu() as it is no longer needed; @cpu_dt_id
is calculated in ppc_cpu_realize().
This will be used later in machine code.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
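
A hedged sketch of the idea; the stride computation below is only an illustration of "multiply by the threads per core the kernel expects", not the exact code.

    /* Map a QEMU cpu_index onto the device-tree/KVM numbering space. */
    static void ppc_cpu_assign_dt_id(PowerPCCPU *cpu, int kernel_threads_per_core)
    {
        CPUState *cs = CPU(cpu);
        int core   = cs->cpu_index / smp_threads;   /* guest core number  */
        int thread = cs->cpu_index % smp_threads;   /* thread within core */

        cpu->cpu_dt_id = core * kernel_threads_per_core + thread;
    }

    int ppc_get_vcpu_dt_id(PowerPCCPU *cpu)
    {
        return cpu->cpu_dt_id;
    }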
This supports updating an htab managed by the hypervisor. Currently we don't
have any user for this feature. This actually brings the store_hpte interface
in line with the load_hpte one. We may want to use this when we want to
emulate the H_ENTER hcall in QEMU for HV KVM.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ folded fix for the "warn_unused_result" build break in
kvmppc_hash64_write_pte(), Greg Kurz <gkurz@linux.vnet.ibm.com> ]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
For updating the in-kernel htab we need to provide both pte0 and pte1; hence,
update the interface to take pte0 and pte1 together.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ ldq_phys() API change, Greg Kurz <gkurz@linux.vnet.ibm.com> ]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
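
An illustration of the interface change; the function names are stand-ins, not necessarily those used in the tree.

    /* Old shape: the two halves of the HPTE were stored separately.      */
    void store_hpte0(CPUPPCState *env, hwaddr ptex, uint64_t pte0);
    void store_hpte1(CPUPPCState *env, hwaddr ptex, uint64_t pte1);

    /* New shape: one call carries the complete entry, which is what an
     * in-kernel HTAB update needs.                                       */
    void store_hpte(CPUPPCState *env, hwaddr ptex,
                    uint64_t pte0, uint64_t pte1);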
With KVM enabled, we store the hash page table information in the hypervisor.
Use an ioctl to read the htab contents. Without this we get the error below
when trying to read a guest address:
(gdb) x/10 do_fork
0xc000000000098660 <do_fork>: Cannot access memory at address 0xc000000000098660
(gdb)
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ fixes for 32-bit build (casts!), ldq_phys() API change,
Greg Kurz <gkurz@linux.vnet.ibm.com> ]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Correctly update htab_mask using the return value of the
KVM_PPC_ALLOCATE_HTAB ioctl. Also, we don't update sdr1
on GET_SREGS for HV: we check for an external htab and, if
one is in use, we don't need to update sdr1.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ fixed pte group offset computation in ppc_hash64_htab_lookup() that
caused TCG to fail, Greg Kurz <gkurz@linux.vnet.ibm.com> ]
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Per Alex Graf's suggestion, the recently added case in gen_conditional_store
for stqcx should use an additional temporary when accessing the second
doubleword. This avoids mutating the EA argument to the function, which is
counterintuitive.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
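
The idea as a hedged TCG sketch; the surrounding gen_conditional_store() context is omitted and the register indices are illustrative.

    /* Instead of tcg_gen_addi_tl(EA, EA, 8), which mutates the caller's EA,
     * compute the second doubleword's address in a fresh temporary. */
    TCGv ea_plus_8 = tcg_temp_new();

    tcg_gen_addi_tl(ea_plus_8, EA, 8);
    gen_qemu_st64(ctx, cpu_gpr[reg + 1], ea_plus_8);
    tcg_temp_free(ea_plus_8);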
This patch fixes 64-bit constants that were erroneously declared with a "ul"
suffix instead of "ull". The preferred form "ULL" is used.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
64-bit constants need the "ULL" suffix, not just "UL", because
on 32-bit platforms 'long' is not large enough and this will
cause a compiler warning.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
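
A one-line illustration (the constant is made up):

    /* 'long' is 32 bits on 32-bit hosts, so this draws a compiler warning: */
    uint64_t mask_warn = 0x8000000000000000UL;
    /* 'unsigned long long' is at least 64 bits everywhere:                 */
    uint64_t mask_ok   = 0x8000000000000000ULL;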
This patch adds the Vector Permute and Exclusive-OR (vpermxor)
instruction introduced in Power ISA Version 2.07.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the Vector SHA Sigma instructions introduced in Power
ISA Version 2.07:
- Vector SHA-512 Sigma Doubleword (vshasigmad)
- Vector SHA-256 Sigma Word (vshasigmaw)
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
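
For reference, the SHA-256 sigma functions evaluated by the word form (standard FIPS 180-4 definitions; the helper's per-element selection of which sigma to apply is omitted):

    #include <stdint.h>

    static inline uint32_t ror32(uint32_t x, int n)
    {
        return (x >> n) | (x << (32 - n));
    }

    /* lower-case sigma0/sigma1 and upper-case Sigma0/Sigma1 from SHA-256 */
    static inline uint32_t sha256_sigma0(uint32_t x) { return ror32(x, 7)  ^ ror32(x, 18) ^ (x >> 3);  }
    static inline uint32_t sha256_sigma1(uint32_t x) { return ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10); }
    static inline uint32_t sha256_Sigma0(uint32_t x) { return ror32(x, 2)  ^ ror32(x, 13) ^ ror32(x, 22); }
    static inline uint32_t sha256_Sigma1(uint32_t x) { return ror32(x, 6)  ^ ror32(x, 11) ^ ror32(x, 25); }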
This patch adds the Vector AES instructions introduced in Power ISA
Version 2.07:
- Vector AES Cipher (vcipher)
- Vector AES Cipher Last (vcipherlast)
- Vector AES Inverse Cipher (vncipher)
- Vector AES Inverse Cipher Last (vncipherlast)
- Vector AES SubBytes (vsbox)
Note that the implementation of vncipher deviates from the RTL in
ISA V2.07. However, it does match the verbal description in the
third paragraph. The RTL will be fixed in ISA V2.07B. The
implementation here has been tested against actual P8 hardware.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the Binary Coded Decimal instructions bcdadd. and
bcdsub.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
This patch adds the Vector Polynomial Multiply Sum instructions
introduced in Power ISA Version 2.07:
- Vector Polynomial Multiply Sum Byte (vpmsumb)
- Vector Polynomial Multiply Sum Halfword (vpmsumh)
- Vector Polynomial Multiply Sum Word (vpmsumw)
- Vector Polynomial Multiply Sum Doubleword (vpmsumd)
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
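
The building block behind these is a carry-less (GF(2) polynomial) multiply; a small self-contained reference for the byte case (the actual helper additionally XOR-sums the products of element pairs into wider results):

    #include <stdint.h>

    /* Multiply two 8-bit polynomials over GF(2); the product fits in 15 bits. */
    static uint16_t clmul_8x8(uint8_t a, uint8_t b)
    {
        uint16_t prod = 0;
        int i;

        for (i = 0; i < 8; i++) {
            if (b & (1u << i)) {
                prod ^= (uint16_t)a << i;   /* add the shifted partial product with XOR */
            }
        }
        return prod;
    }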