| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
This allows VMSTATE_SINGLE to be defined in terms of VMSTATE_SINGLE_TEST.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
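A minimal sketch of what such a definition can look like (the macro shape is assumed, not copied from the tree): the unconditional variant simply forwards to the _TEST variant with a NULL predicate meaning "field is always present".

    /* Sketch only: forward the plain macro to the _TEST variant, with
     * NULL as the "always present" predicate. */
    #define VMSTATE_SINGLE(_field, _state, _version, _info, _type) \
        VMSTATE_SINGLE_TEST(_field, _state, NULL, _version, _info, _type)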
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
lzcnt is an AMD Phenom/Barcelona instruction that returns the number of
leading zero bits in a word.
As this is similar to the "bsr" instruction, reuse the existing code. Some
further changes are needed, though, because lzcnt always returns a defined
value (unlike bsr, which has a special case when the operand is 0).
lzcnt is guarded by the ABM CPUID bit (Fn8000_0001:ECX[5]).
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
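For reference, a standalone C model of the semantics described above (illustrative only, not QEMU's helper): lzcnt of 0 yields the operand width, whereas bsr leaves its result undefined in that case.

    #include <stdint.h>
    #include <stdio.h>

    /* Count leading zeros of a 32-bit value the way lzcnt defines it. */
    static int lzcnt32(uint32_t x)
    {
        int n = 0;

        if (x == 0) {
            return 32;              /* defined result for 0, unlike bsr */
        }
        while (!(x & 0x80000000u)) {
            x <<= 1;
            n++;
        }
        return n;
    }

    int main(void)
    {
        printf("%d %d %d\n", lzcnt32(0), lzcnt32(1), lzcnt32(0x80000000u));
        return 0;   /* prints: 32 31 0 */
    }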
|
|
|
|
| |
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
|
|
|
|
|
|
| |
The arpl implementation in target-i386/translate.c uses the cpu_A0
temporary across a brcond op, which is not safe because ordinary TCG
temporaries are not preserved across branches. This patch fixes that issue.
Signed-off-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
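An illustrative TCG pattern for this kind of fix (not the actual arpl code; "cond", "value" and "mem_index" are placeholders): a value that must survive a brcond is copied into a local temporary first.

    static void gen_store_after_branch(TCGv cond, TCGv value, int mem_index)
    {
        TCGv addr = tcg_temp_local_new();   /* local temps survive branches */
        int label = gen_new_label();

        tcg_gen_mov_tl(addr, cpu_A0);       /* snapshot the shared temporary */
        tcg_gen_brcondi_tl(TCG_COND_EQ, cond, 0, label);
        /* ... conditional code that may clobber cpu_A0 ... */
        gen_set_label(label);
        tcg_gen_qemu_st16(value, addr, mem_index);
        tcg_temp_free(addr);
    }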
|
|
|
|
|
|
|
| |
This reduces the impact on hosts that have addressing modes with limited
offsets. Suggested by Laurent Desnogues.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
|
|
|
|
|
|
| |
There was a mismerge, which left a tail-recursive call to cpu_post_load
without a base case :)
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit 56aebc891674cd2d07b3f64183415697be200084 changed the gdbstub in such
a way that debugging 32 or 16-bit guest code is no longer possible with qemu
for x86_64 guest CPUs. Since that commit, qemu only provides register sets
for 64-bit, forcing current and foreseeable gdb versions to also switch their
architecture to 64-bit. And this breaks if the inferior is 32 or 16 bit.
No question, this is a gdb issue. But, as was confirmed in several
discussions with gdb people, it is a non-trivial thing to fix. So until qemu
finds itself attached to a gdb version with reworked x86 support, we have to
work around it by switching the register layout as the guest switches its
execution mode between 16/32 and 64 bit.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
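A minimal sketch of the mode check this implies (assuming QEMU's x86 hflags; not the actual gdbstub patch): present the 64-bit register layout only while the guest is really executing 64-bit code.

    static int gdb_use_64bit_regs(CPUX86State *env)
    {
        /* Long mode active and a 64-bit code segment selected. */
        return (env->hflags & HF_LMA_MASK) && (env->hflags & HF_CS64_MASK);
    }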
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
|
| |
mce_banks is always MCE_BANKS_DEF * 4 in size; the value never changes.
CC: Huang Ying <ying.huang@intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
| |
Don't even ask; being able to load/save between 64- and 80-bit floats should be forbidden.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
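To illustrate why (plain standalone C, assuming an x86 host where long double is the 80-bit extended format): squeezing an 80-bit register value through a 64-bit field silently drops mantissa bits.

    #include <stdio.h>

    int main(void)
    {
        long double x = 1.0L + 0x1p-60L;   /* exact in 80-bit, not in 64-bit */
        double d = (double)x;              /* what a 64-bit save format keeps */

        printf("precision lost: %s\n", (long double)d == x ? "no" : "yes");
        return 0;   /* prints: precision lost: yes */
    }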
|
|
|
|
|
|
|
| |
It is needed to save the interrupt_bitmap
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
| |
It is needed to store fptags
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
| |
We save more than fpus in those 16 bits (fpstt as well), so we need an additional field.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
|
|
|
| |
This makes the savevm code correct, and sign extension gives us exactly
what we need (namely, sign-extension to 64 bits when used with 64-bit
addresses).
While there, change 0x100000 to 1 << 20, which makes all a20 uses share the
same syntax.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
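A standalone illustration of the sign-extension point (not QEMU code): keeping the mask as a signed 32-bit value makes it extend correctly over 64-bit addresses, and 1 << 20 spells out which bit it clears.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t a20_mask = ~(1 << 20);                 /* 0xffefffff as int32_t */
        uint64_t addr = 0x100100000ULL;                /* bit 20 and bit 32 set */
        uint64_t masked = addr & (int64_t)a20_mask;    /* sign-extends to 64 bit */

        printf("%#llx\n", (unsigned long long)masked); /* 0x100000000: only bit 20 cleared */
        return 0;
    }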
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
| |
This patch corrects the following aspects of exception generation in
fxsave/fxrstor:
* Generate #GP if the operand is not aligned to a 16-byte boundary
* Generate #UD if the LOCK prefix is used
* For CR0.EM = 1, #NM is generated, not #UD
Signed-off-by: Kevin Wolf <mail@kevin-wolf.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
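A sketch of the three checks (exception constants as in QEMU's target-i386 headers; the helper shape itself is illustrative, not the committed code):

    static void check_fxsave_exceptions(CPUX86State *env, target_ulong addr,
                                        int lock_prefix)
    {
        if (lock_prefix) {
            raise_exception(EXCP06_ILLOP);   /* #UD: LOCK prefix never allowed */
        }
        if (env->cr[0] & CR0_EM_MASK) {
            raise_exception(EXCP07_PREX);    /* #NM, not #UD, when CR0.EM = 1 */
        }
        if (addr & 0xf) {
            raise_exception(EXCP0D_GPF);     /* #GP: operand not 16-byte aligned */
        }
    }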
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
RDTSCP reads the time stamp counter and atomically also the content
of a 32-bit MSR, which can be freely set by the OS. This allows CPU-local
data to be queried by userspace.
Linux uses this to allow a fast implementation of the getcpu()
syscall, which uses the vsyscall page to avoid a context switch.
AMD CPUs since K8RevF and Intel CPUs since Nehalem support this
instruction.
RDTSCP is guarded by the RDTSCP CPUID bit (Fn8000_0001:EDX[27]).
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
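A sketch of what the helper has to produce (field and helper names assumed, not copied from the commit): the TSC in EDX:EAX plus the OS-programmed TSC_AUX value in ECX.

    static void helper_rdtscp_sketch(CPUX86State *env)
    {
        uint64_t tsc = cpu_get_tsc(env);

        env->regs[R_EAX] = (uint32_t)tsc;
        env->regs[R_EDX] = (uint32_t)(tsc >> 32);
        env->regs[R_ECX] = (uint32_t)env->tsc_aux;  /* IA32_TSC_AUX (MSR C000_0103h) */
    }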
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This adds support for the AMD Phenom/Barcelona's SSE4a instructions.
Those include insertq and extrq, which perform shift-and-mask operations on
XMM registers, in two versions (one taking immediate shift/length values,
the other taking them from another XMM register).
Additionally it implements movntss and movntsd, which are scalar
non-temporal stores (avoiding cache thrashing). These are implemented
as normal stores, though.
SSE4a is guarded by the SSE4A CPUID bit (Fn8000_0001:ECX[6]).
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
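For reference, the bit-field arithmetic of extrq on the low 64 bits of a register, as standalone C (illustrative only; a field length of 0 encodes 64, as in the ISA):

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t extrq64(uint64_t src, unsigned length, unsigned index)
    {
        uint64_t mask;

        length &= 0x3f;
        index  &= 0x3f;
        mask = length ? (1ULL << length) - 1 : ~0ULL;   /* 0 means 64 bits */
        return (src >> index) & mask;
    }

    int main(void)
    {
        /* 8-bit field starting at bit 16 of 0x...5566_7788 is 0x66. */
        printf("%#llx\n",
               (unsigned long long)extrq64(0x1122334455667788ULL, 8, 16));
        return 0;
    }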
|
|
|
|
|
|
|
|
|
|
| |
AMD CPUs feature a shortcut to access CR8 even from 32-bit mode:
if you use the LOCK prefix with "mov CR0", it accesses CR8 instead.
This behavior is guarded by the CR8_LEGACY CPUID bit
(Fn8000_0001:ECX[4]).
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
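A decode-level sketch of the redirection (the feature-bit macro name here is illustrative, not necessarily QEMU's):

    #define CPUID_EXT3_ALTMOVCR8 (1u << 4)   /* illustrative name for CR8_LEGACY */

    static int effective_cr_index(int reg, int lock_prefix, uint32_t ext3_features)
    {
        if (reg == 0 && lock_prefix && (ext3_features & CPUID_EXT3_ALTMOVCR8)) {
            return 8;        /* LOCK MOV CR0 is treated as MOV CR8 (the TPR) */
        }
        return reg;
    }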
|
|
|
|
|
|
|
|
|
|
|
|
| |
At the very least, a change like this requires discussion on the list.
The naming convention is goofy and it causes a massive merge problem. Something
like this _must_ be presented on the list first so people can provide input
and cope with it.
This reverts commit 99a0949b720a0936da2052cb9a46db04ffc6db29.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
| |
Some not-so-obvious bits, slirp and Xen, were left alone for the time
being.
Signed-off-by: malc <av1474@comtv.ru>
|
|
|
|
|
|
| |
Use globals for the 8 or 16 CPU registers on i386 and x86_64.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
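A sketch of the registration this implies (array size and register names shown for the 8-register i386 case, with names and shape assumed; the x86_64 build would cover 16):

    static TCGv cpu_regs[CPU_NB_REGS];

    static void tcg_init_reg_globals(void)
    {
        static const char *names[8] = {
            "eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi",
        };
        int i;

        for (i = 0; i < 8; i++) {
            cpu_regs[i] = tcg_global_mem_new(TCG_AREG0,
                                             offsetof(CPUX86State, regs[i]),
                                             names[i]);
        }
    }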
|
|
|
|
| |
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
|
|
| |
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
|
|
|
|
|
| |
The CPU state parameter is not used; remove it and adjust the callers. Now we
can compile ioport.c once for all targets.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
|
|
|
|
|
| |
cpu_synchronize_state already does this.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
|
|
| |
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem: Our file sys-queue.h is a copy of the BSD file, but there are
some additions and it's not entirely compatible. Because of that, there have
been conflicts with system headers on BSD systems. Some hacks have been
introduced in the commits 15cc9235840a22c289edbe064a9b3c19c5f49896,
f40d753718c72693c5f520f0d9899f6e50395e94,
96555a96d724016e13190b28cffa3bc929ac60dc and
3990d09adf4463eca200ad964cc55643c33feb50 but the fixes were fragile.
Solution: Avoid the conflict entirely by renaming the functions and the
file. Revert the previous hacks.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
|
|
|
|
|
|
| |
A direct call to kvm_arch_get_registers() bypasses the logic in
cpu_synchronize_state().
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
| |
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
|
|
|
|
|
|
|
|
|
| |
cpu_synchronize_state() is a little unreadable since the 'modified'
argument isn't self-explanatory. Simplify it by making it always
synchronize the kernel state into qemu, and automatically flush the
registers back to the kernel if they've been synchronized on this
exit.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
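A minimal sketch of the simplified contract (flag and function names assumed): fetch registers lazily once per exit, and mark them dirty so they are pushed back before the vcpu runs again.

    void cpu_synchronize_state(CPUState *env)
    {
        if (kvm_enabled() && !env->kvm_vcpu_dirty) {
            kvm_arch_get_registers(env);
            env->kvm_vcpu_dirty = 1;   /* flush back to KVM before re-entry */
        }
    }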
|
|
|
|
|
|
|
|
|
|
| |
In addition to the TCG-based qemu64 type, let's introduce a kvm64 CPU type,
which is the least common denominator of all KVM-capable x86 CPUs
(based on the Intel Pentium 4 Prescott). It can be used as a base type
for migration.
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
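A sketch of what such a built-in model entry can look like (struct and field names follow QEMU's x86_def_t convention, but the exact feature list here is illustrative, not the committed definition):

    static const x86_def_t kvm64_sketch = {
        .name     = "kvm64",
        .level    = 5,
        .family   = 15,                     /* Pentium 4 Prescott class */
        .model    = 6,
        .stepping = 1,
        .features = PPRO_FEATURES | CPUID_MCA | CPUID_MTRR | CPUID_CLFLUSH |
                    CPUID_PSE36,
        .ext2_features = CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
        .xlevel   = 0x80000008,
        .model_id = "Common KVM processor",
    };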
|
|
|
|
|
|
|
|
|
|
|
|
| |
The CPUID level determines how many CPUID leaves are exposed to the guest.
Some features (like multi-core) cannot be propagated without the proper
level, but guests may be confused by bogus entries in some leaves.
So add level= and xlevel= to the list of -cpu options to allow the user to
override the default settings. While at it, merge unnecessary local
variables into one and allow hexadecimal arguments.
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
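A fragment-level sketch of the parsing side (surrounding -cpu parser variables are assumed): using strtoul with base 0 is what makes hexadecimal arguments work.

    uint32_t numvalue = strtoul(val, NULL, 0);   /* base 0: accepts 0x... too */

    if (!strcmp(featurestr, "level")) {
        x86_cpu_def->level = numvalue;
    } else if (!strcmp(featurestr, "xlevel")) {
        x86_cpu_def->xlevel = numvalue;
    }

So both "-cpu qemu64,level=4" and "-cpu qemu64,xlevel=0x80000008" parse the same way.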
|
|
|
|
|
|
|
|
|
|
|
| |
Controlled by the enhanced -smp option, set the CPUID bits to present the
desired topology to the guest. This is vendor-specific, but (with the
exception of the CMP_LEGACY bit) not conflicting, so we set all bits every
time.
There is no real multithreading support on AMD CPUs, so report cores
instead.
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
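For the vendor-independent part, a sketch of the architectural bits involved (not QEMU's exact code): CPUID leaf 1 advertises the logical CPU count in EBX[23:16] and sets the HTT flag in EDX[28].

    static void set_leaf1_topology(uint32_t *ebx, uint32_t *edx,
                                   int cores, int threads)
    {
        int logical = cores * threads;

        if (logical > 1) {
            *edx |= 1u << 28;                                /* HTT */
            *ebx = (*ebx & ~0x00ff0000u) |
                   ((uint32_t)(logical & 0xff) << 16);       /* logical CPU count */
        }
    }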
|
|
|
|
|
|
|
|
| |
Intel CPUs store the number of cores in CPUID leaf 4. So push
the maxleaf value to 4 to give guests access to this leaf.
Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
handle_cpu_signal is very nearly copy-paste code for each target, with a
few minor variations. This patch sets up appropriate defaults for a
generic handle_cpu_signal and provides overrides for particular targets
that did things differently. Fixing things like the persistent "XXX: use
sigsetjmp" comment should now become somewhat easier.
Previous comments on this patch suggest that the "activate soft MMU for
this block" comments refer to defunct functionality. I have removed
such blocks for the appropriate targets in this patch.
Signed-off-by: Nathan Froyd <froydnj@codesourcery.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
kqemu introduces a number of restrictions on the i386 target. The worst is that
it prevents large memory from working in the default build.
Furthermore, kqemu is fundamentally flawed in a number of ways. It relies on
the TSC as a time source which will not be reliable on a multiple processor
system in userspace. Since most modern processors are multicore, this severely
limits the utility of kqemu.
kvm is a viable alternative for people looking to accelerate qemu and has the
benefit of being supported by the upstream Linux kernel. If someone can
implement workarounds to remove the restrictions introduced by kqemu, I'm
happy to avoid and/or revert this patch.
N.B. kqemu will still function in the 0.11 series but this patch removes it from
the 0.12 series.
Paul, please Ack or Nack this patch.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
|
|
|
| |
Signed-off-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|