| Commit message | Author | Age | Files | Lines |
Mostly bugfixes + Alexey's interface-based implementation
of the NMI monitor command.
# gpg: Signature made Thu 28 Aug 2014 15:07:22 BST using RSA key ID 9B4D86F2
# gpg: Good signature from "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: aka "Paolo Bonzini <bonzini@gnu.org>"
* remotes/kvm/tags/for-upstream:
mc146818rtc: reinitialize irq_reinject_on_ack_count on reset
target-i386: Add "tsc_adjust" CPU feature name
target-i386: Add "mpx" CPU feature name
vl: process -object after other backend options
checkpatch.pl: adjust typedef definition to QEMU coding style
x86: Clear MTRRs on vCPU reset
x86: kvm: Add MTRR support for kvm_get|put_msrs()
x86: Use common variable range MTRR counts
target-i386: Don't forbid NX bit on PAE PDEs and PTEs
spapr: Add support for new NMI interface
s390x: Migrate to new NMI interface
s390x: Convert QEMUMachine to MachineClass
cpus: Define callback for QEMU "nmi" command
kvm: run cpu state synchronization on target vcpu thread
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
tsc_adjust migration support is already implemented (commit
f28558d3d37ad3bc4e35e8ac93f7bf81a0d5622c), so we can add it to the list
of known feature names.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Migration support for MPX is already implemented (commit
79e9ebebbf2a00c46fcedb6dc7dd5e12bbd30216), so we can add it to the list
of known feature names.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SDM specifies (June 2014 Vol3 11.11.5):
On a hardware reset, the P6 and more recent processors clear the
valid flags in variable-range MTRRs and clear the E flag in the
IA32_MTRR_DEF_TYPE MSR to disable all MTRRs. All other bits in the
MTRRs are undefined.
We currently do none of that, so whatever MTRR settings you had prior
to reset are what you have after reset. Usually this doesn't matter
because KVM often ignores the guest mappings and uses write-back
anyway. However, if you have an assigned device and an IOMMU that
allows NoSnoop for that device, KVM defers to the guest memory
mappings which are now stale after reset. The result is that OVMF
rebooting on such a configuration takes a full minute to LZMA
decompress the firmware volume, a process that is nearly instant on
the initial boot.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
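A minimal sketch of the reset behaviour described above, assuming CPUX86State-style field names (mtrr_deftype, mtrr_var[]) and the MSR_MTRRcap_VCNT macro; it is illustrative, not the literal patch:

    /* Illustrative only: clear MTRR state as SDM 11.11.5 requires on reset. */
    static void x86_mtrr_reset(CPUX86State *env)
    {
        int i;

        /* Clearing IA32_MTRR_DEF_TYPE also clears its E (enable) flag. */
        env->mtrr_deftype = 0;

        /* Drop the valid (V) flag of every variable-range pair; the other
         * bits are architecturally undefined, so simply zero everything. */
        for (i = 0; i < MSR_MTRRcap_VCNT; i++) {
            env->mtrr_var[i].base = 0;
            env->mtrr_var[i].mask = 0;   /* V flag lives in bit 11 of the mask */
        }
    }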
The MTRR state in KVM currently runs completely independent of the
QEMU state in CPUX86State.mtrr_*. This means that on migration, the
target loses MTRR state from the source. Generally that's ok though
because KVM ignores it and maps everything as write-back anyway. The
exception to this rule is when we have an assigned device and an IOMMU
that doesn't promote NoSnoop transactions from that device to be cache
coherent. In that case KVM trusts the guest mapping of memory as
configured in the MTRR.
This patch updates kvm_get|put_msrs() so that we retrieve the actual
vCPU MTRR settings and therefore keep CPUX86State synchronized for
migration. kvm_put_msrs() is also used on vCPU reset and therefore
allows future modifications of MTRR state at reset to be realized.
Note that the entries array used by both functions was already
slightly undersized for holding every possible MSR, so this patch
increases it beyond the 28 new entries necessary for MTRR state.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
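A hedged sketch of the "put" direction only; kvm_msr_entry_set(), the MSR_MTRR* macros and the field names mirror QEMU/KVM conventions but should be read as assumptions, not the exact patch:

    /* Append the vCPU's variable-range MTRR state to the MSR list for KVM. */
    static int kvm_put_mtrr_msrs(CPUX86State *env, struct kvm_msr_entry *entries, int n)
    {
        int i;

        kvm_msr_entry_set(&entries[n++], MSR_MTRRdefType, env->mtrr_deftype);
        for (i = 0; i < MSR_MTRRcap_VCNT; i++) {
            kvm_msr_entry_set(&entries[n++], MSR_MTRRphysBase(i), env->mtrr_var[i].base);
            kvm_msr_entry_set(&entries[n++], MSR_MTRRphysMask(i), env->mtrr_var[i].mask);
        }
        return n;   /* the caller must size entries[] for these extra slots */
    }

The fixed-range MSRs would be appended the same way, which is where the 28-entry figure above comes from (1 default-type + 11 fixed + 2 x 8 variable).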
We currently define the number of variable range MTRR registers as 8
in the CPUX86State structure and vmstate, but use MSR_MTRRcap_VCNT
(also 8) to report to guests the number available. Change this to
use MSR_MTRRcap_VCNT consistently.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
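A self-contained sketch of the consistency being described, with the MTRRcap bit layout taken from the SDM; the type and helper names are made up for illustration:

    #include <stdint.h>

    #define MSR_MTRRCAP_VCNT             8u           /* variable-range MTRR pairs */
    #define MSR_MTRRCAP_FIXRANGE_SUPPORT (1u << 8)
    #define MSR_MTRRCAP_WC_SUPPORTED     (1u << 10)

    typedef struct { uint64_t base, mask; } MTRRVar;

    typedef struct {
        MTRRVar mtrr_var[MSR_MTRRCAP_VCNT];          /* previously a bare "[8]" */
    } MTRRState;

    /* The guest-visible count now comes from the same macro as the storage. */
    static uint64_t mtrr_cap_value(void)
    {
        return MSR_MTRRCAP_VCNT | MSR_MTRRCAP_FIXRANGE_SUPPORT |
               MSR_MTRRCAP_WC_SUPPORTED;
    }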
Commit e8f6d00c30ed88910d0d985f4b2bf41654172ceb ("target-i386: raise
page fault for reserved physical address bits") added a check that the
NX bit is not set on PAE PDPEs, but it also added it to rsvd_mask for
the rest of the function. This caused any PDEs or PTEs with NX set to be
erroneously rejected, making PAE guests with NX support unusable.
Signed-off-by: William Grant <wgrant@ubuntu.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
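A standalone sketch of the corrected rule, reduced to a helper; the bit values follow the SDM, while the helper name and the example high-reserved mask are assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    #define PG_NX_MASK      (1ULL << 63)
    #define PG_HI_RSVD_MASK 0x000f000000000000ULL   /* example: bits above MAXPHYADDR */

    /* NX is always reserved in a 32-bit PAE PDPE, so the extra bit is OR-ed in
     * locally; rsvd_mask itself stays NX-free so PDEs/PTEs may set NX legally. */
    static bool pae_pdpe_has_reserved_bits(uint64_t pdpe, bool efer_nxe)
    {
        uint64_t rsvd_mask = PG_HI_RSVD_MASK;

        if (!efer_nxe) {
            rsvd_mask |= PG_NX_MASK;   /* NX reserved everywhere when EFER.NXE=0 */
        }
        return (pdpe & (rsvd_mask | PG_NX_MASK)) != 0;
    }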
Currently the syscall instruction is buggy in x86_64 user mode: EIP is
updated only after do_syscall(), which is too late for clone(). clone()
creates the child thread at env->EIP (the address of the syscall insn),
so the child re-enters do_syscall(), which is not expected and is
sometimes fatal.
User-mode syscall emulation does not use the MSRs, so it should behave
the same way as INT 0x80. INT 0x80 updates EIP in do_interrupt(); do the
same for syscall for consistency.
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
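A fragment-style sketch of the idea (not the exact patch): in the x86_64 user-mode cpu_loop, advance EIP past the syscall instruction before dispatching, mirroring what do_interrupt() does for INT 0x80. The register/argument layout shown follows linux-user conventions but is an assumption here:

    case EXCP_SYSCALL:
        /* Point EIP past the "syscall" insn first, so a clone()d child
         * resumes after it instead of re-executing the syscall. */
        env->eip = env->exception_next_eip;
        env->regs[R_EAX] = do_syscall(env,
                                      env->regs[R_EAX],   /* syscall number */
                                      env->regs[R_EDI], env->regs[R_ESI],
                                      env->regs[R_EDX], env->regs[10],
                                      env->regs[8], env->regs[9], 0, 0);
        break;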
into staging
Tracing pull request
* remotes/stefanha/tags/tracing-pull-request:
virtio-rng: add some trace events
trace: add some tcg tracing support
trace: teach lttng backend to use format strings
trace: [tcg] Include TCG-tracing header on all targets
trace: [tcg] Include event definitions in "trace.h"
trace: [tcg] Generate TCG tracing routines
trace: [tcg] Include TCG-tracing helpers
trace: [tcg] Define TCG tracing helper routine wrappers
trace: [tcg] Define TCG tracing helper routines
trace: [tcg] Declare TCG tracing helper routines
trace: [tcg] Add 'tcg' event property
trace: [tcg] Argument type transformation machinery
trace: [tcg] Argument type transformation rules
trace: [tcg] Add documentation
trace: install simpletrace SystemTap tapset
simpletrace: add simpletrace.py --no-header option
trace: add tracetool simpletrace_stap format
trace: extract stap_escape() function for reuse
Conflicts:
Makefile.objs
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Previously, execute would be disabled for all pages with SMEP enabled,
regardless of what mode the access took place in.
Signed-off-by: Ricky Zhou <ricky@rzhou.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM_FEATURE_CLOCKSOURCE_STABLE_BIT is enabled by default and supported
by KVM. But not having a name defined makes QEMU treat it as an unknown
and unmigratable feature flag (as any unknown feature may possibly
require state to be migrated), and disable it by default on "-cpu host".
As a side-effect, the new name also makes the flag configurable,
allowing the user to disable it (which may be useful for testing or for
compatibility with old kernels).
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
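A trimmed sketch of the kind of table entry this implies in target-i386/cpu.c's feature_word_info; the exact name string and bit position are assumptions:

    [FEAT_KVM] = {
        .feat_names = {
            "kvmclock", "kvm_nopiodelay", "kvm_mmu", "kvmclock",
            "kvm_asyncpf", "kvm_steal_time", "kvm_pv_eoi", "kvm_pv_unhalt",
            /* bits 8-23 left unnamed */
            [24] = "kvmclock-stable-bit",   /* KVM_FEATURE_CLOCKSOURCE_STABLE_BIT */
        },
        .cpuid_eax = KVM_CPUID_FEATURES, .cpuid_reg = R_EAX,
    },

Once a bit has a name QEMU no longer treats it as unknown, so it stays enabled for "-cpu host" and can be switched off explicitly on the -cpu command line.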
This adds a new CPU model named "Broadwell". It has all the features
from Haswell, plus PREFETCHW, RDSEED, ADX, SMAP.
PREFETCHW was already supported as "3dnowprefetch".
RDSEED and ADX were added in Linux v3.15-rc1.
SMAP was added in Linux v3.15-rc2.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Cc: Wang, Yong Y <yong.y.wang@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Dugger, Donald D <donald.d.dugger@intel.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Expose "Invariant TSC" flag, if KVM is enabled. From Intel documentation:
17.13.1 Invariant TSC The time stamp counter in newer processors may
support an enhancement, referred to as invariant TSC. Processor’s
support for invariant TSC is indicated by CPUID.80000007H:EDX[8].
The invariant TSC will run at a constant rate in all ACPI P-, C-,
and T-states. This is the architectural behavior moving forward. On
processors with invariant TSC support, the OS may use the TSC for wall
clock timer services (instead of ACPI or HPET timers). TSC reads are
much more efficient and do not incur the overhead associated with a ring
transition or access to a platform resource.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
[ehabkost: redo feature filtering to use .tcg_features]
[ehabkost: add CPUID_APM_INVTSC macro, add it to .unmigratable_flags]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
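A standalone sketch of checking the CPUID bit quoted above on a host, using GCC's cpuid.h; CPUID_APM_INVTSC matches the macro name the notes mention, the rest is illustrative:

    #include <stdbool.h>
    #include <cpuid.h>

    #define CPUID_APM_INVTSC (1u << 8)   /* CPUID.80000007H:EDX[8] */

    static bool host_has_invariant_tsc(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
            return false;   /* leaf not available */
        }
        return (edx & CPUID_APM_INVTSC) != 0;
    }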
Invariant TSC documentation mentions that "invariant TSC will run at a
constant rate in all ACPI P-, C-, and T-states".
This is not the case if migration to a host with different TSC frequency
is allowed, or if savevm is performed. So block migration/savevm.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
[AF+mtosatti: Updated error message]
Signed-off-by: Andreas Färber <afaerber@suse.de>
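A hedged sketch of how such a blocker is typically registered in QEMU; migrate_add_blocker()'s signature has changed across QEMU versions, so treat this as illustrative rather than the literal patch:

    static Error *invtsc_mig_blocker;

    static void x86_cpu_block_migration_for_invtsc(CPUX86State *env)
    {
        if (env->features[FEAT_8000_0007_EDX] & CPUID_APM_INVTSC) {
            error_setg(&invtsc_mig_blocker,
                       "State blocked by non-migratable CPU device (invtsc flag)");
            migrate_add_blocker(invtsc_mig_blocker);   /* also blocks savevm */
        }
    }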
Having only migratable flags reported by default on the "host" CPU model
is safer for the following reasons:
* Existing users may expect "-cpu host" to be migration-safe, if they
take care of always using compatible host CPUs, host kernels, and
QEMU versions.
* Users who don't care about migration and want to enable all features
supported by the host kernel can simply change their setup to use
migratable=no.
Without this change, people using "-cpu host" will stop being able to
migrate, because now "invtsc" is getting enabled by default.
We are not setting migratable=yes by default on all X86CPU subclasses,
because users should be able to get non-migratable features enabled if
they ask for them explicitly.
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This flag will allow the user to choose between two modes:
* All flags that can be enabled on the host, even if unmigratable
(migratable=no);
* Only flags that can be enabled on the host, are known to QEMU, and
are migratable (migratable=yes).
The default is still migratable=false, to keep current behavior, but
this will be changed to migratable=true by another patch.
My plan was to support the "migratable" flag on all CPU classes, but
have the default to "false" on all CPU models except "host". However,
DeviceClass has no mechanism to allow a child class to have a different
property default from the parent class yet, so by now only the "host"
CPU model will support the "migratable" flag.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
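A sketch of what a class-local qdev property for the "host" model can look like; the default shown follows the text above, and the macros are standard QEMU property macros, but the exact hookup is an assumption:

    static Property host_x86_cpu_properties[] = {
        /* false keeps today's behaviour; a follow-up patch flips it to true */
        DEFINE_PROP_BOOL("migratable", X86CPU, migratable, false),
        DEFINE_PROP_END_OF_LIST()
    };

    static void host_x86_cpu_class_init(ObjectClass *oc, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(oc);

        dc->props = host_x86_cpu_properties;   /* only "-cpu host" gets the flag */
    }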
If enforce/check is specified in TCG mode, QEMU will ensure all CPU
features are supported by TCG, so no CPU feature is silently disabled.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Be explicit about TCG vs. !KVM]
Signed-off-by: Andreas Färber <afaerber@suse.de>
Instead of manually filtering each feature word, add a tcg_features
field to FeatureWordInfo, and use that field to filter all feature words
in TCG mode.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
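A condensed sketch of the mechanism: each feature word records which bits TCG can emulate, and filtering becomes a single loop (the struct is trimmed; field names mirror QEMU's but are assumptions here):

    typedef struct FeatureWordInfo {
        const char *feat_names[32];
        uint32_t cpuid_eax;          /* CPUID leaf that reports this word */
        int cpuid_reg;
        uint32_t tcg_features;       /* bits TCG is able to emulate */
    } FeatureWordInfo;

    /* TCG-mode filtering, one feature word at a time: */
    for (w = 0; w < FEATURE_WORDS; w++) {
        env->features[w] &= feature_word_info[w].tcg_features;
    }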
Now that we have the feature word arrays, we don't need to manually copy
each array item, we can simply iterate through each feature word.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Those macros will be used in the feature_word_info array data, so they
need to be defined earlier.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
TCG doesn't support any of the feature flags on FEAT_KVM and
FEAT_C000_0001_EDX feature words, so clear all bits on those feature
words.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The TCG_7_0_EBX_FEATURES macro was defined but never used (it even had a
typo that was never noticed). Make the existing TCG feature filtering
code use it.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Andreas Färber <afaerber@suse.de>
Instead of an #ifdef in the middle of the code, just set
TCG_EXT2_FEATURES to a different value depending on TARGET_X86_64.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This will allow us to re-use the feature filtering logic (and the
check/enforce flag logic) for TCG.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This will help us simplify the code that calls
report_unavailable_features() later.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Merge filter_features_for_kvm() and kvm_check_features_against_host().
Both functions made exactly the same calculations, the only difference
was that filter_features_for_kvm() changed the bits on cpu->features[],
and kvm_check_features_against_host() did error reporting.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
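A hedged sketch of what the merged function can look like; x86_cpu_get_supported_feature_word() and filtered_features[] mirror QEMU naming, but the body is illustrative rather than the actual patch:

    /* Trim unsupported bits and remember what was dropped, in one pass. */
    static int x86_cpu_filter_features(X86CPU *cpu)
    {
        CPUX86State *env = &cpu->env;
        FeatureWord w;
        int dropped = 0;

        for (w = 0; w < FEATURE_WORDS; w++) {
            uint32_t host = x86_cpu_get_supported_feature_word(w);

            cpu->filtered_features[w] = env->features[w] & ~host;
            env->features[w] &= host;                /* what the old filter did */
            if (cpu->filtered_features[w]) {
                report_unavailable_features(w, cpu->filtered_features[w]);
                dropped = 1;                         /* what the old check did */
            }
        }
        return dropped;   /* lets check/enforce decide whether to error out */
    }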
Instead of checking and calling unavailable_host_feature() once for each
bit, simply call the function (now renamed to
report_unavailable_features()) once for each feature word.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[AF: Drop unused return value]
Signed-off-by: Andreas Färber <afaerber@suse.de>
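A sketch of the per-word reporting helper being described; the message text and the name lookup are illustrative:

    static void report_unavailable_features(FeatureWord w, uint32_t mask)
    {
        FeatureWordInfo *f = &feature_word_info[w];
        int i;

        for (i = 0; i < 32; i++) {
            if (mask & (1u << i)) {
                fprintf(stderr,
                        "warning: host doesn't support requested feature: "
                        "CPUID.%02XH [bit %d] (%s)\n",
                        f->cpuid_eax, i,
                        f->feat_names[i] ? f->feat_names[i] : "unnamed");
            }
        }
    }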
KVM never supported the MONITOR flag so it doesn't make sense to have it
enabled by default when KVM is enabled.
The rationale here is similar to the cases where it makes sense to have
a feature enabled by default on all CPU models when on KVM mode (e.g.
x2apic). In this case we are having a feature disabled by default for
the same reasons.
In this case we don't need machine-type compat code because it is
currently impossible to run a KVM VM with the MONITOR flag set.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This patch eliminates the (now) redundant copy of the Advanced Encryption Standard (AES)
ShiftRows and InvShiftRows tables; the code is updated to use the common tables declared in
include/qemu/aes.h.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
After Peter's previous patch they are redundant; this way we don't
assign them except when needed. While at it, there were lots of cases
where the ".fields" indentation was inconsistent:
.fields = (VMStateField []) {
and
.fields = (VMStateField []) {
Change all the combinations to:
.fields = (VMStateField[]){
The biggest problem (apart from aesthetics) was that checkpatch complained
when we copied and pasted the code from one place to another.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Because of the "goto out", the contents of local_err are leaked
and lost.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
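The usual shape of this kind of fix, shown as a generic sketch (the function and type names are placeholders): propagate local_err to the caller instead of leaking it on the early-exit path.

    void frobnicate(Thing *t, Error **errp)
    {
        Error *local_err = NULL;

        helper_that_may_fail(t, &local_err);
        if (local_err) {
            error_propagate(errp, local_err);   /* previously just "goto out" */
            goto out;
        }
        /* ... more work ... */
    out:
        cleanup(t);
    }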
The function tcg_gen_lshift() is unused; remove it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
* remotes/bonzini/softmmu-smap: (33 commits)
target-i386: cleanup x86_cpu_get_phys_page_debug
target-i386: fix protection bits in the TLB for SMEP
target-i386: support long addresses for 4MB pages (PSE-36)
target-i386: raise page fault for reserved bits in large pages
target-i386: unify reserved bits and NX bit check
target-i386: simplify pte/vaddr calculation
target-i386: raise page fault for reserved physical address bits
target-i386: test reserved PS bit on PML4Es
target-i386: set correct error code for reserved bit access
target-i386: introduce support for 1 GB pages
target-i386: introduce do_check_protect label
target-i386: tweak handling of PG_NX_MASK
target-i386: commonize checks for PAE and non-PAE
target-i386: commonize checks for 4MB and 4KB pages
target-i386: commonize checks for 2MB and 4KB pages
target-i386: fix coding standards in x86_cpu_handle_mmu_fault
target-i386: simplify SMAP handling in MMU_KSMAP_IDX
target-i386: fix kernel accesses with SMAP and CPL = 3
target-i386: move check_io helpers to seg_helper.c
target-i386: rename KSMAP to KNOSMAP
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Make the code a bit more similar to x86_cpu_handle_mmu_fault.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
User pages must be marked as non-executable when running under SMEP;
otherwise, fetching the page first and then calling it will fail.
With this patch, all SMEP testcases in kvm-unit-tests now pass.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
4MB pages can use 40-bit addresses by putting the higher 8 bits in bits
20-13 of the PDE. Bit 21 is reserved.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
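A standalone sketch of the address assembly described above; the helper name is made up, and the bit layout follows the text (PDE bits 13-20 supply physical address bits 32-39, bit 21 is reserved):

    #include <stdint.h>

    /* Returns -1 if the reserved PDE bit 21 is set, else fills *paddr. */
    static int pse36_4mb_phys_addr(uint32_t pde, uint32_t vaddr, uint64_t *paddr)
    {
        if (pde & (1u << 21)) {
            return -1;                                   /* reserved bit -> #PF */
        }
        *paddr  = (uint64_t)(pde & 0xffc00000);          /* physical bits 31-22 */
        *paddr |= (uint64_t)((pde >> 13) & 0xff) << 32;  /* bits 39-32 from PDE 20-13 */
        *paddr |= vaddr & 0x3fffffu;                     /* offset within the 4MB page */
        return 0;
    }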
In large pages, bit 12 is for PAT, but bits starting at 13 are reserved.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
They can be moved to after the dirty bit processing, and unified between
CR0.PG=1 and CR0.PG=0.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The correct error code is 9 (present, reserved), not 8.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Given the simplifications to the code in the previous patches, this
is now very simple to do.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will help adding 1GB page support in the next patch.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Remove the tail of the PAE case, so that we can use "goto" in the
next patch to jump to the protection checks.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>