author     Linus Torvalds <torvalds@linux-foundation.org>   2014-07-21 11:19:18 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2014-07-21 11:19:18 -0700
commit     5b2b9d776102a82848b73964dd12fa41eb697e27 (patch)
tree       629ab0c1761e336dc6051552fbe16b4c7fbefe76 /arch/x86
parent     80d6191ea7e96017ab607843c63a215d5f2f8a58 (diff)
parent     bb18b526a9d8d4a3fe56f234d5013b9f6036978d (diff)
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"These are mostly PPC changes for 3.16-new things. However, there is
an x86 change too and it is a regression from 3.14. As it only
affects nested virtualization and there were other changes in this
area in 3.16, I am not nominating it for 3.15-stable"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: Check for nested events if there is an injectable interrupt
KVM: PPC: RTAS: Do byte swaps explicitly
KVM: PPC: Book3S PR: Fix ABIv2 on LE
KVM: PPC: Assembly functions exported to modules need _GLOBAL_TOC()
PPC: Add _GLOBAL_TOC for 32bit
KVM: PPC: BOOK3S: HV: Use base page size when comparing against slb value
KVM: PPC: Book3E: Unlock mmu_lock when setting caching attribute
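The RTAS fix in the shortlog above ("KVM: PPC: RTAS: Do byte swaps explicitly") concerns big-endian guest data being accessed from a little-endian host. As a rough illustration of the general technique, converting each value explicitly at the point of use rather than byte-swapping a shared buffer in place, here is a minimal userspace sketch; the helper and argument layout below are illustrative stand-ins, not the kernel's code:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Portable stand-in for the kernel's be32_to_cpu() helper. */
static uint32_t be32_to_cpu(uint32_t x)
{
	const union { uint16_t v; uint8_t b[2]; } probe = { .v = 1 };
	if (probe.b[0] == 0)	/* big-endian host: no swap needed */
		return x;
	return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
	       ((x << 8) & 0x00ff0000u) | (x << 24);
}

int main(void)
{
	/* Two 32-bit arguments laid out as a big-endian guest stores them. */
	const uint8_t raw[8] = { 0x00, 0x00, 0x00, 0x2a,   /* 42  */
				 0x00, 0x00, 0x01, 0x00 }; /* 256 */
	uint32_t args[2];
	memcpy(args, raw, sizeof(args));

	/* Convert explicitly at each use; the buffer itself stays
	 * big-endian, so no reader ever observes a half-swapped value. */
	printf("arg0 = %u\n", be32_to_cpu(args[0]));
	printf("arg1 = %u\n", be32_to_cpu(args[1]));
	return 0;
}
```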
Diffstat (limited to 'arch/x86')
-rw-r--r--  arch/x86/kvm/x86.c | 12 ++++++++++++
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f644933..ef432f8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5887,6 +5887,18 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
 			kvm_x86_ops->set_nmi(vcpu);
 		}
 	} else if (kvm_cpu_has_injectable_intr(vcpu)) {
+		/*
+		 * Because interrupts can be injected asynchronously, we are
+		 * calling check_nested_events again here to avoid a race condition.
+		 * See https://lkml.org/lkml/2014/7/2/60 for discussion about this
+		 * proposal and current concerns.  Perhaps we should be setting
+		 * KVM_REQ_EVENT only on certain events and not unconditionally?
+		 */
+		if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
+			r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
+			if (r != 0)
+				return r;
+		}
 		if (kvm_x86_ops->interrupt_allowed(vcpu)) {
 			kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu),
 					    false);
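The hunk above calls check_nested_events() a second time, immediately before injecting, because an interrupt can become injectable asynchronously after the earlier check; without the re-check, L2 could receive an interrupt that should first cause an exit to L1. A minimal single-threaded model of that control flow (struct vcpu and the helpers below are hypothetical stand-ins for the KVM internals, not the real types):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the KVM state involved in the fix. */
struct vcpu {
	bool guest_mode;           /* vCPU is running a nested guest (L2) */
	bool pending_nested_event; /* L1 must regain control first        */
	bool injectable_intr;      /* an interrupt arrived asynchronously */
};

/* Mirrors check_nested_events() returning nonzero when the caller must
 * bail out and let L1 handle a pending event before anything is injected. */
static int check_nested_events(struct vcpu *v)
{
	if (v->pending_nested_event) {
		printf("exit to L1 before injecting\n");
		v->pending_nested_event = false;
		return -1;
	}
	return 0;
}

static int inject_pending_event(struct vcpu *v)
{
	if (v->injectable_intr) {
		/* The interrupt may have become injectable *after* the last
		 * nested-event check, so check again here; otherwise the
		 * interrupt would be delivered to L2 while L1 should run. */
		if (v->guest_mode && check_nested_events(v))
			return -1;
		printf("inject interrupt into guest\n");
		v->injectable_intr = false;
	}
	return 0;
}

int main(void)
{
	struct vcpu v = { .guest_mode = true,
			  .pending_nested_event = true,
			  .injectable_intr = true };
	inject_pending_event(&v); /* defers: L1 gets the nested event */
	inject_pending_event(&v); /* now the interrupt is injected    */
	return 0;
}
```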