Commit message  Author  Age  Files  Lines
...
| | * | x86, apicv: add virtual interrupt delivery supportYang Zhang2013-01-2914-40/+407
| | | | Virtual interrupt delivery avoids the need for KVM to inject vAPIC
| | | | interrupts manually; the hardware now takes care of that fully. This
| | | | requires some special awareness in the existing interrupt injection
| | | | path:
| | | | - For a pending interrupt, instead of injecting it directly, we may
| | | |   need to update architecture-specific indicators before resuming
| | | |   the guest.
| | | | - A pending interrupt that is masked by the ISR should also be
| | | |   considered in the update above, since the hardware will decide
| | | |   when to inject it at the right time.
| | | | The current has_interrupt and get_interrupt only return a valid
| | | | vector from the injection point of view.
| | | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | | Signed-off-by: Kevin Tian <kevin.tian@intel.com>
| | | | Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
| | | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
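A minimal sketch of the indicator update described above, assuming a
hwapic_irr_update-style vendor callback (the callback name follows the
patch's direction but is an assumption here; the real wiring is elided):

    /* Before resuming the guest, publish the highest pending IRR
     * vector to the hardware instead of injecting it directly; the
     * CPU then delivers it when guest interrupt state allows. */
    static void sync_pending_irq_to_hw(struct kvm_vcpu *vcpu)
    {
            int max_irr = kvm_lapic_find_highest_irr(vcpu); /* -1 if none */

            if (kvm_x86_ops->hwapic_irr_update)
                    kvm_x86_ops->hwapic_irr_update(vcpu, max_irr);
    }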
| | * | x86, apicv: add virtual x2apic supportYang Zhang2013-01-296-29/+201
| | | | Basically, to benefit from APICv we need to enable virtualized x2apic
| | | | mode. Currently we only enable it when the guest is actually using
| | | | x2apic. Also, clear the MSR bitmap for the corresponding x2apic MSRs
| | | | once the guest has enabled x2apic:
| | | | 0x800 - 0x8ff: no read intercept for APICv register virtualization,
| | | | except APIC ID and TMCCT, which need software assistance to return
| | | | the right value.
| | | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | | Signed-off-by: Kevin Tian <kevin.tian@intel.com>
| | | | Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
| | | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
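A sketch of the bitmap change, assuming per-MSR read-intercept helpers of
the kind this patch introduces (helper names are approximate):

    int msr;

    /* x2APIC MSRs occupy 0x800 - 0x8ff; with register virtualization
     * the hardware can satisfy reads from the virtual APIC page. */
    for (msr = 0x800; msr <= 0x8ff; msr++)
            vmx_disable_intercept_msr_read_x2apic(msr);

    /* APIC ID (0x802) and TMCCT (0x839) still need software help to
     * return the right value, so keep intercepting their reads. */
    vmx_enable_intercept_msr_read_x2apic(0x802);
    vmx_enable_intercept_msr_read_x2apic(0x839);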
| | * | x86, apicv: add APICv register virtualization supportYang Zhang2013-01-294-1/+51
| | | | - APIC reads don't cause a VM-Exit.
| | | | - APIC writes become trap-like.
| | | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | | Signed-off-by: Kevin Tian <kevin.tian@intel.com>
| | | | Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
| | | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | * | kvm: Obey read-only mappings in iommuAlex Williamson2013-01-271-1/+3
| | | | We've been ignoring read-only mappings and programming everything
| | | | into the iommu as read-write. Fix this to only include the write
| | | | access flag when read-only is not set.
| | | | Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
| | | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
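A sketch of the flag logic this describes, using the kernel's iommu_map()
API (the surrounding per-page loop and error handling are elided):

    /* Derive IOMMU protection flags from the memslot instead of
     * hardcoding read-write. */
    int flags = IOMMU_READ;

    if (!(slot->flags & KVM_MEM_READONLY))
            flags |= IOMMU_WRITE;

    r = iommu_map(domain, gfn_to_gpa(gfn), pfn_to_hpa(pfn),
                  page_size, flags);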
| | * | kvm: Force IOMMU remapping on memory slot read-only flag changesAlex Williamson2013-01-271-4/+24
| | | | Memory slot flags can be altered without changing other parameters
| | | | of the slot. The read-only attribute is the only one the IOMMU cares
| | | | about, so generate an unmap and re-map when it changes. This also
| | | | avoids unnecessarily re-mapping the slot when no IOMMU-visible
| | | | changes are made.
| | | | Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
| | | | Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
| | | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
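A sketch of the check, assuming old/new hold the before and after slot
state during a flags-only update (the error label is an assumption):

    /* Remap through the IOMMU only if the one bit it cares about
     * actually changed. */
    if ((old.flags ^ new.flags) & KVM_MEM_READONLY) {
            kvm_iommu_unmap_pages(kvm, &old);
            r = kvm_iommu_map_pages(kvm, &new);
            if (r)
                    goto out;       /* assumed caller error path */
    }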
| | * | KVM: x86 emulator: fix test_cc() build failure on i386Avi Kivity2013-01-271-1/+1
| | | | 'pushq' doesn't exist on i386. Replace it with 'push', which should
| | | | work since the operand is a register.
| | | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
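The asm in question, roughly (a sketch of the fixed test_cc(); rc, fop
and flags are the surrounding function's locals, and the fastop plumbing
is elided):

    /* 'push' assembles to the natural word size on both i386 and
     * x86_64; 'pushq' only exists on 64-bit. The operand is a
     * register, so no size suffix is required. */
    asm("push %[flags]; popf; call *%[fastop]"
        : "=a"(rc)                      /* condition result in %al */
        : [fastop]"r"(fop), [flags]"r"(flags));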
| * | | KVM: PPC: E500: Remove kvmppc_e500_tlbil_all usage from guest TLB codeAlexander Graf2013-01-241-4/+4
| | | | The guest TLB handling code should not have any insight into how the
| | | | host TLB shadow code works. kvmppc_e500_tlbil_all() is used to
| | | | distinguish between e500v2 and e500mc (E.HV) in how shadow entries
| | | | are flushed, and is really private between the e500.c/e500mc.c files
| | | | and e500_mmu_host.c. Instead of it, use the public
| | | | kvmppc_core_flush_tlb() function to flush all shadow TLB entries. As
| | | | a nice side effect, with this we also end up flushing TLB1 entries,
| | | | which we forgot to do before.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
| * | | KVM: PPC: E500: Make clear_tlb_refs and clear_tlb1_bitmap staticAlexander Graf2013-01-243-8/+4
| | | | Host shadow TLB flushing is logic that the guest TLB code should
| | | | have no insight into. Declare the internal clear_tlb_refs and
| | | | clear_tlb1_bitmap functions static to the host TLB handling file.
| | | | Instead of these, we can use the already exported
| | | | kvmppc_core_flush_tlb(). This gives us a common API across the board
| | | | to say "please flush any pending host shadow translation".
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
| * | | KVM: PPC: e500: Implement TLB1-in-TLB0 mappingAlexander Graf2013-01-242-19/+47
| | | | Today, when a host mapping fault happens in a guest TLB1 entry, we
| | | | map the translated guest entry into the host's TLB1. This isn't
| | | | particularly clever when the guest is mapped by normal 4k pages,
| | | | since those are far better placed in TLB0 instead. This patch adds
| | | | the logic required to map 4k TLB1 shadow maps into the host's TLB0.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
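A sketch of the selection this implies (variable and constant names are
assumptions modeled on the e500 MMU code, not the exact patch):

    /* Pick the host TLB to shadow into: a guest TLB1 entry backed by
     * ordinary 4k host pages goes into the set-associative host TLB0;
     * only genuinely large mappings consume a scarce TLB1 slot. */
    if (tsize == BOOK3E_PAGESZ_4K)
            stlbsel = 0;    /* host TLB0 */
    else
            stlbsel = 1;    /* host TLB1 */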
| * | | KVM: PPC: E500: Split host and guest MMU partsAlexander Graf2013-01-244-624/+704
| | | | This patch splits the file e500_tlb.c into e500_mmu.c (guest TLB
| | | | handling) and e500_mmu_host.c (host TLB handling). The main benefit
| | | | of this split is readability and maintainability. It's just a lot
| | | | harder to write dirty code :).
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
| * | | KVM: PPC: e500: Call kvmppc_mmu_map for initial mappingAlexander Graf2013-01-241-31/+7
| | | | When emulating tlbwe, we want to automatically map the entry that
| | | | just got written into our shadow TLB map, because chances are quite
| | | | high that it's going to be used very soon. Today this happens
| | | | explicitly, duplicating all the logic that is in kvmppc_mmu_map()
| | | | already. Just call that one instead.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
| * | | KVM: PPC: E500: Propagate errors when shadow mappingAlexander Graf2013-01-241-28/+41
| | | | When shadow mapping a page, the mapping can fail, in which case we
| | | | don't have a shadow map. Take this case into account; otherwise we
| | | | might end up writing bogus TLB entries into the host TLB. While at
| | | | it, also move the write_stlbe() calls into the respective TLBn
| | | | handlers.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
| * | | KVM: PPC: E500: Explicitly mark shadow maps invalidAlexander Graf2013-01-241-3/+10
| | | | When we invalidate shadow TLB maps on the host, we don't mark them
| | | | as invalid, but we should. Fix this by removing E500_TLB_VALID from
| | | | their flags when invalidating.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
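The essence of the fix, as a one-line sketch (ref is assumed to be the
shadow map's tlbe_ref being invalidated):

    /* Clear the valid bit so the ref no longer reads as a live
     * translation. */
    ref->flags &= ~E500_TLB_VALID;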
| * | | KVM: PPC: E500: Move write_stlbe higherAlexander Graf2013-01-241-16/+16
| |/ /
| | | Later patches want to call the function, and it doesn't have
| | | dependencies on anything below write_host_tlbe. Move it higher up
| | | in the file.
| | | Signed-off-by: Alexander Graf <agraf@suse.de>
| * | KVM: VMX: set vmx->emulation_required only when needed.Gleb Natapov2013-01-241-7/+12
| | | If emulate_invalid_guest_state=false, vmx->emulation_required is
| | | never actually used, but it always ends up set to true, since
| | | handle_invalid_guest_state(), the only place it is reset back to
| | | false, is never called. Besides being unclean, this makes the
| | | vmexit and vmentry paths check emulate_invalid_guest_state
| | | needlessly. The patch fixes that by keeping emulation_required
| | | coherent with the emulate_invalid_guest_state setting.
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
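A sketch of the coherent form (guest_state_valid() and the module
parameter exist in vmx.c; treating this exact helper shape as the
patch's is an assumption):

    /* Compute emulation_required instead of hardwiring it to true. */
    static bool emulation_required(struct kvm_vcpu *vcpu)
    {
            return emulate_invalid_guest_state &&
                   !guest_state_valid(vcpu);
    }

    /* ...then, wherever segment or mode state changes: */
    to_vmx(vcpu)->emulation_required = emulation_required(vcpu);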
| * | KVM: x86: fix use of uninitialized memory as segment descriptor in emulator.Gleb Natapov2013-01-241-1/+3
| | | If VMX reports a segment as unusable, zero the descriptor passed by
| | | the emulator before returning. Such a descriptor will be considered
| | | not present by the emulator.
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
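A sketch of the fix in the emulator's get_segment callback (placement
in emulator_get_segment() is an assumption based on the message):

    kvm_get_segment(vcpu, &var, seg);
    if (var.unusable) {
            /* A zeroed descriptor reads as "not present". */
            memset(desc, 0, sizeof(*desc));
            return false;
    }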
| * | KVM: VMX: rename fix_pmode_dataseg to fix_pmode_seg.Gleb Natapov2013-01-241-7/+7
| | | The function deals with the code segment too.
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: VMX: don't clobber segment AR of unusable segments.Gleb Natapov2013-01-241-2/+0
| | | Usability is returned in the unusable field, so there is no need to
| | | clobber the entire AR. Callers already have to know how to deal
| | | with unusable segments, since AR is not zeroed when
| | | emulate_invalid_guest_state=true.
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: VMX: skip vmx->rmode.vm86_active check on cr0 write if unrestricted guest is enabledGleb Natapov2013-01-241-8/+6
| | | vmx->rmode.vm86_active is never true if unrestricted guest is
| | | enabled. Make it more explicit that neither enter_pmode() nor
| | | enter_rmode() is called in this case.
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: VMX: remove hack that disables emulation on vcpu reset/initGleb Natapov2013-01-241-3/+0
| | | There is no reason for it. If the state is suitable for vmentry, it
| | | will be detected during guest entry and no emulation will happen.
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: VMX: if unrestricted guest is enabled vcpu state is always valid.Gleb Natapov2013-01-241-0/+3
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: VMX: reset CPL only on CS register write.Gleb Natapov2013-01-241-1/+2
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: VMX: remove special CPL cache access during transition to real mode.Gleb Natapov2013-01-241-8/+4
| | | Since vmx_get_cpl() always returns 0 when the VCPU is in real mode,
| | | it is no longer needed. Also reset the CPL cache to zero during the
| | | transition to protected mode, since the transition may happen while
| | | CS.selector & 3 != 0 but in reality CPL is 0.
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86 emulator: convert a few freestanding emulations to fastopAvi Kivity2013-01-231-3/+3
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86 emulator: rearrange fastop definitionsAvi Kivity2013-01-231-35/+35
| | | Make fastop opcodes usable in other emulations.
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86 emulator: convert 2-operand IMUL to fastopAvi Kivity2013-01-231-8/+6
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86 emulator: convert BT/BTS/BTR/BTC/BSF/BSR to fastopAvi Kivity2013-01-231-50/+26
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86 emulator: convert INC/DEC to fastopAvi Kivity2013-01-231-17/+7
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86 emulator: convert SETCC to fastopAvi Kivity2013-01-231-31/+29
| | | This is a bit of a special case, since we don't have the usual
| | | byte/word/long/quad switch; instead we switch on the condition code
| | | embedded in the instruction.
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
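A sketch of the idea (em_setcc as the name of the stub table and the
exact macro shape are assumptions patterned on the emulator's fastop
style):

    /* One stub per condition code, each padded to 4 bytes: the
     * setCC instruction (3 bytes) plus ret (1 byte). */
    #define FOP_SETCC(op) \
            ".align 4 \n\t" #op ": " #op " %al \n\t" "ret \n\t"

    /* Dispatch: index the stub table by the opcode's condition
     * bits instead of by operand size. */
    void (*fop)(void) = (void *)em_setcc + 4 * (ctxt->b & 0x0f);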
| * | KVM: x86 emulator: convert shift/rotate instructions to fastopAvi Kivity2013-01-231-41/+31
| | | SHL, SHR, ROL, ROR, RCL, RCR, SAR, SAL
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86 emulator: Convert SHLD, SHRD to fastopAvi Kivity2013-01-231-12/+21
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Avi Kivity <avi.kivity@gmail.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86: improve reexecute_instructionXiao Guangrong2013-01-213-11/+45
| | | The current reexecute_instruction cannot reliably detect failed
| | | instruction emulation: it allows the guest to retry all
| | | instructions except those that access an error pfn. For example,
| | | some cases are nested write protection: if the page we want to
| | | write is used as a PDE but chains to itself, we should stop the
| | | emulation and report the case to userspace.
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86: let reexecute_instruction work for tdpXiao Guangrong2013-01-211-18/+43
| | | Currently, reexecute_instruction refuses to retry any instruction
| | | if tdp is enabled. But if nested npt is used, the emulation may be
| | | caused by a shadow page, which can be fixed by dropping that shadow
| | | page. The only condition under which tdp cannot retry the
| | | instruction is an access fault on an error pfn.
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: x86: clean up reexecute_instructionXiao Guangrong2013-01-211-7/+6
| | | A little cleanup of reexecute_instruction; also use gpa_to_gfn in
| | | retry_instruction.
| | | Reviewed-by: Gleb Natapov <gleb@redhat.com>
| | | Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
| | | Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * | KVM: set_memory_region: Remove unnecessary variable memslotTakuya Yoshikawa2013-01-171-6/+5
| | | One such variable, slot, is enough for holding a pointer
| | | temporarily. We also remove another local variable named slot,
| | | which is scoped to a block, since it is confusing to have the same
| | | name twice in this function.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: set_memory_region: Don't check for overlaps unless we create or move a slotTakuya Yoshikawa2013-01-171-8/+10
| | | The check is not needed when deleting an existing slot or just
| | | modifying its flags.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: set_memory_region: Don't jump to out_free unnecessarilyTakuya Yoshikawa2013-01-171-4/+3
| | | This makes the separation between the sanity checks and the rest of
| | | the code a bit clearer.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: s390: kvm/sigp.c: fix memory leakageCong Ding2013-01-171-1/+3
| | | The variable inti should be freed in the CPUSTAT_STOPPED branch.
| | | Signed-off-by: Cong Ding <dinggnu@gmail.com>
| | | Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes a long timeTakuya Yoshikawa2013-01-141-0/+5
| | | If userspace starts dirty logging for a large slot, say 64GB of
| | | memory, kvm_mmu_slot_remove_write_access() needs to hold mmu_lock
| | | for a long time, such as tens of milliseconds. This patch controls
| | | the lock hold time by asking the scheduler if we need to reschedule
| | | for others.
| | | One penalty of this is that we need to flush TLBs before releasing
| | | mmu_lock. But since holding mmu_lock for a long time affects not
| | | only the guest (the vCPU threads, in other words) but also the host
| | | as a whole, we should pay for that.
| | | In practice, the cost will not be so high, because we can protect a
| | | fair amount of memory before being rescheduled: on my test
| | | environment, cond_resched_lock() was called only once for
| | | protecting 12GB of memory, even without THP. We can also revisit
| | | Avi's "unlocked TLB flush" work later to completely suppress the
| | | extra TLB flushes if needed.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
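A minimal sketch of the yield point inside the long rmap walk, using the
kernel primitives the message names (its exact placement in the loop is
an assumption):

    /* Yield mmu_lock periodically. TLBs must be flushed before the
     * lock is dropped, otherwise a vCPU could keep writing through a
     * stale writable translation. */
    if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
            kvm_flush_remote_tlbs(kvm);
            cond_resched_lock(&kvm->mmu_lock);
    }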
| * | KVM: Make kvm_mmu_slot_remove_write_access() take mmu_lock by itselfTakuya Yoshikawa2013-01-142-4/+4
| | | Better to place the mmu_lock handling and TLB flushing code
| | | together, since this is a self-contained function.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: Make kvm_mmu_change_mmu_pages() take mmu_lock by itselfTakuya Yoshikawa2013-01-142-5/+8
| | | There is no reason to make callers take mmu_lock, since we do not
| | | need to protect kvm_mmu_change_mmu_pages() and
| | | kvm_mmu_slot_remove_write_access() together with mmu_lock in
| | | kvm_arch_commit_memory_region(): the former calls
| | | kvm_mmu_commit_zap_page() and flushes TLBs by itself.
| | | Note: we do not need to protect kvm->arch.n_requested_mmu_pages
| | | with mmu_lock, as can be seen from the fact that it is read
| | | locklessly.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: Remove unused slot_bitmap from kvm_mmu_pageTakuya Yoshikawa2013-01-143-22/+0
| | | Not needed any more.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: MMU: Make kvm_mmu_slot_remove_write_access() rmap basedTakuya Yoshikawa2013-01-141-13/+15
| | | This makes it possible to release mmu_lock and reschedule
| | | conditionally in a later patch. Although this may increase the time
| | | needed to protect the whole slot when we start dirty logging, the
| | | kernel should not allow userspace to trigger something that will
| | | hold a spinlock for as long as tens of milliseconds; in fact there
| | | is no limit, since the time is roughly proportional to the number
| | | of guest pages.
| | | Another point to note is that this patch removes the only user of
| | | slot_bitmap, which would otherwise cause problems when we increase
| | | the number of slots further.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: MMU: Remove unused parameter level from __rmap_write_protect()Takuya Yoshikawa2013-01-141-3/+3
| | | We no longer need to care about the mapping level in this function.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
| * | KVM: Write protect the updated slot only when dirty logging is enabledTakuya Yoshikawa2013-01-142-2/+7
| | | Calling kvm_mmu_slot_remove_write_access() for a deleted slot does
| | | nothing but search for non-existent mmu pages which have mappings
| | | to that deleted memory; this is safe but a waste of time. Since we
| | | want to make the function rmap based in a later patch, in a manner
| | | that makes it unsafe to call for a deleted slot, we make the caller
| | | check that the slot is non-zero and being dirty logged.
| | | Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
| | | Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
| | | Signed-off-by: Gleb Natapov <gleb@redhat.com>
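A sketch of the caller-side guard, assuming npages holds the updated
slot's page count (zero for a deleted slot):

    /* Write protect only a live slot that has dirty logging on. */
    if (npages && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
            kvm_mmu_slot_remove_write_access(kvm, mem->slot);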
| * | Merge branch 'kvm-ppc-next' of https://github.com/agraf/linux-2.6 into queueGleb Natapov2013-01-1412-11/+153
| |\ \
| | * | KVM: PPC: BookE: Add EPR ONE_REG syncAlexander Graf2013-01-103-1/+27
| | | | We need to be able to read and write the contents of the EPR
| | | | register from user space. This patch implements that logic through
| | | | the ONE_REG API and declares its (never implemented) SREGS
| | | | counterpart deprecated.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
| | * | KVM: PPC: BookE: Implement EPR exitAlexander Graf2013-01-107-3/+79
| | | | The External Proxy Facility in FSL BookE chips allows the interrupt
| | | | controller to automatically acknowledge an interrupt as soon as a
| | | | core gets its pending external interrupt delivered. Today, user
| | | | space implements the interrupt controller, so we need to check with
| | | | it during such a cycle.
| | | | This patch implements the logic for user space to enable EPR
| | | | exiting, to disable EPR exiting, and EPR exiting itself, so that
| | | | user space can acknowledge an interrupt when an external interrupt
| | | | has successfully been delivered into the guest vcpu.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
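A sketch of the user-space side this enables, using the kvm_run exit
this patch defines (mpic_iack() is a hypothetical hook into the emulated
interrupt controller):

    /* Inside userspace's vcpu run loop, after ioctl(vcpu_fd, KVM_RUN): */
    switch (run->exit_reason) {
    case KVM_EXIT_EPR:
            /* Ask the emulated controller which vector to acknowledge,
             * then hand it back to the kernel via the shared run area. */
            run->epr.epr = mpic_iack(mpic);
            break;
    }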
| | * | KVM: PPC: BookE: Emulate mfspr on EPRAlexander Graf2013-01-101-0/+3
| | | | The EPR register is potentially valid for PR KVM as well, so we
| | | | need to emulate accesses to it. It's only defined for reading, so
| | | | only handle the mfspr case.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>
| | * | KVM: PPC: BookE: Allow irq deliveries to inject requestsAlexander Graf2013-01-101-0/+5
| | | | When injecting an interrupt into guest context, we usually don't
| | | | need to check for requests anymore. At least, we didn't until
| | | | today. With the introduction of EPR, we will have to create a
| | | | request when the guest has successfully accepted an external
| | | | interrupt. So we need to prepare the interrupt delivery to abort
| | | | guest entry gracefully; otherwise we'd delay the EPR request.
| | | | Signed-off-by: Alexander Graf <agraf@suse.de>