path: root/arch
Commit message | Author | Age | Files | Lines
* KVM: MMU: delay flush all tlbs on sync_page path (Xiao Guangrong, 2011-01-12, 1 file, -2/+10)

  Quote from Avi:
  | I don't think we need to flush immediately; set a "tlb dirty" bit somewhere
  | that is cleared when we flush the tlb. kvm_mmu_notifier_invalidate_page()
  | can consult the bit and force a flush if set.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
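  A minimal sketch of the deferred flush, assuming a tlbs_dirty counter in
  struct kvm; the two helpers around it are illustrative, not the exact
  upstream functions:

      /* sync_page path: skip the immediate flush, note the debt */
      static void drop_spte_no_flush(struct kvm *kvm)
      {
              kvm->tlbs_dirty++;      /* a stale translation may be cached */
      }

      /* mmu notifier path: consult the bit, force a flush if set */
      static void invalidate_page_sketch(struct kvm *kvm, int need_flush)
      {
              if (need_flush || kvm->tlbs_dirty)
                      kvm_flush_remote_tlbs(kvm);  /* also clears the debt */
      }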
* KVM: MMU: abstract invalid guest pte mapping (Xiao Guangrong, 2011-01-12, 2 files, -37/+37)

  Introduce a common function to map an invalid gpte.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: remove 'clear_unsync' parameter (Xiao Guangrong, 2011-01-12, 3 files, -8/+7)

  Remove it, since the same information is available from sp->unsync.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: rename 'reset_host_protection' to 'host_writable' (Lai Jiangshan, 2011-01-12, 2 files, -9/+9)

  Rename it to better reflect its meaning.

  Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: don't drop spte when overwriting it from W to RO (Xiao Guangrong, 2011-01-12, 1 file, -11/+9)

  We only need to flush the tlb when overwriting a writable spte with a
  read-only one, and this operation should move into set_spte() for the
  sync_page path.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
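  A sketch of the check this moves into set_spte(); helper names are
  illustrative:

      static void set_spte_sketch(struct kvm *kvm, u64 *sptep, u64 new_spte)
      {
              u64 old_spte = *sptep;

              *sptep = new_spte;  /* overwrite instead of drop + refault */
              if (is_writable_pte(old_spte) && !is_writable_pte(new_spte))
                      kvm_flush_remote_tlbs(kvm); /* stale W copy possible */
      }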
* KVM: MMU: fix forgotten tlb flush on sync_page path (Xiao Guangrong, 2011-01-12, 1 file, -0/+1)

  We should flush all tlbs after dropping an spte on the sync_page path,
  since:

  Quote from Avi:
  | sync_page
  | drop_spte
  |                       kvm_mmu_notifier_invalidate_page
  |                         kvm_unmap_rmapp
  |                           spte doesn't exist -> no flush
  |                         page is freed
  |                       guest can write into freed page?

  KVM-Stable-Tag.
  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Add instruction-set-specific exit qualifications to kvm_exit trace (Avi Kivity, 2011-01-12, 4 files, -2/+25)

  The exit reason alone is insufficient to understand exactly why an exit
  occurred; add ISA-specific trace parameters for additional information.

  Because fetching these parameters is expensive on vmx, and because these
  parameters are fetched even if tracing is disabled, we fetch the
  parameters via a callback instead of as traditional trace arguments.

  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Record instruction set in kvm_exit tracepoint (Avi Kivity, 2011-01-12, 3 files, -4/+9)

  exit_reason's meaning depends on the instruction set; record it so a
  trace taken on one machine can be interpreted on another.

  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: VMX: Fold __vmx_vcpu_run() into vmx_vcpu_run() (Avi Kivity, 2011-01-12, 1 file, -38/+25)

  cea15c2 ("KVM: Move KVM context switch into own function") split
  vmx_vcpu_run() to prevent multiple copies of the context switch from
  being generated (causing problems due to a label). This patch folds them
  back together again and adds the __noclone attribute to prevent the
  label from being duplicated.

  Signed-off-by: Avi Kivity <avi@redhat.com>
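  A sketch of the folded shape, assuming __noclone carries the weight the
  split used to; the label name and asm body are illustrative:

      static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
      {
              /* __noclone: gcc must not duplicate this body, or the
               * asm's global return label would be defined twice */
              asm volatile(
                      /* ... save host state, enter guest ... */
                      ".globl kvm_vmx_return \n\t"
                      "kvm_vmx_return: "
                      /* ... restore host state ... */
                      : : : "cc", "memory");
      }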
* KVM: x86 emulator: do not perform address calculations on linear addresses (Avi Kivity, 2011-01-12, 1 file, -1/+2)

  Linear addresses are supposed to already have segment checks performed
  on them; if we play with these addresses the checks become invalid.

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: x86 emulator: preserve an operand's segment identity (Avi Kivity, 2011-01-12, 2 files, -52/+59)

  Currently the x86 emulator converts the segment register associated
  with an operand into a segment base which is added into the operand
  address. This loss of information results in us not doing segment limit
  checks properly.

  Replace struct operand's addr.mem field by a segmented_address
  structure which holds both the effective address and segment. This will
  allow us to do the limit check at the point of access.

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
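  A sketch of the replacement field, following the names in the message;
  the exact types are illustrative:

      /* replaces operand.addr.mem, so the limit check can be done
       * against the right segment at the point of access */
      struct segmented_address {
              unsigned long ea;   /* effective address */
              unsigned seg;       /* segment register index */
      };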
* KVM: x86 emulator: drop DPRINTF() (Avi Kivity, 2011-01-12, 1 file, -6/+1)

  Failed emulation is reported via a tracepoint; the cmps printk is
  pointless.

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: x86 emulator: drop unused #ifndef __KERNEL__ (Avi Kivity, 2011-01-12, 1 file, -7/+0)

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: VMX: Inform user about INTEL_TXT dependency (Shane Wang, 2011-01-12, 1 file, -1/+4)

  Inform the user to either disable TXT in the BIOS or do a TXT launch
  with tboot before enabling KVM, since some BIOSes do not set the
  FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX bit when TXT is enabled.

  Signed-off-by: Shane Wang <shane.wang@intel.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: don't mark spte notrap if reserved bit set (Xiao Guangrong, 2011-01-12, 1 file, -6/+11)

  If a reserved bit is set, we need to inject the #PF with PFEC.RSVD=1,
  but shadow_notrap_nonpresent_pte can only inject a #PF with
  PFEC.RSVD=0.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Mask KVM_GET_SUPPORTED_CPUID data with Linux cpuid info (Avi Kivity, 2011-01-12, 1 file, -0/+9)

  This allows Linux to mask cpuid bits if, for example, nx is enabled on
  only some cpus.

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
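  A sketch of the masking, using NX (CPUID 0x80000001.EDX bit 20) as the
  example; the surrounding plumbing is illustrative:

      static void mask_nx_sketch(struct kvm_cpuid_entry2 *entry)
      {
              if (!boot_cpu_has(X86_FEATURE_NX))
                      entry->edx &= ~(1u << 20);  /* hide NX from guest */
      }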
* KVM: SVM: Replace svm_has() by standard Linux cpuid accessors (Avi Kivity, 2011-01-12, 1 file, -10/+5)

  Instead of querying cpuid directly, use the Linux accessors
  (boot_cpu_has, etc.). This allows things like the clearcpuid kernel
  command line to work (once it is fixed wrt scattered cpuid bits).

  Acked-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: fix apf prefault if nested guest is enabled (Xiao Guangrong, 2011-01-12, 3 files, -1/+4)

  If an apf is generated in the L2 guest and completed in the L1 guest,
  it will be prefaulted in the L1 guest's mmu context.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: support apf for nonpaging guest (Xiao Guangrong, 2011-01-12, 1 file, -3/+9)

  Let's support apf for nonpaging guests.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: clear apfs if page state is changed (Xiao Guangrong, 2011-01-12, 1 file, -0/+3)

  If CR0.PG is changed, the page fault cannot be avoided when the
  prefaulted address is accessed later.

  This also fixes a bug: a #PF taken while paging was enabled could be
  retried in a paging-disabled context if the mmu is in shadow-page mode.

  This idea is from Gleb Natapov.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: fix missing post sync audit (Xiao Guangrong, 2011-01-12, 1 file, -0/+1)

  Add an AUDIT_POST_SYNC audit for long mode shadow pages.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Clean up vm creation and release (Jan Kiszka, 2011-01-12, 5 files, -65/+22)

  IA64 support forces us to abstract the allocation of the kvm structure.
  But instead of mixing this up with arch-specific initialization and
  doing the same on destruction, split both steps. This allows moving
  generic destruction calls into generic code. It also fixes error
  clean-up on failures of kvm_create_vm for IA64.

  Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: x86: Makefile clean up (Tracey Dent, 2011-01-12, 1 file, -1/+1)

  Change the Makefile to use the ccflags-y option instead of
  EXTRA_CFLAGS.

  Signed-off-by: Tracey Dent <tdent48227@gmail.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: remove unused function declaration (Xiao Guangrong, 2011-01-12, 1 file, -1/+0)

  Remove the declaration of kvm_mmu_set_base_ptes().

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: VMX: Disallow NMI while blocked by STI (Avi Kivity, 2011-01-12, 1 file, -1/+6)

  While not mandated by the spec, Linux relies on NMI being blocked by an
  IF-enabling STI. VMX also refuses to enter a guest in this state, at
  least on some implementations.

  Disallow NMI while blocked by STI by checking for the condition, and
  requesting an interrupt window exit if it occurs.

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
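  A sketch of the condition, using the real VMCS interruptibility bits;
  the surrounding plumbing is illustrative:

      static bool vmx_nmi_allowed_sketch(void)
      {
              u32 state = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);

              /* NMI must wait while the guest is in the STI/MOV SS shadow */
              return !(state &
                       (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
      }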
* KVM: avoid unnecessary wait for an async pf (Xiao Guangrong, 2011-01-12, 1 file, -0/+1)

  In the current code, async pf completion is checked outside of the wait
  context, like this:

      if (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
          !vcpu->arch.apf.halted)
              r = vcpu_enter_guest(vcpu);
      else {
              ......
              kvm_vcpu_block(vcpu)
               ^- waits until 'async_pf.done' is not empty
      }

      kvm_check_async_pf_completion(vcpu)
       ^- deletes entries from async_pf.done

  So, if we check async pf completion first, we can block in
  kvm_vcpu_block. Fix this by marking the vcpu as unhalted on the
  kvm_check_async_pf_completion() path.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Acked-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
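  A sketch of the fix, assuming completion simply marks the vcpu runnable
  again so the next loop iteration enters the guest (illustrative; the
  real change is on the async pf completion path):

      static void async_page_present_sketch(struct kvm_vcpu *vcpu)
      {
              /* an entry moved off async_pf.done; don't block again */
              vcpu->arch.apf.halted = false;
              vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
      }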
* KVM: fix searching async gfn in kvm_async_pf_gfn_slot (Xiao Guangrong, 2011-01-12, 1 file, -2/+2)

  Don't search later slots if the slot is empty.

  Acked-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: fix tracing kvm_try_async_get_page (Xiao Guangrong, 2011-01-12, 1 file, -1/+1)

  Tracing 'async' and *pfn is useless, since 'async' is always true and
  *pfn is always 'fault_pfn'.

  We can trace 'gva' and 'gfn' instead; that helps us see the life-cycle
  of an async_pf.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: replace vmalloc and memset with vzalloc (Takuya Yoshikawa, 2011-01-12, 1 file, -3/+1)

  Let's use the newly introduced vzalloc().

  Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
  Signed-off-by: Jesper Juhl <jj@chaosbits.net>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
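  The change, sketched as a tiny helper (the helper name is illustrative;
  vzalloc() and the vmalloc()/memset() pair are the real kernel APIs):

      #include <linux/vmalloc.h>

      static unsigned long *alloc_dirty_bitmap(size_t size)
      {
              /* was: p = vmalloc(size); if (p) memset(p, 0, size); */
              return vzalloc(size);  /* allocates and zeroes in one call */
      }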
* KVM: handle exit due to INVD in VMX (Gleb Natapov, 2011-01-12, 2 files, -0/+7)

  Currently the exit is unhandled, so the guest halts with an error if it
  tries to execute the INVD instruction. Call into the emulator when INVD
  is executed by a guest instead. The instruction is not needed by
  ordinary guests, but firmware (like OpenBIOS) uses it and fails.

  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
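  A sketch of such a handler; the emulate_instruction() argument list has
  changed across kernel versions, so treat the call shape as an
  assumption:

      static int handle_invd(struct kvm_vcpu *vcpu)
      {
              /* punt to the x86 emulator instead of failing the exit */
              return emulate_instruction(vcpu, 0, 0, 0) == EMULATE_DONE;
      }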
* KVM: x86: Avoid issuing wbinvd twice (Jan Kiszka, 2011-01-12, 1 file, -4/+6)

  Micro-optimization to avoid calling wbinvd twice on the CPU that has to
  emulate it. As we might be preempted between smp_call_function_many and
  the local wbinvd, the cache might be filled again, so that real work
  could be done uselessly.

  Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
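  One way to sketch the fix: keep preemption disabled across the IPI
  round and the local flush, so the local CPU is flushed exactly once
  (illustrative, not the exact upstream code; wbinvd_ipi is assumed to be
  a small helper that calls wbinvd() on the remote CPU):

      static void wbinvd_dirty_cpus(struct cpumask *dirty_mask)
      {
              int cpu = get_cpu();                  /* preemption off */

              cpumask_clear_cpu(cpu, dirty_mask);   /* self handled below */
              smp_call_function_many(dirty_mask, wbinvd_ipi, NULL, 1);
              wbinvd();                             /* local flush, once */
              put_cpu();
      }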
* KVM: pre-allocate one more dirty bitmap to avoid vmalloc() (Takuya Yoshikawa, 2011-01-12, 1 file, -11/+5)

  Currently x86's kvm_vm_ioctl_get_dirty_log() needs to allocate a bitmap
  by vmalloc() which will be used in the next logging, and this has been
  hurting VGA and live migration: vmalloc() consumes extra system time,
  triggers tlb flushes, etc.

  This patch resolves the issue by pre-allocating one more bitmap and
  switching between the two bitmaps during dirty logging.

  Performance improvement: I measured performance for the case of VGA
  updates by trace-cmd. The result was 1.5 times faster than the original
  one. In the case of live migration, the improvement ratio depends on
  the workload and the guest memory size. In general, the larger the
  memory size is, the more benefit we get.

  Note: this does not change other architectures' logic, but the
  allocation size becomes twice as large. This will increase the actual
  memory consumption only when the new size changes the number of pages
  allocated by vmalloc().

  Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
  Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
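  A sketch of the double buffering (field and helper names illustrative):

      struct memslot_dirty {
              unsigned long *dirty_bitmap;        /* being written now */
              unsigned long *dirty_bitmap_head;   /* pre-allocated spare */
      };

      static void switch_dirty_bitmap(struct memslot_dirty *s)
      {
              unsigned long *old = s->dirty_bitmap;

              s->dirty_bitmap = s->dirty_bitmap_head; /* log into spare */
              s->dirty_bitmap_head = old;             /* reused next time */
      }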
* KVM: propagate fault r/w information to gup(), allow read-only memory (Marcelo Tosatti, 2011-01-12, 2 files, -14/+26)

  As suggested by Andrea, pass the r/w error code to gup(), upgrading a
  read fault to writable if the host pte allows it.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: flush TLBs on writable -> read-only spte overwrite (Marcelo Tosatti, 2011-01-12, 1 file, -0/+10)

  This can happen in the following scenario:

      vcpu0                          vcpu1

                                     read fault
                                     gup(.write=0)
      gup(.write=1)
      reuse swap cache, no COW
      set writable spte
      use writable spte
                                     set read-only spte

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: remove kvm_mmu_set_base_ptes (Marcelo Tosatti, 2011-01-12, 2 files, -9/+1)

  Unused.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: VMX: remove setting of shadow_base_ptes for EPT (Marcelo Tosatti, 2011-01-12, 1 file, -3/+1)

  The EPT present/writable bits use the same position as normal pagetable
  bits. Since direct_map passes ACC_ALL to mmu_set_spte, thus always
  setting the writable bit on sptes, use the generic PT_PRESENT
  shadow_base_pte.

  Also pass present/writable error code information from EPT violation to
  the generic pagefault handler.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Avoid double interrupt injection with vapic (Avi Kivity, 2011-01-12, 1 file, -1/+2)

  After an interrupt injection, the PPR changes, and we have to reflect
  that into the vapic. This causes a KVM_REQ_EVENT to be set, which
  causes the whole interrupt injection routine to be run again
  (harmlessly).

  Optimize by only setting KVM_REQ_EVENT if the ppr was lowered;
  otherwise there is no chance that a new injection is needed.

  Signed-off-by: Avi Kivity <avi@redhat.com>
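  The guard, sketched; kvm_make_request() and KVM_REQ_EVENT are the real
  API, while the surrounding function is illustrative:

      static void apic_set_ppr_sketch(struct kvm_vcpu *vcpu,
                                      u32 old_ppr, u32 new_ppr)
      {
              /* only a lowered PPR can unmask a pending interrupt */
              if (new_ppr < old_ppr)
                      kvm_make_request(KVM_REQ_EVENT, vcpu);
      }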
* KVM: SVM: Fold save_host_msrs() and load_host_msrs() into their callers (Avi Kivity, 2011-01-12, 1 file, -20/+6)

  This abstraction only serves to obfuscate. Remove.

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: SVM: Move fs/gs/ldt save/restore to heavyweight exit path (Avi Kivity, 2011-01-12, 1 file, -14/+21)

  ldt is never used in the kernel context; the same goes for fs (x86_64)
  and gs (i386). So save/restore them in the heavyweight exit path
  instead of the lightweight path.

  By itself, this doesn't buy us much, but it paves the way for moving
  vmload and vmsave to the heavyweight exit path, since they modify the
  same registers.

  [jan: fix copy/paste mistake on i386]

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: SVM: Move svm->host_gs_base into a separate structure (Avi Kivity, 2011-01-12, 1 file, -3/+5)

  More members will join it soon.

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: SVM: Move guest register save out of interrupts disabled section (Avi Kivity, 2011-01-12, 1 file, -5/+5)

  Saving guest registers is just a memory copy, and does not need to be
  in the critical section. Move it outside the critical section to
  improve latency a bit.

  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: x86: Add missing inline tag to kvm_read_and_reset_pf_reason (Jan Kiszka, 2011-01-12, 1 file, -1/+1)

  May otherwise generate build warnings about an unused
  kvm_read_and_reset_pf_reason if included without CONFIG_KVM_GUEST
  enabled.

  Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Move KVM context switch into own function (Andi Kleen, 2011-01-12, 1 file, -25/+38)

  gcc 4.5 with some special options is able to duplicate the VMX context
  switch asm in vmx_vcpu_run(). This results in a compile error because
  the inline asm sequence uses a non-local label; the non-local label is
  needed because other code wants to set up the return address.

  This patch moves the asm code into its own function and marks that
  function explicitly noinline to avoid the problem. Better would
  probably be to just move it into a .S file.

  The diff looks worse than the change really is; it's all just code
  movement and no logic change.

  Signed-off-by: Andi Kleen <ak@linux.intel.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: x86: Mark kvm_arch_setup_async_pf static (Jan Kiszka, 2011-01-12, 1 file, -1/+1)

  It has no user outside mmu.c and also no prototype.

  Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Send async PF when guest is not in userspace too. (Gleb Natapov, 2011-01-12, 1 file, -1/+2)

  If the guest indicates that it can handle async pf in kernel mode too,
  send it, but only if interrupts are enabled.

  Acked-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Let host know whether the guest can handle async PF in non-userspace context. (Gleb Natapov, 2011-01-12, 4 files, -2/+8)

  If the guest can detect that it runs in a non-preemptible context, it
  can handle async PFs at any time, so let the host know that it can send
  async PFs even when the guest cpu is not in userspace.

  Acked-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM paravirt: Handle async PF in non-preemptible context (Gleb Natapov, 2011-01-12, 1 file, -6/+34)

  If an async page fault is received by the idle task, or when
  preempt_count is not zero, the guest cannot reschedule, so we do
  sti; hlt and wait for the page to be ready. The vcpu can still process
  interrupts while it waits for the page.

  Acked-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
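  A sketch of the wait loop, assuming a per-fault entry whose page_ready
  flag is set by the "page ready" notification (the struct and field are
  illustrative; native_safe_halt() is the real sti;hlt helper):

      static void apf_wait_for_page(struct apf_wait_entry *e)
      {
              /* cannot schedule: idle task or preempt_count != 0 */
              while (!e->page_ready)
                      native_safe_halt();   /* irqs are still serviced */
      }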
* KVM: Inject asynchronous page fault into a PV guest if page is swapped out. (Gleb Natapov, 2011-01-12, 3 files, -5/+42)

  Send an async page fault to a PV guest if it accesses swapped-out
  memory. The guest will choose another task to run upon receiving the
  fault.

  Allow async page fault injection only when the guest is in user mode,
  since otherwise the guest may be in a non-sleepable context and will
  not be able to reschedule.

  The vcpu is halted if the guest faults on the same page again or if the
  vcpu executes kernel code.

  Acked-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Handle async PF in a guest. (Gleb Natapov, 2011-01-12, 6 files, -9/+243)

  When the async PF capability is detected, hook up a special page fault
  handler that will handle async page fault events and pass other page
  faults through to the regular page fault handler. Also add async PF
  handling to nested SVM emulation. An async PF always generates an exit
  to L1, where the vcpu thread will be scheduled out until the page is
  available.

  Acked-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM paravirt: Add async PF initialization to PV guest. (Gleb Natapov, 2011-01-12, 2 files, -0/+98)

  Enable async PF in a guest if the async PF capability is discovered.

  Acked-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>