path: root/arch/x86/kvm/mmu.c
Commit message (author, date, files changed, lines removed/added)
* KVM: x86 emulator: fix memory access during x86 emulation (Gleb Natapov, 2010-03-01, 1 file, -10/+7)
  Currently, when the x86 emulator needs to access memory, the page walk is done
  with the broadest permissions possible, so if the emulated instruction was
  executed by a userspace process it can still access kernel memory. Fix that by
  providing the correct memory access rights to the page walker during emulation.
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Cc: stable@kernel.org
  Signed-off-by: Avi Kivity <avi@redhat.com>
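  To illustrate the idea, a minimal standalone sketch of the per-pte permission
  check a walker can apply; the helper name and ACC_* flags are hypothetical,
  only the x86 pte bit layout is architectural:

    #include <stdbool.h>
    #include <stdint.h>

    #define PT_PRESENT_MASK  (1ULL << 0)   /* architectural x86 pte bits */
    #define PT_WRITABLE_MASK (1ULL << 1)
    #define PT_USER_MASK     (1ULL << 2)

    #define ACC_WRITE 0x1                  /* instruction writes memory */
    #define ACC_USER  0x2                  /* instruction ran at CPL 3 */

    /* Reject the access unless this pte grants the rights the emulated
     * instruction actually needs, instead of assuming the broadest rights. */
    static bool walker_check_access(uint64_t gpte, uint32_t access)
    {
        if (!(gpte & PT_PRESENT_MASK))
            return false;
        if ((access & ACC_WRITE) && !(gpte & PT_WRITABLE_MASK))
            return false;
        if ((access & ACC_USER) && !(gpte & PT_USER_MASK))
            return false;   /* userspace must not reach supervisor pages */
        return true;
    }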
* KVM: MMU: Add tracepoint for guest page aging (Avi Kivity, 2010-03-01, 1 file, -3/+8)
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: VMX: emulate accessed bit for EPT (Rik van Riel, 2010-03-01, 1 file, -2/+8)
  Currently KVM pretends that pages with EPT mappings never get accessed. This
  has side effects in the VM, like swapping out actively used guest pages and
  needlessly breaking up actively used hugepages. We can avoid those very costly
  side effects by emulating the accessed bit for EPT PTEs, which should be only
  slightly costly because pages pass through page_referenced() infrequently. TLB
  flushing is taken care of by kvm_mmu_notifier_clear_flush_young(). On my
  system this seems to help prevent KVM guests from being swapped out when they
  should not be.
  Signed-off-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
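  A rough sketch of the emulation trick (illustrative, not the patch itself):
  with no hardware accessed bit, a present spte is evidence the guest touched
  the page since the last scan, so report the page young and zap the spte so
  the next access refaults. TLB flushing is left to the caller, as the message
  notes:

    #include <stdint.h>

    #define SPTE_PRESENT (1ULL << 0)   /* illustrative present bit */

    /* Called from the "clear young" MMU-notifier path for one spte. */
    static int age_spte(uint64_t *sptep)
    {
        int young = 0;

        if (*sptep & SPTE_PRESENT) {
            young = 1;   /* the guest faulted this mapping in, so it was used */
            *sptep = 0;  /* drop it; the next guest access refaults and
                          * re-marks the page referenced */
        }
        return young;
    }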
* KVM: Introduce kvm_host_page_size (Joerg Roedel, 2010-03-01, 1 file, -16/+2)
  This patch introduces a generic function to find out the host page size for a
  given gfn. This function is needed by the kvm iommu code. This patch also
  simplifies the x86 host_mapping_level function.
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
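  The likely shape of such a helper, as a hedged kernel-style reconstruction
  (details may differ from the actual patch): translate the gfn to a host
  virtual address, then ask the backing VMA for its page size:

    /* Kernel-style sketch; relies on KVM's gfn_to_hva()/kvm_is_error_hva()
     * and on find_vma()/vma_kernel_pagesize() from the core mm. */
    unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn)
    {
        struct vm_area_struct *vma;
        unsigned long addr, size = PAGE_SIZE;

        addr = gfn_to_hva(kvm, gfn);
        if (kvm_is_error_hva(addr))
            return PAGE_SIZE;

        down_read(&current->mm->mmap_sem);
        vma = find_vma(current->mm, addr);
        if (vma)
            size = vma_kernel_pagesize(vma); /* huge if hugetlbfs-backed */
        up_read(&current->mm->mmap_sem);

        return size;
    }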
* KVM: MMU: Remove some useless code from alloc_mmu_pages() (Wei Yongjun, 2010-03-01, 1 file, -5/+2)
  If we fail to allocate a page for vcpu->arch.mmu.pae_root, the call to
  free_mmu_pages() is unnecessary: all it does is free the page allocated for
  vcpu->arch.mmu.pae_root.
  Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Rename vcpu->shadow_efer to efer (Avi Kivity, 2010-03-01, 1 file, -1/+1)
  None of the other registers have the shadow_ prefix.
  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Move cr0/cr4/efer related helpers to x86.h (Avi Kivity, 2010-03-01, 1 file, -0/+1)
  They have a more general scope than the MMU.
  Signed-off-by: Avi Kivity <avi@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: rename is_writeble_pte() to is_writable_pte() (Takuya Yoshikawa, 2010-03-01, 1 file, -9/+9)
  There are two spellings of "writable" in arch/x86/kvm/mmu.c and paging_tmpl.h.
  This patch renames is_writeble_pte() to is_writable_pte() and makes grepping
  easy. The new name is consistent with the function's own definition:
  return pte & PT_WRITABLE_MASK;
  Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
  Signed-off-by: Avi Kivity <avi@redhat.com>
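  The renamed helper, reconstructed from the definition quoted in the message:

    static int is_writable_pte(unsigned long pte)
    {
        return pte & PT_WRITABLE_MASK;
    }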
* KVM: Replace read accesses of vcpu->arch.cr0 by an accessor (Avi Kivity, 2010-03-01, 1 file, -1/+1)
  Since we'd like to allow the guest to own a few bits of cr0 at times, we need
  to know when we access those bits.
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: VMX: Enable EPT 1GB page support (Sheng Yang, 2010-03-01, 1 file, -3/+5)
  Signed-off-by: Sheng Yang <sheng@linux.intel.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: x86: Moving PT_*_LEVEL to mmu.h (Sheng Yang, 2010-03-01, 1 file, -4/+0)
  We can use them in x86.c and vmx.c now.
  Signed-off-by: Sheng Yang <sheng@linux.intel.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: switch vcpu context to use SRCU (Marcelo Tosatti, 2010-03-01, 1 file, -4/+3)
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: introduce kvm->srcu and convert kvm_set_memory_region to SRCU update (Marcelo Tosatti, 2010-03-01, 1 file, -14/+14)
  Use two steps for memslot deletion: mark the slot invalid (which stops
  instantiation of new shadow pages for that slot, but allows destruction), then
  instantiate the new empty slot. Also simplifies kvm_handle_hva locking.
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
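  A condensed sketch of the two-step update; copy_memslots(), mark_invalid()
  and clear_slot() are hypothetical helpers standing in for the real
  kvm_set_memory_region() logic:

    /* Sketch: delete memslot i in two SRCU-protected publications. */
    static void delete_memslot(struct kvm *kvm, int i)
    {
        struct kvm_memslots *slots;

        /* Step 1: publish a copy with the slot marked invalid, then wait
         * for all SRCU readers. Readers now refuse to instantiate new
         * shadow pages for the slot but can still destroy existing ones. */
        slots = copy_memslots(rcu_dereference(kvm->memslots));
        mark_invalid(&slots->memslots[i]);
        rcu_assign_pointer(kvm->memslots, slots);
        synchronize_srcu(&kvm->srcu);

        /* Step 2: only now publish the final layout with the slot empty. */
        slots = copy_memslots(rcu_dereference(kvm->memslots));
        clear_slot(&slots->memslots[i]);
        rcu_assign_pointer(kvm->memslots, slots);
        synchronize_srcu(&kvm->srcu);
    }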
* KVM: modify memslots layout in struct kvm (Marcelo Tosatti, 2010-03-01, 1 file, -5/+6)
  Have a pointer to an allocated region inside struct kvm.
  [alex: fix ppc book 3s]
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: Report spte not found in rmap before BUG() (Avi Kivity, 2010-03-01, 1 file, -0/+1)
  In the past we've had single-bit errors in the other two cases; the printk()
  may confirm it for the third case (many->many).
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: x86: Fix host_mapping_level() (Sheng Yang, 2010-01-25, 1 file, -4/+2)
  When an error hva is found, the function should return the level, not
  PAGE_SIZE. Also clean up the coding style of the following loop.
  Cc: stable@kernel.org
  Signed-off-by: Sheng Yang <sheng@linux.intel.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Allow internal errors reported to userspace to carry extra data (Avi Kivity, 2009-12-03, 1 file, -0/+1)
  Usually userspace will freeze the guest so we can inspect it, but some
  internal state is not available. Add extra data to internal error reporting so
  we can expose it to the debugger. Extra data is specific to the suberror.
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Don't pass kvm_run arguments (Avi Kivity, 2009-12-03, 1 file, -1/+1)
  They're just copies of vcpu->run, which is readily accessible.
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: fix pointer cast (Frederik Deweerdt, 2009-10-16, 1 file, -6/+10)
  On a 32-bit build, commit 3da0dd433dc399a8c0124d0614d82a09b6a49bce introduced
  the following warnings:
  arch/x86/kvm/mmu.c: In function ‘kvm_set_pte_rmapp’:
  arch/x86/kvm/mmu.c:770: warning: cast to pointer from integer of different size
  arch/x86/kvm/mmu.c: In function ‘kvm_set_spte_hva’:
  arch/x86/kvm/mmu.c:849: warning: cast from pointer to integer of different size
  The following patch uses 'unsigned long' instead of u64 to match the pointer
  size on both arches.
  Signed-off-by: Frederik Deweerdt <frederik.deweerdt@xprog.eu>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
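  The pattern behind the fix, standalone: a pointer round-trips through
  unsigned long on both 32-bit and 64-bit Linux, whereas u64 forces a
  size-changing cast on 32-bit:

    /* unsigned long is pointer-sized on both 32-bit and 64-bit Linux. */
    static unsigned long pointer_to_cookie(void *p)
    {
        return (unsigned long)p;   /* no "different size" warning anywhere */
    }

    static void *cookie_to_pointer(unsigned long cookie)
    {
        return (void *)cookie;
    }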
* KVM: add support for change_pte mmu notifiers (Izik Eidus, 2009-10-04, 1 file, -9/+53)
  This is needed if KVM wants KSM to directly map pages into its shadow page
  tables.
  [marcelo: cast pfn assignment to u64]
  Signed-off-by: Izik Eidus <ieidus@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: add SPTE_HOST_WRITEABLE flag to the shadow ptes (Izik Eidus, 2009-10-04, 1 file, -4/+11)
  This flag indicates that the host physical page the spte points to is write
  protected, and therefore we cannot change its access to writable unless we
  run get_user_pages(write = 1). (This is needed for change_pte support in
  KVM.)
  Signed-off-by: Izik Eidus <ieidus@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
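  A standalone sketch of how such a software bit gates write access; bit 62
  and the helper name are illustrative choices, not necessarily the patch's:

    #include <stdint.h>

    #define PT_WRITABLE_MASK    (1ULL << 1)    /* architectural W bit */
    #define SPTE_HOST_WRITEABLE (1ULL << 62)   /* illustrative software bit */

    /* Only grant write access if the host page is writable; otherwise the
     * fault path must call get_user_pages(write = 1) first, e.g. to break
     * a KSM-shared page. */
    static uint64_t maybe_make_writable(uint64_t spte)
    {
        if (spte & SPTE_HOST_WRITEABLE)
            spte |= PT_WRITABLE_MASK;
        return spte;
    }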
* KVM: MMU: don't hold pagecount reference for mapped sptes pages (Izik Eidus, 2009-10-04, 1 file, -5/+2)
  When using mmu notifiers, we are allowed to drop the page count reference
  taken by get_user_pages() on a page that is mapped inside the shadow page
  tables. This is needed so we can balance the pagecount against the mapcount
  when checking. (Right now KVM increases the pagecount but not the mapcount
  when mapping a page into a shadow page table entry, so comparing pagecount
  against mapcount gives no reliable result.)
  Signed-off-by: Izik Eidus <ieidus@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: Optimize kvm_mmu_unprotect_page_virt() for tdp (Avi Kivity, 2009-09-10, 1 file, -0/+3)
  We know no pages are protected, so we can short-circuit the whole thing
  (including fairly nasty guest memory accesses).
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: fix bogus alloc_mmu_pages assignment (Marcelo Tosatti, 2009-09-10, 1 file, -8/+0)
  Remove the bogus n_free_mmu_pages assignment from alloc_mmu_pages. It breaks
  accounting of mmu pages, since n_free_mmu_pages is modified but the real
  number of pages remains the same.
  Cc: stable@kernel.org
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: make __kvm_mmu_free_some_pages handle empty list (Izik Eidus, 2009-09-10, 1 file, -1/+2)
  First check if the list is empty before attempting to look at list entries.
  Cc: stable@kernel.org
  Signed-off-by: Izik Eidus <ieidus@redhat.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
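  A hedged kernel-style reconstruction of the fixed shape (not the verbatim
  patch): the list_empty() test must come before container_of() touches the
  tail entry:

    void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
    {
        while (vcpu->kvm->arch.n_free_mmu_pages < KVM_REFILL_PAGES &&
               !list_empty(&vcpu->kvm->arch.active_mmu_pages)) {
            struct kvm_mmu_page *sp;

            sp = container_of(vcpu->kvm->arch.active_mmu_pages.prev,
                              struct kvm_mmu_page, link);
            kvm_mmu_zap_page(vcpu->kvm, sp);
        }
    }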
* KVM: MMU: shadow support for 1gb pages (Joerg Roedel, 2009-09-10, 1 file, -12/+2)
  This patch adds support for shadow paging to the 1gb page table code in KVM.
  With this code the guest can use 1gb pages even if the host does not support
  them.
  [ Marcelo: fix shadow page collision on pmd level if a guest 1gb page is
  mapped with 4kb ptes on host level ]
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: make page walker aware of mapping levels (Joerg Roedel, 2009-09-10, 1 file, -1/+16)
  The page walker may be used with nested paging too when accessing mmio areas.
  Make it support the additional page level too.
  [ Marcelo: fix reserved bit check for 1gb pte ]
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: make direct mapping paths aware of mapping levels (Joerg Roedel, 2009-09-10, 1 file, -34/+49)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: rename is_largepage_backed to mapping_level (Joerg Roedel, 2009-09-10, 1 file, -33/+67)
  With the new name and the corresponding backend changes this function can now
  support multiple hugepage sizes.
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: make rmap code aware of mapping levels (Joerg Roedel, 2009-09-10, 1 file, -25/+28)
  This patch removes the largepage parameter from the rmap_add function.
  Together with rmap_remove, this function now uses the role.level field to
  determine whether the page is a huge page.
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: fix missing locking in alloc_mmu_pages (Marcelo Tosatti, 2009-09-10, 1 file, -0/+2)
  n_requested_mmu_pages/n_free_mmu_pages are used by kvm_mmu_change_mmu_pages
  to calculate the number of pages to zap. alloc_mmu_pages, called from the
  vcpu initialization path, modifies these variables without proper locking,
  which can result in a negative value in kvm_mmu_change_mmu_pages (say, with
  cpu hotplug).
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
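  The essence of the fix, sketched; account_new_pages() is a hypothetical
  wrapper, only the locking rule is the point:

    /* Any update to counters that kvm_mmu_change_mmu_pages() also reads
     * must be serialized with it via mmu_lock. */
    static void account_new_pages(struct kvm_vcpu *vcpu, int pages)
    {
        spin_lock(&vcpu->kvm->mmu_lock);
        vcpu->kvm->arch.n_free_mmu_pages += pages;
        spin_unlock(&vcpu->kvm->mmu_lock);
    }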
* KVM: Discard unnecessary kvm_mmu_flush_tlb() in kvm_mmu_load() (Sheng Yang, 2009-09-10, 1 file, -1/+1)
  set_cr3() should already cover the TLB flushing.
  Signed-off-by: Sheng Yang <sheng@linux.intel.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: MMU: Fix MMU_DEBUG compile breakage (Joerg Roedel, 2009-09-10, 1 file, -2/+2)
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Trace shadow page lifecycle (Avi Kivity, 2009-09-10, 1 file, -4/+6)
  Create, sync, unsync, zap.
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: Trace guest pagetable walker (Avi Kivity, 2009-09-10, 1 file, -0/+3)
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Prepare memslot data structures for multiple hugepage sizes (Joerg Roedel, 2009-09-10, 1 file, -14/+16)
  [avi: fix build on non-x86]
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: add kvm_mmu_get_spte_hierarchy helper (Marcelo Tosatti, 2009-09-10, 1 file, -0/+18)
  Required by EPT misconfiguration handler.
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: make for_each_shadow_entry aware of largepages (Marcelo Tosatti, 2009-09-10, 1 file, -0/+5)
  This way there is no need to add explicit checks in every
  for_each_shadow_entry user.
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
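  One plausible shape for the iterator's stop condition (a standalone sketch;
  the real predicate lives in the shadow-walk helpers): once the current entry
  is a large pte, there is no lower level to descend into:

    #include <stdbool.h>
    #include <stdint.h>

    #define PT_PAGE_TABLE_LEVEL 1            /* the 4k level */
    #define PT_PAGE_SIZE_MASK   (1ULL << 7)  /* architectural PS bit */

    /* Terminate the walk at a large pte instead of making every
     * for_each_shadow_entry user check for it. */
    static bool shadow_walk_continue(uint64_t spte, int level)
    {
        if (level < PT_PAGE_TABLE_LEVEL)
            return false;              /* walked past the 4k level */
        if (spte & PT_PAGE_SIZE_MASK)
            return false;              /* large pte: mapping ends here */
        return true;
    }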
* KVM: MMU audit: largepage handling (Marcelo Tosatti, 2009-09-10, 1 file, -8/+7)
  Make the audit code aware of largepages.
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU audit: audit_mappings tweaks (Marcelo Tosatti, 2009-09-10, 1 file, -1/+7)
  - Fail early in case gfn_to_pfn returns is_error_pfn.
  - For the pre-pte-write case, avoid spurious "gva is valid but spte is
    notrap" messages (the emulation code does the guest write first, so this
    particular case is OK).
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU audit: nontrapping ptes in nonleaf level (Marcelo Tosatti, 2009-09-10, 1 file, -6/+1)
  It is valid to set non-leaf sptes as notrap.
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU audit: update audit_write_protection (Marcelo Tosatti, 2009-09-10, 1 file, -3/+11)
  - Unsync pages contain writable sptes in the rmap.
  - rmaps no longer exclusively contain writable sptes.
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU audit: update count_writable_mappings / count_rmaps (Marcelo Tosatti, 2009-09-10, 1 file, -10/+94)
  Under testing, count_writable_mappings returns a value that is 2 larger than
  what count_rmaps returns. The suspicion is that one of the two functions is
  counting a duplicate (either positively or negatively). Modifying
  check_writable_mappings_rmap to check for rmap existence on all present MMU
  pages fails to trigger an error, which should keep Avi happy. Also introduce
  mmu_spte_walk to invoke a callback on all present sptes visible to the
  current vcpu; this might be useful in the future.
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: introduce is_last_spte helper (Marcelo Tosatti, 2009-09-10, 1 file, -13/+13)
  Hiding some of the last largepage / level interaction (which is useful for
  gbpages and for zero based levels). Also merge the PT_PAGE_TABLE_LEVEL
  clearing loop in unlink_children.
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
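  A hedged reconstruction of the helper: an spte is "last" (terminal) if it
  sits at the 4k level, or if it is a large pte at a higher level:

    static int is_last_spte(u64 pte, int level)
    {
        if (level == PT_PAGE_TABLE_LEVEL)
            return 1;
        if (is_large_pte(pte))
            return 1;
        return 0;
    }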
* KVM: Return to userspace on emulation failure (Avi Kivity, 2009-09-10, 1 file, -2/+3)
  Instead of mindlessly retrying to execute the instruction, report the failure
  to userspace.
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: Use macro to iterate over vcpus. (Gleb Natapov, 2009-09-10, 1 file, -3/+3)
  [christian: remove unused variables on s390]
  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: s/shadow_pte/spte/ (Avi Kivity, 2009-09-10, 1 file, -51/+51)
  We use shadow_pte and spte inconsistently; switch to the shorter spelling.
  Rename set_shadow_pte() to __set_spte() to avoid a conflict with the existing
  set_spte(), and to indicate its low-levelness.
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: Adjust pte accessors to explicitly indicate guest or shadow pte (Avi Kivity, 2009-09-10, 1 file, -8/+8)
  Since guest and host ptes can have wildly different formats, adjust the pte
  accessor names to indicate which type of pte they operate on. No functional
  changes.
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: MMU: Fix is_dirty_pte() (Avi Kivity, 2009-09-10, 1 file, -1/+1)
  is_dirty_pte() is used on guest ptes, not shadow ptes, so it needs to avoid
  shadow_dirty_mask and use PT_DIRTY_MASK instead. Misdetecting dirty pages
  could lead to unnecessarily setting the dirty bit under EPT.
  Signed-off-by: Avi Kivity <avi@redhat.com>
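  The corrected check, sketched standalone: guest ptes carry the architectural
  dirty bit (bit 6), which is not where shadow_dirty_mask points under EPT:

    #include <stdint.h>

    #define PT_DIRTY_MASK (1ULL << 6)   /* architectural x86 D bit */

    /* Operates on guest ptes, so test the architectural bit rather than
     * the shadow/EPT dirty mask. */
    static int is_dirty_pte(uint64_t pte)
    {
        return (pte & PT_DIRTY_MASK) != 0;
    }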
* KVM: Cache pdptrs (Avi Kivity, 2009-09-10, 1 file, -2/+5)
  Instead of reloading the pdptrs on every entry and exit (vmcs writes on vmx,
  guest memory accesses on svm), extract them on demand.
  Signed-off-by: Avi Kivity <avi@redhat.com>