path: root/drivers/kvm/mmu.c
Commit message | Author | Age | Files | Lines
...
* KVM: MMU: Fix rare oops on guest context switch | Avi Kivity | 2007-09-14 | 1 | -2/+3
  A guest context switch to an uncached cr3 can require allocation of shadow pages, but we only recycle shadow pages in kvm_mmu_page_fault(). Move shadow page recycling to mmu_topup_memory_caches(), which is called from both the page fault handler and from guest cr3 reload.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
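  A minimal sketch of the recycling pattern this describes; the threshold and helper names are illustrative, not the exact mmu.c code:

      /* Called from both the page fault handler and guest cr3 reload. */
      static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
      {
              int r;

              /* Recycle least-recently-used shadow pages here, so every
               * caller that may need fresh shadow pages benefits. */
              if (vcpu->kvm->n_free_mmu_pages < KVM_REFILL_PAGES)
                      kvm_mmu_free_some_pages(vcpu);

              r = mmu_topup_cache(&vcpu->pte_chain_cache);
              if (r)
                      return r;
              return mmu_topup_cache(&vcpu->rmap_desc_cache);
      }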
* KVM: MMU: Fix cleaning up the shadow page allocation cache | Avi Kivity | 2007-07-20 | 1 | -1/+1
  __free_page() wants a struct page, not a virtual address.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
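  The bug class in isolation (both free APIs are real kernel interfaces; the helper is illustrative):

      #include <linux/mm.h>

      static void free_shadow_table(void *vaddr)  /* from __get_free_page() */
      {
              /* Broken: __free_page() takes a struct page *, so passing a
               * virtual address corrupts allocator state:
               *     __free_page(vaddr);
               * Convert to the struct page first: */
              __free_page(virt_to_page(vaddr));
              /* (free_page((unsigned long)vaddr) is the other correct spelling.) */
      }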
* KVM: MMU: Fix oopses with SLUB | Avi Kivity | 2007-07-20 | 1 | -13/+26
  The kvm mmu uses page->private on shadow page tables; so does slub, and an oops results. Fix by allocating regular pages for shadows instead of using slub.
  Tested-by: S.Çağlar Onur <caglar@pardus.org.tr>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix memory slot management functions for guest smp | Avi Kivity | 2007-07-20 | 1 | -62/+41
  The memory slot management functions were oriented against vcpu 0, where they should be kvm-wide. This causes hangs starting X on guest smp. Fix by making the functions (and resultant tail in the mmu) non-vcpu-specific. Unfortunately this reduces the efficiency of the mmu object cache a bit. We may have to revisit this later.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* mm: Remove slab destructors from kmem_cache_create(). | Paul Mundt | 2007-07-20 | 1 | -4/+4
  Slab destructors were no longer supported after Christoph's c59def9f222d44bb7e2f0a559f2906191a0862d7 change. They've been BUGs for both slab and slub, and slob never supported them either. This rips out support for the dtor pointer from kmem_cache_create() completely and fixes up every single callsite in the kernel (there were about 224, not including the slab allocator definitions themselves, or the documentation references).
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
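  The per-callsite change looked like this (cache name, object type, and ctor are placeholders):

      /* Before: a destructor pointer as the final argument. */
      cache = kmem_cache_create("example", sizeof(struct example),
                                0, SLAB_HWCACHE_ALIGN, example_ctor, NULL);

      /* After: the dtor argument is removed from the signature. */
      cache = kmem_cache_create("example", sizeof(struct example),
                                0, SLAB_HWCACHE_ALIGN, example_ctor);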
* KVM: Clean up #includes | Avi Kivity | 2007-07-16 | 1 | -4/+6
  Remove unnecessary ones, and rearrange the remaining ones in the standard order.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Fix wrong tlb flush order | Shaohua Li | 2007-07-16 | 1 | -1/+1
  We need to flush the tlb after updating a pte, not before.
  Signed-off-by: Shaohua Li <shaohua.li@intel.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
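  The ordering, spelled out as a sketch (flush_tlb() stands in for whichever flush primitive applies at the call site):

      *spte = new_spte;   /* 1. publish the new translation */
      flush_tlb();        /* 2. then drop any stale cached copy; the reverse
                           *    order lets the old entry be re-cached in the
                           *    window between the two steps */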
* KVM: Flush remote tlbs when reducing shadow pte permissions | Avi Kivity | 2007-07-16 | 1 | -3/+5
  When a vcpu causes a shadow tlb entry to have reduced permissions, it must also clear the tlb on remote vcpus. We do that by:
  - setting a bit on the vcpu that requests a tlb flush before the next entry
  - if the vcpu is currently executing, sending an ipi to make sure it exits before we continue
  Signed-off-by: Avi Kivity <avi@qumranet.com>
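  A hedged sketch of that two-step scheme (the field names and request flag are illustrative):

      static void flush_remote_tlbs(struct kvm *kvm)
      {
              int i;

              for (i = 0; i < KVM_MAX_VCPUS; ++i) {
                      struct kvm_vcpu *vcpu = &kvm->vcpus[i];

                      /* 1. Request a flush before the next guest entry. */
                      set_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests);
                      /* 2. If the vcpu is in guest mode right now, an IPI
                       *    forces a vmexit so the request is seen promptly. */
                      if (vcpu->guest_mode)
                              smp_send_reschedule(vcpu->cpu);
              }
      }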
* KVM: Fix vcpu freeing for guest smp | Avi Kivity | 2007-07-16 | 1 | -2/+2
  A vcpu can pin up to four mmu shadow pages, which means the freeing loop will never terminate. Fix by first unpinning shadow pages on all vcpus, then freeing shadow pages.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Lazy guest cr3 switching | Avi Kivity | 2007-07-16 | 1 | -21/+22
  Switching the guest paging context may require us to allocate memory, which might fail. Instead of wiring up error paths everywhere, make context switching lazy and actually do the switch before the next guest entry, where we can return an error if allocation fails.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
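  Sketched with illustrative names: the cr3 write only marks the context stale, and the entry path performs the real switch where failure has a clean exit to userspace:

      static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
      {
              if (vcpu->mmu_needs_reload) {       /* set when the guest writes cr3 */
                      int r = kvm_mmu_load(vcpu); /* may allocate shadow pages */
                      if (r)
                              return r;           /* single error path */
                      vcpu->mmu_needs_reload = 0;
              }
              return __enter_guest(vcpu);
      }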
* KVM: MMU: Remove unused large page marker | Avi Kivity | 2007-07-16 | 1 | -1/+0
  This has not been used for some time, as the same information is available in the page header.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Don't cache guest access bits in the shadow page table | Avi Kivity | 2007-07-16 | 1 | -8/+0
  This was once used to avoid accessing the guest pte when upgrading the shadow pte from read-only to read-write. But usually we need to set the guest pte dirty or accessed bits anyway, so this wasn't really exploited.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Simplify accessed/dirty/present/nx bit handling | Avi Kivity | 2007-07-16 | 1 | -5/+0
  Always set the accessed and dirty bits (since having them cleared causes a read-modify-write cycle), always set the present bit, and copy the nx bit from the guest.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Make setting shadow ptes atomic on i386 | Avi Kivity | 2007-07-16 | 1 | -2/+12
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Fold fix_write_pf() into set_pte_common() | Avi Kivity | 2007-07-16 | 1 | -0/+11
  This prevents some work from being performed twice, and, more importantly, reduces the number of places where we modify shadow ptes.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Fold fix_read_pf() into set_pte_common() | Avi Kivity | 2007-07-16 | 1 | -17/+0
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Move set_pte_common() to pte width dependent code | Avi Kivity | 2007-07-16 | 1 | -48/+0
  In preparation for some modifications.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Use slab caches for shadow pages and their headers | Avi Kivity | 2007-07-16 | 1 | -25/+39
  Use slab caches instead of a simple custom list.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Store shadow page tables as kernel virtual addresses, not physical | Avi Kivity | 2007-07-16 | 1 | -17/+15
  Simplifies things a bit.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Simplify kvm_mmu_free_page() a tiny bit | Avi Kivity | 2007-07-16 | 1 | -6/+4
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Update shadow pte on write to guest pte | Avi Kivity | 2007-07-16 | 1 | -0/+15
  A typical demand page/copy on write pattern is:
  - page fault on vaddr
  - kvm propagates fault to guest
  - guest handles fault, updates pte
  - kvm traps write, clears shadow pte, resumes guest
  - guest returns to userspace, re-faults on same vaddr
  - kvm installs shadow pte, resumes guest
  - guest continues
  So, three vmexits for a single guest page fault. But if, instead of clearing the page table entry, we update it to correspond to the value that the guest has just written, we eliminate the third vmexit. This patch does exactly that, reducing kbuild time by about 10%.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
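  The shape of the change, as a sketch (helper names are illustrative):

      /* Intercepted guest pte write: instead of only zapping the shadow
       * pte and waiting for the re-fault, pre-install the new mapping. */
      static void mmu_guest_pte_write(struct kvm_vcpu *vcpu, u64 *spte,
                                      u64 new_gpte)
      {
              zap_spte(vcpu, spte);                   /* old behaviour ended here */
              if (new_gpte & PT_PRESENT_MASK)
                      set_spte(vcpu, spte, new_gpte); /* saves the third vmexit */
      }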
* KVM: MMU: Respect nonpae pagetable quadrant when zapping ptes | Avi Kivity | 2007-07-16 | 1 | -0/+4
  When a guest writes to a page that has an mmu shadow, we have to clear the shadow pte corresponding to the memory location touched by the guest. Now, in nonpae mode, a single guest page may have two or four shadow pages (because a nonpae page maps 4MB or 4GB, whereas the pae shadow maps 2MB or 1GB), so when we look up the page we find up to three additional aliases for the page. Since we _clear_ the shadow pte, it doesn't matter except for a slight performance penalty, but if we want to _update_ the shadow pte instead of clearing it, it is vital that we don't modify the aliases. Fortunately, exactly which page is needed (the "quadrant") is easily computed, and is accessible in the shadow page header. All we need is to ignore shadow pages from the wrong quadrants.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
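  The quadrant arithmetic, simplified (a sketch of the idea, not the exact header code):

      /* Which of the guest page's n shadow pages covers a given byte
       * offset? n is 2 for a nonpae pte page (1024 4-byte ptes vs 512
       * 8-byte shadow ptes per page) and 4 for the nonpae page directory. */
      static unsigned quadrant(unsigned offset_in_page, unsigned n)
      {
              return offset_in_page / (PAGE_SIZE / n);
      }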
* KVM: Unify kvm_mmu_pre_write() and kvm_mmu_post_write() | Avi Kivity | 2007-07-16 | 1 | -7/+4
  Instead of calling two functions and repeating expensive checks, call one function and provide it with before/after information.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Assume that writes smaller than 4 bytes are to non-pagetable pages | Avi Kivity | 2007-07-16 | 1 | -0/+1
  This allows us to remove write protection earlier than otherwise. Should some mad OS choose to use byte writes to update pagetables, it will suffer a performance hit, but still work correctly.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: fix an if() condition | Adrian Bunk | 2007-05-03 | 1 | -1/+1
  It might have worked in this case since PT_PRESENT_MASK is 1, but let's express this correctly.
  Signed-off-by: Adrian Bunk <bunk@stusta.de>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Per-vcpu statistics | Avi Kivity | 2007-05-03 | 1 | -1/+1
  Make the exit statistics per-vcpu instead of global. This gives a 3.5% boost when running one virtual machine per core on my two socket dual core (4 cores total) machine.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Avoid heavy ASSERT in non-debug mode | Yaozu Dong | 2007-05-03 | 1 | -0/+6
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Retry sleeping allocation if atomic allocation fails | Avi Kivity | 2007-05-03 | 1 | -5/+21
  This avoids -ENOMEM under memory pressure.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
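  The retry pattern, sketched (the lock and helper names are illustrative):

      r = mmu_topup_memory_caches(vcpu, GFP_ATOMIC);
      if (unlikely(r)) {
              /* Atomic refill failed: drop the lock so we may sleep,
               * then retry with a sleeping allocation. */
              spin_unlock(&vcpu->kvm->lock);
              r = mmu_topup_memory_caches(vcpu, GFP_KERNEL);
              spin_lock(&vcpu->kvm->lock);
      }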
* KVM: Use slab caches to allocate mmu data structures | Avi Kivity | 2007-05-03 | 1 | -4/+35
  Better leak detection, statistics, memory use, speed -- goodness all around.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
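  Roughly what the cache setup looks like with the six-argument kmem_cache_create() of this era (the struct names mirror the mmu data structures but are shown for illustration):

      pte_chain_cache = kmem_cache_create("kvm_pte_chain",
                                          sizeof(struct kvm_pte_chain),
                                          0, 0, NULL, NULL);
      rmap_desc_cache = kmem_cache_create("kvm_rmap_desc",
                                          sizeof(struct kvm_rmap_desc),
                                          0, 0, NULL, NULL);

  (The trailing dtor argument is later dropped entirely by the 2007-07-20 kmem_cache_create() change listed above.)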
* KVM: Handle partial pae pdptr | Avi Kivity | 2007-05-03 | 1 | -6/+12
  Some guests (Solaris) do not set up all four pdptrs, but leave some invalid. kvm incorrectly treated these as valid page directories, pinning the wrong pages and causing general confusion. Fix by checking the valid bit of a pae pdpte. This closes sourceforge bug 1698922.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Simplify gfn_to_page() | Avi Kivity | 2007-05-03 | 1 | -8/+4
  Mapping a guest page to a host page is a common operation. Currently, one has first to find the memory slot where the page belongs (gfn_to_memslot), then locate the page itself (gfn_to_page()). This is clumsy, and also won't work well with memory aliases. So simplify gfn_to_page() not to require memory slot translation first, and instead do it internally.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Add mmu cache clear function | Dor Laor | 2007-05-03 | 1 | -0/+17
  Functions that play around with the physical memory map need a way to clear mappings to possibly nonexistent or invalid memory. Both the mmu cache and the processor tlb are cleared.
  Signed-off-by: Dor Laor <dor.laor@qumranet.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Use list_move() | Avi Kivity | 2007-05-03 | 1 | -8/+4
  Use list_move() where possible. Noticed by Dor Laor.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
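  The substitution (list_move() is the real list.h helper; the field and list names are illustrative):

      /* Before: delete then re-add. */
      list_del(&page->link);
      list_add(&page->link, &kvm->active_mmu_pages);

      /* After: one call that states the intent. */
      list_move(&page->link, &kvm->active_mmu_pages);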
* KVM: MMU: Fix hugepage pdes mapping same physical address with different access | Avi Kivity | 2007-05-03 | 1 | -3/+5
  The kvm mmu keeps a shadow page for hugepage pdes; if several such pdes map the same physical address, they share the same shadow page. This is a fairly common case (kernel mappings on i386 nonpae Linux, for example). However, if the two pdes map the same memory but with different permissions, kvm will happily use the cached shadow page. If the access through the more permissive pde occurs after the access through the strict pde, an endless pagefault loop is generated and the guest makes no progress. Fix by making the access permissions part of the cache lookup key. The fix allows Xen pae to boot on kvm and run guest domains. Thanks to Jeremy Fitzhardinge for reporting the bug and testing the fix.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
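  A sketch of folding the access bits into the lookup key (an illustrative layout, not the exact role word used by the mmu):

      union mmu_page_role {
              unsigned word;
              struct {
                      unsigned level    : 4;
                      unsigned quadrant : 2;
                      unsigned access   : 3;  /* new: write/user/nx bits */
              };
      };

      /* Two hugepage pdes aliasing one address but differing in access
       * now hash to different shadow pages. */
      page = lookup_shadow_page(kvm, gfn, role);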
* KVM: MMU: Remove global pte tracking | Avi Kivity | 2007-05-03 | 1 | -9/+0
  The initial, noncaching, version of the kvm mmu flushed all nonglobal shadow page table translations (much like a native tlb flush). The new implementation flushes translations only when they change, rendering global pte tracking superfluous. This removes the unused tracking mechanism and storage space.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Avoid guest virtual addresses in string pio userspace interface | Avi Kivity | 2007-05-03 | 1 | -0/+9
  The current string pio interface communicates using guest virtual addresses, relying on userspace to translate addresses and to check permissions. This interface cannot fully support guest smp, as the check needs to take into account two pages at once in case an unaligned string transfer straddles a page boundary. Change the interface not to communicate guest addresses at all; instead use a buffer page (mmaped by userspace) and do transfers there. The kernel manages the virtual to physical translation and can perform the checks atomically by taking the appropriate locks.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix bogus sign extension in mmu mapping audit | Avi Kivity | 2007-05-03 | 1 | -1/+1
  When auditing a 32-bit guest on a 64-bit host, sign extension of the page table directory pointer table index caused bogus addresses to be shown on audit errors. Fix by declaring the index unsigned.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Fix off-by-one when writing to a nonpae guest pde | Avi Kivity | 2007-04-19 | 1 | -0/+1
  Nonpae guest pdes are shadowed by two pae pdes, so we double the offset twice: once to account for the pte size difference, and once because we need two shadow pdes for a single guest pde. But when writing to the upper guest pde we also need to truncate the lower bits, otherwise the multiply shifts these bits into the pde index and causes an access to the wrong shadow pde. If we're at the end of the page (accessing the very last guest pde) we can even overflow into the next host page and oops.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: MMU: Fix host memory corruption on i386 with >= 4GB ram | Avi Kivity | 2007-03-18 | 1 | -3/+3
  PAGE_MASK is an unsigned long, so using it to mask physical addresses on i386 (which are 64-bit wide) leads to truncation. This can result in page->private of unrelated memory pages being modified, with disastrous results. Fix by not using PAGE_MASK for physical addresses; instead calculate the correct value directly from PAGE_SIZE. Also fix a similar BUG_ON().
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
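  A self-contained userspace demonstration of the truncation; it only shows the broken value when unsigned long is 32 bits wide (e.g. compiled with gcc -m32):

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SIZE 4096UL
      #define PAGE_MASK (~(PAGE_SIZE - 1))   /* unsigned long: 0xfffff000 on i386 */

      int main(void)
      {
              uint64_t gpa = 0x123456789000ULL;   /* physical address above 4GB */

              /* Zero-extending the 32-bit mask wipes the high address bits. */
              printf("broken: %#llx\n", (unsigned long long)(gpa & PAGE_MASK));

              /* Deriving the mask from PAGE_SIZE in 64-bit arithmetic is safe. */
              printf("fixed:  %#llx\n",
                     (unsigned long long)(gpa & ~((uint64_t)PAGE_SIZE - 1)));
              return 0;
      }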
* KVM: MMU: Fix guest writes to nonpae pde | Avi Kivity | 2007-03-18 | 1 | -12/+34
  KVM shadow page tables are always in pae mode, regardless of the guest setting. This means that a guest pde (mapping 4MB of memory) is mapped to two shadow pdes (mapping 2MB each). When the guest writes to a pte or pde, we intercept the write and emulate it. We also remove any shadowed mappings corresponding to the write. Since the mmu did not account for the doubling in the number of pdes, it removed the wrong entry, resulting in a mismatch between shadow page tables and guest page tables, followed shortly by guest memory corruption. This patch fixes the problem by detecting the special case of writing to a non-pae pde and adjusting the address and number of shadow pdes zapped accordingly.
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: Use page_private()/set_page_private() apis | Markus Rechberger | 2007-03-04 | 1 | -18/+18
  Besides using an established api, this allows using kvm in older kernels.
  Signed-off-by: Markus Rechberger <markus.rechberger@amd.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
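  The mechanical change (both accessors are the real mm APIs; the pointer type is KVM's shadow page header):

      /* Before: direct field access. */
      page->private = (unsigned long)page_head;
      page_head = (struct kvm_mmu_page *)page->private;

      /* After: accessor api, portable across kernel versions. */
      set_page_private(page, (unsigned long)page_head);
      page_head = (struct kvm_mmu_page *)page_private(page);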
* [PATCH] misc NULL noise removal | Al Viro | 2007-02-09 | 1 | -1/+1
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [PATCH] KVM: MMU: Report nx faults to the guest | Avi Kivity | 2007-01-26 | 1 | -0/+6
  With the recent guest page fault change, we perform access checks on our own instead of relying on the cpu. This means we have to perform the nx checks as well. Software like the google toolbar on windows appears to rely on this somehow.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [PATCH] KVM: MMU: Perform access checks in walk_addr() | Avi Kivity | 2007-01-26 | 1 | -10/+0
  Check pte permission bits in walk_addr(), instead of scattering the checks all over the code. This has the following benefits:
  1. We no longer set the accessed bit for accesses which fail permission checks.
  2. Setting the accessed bit is simplified.
  3. Under some circumstances, we used to pretend a page fault was fixed when it would actually fail the access checks. This caused an unnecessary vmexit.
  4. The error code for guest page faults is now correct.
  The fix helps netbsd further along booting, and allows kvm to pass the new mmu testsuite.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
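  A sketch of the consolidated checks inside the walk (mask names follow the kvm mmu headers of this era; the fault flags are illustrative, and supervisor-mode subtleties are omitted):

      if (!(gpte & PT_PRESENT_MASK))
              goto not_present;
      if (write_fault && !(gpte & PT_WRITABLE_MASK))
              goto access_error;
      if (user_fault && !(gpte & PT_USER_MASK))
              goto access_error;
      if (fetch_fault && (gpte & PT64_NX_MASK))   /* see the nx commit above */
              goto access_error;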
* [PATCH] KVM: Simplify mmu_alloc_roots() | Ingo Molnar | 2007-01-05 | 1 | -6/+6
  Small optimization/cleanup: page == page_header(page->page_hpa)
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] KVM: Avoid oom on cr3 switch | Ingo Molnar | 2007-01-05 | 1 | -0/+2
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] KVM: MMU: add audit code to check mappings, etc are correct | Avi Kivity | 2007-01-05 | 1 | -2/+185
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] KVM: MMU: Flush guest tlb when reducing permissions on a pte | Avi Kivity | 2007-01-05 | 1 | -1/+6
  If we reduce permissions on a pte, we must flush the cached copy of the pte from the guest's tlb. This is implemented at the moment by flushing the entire guest tlb, and can be improved by flushing just the relevant virtual address, if it is known.
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] KVM: MMU: Detect oom conditions and propagate error to userspace | Avi Kivity | 2007-01-05 | 1 | -11/+21
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] KVM: MMU: Replace atomic allocations by preallocated objects | Avi Kivity | 2007-01-05 | 1 | -31/+120
  The mmu sometimes needs memory for reverse mapping and parent pte chains. However, we can't allocate from within the mmu because of the atomic context. So, move the allocations to a central place that can be executed before the main mmu machinery, where we can bail out on failure before any damage is done. (Error handling is deferred for now, but the basic structure is there.)
  Signed-off-by: Avi Kivity <avi@qumranet.com>
  Acked-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
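  The preallocation scheme in miniature (names and the array size are illustrative):

      struct mmu_memory_cache {
              int nobjs;
              void *objs[KVM_NR_MEM_OBJS];
      };

      /* Runs before the atomic mmu machinery; may sleep, may fail safely. */
      static int mmu_topup_cache(struct mmu_memory_cache *mc,
                                 size_t size, int min)
      {
              while (mc->nobjs < min) {
                      void *obj = kzalloc(size, GFP_KERNEL);
                      if (!obj)
                              return -ENOMEM;  /* bail before any damage */
                      mc->objs[mc->nobjs++] = obj;
              }
              return 0;
      }

      /* Runs inside the fault path; never allocates, cannot fail. */
      static void *mmu_cache_alloc(struct mmu_memory_cache *mc)
      {
              BUG_ON(!mc->nobjs);
              return mc->objs[--mc->nobjs];
      }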