path: root/arch/powerpc/kvm/book3s_32_mmu_host.c
* KVM: PPC: Book3S: Make magic page properly 4k mappable (Alexander Graf, 2014-07-28, 1 file, -4/+3)

  The magic page is defined as a 4k page of per-vCPU data that is shared
  between the guest and the host to accelerate accesses to privileged
  registers.

  However, when the host is using 64k page size granularity we weren't
  quite as strict about that rule anymore. Instead, we partially treated
  all of the upper 64k as magic page and mapped only the uppermost 4k
  with the actual magic contents.

  This works well enough for Linux which doesn't use any memory in
  kernel space in the upper 64k, but Mac OS X got upset. So this patch
  makes magic page actually stay in a 4k range even on 64k page size
  hosts.

  This patch fixes magic page usage with Mac OS X (using MOL) on 64k
  PAGE_SIZE hosts for me.

  Signed-off-by: Alexander Graf <agraf@suse.de>
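  As a minimal sketch of the rule this patch enforces (the helper and
  variable names below are hypothetical, not taken from the patch),
  magic-page lookups must compare at 4k granularity even when the host
  backs guest memory with 64k pages:

      /* The magic page is always 4k, whatever the host page size. */
      #define MAGIC_PAGE_SIZE 0x1000UL

      static int ea_is_magic_page(unsigned long ea, unsigned long magic_ea)
      {
              return (ea & ~(MAGIC_PAGE_SIZE - 1)) == magic_ea;
      }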
* KVM: PPC: Make shared struct aka magic page guest endian (Alexander Graf, 2014-05-30, 1 file, -2/+2)

  The shared (magic) page is a data structure that contains often used
  supervisor privileged SPRs accessible via memory to the user to reduce
  the number of exits we have to take to read/write them.

  When we actually share this structure with the guest we have to
  maintain it in guest endianness, because some of the patch tricks only
  work with native endian load/store operations.

  Since we only share the structure with either host or guest in little
  endian on book3s_64 pr mode, we don't have to worry about booke or
  book3s hv.

  For booke, the shared struct stays big endian. For book3s_64 hv we
  maintain the struct in host native endian, since it never gets shared
  with the guest.

  For book3s_64 pr we introduce a variable that tells us which
  endianness the shared struct is in and route every access to it
  through helper inline functions that evaluate this variable.

  Signed-off-by: Alexander Graf <agraf@suse.de>
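  A hedged sketch of the helper pattern described above (the real
  helpers are generated by macros in the kvm_book3s headers; take this
  as an illustration, not the actual code):

      /* Route every shared-page read through the endianness flag. */
      static inline u64 sketch_get_shared_msr(struct kvm_vcpu *vcpu)
      {
              if (vcpu->arch.shared_big_endian)
                      return be64_to_cpu(vcpu->arch.shared->msr);
              else
                      return le64_to_cpu(vcpu->arch.shared->msr);
      }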
* KVM: PPC: NULL return of kvmppc_mmu_hpte_cache_next should be handled (Zhouyi Zhou, 2014-01-09, 1 file, -0/+5)

  A NULL return from kvmppc_mmu_hpte_cache_next() should be handled.

  Signed-off-by: Zhouyi Zhou <yizhouzhou@ict.ac.cn>
  Signed-off-by: Alexander Graf <agraf@suse.de>
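  The shape of the fix, sketched (error code and label are
  illustrative):

      pte = kvmppc_mmu_hpte_cache_next(vcpu);
      if (!pte) {
              r = -EAGAIN;    /* cache exhausted: fail the map gracefully */
              goto out;
      }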
* kvm: powerpc: Add kvmppc_ops callback (Aneesh Kumar K.V, 2013-10-17, 1 file, -1/+1)

  This patch adds a new callback, kvmppc_ops. This will help us enable
  both HV and PR KVM together in the same kernel. The actual change to
  enable them together is done in a later patch in the series.

  Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
  [agraf: squash in booke changes]
  Signed-off-by: Alexander Graf <agraf@suse.de>
* KVM: PPC: Book3S PR: Better handling of host-side read-only pages (Paul Mackerras, 2013-10-17, 1 file, -3/+11)

  Currently we request write access to all pages that get mapped into
  the guest, even if the guest is only loading from the page. This
  reduces the effectiveness of KSM because it means that we unshare
  every page we access. Also, we always set the changed (C) bit in the
  guest HPTE if it allows writing, even for a guest load.

  This fixes both these problems. We pass an 'iswrite' flag to the
  mmu.xlate() functions and to kvmppc_mmu_map_page() to indicate whether
  the access is a load or a store. The mmu.xlate() functions now only
  set C for stores. kvmppc_gfn_to_pfn() now calls gfn_to_pfn_prot()
  instead of gfn_to_pfn() so that it can indicate whether we need write
  access to the page, and get back a 'writable' flag to indicate whether
  the page is writable or not. If that 'writable' flag is clear, we then
  make the host HPTE read-only even if the guest HPTE allowed writing.

  This means that we can get a protection fault when the guest writes to
  a page that it has mapped read-write but which is read-only on the
  host side (perhaps due to KSM having merged the page). Thus we now
  call kvmppc_handle_pagefault() for protection faults as well as HPTE
  not found faults. In kvmppc_handle_pagefault(), if the access was
  allowed by the guest HPTE and we thus need to install a new host HPTE,
  we then need to remove the old host HPTE if there is one. This is done
  with a new function, kvmppc_mmu_unmap_page(), which uses
  kvmppc_mmu_pte_vflush() to find and remove the old host HPTE.

  Since the memslot-related functions require the KVM SRCU read lock to
  be held, this adds srcu_read_lock/unlock pairs around the calls to
  kvmppc_handle_pagefault().

  Finally, this changes kvmppc_mmu_book3s_32_xlate_pte() to not ignore
  guest HPTEs that don't permit access, and to return -EPERM for
  accesses that are not permitted by the page protections.

  Signed-off-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Alexander Graf <agraf@suse.de>
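  A sketch of the host-side decision described above, assuming the
  kernel's gfn_to_pfn_prot() signature (the HPTE flag name is
  illustrative):

      bool writable;

      /* Request write access only for stores, and learn whether the
       * host page is actually writable. */
      pfn = gfn_to_pfn_prot(vcpu->kvm, gfn, iswrite, &writable);
      if (!writable)
              hpte_flags &= ~HPTE_FLAG_WRITABLE;  /* map read-only */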
* KVM: do not treat noslot pfn as an error pfn (Xiao Guangrong, 2012-10-29, 1 file, -1/+1)

  This patch filters noslot pfns out of the error pfns, based on
  Marcelo's comment: a noslot pfn is not an error pfn.

  After this patch:
  - is_noslot_pfn indicates that the gfn is not in any slot
  - is_error_pfn indicates that the gfn is in a slot but an error
    occurred when translating the gfn to a pfn
  - is_error_noslot_pfn indicates that the pfn is either an error pfn
    or a noslot pfn

  And is_invalid_pfn can be removed, which makes the code cleaner.

  Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
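  The resulting predicates, in a hedged sketch of how a caller tells
  the cases apart:

      pfn = gfn_to_pfn(kvm, gfn);
      if (is_noslot_pfn(pfn)) {
              /* gfn is not backed by any memslot: treat as MMIO */
      } else if (is_error_pfn(pfn)) {
              /* gfn is in a slot, but the translation itself failed */
      }
      /* is_error_noslot_pfn(pfn) catches both conditions at once */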
* Merge remote-tracking branch 'master' into queue (Marcelo Tosatti, 2012-10-29, 1 file, -5/+5)

  Merge reason: development work has a dependency on kvm patches merged
  upstream.

  Conflicts:
    arch/powerpc/include/asm/Kbuild
    arch/powerpc/include/asm/kvm_para.h

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
| * powerpc: Build fix for powerpc KVM (Aneesh Kumar K.V, 2012-10-18, 1 file, -2/+2)

  Fix the build failure for powerpc KVM by adding the missing VPN_SHIFT
  definition and a missing ';':

    arch/powerpc/kvm/book3s_32_mmu_host.c: In function 'kvmppc_mmu_map_page':
    arch/powerpc/kvm/book3s_32_mmu_host.c:176: error: 'VPN_SHIFT' undeclared (first use in this function)
    arch/powerpc/kvm/book3s_32_mmu_host.c:176: error: (Each undeclared identifier is reported only once
    arch/powerpc/kvm/book3s_32_mmu_host.c:176: error: for each function it appears in.)
    arch/powerpc/kvm/book3s_32_mmu_host.c:178: error: expected ';' before 'next_pteg'
    arch/powerpc/kvm/book3s_32_mmu_host.c:190: error: label 'next_pteg' used but not defined
    make[1]: *** [arch/powerpc/kvm/book3s_32_mmu_host.o] Error 1

  Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
| * powerpc/mm: Convert virtual address to vpn (Aneesh Kumar K.V, 2012-09-17, 1 file, -4/+4)

  This patch converts different functions to take a virtual page number
  instead of a virtual address. The virtual page number is the virtual
  address shifted right by VPN_SHIFT (12) bits. This enables us to have
  an address range of up to 76 bits.

  Reviewed-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
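  The conversion itself is a simple shift; a minimal sketch:

      #define VPN_SHIFT 12

      /* Keeping vpns in an unsigned long leaves room for a
       * 64 + 12 = 76 bit virtual address range. */
      static inline unsigned long va_to_vpn(unsigned long va)
      {
              return va >> VPN_SHIFT;
      }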
* | KVM: PPC: Book3s: PR: Add (dumb) MMU Notifier support (Alexander Graf, 2012-10-05, 1 file, -0/+1)

  Now that we have very simple MMU Notifier support for e500 in place,
  also add the same simple support to book3s. It gets us one step closer
  to actual fast support.

  Signed-off-by: Alexander Graf <agraf@suse.de>
* KVM: PPC: Add cache flush on page map (Alexander Graf, 2012-08-16, 1 file, -0/+3)

  When we map a page that wasn't icache cleared before, do so when first
  mapping it in KVM using the same information bits as the Linux mapping
  logic. That way we are 100% sure that any page we map does not have
  stale entries in the icache.

  Signed-off-by: Alexander Graf <agraf@suse.de>
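  A hedged sketch of the flush, using the same PG_arch_1 convention the
  Linux mapping logic uses to record that a page's icache is clean:

      static void sketch_flush_icache(struct page *pg)
      {
              /* PG_arch_1 set means the page was already icache-cleaned */
              if (!test_bit(PG_arch_1, &pg->flags)) {
                      flush_dcache_icache_page(pg);
                      set_bit(PG_arch_1, &pg->flags);
              }
      }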
* KVM: PPC: Use get/set for to_svcpu to help preemption (Alexander Graf, 2012-03-05, 1 file, -6/+15)

  When running the 64-bit Book3s PR code without CONFIG_PREEMPT_NONE, we
  were doing a few things wrong, most notably access to PACA fields
  without making sure that the pointers stay stable across the access
  (preempt_disable()).

  This patch moves to_svcpu towards a get/put model which allows us to
  disable preemption while accessing the shadow vcpu fields in the PACA.
  That way we can run preemptible and everyone's happy!

  Reported-by: Jörg Sommer <joerg@alea.gnuu.de>
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
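  The get/put pattern, sketched along the lines the commit describes
  (struct and field names abbreviated):

      static inline struct kvmppc_book3s_shadow_vcpu *svcpu_get(struct kvm_vcpu *vcpu)
      {
              /* Pin the PACA: no preemption while we touch its fields. */
              preempt_disable();
              return &get_paca()->shadow_vcpu;
      }

      static inline void svcpu_put(struct kvmppc_book3s_shadow_vcpu *svcpu)
      {
              preempt_enable();
      }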
* KVM: PPC: Implement correct SID mapping on Book3s_32 (Alexander Graf, 2010-10-24, 1 file, -26/+31)

  Up until now we were doing segment mappings wrong on Book3s_32. For
  Book3s_64 we were using a trick where we know that a single
  mmu_context gives us 16 bits of context ids.

  The mm system on Book3s_32 instead uses a clever algorithm to
  distribute VSIDs across the available range, so a context id really
  only gives us 16 available VSIDs.

  To keep at least a few guest processes in the SID shadow, let's map a
  number of contexts that we can use as VSID pool. This makes the code
  be actually correct and shouldn't hurt performance too much.

  Signed-off-by: Alexander Graf <agraf@suse.de>
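  An entirely hypothetical illustration of the pooling idea (the real
  VSID formula lives in the 32-bit mm code): with 16 usable VSIDs per
  context, a flat shadow-SID index selects both a context from the
  reserved pool and a segment within it:

      static u32 sid_pool_pick(u32 pool_first_ctx, u32 map_idx)
      {
              u32 ctx = pool_first_ctx + map_idx / 16; /* which context */
              u32 seg = map_idx % 16;                  /* which VSID in it */
              return (ctx << 4) | seg;  /* stand-in for the mm VSID math */
      }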
* KVM: PPC: Revert "KVM: PPC: Use kernel hash function" (Alexander Graf, 2010-10-24, 1 file, -2/+8)

  It turns out the in-kernel hash function is sub-optimal for our subtle
  hash inputs where every bit is significant. So let's revert to the
  original hash functions.

  This reverts commit 05340ab4f9a6626f7a2e8f9fe5397c61d494f445.

  Signed-off-by: Alexander Graf <agraf@suse.de>
* KVM: PPC: correctly check gfn_to_pfn() return value (Gleb Natapov, 2010-10-24, 1 file, -1/+1)

  On failure gfn_to_pfn() returns bad_page, so use the correct function
  to check for that.

  Signed-off-by: Gleb Natapov <gleb@redhat.com>
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Magic Page Book3s support (Alexander Graf, 2010-10-24, 1 file, -1/+1)

  We need to override EA as well as PA lookups for the magic page. When
  the guest tells us to project it, the magic page overrides any guest
  mappings.

  In order to reflect that, we need to hook into all the MMU layers of
  KVM to force map the magic page if necessary.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Convert MSR to shared page (Alexander Graf, 2010-10-24, 1 file, -2/+2)

  One of the most obvious registers to share with the guest directly is
  the MSR. The MSR contains the "interrupts enabled" flag which the
  guest has to toggle in critical sections.

  So in order to bring the overhead of interrupt en- and disabling down,
  let's put msr into the shared page. Keep in mind that even though you
  can fully read its contents, writing to it doesn't always update all
  state. There are a few safe fields that don't require hypervisor
  interaction. See the documentation for a list of MSR bits that are
  safe to be set from inside the guest.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
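  A hedged sketch of the guest-side win: toggling MSR[EE] becomes a
  plain store to the shared page instead of a privileged mtmsr that
  would trap to the hypervisor:

      /* EE ("external interrupt enable") is one of the documented safe
       * bits; clearing it needs no hypervisor interaction. */
      static inline void sketch_guest_irq_disable(struct kvm_vcpu_arch_shared *shared)
      {
              shared->msr &= ~MSR_EE;
      }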
* KVM: PPC: Make use of hash based Shadow MMU (Alexander Graf, 2010-08-01, 1 file, -94/+10)

  We just introduced generic functions to handle shadow pages on PPC.
  This patch makes the respective backends make use of them, getting rid
  of a lot of duplicate code along the way.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
* KVM: PPC: Use kernel hash function (Alexander Graf, 2010-08-01, 1 file, -8/+2)

  The Linux kernel already provides a hash function. Let's reuse that
  instead of reinventing the wheel!

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
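  This presumably refers to the generic hash_64() helper from
  <linux/hash.h>; a sketch of the substitution (the index-width name is
  illustrative):

      #include <linux/hash.h>

      /* Replace the hand-rolled mixer with the kernel's multiplicative
       * hash, truncated to the HPTE cache's index width. */
      static inline u32 sketch_hpte_hash(u64 vpage)
      {
              return hash_64(vpage, HPTEG_HASH_BITS);
      }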
* KVM: PPC: Remove obsolete kvmppc_mmu_find_pte (Alexander Graf, 2010-08-01, 1 file, -20/+0)

  Initially we had to search for pte entries to invalidate them. Since
  the logic has improved since then, we can just get rid of the search
  function.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Find HTAB ourselves (Alexander Graf, 2010-05-17, 1 file, -8/+13)

  For KVM we need to find the location of the HTAB. We can either rely
  on internal data structures of the kernel or ask the hardware.

  Ben issued complaints about the internal data structure method, so
  let's switch it to our own inquiry of the HTAB. Now we're fully
  independent :-).

  CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
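  On 32-bit, "asking the hardware" means reading the HTAB origin out of
  the SDR1 special purpose register; a sketch (the HTABORG mask follows
  the classic 32-bit layout):

      /* SDR1's upper bits (HTABORG) hold the physical HTAB base. */
      ulong sdr1 = mfspr(SPRN_SDR1);
      ulong htab = (ulong)__va(sdr1 & 0xffff0000);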
* KVM: PPC: Convert u64 -> ulong (Alexander Graf, 2010-05-17, 1 file, -5/+3)

  There are some pieces in the code that I overlooked that still use
  u64s instead of longs. This slows down 32 bit hosts unnecessarily, so
  let's just move them to ulong.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Add host MMU Support (Alexander Graf, 2010-05-17, 1 file, -0/+480)

  In order to support 32 bit Book3S, we need to add code to enable our
  shadow MMU to actually add shadow PTEs. This is the module enabling
  that support.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>