path: root/arch/powerpc/include/asm/mmu-book3e.h
Commit message | Author | Age | Files | Lines
* powerpc/e6500: Make TLB lock recursive (Scott Wood, 2014-03-19; 1 file, -3/+6)
  Once special level interrupts are supported, we may take nested TLB misses -- so allow the same thread to acquire the lock recursively. The lock will not be effective against the nested TLB miss handler trying to write the same entry as the interrupted TLB miss handler, but that's also a problem on non-threaded CPUs that lack TLB write conditional. This will be addressed in the patch that enables crit/mc support by invalidating the TLB on return from level exceptions.
  Signed-off-by: Scott Wood <scottwood@freescale.com>
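  A rough kernel-style C sketch of the "recursive for the owning thread" idea described above; the names and layout are hypothetical (the in-tree e6500 lock is taken from the assembly TLB miss handlers), and the usual kernel primitives (cmpxchg, cpu_relax) are assumed:

      struct tlb_core_lock {
              int owner;              /* CPU id of the current holder, -1 when free */
              unsigned int depth;     /* recursion depth for the owning CPU */
      };

      static void tlb_lock(struct tlb_core_lock *l, int cpu)
      {
              if (READ_ONCE(l->owner) == cpu) {
                      l->depth++;     /* nested TLB miss on the same thread */
                      return;
              }
              while (cmpxchg(&l->owner, -1, cpu) != -1)
                      cpu_relax();    /* wait for the sibling thread to drop it */
              l->depth = 1;
      }

      static void tlb_unlock(struct tlb_core_lock *l)
      {
              if (--l->depth == 0)
                      WRITE_ONCE(l->owner, -1);
      }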
* powerpc/e6500: TLB miss handler with hardware tablewalk support (Scott Wood, 2014-01-09; 1 file, -0/+13)
  There are a few things that make the existing hw tablewalk handlers unsuitable for e6500:
  - Indirect entries go in TLB1 (though the resulting direct entries go in TLB0).
  - It has threads, but no "tlbsrx." -- so we need a spinlock and a normal "tlbsx". Because we need this lock, hardware tablewalk is mandatory on e6500 unless we want to add spinlock+tlbsx to the normal bolted TLB miss handler.
  - TLB1 has no HES (nor next-victim hint) so we need software round robin (TODO: integrate this round robin data with hugetlb/KVM).
  - The existing tablewalk handlers map half of a page table at a time, because IBM hardware has a fixed 1MiB indirect page size. e6500 has variable size indirect entries, with a minimum of 2MiB. So we can't do the half-page indirect mapping, and even if we could it would be less efficient than mapping the full page.
  - Like on e5500, the linear mapping is bolted, so we don't need the overhead of supporting nested tlb misses.
  Note that hardware tablewalk does not work in rev1 of e6500. We do not expect to support e6500 rev1 in mainline Linux.
  Signed-off-by: Scott Wood <scottwood@freescale.com>
  Cc: Mihai Caraman <mihai.caraman@freescale.com>
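  The third point above (no next-victim hint in TLB1) is why victim selection has to be done in software. A simplified sketch of such a round-robin counter, with hypothetical names; the real code keeps this state per core and must also skip ways holding bolted entries:

      static unsigned int tlb1_next_way;      /* per-core, and initialised to the
                                                 first replaceable way, in a real
                                                 implementation */

      static unsigned int tlb1_pick_victim(unsigned int nr_ways,
                                           unsigned int first_replaceable)
      {
              unsigned int way = tlb1_next_way;

              if (++tlb1_next_way >= nr_ways)
                      tlb1_next_way = first_replaceable;  /* wrap past bolted ways */
              return way;
      }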
* powerpc: Fix build error for book3e (Aneesh Kumar K.V, 2013-05-02; 1 file, -0/+18)
  We moved the definition of shift_to_mmu_psize and mmu_psize_to_shift out of hugetlbpage.c in patch "powerpc: New hugepage directory format". These functions are not related to hugetlbpage and we want to use them outside hugetlbpage.c. We missed a definition for book3e when we moved these functions. Add similar functions to mmu-book3e.h.
  Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
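  Both helpers are simple lookups over the mmu_psize_defs[] table; they are roughly of this shape (a sketch that mirrors the usual form of these functions rather than a verbatim copy of the header):

      static inline int shift_to_mmu_psize(unsigned int shift)
      {
              int psize;

              for (psize = 0; psize < MMU_PAGE_COUNT; ++psize)
                      if (mmu_psize_defs[psize].shift == shift)
                              return psize;
              return -1;              /* no supported page size has this shift */
      }

      static inline unsigned int mmu_psize_to_shift(unsigned int mmu_psize)
      {
              if (mmu_psize_defs[mmu_psize].shift)
                      return mmu_psize_defs[mmu_psize].shift;
              BUG();                  /* asked for an unsupported page size */
      }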
* powerpc: Reduce PTE table memory wastage (Aneesh Kumar K.V, 2013-04-30; 1 file, -0/+4)
  We allocate one page for the last level of the linux page table. With THP and a large page size of 16MB, that means we are wasting a large part of that page. To map a 16MB area, we only need a PTE space of 2K with 64K page size. This patch reduces the space wastage by sharing the page allocated for the last level of the linux page table among multiple pmd entries. We call these smaller chunks PTE page fragments and the allocated page a PTE page.
  In order to support systems which don't have 64K HPTE support, we also add another 2K to the PTE page fragment. The second half of the PTE fragment is used for storing slot and secondary bit information of an HPTE. With this we now have a 4K PTE fragment.
  We use a simple approach to share the PTE page. On allocation, we bump the PTE page refcount to 16 and share the PTE page with the next 16 pte alloc requests. This should help the node locality of the PTE page fragment, assuming that the immediate pte alloc requests will mostly come from the same NUMA node. We don't try to reuse the freed PTE page fragment, hence we could be wasting some space.
  Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
  Acked-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
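  A condensed sketch of the sharing scheme, with hypothetical field names and locking omitted: the first request allocates a fresh page, takes one page reference per fragment, and later requests carve 4K fragments out of the same page until it is used up.

      #define PTE_FRAG_SIZE   4096    /* 2K of PTEs plus 2K of HPTE slot info */
      #define PTE_FRAG_NR     (PAGE_SIZE / PTE_FRAG_SIZE)     /* 16 with 64K pages */

      static void *pte_frag_alloc(struct mm_struct *mm)
      {
              void *frag = mm->context.pte_frag;      /* hypothetical per-mm cursor */
              struct page *page;

              if (frag) {
                      void *next = frag + PTE_FRAG_SIZE;

                      /* Hand out fragments until the end of the page is reached. */
                      mm->context.pte_frag =
                              (next - page_address(virt_to_page(frag)) < PAGE_SIZE) ?
                              next : NULL;
                      return frag;
              }

              page = alloc_page(GFP_KERNEL | __GFP_ZERO);
              if (!page)
                      return NULL;
              page_ref_add(page, PTE_FRAG_NR - 1);    /* one reference per fragment */
              mm->context.pte_frag = page_address(page) + PTE_FRAG_SIZE;
              return page_address(page);
      }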
* KVM: PPC: booke: Extend MAS2 EPN mask for 64-bit (Mihai Caraman, 2012-12-06; 1 file, -1/+1)
  Extend the MAS2 EPN mask to retain the most significant bits on 64-bit hosts. Use this mask in the TLB effective address accessor.
  Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
  Signed-off-by: Alexander Graf <agraf@suse.de>
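  An illustration of why the widening matters (the constant names here are made up; the real mask lives in this header): a 32-bit literal zero-extends on a 64-bit host, so ANDing with it silently drops every effective-address bit above bit 31.

      /* 32-bit style mask: truncates the upper word of a 64-bit effective address. */
      #define MAS2_EPN_NARROW 0xFFFFF000UL
      /* Widened mask: keeps every bit above the 4K page offset. */
      #define MAS2_EPN_WIDE   (~0xFFFUL)

      static inline unsigned long tlbe_eaddr(unsigned long mas2)
      {
              return mas2 & MAS2_EPN_WIDE;
      }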
* KVM: PPC: booke: category E.HV (GS-mode) support (Scott Wood, 2012-04-08; 1 file, -0/+6)
  Chips such as e500mc that implement category E.HV in Power ISA 2.06 provide hardware virtualization features, including a new MSR mode for guest state. The guest OS can perform many operations without trapping into the hypervisor, including transitions to and from guest userspace.
  Since we can use SRR1[GS] to reliably tell whether an exception came from guest state, instead of messing around with IVPR, we use DO_KVM similarly to book3s.
  Current issues include:
  - Machine checks from guest state are not routed to the host handler.
  - The guest can cause a host oops by executing an emulated instruction in a page that lacks read permission. Existing e500/4xx support has the same problem.
  Includes work by Ashish Kalra <Ashish.Kalra@freescale.com>, Varun Sethi <Varun.Sethi@freescale.com>, and Liu Yu <yu.liu@freescale.com>.
  Signed-off-by: Scott Wood <scottwood@freescale.com>
  [agraf: remove pt_regs usage]
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: e500: use hardware hint when loading TLB0 entries (Scott Wood, 2012-03-05; 1 file, -2/+3)
  The hardware maintains a per-set next victim hint. Using this reduces conflicts, especially on e500v2 where a single guest TLB entry is mapped to two shadow TLB entries (user and kernel). We want those two entries to go to different TLB ways.
  sesel is now only used for TLB1.
  Reported-by: Liu Yu <yu.liu@freescale.com>
  Signed-off-by: Scott Wood <scottwood@freescale.com>
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: e500: clear up confusion between host and guest entries (Scott Wood, 2012-03-05; 1 file, -0/+1)
  Split out the portions of tlbe_priv that should be associated with host entries into tlbe_ref. Base victim selection on the number of hardware entries, not guest entries.
  For TLB1, where one guest entry can be mapped by multiple host entries, we use the host tlbe_ref for tracking page references. For the guest TLB0 entries, we still track it with gtlb_priv, to avoid having to retranslate if the entry is evicted from the host TLB but not the guest TLB.
  Signed-off-by: Scott Wood <scottwood@freescale.com>
  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
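  A schematic of the split, with hypothetical field names: per-host-entry state (page pinning) lives in a tlbe_ref, while each guest TLB0 entry keeps its own copy so that an eviction from the host TLB does not force a retranslation.

      struct tlbe_ref {
              struct page *page;      /* host page backing one hardware TLB entry */
              unsigned int flags;     /* e.g. valid/writable tracking for that entry */
      };

      struct tlbe_priv {
              struct tlbe_ref ref;    /* host-side state cached for a guest TLB0 entry */
              /* plus whatever guest-visible state the entry needs */
      };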
* powerpc: Define/use HUGETLB_NEED_PRELOAD instead of complicated #if (Becky Bruce, 2011-12-07; 1 file, -0/+7)
  Define HUGETLB_NEED_PRELOAD in mmu-book3e.h for CONFIG_PPC64 instead of having a much more complicated #if block. This is easier to read and maintain.
  Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
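  The pattern is simply to hide the condition behind one symbol defined in mmu-book3e.h and to test that symbol at the use site; roughly (the preload call itself is elided here):

      /* In mmu-book3e.h: only 64-bit Book3E needs the preload hook. */
      #ifdef CONFIG_PPC64
      #define HUGETLB_NEED_PRELOAD
      #endif

      /* At the use site, the complicated #if block becomes: */
      #ifdef HUGETLB_NEED_PRELOAD
              /* ... preload the huge TLB entry ... */
      #endif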
* powerpc/book3e: Add ICSWX/ACOP support to Book3e cores like A2 (Jimi Xenidis, 2011-11-25; 1 file, -0/+4)
  ICSWX is also used by the A2 processor to access coprocessors, although not all "chips" that contain A2s have coprocessors.
  Signed-off-by: Jimi Xenidis <jimix@pobox.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Hugetlb for BookE (Becky Bruce, 2011-09-20; 1 file, -0/+7)
  Enable hugepages on Freescale BookE processors. This allows the kernel to use huge TLB entries to map pages, which can greatly reduce the number of TLB misses and the amount of TLB thrashing experienced by applications with large memory footprints. Care should be taken when using this on FSL processors, as the number of large TLB entries supported by the core is low (16-64) on current processors.
  The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g. Page sizes larger than the max zone size are called "gigantic" pages and must be allocated on the command line (and cannot be deallocated).
  This is currently only fully implemented for Freescale 32-bit BookE processors, but there is some infrastructure in the code for 64-bit BookE.
  Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Rename slb0_limit() to safe_stack_limit() and add Book3E support (Benjamin Herrenschmidt, 2011-05-06; 1 file, -0/+4)
  slb0_limit() wasn't a very descriptive name. This changes it along with a comment explaining what it's used for, and provides a 64-bit BookE implementation.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/book3e: Flush IPROT protected TLB entries leftover by firmware (Jack Miller, 2011-04-27; 1 file, -0/+1)
  When we set up the TLB for ourselves on Book3E, we need to flush out any old mappings established by the firmware or bootloader. At present we attempt this with a tlbilx to flush everything, but this will leave behind any entries with the IPROT bit set.
  There are several good reasons firmware might establish mappings with IPROT, and in fact ePAPR compliant firmwares are required to establish their initial mapped area with IPROT. This patch therefore adds more complex code to scan through the TLB upon entry and flush away any entries that are not our own.
  Signed-off-by: Jack Miller <jack@codezen.org>
  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
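  In rough C terms (the in-tree version is assembly in the early boot path), the scan reads back each TLB1 entry and invalidates anything valid that is not one of our own bolted entries. The function and variable names below are hypothetical; the MAS accessors are the usual ones from this header:

      static void flush_unknown_tlb1_entries(unsigned int tlb1_entries,
                                             unsigned int our_bolted_esel)
      {
              unsigned int esel, mas1;

              for (esel = 0; esel < tlb1_entries; esel++) {
                      if (esel == our_bolted_esel)    /* keep the mapping we run from */
                              continue;
                      mtspr(SPRN_MAS0, MAS0_TLBSEL(1) | MAS0_ESEL(esel));
                      asm volatile("tlbre");          /* pull the entry into MAS1..MAS3 */
                      mas1 = mfspr(SPRN_MAS1);
                      if (mas1 & MAS1_VALID) {
                              mtspr(SPRN_MAS1, mas1 & ~MAS1_VALID);
                              asm volatile("tlbwe");  /* write it back invalid */
                      }
              }
              asm volatile("isync");
      }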
* powerpc: Add TLB size detection for TYPE_3E MMUs (Benjamin Herrenschmidt, 2011-04-27; 1 file, -0/+15)
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
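  The array sizes come from the TLBnCFG configuration SPRs; a minimal sketch of reading the entry count (the mask macro follows the naming used in this header, and the helper name is made up):

      static unsigned int tlb_entries(int tlbsel)
      {
              unsigned int cfg = (tlbsel == 0) ? mfspr(SPRN_TLB0CFG)
                                               : mfspr(SPRN_TLB1CFG);

              return cfg & TLBnCFG_N_ENTRY;   /* low bits hold the number of entries */
      }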
* powerpc/book3e: Protect complex macro args in mmu-book3e.h (Scott Wood, 2011-02-07; 1 file, -4/+4)
  Signed-off-by: Scott Wood <scottwood@freescale.com>
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
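  The fix is the usual one of parenthesising macro arguments so that an expression passed in cannot re-associate with the macro body. An illustrative (not verbatim) before/after, using a made-up field macro:

      /* Unprotected: passing "hi | lo" expands to (hi | (lo << 16)), so only
       * "lo" is shifted, because << binds tighter than |. */
      #define MAS_FIELD_BAD(x)        (x << 16)
      /* Protected: the whole argument is shifted, whatever expression it is. */
      #define MAS_FIELD_GOOD(x)       (((x) << 16) & 0x0FFF0000)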
* powerpc/fsl-booke: Add support for FSL Arch v1.0 MMU in setup_page_sizes (Kumar Gala, 2010-10-14; 1 file, -0/+15)
  Update setup_page_sizes() to support an FSL-style MMU v1.0 implementation. In such a processor, we don't have TLB0PS or EPTCFG registers (and access to these registers may cause exceptions). We need to parse the older format of TLBnCFG for page size support. Additionally, since we are an FSL implementation, assume that we have 2 TLB arrays and that the second array contains the variable size pages.
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
* powerpc/book3e: Adjust the page sizes list based on MMU config (Benjamin Herrenschmidt, 2010-07-14; 1 file, -0/+4)
  Use the MMU config registers to scan for available direct and indirect page sizes and print out the result. Will be needed for future hugetlbfs implementation.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/booke: Move MMUCSR definition into mmu-book3e.h (Kumar Gala, 2009-08-24; 1 file, -0/+12)
  The MMUCSR is now defined as part of the Book-3E architecture so we can move it into mmu-book3e.h and add some of the additional bits defined by the architecture specs.
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
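  Typical use of the MMUCSR0 flash-invalidate bits looks roughly like this (the bit values shown are illustrative; the authoritative ones are in the header and the architecture specs):

      #define MMUCSR0_TLB0FI  0x00000004      /* flash invalidate all of TLB0 */
      #define MMUCSR0_TLB1FI  0x00000002      /* flash invalidate all of TLB1 */

      static inline void flash_invalidate_tlbs(void)
      {
              mtspr(SPRN_MMUCSR0, MMUCSR0_TLB0FI | MMUCSR0_TLB1FI);
              /* The bits read back as set until the invalidation completes. */
              while (mfspr(SPRN_MMUCSR0) & (MMUCSR0_TLB0FI | MMUCSR0_TLB1FI))
                      cpu_relax();
      }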
* powerpc/mm: Add support for SPARSEMEM_VMEMMAP on 64-bit Book3E (Benjamin Herrenschmidt, 2009-08-20; 1 file, -0/+1)
  The base TLB support didn't include support for SPARSEMEM_VMEMMAP; though we did carve out some virtual space for it, the necessary support code wasn't there. This implements it by using 16M pages for now, though the page size could easily be changed at runtime if necessary.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc: Add memory management headers for new 64-bit BookE (Benjamin Herrenschmidt, 2009-08-20; 1 file, -0/+27)
  This adds the PTE and pgtable format definitions, along with changes to the kernel memory map and other definitions related to implementing support for 64-bit Book3E. This also shields some asm-offset bits that are currently only relevant on 32-bit.
  We also move the definition of the "linux" page size constants to the common mmu.h file and add a few sizes that are relevant to embedded processors.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* powerpc/mm: Add more bit definitions for Book3E MMU registers (Benjamin Herrenschmidt, 2009-08-20; 1 file, -49/+119)
  This adds additional bit definitions for various MMU-related SPRs used on Book3E.
  Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
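  The flavour of the additions is field masks and helpers for the MAS registers; a hedged sample (values should be checked against the header and the ISA 2.06 Book-E chapters), with the composite macro at the end being a made-up example of how the pieces combine:

      #define MAS1_VALID              0x80000000      /* entry is valid */
      #define MAS1_IPROT              0x40000000      /* protected from flash invalidate */
      #define MAS1_TID(x)             (((x) << 16) & 0x3FFF0000)
      #define MAS1_TSIZE_SHIFT        7
      #define MAS1_TSIZE(x)           (((x) << MAS1_TSIZE_SHIFT) & 0x00000F80)

      /* Example: MAS1 for a valid, IPROT-protected, global (TID 0) entry. */
      #define MAS1_BOLTED(tsize)      (MAS1_VALID | MAS1_IPROT | MAS1_TSIZE(tsize))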
* Merge commit 'origin/master' into next (Benjamin Herrenschmidt, 2009-03-30; 1 file, -0/+2)
  Manual merge of:
  arch/powerpc/include/asm/elf.h
  drivers/i2c/busses/i2c-mpc.c
* powerpc/book-3e: Introduce concept of Book-3e MMU (Kumar Gala, 2009-02-12; 1 file, -0/+103)
  The Power ISA 2.06 spec introduces a standard MMU programming model that is based on the Freescale Book-E MMU programming model. The Freescale version is pretty backwards compatible with the ISA 2.06 definition, so we are starting to refactor some of the Freescale code so it can be easily shared.
  Signed-off-by: Kumar Gala <galak@kernel.crashing.org>