path: root/include/asm-powerpc/mmu-hash64.h
Commit message | Author | Age | Files | Lines
* [POWERPC] vmemmap fixes to use smaller pages | Benjamin Herrenschmidt | 2008-05-15 | 1 | -0/+1
This changes vmemmap to use a different region (region 0xf) of the address space, and to configure the page size of that region dynamically at boot.

The problem with the current approach of always using 16M pages is that it's not well suited to machines that have small amounts of memory, such as small partitions on pseries, or PS3's. In fact, on the PS3, failure to allocate the 16M page backing vmemmap tends to prevent hotplugging the HV's "additional" memory, thus limiting the available memory even more, from my experience down to something like 80M total, which makes it really not very usable.

The logic used by my patch to choose the vmemmap page size is:

 - If 16M pages are available and there's 1G or more RAM at boot, use that size.
 - Else if 64K pages are available, use that.
 - Else use 4K pages.

I've tested on a POWER6 (16M pages) and on an iSeries POWER3 (4K pages) and it seems to work fine.

Note that I intend to change the way we organize the kernel regions & SLBs so the actual region will change from 0xf back to something else at one point, as I simplify the SLB miss handler, but that will be for a later patch.

Signed-off-by: Paul Mackerras <paulus@samba.org>
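For illustration, the boot-time choice described above boils down to logic along these lines; the function name, arguments and constants here are hypothetical, not the kernel's actual identifiers:

    /* Pick a vmemmap page-size shift: 16M if available and >= 1G RAM at
     * boot, else 64K if available, else 4K. Illustrative sketch only. */
    static int vmemmap_page_shift(int have_16m, int have_64k,
                                  unsigned long boot_ram_bytes)
    {
            if (have_16m && boot_ram_bytes >= 0x40000000UL)
                    return 24;      /* 16M pages */
            if (have_64k)
                    return 16;      /* 64K pages */
            return 12;              /* 4K pages */
    }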
* [POWERPC] Move phys_addr_t definition into asm/types.h | Kumar Gala | 2008-04-17 | 1 | -3/+0
Moved phys_addr_t out of mmu-*.h and into asm/types.h so we can use it in places that before would have caused recursive includes. For example, to use phys_addr_t in <asm/page.h> we would have included <asm/mmu.h>, which would have possibly included <asm/mmu-hash64.h>, which includes <asm/page.h>. Wheeee recursive include.

CONFIG_PHYS_64BIT is a bit counterintuitive in light of ppc64 systems, and thus the config option is only used for ppc32 systems with >32-bit physical addresses (44x, 85xx, 745x, etc.).

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
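A sketch of roughly what the moved definition looks like in asm/types.h, per the description above; the exact preprocessor guards in the real header may differ:

    /* Sketch only: 64-bit physical addresses on ppc64, and on those
     * ppc32 parts that select CONFIG_PHYS_64BIT; 32-bit otherwise. */
    #if defined(CONFIG_PPC64) || defined(CONFIG_PHYS_64BIT)
    typedef u64 phys_addr_t;
    #else
    typedef u32 phys_addr_t;
    #endif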
* [POWERPC] Provide a way to protect 4k subpages when using 64k pages | Paul Mackerras | 2008-01-24 | 1 | -1/+2
Using 64k pages on 64-bit PowerPC systems makes life difficult for emulators that are trying to emulate an ISA, such as x86, which uses a smaller page size, since the emulator can no longer use the MMU and the normal system calls for controlling page protections. Of course, the emulator can emulate the MMU by checking and possibly remapping the address for each memory access in software, but that is pretty slow.

This provides a facility for such programs to control the access permissions on individual 4k sub-pages of 64k pages. The idea is that the emulator supplies an array of protection masks to apply to a specified range of virtual addresses. These masks are applied at the level where hardware PTEs are inserted into the hardware page table based on the Linux PTEs, so the Linux PTEs are not affected. Note that this new mechanism does not allow any access that would otherwise be prohibited; it can only prohibit accesses that would otherwise be allowed. This new facility is only available on 64-bit PowerPC and only when the kernel is configured for 64k pages.

The masks are supplied using a new subpage_prot system call, which takes a starting virtual address and length, and a pointer to an array of protection masks in memory. The array has a 32-bit word per 64k page to be protected; each 32-bit word consists of 16 2-bit fields, for which 0 allows any access (that is otherwise allowed), 1 prevents write accesses, and 2 or 3 prevent any access.

Implicit in this is that the regions of the address space that are protected are switched to use 4k hardware pages rather than 64k hardware pages (on machines with hardware 64k page support). In fact the whole process is switched to use 4k hardware pages when the subpage_prot system call is used, but this could be improved in future to switch only the affected segments.

The subpage protection bits are stored in a 3-level tree akin to the page table tree. The top level of this tree is stored in a structure that is appended to the top level of the page table tree, i.e., the pgd array. Since it will often only be 32-bit addresses (below 4GB) that are protected, the pointers to the first four bottom-level pages are also stored in this structure (each bottom-level page contains the protection bits for 1GB of address space), so the protection bits for addresses below 4GB can be accessed with one fewer load than those for higher addresses.

Signed-off-by: Paul Mackerras <paulus@samba.org>
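A hypothetical userspace use of the facility described above. There is no libc wrapper, so the call goes through syscall(); __NR_subpage_prot comes from the kernel headers, and the mapping of the 2-bit field index to a bit position within each word is shown here only as an assumption:

    #include <stdint.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Make the third 4K subpage of one 64K page read-only (field value 1). */
    static int protect_one_subpage(void *addr_64k_aligned)
    {
            uint32_t map[1] = { 0 };

            map[0] |= 1u << (2 * 2);        /* field 2 = third subpage, value 1 = no write */
            return syscall(__NR_subpage_prot,
                           (unsigned long)addr_64k_aligned,
                           0x10000UL,       /* length: one 64K page */
                           map);
    }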
* [POWERPC] Add hugepagesz boot-time parameter | Jon Tollefson | 2008-01-17 | 1 | -0/+1
This adds the hugepagesz boot-time parameter for ppc64. It lets one pick the size for huge pages. The choices available are 64K and 16M when the base page size is 4k. It defaults to 16M (previously the only choice) if nothing or an invalid choice is specified.

Tested 64K huge pages successfully with libhugetlbfs 1.2.

Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
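For illustration, on a 4K base-page-size kernel the parameter would be given on the kernel command line along these lines:

    hugepagesz=64K hugepages=128

where hugepages= (the existing pool-size parameter) is shown only for context; the exact spellings the parser accepts should be checked against Documentation/kernel-parameters.txt.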
* [POWERPC] Kill sparse warning in HPTE_V_COMPARE() | Geert Uytterhoeven | 2008-01-17 | 1 | -1/+1
Fixes sparse warning:

    constant 0xffffffffffffff80 is so big it is unsigned long

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
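The usual cure for this class of sparse warning is an explicit unsigned-long suffix on the constant; presumably the one-line change amounts to something like the following (the exact macro body in mmu-hash64.h may differ slightly):

    #define HPTE_V_COMPARE(x, y)    (!(((x) ^ (y)) & 0xffffffffffffff80UL))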
* [POWERPC] Use SLB size from the device tree | Michael Neuling | 2007-12-11 | 1 | -0/+1
Currently we hardwire the number of SLBs to 64, but PAPR says we should use the ibm,slb-size property to obtain the number of SLB entries. This uses this property instead of assuming 64. If no property is found, we assume 64 entries as before.

This soft patches the SLB handler, so it shouldn't change performance at all.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
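A sketch (not the kernel's exact code) of how the property can be picked up during early device-tree scanning, falling back to the old hardwired value when it is absent:

    static void slb_size_init(unsigned long node)
    {
            const u32 *prop;

            prop = of_get_flat_dt_prop(node, "ibm,slb-size", NULL);
            mmu_slb_size = prop ? *prop : 64;       /* 64 was the previous hardwired count */
    }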
* [POWERPC] Use 1TB segments | Paul Mackerras | 2007-10-12 | 1 | -42/+98
This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T).

We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree.

We don't currently use 1TB segments for user addresses < 1T, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here.

Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>
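A sketch of the detection described above, assuming the ibm,processor-segment-sizes property is a flat list of supported segment-size shifts (0x1c for 256MB, 0x28 for 1TB); the function name and details are illustrative, not the kernel's actual code:

    static int cpu_supports_1tb_segments(unsigned long node)
    {
            unsigned long size;
            const u32 *prop;

            prop = of_get_flat_dt_prop(node, "ibm,processor-segment-sizes", &size);
            for (; prop && size >= 4; size -= 4, prop++)
                    if (*prop == 0x28)      /* shift 40, i.e. 2^40 = 1TB segments */
                            return 1;
            return 0;
    }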
* [POWERPC] Celleb: New HTAB Guest OS Interface on Beat | Ishizaki Kou | 2007-10-03 | 1 | -0/+1
This changes the Celleb code to work with the new Guest OS Interface for tweaking the HTAB on Beat. It detects old and new Guest OS Interfaces automatically.

Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fixes for the SLB shadow buffer code | Michael Neuling | 2007-08-03 | 1 | -0/+1
On a machine with hardware 64kB pages and a kernel configured for a 64kB base page size, we need to change the vmalloc segment from 64kB pages to 4kB pages if some driver creates a non-cacheable mapping in the vmalloc area. However, we never updated the SLB shadow buffer. This fixes it. Thanks to paulus for finding this.

Also added some write barriers to ensure the shadow buffer contents are always consistent.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Fix Kexec/Kdump for power6 | Sachin P. Sant | 2007-06-25 | 1 | -0/+3
On Power machines supporting VRMA, Kexec/Kdump does not work. VRMA (virtual real-mode area) means that accesses with IR/DR = 0 (i.e. the MMU "off") actually still go through the hash table, using entries put there by the hypervisor. This means that when we clear out the hash table on kexec, we need to make sure these entries are left untouched.

This also adds plpar_pte_read_raw() on the lines of plpar_pte_remove_raw().

Signed-off-by: Sachin Sant <sachinp@in.ibm.com>
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Kill typedef-ed structs for hash PTEs and BATs | David Gibson | 2007-06-14 | 1 | -3/+3
Using typedefs to rename structure types is frowned on by CodingStyle. However, we do so for the hash PTE structure on both ppc32 (where it's called "PTE") and ppc64 (where it's called "hpte_t"). On ppc32 we also have such a typedef for the BATs ("BAT").

This removes this unhelpful use of typedefs, in the process bringing ppc32 and ppc64 closer together by using the name "struct hash_pte" in both cases.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
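As a rough sketch of the shape of this change on the 64-bit side (the v/r doubleword pair is the conventional hash PTE layout; treat the field names as illustrative):

    /* before: a typedef-ed name */
    typedef struct {
            unsigned long v;
            unsigned long r;
    } hpte_t;

    /* after: a plain struct tag, shared by ppc32 and ppc64 code */
    struct hash_pte {
            unsigned long v;
            unsigned long r;
    };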
* [POWERPC] Fix warning in hpte_decode(), and generalize it | Paul Mackerras | 2007-05-10 | 1 | -1/+11
This adds the necessary support to hpte_decode() to handle 1TB segments and 16GB pages, and removes an uninitialized value warning on avpn.

We don't have any code to generate HPTEs for 1TB segments or 16GB pages yet, so this is mostly for completeness, and to fix the warning.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
* [POWERPC] Introduce address space "slices" | Benjamin Herrenschmidt | 2007-05-09 | 1 | -4/+7
The basic issue is to be able to do what hugetlbfs does but with different page sizes for some other special filesystems; more specifically, my need is:

 - Huge pages
 - SPE local store mappings using 64K pages on a 4K base page size kernel on Cell
 - Some special 4K segments in 64K-page kernels for mapping a dodgy type of powerpc-specific infiniband hardware that requires 4K MMU mappings for various reasons I won't explain here.

The main issues are:

 - To maintain/keep track of the page size per "segment" (as we can only have one page size per segment on powerpc, which are 256MB divisions of the address space).
 - To make sure special mappings stay within their allotted "segments" (including MAP_FIXED crap).
 - To make sure everybody else doesn't mmap/brk/grow_stack into a "segment" that is used for a special mapping.

Some of the necessary mechanisms to handle that were present in the hugetlbfs code, but mostly in ways not suitable for anything else. The patch relies on some changes to the generic get_unmapped_area() that just got merged. It still hijacks hugetlb callbacks here or there as the generic code hasn't been entirely cleaned up yet, but that shouldn't be a problem.

So what is a slice? Well, I re-used the mechanism used formerly by our hugetlbfs implementation which divides the address space into "meta-segments" which I called "slices". The division is done using 256MB slices below 4G, and 1T slices above. Thus the address space is divided currently into 16 "low" slices and 16 "high" slices. (Special case: high slice 0 is the area between 4G and 1T.)

Doing so simplifies significantly the tracking of segments and avoids having to keep track of all the 256MB segments in the address space.

While I used the "concepts" of hugetlbfs, I mostly re-implemented everything in a more generic way and "ported" hugetlbfs to it.

Slices can have an associated page size, which is encoded in the mmu context and used by the SLB miss handler to set the segment sizes. The hash code currently doesn't care; it has a specific check for hugepages, though I might add a mechanism to provide per-slice hash mapping functions in the future.

The slice code provides a pair of "generic" get_unmapped_area() (bottomup and topdown) functions that should work with any slice size. There is some trickiness here, so I would appreciate people having a look at the implementation of these and letting me know if I got something wrong.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
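The geometry described above (16 low slices of 256MB covering 0..4G, 16 high slices of 1TB above, with high slice 0 spanning 4G..1T) can be illustrated with a small helper; this is a sketch only, not the kernel's slice code:

    /* Return the slice index for an address; *high says which array it
     * indexes. 256MB (2^28) slices below 4G, 1TB (2^40) slices above. */
    static unsigned int addr_to_slice(unsigned long addr, int *high)
    {
            if (addr < 0x100000000UL) {
                    *high = 0;
                    return addr >> 28;      /* low slices 0..15 */
            }
            *high = 1;
            return addr >> 40;              /* 4G..1T falls in high slice 0 */
    }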
* [POWERPC] Prepare for splitting up mmu.h by MMU type | David Gibson | 2007-04-27 | 1 | -0/+400
Currently asm-powerpc/mmu.h has definitions for the 64-bit hash based MMU. If CONFIG_PPC64 is not set, it instead includes asm-ppc/mmu.h, which contains a particularly horrible mess of #ifdefs giving the definitions for all the various 32-bit MMUs.

It would be nice to have the low level definitions for each MMU type neatly in their own separate files. It would also be good to wean arch/powerpc off dependence on the old asm-ppc/mmu.h.

This patch makes a start on such a cleanup by moving the definitions for the 64-bit hash MMU to their own file, asm-powerpc/mmu-hash64.h. Definitions for the other MMUs still all come from asm-ppc/mmu.h; however, each MMU type can now be moved over one by one to its own file, in the process cleaning them up and stripping them of cruft no longer necessary in arch/powerpc.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
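A rough sketch of what the split leaves asm-powerpc/mmu.h looking like (the exact guard names and comments in the real header may differ):

    #ifndef _ASM_POWERPC_MMU_H_
    #define _ASM_POWERPC_MMU_H_

    #ifdef CONFIG_PPC64
    /* 64-bit hash MMU definitions now live in their own file */
    #include <asm/mmu-hash64.h>
    #else
    /* 32-bit MMU definitions still come from the old asm-ppc header */
    #include <asm-ppc/mmu.h>
    #endif

    #endif /* _ASM_POWERPC_MMU_H_ */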