path: root/arch/powerpc/mm/hash_utils_64.c
Commit message | Author | Age | Files | Lines
...
* [PATCH] powerpc/iseries: Fix double phys_to_abs bug in htab_bolt_mapping  (Michael Ellerman, 2006-02-28, 1 file, -1/+1)
    Before the merge I updated create_pte_mapping() to work for iSeries, by
    calling iSeries_hpte_bolt_or_insert (4c55130b2aa93370f1bf52d2304394e91cf8ee39).
    Later we changed iSeries_hpte_insert to cope with the bolting case, and
    called that instead from create_pte_mapping(), which was renamed to
    htab_bolt_mapping (3c726f8dee6f55e96475574e9f645327e461884c).

    Unfortunately that change introduced a subtle bug, where we pass an absolute
    address to iSeries_hpte_insert() where it expects a physical address. This
    leads to us calling phys_to_abs() twice on the physical address, which is
    seriously bogus. This only causes a problem if the absolute address from the
    first translation can be looked up again in the chunk_map, which depends on
    the size and layout of memory. I've seen it fail on one box, but not others.

    The minimal fix is to pass the physical address to iSeries_hpte_insert().
    For 2.6.17 we should make phys_to_abs() BUG if we try to double-translate
    an address.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
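    As an illustration of the failure mode described above, here is a minimal
    user-space sketch; the chunk size, chunk map contents and harness are
    invented for the example and are not the real iSeries msChunks code:

        /*
         * Illustrative sketch only -- toy chunk map, not the iSeries code.
         * Double-translating a physical address yields a valid-looking but
         * wrong absolute address, so nothing traps immediately.
         */
        #include <stdio.h>

        #define CHUNK_SHIFT 18          /* assume 256KB chunks for the sketch */
        #define NUM_CHUNKS  16

        /* toy map from physical chunk number to absolute chunk number */
        static unsigned long chunk_map[NUM_CHUNKS] = {
            3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2,
        };

        static unsigned long phys_to_abs(unsigned long pa)
        {
            unsigned long chunk  = pa >> CHUNK_SHIFT;
            unsigned long offset = pa & ((1UL << CHUNK_SHIFT) - 1);

            return (chunk_map[chunk] << CHUNK_SHIFT) | offset;
        }

        int main(void)
        {
            unsigned long pa    = 0x40000;            /* physical address, chunk 1 */
            unsigned long once  = phys_to_abs(pa);    /* correct: translate once   */
            unsigned long twice = phys_to_abs(once);  /* the bug: translate twice  */

            /* "twice" still lands inside the chunk map, so it looks plausible */
            printf("pa=%#lx once=%#lx twice=%#lx\n", pa, once, twice);
            return 0;
        }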
* Merge ../powerpc-merge  (Paul Mackerras, 2006-02-24, 1 file, -1/+2)
* [PATCH] powerpc: Only calculate htab_size in one place for kexec  (Michael Ellerman, 2006-02-24, 1 file, -1/+2)
    For kexec we need to know the size of the MMU hash table. Currently we
    calculate the size once in the htab code, and then twice more in the kexec
    code, once using htab_hash_mask and once using ppc64_pft_size. On some
    machines the ppc64_pft_size calculation is broken because ppc64_pft_size is
    not set.

    So we need to fix the second calculation, but better still we should just
    calculate the size once and use it everywhere else.

    Tested on Power5 LPAR, Power4 non-LPAR and Power3.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] powerpc: Always panic if lmb_alloc() fails  (Michael Ellerman, 2006-02-07, 1 file, -1/+0)
    Currently most callers of lmb_alloc() don't check if it worked or not; if it
    ever fails, weird bad things will probably happen. The few callers who do
    check just panic or BUG_ON.

    So make lmb_alloc() panic internally, to catch bugs at the source. The few
    callers who did check the result no longer need to.

    The only caller that did anything interesting with the return result was
    careful_allocation(). For it we create __lmb_alloc_base() which _doesn't_
    panic automatically, a little messy, but passable.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
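    The pattern described above, as a hedged user-space sketch (the names and
    the toy allocator behaviour are stand-ins, not the real lmb code): the
    common entry point panics on failure so callers never have to check, while
    a __-prefixed variant keeps the old non-panicking behaviour for the one
    caller that wants it:

        #include <stdio.h>
        #include <stdlib.h>

        /* stand-in for the kernel's panic() */
        static void panic(const char *msg)
        {
            fprintf(stderr, "panic: %s\n", msg);
            exit(1);
        }

        /* low-level allocator: returns 0 on failure (toy lmb stand-in) */
        static unsigned long __lmb_alloc_base(unsigned long size,
                                              unsigned long align,
                                              unsigned long max_addr)
        {
            (void)align;
            (void)max_addr;
            /* real code searches the lmb regions; this toy "fails" above 64K */
            return size <= 0x10000 ? 0x2000000UL : 0;
        }

        /* common entry point: failure is always fatal, callers never check */
        static unsigned long lmb_alloc_base(unsigned long size,
                                            unsigned long align,
                                            unsigned long max_addr)
        {
            unsigned long addr = __lmb_alloc_base(size, align, max_addr);

            if (!addr)
                panic("lmb_alloc_base() failed");
            return addr;
        }

        int main(void)
        {
            /* a careful_allocation()-style caller would use __lmb_alloc_base()
             * directly and handle the failure itself */
            unsigned long ok  = lmb_alloc_base(0x1000, 0x1000, ~0UL);   /* succeeds */
            unsigned long bad = lmb_alloc_base(0x200000, 0x1000, ~0UL); /* panics   */

            printf("ok=%#lx bad=%#lx\n", ok, bad);
            return 0;
        }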
* spelling: s/retreive/retrieve/  (Adrian Bunk, 2006-01-10, 1 file, -1/+1)
    Signed-off-by: Adrian Bunk <bunk@stusta.de>
* [PATCH] powerpc: Separate usage of KERNELBASE and PAGE_OFFSET  (Michael Ellerman, 2006-01-09, 1 file, -3/+3)
    This patch separates usage of KERNELBASE and PAGE_OFFSET. I haven't looked
    at any of the PPC32 code; if we ever want to support Kdump on PPC we'll
    have to do another audit, ditto for iSeries.

    This patch makes PAGE_OFFSET the constant, it'll always be 0xC * 1 gazillion
    for 64-bit.

    To get a physical address from a virtual one you subtract PAGE_OFFSET,
    _not_ KERNELBASE.

    KERNELBASE is the virtual address of the start of the kernel, it's often
    the same as PAGE_OFFSET, but _might not be_. If you want to know something's
    offset from the start of the kernel you should subtract KERNELBASE.

    Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
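    A minimal sketch of the distinction (the constants, the 32MB offset and the
    helper names are invented for the example; they are not the kernel's real
    values or macros):

        #include <stdio.h>

        /* example values only: KERNELBASE need not equal PAGE_OFFSET,
         * e.g. for a kernel loaded at a non-zero offset */
        #define PAGE_OFFSET 0xC000000000000000ULL            /* start of linear mapping */
        #define KERNELBASE  (PAGE_OFFSET + 0x2000000ULL)     /* invented 32MB offset    */

        /* virtual -> physical: always relative to PAGE_OFFSET */
        static unsigned long long virt_to_phys_example(unsigned long long vaddr)
        {
            return vaddr - PAGE_OFFSET;
        }

        /* offset of an address from the start of the kernel image: KERNELBASE */
        static unsigned long long kernel_offset_example(unsigned long long vaddr)
        {
            return vaddr - KERNELBASE;
        }

        int main(void)
        {
            unsigned long long vaddr = KERNELBASE + 0x1000;

            printf("phys  = %#llx\n", virt_to_phys_example(vaddr));
            printf("koffs = %#llx\n", kernel_offset_example(vaddr));
            return 0;
        }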
* [PATCH] spufs: The SPU file system, base  (Arnd Bergmann, 2006-01-09, 1 file, -0/+1)
    This is the current version of the spu file system, used for driving SPEs
    on the Cell Broadband Engine.

    This release is almost identical to the version for the 2.6.14 kernel posted
    earlier, which is available as part of the Cell BE Linux distribution from
    http://www.bsc.es/projects/deepcomputing/linuxoncell/.

    The first patch provides all the interfaces for running spu applications,
    but does not have any support for debugging SPU tasks or for scheduling.
    Both these functionalities are added in the subsequent patches.

    See Documentation/filesystems/spufs.txt on how to use spufs.

    Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] ppc64: htab_initialize_secondary cannot be marked __init  (Anton Blanchard, 2005-12-29, 1 file, -1/+1)
    Sonny has noticed hotplug CPU on ppc64 is broken in 2.6.15-*. One of the
    problems is that htab_initialize_secondary is called when a cpu is being
    brought up, but it is marked __init.

    Signed-off-by: Anton Blanchard <anton@samba.org>
    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] powerpc: Add missing icache flushes for hugepages  (David Gibson, 2005-12-09, 1 file, -1/+1)
    On most powerpc CPUs, the dcache and icache are not coherent, so between
    writing and executing a page, the caches must be flushed. Userspace programs
    assume pages given to them by the kernel are icache clean, so we must do
    this flush between the kernel clearing a page and it being mapped into
    userspace for execute. We were not doing this for hugepages; this patch
    corrects the situation.

    We use the same lazy mechanism as we use for normal pages, delaying the
    flush until userspace actually attempts to execute from the page in
    question.

    Tested on G5.

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
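    In outline, the lazy scheme works as in the following sketch (a simplified
    model with an invented per-page flag, not the kernel's actual page-flag
    machinery):

        #include <stdbool.h>
        #include <stdio.h>

        /* simplified stand-in for struct page with an "icache clean" flag */
        struct page_info {
            bool icache_clean;
        };

        static void flush_dcache_icache(struct page_info *pg)
        {
            /* real code would flush the caches over the page's address range */
            printf("flushing caches for page %p\n", (void *)pg);
            pg->icache_clean = true;
        }

        /* called when the kernel writes to the page (e.g. clearing/copying it) */
        static void page_written(struct page_info *pg)
        {
            pg->icache_clean = false;   /* defer the flush */
        }

        /* called when a fault maps the page executable for userspace */
        static void map_for_execute(struct page_info *pg)
        {
            if (!pg->icache_clean)      /* flush lazily, only if needed */
                flush_dcache_icache(pg);
        }

        int main(void)
        {
            struct page_info pg = { .icache_clean = false };

            page_written(&pg);
            map_for_execute(&pg);   /* flushes once */
            map_for_execute(&pg);   /* already clean, no second flush */
            return 0;
        }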
* [PATCH] powerpc: merge code values for identifying platforms  (Paul Mackerras, 2005-11-10, 1 file, -9/+20)
    This patch merges platform codes. systemcfg->platform is no longer used, and
    systemcfg use in general is deprecated as much as possible (and renamed
    _systemcfg before it gets completely moved elsewhere in a future patch).
    _machine is now used on ppc64 as well as ppc32. Platform codes aren't gone
    yet but we are getting a step closer. A bunch of asm code in head[_64].S is
    also turned into C code.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* [PATCH] revised Memory Add Fixes for ppc64  (Mike Kravetz, 2005-11-08, 1 file, -0/+9)
    Add the create_section_mapping() routine to create hptes for memory sections
    dynamically added after system boot.

    Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
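    Roughly, such a routine walks the hot-added physical range and bolts a hash
    PTE for each page; a sketch with invented names and a deliberately tiny
    "section" size, not the real implementation:

        #include <stdio.h>

        #define PAGE_SHIFT 12
        #define PAGE_SIZE  (1ULL << PAGE_SHIFT)

        /* stand-in: the real routine inserts a bolted hash PTE for the page */
        static void bolt_map_page(unsigned long long vaddr, unsigned long long paddr)
        {
            printf("bolt HPTE: va=%#llx -> pa=%#llx\n", vaddr, paddr);
        }

        /* when a memory section is hot-added, every page in it gets a bolted
         * mapping so the kernel's linear mapping covers the new memory */
        static void create_section_mapping_example(unsigned long long start_pa,
                                                   unsigned long long end_pa,
                                                   unsigned long long linear_offset)
        {
            unsigned long long pa;

            for (pa = start_pa; pa < end_pa; pa += PAGE_SIZE)
                bolt_map_page(linear_offset + pa, pa);
        }

        int main(void)
        {
            /* pretend a 16KB "section" was hot-added at physical 0x10000000 */
            create_section_mapping_example(0x10000000ULL, 0x10004000ULL,
                                           0xC000000000000000ULL);
            return 0;
        }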
* [PATCH] ppc64: Fix the lazy icache/dcache code for non-RAM pages  (Benjamin Herrenschmidt, 2005-11-08, 1 file, -0/+3)
    For some stupid reason I can't explain (brown paper bag is at hand), I
    removed the check for pfn_valid() in the code that does the icache/dcache
    coherency on POWER4 and later. That causes us to eventually try to access a
    non-existent struct page when hashing in IO pages.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
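    The fix amounts to guarding the lazy coherency code with a validity check
    on the pfn; a self-contained sketch (the helper and the RAM boundary are
    invented for the example):

        #include <stdbool.h>
        #include <stdio.h>

        #define MAX_RAM_PFN 0x100000UL  /* invented: PFNs above this are IO space */

        static bool pfn_valid_example(unsigned long pfn)
        {
            /* the real pfn_valid() consults the memory map */
            return pfn < MAX_RAM_PFN;
        }

        /* sketch of the guarded lazy-coherency step in the hash fault path */
        static void maybe_flush_icache(unsigned long pfn)
        {
            if (!pfn_valid_example(pfn)) {
                /* IO page: no struct page behind it, nothing to flush */
                return;
            }
            printf("pfn %#lx is RAM, run the lazy icache/dcache logic\n", pfn);
        }

        int main(void)
        {
            maybe_flush_icache(0x1000);     /* RAM page */
            maybe_flush_icache(0x200000);   /* IO page, safely skipped */
            return 0;
        }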
* Merge ../linux-2.6  (Paul Mackerras, 2005-11-08, 1 file, -2/+4)
* [PATCH] ppc64: Fix bug in SLB miss handler for hugepages  (David Gibson, 2005-11-07, 1 file, -2/+4)
    This patch, however, should be applied on top of the 64k-page-size patch to
    fix some problems with hugepage (some pre-existing, another introduced by
    this patch).

    The patch fixes a bug in the SLB miss handler for hugepages on ppc64
    introduced by the dynamic hugepage patch (commit id
    c594adad5653491813959277fb87a2fef54c4e05) due to a misunderstanding of the
    srd instruction's behaviour (mea culpa). The problem arises when a 64-bit
    process maps some hugepages in the low 4GB of the address space (unusual).
    In this case, as well as the 256M segment in question being marked for
    hugepages, other segments at 32G intervals will be incorrectly marked for
    hugepages.

    In the process, this patch tweaks the semantics of the hugepage bitmaps to
    be more sensible. Previously, an address below 4G was marked for hugepages
    if the appropriate segment bit in the "low areas" bitmask was set *or* if
    the low bit in the "high areas" bitmap was set (which would mark all
    addresses below 1TB for hugepage). With this patch, any given address is
    governed by a single bitmap. Addresses below 4GB are marked for hugepage if
    and only if their bit is set in the "low areas" bitmap (256M granularity).
    Addresses between 4GB and 1TB are marked for hugepage iff the low bit in
    the "high areas" bitmap is set. Higher addresses are marked for hugepage
    iff their bit in the "high areas" bitmap is set (1TB granularity).

    To avoid conflicts, this patch must be applied on top of BenH's pending
    patch for 64k base page size [0]. As such, this patch also addresses a
    hugepage problem introduced by that patch. That patch allows hugepages of
    1MB in size on hardware which supports it; however, that won't work when
    using 4k pages (4 level pagetable), because in that case hugepage PTEs are
    stored at the PMD level, and each PMD entry maps 2MB. This patch simply
    disallows hugepages in that case (we can do something cleverer to re-enable
    them some other day).

    Built, booted, and a handful of hugepage related tests passed on POWER5
    LPAR (both ARCH=powerpc and ARCH=ppc64).

    [0] http://gate.crashing.org/~benh/ppc64-64k-pages.diff

    Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Andrew Morton <akpm@osdl.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
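    The single-bitmap semantics described above can be written as one
    predicate; a sketch with invented variable names (the real check lives in
    the SLB miss path and its helpers):

        #include <stdbool.h>
        #include <stdio.h>

        #define SID_SHIFT       28  /* 256MB segments below 4GB */
        #define HTLB_AREA_SHIFT 40  /* 1TB areas at and above 4GB */
        #define FOUR_GB         (1ULL << 32)

        /* example bitmaps -- in the kernel these live in the mm context */
        static unsigned int       low_htlb_areas;   /* one bit per 256MB segment below 4GB */
        static unsigned long long high_htlb_areas;  /* one bit per 1TB area                */

        static bool addr_is_hugepage(unsigned long long addr)
        {
            if (addr < FOUR_GB)
                /* below 4GB: governed only by the low-areas bitmap */
                return low_htlb_areas & (1U << (addr >> SID_SHIFT));

            /* 4GB..1TB hits bit 0; higher addresses hit their own 1TB bit */
            return high_htlb_areas & (1ULL << (addr >> HTLB_AREA_SHIFT));
        }

        int main(void)
        {
            low_htlb_areas  = 1U << 3;    /* mark the 256MB segment at 0x30000000 */
            high_htlb_areas = 1ULL << 1;  /* mark the 1TB area starting at 1TB    */

            printf("%d %d %d\n",
                   addr_is_hugepage(0x30001000ULL),      /* 1: marked low segment */
                   addr_is_hugepage(0x80000000ULL),      /* 0: 2GB is unmarked    */
                   addr_is_hugepage(0x10000000000ULL));  /* 1: marked 1TB area    */
            return 0;
        }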
* Merge ../linux-2.6  (Paul Mackerras, 2005-11-07, 1 file, -117/+415)
* [PATCH] ppc64: support 64k pages  (Benjamin Herrenschmidt, 2005-11-06, 1 file, -117/+415)
    Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel base
    page size to 64K. The resulting kernel still boots on any hardware. On
    current machines with 4K pages support only, the kernel will maintain 16
    "subpages" for each 64K page transparently.

    Note that while real 64K capable HW has been tested, the current patch will
    not enable it yet as such hardware is not released yet, and I'm still
    verifying with the firmware architects the proper way to get the information
    from the newer hypervisors.

    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>
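    The subpage arithmetic is simply 64K / 4K = 16; a small sketch (constants
    and addresses invented for the example) of how one 64K kernel page maps to
    16 hardware pages:

        #include <stdio.h>

        #define HW_PAGE_SHIFT 12  /* 4K pages: what the MMU actually supports */
        #define PAGE_SHIFT    16  /* 64K pages: what the kernel hands out     */
        #define PTE_PER_PAGE  (1U << (PAGE_SHIFT - HW_PAGE_SHIFT))  /* 16 subpages */

        int main(void)
        {
            unsigned long kernel_page_va = 0x10000000UL;  /* example 64K-aligned VA */
            unsigned int sub;

            /* on 4K-only hardware, each 64K kernel page is hashed as 16 HPTEs */
            for (sub = 0; sub < PTE_PER_PAGE; sub++)
                printf("subpage %2u at va %#lx\n",
                       sub, kernel_page_va + ((unsigned long)sub << HW_PAGE_SHIFT));
            return 0;
        }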
* [PATCH] powerpc: Kill ppcdebug  (David Gibson, 2005-11-07, 1 file, -7/+0)
    The ancient ppcdebug/PPCDBG mechanism is now only used in two places.
    First, in the hash setup code, one of the bits allows the size of the hash
    table to be reduced by a factor of 8, which would be better accomplished
    with a command line option for that purpose. The other was a bunch of bus
    walking related messages in the iSeries code, which would seem to be
    insufficient reason to keep the mechanism.

    This patch removes the last traces of this mechanism.

    Built and booted on iSeries and pSeries POWER5 LPAR (ARCH=powerpc).

    Signed-off-by: David Gibson <dwg@au1.ibm.com>
    Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc: Move default hash table size calculation to hash_utils_64.c  (Paul Mackerras, 2005-10-12, 1 file, -1/+22)
    We weren't computing the size of the hash table correctly on iSeries because
    the relevant code in prom.c was #ifdef CONFIG_PPC_PSERIES. This moves the
    code to hash_utils_64.c, makes it unconditional, and cleans it up a bit.

    Signed-off-by: Paul Mackerras <paulus@samba.org>
* powerpc: Merge arch/ppc64/mm to arch/powerpc/mm  (Paul Mackerras, 2005-10-10, 1 file, -0/+438)
    This moves the remaining files in arch/ppc64/mm to arch/powerpc/mm, and
    arranges that we use them when compiling with ARCH=ppc64.

    Signed-off-by: Paul Mackerras <paulus@samba.org>