path: root/arch/sh/include/asm/page.h
Commit message (Author, Age, Files, Lines)
* sh: Fix cached/uncached address calculation in 29bit mode (Nobuhiro Iwamatsu, 2011-11-04, 1 file, -0/+5)
In 29bit mode, the CAC_ADDR/UNCAC_ADDR macros do not return the right address. Fix this by using P1SEGADDR and P2SEGADDR in 29bit mode.

Reported-by: Yutaro Ebihara <ebiharaml@si-linux.co.jp>
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
Tested-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Tested-by: Simon Horman <horms@verge.net.au>
Cc: stable@kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
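As a rough illustration, a minimal sketch of the corrected macros, assuming the P1SEGADDR/P2SEGADDR helpers from asm/addrspace.h and a CONFIG_29BIT guard (the exact guard and surrounding context are assumptions, not the literal patch):

	/*
	 * Sketch only: in 29bit mode the cached view of memory lives in
	 * the P1 segment and the uncached view in P2, so the existing
	 * segment conversion helpers give the right addresses directly.
	 */
	#ifdef CONFIG_29BIT
	#define CAC_ADDR(addr)		P1SEGADDR(addr)
	#define UNCAC_ADDR(addr)	P2SEGADDR(addr)
	#endif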
* sh: kexec: Add PHYSICAL_START (Simon Horman, 2011-10-28, 1 file, -0/+10)
Add a PHYSICAL_START kernel configuration parameter to set the address at which the kernel should be loaded.

It has been observed on an sh7757lcr that simply modifying MEMORY_START does not achieve this goal for 32bit sh. This is due to MEMORY_OFFSET in arch/sh/kernel/vmlinux.lds.S not being based on MEMORY_START on such systems.

Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
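For illustration, a minimal sketch of how such a parameter typically surfaces in page.h, assuming a CONFIG_PHYSICAL_START Kconfig symbol with a fall-back to the memory start (the names and fallback are assumptions):

	/* Load address of the kernel: explicit when configured, otherwise
	 * the historic behaviour of loading at the start of memory. */
	#ifdef CONFIG_PHYSICAL_START
	#define PHYSICAL_START	CONFIG_PHYSICAL_START
	#else
	#define PHYSICAL_START	CONFIG_MEMORY_START
	#endif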
* Fix common misspellings (Lucas De Marchi, 2011-03-31, 1 file, -1/+1)
Fixes generated by 'codespell' and manually reviewed.

Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
* dma-mapping: rename ARCH_KMALLOC_MINALIGN to ARCH_DMA_MINALIGN (FUJITA Tomonori, 2010-08-11, 1 file, -1/+1)
Now each architecture has its own dma_get_cache_alignment implementation.

dma_get_cache_alignment returns the minimum DMA alignment. Architectures define it as ARCH_KMALLOC_MINALIGN (it's used to make sure that a kmalloc'ed buffer is DMA-safe; the buffer doesn't share a cache line with others). So we can unify the dma_get_cache_alignment implementations.

This patch: dma_get_cache_alignment() needs to know whether an architecture defines ARCH_KMALLOC_MINALIGN or not (i.e. whether the architecture has a DMA alignment restriction). However, slab.h defines ARCH_KMALLOC_MINALIGN if the architecture doesn't define it. Let's rename ARCH_KMALLOC_MINALIGN to ARCH_DMA_MINALIGN. ARCH_KMALLOC_MINALIGN is used only in the internals of slab/slob/slub (except for crypto).

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
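A sketch of the unified helper this rename makes possible; the fallback value of 1 reflects the generic "no restriction" case and is an assumption about the final merged code:

	#include <linux/slab.h>

	/* With ARCH_DMA_MINALIGN reserved for real DMA restrictions, a
	 * single generic implementation can serve every architecture. */
	static inline int dma_get_cache_alignment(void)
	{
	#ifdef ARCH_DMA_MINALIGN
		return ARCH_DMA_MINALIGN;	/* arch has a DMA alignment restriction */
	#else
		return 1;			/* no restriction: byte alignment is fine */
	#endif
	}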
* sh: rework memory limits to work with LMB. (Paul Mundt, 2010-05-10, 1 file, -1/+1)
This reworks the memory limit handling to tie in through the available LMB infrastructure. This requires a bit of reordering, as we need to have all of the LMB reservations taken care of prior to establishing the limits.

While we're at it, the crash kernel reservation semantics are reworked so that we allocate from the bottom up and reduce the risk of having to disable the memory limit due to a clash with the crash kernel reservation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
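A sketch of the ordering this implies, using the lmb-era API (lmb_reserve(), lmb_enforce_memory_limit(), lmb_analyze() from include/linux/lmb.h of that period); the helper and its parameters are hypothetical:

	#include <linux/init.h>
	#include <linux/lmb.h>

	static void __init setup_memory_limits(u64 crash_base, u64 crash_size,
					       u64 mem_limit)
	{
		/* All reservations first, crash kernel included ... */
		lmb_reserve(crash_base, crash_size);

		/* ... only then clamp the usable memory and recompute totals. */
		lmb_enforce_memory_limit(mem_limit);
		lmb_analyze();
	}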
* sh: Assembly friendly __pa and __va definitions (Matt Fleming, 2010-04-25, 1 file, -4/+9)
This patch defines ___pa and ___va, which translate an address to its physical and virtual counterpart, respectively. These macros are suitable for calling from assembly because they don't include the C casts required by __pa and __va.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
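A minimal sketch of the shape this takes, assuming the conventional PAGE_OFFSET/__MEMORY_START relationship on sh (the exact offsets are illustrative):

	/* Pure arithmetic, no casts: safe to pull into .S files. */
	#define ___pa(x)	((x) - PAGE_OFFSET + __MEMORY_START)
	#define ___va(x)	((x) + PAGE_OFFSET - __MEMORY_START)

	/* The C-callable versions just add the casts back on top. */
	#define __pa(x)		___pa((unsigned long)(x))
	#define __va(x)		((void *)___va((unsigned long)(x)))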
* sh: Merge legacy and dynamic PMB modes. (Paul Mundt, 2010-02-18, 1 file, -10/+1)
This implements a bit of rework for the PMB code, which permits us to kill off the legacy PMB mode completely. Rather than trusting the boot loader to do the right thing, we do a quick verification of the PMB contents to determine whether to have the kernel set up the initial mappings or whether it needs to mangle them later on instead.

If we're booting from legacy mappings, the kernel will now take control of them and make them match the kernel's initial mapping configuration. This is accomplished by breaking the initialization phase out into multiple steps: synchronization, merging, and resizing. With the recent rework, the synchronization code establishes page links for compound mappings already, so we build on top of this for promoting mappings and reclaiming unused slots.

At the same time, the changes introduced for the uncached helpers also permit us to dynamically resize the uncached mapping without any particular headaches. The smallest page size is more than sufficient for mapping all of kernel text, and as we're careful not to jump to any far off locations in the setup code the mapping can safely be resized regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
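As a sketch, the verification step could amount to scanning the PMB array registers for entries the boot loader left valid; PMB_V, NR_PMB_ENTRIES, and mk_pmb_addr() are taken from asm/mmu.h of this era and should be treated as assumptions:

	#include <linux/init.h>
	#include <asm/io.h>
	#include <asm/mmu.h>

	/* Return nonzero if the boot loader left any valid PMB entries
	 * behind, i.e. we are booting on top of legacy mappings. */
	static int __init pmb_legacy_mappings_present(void)
	{
		int i;

		for (i = 0; i < NR_PMB_ENTRIES; i++)
			if (__raw_readl(mk_pmb_addr(i)) & PMB_V)
				return 1;

		return 0;
	}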
* sh: uncached mapping helpers. (Paul Mundt, 2010-02-17, 1 file, -1/+18)
This adds some helper routines for uncached mapping support. This simplifies some of the cases where we need to check the uncached mapping boundaries, in addition to giving us a centralized location for building more complex manipulations on top of.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
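A sketch of the helper surface this centralization suggests, with the uncached window described by a single [uncached_start, uncached_end) pair (the names are assumptions):

	#ifdef CONFIG_UNCACHED_MAPPING
	extern unsigned long uncached_start, uncached_end;

	extern int virt_addr_uncached(unsigned long kaddr);
	extern void uncached_init(void);
	#else
	#define virt_addr_uncached(kaddr)	(0)
	#define uncached_init()			do { } while (0)
	#endif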
* sh: Kill off some superfluous legacy PMB special casing. (Paul Mundt, 2010-02-16, 1 file, -6/+1)
The __va()/__pa() offsets and the boot memory offsets are consistent for all PMB users, so there is no need to special case these for legacy PMB. Kill the special casing off and depend on CONFIG_PMB across the board. This also fixes up yet another addressing bug for sh64.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Acquire some more page flags for SH-5. (Matt Fleming, 2010-01-16, 1 file, -1/+1)
We need some more page flags to hook up _PAGE_WIRED (and eventually other things). So use the unused PTE bits above the PPN field, as no implementations use these for anything currently.

Now that we have _PAGE_WIRED, let's provide the SH-5 functions for wiring up TLB entries.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
* sh: fixed PMB mode refactoring. (Paul Mundt, 2010-01-13, 1 file, -1/+1)
This introduces some much overdue chainsawing of the fixed PMB support. Fixed PMB was introduced initially to work around the fact that dynamic PMB mode was relatively broken, though they were never intended to converge. The main areas where there are differences are whether the system is booted in 29-bit mode or 32-bit mode, and whether legacy mappings are to be preserved. Any system booting in true 32-bit mode will not care about legacy mappings, so these are roughly decoupled.

Regardless of the entry point, PMB and 32BIT are directly related as far as the kernel is concerned, so we also switch back to having one select the other.

With legacy mappings iterated through and applied in the initialization path, it's now possible to finally merge the two implementations and permit dynamic remapping on top of remaining entries, regardless of whether boot mappings are crafted by hand or inherited from the boot loader.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Convert cache disabled SH-5 over to new cache interface. (Paul Mundt, 2009-08-16, 1 file, -8/+0)
The cache-enabled case needs more work, but it is presently broken regardless, so this conversion can be done incrementally.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: rework nommu for generic cache.c use. (Paul Mundt, 2009-08-15, 1 file, -6/+1)
This does a bit of reorganizing to allow nommu to use the new and generic cache.c; no functional changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Use the now generic SH-4 clear/copy page ops for all MMU platforms. (Paul Mundt, 2009-07-27, 1 file, -5/+6)
Now that the SH-4 page clear/copy ops are generic, they can be used for all platforms with CONFIG_MMU=y. SH-5 remains the odd one out, but it too will gradually be converted over to using this interface.

SH-3 platforms which do not contain aliases will see no impact from this change, while aliasing SH-3 platforms will get the same interface as SH-4.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
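In page.h terms, making the ops generic boils down to exposing the highpage-based hook for every MMU platform; a sketch following the __HAVE_ARCH_COPY_USER_HIGHPAGE convention from linux/highmem.h:

	struct page;
	struct vm_area_struct;

	/* Alias-aware copy of a user page, replacing the bare copy_page()
	 * path; linux/highmem.h skips its fallback when the guard is set. */
	extern void copy_user_highpage(struct page *to, struct page *from,
				       unsigned long vaddr,
				       struct vm_area_struct *vma);
	#define __HAVE_ARCH_COPY_USER_HIGHPAGE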
* sh: wire up clear_user_highpage() for sh4, convert sh7705. (Paul Mundt, 2009-07-27, 1 file, -4/+8)
This wires up clear_user_highpage() on SH-4 and subsequently converts the SH7705 32kB cache mode over to using it. Now that the SH-4 implementation handles all of the dcache purging directly in the aliasing case, there is no need to do this in the default clear_page() implementation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
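The clear side follows the same pattern; linux/highmem.h falls back to a plain clear_page() when the arch does not define the hook, so a sketch of the override looks like:

	struct page;

	/* Alias-aware clear of a user page; dcache purging happens here
	 * rather than in the default clear_page(). */
	extern void clear_user_highpage(struct page *page, unsigned long vaddr);
	#define clear_user_highpage	clear_user_highpage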
* sh: Migrate from PG_mapped to PG_dcache_dirty. (Paul Mundt, 2009-07-22, 1 file, -0/+6)
This inverts the delayed dcache flush a bit to be more in line with other platforms. At the same time this also gives us the ability to do some more optimizations and cleanup.

Now that the update_mmu_cache() callsite only tests for the bit, the implementation can gradually be split out and made generic, rather than relying on special implementations for each of the peculiar CPU types. SH7705 in 32kB mode and SH-4 still need slightly different handling, but this is something that can remain isolated in the varying page copy/clear routines.

On top of that, SH-X3 is dcache coherent, so there is no need to bother with any of these tests in the PTEAEX version of update_mmu_cache(), so we kill that off too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
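A sketch of the deferred-flush pattern described above, with PG_dcache_dirty aliased to the arch-private page flag (a conventional but here assumed choice) and sh's __flush_wback_region() standing in for the CPU-specific writeback:

	#include <linux/mm.h>
	#include <asm/cacheflush.h>

	#define PG_dcache_dirty	PG_arch_1

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		unsigned long pfn = pte_pfn(pte);

		if (pfn_valid(pfn)) {
			struct page *page = pfn_to_page(pfn);

			/* Only flush when a kernel-side write left the
			 * dcache dirty for this page. */
			if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
				__flush_wback_region(page_address(page),
						     PAGE_SIZE);
		}
	}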
* asm-generic: rename page.h and uaccess.h (Arnd Bergmann, 2009-06-11, 1 file, -1/+1)
The current asm-generic/page.h only contains the get_order function, and asm-generic/uaccess.h only implements unaligned accesses. This renames the files to getorder.h and uaccess-unaligned.h to make room for new page.h and uaccess.h files that will be usable by all simple (e.g. nommu) architectures.

Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
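Usage is unchanged apart from the include path; a small sketch:

	#include <asm-generic/getorder.h>	/* formerly asm-generic/page.h */

	/* get_order() converts a size in bytes to a page allocation order,
	 * e.g. get_order(8192) == 1 with 4k pages. */
	static int order_for(unsigned long bytes)
	{
		return get_order(bytes);
	}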
* sh: Support fixed 32-bit PMB mappings from bootloader. (Yoshihiro Shimoda, 2009-03-10, 1 file, -1/+6)
This provides a method for supporting fixed PMB mappings inherited from the bootloader, as an alternative to the dynamic PMB mapping currently used by the kernel. In the future these methods will be combined.

The P1/P2 area is handled like a regular 29-bit physical address, and local bus devices are assigned P3 area addresses.

Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: ioremap_prot support. (Paul Mundt, 2008-09-12, 1 file, -0/+2)
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: migrate to arch/sh/include/ (Paul Mundt, 2008-07-29, 1 file, -0/+183)
This follows the sparc changes in commit a439fe51a1f8eb087c22dd24d69cebae4a3addac. Most of the moving about was done with Sam's directions at:

http://marc.info/?l=linux-sh&m=121724823706062&w=2

with subsequent hacking and fixups entirely my fault.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>