path: root/arch/arm/mm/mmu.c
Commit message (Author, Age, Files, Lines)
* ARM: 8447/1: catch pending imprecise abort on unmask (Lucas Stach, 2015-10-19, 1 file, -1/+2)
    Install a non-faulting handler just before unmasking imprecise aborts and switch back to the regular one after unmasking is done. This catches any pending imprecise abort that the firmware/bootloader may have left behind and that would otherwise crash the kernel at that point. As there are apparently a lot of bootloaders out there that do such a thing, it makes sense to handle it in the common startup code.
    Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
    Tested-by: Tyler Baker <tyler.baker@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 8422/1: enable imprecise aborts during early kernel startup (Lucas Stach, 2015-09-22, 1 file, -0/+3)
    This patch adds imprecise abort enable/disable macros and uses them to enable imprecise aborts early when starting the kernel. This helps in tracking down the real cause of such imprecise aborts, as they are handled as soon as they occur. Until now these aborts would only be enabled when entering userspace, and as a consequence they would crash the first userspace process if any abort had been raised during kernel startup.
    Signed-off-by: Fabrice Gasnier <fabrice.gasnier@st.com>
    Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
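    On ARMv6+, imprecise (asynchronous) aborts are masked by the CPSR.A bit, so macros along these lines do the job; a hedged sketch of the idea, not necessarily the exact mainline code:

        /* Unmask/mask imprecise aborts by clearing/setting CPSR.A. */
        #define local_abt_enable()  __asm__ __volatile__("cpsie a" : : : "memory", "cc")
        #define local_abt_disable() __asm__ __volatile__("cpsid a" : : : "memory", "cc")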
* Merge branches 'cleanup', 'fixes', 'misc', 'omap-barrier' and 'uaccess' into for-linus (Russell King, 2015-09-03, 1 file, -9/+90)
* ARM: domains: keep vectors in separate domain (Russell King, 2015-08-21, 1 file, -2/+2)
    Keep the machine vectors in their own domain to prevent software-based user access control from making the vector code inaccessible, and thereby deadlocking the machine.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 8415/1: early fixmap support for earlycon (Stefan Agner, 2015-08-18, 1 file, -7/+81)
    Add early fixmap support, initially to provide a permanent, fixed mapping for an early console. A temporary early pte is created, which is migrated to a permanent mapping in paging_init. This is also needed since the attributes may change as the memory types are initialized. The 3MiB fixmap range spans two pte tables, but currently only one pte is created for early fixmap support. Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c, since the index for kmap no longer starts at zero. This reverts 4221e2e6b316 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and FIX_KMAP_END") to some extent.
    Cc: Mark Salter <msalter@redhat.com>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Laura Abbott <lauraa@codeaurora.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Stefan Agner <stefan@agner.ch>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Merge branches 'fixes' and 'ioremap' into for-linus (Russell King, 2015-07-07, 1 file, -89/+64)
* ARM: add helpful message when truncating physical memory (Russell King, 2015-06-29, 1 file, -0/+6)
    Add a message suggesting that HIGHMEM be enabled when physical memory is truncated due to a lack of virtual address space to map it in the low memory mapping.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 8394/1: update memblock limit after mapping lowmem (Laura Abbott, 2015-06-29, 1 file, -0/+1)
    The memblock limit is currently used in find_limits to find the bounds for ZONE_NORMAL. The memblock limit may need to be rounded down by a PMD size, though, to ensure allocations are fully mapped. This has the side effect of reducing the amount of memory in ZONE_NORMAL. Once all lowmem is mapped, it is safe to change the memblock limit back to include the unaligned section. Adjust the memblock limit after lowmem mapping is complete.
    Before:
        # cat /proc/zoneinfo | grep managed
        managed 62907
        managed 424
    After:
        # cat /proc/zoneinfo | grep managed
        managed 63331
    Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
    Reviewed-by: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
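    The change itself is small; a sketch of the idea, assuming the memblock API of that era, is a single call at the end of map_lowmem(), restoring the full lowmem bound once everything below it is mapped:

        /* All lowmem is now mapped; allocations up to the real
         * lowmem limit are safe again. */
        memblock_set_current_limit(arm_lowmem_limit);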
* Merge branches 'arnd-fixes', 'clk', 'misc', 'v7' and 'fixes' into for-next (Russell King, 2015-06-12, 1 file, -99/+74)
* ARM: 8356/1: mm: handle non-pmd-aligned end of RAM (Mark Rutland, 2015-05-14, 1 file, -10/+10)
    At boot time we round the memblock limit down to section size in an attempt to ensure that we will have mapped this RAM with section mappings prior to allocating from it. When mapping RAM we iterate over PMD-sized chunks, creating these section mappings. Section mappings are only created when the end of a chunk is aligned to section size. Unfortunately, with classic page tables (where PMD_SIZE is 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in size, the first 1M will not be mapped despite having been accounted for in the memblock limit. This has been observed to result in page tables being allocated from unmapped memory, causing boot-time hangs. This patch modifies the memblock limit rounding to always round down to PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we will round the memblock limit down to a 2M boundary, matching the limits on section mappings and preventing allocations from unmapped memory. For LPAE there should be no change, as PMD_SIZE == SECTION_SIZE.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reported-by: Stefan Agner <stefan@agner.ch>
    Tested-by: Stefan Agner <stefan@agner.ch>
    Acked-by: Laura Abbott <labbott@redhat.com>
    Tested-by: Hans de Goede <hdegoede@redhat.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Steve Capper <steve.capper@linaro.org>
    Cc: stable@vger.kernel.org
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
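    In essence the fix swaps one constant for another when the limit is computed; a hedged sketch:

        /* Round to PMD_SIZE, not SECTION_SIZE: on classic 2-level
         * tables PMD_SIZE == 2 * SECTION_SIZE, so rounding to the
         * smaller unit could leave the first 1M of a chunk unmapped. */
        memblock_limit = round_down(memblock_limit, PMD_SIZE);
        memblock_set_current_limit(memblock_limit);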
* ARM: cleanup early_paging_init() calling (Russell King, 2015-06-01, 1 file, -4/+2)
    Eliminate the needless nommu version of this function, and get rid of the proc_info_list structure argument - we no longer need this in order to fix up the page table entries.
    Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
    Tested-by: Murali Karicheri <m-karicheri2@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: re-implement physical address space switching (Russell King, 2015-06-01, 1 file, -84/+40)
    Re-implement the physical address space switching to be architecturally compliant. This involves flushing the caches, disabling the MMU, and only then updating the page tables. Once that is complete, the system can be brought back up again. Since we disable the MMU, we need to do the update in assembly code. Luckily, the entries which need updating are fairly trivial, and are all set up by the early assembly code. We can merely adjust each entry by the delta required. Not only does this fix the code to be architecturally compliant, but it fixes a couple of bugs too:
    1. The original code would only ever update the first L2 entry covering a fraction of the kernel; the remainder were left untouched.
    2. The L2 entries covering the DTB blob were likewise untouched.
    This solution fixes up all entries.
    Tested-by: Murali Karicheri <m-karicheri2@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: keystone2: rename init_meminfo to pv_fixup (Russell King, 2015-06-01, 1 file, -4/+4)
    The init_meminfo() method is not about initialising meminfo - it's about fixing up the physical-to-virtual translation so that we use a different physical address space, possibly above the 4GB physical address space. Therefore, the name "init_meminfo()" is confusing. Rename it to pv_fixup() instead.
    Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
    Tested-by: Murali Karicheri <m-karicheri2@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: keystone2: move address space switch printk into generic code (Russell King, 2015-06-01, 1 file, -0/+3)
    There is no point in platform code doing this; let's move it into the generic code so it doesn't get duplicated.
    Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
    Tested-by: Murali Karicheri <m-karicheri2@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: keystone2: move update of the phys-to-virt constants into generic code (Russell King, 2015-06-01, 1 file, -4/+22)
    Make the init_meminfo function return the offset to be applied to the phys-to-virt translation constants. This allows us to move the update into generic code, along with the requirements for this update. This avoids platforms having to know the details of the phys-to-virt translation support.
    Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
    Tested-by: Murali Karicheri <m-karicheri2@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region (Grygorii Strashko, 2015-01-07, 1 file, -2/+2)
    The local variables kernel_x_start and kernel_x_end are currently defined using the 'unsigned long' type, which is wrong because they represent a physical memory range and will be calculated incorrectly if LPAE is enabled. As a result, all following code in map_lowmem() will not work correctly. For example, Keystone 2 boot is broken because
        kernel_x_start == 0x0000 0000
        kernel_x_end   == 0x0080 0000
    instead of
        kernel_x_start == 0x0000 0008 0000 0000
        kernel_x_end   == 0x0000 0008 0080 0000
    and as a result the whole of low memory will be mapped with MT_MEMORY_RW permissions by this code (start > kernel_x_end):
        } else if (start >= kernel_x_end) {
                map.pfn = __phys_to_pfn(start);
                map.virtual = __phys_to_virt(start);
                map.length = end - start;
                map.type = MT_MEMORY_RW;
                create_mapping(&map);
        }
    Hence, fix it by using the phys_addr_t type for the variables kernel_x_start and kernel_x_end.
    Tested-by: Murali Karicheri <m-karicheri2@ti.com>
    Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
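    The declarations in question end up along these lines (a sketch; the point is that phys_addr_t widens to 64 bits under LPAE, while unsigned long stays 32 bits):

        /* Was: unsigned long kernel_x_start, kernel_x_end; */
        phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
        phys_addr_t kernel_x_end   = round_up(__pa(__init_end), SECTION_SIZE);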
* Merge branch 'devel-stable' into for-next (Russell King, 2014-12-05, 1 file, -4/+35)
* Merge tag 'ronx-next' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux into devel-stable (Russell King, 2014-11-03, 1 file, -4/+35)
    Generic fixmaps; ARM support for CONFIG_DEBUG_RODATA.
* ARM: mm: allow non-text sections to be non-executable (Kees Cook, 2014-10-16, 1 file, -1/+8)
    Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions into section-sized areas that can have different permissions. Performs the NX permission changes during free_initmem, so that init memory can be reclaimed. This uses section size instead of PMD size to reduce memory lost to padding on non-LPAE systems. Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Tested-by: Laura Abbott <lauraa@codeaurora.org>
    Acked-by: Nicolas Pitre <nico@linaro.org>
* arm: fixmap: implement __set_fixmap() (Kees Cook, 2014-10-16, 1 file, -0/+24)
    This is used from set_fixmap() and clear_fixmap() via asm-generic/fixmap.h. Also makes sure that the fixmap allocation fits into the expected range. Based on patch by Rabin Vincent.
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Cc: Rabin Vincent <rabin@rab.in>
    Acked-by: Nicolas Pitre <nico@linaro.org>
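    A minimal sketch of what such an implementation looks like (hedged; helper names such as pmd_off_k follow the arch/arm mm conventions of the time, not necessarily the committed code verbatim):

        void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
        {
                unsigned long vaddr = __fix_to_virt(idx);
                pte_t *pte = pte_offset_kernel(pmd_off_k(vaddr), vaddr);

                /* Refuse indices outside the reserved fixmap window. */
                BUG_ON(idx >= __end_of_fixed_addresses);

                if (pgprot_val(prot))
                        set_pte_at(NULL, vaddr, pte,
                                   pfn_pte(phys >> PAGE_SHIFT, prot));
                else
                        pte_clear(NULL, vaddr, pte);
                local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
        }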
* ARM: expand fixmap region to 3MB (Rob Herring, 2014-10-16, 1 file, -3/+3)
    With commit a05e54c103b0 ("ARM: 8031/2: change fixmap mapping region to support 32 CPUs"), the fixmap region was expanded to 2MB, but it precluded any other uses of the fixmap region. In order to support other uses, the fixmap region needs to be expanded beyond 2MB. Fortunately, the adjacent 1MB range 0xffe00000-0xfff00000 is available. Remove the fixmap_page_table pointer and look up the page table via the virtual address, so that the fixmap region can span more than one pmd. The second pmd is already created, since it is shared with the vector page.
    Signed-off-by: Rob Herring <robh@kernel.org>
    [kees: fixed CONFIG_DEBUG_HIGHMEM get_fixmap() calls]
    [kees: moved pte allocation outside of CONFIG_HIGHMEM]
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Acked-by: Nicolas Pitre <nico@linaro.org>
* ARM: 8238/1: mm: Refine set_memory_* functions (Jungseung Lee, 2014-12-03, 1 file, -38/+0)
    The set_memory_* functions share the same implementation except for the memory attribute they change. Use a common function for all of them and pull the functions out into arch/arm/mm/pageattr.c, as arm64 did. This reduces code size and improves readability.
    Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
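    The usual shape of such a consolidation is one worker plus thin wrappers; a hedged sketch (the apply_to_page_range callback signature varies across kernel versions, and the mask plumbing here is illustrative rather than the committed code):

        struct page_change_data {
                pgprot_t set_mask;
                pgprot_t clear_mask;
        };

        static int change_page_range(pte_t *ptep, pgtable_t token,
                                     unsigned long addr, void *data)
        {
                struct page_change_data *cdata = data;
                pte_t pte = *ptep;

                /* Clear then set the requested attribute bits. */
                pte = __pte((pte_val(pte) & ~pgprot_val(cdata->clear_mask)) |
                            pgprot_val(cdata->set_mask));
                set_pte_ext(ptep, pte, 0);
                return 0;
        }

        static int change_memory_common(unsigned long start, int numpages,
                                        pgprot_t set_mask, pgprot_t clear_mask)
        {
                unsigned long size = numpages * PAGE_SIZE;
                struct page_change_data data = { set_mask, clear_mask };
                int ret = apply_to_page_range(&init_mm, start, size,
                                              change_page_range, &data);

                flush_tlb_kernel_range(start, start + size);
                return ret;
        }

        /* Each set_memory_* call is then a thin wrapper, e.g.: */
        int set_memory_ro(unsigned long addr, int numpages)
        {
                return change_memory_common(addr, numpages,
                                            __pgprot(L_PTE_RDONLY), __pgprot(0));
        }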
* ARM: 8235/1: Support for the PXN CPU feature on ARMv7 (Jungseung Lee, 2014-12-03, 1 file, -1/+17)
    Modern ARMv7-A/R cores optionally implement the following new hardware feature:
    - PXN: Privileged execute-never (PXN) is a security feature. The PXN bit determines whether the processor can execute software from the region in privileged mode. It is an effective mitigation against ret2usr attacks. On an implementation that does not include the LPAE, PXN is optionally supported.
    This patch sets the PXN bit in user page tables, preventing user code from being executed with privilege.
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
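    In the short-descriptor (non-LPAE) format, PXN lives in bit 2 of a first-level "page table" descriptor; a hedged sketch of the relevant define and a plausible use:

        /* Bit 2 of an L1 page-table descriptor is PXN on cores that
         * implement the feature (ARMv7, non-LPAE). */
        #define PMD_PXNTABLE    (_AT(pmdval_t, 1) << 2)

        /* When populating a user pgd on a CPU with PXN support: */
        pmdval_t prot = PMD_TYPE_TABLE | PMD_PXNTABLE;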
* ARM: convert printk(KERN_* to pr_* (Russell King, 2014-11-21, 1 file, -23/+15)
    Convert many (but not all) printk(KERN_* to pr_* to simplify the code. We take the opportunity to join some printk lines together so we don't split the message across several lines, and we also add a few levels to some messages which were previously missing them.
    Tested-by: Andrew Lunn <andrew@lunn.ch>
    Tested-by: Felipe Balbi <balbi@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
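    The conversion pattern, illustrated with a made-up message (not one from this file):

        /* Before: level macro passed to printk, message split in two. */
        printk(KERN_ERR "mmu: mapping failed for"
               " 0x%08lx\n", addr);
        /* After: pr_err carries the level, one joined format string. */
        pr_err("mmu: mapping failed for 0x%08lx\n", addr);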
* ARM: 8152/1: Convert pr_warning to pr_warn (Joe Perches, 2014-09-26, 1 file, -2/+2)
    Use the more common pr_warn. Other miscellanea:
    o Coalesce formats
    o Realign arguments
    Signed-off-by: Joe Perches <joe@perches.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: add comments to the early page table remap code (Russell King, 2014-08-02, 1 file, -5/+46)
    Add further comments to the early page table remap code to explain what the code is doing, why it is doing it, but more importantly to explain that the code is not architecturally compliant and is squarely in "UNPREDICTABLE" behaviour territory. Add a warning and tainting of the kernel too.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Merge branches 'alignment', 'fixes', 'l2c' (early part) and 'misc' into for-next (Russell King, 2014-06-05, 1 file, -83/+40)
* ARM: 8025/1: Get rid of meminfo (Laura Abbott, 2014-06-01, 1 file, -82/+35)
    memblock is now fully integrated into the kernel and is the preferred method for tracking memory. Rather than reinvent the wheel with meminfo, migrate to using memblock directly instead of meminfo as an intermediate.
    Acked-by: Jason Cooper <jason@lakedaemon.net>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Acked-by: Kukjin Kim <kgene.kim@samsung.com>
    Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 8055/1: cacheflush: use -st dsb option for ensuring completion (Will Deacon, 2014-05-25, 1 file, -1/+1)
    dsb st can be used to ensure completion of pending cache maintenance operations, so use it for the v7 cache maintenance operations.
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
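    For reference, a sketch of the barrier macro this relies on (the kernel's dsb() takes the option as a token and pastes it into the instruction):

        #define dsb(option) __asm__ __volatile__("dsb " #option : : : "memory")

        dsb(st);   /* store-only variant; per this commit it suffices for
                      completing cache maintenance, cheaper than dsb sy */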
* ARM: 8031/2: change fixmap mapping region to support 32 CPUs (Liu Hua, 2014-04-23, 1 file, -0/+4)
    In 32-bit ARM systems, the fixmap mapping region can support no more than 14 CPUs (total: 896K; one CPU: 64K), while NR_CPUS can be configured up to 32, so there is a mismatch. This patch moves the fixmap mapping region down to 0xffc00000-0xffe00000, where it can support up to 32 CPUs.
    Reviewed-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Liu Hua <sdu.liu@huawei.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: ensure C page table setup code follows assembly code (part II) (Russell King, 2014-06-02, 1 file, -8/+19)
    This does the same as the previous commit, but for the S bit, which also needs to match the initial value which the assembly code used, for the same reasons. Again, we add a check for SMP to ensure that the page tables are correctly set up for SMP.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: ensure C page table setup code follows assembly code (Russell King, 2014-06-02, 1 file, -16/+47)
    Fix a long-standing bug where, for ARMv6+, we don't fully ensure that the C code sets the same cache policy as the assembly code. This was introduced partially by commit 11179d8ca28d ([ARM] 4497/1: Only allow safe cache configurations on ARMv6 and later) and also by adding SMP support. This patch sets the default cache policy based on the flags used by the assembly code, and then ensures that when a cache policy command line argument is used, we verify that on ARMv6 it matches the initial setup.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: remove unused adjust_cr() function (Russell King, 2014-06-02, 1 file, -20/+0)
    adjust_cr() is not used anymore, so let's get rid of it.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* | ARM: move "noalign" command line option to alignment.cRussell King2014-06-021-7/+0
| | | | | | | | | | | | Keep all bits of alignment handling together. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: provide common method to clear bits in CPU control register (Russell King, 2014-06-02, 1 file, -6/+4)
    Several places open-code this manipulation; let's consolidate it.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
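    The shape of such a helper, sketched with get_cr()/set_cr() from asm/cp15.h (the function name here is illustrative, not necessarily the one the commit introduced):

        /* Clear the given bits in the CP15 control register. */
        static inline void cr_clear_bits(unsigned long mask)
        {
                set_cr(get_cr() & ~mask);
        }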
* Merge branches 'amba', 'fixes', 'misc', 'mmci', 'unstable/omap-dma' and 'unstable/sa11x0' into for-next (Russell King, 2014-04-04, 1 file, -1/+16)
* ARM: 7954/1: mm: remove remaining domain support from ARMv6 (Will Deacon, 2014-02-10, 1 file, -0/+10)
    CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is because ARM 1136 r0pX CPUs lack the v6k extensions and therefore do not have hardware thread registers. The lack of these registers requires the kernel to update the vectors page at each context switch in order to write a new TLS pointer. This write must be done via the userspace mapping, since aliasing caches can lead to expensive flushing when using kmap. Finally, this requires the vectors page to be mapped r/w for kernel and r/o for user, which has implications for things like put_user, which must trigger CoW appropriately when targeting user pages. The upshot of all this is that a v6/v7 kernel makes use of domains to segregate kernel and user memory accesses. This has the nasty side effect of making device mappings executable, which has been observed to cause subtle bugs on recent cores (e.g. Cortex-A15 performing a speculative instruction fetch from the GIC and acking an interrupt in the process). This patch solves this problem by removing the remaining domain support from ARMv6. A new memory type is added specifically for the vectors page, which allows that page (and only that page) to be mapped as user r/o, kernel r/w. All other user r/o pages are now also mapped as kernel r/o. Patch co-developed with Russell King.
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 7950/1: mm: Fix stage-2 device memory attributes (Christoffer Dall, 2014-02-10, 1 file, -1/+6)
    The stage-2 memory attributes are distinct from the Hyp memory attributes and the stage-1 memory attributes. We were using the stage-1 memory attributes for stage-2 mappings, causing device mappings to be mapped as normal memory. Add the S2 equivalent defines for memory attributes and fix the comments explaining the defines while at it. Add a prot_pte_s2 field to the mem_type struct and fill out the field for device mappings accordingly.
    Cc: <stable@vger.kernel.org> [3.9+]
    Acked-by: Marc Zyngier <marc.zyngier@arm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: fix executability of CMA mappings (Russell King, 2013-12-11, 1 file, -1/+2)
    The CMA region was being marked executable:
        0xdc04e000-0xdc050000     8K RW x  MEM/CACHED/WBRA
        0xdc060000-0xdc100000   640K RW x  MEM/CACHED/WBRA
        0xdc4f5000-0xdc500000    44K RW x  MEM/CACHED/WBRA
        0xdcce9000-0xe0000000 52316K RW x  MEM/CACHED/WBRA
    This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but there are also a few other places in dma-mapping which should be corrected to use the right constant. Fix all these places:
        0xdc04e000-0xdc050000     8K RW NX MEM/CACHED/WBRA
        0xdc060000-0xdc100000   640K RW NX MEM/CACHED/WBRA
        0xdc280000-0xdc300000   512K RW NX MEM/CACHED/WBRA
        0xdc6fc000-0xe0000000 58384K RW NX MEM/CACHED/WBRA
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: mm: Define set_memory_* functions for ARM (Laura Abbott, 2013-12-11, 1 file, -0/+38)
    Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.). Currently, these functions are missing on ARM. Define them in an appropriate manner for ARM.
    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: implement basic NX support for kernel lowmem mappings (Russell King, 2013-12-11, 1 file, -5/+50)
    Add basic NX support for kernel lowmem mappings. We mark any section which does not overlap kernel text as non-executable, preventing it from being used to write code and then execute directly from there. This does not change the alignment of the sections, so the kernel image doesn't grow significantly via this change, and we can therefore do this without needing a config option.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: add permission annotations to MT_MEMORY* mapping types (Russell King, 2013-12-11, 1 file, -15/+15)
    Document the permissions which the various MT_MEMORY* mapping types will provide.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 7884/1: mm: Fix ECC mem policy printk (Michal Simek, 2013-11-14, 1 file, -2/+2)
    The ECC policy can be applied to the whole system when this bit is implemented by the SoC vendor (IMP - bit 9 - in the L1 page table entry format). When this bit is not implemented by the SoC vendor, it does not mean that the system has no other way to do ECC. This patch ensures that this message is shown only when ECC is requested via the command line (ecc=on) and the kernel runs on an appropriate ARM core.
    Signed-off-by: Michal Simek <michal.simek@xilinx.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: mm: Recreate kernel mappings in early_paging_init() (Santosh Shilimkar, 2013-10-10, 1 file, -0/+82)
    This patch adds a step to the init sequence in order to recreate the kernel code/data page table mappings prior to full paging initialization. This is necessary on LPAE systems that run from a physical address space outside the 4G limit. On these systems, this implementation provides a machine descriptor hook that allows PHYS_OFFSET to be overridden in a machine-specific fashion.
    Cc: Russell King <linux@arm.linux.org.uk>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: R Sricharan <r.sricharan@ti.com>
    Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
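    The hook is the init_meminfo method (renamed to pv_fixup by the 2015-06-01 commits above), so a platform plausibly plugs in like this; a hedged sketch, not the verbatim keystone code:

        static void __init keystone_init_meminfo(void)
        {
                /* Switch PHYS_OFFSET to the >4GB alias of DDR before
                 * full paging is brought up. */
        }

        DT_MACHINE_START(KEYSTONE, "Keystone")
                .init_meminfo   = keystone_init_meminfo,
        MACHINE_END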
* Merge branches 'debug-choice', 'devel-stable' and 'misc' into for-linus (Russell King, 2013-09-05, 1 file, -2/+2)
* ARM: constify machine_desc structure uses (Russell King, 2013-07-26, 1 file, -2/+2)
    struct machine_desc records are defined everywhere as 'const' structures, but unfortunately that const-ness is lost through the use of linker magic - the symbols which surround the section are not declared const, so it becomes possible not to use 'const' for pointers to these const structures. Let's fix this oversight - all pointers to these structures should be marked const too.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
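    The essence of the fix is in the linker-symbol declarations; a before/after sketch (symbol names as used by arch/arm):

        /* Before: const-ness silently dropped at the section boundary. */
        extern struct machine_desc __arch_info_begin[], __arch_info_end[];
        /* After: pointers derived from these stay const. */
        extern const struct machine_desc __arch_info_begin[], __arch_info_end[];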
* Merge branch 'security-fixes' into fixes (Russell King, 2013-08-01, 1 file, -1/+13)
* ARM: make vectors page inaccessible from userspace (Russell King, 2013-08-01, 1 file, -0/+4)
    If kuser helpers are not provided by the kernel, disable user access to the vectors page. With the kuser helpers gone, there is no reason for this page to be visible to userspace.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: move vector stubs (Russell King, 2013-07-31, 1 file, -1/+9)
    Move the machine vector stubs into the page above the vector page, which we can prevent from being visible to userspace. Also move the reset stub, and place the swi vector at a location that the 'ldr' can get to. This hides pointers into the kernel which could give valuable information to attackers, and reduces the number of exploitable instructions at a fixed address.
    Cc: <stable@vger.kernel.org>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 7785/1: mm: restrict early_alloc to section-aligned memory (Russell King, 2013-07-22, 1 file, -5/+38)
    When map_lowmem() runs and processes a memory bank whose start or end is not section-aligned, memory must be allocated to store the 2nd-level page tables. Those allocations are made by calling memblock_alloc(). At this point, the only memory that is free *and* mapped is memory which has already been mapped by map_lowmem() itself. For this reason, we must calculate the first point at which map_lowmem() will need to allocate memory, and set the memblock allocation limit to a lower address, so that memblock_alloc() is guaranteed to return memory that is already mapped. This patch enhances sanity_check_meminfo() to calculate that memory address and pass it to memblock_set_current_limit(), rather than just assuming the limit is arm_lowmem_limit. The algorithm applied is as follows (a sketch of the walk appears after this list):
    * Default memblock_limit to arm_lowmem_limit in the absence of any other limit; arm_lowmem_limit is the highest memory that is mapped by map_lowmem().
    * While walking the list of memblocks, if the start of a block is not aligned, 2nd-level page tables will need to be allocated to map the first few pages of the block. Hence, the memblock_limit must be before the start of the block.
    * Similarly, if the end of any block is not aligned, 2nd-level page tables will need to be allocated to map the last few pages of the block. Hence, the memblock_limit must point at the end of the block, rounded down to section alignment.
    * The memory blocks are assumed to be sorted in address order, so the first unaligned block start or end is used to set the limit.
    With this algorithm, the start or end of almost any bank can be non-section-aligned. The only exception is that the start of bank 0 must be section-aligned, since otherwise memory would need to be allocated when mapping the start of bank 0, which occurs before any free memory is mapped.
    [swarren, wrote commit description, rewrote calculation of memblock_limit]
    Signed-off-by: Stephen Warren <swarren@nvidia.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
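    A hedged sketch of the limit calculation described above, assuming the memblock iteration helpers of that era (not line-for-line the committed code):

        phys_addr_t memblock_limit = 0;
        struct memblock_region *reg;

        for_each_memblock(memory, reg) {
                phys_addr_t block_start = reg->base;
                phys_addr_t block_end = reg->base + reg->size;

                if (!IS_ALIGNED(block_start, SECTION_SIZE)) {
                        /* L2 tables are needed to map the head of this
                         * block: the limit must sit below the block. */
                        memblock_limit = block_start;
                        break;
                }
                if (!IS_ALIGNED(block_end, SECTION_SIZE)) {
                        /* L2 tables are needed for the tail: round the
                         * end down to a section boundary. */
                        memblock_limit = round_down(block_end, SECTION_SIZE);
                        break;
                }
        }
        if (!memblock_limit)
                memblock_limit = arm_lowmem_limit;
        memblock_set_current_limit(memblock_limit);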