path: root/arch/arm/include/asm/assembler.h
Commit message · Author · Date · Files · Lines changed

* Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
  Linus Torvalds, 2015-09-14 (1 file changed, -5/+0)

  Pull ARM fixes from Russell King:
   "A number of fixes for the merge window, fixing a number of cases missed
    when testing the uaccess code, particularly cases which only show up with
    certain compiler versions"

  * 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
    ARM: 8431/1: fix alignement of __bug_table section entries
    arm/xen: Enable user access to the kernel before issuing a privcmd call
    ARM: domains: add memory dependencies to get_domain/set_domain
    ARM: domains: thread_info.h no longer needs asm/domains.h
    ARM: uaccess: fix undefined instruction on ARMv7M/noMMU
    ARM: uaccess: remove unneeded uaccess_save_and_disable macro
    ARM: swpan: fix nwfpe for uaccess changes
    ARM: 8429/1: disable GCC SRA optimization

* ARM: uaccess: remove unneeded uaccess_save_and_disable macro
  Russell King, 2015-09-09 (1 file changed, -5/+0)

  This macro is never referenced, remove it.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branches 'cleanup', 'fixes', 'misc', 'omap-barrier' and 'uaccess' into for-linus
  Russell King, 2015-09-03 (1 file changed, -0/+47)

* ARM: software-based priviledged-no-access support
  Russell King, 2015-08-26 (1 file changed, -0/+30)

  Provide a software-based implementation of the privileged no-access support
  found in ARMv8.1.

  Userspace pages are mapped using a different domain number from the kernel
  and IO mappings.  If we switch the user domain to "no access" when we enter
  the kernel, we can prevent the kernel from touching userspace.

  However, the kernel needs to be able to access userspace via the various
  user accessor functions.  With the wrapping in the previous patch, we can
  temporarily enable access when the kernel needs user access, and re-disable
  it afterwards.

  This allows us to trap non-intended accesses to userspace, eg, caused by an
  inadvertent dereference of the LIST_POISON* values, which, with appropriate
  user mappings setup, can be made to succeed.  This in turn can allow
  use-after-free bugs to be further exploited than would otherwise be
  possible.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

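  As a rough sketch of the mechanism described above (register choices, the
  DACR_UACCESS_* constants and the CONFIG_CPU_SW_DOMAIN_PAN symbol are
  assumptions here, not necessarily the in-tree definitions), the user domain
  is flipped between "client" and "no access" by rewriting the domain access
  control register:

        @ Sketch only: toggle kernel access to userspace via DACR (cp15 c3).
        .macro  uaccess_disable, tmp
#ifdef CONFIG_CPU_SW_DOMAIN_PAN
        mov     \tmp, #DACR_UACCESS_DISABLE     @ user domain -> "no access"
        mcr     p15, 0, \tmp, c3, c0, 0         @ write DACR
        isb
#endif
        .endm

        .macro  uaccess_enable, tmp
#ifdef CONFIG_CPU_SW_DOMAIN_PAN
        mov     \tmp, #DACR_UACCESS_ENABLE      @ user domain -> "client"
        mcr     p15, 0, \tmp, c3, c0, 0
        isb
#endif
        .endm

  The user accessors then bracket each user load/store with the enable/disable
  pair, so the rest of the kernel runs with userspace unmapped.
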
* ARM: entry: provide uaccess assembly macro hooks
  Russell King, 2015-08-26 (1 file changed, -0/+17)

  Provide hooks into the kernel entry and exit paths to permit control of
  userspace visibility to the kernel.  The intended use is:

  - on entry to kernel from user, uaccess_disable will be called to disable
    userspace visibility
  - on exit from kernel to user, uaccess_enable will be called to enable
    userspace visibility
  - on entry from a kernel exception, uaccess_save_and_disable will be called
    to save the current userspace visibility setting, and disable access
  - on exit from a kernel exception, uaccess_restore will be called to
    restore the userspace visibility as it was before the exception occurred.

  These hooks allow us to keep userspace visibility disabled for the vast
  majority of the kernel, except for localised regions where we want to
  explicitly access userspace.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: entry: efficiency cleanups
  Russell King, 2015-08-25 (1 file changed, -4/+12)

  Make the "fast" syscall return path fast again.  The addition of IRQ
  tracing and context tracking has made this path grossly inefficient.  When
  these options are enabled, we can do much better by saving the syscall
  return code on the stack - we then don't need to save a bunch of registers
  around every single callout to C code.

  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: entry: get rid of asm_trace_hardirqs_on_cond
  Russell King, 2015-08-25 (1 file changed, -6/+2)

  There's no need for this macro; asm_trace_hardirqs_on can use a default
  value for its condition argument instead.

  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: replace BSYM() with badr assembly macro
  Russell King, 2015-05-08 (1 file changed, -1/+16)

  BSYM() was invented to allow us to work around a problem with the
  assembler, where local symbols resolved by the assembler for the 'adr'
  instruction did not take account of their ISA.

  Since we don't want BSYM() used elsewhere, replace BSYM() with a new macro
  'badr', which is like the 'adr' pseudo-op, but with the BSYM() mechanics
  integrated into it.  This ensures that the BSYM()-ification is only used in
  conjunction with 'adr'.

  Acked-by: Dave Martin <Dave.Martin@arm.com>
  Acked-by: Nicolas Pitre <nico@linaro.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

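  A minimal sketch of what such a macro can look like (assuming
  CONFIG_THUMB2_KERNEL marks a Thumb-2 kernel build; the in-tree version also
  generates conditional variants such as badreq/badrne):

        .macro  badr, rd, sym
#ifdef CONFIG_THUMB2_KERNEL
        adr     \rd, \sym + 1           @ set bit 0 so a later "bx \rd" or
                                        @ exception return stays in Thumb state
#else
        adr     \rd, \sym               @ plain adr is sufficient for ARM code
#endif
        .endm
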
* ARM: allow 16-bit instructions in ALT_UP()
  Russell King, 2015-04-14 (1 file changed, -0/+3)

  Allow ALT_UP() to cope with a 16-bit Thumb instruction by automatically
  inserting a following nop instruction.  This allows us to care less about
  getting the assembler to emit a 32-bit thumb instruction.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

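  The change boils down to a size check on the replacement instruction inside
  ALT_UP(); a sketch of the relevant fragment (label numbers and the section
  name are illustrative, and the full macro appears further down this log):

        @ inside ALT_UP(), after emitting the UP replacement instruction:
9997:   \instr
        .if . - 9997b == 2              @ only 2 bytes? 16-bit Thumb encoding
        nop                             @ pad the replacement to 4 bytes
        .endif
        .if . - 9997b != 4
        .error "ALT_UP() content must assemble to exactly 4 bytes"
        .endif
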
* ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+
  Russell King, 2014-07-18 (1 file changed, -0/+21)

  ARMv6 and greater introduced a new instruction ("bx") which can be used to
  return from function calls.  Recent CPUs perform better when the "bx lr"
  instruction is used rather than the "mov pc, lr" instruction, and this
  sequence is strongly recommended to be used by the ARM architecture manual
  (section A.4.1.1).  We provide a new macro "ret" with all its variants for
  the condition code which will resolve to the appropriate instruction.

  Rather than doing this piecemeal, and miss some instances, change all the
  "mov pc" instances to use the new macro, with the exception of the "movs"
  instruction and the kprobes code.  This allows us to detect the
  "mov pc, lr" case and fix it up - and also gives us the possibility of
  deploying this for other registers depending on the CPU selection.

  Reported-by: Will Deacon <will.deacon@arm.com>
  Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
  Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
  Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
  Tested-by: Shawn Guo <shawn.guo@freescale.com>
  Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
  Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
  Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
  Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
  Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
  Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
  Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
  Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

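  A sketch of the resulting macro family (one macro per condition code,
  generated with an .irp loop; treat this as an illustration rather than the
  exact in-tree text):

        .irp    c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo
        .macro  ret\c, reg
#if __LINUX_ARM_ARCH__ < 6
        mov\c   pc, \reg                @ pre-v6 cores keep the traditional return
#else
        .ifeqs  "\reg", "lr"
        bx\c    \reg                    @ "bx lr" is what return predictors expect
        .else
        mov\c   pc, \reg                @ other registers keep the mov form
        .endif
#endif
        .endm
        .endr

  Callers then simply write "ret lr" (or "reteq lr", etc.) and get whichever
  encoding suits the selected architecture.
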
* ARM: 8078/1: get rid of hardcoded assumptions about kernel stack size
  Andrey Ryabinin, 2014-07-01 (1 file changed, -3/+5)

  Changing the kernel stack size on arm is not as simple as it should be:

  1) the THREAD_SIZE macro doesn't respect PAGE_SIZE and THREAD_SIZE_ORDER
  2) the stack size is hardcoded in the get_thread_info macro

  This patch fixes that by calculating THREAD_SIZE and the thread_info
  address taking PAGE_SIZE and THREAD_SIZE_ORDER into account.  Changing the
  stack size now simply means changing THREAD_SIZE_ORDER.

  Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
  Acked-by: Nicolas Pitre <nico@linaro.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

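  The idea, sketched for the ARM-mode encoding (thread_info sits at the base
  of the stack, so rounding sp down to a THREAD_SIZE boundary finds it; the
  exact in-tree definitions may differ in detail):

/* stack size derived from the page size and a single order knob */
#define THREAD_SIZE_ORDER       1
#define THREAD_SIZE             (PAGE_SIZE << THREAD_SIZE_ORDER)

        .macro  get_thread_info, rd
        mov     \rd, sp, lsr #THREAD_SIZE_ORDER + PAGE_SHIFT   @ clear the low bits...
        mov     \rd, \rd, lsl #THREAD_SIZE_ORDER + PAGE_SHIFT  @ ...i.e. round sp down
        .endm                                                   @ to the stack base
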
* ARM: 8053/1: kernel: sleep: restore HYP mode configuration in cpu_resume
  Lorenzo Pieralisi, 2014-05-25 (1 file changed, -1/+1)

  On CPUs with virtualization extensions the kernel installs the HYP mode
  configuration on both primary and secondary cpus upon cold boot.

  On platforms where CPUs are shut down in idle paths (ie CPU core gating),
  when a CPU resumes from low-power states it currently does not execute code
  that reinstalls the HYP configuration, which means that the kernel cannot
  run eg KVM properly on such machines.

  This patch, mirroring cold-boot behaviour, executes position independent
  code that reinstalls the HYP configuration and drops to SVC mode safely on
  warm boot, so that deep idle states can be enabled in kernels running as
  hosts on platforms with power management HW.

  Cc: Christoffer Dall <christoffer.dall@linaro.org>
  Cc: Dave Martin <dave.martin@arm.com>
  Cc: Marc Zyngier <marc.zyngier@arm.com>
  Cc: Nicolas Pitre <nico@linaro.org>
  Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Acked-by: Marc Zyngier <marc.zyngier@arm.com>
  Reviewed-by: Dave Martin <Dave.Martin@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 8018/1: Add {inc,dec}_preempt_count asm macros
  Catalin Marinas, 2014-04-09 (1 file changed, -0/+32)

  The patch adds asm macros for inc_preempt_count and dec_preempt_count_ti
  (the latter also gets the current thread_info) instead of open-coding them
  in the arch/arm/vfp/*.S files.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Tested-by: Arun KS <getarunks@gmail.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

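  A sketch of the helpers (TI_PREEMPT stands for the asm-offsets constant of
  thread_info->preempt_count; the exact names are an assumption here):

        .macro  inc_preempt_count, ti, tmp
        ldr     \tmp, [\ti, #TI_PREEMPT]        @ load preempt count
        add     \tmp, \tmp, #1                  @ increment it
        str     \tmp, [\ti, #TI_PREEMPT]
        .endm

        .macro  dec_preempt_count, ti, tmp
        ldr     \tmp, [\ti, #TI_PREEMPT]        @ load preempt count
        sub     \tmp, \tmp, #1                  @ decrement it
        str     \tmp, [\ti, #TI_PREEMPT]
        .endm

        .macro  dec_preempt_count_ti, ti, tmp
        get_thread_info \ti                     @ locate thread_info first
        dec_preempt_count \ti, \tmp
        .endm
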
* ARM: 8017/1: Move asm macro get_thread_info to asm/assembler.h
  Catalin Marinas, 2014-04-09 (1 file changed, -0/+10)

  asm/assembler.h is a better place for this macro since it is used by asm
  files outside arch/arm/kernel/.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Tested-by: Arun KS <getarunks@gmail.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7990/1: asm: rename logical shift macros push pull into lspush lspull
  Victor Kamensky, 2014-02-25 (1 file changed, -4/+4)

  Renames the logical shift macros 'push' and 'pull', defined in
  arch/arm/include/asm/assembler.h, into 'lspush' and 'lspull'.  That
  eliminates the name conflict between the 'push' logical shift macro and the
  'push' instruction mnemonic, which allows assembler.h to be included in .S
  files that use the 'push' instruction.

  Suggested-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
  Acked-by: Nicolas Pitre <nico@linaro.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

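  After the rename, the definitions reduce to endian-dependent aliases for the
  shift operators, roughly (a sketch, not a verbatim copy of the header):

/* byte-lane shifts used when assembling words from unaligned data */
#ifndef __ARMEB__
#define lspull  lsr             /* little-endian: pull bytes from the low end   */
#define lspush  lsl             /* ...and push assembled bytes toward the high end */
#else
#define lspull  lsl             /* big-endian: directions are reversed */
#define lspush  lsr
#endif
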
* ARM: asm: Add ARM_BE8() assembly helper
  Ben Dooks, 2013-10-19 (1 file changed, -0/+7)

  Add the ARM_BE8() helper to wrap code that should only be assembled when
  CONFIG_ARM_ENDIAN_BE8 is selected, and convert the existing places that do
  this to use it.

  Acked-by: Nicolas Pitre <nico@linaro.org>
  Reviewed-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>

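  The helper is essentially a conditional pass-through; a sketch (the Kconfig
  symbol is spelled as in the commit message and the in-tree spelling may
  differ):

#ifdef CONFIG_ARM_ENDIAN_BE8
#define ARM_BE8(code...)        code    /* emit the instructions on BE8 kernels */
#else
#define ARM_BE8(code...)                /* drop them everywhere else */
#endif

  Typical use is wrapping byte-swap fix-ups in entry code, e.g.
  ARM_BE8(rev r0, r0).
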
* ARM: barrier: allow options to be passed to memory barrier instructions
  Will Deacon, 2013-08-12 (1 file changed, -2/+2)

  On ARMv7, the memory barrier instructions take an optional `option' field
  which can be used to constrain the effects of a memory barrier based on
  shareability and access type.

  This patch allows the caller to pass these options if required, and updates
  the smp_*() barriers to request inner-shareable barriers, affecting only
  stores for the _wmb variant.  wmb() is also changed to use the -st version
  of dsb.

  Reported-by: Albin Tonnerre <albin.tonnerre@arm.com>
  Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>

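  On the assembler.h side this amounts to passing a shareability option to
  dmb inside smp_dmb; a trimmed sketch of the ARMv7 path (ALT_SMP/ALT_UP and
  W() are the SMP-alternatives and Thumb-2 width helpers that appear
  elsewhere in this log):

        .macro  smp_dmb mode
#ifdef CONFIG_SMP
        .ifeqs "\mode","arm"
        ALT_SMP(dmb     ish)            @ inner-shareable barrier is enough for SMP
        .else
        ALT_SMP(W(dmb)  ish)            @ force a 32-bit encoding for the fixup table
        .endif
        .ifeqs "\mode","arm"
        ALT_UP(nop)                     @ no barrier needed on uniprocessor
        .else
        ALT_UP(W(nop))
        .endif
#endif
        .endm
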
* ARM: Add base support for ARMv7-M
  Catalin Marinas, 2013-04-17 (1 file changed, -1/+16)

  This patch adds the base support for the ARMv7-M architecture.  It consists
  of the corresponding arch/arm/mm/ files and various #ifdef's around the
  kernel.  Exception handling is implemented by a subsequent patch.

  [ukleinek: squash in some changes originating from commit b5717ba
  (Cortex-M3: Add support for the Microcontroller Prototyping System) from
  the v2.6.33-arm1 patch stack, port to post 3.6, drop zImage support, drop
  reorganisation of pt_regs, assert CONFIG_CPU_V7M doesn't leak into
  installed headers and a few cosmetic changes]

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Reviewed-by: Jonathan Austin <jonathan.austin@arm.com>
  Tested-by: Jonathan Austin <jonathan.austin@arm.com>
  Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

* ARM: virt: avoid clobbering lr when forcing svc mode
  Russell King, 2013-01-10 (1 file changed, -7/+3)

  The safe_svcmode_maskall macro is used to ensure that we are running in svc
  mode, causing an exception return from hvc mode if required.  This patch
  removes the unneeded lr clobber from the macro and operates entirely on the
  temporary parameter register instead.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  [will: updated comment]
  Signed-off-by: Will Deacon <will.deacon@arm.com>

* ARM: 7599/1: head: Remove boot-time HYP mode check for v5 and below
  Dave Martin, 2012-12-11 (1 file changed, -0/+8)

  The kernel can only be entered in HYP mode on CPUs which actually support
  it, i.e. >= ARMv7.  pre-v6 platform support cannot coexist in the same
  kernel as support for v7 and higher, so there is no advantage in having the
  HYP mode check on pre-v6 hardware.

  At least one pre-v6 board is known to fail when the HYP mode check code is
  present, although the exact cause remains unknown and may be unrelated. [1]

  This patch restores the old behaviour for pre-v6 platforms, whereby the
  CPSR is forced directly to SVC mode with IRQs and FIQs masked.

  All kernels capable of booting on v7 hardware will retain the check, so
  this should not impair functionality.

  [1] http://lists.arm.linux.org.uk/lurker/message/20121130.013814.19218413.en.html
      ([ARM] head.S change broke platform device registration?)

  Signed-off-by: Dave Martin <dave.martin@linaro.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7549/1: HYP: fix boot on some ARM1136 cores
  Marc Zyngier, 2012-10-09 (1 file changed, -4/+5)

  It appears that performing a "movs pc, lr" to force the kernel into SVC
  mode on the OMAP2420 (ARM1136) prevents the platform from booting correctly
  (change introduced in 80c59da [ARM: virt: allow the kernel to be entered in
  HYP mode]).

  While the reason it fails is not understood yet (the same code runs fine on
  the OMAP2430, ARM1136 as well), partially revert that change for platforms
  that do not enter in HYP mode, preserving the new feature and restoring a
  working kernel on the OMAP2420.

  Reported-by: Tony Lindgren <tony@atomide.com>
  Acked-by: Nicolas Pitre <nico@linaro.org>
  Tested-by: Tony Lindgren <tony@atomide.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'hyp-boot-mode-rmk' of
  git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into devel-stable
  Russell King, 2012-09-30 (1 file changed, -0/+28)

* ARM: virt: allow the kernel to be entered in HYP mode
  Dave Martin, 2012-09-19 (1 file changed, -0/+28)

  This patch does two things:

  * Ensure that asynchronous aborts are masked at kernel entry.  The
    bootloader should be masking these anyway, but this reduces the damage
    window just in case it doesn't.

  * Enter svc mode via exception return to ensure that CPU state is properly
    serialised.  This does not matter when switching from an ordinary
    privileged mode ("PL1" modes in ARMv7-AR rev C parlance), but it
    potentially does matter when switching from another privileged mode such
    as hyp mode.

  This should allow the kernel to boot safely either from svc mode or hyp
  mode, even if no support for use of the ARM Virtualization Extensions is
  built into the kernel.

  Signed-off-by: Dave Martin <dave.martin@linaro.org>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

* ARM: 7527/1: uaccess: explicitly check __user pointer when !CPU_USE_DOMAINS
  Russell King, 2012-09-09 (1 file changed, -0/+8)

  The {get,put}_user macros don't perform range checking on the provided
  __user address when !CPU_HAS_DOMAINS.

  This patch reworks the out-of-line assembly accessors to check the user
  address against a specified limit, returning -EFAULT if it is out of range.

  [will: changed get_user register allocation to match put_user]
  [rmk: fixed building on older ARM architectures]

  Reported-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Cc: stable@vger.kernel.org
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
  Linus Torvalds, 2012-03-29 (1 file changed, -0/+2)

  Pull "ARM: cleanups of io includes" from Olof Johansson:
   "Rob Herring has done a sweeping change cleaning up all of the mach/io.h
    includes, moving some of the oft-repeated macros to a common location and
    removing a bunch of boiler plate.  This is another step closer to a
    common zImage for multiple platforms."

  Fix up various fairly trivial conflicts (<mach/io.h> removal vs changes
  around it, tegra localtimer.o is *still* gone, yadda-yadda).

  * tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (29 commits)
    ARM: tegra: Include assembler.h in sleep.S to fix build break
    ARM: pxa: use common IOMEM definition
    ARM: dma-mapping: convert ARCH_HAS_DMA_SET_COHERENT_MASK to kconfig symbol
    ARM: __io abuse cleanup
    ARM: create a common IOMEM definition
    ARM: iop13xx: fix missing declaration of iop13xx_init_early
    ARM: fix ioremap/iounmap for !CONFIG_MMU
    ARM: kill off __mem_pci
    ARM: remove bunch of now unused mach/io.h files
    ARM: make mach/io.h include optional
    ARM: clps711x: remove unneeded include of mach/io.h
    ARM: dove: add explicit include of dove.h to addr-map.c
    ARM: at91: add explicit include of hardware.h to uncompressor
    ARM: ep93xx: clean-up mach/io.h
    ARM: tegra: clean-up mach/io.h
    ARM: orion5x: clean-up mach/io.h
    ARM: davinci: remove unneeded mach/io.h include
    [media] davinci: remove includes of mach/io.h
    ARM: OMAP: Remove remaining includes for mach/io.h
    ARM: msm: clean-up mach/io.h
    ...

* ARM: create a common IOMEM definition
  Rob Herring, 2012-03-13 (1 file changed, -0/+2)

  Several platforms create IOMEM defines for casting to 'void __iomem *', and
  other platforms are incorrectly using the __io() macro for the same
  purpose.  This creates a common definition and removes all the platform
  specific versions.

  Rather than try to make linux/io.h and asm/io.h assembly safe, the assembly
  version of IOMEM is moved into asm/assembler.h.

  Signed-off-by: Rob Herring <rob.herring@calxeda.com>
  Cc: Russell King <linux@arm.linux.org.uk>
  Cc: Sekhar Nori <nsekhar@ti.com>
  Cc: Kevin Hilman <khilman@ti.com>
  Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
  Cc: Ryan Mallon <rmallon@gmail.com>
  Cc: Eric Miao <eric.y.miao@gmail.com>
  Cc: Haojian Zhuang <haojian.zhuang@marvell.com>
  Acked-by: David Brown <davidb@codeaurora.org>
  Cc: Daniel Walker <dwalker@fifo99.com>
  Cc: Bryan Huntsman <bryanh@codeaurora.org>
  Cc: Sascha Hauer <kernel@pengutronix.de>
  Cc: Shawn Guo <shawn.guo@linaro.org>
  Acked-by: Tony Lindgren <tony@atomide.com>
  Acked-by: Paul Walmsley <paul@pwsan.com>
  Acked-by: Viresh Kumar <viresh.kumar@st.com>
  Cc: Rajeev Kumar <rajeev-dlh.kumar@st.com>
  Cc: Colin Cross <ccross@android.com>
  Cc: Olof Johansson <olof@lixom.net>
  Cc: Stephen Warren <swarren@nvidia.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Arnd Bergmann <arnd@arndb.de>

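  The common definition is tiny; a sketch of its shape (the exact C cast in
  the tree may include additional annotations such as __force):

#ifndef __ASSEMBLY__
#define IOMEM(x)        ((void __iomem *)(x))   /* C: annotate the address for sparse */
#else
#define IOMEM(x)        (x)                     /* assembly: the address is used as-is */
#endif
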
* ARM: 7325/1: fix v7 boot with lockdep enabled
  Rabin Vincent, 2012-02-15 (1 file changed, -0/+5)

  Bootup with lockdep enabled has been broken on v7 since b46c0f74657d
  ("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR").

  This is because v7_setup (which is called very early during boot) calls
  v7_flush_dcache_all, and the save_and_disable_irqs added by that patch ends
  up attempting to call into lockdep C code (trace_hardirqs_off()) when we
  are in no position to execute it (no stack, MMU off).

  Fix this by using a notrace variant of save_and_disable_irqs.  The code
  already uses the notrace variant of restore_irqs.

  Reviewed-by: Nicolas Pitre <nico@linaro.org>
  Acked-by: Stephen Boyd <sboyd@codeaurora.org>
  Cc: Catalin Marinas <catalin.marinas@arm.com>
  Cc: stable@vger.kernel.org
  Signed-off-by: Rabin Vincent <rabin@rab.in>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: 7301/1: Rename the T() macro to TUSER() to avoid namespace conflicts
  Catalin Marinas, 2012-01-25 (1 file changed, -2/+2)

  This macro is used to generate unprivileged accesses (LDRT/STRT) to user
  space.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Acked-by: Nicolas Pitre <nico@linaro.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

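  The macro simply appends the 't' (unprivileged) suffix when domain-based
  protection is in use; a sketch:

#ifdef CONFIG_CPU_USE_DOMAINS
#define TUSER(instr)    instr ## t      /* ldrt/strt: access with user privileges */
#else
#define TUSER(instr)    instr           /* plain ldr/str when domains are not used */
#endif

  so TUSER(ldr) r0, [r1] expands to ldrt or ldr depending on the
  configuration.
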
* ARM: LPAE: add ISBs around MMU enabling code
  Will Deacon, 2011-12-08 (1 file changed, -0/+11)

  Before we enable the MMU, we must ensure that the TTBR registers contain
  sane values.  After the MMU has been enabled, we jump to the *virtual*
  address of the following function, so we also need to ensure that the SCTLR
  write has taken effect.

  This patch adds ISB instructions around the SCTLR write to ensure the
  visibility of the above.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

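  The barrier is wrapped in a small assembler.h helper so that pre-v7 cores
  get the CP15 equivalent; a sketch of that helper:

        .macro  instr_sync
#if __LINUX_ARM_ARCH__ >= 7
        isb                             @ architectural instruction barrier
#elif __LINUX_ARM_ARCH__ == 6
        mcr     p15, 0, r0, c7, c5, 4   @ CP15 "flush prefetch buffer" on ARMv6
#endif
        .endm

  The MMU-enable path can then issue instr_sync before the SCTLR write (so
  the TTBR values are visible) and again after it (so the write has taken
  effect before fetching from the virtual address).
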
* ARM: assembler.h: Add string declaration macro
  Dave Martin, 2011-07-07 (1 file changed, -0/+9)

  Declaring strings in assembler source involves a certain amount of tedious
  boilerplate code in order to annotate the resulting symbol correctly.

  Encapsulating this boilerplate in a macro should help to avoid some
  duplication and the occasional mistake.

  Signed-off-by: Dave Martin <dave.martin@linaro.org>
  Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>

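  The boilerplate being wrapped is the .type/.asciz/.size triple; a sketch of
  the macro plus a usage line (the symbol name in the example is
  hypothetical):

        .macro  string name:req, string
        .type   \name , #object         @ mark the symbol as data, not code
\name:
        .asciz  "\string"               @ NUL-terminated contents
        .size   \name , . - \name       @ record the symbol size for tools
        .endm

        @ usage:
        string  cpu_name_string, "generic armv7"
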
* ARM: 6959/1: SMP build fix for entry-macro-multi.S
  Magnus Damm, 2011-06-17 (1 file changed, -0/+4)

  The assembly code in entry-macro-multi.S does not build without including
  asm/assembler.h in the case of CONFIG_SMP=y.

  Fixes the rather theoretical SMP build of mach-shmobile/entry-intc.c:

    arch/arm/include/asm/entry-macro-multi.S: Assembler messages:
    arch/arm/include/asm/entry-macro-multi.S:20: Error: bad instruction `alt_smp(test_for_ipi r0,r6,r5,lr)'
    arch/arm/include/asm/entry-macro-multi.S:20: Error: bad instruction `alt_up_b(9997f)'
    make[1]: *** [arch/arm/mach-shmobile/entry-intc.o] Error 1
    make: *** [arch/arm/mach-shmobile] Error 2
    make: *** Waiting for unfinished jobs....

  Signed-off-by: Magnus Damm <damm@opensource.se>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'smp' into misc
  Russell King, 2011-01-06 (1 file changed, -4/+20)

  Conflicts:
    arch/arm/kernel/entry-armv.S
    arch/arm/mm/ioremap.c

* ARM: 6516/1: Allow SMP_ON_UP to work with Thumb-2 kernels.
  Dave Martin, 2010-12-20 (1 file changed, -3/+19)

  * __fixup_smp_on_up has been modified with support for the THUMB2_KERNEL
    case.  For THUMB2_KERNEL only, fixups are split into halfwords in case of
    misalignment, since we can't rely on unaligned accesses working before
    turning the MMU on.

    No attempt is made to optimise the aligned case, since the number of
    fixups is typically small, and it seems best to keep the code as simple
    as possible.

  * Add a rotate in the fixup_smp code in order to support CPU_BIG_ENDIAN, as
    suggested by Nicolas Pitre.

  * Add an assembly-time sanity-check to ALT_UP() to ensure that the content
    really is the right size (4 bytes).

    (No check is done for ALT_SMP().  Possibly, this could be fixed by
    splitting the two uses of ALT_SMP() (ALT_SMP...SMP_UP versus
    ALT_SMP...SMP_UP_B) into two macros.  In the first case, ALT_SMP needs to
    expand to >= 4 bytes, not == 4.)

  * smp_mpidr.h (which implements ALT_SMP()/ALT_UP() manually due to macro
    limitations) has not been modified: the affected instruction (mov) has no
    16-bit encoding, so the correct instruction size is satisfied in this
    case.

  * A "mode" parameter has been added to smp_dmb:

      smp_dmb arm   @ assumes 4-byte instructions (for ARM code, e.g. kuser)
      smp_dmb       @ uses W() to ensure 4-byte instructions for ALT_SMP()

    This avoids assembly failures due to use of W() inside smp_dmb, when
    assembling pure-ARM code in the vectors page.

    There might be a better way to achieve this.

  * Kconfig: make SMP_ON_UP depend on (!THUMB2_KERNEL || !BIG_ENDIAN), i.e.
    THUMB2_KERNEL is now supported, but only if !BIG_ENDIAN (the fixup code
    for Thumb-2 currently assumes little-endian order).

  Tested using a single generic realview kernel on:
    ARM RealView PB-A8 (CONFIG_THUMB2_KERNEL={n,y})
    ARM RealView PBX-A9 (SMP)

  Signed-off-by: Dave Martin <dave.martin@linaro.org>
  Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

| * ARM: 6489/1: thumb2: fix incorrect optimisation in usraccWill Deacon2010-11-211-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 8b592783 added a Thumb-2 variant of usracc which, when it is called with \rept=2, calls usraccoff once with an offset of 0 and secondly with a hard-coded offset of 4 in order to avoid incrementing the pointer again. If \inc != 4 then we will store the data to the wrong offset from \ptr. Luckily, the only caller that passes \rept=2 to this function is __clear_user so we haven't been actively corrupting user data. This patch fixes usracc to pass \inc instead of #4 to usraccoff when it is called a second time. Cc: <stable@kernel.org> Reported-by: Tony Thompson <tony.thompson@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 6384/1: Remove the domain switching on ARMv6k/v7 CPUs
  Catalin Marinas, 2010-11-04 (1 file changed, -6/+7)

  This patch removes the domain switching functionality via the set_fs and
  __switch_to functions on cores that have a TLS register.

  Currently, the ioremap and vmalloc areas share the same level 1 page tables
  and therefore have the same domain (DOMAIN_KERNEL).  When the kernel domain
  is modified from Client to Manager (via the __set_fs or in the __switch_to
  function), the XN (eXecute Never) bit is overridden and newer CPUs can
  speculatively prefetch the ioremap'ed memory.

  Linux performs the kernel domain switching to allow user-specific functions
  (copy_to/from_user, get/put_user etc.) to access kernel memory.  In order
  for these functions to work with the kernel domain set to Client, the patch
  modifies the LDRT/STRT and related instructions to the LDR/STR ones.

  The user pages access rights are also modified for kernel read-only access
  rather than read/write so that the copy-on-write mechanism still works.
  CPU_USE_DOMAINS gets disabled only if the hardware has a TLS register
  (CPU_32v6K is defined) since writing the TLS value to the high vectors page
  isn't possible.

  The user addresses passed to the kernel are checked by the access_ok()
  function so that they do not point to the kernel space.

  Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
  Cc: Tony Lindgren <tony@atomide.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* ARM: Allow SMP kernels to boot on UP systems
  Russell King, 2010-10-04 (1 file changed, -2/+25)

  UP systems do not implement all the instructions that SMP systems have, so
  in order to boot a SMP kernel on a UP system, we need to rewrite parts of
  the kernel.

  Do this using an 'alternatives' scheme, where the kernel code and data is
  modified prior to initialization to replace the SMP instructions, thereby
  rendering the problematical code ineffectual.  We use the linker to
  generate a list of 32-bit word locations and their replacement values, and
  run through these replacements when we detect a UP system.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

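  The assembler.h side of the scheme pairs each SMP instruction with a UP
  replacement recorded in a linker-collected section; a simplified sketch
  (section and label names as in the surrounding commits, but still only an
  illustration):

        .macro  ALT_SMP, instr
9998:   \instr                          @ the instruction a true SMP system runs
        .endm

        .macro  ALT_UP, instr
        .pushsection ".alt.smp.init", "a"
        .long   9998b                   @ address of the SMP instruction above
        \instr                          @ replacement patched in when a UP CPU is found
        .popsection
        .endm

  Code then writes ALT_SMP(...) immediately followed by ALT_UP(...), and the
  early boot fixup loop copies the UP word over the SMP one when it detects a
  uniprocessor system.
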
* ARM: fix build error in arch/arm/kernel/process.c
  Russell King, 2010-04-21 (1 file changed, -6/+6)

    /tmp/ccJ3ssZW.s: Assembler messages:
    /tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'

  This is caused because:

        .section .data
        .section .text
        .section .text
        .previous

  does not return us to the .text section, but the .data section; this makes
  use of .previous dangerous if the ordering of previous sections is not
  known.

  Fix up the other users of .previous; .pushsection and .popsection are a
  safer pairing to use than .section and .previous.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'for-rmk-2.6.32' of git://git.pengutronix.de/git/ukl/linux-2.6
  into devel-stable
  Russell King, 2009-08-15 (1 file changed, -5/+44)

* Complete irq tracing support for ARM
  Uwe Kleine-König, 2009-08-13 (1 file changed, -5/+44)

  Before this patch, enabling and disabling irqs in assembler code and by the
  hardware wasn't tracked completely.

  I had to transpose two instructions in arch/arm/lib/bitops.h because
  restore_irqs doesn't preserve the flags with CONFIG_TRACE_IRQFLAGS=y.

  Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Cc: Russell King <linux@arm.linux.org.uk>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Ingo Molnar <mingo@redhat.com>
  Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>

* Thumb-2: Implement the unified arch/arm/lib functions
  Catalin Marinas, 2009-07-24 (1 file changed, -0/+73)

  This patch adds the ARM/Thumb-2 unified support for the arch/arm/lib/*
  files.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

* Thumb-2: Implementation of the unified start-up and exceptions code
  Catalin Marinas, 2009-07-24 (1 file changed, -0/+11)

  This patch implements the ARM/Thumb-2 unified kernel start-up and exception
  handling code.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

* [ARM] barriers: improve xchg, bitops and atomic SMP barriers
  Russell King, 2009-05-28 (1 file changed, -0/+13)

  Mathieu Desnoyers pointed out that the ARM barriers were lacking:

  - cmpxchg, xchg and atomic add return need memory barriers on architectures
    which can reorder the relative order in which memory read/writes can be
    seen between CPUs, which seems to include recent ARM architectures.
    Those barriers are currently missing on ARM.

  - test_and_xxx_bit were missing SMP barriers.

  So put these barriers in.  Provide separate atomic_add/atomic_sub
  operations which do not require barriers.

  Reported-Reviewed-and-Acked-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* [ARM] move include/asm-arm to arch/arm/include/asm
  Russell King, 2008-08-02 (1 file changed, -0/+116)

  Move platform independent header files to arch/arm/include/asm, leaving
  those in asm/arch* and asm/plat* alone.

  Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>