path: root/arch/arm64/kernel/entry.S
Commit message | Author | Age | Files | Lines
* arm64: Bug fix in stack alignment exception (ChiaHao, 2014-06-18, 1 file, -1/+0)

  The value of ESR has already been stored in x1 and should be passed
  directly to do_sp_pc_abort; "MOV x1, x25" is an extra operation that
  causes do_sp_pc_abort to receive the wrong ESR value.

  Signed-off-by: ChiaHao <andy.jhshiu@gmail.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Cc: <stable@vger.kernel.org>
* Merge tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm into upstream (Catalin Marinas, 2014-05-16, 1 file, -1/+1)

  FPSIMD register bank context switching and crypto algorithms
  optimisations for arm64 from Ard Biesheuvel.

  * tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm:
    arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions
    arm64: pull in <asm/simd.h> from asm-generic
    arm64/crypto: AES in CCM mode using ARMv8 Crypto Extensions
    arm64/crypto: AES using ARMv8 Crypto Extensions
    arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
    arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
    arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
    arm64: add support for kernel mode NEON in interrupt context
    arm64: defer reloading a task's FPSIMD state to userland resume
    arm64: add abstractions for FPSIMD state manipulation
    asm-generic: allow generic unaligned access if the arch supports it

  Conflicts:
    arch/arm64/include/asm/thread_info.h
* arm64: defer reloading a task's FPSIMD state to userland resume (Ard Biesheuvel, 2014-05-08, 1 file, -1/+1)

  If a task gets scheduled out and back in again and nothing has touched
  its FPSIMD state in the mean time, there is really no reason to reload
  it from memory. Similarly, repeated calls to kernel_neon_begin() and
  kernel_neon_end() will preserve and restore the FPSIMD state every
  time.

  This patch defers the FPSIMD state restore to the last possible moment,
  i.e., right before the task returns to userland. If a task does not
  return to userland at all (for any reason), the existing FPSIMD state
  is preserved and may be reused by the owning task if it gets scheduled
  in again on the same CPU.

  This patch adds two more functions to abstract away from straight
  FPSIMD register file saves and restores:
  - fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
  - fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state

  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
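  A rough C sketch of the deferred-restore idea described above (single-CPU
  simplification; the names below are placeholders for illustration, not the
  kernel's actual symbols):

    /* Sketch only: each CPU remembers whose FPSIMD state is live in its
     * registers, and the reload happens lazily on return to userland. */
    struct fpsimd_state { unsigned long vregs[64]; /* V0-V31, FPSR, FPCR */ };

    static struct fpsimd_state *fpsimd_last_state;  /* per-CPU in a real kernel */

    static void fpsimd_load_regs(struct fpsimd_state *st) { /* ldp q0..q31 */ }

    /* Called from the ret_to_user work loop, not from the context switch. */
    void fpsimd_restore_current_state(struct fpsimd_state *current_state)
    {
            if (fpsimd_last_state != current_state) {
                    fpsimd_load_regs(current_state);
                    fpsimd_last_state = current_state;
            }
            /* else: the registers already hold this task's state, skip it */
    }

    /* Invalidate any live register copy of a task's state (e.g. on exec). */
    void fpsimd_flush_task_state(struct fpsimd_state *st)
    {
            if (fpsimd_last_state == st)
                    fpsimd_last_state = NULL;
    }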
* arm64: split syscall_trace() into separate functions for enter/exit (AKASHI Takahiro, 2014-05-12, 1 file, -6/+4)

  As done on arm, this change makes it easy to confirm that we invoke the
  syscall-related hooks, including the syscall tracepoint, audit and
  seccomp (to be implemented later), in the correct order: operations are
  undone on exit in the opposite order to that in which they were done on
  entry.

  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
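  A hedged sketch of the resulting shape (hook placement follows the common
  kernel pattern; this is not the exact arm64 code):

    #include <linux/ptrace.h>
    #include <linux/tracehook.h>

    /* Runs before the syscall body, called from the entry.S syscall path. */
    asmlinkage int syscall_trace_enter(struct pt_regs *regs)
    {
            /* ptrace runs first on the way in; it may ask us to skip the call */
            if (test_thread_flag(TIF_SYSCALL_TRACE) &&
                tracehook_report_syscall_entry(regs))
                    return -1;
            /* later hooks (tracepoint, audit, seccomp) slot in after it */
            return regs->syscallno;
    }

    /* Runs after the syscall body; hooks unwind in the opposite order. */
    asmlinkage void syscall_trace_exit(struct pt_regs *regs)
    {
            /* later hooks (audit, tracepoint) run first on the way out */
            if (test_thread_flag(TIF_SYSCALL_TRACE))
                    tracehook_report_syscall_exit(regs, 0);  /* ptrace last */
    }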
* arm64: make a single hook to syscall_trace() for all syscall features (AKASHI Takahiro, 2014-05-12, 1 file, -2/+3)

  Currently syscall_trace() is called only for ptrace. With additional
  TIF_xx flags defined, it is now called for audit, ftrace and seccomp
  as well as ptrace.

  Acked-by: Richard Guy Briggs <rgb@redhat.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: debug: avoid accessing mdscr_el1 on fault paths where possible (Will Deacon, 2014-05-12, 1 file, -47/+26)

  Since mdscr_el1 is part of the debug register group, it is highly
  likely to be trapped by a hypervisor to prevent virtual machines from
  debugging (buggering?) each other. Unfortunately, this absolutely
  destroys our performance, since we access the register on many of our
  low-level fault handling paths to keep track of the various debug
  state machines.

  This patch removes our dependency on mdscr_el1 in the case that
  debugging is not being used. More specifically we:

  - Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and
    avoid disabling step in the MDSCR when we don't need to.
    MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from
    userspace.

  - Ensure debug exceptions are re-enabled on *all* exception entry
    paths, even the debug exception handling path (where we re-enable
    exceptions after invoking the handler). Since we can now rely on
    MDSCR_EL1.SS being cleared by the entry code, exception handlers can
    usually enable debug immediately before enabling interrupts.

  - Remove all debug exception unmasking from ret_to_user and
    el1_preempt, since we will never get here with debug exceptions
    masked.

  This results in a slight change to kernel debug behaviour, where we now
  step into interrupt handlers and data aborts from EL1 when debugging
  the kernel, which is actually a useful thing to do. A side-effect of
  this is that it *does* potentially prevent stepping off
  {break,watch}points when there is a high-frequency interrupt source
  (e.g. a timer), so a debugger would need to use either breakpoints or
  manually disable interrupts to get around this issue.

  With this patch applied, guest performance is restored under KVM when
  debug register accesses are trapped (and we get a measurable
  performance increase on the host on Cortex-A57 too).

  Cc: Ian Campbell <ian.campbell@citrix.com>
  Tested-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
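  The central idea, as a pseudocode-level C sketch (the real logic lives in
  assembly macros in entry.S; the helper names here are made up for
  illustration): mdscr_el1 is only touched when the task is actually being
  single-stepped, so ordinary fault paths never perform the trapped register
  access.

    static inline void user_entry_disable_step(struct task_struct *tsk)
    {
            if (!test_tsk_thread_flag(tsk, TIF_SINGLESTEP))
                    return;                 /* common case: no mdscr_el1 access */

            /* hypothetical helpers standing in for mrs/msr mdscr_el1 */
            u64 mdscr = mdscr_el1_read();
            mdscr_el1_write(mdscr & ~1UL);  /* clear MDSCR_EL1.SS (bit 0) */
    }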
* arm64: fix typo in entry.S (Neil Zhang, 2014-01-13, 1 file, -1/+1)

  Commit 64681787 (arm64: let the core code deal with preempt_count)
  changed the code but left the comments unchanged; fix them.

  Signed-off-by: Neil Zhang <zhangwm@marvell.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: support single-step and breakpoint handler hooks (Sandeepa Prabhu, 2013-12-19, 1 file, -0/+2)

  AArch64 single-step and breakpoint debug exceptions will be used by
  multiple debug frameworks such as kprobes and kgdb. This patch
  implements the hooks for those frameworks to register their own
  handlers for breakpoint and single-step events. The debug exception
  handler in entry.S is also reworked so that do_dbg routes software
  breakpoint (BRK64) exceptions to do_debug_exception().

  Signed-off-by: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
  Signed-off-by: Deepak Saxena <dsaxena@linaro.org>
  Acked-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: let the core code deal with preempt_count (Marc Zyngier, 2013-11-25, 1 file, -22/+7)

  Commit f27dde8deef3 (sched: Add NEED_RESCHED to the preempt_count)
  introduced the use of bit 31 in preempt_count for obscure scheduling
  purposes. This causes interrupts taken from EL0 to hit the (open
  coded) BUG when this flag is flipped while handling the interrupt (we
  compare the values before and after, and kill the kernel if they are
  different).

  The fix is to stop messing with the preempt count entirely, as this is
  already being dealt with in the generic code (irq_enter/irq_exit).

  Tested on a dual A53 FPGA running cyclictest.

  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
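  For context, the generic IRQ path already does the preempt_count
  accounting that the assembly used to open-code; roughly (a sketch of the
  generic pattern, not the arm64 entry code itself):

    #include <linux/hardirq.h>
    #include <linux/irq.h>

    static void handle_one_irq(struct pt_regs *regs, unsigned int irq)
    {
            (void)regs;
            irq_enter();          /* bumps the HARDIRQ part of preempt_count */
            generic_handle_irq(irq);
            irq_exit();           /* drops it, runs softirqs, may reschedule */
    }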
* arm64: fix access to preempt_count from assembly code (Marc Zyngier, 2013-11-05, 1 file, -11/+11)

  preempt_count is defined as an int. Oddly enough, we access it as a
  64-bit value. Things become interesting when running a BE kernel and
  looking at the current CPU number, which is stored as an int next to
  preempt_count. Like in a per-cpu interrupt handler, for example...

  Using a 32-bit access fixes the issue for good.

  Cc: Matthew Leach <matthew.leach@arm.com>
  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
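  A userspace C sketch of why the 64-bit load goes wrong (the field layout is
  illustrative of thread_info, not copied from it):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct ti_like {
            int32_t preempt_count;   /* really an int */
            int32_t cpu;             /* stored immediately after it */
    };

    int main(void)
    {
            struct ti_like ti = { .preempt_count = 1, .cpu = 3 };
            uint64_t wide;

            /* what a 64-bit load of "preempt_count" actually reads */
            memcpy(&wide, &ti.preempt_count, sizeof(wide));
            printf("64-bit view: %#llx, 32-bit value: %d\n",
                   (unsigned long long)wide, ti.preempt_count);
            /* On a little-endian kernel the low half happens to be the right
             * word; on big-endian the halves swap and the neighbouring cpu
             * field pollutes the value. A 32-bit load fixes it. */
            return 0;
    }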
* arm64: mm: permit use of tagged pointers at EL0 (Will Deacon, 2013-09-03, 1 file, -0/+1)

  TCR.TBI0 can be used to cause hardware address translation to ignore
  the top byte of userspace virtual addresses. Whilst not especially
  useful in standard C programs, this can be used by JITs to "tag"
  pointers with various pieces of metadata.

  This patch enables this bit for AArch64 Linux, and adds a new file to
  Documentation/arm64/ which describes some potential caveats when using
  tagged virtual addresses.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
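  A small userspace illustration of what the feature allows (this only works
  on an arm64 kernel with TBI0 enabled, i.e. with this patch applied):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* stash a one-byte tag in bits 63:56 of a user pointer */
    static void *tag_pointer(void *p, uint8_t tag)
    {
            uintptr_t addr = (uintptr_t)p & ~(0xffULL << 56);
            return (void *)(addr | ((uintptr_t)tag << 56));
    }

    int main(void)
    {
            int *p = malloc(sizeof(*p));
            int *tagged = tag_pointer(p, 0x2a);

            *tagged = 42;        /* hardware ignores the top byte on access */
            printf("%d\n", *p);  /* prints 42 */
            free(p);
            return 0;
    }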
* arm64: Enable interrupts in the EL0 undef handler (Catalin Marinas, 2013-08-22, 1 file, -0/+2)

  do_undefinstr() has to be called with interrupts enabled, since it may
  read the instruction from the user address space, which could lead to
  a data abort and a subsequent might_sleep() warning in
  do_page_fault().

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Change kernel stack size to 16K (Feng Kan, 2013-07-26, 1 file, -1/+1)

  Written by Catalin Marinas, tested by APM on the Storm platform. This
  is needed because of failures encountered when running the SpecWeb
  benchmark.

  Signed-off-by: Feng Kan <fkan@apm.com>
  Acked-by: Kumar Sankaran <ksankaran@apm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
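  For reference, the corresponding definitions (values shown for
  illustration; the real ones live in asm/thread_info.h):

    #define THREAD_SIZE       16384                /* was 8192 before this change */
    #define THREAD_START_SP   (THREAD_SIZE - 16)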
* arm64: treat unhandled compat el0 traps as undef (Mark Rutland, 2013-05-31, 1 file, -0/+10)

  Currently, if a compat process reads or writes from/to a disabled
  cp15/cp14 register, the trap is not handled by the el0_sync_compat
  handler, and the kernel will head to bad_mode, where it will die(),
  and oops(). For 64-bit processes, disabled system register accesses
  are currently treated as unhandled instructions.

  This patch modifies entry.S to treat these unhandled traps as
  undefined instructions, sending a SIGILL to userspace. This gives
  processes a chance to handle the trap and stop using inaccessible
  registers, and prevents further issues in the kernel as a result of
  the die().

  Reported-by: Johannes Jensen <Johannes.Jensen@arm.com>
  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: add explicit symbols to ESR_EL1 decoding (Marc Zyngier, 2013-04-17, 1 file, -26/+27)

  The ESR_EL1 decoding process is a bit cryptic, and KVM also needs the
  same constants. Add a new esr.h file containing the appropriate
  exception class constants, and change entry.S to use it. Fix a small
  bug in the EL1 breakpoint check while we're at it.

  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
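  A sketch of the kind of constants such a header provides (a few
  representative exception-class values from the ARMv8 architecture; the
  exact macro names in esr.h may differ):

    #define ESR_EC_SHIFT      26      /* EC field lives in ESR[31:26] */
    #define ESR_EC_SVC64      0x15    /* SVC from AArch64 state */
    #define ESR_EC_DABT_EL0   0x24    /* data abort from a lower EL */
    #define ESR_EC_BRK64      0x3c    /* BRK instruction (software breakpoint) */

    #define esr_to_ec(esr)    (((esr) >> ESR_EC_SHIFT) & 0x3f)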
* arm64: switch to generic sigaltstack (Al Viro, 2013-02-14, 1 file, -5/+0)

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* arm64: move vector entry macro to assembler.h (Marc Zyngier, 2012-12-05, 1 file, -4/+0)

  This macro is also useful to other bits defining vectors (hypervisor
  stub, KVM...). Move it to a common location.

  Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: get rid of fork/vfork/clone wrappers (Al Viro, 2012-10-22, 1 file, -5/+0)

  [fixes from Catalin Marinas folded]

  Acked-by: Catalin Marinas <catalin.marinas@arm.com>
  Tested-by: Catalin Marinas <catalin.marinas@arm.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* arm64: Use generic sys_execve() implementation (Catalin Marinas, 2012-10-17, 1 file, -5/+0)

  This patch converts the arm64 port to use the generic sys_execve()
  implementation, removing the arm64-specific (compat_)sys_execve_wrapper()
  functions.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Use generic kernel_execve() implementation (Catalin Marinas, 2012-10-17, 1 file, -1/+1)

  This patch enables CONFIG_GENERIC_KERNEL_EXECVE on arm64 and removes
  the arm64-specific implementation of kernel_execve().

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Use generic kernel_thread() implementation (Catalin Marinas, 2012-10-17, 1 file, -1/+4)

  This patch enables CONFIG_GENERIC_KERNEL_THREAD on arm64, changes
  copy_thread() to cope with kernel thread creation and adapts
  ret_from_fork accordingly. The arm64-specific kernel_thread
  implementation is no longer needed.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Do not include asm/unistd32.h in asm/unistd.h (Catalin Marinas, 2012-10-11, 1 file, -0/+1)

  This patch only includes asm/unistd32.h where necessary and removes
  its inclusion in the asm/unistd.h file. The __SYSCALL_COMPAT guard is
  dropped.

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Acked-by: Will Deacon <will.deacon@arm.com>
* arm64: Enable interrupts before calling do_notify_resume() (Catalin Marinas, 2012-10-08, 1 file, -0/+1)

  The task_work_run() implementation used to have the side effect of
  enabling interrupts. With commit ac3d0da8 (task_work: Make
  task_work_add() lockless), interrupts are no longer enabled there,
  revealing the bug in the arch code. This patch enables interrupts
  explicitly before calling do_notify_resume().

  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: Exception handling (Catalin Marinas, 2012-09-17, 1 file, -0/+695)

  The patch contains the exception entry code (kernel/entry.S), pt_regs
  structure and related accessors, undefined instruction trapping and
  stack tracing.

  AArch64 Linux kernel (including kernel threads) runs in EL1 mode using
  the SP1 stack. The vectors don't have a fixed address, only alignment
  (2^11) requirements.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  Acked-by: Tony Lindgren <tony@atomide.com>
  Acked-by: Nicolas Pitre <nico@linaro.org>
  Acked-by: Olof Johansson <olof@lixom.net>
  Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Acked-by: Arnd Bergmann <arnd@arndb.de>