path: root/arch/mips/kernel
Commit message (Author, Date; Files changed, Lines -/+)
* MIPS: Fix build with binutils 2.24.51+ (Manuel Lauss, 2014-11-07; 7 files, -9/+58)
Starting with version 2.24.51.20140728, MIPS binutils complains loudly about mixing soft-float and hard-float object files, leading to this build failure since GCC is invoked with "-msoft-float" on MIPS:

    {standard input}: Warning: .gnu_attribute 4,3 requires `softfloat'
      LD      arch/mips/alchemy/common/built-in.o
    mipsel-softfloat-linux-gnu-ld: Warning: arch/mips/alchemy/common/built-in.o
    uses -msoft-float (set by arch/mips/alchemy/common/prom.o),
    arch/mips/alchemy/common/sleeper.o uses -mhard-float

To fix this, we detect whether GAS is new enough to support the "-msoft-float" command option, and if it is, we let GCC pass it to GAS; but then we also need to sprinkle the files which make use of floating-point registers with the necessary ".set hardfloat" directives.

Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
Cc: Linux-MIPS <linux-mips@linux-mips.org>
Cc: Matthew Fortune <Matthew.Fortune@imgtec.com>
Cc: Markos Chandras <Markos.Chandras@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/8355/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
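To illustrate what that sprinkling looks like, here is a minimal sketch (not the actual patch; the helper name is made up): any inline assembly that touches FP registers in a kernel built with -msoft-float has to tell GAS explicitly that hard-float instructions are intended.

    /* Sketch only: read the FP control/status register from C.  Without
     * ".set hardfloat", newer GAS run with -msoft-float rejects cfc1. */
    static inline unsigned int read_fcsr_sketch(void)
    {
            unsigned int fcsr;

            __asm__ __volatile__(
            "       .set    push            \n"
            "       .set    hardfloat       \n"
            "       cfc1    %0, $31         \n"
            "       .set    pop             \n"
            : "=r" (fcsr));

            return fcsr;
    }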
* MIPS: CMA: Do not reserve memory if not required (Zubair Lutfullah Kakakhel, 2014-10-29; 1 file, -1/+2)
Even if CMA is disabled, the for_each_memblock macro expands to run reserve_bootmem once, so reserve_bootmem attempts to reserve a region of size 0 at location 0. Add a check to avoid that.

The issue was highlighted during testing with EVA enabled. reserve_bootmem used to exit gracefully when asked to reserve a 0-size region at location 0 without EVA, but with EVA enabled the macros point to different addresses and the code triggers a BUG.

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Tested-by: Markos Chandras <markos.chandras@imgtec.com>
Tested-by: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/8231/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
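A minimal sketch of the guard the message describes, using the stock memblock/bootmem helpers (illustrative, not the literal diff):

    struct memblock_region *reg;

    /* Skip empty regions so reserve_bootmem() is never asked to
     * reserve 0 bytes at address 0 when CMA reserved nothing. */
    for_each_memblock(reserved, reg) {
            if (!reg->size)
                    continue;
            reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
    }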
* MIPS: Wire up bpf syscall. (Ralf Baechle, 2014-10-27; 4 files, -0/+4)
| | | | Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* MIPS: idle: Remove leftover __pastwait symbol and its references (Markos Chandras, 2014-10-23; 1 file, -3/+0)
The __pastwait symbol was only used by the address_is_in_r4k_wait_irqoff function, but this is no longer used since the SMTC removal in commit b633648c5ad3 ('MIPS: MT: Remove SMTC support'). That symbol also led to build failures under certain random configurations due to the way the compiler compiled the r4k_wait_irqoff function. If that function was called multiple times, the __pastwait symbol was redefined, breaking the build like this:

      CHK     include/generated/compile.h
      CC      arch/mips/kernel/idle.o
    {standard input}: Assembler messages:
    {standard input}:527: Error: symbol `__pastwait' is already defined

Link: http://www.linux-mips.org/cgi-bin/mesg.cgi?a=linux-mips&i=1244879922.24479.30.camel%40falcon
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/7791/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* Merge git://git.infradead.org/users/eparis/audit (Linus Torvalds, 2014-10-19; 1 file, -3/+1)
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull audit updates from Eric Paris: "So this change across a whole bunch of arches really solves one basic problem. We want to audit when seccomp is killing a process. seccomp hooks in before the audit syscall entry code. audit_syscall_entry took as an argument the arch of the given syscall. Since the arch is part of what makes a syscall number meaningful it's an important part of the record, but it isn't available when seccomp shoots the syscall... For most arch's we have a better way to get the arch (syscall_get_arch) So the solution was two fold: Implement syscall_get_arch() everywhere there is audit which didn't have it. Use syscall_get_arch() in the seccomp audit code. Having syscall_get_arch() everywhere meant it was a useless flag on the stack and we could get rid of it for the typical syscall entry. The other changes inside the audit system aren't grand, fixed some records that had invalid spaces. Better locking around the task comm field. Removing some dead functions and structs. Make some things static. Really minor stuff" * git://git.infradead.org/users/eparis/audit: (31 commits) audit: rename audit_log_remove_rule to disambiguate for trees audit: cull redundancy in audit_rule_change audit: WARN if audit_rule_change called illegally audit: put rule existence check in canonical order next: openrisc: Fix build audit: get comm using lock to avoid race in string printing audit: remove open_arg() function that is never used audit: correct AUDIT_GET_FEATURE return message type audit: set nlmsg_len for multicast messages. audit: use union for audit_field values since they are mutually exclusive audit: invalid op= values for rules audit: use atomic_t to simplify audit_serial() kernel/audit.c: use ARRAY_SIZE instead of sizeof/sizeof[0] audit: reduce scope of audit_log_fcaps audit: reduce scope of audit_net_id audit: arm64: Remove the audit arch argument to audit_syscall_entry arm64: audit: Add audit hook in syscall_trace_enter/exit() audit: x86: drop arch from __audit_syscall_entry() interface sparc: implement is_32bit_task sparc: properly conditionalize use of TIF_32BIT ...
| * ARCH: AUDIT: audit_syscall_entry() should not require the arch (Eric Paris, 2014-09-23; 1 file, -3/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | We have a function where the arch can be queried, syscall_get_arch(). So rather than have every single piece of arch specific code use and/or duplicate syscall_get_arch(), just have the audit code use the syscall_get_arch() code. Based-on-patch-by: Richard Briggs <rgb@redhat.com> Signed-off-by: Eric Paris <eparis@redhat.com> Cc: linux-alpha@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-ia64@vger.kernel.org Cc: microblaze-uclinux@itee.uq.edu.au Cc: linux-mips@linux-mips.org Cc: linux@lists.openrisc.net Cc: linux-parisc@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Cc: linux-s390@vger.kernel.org Cc: linux-sh@vger.kernel.org Cc: sparclinux@vger.kernel.org Cc: user-mode-linux-devel@lists.sourceforge.net Cc: linux-xtensa@linux-xtensa.org Cc: x86@kernel.org
* | Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus (Linus Torvalds, 2014-10-18; 2 files, -0/+30)
|\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull MIPS updates from Ralf Baechle: "This is the MIPS pull request for the next kernel: - Zubair's patch series adds CMA support for MIPS. Doing so it also touches ARM64 and x86. - remove the last instance of IRQF_DISABLED from arch/mips - updates to two of the MIPS defconfig files. - cleanup of how cache coherency bits are handled on MIPS and implement support for write-combining. - platform upgrades for Alchemy - move MIPS DTS files to arch/mips/boot/dts/" * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (24 commits) MIPS: ralink: remove deprecated IRQF_DISABLED MIPS: pgtable.h: Implement the pgprot_writecombine function for MIPS MIPS: cpu-probe: Set the write-combine CCA value on per core basis MIPS: pgtable-bits: Define the CCA bit for WC writes on Ingenic cores MIPS: pgtable-bits: Move the CCA bits out of the core's ifdef blocks MIPS: DMA: Add cma support x86: use generic dma-contiguous.h arm64: use generic dma-contiguous.h asm-generic: Add dma-contiguous.h MIPS: BPF: Add new emit_long_instr macro MIPS: ralink: Move device-trees to arch/mips/boot/dts/ MIPS: Netlogic: Move device-trees to arch/mips/boot/dts/ MIPS: sead3: Move device-trees to arch/mips/boot/dts/ MIPS: Lantiq: Move device-trees to arch/mips/boot/dts/ MIPS: Octeon: Move device-trees to arch/mips/boot/dts/ MIPS: Add support for building device-tree binaries MIPS: Create common infrastructure for building built-in device-trees MIPS: SEAD3: Enable DEVTMPFS MIPS: SEAD3: Regenerate defconfigs MIPS: Alchemy: DB1300: Add touch penirq support ...
| * | MIPS: cpu-probe: Set the write-combine CCA value on per core basis (Markos Chandras, 2014-09-22; 1 file, -0/+21)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Different cores use different CCA values to achieve write-combine memory writes. For cores that do not support write-combine we set the default value to CCA:2 (uncached, non-coherent) which is the default value as set by the kernel. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7402/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: DMA: Add cma support (Zubair Lutfullah Kakakhel, 2014-09-22; 1 file, -0/+9)
Add CMA support to the MIPS architecture. CMA uses memblock, whereas MIPS uses bootmem; bootmem is therefore informed about any regions reserved by memblock, and the DMA API is modified to use CMA-reserved memory regions when available.

Tested using cma_test, a simple driver that assigns blocks of memory from CMA-reserved sections.

Signed-off-by: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Cc: catalin.marinas@arm.com
Cc: will.deacon@arm.com
Cc: tglx@linutronix.de
Cc: mingo@redhat.com
Cc: hpa@zytor.com
Cc: arnd@arndb.de
Cc: gregkh@linuxfoundation.org
Cc: m.szyprowski@samsung.com
Cc: x86@kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-arch@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7360/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
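The DMA-side change amounts to preferring a CMA region when one exists and falling back to the page allocator otherwise; a hedged sketch (the function name is made up, the two allocator calls are the stock kernel APIs of that era):

    static struct page *mips_dma_alloc_pages_sketch(struct device *dev,
                                                    size_t size, gfp_t gfp)
    {
            int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
            struct page *page;

            /* CMA first, if a contiguous area was reserved at boot */
            page = dma_alloc_from_contiguous(dev, count, get_order(size));
            if (!page)
                    page = alloc_pages(gfp, get_order(size));

            return page;
    }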
* | | Merge branch 'for-3.18-consistent-ops' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu (Linus Torvalds, 2014-10-15; 3 files, -11/+11)
Pull percpu consistent-ops changes from Tejun Heo:
"Way back, before the current percpu allocator was implemented, static and dynamic percpu memory areas were allocated and handled separately and had their own accessors. The distinction has been gone for many years now; however, the now duplicate two sets of accessors remained with the pointer based ones - this_cpu_*() - evolving various other operations over time. During the process, we also accumulated other inconsistent operations.

This pull request contains Christoph's patches to clean up the duplicate accessor situation. __get_cpu_var() uses are replaced with this_cpu_ptr() and __this_cpu_ptr() with raw_cpu_ptr(). Unfortunately, the former sometimes is tricky thanks to C being a bit messy with the distinction between lvalues and pointers, which led to a rather ugly solution for cpumask_var_t involving the introduction of this_cpu_cpumask_var_ptr().

This converts most of the uses but not all. Christoph will follow up with the remaining conversions in this merge window and hopefully remove the obsolete accessors"

* 'for-3.18-consistent-ops' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (38 commits)
  irqchip: Properly fetch the per cpu offset
  percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t -fix
  ia64: sn_nodepda cannot be assigned to after this_cpu conversion. Use __this_cpu_write.
  percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t
  Revert "powerpc: Replace __get_cpu_var uses"
  percpu: Remove __this_cpu_ptr
  clocksource: Replace __this_cpu_ptr with raw_cpu_ptr
  sparc: Replace __get_cpu_var uses
  avr32: Replace __get_cpu_var with __this_cpu_write
  blackfin: Replace __get_cpu_var uses
  tile: Use this_cpu_ptr() for hardware counters
  tile: Replace __get_cpu_var uses
  powerpc: Replace __get_cpu_var uses
  alpha: Replace __get_cpu_var
  ia64: Replace __get_cpu_var uses
  s390: cio driver &__get_cpu_var replacements
  s390: Replace __get_cpu_var uses
  mips: Replace __get_cpu_var uses
  MIPS: Replace __get_cpu_var uses in FPU emulator.
  arm: Replace __this_cpu_ptr with raw_cpu_ptr
  ...
| * | | mips: Replace __get_cpu_var uses (Christoph Lameter, 2014-08-26; 3 files, -11/+11)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | __get_cpu_var() is used for multiple purposes in the kernel source. One of them is address calculation via the form &__get_cpu_var(x). This calculates the address for the instance of the percpu variable of the current processor based on an offset. Other use cases are for storing and retrieving data from the current processors percpu area. __get_cpu_var() can be used as an lvalue when writing data or on the right side of an assignment. __get_cpu_var() is defined as : #define __get_cpu_var(var) (*this_cpu_ptr(&(var))) __get_cpu_var() always only does an address determination. However, store and retrieve operations could use a segment prefix (or global register on other platforms) to avoid the address calculation. this_cpu_write() and this_cpu_read() can directly take an offset into a percpu area and use optimized assembly code to read and write per cpu variables. This patch converts __get_cpu_var into either an explicit address calculation using this_cpu_ptr() or into a use of this_cpu operations that use the offset. Thereby address calculations are avoided and less registers are used when code is generated. At the end of the patch set all uses of __get_cpu_var have been removed so the macro is removed too. The patch set includes passes over all arches as well. Once these operations are used throughout then specialized macros can be defined in non -x86 arches as well in order to optimize per cpu access by f.e. using a global register that may be set to the per cpu base. Transformations done to __get_cpu_var() 1. Determine the address of the percpu instance of the current processor. DEFINE_PER_CPU(int, y); int *x = &__get_cpu_var(y); Converts to int *x = this_cpu_ptr(&y); 2. Same as #1 but this time an array structure is involved. DEFINE_PER_CPU(int, y[20]); int *x = __get_cpu_var(y); Converts to int *x = this_cpu_ptr(y); 3. Retrieve the content of the current processors instance of a per cpu variable. DEFINE_PER_CPU(int, y); int x = __get_cpu_var(y) Converts to int x = __this_cpu_read(y); 4. Retrieve the content of a percpu struct DEFINE_PER_CPU(struct mystruct, y); struct mystruct x = __get_cpu_var(y); Converts to memcpy(&x, this_cpu_ptr(&y), sizeof(x)); 5. Assignment to a per cpu variable DEFINE_PER_CPU(int, y) __get_cpu_var(y) = x; Converts to __this_cpu_write(y, x); 6. Increment/Decrement etc of a per cpu variable DEFINE_PER_CPU(int, y); __get_cpu_var(y)++ Converts to __this_cpu_inc(y) Cc: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Christoph Lameter <cl@linux.com> Signed-off-by: Tejun Heo <tj@kernel.org>
* | | | Merge branch 'x86-seccomp-for-linus' of ↵ (Linus Torvalds, 2014-10-14; 1 file, -1/+1)
|\ \ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 seccomp changes from Ingo Molnar: "This tree includes x86 seccomp filter speedups and related preparatory work, which touches core seccomp facilities as well. The main idea is to split seccomp into two phases, to be able to enter a simple fast path for syscalls with ptrace side effects. There's no substantial user-visible (and ABI) effects expected from this, except a change in how we emit a better audit record for SECCOMP_RET_TRACE events" * 'x86-seccomp-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86_64, entry: Use split-phase syscall_trace_enter for 64-bit syscalls x86_64, entry: Treat regs->ax the same in fastpath and slowpath syscalls x86: Split syscall_trace_enter into two phases x86, entry: Only call user_exit if TIF_NOHZ x86, x32, audit: Fix x32's AUDIT_ARCH wrt audit seccomp: Document two-phase seccomp and arch-provided seccomp_data seccomp: Allow arch code to provide seccomp_data seccomp: Refactor the filter callback and the API seccomp,x86,arm,mips,s390: Remove nr parameter from secure_computing
| * | | | seccomp,x86,arm,mips,s390: Remove nr parameter from secure_computing (Andy Lutomirski, 2014-09-03; 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The secure_computing function took a syscall number parameter, but it only paid any attention to that parameter if seccomp mode 1 was enabled. Rather than coming up with a kludge to get the parameter to work in mode 2, just remove the parameter. To avoid churn in arches that don't have seccomp filters (and may not even support syscall_get_nr right now), this leaves the parameter in secure_computing_strict, which is now a real function. For ARM, this is a bit ugly due to the fact that ARM conditionally supports seccomp filters. Fixing that would probably only be a couple of lines of code, but it should be coordinated with the audit maintainers. This will be a slight slowdown on some arches. The right fix is to pass in all of seccomp_data instead of trying to make just the syscall nr part be fast. This is a prerequisite for making two-phase seccomp work cleanly. Cc: Russell King <linux@arm.linux.org.uk> Cc: linux-arm-kernel@lists.infradead.org Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: linux-s390@vger.kernel.org Cc: x86@kernel.org Cc: Kees Cook <keescook@chromium.org> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Kees Cook <keescook@chromium.org>
| * | | | MIPS: CPS: Initialize EVA before bringing up VPEs from secondary cores (Markos Chandras, 2014-08-19; 1 file, -0/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The CPS code is doing several memory loads when configuring the VPEs from secondary cores, so the segmentation control registers must be initialized in time otherwise the kernel will crash with strange TLB exceptions. Reviewed-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Patchwork: http://patchwork.linux-mips.org/patch/7424/ Signed-off-by: James Hogan <james.hogan@imgtec.com>
| * | | | MIPS: scall64-o32: Fix indirect syscall detection (Markos Chandras, 2014-08-19; 1 file, -4/+8)
Commit 4c21b8fd8f14 (MIPS: seccomp: Handle indirect system calls (o32)) added indirect syscall detection for O32 processes running on MIPS64, but it did not work as expected. The reason is that the scall64-o32 implementation differs from scall32-o32. In the former, the v0 (syscall number) register contains the absolute syscall number (4000 + X) whereas in the latter it contains the relative syscall number (X). Fix the code to avoid doing an extra addition, and load the v0 register directly into the first argument for syscall_trace_enter. Moreover, set the .reorder assembler option in order to have better control over this part of the assembly code.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/7481/
Cc: <stable@vger.kernel.org> # v3.15+
Signed-off-by: James Hogan <james.hogan@imgtec.com>
| * | | | MIPS: perf: Mark pmu interrupt IRQF_NO_THREAD (Yang Wei, 2014-08-18; 1 file, -1/+1)
| |/ / / | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In RT kernel, I ran into the following calltrace, so PMU interrupts cannot be threaded in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0 INFO: lockdep is turned off. Call Trace: [<ffffffff8088595c>] dump_stack+0x1c/0x50 [<ffffffff801a958c>] __might_sleep+0x13c/0x148 [<ffffffff80891c54>] rt_spin_lock+0x3c/0xb0 [<ffffffff801ad29c>] __wake_up+0x3c/0x80 [<ffffffff80243ba4>] perf_event_wakeup+0x8c/0xf8 [<ffffffff80243c50>] perf_pending_event+0x40/0x78 [<ffffffff8023d88c>] irq_work_run+0x74/0xc0 [<ffffffff80152640>] mipsxx_pmu_handle_shared_irq+0x110/0x228 [<ffffffff8015276c>] mipsxx_pmu_handle_irq+0x14/0x30 [<ffffffff801ffda4>] handle_irq_event_percpu+0xbc/0x470 [<ffffffff80204478>] handle_percpu_irq+0x98/0xc8 [<ffffffff801ff284>] generic_handle_irq+0x4c/0x68 [<ffffffff8089748c>] do_IRQ+0x2c/0x48 [<ffffffff80105864>] plat_irq_dispatch+0x64/0xd0 [ralf@linux-mips.org: I don't see why based on this register dump the handler should be marked IRQF_NO_THREAD - but the handler is manipulating per-CPU resources so we don't want it to be rescheduled to another CPU.] Signed-off-by: Yang Wei <Wei.Yang@windriver.com> Cc: a.p.zijlstra@chello.nl Cc: paulus@samba.org Cc: mingo@redhat.com Cc: acme@kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7506/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: mcount: Adjust stack pointer for static trace in MIPS32 (Markos Chandras, 2014-09-26; 1 file, -0/+12)
Every mcount() call in the MIPS 32-bit kernel is done as follows:

    [...]
    move at, ra
    jal _mcount
    addiu sp, sp, -8
    [...]

but upon returning from the mcount() function, the stack pointer is not adjusted properly. This is explained in detail in 58b69401c797 (MIPS: Function tracer: Fix broken function tracing). Commit ad8c396936e3 ("MIPS: Unbreak function tracer for 64-bit kernel.") fixed the stack manipulation for 64-bit but it didn't fix it completely for MIPS32.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7792/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: Wire up new syscalls getrandom and memfd_create. (Ralf Baechle, 2014-08-26; 4 files, -0/+8)
| | | | | | | | | | | | Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: CPS: Initialize EVA before bringing up VPEs from secondary cores (Markos Chandras, 2014-08-26; 1 file, -0/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The CPS code is doing several memory loads when configuring the VPEs from secondary cores, so the segmentation control registers must be initialized in time otherwise the kernel will crash with strange TLB exceptions. Reviewed-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Patchwork: http://patchwork.linux-mips.org/patch/7424/ Signed-off-by: James Hogan <james.hogan@imgtec.com>
* | | MIPS: scall64-o32: Fix indirect syscall detection (Markos Chandras, 2014-08-26; 1 file, -4/+8)
Commit 4c21b8fd8f14 (MIPS: seccomp: Handle indirect system calls (o32)) added indirect syscall detection for O32 processes running on MIPS64, but it did not work as expected. The reason is that the scall64-o32 implementation differs from scall32-o32. In the former, the v0 (syscall number) register contains the absolute syscall number (4000 + X) whereas in the latter it contains the relative syscall number (X). Fix the code to avoid doing an extra addition, and load the v0 register directly into the first argument for syscall_trace_enter. Moreover, set the .reorder assembler option in order to have better control over this part of the assembly code.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: http://patchwork.linux-mips.org/patch/7481/
Cc: <stable@vger.kernel.org> # v3.15+
Signed-off-by: James Hogan <james.hogan@imgtec.com>
* | | MIPS: perf: Mark pmu interrupt IRQF_NO_THREAD (Yang Wei, 2014-08-26; 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In RT kernel, I ran into the following calltrace, so PMU interrupts cannot be threaded in_atomic(): 1, irqs_disabled(): 1, pid: 0, name: swapper/0 INFO: lockdep is turned off. Call Trace: [<ffffffff8088595c>] dump_stack+0x1c/0x50 [<ffffffff801a958c>] __might_sleep+0x13c/0x148 [<ffffffff80891c54>] rt_spin_lock+0x3c/0xb0 [<ffffffff801ad29c>] __wake_up+0x3c/0x80 [<ffffffff80243ba4>] perf_event_wakeup+0x8c/0xf8 [<ffffffff80243c50>] perf_pending_event+0x40/0x78 [<ffffffff8023d88c>] irq_work_run+0x74/0xc0 [<ffffffff80152640>] mipsxx_pmu_handle_shared_irq+0x110/0x228 [<ffffffff8015276c>] mipsxx_pmu_handle_irq+0x14/0x30 [<ffffffff801ffda4>] handle_irq_event_percpu+0xbc/0x470 [<ffffffff80204478>] handle_percpu_irq+0x98/0xc8 [<ffffffff801ff284>] generic_handle_irq+0x4c/0x68 [<ffffffff8089748c>] do_IRQ+0x2c/0x48 [<ffffffff80105864>] plat_irq_dispatch+0x64/0xd0 [ralf@linux-mips.org: I don't see why based on this register dump the handler should be marked IRQF_NO_THREAD - but the handler is manipulating per-CPU resources so we don't want it to be rescheduled to another CPU.] Signed-off-by: Yang Wei <Wei.Yang@windriver.com> Cc: a.p.zijlstra@chello.nl Cc: paulus@samba.org Cc: mingo@redhat.com Cc: acme@kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7506/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* | | MIPS: kdump: Set correct value to kexec_indirection_page variable (Yang Wei, 2014-08-25; 1 file, -2/+6)
Since there is no indirection page for the crash type, the value of the head field of the kimage structure is not the address of an indirection page but IND_DONE. So we have to set the kexec_indirection_page variable to the address of the head field of the image structure.

[ralf@linux-mips.org: Don't add pointless empty line, fix trailing whitespace damage.]

Signed-off-by: Yang Wei <Wei.Yang@windriver.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7499/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
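In code form the idea reads roughly as follows (a sketch built from the names in the message; the exact condition in the patch may differ):

    if (image->type == KEXEC_TYPE_CRASH)
            /* no indirection page: head simply holds IND_DONE */
            kexec_indirection_page = (unsigned long) &image->head;
    else
            kexec_indirection_page =
                    (unsigned long) phys_to_virt(image->head & PAGE_MASK);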
* | Merge branch 'signal-cleanup' of ↵ (Linus Torvalds, 2014-08-09; 4 files, -82/+51)
|\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc Pull arch signal handling cleanup from Richard Weinberger: "This patch series moves all remaining archs to the get_signal(), signal_setup_done() and sigsp() functions. Currently these archs use open coded variants of the said functions. Further, unused parameters get removed from get_signal_to_deliver(), tracehook_signal_handler() and signal_delivered(). At the end of the day we save around 500 lines of code." * 'signal-cleanup' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc: (43 commits) powerpc: Use sigsp() openrisc: Use sigsp() mn10300: Use sigsp() mips: Use sigsp() microblaze: Use sigsp() metag: Use sigsp() m68k: Use sigsp() m32r: Use sigsp() hexagon: Use sigsp() frv: Use sigsp() cris: Use sigsp() c6x: Use sigsp() blackfin: Use sigsp() avr32: Use sigsp() arm64: Use sigsp() arc: Use sigsp() sas_ss_flags: Remove nested ternary if Rip out get_signal_to_deliver() Clean up signal_delivered() tracehook_signal_handler: Remove sig, info, ka and regs ...
| * | mips: Use sigsp() (Richard Weinberger, 2014-08-06; 3 files, -9/+7)
| | | | | | | | | | | | | | | | | | Use sigsp() instead of the open coded variant. Signed-off-by: Richard Weinberger <richard@nod.at>
| * | mips: Use get_signal() signal_setup_done() (Richard Weinberger, 2014-08-06; 4 files, -78/+49)
| |/ | | | | | | | | | | | | Use the more generic functions get_signal() signal_setup_done() for signal delivery. Signed-off-by: Richard Weinberger <richard@nod.at>
* | Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus (Linus Torvalds, 2014-08-07; 25 files, -206/+501)
|\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull MIPS updates from Ralf Baechle: "This is the main pull request for 3.17. It contains: - misc Cavium Octeon, BCM47xx, BCM63xx and Alchemy updates - MIPS ptrace updates and cleanups - various fixes that will also go to -stable - a number of cleanups and small non-critical fixes. - NUMA support for the Loongson 3. - more support for MSA - support for MAAR - various FP enhancements and fixes" * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (139 commits) MIPS: jz4740: remove unnecessary null test before debugfs_remove MIPS: Octeon: remove unnecessary null test before debugfs_remove_recursive MIPS: ZBOOT: implement stack protector in compressed boot phase MIPS: mipsreg: remove duplicate MIPS_CONF4_FTLBSETS_SHIFT MIPS: Bonito64: remove a duplicate define MIPS: Malta: initialise MAARs MIPS: Initialise MAARs MIPS: detect presence of MAARs MIPS: define MAAR register accessors & bits MIPS: mark MSA experimental MIPS: Don't build MSA support unless it can be used MIPS: consistently clear MSA flags when starting & copying threads MIPS: 16 byte align MSA vector context MIPS: disable preemption whilst initialising MSA MIPS: ensure MSA gets disabled during boot MIPS: fix read_msa_* & write_msa_* functions on non-MSA toolchains MIPS: fix MSA context for tasks which don't use FP first MIPS: init upper 64b of vector registers when MSA is first used MIPS: save/disable MSA in lose_fpu MIPS: preserve scalar FP CSR when switching vector context ...
| * \ Merge branch '3.16-fixes' into mips-for-linux-next (Ralf Baechle, 2014-08-02; 8 files, -12/+71)
| |\ \
| | * | MIPS: smp-mt: Fix link error when PROC_FS=n (James Hogan, 2014-08-01; 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit d6d3c9afaab4 (MIPS: MT: proc: Add support for printing VPE and TC ids) causes a link error when CONFIG_PROC_FS=n: arch/mips/built-in.o: In function `proc_cpuinfo_notifier_init': smp-mt.c: undefined reference to `register_proc_cpuinfo_notifier' This is fixed by adding an ifdef around the procfs handling code in smp-mt.c. Signed-off-by: James Hogan <james.hogan@imgtec.com> Reported-by: Markos Chandras <markos.chandras@imgtec.com> Reviewed-by: Markos Chandras <markos.chandras@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # >= 3.15 Patchwork: https://patchwork.linux-mips.org/patch/7244/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| | * | MIPS: N32: Use compat getsockopt syscall (Sorin Dumitru, 2014-08-01; 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The IP_PKTOPTIONS sockopt puts control messages in option_values, these need to be handled differently in the compat case. This is already done through the MSG_CMSG_COMPAT flag, we just need to use compat_sys_getsockopt which sets that flag. Signed-off-by: Sorin Dumitru <sdumitru@ixiacom.com> Reviewed-by: James Hogan <james.hogan@imgtec.com> Cc: linux-kernel@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7115/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| | * | MIPS: APRP: Fix an issue when device_create() fails. (Sebastien Bourdelin, 2014-08-01; 2 files, -0/+6)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If a call to device_create() fails for a channel during the initialize loop, we need to clean the devices entries already created before leaving. Signed-off-by: Sebastien Bourdelin <sebastien.bourdelin@savoirfairelinux.com> Cc: linux-mips@linux-mips.org Cc: Steven J. Hill <Steven.Hill@imgtec.com> Cc: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com> Cc: John Crispin <blogic@openwrt.org> Cc: Qais Yousef <Qais.Yousef@imgtec.com> Cc: linux-kernel@vger.kernel.org Cc: Jerome Oufella <jerome.oufella@savoirfairelinux.com> Patchwork: https://patchwork.linux-mips.org/patch/7111/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| | * | MIPS: GIC: Prevent array overrun (Jeffrey Deans, 2014-07-31; 1 file, -2/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A GIC interrupt which is declared as having a GIC_MAP_TO_NMI_MSK mapping causes the cpu parameter to gic_setup_intr() to be increased to 32, causing memory corruption when pcpu_masks[] is written to again later in the function. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: stable@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7375/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| | * | MIPS: Remove BUG_ON(!is_fpu_owner()) in do_ade() (Huacai Chen, 2014-07-30; 1 file, -1/+0)
In do_ade(), is_fpu_owner() isn't preempt-safe. For example, when an unaligned ldc1 is executed, do_cpu() is called and then the FPU will be enabled (and TIF_USEDFPU will be set for the current process). Then, do_ade() is called because the access is unaligned. If the current process is preempted at this time, TIF_USEDFPU will be cleared. So when the process is scheduled again, BUG_ON(!is_fpu_owner()) is triggered.

This small program can trigger this BUG in a preemptible kernel:

    int main (int argc, char *argv[])
    {
            double u64[2];

            while (1) {
                    asm volatile (
                            ".set push \n\t"
                            ".set noreorder \n\t"
                            "ldc1 $f3, 4(%0) \n\t"
                            ".set pop \n\t"
                            ::"r"(u64):
                    );
            }

            return 0;
    }

V2: Remove the BUG_ON() unconditionally due to Paul's suggestion.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Jie Chen <chenj@lemote.com>
Signed-off-by: Rui Wang <wangr@lemote.com>
Cc: <stable@vger.kernel.org>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| | * | MIPS: ftrace: Fix dynamic tracing of kernel modules (Petri Gynther, 2014-07-30; 2 files, -9/+60)
Dynamic tracing of kernel modules is broken on 32-bit MIPS. When modules are loaded, the kernel crashes when dynamic tracing is enabled with:

    cd /sys/kernel/debug/tracing
    echo > set_ftrace_filter
    echo function > current_tracer

1) arch/mips/kernel/ftrace.c
When the kernel boots, or when a module is initialized, ftrace_make_nop() modifies every _mcount call site to eliminate the ftrace overhead. However, when ftrace is later enabled for a call site, ftrace_make_call() does not currently restore the _mcount call correctly for module call sites. Added ftrace_modify_code_2r() and modified ftrace_make_call() to fix this.

2) arch/mips/kernel/mcount.S
_mcount assembly routine is supposed to have the caller's _mcount call site address in register a0. However, a0 is currently not calculated correctly for module call sites. a0 should be (ra - 20) or (ra - 24), depending on whether the kernel was built with KBUILD_MCOUNT_RA_ADDRESS or not.

This fix has been tested on Broadcom BMIPS5000 processor. Dynamic tracing now works for both built-in functions and module functions.

Signed-off-by: Petri Gynther <pgynther@google.com>
Cc: linux-mips@linux-mips.org
Cc: rostedt@goodmis.org
Cc: alcooperx@gmail.com
Cc: cminyard@mvista.com
Patchwork: https://patchwork.linux-mips.org/patch/7476/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| | * | MIPS: Prevent user from setting FCSR cause bits (Paul Burton, 2014-07-30; 1 file, -1/+2)
| | |/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If one or more matching FCSR cause & enable bits are set in saved thread context then when that context is restored the kernel will take an FP exception. This is of course undesirable and considered an oops, leading to the kernel writing a backtrace to the console and potentially rebooting depending upon the configuration. Thus the kernel avoids this situation by clearing the cause bits of the FCSR register when handling FP exceptions and after emulating FP instructions. However the kernel does not prevent userland from setting arbitrary FCSR cause & enable bits via ptrace, using either the PTRACE_POKEUSR or PTRACE_SETFPREGS requests. This means userland can trivially cause the kernel to oops on any system with an FPU. Prevent this from happening by clearing the cause bits when writing to the saved FCSR context via ptrace. This problem appears to exist at least back to the beginning of the git era in the PTRACE_POKEUSR case. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: stable@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: Paul Burton <paul.burton@imgtec.com> Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/7438/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
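The fix boils down to stripping the cause bits from any user-supplied FCSR value before it lands in saved context; a sketch, assuming the standard FPU_CSR_ALL_X cause-bit mask:

    /* ptrace write path (PTRACE_POKEUSR / PTRACE_SETFPREGS): never let
     * userland plant FCSR cause bits that would fire on context restore,
     * where data is the word supplied by the tracer */
    child->thread.fpu.fcr31 = data & ~FPU_CSR_ALL_X;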
| * | MIPS: detect presence of MAARs (Paul Burton, 2014-08-02; 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Detect the presence of MAAR using the MRP bit in Config5, and record that presence using a CPU option bit. A cpu_has_maar macro will then allow code to conditionalise upon the presence of MAARs. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7330/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: consistently clear MSA flags when starting & copying threads (Paul Burton, 2014-08-02; 1 file, -0/+3)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The TIF_MSA_CTX_LIVE flag (indicating that a task has MSA context which needs to be preserved) was being cleared in start_thread, but the TIF_USEDMSA flag (indicating that a task has used MSA in this timeslice) was not. In copy_thread neither flag was cleared, but both need to be. Without clearing these flags the kernel will proceed to attempt to save MSA context when the task is context switched out, and if the task had not used MSA in the meantime then it will fail because MSA or the FPU are disabled. The end result is typically: do_cpu invoked from kernel context![#1]: CPU: 0 PID: 99 Comm: sh Not tainted 3.16.0-rc4-00025-g6dc9476-dirty #88 task: 8f23dc60 ti: 8f1d8000 task.ti: 8f1d8000 ... Call Trace: [<8010edbc>] resume+0x5c/0x280 [<80481e0c>] __schedule+0x370/0x800 [<80104838>] work_resched+0x8/0x2c Fix by consistently clearing both flags in both functions. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7309/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
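The consistent clearing described above is two flag operations applied in both start_thread() and copy_thread(); a sketch, with t being the task whose state is being (re)initialised:

    /* forget any MSA use and any live MSA context from the old image */
    clear_tsk_thread_flag(t, TIF_USEDMSA);
    clear_tsk_thread_flag(t, TIF_MSA_CTX_LIVE);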
| * | MIPS: disable preemption whilst initialising MSA (Paul Burton, 2014-08-02; 1 file, -3/+11)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Preemption must be disabled throughout the process of enabling the FPU, enabling MSA & initialising the vector registers. Without doing so it is possible to lose the FPU or MSA whilst initialising them causing that initialisation to fail. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7307/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: ensure MSA gets disabled during boot (Paul Burton, 2014-08-02; 1 file, -3/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The kernel relies upon MSA being disabled when a task begins running, so that it can initialise or restore context in response to the resulting MSA disabled exception. Previously the state of MSA following boot was left as it was before the kernel ran, where MSA could potentially have been enabled. Explicitly disable it during boot to prevent any problems. As a nice side effect the code reads a little better too. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7306/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: fix MSA context for tasks which don't use FP first (Paul Burton, 2014-08-02; 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If a task does not execute scalar FP instructions prior to using MSA then the flags indicating that the task has live MSA context were not being set. The upper 64b of each vector register would then be lost upon the tasks first context switch after using MSA. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7500/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: init upper 64b of vector registers when MSA is first used (Paul Burton, 2014-08-02; 2 files, -9/+35)
When a task first makes use of MSA we need to ensure that the upper 64b of the vector registers are set to some value such that no information can be leaked to it from the previous task to use MSA context on the CPU. The architecture formerly specified that these bits would be cleared to 0 when a scalar FP instruction wrote to the aliased FP registers, which would have implicitly handled this as the kernel restored scalar FP context. However, more recent versions of the specification now state that the value of the bits in such cases is unpredictable. Initialise them explicitly to be sure, and set all the bits to 1 rather than 0 for consistency with the least significant 64b.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7497/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: preserve scalar FP CSR when switching vector context (Paul Burton, 2014-08-02; 2 files, -1/+8)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Switching the vector context implicitly saves & restores the state of the aliased scalar FP data registers, however the scalar FP control & status register is distinct from the MSA control & status register. In order to allow scalar FP to function correctly in programs using MSA, the scalar CSR needs to be saved & restored along with the MSA vector context. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7301/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: save/restore MSACSR register on context switch (Paul Burton, 2014-08-02; 1 file, -0/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | I added a field for the MSACSR register in struct mips_fpu_struct, but never actually made use of it... This is a clear bug. Save and restore the MSACSR register along with the vector registers. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7300/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: GIC: Generalise check for pending interrupts (Jeffrey Deans, 2014-08-02; 1 file, -2/+11)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Move most of the functionality of gic_get_int() into a new function gic_get_int_mask() which takes a bitmask of interrupts in which the caller is interested, and returns the subset which are pending for the current CPU. This allows CP0 IRQ dispatch routines to check only the GIC interrupts which are routed to a particular CPU interrupt input. gic_get_int() is reimplemented using gic_get_int_mask() and is retained for use by any platforms for which gic_get_int() is sufficient. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7376/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: GIC: Prevent array overrun (Jeffrey Deans, 2014-08-02; 1 file, -2/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A GIC interrupt which is declared as having a GIC_MAP_TO_NMI_MSK mapping causes the cpu parameter to gic_setup_intr() to be increased to 32, causing memory corruption when pcpu_masks[] is written to again later in the function. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: stable@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7375/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: GIC: Remove GIC_FLAG_IPI (Jeffrey Deans, 2014-08-02; 1 file, -4/+3)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | irq-gic.c:gic_get_int() masks out interrupts from the pending set which aren’t in the pcpu_mask. Only interrupts marked with GIC_FLAG_IPI were set in pcpu_mask, meaning that peripheral interrupts also had to be marked as IPIs. Remove the use of GIC_FLAG_IPI and allow the flags member of struct gic_intr_map to be zero. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7374/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: GIC: move GIC interrupt bitmap declarations (Jeffrey Deans, 2014-08-02; 1 file, -0/+12)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Several bitmaps are declared in arch/mips/include/asm/gic.h, but the scope of their use is limited to arch/mips/kernel/irq-gic.c. Move the declarations from the header file to the C file. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7372/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: kernel: cpu-probe: Detect unique RI/XI exceptions (Leonid Yegoshin, 2014-08-02; 1 file, -0/+9)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Detect if the core supports unique exception codes for the Read-Inhibit and Execute-Inhibit exceptions and set the option accordingly. The RI/XI exception support is detected by setting the 27th bit (IEC) of the PageGrain C0 register and reading back the value of that register to verify the bit is enabled. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7340/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
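The probe is a write-then-read-back of the PageGrain IEC bit (bit 27); roughly as below, with the option name shown only as an illustration:

    /* try to turn on unique RI/XI exception codes, then see if it stuck */
    write_c0_pagegrain(read_c0_pagegrain() | PG_IEC);
    back_to_back_c0_hazard();
    if (read_c0_pagegrain() & PG_IEC)
            c->options |= MIPS_CPU_RIXIEX;  /* illustrative option name */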
| * | MIPS: Use dedicated exception handler if CPU supports RI/XI exceptions (Leonid Yegoshin, 2014-08-02; 1 file, -0/+7)
Use the regular tlb_do_page_fault_0 (no write) handler to handle the RI and XI exceptions. Also skip the RI/XI validation check in the TLB load handler since it's redundant when the CPU has unique RI/XI exceptions.

Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7339/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
| * | MIPS: kernel: cpu-probe: Add support for the HardWare Table Walker (Markos Chandras, 2014-08-02; 1 file, -0/+23)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Detect if the core implements the HTW and set the option accordingly. Also, add a new kernel parameter called 'nohtw' allowing the user to disable the htw support and fallback to the software refill handler. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7335/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
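The 'nohtw' switch follows the usual kernel-parameter pattern; a minimal sketch (the flag name is made up, the probe code would test it before setting the HTW option):

    static int htw_disabled_sketch;         /* made-up flag name */

    static int __init htw_disable(char *s)
    {
            htw_disabled_sketch = 1;
            return 1;
    }
    __setup("nohtw", htw_disable);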
| * | MIPS: cpu: Add new cpu option for Hardware Table Walker. (Markos Chandras, 2014-08-02; 1 file, -0/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Moreover, report hardware page table walker support as 'htw' in the ASE list of /proc/cpuinfo, if the core implements this feature. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7334/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>