path: root/arch/arm/mm/context.c
* ARM: Remove current_mm per-cpu variable (Catalin Marinas, 2012-04-17; 1 file, -11/+1)
    The current_mm variable was used to store the new mm between the
    switch_mm() and switch_to() calls, where an IPI to reset the context
    could have set the wrong mm. Since interrupts are disabled during the
    context switch, there is no need for this variable: current->active_mm
    already points to the current mm when interrupts are re-enabled.

    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
    Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
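    With this change, the IPI handler that re-allocates ASIDs after a
    rollover can derive the mm from the current task; a condensed,
    kernel-style sketch of the post-patch handler, not the literal code:

        static void reset_context(void *info)
        {
                unsigned int cpu = smp_processor_id();
                /* Safe because the IPI can only be delivered once the
                 * IRQs-off switch_mm()/switch_to() pair has completed,
                 * at which point current->active_mm is the running mm.
                 * Previously this read a per-cpu current_mm variable. */
                struct mm_struct *mm = current->active_mm;

                /* Give this CPU's mm an ASID from the new generation
                 * and reload the hardware context. */
                set_mm_context(mm, cpu_last_asid + cpu + 1);
                cpu_switch_mm(mm->pgd, mm);
        }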
* ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on ASID-capable CPUs (Catalin Marinas, 2012-04-17; 1 file, -2/+2)
    Since ASIDs must be unique to an mm across all the CPUs in a system,
    the __new_context() function needs to broadcast a context reset event
    to all the CPUs during ASID allocation if a roll-over occurred. Such
    IPIs cannot be issued with interrupts disabled, and ARM had to define
    __ARCH_WANT_INTERRUPTS_ON_CTXSW.

    This patch changes the check_context() function to
    check_and_switch_context(), called from switch_mm(). On ASID-capable
    CPUs (ARMv6 onwards), if a new ASID is needed and interrupts are
    disabled, it defers the __new_context() and cpu_switch_mm() calls to
    the post-lock switch hook, where interrupts are enabled. Setting the
    reserved TTBR0 was also moved to check_and_switch_context() from
    cpu_v7_switch_mm().

    Reviewed-by: Will Deacon <will.deacon@arm.com>
    Tested-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
    Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
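    The deferral mechanism reads roughly as follows. This is a condensed,
    kernel-style sketch of what the commit message describes, not the
    literal patch; names such as TIF_SWITCH_MM, ASID_BITS and
    cpu_set_reserved_ttbr0() follow the ARM code of this era.

        void check_and_switch_context(struct mm_struct *mm,
                                      struct task_struct *tsk)
        {
                /* Park TTBR0 on a table with no user mappings so nothing
                 * can be speculated under a stale ASID (moved here from
                 * cpu_v7_switch_mm). */
                cpu_set_reserved_ttbr0();

                if (!((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
                        /* ASID is from the current generation: switch now. */
                        cpu_switch_mm(mm->pgd, mm);
                else if (irqs_disabled())
                        /* Rollover needed, but we are inside the scheduler's
                         * IRQs-off window: defer to the hook below. */
                        set_ti_thread_flag(task_thread_info(tsk),
                                           TIF_SWITCH_MM);
                else
                        __new_context(mm);      /* may IPI the other CPUs */
        }

        /* Called by the scheduler once the runqueue lock is dropped and
         * interrupts are enabled again. */
        void finish_arch_post_lock_switch(void)
        {
                if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
                        struct mm_struct *mm = current->mm;
                        __new_context(mm);
                        cpu_switch_mm(mm->pgd, mm);
                }
        }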
* ARM: Use TTBR1 instead of reserved context ID (Will Deacon, 2012-04-17; 1 file, -18/+27)
    On ARMv7 CPUs that cache first level page table entries (like the
    Cortex-A15), using a reserved ASID while changing the TTBR or flushing
    the TLB is unsafe. This is because the CPU may cache the first level
    entry as the result of a speculative memory access while the reserved
    ASID is assigned. After the process owning the page tables dies, the
    memory will be reallocated and may be written with junk values which
    can be interpreted as global, valid PTEs by the processor. This will
    result in the TLB being populated with bogus global entries.

    This patch avoids the use of a reserved context ID in the v7 switch_mm
    and ASID rollover code by temporarily using the swapper_pg_dir pointed
    at by TTBR1, which contains only global entries that are not tagged
    with ASIDs.

    Reviewed-by: Frank Rowand <frank.rowand@am.sony.com>
    Tested-by: Marc Zyngier <Marc.Zyngier@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    [catalin.marinas@arm.com: add LPAE support]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
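    In C-with-inline-asm form, the resulting switch sequence looks roughly
    like this. The real code is assembly in proc-v7.S; the function name
    here is illustrative, but the coprocessor encodings are the
    architectural ones (TTBR0 = c2,c0,0; TTBR1 = c2,c0,1;
    CONTEXTIDR = c13,c0,1).

        /* Sketch: switch to new page tables without a reserved ASID. */
        static inline void v7_switch_mm_sketch(unsigned long pgd_phys,
                                               unsigned int asid)
        {
                unsigned long ttbr1;

                /* 1. Point TTBR0 at swapper_pg_dir (global mappings only,
                 *    untagged by ASID) so no user translation can be
                 *    speculated under the wrong ASID. */
                asm volatile("mrc p15, 0, %0, c2, c0, 1" : "=r" (ttbr1));
                asm volatile("mcr p15, 0, %0, c2, c0, 0" : : "r" (ttbr1));
                asm volatile("isb");

                /* 2. Now it is safe to install the new ASID. */
                asm volatile("mcr p15, 0, %0, c13, c0, 1" : : "r" (asid));
                asm volatile("isb");

                /* 3. Finally switch TTBR0 to the new process page tables. */
                asm volatile("mcr p15, 0, %0, c2, c0, 0" : : "r" (pgd_phys));
                asm volatile("isb");
        }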
* ARM: LPAE: Add context switching support (Catalin Marinas, 2011-12-08; 1 file, -2/+17)
    With LPAE, the TTBRx registers are 64-bit. The ASID is stored in TTBR0
    rather than a separate Context ID register. This patch makes the
    necessary changes to handle context switching on LPAE.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
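    A sketch of what moving the ASID into TTBR0 looks like; the field
    layout (8-bit ASID in bits [55:48] of the 64-bit TTBR0) is from the
    LPAE architecture, while the helper name is illustrative. Because the
    table base and the ASID now live in one register, a single 64-bit
    mcrr write switches both together.

        #include <stdint.h>

        static inline void lpae_set_ttbr0(uint64_t pgd_phys, uint8_t asid)
        {
                /* Base address in the low bits, ASID in [55:48]. */
                uint64_t ttbr0 = (pgd_phys & ((1ULL << 48) - 1)) |
                                 ((uint64_t)asid << 48);

                /* 64-bit TTBR0 write (%Q/%R = low/high 32-bit halves). */
                asm volatile("mcrr p15, 0, %Q0, %R0, c2" : : "r" (ttbr0));
                asm volatile("isb");
        }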
* locking, ARM: Annotate low level hw locks as raw (Thomas Gleixner, 2011-09-13; 1 file, -7/+7)
    Annotate the low-level hardware locks which must not be preempted. In
    mainline this change merely documents the low-level nature of the
    locks; there is no functional difference. Lockdep and sparse checking
    will work as usual.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Cc: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
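    In this file the annotation is a type change on the ASID lock plus the
    matching raw_spin_* calls; a minimal sketch of the pattern (the lock
    name matches the one in context.c):

        /* Before: static DEFINE_SPINLOCK(cpu_asid_lock);
         * On PREEMPT_RT an ordinary spinlock becomes a sleeping lock; the
         * raw variant always spins with preemption disabled, as required
         * around low-level ASID and TLB manipulation. */
        static DEFINE_RAW_SPINLOCK(cpu_asid_lock);

        static void asid_critical_section(void)
        {
                unsigned long flags;

                raw_spin_lock_irqsave(&cpu_asid_lock, flags);
                /* ... allocate or roll over ASIDs with IRQs off ... */
                raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
        }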
* Revert "ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks"Russell King2011-06-091-3/+3
    This reverts commit 45b95235b0ac86cef2ad4480b0618b8778847479.

    Will Deacon reports that:

        In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved
        context ID") I updated the ASID rollover code to use only the
        kernel page tables whilst updating the ASID. Unfortunately, the
        code to restore the user page tables was part of a later patch
        which isn't yet in mainline, so this leaves the code quite broken.

    We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
    from ARM, so let's revert these until we can properly sort out what
    we're doing with the context switching.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* Revert "ARM: 6943/1: mm: use TTBR1 instead of reserved context ID"Russell King2011-06-091-6/+5
    This reverts commit 52af9c6cd863fe37d1103035ec7ee22ac1296458.

    Will Deacon reports that:

        In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved
        context ID") I updated the ASID rollover code to use only the
        kernel page tables whilst updating the ASID. Unfortunately, the
        code to restore the user page tables was part of a later patch
        which isn't yet in mainline, so this leaves the code quite broken.

    We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
    from ARM, so let's revert these until we can properly sort out what
    we're doing with the ARM context switching.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks (Will Deacon, 2011-05-26; 1 file, -3/+3)
    Now that ASID 0 is no longer used as a reserved value, allow it to be
    allocated to tasks.

    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 6943/1: mm: use TTBR1 instead of reserved context ID (Will Deacon, 2011-05-26; 1 file, -5/+6)
    On ARMv7 CPUs that cache first level page table entries (like the
    Cortex-A15), using a reserved ASID while changing the TTBR or flushing
    the TLB is unsafe. This is because the CPU may cache the first level
    entry as the result of a speculative memory access while the reserved
    ASID is assigned. After the process owning the page tables dies, the
    memory will be reallocated and may be written with junk values which
    can be interpreted as global, valid PTEs by the processor. This will
    result in the TLB being populated with bogus global entries.

    This patch avoids the use of a reserved context ID in the v7 switch_mm
    and ASID rollover code by temporarily using the swapper_pg_dir pointed
    at by TTBR1, which contains only global entries that are not tagged
    with ASIDs.

    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
* ARM: 5905/1: ARM: Global ASID allocation on SMP (Catalin Marinas, 2010-02-15; 1 file, -14/+110)
    The current ASID allocation algorithm doesn't ensure the notification
    of the other CPUs when the ASID rolls over. This may lead to two
    processes using the same ASID (but different generation) or multiple
    threads of the same process using different ASIDs.

    This patch adds the broadcasting of the ASID rollover event to the
    other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid"
    during the handling of the broadcast, the ASID numbering now starts at
    "smp_processor_id() + 1". At rollover, the cpu_last_asid will be set
    to NR_CPUS.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
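    The allocation path after this change looks roughly like the following
    kernel-style sketch (simplified from the patch: the per-mm bookkeeping
    inside set_mm_context() and the I-cache handling are elided).

        void __new_context(struct mm_struct *mm)
        {
                unsigned int asid;

                spin_lock(&cpu_asid_lock);
                asid = ++cpu_last_asid;

                /* The low ASID bits wrapped to zero: a rollover. Each CPU
                 * takes its own slot in the new generation, so concurrent
                 * updates of cpu_last_asid in the IPI handlers cannot
                 * hand out colliding values. */
                if (unlikely((asid & ~ASID_MASK) == 0)) {
                        asid = cpu_last_asid + smp_processor_id() + 1;
                        flush_context();        /* reset this CPU's context */
                        smp_call_function(reset_context, NULL, 1); /* IPI */
                        cpu_last_asid += NR_CPUS;
                }

                set_mm_context(mm, asid);
                spin_unlock(&cpu_asid_lock);
        }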
* ARM: Fix errata 411920 workarounds (Russell King, 2009-10-29; 1 file, -4/+1)
    Errata 411920 indicates that any "invalidate entire instruction cache"
    operation can fail if the right conditions are present. This is not
    limited to just those operations in flush.c, but applies elsewhere as
    well. Place the workaround in the already existing __flush_icache_all()
    function instead.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
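    The consolidated helper ends up shaped roughly like this (a sketch of
    the approach the commit describes; v6_icache_inval_all is the kernel's
    out-of-line workaround routine and CONFIG_ARM_ERRATA_411920 the config
    option gating it).

        /* Every "invalidate entire I-cache" request now funnels through
         * one helper, so the erratum workaround applies to all callers. */
        static inline void __flush_icache_all(void)
        {
        #ifdef CONFIG_ARM_ERRATA_411920
                extern void v6_icache_inval_all(void);
                v6_icache_inval_all();  /* IRQ-safe workaround sequence */
        #else
                asm("mcr p15, 0, %0, c7, c5, 0  @ invalidate I-cache"
                    : : "r" (0));
        #endif
        }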
* cpumask: use mm_cpumask() wrapper: arm (Rusty Russell, 2009-09-24; 1 file, -1/+1)
    Makes code futureproof against the impending change to
    mm->cpu_vm_mask. It's also a chance to use the new cpumask_ ops which
    take a pointer (the older ones are deprecated, but there's no hurry
    for arch code).

    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
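    In context.c this is a one-line substitution; an illustrative helper
    showing the pattern (not the literal diff):

        /* Before: cpu_clear(cpu, mm->cpu_vm_mask) poked the field
         * directly. After: the mm_cpumask() accessor hides the
         * representation, so the mask's layout inside struct mm_struct
         * can change later without touching callers. */
        static void drop_cpu_from_mm(struct mm_struct *mm, int cpu)
        {
                cpumask_clear_cpu(cpu, mm_cpumask(mm));
        }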
* Merge branches 'armv7', 'at91', 'misc' and 'omap' into devel (Russell King, 2007-05-09; 1 file, -3/+7)
* [ARM] Fix ASID version switch (Russell King, 2007-05-08; 1 file, -3/+7)
    Close a hole in the ASID version switch, particularly the following
    scenario:

        CPU0              MM  PID        CPU1              MM  PID
        idle
                                         A   pid(A)
                                         A   idle(lazy tlb)
                 * new asid version triggered by B *
        B   pid(B)
        A   pid(A)
                 * MM A gets new asid version *
        A   idle(lazy tlb)
                                         A   pid(A)
                 * CPU1 doesn't see the new ASID *

    The result is that CPU1 continues running with the hardware set for
    the original (stale) ASID value, but mm->context.id contains the new
    ASID value. The next MM fault on CPU1 then updates the page table
    entries, but flush_tlb_page() fails due to the wrong ASID.

    There is a related case where a threaded application is allocated a
    new ASID on one CPU while another of its threads is running on some
    different CPU. This scenario is not fixed by this commit.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
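    The guard against running with a stale ASID is a generation check on
    every mm switch; a kernel-style sketch of the idea (the layout, an
    8-bit hardware ASID in the low bits with an allocation "version" in
    the high bits, follows the ARM code of this era):

        #define ASID_BITS       8

        static inline void check_context(struct mm_struct *mm)
        {
                /* Different high bits mean mm's ASID belongs to an old
                 * generation and may since have been handed to another
                 * process: allocate a fresh one before switching. */
                if ((mm->context.id ^ cpu_last_asid) >> ASID_BITS)
                        __new_context(mm);
        }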
* [ARM] armv7: add support for asid-tagged VIVT I-cache (Catalin Marinas, 2007-05-09; 1 file, -0/+7)
    ARMv7 can have VIPT, PIPT or ASID-tagged VIVT I-cache. This patch adds
    the necessary invalidation of the I-cache when the ASID numbers are
    re-used.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
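    On a rollover, the extra invalidation slots in roughly as follows (a
    sketch, with the rollover hook named illustratively;
    icache_is_vivt_asid_tagged() is the kernel's cache-type test):

        static void on_asid_rollover(void)
        {
                flush_tlb_all();
                /* With an ASID-tagged VIVT I-cache, stale instruction
                 * lines tagged with a recycled ASID could otherwise be
                 * hit by the new owner of that ASID. */
                if (icache_is_vivt_asid_tagged()) {
                        __flush_icache_all();
                        dsb();
                }
        }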
* [ARM] 4128/1: Architecture compliant TTBR changing sequence (Catalin Marinas, 2007-02-08; 1 file, -2/+10)
    On newer architectures (ARMv6, ARMv7), the depth of the prefetch and
    branch prediction is implementation-defined, and there is a small risk
    of wrong ASID tagging when changing TTBR0 before setting the new
    context ID. The recommended solution is to set a reserved ASID while
    the TTBR is being changed. This patch reserves ASID 0.

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
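    The recommended sequence, rendered as C with inline assembly (a
    sketch: the real change is in the assembly context-switch path, and
    the function name is illustrative):

        /* Park the CPU on the reserved ASID 0 while TTBR0 is switched,
         * so speculative fetches started against the old tables cannot
         * be tagged with the new ASID. */
        static inline void compliant_switch_mm(unsigned long pgd_phys,
                                               unsigned int new_asid)
        {
                asm volatile("mcr p15, 0, %0, c13, c0, 1" : : "r" (0));
                asm volatile("isb");
                asm volatile("mcr p15, 0, %0, c2, c0, 0" : : "r" (pgd_phys));
                asm volatile("isb");
                asm volatile("mcr p15, 0, %0, c13, c0, 1" : : "r" (new_asid));
                asm volatile("isb");
        }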
* [ARM] Move mmu.c out of the way (Russell King, 2006-09-20; 1 file, -0/+45)
    Rename mmu.c to context.c: it's the ARMv6 ASID context handling code
    rather than generic "mmu" handling code.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>