| author | James Morse <james.morse@arm.com> | 2016-10-18 11:27:46 +0100 |
|---|---|---|
| committer | Will Deacon <will.deacon@arm.com> | 2016-10-20 09:50:53 +0100 |
| commit | 2a6dcb2b5f3e21592ca8dfa198dcce7bec09b020 (patch) | |
| tree | 48b4ba1964223e9c90a967752babd3ce24f2739b /arch/arm64/include/asm/processor.h | |
| parent | 87261d19046aeaeed8eb3d2793fde850ae1b5c9e (diff) | |
arm64: cpufeature: Schedule enable() calls instead of calling them via IPI
The enable() hook for a cpufeature/errata is invoked using on_each_cpu().
This issues a cross-call IPI to get the work done. Implicitly, this
stashes the running PSTATE in SPSR when the CPU receives the IPI, and
restores it when we return. This means an enable() call can never modify
PSTATE.
To allow the PAN enable() hook to modify PSTATE, change the on_each_cpu()
call to use stop_machine(). This schedules the work on each CPU, which
allows us to modify PSTATE.
This involves changing the prototype of all the enable() functions.
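For illustration only (not part of this patch), here is a minimal sketch of the new callback shape, using a hypothetical cpu_enable_example() hook: on_each_cpu() takes an smp_call_func_t, i.e. void (*)(void *), whereas stop_machine() takes a cpu_stop_fn_t, i.e. int (*)(void *).

```c
#include <linux/stop_machine.h>	/* stop_machine(), cpu_stop_fn_t */

/*
 * Hypothetical hook with the new prototype. Because it runs in the
 * stopper thread rather than in IPI context, any PSTATE bits it sets
 * are still set after the callback returns.
 */
static int cpu_enable_example(void *__unused)
{
	/* ... write system registers / PSTATE bits here ... */
	return 0;	/* stop_machine() callbacks return an int */
}
```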
enable_cpu_capabilities() is called during boot and enables the feature
on all online CPUs. This path now uses stop_machine(). CPU features for
hotplugged CPUs are enabled by verify_local_cpu_features(), which only
acts on the local CPU, and can already modify the running PSTATE as it
is called from secondary_start_kernel().
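A rough sketch of the two paths described above, with hypothetical wrappers standing in for the real enable_cpu_capabilities() and verify_local_cpu_features():

```c
#include <linux/cpumask.h>	/* cpu_online_mask */
#include <linux/stop_machine.h>	/* stop_machine() */

/* Boot path sketch: run the hook on every online CPU with the machine
 * stopped, instead of firing an IPI via on_each_cpu(). */
static void example_enable_on_all_cpus(int (*enable)(void *))
{
	stop_machine(enable, NULL, cpu_online_mask);
}

/* Hotplug path sketch: the incoming CPU calls the hook directly from
 * its own bring-up path, so it is free to modify its running PSTATE. */
static void example_verify_local_cpu(int (*enable)(void *))
{
	if (enable)
		enable(NULL);
}
```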
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'arch/arm64/include/asm/processor.h')
-rw-r--r-- | arch/arm64/include/asm/processor.h | 6 |
1 file changed, 3 insertions, 3 deletions
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index df2e53d..60e3482 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -188,8 +188,8 @@ static inline void spin_lock_prefetch(const void *ptr)
 #endif
 
-void cpu_enable_pan(void *__unused);
-void cpu_enable_uao(void *__unused);
-void cpu_enable_cache_maint_trap(void *__unused);
+int cpu_enable_pan(void *__unused);
+int cpu_enable_uao(void *__unused);
+int cpu_enable_cache_maint_trap(void *__unused);
 
 #endif /* __ASM_PROCESSOR_H */