path: root/arch/arm64/kernel
Commit message | Author | Age | Files | Lines
* arm64: use private ratelimit state along with show_unhandled_signalsVladimir Murzin2015-06-191-3/+2
| | | | | | | | | | | | | | | | | printk_ratelimit() shares its ratelimiting state with other callers, which may lead to scenarios where, by the time we want to print out debug information, we are already rate-limited and nothing appears in dmesg - this makes exception-trace a rather poor helper in debugging. Additionally, we have an imbalance: some messages are limited with the global ratelimit state while other messages are limited with their private state defined via pr_*_ratelimited(). To address this inconsistency the show_unhandled_signals_ratelimited() macro is introduced and caller sites are converted to use it. Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
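To illustrate the idea, here is a minimal sketch of a helper that carries its own ratelimit state (not the actual arm64 patch; the extern toggle and the default interval/burst values are assumptions), so unrelated printk_ratelimit() users cannot exhaust its budget:

    #include <linux/ratelimit.h>
    #include <linux/printk.h>

    extern int show_unhandled_signals;      /* assumed global toggle */

    #define show_unhandled_signals_ratelimited()                        \
    ({                                                                  \
            /* Private state, not shared with printk_ratelimit(). */   \
            static DEFINE_RATELIMIT_STATE(_rs,                         \
                                          DEFAULT_RATELIMIT_INTERVAL,  \
                                          DEFAULT_RATELIMIT_BURST);    \
            show_unhandled_signals && __ratelimit(&_rs);                \
    })

Call sites then test show_unhandled_signals_ratelimited() instead of combining show_unhandled_signals with the shared printk_ratelimit() state.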
* arm64: vdso: work-around broken ELF toolchains in MakefileWill Deacon2015-06-191-0/+4
| | | | | | | | | | | | | | | | | | | | | When building the kernel with a bare-metal (ELF) toolchain, the -shared option may not be passed down to collect2, resulting in silent corruption of the vDSO image (in particular, the DYNAMIC section is omitted). The effect of this corruption is that the dynamic linker fails to find the vDSO symbols and libc is instead used for the syscalls that we intended to optimise (e.g. gettimeofday). Functionally, there is no issue as the sigreturn trampoline is still intact and located by the kernel. This patch fixes the problem by explicitly passing -shared to the linker when building the vDSO. Cc: <stable@vger.kernel.org> Reported-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com> Reported-by: James Greenhalgh <james.greenhalgh@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: kernel: rename __cpu_suspend to keep it aligned with armSudeep Holla2015-06-193-6/+6
| | | | | | | | | | | | | | | This patch renames __cpu_suspend to cpu_suspend so that it's aligned with ARM32. It also removes the redundant wrapper created. This is in preparation to implement generic PSCI system suspend using the cpu_{suspend,resume} which now has the same interface on both ARM and ARM64. Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: compat: print compat_sp instead of spVladimir Murzin2015-06-171-2/+2
| | | | | | | We check against compat_sp, but print out arm64's sp - fix it. Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: entry: fix context tracking for el0_sp_pcMark Rutland2015-06-171-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit 6c81fe7925cc4c42 ("arm64: enable context tracking") did not update el0_sp_pc to use ct_user_exit, but this appears to have been unintentional. In commit 6ab6463aeb5fbc75 ("arm64: adjust el0_sync so that a function can be called") we made x0 available, and in the return to userspace we call ct_user_enter in the kernel_exit macro. Due to this, we currently don't correctly inform RCU of the user->kernel transition, and may erroneously account for time spent in the kernel as if we were in an extended quiescent state when CONFIG_CONTEXT_TRACKING is enabled. As we do record the kernel->user transition, a userspace application making accesses from an unaligned stack pointer can demonstrate the imbalance, provoking the following warning: ------------[ cut here ]------------ WARNING: CPU: 2 PID: 3660 at kernel/context_tracking.c:75 context_tracking_enter+0xd8/0xe4() Modules linked in: CPU: 2 PID: 3660 Comm: a.out Not tainted 4.1.0-rc7+ #8 Hardware name: ARM Juno development board (r0) (DT) Call trace: [<ffffffc000089914>] dump_backtrace+0x0/0x124 [<ffffffc000089a48>] show_stack+0x10/0x1c [<ffffffc0005b3cbc>] dump_stack+0x84/0xc8 [<ffffffc0000b3214>] warn_slowpath_common+0x98/0xd0 [<ffffffc0000b330c>] warn_slowpath_null+0x14/0x20 [<ffffffc00013ada4>] context_tracking_enter+0xd4/0xe4 [<ffffffc0005b534c>] preempt_schedule_irq+0xd4/0x114 [<ffffffc00008561c>] el1_preempt+0x4/0x28 [<ffffffc0001b8040>] exit_files+0x38/0x4c [<ffffffc0000b5b94>] do_exit+0x430/0x978 [<ffffffc0000b614c>] do_group_exit+0x40/0xd4 [<ffffffc0000c0208>] get_signal+0x23c/0x4f4 [<ffffffc0000890b4>] do_signal+0x1ac/0x518 [<ffffffc000089650>] do_notify_resume+0x5c/0x68 ---[ end trace 963c192600337066 ]--- This patch adds the missing ct_user_exit to the el0_sp_pc entry path, correcting the context tracking for this case. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Fixes: 6c81fe7925cc ("arm64: enable context tracking") Cc: <stable@vger.kernel.org> # v3.17+ Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: KVM: Switch vgic save/restore to alternative_insnMarc Zyngier2015-06-121-1/+0
| | | | | | | | | | | | | | | | So far, we configured the world-switch by having a small array of pointers to the save and restore functions, depending on the GIC used on the platform. Loading these values each time is a bit silly (they never change), and it makes sense to rely on the instruction patching instead. This leads to a nice cleanup of the code. Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: alternative: Introduce feature for GICv3 CPU interfaceMarc Zyngier2015-06-121-0/+16
| | | | | | | | | | | | | Add a new item to the feature set (ARM64_HAS_SYSREG_GIC_CPUIF) to indicate that we have a system register GIC CPU interface. This will help KVM switch to alternative instruction patching. Reviewed-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: psci: fix !CONFIG_HOTPLUG_CPU build warningWill Deacon2015-06-111-5/+5
| | | | | | | | | | | | | | | | When building without CONFIG_HOTPLUG_CPU, GCC complains (rightly) that psci_tos_resident_on is unused: arch/arm64/kernel/psci.c:61:13: warning: ‘psci_tos_resident_on’ defined but not used [-Wunused-function] static bool psci_tos_resident_on(int cpu) As it's only ever used when CONFIG_HOTPLUG_CPU is selected, let's move it into the existing ifdef. Signed-off-by: Will Deacon <will.deacon@arm.com> [Mark: write commit message] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
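The shape of the fix is simply to define the helper inside the #ifdef block that contains its only users, so !CONFIG_HOTPLUG_CPU builds never see it. A schematic sketch (function bodies and the resident_cpu state are placeholders, not the real psci.c code):

    #include <linux/errno.h>
    #include <linux/types.h>

    #ifdef CONFIG_HOTPLUG_CPU
    static int resident_cpu = -1;           /* placeholder state */

    /* Only referenced from the hot-unplug path, so only built with it. */
    static bool psci_tos_resident_on(int cpu)
    {
            return cpu == resident_cpu;
    }

    static int cpu_psci_cpu_disable(unsigned int cpu)
    {
            /* Refuse to take down the CPU hosting the Trusted OS. */
            if (psci_tos_resident_on(cpu))
                    return -EPERM;
            return 0;
    }
    #endif /* CONFIG_HOTPLUG_CPU */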
* arm64: fix bug for reloading FPSIMD state after CPU hotplug.Janet Liu2015-06-111-0/+31
| | | | | | | | | | | | | | | | | | FPSIMD does not currently handle CPU hotplug, which introduces a bug across a CPU down/up cycle: after the CPU comes back up, the FPSIMD hardware registers hold their default values rather than any process's fpsimd context. Setting the CPU's fpsimd_state to NULL on CPU_DEAD forces the fpsimd context to be reloaded for the thread, removing the chance that the reload is skipped. Without this, if process A is the last user process on CPU N before the CPU goes down and the first user process on the same CPU N after it comes back up, A's fpsimd_state.cpu is still the current cpu id and per_cpu(fpsimd_last_state) still points to A's fpsimd_state, so the kernel will not reload the context when A returns to user space. Signed-off-by: Janet Liu <janet.liu@spreadtrum.com> Signed-off-by: Xiongshan An <xiongshan.an@spreadtrum.com> Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com> [catalin.marinas@arm.com: some mostly cosmetic clean-ups] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
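A minimal sketch of the kind of CPU hotplug notifier described above (names follow the commit text, but the exact kernel plumbing is assumed), clearing the per-cpu owner pointer on CPU_DEAD so the next return to user space reloads the context:

    #include <linux/cpu.h>
    #include <linux/notifier.h>
    #include <linux/percpu.h>

    struct fpsimd_state;                    /* arch-specific, assumed */

    /* Which task's FPSIMD state is live in this CPU's registers. */
    static DEFINE_PER_CPU(struct fpsimd_state *, fpsimd_last_state);

    static int fpsimd_cpu_hotplug_notifier(struct notifier_block *nfb,
                                           unsigned long action, void *hcpu)
    {
            unsigned int cpu = (long)hcpu;

            switch (action) {
            case CPU_DEAD:
            case CPU_DEAD_FROZEN:
                    /*
                     * This CPU's registers no longer hold any task's
                     * FPSIMD state, so forget the cached owner; the next
                     * return to user space must reload the context.
                     */
                    per_cpu(fpsimd_last_state, cpu) = NULL;
                    break;
            }
            return NOTIFY_OK;
    }

    static struct notifier_block fpsimd_hotplug_nb = {
            .notifier_call = fpsimd_cpu_hotplug_notifier,
    };
    /* registered with register_cpu_notifier(&fpsimd_hotplug_nb) at init */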
* arm64: kernel thread don't need to save fpsimd context.Janet Liu2015-06-111-1/+2
| | | | | | | | | | | A kernel thread's default fpsimd state is zero. When forking a thread whose parent is a kernel thread, the hardware context is saved into the parent's fpsimd state even though that context belongs to a user process. Since kernel threads don't use fpsimd this does not cause a correctness issue, but it does add a little unnecessary cost. Signed-off-by: Janet Liu <janet.liu@spreadtrum.com> Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* arm64: fix missing syscall trace exitJosh Stone2015-06-081-1/+6
| | | | | | | | | | | | | | | | | | | If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on the fast path. It's then possible to have TIF_SYSCALL_TRACE added in the middle of the syscall, but ret_fast_syscall doesn't check this flag again. This causes a ptrace syscall-exit-stop to be missed. For instance, from a PTRACE_EVENT_FORK reported during do_fork, the tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE. Now the completion of the fork should have a syscall-exit-stop. Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the fast exit path. Do the same on arm64. Reviewed-by: Will Deacon <will.deacon@arm.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Josh Stone <jistone@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* Merge branch 'arm64/psci-rework' of ↵Catalin Marinas2015-06-053-106/+146
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux * 'arm64/psci-rework' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux: arm64: psci: remove ACPI coupling arm64: psci: kill psci_power_state arm64: psci: account for Trusted OS instances arm64: psci: support unsigned return values arm64: psci: remove unnecessary id indirection arm64: smp: consistently use error codes arm64: smp_plat: add get_logical_index arm/arm64: kvm: add missing PSCI include Conflicts: arch/arm64/kernel/smp.c
| * arm64: psci: remove ACPI couplingMark Rutland2015-05-272-2/+13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The 32-bit ARM port doesn't have ACPI headers, and conditionally including them is going to look horrendous. In preparation for sharing the PSCI invocation code with 32-bit, move the acpi_psci_* function declarations and definitions such that the PSCI client code need not include ACPI headers. While it would seem like we could simply hide the ACPI includes in psci.h, the ACPI headers have hilarious circular dependencies which make this infeasible without reorganising most of ACPICA. So rather than doing that, move the acpi_psci_* prototypes into psci.h. The psci_acpi_init function is made dependent on CONFIG_ACPI (with a stub implementation in asm/psci.h) such that it need not be built for 32-bit ARM or kernels without ACPI support. The currently missing __init annotations are added to the prototypes in the header. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Reviewed-by: Al Stone <al.stone@linaro.org> Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com>
| * arm64: psci: kill psci_power_stateMark Rutland2015-05-271-52/+37
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A PSCI 1.0 implementation may choose to use the new extended StateID format, the presence of which may be queried via the PSCI_FEATURES call. The layout of this new StateID format is incompatible with the existing format, and so to handle both we must abstract attempts to parse the fields. In preparation for PSCI 1.0 support, this patch introduces psci_power_state_loses_context and psci_power_state_is_valid functions to query information from a PSCI power state, which is no longer decomposed (and hence the pack/unpack functions are removed). As it is no longer decomposed, it is now passed round as an opaque u32 token. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
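A sketch of what querying the now-opaque power_state token might look like; the bit positions follow the PSCI 0.2 "original format" and the macro names are illustrative rather than the actual kernel definitions:

    #include <linux/types.h>

    /* PSCI 0.2 "original format" power_state field layout (illustrative). */
    #define PSCI_PSTATE_ID_MASK        0xffffu
    #define PSCI_PSTATE_TYPE_SHIFT     16       /* 0: standby, 1: powerdown */
    #define PSCI_PSTATE_TYPE_MASK      (0x1u << PSCI_PSTATE_TYPE_SHIFT)
    #define PSCI_PSTATE_AFFL_SHIFT     24
    #define PSCI_PSTATE_AFFL_MASK      (0x3u << PSCI_PSTATE_AFFL_SHIFT)

    /* A powerdown state loses context; a standby state does not. */
    static bool psci_power_state_loses_context(u32 state)
    {
            return state & PSCI_PSTATE_TYPE_MASK;
    }

    /* Reject states that have reserved bits set. */
    static bool psci_power_state_is_valid(u32 state)
    {
            const u32 valid_mask = PSCI_PSTATE_ID_MASK |
                                   PSCI_PSTATE_TYPE_MASK |
                                   PSCI_PSTATE_AFFL_MASK;

            return !(state & ~valid_mask);
    }

With the extended StateID format of PSCI 1.0, only these two helpers need to change; callers keep passing the u32 around untouched.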
| * arm64: psci: account for Trusted OS instancesMark Rutland2015-05-271-0/+66
| | | | | | | | | | | | | | | | | | | | | | | | | | | Software resident in the secure world (a "Trusted OS") may cause CPU_OFF calls for the CPU it is resident on to be denied. Such a denial would be fatal for the kernel, and so we must detect when this can happen before the point of no return. This patch implements Trusted OS detection for PSCI 0.2+ systems, using MIGRATE_INFO_TYPE and MIGRATE_INFO_UP_CPU. When a trusted OS is detected as resident on a particular CPU, attempts to hot unplug that CPU will be denied early, before they can prove fatal. Trusted OS migration is not implemented by this patch. Implementation of migratable UP trusted OSs seems unlikely, and the right policy for migration is unclear (and will likely differ across implementations). As such, it is likely that migration will require cooperation with Trusted OS drivers. PSCI implementations prior to 0.2 do not provide the facility to detect the presence of a Trusted OS, nor the CPU any such OS is resident on, so without additional information it is not possible to handle Trusted OSs with PSCI 0.1. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com>
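A rough sketch of how MIGRATE_INFO_TYPE and MIGRATE_INFO_UP_CPU can be used to record a resident Trusted OS. The SMC32 function IDs are taken from the PSCI 0.2 spec, but the return-value macro names, the invoke_psci_fn wrapper and get_logical_index are assumptions standing in for the real kernel plumbing:

    /* PSCI 0.2 SMC32 function IDs, per the spec. */
    #define PSCI_FN_MIGRATE_INFO_TYPE      0x84000006u
    #define PSCI_FN_MIGRATE_INFO_UP_CPU    0x84000007u

    /* MIGRATE_INFO_TYPE return values (illustrative names). */
    #define PSCI_TOS_UP_MIGRATE            0   /* UP Trusted OS, migratable     */
    #define PSCI_TOS_UP_NO_MIGRATE         1   /* UP Trusted OS, not migratable */
    #define PSCI_TOS_NOT_UP_MIG_CAP        2   /* MP Trusted OS or none present */

    /* Assumed helpers: firmware conduit and MPIDR -> logical cpu lookup. */
    extern unsigned long invoke_psci_fn(unsigned long fn, unsigned long a0,
                                        unsigned long a1, unsigned long a2);
    extern int get_logical_index(unsigned long mpidr);

    static int resident_cpu = -1;   /* logical CPU hosting the Trusted OS */

    static void psci_init_migrate(void)
    {
            unsigned long cpuid;
            int type;

            type = invoke_psci_fn(PSCI_FN_MIGRATE_INFO_TYPE, 0, 0, 0);
            if (type < 0 || type == PSCI_TOS_NOT_UP_MIG_CAP)
                    return;         /* no UP Trusted OS to worry about */

            cpuid = invoke_psci_fn(PSCI_FN_MIGRATE_INFO_UP_CPU, 0, 0, 0);
            resident_cpu = get_logical_index(cpuid);
            /* hot-unplug paths then refuse to take resident_cpu down */
    }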
| * arm64: psci: support unsigned return valuesMark Rutland2015-05-271-29/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PSCI_VERSION and MIGRATE_INFO_TYPE_UP_CPU return unsigned values, with the latter returning a 64-bit value. However, the PSCI invocation functions have prototypes returning int. This patch upgrades the invocation functions to return unsigned long, with a new typedef to keep things legible. As PSCI_VERSION cannot return a negative value, the erroneous check against PSCI_RET_NOT_SUPPORTED is also removed. The unrelated psci_initcall_t typedef is moved closer to its first user, to avoid confusion with the invocation functions. In preparation for sharing the code with ARM, unsigned long is used in preference of u64. In the SMC32 calling convention, the relevant fields will be 32 bits wide. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Will Deacon <will.deacon@arm.com>
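The change can be pictured roughly as follows (a sketch; the actual conduit functions are implemented in assembly behind SMC/HVC):

    #include <linux/linkage.h>

    /*
     * Invocation functions return unsigned long so that unsigned results
     * (e.g. PSCI_VERSION, MIGRATE_INFO_UP_CPU) are neither truncated nor
     * misinterpreted as negative error codes.
     */
    typedef unsigned long (psci_fn)(unsigned long fn, unsigned long arg0,
                                    unsigned long arg1, unsigned long arg2);

    asmlinkage psci_fn __invoke_psci_fn_hvc;        /* HVC conduit */
    asmlinkage psci_fn __invoke_psci_fn_smc;        /* SMC conduit */
    static psci_fn *invoke_psci_fn;                 /* set at probe time */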
| * arm64: psci: remove unnecessary id indirectionMark Rutland2015-05-271-17/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PSCI 0.1 did not define canonical IDs for CPU_ON, CPU_OFF, CPU_SUSPEND, or MIGRATE, and so these need to be provided when using firmware compliant to PSCI 0.1. However, functions introduced in 0.2 or later have canonical IDs, and these cannot be provided via DT. There's no need to indirect the IDs via a table; they can be used directly at callsites (and already are for SYSTEM_OFF and SYSTEM_RESET). This patch removes the unnecessary function ID indirection for AFFINITY_INFO and MIGRATE_INFO_TYPE. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
| * arm64: smp: consistently use error codesMark Rutland2015-05-272-7/+10
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | cpu_kill currently returns one for success and zero for failure, which is unlike all the other cpu_operations, which return zero for success and an error code upon failure. This difference is unnecessarily confusing. Make cpu_kill consistent with the other cpu_operations. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
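The new convention can be illustrated with a simplified sketch of a PSCI-backed cpu_kill implementation (the retry loop of the real code is elided, and the surrounding psci_ops / cpu_logical_map plumbing is assumed from kernel headers):

    /* PSCI 0.2 AFFINITY_INFO "all affinity instances are off" state. */
    #define PSCI_0_2_AFFINITY_LEVEL_OFF     1

    static int cpu_psci_cpu_kill(unsigned int cpu)
    {
            int state;

            /*
             * New convention: 0 on success, -errno on failure, matching
             * the other cpu_operations hooks (the old code returned 1/0).
             */
            state = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
            if (state == PSCI_0_2_AFFINITY_LEVEL_OFF)
                    return 0;

            pr_warn("CPU%d may not have shut down cleanly (state %d)\n",
                    cpu, state);
            return -ETIMEDOUT;
    }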
* | arm64: alternative: Merge alternative-asm.h into alternative.hMarc Zyngier2015-06-051-1/+1
| | | | | | | | | | | | | asm/alternative-asm.h and asm/alternative.h are extremely similar, and really deserve to live in the same file (as this makes further modifications a bit easier). Fold the content of alternative-asm.h into alternative.h, and update the few users. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: alternative: Allow immediate branch as alternative instructionMarc Zyngier2015-06-051-5/+66
| | | | | | | | | | | | | | | | Since all branches are PC-relative on AArch64, these instructions cannot be used as an alternative with the simplistic approach we currently have (the immediate has been computed from the .altinstr_replacement section, and ends up being completely off if the target is outside of the replacement sequence). This patch handles the branch instructions in a different way, using the insn framework to recompute the immediate, and generate the right displacement in the above case. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: Rework alternate sequence for ARM erratum 845719Marc Zyngier2015-06-051-12/+15
| | | | | | | | | | | | | | | | | | | | | | | | | | | | The workaround for erratum 845719 is currently using a branch between two alternate sequences, which is quite fragile, and that we are going to break as we rework the alternative code. This patch reworks the workaround to fit in a single alternative sequence. The generated code itself is unchanged. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: insn: Add aarch64_{get,set}_branch_offsetMarc Zyngier2015-06-031-0/+60
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In order to deal with branches located in alternate sequences, but pointing to the main kernel text, it is required to extract the relative displacement encoded in the instruction, and to be able to update said instruction with a new offset (once it is known). For this, we introduce three new helpers: - aarch64_insn_is_branch_imm is a predicate indicating if the instruction is an immediate branch - aarch64_get_branch_offset returns a signed value representing the byte offset encoded in a branch instruction - aarch64_set_branch_offset takes an instruction and an offset, and returns the corresponding updated instruction. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
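For the unconditional B/BL encoding (a 26-bit, word-scaled immediate) the helpers described above boil down to sign-extending and shifting that field. A simplified sketch restricted to B/BL (the real helpers also cover CBZ/CBNZ, TBZ/TBNZ and conditional branches, and use different names):

    #include <linux/types.h>

    #define AARCH64_IMM26_MASK      0x03ffffffu

    /* B is 0b000101 | imm26, BL is 0b100101 | imm26. */
    static bool insn_is_b_or_bl(u32 insn)
    {
            return (insn & 0x7c000000u) == 0x14000000u;
    }

    /* Byte offset encoded in a B/BL instruction: sign-extended imm26 * 4. */
    static s32 branch_imm26_get_offset(u32 insn)
    {
            u32 imm = insn & AARCH64_IMM26_MASK;

            return ((s32)(imm << 6) >> 6) * 4;
    }

    /* Re-encode the instruction so it branches by 'offset' bytes. */
    static u32 branch_imm26_set_offset(u32 insn, s32 offset)
    {
            u32 imm = ((u32)offset >> 2) & AARCH64_IMM26_MASK;

            return (insn & ~AARCH64_IMM26_MASK) | imm;
    }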
* | arm64: drop sleep_idmap_phys and clean up cpu_resume()Ard Biesheuvel2015-06-022-8/+2
| | | | | | | | | | | | | | | | | | | | | | | | Two cleanups of the asm function cpu_resume(): - The global variable sleep_idmap_phys always points to idmap_pg_dir, so we can just use that value directly in the CPU resume path. - Unclutter the load of sleep_save_sp::save_ptr_stash_phys. Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: reduce ID map to a single pageArd Biesheuvel2015-06-023-7/+19
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Commit ea8c2e112445 ("arm64: Extend the idmap to the whole kernel image") changed the early page table code so that the entire kernel Image is covered by the identity map. This allows functions that need to enable or disable the MMU to reside anywhere in the kernel Image. However, this change has the unfortunate side effect that the Image cannot cross a physical 512 MB alignment boundary anymore, since the early page table code cannot deal with the Image crossing a /virtual/ 512 MB alignment boundary. So instead, reduce the ID map to a single page, that is populated by the contents of the .idmap.text section. Only three functions reside there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset(). If new code is introduced that needs to manipulate the MMU state, it should be added to this section as well. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: use fixmap region for permanent FDT mappingArd Biesheuvel2015-06-022-58/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables. Now that we have early fixmap support, we can relax this restriction, by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory. This also moves the vetting of the FDT to mmu.c, since the early init code in head.S does not handle mapping of the FDT anymore. At the same time, fix up some comments in head.S that have gone stale. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: context-switch user tls register tpidr_el0 for compat tasksWill Deacon2015-06-011-27/+20
| | | | | | | | | | | | | | | | | | | | | | | | Since commit a4780adeefd0 ("ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork"), arch/arm/ has context switched the user-writable TLS register, so do the same for compat tasks running under the arm64 kernel. Reported-by: André Hentschel <nerv@dawncrow.de> Tested-by: André Hentschel <nerv@dawncrow.de> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: Use common outgoing-CPU-notification codePaul E. McKenney2015-05-211-4/+2
| | | | | | | | | | | | | | This commit replaces the open-coded CPU-offline notification with new common code. In particular, this change avoids calling scheduler code using RCU from an offline CPU that RCU is ignoring. This is a minimal change. A more intrusive change might invoke the cpu_check_up_prepare() and cpu_set_state_online() functions at CPU-online time, which would allow onlining to throw an error if the CPU did not go offline properly. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: linux-arm-kernel@lists.infradead.org Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | Merge branch 'for-next/cpu-init' of ↵Catalin Marinas2015-05-197-222/+237
|\ \ | |/ | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux * 'for-next/cpu-init' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: ARM64: kernel: unify ACPI and DT cpus initialization ARM64: kernel: make cpu_ops hooks DT agnostic
| * ARM64: kernel: unify ACPI and DT cpus initializationLorenzo Pieralisi2015-05-194-196/+191
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The code that initializes cpus on arm64 is currently split in two different code paths that carry out DT and ACPI cpus initialization. Most of the code executing SMP initialization is common and should be merged to reduce discrepancies between ACPI and DT initialization and to have code initializing cpus in a single common place in the kernel. This patch refactors arm64 SMP cpus initialization code to merge ACPI and DT boot paths in a common file and to create sanity checks that can be reused by both boot methods. Current code assumes PSCI is the only available boot method when arm64 boots with ACPI; this can be easily extended if/when the ACPI parking protocol is merged into the kernel. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Mark Rutland <mark.rutland@arm.com> [DT] Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
| * ARM64: kernel: make cpu_ops hooks DT agnosticLorenzo Pieralisi2015-05-196-35/+55
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ARM64 CPU operations such as cpu_init and cpu_init_idle take a struct device_node pointer as a parameter, which corresponds to the device tree node of the logical cpu on which the operation has to be applied. With the advent of ACPI on arm64, where MADT static table entries are used to initialize cpus, the device tree node parameter in cpu_ops hooks become useless when booting with ACPI, since in that case cpu device tree nodes are not present and can not be used for cpu initialization. The current cpu_init hook requires a struct device_node pointer parameter because it is called while parsing the device tree to initialize CPUs, when the cpu_logical_map (that is used to match a cpu node reg property to a device tree node) for a given logical cpu id is not set up yet. This means that the cpu_init hook cannot rely on the of_get_cpu_node function to retrieve the device tree node corresponding to the logical cpu id passed in as parameter, so the cpu device tree node must be passed in as a parameter to fix this catch-22 dependency cycle. This patch reshuffles the cpu_logical_map initialization code so that the cpu_init cpu_ops hook can safely use the of_get_cpu_node function to retrieve the cpu device tree node, removing the need for the device tree node pointer parameter. In the process, the patch removes device tree node parameters from all cpu_ops hooks, in preparation for SMP DT/ACPI cpus initialization consolidation. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Mark Rutland <mark.rutland@arm.com> [DT] Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: kill flush_cache_all()Mark Rutland2015-05-191-11/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The documented semantics of flush_cache_all are not possible to provide for arm64 (short of flushing the entire physical address space by VA), and there are currently no users; KVM uses VA maintenance exclusively, cpu_reset is never called, and the only two users outside of arch code cannot be built for arm64. While cpu_soft_reset and related functions (which call flush_cache_all) were thought to be useful for kexec, their current implementations only serve to mask bugs. For correctness kexec will need to perform maintenance by VA anyway to account for system caches, line migration, and other subtleties of the cache architecture. As the extent of this cache maintenance will be kexec-specific, it should probably live in the kexec code. This patch removes flush_cache_all, and related unused components, preventing further abuse. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Geoff Levand <geoff@infradead.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
* | arm64: Mark PMU interrupt IRQF_NO_THREADAnders Roxell2015-05-191-1/+1
|/ | | | | | | | | | Mark the PMU interrupts as non-threadable, as is the case with arch/arm: d9c3365 ARM: 7813/1: Mark pmu interupt IRQF_NO_THREAD Acked-by: Will Deacon <will.deacon@arm.com> Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Anders Roxell <anders.roxell@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
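In request_irq() terms the change amounts to adding IRQF_NO_THREAD to the flags used when the PMU interrupt is requested. A hedged sketch (handler body, names and the IRQF_NOBALANCING companion flag are illustrative):

    #include <linux/interrupt.h>

    static irqreturn_t armpmu_irq_handler(int irq, void *dev)
    {
            /* ... read overflow status and process the counters ... */
            return IRQ_HANDLED;
    }

    static int armpmu_request_irq(int irq, void *dev)
    {
            /*
             * IRQF_NO_THREAD keeps the handler in hard-irq context even
             * when forced irq threading (or PREEMPT_RT) is in effect:
             * the perf handler must not be deferred to a thread.
             */
            return request_irq(irq, armpmu_irq_handler,
                               IRQF_NOBALANCING | IRQF_NO_THREAD,
                               "arm-pmu", dev);
    }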
* arm64: perf: fix memory leak when probing PMU PPIsWill Deacon2015-05-121-4/+4
| | | | | | | | | | | | | | | Commit d795ef9aa831 ("arm64: perf: don't warn about missing interrupt-affinity property for PPIs") added a check for PPIs so that we avoid parsing the interrupt-affinity property for these naturally affine interrupts. Unfortunately, this check can trigger an early (successful) return and we will leak the irqs array. This patch fixes the issue by reordering the code so that the check is performed before any independent allocation. Reported-by: David Binderman <dcb314@hotmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
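The leak-avoiding pattern is simply to perform the early-return check before any allocation. A schematic sketch (function and field names are illustrative, not the actual perf_event.c code):

    #include <linux/irq.h>
    #include <linux/platform_device.h>
    #include <linux/slab.h>

    static int pmu_probe_irqs(struct platform_device *pdev)
    {
            int irq = platform_get_irq(pdev, 0);
            int *irqs;

            if (irq < 0)
                    return irq;

            /* PPIs are naturally affine: return *before* allocating. */
            if (irq_is_percpu(irq))
                    return 0;

            irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
            if (!irqs)
                    return -ENOMEM;

            /*
             * ... parse interrupt-affinity, fill irqs[] and hand the
             * array to the PMU; every failure path from here on must
             * kfree(irqs) ...
             */
            return 0;
    }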
* Revert "arm64: alternative: Allow immediate branch as alternative instruction"Will Deacon2015-05-051-52/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts most of commit fef7f2b2010381c795ae43743ad31931cc58f5ad. It turns out that there are a couple of problems with the way we're fixing up branch instructions used as part of alternative instruction sequences: (1) If the branch target is also in the alternative sequence, we'll generate a branch into the .altinstructions section which actually gets freed. (2) The calls to aarch64_insn_{read,write} bring an awful lot more code into the patching path (e.g. taking locks, poking the fixmap, invalidating the TLB) which isn't actually needed for the early patching run under stop_machine, but makes the use of alternative sequences extremely fragile (as we can't patch code that could be used by the patching code). Given that no code actually requires alternative patching of immediate branches, let's remove this support for now and revisit it when we've got a user. We leave the updated size check, since we really do require the sequences to be the same length. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: Fix the pmu node name in warning messageSuzuki K. Poulose2015-04-301-1/+1
| | | | | | | | | | | | | | | | | With commit d5efd9cc9cf2 ("arm64: pmu: add support for interrupt-affinity property"), we print a warning when we find a PMU SPI with a missing interrupt-affinity property in a pmu node. Unfortunately, we pass the wrong (NULL) device node to of_node_full_name, resulting in unhelpful messages such as: hw perfevents: Failed to parse <no-node>/interrupt-affinity[0] This patch fixes the name to that of the pmu node. Fixes: d5efd9cc9cf2 (arm64: pmu: add support for interrupt-affinity property) Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
* arm64: perf: don't warn about missing interrupt-affinity property for PPIsWill Deacon2015-04-301-1/+6
| | | | | | | | | PPIs are affine by nature, so the interrupt-affinity property is not used and therefore we shouldn't print a warning in its absence. Reported-by: Maxime Ripard <maxime.ripard@free-electrons.com> Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
* Merge tag 'arm64-upstream' of ↵Linus Torvalds2015-04-248-40/+475
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull initial ACPI support for arm64 from Will Deacon: "This series introduces preliminary ACPI 5.1 support to the arm64 kernel using the "hardware reduced" profile. We don't support any peripherals yet, so it's fairly limited in scope: - MEMORY init (UEFI) - ACPI discovery (RSDP via UEFI) - CPU init (FADT) - GIC init (MADT) - SMP boot (MADT + PSCI) - ACPI Kconfig options (dependent on EXPERT) ACPI for arm64 has been in development for a while now and hardware has been available that can boot with either FDT or ACPI tables. This has been made possible by both changes to the ACPI spec to cater for ARM-based machines (known as "hardware-reduced" in ACPI parlance) but also a Linaro-driven effort to get this supported on top of the Linux kernel. This pull request is the result of that work. These changes allow us to initialise the CPUs, interrupt controller, and timers via ACPI tables, with memory information and cmdline coming from EFI. We don't support a hybrid ACPI/FDT scheme. Of course, there is still plenty of work to do (a serial console would be nice!) but I expect that to happen on a per-driver basis after this core series has been merged. Anyway, the diff stat here is fairly horrible, but splitting this up and merging it via all the different subsystems would have been extremely painful. Instead, we've got all the relevant Acks in place and I've not seen anything other than trivial (Kconfig) conflicts in -next (for completeness, I've included my resolution below). Nearly half of the insertions fall under Documentation/. So, we'll see how this goes. Right now, it all depends on EXPERT and I fully expect people to use FDT by default for the immediate future" * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (31 commits) ARM64 / ACPI: make acpi_map_gic_cpu_interface() as void function ARM64 / ACPI: Ignore the return error value of acpi_map_gic_cpu_interface() ARM64 / ACPI: fix usage of acpi_map_gic_cpu_interface ARM64: kernel: acpi: honour acpi=force command line parameter ARM64: kernel: acpi: refactor ACPI tables init and checks ARM64: kernel: psci: let ACPI probe PSCI version ARM64: kernel: psci: factor out probe function ACPI: move arm64 GSI IRQ model to generic GSI IRQ layer ARM64 / ACPI: Don't unflatten device tree if acpi=force is passed ARM64 / ACPI: additions of ACPI documentation for arm64 Documentation: ACPI for ARM64 ARM64 / ACPI: Enable ARM64 in Kconfig XEN / ACPI: Make XEN ACPI depend on X86 ARM64 / ACPI: Select ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64 clocksource / arch_timer: Parse GTDT to initialize arch timer irqchip: Add GICv2 specific ACPI boot support ARM64 / ACPI: Introduce ACPI_IRQ_MODEL_GIC and register device's gsi ACPI / processor: Make it possible to get CPU hardware ID via GICC ACPI / processor: Introduce phys_cpuid_t for CPU hardware ID ARM64 / ACPI: Parse MADT for SMP initialization ...
| * ARM64 / ACPI: make acpi_map_gic_cpu_interface() as void functionHanjun Guo2015-03-311-14/+9
| | | | | | | | | | Since the only caller of acpi_map_gic_cpu_interface() doesn't need the return value, make it have a void return type to avoid introducing subtle bugs, and update the comments of the function accordingly. Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * ARM64 / ACPI: Ignore the return error value of acpi_map_gic_cpu_interface()Hanjun Guo2015-03-311-1/+2
| | | | | | | | | | | | | | | | | | | | | | MADT scanning will stop when it gets an error from the handler, acpi_map_gic_cpu_interface(), on arm64. However, we need to find all of the enabled CPUs so that SMP initialization can work properly. So, if an error occurs in this case, ignore it for now so that we can find all of the enabled CPUs. Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * ARM64 / ACPI: fix usage of acpi_map_gic_cpu_interfaceWill Deacon2015-03-261-6/+5
| | | | | | | | | | | | | | | | | | | | | | acpi_parse_gic_cpu_interface calls acpi_map_gic_cpu_interface by both passing a 32-bit value in the u8 enabled parameter and then subsequently ignoring its return value. Sort it out. Reported-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * ARM64: kernel: acpi: honour acpi=force command line parameterLorenzo Pieralisi2015-03-262-5/+13
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If acpi=force is passed on the command line, it forces ACPI to be the only available boot method, hence it must be left enabled even if the initialization and sanity checks on ACPI tables fails. This patch refactors ACPI initialization to prevent disabling ACPI if acpi=force is passed on the command line. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Hanjun Guo <hanjun.guo@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * ARM64: kernel: acpi: refactor ACPI tables init and checksLorenzo Pieralisi2015-03-261-36/+59
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | Current ACPI init code on ARM64 relies on the acpi_table_parse() API to check if the FADT is present and to carry out sanity checks on that. The handler passed to the acpi_table_parse() function and used to carry out the parsing on the requested table returns a value that is ignored by the acpi_table_parse() function, so it is not possible to propagate errors back to the acpi_table_parse() caller through the handler. This forces ARM64 ACPI init code to have disable_acpi() calls scattered all over the place, which makes the code unwieldy and not easy to follow. This patch refactors the ARM64 ACPI init code by creating a self-contained function (ie acpi_fadt_sanity_check()) that carries out the required checks on the FADT and returns an adequate return value to the caller. This allows creating a common error path that disables ACPI, and makes the code more readable and easier to change were further FADT checks to be added in the future. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Hanjun Guo <hanjun.guo@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * ARM64: kernel: psci: let ACPI probe PSCI versionLorenzo Pieralisi2015-03-261-3/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | PSCI v0.2+ allows the kernel to probe the PSCI firmware version. This patch replaces the default initialization of PSCI v0.2+ functions with code that allows probing PSCI firmware version and initializes PSCI functions accordingly. Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * ARM64: kernel: psci: factor out probe functionLorenzo Pieralisi2015-03-261-16/+34
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | PSCI v0.2+ versions provide a specific PSCI call (PSCI_VERSION) to detect the PSCI version at run-time. Current PSCI v0.2 init code carries out the version probing in the PSCI 0.2 DT init function, but the version probing does not depend on DT so it can be factored out in order to make it available to other boot mechanisms (ie ACPI) to reuse. The psci_probe() probing function can be easily extended to add detection and initialization of PSCI functions defined in PSCI versions >0.2. Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
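A sketch of the kind of probe function being factored out, calling PSCI_VERSION and rejecting firmware older than 0.2. The PSCI_VERSION function ID and the major/minor layout follow the PSCI 0.2 spec, while invoke_psci_fn and psci_0_2_set_functions are shown as assumed helpers:

    #include <linux/errno.h>
    #include <linux/printk.h>

    /* PSCI 0.2 SMC32 function ID for PSCI_VERSION, per the spec. */
    #define PSCI_0_2_FN_PSCI_VERSION        0x84000000u
    #define PSCI_VERSION_MAJOR(v)           (((v) >> 16) & 0xffff)
    #define PSCI_VERSION_MINOR(v)           ((v) & 0xffff)

    /* Assumed helpers: conduit wrapper and 0.2+ function-table setup. */
    extern unsigned long invoke_psci_fn(unsigned long fn, unsigned long a0,
                                        unsigned long a1, unsigned long a2);
    extern void psci_0_2_set_functions(void);

    static int psci_probe(void)
    {
            unsigned long ver = invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION,
                                               0, 0, 0);

            pr_info("PSCIv%lu.%lu detected in firmware.\n",
                    PSCI_VERSION_MAJOR(ver), PSCI_VERSION_MINOR(ver));

            if (PSCI_VERSION_MAJOR(ver) == 0 && PSCI_VERSION_MINOR(ver) < 2) {
                    pr_err("Conflicting PSCI version detected.\n");
                    return -EINVAL;
            }

            psci_0_2_set_functions();
            return 0;
    }

Both the DT 0.2 init path and the ACPI path can then call psci_probe() instead of duplicating the version handling.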
| * ACPI: move arm64 GSI IRQ model to generic GSI IRQ layerLorenzo Pieralisi2015-03-261-73/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The code deployed to implement GSI linux IRQ numbers mapping on arm64 turns out to be generic enough so that it can be moved to ACPI core code along with its respective config option ACPI_GENERIC_GSI selectable on architectures that can reuse the same code. Current ACPI IRQ mapping code is not integrated in the kernel IRQ domain infrastructure, in particular there is no way to look-up the IRQ domain associated with a particular interrupt controller, so this first version of GSI generic code carries out the GSI<->IRQ mapping relying on the IRQ default domain which is supposed to be always set on a specific architecture in case the domain structure passed to irq_create/find_mapping() functions is missing. This patch moves the arm64 acpi functions that implement the gsi mappings: acpi_gsi_to_irq() acpi_register_gsi() acpi_unregister_gsi() to ACPI core code. Since the generic GSI<->domain mapping is based on IRQ domains, it can be extended as soon as a way to map an interrupt controller to an IRQ domain is implemented for ACPI in the IRQ domain layer. x86 and ia64 code for GSI mappings cannot rely on the generic GSI layer at present for legacy reasons, so they do not select the ACPI_GENERIC_GSI config options and keep relying on their arch specific GSI mapping layer. Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
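Because GSIs are unique system-wide, the generic layer can map them through the default IRQ domain. A condensed sketch of the three entry points (error handling trimmed; the trigger/polarity helper is illustrative):

    #include <linux/acpi.h>
    #include <linux/irq.h>
    #include <linux/irqdomain.h>

    static unsigned int acpi_gsi_irq_type(int trigger, int polarity)
    {
            if (trigger == ACPI_EDGE_SENSITIVE)
                    return polarity == ACPI_ACTIVE_LOW ? IRQ_TYPE_EDGE_FALLING
                                                       : IRQ_TYPE_EDGE_RISING;
            return polarity == ACPI_ACTIVE_LOW ? IRQ_TYPE_LEVEL_LOW
                                               : IRQ_TYPE_LEVEL_HIGH;
    }

    int acpi_gsi_to_irq(u32 gsi, unsigned int *irq)
    {
            /* NULL domain: fall back to the arch's default IRQ domain. */
            *irq = irq_find_mapping(NULL, gsi);
            return *irq > 0 ? 0 : -EINVAL;
    }

    int acpi_register_gsi(struct device *dev, u32 gsi, int trigger,
                          int polarity)
    {
            unsigned int irq = irq_create_mapping(NULL, gsi);

            if (!irq)
                    return -EINVAL;

            irq_set_irq_type(irq, acpi_gsi_irq_type(trigger, polarity));
            return irq;
    }

    void acpi_unregister_gsi(u32 gsi)
    {
            unsigned int irq = irq_find_mapping(NULL, gsi);

            if (irq > 0)
                    irq_dispose_mapping(irq);
    }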
| * ARM64 / ACPI: Don't unflatten device tree if acpi=force is passedHanjun Guo2015-03-262-2/+2
| | | | | | | | | | | | The policy is that once acpi=force is passed as an early param, we will not unflatten the device tree, even if ACPI ends up disabled because ACPI table init failed. Fix the code by checking both acpi_disabled and param_acpi_force before the device tree is unflattened. CC: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * ARM64 / ACPI: Select ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64Al Stone2015-03-261-1/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | ACPI reduced hardware mode is disabled by default, but ARM64 can only run properly in ACPI hardware reduced mode, so select ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64. If the firmware is not using hardware reduced ACPI mode, we will disable ACPI to avoid nightmare such as accessing some registers which are not available on ARM64. CC: Catalin Marinas <catalin.marinas@arm.com> CC: Will Deacon <will.deacon@arm.com> Reviewed-by: Grant Likely <grant.likely@linaro.org> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Signed-off-by: Al Stone <al.stone@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * clocksource / arch_timer: Parse GTDT to initialize arch timerHanjun Guo2015-03-261-0/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Using the information presented by GTDT (Generic Timer Description Table) to initialize the arch timer (not memory-mapped). CC: Daniel Lezcano <daniel.lezcano@linaro.org> CC: Thomas Gleixner <tglx@linutronix.de> Originally-by: Amit Daniel Kachhap <amit.daniel@samsung.com> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Reviewed-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * irqchip: Add GICv2 specific ACPI boot supportTomasz Nowicki2015-03-261-0/+25
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | An ACPI kernel uses the MADT table for proper GIC initialization. It needs to parse the GIC related subtables, collect the CPU interface and distributor addresses and call the driver initialization function (which is hardware abstraction agnostic). This mirrors the way FDT initializes GICv1/2. NOTE: This commit only enables basic GICv1/2 functionality. While a simple GICv2 init call is used for now, any further GIC features require generic infrastructure for proper ACPI irqchip initialization. That mechanism, and stacked irqdomains to support the GICv2 MSI/virtualization extension, GICv3/4 and its ITS, are considered as next steps. CC: Jason Cooper <jason@lakedaemon.net> CC: Marc Zyngier <marc.zyngier@arm.com> CC: Thomas Gleixner <tglx@linutronix.de> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Jason Cooper <jason@lakedaemon.net> Reviewed-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
| * ARM64 / ACPI: Introduce ACPI_IRQ_MODEL_GIC and register device's gsiHanjun Guo2015-03-261-0/+73
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Introduce ACPI_IRQ_MODEL_GIC which is needed for ARM64 as GIC is used, and then register device's gsi with the core IRQ subsystem. acpi_register_gsi() is similar to DT based irq_of_parse_and_map(), since gsi is unique in the system, so use hwirq number directly for the mapping. We are going to implement stacked domains when GICv2m, GICv3, ITS support are added. CC: Marc Zyngier <marc.zyngier@arm.com> Originally-by: Amit Daniel Kachhap <amit.daniel@samsung.com> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>