Commit message
Conflicts:
arch/arm/include/asm/atomic.h
arch/arm/include/asm/hardirq.h
arch/arm/kernel/smp.c
git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into devel-stable
This series extends the existing ARM v2p runtime patching to 64 bit, which is
needed for LPAE machines that have physical memory beyond 4GB.
On some PAE systems (e.g. TI Keystone), memory is above the
32-bit addressable limit, and the interconnect provides an
aliased view of parts of physical memory in the 32-bit addressable
space. This alias is strictly for boot time usage, and is not
otherwise usable because of coherency limitations. On such systems,
the idmap mechanism needs to take this aliased mapping into account.
This patch introduces virt_to_idmap() and an arch function pointer which
can be populated by platforms that need it. The relevant idmap call sites
are also switched over to the now-available virt_to_idmap(). An #ifdef
approach was avoided in order to stay compatible with multi-platform builds.
Most platforms won't touch the pointer, in which case virt_to_idmap()
falls back to the existing virt_to_phys() macro.
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
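
A minimal sketch of the arrangement described above, assuming the hook is a
plain function pointer named arch_virt_to_idmap (the real declaration lives in
the ARM memory headers and the details may differ):

    /* Hook a platform such as Keystone can populate from its init code. */
    extern unsigned long (*arch_virt_to_idmap)(unsigned long x);

    static inline unsigned long virt_to_idmap(unsigned long x)
    {
        if (arch_virt_to_idmap)
            return arch_virt_to_idmap(x);   /* aliased boot-time view */

        return virt_to_phys((void *)x);     /* default: plain physical address */
    }

A platform needing the alias then simply assigns its own translation helper to
arch_virt_to_idmap during early init.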
We need a mechanism to let an inbound CPU signal that it is alive before
even getting into the kernel environment, i.e. from early assembly code.
Using an IPI is the simplest way to achieve that.
This adds some basic infrastructure to register a struct completion
pointer to be "completed" when the dedicated IPI for this task is
received.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
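
A minimal sketch of that infrastructure, assuming a dedicated IPI_COMPLETION
entry in the IPI enum and the usual per-CPU machinery; the wiring into the IPI
demux is omitted:

    static DEFINE_PER_CPU(struct completion *, cpu_completion);

    /*
     * Called by the waiting CPU; returns the IPI number the inbound CPU
     * must raise from its early boot path once it is alive.
     */
    int register_ipi_completion(struct completion *completion, int cpu)
    {
        per_cpu(cpu_completion, cpu) = completion;
        return IPI_COMPLETION;
    }

    /* Invoked from handle_IPI() when the dedicated IPI arrives. */
    static void ipi_complete(unsigned int cpu)
    {
        complete(per_cpu(cpu_completion, cpu));
    }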
If we're running a kernel compiled with SMP_ON_UP=y and the hardware only
supports UP operation, there is no smp_cross_call() function assigned.
Unfortunately, arch_irq_work_raise() calls smp_cross_call() unconditionally
and crashes the kernel on UP devices. Check that we're actually running on
an SMP device before calling smp_cross_call() here.
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = c0004000
[00000000] *pgd=00000000
Internal error: Oops: 80000005 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.12.0-rc6-00018-g8d45144-dirty #16
task: de05b440 ti: de05c000 task.ti: de05c000
PC is at 0x0
LR is at arch_irq_work_raise+0x3c/0x48
pc : [<00000000>] lr : [<c0019590>] psr: 60000193
sp : de05dd60 ip : 00000001 fp : 00000000
r10: c085e2f0 r9 : de05c000 r8 : c07be0a4
r7 : de05c000 r6 : de05c000 r5 : c07c5778 r4 : c0824554
r3 : 00000000 r2 : 00000000 r1 : 00000006 r0 : c0529a58
Flags: nZCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment kernel
Control: 10c5387d Table: 80004019 DAC: 00000017
Process swapper/0 (pid: 1, stack limit = 0xde05c248)
Stack: (0xde05dd60 to 0xde05e000)
dd60: c07b9dbc c00cb2dc 00000001 c08242c0 c08242c0 60000113 c07be0a8 c00b0590
dd80: de05c000 c085e2f0 c08242c0 c08242c0 c1414c28 c00b07cc de05b440 c1414c28
dda0: c08242c0 c00b0af8 c0862bb0 c0862db0 c1414cd8 de05c028 c0824840 de05ddb8
ddc0: 00000000 00000009 00000001 00000024 c07be0a8 c07be0a4 de05c000 c085e2f0
dde0: 00000000 c004a4b0 00000010 de00d2dc 00000054 00000100 00000024 00000000
de00: de05c028 0000000a ffff8ae7 00200040 00000016 de05c000 60000193 de05c000
de20: 00000054 00000000 00000000 00000000 00000000 c004a704 00000000 de05c008
de40: c07ba254 c004aa1c c07c5778 c0014b70 fa200000 00000054 de05de80 c0861244
de60: 00000000 c0008634 de05b440 c051c778 20000113 ffffffff de05deb4 c051d0a4
de80: 00000001 00000001 00000000 de05b440 c082afac de057ac0 de057ac0 de0443c0
dea0: 00000000 00000000 00000000 00000000 c082afbc de05dec8 c009f2a0 c051c778
dec0: 20000113 ffffffff 00000000 c016edb0 00000000 000002b0 de057ac0 de057ac0
dee0: 00000000 c016ee40 c0875e50 de05df2e de057ac0 00000000 00000013 00000000
df00: 00000000 c016f054 de043600 de0443c0 c008eb38 de004ec0 c0875e50 c008eb44
df20: 00000012 00000000 00000000 3931f0f8 00000000 00000000 00000014 c0822e84
df40: 00000000 c008ed2c 00000000 00000000 00000000 c07b7490 c07b7490 c075ab3c
df60: 00000000 c00701ac 00000002 00000000 c0070160 dffadb73 7bf8edb4 00000000
df80: c051092c 00000000 00000000 00000000 00000000 00000000 00000000 c0510934
dfa0: de05aa40 00000000 c051092c c0013ce8 00000000 00000000 00000000 00000000
dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
dfe0: 00000000 00000000 00000000 00000000 00000013 00000000 07efffe5 4dfac6f5
[<c0019590>] (arch_irq_work_raise+0x3c/0x48) from [<c00cb2dc>] (irq_work_queue+0xe4/0xf8)
[<c00cb2dc>] (irq_work_queue+0xe4/0xf8) from [<c00b0590>] (rcu_accelerate_cbs+0x1d4/0x1d8)
[<c00b0590>] (rcu_accelerate_cbs+0x1d4/0x1d8) from [<c00b07cc>] (rcu_start_gp+0x34/0x48)
[<c00b07cc>] (rcu_start_gp+0x34/0x48) from [<c00b0af8>] (rcu_process_callbacks+0x318/0x608)
[<c00b0af8>] (rcu_process_callbacks+0x318/0x608) from [<c004a4b0>] (__do_softirq+0x114/0x2a0)
[<c004a4b0>] (__do_softirq+0x114/0x2a0) from [<c004a704>] (do_softirq+0x6c/0x74)
[<c004a704>] (do_softirq+0x6c/0x74) from [<c004aa1c>] (irq_exit+0xac/0x100)
[<c004aa1c>] (irq_exit+0xac/0x100) from [<c0014b70>] (handle_IRQ+0x54/0xb4)
[<c0014b70>] (handle_IRQ+0x54/0xb4) from [<c0008634>] (omap3_intc_handle_irq+0x60/0x74)
[<c0008634>] (omap3_intc_handle_irq+0x60/0x74) from [<c051d0a4>] (__irq_svc+0x44/0x5c)
Exception stack(0xde05de80 to 0xde05dec8)
de80: 00000001 00000001 00000000 de05b440 c082afac de057ac0 de057ac0 de0443c0
dea0: 00000000 00000000 00000000 00000000 c082afbc de05dec8 c009f2a0 c051c778
dec0: 20000113 ffffffff
[<c051d0a4>] (__irq_svc+0x44/0x5c) from [<c051c778>] (_raw_spin_unlock_irq+0x28/0x2c)
[<c051c778>] (_raw_spin_unlock_irq+0x28/0x2c) from [<c016edb0>] (proc_alloc_inum+0x30/0xa8)
[<c016edb0>] (proc_alloc_inum+0x30/0xa8) from [<c016ee40>] (proc_register+0x18/0x130)
[<c016ee40>] (proc_register+0x18/0x130) from [<c016f054>] (proc_mkdir_data+0x44/0x6c)
[<c016f054>] (proc_mkdir_data+0x44/0x6c) from [<c008eb44>] (register_irq_proc+0x6c/0x128)
[<c008eb44>] (register_irq_proc+0x6c/0x128) from [<c008ed2c>] (init_irq_proc+0x74/0xb0)
[<c008ed2c>] (init_irq_proc+0x74/0xb0) from [<c075ab3c>] (kernel_init_freeable+0x84/0x1c8)
[<c075ab3c>] (kernel_init_freeable+0x84/0x1c8) from [<c0510934>] (kernel_init+0x8/0x150)
[<c0510934>] (kernel_init+0x8/0x150) from [<c0013ce8>] (ret_from_fork+0x14/0x2c)
Code: bad PC value
Fixes: bf18525fd79 "ARM: 7872/1: Support arch_irq_work_raise() via self IPIs"
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
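
The shape of the fix is roughly the following (a sketch; smp_cross_call() and
IPI_IRQ_WORK are internal to arch/arm/kernel/smp.c):

    void arch_irq_work_raise(void)
    {
        /*
         * An SMP_ON_UP kernel running on UP hardware never registers a
         * smp_cross_call handler, so don't call through a NULL pointer.
         */
        if (is_smp())
            smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
    }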
By default, IRQ work is run from the tick interrupt (see
irq_work_run() in update_process_times()). When we're in full
NOHZ mode, restarting the tick requires the use of IRQ work, and if the
only place we run IRQ work is the tick interrupt, we have an unbreakable
cycle. Implement arch_irq_work_raise() via
self IPIs to break this cycle and get the tick started again.
Note that we implement this via IPIs which are only available on
SMP builds. This shouldn't be a problem because full NOHZ is only
supported on SMP builds anyway.
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC cleanups from Olof Johansson:
"This branch contains code cleanups, moves and removals for 3.12.
There's a large number of various cleanups, and a nice net removal of
13500 lines of code.
Highlights worth mentioning are:
- A series of patches from Stephen Boyd removing the ARM local timer
API.
- Move of Qualcomm MSM IOMMU code to drivers/iommu.
- Samsung PWM driver cleanups from Tomasz Figa, removing legacy PWM
driver and switching over to the drivers/pwm one.
- Removal of some unused auto-generated headers for OMAP2+ (PRM/CM).
There's also a move of a header file out of include/linux/i2c/ to
platform_data, where it really belongs. It touches mostly ARM
platform code for include changes so we took it through our tree"
* tag 'cleanup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (83 commits)
ARM: OMAP2+: Add back the define for AM33XX_RST_GLOBAL_WARM_SW_MASK
gpio: (gpio-pca953x) move header to linux/platform_data/
arm: zynq: hotplug: Remove unreachable code
ARM: SAMSUNG: Remove unnecessary exynos4_default_sdhci*()
tegra: simplify use of devm_ioremap_resource
ARM: SAMSUNG: Remove plat/regs-timer.h header
ARM: SAMSUNG: Remove remaining uses of plat/regs-timer.h header
ARM: SAMSUNG: Remove pwm-clock infrastructure
ARM: SAMSUNG: Remove old PWM timer platform devices
pwm: Remove superseded pwm-samsung-legacy driver
ARM: SAMSUNG: Modify board files to use new PWM platform device
ARM: SAMSUNG: Rework private data handling in dev-backlight
pwm: Add new pwm-samsung driver
ARM: mach-mvebu: remove redundant DT parsing and validation
ARM: msm: Only compile io.c on platforms that use it
iommu/msm: Move mach includes to iommu directory
ARM: msm: Remove devices-iommu.c
ARM: msm: Move mach/board.h contents to common.h
ARM: msm: Migrate msm_timer to CLOCKSOURCE_OF_DECLARE
ARM: msm: Remove TMR and TMR0 static mappings
...
git://git.kernel.org/pub/scm/linux/kernel/git/davidb/linux-msm into next/cleanup
From Stephen Boyd:
Now that we have a generic arch hook for broadcast we can remove the
local timer API entirely. Doing so will reduce code in ARM core, reduce
the architecture dependencies of our timer drivers, and simplify the code
because we no longer go through an architecture layer that is essentially
a hotplug notifier.
* tag 'remove-local-timers' of git://git.kernel.org/pub/scm/linux/kernel/git/davidb/linux-msm:
ARM: smp: Remove local timer API
clocksource: time-armada-370-xp: Divorce from local timer API
clocksource: time-armada-370-xp: Fix sparse warning
ARM: msm: Divorce msm_timer from local timer API
ARM: PRIMA2: Divorce timer-marco from local timer API
ARM: EXYNOS4: Divorce mct from local timer API
ARM: OMAP2+: Divorce from local timer API
ARM: smp_twd: Divorce smp_twd from local timer API
ARM: smp: Remove duplicate dummy timer implementation
Resolved a large number of conflicts due to __cpuinit cleanups, etc.
Signed-off-by: Olof Johansson <olof@lixom.net>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
There are no more users of this API, remove it.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Drop ARM's version of the dummy timer now that we have a generic
implementation in drivers/clocksource.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Now that we support a timer-backed delay loop, I'm quickly getting sick
and tired of people complaining that their beloved bogomips value has
decreased. You know who you are!
This patch removes the bogomips line from /proc/cpuinfo, based on the
reasoning that any program parsing this is already broken and, as such,
won't be further broken if the field is removed.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Architectures should fully validate whether kexec is possible as part of
machine_kexec_prepare(), so that user-space's kexec_load() operation can
report any problems. Performing validation in machine_kexec() itself is
too late, since it is not allowed to return.
Prior to this patch, ARM's machine_kexec() was testing after-the-fact
whether machine_kexec_prepare() was able to disable all but one CPU.
Instead, modify machine_kexec_prepare() to validate all conditions
necessary for machine_kexec() to succeed. BUG if the validation succeeded
yet disabling the CPUs didn't actually work.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
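
In other words, the late check turns into an assertion rather than a policy
decision, roughly (a sketch):

    void machine_kexec(struct kimage *image)
    {
        /*
         * machine_kexec_prepare() already validated that we can get down
         * to a single CPU; if machine_shutdown() still failed to disable
         * the others, that validation was wrong and we cannot kexec
         * reliably, so BUG rather than limp on.
         */
        BUG_ON(num_online_cpus() > 1);

        /* ... copy the relocation code and jump to the new kernel ... */
    }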
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
This removes all the ARM uses of the __cpuinit macros from C code,
and all __CPUINIT from assembly code. Two ".previous" section statements
that were paired with __CPUINIT (aka .section ".cpuinit.text") are also
removed here.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Conflicts:
arch/arm/Makefile
arch/arm/include/asm/glue-proc.h
git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable
Conflicts:
arch/arm/kernel/smp.c
Please pull these miscellaneous LPAE fixes I've been collecting for a while
now for 3.11. They've been tested and reviewed by quite a few people, and most
of the patches are pretty trivial. -- Will Deacon.
This patch redefines the early boot time use of the R4 register to steal a few
low order bits (ARCH_PGD_SHIFT bits) on LPAE systems. This allows for up to
38-bit physical addresses.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The MPU initialisation on the primary core is performed in two stages, one
minimal stage to ensure the CPU can boot and a second one after
sanity_check_meminfo. As the memory configuration is known by the time we
boot secondary cores only a single step is necessary, provided the values
for DRSR are passed to secondaries.
This patch implements this arrangement. The configuration generated for the
MPU regions is made available to the secondary core, which can then use the
asm MPU initialisation code to program a complete region configuration.
This is necessary for SMP configurations without an MMU, as the MPU
initialisation is the only way to ensure that memory is specified as
'shared'.
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
CC: Nicolas Pitre <nico@linaro.org>
nommu systems do not require any page tables, so don't try to initialise
them when bringing up secondary cores.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Add comments to machine_shutdown()/halt()/power_off()/restart() that
describe their purpose and/or requirements re: CPUs being active/not.
In machine_shutdown(), replace the call to smp_send_stop() with a call to
disable_nonboot_cpus(). This completely disables all but one CPU, thus
satisfying the requirement that only a single CPU be active for kexec.
Adjust Kconfig dependencies for this change.
In machine_halt()/power_off()/restart(), call smp_send_stop() directly,
rather than via machine_shutdown(); these functions don't need to
completely de-activate all CPUs using hotplug, but rather just quiesce
them.
Remove smp_kill_cpus(), and its call from smp_send_stop().
smp_kill_cpus() was indirectly calling smp_ops.cpu_kill() without calling
smp_ops.cpu_die() on the target CPUs first. At least some implementations
of smp_ops had issues with this; it caused cpu_kill() to hang on Tegra,
for example. Since smp_send_stop() is only used for shutdown, halt, and
power-off, there is no need to attempt any kind of CPU hotplug here.
Adjust Kconfig to reflect that machine_shutdown() (and hence kexec)
relies upon disable_nonboot_cpus(). However, this alone doesn't guarantee
that hotplug will work, or even that hotplug is implemented for a
particular piece of HW that a multi-platform zImage runs on. Hence, add
error-checking to machine_kexec() to determine whether it did work.
Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Zhangfei Gao <zhangfei.gao@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
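
The resulting split looks roughly like this (a sketch; the exact ordering of
the IRQ disable and the stop IPI may differ from the actual code):

    /* kexec path: secondaries must be genuinely offlined via hotplug. */
    void machine_shutdown(void)
    {
        disable_nonboot_cpus();
    }

    /* halt/power-off/restart: quiescing the other CPUs is enough. */
    void machine_halt(void)
    {
        local_irq_disable();
        smp_send_stop();

        while (1)
            cpu_relax();
    }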
Before f7b861b7a6d9 ("arm: Use generic idle loop"), ARM would kill the
CPU within the RCU idle section. Now that the rcu_idle_enter()/exit()
pair has been pushed lower down in the idle loop, this is no longer true,
so using RCU_NONIDLE here is no longer necessary and is also harmful
because RCU is not actually idle at this point.
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull ARM updates from Russell King:
"The major items included in here are:
- MCPM, multi-cluster power management, part of the infrastructure
required for ARMs big.LITTLE support.
- A rework of the ARM KVM code to allow re-use by ARM64.
- Error handling cleanups of the IS_ERR_OR_NULL() madness and fixes
of that stuff for arch/arm
- Preparatory patches for Cortex-M3 support from Uwe Kleine-König.
There is also a set of three patches in here from Hugh/Catalin to
address freeing of inappropriate page tables on LPAE. You already
have these from akpm, but they were already part of my tree at the
time he sent them, so unfortunately they'll end up with duplicate
commits"
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (77 commits)
ARM: EXYNOS: remove unnecessary use of IS_ERR_VALUE()
ARM: IMX: remove unnecessary use of IS_ERR_VALUE()
ARM: OMAP: use consistent error checking
ARM: cleanup: OMAP hwmod error checking
ARM: 7709/1: mcpm: Add explicit AFLAGS to support v6/v7 multiplatform kernels
ARM: 7700/2: Make cpu_init() notrace
ARM: 7702/1: Set the page table freeing ceiling to TASK_SIZE
ARM: 7701/1: mm: Allow arch code to control the user page table ceiling
ARM: 7703/1: Disable preemption in broadcast_tlb*_a15_erratum()
ARM: mcpm: provide an interface to set the SMP ops at run time
ARM: mcpm: generic SMP secondary bringup and hotplug support
ARM: mcpm_head.S: vlock-based first man election
ARM: mcpm: Add baremetal voting mutexes
ARM: mcpm: introduce helpers for platform coherency exit/setup
ARM: mcpm: introduce the CPU/cluster power API
ARM: multi-cluster PM: secondary kernel entry code
ARM: cacheflush: add synchronization helpers for mixed cache state accesses
ARM: cpu hotplug: remove majority of cache flushing from platforms
ARM: smp: flush L1 cache in cpu_die()
ARM: tegra: remove tegra specific cpu_disable()
...
Flush the L1 cache for the CPU which is going down in cpu_die() so
that we don't end up with all platforms doing this. This ensures
that any cache lines we own are pushed out before the cache becomes
inaccessible.
We may end up subsequently creating some dirty cache lines - for
example, with the complete() call, but this update must become
visible to other CPUs before __cpu_die() can proceed. Subsequent
accesses from the platform's cpu_die() function should _not_ matter.
Also place a mb() after the complete() call to ensure that this is
visible to other CPUs.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
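
Condensed, the resulting cpu_die() ordering looks like this (a sketch;
cpu_died is the completion waited on by __cpu_die(), and flush_cache_louis()
writes back the cache levels owned by this CPU):

    void __ref cpu_die(void)
    {
        unsigned int cpu = smp_processor_id();

        idle_task_exit();
        local_irq_disable();

        /*
         * Push out any dirty lines this CPU owns before its cache can be
         * powered down by platform code.
         */
        flush_cache_louis();

        /*
         * Tell __cpu_die() we are done; the completion itself dirties a
         * line, so force it out before the platform's cpu_die() runs.
         */
        complete(&cpu_died);
        mb();

        if (smp_ops.cpu_die)
            smp_ops.cpu_die(cpu);

        /* Not reached on a successful power-down. */
    }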
Use the generic idle loop and replace enable/disable_hlt with the
respective core functions.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Cc: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Tested-by: Kevin Hilman <khilman@linaro.org> # OMAP
Link: http://lkml.kernel.org/r/20130321215233.826238797@linutronix.de
Commit 70264367a243 ("ARM: 7653/2: do not scale loops_per_jiffy when
using a constant delay clock") fixed a problem with our timer-based
delay loop, where loops_per_jiffy is scaled by cpufreq yet used directly
by the timer delay ops.
This patch fixes the problem in a more elegant way by keeping a private
ticks_per_jiffy field in the delay ops, independent of loops_per_jiffy
and therefore not subject to scaling. The loop-based delay continues to
use loops_per_jiffy directly, as it should.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
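
A sketch of the arrangement: the delay ops carry their own tick rate, captured
once when a timer-backed delay is registered, so cpufreq scaling of
loops_per_jiffy no longer leaks into timer-based udelay(). The __timer_*()
helpers named here are assumed to be the existing timer-backed implementations
in arch/arm/lib/delay.c.

    struct arm_delay_ops {
        void (*delay)(unsigned long);
        void (*const_udelay)(unsigned long);
        void (*udelay)(unsigned long);
        unsigned long ticks_per_jiffy;  /* private copy, never cpufreq-scaled */
    };

    void __init register_current_timer_delay(const struct delay_timer *timer)
    {
        delay_timer = timer;
        lpj_fine = timer->freq / HZ;

        arm_delay_ops.ticks_per_jiffy = lpj_fine;
        arm_delay_ops.delay = __timer_delay;
        arm_delay_ops.const_udelay = __timer_const_udelay;
        arm_delay_ops.udelay = __timer_udelay;
    }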
clock-event
With the recent ARM broadcast time clean-up from Mark Rutland, the dummy
broadcast device is always registered with the timer subsystem. And since
the rating of the dummy clock event is very high, it may be preferred
over a real clock event.
This is an unintended change in behaviour compared to the past. So reduce
the rating of the dummy clock-event so that a real clock-event device is
selected when available.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
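
Concretely, the dummy broadcast device ends up registered with a rating well
below any real hardware timer, roughly (a sketch; the exact value and the
broadcast_timer_set_mode hook follow the existing smp.c code and may differ):

    static void broadcast_timer_setup(struct clock_event_device *evt)
    {
        evt->name     = "dummy_timer";
        evt->features = CLOCK_EVT_FEAT_ONESHOT |
                        CLOCK_EVT_FEAT_PERIODIC |
                        CLOCK_EVT_FEAT_DUMMY;
        evt->rating   = 100;    /* keep it below any real clock event */
        evt->mult     = 1;
        evt->set_mode = broadcast_timer_set_mode;

        clockevents_register_device(evt);
    }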
The ARM ARM requires branch predictor maintenance if, for a given ASID,
the instructions at a specific virtual address appear to change.
From the kernel's point of view, that means:
- Changing the kernel's view of memory (e.g. switching to the
identity map)
- ASID rollover (since ASIDs will be re-allocated to new tasks)
This patch adds explicit branch predictor maintenance when either of the
two conditions above is met.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull late ARM updates from Russell King:
"Here is the late set of ARM updates for this merge window; in here is:
- The ARM parts of the broadcast timer support, core parts merged
through tglx's tree. This was left over from the previous merge to
allow the dependency on tglx's tree to be resolved.
- A fix to the VFP code which shows up on Raspberry Pi's, as well as
fixing the fallout from a previous commit in this area.
- A number of smaller fixes scattered throughout the ARM tree"
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm:
ARM: Fix broken commit 0cc41e4a21d43 corrupting kernel messages
ARM: fix scheduling while atomic warning in alignment handling code
ARM: VFP: fix emulation of second VFP instruction
ARM: 7656/1: uImage: Error out on build of multiplatform without LOADADDR
ARM: 7640/1: memory: tegra_ahb_enable_smmu() depends on TEGRA_IOMMU_SMMU
ARM: 7654/1: Preserve L_PTE_VALID in pte_modify()
ARM: 7653/2: do not scale loops_per_jiffy when using a constant delay clock
ARM: 7651/1: remove unused smp_timer_broadcast #define
When udelay() is implemented using an architected timer, it is wrong
to scale loops_per_jiffy when changing the CPU clock frequency since
the timer clock remains constant.
The lpj should probably become an implementation detail relevant to
the CPU loop based delay routine only and more confined to it. In the
meantime, this is the minimal fix needed to have the expected delays with
the timer based implementation when cpufreq is also in use.
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The assignment of clock_event_device::broadcast can be done by timer
core as of 12ad100046: "clockevents: Add generic timer broadcast
function", and the arm code moved over to this as of 3d06770eef: "arm:
Add generic timer broadcast support", but left a dangling #define when
!CONFIG_GENERIC_TIMER_BROADCAST.
This patch removes the now unused #define.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull ARM virtualization changes:
"This contains parts of the ARM KVM support that have dependencies on
other patches merged through the arm-soc tree. In combination with
patches coming through Russell's tree, this will finally add full
support for the kernel based virtual machine on ARM, which has been
awaited for some time now.
Further, we now have a separate platform for virtual machines and qemu
booting that is used by both Xen and KVM, separating these from the
Versatile Express reference implementation. Obviously, this new
platform is multiplatform capable so it can be combined with existing
machines in the same kernel."
* tag 'virt' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (38 commits)
ARM: arch_timer: include linux/errno.h
arm: arch_timer: add missing inline in stub function
ARM: KVM: arch_timers: Wire the init code and config option
ARM: KVM: arch_timers: Add timer world switch
ARM: KVM: arch_timers: Add guest timer core support
ARM: KVM: Add VGIC configuration option
ARM: KVM: VGIC initialisation code
ARM: KVM: VGIC control interface world switch
ARM: KVM: VGIC interrupt injection
ARM: KVM: vgic: retire queued, disabled interrupts
ARM: KVM: VGIC virtual CPU interface management
ARM: KVM: VGIC distributor handling
ARM: KVM: VGIC accept vcpu and dist base addresses from user space
ARM: KVM: Initial VGIC infrastructure code
ARM: KVM: Keep track of currently running vcpus
KVM: ARM: Introduce KVM_ARM_SET_DEVICE_ADDR ioctl
ARM: gic: add __ASSEMBLY__ guard to C definitions
ARM: gic: define GICH offsets for VGIC support
ARM: gic: add missing distributor defintions
ARM: mach-virt: fixup machine descriptor after removal of sys_timer
...
Implement timer_broadcast for the arm architecture, allowing for the use
of clock_event_device drivers decoupled from the timer tick broadcast
mechanism.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
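
On ARM the arch hook ends up as a one-liner that turns a broadcast request
into the existing timer IPI (a sketch; IPI_TIMER and smp_cross_call() are
internal to arch/arm/kernel/smp.c):

    #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
    void tick_broadcast(const struct cpumask *mask)
    {
        /* The core tick-broadcast code asks us to kick these CPUs. */
        smp_cross_call(mask, IPI_TIMER);
    }
    #endif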
Currently, the ARM backend must maintain a redundant list of timers for
the purpose of centralising timer broadcast functionality. This prevents
sharing timer drivers across architectures.
This patch moves the pain of dealing with timer broadcasts to the core
clockevents tick broadcast code, which already maintains its own list
of timers.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Pull ARM SoC cleanups from Arnd Bergmann:
"A large number of cleanups, all over the platforms. This is dominated
largely by the Samsung platforms (s3c, s5p, exynos) and a few of the
others moving code out of arch/arm into more appropriate subsystems.
The clocksource and irqchip drivers are now abstracted to the point
where platforms that are already cleaned up do not need to even
specify the driver they use, it can all get configured from the device
tree as we do for normal device drivers. The clocksource changes
basically touch every single platform in the process.
We further clean up the use of platform specific header files here,
with the goal of turning more of the platforms over to being
"multiplatform" enabled, which implies that they cannot expose their
headers to architecture independent code any more.
It is expected that no functional changes are part of the cleanup.
The overall reduction in total code lines is mostly the result of
removing broken and obsolete code."
* tag 'cleanup' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (133 commits)
ARM: mvebu: correct gated clock documentation
ARM: kirkwood: add missing include for nsa310
ARM: exynos: move exynos4210-combiner to drivers/irqchip
mfd: db8500-prcmu: update resource passing
drivers/db8500-cpufreq: delete dangling include
ARM: at91: remove NEOCORE 926 board
sunxi: Cleanup the reset code and add meaningful registers defines
ARM: S3C24XX: header mach/regs-mem.h local
ARM: S3C24XX: header mach/regs-power.h local
ARM: S3C24XX: header mach/regs-s3c2412-mem.h local
ARM: S3C24XX: Remove plat-s3c24xx directory in arch/arm/
ARM: S3C24XX: transform s3c2443 subirqs into new structure
ARM: S3C24XX: modify s3c2443 irq init to initialize all irqs
ARM: S3C24XX: move s3c2443 irq code to irq.c
ARM: S3C24XX: transform s3c2416 irqs into new structure
ARM: S3C24XX: modify s3c2416 irq init to initialize all irqs
ARM: S3C24XX: move s3c2416 irq init to common irq code
ARM: S3C24XX: Modify s3c_irq_wake to use the hwirq property
ARM: S3C24XX: Move irq syscore-ops to irq-pm
clocksource: always define CLOCKSOURCE_OF_DECLARE
...
In preparation for moving the GIC code to drivers/irqchip, remove the
direct platform dependencies on gic_raise_softirq(). Move the setup of
smp_cross_call into the GIC code and use the arch_send_wakeup_ipi_mask()
function to trigger wake-up IPIs.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Kukjin Kim <kgene.kim@samsung.com>
Cc: Sascha Hauer <kernel@pengutronix.de>
Cc: David Brown <davidb@codeaurora.org>
Cc: Daniel Walker <dwalker@fifo99.com>
Cc: Bryan Huntsman <bryanh@codeaurora.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Magnus Damm <magnus.damm@gmail.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Shiraz Hashim <shiraz.hashim@st.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
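
The effect is that the IPI backend gets registered by the GIC driver itself
rather than by every platform's smp_ops; along these lines (a sketch using a
hypothetical gic_smp_init() helper; in the real driver this happens inside the
GIC init path):

    static void __init gic_smp_init(void)
    {
    #ifdef CONFIG_SMP
        /*
         * Every platform using the GIC raises IPIs the same way, so the
         * driver can register the cross-call hook once, centrally.
         */
        set_smp_cross_call(gic_raise_softirq);
    #endif
    }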
Remove some silly wrapper functions which aren't really required:
platform_smp_prepare_cpus
platform_secondary_init
platform_cpu_die
This simplifies the code.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Use the previously unused TPIDRPRW register to store percpu offsets.
TPIDRPRW is only accessible in PL1, so it can only be used in the kernel.
This replaces two loads with an mrc instruction for each percpu variable
access. With hackbench, the performance improvement is 1.4% on Cortex-A9
(highbank). Taking an average of 30 runs of "hackbench -l 1000" yields:
Before: 6.2191
After: 6.1348
Will Deacon reported similar delta on v6 with 11MPCore.
The asm "memory clobber" are needed here to ensure the percpu offset
gets reloaded. Testing by Will found that this would not happen in
__schedule() which is a bit of a special case as preemption is disabled
but the execution can move cores.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
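
A sketch of the accessors this boils down to; TPIDRPRW is CP15 c13/c0/4, and
the "memory" clobbers are what force the offset to be reloaded across points
where execution may migrate to another core:

    static inline void set_my_cpu_offset(unsigned long off)
    {
        /* Write TPIDRPRW; it is PL1-only, so the kernel owns it. */
        asm volatile("mcr p15, 0, %0, c13, c0, 4" : : "r" (off) : "memory");
    }

    static inline unsigned long __my_cpu_offset(void)
    {
        unsigned long off;

        /* One mrc replaces the two loads previously needed per access. */
        asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory");
        return off;
    }
    #define __my_cpu_offset __my_cpu_offset()  /* override the generic offset */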
The advent of big.LITTLE ARM platforms requires the kernel to be able
to identify the MIDRs of all online CPUs upon request. MIDRs are stashed
at boot time so that kernel subsystems can detect the MIDR of online CPUs
by simply retrieving per-CPU data updated by all booted CPUs.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
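
Roughly, each CPU records its own MIDR in the per-CPU cpu_data as it comes up;
a sketch of the relevant part of smp_store_cpu_info(), assuming a cpuid field
in struct cpuinfo_arm:

    static void smp_store_cpu_info(unsigned int cpuid)
    {
        struct cpuinfo_arm *cpu_info = &per_cpu(cpu_data, cpuid);

        cpu_info->loops_per_jiffy = loops_per_jiffy;
        cpu_info->cpuid = read_cpuid_id();      /* stash the MIDR */
    }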
'warnings' into for-next
Add function arch_send_wakeup_ipi_mask(), so that platform code can
use it as an easy way to wake up cores that are in WFI.
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
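
The helper itself is essentially a thin wrapper (a sketch; IPI_WAKEUP is the
SGI formalized for wake-up elsewhere in this history):

    void arch_send_wakeup_ipi_mask(const struct cpumask *mask)
    {
        /* Kick cores sitting in WFI so they fall out of it. */
        smp_cross_call(mask, IPI_WAKEUP);
    }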
This is what is done for the regular interrupts in kernel/irq/proc.c
already, before calling arch_show_interrupts(). Not doing so for the
IPIs causes the column headers not to match with the content whenever
some CPUs are offline.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
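
So the IPI rows only iterate over online CPUs, matching the header printed by
the core code; a sketch of show_ipi_list():

    void show_ipi_list(struct seq_file *p, int prec)
    {
        unsigned int cpu, i;

        for (i = 0; i < NR_IPI; i++) {
            seq_printf(p, "%*s%u: ", prec - 1, "IPI", i);

            /*
             * Was for_each_present_cpu(): offline CPUs no longer get a
             * column, so the numbers line up with the header.
             */
            for_each_online_cpu(cpu)
                seq_printf(p, "%10u ", __get_irq_stat(cpu, ipi_irqs[i]));

            seq_printf(p, " %s\n", ipi_types[i]);
        }
    }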
When booting a secondary CPU, the primary CPU hands two sets of page
tables via the secondary_data struct:
(1) swapper_pg_dir: a normal, cacheable, shared (if SMP) mapping
of the kernel image (i.e. the tables used by init_mm).
(2) idmap_pgd: an uncached mapping of the .idmap.text ELF
section.
The idmap is generally used when enabling and disabling the MMU, which
includes early CPU boot. In this case, the secondary CPU switches to
swapper as soon as it enters C code:
    struct mm_struct *mm = &init_mm;
    unsigned int cpu = smp_processor_id();

    /*
     * All kernel threads share the same mm context; grab a
     * reference and switch to it.
     */
    atomic_inc(&mm->mm_count);
    current->active_mm = mm;
    cpumask_set_cpu(cpu, mm_cpumask(mm));
    cpu_switch_mm(mm->pgd, mm);
This causes a problem on ARMv7, where the identity mapping is treated as
strongly-ordered leading to architecturally UNPREDICTABLE behaviour of
exclusive accesses, such as those used by atomic_inc.
This patch re-orders the secondary_start_kernel function so that we
switch to swapper before performing any exclusive accesses.
Cc: <stable@vger.kernel.org>
Cc: David McKay <david.mckay@st.com>
Reported-by: Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Conflicts:
arch/arm/kernel/smp.c
Pull ARM updates from Russell King:
"This is the first chunk of ARM updates for this merge window.
Conflicts are expected in two files - asm/timex.h and
mach-integrator/integrator_cp.c. Nothing particularly stands out more
than anything else.
Most of the growth is down to the opcodes stuff from Dave Martin,
which is countered by Rob's patches to use more of the asm-generic
headers on ARM."
(A few more conflicts grew since then, but it all looked fairly trivial)
* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (44 commits)
ARM: 7548/1: include linux/sched.h in syscall.h
ARM: 7541/1: Add ARM ERRATA 775420 workaround
ARM: ensure vm_struct has its phys_addr member filled in
ARM: 7540/1: kexec: Check segment memory addresses
ARM: 7539/1: kexec: scan for dtb magic in segments
ARM: 7538/1: delay: add registration mechanism for delay timer sources
ARM: 7536/1: smp: Formalize an IPI for wakeup
ARM: 7525/1: ptrace: use updated syscall number for syscall auditing
ARM: 7524/1: support syscall tracing
ARM: 7519/1: integrator: convert platform devices to Device Tree
ARM: 7518/1: integrator: convert AMBA devices to device tree
ARM: 7517/1: integrator: initial device tree support
ARM: 7516/1: plat-versatile: add DT support to FPGA IRQ
ARM: 7515/1: integrator: check PL010 base address from resource
ARM: 7514/1: integrator: call common init function from machine
ARM: 7522/1: arch_timers: register a time/cycle counter
ARM: 7523/1: arch_timers: enable the use of the virtual timer
ARM: 7531/1: mark kernelmode mem{cpy,set} non-experimental
ARM: 7520/1: Build dtb files in all target
ARM: Fix build warning in arch/arm/mm/alignment.c
...
Remove the offset from ipi_msg_type and assume that SGI0 is the
wakeup interrupt now that all WFI hotplug users call
gic_raise_softirq() with 0 instead of 1. This allows us to
track how many wakeup interrupts are sent and also removes the
unknown IPI printk message for WFI hotplug based systems.
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael J Wysocki:
- Improved system suspend/resume and runtime PM handling for the SH
TMU, CMT and MTU2 clock event devices (also used by ARM/shmobile).
- Generic PM domains framework extensions related to cpuidle support
and domain objects lookup using names.
- ARM/shmobile power management updates including improved support for
the SH7372's A4S power domain containing the CPU core.
- cpufreq changes related to AMD CPUs support from Matthew Garrett,
Andre Przywara and Borislav Petkov.
- cpu0 cpufreq driver from Shawn Guo.
- cpufreq governor fixes related to the relaxing of limits from Michal
Pecio.
- OMAP cpufreq updates from Axel Lin and Richard Zhao.
- cpuidle ladder governor fixes related to the disabling of states from
Carsten Emde and me.
- Runtime PM core updates related to the interactions with the system
suspend core from Alan Stern and Kevin Hilman.
- Wakeup sources modification allowing more helper functions to be
called from interrupt context from John Stultz and additional
diagnostic code from Todd Poynor.
- System suspend error code path fix from Feng Hong.
Fixed up conflicts in cpufreq/powernow-k8 that stemmed from the
workqueue fixes conflicting fairly badly with the removal of support for
hardware P-state chips. The changes were independent but somewhat
intertwined.
* tag 'pm-for-3.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (76 commits)
Revert "PM QoS: Use spinlock in the per-device PM QoS constraints code"
PM / Runtime: let rpm_resume() succeed if RPM_ACTIVE, even when disabled, v2
cpuidle: rename function name "__cpuidle_register_driver", v2
cpufreq: OMAP: Check IS_ERR() instead of NULL for omap_device_get_by_hwmod_name
cpuidle: remove some empty lines
PM: Prevent runtime suspend during system resume
PM QoS: Use spinlock in the per-device PM QoS constraints code
PM / Sleep: use resume event when call dpm_resume_early
cpuidle / ACPI : move cpuidle_device field out of the acpi_processor_power structure
ACPI / processor: remove pointless variable initialization
ACPI / processor: remove unused function parameter
cpufreq: OMAP: remove loops_per_jiffy recalculate for smp
sections: fix section conflicts in drivers/cpufreq
cpufreq: conservative: update frequency when limits are relaxed
cpufreq / ondemand: update frequency when limits are relaxed
properly __init-annotate pm_sysrq_init()
cpufreq: Add a generic cpufreq-cpu0 driver
PM / OPP: Initialize OPP table from device tree
ARM: add cpufreq transiton notifier to adjust loops_per_jiffy for smp
cpufreq: Remove support for hardware P-state chips from powernow-k8
...
If CONFIG_SMP is set, cpufreq skips the loops_per_jiffy update, because
different architectures have different per-CPU loops_per_jiffy definitions.
Signed-off-by: Richard Zhao <richard.zhao@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
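
A sketch of what the ARM-side transition notifier looks like: each CPU keeps a
reference loops_per_jiffy/frequency pair and rescales its per-CPU copy around
every frequency change, with cpufreq_scale() doing the old * mult / div
arithmetic (the per-CPU cpu_data is the existing ARM cpuinfo array):

    static DEFINE_PER_CPU(unsigned long, l_p_j_ref);
    static DEFINE_PER_CPU(unsigned long, l_p_j_ref_freq);

    static int cpufreq_callback(struct notifier_block *nb,
                                unsigned long val, void *data)
    {
        struct cpufreq_freqs *freq = data;
        int cpu = freq->cpu;

        if (freq->flags & CPUFREQ_CONST_LOOPS)
            return NOTIFY_OK;

        if (!per_cpu(l_p_j_ref, cpu)) {
            per_cpu(l_p_j_ref, cpu) =
                per_cpu(cpu_data, cpu).loops_per_jiffy;
            per_cpu(l_p_j_ref_freq, cpu) = freq->old;
        }

        /* Scale up before a frequency raise, scale down after a drop. */
        if ((val == CPUFREQ_PRECHANGE  && freq->old < freq->new) ||
            (val == CPUFREQ_POSTCHANGE && freq->old > freq->new))
            per_cpu(cpu_data, cpu).loops_per_jiffy =
                cpufreq_scale(per_cpu(l_p_j_ref, cpu),
                              per_cpu(l_p_j_ref_freq, cpu),
                              freq->new);

        return NOTIFY_OK;
    }

    static struct notifier_block cpufreq_notifier = {
        .notifier_call = cpufreq_callback,
    };

The block would then be hooked up with cpufreq_register_notifier(&cpufreq_notifier,
CPUFREQ_TRANSITION_NOTIFIER) from an initcall.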