path: root/drivers/cpuidle
Commit message | Author | Age | Files | Lines
* cpuidle: add maintainer entryDaniel Lezcano2013-04-262-3/+6
| Currently cpuidle drivers are spread across different archs. As a result, there are several different paths for cpuidle patch submissions: cpuidle core changes go through linux-pm, ARM driver changes go to the arm-soc or SoC-specific trees, sh changes go through the sh arch tree, pseries changes go through the PowerPC tree, and finally intel changes go through Len's tree while ACPI idle changes go through linux-pm. That makes it difficult to consolidate code and to propagate modifications from the cpuidle core to the different drivers. Hopefully, a movement has started to put the majority of cpuidle drivers under drivers/cpuidle, like cpuidle-calxeda.c and cpuidle-kirkwood.c. Add a maintainer entry for cpuidle to MAINTAINERS to clarify the situation and to indicate to new cpuidle driver authors that those drivers should not go into arch-specific directories. The upstreaming process is unchanged: Rafael takes patches for merging into his tree, but with an Acked-by: tag from the driver's maintainer, so indicate in the drivers' headers who maintains them. The arrangement will be the same as for cpufreq. [rjw: Changelog] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Andrew Lunn <andrew@lunn.ch> #for kirkwood Acked-by: Jason Cooper <jason@lakedaemon.net> #for kirkwood Acked-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: fix comment formatDaniel Lezcano2013-04-241-1/+1
| | | | | | | Fix comment format for the kernel doc script. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* ARM: kirkwood: cpuidle: use init/exit common routineDaniel Lezcano2013-04-231-15/+2
| | | | | | | | | Remove the duplicated code and use the cpuidle common code for initialization. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Tested-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* ARM: calxeda: cpuidle: use init/exit common routineDaniel Lezcano2013-04-231-51/+1
| | | | | | | | Remove the duplicated code and use the cpuidle common code for initialization. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Rob Herring <rob.herring@calxeda.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: make a single register function for allDaniel Lezcano2013-04-231-0/+72
| The usual scheme to initialize a cpuidle driver on an SMP system is:

	cpuidle_register_driver(drv);

	for_each_possible_cpu(cpu) {
		device = &per_cpu(cpuidle_dev, cpu);
		cpuidle_register_device(device);
	}

This code is duplicated in each cpuidle driver. On UP systems, it is done this way:

	cpuidle_register_driver(drv);
	device = &per_cpu(cpuidle_dev, cpu);
	cpuidle_register_device(device);

On UP, the macro 'for_each_cpu' does one iteration:

	#define for_each_cpu(cpu, mask) \
		for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)

Hence, the initialization loop is the same for UP as for SMP. Besides, we saw various bugs, mis-initializations and unchecked return codes in the different drivers; since the code is duplicated, so are the bugs. After fixing all of these, it appears that the initialization pattern is the same for everyone. Please note that some drivers do dev->state_count = drv->state_count. This is not necessary because it is done by the cpuidle_enable_device function in the cpuidle framework. This is true as long as all devices have the same states; otherwise, the 'low level' API should be used instead, with driver-specific initialization. Let's add a wrapper function doing this initialization, with a cpumask parameter for the coupled idle states, and use it for all the drivers. That will save a lot of LOC, consolidate the code, and allow future modifications to be made in a single place. Another benefit is the consolidation of the cpuidle_device variable, which is now in the cpuidle framework and no longer spread across the different arch-specific drivers. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
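As a rough sketch of the driver side after this change (the driver name, state fields and latency numbers are hypothetical, not taken from any real driver), the per-driver boilerplate reduces to a single cpuidle_register() call, which registers the driver and then a cpuidle device for every possible cpu; the second argument is the optional cpumask for coupled idle states:

	#include <linux/cpuidle.h>
	#include <linux/init.h>
	#include <linux/module.h>

	static int foo_enter_idle(struct cpuidle_device *dev,
				  struct cpuidle_driver *drv, int index)
	{
		/* platform-specific low-power entry would go here */
		return index;
	}

	static struct cpuidle_driver foo_idle_driver = {
		.name  = "foo_idle",			/* hypothetical */
		.owner = THIS_MODULE,
		.states[0] = {
			.enter            = foo_enter_idle,
			.exit_latency     = 1,		/* made-up numbers */
			.target_residency = 1,
			.name             = "WFI",
			.desc             = "Wait for interrupt",
		},
		.state_count = 1,
	};

	static int __init foo_idle_init(void)
	{
		/* replaces cpuidle_register_driver() plus the per-cpu
		 * cpuidle_register_device() loop shown above */
		return cpuidle_register(&foo_idle_driver, NULL);
	}
	device_initcall(foo_idle_init);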
* cpuidle: remove en_core_tk_irqen flagDaniel Lezcano2013-04-233-56/+18
| The en_core_tk_irqen flag is set in all the cpuidle drivers, which means it is no longer necessary to specify it. Remove the flag and the code related to it. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Kevin Hilman <khilman@linaro.org> # for mach-omap2/* Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: initialize the broadcast timer frameworkDaniel Lezcano2013-04-011-2/+29
| Commit 89878baa73f0f1c679355006bd8632e5d78f96c2 introduced the CPUIDLE_FLAG_TIMER_STOP flag, which marks an idle state as one that stops the local timer. Now use this flag to check at init time whether any state will need the broadcast timer and, in that case, set up the broadcast timer framework. That avoids duplicating the code in the drivers. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: kirkwood: fix coccicheck warningsSilviu-Mihai Popescu2013-04-011-3/+3
| | | | | | | | | | | | Convert all uses of devm_request_and_ioremap() to the newly introduced devm_ioremap_resource() which provides more consistent error handling. devm_ioremap_resource() provides its own error messages so all explicit error messages can be removed from the failure code paths. Signed-off-by: Silviu-Mihai Popescu <silviupopescu1990@gmail.com> Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
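For reference, a hedged sketch of the resulting probe-time pattern (the resource index and variable names are illustrative); devm_ioremap_resource() prints its own error message and returns an ERR_PTR() value on failure, so the caller only needs the IS_ERR()/PTR_ERR() handling:

	#include <linux/err.h>
	#include <linux/io.h>
	#include <linux/platform_device.h>

	static int foo_probe(struct platform_device *pdev)
	{
		struct resource *res;
		void __iomem *base;

		res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
		base = devm_ioremap_resource(&pdev->dev, res);
		if (IS_ERR(base))
			return PTR_ERR(base);	/* no explicit error message needed */

		/* ... use base ... */
		return 0;
	}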
* cpuidle / kirkwood: remove redundant Kconfig optionDaniel Lezcano2013-04-012-7/+1
| | | | | | | | | | | | | | | | | | | | When the CPU_IDLE and the ARCH_KIRKWOOD options are set it is pointless to define a new option CPU_IDLE_KIRKWOOD because it is redundant. The Makefile drivers directory contains a condition to compile the cpuidle drivers: obj-$(CONFIG_CPU_IDLE) += cpuidle/ Hence, if CPU_IDLE is not set we won't enter this directory. This patch removes the useless Kconfig option and replaces the condition in the Makefile by CONFIG_ARCH_KIRKWOOD. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle : handle clockevent notify from the cpuidle frameworkDaniel Lezcano2013-04-011-0/+9
| | | | | | | | | | | | | | | | | | When a cpu enters a deep idle state, the local timers are stopped and the time framework falls back to the timer device used as a broadcast timer. The different cpuidle drivers are calling clockevents_notify ENTER/EXIT when the idle state stops the local timer. Add a new flag CPUIDLE_FLAG_TIMER_STOP which can be set by the cpuidle drivers. If the flag is set, the cpuidle core code takes care of the notification on behalf of the driver to avoid pointless code duplication. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
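On the driver side this boils down to tagging the affected state, roughly as in the fragment below (the driver and state names, latencies and residencies are invented for illustration, and state 0 is omitted); the core then issues the broadcast-timer clockevents notifications around that state on the driver's behalf:

	static int bar_enter_deep(struct cpuidle_device *dev,
				  struct cpuidle_driver *drv, int index)
	{
		/* power down the core; the broadcast timer has already
		 * taken over because of CPUIDLE_FLAG_TIMER_STOP */
		return index;
	}

	static struct cpuidle_driver bar_idle_driver = {
		.name  = "bar_idle",			/* hypothetical */
		.owner = THIS_MODULE,
		/* .states[0] (e.g. WFI) omitted from this fragment */
		.states[1] = {
			.enter            = bar_enter_deep,
			.exit_latency     = 300,	/* made-up values (us) */
			.target_residency = 1000,
			.flags            = CPUIDLE_FLAG_TIME_VALID |
					    CPUIDLE_FLAG_TIMER_STOP,
			.name             = "DEEP",
			.desc             = "local timer stops in this state",
		},
		.state_count = 2,
	};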
* Merge tag 'soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-socLinus Torvalds2013-02-213-0/+113
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull ARM SoC-specific updates from Arnd Bergmann: "This is a larger set of new functionality for the existing SoC families, including: - vt8500 gains support for new CPU cores, notably the Cortex-A9 based wm8850 - prima2 gains support for the "marco" SoC family, its SMP based cousin - tegra gains support for the new Tegra4 (Tegra114) family - socfpga now supports a newer version of the hardware including SMP - i.mx31 and bcm2835 are now using DT probing for their clocks - lots of updates for sh-mobile - OMAP updates for clocks, power management and USB - i.mx6q and tegra now support cpuidle - kirkwood now supports PCIe hot plugging - tegra clock support is updated - tegra USB PHY probing gets implemented diffently" * tag 'soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (148 commits) ARM: prima2: remove duplicate v7_invalidate_l1 ARM: shmobile: r8a7779: Correct TMU clock support again ARM: prima2: fix __init section for cpu hotplug ARM: OMAP: Consolidate OMAP USB-HS platform data (part 3/3) ARM: OMAP: Consolidate OMAP USB-HS platform data (part 1/3) arm: socfpga: Add SMP support for actual socfpga harware arm: Add v7_invalidate_l1 to cache-v7.S arm: socfpga: Add entries to enable make dtbs socfpga arm: socfpga: Add new device tree source for actual socfpga HW ARM: tegra: sort Kconfig selects for Tegra114 ARM: tegra: enable ARCH_REQUIRE_GPIOLIB for Tegra114 ARM: tegra: Fix build error w/ ARCH_TEGRA_114_SOC w/o ARCH_TEGRA_3x_SOC ARM: tegra: Fix build error for gic update ARM: tegra: remove empty tegra_smp_init_cpus() ARM: shmobile: Register ARM architected timer ARM: MARCO: fix the build issue due to gic-vic-to-irqchip move ARM: shmobile: r8a7779: Correct TMU clock support ARM: mxs_defconfig: Select CONFIG_DEVTMPFS_MOUNT ARM: mxs: decrease mxs_clockevent_device.min_delta_ns to 2 clock cycles ARM: mxs: use apbx bus clock to drive the timers on timrotv2 ...
| * cpuidle: kirkwood: Move out of mach directoryAndrew Lunn2013-01-313-0/+113
| | | | | | | | | | | | | | | | Move the Kirkwood cpuidle driver out of arch/arm/mach-kirkwood and into drivers/cpuidle. Convert the driver into a platform driver. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Jason Cooper <jason@lakedaemon.net>
* | PM / tracing: remove deprecated power trace APIPaul Gortmaker2013-01-261-2/+0
|/ | | | | | | | | | | | | | | | | | | | The text in Documentation said it would be removed in 2.6.41; the text in the Kconfig said removal in the 3.1 release. Either way you look at it, we are well past both, so push it off a cliff. Note that the POWER_CSTATE and the POWER_PSTATE are part of the legacy tracing API. Remove all tracepoints which use these flags. As can be seen from context, most already have a trace entry via trace_cpu_idle anyways. Also, the cpufreq/cpufreq.c PSTATE one is actually unpaired, as compared to the CSTATE ones which all have a clear start/stop. As part of this, the trace_power_frequency also becomes orphaned, so it too is deleted. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: remove the power_specified field in the driverDaniel Lezcano2013-01-153-44/+6
| We realized that the power usage field is never filled and, when it is filled for tegra, the power_specified flag is not set, causing all of these values to be reset when the driver is initialized with set_power_state(). However, the power_specified flag can simply be removed under the assumption that the states are always backward sorted, which is the case with the current code. This change allows the menu governor select function and cpuidle_play_dead() to be simplified. Moreover, the set_power_states() function can be removed as it no longer makes sense. Drop the power_specified flag from struct cpuidle_driver and make the related changes as described above. As a consequence, this also fixes the bug where, on systems with dynamic C-states, the power fields are not initialized. [rjw: Changelog] References: https://bugzilla.kernel.org/show_bug.cgi?id=42870 References: https://bugzilla.kernel.org/show_bug.cgi?id=43349 References: https://lkml.org/lkml/2012/10/16/518 Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: fix number of initialized/destroyed statesKrzysztof Mazur2013-01-111-1/+1
| | | | | | | | | | | | | | | | | Commit bf4d1b5ddb78f86078ac6ae0415802d5f0c68f92 (cpuidle: support multiple drivers) changed the number of initialized state kobjects in cpuidle_add_state_sysfs() from device->state_count to drv->state_count, but left device->state_count in cpuidle_remove_state_sysfs(). The values of these two fields may be different, in which case a NULL pointer dereference may happen in cpuidle_remove_state_sysfs(), for example. Fix this problem by making cpuidle_add_state_sysfs() use device->state_count too (which restores the original behavior of it). [rjw: Changelog] Signed-off-by: Krzysztof Mazur <krzysiek@podlesie.net> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: fix lock contention in the idle pathDaniel Lezcano2013-01-031-7/+1
| Commit bf4d1b5 (cpuidle: support multiple drivers) introduced locking in cpuidle_get_cpu_driver(), which is used in the idle_call() function. This leads to a contention problem with a large number of CPUs, because they all try to run the idle routine at the same time. The lock can be safely removed because of how the cpuidle API is used. Namely, cpuidle_register_driver() is called first, but the cpuidle idle function is not entered before cpuidle_register_device() is called, because the cpuidle device is not enabled then. Moreover, cpuidle_unregister_driver(), which would reset the driver value to NULL, is not called before cpuidle_unregister_device(). All of the cpuidle drivers use the API in the same way. In general, a cleanup around the lock is necessary and a proper refcounting mechanism should be used to ensure the consistency in the API (for example, cpuidle_unregister_driver() should fail if the driver's refcount is not 0). However, these modifications will require some code reorganization and rewrite which will be too intrusive for a fix. For this reason, fix the contention problem introduced by commit bf4d1b5 by simply removing the locking from cpuidle_get_cpu_driver(), which restores the original behavior of that routine. [rjw: Changelog.] Reported-and-tested-by: Russ Anderson <rja@sgi.com> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle / coupled: fix ready counter decrementSivaram Nair2013-01-031-1/+1
| | | | | | | | | | | | The ready_waiting_counts atomic variable is compared against the wrong online cpu count. The latter is computed incorrectly using logical-OR instead of bit-OR. This patch fixes that. Signed-off-by: Sivaram Nair <sivaramn@nvidia.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Colin Cross <ccross@android.com> Cc: <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: Fix finding state with min power_usageSivaram Nair2013-01-032-2/+2
| Since cpuidle_state.power_usage is a signed value, use INT_MAX (instead of -1) to initialize the local copies, so that functions that try to find the cpuidle state with minimum power usage work correctly even if they only use non-negative values. Signed-off-by: Sivaram Nair <sivaramn@nvidia.com> Reviewed-by: Rik van Riel <riel@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
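An illustrative fragment (not the exact governor code) of why the seed value matters: a running minimum initialized to -1 can never be beaten by a non-negative power value, whereas INT_MAX behaves as intended:

	int i, min_power = INT_MAX;	/* previously -1, which no state could beat */
	int deepest = -1;

	for (i = 1; i < drv->state_count; i++) {
		struct cpuidle_state *s = &drv->states[i];

		if (s->power_usage < min_power) {
			min_power = s->power_usage;
			deepest = i;
		}
	}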
* Merge tag 'soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-socLinus Torvalds2012-12-123-0/+173
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Pull ARM SoC updates from Olof Johansson: "This contains the bulk of new SoC development for this merge window. Two new platforms have been added, the sunxi platforms (Allwinner A1x SoCs) by Maxime Ripard, and a generic Broadcom platform for a new series of ARMv7 platforms from them, where the hope is that we can keep the platform code generic enough to have them all share one mach directory. The new Broadcom platform is contributed by Christian Daudt. Highbank has grown support for Calxeda's next generation of hardware, ECX-2000. clps711x has seen a lot of cleanup from Alexander Shiyan, and he's also taken on maintainership of the platform. Beyond this there has been a bunch of work from a number of people on converting more platforms to IRQ domains, pinctrl conversion, cleanup and general feature enablement across most of the active platforms." Fix up trivial conflicts as per Olof. * tag 'soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (174 commits) mfd: vexpress-sysreg: Remove LEDs code irqchip: irq-sunxi: Add terminating entry for sunxi_irq_dt_ids clocksource: sunxi_timer: Add terminating entry for sunxi_timer_dt_ids irq: versatile: delete dangling variable ARM: sunxi: add missing include for mdelay() ARM: EXYNOS: Avoid early use of of_machine_is_compatible() ARM: dts: add node for PL330 MDMA1 controller for exynos4 ARM: EXYNOS: Add support for secondary CPU bring-up on Exynos4412 ARM: EXYNOS: add UART3 to DEBUG_LL ports ARM: S3C24XX: Add clkdev entry for camif-upll clock ARM: SAMSUNG: Add s3c24xx/s3c64xx CAMIF GPIO setup helpers ARM: sunxi: Add missing sun4i.dtsi file pinctrl: samsung: Do not initialise statics to 0 ARM i.MX6: remove gate_mask from pllv3 ARM i.MX6: Fix ethernet PLL clocks ARM i.MX6: rename PLLs according to datasheet ARM i.MX6: Add pwm support ARM i.MX51: Add pwm support ARM i.MX53: Add pwm support ARM: mx5: Replace clk_register_clkdev with clock DT lookup ...
| * cpuidle: add Calxeda SOC idle supportRob Herring2012-11-073-0/+173
| | | | | | | | | | | | | | | | | | | | Add support for core powergating on Calxeda platforms. Initially, this supports ECX-1000 (highbank), but support will be added for ECX-2000 later. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Len Brown <len.brown@intel.com> Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
* | cpuidle: Measure idle state durations with monotonic clockJulius Werner2012-11-271-2/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Many cpuidle drivers measure their time spent in an idle state by reading the wallclock time before and after idling and calculating the difference. This leads to erroneous results when the wallclock time gets updated by another processor in the meantime, adding that clock adjustment to the idle state's time counter. If the clock adjustment was negative, the result is even worse due to an erroneous cast from int to unsigned long long of the last_residency variable. The negative 32 bit integer will zero-extend and result in a forward time jump of roughly four billion milliseconds or 1.3 hours on the idle state residency counter. This patch changes all affected cpuidle drivers to either use the monotonic clock for their measurements or make use of the generic time measurement wrapper in cpuidle.c, which was already working correctly. Some superfluous CLIs/STIs in the ACPI code are removed (interrupts should always already be disabled before entering the idle function, and not get reenabled until the generic wrapper has performed its second measurement). It also removes the erroneous cast, making sure that negative residency values are applied correctly even though they should not appear anymore. Signed-off-by: Julius Werner <jwerner@chromium.org> Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com> Tested-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Len Brown <len.brown@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
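The safe pattern, sketched roughly after what the generic wrapper in cpuidle.c does (variable names are illustrative): measure with the monotonic clock so wallclock adjustments cannot leak into the residency, and keep the difference in a signed 64-bit value before truncating:

	ktime_t time_start, time_end;
	s64 diff;

	time_start = ktime_get();	/* monotonic, unaffected by settimeofday() */
	/* ... enter the idle state ... */
	time_end = ktime_get();

	diff = ktime_to_us(ktime_sub(time_end, time_start));
	if (diff > INT_MAX)
		diff = INT_MAX;
	dev->last_residency = (int)diff;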
* | cpuidle: fix a suspicious RCU usage in menu governorLi Zhong2012-11-231-4/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | I saw this suspicious RCU usage on the next tree of 11/15 [ 67.123404] =============================== [ 67.123413] [ INFO: suspicious RCU usage. ] [ 67.123423] 3.7.0-rc5-next-20121115-dirty #1 Not tainted [ 67.123434] ------------------------------- [ 67.123444] include/trace/events/timer.h:186 suspicious rcu_dereference_check() usage! [ 67.123458] [ 67.123458] other info that might help us debug this: [ 67.123458] [ 67.123474] [ 67.123474] RCU used illegally from idle CPU! [ 67.123474] rcu_scheduler_active = 1, debug_locks = 0 [ 67.123493] RCU used illegally from extended quiescent state! [ 67.123507] 1 lock held by swapper/1/0: [ 67.123516] #0: (&cpu_base->lock){-.-...}, at: [<c0000000000979b0>] .__hrtimer_start_range_ns+0x28c/0x524 [ 67.123555] [ 67.123555] stack backtrace: [ 67.123566] Call Trace: [ 67.123576] [c0000001e2ccb920] [c00000000001275c] .show_stack+0x78/0x184 (unreliable) [ 67.123599] [c0000001e2ccb9d0] [c0000000000c15a0] .lockdep_rcu_suspicious+0x120/0x148 [ 67.123619] [c0000001e2ccba70] [c00000000009601c] .enqueue_hrtimer+0x1c0/0x1c8 [ 67.123639] [c0000001e2ccbb00] [c000000000097aa0] .__hrtimer_start_range_ns+0x37c/0x524 [ 67.123660] [c0000001e2ccbc20] [c0000000005c9698] .menu_select+0x508/0x5bc [ 67.123678] [c0000001e2ccbd20] [c0000000005c740c] .cpuidle_idle_call+0xa8/0x6e4 [ 67.123699] [c0000001e2ccbdd0] [c0000000000459a0] .pSeries_idle+0x10/0x34 [ 67.123717] [c0000001e2ccbe40] [c000000000014dc8] .cpu_idle+0x130/0x280 [ 67.123738] [c0000001e2ccbee0] [c0000000006ffa8c] .start_secondary+0x378/0x384 [ 67.123758] [c0000001e2ccbf90] [c00000000000936c] .start_secondary_prolog+0x10/0x14 hrtimer_start was added in 198fd638 and ae515197. The patch below tries to use RCU_NONIDLE around it to avoid the above report. Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Rik van Riel <riel@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
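The shape of the fix, as a hedged sketch (the timer and expiry names stand in for the menu governor's repeat-mode hrtimer): wrapping the call in RCU_NONIDLE() makes RCU momentarily leave the extended quiescent state, so the tracepoint inside the hrtimer code may legally use RCU:

	/* hypothetical per-cpu timer mirroring the menu governor's */
	RCU_NONIDLE(hrtimer_start(&menu_hrtimer, kt_expiry,
				  HRTIMER_MODE_REL_PINNED));

	/* and symmetrically on the cancel path */
	RCU_NONIDLE(hrtimer_cancel(&menu_hrtimer));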
* | cpuidle: support multiple driversDaniel Lezcano2012-11-155-38/+351
| | With the new tegra3 and big.LITTLE [1] architectures, several cpus with different characteristics (latencies and states) can co-exist on the same system. The cpuidle framework has the limitation of handling only identical cpus. This patch removes that limitation by introducing multiple driver support for cpuidle. This option is configurable at compile time and should be enabled for the architectures mentioned above, so there is no impact on the other platforms if the option is disabled. The option defaults to 'n'. Note that multiple driver support is also compatible with the existing drivers: even if just one driver is needed, all the cpus will be tied to this driver, using an extra small chunk of per-processor memory. The multiple driver support uses a per-cpu driver pointer instead of a global variable, and the accessors to this variable are called from a cpu context. In order to keep compatibility with the existing drivers, the functions 'cpuidle_register_driver' and 'cpuidle_unregister_driver' will register the specified driver for all the cpus. The semantics of the output of /sys/devices/system/cpu/cpuidle/current_driver remain the same, except that the driver name will be related to the current cpu. The /sys/devices/system/cpu/cpu[0-9]/cpuidle/driver/name files are added, allowing the per-cpu driver name to be read. [1] http://lwn.net/Articles/481055/ Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
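Conceptually, and as a simplified sketch rather than the exact framework code, the indirection added here looks like the following; the legacy entry points keep their behavior by pointing every cpu at the same driver:

	static DEFINE_PER_CPU(struct cpuidle_driver *, cpuidle_drivers);

	static struct cpuidle_driver *__cpuidle_get_cpu_driver(int cpu)
	{
		return per_cpu(cpuidle_drivers, cpu);
	}

	static void __cpuidle_set_driver_for_all(struct cpuidle_driver *drv)
	{
		int cpu;

		/* cpuidle_register_driver() still ties every cpu to one driver */
		for_each_possible_cpu(cpu)
			per_cpu(cpuidle_drivers, cpu) = drv;
	}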
* | cpuidle: prepare the cpuidle core to handle multiple driversDaniel Lezcano2012-11-151-18/+42
| | | | | | | | | | | | | | | | | | | | | | | | | | | | This patch is a preparation for the multiple cpuidle drivers support. As the next patch will introduce the multiple drivers with the Kconfig option and we want to keep the code clean and understandable, this patch defines a set of functions for encapsulating some common parts and splits what should be done under a lock from the rest. [rjw: Modified the subject and changelog slightly.] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: move driver checking within the lock sectionDaniel Lezcano2012-11-151-9/+1
| | The code is racy: the check against cpuidle_curr_driver should be done under the lock. I did not find a path in the different drivers where that could actually happen, because the arch-specific drivers are written in such a way that it is not possible to register a driver while another one is being unregistered, except maybe in the very improbable case where "intel_idle" and "processor_idle" are competing: one could be unregistering a driver while the other one is registering. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: move driver's refcount to cpuidleDaniel Lezcano2012-11-151-5/+8
| | We want to support different cpuidle drivers co-existing. In that case we should move the refcount into the cpuidle_driver structure so that several drivers can be handled at a time. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: fixup device.h header in cpuidle.hDaniel Lezcano2012-11-152-2/+4
| | The "struct device" is only used in sysfs.c. The other .c files including the private header "cpuidle.h" do not need to pull in the entire header tree from there, as they don't manipulate "struct device". This patch fixes that by moving the header inclusion to sysfs.c and adding a forward declaration for struct device. The number of lines generated by the preprocessor: without this patch: 17269 loc; with this patch: 16446 loc. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
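The technique in miniature (all names below are hypothetical, chosen only to show the idea): the private header gets a forward declaration, and only the one file that actually dereferences struct device includes the full definition:

	/* private header: a forward declaration is enough when only
	 * pointers to struct device cross the interface */
	struct device;

	int foo_add_sysfs(struct device *dev);		/* hypothetical prototype */

	/* sysfs.c: the only compilation unit that needs the full type */
	#include <linux/device.h>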
* | cpuidle / sysfs: move structure declaration into the sysfs.c fileDaniel Lezcano2012-11-151-0/+7
| | | | | | | | | | | | | | | | | | | | The structure cpuidle_state_kobj is not used anywhere except in the sysfs.c file. The definition of this structure is not needed in the cpuidle header file. This patch moves it to the sysfs.c file in order to encapsulate the code a bit more. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: Get typical recent sleep intervalYouquan Song2012-11-151-23/+46
| | The function detect_repeating_patterns was not very useful for workloads with alternating long and short pauses, for example virtual machines handling network requests for each other (say a web and a database server). Instead, try to find a recent sleep interval that is somewhere between the median and the mode sleep time, by discarding outliers on the high side and recalculating the average and standard deviation until that is no longer required. This should do something sane with a sleep interval series like: 200 180 210 10000 30 1000 170 200 The current code would simply discard such a series, while the new code will guess a typical sleep interval just shy of 200. The original patch came from Rik van Riel <riel@redhat.com>. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: Set residency to 0 if target Cstate not enterYouquan Song2012-11-151-0/+4
| | When the cpuidle governor chooses a C-state for an idle CPU but then notices that a task is waiting to be executed, the idle CPU will not really enter the target C-state and will go run the task instead. In this situation the residency of the previously entered target C-state would be reused, which is not reasonable. Fix this by setting the target C-state residency to 0 in that case. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: Quickly notice prediction failure in general caseYouquan Song2012-11-151-1/+33
| | Predicting the future is difficult, and when the cpuidle governor's prediction fails it may choose a shallower C-state than it should. Quickly noticing and finding that failure is important for power saving. This patch extends the approach to the general case in which the prediction logic computes a small predicted residency and therefore chooses a shallow C-state even though the expected residency is large. Once the prediction fails, the CPU would keep staying in the shallow C-state for a long time, even though it had the chance to enter a deep C-state. So when the expected residency is long enough but the governor chooses a shallow C-state, a timer is added in order to detect the prediction failure. If the CPU is woken up before the added timer fires, the timer is cancelled. If the timer fires, the menu governor quickly notices the prediction failure and re-evaluates the possibility of deeper C-states. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: Quickly notice prediction failure for repeat modeYouquan Song2012-11-151-5/+70
| | Predicting the future is difficult, and when the cpuidle governor's prediction fails it may choose a shallower C-state than it should. Quickly noticing and finding that failure is important for power saving. The cpuidle menu governor has a method to predict a repeating pattern: if the last 8 C-state residencies are continuous and the same or very close, it predicts that the next residency will keep the same value. There is a real case with the turbostat utility (tools/power/x86/turbostat) at kernel 3.3 or earlier: turbostat reads 10 registers one by one on Sandybridge, so it generates 10 IPIs to wake up idle CPUs. The cpuidle menu governor therefore predicts repeat mode and expects another IPI to wake the idle CPU soon, so it keeps the idle CPU in the C1 state even though the CPU is totally idle. However, in turbostat, the next batch of 10 register reads only follows after a 5 second sleep by default, so the idle CPU stays in C1 for a long time even though it is idle, until a break event occurs. On an idle Sandybridge system, running "./turbostat -v" shows the deep C-state dangling between 70% ~ 99%. After patching the kernel, the deep C-state stays at >99.98%. In this patch, a timer is added when the menu governor detects repeat mode and chooses a shallow C-state. The timer is set to a timeout value greater than the predicted time, and we conclude that the repeat-mode prediction failed if the timer is triggered. When repeat mode happens as expected, the timer is not triggered: the CPU wakes up from the C-state and cancels the timer itself. When repeat mode does not happen, the timer fires and the menu governor quickly notices that the repeat-mode prediction failed and then re-evaluates the possibility of deeper C-states.

Below is another case which clearly shows the benefit of the patch:

	#include <stdlib.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <signal.h>
	#include <sys/time.h>
	#include <time.h>
	#include <pthread.h>

	volatile int * shutdown;
	volatile long * count;
	int delay = 20;
	int loop = 8;

	void usage(void)
	{
		fprintf(stderr,
			"Usage: idle_predict [options]\n"
			" --help -h Print this help\n"
			" --thread -n Thread number\n"
			" --loop -l Loop times in shallow Cstate\n"
			" --delay -t Sleep time (uS)in shallow Cstate\n");
	}

	void *simple_loop()
	{
		int idle_num = 1;

		while (!(*shutdown)) {
			*count = *count + 1;
			if (idle_num % loop)
				usleep(delay);
			else {
				/* sleep 1 second */
				usleep(1000000);
				idle_num = 0;
			}
			idle_num++;
		}
	}

	static void sighand(int sig)
	{
		*shutdown = 1;
	}

	int main(int argc, char *argv[])
	{
		sigset_t sigset;
		int signum = SIGALRM;
		int i, c, er = 0, thread_num = 8;
		pthread_t pt[1024];
		static char optstr[] = "n:l:t:h:";

		while ((c = getopt(argc, argv, optstr)) != EOF)
			switch (c) {
			case 'n':
				thread_num = atoi(optarg);
				break;
			case 'l':
				loop = atoi(optarg);
				break;
			case 't':
				delay = atoi(optarg);
				break;
			case 'h':
			default:
				usage();
				exit(1);
			}

		printf("thread=%d,loop=%d,delay=%d\n",thread_num,loop,delay);
		count = malloc(sizeof(long));
		shutdown = malloc(sizeof(int));
		*count = 0;
		*shutdown = 0;

		sigemptyset(&sigset);
		sigaddset(&sigset, signum);
		sigprocmask(SIG_BLOCK, &sigset, NULL);
		signal(SIGINT, sighand);
		signal(SIGTERM, sighand);

		for (i = 0; i < thread_num; i++)
			pthread_create(&pt[i], NULL, simple_loop, NULL);
		for (i = 0; i < thread_num; i++)
			pthread_join(pt[i], NULL);
		exit(0);
	}

Get powertop V2 from git://github.com/fenrus75/powertop and build it. After building the above test application, run it. The test platform can be Intel Sandybridge or another recent platform.

	#./idle_predict -l 10 &
	#./powertop

We will find that the deep C-state dangles between 40%~100% and that much time is spent in the C1 state. This is because the menu governor wrongly predicts that repeat mode is being kept, so it chooses the shallow C1 state even though it has the chance to sleep 1 second in a deep C-state. With the patched kernel, the deep C-state stays at >99.6%. Signed-off-by: Rik van Riel <riel@redhat.com> Signed-off-by: Youquan Song <youquan.song@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle / sysfs: move kobj initialization in the sysfs fileDaniel Lezcano2012-11-152-6/+5
| | Move the kobj initialization and completion into sysfs.c and encapsulate the code more. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle / sysfs: change function parameterDaniel Lezcano2012-11-153-16/+8
|/ The function needs the cpuidle_device, which is initially passed to the caller. The current code gets the struct device from the struct cpuidle_device and passes it to the cpuidle_add_sysfs function, which then calls per_cpu(cpuidle_devices, cpu) to get the cpuidle_device back. This patch passes the cpuidle_device instead and simplifies the code. Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* ACPI idle, CPU hotplug: Fix NULL pointer dereference during hotplugSrivatsa S. Bhat2012-10-081-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | On a KVM guest, when a CPU is taken offline and brought back online, we hit the following NULL pointer dereference: [ 45.400843] Unregister pv shared memory for cpu 1 [ 45.412331] smpboot: CPU 1 is now offline [ 45.529894] SMP alternatives: lockdep: fixing up alternatives [ 45.533472] smpboot: Booting Node 0 Processor 1 APIC 0x1 [ 45.411526] kvm-clock: cpu 1, msr 0:7d14601, secondary cpu clock [ 45.571370] KVM setup async PF for cpu 1 [ 45.572331] kvm-stealtime: cpu 1, msr 7d0e040 [ 45.575031] BUG: unable to handle kernel NULL pointer dereference at (null) [ 45.576017] IP: [<ffffffff81519f98>] cpuidle_disable_device+0x18/0x80 [ 45.576017] PGD 5dfb067 PUD 5da8067 PMD 0 [ 45.576017] Oops: 0000 [#1] SMP [ 45.576017] Modules linked in: [ 45.576017] CPU 0 [ 45.576017] Pid: 607, comm: stress_cpu_hotp Not tainted 3.6.0-padata-tp-debug #3 Bochs Bochs [ 45.576017] RIP: 0010:[<ffffffff81519f98>] [<ffffffff81519f98>] cpuidle_disable_device+0x18/0x80 [ 45.576017] RSP: 0018:ffff880005d93ce8 EFLAGS: 00010286 [ 45.576017] RAX: ffff880005d93fd8 RBX: 0000000000000000 RCX: 0000000000000006 [ 45.576017] RDX: 0000000000000006 RSI: 2222222222222222 RDI: 0000000000000000 [ 45.576017] RBP: ffff880005d93cf8 R08: 2222222222222222 R09: 2222222222222222 [ 45.576017] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000 [ 45.576017] R13: 0000000000000000 R14: ffffffff81c8cca0 R15: 0000000000000001 [ 45.576017] FS: 00007f91936ae700(0000) GS:ffff880007c00000(0000) knlGS:0000000000000000 [ 45.576017] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b [ 45.576017] CR2: 0000000000000000 CR3: 0000000005db3000 CR4: 00000000000006f0 [ 45.576017] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 [ 45.576017] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 [ 45.576017] Process stress_cpu_hotp (pid: 607, threadinfo ffff880005d92000, task ffff8800066bbf40) [ 45.576017] Stack: [ 45.576017] ffff880007a96400 0000000000000000 ffff880005d93d28 ffffffff813ac689 [ 45.576017] ffff880007a96400 ffff880007a96400 0000000000000002 ffffffff81cd8d01 [ 45.576017] ffff880005d93d58 ffffffff813aa498 0000000000000001 00000000ffffffdd [ 45.576017] Call Trace: [ 45.576017] [<ffffffff813ac689>] acpi_processor_hotplug+0x55/0x97 [ 45.576017] [<ffffffff813aa498>] acpi_cpu_soft_notify+0x93/0xce [ 45.576017] [<ffffffff816ae47d>] notifier_call_chain+0x5d/0x110 [ 45.576017] [<ffffffff8109730e>] __raw_notifier_call_chain+0xe/0x10 [ 45.576017] [<ffffffff81069050>] __cpu_notify+0x20/0x40 [ 45.576017] [<ffffffff81069085>] cpu_notify+0x15/0x20 [ 45.576017] [<ffffffff816978f1>] _cpu_up+0xee/0x137 [ 45.576017] [<ffffffff81697983>] cpu_up+0x49/0x59 [ 45.576017] [<ffffffff8168758d>] store_online+0x9d/0xe0 [ 45.576017] [<ffffffff8140a9f8>] dev_attr_store+0x18/0x30 [ 45.576017] [<ffffffff812322c0>] sysfs_write_file+0xe0/0x150 [ 45.576017] [<ffffffff811b389c>] vfs_write+0xac/0x180 [ 45.576017] [<ffffffff811b3be2>] sys_write+0x52/0xa0 [ 45.576017] [<ffffffff816b31e9>] system_call_fastpath+0x16/0x1b [ 45.576017] Code: 48 c7 c7 40 e5 ca 81 e8 07 d0 18 00 5d c3 0f 1f 44 00 00 0f 1f 44 00 00 55 48 89 e5 48 83 ec 10 48 89 5d f0 4c 89 65 f8 48 89 fb <f6> 07 02 75 13 48 8b 5d f0 4c 8b 65 f8 c9 c3 66 0f 1f 84 00 00 [ 45.576017] RIP [<ffffffff81519f98>] cpuidle_disable_device+0x18/0x80 [ 45.576017] RSP <ffff880005d93ce8> [ 45.576017] CR2: 0000000000000000 [ 45.656079] ---[ end trace 
433d6c9ac0b02cef ]--- Analysis: Commit 3d339dc (cpuidle / ACPI : move cpuidle_device field out of the acpi_processor_power structure()) made the allocation of the dev structure (struct cpuidle) of a CPU dynamic, whereas previously it was statically allocated. And this dynamic allocation occurs in acpi_processor_power_init() if pr->flags.power evaluates to non-zero. On KVM guests, pr->flags.power evaluates to zero, hence dev is never allocated. This causes the NULL pointer (dev) dereference in cpuidle_disable_device() during a subsequent CPU online operation. Fix this by ensuring that dev is non-NULL before dereferencing. Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Len Brown <len.brown@intel.com>
* cpuidle: rename function name "__cpuidle_register_driver", v2Daniel Lezcano2012-09-221-6/+9
| The name of the function __cpuidle_register_driver is confusing because, given the kernel coding style, it suggests that it registers the driver without taking a lock. Actually, it just fills the power fields of the different states with a decreasing value if the power has not been specified. Clarify the purpose of the function by changing its name and moving the condition out of it. This patch fixes nothing and does not change the behavior of the function; it is just for the sake of clarity. IMHO, reading in the code:

	+	if (!drv->power_specified)
	+		set_power_states(drv);

is much more explicit than:

	-	__cpuidle_register_driver(drv);

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* cpuidle: remove some empty linesDaniel Lezcano2012-09-191-3/+0
| | | | | | | | | | This mindless patch is just about removing some trailing carriage returns. [rjw: Changed the subject.] Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* PM / cpuidle: Make ladder governor use the "disabled" state flagRafael J. Wysocki2012-09-041-1/+3
| | | | | | | | | For the mechanism introduced by commit cbc9ef0 (PM / Domains: Add preliminary support for cpuidle, v2) to work with the ladder governor, that governor should respect the "disabled" state flag added by that commit. Change the ladder governor accordingly. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* Honor state disabling in the cpuidle ladder governorCarsten Emde2012-09-041-1/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | There are two cpuidle governors ladder and menu. While the ladder governor is always available, if CONFIG_CPU_IDLE is selected, the menu governor additionally requires CONFIG_NO_HZ. A particular C state can be disabled by writing to the sysfs file /sys/devices/system/cpu/cpuN/cpuidle/stateN/disable, but this mechanism is only implemented in the menu governor. Thus, in a system where CONFIG_NO_HZ is not selected, the ladder governor becomes default and always will walk through all sleep states - irrespective of whether the C state was disabled via sysfs or not. The only way to select a specific C state was to write the related latency to /dev/cpu_dma_latency and keep the file open as long as this setting was required - not very practical and not suitable for setting a single core in an SMP system. With this patch, the ladder governor only will promote to the next C state, if it has not been disabled, and it will demote, if the current C state was disabled. Note that the patch does not make the setting of the sysfs variable "disable" coherent, i.e. if one is disabling a light state, then all deeper states are disabled as well, but the "disable" variable does not reflect it. Likewise, if one enables a deep state but a lighter state still is disabled, then this has no effect. A related section has been added to the documentation. Signed-off-by: Carsten Emde <C.Emde@osadl.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* cpuidle: Prevent null pointer dereference in cpuidle_coupled_cpu_notifyJon Medhurst (Tixy)2012-08-171-1/+1
| | | | | | | | | | | | | When a kernel is built to support multiple hardware types it's possible that CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is set but the hardware the kernel is run on doesn't support cpuidle and therefore doesn't load a driver for it. In this case, when the system is shut down, cpuidle_coupled_cpu_notify() gets called with cpuidle_devices set to NULL. There are quite possibly other circumstances where this situation can also occur and we should check for it. Signed-off-by: Jon Medhurst <tixy@linaro.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
* cpuidle: coupled: fix sleeping while atomic in cpu notifierColin Cross2012-08-171-0/+12
| The cpu hotplug notifier gets called in both atomic and non-atomic contexts, so it is not always safe to lock a mutex. Filter out all events except the six necessary ones, which are all sleepable, before taking the mutex. Signed-off-by: Colin Cross <ccross@android.com> Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
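The shape of such a filter, as a hedged sketch (the event set and the mutex name follow the description above but are not copied from the driver; the pre-4.10 hotplug notifier API is assumed):

	static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
					      unsigned long action, void *hcpu)
	{
		switch (action & ~CPU_TASKS_FROZEN) {
		case CPU_UP_PREPARE:
		case CPU_DOWN_PREPARE:
		case CPU_ONLINE:
		case CPU_DEAD:
		case CPU_UP_CANCELED:
		case CPU_DOWN_FAILED:
			break;			/* sleepable contexts: continue below */
		default:
			return NOTIFY_OK;	/* possibly atomic: don't touch the mutex */
		}

		mutex_lock(&cpuidle_lock);
		/* ... pause/resume coupled idle around the hotplug operation ... */
		mutex_unlock(&cpuidle_lock);

		return NOTIFY_OK;
	}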
* Merge branch 'release' of ↵Linus Torvalds2012-07-265-28/+808
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux Pull ACPI & power management update from Len Brown: "Re-write of the turbostat tool. lower overhead was necessary for measuring very large system when they are very idle. IVB support in intel_idle It's what I run on my IVB, others should be able to also:-) ACPICA core update We have found some bugs due to divergence between Linux and the upstream ACPICA base. Most of these patches are to reduce that divergence to reduce the risk of future bugs. Some cpuidle updates, mostly for non-Intel More will be coming, as they depend on this part. Some thermal management changes needed by non-ACPI systems. Some _OST (OS Status Indication) updates for hot ACPI hot-plug." * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux: (51 commits) Thermal: Documentation update Thermal: Add Hysteresis attributes Thermal: Make Thermal trip points writeable ACPI/AC: prevent OOPS on some boxes due to missing check power_supply_register() return value check tools/power: turbostat: fix large c1% issue tools/power: turbostat v2 - re-write for efficiency ACPICA: Update to version 20120711 ACPICA: AcpiSrc: Fix some translation issues for Linux conversion ACPICA: Update header files copyrights to 2012 ACPICA: Add new ACPI table load/unload external interfaces ACPICA: Split file: tbxface.c -> tbxfload.c ACPICA: Add PCC address space to space ID decode function ACPICA: Fix some comment fields ACPICA: Table manager: deploy new firmware error/warning interfaces ACPICA: Add new interfaces for BIOS(firmware) errors and warnings ACPICA: Split exception code utilities to a new file, utexcep.c ACPI: acpi_pad: tune round_robin_time ACPICA: Update to version 20120620 ACPICA: Add support for implicit notify on multiple devices ACPICA: Update comments; no functional change ...
| *---. Merge branches 'acpi_pad', 'acpica', 'apei-bugzilla-43282', 'battery', ↵Len Brown2012-07-265-28/+808
| |\ \ \ | | | | | | | | | | | | | | | 'cpuidle-coupled', 'cpuidle-tweaks', 'intel_idle-ivb', 'ost', 'red-hat-bz-772730', 'thermal', 'thermal-spear' and 'turbostat-v2' into release
| | | | * cpuidle: add checks to avoid NULL pointer dereferenceSrivatsa S. Bhat2012-06-011-2/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The existing check for dev == NULL in __cpuidle_register_device() is rendered useless because dev is dereferenced before the check itself. Moreover, correctly speaking, it is the job of the callers of this function, i.e., cpuidle_register_device() & cpuidle_enable_device() (which also happen to be exported functions) to ensure that __cpuidle_register_device() is called with a non-NULL dev. So add the necessary dev == NULL checks in the two callers and remove the (useless) check from __cpuidle_register_device(). Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Len Brown <len.brown@intel.com>
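In outline (simplified to just the added guards; the exact return value is assumed here), the two exported callers now validate their argument before using it:

	int cpuidle_enable_device(struct cpuidle_device *dev)
	{
		if (!dev)
			return -EINVAL;
		/* ... */
	}

	int cpuidle_register_device(struct cpuidle_device *dev)
	{
		if (!dev)
			return -EINVAL;
		/* ... */
	}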
| | | | * cpuidle: remove unused hrtimer_peek_ahead_timers() callSergey Senozhatsky2012-06-011-9/+0
| | | |/ | | |/| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 9a6558371bcd01c2973b7638181db4ccc34eab4f Author: Arjan van de Ven <arjan@linux.intel.com> Date: Sun Nov 9 12:45:10 2008 -0800 regression: disable timer peek-ahead for 2.6.28 It's showing up as regressions; disabling it very likely just papers over an underlying issue, but time is running out for 2.6.28, lets get back to this for 2.6.29 Many years has passed since 2008, so it seems ok to remove whole `#if 0' block. Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Kevin Hilman <khilman@ti.com> Cc: Trinabh Gupta <g.trinabh@gmail.com> Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Cc: Arjan van de Ven <arjan@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Len Brown <len.brown@intel.com>
| | | * cpuidle: coupled: add parallel barrier functionColin Cross2012-06-021-0/+37
| | | | Adds cpuidle_coupled_parallel_barrier, which can be used by coupled cpuidle state enter functions to handle resynchronization after determining if any cpu needs to abort. The normal use case will be:

	static bool abort_flag;
	static atomic_t abort_barrier;

	int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
	{
		if (arch_turn_off_irq_controller()) {
			/* returns an error if an irq is pending and would
			   be lost if idle continued and turned off power */
			abort_flag = true;
		}

		cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

		if (abort_flag) {
			/* One of the cpus didn't turn off its irq controller */
			arch_turn_on_irq_controller();
			return -EINTR;
		}

		/* continue with idle */
		...
	}

This will cause all cpus to abort idle together if one of them needs to abort. Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Tested-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Colin Cross <ccross@android.com> Signed-off-by: Len Brown <len.brown@intel.com>
| | | * cpuidle: add support for states that affect multiple cpusColin Cross2012-06-025-1/+726
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the cpus cannot be independently powered down, either due to sequencing restrictions (on Tegra 2, cpu 0 must be the last to power down), or due to HW bugs (on OMAP4460, a cpu powering up will corrupt the gic state unless the other cpu runs a work around). Each cpu has a power state that it can enter without coordinating with the other cpu (usually Wait For Interrupt, or WFI), and one or more "coupled" power states that affect blocks shared between the cpus (L2 cache, interrupt controller, and sometimes the whole SoC). Entering a coupled power state must be tightly controlled on both cpus. The easiest solution to implementing coupled cpu power states is to hotplug all but one cpu whenever possible, usually using a cpufreq governor that looks at cpu load to determine when to enable the secondary cpus. This causes problems, as hotplug is an expensive operation, so the number of hotplug transitions must be minimized, leading to very slow response to loads, often on the order of seconds. This file implements an alternative solution, where each cpu will wait in the WFI state until all cpus are ready to enter a coupled state, at which point the coupled state function will be called on all cpus at approximately the same time. Once all cpus are ready to enter idle, they are woken by an smp cross call. At this point, there is a chance that one of the cpus will find work to do, and choose not to enter idle. A final pass is needed to guarantee that all cpus will call the power state enter function at the same time. During this pass, each cpu will increment the ready counter, and continue once the ready counter matches the number of online coupled cpus. If any cpu exits idle, the other cpus will decrement their counter and retry. To use coupled cpuidle states, a cpuidle driver must: Set struct cpuidle_device.coupled_cpus to the mask of all coupled cpus, usually the same as cpu_possible_mask if all cpus are part of the same cluster. The coupled_cpus mask must be set in the struct cpuidle_device for each cpu. Set struct cpuidle_device.safe_state to a state that is not a coupled state. This is usually WFI. Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each state that affects multiple cpus. Provide a struct cpuidle_state.enter function for each state that affects multiple cpus. This function is guaranteed to be called on all cpus at approximately the same time. The driver should ensure that the cpus all abort together if any cpu tries to abort once the function is called. update1: cpuidle: coupled: fix count of online cpus online_count was never incremented on boot, and was also counting cpus that were not part of the coupled set. 
Fix both issues by introducing a new function that counts online coupled cpus, and call it from register as well as the hotplug notifier. update2: cpuidle: coupled: fix decrementing ready count cpuidle_coupled_set_not_ready sometimes refuses to decrement the ready count in order to prevent a race condition. This makes it unsuitable for use when finished with idle. Add a new function cpuidle_coupled_set_done that decrements both the ready count and waiting count, and call it after idle is complete. Cc: Amit Kucheria <amit.kucheria@linaro.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Trinabh Gupta <g.trinabh@gmail.com> Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Tested-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Colin Cross <ccross@android.com> Acked-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Len Brown <len.brown@intel.com>
| | | * cpuidle: fix error handling in __cpuidle_register_deviceColin Cross2012-06-021-4/+9
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Fix the error handling in __cpuidle_register_device to include the missing list_del. Move it to a label, which will simplify the error handling when coupled states are added. Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Tested-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Colin Cross <ccross@android.com> Reviewed-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Len Brown <len.brown@intel.com>
| | | * cpuidle: refactor out cpuidle_enter_stateColin Cross2012-06-022-13/+31
| | |/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Split the code to enter a state and update the stats into a helper function, cpuidle_enter_state, and export it. This function will be called by the coupled state code to handle entering the safe state and the final coupled state. Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Reviewed-by: Kevin Hilman <khilman@ti.com> Tested-by: Kevin Hilman <khilman@ti.com> Signed-off-by: Colin Cross <ccross@android.com> Reviewed-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Len Brown <len.brown@intel.com>
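The resulting call shape, sketched from the caller's point of view (simplified; real call sites wrap this in error handling):

	int next_state, entered_state;

	/* the governor picks an index; the helper enters the state and
	 * updates the time/usage statistics for it */
	next_state = cpuidle_curr_governor->select(drv, dev);
	entered_state = cpuidle_enter_state(dev, drv, next_state);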
* | | Merge branch 'pm-domains'Rafael J. Wysocki2012-07-192-1/+3
|\ \ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | * pm-domains: PM / Domains: Fix build warning for CONFIG_PM_RUNTIME unset PM / Domains: Replace plain integer with NULL pointer in domain.c file PM / Domains: Add missing static storage class specifier in domain.c file PM / Domains: Allow device callbacks to be added at any time PM / Domains: Add device domain data reference counter PM / Domains: Add preliminary support for cpuidle, v2 PM / Domains: Do not stop devices after restoring their states PM / Domains: Use subsystem runtime suspend/resume callbacks by default