| Commit message | Author | Age | Files | Lines |

Merge branch 'core-iommu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'core-iommu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
iommu/core: Fix build with INTR_REMAP=y && CONFIG_DMAR=n
iommu/amd: Don't use MSI address range for DMA addresses
iommu/amd: Move missing parts to drivers/iommu
iommu: Move iommu Kconfig entries to submenu
x86/ia64: intel-iommu: move to drivers/iommu/
x86: amd_iommu: move to drivers/iommu/
msm: iommu: move to drivers/iommu/
drivers: iommu: move to a dedicated folder
x86/amd-iommu: Store device alias as dev_data pointer
x86/amd-iommu: Search for existing dev_data before allocating a new one
x86/amd-iommu: Allow dev_data->alias to be NULL
x86/amd-iommu: Use only dev_data in low-level domain attach/detach functions
x86/amd-iommu: Use only dev_data for dte and iotlb flushing routines
x86/amd-iommu: Store ATS state in dev_data
x86/amd-iommu: Store devid in dev_data
x86/amd-iommu: Introduce global dev_data_list
x86/amd-iommu: Remove redundant device_flush_dte() calls
iommu-api: Add missing header file
Fix up trivial conflicts (independent additions close to each other) in
drivers/Makefile and include/linux/pci.h

Merge of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu into core/iommu

iommu/core: Fix build with INTR_REMAP=y && CONFIG_DMAR=n
IOMMU_API is not selected when no DMA remapping driver is
selected, but the whole drivers/iommu/ directory is only
built with IOMMU_API=y. This patch fixes it by building the
directory under IOMMU_SUPPORT instead.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

iommu/amd: Don't use MSI address range for DMA addresses
Reserve the MSI address range in the address allocator so
that MSI addresses are not handed out as dma handles.
Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
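As a rough sketch of the idea (not the driver's actual code): the allocator marks the x86 MSI window as already in use, so it can never be returned as a dma_addr_t. The reserve_address_range() helper below is hypothetical.

  /* x86 MSI address window; keep it out of the DMA address allocator. */
  #define MSI_RANGE_START   0xfee00000ULL
  #define MSI_RANGE_END     0xfeefffffULL

  /* Hypothetical helper that marks a byte range as allocated. */
  void reserve_address_range(struct dma_ops_domain *dom, u64 start, u64 size);

  static void reserve_msi_window(struct dma_ops_domain *dom)
  {
          reserve_address_range(dom, MSI_RANGE_START,
                                MSI_RANGE_END - MSI_RANGE_START + 1);
  }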

Merge, with conflicts resolved in:
        arch/x86/include/asm/amd_iommu_types.h
        arch/x86/kernel/amd_iommu.c
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

iommu/amd: Move missing parts to drivers/iommu
A few parts of the driver were missing in drivers/iommu.
Move them there to have the complete driver in that
directory.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

iommu: Move iommu Kconfig entries to submenu
For better navigation, this patch moves the drivers/iommu
drivers into their own Kconfig submenu.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86/ia64: intel-iommu: move to drivers/iommu/
This should ease finding similarities with different platforms,
with the intention of solving problems once in a generic framework
which everyone can use.
Note: to move intel-iommu.c, the declaration of pci_find_upstream_pcie_bridge()
has to move from drivers/pci/pci.h to include/linux/pci.h. This is handled
in this patch, too.
As suggested, also drop DMAR's EXPERIMENTAL tag while we're at it.
Compile-tested on x86_64.
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86: amd_iommu: move to drivers/iommu/
This should ease finding similarities with different platforms,
with the intention of solving problems once in a generic framework
which everyone can use.
Compile-tested on x86_64.
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

msm: iommu: move to drivers/iommu/
This should ease finding similarities with different platforms,
with the intention of solving problems once in a generic framework
which everyone can use.
Compile-tested for MSM8X60.
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Acked-by: David Brown <davidb@codeaurora.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

drivers: iommu: move to a dedicated folder
Create a dedicated folder for iommu drivers, and move the base
iommu implementation over there.
Grouping the various iommu drivers in a single location will help
find similar problems shared by different platforms, so they
can be solved once, in the iommu framework, instead of being solved
differently (or duplicated) in each driver.
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

iommu-api: Add missing header file
If CONFIG_IOMMU_API is not defined some functions will just
return -ENODEV. Add errno.h for the definition of ENODEV.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
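For context, the CONFIG_IOMMU_API=n stubs follow this pattern, which is why linux/errno.h is needed; this is a sketch, not the exact header contents.

  #include <linux/types.h>
  #include <linux/errno.h>      /* ENODEV for the stubs below */

  struct iommu_domain;          /* defined elsewhere in the header */

  #ifndef CONFIG_IOMMU_API
  static inline int iommu_map(struct iommu_domain *domain, unsigned long iova,
                              phys_addr_t paddr, int gfp_order, int prot)
  {
          return -ENODEV;       /* IOMMU API support not compiled in */
  }
  #endif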

x86/amd-iommu: Store device alias as dev_data pointer
This finally allows the IOMMU driver to handle PCI device IDs
that have no corresponding struct device present in the system.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86/amd-iommu: Search for existing dev_data before allocating a new one
Searching for an existing dev_data first will allow dev_data->alias
to be switched to store a dev_data pointer instead of a struct
device.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86/amd-iommu: Allow dev_data->alias to be NULL
Let dev_data->alias be just NULL if the device has no alias.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86/amd-iommu: Use only dev_data in low-level domain attach/detach functions
With this patch the low-level attach/detach functions work only
on dev_data structures. This allows the dev_data->dev pointer
to be removed.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86/amd-iommu: Use only dev_data for dte and iotlb flushing routines
This patch makes the functions that flush the DTE and IOTLBs
take only the dev_data structure instead of the struct
device directly.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86/amd-iommu: Store ATS state in dev_data
This allows the low-level functions to operate exclusively on
dev_data later.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86/amd-iommu: Store devid in dev_data
This allows dev_data to be used independently of struct device
later.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

x86/amd-iommu: Introduce global dev_data_list
This list keeps all allocated iommu_dev_data structures together.
It is needed for instances that have no associated device.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
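Taken together, the series leaves the per-device structure looking roughly like the sketch below; the field names are illustrative rather than the exact driver definition.

  struct iommu_dev_data {
          struct list_head dev_data_list;   /* entry in the global dev_data_list */
          struct iommu_dev_data *alias;     /* alias dev_data, or NULL if none */
          struct protection_domain *domain; /* domain the device is attached to */
          u16 devid;                        /* PCI device ID described by this entry */
          bool ats_enabled;                 /* ATS state kept per device */
  };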

x86/amd-iommu: Remove redundant device_flush_dte() calls
Remove these function calls from places where the function
has already been called by another function.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/suspend-2.6: (51 commits)
PM: Improve error code of pm_notifier_call_chain()
PM: Add "RTC" to PM trace time stamps to avoid confusion
PM / Suspend: Export suspend_set_ops, suspend_valid_only_mem
PM / Suspend: Add .suspend_again() callback to suspend_ops
PM / OPP: Introduce function to free cpufreq table
ARM / shmobile: Return -EBUSY from A4LC power off if A3RV is active
PM / Domains: Take .power_off() error code into account
ARM / shmobile: Use genpd_queue_power_off_work()
ARM / shmobile: Use pm_genpd_poweroff_unused()
PM / Domains: Introduce function to power off all unused PM domains
OMAP: PM: disable idle on suspend for GPIO and UART
OMAP: PM: omap_device: add API to disable idle on suspend
OMAP: PM: omap_device: add system PM methods for PM domain handling
OMAP: PM: omap_device: conditionally use PM domain runtime helpers
PM / Runtime: Add new helper function: pm_runtime_status_suspended()
PM / Domains: Queue up power off work only if it is not pending
PM / Domains: Improve handling of wakeup devices during system suspend
PM / Domains: Do not restore all devices on power off error
PM / Domains: Allow callbacks to execute all runtime PM helpers
PM / Domains: Do not execute device callbacks under locks
...

Merge branch 'pm-runtime'
* pm-runtime:
OMAP: PM: disable idle on suspend for GPIO and UART
OMAP: PM: omap_device: add API to disable idle on suspend
OMAP: PM: omap_device: add system PM methods for PM domain handling
OMAP: PM: omap_device: conditionally use PM domain runtime helpers
PM / Runtime: Add new helper function: pm_runtime_status_suspended()
PM / Runtime: Consistent utilization of deferred_resume
PM / Runtime: Prevent runtime_resume from racing with probe
PM / Runtime: Replace "run-time" with "runtime" in documentation
PM / Runtime: Improve documentation of enable, disable and barrier
PM: Limit race conditions between runtime PM and system sleep (v2)
PCI / PM: Detect early wakeup in pci_pm_prepare()
PM / Runtime: Return special error code if runtime PM is disabled
PM / Runtime: Update documentation of interactions with system sleep

OMAP: PM: disable idle on suspend for GPIO and UART
Until these drivers are runtime PM converted, their device power
states are managed by calling custom driver hooks late in the
idle/suspend path. Therefore, do not let the suspend/resume core code
automatically idle these devices since they will be managed manually
by the OMAP PM core very late in the idle/suspend path.
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

OMAP: PM: omap_device: add API to disable idle on suspend
By default, omap_devices will be automatically idled on suspend
(and re-enabled on resume). Using this new API, device init code
can disable this feature if desired.
NOTE: any driver/device that has been runtime PM converted should
not be using this API.
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

OMAP: PM: omap_device: add system PM methods for PM domain handling
In the omap_device PM domain callbacks, use omap_device idle/enable to
automatically manage device idle states during system suspend/resume.
If an omap_device has not already been runtime suspended, the
->suspend_noirq() method of the PM domain will use omap_device_idle()
to idle the HW after calling the driver's ->runtime_suspend()
callback. Similarly, upon resume, if the device was suspended during
->suspend_noirq(), the ->resume_noirq() method of the PM domain will
use omap_device_enable() to enable the HW and then call the driver's
->runtime_resume() callback.
If a device has already been runtime suspended, the noirq methods of
the PM domain leave the device runtime suspended by default.
However, if a driver needs to runtime resume a device during suspend
(for example, to change its wakeup settings), it may do so using
pm_runtime_get* in its ->suspend() callback.
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

OMAP: PM: omap_device: conditionally use PM domain runtime helpers
Build and use the runtime PM helper functions only when
runtime PM is actually enabled.
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

PM / Runtime: Add new helper function: pm_runtime_status_suspended()
This boolean function simply returns whether or not the runtime status
of the device is 'suspended'. Unlike pm_runtime_suspended(), this
function reports the runtime status regardless of whether runtime PM
for the device has been disabled.
Also add an entry to Documentation/power/runtime.txt.
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
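The helper can be as small as the following sketch, assuming the usual runtime_status field in struct dev_pm_info:

  static inline bool pm_runtime_status_suspended(struct device *dev)
  {
          /* Report the raw status; unlike pm_runtime_suspended(), do not
           * take into account whether runtime PM is disabled for the device. */
          return dev->power.runtime_status == RPM_SUSPENDED;
  }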

PM / Runtime: Consistent utilization of deferred_resume
dev->power.deferred_resume is typically used as a bool, so change
one assignment from 0 to false, as in other places.
Signed-off-by: ShuoX Liu <shuox.liu@intel.com>

PM / Runtime: Prevent runtime_resume from racing with probe
This patch (as1475) adds device_lock() and device_unlock() calls to
the store methods for the power/control and power/autosuspend_delay_ms
sysfs attribute files. We don't want badly timed writes to these
files to cause runtime_resume callbacks to occur while a driver is
being probed for a device.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
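The shape of the change is roughly the following sketch of one store method; the real attribute code parses more values:

  static ssize_t control_store(struct device *dev, struct device_attribute *attr,
                               const char *buf, size_t n)
  {
          ssize_t ret = n;

          device_lock(dev);               /* serialize against the probe path */
          if (sysfs_streq(buf, "auto"))
                  pm_runtime_allow(dev);
          else if (sysfs_streq(buf, "on"))
                  pm_runtime_forbid(dev);
          else
                  ret = -EINVAL;
          device_unlock(dev);
          return ret;
  }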

PM / Runtime: Replace "run-time" with "runtime" in documentation
The runtime PM documentation and kerneldoc comments sometimes spell
"runtime" with a dash (i.e. "run-time"). Replace all of those
instances with "runtime" to make the naming consistent.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

PM / Runtime: Improve documentation of enable, disable and barrier
The runtime PM documentation in Documentation/power/runtime_pm.txt
doesn't say that pm_runtime_enable() and pm_runtime_disable() work by
operating on power.disable_depth, which is a problem because the
possibility of nesting disables doesn't follow from the description
of these functions. Also, there is no description of
pm_runtime_barrier() at all in the document, which is confusing.
Improve the documentation by fixing those issues.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

PM: Limit race conditions between runtime PM and system sleep (v2)
One of the roles of the PM core is to prevent different PM callbacks
executed for the same device object from racing with each other.
Unfortunately, after commit e8665002477f0278f84f898145b1f141ba26ee26
(PM: Allow pm_runtime_suspend() to succeed during system suspend)
runtime PM callbacks may be executed concurrently with system
suspend/resume callbacks for the same device.
The main reason for commit e8665002477f0278f84f898145b1f141ba26ee26
was that some subsystems and device drivers wanted to use runtime PM
helpers, pm_runtime_suspend() and pm_runtime_put_sync() in
particular, for carrying out the suspend of devices in their
.suspend() callbacks. However, as it's been determined recently,
there are multiple reasons not to do so, including:
* The caller really doesn't control the runtime PM usage counters,
because user space can access them through sysfs and effectively
block runtime PM. That means using pm_runtime_suspend() or
pm_runtime_get_sync() to suspend devices during system suspend
may or may not work.
* If a driver calls pm_runtime_suspend() from its .suspend()
callback, it causes the subsystem's .runtime_suspend() callback to
be executed, which leads to the call sequence:
  subsys->suspend(dev)
    driver->suspend(dev)
      pm_runtime_suspend(dev)
        subsys->runtime_suspend(dev)
recursive from the subsystem's point of view. For some subsystems
that may actually work (e.g. the platform bus type), but for some
it will fail in a rather spectacular fashion (e.g. PCI). In each
case it means a layering violation.
* Both the subsystem and the driver can provide .suspend_noirq()
callbacks for system suspend that can do whatever the
.runtime_suspend() callbacks do just fine, so it really isn't
necessary to call pm_runtime_suspend() during system suspend.
* The runtime PM's handling of wakeup devices is usually different
from the system suspend's one, so .runtime_suspend() may simply be
inappropriate for system suspend.
* System suspend is supposed to work even if CONFIG_PM_RUNTIME is
unset.
* The runtime PM workqueue is frozen before system suspend, so if
whatever the driver is going to do during system suspend depends
on it, that simply won't work.
Still, there is a good reason to allow pm_runtime_resume() to
succeed during system suspend and resume (for instance, some
subsystems and device drivers may legitimately use it to ensure that
their devices are in full-power states before suspending them).
Moreover, there is no reason to prevent runtime PM callbacks from
being executed in parallel with the system suspend/resume .prepare()
and .complete() callbacks and the code removed by commit
e8665002477f0278f84f898145b1f141ba26ee26 went too far in this
respect. On the other hand, runtime PM callbacks, including
.runtime_resume(), must not be executed during system suspend's
"late" stage of suspending devices and during system resume's "early"
device resume stage.
Taking all of the above into consideration, make the PM core
acquire a runtime PM reference to every device and resume it if
there's a runtime PM resume request pending right before executing
the subsystem-level .suspend() callback for it. Make the PM core
drop references to all devices right after executing the
subsystem-level .resume() callbacks for them. Additionally,
make the PM core disable the runtime PM framework for all devices
during system suspend, after executing the subsystem-level .suspend()
callbacks for them, and enable the runtime PM framework for all
devices during system resume, right before executing the
subsystem-level .resume() callbacks for them.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Kevin Hilman <khilman@ti.com>

PCI / PM: Detect early wakeup in pci_pm_prepare()
A subsequent patch is going to move the invocation of
pm_runtime_barrier() from dpm_prepare() to __device_suspend().
Consequently, early wakeup events resulting from runtime resume
requests for wakeup devices queued up right before system suspend
will only be detected after all of the subsystem-level .prepare()
callbacks have run. However, the PCI bus type calls
pm_runtime_get_sync() from its pci_pm_prepare() callback routine,
so it would destroy the early wakeup events information regarding PCI
devices. To prevent this from happening add an early wakeup
detection mechanism, analogous to the one currently in dpm_prepare(),
to pci_pm_prepare().
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>

PM / Runtime: Return special error code if runtime PM is disabled
Some callers of pm_runtime_get_sync() and other runtime PM helper
functions, scsi_autopm_get_host() and scsi_autopm_get_device() in
particular, need to distinguish error codes returned when runtime PM
is disabled (i.e. power.disable_depth is nonzero for the given
device) from error codes returned in other situations. For this
reason, make the runtime PM helper functions return -EACCES when
power.disable_depth is nonzero and ensure that this error code
won't be returned by them in any other circumstances. Modify
scsi_autopm_get_host() and scsi_autopm_get_device() to check the
error code returned by pm_runtime_get_sync() and ignore -EACCES.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
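A caller that wants to treat "runtime PM disabled" as success can then do something like this sketch (example_autopm_get() is a made-up name mirroring what the SCSI helpers are described as doing):

  static int example_autopm_get(struct device *dev)
  {
          int err = pm_runtime_get_sync(dev);

          if (err < 0 && err != -EACCES) {
                  pm_runtime_put_sync(dev);       /* real failure: drop the reference */
                  return err;
          }
          return 0;       /* -EACCES only means runtime PM is disabled */
  }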

PM / Runtime: Update documentation of interactions with system sleep
The documents describing the interactions between runtime PM and
system sleep generally refer to the model in which the system sleep
state is entered through a global firmware or hardware operation.
As a result, some recommendations given in there are not entirely
suitable for systems in which this is not the case. Update the
documentation to take the existence of those systems into account.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reviewed-by: Kevin Hilman <khilman@ti.com>

Merge branch 'pm-domains'
* pm-domains: (33 commits)
ARM / shmobile: Return -EBUSY from A4LC power off if A3RV is active
PM / Domains: Take .power_off() error code into account
ARM / shmobile: Use genpd_queue_power_off_work()
ARM / shmobile: Use pm_genpd_poweroff_unused()
PM / Domains: Introduce function to power off all unused PM domains
PM / Domains: Queue up power off work only if it is not pending
PM / Domains: Improve handling of wakeup devices during system suspend
PM / Domains: Do not restore all devices on power off error
PM / Domains: Allow callbacks to execute all runtime PM helpers
PM / Domains: Do not execute device callbacks under locks
PM / Domains: Make failing pm_genpd_prepare() clean up properly
PM / Domains: Set device state to "active" during system resume
ARM: mach-shmobile: sh7372 A3RV requires A4LC
PM / Domains: Export pm_genpd_poweron() in header
ARM: mach-shmobile: sh7372 late pm domain off
ARM: mach-shmobile: Runtime PM late init callback
ARM: mach-shmobile: sh7372 D4 support
ARM: mach-shmobile: sh7372 A4MP support
ARM: mach-shmobile: sh7372: make sure that fsi is peripheral of spu2
ARM: mach-shmobile: sh7372 A3SG support
...

ARM / shmobile: Return -EBUSY from A4LC power off if A3RV is active
Since the A4LC should only be powered off if the A3RV is off, make
the A4LC's power down routine return -EBUSY if A3RV is not off to
indicate to the core that it doesn't want to power off the domain in
that case. This will cause the core to regard A4LC as active, so
the pm_genpd_poweron() in pd_power_down_a3rv() is not necessary any
more.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Magnus Damm <damm@opensource.se>

PM / Domains: Take .power_off() error code into account
Currently pm_genpd_poweroff() discards error codes returned by
the PM domain's .power_off() callback, because it's safer to always
regard the domain as inaccessible to drivers after a failing
.power_off(). Still, there are situations in which the low-level
code may want to indicate that it doesn't want to power off the
domain, so allow it to do that by returning -EBUSY from .power_off().
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Magnus Damm <damm@opensource.se>
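Conceptually, the core's power-off path then honours the callback's return value as in this simplified sketch (genpd_try_power_off() is a hypothetical name; the real pm_genpd_poweroff() does far more bookkeeping):

  static int genpd_try_power_off(struct generic_pm_domain *genpd)
  {
          int ret = 0;

          if (genpd->power_off)
                  ret = genpd->power_off(genpd);
          if (ret == -EBUSY)
                  return ret;     /* provider refused: keep the domain active */

          /* ... otherwise mark the domain powered off and notify the parent ... */
          return 0;
  }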

ARM / shmobile: Use genpd_queue_power_off_work()
Make pd_power_down_a3rv() use genpd_queue_power_off_work() to queue
up the powering off of the A4LC domain, so that it is not queued up
again while already pending.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Magnus Damm <damm@opensource.se>

ARM / shmobile: Use pm_genpd_poweroff_unused()
Make shmobile use pm_genpd_poweroff_unused() instead of open-coding
the powering off of PM domains without devices in use.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Magnus Damm <damm@opensource.se>

PM / Domains: Introduce function to power off all unused PM domains
Add a new function pm_genpd_poweroff_unused() queuing up the
execution of pm_genpd_poweroff() for every initialized generic PM
domain. Calling it will cause every generic PM domain without
devices in use to be powered off.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Magnus Damm <damm@opensource.se>
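A sketch of what such a function amounts to, assuming a global list of initialized domains (gpd_list, protected by gpd_list_lock) and the queueing helper sketched under the next entry:

  void pm_genpd_poweroff_unused(void)
  {
          struct generic_pm_domain *genpd;

          mutex_lock(&gpd_list_lock);
          list_for_each_entry(genpd, &gpd_list, gpd_list_node)
                  genpd_queue_power_off_work(genpd);  /* worker skips domains in use */
          mutex_unlock(&gpd_list_lock);
  }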

PM / Domains: Queue up power off work only if it is not pending
In theory it is possible that pm_genpd_poweroff() for two different
subdomains of the same parent domain will attempt to queue up the
execution of pm_genpd_poweroff() for the parent twice in a row. This
would lead to unpleasant consequences, so prevent it from happening
by checking if genpd->power_off_work is pending before attempting to
queue it up.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
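The check is essentially a one-liner in the queueing helper, roughly:

  static void genpd_queue_power_off_work(struct generic_pm_domain *genpd)
  {
          /* avoid queueing the same work item twice for one parent domain */
          if (!work_pending(&genpd->power_off_work))
                  queue_work(pm_wq, &genpd->power_off_work);
  }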

PM / Domains: Improve handling of wakeup devices during system suspend
Kevin points out that if there's a device that can wake up the system
from sleep states, but it doesn't generate wakeup signals by itself
(they are generated on its behalf by other parts of the system) and
it currently is not enabled to wake up the system (that is,
device_may_wakeup() returns "false" for it), we may need to change
its wakeup settings during system suspend (for example, the device
might have been configured to signal remote wakeup from the system's
working state, as needed by runtime PM). Therefore the generic PM
domains code should invoke the system suspend callbacks provided by
the device's driver, which it doesn't do if the PM domain is powered
off during the system suspend's "prepare" stage. This is a valid
point. Moreover, this code also should make sure that system wakeup
devices that are enabled to wake up the system from sleep states and
have to remain active for this purpose are not suspended while the
system is in a sleep state.
To avoid the above issues, make the generic PM domains' .prepare()
routine, pm_genpd_prepare(), force runtime resume of devices whose
system wakeup settings may need to be changed during system suspend
or that should remain active while the system is in a sleep state to
be able to wake it up from that state.
Reported-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

PM / Domains: Do not restore all devices on power off error
Since every device in a PM domain has its own need_restore
flag, which is set by __pm_genpd_save_device(), there's no need to
walk the domain's device list and restore all devices on an error
from one of the drivers' .runtime_suspend() callbacks.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

PM / Domains: Allow callbacks to execute all runtime PM helpers
A deadlock may occur if one of the PM domains' .start_device() or
.stop_device() callbacks or a device driver's .runtime_suspend() or
.runtime_resume() callback executed by the core generic PM domain
code uses a "wrong" runtime PM helper function. This happens, for
example, if .runtime_resume() from one device's driver calls
pm_runtime_resume() for another device in the same PM domain.
A similar situation may take place if a device's parent is in the
same PM domain, in which case the runtime PM framework may execute
pm_genpd_runtime_resume() automatically for the parent (if it is
suspended at the moment). This, of course, is undesirable, so
the generic PM domains code should be modified to prevent it from
happening.
The runtime PM framework guarantees that pm_genpd_runtime_suspend()
and pm_genpd_runtime_resume() won't be executed in parallel for
the same device, so the generic PM domains code need not worry
about those cases. Still, it needs to prevent the other possible
race conditions between pm_genpd_runtime_suspend(),
pm_genpd_runtime_resume(), pm_genpd_poweron() and pm_genpd_poweroff()
from happening and it needs to avoid deadlocks at the same time.
To this end, modify the generic PM domains code to relax
synchronization rules so that:
* pm_genpd_poweron() doesn't wait for the PM domain status to
change from GPD_STATE_BUSY. If it finds that the status is
not GPD_STATE_POWER_OFF, it returns without powering the domain on
(it may modify the status depending on the circumstances).
* pm_genpd_poweroff() returns as soon as it finds that the PM
domain's status changed from GPD_STATE_BUSY after it's released
the PM domain's lock.
* pm_genpd_runtime_suspend() doesn't wait for the PM domain status
to change from GPD_STATE_BUSY after executing the domain's
.stop_device() callback and executes pm_genpd_poweroff() only
if pm_genpd_runtime_resume() is not executed in parallel.
* pm_genpd_runtime_resume() doesn't wait for the PM domain status
to change from GPD_STATE_BUSY after executing pm_genpd_poweron()
and sets the domain's status to GPD_STATE_BUSY and increments its
counter of resuming devices (introduced by this change) immediately
after acquiring the lock. The counter of resuming devices is then
decremented after executing __pm_genpd_runtime_resume() for the
device and the domain's status is reset to GPD_STATE_ACTIVE (unless
there are more resuming devices in the domain, in which case the
status remains GPD_STATE_BUSY).
This way, for example, if a device driver's .runtime_resume()
callback executes pm_runtime_resume() for another device in the same
PM domain, pm_genpd_poweron() called by pm_genpd_runtime_resume()
invoked by the runtime PM framework will not block and it will see
that there's nothing to do for it. Next, the PM domain's lock will
be acquired without waiting for its status to change from
GPD_STATE_BUSY and the device driver's .runtime_resume() callback
will be executed. In turn, if pm_runtime_suspend() is executed by
one device driver's .runtime_resume() callback for another device in
the same PM domain, pm_genpd_poweroff() executed by
pm_genpd_runtime_suspend() invoked by the runtime PM framework as a
result will notice that one of the devices in the domain is being
resumed, so it will return immediately.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

PM / Domains: Do not execute device callbacks under locks
Currently, the .start_device() and .stop_device() callbacks from
struct generic_pm_domain() as well as the device drivers' runtime PM
callbacks used by the generic PM domains code are executed under
the generic PM domain lock. This, unfortunately, is prone to
deadlocks, for example if a device and its parent are both members
of the same PM domain. For this reason, it would be better if the
PM domains code didn't execute device callbacks under the lock.
Rework the locking in the generic PM domains code so that the lock
is dropped for the execution of device callbacks. To this end,
introduce PM domains states reflecting the current status of a PM
domain and such that the PM domain lock cannot be acquired if the
status is GPD_STATE_BUSY. Make threads attempting to acquire a PM
domain's lock wait until the status changes to either
GPD_STATE_ACTIVE or GPD_STATE_POWER_OFF.
This change by itself doesn't fix the deadlock problem mentioned
above, but the mechanism introduced by it will be used for this
purpose by a subsequent patch.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
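The waiting described above can be pictured with a sketch along these lines (the status_wait_queue field name is assumed; the real locking helper is more involved):

  static void genpd_acquire_lock(struct generic_pm_domain *genpd)
  {
          DEFINE_WAIT(wait);

          mutex_lock(&genpd->lock);
          /* Block until the domain leaves the BUSY state. */
          while (genpd->status == GPD_STATE_BUSY) {
                  prepare_to_wait(&genpd->status_wait_queue, &wait,
                                  TASK_UNINTERRUPTIBLE);
                  mutex_unlock(&genpd->lock);
                  schedule();             /* woken when the status changes */
                  mutex_lock(&genpd->lock);
          }
          finish_wait(&genpd->status_wait_queue, &wait);
  }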

PM / Domains: Make failing pm_genpd_prepare() clean up properly
If pm_generic_prepare() in pm_genpd_prepare() returns an error code,
the PM domain's counter of "prepared" devices should be decremented
and its suspend_power_off flag should be reset if this counter drops
down to zero. Otherwise, the PM domain runtime PM code will not
handle the domain correctly (it will permanently think that system
suspend is in progress).
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>

PM / Domains: Set device state to "active" during system resume
The runtime PM status of devices in a power domain that is not
powered off in pm_genpd_complete() should be set to "active", because
those devices are operational at this point. Some of them may not be
in use, though, so make pm_genpd_complete() call pm_runtime_idle()
in addition to pm_runtime_set_active() for each of them.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
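In other words, for each device in such a domain the completion path ends up doing roughly the following (hypothetical helper name):

  static void pm_genpd_complete_dev(struct device *dev)
  {
          pm_runtime_set_active(dev);     /* runtime status must match reality */
          pm_runtime_idle(dev);           /* let devices not in use suspend again */
  }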

ARM: mach-shmobile: sh7372 A3RV requires A4LC
Add a power domain workaround for the VPU and A3RV on sh7372.
The sh7372 data sheet mentions that the VPU is located in the
A3RV power domain. The A3RV power domain is not related to A4LC
in any way, but testing shows that unless A3RV _and_ A4LC are
powered on the VPU test program will bomb out.
This issue may be caused by a more or less undocumented dependency
on the MERAM block that happens to be located in A4LC. So now we
know that the out-of-reset requirement of the VPU is that the MERAM
is powered on.
This patch adds a workaround for A3RV to make sure A4LC is powered
on, so that the VPU can be used even though the LCDCs are in blanking
state and A4LC is supposed to be off.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>