path: root/drivers/iommu
Commit message (Author, Age, Files, Lines)
* iommu/of: Ignore all errors except EPROBE_DEFER (Sricharan R, 2017-05-30, 1 file, -0/+6)
While deferring the probe of IOMMU masters, the xlate and add_device callbacks called from of_iommu_configure can pass back error values like -ENODEV, which mean the IOMMU cannot be connected with that master for genuine reasons. Before IOMMU probe deferral, all such errors were ignored. Now all those errors are propagated back, killing the master's probe. Instead, ignore all errors except EPROBE_DEFER, which is the only one of concern, and let the master work without an IOMMU, thus restoring the old behavior.
Also make explicit that of_dma_configure handles only -EPROBE_DEFER from of_iommu_configure.
Fixes: 7b07cbefb68d ("iommu: of: Handle IOMMU lookup failure with deferred probing or error")
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Magnus Damn <magnus.damn@gmail.com>
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
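The resulting call-site pattern is roughly the sketch below, where only probe deferral is treated as fatal (the function shape is assumed from the description above, not copied from the patch):

    const struct iommu_ops *ops;

    ops = of_iommu_configure(dev, dev->of_node);
    if (IS_ERR(ops)) {
            if (PTR_ERR(ops) == -EPROBE_DEFER)
                    return -EPROBE_DEFER;     /* retry the master later */
            ops = NULL;                       /* any other error: run without an IOMMU */
    }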
* iommu/of: Fix check for returning EPROBE_DEFER (Sricharan R, 2017-05-30, 1 file, -0/+1)
Now with IOMMU probe deferral, we return -EPROBE_DEFER for masters that are connected to an IOMMU which is not probed yet but is going to get probed, so that we can attach the correct dma_ops. So while trying to defer the probe of the master, check whether the of_iommu node that it is connected to is marked in DT as 'status=disabled'; in that case the IOMMU is never going to get probed. So simply return NULL and let the master work without an IOMMU.
Fixes: 7b07cbefb68d ("iommu: of: Handle IOMMU lookup failure with deferred probing or error")
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Tested-by: Magnus Damn <magnus.damn@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
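The check described above can be a one-liner against the IOMMU's DT node before deciding whether to defer; a minimal sketch (assuming the standard of_device_is_available() helper and an of_phandle_args-style 'iommu_spec' argument):

    /* IOMMU marked status = "disabled" in DT: it will never probe, so
     * return NULL and let the master run without an IOMMU ... */
    if (!of_device_is_available(iommu_spec->np))
            return NULL;

    /* ... otherwise no ops registered yet, but a driver may still bind: defer. */
    return ERR_PTR(-EPROBE_DEFER);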
* iommu/mediatek: Include linux/dma-mapping.h (Arnd Bergmann, 2017-05-17, 1 file, -0/+1)
The mediatek iommu driver relied on an implicit include of dma-mapping.h, but for some reason that is no longer there in 4.12-rc1:
drivers/iommu/mtk_iommu_v1.c: In function 'mtk_iommu_domain_finalise':
drivers/iommu/mtk_iommu_v1.c:233:16: error: implicit declaration of function 'dma_zalloc_coherent'; did you mean 'debug_dma_alloc_coherent'? [-Werror=implicit-function-declaration]
drivers/iommu/mtk_iommu_v1.c: In function 'mtk_iommu_domain_free':
drivers/iommu/mtk_iommu_v1.c:265:2: error: implicit declaration of function 'dma_free_coherent'; did you mean 'debug_dma_free_coherent'? [-Werror=implicit-function-declaration]
This adds an explicit #include to make it build again.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 208480bb27 ('iommu: Remove trace-events include from iommu.h')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/vt-d: Flush the IOTLB to get rid of the initial kdump mappings (KarimAllah Ahmed, 2017-05-17, 1 file, -1/+4)
Ever since commit 091d42e43d ("iommu/vt-d: Copy translation tables from old kernel") the kdump kernel copies the IOMMU context tables from the previous kernel. Each device's mappings will be destroyed once the driver for the respective device takes over.
This unfortunately breaks the workflow of mapping and unmapping a new context to the IOMMU. The mapping function assumes that either:
1) Unmapping did the proper IOMMU flushing, and it only ever flushes if the IOMMU unit supports caching invalid entries.
2) The system just booted and the initialization code took care of flushing all IOMMU caches.
This assumption is not true for the kdump kernel since the context tables have been copied from the previous kernel and translations could have been cached ever since. So make sure to flush the IOTLB as well when we destroy these old copied mappings.
Cc: Joerg Roedel <joro@8bytes.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Anthony Liguori <aliguori@amazon.com>
Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
Acked-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: stable@vger.kernel.org v4.2+
Fixes: 091d42e43d ("iommu/vt-d: Copy translation tables from old kernel")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/dma: Don't touch invalid iova_domain members (Robin Murphy, 2017-05-17, 1 file, -5/+8)
When __iommu_dma_map() and iommu_dma_free_iova() are called from iommu_dma_get_msi_page(), various iova_*() helpers are still invoked in the process, which is unwise since they access a different member of the union (the iova_domain) from that which was last written, and there's no guarantee that sensible values will result anyway. Clean up the code paths that are valid for an MSI cookie to ensure we only do iova_domain-specific things when we're actually dealing with one.
Fixes: a44e6657585b ("iommu/dma: Clean up MSI IOVA allocation")
Reported-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Tested-by: Bharat Bhushan <bharat.bhushan@nxp.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
* Merge tag 'iommu-updates-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu (Linus Torvalds, 2017-05-09, 18 files, -480/+916)
|\
Pull IOMMU updates from Joerg Roedel:
- code optimizations for the Intel VT-d driver
- ability to switch off a previously enabled Intel IOMMU
- support for 'struct iommu_device' for OMAP, Rockchip and Mediatek IOMMUs
- header optimizations for IOMMU core code headers and a few fixes that became necessary in other parts of the kernel because of that
- ACPI/IORT updates and fixes
- Exynos IOMMU optimizations
- updates for the IOMMU dma-api code to bring it closer to use per-cpu iova caches
- new command-line option to set default domain type allocated by the iommu core code
- another command line option to allow the Intel IOMMU switched off in a tboot environment
- ARM/SMMU: TLB sync optimisations for SMMUv2, Support for using an IDENTITY domain in conjunction with DMA ops, Support for SMR masking, Support for 16-bit ASIDs (was previously broken)
- various other small fixes and improvements
* tag 'iommu-updates-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (63 commits)
soc/qbman: Move dma-mapping.h include to qman_priv.h
soc/qbman: Fix implicit header dependency now causing build fails
iommu: Remove trace-events include from iommu.h
iommu: Remove pci.h include from trace/events/iommu.h
arm: dma-mapping: Don't override dma_ops in arch_setup_dma_ops()
ACPI/IORT: Fix CONFIG_IOMMU_API dependency
iommu/vt-d: Don't print the failure message when booting non-kdump kernel
iommu: Move report_iommu_fault() to iommu.c
iommu: Include device.h in iommu.h
x86, iommu/vt-d: Add an option to disable Intel IOMMU force on
iommu/arm-smmu: Return IOVA in iova_to_phys when SMMU is bypassed
iommu/arm-smmu: Correct sid to mask
iommu/amd: Fix incorrect error handling in amd_iommu_bind_pasid()
iommu: Make iommu_bus_notifier return NOTIFY_DONE rather than error code
omap3isp: Remove iommu_group related code
iommu/omap: Add iommu-group support
iommu/omap: Make use of 'struct iommu_device'
iommu/omap: Store iommu_dev pointer in arch_data
iommu/omap: Move data structures to omap-iommu.h
iommu/omap: Drop legacy-style device support
...
| *-------------. Merge branches 'arm/exynos', 'arm/omap', 'arm/rockchip', 'arm/mediatek', 'arm/smmu', 'arm/core', 'x86/vt-d', 'x86/amd' and 'core' into next (Joerg Roedel, 2017-05-04, 20 files, -530/+956)
| |\ \ \ \ \ \ \ \
| | | | | | | | | * iommu: Remove pci.h include from trace/events/iommu.h (Joerg Roedel, 2017-04-29, 3 files, -0/+3)
The include file does not need any PCI specifics, so remove that include. Also fix the places that relied on it.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | | | * iommu: Move report_iommu_fault() to iommu.c (Joerg Roedel, 2017-04-27, 1 file, -0/+42)
The function is in no fast path, so there is no need for it to be static inline in a header file. This also removes the need to include the iommu trace points in iommu.h.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | | | * iommu: Make iommu_bus_notifier return NOTIFY_DONE rather than error code (zhichang.yuan, 2017-04-20, 1 file, -2/+6)
In iommu_bus_notifier(), when the action is BUS_NOTIFY_ADD_DEVICE, the value of 'ops->add_device(dev)' is returned directly. But ops->add_device can return an error value such as -ENODEV, and such values make notifier_call_chain() stop traversing the remaining nodes in the notifier_block list. This patch revises iommu_bus_notifier() to return NOTIFY_DONE when an error happens in ops->add_device().
Signed-off-by: zhichang.yuan <yuanzhichang@hisilicon.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
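In notifier terms, the fix boils down to swallowing the add_device() error instead of passing it up the chain; a minimal sketch of that logic (not the literal diff):

    if (action == BUS_NOTIFY_ADD_DEVICE) {
            int ret = ops->add_device(dev);

            /* An errno here would stop notifier_call_chain() from visiting
             * the remaining notifier blocks, so report DONE instead. */
            return ret ? NOTIFY_DONE : NOTIFY_OK;
    }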
| | | | | | | | * | iommu/amd: Fix incorrect error handling in amd_iommu_bind_pasid() (Pan Bian, 2017-04-24, 1 file, -1/+1)
In amd_iommu_bind_pasid(), the control flow jumps to the label out_free when pasid_state->mm and mm are NULL, and mmput(mm) is called there. In mmput(mm), mm is referenced without validation, which results in a NULL pointer dereference. This patch fixes the bug.
Signed-off-by: Pan Bian <bianpan2016@163.com>
Fixes: f0aac63b873b ('iommu/amd: Don't hold a reference to mm_struct')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | | iommu/vt-d: Don't print the failure message when booting non-kdump kernel (Qiuxu Zhuo, 2017-04-28, 1 file, -9/+6)
When booting a new non-kdump kernel, we get the failure messages below:
[ 0.004000] DMAR-IR: IRQ remapping was enabled on dmar2 but we are not in kdump mode
[ 0.004000] DMAR-IR: Failed to copy IR table for dmar2 from previous kernel
[ 0.004000] DMAR-IR: IRQ remapping was enabled on dmar1 but we are not in kdump mode
[ 0.004000] DMAR-IR: Failed to copy IR table for dmar1 from previous kernel
[ 0.004000] DMAR-IR: IRQ remapping was enabled on dmar0 but we are not in kdump mode
[ 0.004000] DMAR-IR: Failed to copy IR table for dmar0 from previous kernel
[ 0.004000] DMAR-IR: IRQ remapping was enabled on dmar3 but we are not in kdump mode
[ 0.004000] DMAR-IR: Failed to copy IR table for dmar3 from previous kernel
In the non-kdump case there is no need to copy the IR table from the previous kernel, so nothing has actually failed. To be less alarming or misleading, do not print the "DMAR-IR: Failed to copy IR table for dmar[0-9] from previous kernel" messages when booting a non-kdump kernel.
Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | | x86, iommu/vt-d: Add an option to disable Intel IOMMU force on (Shaohua Li, 2017-04-26, 1 file, -0/+18)
The IOMMU harms performance significantly when we run very fast networking workloads, in this case a 40GB networking XDP test. The software overhead itself is barely noticeable; it is the IOTLB misses (based on our analysis) which kill the performance. We observed the same performance issue even with software passthrough (identity mapping); only hardware passthrough survives. The pps with the IOMMU (with software passthrough) is only about ~30% of that without it. This is a hardware limitation based on our observation, so we would like to disable the IOMMU force-on, but we do want to use TBOOT and can sacrifice the DMA security bought by the IOMMU. I must admit I know nothing about TBOOT, but the TBOOT guys (cc-ed) think not enabling the IOMMU is totally ok.
So introduce a new boot option to disable the force-on. It is kind of silly that we need to run into intel_iommu_init even without force-on, but we need to disable the TBOOT PMR registers. For systems without the boot option, nothing is changed.
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
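The commit text above does not spell out the new option's name; assuming it is an intel_iommu=tboot_noforce sub-option (an assumption, the actual spelling may differ), the plumbing would look roughly like this kernel-parameter sketch:

    /* Sketch only: flag name and message wording are assumptions. */
    static bool intel_iommu_tboot_noforce;

    static int __init intel_iommu_setup(char *str)
    {
            if (strstr(str, "tboot_noforce")) {
                    pr_info("Intel-IOMMU: not forcing on after tboot\n");
                    intel_iommu_tboot_noforce = true;
            }
            return 1;
    }
    __setup("intel_iommu=", intel_iommu_setup);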
| | | | | | | * | | iommu/vt-d: Make sure IOMMUs are off when intel_iommu=off (Joerg Roedel, 2017-03-29, 1 file, -1/+17)
When booting into a kexec kernel with intel_iommu=off while the previous kernel had intel_iommu=on, the IOMMU hardware is still enabled and is not disabled by the new kernel. This causes the boot to fail because DMA is blocked by the hardware. Disable the IOMMUs when we find them enabled in the kexec kernel and boot with intel_iommu=off.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | | iommu/dmar: Remove redundant ' != 0' when check return code (Andy Shevchenko, 2017-03-22, 1 file, -1/+1)
The usual pattern when checking a return code, which might be a negative errno, is either (ret) or (!ret). Remove the extra ' != 0' from the condition.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
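For reference, the pattern being cleaned up looks like this (with a placeholder callee, not a real dmar function name):

    /* Before */
    if (some_dmar_helper(dmaru) != 0)
            return -ENODEV;

    /* After: identical behaviour, usual kernel idiom */
    if (some_dmar_helper(dmaru))
            return -ENODEV;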
| | | | | | | * | | iommu/dmar: Remove redundant assignment of ret (Andy Shevchenko, 2017-03-22, 1 file, -2/+2)
There is no need to assign ret to 0 in some cases. Moreover, it might shadow some errors in the future. Remove such assignments.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | | iommu/dmar: Return directly from a loop in dmar_dev_scope_status() (Andy Shevchenko, 2017-03-22, 1 file, -6/+8)
There is no need to have a temporary variable.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | | iommu/dmar: Rectify return code handling in detect_intel_iommu() (Andy Shevchenko, 2017-03-22, 1 file, -8/+7)
There is inconsistency in return codes across the functions called from detect_intel_iommu(). Make it consistent and propagate the return code to the caller.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/arm-smmu: Clean up early-probing workarounds (Robin Murphy, 2017-04-20, 2 files, -107/+49)
Now that the appropriate ordering is enforced via probe-deferral of masters in core code, rip it all out and bask in the simplicity.
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
[Sricharan: Rebased on top of ACPI IORT SMMU series]
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu: of: Handle IOMMU lookup failure with deferred probing or error (Laurent Pinchart, 2017-04-20, 1 file, -2/+2)
Failures to look up an IOMMU when parsing the DT iommus property need to be handled separately from the .of_xlate() failures to support deferred probing.
The lack of a registered IOMMU can be caused by the lack of a driver for the IOMMU, or by the IOMMU device probe not having been performed yet, having been deferred, or having failed.
The first case occurs when the device tree describes the bus master and IOMMU topology correctly but no device driver exists for the IOMMU yet or the device driver has not been compiled in. Return NULL; the caller will configure the device without an IOMMU.
The second and third cases are handled by deferring the probe of the bus master device, which will eventually get reprobed after the IOMMU.
The last case is currently handled by deferring the probe of the bus master device as well. A mechanism to either configure the bus master device without an IOMMU or to fail the bus master device probe, depending on whether the IOMMU is optional or mandatory, would be a good enhancement.
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/of: Prepare for deferred IOMMU configuration (Robin Murphy, 2017-04-20, 1 file, -1/+42)
IOMMU configuration represents unchanging properties of the hardware, and as such should only need to happen once in a device's lifetime, but the necessary interaction with the IOMMU device and driver complicates exactly when that point should be.
Since the only reasonable tool available for handling the inter-device dependency is probe deferral, we need to prepare of_iommu_configure() to run later than it is currently called (i.e. at driver probe rather than device creation), to handle being retried, and to tell whether a not-yet-present IOMMU should be waited for or skipped (by virtue of having declared a built-in driver or not).
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/of: Refactor of_iommu_configure() for error handling (Robin Murphy, 2017-04-20, 1 file, -30/+53)
In preparation for some upcoming cleverness, rework the control flow in of_iommu_configure() to minimise duplication and improve the propagation of errors. It's also as good a time as any to switch over from the now-just-a-compatibility-wrapper of_iommu_get_ops() to using the generic IOMMU instance interface directly.
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/iova: Fix underflow bug in __alloc_and_insert_iova_range (Nate Watterson, 2017-04-07, 1 file, -1/+1)
Normally, calling alloc_iova() using an iova_domain with insufficient pfns remaining between start_pfn and dma_limit will fail and return a NULL pointer. Unexpectedly, if such a "full" iova_domain contains an iova with pfn_lo == 0, the alloc_iova() call will instead succeed and return an iova containing invalid pfns.
This is caused by an underflow bug in __alloc_and_insert_iova_range() that occurs after walking the "full" iova tree, when the search ends at the iova with pfn_lo == 0 and limit_pfn is then adjusted to be just below that (-1). This (now huge) limit_pfn gives the impression that a vast amount of space is available between it and start_pfn, and thus a new iova is allocated with the invalid pfn_hi value 0xFFF... . To remedy this, a check is introduced to ensure that adjustments to limit_pfn will not underflow.
This issue has been observed in the wild, and is easily reproduced with the following sample code.
    struct iova_domain *iovad = kzalloc(sizeof(*iovad), GFP_KERNEL);
    struct iova *rsvd_iova, *good_iova, *bad_iova;
    unsigned long limit_pfn = 3;
    unsigned long start_pfn = 1;
    unsigned long va_size = 2;

    init_iova_domain(iovad, SZ_4K, start_pfn, limit_pfn);
    rsvd_iova = reserve_iova(iovad, 0, 0);
    good_iova = alloc_iova(iovad, va_size, limit_pfn, true);
    bad_iova = alloc_iova(iovad, va_size, limit_pfn, true);
Prior to the patch, this yielded:
    *rsvd_iova == {0, 0}   /* Expected */
    *good_iova == {2, 3}   /* Expected */
    *bad_iova  == {-2, -1} /* Oh no... */
After the patch, bad_iova is NULL as expected since inadequate space remains between limit_pfn and start_pfn after allocating good_iova.
Signed-off-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
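The guard itself is tiny; conceptually it only has to stop the "-1" adjustment from wrapping when the neighbouring iova starts at pfn 0 (a sketch of the idea, not necessarily the literal hunk):

    /* When the adjacent allocation starts at pfn 0, pfn_lo - 1 would
     * underflow to ULONG_MAX; clamp the new limit at 0 instead. */
    limit_pfn = curr_iova->pfn_lo ? (curr_iova->pfn_lo - 1) : 0;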
| | | | | | * | | | iommu/dma: Plumb in the per-CPU IOVA caches (Robin Murphy, 2017-04-03, 1 file, -20/+17)
With IOVA allocation suitably tidied up, we are finally free to opt in to the per-CPU caching mechanism. The caching alone can provide a modest improvement over walking the rbtree for weedier systems (iperf3 shows ~10% more ethernet throughput on an ARM Juno r1 constrained to a single 650MHz Cortex-A53), but the real gain will be in sidestepping the rbtree lock contention which larger ARM-based systems with lots of parallel I/O are starting to feel the pain of.
Reviewed-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/dma: Clean up MSI IOVA allocation (Robin Murphy, 2017-04-03, 1 file, -33/+25)
Now that allocation is suitably abstracted, our private alloc/free helpers can drive the trivial MSI cookie allocator directly as well, which lets us clean up its exposed guts from iommu_dma_map_msi_msg() and simplify things quite a bit.
Reviewed-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/dma: Convert to address-based allocation (Robin Murphy, 2017-04-03, 1 file, -52/+67)
In preparation for some IOVA allocation improvements, clean up all the explicit struct iova usage such that all our mapping, unmapping and cleanup paths deal exclusively with addresses rather than implementation details. In the process, a few of the things we're touching get renamed for the sake of internal consistency.
Reviewed-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/dma: Make PCI window reservation generic (Robin Murphy, 2017-03-22, 3 files, -10/+32)
Now that we're applying the IOMMU API reserved regions to our IOVA domains, we shouldn't need to privately special-case PCI windows, or indeed anything else which isn't specific to our iommu-dma layer. However, since those aren't IOMMU-specific either, rather than start duplicating code into IOMMU drivers let's transform the existing function into an iommu_get_resv_regions() helper that they can share.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/dma: Handle IOMMU API reserved regions (Robin Murphy, 2017-03-22, 1 file, -7/+69)
Now that it's simple to discover the necessary reservations for a given device/IOMMU combination, let's wire up the appropriate handling. Basic reserved regions and direct-mapped regions we simply have to carve out of IOVA space (the IOMMU core having already mapped the latter before attaching the device). For hardware MSI regions, we also pre-populate the cookie with matching msi_pages. That way, irqchip drivers which normally assume MSIs to require mapping at the IOMMU can keep working without having to special-case their iommu_dma_map_msi_msg() hook, or indeed be aware at all of quirks preventing the IOMMU from translating certain addresses.
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | | iommu/dma: Don't reserve PCI I/O windows (Robin Murphy, 2017-03-22, 1 file, -2/+1)
Even if a host controller's CPU-side MMIO windows into PCI I/O space do happen to leak into PCI memory space such that it might treat them as peer addresses, trying to reserve the corresponding I/O space addresses doesn't do anything to help solve that problem. Stop doing a silly thing.
Fixes: fade1ec055dc ("iommu/dma: Avoid PCI host bridge windows")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | * | | | | iommu/arm-smmu: Return IOVA in iova_to_phys when SMMU is bypassed (Sunil Goutham, 2017-04-26, 2 files, -0/+6)
For software-initiated address translation, when the domain type is IOMMU_DOMAIN_IDENTITY, i.e. the SMMU is bypassed, mimic the hardware behaviour, i.e. return the same IOVA as the translated address.
This patch is an extension to Will Deacon's patchset "Implement SMMU passthrough using the default domain".
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | * | | | | iommu/arm-smmu: Correct sid to mask (Peng Fan, 2017-04-25, 1 file, -1/+1)
Going by the message "SMR mask 0x%x out of range for SMMU", the code needs to use mask here, not sid.
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | * | | | | iommu/io-pgtable-arm: Avoid shift overflow in block size (Robin Murphy, 2017-04-06, 1 file, -1/+1)
The recursive nature of __arm_lpae_{map,unmap}() means that ARM_LPAE_BLOCK_SIZE() is evaluated for every level, including those where block mappings aren't possible. This in itself is harmless enough, as we will only ever be called with valid sizes from the pgsize_bitmap, and thus always recurse down past any imaginary block sizes. The only problem is that most of those imaginary sizes overflow the type used for the calculation, and thus trigger warnings under UBsan:
[ 63.020939] ================================================================================
[ 63.021284] UBSAN: Undefined behaviour in drivers/iommu/io-pgtable-arm.c:312:22
[ 63.021602] shift exponent 39 is too large for 32-bit type 'int'
[ 63.021909] CPU: 0 PID: 1119 Comm: lkvm Not tainted 4.7.0-rc3+ #819
[ 63.022163] Hardware name: FVP Base (DT)
[ 63.022345] Call trace:
[ 63.022629] [<ffffff900808f258>] dump_backtrace+0x0/0x3a8
[ 63.022975] [<ffffff900808f614>] show_stack+0x14/0x20
[ 63.023294] [<ffffff90086bc9dc>] dump_stack+0x104/0x148
[ 63.023609] [<ffffff9008713ce8>] ubsan_epilogue+0x18/0x68
[ 63.023956] [<ffffff9008714410>] __ubsan_handle_shift_out_of_bounds+0x18c/0x1bc
[ 63.024365] [<ffffff900890fcb0>] __arm_lpae_map+0x720/0xae0
[ 63.024732] [<ffffff9008910170>] arm_lpae_map+0x100/0x190
[ 63.025049] [<ffffff90089183d8>] arm_smmu_map+0x78/0xc8
[ 63.025390] [<ffffff9008906c18>] iommu_map+0x130/0x230
[ 63.025763] [<ffffff9008bf7564>] vfio_iommu_type1_attach_group+0x4bc/0xa00
[ 63.026156] [<ffffff9008bf3c78>] vfio_fops_unl_ioctl+0x320/0x580
[ 63.026515] [<ffffff9008377420>] do_vfs_ioctl+0x140/0xd28
[ 63.026858] [<ffffff9008378094>] SyS_ioctl+0x8c/0xa0
[ 63.027179] [<ffffff9008086e70>] el0_svc_naked+0x24/0x28
[ 63.027412] ================================================================================
Perform the shift in a 64-bit type to prevent the theoretical overflow and keep the peace. As it turns out, this generates identical code for 32-bit ARM, and marginally shorter AArch64 code, so it's good all round.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
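The fix pattern is just to force the shift to be evaluated in a 64-bit type; schematically (BLOCK_SIZE below is a placeholder for the real ARM_LPAE_* macro):

    /* Before: the shift is done in 'int' and overflows for exponents >= 31 */
    #define BLOCK_SIZE(shift)   (1 << (shift))

    /* After: evaluate in a 64-bit type, avoiding the undefined behaviour */
    #define BLOCK_SIZE(shift)   (1ULL << (shift))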
| | | | | * | | | | iommu: Allow default domain type to be set on the kernel command line (Will Deacon, 2017-04-06, 1 file, -3/+25)
The IOMMU core currently initialises the default domain for each group to IOMMU_DOMAIN_DMA, under the assumption that devices will use IOMMU-backed DMA ops by default. However, in some cases it is desirable for the DMA ops to bypass the IOMMU for performance reasons, reserving use of translation for subsystems such as VFIO that require it for enforcing device isolation.
Rather than modify each IOMMU driver to provide different semantics for DMA domains, instead we introduce a command line parameter that can be used to change the type of the default domain. Passthrough can then be specified using "iommu.passthrough=1" on the kernel command line.
Signed-off-by: Will Deacon <will.deacon@arm.com>
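A usage example plus the kind of early_param plumbing this implies is sketched below (the internal variable and function names are assumptions; only the "iommu.passthrough=1" spelling comes from the text above):

    /* Kernel command line:
     *   iommu.passthrough=1  -> default domain type IOMMU_DOMAIN_IDENTITY
     *   iommu.passthrough=0  -> keep the default IOMMU_DOMAIN_DMA
     */
    static unsigned int iommu_def_domain_type = IOMMU_DOMAIN_DMA;

    static int __init iommu_set_def_domain_type(char *str)
    {
            bool pt;
            int ret = kstrtobool(str, &pt);

            if (ret)
                    return ret;

            iommu_def_domain_type = pt ? IOMMU_DOMAIN_IDENTITY : IOMMU_DOMAIN_DMA;
            return 0;
    }
    early_param("iommu.passthrough", iommu_set_def_domain_type);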
| | | | | * | | | | iommu/arm-smmu-v3: Install bypass STEs for IOMMU_DOMAIN_IDENTITY domains (Will Deacon, 2017-04-06, 1 file, -21/+37)
In preparation for allowing the default domain type to be overridden, this patch adds support for IOMMU_DOMAIN_IDENTITY domains to the ARM SMMUv3 driver. An identity domain is created by placing the corresponding stream table entries into "bypass" mode, which allows transactions to flow through the SMMU without any translation.
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | | iommu/arm-smmu-v3: Make arm_smmu_install_ste_for_dev return void (Will Deacon, 2017-04-06, 1 file, -9/+3)
arm_smmu_install_ste_for_dev cannot fail and always returns 0; however, the fact that it returns int means that callers end up implementing redundant error handling code which complicates STE tracking and is never executed. This patch changes the return type of arm_smmu_install_ste_for_dev to void, to make it explicit that it cannot fail.
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | | iommu/arm-smmu: Install bypass S2CRs for IOMMU_DOMAIN_IDENTITY domains (Will Deacon, 2017-04-06, 1 file, -3/+17)
In preparation for allowing the default domain type to be overridden, this patch adds support for IOMMU_DOMAIN_IDENTITY domains to the ARM SMMU driver. An identity domain is created by placing the corresponding S2CR registers into "bypass" mode, which allows transactions to flow through the SMMU without any translation.
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | | iommu/arm-smmu: Restrict domain attributes to UNMANAGED domains (Will Deacon, 2017-04-06, 2 files, -0/+12)
The ARM SMMU drivers provide a DOMAIN_ATTR_NESTING domain attribute, which allows callers of the IOMMU API to request that the page table for a domain is installed at stage-2, if supported by the hardware. Since setting this attribute only makes sense for UNMANAGED domains, this patch returns -ENODEV if the domain_{get,set}_attr operations are called on other domain types.
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | | iommu/arm-smmu: Add global SMR masking property (Robin Murphy, 2017-04-06, 1 file, -1/+3)
The current SMR masking support using a 2-cell iommu-specifier is primarily intended to handle individual masters with large and/or complex Stream ID assignments; it quickly gets a bit clunky in other SMR use-cases where we just want to consistently mask out the same part of every Stream ID (e.g. for MMU-500 configurations where the appended TBU number gets in the way unnecessarily). Let's add a new property to allow a single global mask value to better fit the latter situation.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Nipun Gupta <nipun.gupta@nxp.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | | iommu/arm-smmu: Poll for TLB sync completion more effectively (Robin Murphy, 2017-04-06, 1 file, -8/+10)
On relatively slow development platforms and software models, the inefficiency of our TLB sync loop tends not to show up - for instance on a Juno r1 board I typically see the TLBI has completed of its own accord by the time we get to the sync, such that the latter finishes instantly.
However, on larger systems doing real I/O, it's less realistic for the TLBs to go idle immediately, and at that point falling into the 1MHz polling loop turns out to throw away performance drastically. Let's strike a balance by polling more than once between pauses, such that we have much more chance of catching normal operations completing before committing to the fixed delay, but also backing off exponentially, since if a sync really hasn't completed within one or two "reasonable time" periods, it becomes increasingly unlikely that it ever will.
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
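The "poll a short burst, then back off exponentially" idea can be sketched as below (constants, the status bit and the function name are illustrative, not the driver's):

    #define TLB_SPIN_COUNT    10        /* quick polls before any delay */
    #define TLB_LOOP_TIMEOUT  1000000   /* upper bound before giving up */

    static void tlb_sync_wait(void __iomem *status_reg)
    {
            unsigned int delay, spin;

            for (delay = 1; delay < TLB_LOOP_TIMEOUT; delay *= 2) {
                    for (spin = TLB_SPIN_COUNT; spin > 0; spin--) {
                            if (!(readl_relaxed(status_reg) & 0x1))
                                    return;            /* sync has completed */
                            cpu_relax();
                    }
                    udelay(delay);                     /* exponential backoff */
            }
            pr_err("TLB sync timed out, SMMU may be wedged\n");
    }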
| | | | | * | | | | iommu/arm-smmu: Use per-context TLB sync as appropriate (Robin Murphy, 2017-04-06, 1 file, -33/+80)
TLB synchronisation typically involves the SMMU blocking all incoming transactions until the TLBs report completion of all outstanding operations. In the common SMMUv2 configuration of a single distributed SMMU serving multiple peripherals, that means that a single unmap request has the potential to bring the hammer down on the entire system if synchronised globally. Since stage 1 contexts, and stage 2 contexts under SMMUv2, offer local sync operations, let's make use of those wherever we can in the hope of minimising global disruption.
To that end, rather than add any more branches to the already unwieldy monolithic TLB maintenance ops, break them up into smaller, neater functions which we can then mix and match as appropriate.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | | iommu/arm-smmu: Tidy up context bank indexing (Robin Murphy, 2017-04-06, 1 file, -16/+15)
ARM_AMMU_CB() is calculated relative to ARM_SMMU_CB_BASE(), but the latter is never of use on its own, and what we end up with is the same ARM_SMMU_CB_BASE() + ARM_AMMU_CB() expression being duplicated at every callsite. Folding the two together gives us a self-contained context bank accessor which is much more pleasant to work with.
Secondly, we might as well simplify CB_BASE itself at the same time. We use the address space size for its own sake precisely once, at probe time, and every other usage is to dynamically calculate CB_BASE over and over and over again. Let's flip things around so that we just maintain the CB_BASE address directly.
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | | iommu/arm-smmu: Simplify ASID/VMID handling (Robin Murphy, 2017-04-06, 1 file, -17/+19)
Calculating ASIDs/VMIDs dynamically from arm_smmu_cfg was a neat trick, but the global uniqueness workaround makes it somewhat more awkward, and means we end up having to pass extra state around in certain cases just to keep a handle on the offset.
We already have 16 bits going spare in arm_smmu_cfg; let's just precalculate an ASID/VMID, plop it in there, and tidy up the users accordingly. We'd also need something like this anyway if we ever get near to thinking about SVM, so it's no bad thing.
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | | iommu/arm-smmu: Fix 16-bit ASID configuration (Sunil Goutham, 2017-04-06, 1 file, -19/+23)
16-bit ASID support should be enabled before initializing TTBR0/1, otherwise only the LSB 8 bits of the ASID will be considered. Hence move the configuration of the TTBCR register ahead of TTBR0/1 while initializing the context bank.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
[will: rewrote comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
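The ordering constraint means the context-bank init writes end up looking roughly like this (a sketch of the ordering only; the register macros and value names are assumptions, not the literal driver code):

    /* Write TTBCR (which carries the 16-bit-ASID enable) before the TTBRs,
     * otherwise the ASID field written into TTBR0/1 is truncated to 8 bits. */
    writel_relaxed(ttbcr, cb_base + ARM_SMMU_CB_TTBCR);
    writeq_relaxed(ttbr0 | (u64)asid << TTBRn_ASID_SHIFT, cb_base + ARM_SMMU_CB_TTBR0);
    writeq_relaxed(ttbr1 | (u64)asid << TTBRn_ASID_SHIFT, cb_base + ARM_SMMU_CB_TTBR1);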
| | | | | * | | | | iommu/arm-smmu: Print message when Cavium erratum 27704 was detected (Robert Richter, 2017-04-06, 1 file, -0/+1)
| | | | | | |_|/ /
| | | | | |/| | |
Firmware is responsible for properly enabling smmu workarounds. Print a message for better diagnostics when Cavium erratum 27704 was detected.
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | * | | | | iommu/mediatek: Teach MTK-IOMMUv1 about 'struct iommu_device' (Joerg Roedel, 2017-04-03, 1 file, -1/+24)
| | | | |/ / / /
Make use of the iommu_device_register() interface.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | * | | | | iommu/rockchip: Make use of 'struct iommu_device' (Joerg Roedel, 2017-04-03, 1 file, -2/+28)
| | | |/ / / /
Register hardware IOMMUs separately with the iommu-core code and add a sysfs representation of the iommu topology.
Tested-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | * | | | | iommu/omap: Add iommu-group support (Joerg Roedel, 2017-04-20, 2 files, -1/+39)
Support for IOMMU groups will become mandatory for drivers, so add it to the omap iommu driver.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
[s-anna@ti.com: minor error cleanups]
Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | * | | | | iommu/omap: Make use of 'struct iommu_device' (Joerg Roedel, 2017-04-20, 2 files, -0/+32)
Modify the driver to register individual iommus and establish links between devices and iommus in sysfs.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
[s-anna@ti.com: fix some cleanup issues during failures]
Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | * | | | | iommu/omap: Store iommu_dev pointer in arch_data (Joerg Roedel, 2017-04-20, 2 files, -35/+23)
Instead of finding the matching IOMMU for a device using string comparison functions, store the pointer to the iommu_dev in arch_data during the omap_iommu_add_device callback and reset it during the omap_iommu_remove_device callback.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
[s-anna@ti.com: few minor cleanups]
Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | * | | | | iommu/omap: Move data structures to omap-iommu.h (Joerg Roedel, 2017-04-20, 2 files, -16/+33)
The internal data structures are scattered over various header and C files. Consolidate them in omap-iommu.h. While at it, add the kerneldoc comment for the missing iommu domain variable and revise the iommu_arch_data name.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
[s-anna@ti.com: revise kerneldoc comments]
Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>