path: root/drivers/iommu
Commit message (Author, Date, Files, Lines)
* iommu/amd: Correct the wrong setting of alias DTE in do_attach (Baoquan He, 2016-01-29, 1 file, -1/+1)
    In the commit below, the alias DTE is set when its peripheral's DTE is set
    up. However, a code bug causes the wrong alias DTE to be set; correct it in
    this patch.

        commit e25bfb56ea7f046b71414e02f80f620deb5c6362
        Author: Joerg Roedel <jroedel@suse.de>
        Date:   Tue Oct 20 17:33:38 2015 +0200

            iommu/amd: Set alias DTE in do_attach/do_detach

    Signed-off-by: Baoquan He <bhe@redhat.com>
    Tested-by: Mark Hounschell <markh@compro.net>
    Cc: stable@vger.kernel.org # v4.4
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/vt-d: Don't skip PCI devices when disabling IOTLB (Jeremy McNicoll, 2016-01-29, 1 file, -1/+1)
    Fix a simple typo when disabling IOTLB on PCI(e) devices.

    Fixes: b16d0cb9e2fc ("iommu/vt-d: Always enable PASID/PRI PCI capabilities before ATS")
    Cc: stable@vger.kernel.org # v4.4
    Signed-off-by: Jeremy McNicoll <jmcnicol@redhat.com>
    Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/io-pgtable-arm: Fix io-pgtable-arm build failure (Lada Trimasova, 2016-01-29, 1 file, -0/+1)
    Trying to build a kernel for ARC with both options CONFIG_COMPILE_TEST and
    CONFIG_IOMMU_IO_PGTABLE_LPAE enabled (e.g. as a result of "make
    allyesconfig") results in the following build failure:

        | CC      drivers/iommu/io-pgtable-arm.o
        | linux/drivers/iommu/io-pgtable-arm.c: In function ‘__arm_lpae_alloc_pages’:
        | linux/drivers/iommu/io-pgtable-arm.c:221:3:
        |   error: implicit declaration of function ‘dma_map_single’
        |   [-Werror=implicit-function-declaration]
        |   dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE);
        |   ^
        | linux/drivers/iommu/io-pgtable-arm.c:221:42:
        |   error: ‘DMA_TO_DEVICE’ undeclared (first use in this function)
        |   dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE);
        |   ^

    Since IOMMU_IO_PGTABLE_LPAE depends on DMA API, io-pgtable-arm.c should
    include linux/dma-mapping.h. This fixes the reported failure.

    Cc: Alexey Brodkin <abrodkin@synopsys.com>
    Cc: Vineet Gupta <vgupta@synopsys.com>
    Cc: Joerg Roedel <joro@8bytes.org>
    Signed-off-by: Lada Trimasova <ltrimas@synopsys.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
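    The fix described above amounts to a single added include. A minimal sketch;
    the neighbouring includes are illustrative, not the file's exact list:

        /* drivers/iommu/io-pgtable-arm.c (surrounding includes illustrative) */
        #include <linux/iommu.h>
        #include <linux/slab.h>
        #include <linux/dma-mapping.h>  /* dma_map_single(), DMA_TO_DEVICE */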
* Merge tag 'iommu-updates-v4.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu (Linus Torvalds, 2016-01-19, 16 files, -1048/+401)
    Pull IOMMU updates from Joerg Roedel:
     "The updates include:

      - Small code cleanups in the AMD IOMMUv2 driver

      - Scalability improvements for the DMA-API implementation of the AMD
        IOMMU driver. This is just a starting point, but already showed some
        good improvements in my tests.

      - Removal of the unused Renesas IPMMU/IPMMUI driver

      - Updates for ARM-SMMU include:

         * Some fixes to get the driver working nicely on Broadcom hardware

         * A change to the io-pgtable API to indicate the unit in which to
           flush (all callers converted, with Ack from Laurent)

         * Use of devm_* for allocating/freeing the SMMUv3 buffers

      - Some other small fixes and improvements for other drivers"

    * tag 'iommu-updates-v4.5' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (46 commits)
      iommu/vt-d: Fix up error handling in alloc_iommu
      iommu/vt-d: Check the return value of iommu_device_create()
      iommu/amd: Remove an unneeded condition
      iommu/amd: Preallocate dma_ops apertures based on dma_mask
      iommu/amd: Use trylock to aquire bitmap_lock
      iommu/amd: Make dma_ops_domain->next_index percpu
      iommu/amd: Relax locking in dma_ops path
      iommu/amd: Initialize new aperture range before making it visible
      iommu/amd: Build io page-tables with cmpxchg64
      iommu/amd: Allocate new aperture ranges in dma_ops_alloc_addresses
      iommu/amd: Optimize dma_ops_free_addresses
      iommu/amd: Remove need_flush from struct dma_ops_domain
      iommu/amd: Iterate over all aperture ranges in dma_ops_area_alloc
      iommu/amd: Flush iommu tlb in dma_ops_free_addresses
      iommu/amd: Rename dma_ops_domain->next_address to next_index
      iommu/amd: Remove 'start' parameter from dma_ops_area_alloc
      iommu/amd: Flush iommu tlb in dma_ops_aperture_alloc()
      iommu/amd: Retry address allocation within one aperture
      iommu/amd: Move aperture_range.offset to another cache-line
      iommu/amd: Add dma_ops_aperture_alloc() function
      ...
| *-----------. Merge branches 's390', 'arm/renesas', 'arm/msm', 'arm/shmobile', 'arm/smmu', 'x86/amd' and 'x86/vt-d' into next (Joerg Roedel, 2016-01-19, 16 files, -1048/+401)
| | | | | | | | * iommu/vt-d: Fix up error handling in alloc_iommu (Joerg Roedel, 2016-01-07, 1 file, -7/+7)
    Only check for error when iommu->iommu_dev has been assigned and only
    assign drhd->iommu when the function can't fail anymore.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | | * iommu/vt-d: Check the return value of iommu_device_create() (Nicholas Krause, 2016-01-07, 1 file, -0/+6)
    Add the proper check to alloc_iommu() to make sure the call to
    iommu_device_create() completed successfully; if it did not, free the
    previously allocated resources and return the error code to the caller.

    Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Remove an unneeded condition (Dan Carpenter, 2016-01-07, 1 file, -5/+3)
    get_device_id() returns an unsigned short device id. It never fails and it
    never returns a negative so we can remove this condition.

    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Preallocate dma_ops apertures based on dma_mask (Joerg Roedel, 2015-12-28, 1 file, -7/+53)
    Preallocate between 4 and 8 apertures when a device gets its dma_mask.
    With more apertures we reduce the lock contention of the domain lock
    significantly.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Use trylock to aquire bitmap_lock (Joerg Roedel, 2015-12-28, 1 file, -3/+17)
    First search for a non-contended aperture with trylock before spinning.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
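    A minimal sketch of the trylock-first pattern described above; the structure
    layout and the try_alloc() helper are placeholders, not the driver's actual
    code:

        #include <linux/spinlock.h>

        /* illustrative layout, not the driver's struct */
        struct aperture {
                spinlock_t bitmap_lock;
                unsigned long *bitmap;
        };

        /* placeholder for the real bitmap search; returns -1UL on failure */
        static unsigned long try_alloc(struct aperture *ap, unsigned int pages);

        static unsigned long alloc_address(struct aperture *aps, int num,
                                           unsigned int pages)
        {
                unsigned long addr = -1UL;
                int i;

                /* First pass: only touch apertures whose lock is uncontended. */
                for (i = 0; i < num; i++) {
                        if (!spin_trylock(&aps[i].bitmap_lock))
                                continue;
                        addr = try_alloc(&aps[i], pages);
                        spin_unlock(&aps[i].bitmap_lock);
                        if (addr != -1UL)
                                return addr;
                }

                /* Second pass: fall back to spinning on the locks. */
                for (i = 0; i < num; i++) {
                        spin_lock(&aps[i].bitmap_lock);
                        addr = try_alloc(&aps[i], pages);
                        spin_unlock(&aps[i].bitmap_lock);
                        if (addr != -1UL)
                                return addr;
                }

                return addr;
        }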
| | | | | | | * | iommu/amd: Make dma_ops_domain->next_index percpu (Joerg Roedel, 2015-12-28, 1 file, -10/+29)
    Make this pointer percpu so that each CPU starts searching for new
    addresses in the range where it last stopped, which has a higher
    probability of still being in the cache.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
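    A sketch of the per-CPU cursor described above; the structure and helper
    names are illustrative, not the driver's actual code:

        #include <linux/percpu.h>

        struct dma_domain_sketch {
                u32 __percpu *next_index;       /* per-CPU search start */
        };

        static int init_next_index(struct dma_domain_sketch *dom)
        {
                dom->next_index = alloc_percpu(u32);
                return dom->next_index ? 0 : -ENOMEM;
        }

        /* Each CPU resumes its search where it last stopped. */
        static u32 next_index_get(struct dma_domain_sketch *dom)
        {
                return this_cpu_read(*dom->next_index);
        }

        static void next_index_set(struct dma_domain_sketch *dom, u32 index)
        {
                this_cpu_write(*dom->next_index, index);
        }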
| | | | | | | * | iommu/amd: Relax locking in dma_ops path (Joerg Roedel, 2015-12-28, 1 file, -59/+11)
    Remove the long holding times of the domain->lock and rely on the
    bitmap_lock instead.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Initialize new aperture range before making it visible (Joerg Roedel, 2015-12-28, 1 file, -13/+20)
    Make sure the aperture range is fully initialized before it is visible to
    the address allocator.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Build io page-tables with cmpxchg64 (Joerg Roedel, 2015-12-28, 1 file, -3/+13)
    This allows the page-tables to be built up without holding any locks. As a
    consequence it removes the need to pre-populate dma_ops page-tables.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
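    A sketch of the lock-free table installation this describes: allocate a
    lower-level table, publish it with cmpxchg64() only if the slot is still
    empty, and free the local copy if another CPU won the race. PTE_PRESENT and
    table_from_pte() are placeholders:

        static u64 *get_or_install_table(u64 *pte_slot, gfp_t gfp)
        {
                u64 old, new;
                u64 *table;

                old = READ_ONCE(*pte_slot);
                if (old & PTE_PRESENT)                  /* placeholder bit */
                        return table_from_pte(old);     /* placeholder decode */

                table = (u64 *)get_zeroed_page(gfp);
                if (!table)
                        return NULL;

                new = virt_to_phys(table) | PTE_PRESENT;

                /* Publish only if nobody else installed a table meanwhile. */
                if (cmpxchg64(pte_slot, old, new) != old) {
                        free_page((unsigned long)table);
                        return table_from_pte(READ_ONCE(*pte_slot));
                }

                return table;
        }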
| | | | | | | * | iommu/amd: Allocate new aperture ranges in dma_ops_alloc_addresses (Joerg Roedel, 2015-12-28, 1 file, -19/+10)
    It really belongs there and not in __map_single.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Optimize dma_ops_free_addresses (Joerg Roedel, 2015-12-28, 1 file, -2/+3)
    Don't flush the iommu tlb when we free something behind the current
    next_bit pointer. Update the next_bit pointer instead and let the flush
    happen on the next wraparound in the allocation path.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Remove need_flush from struct dma_ops_domain (Joerg Roedel, 2015-12-28, 1 file, -24/+6)
    The flushing of iommu tlbs is now done on a per-range basis. So there is no
    need anymore for domain-wide flush tracking.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Iterate over all aperture ranges in dma_ops_area_alloc (Joerg Roedel, 2015-12-28, 1 file, -17/+11)
    This way we don't need to care about the next_index wrapping around in
    dma_ops_alloc_addresses.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Flush iommu tlb in dma_ops_free_addresses (Joerg Roedel, 2015-12-28, 1 file, -2/+4)
    Instead of setting need_flush, do the flush directly in
    dma_ops_free_addresses.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Rename dma_ops_domain->next_address to next_index (Joerg Roedel, 2015-12-28, 1 file, -13/+13)
    It points to the next aperture index to allocate from. We don't need the
    full address anymore because this is now tracked in struct aperture_range.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Remove 'start' parameter from dma_ops_area_alloc (Joerg Roedel, 2015-12-28, 1 file, -6/+4)
    Parameter is not needed because the value is part of the already passed in
    struct dma_ops_domain.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Flush iommu tlb in dma_ops_aperture_alloc() (Joerg Roedel, 2015-12-28, 1 file, -5/+16)
    Since the allocator wraparound happens in this function now, flush the
    iommu tlb there too.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Retry address allocation within one aperture (Joerg Roedel, 2015-12-28, 1 file, -10/+19)
    Instead of skipping to the next aperture, first try again in the current
    one.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Move aperture_range.offset to another cache-line (Joerg Roedel, 2015-12-28, 1 file, -2/+1)
    Moving it before the pte_pages array puts it into the same cache-line as
    the spin-lock and the bitmap array pointer. This should save a cache-miss.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Add dma_ops_aperture_alloc() function (Joerg Roedel, 2015-12-28, 1 file, -12/+25)
    Make this a wrapper around iommu_ops_area_alloc() for now and add more
    logic to this function later on.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Pass correct shift to iommu_area_alloc() (Joerg Roedel, 2015-12-28, 1 file, -1/+1)
    The page-offset of the aperture must be passed instead of 0.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Flush the IOMMU TLB before the addresses are freed (Joerg Roedel, 2015-12-28, 1 file, -4/+4)
    This allows the bitmap_lock to be held only for a very short period of
    time.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Flush IOMMU TLB on __map_single error path (Joerg Roedel, 2015-12-28, 1 file, -0/+2)
    There have been present PTEs which in theory could have made it to the
    IOMMU TLB. Flush the addresses out on the error path to make sure no stale
    entries remain.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Introduce bitmap_lock in struct aperture_range (Joerg Roedel, 2015-12-28, 1 file, -0/+10)
    This lock only protects the address allocation bitmap in one aperture.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Move 'struct dma_ops_domain' definition to amd_iommu.c (Joerg Roedel, 2015-12-28, 2 files, -40/+40)
    It is only used in this file anyway, so keep it there. Same with
    'struct aperture_range'.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Warn only once on unexpected pte value (Joerg Roedel, 2015-12-28, 1 file, -2/+2)
    This prevents possible flooding of the kernel log.

    Signed-off-by: Joerg Roedel <jroedel@suse.de>
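    A minimal illustration of the idea; the function and the check are
    placeholders, not the driver's code:

        static int set_pte_sketch(u64 *pte, u64 val)
        {
                /* WARN_ON() prints on every hit; WARN_ON_ONCE() warns once. */
                if (WARN_ON_ONCE(*pte))
                        return -EBUSY;

                *pte = val;
                return 0;
        }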
| | | | | | | * | iommu/amd: Constify mmu_notifier_ops structures (Julia Lawall, 2015-12-14, 1 file, -1/+1)
    This mmu_notifier_ops structure is never modified, so declare it as const,
    like the other mmu_notifier_ops structures.

    Done with the help of Coccinelle.

    Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
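    The shape of the change, as a sketch; the callback names are placeholders:

        #include <linux/mmu_notifier.h>

        /* before: static struct mmu_notifier_ops iommu_mn = { ... }; */
        static const struct mmu_notifier_ops iommu_mn = {
                .release          = mn_release,           /* placeholder callbacks */
                .invalidate_range = mn_invalidate_range,
        };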
| | | | | | | * | iommu/amd: Cleanup error handling in do_fault() (Joerg Roedel, 2015-12-14, 1 file, -16/+8)
    Get rid of the three error paths that look the same and move error handling
    to a single place.

    Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
    Acked-By: David Woodhouse <David.Woodhouse@intel.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | | * | iommu/amd: Correctly set flags for handle_mm_fault call (Joerg Roedel, 2015-12-14, 1 file, -5/+9)
    Instead of just checking for a write access, calculate the flags that are
    passed to handle_mm_fault() more precisely and use the pre-defined macros.

    Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
    Acked-By: David Woodhouse <David.Woodhouse@intel.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | | | * | | Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu (Joerg Roedel, 2015-12-22, 5 files, -173/+119)
| | | | | | | * | iommu/io-pgtable-arm: Ensure we free the final level on teardown (Will Deacon, 2015-12-17, 1 file, -5/+6)
    When tearing down page tables, we return early for the final level since we
    know that we won't have any table pointers to follow. Unfortunately, this
    also means that we forget to free the final level, so we end up leaking
    memory.

    Fix the issue by always freeing the current level, but just don't bother to
    iterate over the ptes if we're at the final level.

    Cc: <stable@vger.kernel.org>
    Reported-by: Zhang Bo <zhangbo_a@xiaomi.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | | | * | iommu/arm-smmu: Use STE.S1STALLD only when supported (Prem Mallappa, 2015-12-17, 1 file, -3/+12)
    It is ILLEGAL to set STE.S1STALLD to 1 if stage 1 is enabled and either the
    stall or terminate models are not supported.

    This patch fixes the STALLD check and ensures that we don't set STALLD in
    the STE when it is not supported.

    Signed-off-by: Prem Mallappa <pmallapp@broadcom.com>
    [will: consistently use IDR0_STALL_MODEL_* prefix]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | | | * | iommu/arm-smmu: Fix write to GERRORN register (Prem Mallappa, 2015-12-17, 1 file, -12/+12)
    When acknowledging global errors, the GERRORN register should be written
    with the original GERROR value so that active errors are toggled.

    This patch fixes the driver to write the original GERROR value to GERRORN,
    instead of an active error mask.

    Signed-off-by: Prem Mallappa <pmallapp@broadcom.com>
    [will: reworked use of active bits and fixed commit log]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | | | * | iommu/io-pgtable: Make io_pgtable_ops_to_pgtable() macro common (Robin Murphy, 2015-12-17, 2 files, -3/+2)
    There is no need to keep a useful accessor for a public structure hidden
    away in a private implementation. Move it out alongside the structure
    definition so that other implementations may reuse it.

    Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | | | * | iommu/arm-smmu: Invalidate TLBs properly (Robin Murphy, 2015-12-17, 2 files, -4/+17)
    When invalidating an IOVA range potentially spanning multiple pages, such
    as when removing an entire intermediate-level table, we currently only
    issue an invalidation for the first IOVA of that range. Since the
    architecture specifies that address-based TLB maintenance operations target
    a single entry, an SMMU could feasibly retain live entries for subsequent
    pages within that unmapped range, which is not good.

    Make sure we hit every possible entry by iterating over the whole range at
    the granularity provided by the pagetable implementation.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    [will: added missing semicolons...]
    Signed-off-by: Will Deacon <will.deacon@arm.com>
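    A sketch of the per-entry invalidation loop described above; issue_tlbi_va()
    is a placeholder standing in for the SMMU's address-based invalidation
    command:

        static void invalidate_range(unsigned long iova, size_t size,
                                     size_t granule)
        {
                unsigned long end = iova + size;

                /* One TLB invalidation per granule-sized entry in the range. */
                while (iova < end) {
                        issue_tlbi_va(iova);    /* placeholder command issue */
                        iova += granule;
                }
        }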
| | | | | | | * | iommu/io-pgtable: Indicate granule for TLB maintenance (Robin Murphy, 2015-12-17, 5 files, -18/+21)
    IOMMU hardware with range-based TLB maintenance commands can work happily
    with the iova and size arguments passed via the tlb_add_flush callback, but
    for IOMMUs which require separate commands per entry in the range, it is
    not straightforward to infer the necessary granularity when it comes to
    issuing the actual commands.

    Add an additional argument indicating the granularity for the benefit of
    drivers needing to know, and update the ARM LPAE code appropriately (for
    non-leaf invalidations we currently just assume the worst-case page
    granularity rather than walking the table to check).

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | | | * | iommu/io-pgtable-arm: Avoid dereferencing bogus PTEs (Robin Murphy, 2015-12-17, 1 file, -3/+5)
    In the case of corrupted page tables, or when an invalid size is given,
    __arm_lpae_unmap() may recurse beyond the maximum number of levels.
    Unfortunately the detection of this error condition only happens *after*
    calculating a nonsense offset from something which might not be a valid
    table pointer and dereferencing that to see if it is a valid PTE.

    Make things a little more robust by checking the level is valid before
    doing anything which depends on it being so.

    Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | | | * | iommu/arm-smmu: Handle unknown CERROR values gracefully (Will Deacon, 2015-12-17, 1 file, -3/+5)
    Whilst the architecture only defines a few of the possible CERROR values,
    we should handle unknown values gracefully rather than go out of bounds
    trying to print out an error description.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | | | * | iommu/arm-smmu: Correct group reference count (Peng Fan, 2015-12-17, 2 files, -7/+12)
    The basic flow for adding a device:

      arm_smmu_add_device
        |->iommu_group_get_for_dev
        |    |->iommu_group_get               return group;  (1)
        |    |->ops->device_group           : Init/increase reference count to/by 1.
        |    |->iommu_group_add_device      : Increase reference count by 1.
        |    return group                     (2)
        |->return 0;

    Since we are adding one device, the flow is (2) and the group reference
    count will be increased by 2. So we need to add iommu_group_put at the end
    of arm_smmu_add_device to decrease the count by 1.

    Also take the failure path into consideration when adding a device fails.

    Signed-off-by: Peng Fan <van.freenix@gmail.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
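    A sketch of the balanced reference counting described above; the
    driver-specific setup is omitted and the function name is illustrative:

        #include <linux/err.h>
        #include <linux/iommu.h>

        static int add_device_sketch(struct device *dev)
        {
                struct iommu_group *group;

                /* Leaves the caller holding an extra reference (see flow above). */
                group = iommu_group_get_for_dev(dev);
                if (IS_ERR(group))
                        return PTR_ERR(group);

                /* ... driver-specific setup; drop the reference on failure too ... */

                iommu_group_put(group);
                return 0;
        }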
| | | | | | | * | iommu/arm-smmu: Use incoming shareability attributes in bypass mode (Will Deacon, 2015-12-17, 1 file, -0/+5)
    When we initialise a bypass STE, we memset the structure to zero and set
    the Valid and Config fields to indicate that the stream should bypass the
    SMMU. Unfortunately, this results in an SHCFG field of 0 which means that
    the shareability of any incoming transactions is overridden with
    non-shareable, leading to potential coherence problems down the line.

    This patch fixes the issue by initialising bypass STEs to use the incoming
    shareability attributes. When translation is in effect at either stage 1 or
    stage 2, the shareability is determined by the page tables.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | | | * | iommu/arm-smmu: Delete an unnecessary check before free_io_pgtable_ops() (Markus Elfring, 2015-12-17, 1 file, -3/+1)
    The free_io_pgtable_ops() function tests whether its argument is NULL and
    then returns immediately. Thus the test around the call is not needed.

    This issue was detected by using the Coccinelle software.

    Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
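    The pattern being removed, sketched; the pgtbl_ops field name is shown for
    illustration only:

        /* before: redundant check duplicating the one inside the callee */
        if (smmu_domain->pgtbl_ops)
                free_io_pgtable_ops(smmu_domain->pgtbl_ops);

        /* after: free_io_pgtable_ops() already returns early on NULL */
        free_io_pgtable_ops(smmu_domain->pgtbl_ops);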
| | | | | | | * | iommu/arm-smmu: Convert DMA buffer allocations to the managed API (Will Deacon, 2015-12-17, 1 file, -111/+21)
    The ARM SMMUv3 driver uses dma_{alloc,free}_coherent to manage its queues
    and configuration data structures.

    This patch converts the driver to the managed (dmam_*) API, so that
    resources are freed automatically on device teardown. This greatly
    simplifies the failure paths and allows us to remove a bunch of handcrafted
    freeing code.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
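    A sketch of the managed allocation style this conversion uses; the queue
    size and function name are illustrative:

        #include <linux/dma-mapping.h>

        static int alloc_queue_sketch(struct device *dev, size_t qsz,
                                      void **base, dma_addr_t *base_dma)
        {
                /* Freed automatically when the device is torn down. */
                *base = dmam_alloc_coherent(dev, qsz, base_dma, GFP_KERNEL);
                if (!*base)
                        return -ENOMEM;

                return 0;
        }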
| | | | | | | * | iommu/arm-smmu: Remove #define for non-existent PRIQ_0_OF field (Will Deacon, 2015-12-17, 1 file, -1/+0)
    PRIQ_0_OF has been removed from the SMMUv3 architecture, so remove its
    corresponding (and unused) #define from the driver.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
| | | | | * | | | iommu/shmobile: Remove unused Renesas IPMMU/IPMMUI driver (Geert Uytterhoeven, 2015-12-14, 5 files, -642/+0)
    As of commit 44d88c754e57a6d9 ("ARM: shmobile: Remove legacy SoC code for
    R-Mobile A1"), the Renesas IPMMU/IPMMUI driver is no longer used. In theory
    it could still be used on SH-Mobile AG5 and R-Mobile A1 SoCs, but that
    requires adding DT support to the driver, which is not planned.

    Remove the driver, it can be resurrected from git history when needed.

    Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
    Acked-by: Simon Horman <horms+renesas@verge.net.au>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
| | | | * | | | iommu/msm: Use platform_register/unregister_drivers() (Thierry Reding, 2015-12-14, 1 file, -18/+7)
    These new helpers simplify implementing multi-driver modules and properly
    handle failure to register one driver by unregistering all previously
    registered drivers.

    Signed-off-by: Thierry Reding <treding@nvidia.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
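    A sketch of how these helpers are typically used; the two platform_driver
    names are placeholders for the drivers the module registers:

        #include <linux/module.h>
        #include <linux/platform_device.h>

        static struct platform_driver * const drivers[] = {
                &ctx_driver_sketch,     /* placeholder platform_driver */
                &iommu_driver_sketch,   /* placeholder platform_driver */
        };

        static int __init sketch_init(void)
        {
                /* Registers both; unwinds already-registered ones on failure. */
                return platform_register_drivers(drivers, ARRAY_SIZE(drivers));
        }

        static void __exit sketch_exit(void)
        {
                platform_unregister_drivers(drivers, ARRAY_SIZE(drivers));
        }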