path: root/drivers/pci/intel-iommu.c
Commit message | Author | Date | Files | Lines
* Merge git://git.infradead.org/iommu-2.6 | Linus Torvalds | 2009-04-03 | 1 | -19/+96
  * git://git.infradead.org/iommu-2.6:
      intel-iommu: Fix address wrap on 32-bit kernel.
      intel-iommu: Enable DMAR on 32-bit kernel.
      intel-iommu: fix PCI device detach from virtual machine
      intel-iommu: VT-d page table to support snooping control bit
      iommu: Add domain_has_cap iommu_ops
      intel-iommu: Snooping control support
  Fixed trivial conflicts in arch/x86/Kconfig and drivers/pci/intel-iommu.c.
* intel-iommu: Fix address wrap on 32-bit kernel. | Zhao, Yu | 2009-03-25 | 1 | -1/+3
  The problem is in dma_pte_clear_range and dma_pte_free_pagetable. When
  intel_unmap_single and intel_unmap_sg call them, the end address may wrap to
  zero if 'start_addr + size' overflows, so no PTE gets cleared. The uncleared
  PTE then fires the BUG_ON when it is used again to create new mappings.
  After a small modification to dma_pte_clear_range, the BUG_ON is gone.
  Tested in both 32-bit and 32-bit PAE modes on Intel X58 and Q35 platforms.
  Signed-off-by: Yu Zhao <yu.zhao@intel.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
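  A minimal, self-contained illustration of the wrap described above (plain C,
  using a 32-bit type to stand in for a 32-bit address space; this is only the
  arithmetic failure mode, not the driver's code):

      #include <stdio.h>
      #include <stdint.h>

      int main(void)
      {
              uint32_t start = 0xfffff000u;  /* last 4 KiB page of a 32-bit space */
              uint32_t size  = 0x1000u;
              uint32_t end   = start + size; /* wraps around to 0 */
              unsigned pages = 0;
              uint32_t addr;

              /* A "clear every page up to end" loop never runs once end has
               * wrapped, so in the driver the stale PTEs survived and later
               * tripped a BUG_ON when the range was mapped again. */
              for (addr = start; addr < end; addr += 0x1000u)
                      pages++;

              printf("end = %#x, pages cleared = %u\n", (unsigned)end, pages);
              return 0;
      }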
* intel-iommu: Enable DMAR on 32-bit kernel. | David Woodhouse | 2009-03-25 | 1 | -12/+8
  If we fix a few highmem-related thinkos and a couple of printk format
  warnings, the Intel IOMMU driver works fine in a 32-bit kernel.
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* intel-iommu: fix PCI device detach from virtual machine | Han, Weidong | 2009-03-25 | 1 | -0/+29
  When assigning a device behind a conventional PCI bridge or a PCIe-to-PCI/PCI-X
  bridge to a domain, its bridge must be assigned as well, and the secondary
  interface may also need to be assigned to the same domain. The dependent
  assignment is already there, but the dependent deassignment is missing when
  detaching a device from a virtual machine. This results in conventional PCI
  device assignment failing after the device has been assigned once. This
  patch adds the dependent deassignment and fixes the issue.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* intel-iommu: VT-d page table to support snooping control bit | Sheng Yang | 2009-03-24 | 1 | -1/+11
  The user can request snooping control to be enabled through the VT-d page
  table.
  Signed-off-by: Sheng Yang <sheng@linux.intel.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* iommu: Add domain_has_cap iommu_ops | Sheng Yang | 2009-03-24 | 1 | -0/+12
  This iommu_op can tell whether a domain has a specific capability, such as
  snooping control for the Intel IOMMU, which other kernel components can use
  to adjust their behaviour.
  Signed-off-by: Sheng Yang <sheng@linux.intel.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
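  A hedged sketch of how a consumer might query the new hook; both the wrapper
  name and the capability constant are assumptions drawn from this commit and
  the snooping-control commits around it, not quotes from the tree:

      /* Hypothetical caller (e.g. a VMM backend deciding whether it must do
       * explicit cache maintenance for guest DMA memory). */
      if (iommu_domain_has_cap(domain, IOMMU_CAP_CACHE_COHERENCY))
              needs_cache_flush = 0;  /* hardware snooping keeps DMA coherent */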
* intel-iommu: Snooping control support | Sheng Yang | 2009-03-24 | 1 | -5/+33
  Snooping control enables the IOMMU to guarantee DMA cache coherency and thus
  reduces the software effort (in the VMM) of maintaining effective memory
  types.
  Signed-off-by: Sheng Yang <sheng@linux.intel.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* Merge branch 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6 | Linus Torvalds | 2009-04-01 | 1 | -1/+1
  * 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6: (88 commits)
      PCI: fix HT MSI mapping fix
      PCI: don't enable too much HT MSI mapping
      x86/PCI: make pci=lastbus=255 work when acpi is on
      PCI: save and restore PCIe 2.0 registers
      PCI: update fakephp for bus_id removal
      PCI: fix kernel oops on bridge removal
      PCI: fix conflict between SR-IOV and config space sizing
      powerpc/PCI: include pci.h in powerpc MSI implementation
      PCI Hotplug: schedule fakephp for feature removal
      PCI Hotplug: rename legacy_fakephp to fakephp
      PCI Hotplug: restore fakephp interface with complete reimplementation
      PCI: Introduce /sys/bus/pci/devices/.../rescan
      PCI: Introduce /sys/bus/pci/devices/.../remove
      PCI: Introduce /sys/bus/pci/rescan
      PCI: Introduce pci_rescan_bus()
      PCI: do not enable bridges more than once
      PCI: do not initialize bridges more than once
      PCI: always scan child buses
      PCI: pci_scan_slot() returns newly found devices
      PCI: don't scan existing devices
      ...
  Fix trivial append-only conflict in Documentation/feature-removal-schedule.txt.
* PCI: add missing KERN_* constants to printks | Frank Seidel | 2009-03-19 | 1 | -1/+1
  According to the kernel janitors todo list, all printk calls that begin a
  new line should carry an appropriate KERN_* constant. These are the missing
  pieces for the PCI subsystem.
  Signed-off-by: Frank Seidel <frank@f-seidel.de>
  Reviewed-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
  Tested-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
  Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
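  For reference, the convention the commit enforces looks like this (an
  illustrative message, not a line taken from the patch itself):

      /* Every printk that starts a new line carries an explicit log level. */
      printk(KERN_ERR "intel-iommu: example error message\n");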
* Merge branch 'iommu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 2009-03-30 | 1 | -17/+33
  * 'iommu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (60 commits)
      dma-debug: make memory range checks more consistent
      dma-debug: warn of unmapping an invalid dma address
      dma-debug: fix dma_debug_add_bus() definition for !CONFIG_DMA_API_DEBUG
      dma-debug/x86: register pci bus for dma-debug leak detection
      dma-debug: add a check dma memory leaks
      dma-debug: add checks for kernel text and rodata
      dma-debug: print stacktrace of mapping path on unmap error
      dma-debug: Documentation update
      dma-debug: x86 architecture bindings
      dma-debug: add function to dump dma mappings
      dma-debug: add checks for sync_single_sg_*
      dma-debug: add checks for sync_single_range_*
      dma-debug: add checks for sync_single_*
      dma-debug: add checking for [alloc|free]_coherent
      dma-debug: add add checking for map/unmap_sg
      dma-debug: add checking for map/unmap_page/single
      dma-debug: add core checking functions
      dma-debug: add debugfs interface
      dma-debug: add kernel command line parameters
      dma-debug: add initialization code
      ...
  Fix trivial conflicts due to whitespace changes in arch/x86/kernel/pci-nommu.c.
* Merge branch 'linus' into core/iommu | Ingo Molnar | 2009-03-05 | 1 | -4/+26
* intel-iommu: make dma mapping functions static | FUJITA Tomonori | 2009-01-29 | 1 | -18/+11
  The dma ops unification enables X86 and IA64 to share intel_dma_ops, so we
  can make the dma mapping functions static. This also removes the unused
  intel_map_single().
  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* IA64: fix VT-d dma_mapping_error | FUJITA Tomonori | 2009-01-29 | 1 | -0/+6
  dma_mapping_error is used to see if dma_map_single and dma_map_page succeed.
  IA64 VT-d dma_mapping_error always says that dma_map_single is successful
  even though it could fail. Note that X86 VT-d works properly in this regard.
  This patch fixes IA64 VT-d dma_mapping_error by adding VT-d's own
  dma_mapping_error() that works for both X86_64 and IA64. VT-d uses zero as
  an error dma address, so VT-d's dma_mapping_error returns 1 if a passed dma
  address is zero (as x86's VT-d dma_mapping_error does now).
  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
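  A sketch of the check the commit describes: VT-d hands back 0 as its error
  DMA address, so the hook only has to test for zero. The function name and
  signature follow common dma ops conventions of that era and are an
  assumption, not a quote from the tree:

      /* Returns nonzero when the mapping failed (DMA address 0 is the error
       * value VT-d uses). */
      static int intel_mapping_error(struct device *dev, dma_addr_t dma_addr)
      {
              return !dma_addr;
      }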
* Merge branch 'linus' into core/iommu | Ingo Molnar | 2009-01-16 | 1 | -1/+2
  Conflicts:
      arch/ia64/include/asm/dma-mapping.h
      arch/ia64/include/asm/machvec.h
      arch/ia64/include/asm/machvec_sn2.h
* x86, ia64: convert to use generic dma_map_ops struct | FUJITA Tomonori | 2009-01-06 | 1 | -5/+4
  This converts X86 and IA64 to use include/linux/dma-mapping.h. It's a bit
  large but pretty boring. The major change for X86 is converting 'int dir' to
  'enum dma_data_direction dir' in the DMA mapping operations. The major
  change for IA64 is using map_page and unmap_page instead of map_single and
  unmap_single.
  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Acked-by: Tony Luck <tony.luck@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: remove map_single and unmap_single in struct dma_mapping_ops | FUJITA Tomonori | 2009-01-06 | 1 | -2/+0
  This patch converts dma_map_single and dma_unmap_single to use map_page and
  unmap_page respectively, and removes the unnecessary map_single and
  unmap_single in struct dma_mapping_ops. This leaves intel-iommu's
  dma_map_single and dma_unmap_single since IA64 uses them. They will be
  removed after the unification.
  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* intel-iommu: add map_page and unmap_page | FUJITA Tomonori | 2009-01-06 | 1 | -2/+22
  This is in preparation for the struct dma_mapping_ops unification: we use
  map_page and unmap_page instead of map_single and unmap_single. This uses a
  temporary workaround, an ifdef on X86_64, to avoid breaking the IA64 build;
  the workaround will be removed after the unification. Changing x86's struct
  dma_mapping_ops could break IA64, which is just wrong, and is one of the
  problems this patchset fixes. We will remove the map_single and unmap_single
  hooks in the last patch of this patchset.
  Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, dmar: start with sane state while enabling dma and interrupt-remapping | Suresh Siddha | 2009-03-17 | 1 | -0/+29
  Impact: cleanup/sanitization
  Start from a sane state while enabling dma and interrupt-remapping by
  clearing the previously recorded faults and disabling previously enabled
  queued invalidation and interrupt-remapping.
  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, x2apic: enable fault handling for intr-remapping | Suresh Siddha | 2009-03-17 | 1 | -2/+1
  Impact: interface augmentation (not yet used)
  Enable the fault handling flow for interrupt-remapping as well. The fault
  handling code is now shared by both dma-remapping and intr-remapping.
  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86, dmar: move page fault handling code to dmar.c | Suresh Siddha | 2009-03-17 | 1 | -188/+0
  Impact: code movement
  Move the page fault handling code to dmar.c. This will be shared by both the
  DMA-remapping and Intr-remapping code.
  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* iommu: fix Intel IOMMU write-buffer flushing | David Woodhouse | 2009-02-14 | 1 | -1/+15
  This is the cause of the DMA faults and disk corruption that people have
  been seeing. Some chipsets neglect to report the RWBF "capability" -- the
  flag which says that we need to flush the chipset write-buffer when changing
  the DMA page tables, to ensure that the change is visible to the IOMMU.
  Override that bit on the affected chipsets, and everything is happy again.
  Thanks to Chris and Bhavesh and others for helping to debug.
  Should resolve:
      https://bugzilla.redhat.com/show_bug.cgi?id=479996
      http://bugzilla.kernel.org/show_bug.cgi?id=12578
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Tested-and-acked-by: Chris Wright <chrisw@sous-sol.org>
  Reviewed-by: Bhavesh Davda <bhavesh@vmware.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
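  A hedged sketch of the override described above: treat the affected chipsets
  as if they had reported RWBF, so page-table updates are always followed by a
  write-buffer flush. The quirk-flag name is illustrative and not necessarily
  the identifier used in the driver:

      /* Set (e.g. by a PCI quirk) for chipsets that need write-buffer
       * flushing but do not report RWBF in their capability register. */
      static int rwbf_quirk;

      /* Flush the chipset write buffer if the hardware reports RWBF or the
       * quirk says it is needed anyway. */
      if (rwbf_quirk || cap_rwbf(iommu->cap))
              iommu_flush_write_buffer(iommu);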
* x86: disable intel_iommu support by default | Kyle McMartin | 2009-02-05 | 1 | -3/+11
  Due to recurring issues with DMAR support on certain platforms, there have
  been a number of filesystem corruption incidents reported:
      https://bugzilla.redhat.com/show_bug.cgi?id=479996
      http://bugzilla.kernel.org/show_bug.cgi?id=12578
  Provide a Kconfig option to change whether it is enabled by default. If
  disabled, it can still be re-enabled by passing intel_iommu=on to the
  kernel. Keep the .config option off by default.
  Signed-off-by: Kyle McMartin <kyle@redhat.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Acked-By: David Woodhouse <David.Woodhouse@intel.com>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Prevent oops at boot with VT-d | Dirk Hohndel | 2009-01-13 | 1 | -1/+2
  With some broken BIOSes, the data structures are filled in incorrectly when
  VT-d is enabled. This can cause a NULL pointer dereference very early in
  boot.
  Signed-off-by: Dirk Hohndel <hohndel@linux.intel.com>
  Acked-by: Yu Zhao <yu.zhao@intel.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* intel-iommu: fix bit shift at DOMAIN_FLAG_P2P_MULTIPLE_DEVICES | Mike Day | 2009-01-03 | 1 | -1/+1
  Signed-off-by: Mike Day <ncmike@ncultra.org>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* VT-d: remove now unused intel_iommu_found function | Joerg Roedel | 2009-01-03 | 1 | -6/+0
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* VT-d: register functions for the IOMMU API | Joerg Roedel | 2009-01-03 | 1 | -0/+15
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* VT-d: adapt domain iova_to_phys function for IOMMU API | Joerg Roedel | 2009-01-03 | 1 | -3/+4
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* VT-d: adapt domain map and unmap functions for IOMMU API | Joerg Roedel | 2009-01-03 | 1 | -13/+20
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* VT-d: adapt device attach and detach functions for IOMMU API | Joerg Roedel | 2009-01-03 | 1 | -12/+15
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* VT-d: adapt domain init and destroy functions for IOMMU API | Joerg Roedel | 2009-01-03 | 1 | -15/+18
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Check agaw is sufficient for mapped memory | Weidong Han | 2009-01-03 | 1 | -0/+61
  When a domain is related to multiple iommus, check that the minimum agaw is
  sufficient for the mapped memory.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Change intel iommu APIs of virtual machine domain | Weidong Han | 2009-01-03 | 1 | -70/+59
  These APIs are used by KVM to make use of VT-d.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Change domain_context_mapping_one for virtual machine domain | Weidong Han | 2009-01-03 | 1 | -3/+52
  vm_domid won't be set in the context entry; instead, find an available
  domain id for the device from its iommu. For a virtual machine domain, a
  default agaw will be set, and the top levels of the page tables are skipped
  for an iommu whose agaw is smaller than the default.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Allocation and free functions of virtual machine domain | Weidong Han | 2009-01-03 | 1 | -2/+105
  A virtual machine domain is different from a native DMA-API domain, so
  implement separate allocation and free functions for virtual machine
  domains.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Add domain_flush_cache | Weidong Han | 2009-01-03 | 1 | -17/+26
  Because a virtual machine domain may have multiple devices from different
  iommus, it cannot use __iommu_flush_cache. Use domain_flush_cache instead of
  __iommu_flush_cache in the common low level functions. In functions where a
  specific iommu is given, or where the domain cannot be obtained,
  __iommu_flush_cache is still used.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
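  A sketch of the wrapper just described; the body follows the rule stated in
  the message (flush only when the domain is not fully coherent), while the
  exact signature is an assumption rather than a quote from the driver:

      /* Flush CPU cache lines covering IOMMU structures only when some IOMMU
       * backing this domain does not snoop them itself. */
      static void domain_flush_cache(struct dmar_domain *domain,
                                     void *addr, int size)
      {
              if (!domain->iommu_coherency)
                      clflush_cache_range(addr, size);
      }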
* Add/remove domain device info for virtual machine domain | Weidong Han | 2009-01-03 | 1 | -5/+166
  Add an iommu reference count in the domain, and add a lock to protect the
  iommu settings, including iommu_bmp, iommu_count and iommu_coherency. A
  virtual machine domain may have multiple devices from different iommus, so
  it needs to do more work when adding/removing domain device info; therefore
  implement separate versions of these functions for virtual machine domains.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Add domain flag DOMAIN_FLAG_VIRTUAL_MACHINE | Weidong Han | 2009-01-03 | 1 | -0/+7
  Add this flag for VT-d domains used by virtual machines, such as KVM.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* iommu coherency | Weidong Han | 2009-01-03 | 1 | -0/+24
  In a dmar_domain, more than one iommu may be included in iommu_bmp. Because
  the "Coherency" capability may differ across iommus, add a variable to
  indicate whether iommu access is coherent. Iommu access for a domain is
  coherent only when all of the iommus related to that dmar_domain are
  coherent.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
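  A sketch of the rule just described: the domain counts as coherent only if
  every IOMMU in its bitmap reports the Coherency capability. The g_iommus
  array follows the "Add global iommu list" commit further down; the iommu
  count variable, helper name and loop shape are illustrative assumptions:

      static void domain_update_iommu_coherency_sketch(struct dmar_domain *domain)
      {
              int i;

              domain->iommu_coherency = 1;
              for (i = 0; i < g_num_of_iommus; i++) {
                      if (!test_bit(i, domain->iommu_bmp))
                              continue;
                      if (!ecap_coherent(g_iommus[i]->ecap)) {
                              domain->iommu_coherency = 0;
                              break;
                      }
              }
      }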
* calculate agaw for each iommu | Weidong Han | 2009-01-03 | 1 | -0/+22
  The "SAGAW" capability may differ across iommus. Use a default agaw, but if
  the default agaw is not supported by some iommu, choose a smaller agaw that
  it does support.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* iommu bitmap instead of iommu pointer in dmar_domain | Weidong Han | 2009-01-03 | 1 | -30/+67
  In order to support assigning multiple devices from different iommus to a
  domain, an iommu bitmap is used to keep track of all the iommus the domain
  is related to.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Get iommu from g_iommus for deferred flush | Weidong Han | 2009-01-03 | 1 | -3/+4
  deferred_flush[] is indexed by the iommu seq_id, so its iommu is fixed and
  can be obtained from g_iommus.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Add global iommu list | Weidong Han | 2009-01-03 | 1 | -0/+25
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* change P2P domain flags | Weidong Han | 2009-01-03 | 1 | -3/+5
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Initialize domain flags to 0 | Weidong Han | 2009-01-03 | 1 | -0/+1
  Otherwise the flags field holds a random value after the domain is allocated
  by kmem_cache_alloc.
  Signed-off-by: Weidong Han <weidong.han@intel.com>
  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* intel-iommu: trivially inline DMA PTE macros | Mark McLoughlin | 2009-01-03 | 1 | -23/+48
  Signed-off-by: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* intel-iommu: trivially inline context entry macros | Mark McLoughlin | 2009-01-03 | 1 | -30/+55
  Some macros were unused, so I just dropped them:
      context_fault_disable
      context_translation_type
      context_address_root
      context_address_width
      context_domain_id
  Signed-off-by: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* intel-iommu: move iommu_prepare_gfx_mapping() out of dma_remapping.h | Mark McLoughlin | 2009-01-03 | 1 | -0/+5
  Signed-off-by: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* intel-iommu: move struct device_domain_info out of dma_remapping.h | Mark McLoughlin | 2009-01-03 | 1 | -0/+10
  Signed-off-by: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* intel-iommu: move struct dmar_domain def out dma_remapping.h | Mark McLoughlin | 2009-01-03 | 1 | -0/+18
  Signed-off-by: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
* intel-iommu: move DMA PTE defs out of dma_remapping.h | Mark McLoughlin | 2009-01-03 | 1 | -0/+22
  DMA_PTE_READ/WRITE are needed by kvm.
  Signed-off-by: Mark McLoughlin <markmc@redhat.com>
  Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>