path: root/arch/x86/include/asm/dma-mapping.h
Commit message    Author    Age    Files    Lines
* dma-debug: New interfaces to debug dma mapping errors  (Shuah Khan, 2012-10-24; 1 file, -0/+1)
    Add the dma-debug interface debug_dma_mapping_error() to debug drivers that fail to check for dma mapping errors on addresses returned by the dma_map_single() and dma_map_page() interfaces. This interface clears a flag set by debug_dma_map_page() to indicate that dma_mapping_error() has been called by the driver. When the driver does the unmap, debug_dma_unmap() checks the flag and, if it is still set, prints a warning message that includes the call trace leading up to the unmap. This interface can be called from dma_mapping_error() routines to enable dma mapping error check debugging.
    Tested: Intel iommu and swiotlb (iommu=soft) on x86-64 with CONFIG_DMA_API_DEBUG enabled and disabled.
    Signed-off-by: Shuah Khan <shuah.khan@hp.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
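    The driver-side pattern this interface polices looks roughly like the hypothetical snippet below; 'dev', 'buf' and 'len' are illustrative, but dma_map_single()/dma_mapping_error()/dma_unmap_single() are the real DMA API calls.

        dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, handle)) {
                /* mapping failed: never hand 'handle' to the hardware */
                return -ENOMEM;
        }
        /* ... program the device with 'handle' ... */
        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);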
* Merge branch 'for-linus' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping  (Linus Torvalds, 2012-05-25; 1 file, -0/+5)
    Pull CMA and ARM DMA-mapping updates from Marek Szyprowski:
    "These patches contain two major updates for the DMA mapping subsystem (mainly for the ARM architecture).
    The first is the Contiguous Memory Allocator (CMA), which makes it possible for device drivers to allocate big contiguous chunks of memory after the system has booted. The main difference from similar frameworks is that CMA allows the memory region reserved for big chunk allocations to be transparently reused as system memory, so no memory is wasted when no big chunk is allocated. Once an allocation request is issued, the framework migrates system pages to create space for the required big chunk of physically contiguous memory. For more information one can refer to the nice LWN articles:
     - 'A reworked contiguous memory allocator': http://lwn.net/Articles/447405/
     - 'CMA and ARM': http://lwn.net/Articles/450286/
     - 'A deep dive into CMA': http://lwn.net/Articles/486301/
     - and the following thread with the patches and links to all previous versions: https://lkml.org/lkml/2012/4/3/204
    The main client for this new framework is the ARM DMA-mapping subsystem.
    The second part provides a complete redesign of the ARM DMA-mapping subsystem. The core implementation has been changed to use the common struct dma_map_ops based infrastructure, with the recent updates for new dma attributes merged in v3.4-rc2. This makes it possible to use more than one implementation of the dma-mapping calls and to change/select them on a per-struct-device basis. The first client of this new infrastructure is the dmabounce implementation, which has been completely cut out of the core, common code. The last patch of this redesign introduces a new, experimental implementation of the dma-mapping calls on top of the generic IOMMU framework. This lets an ARM sub-platform transparently use an IOMMU for dma-mapping calls if it provides the required IOMMU hardware. For more information please refer to the following thread: http://www.spinics.net/lists/arm-kernel/msg175729.html
    The last patch merges changes from both updates and provides a resolution for the conflicts which could not be avoided when patches were applied to the same files (mainly arch/arm/mm/dma-mapping.c)."
    Acked by Andrew Morton <akpm@linux-foundation.org>:
    "Yup, this one please. It's had much work, plenty of review and I think even Russell is happy with it."
    * 'for-linus' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping: (28 commits)
      ARM: dma-mapping: use PMD size for section unmap
      cma: fix migration mode
      ARM: integrate CMA with DMA-mapping subsystem
      X86: integrate CMA with DMA-mapping subsystem
      drivers: add Contiguous Memory Allocator
      mm: trigger page reclaim in alloc_contig_range() to stabilise watermarks
      mm: extract reclaim code from __alloc_pages_direct_reclaim()
      mm: Serialize access to min_free_kbytes
      mm: page_isolation: MIGRATE_CMA isolation functions added
      mm: mmzone: MIGRATE_CMA migration type added
      mm: page_alloc: change fallbacks array handling
      mm: page_alloc: introduce alloc_contig_range()
      mm: compaction: export some of the functions
      mm: compaction: introduce isolate_freepages_range()
      mm: compaction: introduce map_pages()
      mm: compaction: introduce isolate_migratepages_range()
      mm: page_alloc: remove trailing whitespace
      ARM: dma-mapping: add support for IOMMU mapper
      ARM: dma-mapping: use alloc, mmap, free from dma_ops
      ARM: dma-mapping: remove redundant code and do the cleanup
      ...
    Conflicts:
      arch/x86/include/asm/dma-mapping.h
| * X86: integrate CMA with DMA-mapping subsystem  (Marek Szyprowski, 2012-05-21; 1 file, -0/+5)
    This patch adds support for CMA to the dma-mapping subsystem for the x86 architecture, which uses the common pci-dma/pci-nommu implementation. This makes it possible to test CMA on KVM/QEMU and on a lot of common x86 boxes.
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
    CC: Michal Nazarewicz <mina86@mina86.com>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
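    A hedged sketch of the allocation path this enables (not the exact pci-nommu code): the generic x86 allocator can now try the CMA region first and fall back to the normal page allocator.

        unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
        struct page *page;

        /* try a physically contiguous chunk from the CMA region first */
        page = dma_alloc_from_contiguous(dev, count, get_order(size));
        if (!page)
                page = alloc_pages(flag, get_order(size));  /* fall back to the buddy allocator */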
* | x86: Introduce CONFIG_X86_DMA_REMAP  (Alessandro Rubini, 2012-04-12; 1 file, -0/+7)
    The default functions phys_to_dma and dma_to_phys implement an identity mapping as fast inline functions. Some systems, however, may need a custom function to implement their own mapping between CPU addresses and device addresses. This new configuration option allows the functions to be external when needed (such as for the ConneXt device).
    Signed-off-by: Alessandro Rubini <rubini@gnudd.com>
    Link: http://lkml.kernel.org/r/6e4329b772df675f1c442f68e59e844e4dd8c965.1333560789.git.rubini@gnudd.com
    Acked-by: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
    Cc: Alan Cox <alan@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
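    The shape of the change is roughly the following (a sketch of the header logic, not a verbatim diff): with the option set, the conversions become out-of-line functions a platform such as the STA2x11/ConneXt can provide.

        #ifdef CONFIG_X86_DMA_REMAP
        /* platform supplies its own CPU <-> device address conversion */
        extern dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr);
        extern phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr);
        #else
        static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
        {
                return paddr;                   /* identity mapping */
        }

        static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
        {
                return daddr;
        }
        #endif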
* | x86-32: Introduce CONFIG_X86_DEV_DMA_OPS  (Alessandro Rubini, 2012-04-12; 1 file, -1/+1)
    32-bit x86 systems may need their own DMA operations, so add a new config option, which is turned on for 64-bit systems. This patch has no functional effect, but it paves the way for supporting the STA2x11 I/O Hub and possibly other chips.
    Signed-off-by: Alessandro Rubini <rubini@gnudd.com>
    Link: http://lkml.kernel.org/r/f79fcc1a2e17ef942e1b798b92aac43a80202532.1333560789.git.rubini@gnudd.com
    Acked-by: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
    Cc: Alan Cox <alan@linux.intel.com>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* X86 & IA64: adapt for dma_map_ops changes  (Andrzej Pietrasiewicz, 2012-03-28; 1 file, -10/+16)
    Adapt core x86 and IA64 architecture code for the dma_map_ops changes: replace alloc/free_coherent with the generic alloc/free methods.
    Signed-off-by: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
    Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
    [removed swiotlb related changes and replaced it with wrappers, merged with IA64 patch to avoid inter-patch dependences in intel-iommu code]
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Tony Luck <tony.luck@intel.com>
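    For reference, the generic hooks this converts to look roughly like this in that era's struct dma_map_ops (a sketch; struct dma_attrs was the attribute type at the time):

        void *(*alloc)(struct device *dev, size_t size,
                       dma_addr_t *dma_handle, gfp_t gfp,
                       struct dma_attrs *attrs);
        void (*free)(struct device *dev, size_t size,
                     void *vaddr, dma_addr_t dma_handle,
                     struct dma_attrs *attrs);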
* doc: fix broken references  (Paul Bolle, 2011-09-27; 1 file, -1/+1)
    There are numerous broken references to Documentation files (in other Documentation files, in comments, etc.). These broken references are caused by typos in the references, and by renames or removals of the Documentation files. Some broken references are simply odd. Fix these broken references, sometimes by dropping the irrelevant text they were part of.
    Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* dma-mapping: remove dma_is_consistent API  (FUJITA Tomonori, 2010-08-11; 1 file, -1/+0)
    Architectures implement dma_is_consistent() in different ways (some misinterpret the definition of the API in DMA-API.txt), so it hasn't been very useful for drivers. We have only one user of the API in the tree, and it is unlikely that out-of-tree drivers use it. Even if we fixed dma_is_consistent() in some architectures, it wouldn't look useful at all. It was invented long ago for some old systems that couldn't allocate coherent memory at all. It's better to export only APIs that are definitely necessary for drivers, so let's remove this API.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: <linux-arch@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* dma-mapping: unify dma_get_cache_alignment implementations  (FUJITA Tomonori, 2010-08-11; 1 file, -7/+0)
    dma_get_cache_alignment() returns the minimum DMA alignment. Architectures define it as ARCH_DMA_MINALIGN (formerly ARCH_KMALLOC_MINALIGN), so we can unify the dma_get_cache_alignment() implementations. Note that some architectures implement dma_get_cache_alignment() wrongly: it should return the minimum DMA alignment, so fully coherent architectures should return 1. This patch also fixes that issue.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: <linux-arch@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
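    The unified helper described here amounts to roughly the following (a sketch of the include/linux/dma-mapping.h definition):

        static inline int dma_get_cache_alignment(void)
        {
        #ifdef ARCH_DMA_MINALIGN
                return ARCH_DMA_MINALIGN;       /* non-coherent architectures */
        #endif
                return 1;                       /* fully coherent: no extra alignment needed */
        }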
* dma-mapping: fix off-by-one error in dma_capable()  (Jan Beulich, 2009-12-16; 1 file, -1/+1)
    dma_mask is, when interpreted as an address, the last valid byte, and hence the comparison must also be done using the last valid byte of the buffer in question. Also fix the open-coded instances in lib/swiotlb.c.
    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Cc: Becky Bruce <beckyb@kernel.crashing.org>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
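    In dma_capable() terms the fix is the one-character change sketched below (assumed shape, matching the description above):

        /* before: off by one -- rejects a buffer whose last byte equals the mask */
        return addr + size <= *dev->dma_mask;

        /* after: compare the last valid byte of the buffer against the mask */
        return addr + size - 1 <= *dev->dma_mask;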
* x86: Kill bad_dma_address variable  (FUJITA Tomonori, 2009-11-17; 1 file, -2/+3)
    This kills the bad_dma_address variable, the old mechanism that enabled IOMMU drivers to make dma_mapping_error() work in an IOMMU-specific way. However, it can't handle systems that use both swiotlb and a hardware IOMMU, so we introduced dma_map_ops->mapping_error to solve that case. Intel VT-d, GART, and swiotlb already use dma_map_ops->mapping_error. Calgary, AMD IOMMU, and nommu use zero for an error dma address. This adds DMA_ERROR_CODE and converts them to use it (as SPARC and POWER do).
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
    Cc: muli@il.ibm.com
    Cc: joerg.roedel@amd.com
    LKML-Reference: <1258287594-8777-3-git-send-email-fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
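    After this change the x86 error check reads roughly as follows (a sketch consistent with the description above; DMA_ERROR_CODE is 0 here):

        #define DMA_ERROR_CODE  0

        static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
        {
                struct dma_map_ops *ops = get_dma_ops(dev);

                if (ops->mapping_error)
                        return ops->mapping_error(dev, dma_addr);   /* IOMMU-specific check */

                return (dma_addr == DMA_ERROR_CODE);                /* nommu and friends */
        }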
* x86/PCI: Adjust GFP mask handling for coherent allocations  (Jan Beulich, 2009-11-08; 1 file, -3/+7)
    Rather than forcing GFP flags and the DMA mask to be inconsistent, GFP flags should be determined even for the fallback device through dma_alloc_coherent_mask()/dma_alloc_coherent_gfp_flags(). This restores 64-bit behavior as it was prior to commits 8965eb19386fdf5ccd0ef8b02593eb8560aa3416 and 4a367f3a9dbf2e7ffcee4702203479809236ee6e (not sure why there are two of them), where GFP_DMA was forced on for 32-bit but not for 64-bit, with the slight adjustment that, afaict, even 32-bit doesn't need this without CONFIG_ISA.
    Signed-off-by: Jan Beulich <jbeulich@novell.com>
    Acked-by: Takashi Iwai <tiwai@suse.de>
    LKML-Reference: <4AF18187020000780001D8AA@vpn.id2.novell.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
    Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
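    The two helpers named above live in this header; from memory they look roughly like this (a sketch, not a verbatim copy):

        static inline unsigned long dma_alloc_coherent_mask(struct device *dev, gfp_t gfp)
        {
                unsigned long dma_mask = dev->coherent_dma_mask;

                if (!dma_mask)
                        dma_mask = (gfp & GFP_DMA) ? DMA_BIT_MASK(24) : DMA_BIT_MASK(32);
                return dma_mask;
        }

        static inline gfp_t dma_alloc_coherent_gfp_flags(struct device *dev, gfp_t gfp)
        {
                unsigned long dma_mask = dma_alloc_coherent_mask(dev, gfp);

                if (dma_mask <= DMA_BIT_MASK(24))
                        gfp |= GFP_DMA;         /* 24-bit devices must come from ZONE_DMA */
                return gfp;
        }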
* x86, IA64, powerpc: add phys_to_dma() and dma_to_phys()  (FUJITA Tomonori, 2009-07-28; 1 file, -0/+10)
    This adds two functions, phys_to_dma() and dma_to_phys(), to x86, IA64 and powerpc; swiotlb uses them. phys_to_dma() converts a physical address to a dma address, and dma_to_phys() does the opposite.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
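    A typical swiotlb-style use of the pair looks roughly like this (an illustrative snippet, not the exact lib/swiotlb.c code):

        /* translate the CPU-physical address of 'ptr' into a bus address for the device */
        phys_addr_t paddr    = virt_to_phys(ptr);
        dma_addr_t  dev_addr = phys_to_dma(hwdev, paddr);

        /* ... and back again when only the bus address is known */
        phys_addr_t back     = dma_to_phys(hwdev, dev_addr);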
* x86: add dma_capable() to replace is_buffer_dma_capable()  (FUJITA Tomonori, 2009-07-28; 1 file, -0/+8)
    dma_capable() eventually replaces is_buffer_dma_capable(), which tells whether a memory area is dma-capable or not. The problem with is_buffer_dma_capable() is that it doesn't take a pointer to struct device, so it doesn't work for POWERPC.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
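    The x86 helper added here is essentially the following (a sketch; as listed further up in this log, a later fix changes the comparison to use the last valid byte, addr + size - 1):

        static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
        {
                if (!dev->dma_mask)
                        return false;

                /* can the device address every byte of [addr, addr + size)? */
                return addr + size <= *dev->dma_mask;
        }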
* dma-mapping: x86: use asm-generic/dma-mapping-common.h  (FUJITA Tomonori, 2009-06-18; 1 file, -171/+2)
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Acked-by: Joerg Roedel <joerg.roedel@amd.com>
    Acked-by: Ingo Molnar <mingo@elte.hu>
    Cc: "Luck, Tony" <tony.luck@intel.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Cc: "David S. Miller" <davem@davemloft.net>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* kmemcheck: add hooks for page- and sg-dma-mappings  (Vegard Nossum, 2009-06-15; 1 file, -0/+5)
    This is needed for page allocator support, to prevent false positives when accessing pages which are dma-mapped.
    [rebased for mainline inclusion]
    Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
* kmemcheck: add DMA hooks  (Vegard Nossum, 2009-06-15; 1 file, -0/+2)
    This patch hooks into the DMA API to prevent the reporting of false positives that would otherwise be triggered when memory that is also used directly by devices is accessed.
    [rebased for mainline inclusion]
    Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
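    The hook amounts to marking the buffer initialized before it is handed to the device, roughly as in this sketch of dma_map_single() (era-appropriate shape, not a verbatim copy):

        static inline dma_addr_t dma_map_single(struct device *hwdev, void *ptr,
                                                size_t size, enum dma_data_direction dir)
        {
                struct dma_map_ops *ops = get_dma_ops(hwdev);

                /* the device may write this memory; silence kmemcheck false positives */
                kmemcheck_mark_initialized(ptr, size);
                return ops->map_page(hwdev, virt_to_page(ptr),
                                     (unsigned long)ptr & ~PAGE_MASK, size,
                                     dir, NULL);
        }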
* dma-mapping: replace all DMA_24BIT_MASK macro with DMA_BIT_MASK(24)  (Yang Hongyang, 2009-04-07; 1 file, -2/+2)
    Replace all DMA_24BIT_MASK macro with DMA_BIT_MASK(24).
    Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* dma-mapping: replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32)  (Yang Hongyang, 2009-04-07; 1 file, -2/+2)
    Replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32).
    Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
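    DMA_BIT_MASK() builds the mask from a bit count, so the replacement is purely mechanical; for reference, the linux/dma-mapping.h macro as recalled, plus a hedged usage example:

        #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

        /* e.g. declaring that a device can only address the low 4 GB */
        if (dma_set_mask(dev, DMA_BIT_MASK(32)))
                dev_warn(dev, "no suitable DMA mask available\n");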
* dma-debug: x86 architecture bindings  (Joerg Roedel, 2009-03-17; 1 file, -6/+39)
    Impact: make use of DMA-API debugging code in x86.
    Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
* Merge branch 'linus' into core/iommu  (Ingo Molnar, 2009-03-05; 1 file, -2/+2)
| * Documentation: move DMA-mapping.txt to Doc/PCI/  (Randy Dunlap, 2009-01-29; 1 file, -2/+2)
    Move DMA-mapping.txt to Documentation/PCI/. DMA-mapping.txt was supposed to be moved from Documentation/ to Documentation/PCI/. The 00-INDEX files in those two directories were updated, along with a few other text files, but the file itself somehow escaped being moved, so move it and update more text files and source files with its new location.
    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
    cc: Jesse Barnes <jbarnes@virtuousgeek.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | x86, ia64: convert to use generic dma_map_ops struct  (FUJITA Tomonori, 2009-01-06; 1 file, -93/+53)
    This converts X86 and IA64 to use include/linux/dma-mapping.h. It's a bit large but pretty boring. The major change for X86 is converting 'int dir' to 'enum dma_data_direction dir' in the DMA mapping operations. The major change for IA64 is using map_page and unmap_page instead of map_single and unmap_single.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Acked-by: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
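    The generic hooks at the centre of this conversion look roughly like this (a trimmed sketch of that era's struct dma_map_ops, not the full structure):

        struct dma_map_ops {
                dma_addr_t (*map_page)(struct device *dev, struct page *page,
                                       unsigned long offset, size_t size,
                                       enum dma_data_direction dir,
                                       struct dma_attrs *attrs);
                void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
                                   size_t size, enum dma_data_direction dir,
                                   struct dma_attrs *attrs);
                int (*map_sg)(struct device *dev, struct scatterlist *sg,
                              int nents, enum dma_data_direction dir,
                              struct dma_attrs *attrs);
                int (*mapping_error)(struct device *dev, dma_addr_t dma_addr);
                /* ... sync hooks, alloc_coherent/free_coherent, dma_supported ... */
        };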
* | x86: remove map_single and unmap_single in struct dma_mapping_ops  (FUJITA Tomonori, 2009-01-06; 1 file, -9/+6)
    This patch converts dma_map_single() and dma_unmap_single() to use map_page and unmap_page respectively, and removes the now-unnecessary map_single and unmap_single hooks from struct dma_mapping_ops. It leaves intel-iommu's dma_map_single and dma_unmap_single, since IA64 uses them; they will be removed after the unification.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | x86: add map_page and unmap_page to struct dma_mapping_ops  (FUJITA Tomonori, 2009-01-06; 1 file, -0/+8)
    This patch adds map_page and unmap_page to struct dma_mapping_ops, as a preparation for the struct dma_mapping_ops unification. We use map_page and unmap_page instead of map_single and unmap_single; the map_single and unmap_single hooks will be removed in the last patch of this patchset.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Merge branch 'core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds, 2008-12-30; 1 file, -1/+1)
    * 'core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (63 commits)
      stacktrace: provide save_stack_trace_tsk() weak alias
      rcu: provide RCU options on non-preempt architectures too
      printk: fix discarding message when recursion_bug
      futex: clean up futex_(un)lock_pi fault handling
      "Tree RCU": scalable classic RCU implementation
      futex: rename field in futex_q to clarify single waiter semantics
      x86/swiotlb: add default swiotlb_arch_range_needs_mapping
      x86/swiotlb: add default phys<->bus conversion
      x86: unify pci iommu setup and allow swiotlb to compile for 32 bit
      x86: add swiotlb allocation functions
      swiotlb: consolidate swiotlb info message printing
      swiotlb: support bouncing of HighMem pages
      swiotlb: factor out copy to/from device
      swiotlb: add arch hook to force mapping
      swiotlb: allow architectures to override phys<->bus<->phys conversions
      swiotlb: add comment where we handle the overflow of a dma mask on 32 bit
      rcu: fix rcutorture behavior during reboot
      resources: skip sanity check of busy resources
      swiotlb: move some definitions to header
      swiotlb: allow architectures to override swiotlb pool allocation
      ...
    Fix up trivial conflicts in arch/x86/kernel/Makefile, arch/x86/mm/init_32.c and include/linux/hardirq.h as per Ingo's suggestions.
| * x86: unify pci iommu setup and allow swiotlb to compile for 32 bit  (Jeremy Fitzhardinge, 2008-12-17; 1 file, -1/+1)
    swiotlb on 32 bit will be used by Xen domain 0 support.
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* | x86: fix dma_mapping_error for 32bit x86, cleanup  (FUJITA Tomonori, 2008-12-01; 1 file, -2/+0)
    This removes the #ifdef CONFIG_X86_64 in dma_mapping_error():
      1) Xen people plan to use swiotlb on X86_32 for Dom0 support; swiotlb uses ops->mapping_error, so X86_32 also needs to check ops->mapping_error.
      2) Removing an #ifdef hack is almost always a good thing.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: fix dma_mapping_error for 32bit x86  (Thomas Bogendoerfer, 2008-11-29; 1 file, -4/+2)
    Devices like b44 ethernet can't DMA from addresses above 1GB. The driver handles this case by falling back to GFP_DMA allocation, but to detect the problem it needs to get an indication from dma_mapping_error(). The bug is triggered by using a VMSPLIT option of 2G/2G.
    Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: use GFP_DMA for 24bit coherent_dma_mask  (FUJITA Tomonori, 2008-10-23; 1 file, -1/+3)
    dma_alloc_coherent() (include/asm-x86/dma-mapping.h) avoids GFP_DMA allocation first, and if the allocated address does not fit the device's coherent_dma_mask, it then does a GFP_DMA allocation. This is because dma_alloc_coherent() avoids the precious GFP_DMA zone if possible; it is also how the old dma_alloc_coherent() (arch/x86/kernel/pci-dma.c) works. However, if the coherent_dma_mask of a device is 24-bit, there is no point in going through the GFP_DMA retry mechanism above; we had better use GFP_DMA in the first place.
    Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    Tested-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: Fix ASM_X86__ header guards  (H. Peter Anvin, 2008-10-22; 1 file, -3/+3)
    Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:
      a. the double underscore is ugly and pointless.
      b. the missing leading underscore violates namespace constraints.
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86, um: ... and asm-x86 move  (Al Viro, 2008-10-22; 1 file, -0/+308)
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: H. Peter Anvin <hpa@zytor.com>