path: root/arch/arm/mm/dma-mapping.c
* Merge tag 'iommu-config-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc (Linus Torvalds, 2014-12-16, 1 file, -7/+77)

    Pull ARM SoC/iommu configuration update from Arnd Bergmann:
     "The iommu-config branch contains work from Will Deacon, quoting his
      description:

        This series adds automatic IOMMU and DMA-mapping configuration for
        OF-based DMA masters described using the generic IOMMU devicetree
        bindings. Although there is plenty of future work around splitting up
        iommu_ops, adding default IOMMU domains and sorting out automatic
        IOMMU group creation for the platform_bus, this is already useful
        enough for people to port over their IOMMU drivers and start using
        the new probing infrastructure (indeed, Marek has patches queued for
        the Exynos IOMMU).

      The branch touches core ARM and IOMMU driver files, and the respective
      maintainers (Russell King and Joerg Roedel) agreed to have the contents
      merged through the arm-soc tree.

      The final version was ready just before the merge window, so we ended
      up delaying it a bit longer than the rest, but we don't expect to see
      regressions because this is just additional infrastructure that will
      get used in drivers starting in 3.20 but is unused so far"

    * tag 'iommu-config-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
      iommu: store DT-probed IOMMU data privately
      arm: dma-mapping: plumb our iommu mapping ops into arch_setup_dma_ops
      arm: call iommu_init before of_platform_populate
      dma-mapping: detect and configure IOMMU in of_dma_configure
      iommu: fix initialization without 'add_device' callback
      iommu: provide helper function to configure an IOMMU for an of master
      iommu: add new iommu_ops callback for adding an OF device
      dma-mapping: replace set_arch_dma_coherent_ops with arch_setup_dma_ops
      iommu: provide early initialisation hook for IOMMU drivers

| * arm: dma-mapping: plumb our iommu mapping ops into arch_setup_dma_ops (Will Deacon, 2014-12-01, 1 file, -7/+76)

    This patch plumbs the existing ARM IOMMU DMA infrastructure (which isn't
    actually called outside of a few drivers) into arch_setup_dma_ops, so that
    we can use IOMMUs for DMA transfers in a more generic fashion. Since this
    significantly complicates the arch_setup_dma_ops function, it is moved out
    of line into dma-mapping.c.

    If CONFIG_ARM_DMA_USE_IOMMU is not set, the iommu parameter is ignored and
    the normal ops are used instead.

    Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Acked-by: Arnd Bergmann <arnd@arndb.de>
    Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

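    A rough sketch of the shape this gives arch_setup_dma_ops. This is an
    illustration, not the patch itself; the helper names and exact signature
    below are assumptions:

      /* Sketch: pick IOMMU-backed dma_map_ops when the device sits behind an
       * IOMMU, otherwise fall back to the normal (direct) ops. */
      void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
                              struct iommu_ops *iommu, bool coherent)
      {
              struct dma_map_ops *ops = arm_get_dma_map_ops(coherent);

              /* With CONFIG_ARM_DMA_USE_IOMMU=n the setup helper would be a
               * no-op stub, so the iommu argument is effectively ignored. */
              if (iommu && arm_setup_iommu_dma_ops(dev, dma_base, size, iommu))
                      ops = arm_get_iommu_dma_map_ops(coherent);

              set_dma_ops(dev, ops);
      }
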
* ARM: 8181/1: Drop extra return statement (Laura Abbott, 2014-10-29, 1 file, -1/+0)

    Commit 513510ddba9650fc7da456eefeb0ead7632324f6 ("common: dma-mapping:
    introduce common remapping functions") managed to end up with an extra
    return statement from the original patch. Drop it.

    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* arm: use genalloc for the atomic pool (Laura Abbott, 2014-10-09, 1 file, -104/+49)

    ARM currently uses a bitmap for tracking atomic allocations. genalloc
    already handles this type of memory pool allocation so switch to using
    that instead.

    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: David Riley <davidriley@chromium.org>
    Cc: Olof Johansson <olof@lixom.net>
    Cc: Ritesh Harjain <ritesh.harjani@gmail.com>
    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Thierry Reding <thierry.reding@gmail.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

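    The general genalloc pattern being switched to looks roughly like this.
    A minimal sketch only, not the patch; pool setup details and function
    names are placeholders:

      #include <linux/genalloc.h>

      static struct gen_pool *atomic_pool;   /* replaces the hand-rolled bitmap */

      /* Register an already-remapped region with genalloc so later atomic
       * allocations can be carved out of it. */
      static int sketch_atomic_pool_init(void *vaddr, phys_addr_t phys, size_t size)
      {
              atomic_pool = gen_pool_create(PAGE_SHIFT, -1);  /* page granularity */
              if (!atomic_pool)
                      return -ENOMEM;
              return gen_pool_add_virt(atomic_pool, (unsigned long)vaddr, phys,
                                       size, -1);
      }

      /* Allocation and free then become one-liners: */
      static void *sketch_atomic_alloc(size_t size)
      {
              return (void *)gen_pool_alloc(atomic_pool, size);
      }

      static void sketch_atomic_free(void *vaddr, size_t size)
      {
              gen_pool_free(atomic_pool, (unsigned long)vaddr, size);
      }
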
* common: dma-mapping: introduce common remapping functions (Laura Abbott, 2014-10-09, 1 file, -48/+9)

    For architectures without coherent DMA, memory for DMA may need to be
    remapped with coherent attributes. Factor out the remapping code from arm
    and put it in a common location to reduce code duplication.

    As part of this, the arm APIs are now migrated away from
    ioremap_page_range to the common APIs which use map_vm_area for remapping.
    This should be an equivalent change, and using map_vm_area is more correct
    as ioremap_page_range is intended to bring io addresses into the cpu space
    and not regular kernel managed memory.

    Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: David Riley <davidriley@chromium.org>
    Cc: Olof Johansson <olof@lixom.net>
    Cc: Ritesh Harjain <ritesh.harjani@gmail.com>
    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Thierry Reding <thierry.reding@gmail.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: James Hogan <james.hogan@imgtec.com>
    Cc: Laura Abbott <lauraa@codeaurora.org>
    Cc: Mitchel Humpherys <mitchelh@codeaurora.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* CMA: generalize CMA reserved area management functionality (Joonsoo Kim, 2014-08-06, 1 file, -0/+1)

    Currently, there are two users of CMA functionality: one is the DMA
    subsystem and the other is KVM on powerpc. They have their own code to
    manage the CMA reserved area even though they look really similar. From my
    guess, it is caused by some needs for bitmap management: the KVM side
    wants to maintain a bitmap not for 1 page, but for a larger size.
    Eventually it uses a bitmap where one bit represents 64 pages.

    When I implement CMA related patches, I would have to change both places
    to apply my change, and that seems painful to me. I want to change this
    situation and reduce future code management overhead through this patch.

    This change could also help developers who want to use CMA in their new
    feature development, since they can use CMA easily without copying &
    pasting this reserved area management code.

    In previous patches, we have prepared some features to generalize CMA
    reserved area management and now it's time to do it. This patch moves core
    functions to mm/cma.c and changes the DMA APIs to use these functions.
    There is no functional change in the DMA APIs.

    Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Acked-by: Michal Nazarewicz <mina86@mina86.com>
    Acked-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
    Acked-by: Minchan Kim <minchan@kernel.org>
    Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Cc: Alexander Graf <agraf@suse.de>
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    Cc: Gleb Natapov <gleb@kernel.org>
    Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

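    After the move to mm/cma.c, reserved-area management is reachable through
    the generic CMA calls, roughly as in the usage sketch below (illustrative
    only; error handling trimmed, and the cma_alloc() prototype has gained
    extra parameters in later kernels):

      #include <linux/cma.h>

      /* Allocate and release pages from a CMA area through the generalized
       * interface instead of open-coded bitmap management. */
      static struct page *sketch_cma_get(struct cma *area, size_t count)
      {
              return cma_alloc(area, count, 0);    /* count pages, order-0 align */
      }

      static void sketch_cma_put(struct cma *area, struct page *pages, size_t count)
      {
              cma_release(area, pages, count);
      }
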
* ARM: DMA: ensure that old section mappings are flushed from the TLB (Russell King, 2014-07-17, 1 file, -1/+10)

    When setting up the CMA region, we must ensure that the old section
    mappings are flushed from the TLB before replacing them with page tables,
    otherwise we can suffer from mismatched aliases if the CPU speculatively
    prefetches from these mappings at an inopportune time.

    A mismatched alias can occur when the TLB contains a section mapping, but
    a subsequent prefetch causes it to load a page table mapping, resulting in
    the possibility of the TLB containing two matching mappings for the same
    virtual address region.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm into next (Linus Torvalds, 2014-06-05, 1 file, -5/+6)

    Pull ARM updates from Russell King:

     - Major clean-up of the L2 cache support code. The existing mess was
       becoming rather unmaintainable through all the additions that others
       have done over time. This turns it into a much nicer structure, and
       implements a few performance improvements as well.
     - Clean up some of the CP15 control register tweaks for alignment
       support, moving some code and data into alignment.c
     - DMA properties for ARM, from Santosh and reviewed by DT people. This
       adds DT properties to specify bus translations we can't discover
       automatically, and to indicate whether devices are coherent.
     - Hibernation support for ARM
     - Make ftrace work with read-only text in modules
     - add suspend support for PJ4B CPUs
     - rework interrupt masking for undefined instruction handling, which
       allows us to enable interrupts earlier in the handling of these
       exceptions.
     - support for big endian page tables
     - fix stacktrace support to exclude stacktrace functions from the trace,
       and add save_stack_trace_regs() implementation so that kprobes can
       record stack traces.
     - Add support for the Cortex-A17 CPU.
     - Remove last vestiges of ARM710 support.
     - Removal of ARM "meminfo" structure, finally converting us solely to
       memblock to handle the early memory initialisation.

    * 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (142 commits)
      ARM: ensure C page table setup code follows assembly code (part II)
      ARM: ensure C page table setup code follows assembly code
      ARM: consolidate last remaining open-coded alignment trap enable
      ARM: remove global cr_no_alignment
      ARM: remove CPU_CP15 conditional from alignment.c
      ARM: remove unused adjust_cr() function
      ARM: move "noalign" command line option to alignment.c
      ARM: provide common method to clear bits in CPU control register
      ARM: 8025/1: Get rid of meminfo
      ARM: 8060/1: mm: allow sub-architectures to override PCI I/O memory type
      ARM: 8066/1: correction for ARM patch 8031/2
      ARM: 8049/1: ftrace/add save_stack_trace_regs() implementation
      ARM: 8065/1: remove last use of CONFIG_CPU_ARM710
      ARM: 8062/1: Modify ldrt fixup handler to re-execute the userspace instruction
      ARM: 8047/1: rwsem: use asm-generic rwsem implementation
      ARM: l2c: trial at enabling some Cortex-A9 optimisations
      ARM: l2c: add warnings for stuff modifying aux_ctrl register values
      ARM: l2c: print a warning with L2C-310 caches if the cache size is modified
      ARM: l2c: remove old .set_debug method
      ARM: l2c: kill L2X0_AUX_CTRL_MASK before anyone else makes use of this
      ...

| * Merge branch 'devel-stable' into for-next (Russell King, 2014-06-05, 1 file, -2/+2)

| | * Merge tag 'dt-dma-properties-for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into devel-stable (Russell King, 2014-05-23, 1 file, -2/+2)

    DT support for 'dma-ranges' and 'dma-coherent' properties with ARM updates:

     - The 'dma-ranges' property helps to take care of a few DMAable system
       memory restrictions by use of dma_pfn_offset, which is maintained per
       device. Arch code then uses it for dma address translations for such
       cases. We update the dma_pfn_offset accordingly during the DT device
       creation process.
     - The 'dma-coherent' property is used to set up the arch's coherent
       dma_ops.

| | | * ARM: dma: use phys_addr_t in __dma_page_[cpu_to_dev/dev_to_cpu] (Santosh Shilimkar, 2014-05-07, 1 file, -2/+2)

    On a 32 bit ARM architecture with LPAE extension physical addresses cannot
    fit into an unsigned long variable. So fix it by using phys_addr_t instead
    of unsigned long.

    Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
    Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>

| * | | Merge branches 'alignment', 'fixes', 'l2c' (early part) and 'misc' into for-next (Russell King, 2014-06-05, 1 file, -3/+4)

| | * | ARM: dma-mapping: avoid calling dma_cache_maint_page() on dev=>cpu (Russell King, 2014-05-22, 1 file, -3/+4)

    Avoid calling dma_cache_maint_page() when unmapping a DMA_TO_DEVICE
    buffer. The L1 cache ops never do anything in this circumstance, nor do
    they ever need to - all that matters for this case is that the data
    written is visible to the device before DMA starts. What happens during
    the transfer (provided the buffer is not written to) is of no real
    consequence.

    We already do this optimisation for the L2 cache.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

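    The condition being added is conceptually just the following (a
    paraphrased sketch, not the literal hunk; the helper and callback names
    reflect the dma-mapping.c internals of that era and may not match
    exactly):

      /* On unmap (device => cpu ownership transfer), L1 maintenance is only
       * needed if the device may have written to the buffer, i.e. not for
       * DMA_TO_DEVICE mappings. */
      static void sketch_unmap_cache_maint(struct page *page, unsigned long off,
                                           size_t size, enum dma_data_direction dir)
      {
              if (dir != DMA_TO_DEVICE)
                      dma_cache_maint_page(page, off, size, dir, dmac_unmap_area);
      }
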
* | | arm: dma-mapping: add checking cma area initialized (Gioh Kim, 2014-05-22, 1 file, -3/+3)

    If CMA is turned on and the CMA size is set to zero, the kernel should
    behave as if CMA was not enabled at compile time. Every dma allocation
    should check for the existence of the cma area before requesting memory.

    Signed-off-by: Gioh Kim <gioh.kim@lge.com>
    Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Acked-by: Michal Nazarewicz <mina86@mina86.com>
    [mszyprow: removed redundant empty line from the patch]
    Signed-off-by: <m.szyprowski@samsung.com>

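    A hedged sketch of the kind of guard this describes (the fallback path in
    the comment is approximate, not quoted from the patch):

      #include <linux/dma-contiguous.h>

      /* With CONFIG_CMA=y but cma=0 on the command line there is no usable
       * area, so allocation paths must check before going to CMA. */
      static bool sketch_dev_has_cma(struct device *dev)
      {
              return dev_get_cma_area(dev) != NULL;
      }

      /* callers then do: if (sketch_dev_has_cma(dev)) use the contiguous
       * allocator, else fall back to the normal buddy/remap path. */
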
* | | arm: dma-iommu: Clean up redundant variable (Ritesh Harjani, 2014-05-20, 1 file, -5/+6)

    mapping->size can be derived from mapping->bits << PAGE_SHIFT, which makes
    mapping->size redundant. Clean this up.

    Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
    Reported-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

* | arm: dma-mapping: Fix mapping size value (Ritesh Harjani, 2014-04-23, 1 file, -1/+1)

    68efd7d2fb ("arm: dma-mapping: remove order parameter from
    arm_iommu_create_mapping()") is causing a kernel panic because it wrongly
    sets the value of mapping->size:

      Unable to handle kernel NULL pointer dereference at virtual address 000000a0
      pgd = e7a84000
      [000000a0] *pgd=00000000
      ...
      PC is at bitmap_clear+0x48/0xd0
      LR is at __iommu_remove_mapping+0x130/0x164

    Fix it by correcting the mapping->size value.

    Signed-off-by: Ritesh Harjani <ritesh.harjani@gmail.com>
    Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

* Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm (Linus Torvalds, 2014-04-05, 1 file, -3/+0)

    Pull ARM changes from Russell King:

     - Perf updates from Will Deacon:
       - Support for Qualcomm Krait processors (run perf on your phone!)
       - Support for Cortex-A12 (run perf stat on your FPGA!)
       - Support for perf_sample_event_took, allowing us to automatically
         decrease the sample rate if we can't handle the PMU interrupts
         quickly enough (run perf record on your FPGA!).

     - Basic uprobes support from David Long: This patch series adds basic
       uprobes support to ARM. It is based on patches developed earlier by
       Rabin Vincent. That approach of adding hooks into the kprobes
       instruction parsing code was not well received. This approach separates
       the ARM instruction parsing code in kprobes out into a separate set of
       functions which can be used by both kprobes and uprobes. Both kprobes
       and uprobes then provide their own semantic action tables to process
       the results of the parsing.

     - ARMv7M (microcontroller) updates from Uwe Kleine-König

     - OMAP DMA updates (recently added Vinod's Ack even though they've been
       sitting in linux-next for a few months) to reduce the reliance of
       omap-dma on the code in arch/arm.

     - SA11x0 changes from Dmitry Eremin-Solenikov and Alexander Shiyan

     - Support for Cortex-A12 CPU

     - Align support for ARMv6 with ARMv7 so they can cooperate better in a
       single zImage.

     - Addition of first AT_HWCAP2 feature bits for ARMv8 crypto support.

     - Removal of IRQ_DISABLED from various ARM files

     - Improved efficiency of virt_to_page() for single zImage

     - Patch from Ulf Hansson to permit runtime PM callbacks to be available
       for AMBA devices for suspend/resume as well.

     - Finally kill asm/system.h on ARM.

    * 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (89 commits)
      dmaengine: omap-dma: more consolidation of CCR register setup
      dmaengine: omap-dma: move IRQ handling to omap-dma
      dmaengine: omap-dma: move register read/writes into omap-dma.c
      ARM: omap: dma: get rid of 'p' allocation and clean up
      ARM: omap: move dma channel allocation into plat-omap code
      ARM: omap: dma: get rid of errata global
      ARM: omap: clean up DMA register accesses
      ARM: omap: remove almost-const variables
      ARM: omap: remove references to disable_irq_lch
      dmaengine: omap-dma: cleanup errata 3.3 handling
      dmaengine: omap-dma: provide register read/write functions
      dmaengine: omap-dma: use cached CCR value when enabling DMA
      dmaengine: omap-dma: move barrier to omap_dma_start_desc()
      dmaengine: omap-dma: move clnk_ctrl setting to preparation functions
      dmaengine: omap-dma: improve efficiency loading C.SA/C.EI/C.FI registers
      dmaengine: omap-dma: consolidate clearing channel status register
      dmaengine: omap-dma: move CCR buffering disable errata out of the fast path
      dmaengine: omap-dma: provide register definitions
      dmaengine: omap-dma: consolidate setup of CCR
      dmaengine: omap-dma: consolidate setup of CSDP
      ...

| * Merge branch 'devel-stable' into for-next (Russell King, 2014-04-04, 1 file, -1/+1)

| * | ARM: 7979/1: mm: Remove hugetlb warning from Coherent DMA allocator (Steven Capper, 2014-02-18, 1 file, -3/+0)

    The Coherent DMA allocator allocates pages of high order then splits them
    up into smaller pages.

    This splitting logic would run into problems if the allocator was given
    compound pages. Thus the Coherent DMA allocator was originally
    incompatible with compound pages existing and, by extension, huge pages.
    A compile #error was put in place whenever huge pages were enabled.

    Compatibility with compound pages has since been introduced by the
    following commit (which merely excludes GFP_COMP pages from being
    requested by the coherent DMA allocator):
      ea2e705 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations

    When huge page support was introduced to ARM, the compile #error in
    dma-mapping.c was replaced by a #warning when it should have been removed
    instead.

    This patch removes the compile #warning in dma-mapping.c when huge pages
    are enabled.

    Signed-off-by: Steve Capper <steve.capper@linaro.org>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* | | arm: dma-mapping: remove order parameter from arm_iommu_create_mapping() (Marek Szyprowski, 2014-02-28, 1 file, -22/+21)

    The 'order' parameter for the IOMMU-aware dma-mapping implementation was
    introduced mainly as a hack to reduce the size of the bitmap used for
    tracking IO virtual address space. Since it is now possible to dynamically
    resize the bitmap, this hack is not needed and can be removed without any
    impact on the client devices. This way the parameters for
    arm_iommu_create_mapping() become much easier to understand.

    The 'size' parameter now means the maximum supported IO address space
    size. The code will allocate (resize) the bitmap in chunks, ensuring that
    a single chunk is not larger than a single memory page, to avoid
    unreliable allocations of size larger than PAGE_SIZE in atomic context.

    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

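    The visible API change for callers is roughly the following (prototypes
    paraphrased from the description above, not copied from the diff):

      /*
       * Before: callers had to pick a bitmap granularity up front.
       *
       *   struct dma_iommu_mapping *
       *   arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base,
       *                            size_t size, int order);
       */

      /* After: only the maximum IO address-space size is needed; the bitmap
       * is allocated (and later resized) internally in page-sized chunks. */
      struct dma_iommu_mapping *
      arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size);
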
* | | arm: dma-mapping: Add support to extend DMA IOMMU mappings (Andreas Herrmann, 2014-02-28, 1 file, -19/+104)

    Instead of using just one bitmap to keep track of IO virtual addresses
    (handed out for IOMMU use) introduce an array of bitmaps. This allows us
    to extend existing mappings when running out of iova space in the initial
    mapping etc.

    If there is not enough space in the mapping to service an IO virtual
    address allocation request, __alloc_iova() tries to extend the mapping --
    by allocating another bitmap -- and makes another allocation attempt using
    the freshly allocated bitmap.

    This allows arm iommu drivers to start with a decent initial size when a
    dma_iommu_mapping is created and still avoid running out of IO virtual
    addresses for the mapping.

    Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
    [mszyprow: removed extensions parameter to arm_iommu_create_mapping()
    function, which will be modified in the next patch anyway, also some debug
    messages about extending bitmap]
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

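    In pseudo-C the allocation strategy reads roughly as below. The structure,
    field and helper names are invented for illustration and are not the names
    used in the patch:

      struct sketch_iommu_mapping {
              dma_addr_t     base;
              unsigned int   nr_bitmaps;   /* bitmaps allocated so far */
              unsigned int   bits;         /* IOVA pages tracked per bitmap */
              unsigned long  *bitmaps[8];  /* grown on demand, up to a cap */
      };

      /* allocates bitmaps[nr_bitmaps] and bumps nr_bitmaps, or fails */
      static int sketch_extend_mapping(struct sketch_iommu_mapping *m);

      static dma_addr_t sketch_alloc_iova(struct sketch_iommu_mapping *m,
                                          unsigned int count)
      {
              const dma_addr_t err = (dma_addr_t)~0;   /* local error marker */
              unsigned long start;
              unsigned int i;

              /* First try every bitmap that already exists. */
              for (i = 0; i < m->nr_bitmaps; i++) {
                      start = bitmap_find_next_zero_area(m->bitmaps[i], m->bits,
                                                         0, count, 0);
                      if (start < m->bits)
                              goto found;
              }

              /* All full: extend the mapping with one more bitmap and retry. */
              if (sketch_extend_mapping(m))
                      return err;
              i = m->nr_bitmaps - 1;
              start = bitmap_find_next_zero_area(m->bitmaps[i], m->bits,
                                                 0, count, 0);
              if (start >= m->bits)
                      return err;
      found:
              bitmap_set(m->bitmaps[i], start, count);
              return m->base + ((dma_addr_t)(i * m->bits + start) << PAGE_SHIFT);
      }
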
* | ARM: dma-mapping: fix GFP_ATOMIC macro usage (Marek Szyprowski, 2014-02-11, 1 file, -1/+1)

    GFP_ATOMIC is not a single gfp flag, but a macro which expands to the
    other flags and LACK of __GFP_WAIT flag. To check if the caller wanted to
    perform an atomic allocation, the code must test __GFP_WAIT flag presence.

    This patch fixes the issue introduced in v3.6-rc5.

    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    CC: stable@vger.kernel.org

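    The point can be reduced to a one-line check (sketch only; __GFP_WAIT is
    the flag name of that era, later renamed in mainline):

      /* GFP_ATOMIC is not one bit but "high priority and no __GFP_WAIT", so an
       * atomic-context test must look for the absence of __GFP_WAIT rather
       * than comparing or masking against GFP_ATOMIC itself. */
      static bool sketch_is_atomic_request(gfp_t gfp)
      {
              return !(gfp & __GFP_WAIT);   /* caller may not sleep */
      }
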
*---. Merge branches 'amba', 'fixes', 'kees', 'misc' and 'unstable/sa11x0' into for-next (Russell King, 2014-01-21, 1 file, -3/+3)

| | * | ARM: fix executability of CMA mappings (Russell King, 2013-12-11, 1 file, -3/+3)

    The CMA region was being marked executable:

      0xdc04e000-0xdc050000     8K     RW x  MEM/CACHED/WBRA
      0xdc060000-0xdc100000     640K   RW x  MEM/CACHED/WBRA
      0xdc4f5000-0xdc500000     44K    RW x  MEM/CACHED/WBRA
      0xdcce9000-0xe0000000     52316K RW x  MEM/CACHED/WBRA

    This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but
    there are also a few other places in dma-mapping which should be corrected
    to use the right constant. Fix all these places:

      0xdc04e000-0xdc050000     8K     RW NX MEM/CACHED/WBRA
      0xdc060000-0xdc100000     640K   RW NX MEM/CACHED/WBRA
      0xdc280000-0xdc300000     512K   RW NX MEM/CACHED/WBRA
      0xdc6fc000-0xe0000000     58384K RW NX MEM/CACHED/WBRA

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* | | ARM: another fix for the DMA mapping checks (Russell King, 2013-12-09, 1 file, -51/+40)

    Peter reports that OMAP audio broke with the recent fix for these checks,
    caused by OMAP audio using a 64-bit DMA mask. We should allow 64-bit DMA
    masks even with 32-bit dma_addr_t if we can be sure the amount of RAM we
    have won't allow the 32-bit dma_addr_t to overflow. Unfortunately, the
    checks to detect overflow were not correct.

    Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* | ARM: dma-mapping: check DMA mask against available memory (Russell King, 2013-11-30, 1 file, -2/+7)

    Some buses have negative offsets, which causes the DMA mask checks to
    falsely fail. Fix this by using the actual amount of memory fitted in the
    system.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm (Linus Torvalds, 2013-11-14, 1 file, -2/+2)

    Pull ARM updates from Russell King:
     "Included in this series are:

      1. BE8 (modern big endian) changes for ARM from Ben Dooks
      2. big.Little support from Nicolas Pitre and Dave Martin
      3. support for LPAE systems with all system memory above 4GB
      4. Perf updates from Will Deacon
      5. Additional prefetching and other performance improvements from Will.
      6. Neon-optimised AES implementation from Ard.
      7. A number of smaller fixes scattered around the place.

      There is a rather horrid merge conflict in tools/perf - I was never
      notified of the conflict because it originally occurred between Will's
      tree and other stuff. Consequently I have a resolution which Will
      forwarded me, which I'll forward on immediately after sending this mail.

      The other notable thing is I'm expecting some build breakage in the
      crypto stuff on ARM only with Ard's AES patches. These were merged into
      a stable git branch which others had already pulled, so there's little I
      can do about this. The problem is caused because these patches have a
      dependency on some code in the crypto git tree - I tried requesting a
      branch I can pull to resolve these, and all I got each time from the
      crypto people was "we'll revert our patches then" which would only make
      things worse since I still don't have the dependent patches. I've no
      idea what's going on there or how to resolve that, and since I can't
      split these patches from the rest of this pull request, I'm rather stuck
      with pushing this as-is or reverting Ard's patches.

      Since it should "come out in the wash" I've left them in - the only
      build problems they seem to cause at the moment are with randconfigs,
      and since it's a new feature anyway. However, if by -rc1 the
      dependencies aren't in, I think it'd be best to revert Ard's patches"

    I resolved the perf conflict roughly as per the patch sent by Russell, but
    there may be some differences. Any errors are likely mine. Let's see how
    the crypto issues work out..

    * 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (110 commits)
      ARM: 7868/1: arm/arm64: remove atomic_clear_mask() in "include/asm/atomic.h"
      ARM: 7867/1: include: asm: use 'int' instead of 'unsigned long' for 'oldval' in atomic_cmpxchg().
      ARM: 7866/1: include: asm: use 'long long' instead of 'u64' within atomic.h
      ARM: 7871/1: amba: Extend number of IRQS
      ARM: 7887/1: Don't smp_cross_call() on UP devices in arch_irq_work_raise()
      ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
      ARM: 7880/1: Clear the IT state independent of the Thumb-2 mode
      ARM: 7878/1: nommu: Implement dummy early_paging_init()
      ARM: 7876/1: clear Thumb-2 IT state on exception handling
      ARM: 7874/2: bL_switcher: Remove cpu_hotplug_driver_{lock,unlock}()
      ARM: footbridge: fix build warnings for netwinder
      ARM: 7873/1: vfp: clear vfp_current_hw_state for dying cpu
      ARM: fix misplaced arch_virt_to_idmap()
      ARM: 7848/1: mcpm: Implement cpu_kill() to synchronise on powerdown
      ARM: 7847/1: mcpm: Factor out logical-to-physical CPU translation
      ARM: 7869/1: remove unused XSCALE_PMU Kconfig param
      ARM: 7864/1: Handle 64-bit memory in case of 32-bit phys_addr_t
      ARM: 7863/1: Let arm_add_memory() always use 64-bit arguments
      ARM: 7862/1: pcpu: replace __get_cpu_var_uses
      ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code
      ...

| *-. Merge branches 'fixes', 'mmci' and 'sa11x0' into for-next (Russell King, 2013-11-12, 1 file, -2/+2)

| | * | ARM: dma-mapping: don't allow DMA mappings to be marked executable (Russell King, 2013-10-24, 1 file, -2/+2)

    DMA mapping permissions were being derived from pgprot_kernel directly
    without using PAGE_KERNEL. This causes them to be marked with executable
    permission, which is not what we want. Fix this.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* | | Merge branch 'for-linus-dma-masks' of git://git.linaro.org/people/rmk/linux-arm (Linus Torvalds, 2013-11-14, 1 file, -6/+45)

    Pull DMA mask updates from Russell King:
     "This series cleans up the handling of DMA masks in a lot of drivers,
      fixing some bugs as we go.

      Some of the more serious errors include:
      - drivers which only set their coherent DMA mask if the attempt to set
        the streaming mask fails.
      - drivers which test for a NULL dma mask pointer, and then set the dma
        mask pointer to a location in their module .data section - which will
        cause problems if the module is reloaded.

      To counter these, I have introduced two helper functions:
      - dma_set_mask_and_coherent() takes care of setting both the streaming
        and coherent masks at the same time, with the correct error handling
        as specified by the API.
      - dma_coerce_mask_and_coherent() which resolves the problem of drivers
        forcefully setting DMA masks. This is more a marker for future work to
        further clean these locations up - the code which creates the devices
        really should be initialising these, but to fix that in one go along
        with this change could potentially be very disruptive.

      The last thing this series does is prise away some of Linux's addition
      to "DMA addresses are physical addresses and RAM always starts at zero".
      We have ARM LPAE systems where all system memory is above 4GB physical,
      hence having DMA masks interpreted by (eg) the block layers as
      describing physical addresses in the range 0..DMAMASK fails on these
      platforms.

      Santosh Shilimkar addresses this in this series; the patches were copied
      to the appropriate people multiple times but were ignored.

      Fixing this also gets rid of some ARM weirdness in the setup of the
      max*pfn variables, and brings ARM into line with every other Linux
      architecture as far as those go"

    * 'for-linus-dma-masks' of git://git.linaro.org/people/rmk/linux-arm: (52 commits)
      ARM: 7805/1: mm: change max*pfn to include the physical offset of memory
      ARM: 7797/1: mmc: Use dma_max_pfn(dev) helper for bounce_limit calculations
      ARM: 7796/1: scsi: Use dma_max_pfn(dev) helper for bounce_limit calculations
      ARM: 7795/1: mm: dma-mapping: Add dma_max_pfn(dev) helper function
      ARM: 7794/1: block: Rename parameter dma_mask to max_addr for blk_queue_bounce_limit()
      ARM: DMA-API: better handing of DMA masks for coherent allocations
      ARM: 7857/1: dma: imx-sdma: setup dma mask
      DMA-API: firmware/google/gsmi.c: avoid direct access to DMA masks
      DMA-API: dcdbas: update DMA mask handing
      DMA-API: dma: edma.c: no need to explicitly initialize DMA masks
      DMA-API: usb: musb: use platform_device_register_full() to avoid directly messing with dma masks
      DMA-API: crypto: remove last references to 'static struct device *dev'
      DMA-API: crypto: fix ixp4xx crypto platform device support
      DMA-API: others: use dma_set_coherent_mask()
      DMA-API: staging: use dma_set_coherent_mask()
      DMA-API: usb: use new dma_coerce_mask_and_coherent()
      DMA-API: usb: use dma_set_coherent_mask()
      DMA-API: parport: parport_pc.c: use dma_coerce_mask_and_coherent()
      DMA-API: net: octeon: use dma_coerce_mask_and_coherent()
      DMA-API: net: nxp/lpc_eth: use dma_coerce_mask_and_coherent()
      ...

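    Typical driver usage of the two helpers described above looks like this
    (a generic example, not taken from any specific driver in the series):

      #include <linux/dma-mapping.h>

      /* Preferred: set the streaming and coherent masks together and honour
       * the error, instead of only setting the coherent mask on failure. */
      static int sketch_setup_dma(struct device *dev)
      {
              int ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

              if (ret)
                      dev_warn(dev, "no suitable DMA available\n");

              /* Drivers that used to point dev->dma_mask at static storage
               * call dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(32))
               * instead, flagging that the bus code should really do this. */
              return ret;
      }
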
| * | ARM: DMA-API: better handing of DMA masks for coherent allocations (Russell King, 2013-10-31, 1 file, -6/+45)

    We need to start treating DMA masks as something which is specific to the
    bus that the device resides on, otherwise we're going to hit all sorts of
    nasty issues with LPAE and 32-bit DMA controllers in >32-bit systems,
    where memory is offset from PFN 0.

    In order to start doing this, we convert the DMA mask to a PFN using the
    device specific dma_to_pfn() macro. This is the reverse of the
    pfn_to_dma() macro which is used to get the DMA address for the device.

    This gives us a PFN mask, which we can then check against the PFN limit of
    the DMA zone.

    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

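    Reduced to a sketch, the check described here has this shape (the
    highest_pfn argument stands in for the real zone-limit variable, which is
    not named in the description):

      /* Compare PFNs, not raw bus addresses, so a memory map that starts
       * above 4GB does not make a valid mask look too small. */
      static int sketch_dma_mask_ok(struct device *dev, u64 mask,
                                    unsigned long highest_pfn)
      {
              /* mask is acceptable if it reaches the last page the kernel
               * may hand out for DMA */
              return dma_to_pfn(dev, mask) >= highest_pfn;
      }
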
* | ARM: dma-mapping: Always pass proper prot flags to iommu_map() (Andreas Herrmann, 2013-10-02, 1 file, -15/+28)

    ... otherwise it is impossible for the low level iommu driver to figure
    out which pte flags should be used.

    In __map_sg_chunk we can derive the flags from dma_data_direction.

    In __iommu_create_mapping we should treat the memory like
    DMA_BIDIRECTIONAL and pass both IOMMU_READ and IOMMU_WRITE to iommu_map.
    __iommu_create_mapping is used during dma_alloc_coherent (via
    arm_iommu_alloc_attrs). AFAIK dma_alloc_coherent is responsible for
    allocation _and_ mapping. I think this implies that access to the mapped
    pages should be allowed.

    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

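    The direction-to-protection mapping described above is essentially the
    following (sketch; the function name is illustrative):

      #include <linux/dma-direction.h>
      #include <linux/iommu.h>

      /* Derive iommu_map() protection flags from the transfer direction;
       * coherent allocations are treated like DMA_BIDIRECTIONAL. */
      static int sketch_dir_to_prot(enum dma_data_direction dir)
      {
              switch (dir) {
              case DMA_BIDIRECTIONAL:
                      return IOMMU_READ | IOMMU_WRITE;
              case DMA_TO_DEVICE:
                      return IOMMU_READ;    /* device only reads the buffer */
              case DMA_FROM_DEVICE:
                      return IOMMU_WRITE;   /* device only writes the buffer */
              default:
                      return 0;
              }
      }
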
* Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm (Linus Torvalds, 2013-09-05, 1 file, -1/+0)

    Pull ARM updates from Russell King:
     "This set includes adding support for Neon acceleration of RAID6 XOR code
      from Ard Biesheuvel, cache flushing and barrier updates from Will
      Deacon, and a cleanup to the ARM debug code which reduces the amount of
      code by about 500 lines.

      A few other cleanups, such as constifying the machine descriptors which
      already shouldn't be written to, cleaning up the printing of the L2
      cache size"

    * 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (55 commits)
      ARM: 7826/1: debug: support debug ll on hisilicon soc
      ARM: 7830/1: delay: don't bother reporting bogomips in /proc/cpuinfo
      ARM: 7829/1: Add ".text.unlikely" and ".text.hot" to arm unwind tables
      ARM: 7828/1: ARMv7-M: implement restart routine common to all v7-M machines
      ARM: 7827/1: highbank: fix debug uart virtual address for LPAE
      ARM: 7823/1: errata: workaround Cortex-A15 erratum 773022
      ARM: 7806/1: allow DEBUG_UNCOMPRESS for Tegra
      ARM: 7793/1: debug: use generic option for ep93xx PL10x debug port
      ARM: debug: move SPEAr debug to generic PL01x code
      ARM: debug: move davinci debug to generic 8250 code
      ARM: debug: move keystone debug to generic 8250 code
      ARM: debug: remove DEBUG_ROCKCHIP_UART
      ARM: debug: provide generic option choices for 8250 and PL01x ports
      ARM: debug: move PL01X debug include into arch/arm/include/debug/
      ARM: debug: provide PL01x debug uart phys/virt address configuration options
      ARM: debug: add support for word accesses to debug/8250.S
      ARM: debug: move 8250 debug include into arch/arm/include/debug/
      ARM: debug: provide 8250 debug uart phys/virt address configuration options
      ARM: debug: provide 8250 debug uart register shift configuration option
      ARM: debug: provide 8250 debug uart flow control configuration option
      ...

| * ARM: mm: remove redundant dsb() prior to range TLB invalidation (Will Deacon, 2013-08-12, 1 file, -1/+0)

    The kernel TLB range invalidation functions already contain dsb
    instructions before and after the maintenance, so there is no need to
    introduce additional barriers.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | Merge remote-tracking branch 'origin/next' into kvm-ppc-next (Alexander Graf, 2013-08-29, 1 file, -8/+32)

    Conflicts:
        mm/Kconfig

    CMA DMA split and ZSWAP introduction were conflicting, fix up manually.

| * Merge branch 'for-v3.11' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping (Linus Torvalds, 2013-07-06, 1 file, -8/+32)

    Pull ARM DMA mapping updates from Marek Szyprowski:
     "This contains important bugfixes and an update for IOMMU integration
      support for ARM architecture"

    * 'for-v3.11' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
      ARM: dma: Drop __GFP_COMP for iommu dma memory allocations
      ARM: DMA-mapping: mark all !DMA_TO_DEVICE pages in unmapping as clean
      ARM: dma-mapping: NULLify dev->archdata.mapping pointer on detach
      ARM: dma-mapping: convert DMA direction into IOMMU protection attributes
      ARM: dma-mapping: Get pages if the cpu_addr is out of atomic_pool

| | * ARM: dma: Drop __GFP_COMP for iommu dma memory allocations (Richard Zhao, 2013-06-28, 1 file, -0/+9)

    __iommu_alloc_buffer wants to split pages after allocation in order to
    reduce the memory footprint. This does not work well with __GFP_COMP
    pages, so drop this flag before allocation.

    One failure example is snd_malloc_dev_pages calling dma_alloc_coherent
    with __GFP_COMP.

    Signed-off-by: Richard Zhao <rizhao@nvidia.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

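    The fix amounts to masking the flag out before allocating (sketch only;
    the function name is illustrative):

      /* The buffer will be split into order-0 pages afterwards, which does
       * not work for compound pages, so strip __GFP_COMP first. */
      static struct page *sketch_iommu_alloc_pages(gfp_t gfp, unsigned int order)
      {
              gfp &= ~__GFP_COMP;
              return alloc_pages(gfp, order);
      }
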
| | * ARM: DMA-mapping: mark all !DMA_TO_DEVICE pages in unmapping as clean (Ming Lei, 2013-06-28, 1 file, -3/+17)

    It is common for one sg to include many pages, so mark all these pages as
    clean to avoid unnecessary flushing on them in set_pte_at() or
    update_mmu_cache().

    The patch might improve loading performance of application code a bit.

    With the below test code, which reads a file (~1GByte size) from a usb
    mass storage disk into a buffer created with mmap(PROT_READ | PROT_EXEC)
    on Pandaboard, an average ~1% improvement can be observed with the patch
    over 10 test runs:

      unsigned int sum = 0;
      static unsigned long tv_diff(struct timeval *tv1, struct timeval *tv2)
      {
              return (tv2->tv_sec - tv1->tv_sec) * 1000000 +
                     (tv2->tv_usec - tv1->tv_usec);
      }
      int main(int argc, char *argv[])
      {
              char *mbuffer;
              int fd;
              int i;
              unsigned long page_size, size;
              struct stat stat;
              struct timeval t1, t2;

              page_size = getpagesize();
              fd = open(argv[1], O_RDONLY);
              assert(fd >= 0);
              fstat(fd, &stat);
              size = stat.st_size;
              printf("%s: file %s, file size %lu, page size %lu\n", argv[0],
                     read_filename, size, page_size);

              gettimeofday(&t1, NULL);
              mbuffer = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
              for (i = 0 ; i < size ; i += page_size)
                      sum += mbuffer[i];
              munmap(mbuffer, page_size);
              gettimeofday(&t2, NULL);
              printf("\tread mmaped time: %luus\n", tv_diff(&t1, &t2));

              close(fd);
      }

    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Cc: Russell King <linux@arm.linux.org.uk>
    Signed-off-by: Ming Lei <ming.lei@canonical.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

| | * ARM: dma-mapping: NULLify dev->archdata.mapping pointer on detach (Will Deacon, 2013-06-28, 1 file, -1/+1)

    The current code only clobbers a local variable, so the device is left
    with a stale mapping pointer.

    Cc: Hiroshi Doyu <hdoyu@nvidia.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Acked-by: Hiroshi Doyu <hdoyu@nvidia.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

| | * ARM: dma-mapping: convert DMA direction into IOMMU protection attributes (Will Deacon, 2013-06-28, 1 file, -2/+16)

    IOMMU mappings take a prot parameter, identifying the protection bits to
    enforce on the newly created mapping (READ or WRITE). The ARM dma-mapping
    framework currently just passes 0 as the prot argument, resulting in
    faulting mappings.

    This patch infers the protection attributes based on the direction of the
    DMA transfer.

    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

| | * ARM: dma-mapping: Get pages if the cpu_addr is out of atomic_pool (YoungJun Cho, 2013-06-28, 1 file, -5/+6)

    In __iommu_get_pages(), the cpu_addr is checked for whether it is in the
    atomic_pool range or not. So if the cpu_addr is in the atomic_pool range,
    it does not need to be checked twice.

    Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
    Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

* | | Merge remote-tracking branch 'cmadma/for-v3.12-cma-dma' into kvm-ppc-next (Alexander Graf, 2013-07-08, 1 file, -3/+3)

    Add prerequisite patch for CMA RMA allocation patches.

| * | mm/cma: Move dma contiguous changes into a seperate config (Aneesh Kumar K.V, 2013-07-02, 1 file, -3/+3)

    We want to use CMA for allocating the hash page table and real mode area
    for PPC64. Hence move DMA contiguous related changes into a separate
    config so that ppc64 can enable CMA without requiring DMA contiguous.

    Acked-by: Michal Nazarewicz <mina86@mina86.com>
    Acked-by: Paul Mackerras <paulus@samba.org>
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    [removed defconfig changes]
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

* | Merge branch 'devel-stable' into for-next (Russell King, 2013-06-29, 1 file, -1/+1)

    Conflicts:
        arch/arm/Makefile
        arch/arm/include/asm/glue-proc.h

| * | ARM: mm: HugeTLB support for LPAE systems. (Catalin Marinas, 2013-06-04, 1 file, -1/+1)

    This patch adds support for hugetlbfs based on the x86 implementation. It
    allows mapping of 2MB sections (see Documentation/vm/hugetlbpage.txt for
    usage). The 64K pages configuration is not supported (section size is
    512MB in this case).

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    [steve.capper@linaro.org: symbolic constants replace numbers in places.
    Split up into multiple files, to simplify future non-LPAE support, removed
    huge_pmd_share code, as this is very rarely executed, Added PROT_NONE
    support].
    Signed-off-by: Steve Capper <steve.capper@linaro.org>
    Reviewed-by: Will Deacon <will.deacon@arm.com>

* | ARM: 7730/1: DMA-mapping: mark all !DMA_TO_DEVICE pages in unmapping as clean (Ming Lei, 2013-05-23, 1 file, -3/+17)

    It is common for one sg to include many pages, so mark all these pages as
    clean to avoid unnecessary flushing on them in set_pte_at() or
    update_mmu_cache().

    The patch might improve loading performance of application code a bit.

    With the below test code, which reads a file (~1GByte size) from a usb
    mass storage disk into a buffer created with mmap(PROT_READ | PROT_EXEC)
    on Pandaboard, an average ~1% improvement can be observed with the patch
    over 10 test runs:

      unsigned int sum = 0;
      static unsigned long tv_diff(struct timeval *tv1, struct timeval *tv2)
      {
              return (tv2->tv_sec - tv1->tv_sec) * 1000000 +
                     (tv2->tv_usec - tv1->tv_usec);
      }
      int main(int argc, char *argv[])
      {
              char *mbuffer;
              int fd;
              int i;
              unsigned long page_size, size;
              struct stat stat;
              struct timeval t1, t2;

              page_size = getpagesize();
              fd = open(argv[1], O_RDONLY);
              assert(fd >= 0);
              fstat(fd, &stat);
              size = stat.st_size;
              printf("%s: file %s, file size %lu, page size %lu\n", argv[0],
                     read_filename, size, page_size);

              gettimeofday(&t1, NULL);
              mbuffer = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
              for (i = 0 ; i < size ; i += page_size)
                      sum += mbuffer[i];
              munmap(mbuffer, page_size);
              gettimeofday(&t2, NULL);
              printf("\tread mmaped time: %luus\n", tv_diff(&t1, &t2));

              close(fd);
      }

    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Signed-off-by: Ming Lei <ming.lei@canonical.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* Merge branches 'devel-stable', 'entry', 'fixes', 'mach-types', 'misc' and 'smp-hotplug' into for-linus (Russell King, 2013-05-02, 1 file, -7/+8)

| * ARM: 7693/1: mm: clean-up in order to reduce to call kmap_high_get() (Joonsoo Kim, 2013-04-17, 1 file, -7/+8)

    In kmap_atomic(), kmap_high_get() is invoked to check for an already
    mapped area. In __flush_dcache_page() and dma_cache_maint_page(), we
    explicitly call kmap_high_get() before kmap_atomic() when
    cache_is_vipt(), so kmap_high_get() can be invoked twice. This is a
    useless operation, so remove one.

    v2: change cache_is_vipt() to cache_is_vipt_nonaliasing() in order to be
    self-documented

    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

* | ARM: DMA-mapping: add missing GFP_DMA flag for atomic buffer allocation (Marek Szyprowski, 2013-03-14, 1 file, -2/+3)

    The atomic pool should always be allocated from the DMA zone if such a
    zone is available in the system, to avoid issues caused by the limited dma
    mask of any of the devices used for making an atomic allocation.

    Reported-by: Krzysztof Halasa <khc@pm.waw.pl>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Cc: Stable <stable@vger.kernel.org> [v3.6+]

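    Reduced to a sketch, the change forces the atomic pool into ZONE_DMA when
    one is configured (illustrative only; whether the actual patch used
    IS_ENABLED() is not stated in the description):

      /* Allocate the atomic pool from ZONE_DMA (when present) so devices with
       * narrow DMA masks can always be satisfied from it. */
      static gfp_t sketch_atomic_pool_gfp(void)
      {
              gfp_t gfp = GFP_KERNEL;

              if (IS_ENABLED(CONFIG_ZONE_DMA))
                      gfp |= GFP_DMA;
              return gfp;
      }
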
* ARM: DMA-mapping: fix memory leak in IOMMU dma-mapping implementation (Marek Szyprowski, 2013-02-25, 1 file, -3/+3)

    This patch removes page_address() usage in the IOMMU-aware dma-mapping
    implementation and replaces it with direct use of the cpu virtual address
    provided by the caller. page_address() returned an incorrect address for
    pages remapped in the atomic pool, which caused a memory leak.

    Reported-by: Hiroshi Doyu <hdoyu@nvidia.com>
    Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
    Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
