path: root/drivers/dma
Commit message / Author / Age / Files / Lines changed
* Merge tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma (Linus Torvalds, 2017-11-14, 26 files, -291/+3597)

      Pull dmaengine updates from Vinod Koul:
      "Updates for this cycle include:

       - new driver for Spreadtrum dma controller, ST MDMA and DMAMUX
         controllers

       - PM support for IMG MDC drivers

       - updates to bcm-sba-raid driver and improvements to sun6i driver

       - subsystem conversion for:
           - timers to use timer_setup()
           - remove usage of PCI pool API
           - usage of %p format specifier

       - minor updates to bunch of drivers"

      * tag 'dmaengine-4.15-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (49 commits)
        dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
        dmaengine: dmatest: warn user when dma test times out
        dmaengine: Revert "rcar-dmac: use TCRB instead of TCR for residue"
        dmaengine: stm32_mdma: activate pack/unpack feature
        dmaengine: at_hdmac: Remove unnecessary 0x prefixes before %pad
        dmaengine: coh901318: Remove unnecessary 0x prefixes before %pad
        MAINTAINERS: Step down from a co-maintaner of DW DMAC driver
        dmaengine: pch_dma: Replace PCI pool old API
        dmaengine: Convert timers to use timer_setup()
        dmaengine: sprd: Add Spreadtrum DMA driver
        dt-bindings: dmaengine: Add Spreadtrum SC9860 DMA controller
        dmaengine: sun6i: Retrieve channel count/max request from devicetree
        dmaengine: Build bcm-sba-raid driver as loadable module for iProc SoCs
        dmaengine: bcm-sba-raid: Use common GPL comment header
        dmaengine: bcm-sba-raid: Use only single mailbox channel
        dmaengine: bcm-sba-raid: serialize dma_cookie_complete() using reqs_lock
        dmaengine: pl330: fix descriptor allocation fail
        dmaengine: rcar-dmac: use TCRB instead of TCR for residue
        dmaengine: sun6i: Add support for Allwinner A64 and compatibles
        arm64: allwinner: a64: Add devicetree binding for DMA controller
        ...
| * Merge branch 'topic/xilinx' into for-linus (Vinod Koul, 2017-11-14, 1 file, -0/+14)
| |\
| | * dmaengine: xilinx_dma: Move enum xdma_ip_type to driver file (Lars-Peter Clausen, 2017-09-17, 1 file, -0/+14)

      The enum xdma_ip_type is only used inside the Xilinx DMA driver and not
      exported to any consumers (nor should it be). So move it from the global
      header to driver file itself.

      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Acked-by: Michal Simek <michal.simek@xilinx.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/timer_api' into for-linus (Vinod Koul, 2017-11-14, 4 files, -11/+8)
| |\ \
| | * | dmaengine: Convert timers to use timer_setup() (Kees Cook, 2017-10-24, 4 files, -11/+8)

      In preparation for unconditionally passing the struct timer_list pointer
      to all timer callbacks, switch to using the new timer_setup() and
      from_timer() to pass the timer pointer explicitly.

      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
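
      The conversion pattern looks roughly like the sketch below (the device
      structure is invented for illustration, not taken from any of the
      converted drivers): the callback now receives the timer pointer and
      recovers its containing structure with from_timer().

        #include <linux/timer.h>

        struct my_dev {                       /* hypothetical driver context */
            struct timer_list watchdog;
            bool timed_out;
        };

        static void my_watchdog_fn(struct timer_list *t)
        {
            /* recover the containing structure from the timer pointer */
            struct my_dev *md = from_timer(md, t, watchdog);

            md->timed_out = true;
        }

        static void my_dev_init_timer(struct my_dev *md)
        {
            /* replaces setup_timer(&md->watchdog, fn, (unsigned long)md) */
            timer_setup(&md->watchdog, my_watchdog_fn, 0);
        }
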
| * | Merge branch 'topic/ti' into for-linus (Vinod Koul, 2017-11-14, 3 files, -4/+14)
| |\ \
| | * | dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type (Peter Ujfalusi, 2017-11-08, 1 file, -4/+4)

      The used 0x1f mask is only valid for the am335x family of SoCs; a
      different family using this type of crossbar might have a different
      number of selectable events. In case of the am43xx family, a 0x3f mask
      should have been used, for example. Instead of trying to handle each
      family's mask, just use the u8 type to store the mux value, since the
      event offsets are aligned to byte offset.

      Fixes: 42dbdcc6bf965 ("dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx")
      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: omap-dma: Implement protection for invalid max_burst (Peter Ujfalusi, 2017-10-12, 1 file, -0/+5)

      Set the device's max_burst to 16777215 (EN is a 24-bit unsigned value)
      so clients can take this into consideration when setting up the
      transfer. During slave transfer preparation, check that the requested
      maxburst is valid.

      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: edma: Implement protection for invalid max_burst (Peter Ujfalusi, 2017-10-12, 1 file, -0/+5)

      Set the device's max_burst to 32767 (CIDX is a 16-bit signed value) so
      clients can take this into consideration when setting up the transfer.
      During slave transfer preparation, check that the requested maxburst is
      valid.

      Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
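
      Both patches follow the same pattern: advertise the limit through the
      dma_device's max_burst field and reject slave configurations that
      exceed it. A minimal sketch of that check (the helper name is made up
      for illustration, not taken from either driver):

        #include <linux/dmaengine.h>
        #include <linux/errno.h>

        /* sketch: validate a client's slave config against the advertised limit */
        static int check_maxburst(struct dma_chan *chan,
                                  struct dma_slave_config *cfg)
        {
            u32 max_burst = chan->device->max_burst;  /* e.g. 32767 or 16777215 */

            if (cfg->src_maxburst > max_burst || cfg->dst_maxburst > max_burst)
                return -EINVAL;                       /* refuse invalid transfers */

            return 0;
        }
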
| * | Merge branch 'topic/sun' into for-linus (Vinod Koul, 2017-11-14, 1 file, -62/+195)
| |\ \
| | * | dmaengine: sun6i: Retrieve channel count/max request from devicetree (Stefan Brüns, 2017-10-23, 1 file, -1/+26)

      To avoid introduction of a new compatible for each small SoC/DMA
      controller variation, move the definition of the channel count to the
      devicetree. The number of vchans is no longer explicit, but limited by
      the highest port/DMA request number. The result is a slight
      overallocation for SoCs with a sparse port mapping.

      Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
      Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
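
      Reading such counts from the devicetree usually boils down to a couple
      of of_property_read_u32() calls. The fragment below is a generic sketch
      of that idea (the property names follow the common DMA bindings and the
      fallback value is invented); it is not the sun6i driver's actual probe
      code.

        #include <linux/of.h>

        /* sketch: fetch channel/request counts instead of hard-coding them
         * per compatible string */
        static int read_dma_topology(struct device_node *np,
                                     u32 *num_pchans, u32 *max_request)
        {
            int ret;

            ret = of_property_read_u32(np, "dma-channels", num_pchans);
            if (ret)
                return ret;                 /* treat the property as mandatory */

            if (of_property_read_u32(np, "dma-requests", max_request))
                *max_request = 31;          /* illustrative fallback only */

            return 0;
        }
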
| | * | dmaengine: sun6i: Add support for Allwinner A64 and compatibles (Stefan Brüns, 2017-10-16, 1 file, -0/+20)

      The A64 SoC has the same dma engine as the H3 (sun8i), with a reduced
      amount of physical channels. To allow future reuse of the compatible,
      leave the channel count etc. in the config data blank and retrieve it
      from the devicetree.

      Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
      Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: Move number of pchans/vchans/request to device struct (Stefan Brüns, 2017-10-16, 1 file, -10/+16)

      Preparatory patch: If the same compatible is used for different SoCs
      which have a common register layout, but different number of channels,
      the channel count can no longer be stored in the config. Store it in
      the device structure instead.

      Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
      Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: Enable additional burst lengths/widths on H3 (Stefan Brüns, 2017-10-16, 1 file, -9/+45)

      The H3 supports burst lengths of 1, 4, 8 and 16 transfers, each with a
      width of 1, 2, 4 or 8 bytes. The register value for the width is
      log2-encoded; change the conversion function to provide the correct
      value for width == 8.

      Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
      Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
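
      A log2 encoding maps bus widths of 1/2/4/8 bytes to register values
      0/1/2/3, so the conversion can be written once with ilog2() instead of
      enumerating widths. The helper below is only an illustrative sketch of
      that idea, not the driver's exact function.

        #include <linux/dmaengine.h>
        #include <linux/errno.h>
        #include <linux/log2.h>

        /* sketch: log2-encode the slave bus width for a channel config field */
        static inline s8 width_to_reg(enum dma_slave_buswidth width)
        {
            switch (width) {
            case DMA_SLAVE_BUSWIDTH_1_BYTE:     /* 1 -> 0 */
            case DMA_SLAVE_BUSWIDTH_2_BYTES:    /* 2 -> 1 */
            case DMA_SLAVE_BUSWIDTH_4_BYTES:    /* 4 -> 2 */
            case DMA_SLAVE_BUSWIDTH_8_BYTES:    /* 8 -> 3 */
                return ilog2(width);
            default:
                return -EINVAL;
            }
        }
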
| | * | dmaengine: sun6i: Restructure code to allow extension for new SoCs (Stefan Brüns, 2017-10-16, 1 file, -28/+38)

      The current code mixes three distinct operations when transforming the
      slave config to register settings:

        1. special handling of DMA_SLAVE_BUSWIDTH_UNDEFINED, maxburst == 0
        2. range checking
        3. conversion of raw to register values

      As the range checks depend on the specific SoC, move these out of the
      conversion to distinct operations.

      Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
      Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: Correct burst length field offsets for H3 (Stefan Brüns, 2017-10-16, 1 file, -7/+27)

      For the H3, the burst length field offsets in the channel configuration
      register differ from earlier SoC generations. Using the A31 register
      macros actually configured the H3 controller to always do bursts of
      length 1, which, although working, leads to higher bus utilisation.

      Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
      Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: Correct setting of clock autogating register for A83T/H3 (Stefan Brüns, 2017-10-16, 1 file, -5/+23)

      The A83T uses a compatible string different from the A23, but requires
      the same clock autogating register setting. The H3 also requires
      setting the clock autogating register, but has the register at a
      different offset. Add three suitable callbacks for the existing
      controller generations and set it in the controller config structure.

      Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
      Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sun6i: use of_device_get_match_data (Corentin Labbe, 2017-09-21, 1 file, -4/+2)

      The usage of of_device_get_match_data reduces the code size a bit.
      Furthermore, it prevents an improbable dereference when
      of_match_device() returns NULL.

      Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
      Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
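
      The before/after shape of such a conversion is roughly the following
      generic sketch (the config type and match-table name are made up; this
      is not the sun6i code itself):

        #include <linux/errno.h>
        #include <linux/of_device.h>
        #include <linux/platform_device.h>

        struct my_dma_cfg { int nr_channels; };   /* placeholder match data */

        static int my_probe(struct platform_device *pdev)
        {
            /* before:
             *   const struct of_device_id *match =
             *           of_match_device(my_dma_match, &pdev->dev);
             *   cfg = match->data;          <-- crashes if match is NULL
             */
            const struct my_dma_cfg *cfg = of_device_get_match_data(&pdev->dev);

            if (!cfg)
                return -ENODEV;              /* no (or unmatched) DT node */

            return 0;
        }
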
| * | Merge branch 'topic/sprd' into for-linus (Vinod Koul, 2017-11-14, 3 files, -0/+997)
| |\ \
      Kconfig and Makefile conflicts so put them in right order (sprd ones
      after stm ones).

      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: sprd: Add Spreadtrum DMA driver (Baolin Wang, 2017-10-24, 3 files, -0/+997)

      This patch adds the DMA controller driver for Spreadtrum SC9860
      platform.

      Signed-off-by: Baolin Wang <baolin.wang@spreadtrum.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/stm' into for-linus (Vinod Koul, 2017-11-14, 4 files, -0/+2032)
| |\ \
| | * | dmaengine: stm32_mdma: activate pack/unpack feature (Pierre-Yves MORDRET, 2017-11-08, 1 file, -34/+50)

      If source and destination bus widths differ, the MDMA pack/unpack
      feature has to be activated for alignment. This pack/unpack feature
      implies having both source/destination address and buffer length
      aligned on the bus width.

      Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver")
      Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: stm32: remove redundant initialization of hwdesc (Colin Ian King, 2017-10-12, 1 file, -2/+2)

      hwdesc is being initialized to desc->hwdesc but this is never read, as
      hwdesc is overwritten in a for-loop. Remove the redundant
      initialization and move the declaration of hwdesc into the for-loop.

      Cleans up clang warning:
        Value stored to 'hwdesc' during its initialization is never read

      Signed-off-by: Colin Ian King <colin.king@canonical.com>
      Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: stm32_mdma: add CONFIG_OF dependency (Arnd Bergmann, 2017-10-12, 1 file, -1/+1)

      Without CONFIG_OF we get a build warning:

        warning: (STM32_MDMA) selects DMA_OF which has unmet direct
        dependencies (DMADEVICES && OF)

      This adds a dependency on CONFIG_OF. Since this means we no longer need
      to select 'DMA_OF', I'm dropping that line as well.

      Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: stm32: use %p format specfier for pointerVinod Koul (Vinod Koul, 2017-10-08, 1 file, -1/+1)

      The pointer print was using an explicit cast and printing with %x,
      which causes a warning on some architectures, so print using the %p
      format specifier instead.

      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: stm32-dmamux: Fix a NULL vs IS_ERR() check in probe (Dan Carpenter, 2017-10-08, 1 file, -2/+2)

      devm_ioremap_resource() doesn't return NULL, it returns error pointers.

      Fixes: df7e762db5f6 ("dmaengine: Add STM32 DMAMUX driver")
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
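
      The correct error handling for devm_ioremap_resource() looks like this
      generic probe fragment (a sketch, not the stm32-dmamux code):

        #include <linux/device.h>
        #include <linux/err.h>
        #include <linux/io.h>
        #include <linux/platform_device.h>

        static int my_probe(struct platform_device *pdev)
        {
            struct resource *res;
            void __iomem *base;

            res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
            base = devm_ioremap_resource(&pdev->dev, res);
            if (IS_ERR(base))            /* never NULL: check with IS_ERR() */
                return PTR_ERR(base);    /* not "if (!base) return -ENOMEM;" */

            return 0;
        }
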
| | * | dmaengine: Add STM32 MDMA driver (Pierre-Yves MORDRET, 2017-10-08, 3 files, -0/+1679)

      This patch adds the driver for the STM32 MDMA controller. Master Direct
      memory access (MDMA) is used in order to provide high-speed data
      transfer between memory and memory or between peripherals and memory.

      The MDMA controller provides a master AXI interface for main memory and
      peripheral registers access (system access port) and a master AHB
      interface only for Cortex-M7 TCM memory access (TCM access port). MDMA
      works in conjunction with the standard DMA controllers (DMA1 or DMA2).
      It offers up to 64 channels, each dedicated to managing memory access
      requests from one of the DMA stream memory buffers or other peripherals
      (with integrated FIFO).

      Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
      Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: Add STM32 DMAMUX driver (Pierre-Yves MORDRET, 2017-09-27, 3 files, -0/+337)

      This patch implements the STM32 DMAMUX driver. The DMAMUX request
      multiplexer allows routing a DMA request line between the peripherals
      and the DMA controllers of the product. The routing function is ensured
      by a programmable multi-channel DMA request line multiplexer. Each
      channel selects a unique DMA request line, unconditionally or
      synchronously with events from its DMAMUX synchronization inputs. The
      DMAMUX may also be used as a DMA request generator from programmable
      events on its input trigger signals.

      Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
      Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/sa11x0' into for-linus (Vinod Koul, 2017-11-14, 1 file, -0/+11)
| |\ \
| | * | dmaengine: sa11x0: add DMA filters (Russell King, 2017-09-27, 1 file, -0/+11)

      Add DMA filters for the sa11x0 DMA channels. This will allow us to
      migrate away from directly using the DMA filter function in drivers.

      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/renasas' into for-linus (Vinod Koul, 2017-11-14, 1 file, -3/+2)
| |\ \
| | * | dmaengine: Revert "rcar-dmac: use TCRB instead of TCR for residue" (Vinod Koul, 2017-11-08, 1 file, -1/+1)

      This reverts commit 847449f23dcb ("dmaengine: rcar-dmac: use TCRB
      instead of TCR for residue") as it breaks small serial console.

      Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: rcar-dmac: use TCRB instead of TCR for residue (Hiroyuki Yokoyama, 2017-10-20, 1 file, -1/+1)

      SYS/RT/Audio DMAC includes independent data buffers for reading and
      writing. Therefore, the read transfer counter and write transfer
      counter have different values. TCR indicates the read counter, and TCRB
      indicates the write counter. The relationship is like below.

                      TCR        TCRB
          [SOURCE] -> [DMAC] -> [SINK]

      In the MEM_TO_DEV direction, what really matters is how much data has
      been written to the device. If the DMA is interrupted between read and
      write, then the data doesn't end up in the destination, so it shouldn't
      be counted. TCRB is thus the register we should use in this case.

      In the DEV_TO_MEM direction, the situation is more complex. Both the
      read and write side are important. What matters from a data consumer
      point of view is how much data has been written to memory. On the other
      hand, if the transfer is interrupted between read and write, we'll end
      up losing data. It can also be important to report.

      In the MEM_TO_MEM direction, what matters is of course how much data
      has been written to memory from a data consumer point of view. Here,
      because read and write have independent data buffers, it will take a
      while for TCR and TCRB to become equal. Thus we should check TCRB in
      this case, too.

      Thus, in all cases we should check TCRB instead of TCR.

      Without this patch, Sound Capture has noise after PulseAudio support
      (= 07b7acb51d2 ("ASoC: rsnd: update pointer more accurate")), because
      the recorder will use a wrong residue counter which indicates the data
      transferred from the sound device, but in reality the data was not yet
      put to memory and the recorder will record it.

      Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
      [Kuninori: added detail information in log]
      Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
      Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: nbpfaxi: Use of_device_get_match_data() helper (Geert Uytterhoeven, 2017-10-12, 1 file, -3/+2)

      Use the of_device_get_match_data() helper instead of open coding. Note
      that when used with DT, there's always a valid match.

      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/qcom' into for-linus (Vinod Koul, 2017-11-14, 1 file, -60/+109)
| |\ \
| | * | dmaengine: qcom-bam: Process multiple pending descriptors (Sricharan R, 2017-09-25, 1 file, -60/+109)

      The bam dmaengine has a circular FIFO to which we add hw descriptors
      that describe the transaction. The FIFO has space for about 4096 hw
      descriptors.

      Currently we add one descriptor and wait for it to complete with
      interrupt and then add the next pending descriptor. In this way, the
      FIFO is underutilized since only one descriptor is processed at a time,
      although there is space in the FIFO for the BAM to process more.

      Instead, keep adding descriptors to the FIFO till it is full, which
      allows the BAM to continue to work on the next descriptor immediately
      after signalling the completion interrupt for the previous descriptor.

      Also, when the client has not set DMA_PREP_INTERRUPT for a descriptor,
      do not configure the BAM to trigger an interrupt upon completion of
      that descriptor. This way we get an interrupt only for the descriptor
      for which DMA_PREP_INTERRUPT was requested, and there we signal
      completion of all the previously completed descriptors. So we still do
      callbacks for all requested descriptors, just that the number of
      interrupts is reduced.

      CURRENT:
                  ------        -------        ---------------
                  |DES 0|       |DESC 1|       |DESC 2 + INT |
                  ------        -------        ---------------
                     |             |                  |
                     |             |                  |
        INTERRUPT: (INT)         (INT)              (INT)
        CALLBACK:   (CB)          (CB)               (CB)

        MTD_SPEEDTEST READ PAGE: 3560 KiB/s
        MTD_SPEEDTEST WRITE PAGE: 2664 KiB/s
        IOZONE READ: 2456 KB/s
        IOZONE WRITE: 1230 KB/s
        bam dma interrupts (after tests): 96508

      CHANGE:
                  ------        -------        ---------------
                  |DES 0|       |DESC 1|       |DESC 2 + INT |
                  ------        -------        ---------------
                                                      |
                                                    (INT)
                                                    (CB for 0, 1, 2)

        MTD_SPEEDTEST READ PAGE: 3860 KiB/s
        MTD_SPEEDTEST WRITE PAGE: 2837 KiB/s
        IOZONE READ: 2677 KB/s
        IOZONE WRITE: 1308 KB/s
        bam dma interrupts (after tests): 58806

      Signed-off-by: Sricharan R <sricharan@codeaurora.org>
      Reviewed-by: Andy Gross <andy.gross@linaro.org>
      Tested-by: Abhishek Sahu <absahu@codeaurora.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
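
      The "interrupt only where requested" part hinges on the
      DMA_PREP_INTERRUPT flag passed to the prep call. The sketch below only
      illustrates that idea in generic form; the hardware descriptor layout
      and the interrupt-enable bit name are invented, not the BAM ones.

        #include <linux/bitops.h>
        #include <linux/dmaengine.h>

        #define MY_DESC_INT_EN  BIT(0)        /* made-up hw interrupt-enable bit */

        struct my_hw_desc {                   /* made-up FIFO descriptor layout */
            u32 addr;
            u16 size;
            u16 flags;
        };

        /* sketch: request a completion IRQ only when the client asked for one;
         * earlier descriptors are then reaped in that same IRQ */
        static void my_fill_hw_desc(struct my_hw_desc *hw, u32 addr, u16 len,
                                    unsigned long dma_flags)
        {
            hw->addr = addr;
            hw->size = len;
            hw->flags = 0;
            if (dma_flags & DMA_PREP_INTERRUPT)
                hw->flags |= MY_DESC_INT_EN;
        }
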
| * | Merge branch 'topic/pl330' into for-linus (Vinod Koul, 2017-11-14, 1 file, -19/+20)
| |\ \
| | * | dmaengine: pl330: fix descriptor allocation fail (Alexander Kochetkov, 2017-10-20, 1 file, -19/+20)

      If two concurrent threads call pl330_get_desc() when the DMAC
      descriptor pool is empty, it is possible that the allocation for one of
      the threads will fail with the message:

        kernel: dma-pl330 20078000.dma-controller: pl330_get_desc:2469 ALERT!

      Here is how that can happen. Thread A calls pl330_get_desc() to get a
      descriptor. If the DMAC descriptor pool is empty, pl330_get_desc()
      allocates a new descriptor on the shared pool using add_desc() and then
      gets the newly allocated descriptor using pluck_desc(). At the same
      time, thread B calls pluck_desc() and takes the newly allocated
      descriptor. In that case the descriptor allocation for thread A will
      fail.

      Using an on-stack pool for the new descriptor allows avoiding the issue
      described. The patch modifies pl330_get_desc() to use an on-stack pool
      for allocation of new descriptors.

      Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com>
      Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/imx' into for-linus (Vinod Koul, 2017-11-14, 1 file, -3/+11)
| |\ \
| | * | dmaengine: imx-sdma: Correct src_addr_widths and directions (Nicolin Chen, 2017-09-21, 1 file, -3/+11)

      The driver already supports DMA_DEV_TO_DEV in sdma_config(), and
      DMA_SLAVE_BUSWIDTH_2_BYTES and DMA_SLAVE_BUSWIDTH_1_BYTE in
      sdma_prep_slave_sg(). So this patch adds them to the lists.

      Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
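
      The "lists" in question are the capability masks on struct dma_device
      that dma_get_slave_caps() reports to clients. An illustrative (not
      imx-sdma-specific) setup looks like:

        #include <linux/bitops.h>
        #include <linux/dmaengine.h>

        /* sketch: advertise only what the config/prep paths actually implement */
        static void my_set_caps(struct dma_device *dd)
        {
            dd->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                                  BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
                                  BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
            dd->dst_addr_widths = dd->src_addr_widths;
            dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV) |
                             BIT(DMA_DEV_TO_DEV);
        }
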
| * | Merge branch 'topic/img' into for-linus (Vinod Koul, 2017-11-14, 1 file, -18/+80)
| |\ \
| | * | dmaengine: img-mdc: Add runtime PM (Ed Blake, 2017-10-16, 1 file, -24/+53)

      Add runtime PM support to disable the clock when the h/w is not in use.
      The existing clk_prepare_enable() is removed from probe() as the clock
      is no longer permanently enabled.

      Signed-off-by: Ed Blake <ed.blake@sondrel.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: img-mdc: Add suspend / resume handling (Ed Blake, 2017-10-16, 1 file, -0/+33)

      Add suspend / resume handling using suspend_late and resume_early, and
      check that all channels are idle before suspending.

      DMA drivers should use suspend_late / resume_early to ensure that all
      DMA client devices are suspended before the DMA device itself, and that
      client devices are resumed after the DMA device. This avoids suspending
      the DMA device while transactions are still active.

      It is the responsibility of client drivers to terminate all DMA
      transactions in their suspend handlers, so there should be no active
      transactions by the time suspend_late is called.

      There's no need to save and restore registers for MDC during suspend /
      resume, as all transactions will be terminated as a result of the
      suspend, and all required registers are programmed anyway at the start
      of any new transactions following resume.

      Signed-off-by: Ed Blake <ed.blake@sondrel.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
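
      Taken together, the two img-mdc patches amount to a dev_pm_ops table
      with runtime clock gating plus a late system-sleep hook that refuses to
      suspend while channels are busy. The sketch below only illustrates that
      shape; the structure fields and names are invented and this is not the
      driver's actual code.

        #include <linux/atomic.h>
        #include <linux/clk.h>
        #include <linux/errno.h>
        #include <linux/pm.h>
        #include <linux/pm_runtime.h>

        struct my_dma {                       /* placeholder driver context */
            struct clk *clk;
            atomic_t busy_channels;
        };

        static int my_runtime_suspend(struct device *dev)
        {
            struct my_dma *md = dev_get_drvdata(dev);

            clk_disable_unprepare(md->clk);   /* gate the clock while idle */
            return 0;
        }

        static int my_runtime_resume(struct device *dev)
        {
            struct my_dma *md = dev_get_drvdata(dev);

            return clk_prepare_enable(md->clk);
        }

        static int my_suspend_late(struct device *dev)
        {
            struct my_dma *md = dev_get_drvdata(dev);

            /* clients must have terminated their transfers by now */
            return atomic_read(&md->busy_channels) ? -EBUSY : 0;
        }

        static const struct dev_pm_ops my_pm_ops = {
            /* no resume_early work needed if registers are reprogrammed
             * at the start of each new transaction */
            SET_LATE_SYSTEM_SLEEP_PM_OPS(my_suspend_late, NULL)
            SET_RUNTIME_PM_OPS(my_runtime_suspend, my_runtime_resume, NULL)
        };
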
| * | Merge branch 'topic/dmatest' into for-linus (Vinod Koul, 2017-11-14, 1 file, -0/+1)
| |\ \
| | * | dmaengine: dmatest: warn user when dma test times out (Adam Wallis, 2017-11-08, 1 file, -0/+1)

      Commit adfa543e7314 ("dmatest: don't use set_freezable_with_signal()")
      introduced a bug (that is in fact documented by the patch commit text)
      that leaves behind a dangling pointer. Since the done_wait structure is
      allocated on the stack, future invocations to the DMATEST can produce
      undesirable results (e.g., corrupted spinlocks). Ideally, this would be
      cleaned up in the thread handler, but at the very least, the kernel is
      left in a very precarious scenario that can lead to some long debug
      sessions when the crash comes later.

      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=197605
      Signed-off-by: Adam Wallis <awallis@codeaurora.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/bcom' into for-linus (Vinod Koul, 2017-11-14, 2 files, -81/+38)
| |\ \
| | * | dmaengine: Build bcm-sba-raid driver as loadable module for iProc SoCs (Anup Patel, 2017-10-23, 1 file, -1/+1)

      By default, we build the Broadcom SBA RAID driver as a loadable module
      for iProc SoCs so that the kernel image is a little smaller and we load
      the SBA RAID driver only when required.

      Signed-off-by: Anup Patel <anup.patel@broadcom.com>
      Reviewed-by: Scott Branden <scott.branden@broadcom.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: bcm-sba-raid: Use common GPL comment header (Anup Patel, 2017-10-23, 1 file, -3/+8)

      This patch makes the comment header of Broadcom SBA RAID driver similar
      to the GPL comment header used across Broadcom driver sources.

      Signed-off-by: Anup Patel <anup.patel@broadcom.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: bcm-sba-raid: Use only single mailbox channel (Anup Patel, 2017-10-23, 1 file, -77/+27)

      Each mailbox channel used by Broadcom SBA RAID driver is a separate HW
      ring. Currently, Broadcom SBA RAID driver creates one DMA channel using
      one or more mailbox channels. When we are using more than one mailbox
      channels for a DMA channel, the sba_request are distributed evenly
      among multiple mailbox channels which results in sba_request being
      completed out-of-order.

      The above described out-of-order completion of sba_request breaks the
      dma_async_is_complete() API because it assumes DMA cookies are
      completed in orderly fashion.

      To ensure correct behaviour of dma_async_is_complete() API, this patch
      updates Broadcom SBA RAID driver to use only single mailbox channel. If
      additional mailbox channels are specified in DT then those will be
      ignored.

      Signed-off-by: Anup Patel <anup.patel@broadcom.com>
      Reviewed-by: Ray Jui <ray.jui@broadcom.com>
      Reviewed-by: Scott Branden <scott.branden@broadcom.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: bcm-sba-raid: serialize dma_cookie_complete() using reqs_lock (Anup Patel, 2017-10-23, 1 file, -0/+2)

      As per the documentation in drivers/dma/dmaengine.h, the
      dma_cookie_complete() API should be called with a lock held. This patch
      ensures that the Broadcom SBA RAID driver calls the
      dma_cookie_complete() API with reqs_lock held.

      Signed-off-by: Anup Patel <anup.patel@broadcom.com>
      Reviewed-by: Ray Jui <ray.jui@broadcom.com>
      Reviewed-by: Scott Branden <scott.branden@broadcom.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
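
      The rule exists because the dmaengine cookie helpers do no locking of
      their own; the caller supplies it. A generic completion-path sketch
      (the lock and request names are placeholders, not the bcm-sba-raid
      symbols):

        #include <linux/spinlock.h>
        #include <linux/dmaengine.h>
        #include "dmaengine.h"           /* in-tree helper header for cookies */

        struct my_request {              /* placeholder request structure */
            struct dma_async_tx_descriptor tx;
        };

        /* sketch: complete the cookie only while holding the driver's lock */
        static void my_complete_request(struct my_request *req,
                                        spinlock_t *reqs_lock)
        {
            unsigned long flags;

            spin_lock_irqsave(reqs_lock, flags);
            dma_cookie_complete(&req->tx);   /* must run under a lock */
            spin_unlock_irqrestore(reqs_lock, flags);
        }
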