path: root/drivers/dma
Commit log (each entry lists the commit message, then author, date, files changed, and lines removed/added)
* dmaengine: at_hdmac: fix race condition in atc_advance_work() (Ludovic Desroches, 2013-04-18; 1 file, -5/+4)

  The BUG_ON() directive is triggered, probably due to a latency modification following the inclusion of
  commit c10d73671ad3 ("softirq: reduce latencies"). This condition has not been met before 3.9-rc1 and
  doesn't trigger without this patch. We now make sure that the DMA channel is idle before calling
  atc_complete_all(), which makes the BUG_ON() "protection" useless.

  Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
  Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'fixes' of git://git.infradead.org/users/vkoul/slave-dma (Linus Torvalds, 2013-04-11; 2 files, -17/+41)

  Pull slave-dmaengine fixes from Vinod Koul:
  "The first one fixes issue in pl330 to check for DT compatible and the second one fixes omap-dma to
  start without delay"

  * 'fixes' of git://git.infradead.org/users/vkoul/slave-dma:
    dmaengine: omap-dma: Start DMA without delay for cyclic channels
    DMA: PL330: Add check if device tree compatible
| * dmaengine: omap-dma: Start DMA without delay for cyclic channels (Peter Ujfalusi, 2013-04-10; 1 file, -6/+14)

  Cyclic DMA is only used by audio, which needs the DMA to be started without a delay. If the DMA for
  audio is started using the tasklet we experience random channel switching (to be more precise: channel
  shift).

  Reported-by: Peter Meerwald <pmeerw@pmeerw.net>
  CC: stable@vger.kernel.org # v3.7+
  Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
  Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
  Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * DMA: PL330: Add check if device tree compatible (Padmavathi Venna, 2013-04-02; 1 file, -11/+27)

  This patch registers the dma controller with the generic dma helpers only in the DT case. It also adds
  some extra error handling in the driver.

  Signed-off-by: Padmavathi Venna <padma.v@samsung.com>
  Reported-by: Sachin Kamat <sachin.kamat@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux (Linus Torvalds, 2013-04-03; 1 file, -0/+1)

  Pull s390 fixes from Martin Schwidefsky:
  "Just a bunch of bugfixes"

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
    s390/mm: provide emtpy check_pgt_cache() function
    s390/uaccess: fix page table walk
    s390/3270: fix minor_start issue
    s390/uaccess: fix clear_user_pt()
    s390/scm_blk: fix error return code in scm_blk_init()
    s390/scm_block: fix printk format string
    drivers/Kconfig: add several missing GENERIC_HARDIRQS dependencies
| * drivers/Kconfig: add several missing GENERIC_HARDIRQS dependencies (Heiko Carstens, 2013-03-21; 1 file, -0/+1)

  With this patch an allmodconfig finally builds on s390 again. Fixes these build errors:

    ERROR: "devm_request_threaded_irq" [drivers/spi/spi-altera.ko] undefined!
    ERROR: "devm_request_threaded_irq" [drivers/media/platform/sh_veu.ko] undefined!
    ERROR: "devm_request_threaded_irq" [drivers/dma/dw_dmac.ko] undefined!

  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
* | dw_dmac: adjust slave_id accordingly to request line base (Andy Shevchenko, 2013-03-30; 2 files, -1/+17)

  On some hardware configurations the request lines come with an offset. The patch introduces a
  convert_slave_id() helper for those cases. The request line base comes from the driver data provided by
  the platform_device_id table.

  Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Cc: Viresh Kumar <viresh.kumar@linaro.org>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* | dmaengine: dw_dma: fix endianess for DT xlate function (Arnd Bergmann, 2013-03-30; 1 file, -3/+3)

  As reported by Wu Fengguang's build robot tracking sparse warnings, the dma_spec arguments in the
  dw_dma_xlate are already byte swapped on little-endian platforms and must not get swapped again. This
  code is currently not used anywhere, but will be used in Linux 3.10 when the ARM SPEAr platform starts
  using the generic DMA DT binding.

  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Reported-by: Fengguang Wu <fengguang.wu@intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* Merge branch 'next' of git://git.infradead.org/users/vkoul/slave-dma (Linus Torvalds, 2013-03-03; 2 files, -77/+75)

  Pull second set of slave-dmaengine updates from Vinod Koul:
  "Arnd's patch moves the dw_dmac to use generic DMA binding. I agreed to merge this late as it will
  avoid the conflicts between trees. The second patch from Matt adding a dma_request_slave_channel_compat
  API was supposed to be picked up, but somehow never got picked up. Some patches dependent on this are
  already in -next :("

  * 'next' of git://git.infradead.org/users/vkoul/slave-dma:
    dmaengine: dw_dmac: move to generic DMA binding
    dmaengine: add dma_request_slave_channel_compat()
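  For context, dma_request_slave_channel_compat() lets a driver first try the DT/ACPI slave-channel
  lookup and then fall back to the legacy filter-function mechanism. A minimal usage sketch, assuming a
  hypothetical filter function and filter parameter (not from any specific driver):

    #include <linux/dmaengine.h>

    /* Hypothetical legacy filter used on platforms without DT/ACPI DMA info. */
    static bool my_dma_filter(struct dma_chan *chan, void *param)
    {
    	return chan->private == param;	/* match some board-specific token */
    }

    static struct dma_chan *my_request_tx_chan(struct device *dev, void *filter_param)
    {
    	dma_cap_mask_t mask;

    	dma_cap_zero(mask);
    	dma_cap_set(DMA_SLAVE, mask);

    	/* Tries the slave lookup for channel "tx" first; if the device has no
    	 * slave binding, falls back to the filter-based channel request, so one
    	 * call path works on both DT and non-DT platforms. */
    	return dma_request_slave_channel_compat(mask, my_dma_filter,
    						filter_param, dev, "tx");
    }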
| * dmaengine: dw_dmac: move to generic DMA binding (Arnd Bergmann, 2013-02-28; 2 files, -77/+75)

  The original device tree binding for this driver, from Viresh Kumar, unfortunately conflicted with the
  generic DMA binding and did not allow the slave device configuration to be completely separated from
  the controller. This is an attempt to replace it with an implementation of the generic binding, but it
  is currently completely untested, because I do not have any hardware with this particular controller.

  The patch applies on top of the slave-dma tree, which contains both the base support for the generic
  DMA binding and the earlier attempt from Viresh. Both of these are currently not merged upstream,
  however.

  This version incorporates feedback from Viresh Kumar, Andy Shevchenko and Russell King.

  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Cc: Vinod Koul <vinod.koul@linux.intel.com>
  Cc: devicetree-discuss@lists.ozlabs.org
  Cc: linux-arm-kernel@lists.infradead.org
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
* | dmaengine: convert to idr_alloc() (Tejun Heo, 2013-02-27; 1 file, -10/+6)

  Convert to the much saner new idr interface.

  Signed-off-by: Tejun Heo <tj@kernel.org>
  Cc: Dan Williams <djbw@fb.com>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
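  With the new interface a single idr_alloc() call replaces the old idr_pre_get()/idr_get_new() dance. A
  sketch of the pattern (the idr and lock names below are illustrative, not the actual dmaengine ones):

    #include <linux/idr.h>
    #include <linux/mutex.h>

    static DEFINE_IDR(my_idr);
    static DEFINE_MUTEX(my_idr_lock);

    static int my_get_id(void *ptr)
    {
    	int id;

    	mutex_lock(&my_idr_lock);
    	/* Allocate the lowest free ID >= 0; a negative return value is an errno. */
    	id = idr_alloc(&my_idr, ptr, 0, 0, GFP_KERNEL);
    	mutex_unlock(&my_idr_lock);

    	return id;	/* either the new ID or e.g. -ENOMEM/-ENOSPC */
    }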
* | Merge branch 'next' of git://git.infradead.org/users/vkoul/slave-dma (Linus Torvalds, 2013-02-26; 35 files, -632/+1701)

  Pull slave-dmaengine updates from Vinod Koul:
  "This is fairly big pull by my standards as I had missed last merge window. So we have the support for
  device tree for slave-dmaengine, large updates to dw_dmac driver from Andy for reusing on different
  architectures. Along with this we have fixes on bunch of the drivers"

  Fix up trivial conflicts, usually due to #include line movement next to each other.

  * 'next' of git://git.infradead.org/users/vkoul/slave-dma: (111 commits)
    Revert "ARM: SPEAr13xx: Pass DW DMAC platform data from DT"
    ARM: dts: pl330: Add #dma-cells for generic dma binding support
    DMA: PL330: Register the DMA controller with the generic DMA helpers
    DMA: PL330: Add xlate function
    DMA: PL330: Add new pl330 filter for DT case.
    dma: tegra20-apb-dma: remove unnecessary assignment
    edma: do not waste memory for dma_mask
    dma: coh901318: set residue only if dma is in progress
    dma: coh901318: avoid unbalanced locking
    dmaengine.h: remove redundant else keyword
    dma: of-dma: protect list write operation by spin_lock
    dmaengine: ste_dma40: do not remove descriptors for cyclic transfers
    dma: of-dma.c: fix memory leakage
    dw_dmac: apply default dma_mask if needed
    dmaengine: ioat - fix spare sparse complain
    dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c
    ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
    dw_dmac: add support for Lynxpoint DMA controllers
    dw_dmac: return proper residue value
    dw_dmac: fill individual length of descriptor
    ...
| * DMA: PL330: Register the DMA controller with the generic DMA helpers (Padmavathi Venna, 2013-02-14; 1 file, -0/+10)

  This patch registers the pl330 dma controller driver with the generic device tree dma helper functions.

  Signed-off-by: Padmavathi Venna <padma.v@samsung.com>
  Acked-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * DMA: PL330: Add xlate function (Padmavathi Venna, 2013-02-14; 1 file, -0/+25)

  Add xlate to translate the device-tree binding information into the appropriate format. The filter
  function requires the dma controller device and dma channel number as filter_params.

  Signed-off-by: Padmavathi Venna <padma.v@samsung.com>
  Acked-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * DMA: PL330: Add new pl330 filter for DT case. (Padmavathi Venna, 2013-02-14; 1 file, -14/+15)

  This patch adds a new pl330_dt_filter for the DT case, to filter the required channel based on the new
  filter params, and modifies the old filter to be used only for the non-DT case, as suggested by Arnd
  Bergmann.

  Signed-off-by: Padmavathi Venna <padma.v@samsung.com>
  Acked-by: Arnd Bergmann <arnd@arndb.de>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dma: tegra20-apb-dma: remove unnecessary assignment (Andy Shevchenko, 2013-02-14; 1 file, -1/+0)

  There is no need to assign 0 to residue, because dma_cookie_status() does this for us.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Acked-by: Laxman Dewangan <ldewangan@nvidia.com>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
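  For context, dma_cookie_status() already fills the dma_tx_state (including a zero residue), so a
  tx_status callback only needs to override the residue for a transfer still in flight. A hedged sketch
  of the common pattern (function name and the residue source are illustrative):

    #include <linux/dmaengine.h>
    #include "dmaengine.h"	/* in-kernel helper header providing dma_cookie_status() */

    static enum dma_status my_tx_status(struct dma_chan *chan, dma_cookie_t cookie,
    				    struct dma_tx_state *txstate)
    {
    	enum dma_status ret;

    	/* Sets last/used cookies and residue = 0 in *txstate. */
    	ret = dma_cookie_status(chan, cookie, txstate);
    	if (ret == DMA_SUCCESS)
    		return ret;	/* completed: residue of 0 is already correct */

    	/* Only an in-progress transfer needs a real residue (driver-specific):
    	 * dma_set_residue(txstate, bytes_left); */
    	return ret;
    }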
| * edma: do not waste memory for dma_mask (Andy Shevchenko, 2013-02-14; 1 file, -2/+4)

  According to the comment in platform_device_register_full(), the memory allocated for dma_mask is never
  going to be freed. That's why it is better to assign dma_mask afterwards.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
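  The shape of the fix, sketched under the assumption of a simple auxiliary platform device (device and
  function names are illustrative): register without a dma_mask in the platform_device_info, then point
  dev.dma_mask at the coherent_dma_mask field that already exists in struct device.

    #include <linux/platform_device.h>
    #include <linux/dma-mapping.h>
    #include <linux/err.h>

    static const struct platform_device_info my_dma_dev_info = {
    	.name = "my-dma-engine",	/* illustrative device name */
    	.id = 0,
    	/* .dma_mask deliberately left unset: setting it here makes the core
    	 * kmalloc a u64 that is never freed on device removal. */
    };

    static struct platform_device *my_register_dma_dev(void)
    {
    	struct platform_device *pdev = platform_device_register_full(&my_dma_dev_info);

    	if (IS_ERR(pdev))
    		return pdev;

    	/* Reuse storage that already lives in struct device. */
    	pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
    	pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
    	return pdev;
    }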
| * dma: coh901318: set residue only if dma is in progress (Andy Shevchenko, 2013-02-14; 1 file, -1/+3)

  When status is DMA_SUCCESS the residue should be zero. Otherwise it's a bug.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Cc: Linus Walleij <linus.walleij@linaro.org>
  Cc: linux-arm-kernel@lists.infradead.org
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dma: coh901318: avoid unbalanced locking (Andy Shevchenko, 2013-02-14; 1 file, -1/+1)

  In case len is 0 we must return without trying to unlock a lock that was never taken.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Cc: Linus Walleij <linus.walleij@linaro.org>
  Cc: linux-arm-kernel@lists.infradead.org
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
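  The underlying pattern, as a hedged generic sketch (not the actual coh901318 code): bail out on a zero
  length before the lock is taken, so no error path ever unlocks a lock it does not hold.

    #include <linux/spinlock.h>
    #include <linux/errno.h>

    struct my_chan {
    	spinlock_t lock;
    	/* ... channel state ... */
    };

    static int my_prep_transfer(struct my_chan *c, size_t len)
    {
    	unsigned long flags;

    	if (len == 0)
    		return -EINVAL;	/* return before locking: nothing to unlock */

    	spin_lock_irqsave(&c->lock, flags);
    	/* ... build the descriptor chain for 'len' bytes ... */
    	spin_unlock_irqrestore(&c->lock, flags);
    	return 0;
    }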
| * dma: of-dma: protect list write operation by spin_lock (Andy Shevchenko, 2013-02-14; 1 file, -0/+2)

  It's possible to end up with an inconsistent list due to an unprotected operation on it. The patch adds
  proper locking around the list operation.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Rob Herring <rob.herring@calxeda.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
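  A generic sketch of the fix's shape (the lock and list names are placeholders, not necessarily those in
  of-dma.c): writers to a shared list take the same spinlock that readers of the list use.

    #include <linux/list.h>
    #include <linux/spinlock.h>

    static LIST_HEAD(controller_list);
    static DEFINE_SPINLOCK(controller_list_lock);

    struct my_controller {
    	struct list_head node;
    	/* ... controller data ... */
    };

    static void my_controller_register(struct my_controller *ctl)
    {
    	/* An unprotected list_add_tail() can corrupt the list if another CPU
    	 * registers or looks up a controller concurrently. */
    	spin_lock(&controller_list_lock);
    	list_add_tail(&ctl->node, &controller_list);
    	spin_unlock(&controller_list_lock);
    }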
| * dmaengine: ste_dma40: do not remove descriptors for cyclic transfers (Fabio Baltieri, 2013-02-14; 1 file, -3/+3)

  Fix dma_tc_handle() to call d40_desc_remove() and d40_desc_done() only for non-cyclic transfers, as
  this was breaking ux500_pcm since it was introduced in:

    d49278e dmaengine: dma40: Add support to split up large elements

  Reported-by: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dma: of-dma.c: fix memory leakage (Cong Ding, 2013-02-14; 1 file, -0/+1)

  The memory allocated to ofdma might leak when an error occurs.

  Signed-off-by: Cong Ding <dinggnu@gmail.com>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dw_dmac: apply default dma_mask if needed (Andy Shevchenko, 2013-02-14; 1 file, -0/+6)

  In some cases we get the device without a dma_mask configured. We have to apply a default value to
  avoid crashes during memory mapping.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
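  A hedged sketch of the usual probe-time fallback (the surrounding probe function is illustrative): if
  the platform did not provide a dma_mask, point it at coherent_dma_mask and use a 32-bit default.

    #include <linux/platform_device.h>
    #include <linux/dma-mapping.h>

    static int my_dma_probe(struct platform_device *pdev)
    {
    	/* Without a dma_mask, dma_map_single() and friends fail or crash. */
    	if (!pdev->dev.dma_mask) {
    		pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
    		pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
    	}

    	/* ... rest of probe: ioremap, request IRQ, register the DMA device ... */
    	return 0;
    }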
| * dmaengine: ioat - fix spare sparse complain (Fengguang Wu, 2013-02-13; 1 file, -1/+1)

  >> drivers/dma/ioat/dma_v3.c:371:6: sparse: symbol 'ioat3_timer_event' was not declared.

  Reported-by: Fengguang Wu <fengguang.wu@intel.com>
  Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
  Acked-by: Dave Jiang <dave.jiang@intel.com>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c (Vinod Koul, 2013-02-13; 3 files, -0/+270)

  as requested by Rob

  Suggested-by: Rob Herring <rob.herring@calxeda.com>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING (Dave Jiang, 2013-02-12; 3 files, -97/+128)

  There is a race that can hit during __cleanup() when the ioat->head pointer is incremented during
  descriptor submission. The __cleanup() can clear the PENDING flag when it does not see any active
  descriptors. This causes newly submitted descriptors to be ignored because the COMPLETION_PENDING flag
  is cleared. This was introduced when code was adapted from ioatdma v1 to ioatdma v2. For v2 and v3, the
  IOAT_COMPLETION_PENDING flag will be abandoned and a new flag, IOAT_CHAN_ACTIVE, will be utilized. This
  flag will also be protected under the prep_lock when being modified in order to avoid the race.

  Signed-off-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Dan Williams <djbw@fb.com>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dw_dmac: add support for Lynxpoint DMA controllers (Mika Westerberg, 2013-02-12; 1 file, -0/+6)

  The Intel Lynxpoint PCH Low Power Subsystem has a DMA controller to support general purpose serial
  buses like SPI, I2C, and HSUART. This controller is enumerated from the ACPI namespace with ACPI ID
  INTL9C60.

  Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dw_dmac: return proper residue value (Andy Shevchenko, 2013-01-28; 2 files, -2/+46)

  Currently the driver returns the full length of the active descriptor, which is wrong. We have to go
  through the active descriptor and subtract the length of each already-sent child in the chain from the
  total length, along with the actual data counted in the DMA channel registers.

  The cyclic case is not handled by this patch because the len field in the descriptor structure is left
  untouched by the original code.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
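  A simplified, hedged sketch of the residue idea (the struct layout and the helpers child_is_sent() and
  hw_bytes_sent() are hypothetical stand-ins for the driver's descriptor bookkeeping and register reads):
  start from the total length and subtract what has already gone out.

    #include <linux/list.h>
    #include <linux/types.h>

    struct my_desc {
    	struct list_head tx_list;	/* children of a multi-block transfer */
    	struct list_head node;
    	u32 len;			/* length of this block */
    	u32 total_len;			/* length of the whole chain */
    };

    /* Hypothetical helpers: whether a child block has already completed, and how
     * many bytes the hardware has pushed for the block currently in flight. */
    bool child_is_sent(struct my_desc *child);
    u32 hw_bytes_sent(void);

    static u32 my_get_residue(struct my_desc *active)
    {
    	struct my_desc *child;
    	u32 residue = active->total_len;

    	list_for_each_entry(child, &active->tx_list, node) {
    		if (!child_is_sent(child))
    			break;
    		residue -= child->len;	/* fully transferred child */
    	}
    	residue -= hw_bytes_sent();	/* partial progress of the current block */

    	return residue;
    }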
| * dw_dmac: fill individual length of descriptor (Andy Shevchenko, 2013-01-28; 1 file, -0/+3)

  It will be useful to have the length of the transfer in the descriptor. The cyclic transfer functions
  remain untouched.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dw_dmac: introduce total_len field in struct dw_desc (Andy Shevchenko, 2013-01-28; 2 files, -6/+7)

  With this new field we distinguish the total length of the chain from the individual length of each
  descriptor in the chain.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dw_dmac: remove unnecessary tx_list field in dw_dma_chan (Andy Shevchenko, 2013-01-28; 2 files, -6/+15)

  The soft LLP mode works for the active descriptor only, so we do not need to keep a copy of its
  pointer.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dw_dmac: print out DW_PARAMS and DWC_PARAMS when debug (Andy Shevchenko, 2013-01-28; 1 file, -0/+5)

  It's useful to have the values of the DW_PARAMS and DWC_PARAMS registers printed when debug mode is
  enabled.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * DMAEngine: sirf: lock the shared registers access in sirfsoc_dma_terminate_all (Barry Song, 2013-01-28; 1 file, -1/+3)

  Just like Russell pointed out for "DMAEngine: sirf: add DMA pause/resume support" at
  http://www.spinics.net/lists/arm-kernel/msg212496.html, here I find sirfsoc_dma_terminate_all() has the
  same problem, so move the locking to the front of the register accesses.

  Signed-off-by: Barry Song <Baohua.Song@csr.com>
  Cc: Russell King <linux@arm.linux.org.uk>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * DMAEngine: sirf: add DMA pause/resume support (Barry Song, 2013-01-28; 1 file, -0/+46)

  pause/resume are important for users like ALSA sound drivers; this patch makes the sirf prima2/marco
  support the DMA commands DMA_PAUSE and DMA_RESUME.

  Signed-off-by: Barry Song <Baohua.Song@csr.com>
  Cc: Russell King <linux@arm.linux.org.uk>
  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * Merge tag 'ux500-dma40' of //git.linaro.org/people/fabiobaltieri/linux.git (Vinod Koul, 2013-01-21; 3 files, -143/+505)

  Pull ste_dma40 fixes from Fabio

  Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: set_dma40: balance clock in probe fail code (Fabio Baltieri, 2013-01-14; 1 file, -1/+1)

  Clock code was changed to use clk_prepare_enable in:

    b707c65 dma/ste_dma40: Fixup clock usage during probe

  but clk_disable on the probe fail path was not updated. This patch fixes that by using
  clk_disable_unprepare in place of clk_disable.

  Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
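  The general rule the fix restores, sketched below with an illustrative probe helper: every
  clk_prepare_enable() on the success path must be paired with clk_disable_unprepare() on every error and
  teardown path.

    #include <linux/clk.h>
    #include <linux/device.h>
    #include <linux/errno.h>

    static int my_probe_clocks(struct device *dev, struct clk *clk)
    {
    	int ret;

    	ret = clk_prepare_enable(clk);	/* prepare + enable in one call */
    	if (ret)
    		return ret;

    	ret = -ENODEV;			/* stand-in for the rest of probe failing */
    	if (ret)
    		goto err_clk;

    	return 0;

    err_clk:
    	/* clk_disable() alone would leave the clock prepared forever. */
    	clk_disable_unprepare(clk);
    	return ret;
    }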
| | * dmaengine: set_dma40: ignore spurious interrupts (Fabio Baltieri, 2013-01-14; 1 file, -3/+12)

  Some DMA channels may be used by other cores in the SoC. This patch modifies the dma interrupt handler
  to ignore interrupts from unknown channels.

  Cc: Rabin Vincent <rabin.vincent@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: add software lli support (Fabio Baltieri, 2013-01-14; 1 file, -1/+19)

  This patch adds support to manage LLI by SW for selected phy channels.

  There is a HW issue in certain controllers due to which, on certain occasions, HW LLI cannot be used on
  some physical channels. To avoid the HW issue on a specific phy channel, the phy channel number can be
  added to the list of soft_lli_channels, and thereafter all the transfers on that channel will use
  software LLI, for peripheral to memory transfers.

  Soft LLI introduces relink overhead, which could impact performance for certain use cases.

  This is based on a previous patch of Narayanan Gopalakrishnan.

  Cc: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: minor code readability fixes (Fabio Baltieri, 2013-01-14; 1 file, -9/+8)

  Use variables internal to the loops to improve code readability; no functional changes.

  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: minor cosmetic fixes (Fabio Baltieri, 2013-01-14; 2 files, -22/+13)

  This patch contains various non-functional cosmetic fixes.

  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: add a done queue for completed descriptors (Fabio Baltieri, 2013-01-14; 1 file, -4/+33)

  This is to keep the active queue only for those transfers which are actually active in the hardware.
  Descriptors will be moved to the done queue after they are completed in the hardware (interrupt
  handler) but before all the cleanup work has been completed (tasklet).

  Mostly based on a previous patch by Rabin Vincent.

  Cc: Rabin Vincent <rabin.vincent@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: support more than 128 event lines (Tong Liu, 2013-01-14; 2 files, -81/+355)

  The U8540 DMA controller is different from the U9540 one, so we need to define new registers and use
  them to support handling more than 128 event lines.

  Signed-off-by: Tong Liu <tong.liu@stericsson.com>
  Reviewed-by: Per Forlin <per.forlin@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: physical channels number correction (Gerald Baeza, 2013-01-14; 1 file, -5/+10)

  DMAC_ICFG[0:2]=SCHNB only allows counting physical channels in multiples of 4, so it was ok for
  platforms having 8 channels but cannot be used for next versions (with 10 or 14 channels).

  This patch allows the number of physical channels for a DMA device to be provided via platform_data, or
  to still rely on SCHNB if platform_data announces 0 channels.

  Signed-off-by: Gerald Baeza <gerald.baeza@stericsson.com>
  Reviewed-by: Per Forlin <per.forlin@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: support fixed physical channel allocation (Gerald Baeza, 2013-01-14; 1 file, -2/+11)

  This patch makes the existing use_fixed_channel field (of the stedma40_chan_cfg structure) applicable
  to physical channels.

  Signed-off-by: Gerald Baeza <gerald.baeza@stericsson.com>
  Tested-by: Yannick Fertre <yannick.fertre@stericsson.com>
  Reviewed-by: Per Forlin <per.forlin@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: don't allow high priority dest event lines (Rabin Vincent, 2013-01-14; 1 file, -1/+14)

  Hardware bug: when a logical channel is triggered by a high priority destination event line, an extra
  packet transaction is generated in case of significant data write response latency on a previous
  logical channel A, if the source transfer of the current logical channel B is already completed and if
  no other channel with a higher priority than B is waiting for execution.

  Software workaround: do not set the high priority level for the destination event lines that trigger
  logical channels.

  Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
  Reviewed-by: Shreshtha Kumar Sahu <shreshthakumar.sahu@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: don't check for pm_runtime_suspended() (Narayanan G, 2013-01-14; 1 file, -2/+0)

  The check for runtime suspend is not needed during a regular suspend, as the framework takes care of
  this. This fixes the issue of the DMA driver not letting the system go to deepsleep on the first
  attempt.

  Signed-off-by: Narayanan G <narayanan.gopalakrishnan@stericsson.com>
  Reviewed-by: Rabin Vincent <rabin.vincent@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: limit burst size to 16 (Per Forlin, 2013-01-14; 1 file, -0/+8)

  The client is not aware of the maximum burst size in the dma driver. If the requested size exceeds 16,
  cap it at 16.

  Signed-off-by: Per Forlin <per.forlin@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
| | * dmaengine: ste_dma40: set dma max seg size (Per Forlin, 2013-01-14; 1 file, -0/+8)

  The maximum DMA segment size is (0xffff x data_width). If the max seg size is not set, it defaults to
  64k. This results in failure when transferring 64k in byte mode. Larger seg sizes may be supported by
  splitting the large transfer.

  Signed-off-by: Per Forlin <per.forlin@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
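  A hedged sketch of advertising such a limit to clients (the 0xffff value is the byte-mode limit quoted
  above; the dma_parms placement follows the usual kernel pattern rather than the exact ste_dma40 code):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    static struct device_dma_parameters my_dma_parms;

    static int my_advertise_seg_size(struct device *dev)
    {
    	/* dma_set_max_seg_size() stores the limit in dev->dma_parms, so that
    	 * field must point somewhere before the call. */
    	dev->dma_parms = &my_dma_parms;

    	/* Clients (e.g. dmaengine users building scatterlists) read this back
    	 * via dma_get_max_seg_size() and split their transfers accordingly. */
    	return dma_set_max_seg_size(dev, 0xffff);
    }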
| | * dmaengine: ste_dma40: use writel_relaxed for lcxa (Per Forlin, 2013-01-14; 1 file, -8/+8)

  lcpa and lcla are written often and the cache_sync() overhead in writel is costly, especially for wlan
  where every single network packet (in RX mode) corresponds to a separate DMA transfer.

  Signed-off-by: Per Forlin <per.forlin@stericsson.com>
  Reviewed-by: Narayanan Gopalakrishnan <narayanan.gopalakrishnan@stericsson.com>
  Reviewed-by: Rabin Vincent <rabin.vincent@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>
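  For illustration only (the register offset and function are placeholders): on ARM, writel() orders the
  store with a barrier and outer-cache sync, while writel_relaxed() is just the MMIO store, which is what
  makes it cheaper on hot paths where ordering against DMA-visible memory is handled elsewhere.

    #include <linux/io.h>

    #define MY_LCPA_REG	0x020	/* placeholder register offset */

    static void my_program_lcpa(void __iomem *base, u32 addr)
    {
    	/* Ordered write: includes the barrier (and outer-cache sync on some SoCs). */
    	writel(addr, base + MY_LCPA_REG);

    	/* Relaxed write: no barrier; use only when ordering against memory that
    	 * the DMA engine will read is already guaranteed or irrelevant here. */
    	writel_relaxed(addr, base + MY_LCPA_REG);
    }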
| | * dmaengine: ste_dma40: reset priority bit for logical channels (Narayanan, 2013-01-14; 1 file, -5/+6)

  This patch sets the SSCFG/SDCFG bit[7] PRI only for physical channel requests with high priority. For
  logical channels, this bit will be zero.

  Signed-off-by: Narayanan G <narayanan.gopalakrishnan@stericsson.com>
  Reviewed-by: Rabin Vincent <rabin.vincent@stericsson.com>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Vinod Koul <vinod.koul@intel.com>
  Signed-off-by: Fabio Baltieri <fabio.baltieri@linaro.org>