Commit Graph

14 Commits

Author SHA1 Message Date
Daniel Mack
1ac0e845c1 dma: mmp_pdma: fix maximum transfer length
There's no reason to limit the maximum transfer length to 0x1000.
Use the actual bit mask instead; the PDMA is able to transfer chunks of
up to SZ_8K - 1 bytes.
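
A minimal sketch of the idea (the define names below are illustrative,
not necessarily the driver's exact ones): derive the per-descriptor
limit from the length-field bit mask instead of hard-coding 0x1000.

  /* Sketch: the DCMD length field masks to 13 bits, i.e. up to
   * SZ_8K - 1 bytes per descriptor. */
  #define DCMD_LENGTH          0x01fff        /* length mask */
  #define PDMA_MAX_DESC_BYTES  DCMD_LENGTH    /* previously 0x1000 */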

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack
638a542cc4 dma: mmp_pdma: refactor unlocking path in lookup_phy()
As suggested by Ezequiel García, release the spinlock at the end of the
function only, and use a goto for the control flow.

Just a minor cleanup.
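
A sketch of the resulting control flow (simplified; variable and field
names are assumptions): every exit path jumps to a single label that
releases the spinlock.

  /* inside lookup_phy(), sketch only */
  spin_lock_irqsave(&pdev->phy_lock, flags);

  for (i = 0; i < pdev->dma_channels; i++) {
          phy = &pdev->phy[i];
          if (!phy->vchan) {
                  phy->vchan = pchan;
                  found = phy;
                  goto out_unlock;
          }
  }

  out_unlock:
          spin_unlock_irqrestore(&pdev->phy_lock, flags);
          return found;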

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Daniel Mack
8b298ded90 dma: mmp_pdma: factor out DRCMR register calculation
The exact same calculation is done twice, so let's factor it out to a
macro.
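
Something along these lines (the exact offsets and bit names are
assumptions here, not quoted from the datasheet):

  /* Sketch: compute the DRCMR register offset for request line n in
   * one place instead of duplicating the arithmetic. */
  #define DRCMR(n)  ((((n) < 64) ? 0x0100 : 0x1100) + (((n) & 0x3f) << 2))

  /* usage */
  writel(DRCMR_MAPVLD | phy->idx, phy->base + DRCMR(pchan->drcmr));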

Signed-off-by: Daniel Mack <zonque@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-14 13:55:15 +05:30
Jingoo Han
69c9f0ae1d dma: mmp_pdma: Staticize mmp_pdma_alloc_descriptor()
mmp_pdma_alloc_descriptor() is used only in this file.
Fix the following sparse warning:

drivers/dma/mmp_pdma.c:359:25: warning: symbol 'mmp_pdma_alloc_descriptor' was not declared. Should it be static?
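
The fix is simply adding the static storage class to the definition,
roughly (the signature here is a sketch):

  /* before */
  struct mmp_pdma_desc_sw *mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan)
  /* after: only used in this file, so give it internal linkage */
  static struct mmp_pdma_desc_sw *mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan)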

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-13 16:54:42 +05:30
Xiang Wang
26a2dfdeb9 dma: mmp_pdma: clear DRCMR when free a phy channel
In mmp pdma, phy channels are allocated/freed dynamically.
The mapping from DMA request to DMA channel number in DRCMR
should be cleared when a phy channel is freed. Otherwise
conflicts will happen when:
1. A uses channel 2 and frees it when finished, but A's
DRCMR still maps to channel 2.
2. Another user, B, then gets channel 2, so B's DRCMR maps
to channel 2 as well.
The datasheet states "Do not map two active requests to the
same channel since it produces unpredictable results", and we
observed exactly that during testing.
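
A sketch of the shape of the fix (helper and field names are
assumptions): when a phy channel is released, write 0 to the DRCMR
entry of the request line that was mapped to it.

  static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan)
  {
          u32 reg;

          if (!pchan->phy)
                  return;

          /* break the DMA-request -> channel mapping for this request line */
          reg = DRCMR(pchan->drcmr);
          writel(0, pchan->phy->base + reg);

          pchan->phy->vchan = NULL;
          pchan->phy = NULL;
  }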

Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:27 +05:30
Xiang Wang
027f28b7bb dma: mmp_pdma: add protect when alloc/free phy channels
In mmp pdma, phy channels are allocated/freed dynamically
and frequently, but no proper protection is in place.
Conflicts happen when multiple users request phy channels
at the same time. Use a spinlock to protect the alloc/free paths.
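
Roughly (a sketch, assuming a phy_lock spinlock added to the device
structure): the channel-array walk in the alloc path and the release
in the free path both run under the same lock.

  spin_lock_init(&pdev->phy_lock);        /* once, at probe time */

  /* alloc path (lookup_phy) and free path both take it: */
  spin_lock_irqsave(&pdev->phy_lock, flags);
  /* ... pick or release a physical channel ... */
  spin_unlock_irqrestore(&pdev->phy_lock, flags);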

Signed-off-by: Xiang Wang <wangx@marvell.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:26 +05:30
Andy Shevchenko
4aa9fe0a1f mmp_pdma: remove useless use of lock
According to the dma_cookie_status() description, locking is not required.
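
So the tx_status callback can simply be (a sketch; the function name
follows the driver's naming convention but is shown here from memory):

  static enum dma_status mmp_pdma_tx_status(struct dma_chan *dchan,
                                            dma_cookie_t cookie,
                                            struct dma_tx_state *txstate)
  {
          /* dma_cookie_status() only reads cookie state; no lock needed */
          return dma_cookie_status(dchan, cookie, txstate);
  }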

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-08-05 09:32:24 +05:30
Linus Torvalds
5115f3c19d Merge branch 'next' of git://git.infradead.org/users/vkoul/slave-dma
Pull slave-dmaengine updates from Vinod Koul:
 "This is a fairly big pull by my standards, as I had missed the last
  merge window.  So we have support for device tree for slave-dmaengine,
  and large updates to the dw_dmac driver from Andy for reuse on different
  architectures.  Along with this we have fixes in a bunch of the drivers"

Fix up trivial conflicts, usually due to #include line movement next to
each other.

* 'next' of git://git.infradead.org/users/vkoul/slave-dma: (111 commits)
  Revert "ARM: SPEAr13xx: Pass DW DMAC platform data from DT"
  ARM: dts: pl330: Add #dma-cells for generic dma binding support
  DMA: PL330: Register the DMA controller with the generic DMA helpers
  DMA: PL330: Add xlate function
  DMA: PL330: Add new pl330 filter for DT case.
  dma: tegra20-apb-dma: remove unnecessary assignment
  edma: do not waste memory for dma_mask
  dma: coh901318: set residue only if dma is in progress
  dma: coh901318: avoid unbalanced locking
  dmaengine.h: remove redundant else keyword
  dma: of-dma: protect list write operation by spin_lock
  dmaengine: ste_dma40: do not remove descriptors for cyclic transfers
  dma: of-dma.c: fix memory leakage
  dw_dmac: apply default dma_mask if needed
  dmaengine: ioat - fix spare sparse complain
  dmaengine: move drivers/of/dma.c -> drivers/dma/of-dma.c
  ioatdma: fix race between updating ioat->head and IOAT_COMPLETION_PENDING
  dw_dmac: add support for Lynxpoint DMA controllers
  dw_dmac: return proper residue value
  dw_dmac: fill individual length of descriptor
  ...
2013-02-26 09:24:48 -08:00
Thierry Reding
7331205a96 dma: Convert to devm_ioremap_resource()
Convert all uses of devm_request_and_ioremap() to the newly introduced
devm_ioremap_resource() which provides more consistent error handling.

devm_ioremap_resource() provides its own error messages so all explicit
error messages can be removed from the failure code paths.
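
The conversion pattern is roughly the following (a sketch; the variable
names and the old error message are placeholders):

  /* before */
  pdev->base = devm_request_and_ioremap(pdev->dev, iores);
  if (!pdev->base) {
          dev_err(pdev->dev, "ioremap failed\n");
          return -EADDRNOTAVAIL;
  }

  /* after: devm_ioremap_resource() prints its own error message */
  pdev->base = devm_ioremap_resource(pdev->dev, iores);
  if (IS_ERR(pdev->base))
          return PTR_ERR(pdev->base);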

Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-01-25 12:21:46 -08:00
Cong Ding
ed30933e6f dma: remove unnecessary null pointer check in mmp_pdma.c
The pointer cfg is dereferenced on line 594, so there is no reason to check
it for NULL again on line 620.
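
In other words, a pattern like this (simplified; not the exact code
from those lines), where the later NULL check can no longer fail and is
therefore dead code:

  /* around line 594: cfg is dereferenced unconditionally */
  maxburst = cfg->src_maxburst;

  /* around line 620: re-checking cfg is redundant, so the check is dropped */
  if (cfg)
          chan->dir = cfg->direction;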

Signed-off-by: Cong Ding <dinggnu@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-01-20 05:49:40 -08:00
Greg Kroah-Hartman
4bf27b8b33 Drivers: dma: remove __dev* attributes.
CONFIG_HOTPLUG is going away as an option.  As a result, the __dev*
markings need to be removed.

This change removes the use of __devinit, __devexit_p, __devinitconst,
and __devexit from these drivers.

Based on patches originally written by Bill Pemberton, but redone by me
in order to handle some of the coding style issues better, by hand.
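
The mechanical change in each driver looks roughly like this
(illustrative only):

  /* before */
  static int __devinit mmp_pdma_probe(struct platform_device *op);
  static int __devexit mmp_pdma_remove(struct platform_device *op);
  .remove = __devexit_p(mmp_pdma_remove),

  /* after */
  static int mmp_pdma_probe(struct platform_device *op);
  static int mmp_pdma_remove(struct platform_device *op);
  .remove = mmp_pdma_remove,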

Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Viresh Kumar <viresh.linux@gmail.com>
Cc: Dan Williams <djbw@fb.com>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Barry Song <baohua.song@csr.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Jassi Brar <jassisinghbrar@gmail.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Bill Pemberton <wfp5p@virginia.edu>
Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-01-03 15:57:15 -08:00
Bill Pemberton
463a1f8b3c dma: remove use of __devinit
CONFIG_HOTPLUG is going away as an option so __devinit is no longer
needed.

Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Cc: Li Yang <leoli@freescale.com>
Cc: Zhang Wei <zw@zh-kernel.org>
Cc: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-11-28 12:42:36 -08:00
Bill Pemberton
a7d6e3ec28 dma: remove use of __devexit_p
CONFIG_HOTPLUG is going away as an option so __devexit_p is no longer
needed.

Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Acked-by: Barry Song <baohua.song@csr.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-11-28 12:41:36 -08:00
Zhangfei Gao
c8acd6aa6b dmaengine: mmp-pdma support
1. virtual channel vs. physical channel
Virtual channels are managed by the dmaengine framework.
Physical channels handle the hardware resources, such as the irq.
Physical channels are allocated dynamically in descending priority order
and freed immediately when the irq is done.
The highest-priority physical channel available is always allocated.

Issue pending list -> alloc highest-priority physical channel available -> dma done -> free physical channel

2. list: running list & pending list (see the sketch after section 5)
submit: desc list -> pending list
issue_pending_list: if (IDLE) pending list -> running list; free pending list (RUN)
irq: free running list (IDLE)
     check pending list -> pending list -> running list; free pending list (RUN)

3. irq:
Each list generates one irq, calling the callback.
One list may contain several desc chains; in that case, make sure only
the last desc in the list generates an irq.

4. async
Submit adds a desc chain to the pending list and may be called multiple times.
If multiple desc chains are submitted, only the last desc generates the
irq -> callback.
If IDLE, issue_pending_list starts the pending list, turning the pending
list into the running list.
If RUN, the irq handler starts the pending list.

5. test
5.1 pxa3xx_nand on pxa910
5.2 insmod dmatest.ko (threads_per_chan=y)
By default drivers/dma/dmatest.c tests every channel and tests memcpy
with 1 thread per channel.
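
A rough sketch of the submit / issue_pending / irq flow described above
(helper names like start_running_list() are hypothetical, not the
driver's actual API):

  /* tx_submit(): append the new desc chain to the pending list */
  list_splice_tail_init(&desc->tx_list, &chan->chain_pending);

  /* issue_pending(): if IDLE, promote pending -> running and start DMA (RUN) */
  if (chan->idle) {
          list_splice_tail_init(&chan->chain_pending, &chan->chain_running);
          start_running_list(chan);               /* hypothetical helper */
  }

  /* irq/tasklet: running list done (IDLE); free it, then restart if more
   * work was queued in the meantime */
  free_running_list(chan);                        /* hypothetical helper */
  chan->idle = true;
  if (!list_empty(&chan->chain_pending)) {
          list_splice_tail_init(&chan->chain_pending, &chan->chain_running);
          start_running_list(chan);               /* RUN again */
  }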

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@linux.intel.com>
2012-09-14 08:14:07 +05:30