mirror of https://github.com/edk2-porting/linux-next.git synced 2024-12-19 18:53:52 +08:00
Commit Graph

1937 Commits

Author SHA1 Message Date
Laurent Pinchart
a2a7c176cc dma: mmp_pdma: Simplify access to channel drcmr value
As the physical channel and virtual channel point to each other,
pchan->phy->vchan is always equal to pchan. Simplify the code
accordingly.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-05-02 21:19:07 +05:30
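
A minimal standalone C sketch of the invariant the simplification relies on
(hypothetical struct and field names, not the mmp_pdma driver's actual code):
because the virtual and physical channels point at each other, the longer
pchan->phy->vchan chain can be collapsed.

/* Standalone model: a virtual and a physical channel referencing each other. */
#include <assert.h>
#include <stdio.h>

struct phy_chan;

struct virt_chan {
	struct phy_chan *phy;
	int drcmr;			/* hypothetical per-channel value */
};

struct phy_chan {
	struct virt_chan *vchan;
};

int main(void)
{
	struct virt_chan vchan = { .drcmr = 42 };
	struct phy_chan phy = { .vchan = &vchan };

	vchan.phy = &phy;

	/* The invariant the commit relies on: pchan->phy->vchan == pchan. */
	assert(vchan.phy->vchan == &vchan);

	/* Before: vchan.phy->vchan->drcmr; after: simply vchan.drcmr. */
	printf("drcmr = %d\n", vchan.drcmr);
	return 0;
}
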
Joel Fernandes
b0cce4ca3e dmaengine: edma: update DMA memcpy to use new param element
The edma param struct is now wrapped in an edma_pset struct, introduced in
Thomas Gleixner's edma tx status series. Update the memcpy function accordingly.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-30 10:36:57 +05:30
Joel Fernandes
04361d887f dmaengine: edma: Document variables used for residue accounting
The granular residue accounting code uses certain variables specifically
for residue accounting. Document these in the structure declaration.
Also move around some elements and group them together.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-30 10:36:41 +05:30
Thomas Gleixner
740b41f788 dmaengine: edma: Provide granular accounting
The first slot in the ParamRAM of EDMA holds the current active
subtransfer. Depending on the direction we read either the source or
the destination address from there. In the internal psets we have the
address of the buffer(s).

In the cyclic case we only use the internal pset[0] which holds the
start address of the circular buffer and calculate the remaining room
to the end of the buffer.

In the SG case we read the current address and compare it to the
internal psets address and length.

- If the current address is outside of this range, the pset has been
  processed already and we mark it done, update the residue_stat value
  and process the next set. That avoids having to walk all
  processed psets on every invocation of tx_status.

- If it is inside the range, we know that we are looking at the current
  active set and stop the walk.

- In case of intermediate transfers we update the stats in the
  interrupt callback function before starting the next batch of
  transfers. The tx_status callback and the interrupt callback are
  serialized via vchan.lock.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[joelf@ti.com: Hunk #2 in original patch manually applied]
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-30 10:36:03 +05:30
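
A standalone C model of the walk described above (hypothetical names and
simplified bookkeeping, not the in-kernel edma code): psets whose address
range lies entirely before the hardware's current position are retired and
subtracted from the residue, and the walk stops at the pset that contains the
current address.

#include <stdbool.h>
#include <stdio.h>

struct pset {
	unsigned int addr;	/* buffer address programmed for this pset */
	unsigned int len;	/* number of bytes covered by this pset */
	bool done;		/* retired on a previous status query */
};

static unsigned int update_residue(struct pset *psets, int nr,
				   unsigned int residue, unsigned int pos)
{
	int i;

	for (i = 0; i < nr; i++) {
		struct pset *p = &psets[i];

		if (p->done)	/* already accounted on an earlier call */
			continue;

		if (pos < p->addr || pos >= p->addr + p->len) {
			/* Hardware has moved past this pset: retire it so the
			 * next query does not have to look at it again. */
			p->done = true;
			residue -= p->len;
			continue;
		}

		/* Current address is inside this pset: it is the active one,
		 * so stop the walk. */
		break;
	}

	return residue;
}

int main(void)
{
	struct pset psets[] = {
		{ .addr = 0x1000, .len = 0x100 },
		{ .addr = 0x1100, .len = 0x100 },
		{ .addr = 0x1200, .len = 0x100 },
	};

	/* Hardware is currently somewhere inside the second pset. */
	printf("residue = 0x%x\n", update_residue(psets, 3, 0x300, 0x1140));
	return 0;
}
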
Thomas Gleixner
c2da2340e5 dmaengine: edma: Store transfer data in edma_desc and edma_pset
For granular accounting we need to store the direction and the
information for the individual psets:

- source or destination address, depending on direction
- length

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-30 10:34:07 +05:30
Thomas Gleixner
b5088ad963 dmaengine: edma: Create private pset struct
Preparatory patch to support finer grained accounting.

Move the edma_params array out of edma_desc so we can add further per-pset
data to it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[joelf@ti.com: Fixed up hunk #3 in original patch to apply]
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-30 10:33:42 +05:30
Thomas Gleixner
de13593971 dmaengine: edma: Check the current descriptor first in tx_status()
It's likely that the caller investigates the status of a currently
active descriptor. Make that simple check first and only rummage in the
vchan list if that fails.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-30 10:32:15 +05:30
Thomas Gleixner
b6205c3901 dmaengine: edma: Sanitize residue reporting
The residue reporting in edma_tx_status() is just broken. It blindly
walks the psets and recalculates the length of the transfer from the
hardware parameters. For cyclic transfers it adds the link pset, which
results in interestingly large residues. For non-cyclic it adds the
dummy pset, which is stupid as well.

Aside from that, it's silly to walk through the pset params when the per
descriptor residue is known at the point of creating it.

Store the information in edma_desc and use it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-30 10:31:56 +05:30
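
A short standalone sketch of the idea (hypothetical names, not the edma code):
compute the total length once while building the descriptor and store it, so a
later status query can report it directly instead of re-deriving it from the
hardware parameters.

#include <stddef.h>
#include <stdio.h>

struct sg_entry { unsigned int len; };

struct desc {
	unsigned int residue;	/* bytes not yet transferred */
};

static void desc_init(struct desc *d, const struct sg_entry *sgl, size_t n)
{
	size_t i;

	d->residue = 0;
	for (i = 0; i < n; i++)
		d->residue += sgl[i].len;	/* known at creation time */
}

int main(void)
{
	struct sg_entry sgl[] = { { 512 }, { 1024 }, { 256 } };
	struct desc d;

	desc_init(&d, sgl, 3);
	printf("initial residue = %u bytes\n", d.residue);
	return 0;
}
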
Peter Ujfalusi
9aac90960b dmaengine: edma: Add channel number to debug prints
It helps to identify issues if we have some information regarding the
channel with which the event is associated.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-29 11:31:40 +05:30
Joel Fernandes
406efb1a74 dmaengine: edma: No need save/restore interrupt flags during spin_lock in IRQ
The vchan lock in edma_callback is acquired in hard interrupt context. As
interrupts are already disabled, there is no point in saving/restoring the
interrupt mask bit or CPSR flags.

Get rid of flags local variable and use spin_lock instead of spin_lock_irqsave.

Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-23 11:21:17 +05:30
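
An illustrative kernel-style fragment of the pattern (hypothetical lock and
handler names; it only builds inside a kernel tree and is not the edma code):
in a hard-IRQ handler interrupts are already disabled, so the plain
spin_lock()/spin_unlock() pair is sufficient.

#include <linux/interrupt.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);	/* hypothetical lock */

static irqreturn_t demo_irq_handler(int irq, void *data)
{
	/* Before: unsigned long flags; spin_lock_irqsave(&demo_lock, flags); */
	spin_lock(&demo_lock);		/* IRQs are already off in hardirq context */
	/* ... update per-channel state ... */
	spin_unlock(&demo_lock);

	return IRQ_HANDLED;
}
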
Joel Fernandes
8cc3e30bea dmaengine: edma: Add DMA memcpy support
Add DMA memcpy support to the EDMA driver. Successful tests were performed
using the dmatest kernel module. Copy alignment is set to
DMA_SLAVE_BUSWIDTH_4_BYTES, and users must ensure the length is aligned so
that the copy is performed fully.

Signed-off-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-22 21:38:56 +05:30
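
A tiny standalone sketch of the caller-side constraint (hypothetical helper
name): with a 4-byte copy alignment, the length must be a multiple of 4 for
the whole buffer to be copied.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define COPY_ALIGN_BYTES 4	/* matches DMA_SLAVE_BUSWIDTH_4_BYTES */

static bool memcpy_len_ok(size_t len)
{
	return (len % COPY_ALIGN_BYTES) == 0;
}

int main(void)
{
	printf("4096 bytes ok: %d\n", memcpy_len_ok(4096));	/* 1 */
	printf("4097 bytes ok: %d\n", memcpy_len_ok(4097));	/* 0 */
	return 0;
}
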
Peter Ujfalusi
e6fad592b0 dmaengine: edma: Print the direction value as well when it is not supported
In case of an unsupported direction it is better to print the direction as
well. It is unlikely, but in such an event it helps with debugging.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-22 21:36:10 +05:30
Peter Ujfalusi
c594c8912b dmaengine: edma: Prefix debug prints where the text were identical in prep callbacks
The prep_slave_sg and prep_dma_cyclic callbacks have mostly the same failure
cases, with the same text printed when we hit them. It helps when debugging
if we know exactly which callback generated the errors.
At the same time, change the debug level for descriptor allocation failures
from dbg to err, since all other error cases use dev_err and this failure is
just as fatal as the others.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-22 21:36:03 +05:30
Peter Ujfalusi
2c88ee6b6b dmaengine: edma: Implement device_slave_caps callback
With the callback implemented, the edma driver can provide information to
client drivers regarding supported address widths, directions, residue
granularity, etc.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-22 21:35:54 +05:30
Peter Ujfalusi
83bb3126cc dmaengine: edma: Reduce debug print verbosity for non verbose debugging
Do not print the paRAM information when verbose debugging is not requested,
and also reduce the number of lines printed in edma_prep_dma_cyclic().

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-22 21:35:46 +05:30
Peter Ujfalusi
232b223d82 dmaengine: edma: Set DMA_CYCLIC capability flag
Indicate that the edma dmaengine driver has support for cyclic mode.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-22 21:35:39 +05:30
Peter Ujfalusi
72c7b67aff dmaengine: edma: Add support for DMA_PAUSE/RESUME operation
Pause/Resume can be used by the audio stack when the stream is
paused/resumed. The edma platform code has support for this, and the legacy
audio stack used it.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-22 21:35:31 +05:30
Peter Ujfalusi
b2b617de04 dmaengine: edma: Correct the handling of src/dst_maxburst == 0
When a client asks for maxburst = 0 it is basically the same case as asking
for maxburst = 1, since in both cases ASYNC needs to be used and the eDMA is
expected to write/read one word per DMA request.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Reviewed-and-Tested-by: Joel Fernandes <joelf@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-22 21:35:22 +05:30
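
A minimal standalone sketch of the normalisation described above (hypothetical
helper name): a requested maxburst of 0 is treated exactly like 1.

#include <stdio.h>

static unsigned int normalize_maxburst(unsigned int maxburst)
{
	/* 0 and 1 both mean one word per DMA request. */
	return maxburst ? maxburst : 1;
}

int main(void)
{
	printf("%u %u %u\n", normalize_maxburst(0), normalize_maxburst(1),
	       normalize_maxburst(16));	/* prints: 1 1 16 */
	return 0;
}
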
Yuan Yao
8edc51c197 dma: fix eDMA driver as a subsys_initcall
Because some drivers depend on DMA, change the initcall order to
subsys_initcall.

Signed-off-by: Yuan Yao <yao.yuan@freescale.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-16 12:03:47 +05:30
Dan Carpenter
f3817e777c dmaengine: sirf: off by one in of_dma_sirfsoc_xlate()
The ">" here should be ">=" or we are one step beyond the end of the
sdma->channels[] array.

Fixes: 2e041c9462 ('dmaengine: sirf: enable generic dt binding for dma channels')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-16 11:59:24 +05:30
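
A standalone illustration of the class of bug being fixed (hypothetical array,
not the sirf driver code): an index equal to the array size must be rejected,
so the bounds check needs ">=", not ">".

#include <stdbool.h>
#include <stdio.h>

#define NR_CHANNELS 16

static int channels[NR_CHANNELS];

static bool request_valid(unsigned int request)
{
	/* Buggy check: "request > NR_CHANNELS" lets request == NR_CHANNELS
	 * through, which indexes one element past the end of channels[]. */
	if (request >= NR_CHANNELS)
		return false;

	channels[request] = 1;	/* safe only because of the check above */
	return true;
}

int main(void)
{
	printf("request 15 valid: %d\n", request_valid(15));	/* 1 */
	printf("request 16 valid: %d\n", request_valid(16));	/* 0 */
	return 0;
}
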
Jean Delvare
2dda47d1a4 platform: Fix timberdale dependencies
VIDEO_TIMBERDALE selects TIMB_DMA which itself depends on
MFD_TIMBERDALE, so VIDEO_TIMBERDALE should either select or depend on
MFD_TIMBERDALE as well. I chose to make it depend on it because I
think it makes more sense and it is consistent with what other options
are doing.

Adding a "|| HAS_IOMEM" to the TIMB_DMA dependencies silenced the
kconfig warning about unmet direct dependencies but it was wrong:
without MFD_TIMBERDALE, TIMB_DMA is useless as the driver has no
device to bind to.

Signed-off-by: Jean Delvare <jdelvare@suse.de>
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-16 11:57:27 +05:30
Sekhar Nori
5fc68a6cad dma: edma: fix incorrect SG list handling
The code to handle any length SG lists calls edma_resume()
even before edma_start() is called. This is incorrect
because edma_resume() enables edma events on the channel
after which CPU (in edma_start) cannot clear posted
events by writing to ECR (per the EDMA user's guide).

Because of this, EDMA transfers fail to start if, for some
reason, there is a pending EDMA event registered
even before EDMA transfers are started. This can happen if
an EDMA event is a byproduct of device initialization.

Fix this by calling edma_resume() only if it is not the
first batch of MAX_NR_SG elements.

Without this patch, MMC/SD fails to function on DA850 EVM
with DMA. The behaviour is triggered by specific IP, which can
explain why the issue was not reported before
(for example with MMC/SD on AM335x).

Tested on DA850 EVM and AM335x EVM-SK using MMC/SD card.

Cc: stable@vger.kernel.org # v3.12.x+
Cc: Joel Fernandes <joelf@ti.com>
Acked-by: Joel Fernandes <joelf@ti.com>
Tested-by: Jon Ringle <jringle@gridpoint.com>
Tested-by: Alexander Holler <holler@ahsoftware.de>
Reported-by: Jon Ringle <jringle@gridpoint.com>
Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-14 09:29:55 +05:30
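
A standalone sketch of the resulting control flow (stubbed functions and a
hypothetical MAX_NR_SG value, not the edma driver code): the first batch goes
through the start path, which can still clear stale events; only later batches
take the resume path.

#include <stdio.h>

#define MAX_NR_SG 20	/* hypothetical batch size */

static void start_stub(void)  { puts("start: clear pending events, enable transfer"); }
static void resume_stub(void) { puts("resume: re-enable events only"); }

static void submit(int total_sg)
{
	int processed;

	for (processed = 0; processed < total_sg; processed += MAX_NR_SG) {
		if (processed == 0)
			start_stub();	/* first batch of MAX_NR_SG elements */
		else
			resume_stub();	/* subsequent batches only */
	}
}

int main(void)
{
	submit(45);	/* three batches: start, resume, resume */
	return 0;
}
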
Linus Torvalds
6c61403a44 Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma
Pull slave-dmaengine updates from Vinod Koul:
 - New driver for Qcom bam dma
 - New driver for RCAR peri-peri
 - New driver for FSL eDMA
 - Various odd fixes and updates thru the subsystem

* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
  dmaengine: add Qualcomm BAM dma driver
  shdma: add R-Car Audio DMAC peri peri driver
  dmaengine: sirf: enable generic dt binding for dma channels
  dma: omap-dma: Implement device_slave_caps callback
  dmaengine: qcom_bam_dma: Add device tree binding
  dma: dw: Add suspend and resume handling for PCI mode DW_DMAC.
  dma: dw: allocate memory in two stages in probe
  Add new line to test result strings produced in verbose mode
  dmaengine: pch_dma: use tasklet_kill in teardown
  dmaengine: at_hdmac: use tasklet_kill in teardown
  dma: cppi41: start tear down only if channel is busy
  usb: musb: musb_cppi41: Dont reprogram DMA if tear down is initiated
  dmaengine: s3c24xx-dma: make phy->irq signed for error handling
  dma: imx-dma: Add missing module owner field
  dma: imx-dma: Replace printk with dev_*
  dma: fsl-edma: fix static checker warning of NULL dereference
  dma: Remove comment about embedding dma_slave_config into custom structs
  dma: mmp_tdma: move to generic device tree binding
  dma: mmp_pdma: add IRQF_SHARED when request irq
  dma: edma: Fix memory leak in edma_prep_dma_cyclic()
  ...
2014-04-10 08:55:08 -07:00
Vinod Koul
8673bcef8c Merge branch 'topic/bam' into for-linus
Conflicts:
	drivers/dma/Makefile

Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-05 20:45:55 +05:30
Andy Gross
e7c0fe2a5c dmaengine: add Qualcomm BAM dma driver
Add the DMA engine driver for the QCOM Bus Access Manager (BAM) DMA controller
found in the MSM 8x74 platforms.

Each BAM DMA device is associated with a specific on-chip peripheral.  Each
channel provides a uni-directional data transfer engine that is capable of
transferring data between the peripheral and system memory (System mode), or
between two peripherals (BAM2BAM).

The initial release of this driver only supports slave transfers between
peripherals and system memory.

Signed-off-by: Andy Gross <agross@codeaurora.org>
Tested-by: Stanimir Varbanov <svarbanov@mm-sol.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-04-05 20:44:26 +05:30
Russell King
bce5669be3 Merge branch 'devel-stable' into for-next 2014-04-04 00:33:49 +01:00
Russell King
aa4c5b962a dmaengine: omap-dma: more consolidation of CCR register setup
We can move the handling of the DMA synchronisation control out of the
prepare functions; this can be pre-calculated when the DMA channel has
been allocated, so we don't need to duplicate this in both prepare
functions.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:32:53 +01:00
Russell King
6ddeb6d844 dmaengine: omap-dma: move IRQ handling to omap-dma
Move the interrupt handling for OMAP2+ into omap-dma, rather than using
the legacy support in the platform code.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:31:53 +01:00
Russell King
596c471b69 dmaengine: omap-dma: move register read/writes into omap-dma.c
Export the DMA register information from the SoC specific data, such
that we can access the registers directly in omap-dma.c, mapping the
register region ourselves as well.

Rather than calculating the DMA channel register in its entirety for
each access, we pre-calculate an offset base address for the allocated
DMA channel and then just use the appropriate register offset.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:31:49 +01:00
Russell King
b07fd625ac dmaengine: omap-dma: cleanup errata 3.3 handling
Provide a function to read the CSAC/CDAC register, working around the
OMAP 3.2/3.3 erratum (which requires two reads of the register if the
first read returned zero).

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:30:28 +01:00
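
A standalone model of the workaround (a fake register read, not the omap-dma
code): if the first read of the position register returns zero, the value is
read once more and the second result is used.

#include <stdio.h>

/* Fake register that returns 0 on the first read, as the erratum allows. */
static unsigned int fake_values[] = { 0, 0x8024 };
static int read_idx;

static unsigned int reg_read(void)
{
	return fake_values[read_idx++];
}

static unsigned int read_position(void)
{
	unsigned int val = reg_read();

	if (val == 0)		/* erratum: a first read may spuriously return 0 */
		val = reg_read();

	return val;
}

int main(void)
{
	printf("position = 0x%x\n", read_position());	/* 0x8024 */
	return 0;
}
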
Russell King
c5ed98b6ae dmaengine: omap-dma: provide register read/write functions
Provide a pair of channel register accessors, and a pair of global
accessors for non-channel specific registers.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:30:25 +01:00
Russell King
45da7b0451 dmaengine: omap-dma: use cached CCR value when enabling DMA
We don't need to read-modify-write the CCR register; we already know
what value it should contain at this point.  Use the cached CCR value
when setting the enable bit.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:30:21 +01:00
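
A tiny standalone sketch of the pattern (hypothetical register and bit names):
the enable bit is OR-ed into the value already cached in software, avoiding a
read-modify-write of the hardware register.

#include <stdint.h>
#include <stdio.h>

#define CCR_ENABLE (1u << 7)	/* hypothetical bit position */

static uint32_t hw_ccr;		/* stands in for the hardware register */

int main(void)
{
	uint32_t cached_ccr = 0x00000311;	/* value computed at prepare time */

	/* No need to read hw_ccr back first; the driver already knows the value. */
	hw_ccr = cached_ccr | CCR_ENABLE;
	printf("CCR = 0x%08x\n", hw_ccr);
	return 0;
}
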
Russell King
5987190270 dmaengine: omap-dma: move barrier to omap_dma_start_desc()
We don't need to issue a barrier for every segment of a DMA transfer;
doing this just once per descriptor will do.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:30:18 +01:00
Russell King
965aeb4df1 dmaengine: omap-dma: move clnk_ctrl setting to preparation functions
Move the clnk_ctrl setup to the preparation functions, saving its
value in the omap_desc.  This only needs to be set once per descriptor,
not for each segment, so set it in omap_dma_start_desc() rather than
omap_dma_start().

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:30:15 +01:00
Russell King
893e63e301 dmaengine: omap-dma: improve efficiency loading C.SA/C.EI/C.FI registers
The only thing which changes is which registers are written, so put this
in local variables instead.  This results in smaller code.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:28:47 +01:00
Russell King
470b23f730 dmaengine: omap-dma: consolidate clearing channel status register
Consolidate clearing of the channel status register, rather than open
coding the same functionality in two places.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:28:08 +01:00
Russell King
49ae0b2943 dmaengine: omap-dma: move CCR buffering disable errata out of the fast path
Since we record the CCR register in the dma transaction, we can move the
processing of the iframe buffering errata out of omap_dma_start().
Move it to the preparation functions.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:28:05 +01:00
Russell King
9043826d88 dmaengine: omap-dma: provide register definitions
Provide our own set of more complete register definitions; this allows
us to get rid of the meaningless 1 << n constants scattered throughout
this code.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:28:02 +01:00
Russell King
3ed4d18f39 dmaengine: omap-dma: consolidate setup of CCR
Consolidate the setup of the channel control register.  Prepare the
basic value in the preparation of the DMA descriptor, and write it into
the register upon descriptor execution.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:27:59 +01:00
Russell King
2f0d13bdf6 dmaengine: omap-dma: consolidate setup of CSDP
Consolidate the setup of the channel source destination parameters
register.  This way, we calculate the required CSDP value when we setup
a transfer descriptor, and only write it to the device registers once
when we start the descriptor.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:27:56 +01:00
Russell King
3997cab391 dmaengine: omap-dma: move reading of dma position to omap-dma.c
Read the current DMA position from the hardware directly rather than via
arch/arm/plat-omap/dma.c.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:27:53 +01:00
Russell King
fa3ad86ae0 dmaengine: omap-dma: control start/stop directly
Program the non-cyclic mode DMA start/stop directly, rather than via
arch/arm/plat-omap/dma.c.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:27:50 +01:00
Russell King
913a2d0c69 dmaengine: omap-dma: consolidate writes to DMA registers
There's no need to keep writing registers which don't change value in
omap_dma_start_sg().  Move this into omap_dma_start_desc() and merge
the register updates together.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:27:47 +01:00
Russell King
b9e97822da dmaengine: omap-dma: program hardware directly
Program the transfer parameters directly into the hardware, rather
than using the functions in arch/arm/plat-omap/dma.c.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:27:43 +01:00
Russell King
1b416c4b41 dmaengine: omap-dma: provide a hook to get the underlying DMA platform ops
Provide and use a hook to obtain the underlying DMA platform operations
so that omap-dma.c can access the hardware more directly without
involving the legacy DMA driver.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:27:38 +01:00
Russell King
104fce73fd dmaengine: omap-dma: use devm_kzalloc() to allocate omap_dmadev.
Use devm_kzalloc() to allocate the omap_dmadev structure so that we don't
need complex error cleanup paths.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-04 00:27:09 +01:00
Kuninori Morimoto
e43a34e3ec shdma: add R-Car Audio DMAC peri peri driver
Add support for the Audio DMAC peri peri driver
for the Renesas R-Car Gen2 SoC, using the 'shdma-base'
DMA driver framework.

Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
[fixed checkpatch error]
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-03-29 19:38:09 +05:30
Barry Song
2e041c9462 dmaengine: sirf: enable generic dt binding for dma channels
Move to supporting of_dma_request_slave_channel() and
dma_request_slave_channel(). We add an xlate() callback to let DMA clients
find the right dma_chan via the generic "dmas" properties in the DTS.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-03-29 19:20:13 +05:30
Peter Ujfalusi
80b0e0abfb dma: omap-dma: Implement device_slave_caps callback
With the callback implemented, omap-dma can provide information to client
drivers regarding supported address widths, directions, residue
granularity, etc.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-03-29 19:03:30 +05:30
Chew, Chiau Ee
4501fe61b2 dma: dw: Add suspend and resume handling for PCI mode DW_DMAC.
This is to disable/enable the DW_DMAC hardware during late suspend/early
resume. Since the DMA is providing service to other clients (e.g. SPI,
HSUART), we need to ensure the DMA suspends after the clients and resumes
before the clients are active.

Signed-off-by: Chew, Chiau Ee <chiau.ee.chew@intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2014-03-26 11:52:03 +05:30
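
An illustrative kernel-style fragment of the ordering mechanism (hypothetical
driver names; it only builds inside a kernel tree and is not the dw_dmac
code): registering the handlers as late-suspend/early-resume callbacks makes
the controller suspend after its client devices and resume before them.

#include <linux/device.h>
#include <linux/pm.h>

static int demo_dma_suspend_late(struct device *dev)
{
	/* ... disable the DMA controller hardware ... */
	return 0;
}

static int demo_dma_resume_early(struct device *dev)
{
	/* ... re-enable the DMA controller hardware ... */
	return 0;
}

/* Referenced from the driver's .driver.pm field. */
static const struct dev_pm_ops demo_dma_pm_ops = {
	.suspend_late	= demo_dma_suspend_late,
	.resume_early	= demo_dma_resume_early,
};
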