Commit Graph

177 Commits

Author SHA1 Message Date
Uwe Kleine-König
67572bfe2e dmaengine: dw: platform: Convert to platform remove callback returning void
The .remove() callback for a platform driver returns an int, which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However, the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve on this, there is an ongoing effort to make the remove callback
return void. In the first step of this effort all drivers are converted to
.remove_new(), which already returns void. Eventually, after all drivers
are converted, .remove_new() will be renamed to .remove().

Trivially convert this driver from always returning zero in the remove
callback to the void-returning variant.
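
As an illustration, a minimal sketch of what such a conversion looks like (the
driver and function names below are placeholders, not the actual dw platform
code):

  #include <linux/platform_device.h>

  static void dw_example_remove(struct platform_device *pdev)
  {
          /* release resources here; there is no error code to return anymore */
  }

  static struct platform_driver dw_example_driver = {
          /* was: .remove = dw_example_remove, with an int return type */
          .remove_new = dw_example_remove,
          .driver = {
                  .name = "dw-example",
          },
  };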

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://lore.kernel.org/r/20230919133207.1400430-12-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2023-09-28 13:10:47 +05:30
Rob Herring
897500c7ea dmaengine: Explicitly include correct DT includes
The DT of_device.h and of_platform.h date back to the separate
of_platform_bus_type before it was merged into the regular platform bus.
As part of that merge prepping Arm DT support 13 years ago, they
"temporarily" include each other. They also include platform_device.h
and of.h. As a result, there's a pretty much random mix of those include
files used throughout the tree. In order to detangle these headers and
replace the implicit includes with struct declarations, users need to
explicitly include the correct includes.
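
For instance, a file that previously relied on the implicit chain would carry
explicit lines such as the following (an illustrative selection; the exact set
depends on what the file actually uses):

  #include <linux/of.h>                  /* of_property_read_u32(), struct device_node */
  #include <linux/of_dma.h>              /* of_dma_controller_register() */
  #include <linux/platform_device.h>     /* struct platform_device, platform_driver */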

Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20230718143138.1066177-1-robh@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2023-08-01 23:51:27 +05:30
Andy Shevchenko
255ccd8b16 dmaengine: dw: Move check for paused channel to dwc_get_residue()
Move check for paused channel to dwc_get_residue() and rename the latter
to dwc_get_residue_and_status().

This improves data integrity as residue and DMA channel status are set
in the same function under the same conditions.
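
A rough sketch of the resulting shape (simplified from the dw core driver;
locking details and the residue computation are elided, so treat the exact
names as assumptions):

  static u32 dwc_get_residue_and_status(struct dw_dma_chan *dwc, dma_cookie_t cookie,
                                        enum dma_status *status)
  {
          unsigned long flags;
          u32 residue;

          spin_lock_irqsave(&dwc->lock, flags);
          residue = 0;    /* placeholder: walk the active descriptor here */
          /* Residue and channel status are now derived under the same lock,
           * so a pause cannot slip in between the two reads. */
          if (test_bit(DW_DMA_IS_PAUSED, &dwc->flags) && *status == DMA_IN_PROGRESS)
                  *status = DMA_PAUSED;
          spin_unlock_irqrestore(&dwc->lock, flags);

          return residue;
  }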

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20230130151747.20704-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2023-02-16 18:45:48 +05:30
Linus Torvalds
31be1d0fbd dmaengine updates for v6.0-rc1

Merge tag 'dmaengine-6.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine

Pull dmaengine updates from Vinod Koul:
 "New support / Core:

   - Remove DMA_MEMCPY_SG for lack of users

   - Tegra 234 dmaengine support

   - Mediatek MT8365 dma support

   - Apple ADMAC driver

  Updates:

   - Yaml conversion for ST-Ericsson DMA40 binding and Freescale edma

   - rz-dmac updates and device_synchronize support

   - Bunch of typo in comments fixes in drivers

   - multithread support in sf-pdma driver"

* tag 'dmaengine-6.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (50 commits)
  dmaengine: mediatek: mtk-hsdma: Fix typo 'the the' in comment
  dmaengine: axi-dmac: check cache coherency register
  dmaengine: sh: rz-dmac: Add device_synchronize callback
  dmaengine: sprd: Cleanup in .remove() after pm_runtime_get_sync() failed
  dmaengine: tegra: Add terminate() for Tegra234
  dt-bindings: dmaengine: Add compatible for Tegra234
  dmaengine: xilinx: use strscpy to replace strlcpy
  dmaengine: imx-sdma: Add FIFO stride support for multi FIFO script
  dmaengine: idxd: Correct IAX operation code names
  dmaengine: imx-dma: Cast of_device_get_match_data() with (uintptr_t)
  dmaengine: dw-axi-dmac: ignore interrupt if no descriptor
  dmaengine: dw-axi-dmac: do not print NULL LLI during error
  dmaengine: altera-msgdma: Fixed some inconsistent function name descriptions
  dmaengine: imx-sdma: Add missing struct documentation
  dmaengine: sf-pdma: Add multithread support for a DMA channel
  dt-bindings: dma: dw-axi-dmac: extend the number of interrupts
  dmaengine: dmatest: use strscpy to replace strlcpy
  dmaengine: ste_dma40: fix typo in comment
  dmaengine: jz4780: fix typo in comment
  dmaengine: s3c24xx: fix typo in comment
  ...
2022-08-04 18:44:38 -07:00
Hans-Christian Noren Egtvedt
300a596590 dma:dw: remove reference to AVR32 architecture in core.c
The AVR32 architecture no longer exists in the Linux kernel, hence
remove a reference to it in the comments to avoid confusion.

Signed-off-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
2022-08-03 11:03:02 +02:00
Miquel Raynal
7811f2e7fd dmaengine: dw: dmamux: Fix build without CONFIG_OF
When built without OF support, of_match_node() expands to NULL, which
produces the following output:
>> drivers/dma/dw/rzn1-dmamux.c:105:34: warning: unused variable 'rzn1_dmac_match' [-Wunused-const-variable]
   static const struct of_device_id rzn1_dmac_match[] = {

One way to silence the warning is to enclose the structure definition
with an #ifdef CONFIG_OF/#endif block.
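
That is, roughly (compatible string shown only for illustration):

  #ifdef CONFIG_OF
  static const struct of_device_id rzn1_dmac_match[] = {
          { .compatible = "renesas,rzn1-dma" },
          { /* sentinel */ }
  };
  #endif /* CONFIG_OF */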

Fixes: 134d9c52fc ("dmaengine: dw: dmamux: Introduce RZN1 DMA router support")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20220609141455.300879-2-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-06-10 17:51:21 +05:30
Miquel Raynal
2717d33841 dmaengine: dw: dmamux: Export the module device table
This is a tristate driver that can be built as a module; as a result,
the OF match table should be exported with MODULE_DEVICE_TABLE().
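
A minimal sketch of what that looks like (table name and compatible string are
illustrative assumptions):

  static const struct of_device_id rzn1_dmamux_match[] = {
          { .compatible = "renesas,rzn1-dmamux" },
          { /* sentinel */ }
  };
  /* Emits the alias information used by udev/modprobe to autoload the module. */
  MODULE_DEVICE_TABLE(of, rzn1_dmamux_match);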

Fixes: 134d9c52fc ("dmaengine: dw: dmamux: Introduce RZN1 DMA router support")
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20220609141455.300879-1-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-06-10 17:51:21 +05:30
Miquel Raynal
d5a8fe0fee dmaengine: dw: Add RZN1 compatible
The Renesas RZN1 DMA IP is very close to the original DW DMA IP; a DMA
router has been introduced to handle the wiring options that have been
added.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-By: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20220427095653.91804-8-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-19 22:34:51 +05:30
Miquel Raynal
134d9c52fc dmaengine: dw: dmamux: Introduce RZN1 DMA router support
The Renesas RZN1 DMA IP is based on a DW core, with e.g. an additional
dmamux register located in the system control area which can take up to
32 requests (16 per DMA controller). Each DMA channel can be wired to
two different peripherals.

We need two additional pieces of information from the 'dmas' property: the
channel (the bit in the dmamux register) that must be accessed and the mux
value for this channel.

Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Link: https://lore.kernel.org/r/20220427095653.91804-6-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-19 22:34:51 +05:30
Qing Wang
1365e117bf dmaengine: dw: switch from 'pci_' to 'dma_' API
The wrappers in include/linux/pci-dma-compat.h should go away.

pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(); use
dma_set_mask_and_coherent() to set both at once.
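
For a 32-bit capable device the replacement typically collapses into a single
call inside the PCI ->probe() path, roughly (sketch only):

  /* Before (deprecated PCI wrappers):
   *      pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
   *      pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
   */
  ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
  if (ret)
          return ret;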

Signed-off-by: Qing Wang <wangqing@vivo.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Link: https://lore.kernel.org/r/1633663733-47199-6-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-10-26 10:54:32 +05:30
Andy Shevchenko
d6ff82cc1b dmaengine: dw: Simplify DT property parser
Since we converted internal data types to match DT, there is no need to have
an intermediate conversion layer, hence drop a few conditionals and for loops
for good.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Link: https://lore.kernel.org/r/20210802184355.49879-3-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-08-06 19:18:59 +05:30
Andy Shevchenko
dfa6a2f4c2 dmaengine: dw: Remove error message from DT parsing code
Users are a bit frightened by the harmless message that reports that
DT is missing on ACPI-based platforms. Remove it for good; this will also
simplify the future conversion to the fwnode and device property APIs.

Fixes: a9ddb575d6 ("dmaengine: dw_dmac: Enhance device tree support")
Depends-on: f5e84eae79 ("dmaengine: dw: platform: Split OF helpers to separate module")
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=199379
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Link: https://lore.kernel.org/r/20210802184355.49879-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-08-06 19:17:46 +05:30
Andy Shevchenko
fe364a7d95 dmaengine: dw: Program xBAR hardware for Elkhart Lake
The Intel Elkhart Lake PSE DMA implementation is integrated with a crossbar IP
in order to serve more hardware than there are DMA request lines available.

Due to this, program the xBAR hardware to flexibly support the PSE peripherals.

Device-to-device transfers have not been tested and are not supported by the DMA
engine, but the code is left in place for the sake of documenting hardware features.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20210712113940.42753-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-07-14 10:16:30 +05:30
Andy Shevchenko
88cd1d6191 dmaengine: dw: Make it dependent to HAS_IOMEM
Some architectures do not provide devm_*() APIs. Hence, make the driver
depend on HAS_IOMEM.

Fixes: dbde5c2934 ("dw_dmac: use devm_* functions to simplify code")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://lore.kernel.org/r/20210324141757.24710-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-04-12 14:54:45 +05:30
Cezary Rojewski
b6c14d7a83 dmaengine dw: Revert "dmaengine: dw: Enable runtime PM"
This reverts commit 842067940a.
For some solutions, e.g. sound/soc/intel/catpt, DW DMA is part of a
compound device (in that very example, the ADSP, SSP0, SSP1, DMA0
and DMA1 domains are part of a single entity) rather than being a standalone
one. The driver for said device may enlist DMA to transfer data during
suspend or resume sequences.

Manipulating RPM explicitly in dw's DMA request and release channel
functions causes suspend() to also invoke resume() for the exact same
device. A similar situation occurs for the resume() sequence. This effectively
renders the device dysfunctional after the first suspend() attempt. Revert the
change to address the problem.

Fixes: 842067940a ("dmaengine: dw: Enable runtime PM")
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Cezary Rojewski <cezary.rojewski@intel.com>
Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Link: https://lore.kernel.org/r/20210203191924.15706-1-cezary.rojewski@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-02-08 17:36:12 +05:30
Andy Shevchenko
842067940a dmaengine: dw: Enable runtime PM
When a consumer requests a channel, power on the DMA controller device;
conversely, power it off when the channel resources are freed.

Note, in some cases a consumer acquires a channel at the ->probe() stage and
releases it at the ->remove() stage. This means that the DMA controller device
will be powered during all that time if there is no assist from hardware
to idle it. The above mentioned cases should be investigated separately
and individually.
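
The change boils down to bracketing the channel lifetime with runtime PM calls,
roughly as below (a sketch of the approach this commit took, which the revert
further up undoes for compound devices):

  #include <linux/pm_runtime.h>

  static int dwc_alloc_chan_resources(struct dma_chan *chan)
  {
          struct dw_dma *dw = to_dw_dma(chan->device);

          pm_runtime_get_sync(dw->dma.dev);       /* keep the controller powered while in use */
          /* ... existing channel setup ... */
          return 0;
  }

  static void dwc_free_chan_resources(struct dma_chan *chan)
  {
          struct dw_dma *dw = to_dw_dma(chan->device);

          /* ... existing channel teardown ... */
          pm_runtime_put_sync_suspend(dw->dma.dev);
  }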

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://lore.kernel.org/r/20201103183938.64752-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-11-09 17:19:20 +05:30
Allen Pais
169bb74f89 dmaengine: dw: convert tasklets to use new tasklet_setup() API
In preparation for unconditionally passing the
struct tasklet_struct pointer to all tasklet
callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
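
The conversion has roughly the following shape (names follow the dw core driver;
treat them as an illustrative sketch rather than the exact diff):

  #include <linux/interrupt.h>

  /* At init time, instead of tasklet_init(&dw->tasklet, cb, (unsigned long)dw): */
          tasklet_setup(&dw->tasklet, dw_dma_tasklet);

  /* The callback now receives the tasklet pointer and recovers its context: */
  static void dw_dma_tasklet(struct tasklet_struct *t)
  {
          struct dw_dma *dw = from_tasklet(dw, t, tasklet);

          /* ... per-channel interrupt handling as before ... */
  }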

Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <allen.lkml@gmail.com>
Link: https://lore.kernel.org/r/20200831103542.305571-6-allen.lkml@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-09-18 12:18:11 +05:30
Serge Semin
e8ee6c8cb6 dmaengine: dw: Add DMA-channels mask cell support
DW DMA IP-core provides a way to synthesize the DMA controller with
channels having different parameters like maximum burst-length,
multi-block support, maximum data width, etc. Those parameters both
explicitly and implicitly affect the channels' performance. Since DMA slave
devices might be very demanding of DMA performance, let's provide
functionality for the slaves to be assigned the DW DMA channels whose
performance, according to the platform engineer, fulfills their requirements.
After this patch is applied it can be done by passing the mask of suitable
DMA-channels either directly in the dw_dma_slave structure instance or as
a fifth cell of the DMA DT-property. If the mask is zero or not provided,
then there is no limitation on channel allocation.

For instance, the Baikal-T1 SoC is equipped with a DW DMAC engine whose first
two channels are synthesized with a max burst length of 16, while the rest of
the channels have been created with max-burst-len=4. It would seem that
the first two channels must be faster than the others and should be
preferable for time-critical DMA slave devices. In practice it turned
out that the situation is quite the opposite. The channels with
max-burst-len=4 demonstrated better performance than the channels with
max-burst-len=16 even when they both had been initialized with the same
settings. The performance drop of the first two DMA-channels made them
unsuitable for the DW APB SSI slave device. No matter what settings they
are configured with, full-duplex SPI transfers occasionally experience
Rx FIFO overflows. It means that the DMA-engine doesn't keep up with the
incoming data pace even though the SPI bus runs at 25 MHz
while the DW DMA controller is clocked with a 50 MHz signal. No such
problem has been noticed for the channels synthesized with
max-burst-len=4.
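
A hedged sketch of the first option, i.e. passing the mask directly in a
dw_dma_slave instance (the variable name and request-line IDs below are made up
for illustration; a zero mask keeps the old "any channel" behaviour):

  #include <linux/platform_data/dma-dw.h>

  /* Keep this slave off the two slower-in-practice channels 0 and 1. */
  static struct dw_dma_slave example_spi_tx_slave = {
          .src_id   = 0,
          .dst_id   = 1,
          .channels = GENMASK(7, 2),
  };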

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Link: https://lore.kernel.org/r/20200731200826.9292-6-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-08-17 11:58:31 +05:30
Serge Semin
8d2f59dab3 dmaengine: dw: Ignore burst setting for memory peripherals
According to the DW DMA controller Databook 2.18b (page 40 "3.5 Memory
Peripherals") memory peripherals don't have handshaking interface
connected to the controller, therefore they can never be a flow
controller. Since the CTLx.SRC_MSIZE and CTLx.DEST_MSIZE are properties
valid only for peripherals with a handshaking interface, we can freely
zero these fields out if the memory peripheral is selected to be the
source or the destination of the DMA transfers.

Note according to the databook, length of burst transfers to memory is
always equal to the number of data items available in a channel FIFO or
data items required to complete the block transfer, whichever is smaller;
length of burst transfers from memory is always equal to the space
available in a channel FIFO or number of data items required to complete
the block transfer, whichever is smaller.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Link: https://lore.kernel.org/r/20200731200826.9292-5-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-08-17 11:58:31 +05:30
Serge Semin
0ed725d1f5 dmaengine: dw: Discard dlen from the dev-to-mem xfer width calculation
Indeed, in the case of DMA_DEV_TO_MEM DMA transfers it's enough to take the
destination memory address and the destination master data width into
account to calculate the CTLx.DST_TR_WIDTH setting of the memory
peripheral. According to the DW DMAC IP-core Databook 2.18b (page 66,
Example 5), at the end of a DMA transfer, when the DMA-channel internal FIFO
is left with less data than a single destination burst transaction requires,
the destination peripheral enters the Single Transaction Region, where
the DW DMA controller can complete a block transfer to the destination
using single transactions (non-burst transactions of CTLx.DST_TR_WIDTH
bytes). If there is not enough data in the DMA-channel internal FIFO for
even a single non-burst transaction of CTLx.DST_TR_WIDTH bytes, then the
channel enters "FIFO flush mode". That mode is activated to empty the FIFO
and flush the leftovers out to the memory peripheral. The flushing
procedure is simple: the data is sent to the memory by means of a set of
single transactions of CTLx.SRC_TR_WIDTH bytes. To sum up, it's redundant to
use the LLP length to find out the CTLx.DST_TR_WIDTH parameter value,
since each DMA transfer will be completed with CTLx.SRC_TR_WIDTH-byte
transactions if that is required.

We suggest removing the LLP entry length from the statement which
calculates the memory peripheral DMA transaction width, since it's
redundant due to the feature described above. By doing so we'll improve
memory bus utilization and speed up the DMA-channel performance for
DMA_DEV_TO_MEM DMA-transfers.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200731200826.9292-4-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-08-17 11:58:31 +05:30
Serge Semin
6d9459d040 dmaengine: dw: Activate FIFO-mode for memory peripherals only
CFGx.FIFO_MODE field controls a DMA-controller "FIFO readiness" criterion.
In other words, it determines when to start pushing data out of a DW
DMAC channel FIFO to a destination peripheral or from a source
peripheral to the DW DMAC channel FIFO. Currently FIFO-mode is set to one
for all DW DMAC channels. It means they are tuned to flush data out of the
FIFO (to a memory peripheral or by accepting the burst transaction
requests) when the FIFO is at least half-full (except at the end of the block
transfer, when FIFO-flush mode is activated) and are configured to get
data into the FIFO when it's at least half-empty.

Such a configuration is a good choice when there is no slave device involved
in the DMA transfers. In that case the number of bursts per block is less
than when CFGx.FIFO_MODE = 0 and, hence, the bus utilization will improve.
But the latency of DMA transfers may increase when CFGx.FIFO_MODE = 1,
since the DW DMAC will wait for the channel FIFO contents to be either
half-full or half-empty depending on whether it is the destination or the
source transfer. Such latencies might be dangerous when the DMA transfers
are expected to be performed from/to a slave device. Since peripheral
devices normally keep data in internal FIFOs, any latency at some
critical moment may cause one to overflow and consequently lose
data. This especially concerns the case when either a peripheral device is
relatively fast or the DW DMAC engine is relatively slow with respect to
the incoming data pace.

In order to solve the problems which might be caused by the latencies
described above, let's enable the FIFO half-full/half-empty "FIFO
readiness" criterion only for DMA transfers with no slave device involved.
Thanks to the commit 99ba8b9b0d ("dmaengine: dw: Initialize channel
before each transfer") we can freely do that in the generic
dw_dma_initialize_chan() method.
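
Roughly, the resulting channel initialization looks like this (a simplified
sketch; the real function programs more CFG/CTL fields, so take the names as
approximations of the dw driver's internals):

  static void dw_dma_initialize_chan(struct dw_dma_chan *dwc)
  {
          u32 cfghi = 0;

          /* Half-full/half-empty "FIFO readiness" only for mem-to-mem transfers;
           * slave transfers keep CFGx.FIFO_MODE = 0 to minimize latency. */
          if (!is_slave_direction(dwc->direction))
                  cfghi |= DWC_CFGH_FIFO_MODE;

          channel_writel(dwc, CFG_HI, cfghi);
  }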

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200731200826.9292-3-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-08-17 11:58:31 +05:30
Vinod Koul
0b5ad7b952 Merge branch 'for-linus' into fixes
Signed-off-by: Vinod Koul <vkoul@kernel.org>

 Conflicts:
	drivers/dma/idxd/sysfs.c
2020-08-05 19:02:07 +05:30
Serge Semin
0f9d5f008e dmaengine: dw: Initialize max_sg_burst capability
Multi-block support provides a way to map the kernel-specific SG-table so
the DW DMA device would handle it as a whole instead of handling the
SG-list items, or so-called LLP block items, one by one. So if a true LLP
list isn't supported by the DW DMA engine, then soft-LLP mode will be
utilized to load and execute each LLP block one by one. The soft-LLP mode
of DMA transaction execution might not work well for some DMA
consumers like SPI due to the inter-dependency of its Tx and Rx buffers. Let's
initialize the max_sg_burst DMA channel capability based on the nollp
flag state. If it's true, no hardware-accelerated LLP is available and
max_sg_burst should be set to 1, which means that the DMA engine
can handle only a single SG list entry at a time. If nollp is set to
false, then hardware-accelerated LLP is supported and the DMA engine
can handle an unlimited number of SG entries in a single DMA transaction.
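
Conceptually the capability initialization is a one-liner in the caps callback
(sketch only; helper and field names are approximations of the dw core code):

  static void dwc_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
  {
          struct dw_dma_chan *dwc = to_dw_dma_chan(chan);

          /* 1 = only a single SG entry per transaction (soft-LLP fallback),
           * 0 = no limit, hardware-chained LLPs are available. */
          caps->max_sg_burst = dwc->nollp ? 1 : 0;
  }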

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200723005848.31907-11-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27 14:30:55 +05:30
Serge Semin
ca7f285171 dmaengine: dw: Introduce max burst length hw config
The DW DMA controller IP core may be synthesized with a different
maximum transfer burst length for each channel. According to Synopsys,
having a fixed maximum burst transaction length may provide some
performance gain. At the same time, setting the source and destination
multi-size so that it exceeds the max burst length limitation may cause
serious problems. In our case the DMA transaction just hangs. In order to fix
this, let's introduce a max burst length platform config for the DW DMA
controller device and don't let the DMA channel configuration code
exceed the burst length hardware limitation.

Note, the maximum burst length parameter can be detected either at runtime
from the DWC parameter registers or from the dedicated DT property.
Depending on the IP core configuration the maximum value can vary from
channel to channel, so by overriding the channel's slave max_burst capability
we make sure a DMA consumer will get the channel-specific max burst
length.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200723005848.31907-10-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27 14:30:55 +05:30
Serge Semin
585d35451e dmaengine: dw: Initialize min and max burst DMA device capability
According to the DW APB DMAC databook the minimum burst transaction
length is 1, and that is true for any version of the controller since it
isn't parametrized in coreAssembler and so can't be changed at the
IP-core synthesis stage. The maximum burst transaction length can vary from
channel to channel and from controller to controller depending on an
IP-core parameter the system engineer activated during the IP-core
synthesis. Let's initialize both the min_burst and max_burst members of the
DMA controller descriptor with extreme values so the DMA clients can
use them to properly optimize the DMA requests. The channel- and
controller-specific max_burst length initialization will be introduced
in follow-up patches.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200723005848.31907-9-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27 14:30:55 +05:30
Serge Semin
e6fe576796 dmaengine: dw: Set DMA device max segment size parameter
The maximum block size in the DW DMAC configuration corresponds to the max
segment size DMA parameter in DMA core subsystem notation. Let's set it to a
value specific to the probed DW DMA controller. It will help the DMA
clients to create size-optimized SG-list items for the controller. This in
turn means fewer dw_desc allocations, fewer LLP reinitializations, and
better DMA device performance.
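
The DMA core exposes this through dma_set_max_seg_size(); conceptually the probe
path gains something like the line below (a sketch, assuming block_size holds the
controller's maximum block size in bytes):

  /* Advertise the max block size so clients build SG entries that fit into a
   * single hardware block (fewer descriptors and LLP reinitializations). */
  dma_set_max_seg_size(dw->dma.dev, block_size);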

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200723005848.31907-8-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27 14:30:55 +05:30
Serge Semin
ef3e515a87 dmaengine: dw: Take HC_LLP flag into account for noLLP auto-config
Full multi-block transfer functionality is enabled in the DW DMA
controller only if CHx_MULTI_BLK_EN is set. But LLP-based transfers
can be executed only if the hardcode channel x LLP register feature isn't
enabled; that feature can be switched on at IP core synthesis for
optimization. If it's enabled, then the LLP register is hardcoded to
zero, so block chaining based on LLPs is unsupported.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200723005848.31907-7-Sergey.Semin@baikalelectronics.ru
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27 14:30:55 +05:30
Andy Shevchenko
99ba8b9b0d dmaengine: dw: Initialize channel before each transfer
In some cases DMA can be used only with a consumer which does runtime power
management, and on platforms that have DMA auto power gating logic
(see comments in drivers/acpi/acpi_lpss.c) this may result in the DMA losing
its context. A simple mitigation of this issue is to initialize the channel
each time the consumer initiates a transfer.

Fixes: cfdf5b6cc5 ("dw_dmac: add support for Lynxpoint DMA controllers")
Reported-by: Tsuchiya Yuto <kitakar@gmail.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=206403
Link: https://lore.kernel.org/r/20200705115620.51929-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-06 10:21:05 +05:30
Andy Shevchenko
0658e5a83a dmaengine: dw: Replace 'objs' by 'y'
`-objs` is intended for building host programs; change to `-y`, which is
more straightforward for device drivers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200526182416.52805-2-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-06-16 21:54:47 +05:30
Andy Shevchenko
38e4fb6672 dmaengine: dw: Register ACPI DMA controller for PCI that has companion
If a PCI-enumerated controller has a companion device,
register it with the ACPI DMA controllers as well.

Fixes: f7c799e950 ("dmaengine: dw: we do support Merrifield SoC in PCI mode")
Depends-on: b685fe26e9 ("dmaengine: dw: platform: Split ACPI helpers to separate module")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20200526182416.52805-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-06-16 21:54:47 +05:30
Andy Shevchenko
f27c22736d dmaengine: dw: platform: Mark 'hclk' clock optional
On some platforms the clock can be a fixed-rate, always-running one, and
there is no need to do anything with it.

In order to support those platforms, switch to using the optional clock API.
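
In the probe path that amounts to something like the following (sketch; "hclk"
is the clock name the driver already uses):

  /* Returns NULL instead of an error when no clock is described, so the rest
   * of the code can keep calling clk_prepare_enable() unconditionally. */
  chip->clk = devm_clk_get_optional(chip->dev, "hclk");
  if (IS_ERR(chip->clk))
          return PTR_ERR(chip->clk);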

Fixes: f8d9ddbc28 ("dmaengine: dw: platform: Enable iDMA 32-bit on Intel Elkhart Lake")
Depends-on: 60b8f0ddf1 ("clk: Add (devm_)clk_get_optional() functions")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://lore.kernel.org/r/20190924085116.83683-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-10-14 13:51:44 +05:30
Andy Shevchenko
f5e84eae79 dmaengine: dw: platform: Split OF helpers to separate module
For better maintainability, split the OF helpers into a separate module.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190820131546.75744-11-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-21 09:41:28 +05:30
Andy Shevchenko
b685fe26e9 dmaengine: dw: platform: Split ACPI helpers to separate module
For better maintainability, split the ACPI helpers into a separate module.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190820131546.75744-10-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-21 09:41:28 +05:30
Andy Shevchenko
84da042e70 dmaengine: dw: platform: Move handle check to dw_dma_acpi_controller_register()
Move the ACPI handle check to dw_dma_acpi_controller_register().

While here, convert it to has_acpi_companion(), which is the recommended way.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190820131546.75744-9-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-21 09:41:28 +05:30
Andy Shevchenko
e7b8514e4d dmaengine: dw: platform: Switch to acpi_dma_controller_register()
There is a possibility that the ACPI DMA controller remains registered
while the device is already gone.

To avoid a potential crash, move to the non-managed
acpi_dma_controller_register().

Fixes: 42c91ee71d ("dw_dmac: add ACPI support")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190820131546.75744-8-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-21 09:41:28 +05:30
Andy Shevchenko
a9c56721d6 dmaengine: dw: platform: Use devm_platform_ioremap_resource()
Use the new helper that wraps the calls to platform_get_resource()
and devm_ioremap_resource() together.
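
The substitution is mechanical, roughly:

  /* Before:
   *      mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
   *      chip->regs = devm_ioremap_resource(&pdev->dev, mem);
   */
  chip->regs = devm_platform_ioremap_resource(pdev, 0);
  if (IS_ERR(chip->regs))
          return PTR_ERR(chip->regs);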

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190820131546.75744-7-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-21 09:41:28 +05:30
Andy Shevchenko
f8d9ddbc28 dmaengine: dw: platform: Enable iDMA 32-bit on Intel Elkhart Lake
Intel® PSE (Programmable Services Engine) provides a few DMA controllers
to the host on Intel Elkhart Lake. Enable them in the ACPI glue driver.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190820131546.75744-6-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-21 09:41:27 +05:30
Andy Shevchenko
b3757413b9 dmaengine: dw: platform: Use struct dw_dma_chip_pdata
Now that we have a generic structure for the chip and platform data,
use it in the platform glue driver.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190820131546.75744-5-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-21 09:41:27 +05:30
Andy Shevchenko
ae923c91aa dmaengine: dw: Export struct dw_dma_chip_pdata for wider use
We expect that some devices can be enumerated either via PCI or ACPI.
Nevertheless, they will share the same information; thus, provide a generic
struct dw_dma_chip_pdata for all glue drivers.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190820131546.75744-4-andriy.shevchenko@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-21 09:41:27 +05:30
Jarkko Nikula
b48b8bc45a dmaengine: dw: Update Intel Elkhart Lake Service Engine acronym
Intel Elkhart Lake Offload Service Engine (OSE) will be called
Intel(R) Programmable Services Engine (Intel(R) PSE) in the documentation.

Update the comment here accordingly.

Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Link: https://lore.kernel.org/r/20190813080602.15376-1-jarkko.nikula@linux.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-08-20 16:39:18 +05:30
Andy Shevchenko
9e5ab0655e dmaengine: dw: Enable iDMA 32-bit on Intel Elkhart Lake
Intel Elkhart Lake OSE (Offload Service Engine) provides a few DMA controllers
to the host. Enable them in the driver.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-06-25 10:03:33 +05:30
Andy Shevchenko
a183ec708b dmaengine: dw: Distinguish ->remove() between DW and iDMA 32-bit
In the same way as done for ->probe(), call ->remove() based on
the type of the hardware.

While it works now due to the equivalence of the two removal functions,
this might change in the future.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-06-25 09:57:41 +05:30
Vinod Koul
278489c2e1 Merge branch 'topic/dw' into for-linus 2019-03-12 12:03:47 +05:30
Andy Shevchenko
b466a37fbc dmaengine: dw: convert to SPDX identifiers
This patch updates the license to use an SPDX-License-Identifier
instead of the verbose license text.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 17:57:13 +05:30
Andy Shevchenko
934891b0a1 dmaengine: dw: Don't pollute CTL_LO on iDMA 32-bit
Intel iDMA 32-bit doesn't have a concept of bus masters and thus
there is no need to set up any kind of masters in the CTL_LO register.

Moreover, the burst size for memory-to-memory transfers is not what it says;
we need a corrected list of possible sizes. Note that
the size of 8 items, each of them up to 4 bytes, is chosen because of the
maximum of 1/2 FIFO, which is 64 bytes on Intel Merrifield.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 17:57:13 +05:30
Andy Shevchenko
91f0ff883e dmaengine: dw: Reset DRAIN bit when resume the channel
For Intel iDMA 32-bit the channel can be drained on suspend.
We need to reset the bit on resume to restore the status quo.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 17:57:13 +05:30
Andy Shevchenko
69da8be90d dmaengine: dw: Split DW and iDMA 32-bit operations
Here is a fairly big refactoring that should have been done
in the first place, when Intel iDMA 32-bit support appeared.

It splits out the operations which differ between Synopsys DesignWare and
Intel iDMA 32-bit controllers.

No functional change intended.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 17:57:13 +05:30
Andy Shevchenko
0781657796 dmaengine: dw: Remove unused internal property
All known devices which use DT for configuration support
memory-to-memory transfers, so enable it by default.

The remaining two cases, i.e. Intel Quark and PPC460ex, instantiate the DMA
driver and use its channels exclusively for hardware, which means there is no
channel available for any other purpose anyway.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 17:57:13 +05:30
Andy Shevchenko
d7dba6be0f dmaengine: dw: Remove misleading is_private property
The commit a9ddb575d6

   ("dmaengine: dw_dmac: Enhance device tree support")

introduced the is_private property without a clear understanding of what it means.

First of all, documentation defines DMA_PRIVATE capability as

Documentation/crypto/async-tx-api.txt:
  The DMA_PRIVATE capability flag is used to tag dma devices that should not be
  used by the general-purpose allocator. It can be set at initialization time
  if it is known that a channel will always be private. Alternatively,
  it is set when dma_request_channel() finds an unused "public" channel.

  A couple caveats to note when implementing a driver and consumer:
  1/ Once a channel has been privately allocated it will no longer be
     considered by the general-purpose allocator even after a call to
     dma_release_channel().
  2/ Since capabilities are specified at the device level a dma_device with
     multiple channels will either have all channels public, or all channels
     private.

Documentation/driver-api/dmaengine/provider.rst:
  - DMA_PRIVATE
    The devices only supports slave transfers, and as such isn't available
    for async transfers.

The capability had been introduced by the commit 59b5ec2144

  ("dmaengine: introduce dma_request_channel and private channels")

and some of that code hasn't changed since then.

Taking the above into consideration, along with the fact that on all known
platforms the Synopsys DesignWare DMA engine is attached to serve slave
transfers, the DMA_PRIVATE capability must be enabled for this device
unconditionally.
Otherwise, as rightfully noticed in drivers/dma/at_xdmac.c:
  /*
   * Without DMA_PRIVATE the driver is not able to allocate more than
   * one channel, second allocation fails in private_candidate.
   */
because of the caveats mentioned in the documentation excerpts above.

So, remove the conditional around DMA_PRIVATE, followed by the removal of the
leftovers.

If someone wonders, DMA_PRIVATE can be left unset if and only if all the
channels of the DMA controller are supposed to serve memory-to-memory-like
operations. For example, EP93xx has two controllers, one of which can only
perform memory-to-memory transfers.

Note, this change doesn't affect dmatest's ability to test such controllers.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> (maintainer:SERIAL DRIVERS)
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 17:57:13 +05:30
Andy Shevchenko
87fe9ae84d dmaengine: dw: Add missed multi-block support for iDMA 32-bit
Intel integrated DMA 32-bit supports multi-block transfers.
Add the missing setting to the platform data.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 17:57:13 +05:30