
asm-generic: Add memory barrier dma_mb()

The memory barrier dma_mb() is introduced by commit a76a37777f
("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer"),
which is used to ensure that prior (both reads and writes) accesses
to memory by a CPU are ordered w.r.t. a subsequent MMIO write.

Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20220523113126.171714-2-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Kefeng Wang 2022-05-23 19:31:25 +08:00 committed by Will Deacon
parent a111daf0c5
commit ed59dfd950
2 changed files with 14 additions and 5 deletions

Documentation/memory-barriers.txt

@@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
 
  (*) dma_wmb();
  (*) dma_rmb();
+ (*) dma_mb();
 
      These are for use with consistent memory to guarantee the ordering
      of writes or reads of shared memory accessible to both the CPU and a
@@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
      The dma_rmb() allows us guarantee the device has released ownership
      before we read the data from the descriptor, and the dma_wmb() allows
      us to guarantee the data is written to the descriptor before the device
-     can see it now has ownership.  Note that, when using writel(), a prior
-     wmb() is not needed to guarantee that the cache coherent memory writes
-     have completed before writing to the MMIO region.  The cheaper
-     writel_relaxed() does not provide this guarantee and must not be used
-     here.
+     can see it now has ownership.  The dma_mb() implies both a dma_rmb() and
+     a dma_wmb().  Note that, when using writel(), a prior wmb() is not needed
+     to guarantee that the cache coherent memory writes have completed before
+     writing to the MMIO region.  The cheaper writel_relaxed() does not provide
+     this guarantee and must not be used here.
 
      See the subsection "Kernel I/O barrier effects" for more information on
      relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for

include/asm-generic/barrier.h

@@ -38,6 +38,10 @@
 #define wmb() do { kcsan_wmb(); __wmb(); } while (0)
 #endif
 
+#ifdef __dma_mb
+#define dma_mb() do { kcsan_mb(); __dma_mb(); } while (0)
+#endif
+
 #ifdef __dma_rmb
 #define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0)
 #endif
@@ -65,6 +69,10 @@
 #define wmb() mb()
 #endif
 
+#ifndef dma_mb
+#define dma_mb() mb()
+#endif
+
 #ifndef dma_rmb
 #define dma_rmb() rmb()
 #endif