
docs/memory-barriers.txt: Fix style, spacing and grammar in I/O section

Commit 4614bbdee3 ("docs/memory-barriers.txt: Rewrite "KERNEL I/O
BARRIER EFFECTS" section") rewrote the I/O ordering section of
memory-barriers.txt.

Subsequently, Ingo noticed a number of issues with the style, spacing
and grammar of the rewritten section. Fix them based on his suggestions.

Link: https://lkml.kernel.org/r/20190410105833.GA116161@gmail.com
Fixes: 4614bbdee3 ("docs/memory-barriers.txt: Rewrite "KERNEL I/O BARRIER EFFECTS" section")
Reported-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
commit 0cde62a46e
parent 01e3b958ef
Author: Will Deacon <will.deacon@arm.com>
Date:   2019-04-10 14:01:06 +01:00

Documentation/memory-barriers.txt

@@ -2517,80 +2517,88 @@ guarantees:
  (*) readX(), writeX():

-        The readX() and writeX() MMIO accessors take a pointer to the peripheral
-        being accessed as an __iomem * parameter. For pointers mapped with the
-        default I/O attributes (e.g. those returned by ioremap()), then the
-        ordering guarantees are as follows:
+        The readX() and writeX() MMIO accessors take a pointer to the
+        peripheral being accessed as an __iomem * parameter. For pointers
+        mapped with the default I/O attributes (e.g. those returned by
+        ioremap()), the ordering guarantees are as follows:

-        1. All readX() and writeX() accesses to the same peripheral are ordered
-           with respect to each other. For example, this ensures that MMIO register
-           writes by the CPU to a particular device will arrive in program order.
+        1. All readX() and writeX() accesses to the same peripheral are ordered
+           with respect to each other. This ensures that MMIO register writes by
+           the CPU to a particular device will arrive in program order.

-        2. A writeX() by the CPU to the peripheral will first wait for the
-           completion of all prior CPU writes to memory. For example, this ensures
-           that writes by the CPU to an outbound DMA buffer allocated by
-           dma_alloc_coherent() will be visible to a DMA engine when the CPU writes
-           to its MMIO control register to trigger the transfer.
+        2. A writeX() by the CPU to the peripheral will first wait for the
+           completion of all prior CPU writes to memory. This ensures that
+           writes by the CPU to an outbound DMA buffer allocated by
+           dma_alloc_coherent() will be visible to a DMA engine when the CPU
+           writes to its MMIO control register to trigger the transfer.

-        3. A readX() by the CPU from the peripheral will complete before any
-           subsequent CPU reads from memory can begin. For example, this ensures
-           that reads by the CPU from an incoming DMA buffer allocated by
-           dma_alloc_coherent() will not see stale data after reading from the DMA
-           engine's MMIO status register to establish that the DMA transfer has
-           completed.
+        3. A readX() by the CPU from the peripheral will complete before any
+           subsequent CPU reads from memory can begin. This ensures that reads
+           by the CPU from an incoming DMA buffer allocated by
+           dma_alloc_coherent() will not see stale data after reading from the
+           DMA engine's MMIO status register to establish that the DMA transfer
+           has completed.

-        4. A readX() by the CPU from the peripheral will complete before any
-           subsequent delay() loop can begin execution. For example, this ensures
-           that two MMIO register writes by the CPU to a peripheral will arrive at
-           least 1us apart if the first write is immediately read back with readX()
-           and udelay(1) is called prior to the second writeX().
+        4. A readX() by the CPU from the peripheral will complete before any
+           subsequent delay() loop can begin execution. This ensures that two
+           MMIO register writes by the CPU to a peripheral will arrive at least
+           1us apart if the first write is immediately read back with readX()
+           and udelay(1) is called prior to the second writeX():

-        __iomem pointers obtained with non-default attributes (e.g. those returned
-        by ioremap_wc()) are unlikely to provide many of these guarantees.
+                writel(42, DEVICE_REGISTER_0); // Arrives at the device...
+                readl(DEVICE_REGISTER_0);
+                udelay(1);
+                writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.
+
+        The ordering properties of __iomem pointers obtained with non-default
+        attributes (e.g. those returned by ioremap_wc()) are specific to the
+        underlying architecture and therefore the guarantees listed above cannot
+        generally be relied upon for accesses to these types of mappings.
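
To see guarantees 2 and 3 at work in driver code, consider a minimal sketch; the device, its register offsets (HYP_DESC, HYP_CTRL, HYP_STATUS) and the one-word descriptor layout are invented purely for illustration:

        #include <linux/io.h>
        #include <linux/bits.h>
        #include <linux/kernel.h>        /* lower_32_bits() */
        #include <linux/types.h>

        #define HYP_DESC   0x00          /* hypothetical register offsets */
        #define HYP_CTRL   0x04
        #define HYP_STATUS 0x08
        #define HYP_GO     BIT(0)
        #define HYP_DONE   BIT(0)

        /* 'desc' points at memory from dma_alloc_coherent(); 'dma' is its
         * device address. Writing HYP_GO starts the (hypothetical) transfer. */
        static void hyp_start_xfer(void __iomem *regs, u32 *desc, dma_addr_t dma)
        {
                desc[0] = 42;            /* plain store to the coherent buffer */
                writel(lower_32_bits(dma), regs + HYP_DESC);
                writel(HYP_GO, regs + HYP_CTRL);  /* guarantee 2: the store to
                                                   * desc[0] is visible first */
        }

        static u32 hyp_read_result(void __iomem *regs, const u32 *buf)
        {
                while (!(readl(regs + HYP_STATUS) & HYP_DONE))
                        cpu_relax();
                return buf[0];           /* guarantee 3: not stale */
        }
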
  (*) readX_relaxed(), writeX_relaxed():

-        These are similar to readX() and writeX(), but provide weaker memory
-        ordering guarantees. Specifically, they do not guarantee ordering with
-        respect to normal memory accesses or delay() loops (i.e bullets 2-4 above)
-        but they are still guaranteed to be ordered with respect to other accesses
-        to the same peripheral when operating on __iomem pointers mapped with the
-        default I/O attributes.
+        These are similar to readX() and writeX(), but provide weaker memory
+        ordering guarantees. Specifically, they do not guarantee ordering with
+        respect to normal memory accesses or delay() loops (i.e. bullets 2-4
+        above) but they are still guaranteed to be ordered with respect to other
+        accesses to the same peripheral when operating on __iomem pointers
+        mapped with the default I/O attributes.
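
A common pattern exploits exactly this split: batch the register writes for one device with the _relaxed() variants, which remain ordered against each other for the same peripheral, and let a single final writeX() provide the ordering against prior memory accesses. A fragment, reusing the invented register names from the sketch above and assuming u32 variables src, dst and len describe the transfer:

        writel_relaxed(src, regs + HYP_SRC);   /* hypothetical offsets */
        writel_relaxed(dst, regs + HYP_DST);
        writel_relaxed(len, regs + HYP_LEN);
        writel(HYP_GO, regs + HYP_CTRL);       /* also waits for prior
                                                * CPU writes to memory */
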
  (*) readsX(), writesX():

-        The readsX() and writesX() MMIO accessors are designed for accessing
-        register-based, memory-mapped FIFOs residing on peripherals that are not
-        capable of performing DMA. Consequently, they provide only the ordering
-        guarantees of readX_relaxed() and writeX_relaxed(), as documented above.
+        The readsX() and writesX() MMIO accessors are designed for accessing
+        register-based, memory-mapped FIFOs residing on peripherals that are not
+        capable of performing DMA. Consequently, they provide only the ordering
+        guarantees of readX_relaxed() and writeX_relaxed(), as documented above.
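
For example, a PIO-only device exposing 32-bit data FIFO registers might be driven as in the fragment below; the register offsets are again invented, and txbuf/rxbuf/words are assumed to describe the buffers:

        /* writesl()/readsl() issue repeated relaxed accesses to one register,
         * so no ordering against normal memory or delay() loops is implied. */
        writesl(regs + HYP_TX_FIFO, txbuf, words);
        readsl(regs + HYP_RX_FIFO, rxbuf, words);
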
  (*) inX(), outX():

-        The inX() and outX() accessors are intended to access legacy port-mapped
-        I/O peripherals, which may require special instructions on some
-        architectures (notably x86). The port number of the peripheral being
-        accessed is passed as an argument.
+        The inX() and outX() accessors are intended to access legacy port-mapped
+        I/O peripherals, which may require special instructions on some
+        architectures (notably x86). The port number of the peripheral being
+        accessed is passed as an argument.

-        Since many CPU architectures ultimately access these peripherals via an
-        internal virtual memory mapping, the portable ordering guarantees provided
-        by inX() and outX() are the same as those provided by readX() and writeX()
-        respectively when accessing a mapping with the default I/O attributes.
+        Since many CPU architectures ultimately access these peripherals via an
+        internal virtual memory mapping, the portable ordering guarantees
+        provided by inX() and outX() are the same as those provided by readX()
+        and writeX() respectively when accessing a mapping with the default I/O
+        attributes.

-        Device drivers may expect outX() to emit a non-posted write transaction
-        that waits for a completion response from the I/O peripheral before
-        returning. This is not guaranteed by all architectures and is therefore
-        not part of the portable ordering semantics.
+        Device drivers may expect outX() to emit a non-posted write transaction
+        that waits for a completion response from the I/O peripheral before
+        returning. This is not guaranteed by all architectures and is therefore
+        not part of the portable ordering semantics.
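
A classic user is a 16550A-style UART on x86, where the conventional COM1 base port number, not an __iomem pointer, is passed directly. A sketch of transmitting one byte c, with the standard line-status polling loop:

        #define COM1_BASE 0x3f8

        while (!(inb(COM1_BASE + 5) & 0x20))   /* LSR: wait for THR empty */
                cpu_relax();
        outb(c, COM1_BASE + 0);                /* THR: transmit the byte */
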
  (*) insX(), outsX():

-        As above, the insX() and outsX() accessors provide the same ordering
-        guarantees as readsX() and writesX() respectively when accessing a mapping
-        with the default I/O attributes.
+        As above, the insX() and outsX() accessors provide the same ordering
+        guarantees as readsX() and writesX() respectively when accessing a
+        mapping with the default I/O attributes.
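
Legacy IDE PIO transfers are a typical example, draining the 16-bit data port with insw(); as with readsX()/writesX(), only the relaxed guarantees apply. A one-line sketch using the conventional primary ATA data port:

        insw(0x1f0, buf, 256);   /* one 512-byte sector, 16 bits at a time */
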
- (*) ioreadX(), iowriteX()
+ (*) ioreadX(), iowriteX():

-        These will perform appropriately for the type of access they're actually
-        doing, be it inX()/outX() or readX()/writeX().
+        These will perform appropriately for the type of access they're actually
+        doing, be it inX()/outX() or readX()/writeX().
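
These are typically used with a mapping returned by pci_iomap(), which may refer to either MMIO or port I/O depending on the BAR. A brief sketch, reusing the hypothetical control register from above:

        void __iomem *regs = pci_iomap(pdev, 0, 0);  /* BAR 0, full length */

        if (regs) {
                iowrite32(HYP_GO, regs + HYP_CTRL);  /* MMIO or PIO alike */
                pci_iounmap(pdev, regs);
        }
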
 All of these accessors assume that the underlying peripheral is little-endian,
 and will therefore perform byte-swapping operations on big-endian architectures.
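
Peripherals whose registers are big-endian can use the explicit big-endian variants instead; for instance, with the invented offsets from above:

        u32 v = ioread32be(regs + HYP_STATUS);   /* register is big-endian */
        iowrite32be(HYP_GO, regs + HYP_CTRL);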