author    | Kefeng Wang <wangkefeng.wang@huawei.com> | 2022-05-23 19:31:25 +0800
committer | Will Deacon <will@kernel.org> | 2022-06-23 18:34:58 +0100
commit    | ed59dfd9509d172e4920994ed9cbebf93b0050cc
tree      | 836859cb481958c930fc01482806bce7126ba36c /Documentation/memory-barriers.txt
parent    | a111daf0c53ae91e71fd2bfe7497862d14132e3e
asm-generic: Add memory barrier dma_mb()
The memory barrier dma_mb() was introduced by commit a76a37777f2c
("iommu/arm-smmu-v3: Ensure queue is read after updating prod pointer").
It is used to ensure that a CPU's prior accesses to memory (both reads
and writes) are ordered w.r.t. a subsequent MMIO write.
Reviewed-by: Arnd Bergmann <arnd@arndb.de> # for asm-generic
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20220523113126.171714-2-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
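
As a rough sketch of the use case the commit message describes, the snippet
below shows dma_mb() ordering the CPU's prior reads and writes of coherent
memory before a subsequent MMIO write. All device, structure and register
names are hypothetical, and the pairing with writel_relaxed() mirrors the
arm64 use case cited above rather than being offered as a portable recipe.

#include <linux/io.h>
#include <linux/types.h>
#include <asm/barrier.h>

#define DEV_PROD_REG	0x08		/* hypothetical producer-index register */

struct dev_queue {			/* lives in coherent (consistent) DMA memory */
	u32 prod;
	u32 slots[64];
};

static void queue_publish(struct dev_queue *q, void __iomem *regs, u32 entry)
{
	u32 prod = q->prod;		/* read from coherent memory  */

	q->slots[prod % 64] = entry;	/* writes to coherent memory  */
	q->prod = prod + 1;

	/*
	 * Order all of the reads and writes above before the MMIO write
	 * that makes the new producer index visible to the device.
	 */
	dma_mb();
	writel_relaxed(prod + 1, regs + DEV_PROD_REG);
}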
Diffstat (limited to 'Documentation/memory-barriers.txt')
-rw-r--r-- | Documentation/memory-barriers.txt | 11
1 file changed, 6 insertions, 5 deletions
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index b12df9137e1c..832b5d36e279 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1894,6 +1894,7 @@ There are some more advanced barrier functions:
 
  (*) dma_wmb();
  (*) dma_rmb();
+ (*) dma_mb();
 
      These are for use with consistent memory to guarantee the ordering
      of writes or reads of shared memory accessible to both the CPU and a
@@ -1925,11 +1926,11 @@ There are some more advanced barrier functions:
      The dma_rmb() allows us guarantee the device has released ownership
      before we read the data from the descriptor, and the dma_wmb() allows
      us to guarantee the data is written to the descriptor before the device
-     can see it now has ownership. Note that, when using writel(), a prior
-     wmb() is not needed to guarantee that the cache coherent memory writes
-     have completed before writing to the MMIO region. The cheaper
-     writel_relaxed() does not provide this guarantee and must not be used
-     here.
+     can see it now has ownership. The dma_mb() implies both a dma_rmb() and
+     a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
+     to guarantee that the cache coherent memory writes have completed before
+     writing to the MMIO region. The cheaper writel_relaxed() does not provide
+     this guarantee and must not be used here.
 
      See the subsection "Kernel I/O barrier effects" for more information on
      relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
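
To make the documented dma_rmb()/dma_wmb() pattern concrete, here is a minimal
sketch of the descriptor-ownership scheme the text above describes. The
descriptor layout, ownership flag and doorbell offset are invented for
illustration; only the barrier and accessor usage follows the documentation.

#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/bits.h>
#include <asm/barrier.h>

#define DESC_OWNED_BY_DEVICE	BIT(31)
#define DOORBELL_REG		0x10	/* hypothetical doorbell offset */

struct ring_desc {			/* shared, coherent DMA memory */
	__le32 addr;
	__le32 len;
	__le32 flags;			/* ownership + completion bits  */
};

/* Producer: fill a descriptor, then pass ownership to the device. */
static void ring_post(struct ring_desc *desc, void __iomem *regs,
		      dma_addr_t buf, u32 len, u32 tail)
{
	desc->addr = cpu_to_le32(lower_32_bits(buf));
	desc->len  = cpu_to_le32(len);

	/* Descriptor contents must be visible before the ownership flip. */
	dma_wmb();
	desc->flags = cpu_to_le32(DESC_OWNED_BY_DEVICE);

	/*
	 * writel() orders the coherent-memory writes above before the MMIO
	 * doorbell write, so no extra wmb() is needed; writel_relaxed()
	 * would not give that guarantee.
	 */
	writel(tail, regs + DOORBELL_REG);
}

/* Consumer: only read the descriptor once the device has released it. */
static bool ring_reap(struct ring_desc *desc, u32 *len)
{
	if (le32_to_cpu(desc->flags) & DESC_OWNED_BY_DEVICE)
		return false;		/* still owned by the device */

	/* The ownership check must complete before reading the payload fields. */
	dma_rmb();
	*len = le32_to_cpu(desc->len);
	return true;
}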