author    Linus Torvalds <torvalds@linux-foundation.org>  2020-10-24 12:17:05 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2020-10-24 12:17:05 -0700
commit    1b307ac87075c3207c345822ea276fe4f28481d7 (patch)
tree      8d3b04cc685b4dd8d34e9decbae0ba1bb5e8647b /Documentation
parent    9bf8d8bcf3cebe44863188f1f2d822214e84f5b1 (diff)
parent    6857a5ebaabc5b9d989872700b4b71dd2a6d6453 (diff)
Merge tag 'dma-mapping-5.10-1' of git://git.infradead.org/users/hch/dma-mapping
Pull dma-mapping fixes from Christoph Hellwig:

 - document the new dma_{alloc,free}_pages() API

 - two fixups for the dma-mapping.h split

* tag 'dma-mapping-5.10-1' of git://git.infradead.org/users/hch/dma-mapping:
  dma-mapping: document dma_{alloc,free}_pages
  dma-mapping: move more functions to dma-map-ops.h
  ARM/sa1111: add a missing include of dma-map-ops.h
Diffstat (limited to 'Documentation')
-rw-r--r--   Documentation/core-api/dma-api.rst   49
1 file changed, 43 insertions(+), 6 deletions(-)
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index ea0413276ddb..75cb757bbff0 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -519,10 +519,9 @@ routines, e.g.:::
Part II - Non-coherent DMA allocations
--------------------------------------
-These APIs allow to allocate pages in the kernel direct mapping that are
-guaranteed to be DMA addressable. This means that unlike dma_alloc_coherent,
-virt_to_page can be called on the resulting address, and the resulting
-struct page can be used for everything a struct page is suitable for.
+These APIs allow to allocate pages that are guaranteed to be DMA addressable
+by the passed in device, but which need explicit management of memory ownership
+for the kernel vs the device.
If you don't understand how cache line coherency works between a processor and
an I/O device, you should not be using this part of the API.
@@ -537,7 +536,7 @@ an I/O device, you should not be using this part of the API.
This routine allocates a region of <size> bytes of consistent memory. It
returns a pointer to the allocated region (in the processor's virtual address
space) or NULL if the allocation failed. The returned memory may or may not
-be in the kernels direct mapping. Drivers must not call virt_to_page on
+be in the kernel direct mapping. Drivers must not call virt_to_page on
the returned memory region.
It also returns a <dma_handle> which may be cast to an unsigned integer the
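As a rough illustration of the ownership rules described in the hunks above, here is a minimal sketch of a driver using dma_alloc_noncoherent() for a receive buffer. It is not part of the patch: the device pointer, the 4 KiB size, and the DMA_FROM_DEVICE direction are assumptions made for the example::

	#include <linux/dma-mapping.h>
	#include <linux/sizes.h>

	/* Hypothetical helper; "dev" is assumed to be a probed, DMA-capable device. */
	static int rx_one_buffer(struct device *dev)
	{
		size_t size = SZ_4K;
		dma_addr_t dma_handle;
		void *buf;

		/*
		 * The returned memory may or may not be in the kernel direct
		 * mapping, so never call virt_to_page() on it.
		 */
		buf = dma_alloc_noncoherent(dev, size, &dma_handle,
					    DMA_FROM_DEVICE, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* Hand ownership to the device before it DMAs into dma_handle. */
		dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);
		/* ... start the transfer and wait for completion ... */
		/* Reclaim ownership before the CPU reads the data. */
		dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);

		/* buf is now safe for the CPU to read. */

		dma_free_noncoherent(dev, size, buf, dma_handle, DMA_FROM_DEVICE);
		return 0;
	}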
@@ -565,7 +564,45 @@ reused.
Free a region of memory previously allocated using dma_alloc_noncoherent().
dev, size and dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by
-the dma_alloc_noncoherent().
+dma_alloc_noncoherent().
+
+::
+
+ struct page *
+ dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
+ enum dma_data_direction dir, gfp_t gfp)
+
+This routine allocates a region of <size> bytes of non-coherent memory. It
+returns a pointer to first struct page for the region, or NULL if the
+allocation failed. The resulting struct page can be used for everything a
+struct page is suitable for.
+
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the DMA address base of
+the region.
+
+The dir parameter specifies whether data is read and/or written by the device;
+see dma_map_single() for details.
+
+The gfp parameter allows the caller to specify the ``GFP_`` flags (see
+kmalloc()) for the allocation, but rejects flags used to specify a memory
+zone such as GFP_DMA or GFP_HIGHMEM.
+
+Before giving the memory to the device, dma_sync_single_for_device() needs
+to be called, and before reading memory written by the device,
+dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
+reused.
+
+::
+
+ void
+ dma_free_pages(struct device *dev, size_t size, struct page *page,
+ dma_addr_t dma_handle, enum dma_data_direction dir)
+
+Free a region of memory previously allocated using dma_alloc_pages().
+dev, size, dma_handle and dir must all be the same as those passed into
+dma_alloc_pages(). page must be the pointer returned by
+dma_alloc_pages().
::
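To make the newly documented API concrete, here is a minimal usage sketch of the dma_alloc_pages()/dma_free_pages() pair for a transmit buffer. It is illustrative only and not taken from the patch: the device pointer, the src/len parameters, the DMA_TO_DEVICE direction, and the memcpy() staging step are all assumptions::

	#include <linux/dma-mapping.h>
	#include <linux/mm.h>
	#include <linux/string.h>

	/* Hypothetical helper; "dev", "src" and "len" are assumptions made
	 * for the example. */
	static int tx_from_pages(struct device *dev, const void *src, size_t len)
	{
		dma_addr_t dma_handle;
		struct page *page;

		/*
		 * Unlike dma_alloc_noncoherent(), this hands back a real
		 * struct page, usable for anything a struct page is good for.
		 */
		page = dma_alloc_pages(dev, len, &dma_handle, DMA_TO_DEVICE,
				       GFP_KERNEL);
		if (!page)
			return -ENOMEM;

		/* Fill the buffer while the CPU still owns it. */
		memcpy(page_address(page), src, len);

		/* Pass ownership to the device before starting the DMA. */
		dma_sync_single_for_device(dev, dma_handle, len, DMA_TO_DEVICE);
		/* ... program dma_handle into the device and wait for completion ... */

		dma_free_pages(dev, len, page, dma_handle, DMA_TO_DEVICE);
		return 0;
	}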