author | Jason Gunthorpe <jgg@nvidia.com> | 2020-09-04 19:41:47 -0300
committer | Jason Gunthorpe <jgg@nvidia.com> | 2020-09-11 10:24:53 -0300
commit | a665aca89a411115e35ea937c2d3fb2ee4f5a701 (patch)
tree | 79c8a3e6ba1b0df6d1cfe1e1067a40b72bc1f942 /net/wimax
parent | 89603f7e7e5a6b719f1a163a05bd8a9231b58318 (diff)
download | lwn-a665aca89a411115e35ea937c2d3fb2ee4f5a701.tar.gz, lwn-a665aca89a411115e35ea937c2d3fb2ee4f5a701.zip
RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks()
ib_umem_num_pages() should only be used by things working with the SGL in
CPU pages directly.
Drivers building DMA lists should use the new ib_umem_num_dma_blocks(),
which returns the number of blocks rdma_umem_for_each_dma_block() will
return.
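
As an illustration, a converted driver sizes its DMA list with the same
math the iterator uses. A minimal sketch of such a conversion, where
dev_pgsz_bitmap, iova, and page_list are hypothetical driver-side names
(error paths abbreviated):

  struct ib_block_iter biter;
  dma_addr_t *page_list;
  unsigned long pgsz;
  size_t nblocks, i = 0;

  /* Pick the best HW-supported page size for this mapping. */
  pgsz = ib_umem_find_best_pgsz(umem, dev_pgsz_bitmap, iova);
  if (!pgsz)
          return -EINVAL;

  /* Size the list with the block count the iterator will produce... */
  nblocks = ib_umem_num_dma_blocks(umem, pgsz);
  page_list = kcalloc(nblocks, sizeof(*page_list), GFP_KERNEL);
  if (!page_list)
          return -ENOMEM;

  /* ...so every block returned below has a slot. */
  rdma_umem_for_each_dma_block(umem, &biter, pgsz)
          page_list[i++] = rdma_block_iter_dma_address(&biter);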
Making this general for DMA drivers requires a different implementation.
Computing the DMA block count based on umem->address only works if the
requested page size is < PAGE_SIZE and/or the IOVA == umem->address.
Instead, the number of DMA blocks should be computed in the IOVA address
space, not from umem->address. Thus the IOVA has to be stored inside the
umem so it can be used for these calculations.
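
Concretely, the count falls out of rounding the mapped span to pgsz
boundaries in IOVA space. A sketch of such a helper, along the lines of
what this patch introduces, using the kernel's ALIGN()/ALIGN_DOWN()
macros and the new umem->iova field:

  static inline size_t ib_umem_num_dma_blocks(struct ib_umem *umem,
                                              unsigned long pgsz)
  {
          /* Round the IOVA span out to pgsz boundaries, then count blocks. */
          return (size_t)(ALIGN(umem->iova + umem->length, pgsz) -
                          ALIGN_DOWN(umem->iova, pgsz)) / pgsz;
  }

With this in place, ib_umem_num_pages() reduces to the PAGE_SIZE case of
the same computation.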
For now, set it to umem->address by default and fix it up if
ib_umem_find_best_pgsz() was called. This allows drivers to be converted
to ib_umem_num_dma_blocks() safely.
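
A sketch of that default-and-fixup as described above (the exact code in
the patch may differ):

  /* in ib_umem_get(): default the IOVA to the CPU address */
  umem->iova = umem->address = addr;

  /* in ib_umem_find_best_pgsz(..., unsigned long virt): record the real IOVA */
  umem->iova = va = virt;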
Link: https://lore.kernel.org/r/6-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>