author     Ralph Campbell <rcampbell@nvidia.com>  2020-07-01 15:53:49 -0700
committer  Jason Gunthorpe <jgg@nvidia.com>       2020-07-10 16:24:28 -0300
commit     3b50a6e536d2d843857ffe5f923eff7be4222afe
tree       0caed7527380f98df8cf760bcfa076314ac8ffb8
parent     dcb7fd82c75ee2d6e6f9d8cc71c52519ed52e258
mm/hmm: provide the page mapping order in hmm_range_fault()
hmm_range_fault() returns an array of page frame numbers and flags for how
the pages are mapped in the requested process' page tables. The PFN can be
used to get the struct page with hmm_pfn_to_page() and the page size order
can be determined with compound_order(page).
However, if the page is larger than order 0 (PAGE_SIZE), there is no
indication that a compound page is mapped by the CPU using a larger page
size. Without this information, the caller can't safely use a large device
PTE to map the compound page because the CPU might be using smaller PTEs
with different read/write permissions.
Add a new function hmm_pfn_to_map_order() to return the mapping size order
so that callers know the pages are being mapped with consistent
permissions and a large device page table mapping can be used if one is
available.
This will allow devices to optimize mapping the page into hardware by
avoiding or batching per-page work for huge pages. For instance, the DMA
mapping can be done directly at a high order.
Link: https://lore.kernel.org/r/20200701225352.9649-3-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>