| author | Matthew Wilcox (Oracle) <willy@infradead.org> | 2022-04-04 15:11:04 -0400 |
|---|---|---|
| committer | Matthew Wilcox (Oracle) <willy@infradead.org> | 2022-04-07 09:43:41 -0400 |
| commit | f584b68005ac782097d63a691740cb0dfed072ed | |
| tree | 8ed4111a39c1fcbccee5a9df4b484bed8214b467 /mm/mempolicy.c | |
| parent | c185e494ae0ceb126d89b8e3413ed0a1132e05d3 | |
mm: Add vma_alloc_folio()
This wrapper around alloc_pages_vma() calls prep_transhuge_page(),
removing the obligation from the caller. This is in the same spirit
as __folio_alloc().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
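To illustrate the caller-side simplification described above, here is a minimal sketch of a hypothetical THP allocation site before and after this patch (`gfp`, `vma`, and `addr` are placeholders; HPAGE_PMD_ORDER is the usual PMD-sized THP order):

```c
/* Before: callers allocating a THP had to prep the compound page. */
struct page *page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, vma, addr, true);
if (page)
	prep_transhuge_page(page);

/* After: the wrapper returns a folio and preps it when order > 1. */
struct folio *folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, addr, true);
```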
Diffstat (limited to 'mm/mempolicy.c')
-rw-r--r-- | mm/mempolicy.c | 13 |
1 file changed, 13 insertions(+), 0 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a2516d31db6c..ec15f4f4b714 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2227,6 +2227,19 @@ out:
 }
 EXPORT_SYMBOL(alloc_pages_vma);
 
+struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+		unsigned long addr, bool hugepage)
+{
+	struct folio *folio;
+
+	folio = (struct folio *)alloc_pages_vma(gfp, order, vma, addr,
+			hugepage);
+	if (folio && order > 1)
+		prep_transhuge_page(&folio->page);
+
+	return folio;
+}
+
 /**
  * alloc_pages - Allocate pages.
  * @gfp: GFP flags.
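A note on the design: prep_transhuge_page() initializes THP metadata kept in the tail pages of a compound page (such as the deferred-split list in the second tail page), which is why the wrapper only calls it for order > 1 allocations. As a usage sketch, a hypothetical call site might look like this (alloc_anon_thp() is illustrative and not part of the patch; GFP_TRANSHUGE, HPAGE_PMD_ORDER, and HPAGE_PMD_MASK are existing kernel symbols):

```c
/*
 * Hypothetical call site: allocate an anonymous THP for a fault
 * at @addr.  hugepage=true lets the mempolicy code prefer a
 * THP-friendly node; the wrapper calls prep_transhuge_page()
 * itself because HPAGE_PMD_ORDER > 1.
 */
static struct folio *alloc_anon_thp(struct vm_area_struct *vma,
		unsigned long addr)
{
	return vma_alloc_folio(GFP_TRANSHUGE, HPAGE_PMD_ORDER, vma,
			addr & HPAGE_PMD_MASK, true);
}
```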