author     Joao Martins <joao.m.martins@oracle.com>    2022-04-28 23:16:16 -0700
committer  akpm <akpm@linux-foundation.org>            2022-04-28 23:16:16 -0700
commit     6fd3620b342861de9547ea01d28f664892ef51a1 (patch)
tree       07ea9306c6fee977cb55f748097dc6224eb47ae3 /include
parent     4917f55b4ef963e2d2288fe4eb651728be8db406 (diff)
mm/page_alloc: reuse tail struct pages for compound devmaps
Currently memmap_init_zone_device() ends up initializing 32768 pages when
it only needs to initialize 128 given tail page reuse. That number is
worse with 1GB compound pages, 262144 instead of 128. Update
memmap_init_zone_device() to skip redundant initialization, detailed
below.
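To see where the 128 comes from (my arithmetic, assuming 4K base pages
and a 64-byte struct page, the common x86-64 layout): with tail page
reuse only the first two vmemmap pages backing a compound devmap are
unique, and each 4K vmemmap page holds 4096 / 64 = 64 struct pages, so
only 2 * 64 = 128 struct pages per compound page need initializing; for
a 1G compound page that replaces the full 1G / 4K = 262144
initializations.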
When a pgmap @vmemmap_shift is set, all pages are mapped at a given huge
page alignment and compound pages are used to describe them, as opposed
to one struct page per 4K page.
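As an illustrative sketch (not part of this patch), a driver opts into
compound devmap geometry by setting @vmemmap_shift on its dev_pagemap;
for 2M compound pages on x86-64 with 4K base pages this would look
something like:

	/* Hypothetical device-dax style setup, for illustration only. */
	struct dev_pagemap pgmap = {
		.type          = MEMORY_DEVICE_GENERIC,
		/* 9: each compound page covers 512 4K pages, i.e. 2M */
		.vmemmap_shift = PMD_SHIFT - PAGE_SHIFT,
	};

pgmap_vmemmap_nr() then yields 1 << vmemmap_shift = 512 pages per
compound page.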
With @vmemmap_shift > 0 and when struct pages are stored in RAM
(!altmap), most tail pages are reused. Consequently, the number of
unique struct pages is much smaller than the total number of struct
pages being mapped.
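A minimal sketch of the resulting skip, close in spirit to the patch
(the exact helper name and placement may differ from the final upstream
code):

	static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
						      unsigned long nr_pages)
	{
		/*
		 * Tail page reuse only applies when struct pages live in
		 * RAM (!altmap) and sizeof(struct page) is a power of 2,
		 * so each vmemmap page holds a whole number of struct
		 * pages. Then only the first two vmemmap pages are unique,
		 * and only the struct pages they hold need initializing.
		 */
		return is_power_of_2(sizeof(struct page)) && !altmap ?
			2 * (PAGE_SIZE / sizeof(struct page)) : nr_pages;
	}

memmap_init_zone_device() would then pass this value to
memmap_init_compound() in place of the full pfns_per_compound count.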
The altmap path is left alone since it does not support memory savings
based on compound devmaps.
Link: https://lkml.kernel.org/r/20220420155310.9712-6-joao.m.martins@oracle.com
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jane Chu <jane.chu@oracle.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>