author		Matthew Wilcox (Oracle) <willy@infradead.org>		2019-09-23 15:34:52 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>		2019-09-24 15:54:08 -0700
commit		4101196b19d7f905dca5dcf46cd35eb758cf06c0
tree		f19a6fe24db9f749ef3e8c808eba6a067a336aa8 /mm/shmem.c
parent		875d91b11a201276ac3a9ab79f8b0fa3dc4ee8fd
mm: page cache: store only head pages in i_pages
Transparent Huge Pages are currently stored in i_pages as pointers to
consecutive subpages. This patch changes that to storing consecutive
pointers to the head page in preparation for storing huge pages more
efficiently in i_pages.
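As a rough user-space model of the layout change (the slots array, struct model_page, and model_find_subpage() below are illustrative stand-ins for the XArray and struct page, not kernel code):

#include <assert.h>
#include <stdio.h>

#define NR_SUBPAGES 8	/* stand-in for compound_nr() of a huge page */

struct model_page { char payload; };

static struct model_page thp[NR_SUBPAGES];	/* thp[0] plays the head page */
static struct model_page *slots[NR_SUBPAGES];	/* plays the i_pages slots */

/* Lookup side: the subpage is recovered from the index alone, so storing
 * only the head page loses nothing (assumes natural alignment). */
static struct model_page *model_find_subpage(struct model_page *head,
					     unsigned long index)
{
	return head + (index & (NR_SUBPAGES - 1));
}

int main(void)
{
	unsigned long i;

	/* Before this commit: slots[i] = &thp[i] (consecutive subpages).
	 * After: every slot in the range stores the head page. */
	for (i = 0; i < NR_SUBPAGES; i++)
		slots[i] = &thp[0];

	for (i = 0; i < NR_SUBPAGES; i++)
		assert(model_find_subpage(slots[i], i) == &thp[i]);
	printf("all %d slots hold the head page\n", NR_SUBPAGES);
	return 0;
}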
Large parts of this are "inspired" by Kirill's patch
https://lore.kernel.org/lkml/20170126115819.58875-2-kirill.shutemov@linux.intel.com/
Kirill and Huang Ying contributed several fixes.
[willy@infradead.org: use compound_nr, squish uninit-var warning]
Link: http://lkml.kernel.org/r/20190731210400.7419-1-willy@infradead.org
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Acked-by: Jan Kara <jack@suse.cz>
Reviewed-by: Kirill Shutemov <kirill@shutemov.name>
Reviewed-by: Song Liu <songliubraving@fb.com>
Tested-by: Song Liu <songliubraving@fb.com>
Tested-by: William Kucharski <william.kucharski@oracle.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Tested-by: Qian Cai <cai@lca.pw>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Song Liu <songliubraving@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/shmem.c')
-rw-r--r--	mm/shmem.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 15d26c86e5ef..57a6aedf6649 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -631,7 +631,7 @@ static int shmem_add_to_page_cache(struct page *page,
 		if (xas_error(&xas))
 			goto unlock;
 next:
-		xas_store(&xas, page + i);
+		xas_store(&xas, page);
 		if (++i < nr) {
 			xas_next(&xas);
 			goto next;
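Because every slot in the range now holds the head page, readers must map an index back to its subpage themselves; elsewhere in this series (outside the mm/shmem.c-limited diffstat shown here) that is done by a find_subpage()-style helper. A simplified sketch, assuming the compound page is naturally aligned in the file (subpage_for_index is a hypothetical name, not the kernel's):

/* Sketch of a find_subpage()-style lookup; simplified, not the verbatim
 * kernel code. compound_nr() gives the number of subpages. */
static inline struct page *subpage_for_index(struct page *head, pgoff_t offset)
{
	/* For a naturally aligned compound page, the low index bits
	 * select the subpage: head + (offset % compound_nr(head)). */
	return head + (offset & (compound_nr(head) - 1));
}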