path: root/mm/mm_slot.h
author		Kefeng Wang <wangkefeng.wang@huawei.com>	2024-10-28 22:56:55 +0800
committer	Andrew Morton <akpm@linux-foundation.org>	2024-12-18 19:04:42 -0800
commit		8aca2bc96c833ba695ede7a45ad7784c836a262e (patch)
tree		42029e980fc3487e203e0741321e7d385f7bd412 /mm/mm_slot.h
parent		dad2dc9c92e0f93f33cebcb0595b8daa3d57473f (diff)
download	lwn-8aca2bc96c833ba695ede7a45ad7784c836a262e.tar.gz
		lwn-8aca2bc96c833ba695ede7a45ad7784c836a262e.zip
mm: use aligned address in clear_gigantic_page()
In the current kernel, hugetlb_no_page() calls folio_zero_user() with the fault address, which may not be aligned to the huge page size. folio_zero_user() may then pass that address on to clear_gigantic_page(), which requires a huge-page-size-aligned address. This can cause memory corruption or an information leak. In addition, use the more descriptive name 'addr_hint' instead of 'addr' for the clear_gigantic_page() parameter.

Link: https://lkml.kernel.org/r/20241028145656.932941-1-wangkefeng.wang@huawei.com
Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
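The shape of the fix is to derive the aligned base address from the caller's hint inside clear_gigantic_page() itself. The following is a minimal sketch of that idea, not the literal patch body (whose diff is not shown under this mm/mm_slot.h path filter); it assumes the usual mm helpers ALIGN_DOWN(), folio_size(), folio_page() and clear_user_highpage():

	/*
	 * Sketch: clear a gigantic page given a fault-address hint that may
	 * point anywhere inside the huge page.  The hint is rounded down to
	 * the folio boundary first, so each per-subpage clear is handed an
	 * address that actually belongs to the page being cleared.
	 */
	static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
					unsigned int nr_pages)
	{
		/* Align the hint down to the start of the huge page mapping. */
		unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
		int i;

		might_sleep();
		for (i = 0; i < nr_pages; i++) {
			cond_resched();
			clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
		}
	}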
Diffstat (limited to 'mm/mm_slot.h')
0 files changed, 0 insertions, 0 deletions