author | Linus Torvalds <torvalds@linux-foundation.org> | 2022-05-27 11:29:35 -0700 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2022-05-27 11:29:35 -0700 |
commit | 77fb622de1393b1d54f24f4f7ed98f84feeda502 (patch) | |
tree | c23243c07995b6a906b90ce4c0bfc1c514aab61f /mm/hugetlb.c | |
parent | 6f664045c8688c40ad0591abd6ab89db9ecd7945 (diff) | |
parent | 24c8e27e63224ce832b4723cb60632d3eddb55de (diff) | |
Merge tag 'mm-hotfixes-stable-2022-05-27' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull hotfixes from Andrew Morton:
"Six hotfixes.
The page_table_check one from Miaohe Lin is considered a minor thing
so it isn't marked for -stable. The remainder address pre-5.19 issues
and are cc:stable"
* tag 'mm-hotfixes-stable-2022-05-27' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
mm/page_table_check: fix accessing unmapped ptep
kexec_file: drop weak attribute from arch_kexec_apply_relocations[_add]
mm/page_alloc: always attempt to allocate at least one page during bulk allocation
hugetlb: fix huge_pmd_unshare address update
zsmalloc: fix races between asynchronous zspage free and page migration
Revert "mm/cma.c: remove redundant cma_mutex lock"
Diffstat (limited to 'mm/hugetlb.c')
-rw-r--r-- | mm/hugetlb.c | 9 |
1 file changed, 8 insertions(+), 1 deletion(-)
```diff
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 01f0e2e5ab48..7c468ac1d069 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6755,7 +6755,14 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 	pud_clear(pud);
 	put_page(virt_to_page(ptep));
 	mm_dec_nr_pmds(mm);
-	*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
+	/*
+	 * This update of passed address optimizes loops sequentially
+	 * processing addresses in increments of huge page size (PMD_SIZE
+	 * in this case). By clearing the pud, a PUD_SIZE area is unmapped.
+	 * Update address to the 'last page' in the cleared area so that
+	 * calling loop can move to first page past this area.
+	 */
+	*addr |= PUD_SIZE - PMD_SIZE;
 	return 1;
 }
```
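To make the arithmetic concrete: on x86_64 with 4 KiB base pages, HPAGE_SIZE * PTRS_PER_PTE in the removed line equals PUD_SIZE (2 MiB × 512 = 1 GiB), so the old expression rounded the address up to the next PUD boundary and stepped back one huge page. That works for an address in the middle of the area, but when the passed address is already PUD-aligned, ALIGN() is a no-op and the update moves the address backward, to the huge page *before* the area that was just unmapped. The sketch below is a userspace illustration only (not kernel code; it assumes a 64-bit build, x86_64-style sizes, and a simplified ALIGN() macro) comparing the two updates for that aligned case:

```c
#include <stdio.h>

/* Illustrative x86_64 values; both are arch- and config-dependent. */
#define PMD_SIZE (1UL << 21)	/* 2 MiB: one huge page */
#define PUD_SIZE (1UL << 30)	/* 1 GiB: area covered by one PUD entry */

/* Kernel-style ALIGN(): round x up to the next multiple of a (a power of 2). */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	/* An address already PUD-aligned when the PUD is cleared. */
	unsigned long addr = 4 * PUD_SIZE;

	/* Old update: ALIGN() of an already-aligned address is a no-op,
	 * so this steps BACKWARD to the huge page before the cleared area. */
	unsigned long old = ALIGN(addr, PUD_SIZE) - PMD_SIZE;

	/* New update: OR in the offset of the last PMD-sized page within
	 * a PUD area; the caller's next "addr += PMD_SIZE" then lands on
	 * the first page past the cleared area. */
	unsigned long new = addr | (PUD_SIZE - PMD_SIZE);

	printf("addr: %#lx\n", addr);	/* 0x100000000 */
	printf("old:  %#lx\n", old);	/* 0xffe00000  -- before the area */
	printf("new:  %#lx\n", new);	/* 0x13fe00000 -- last page in area */
	return 0;
}
```

For a PMD-aligned address anywhere in the area, the OR form always yields the last huge page of that PUD-sized area, so the caller's usual addr += PMD_SIZE step moves cleanly past it; the ALIGN()-based form only did so when the address was not already PUD-aligned.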