path: root/mm/hugetlb.c
author     Linus Torvalds <torvalds@linux-foundation.org>  2019-04-14 15:09:40 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2019-04-14 15:09:40 -0700
commit     6b3a707736301c2128ca85ce85fb13f60b5e350a
tree       2bf1892cf29121150adece8d1221ecd513a4e792  /mm/hugetlb.c
parent     4443f8e6ac7755cd775c70d08be8042dc2f936cb
parent     15fab63e1e57be9fdb5eec1bbc5916e9825e9acb
Merge branch 'page-refs' (page ref overflow)
Merge page ref overflow branch.

Jann Horn reported that he can overflow the page ref count with
sufficient memory (and a filesystem that is intentionally extremely
slow).

Admittedly it's not exactly easy.  To have more than four billion
references to a page requires a minimum of 32GB of kernel memory just
for the pointers to the pages, much less any metadata to keep track of
those pointers.  Jann needed a total of 140GB of memory and a specially
crafted filesystem that leaves all reads pending (in order to not ever
free the page references and just keep adding more).

Still, we have a fairly straightforward way to limit the two obvious
user-controllable sources of page references: direct-IO like page
references gotten through get_user_pages(), and the splice pipe page
duplication.  So let's just do that.

* branch page-refs:
  fs: prevent page refcount overflow in pipe_buf_get
  mm: prevent get_user_pages() from overflowing page refcount
  mm: add 'try_get_page()' helper function
  mm: make page ref count overflow check tighter and more explicit
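One of the merged commits adds a 'try_get_page()' helper that refuses to
take a reference once the count is no longer positive. As a rough
illustration of that semantics, here is a standalone userspace model;
struct page_model and try_get_page_model() are illustrative stand-ins,
not the kernel's implementation:

/*
 * Userspace sketch of the 'try_get_page()' idea: treat a non-positive
 * reference count as freed-or-overflowed and refuse to take a reference.
 * The kernel uses atomic_t refcounts and different internals.
 */
#include <stdbool.h>
#include <stdio.h>

struct page_model {
	int refcount;		/* the kernel count is an atomic_t */
};

static bool try_get_page_model(struct page_model *page)
{
	/* count <= 0: page is freed, or the count wrapped past INT_MAX */
	if (page->refcount <= 0)
		return false;
	page->refcount++;	/* safe to take one more reference */
	return true;
}

int main(void)
{
	struct page_model live = { .refcount = 1 };
	struct page_model wrapped = { .refcount = -3 };	/* overflowed count */

	printf("live:    %d\n", try_get_page_model(&live));	/* 1: granted */
	printf("wrapped: %d\n", try_get_page_model(&wrapped));	/* 0: refused */
	return 0;
}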
Diffstat (limited to 'mm/hugetlb.c')
-rw-r--r--  mm/hugetlb.c  13
1 file changed, 13 insertions, 0 deletions
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 97b1e0290c66..6cdc7b2d9100 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4299,6 +4299,19 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
page = pte_page(huge_ptep_get(pte));
+
+ /*
+ * Instead of doing 'try_get_page()' below in the same_page
+ * loop, just check the count once here.
+ */
+ if (unlikely(page_count(page) <= 0)) {
+ if (pages) {
+ spin_unlock(ptl);
+ remainder = 0;
+ err = -ENOMEM;
+ break;
+ }
+ }
same_page:
if (pages) {
pages[i] = mem_map_offset(page, pfn_offset);
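For context on why the 'page_count(page) <= 0' test above catches
overflow: the page reference count is a signed 32-bit value, so pushing
it past INT_MAX with enough user-controlled references wraps it
negative, and a non-positive count here means the page is either freed
or overflowed; follow_hugetlb_page() then fails with -ENOMEM instead of
handing out one more reference. Below is a small standalone
demonstration of that wraparound (the signed reinterpretation is
implementation-defined in ISO C, but behaves this way on common
two's-complement targets):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical count that has been driven all the way to INT_MAX. */
	uint32_t refs = 0x7fffffffu;

	refs += 2;	/* two more references wrap the 32-bit counter */

	/* Reinterpret as the signed value that a check like the one above reads. */
	int32_t count = (int32_t)refs;

	printf("count = %d -> %s\n", count,
	       count <= 0 ? "refused, overflow caught" : "granted");
	return 0;
}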