author	Adam Litke <agl@us.ibm.com>	2007-11-14 16:59:37 -0800
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2007-11-14 18:45:39 -0800
commit	348ea204cc23cda35faf962414b674c57da647d7
tree	fb27a17c13ca745bd3f0fb15d0d967bc5d5bc088	/mm/hugetlb.c
parent	6c55be8b962f1bdc592d579e81fc27b11ea53dfc
hugetlb: split alloc_huge_page into private and shared components
Hugetlbfs implements a quota system which can limit the amount of memory that
can be used by the filesystem. Before allocating a new huge page for a file,
the quota is checked and debited. The quota is then credited when truncating
the file. I found a few bugs in the code for both MAP_PRIVATE and MAP_SHARED
mappings. Before detailing the problems and my proposed solutions, we should
agree on a definition of quotas that properly addresses both private and
shared pages. Since the purpose of quotas is to limit total memory
consumption on a per-filesystem basis, I argue that all pages allocated by the
fs (private and shared) should be charged against quota.
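As a rough standalone illustration of that rule (a toy model, not kernel code; struct fs_quota, quota_debit() and quota_credit() are invented names), every page the filesystem hands out is debited against a per-filesystem counter and credited back when the page is returned:

#include <stdbool.h>
#include <stdio.h>

/* Toy model of a per-filesystem page quota (names invented). */
struct fs_quota {
	long limit;	/* maximum pages this filesystem may hold */
	long used;	/* pages currently charged */
};

/* Debit one page; fails when the filesystem is at its limit. */
static bool quota_debit(struct fs_quota *q)
{
	if (q->used >= q->limit)
		return false;
	q->used++;
	return true;
}

/* Credit one page back when it is freed or truncated away. */
static void quota_credit(struct fs_quota *q)
{
	q->used--;
}

int main(void)
{
	struct fs_quota q = { .limit = 2, .used = 0 };

	/* Every allocation, shared or private, must pass the quota. */
	printf("page 1: %s\n", quota_debit(&q) ? "ok" : "over quota");
	printf("page 2: %s\n", quota_debit(&q) ? "ok" : "over quota");
	printf("page 3: %s\n", quota_debit(&q) ? "ok" : "over quota");

	quota_credit(&q);	/* one page freed: quota balances again */
	printf("page 4: %s\n", quota_debit(&q) ? "ok" : "over quota");
	return 0;
}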
Private Mappings
================
The current code will debit quota for private pages sometimes, but will never
credit it. At a minimum, this causes a leak in the quota accounting which
renders the accounting essentially useless as it stands. Shared pages have a
one-to-one mapping with a hugetlbfs file and are easy to account by debiting
on allocation and crediting on truncate. Private pages are anonymous in
nature and have a many-to-one relationship with their hugetlbfs files (due to
copy on write). Because private pages are not indexed by the mapping's radix
tree, their quota cannot be credited at file truncation time. Crediting must
be done when the page is unmapped and freed.
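A small standalone sketch of why the credit must live in the free path (again
a toy model with invented names, not the kernel's types): a private page
carries a back-pointer to the quota it was charged against, since truncation
cannot find the page through the mapping:

#include <stdio.h>
#include <stdlib.h>

struct fs_quota {
	long used;
};

/* Toy page: a private copy is not reachable from the file's page
 * index, so only the page itself knows which quota it was charged
 * against (names invented for illustration). */
struct hpage {
	struct fs_quota *charged_to;
	int refcount;
};

static struct hpage *alloc_private_page(struct fs_quota *q)
{
	struct hpage *p = malloc(sizeof(*p));

	if (!p)
		return NULL;
	q->used++;		/* debit at allocation time */
	p->charged_to = q;
	p->refcount = 1;
	return p;
}

/* The credit happens here, on the final put, because truncation
 * never sees this page: it is not indexed by the mapping. */
static void put_hpage(struct hpage *p)
{
	if (--p->refcount == 0) {
		p->charged_to->used--;	/* credit at free time */
		free(p);
	}
}

int main(void)
{
	struct fs_quota q = { 0 };
	struct hpage *p = alloc_private_page(&q);

	if (p)
		put_hpage(p);	/* last reference: quota credited */
	printf("pages still charged: %ld\n", q.used);	/* 0 */
	return 0;
}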
Shared Pages
============
I discovered an issue concerning the interaction between the MAP_SHARED
reservation system and quotas. Since quota is not checked until page
instantiation, an over-quota mmap/reservation will initially succeed. When
instantiating the first over-quota page, the program will receive SIGBUS.
This is inconsistent since the reservation is supposed to be a guarantee. The
solution is to debit the full amount of quota at reservation time and credit
the unused portion when the reservation is released.
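That fix can be sketched as follows (standalone toy model, invented names):
debit the whole reservation at mmap time so an over-quota request fails up
front, and credit back the uninstantiated remainder when the reservation is
released:

#include <stdbool.h>
#include <stdio.h>

struct fs_quota {
	long limit;
	long used;
};

/* Debit the full reservation at mmap time; an over-quota request
 * fails cleanly here instead of faulting with SIGBUS later. */
static bool reserve_pages(struct fs_quota *q, long npages)
{
	if (q->used + npages > q->limit)
		return false;
	q->used += npages;
	return true;
}

/* At munmap time, credit back the part of the reservation that was
 * never instantiated; instantiated pages stay charged and are
 * credited individually as each one is freed. */
static void release_reservation(struct fs_quota *q, long reserved,
				long instantiated)
{
	q->used -= reserved - instantiated;
}

int main(void)
{
	struct fs_quota q = { .limit = 10, .used = 0 };

	if (!reserve_pages(&q, 12))
		printf("mmap fails cleanly: reservation over quota\n");

	if (reserve_pages(&q, 8)) {	/* quota now holds all 8 pages */
		long faulted = 3;	/* pages actually instantiated */

		release_reservation(&q, 8, faulted);
		printf("still charged for %ld instantiated pages\n", q.used);
	}
	return 0;
}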
This patch series brings quotas back in line by making the following
modifications:
* Private pages
- Debit quota in alloc_huge_page()
- Credit quota in free_huge_page()
* Shared pages
- Debit quota for entire reservation at mmap time
- Credit quota for instantiated pages in free_huge_page()
- Credit quota for unused reservation at munmap time
This patch:
The shared page reservation and dynamic pool resizing features have made the
allocation of private and shared huge pages quite different. Splitting the
private- and shared-specific portions of the process into their own functions
greatly improves readability. alloc_huge_page() now calls the proper helper
and performs the operations common to both cases.
[akpm@linux-foundation.org: coding-style cleanups]
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Ken Chen <kenchen@google.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: David Gibson <hermes@gibson.dropbear.id.au>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/hugetlb.c')
-rw-r--r--	mm/hugetlb.c	46
1 file changed, 27 insertions(+), 19 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e2c80631d36a..f43b3dca12b5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -353,35 +353,43 @@ void return_unused_surplus_pages(unsigned long unused_resv_pages)
 	}
 }
 
-static struct page *alloc_huge_page(struct vm_area_struct *vma,
-				    unsigned long addr)
+
+static struct page *alloc_huge_page_shared(struct vm_area_struct *vma,
+					   unsigned long addr)
 {
-	struct page *page = NULL;
-	int use_reserved_page = vma->vm_flags & VM_MAYSHARE;
+	struct page *page;
 
 	spin_lock(&hugetlb_lock);
-	if (!use_reserved_page && (free_huge_pages <= resv_huge_pages))
-		goto fail;
-
 	page = dequeue_huge_page(vma, addr);
-	if (!page)
-		goto fail;
-
 	spin_unlock(&hugetlb_lock);
-	set_page_refcounted(page);
 	return page;
+}
 
-fail:
-	spin_unlock(&hugetlb_lock);
+static struct page *alloc_huge_page_private(struct vm_area_struct *vma,
+					    unsigned long addr)
+{
+	struct page *page = NULL;
 
-	/*
-	 * Private mappings do not use reserved huge pages so the allocation
-	 * may have failed due to an undersized hugetlb pool.  Try to grab a
-	 * surplus huge page from the buddy allocator.
-	 */
-	if (!use_reserved_page)
+	spin_lock(&hugetlb_lock);
+	if (free_huge_pages > resv_huge_pages)
+		page = dequeue_huge_page(vma, addr);
+	spin_unlock(&hugetlb_lock);
+	if (!page)
 		page = alloc_buddy_huge_page(vma, addr);
+	return page;
+}
 
+static struct page *alloc_huge_page(struct vm_area_struct *vma,
+				    unsigned long addr)
+{
+	struct page *page;
+
+	if (vma->vm_flags & VM_MAYSHARE)
+		page = alloc_huge_page_shared(vma, addr);
+	else
+		page = alloc_huge_page_private(vma, addr);
+	if (page)
+		set_page_refcounted(page);
 	return page;
 }