author | Lee Schermerhorn <Lee.Schermerhorn@hp.com> | 2009-09-21 17:01:04 -0700 |
---|---|---|
committer | Greg Kroah-Hartman <gregkh@suse.de> | 2009-10-05 08:28:03 -0700 |
commit | c7be4a49273fc339f6eb3cf785ef454b723b3d00 (patch) | |
tree | 0b0506a0360cabf1c351456ca5202284d52e0003 | |
parent | ec49bc1bdc7c6e92873373ff61206c366d63a50a (diff) | |
hugetlb: restore interleaving of bootmem huge pages (2.6.31)
Not upstream, as it is fixed differently in 2.6.32.
I noticed that alloc_bootmem_huge_page() will only advance to the next
node on failure to allocate a huge page. I asked about this on linux-mm
and linux-numa, cc'ing the usual huge page suspects. Mel Gorman
responded:
> I strongly suspect that the same node being used until allocation
> failure instead of round-robin is an oversight and not deliberate
> at all. It appears to be a side-effect of a fix made way back in
> commit 63b4613c3f0d4b724ba259dc6c201bb68b884e1a ["hugetlb: fix
> hugepage allocation with memoryless nodes"]. Prior to that patch
> it looked like allocations would always round-robin even when
> allocation was successful.
Andy Whitcroft countered that the existing behavior looked like Andi
Kleen's original implementation and suggested that we ask him. We did,
and Andi replied that his intention was to interleave the allocations. So,
...
This patch moves the advance of the hstate's next node, from which to
allocate, up before the test for success of the attempted allocation. The
next node to allocate from is now advanced unconditionally, interleaving
successful allocations over the nodes with sufficient contiguous memory
and skipping over nodes that fail the huge page allocation attempt.
Note that alloc_bootmem_huge_page() will only be called for huge pages of
order >= MAX_ORDER, i.e. pages too large to come from the buddy allocator.
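
The effect of moving the advance is easiest to see in a small model. The
following is a hypothetical, stand-alone userspace sketch (the node count,
the free_chunks bookkeeping, and the can_alloc() and advance() helpers are
all made up for illustration; only the position of the advance relative to
the success test mirrors the patch):

```c
/*
 * Hypothetical stand-alone model of the bootmem allocation loop; not
 * kernel code.  It contrasts "advance only on failure" (the pre-patch
 * behaviour) with "advance unconditionally" (this patch) and prints
 * which node each successful allocation lands on.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

static int next_nid;	/* models h->hugetlb_next_nid */

/* Hypothetical predicate: does this node still have a free huge chunk? */
static bool can_alloc(const int *free_chunks, int nid)
{
	return free_chunks[nid] > 0;
}

static void advance(void)	/* models hstate_next_node(h) */
{
	next_nid = (next_nid + 1) % NR_NODES;
}

static void simulate(bool advance_before_test, int allocations)
{
	int free_chunks[NR_NODES] = { 8, 8, 8, 8 };

	next_nid = 0;
	printf("%s:", advance_before_test ? "patched " : "original");
	while (allocations-- > 0) {
		int nr_nodes = NR_NODES;

		while (nr_nodes--) {
			int nid = next_nid;
			bool ok = can_alloc(free_chunks, nid);

			if (advance_before_test)
				advance();	/* + hstate_next_node(h); */
			if (ok) {
				free_chunks[nid]--;
				printf(" node%d", nid);
				break;
			}
			if (!advance_before_test)
				advance();	/* - hstate_next_node(h); */
		}
	}
	printf("\n");
}

int main(void)
{
	simulate(false, 8);	/* original: all eight pages come from node0 */
	simulate(true, 8);	/* patched:  node0 node1 node2 node3 node0 ... */
	return 0;
}
```

With the advance after the success test, every successful allocation comes
from node 0 until that node is exhausted; with the advance before the test,
successful allocations interleave across the nodes, which is the behaviour
this patch restores.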
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: David Rientjes <rientjes@google.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Eric Whitney <eric.whitney@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-rw-r--r-- | mm/hugetlb.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2403eb9a03f0..42f7e1a3fcbf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1017,6 +1017,7 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
 				NODE_DATA(h->hugetlb_next_nid),
 				huge_page_size(h), huge_page_size(h), 0);
 
+		hstate_next_node(h);
 		if (addr) {
 			/*
 			 * Use the beginning of the huge page to store the
@@ -1026,7 +1027,6 @@ int __weak alloc_bootmem_huge_page(struct hstate *h)
 			m = addr;
 			goto found;
 		}
-		hstate_next_node(h);
 		nr_nodes--;
 	}
 	return 0;
```
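
The interleaving relies on hstate_next_node() stepping h->hugetlb_next_nid
round-robin through the online nodes and wrapping back to the first one
when it runs off the end. Below is a rough stand-alone sketch of such a
wrap-around advance; the online_nodes array and the next_online_node()
helper are illustrative stand-ins for the kernel's node_online_map and
nodemask helpers, not the actual implementation:

```c
/*
 * Illustrative round-robin advance over a set of "online" node ids,
 * wrapping back to the first one.  A stand-in for the kind of nodemask
 * round-robin that hstate_next_node() performs; not kernel code.
 */
#include <stdio.h>

static const int online_nodes[] = { 0, 2, 3 };	/* e.g. node 1 is offline */
static const int nr_online = sizeof(online_nodes) / sizeof(online_nodes[0]);

/* Return the online node after 'nid', wrapping past the last one. */
static int next_online_node(int nid)
{
	for (int i = 0; i < nr_online; i++)
		if (online_nodes[i] == nid)
			return online_nodes[(i + 1) % nr_online];
	return online_nodes[0];	/* 'nid' not online: restart at the first */
}

int main(void)
{
	int nid = online_nodes[0];

	for (int i = 0; i < 6; i++) {	/* prints: 2 3 0 2 3 0 */
		nid = next_online_node(nid);
		printf("%d ", nid);
	}
	printf("\n");
	return 0;
}
```

Because the advance now runs whether or not the bootmem allocation
succeeded, a successful allocation no longer pins the allocator to one
node: each allocated huge page leaves hugetlb_next_nid pointing at the
following online node, so the next page is attempted there first.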