author: Vlastimil Babka <vbabka@suse.cz> (2017-11-17 15:26:34 -0800)
committer: Linus Torvalds <torvalds@linux-foundation.org> (2017-11-17 16:10:00 -0800)
commit: b527cfe5bc23208cf9a346879501333cec638aba
tree: 3302cb8445a9ae8edd1ac187ffef8dc8f84f9781 /mm
parent: 21dc7e023611fbcf8e38f255731bcf3cc38e7638
mm, compaction: extend pageblock_skip_persistent() to all compound pages
pageblock_skip_persistent() checks for HugeTLB pages of pageblock order.
When clearing pageblock skip bits for compaction, the bits are not
cleared for such pageblocks, because they cannot contain base pages
suitable for migration, nor free pages to use as migration targets.
This optimization can simply be extended to all compound pages of order equal
to or larger than pageblock order, because migrating such pages (if they
support it) cannot help sub-pageblock fragmentation. This includes THPs and
also gigantic HugeTLB pages, which the current implementation does not
persistently skip, due to a strict pageblock_order equality check and its
failure to recognize tail pages.
While THP pages are generally less "persistent" than HugeTLB, we can still
expect that if a THP exists at the point of __reset_isolation_suitable(), it
will also exist during the subsequent compaction run. The time difference here
could actually be smaller than the interval between a compaction run that sets
a (non-persistent) skip bit on a THP and the next compaction run that observes
it.
Link: http://lkml.kernel.org/r/20171102121706.21504-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
mm/compaction.c | 25 +++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)
```diff
diff --git a/mm/compaction.c b/mm/compaction.c
index 94b5c0865dd1..e8f5b4e2cb05 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -219,17 +219,21 @@ static void reset_cached_positions(struct zone *zone)
 }
 
 /*
- * Hugetlbfs pages should consistenly be skipped until updated by the hugetlb
- * subsystem. It is always pointless to compact pages of pageblock_order and
- * the free scanner can reconsider when no longer huge.
+ * Compound pages of >= pageblock_order should consistenly be skipped until
+ * released. It is always pointless to compact pages of such order (if they are
+ * migratable), and the pageblocks they occupy cannot contain any free pages.
  */
-static bool pageblock_skip_persistent(struct page *page, unsigned int order)
+static bool pageblock_skip_persistent(struct page *page)
 {
-	if (!PageHuge(page))
+	if (!PageCompound(page))
 		return false;
-	if (order != pageblock_order)
-		return false;
-	return true;
+
+	page = compound_head(page);
+
+	if (compound_order(page) >= pageblock_order)
+		return true;
+
+	return false;
 }
 
 /*
@@ -256,7 +260,7 @@ static void __reset_isolation_suitable(struct zone *zone)
 			continue;
 		if (zone != page_zone(page))
 			continue;
-		if (pageblock_skip_persistent(page, compound_order(page)))
+		if (pageblock_skip_persistent(page))
 			continue;
 
 		clear_pageblock_skip(page);
@@ -323,8 +327,7 @@ static inline bool isolation_suitable(struct compact_control *cc,
 	return true;
 }
 
-static inline bool pageblock_skip_persistent(struct page *page,
-					     unsigned int order)
+static inline bool pageblock_skip_persistent(struct page *page)
 {
 	return false;
 }
```