author | Johannes Weiner <hannes@cmpxchg.org> | 2014-01-29 14:05:41 -0800 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-01-29 16:22:39 -0800 |
commit | a1c3bfb2f67ef766de03f1f56bdfff9c8595ab14 (patch) | |
tree | e06405192d674561bf2718ab03879c32103ae34e /mm/page-writeback.c | |
parent | a804552b9a15c931cfc2a92a2e0aed1add8b580a (diff) | |
mm/page-writeback.c: do not count anon pages as dirtyable memory
The VM is currently heavily tuned to avoid swapping. Whether that is
good or bad is a separate discussion, but as long as the VM won't swap
to make room for dirty cache, we cannot consider anonymous pages when
calculating the amount of dirtyable memory, the baseline to which
dirty_background_ratio and dirty_ratio are applied.
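For reference, a minimal sketch of how that baseline is consumed,
loosely following the global_dirty_limits() arithmetic; the
dirty_bytes/dirty_background_bytes variants and per-task adjustments
are ignored, and the ratio values below are only hypothetical defaults:

/* Hypothetical stand-ins for the sysctl percentages. */
static int dirty_background_ratio = 10;
static int vm_dirty_ratio = 20;

/*
 * Both thresholds are simple percentages of the dirtyable baseline,
 * so overestimating that baseline inflates both limits.
 */
static void sketch_dirty_limits(unsigned long dirtyable_pages,
				unsigned long *pbackground,
				unsigned long *pdirty)
{
	*pbackground = dirtyable_pages * dirty_background_ratio / 100;
	*pdirty = dirtyable_pages * vm_dirty_ratio / 100;
}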
A simple workload that occupies a significant fraction of memory (40+%,
depending on memory layout, storage speeds, etc.) with anon/tmpfs pages
and uses the remainder for a streaming writer demonstrates this problem.
In that case, the actual cache pages are a small fraction of what is
considered dirtyable overall, which results in a relatively large
portion of the cache pages being dirtied. As kswapd starts rotating
these, random tasks enter direct reclaim and stall on IO.
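To make the overestimate concrete, a toy calculation with made-up
numbers (not taken from the report; the dirty_balance_reserve
subtraction is left out for brevity):

#include <stdio.h>

int main(void)
{
	/* Hypothetical layout, in 4 KiB pages: 8 GiB total, ~60% anon/tmpfs. */
	unsigned long total = 2UL * 1024 * 1024;
	unsigned long anon  = total * 60 / 100;
	unsigned long file  = total * 35 / 100;
	unsigned long free  = total - anon - file;
	int vm_dirty_ratio  = 20;

	unsigned long old_dirtyable = free + anon + file;	/* anon counted */
	unsigned long new_dirtyable = free + file;		/* this patch */

	printf("old limit: %lu dirty pages allowed vs %lu file pages\n",
	       old_dirtyable * vm_dirty_ratio / 100, file);
	printf("new limit: %lu dirty pages allowed vs %lu file pages\n",
	       new_dirtyable * vm_dirty_ratio / 100, file);
	return 0;
}

With these hypothetical numbers, the old baseline allows roughly 57% of
the file cache to be dirty, while the new one caps it at around 23%.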
Only consider free pages and file pages dirtyable.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reported-by: Tejun Heo <tj@kernel.org>
Tested-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page-writeback.c')
-rw-r--r-- | mm/page-writeback.c | 6 |
1 file changed, 4 insertions, 2 deletions
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 61119b8a11e6..2d30e2cfe804 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -205,7 +205,8 @@ static unsigned long zone_dirtyable_memory(struct zone *zone)
 	nr_pages = zone_page_state(zone, NR_FREE_PAGES);
 	nr_pages -= min(nr_pages, zone->dirty_balance_reserve);
 
-	nr_pages += zone_reclaimable_pages(zone);
+	nr_pages += zone_page_state(zone, NR_INACTIVE_FILE);
+	nr_pages += zone_page_state(zone, NR_ACTIVE_FILE);
 
 	return nr_pages;
 }
@@ -258,7 +259,8 @@ static unsigned long global_dirtyable_memory(void)
 	x = global_page_state(NR_FREE_PAGES);
 	x -= min(x, dirty_balance_reserve);
 
-	x += global_reclaimable_pages();
+	x += global_page_state(NR_INACTIVE_FILE);
+	x += global_page_state(NR_ACTIVE_FILE);
 
 	if (!vm_highmem_is_dirtyable)
 		x -= highmem_dirtyable_memory(x);