author     Nick Piggin <npiggin@suse.de>	2009-01-06 14:39:06 -0800
committer  Greg Kroah-Hartman <gregkh@suse.de>	2009-01-24 16:36:26 -0800
commit     b37c30215d093eb5b3a1f23b7d46cbda8e120a4b (patch)
tree       4b6dfc0b26ed1c62af72d2465b23a3163458f092 /mm
parent     c1496489b4fd832ca61bb17a33dfaaa0123abbcb (diff)
mm: write_cache_pages early loop termination
commit bd19e012f6fd3b7309689165ea865cbb7bb88c1e upstream.
We'd like to break out of the loop early in many situations, however the
existing code has been setting mapping->writeback_index past the final
page in the pagevec lookup for cyclic writeback. This is a problem if we
don't process all pages up to the final page.
Currently the code mostly keeps writeback_index reasonable and hacks
around this by not breaking out of the loop or writing pages outside the
range in these cases. Keep track of a real "done index" that enables us
to terminate the loop in a much more flexible manner.
Needed by the subsequent patch to preserve writepage errors, and then
further patches to break out of the loop early for other reasons. However,
there are no functional changes with this patch alone.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'mm')
-rw-r--r--	mm/page-writeback.c	6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 99c24b1ec7cc..3ca18f0bdce6 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -875,6 +875,7 @@ int write_cache_pages(struct address_space *mapping,
 	pgoff_t uninitialized_var(writeback_index);
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
+	pgoff_t done_index;
 	int cycled;
 	int range_whole = 0;
@@ -900,6 +901,7 @@ int write_cache_pages(struct address_space *mapping,
 		cycled = 1; /* ignore range_cyclic tests */
 	}
 retry:
+	done_index = index;
 	while (!done && (index <= end) &&
 	       (nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
 			PAGECACHE_TAG_DIRTY,
@@ -909,6 +911,8 @@ retry:
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pvec.pages[i];
 
+			done_index = page->index + 1;
+
 			/*
 			 * At this point we hold neither mapping->tree_lock nor
 			 * lock on the page itself: the page may be truncated or
@@ -970,7 +974,7 @@ retry:
 		goto retry;
 	}
 	if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0))
-		mapping->writeback_index = index;
+		mapping->writeback_index = done_index;
 	if (wbc->range_cont)
 		wbc->range_start = index << PAGE_CACHE_SHIFT;