author | Rik van Riel <riel@redhat.com> | 2008-10-18 20:26:35 -0700
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2008-10-20 08:50:25 -0700
commit | 7e9cd484204f9e5b316ed35b241abf088d76e0af (patch) |
tree | 79f2567e7bb96af2d97d8d5407cc990e26eda95c /mm/swap_state.c |
parent | 556adecba110bf5f1db6c6b56416cfab5bcab698 (diff) |
vmscan: fix pagecache reclaim referenced bit check
Moving referenced pages back to the head of the active list creates a huge
scalability problem, because by the time a large memory system finally
runs out of free memory, every single page in the system will have been
referenced.
Not only do we not have the time to scan every single page on the active
list, but since they will all have the referenced bit set, that bit
conveys no useful information.
A more scalable solution is to just move every page that hits the end of
the active list to the inactive list.
We clear the referenced bit of mapped pages, which need just one
reference to be moved back onto the active list.
Unmapped pages will be moved back to the active list after two references
(see mark_page_accessed). We preserve the PG_referenced flag on unmapped
pages to preserve accesses that were made while the page was on the active
list.
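The policy described above can be sketched as a simplified model. This is a hypothetical illustration, not the actual mm/vmscan.c code: the struct and function names (`deactivate`, `mark_accessed`, `keep_page`) are invented for clarity, and the real kernel detects references to mapped pages through rmap and page-table accessed bits rather than a boolean flag.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified page model -- not the real struct page. */
struct page {
	bool referenced;	/* models PG_referenced */
	bool mapped;		/* page has user mappings */
	bool active;		/* true: active list, false: inactive list */
};

/*
 * A page reaching the tail of the active list is always moved to the
 * inactive list.  Mapped pages get their referenced bit cleared, so a
 * single new reference is enough to reactivate them.  Unmapped pages
 * keep PG_referenced, preserving an access made while the page was
 * still on the active list.
 */
static void deactivate(struct page *p)
{
	p->active = false;
	if (p->mapped)
		p->referenced = false;
}

/*
 * Modeled loosely after mark_page_accessed(): an unmapped page needs
 * two references to be promoted -- the first sets PG_referenced, the
 * second moves the page back to the active list.
 */
static void mark_accessed(struct page *p)
{
	if (!p->active && p->referenced) {
		p->active = true;
		p->referenced = false;
	} else {
		p->referenced = true;
	}
}

/*
 * Inactive-list scan: a mapped page found referenced (in the kernel,
 * via the rmap walk) is reactivated after just one reference.
 */
static bool keep_page(struct page *p)
{
	if (p->mapped && p->referenced) {
		p->active = true;
		p->referenced = false;
		return true;
	}
	return false;
}
```

In this model an unmapped page deactivated with PG_referenced still set is promoted by its very next access, which is exactly the "preserve accesses made while on the active list" behavior the changelog describes.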
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/swap_state.c')
0 files changed, 0 insertions, 0 deletions