| | | |
|---|---|---|
| author | Rafael Aquini <aquini@redhat.com> | 2013-06-12 14:04:49 -0700 |
| committer | Ben Hutchings <ben@decadent.org.uk> | 2013-06-19 02:17:01 +0100 |
| commit | 1d9910635dfc24ecac41129b5f4752b403194ab4 (patch) | |
| tree | ab4ce3509e1edad32560b3e57ba61d1bebb73d41 /mm | |
| parent | d3bfb85149dc6f19e914125c65167861c05b1511 (diff) | |
| download | lwn-1d9910635dfc24ecac41129b5f4752b403194ab4.tar.gz lwn-1d9910635dfc24ecac41129b5f4752b403194ab4.zip | |
swap: avoid read_swap_cache_async() race to deadlock while waiting on discard I/O completion
commit cbab0e4eec299e9059199ebe6daf48730be46d2b upstream.
read_swap_cache_async() can race against get_swap_page(), and stumble
across a SWAP_HAS_CACHE entry in the swap map whose page wasn't brought
into the swapcache yet.
This swap_map state is expected to be transient, but the discard performed at scan_swap_map() inserts a wait for I/O completion, which makes the thread in read_swap_cache_async() loop around its -EEXIST case while the other end, in get_swap_page(), is scheduled away inside scan_swap_map(). This can leave the system deadlocked if the I/O completion happens to be waiting on the CPU waitqueue where read_swap_cache_async() is busy looping and the kernel is not preemptible (!CONFIG_PREEMPT).
This patch introduces a cond_resched() call so that the aforementioned read_swap_cache_async() busy loop yields the CPU when necessary, avoiding the subtle race window (a user-space sketch of the same pattern follows the diff below).
Signed-off-by: Rafael Aquini <aquini@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Shaohua Li <shli@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/swap_state.c | 18
1 file changed, 17 insertions, 1 deletion
```diff
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 7704d9cd4658..7b3dadd136e1 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -314,8 +314,24 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
		 * Swap entry may have been freed since our caller observed it.
		 */
		err = swapcache_prepare(entry);
-		if (err == -EEXIST) {	/* seems racy */
+		if (err == -EEXIST) {
			radix_tree_preload_end();
+			/*
+			 * We might race against get_swap_page() and stumble
+			 * across a SWAP_HAS_CACHE swap_map entry whose page
+			 * has not been brought into the swapcache yet, while
+			 * the other end is scheduled away waiting on discard
+			 * I/O completion at scan_swap_map().
+			 *
+			 * In order to avoid turning this transitory state
+			 * into a permanent loop around this -EEXIST case
+			 * if !CONFIG_PREEMPT and the I/O completion happens
+			 * to be waiting on the CPU waitqueue where we are now
+			 * busy looping, we just conditionally invoke the
+			 * scheduler here, if there are some more important
+			 * tasks to run.
+			 */
+			cond_resched();
			continue;
		}
		if (err) {		/* swp entry is obsolete ? */
```
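To make the pattern concrete, here is a minimal user-space sketch of the same idea, not the kernel code: the waiter/completer thread names, the try_prepare() helper, and the use of sched_yield() as a stand-in for cond_resched() are all illustrative assumptions. A retry loop that spins on -EEXIST should explicitly give the scheduler a chance to run the task it is implicitly waiting on; user space is preemptible, so this program always finishes, but it shows where the yield belongs.

```c
/*
 * Minimal user-space analogue of the fix (hypothetical names; sched_yield()
 * stands in for the kernel's cond_resched()).  A "waiter" thread retries an
 * operation that keeps failing with -EEXIST until a "completer" thread has
 * finished its slow work.  Yielding inside the retry loop is what lets the
 * completer run even when both threads share one CPU and nothing else
 * forces a reschedule.
 */
#define _POSIX_C_SOURCE 200809L
#include <errno.h>
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_int cache_ready;          /* set once the completer is done */

/* Models swapcache_prepare(): -EEXIST until the entry is usable. */
static int try_prepare(void)
{
	return atomic_load(&cache_ready) ? 0 : -EEXIST;
}

static void *completer(void *arg)
{
	struct timespec delay = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };

	(void)arg;
	nanosleep(&delay, NULL);        /* models waiting on discard I/O */
	atomic_store(&cache_ready, 1);
	return NULL;
}

static void *waiter(void *arg)
{
	(void)arg;
	for (;;) {
		int err = try_prepare();

		if (err == -EEXIST) {
			sched_yield();  /* the analogue of cond_resched() */
			continue;
		}
		break;
	}
	puts("waiter: prepare succeeded");
	return NULL;
}

int main(void)
{
	pthread_t w, c;

	pthread_create(&w, NULL, waiter, NULL);
	pthread_create(&c, NULL, completer, NULL);
	pthread_join(w, NULL);
	pthread_join(c, NULL);
	return 0;
}
```

Build with `cc -pthread sketch.c`. The kernel case is stricter: with !CONFIG_PREEMPT nothing pushes the busy-looping task off the CPU on its own, so without the explicit cond_resched() the work needed to complete the discard I/O may never get to run on that CPU.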