author | NeilBrown <neilb@suse.de> | 2024-04-08 12:09:17 +1000
committer | Chuck Lever <chuck.lever@oracle.com> | 2024-05-06 09:07:16 -0400
commit | eec7620800081e27dbf8019ac2e66259f0d5bf6f (patch)
tree | dddf4b7cae4bee44dc0bae09c66730629203d31a /fs/nfsd/state.h
parent | b3f03739ca8cd6058a5d8754ea1354bc21fa0f2f (diff)
nfsd: replace rp_mutex to avoid deadlock in move_to_close_lru()
move_to_close_lru() waits for sc_count to become zero while holding
rp_mutex. This can deadlock if another thread holds a reference and is
waiting for rp_mutex.

By the time we get to move_to_close_lru() the openowner is unhashed and
cannot be found any more, so code waiting for the mutex can safely
retry the lookup once move_to_close_lru() has started.

Change rp_mutex to an atomic_t with three states:

 RP_UNLOCKED - state is still hashed, not locked for reply
 RP_LOCKED   - state is still hashed, is locked for reply
 RP_UNHASHED - state is not hashed, no code can get a lock.

Use wait_var_event() to wait for either the lock or for the owner to be
unhashed. In the latter case, retry the lookup.
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Diffstat (limited to 'fs/nfsd/state.h')
-rw-r--r-- | fs/nfsd/state.h | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 2ed0fcf879fd..a6261754deed 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -486,7 +486,7 @@ struct nfs4_replay {
 	unsigned int		rp_buflen;
 	char			*rp_buf;
 	struct knfsd_fh		rp_openfh;
-	struct mutex		rp_mutex;
+	atomic_t		rp_locked;
 	char			rp_ibuf[NFSD4_REPLAY_ISIZE];
 };
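
The hunk above only swaps the field type; the code that takes and releases
the lock lives elsewhere in nfsd (the diffstat here is limited to state.h).
As a rough illustration of the scheme the commit message describes, built
on atomic_cmpxchg() and wait_var_event()/wake_up_var(), a lock/unlock/unhash
trio could look like the sketch below. The helper names (replay_lock,
replay_unlock, replay_unhash) are hypothetical and are not the functions
this patch actually adds to fs/nfsd/nfs4state.c.

/*
 * Illustrative sketch only, using the RP_* states from the commit
 * message; the helper names are made up for this example.
 */
#include <linux/atomic.h>
#include <linux/wait_bit.h>

enum {
	RP_UNLOCKED,	/* owner still hashed, reply not locked */
	RP_LOCKED,	/* owner still hashed, reply locked */
	RP_UNHASHED,	/* owner unhashed, the lock can never be taken */
};

/*
 * Try to take the reply lock.  Returns true on success; returns false
 * if the owner was unhashed, in which case the caller should drop its
 * reference and retry the lookup instead of blocking forever.
 */
static bool replay_lock(atomic_t *rp_locked)
{
	int prev;

	for (;;) {
		prev = atomic_cmpxchg(rp_locked, RP_UNLOCKED, RP_LOCKED);
		if (prev == RP_UNLOCKED)
			return true;		/* lock acquired */
		if (prev == RP_UNHASHED)
			return false;		/* owner gone; retry lookup */
		/* RP_LOCKED: sleep until unlocked or unhashed, then retry */
		wait_var_event(rp_locked,
			       atomic_read(rp_locked) != RP_LOCKED);
	}
}

static void replay_unlock(atomic_t *rp_locked)
{
	atomic_set(rp_locked, RP_UNLOCKED);
	wake_up_var(rp_locked);		/* wake threads in wait_var_event() */
}

/*
 * Called once the openowner is unhashed, e.g. before waiting for
 * sc_count in move_to_close_lru(); any waiter then fails the lock and
 * redoes the lookup rather than deadlocking on the old mutex.
 */
static void replay_unhash(atomic_t *rp_locked)
{
	atomic_set(rp_locked, RP_UNHASHED);
	wake_up_var(rp_locked);
}

A caller that gets false back from replay_lock() drops its reference to the
openowner and repeats the lookup, which is the retry the commit message
relies on to break the deadlock.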