author     Pavel Begunkov <asml.silence@gmail.com>   2021-02-04 13:52:02 +0000
committer  Jens Axboe <axboe@kernel.dk>              2021-02-04 08:05:46 -0700
commit     7335e3bf9d0a92be09bb4f38d06ab22c40f0fead (patch)
tree       a0f8ea4b3110ad63f38aed94f8560ed7644ebf28 /fs/io_uring.c
parent     6bf985dc50dd882a95fffa9c7eef0d1416f512e6 (diff)
io_uring: don't forget to adjust io_size
We have an invariant in io_read(): the amount we are trying to read is spilled into both the iterator and the io_size variable. The latter controls the decision making about whether to do read-retries. However, io_size is adjusted only after the first read attempt, so if we happen to go for a third retry within a single call to io_read(), io_size ends up greater than the count left in the iterator, which may lead to various side effects, up to live-locking.

Modify io_size on each retry.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
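
To make the broken invariant concrete, here is a minimal user-space sketch, not the kernel code: the names fake_read(), read_loop() and adjust_every_retry, as well as the byte counts, are invented for illustration, and the async buffered-retry machinery is ignored. The loop keeps two counters of "bytes still to read" (one standing in for the iterator's count, one for io_size), and they stay consistent only if io_size is decremented on every pass; if it is decremented only after the first attempt, the two drift apart as soon as a second short read happens.

#include <stdio.h>
#include <stdbool.h>

/* Pretend short reads: each attempt consumes at most 8 bytes. */
static long fake_read(long remaining)
{
	return remaining > 8 ? 8 : remaining;
}

static void read_loop(long total, bool adjust_every_retry)
{
	long remaining = total;	/* what the iterator tracks */
	long io_size = total;	/* what the retry logic consults */
	int attempt = 0;
	long ret;

	while ((ret = fake_read(remaining)) > 0) {
		attempt++;
		if (ret == io_size) {	/* read everything that was left */
			remaining -= ret;
			io_size -= ret;
			break;
		}
		remaining -= ret;	/* the iterator always advances */
		if (adjust_every_retry || attempt == 1)
			io_size -= ret;	/* the buggy variant skips this from attempt 2 on */
		printf("  attempt %d: iterator has %ld left, io_size says %ld\n",
		       attempt, remaining, io_size);
	}
	printf("  done after %d attempts: iterator has %ld left, io_size says %ld\n",
	       attempt, remaining, io_size);
}

int main(void)
{
	printf("io_size adjusted on every retry:\n");
	read_loop(24, true);
	printf("io_size adjusted only after the first attempt:\n");
	read_loop(24, false);
	return 0;
}

In the second run the iterator ends up drained while io_size is stuck at 16; in io_read() that stale value is what further retry decisions would act on, which is why the patch below moves the io_size adjustment to the retry: label so it happens on every pass.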
Diffstat (limited to 'fs/io_uring.c')
-rw-r--r--   fs/io_uring.c   14
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/fs/io_uring.c b/fs/io_uring.c
index f8492d62b6a1..25fffff27c76 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -3548,16 +3548,11 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
 		/* some cases will consume bytes even on error returns */
 		iov_iter_revert(iter, io_size - iov_iter_count(iter));
 		ret = 0;
-	} else if (ret <= 0 || ret == io_size) {
-		/* make sure -ERESTARTSYS -> -EINTR is done */
+	} else if (ret <= 0 || ret == io_size || !force_nonblock ||
+		   (req->file->f_flags & O_NONBLOCK) ||
+		   !(req->flags & REQ_F_ISREG)) {
+		/* read all, failed, already did sync or don't want to retry */
 		goto done;
-	} else {
-		/* we did blocking attempt. no retry. */
-		if (!force_nonblock || (req->file->f_flags & O_NONBLOCK) ||
-		    !(req->flags & REQ_F_ISREG))
-			goto done;
-
-		io_size -= ret;
 	}
 
 	ret2 = io_setup_async_rw(req, iovec, inline_vecs, iter, true);
@@ -3570,6 +3565,7 @@ static int io_read(struct io_kiocb *req, bool force_nonblock,
 	/* now use our persistent iterator, if we aren't already */
 	iter = &rw->iter;
 retry:
+	io_size -= ret;
 	rw->bytes_done += ret;
 	/* if we can retry, do so with the callbacks armed */
 	if (!io_rw_should_retry(req)) {