author		Pavel Begunkov <asml.silence@gmail.com>	2023-08-11 13:53:45 +0100
committer	Jens Axboe <axboe@kernel.dk>	2023-08-11 10:42:57 -0600
commit		b6b2bb58a75407660f638a68e6e34a07036146d0 (patch)
tree		e6c9a49099ab2542577003a22c85af3dee481427 /io_uring/io_uring.h
parent		056695bffa4beed5668dd4aa11efb696eacb3ed9 (diff)
io_uring: never overflow io_aux_cqe
Now that all callers of io_aux_cqe() set allow_overflow to false, remove the
parameter and disallow overflowing auxiliary multishot CQEs.
When the CQ is full, the callers, and multishot requests in general, are
expected to complete the request. That prevents the overflow list from
growing indefinitely in the background and lets userspace handle the
backlog at its own pace.
Resubmitting a request should also be faster than accounting a bunch of
overflows, so it should be better for performance when it happens, but
well-behaved userspace should be trying to avoid overflows in any case.
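For illustration, a sketch of the caller-side pattern this implies; the
exact call sites (io_poll.c, net.c) differ in detail, and the return codes
shown are the kernel's existing multishot conventions:

	/* Multishot completion path after this change: posting the aux
	 * CQE can no longer overflow, so a false return means the CQ is
	 * full and the request must be completed, handing the backlog
	 * back to userspace to drain and resubmit. */
	if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
				res, IORING_CQE_F_MORE))
		return IOU_ISSUE_SKIP_COMPLETE;	/* CQE posted, stay armed */

	/* CQ full: terminate the multishot request with the final result */
	io_req_set_res(req, res, 0);
	return IOU_STOP_MULTISHOT;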
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/bb20d14d708ea174721e58bb53786b0521e4dd6d.1691757663.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring/io_uring.h')
-rw-r--r--	io_uring/io_uring.h	3
1 file changed, 1 insertion, 2 deletions
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 3dc0b6fb0ef7..3e6ff3cd9a24 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -44,8 +44,7 @@ int io_run_task_work_sig(struct io_ring_ctx *ctx);
 void io_req_defer_failed(struct io_kiocb *req, s32 res);
 void io_req_complete_post(struct io_kiocb *req, unsigned issue_flags);
 bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 cflags);
-bool io_aux_cqe(const struct io_kiocb *req, bool defer, s32 res, u32 cflags,
-		bool allow_overflow);
+bool io_fill_cqe_req_aux(struct io_kiocb *req, bool defer, s32 res, u32 cflags);
 void __io_commit_cqring_flush(struct io_ring_ctx *ctx);
 struct page **io_pin_pages(unsigned long ubuf, unsigned long len, int *npages);