author     Pavel Begunkov <asml.silence@gmail.com>    2021-08-09 09:07:32 -0600
committer  Jens Axboe <axboe@kernel.dk>               2021-08-23 13:07:59 -0600
commit     90291099f24a82863e00de136d95ad7e73560107 (patch)
tree       0fd19c7dbce6daf6796c08b4c466b637625ff168 /fs/io_uring.c
parent     282cdc86937bd31cf0ea49978ad7a42cfe12ea35 (diff)
io_uring: optimise io_cqring_wait() hot path
Turns out we always init struct io_wait_queue in io_cqring_wait(), even
if it's not used afterwards, i.e. when there are already enough CQEs.
And that is often exactly what happens: for instance, requests may have
been completed inline, or in the case of io_uring_enter(submit=N,
wait=1). It shows up in my profiler, so optimise it by delaying the
struct init until it's actually needed.
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/6f1b81c60b947d165583dc333947869c3d85d037.1628471125.git.asml.silence@gmail.com
[axboe: fixed up for new cqring wait]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'fs/io_uring.c')
-rw-r--r-- | fs/io_uring.c | 14
1 file changed, 6 insertions, 8 deletions
diff --git a/fs/io_uring.c b/fs/io_uring.c
index ff17c4e9aa42..14aaeb87b149 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -7063,14 +7063,7 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 			  const sigset_t __user *sig, size_t sigsz,
 			  struct __kernel_timespec __user *uts)
 {
-	struct io_wait_queue iowq = {
-		.wq = {
-			.private	= current,
-			.func		= io_wake_function,
-			.entry		= LIST_HEAD_INIT(iowq.wq.entry),
-		},
-		.ctx		= ctx,
-	};
+	struct io_wait_queue iowq;
 	struct io_rings *rings = ctx->rings;
 	signed long timeout = MAX_SCHEDULE_TIMEOUT;
 	int ret;
@@ -7104,8 +7097,13 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events,
 		timeout = timespec64_to_jiffies(&ts);
 	}

+	init_waitqueue_func_entry(&iowq.wq, io_wake_function);
+	iowq.wq.private = current;
+	INIT_LIST_HEAD(&iowq.wq.entry);
+	iowq.ctx = ctx;
 	iowq.nr_timeouts = atomic_read(&ctx->cq_timeouts);
 	iowq.cq_tail = READ_ONCE(ctx->rings->cq.head) + min_events;
+	trace_io_uring_cqring_wait(ctx, min_events);

 	do {
 		/* if we can't even flush overflow, don't wait for more */
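The change is an instance of a general fast-path pattern: declare the wait
descriptor up front but defer its initialisation until the slow path that
actually sleeps, so the common "enough completions already" case pays
nothing. The following is a minimal user-space C sketch of that pattern,
not kernel code; all names in it (wait_desc, cqe_ready(),
wait_for_events()) are hypothetical stand-ins for io_wait_queue and the
CQE-availability check.

/*
 * Sketch only: defer initialising the wait descriptor until the
 * slow path needs it. Names are illustrative, not kernel APIs.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct wait_desc {
	const char *task;	/* stand-in for iowq.wq.private */
	unsigned int target;	/* stand-in for iowq.cq_tail */
};

/* Fast-path check: are enough completions already available? */
static bool cqe_ready(unsigned int avail, unsigned int min_events)
{
	return avail >= min_events;
}

static int wait_for_events(unsigned int avail, unsigned int min_events)
{
	struct wait_desc wd;	/* declared, but NOT initialised yet */

	/*
	 * Hot path: often true (inline completions, or submit-and-wait
	 * where the submissions complete immediately), so we return
	 * before touching wd at all.
	 */
	if (cqe_ready(avail, min_events))
		return 0;

	/* Slow path only: now pay for the initialisation. */
	memset(&wd, 0, sizeof(wd));
	wd.task = "current";
	wd.target = avail + min_events;

	printf("would sleep until %u completions\n", wd.target);
	return -1;
}

int main(void)
{
	wait_for_events(4, 2);	/* fast path: no init */
	wait_for_events(0, 2);	/* slow path: init, then wait */
	return 0;
}

The same reasoning explains why the commit replaces the designated
initialiser (which the compiler must execute on every call) with
explicit init_waitqueue_func_entry()/INIT_LIST_HEAD() calls placed
after the early-exit checks.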