path: root/io_uring/fdinfo.c
author     Jens Axboe <axboe@kernel.dk>    2024-11-03 10:23:38 -0700
committer  Jens Axboe <axboe@kernel.dk>    2024-11-06 13:55:38 -0700
commit     b6f58a3f4aa8dba424356c7a69388a81f4459300
tree       762afa454110f88f4ef7d5e0b7530486710ad8fa    /io_uring/fdinfo.c
parent     6ed368cc5d5d255ffffad33cfa02ecf2b77b7c44
io_uring: move struct io_kiocb from task_struct to io_uring_task
Rather than store the task_struct itself in struct io_kiocb, store the io_uring-specific io_uring_task. The lifetimes are the same as far as io_uring is concerned, and this avoids some dereferences through the task_struct. For the hot path of putting local task references, we can deref req->tctx instead, which we'll need in that function anyway regardless of whether the references are local or remote.

This is mostly straightforward, except the original task PF_EXITING check needs a bit of tweaking. task_work is _always_ run from the originating task, except in the fallback case, where it's run from a kernel thread. Replace the potentially racy (in case of fallback work) checks of req->task->flags with current->flags: either current is still the original task, in which case PF_EXITING will be sane, or it has PF_KTHREAD set, in which case it's fallback work. Both cases should prevent moving forward with the given request.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
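A rough sketch of the two changes described above, assuming the usual kernel headers and simplified declarations; the helper name below is illustrative and not necessarily what the patch adds:

/*
 * Sketch only: struct io_kiocb now holds the io_uring task context,
 * and the owning task_struct is reached through it, as the fdinfo
 * hunk below does via req->tctx->task.
 */
struct io_kiocb {
	/* ... */
	struct io_uring_task	*tctx;	/* was: struct task_struct *task */
};

/*
 * Illustrative helper for the reworked exit check: task_work runs
 * either on the originating task, where PF_EXITING in current->flags
 * is meaningful, or as fallback work on a kernel thread, which has
 * PF_KTHREAD set. Either flag means the request should not proceed.
 */
static inline bool io_tw_should_terminate(void)
{
	return current->flags & (PF_KTHREAD | PF_EXITING);
}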
Diffstat (limited to 'io_uring/fdinfo.c')
-rw-r--r--    io_uring/fdinfo.c    2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c
index 8da0d9e4533a..efbec34ccb18 100644
--- a/io_uring/fdinfo.c
+++ b/io_uring/fdinfo.c
@@ -203,7 +203,7 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, struct file *file)
 		hlist_for_each_entry(req, &hb->list, hash_node)
 			seq_printf(m, "  op=%d, task_works=%d\n", req->opcode,
-				   task_work_pending(req->task));
+				   task_work_pending(req->tctx->task));
 	}
 	if (has_lock)