path: root/io_uring/msg_ring.h
author    Jens Axboe <axboe@kernel.dk>  2024-06-06 12:25:01 -0600
committer Jens Axboe <axboe@kernel.dk>  2024-06-24 08:39:55 -0600
commit    50cf5f3842af3135b88b041890e7e12a74425fcb (patch)
tree      7c7f4035cbb6be7a747f264693345235fc64ee52 /io_uring/msg_ring.h
parent    0617bb500bfabf8447062f1e1edde92ed2b638f1 (diff)
io_uring/msg_ring: add an alloc cache for io_kiocb entries
With slab accounting, allocating and freeing memory has considerable overhead. Add a basic alloc cache for the io_kiocb allocations that msg_ring needs to do.

Unlike other caches, this one is used by the sender, grabbing it from the remote ring. When the remote ring gets the posted completion, it'll free it locally. Hence it is separately locked, using ctx->msg_lock.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'io_uring/msg_ring.h')
 io_uring/msg_ring.h | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
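The pattern the commit message describes — the sender allocates from the remote ring's cache, the remote side frees back into it locally, and a dedicated lock (ctx->msg_lock) serializes both — can be sketched in userspace C. Everything below is illustrative: the names (msg_cache, msg_req) and the pthread mutex are assumptions standing in for the kernel's io_alloc_cache and ctx->msg_lock, not the actual kernel implementation.

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical userspace sketch of the alloc-cache idea: a free list of
 * request objects with its own lock, since the allocating side (the
 * sender) and the freeing side (the remote ring) are different contexts. */

struct msg_req {
	struct msg_req *next;
	int payload;
};

struct msg_cache {
	pthread_mutex_t lock;   /* stands in for ctx->msg_lock */
	struct msg_req *head;   /* singly linked free list */
	unsigned int nr;        /* number of cached entries */
};

void msg_cache_init(struct msg_cache *c)
{
	pthread_mutex_init(&c->lock, NULL);
	c->head = NULL;
	c->nr = 0;
}

/* Sender side: try the (remote) cache first, fall back to the allocator. */
struct msg_req *msg_cache_alloc(struct msg_cache *c)
{
	struct msg_req *req;

	pthread_mutex_lock(&c->lock);
	req = c->head;
	if (req) {
		c->head = req->next;
		c->nr--;
	}
	pthread_mutex_unlock(&c->lock);
	if (!req)
		req = calloc(1, sizeof(*req));
	return req;
}

/* Receiver side: after handling the posted completion, return the
 * entry to the local cache instead of freeing it. */
void msg_cache_free_entry(struct msg_cache *c, struct msg_req *req)
{
	pthread_mutex_lock(&c->lock);
	req->next = c->head;
	c->head = req;
	c->nr++;
	pthread_mutex_unlock(&c->lock);
}
```

The benefit is that a recycled entry skips the allocator (and its slab accounting) entirely: once an entry has made one round trip, subsequent sends reuse it. The io_msg_cache_free() prototype added by this patch in the diff below is the hook the real cache uses to dispose of entries when the ring is torn down.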
diff --git a/io_uring/msg_ring.h b/io_uring/msg_ring.h
index 3987ee6c0e5f..3030f3942f0f 100644
--- a/io_uring/msg_ring.h
+++ b/io_uring/msg_ring.h
@@ -3,3 +3,4 @@
int io_msg_ring_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
int io_msg_ring(struct io_kiocb *req, unsigned int issue_flags);
void io_msg_ring_cleanup(struct io_kiocb *req);
+void io_msg_cache_free(const void *entry);