author     Ming Lei <ming.lei@redhat.com>  2021-05-11 23:22:34 +0800
committer  Jens Axboe <axboe@kernel.dk>    2021-05-24 06:47:22 -0600
commit     2e315dc07df009c3e29d6926871f62a30cfae394 (patch)
tree       c3fe336910932031c6114b806a2a68582fadea94 /block/blk-mq.h
parent     84da7acc3ba53af26f15c4b0ada446127b7a7836 (diff)
blk-mq: grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter
Grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter(); this prevents the request from being re-used while ->fn is running. The approach is the same as the one used when handling timeouts.

This fixes request use-after-free (UAF) related to the completion race or to queue releasing:

- If a request is referenced before rq->q is frozen, the queue won't be frozen before the request is released during iteration.

- If a request is referenced after rq->q is frozen, refcount_inc_not_zero() will return false, and we won't iterate over this request.

However, one request UAF is still not covered: refcount_inc_not_zero() may read an already-freed request; that case is handled in the next patch.

Tested-by: John Garry <john.garry@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20210511152236.763464-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
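The hunk shown below only adds the blk_mq_put_rq_ref() declaration to block/blk-mq.h; the reference grab itself happens in the tag-iteration code in block/blk-mq-tag.c, which is not part of this hunk. As a rough sketch of the pattern the commit message describes (the struct layout, the callback name, and the reserved handling are assumptions for illustration, not the verbatim patch body), the per-tag callback would look roughly like this:

/*
 * Sketch of the iterator-side change described above; names and structure
 * approximate block/blk-mq-tag.c and are not the verbatim patch body.
 */
struct bt_tags_iter_data_sketch {	/* assumed shape of the iterator state */
	struct blk_mq_tags *tags;
	busy_tag_iter_fn *fn;		/* the ->fn passed to blk_mq_tagset_busy_iter() */
	void *data;
	bool reserved;
};

static bool bt_tags_iter_sketch(struct sbitmap *bitmap, unsigned int bitnr,
				void *data)
{
	struct bt_tags_iter_data_sketch *iter_data = data;
	struct request *rq = iter_data->tags->rqs[bitnr];
	bool ret;

	/*
	 * Grab a reference before handing the request to ->fn.  If the
	 * refcount has already dropped to zero, refcount_inc_not_zero()
	 * fails and the request is skipped, so a freed or re-used request
	 * is never passed to the callback.
	 */
	if (!rq || !refcount_inc_not_zero(&rq->ref))
		return true;

	ret = iter_data->fn(rq, iter_data->data, iter_data->reserved);

	/*
	 * Drop the reference again; the helper declared in this patch frees
	 * the request when the last reference goes away.
	 */
	blk_mq_put_rq_ref(rq);
	return ret;
}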
Diffstat (limited to 'block/blk-mq.h')
-rw-r--r--  block/blk-mq.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 9ce64bc4a6c8..556368d2c5b6 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -47,6 +47,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
void blk_mq_flush_busy_ctxs(struct blk_mq_hw_ctx *hctx, struct list_head *list);
struct request *blk_mq_dequeue_from_ctx(struct blk_mq_hw_ctx *hctx,
struct blk_mq_ctx *start);
+void blk_mq_put_rq_ref(struct request *rq);
/*
* Internal helpers for allocating/freeing the request map
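
For completeness: the commit message implies that blk_mq_put_rq_ref() pairs the refcount_inc_not_zero() taken in the iterator with a put that frees the request once the last reference drops. A minimal sketch of such a helper, assuming that behaviour (the in-tree definition lives in block/blk-mq.c and additionally special-cases flush requests, which this sketch omits):

void blk_mq_put_rq_ref(struct request *rq)
{
	/*
	 * Sketch only: release the reference taken by the tag iterator and
	 * free the request when the last reference is dropped.
	 */
	if (refcount_dec_and_test(&rq->ref))
		__blk_mq_free_request(rq);
}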