path: root/block/blk-mq.c
author    Jens Axboe <axboe@kernel.dk>    2019-01-18 10:34:16 -0700
committer Jens Axboe <axboe@kernel.dk>    2019-04-13 19:08:22 -0600
commit    77f1e0a52d26242b6c2dba019f6ebebfb9ff701e (patch)
tree      d813fd9cdf53b7d1351f885e2c8706719672fba4 /block/blk-mq.c
parent    917257daa0fea7a007102691c0e27d9216a96768 (diff)
bfq: update internal depth state when queue depth changes
A previous commit moved the shallow-depth and BFQ depth-map calculations to init time, taking them out of the hotter IO path. This potentially causes hangs if the user changes the depth of the scheduler map by writing to the 'nr_requests' sysfs file for that device.

Add a blk-mq-sched hook that allows blk-mq to inform the scheduler when the depth changes, so that the scheduler can update its internal state.

Tested-by: Kai Krakow <kai@kaishome.de>
Reported-by: Paolo Valente <paolo.valente@linaro.org>
Fixes: f0635b8a416e ("bfq: calculate shallow depths at init time")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r--  block/blk-mq.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9516304a38ee..fc60ed7e940e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3135,6 +3135,8 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
}
if (ret)
break;
+ if (q->elevator && q->elevator->type->ops.depth_updated)
+ q->elevator->type->ops.depth_updated(hctx);
}
if (!ret)
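The two added lines follow a common kernel pattern: an optional operation exposed as a function pointer in an ops table, guarded by NULL checks on both the owning object and the op itself before the call. A minimal standalone sketch of that pattern (all type and function names here are illustrative, not the actual kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the elevator/scheduler ops structures. */
struct sched_ops {
	void (*depth_updated)(int new_depth); /* optional hook; may be NULL */
};

struct elevator {
	struct sched_ops ops;
};

struct queue {
	struct elevator *elevator; /* NULL when no scheduler is attached */
	int depth;
};

/* Records the depth the scheduler last saw, standing in for BFQ
 * recomputing its internal shallow-depth state. */
static int last_seen_depth = -1;

static void sched_depth_updated(int new_depth)
{
	last_seen_depth = new_depth;
}

/* Mirrors the patch: update the depth, then notify the scheduler only
 * if one is attached AND it implements the optional hook. */
void update_nr_requests(struct queue *q, int nr)
{
	q->depth = nr;
	if (q->elevator && q->elevator->ops.depth_updated)
		q->elevator->ops.depth_updated(nr);
}
```

The double guard is what makes the hook safely optional: queues with no scheduler, or with a scheduler that does not care about depth changes, skip the call without any per-scheduler special-casing in blk-mq.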