author	Tejun Heo <tj@kernel.org>	2014-06-18 11:21:08 -0400
committer	Jens Axboe <axboe@fb.com>	2014-07-01 10:27:06 -0600
commit	531ed6261e7466907418b1a9971a5c71d7d250e4 (patch)
tree	c4620c6c04575eb0d3fa2d24eb418a8dd8859a1e /block/blk-mq.c
parent	17737d3b5997ac9f810967f0c6014d124ec39490 (diff)
blk-mq: fix a memory ordering bug in blk_mq_queue_enter()
blk-mq uses a percpu_counter to keep track of how many usages are in flight. The percpu_counter is drained while freezing to ensure that no usage is left in flight after freezing is complete. blk_mq_queue_enter/exit() and blk_mq_[un]freeze_queue() implement this per-cpu gating mechanism.

Unfortunately, it contains a subtle bug: smp_wmb() in blk_mq_queue_enter() doesn't prevent the CPU from fetching @q->bypass_depth before incrementing @q->mq_usage_counter, and if freezing happens in between, the caller can slip through and freezing can complete while there are active users.

Use smp_mb() instead so that the modifications and tests of bypass_depth and mq_usage_counter are properly interlocked.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r--	block/blk-mq.c	|	2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ad69ef657e85..9541f5111ba6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -81,7 +81,7 @@ static int blk_mq_queue_enter(struct request_queue *q)
 	int ret;
 
 	__percpu_counter_add(&q->mq_usage_counter, 1, 1000000);
-	smp_wmb();
+	smp_mb();
 	/* we have problems freezing the queue if it's initializing */
 	if (!blk_queue_dying(q) &&