author | Tejun Heo <tj@kernel.org> | 2014-07-01 10:29:17 -0600
committer | Jens Axboe <axboe@fb.com> | 2014-07-01 10:29:17 -0600
commit | 776687bce42bb22cce48b5da950e48ebbb9a948f (patch)
tree | 68901461bfd070246574f1e2440ba1ef2ae93ec0 /block/blk-mq.c
parent | 531ed6261e7466907418b1a9971a5c71d7d250e4 (diff)
block, blk-mq: draining can't be skipped even if bypass_depth was non-zero
Currently, both blk_queue_bypass_start() and blk_mq_freeze_queue()
skip queue draining if bypass_depth was already above zero. The
assumption is that the one which bumped the bypass_depth should have
performed draining already; however, there's nothing which prevents a
new instance of bypassing/freezing from starting before the previous
one finishes draining. The current code may allow the later
bypassing/freezing instances to complete while there still are
in-flight requests which haven't finished draining.
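As an illustration of the race described above, here is a minimal userspace sketch (not the kernel code; fake_queue, freeze_old() and drain() are stand-ins, and locking is omitted) of the old "only the first bumper drains" pattern:

/*
 * Illustrative sketch only: models the pre-patch "drain only if we took
 * bypass_depth from 0 to 1" logic.  A second freezer that arrives while
 * the first one is still draining skips the drain and returns early.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_queue {
        int bypass_depth;
        int inflight;                   /* requests not yet drained */
};

static void drain(struct fake_queue *q)
{
        q->inflight = 0;                /* stands in for blk_mq_drain_queue() */
}

static void freeze_old(struct fake_queue *q)
{
        bool do_drain = !q->bypass_depth++;     /* true only for the first freezer */

        if (do_drain)
                drain(q);
        /* a later freezer returns here even if inflight is still non-zero */
}

int main(void)
{
        /* first freezer has bumped bypass_depth but has not finished draining */
        struct fake_queue q = { .bypass_depth = 1, .inflight = 3 };

        freeze_old(&q);                 /* second freezer skips the drain */
        printf("in-flight after second freeze: %d\n", q.inflight);     /* 3 */
        return 0;
}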
Fix it by draining regardless of bypass_depth. We still skip draining
from blk_queue_bypass_start() while the queue is initializing to avoid
introducing excessive delays during boot. INIT_DONE setting is moved
above the initial blk_queue_bypass_end() so that bypassing attempts
can't slip in between.
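A similarly hedged sketch of the post-patch behaviour (again userspace stand-ins, not the actual blk_queue_bypass_start() or blk_register_queue() code; init_done models the INIT_DONE flag): every bypass/freeze instance drains, and only the not-yet-initialized window skips it.

/*
 * Illustrative sketch only: models the post-patch rule that every
 * bypass/freeze drains, with queue initialization as the sole exception.
 */
#include <stdbool.h>

struct fake_queue {
        int bypass_depth;
        bool init_done;                 /* stands in for QUEUE_FLAG_INIT_DONE */
        int inflight;
};

static void drain(struct fake_queue *q)
{
        q->inflight = 0;
}

static void bypass_start_new(struct fake_queue *q)
{
        q->bypass_depth++;              /* no "first bumper only" test anymore */

        /*
         * Queues start out drained, so skipping the drain until init is
         * done only avoids pointless delays during boot.
         */
        if (q->init_done)
                drain(q);
}

int main(void)
{
        struct fake_queue q = { .inflight = 2 };

        bypass_start_new(&q);           /* during init: drain skipped */
        q.init_done = true;             /* INIT_DONE set before initial bypass ends */
        bypass_start_new(&q);           /* after init: always drains */
        return 0;
}

Because the flag is set before the initial bypass is lifted, any bypass attempt that starts afterwards already sees init_done and therefore cannot skip the drain.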
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Diffstat (limited to 'block/blk-mq.c')
-rw-r--r-- | block/blk-mq.c | 7
1 file changed, 2 insertions, 5 deletions
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9541f51..f4bdddd 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -131,15 +131,12 @@ void blk_mq_drain_queue(struct request_queue *q)
  */
 static void blk_mq_freeze_queue(struct request_queue *q)
 {
-        bool drain;
-
         spin_lock_irq(q->queue_lock);
-        drain = !q->bypass_depth++;
+        q->bypass_depth++;
         queue_flag_set(QUEUE_FLAG_BYPASS, q);
         spin_unlock_irq(q->queue_lock);
 
-        if (drain)
-                blk_mq_drain_queue(q);
+        blk_mq_drain_queue(q);
 }
 
 static void blk_mq_unfreeze_queue(struct request_queue *q)