Author:    Christoph Hellwig <hch@lst.de>    2014-04-11 19:07:01 +0200
Committer: Christoph Hellwig <hch@lst.de>    2014-07-25 07:43:45 -0400
Commit:    71e75c97f97a9645d25fbf3d8e4165a558f18747 (patch)
Tree:      fb85185386af55199c46499dc3ce366d227870e1 /drivers/scsi/sg.c
Parent:    74665016086615bbaa3fa6f83af410a0a4e029ee (diff)
scsi: convert device_busy to atomic_t
Avoid taking the queue_lock to check the per-device queue limit. Instead we do an atomic_inc_return early on to grab our slot in the queue, and if necessary decrement it after finishing all checks.

Unlike the host and target busy counters this doesn't allow us to avoid the queue_lock in the request_fn due to the way the interface works, but it'll allow us to prepare for using the blk-mq code, which doesn't use the queue_lock at all, and it at least avoids a queue_lock round trip in scsi_device_unbusy, which is still important given how busy the queue_lock is.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Webb Scales <webbnh@hp.com>
Acked-by: Jens Axboe <axboe@kernel.dk>
Tested-by: Bart Van Assche <bvanassche@acm.org>
Tested-by: Robert Elliott <elliott@hp.com>
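As an illustration of the "increment first, back out on failure" pattern the commit message describes, here is a minimal, self-contained sketch in userspace C11. It is an assumption-laden sketch, not the kernel code: it uses <stdatomic.h> instead of the kernel's atomic_t API, and the names struct busy_dev and try_claim_slot are hypothetical; atomic_fetch_add(...) + 1 merely stands in for the kernel's atomic_inc_return().

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct busy_dev {
	atomic_int device_busy;	/* commands currently outstanding on the device */
	int queue_depth;	/* per-device queue limit */
};

/* Grab a slot optimistically, then undo the increment if the limit is hit. */
static bool try_claim_slot(struct busy_dev *dev)
{
	/* atomic_fetch_add returns the old value; adding 1 gives the new count,
	 * mirroring the kernel's atomic_inc_return(). */
	int busy = atomic_fetch_add(&dev->device_busy, 1) + 1;

	if (busy > dev->queue_depth) {
		/* Over the limit: back out without ever taking a lock. */
		atomic_fetch_sub(&dev->device_busy, 1);
		return false;
	}
	return true;
}

int main(void)
{
	struct busy_dev dev = { .device_busy = 0, .queue_depth = 2 };

	for (int i = 0; i < 4; i++)
		printf("claim %d: %s\n", i, try_claim_slot(&dev) ? "ok" : "busy");
	return 0;
}

The point of the pattern is that the common, under-limit case costs one atomic operation and no queue_lock round trip; only the over-limit case pays for the extra decrement.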
Diffstat (limited to 'drivers/scsi/sg.c')
-rw-r--r--  drivers/scsi/sg.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index 7a291f5..01cf888 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -2574,7 +2574,7 @@ static int sg_proc_seq_show_dev(struct seq_file *s, void *v)
scsidp->id, scsidp->lun, (int) scsidp->type,
1,
(int) scsidp->queue_depth,
- (int) scsidp->device_busy,
+ (int) atomic_read(&scsidp->device_busy),
(int) scsi_device_online(scsidp));
}
read_unlock_irqrestore(&sg_index_lock, iflags);