author     Kiyoshi Ueda <k-ueda@ct.jp.nec.com>      2010-03-06 02:30:02 +0000
committer  Alasdair G Kergon <agk@redhat.com>       2010-03-06 02:30:02 +0000
commit     d0259bf0eefc503d3c9c9ccda35033c3dd3aac30
tree       8e0a6ebf9e9509875c160369803a2b6dd2abd943 /drivers/md/dm-mpath.c
parent     fce323dd68e13354071538c765b062859e6f8286
download   op-kernel-dev-d0259bf0eefc503d3c9c9ccda35033c3dd3aac30.zip
           op-kernel-dev-d0259bf0eefc503d3c9c9ccda35033c3dd3aac30.tar.gz
dm mpath: hold io until all pg_inits completed
m->queue_io is set to block the processing of I/Os, and it needs to be
kept set while pg_init, which may issue multiple path activations, is in
progress.  But m->queue_io is cleared as soon as one path activation
completes without error in pg_init_done(), even while other path
activations are still in progress.  That may cause undesired -EIO on
paths whose activation has not yet completed.
This patch fixes that by not clearing m->queue_io until all path
activations have completed.
(Before the hardware handlers were moved into the SCSI layer, pg_init
only used one path.)
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Diffstat (limited to 'drivers/md/dm-mpath.c')
-rw-r--r-- | drivers/md/dm-mpath.c | 17 |
1 file changed, 11 insertions, 6 deletions
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 8fa0c95..ae187e5 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -1181,14 +1181,19 @@ static void pg_init_done(void *data, int errors)
 			m->current_pgpath = NULL;
 			m->current_pg = NULL;
 		}
-	} else if (!m->pg_init_required) {
-		m->queue_io = 0;
+	} else if (!m->pg_init_required)
 		pg->bypassed = 0;
-	}
 
-	m->pg_init_in_progress--;
-	if (!m->pg_init_in_progress)
-		queue_work(kmultipathd, &m->process_queued_ios);
+	if (--m->pg_init_in_progress)
+		/* Activations of other paths are still on going */
+		goto out;
+
+	if (!m->pg_init_required)
+		m->queue_io = 0;
+
+	queue_work(kmultipathd, &m->process_queued_ios);
+
+out:
 	spin_unlock_irqrestore(&m->lock, flags);
 }
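
To make the fixed ordering easier to see outside the kernel, below is a
minimal user-space C sketch of the same pattern: a count of in-flight path
activations, and a queue_io gate that is cleared only when the last
activation completes.  The names (fake_multipath, activation_done) and the
use of a pthread mutex in place of the kernel spinlock are illustrative
assumptions, not the dm-mpath API.

/*
 * Minimal user-space sketch of the pattern applied by this patch; the
 * struct and function names below are hypothetical, not dm-mpath code.
 * A pthread mutex stands in for the kernel spinlock.
 */
#include <pthread.h>
#include <stdio.h>

struct fake_multipath {
	pthread_mutex_t lock;
	int pg_init_in_progress;   /* path activations still outstanding */
	int pg_init_required;      /* another pg_init round still needed */
	int queue_io;              /* 1: hold I/O queued, 0: allow dispatch */
};

/* Called once per completed path activation. */
static void activation_done(struct fake_multipath *m)
{
	pthread_mutex_lock(&m->lock);

	if (--m->pg_init_in_progress) {
		/* Other activations still in flight: keep queue_io set. */
		pthread_mutex_unlock(&m->lock);
		return;
	}

	/* Last activation finished: now it is safe to open the gate. */
	if (!m->pg_init_required)
		m->queue_io = 0;

	printf("all activations done, queue_io = %d\n", m->queue_io);
	pthread_mutex_unlock(&m->lock);
}

int main(void)
{
	struct fake_multipath m = {
		.pg_init_in_progress = 3,  /* e.g. three paths being activated */
		.pg_init_required = 0,
		.queue_io = 1,             /* I/O held while pg_init runs */
	};

	pthread_mutex_init(&m.lock, NULL);

	/* Simulate three completions; only the third clears queue_io. */
	activation_done(&m);
	activation_done(&m);
	activation_done(&m);

	pthread_mutex_destroy(&m.lock);
	return 0;
}

The key point mirrors the patch: the early return while pg_init_in_progress
is still non-zero keeps queue_io set, so I/O stays queued until every
activation has reported back.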