author	Michael Holzheu <holzheu@linux.vnet.ibm.com>	2009-10-14 12:43:45 +0200
committer	Martin Schwidefsky <sky@mschwide.boeblingen.de.ibm.com>	2009-10-14 12:43:52 +0200
commit	03cadd36d51c737d7ad6aa21e2524296be6fe57f (patch)
tree	bbfe2c44a6eed8edf5b3f9a5064031efb00fc8cd /drivers/s390
parent	7874b1b66a53c4d9c8dcb37884cbb758aa2d712c (diff)
[S390] tape390: Fix request queue handling in block driver
When setting a channel attached tape online under Linux 2.6.31, the
"vol_id" process from udev hangs in sync_page():

 2 sync_page+144 [0x1dfaac]
 3 __wait_on_bit_lock+194 [0x58c23e]
 4 __lock_page+116 [0x1df9dc]
 5 truncate_inode_pages_range+728 [0x1ed7cc]
 6 __blkdev_put+244 [0x25f738]
 7 __fput+300 [0x229c4c]
 8 filp_close+122 [0x225a3a]

The reason for that is an error in the request queue handling. It can
happen that we fetch a request, but do not process it further because
the number of queued requests exceeds TAPEBLOCK_MIN_REQUEUE. To fix
this, we should call blk_peek_request() instead of blk_fetch_request()
in the while condition and fetch the request in the loop body
afterwards.

This bug was introduced with the patch "block: implement and enforce
request peek/start/fetch" (9934c8c04561413609d2bc38c6b9f268cba774a4).

Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
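For reference, a condensed sketch of the dequeue loop after this fix, reconstructed from the hunk below (code outside the visible hunk is elided as "..."): blk_peek_request() only inspects the request at the head of the queue without removing it, so the request is dequeued with blk_fetch_request() only once the loop has committed to processing it.

	/* Sketch of the fixed loop in tapeblock_requeue(); elided parts
	 * of the driver are marked with "...". */
	spin_lock_irq(&device->blk_data.request_queue_lock);
	while (
		!blk_queue_plugged(queue) &&
		blk_peek_request(queue) &&	/* peek only: request stays queued */
		nr_queued < TAPEBLOCK_MIN_REQUEUE
	) {
		/* All loop conditions hold, so the request will be
		 * processed; only now take it off the queue. */
		req = blk_fetch_request(queue);
		...
	}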
Diffstat (limited to 'drivers/s390')
-rw-r--r--	drivers/s390/char/tape_block.c	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/s390/char/tape_block.c b/drivers/s390/char/tape_block.c
index 64f57ef..0c0705b 100644
--- a/drivers/s390/char/tape_block.c
+++ b/drivers/s390/char/tape_block.c
@@ -162,9 +162,10 @@ tapeblock_requeue(struct work_struct *work) {
spin_lock_irq(&device->blk_data.request_queue_lock);
while (
!blk_queue_plugged(queue) &&
- (req = blk_fetch_request(queue)) &&
+ blk_peek_request(queue) &&
nr_queued < TAPEBLOCK_MIN_REQUEUE
) {
+ req = blk_fetch_request(queue);
if (rq_data_dir(req) == WRITE) {
DBF_EVENT(1, "TBLOCK: Rejecting write request\n");
spin_unlock_irq(&device->blk_data.request_queue_lock);