path: root/include/linux/blkdev.h
Commit message (Author, Date, Files, Lines)
* [SCSI] unify SCSI_IOCTL_SEND_COMMAND implementations (Christoph Hellwig, 2006-04-13, 1 file, -0/+4)
We currently have two implementations of this obsolete ioctl, one in the block layer and one in the scsi code. Both of them have drawbacks. This patch kills the scsi layer version after updating the block version with the missing bits:

- argument checking
- use scatterlist I/O
- set number of retries based on the submitted command

This is the last user of non-S/G I/O except for the gdth driver, so getting this in ASAP and through the scsi tree would be nice to kill the non-S/G I/O path. Jens, what do you think about adding a check for non-S/G I/O in the midlayer?

Thanks to Or Gerlitz for testing this patch.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
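For reference, this ioctl's user interface packs two lengths and the CDB into a single buffer. Below is a minimal userspace sketch of a TEST UNIT READY call through that path; the device path and the lack of sense decoding are illustrative only, not from the patch:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <scsi/scsi_ioctl.h>        /* SCSI_IOCTL_SEND_COMMAND */

    int main(void)
    {
        /* header (inlen/outlen) followed by the CDB; on return, reply
           data or sense bytes overwrite the cmd area */
        struct {
            unsigned int inlen;         /* bytes written to the device */
            unsigned int outlen;        /* bytes expected back */
            unsigned char cmd[6];       /* TEST UNIT READY: all zeroes */
        } hdr = { 0, 0, { 0, 0, 0, 0, 0, 0 } };

        int fd = open("/dev/sda", O_RDONLY);
        if (fd < 0 || ioctl(fd, SCSI_IOCTL_SEND_COMMAND, &hdr) < 0)
            perror("SCSI_IOCTL_SEND_COMMAND");
        return 0;
    }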
* [BLOCK] cfq-iosched: seek and async performance fixes (Jens Axboe, 2006-03-28, 1 file, -1/+7)
Detect whether a given process is seeky and, if so, (mostly) disable the idle window. We still allow just a little idle time, just enough to allow that process to submit a new request. That is needed to maintain fairness across priority groups.

In some cases, we could set up several async queues. This is not optimal from a performance POV, since we want all async io in one queue to perform good sorting on it. It also impacted sync queues, as async io got too much slice time.

Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] [BLOCK] cfq-iosched: change cfq io context linking from list to tree (Jens Axboe, 2006-03-28, 1 file, -8/+6)
On setups with many disks, we spend a considerable amount of time looking up the process-disk mapping on each queue of io. Testing with a NULL based block driver, this costs a 40-50% reduction in throughput for 1000 disks.

Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] Block queue IO tracing support (blktrace) as of 2006-03-23 (Jens Axboe, 2006-03-23, 1 file, -0/+3)
Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] fix sysfs interaction and lifetime rules handling for queues (Al Viro, 2006-03-18, 1 file, -3/+3)
* [PATCH] make cfq_exit_queue() prune the cfq_io_context for that queue (Al Viro, 2006-03-18, 1 file, -0/+2)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [PATCH] keep sync and async cfq_queue separate (Al Viro, 2006-03-18, 1 file, -1/+1)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* [BLOCK] ll_rw_blk: make max_sectors and max_hw_sectors unsigned ints (Jens Axboe, 2006-01-24, 1 file, -3/+3)
IDE lba48 can support a full 64k request size, which overflows the max_hw_sectors variable.

Signed-off-by: Jens Axboe <axboe@suse.de>
* Merge branch 'blk-softirq' of git://brick.kernel.dk/data/git/linux-2.6-block (Linus Torvalds, 2006-01-09, 1 file, -3/+18)
Manual merge for trivial #include changes.
* [BLOCK] ll_rw_blk: Enable out-of-order request completions through softirq (Jens Axboe, 2006-01-09, 1 file, -3/+18)
Request completion can be quite a heavy process, since it needs to iterate through the entire request and complete the bios it holds. This patch adds blk_complete_request(), which moves this processing into a dedicated block softirq.

Signed-off-by: Jens Axboe <axboe@suse.de>
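As a rough sketch of what this enables in a driver (the mydrv_* names are hypothetical; only blk_queue_softirq_done() and blk_complete_request() come from the patch):

    /* softirq context: the heavy bio iteration happens here, off the
       hardirq path */
    static void mydrv_softirq_done(struct request *rq)
    {
        end_that_request_first(rq, 1, rq->hard_nr_sectors);
        end_that_request_last(rq, 1);
    }

    /* hardirq handler: just hand the finished request to the block softirq */
    static irqreturn_t mydrv_isr(int irq, void *dev_id, struct pt_regs *regs)
    {
        struct request *rq = mydrv_fetch_completed(dev_id);  /* hypothetical */

        blk_complete_request(rq);
        return IRQ_HANDLED;
    }

    /* once, at queue setup time */
    blk_queue_softirq_done(q, mydrv_softirq_done);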
* [BLOCK] Kill blk_attempt_remerge() (Jens Axboe, 2006-01-09, 1 file, -1/+0)
It's a broken interface; it's done way too late. And apparently it triggers slab problems in recent kernels as well (most likely after the generic dispatch code was merged). So kill it; ide-cd is the only user of it.

Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK] Correct blk_execute_rq_nowait() prototype (Jens Axboe, 2006-01-06, 1 file, -2/+1)
* [BLOCK] reimplement handling of barrier request (Tejun Heo, 2006-01-06, 1 file, -25/+57)
Reimplement handling of barrier requests.

- Flexible handling to deal with various capabilities of target devices.
- Retry support for falling back.
- Tagged queues which don't support ordered tag can do ordered.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
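A hedged sketch of how a low-level driver might advertise its capability under this scheme; the QUEUE_ORDERED_* value and the flush hook below are assumptions based on the description above, not code lifted from the patch:

    /* build a cache-flush command into rq for the hardware */
    static void mydrv_prepare_flush(request_queue_t *q, struct request *rq)
    {
        mydrv_setup_flush_cmd(rq);              /* hypothetical helper */
    }

    /* writeback cache, no ordered-tag support: drain the queue around the
       barrier and bracket it with flushes */
    blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH, mydrv_prepare_flush);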
* [BLOCK] add @uptodate to end_that_request_last() and @error to rq_end_io_fn() (Tejun Heo, 2006-01-06, 1 file, -3/+3)
There's no generic way to pass an error code to the request completion function, making generic error handling of non-fs requests difficult (rq->errors is driver-specific and each driver uses it differently). This patch adds @uptodate to end_that_request_last() and @error to rq_end_io_fn().

For fs requests, this doesn't really matter, so just using the same uptodate argument used in the last call to end_that_request_first() should suffice. IMHO, this can also help the generic command-carrying request Jens is working on.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
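After this change the two completion paths look roughly as follows (a sketch; my_end_io and my_handle_failure are hypothetical):

    /* fs request: pass along the uptodate value already used with
       end_that_request_first() */
    end_that_request_last(rq, uptodate);

    /* non-fs request: a driver-independent error code now arrives at the
       completion callback instead of having to decode rq->errors */
    static void my_end_io(struct request *rq, int error)
    {
        if (error)
            my_handle_failure(rq);              /* hypothetical */
    }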
* [SCSI] separate max_sectors from max_hw_sectors (Mike Christie, 2005-12-15, 1 file, -1/+2)
- Export __blk_put_request and blk_execute_rq_nowait, needed for async REQ_BLOCK_PC requests.
- Separate max_hw_sectors and max_sectors for block/scsi_ioctl.c and SG_IO bio.c helpers, per Jens's last comments.

Since block/scsi_ioctl.c SG_IO was already testing against max_sectors and SCSI-ml was setting max_sectors and max_hw_sectors to the same value, this does not change any scsi SG_IO behavior. It only prepares ll_rw_blk.c, scsi_ioctl.c and bio.c for when SCSI-ml begins to set a valid max_hw_sectors for all LLDs. Today, if an LLD does not set it, SCSI-ml sets it to a safe default, and some LLDs set it to an artificially low value to overcome memory and feedback issues.

Note: Since we now cap max_sectors to BLK_DEF_MAX_SECTORS, which is 1024, drivers that used to call blk_queue_max_sectors with a large value of max_sectors will now see the fs requests capped to BLK_DEF_MAX_SECTORS.

Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
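The practical effect for an LLD looks roughly like this (a sketch, values illustrative):

    /* hardware can move 2048 sectors (1MB) per request */
    blk_queue_max_sectors(q, 2048);
    /* after this patch: max_hw_sectors stays 2048 (the SG_IO limit), while
       max_sectors is clamped to BLK_DEF_MAX_SECTORS (1024) for fs requests */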
* [SCSI] add retries field to request for REQ_BLOCK_PC use (Mike Christie, 2005-12-14, 1 file, -0/+1)
For tape we need to control the retries. This patch adds a retries counter on the request for REQ_BLOCK_PC commands originating from scsi_execute* to use. REQ_BLOCK_PC commands coming from the block layer SG_IO path continue to use the retries set in the ULD init_command. (scsi_execute* does not set the gendisk, so we do not execute the init_command in that path.)

Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
* [SCSI] export blk layer functions needed for blk_execute_rq_nowait (Mike Christie, 2005-12-14, 1 file, -0/+5)
To send async requests we need these two functions exported.

Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
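Together with the previous patch, this allows submissions along these lines (a sketch using 2.6.15-era names; cdb, cdb_len and my_done are placeholders):

    struct request *rq = blk_get_request(q, READ, GFP_KERNEL);

    rq->flags |= REQ_BLOCK_PC;          /* pre-cmd_flags era */
    rq->retries = 5;                    /* the new per-request retry count */
    rq->cmd_len = cdb_len;
    memcpy(rq->cmd, cdb, cdb_len);

    /* returns immediately; my_done() runs when the request completes */
    blk_execute_rq_nowait(q, NULL, rq, 0 /* at tail */, my_done);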
* [BLOCK] Implement elv_drain_elevator for improved switch error detection (Tejun Heo, 2005-11-12, 1 file, -0/+2)
This patch adds request_queue->nr_sorted, which keeps the number of requests in the iosched, and implements elv_drain_elevator, which performs forced dispatching. elv_drain_elevator checks whether the iosched actually dispatches all requests it has and prints an error message if it doesn't. As buggy forced dispatching can result in wrong barrier operations, I think this extra check is worthwhile.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
* Merge branch 'elevator-switch' of git://brick.kernel.dk/data/git/linux-2.6-block (Linus Torvalds, 2005-10-28, 1 file, -6/+4)
Manual fixup for trivial "gfp_t" changes.
* [BLOCK] elevator switch fixes/cleanup (Jens Axboe, 2005-10-28, 1 file, -1/+1)
- A 100msec sleep is a little excessive; lots of requests can complete in that timeframe. Use 10msec instead.
- Rename QUEUE_FLAG_BYPASS to QUEUE_FLAG_ELVSWITCH to indicate what is going on.

Signed-off-by: Jens Axboe <axboe@suse.de>
* [BLOCK] Reimplement elevator switch (Tejun Heo, 2005-10-28, 1 file, -6/+4)
This patch reimplements elevator switch. It assumes the generic dispatch queue patchset is applied.

- Each request is tagged with the REQ_ELVPRIV flag if it has its elevator private data set.
- Requests which don't have the REQ_ELVPRIV flag set never enter the iosched; they are always directly back-inserted to the dispatch queue. Of course, elevator_put_req_fn is called only for requests which have REQ_ELVPRIV set.
- The request queue maintains the current number of requests which have their elevator data set (elevator_set_req_fn called) in q->rq.elvpriv.
- If a request queue has QUEUE_FLAG_BYPASS set, elevator private data is not allocated for new requests.

To switch to another iosched, we set QUEUE_FLAG_BYPASS and wait until elvpriv goes to zero; then we attach the new iosched and clear QUEUE_FLAG_BYPASS. The new implementation is much simpler, and main code paths are less cluttered, IMHO.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
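In pseudocode, the switch sequence described above reduces to something like this (a sketch, not the literal elevator.c code; attach_new_elevator is a stand-in):

    set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);  /* stop new elvpriv allocations */

    while (q->rq.elvpriv)             /* wait for elevator-owned requests */
        msleep(10);                   /* to drain (interval per the fix above) */

    attach_new_elevator(q);           /* hypothetical stand-in */
    clear_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);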
* Merge branch 'generic-dispatch' of git://brick.kernel.dk/data/git/linux-2.6-block (Linus Torvalds, 2005-10-28, 1 file, -4/+22)
* [BLOCK] kill generic max_back_kb handling (Tejun Heo, 2005-10-28, 1 file, -1/+0)
This patch kills max_back_kb handling from elv_dispatch_sort() and kills the max_back_kb field from struct request_queue.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] 03/05 move last_merge handling into generic elevator code (Tejun Heo, 2005-10-28, 1 file, -0/+3)
Currently, both generic elevator code and specific ioscheds participate in the management and usage of last_merge. This and the following patches move last_merge handling into generic elevator code.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] generic dispatch fixes (Jens Axboe, 2005-10-28, 1 file, -1/+12)
- Split elv_dispatch_insert() into two functions.
- Rename rq_last_sector() to rq_end_sector().

Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] 01/05 Implement generic dispatch queue (Tejun Heo, 2005-10-28, 1 file, -6/+11)
Implements a generic dispatch queue which can replace the dispatch queues implemented by each iosched. This reduces code duplication, eases enforcing semantics over the dispatch queue, and simplifies specific ioscheds.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Jens Axboe <axboe@suse.de>
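Under this scheme an iosched's dispatch hook reduces to picking a request and moving it into the shared dispatch queue, roughly like this (a sketch; the my_* names are hypothetical):

    static int my_dispatch(request_queue_t *q, int force)
    {
        struct request *rq = my_pick_next(q->elevator->elevator_data);

        if (!rq)
            return 0;
        /* hand rq to the generic dispatch queue; the elevator core takes
           care of ordering and bookkeeping from here on */
        elv_dispatch_sort(q, rq);
        return 1;
    }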
* [PATCH] gfp_t: block layer core (Al Viro, 2005-10-28, 1 file, -7/+7)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] include/linux/blkdev.h: "extern inline" -> "static inline" (Adrian Bunk, 2005-09-10, 1 file, -1/+1)
"extern inline" doesn't make much sense.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
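For illustration (blk_foo is a hypothetical function, not the one changed by the patch):

    /* "extern inline": if the compiler declines to inline a call, it emits
       a reference to an external definition that no object file provides */
    extern inline int blk_foo(struct request_queue *q);

    /* "static inline": every translation unit gets its own copy on demand,
       so nothing can fail to link */
    static inline int blk_foo(struct request_queue *q)
    {
        return q != NULL;           /* illustrative body */
    }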
* fix mismerge in ll_rw_blk.c (James Bottomley, 2005-08-28, 1 file, -4/+6)
* [PATCH] update blk_execute_rq to take an at_head parameter (James Bottomley, 2005-06-20, 1 file, -2/+2)
Original From: Mike Christie <michaelc@cs.wisc.edu>

Modified to split out block changes (this patch) and SCSI pieces.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
* [PATCH] Add scatter-gather support for the block layer SG_IO (James Bottomley, 2005-06-20, 1 file, -0/+1)
Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] Cleanup blk_rq_map_* interfaces (Jens Axboe, 2005-06-20, 1 file, -4/+3)
Change the blk_rq_map_user() and blk_rq_map_kern() interface to require a previously allocated request to be passed in. This is both more efficient for multiple iterations of mapping data to the same request, and it is also a much nicer API.

Signed-off-by: Jens Axboe <axboe@suse.de>
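With the reworked interface a kernel-buffer submission looks roughly like this (a sketch; buffer, len and bd_disk are assumed to be in scope):

    struct request *rq = blk_get_request(q, READ, GFP_KERNEL);

    if (!blk_rq_map_kern(q, rq, buffer, len, GFP_KERNEL))
        blk_execute_rq(q, bd_disk, rq, 0);      /* insert at tail and wait */
    blk_put_request(rq);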
* [PATCH] Add blk_rq_map_kern() (Mike Christie, 2005-06-20, 1 file, -0/+2)
Add blk_rq_map_kern, which takes a kernel buffer and maps it into a request and bio. This can be used by the dm hw_handlers, old sg_scsi_ioctl, and one day scsi special requests, so all requests coming into scsi will have bios. All requests having bios should allow scsi to use scatter lists for all IO and allow it to use block layer functions.

Signed-off-by: Jens Axboe <axboe@suse.de>
* [PATCH] blk: fix tag shrinking (revive real_max_size) (Tejun Heo, 2005-08-05, 1 file, -0/+1)
My patch in commit fa72b903f75e4f0f0b2c2feed093005167da4023 incorrectly removed blk_queue_tag->real_max_depth.

The original resize implementation was incorrect in the following points.

- The actual allocation size of tag_index was shorter than real_max_size, but assumed to be of the same size, possibly causing memory access beyond the allocated area.
- Bits in tag_map between max_depth and real_max_depth were initialized to 1's, making the tags permanently reserved.

In an attempt to fix the above two bugs, I had removed the allocation optimization in init_tag_map and real_max_size. Tag map/index were allocated and freed immediately during resize. Unfortunately, I wasn't considering that tag map/index can be resized dynamically with tags beyond new_depth active. This led to accessing a freed area after shrinking tags and led to the following bug reporting thread on linux-scsi.

http://marc.theaimsgroup.com/?l=linux-scsi&m=112319898111885&w=2

To fix the problem, I've revived real_max_depth without the allocation optimization in init_tag_map, and Andrew Vasquez confirmed that the problem was fixed. As Jens is not going to be available for a week, he asked me to make sure that this patch reaches you.

http://marc.theaimsgroup.com/?l=linux-scsi&m=112325778530886&w=2

Also, a comment was added to make sure that real_max_depth is needed for dynamic shrinking.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] blk: light iocontext ops (Nick Piggin, 2005-06-28, 1 file, -0/+1)
get_io_context needlessly turned off interrupts and checked for racing io context creations. Neither is needed, because the io context can only be created while in process context of the current process.

Also, split the function in two. A light version, current_io_context, does not elevate the reference count specifically, but can be used when in process context, because the process holds a reference itself.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
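A sketch of the distinction, both calls from process context:

    /* light: no reference bump; valid only while the current task lives,
       which covers per-request bookkeeping on the submission path */
    struct io_context *ioc = current_io_context(GFP_KERNEL);

    /* heavy: takes a reference, for callers that stash the pointer beyond
       the life of the submitting task; must be balanced with a put */
    struct io_context *ioc2 = get_io_context(GFP_KERNEL);
    /* ... use ioc2 ... */
    put_io_context(ioc2);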
* [PATCH] Update cfq io scheduler to time sliced design (Jens Axboe, 2005-06-27, 1 file, -8/+17)
This updates the CFQ io scheduler to the new time sliced design (cfq v3). It provides full process fairness, while giving excellent aggregate system throughput even for many competing processes. It supports io priorities, either inherited from the cpu nice value or set directly with the ioprio_get/set syscalls. The latter closely mimic set/getpriority. This import is based on my latest from -mm.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
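The new syscalls had no glibc wrappers at the time, so raw syscall(2) is the usual route. A hedged userspace sketch of dropping the current process to the lowest best-effort priority, with the ABI constants transcribed inline and SYS_ioprio_set assumed to be defined by the installed headers:

    #include <sys/syscall.h>
    #include <unistd.h>

    #define IOPRIO_CLASS_SHIFT  13
    #define IOPRIO_CLASS_BE     2       /* best-effort class */
    #define IOPRIO_WHO_PROCESS  1

    int main(void)
    {
        /* data 7 = lowest priority within the best-effort class;
           who = 0 means the calling process */
        int ioprio = (IOPRIO_CLASS_BE << IOPRIO_CLASS_SHIFT) | 7;

        return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, ioprio);
    }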
* [PATCH] drivers/block/ll_rw_blk.c: cleanups (Adrian Bunk, 2005-06-25, 1 file, -6/+0)
This patch contains the following cleanups:

- make needlessly global code static
- remove the following unused global functions:
  - blkdev_scsi_issue_flush_fn
  - __blk_attempt_remerge
- remove the following unused EXPORT_SYMBOLs:
  - blk_phys_contig_segment
  - blk_hw_contig_segment
  - blkdev_scsi_issue_flush_fn
  - __blk_attempt_remerge

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] blk: remove BLK_TAGS_{PER_LONG|MASK} (Tejun Heo, 2005-06-23, 1 file, -3/+0)
Replace BLK_TAGS_PER_LONG with BITS_PER_LONG and remove unused BLK_TAGS_MASK.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] blk: remove blk_queue_tag->real_max_depth optimization (Tejun Heo, 2005-06-23, 1 file, -1/+0)
blk_queue_tag->real_max_depth was used to optimize out unnecessary allocations/frees on tag resize. However, the whole thing was very broken:

- tag_map was never allocated to real_max_depth, resulting in access beyond the end of the map
- bits in [max_depth..real_max_depth] were set when initializing a map and copied when resizing, resulting in pre-occupied tags

As the gain of the optimization is very small (well, almost nil), remove the whole thing.

Signed-off-by: Tejun Heo <htejun@gmail.com>
Acked-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] NUMA aware block device control structure allocation (Christoph Lameter, 2005-06-23, 1 file, -1/+5)
Patch to allocate the control structures for IDE devices on the node of the device itself (for NUMA systems). The patch depends on the Slab API change patch by Manfred and me (in mm) and the pcidev_to_node patch that I posted today. Does some realignment too.

Signed-off-by: Justin M. Forbes <jmforbes@linuxtx.org>
Signed-off-by: Christoph Lameter <christoph@lameter.com>
Signed-off-by: Pravin Shelar <pravin@calsoftinc.com>
Signed-off-by: Shobhit Dayal <shobhit@calsoftinc.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
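The driver-visible entry point is used along these lines (a sketch; pcibus_to_node() is from the companion patch mentioned above, and pdev, mydrv_request_fn and mydrv_lock are hypothetical):

    request_queue_t *q;

    /* allocate the queue and its control structures on the device's node */
    q = blk_init_queue_node(mydrv_request_fn, &mydrv_lock,
                            pcibus_to_node(pdev->bus));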
* [SCSI] remove requeue feature from blk_insert_request() (Tejun Heo, 2005-05-20, 1 file, -1/+1)
blk_insert_request() has an unobvious feature of requeueing a request, setting REQ_SPECIAL|REQ_SOFTBARRIER. The SCSI midlayer was the only user and, as previous patches removed the usage, remove the feature from blk_insert_request(). Only special requests should be queued with blk_insert_request(); all requeueing should go through blk_requeue_request().

Signed-off-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
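After this change the two paths separate cleanly (a sketch):

    /* driver-special command, e.g. triggered from a management interface:
       front-insert it with its payload */
    blk_insert_request(q, rq, 1 /* at_head */, data);

    /* a request already taken off the queue that must be retried: the only
       sanctioned way to put it back */
    blk_requeue_request(q, rq);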
* [PATCH] fix NMI lockup with CFQ scheduler (2005-04-16, 1 file, -1/+4)
The current problem seen is that the queue lock is actually in the SCSI device structure, so when that structure is freed on device release, we go boom if the queue tries to access the lock again. The fix here is to move the lock from the scsi_device to the queue.

Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
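The rule this leaves drivers with: the spinlock handed to blk_init_queue() must outlive the queue, so keep it in a structure with at least queue lifetime. A sketch with 2.6.12-era initializers (mydrv_* names hypothetical):

    static spinlock_t mydrv_lock = SPIN_LOCK_UNLOCKED;

    q = blk_init_queue(mydrv_request_fn, &mydrv_lock);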
* Linux-2.6.12-rc2 [tag: v2.6.12-rc2] (Linus Torvalds, 2005-04-16, 1 file, -0/+759)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!