path: root/block/cfq-iosched.c
Commit message (Author, Date; Files, Lines)
* Fix common misspellings  (Lucas De Marchi, 2011-03-31; 1 file, -1/+1)
    Fixes generated by 'codespell' and manually reviewed.
    Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
* cfq-iosched: removing unnecessary think time checking  (Li, Shaohua, 2011-03-23; 1 file, -9/+4)
    Remove the think time checking. A queue with a high think time most likely
    dispatches a few requests and then goes away, so limiting such a queue seems
    meaningless. This also simplifies the code. Suggested by Vivek.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: Don't clear queue stats when preempt.  (Justin TerAvest, 2011-03-23; 1 file, -22/+17)
    For v2, I added back lines to cfq_preempt_queue() that were removed during
    updates for accounting unaccounted_time. Thanks for pointing out that I'd
    missed these, Vivek.
    Previous commit "cfq-iosched: Don't set active queue in preempt" wrongly
    cleared stats for preempting queues when it shouldn't have, because when we
    choose a queue to preempt, it still isn't necessarily scheduled next.
    Thanks to Vivek Goyal for figuring this out and understanding how the
    preemption code works.
    Signed-off-by: Justin TerAvest <teravest@google.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: Don't set active queue in preempt  (Justin TerAvest, 2011-03-22; 1 file, -16/+23)
    Commit "Add unaccounted time to timeslice_used" changed the behavior of
    cfq_preempt_queue to set cfqq active. Vivek pointed out that other preemption
    rules might get involved, so we shouldn't manually set which queue is active.
    This cleans up the code to just clear the queue stats at preemption time.
    Signed-off-by: Justin TerAvest <teravest@google.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: Don't update group weights when on service tree  (Justin TerAvest, 2011-03-17; 1 file, -12/+41)
    Version 3 is updated to apply to for-2.6.39/core. For version 2, I took
    Vivek's advice and made sure we update the group weight from
    cfq_group_service_tree_add().
    If a weight is updated while a group is on the service tree, the calculation
    of the service tree's total weight can be adjusted improperly, which either
    leads to bad service tree weights or potentially crashes (if total_weight
    becomes 0). This patch defers updates to the weight until a group is off the
    service tree.
    Signed-off-by: Justin TerAvest <teravest@google.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
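    A minimal sketch of the deferral described above, with assumed field and
    function names rather than the actual cfq_group layout: the new weight is
    only folded in while the group is off the service tree, so the tree's
    total weight stays consistent.

        #include <stdbool.h>

        struct group {
            unsigned int weight;        /* weight currently accounted in the tree */
            unsigned int new_weight;    /* latest value written by the user       */
            bool needs_update;          /* a weight change is pending             */
            bool on_st;                 /* group is currently on the service tree */
        };

        /* Called when the cgroup weight file is written: just record the request. */
        static void group_set_weight(struct group *g, unsigned int w)
        {
            g->new_weight = w;
            g->needs_update = true;
        }

        /* Called from service-tree add: a safe point to apply the pending change. */
        static void group_service_tree_add(struct group *g, unsigned int *total_weight)
        {
            if (g->needs_update && !g->on_st) {
                g->weight = g->new_weight;
                g->needs_update = false;
            }
            g->on_st = true;
            *total_weight += g->weight;
        }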
* blk-cgroup: Add unaccounted time to timeslice_used.  (Justin TerAvest, 2011-03-12; 1 file, -8/+13)
    There are two kinds of time that tasks are not charged for: the first seek
    and the extra time slice used over the allocated timeslice. Both of these
    are exported as a new unaccounted_time stat.
    I think it would be good to have this reported in 'time' as well, but that
    is probably a separate discussion.
    Signed-off-by: Justin TerAvest <teravest@google.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* Merge branch 'for-2.6.39/stack-plug' into for-2.6.39/core  (Jens Axboe, 2011-03-10; 1 file, -8/+0)
    Conflicts:
        block/blk-core.c
        block/blk-flush.c
        drivers/md/raid1.c
        drivers/md/raid10.c
        drivers/md/raid5.c
        fs/nilfs2/btnode.c
        fs/nilfs2/mdt.c
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block: remove per-queue plugging  (Jens Axboe, 2011-03-10; 1 file, -8/+0)
    Code has been converted over to the new explicit on-stack plugging, and
    delay users have been converted to use the new API for that. So let's kill
    off the old plugging along with aops->sync_page().
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | Merge branch 'block-for-2.6.39-core' of ssh://master.kernel.org/pub/scm/linux/kernel/git/tj/misc into for-2.6.39/core  (Jens Axboe, 2011-03-07; 1 file, -3/+3)
| * Merge branch 'for-linus' of ../linux-2.6-block into block-for-2.6.39/core  (Tejun Heo, 2011-03-04; 1 file, -3/+3)
    This merge creates two sets of conflicts. One is simple context conflicts
    caused by the removal of throtl_scheduled_delayed_work() in for-linus and
    the removal of throtl_shutdown_timer_wq() in for-2.6.39/core.
    The other is caused by commit 255bb490c8 (block: blk-flush shouldn't call
    directly into q->request_fn() __blk_run_queue()) in for-linus clashing with
    the FLUSH reimplementation in for-2.6.39/core. The conflict isn't trivial
    but the resolution is straightforward.
    * __blk_run_queue() calls in flush_end_io() and flush_data_end_io() should
      be called with @force_kblockd set to %true.
    * elv_insert() in blk_kick_flush() should use %ELEVATOR_INSERT_REQUEUE.
    Both changes are to avoid invoking ->request_fn() directly from the request
    completion path and closely match the changes in commit 255bb490c8.
    Signed-off-by: Tejun Heo <tj@kernel.org>
| | * block: add @force_kblockd to __blk_run_queue()  (Tejun Heo, 2011-03-02; 1 file, -3/+3)
    __blk_run_queue() automatically either calls q->request_fn() directly or
    schedules kblockd, depending on whether the function is being called
    recursively. The blk-flush implementation needs to be able to explicitly
    choose kblockd. Add @force_kblockd.
    All the current users are converted to specify %false for the parameter,
    and this patch doesn't introduce any behavior change.
    stable: This is a prerequisite for fixing an ide oops caused by the new
    blk-flush implementation.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Cc: Jan Beulich <JBeulich@novell.com>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
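    A rough, self-contained model of the behavior described above; the real
    __blk_run_queue() is more involved, and the names below are simplified
    stand-ins rather than the kernel's code.

        #include <stdbool.h>
        #include <stdio.h>

        struct request_queue {
            void (*request_fn)(struct request_queue *q);
            bool recursing;                    /* stand-in for the recursion check */
        };

        static void defer_to_kblockd(struct request_queue *q)
        {
            (void)q;
            printf("deferred to kblockd\n");   /* stand-in for scheduling kblockd */
        }

        /* With @force_kblockd, callers such as the flush completion paths can
         * insist on deferral even when a direct q->request_fn() call would
         * otherwise be made. */
        static void run_queue(struct request_queue *q, bool force_kblockd)
        {
            if (force_kblockd || q->recursing)
                defer_to_kblockd(q);
            else
                q->request_fn(q);
        }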
* | | cfq-iosched: Fix update_vdisktime logic  (Gui Jianfeng, 2011-03-07; 1 file, -4/+2)
    The update_vdisktime logic has been broken since commit
    b54ce60eb7f61f8e314b8b241b0469eda3bb1d42: st->min_vdisktime never makes any
    progress. Fix it.
    Thanks to Vivek for pointing it out.
    Signed-off-by: Gui Jianfeng <guijianfen@cn.fujitsu.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | | cfq-iosched: give busy sync queue no dispatch limit  (Shaohua Li, 2011-03-07; 1 file, -2/+24)
    If there are a sync queue and an async queue and the sync queue's think time
    is small, we can ignore the sync queue's dispatch quantum. Because the sync
    queue will always preempt the async queue, we don't need to care about the
    async queue's latency.
    This fixes a performance regression in the aiostress test that was
    introduced by commit f8ae6e3eb825. The issue exists even without that
    commit, but the commit amplifies the impact.
    The initial post did the same optimization for the RT queue too, but since I
    have no real workload for it, Vivek suggested dropping it.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
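    A sketch of the rule described above, with assumed names: when a sync queue
    with a short think time competes with a busy async queue, its dispatch
    quantum can be ignored because the sync queue will preempt the async one
    anyway.

        #include <stdbool.h>

        struct dispatch_ctx {
            bool cfqq_is_sync;            /* the currently active queue is sync    */
            bool async_queue_busy;        /* an async queue is also backlogged     */
            unsigned long think_time;     /* mean think time of the active queue   */
            unsigned long idle_threshold; /* what counts as a "small" think time   */
        };

        static bool ignore_dispatch_quantum(const struct dispatch_ctx *c)
        {
            return c->cfqq_is_sync && c->async_queue_busy &&
                   c->think_time < c->idle_threshold;
        }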
* | | cfq-iosched: fix race in cfq_set_request()  (Jens Axboe, 2011-03-07; 1 file, -2/+1)
    We need to hold the queue lock over the reference increment; it's not
    atomic anymore.
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | cfq-iosched: Always provide group isolation.  (Justin TerAvest, 2011-03-01; 1 file, -36/+1)
    Effectively, make group_isolation=1 the default and remove the tunable. The
    group_isolation=0 setting existed because, by default, we idle on the
    sync-noidle tree, and on fast devices this can be very harmful for
    throughput. However, this problem can also be addressed by tuning slice_idle
    and possibly group_idle on faster storage devices.
    This change simplifies the CFQ code by removing the feature entirely.
    Signed-off-by: Justin TerAvest <teravest@google.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | Merge commit 'v2.6.38-rc6' into for-2.6.39/core  (Jens Axboe, 2011-03-01; 1 file, -56/+84)
    Conflicts:
        block/cfq-iosched.c
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * cfq-iosched: Don't wait if queue already has requests.  (Justin TerAvest, 2011-02-09; 1 file, -0/+4)
    Commit 7667aa0630407bc07dc38dcc79d29cc0a65553c1 added logic to wait for the
    last queue of the group to become busy (have at least one request), so that
    the group does not lose out for not being continuously backlogged. The
    commit did not check for the condition that the last queue already has some
    requests. As a result, if the queue already has requests, wait_busy is set.
    Later on, cfq_select_queue() checks the flag and decides that since the
    queue has a request now and wait_busy is set, the queue is expired. This
    results in early expiration of the queue.
    This patch fixes the problem by adding a check to see if the queue already
    has requests. If it does, wait_busy is not set. As a result, time slices do
    not expire early.
    Queues with more than one request are usually buffered writers. Testing
    shows improvement in isolation between buffered writers.
    Cc: stable@kernel.org
    Signed-off-by: Justin TerAvest <teravest@google.com>
    Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
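    A sketch (assumed field names) of the added condition: wait_busy is only
    armed for the last queue of a group when that queue is actually empty, so
    an already backlogged queue no longer gets expired early by
    cfq_select_queue().

        #include <stdbool.h>

        struct wait_busy_ctx {
            bool last_queue_of_group;   /* this cfqq is the group's only busy queue */
            int queued_requests;        /* requests currently queued on the cfqq    */
        };

        static bool should_set_wait_busy(const struct wait_busy_ctx *q)
        {
            if (!q->last_queue_of_group)
                return false;
            /* the new check: a queue that already has requests has nothing to wait for */
            if (q->queued_requests > 0)
                return false;
            return true;
        }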
| * cfq: rename a function to give it a more appropriate name  (Vivek Goyal, 2011-01-19; 1 file, -3/+3)
    o Rename a function to give it a more appropriate name. We are calculating
      the cfq queue slice, but the function name gives the impression that the
      cfq group slice length is being calculated.
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block cfq: compensate preempted queue even if it has no slice assigned  (Shaohua Li, 2011-01-14; 1 file, -4/+15)
    If a queue is preempted before it gets a slice assigned, the queue doesn't
    get compensation, which looks unfair. For such a queue, we compensate it for
    a whole slice.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * block cfq: make queue preempt work for queues from different workloads  (Shaohua Li, 2011-01-14; 1 file, -0/+9)
    I got this:
        fio-874 [007] 2157.724514: 8,32 m N cfq874 preempt
        fio-874 [007] 2157.724519: 8,32 m N cfq830 slice expired t=1
        fio-874 [007] 2157.724520: 8,32 m N cfq830 sl_used=1 disp=0 charge=1 iops=0 sect=0
        fio-874 [007] 2157.724521: 8,32 m N cfq830 set_active wl_prio:0 wl_type:0
        fio-874 [007] 2157.724522: 8,32 m N cfq830 Not idling. st->count:1
    cfq830 is an async queue, preempted by the sync queue cfq874. But since we
    have the cfqg->saved_workload_slice mechanism, the preempt is a nop. It
    looks like our preempt is currently completely broken if the two queues are
    not from the same workload type. The patch below fixes it.
    This might starve async queues, but it's what our old code did before cgroup
    support was added.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * Merge branch 'for-2.6.38/core' of git://git.kernel.dk/linux-2.6-block  (Linus Torvalds, 2011-01-13; 1 file, -54/+58)
    * 'for-2.6.38/core' of git://git.kernel.dk/linux-2.6-block: (43 commits)
        block: ensure that completion error gets properly traced
        blktrace: add missing probe argument to block_bio_complete
        block cfq: don't use atomic_t for cfq_group
        block cfq: don't use atomic_t for cfq_queue
        block: trace event block fix unassigned field
        block: add internal hd part table references
        block: fix accounting bug on cross partition merges
        kref: add kref_test_and_get
        bio-integrity: mark kintegrityd_wq highpri and CPU intensive
        block: make kblockd_workqueue smarter
        Revert "sd: implement sd_check_events()"
        block: Clean up exit_io_context() source code.
        Fix compile warnings due to missing removal of a 'ret' variable
        fs/block: type signature of major_to_index(int) to major_to_index(unsigned)
        block: convert !IS_ERR(p) && p to !IS_ERR_NOR_NULL(p)
        cfq-iosched: don't check cfqg in choose_service_tree()
        fs/splice: Pull buf->ops->confirm() from splice_from_pipe actors
        cdrom: export cdrom_check_events()
        sd: implement sd_check_events()
        sr: implement sr_check_events()
        ...
| | * block cfq: don't use atomic_t for cfq_group  (Shaohua Li, 2011-01-07; 1 file, -11/+12)
    cfq_group->ref is used with the queue_lock held (the only exception is
    cfq_set_request, which looks like a bug to me), so the ref doesn't need to
    be atomic, and the atomic operation is slower.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| | * block cfq: don't use atomic_t for cfq_queue  (Shaohua Li, 2011-01-07; 1 file, -11/+16)
    cfq_queue->ref is used with the queue_lock held, so the ref doesn't need to
    be atomic, and the atomic operation is slower.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
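    A small sketch of the pattern used by this and the previous commit
    (simplified types, not the kernel structures): because every reference
    manipulation happens with the queue lock held, a plain int is sufficient
    and avoids the cost of atomic operations.

        #include <pthread.h>

        struct refcounted {
            int ref;                      /* was atomic_t; now protected by the lock */
        };

        static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

        /* Caller holds queue_lock, as the CFQ paths already do. */
        static void obj_get(struct refcounted *o)
        {
            o->ref++;
        }

        /* Returns nonzero when the last reference is dropped; caller holds queue_lock. */
        static int obj_put(struct refcounted *o)
        {
            return --o->ref == 0;
        }

        static void example_get(struct refcounted *o)
        {
            pthread_mutex_lock(&queue_lock);
            obj_get(o);
            pthread_mutex_unlock(&queue_lock);
        }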
| | * cfq-iosched: don't check cfqg in choose_service_tree()  (Gui Jianfeng, 2010-12-17; 1 file, -6/+0)
    When cfq_choose_cfqg() is called in select_queue(), there must be at least
    one backlogged CFQ queue waiting for dispatching, hence there must be at
    least one backlogged CFQ group on the service tree. So we never call
    choose_service_tree() with cfqg == NULL.
    Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| | * block cfq: select new workload if priority changed  (Shaohua Li, 2010-12-13; 1 file, -0/+5)
    If the priority is changed, continuing to check workload_expires and the
    service tree count of the previous workload does not make sense. We should
    always choose the workload with the lowest key of the new priority in such
    a case.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| | * cfq-iosched: Get rid of on_st flag  (Gui Jianfeng, 2010-11-30; 1 file, -4/+1)
    Whether a CFQ group is on a service tree can be checked via "cfqg->rb_node",
    so there's no need to maintain an extra flag here.
    Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| | * cfq-iosched: Get rid of st->active  (Gui Jianfeng, 2010-11-30; 1 file, -14/+1)
    When a cfq group is running, it won't be dequeued from the service tree, so
    there's no need to store the active one in st->active. Just get rid of it.
    Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| | * Merge branch 'v2.6.37-rc2' into for-2.6.38/core  (Jens Axboe, 2010-11-16; 1 file, -19/+20)
| | * | cfq-iosched: don't schedule a dispatch for a non-idle queue  (Shaohua Li, 2010-11-09; 1 file, -18/+1)
    Vivek suggests we don't need to schedule a dispatch when an idle queue
    becomes non-idle. And he is right: cfq_should_preempt already covers the
    logic.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| | * | cfq-iosched: don't idle if a deep seek queue is slow  (Shaohua Li, 2010-11-08; 1 file, -0/+11)
    If a deep seek queue delivers requests slowly but the disk is much faster,
    idling for the queue just wastes disk throughput. If the queue delivers all
    its requests before half its slice is used, the patch disables idling for
    it.
    In my test, the application delivers 32 requests at a time, the disk can
    accept 128 requests at maximum, and the disk is fast. Without the patch, the
    throughput is just around 30m/s, while with it, the speed is about 80m/s.
    The disk is an SSD, but is detected as a rotational disk. I could configure
    it as an SSD, but I thought the deep seek queue logic should be fixed too,
    for example, considering a fast raid.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
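    A sketch of the heuristic described above (names are illustrative): if a
    queue marked as deep empties itself before half of its time slice has
    elapsed, the application is slower than the disk, so idling for it is
    skipped.

        #include <stdbool.h>

        struct seek_queue {
            bool deep;                   /* queue was classified as deep-seeking    */
            int queued;                  /* requests still queued for this queue    */
            unsigned long slice_start;   /* timestamp (e.g. jiffies) of slice start */
            unsigned long slice_len;     /* length of the allotted slice            */
        };

        static bool queue_is_slow(const struct seek_queue *q, unsigned long now)
        {
            /* drained everything before half the slice: don't bother idling */
            return q->deep && q->queued == 0 &&
                   (now - q->slice_start) < q->slice_len / 2;
        }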
| | * | cfq-iosched: schedule dispatch for noidle queue  (Shaohua Li, 2010-11-08; 1 file, -1/+22)
    A queue is idle at cfq_dispatch_requests(), but it becomes noidle later.
    Unless another task explicitly does an unplug or all requests are drained,
    we will not deliver requests to the disk even if cfq_arm_slice_timer doesn't
    make the queue idle. For example, cfq_should_idle() returns true because
    service_tree->count == 1, and then other queues are added.
    Note, I didn't see obvious performance impacts so far with the patch, but I
    just thought this could be a problem.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| | * | cfq-iosched: do cleanup  (Shaohua Li, 2010-11-08; 1 file, -8/+8)
    Some functions should return boolean.
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * | | tree-wide: fix comment/printk typos  (Uwe Kleine-König, 2010-11-01; 1 file, -1/+1)
    "gadget", "through", "command", "maintain", "maintain", "controller",
    "address", "between", "initiali[zs]e", "instead", "function", "select",
    "already", "equal", "access", "management", "hierarchy", "registration",
    "interest", "relative", "memory", "offset", "already",
    Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
* | | block: share request flush fields with elevator_private  (Mike Snitzer, 2011-02-11; 1 file, -9/+9)
    Flush requests are never put on the IO scheduler. Convert request
    structure's elevator_private* into an array and have the flush fields share
    a union with it.
    Reclaim the space lost in 'struct request' by moving 'completion_data' back
    in the union with 'rb_node'.
    Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Acked-by: Tejun Heo <tj@kernel.org>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | Merge branch 'for-2.6.37/core' of git://git.kernel.dk/linux-2.6-block  (Linus Torvalds, 2010-10-22; 1 file, -19/+20)
    * 'for-2.6.37/core' of git://git.kernel.dk/linux-2.6-block: (39 commits)
        cfq-iosched: Fix a gcc 4.5 warning and put some comments
        block: Turn bvec_k{un,}map_irq() into static inline functions
        block: fix accounting bug on cross partition merges
        block: Make the integrity mapped property a bio flag
        block: Fix double free in blk_integrity_unregister
        block: Ensure physical block size is unsigned int
        blkio-throttle: Fix possible multiplication overflow in iops calculations
        blkio-throttle: limit max iops value to UINT_MAX
        blkio-throttle: There is no need to convert jiffies to milli seconds
        blkio-throttle: Fix link failure failure on i386
        blkio: Recalculate the throttled bio dispatch time upon throttle limit change
        blkio: Add root group to td->tg_list
        blkio: deletion of a cgroup was causes oops
        blkio: Do not export throttle files if CONFIG_BLK_DEV_THROTTLING=n
        block: set the bounce_pfn to the actual DMA limit rather than to max memory
        block: revert bad fix for memory hotplug causing bounces
        Fix compile error in blk-exec.c for !CONFIG_DETECT_HUNG_TASK
        block: set the bounce_pfn to the actual DMA limit rather than to max memory
        block: Prevent hang_check firing during long I/O
        cfq: improve fsync performance for small files
        ...
    Fix up trivial conflicts due to __rcu sparse annotation in
    include/linux/genhd.h
| * cfq-iosched: Fix a gcc 4.5 warning and put some comments  (Vivek Goyal, 2010-10-22; 1 file, -3/+13)
    - Andi encountered the following warning with gcc 4.5:
        linux/block/cfq-iosched.c: In function ‘cfq_dispatch_requests’:
        linux/block/cfq-iosched.c:2156:3: warning: array subscript is above array bounds
    - The warning happens due to the following code:
        slice = group_slice * count /
            max_t(unsigned, cfqg->busy_queues_avg[cfqd->serving_prio],
                  cfq_group_busy_queues_wl(cfqd->serving_prio, cfqd, cfqg));
      gcc is complaining about cfqg->busy_queues_avg[] being indexed by CFQ prio
      classes (RT, BE and IDLE) while the array size is only 2.
    - At run time, we never access cfqg->busy_queues_avg[IDLE] and return from
      the function before this code is hit.
    - To fix the warning, increase the array size, though the extra slot will
      remain unused. This patch also puts some comments to clarify some of the
      confusion.
    - I have taken Jens's patch and modified it a bit.
    - Compile tested with gcc 4.4 and boot tested. I don't have gcc 4.5 running;
      Andi, can you please test it with gcc 4.5 to make sure it works.
    Reported-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Acked-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
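    An illustrative reconstruction of the fix described above (not the exact
    kernel declaration): the per-class average array is sized by the full set of
    priority classes, so the compiler can prove the index is in bounds even
    though the IDLE slot stays unused at run time.

        enum wl_prio { RT_WORKLOAD, BE_WORKLOAD, IDLE_WORKLOAD, PRIO_NR };

        struct group_stats {
            /* before: unsigned int busy_queues_avg[2];  -- gcc 4.5 warns that an
             * index of IDLE_WORKLOAD would be out of bounds */
            unsigned int busy_queues_avg[PRIO_NR];   /* after: one slot per class */
        };

        static unsigned int busy_avg(const struct group_stats *g, enum wl_prio prio)
        {
            return g->busy_queues_avg[prio];
        }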
| * blkio: Recalculate the throttled bio dispatch time upon throttle limit change  (Vivek Goyal, 2010-10-01; 1 file, -2/+2)
    o Currently any cgroup throttle limit changes are processed asynchronously,
      and the change does not take effect till a new bio is dispatched from the
      same group.
    o It might happen that a user sets a ridiculously low limit on throttling,
      say 1 byte per second on reads. In such cases simple operations like
      mounting a disk can wait for a very long time.
    o Once a bio is throttled, there is no easy way to come out of that wait,
      even if the user increases the read limit later.
    o This patch fixes it. Now if a user changes the cgroup limits, we
      recalculate the bio dispatch time according to the new limits.
    o We can't take the queue lock under blkcg_lock, hence after the change I
      wake up the dispatch thread again, which recalculates the time. So there
      are some variables being synchronized across two threads without a lock,
      and I had to make use of barriers. Hoping I have used barriers correctly;
      any review of the memory barrier code especially will help.
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * cfq: improve fsync performance for small files  (Corrado Zoccolo, 2010-09-20; 1 file, -14/+4)
    Fsync performance for small files achieved by cfq on high-end disks is lower
    than what deadline can achieve, due to idling introduced between the sync
    write happening in process context and the journal commit. Moreover, when
    competing with a sequential reader, a process writing small files and
    fsync-ing them is starved.
    This patch fixes the two problems by:
    - marking journal commits as WRITE_SYNC, so that they get the REQ_NOIDLE
      flag set,
    - forcing all queues that have REQ_NOIDLE requests to be put in the noidle
      tree.
    Having the queue associated with the fsync-ing process and the one
    associated with journal commits in the noidle tree allows:
    - switching between them without idling,
    - fairness vs. competing idling queues, since they will be serviced only
      after the noidle tree expires its slice.
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Tested-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
| * blk-cgroup: Prepare the base for supporting more than one IO control policy  (Vivek Goyal, 2010-09-16; 1 file, -0/+1)
    o This patch prepares the base for introducing new IO control policies.
      Currently all the code is written knowing there is only one policy, and
      that is proportional bandwidth. Create the infrastructure for newer
      policies to come in.
    o Also there were many functions which were generated using macros. It was
      very confusing. Got rid of those.
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | cfq-iosched: fix a kernel OOPs when usb key is inserted  (Vivek Goyal, 2010-09-21; 1 file, -3/+13)
    Mike reported a kernel crash when a usb key hotplug is performed while all
    kernel threads are not in the root cgroup and are running in one of the
    child cgroups of the blkio controller.
        BUG: unable to handle kernel NULL pointer dereference at 0000002c
        IP: [<c11c7b08>] cfq_get_queue+0x232/0x412
        *pde = 00000000
        Oops: 0000 [#1] PREEMPT
        last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/host3/scsi_host/host3/uevent
        [..]
        Pid: 30039, comm: scsi_scan_3 Not tainted 2.6.35.2-fg.roam #1 Volvi2 /Aspire 4315
        EIP: 0060:[<c11c7b08>] EFLAGS: 00010086 CPU: 0
        EIP is at cfq_get_queue+0x232/0x412
        EAX: f705f9c0 EBX: e977abac ECX: 00000000 EDX: 00000000
        ESI: f00da400 EDI: f00da4ec EBP: e977a800 ESP: dff8fd00
        DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
        Process scsi_scan_3 (pid: 30039, ti=dff8e000 task=f6b6c9a0 task.ti=dff8e000)
        Stack:
        00000000 00000000 00000001 01ff0000 f00da508 00000000 f00da524 f00da540
        <0> e7994940 dd631750 f705f9c0 e977a820 e977ac44 f00da4d0 00000001 f6b6c9a0
        <0> 00000010 00008010 0000000b 00000000 00000001 e977a800 dd76fac0 00000246
        Call Trace:
        [<c11c7f10>] ? cfq_set_request+0x228/0x34c
        [<c11c7ce8>] ? cfq_set_request+0x0/0x34c
        [<c11bb3b9>] ? elv_set_request+0xf/0x1c
        [<c11bdd51>] ? get_request+0x1ad/0x22f
        [<c11bddf2>] ? get_request_wait+0x1f/0x11a
        [<c11d013b>] ? kvasprintf+0x33/0x3b
        [<c127b537>] ? scsi_execute+0x1d/0x103
        [<c127b675>] ? scsi_execute_req+0x58/0x83
        [<c127c391>] ? scsi_probe_and_add_lun+0x188/0x7c2
        [<c12718c6>] ? attribute_container_add_device+0x15/0xfa
        [<c11c95d1>] ? kobject_get+0xf/0x13
        [<c126d1db>] ? get_device+0x10/0x14
        [<c127be93>] ? scsi_alloc_target+0x217/0x24d
        [<c127cbd8>] ? __scsi_scan_target+0x95/0x480
        [<c10204eb>] ? dequeue_entity+0x14/0x1fe
        [<c1020491>] ? update_curr+0x165/0x1ab
        [<c1020491>] ? update_curr+0x165/0x1ab
        [<c127d00d>] ? scsi_scan_channel+0x4a/0x76
        [<c127d0b0>] ? scsi_scan_host_selected+0x77/0xad
        [<c127d13c>] ? do_scan_async+0x0/0x11a
        [<c127d137>] ? do_scsi_scan_host+0x51/0x56
        [<c127d13c>] ? do_scan_async+0x0/0x11a
        [<c127d14a>] ? do_scan_async+0xe/0x11a
        [<c127d13c>] ? do_scan_async+0x0/0x11a
        [<c10354c5>] ? kthread+0x5e/0x63
        [<c1035467>] ? kthread+0x0/0x63
        [<c1002af6>] ? kernel_thread_helper+0x6/0x10
        Code: 44 24 1c 54 83 44 24 18 54 83 fa 03 75 94 8b 06 c7 86 64 02 00 00 01 00
        00 00 83 e0 03 09 f0 89 06 8b 44 24 28 8b 90 58 01 00 00 <8b> 42 2c 85 c0 75
        03 8b 42 08 8d 54 24 48 52 8d 4c 24 50 51 68
        EIP: [<c11c7b08>] cfq_get_queue+0x232/0x412 SS:ESP 0068:dff8fd00
        CR2: 000000000000002c
        ---[ end trace 9a88306573f69b12 ]---
    The problem here is that we don't have bdi->dev information available when a
    kernel thread does some IO. Hence when dev_name() tries to access bdi->dev,
    it crashes.
    This problem does not happen if kernel threads are in the root group, as the
    root group is statically allocated at device initialization time and we
    don't hit this piece of code.
    Fix it by delaying the filling of the major and minor number information of
    the device in blk_group. Initially a blk_group is created with 0 as the
    device information, and this information is filled in later once some more
    IO comes in from the same group.
    Reported-by: Mike Kazantsev <mk.fraggod@gmail.com>
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | cfq-iosched: blktrace print per slice sector stats  (Vivek Goyal, 2010-08-23; 1 file, -2/+7)
    o Divyesh had gotten rid of this code in the past. I want to re-introduce it
      as it helps me a lot during debugging.
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Reviewed-by: Divyesh Shah <dpshah@google.com>
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | cfq-iosched: Implement tunable group_idle  (Vivek Goyal, 2010-08-23; 1 file, -9/+56)
    o Implement a new tunable group_idle, which allows idling on the group
      instead of on a cfq queue. Hence one can set slice_idle = 0 and not idle
      on the individual queues but idle on the group. This way, on fast storage
      we can get fairness between groups while overall throughput improves.
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Acked-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* | cfq-iosched: Do group share accounting in IOPS when slice_idle=0  (Vivek Goyal, 2010-08-23; 1 file, -6/+24)
    o Implement another CFQ mode where we charge a group in terms of the number
      of requests dispatched instead of measuring the time. Measuring in terms
      of time is not possible when we are driving deeper queue depths and there
      are requests from multiple cfq queues in the request queue.
    o This mode currently gets activated if one sets slice_idle=0 and the
      associated disk supports NCQ. Again, the idea is that on an NCQ disk with
      idling disabled, most of the queues will dispatch one or more requests,
      then cfq queue expiry happens and we don't have a way to measure time. So
      start providing fairness in terms of IOPS.
    o Currently IOPS mode works only with cfq group scheduling. CFQ follows
      different scheduling algorithms for queue and group scheduling. These IOPS
      stats are used only for group scheduling, hence in non-group mode nothing
      should change.
    o For CFQ group scheduling one can disable slice idling so that we don't
      idle on the queue and drive deeper request queue depths (achieving better
      throughput); at the same time group idle is enabled, so one should get
      service differentiation among groups.
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Acked-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
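    A sketch of the accounting switch described above, with assumed names: in
    IOPS mode (slice_idle == 0 on an NCQ disk), the group is charged by the
    number of requests it dispatched rather than by slice time.

        #include <stdbool.h>

        struct slice_stats {
            unsigned int slice_dispatch;   /* requests dispatched during the slice */
            unsigned long slice_used;      /* wall-clock time actually consumed    */
        };

        static bool iops_mode(unsigned int slice_idle, bool ncq)
        {
            return slice_idle == 0 && ncq;
        }

        /* Amount by which the group's vdisktime is advanced when the slice ends. */
        static unsigned long group_charge(const struct slice_stats *s,
                                          unsigned int slice_idle, bool ncq)
        {
            return iops_mode(slice_idle, ncq) ? s->slice_dispatch : s->slice_used;
        }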
* | cfq-iosched: Do not idle if slice_idle=0  (Vivek Goyal, 2010-08-23; 1 file, -1/+4)
    Do not idle either on the cfq queue or the service tree if slice_idle=0.
    The user does not want any queue or service tree idling. Currently, even if
    slice_idle=0, we were waiting for a request to finish before expiring the
    queue, and that can lead to lower queue depths.
    Acked-by: Jeff Moyer <jmoyer@redhat.com>
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: unify flags for struct bio and struct request  (Christoph Hellwig, 2010-08-07; 1 file, -7/+7)
    Remove the current bio flags and reuse the request flags for the bio, too.
    This allows us to more easily trace the type of I/O from the filesystem down
    to the block driver. There were two flags in the bio that were missing in
    the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. Also I've renamed two request
    flags that had a superfluous RW in them.
    Note that the flags are in bio.h despite having the REQ_ name - as blkdev.h
    includes bio.h, that is the only way to go for now.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* block: remove wrappers for request type/flags  (Christoph Hellwig, 2010-08-07; 1 file, -8/+11)
    Remove all the trivial wrappers for the cmd_type and cmd_flags fields in
    struct request. This allows much easier grepping for different request types
    instead of unwinding through macros.
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq-iosched: Fixed boot warning with BLK_CGROUP=y and CFQ_GROUP_IOSCHED=n  (Vivek Goyal, 2010-06-18; 1 file, -27/+27)
    Hi Jens,
    A few days back Ingo noticed a CFQ boot time warning. This patch fixes it.
    The issue here is that with CFQ_GROUP_IOSCHED=n, CFQ should not really be
    making blkio stat related calls.
    > Hm, it's still not entirely fixed, as of 2.6.35-rc2-00131-g7908a9e. With
    > some configs i get bad spinlock warnings during bootup:
    >
    > [ 28.968013] initcall net_olddevs_init+0x0/0x82 returned 0 after 93750 usecs
    > [ 28.972003] calling b44_init+0x0/0x55 @ 1
    > [ 28.976009] bus: 'pci': add driver b44
    > [ 28.976374] sda:
    > [ 28.978157] BUG: spinlock bad magic on CPU#1, async/0/117
    > [ 28.980000] lock: 7e1c5bbc, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
    > [ 28.980000] Pid: 117, comm: async/0 Not tainted 2.6.35-rc2-tip-01092-g010e7ef-dirty #8183
    > [ 28.980000] Call Trace:
    > [ 28.980000] [<41ba6d55>] ? printk+0x20/0x24
    > [ 28.980000] [<4134b7b7>] spin_bug+0x7c/0x87
    > [ 28.980000] [<4134b853>] do_raw_spin_lock+0x1e/0x123
    > [ 28.980000] [<41ba92ca>] ? _raw_spin_lock_irqsave+0x12/0x20
    > [ 28.980000] [<41ba92d2>] _raw_spin_lock_irqsave+0x1a/0x20
    > [ 28.980000] [<4133476f>] blkiocg_update_io_add_stats+0x25/0xfb
    > [ 28.980000] [<41335dae>] ? cfq_prio_tree_add+0xb1/0xc1
    > [ 28.980000] [<41337bc7>] cfq_insert_request+0x8c/0x425
    Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
* cfq: Don't allow queue merges for queues that have no process references  (Jeff Moyer, 2010-06-17; 1 file, -2/+11)
    Hi,
    A user reported a kernel bug when running a particular program that did the
    following:
        created 32 threads
        - each thread took a mutex, grabbed a global offset, added a buffer size
          to that offset, released the lock
        - read from the given offset in the file
        - created a new thread to do the same
        - exited
    The result is that cfq's close cooperator logic would trigger, as the
    threads were issuing I/O within the mean seek distance of one another. This
    workload managed to routinely trigger a use-after-free bug when walking the
    list of merge candidates for a particular cfqq (cfqq->new_cfqq).
    The logic used for merging queues looks like this:

        static void cfq_setup_merge(struct cfq_queue *cfqq, struct cfq_queue *new_cfqq)
        {
            int process_refs, new_process_refs;
            struct cfq_queue *__cfqq;

            /* Avoid a circular list and skip interim queue merges */
            while ((__cfqq = new_cfqq->new_cfqq)) {
                if (__cfqq == cfqq)
                    return;
                new_cfqq = __cfqq;
            }

            process_refs = cfqq_process_refs(cfqq);
            /*
             * If the process for the cfqq has gone away, there is no
             * sense in merging the queues.
             */
            if (process_refs == 0)
                return;

            /*
             * Merge in the direction of the lesser amount of work.
             */
            new_process_refs = cfqq_process_refs(new_cfqq);
            if (new_process_refs >= process_refs) {
                cfqq->new_cfqq = new_cfqq;
                atomic_add(process_refs, &new_cfqq->ref);
            } else {
                new_cfqq->new_cfqq = cfqq;
                atomic_add(new_process_refs, &cfqq->ref);
            }
        }

    When a merge candidate is found, we add the process references for the queue
    with fewer references to the queue with more. The actual merging of queues
    happens when a new request is issued for a given cfqq. In the case of the
    test program, it only does a single pread call to read in 1MB, so the actual
    merge never happens.
    Normally, this is fine, as when the queue exits, we simply drop the
    references we took on the other cfqqs in the merge chain:

        /*
         * If this queue was scheduled to merge with another queue, be
         * sure to drop the reference taken on that queue (and others in
         * the merge chain). See cfq_setup_merge and cfq_merge_cfqqs.
         */
        __cfqq = cfqq->new_cfqq;
        while (__cfqq) {
            if (__cfqq == cfqq) {
                WARN(1, "cfqq->new_cfqq loop detected\n");
                break;
            }
            next = __cfqq->new_cfqq;
            cfq_put_queue(__cfqq);
            __cfqq = next;
        }

    However, there is a hole in this logic. Consider the following (and keep in
    mind that each I/O keeps a reference to the cfqq):

        q1->new_cfqq = q2    // q2 now has 2 process references
        q3->new_cfqq = q2    // q2 now has 3 process references

        // the process associated with q2 exits
        // q2 now has 2 process references

        // queue 1 exits, drops its reference on q2
        // q2 now has 1 process reference

        // q3 exits, so has 0 process references, and hence drops its references
        // to q2, which leaves q2 also with 0 process references

        q4 comes along and wants to merge with q3

        q3->new_cfqq still points at q2! We follow that link and end up at an
        already freed cfqq.

    So, the fix is to not follow a merge chain if the top-most queue does not
    have a process reference, otherwise any queue in the chain could be already
    freed. I also changed the logic to disallow merging with a queue that does
    not have any process references. Previously, we did this check for one of
    the merge candidates, but not the other. That doesn't really make sense.
    Without the attached patch, my system would BUG within a couple of seconds
    of running the reproducer program. With the patch applied, my system ran the
    program for over an hour without issues.
    This addresses the following bugzilla:
    https://bugzilla.kernel.org/show_bug.cgi?id=16217
    Thanks a ton to Phil Carns for providing the bug report and an excellent
    reproducer.
    [ Note for stable: this applies to 2.6.32/33/34 ].
    Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
    Reported-by: Phil Carns <carns@mcs.anl.gov>
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
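    A simplified sketch of the fixed setup logic (types and helpers reduced to
    the essentials, not the kernel code): the merge is refused when either
    endpoint has no process references, so a chain can never be followed into a
    queue that has already been freed.

        struct queue {
            int ref;             /* object references (I/Os, merge chain links) */
            int process_refs;    /* references held by live processes           */
            struct queue *new_cfqq;
        };

        static void setup_merge(struct queue *cfqq, struct queue *new_cfqq)
        {
            struct queue *__cfqq;

            /* avoid a circular list and skip interim queue merges */
            while ((__cfqq = new_cfqq->new_cfqq)) {
                if (__cfqq == cfqq)
                    return;
                new_cfqq = __cfqq;
            }

            /* the fix: bail out if either queue has lost all process references */
            if (cfqq->process_refs == 0 || new_cfqq->process_refs == 0)
                return;

            /* merge in the direction of the lesser amount of work */
            if (new_cfqq->process_refs >= cfqq->process_refs) {
                cfqq->new_cfqq = new_cfqq;
                new_cfqq->ref += cfqq->process_refs;
            } else {
                new_cfqq->new_cfqq = cfqq;
                cfqq->ref += new_cfqq->process_refs;
            }
        }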
* cfq-iosched: fix an oops caused by slab leak  (Shaohua Li, 2010-05-25; 1 file, -6/+14)
    I got the below oops when unloading cfq-iosched. Consider this scenario:
    queue A merges to B, C merges to D, and B will then be merged to D. Before B
    is merged to D, we split B. We should drop B's reference on D.
    [ 807.768536] =============================================================================
    [ 807.768539] BUG cfq_queue: Objects remaining on kmem_cache_close()
    [ 807.768541] -----------------------------------------------------------------------------
    [ 807.768543]
    [ 807.768546] INFO: Slab 0xffffea0003e6b4e0 objects=26 used=1 fp=0xffff88011d584fd8 flags=0x200000000004082
    [ 807.768550] Pid: 5946, comm: rmmod Tainted: G W 2.6.34-07097-gf4b87de-dirty #724
    [ 807.768552] Call Trace:
    [ 807.768560] [<ffffffff81104e8d>] slab_err+0x8f/0x9d
    [ 807.768564] [<ffffffff811059e1>] ? flush_cpu_slab+0x0/0x93
    [ 807.768569] [<ffffffff8164be52>] ? add_preempt_count+0xe/0xca
    [ 807.768572] [<ffffffff8164bd9c>] ? sub_preempt_count+0xe/0xb6
    [ 807.768577] [<ffffffff81648871>] ? _raw_spin_unlock+0x15/0x30
    [ 807.768580] [<ffffffff8164bd9c>] ? sub_preempt_count+0xe/0xb6
    [ 807.768584] [<ffffffff811061bc>] list_slab_objects+0x9b/0x19f
    [ 807.768588] [<ffffffff8164bf0a>] ? add_preempt_count+0xc6/0xca
    [ 807.768591] [<ffffffff81109e27>] kmem_cache_destroy+0x13f/0x21d
    [ 807.768597] [<ffffffffa000ff13>] cfq_slab_kill+0x1a/0x43 [cfq_iosched]
    [ 807.768601] [<ffffffffa000ffcf>] cfq_exit+0x93/0x9e [cfq_iosched]
    [ 807.768606] [<ffffffff810973a2>] sys_delete_module+0x1b1/0x219
    [ 807.768612] [<ffffffff8102fb5b>] system_call_fastpath+0x16/0x1b
    [ 807.768618] INFO: Object 0xffff88011d584618 @offset=1560
    [ 807.768622] INFO: Allocated in cfq_get_queue+0x11e/0x274 [cfq_iosched] age=7173 cpu=1 pid=5496
    [ 807.768626] =============================================================================
    Cc: stable@kernel.org
    Signed-off-by: Shaohua Li <shaohua.li@intel.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
* cfq-iosched: compact io_context radix_tree  (Konstantin Khlebnikov, 2010-05-24; 1 file, -5/+39)
    Use small consecutive indexes as radix tree keys instead of the sparse cfqd
    address. This change reduces the radix tree depth from 11 (6 on 32-bit
    hosts) to 1 if the host has <= 64 disks under cfq control, or to 0 if there
    is only one disk. So this patch saves 10*560 bytes for each process (5*296
    on 32-bit hosts).
    For each cfqd, allocate a cic index from an ida. To unlink a dead cic from
    the tree without access to cfqd, store the index into ->key (bit 0 -- dead
    mark, bits 1..30 -- index; the ida produces ids in the range 0..2^31-1).
    Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
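    A sketch of the ->key encoding described above (illustrative helpers only,
    not the kernel implementation): the small per-cfqd index lives in bits
    1..30 and bit 0 marks a dead entry, so a cic can be unlinked without
    dereferencing cfqd.

        static void *index_to_key(unsigned int index)
        {
            return (void *)((unsigned long)index << 1);   /* bit 0 clear: live */
        }

        static void *key_mark_dead(void *key)
        {
            return (void *)((unsigned long)key | 1UL);    /* set the dead bit  */
        }

        static unsigned int key_to_index(const void *key)
        {
            return (unsigned int)((unsigned long)key >> 1);
        }

        static int key_is_dead(const void *key)
        {
            return (int)((unsigned long)key & 1UL);
        }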