From 0bbfeb8320421989d3e12bd95fae86b9ac0712aa Mon Sep 17 00:00:00 2001
From: Justin TerAvest
Date: Tue, 1 Mar 2011 15:05:08 -0500
Subject: cfq-iosched: Always provide group isolation.

Effectively, make group_isolation=1 the default and remove the tunable.
The setting group_isolation=0 was because by default we idle on
sync-noidle tree and on fast devices, this can be very harmful for
throughput.

However, this problem can also be addressed by tuning slice_idle and
possibly group_idle on faster storage devices.

This change simplifies the CFQ code by removing the feature entirely.

Signed-off-by: Justin TerAvest
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
---
 Documentation/cgroups/blkio-controller.txt | 28 ----------------------------
 1 file changed, 28 deletions(-)

(limited to 'Documentation/cgroups')

diff --git a/Documentation/cgroups/blkio-controller.txt b/Documentation/cgroups/blkio-controller.txt
index 4ed7b5c..d915c16 100644
--- a/Documentation/cgroups/blkio-controller.txt
+++ b/Documentation/cgroups/blkio-controller.txt
@@ -343,34 +343,6 @@ Common files among various policies
 
 CFQ sysfs tunable
 =================
-/sys/block//queue/iosched/group_isolation
------------------------------------------------
-
-If group_isolation=1, it provides stronger isolation between groups at the
-expense of throughput. By default group_isolation is 0. In general that
-means that if group_isolation=0, expect fairness for sequential workload
-only. Set group_isolation=1 to see fairness for random IO workload also.
-
-Generally CFQ will put random seeky workload in sync-noidle category. CFQ
-will disable idling on these queues and it does a collective idling on group
-of such queues. Generally these are slow moving queues and if there is a
-sync-noidle service tree in each group, that group gets exclusive access to
-disk for certain period. That means it will bring the throughput down if
-group does not have enough IO to drive deeper queue depths and utilize disk
-capacity to the fullest in the slice allocated to it. But the flip side is
-that even a random reader should get better latencies and overall throughput
-if there are lots of sequential readers/sync-idle workload running in the
-system.
-
-If group_isolation=0, then CFQ automatically moves all the random seeky queues
-in the root group. That means there will be no service differentiation for
-that kind of workload. This leads to better throughput as we do collective
-idling on root sync-noidle tree.
-
-By default one should run with group_isolation=0. If that is not sufficient
-and one wants stronger isolation between groups, then set group_isolation=1
-but this will come at cost of reduced throughput.
-
 /sys/block//queue/iosched/slice_idle
 ------------------------------------------
 On a faster hardware CFQ can be slow, especially with sequential workload.
--
cgit v1.1
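
For readers tuning fast storage after this change, the slice_idle and group_idle
knobs mentioned in the commit message live under the same iosched sysfs directory
as the removed group_isolation file. A minimal sketch, assuming a CFQ-scheduled
device named sdb and root privileges; the values shown are illustrative, not
recommendations:

    # Stop idling per queue on fast storage (avoids losing throughput to idle windows)
    echo 0 > /sys/block/sdb/queue/iosched/slice_idle
    # Keep idling at the group level so blkio weights still give service differentiation
    echo 8 > /sys/block/sdb/queue/iosched/group_idle
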
From df457f845e5449be2e7d96668791f789b3770ac7 Mon Sep 17 00:00:00 2001
From: Justin TerAvest
Date: Tue, 8 Mar 2011 19:45:00 +0100
Subject: blk-cgroup: Lower minimum weight from 100 to 10.

We've found that we still get good, useful isolation at weights this
low. I'd like to adjust the minimum so that any other changes can take
these values into account.

Signed-off-by: Justin TerAvest
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
---
 Documentation/cgroups/blkio-controller.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'Documentation/cgroups')

diff --git a/Documentation/cgroups/blkio-controller.txt b/Documentation/cgroups/blkio-controller.txt
index d915c16..465351d 100644
--- a/Documentation/cgroups/blkio-controller.txt
+++ b/Documentation/cgroups/blkio-controller.txt
@@ -140,7 +140,7 @@ Proportional weight policy files
 	- Specifies per cgroup weight. This is default weight of the group
 	  on all the devices until and unless overridden by per device rule.
 	  (See blkio.weight_device).
-	  Currently allowed range of weights is from 100 to 1000.
+	  Currently allowed range of weights is from 10 to 1000.
 
 - blkio.weight_device
 	- One can specify per cgroup per device rules using this interface.
--
cgit v1.1
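
The lowered minimum matters when carving out a strongly deprioritized group. A
minimal sketch using the cgroup v1 blkio controller, assuming it is mounted at
/sys/fs/cgroup/blkio and that the group name "batch" is only an example:

    # Create a group and give it the new minimum proportional weight
    mkdir /sys/fs/cgroup/blkio/batch
    echo 10 > /sys/fs/cgroup/blkio/batch/blkio.weight
    # Move the current shell into the group; its IO now competes at weight 10
    echo $$ > /sys/fs/cgroup/blkio/batch/tasks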