author    | Mike Snitzer <snitzer@redhat.com> | 2014-07-18 17:59:43 -0400
committer | Mike Snitzer <snitzer@redhat.com> | 2014-08-01 12:30:35 -0400
commit    | fdfb4c8c1a9fc8dd8cf8eeb4e3ed83573b375285 (patch)
tree      | 1d74a9957919f623797358268a4cdadfc53dbb19 /drivers
parent    | 298a9fa08a1577211d42a75e8fc073baef61e0d9 (diff)
dm thin: set minimum_io_size to pool's data block size
Before, if the block layer's limits stacking did not establish an
optimal_io_size compatible with the thin-pool's data block size, we
set optimal_io_size to the data block size but left minimum_io_size
at 0 (which the block layer then adjusts up to physical_block_size).
Update pool_io_hints() to set both minimum_io_size and optimal_io_size
to the thin-pool's data block size. This fixes a reported issue where
mkfs.xfs would create more XFS Allocation Groups on thinp volumes than
on a normal linear LV of comparable size; see:
https://bugzilla.redhat.com/show_bug.cgi?id=1003227
Reported-by: Chris Murphy <lists@colorremedies.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Diffstat (limited to 'drivers')
-rw-r--r-- | drivers/md/dm-thin.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 0e844a5..4843801 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -3177,7 +3177,7 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
	 */
	if (io_opt_sectors < pool->sectors_per_block ||
	    do_div(io_opt_sectors, pool->sectors_per_block)) {
-		blk_limits_io_min(limits, 0);
+		blk_limits_io_min(limits, pool->sectors_per_block << SECTOR_SHIFT);
		blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
	}