| author | Lukas Czerner <lczerner@redhat.com> | 2014-06-11 12:28:43 -0400 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2014-07-09 11:18:26 -0700 |
| commit | dd7ba80f4a3933c025e6adcf5f5d730181d91ab0 (patch) | |
| tree | bc09ed28f4b67ebc944c932976dd71e889e908c3 /drivers/md/dm-thin.c | |
| parent | e4b08895ac3f0b5c5eae0f33c76a93a617cdfb63 (diff) | |
dm thin: update discard_granularity to reflect the thin-pool blocksize
commit 09869de57ed2728ae3c619803932a86cb0e2c4f8 upstream.
DM thinp already checks whether the discard_granularity of the data
device is a factor of the thin-pool block size. But when using the
dm-thin-pool's discard passdown support, DM thinp was not selecting the
max of the underlying data device's discard_granularity and the
thin-pool's block size.
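For context, the factor check referenced above amounts to verifying that the data device's discard_granularity divides the thin-pool block size evenly. Below is a minimal sketch of that condition, with granularity_is_factor() as a hypothetical stand-in for dm-thin's internal helper and both sizes expressed in bytes:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for dm-thin's internal check: the data device's
 * discard_granularity must be a factor of (evenly divide) the thin-pool
 * block size for discard passdown to be usable.
 */
static bool granularity_is_factor(uint64_t pool_block_size, uint32_t granularity)
{
	return granularity != 0 && (pool_block_size % granularity) == 0;
}
```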
Update set_discard_limits() to set discard_granularity to the max of
these values. This enables blkdev_issue_discard() to properly align the
discards that are sent to the DM thin device on a full block boundary.
As such each discard will now cover an entire DM thin-pool block and the
block will be reclaimed.
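To see why taking the max matters, consider a hypothetical pool with 64 KiB blocks on a data device that advertises a 4 KiB discard_granularity. The sketch below is a simplified userspace model (not the block layer's actual splitting code) of how the advertised granularity determines where a discard request's edges land after alignment:

```c
#include <stdint.h>
#include <stdio.h>

/* Align a byte offset down/up to the advertised discard granularity. */
static uint64_t align_down(uint64_t x, uint64_t g) { return x - (x % g); }
static uint64_t align_up(uint64_t x, uint64_t g)   { return align_down(x + g - 1, g); }

int main(void)
{
	/* Hypothetical sizes: 64 KiB thin-pool blocks, 4 KiB device granularity. */
	const uint64_t pool_block = 64 * 1024;
	const uint64_t dev_gran   = 4 * 1024;

	/* Old behaviour: granularity taken from the data device alone. */
	const uint64_t old_gran = dev_gran;
	/* New behaviour: max of device granularity and pool block size. */
	const uint64_t new_gran = dev_gran > pool_block ? dev_gran : pool_block;

	/* A discard request that starts and ends mid-block. */
	const uint64_t start = 10 * 1024, end = 200 * 1024;

	/*
	 * With the old 4 KiB granularity the request is trimmed to
	 * [12 KiB, 200 KiB), whose edges do not fall on 64 KiB pool-block
	 * boundaries; with the new 64 KiB granularity it becomes
	 * [64 KiB, 192 KiB), so the edges land on full pool-block boundaries
	 * and the covered blocks can be reclaimed.
	 */
	printf("old: [%llu, %llu)\n",
	       (unsigned long long)align_up(start, old_gran),
	       (unsigned long long)align_down(end, old_gran));
	printf("new: [%llu, %llu)\n",
	       (unsigned long long)align_up(start, new_gran),
	       (unsigned long long)align_down(end, new_gran));
	return 0;
}
```

The same reasoning is what the one-line max() change in the diff below encodes for the passdown case.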
Reported-by: Zdenek Kabelac <zkabelac@redhat.com>
Signed-off-by: Lukas Czerner <lczerner@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'drivers/md/dm-thin.c')
| -rw-r--r-- | drivers/md/dm-thin.c | 3 |
1 file changed, 2 insertions, 1 deletion
```diff
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 94d2ac1b493e..359af3a519b5 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2925,7 +2925,8 @@ static void set_discard_limits(struct pool_c *pt, struct queue_limits *limits)
 	 */
 	if (pt->adjusted_pf.discard_passdown) {
 		data_limits = &bdev_get_queue(pt->data_dev->bdev)->limits;
-		limits->discard_granularity = data_limits->discard_granularity;
+		limits->discard_granularity = max(data_limits->discard_granularity,
+						  pool->sectors_per_block << SECTOR_SHIFT);
 	} else
 		limits->discard_granularity = pool->sectors_per_block << SECTOR_SHIFT;
 }
```