author | Heinz Mauelshagen <heinzm@redhat.com> | 2024-07-09 13:56:38 +0200
committer | Mikulas Patocka <mpatocka@redhat.com> | 2024-07-10 13:10:06 +0200
commit | d176fadb9e783c152d0820a50f84882b6c5ae314 (patch)
tree | 139e2f0b1137f6aafa0550d7bae59d3cdc42b93f /drivers/md
parent | 453496b899b5f62ff193bca46097f0f7211cec46 (diff)
download | lwn-d176fadb9e783c152d0820a50f84882b6c5ae314.tar.gz lwn-d176fadb9e783c152d0820a50f84882b6c5ae314.zip
dm raid: fix stripes adding reshape size issues
Adding stripes to an existing raid4/5/6/10 mapped device grows its
capacity, though per MD kernel reshape semantics the grown capacity
is only made available _after_ the respective reshape has finished.
Such a reshape moves a window forward starting at the beginning of
data (BOD), reading content laid out across the previous, smaller
number of stripes and writing it back in the new layout with more
stripes. Once that process reaches the end of the previous data, the
grown size may be announced and used. To avoid writing over any
existing data in place, lvm2 adds out-of-place space to the beginning
of each data device before starting the reshape process. That reshape
space wasn't taken into account in the data device size calculation.
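The essence of the size fix is subtracting the per-device out-of-place
reshape space before multiplying by the stripe count. A minimal sketch
of the striped-layout arithmetic, not the kernel code itself (the kernel
uses _get_reshape_sectors(rs) and sector_div(); the helper name below is
hypothetical, and the raid10 path additionally divides by the copy count):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hedged sketch: usable array size in sectors for a striped layout.
 * The reshape window reserved by lvm2 at the start of each data device
 * must be excluded, or the array is sized too large during the reshape.
 */
static uint64_t array_size_sectors(uint64_t dev_sectors,
				   uint64_t data_stripes,
				   int64_t delta_disks,
				   uint64_t reshape_sectors)
{
	/* Per-device usable data excludes the reserved reshape space. */
	return (data_stripes + delta_disks) * (dev_sectors - reshape_sectors);
}
```

For example, growing a 3-stripe set by one stripe with 2048 reshape
sectors reserved per device yields 4 * (dev_sectors - 2048), rather than
the too-large 4 * dev_sectors computed before this fix.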
Fixes resulting from the above:
- correct the event handling conditions in do_table_event() to set
  the device's capacity only after the stripe-adding reshape has ended
- subtract the mentioned out-of-place space when doing data device and
  array size calculations
- conditionally set the capacity to the size recorded in the superblock
  in preresume
Testing:
- passes all LVM2 RAID tests, including the new lvconvert-raid-reshape-size.sh test
Tested-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Diffstat (limited to 'drivers/md')
-rw-r--r-- | drivers/md/dm-raid.c | 20
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 871e278de662..0c3323e0adb2 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -1673,7 +1673,7 @@ static int rs_set_dev_and_array_sectors(struct raid_set *rs, sector_t sectors, b
 		if (sector_div(dev_sectors, data_stripes))
 			goto bad;
 
-		array_sectors = (data_stripes + delta_disks) * dev_sectors;
+		array_sectors = (data_stripes + delta_disks) * (dev_sectors - _get_reshape_sectors(rs));
 		if (sector_div(array_sectors, rs->raid10_copies))
 			goto bad;
 
@@ -1682,7 +1682,7 @@ static int rs_set_dev_and_array_sectors(struct raid_set *rs, sector_t sectors, b
 	} else
 		/* Striped layouts */
-		array_sectors = (data_stripes + delta_disks) * dev_sectors;
+		array_sectors = (data_stripes + delta_disks) * (dev_sectors - _get_reshape_sectors(rs));
 
 	mddev->array_sectors = array_sectors;
 	mddev->dev_sectors = dev_sectors;
 
@@ -1721,11 +1721,20 @@ static void do_table_event(struct work_struct *ws)
 	struct raid_set *rs = container_of(ws, struct raid_set, md.event_work);
 
 	smp_rmb(); /* Make sure we access most actual mddev properties */
-	if (!rs_is_reshaping(rs)) {
+
+	/* Only grow size resulting from added stripe(s) after reshape ended. */
+	if (!rs_is_reshaping(rs) &&
+	    rs->array_sectors > rs->md.array_sectors &&
+	    !rs->md.delta_disks &&
+	    rs->md.raid_disks == rs->raid_disks) {
+		/* The raid10 personality doesn't provide proper device sizes -> correct. */
 		if (rs_is_raid10(rs))
 			rs_set_rdev_sectors(rs);
+
+		rs->md.array_sectors = rs->array_sectors;
 		rs_set_capacity(rs);
 	}
+
 	dm_table_event(rs->ti->table);
 }
 
@@ -4023,6 +4032,11 @@ static int raid_preresume(struct dm_target *ti)
 	if (test_and_set_bit(RT_FLAG_RS_PRERESUMED, &rs->runtime_flags))
 		return 0;
 
+	/* If different and no explicit grow request, expose MD array size as of superblock. */
+	if (!test_bit(RT_FLAG_RS_GROW, &rs->runtime_flags) &&
+	    rs->array_sectors != mddev->array_sectors)
+		rs_set_capacity(rs);
+
 	/*
 	 * The superblocks need to be updated on disk if the
 	 * array is new or new devices got added (thus zeroed