author     Jonathan Brassow <jbrassow@redhat.com>	2013-01-22 21:42:18 -0600
committer  NeilBrown <neilb@suse.de>	2013-01-24 12:02:36 +1100
commit     55ebbb59c1c6eb1b040f62b8c4ae0b724de6e55a
tree       93c97e2afe99e3cc329ec5dcd941f968a4729abb /Documentation/device-mapper
parent     5f243b9b46a22e5790dbbc36f574c2417af49a41
DM-RAID: Fix RAID10's check for sufficient redundancy
Before attempting to activate a RAID array, it is checked for sufficient
redundancy. That is, we make sure that there are not too many failed
devices - or devices specified for rebuild - to undermine our ability to
activate the array. The current code performs this check twice - once to
ensure there were not too many devices specified for rebuild by the user
('validate_rebuild_devices') and again after possibly experiencing a failure
to read the superblock ('analyse_superblocks'). Neither of these checks is
sufficient. The first check is done properly, but with insufficient
information about the possible failure state of the devices to make a good
determination of whether the array can be activated. The second check is simply
done wrong in the case of RAID10 because it doesn't account for the
independence of the stripes (i.e. mirror sets). The solution is to use the
properly written check ('validate_rebuild_devices'), but perform the check
after the superblocks have been read and we know which devices have failed.
This gives us one check instead of two and performs it in a location where
it can be done right.
Only RAID10 was affected, in the following ways:
- the code did not properly catch the condition where a user specified a
  rebuild device in a mirror set that already contained a failed device.
  (This condition would, however, be caught at a deeper level in MD.)
- the code triggered a false positive and denied activation when devices in
  independent mirror sets had failed, counting the failures as though they
  were all in the same set (see the sketch below).
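To illustrate the per-mirror-set logic, here is a minimal, hypothetical
sketch in plain C. This is not the actual dm-raid code: the struct and
function names are invented, and it assumes the RAID10 "near" layout, in
which the copies of each stripe occupy adjacent device slots.

#include <stdbool.h>

/* Hypothetical per-device state, as known after reading the superblocks. */
struct dev_state {
	bool failed;	/* superblock unreadable or device marked faulty */
	bool rebuild;	/* user specified this device for rebuild */
};

/*
 * An array is only activatable if every mirror set retains at least
 * one device that is neither failed nor scheduled for rebuild.
 */
static bool raid10_mirror_sets_ok(const struct dev_state *devs,
				  unsigned int raid_disks,
				  unsigned int copies)
{
	unsigned int i, j;

	/* Walk each mirror set: a group of 'copies' adjacent devices. */
	for (i = 0; i < raid_disks; i += copies) {
		unsigned int unusable = 0;

		for (j = i; j < i + copies && j < raid_disks; j++)
			if (devs[j].failed || devs[j].rebuild)
				unusable++;

		/* Every copy in this set is gone: nothing to read from. */
		if (unusable == copies)
			return false;
	}
	return true;
}

Under such a check, a 4-device, 2-copy array with one failed device in each
of its two mirror sets is still activatable, whereas a global count of
failures - effectively what the old check did for RAID10 - would wrongly
deny activation.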
This error was most likely introduced in commit 4ec1e369 (where this patch
should have been included), which first appeared in v3.7-rc1.
Consequently, this fix should also go into v3.7.y; however, there is a
small conflict on the .version in raid_target, so I'll submit a
separate patch to -stable.
Cc: stable@vger.kernel.org
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Diffstat (limited to 'Documentation/device-mapper')
-rw-r--r--  Documentation/device-mapper/dm-raid.txt | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/Documentation/device-mapper/dm-raid.txt b/Documentation/device-mapper/dm-raid.txt
index 728c38c242d6..56fb62b09fc5 100644
--- a/Documentation/device-mapper/dm-raid.txt
+++ b/Documentation/device-mapper/dm-raid.txt
@@ -141,3 +141,4 @@ Version History
 1.2.0 Handle creation of arrays that contain failed devices.
 1.3.0 Added support for RAID 10
 1.3.1 Allow device replacement/rebuild for RAID 10
+1.3.2 Fix/improve redundancy checking for RAID10