author     NeilBrown <neilb@suse.de>                        2013-11-14 15:16:15 +1100
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2013-12-04 10:50:33 -0800
commit     f69a26d2cff8533570ca3ef3c59f2e1174b084b1 (patch)
tree       158224fbed9c322fbb8c49f18a9d26490a323324 /drivers
parent     3ae78536556792344cad475b78aecaf66e9ab3a6 (diff)
md: fix calculation of stacking limits on level change.
commit 02e5f5c0a0f726e66e3d8506ea1691e344277969 upstream.

The various ->run routines of md personalities assume that the 'queue'
has been initialised by the blk_set_stacking_limits() call in
md_alloc().

However when the level is changed (by level_store()) the ->run routine
for the new level is called for an array which has already had the
stacking limits modified.  This can result in incorrect final settings.

So call blk_set_stacking_limits() before ->run in level_store().

A specific consequence of this bug is that it causes
discard_granularity to be set incorrectly when reshaping a RAID4 to a
RAID0.

This is suitable for any -stable kernel since 3.3 in which
blk_set_stacking_limits() was introduced.

Reported-and-tested-by: "Baldysiak, Pawel" <pawel.baldysiak@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
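To make the ordering concrete, below is a small userspace toy model, not kernel
code: the struct, its single field's combining rule, and the function names are
simplified stand-ins for struct queue_limits, blk_set_stacking_limits() and
blk_stack_limits(), and the numeric values are invented. Only the overall shape
mirrors the behaviour described in the commit message.

/*
 * Toy model (plain userspace C, not kernel code) of the bug described
 * above.  toy_reset() stands in for blk_set_stacking_limits() and
 * toy_stack() for blk_stack_limits(); all values are hypothetical.
 */
#include <stdio.h>

struct toy_limits {
	unsigned int discard_granularity;	/* combined by taking the max */
};

/* Start from permissive defaults so member devices can narrow them. */
static void toy_reset(struct toy_limits *lim)
{
	lim->discard_granularity = 0;
}

/* Fold one member device's limits into the array's limits. */
static void toy_stack(struct toy_limits *top, const struct toy_limits *dev)
{
	if (dev->discard_granularity > top->discard_granularity)
		top->discard_granularity = dev->discard_granularity;
}

int main(void)
{
	struct toy_limits q;
	struct toy_limits member = { .discard_granularity = 512 };

	/* Array creation: md_alloc() resets the limits, then the first
	 * level stacks its members and sets its own larger granularity. */
	toy_reset(&q);
	toy_stack(&q, &member);
	q.discard_granularity = 65536;		/* hypothetical stripe-based value */

	/* Level change without the fix: the new level's ->run() stacks the
	 * same members again, but the stale 65536 survives the max() rule. */
	toy_stack(&q, &member);
	printf("without reset: %u\n", q.discard_granularity);	/* 65536 */

	/* With the fix: reset to stacking defaults before ->run(). */
	toy_reset(&q);
	toy_stack(&q, &member);
	printf("with reset:    %u\n", q.discard_granularity);	/* 512 */

	return 0;
}

Built with any C compiler, this prints the stale value for the unpatched
ordering and the member-derived value once the reset happens first, which is
the discard_granularity symptom the commit message reports for a RAID4 to
RAID0 reshape.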
Diffstat (limited to 'drivers')
-rw-r--r--  drivers/md/md.c | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 7b45b5e1b31e..e63ca864b35a 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -3507,6 +3507,7 @@ level_store(struct mddev *mddev, const char *buf, size_t len)
 		mddev->in_sync = 1;
 		del_timer_sync(&mddev->safemode_timer);
 	}
+	blk_set_stacking_limits(&mddev->queue->limits);
 	pers->run(mddev);
 	mddev_resume(mddev);
 	set_bit(MD_CHANGE_DEVS, &mddev->flags);