path: root/block/blk-settings.c
author    Kent Overstreet <kmo@daterainc.com>    2013-07-11 22:39:53 -0700
committer Kent Overstreet <kmo@daterainc.com>    2014-01-08 13:05:09 -0800
commit    c78afc6261b09f74abff8c0719b80692a4959768 (patch)
tree      4b3d5e421fad23e3bd0866a0b18c845acf297506 /block/blk-settings.c
parent    5f5837d2d650db25b9153b91535e67a96b265f58 (diff)
bcache/md: Use raid stripe size
Now that we've got code for raid5/6 stripe awareness, bcache just needs to know about the stripes and when writing partial stripes is expensive - we probably don't want to enable this optimization for raid1 or raid10, even though they have stripes. So add a flag to queue_limits.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Diffstat (limited to 'block/blk-settings.c')
-rw-r--r--  block/blk-settings.c | 4
1 file changed, 4 insertions, 0 deletions
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 05e826793e4e..5d21239bc859 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -592,6 +592,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 		ret = -1;
 	}
 
+	t->raid_partial_stripes_expensive =
+		max(t->raid_partial_stripes_expensive,
+		    b->raid_partial_stripes_expensive);
+
 	/* Find lowest common alignment_offset */
 	t->alignment_offset = lcm(t->alignment_offset, alignment)
 		& (max(t->physical_block_size, t->io_min) - 1);