author    Fan Du <fan.du@windriver.com>  2013-04-30 15:27:27 -0700
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2013-11-29 10:50:33 -0800
commit    f22ff9d05def87a049c5c8c7b86539bd4f8e3172 (patch)
tree      ed805b8838b5cf2d7686345c8859eeb1f8d48901 /kernel
parent    df4011e050b4e80165a317424e6b3367dfa7697c (diff)
include/linux/fs.h: disable preempt when acquiring i_size_seqcount write lock
commit 74e3d1e17b2e11d175970b85acd44f5927000ba2 upstream.

Two rt tasks are bound to one CPU core. The higher-priority rt task A preempts the lower-priority rt task B, which has already taken the write seq lock; task A then tries to acquire the read seq lock, and it is doomed to lock up:

    rt task B (lower priority): calls write      rt task A (higher priority): calls sync, preempting task B
    i_size_write
      write_seqcount_begin(&inode->i_size_seqcount);
      inode->i_size = i_size;                    i_size_read
                                                   read_seqcount_begin <-- lockup here...

So disabling preemption when acquiring every i_size_seqcount *write* lock cures the problem.

Signed-off-by: Fan Du <fan.du@windriver.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Zhao Hongjiang <zhaohongjiang@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'kernel')
0 files changed, 0 insertions, 0 deletions