| author | Tejun Heo <tj@kernel.org> | 2015-03-04 10:37:43 -0500 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2015-04-19 10:11:06 +0200 |
| commit | d8b274f40cb5cf5ef5cd18f9830a60233011b631 (patch) | |
| tree | ac367ac3acda0c68ddf5b2e7ee96e06ba89808c9 /mm/page-writeback.c | |
| parent | da715490285d40afb34148891a2b720043633ad8 (diff) | |
writeback: add missing INITIAL_JIFFIES init in global_update_bandwidth()
commit 7d70e15480c0450d2bfafaad338a32e884fc215e upstream.
global_update_bandwidth() uses the static variable update_time as the
timestamp of the last update but forgets to initialize it to
INITIAL_JIFFIES.
This means that global_dirty_limit will be 5 mins into the future on
32bit and some large number of jiffies into the past on 64bit. This
isn't critical, as the only effect is that global_dirty_limit won't be
updated for the first 5 mins after booting on 32bit machines,
especially given the auxiliary nature of global_dirty_limit's role of
protecting against sudden dips in the global dirty threshold; however,
it does lead to unintended suboptimal behavior. Fix it.
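For illustration, here is a minimal userspace sketch (not part of the
patch) of why the zero-initialized timestamp stalls updates on 32bit.
It assumes HZ=100 and 32-bit jiffies; INITIAL_JIFFIES,
BANDWIDTH_INTERVAL, and time_before() mirror the kernel's definitions.
The jiffies counter deliberately boots 5 minutes before wraparound, so
measured against a timestamp of 0 the first BANDWIDTH_INTERVAL never
appears to elapse until jiffies wraps:

```c
/*
 * Userspace sketch only -- mirrors the kernel macros to show the skew.
 * Assumes HZ=100 and 32-bit unsigned long (the 32bit case above).
 */
#include <stdio.h>
#include <stdint.h>

#define HZ 100
/* jiffies boots 5 minutes before wraparound to flush out wrap bugs */
#define INITIAL_JIFFIES ((uint32_t)(-300 * HZ))
/* max(HZ/5, 1) in mm/page-writeback.c: 20 ticks at HZ=100 */
#define BANDWIDTH_INTERVAL (HZ / 5)
/* kernel's wrap-safe time_before(): true if a is earlier than b */
#define time_before(a, b) ((int32_t)((uint32_t)(a) - (uint32_t)(b)) < 0)

int main(void)
{
	uint32_t now = INITIAL_JIFFIES + HZ;	/* one second after boot */
	uint32_t buggy = 0;			/* zero-initialized static */
	uint32_t fixed = INITIAL_JIFFIES;	/* the patched initializer */

	/* buggy: now "precedes" 0 + interval, so the update is skipped */
	printf("buggy skips update: %d\n",
	       time_before(now, buggy + BANDWIDTH_INTERVAL));	/* 1 */

	/* fixed: the first interval has clearly elapsed */
	printf("fixed skips update: %d\n",
	       time_before(now, fixed + BANDWIDTH_INTERVAL));	/* 0 */
	return 0;
}
```

With update_time left at 0, the lockless time_before() check in
global_update_bandwidth() keeps returning true until jiffies wraps past
zero roughly 5 mins after boot, which is exactly the window during
which global_dirty_limit is never refreshed.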
Fixes: c42843f2f0bb ("writeback: introduce smoothed global dirty limit")
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Jan Kara <jack@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm/page-writeback.c')
-rw-r--r-- mm/page-writeback.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 9f45f87a5859..d3653325a255 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -943,7 +943,7 @@ static void global_update_bandwidth(unsigned long thresh,
 				    unsigned long now)
 {
 	static DEFINE_SPINLOCK(dirty_lock);
-	static unsigned long update_time;
+	static unsigned long update_time = INITIAL_JIFFIES;
 
 	/*
 	 * check locklessly first to optimize away locking for the most time
```