| author | Eric Sandeen <sandeen@redhat.com> | 2009-12-23 07:58:12 -0500 |
| committer | Greg Kroah-Hartman <gregkh@suse.de> | 2010-04-26 07:41:36 -0700 |
| commit | 5d40c8cbd548e3259589d364dd5bf64ab504f147 (patch) |
| tree | 9acb2b80e86321180b60a198ac22ceeeeeb2c4e4 /fs |
| parent | b78a38dca6e04634ddc718e315712b45abcf92fd (diff) |
| download | lwn-5d40c8cbd548e3259589d364dd5bf64ab504f147.tar.gz lwn-5d40c8cbd548e3259589d364dd5bf64ab504f147.zip |
ext4: flush delalloc blocks when space is low
commit c8afb44682fcef6273e8b8eb19fab13ddd05b386 upstream.
Creating many small files in rapid succession on a small
filesystem can lead to spurious ENOSPC; on a 104MB filesystem:
for i in `seq 1 22500`; do
echo -n > $SCRATCH_MNT/$i
echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX > $SCRATCH_MNT/$i
done
leads to ENOSPC even though after a sync, 40% of the fs is free
again.
This is because we reserve worst-case metadata for delalloc writes,
and when data is allocated that worst-case reservation is not
usually needed.
When freespace is low, kicking off an async writeback will start
converting that worst-case space usage into something more realistic,
almost always freeing up space to continue.
This resolves the testcase for me, and survives all 4 generic
ENOSPC tests in xfstests.
We'll still need a hard synchronous sync to squeeze out the last bit,
but this fixes things up to a large degree.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: maximilian attems <max@stro.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'fs')
-rw-r--r-- | fs/ext4/inode.c | 11 |
1 files changed, 9 insertions, 2 deletions
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index e233879ebbcb..edfefcaeb745 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3031,11 +3031,18 @@ static int ext4_nonda_switch(struct super_block *sb)
 	if (2 * free_blocks < 3 * dirty_blocks ||
 	    free_blocks < (dirty_blocks + EXT4_FREEBLOCKS_WATERMARK)) {
 		/*
-		 * free block count is less that 150% of dirty blocks
-		 * or free blocks is less that watermark
+		 * free block count is less than 150% of dirty blocks
+		 * or free blocks is less than watermark
 		 */
 		return 1;
 	}
+	/*
+	 * Even if we don't switch but are nearing capacity,
+	 * start pushing delalloc when 1/2 of free blocks are dirty.
+	 */
+	if (free_blocks < 2 * dirty_blocks)
+		writeback_inodes_sb_if_idle(sb);
+
 	return 0;
 }