author	Jan Kara <jack@suse.cz>	2024-06-21 16:42:38 +0200
committer	Andrew Morton <akpm@linux-foundation.org>	2024-07-03 19:30:15 -0700
commit	68ed2a394a0190433ba982b353579075a29099bd (patch)
tree	48882dd6d3512d8fc652e8245c583b8ad3534faf /fs/nilfs2
parent	8dfcffa37094fef2c8cf8b602316766a86956d07 (diff)
mm: avoid overflows in dirty throttling logic
The dirty throttling logic is interspersed with assumptions that dirty limits in PAGE_SIZE units fit into 32 bits (so that various multiplications fit into 64 bits). If limits end up being larger, we will hit overflows, possible divisions by 0, etc. Fix these problems by never allowing such large dirty limits, as they have dubious practical value anyway. For the dirty_bytes / dirty_background_bytes interfaces we can simply refuse to set such large limits. For dirty_ratio / dirty_background_ratio it isn't so simple, as the dirty limit is computed from the amount of available memory, which can change due to memory hotplug etc. So when converting dirty limits from ratios to numbers of pages, we just don't allow the result to exceed UINT_MAX.

This is a root-only triggerable problem which occurs when the operator sets dirty limits to >16 TB.

Link: https://lkml.kernel.org/r/20240621144246.11148-2-jack@suse.cz
Signed-off-by: Jan Kara <jack@suse.cz>
Reported-by: Zach O'Keefe <zokeefe@google.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
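[The 16 TB figure follows from the cap: with 4 KiB pages, UINT_MAX pages is 4 KiB * 2^32 = 16 TiB. Below is a minimal, standalone sketch of the clamping idea the message describes, not the actual patch; the real change lives in mm/page-writeback.c, and the function name and parameters here are hypothetical, chosen only for illustration.]

    /* Illustrative sketch: convert a dirty ratio to a page count,
     * saturating at UINT_MAX so that later arithmetic done on the
     * limit in 64 bits cannot overflow.
     */
    #include <limits.h>

    unsigned long dirty_ratio_to_pages(unsigned long ratio,
                                       unsigned long available_pages)
    {
            unsigned long long limit;

            /* Widen before multiplying so the product itself cannot wrap. */
            limit = (unsigned long long)available_pages * ratio / 100;

            /* Clamp: never let a ratio-derived limit exceed UINT_MAX pages. */
            if (limit > UINT_MAX)
                    limit = UINT_MAX;

            return (unsigned long)limit;
    }

[For the dirty_bytes / dirty_background_bytes sysctls no clamping is needed: since the operator supplies an absolute value, the handler can simply reject a setting whose page count would exceed UINT_MAX (e.g. by returning -ERANGE), which is the "refuse to set so large limits" half of the fix.]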
Diffstat (limited to 'fs/nilfs2')
0 files changed, 0 insertions, 0 deletions