author     Darrick J. Wong <djwong@kernel.org>  2023-09-11 08:39:03 -0700
committer  Darrick J. Wong <djwong@kernel.org>  2023-09-11 08:39:03 -0700
commit     62334fab47621dd91ab30dd5bb6c43d78a8ec279 (patch)
tree       76f83feb49fd1a2d9529f99b6225364be52d8dd7 /fs/xfs/xfs_super.c
parent     ecd49f7a36fbccc884471f86fc43de6ca8d1f786 (diff)
xfs: use per-mount cpumask to track nonempty percpu inodegc lists
Directly track which CPUs have contributed to the inodegc percpu lists
instead of trusting the cpu online mask. This eliminates a theoretical
problem where the inodegc flush functions might fail to flush a CPU's
inodes if that CPU happened to be dying at exactly the same time. Most
likely nobody's noticed this because the CPU dead hook moves the percpu
inodegc list to another CPU and schedules that worker immediately. But
it's quite possible that this is a subtle race leading to a use-after-free
if the inodegc flush were part of an unmount.
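The queueing side of this scheme can be pictured with the sketch below. It is
only an illustration of the idea described above, not the patch itself (the
diff shown here is limited to fs/xfs/xfs_super.c): it assumes a per-mount
cpumask member, called m_inodegc_cpumask below, embedded in struct xfs_mount,
and the _sketch helper name is hypothetical.

/*
 * Sketch only: record the queueing CPU in a per-mount cpumask so that a
 * later flush can find every nonempty percpu list without consulting the
 * cpu online mask.  Field and helper names are illustrative.
 */
static void
xfs_inodegc_queue_sketch(
	struct xfs_mount	*mp,
	struct xfs_inode	*ip)
{
	struct xfs_inodegc	*gc;

	/* get_cpu_ptr() disables preemption, pinning us to this CPU. */
	gc = get_cpu_ptr(mp->m_inodegc);
	llist_add(&ip->i_gclist, &gc->list);
	gc->items++;

	/* Mark this CPU as having queued work before kicking the worker. */
	if (!cpumask_test_cpu(smp_processor_id(), &mp->m_inodegc_cpumask))
		cpumask_set_cpu(smp_processor_id(), &mp->m_inodegc_cpumask);

	mod_delayed_work(mp->m_inodegc_wq, &gc->work, 0);
	put_cpu_ptr(mp->m_inodegc);
}

Because get_cpu_ptr() disables preemption, smp_processor_id() above names the
same CPU whose percpu list just received the inode.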
Further benefits: This reduces the overhead of the inodegc flush code
slightly by allowing us to ignore CPUs that have empty lists. Better
yet, it reduces our dependence on the cpu online masks, which have been
the cause of confusion and drama lately.
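To make the flush-side benefit concrete, a flush helper could walk only the
CPUs that actually recorded work, as in the sketch below; again this assumes
the illustrative m_inodegc_cpumask member and a hypothetical _sketch name
rather than quoting the upstream code.

/* Sketch only: kick the worker on every CPU with a nonempty list. */
static void
xfs_inodegc_queue_all_sketch(
	struct xfs_mount	*mp)
{
	struct xfs_inodegc	*gc;
	int			cpu;

	for_each_cpu(cpu, &mp->m_inodegc_cpumask) {
		gc = per_cpu_ptr(mp->m_inodegc, cpu);
		if (!llist_empty(&gc->list))
			mod_delayed_work_on(cpu, mp->m_inodegc_wq,
					&gc->work, 0);
	}
}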
Fixes: ab23a7768739 ("xfs: per-cpu deferred inode inactivation queues")
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Diffstat (limited to 'fs/xfs/xfs_super.c')
-rw-r--r--  fs/xfs/xfs_super.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index ed29a5022e36..3a91ba3a4c62 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1135,9 +1135,8 @@ xfs_inodegc_init_percpu(
 	for_each_possible_cpu(cpu) {
 		gc = per_cpu_ptr(mp->m_inodegc, cpu);
-#if defined(DEBUG) || defined(XFS_WARN)
 		gc->cpu = cpu;
-#endif
+		gc->mp = mp;
 		init_llist_head(&gc->list);
 		gc->items = 0;
 		gc->error = 0;
@@ -2336,7 +2335,6 @@ xfs_cpu_dead(
 	spin_lock(&xfs_mount_list_lock);
 	list_for_each_entry_safe(mp, n, &xfs_mount_list, m_mount_list) {
 		spin_unlock(&xfs_mount_list_lock);
-		xfs_inodegc_cpu_dead(mp, cpu);
 		spin_lock(&xfs_mount_list_lock);
 	}
 	spin_unlock(&xfs_mount_list_lock);
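The gc->mp back pointer added in the first hunk gives each percpu structure a
way back to its mount. One plausible use, sketched below with illustrative
names (the _sketch helper and m_inodegc_cpumask member are assumptions, not
quotes from the patch), is letting the percpu worker clear its own bit once
its list has been drained, so the next flush can skip that CPU.

/* Sketch only: run by the percpu worker after draining gc->list. */
static void
xfs_inodegc_mark_cpu_empty_sketch(
	struct xfs_inodegc	*gc)
{
	struct xfs_mount	*mp = gc->mp;

	/*
	 * A racing queue on this CPU will simply set the bit again, so
	 * clearing it here once the list is empty is safe.
	 */
	if (llist_empty(&gc->list))
		cpumask_clear_cpu(gc->cpu, &mp->m_inodegc_cpumask);
}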