author | Qi Zheng <zhengqi.arch@bytedance.com> | 2023-06-09 08:15:15 +0000 |
---|---|---|
committer | Andrew Morton <akpm@linux-foundation.org> | 2023-06-19 13:19:34 -0700 |
commit | 1a554ecc971406e291cea867112f7f2e377e810e (patch) | |
tree | 328f6ee3a972f4d27cf3f6dc593d43fdf98eeaaf /mm/vmscan.c | |
parent | c534f7cca6b9b1c0dc97d6e9c5587858d4330cd9 (diff) | |
Revert "mm: shrinkers: make count and scan in shrinker debugfs lockless"
This reverts commit 20cd1892fcc3efc10a7ac327cc3790494bec46b5.
The kernel test robot reports a -88.8% regression in the
stress-ng.ramfs.ops_per_sec test case [1], which is caused by commit
f95bdb700bc6 ("mm: vmscan: make global slab shrink lockless"). The root
cause is that SRCU deliberately avoids checking frequently for read-side
critical section exits, so synchronize_srcu() cannot return quickly even
when no one is currently inside an SRCU read-side critical section. That
is why unregister_shrinker() has become slower.
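
For context, a minimal sketch of the SRCU scheme that f95bdb700bc6
introduced and that this patch reverts. This is not the actual mm/vmscan.c
code; the struct and function names are simplified stand-ins, but the
pattern is the same: readers walk the shrinker list inside an SRCU
read-side critical section, so unregistering must wait in
synchronize_srcu() before the shrinker can be freed.

```c
/*
 * Minimal sketch (not the actual mm/vmscan.c code) of the SRCU scheme
 * introduced by f95bdb700bc6 and reverted here.  Names are simplified
 * for illustration.
 */
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/srcu.h>
#include <linux/slab.h>

static LIST_HEAD(shrinker_list);
DEFINE_STATIC_SRCU(shrinker_srcu);

struct shrinker_stub {
    struct list_head list;
    unsigned long (*scan)(struct shrinker_stub *s);
};

/* Reader side: walk the list without taking shrinker_rwsem. */
static unsigned long shrink_slab_sketch(void)
{
    struct shrinker_stub *s;
    unsigned long freed = 0;
    int idx = srcu_read_lock(&shrinker_srcu);

    list_for_each_entry_srcu(s, &shrinker_list, list,
                             srcu_read_lock_held(&shrinker_srcu))
        freed += s->scan(s);

    srcu_read_unlock(&shrinker_srcu, idx);
    return freed;
}

/*
 * Writer side: before the shrinker can be freed, all SRCU readers that
 * might still see it have to finish.  synchronize_srcu() cannot simply
 * return even when no reader is active, which is what makes
 * unregister_shrinker() slow in the stress-ng.ramfs test case.
 */
static void unregister_shrinker_sketch(struct shrinker_stub *s)
{
    list_del_rcu(&s->list);
    synchronize_srcu(&shrinker_srcu);
    kfree(s);
}
```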
We will try to use the refcount+RCU method [2] proposed by Dave Chinner
to re-implement lockless slab shrinking, so revert the shrinker_srcu
related changes first.
[1]. https://lore.kernel.org/lkml/202305230837.db2c233f-yujie.liu@intel.com/
[2]. https://lore.kernel.org/lkml/ZIJhou1d55d4H1s0@dread.disaster.area/
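
For comparison, a rough sketch of the general refcount+RCU shape
suggested in [2]. The helper names (shrinker_try_get_sketch() and so on)
and the completion-based wait are illustrative assumptions, not an
existing kernel API: readers pin an individual shrinker with a reference
count under a plain RCU read lock, so unregistering only has to wait for
users of that one shrinker instead of a global SRCU grace period.

```c
/*
 * Rough sketch of the refcount+RCU direction suggested in [2].  The
 * helper names and the completion-based wait are illustrative
 * assumptions, not an existing kernel API.
 */
#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/refcount.h>
#include <linux/completion.h>
#include <linux/slab.h>

struct shrinker_stub {
    struct list_head list;
    refcount_t refcount;
    struct completion done;
    unsigned long (*scan)(struct shrinker_stub *s);
    struct rcu_head rcu;
};

/* Reader: pin one shrinker with a reference instead of staying in SRCU. */
static bool shrinker_try_get_sketch(struct shrinker_stub *s)
{
    return refcount_inc_not_zero(&s->refcount);
}

static void shrinker_put_sketch(struct shrinker_stub *s)
{
    if (refcount_dec_and_test(&s->refcount))
        complete(&s->done);
}

static void register_shrinker_sketch(struct shrinker_stub *s,
                                     struct list_head *shrinker_list)
{
    refcount_set(&s->refcount, 1);      /* registration reference */
    init_completion(&s->done);
    list_add_tail_rcu(&s->list, shrinker_list);
}

static unsigned long shrink_slab_sketch(struct list_head *shrinker_list)
{
    struct shrinker_stub *s;
    unsigned long freed = 0;

    rcu_read_lock();
    list_for_each_entry_rcu(s, shrinker_list, list) {
        if (!shrinker_try_get_sketch(s))
            continue;
        rcu_read_unlock();              /* scan() may sleep */
        freed += s->scan(s);
        rcu_read_lock();                /* re-enter before dropping the ref */
        shrinker_put_sketch(s);
    }
    rcu_read_unlock();
    return freed;
}

/*
 * Unregister: drop the registration reference and wait only for
 * in-flight users of this shrinker, instead of a global
 * synchronize_srcu().
 */
static void unregister_shrinker_sketch(struct shrinker_stub *s)
{
    list_del_rcu(&s->list);
    shrinker_put_sketch(s);             /* drop the registration reference */
    wait_for_completion(&s->done);      /* wait for remaining users */
    kfree_rcu(s, rcu);
}
```

The point of this scheme is that the wait in unregister is bounded by
the users of that one shrinker rather than by SRCU's grace-period
machinery.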
Link: https://lkml.kernel.org/r/20230609081518.3039120-5-qi.zheng@linux.dev
Reported-by: kernel test robot <yujie.liu@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202305230837.db2c233f-yujie.liu@intel.com
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Kirill Tkhai <tkhai@ya.ru>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/vmscan.c')
0 files changed, 0 insertions, 0 deletions