author: Xiongwei Song <xiongwei.song@windriver.com> (2024-04-04 13:58:26 +0800)
committer: Vlastimil Babka <vbabka@suse.cz> (2024-04-04 11:29:26 +0200)
commit: ff99b18fee793826dd5604da72d6259a531b45e9
tree: fdcbfa4e5d0015161ce4709ea2ffe435736086d7 /mm/slub.c
parent: 721a2f8be134f9bb61f4358cbb7ae394eaf74573
mm/slub: simplify get_partial_node()
The break conditions for filling the cpu partial list can be made simpler and more readable.

If slub_get_cpu_partial() returns 0, we know that the cpu partial list does not need to be filled, so we break out of the loop. Otherwise, we break out of the loop once we have added enough cpu partial slabs.
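For reference, the loop body after this change reads roughly as follows. This is a condensed view of the hunk shown below with explanatory comments added; the description of slub_get_cpu_partial() as returning s->cpu_partial_slabs with CONFIG_SLUB_CPU_PARTIAL and 0 without it is an assumption based on the helper introduced by the parent commit.

```c
		if (!partial) {
			/* First usable slab: this becomes the allocation target. */
			partial = slab;
			stat(s, ALLOC_FROM_PARTIAL);

			/* cpu partial filling disabled (limit is 0): stop right away. */
			if (slub_get_cpu_partial(s) == 0)
				break;
		} else {
			/* Stash further slabs on the per-cpu partial list. */
			put_cpu_partial(s, slab, 0);
			stat(s, CPU_PARTIAL_NODE);

			/* Stop once about half of the configured limit is staged. */
			if (++partial_slabs > slub_get_cpu_partial(s) / 2)
				break;
		}
```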
This rework also gets rid of the #ifdef and fixes a subtle corner case: if cpu_partial_slabs is set to 0 from sysfs, the old code would still put at least one slab on the cpu partial list here.
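To make that corner case concrete, here is a small stand-alone user-space model (not kernel code; staged_old(), staged_new() and cpu_partial are made-up names standing in for the old and new break checks and for slub_get_cpu_partial(s)) that counts how many slabs end up staged on the cpu partial list when the node holds more partial slabs than we could ever want:

```c
#include <stdio.h>

/*
 * Toy model of the two break checks in get_partial_node().  This is not
 * kernel code: "cpu_partial" stands in for slub_get_cpu_partial(s), and
 * we pretend the node list never runs out of partial slabs.
 */
static unsigned int staged_old(unsigned int cpu_partial)
{
	unsigned int partial_slabs = 0;

	/*
	 * The first slab becomes the allocation target; with partial_slabs
	 * still 0 the old check (0 > cpu_partial / 2) never fires, so the
	 * walk always reaches a second slab.
	 */
	for (;;) {
		partial_slabs++;			/* put_cpu_partial() */
		if (partial_slabs > cpu_partial / 2)	/* old break check */
			break;
	}
	return partial_slabs;
}

static unsigned int staged_new(unsigned int cpu_partial)
{
	unsigned int partial_slabs = 0;

	if (cpu_partial == 0)	/* new check: break right after the first slab */
		return 0;

	for (;;) {
		if (++partial_slabs > cpu_partial / 2)	/* new break check */
			break;
	}
	return partial_slabs;
}

int main(void)
{
	for (unsigned int cp = 0; cp <= 4; cp++)
		printf("cpu_partial=%u: old stages %u slab(s), new stages %u\n",
		       cp, staged_old(cp), staged_new(cp));
	return 0;
}
```

With cpu_partial == 0 the old check only trips after one slab has already been handed to put_cpu_partial(), so one slab is staged anyway; the new slub_get_cpu_partial(s) == 0 check breaks right after the allocation target is taken. For any non-zero limit the two checks stage the same number of slabs.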
Signed-off-by: Xiongwei Song <xiongwei.song@windriver.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Diffstat (limited to 'mm/slub.c')
 mm/slub.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 936f2b13a78e..a9b1337e81c2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2614,18 +2614,18 @@ static struct slab *get_partial_node(struct kmem_cache *s,
 		if (!partial) {
 			partial = slab;
 			stat(s, ALLOC_FROM_PARTIAL);
+
+			if ((slub_get_cpu_partial(s) == 0)) {
+				break;
+			}
 		} else {
 			put_cpu_partial(s, slab, 0);
 			stat(s, CPU_PARTIAL_NODE);
-			partial_slabs++;
-		}
-#ifdef CONFIG_SLUB_CPU_PARTIAL
-		if (partial_slabs > s->cpu_partial_slabs / 2)
-			break;
-#else
-		break;
-#endif
 
+			if (++partial_slabs > slub_get_cpu_partial(s) / 2) {
+				break;
+			}
+		}
 	}
 	spin_unlock_irqrestore(&n->list_lock, flags);
 	return partial;