author:    Hyeonggon Yoo <42.hyeyoo@gmail.com>  2022-10-15 13:34:29 +0900
committer: Vlastimil Babka <vbabka@suse.cz>     2022-10-15 21:42:05 +0200
commit:    e36ce448a08d43de69e7449eb225805a7a8addf8
tree:      d1b0f706f04ff2bb76f86cba4597c7e3f41a86d7 /include/linux
parent:    d5eff736902d5565a24f1b571b5987b3e5ee9a5b
mm/slab: use kmalloc_node() for off slab freelist_idx_t array allocation
After commit d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than
order-1 page to page allocator"), SLAB passes large (> PAGE_SIZE * 2)
requests to the buddy allocator, as SLUB does.
SLAB has been using kmalloc caches to allocate the freelist_idx_t arrays
for off-slab caches. But after that commit, freelist_size can be bigger
than KMALLOC_MAX_CACHE_SIZE.
Instead of keeping a pointer to the kmalloc cache, call kmalloc_node()
directly and only check whether the kmalloc cache would be off-slab
during calculate_slab_order(). When freelist_size > KMALLOC_MAX_CACHE_SIZE,
no looping condition arises, because kmalloc_node() allocates the
freelist_idx_t array directly from the buddy allocator.
Link: https://lore.kernel.org/all/20221014205818.GA1428667@roeck-us.net/
Reported-and-tested-by: Guenter Roeck <linux@roeck-us.net>
Fixes: d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than order-1 page to page allocator")
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Diffstat (limited to 'include/linux')
-rw-r--r-- | include/linux/slab_def.h | 1 |
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index e24c9aff6fed..f0ffad6a3365 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -33,7 +33,6 @@ struct kmem_cache {
 	size_t colour;			/* cache colouring range */
 	unsigned int colour_off;	/* colour offset */
-	struct kmem_cache *freelist_cache;
 	unsigned int freelist_size;
 
 	/* constructor func */
 	void (*ctor)(void *obj);