author	Kefeng Wang <wangkefeng.wang@huawei.com>	2021-11-05 13:35:14 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2021-11-06 13:30:32 -0700
commit	d0fe47c64152a63ceed4b9f29ac56371407fa7b4 (patch)
tree	9c574dbce4d7659963ca8de2f0a3084483a26bb6 /mm/slub.c
parent	ffc95a46d677adb5f49f01789db9d8c3ae0af5e2 (diff)
slub: add back check for free nonslab objects
After commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk free"), the check for freeing a nonslab page was replaced by VM_BUG_ON_PAGE, which only fires when CONFIG_DEBUG_VM is enabled; since that option can hurt performance, it is intended for debug builds only. Commit 0937502af7c9 ("slub: Add check for kfree() of non slab objects.") added this check because it is needed in every configuration to catch invalid frees, which can indicate real problems such as memory corruption, use-after-free, and double free. So replace VM_BUG_ON_PAGE with WARN_ON_ONCE and print the object address to help users debug the issue.

Link: https://lkml.kernel.org/r/20210930070214.61499-1-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
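As a rough illustration of the class of bug the restored check reports, consider kfree() being handed a pointer that never came from the slab allocator. The function name below is hypothetical and not part of the patch; this is only a sketch of how such a caller would reach the new warning in this kernel's SLUB.

	#include <linux/gfp.h>
	#include <linux/slab.h>

	/* Hypothetical buggy caller: the buffer comes from the page
	 * allocator, not from kmalloc(), so kfree() sees a page that is
	 * not a slab page and falls through to free_nonslab_page() with
	 * a non-compound order-0 page, triggering WARN_ON_ONCE() and the
	 * one-time print of the bogus object pointer.
	 */
	static void buggy_free_example(void)
	{
		unsigned long buf = __get_free_page(GFP_KERNEL);

		kfree((void *)buf);	/* wrong API for this allocation */
		/* the correct call would be: free_page(buf); */
	}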
Diffstat (limited to 'mm/slub.c')
-rw-r--r--  mm/slub.c  4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index d8f77346376d..b6a1790812f7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3522,7 +3522,9 @@ static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	if (WARN_ON_ONCE(!PageCompound(page)))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
 	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
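Applied to the context lines above, the hunk leaves free_nonslab_page() reading roughly as sketched below; this is just the patched function reassembled for readability, and the comment is editorial rather than part of the change.

	static inline void free_nonslab_page(struct page *page, void *object)
	{
		unsigned int order = compound_order(page);

		/* Freeing something that is neither a slab object nor a
		 * large kmalloc allocation (compound page) is a caller bug:
		 * warn in all configs and report the object address once.
		 */
		if (WARN_ON_ONCE(!PageCompound(page)))
			pr_warn_once("object pointer: 0x%p\n", object);

		kfree_hook(object);
		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
				      -(PAGE_SIZE << order));
		__free_pages(page, order);
	}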