author     Li Zefan <lizefan@huawei.com>                   2013-09-10 11:43:37 +0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org> 2014-02-13 13:48:00 -0800
commit     d6d76c6649ec10061810780c0d2398a0232ed1d3 (patch)
tree       23da804b09bd061a24e32278a2333e5fb2ce0244 /mm
parent     6bf1831daef67ae1a1b183ebed11885ea9bf9e1d (diff)
slub: Fix calculation of cpu slabs
commit 8afb1474db4701d1ab80cd8251137a3260e6913e upstream.

  /sys/kernel/slab/:t-0000048 # cat cpu_slabs
  231 N0=16 N1=215
  /sys/kernel/slab/:t-0000048 # cat slabs
  145 N0=36 N1=109

See, the number of slabs is smaller than that of cpu slabs.

The bug was introduced by commit 49e2258586b423684f03c278149ab46d8f8b6700
("slub: per cpu cache for partial pages").

We should use page->pages instead of page->pobjects when calculating
the number of cpu partial slabs. This also fixes the mapping of slabs
and nodes.

As there's no variable storing the number of total/active objects in
cpu partial slabs, and we don't have user interfaces requiring those
statistics, I just add WARN_ON for those cases.

Acked-by: Christoph Lameter <cl@linux.com>
Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
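For illustration, a minimal userspace sketch of the counting mistake, using a
hypothetical fake_page struct and made-up values (not the kernel's struct page
or real data): pobjects approximates the objects on a CPU's partial list, while
pages counts the slabs themselves, so summing pobjects where a slab count is
expected inflates cpu_slabs.

	/* Hypothetical toy model, not kernel code: shows why page->pages is
	 * the right field when counting cpu partial *slabs*. */
	#include <stdio.h>

	struct fake_page {
		int pages;	/* slabs on this CPU's partial list */
		int pobjects;	/* approx. objects on that list */
	};

	int main(void)
	{
		struct fake_page cpu_partial[2] = {
			{ .pages = 3, .pobjects = 48 },
			{ .pages = 2, .pobjects = 30 },
		};
		int buggy = 0, fixed = 0;

		for (int cpu = 0; cpu < 2; cpu++) {
			buggy += cpu_partial[cpu].pobjects; /* old code summed objects */
			fixed += cpu_partial[cpu].pages;    /* slab count cpu_slabs wants */
		}

		printf("buggy slab count: %d, fixed slab count: %d\n", buggy, fixed);
		return 0;
	}

The patch below takes the same approach in show_slab_objects(): it sums
page->pages for the plain slab count and warns, rather than guessing, when
object statistics are requested for cpu partial slabs.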
Diffstat (limited to 'mm')
-rw-r--r--  mm/slub.c  8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index c34bd44e8be9..deaed7b47213 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4285,7 +4285,13 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 
 			page = ACCESS_ONCE(c->partial);
 			if (page) {
-				x = page->pobjects;
+				node = page_to_nid(page);
+				if (flags & SO_TOTAL)
+					WARN_ON_ONCE(1);
+				else if (flags & SO_OBJECTS)
+					WARN_ON_ONCE(1);
+				else
+					x = page->pages;
 				total += x;
 				nodes[node] += x;
 			}