author    Joonsoo Kim <iamjoonsoo.kim@lge.com>  2013-01-21 17:01:25 +0900
committer Pekka Enberg <penberg@kernel.org>    2013-04-02 09:42:10 +0300
commit    633b076464da52b3c7bf0f62932fbfc0ea23d8b3 (patch)
tree      546927d08f30ea3049051b89a55b7c7a56937f7f /lib/locking-selftest-mutex.h
parent    7d557b3cb69398d83ceabad9cf147c93a3aa97fd (diff)
download  lwn-633b076464da52b3c7bf0f62932fbfc0ea23d8b3.tar.gz
          lwn-633b076464da52b3c7bf0f62932fbfc0ea23d8b3.zip
slub: correctly calculate the number of acquired objects in get_partial_node()
There is a subtle bug in calculating the number of acquired objects. Currently, we compute "available = page->objects - page->inuse" after acquire_slab() is called in get_partial_node(). But in acquire_slab() with mode = 1, we always set new.inuse = page->objects, so by the time the subtraction runs the slab looks fully in use:

	acquire_slab(s, n, page, object == NULL);

	if (!object) {
		c->page = page;
		stat(s, ALLOC_FROM_PARTIAL);
		object = t;
		available = page->objects - page->inuse;

		!!! available is always 0 !!!
	...

Therefore, "available > s->cpu_partial / 2" is always false and we always go to the second iteration. This patch corrects the problem by counting the available objects before acquire_slab() updates page->inuse. After that, the return value of put_cpu_partial() is no longer needed, so remove it.

Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Diffstat (limited to 'lib/locking-selftest-mutex.h')
0 files changed, 0 insertions, 0 deletions