author     Christoph Lameter <clameter@sgi.com>                   2007-12-21 14:37:37 -0800
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>   2007-12-21 15:51:07 -0800
commit     76be895001f2b0bee42a7685e942d3e08d5dd46c
tree       7444607c21c11ad363eee300f286ad8e1b71b65f /mm
parent     ea67db4cdbbf7f4e74150e71da0984e25121f500
SLUB: Improve hackbench speed
Increase the minimum number of partial slabs to keep around, and put
partial slabs at the end of the partial queue so that they can
accumulate more objects.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--   mm/slub.c   4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index b9f37cb0f2e6..3655ad359f03 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -172,7 +172,7 @@ static inline void ClearSlabDebug(struct page *page)
  * Mininum number of partial slabs. These will be left on the partial
  * lists even if they are empty. kmem_cache_shrink may reclaim them.
  */
-#define MIN_PARTIAL 2
+#define MIN_PARTIAL 5
 
 /*
  * Maximum number of desirable partial slabs.
@@ -1613,7 +1613,7 @@ checks_ok:
 	 * then add it.
 	 */
 	if (unlikely(!prior))
-		add_partial(get_node(s, page_to_nid(page)), page);
+		add_partial_tail(get_node(s, page_to_nid(page)), page);
 
 out_unlock:
 	slab_unlock(page);
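
For context on the two helpers the patch swaps: in slub.c of this era,
add_partial() inserts a slab at the head of the per-node partial list via
list_add(), while add_partial_tail() uses list_add_tail(). A slab queued at
the tail is not picked for allocation right away, so more objects can be
freed back into it first; raising MIN_PARTIAL additionally keeps more
(possibly empty) partial slabs around instead of returning them to the page
allocator. The standalone userspace sketch below models only that
head-versus-tail distinction; the struct fields and the main() scenario are
simplified illustrative stand-ins, not actual kernel code.

    /*
     * Userspace sketch of head vs. tail insertion on a per-node
     * partial list, mirroring the slub.c helper names. Types are
     * simplified stand-ins for the kernel's struct page / list_head.
     */
    #include <stdio.h>

    #define MIN_PARTIAL 5   /* was 2 before this patch */

    struct slab {
            struct slab *prev, *next;   /* doubly linked partial list */
            int inuse;                  /* objects allocated from this slab */
            int id;
    };

    struct node {
            struct slab partial;        /* circular list head */
            int nr_partial;
    };

    static void node_init(struct node *n)
    {
            n->partial.prev = n->partial.next = &n->partial;
            n->nr_partial = 0;
    }

    /* list_add() semantics: insert at head -- old add_partial() behaviour,
     * so the slab is the first one the allocator reuses. */
    static void add_partial(struct node *n, struct slab *s)
    {
            s->next = n->partial.next;
            s->prev = &n->partial;
            s->next->prev = s;
            n->partial.next = s;
            n->nr_partial++;
    }

    /* list_add_tail() semantics: insert at tail -- add_partial_tail()
     * behaviour, leaving the slab alone so frees can refill it. */
    static void add_partial_tail(struct node *n, struct slab *s)
    {
            s->prev = n->partial.prev;
            s->next = &n->partial;
            s->prev->next = s;
            n->partial.prev = s;
            n->nr_partial++;
    }

    int main(void)
    {
            struct node n;
            struct slab a = { .inuse = 3, .id = 1 };
            struct slab b = { .inuse = 7, .id = 2 };
            struct slab *s;

            node_init(&n);
            add_partial(&n, &a);       /* head: allocated from first */
            add_partial_tail(&n, &b);  /* tail: accumulates frees first */

            for (s = n.partial.next; s != &n.partial; s = s->next)
                    printf("slab %d, inuse %d\n", s->id, s->inuse);
            printf("nr_partial=%d (kept even if empty, up to MIN_PARTIAL=%d)\n",
                   n.nr_partial, MIN_PARTIAL);
            return 0;
    }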