| author | Christoph Lameter <clameter@engr.sgi.com> | 2006-03-22 00:08:15 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@g5.osdl.org> | 2006-03-22 07:53:59 -0800 |
| commit | ac2b898ca6fb06196a26869c23b66afe7944e52e | |
| tree | e82e7bebd89b02813ce23f76fec4aeb5626da655 /mm/slab.c | |
| parent | 911851e6ee6ac4e26f07be342a89632f78494fef | |
[PATCH] slab: Remove SLAB_NO_REAP option
SLAB_NO_REAP is documented as an option that will cause this slab not to be
reaped under memory pressure. However, that is not what happens. The only
thing SLAB_NO_REAP currently controls is the reclaim, in cache_reap(), of
unused slab elements that were allocated in batch. cache_reap() runs every
few seconds, independently of memory pressure.
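As a rough illustration, here is a simplified sketch of the only place the flag is honored, the self-rescheduling cache_reap() work item. This is not the actual mm/slab.c code; in particular, trim_unused_batches() is a made-up stand-in for the real batch-freeing logic.

```c
/*
 * Simplified sketch, not the real mm/slab.c implementation.
 * trim_unused_batches() is a hypothetical helper standing in for the
 * batch-freeing logic; the point is that SLAB_NO_REAP only ever skips
 * this periodic trim, never any memory-pressure reclaim path.
 */
static void cache_reap(void *unused)
{
	struct kmem_cache *searchp;

	list_for_each_entry(searchp, &cache_chain, next) {
		if (searchp->flags & SLAB_NO_REAP)
			continue;		/* all the flag ever did */
		trim_unused_batches(searchp);	/* free a few unused batched objects */
	}

	/* reschedule on a fixed delay, regardless of memory pressure */
	schedule_delayed_work(&__get_cpu_var(reap_work), REAPTIMEOUT_CPUC);
}
```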
Could we remove the whole thing? It is only used by three slabs anyway, and
I cannot find a reason for keeping this option.
There is an additional problem with SLAB_NO_REAP: if it is set, the recovery
of objects from alien caches is switched off. Objects that are not freed on
the same node where they were initially allocated will only be reused if a
certain number of objects accumulates from one alien node (not very likely)
or if the cache is explicitly shrunk. (Strangely, __cache_shrink() does not
check for SLAB_NO_REAP.)
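To make the alien-cache issue concrete, here is an illustrative sketch. The data structures are simplified and flush_back_to_home_node() is a hypothetical stand-in for the real drain helper; only the two drain conditions described above are the point.

```c
/*
 * Illustrative sketch only; the real code lives in mm/slab.c and uses
 * per-node alien array_caches. flush_back_to_home_node() is hypothetical.
 */
struct alien_cache_sketch {
	unsigned int avail;	/* objects currently parked here */
	unsigned int limit;	/* drain threshold */
	void *entry[64];	/* objects freed on the "wrong" node */
};

static void flush_back_to_home_node(struct alien_cache_sketch *alien);

/* Free path taken when an object is freed on a node it was not allocated on. */
static void free_on_remote_node(struct alien_cache_sketch *alien, void *objp)
{
	if (alien->avail == alien->limit)
		flush_back_to_home_node(alien);	/* only once the array fills up */
	alien->entry[alien->avail++] = objp;
}

/*
 * The other drain point is the periodic cache_reap() pass. With
 * SLAB_NO_REAP set, cache_reap() skipped the cache entirely, so a
 * partially filled alien array was never returned to its home node
 * for reuse unless the cache was explicitly shrunk.
 */
```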
Getting rid of SLAB_NO_REAP fixes the problems with alien cache freeing.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm/slab.c')
-rw-r--r-- | mm/slab.c | 13 |
1 file changed, 2 insertions(+), 11 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 5c2574989834..24235506b2a0 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -170,12 +170,12 @@
 #if DEBUG
 # define CREATE_MASK (SLAB_DEBUG_INITIAL | SLAB_RED_ZONE | \
			 SLAB_POISON | SLAB_HWCACHE_ALIGN | \
-			 SLAB_NO_REAP | SLAB_CACHE_DMA | \
+			 SLAB_CACHE_DMA | \
			 SLAB_MUST_HWCACHE_ALIGN | SLAB_STORE_USER | \
			 SLAB_RECLAIM_ACCOUNT | SLAB_PANIC | \
			 SLAB_DESTROY_BY_RCU)
 #else
-# define CREATE_MASK (SLAB_HWCACHE_ALIGN | SLAB_NO_REAP | \
+# define CREATE_MASK (SLAB_HWCACHE_ALIGN | \
			 SLAB_CACHE_DMA | SLAB_MUST_HWCACHE_ALIGN | \
			 SLAB_RECLAIM_ACCOUNT | SLAB_PANIC | \
			 SLAB_DESTROY_BY_RCU)
@@ -662,7 +662,6 @@ static struct kmem_cache cache_cache = {
 	.limit = BOOT_CPUCACHE_ENTRIES,
 	.shared = 1,
 	.buffer_size = sizeof(struct kmem_cache),
-	.flags = SLAB_NO_REAP,
 	.name = "kmem_cache",
 #if DEBUG
 	.obj_size = sizeof(struct kmem_cache),
@@ -1848,9 +1847,6 @@ static void setup_cpu_cache(struct kmem_cache *cachep)
  * %SLAB_RED_ZONE - Insert `Red' zones around the allocated memory to check
  * for buffer overruns.
  *
- * %SLAB_NO_REAP - Don't automatically reap this cache when we're under
- * memory pressure.
- *
  * %SLAB_HWCACHE_ALIGN - Align the objects in this cache to a hardware
  * cacheline. This can be beneficial if you're counting cycles as closely
  * as davem.
@@ -3584,10 +3580,6 @@ static void cache_reap(void *unused)
 		struct slab *slabp;
 
 		searchp = list_entry(walk, struct kmem_cache, next);
-
-		if (searchp->flags & SLAB_NO_REAP)
-			goto next;
-
 		check_irq_on();
 
 		l3 = searchp->nodelists[numa_node_id()];
@@ -3635,7 +3627,6 @@ static void cache_reap(void *unused)
 		} while (--tofree > 0);
 next_unlock:
 		spin_unlock_irq(&l3->list_lock);
-next:
 		cond_resched();
 	}
 	check_irq_on();