author     Mel Gorman <mel@csn.ul.ie>                            2007-10-16 01:25:52 -0700
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-10-16 09:43:00 -0700
commit     e12ba74d8ff3e2f73a583500d7095e406df4d093 (patch)
tree       a0d3385b65f0b3e1e00b0bbf11b75e7538a93edb /include/linux/slab.h
parent     c361be55b3128474aa66d31092db330b07539103 (diff)
Group short-lived and reclaimable kernel allocations
This patch marks a number of allocations that are either short-lived, such as
network buffers, or reclaimable, such as inode allocations. When something
like updatedb runs, long-lived and unmovable kernel allocations tend to be
spread throughout the address space, which increases fragmentation.
This patch groups these allocations together as much as possible by adding a
new migrate type. The MIGRATE_RECLAIMABLE type is for allocations that can be
reclaimed on demand, but not moved; i.e., they can be "migrated" by deleting
them and re-reading the information from elsewhere.
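For illustration only (not part of this commit), the sketch below shows the idea
behind the grouping: an allocation's flags select a migrate type, and the page
allocator then tries to satisfy allocations of the same type from the same page
blocks. The enum values, flag bits, and helper name here are simplified
stand-ins, not the kernel's actual definitions.

/*
 * Illustrative sketch only -- simplified stand-ins for the kernel's real
 * definitions.  The point: a caller's allocation flags pick one of the
 * migrate types, and pages of the same type are grouped together so that
 * reclaimable or short-lived allocations do not pepper unmovable regions.
 */
enum example_migratetype {
	EX_MIGRATE_UNMOVABLE,	/* long-lived; cannot be moved or reclaimed */
	EX_MIGRATE_RECLAIMABLE,	/* can be freed on demand and re-read later */
	EX_MIGRATE_MOVABLE,	/* can be migrated by copying (e.g. user pages) */
};

/* Hypothetical flag bits standing in for the allocator's mobility hints. */
#define EX_GFP_RECLAIMABLE	0x1u
#define EX_GFP_MOVABLE		0x2u

static inline enum example_migratetype
example_flags_to_migratetype(unsigned int flags)
{
	if (flags & EX_GFP_MOVABLE)
		return EX_MIGRATE_MOVABLE;
	if (flags & EX_GFP_RECLAIMABLE)
		return EX_MIGRATE_RECLAIMABLE;
	return EX_MIGRATE_UNMOVABLE;
}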
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/slab.h')
-rw-r--r--   include/linux/slab.h   4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/slab.h b/include/linux/slab.h
index d859354b9e51..3a5bad3ad126 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -24,12 +24,14 @@
 #define SLAB_HWCACHE_ALIGN	0x00002000UL	/* Align objs on cache lines */
 #define SLAB_CACHE_DMA		0x00004000UL	/* Use GFP_DMA memory */
 #define SLAB_STORE_USER		0x00010000UL	/* DEBUG: Store the last owner for bug hunting */
-#define SLAB_RECLAIM_ACCOUNT	0x00020000UL	/* Objects are reclaimable */
 #define SLAB_PANIC		0x00040000UL	/* Panic if kmem_cache_create() fails */
 #define SLAB_DESTROY_BY_RCU	0x00080000UL	/* Defer freeing slabs to RCU */
 #define SLAB_MEM_SPREAD		0x00100000UL	/* Spread some memory over cpuset */
 #define SLAB_TRACE		0x00200000UL	/* Trace allocations and frees */
 
+/* The following flags affect the page allocator grouping pages by mobility */
+#define SLAB_RECLAIM_ACCOUNT	0x00020000UL	/* Objects are reclaimable */
+#define SLAB_TEMPORARY		SLAB_RECLAIM_ACCOUNT	/* Objects are short-lived */
 /*
  * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
  *
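As a hedged usage sketch (not part of this diff), a slab cache whose objects
are known to be short-lived could pass the new SLAB_TEMPORARY flag so that its
backing pages are grouped with other reclaimable allocations. The structure and
cache name below are made up for illustration, and the kmem_cache_create()
prototype has varied across kernel versions, so a NULL constructor is used to
keep the call version-neutral.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>

/* Hypothetical short-lived object, e.g. a per-request scratch buffer. */
struct example_buf {
	char data[256];
};

static struct kmem_cache *example_cache;

static int __init example_init(void)
{
	/*
	 * SLAB_TEMPORARY hints that objects are short-lived, so the page
	 * allocator can group this cache's pages with reclaimable ones
	 * rather than scattering them among unmovable allocations.
	 */
	example_cache = kmem_cache_create("example_buf",
					  sizeof(struct example_buf),
					  0, SLAB_TEMPORARY, NULL);
	return example_cache ? 0 : -ENOMEM;
}

Since SLAB_TEMPORARY is defined as an alias for SLAB_RECLAIM_ACCOUNT, existing
allocators need no new handling; the separate name simply documents the intent
of the cache's users.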