| author | Paul Mackerras <paulus@samba.org> | 2008-06-18 15:29:12 +1000 |
|---|---|---|
| committer | Paul Mackerras <paulus@samba.org> | 2008-07-01 11:27:57 +1000 |
| commit | 3a8247cc2c856930f34eafce33f6a039227ee175 (patch) | |
| tree | aa8599cdf09893f1150a2bc137878d8b8a661780 /include/asm-powerpc | |
| parent | e952e6c4d6635b36c212c056a9427bd93460178c (diff) | |
powerpc: Only demote individual slices rather than whole process
At present, if we have a kernel with a 64kB page size, and some
process maps something that has to be mapped with 4kB pages (such as a
cache-inhibited mapping on POWER5+, or the eHCA infiniband queue-pair
pages), we change the process to use 4kB pages everywhere. This hurts
the performance of HPC programs that access eHCA from userspace.
With this patch, the kernel will only demote the slice(s) containing
the eHCA or cache-inhibited mappings, leaving the remaining slices
able to use 64kB hardware pages.
This also changes the slice_get_unmapped_area code so that it is
willing to place a 64k-page mapping into (or across) a 4k-page slice
if there is no better alternative, i.e. if the program specified
MAP_FIXED or if there is not sufficient space available in slices that
are either empty or already have 64k-page mappings in them.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
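To make the behavioural change concrete, here is a minimal, hypothetical sketch (not part of this patch) of how a driver that needs a cache-inhibited, 4kB-page mapping could demote only the affected slices with the new slice_set_range_psize() interface, instead of demoting the whole process via slice_set_user_psize(). The helper name and calling context are invented; slice_set_range_psize(), slice_set_user_psize() and MMU_PAGE_4K are taken from the powerpc headers of that era.

```c
#include <linux/mm_types.h>
#include <asm/page.h>
#include <asm/mmu.h>

/* Hypothetical helper: restrict the 4kB-page demotion to one mapping. */
static void demote_range_to_4k(struct mm_struct *mm,
			       unsigned long start, unsigned long len)
{
	/* New behaviour: only the slice(s) containing [start, start + len)
	 * switch to 4kB hardware pages; the rest of the address space keeps
	 * using 64kB pages. */
	slice_set_range_psize(mm, start, len, MMU_PAGE_4K);

	/* Old behaviour (pre-patch) would have been the whole-process
	 * demotion below, hurting unrelated 64kB mappings:
	 *
	 *	slice_set_user_psize(mm, MMU_PAGE_4K);
	 */
}
```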
Diffstat (limited to 'include/asm-powerpc')
-rw-r--r-- | include/asm-powerpc/page_64.h | 6 |
1 file changed, 6 insertions, 0 deletions
diff --git a/include/asm-powerpc/page_64.h b/include/asm-powerpc/page_64.h
index 25af4fc8daf4..02fd80710e9d 100644
--- a/include/asm-powerpc/page_64.h
+++ b/include/asm-powerpc/page_64.h
@@ -126,16 +126,22 @@ extern unsigned int get_slice_psize(struct mm_struct *mm,
 
 extern void slice_init_context(struct mm_struct *mm, unsigned int psize);
 extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
+extern void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
+				  unsigned long len, unsigned int psize);
+
 #define slice_mm_new_context(mm)	((mm)->context.id == 0)
 
 #endif /* __ASSEMBLY__ */
 #else
 #define slice_init()
+#define get_slice_psize(mm, addr)	((mm)->context.user_psize)
 #define slice_set_user_psize(mm, psize)		\
 do {						\
 	(mm)->context.user_psize = (psize);	\
 	(mm)->context.sllp = SLB_VSID_USER | mmu_psize_defs[(psize)].sllp; \
 } while (0)
+#define slice_set_range_psize(mm, start, len, psize)	\
+	slice_set_user_psize((mm), (psize))
 #define slice_mm_new_context(mm)	1
 
 #endif /* CONFIG_PPC_MM_SLICES */
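As a hedged sketch of how the accessors above are intended to be consumed (the function below is invented for illustration, not taken from the patch): with CONFIG_PPC_MM_SLICES, get_slice_psize() returns the page size recorded for the slice containing a given address, so different parts of one address space can use different hardware page sizes; without slices, the fallback macros reduce everything to the single mm->context.user_psize, and slice_set_range_psize() degenerates to slice_set_user_psize(), i.e. the old whole-process behaviour.

```c
#include <linux/mm_types.h>
#include <asm/page.h>

/* Hypothetical consumer: pick the hardware page size for an effective
 * address.  Per-slice lookup when slices are enabled; the process-wide
 * size otherwise (see the #else branch of the hunk above). */
static inline unsigned int psize_for_addr(struct mm_struct *mm,
					  unsigned long ea)
{
	return get_slice_psize(mm, ea);
}
```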