author    Yinghai Lu <yinghai@kernel.org>    2012-11-16 19:39:04 -0800
committer H. Peter Anvin <hpa@linux.intel.com>    2012-11-17 11:59:27 -0800
commit    22c8ca2ac256bb681be791858b35502b5d37e73b (patch)
tree      7eab01f0ad05e228f9d9c01da85503e6fafdb4f2 /arch/x86/mm/mm_internal.h
parent    6f80b68e9e515547edbacb0c37491730bf766db5 (diff)
x86, mm: Add alloc_low_pages(num)
The 32-bit kmap mapping needs its page-table pages to be handed out
from low to high. At this point those pages still come from pgt_buf_*
in the BRK area, so that is fine for now.

But we want to move early_ioremap_page_table_range_init() out of
init_memory_mapping() and call it only once later; that will make
page_table_range_init()/page_table_kmap_check()/alloc_low_page() get
their pages from memblock.

memblock allocates pages from high to low, so we would then hit the
BUG_ON in page_table_kmap_check() that verifies the ordering.

This patch adds alloc_low_pages(), which makes it possible to allocate
several pages up front and hand them out one by one from low to high.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1353123563-3103-28-git-send-email-yinghai@kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Diffstat (limited to 'arch/x86/mm/mm_internal.h')
-rw-r--r--    arch/x86/mm/mm_internal.h    6
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index b3f993a2555e..7e3b88ee078a 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -1,6 +1,10 @@
 #ifndef __X86_MM_INTERNAL_H
 #define __X86_MM_INTERNAL_H
 
-void *alloc_low_page(void);
+void *alloc_low_pages(unsigned int num);
+static inline void *alloc_low_page(void)
+{
+	return alloc_low_pages(1);
+}
 
 #endif /* __X86_MM_INTERNAL_H */