author     Peter Zijlstra <peterz@infradead.org>        2022-10-27 14:54:41 -0700
committer  Dave Hansen <dave.hansen@linux.intel.com>    2022-12-15 10:37:26 -0800
commit     97e3d26b5e5f371b3ee223d94dd123e6c442ba80 (patch)
tree       80ee0994078b5307c6edd071d36574313c995243 /arch/x86/kernel/hw_breakpoint.c
parent     3f148f3318140035e87decc1214795ff0755757b (diff)
download   lwn-97e3d26b5e5f371b3ee223d94dd123e6c442ba80.tar.gz
           lwn-97e3d26b5e5f371b3ee223d94dd123e6c442ba80.zip
x86/mm: Randomize per-cpu entry area
Seth found that the CPU-entry-area, the piece of per-cpu data that is
mapped into the userspace page-tables for kPTI, is not subject to any
randomization -- irrespective of kASLR settings.
On x86_64 a whole P4D (512 GB) of virtual address space is reserved for
this structure, which is plenty large enough to randomize things a
little.
As such, use a straightforward randomization scheme that avoids
duplicates to spread the existing CPUs over the available space.
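A minimal user-space sketch of such a duplicate-avoiding placement scheme is
below. The constants and names (AREA_TOTAL, SLOT_SIZE, NR_CPUS, init_offsets)
are illustrative assumptions for this sketch, not the kernel's actual code;
the real placement logic is in the full patch, which this view (limited to
hw_breakpoint.c) does not show.

/*
 * Sketch: spread NR_CPUS "CPUs" over the slots available in a 512 GB
 * window, retrying any slot an earlier CPU already claimed. All names
 * and constants here are assumptions made for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define AREA_TOTAL	(512ULL << 30)	/* one P4D: 512 GB of virtual space */
#define SLOT_SIZE	(256ULL << 10)	/* assumed per-CPU area size: 256 KB */
#define NR_CPUS		64		/* assumed number of CPUs */

static unsigned long cea_offset[NR_CPUS];

static void init_offsets(void)
{
	unsigned long max_slot = AREA_TOTAL / SLOT_SIZE;	/* ~2 million slots */
	unsigned int i, j;

	for (i = 0; i < NR_CPUS; i++) {
		unsigned long slot;
		bool reused;

		/* Pick a random slot, retrying if an earlier CPU already owns it. */
		do {
			slot = (unsigned long)rand() % max_slot;
			reused = false;
			for (j = 0; j < i; j++) {
				if (cea_offset[j] == slot) {
					reused = true;
					break;
				}
			}
		} while (reused);

		cea_offset[i] = slot;
	}
}

int main(void)
{
	srand((unsigned int)time(NULL));
	init_offsets();

	for (unsigned int i = 0; i < NR_CPUS; i++)
		printf("cpu %u -> slot %lu\n", i, cea_offset[i]);

	return 0;
}

With roughly 512 GB / 256 KB = ~2 million candidate slots and far fewer CPUs,
the retry loop above almost never has to iterate.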
[ bp: Fix le build. ]
Reported-by: Seth Jenkins <sethjenkins@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Diffstat (limited to 'arch/x86/kernel/hw_breakpoint.c')
-rw-r--r--   arch/x86/kernel/hw_breakpoint.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
index 668a4a6533d9..bbb0f737aab1 100644
--- a/arch/x86/kernel/hw_breakpoint.c
+++ b/arch/x86/kernel/hw_breakpoint.c
@@ -266,7 +266,7 @@ static inline bool within_cpu_entry(unsigned long addr, unsigned long end)
 
 	/* CPU entry erea is always used for CPU entry */
 	if (within_area(addr, end, CPU_ENTRY_AREA_BASE,
-			CPU_ENTRY_AREA_TOTAL_SIZE))
+			CPU_ENTRY_AREA_MAP_SIZE))
 		return true;
 
 	/*
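For context on the hunk above: once the per-CPU entry areas can land anywhere
inside the reserved window, the hardware-breakpoint code has to refuse
breakpoints over the whole CPU_ENTRY_AREA_MAP_SIZE window rather than only the
packed CPU_ENTRY_AREA_TOTAL_SIZE range. Below is a simplified stand-in for the
interval-overlap check this constant feeds into, written as a small user-space
sketch under the assumption that within_area() is a plain range-intersection
test; it is not the kernel's exact code, and the base address is illustrative.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in: does [addr, end] intersect [base, base + size)? */
static bool within_area(unsigned long addr, unsigned long end,
			unsigned long base, unsigned long size)
{
	return end >= base && addr < base + size;
}

int main(void)
{
	unsigned long base = 0xfffffe0000000000UL;	/* illustrative base */
	unsigned long size = 512UL << 30;		/* whole 512 GB window */

	/* A breakpoint just inside the window intersects it (prints 1)... */
	printf("%d\n", within_area(base + 4096, base + 4103, base, size));
	/* ...one just below it does not (prints 0). */
	printf("%d\n", within_area(base - 16, base - 9, base, size));
	return 0;
}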