path: root/arch/arm64/kernel/probes/decode-insn.c
2020-09-14  arm64: kprobe: clarify the comment of steppable hint instructions  (Amit Daniel Kachhap)

The existing comment about steppable hint instructions is incomplete and only describes NOP instructions as steppable. Since aarch64_insn_is_steppable_hint() allows all white-listed hint instructions to be probed, update the comment to reflect this.

Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Link: https://lore.kernel.org/r/20200914083656.21428-7-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
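As an illustration of the whitelist approach this commit documents, here is a minimal userspace sketch of such a check. It is not the kernel's aarch64_insn_is_steppable_hint(); the HINT encoding (mask 0xfffff01f, value 0xd503201f, 7-bit immediate in bits [11:5]) is architectural, but the specific set of allowed hint immediates below is only an example:

  /*
   * Sketch only: a HINT-space whitelist check in the spirit of
   * aarch64_insn_is_steppable_hint(). The allowed immediates below
   * (NOP and the BTI variants) are illustrative, not the kernel's list.
   */
  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  static bool insn_is_hint(uint32_t insn)
  {
          return (insn & 0xfffff01f) == 0xd503201f;
  }

  static bool hint_is_steppable(uint32_t insn)
  {
          uint32_t imm = (insn >> 5) & 0x7f;      /* CRm:op2 immediate */

          switch (imm) {
          case 0:                 /* NOP */
          case 32: case 34:       /* BTI, BTI c */
          case 36: case 38:       /* BTI j, BTI jc */
                  return true;
          default:
                  return false;   /* anything else: do not single-step */
          }
  }

  int main(void)
  {
          uint32_t nop = 0xd503201f;      /* NOP */
          uint32_t wfi = 0xd503207f;      /* WFI = HINT #3 */

          printf("NOP steppable: %d\n", insn_is_hint(nop) && hint_is_steppable(nop));
          printf("WFI steppable: %d\n", insn_is_hint(wfi) && hint_is_steppable(wfi));
          return 0;
  }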
2020-09-14  arm64: kprobe: add checks for ARMv8.3-PAuth combined instructions  (Amit Daniel Kachhap)

Currently the ARMv8.3-PAuth combined branch instructions (braa, retaa etc.) are not simulated for out-of-line execution with a handler. Hence a uprobe on such instructions leads to kernel warnings in a loop, as they are not explicitly checked and fall into the INSN_GOOD category. Other combined instructions like LDRAA and LDRAB can be probed.

The issue with the combined branch instructions is fixed by adding group definitions for all such instructions and rejecting their probes. The instruction groups added are br_auth (braa, brab, braaz and brabz), blr_auth (blraa, blrab, blraaz and blrabz), ret_auth (retaa and retab) and eret_auth (eretaa and eretab).

Warning log:

  WARNING: CPU: 0 PID: 156 at arch/arm64/kernel/probes/uprobes.c:182 uprobe_single_step_handler+0x34/0x50
  Modules linked in:
  CPU: 0 PID: 156 Comm: func Not tainted 5.9.0-rc3 #188
  Hardware name: Foundation-v8A (DT)
  pstate: 804003c9 (Nzcv DAIF +PAN -UAO BTYPE=--)
  pc : uprobe_single_step_handler+0x34/0x50
  lr : single_step_handler+0x70/0xf8
  sp : ffff800012af3e30
  x29: ffff800012af3e30 x28: ffff000878723b00 x27: 0000000000000000
  x26: 0000000000000000 x25: 0000000000000000 x24: 0000000000000000
  x23: 0000000060001000 x22: 00000000cb000022 x21: ffff800012065ce8
  x20: ffff800012af3ec0 x19: ffff800012068d50 x18: 0000000000000000
  x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
  x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
  x11: 0000000000000000 x10: 0000000000000000 x9 : ffff800010085c90
  x8 : 0000000000000000 x7 : 0000000000000000 x6 : ffff80001205a9c8
  x5 : ffff80001205a000 x4 : ffff80001233db80 x3 : ffff8000100a7a60
  x2 : 0020000000000003 x1 : 0000fffffffff008 x0 : ffff800012af3ec0
  Call trace:
   uprobe_single_step_handler+0x34/0x50
   single_step_handler+0x70/0xf8
   do_debug_exception+0xb8/0x130
   el0_sync_handler+0x138/0x1b8
   el0_sync+0x158/0x180

Fixes: 74afda4016a7 ("arm64: compile the kernel with ptrauth return address signing")
Fixes: 04ca3204fa09 ("arm64: enable pointer authentication")
Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Link: https://lore.kernel.org/r/20200914083656.21428-2-amit.kachhap@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
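The shape of the resulting check can be sketched as below. The is_*_auth() group predicates mirror the groups named above but, except for ret_auth, their encoding tests are left as stubs; in the kernel such groups come from mask/value definitions via the __AARCH64_INSN_FUNCS() macro, so treat this as an illustration rather than the actual patch:

  /*
   * Illustrative sketch: reject combined PAuth branches during decode.
   * Only is_ret_auth() carries concrete encodings (RETAA/RETAB); the other
   * predicates are stubs standing in for mask/value checks.
   */
  #include <stdbool.h>
  #include <stdint.h>

  enum probe_insn { INSN_REJECTED, INSN_GOOD_NO_SLOT, INSN_GOOD };

  static bool is_ret_auth(uint32_t insn)
  {
          return insn == 0xd65f0bff ||    /* RETAA */
                 insn == 0xd65f0fff;      /* RETAB */
  }

  static bool is_br_auth(uint32_t insn)   { (void)insn; return false; }  /* braa/brab/braaz/brabz (stub) */
  static bool is_blr_auth(uint32_t insn)  { (void)insn; return false; }  /* blraa/blrab/blraaz/blrabz (stub) */
  static bool is_eret_auth(uint32_t insn) { (void)insn; return false; }  /* eretaa/eretab (stub) */

  static enum probe_insn decode_for_probe(uint32_t insn)
  {
          if (is_br_auth(insn) || is_blr_auth(insn) ||
              is_ret_auth(insn) || is_eret_auth(insn))
                  return INSN_REJECTED;   /* no out-of-line step, no simulation handler */

          /* ... the rest of the classification is elided ... */
          return INSN_GOOD;
  }

  int main(void)
  {
          return decode_for_probe(0xd65f0bff) == INSN_REJECTED ? 0 : 1;  /* RETAA is rejected */
  }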
2020-05-04  arm64: insn: Provide a better name for aarch64_insn_is_nop()  (Mark Brown)

The current aarch64_insn_is_nop() has exactly one caller, which uses it solely to identify whether an instruction is a HINT that can safely be stepped. That requires us to list things that aren't NOPs and makes things more confusing than they need to be. Rename the function to reflect the actual usage and make things clearer.

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200504131326.18290-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 174  (Thomas Gleixner)

Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation this program is distributed
  in the hope that it will be useful but without any warranty without
  even the implied warranty of merchantability or fitness for a
  particular purpose see the gnu general public license for more details

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 655 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070034.575739538@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-07  arm64: fix error: conflicting types for 'kprobe_fault_handler'  (Pratyush Anand)

When CONFIG_KPROBES is disabled but CONFIG_UPROBE_EVENT is enabled, we get the following compilation error:

  In file included from .../arch/arm64/kernel/probes/decode-insn.c:20:0:
  .../arch/arm64/include/asm/kprobes.h:52:5: error: conflicting types for 'kprobe_fault_handler'
   int kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr);
       ^~~~~~~~~~~~~~~~~~~~
  In file included from .../arch/arm64/kernel/probes/decode-insn.c:17:0:
  .../include/linux/kprobes.h:398:90: note: previous definition of 'kprobe_fault_handler' was here
   static inline int kprobe_fault_handler(struct pt_regs *regs, int trapnr)
                                                                                            ^
  .../scripts/Makefile.build:290: recipe for target 'arch/arm64/kernel/probes/decode-insn.o' failed

<asm/kprobes.h> is already included from <linux/kprobes.h> under #ifdef CONFIG_KPROBES. So, this patch fixes the error by removing the direct inclusion of <asm/kprobes.h> from decode-insn.c.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-07  arm64: kprobe: protect/rename few definitions to be reused by uprobe  (Pratyush Anand)

The decode-insn code has to be reused by the arm64 uprobe implementation as well. Therefore, this patch protects some portions of the kprobe code and renames a few others, so that the decode-insn functionality can be reused by uprobe even when CONFIG_KPROBES is not defined.

kprobe_opcode_t and struct arch_specific_insn are also defined by linux/kprobes.h when CONFIG_KPROBES is not defined. So, protect these definitions in asm/probes.h.

linux/kprobes.h already includes asm/kprobes.h. Therefore, remove the inclusion of asm/kprobes.h from decode-insn.c.

Some definitions, like kprobe_insn and kprobes_handler_t, can be reused by uprobe, so it is better to drop the 'k' from their names. struct arch_specific_insn is specific to kprobe; therefore, introduce a new struct arch_probe_insn which will be common to both kprobe and uprobe, so that the decode-insn code can be shared. Modify the kprobe code accordingly. A sketch of this split follows below.

Function arm_probe_decode_insn() will be needed by uprobe as well, so make it global.

Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
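A simplified sketch of the split described above; the field and type names here are paraphrased for illustration, not copied from the kernel headers:

  /*
   * Sketch: a common arch_probe_insn that decode-insn can fill in for
   * either kprobes or uprobes, wrapped by the kprobe-specific struct.
   */
  #include <stdint.h>

  struct pt_regs;

  typedef uint32_t probe_opcode_t;
  typedef void (probes_handler_t)(uint32_t opcode, long addr, struct pt_regs *regs);

  /* shared between kprobes and uprobes */
  struct arch_probe_insn {
          probe_opcode_t *insn;           /* out-of-line copy of the instruction */
          probes_handler_t *handler;      /* simulation handler, if stepping is unsafe */
  };

  /* kprobe-specific wrapper keeps any kprobe-only state alongside the common part */
  struct arch_specific_insn {
          struct arch_probe_insn api;
  };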
2016-09-15  arm64: Improve kprobes test for atomic sequence  (David A. Long)

Kprobes searches backwards a finite number of instructions to determine if there is an attempt to probe a load/store exclusive sequence. It stops when it hits the maximum number of instructions or a load or store exclusive. However this means it can run up past the beginning of the function and start looking at literal constants. This has been shown to cause a false positive and blocks insertion of the probe.

To fix this, further limit the backwards search to stop if it hits a symbol address from kallsyms. The presumption is that this is the entry point to this code (particularly for the common case of placing probes at the beginning of functions). This also improves efficiency by not searching code that is not part of the function.

There may be some possibility that the label might not denote the entry path to the probed instruction but the likelihood seems low and this is just another example of how the kprobes user really needs to be careful about what they are doing.

Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
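A userspace sketch of the backward scan this commit constrains; the exclusive-access predicates are stubs, and in the kernel the lower bound comes from the kallsyms lookup of the function start (or the existing maximum-instruction window, whichever is nearer):

  /*
   * Sketch: walk backwards from the probe address. Meeting a load-exclusive
   * before a store-exclusive (or before the scan lower bound) means the
   * probe would land inside an LDXR/STXR sequence and must be rejected.
   */
  #include <stdbool.h>
  #include <stdint.h>

  static bool is_store_exclusive(uint32_t insn) { (void)insn; return false; } /* STXR/STLXR... (stub) */
  static bool is_load_exclusive(uint32_t insn)  { (void)insn; return false; } /* LDXR/LDAXR... (stub) */

  static bool probe_in_atomic_sequence(const uint32_t *probe_addr,
                                       const uint32_t *scan_end)
  {
          for (const uint32_t *p = probe_addr; p >= scan_end; p--) {
                  if (is_store_exclusive(*p))
                          return false;   /* a previous sequence already ended here */
                  if (is_load_exclusive(*p))
                          return true;    /* between LDXR and STXR: reject the probe */
          }
          return false;                   /* reached the function entry: not atomic */
  }

  int main(void)
  {
          uint32_t code[4] = { 0 };       /* stand-in for a function body */

          return probe_in_atomic_sequence(&code[3], &code[0]) ? 1 : 0;
  }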
2016-07-19  arm64: kprobes instruction simulation support  (Sandeepa Prabhu)

Kprobes needs simulation of instructions that cannot be stepped from a different memory location, e.g. those instructions that use PC-relative addressing. In simulation, the behaviour of the instruction is implemented using a copy of pt_regs.

The following instruction categories are simulated:
 - All branching instructions (conditional, register, and immediate)
 - Literal access instructions (load-literal, adr/adrp)

Conditional execution is limited to branching instructions in ARMv8. If the conditions in PSTATE do not match the condition fields of the opcode, the instruction is effectively a NOP.

Thanks to Will Cohen for assorted suggested changes.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
[catalin.marinas@arm.com: removed linux/module.h include]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
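As a concrete illustration of simulating a PC-relative instruction on a register-file copy, here is a small standalone sketch for the immediate branch (B); the struct and function names are illustrative rather than the kernel's simulate-insn helpers:

  /*
   * Sketch: simulate "B <label>" on a copy of the registers instead of
   * single-stepping it, since the branch target is PC-relative.
   * Encoding: bits[31:26] = 000101, imm26 in bits[25:0],
   * target = pc + sign_extend(imm26) * 4.
   */
  #include <stdint.h>
  #include <stdio.h>

  struct regs_copy {
          uint64_t pc;
  };

  static void simulate_b(uint32_t insn, struct regs_copy *regs)
  {
          int64_t imm26 = insn & 0x03ffffff;

          if (imm26 & 0x02000000)         /* sign-extend the 26-bit immediate */
                  imm26 -= 0x04000000;
          regs->pc += imm26 * 4;          /* B is always taken */
  }

  int main(void)
  {
          struct regs_copy regs = { .pc = 0x1000 };
          uint32_t b_minus_8 = 0x17fffffe;        /* b . - 8 */

          simulate_b(b_minus_8, &regs);
          printf("new pc = 0x%llx\n", (unsigned long long)regs.pc);  /* prints 0xff8 */
          return 0;
  }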
2016-07-19  arm64: Kprobes with single stepping support  (Sandeepa Prabhu)

Add support for basic kernel probes (kprobes) and jump probes (jprobes) for ARM64.

Kprobes utilizes the software breakpoint and single-step debug exceptions supported on ARMv8. A software breakpoint is placed at the probe address to trap kernel execution into the kprobe handler.

ARMv8 supports enabling single stepping before the break exception return (ERET), with the next PC in the exception return address (ELR_EL1). The kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed, and enables single stepping. The PC is set to the out-of-line slot address before the ERET. With this scheme, the instruction is executed with the exact same register context except for the PC (and DAIF) registers.

The debug mask (PSTATE.D) is enabled only when single stepping a recursive kprobe, e.g. during kprobes re-entry, so that the probed instruction can be single-stepped within the kprobe handler -exception- context. The recursion depth of kprobe is always 2, i.e. upon probe re-entry, any further re-entry is prevented by not calling handlers and the case is counted as a missed kprobe.

Single stepping from the out-of-line (XOL) slot has a drawback for PC-relative accesses like branching and symbolic literal access, as the offset from the new PC (the slot address) may not fit in the immediate value of the opcode. Such instructions need simulation, so reject probing them.

Instructions generating exceptions or CPU mode changes are rejected for probing. Exclusive load/store instructions are rejected too. Additionally, the code is checked to see if it is inside an exclusive load/store sequence (code from Pratyush).

System instructions are mostly enabled for stepping, except MSR/MRS accesses to the "DAIF" flags in PSTATE, which are not safe for probing.

This also changes arch/arm64/include/asm/ptrace.h to use include/asm-generic/ptrace.h.

Thanks to Steve Capper and Pratyush Anand for several suggested changes.

Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
Signed-off-by: David A. Long <dave.long@linaro.org>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
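The out-of-line stepping scheme described above can be sketched, very loosely, as the following conceptual flow; names and types here are illustrative only, and the real kernel code works on pt_regs and a per-CPU kprobe control block:

  /*
   * Conceptual sketch of out-of-line single stepping: copy the probed
   * instruction into a slot, point the saved PC at the slot for the ERET,
   * and after the single-step debug exception resume after the probe.
   */
  #include <stdint.h>

  struct xol_slot {
          uint32_t insn;                  /* copy of the probed instruction */
  };

  struct saved_regs {
          uint64_t pc;                    /* what ELR_EL1 will be restored to */
  };

  /* Called from the breakpoint handler: arrange to step the copy. */
  static uint64_t setup_singlestep(uint32_t orig_insn, struct xol_slot *slot,
                                   struct saved_regs *regs, uint64_t probe_addr)
  {
          slot->insn = orig_insn;                         /* out-of-line copy */
          regs->pc = (uint64_t)(uintptr_t)&slot->insn;    /* ERET resumes in the slot */
          return probe_addr + 4;                          /* where to continue afterwards */
  }

  /* Called from the single-step handler once the copy has executed. */
  static void post_singlestep(struct saved_regs *regs, uint64_t resume_pc)
  {
          regs->pc = resume_pc;           /* continue after the probed instruction */
  }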