author      Yonghong Song <yonghong.song@linux.dev>        2024-11-12 08:39:07 -0800
committer   Alexei Starovoitov <ast@kernel.org>            2024-11-12 16:26:24 -0800
commit      a76ab5731e32d50ff5b1ae97e9dc4b23f41c23f5 (patch)
tree        a104cbc12ec9710639d696b294712b98468ab153 /kernel/bpf/core.c
parent      c748a255aedfd42adc4213479f669f0f4809b85e (diff)
bpf: Find eligible subprogs for private stack support
The private stack will be allocated with the percpu allocator at JIT time.
To avoid complexity at runtime, only one copy of the private stack is
available per CPU per prog, so a runtime recursion check is necessary
to avoid stack corruption.
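
A minimal sketch of the allocation side, assuming kernel context; the
struct and helper names are illustrative, only __alloc_percpu()/
free_percpu() and the per-CPU semantics are real kernel interfaces:

#include <linux/percpu.h>
#include <linux/errno.h>

/* Illustrative only: one private stack copy per CPU for the whole prog.
 * A recursive entry on the same CPU would reuse (and corrupt) the same
 * region, hence the reliance on the kernel-side recursion checks
 * described below.
 */
struct priv_stack {
	void __percpu *area;	/* per-CPU stack region */
};

static int priv_stack_alloc(struct priv_stack *ps, size_t stack_size)
{
	/* done once, at JIT time, with the percpu allocator */
	ps->area = __alloc_percpu(stack_size, 8);
	return ps->area ? 0 : -ENOMEM;
}

static void priv_stack_free(struct priv_stack *ps)
{
	free_percpu(ps->area);
}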
The current private stack only supports kprobe/perf_event/tp/raw_tp
progs, which have a recursion check in the kernel, and prog types
that rely on the bpf trampoline recursion check. Among trampoline-based
prog types, currently only tracing progs have recursion checking.
To avoid complexity, all async_cb subprogs use the normal kernel stack,
including subprogs used by both the main prog subtree and the async_cb
subtree. Any prog having a tail call also uses the kernel stack.
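
The eligibility rules above can be condensed into a sketch like the
following; the prog-type enum values and the subprog_info fields
(is_async_cb, has_tail_call) are real verifier definitions, while the
helper itself is hypothetical and the actual patch may structure the
checks differently:

#include <linux/bpf.h>
#include <linux/bpf_verifier.h>

/* Hypothetical helper summarizing the eligibility rules. */
static bool subprog_eligible_for_priv_stack(struct bpf_verifier_env *env,
					    int subprog)
{
	/* async_cb subprogs (including those shared with the main prog
	 * subtree) and anything with a tail call stay on the normal
	 * kernel stack */
	if (env->subprog_info[subprog].is_async_cb ||
	    env->subprog_info[subprog].has_tail_call)
		return false;

	switch (env->prog->type) {
	case BPF_PROG_TYPE_KPROBE:
	case BPF_PROG_TYPE_PERF_EVENT:
	case BPF_PROG_TYPE_TRACEPOINT:
	case BPF_PROG_TYPE_RAW_TRACEPOINT:
		return true;	/* recursion check done by the kernel */
	case BPF_PROG_TYPE_TRACING:
		return true;	/* bpf trampoline recursion check */
	default:
		return false;
	}
}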
To avoid a JIT penalty from private stack support, a subprog stack-size
threshold is set such that the private stack is used only if the stack
size is no less than the threshold. The current threshold is 64 bytes;
this avoids the JIT penalty when stack usage is small.
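
The threshold itself reduces to a comparison along these lines; the
constant name is an assumption, not necessarily the one the patch
introduces:

/* Assumed name for the 64-byte threshold described above. */
#define PRIV_STACK_MIN_SIZE	64

/* Use the private stack only when the frame is large enough that the
 * extra JIT-emitted bookkeeping pays for itself. */
static bool stack_size_warrants_priv_stack(u32 stack_depth)
{
	return stack_depth >= PRIV_STACK_MIN_SIZE;
}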
A useless 'continue' is also removed from a loop in
check_max_stack_depth().
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20241112163907.2223839-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'kernel/bpf/core.c')
-rw-r--r--  kernel/bpf/core.c  5
1 file changed, 5 insertions, 0 deletions
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 233ea78f8f1b..14d9288441f2 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -3045,6 +3045,11 @@ bool __weak bpf_jit_supports_exceptions(void)
 	return false;
 }
 
+bool __weak bpf_jit_supports_private_stack(void)
+{
+	return false;
+}
+
 void __weak arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie)
 {
 }
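
The hunk above adds a __weak default returning false, so every
architecture is opted out until its JIT declares support. An arch JIT
that implements private stacks would override the weak symbol with a
strong definition; a sketch, assuming the usual bpf_jit_supports_*()
override pattern (the placement is illustrative):

/* e.g. in an arch JIT file such as arch/x86/net/bpf_jit_comp.c:
 * the strong symbol overrides the __weak default in kernel/bpf/core.c
 * once the JIT emits private-stack prologue/epilogue code. */
bool bpf_jit_supports_private_stack(void)
{
	return true;
}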