path: root/arch/riscv/net
2019-12-27  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)

Daniel Borkmann says:

====================
pull-request: bpf-next 2019-12-27

The following pull-request contains BPF updates for your *net-next* tree.

We've added 127 non-merge commits during the last 17 day(s) which contain
a total of 110 files changed, 6901 insertions(+), 2721 deletions(-).

There are three merge conflicts. Conflicts and resolution look as follows:

1) Merge conflict in net/bpf/test_run.c:

   There was a tree-wide cleanup c593642c8be0 ("treewide: Use sizeof_field()
   macro") which gets in the way with b590cb5f802d ("bpf: Switch to
   offsetofend in BPF_PROG_TEST_RUN"):

     <<<<<<< HEAD
             if (!range_is_zero(__skb, offsetof(struct __sk_buff, priority) +
                                sizeof_field(struct __sk_buff, priority),
     =======
             if (!range_is_zero(__skb, offsetofend(struct __sk_buff, priority),
     >>>>>>> 7c8dce4b166113743adad131b5a24c4acc12f92c

   There are a few occasions that look similar to this. Always take the
   chunk with offsetofend(). Note that there is one where the fields differ
   in here:

     <<<<<<< HEAD
             if (!range_is_zero(__skb, offsetof(struct __sk_buff, tstamp) +
                                sizeof_field(struct __sk_buff, tstamp),
     =======
             if (!range_is_zero(__skb, offsetofend(struct __sk_buff, gso_segs),
     >>>>>>> 7c8dce4b166113743adad131b5a24c4acc12f92c

   Just take the one with offsetofend() /and/ gso_segs. The latter is
   correct due to 850a88cc4096 ("bpf: Expose __sk_buff wire_len/gso_segs
   to BPF_PROG_TEST_RUN").

2) Merge conflict in arch/riscv/net/bpf_jit_comp.c:

   (I'm keeping Bjorn in Cc here for a double-check in case I got it wrong.)

     <<<<<<< HEAD
             if (is_13b_check(off, insn))
                     return -1;
             emit(rv_blt(tcc, RV_REG_ZERO, off >> 1), ctx);
     =======
             emit_branch(BPF_JSLT, RV_REG_T1, RV_REG_ZERO, off, ctx);
     >>>>>>> 7c8dce4b166113743adad131b5a24c4acc12f92c

   Result should look like:

             emit_branch(BPF_JSLT, tcc, RV_REG_ZERO, off, ctx);

3) Merge conflict in arch/riscv/include/asm/pgtable.h:

     <<<<<<< HEAD
     =======
     #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
     #define VMALLOC_END      (PAGE_OFFSET - 1)
     #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)

     #define BPF_JIT_REGION_SIZE   (SZ_128M)
     #define BPF_JIT_REGION_START  (PAGE_OFFSET - BPF_JIT_REGION_SIZE)
     #define BPF_JIT_REGION_END    (VMALLOC_END)

     /*
      * Roughly size the vmemmap space to be large enough to fit enough
      * struct pages to map half the virtual address space. Then
      * position vmemmap directly below the VMALLOC region.
      */
     #define VMEMMAP_SHIFT \
             (CONFIG_VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT)
     #define VMEMMAP_SIZE     BIT(VMEMMAP_SHIFT)
     #define VMEMMAP_END      (VMALLOC_START - 1)
     #define VMEMMAP_START    (VMALLOC_START - VMEMMAP_SIZE)

     #define vmemmap          ((struct page *)VMEMMAP_START)
     >>>>>>> 7c8dce4b166113743adad131b5a24c4acc12f92c

   Only take the BPF_* defines from there and move them higher up in the
   same file. Remove the rest from the chunk. The VMALLOC_* etc defines
   got moved via 01f52e16b868 ("riscv: define vmemmap before pfn_to_page
   calls"). Result:

     [...]
     #define __S101  PAGE_READ_EXEC
     #define __S110  PAGE_SHARED_EXEC
     #define __S111  PAGE_SHARED_EXEC

     #define VMALLOC_SIZE     (KERN_VIRT_SIZE >> 1)
     #define VMALLOC_END      (PAGE_OFFSET - 1)
     #define VMALLOC_START    (PAGE_OFFSET - VMALLOC_SIZE)

     #define BPF_JIT_REGION_SIZE   (SZ_128M)
     #define BPF_JIT_REGION_START  (PAGE_OFFSET - BPF_JIT_REGION_SIZE)
     #define BPF_JIT_REGION_END    (VMALLOC_END)

     /*
      * Roughly size the vmemmap space to be large enough to fit enough
      * struct pages to map half the virtual address space. Then
      * position vmemmap directly below the VMALLOC region.
      */
     #define VMEMMAP_SHIFT \
             (CONFIG_VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT)
     #define VMEMMAP_SIZE     BIT(VMEMMAP_SHIFT)
     #define VMEMMAP_END      (VMALLOC_START - 1)
     #define VMEMMAP_START    (VMALLOC_START - VMEMMAP_SIZE)
     [...]

Let me know if there are any other issues.

Anyway, the main changes are:

1) Extend bpftool to produce a struct (aka "skeleton") tailored and
   specific to a provided BPF object file. This provides an alternative,
   simplified API compared to standard libbpf interaction. Also, add
   libbpf extern variable resolution for .kconfig section to import
   Kconfig data, from Andrii Nakryiko.

2) Add BPF dispatcher for XDP which is a mechanism to avoid indirect
   calls by generating a branch funnel as discussed back in bpfconf'19
   at LSF/MM. Also, add various BPF riscv JIT improvements, from
   Björn Töpel.

3) Extend bpftool to allow matching BPF programs and maps by name,
   from Paul Chaignon.

4) Support for replacing cgroup BPF programs attached with
   BPF_F_ALLOW_MULTI flag for allowing updates without service
   interruption, from Andrey Ignatov.

5) Cleanup and simplification of ring access functions for AF_XDP
   with a bonus of 0-5% performance improvement, from Magnus Karlsson.

6) Enable BPF JITs for x86-64 and arm64 by default. Also, final version
   of audit support for BPF, from Daniel Borkmann, the latter with
   Jiri Olsa.

7) Move and extend test_select_reuseport into BPF program tests under
   BPF selftests, from Jakub Sitnicki.

8) Various BPF sample improvements for xdpsock for customizing parameters
   to set up and benchmark AF_XDP, from Jay Jayatheerthan.

9) Improve libbpf to provide a ulimit hint on permission denied errors.
   Also change XDP sample programs to attach in driver mode by default,
   from Toke Høiland-Jørgensen.

10) Extend BPF test infrastructure to allow changing skb mark from tc
    BPF programs, from Nikita V. Shirokov.

11) Optimize prologue code sequence in BPF arm32 JIT, from Russell King.

12) Fix xdp_redirect_cpu BPF sample to manually attach to tracepoints
    after libbpf conversion, from Jesper Dangaard Brouer.

13) Minor misc improvements from various others.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
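As background on the first conflict: offsetofend() computes the same byte
offset as the open-coded offsetof() + sizeof_field() pattern it replaces.
A minimal userspace sketch of that equivalence (the struct and field below
are hypothetical stand-ins, not kernel definitions):

  #include <stddef.h>
  #include <stdio.h>

  /* Userspace re-statements of the kernel helpers, for illustration only. */
  #define sizeof_field(TYPE, MEMBER) sizeof((((TYPE *)0)->MEMBER))
  #define offsetofend(TYPE, MEMBER) \
          (offsetof(TYPE, MEMBER) + sizeof_field(TYPE, MEMBER))

  struct example {              /* hypothetical stand-in for struct __sk_buff */
          unsigned int mark;
          unsigned int priority;
  };

  int main(void)
  {
          /* Both expressions name the first byte past 'priority'. */
          printf("%zu == %zu\n",
                 offsetof(struct example, priority) +
                 sizeof_field(struct example, priority),
                 offsetofend(struct example, priority));
          return 0;
  }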
2019-12-19  riscv, bpf: Optimize calls  (Björn Töpel)

Instead of using emit_imm() and emit_jalr() which can expand to six
instructions, start using jal or auipc+jalr.

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191216091343.23260-8-bjorn.topel@gmail.com
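A hedged sketch of the call-emission strategy described above: use a single
jal when the PC-relative offset fits in its signed 21-bit immediate,
otherwise fall back to an auipc+jalr pair. The helper and register names
(emit, rv_jal, rv_auipc, rv_jalr, RV_REG_RA, RV_REG_T1, is_21b_int) follow
the JIT's general style but are assumptions here, not quotes from the tree:

  /* Hedged sketch, not the verbatim kernel code: emit a BPF-to-kernel call
   * as either one jal or an auipc+jalr pair, depending on whether the
   * PC-relative byte offset fits in jal's signed 21-bit immediate. */
  static bool is_21b_int(s64 val)
  {
          return -(1L << 20) <= val && val < (1L << 20);
  }

  static void emit_call_rel(s64 off, struct rv_jit_context *ctx)
  {
          if (is_21b_int(off)) {
                  /* Single instruction: jal ra, off (immediate in halfwords) */
                  emit(rv_jal(RV_REG_RA, off >> 1), ctx);
          } else {
                  /* auipc loads the upper 20 bits of the offset into t1,
                   * jalr adds the sign-extended lower 12 bits and jumps;
                   * the +0x800 rounding compensates for that sign extension. */
                  s64 upper = (off + (1 << 11)) >> 12;
                  s64 lower = off & 0xfff;

                  emit(rv_auipc(RV_REG_T1, upper), ctx);
                  emit(rv_jalr(RV_REG_RA, RV_REG_T1, lower), ctx);
          }
  }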
2019-12-19  riscv, bpf: Provide RISC-V specific JIT image alloc/free  (Björn Töpel)

This commit makes sure that the JIT image is kept close to the kernel
text, so BPF calls can use relative calling with auipc/jalr or jal
instead of loading the full 64-bit address and jalr. The BPF JIT image
region is 128 MB before the kernel text.

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191216091343.23260-7-bjorn.topel@gmail.com
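A hedged sketch of how such a dedicated allocator can be wired up: override
the weak bpf_jit_alloc_exec()/bpf_jit_free_exec() hooks and restrict the
vmalloc range to the BPF_JIT_REGION_* window quoted in the merge message
above. Details such as the page protection are assumptions, not the
verbatim kernel code:

  /* Hedged sketch of an arch-specific JIT allocator that keeps images in
   * the 128 MB window right below the kernel text, so auipc/jalr (+-2 GB)
   * and jal (+-1 MB) can reach kernel symbols.  BPF_JIT_REGION_START/END
   * come from the pgtable.h defines shown above. */
  #include <linux/filter.h>
  #include <linux/vmalloc.h>

  void *bpf_jit_alloc_exec(unsigned long size)
  {
          /* PAGE_KERNEL is assumed here; the generic BPF core is expected
           * to flip the image to read-only/executable separately. */
          return __vmalloc_node_range(size, PAGE_SIZE,
                                      BPF_JIT_REGION_START, BPF_JIT_REGION_END,
                                      GFP_KERNEL, PAGE_KERNEL,
                                      0, NUMA_NO_NODE,
                                      __builtin_return_address(0));
  }

  void bpf_jit_free_exec(void *addr)
  {
          vfree(addr);
  }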
2019-12-19  riscv, bpf: Optimize BPF tail calls  (Björn Töpel)

Remove one addi, and instead use the offset part of jalr.

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191216091343.23260-6-bjorn.topel@gmail.com
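A hedged illustration of the saved instruction: jalr has its own signed
12-bit immediate, so a small constant that was previously materialized with
a separate addi can be folded into the jump itself (register names and the
offset value are purely illustrative):

  /* Hedged sketch: jump to "t1 + 4" in the tail-call path.
   *
   * Before: two instructions
   *         addi  t1, t1, 4        ; t1 = target + 4
   *         jalr  zero, t1, 0      ; jump to t1
   *
   * After: one instruction, folding the constant into jalr's immediate
   *         jalr  zero, t1, 4      ; jump to t1 + 4
   */
  emit(rv_jalr(RV_REG_ZERO, RV_REG_T1, 4), ctx);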
2019-12-19  riscv, bpf: Add support for far jumps and exits  (Björn Töpel)

This commit adds support for far (offset > 21b) jumps and exits.

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Luke Nelson <lukenels@cs.washington.edu>
Link: https://lore.kernel.org/bpf/20191216091343.23260-5-bjorn.topel@gmail.com
2019-12-19  riscv, bpf: Add support for far branching when emitting tail call  (Björn Töpel)

Start using the emit_branch() function in the tail call emitter in order
to support far branching.

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191216091343.23260-4-bjorn.topel@gmail.com
2019-12-19  riscv, bpf: Add support for far branching  (Björn Töpel)

This commit adds branch relaxation to the BPF JIT, and with that support
for far (offset greater than 12b) branching.

The branch relaxation requires more than two passes to converge. For
most programs it is three passes, but for larger programs it can be
more.

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Luke Nelson <lukenels@cs.washington.edu>
Link: https://lore.kernel.org/bpf/20191216091343.23260-3-bjorn.topel@gmail.com
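A hedged sketch of what "passes to converge" means in this scheme:
repeatedly re-measure the program using the per-instruction offsets
recorded by the previous pass until the image size stops changing, then
emit for real. The helper translate_prog() and the pass limit are
hypothetical; ctx->ninsns only follows the JIT's naming style:

  /* Hedged sketch of a branch-relaxation driver loop, not the verbatim
   * bpf_int_jit_compile().  While no image buffer exists nothing is
   * written; each pass re-measures the program using the offsets from
   * the previous pass, so branches that now reach their targets with a
   * short encoding shrink the image.  Stop once the size is stable. */
  #define NR_JIT_PASSES 16      /* assumption: bound the number of passes */

  static int jit_relax_until_stable(struct rv_jit_context *ctx)
  {
          int prev_ninsns = 0, i;

          for (i = 0; i < NR_JIT_PASSES; i++) {
                  ctx->ninsns = 0;
                  if (translate_prog(ctx))        /* hypothetical per-pass translator */
                          return -EINVAL;
                  if (i > 0 && ctx->ninsns == prev_ninsns)
                          return 0;               /* offsets converged */
                  prev_ninsns = ctx->ninsns;
          }
          return -E2BIG;                          /* did not converge */
  }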
2019-12-19  riscv, bpf: Fix broken BPF tail calls  (Björn Töpel)

The BPF JIT incorrectly clobbered the a0 register, and did not flag
usage of the s5 register when the BPF stack was being used.

Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G")
Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191216091343.23260-2-bjorn.topel@gmail.com
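For context, a hedged sketch of the "flag usage" mechanism this fix refers
to: the JIT tracks which callee-saved registers a program touches, and the
prologue/epilogue only save and restore the flagged ones, so failing to
flag s5 when the BPF stack is used leaves the caller's s5 unprotected. The
names below are illustrative assumptions in the JIT's style:

  /* Hedged sketch of callee-saved register tracking in the JIT.  In these
   * terms, the bug was a missing mark_fp() when a program needed a stack
   * frame. */
  #include <linux/bitops.h>

  enum {
          SEEN_S5,        /* s5: BPF frame pointer (illustrative mapping) */
          SEEN_S6,        /* further callee-saved regs the JIT hands out */
  };

  struct jit_seen_regs {
          unsigned long flags;
  };

  static void mark_fp(struct jit_seen_regs *seen)
  {
          __set_bit(SEEN_S5, &seen->flags);       /* prologue must save s5 */
  }

  static bool seen_fp(const struct jit_seen_regs *seen)
  {
          /* Checked by the prologue/epilogue to decide whether to
           * save and restore s5. */
          return test_bit(SEEN_S5, &seen->flags);
  }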
2019-12-11  bpf, riscv: Limit to 33 tail calls  (Paul Chaignon)

All BPF JIT compilers except RISC-V's and MIPS' enforce a limit of 33
tail calls at runtime. In addition, a test was recently added, in
tailcalls2, to check this limit.

This patch updates the tail call limit in RISC-V's JIT compiler to
allow 33 tail calls. I tested it using the above selftest on an
emulated RISCV64.

Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G")
Reported-by: Mahshid Khezri <khezri.mahshid@gmail.com>
Signed-off-by: Paul Chaignon <paul.chaignon@orange.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/966fe384383bf23a0ee1efe8d7291c78a3fb832b.1575916815.git.paul.chaignon@gmail.com
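A hedged, interpreter-style sketch (plain C, not the emitted RISC-V
assembly) of the runtime guard the JIT has to match; with MAX_TAIL_CALL_CNT
at 32 back then, testing the counter before incrementing it is what allows
33 chained tail calls:

  /* Hedged sketch of the tail-call counter semantics every JIT is expected
   * to reproduce: the counter is tested before it is incremented, so 33
   * chained tail calls succeed and the 34th falls through to the next
   * instruction instead. */
  #define MAX_TAIL_CALL_CNT 32    /* value at the time of this commit */

  static bool tail_call_allowed(u32 *tail_call_cnt)
  {
          if (*tail_call_cnt > MAX_TAIL_CALL_CNT)
                  return false;   /* limit reached: skip the tail call */
          (*tail_call_cnt)++;
          return true;            /* ok to jump into the target program */
  }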
2019-07-05  bpf, riscv: Enable zext optimization for more RV64G ALU ops  (Luke Nelson)

Commit 66d0d5a854a6 ("riscv: bpf: eliminate zero extension code-gen")
added the new zero-extension optimization for some BPF ALU operations.

Since then, bugs in the JIT that have been fixed in the bpf tree require
this optimization to be added to other operations: commit 1e692f09e091
("bpf, riscv: clear high 32 bits for ALU32 add/sub/neg/lsh/rsh/arsh"),
and commit fe121ee531d1 ("bpf, riscv: clear target register high 32-bits
for and/or/xor on ALU32").

Now that these have been merged to bpf-next, the zext optimization can
be enabled for the fixed operations.

Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Cc: Song Liu <liu.song.a23@gmail.com>
Cc: Jiong Wang <jiong.wang@netronome.com>
Cc: Xi Wang <xi.wang@gmail.com>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Acked-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-06-17  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)

Honestly all the conflicts were simple overlapping changes, nothing
really interesting to report.

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-07  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf  (David S. Miller)

Daniel Borkmann says:

====================
pull-request: bpf 2019-06-07

The following pull-request contains BPF updates for your *net* tree.

The main changes are:

1) Fix several bugs in riscv64 JIT code emission which forgot to clear
   high 32-bits for alu32 ops, from Björn and Luke with selftests
   covering all relevant BPF alu ops from Björn and Jiong.

2) Two fixes for UDP BPF reuseport that avoid calling the program in
   case of __udp6_lib_err and UDP GRO which broke reuseport_select_sock()
   assumption that skb->data is pointing to transport header, from Martin.

3) Two fixes for BPF sockmap: a use-after-free from sleep in psock's
   backlog workqueue, and a missing restore of sk_write_space when psock
   gets dropped, from Jakub and John.

4) Fix unconnected UDP sendmsg hook API which is insufficient as-is
   since it breaks standard applications like DNS if reverse NAT is
   not performed upon receive, from Daniel.

5) Fix an out-of-bounds read in __bpf_skc_lookup which in case of
   AF_INET6 fails to verify that the length of the tuple is long
   enough, from Lorenz.

6) Fix libbpf's libbpf__probe_raw_btf to return an fd instead of 0/1
   (for {un,}successful probe) as that is expected to be propagated as
   an fd to load_sk_storage_btf() and thus closing the wrong descriptor
   otherwise, from Michal.

7) Fix bpftool's JSON output for the case when a lookup fails, from
   Krzesimir.

8) Minor misc fixes in docs, samples and selftests, from various others.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-05-31  bpf, riscv: clear high 32 bits for ALU32 add/sub/neg/lsh/rsh/arsh  (Luke Nelson)

In BPF, 32-bit ALU operations should zero-extend their results into the
64-bit registers. The current BPF JIT on RISC-V emits incorrect
instructions that perform sign extension only (e.g., addw, subw) on
32-bit add, sub, lsh, rsh, arsh, and neg. This behavior diverges from
the interpreter and JITs for other architectures.

This patch fixes the bugs by performing zero extension on the
destination register of 32-bit ALU operations.

Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G")
Cc: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
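A hedged sketch of the fix pattern: RV64 word instructions such as addw
sign-extend bit 31 into the upper half, so after a 32-bit ALU operation the
JIT must explicitly clear bits 63:32, typically with a shift-left /
shift-right-logical pair by 32. The helpers below are written in the JIT's
style from memory, not quoted verbatim:

  /* Hedged sketch: zero-extend the low 32 bits of rd after an ALU32 op.
   * slli rd, rd, 32 followed by srli rd, rd, 32 clears bits 63:32,
   * undoing the sign extension done by RV64 word instructions. */
  static void emit_zext_32(u8 rd, struct rv_jit_context *ctx)
  {
          emit(rv_slli(rd, rd, 32), ctx);
          emit(rv_srli(rd, rd, 32), ctx);
  }

  /* A 32-bit BPF_ADD then becomes: */
  static void emit_alu32_add(u8 rd, u8 rs, struct rv_jit_context *ctx)
  {
          emit(rv_addw(rd, rd, rs), ctx); /* result is sign-extended by hardware */
          emit_zext_32(rd, ctx);          /* force the zero extension BPF expects */
  }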
2019-05-24  riscv: bpf: eliminate zero extension code-gen  (Jiong Wang)

Cc: Björn Töpel <bjorn.topel@gmail.com>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Tested-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2019-05-23  bpf, riscv: clear target register high 32-bits for and/or/xor on ALU32  (Björn Töpel)

When using 32-bit subregisters (ALU32), the RISC-V JIT would not clear
the high 32-bits of the target register and therefore generate incorrect
code.

E.g., in the following code:

  $ cat test.c
  unsigned int f(unsigned long long a, unsigned int b)
  {
          return (unsigned int)a & b;
  }

  $ clang-9 -target bpf -O2 -emit-llvm -S test.c -o - | \
    llc-9 -mattr=+alu32 -mcpu=v3
          .text
          .file   "test.c"
          .globl  f
          .p2align        3
          .type   f,@function
  f:
          r0 = r1
          w0 &= w2
          exit
  .Lfunc_end0:
          .size   f, .Lfunc_end0-f

The JIT would not clear the high 32-bits of r0 after the and-operation,
which in this case might give an incorrect return value.

After this patch, that is not the case, and the upper 32-bits are
cleared.

Reported-by: Jiong Wang <jiong.wang@netronome.com>
Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G")
Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-05-21  treewide: Add SPDX license identifier - Makefile/Kconfig  (Thomas Gleixner)

Add SPDX license identifiers to all Make/Kconfig files which:

 - Have no license information of any form

These files fall under the project license, GPL v2 only. The resulting
SPDX license identifier is:

  GPL-2.0-only

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-02-05  bpf, riscv: add BPF JIT for RV64G  (Björn Töpel)

This commit adds a BPF JIT for RV64G. The JIT is a two-pass JIT, and
has a dynamic prologue/epilogue (similar to the MIPS64 BPF JIT) instead
of static ones (e.g. x86_64).

At the moment the RISC-V Linux port does not support CONFIG_HAVE_KPROBES,
which means that CONFIG_BPF_EVENTS is not supported. Thus, no tests
involving BPF_PROG_TYPE_TRACEPOINT, BPF_PROG_TYPE_PERF_EVENT,
BPF_PROG_TYPE_KPROBE and BPF_PROG_TYPE_RAW_TRACEPOINT pass.

The implementation does not support "far branching" (>4KiB).

Test results:

  # modprobe test_bpf
  test_bpf: Summary: 378 PASSED, 0 FAILED, [366/366 JIT'ed]

  # echo 1 > /proc/sys/kernel/unprivileged_bpf_disabled
  # ./test_verifier
  ...
  Summary: 761 PASSED, 507 SKIPPED, 2 FAILED

Note that "test_verifier" was run with one build with
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y and one without, otherwise many
of the tests that require unaligned access were skipped.

CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y:

  # echo 1 > /proc/sys/kernel/unprivileged_bpf_disabled
  # ./test_verifier | grep -c 'NOTE.*unknown align'
  0

No CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS:

  # echo 1 > /proc/sys/kernel/unprivileged_bpf_disabled
  # ./test_verifier | grep -c 'NOTE.*unknown align'
  59

The two failing test_verifier tests are:

  "ld_abs: vlan + abs, test 1"
  "ld_abs: jump around ld_abs"

This is due to the "far branching" involved in those tests.

All tests were done on QEMU (QEMU emulator version 3.1.50
(v3.1.0-688-g8ae951fbc106)).

Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>