author:     Linus Torvalds <torvalds@linux-foundation.org>  2018-06-06 18:39:49 -0700
committer:  Linus Torvalds <torvalds@linux-foundation.org>  2018-06-06 18:39:49 -0700
commit:     1c8c5a9d38f607c0b6fd12c91cbe1a4418762a21 (patch)
tree:       dcc97181d4d187252e0cc8fdf29d9b365fa3ffd0 /Documentation
parent:     285767604576148fc1be7fcd112e4a90eb0d6ad2 (diff)
parent:     7170e6045a6a8b33f4fa5753589dc77b16198e2d (diff)
download:   lwn-1c8c5a9d38f607c0b6fd12c91cbe1a4418762a21.tar.gz
            lwn-1c8c5a9d38f607c0b6fd12c91cbe1a4418762a21.zip
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next
Pull networking updates from David Miller:
1) Add Maglev hashing scheduler to IPVS, from Inju Song.
2) Lots of new TC subsystem tests from Roman Mashak.
3) Add TCP zero copy receive and fix delayed acks and autotuning with
SO_RCVLOWAT, from Eric Dumazet.
4) Add XDP_REDIRECT support to mlx5 driver, from Jesper Dangaard
Brouer.
5) Add ttl inherit support to vxlan, from Hangbin Liu.
6) Properly separate ipv6 routes into their logically independent
components: fib6_info for the routing table and fib6_nh for sets of
nexthops, which can thus be shared. From David Ahern.
7) Add bpf_xdp_adjust_tail helper, which can be used to generate ICMP
messages from XDP programs. From Nikita V. Shirokov.
8) Lots of long overdue cleanups to the r8169 driver, from Heiner
Kallweit.
9) Add BTF ("BPF Type Format"), from Martin KaFai Lau.
10) Add traffic condition monitoring to iwlwifi, from Luca Coelho.
11) Plumb extack down into fib_rules, from Roopa Prabhu.
12) Add Flower classifier offload support to igb, from Vinicius Costa
Gomes.
13) Add UDP GSO support, from Willem de Bruijn.
14) Add documentation for eBPF helpers, from Quentin Monnet.
15) Add TLS tx offload to mlx5, from Ilya Lesokhin.
16) Allow applications to be given the number of bytes available to read
on a socket via a control message returned from recvmsg(), from
Soheil Hassas Yeganeh. (A sketch of the userspace side of this
interface follows this list.)
17) Add x86_32 eBPF JIT compiler, from Wang YanQing.
18) Add AF_XDP sockets, with zerocopy support infrastructure as well.
From Björn Töpel.
19) Remove indirect load support from all of the BPF JITs and handle
these operations in the verifier by translating them into native BPF
instead. From Daniel Borkmann.
20) Add GRO support to ipv6 gre tunnels, from Eran Ben Elisha.
21) Allow XDP programs to do lookups in the main kernel routing tables
for forwarding. From David Ahern.
22) Allow drivers to store hardware state into an ELF section of kernel
dump vmcore files, and use it in cxgb4. From Rahul Lakkireddy.
23) Various RACK and loss detection improvements in TCP, from Yuchung
Cheng.
24) Add TCP SACK compression, from Eric Dumazet.
25) Add User Mode Helper support and basic bpfilter infrastructure, from
Alexei Starovoitov.
26) Support ports and protocol values in RTM_GETROUTE, from Roopa
Prabhu.
27) Support bulking in ->ndo_xdp_xmit() API, from Jesper Dangaard
Brouer.
28) Add lots of forwarding selftests, from Petr Machata.
29) Add generic network device failover driver, from Sridhar Samudrala.
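Regarding item 16, the sketch below shows what the userspace side of that
interface might look like: enable a per-socket option, then read the number
of bytes still queued from a control message attached to each recvmsg()
return. The option and cmsg names (TCP_INQ / TCP_CM_INQ) and the fallback
value are assumptions here, not taken from this commit; check <linux/tcp.h>
on a kernel carrying this series before relying on them:

    /* Hedged sketch for item 16 (not from the patch itself): report the
     * in-queue byte count alongside every recvmsg() on a TCP socket.
     * TCP_INQ / TCP_CM_INQ and the fallback value 36 are assumptions;
     * verify against <linux/tcp.h> on your kernel.
     */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    #ifndef TCP_INQ
    #define TCP_INQ		36	/* assumed value */
    #define TCP_CM_INQ	TCP_INQ
    #endif

    static ssize_t recv_with_inq(int fd, void *buf, size_t len, int *inq)
    {
        char cbuf[CMSG_SPACE(sizeof(int))];
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = cbuf,
            .msg_controllen = sizeof(cbuf),
        };
        ssize_t ret = recvmsg(fd, &msg, 0);
        struct cmsghdr *cm;

        *inq = -1;
        for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm))
            if (cm->cmsg_level == IPPROTO_TCP && cm->cmsg_type == TCP_CM_INQ)
                memcpy(inq, CMSG_DATA(cm), sizeof(int));
        return ret;
    }

    /* Usage: enable the option once, then call recv_with_inq() instead
     * of recv():
     *
     *     int one = 1, inq;
     *     setsockopt(fd, IPPROTO_TCP, TCP_INQ, &one, sizeof(one));
     *     n = recv_with_inq(fd, buf, sizeof(buf), &inq);
     */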
* ra.kernel.org:/pub/scm/linux/kernel/git/davem/net-next: (1959 commits)
strparser: Add __strp_unpause and use it in ktls.
rxrpc: Fix terminal retransmission connection ID to include the channel
net: hns3: Optimize PF CMDQ interrupt switching process
net: hns3: Fix for VF mailbox receiving unknown message
net: hns3: Fix for VF mailbox cannot receiving PF response
bnx2x: use the right constant
Revert "net: sched: cls: Fix offloading when ingress dev is vxlan"
net: dsa: b53: Fix for brcm tag issue in Cygnus SoC
enic: fix UDP rss bits
netdev-FAQ: clarify DaveM's position for stable backports
rtnetlink: validate attributes in do_setlink()
mlxsw: Add extack messages for port_{un, }split failures
netdevsim: Add extack error message for devlink reload
devlink: Add extack to reload and port_{un, }split operations
net: metrics: add proper netlink validation
ipmr: fix error path when ipmr_new_table fails
ip6mr: only set ip6mr_table from setsockopt when ip6mr_new_table succeeds
net: hns3: remove unused hclgevf_cfg_func_mta_filter
netfilter: provide udp*_lib_lookup for nf_tproxy
qed*: Utilize FW 8.37.2.0
...
Diffstat (limited to 'Documentation')
43 files changed, 1913 insertions, 936 deletions
diff --git a/Documentation/bpf/README.rst b/Documentation/bpf/README.rst new file mode 100644 index 000000000000..b9a80c9e9392 --- /dev/null +++ b/Documentation/bpf/README.rst @@ -0,0 +1,36 @@ +================= +BPF documentation +================= + +This directory contains documentation for the BPF (Berkeley Packet +Filter) facility, with a focus on the extended BPF version (eBPF). + +This kernel side documentation is still work in progress. The main +textual documentation is (for historical reasons) described in +`Documentation/networking/filter.txt`_, which describe both classical +and extended BPF instruction-set. +The Cilium project also maintains a `BPF and XDP Reference Guide`_ +that goes into great technical depth about the BPF Architecture. + +The primary info for the bpf syscall is available in the `man-pages`_ +for `bpf(2)`_. + + + +Frequently asked questions (FAQ) +================================ + +Two sets of Questions and Answers (Q&A) are maintained. + +* QA for common questions about BPF see: bpf_design_QA_ + +* QA for developers interacting with BPF subsystem: bpf_devel_QA_ + + +.. Links: +.. _bpf_design_QA: bpf_design_QA.rst +.. _bpf_devel_QA: bpf_devel_QA.rst +.. _Documentation/networking/filter.txt: ../networking/filter.txt +.. _man-pages: https://www.kernel.org/doc/man-pages/ +.. _bpf(2): http://man7.org/linux/man-pages/man2/bpf.2.html +.. _BPF and XDP Reference Guide: http://cilium.readthedocs.io/en/latest/bpf/ diff --git a/Documentation/bpf/bpf_design_QA.rst b/Documentation/bpf/bpf_design_QA.rst new file mode 100644 index 000000000000..6780a6d81745 --- /dev/null +++ b/Documentation/bpf/bpf_design_QA.rst @@ -0,0 +1,221 @@ +============== +BPF Design Q&A +============== + +BPF extensibility and applicability to networking, tracing, security +in the linux kernel and several user space implementations of BPF +virtual machine led to a number of misunderstanding on what BPF actually is. +This short QA is an attempt to address that and outline a direction +of where BPF is heading long term. + +.. contents:: + :local: + :depth: 3 + +Questions and Answers +===================== + +Q: Is BPF a generic instruction set similar to x64 and arm64? +------------------------------------------------------------- +A: NO. + +Q: Is BPF a generic virtual machine ? +------------------------------------- +A: NO. + +BPF is generic instruction set *with* C calling convention. +----------------------------------------------------------- + +Q: Why C calling convention was chosen? +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A: Because BPF programs are designed to run in the linux kernel +which is written in C, hence BPF defines instruction set compatible +with two most used architectures x64 and arm64 (and takes into +consideration important quirks of other architectures) and +defines calling convention that is compatible with C calling +convention of the linux kernel on those architectures. + +Q: can multiple return values be supported in the future? +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +A: NO. BPF allows only register R0 to be used as return value. + +Q: can more than 5 function arguments be supported in the future? +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +A: NO. BPF calling convention only allows registers R1-R5 to be used +as arguments. BPF is not a standalone instruction set. +(unlike x64 ISA that allows msft, cdecl and other conventions) + +Q: can BPF programs access instruction pointer or return address? 
+----------------------------------------------------------------- +A: NO. + +Q: can BPF programs access stack pointer ? +------------------------------------------ +A: NO. + +Only frame pointer (register R10) is accessible. +From compiler point of view it's necessary to have stack pointer. +For example LLVM defines register R11 as stack pointer in its +BPF backend, but it makes sure that generated code never uses it. + +Q: Does C-calling convention diminishes possible use cases? +----------------------------------------------------------- +A: YES. + +BPF design forces addition of major functionality in the form +of kernel helper functions and kernel objects like BPF maps with +seamless interoperability between them. It lets kernel call into +BPF programs and programs call kernel helpers with zero overhead. +As all of them were native C code. That is particularly the case +for JITed BPF programs that are indistinguishable from +native kernel C code. + +Q: Does it mean that 'innovative' extensions to BPF code are disallowed? +------------------------------------------------------------------------ +A: Soft yes. + +At least for now until BPF core has support for +bpf-to-bpf calls, indirect calls, loops, global variables, +jump tables, read only sections and all other normal constructs +that C code can produce. + +Q: Can loops be supported in a safe way? +---------------------------------------- +A: It's not clear yet. + +BPF developers are trying to find a way to +support bounded loops where the verifier can guarantee that +the program terminates in less than 4096 instructions. + +Instruction level questions +--------------------------- + +Q: LD_ABS and LD_IND instructions vs C code +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Q: How come LD_ABS and LD_IND instruction are present in BPF whereas +C code cannot express them and has to use builtin intrinsics? + +A: This is artifact of compatibility with classic BPF. Modern +networking code in BPF performs better without them. +See 'direct packet access'. + +Q: BPF instructions mapping not one-to-one to native CPU +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Q: It seems not all BPF instructions are one-to-one to native CPU. +For example why BPF_JNE and other compare and jumps are not cpu-like? + +A: This was necessary to avoid introducing flags into ISA which are +impossible to make generic and efficient across CPU architectures. + +Q: why BPF_DIV instruction doesn't map to x64 div? +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +A: Because if we picked one-to-one relationship to x64 it would have made +it more complicated to support on arm64 and other archs. Also it +needs div-by-zero runtime check. + +Q: why there is no BPF_SDIV for signed divide operation? +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +A: Because it would be rarely used. llvm errors in such case and +prints a suggestion to use unsigned divide instead + +Q: Why BPF has implicit prologue and epilogue? +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +A: Because architectures like sparc have register windows and in general +there are enough subtle differences between architectures, so naive +store return address into stack won't work. Another reason is BPF has +to be safe from division by zero (and legacy exception path +of LD_ABS insn). Those instructions need to invoke epilogue and +return implicitly. + +Q: Why BPF_JLT and BPF_JLE instructions were not introduced in the beginning? 
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +A: Because classic BPF didn't have them and BPF authors felt that compiler +workaround would be acceptable. Turned out that programs lose performance +due to lack of these compare instructions and they were added. +These two instructions is a perfect example what kind of new BPF +instructions are acceptable and can be added in the future. +These two already had equivalent instructions in native CPUs. +New instructions that don't have one-to-one mapping to HW instructions +will not be accepted. + +Q: BPF 32-bit subregister requirements +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Q: BPF 32-bit subregisters have a requirement to zero upper 32-bits of BPF +registers which makes BPF inefficient virtual machine for 32-bit +CPU architectures and 32-bit HW accelerators. Can true 32-bit registers +be added to BPF in the future? + +A: NO. The first thing to improve performance on 32-bit archs is to teach +LLVM to generate code that uses 32-bit subregisters. Then second step +is to teach verifier to mark operations where zero-ing upper bits +is unnecessary. Then JITs can take advantage of those markings and +drastically reduce size of generated code and improve performance. + +Q: Does BPF have a stable ABI? +------------------------------ +A: YES. BPF instructions, arguments to BPF programs, set of helper +functions and their arguments, recognized return codes are all part +of ABI. However when tracing programs are using bpf_probe_read() helper +to walk kernel internal datastructures and compile with kernel +internal headers these accesses can and will break with newer +kernels. The union bpf_attr -> kern_version is checked at load time +to prevent accidentally loading kprobe-based bpf programs written +for a different kernel. Networking programs don't do kern_version check. + +Q: How much stack space a BPF program uses? +------------------------------------------- +A: Currently all program types are limited to 512 bytes of stack +space, but the verifier computes the actual amount of stack used +and both interpreter and most JITed code consume necessary amount. + +Q: Can BPF be offloaded to HW? +------------------------------ +A: YES. BPF HW offload is supported by NFP driver. + +Q: Does classic BPF interpreter still exist? +-------------------------------------------- +A: NO. Classic BPF programs are converted into extend BPF instructions. + +Q: Can BPF call arbitrary kernel functions? +------------------------------------------- +A: NO. BPF programs can only call a set of helper functions which +is defined for every program type. + +Q: Can BPF overwrite arbitrary kernel memory? +--------------------------------------------- +A: NO. + +Tracing bpf programs can *read* arbitrary memory with bpf_probe_read() +and bpf_probe_read_str() helpers. Networking programs cannot read +arbitrary memory, since they don't have access to these helpers. +Programs can never read or write arbitrary memory directly. + +Q: Can BPF overwrite arbitrary user memory? +------------------------------------------- +A: Sort-of. + +Tracing BPF programs can overwrite the user memory +of the current task with bpf_probe_write_user(). Every time such +program is loaded the kernel will print warning message, so +this helper is only useful for experiments and prototypes. +Tracing BPF programs are root only. 
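The answer above mentions bpf_probe_read() as the way tracing programs read
kernel or user memory. A minimal restricted-C kprobe program along those
lines is sketched below; it is an illustrative aside rather than part of
this patch, and it assumes the bpf_helpers.h shipped in the kernel tree
(samples/bpf or the BPF selftests) for SEC(), PT_REGS_PARM2() and the helper
declarations. Build with clang -O2 -target bpf; the loader and attach step
are not shown:

    /* Illustrative sketch, not part of the patch: a kprobe program that
     * reads the filename argument of do_sys_open() with bpf_probe_read()
     * and prints it. Assumes bpf_helpers.h from the kernel tree.
     */
    #include <linux/bpf.h>
    #include <linux/ptrace.h>
    #include "bpf_helpers.h"

    SEC("kprobe/do_sys_open")
    int trace_open(struct pt_regs *ctx)
    {
            char fname[64] = {};
            char fmt[] = "do_sys_open: %s\n";
            const char *name = (const char *)PT_REGS_PARM2(ctx);

            /* tracing programs may *read* memory through this helper */
            bpf_probe_read(fname, sizeof(fname), name);

            /* debugging only -- see the bpf_trace_printk() Q&A below */
            bpf_trace_printk(fmt, sizeof(fmt), fname);
            return 0;
    }

    char _license[] SEC("license") = "GPL";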
+ +Q: bpf_trace_printk() helper warning +------------------------------------ +Q: When bpf_trace_printk() helper is used the kernel prints nasty +warning message. Why is that? + +A: This is done to nudge program authors into better interfaces when +programs need to pass data to user space. Like bpf_perf_event_output() +can be used to efficiently stream data via perf ring buffer. +BPF maps can be used for asynchronous data sharing between kernel +and user space. bpf_trace_printk() should only be used for debugging. + +Q: New functionality via kernel modules? +---------------------------------------- +Q: Can BPF functionality such as new program or map types, new +helpers, etc be added out of kernel module code? + +A: NO. diff --git a/Documentation/bpf/bpf_design_QA.txt b/Documentation/bpf/bpf_design_QA.txt deleted file mode 100644 index f3e458a0bb2f..000000000000 --- a/Documentation/bpf/bpf_design_QA.txt +++ /dev/null @@ -1,156 +0,0 @@ -BPF extensibility and applicability to networking, tracing, security -in the linux kernel and several user space implementations of BPF -virtual machine led to a number of misunderstanding on what BPF actually is. -This short QA is an attempt to address that and outline a direction -of where BPF is heading long term. - -Q: Is BPF a generic instruction set similar to x64 and arm64? -A: NO. - -Q: Is BPF a generic virtual machine ? -A: NO. - -BPF is generic instruction set _with_ C calling convention. - -Q: Why C calling convention was chosen? -A: Because BPF programs are designed to run in the linux kernel - which is written in C, hence BPF defines instruction set compatible - with two most used architectures x64 and arm64 (and takes into - consideration important quirks of other architectures) and - defines calling convention that is compatible with C calling - convention of the linux kernel on those architectures. - -Q: can multiple return values be supported in the future? -A: NO. BPF allows only register R0 to be used as return value. - -Q: can more than 5 function arguments be supported in the future? -A: NO. BPF calling convention only allows registers R1-R5 to be used - as arguments. BPF is not a standalone instruction set. - (unlike x64 ISA that allows msft, cdecl and other conventions) - -Q: can BPF programs access instruction pointer or return address? -A: NO. - -Q: can BPF programs access stack pointer ? -A: NO. Only frame pointer (register R10) is accessible. - From compiler point of view it's necessary to have stack pointer. - For example LLVM defines register R11 as stack pointer in its - BPF backend, but it makes sure that generated code never uses it. - -Q: Does C-calling convention diminishes possible use cases? -A: YES. BPF design forces addition of major functionality in the form - of kernel helper functions and kernel objects like BPF maps with - seamless interoperability between them. It lets kernel call into - BPF programs and programs call kernel helpers with zero overhead. - As all of them were native C code. That is particularly the case - for JITed BPF programs that are indistinguishable from - native kernel C code. - -Q: Does it mean that 'innovative' extensions to BPF code are disallowed? -A: Soft yes. At least for now until BPF core has support for - bpf-to-bpf calls, indirect calls, loops, global variables, - jump tables, read only sections and all other normal constructs - that C code can produce. - -Q: Can loops be supported in a safe way? -A: It's not clear yet. 
BPF developers are trying to find a way to - support bounded loops where the verifier can guarantee that - the program terminates in less than 4096 instructions. - -Q: How come LD_ABS and LD_IND instruction are present in BPF whereas - C code cannot express them and has to use builtin intrinsics? -A: This is artifact of compatibility with classic BPF. Modern - networking code in BPF performs better without them. - See 'direct packet access'. - -Q: It seems not all BPF instructions are one-to-one to native CPU. - For example why BPF_JNE and other compare and jumps are not cpu-like? -A: This was necessary to avoid introducing flags into ISA which are - impossible to make generic and efficient across CPU architectures. - -Q: why BPF_DIV instruction doesn't map to x64 div? -A: Because if we picked one-to-one relationship to x64 it would have made - it more complicated to support on arm64 and other archs. Also it - needs div-by-zero runtime check. - -Q: why there is no BPF_SDIV for signed divide operation? -A: Because it would be rarely used. llvm errors in such case and - prints a suggestion to use unsigned divide instead - -Q: Why BPF has implicit prologue and epilogue? -A: Because architectures like sparc have register windows and in general - there are enough subtle differences between architectures, so naive - store return address into stack won't work. Another reason is BPF has - to be safe from division by zero (and legacy exception path - of LD_ABS insn). Those instructions need to invoke epilogue and - return implicitly. - -Q: Why BPF_JLT and BPF_JLE instructions were not introduced in the beginning? -A: Because classic BPF didn't have them and BPF authors felt that compiler - workaround would be acceptable. Turned out that programs lose performance - due to lack of these compare instructions and they were added. - These two instructions is a perfect example what kind of new BPF - instructions are acceptable and can be added in the future. - These two already had equivalent instructions in native CPUs. - New instructions that don't have one-to-one mapping to HW instructions - will not be accepted. - -Q: BPF 32-bit subregisters have a requirement to zero upper 32-bits of BPF - registers which makes BPF inefficient virtual machine for 32-bit - CPU architectures and 32-bit HW accelerators. Can true 32-bit registers - be added to BPF in the future? -A: NO. The first thing to improve performance on 32-bit archs is to teach - LLVM to generate code that uses 32-bit subregisters. Then second step - is to teach verifier to mark operations where zero-ing upper bits - is unnecessary. Then JITs can take advantage of those markings and - drastically reduce size of generated code and improve performance. - -Q: Does BPF have a stable ABI? -A: YES. BPF instructions, arguments to BPF programs, set of helper - functions and their arguments, recognized return codes are all part - of ABI. However when tracing programs are using bpf_probe_read() helper - to walk kernel internal datastructures and compile with kernel - internal headers these accesses can and will break with newer - kernels. The union bpf_attr -> kern_version is checked at load time - to prevent accidentally loading kprobe-based bpf programs written - for a different kernel. Networking programs don't do kern_version check. - -Q: How much stack space a BPF program uses? 
-A: Currently all program types are limited to 512 bytes of stack - space, but the verifier computes the actual amount of stack used - and both interpreter and most JITed code consume necessary amount. - -Q: Can BPF be offloaded to HW? -A: YES. BPF HW offload is supported by NFP driver. - -Q: Does classic BPF interpreter still exist? -A: NO. Classic BPF programs are converted into extend BPF instructions. - -Q: Can BPF call arbitrary kernel functions? -A: NO. BPF programs can only call a set of helper functions which - is defined for every program type. - -Q: Can BPF overwrite arbitrary kernel memory? -A: NO. Tracing bpf programs can _read_ arbitrary memory with bpf_probe_read() - and bpf_probe_read_str() helpers. Networking programs cannot read - arbitrary memory, since they don't have access to these helpers. - Programs can never read or write arbitrary memory directly. - -Q: Can BPF overwrite arbitrary user memory? -A: Sort-of. Tracing BPF programs can overwrite the user memory - of the current task with bpf_probe_write_user(). Every time such - program is loaded the kernel will print warning message, so - this helper is only useful for experiments and prototypes. - Tracing BPF programs are root only. - -Q: When bpf_trace_printk() helper is used the kernel prints nasty - warning message. Why is that? -A: This is done to nudge program authors into better interfaces when - programs need to pass data to user space. Like bpf_perf_event_output() - can be used to efficiently stream data via perf ring buffer. - BPF maps can be used for asynchronous data sharing between kernel - and user space. bpf_trace_printk() should only be used for debugging. - -Q: Can BPF functionality such as new program or map types, new - helpers, etc be added out of kernel module code? -A: NO. diff --git a/Documentation/bpf/bpf_devel_QA.rst b/Documentation/bpf/bpf_devel_QA.rst new file mode 100644 index 000000000000..0e7c1d946e83 --- /dev/null +++ b/Documentation/bpf/bpf_devel_QA.rst @@ -0,0 +1,640 @@ +================================= +HOWTO interact with BPF subsystem +================================= + +This document provides information for the BPF subsystem about various +workflows related to reporting bugs, submitting patches, and queueing +patches for stable kernels. + +For general information about submitting patches, please refer to +`Documentation/process/`_. This document only describes additional specifics +related to BPF. + +.. contents:: + :local: + :depth: 2 + +Reporting bugs +============== + +Q: How do I report bugs for BPF kernel code? +-------------------------------------------- +A: Since all BPF kernel development as well as bpftool and iproute2 BPF +loader development happens through the netdev kernel mailing list, +please report any found issues around BPF to the following mailing +list: + + netdev@vger.kernel.org + +This may also include issues related to XDP, BPF tracing, etc. + +Given netdev has a high volume of traffic, please also add the BPF +maintainers to Cc (from kernel MAINTAINERS_ file): + +* Alexei Starovoitov <ast@kernel.org> +* Daniel Borkmann <daniel@iogearbox.net> + +In case a buggy commit has already been identified, make sure to keep +the actual commit authors in Cc as well for the report. They can +typically be identified through the kernel's git tree. 
+ +**Please do NOT report BPF issues to bugzilla.kernel.org since it +is a guarantee that the reported issue will be overlooked.** + +Submitting patches +================== + +Q: To which mailing list do I need to submit my BPF patches? +------------------------------------------------------------ +A: Please submit your BPF patches to the netdev kernel mailing list: + + netdev@vger.kernel.org + +Historically, BPF came out of networking and has always been maintained +by the kernel networking community. Although these days BPF touches +many other subsystems as well, the patches are still routed mainly +through the networking community. + +In case your patch has changes in various different subsystems (e.g. +tracing, security, etc), make sure to Cc the related kernel mailing +lists and maintainers from there as well, so they are able to review +the changes and provide their Acked-by's to the patches. + +Q: Where can I find patches currently under discussion for BPF subsystem? +------------------------------------------------------------------------- +A: All patches that are Cc'ed to netdev are queued for review under netdev +patchwork project: + + http://patchwork.ozlabs.org/project/netdev/list/ + +Those patches which target BPF, are assigned to a 'bpf' delegate for +further processing from BPF maintainers. The current queue with +patches under review can be found at: + + https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147 + +Once the patches have been reviewed by the BPF community as a whole +and approved by the BPF maintainers, their status in patchwork will be +changed to 'Accepted' and the submitter will be notified by mail. This +means that the patches look good from a BPF perspective and have been +applied to one of the two BPF kernel trees. + +In case feedback from the community requires a respin of the patches, +their status in patchwork will be set to 'Changes Requested', and purged +from the current review queue. Likewise for cases where patches would +get rejected or are not applicable to the BPF trees (but assigned to +the 'bpf' delegate). + +Q: How do the changes make their way into Linux? +------------------------------------------------ +A: There are two BPF kernel trees (git repositories). Once patches have +been accepted by the BPF maintainers, they will be applied to one +of the two BPF trees: + + * https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git/ + * https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/ + +The bpf tree itself is for fixes only, whereas bpf-next for features, +cleanups or other kind of improvements ("next-like" content). This is +analogous to net and net-next trees for networking. Both bpf and +bpf-next will only have a master branch in order to simplify against +which branch patches should get rebased to. + +Accumulated BPF patches in the bpf tree will regularly get pulled +into the net kernel tree. Likewise, accumulated BPF patches accepted +into the bpf-next tree will make their way into net-next tree. net and +net-next are both run by David S. Miller. From there, they will go +into the kernel mainline tree run by Linus Torvalds. To read up on the +process of net and net-next being merged into the mainline tree, see +the `netdev FAQ`_ under: + + `Documentation/networking/netdev-FAQ.txt`_ + +Occasionally, to prevent merge conflicts, we might send pull requests +to other trees (e.g. tracing) with a small subset of the patches, but +net and net-next are always the main trees targeted for integration. 
+ +The pull requests will contain a high-level summary of the accumulated +patches and can be searched on netdev kernel mailing list through the +following subject lines (``yyyy-mm-dd`` is the date of the pull +request):: + + pull-request: bpf yyyy-mm-dd + pull-request: bpf-next yyyy-mm-dd + +Q: How do I indicate which tree (bpf vs. bpf-next) my patch should be applied to? +--------------------------------------------------------------------------------- + +A: The process is the very same as described in the `netdev FAQ`_, so +please read up on it. The subject line must indicate whether the +patch is a fix or rather "next-like" content in order to let the +maintainers know whether it is targeted at bpf or bpf-next. + +For fixes eventually landing in bpf -> net tree, the subject must +look like:: + + git format-patch --subject-prefix='PATCH bpf' start..finish + +For features/improvements/etc that should eventually land in +bpf-next -> net-next, the subject must look like:: + + git format-patch --subject-prefix='PATCH bpf-next' start..finish + +If unsure whether the patch or patch series should go into bpf +or net directly, or bpf-next or net-next directly, it is not a +problem either if the subject line says net or net-next as target. +It is eventually up to the maintainers to do the delegation of +the patches. + +If it is clear that patches should go into bpf or bpf-next tree, +please make sure to rebase the patches against those trees in +order to reduce potential conflicts. + +In case the patch or patch series has to be reworked and sent out +again in a second or later revision, it is also required to add a +version number (``v2``, ``v3``, ...) into the subject prefix:: + + git format-patch --subject-prefix='PATCH net-next v2' start..finish + +When changes have been requested to the patch series, always send the +whole patch series again with the feedback incorporated (never send +individual diffs on top of the old series). + +Q: What does it mean when a patch gets applied to bpf or bpf-next tree? +----------------------------------------------------------------------- +A: It means that the patch looks good for mainline inclusion from +a BPF point of view. + +Be aware that this is not a final verdict that the patch will +automatically get accepted into net or net-next trees eventually: + +On the netdev kernel mailing list reviews can come in at any point +in time. If discussions around a patch conclude that they cannot +get included as-is, we will either apply a follow-up fix or drop +them from the trees entirely. Therefore, we also reserve to rebase +the trees when deemed necessary. After all, the purpose of the tree +is to: + +i) accumulate and stage BPF patches for integration into trees + like net and net-next, and + +ii) run extensive BPF test suite and + workloads on the patches before they make their way any further. + +Once the BPF pull request was accepted by David S. Miller, then +the patches end up in net or net-next tree, respectively, and +make their way from there further into mainline. Again, see the +`netdev FAQ`_ for additional information e.g. on how often they are +merged to mainline. + +Q: How long do I need to wait for feedback on my BPF patches? +------------------------------------------------------------- +A: We try to keep the latency low. The usual time to feedback will +be around 2 or 3 business days. It may vary depending on the +complexity of changes and current patch load. + +Q: How often do you send pull requests to major kernel trees like net or net-next? 
+---------------------------------------------------------------------------------- + +A: Pull requests will be sent out rather often in order to not +accumulate too many patches in bpf or bpf-next. + +As a rule of thumb, expect pull requests for each tree regularly +at the end of the week. In some cases pull requests could additionally +come also in the middle of the week depending on the current patch +load or urgency. + +Q: Are patches applied to bpf-next when the merge window is open? +----------------------------------------------------------------- +A: For the time when the merge window is open, bpf-next will not be +processed. This is roughly analogous to net-next patch processing, +so feel free to read up on the `netdev FAQ`_ about further details. + +During those two weeks of merge window, we might ask you to resend +your patch series once bpf-next is open again. Once Linus released +a ``v*-rc1`` after the merge window, we continue processing of bpf-next. + +For non-subscribers to kernel mailing lists, there is also a status +page run by David S. Miller on net-next that provides guidance: + + http://vger.kernel.org/~davem/net-next.html + +Q: Verifier changes and test cases +---------------------------------- +Q: I made a BPF verifier change, do I need to add test cases for +BPF kernel selftests_? + +A: If the patch has changes to the behavior of the verifier, then yes, +it is absolutely necessary to add test cases to the BPF kernel +selftests_ suite. If they are not present and we think they are +needed, then we might ask for them before accepting any changes. + +In particular, test_verifier.c is tracking a high number of BPF test +cases, including a lot of corner cases that LLVM BPF back end may +generate out of the restricted C code. Thus, adding test cases is +absolutely crucial to make sure future changes do not accidentally +affect prior use-cases. Thus, treat those test cases as: verifier +behavior that is not tracked in test_verifier.c could potentially +be subject to change. + +Q: samples/bpf preference vs selftests? +--------------------------------------- +Q: When should I add code to `samples/bpf/`_ and when to BPF kernel +selftests_ ? + +A: In general, we prefer additions to BPF kernel selftests_ rather than +`samples/bpf/`_. The rationale is very simple: kernel selftests are +regularly run by various bots to test for kernel regressions. + +The more test cases we add to BPF selftests, the better the coverage +and the less likely it is that those could accidentally break. It is +not that BPF kernel selftests cannot demo how a specific feature can +be used. + +That said, `samples/bpf/`_ may be a good place for people to get started, +so it might be advisable that simple demos of features could go into +`samples/bpf/`_, but advanced functional and corner-case testing rather +into kernel selftests. + +If your sample looks like a test case, then go for BPF kernel selftests +instead! + +Q: When should I add code to the bpftool? +----------------------------------------- +A: The main purpose of bpftool (under tools/bpf/bpftool/) is to provide +a central user space tool for debugging and introspection of BPF programs +and maps that are active in the kernel. If UAPI changes related to BPF +enable for dumping additional information of programs or maps, then +bpftool should be extended as well to support dumping them. + +Q: When should I add code to iproute2's BPF loader? 
+--------------------------------------------------- +A: For UAPI changes related to the XDP or tc layer (e.g. ``cls_bpf``), +the convention is that those control-path related changes are added to +iproute2's BPF loader as well from user space side. This is not only +useful to have UAPI changes properly designed to be usable, but also +to make those changes available to a wider user base of major +downstream distributions. + +Q: Do you accept patches as well for iproute2's BPF loader? +----------------------------------------------------------- +A: Patches for the iproute2's BPF loader have to be sent to: + + netdev@vger.kernel.org + +While those patches are not processed by the BPF kernel maintainers, +please keep them in Cc as well, so they can be reviewed. + +The official git repository for iproute2 is run by Stephen Hemminger +and can be found at: + + https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/ + +The patches need to have a subject prefix of '``[PATCH iproute2 +master]``' or '``[PATCH iproute2 net-next]``'. '``master``' or +'``net-next``' describes the target branch where the patch should be +applied to. Meaning, if kernel changes went into the net-next kernel +tree, then the related iproute2 changes need to go into the iproute2 +net-next branch, otherwise they can be targeted at master branch. The +iproute2 net-next branch will get merged into the master branch after +the current iproute2 version from master has been released. + +Like BPF, the patches end up in patchwork under the netdev project and +are delegated to 'shemminger' for further processing: + + http://patchwork.ozlabs.org/project/netdev/list/?delegate=389 + +Q: What is the minimum requirement before I submit my BPF patches? +------------------------------------------------------------------ +A: When submitting patches, always take the time and properly test your +patches *prior* to submission. Never rush them! If maintainers find +that your patches have not been properly tested, it is a good way to +get them grumpy. Testing patch submissions is a hard requirement! + +Note, fixes that go to bpf tree *must* have a ``Fixes:`` tag included. +The same applies to fixes that target bpf-next, where the affected +commit is in net-next (or in some cases bpf-next). The ``Fixes:`` tag is +crucial in order to identify follow-up commits and tremendously helps +for people having to do backporting, so it is a must have! + +We also don't accept patches with an empty commit message. Take your +time and properly write up a high quality commit message, it is +essential! + +Think about it this way: other developers looking at your code a month +from now need to understand *why* a certain change has been done that +way, and whether there have been flaws in the analysis or assumptions +that the original author did. Thus providing a proper rationale and +describing the use-case for the changes is a must. + +Patch submissions with >1 patch must have a cover letter which includes +a high level description of the series. This high level summary will +then be placed into the merge commit by the BPF maintainers such that +it is also accessible from the git log for future reference. + +Q: Features changing BPF JIT and/or LLVM +---------------------------------------- +Q: What do I need to consider when adding a new instruction or feature +that would require BPF JIT and/or LLVM integration as well? 
+ +A: We try hard to keep all BPF JITs up to date such that the same user +experience can be guaranteed when running BPF programs on different +architectures without having the program punt to the less efficient +interpreter in case the in-kernel BPF JIT is enabled. + +If you are unable to implement or test the required JIT changes for +certain architectures, please work together with the related BPF JIT +developers in order to get the feature implemented in a timely manner. +Please refer to the git log (``arch/*/net/``) to locate the necessary +people for helping out. + +Also always make sure to add BPF test cases (e.g. test_bpf.c and +test_verifier.c) for new instructions, so that they can receive +broad test coverage and help run-time testing the various BPF JITs. + +In case of new BPF instructions, once the changes have been accepted +into the Linux kernel, please implement support into LLVM's BPF back +end. See LLVM_ section below for further information. + +Stable submission +================= + +Q: I need a specific BPF commit in stable kernels. What should I do? +-------------------------------------------------------------------- +A: In case you need a specific fix in stable kernels, first check whether +the commit has already been applied in the related ``linux-*.y`` branches: + + https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/ + +If not the case, then drop an email to the BPF maintainers with the +netdev kernel mailing list in Cc and ask for the fix to be queued up: + + netdev@vger.kernel.org + +The process in general is the same as on netdev itself, see also the +`netdev FAQ`_ document. + +Q: Do you also backport to kernels not currently maintained as stable? +---------------------------------------------------------------------- +A: No. If you need a specific BPF commit in kernels that are currently not +maintained by the stable maintainers, then you are on your own. + +The current stable and longterm stable kernels are all listed here: + + https://www.kernel.org/ + +Q: The BPF patch I am about to submit needs to go to stable as well +------------------------------------------------------------------- +What should I do? + +A: The same rules apply as with netdev patch submissions in general, see +`netdev FAQ`_ under: + + `Documentation/networking/netdev-FAQ.txt`_ + +Never add "``Cc: stable@vger.kernel.org``" to the patch description, but +ask the BPF maintainers to queue the patches instead. This can be done +with a note, for example, under the ``---`` part of the patch which does +not go into the git log. Alternatively, this can be done as a simple +request by mail instead. + +Q: Queue stable patches +----------------------- +Q: Where do I find currently queued BPF patches that will be submitted +to stable? + +A: Once patches that fix critical bugs got applied into the bpf tree, they +are queued up for stable submission under: + + http://patchwork.ozlabs.org/bundle/bpf/stable/?state=* + +They will be on hold there at minimum until the related commit made its +way into the mainline kernel tree. + +After having been under broader exposure, the queued patches will be +submitted by the BPF maintainers to the stable maintainers. 
+ +Testing patches +=============== + +Q: How to run BPF selftests +--------------------------- +A: After you have booted into the newly compiled kernel, navigate to +the BPF selftests_ suite in order to test BPF functionality (current +working directory points to the root of the cloned git tree):: + + $ cd tools/testing/selftests/bpf/ + $ make + +To run the verifier tests:: + + $ sudo ./test_verifier + +The verifier tests print out all the current checks being +performed. The summary at the end of running all tests will dump +information of test successes and failures:: + + Summary: 418 PASSED, 0 FAILED + +In order to run through all BPF selftests, the following command is +needed:: + + $ sudo make run_tests + +See the kernels selftest `Documentation/dev-tools/kselftest.rst`_ +document for further documentation. + +Q: Which BPF kernel selftests version should I run my kernel against? +--------------------------------------------------------------------- +A: If you run a kernel ``xyz``, then always run the BPF kernel selftests +from that kernel ``xyz`` as well. Do not expect that the BPF selftest +from the latest mainline tree will pass all the time. + +In particular, test_bpf.c and test_verifier.c have a large number of +test cases and are constantly updated with new BPF test sequences, or +existing ones are adapted to verifier changes e.g. due to verifier +becoming smarter and being able to better track certain things. + +LLVM +==== + +Q: Where do I find LLVM with BPF support? +----------------------------------------- +A: The BPF back end for LLVM is upstream in LLVM since version 3.7.1. + +All major distributions these days ship LLVM with BPF back end enabled, +so for the majority of use-cases it is not required to compile LLVM by +hand anymore, just install the distribution provided package. + +LLVM's static compiler lists the supported targets through +``llc --version``, make sure BPF targets are listed. Example:: + + $ llc --version + LLVM (http://llvm.org/): + LLVM version 6.0.0svn + Optimized build. + Default target: x86_64-unknown-linux-gnu + Host CPU: skylake + + Registered Targets: + bpf - BPF (host endian) + bpfeb - BPF (big endian) + bpfel - BPF (little endian) + x86 - 32-bit X86: Pentium-Pro and above + x86-64 - 64-bit X86: EM64T and AMD64 + +For developers in order to utilize the latest features added to LLVM's +BPF back end, it is advisable to run the latest LLVM releases. Support +for new BPF kernel features such as additions to the BPF instruction +set are often developed together. + +All LLVM releases can be found at: http://releases.llvm.org/ + +Q: Got it, so how do I build LLVM manually anyway? +-------------------------------------------------- +A: You need cmake and gcc-c++ as build requisites for LLVM. Once you have +that set up, proceed with building the latest LLVM and clang version +from the git repositories:: + + $ git clone http://llvm.org/git/llvm.git + $ cd llvm/tools + $ git clone --depth 1 http://llvm.org/git/clang.git + $ cd ..; mkdir build; cd build + $ cmake .. -DLLVM_TARGETS_TO_BUILD="BPF;X86" \ + -DBUILD_SHARED_LIBS=OFF \ + -DCMAKE_BUILD_TYPE=Release \ + -DLLVM_BUILD_RUNTIME=OFF + $ make -j $(getconf _NPROCESSORS_ONLN) + +The built binaries can then be found in the build/bin/ directory, where +you can point the PATH variable to. + +Q: Reporting LLVM BPF issues +---------------------------- +Q: Should I notify BPF kernel maintainers about issues in LLVM's BPF code +generation back end or about LLVM generated code that the verifier +refuses to accept? 
+ +A: Yes, please do! + +LLVM's BPF back end is a key piece of the whole BPF +infrastructure and it ties deeply into verification of programs from the +kernel side. Therefore, any issues on either side need to be investigated +and fixed whenever necessary. + +Therefore, please make sure to bring them up at netdev kernel mailing +list and Cc BPF maintainers for LLVM and kernel bits: + +* Yonghong Song <yhs@fb.com> +* Alexei Starovoitov <ast@kernel.org> +* Daniel Borkmann <daniel@iogearbox.net> + +LLVM also has an issue tracker where BPF related bugs can be found: + + https://bugs.llvm.org/buglist.cgi?quicksearch=bpf + +However, it is better to reach out through mailing lists with having +maintainers in Cc. + +Q: New BPF instruction for kernel and LLVM +------------------------------------------ +Q: I have added a new BPF instruction to the kernel, how can I integrate +it into LLVM? + +A: LLVM has a ``-mcpu`` selector for the BPF back end in order to allow +the selection of BPF instruction set extensions. By default the +``generic`` processor target is used, which is the base instruction set +(v1) of BPF. + +LLVM has an option to select ``-mcpu=probe`` where it will probe the host +kernel for supported BPF instruction set extensions and selects the +optimal set automatically. + +For cross-compilation, a specific version can be select manually as well :: + + $ llc -march bpf -mcpu=help + Available CPUs for this target: + + generic - Select the generic processor. + probe - Select the probe processor. + v1 - Select the v1 processor. + v2 - Select the v2 processor. + [...] + +Newly added BPF instructions to the Linux kernel need to follow the same +scheme, bump the instruction set version and implement probing for the +extensions such that ``-mcpu=probe`` users can benefit from the +optimization transparently when upgrading their kernels. + +If you are unable to implement support for the newly added BPF instruction +please reach out to BPF developers for help. + +By the way, the BPF kernel selftests run with ``-mcpu=probe`` for better +test coverage. + +Q: clang flag for target bpf? +----------------------------- +Q: In some cases clang flag ``-target bpf`` is used but in other cases the +default clang target, which matches the underlying architecture, is used. +What is the difference and when I should use which? + +A: Although LLVM IR generation and optimization try to stay architecture +independent, ``-target <arch>`` still has some impact on generated code: + +- BPF program may recursively include header file(s) with file scope + inline assembly codes. The default target can handle this well, + while ``bpf`` target may fail if bpf backend assembler does not + understand these assembly codes, which is true in most cases. + +- When compiled without ``-g``, additional elf sections, e.g., + .eh_frame and .rela.eh_frame, may be present in the object file + with default target, but not with ``bpf`` target. + +- The default target may turn a C switch statement into a switch table + lookup and jump operation. Since the switch table is placed + in the global readonly section, the bpf program will fail to load. + The bpf target does not support switch table optimization. + The clang option ``-fno-jump-tables`` can be used to disable + switch table generation. + +- For clang ``-target bpf``, it is guaranteed that pointer or long / + unsigned long types will always have a width of 64 bit, no matter + whether underlying clang binary or default target (or kernel) is + 32 bit. 
However, when native clang target is used, then it will + compile these types based on the underlying architecture's conventions, + meaning in case of 32 bit architecture, pointer or long / unsigned + long types e.g. in BPF context structure will have width of 32 bit + while the BPF LLVM back end still operates in 64 bit. The native + target is mostly needed in tracing for the case of walking ``pt_regs`` + or other kernel structures where CPU's register width matters. + Otherwise, ``clang -target bpf`` is generally recommended. + +You should use default target when: + +- Your program includes a header file, e.g., ptrace.h, which eventually + pulls in some header files containing file scope host assembly codes. + +- You can add ``-fno-jump-tables`` to work around the switch table issue. + +Otherwise, you can use ``bpf`` target. Additionally, you *must* use bpf target +when: + +- Your program uses data structures with pointer or long / unsigned long + types that interface with BPF helpers or context data structures. Access + into these structures is verified by the BPF verifier and may result + in verification failures if the native architecture is not aligned with + the BPF architecture, e.g. 64-bit. An example of this is + BPF_PROG_TYPE_SK_MSG require ``-target bpf`` + + +.. Links +.. _Documentation/process/: https://www.kernel.org/doc/html/latest/process/ +.. _MAINTAINERS: ../../MAINTAINERS +.. _Documentation/networking/netdev-FAQ.txt: ../networking/netdev-FAQ.txt +.. _netdev FAQ: ../networking/netdev-FAQ.txt +.. _samples/bpf/: ../../samples/bpf/ +.. _selftests: ../../tools/testing/selftests/bpf/ +.. _Documentation/dev-tools/kselftest.rst: + https://www.kernel.org/doc/html/latest/dev-tools/kselftest.html + +Happy BPF hacking! diff --git a/Documentation/bpf/bpf_devel_QA.txt b/Documentation/bpf/bpf_devel_QA.txt deleted file mode 100644 index da57601153a0..000000000000 --- a/Documentation/bpf/bpf_devel_QA.txt +++ /dev/null @@ -1,570 +0,0 @@ -This document provides information for the BPF subsystem about various -workflows related to reporting bugs, submitting patches, and queueing -patches for stable kernels. - -For general information about submitting patches, please refer to -Documentation/process/. This document only describes additional specifics -related to BPF. - -Reporting bugs: ---------------- - -Q: How do I report bugs for BPF kernel code? - -A: Since all BPF kernel development as well as bpftool and iproute2 BPF - loader development happens through the netdev kernel mailing list, - please report any found issues around BPF to the following mailing - list: - - netdev@vger.kernel.org - - This may also include issues related to XDP, BPF tracing, etc. - - Given netdev has a high volume of traffic, please also add the BPF - maintainers to Cc (from kernel MAINTAINERS file): - - Alexei Starovoitov <ast@kernel.org> - Daniel Borkmann <daniel@iogearbox.net> - - In case a buggy commit has already been identified, make sure to keep - the actual commit authors in Cc as well for the report. They can - typically be identified through the kernel's git tree. - - Please do *not* report BPF issues to bugzilla.kernel.org since it - is a guarantee that the reported issue will be overlooked. - -Submitting patches: -------------------- - -Q: To which mailing list do I need to submit my BPF patches? 
- -A: Please submit your BPF patches to the netdev kernel mailing list: - - netdev@vger.kernel.org - - Historically, BPF came out of networking and has always been maintained - by the kernel networking community. Although these days BPF touches - many other subsystems as well, the patches are still routed mainly - through the networking community. - - In case your patch has changes in various different subsystems (e.g. - tracing, security, etc), make sure to Cc the related kernel mailing - lists and maintainers from there as well, so they are able to review - the changes and provide their Acked-by's to the patches. - -Q: Where can I find patches currently under discussion for BPF subsystem? - -A: All patches that are Cc'ed to netdev are queued for review under netdev - patchwork project: - - http://patchwork.ozlabs.org/project/netdev/list/ - - Those patches which target BPF, are assigned to a 'bpf' delegate for - further processing from BPF maintainers. The current queue with - patches under review can be found at: - - https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147 - - Once the patches have been reviewed by the BPF community as a whole - and approved by the BPF maintainers, their status in patchwork will be - changed to 'Accepted' and the submitter will be notified by mail. This - means that the patches look good from a BPF perspective and have been - applied to one of the two BPF kernel trees. - - In case feedback from the community requires a respin of the patches, - their status in patchwork will be set to 'Changes Requested', and purged - from the current review queue. Likewise for cases where patches would - get rejected or are not applicable to the BPF trees (but assigned to - the 'bpf' delegate). - -Q: How do the changes make their way into Linux? - -A: There are two BPF kernel trees (git repositories). Once patches have - been accepted by the BPF maintainers, they will be applied to one - of the two BPF trees: - - https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git/ - https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/ - - The bpf tree itself is for fixes only, whereas bpf-next for features, - cleanups or other kind of improvements ("next-like" content). This is - analogous to net and net-next trees for networking. Both bpf and - bpf-next will only have a master branch in order to simplify against - which branch patches should get rebased to. - - Accumulated BPF patches in the bpf tree will regularly get pulled - into the net kernel tree. Likewise, accumulated BPF patches accepted - into the bpf-next tree will make their way into net-next tree. net and - net-next are both run by David S. Miller. From there, they will go - into the kernel mainline tree run by Linus Torvalds. To read up on the - process of net and net-next being merged into the mainline tree, see - the netdev FAQ under: - - Documentation/networking/netdev-FAQ.txt - - Occasionally, to prevent merge conflicts, we might send pull requests - to other trees (e.g. tracing) with a small subset of the patches, but - net and net-next are always the main trees targeted for integration. - - The pull requests will contain a high-level summary of the accumulated - patches and can be searched on netdev kernel mailing list through the - following subject lines (yyyy-mm-dd is the date of the pull request): - - pull-request: bpf yyyy-mm-dd - pull-request: bpf-next yyyy-mm-dd - -Q: How do I indicate which tree (bpf vs. bpf-next) my patch should be - applied to? 
- -A: The process is the very same as described in the netdev FAQ, so - please read up on it. The subject line must indicate whether the - patch is a fix or rather "next-like" content in order to let the - maintainers know whether it is targeted at bpf or bpf-next. - - For fixes eventually landing in bpf -> net tree, the subject must - look like: - - git format-patch --subject-prefix='PATCH bpf' start..finish - - For features/improvements/etc that should eventually land in - bpf-next -> net-next, the subject must look like: - - git format-patch --subject-prefix='PATCH bpf-next' start..finish - - If unsure whether the patch or patch series should go into bpf - or net directly, or bpf-next or net-next directly, it is not a - problem either if the subject line says net or net-next as target. - It is eventually up to the maintainers to do the delegation of - the patches. - - If it is clear that patches should go into bpf or bpf-next tree, - please make sure to rebase the patches against those trees in - order to reduce potential conflicts. - - In case the patch or patch series has to be reworked and sent out - again in a second or later revision, it is also required to add a - version number (v2, v3, ...) into the subject prefix: - - git format-patch --subject-prefix='PATCH net-next v2' start..finish - - When changes have been requested to the patch series, always send the - whole patch series again with the feedback incorporated (never send - individual diffs on top of the old series). - -Q: What does it mean when a patch gets applied to bpf or bpf-next tree? - -A: It means that the patch looks good for mainline inclusion from - a BPF point of view. - - Be aware that this is not a final verdict that the patch will - automatically get accepted into net or net-next trees eventually: - - On the netdev kernel mailing list reviews can come in at any point - in time. If discussions around a patch conclude that they cannot - get included as-is, we will either apply a follow-up fix or drop - them from the trees entirely. Therefore, we also reserve to rebase - the trees when deemed necessary. After all, the purpose of the tree - is to i) accumulate and stage BPF patches for integration into trees - like net and net-next, and ii) run extensive BPF test suite and - workloads on the patches before they make their way any further. - - Once the BPF pull request was accepted by David S. Miller, then - the patches end up in net or net-next tree, respectively, and - make their way from there further into mainline. Again, see the - netdev FAQ for additional information e.g. on how often they are - merged to mainline. - -Q: How long do I need to wait for feedback on my BPF patches? - -A: We try to keep the latency low. The usual time to feedback will - be around 2 or 3 business days. It may vary depending on the - complexity of changes and current patch load. - -Q: How often do you send pull requests to major kernel trees like - net or net-next? - -A: Pull requests will be sent out rather often in order to not - accumulate too many patches in bpf or bpf-next. - - As a rule of thumb, expect pull requests for each tree regularly - at the end of the week. In some cases pull requests could additionally - come also in the middle of the week depending on the current patch - load or urgency. - -Q: Are patches applied to bpf-next when the merge window is open? - -A: For the time when the merge window is open, bpf-next will not be - processed. 
This is roughly analogous to net-next patch processing, - so feel free to read up on the netdev FAQ about further details. - - During those two weeks of merge window, we might ask you to resend - your patch series once bpf-next is open again. Once Linus released - a v*-rc1 after the merge window, we continue processing of bpf-next. - - For non-subscribers to kernel mailing lists, there is also a status - page run by David S. Miller on net-next that provides guidance: - - http://vger.kernel.org/~davem/net-next.html - -Q: I made a BPF verifier change, do I need to add test cases for - BPF kernel selftests? - -A: If the patch has changes to the behavior of the verifier, then yes, - it is absolutely necessary to add test cases to the BPF kernel - selftests suite. If they are not present and we think they are - needed, then we might ask for them before accepting any changes. - - In particular, test_verifier.c is tracking a high number of BPF test - cases, including a lot of corner cases that LLVM BPF back end may - generate out of the restricted C code. Thus, adding test cases is - absolutely crucial to make sure future changes do not accidentally - affect prior use-cases. Thus, treat those test cases as: verifier - behavior that is not tracked in test_verifier.c could potentially - be subject to change. - -Q: When should I add code to samples/bpf/ and when to BPF kernel - selftests? - -A: In general, we prefer additions to BPF kernel selftests rather than - samples/bpf/. The rationale is very simple: kernel selftests are - regularly run by various bots to test for kernel regressions. - - The more test cases we add to BPF selftests, the better the coverage - and the less likely it is that those could accidentally break. It is - not that BPF kernel selftests cannot demo how a specific feature can - be used. - - That said, samples/bpf/ may be a good place for people to get started, - so it might be advisable that simple demos of features could go into - samples/bpf/, but advanced functional and corner-case testing rather - into kernel selftests. - - If your sample looks like a test case, then go for BPF kernel selftests - instead! - -Q: When should I add code to the bpftool? - -A: The main purpose of bpftool (under tools/bpf/bpftool/) is to provide - a central user space tool for debugging and introspection of BPF programs - and maps that are active in the kernel. If UAPI changes related to BPF - enable for dumping additional information of programs or maps, then - bpftool should be extended as well to support dumping them. - -Q: When should I add code to iproute2's BPF loader? - -A: For UAPI changes related to the XDP or tc layer (e.g. cls_bpf), the - convention is that those control-path related changes are added to - iproute2's BPF loader as well from user space side. This is not only - useful to have UAPI changes properly designed to be usable, but also - to make those changes available to a wider user base of major - downstream distributions. - -Q: Do you accept patches as well for iproute2's BPF loader? - -A: Patches for the iproute2's BPF loader have to be sent to: - - netdev@vger.kernel.org - - While those patches are not processed by the BPF kernel maintainers, - please keep them in Cc as well, so they can be reviewed. 
- - The official git repository for iproute2 is run by Stephen Hemminger - and can be found at: - - https://git.kernel.org/pub/scm/linux/kernel/git/shemminger/iproute2.git/ - - The patches need to have a subject prefix of '[PATCH iproute2 master]' - or '[PATCH iproute2 net-next]'. 'master' or 'net-next' describes the - target branch where the patch should be applied to. Meaning, if kernel - changes went into the net-next kernel tree, then the related iproute2 - changes need to go into the iproute2 net-next branch, otherwise they - can be targeted at master branch. The iproute2 net-next branch will get - merged into the master branch after the current iproute2 version from - master has been released. - - Like BPF, the patches end up in patchwork under the netdev project and - are delegated to 'shemminger' for further processing: - - http://patchwork.ozlabs.org/project/netdev/list/?delegate=389 - -Q: What is the minimum requirement before I submit my BPF patches? - -A: When submitting patches, always take the time and properly test your - patches *prior* to submission. Never rush them! If maintainers find - that your patches have not been properly tested, it is a good way to - get them grumpy. Testing patch submissions is a hard requirement! - - Note, fixes that go to bpf tree *must* have a Fixes: tag included. The - same applies to fixes that target bpf-next, where the affected commit - is in net-next (or in some cases bpf-next). The Fixes: tag is crucial - in order to identify follow-up commits and tremendously helps for people - having to do backporting, so it is a must have! - - We also don't accept patches with an empty commit message. Take your - time and properly write up a high quality commit message, it is - essential! - - Think about it this way: other developers looking at your code a month - from now need to understand *why* a certain change has been done that - way, and whether there have been flaws in the analysis or assumptions - that the original author did. Thus providing a proper rationale and - describing the use-case for the changes is a must. - - Patch submissions with >1 patch must have a cover letter which includes - a high level description of the series. This high level summary will - then be placed into the merge commit by the BPF maintainers such that - it is also accessible from the git log for future reference. - -Q: What do I need to consider when adding a new instruction or feature - that would require BPF JIT and/or LLVM integration as well? - -A: We try hard to keep all BPF JITs up to date such that the same user - experience can be guaranteed when running BPF programs on different - architectures without having the program punt to the less efficient - interpreter in case the in-kernel BPF JIT is enabled. - - If you are unable to implement or test the required JIT changes for - certain architectures, please work together with the related BPF JIT - developers in order to get the feature implemented in a timely manner. - Please refer to the git log (arch/*/net/) to locate the necessary - people for helping out. - - Also always make sure to add BPF test cases (e.g. test_bpf.c and - test_verifier.c) for new instructions, so that they can receive - broad test coverage and help run-time testing the various BPF JITs. - - In case of new BPF instructions, once the changes have been accepted - into the Linux kernel, please implement support into LLVM's BPF back - end. See LLVM section below for further information. 
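   As an illustration only: a new verifier selftest is just another entry in
   the test array of tools/testing/selftests/bpf/test_verifier.c. The test
   name and instruction sequence below are hypothetical placeholders; a real
   patch would replace them with cases that exercise the newly added
   instruction, including its corner cases:

     {
             /* Hypothetical entry, for illustration only. Replace the name
              * and the insns with sequences exercising the new instruction.
              */
             "sample: new instruction smoke test",
             .insns = {
             BPF_MOV64_IMM(BPF_REG_0, 1),
             BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
             BPF_EXIT_INSN(),
             },
             .result = ACCEPT,   /* the verifier must accept this program */
     },

   Such entries are picked up automatically when the suite is built and run,
   e.g. via make -C tools/testing/selftests/bpf.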
- -Stable submission: ------------------- - -Q: I need a specific BPF commit in stable kernels. What should I do? - -A: In case you need a specific fix in stable kernels, first check whether - the commit has already been applied in the related linux-*.y branches: - - https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/ - - If not the case, then drop an email to the BPF maintainers with the - netdev kernel mailing list in Cc and ask for the fix to be queued up: - - netdev@vger.kernel.org - - The process in general is the same as on netdev itself, see also the - netdev FAQ document. - -Q: Do you also backport to kernels not currently maintained as stable? - -A: No. If you need a specific BPF commit in kernels that are currently not - maintained by the stable maintainers, then you are on your own. - - The current stable and longterm stable kernels are all listed here: - - https://www.kernel.org/ - -Q: The BPF patch I am about to submit needs to go to stable as well. What - should I do? - -A: The same rules apply as with netdev patch submissions in general, see - netdev FAQ under: - - Documentation/networking/netdev-FAQ.txt - - Never add "Cc: stable@vger.kernel.org" to the patch description, but - ask the BPF maintainers to queue the patches instead. This can be done - with a note, for example, under the "---" part of the patch which does - not go into the git log. Alternatively, this can be done as a simple - request by mail instead. - -Q: Where do I find currently queued BPF patches that will be submitted - to stable? - -A: Once patches that fix critical bugs got applied into the bpf tree, they - are queued up for stable submission under: - - http://patchwork.ozlabs.org/bundle/bpf/stable/?state=* - - They will be on hold there at minimum until the related commit made its - way into the mainline kernel tree. - - After having been under broader exposure, the queued patches will be - submitted by the BPF maintainers to the stable maintainers. - -Testing patches: ----------------- - -Q: Which BPF kernel selftests version should I run my kernel against? - -A: If you run a kernel xyz, then always run the BPF kernel selftests from - that kernel xyz as well. Do not expect that the BPF selftest from the - latest mainline tree will pass all the time. - - In particular, test_bpf.c and test_verifier.c have a large number of - test cases and are constantly updated with new BPF test sequences, or - existing ones are adapted to verifier changes e.g. due to verifier - becoming smarter and being able to better track certain things. - -LLVM: ------ - -Q: Where do I find LLVM with BPF support? - -A: The BPF back end for LLVM is upstream in LLVM since version 3.7.1. - - All major distributions these days ship LLVM with BPF back end enabled, - so for the majority of use-cases it is not required to compile LLVM by - hand anymore, just install the distribution provided package. - - LLVM's static compiler lists the supported targets through 'llc --version', - make sure BPF targets are listed. Example: - - $ llc --version - LLVM (http://llvm.org/): - LLVM version 6.0.0svn - Optimized build. - Default target: x86_64-unknown-linux-gnu - Host CPU: skylake - - Registered Targets: - bpf - BPF (host endian) - bpfeb - BPF (big endian) - bpfel - BPF (little endian) - x86 - 32-bit X86: Pentium-Pro and above - x86-64 - 64-bit X86: EM64T and AMD64 - - For developers in order to utilize the latest features added to LLVM's - BPF back end, it is advisable to run the latest LLVM releases. 
Support - for new BPF kernel features such as additions to the BPF instruction - set are often developed together. - - All LLVM releases can be found at: http://releases.llvm.org/ - -Q: Got it, so how do I build LLVM manually anyway? - -A: You need cmake and gcc-c++ as build requisites for LLVM. Once you have - that set up, proceed with building the latest LLVM and clang version - from the git repositories: - - $ git clone http://llvm.org/git/llvm.git - $ cd llvm/tools - $ git clone --depth 1 http://llvm.org/git/clang.git - $ cd ..; mkdir build; cd build - $ cmake .. -DLLVM_TARGETS_TO_BUILD="BPF;X86" \ - -DBUILD_SHARED_LIBS=OFF \ - -DCMAKE_BUILD_TYPE=Release \ - -DLLVM_BUILD_RUNTIME=OFF - $ make -j $(getconf _NPROCESSORS_ONLN) - - The built binaries can then be found in the build/bin/ directory, where - you can point the PATH variable to. - -Q: Should I notify BPF kernel maintainers about issues in LLVM's BPF code - generation back end or about LLVM generated code that the verifier - refuses to accept? - -A: Yes, please do! LLVM's BPF back end is a key piece of the whole BPF - infrastructure and it ties deeply into verification of programs from the - kernel side. Therefore, any issues on either side need to be investigated - and fixed whenever necessary. - - Therefore, please make sure to bring them up at netdev kernel mailing - list and Cc BPF maintainers for LLVM and kernel bits: - - Yonghong Song <yhs@fb.com> - Alexei Starovoitov <ast@kernel.org> - Daniel Borkmann <daniel@iogearbox.net> - - LLVM also has an issue tracker where BPF related bugs can be found: - - https://bugs.llvm.org/buglist.cgi?quicksearch=bpf - - However, it is better to reach out through mailing lists with having - maintainers in Cc. - -Q: I have added a new BPF instruction to the kernel, how can I integrate - it into LLVM? - -A: LLVM has a -mcpu selector for the BPF back end in order to allow the - selection of BPF instruction set extensions. By default the 'generic' - processor target is used, which is the base instruction set (v1) of BPF. - - LLVM has an option to select -mcpu=probe where it will probe the host - kernel for supported BPF instruction set extensions and selects the - optimal set automatically. - - For cross-compilation, a specific version can be select manually as well. - - $ llc -march bpf -mcpu=help - Available CPUs for this target: - - generic - Select the generic processor. - probe - Select the probe processor. - v1 - Select the v1 processor. - v2 - Select the v2 processor. - [...] - - Newly added BPF instructions to the Linux kernel need to follow the same - scheme, bump the instruction set version and implement probing for the - extensions such that -mcpu=probe users can benefit from the optimization - transparently when upgrading their kernels. - - If you are unable to implement support for the newly added BPF instruction - please reach out to BPF developers for help. - - By the way, the BPF kernel selftests run with -mcpu=probe for better - test coverage. - -Q: In some cases clang flag "-target bpf" is used but in other cases the - default clang target, which matches the underlying architecture, is used. - What is the difference and when I should use which? - -A: Although LLVM IR generation and optimization try to stay architecture - independent, "-target <arch>" still has some impact on generated code: - - - BPF program may recursively include header file(s) with file scope - inline assembly codes. 
The default target can handle this well, - while bpf target may fail if bpf backend assembler does not - understand these assembly codes, which is true in most cases. - - - When compiled without -g, additional elf sections, e.g., - .eh_frame and .rela.eh_frame, may be present in the object file - with default target, but not with bpf target. - - - The default target may turn a C switch statement into a switch table - lookup and jump operation. Since the switch table is placed - in the global readonly section, the bpf program will fail to load. - The bpf target does not support switch table optimization. - The clang option "-fno-jump-tables" can be used to disable - switch table generation. - - - For clang -target bpf, it is guaranteed that pointer or long / - unsigned long types will always have a width of 64 bit, no matter - whether underlying clang binary or default target (or kernel) is - 32 bit. However, when native clang target is used, then it will - compile these types based on the underlying architecture's conventions, - meaning in case of 32 bit architecture, pointer or long / unsigned - long types e.g. in BPF context structure will have width of 32 bit - while the BPF LLVM back end still operates in 64 bit. The native - target is mostly needed in tracing for the case of walking pt_regs - or other kernel structures where CPU's register width matters. - Otherwise, clang -target bpf is generally recommended. - - You should use default target when: - - - Your program includes a header file, e.g., ptrace.h, which eventually - pulls in some header files containing file scope host assembly codes. - - You can add "-fno-jump-tables" to work around the switch table issue. - - Otherwise, you can use bpf target. Additionally, you _must_ use bpf target - when: - - - Your program uses data structures with pointer or long / unsigned long - types that interface with BPF helpers or context data structures. Access - into these structures is verified by the BPF verifier and may result - in verification failures if the native architecture is not aligned with - the BPF architecture, e.g. 64-bit. An example of this is - BPF_PROG_TYPE_SK_MSG require '-target bpf' - -Happy BPF hacking! diff --git a/Documentation/devicetree/bindings/arm/stm32/stm32-syscon.txt b/Documentation/devicetree/bindings/arm/stm32/stm32-syscon.txt new file mode 100644 index 000000000000..99980aee26e5 --- /dev/null +++ b/Documentation/devicetree/bindings/arm/stm32/stm32-syscon.txt @@ -0,0 +1,14 @@ +STMicroelectronics STM32 Platforms System Controller + +Properties: + - compatible : should contain two values. First value must be : + - " st,stm32mp157-syscfg " - for stm32mp157 based SoCs, + second value must be always "syscon". + - reg : offset and length of the register set. + + Example: + syscfg: syscon@50020000 { + compatible = "st,stm32mp157-syscfg", "syscon"; + reg = <0x50020000 0x400>; + }; + diff --git a/Documentation/devicetree/bindings/arm/stm32.txt b/Documentation/devicetree/bindings/arm/stm32/stm32.txt index 6808ed9ddfd5..6808ed9ddfd5 100644 --- a/Documentation/devicetree/bindings/arm/stm32.txt +++ b/Documentation/devicetree/bindings/arm/stm32/stm32.txt diff --git a/Documentation/devicetree/bindings/net/dsa/dsa.txt b/Documentation/devicetree/bindings/net/dsa/dsa.txt index cfe8f64eca4f..3ceeb8de1196 100644 --- a/Documentation/devicetree/bindings/net/dsa/dsa.txt +++ b/Documentation/devicetree/bindings/net/dsa/dsa.txt @@ -82,8 +82,6 @@ linked into one DSA cluster. 
switch0: switch0@0 { compatible = "marvell,mv88e6085"; - #address-cells = <1>; - #size-cells = <0>; reg = <0>; dsa,member = <0 0>; @@ -135,8 +133,6 @@ linked into one DSA cluster. switch1: switch1@0 { compatible = "marvell,mv88e6085"; - #address-cells = <1>; - #size-cells = <0>; reg = <0>; dsa,member = <0 1>; @@ -204,8 +200,6 @@ linked into one DSA cluster. switch2: switch2@0 { compatible = "marvell,mv88e6085"; - #address-cells = <1>; - #size-cells = <0>; reg = <0>; dsa,member = <0 2>; diff --git a/Documentation/devicetree/bindings/net/dsa/qca8k.txt b/Documentation/devicetree/bindings/net/dsa/qca8k.txt index 9c67ee4890d7..bbcb255c3150 100644 --- a/Documentation/devicetree/bindings/net/dsa/qca8k.txt +++ b/Documentation/devicetree/bindings/net/dsa/qca8k.txt @@ -2,7 +2,10 @@ Required properties: -- compatible: should be "qca,qca8337" +- compatible: should be one of: + "qca,qca8334" + "qca,qca8337" + - #size-cells: must be 0 - #address-cells: must be 1 @@ -14,6 +17,20 @@ port and PHY id, each subnode describing a port needs to have a valid phandle referencing the internal PHY connected to it. The CPU port of this switch is always port 0. +A CPU port node has the following optional node: + +- fixed-link : Fixed-link subnode describing a link to a non-MDIO + managed entity. See + Documentation/devicetree/bindings/net/fixed-link.txt + for details. + +For QCA8K the 'fixed-link' sub-node supports only the following properties: + +- 'speed' (integer, mandatory), to indicate the link speed. Accepted + values are 10, 100 and 1000 +- 'full-duplex' (boolean, optional), to indicate that full duplex is + used. When absent, half duplex is assumed. + Example: @@ -53,6 +70,10 @@ Example: label = "cpu"; ethernet = <&gmac1>; phy-mode = "rgmii"; + fixed-link { + speed = 1000; + full-duplex; + }; }; port@1 { diff --git a/Documentation/devicetree/bindings/net/dwmac-sun8i.txt b/Documentation/devicetree/bindings/net/dwmac-sun8i.txt index 3d6d5fa0c4d5..cfe724398a12 100644 --- a/Documentation/devicetree/bindings/net/dwmac-sun8i.txt +++ b/Documentation/devicetree/bindings/net/dwmac-sun8i.txt @@ -7,6 +7,7 @@ Required properties: - compatible: must be one of the following string: "allwinner,sun8i-a83t-emac" "allwinner,sun8i-h3-emac" + "allwinner,sun8i-r40-gmac" "allwinner,sun8i-v3s-emac" "allwinner,sun50i-a64-emac" - reg: address and length of the register for the device. @@ -20,18 +21,18 @@ Required properties: - phy-handle: See ethernet.txt - #address-cells: shall be 1 - #size-cells: shall be 0 -- syscon: A phandle to the syscon of the SoC with one of the following - compatible string: - - allwinner,sun8i-h3-system-controller - - allwinner,sun8i-v3s-system-controller - - allwinner,sun50i-a64-system-controller - - allwinner,sun8i-a83t-system-controller +- syscon: A phandle to the device containing the EMAC or GMAC clock register Optional properties: -- allwinner,tx-delay-ps: TX clock delay chain value in ps. Range value is 0-700. Default is 0) -- allwinner,rx-delay-ps: RX clock delay chain value in ps. Range value is 0-3100. Default is 0) -Both delay properties need to be a multiple of 100. They control the delay for -external PHY. +- allwinner,tx-delay-ps: TX clock delay chain value in ps. + Range is 0-700. Default is 0. + Unavailable for allwinner,sun8i-r40-gmac +- allwinner,rx-delay-ps: RX clock delay chain value in ps. + Range is 0-3100. Default is 0. + Range is 0-700 for allwinner,sun8i-r40-gmac +Both delay properties need to be a multiple of 100. They control the +clock delay for external RGMII PHY. 
They do not apply to the internal +PHY or external non-RGMII PHYs. Optional properties for the following compatibles: - "allwinner,sun8i-h3-emac", diff --git a/Documentation/devicetree/bindings/net/fsl-tsec-phy.txt b/Documentation/devicetree/bindings/net/fsl-tsec-phy.txt index 79bf352e659c..047bdf7bdd2f 100644 --- a/Documentation/devicetree/bindings/net/fsl-tsec-phy.txt +++ b/Documentation/devicetree/bindings/net/fsl-tsec-phy.txt @@ -86,70 +86,4 @@ Example: * Gianfar PTP clock nodes -General Properties: - - - compatible Should be "fsl,etsec-ptp" - - reg Offset and length of the register set for the device - - interrupts There should be at least two interrupts. Some devices - have as many as four PTP related interrupts. - -Clock Properties: - - - fsl,cksel Timer reference clock source. - - fsl,tclk-period Timer reference clock period in nanoseconds. - - fsl,tmr-prsc Prescaler, divides the output clock. - - fsl,tmr-add Frequency compensation value. - - fsl,tmr-fiper1 Fixed interval period pulse generator. - - fsl,tmr-fiper2 Fixed interval period pulse generator. - - fsl,max-adj Maximum frequency adjustment in parts per billion. - - These properties set the operational parameters for the PTP - clock. You must choose these carefully for the clock to work right. - Here is how to figure good values: - - TimerOsc = selected reference clock MHz - tclk_period = desired clock period nanoseconds - NominalFreq = 1000 / tclk_period MHz - FreqDivRatio = TimerOsc / NominalFreq (must be greater that 1.0) - tmr_add = ceil(2^32 / FreqDivRatio) - OutputClock = NominalFreq / tmr_prsc MHz - PulseWidth = 1 / OutputClock microseconds - FiperFreq1 = desired frequency in Hz - FiperDiv1 = 1000000 * OutputClock / FiperFreq1 - tmr_fiper1 = tmr_prsc * tclk_period * FiperDiv1 - tclk_period - max_adj = 1000000000 * (FreqDivRatio - 1.0) - 1 - - The calculation for tmr_fiper2 is the same as for tmr_fiper1. The - driver expects that tmr_fiper1 will be correctly set to produce a 1 - Pulse Per Second (PPS) signal, since this will be offered to the PPS - subsystem to synchronize the Linux clock. - - Reference clock source is determined by the value, which is holded - in CKSEL bits in TMR_CTRL register. "fsl,cksel" property keeps the - value, which will be directly written in those bits, that is why, - according to reference manual, the next clock sources can be used: - - <0> - external high precision timer reference clock (TSEC_TMR_CLK - input is used for this purpose); - <1> - eTSEC system clock; - <2> - eTSEC1 transmit clock; - <3> - RTC clock input. - - When this attribute is not used, eTSEC system clock will serve as - IEEE 1588 timer reference clock. 
- -Example: - - ptp_clock@24e00 { - compatible = "fsl,etsec-ptp"; - reg = <0x24E00 0xB0>; - interrupts = <12 0x8 13 0x8>; - interrupt-parent = < &ipic >; - fsl,cksel = <1>; - fsl,tclk-period = <10>; - fsl,tmr-prsc = <100>; - fsl,tmr-add = <0x999999A4>; - fsl,tmr-fiper1 = <0x3B9AC9F6>; - fsl,tmr-fiper2 = <0x00018696>; - fsl,max-adj = <659999998>; - }; +Refer to Documentation/devicetree/bindings/ptp/ptp-qoriq.txt diff --git a/Documentation/devicetree/bindings/net/meson-dwmac.txt b/Documentation/devicetree/bindings/net/meson-dwmac.txt index 61cada22ae6c..1321bb194ed9 100644 --- a/Documentation/devicetree/bindings/net/meson-dwmac.txt +++ b/Documentation/devicetree/bindings/net/meson-dwmac.txt @@ -11,6 +11,7 @@ Required properties on all platforms: - "amlogic,meson8b-dwmac" - "amlogic,meson8m2-dwmac" - "amlogic,meson-gxbb-dwmac" + - "amlogic,meson-axg-dwmac" Additionally "snps,dwmac" and any applicable more detailed version number described in net/stmmac.txt should be used. diff --git a/Documentation/devicetree/bindings/net/microchip,lan78xx.txt b/Documentation/devicetree/bindings/net/microchip,lan78xx.txt new file mode 100644 index 000000000000..76786a0f6d3d --- /dev/null +++ b/Documentation/devicetree/bindings/net/microchip,lan78xx.txt @@ -0,0 +1,54 @@ +Microchip LAN78xx Gigabit Ethernet controller + +The LAN78XX devices are usually configured by programming their OTP or with +an external EEPROM, but some platforms (e.g. Raspberry Pi 3 B+) have neither. +The Device Tree properties, if present, override the OTP and EEPROM. + +Required properties: +- compatible: Should be one of "usb424,7800", "usb424,7801" or "usb424,7850". + +Optional properties: +- local-mac-address: see ethernet.txt +- mac-address: see ethernet.txt + +Optional properties of the embedded PHY: +- microchip,led-modes: a 0..4 element vector, with each element configuring + the operating mode of an LED. Omitted LEDs are turned off. Allowed values + are defined in "include/dt-bindings/net/microchip-lan78xx.h". + +Example: + +/* Based on the configuration for a Raspberry Pi 3 B+ */ +&usb { + usb-port@1 { + compatible = "usb424,2514"; + reg = <1>; + #address-cells = <1>; + #size-cells = <0>; + + usb-port@1 { + compatible = "usb424,2514"; + reg = <1>; + #address-cells = <1>; + #size-cells = <0>; + + ethernet: ethernet@1 { + compatible = "usb424,7800"; + reg = <1>; + local-mac-address = [ 00 11 22 33 44 55 ]; + + mdio { + #address-cells = <0x1>; + #size-cells = <0x0>; + eth_phy: ethernet-phy@1 { + reg = <1>; + microchip,led-modes = < + LAN78XX_LINK_1000_ACTIVITY + LAN78XX_LINK_10_100_ACTIVITY + >; + }; + }; + }; + }; + }; +}; diff --git a/Documentation/devicetree/bindings/net/mscc-miim.txt b/Documentation/devicetree/bindings/net/mscc-miim.txt new file mode 100644 index 000000000000..7104679cf59d --- /dev/null +++ b/Documentation/devicetree/bindings/net/mscc-miim.txt @@ -0,0 +1,26 @@ +Microsemi MII Management Controller (MIIM) / MDIO +================================================= + +Properties: +- compatible: must be "mscc,ocelot-miim" +- reg: The base address of the MDIO bus controller register bank. Optionally, a + second register bank can be defined if there is an associated reset register + for internal PHYs +- #address-cells: Must be <1>. +- #size-cells: Must be <0>. MDIO addresses have no size component. +- interrupts: interrupt specifier (refer to the interrupt binding) + +Typically an MDIO bus might have several children. 
+ +Example: + mdio@107009c { + #address-cells = <1>; + #size-cells = <0>; + compatible = "mscc,ocelot-miim"; + reg = <0x107009c 0x36>, <0x10700f0 0x8>; + interrupts = <14>; + + phy0: ethernet-phy@0 { + reg = <0>; + }; + }; diff --git a/Documentation/devicetree/bindings/net/mscc-ocelot.txt b/Documentation/devicetree/bindings/net/mscc-ocelot.txt new file mode 100644 index 000000000000..0a84711abece --- /dev/null +++ b/Documentation/devicetree/bindings/net/mscc-ocelot.txt @@ -0,0 +1,82 @@ +Microsemi Ocelot network Switch +=============================== + +The Microsemi Ocelot network switch can be found on Microsemi SoCs (VSC7513, +VSC7514) + +Required properties: +- compatible: Should be "mscc,vsc7514-switch" +- reg: Must contain an (offset, length) pair of the register set for each + entry in reg-names. +- reg-names: Must include the following entries: + - "sys" + - "rew" + - "qs" + - "hsio" + - "qsys" + - "ana" + - "portX" with X from 0 to the number of last port index available on that + switch +- interrupts: Should contain the switch interrupts for frame extraction and + frame injection +- interrupt-names: should contain the interrupt names: "xtr", "inj" +- ethernet-ports: A container for child nodes representing switch ports. + +The ethernet-ports container has the following properties + +Required properties: + +- #address-cells: Must be 1 +- #size-cells: Must be 0 + +Each port node must have the following mandatory properties: +- reg: Describes the port address in the switch + +Port nodes may also contain the following optional standardised +properties, described in binding documents: + +- phy-handle: Phandle to a PHY on an MDIO bus. See + Documentation/devicetree/bindings/net/ethernet.txt for details. + +Example: + + switch@1010000 { + compatible = "mscc,vsc7514-switch"; + reg = <0x1010000 0x10000>, + <0x1030000 0x10000>, + <0x1080000 0x100>, + <0x10d0000 0x10000>, + <0x11e0000 0x100>, + <0x11f0000 0x100>, + <0x1200000 0x100>, + <0x1210000 0x100>, + <0x1220000 0x100>, + <0x1230000 0x100>, + <0x1240000 0x100>, + <0x1250000 0x100>, + <0x1260000 0x100>, + <0x1270000 0x100>, + <0x1280000 0x100>, + <0x1800000 0x80000>, + <0x1880000 0x10000>; + reg-names = "sys", "rew", "qs", "hsio", "port0", + "port1", "port2", "port3", "port4", "port5", + "port6", "port7", "port8", "port9", "port10", + "qsys", "ana"; + interrupts = <21 22>; + interrupt-names = "xtr", "inj"; + + ethernet-ports { + #address-cells = <1>; + #size-cells = <0>; + + port0: port@0 { + reg = <0>; + phy-handle = <&phy0>; + }; + port1: port@1 { + reg = <1>; + phy-handle = <&phy1>; + }; + }; + }; diff --git a/Documentation/devicetree/bindings/net/qualcomm-bluetooth.txt b/Documentation/devicetree/bindings/net/qualcomm-bluetooth.txt new file mode 100644 index 000000000000..0ea18a53cc29 --- /dev/null +++ b/Documentation/devicetree/bindings/net/qualcomm-bluetooth.txt @@ -0,0 +1,30 @@ +Qualcomm Bluetooth Chips +--------------------- + +This documents the binding structure and common properties for serial +attached Qualcomm devices. + +Serial attached Qualcomm devices shall be a child node of the host UART +device the slave device is attached to. 
+ +Required properties: + - compatible: should contain one of the following: + * "qcom,qca6174-bt" + +Optional properties: + - enable-gpios: gpio specifier used to enable chip + - clocks: clock provided to the controller (SUSCLK_32KHZ) + +Example: + +serial@7570000 { + label = "BT-UART"; + status = "okay"; + + bluetooth { + compatible = "qcom,qca6174-bt"; + + enable-gpios = <&pm8994_gpios 19 GPIO_ACTIVE_HIGH>; + clocks = <&divclk4>; + }; +}; diff --git a/Documentation/devicetree/bindings/net/sff,sfp.txt b/Documentation/devicetree/bindings/net/sff,sfp.txt index 929591d52ed6..832139919f20 100644 --- a/Documentation/devicetree/bindings/net/sff,sfp.txt +++ b/Documentation/devicetree/bindings/net/sff,sfp.txt @@ -7,11 +7,11 @@ Required properties: "sff,sfp" for SFP modules "sff,sff" for soldered down SFF modules -Optional Properties: - - i2c-bus : phandle of an I2C bus controller for the SFP two wire serial interface +Optional Properties: + - mod-def0-gpios : GPIO phandle and a specifier of the MOD-DEF0 (AKA Mod_ABS) module presence input gpio signal, active (module absent) high. Must not be present for SFF modules diff --git a/Documentation/devicetree/bindings/net/sh_eth.txt b/Documentation/devicetree/bindings/net/sh_eth.txt index 5172799a7f1a..82a4cf2c145d 100644 --- a/Documentation/devicetree/bindings/net/sh_eth.txt +++ b/Documentation/devicetree/bindings/net/sh_eth.txt @@ -14,6 +14,7 @@ Required properties: "renesas,ether-r8a7791" if the device is a part of R8A7791 SoC. "renesas,ether-r8a7793" if the device is a part of R8A7793 SoC. "renesas,ether-r8a7794" if the device is a part of R8A7794 SoC. + "renesas,gether-r8a77980" if the device is a part of R8A77980 SoC. "renesas,ether-r7s72100" if the device is a part of R7S72100 SoC. "renesas,rcar-gen1-ether" for a generic R-Car Gen1 device. "renesas,rcar-gen2-ether" for a generic R-Car Gen2 or RZ/G1 diff --git a/Documentation/devicetree/bindings/net/socionext,uniphier-ave4.txt b/Documentation/devicetree/bindings/net/socionext,uniphier-ave4.txt index 96398cc2982f..fc8f01718690 100644 --- a/Documentation/devicetree/bindings/net/socionext,uniphier-ave4.txt +++ b/Documentation/devicetree/bindings/net/socionext,uniphier-ave4.txt @@ -13,13 +13,25 @@ Required properties: - reg: Address where registers are mapped and size of region. - interrupts: Should contain the MAC interrupt. - phy-mode: See ethernet.txt in the same directory. Allow to choose - "rgmii", "rmii", or "mii" according to the PHY. + "rgmii", "rmii", "mii", or "internal" according to the PHY. + The acceptable mode is SoC-dependent. - phy-handle: Should point to the external phy device. See ethernet.txt file in the same directory. - clocks: A phandle to the clock for the MAC. + For Pro4 SoC, that is "socionext,uniphier-pro4-ave4", + another MAC clock, GIO bus clock and PHY clock are also required. + - clock-names: Should contain + - "ether", "ether-gb", "gio", "ether-phy" for Pro4 SoC + - "ether" for others + - resets: A phandle to the reset control for the MAC. For Pro4 SoC, + GIO bus reset is also required. + - reset-names: Should contain + - "ether", "gio" for Pro4 SoC + - "ether" for others + - socionext,syscon-phy-mode: A phandle to syscon with one argument + that configures phy mode. The argument is the ID of MAC instance. Optional properties: - - resets: A phandle to the reset control for the MAC. - local-mac-address: See ethernet.txt in the same directory. 
Required subnode: @@ -34,8 +46,11 @@ Example: interrupts = <0 66 4>; phy-mode = "rgmii"; phy-handle = <ðphy>; + clock-names = "ether"; clocks = <&sys_clk 6>; + reset-names = "ether"; resets = <&sys_rst 6>; + socionext,syscon-phy-mode = <&soc_glue 0>; local-mac-address = [00 00 00 00 00 00]; mdio { diff --git a/Documentation/devicetree/bindings/net/stm32-dwmac.txt b/Documentation/devicetree/bindings/net/stm32-dwmac.txt index 489dbcb66c5a..1341012722aa 100644 --- a/Documentation/devicetree/bindings/net/stm32-dwmac.txt +++ b/Documentation/devicetree/bindings/net/stm32-dwmac.txt @@ -6,14 +6,28 @@ Please see stmmac.txt for the other unchanged properties. The device node has following properties. Required properties: -- compatible: Should be "st,stm32-dwmac" to select glue, and +- compatible: For MCU family should be "st,stm32-dwmac" to select glue, and "snps,dwmac-3.50a" to select IP version. + For MPU family should be "st,stm32mp1-dwmac" to select + glue, and "snps,dwmac-4.20a" to select IP version. - clocks: Must contain a phandle for each entry in clock-names. - clock-names: Should be "stmmaceth" for the host clock. Should be "mac-clk-tx" for the MAC TX clock. Should be "mac-clk-rx" for the MAC RX clock. + For MPU family need to add also "ethstp" for power mode clock and, + "syscfg-clk" for SYSCFG clock. +- interrupt-names: Should contain a list of interrupt names corresponding to + the interrupts in the interrupts property, if available. + Should be "macirq" for the main MAC IRQ + Should be "eth_wake_irq" for the IT which wake up system - st,syscon : Should be phandle/offset pair. The phandle to the syscon node which - encompases the glue register, and the offset of the control register. + encompases the glue register, and the offset of the control register. + +Optional properties: +- clock-names: For MPU family "mac-clk-ck" for PHY without quartz +- st,int-phyclk (boolean) : valid only where PHY do not have quartz and need to be clock + by RCC + Example: ethernet@40028000 { diff --git a/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt b/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt index 3d2a031217da..7fd4e8ce4149 100644 --- a/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt +++ b/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt @@ -4,6 +4,7 @@ Required properties: - compatible: Should be one of the following: * "qcom,ath10k" * "qcom,ipq4019-wifi" + * "qcom,wcn3990-wifi" PCI based devices uses compatible string "qcom,ath10k" and takes calibration data along with board specific data via "qcom,ath10k-calibration-data". @@ -18,8 +19,12 @@ In general, entry "qcom,ath10k-pre-calibration-data" and "qcom,ath10k-calibration-data" conflict with each other and only one can be provided per device. +SNOC based devices (i.e. wcn3990) uses compatible string "qcom,wcn3990-wifi". + Optional properties: - reg: Address and length of the register set for the device. +- reg-names: Must include the list of following reg names, + "membase" - resets: Must contain an entry for each entry in reset-names. See ../reset/reseti.txt for details. - reset-names: Must include the list of following reset names, @@ -49,6 +54,8 @@ Optional properties: hw versions. - qcom,ath10k-pre-calibration-data : pre calibration data as an array, the length can vary between hw versions. +- <supply-name>-supply: handle to the regulator device tree node + optional "supply-name" is "vdd-0.8-cx-mx". 
Example (to supply the calibration data alone): @@ -119,3 +126,27 @@ wifi0: wifi@a000000 { qcom,msi_base = <0x40>; qcom,ath10k-pre-calibration-data = [ 01 02 03 ... ]; }; + +Example (to supply wcn3990 SoC wifi block details): + +wifi@18000000 { + compatible = "qcom,wcn3990-wifi"; + reg = <0x18800000 0x800000>; + reg-names = "membase"; + clocks = <&clock_gcc clk_aggre2_noc_clk>; + clock-names = "smmu_aggre2_noc_clk" + interrupts = + <0 130 0 /* CE0 */ >, + <0 131 0 /* CE1 */ >, + <0 132 0 /* CE2 */ >, + <0 133 0 /* CE3 */ >, + <0 134 0 /* CE4 */ >, + <0 135 0 /* CE5 */ >, + <0 136 0 /* CE6 */ >, + <0 137 0 /* CE7 */ >, + <0 138 0 /* CE8 */ >, + <0 139 0 /* CE9 */ >, + <0 140 0 /* CE10 */ >, + <0 141 0 /* CE11 */ >; + vdd-0.8-cx-mx-supply = <&pm8998_l5>; +}; diff --git a/Documentation/devicetree/bindings/ptp/ptp-qoriq.txt b/Documentation/devicetree/bindings/ptp/ptp-qoriq.txt new file mode 100644 index 000000000000..0f569d8e73a3 --- /dev/null +++ b/Documentation/devicetree/bindings/ptp/ptp-qoriq.txt @@ -0,0 +1,69 @@ +* Freescale QorIQ 1588 timer based PTP clock + +General Properties: + + - compatible Should be "fsl,etsec-ptp" + - reg Offset and length of the register set for the device + - interrupts There should be at least two interrupts. Some devices + have as many as four PTP related interrupts. + +Clock Properties: + + - fsl,cksel Timer reference clock source. + - fsl,tclk-period Timer reference clock period in nanoseconds. + - fsl,tmr-prsc Prescaler, divides the output clock. + - fsl,tmr-add Frequency compensation value. + - fsl,tmr-fiper1 Fixed interval period pulse generator. + - fsl,tmr-fiper2 Fixed interval period pulse generator. + - fsl,max-adj Maximum frequency adjustment in parts per billion. + + These properties set the operational parameters for the PTP + clock. You must choose these carefully for the clock to work right. + Here is how to figure good values: + + TimerOsc = selected reference clock MHz + tclk_period = desired clock period nanoseconds + NominalFreq = 1000 / tclk_period MHz + FreqDivRatio = TimerOsc / NominalFreq (must be greater that 1.0) + tmr_add = ceil(2^32 / FreqDivRatio) + OutputClock = NominalFreq / tmr_prsc MHz + PulseWidth = 1 / OutputClock microseconds + FiperFreq1 = desired frequency in Hz + FiperDiv1 = 1000000 * OutputClock / FiperFreq1 + tmr_fiper1 = tmr_prsc * tclk_period * FiperDiv1 - tclk_period + max_adj = 1000000000 * (FreqDivRatio - 1.0) - 1 + + The calculation for tmr_fiper2 is the same as for tmr_fiper1. The + driver expects that tmr_fiper1 will be correctly set to produce a 1 + Pulse Per Second (PPS) signal, since this will be offered to the PPS + subsystem to synchronize the Linux clock. + + Reference clock source is determined by the value, which is holded + in CKSEL bits in TMR_CTRL register. "fsl,cksel" property keeps the + value, which will be directly written in those bits, that is why, + according to reference manual, the next clock sources can be used: + + <0> - external high precision timer reference clock (TSEC_TMR_CLK + input is used for this purpose); + <1> - eTSEC system clock; + <2> - eTSEC1 transmit clock; + <3> - RTC clock input. + + When this attribute is not used, eTSEC system clock will serve as + IEEE 1588 timer reference clock. 
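  To make the formulas above concrete, here is one self-consistent set of
  values. The inputs (a 200 MHz reference clock, a 10 ns timer period, a
  prescaler of 2 and a 1 Hz FIPER1 pulse) are assumed purely for
  illustration and do not describe any particular board:

    TimerOsc     = 200 MHz
    tclk_period  = 10 ns             -> NominalFreq = 1000 / 10 = 100 MHz
    FreqDivRatio = 200 / 100         = 2.0  (greater than 1.0, as required)
    tmr_add      = ceil(2^32 / 2.0)  = 2147483648 = 0x80000000
    tmr_prsc     = 2                 -> OutputClock = 100 / 2 = 50 MHz
    PulseWidth   = 1 / 50            = 0.02 microseconds
    FiperFreq1   = 1 Hz              -> FiperDiv1 = 1000000 * 50 / 1 = 50000000
    tmr_fiper1   = 2 * 10 * 50000000 - 10 = 999999990
    max_adj      = 1000000000 * (2.0 - 1.0) - 1 = 999999999

  With these inputs tmr_fiper1 yields the 1 Pulse Per Second signal that the
  driver expects on FIPER1.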
+ +Example: + + ptp_clock@24e00 { + compatible = "fsl,etsec-ptp"; + reg = <0x24E00 0xB0>; + interrupts = <12 0x8 13 0x8>; + interrupt-parent = < &ipic >; + fsl,cksel = <1>; + fsl,tclk-period = <10>; + fsl,tmr-prsc = <100>; + fsl,tmr-add = <0x999999A4>; + fsl,tmr-fiper1 = <0x3B9AC9F6>; + fsl,tmr-fiper2 = <0x00018696>; + fsl,max-adj = <659999998>; + }; diff --git a/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt index 77cd42cc5f54..b025770eeb92 100644 --- a/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt +++ b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt @@ -17,7 +17,8 @@ pool management. Required properties: -- compatible : Must be "ti,keystone-navigator-qmss"; +- compatible : Must be "ti,keystone-navigator-qmss". + : Must be "ti,66ak2g-navss-qm" for QMSS on K2G SoC. - clocks : phandle to the reference clock for this device. - queue-range : <start number> total range of queue numbers for the device. - linkram0 : <address size> for internal link ram, where size is the total @@ -39,6 +40,12 @@ Required properties: - Descriptor memory setup region. - Queue Management/Queue Proxy region for queue Push. - Queue Management/Queue Proxy region for queue Pop. + +For QMSS on K2G SoC, following QM reg indexes are used in that order + - Queue Peek region. + - Queue configuration region. + - Queue Management/Queue Proxy region for queue Push/Pop. + - queue-pools : child node classifying the queue ranges into pools. Queue ranges are grouped into 3 type of pools: - qpend : pool of qpend(interruptible) queues diff --git a/Documentation/filesystems/nfs/nfsroot.txt b/Documentation/filesystems/nfs/nfsroot.txt index 5efae00f6c7f..d2963123eb1c 100644 --- a/Documentation/filesystems/nfs/nfsroot.txt +++ b/Documentation/filesystems/nfs/nfsroot.txt @@ -5,6 +5,7 @@ Written 1996 by Gero Kuhlmann <gero@gkminix.han.de> Updated 1997 by Martin Mares <mj@atrey.karlin.mff.cuni.cz> Updated 2006 by Nico Schottelius <nico-kernel-nfsroot@schottelius.org> Updated 2006 by Horms <horms@verge.net.au> +Updated 2018 by Chris Novakovic <chris@chrisn.me.uk> @@ -79,7 +80,7 @@ nfsroot=[<server-ip>:]<root-dir>[,<nfs-options>] ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>: - <dns0-ip>:<dns1-ip> + <dns0-ip>:<dns1-ip>:<ntp0-ip> This parameter tells the kernel how to configure IP addresses of devices and also how to set up the IP routing table. It was originally called @@ -110,6 +111,9 @@ ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>: will not be triggered if it is missing and NFS root is not in operation. + Value is exported to /proc/net/pnp with the prefix "bootserver " + (see below). + Default: Determined using autoconfiguration. The address of the autoconfiguration server is used. @@ -123,10 +127,13 @@ ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>: Default: Determined using autoconfiguration. - <hostname> Name of the client. May be supplied by autoconfiguration, - but its absence will not trigger autoconfiguration. - If specified and DHCP is used, the user provided hostname will - be carried in the DHCP request to hopefully update DNS record. + <hostname> Name of the client. If a '.' character is present, anything + before the first '.' is used as the client's hostname, and anything + after it is used as its NIS domain name. May be supplied by + autoconfiguration, but its absence will not trigger autoconfiguration. 
+ If specified and DHCP is used, the user-provided hostname (and NIS + domain name, if present) will be carried in the DHCP request; this + may cause a DNS record to be created or updated for the client. Default: Client IP address is used in ASCII notation. @@ -162,12 +169,55 @@ ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>: Default: any - <dns0-ip> IP address of first nameserver. - Value gets exported by /proc/net/pnp which is often linked - on embedded systems by /etc/resolv.conf. + <dns0-ip> IP address of primary nameserver. + Value is exported to /proc/net/pnp with the prefix "nameserver " + (see below). + + Default: None if not using autoconfiguration; determined + automatically if using autoconfiguration. + + <dns1-ip> IP address of secondary nameserver. + See <dns0-ip>. + + <ntp0-ip> IP address of a Network Time Protocol (NTP) server. + Value is exported to /proc/net/ipconfig/ntp_servers, but is + otherwise unused (see below). + + Default: None if not using autoconfiguration; determined + automatically if using autoconfiguration. + + After configuration (whether manual or automatic) is complete, two files + are created in the following format; lines are omitted if their respective + value is empty following configuration: + + - /proc/net/pnp: + + #PROTO: <DHCP|BOOTP|RARP|MANUAL> (depending on configuration method) + domain <dns-domain> (if autoconfigured, the DNS domain) + nameserver <dns0-ip> (primary name server IP) + nameserver <dns1-ip> (secondary name server IP) + nameserver <dns2-ip> (tertiary name server IP) + bootserver <server-ip> (NFS server IP) + + - /proc/net/ipconfig/ntp_servers: + + <ntp0-ip> (NTP server IP) + <ntp1-ip> (NTP server IP) + <ntp2-ip> (NTP server IP) + + <dns-domain> and <dns2-ip> (in /proc/net/pnp) and <ntp1-ip> and <ntp2-ip> + (in /proc/net/ipconfig/ntp_servers) are requested during autoconfiguration; + they cannot be specified as part of the "ip=" kernel command line parameter. + + Because the "domain" and "nameserver" options are recognised by DNS + resolvers, /etc/resolv.conf is often linked to /proc/net/pnp on systems + that use an NFS root filesystem. - <dns1-ip> IP address of second nameserver. - Same as above. + Note that the kernel will not synchronise the system time with any NTP + servers it discovers; this is the responsibility of a user space process + (e.g. an initrd/initramfs script that passes the IP addresses listed in + /proc/net/ipconfig/ntp_servers to an NTP client before mounting the real + root filesystem if it is on NFS). nfsrootdebug diff --git a/Documentation/networking/6lowpan.txt b/Documentation/networking/6lowpan.txt index a7dc7e939c7a..2e5a939d7e6f 100644 --- a/Documentation/networking/6lowpan.txt +++ b/Documentation/networking/6lowpan.txt @@ -24,10 +24,10 @@ enum lowpan_lltypes. Example to evaluate the private usually you can do: -static inline sturct lowpan_priv_foobar * +static inline struct lowpan_priv_foobar * lowpan_foobar_priv(struct net_device *dev) { - return (sturct lowpan_priv_foobar *)lowpan_priv(dev)->priv; + return (struct lowpan_priv_foobar *)lowpan_priv(dev)->priv; } switch (dev->type) { diff --git a/Documentation/networking/af_xdp.rst b/Documentation/networking/af_xdp.rst new file mode 100644 index 000000000000..ff929cfab4f4 --- /dev/null +++ b/Documentation/networking/af_xdp.rst @@ -0,0 +1,312 @@ +.. SPDX-License-Identifier: GPL-2.0 + +====== +AF_XDP +====== + +Overview +======== + +AF_XDP is an address family that is optimized for high performance +packet processing. 
+ +This document assumes that the reader is familiar with BPF and XDP. If +not, the Cilium project has an excellent reference guide at +http://cilium.readthedocs.io/en/latest/bpf/. + +Using the XDP_REDIRECT action from an XDP program, the program can +redirect ingress frames to other XDP enabled netdevs, using the +bpf_redirect_map() function. AF_XDP sockets enable the possibility for +XDP programs to redirect frames to a memory buffer in a user-space +application. + +An AF_XDP socket (XSK) is created with the normal socket() +syscall. Associated with each XSK are two rings: the RX ring and the +TX ring. A socket can receive packets on the RX ring and it can send +packets on the TX ring. These rings are registered and sized with the +setsockopts XDP_RX_RING and XDP_TX_RING, respectively. It is mandatory +to have at least one of these rings for each socket. An RX or TX +descriptor ring points to a data buffer in a memory area called a +UMEM. RX and TX can share the same UMEM so that a packet does not have +to be copied between RX and TX. Moreover, if a packet needs to be kept +for a while due to a possible retransmit, the descriptor that points +to that packet can be changed to point to another and reused right +away. This again avoids copying data. + +The UMEM consists of a number of equally sized chunks. A descriptor in +one of the rings references a frame by referencing its addr. The addr +is simply an offset within the entire UMEM region. The user space +allocates memory for this UMEM using whatever means it feels is most +appropriate (malloc, mmap, huge pages, etc). This memory area is then +registered with the kernel using the new setsockopt XDP_UMEM_REG. The +UMEM also has two rings: the FILL ring and the COMPLETION ring. The +fill ring is used by the application to send down addr for the kernel +to fill in with RX packet data. References to these frames will then +appear in the RX ring once each packet has been received. The +completion ring, on the other hand, contains frame addr that the +kernel has transmitted completely and can now be used again by user +space, for either TX or RX. Thus, the frame addrs appearing in the +completion ring are addrs that were previously transmitted using the +TX ring. In summary, the RX and FILL rings are used for the RX path +and the TX and COMPLETION rings are used for the TX path. + +The socket is then finally bound with a bind() call to a device and a +specific queue id on that device, and it is not until bind is +completed that traffic starts to flow. + +The UMEM can be shared between processes, if desired. If a process +wants to do this, it simply skips the registration of the UMEM and its +corresponding two rings, sets the XDP_SHARED_UMEM flag in the bind +call and submits the XSK of the process it would like to share UMEM +with as well as its own newly created XSK socket. The new process will +then receive frame addr references in its own RX ring that point to +this shared UMEM. Note that since the ring structures are +single-consumer / single-producer (for performance reasons), the new +process has to create its own socket with associated RX and TX rings, +since it cannot share this with the other process. This is also the +reason that there is only one set of FILL and COMPLETION rings per +UMEM. It is the responsibility of a single process to handle the UMEM. + +How is then packets distributed from an XDP program to the XSKs? There +is a BPF map called XSKMAP (or BPF_MAP_TYPE_XSKMAP in full). 
The +user-space application can place an XSK at an arbitrary place in this +map. The XDP program can then redirect a packet to a specific index in +this map and at this point XDP validates that the XSK in that map was +indeed bound to that device and ring number. If not, the packet is +dropped. If the map is empty at that index, the packet is also +dropped. This also means that it is currently mandatory to have an XDP +program loaded (and one XSK in the XSKMAP) to be able to get any +traffic to user space through the XSK. + +AF_XDP can operate in two different modes: XDP_SKB and XDP_DRV. If the +driver does not have support for XDP, or XDP_SKB is explicitly chosen +when loading the XDP program, XDP_SKB mode is employed that uses SKBs +together with the generic XDP support and copies out the data to user +space. A fallback mode that works for any network device. On the other +hand, if the driver has support for XDP, it will be used by the AF_XDP +code to provide better performance, but there is still a copy of the +data into user space. + +Concepts +======== + +In order to use an AF_XDP socket, a number of associated objects need +to be setup. + +Jonathan Corbet has also written an excellent article on LWN, +"Accelerating networking with AF_XDP". It can be found at +https://lwn.net/Articles/750845/. + +UMEM +---- + +UMEM is a region of virtual contiguous memory, divided into +equal-sized frames. An UMEM is associated to a netdev and a specific +queue id of that netdev. It is created and configured (chunk size, +headroom, start address and size) by using the XDP_UMEM_REG setsockopt +system call. A UMEM is bound to a netdev and queue id, via the bind() +system call. + +An AF_XDP is socket linked to a single UMEM, but one UMEM can have +multiple AF_XDP sockets. To share an UMEM created via one socket A, +the next socket B can do this by setting the XDP_SHARED_UMEM flag in +struct sockaddr_xdp member sxdp_flags, and passing the file descriptor +of A to struct sockaddr_xdp member sxdp_shared_umem_fd. + +The UMEM has two single-producer/single-consumer rings, that are used +to transfer ownership of UMEM frames between the kernel and the +user-space application. + +Rings +----- + +There are a four different kind of rings: Fill, Completion, RX and +TX. All rings are single-producer/single-consumer, so the user-space +application need explicit synchronization of multiple +processes/threads are reading/writing to them. + +The UMEM uses two rings: Fill and Completion. Each socket associated +with the UMEM must have an RX queue, TX queue or both. Say, that there +is a setup with four sockets (all doing TX and RX). Then there will be +one Fill ring, one Completion ring, four TX rings and four RX rings. + +The rings are head(producer)/tail(consumer) based rings. A producer +writes the data ring at the index pointed out by struct xdp_ring +producer member, and increasing the producer index. A consumer reads +the data ring at the index pointed out by struct xdp_ring consumer +member, and increasing the consumer index. + +The rings are configured and created via the _RING setsockopt system +calls and mmapped to user-space using the appropriate offset to mmap() +(XDP_PGOFF_RX_RING, XDP_PGOFF_TX_RING, XDP_UMEM_PGOFF_FILL_RING and +XDP_UMEM_PGOFF_COMPLETION_RING). + +The size of the rings need to be of size power of two. + +UMEM Fill Ring +~~~~~~~~~~~~~~ + +The Fill ring is used to transfer ownership of UMEM frames from +user-space to kernel-space. The UMEM addrs are passed in the ring. 
As +an example, if the UMEM is 64k and each chunk is 4k, then the UMEM has +16 chunks and can pass addrs between 0 and 64k. + +Frames passed to the kernel are used for the ingress path (RX rings). + +The user application produces UMEM addrs to this ring. Note that the +kernel will mask the incoming addr. E.g. for a chunk size of 2k, the +log2(2048) LSB of the addr will be masked off, meaning that 2048, 2050 +and 3000 refers to the same chunk. + + +UMEM Completetion Ring +~~~~~~~~~~~~~~~~~~~~~~ + +The Completion Ring is used transfer ownership of UMEM frames from +kernel-space to user-space. Just like the Fill ring, UMEM indicies are +used. + +Frames passed from the kernel to user-space are frames that has been +sent (TX ring) and can be used by user-space again. + +The user application consumes UMEM addrs from this ring. + + +RX Ring +~~~~~~~ + +The RX ring is the receiving side of a socket. Each entry in the ring +is a struct xdp_desc descriptor. The descriptor contains UMEM offset +(addr) and the length of the data (len). + +If no frames have been passed to kernel via the Fill ring, no +descriptors will (or can) appear on the RX ring. + +The user application consumes struct xdp_desc descriptors from this +ring. + +TX Ring +~~~~~~~ + +The TX ring is used to send frames. The struct xdp_desc descriptor is +filled (index, length and offset) and passed into the ring. + +To start the transfer a sendmsg() system call is required. This might +be relaxed in the future. + +The user application produces struct xdp_desc descriptors to this +ring. + +XSKMAP / BPF_MAP_TYPE_XSKMAP +---------------------------- + +On XDP side there is a BPF map type BPF_MAP_TYPE_XSKMAP (XSKMAP) that +is used in conjunction with bpf_redirect_map() to pass the ingress +frame to a socket. + +The user application inserts the socket into the map, via the bpf() +system call. + +Note that if an XDP program tries to redirect to a socket that does +not match the queue configuration and netdev, the frame will be +dropped. E.g. an AF_XDP socket is bound to netdev eth0 and +queue 17. Only the XDP program executing for eth0 and queue 17 will +successfully pass data to the socket. Please refer to the sample +application (samples/bpf/) in for an example. + +Usage +===== + +In order to use AF_XDP sockets there are two parts needed. The +user-space application and the XDP program. For a complete setup and +usage example, please refer to the sample application. The user-space +side is xdpsock_user.c and the XDP side xdpsock_kern.c. + +Naive ring dequeue and enqueue could look like this:: + + // struct xdp_rxtx_ring { + // __u32 *producer; + // __u32 *consumer; + // struct xdp_desc *desc; + // }; + + // struct xdp_umem_ring { + // __u32 *producer; + // __u32 *consumer; + // __u64 *desc; + // }; + + // typedef struct xdp_rxtx_ring RING; + // typedef struct xdp_umem_ring RING; + + // typedef struct xdp_desc RING_TYPE; + // typedef __u64 RING_TYPE; + + int dequeue_one(RING *ring, RING_TYPE *item) + { + __u32 entries = *ring->producer - *ring->consumer; + + if (entries == 0) + return -1; + + // read-barrier! + + *item = ring->desc[*ring->consumer & (RING_SIZE - 1)]; + (*ring->consumer)++; + return 0; + } + + int enqueue_one(RING *ring, const RING_TYPE *item) + { + u32 free_entries = RING_SIZE - (*ring->producer - *ring->consumer); + + if (free_entries == 0) + return -1; + + ring->desc[*ring->producer & (RING_SIZE - 1)] = *item; + + // write-barrier! 
+Usage
+=====
+
+In order to use AF_XDP sockets two parts are needed: the user-space
+application and the XDP program. For a complete setup and usage
+example, please refer to the sample application. The user-space side
+is xdpsock_user.c and the XDP side is xdpsock_kern.c.
+
+Naive ring dequeue and enqueue could look like this::
+
+    // struct xdp_rxtx_ring {
+    //     __u32 *producer;
+    //     __u32 *consumer;
+    //     struct xdp_desc *desc;
+    // };
+
+    // struct xdp_umem_ring {
+    //     __u32 *producer;
+    //     __u32 *consumer;
+    //     __u64 *desc;
+    // };
+
+    // typedef struct xdp_rxtx_ring RING;
+    // typedef struct xdp_umem_ring RING;
+
+    // typedef struct xdp_desc RING_TYPE;
+    // typedef __u64 RING_TYPE;
+
+    int dequeue_one(RING *ring, RING_TYPE *item)
+    {
+        __u32 entries = *ring->producer - *ring->consumer;
+
+        if (entries == 0)
+            return -1;
+
+        // read-barrier!
+
+        *item = ring->desc[*ring->consumer & (RING_SIZE - 1)];
+        (*ring->consumer)++;
+        return 0;
+    }
+
+    int enqueue_one(RING *ring, const RING_TYPE *item)
+    {
+        __u32 free_entries = RING_SIZE - (*ring->producer - *ring->consumer);
+
+        if (free_entries == 0)
+            return -1;
+
+        ring->desc[*ring->producer & (RING_SIZE - 1)] = *item;
+
+        // write-barrier!
+
+        (*ring->producer)++;
+        return 0;
+    }
+
+
+For a more optimized version, please refer to the sample application.
+
+Sample application
+==================
+
+There is an xdpsock benchmarking/test application included that
+demonstrates how to use AF_XDP sockets with both private and shared
+UMEMs. Say that you would like your UDP traffic from port 4242 to end
+up in queue 16, on which we will enable AF_XDP. Here, we use ethtool
+for this::
+
+      ethtool -N p3p2 rx-flow-hash udp4 fn
+      ethtool -N p3p2 flow-type udp4 src-port 4242 dst-port 4242 \
+          action 16
+
+Running the rxdrop benchmark in XDP_DRV mode can then be done
+using::
+
+      samples/bpf/xdpsock -i p3p2 -q 16 -r -N
+
+For XDP_SKB mode, use the switch "-S" instead of "-N" and all options
+can be displayed with "-h", as usual.
+
+Credits
+=======
+
+- Björn Töpel (AF_XDP core)
+- Magnus Karlsson (AF_XDP core)
+- Alexander Duyck
+- Alexei Starovoitov
+- Daniel Borkmann
+- Jesper Dangaard Brouer
+- John Fastabend
+- Jonathan Corbet (LWN coverage)
+- Michael S. Tsirkin
+- Qi Z Zhang
+- Willem de Bruijn
+
diff --git a/Documentation/networking/bonding.txt b/Documentation/networking/bonding.txt index 9ba04c0bab8d..c13214d073a4 100644 --- a/Documentation/networking/bonding.txt +++ b/Documentation/networking/bonding.txt @@ -140,7 +140,7 @@ bonding module at load time, or are specified via sysfs. Module options may be given as command line arguments to the insmod or modprobe command, but are usually specified in either the -/etc/modrobe.d/*.conf configuration files, or in a distro-specific +/etc/modprobe.d/*.conf configuration files, or in a distro-specific configuration file (some of which are detailed in the next section). Details on bonding support for sysfs is provided in the diff --git a/Documentation/networking/e100.txt b/Documentation/networking/e100.rst index 54810b82c01a..d4d837027925 100644 --- a/Documentation/networking/e100.txt +++ b/Documentation/networking/e100.rst @@ -1,7 +1,7 @@ Linux* Base Driver for the Intel(R) PRO/100 Family of Adapters ============================================================== -March 15, 2011 +June 1, 2018 Contents ======== @@ -36,16 +36,9 @@ Channel Bonding documentation can be found in the Linux kernel source: Identifying Your Adapter ======================== -For more information on how to identify your adapter, go to the Adapter & -Driver ID Guide at: - - http://support.intel.com/support/network/adapter/pro100/21397.htm - -For the latest Intel network drivers for Linux, refer to the following -website. In the search field, enter your adapter name or type, or use the -networking link on the left to search for your adapter: - - http://downloadfinder.intel.com/scripts-df/support_intel.asp +For information on how to identify your adapter, and for the latest Intel +network drivers, refer to the Intel Support website: +http://www.intel.com/support Driver Configuration Parameters =============================== @@ -57,22 +50,26 @@ Rx Descriptors: Number of receive descriptors. A receive descriptor is a data structure that describes a receive buffer and its attributes to the network controller. The data in the descriptor is used by the controller to write data from the controller to host memory. In the 3.x.x driver the valid range - for this parameter is 64-256. The default value is 64. This parameter can be - changed using the command: + for this parameter is 64-256. The default value is 256. This parameter can be + changed using the command:: - ethtool -G eth?
rx n, where n is the number of desired rx descriptors. + ethtool -G eth? rx n + + Where n is the number of desired Rx descriptors. Tx Descriptors: Number of transmit descriptors. A transmit descriptor is a data structure that describes a transmit buffer and its attributes to the network controller. The data in the descriptor is used by the controller to read data from the host memory to the controller. In the 3.x.x driver the valid - range for this parameter is 64-256. The default value is 64. This parameter - can be changed using the command: + range for this parameter is 64-256. The default value is 128. This parameter + can be changed using the command:: + + ethtool -G eth? tx n - ethtool -G eth? tx n, where n is the number of desired tx descriptors. + Where n is the number of desired Tx descriptors. Speed/Duplex: The driver auto-negotiates the link speed and duplex settings by - default. The ethtool utility can be used as follows to force speed/duplex. + default. The ethtool utility can be used as follows to force speed/duplex.:: ethtool -s eth? autoneg off speed {10|100} duplex {full|half} @@ -81,7 +78,7 @@ Speed/Duplex: The driver auto-negotiates the link speed and duplex settings by Event Log Message Level: The driver uses the message level flag to log events to syslog. The message level can be set at driver load time. It can also be - set using the command: + set using the command:: ethtool -s eth? msglvl n @@ -112,9 +109,9 @@ Additional Configurations --------------------- In order to see link messages and other Intel driver information on your console, you must set the dmesg level up to six. This can be done by - entering the following on the command line before loading the e100 driver: + entering the following on the command line before loading the e100 driver:: - dmesg -n 8 + dmesg -n 6 If you wish to see all messages issued by the driver, including debug messages, set the dmesg level to eight. @@ -146,7 +143,8 @@ Additional Configurations NAPI (Rx polling mode) is supported in the e100 driver. - See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI. + See https://wiki.linuxfoundation.org/networking/napi for more information + on NAPI. Multiple Interfaces on Same Ethernet Broadcast Network ------------------------------------------------------ @@ -160,7 +158,7 @@ Additional Configurations If you have multiple interfaces in a server, either turn on ARP filtering by - (1) entering: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter + (1) entering:: echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter (this only works if your kernel's version is higher than 2.4.5), or (2) installing the interfaces in separate broadcast domains (either @@ -169,15 +167,11 @@ Additional Configurations Support ======= - For general information, go to the Intel support website at: +http://www.intel.com/support/ - http://support.intel.com - - or the Intel Wired Networking project hosted by Sourceforge at: - - http://sourceforge.net/projects/e1000 - -If an issue is identified with the released source code on the supported -kernel with a supported adapter, email the specific information related to the -issue to e1000-devel@lists.sourceforge.net. +or the Intel Wired Networking project hosted by Sourceforge at: +http://sourceforge.net/projects/e1000 +If an issue is identified with the released source code on a supported kernel +with a supported adapter, email the specific information related to the issue +to e1000-devel@lists.sf.net. 
diff --git a/Documentation/networking/e1000.txt b/Documentation/networking/e1000.rst index 1f6ed848363d..616848940e63 100644 --- a/Documentation/networking/e1000.txt +++ b/Documentation/networking/e1000.rst @@ -154,7 +154,7 @@ NOTE: When e1000 is loaded with default settings and multiple adapters are in use simultaneously, the CPU utilization may increase non- linearly. In order to limit the CPU utilization without impacting the overall throughput, we recommend that you load the driver as - follows: + follows:: modprobe e1000 InterruptThrottleRate=3000,3000,3000 @@ -167,8 +167,8 @@ NOTE: When e1000 is loaded with default settings and multiple adapters RxDescriptors ------------- -Valid Range: 80-256 for 82542 and 82543-based adapters - 80-4096 for all other supported adapters +Valid Range: 48-256 for 82542 and 82543-based adapters + 48-4096 for all other supported adapters Default Value: 256 This value specifies the number of receive buffer descriptors allocated @@ -230,8 +230,8 @@ speed. Duplex should also be set when Speed is set to either 10 or 100. TxDescriptors ------------- -Valid Range: 80-256 for 82542 and 82543-based adapters - 80-4096 for all other supported adapters +Valid Range: 48-256 for 82542 and 82543-based adapters + 48-4096 for all other supported adapters Default Value: 256 This value is the number of transmit descriptors allocated by the driver. @@ -242,41 +242,10 @@ NOTE: Depending on the available system resources, the request for a higher number of transmit descriptors may be denied. In this case, use a lower number. -TxDescriptorStep ----------------- -Valid Range: 1 (use every Tx Descriptor) - 4 (use every 4th Tx Descriptor) - -Default Value: 1 (use every Tx Descriptor) - -On certain non-Intel architectures, it has been observed that intense TX -traffic bursts of short packets may result in an improper descriptor -writeback. If this occurs, the driver will report a "TX Timeout" and reset -the adapter, after which the transmit flow will restart, though data may -have stalled for as much as 10 seconds before it resumes. - -The improper writeback does not occur on the first descriptor in a system -memory cache-line, which is typically 32 bytes, or 4 descriptors long. - -Setting TxDescriptorStep to a value of 4 will ensure that all TX descriptors -are aligned to the start of a system memory cache line, and so this problem -will not occur. - -NOTES: Setting TxDescriptorStep to 4 effectively reduces the number of - TxDescriptors available for transmits to 1/4 of the normal allocation. - This has a possible negative performance impact, which may be - compensated for by allocating more descriptors using the TxDescriptors - module parameter. - - There are other conditions which may result in "TX Timeout", which will - not be resolved by the use of the TxDescriptorStep parameter. As the - issue addressed by this parameter has never been observed on Intel - Architecture platforms, it should not be used on Intel platforms. - TxIntDelay ---------- Valid Range: 0-65535 (0=off) -Default Value: 64 +Default Value: 8 This value delays the generation of transmit interrupts in units of 1.024 microseconds. Transmit interrupt reduction can improve CPU @@ -288,7 +257,7 @@ TxAbsIntDelay ------------- (This parameter is supported only on 82540, 82545 and later adapters.) Valid Range: 0-65535 (0=off) -Default Value: 64 +Default Value: 32 This value, in units of 1.024 microseconds, limits the delay in which a transmit interrupt is generated. 
Useful only if TxIntDelay is non-zero, @@ -310,7 +279,7 @@ Copybreak --------- Valid Range: 0-xxxxxxx (0=off) Default Value: 256 -Usage: insmod e1000.ko copybreak=128 +Usage: modprobe e1000.ko copybreak=128 Driver copies all packets below or equaling this size to a fresh RX buffer before handing it up the stack. @@ -328,14 +297,6 @@ Default Value: 0 (disabled) Allows PHY to turn off in lower power states. The user can turn off this parameter in supported chipsets. -KumeranLockLoss ---------------- -Valid Range: 0-1 -Default Value: 1 (enabled) - -This workaround skips resetting the PHY at shutdown for the initial -silicon releases of ICH8 systems. - Speed and Duplex Configuration ============================== @@ -397,12 +358,12 @@ Additional Configurations ------------ Jumbo Frames support is enabled by changing the MTU to a value larger than the default of 1500. Use the ifconfig command to increase the MTU size. - For example: + For example:: ifconfig eth<x> mtu 9000 up This setting is not saved across reboots. It can be made permanent if - you add: + you add:: MTU=9000 diff --git a/Documentation/networking/failover.rst b/Documentation/networking/failover.rst new file mode 100644 index 000000000000..f0c8483cdbf5 --- /dev/null +++ b/Documentation/networking/failover.rst @@ -0,0 +1,18 @@ +.. SPDX-License-Identifier: GPL-2.0 + +======== +FAILOVER +======== + +Overview +======== + +The failover module provides a generic interface for paravirtual drivers +to register a netdev and a set of ops with a failover instance. The ops +are used as event handlers that get called to handle netdev register/ +unregister/link change/name change events on slave pci ethernet devices +with the same mac address as the failover netdev. + +This enables paravirtual drivers to use a VF as an accelerated low latency +datapath. It also allows live migration of VMs with direct attached VFs by +failing over to the paravirtual datapath when the VF is unplugged. diff --git a/Documentation/networking/filter.txt b/Documentation/networking/filter.txt index fd55c7de9991..e6b4ebb2b243 100644 --- a/Documentation/networking/filter.txt +++ b/Documentation/networking/filter.txt @@ -483,6 +483,12 @@ Example output from dmesg: [ 3389.935851] JIT code: 00000030: 00 e8 28 94 ff e0 83 f8 01 75 07 b8 ff ff 00 00 [ 3389.935852] JIT code: 00000040: eb 02 31 c0 c9 c3 +When CONFIG_BPF_JIT_ALWAYS_ON is enabled, bpf_jit_enable is permanently set to 1 and +setting any other value than that will return in failure. This is even the case for +setting bpf_jit_enable to 2, since dumping the final JIT image into the kernel log +is discouraged and introspection through bpftool (under tools/bpf/bpftool/) is the +generally recommended approach instead. + In the kernel source tree under tools/bpf/, there's bpf_jit_disasm for generating disassembly out of the kernel log's hexdump: @@ -1136,6 +1142,7 @@ into a register from memory, the register's top 56 bits are known zero, while the low 8 are unknown - which is represented as the tnum (0x0; 0xff). If we then OR this with 0x40, we get (0x40; 0xbf), then if we add 1 we get (0x0; 0x1ff), because of potential carries. + Besides arithmetic, the register state can also be updated by conditional branches. For instance, if a SCALAR_VALUE is compared > 8, in the 'true' branch it will have a umin_value (unsigned minimum value) of 9, whereas in the 'false' @@ -1144,14 +1151,16 @@ BPF_JSGE) would instead update the signed minimum/maximum values. 
Information from the signed and unsigned bounds can be combined; for instance if a value is first tested < 8 and then tested s> 4, the verifier will conclude that the value is also > 4 and s< 8, since the bounds prevent crossing the sign boundary. + PTR_TO_PACKETs with a variable offset part have an 'id', which is common to all pointers sharing that same variable offset. This is important for packet range -checks: after adding some variable to a packet pointer, if you then copy it to -another register and (say) add a constant 4, both registers will share the same -'id' but one will have a fixed offset of +4. Then if it is bounds-checked and -found to be less than a PTR_TO_PACKET_END, the other register is now known to -have a safe range of at least 4 bytes. See 'Direct packet access', below, for -more on PTR_TO_PACKET ranges. +checks: after adding a variable to a packet pointer register A, if you then copy +it to another register B and then add a constant 4 to A, both registers will +share the same 'id' but the A will have a fixed offset of +4. Then if A is +bounds-checked and found to be less than a PTR_TO_PACKET_END, the register B is +now known to have a safe range of at least 4 bytes. See 'Direct packet access', +below, for more on PTR_TO_PACKET ranges. + The 'id' field is also used on PTR_TO_MAP_VALUE_OR_NULL, common to all copies of the pointer returned from a map lookup. This means that when one copy is checked and found to be non-NULL, all copies can become PTR_TO_MAP_VALUEs. diff --git a/Documentation/networking/gtp.txt b/Documentation/networking/gtp.txt index 0d9c18f05ec6..6966bbec1ecb 100644 --- a/Documentation/networking/gtp.txt +++ b/Documentation/networking/gtp.txt @@ -67,7 +67,7 @@ Don't be confused by terminology: The GTP User Plane goes through kernel accelerated path, while the GTP Control Plane goes to Userspace :) -The official homepge of the module is at +The official homepage of the module is at https://osmocom.org/projects/linux-kernel-gtp-u/wiki == Userspace Programs with Linux Kernel GTP-U support == @@ -120,7 +120,7 @@ If yo have questions regarding how to use the Kernel GTP module from your own software, or want to contribute to the code, please use the osmocom-net-grps mailing list for related discussion. The list can be reached at osmocom-net-gprs@lists.osmocom.org and the mailman -interface for managign your subscription is at +interface for managing your subscription is at https://lists.osmocom.org/mailman/listinfo/osmocom-net-gprs == Issue Tracker == diff --git a/Documentation/networking/ila.txt b/Documentation/networking/ila.txt index 78df879abd26..a17dac9dc915 100644 --- a/Documentation/networking/ila.txt +++ b/Documentation/networking/ila.txt @@ -121,7 +121,7 @@ three options to deal with this: - checksum neutral mapping When an address is translated the difference can be offset - elsewhere in a part of the packet that is covered by the + elsewhere in a part of the packet that is covered by the checksum. The low order sixteen bits of the identifier are used. This method is preferred since it doesn't require parsing a packet beyond the IP header and in most cases the diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst index f204eaff657d..fec8588a588e 100644 --- a/Documentation/networking/index.rst +++ b/Documentation/networking/index.rst @@ -6,9 +6,12 @@ Contents: .. 
toctree:: :maxdepth: 2 + af_xdp batman-adv can dpaa2/index + e100 + e1000 kapi z8530book msg_zerocopy diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt index 35ffaa281b26..ce8fbf5aa63c 100644 --- a/Documentation/networking/ip-sysctl.txt +++ b/Documentation/networking/ip-sysctl.txt @@ -26,7 +26,7 @@ ip_no_pmtu_disc - INTEGER discarded. Outgoing frames are handled the same as in mode 1, implicitly setting IP_PMTUDISC_DONT on every created socket. - Mode 3 is a hardend pmtu discover mode. The kernel will only + Mode 3 is a hardened pmtu discover mode. The kernel will only accept fragmentation-needed errors if the underlying protocol can verify them besides a plain socket lookup. Current protocols for which pmtu events will be honored are TCP, SCTP @@ -449,8 +449,10 @@ tcp_recovery - INTEGER features. RACK: 0x1 enables the RACK loss detection for fast detection of lost - retransmissions and tail drops. + retransmissions and tail drops. It also subsumes and disables + RFC6675 recovery for SACK connections. RACK: 0x2 makes RACK's reordering window static (min_rtt/4). + RACK: 0x4 disables RACK's DUPACK threshold heuristic Default: 0x1 @@ -523,6 +525,19 @@ tcp_rmem - vector of 3 INTEGERs: min, default, max tcp_sack - BOOLEAN Enable select acknowledgments (SACKS). +tcp_comp_sack_delay_ns - LONG INTEGER + TCP tries to reduce number of SACK sent, using a timer + based on 5% of SRTT, capped by this sysctl, in nano seconds. + The default is 1ms, based on TSO autosizing period. + + Default : 1,000,000 ns (1 ms) + +tcp_comp_sack_nr - INTEGER + Max numer of SACK that can be compressed. + Using 0 disables SACK compression. + + Detault : 44 + tcp_slow_start_after_idle - BOOLEAN If set, provide RFC2861 behavior and time out the congestion window after an idle period. An idle period is defined at @@ -652,11 +667,15 @@ tcp_tso_win_divisor - INTEGER building larger TSO frames. Default: 3 -tcp_tw_reuse - BOOLEAN - Allow to reuse TIME-WAIT sockets for new connections when it is - safe from protocol viewpoint. Default value is 0. +tcp_tw_reuse - INTEGER + Enable reuse of TIME-WAIT sockets for new connections when it is + safe from protocol viewpoint. + 0 - disable + 1 - global enable + 2 - enable for loopback traffic only It should not be changed without advice/request of technical experts. + Default: 2 tcp_window_scaling - BOOLEAN Enable window scaling as defined in RFC1323. @@ -1428,6 +1447,19 @@ ip6frag_low_thresh - INTEGER ip6frag_time - INTEGER Time in seconds to keep an IPv6 fragment in memory. +IPv6 Segment Routing: + +seg6_flowlabel - INTEGER + Controls the behaviour of computing the flowlabel of outer + IPv6 header in case of SR T.encaps + + -1 set flowlabel to zero. + 0 copy flowlabel from Inner packet in case of Inner IPv6 + (Set flowlabel to 0 in case IPv4/L2) + 1 Compute the flowlabel using seg6_make_flowlabel() + + Default is 0. + conf/default/*: Change the interface-specific default settings. diff --git a/Documentation/networking/ipsec.txt b/Documentation/networking/ipsec.txt index 8dbc08b7e431..ba794b7e51be 100644 --- a/Documentation/networking/ipsec.txt +++ b/Documentation/networking/ipsec.txt @@ -25,8 +25,8 @@ Quote from RFC3173: is implementation dependent. 
Current IPComp implementation is indeed by the book, while as in practice -when sending non-compressed packet to the peer(whether or not packet len -is smaller than the threshold or the compressed len is large than original +when sending non-compressed packet to the peer (whether or not packet len +is smaller than the threshold or the compressed len is larger than original packet len), the packet is dropped when checking the policy as this packet matches the selector but not coming from any XFRM layer, i.e., with no security path. Such naked packet will not eventually make it to upper layer. diff --git a/Documentation/networking/ipvlan.txt b/Documentation/networking/ipvlan.txt index 812ef003e0a8..27a38e50c287 100644 --- a/Documentation/networking/ipvlan.txt +++ b/Documentation/networking/ipvlan.txt @@ -73,11 +73,11 @@ mode to make conn-tracking work. This is the default option. To configure the IPvlan port in this mode, user can choose to either add this option on the command-line or don't specify anything. This is the traditional mode where slaves can cross-talk among -themseleves apart from talking through the master device. +themselves apart from talking through the master device. 5.2 private: If this option is added to the command-line, the port is set in private -mode. i.e. port wont allow cross communication between slaves. +mode. i.e. port won't allow cross communication between slaves. 5.3 vepa: If this is added to the command-line, the port is set in VEPA mode. diff --git a/Documentation/networking/kcm.txt b/Documentation/networking/kcm.txt index 9a513295b07c..b773a5278ac4 100644 --- a/Documentation/networking/kcm.txt +++ b/Documentation/networking/kcm.txt @@ -1,4 +1,4 @@ -Kernel Connection Mulitplexor +Kernel Connection Multiplexor ----------------------------- Kernel Connection Multiplexor (KCM) is a mechanism that provides a message based @@ -31,7 +31,7 @@ KCM implements an NxM multiplexor in the kernel as diagrammed below: KCM sockets ----------- -The KCM sockets provide the user interface to the muliplexor. All the KCM sockets +The KCM sockets provide the user interface to the multiplexor. All the KCM sockets bound to a multiplexor are considered to have equivalent function, and I/O operations in different sockets may be done in parallel without the need for synchronization between threads in userspace. @@ -199,7 +199,7 @@ while. Example use: BFP programs for message delineation ------------------------------------ -BPF programs can be compiled using the BPF LLVM backend. For exmple, +BPF programs can be compiled using the BPF LLVM backend. For example, the BPF program for parsing Thrift is: #include "bpf.h" /* for __sk_buff */ @@ -222,7 +222,7 @@ messages. The kernel provides necessary assurances that messages are sent and received atomically. This relieves much of the burden applications have in mapping a message based protocol onto the TCP stream. KCM also make application layer messages a unit of work in the kernel for the purposes of -steerng and scheduling, which in turn allows a simpler networking model in +steering and scheduling, which in turn allows a simpler networking model in multithreaded applications. Configurations @@ -272,7 +272,7 @@ on the socket thus waking up the application thread. When the application sees the error (which may just be a disconnect) it should unattach the socket from KCM and then close it. It is assumed that once an error is posted on the TCP socket the data stream is unrecoverable (i.e. 
an error -may have occurred in the middle of receiving a messssge). +may have occurred in the middle of receiving a message). TCP connection monitoring ------------------------- diff --git a/Documentation/networking/net_failover.rst b/Documentation/networking/net_failover.rst new file mode 100644 index 000000000000..70ca2f5800c4 --- /dev/null +++ b/Documentation/networking/net_failover.rst @@ -0,0 +1,116 @@ +.. SPDX-License-Identifier: GPL-2.0 + +============ +NET_FAILOVER +============ + +Overview +======== + +The net_failover driver provides an automated failover mechanism via APIs +to create and destroy a failover master netdev and mananges a primary and +standby slave netdevs that get registered via the generic failover +infrastructrure. + +The failover netdev acts a master device and controls 2 slave devices. The +original paravirtual interface is registered as 'standby' slave netdev and +a passthru/vf device with the same MAC gets registered as 'primary' slave +netdev. Both 'standby' and 'failover' netdevs are associated with the same +'pci' device. The user accesses the network interface via 'failover' netdev. +The 'failover' netdev chooses 'primary' netdev as default for transmits when +it is available with link up and running. + +This can be used by paravirtual drivers to enable an alternate low latency +datapath. It also enables hypervisor controlled live migration of a VM with +direct attached VF by failing over to the paravirtual datapath when the VF +is unplugged. + +virtio-net accelerated datapath: STANDBY mode +============================================= + +net_failover enables hypervisor controlled accelerated datapath to virtio-net +enabled VMs in a transparent manner with no/minimal guest userspace chanages. + +To support this, the hypervisor needs to enable VIRTIO_NET_F_STANDBY +feature on the virtio-net interface and assign the same MAC address to both +virtio-net and VF interfaces. + +Here is an example XML snippet that shows such configuration. + + <interface type='network'> + <mac address='52:54:00:00:12:53'/> + <source network='enp66s0f0_br'/> + <target dev='tap01'/> + <model type='virtio'/> + <driver name='vhost' queues='4'/> + <link state='down'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> + </interface> + <interface type='hostdev' managed='yes'> + <mac address='52:54:00:00:12:53'/> + <source> + <address type='pci' domain='0x0000' bus='0x42' slot='0x02' function='0x5'/> + </source> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> + </interface> + +Booting a VM with the above configuration will result in the following 3 +netdevs created in the VM. + +4: ens10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 + link/ether 52:54:00:00:12:53 brd ff:ff:ff:ff:ff:ff + inet 192.168.12.53/24 brd 192.168.12.255 scope global dynamic ens10 + valid_lft 42482sec preferred_lft 42482sec + inet6 fe80::97d8:db2:8c10:b6d6/64 scope link + valid_lft forever preferred_lft forever +5: ens10nsby: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ens10 state UP group default qlen 1000 + link/ether 52:54:00:00:12:53 brd ff:ff:ff:ff:ff:ff +7: ens11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ens10 state UP group default qlen 1000 + link/ether 52:54:00:00:12:53 brd ff:ff:ff:ff:ff:ff + +ens10 is the 'failover' master netdev, ens10nsby and ens11 are the slave +'standby' and 'primary' netdevs respectively. 
+ +Live Migration of a VM with SR-IOV VF & virtio-net in STANDBY mode +================================================================== + +net_failover also enables hypervisor controlled live migration to be supported +with VMs that have direct attached SR-IOV VF devices by automatic failover to +the paravirtual datapath when the VF is unplugged. + +Here is a sample script that shows the steps to initiate live migration on +the source hypervisor. + +# cat vf_xml +<interface type='hostdev' managed='yes'> + <mac address='52:54:00:00:12:53'/> + <source> + <address type='pci' domain='0x0000' bus='0x42' slot='0x02' function='0x5'/> + </source> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> +</interface> + +# Source Hypervisor +#!/bin/bash + +DOMAIN=fedora27-tap01 +PF=enp66s0f0 +VF_NUM=5 +TAP_IF=tap01 +VF_XML= + +MAC=52:54:00:00:12:53 +ZERO_MAC=00:00:00:00:00:00 + +virsh domif-setlink $DOMAIN $TAP_IF up +bridge fdb del $MAC dev $PF master +virsh detach-device $DOMAIN $VF_XML +ip link set $PF vf $VF_NUM mac $ZERO_MAC + +virsh migrate --live $DOMAIN qemu+ssh://$REMOTE_HOST/system + +# Destination Hypervisor +#!/bin/bash + +virsh attach-device $DOMAIN $VF_XML +virsh domif-setlink $DOMAIN $TAP_IF down diff --git a/Documentation/networking/netdev-FAQ.txt b/Documentation/networking/netdev-FAQ.txt index 2a3278d5cf35..fa951b820b25 100644 --- a/Documentation/networking/netdev-FAQ.txt +++ b/Documentation/networking/netdev-FAQ.txt @@ -179,6 +179,15 @@ A: No. See above answer. In short, if you think it really belongs in dash marker line as described in Documentation/process/submitting-patches.rst to temporarily embed that information into the patch that you send. +Q: Are all networking bug fixes backported to all stable releases? + +A: Due to capacity, Dave could only take care of the backports for the last + 2 stable releases. For earlier stable releases, each stable branch maintainer + is supposed to take care of them. If you find any patch is missing from an + earlier stable branch, please notify stable@vger.kernel.org with either a + commit ID or a formal patch backported, and CC Dave and other relevant + networking developers. + Q: Someone said that the comment style and coding convention is different for the networking content. Is this true? diff --git a/Documentation/networking/netdev-features.txt b/Documentation/networking/netdev-features.txt index c77f9d57eb91..c4a54c162547 100644 --- a/Documentation/networking/netdev-features.txt +++ b/Documentation/networking/netdev-features.txt @@ -113,6 +113,13 @@ whatever headers there might be. NETIF_F_TSO_ECN means that hardware can properly split packets with CWR bit set, be it TCPv4 (when NETIF_F_TSO is enabled) or TCPv6 (NETIF_F_TSO6). + * Transmit UDP segmentation offload + +NETIF_F_GSO_UDP_GSO_L4 accepts a single UDP header with a payload that exceeds +gso_size. On segmentation, it segments the payload on gso_size boundaries and +replicates the network and UDP headers (fixing up the last one if less than +gso_size). 
+ * Transmit DMA from high memory On platforms where this is relevant, NETIF_F_HIGHDMA signals that diff --git a/Documentation/networking/nf_conntrack-sysctl.txt b/Documentation/networking/nf_conntrack-sysctl.txt index 433b6724797a..1669dc2419fd 100644 --- a/Documentation/networking/nf_conntrack-sysctl.txt +++ b/Documentation/networking/nf_conntrack-sysctl.txt @@ -156,7 +156,7 @@ nf_conntrack_timestamp - BOOLEAN nf_conntrack_udp_timeout - INTEGER (seconds) default 30 -nf_conntrack_udp_timeout_stream2 - INTEGER (seconds) +nf_conntrack_udp_timeout_stream - INTEGER (seconds) default 180 This extended timeout will be used in case there is an UDP stream diff --git a/Documentation/sysctl/net.txt b/Documentation/sysctl/net.txt index 5992602469d8..9ecde517728c 100644 --- a/Documentation/sysctl/net.txt +++ b/Documentation/sysctl/net.txt @@ -45,6 +45,7 @@ through bpf(2) and passing a verifier in the kernel, a JIT will then translate these BPF proglets into native CPU instructions. There are two flavors of JITs, the newer eBPF JIT currently supported on: - x86_64 + - x86_32 - arm64 - arm32 - ppc64 |