author | Stanislav Fomichev <sdf@google.com> | 2019-05-28 14:14:43 -0700 |
---|---|---|
committer | Daniel Borkmann <daniel@iogearbox.net> | 2019-05-29 15:17:35 +0200 |
commit | dbcc1ba26e43bd32cb308e50ac4cb4a29d2f5967 (patch) | |
tree | 1d8ce96e66911655a7abeafbddb7fb39b777a175 /include/linux/bpf-cgroup.h | |
parent | 02205d2ed6fe26a8f4fd9e9cec251d1dc7f79316 (diff) | |
download | lwn-dbcc1ba26e43bd32cb308e50ac4cb4a29d2f5967.tar.gz lwn-dbcc1ba26e43bd32cb308e50ac4cb4a29d2f5967.zip |
bpf: cgroup: properly use bpf_prog_array api
Now that we don't have __rcu markers on the bpf_prog_array helpers,
let's use proper rcu_dereference_protected() to obtain the array
pointer under the mutex.

We also don't need the __rcu annotation on cgroup_bpf.inactive, since
it is not read or updated concurrently.
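For illustration only, a minimal sketch of the two points above, assuming
kernel context with cgroup_mutex as the update-side lock; the helper names
cgroup_effective_progs() and stash_inactive() are hypothetical and not part
of this patch:

```c
/* Illustrative sketch, not the series code; assumes kernel context
 * (linux/bpf-cgroup.h, linux/cgroup.h) with cgroup_mutex taken by the
 * attach/detach paths.
 */

/* Update-side read of an __rcu pointer: no rcu_read_lock() is needed,
 * because cgroup_mutex serializes all writers; lockdep verifies that
 * the mutex really is held.
 */
static struct bpf_prog_array *
cgroup_effective_progs(struct cgroup *cgrp, enum bpf_attach_type type)
{
	return rcu_dereference_protected(cgrp->bpf.effective[type],
					 lockdep_is_held(&cgroup_mutex));
}

/* cgroup_bpf.inactive is only touched while the mutex is held and is
 * never seen by RCU readers, so it can be a plain pointer: a direct
 * store, no rcu_assign_pointer() and no __rcu annotation.
 */
static void stash_inactive(struct cgroup *cgrp, struct bpf_prog_array *arr)
{
	lockdep_assert_held(&cgroup_mutex);
	cgrp->bpf.inactive = arr;
}
```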
v4:
* drop the cgroup_rcu_xyz wrappers and use the RCU APIs directly; this
  should make it clearer which mutex/refcount protects each particular
  place
v3:
* amend cgroup_rcu_dereference to include percpu_ref_is_dying;
cgroup_bpf is now reference counted and we don't hold cgroup_mutex
anymore in cgroup_bpf_release
v2:
* replace xchg with rcu_swap_protected
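As a sketch of the v2 change above: unlike a bare xchg(),
rcu_swap_protected() publishes the new pointer with rcu_assign_pointer()
and hands back the old one through its second argument, under a
lockdep-checked protection condition. The function name and surrounding
details below are illustrative, not the exact code from this series:

```c
/* Illustrative sketch, kernel context assumed; not the patch itself. */
static void activate_array(struct cgroup *cgrp, enum bpf_attach_type type,
			   struct bpf_prog_array *new_array)
{
	struct bpf_prog_array *old_array = new_array;

	lockdep_assert_held(&cgroup_mutex);

	/* Publish new_array as the effective array; after the macro,
	 * old_array holds the previously published pointer.
	 */
	rcu_swap_protected(cgrp->bpf.effective[type], old_array,
			   lockdep_is_held(&cgroup_mutex));

	/* Release the old array; bpf_prog_array_free() is the kernel's
	 * helper for dropping a prog array that RCU readers may still
	 * be walking.
	 */
	bpf_prog_array_free(old_array);
}
```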
Cc: Roman Gushchin <guro@fb.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Acked-by: Roman Gushchin <guro@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Diffstat (limited to 'include/linux/bpf-cgroup.h')
-rw-r--r-- | include/linux/bpf-cgroup.h | 2 |
1 file changed, 1 insertion, 1 deletion
diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 9f100fc422c3..b631ee75762d 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -72,7 +72,7 @@ struct cgroup_bpf {
 	u32 flags[MAX_BPF_ATTACH_TYPE];
 
 	/* temp storage for effective prog array used by prog_attach/detach */
-	struct bpf_prog_array __rcu *inactive;
+	struct bpf_prog_array *inactive;
 
 	/* reference counter used to detach bpf programs after cgroup removal */
 	struct percpu_ref refcnt;