author    Daniel Borkmann <daniel@iogearbox.net>  2018-06-15 02:30:47 +0200
committer Alexei Starovoitov <ast@kernel.org>    2018-06-15 11:14:25 -0700
commit    7d1982b4e335c1b184406b7566f6041bfe313c35 (patch)
tree      885f83b1f8e96c502c2939838d22e63756c7b011 /include
parent    26bf8a89d887c0686acef0f44eaadd49abfcab03 (diff)
bpf: fix panic in prog load calls cleanup
While testing I found that when hitting the error path in bpf_prog_load() where we jump to free_used_maps, and the prog contained BPF-to-BPF calls that were JITed earlier, then we never clean up the bpf_prog_kallsyms_add() done under jit_subprogs(). Add a proper API to make BPF kallsyms deletion more clear and fix that.

Fixes: 1c2a088a6626 ("bpf: x64: add JIT support for multi-function programs")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'include')
-rw-r--r--  include/linux/filter.h | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 45fc0f5..297c56f 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -961,6 +961,9 @@ static inline void bpf_prog_kallsyms_del(struct bpf_prog *fp)
 }
 #endif /* CONFIG_BPF_JIT */
 
+void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp);
+void bpf_prog_kallsyms_del_all(struct bpf_prog *fp);
+
 #define BPF_ANC BIT(15)
 
 static inline bool bpf_needs_clear_a(const struct sock_filter *first)