author    | Marcelo Leitner <mleitner@redhat.com> | 2014-12-11 10:02:22 -0200
committer | David S. Miller <davem@davemloft.net> | 2014-12-11 14:57:08 -0500
commit    | 00c83b01d58068dfeb2e1351cca6fccf2a83fa8f (patch)
tree      | 2b00c1d5a6ba84dc4fb9f8c6f9aee3de7aaa6799 /kernel
parent    | 51f8301485d701caab10e7e9b7f9d7866f2fe3cf (diff)
Fix race condition between vxlan_sock_add and vxlan_sock_release
Currently, when trying to reuse a socket, vxlan_sock_add() grabs
vn->sock_lock, locates a reusable socket, increments its refcount, and
releases vn->sock_lock.

But vxlan_sock_release() first decrements the refcount and only then grabs
that lock. The refcount operations themselves are atomic, but since we
currently have deferred works that each hold a reference on vs->refcnt,
the following interleaving is possible, leading to a use-after-free
(especially after vxlan_igmp_leave):
    CPU 1                                 CPU 2

    deferred work                         vxlan_sock_add
      ...                                   ...
                                            spin_lock(&vn->sock_lock)
                                            vs = vxlan_find_sock();
    vxlan_sock_release
      dec vs->refcnt, reaches 0
      spin_lock(&vn->sock_lock)
                                            vxlan_sock_hold(vs), refcnt=1
                                            spin_unlock(&vn->sock_lock)
      hlist_del_rcu(&vs->hlist);
      vxlan_notify_del_rx_port(vs)
      spin_unlock(&vn->sock_lock)
So now, when we look for a reusable socket, we check that it wasn't
already freed before reusing it.
Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Fixes: 7c47cedf43a8b3 ("vxlan: move IGMP join/leave to work queue")
Signed-off-by: David S. Miller <davem@davemloft.net>