author | csjp <csjp@FreeBSD.org> | 2006-06-02 19:59:33 +0000
---|---|---
committer | csjp <csjp@FreeBSD.org> | 2006-06-02 19:59:33 +0000
commit | 2c4f67981e37d4914db61b39de9ce50520b8ab77 (patch) |
tree | 91b5bc64ab856cef269d9fab6ff3feca3e06cf2c /sys/net/if_disc.c |
parent | 420f0a56b11b92d44992ae037cd8d5e18cc582f6 (diff) |
Fix the following bpf(4) race condition which can result in a panic:
(1) bpf peer attaches to interface netif0
(2) Packet is received by netif0
(3) ifp->if_bpf pointer is checked and handed off to bpf
(4) bpf peer detaches from netif0, resulting in ifp->if_bpf being set
to NULL.
(5) ifp->if_bpf is dereferenced by bpf machinery
(6) Kaboom
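For illustration only, a minimal sketch of the pre-change consumer pattern that opens this window. The input routine here is hypothetical; only ifp->if_bpf and bpf_mtap() come from the description above.

```c
#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/if_var.h>
#include <net/bpf.h>

/*
 * Illustrative sketch only: a hypothetical driver input path showing the
 * race window.  Step numbers refer to the list above.
 */
static void
example_input(struct ifnet *ifp, struct mbuf *m)
{
	/* (3) lockless check of the hook */
	if (ifp->if_bpf != NULL) {
		/*
		 * A detaching peer (4) can set ifp->if_bpf to NULL right
		 * here, so the dereference below (5) can fault (6).
		 */
		bpf_mtap(ifp->if_bpf, m);
	}
}
```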
This race condition likely explains the various kernel panics reported
around sending SIGINT to tcpdump or dhclient processes, but it can really
result in a kernel panic anywhere frequent bpf attach and detach operations
are combined with a high packet-per-second load.
Summary of changes:
- Remove the bpf interface's "driverp" member
- When we attach bpf interfaces, we now set the ifp->if_bpf member to the
bpf interface structure. Once this is done, ifp->if_bpf should never be
NULL. [1]
- Introduce the bpf_peers_present() function, an inline operation which does
a lockless read of the bpf peer list associated with the interface. It should
be noted that the bpf code picks up the bpf interface lock before adding
or removing bpf peers; this serializes access to the bpf descriptor list,
removing the race.
- Expose the bpf_if structure in bpf.h so that the bpf_peers_present function
can use it. This also removes the opaque "struct bpf_if;" forward-declaration
hack that was there.
- Adjust all consumers of the raw if_bpf pointer to use bpf_peers_present()
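As a rough sketch, assuming a descriptor list head named bif_dlist and a per-interface mutex named bif_mtx (the exact field names in bpf.h may differ), the exposed structure and the lockless check could look like this:

```c
#include <sys/param.h>
#include <sys/queue.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/* Sketch of the bpf interface structure now exposed in net/bpf.h. */
struct bpf_if {
	LIST_ENTRY(bpf_if)	bif_next;	/* list of all bpf interfaces */
	LIST_HEAD(, bpf_d)	bif_dlist;	/* attached peers (descriptors) */
	u_int			bif_dlt;	/* link-layer type */
	u_int			bif_hdrlen;	/* link-layer header length */
	struct ifnet		*bif_ifp;	/* backing ifnet */
	struct mtx		bif_mtx;	/* serializes peer add/remove */
};

/* Lockless check: a non-empty peer list means someone wants packets. */
static __inline int
bpf_peers_present(struct bpf_if *bpf)
{
	return (!LIST_EMPTY(&bpf->bif_dlist));
}
```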
Now what happens is:
(1) Packet is received by netif0
(2) Check whether the bpf descriptor list is empty
(3) Pick up the bpf interface lock
(4) Hand packet off to process
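Building on the structure sketched above, a sketch of how the delivery side might serialize against attach and detach; the descriptor stand-in and the catch helper are named illustratively here, not taken from bpf.c.

```c
/*
 * Illustrative only: deliver one packet to every attached peer while
 * holding the bpf interface lock, i.e. steps (3) and (4) above.
 */
struct bpf_d {				/* minimal stand-in for the real peer */
	LIST_ENTRY(bpf_d)	bd_next;
	/* filter, buffers, ... */
};

static void
example_catchpacket(struct bpf_d *d, struct mbuf *m)
{
	/* Placeholder: run d's filter and copy m into d's buffers. */
}

static void
example_deliver(struct bpf_if *bp, struct mbuf *m)
{
	struct bpf_d *d;

	mtx_lock(&bp->bif_mtx);		/* (3) pick up the bpf interface lock */
	LIST_FOREACH(d, &bp->bif_dlist, bd_next)
		example_catchpacket(d, m);	/* (4) hand off to this peer */
	mtx_unlock(&bp->bif_mtx);
}
```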
From the attach/detach side:
(1) Pick up the bpf interface lock
(2) Add/remove from bpf descriptor list
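And the attach/detach side, again only as a sketch built on the same assumed field names:

```c
/*
 * Illustrative only: a peer is added to or removed from the descriptor
 * list with the bpf interface lock held, serializing against delivery.
 */
static void
example_attach_peer(struct bpf_if *bp, struct bpf_d *d)
{
	mtx_lock(&bp->bif_mtx);				/* (1) pick up the lock */
	LIST_INSERT_HEAD(&bp->bif_dlist, d, bd_next);	/* (2) add the peer */
	mtx_unlock(&bp->bif_mtx);
}

static void
example_detach_peer(struct bpf_if *bp, struct bpf_d *d)
{
	mtx_lock(&bp->bif_mtx);				/* (1) pick up the lock */
	LIST_REMOVE(d, bd_next);			/* (2) remove the peer */
	mtx_unlock(&bp->bif_mtx);
}
```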
Now that we are storing the bpf interface structure with the ifnet, there is
no need to walk the bpf interface list to locate the correct bpf interface.
We now simply look up the interface and initialize the pointer. This has the
nice side effect of changing a bpf interface attach operation from O(N) (where
N is the number of bpf interfaces) to O(1).
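A sketch of that attach-time effect (the function name below is hypothetical; the real entry points are bpfattach()/bpfattach2()):

```c
/*
 * Illustrative only: with the bpf_if stored directly on the ifnet, an
 * attach becomes a constant-time pointer assignment instead of a walk
 * over the global bpf interface list.
 */
static void
example_bpf_if_attach(struct ifnet *ifp, struct bpf_if *bp)
{
	bp->bif_ifp = ifp;
	ifp->if_bpf = bp;	/* O(1); stays non-NULL for the ifnet's lifetime */
}
```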
[1] From now on, we can no longer check ifp->if_bpf to tell us whether or
not we have any bpf peers that might be interested in receiving packets.
In collaboration with: sam@
MFC after: 1 month
Diffstat (limited to 'sys/net/if_disc.c')
-rw-r--r-- | sys/net/if_disc.c | 2 |
1 file changed, 1 insertion, 1 deletion
    diff --git a/sys/net/if_disc.c b/sys/net/if_disc.c
    index 1d87c47..8fd6d9d 100644
    --- a/sys/net/if_disc.c
    +++ b/sys/net/if_disc.c
    @@ -158,7 +158,7 @@ discoutput(struct ifnet *ifp, struct mbuf *m, struct sockaddr *dst,
     		dst->sa_family = af;
     	}
     
    -	if (ifp->if_bpf) {
    +	if (bpf_peers_present(ifp->if_bpf)) {
     		u_int af = dst->sa_family;
     		bpf_mtap2(ifp->if_bpf, &af, sizeof(af), m);
     	}