author | Eric Dumazet <eric.dumazet@gmail.com> | 2010-04-28 14:35:48 -0700
committer | David S. Miller <davem@davemloft.net> | 2010-04-28 14:35:48 -0700
commit | 4b0b72f7dd617b13abd1b04c947e15873e011a24 (patch)
tree | 16fc7bc990fa47cccb62bdb34cb23bd3c26b7a50 /include/net/sock.h
parent | cfc1fbb079b265bf69d4ceba590a2e2c1a1cde33 (diff)
download | op-kernel-dev-4b0b72f7dd617b13abd1b04c947e15873e011a24.zip, op-kernel-dev-4b0b72f7dd617b13abd1b04c947e15873e011a24.tar.gz
net: speedup udp receive path
Since commit 95766fff ([UDP]: Add memory accounting.),
each received packet needs one extra lock_sock()/release_sock() pair.
This added latency because of possible backlog handling. Then later,
ticket spinlocks added yet another latency source in case of DDoS.
This patch introduces lock_sock_bh() and unlock_sock_bh()
synchronization primitives, avoiding one atomic operation and backlog
processing.
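To make the saving concrete, the sketch below contrasts the two locking patterns around a per-packet memory-accounting step. The helper names are hypothetical and the comments only paraphrase the commit message; they are not the exact bodies of lock_sock()/release_sock().

#include <net/sock.h>

/* Hypothetical per-packet accounting step, old style: the full socket lock
 * is taken, and release_sock() also processes any packets that were queued
 * on sk->sk_backlog while the lock was owned.
 */
static void rcv_accounting_old(struct sock *sk)
{
	lock_sock(sk);
	sk_mem_reclaim_partial(sk);
	release_sock(sk);	/* backlog walk + one more slock acquisition */
}

/* Same step with the primitives added by this patch: only the spinlock in
 * sk->sk_lock is taken, with bottom halves disabled, and unlocking does no
 * backlog processing.
 */
static void rcv_accounting_new(struct sock *sk)
{
	lock_sock_bh(sk);
	sk_mem_reclaim_partial(sk);
	unlock_sock_bh(sk);
}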
skb_free_datagram_locked() uses them instead of the full-blown
lock_sock()/release_sock() pair, as sketched below. The skb is orphaned
inside the locked section so that socket memory is properly reclaimed,
and is finally freed outside of it.
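A minimal sketch of the reworked helper, reconstructed from this description rather than copied verbatim from the net/core/datagram.c change:

#include <linux/skbuff.h>
#include <net/sock.h>

void skb_free_datagram_locked(struct sock *sk, struct sk_buff *skb)
{
	lock_sock_bh(sk);
	skb_orphan(skb);		/* detach skb from sk and uncharge its receive memory */
	sk_mem_reclaim_partial(sk);	/* return forward-allocated memory while locked */
	unlock_sock_bh(sk);

	/* the skb no longer references the socket, so free it outside the lock */
	consume_skb(skb);
}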
The UDP receive path now takes the socket spinlock only once.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net/sock.h')
-rw-r--r-- | include/net/sock.h | 10
1 file changed, 10 insertions, 0 deletions
diff --git a/include/net/sock.h b/include/net/sock.h
index cf12b1e..d361c77 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1021,6 +1021,16 @@ extern void release_sock(struct sock *sk);
 				SINGLE_DEPTH_NESTING)
 #define bh_unlock_sock(__sk)	spin_unlock(&((__sk)->sk_lock.slock))
 
+static inline void lock_sock_bh(struct sock *sk)
+{
+	spin_lock_bh(&sk->sk_lock.slock);
+}
+
+static inline void unlock_sock_bh(struct sock *sk)
+{
+	spin_unlock_bh(&sk->sk_lock.slock);
+}
+
 extern struct sock *sk_alloc(struct net *net, int family,
 			     gfp_t priority,
 			     struct proto *prot);
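For context, a receive-path caller now only needs one short spinlocked section for its accounting work. The function below is a hypothetical illustration of that pattern, not code from this commit:

#include <linux/skbuff.h>
#include <net/sock.h>

/* Hypothetical: reclaim pending forward-allocated memory and read the
 * receive-queue length while holding the socket spinlock exactly once.
 */
static unsigned int udp_reclaim_and_queue_len(struct sock *sk)
{
	unsigned int len;

	lock_sock_bh(sk);
	sk_mem_reclaim_partial(sk);
	len = skb_queue_len(&sk->sk_receive_queue);
	unlock_sock_bh(sk);

	return len;
}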