path: root/net/dccp/minisocks.c
author	Eric Dumazet <edumazet@google.com>	2015-04-23 18:03:44 -0700
committer	David S. Miller <davem@davemloft.net>	2015-04-24 11:39:15 -0400
commit	b357a364c57c940ddb932224542494363df37378 (patch)
tree	84657c956c5c64a2ae058b8285271f778b1aea67 /net/dccp/minisocks.c
parent	1d8dc3d3c8f1d8ee1da9d54c5d7c8694419ade42 (diff)
inet: fix possible panic in reqsk_queue_unlink()
[ 3897.923145] BUG: unable to handle kernel NULL pointer dereference at 0000000000000080
[ 3897.931025] IP: [<ffffffffa9f27686>] reqsk_timer_handler+0x1a6/0x243

There is a race when reqsk_timer_handler() and tcp_check_req() call
inet_csk_reqsk_queue_unlink() on the same req at the same time.

Before commit fa76ce7328b2 ("inet: get rid of central tcp/dccp listener
timer"), the listener spinlock was held and the race could not happen.

To solve this bug, we change reqsk_queue_unlink() to not assume the req
must be found, and we return a status, to conditionally release a
refcount on the request sock.

This also means tcp_check_req() in the non-fastopen case might or might
not consume the req refcount, so tcp_v6_hnd_req() & tcp_v4_hnd_req()
have to properly handle this. (Same remark for dccp_check_req() and its
callers.)

inet_csk_reqsk_queue_drop() is now too big to be inlined, as it is
called 4 times in tcp and 3 times in dccp.

Fixes: fa76ce7328b2 ("inet: get rid of central tcp/dccp listener timer")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
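As a rough illustration of the fix described above, a minimal sketch of the reworked helpers might look like the following. This is not the verbatim patch: the hash-bucket walk, the field names (syn_table, rsk_hash, dl_next) and the syn_wait_lock locking are assumptions inferred from the commit message.

/* Sketch only -- names and locking details are assumptions.
 * Unlink req from the listener's SYN table; report whether it was found.
 */
static bool reqsk_queue_unlink(struct request_sock_queue *queue,
			       struct request_sock *req)
{
	struct listen_sock *lopt = queue->listen_opt;
	bool found = false;

	if (lopt) {
		struct request_sock **prev;

		spin_lock(&queue->syn_wait_lock);
		for (prev = &lopt->syn_table[req->rsk_hash]; *prev;
		     prev = &(*prev)->dl_next) {
			if (*prev == req) {
				*prev = req->dl_next;
				found = true;
				break;
			}
		}
		spin_unlock(&queue->syn_wait_lock);
	}
	/* Cancel the per-request timer; drop its reference if it was armed. */
	if (del_timer_sync(&req->rsk_timer))
		reqsk_put(req);
	return found;
}

/* No longer inlined: called several times from both TCP and DCCP.
 * The queue accounting and the refcount are only released when the req
 * was actually unlinked, which closes the race with reqsk_timer_handler().
 */
void inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req)
{
	if (reqsk_queue_unlink(&inet_csk(sk)->icsk_accept_queue, req)) {
		reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
		reqsk_put(req);
	}
}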
Diffstat (limited to 'net/dccp/minisocks.c')
-rw-r--r--	net/dccp/minisocks.c	3
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/net/dccp/minisocks.c b/net/dccp/minisocks.c
index 5f56666..30addee 100644
--- a/net/dccp/minisocks.c
+++ b/net/dccp/minisocks.c
@@ -186,8 +186,7 @@ struct sock *dccp_check_req(struct sock *sk, struct sk_buff *skb,
 	if (child == NULL)
 		goto listen_overflow;
 
-	inet_csk_reqsk_queue_unlink(sk, req);
-	inet_csk_reqsk_queue_removed(sk, req);
+	inet_csk_reqsk_queue_drop(sk, req);
 	inet_csk_reqsk_queue_add(sk, req, child);
 out:
 	return child;
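For the callers mentioned in the commit message, the practical consequence is that a NULL return from dccp_check_req()/tcp_check_req() may leave the request-sock refcount unconsumed. A hypothetical caller pattern, with illustrative names only (not taken from the patch):

	/* Illustrative only: release the reference ourselves when the
	 * child creation failed and req was not consumed.
	 */
	nsk = dccp_check_req(sk, skb, req);
	if (!nsk)
		reqsk_put(req);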