path: root/net/ipv4
* netfilter: fix export secctx error handling (Pablo Neira Ayuso, 2011-01-06, 1 file, -1/+1)

    In 1ae4de0cdf855305765592647025bde55e85e451, the secctx was exported via the /proc/net/netfilter/nf_conntrack and ctnetlink interfaces instead of the secmark. That patch introduced the use of security_secid_to_secctx() which may return a non-zero value on error.

    In one of my setups, I have NF_CONNTRACK_SECMARK enabled but no security modules. Thus, security_secid_to_secctx() returns a negative value that results in the breakage of the /proc and `conntrack -L' outputs. To fix this, we skip the inclusion of secctx if the aforementioned function fails.

    This patch also fixes the dynamic netlink message size calculation if security_secid_to_secctx() returns an error, since its logic is also wrong. This problem exists in Linux kernel >= 2.6.37.

    Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* ipv4: IP defragmentation must be ECN aware (Eric Dumazet, 2011-01-06, 1 file, -0/+34)

    RFC 3168 (The Addition of Explicit Congestion Notification to IP) states:

    5.3. Fragmentation

       ECN-capable packets MAY have the DF (Don't Fragment) bit set. Reassembly of a fragmented packet MUST NOT lose indications of congestion. In other words, if any fragment of an IP packet to be reassembled has the CE codepoint set, then one of two actions MUST be taken:

          * Set the CE codepoint on the reassembled packet. However, this MUST NOT occur if any of the other fragments contributing to this reassembly carries the Not-ECT codepoint.

          * The packet is dropped, instead of being reassembled, for any other reason.

    This patch implements this requirement for IPv4, choosing the first action: if one fragment had the Not-ECT codepoint, the reassembled frame has Not-ECT; else, if one fragment had the CE codepoint, the reassembled frame has CE.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
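    The merge rule chosen above reduces to a small fold over the fragments' ECN codepoints. A minimal user-space sketch of that rule (illustrative names, not the kernel code; the RFC's alternative of dropping the packet is ignored):

      #include <stdio.h>

      enum ecn { ECN_NOT_ECT = 0, ECN_ECT_1 = 1, ECN_ECT_0 = 2, ECN_CE = 3 };

      /* Fold the fragments' codepoints into one value for the reassembled
       * datagram: Not-ECT on any fragment wins, otherwise CE wins, otherwise
       * the datagram stays ECT. */
      static enum ecn ecn_merge(const enum ecn *frag, int n)
      {
          int saw_not_ect = 0, saw_ce = 0;

          for (int i = 0; i < n; i++) {
              if (frag[i] == ECN_NOT_ECT)
                  saw_not_ect = 1;
              else if (frag[i] == ECN_CE)
                  saw_ce = 1;
          }
          if (saw_not_ect)
              return ECN_NOT_ECT;   /* MUST NOT mark CE if any fragment is Not-ECT */
          if (saw_ce)
              return ECN_CE;        /* keep the congestion indication */
          return ECN_ECT_0;
      }

      int main(void)
      {
          enum ecn frags[] = { ECN_ECT_0, ECN_CE, ECN_ECT_0 };
          printf("reassembled codepoint: %d\n", ecn_merge(frags, 3)); /* 3 = CE */
          return 0;
      }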
* Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller, 2011-01-04, 1 file, -2/+6)
* ipv4/route.c: respect prefsrc for local routes (Joel Sing, 2011-01-04, 1 file, -2/+6)

    The preferred source address is currently ignored for local routes, which results in all local connections having a src address that is the same as the local dst address. Fix this by respecting the preferred source address when it is provided for local routes.

    This bug can be demonstrated as follows:

      # ifconfig dummy0 192.168.0.1
      # ip route show table local | grep local.*dummy0
      local 192.168.0.1 dev dummy0  proto kernel  scope host  src 192.168.0.1
      # ip route change table local local 192.168.0.1 dev dummy0 \
          proto kernel scope host src 127.0.0.1
      # ip route show table local | grep local.*dummy0
      local 192.168.0.1 dev dummy0  proto kernel  scope host  src 127.0.0.1

    We now establish a local connection and verify the source IP address selection:

      # nc -l 192.168.0.1 3128 &
      # nc 192.168.0.1 3128 &
      # netstat -ant | grep 192.168.0.1:3128.*EST
      tcp   0   0 192.168.0.1:3128    192.168.0.1:33228   ESTABLISHED
      tcp   0   0 192.168.0.1:33228   192.168.0.1:3128    ESTABLISHED

    Signed-off-by: Joel Sing <jsing@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller, 2010-12-26, 3 files, -7/+14)

    Conflicts:
      net/ipv4/fib_frontend.c
* ipv4: don't create routes on down devices (Eric Dumazet, 2010-12-25, 1 file, -3/+4)

    In ip_route_output_slow(), instead of allowing a route to be created on a device that is not up, report -ENETUNREACH immediately.

      # ip tunnel add mode ipip remote 10.16.0.164 local 10.16.0.72 dev eth0
      # (Note : tunl1 is down)
      # ping -I tunl1 10.1.2.3
      PING 10.1.2.3 (10.1.2.3) from 192.168.18.5 tunl1: 56(84) bytes of data.
      (nothing)
      # ./a.out tunl1
      # ip tunnel del tunl1
      Message from syslogd@shelby at Dec 22 10:12:08 ...
      kernel: unregister_netdevice: waiting for tunl1 to become free. Usage count = 3

    After patch:

      # ping -I tunl1 10.1.2.3
      connect: Network is unreachable

    Reported-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Reviewed-by: Octavian Purdila <opurdila@ixiacom.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* Revert "ipv4: Allow configuring subnets as local addresses" (David S. Miller, 2010-12-23, 1 file, -2/+8)

    This reverts commit 4465b469008bc03b98a1b8df4e9ae501b6c69d4b.

    Conflicts:
      net/ipv4/fib_frontend.c

    As reported by Ben Greear, this causes regressions:

    > Change 4465b469008bc03b98a1b8df4e9ae501b6c69d4b caused rules to stop matching the input device properly because the FLOWI_FLAG_MATCH_ANY_IIF is always defined in ip_dev_find().
    >
    > This breaks rules such as:
    >
    > ip rule add pref 512 lookup local
    > ip rule del pref 0 lookup local
    > ip link set eth2 up
    > ip -4 addr add 172.16.0.102/24 broadcast 172.16.0.255 dev eth2
    > ip rule add to 172.16.0.102 iif eth2 lookup local pref 10
    > ip rule add iif eth2 lookup 10001 pref 20
    > ip route add 172.16.0.0/24 dev eth2 table 10001
    > ip route add unreachable 0/0 table 10001
    >
    > If you had a second interface 'eth0' that was on a different subnet, pinging a system on that interface would fail:
    >
    >   [root@ct503-60 ~]# ping 192.168.100.1
    >   connect: Invalid argument

    Reported-by: Ben Greear <greearb@candelatech.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: fix listening_get_next() (Eric Dumazet, 2010-12-23, 1 file, -2/+2)

    Alexey Vlasov found that /proc/net/tcp could sometimes loop and display millions of sockets in LISTEN state.

    In 2.6.29, when we converted TCP hash tables to RCU, we left two sk_next() calls in listening_get_next(). We must instead use sk_nulls_next() to properly detect the end of a chain.

    Reported-by: Alexey Vlasov <renton@renton.name>
    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: cleanup of cwnd initialization in tcp_init_metrics() (Jiri Kosina, 2010-12-23, 1 file, -17/+12)

    Commit 86bcebafc5e7f5 ("tcp: fix >2 iw selection") fixed a case in which congestion window initialization had been mistakenly omitted, by introducing a cwnd label and putting a backwards goto at the end of the function.

    This makes the code unnecessarily tricky to read and understand at first sight. Shuffle the code around a little bit to make it more obvious.

    Signed-off-by: Jiri Kosina <jkosina@suse.cz>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* TCP: increase default initial receive window (Nandita Dukkipati, 2010-12-20, 1 file, -3/+8)

    This patch changes the default initial receive window to 10 mss (defined constant). The default window is limited to the maximum of 10*1460 and 2*mss (when mss > 1460).

    draft-ietf-tcpm-initcwnd-00 is a proposal to the IETF that recommends increasing TCP's initial congestion window to 10 mss or about 15KB. Leading up to this proposal were several large-scale live Internet experiments with an initial congestion window of 10 mss (IW10), where we showed that the average latency of HTTP responses improved by approximately 10%. This was accompanied by a slight increase in retransmission rate (0.5%), most of which is coming from applications opening multiple simultaneous connections. To understand the extreme worst case scenarios, and fairness issues (IW10 versus IW3), we further conducted controlled testbed experiments. We came away finding minimal negative impact even under low link bandwidths (dial-ups) and small buffers. These results are extremely encouraging to adopting IW10.

    However, an initial congestion window of 10 mss is useless unless a TCP receiver advertises an initial receive window of at least 10 mss. Fortunately, in the large-scale Internet experiments we found that most widely used operating systems advertised large initial receive windows of 64KB, allowing us to experiment with a wide range of initial congestion windows. Linux systems were among the few exceptions that advertised a small receive window of 6KB. The purpose of this patch is to fix this shortcoming.

    References:
    1. A comprehensive list of all IW10 references to date: http://code.google.com/speed/protocols/tcpm-IW10.html
    2. Paper describing results from large-scale Internet experiments with IW10: http://ccr.sigcomm.org/drupal/?q=node/621
    3. Controlled testbed experiments under worst case scenarios and a fairness study: http://www.ietf.org/proceedings/79/slides/tcpm-0.pdf
    4. Raw test data from testbed experiments (Linux senders/receivers) with initial congestion and receive windows of both 10 mss: http://research.csc.ncsu.edu/netsrv/?q=content/iw10
    5. Internet-Draft, Increasing TCP's Initial Window: https://datatracker.ietf.org/doc/draft-ietf-tcpm-initcwnd/

    Signed-off-by: Nandita Dukkipati <nanditad@google.com>
    Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
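    The first paragraph's sizing rule can be written down directly. A minimal sketch of that arithmetic with illustrative names (this is not the kernel's TCP code):

      #include <stdio.h>

      /* Default initial receive window: 10 segments, capped in bytes at
       * max(10 * 1460, 2 * mss) as described above. */
      static unsigned int default_init_rwnd_bytes(unsigned int mss)
      {
          unsigned int limit = 10 * 1460;
          unsigned int rwnd  = 10 * mss;

          if (2 * mss > limit)
              limit = 2 * mss;      /* large MSS: still allow two segments */
          if (rwnd > limit)
              rwnd = limit;
          return rwnd;
      }

      int main(void)
      {
          printf("mss 1460 -> %u bytes\n", default_init_rwnd_bytes(1460)); /* 14600 */
          printf("mss 9000 -> %u bytes\n", default_init_rwnd_bytes(9000)); /* 18000 */
          return 0;
      }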
* ipv4: Flush per-ns routing cache more sanely (David S. Miller, 2010-12-20, 2 files, -41/+29)

    Flush the routing cache only of entries that match the network namespace in which the purge event occurred.

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
* Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller, 2010-12-17, 2 files, -0/+2)

    Conflicts:
      drivers/net/bnx2x/bnx2x.h
      drivers/net/wireless/iwlwifi/iwl-1000.c
      drivers/net/wireless/iwlwifi/iwl-6000.c
      drivers/net/wireless/iwlwifi/iwl-core.h
      drivers/vhost/vhost.c
* net: fix nulls list corruptions in sk_prot_alloc (Octavian Purdila, 2010-12-16, 2 files, -0/+2)

    Special care is taken inside sk_port_alloc to avoid overwriting skc_node/skc_nulls_node. We should also avoid overwriting skc_bind_node/skc_portaddr_node.

    The patch fixes the following crash:

      BUG: unable to handle kernel paging request at fffffffffffffff0
      IP: [<ffffffff812ec6dd>] udp4_lib_lookup2+0xad/0x370
      [<ffffffff812ecc22>] __udp4_lib_lookup+0x282/0x360
      [<ffffffff812ed63e>] __udp4_lib_rcv+0x31e/0x700
      [<ffffffff812bba45>] ? ip_local_deliver_finish+0x65/0x190
      [<ffffffff812bbbf8>] ? ip_local_deliver+0x88/0xa0
      [<ffffffff812eda35>] udp_rcv+0x15/0x20
      [<ffffffff812bba45>] ip_local_deliver_finish+0x65/0x190
      [<ffffffff812bbbf8>] ip_local_deliver+0x88/0xa0
      [<ffffffff812bb2cd>] ip_rcv_finish+0x32d/0x6f0
      [<ffffffff8128c14c>] ? netif_receive_skb+0x99c/0x11c0
      [<ffffffff812bb94b>] ip_rcv+0x2bb/0x350
      [<ffffffff8128c14c>] netif_receive_skb+0x99c/0x11c0

    Signed-off-by: Leonard Crestez <lcrestez@ixiacom.com>
    Signed-off-by: Octavian Purdila <opurdila@ixiacom.com>
    Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: Use skb_checksum_start_offset() (Michał Mirosław, 2010-12-16, 1 file, -1/+1)

    Replace skb->csum_start - skb_headroom(skb) with skb_checksum_start_offset().

    Note for usb/smsc95xx: skb->data - skb->head == skb_headroom(skb).

    Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: Abstract default MTU metric calculation behind an accessor (David S. Miller, 2010-12-14, 1 file, -9/+20)

    Like RTAX_ADVMSS, make the default calculation go through a dst_ops method rather than caching the computation in the routing cache entries.

    Now dst metrics are pretty much left as-is when new entries are created, thus optimizing metric sharing becomes a real possibility.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: Abstract default ADVMSS behind an accessor (David S. Miller, 2010-12-13, 3 files, -13/+27)

    Make all RTAX_ADVMSS metric accesses go through a new helper function, dst_metric_advmss().

    Leave the actual default metric as "zero" in the real metric slot, and compute the actual default value dynamically via a new dst_ops AF-specific callback.

    For stacked IPSEC routes, we use the advmss of the path, which preserves existing behavior. Unlike ipv4/ipv6, DECnet ties the advmss to the mtu and thus updates advmss on pmtu updates. This inconsistency in advmss handling results in more raw metric accesses than I wish we ended up with.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: add limits to ip_default_ttl (Eric Dumazet, 2010-12-13, 1 file, -2/+5)

    ip_default_ttl should be between 1 and 255.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* ipv4: Don't pre-seed hoplimit metric (David S. Miller, 2010-12-12, 7 files, -10/+9)

    Always go through a new ip4_dst_hoplimit() helper, just like ipv6.

    This allowed several simplifications:

    1) The interim dst_metric_hoplimit() can go, as it is no longer used.

    2) The sysctl_ip_default_ttl entry no longer needs to use ipv4_doint_and_flush, since the sysctl is not cached in routing cache metrics any longer.

    3) ipv4_doint_and_flush no longer needs to be exported and therefore can be marked static.

    When ipv4_doint_and_flush_strategy was removed some time ago, the external declaration in ip.h was mistakenly left around, so kill that off too.

    We have to move the sysctl_ip_default_ttl declaration into ipv4's route cache definition header net/route.h, because currently net/ip.h (where the declaration lives now) has a back dependency on net/route.h.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: Abstract RTAX_HOPLIMIT metric accesses behind helper (David S. Miller, 2010-12-12, 5 files, -5/+5)

    Signed-off-by: David S. Miller <davem@davemloft.net>
* xfrm: Traffic Flow Confidentiality for IPv4 ESP (Martin Willi, 2010-12-10, 1 file, -8/+24)

    Add TFC padding to all packets smaller than the boundary configured on the xfrm state. If the boundary is larger than the PMTU, limit padding to the PMTU.

    Signed-off-by: Martin Willi <martin@strongswan.org>
    Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: David S. Miller <davem@davemloft.net>
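    The padding rule above ("pad up to the boundary, but never beyond the PMTU") can be sketched as a length computation. Illustrative names, not the xfrm code:

      #include <stdio.h>

      static unsigned int tfc_padded_len(unsigned int payload,
                                         unsigned int boundary,
                                         unsigned int pmtu_payload)
      {
          unsigned int target = payload;

          if (boundary && payload < boundary)
              target = boundary;         /* hide the real length */
          if (target > pmtu_payload)
              target = pmtu_payload;     /* never exceed what the PMTU allows */
          if (target < payload)
              target = payload;          /* and never truncate real data */
          return target;
      }

      int main(void)
      {
          printf("%u\n", tfc_padded_len(100, 1000, 1400));   /* 1000: padded to boundary */
          printf("%u\n", tfc_padded_len(100, 2000, 1400));   /* 1400: limited by PMTU */
          printf("%u\n", tfc_padded_len(1200, 1000, 1400));  /* 1200: already large enough */
          return 0;
      }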
* net: optimize INET input path further (Eric Dumazet, 2010-12-09, 1 file, -4/+3)

    Followup of commit b178bb3dfc30 (net: reorder struct sock fields).

    Optimize the INET input path a bit further, by:

    1) moving sk_refcnt close to sk_lock. This reduces the number of dirtied cache lines by one on 64-bit arches (with a 64-byte cache line size).

    2) moving inet_daddr & inet_rcv_saddr to the beginning of sk (same cache line as hash / family / bound_dev_if / nulls_node). This reduces the number of accessed cache lines in lookups by one, and doesn't increase the size of inet and timewait socks. inet and tw sockets now share the same place-holder for these fields.

    Before patch:

      offsetof(struct sock, sk_refcnt)           = 0x10
      offsetof(struct sock, sk_lock)             = 0x40
      offsetof(struct sock, sk_receive_queue)    = 0x60
      offsetof(struct inet_sock, inet_daddr)     = 0x270
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x274

    After patch:

      offsetof(struct sock, sk_refcnt)           = 0x44
      offsetof(struct sock, sk_lock)             = 0x48
      offsetof(struct sock, sk_receive_queue)    = 0x68
      offsetof(struct inet_sock, inet_daddr)     = 0x0
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x4

    compute_score() (udp or tcp) now uses a single cache line per ignored item, instead of two.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
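    The reasoning above is entirely about field offsets relative to 64-byte cache lines. A standalone sketch of that kind of layout check, on a made-up struct rather than struct sock:

      #include <stddef.h>
      #include <stdio.h>

      struct demo_sock {
          unsigned int   daddr;         /* hot lookup keys packed together ... */
          unsigned int   rcv_saddr;     /* ... so a lookup touches one line    */
          unsigned short family;
          int            bound_dev_if;
          char           cold[40];      /* fields the lookup never reads       */
          int            refcnt;
      };

      int main(void)
      {
          printf("daddr        at 0x%zx\n", offsetof(struct demo_sock, daddr));
          printf("rcv_saddr    at 0x%zx\n", offsetof(struct demo_sock, rcv_saddr));
          printf("bound_dev_if at 0x%zx\n", offsetof(struct demo_sock, bound_dev_if));
          printf("keys share a 64B line? %s\n",
                 offsetof(struct demo_sock, daddr) / 64 ==
                 offsetof(struct demo_sock, bound_dev_if) / 64 ? "yes" : "no");
          return 0;
      }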
* net: Abstract away all dst_entry metrics accesses (David S. Miller, 2010-12-09, 3 files, -35/+44)

    Use helper functions to hide all direct accesses, especially writes, to dst_entry metrics values.

    This will allow us to:

    1) More easily change how the metrics are stored.

    2) Implement COW for metrics.

    In particular this will help us put metrics into the inetpeer cache if that is what we end up doing. We can make the _metrics member a pointer instead of an array, initially have it point at the read-only metrics in the FIB, and then on the first set grab an inetpeer entry and point the _metrics member there.

    Signed-off-by: David S. Miller <davem@davemloft.net>
    Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
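    A hedged sketch of the copy-on-write scheme the last paragraph describes: keep a pointer to shared read-only defaults and only allocate a private copy on the first write, with every access going through helpers. Types and names here are invented, not the kernel's:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      #define METRICS_MAX 4
      static const unsigned int fib_default_metrics[METRICS_MAX] = { 1500, 536, 0, 64 };

      struct demo_dst {
          const unsigned int *metrics;   /* shared read-only defaults until first write */
          int metrics_owned;
      };

      static unsigned int dst_metric_get(const struct demo_dst *d, int i)
      {
          return d->metrics[i];
      }

      static void dst_metric_set(struct demo_dst *d, int i, unsigned int v)
      {
          if (!d->metrics_owned) {                          /* first write: COW */
              unsigned int *copy = malloc(sizeof(fib_default_metrics));
              memcpy(copy, d->metrics, sizeof(fib_default_metrics));
              d->metrics = copy;
              d->metrics_owned = 1;
          }
          ((unsigned int *)d->metrics)[i] = v;
      }

      int main(void)
      {
          struct demo_dst d = { .metrics = fib_default_metrics };
          dst_metric_set(&d, 0, 1400);                      /* triggers the copy */
          printf("dst mtu %u, shared default still %u\n",
                 dst_metric_get(&d, 0), fib_default_metrics[0]);
          return 0;
      }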
* Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (David S. Miller, 2010-12-08, 8 files, -26/+36)

    Conflicts:
      drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
      net/llc/af_llc.c
* tcp: protect sysctl_tcp_cookie_size reads (Eric Dumazet, 2010-12-08, 1 file, -12/+15)

    Make sure sysctl_tcp_cookie_size is read once in tcp_cookie_size_check(), or we might return an illegal value to caller if sysctl_tcp_cookie_size is changed by another cpu.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Cc: Ben Hutchings <bhutchings@solarflare.com>
    Cc: William Allen Simpson <william.allen.simpson@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: avoid a possible divide by zero (Eric Dumazet, 2010-12-08, 1 file, -2/+4)

    sysctl_tcp_tso_win_divisor might be set to zero while one cpu runs in tcp_tso_should_defer().

    Make sure we don't allow a divide by zero by reading sysctl_tcp_tso_win_divisor exactly once.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
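    The fix pattern ("read the tunable exactly once") in a minimal user-space form; names are illustrative, and the volatile global merely stands in for a sysctl another cpu may rewrite at any time:

      #include <stdio.h>

      static volatile unsigned int tso_win_divisor = 3;   /* stand-in for the sysctl */

      static unsigned int defer_limit(unsigned int cwnd_bytes)
      {
          unsigned int div = tso_win_divisor;   /* single read: cannot become 0 below */

          if (div)
              return cwnd_bytes / div;
          return cwnd_bytes;                    /* divisor of 0 means "disabled" */
      }

      int main(void)
      {
          printf("%u\n", defer_limit(30000));   /* 10000 with a divisor of 3 */
          return 0;
      }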
* tcp: Replace time wait bucket msg by counter (Tom Herbert, 2010-12-08, 2 files, -1/+2)

    Rather than printing the message to the log, use a MIB counter to keep track of the count of occurrences of time wait bucket overflow. Reduces spam in logs.

    Signed-off-by: Tom Herbert <therbert@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: Bug fix in initialization of receive window (Nandita Dukkipati, 2010-12-08, 1 file, -5/+4)

    The bug has to do with boundary checks on the initial receive window. If the initial receive window falls between init_cwnd and the receive window specified by the user, the initial window is incorrectly brought down to init_cwnd. The correct behavior is to allow it to remain unchanged.

    Signed-off-by: Nandita Dukkipati <nanditad@google.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* inet: Fix __inet_inherit_port() to correctly increment bsockets and num_owners (Nagendra Tomar, 2010-11-28, 1 file, -2/+1)

    inet sockets corresponding to passive connections are added to the bind hash using ___inet_inherit_port(). These sockets are later removed from the bind hash using __inet_put_port().

    These two functions are not exactly symmetrical. __inet_put_port() decrements hashinfo->bsockets and tb->num_owners, whereas ___inet_inherit_port() does not increment them. This results in both of these going to -ve values.

    This patch fixes this by calling inet_bind_hash() from ___inet_inherit_port(), which does the right thing.

    'bsockets' and 'num_owners' were introduced by commit a9d8f9110d7e953c (inet: Allowing more than 64k connections and heavily optimize bind(0)).

    Signed-off-by: Nagendra Singh Tomar <tomer_iisc@yahoo.com>
    Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
    Acked-by: Evgeniy Polyakov <zbr@ioremap.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: restrict net.ipv4.tcp_adv_win_scale (#20312) (Alexey Dobriyan, 2010-11-28, 1 file, -1/+5)

    tcp_win_from_space() does the following:

      if (sysctl_tcp_adv_win_scale <= 0)
          return space >> (-sysctl_tcp_adv_win_scale);
      else
          return space - (space >> sysctl_tcp_adv_win_scale);

    "space" is int.

    As per C99 6.5.7 (3), shifting an int by 32 or more bits is undefined behaviour. Indeed, if sysctl_tcp_adv_win_scale is exactly 32, space >> 32 equals space and the function returns 0, which means we busyloop in tcp_fixup_rcvbuf().

    Restrict net.ipv4.tcp_adv_win_scale to [-31, 31].

    Fixes https://bugzilla.kernel.org/show_bug.cgi?id=20312

    Steps to reproduce:

      echo 32 >/proc/sys/net/ipv4/tcp_adv_win_scale
      wget www.kernel.org
      [softlockup]

    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
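    A standalone sketch of the computation quoted above together with the [-31, 31] clamp; shifting a 32-bit int by 32 or more is the undefined behaviour the sysctl limit now rules out:

      #include <stdio.h>

      static int clamp_adv_win_scale(int scale)
      {
          if (scale < -31) return -31;
          if (scale >  31) return  31;
          return scale;
      }

      static int win_from_space(int space, int scale)
      {
          scale = clamp_adv_win_scale(scale);
          if (scale <= 0)
              return space >> (-scale);
          return space - (space >> scale);
      }

      int main(void)
      {
          printf("%d\n", win_from_space(65536, 2));    /* 49152: a quarter reserved */
          printf("%d\n", win_from_space(65536, 32));   /* clamped to 31, no UB      */
          return 0;
      }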
* netns: Don't leak others' openreq-s in proc (Pavel Emelyanov, 2010-11-27, 1 file, -1/+3)

    The /proc/net/tcp leaks openreq sockets from other namespaces.

    Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: Make TCP_MAXSEG minimum more correct (David S. Miller, 2010-11-24, 1 file, -1/+1)

    Use TCP_MIN_MSS instead of constant 64.

    Reported-by: Min Zhang <mzhang@mvista.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: allow GFP_HIGHMEM in __vmalloc() (Eric Dumazet, 2010-11-21, 1 file, -1/+1)

    We forgot to use __GFP_HIGHMEM in several __vmalloc() calls.

    In ceph, add the missing flag.

    In fib_trie.c, xfrm_hash.c and request_sock.c, using vzalloc() is cleaner and allows using HIGHMEM pages as well.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: RCU conversion of dev_getbyhwaddr() and arp_ioctl() (Eric Dumazet, 2010-12-08, 1 file, -8/+9)

    On Sunday, 5 December 2010 at 09:19 +0100, Eric Dumazet wrote:

    > Hmm...
    >
    > If somebody can explain why RTNL is held in arp_ioctl() (and therefore in arp_req_delete()), we might first remove RTNL use in arp_ioctl() so that your patch can be applied.
    >
    > Right now it is not good, because RTNL won't necessarily be held when you are going to call arp_invalidate()?

    While doing this analysis, I found a refcount bug in llc; I'll send a patch for net-2.6. Meanwhile, here is the patch for net-next-2.6. Your patch can then be applied after mine. Thanks.

    [PATCH] net: RCU conversion of dev_getbyhwaddr() and arp_ioctl()

    dev_getbyhwaddr() was called under RTNL. Rename it to dev_getbyhwaddr_rcu() and change all its callers to now use RCU locking instead of RTNL.

    Change arp_ioctl() to use RCU instead of RTNL locking.

    Note: this fixes a dev refcount bug in llc.

    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: arp: use assignment (Changli Gao, 2010-12-06, 1 file, -1/+1)

    arp_filter() is consulted only when dont_send is 0, so we can simply assign the return value of arp_filter() to dont_send instead.

    Signed-off-by: Changli Gao <xiaosuo@gmail.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: kill an RCU warning in inet_fill_link_af() (Eric Dumazet, 2010-12-06, 1 file, -4/+4)

    Commits 9f0f7272 (ipv4: AF_INET link address family) and cf7afbfeb8c (rtnl: make link af-specific updates atomic) used incorrect __in_dev_get_rcu() in RTNL-protected contexts, triggering PROVE_RCU warnings.

    Switch to __in_dev_get_rtnl(), which is more appropriate, since we hold RTNL.

    Based on a report and initial patch from Amerigo Wang.

    Reported-by: Amerigo Wang <amwang@redhat.com>
    Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
    Cc: Thomas Graf <tgraf@infradead.org>
    Reviewed-by: WANG Cong <amwang@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: use TCP_BASE_MSS to set basic mss value (Shan Wei, 2010-12-02, 1 file, -1/+1)

    TCP_BASE_MSS is defined but not used. Commit 5d424d5a introduced this macro, so use it to initialize sysctl_tcp_base_mss.

      commit 5d424d5a674f782d0659a3b66d951f412901faee
      Author: John Heffner <jheffner@psc.edu>
      Date:   Mon Mar 20 17:53:41 2006 -0800

          [TCP]: MTU probing

    Signed-off-by: Shan Wei <shanwei@cn.fujitsu.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* timewait_sock: Create and use getpeer op (David S. Miller, 2010-12-01, 2 files, -30/+35)

    The only thing AF-specific about remembering the timestamp for a time-wait TCP socket is getting the peer.

    Abstract that behind a new timewait_sock_ops vector.

    Support for real IPV6 sockets is not filled in yet, but curiously this makes timewait recycling start to work for v4-mapped ipv6 sockets.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* inetpeer: Kill use of inet_peer_address_t typedef (David S. Miller, 2010-12-01, 1 file, -4/+4)

    They are verboten these days.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* ipip: add module alias for tunl0 tunnel device (Stephen Hemminger, 2010-12-01, 1 file, -0/+1)

    If ipip is built as a module, the 'ip tunnel add' command would fail because the ipip module was not being autoloaded. Adding an alias for the tunl0 device name causes dev_load() to autoload it when needed.

    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* gre: add module alias for gre0 tunnel device (Stephen Hemminger, 2010-12-01, 1 file, -0/+1)

    If gre is built as a module, the 'ip tunnel add' command would fail because the ip_gre module was not being autoloaded. Adding an alias for the gre0 device name causes dev_load() to autoload it when needed.

    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* gre: minor cleanups (Stephen Hemminger, 2010-12-01, 1 file, -2/+2)

    Use strcpy() rather than sprintf() for the case where the name is being generated. Fix indentation.

    Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* inet: Turn ->remember_stamp into ->get_peer in connection AF ops (David S. Miller, 2010-11-30, 2 files, -28/+38)

    Then we can make a completely generic tcp_remember_stamp() that uses ->get_peer() as a helper, minimizing the AF-specific code and minimizing the eventual code duplication when we implement the ipv6 side of TW recycling.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* ipv6: Add infrastructure to bind inet_peer objects to routes (David S. Miller, 2010-11-30, 1 file, -0/+2)

    They are only allowed on cached ipv6 routes.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* inetpeer: Add v6 peers tree, abstract root properly (David S. Miller, 2010-11-30, 1 file, -9/+18)

    Add the ipv6 peer tree instance, and adapt remaining direct references to 'v4_peers' as needed.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* inetpeer: Abstract address comparisons (David S. Miller, 2010-11-30, 1 file, -8/+27)

    Now v4 and v6 addresses will both work properly.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* inetpeer: Make inet_getpeer() take an inet_peer_adress_t pointer (David S. Miller, 2010-11-30, 4 files, -9/+9)

    And make an inet_getpeer_v4() helper, update callers.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* inetpeer: Introduce inet_peer_address_t (David S. Miller, 2010-11-30, 2 files, -9/+9)

    Currently only the v4 aspect is used, but this will change.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* inetpeer: Abstract out the tree root accesses (David S. Miller, 2010-11-30, 1 file, -50/+69)

    Instead of directly accessing "peer", change the code to operate using a "struct inet_peer_base *" pointer. This will facilitate the addition of a separate tree for ipv6 peer entries.

    Signed-off-by: David S. Miller <davem@davemloft.net>
* net: add some KERN_CONT markers to continuation lines (Uwe Kleine-König, 2010-11-28, 1 file, -16/+16)

    Cc: netdev@vger.kernel.org
    Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* rtnl: make link af-specific updates atomic (Thomas Graf, 2010-11-27, 1 file, -5/+21)

    As David pointed out correctly, updates to af-specific attributes are currently not atomic. If multiple changes are requested and one of them fails, previous updates may have been applied already, leaving the link behind in an undefined state.

    This patch splits the function parse_link_af() into two functions, validate_link_af() and set_link_af(). validate_link_af() is placed in the validate_linkmsg() check to catch errors as early as possible, before any changes to the link have been made. set_link_af() is called to commit the changes later.

    This method is not fail-proof: while it is currently sufficient to make set_link_af() unable to fail and thus 100% atomic, the validation-function method will not be able to detect all error scenarios in the future; there will likely always be errors depending on states which are e.g. not protected by rtnl_mutex and thus may change between validation and setting.

    Also, instead of silently ignoring unknown address families and config blocks for address families which did not register a set function, the errors EAFNOSUPPORT respectively EOPNOTSUPP are returned to avoid committing 4 out of 5 update requests without notifying the user.

    Signed-off-by: Thomas Graf <tgraf@infradead.org>
    Signed-off-by: David S. Miller <davem@davemloft.net>
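    The validate-then-commit split described above, reduced to a plain-C sketch with invented types: every requested change is checked before any of them is applied, so a failing request cannot leave the object half-updated:

      #include <stdio.h>

      struct change { int id; int value; };

      static int validate_change(const struct change *c)
      {
          return c->value >= 0 ? 0 : -1;     /* stand-in for per-AF validation */
      }

      static void apply_change(int *state, const struct change *c)
      {
          state[c->id] = c->value;           /* stand-in for the commit step   */
      }

      static int update_link(int *state, const struct change *reqs, int n)
      {
          for (int i = 0; i < n; i++)        /* phase 1: validate everything   */
              if (validate_change(&reqs[i]) < 0)
                  return -1;                 /* nothing has been touched yet   */
          for (int i = 0; i < n; i++)        /* phase 2: commit (cannot fail)  */
              apply_change(state, &reqs[i]);
          return 0;
      }

      int main(void)
      {
          int state[4] = {0};
          struct change good[] = { {0, 1}, {1, 2} };
          struct change bad[]  = { {2, 3}, {3, -1} };

          printf("good: %d, state[1]=%d\n", update_link(state, good, 2), state[1]);
          printf("bad : %d, state[2]=%d (untouched)\n", update_link(state, bad, 2), state[2]);
          return 0;
      }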