Commit messages
| |
Slab destructors have not been supported since Christoph's
c59def9f222d44bb7e2f0a559f2906191a0862d7 change. They have been
BUG()s for both slab and slub, and slob never supported them
either.
This rips out support for the dtor pointer from kmem_cache_create()
completely and fixes up every single callsite in the kernel (there were
about 224, not including the slab allocator definitions themselves,
or the documentation references).
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
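For illustration (not taken from the patch itself; the cache, object and
constructor names are placeholders, and the exact constructor prototype
varies by kernel version), a typical call-site change looks like this:

    /* Before: a destructor could be passed as the last argument. */
    cachep = kmem_cache_create("my_cache", sizeof(struct my_obj), 0,
                               SLAB_HWCACHE_ALIGN, my_ctor, NULL);

    /* After: kmem_cache_create() no longer takes a dtor argument. */
    cachep = kmem_cache_create("my_cache", sizeof(struct my_obj), 0,
                               SLAB_HWCACHE_ALIGN, my_ctor);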
| |
Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
| |
Make all initialized struct seq_operations in net/ const
Signed-off-by: Philippe De Muyter <phdm@macqel.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
SCTP currently permits users to bind to link-local addresses,
but doesn't verify that the scope id specified at bind matches
the interface that the address is configured on. It was reported
that this can hang a system.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
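A hedged sketch of the kind of check this implies (using the
pre-namespace dev_get_by_index() signature, and returning 0 for an
unusable address in the style of the SCTP address-verification helpers;
not the literal patch):

    struct net_device *dev;

    if (ipv6_addr_type(&addr->v6.sin6_addr) & IPV6_ADDR_LINKLOCAL) {
        if (!addr->v6.sin6_scope_id)
            return 0;                 /* no scope id supplied */
        dev = dev_get_by_index(addr->v6.sin6_scope_id);
        if (!dev)
            return 0;                 /* scope id names no interface */
        if (!ipv6_chk_addr(&addr->v6.sin6_addr, dev, 0)) {
            dev_put(dev);
            return 0;                 /* address not on that interface */
        }
        dev_put(dev);
    }
    return 1;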
| |
In-kernel sockets created with sock_create_kern don't usually
have a file and file descriptor allocated to them. As a result,
when SCTP tries to check the non-blocking flag, we Oops when
dereferencing a NULL file pointer.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
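A minimal sketch of the defensive pattern (not the literal fix): only
consult the file flags when a file is actually attached to the socket:

    /* Sockets created with sock_create_kern() may have no struct file,
     * so guard the O_NONBLOCK lookup instead of dereferencing NULL. */
    int flags = 0;

    if (sk->sk_socket && sk->sk_socket->file)
        flags = sk->sk_socket->file->f_flags;

    timeo = sock_sndtimeo(sk, flags & O_NONBLOCK);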
| |
Correctly dereference bytes_copied in sctp_copy_laddrs().
I totally must have spaced when doing this.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
sctp_sock_migrate() grabs the socket lock on a newly allocated socket while
holding the socket lock on an old socket. lockdep worries that this might
be a recursive lock attempt.
task/3026 is trying to acquire lock:
(sk_lock-AF_INET){--..}, at: [<ffffffff88105b8c>] sctp_sock_migrate+0x2e3/0x327 [sctp]
but task is already holding lock:
(sk_lock-AF_INET){--..}, at: [<ffffffff8810891f>] sctp_accept+0xdf/0x1e3 [sctp]
This patch tells lockdep that this locking is safe by using
lock_sock_nested().
Signed-off-by: Zach Brown <zach.brown@oracle.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
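The locking pattern, roughly (SINGLE_DEPTH_NESTING is the generic
lockdep subclass; the annotation actually used in sctp_sock_migrate()
may differ):

    lock_sock(oldsk);

    /* oldsk's lock is held; take the freshly allocated socket's lock in
     * a separate lockdep subclass so this is not reported as recursion. */
    lock_sock_nested(newsk, SINGLE_DEPTH_NESTING);

    /* ... migrate state from oldsk to newsk ... */

    release_sock(newsk);
    release_sock(oldsk);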
| |
This is the patch that we agreed I should split out from my last
patch. It changes space_left to be computed in the same way the 'to'
variable is. I know we talked about changing space_left to an int,
but I think size_t is more appropriate, since we should never have
negative space in our buffer, and computing using offsetof means
space_left should now never drop below zero.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
| |
I noted the other day, while looking at a bug that was ostensibly
in some perl networking library, that we strictly refuse to complete
getsockopt operations if the caller passes in an oversized buffer. This
seems to make libraries like Perl::NET malfunction, since it seems to
allocate oversized buffers for use in several operations. It also seems
to be out of line with the way the udp, tcp and ip getsockopt routines
handle buffer input (the *optlen pointer is both an input and an output
and gets set to the length of the data that we copy into the buffer).
This patch brings our getsockopt helpers into line with other protocols,
and allows us to accept oversized buffers for our getsockopt operations.
Tested by me with good results.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
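The convention the helpers move to, sketched with an illustrative option
('status' stands for an already-filled struct sctp_status):

    if (get_user(len, optlen))
        return -EFAULT;

    /* Accept oversized buffers: clamp to what we provide rather than
     * rejecting the call, but still refuse buffers that are too small. */
    if (len < sizeof(struct sctp_status))
        return -EINVAL;
    len = sizeof(struct sctp_status);

    if (copy_to_user(optval, &status, len))
        return -EFAULT;
    /* *optlen is an output too: report how much was actually copied. */
    if (put_user(len, optlen))
        return -EFAULT;
    return 0;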
| |
Right now, when we receive an MTU estimate smaller than the minimum
threshold in the ICMP message, we disable path MTU discovery
on the transport. This leads to the SCTP fragmentation point
never increasing, even when the real path MTU has increased.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
| |
Currently, if the socket is owned by the user, we drop the ICMP
message. As a result SCTP forgets that the path MTU changed and
never adjusts its estimate. This causes all subsequent
packets to be fragmented. With this patch, we flag the association
that it needs to update its estimate based on the already updated
routing information.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
| |
Introduce a new function, sctp_transport_update_pmtu, that updates
the transport's and destination cache's view of the path MTU.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
| |
If the copy_to_user or copy_user calls fail in sctp_getsockopt_local_addrs(),
the function should free locally allocated storage before returning an error.
Spotted by Coverity.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
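The general shape of such a fix, as a sketch ('to', 'size' and
'bytes_copied' are placeholders):

    void *addrs = kmalloc(size, GFP_KERNEL);
    int err = 0;

    if (!addrs)
        return -ENOMEM;

    /* ... fill 'addrs' with the local address list ... */

    if (copy_to_user(to, addrs, bytes_copied))
        err = -EFAULT;        /* fall through and free either way */

    kfree(addrs);
    return err;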
| |
Allow sctp_bindx() to accept multiple addresses with an
unspecified port. In this case, all addresses inherit
the first bound port. We still catch full mismatches.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
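From user space (lksctp-tools), this permits a call roughly like the
following, where only the first address carries an explicit port; the
addresses and the port are made up for the example:

    /* needs <netinet/sctp.h>, <arpa/inet.h> and <string.h> */
    struct sockaddr_in addrs[2];

    memset(addrs, 0, sizeof(addrs));
    addrs[0].sin_family = AF_INET;
    addrs[0].sin_port   = htons(5000);            /* explicit port */
    inet_pton(AF_INET, "192.0.2.1", &addrs[0].sin_addr);

    addrs[1].sin_family = AF_INET;
    addrs[1].sin_port   = 0;                      /* inherits port 5000 */
    inet_pton(AF_INET, "192.0.2.2", &addrs[1].sin_addr);

    if (sctp_bindx(sd, (struct sockaddr *)addrs, 2, SCTP_BINDX_ADD_ADDR) < 0)
        perror("sctp_bindx");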
| |
During peeloff of an AF_INET6 socket, inet6_sk(sk)->daddr
wasn't set correctly, since the code assumed IPv4 only.
Now we use the correct call to set the destination address.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Acked-by: Sridhar Samudrala <sri@us.ibm.com>
| |
Recent gcc versions emit warnings when unsigned variables are
compared < 0 or >= 0.
Signed-off-by: Bill Nottingham <notting@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
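A minimal example of the warning class being addressed:

    static int check(unsigned int len)
    {
        if (len < 0)    /* always false: gcc warns "comparison of unsigned
                           expression < 0 is always false" */
            return -EINVAL;
        return 0;
    }
    /* Fix: make the variable signed, or drop the impossible comparison. */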
| |
Use menuconfigs instead of menus, so the whole menu can be disabled at
once instead of going through all options.
Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
The socket API draft is unclear about whether to include the
chunk header or not. Recent discussion on the sctp implementors
mailing list clarified that the chunk header shouldn't be included,
but the error parameter header still needs to be there.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
I broke the non-wildcard case recently. This fixes it.
Now, explicitly bound addresses can be retrieved using the API.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
SCTP was checking for NULL when trying to detect hmac
allocation failure, where it should have been using IS_ERR.
Also, print a rate-limited warning to the log telling the
user what happened.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
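The crypto API signals failure with ERR_PTR-encoded values rather than
NULL, so the check has to look roughly like this (the algorithm name and
return value are illustrative):

    struct crypto_hash *tfm;

    tfm = crypto_alloc_hash("hmac(sha1)", 0, CRYPTO_ALG_ASYNC);
    if (IS_ERR(tfm)) {
        /* A NULL check would never fire here. */
        if (net_ratelimit())
            printk(KERN_INFO "SCTP: failed to load transform for hmac: %ld\n",
                   PTR_ERR(tfm));
        return -ENOSYS;
    }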
| |
Signed-off-by: Michael Opdenacker <michael@free-electrons.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
| |
During the INIT/COOKIE-ACK collision cases, it's possible to get
into a situation where the association id is not yet set at the time
of the user event generation. As a result, user events have an
association id set to 0 which will confuse applications.
This happens if we hit case B of duplicate cookie processing.
In the particular example found and provided by Oscar Isaula
<Oscar.Isaula@motorola.com>, the flow looks like this:

    A                                 B
    ---- INIT ------->   (lost)
              <--------- INIT ------
    ---- INIT-ACK --->
              <------ Cookie ECHO
When the Cookie Echo is received, we end up trying to update the
association that was created on A as a result of the (lost) INIT,
but that association doesn't have the ID set yet.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Update the SO_REUSEADDR handling to also check for listen state. This
way multiple listening server sockets can't be created and they will
not steal packets from each other.
Reported by Paolo Galtieri <pgaltieri@mvista.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
We need to make sure that all destination ports are the same, since
the association really must not connect to multiple different ports
at once. This was reported on the sctp-impl list.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Cleanup of dev_base list use, with the aim of simplifying making the
device list per-namespace. In almost every case, use of the dev_base
variable and the dev->next pointer could easily be replaced by a
for_each_netdev loop. A few of the most complicated places were
converted to first_netdev()/next_netdev().
Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Acked-by: Kirill Korotaev <dev@openvz.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
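The typical conversion, sketched (do_something() is a placeholder, and
in later kernels the iterator also takes a namespace argument):

    struct net_device *dev;

    /* Before: open-coded walk of the global device list. */
    for (dev = dev_base; dev; dev = dev->next)
        do_something(dev);

    /* After: the iterator hides the list layout. */
    for_each_netdev(dev)
        do_something(dev);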
| |
sctp_getsockopt_local_addrs_old() in net/sctp/socket.c calls
copy_to_user() while the spinlock addr_lock is held. This should not
be done, as copy_to_user() might sleep. The call to
sctp_copy_laddrs_to_user() while holding the lock is also problematic,
as it calls copy_to_user().
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
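The standard remedy, sketched: gather the data into a temporary kernel
buffer under the lock, drop the lock, then copy to user space
('addr_lock', 'to' and 'size' are placeholders):

    void *buf = kmalloc(size, GFP_KERNEL);
    int err = 0;

    if (!buf)
        return -ENOMEM;

    spin_lock_bh(&addr_lock);
    /* ... memcpy() the address list into 'buf' ... */
    spin_unlock_bh(&addr_lock);

    /* Safe to sleep now that the lock is dropped. */
    if (copy_to_user(to, buf, size))
        err = -EFAULT;

    kfree(buf);
    return err;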
| |
Spring cleaning time...
There seem to be a lot of places in the network code that have
extra bogus semicolons after conditionals. Most common is a
bogus semicolon after: switch () { }
Signed-off-by: Stephen Hemminger <shemminger@linux-foundation.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
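For example ('state' and 'FOO' are placeholders):

    switch (state) {
    case FOO:
        break;
    default:
        break;
    };    /* <-- the extra semicolon removed by this cleanup */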
| |
When a transmitted packet is looped back directly, CHECKSUM_PARTIAL
maps to the semantics of CHECKSUM_UNNECESSARY. Therefore we should
treat it as such in the stack.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
As stated in the sctp socket api draft:
sac_info: variable
If the sac_state is SCTP_COMM_LOST and an ABORT chunk was received
for this association, sac_info[] contains the complete ABORT chunk as
defined in the SCTP specification RFC2960 [RFC2960] section 3.3.7.
We now save received ABORT chunks into the sac_info field and pass that
to the user.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
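From user space the ABORT chunk can then be pulled out of the
notification, roughly as follows ('buf' holds a notification read from
the socket and handle_abort_chunk() is hypothetical):

    struct sctp_assoc_change *sac = (struct sctp_assoc_change *)buf;

    if (sac->sac_type == SCTP_ASSOC_CHANGE && sac->sac_state == SCTP_COMM_LOST) {
        /* Everything past the fixed-size header is sac_info, here the
         * complete ABORT chunk. */
        size_t info_len = sac->sac_length - sizeof(*sac);
        handle_abort_chunk(sac->sac_info, info_len);
    }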
| |
Parameters only take effect when a corresponding flag bit is set
and a value is specified. This means we need to check the flags
in addition to checking for a non-zero value.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
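The excerpt does not name the option, but the flags-plus-value rule it
describes matches the peer-address-parameters (SCTP_PEER_ADDR_PARAMS)
handling; a hedged sketch using the heartbeat interval, where 'params'
is the struct sctp_paddrparams supplied by the caller and 'trans' the
selected transport:

    /* Apply a new heartbeat interval only when the caller both set the
     * corresponding flag and supplied a value. */
    if ((params.spp_flags & SPP_HB_ENABLE) && params.spp_hbinterval)
        trans->hbinterval = msecs_to_jiffies(params.spp_hbinterval);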
| |
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
This option causes partial delivery to run as soon
as the specified amount of data has been accumulated on
the association. However, we give preference to fully
reassembled messages over PD messages. In either case,
receive window and buffer space are freed up.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@.hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
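Application-side usage is an ordinary SCTP socket option (sketch; the
16 KB threshold is arbitrary):

    uint32_t pd_point = 16384;    /* run partial delivery after 16 KB */

    if (setsockopt(sd, IPPROTO_SCTP, SCTP_PARTIAL_DELIVERY_POINT,
                   &pd_point, sizeof(pd_point)) < 0)
        perror("setsockopt(SCTP_PARTIAL_DELIVERY_POINT)");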
| |
This option was introduced in draft-ietf-tsvwg-sctpsocket-13. It
prevents head-of-line blocking in the case of a one-to-many endpoint.
Applications enabling this option really must enable the SCTP_SNDRCV
event so that they know where the data belongs. Based on an
earlier patch by Ivan Skytte Jørgensen.
Additionally, this functionality now permits multiple associations
on the same endpoint to enter partial delivery. Applications should
be extra careful, when using this functionality, to track EOR indicators.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
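Typical application usage, enabling the option alongside the SCTP_SNDRCV
information needed to demultiplex the data (sketch; error handling
omitted):

    int interleave = 1;                    /* enable fragment interleave */
    struct sctp_event_subscribe events;

    memset(&events, 0, sizeof(events));
    events.sctp_data_io_event = 1;         /* deliver sctp_sndrcvinfo cmsgs */

    setsockopt(sd, IPPROTO_SCTP, SCTP_FRAGMENT_INTERLEAVE,
               &interleave, sizeof(interleave));
    setsockopt(sd, IPPROTO_SCTP, SCTP_EVENTS, &events, sizeof(events));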
| |
So that it is also an offset from skb->head. This reduces its size from
8 to 4 bytes on 64-bit architectures, allowing us to combine it with the
4-byte hole left by the layer-header conversion and reducing struct
sk_buff to 256 bytes, i.e. 4 64-byte cachelines; and since the sk_buff
slab cache is SLAB_HWCACHE_ALIGN... :-)
Many calculations that previously required that skb->{transport,network,
mac}_header be first converted to a pointer now can be done directly, being
meaningful as offsets or pointers.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
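Conceptually, the accessors end up looking like this (simplified from
the include/linux/skbuff.h scheme of that era; not copied verbatim):

    #ifdef NET_SKBUFF_DATA_USES_OFFSET
    typedef unsigned int sk_buff_data_t;

    static inline unsigned char *skb_transport_header(const struct sk_buff *skb)
    {
        return skb->head + skb->transport_header;   /* offset from head */
    }
    #else
    typedef unsigned char *sk_buff_data_t;

    static inline unsigned char *skb_transport_header(const struct sk_buff *skb)
    {
        return skb->transport_header;               /* plain pointer */
    }
    #endif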
| |
architectures
With this we save 8 bytes per network packet, leaving a 4-byte hole to
be used in further shrinking work, likely with the offsetization of
other pointers such as ->{data,tail,end}. The cost is a few extra adds,
minimized by the usual practice of setting skb->{mac,nh,h}.raw to a
local variable that is then accessed multiple times in each function;
it is also no more expensive than before for most handling of these
headers, such as setting one header from another (transport from
network, etc.), subtracting from or adding to them, and comparing them.
Now we have this layout for sk_buff on an x86_64 machine:
[acme@mica net-2.6.22]$ pahole vmlinux sk_buff
struct sk_buff {
struct sk_buff * next; /* 0 8 */
struct sk_buff * prev; /* 8 8 */
struct rb_node rb; /* 16 24 */
struct sock * sk; /* 40 8 */
ktime_t tstamp; /* 48 8 */
struct net_device * dev; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct net_device * input_dev; /* 64 8 */
sk_buff_data_t transport_header; /* 72 4 */
sk_buff_data_t network_header; /* 76 4 */
sk_buff_data_t mac_header; /* 80 4 */
/* XXX 4 bytes hole, try to pack */
struct dst_entry * dst; /* 88 8 */
struct sec_path * sp; /* 96 8 */
char cb[48]; /* 104 48 */
/* cacheline 2 boundary (128 bytes) was 24 bytes ago*/
unsigned int len; /* 152 4 */
unsigned int data_len; /* 156 4 */
unsigned int mac_len; /* 160 4 */
union {
__wsum csum; /* 4 */
__u32 csum_offset; /* 4 */
}; /* 164 4 */
__u32 priority; /* 168 4 */
__u8 local_df:1; /* 172 1 */
__u8 cloned:1; /* 172 1 */
__u8 ip_summed:2; /* 172 1 */
__u8 nohdr:1; /* 172 1 */
__u8 nfctinfo:3; /* 172 1 */
__u8 pkt_type:3; /* 173 1 */
__u8 fclone:2; /* 173 1 */
__u8 ipvs_property:1; /* 173 1 */
/* XXX 2 bits hole, try to pack */
__be16 protocol; /* 174 2 */
void (*destructor)(struct sk_buff *); /* 176 8 */
struct nf_conntrack * nfct; /* 184 8 */
/* --- cacheline 3 boundary (192 bytes) --- */
struct sk_buff * nfct_reasm; /* 192 8 */
struct nf_bridge_info *nf_bridge; /* 200 8 */
__u16 tc_index; /* 208 2 */
__u16 tc_verd; /* 210 2 */
dma_cookie_t dma_cookie; /* 212 4 */
__u32 secmark; /* 216 4 */
__u32 mark; /* 220 4 */
unsigned int truesize; /* 224 4 */
atomic_t users; /* 228 4 */
unsigned char * head; /* 232 8 */
unsigned char * data; /* 240 8 */
unsigned char * tail; /* 248 8 */
/* --- cacheline 4 boundary (256 bytes) --- */
unsigned char * end; /* 256 8 */
}; /* size: 264, cachelines: 5 */
/* sum members: 260, holes: 1, sum holes: 4 */
/* bit holes: 1, sum bit holes: 2 bits */
/* last cacheline: 8 bytes */
On 32 bits nothing changes, and pointers continue to be used with the compiler
turning all this abstraction layer into dust. But there are some sk_buff
validation tricks that are now possible, humm... :-)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Renaming skb->h to skb->transport_header, skb->nh to skb->network_header and
skb->mac to skb->mac_header, to match the names of the associated helpers
(skb[_[re]set]_{transport,network,mac}_header).
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
For consistency with all the other skb->h.raw accessors.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
For the quite common 'skb->h.raw - skb->data' sequence.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
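The helper this presumably introduces is essentially the following, so
call sites shrink accordingly:

    static inline int skb_transport_offset(const struct sk_buff *skb)
    {
        return skb_transport_header(skb) - skb->data;
    }

    /* call sites change from:
     *     len = skb->h.raw - skb->data;
     * to:
     *     len = skb_transport_offset(skb);
     */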
| |
Now the skb->nh union has just one member, .raw, i.e. it is just like the
skb->mac union. Strange, no? I'm just leaving it like that until the
transport layer is done, at which point we'll rename skb->mac.raw to
skb->mac_header (or ->mac_header_offset?), ditto for ->{h,nh}.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Now related to this form:
skb->nh.ipv6h = (struct ipv6hdr *)skb_put(skb, length);
That, like the others, is done when skb->tail is still equal to skb->data,
making the conversion to skb_reset_network_header possible.
Also one more case equivalent to skb->nh.raw = skb->data, of this form:
iph = (struct ipv6hdr *)skb->data;
<SNIP>
skb->nh.ipv6h = iph;
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
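A sketch of the first conversion described above:

    /* Before: */
    skb->nh.ipv6h = (struct ipv6hdr *)skb_put(skb, length);

    /* After: skb->tail is still equal to skb->data here, so resetting
     * the network header to skb->data is equivalent. */
    skb_reset_network_header(skb);
    skb_put(skb, length);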
| |
This time of the type:
skb->nh.iph = (struct iphdr *)skb->data;
That is completely equivalent to:
skb->nh.raw = skb->data;
Wonder why people love casts... :-)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
The way partial delivery is currently implemented, it is possible to
interleave a message (either from another stream, or unordered) that
is not part of the partial delivery process. The only way to do this is
for a message to not be a fragment and to be 'in order' or unordered for
a given stream. This bypasses the reassembly/ordering queues, where
things live during partial delivery, and the message is delivered to
the socket in the middle of partial delivery.
This is a two-fold problem, in that:
1. the app now must check the stream-id and flags, which it may not
be doing;
2. this clears partial delivery state from the association and results
in the ULP hanging.
This patch is a band-aid over a much bigger problem in that we
don't do stream interleave.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
During the sctp_bindx() call to add additional addresses to the
endpoint, any v4mapped addresses are converted and stored as regular
v4 addresses. However, when trying to remove these addresses, the
v4mapped addresses are not converted and the operation fails. This
patch unmaps the addresses during the remove operation as well.
Signed-off-by: Paolo Galtieri <pgaltieri@mvista.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
In the current implementation, LKSCTP does receive buffer accounting for
data in sctp_receive_queue and pd_lobby. However, LKSCTP doesn't do
accounting for data in frag_list when data is fragmented. In addition,
LKSCTP doesn't do accounting for data in the reasm and lobby queues in
struct sctp_ulpq.
When there is data in these queues, an 'assertion failed' message is
printed in inet_sock_destruct because sk_rmem_alloc of oldsk does not
become 0 when the socket is destroyed.
Signed-off-by: Tsutomu Fujii <t-fujii@nb.jp.nec.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
| |
Reset ssthresh to the correct value (the peer's a_rwnd) when restarting
an association.
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>