Commit message  [Author, Date, Files changed, Lines removed/added]
* tcp: uninline tcp_oow_rate_limited()  [Eric Dumazet, 2015-03-17, 2 files, -30/+32]
| | | | | | | | | tcp_oow_rate_limited() is hardly used in the fast path, there is no point inlining it. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* tcp: move tcp_openreq_init() to tcp_input.c  [Eric Dumazet, 2015-03-17, 2 files, -25/+25]
| | | | | | | | This big helper is called once from tcp_conn_request(), there is no point having it in an include. Compiler will inline it anyway. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* inet: move ir_mark to fill a hole  [Eric Dumazet, 2015-03-17, 1 file, -6/+5]
| | | | | | | | | | | On 64bit arches, we can save 8 bytes in inet_request_sock by moving ir_mark to fill a hole. While we are at it, inet_request_mark() can get a const qualifier for listener socket. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
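The saving comes from ordinary C structure padding; a minimal illustration on a 64-bit build, using toy structs rather than the real inet_request_sock layout:

    /* Toy structs, not the actual inet_request_sock: a 4-byte member
     * sandwiched between 8-byte pointers costs 4 bytes of padding before
     * the next pointer and 4 more bytes of tail padding. */
    struct before {                 /* sizeof == 32 on a typical 64-bit ABI */
            void *listener;         /* 8 bytes */
            unsigned int mark;      /* 4 bytes + 4-byte hole */
            void *dst;              /* 8 bytes */
            unsigned int rcv_wnd;   /* 4 bytes + 4 bytes tail padding */
    };

    struct after {                  /* sizeof == 24: the hole is gone */
            void *listener;
            void *dst;
            unsigned int mark;      /* now packed next to rcv_wnd */
            unsigned int rcv_wnd;
    };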
* netfilter: xt_socket: prepare for TCP_NEW_SYN_RECV support  [Eric Dumazet, 2015-03-17, 1 file, -12/+22]
| | | | | | | | | | TCP request socks soon will be visible in ehash table. xt_socket will be able to match them, but first we need to make sure to not consider them as full sockets. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* netfilter: tproxy: prepare TCP_NEW_SYN_RECV support  [Eric Dumazet, 2015-03-17, 1 file, -6/+12]
| | | | | | | TCP request socks soon will be visible in ehash table. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* netfilter: use sk_fullsock() helper  [Eric Dumazet, 2015-03-17, 5 files, -6/+6]
| | | | | | | | | Upcoming request sockets have TCP_NEW_SYN_RECV state and should be special cased a bit like TCP_TIME_WAIT sockets. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* bpf: allow BPF programs access 'protocol' and 'vlan_tci' fields  [Alexei Starovoitov, 2015-03-17, 3 files, -22/+62]
| | | | | | | | | | | | | As a follow-on to patch 70006af95515 ("bpf: allow eBPF access skb fields"), this patch allows the 'protocol' and 'vlan_tci' fields to be accessed from extended BPF programs. The usage of the 'protocol', 'vlan_present' and 'vlan_tci' fields is the same as the corresponding SKF_AD_PROTOCOL, SKF_AD_VLAN_TAG_PRESENT and SKF_AD_VLAN_TAG accesses in classic BPF. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
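A minimal sketch of how a program reads the new fields (toy accept/drop policy; the samples/bpf loader and section plumbing are omitted, and the struct __sk_buff layout is the one this series defines):

    #include <linux/bpf.h>          /* UAPI header providing struct __sk_buff */

    /* Socket-filter style program: return 0 to drop, a byte count to accept. */
    int bpf_prog(struct __sk_buff *skb)
    {
            /* Both loads are rewritten by the verifier into real sk_buff accesses. */
            __u32 proto = skb->protocol;    /* same value classic BPF reads via SKF_AD_PROTOCOL */
            __u32 tci   = skb->vlan_tci;    /* same value as SKF_AD_VLAN_TAG */

            if (proto == 0 || tci == 0)     /* toy policy: only VLAN-tagged frames pass */
                    return 0;
            return skb->len;                /* accept up to the full packet */
    }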
* Merge branch 'const_of_device_id'  [David S. Miller, 2015-03-17, 40 files, -44/+44]
|\ | | | | | | | | | | | | | | | | | | | | | | | | Fabian Frederick says: ==================== drivers/net: constify of_device_id array This small patchset adds const to of_device_id arrays in drivers/net branch. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
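The per-driver commits that follow all apply the same one-line pattern; a minimal sketch of it, using a hypothetical "foo" platform driver (driver name and compatible string are illustrative):

    #include <linux/module.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    /* The match table is only ever read, so it can be const and live in rodata. */
    static const struct of_device_id foo_of_match[] = {
            { .compatible = "vendor,foo-mac" },
            { /* sentinel */ }
    };
    MODULE_DEVICE_TABLE(of, foo_of_match);

    static struct platform_driver foo_driver = {
            .driver = {
                    .name           = "foo",
                    .of_match_table = foo_of_match, /* declared const struct of_device_id * */
            },
    };
    module_platform_driver(foo_driver);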
| * via-velocity: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -1/+1]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: via-rhine: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -1/+1]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * ehea: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -2/+2]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * IBM-EMAC: constify of_device_id array  [Fabian Frederick, 2015-03-17, 5 files, -5/+5]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * can: constify of_device_id array  [Fabian Frederick, 2015-03-17, 5 files, -5/+5]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: phy: constify of_device_id array  [Fabian Frederick, 2015-03-17, 5 files, -5/+5]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * orinoco: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -1/+1]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: xilinx: constify of_device_id array  [Fabian Frederick, 2015-03-17, 3 files, -3/+3]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: greth: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -1/+1]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * netdev: octeon_mgmt: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -1/+1]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: ethernet: apple: constify of_device_id array  [Fabian Frederick, 2015-03-17, 2 files, -2/+2]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * drivers: net: xgene: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -1/+1]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: ethoc: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -1/+1]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net/fsl: constify of_device_id array  [Fabian Frederick, 2015-03-17, 10 files, -12/+12]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * Altera TSE: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -2/+2]
| | | | | | | | | | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: netcp: constify of_device_id array  [Fabian Frederick, 2015-03-17, 1 file, -1/+1]
|/ | | | | | | | of_device_id is always used as const. (See driver.of_match_table and open firmware functions) Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David S. Miller <davem@davemloft.net>
* rhashtable: Avoid calculating hash again to unlock  [Thomas Graf, 2015-03-16, 1 file, -6/+5]
| | | | | | | | Caching the lock pointer avoids having to hash on the object again to unlock the bucket locks. Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
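A rough sketch of the shape of the change (bucket_lock_for_hash() is a hypothetical stand-in for the rhashtable-internal lock lookup): compute the lock pointer once from the hash and reuse it for unlock.

    static void modify_bucket_sketch(struct bucket_table *tbl, unsigned int hash)
    {
            spinlock_t *lock = bucket_lock_for_hash(tbl, hash); /* hypothetical helper */

            spin_lock_bh(lock);
            /* ... link or unlink the object in tbl->buckets[hash] ... */
            spin_unlock_bh(lock);   /* no second hash needed to find the lock */
    }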
* tcp_metrics: fix wrong lockdep annotations  [Eric Dumazet, 2015-03-16, 1 file, -12/+8]
| | | | | | | | | | | | | | | Changes in the tcp_metrics hash table are protected by tcp_metrics_lock only, not by genl_mutex. While we are at it, use deref_locked() instead of rcu_dereference() in tcp_new() to avoid an unnecessary barrier, as we hold tcp_metrics_lock as well. Reported-by: Andrew Vagin <avagin@parallels.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Fixes: 098a697b497e ("tcp_metrics: Use a single hash table for all network namespaces.") Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: David S. Miller <davem@davemloft.net>
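For reference, the lockdep-friendly pattern the fix relies on looks roughly like this (lock and macro names here are illustrative, not the exact tcp_metrics code):

    #include <linux/rcupdate.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(tm_lock);        /* stand-in for tcp_metrics_lock */

    /* Valid only while tm_lock is held: documents the locking rule for lockdep
     * and avoids the read-side ordering that rcu_dereference() would imply. */
    #define tm_deref_locked(p) \
            rcu_dereference_protected(p, lockdep_is_held(&tm_lock))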
* dsa: change "select" to "depends on" for NET_SWITCHDEV and for NET_DSA  [Jiri Pirko, 2015-03-16, 2 files, -9/+7]
| | | | | | | | | | | | | | | | This would fix randconfig compile error: net/built-in.o: In function `netdev_switch_fib_ipv4_abort': (.text+0xf7811): undefined reference to `fib_flush_external' Also it fixes following warnings: warning: (NET_DSA) selects NET_SWITCHDEV which has unmet direct dependencies (NET && INET) warning: (NET_DSA_MV88E6060 && NET_DSA_MV88E6131 && NET_DSA_MV88E6123_61_65 && NET_DSA_MV88E6171 && NET_DSA_MV88E6352 && NET_DSA_BCM_SF2) selects NET_DSA which has unmet direct dependencies (NET && HAVE_NET_DSA && NET_SWITCHDEV) Reported-by: Randy Dunlap <rdunlap@infradead.org> Suggested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
* net/fsl: modify xgmac_mdio for little endian SoCs  [Shaohui Xie, 2015-03-16, 1 file, -28/+68]
| | | | | | | | | | | | | The MDIO controller on little endian SoCs, e.g. ls2085a, is similar to the controller on big endian SoCs, but MDIO access is little endian. We use I/O accessor functions to handle endianness, so the driver can run on little endian SoCs. A "little-endian" property in the DTS indicates that the MDIO block is little endian; if the driver finds the property, it accesses MDIO in little endian, otherwise it works in big endian by default. Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
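A sketch of the accessor approach described above (struct and function names are illustrative, not the exact xgmac_mdio code): pick big- or little-endian MMIO helpers based on the device-tree property.

    #include <linux/io.h>
    #include <linux/of.h>

    struct mdio_regs_sketch {
            void __iomem *base;
            bool little_endian;     /* set from of_property_read_bool(np, "little-endian") */
    };

    static u32 mdio_read32_sketch(struct mdio_regs_sketch *p, unsigned int off)
    {
            return p->little_endian ? ioread32(p->base + off)
                                    : ioread32be(p->base + off);
    }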
* net/fsl: fix a bug in xgmac_mdio  [Shaohui Xie, 2015-03-16, 1 file, -1/+1]
| | | | | | | | There is a bug in xgmac_wait_until_done(): mdio_stat should be used instead of mdio_data when checking whether the busy bit is cleared. Signed-off-by: Shaohui Xie <Shaohui.Xie@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* net: kernel socket should be released in init_net namespace  [Ying Xue, 2015-03-16, 1 file, -1/+1]
| | | | | | | | | | | Creating a kernel socket with sock_create_kern() happens in the "init_net" namespace; however, releasing it with sk_release_kernel() occurs in the current namespace, which may be different from "init_net". Therefore, we should guarantee that the namespace in which a kernel socket is created is the same as the one in which it is released. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* rhashtable: Annotate RCU locking of walkers  [Thomas Graf, 2015-03-16, 1 file, -0/+2]
| | | | | | | | | | | Fixes the following sparse warnings: lib/rhashtable.c:767:5: warning: context imbalance in 'rhashtable_walk_start' - wrong count at exit lib/rhashtable.c:849:6: warning: context imbalance in 'rhashtable_walk_stop' - unexpected unlock Fixes: f2dba9c6ff0d ("rhashtable: Introduce rhashtable_walk_*") Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
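The annotations in question look roughly like this (simplified walker functions, not the exact rhashtable code); sparse then sees a balanced RCU context across the start/stop pair:

    #include <linux/rcupdate.h>

    static int walk_start_sketch(void) __acquires(RCU)
    {
            rcu_read_lock();
            return 0;
    }

    static void walk_stop_sketch(void) __releases(RCU)
    {
            rcu_read_unlock();
    }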
* rocker: replace fixed stack allocation with dynamic allocation  [Scott Feldman, 2015-03-16, 1 file, -3/+10]
| | | | | | | | | | In haste to fix some sparse warnings, I hard-coded a fixed-size array on the stack which is probably too big by kernel standards. Fix this by converting the array to dynamic allocation. Signed-off-by: Scott Feldman <sfeldma@gmail.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
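The general shape of such a fix, with illustrative names (not the exact rocker code): the buffer moves from the stack to a runtime-sized heap allocation.

    #include <linux/slab.h>

    static int wait_array_sketch(size_t n)
    {
            unsigned long *wait;

            wait = kcalloc(n, sizeof(*wait), GFP_KERNEL);   /* was: unsigned long wait[BIG_N]; */
            if (!wait)
                    return -ENOMEM;
            /* ... fill and use wait[0..n-1] ... */
            kfree(wait);
            return 0;
    }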
* Merge branch 'listener_refactor'  [David S. Miller, 2015-03-16, 13 files, -102/+99]
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | Eric Dumazet says: ==================== inet: tcp listener refactoring, part 10 We are getting close to the point where request sockets will be hashed into generic hash table. Some followups are needed for netfilter and will be handled in next patch series. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
| * inet: add proper refcounting to request sock  [Eric Dumazet, 2015-03-16, 7 files, -18/+29]
| | | | | | | | | | | | | | | | reqsk_put() is the generic function that should be used to release a refcount (and automatically call reqsk_free()). reqsk_free() might be called if refcount is known to be 0 or undefined. refcnt is set to one in inet_csk_reqsk_queue_add(). As request socks are not yet in global ehash table, I added temporary debugging checks in reqsk_put() and reqsk_free(). Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
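Simplified sketch of the split described above (field name as this series introduces it; the temporary debug checks are omitted):

    #include <net/request_sock.h>

    static inline void reqsk_put_sketch(struct request_sock *req)
    {
            /* normal release path: drop a reference, free on zero */
            if (atomic_dec_and_test(&req->rsk_refcnt))
                    reqsk_free(req);
            /* reqsk_free() itself stays for callers that know no reference is held */
    }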
| * inet: factorize sock_edemux()/sock_gen_put() code  [Eric Dumazet, 2015-03-16, 2 files, -15/+6]
| | | | | | | | | | | | | | | | sock_edemux() is not used in fast path, and should really call sock_gen_put() to save some code. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
| * inet_diag: allow sk_diag_fill() to handle request socks  [Eric Dumazet, 2015-03-16, 1 file, -67/+53]
| | | | | | | | | | | | inet_diag_fill_req() is renamed to inet_req_diag_fill() and moved up, so that it can be called from sk_diag_fill(). inet_diag_bc_sk() is ready to handle request socks. inet_twsk_diag_dump() is no longer needed. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
| * inet: ip early demux should avoid request sockets  [Eric Dumazet, 2015-03-16, 2 files, -2/+2]
| | | | | | | | | | | | | | | | | | | | When a request socket is created, we do not cache ip route dst entry, like for timewait sockets. Let's use sk_fullsock() helper. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
| * net: add sk_fullsock() helper  [Eric Dumazet, 2015-03-16, 1 file, -0/+9]
|/ | | | | | | | | We have many places where we want to check if a socket is not a timewait or request socket. Use a helper to avoid hard coding this. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
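A sketch of what such a helper looks like (TCP_NEW_SYN_RECV is the request-socket state this series introduces): a full socket is anything that is neither in TIME_WAIT nor a request socket.

    #include <net/sock.h>   /* also pulls in the TCPF_* state masks */

    static inline bool sk_fullsock_sketch(const struct sock *sk)
    {
            return (1 << sk->sk_state) & ~(TCPF_TIME_WAIT | TCPF_NEW_SYN_RECV);
    }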
* Merge branch 'swdev_ops'  [David S. Miller, 2015-03-16, 5 files, -89/+105]
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Scott Feldman says: ==================== switchdev: add swdev ops v3: - Fix missing include for DSA build v2: - Per Simon's review, squash some of the dependent commits into one to make series git bisect safe. v1: Per discussions at netconf, move switchdev ndo ops to a new swdev_ops to keep ndo namespace clean and maintain switchdev-related ops into one place. There are no functional changes here; just shuffling ops around for better organization. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
| * netdev: remove ndo ops for switchdev  [Scott Feldman, 2015-03-16, 1 file, -38/+0]
| | | | | | | | | | Signed-off-by: Scott Feldman <sfeldma@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
| * switchdev: use new swdev ops  [Scott Feldman, 2015-03-16, 3 files, -51/+64]
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Move swdev wrappers over to new swdev ops (from previous ndo ops). No functional changes to the implementation. Signed-off-by: Scott Feldman <sfeldma@gmail.com> rocker: move to new swdev ops Signed-off-by: Scott Feldman <sfeldma@gmail.com> dsa: move to new swdev ops Signed-off-by: Scott Feldman <sfeldma@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
| * switchdev: add swdev ops  [Scott Feldman, 2015-03-16, 2 files, -0/+41]
|/ | | | | | | | | As discussed at netconf, introduce swdev_ops as first step to move switchdev ops from ndo to swdev. This will keep switchdev from cluttering up ndo ops space. Signed-off-by: Scott Feldman <sfeldma@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
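The reorganization boils down to a dedicated ops struct referenced from net_device; a sketch with an illustrative subset of members (not the full swdev_ops definition):

    #include <linux/netdevice.h>

    struct swdev_ops_sketch {
            int (*swdev_parent_id_get)(struct net_device *dev,
                                       struct netdev_phys_item_id *psid);
            int (*swdev_port_stp_update)(struct net_device *dev, u8 state);
    };

    /* A driver then points dev->swdev_ops at its own static ops table
     * instead of adding these callbacks to net_device_ops. */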
* Merge branch 'rhashtable-fixes-next'  [David S. Miller, 2015-03-15, 1 file, -13/+11]
|\ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Herbert Xu says: ==================== rhashtable: Fix two bugs caused by multiple rehash preparation While testing some new patches over the weekend I discovered a couple of bugs in the series that had just been merged. These two patches fix them: 1) A use-after-free in the walker that can cause crashes when walking during a rehash. 2) When a second rehash starts during a single rhashtable_remove call the remove may fail when it shouldn't. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
| * rhashtable: Fix rhashtable_remove failures  [Herbert Xu, 2015-03-15, 1 file, -10/+7]
| | | | | | | | | | | | | | | | | | | | | | | | | | | | The commit 9d901bc05153bbf33b5da2cd6266865e531f0545 ("rhashtable: Free bucket tables asynchronously after rehash") causes gratuitous failures in rhashtable_remove. The reason is that it inadvertently introduced multiple rehashing from the perspective of readers. IOW it is now possible to see more than two tables during a single RCU critical section. Fortunately the other reader, rhashtable_lookup, already deals with this correctly thanks to c4db8848af6af92f90462258603be844baeab44d ("rhashtable: Move future_tbl into struct bucket_table"), so only rhashtable_remove is broken by this change. This patch fixes this by looping over every table from the first one to the last, or until we find the element that we were trying to delete. Incidentally, the simple test for detecting rehashing to prevent starting another shrinking no longer works. Since it isn't needed anyway (the work queue and the mutex serve as a natural barrier to unnecessary rehashes) I've simply killed the test. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
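The new control flow, sketched with an illustrative per-table helper (remove_from_one_table() is hypothetical, standing in for the real internal removal step):

    static bool remove_sketch(struct rhashtable *ht, struct rhash_head *obj)
    {
            const struct bucket_table *tbl;
            bool ret = false;

            rcu_read_lock();
            /* try the current table, then any future tables created by rehashes */
            for (tbl = rht_dereference_rcu(ht->tbl, ht); tbl && !ret;
                 tbl = rht_dereference_rcu(tbl->future_tbl, ht))
                    ret = remove_from_one_table(ht, tbl, obj);
            rcu_read_unlock();
            return ret;
    }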
| * rhashtable: Fix use-after-free in rhashtable_walk_stop  [Herbert Xu, 2015-03-15, 1 file, -3/+4]
|/ | | | | | | | | | | | | | The commit c4db8848af6af92f90462258603be844baeab44d ("rhashtable: Move future_tbl into struct bucket_table") introduced a use-after-free bug in rhashtable_walk_stop because it dereferences tbl after dropping the RCU read lock. This patch fixes it by moving the RCU read unlock down to the bottom of rhashtable_walk_stop. In fact this was how I had it originally but it got dropped while rearranging patches because this one depended on the async freeing of bucket_table. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
* net: bcmgenet: add support for Hardware Filter Block  [Petri Gynther, 2015-03-15, 2 files, -0/+176]
| | | | | | | | Add support for Hardware Filter Block (HFB) so that incoming Rx traffic can be matched and directed to desired Rx queues. Signed-off-by: Petri Gynther <pgynther@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'ebpf_skb_fields'  [David S. Miller, 2015-03-15, 10 files, -51/+335]
|\ | | | | | | | | | | | | Alexei Starovoitov says: ==================== bpf: allow eBPF access skb fields V1->V2: - refactored field access converter into common helper convert_skb_access() used in both classic and extended BPF - added missing build_bug_on for field 'len' - added comment to uapi/linux/bpf.h as suggested by Daniel - dropped exposing 'ifindex' field for now Classic BPF has a way to access skb fields, whereas extended BPF didn't. This patch introduces this ability. Classic BPF can access fields via negative SKF_AD_OFF offsets. Positive bpf_ld_abs N is treated as a load from the packet, whereas bpf_ld_abs -0x1000 + N is treated as skb fields access. Many offsets were hard coded over the years: SKF_AD_PROTOCOL, SKF_AD_PKTTYPE, etc. The problem with this approach was that for every new field the classic bpf assembler had to be tweaked. I've considered doing the same for extended, but for every new field the LLVM compiler would have to be modified, since it would need to add a new intrinsic. It could be done with a single intrinsic and a magic offset, or with inline assembler, but neither is clean from the compiler backend's point of view, since they look like calls but shouldn't scratch caller-saved registers. Another approach was to introduce new helper functions like bpf_get_pkt_type() for every field that we want to access, but that is equally ugly for the kernel and slow, since helpers are calls and they are slower than plain loads. In theory helper calls can be 'inlined' inside the kernel into direct loads, but since they were calls for user space, the compiler would have to spill registers around such calls anyway. Teaching the compiler to treat such helpers differently is even uglier. A few other ideas were considered. In the end the best option seems to be to introduce a user accessible mirror of the in-kernel sk_buff structure: struct __sk_buff { __u32 len; __u32 pkt_type; __u32 mark; __u32 queue_mapping; }; bpf programs will do: int bpf_prog1(struct __sk_buff *skb) { __u32 var = skb->pkt_type; which will be compiled to bpf assembler as: dst_reg = *(u32 *)(src_reg + 4) // 4 == offsetof(struct __sk_buff, pkt_type) bpf verifier will check validity of access and will convert it to: dst_reg = *(u8 *)(src_reg + offsetof(struct sk_buff, __pkt_type_offset)) dst_reg &= 7 since 'pkt_type' is a bitfield. No new instructions are added. LLVM doesn't need to be modified. JITs don't change, and the verifier already knows when it accesses the 'ctx' pointer. The only thing needed was to convert the user-visible offset within __sk_buff to the kernel internal offset within sk_buff. For 'len' and other fields the conversion is trivial. Converting 'pkt_type' takes 2 or 3 instructions depending on endianness. More fields can be exposed by adding to the end of 'struct __sk_buff'; vlan_tci and others can be added later. When the pkt_type field is moved around, goes into a different structure, is removed, or its size changes, only the function convert_skb_access() needs to be updated, and it covers both classic and extended.
Patch 2 updates the examples to demonstrate how fields are accessed and adds new tests for the verifier, since it needs to detect a corner case where an attacker uses a single bpf instruction in two branches with different register types. The 4 fields of __sk_buff are already exposed to user space via classic bpf and I believe they're useful in extended as well. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
| * samples: bpf: add skb->field examples and tests  [Alexei Starovoitov, 2015-03-15, 5 files, -16/+101]
| | | | | | | | | | | | | | | | | | - modify sockex1 example to count number of bytes in outgoing packets - modify sockex2 example to count number of bytes and packets per flow - add 4 stress tests that exercise 'skb->field' code path of verifier Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
| * bpf: allow extended BPF programs access skb fields  [Alexei Starovoitov, 2015-03-15, 5 files, -35/+234]
|/ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | introduce user accessible mirror of in-kernel 'struct sk_buff': struct __sk_buff { __u32 len; __u32 pkt_type; __u32 mark; __u32 queue_mapping; }; bpf programs can do: int bpf_prog(struct __sk_buff *skb) { __u32 var = skb->pkt_type; which will be compiled to bpf assembler as: dst_reg = *(u32 *)(src_reg + 4) // 4 == offsetof(struct __sk_buff, pkt_type) bpf verifier will check validity of access and will convert it to: dst_reg = *(u8 *)(src_reg + offsetof(struct sk_buff, __pkt_type_offset)) dst_reg &= 7 since skb->pkt_type is a bitfield. Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'ebpf_helpers'  [David S. Miller, 2015-03-15, 5 files, -0/+36]
|\ | | | | | | | | | | | | | | | | | | | | | | | | Daniel Borkmann says: ==================== eBPF updates Two small eBPF helper additions to better match up with ancillary classic BPF functionality. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>