author | Patrick McHardy <kaber@trash.net> | 2009-11-10 06:14:14 +0000
---|---|---
committer | David S. Miller <davem@davemloft.net> | 2009-11-13 14:07:32 -0800
commit | 572a9d7b6fc7f20f573664063324c086be310c42 | (patch)
tree | 0ab3655fdfa923b0b9c6c1ee51a2e31e97e9549f | /net/sched
parent | 9ea2bdab11da97b2ac6f87d79976d25fa6d27295 | (diff)
net: allow to propagate errors through ->ndo_hard_start_xmit()
Currently the ->ndo_hard_start_xmit() callbacks are only permitted to return
one of the NETDEV_TX codes. This prevents any kind of error propagation for
virtual devices: a layered device cannot report queue congestion of its
underlying device, and a tunnel cannot report that its destination is
unreachable.
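As an illustration only (a hypothetical, simplified transmit handler, not taken
from this patch; lower_dev() is a made-up helper), a layered virtual device
under the old contract has to throw the lower device's verdict away:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical xmit handler of a layered virtual device (illustration,
 * not part of this patch). Under the old contract only NETDEV_TX codes
 * may be returned, so whatever the lower device reports is discarded.
 */
static netdev_tx_t vdev_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        int err;

        skb->dev = lower_dev(dev);      /* lower_dev(): made-up helper */
        err = dev_queue_xmit(skb);      /* may be NET_XMIT_CN, an errno, ... */

        /* err cannot be propagated to the caller; congestion or
         * unreachability information is lost at this point.
         */
        return NETDEV_TX_OK;
}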
This patch changes the NET_XMIT codes to avoid clashes with the NETDEV_TX
codes and changes the two callers of dev_hard_start_xmit() to expect either
errno codes, NET_XMIT codes or NETDEV_TX codes as the return value.
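For context, the reorganized code spaces then look roughly as sketched below.
The values are reconstructed from the corresponding include/linux/netdevice.h
change, which lies outside the net/sched-limited diff shown further down, so
treat them as illustrative: NET_XMIT codes stay within NET_XMIT_MASK,
NETDEV_TX codes sit above it, and errnos are negative, so a single int can
carry any of the three.

/* Sketch of the disjoint return-code spaces (illustrative values). */
#define NET_XMIT_SUCCESS        0x00
#define NET_XMIT_DROP           0x01    /* skb dropped */
#define NET_XMIT_CN             0x02    /* congestion notification */
#define NET_XMIT_POLICED        0x03    /* skb shot by the policer */
#define NET_XMIT_MASK           0x0f    /* low nibble: qdisc verdicts */

#define NETDEV_TX_MASK          0xf0    /* high nibble: driver verdicts */
#define NETDEV_TX_OK            0x00    /* driver took care of the packet */
#define NETDEV_TX_BUSY          0x10    /* driver tx path was busy */
#define NETDEV_TX_LOCKED        0x20    /* driver tx lock already taken */

/* negative return values are errnos, e.g. -EHOSTUNREACH from a tunnel */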
In the case of qdisc_restart(), all non-NETDEV_TX codes are mapped to
NETDEV_TX_OK, since no error propagation is possible when using qdiscs. In the
case of dev_queue_xmit(), the error is propagated upwards.
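With the code spaces disjoint, the hypothetical layered device from above could
simply hand the lower device's verdict through; a minimal sketch (same made-up
lower_dev() helper, return-type details glossed over):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Sketch only: the verdict of the lower device is now meaningful to the
 * callers of dev_hard_start_xmit(). sch_direct_xmit() masks NET_XMIT
 * codes back to NETDEV_TX_OK (see the diff below), while dev_queue_xmit()
 * passes them, and errnos, up to its own caller.
 */
static netdev_tx_t vdev_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        skb->dev = lower_dev(dev);      /* made-up helper */
        return dev_queue_xmit(skb);     /* errno, NET_XMIT or NETDEV_TX code */
}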
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/sched')
-rw-r--r-- | net/sched/sch_generic.c | 9 |
1 file changed, 8 insertions, 1 deletion
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 4ae6aa5..b13821a 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -120,8 +120,15 @@ int sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q,
 
         HARD_TX_LOCK(dev, txq, smp_processor_id());
         if (!netif_tx_queue_stopped(txq) &&
-            !netif_tx_queue_frozen(txq))
+            !netif_tx_queue_frozen(txq)) {
                 ret = dev_hard_start_xmit(skb, dev, txq);
+
+                /* an error implies that the skb was consumed */
+                if (ret < 0)
+                        ret = NETDEV_TX_OK;
+                /* all NET_XMIT codes map to NETDEV_TX_OK */
+                ret &= ~NET_XMIT_MASK;
+        }
         HARD_TX_UNLOCK(dev, txq);
 
         spin_lock(root_lock);