author     Eric Dumazet <edumazet@google.com>               2013-08-28 18:10:43 -0700
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2013-09-14 06:54:56 -0700
commit     56a12acebcbd08342f7287a5870fe7ec2c0de91a (patch)
tree       560453facee78ddfafdf617a6a54503ab0dae25c /net/core
parent     8db07b82b70897d868d864402b43a68da5e0cd59 (diff)
net: revert 8728c544a9c ("net: dev_pick_tx() fix")
[ Upstream commit 702821f4ea6f68db18aa1de7d8ed62c6ba586a64 ]

commit 8728c544a9cbdc ("net: dev_pick_tx() fix") and commit b6fe83e9525a
("bonding: refine IFF_XMIT_DST_RELEASE capability") are quite incompatible:
queue selection is disabled because the skb dst was dropped before entering
the bonding device.

This causes a major performance regression, mainly because TCP packets for a
given flow can be sent to multiple queues.

This is particularly visible when using the new FQ packet scheduler with an
MQ + FQ setup on the slaves.

We can safely revert the first commit now that 416186fbf8c5b ("net: Split
core bits of netdev_pick_tx into __netdev_pick_tx") properly caps the
queue_index.

Reported-by: Xi Wang <xii@google.com>
Diagnosed-by: Xi Wang <xii@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Denys Fedorysychenko <nuclearcat@nuclearcat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
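Why the old test effectively never fires once the dst is gone can be seen in a short sketch. The block below is a hedged illustration, not code from this commit: the helpers old_check()/new_check() and their simplified arguments are assumptions made for the example, and only the two conditions mirror the diff further down.

#include <linux/skbuff.h>
#include <net/dst.h>
#include <net/sock.h>

/*
 * Illustrative sketch only. Before this revert, caching the tx queue in the
 * socket required the skb to still carry the socket's cached dst. Since
 * b6fe83e9525a the skb dst is dropped before the skb enters the bonding
 * device, so by the time a slave queue is picked the comparison fails
 * (skb_dst(skb) is typically NULL here) and nothing is ever cached.
 */
static void old_check(struct sk_buff *skb, struct sock *sk, u16 queue_index)
{
	struct dst_entry *dst = rcu_dereference_check(sk->sk_dst_cache, 1);

	if (dst && skb_dst(skb) == dst)
		sk_tx_queue_set(sk, queue_index);
}

static void new_check(struct sk_buff *skb, struct sock *sk, u16 queue_index)
{
	/* Only require that the socket has a cached dst at all; the skb's
	 * own (possibly dropped) dst no longer matters, so the choice sticks. */
	if (rcu_access_pointer(sk->sk_dst_cache))
		sk_tx_queue_set(sk, queue_index);
}

With the cache never populated, the queue is re-selected for every packet, so a flow's packets can land on different tx queues (for example when XPS maps different sending CPUs to different queues), which is the regression described above.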
Diffstat (limited to 'net/core')
-rw-r--r--  net/core/flow_dissector.c  11
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 00ee068efc1c..c99cc371bbd7 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -345,14 +345,9 @@ u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
 		if (new_index < 0)
 			new_index = skb_tx_hash(dev, skb);
 
-		if (queue_index != new_index && sk) {
-			struct dst_entry *dst =
-			    rcu_dereference_check(sk->sk_dst_cache, 1);
-
-			if (dst && skb_dst(skb) == dst)
-				sk_tx_queue_set(sk, queue_index);
-
-		}
+		if (queue_index != new_index && sk &&
+		    rcu_access_pointer(sk->sk_dst_cache))
+			sk_tx_queue_set(sk, queue_index);
 
 		queue_index = new_index;
 	}
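For context, assuming 416186fbf8c5b is already present in this tree, __netdev_pick_tx() ends up roughly as sketched below after the revert. This is a reconstruction for illustration, not a verbatim copy; see net/core/flow_dissector.c in the tree for the authoritative code.

/* Rough, reconstructed shape of __netdev_pick_tx() after this revert. */
u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
{
	struct sock *sk = skb->sk;
	int queue_index = sk_tx_queue_get(sk);

	if (queue_index < 0 || skb->ooo_okay ||
	    queue_index >= dev->real_num_tx_queues) {
		int new_index = get_xps_queue(dev, skb);

		if (new_index < 0)
			new_index = skb_tx_hash(dev, skb);

		/* Cache a tx queue for the socket only while it still holds
		 * a cached dst; a stale or out-of-range cached index is
		 * already caught by the capping test above. */
		if (queue_index != new_index && sk &&
		    rcu_access_pointer(sk->sk_dst_cache))
			sk_tx_queue_set(sk, queue_index);

		queue_index = new_index;
	}

	return queue_index;
}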