author     Paolo Abeni <pabeni@redhat.com>          2017-06-21 10:24:40 +0200
committer  David S. Miller <davem@davemloft.net>    2017-06-21 11:38:11 -0400
commit     dd99e425be23294a9a91b365bd04f9b255fb72e8
tree       0bd383473708ab4a183c3a138b6cee363ae4d569 /net/ipv4
parent     da2e9cf03b8fccbb69dd1e215bb1e554ce8e8cbe
udp: prefetch rmem_alloc in udp_queue_rcv_skb()
During UDP packet processing, if the BH is the bottleneck, it always takes a
cache miss when updating rmem_alloc; try to avoid it by prefetching the value
as soon as the socket is available.

Performance under flood with multiple NIC rx queues in use is unaffected, but
when a single NIC rx queue is in use this gives a ~10% performance improvement.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
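For context, a minimal user-space sketch of the pattern the patch applies (the
rx_sock, checksum and rx_one names are illustrative stand-ins, not kernel
code): issue the prefetch as soon as the structure pointer is known, so the
cache miss overlaps with the filter/checksum work done before the counter is
finally written.

```c
#include <stddef.h>
#include <stdint.h>

struct rx_sock {
	char other_fields[192];	/* fields that live on earlier cache lines */
	int rmem_alloc;		/* counter only written late in the path   */
};

/* Trivial stand-in for the per-packet checksum work. */
static uint32_t checksum(const uint8_t *p, size_t len)
{
	uint32_t sum = 0;

	while (len--)
		sum += *p++;
	return sum;
}

static int rx_one(struct rx_sock *sk, const uint8_t *data, size_t len)
{
	/* Start pulling the rmem_alloc cache line in now... */
	__builtin_prefetch(&sk->rmem_alloc);

	/* ...so the checksum work below hides (part of) the miss latency. */
	if (checksum(data, len) == 0)
		return -1;

	/* By the time the counter is updated, the line is likely cached. */
	sk->rmem_alloc += (int)len;
	return 0;
}
```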
Diffstat (limited to 'net/ipv4')
 net/ipv4/udp.c | 1 +
 1 file changed, 1 insertion(+)
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index f3450f092d71..067a607917f9 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1949,6 +1949,7 @@ static int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 		}
 	}
 
+	prefetch(&sk->sk_rmem_alloc);
 	if (rcu_access_pointer(sk->sk_filter) &&
 	    udp_lib_checksum_complete(skb))
 		goto csum_error;
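The prefetched field is not written by udp_queue_rcv_skb() itself; it is
charged later, when the skb is accounted to the socket receive queue. A hedged
stand-in for that consumer step is sketched below; the field names mirror the
kernel ones, but the charge_sock type and the control flow are simplified
assumptions rather than the actual enqueue code.

```c
#include <stdatomic.h>

struct charge_sock {
	atomic_int sk_rmem_alloc;	/* receive-memory counter (stand-in) */
	int sk_rcvbuf;			/* receive buffer limit (stand-in)   */
};

/* Charge 'truesize' bytes of a received packet to the socket.  This is the
 * first real access to sk_rmem_alloc on the receive path, so the earlier
 * prefetch has the whole filter/checksum stage in which to complete. */
static int charge_rmem(struct charge_sock *sk, int truesize)
{
	/* cheap read-only check before the costly atomic read-modify-write */
	if (atomic_load(&sk->sk_rmem_alloc) > sk->sk_rcvbuf)
		return -1;	/* receive buffer full, drop */

	/* the update the commit message refers to: without the earlier
	 * prefetch this read-modify-write typically misses in cache */
	atomic_fetch_add(&sk->sk_rmem_alloc, truesize);
	return 0;
}
```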