author    Chuck Lever <chuck.lever@oracle.com>    2016-11-29 11:04:50 -0500
committer J. Bruce Fields <bfields@redhat.com>    2016-11-30 17:31:13 -0500
commit    e4eb42cecc6dc546aac888ee4913d59121e886ee (patch)
tree      d78a0e21882dfb0ba3046a6f6962285d3efac9ef /include/linux/sunrpc
parent    5fdca6531434c1c1b2d584873afdda52e5ad448c (diff)
svcrdma: Remove BH-disabled spin locking in svc_rdma_send()
svcrdma's current SQ accounting algorithm takes sc_lock and disables
bottom-halves while posting all RDMA Read, Write, and Send WRs. This
is relatively heavyweight serialization. And note that Write and Send
are already fully serialized by the xpt_mutex.

Using a single atomic_t should be all that is necessary to guarantee
that ib_post_send() is called only when there is enough space on the
send queue. This is what the other RDMA-enabled storage targets do.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
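For illustration only (the caller-side change lands outside this header;
the diffstat below covers only include/linux/sunrpc), the accounting
pattern the message describes can be sketched roughly as follows.
example_post() and wr_count are hypothetical names; sc_sq_avail, sc_qp,
and ib_post_send() come from the patch and the existing struct:

/*
 * Hypothetical sketch of atomic SQ accounting: reserve send queue
 * entries with a single atomic_t instead of a BH-disabled spin lock.
 * example_post() and wr_count are illustrative, not from the patch.
 */
static int example_post(struct svcxprt_rdma *rdma,
			struct ib_send_wr *wr, int wr_count)
{
	struct ib_send_wr *bad_wr;
	int ret;

	/* Reserve wr_count SQEs; a negative result means the send
	 * queue is full and the reservation must be backed out. */
	if (atomic_sub_return(wr_count, &rdma->sc_sq_avail) < 0) {
		atomic_add(wr_count, &rdma->sc_sq_avail);
		return -EAGAIN;	/* caller waits for completions */
	}

	ret = ib_post_send(rdma->sc_qp, wr, &bad_wr);
	if (ret)	/* post failed: return the reserved SQEs */
		atomic_add(wr_count, &rdma->sc_sq_avail);
	return ret;
}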
Diffstat (limited to 'include/linux/sunrpc')
-rw-r--r--  include/linux/sunrpc/svc_rdma.h  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index 6aef63b9a669..601cb07aa746 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -139,7 +139,7 @@ struct svcxprt_rdma {
 	int sc_max_sge_rd;		/* max sge for read target */
 	bool sc_snd_w_inv;		/* OK to use Send With Invalidate */
-	atomic_t sc_sq_count;		/* Number of SQ WR on queue */
+	atomic_t sc_sq_avail;		/* SQEs ready to be consumed */
 	unsigned int sc_sq_depth;	/* Depth of SQ */
 	unsigned int sc_rq_depth;	/* Depth of RQ */
 	u32 sc_max_requests;		/* Forward credits */
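
The renamed counter is replenished on the completion path. A minimal
hedged sketch, assuming one SQE per completed WR and using the struct's
existing sc_send_wait waitqueue; example_send_done() is a hypothetical
name, not the handler this patch series installs:

/*
 * Hypothetical completion handler sketch: each Send completion
 * returns one SQE to sc_sq_avail and wakes any waiting poster.
 */
static void example_send_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct svcxprt_rdma *rdma = cq->cq_context;

	atomic_inc(&rdma->sc_sq_avail);
	wake_up(&rdma->sc_send_wait);
}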