path: root/include/rdma/rdma_vt.h
author    Michael J. Ruhl <michael.j.ruhl@intel.com>    2018-09-10 09:49:27 -0700
committer Jason Gunthorpe <jgg@mellanox.com>            2018-09-11 09:55:02 -0600
commit    0b79b27748cbec221e1ceabf63578198602bf01d (patch)
tree      a62ba4181d233bc314b46b3d12f62fe68e34d5b4 /include/rdma/rdma_vt.h
parent    3e5d60bcc8a42bfd0c888a0cf52a5a7e8398677d (diff)
IB/{hfi1, qib, rdmavt}: Schedule multi RC/UC packets instead of posting
The post_send() path determines if it should post directly or schedule the post for later. The current logic is:

	if the swqe ring is empty or (for hfi1) wqe->length <= piothreshold
		post the send
	else
		schedule

This can allow large requests to call the send engine directly. Large requests can potentially produce a large number of packets prior to returning to the caller, blocking the caller from posting more requests and preventing better parallel processing.

Allow the driver(s) more say in this logic (pass call_send to the driver, rather than examining a return value).

Update hfi1/qib logic to schedule the send engine if an RC or UC message is larger than the QP MTU size.

Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
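To illustrate the new contract, here is a minimal, hypothetical driver-side callback in the spirit of the hfi1/qib behaviour described above. It is a sketch only, not the driver code from this series; the example_* name is invented, and the use of wqe->length, qp->pmtu and qp->ibqp.qp_type assumes the usual rdmavt/verbs field layout.

	/* Hypothetical driver callback: validate the wqe and decide post vs schedule. */
	static int example_check_send_wqe(struct rvt_qp *qp, struct rvt_swqe *wqe,
					  bool *call_send)
	{
		switch (qp->ibqp.qp_type) {
		case IB_QPT_RC:
		case IB_QPT_UC:
			/*
			 * An RC/UC request larger than the QP MTU will generate
			 * multiple packets; ask rdmavt to schedule the send engine
			 * rather than run it from the posting context.
			 */
			if (wqe->length > qp->pmtu)
				*call_send = false;
			break;
		default:
			break;
		}

		return 0;	/* 0: wqe accepted; a negative errno would reject it */
	}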
Diffstat (limited to 'include/rdma/rdma_vt.h')
-rw-r--r--   include/rdma/rdma_vt.h   10
1 file changed, 8 insertions(+), 2 deletions(-)
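Before the hunk itself, the following sketch shows how the rdmavt posting path could consume the driver-filled flag. It is simplified and not the actual rvt_post_send() implementation; example_post_send() and example_setup_swqe() are invented names, and do_send is assumed to be the driver's direct-send hook alongside the schedule_send callbacks visible in the hunk below.

	/* Simplified sketch of the posting decision; not the real rvt_post_send(). */
	static int example_post_send(struct rvt_dev_info *rdi, struct rvt_qp *qp,
				     const struct ib_send_wr *wr)
	{
		/* Start out willing to post directly only if the swqe ring is empty. */
		bool call_send = (qp->s_head == qp->s_last);
		int err;

		for (; wr; wr = wr->next) {
			struct rvt_swqe *wqe = example_setup_swqe(qp, wr); /* hypothetical helper */

			if (IS_ERR(wqe))
				return PTR_ERR(wqe);

			/* The driver validates the wqe and may clear call_send. */
			err = rdi->driver_f.check_send_wqe(qp, wqe, &call_send);
			if (err < 0)
				return err;
		}

		if (call_send)
			rdi->driver_f.do_send(qp);	/* assumed direct-send hook */
		else
			rdi->driver_f.schedule_send(qp);

		return 0;
	}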
diff --git a/include/rdma/rdma_vt.h b/include/rdma/rdma_vt.h
index e79229a0cf01..e32facdd9fd3 100644
--- a/include/rdma/rdma_vt.h
+++ b/include/rdma/rdma_vt.h
@@ -214,8 +214,14 @@ struct rvt_driver_provided {
 	void (*schedule_send)(struct rvt_qp *qp);
 	void (*schedule_send_no_lock)(struct rvt_qp *qp);
 
-	/* Driver specific work request checking */
-	int (*check_send_wqe)(struct rvt_qp *qp, struct rvt_swqe *wqe);
+	/*
+	 * Validate the wqe. This needs to be done prior to inserting the
+	 * wqe into the ring, but after the wqe has been set up. Allow for
+	 * driver specific work request checking by providing a callback.
+	 * call_send indicates if the wqe should be posted or scheduled.
+	 */
+	int (*check_send_wqe)(struct rvt_qp *qp, struct rvt_swqe *wqe,
+			      bool *call_send);
 
 	/*
 	 * Sometimes rdmavt needs to kick the driver's send progress. That is