| author | Tariq Toukan <tariqt@nvidia.com> | 2022-07-27 12:43:42 +0300 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2022-07-28 21:50:54 -0700 |
| commit | 7adc91e0c93901a0eeeea10665d0feb48ffde2d4 (patch) | |
| tree | ef573dc17f1c5fbc6842a2b42f7cb2223615bc06 /include/net | |
| parent | 113671b255ee3b9f5585a6d496ef0e675e698698 (diff) | |
net/tls: Multi-threaded calls to TX tls_dev_del
Multiple TLS device-offloaded contexts can be added in parallel via
concurrent calls to .tls_dev_add, while calls to .tls_dev_del are
sequential in tls_device_gc_task.
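For illustration only, the serialized deletion path described above looks roughly like the following sketch: one global list of contexts pending teardown, drained by a single work item. The list, lock, and function names here are illustrative placeholders, not the exact pre-patch code.

```c
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <net/tls.h>

/* Global list of contexts awaiting teardown, drained by one work item. */
static LIST_HEAD(gc_list);
static DEFINE_SPINLOCK(gc_lock);

static void gc_task(struct work_struct *work)
{
	struct tls_context *ctx, *tmp;
	LIST_HEAD(local);

	spin_lock_bh(&gc_lock);
	list_splice_init(&gc_list, &local);
	spin_unlock_bh(&gc_lock);

	/* Every .tls_dev_del call happens here, one after another, on a
	 * single work item -- this is the serialization the patch removes.
	 */
	list_for_each_entry_safe(ctx, tmp, &local, list) {
		list_del(&ctx->list);
		ctx->netdev->tlsdev_ops->tls_dev_del(ctx->netdev, ctx,
						     TLS_OFFLOAD_CTX_DIR_TX);
	}
}
static DECLARE_WORK(gc_work, gc_task);
```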
This is not sustainable behavior: it creates a rate gap between the add
and del operations (the addition rate outpaces the deletion rate). When
running for long enough, the TLS device resources can get exhausted,
and offloading new connections fails.
Replace the single-threaded garbage collector work with a per-context
alternative, so that deletions can be handled on several cores in
parallel. Use a new dedicated destruct workqueue for this.
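A minimal kernel-C sketch of this per-context scheme follows. The destruct_work and ctx fields match the header change in the diff below; the function names, the workqueue name, and the omission of netdev refcounting/locking are simplifications for illustration, not the patch's exact code.

```c
#include <linux/netdevice.h>
#include <linux/workqueue.h>
#include <net/tls.h>

/* Dedicated destruct workqueue; the name is an assumption. */
static struct workqueue_struct *destruct_wq;

static void tx_del_task(struct work_struct *work)
{
	struct tls_offload_context_tx *offload_ctx =
		container_of(work, struct tls_offload_context_tx, destruct_work);
	struct tls_context *tls_ctx = offload_ctx->ctx;
	struct net_device *netdev = tls_ctx->netdev;

	/* Per-context teardown: many of these work items can run in
	 * parallel on the dedicated workqueue, unlike the old
	 * single-threaded tls_device_gc_task.
	 */
	netdev->tlsdev_ops->tls_dev_del(netdev, tls_ctx,
					TLS_OFFLOAD_CTX_DIR_TX);
}

static void offload_ctx_setup(struct tls_context *tls_ctx,
			      struct tls_offload_context_tx *offload_ctx)
{
	/* Arm the per-context work once, when the offload context is
	 * created; the two struct fields come from the diff below.
	 */
	INIT_WORK(&offload_ctx->destruct_work, tx_del_task);
	offload_ctx->ctx = tls_ctx;
}

static void offload_ctx_destruct(struct tls_offload_context_tx *offload_ctx)
{
	/* Instead of appending the context to a global list, queue its
	 * own work item; netdev refcounting/locking is omitted here.
	 */
	queue_work(destruct_wq, &offload_ctx->destruct_work);
}
```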
Tested with an mlx5 device:
Before: 22141 add/sec, 103 del/sec
After: 11684 add/sec, 11684 del/sec
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'include/net')
-rw-r--r-- | include/net/tls.h | 2 |
1 file changed, 2 insertions, 0 deletions
```diff
diff --git a/include/net/tls.h b/include/net/tls.h
index abb050b0df83..b75b5727abdb 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -161,6 +161,8 @@ struct tls_offload_context_tx {
 
 	struct scatterlist sg_tx_data[MAX_SKB_FRAGS];
 	void (*sk_destruct)(struct sock *sk);
+	struct work_struct destruct_work;
+	struct tls_context *ctx;
 	u8 driver_state[] __aligned(8);
 	/* The TLS layer reserves room for driver specific state
 	 * Currently the belief is that there is not enough
```
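The two new fields above hook each TX offload context into its own work item; those items need a dedicated workqueue to run on. A minimal sketch of how such a workqueue might be created and torn down follows; the workqueue name, flags, and function names are assumptions for illustration, not lifted from the patch.

```c
#include <linux/errno.h>
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *destruct_wq;

static int __init tls_device_sketch_init(void)
{
	/* One dedicated queue for all per-context destruct works; the
	 * name and flags here are assumptions, not taken from the patch.
	 */
	destruct_wq = alloc_workqueue("tls_device_destruct", 0, 0);
	if (!destruct_wq)
		return -ENOMEM;
	return 0;
}

static void __exit tls_device_sketch_exit(void)
{
	/* Wait for any queued per-context teardowns to finish before
	 * the workqueue goes away.
	 */
	flush_workqueue(destruct_wq);
	destroy_workqueue(destruct_wq);
}

module_init(tls_device_sketch_init);
module_exit(tls_device_sketch_exit);
MODULE_LICENSE("GPL");
```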