author | Guixin Liu <kanie@linux.alibaba.com> | 2024-10-31 10:27:20 +0800
---|---|---
committer | Keith Busch <kbusch@kernel.org> | 2024-11-05 08:36:18 -0800
commit | c74649b6e400edae67eba56e5285a92619dfb647 (patch)
tree | e76d2073bd349cd9eca04ccb057eccdc56f552a4 /drivers/nvme
parent | 63a5c7a4b4c49ad86c362e9f555e6f343804ee1d (diff)
nvmet: make nvmet_wq visible in sysfs
In some complex scenarios we deploy multiple tasks on a single machine
(hybrid deployment): Docker containers for function computation
(background processing), real-time tasks, monitoring, event handling,
and management, alongside an NVMe target server.
Each of these components is restricted to its own CPU cores to prevent
mutual interference and ensure strict isolation. To achieve this level
of isolation for nvmet_wq, we need sysfs tunables such as cpumask,
which are currently not accessible.
Add the WQ_SYSFS flag to alloc_workqueue() when creating nvmet_wq so
that the workqueue tunables are exported to userspace via sysfs.
With this patch:
nvme (nvme-6.13) # ls /sys/devices/virtual/workqueue/nvmet-wq/
affinity_scope affinity_strict cpumask max_active nice per_cpu
power subsystem uevent
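
Once the attributes are exposed, the workqueue can be confined like any
other WQ_SYSFS workqueue. A minimal sketch, assuming the nvmet module is
loaded on a kernel with this patch and that CPUs 0-3 are the cores
reserved for the NVMe target (the mask value is illustrative):

```shell
# Restrict nvmet_wq workers to CPUs 0-3 (hex mask 0xf).
# Requires root; the path exists only when nvmet is loaded
# and the workqueue was created with WQ_SYSFS.
echo f > /sys/devices/virtual/workqueue/nvmet-wq/cpumask

# Verify the new affinity mask.
cat /sys/devices/virtual/workqueue/nvmet-wq/cpumask
```

The same directory also exposes nice, max_active, affinity_scope, and
affinity_strict, so priority and affinity policy can be tuned per
deployment without kernel changes.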
Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
Diffstat (limited to 'drivers/nvme')
-rw-r--r-- | drivers/nvme/target/core.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index ed2424f8a396..15b25f464e77 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -1717,7 +1717,7 @@ static int __init nvmet_init(void)
 		goto out_free_zbd_work_queue;
 
 	nvmet_wq = alloc_workqueue("nvmet-wq",
-			WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
+			WQ_MEM_RECLAIM | WQ_UNBOUND | WQ_SYSFS, 0);
 	if (!nvmet_wq)
 		goto out_free_buffered_work_queue;