| author | Ming Lei <ming.lei@redhat.com> | 2019-04-28 15:39:32 +0800 |
|---|---|---|
| committer | Martin K. Petersen <martin.petersen@oracle.com> | 2019-06-20 15:21:33 -0400 |
| commit | 3dccdf53c2f38399b11085ded4447ce1467f006c (patch) | |
| tree | d5415d9704bd5dc2151e36363e5b6cfa0a1dd087 /lib/test_ubsan.c | |
| parent | 92524fa12312d1f082a473e14c590c48b4ef3fe5 (diff) | |
scsi: core: avoid preallocating big SGL for data
scsi_mq_setup_tags() preallocates a big buffer for the IO SGL. The buffer
size comes from scsi_mq_sgl_size(), which is derived from
shost->sg_tablesize and SG_CHUNK_SIZE.
Modern DMA engines are often capable of dealing with very big segments, so
the resulting scsi_mq_sgl_size() is often too big. SG_CHUNK_SIZE results in
a static 4KB SGL allocation per command.
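
For reference, the sizing described above amounts to something like the
following (a minimal sketch; example_sgl_size() is a stand-in name, not the
literal helper in scsi_lib.c):

```c
#include <linux/kernel.h>
#include <linux/scatterlist.h>

/*
 * Sketch of the per-command SGL sizing: the preallocated buffer holds
 * min(sg_tablesize, SG_CHUNK_SIZE) scatterlist entries.  With
 * SG_CHUNK_SIZE == 128 and a 32-byte struct scatterlist on 64-bit,
 * the cap works out to 128 * 32 = 4096 bytes (4KB) per command.
 */
static inline unsigned int example_sgl_size(unsigned short sg_tablesize)
{
	return min_t(unsigned int, sg_tablesize, SG_CHUNK_SIZE) *
		sizeof(struct scatterlist);
}
```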
If an HBA has lots of deep queues, preallocating the sg lists can consume a
substantial amount of memory. For lpfc, nr_hw_queues can be 70 and each
queue's depth 3781, so the resulting preallocation for the data SGL is
70*3781*2K = 517MB.
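
The arithmetic behind that number, spelled out as a small standalone C
sketch (the 2K per-command data SGL is the value quoted above for lpfc):

```c
#include <stdio.h>

int main(void)
{
	/* Numbers quoted in the commit message for lpfc. */
	unsigned long long nr_hw_queues = 70;
	unsigned long long queue_depth  = 3781;
	unsigned long long sgl_bytes    = 2 * 1024;	/* 2K data SGL per command */

	unsigned long long total = nr_hw_queues * queue_depth * sgl_bytes;

	/* Prints "542044160 bytes (~517 MB)". */
	printf("%llu bytes (~%.0f MB)\n", total, total / (1024.0 * 1024.0));
	return 0;
}
```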
Switch to runtime SGL allocation for lists longer than 2 entries. This is
the approach used by NVMe PCI, so it should be reasonable for SCSI as
well. Runtime SGL allocation has always been the case for the legacy I/O
path, so this is nothing new.
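
Below is a simplified sketch of that allocation policy. The names
(example_cmd, example_cmd_setup_sgl()) are made up for illustration, and
the real SCSI code goes through the kernel's chained scatterlist / sg_pool
helpers rather than a bare kmalloc_array(); the point is only the split
between a tiny inline SGL and a runtime allocation for anything bigger.

```c
#include <linux/errno.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

#define EXAMPLE_INLINE_SG_CNT	2	/* small inline SGL kept in the command */

/* Hypothetical per-command state used only for this sketch. */
struct example_cmd {
	struct scatterlist inline_sg[EXAMPLE_INLINE_SG_CNT];
	struct scatterlist *sgl;	/* inline_sg or a runtime allocation */
	unsigned int sg_count;
};

/*
 * Requests with at most two segments use the preallocated inline entries;
 * bigger requests get their SGL allocated at runtime.
 */
static int example_cmd_setup_sgl(struct example_cmd *cmd, unsigned int nents)
{
	if (nents <= EXAMPLE_INLINE_SG_CNT) {
		cmd->sgl = cmd->inline_sg;
	} else {
		cmd->sgl = kmalloc_array(nents, sizeof(*cmd->sgl), GFP_ATOMIC);
		if (!cmd->sgl)
			return -ENOMEM;
	}

	sg_init_table(cmd->sgl, nents);
	cmd->sg_count = nents;
	return 0;
}

static void example_cmd_free_sgl(struct example_cmd *cmd)
{
	/* Only the runtime allocation needs freeing. */
	if (cmd->sgl != cmd->inline_sg)
		kfree(cmd->sgl);
}
```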
[mkp: attempted to clarify commit desc]
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Ewan D. Milne <emilne@redhat.com>
Cc: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>