author		Jason Gunthorpe <jgg@nvidia.com>	2023-10-17 15:11:44 -0300
committer	Will Deacon <will@kernel.org>	2023-12-13 13:09:22 +0000
commit		9b3febc3a3da7fcd81ece10614b7fd6c729ba8b4 (patch)
tree		9c19620996fe726ea86965e4378bc0366cbca319 /drivers/iommu
parent		e0976331ad114af8e379e18483c346c6c79ca858 (diff)
download	lwn-9b3febc3a3da7fcd81ece10614b7fd6c729ba8b4.tar.gz
		lwn-9b3febc3a3da7fcd81ece10614b7fd6c729ba8b4.zip
iommu/arm-smmu: Convert to domain_alloc_paging()
Now that the BLOCKED and IDENTITY behaviors are managed with their own
domains, change to the domain_alloc_paging() op.
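
For reference, the shape of the two ops in struct iommu_ops (a sketch based
on the ops table of this era; the comments are added here for illustration
and are not part of the patch):

	/* Old: one op, multiplexed on the requested domain type */
	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);

	/* New: a dedicated op that only ever produces paging domains.
	 * The device pointer (which may be NULL) lets the driver
	 * finalise the domain context early, as this patch does below. */
	struct iommu_domain *(*domain_alloc_paging)(struct device *dev);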
The check for using_legacy_binding is now redundant:
arm_smmu_def_domain_type() always returns IOMMU_DOMAIN_IDENTITY for this
mode, so the core code will never attempt to create a DMA domain in the
first place.
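
For illustration, an abridged sketch of arm_smmu_def_domain_type() as of
this series (not part of this patch; see the tree for the exact body):

	static int arm_smmu_def_domain_type(struct device *dev)
	{
		struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);
		const struct arm_smmu_impl *impl = cfg->smmu->impl;

		/* Legacy DT binding: force IDENTITY, so the core never
		 * asks this driver for a DMA domain in that mode. */
		if (using_legacy_binding)
			return IOMMU_DOMAIN_IDENTITY;

		if (impl && impl->def_domain_type)
			return impl->def_domain_type(dev);

		return 0;
	}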
Since commit a4fdd9762272 ("iommu: Use flush queue capability") the core
code only passes in IDENTITY/BLOCKED/UNMANAGED/DMA domain types. It will
not pass in IDENTITY or BLOCKED if the global statics exist, so the test
for DMA is now redundant too.
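
Roughly, the core-side selection looks like this (a hedged sketch of the
__iommu_domain_alloc() logic after that commit; names and placement are
approximate):

	/* Static special domains are handed out directly... */
	if (req_type == IOMMU_DOMAIN_IDENTITY && ops->identity_domain)
		return ops->identity_domain;
	if (req_type == IOMMU_DOMAIN_BLOCKED && ops->blocked_domain)
		return ops->blocked_domain;

	/* ...so only UNMANAGED and DMA reach the driver, and both are
	 * paging domains, which is what makes the old type checks moot. */
	return ops->domain_alloc_paging(dev);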
Call arm_smmu_init_domain_context() early if a dev is available.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/5-v2-c86cc8c2230e+160bb-smmu_newapi_jgg@nvidia.com
[will: Simplify arm_smmu_domain_alloc_paging() since 'cfg' cannot be NULL]
Signed-off-by: Will Deacon <will@kernel.org>
Diffstat (limited to 'drivers/iommu')
-rw-r--r--	drivers/iommu/arm/arm-smmu/arm-smmu.c	17
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 664d53dfb3bd..b0a6b367d8a2 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -859,14 +859,10 @@ static void arm_smmu_destroy_domain_context(struct arm_smmu_domain *smmu_domain)
 	arm_smmu_rpm_put(smmu);
 }
 
-static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
+static struct iommu_domain *arm_smmu_domain_alloc_paging(struct device *dev)
 {
 	struct arm_smmu_domain *smmu_domain;
 
-	if (type != IOMMU_DOMAIN_UNMANAGED) {
-		if (using_legacy_binding || type != IOMMU_DOMAIN_DMA)
-			return NULL;
-	}
 	/*
 	 * Allocate the domain and initialise some of its data structures.
 	 * We can't really do anything meaningful until we've added a
@@ -879,6 +875,15 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 	mutex_init(&smmu_domain->init_mutex);
 	spin_lock_init(&smmu_domain->cb_lock);
 
+	if (dev) {
+		struct arm_smmu_master_cfg *cfg = dev_iommu_priv_get(dev);
+
+		if (arm_smmu_init_domain_context(smmu_domain, cfg->smmu, dev)) {
+			kfree(smmu_domain);
+			return NULL;
+		}
+	}
+
 	return &smmu_domain->domain;
 }
 
@@ -1603,7 +1608,7 @@ static struct iommu_ops arm_smmu_ops = {
 	.identity_domain = &arm_smmu_identity_domain,
 	.blocked_domain = &arm_smmu_blocked_domain,
 	.capable = arm_smmu_capable,
-	.domain_alloc = arm_smmu_domain_alloc,
+	.domain_alloc_paging = arm_smmu_domain_alloc_paging,
 	.probe_device = arm_smmu_probe_device,
 	.release_device = arm_smmu_release_device,
 	.probe_finalize = arm_smmu_probe_finalize,