author     Jason Gunthorpe <jgg@nvidia.com>      2024-08-29 21:06:18 -0300
committer  Joerg Roedel <jroedel@suse.de>        2024-09-04 11:39:00 +0200
commit     9ac0b3380acdece01fa1b361687e3cd988831c55
tree       33ed0185b2a5ee44d2f3a76f1b66b6f0e0204ca0  /drivers/iommu/amd/io_pgtable_v2.c
parent     47f218d108950984b24af81f66356ceda380eb74
iommu/amd: Narrow the use of struct protection_domain to invalidation
The AMD io_pgtable code doesn't implement the tlb ops callbacks;
instead it invokes the invalidation ops directly on the struct
protection_domain (a sketch of the callback-based alternative follows
the tags below). Narrow the use of struct protection_domain to only
those few code paths. Make everything else properly use struct
amd_io_pgtable through the call chains, which is the correct modular
type for an io-pgtable module.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Link: https://lore.kernel.org/r/9-v2-831cdc4d00f3+1a315-amd_iopgtbl_jgg@nvidia.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
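
The first paragraph of the message is the crux: the AMD v2 io_pgtable never fills in the io-pgtable framework's struct iommu_flush_ops; it calls the protection_domain invalidation helpers directly. For contrast, here is a minimal, hypothetical sketch of what the callback wiring would look like if it did. None of this exists in the driver: the v2_* names are invented, amd_iommu_domain_flush_all() is assumed to be the whole-domain flush helper, and only struct iommu_flush_ops (from <linux/io-pgtable.h>), amd_iommu_domain_flush_pages(), and the cookie convention are taken from real code.

#include <linux/io-pgtable.h>

/* Hypothetical wiring: the framework invokes these callbacks with the
 * cookie registered at alloc_io_pgtable_ops() time, which a driver
 * written this way would point at its protection_domain. */
static void v2_tlb_flush_all(void *cookie)
{
	amd_iommu_domain_flush_all(cookie);
}

static void v2_tlb_flush_walk(unsigned long iova, size_t size,
			      size_t granule, void *cookie)
{
	amd_iommu_domain_flush_pages(cookie, iova, size);
}

static const struct iommu_flush_ops v2_flush_ops = {
	.tlb_flush_all	= v2_tlb_flush_all,
	.tlb_flush_walk	= v2_tlb_flush_walk,
};

Because the driver skips this layer, the invalidation call in map_pages() is the one place that still has to climb from the ops back to the domain, and that is exactly the reach the patch below confines.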
Diffstat (limited to 'drivers/iommu/amd/io_pgtable_v2.c')
 drivers/iommu/amd/io_pgtable_v2.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
index 1e3be8c5312b..ed2c1faae6d5 100644
--- a/drivers/iommu/amd/io_pgtable_v2.c
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -233,8 +233,8 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
 			      int prot, gfp_t gfp, size_t *mapped)
 {
-	struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
-	struct io_pgtable_cfg *cfg = &pdom->iop.pgtbl.cfg;
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
 	u64 *pte;
 	unsigned long map_size;
 	unsigned long mapped_size = 0;
@@ -251,7 +251,7 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 
 	while (mapped_size < size) {
 		map_size = get_alloc_page_size(pgsize);
-		pte = v2_alloc_pte(cfg->amd.nid, pdom->iop.pgd,
+		pte = v2_alloc_pte(cfg->amd.nid, pgtable->pgd,
 				   iova, map_size, gfp, &updated);
 		if (!pte) {
 			ret = -EINVAL;
@@ -266,8 +266,11 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	}
 
 out:
-	if (updated)
+	if (updated) {
+		struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
+
 		amd_iommu_domain_flush_pages(pdom, o_iova, size);
+	}
 
 	if (mapped)
 		*mapped += mapped_size;
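
The two accessors in the diff encode the narrowing. io_pgtable_ops_to_data() stops at the io-pgtable module's own type; io_pgtable_ops_to_domain() takes one further container_of() step up to the owning domain, and after this patch only the flush site uses it. A rough sketch of that layering, with the struct nesting inferred from the diff itself (pgtbl embedded in struct amd_io_pgtable, iop embedded in struct protection_domain) rather than copied from the AMD headers:

#include <linux/container_of.h>
#include <linux/io-pgtable.h>	/* provides io_pgtable_ops_to_pgtable() */

/* Step 1: from the ops to the module-local page-table type. After this
 * patch, the map/unmap paths stop here. */
#define io_pgtable_ops_to_data(x) \
	container_of(io_pgtable_ops_to_pgtable(x), \
		     struct amd_io_pgtable, pgtbl)

/* Step 2: one more container_of() up to the domain. Only the
 * invalidation call still needs this. */
#define io_pgtable_ops_to_domain(x) \
	container_of(io_pgtable_ops_to_data(x), \
		     struct protection_domain, iop)

Keeping step 2 out of the common paths is what lets the v2 table behave as a self-contained io-pgtable module.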