author	Jason Gunthorpe <jgg@nvidia.com>	2023-07-17 15:12:06 -0300
committer	Jason Gunthorpe <jgg@nvidia.com>	2023-07-26 10:19:57 -0300
commit	70eadc7fc7ef29bfe0e361376983822b5e36dd67 (patch)
tree	26caf3da154c3f07b69fdfd39e5680565145e23f /drivers/iommu/iommufd/hw_pagetable.c
parent	17bad52708b457c4a0cbd7195ad813d6c4308b82 (diff)
iommufd: Allow a hwpt to be aborted after allocation
During creation, the ioas->mutex must be held from the time the hwpt is
allocated until the object is finalized. This means we need to be able to
call iommufd_object_abort_and_destroy() while holding the mutex.
Since iommufd_hw_pagetable_destroy() also needs the mutex, this is
problematic.
Fix it by creating a special abort op for the object that can assume the
caller is holding the lock, as required by the contract.
The next patch will add another call to iommufd_object_abort_and_destroy()
for a hwpt.
Fixes: e8d57210035b ("iommufd: Add kAPI toward external drivers for physical devices")
Link: https://lore.kernel.org/r/10-v8-6659224517ea+532-iommufd_alloc_jgg@nvidia.com
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
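
The locking contract described above implies a specific caller-side pattern. The following is a hedged illustration only, not code from this patch: the surrounding function, the example_use_hwpt() helper, and the trailing arguments passed to iommufd_hw_pagetable_alloc() are assumptions made for the sketch, which presumes the declarations from iommufd_private.h are in scope.

/* Illustrative sketch only: hold ioas->mutex from alloc to finalize/abort. */
static int example_alloc_hwpt(struct iommufd_ctx *ictx,
			      struct iommufd_ioas *ioas,
			      struct iommufd_device *idev)
{
	struct iommufd_hw_pagetable *hwpt;
	int rc = 0;

	/* The mutex must be held from allocation until finalize or abort. */
	mutex_lock(&ioas->mutex);
	hwpt = iommufd_hw_pagetable_alloc(ictx, ioas, idev, false);
	if (IS_ERR(hwpt)) {
		rc = PTR_ERR(hwpt);
		goto out_unlock;
	}

	rc = example_use_hwpt(hwpt);	/* hypothetical further setup */
	if (rc)
		goto out_abort;

	/* Success: publish the object; the mutex can be dropped afterwards. */
	iommufd_object_finalize(ictx, &hwpt->obj);
	goto out_unlock;

out_abort:
	/* Safe under ioas->mutex because the new abort op expects it held. */
	iommufd_object_abort_and_destroy(ictx, &hwpt->obj);
out_unlock:
	mutex_unlock(&ioas->mutex);
	return rc;
}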
Diffstat (limited to 'drivers/iommu/iommufd/hw_pagetable.c')
-rw-r--r--	drivers/iommu/iommufd/hw_pagetable.c	19
1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
index e0699d7f4c64..2087b51d9807 100644
--- a/drivers/iommu/iommufd/hw_pagetable.c
+++ b/drivers/iommu/iommufd/hw_pagetable.c
@@ -25,6 +25,21 @@ void iommufd_hw_pagetable_destroy(struct iommufd_object *obj)
 	refcount_dec(&hwpt->ioas->obj.users);
 }
 
+void iommufd_hw_pagetable_abort(struct iommufd_object *obj)
+{
+	struct iommufd_hw_pagetable *hwpt =
+		container_of(obj, struct iommufd_hw_pagetable, obj);
+
+	/* The ioas->mutex must be held until finalize is called. */
+	lockdep_assert_held(&hwpt->ioas->mutex);
+
+	if (!list_empty(&hwpt->hwpt_item)) {
+		list_del_init(&hwpt->hwpt_item);
+		iopt_table_remove_domain(&hwpt->ioas->iopt, hwpt->domain);
+	}
+	iommufd_hw_pagetable_destroy(obj);
+}
+
 int iommufd_hw_pagetable_enforce_cc(struct iommufd_hw_pagetable *hwpt)
 {
 	if (hwpt->enforce_cache_coherency)
@@ -49,6 +64,10 @@ int iommufd_hw_pagetable_enforce_cc(struct iommufd_hw_pagetable *hwpt)
  * Allocate a new iommu_domain and return it as a hw_pagetable. The HWPT
  * will be linked to the given ioas and upon return the underlying iommu_domain
  * is fully popoulated.
+ *
+ * The caller must hold the ioas->mutex until after
+ * iommufd_object_abort_and_destroy() or iommufd_object_finalize() is called on
+ * the returned hwpt.
  */
 struct iommufd_hw_pagetable *
 iommufd_hw_pagetable_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
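
Because this diffstat is limited to hw_pagetable.c, the hookup of the abort op itself is not visible above. As a rough sketch only, with the dispatch structure assumed rather than taken from this diff, iommufd_object_abort_and_destroy() could prefer a type's abort op (which may rely on locks the creation path still holds) and fall back to the destroy op otherwise:

/* Sketch, structure assumed: dispatch an abort op if the type provides one. */
void iommufd_object_abort_and_destroy(struct iommufd_ctx *ictx,
				      struct iommufd_object *obj)
{
	if (iommufd_object_ops[obj->type].abort)
		/* May assume caller-held locks, e.g. ioas->mutex for a hwpt. */
		iommufd_object_ops[obj->type].abort(obj);
	else
		iommufd_object_ops[obj->type].destroy(obj);
	/* Drop the never-finalized object from the context. */
	iommufd_object_abort(ictx, obj);
}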