path: root/drivers/dma-buf/dma-resv.c
Age  Commit message  Author
2022-04-07  dma-buf: drop seq count based update  (Christian König)
This should be possible now since we don't have the distinction between exclusive and shared fences any more. The only possible pitfall is that a dma_fence would be reused during the RCU grace period, but even that could be handled with a single extra check. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220407085946.744568-15-christian.koenig@amd.com
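As an aside, a minimal sketch of the "single extra check" mentioned here, assuming the reuse problem comes from a SLAB_TYPESAFE_BY_RCU fence slab: dma_fence_get_rcu_safe() re-reads the slot after taking the reference and retries if the fence was recycled in between. The helper name peek_fence() is purely illustrative.

#include <linux/dma-fence.h>
#include <linux/rcupdate.h>

/* Illustrative sketch: dma_fence_get_rcu_safe() internally re-checks the
 * RCU-protected slot after acquiring the reference, so a fence recycled
 * from a SLAB_TYPESAFE_BY_RCU cache during the grace period is detected
 * and the read is retried.
 */
static struct dma_fence *peek_fence(struct dma_fence __rcu **slot)
{
        struct dma_fence *fence;

        rcu_read_lock();
        fence = dma_fence_get_rcu_safe(slot);   /* holds a reference or NULL */
        rcu_read_unlock();

        return fence;
}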
2022-04-07  dma-buf: add DMA_RESV_USAGE_BOOKKEEP v3  (Christian König)
Add a usage for submissions that are independent of implicit sync but still interesting for memory management. v2: cleanup the kerneldoc a bit v3: separate amdgpu changes from this Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220407085946.744568-10-christian.koenig@amd.com
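A hedged sketch of how such a bookkeeping fence might be added, assuming the final form of the interface from this series; track_pt_update() is a made-up helper and the caller is assumed to hold the reservation lock.

#include <linux/dma-resv.h>

/* Sketch: a page-table update is irrelevant for implicit sync, so it is
 * added with DMA_RESV_USAGE_BOOKKEEP; memory management still sees it
 * because a BOOKKEEP query returns fences of every usage class.
 */
static int track_pt_update(struct dma_resv *resv, struct dma_fence *fence)
{
        int ret;

        dma_resv_assert_held(resv);

        ret = dma_resv_reserve_fences(resv, 1);
        if (ret)
                return ret;

        dma_resv_add_fence(resv, fence, DMA_RESV_USAGE_BOOKKEEP);
        return 0;
}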
2022-04-07  dma-buf: add DMA_RESV_USAGE_KERNEL v3  (Christian König)
Add a usage for kernel submissions. Waiting for those is mandatory for dynamic DMA-bufs. As a precaution this patch also changes all occurrences where fences are added as part of memory management in TTM, VMWGFX and i915 to use the new value, because it now becomes possible for drivers to ignore fences with the WRITE usage. v2: use "must" in documentation, fix whitespaces v3: separate out some driver changes and better document why some changes should still be part of this patch. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220407085946.744568-5-christian.koenig@amd.com
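For illustration, a sketch of the wait side under the post-series signature of dma_resv_wait_timeout(); wait_for_kernel_ops() is a hypothetical helper.

#include <linux/dma-resv.h>
#include <linux/sched.h>

/* Sketch: before CPU access, wait only for kernel-internal operations
 * (clears, eviction copies), which every driver must respect, and ignore
 * implicit-sync and bookkeeping fences.
 */
static long wait_for_kernel_ops(struct dma_resv *resv)
{
        return dma_resv_wait_timeout(resv, DMA_RESV_USAGE_KERNEL,
                                     true, MAX_SCHEDULE_TIMEOUT);
}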
2022-04-07  dma-buf & drm/amdgpu: remove dma_resv workaround  (Christian König)
Rework the internals of the dma_resv object to allow adding more than one write fence and remember for each fence what purpose it had. This allows removing the workaround from amdgpu which used a container for this instead. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: amd-gfx@lists.freedesktop.org Link: https://patchwork.freedesktop.org/patch/msgid/20220407085946.744568-4-christian.koenig@amd.com
2022-04-07  dma-buf: specify usage while adding fences to dma_resv obj v7  (Christian König)
Instead of distinguishing between shared and exclusive fences, specify the fence usage while adding fences. Rework all drivers to use this interface instead and deprecate the old one. v2: some kerneldoc comments suggested by Daniel v3: fix a missing case in radeon v4: rebase on nouveau changes, fix lockdep and temporarily disable warning v5: more documentation updates v6: separate internal dma_resv changes from this patch, avoids having to temporarily disable the warning, rebase on upstream changes v7: fix missed case in lima driver, minimize changes to i915_gem_busy_ioctl Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220407085946.744568-3-christian.koenig@amd.com
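Roughly, the interface change looks like the following sketch; publish_job_fence() is a made-up driver helper that assumes the reservation lock is held and a fence slot was reserved beforehand.

#include <linux/dma-resv.h>

/* Sketch: instead of choosing between the exclusive and shared lists,
 * the driver states what the fence means when adding it.
 */
static void publish_job_fence(struct dma_resv *resv, struct dma_fence *fence,
                              bool is_write)
{
        /* deprecated: dma_resv_add_excl_fence(resv, fence);   */
        /* deprecated: dma_resv_add_shared_fence(resv, fence); */
        dma_resv_add_fence(resv, fence,
                           is_write ? DMA_RESV_USAGE_WRITE :
                                      DMA_RESV_USAGE_READ);
}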
2022-04-07  dma-buf: add enum dma_resv_usage v4  (Christian König)
This change adds the dma_resv_usage enum and allows us to specify why a dma_resv object is queried for its containing fences. In addition, a dma_resv_usage_rw() helper function is added to aid retrieving the fences for a read or write userspace submission. This is then deployed to the different query functions of the dma_resv object and all of their users. Where the write parameter was previously true we now use DMA_RESV_USAGE_WRITE, and DMA_RESV_USAGE_READ otherwise. v2: add KERNEL/OTHER in separate patch v3: some kerneldoc suggestions by Daniel v4: some more kerneldoc suggestions by Daniel, fix missing cases lost in the rebase pointed out by Bas. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220407085946.744568-2-christian.koenig@amd.com
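For reference, a sketch of how the enum and the helper ended up looking; the KERNEL and BOOKKEEP values were added by later patches in the series, and the comments are paraphrased rather than quoted from the header.

/* Paraphrased from <linux/dma-resv.h>: the order matters, a query for a
 * given usage also returns every fence with a lower (more important) value.
 */
enum dma_resv_usage {
        DMA_RESV_USAGE_KERNEL,          /* kernel memory management work */
        DMA_RESV_USAGE_WRITE,           /* implicit sync: writes         */
        DMA_RESV_USAGE_READ,            /* implicit sync: reads          */
        DMA_RESV_USAGE_BOOKKEEP,        /* not part of implicit sync     */
};

/* The helper mentioned above: a new write has to wait for existing reads
 * and writes, a new read only for existing writes.
 */
static inline enum dma_resv_usage dma_resv_usage_rw(bool write)
{
        return write ? DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE;
}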
2022-04-06  dma-buf/drivers: make reserving a shared slot mandatory v4  (Christian König)
Audit all the users of dma_resv_add_excl_fence() and make sure they also reserve a shared slot when only trying to add an exclusive fence. This is the next step towards handling the exclusive fence like a shared one. v2: fix missed case in amdgpu v3: and two more in radeon, rename the function v4: add one more case to TTM, fix i915 after rebase Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220406075132.3263-2-christian.koenig@amd.com
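The rule being enforced, as a hedged sketch: the reserve helper was renamed during this series (current kernels call it dma_resv_reserve_fences()), dma_resv_add_excl_fence() itself was later replaced by dma_resv_add_fence(), and attach_excl_fence() is a made-up helper.

#include <linux/dma-resv.h>

/* Sketch: always reserve a slot first, even when only an exclusive fence
 * will be added.
 */
static int attach_excl_fence(struct dma_resv *resv, struct dma_fence *fence)
{
        int ret;

        dma_resv_assert_held(resv);

        ret = dma_resv_reserve_fences(resv, 1);
        if (ret)
                return ret;

        dma_resv_add_excl_fence(resv, fence);
        return 0;
}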
2022-04-05  dma-buf: finally make dma_resv_excl_fence private v2  (Christian König)
Drivers should never touch this directly. v2: fix rebase clash Signed-off-by: Christian König <christian.koenig@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220321135856.1331-10-christian.koenig@amd.com Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2022-04-03  dma-buf: add dma_resv_get_singleton v2  (Christian König)
Add a function to simplify getting a single fence for all the fences in the dma_resv object. v2: fix ref leak in error handling Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220321135856.1331-3-christian.koenig@amd.com
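A short usage sketch, hedged: the signature shown takes an enum dma_resv_usage as in current kernels, while at the time of this patch the filter was still expressed differently; snapshot_fences() is a made-up caller.

#include <linux/dma-resv.h>

/* Sketch: collapse all matching fences into a single fence (internally this
 * may build a dma_fence_array) that can be stored or exported as one unit.
 */
static int snapshot_fences(struct dma_resv *resv, struct dma_fence **out)
{
        return dma_resv_get_singleton(resv, DMA_RESV_USAGE_READ, out);
}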
2022-04-01  dma-buf: drop the DAG approach for the dma_resv object v3  (Christian König)
So far we had the approach of using a directed acyclic graph with the dma_resv obj. This turned out to have many downsides, especially that it means every single driver and user of this interface needs to be aware of this restriction when adding fences. If the rules for the DAG are not followed then we end up with potentially hard-to-debug memory corruption, information leaks or even gigantic security holes because we allow userspace to access freed up memory. Since we already took a step back from that by always looking at all fences, we now go a step further and stop dropping the shared fences when a new exclusive one is added. v2: Drop some now superfluous documentation v3: Add some more documentation for the new handling. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220321135856.1331-11-christian.koenig@amd.com
2022-03-24  dma-buf: finally make the dma_resv_list private v2  (Christian König)
Drivers should never touch this directly. v2: drop kerneldoc for the now-internal handling Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220321135856.1331-2-christian.koenig@amd.com
2022-03-24  dma-buf: add dma_resv_replace_fences v2  (Christian König)
This function allows replacing fences in the shared fence list when we can guarantee that the operation represented by the original fence has finished, or that there will be no more accesses to the resources protected by the dma_resv object once the new fence finishes. Then use this function in the amdkfd code when BOs are unmapped from the process. v2: add an example of when this is useful. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20220321135856.1331-1-christian.koenig@amd.com
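A hedged sketch of the idea described above; drop_context_fences() is a made-up helper, and the usage argument only exists in the final form of the function (it was added when the dma_resv_usage rework landed).

#include <linux/dma-resv.h>

/* Sketch: after unmapping a BO from a process, every fence emitted from
 * that fence context can be replaced by an already-signaled stub, because
 * the hardware can no longer access the protected memory.
 */
static void drop_context_fences(struct dma_resv *resv, u64 context,
                                struct dma_fence *stub)
{
        dma_resv_assert_held(resv);
        dma_resv_replace_fences(resv, context, stub, DMA_RESV_USAGE_BOOKKEEP);
}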
2022-02-08  dma-buf: warn about containers in dma_resv object  (Christian König)
Drivers should not add containers as shared fences to the dma_resv object, instead each fence should be added individually. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20220204100429.2049-5-christian.koenig@amd.com
2022-01-31  dma-resv: some doc polish for iterators  (Daniel Vetter)
Hammer home a bit more that iterators can be restarted and when that matters, plus suggest preferring the locked version whenever possible. Also delete the two leftover kerneldoc comments for static functions and sprinkle some more links while at it. v2: Keep some comments (Christian) Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: "Christian König" <christian.koenig@amd.com> Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Link: https://patchwork.freedesktop.org/patch/msgid/20211130152756.1388106-1-daniel.vetter@ffwll.ch
2022-01-19  dma-buf: drop excl_fence parameter from dma_resv_get_fences  (Christian König)
Returning the exclusive fence separately is no longer used. Instead add a write parameter to indicate the use case. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20211207123411.167006-4-christian.koenig@amd.com
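For illustration, a sketch of the reworked call; the usage-based filter shown is the final form (at the time of this patch the second argument was still a plain write flag), and collect_fences() is a made-up caller.

#include <linux/dma-resv.h>
#include <linux/slab.h>

/* Sketch: one combined fence array instead of a separate exclusive fence;
 * the caller owns the references and the array.  Works without holding
 * the reservation lock.
 */
static int collect_fences(struct dma_resv *resv, bool write)
{
        struct dma_fence **fences;
        unsigned int i, count;
        int ret;

        ret = dma_resv_get_fences(resv, dma_resv_usage_rw(write),
                                  &count, &fences);
        if (ret)
                return ret;

        for (i = 0; i < count; ++i)
                dma_fence_put(fences[i]);       /* drop the references we got */
        kfree(fences);
        return 0;
}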
2021-11-30  dma-buf: make fence mandatory for dma_resv_add_excl_fence v2  (Christian König)
Calling dma_resv_add_excl_fence() with the fence as NULL and expecting that this frees up the fences is simply abuse of the internals of the dma_resv object. v2: drop the fence pruning completely. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20211129120659.1815-4-christian.koenig@amd.com
2021-11-11  dma-buf: add dma_fence_describe and dma_resv_describe v2  (Christian König)
Add functions to dump dma_fence and dma_resv objects into a seq_file and use them for printing the debugfs information. v2: fix missing include reported by test robot. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Rob Clark <robdclark@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211103081231.18578-2-christian.koenig@amd.com
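A small illustrative use, assuming a driver debugfs callback; show_bo_fences() is a made-up function name.

#include <linux/dma-resv.h>
#include <linux/seq_file.h>

/* Sketch: dump the fences of one buffer into a debugfs seq_file. */
static void show_bo_fences(struct seq_file *m, struct dma_resv *resv)
{
        dma_resv_describe(resv, m);
}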
2021-10-11  dma-resv: Fix dma_resv_get_fences and dma_resv_copy_fences after conversion  (Tvrtko Ursulin)
Cache the count of shared fences in the iterator to avoid dereferencing the dma_resv_object outside the RCU protection. Otherwise iterator and its users can observe an incosistent state which makes it impossible to use safely. Such as: <6> [187.517041] [IGT] gem_sync: executing <7> [187.536343] i915 0000:00:02.0: [drm:i915_gem_context_create_ioctl [i915]] HW context 1 created <7> [187.536793] i915 0000:00:02.0: [drm:i915_gem_context_create_ioctl [i915]] HW context 1 created <6> [187.551235] [IGT] gem_sync: starting subtest basic-many-each <1> [188.935462] BUG: kernel NULL pointer dereference, address: 0000000000000010 <1> [188.935485] #PF: supervisor write access in kernel mode <1> [188.935495] #PF: error_code(0x0002) - not-present page <6> [188.935504] PGD 0 P4D 0 <4> [188.935512] Oops: 0002 [#1] PREEMPT SMP NOPTI <4> [188.935521] CPU: 2 PID: 1467 Comm: gem_sync Not tainted 5.15.0-rc4-CI-Patchwork_21264+ #1 <4> [188.935535] Hardware name: /NUC6CAYB, BIOS AYAPLCEL.86A.0049.2018.0508.1356 05/08/2018 <4> [188.935546] RIP: 0010:dma_resv_get_fences+0x116/0x2d0 <4> [188.935560] Code: 10 85 c0 7f c9 be 03 00 00 00 e8 15 8b df ff eb bd e8 8e c6 ff ff eb b6 41 8b 04 24 49 8b 55 00 48 89 e7 8d 48 01 41 89 0c 24 <4c> 89 34 c2 e8 41 f2 ff ff 49 89 c6 48 85 c0 75 8c 48 8b 44 24 10 <4> [188.935583] RSP: 0018:ffffc900011dbcc8 EFLAGS: 00010202 <4> [188.935593] RAX: 0000000000000000 RBX: 00000000ffffffff RCX: 0000000000000001 <4> [188.935603] RDX: 0000000000000010 RSI: ffffffff822e343c RDI: ffffc900011dbcc8 <4> [188.935613] RBP: ffffc900011dbd48 R08: ffff88812d255bb8 R09: 00000000fffffffe <4> [188.935623] R10: 0000000000000001 R11: 0000000000000000 R12: ffffc900011dbd44 <4> [188.935633] R13: ffffc900011dbd50 R14: ffff888113d29cc0 R15: 0000000000000000 <4> [188.935643] FS: 00007f68d17e9700(0000) GS:ffff888277900000(0000) knlGS:0000000000000000 <4> [188.935655] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 <4> [188.935665] CR2: 0000000000000010 CR3: 000000012d0a4000 CR4: 00000000003506e0 <4> [188.935676] Call Trace: <4> [188.935685] i915_gem_object_wait+0x1ff/0x410 [i915] <4> [188.935988] i915_gem_wait_ioctl+0xf2/0x2a0 [i915] <4> [188.936272] ? i915_gem_object_wait+0x410/0x410 [i915] <4> [188.936533] drm_ioctl_kernel+0xae/0x140 <4> [188.936546] drm_ioctl+0x201/0x3d0 <4> [188.936555] ? i915_gem_object_wait+0x410/0x410 [i915] <4> [188.936820] ? __fget_files+0xc2/0x1c0 <4> [188.936830] ? __fget_files+0xda/0x1c0 <4> [188.936839] __x64_sys_ioctl+0x6d/0xa0 <4> [188.936848] do_syscall_64+0x3a/0xb0 <4> [188.936859] entry_SYSCALL_64_after_hwframe+0x44/0xae If the shared object has changed during the RCU unlocked period callers will correctly handle the restart on the next iteration. Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Fixes: 96601e8a4755 ("dma-buf: use new iterator in dma_resv_copy_fences") Fixes: d3c80698c9f5 ("dma-buf: use new iterator in dma_resv_get_fences v3") Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4274 Cc: Christian König <christian.koenig@amd.com> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: linux-media@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: linaro-mm-sig@lists.linaro.org Link: https://patchwork.freedesktop.org/patch/msgid/20211008095007.972693-1-tvrtko.ursulin@linux.intel.com Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Christian König <christian.koenig@amd.com>
2021-10-07  dma-buf: add dma_resv_for_each_fence v3  (Christian König)
A simpler version of the iterator to be used when the dma_resv object is locked. v2: fix index check here as well v3: minor coding improvement, some documentation cleanup Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20211006123609.2026-1-christian.koenig@amd.com
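A usage sketch of the locked iterator; the filter argument shown uses the later enum dma_resv_usage form (originally it was a bool selecting all fences vs. writes only), and all_signaled_locked() is a made-up helper.

#include <linux/dma-resv.h>

/* Sketch: with the reservation lock held there is no RCU/retry dance. */
static bool all_signaled_locked(struct dma_resv *resv)
{
        struct dma_resv_iter cursor;
        struct dma_fence *fence;

        dma_resv_assert_held(resv);

        dma_resv_for_each_fence(&cursor, resv, DMA_RESV_USAGE_BOOKKEEP, fence) {
                if (!dma_fence_is_signaled(fence))
                        return false;
        }
        return true;
}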
2021-10-06  dma-buf: use new iterator in dma_resv_test_signaled  (Christian König)
This makes the function much simpler since the complex retry logic is now handled elsewhere. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20211005113742.1101-8-christian.koenig@amd.com
2021-10-06  dma-buf: use new iterator in dma_resv_wait_timeout  (Christian König)
This makes the function much simpler since the complex retry logic is now handled elsewhere. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20211005113742.1101-7-christian.koenig@amd.com
2021-10-06  dma-buf: use new iterator in dma_resv_get_fences v3  (Christian König)
This makes the function much simpler since the complex retry logic is now handled elsewhere. v2: use sizeof(void*) instead v3: fix rebase bug Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20211005113742.1101-6-christian.koenig@amd.com
2021-10-06  dma-buf: use new iterator in dma_resv_copy_fences  (Christian König)
This makes the function much simpler since the complex retry logic is now handled elsewhere. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20211005113742.1101-5-christian.koenig@amd.com
2021-10-06  dma-buf: add dma_resv_for_each_fence_unlocked v8  (Christian König)
Abstract the complexity of iterating over all the fences in a dma_resv object. The new loop handles the whole RCU and retry dance and returns only fences where we can be sure we grabbed the right one. v2: fix accessing the shared fences while they might be freed, improve kerneldoc, rename _cursor to _iter, add dma_resv_iter_is_exclusive, add dma_resv_iter_begin/end v3: restructure the code, move rcu_read_lock()/unlock() into the iterator, add dma_resv_iter_is_restarted() v4: fix NULL deref when no explicit fence exists, drop superfluous rcu_read_lock()/unlock() calls. v5: fix typos in the documentation v6: fix coding error when excl fence is NULL v7: one more logic fix v8: fix index check in dma_resv_iter_is_exclusive() Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> (v7) Link: https://patchwork.freedesktop.org/patch/msgid/20211005113742.1101-2-christian.koenig@amd.com
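A usage sketch of the unlocked iterator, again with the later enum-based filter and a made-up helper name; the restart handling is the part the changelog above keeps refining.

#include <linux/dma-resv.h>

/* Sketch: iterate without the reservation lock; the iterator handles RCU
 * and retries internally.  On a restart, state collected from earlier
 * iterations has to be thrown away.
 */
static bool all_signaled_unlocked(struct dma_resv *resv)
{
        struct dma_resv_iter cursor;
        struct dma_fence *fence;
        bool signaled = true;

        dma_resv_iter_begin(&cursor, resv, DMA_RESV_USAGE_BOOKKEEP);
        dma_resv_for_each_fence_unlocked(&cursor, fence) {
                if (dma_resv_iter_is_restarted(&cursor))
                        signaled = true;        /* start over from scratch */
                if (!dma_fence_is_signaled(fence))
                        signaled = false;
        }
        dma_resv_iter_end(&cursor);

        return signaled;
}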
2021-08-30  dma-resv: Give the docs a do-over  (Daniel Vetter)
Specifically document the new/clarified rules around how the shared fences do not have any ordering requirements against the exclusive fence. But also document all the things a bit better, given how central struct dma_resv is to dynamic buffer management; the docs have been very inadequate. - Lots more links to other pieces of the puzzle. Unfortunately ttm_buffer_object has no docs, so no links :-( - Explain/complain a bit about dma_resv_locking_ctx(). I still don't like that one, but fixing the ttm call chains is going to be horrible. Plus we want to plug in real slowpath locking when we do that anyway. - Main part of the patch is some actual docs for struct dma_resv. Overall I think we still have a lot of bad naming in this area (e.g. dma_resv.fence is singular, but contains the multiple shared fences), but I think that's more indicative of how the semantics and rules are just not great. Another thing that's really awkward is how chaining exclusive fences right now means direct dma_resv.exclusive_fence pointer access with an rcu_assign_pointer. Not so great either. v2: - Fix a pile of typos (Matt, Jason) - Hammer it in that breaking the rules leads to use-after-free issues around dma-buf sharing (Christian) Reviewed-by: Christian König <christian.koenig@amd.com> Cc: Jason Ekstrand <jason@jlekstrand.net> Cc: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: "Christian König" <christian.koenig@amd.com> Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Link: https://patchwork.freedesktop.org/patch/msgid/20210805104705.862416-21-daniel.vetter@ffwll.ch
2021-07-08  dma-buf: fix dma_resv_test_signaled test_all handling v2  (Christian König)
As the name implies if testing all fences is requested we should indeed test all fences and not skip the exclusive one because we see shared ones. v2: fix logic once more Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210702111642.17259-3-christian.koenig@amd.com
2021-06-06  dma-buf: drop the _rcu postfix on function names v3  (Christian König)
The functions can be called both in _rcu context as well as while holding the lock. v2: add some kerneldoc as suggested by Daniel v3: fix indentation Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210602111714.212426-7-christian.koenig@amd.com
2021-06-06  dma-buf: rename and cleanup dma_resv_get_list v2  (Christian König)
When the comment needs to state explicitly that this doesn't get a reference to the object, then the function is named rather badly. Rename the function and use it in even more places. v2: use dma_resv_shared_list as new name Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210602111714.212426-5-christian.koenig@amd.com
2021-06-06  dma-buf: rename and cleanup dma_resv_get_excl v3  (Christian König)
When the comment needs to state explicitly that this doesn't get a reference to the object then the function is named rather badly. Rename the function and use rcu_dereference_check(), this way it can be used from both rcu as well as lock protected critical sections. v2: improve kerneldoc as suggested by Daniel v3: use dma_resv_excl_fence as function name Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> Link: https://patchwork.freedesktop.org/patch/msgid/20210602111714.212426-4-christian.koenig@amd.com
2021-06-05  dma-buf: add missing EXPORT_SYMBOL  (Christian König)
The newly added dma_resv_reset_shared_max() is used from an inline function, so it can appear in drivers as well. Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> Link: https://patchwork.freedesktop.org/patch/msgid/20210604155228.616679-1-christian.koenig@amd.com
2021-06-04  dma-buf: cleanup dma-resv shared fence debugging a bit v2  (Christian König)
Make that a function instead of inline. v2: improve the kerneldoc wording as suggested by Daniel Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210602111714.212426-3-christian.koenig@amd.com
2021-06-04  dma-buf: add SPDX header and fix style in dma-resv.c  (Christian König)
dma_resv_lockdep() seems to have some space/tab mixups. Fix that and move the function to the end of the file. Also fix some minor things checkpatch.pl pointed out while at it. No functional change. Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/20210602140359.272601-2-christian.koenig@amd.com
2020-11-25  dma-buf/dma-resv: Respect num_fences when initializing the shared fence list.  (Maarten Lankhorst)
We hardcode the maximum number of shared fences to 4, instead of respecting num_fences. Use a minimum of 4, but more if num_fences is higher. This seems to have been an oversight when first implementing the api. Fixes: 04a5faa8cbe5 ("reservation: update api and add some helpers") Cc: <stable@vger.kernel.org> # v3.17+ Reported-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20201124115707.406917-1-maarten.lankhorst@linux.intel.com
2020-10-08  dma-buf: use struct_size macro  (Christian König)
Instead of manually calculating the structure size. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/394252/
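The pattern in question, as a hedged sketch with illustrative struct and function names that mirror the dma_resv_list of that time:

#include <linux/dma-fence.h>
#include <linux/overflow.h>
#include <linux/slab.h>

/* Sketch: struct_size() computes sizeof(*list) + count * sizeof(element)
 * with overflow checking instead of open-coded arithmetic.
 */
struct fence_list {
        u32 shared_count, shared_max;
        struct dma_fence __rcu *shared[];
};

static struct fence_list *fence_list_alloc(u32 max_fences)
{
        struct fence_list *list;

        list = kmalloc(struct_size(list, shared, max_fences), GFP_KERNEL);
        if (list) {
                list->shared_count = 0;
                list->shared_max = max_fences;
        }
        return list;
}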
2020-09-17  dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv  (Daniel Vetter)
GPU drivers need this in their shrinkers, to be able to throw out mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers, but that loop is resolved by trylocking in shrinkers. So full hierarchy is now (ignore some of the other branches we already have primed): mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write I hope that's not inconsistent with anything mm or fs does, adding relevant people. Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: "Christian König" <christian.koenig@amd.com> Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Cc: Dave Chinner <david@fromorbit.com> Cc: Qian Cai <cai@lca.pw> Cc: linux-xfs@vger.kernel.org Cc: linux-fsdevel@vger.kernel.org Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jason Gunthorpe <jgg@mellanox.com> Cc: linux-mm@kvack.org Cc: linux-rdma@vger.kernel.org Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200728135839.1035515-1-daniel.vetter@ffwll.ch
2020-08-10  Merge tag 'locking-urgent-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull locking updates from Thomas Gleixner: "A set of locking fixes and updates: - Untangle the header spaghetti which causes build failures in various situations caused by the lockdep additions to seqcount to validate that the write side critical sections are non-preemptible. - The seqcount associated lock debug addons which were blocked by the above fallout. seqcount writers contrary to seqlock writers must be externally serialized, which usually happens via locking - except for strict per CPU seqcounts. As the lock is not part of the seqcount, lockdep cannot validate that the lock is held. This new debug mechanism adds the concept of associated locks. sequence count has now lock type variants and corresponding initializers which take a pointer to the associated lock used for writer serialization. If lockdep is enabled the pointer is stored and write_seqcount_begin() has a lockdep assertion to validate that the lock is held. Aside of the type and the initializer no other code changes are required at the seqcount usage sites. The rest of the seqcount API is unchanged and determines the type at compile time with the help of _Generic which is possible now that the minimal GCC version has been moved up. Adding this lockdep coverage unearthed a handful of seqcount bugs which have been addressed already independent of this. While generally useful this comes with a Trojan Horse twist: On RT kernels the write side critical section can become preemptible if the writers are serialized by an associated lock, which leads to the well known reader preempts writer livelock. RT prevents this by storing the associated lock pointer independent of lockdep in the seqcount and changing the reader side to block on the lock when a reader detects that a writer is in the write side critical section. - Conversion of seqcount usage sites to associated types and initializers" * tag 'locking-urgent-2020-08-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (25 commits) locking/seqlock, headers: Untangle the spaghetti monster locking, arch/ia64: Reduce <asm/smp.h> header dependencies by moving XTP bits into the new <asm/xtp.h> header x86/headers: Remove APIC headers from <asm/smp.h> seqcount: More consistent seqprop names seqcount: Compress SEQCNT_LOCKNAME_ZERO() seqlock: Fold seqcount_LOCKNAME_init() definition seqlock: Fold seqcount_LOCKNAME_t definition seqlock: s/__SEQ_LOCKDEP/__SEQ_LOCK/g hrtimer: Use sequence counter with associated raw spinlock kvm/eventfd: Use sequence counter with associated spinlock userfaultfd: Use sequence counter with associated spinlock NFSv4: Use sequence counter with associated spinlock iocost: Use sequence counter with associated spinlock raid5: Use sequence counter with associated spinlock vfs: Use sequence counter with associated spinlock timekeeping: Use sequence counter with associated raw spinlock xfrm: policy: Use sequence counters with associated lock netfilter: nft_set_rbtree: Use sequence counter with associated rwlock netfilter: conntrack: Use sequence counter with associated spinlock sched: tasks: Use sequence counter with associated spinlock ...
2020-07-29  dma-buf: Use sequence counter with associated wound/wait mutex  (Ahmed S. Darwish)
A sequence counter write side critical section must be protected by some form of locking to serialize writers. If the serialization primitive is not disabling preemption implicitly, preemption has to be explicitly disabled before entering the sequence counter write side critical section. The dma-buf reservation subsystem uses plain sequence counters to manage updates to reservations. Writer serialization is accomplished through a wound/wait mutex. Acquiring a wound/wait mutex does not disable preemption, so this needs to be done manually before and after the write side critical section. Use the newly-added seqcount_ww_mutex_t instead: - It associates the ww_mutex with the sequence count, which enables lockdep to validate that the write side critical section is properly serialized. - It removes the need to explicitly add preempt_disable/enable() around the write side critical section because the write_begin/end() functions for this new data type automatically do this. If lockdep is disabled this ww_mutex lock association is compiled out and has neither storage size nor runtime overhead. Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://lkml.kernel.org/r/20200720155530.1173732-13-a.darwish@linutronix.de
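A hedged sketch of the pattern (struct and function names are illustrative; the real dma_resv kept this seqcount until it was dropped again by the 2022 patch at the top of this log):

#include <linux/seqlock.h>
#include <linux/ww_mutex.h>

/* Sketch: the write side is serialized by the ww_mutex, and with lockdep
 * enabled write_seqcount_begin() verifies the mutex is actually held.
 */
struct resv_like {
        struct ww_mutex lock;
        seqcount_ww_mutex_t seq;
};

static void resv_like_init(struct resv_like *obj, struct ww_class *class)
{
        ww_mutex_init(&obj->lock, class);
        seqcount_ww_mutex_init(&obj->seq, &obj->lock);
}

static void resv_like_publish(struct resv_like *obj)
{
        /* caller holds obj->lock; preemption handling is done internally */
        write_seqcount_begin(&obj->seq);
        /* ... update the fence pointers ... */
        write_seqcount_end(&obj->seq);
}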
2020-07-29  dma-buf: Remove custom seqcount lockdep class key  (Ahmed S. Darwish)
Commit 3c3b177a9369 ("reservation: add support for read-only access using rcu") introduced a sequence counter to manage updates to reservations. Back then, the reservation object initializer reservation_object_init() was always inlined. Having the sequence counter initialization inlined meant that each of the call sites would have a different lockdep class key, which would've broken lockdep's deadlock detection. The aforementioned commit thus introduced, and exported, a custom seqcount lockdep class key and name. The commit 8735f16803f00 ("dma-buf: cleanup reservation_object_init...") transformed the reservation object initializer to a normal non-inlined C function. seqcount_init(), which automatically defines the seqcount lockdep class key and must be called non-inlined, can now be safely used. Remove the seqcount custom lockdep class key, name, and export. Use seqcount_init() inside the dma reservation object initializer. Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://lkml.kernel.org/r/20200720155530.1173732-12-a.darwish@linutronix.de
2020-07-21  dma-fence: prime lockdep annotations  (Daniel Vetter)
Two in one go: - it is allowed to call dma_fence_wait() while holding a dma_resv_lock(). This is fundamental to how eviction works with ttm, so required. - it is allowed to call dma_fence_wait() from memory reclaim contexts, specifically from shrinker callbacks (which i915 does), and from mmu notifier callbacks (which amdgpu does, and which i915 sometimes also does, and probably always should, but that's kinda a debate). Also for stuff like HMM we really need to be able to do this, or things get real dicey. Consequence is that any critical path necessary to get to a dma_fence_signal for a fence must never a) call dma_resv_lock nor b) allocate memory with GFP_KERNEL. Also by implication of dma_resv_lock(), no userspace faulting allowed. That's some supremely obnoxious limitations, which is why we need to sprinkle the right annotations to all relevant paths. The one big locking context we're leaving out here is mmu notifiers, added in commit 23b68395c7c78a764e8963fc15a7cfd318bf187f Author: Daniel Vetter <daniel.vetter@ffwll.ch> Date: Mon Aug 26 22:14:21 2019 +0200 mm/mmu_notifiers: add a lockdep map for invalidate_range_start/end that one covers a lot of other callsites, and it's also allowed to wait on dma-fences from mmu notifiers. But there's no ready-made functions exposed to prime this, so I've left it out for now. v2: Also track against mmu notifier context. v3: kerneldoc to spec the cross-driver contract. Note that currently i915 throws in a hard-coded 10s timeout on foreign fences (not sure why that was done, but it's there), which is why that rule is worded with SHOULD instead of MUST. Also some of the mmu_notifier/shrinker rules might surprise SoC drivers, I haven't fully audited them all. Which is infeasible anyway, we'll need to run them with lockdep and dma-fence annotations and see what goes boom. v4: A spelling fix from Mika v5: #ifdef for CONFIG_MMU_NOTIFIER. Reported by 0day. Unfortunately this means lockdep enforcement is slightly inconsistent, it won't spot GFP_NOIO and GFP_NOFS allocations in the wrong spot if CONFIG_MMU_NOTIFIER is disabled in the kernel config. Oh well. v5: Note that only drivers/gpu has a reasonable (or at least historical) excuse to use dma_fence_wait() from shrinker and mmu notifier callbacks. Everyone else should either have a better memory manager model, or better hardware. This reflects discussions with Jason Gunthorpe. Cc: Jason Gunthorpe <jgg@mellanox.com> Cc: Felix Kuehling <Felix.Kuehling@amd.com> Cc: kernel test robot <lkp@intel.com> Acked-by: Christian König <christian.koenig@amd.com> Acked-by: Dave Airlie <airlied@redhat.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com> (v4) Cc: Mika Kuoppala <mika.kuoppala@intel.com> Cc: Thomas Hellstrom <thomas.hellstrom@intel.com> Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Cc: linux-rdma@vger.kernel.org Cc: amd-gfx@lists.freedesktop.org Cc: intel-gfx@lists.freedesktop.org Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Christian König <christian.koenig@amd.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20200707201229.472834-3-daniel.vetter@ffwll.ch
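A sketch of the annotation pattern this series introduces (the begin/end helpers come from a companion patch in the same series); complete_job() is a made-up example of a signalling path.

#include <linux/dma-fence.h>

/* Sketch: everything between begin/end is part of the fence-signalling
 * critical section and must not allocate with GFP_KERNEL, take
 * dma_resv_lock() or fault in userspace memory.
 */
static void complete_job(struct dma_fence *fence)
{
        bool cookie = dma_fence_begin_signalling();

        /* ... write back job results, ring doorbells, etc. ... */
        dma_fence_signal(fence);

        dma_fence_end_signalling(cookie);
}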
2020-06-09  DMA reservations: use the new mmap locking API  (Michel Lespinasse)
This use is converted manually ahead of the next patch in the series, as it requires including a new header which the automated conversion would miss. Signed-off-by: Michel Lespinasse <walken@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com> Reviewed-by: Laurent Dufour <ldufour@linux.ibm.com> Reviewed-by: Vlastimil Babka <vbabka@suse.cz> Cc: Davidlohr Bueso <dbueso@suse.de> Cc: David Rientjes <rientjes@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Liam Howlett <Liam.Howlett@oracle.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ying Han <yinghan@google.com> Link: http://lkml.kernel.org/r/20200520052908.204642-4-walken@google.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-11-21  dma-resv: Also prime acquire ctx for lockdep  (Daniel Vetter)
Semnatically it really doesn't matter where we grab the ticket. But since the ticket is a fake lockdep lock, it matters for lockdep validation purposes. This means stuff like grabbing a ticket and then doing copy_from/to_user isn't allowed anymore. This is a changed compared to the current ttm fault handler, which doesn't bother with having a full reservation. Since I'm looking into fixing the TODO entry in ttm_mem_evict_wait_busy() I think that'll have to change sooner or later anyway, better get started. A bit more context on why I'm looking into this: For backwards compat with existing i915 gem code I think we'll have to do full slowpath locking in the i915 equivalent of the eviction code. And with dynamic dma-buf that will leak across drivers, so another thing we need to standardize and make sure it's done the same way everyway. Unfortunately this means another full audit of all drivers: - gem helpers: acquire_init is done right before taking locks, so no problem. Same for acquire_fini and unlocking, which means nothing that's not already covered by the dma_resv_lock rules will be caught with this extension here to the acquire_ctx. - etnaviv: An absolute massive amount of code is run between the acquire_init and the first lock acquisition in submit_lock_objects. But nothing that would touch user memory and could cause a fault. Furthermore nothing that uses the ticket, so even if I missed something, it would be easy to fix by pushing the acquire_init right before the first use. Similar on the unlock/acquire_fini side. - i915: Right now (and this will likely change a lot rsn) the acquire ctx and actual locks are right next to each another. No problem. - msm has a problem: submit_create calls acquire_init, but then submit_lookup_objects() has a bunch of copy_from_user to do the object lookups. That's the only thing before submit_lock_objects call dma_resv_lock(). Despite all the copypasta to etnaviv, etnaviv does not have this issue since it copies all the userspace structs earlier. submit_cleanup does not have any such issues. With the prep patch to pull out the acquire_ctx and reorder it msm is going to be safe too. - nouveau: acquire_init is right next to ttm_bo_reserve, so all good. Similar on the acquire_fini/ttm_bo_unreserve side. - ttm execbuf utils: acquire context and locking are even in the same functions here (one function to reserve everything, the other to unreserve), so all good. - vc4: Another case where acquire context and locking are handled in the same functions (one function to lock everything, the other to unlock). Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Christian König <christian.koenig@amd.com> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: linux-media@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Cc: Huang Rui <ray.huang@amd.com> Cc: Eric Anholt <eric@anholt.net> Cc: Ben Skeggs <bskeggs@redhat.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Rob Herring <robh@kernel.org> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Russell King <linux+etnaviv@armlinux.org.uk> Cc: Christian Gmeiner <christian.gmeiner@gmail.com> Cc: Rob Clark <robdclark@gmail.com> Cc: Sean Paul <sean@poorly.run> Acked-by: Christian König <christian.koenig@amd.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191119210844.16947-3-daniel.vetter@ffwll.ch
2019-11-20  dma_resv: prime lockdep annotations  (Steven Price)
dma_resv: Correct return type of dma_resv_lockdep(): subsys_initcall() expects a function which returns 'int'. Fix dma_resv_lockdep() so it returns an 'int' error code. Fixes: b2a8116e2592 ("dma_resv: prime lockdep annotations") Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: https://patchwork.freedesktop.org/patch/msgid/c0a0c70d-e6fe-1103-2888-1ce1425f4a5d@arm.com
2019-11-06  dma_resv: prime lockdep annotations  (Daniel Vetter)
Full audit of everyone: - i915, radeon, amdgpu should be clean per their maintainers. - vram helpers should be fine, they don't do command submission, so really no business holding struct_mutex while doing copy_*_user. But I haven't checked them all. - panfrost seems to dma_resv_lock only in panfrost_job_push, which looks clean. - v3d holds dma_resv locks in the tail of its v3d_submit_cl_ioctl(), copying from/to userspace happens all in v3d_lookup_bos which is outside of the critical section. - vmwgfx has a bunch of ioctls that do their own copy_*_user: - vmw_execbuf_process: First this does some copies in vmw_execbuf_cmdbuf() and also in the vmw_execbuf_process() itself. Then comes the usual ttm reserve/validate sequence, then actual submission/fencing, then unreserving, and finally some more copy_to_user in vmw_execbuf_copy_fence_user. Glossing over tons of details, but looks all safe. - vmw_fence_event_ioctl: No ttm_reserve/dma_resv_lock anywhere to be seen, seems to only create a fence and copy it out. - a pile of smaller ioctls in vmwgfx_ioctl.c, no reservations to be found there. Summary: vmwgfx seems to be fine too. - virtio: There's virtio_gpu_execbuffer_ioctl, which does all the copying from userspace before even looking up objects through their handles, so safe. Plus the getparam/getcaps ioctl, also both safe. - qxl only has qxl_execbuffer_ioctl, which calls into qxl_process_single_command. There's a lovely comment before the __copy_from_user_inatomic that the slowpath should be copied from i915, but I guess that never happened. Try not to be unlucky and get your CS data evicted between when it's written and the kernel tries to read it. The only other copy_from_user is for relocs, but those are done before qxl_release_reserve_list(), which seems to be the only thing reserving buffers (in the ttm/dma_resv sense) in that code. So looks safe. - A debugfs file in nouveau_debugfs_pstate_set() and the usif ioctl in usif_ioctl() look safe. nouveau_gem_ioctl_pushbuf() otoh breaks this everywhere and needs to be fixed up. v2: Thomas pointed out that vmwgfx calls dma_resv_init while it holds a dma_resv lock of a different object already. Christian mentioned that ttm core does this too for ghost objects. intel-gfx-ci highlighted that i915 has similar issues. Unfortunately we can't do this in the usual module init functions, because kernel threads don't have an ->mm - we have to wait around for some user thread to do this. Solution is to spawn a worker (but only once). It's horrible, but it works. v3: We can allocate mm! (Chris). Horrible worker hack out, clean initcall solution in.
v4: Annotate with __init (Rob Herring) Cc: Rob Herring <robh@kernel.org> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Christian König <christian.koenig@amd.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Thomas Zimmermann <tzimmermann@suse.de> Cc: Rob Herring <robh@kernel.org> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com> Cc: Eric Anholt <eric@anholt.net> Cc: Dave Airlie <airlied@redhat.com> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: Ben Skeggs <bskeggs@redhat.com> Cc: "VMware Graphics" <linux-graphics-maintainer@vmware.com> Cc: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Tested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20191104173801.2972-1-daniel.vetter@ffwll.ch
2019-09-22  dma-buf/resv: fix exclusive fence get  (Qiang Yu)
This causes kernel crash when testing lima driver. Cc: Christian König <christian.koenig@amd.com> Fixes: b8c036dfc66f ("dma-buf: simplify reservation_object_get_fences_rcu a bit") Signed-off-by: Qiang Yu <yuq825@gmail.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190922074900.853-1-yuq825@gmail.com
2019-08-16  dma-buf: Restore seqlock around dma_resv updates  (Chris Wilson)
This reverts 67c97fb79a7f ("dma-buf: add reservation_object_fences helper") dd7a7d1ff2f1 ("drm/i915: use new reservation_object_fences helper") 0e1d8083bddb ("dma-buf: further relax reservation_object_add_shared_fence") 5d344f58da76 ("dma-buf: nuke reservation_object seq number") The scenario that defeats simply grabbing a set of shared/exclusive fences and using them blissfully under RCU is that any of those fences may be reallocated by a SLAB_TYPESAFE_BY_RCU fence slab cache. In this scenario, while keeping the rcu_read_lock we need to establish that no fence was changed in the dma_resv after a read (or full) memory barrier. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Acked-by: Christian König <christian.koenig@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190814182401.25009-1-chris@chris-wilson.co.uk
2019-08-13  dma-buf: rename reservation_object to dma_resv  (Christian König)
Be more consistent with the naming of the other DMA-buf objects. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/323401/