| author | Linus Torvalds <torvalds@linux-foundation.org> | 2026-04-14 12:36:25 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2026-04-14 12:36:25 -0700 |
| commit | 7393febcb1b2082c0484952729cbebfe4dc508d5 (patch) | |
| tree | d561808391b363749ab77512def195da22566db3 /lib | |
| parent | e80d033851b3bc94c3d254ac66660ddd0a49d72c (diff) | |
| parent | a21c1e961de28b95099a9ca2c3774b2eee1a33bb (diff) | |
| download | lwn-7393febcb1b2082c0484952729cbebfe4dc508d5.tar.gz lwn-7393febcb1b2082c0484952729cbebfe4dc508d5.zip | |
Merge tag 'locking-core-2026-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
"Mutexes:
- Add killable flavor to guard definitions (Davidlohr Bueso)
- Remove the list_head from struct mutex (Matthew Wilcox)
- Rename mutex_init_lockep() (Davidlohr Bueso)
rwsems:
- Remove the list_head from struct rw_semaphore and
replace it with a single pointer (Matthew Wilcox)
- Fix logic error in rwsem_del_waiter() (Andrei Vagin)
Semaphores:
- Remove the list_head from struct semaphore (Matthew Wilcox)
Jump labels:
- Use ATOMIC_INIT() for initialization of .enabled (Thomas Weißschuh)
- Remove workaround for old compilers in initializations
(Thomas Weißschuh)
Lock context analysis changes and improvements:
- Add context analysis for rwsems (Peter Zijlstra)
- Fix rwlock and spinlock lock context annotations (Bart Van Assche)
- Fix rwlock support in <linux/spinlock_up.h> (Bart Van Assche)
- Add lock context annotations in the spinlock implementation
(Bart Van Assche)
- signal: Fix the lock_task_sighand() annotation (Bart Van Assche)
- ww-mutex: Fix the ww_acquire_ctx function annotations
(Bart Van Assche)
- Add lock context support in do_raw_{read,write}_trylock()
(Bart Van Assche)
- arm64, compiler-context-analysis: Permit alias analysis through
__READ_ONCE() with CONFIG_LTO=y (Marco Elver)
- Add __cond_releases() (Peter Zijlstra)
- Add context analysis for mutexes (Peter Zijlstra)
- Add context analysis for rtmutexes (Peter Zijlstra)
- Convert futexes to compiler context analysis (Peter Zijlstra)
Rust integration updates:
- Add atomic fetch_sub() implementation (Andreas Hindborg)
- Refactor various rust_helper_ methods for expansion (Boqun Feng)
- Add Atomic<*{mut,const} T> support (Boqun Feng)
- Add atomic operation helpers over raw pointers (Boqun Feng)
   - Add performance-optimal Flag type for atomic booleans, to avoid
     slow byte-sized RMWs on architectures that don't support them
     (FUJITA Tomonori)
- Misc cleanups and fixes (Andreas Hindborg, Boqun Feng, FUJITA
Tomonori)
LTO support updates:
- arm64: Optimize __READ_ONCE() with CONFIG_LTO=y (Marco Elver)
- compiler: Simplify generic RELOC_HIDE() (Marco Elver)
Miscellaneous fixes and cleanups by Peter Zijlstra, Randy Dunlap,
Thomas Weißschuh, Davidlohr Bueso and Mikhail Gavrilov"
* tag 'locking-core-2026-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
compiler: Simplify generic RELOC_HIDE()
locking: Add lock context annotations in the spinlock implementation
locking: Add lock context support in do_raw_{read,write}_trylock()
locking: Fix rwlock support in <linux/spinlock_up.h>
lockdep: Raise default stack trace limits when KASAN is enabled
cleanup: Optimize guards
jump_label: remove workaround for old compilers in initializations
jump_label: use ATOMIC_INIT() for initialization of .enabled
futex: Convert to compiler context analysis
locking/rwsem: Fix logic error in rwsem_del_waiter()
locking/rwsem: Add context analysis
locking/rtmutex: Add context analysis
locking/mutex: Add context analysis
compiler-context-analysis: Add __cond_releases()
locking/mutex: Remove the list_head from struct mutex
locking/semaphore: Remove the list_head from struct semaphore
locking/rwsem: Remove the list_head from struct rw_semaphore
rust: atomic: Update a safety comment in impl of `fetch_add()`
rust: sync: atomic: Update documentation for `fetch_add()`
rust: sync: atomic: Add fetch_sub()
...
Diffstat (limited to 'lib')
| -rw-r--r-- | lib/Kconfig.debug | 8 | ||||
| -rw-r--r-- | lib/test_context-analysis.c | 11 |
2 files changed, 19 insertions, 0 deletions
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 94eb2667b7e1..aac60b6cfa4b 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1618,14 +1618,22 @@ config LOCKDEP_STACK_TRACE_BITS
 	int "Size for MAX_STACK_TRACE_ENTRIES (as Nth power of 2)"
 	depends on LOCKDEP && !LOCKDEP_SMALL
 	range 10 26
+	default 21 if KASAN
 	default 19
 	help
 	  Try increasing this value if you hit "BUG: MAX_STACK_TRACE_ENTRIES
 	  too low!" message.
+	  KASAN significantly increases stack trace consumption because its
+	  slab tracking interacts with lockdep's dependency validation under
+	  PREEMPT_FULL, creating a feedback loop. The higher default when
+	  KASAN is enabled costs ~12MB extra, which is negligible compared to
+	  KASAN's own shadow memory overhead.
+
 config LOCKDEP_STACK_TRACE_HASH_BITS
 	int "Size for STACK_TRACE_HASH_SIZE (as Nth power of 2)"
 	depends on LOCKDEP && !LOCKDEP_SMALL
 	range 10 26
+	default 16 if KASAN
 	default 14
 	help
 	  Try increasing this value if you need large STACK_TRACE_HASH_SIZE.
diff --git a/lib/test_context-analysis.c b/lib/test_context-analysis.c
index 140efa8a9763..06b4a6a028e0 100644
--- a/lib/test_context-analysis.c
+++ b/lib/test_context-analysis.c
@@ -596,3 +596,14 @@ static void __used test_ww_mutex_lock_ctx(struct test_ww_mutex_data *d)
 
 	ww_mutex_destroy(&d->mtx);
 }
+
+static DEFINE_PER_CPU(raw_spinlock_t, test_per_cpu_lock);
+
+static void __used test_per_cpu(int cpu)
+{
+	raw_spin_lock(&per_cpu(test_per_cpu_lock, cpu));
+	raw_spin_unlock(&per_cpu(test_per_cpu_lock, cpu));
+
+	raw_spin_lock(per_cpu_ptr(&test_per_cpu_lock, cpu));
+	raw_spin_unlock(per_cpu_ptr(&test_per_cpu_lock, cpu));
+}
