author      Sebastian Andrzej Siewior <bigeasy@linutronix.de>   2022-05-17 11:16:14 +0200
committer   Peter Zijlstra <peterz@infradead.org>               2022-06-13 10:29:57 +0200
commit      4051a81774d6d8e28192742c26999d6f29bc0e68 (patch)
tree        d51d31554aa2803d1c0c793006aa50eb96f71e50 /kernel
parent      b13baccc3850ca8b8cccbf8ed9912dbaa0fdf7f3 (diff)
locking/lockdep: Use sched_clock() for random numbers
Since the rewrite of prandom_u32() in the commit mentioned below, the function uses sleeping locks while extracting random numbers and filling the batch. This breaks lockdep on PREEMPT_RT because lock_pin_lock() disables interrupts while calling __lock_pin_lock(). This can't be moved earlier because the main user of the function (rq_pin_lock()) invokes that function after disabling interrupts in order to acquire the lock.

The cookie does not require random numbers, as its goal is to provide a random value in order to notice unexpected "unlock + lock" sites.

Use sched_clock() to provide random numbers.

Fixes: a0103f4d86f88 ("random32: use real rng for non-deterministic randomness")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/YoNn3pTkm5+QzE5k@linutronix.de
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/locking/lockdep.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 81e87280513e..f06b91ca6482 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -5432,7 +5432,7 @@ static struct pin_cookie __lock_pin_lock(struct lockdep_map *lock)
* be guessable and still allows some pin nesting in
* our u32 pin_count.
*/
- cookie.val = 1 + (prandom_u32() >> 16);
+ cookie.val = 1 + (sched_clock() & 0xffff);
hlock->pin_count += cookie.val;
return cookie;
}
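
For illustration only, below is a minimal userspace sketch of the pin-cookie scheme the hunk above touches. The demo_* names are hypothetical, and clock_gettime(CLOCK_MONOTONIC) stands in for the kernel's sched_clock(); the point is only that the cookie has to be non-zero, hard to guess, and small enough that several nested pins still fit in a u32 pin_count, not cryptographically random.

	/*
	 * Simplified userspace sketch of the pin-cookie idea, not kernel code.
	 * demo_lock, demo_pin_lock and demo_unpin_lock are invented names;
	 * clock_gettime() replaces sched_clock() for this demo.
	 */
	#define _POSIX_C_SOURCE 199309L
	#include <stdint.h>
	#include <stdio.h>
	#include <time.h>

	struct demo_cookie { uint32_t val; };

	struct demo_lock {
		uint32_t pin_count;	/* sum of all outstanding cookies */
	};

	/* Stand-in for sched_clock(): a monotonically increasing nanosecond count. */
	static uint64_t demo_clock(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
	}

	/*
	 * Hand out a non-zero, hard-to-guess cookie and fold it into pin_count,
	 * mirroring the pattern of the patched line: the low 16 bits of the
	 * clock keep the value small enough for some pin nesting in a u32.
	 */
	static struct demo_cookie demo_pin_lock(struct demo_lock *lock)
	{
		struct demo_cookie cookie = { .val = 1 + (uint32_t)(demo_clock() & 0xffff) };

		lock->pin_count += cookie.val;
		return cookie;
	}

	/*
	 * The holder removes exactly its own contribution. If the lock was
	 * released and re-acquired behind our back, pin_count no longer
	 * contains this cookie and the mismatch can be detected.
	 */
	static int demo_unpin_lock(struct demo_lock *lock, struct demo_cookie cookie)
	{
		if (lock->pin_count < cookie.val) {
			fprintf(stderr, "unpin without matching pin!\n");
			return -1;
		}
		lock->pin_count -= cookie.val;
		return 0;
	}

	int main(void)
	{
		struct demo_lock lock = { .pin_count = 0 };
		struct demo_cookie c = demo_pin_lock(&lock);

		printf("cookie %u, pin_count %u\n", c.val, lock.pin_count);
		return demo_unpin_lock(&lock, c);
	}

The pairing shown here (add the cookie on pin, subtract and verify on unpin) is what lets a caller notice an unexpected "unlock + lock" while it believes the lock is held, which is why an unpredictable value is enough and true randomness is not required.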