<feed xmlns='http://www.w3.org/2005/Atom'>
<title>lwn.git/kernel/locking, branch master</title>
<subtitle>Linux kernel documentation tree maintained by Jonathan Corbet</subtitle>
<id>http://mirrors.hust.edu.cn/git/lwn.git/atom?h=master</id>
<link rel='self' href='http://mirrors.hust.edu.cn/git/lwn.git/atom?h=master'/>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/'/>
<updated>2026-04-14T20:33:36+00:00</updated>
<entry>
<title>Merge tag 'sched-core-2026-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2026-04-14T20:33:36+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2026-04-14T20:33:36+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=1c3b68f0d55b5932eb38eda602a61aec6d6f5e5e'/>
<id>urn:sha1:1c3b68f0d55b5932eb38eda602a61aec6d6f5e5e</id>
<content type='text'>
Pull scheduler updates from Ingo Molnar:
 "Fair scheduling updates:
   - Skip SCHED_IDLE rq for SCHED_IDLE tasks (Christian Loehle)
   - Remove superfluous rcu_read_lock() in the wakeup path (K Prateek Nayak)
   - Simplify the entry condition for update_idle_cpu_scan() (K Prateek Nayak)
   - Simplify SIS_UTIL handling in select_idle_cpu() (K Prateek Nayak)
   - Avoid overflow in enqueue_entity() (K Prateek Nayak)
   - Update overutilized detection (Vincent Guittot)
   - Prevent negative lag increase during delayed dequeue (Vincent Guittot)
   - Clear buddies for preempt_short (Vincent Guittot)
   - Implement more complex proportional newidle balance (Peter Zijlstra)
   - Increase weight bits for avg_vruntime (Peter Zijlstra)
   - Pass full weight to __calc_delta() (Peter Zijlstra)

  RT and DL scheduling updates:
   - Fix incorrect schedstats for rt and dl threads (Dengjun Su)
   - Skip group schedulable check with rt_group_sched=0 (Michal Koutný)
   - Move group schedulability check to sched_rt_global_validate()
     (Michal Koutný)
   - Add reporting of runtime left &amp; abs deadline to sched_getattr()
     for DEADLINE tasks (Tommaso Cucinotta)

  Scheduling topology updates by K Prateek Nayak:
   - Compute sd_weight considering cpuset partitions
   - Extract "imb_numa_nr" calculation into a separate helper
   - Allocate per-CPU sched_domain_shared in s_data
   - Switch to assigning "sd-&gt;shared" from s_data
   - Remove sched_domain_shared allocation with sd_data

  Energy-aware scheduling updates:
   - Filter false overloaded_group case for EAS (Vincent Guittot)
   - PM: EM: Switch to rcu_dereference_all() in wakeup path
     (Dietmar Eggemann)

  Infrastructure updates:
   - Replace use of system_unbound_wq with system_dfl_wq (Marco Crivellari)

  Proxy scheduling updates by John Stultz:
   - Make class_schedulers avoid pushing current, and get rid of proxy_tag_curr()
   - Minimise repeated sched_proxy_exec() checking
   - Fix potentially missing balancing with Proxy Exec
   - Fix and improve task::blocked_on et al handling
   - Add assert_balance_callbacks_empty() helper
   - Add logic to zap balancing callbacks if we pick again
   - Move attach_one_task() and attach_task() helpers to sched.h
   - Handle blocked-waiter migration (and return migration)
   - Add K Prateek Nayak to scheduler reviewers for proxy execution

  Misc cleanups and fixes by John Stultz, Joseph Salisbury, Peter
  Zijlstra, K Prateek Nayak, Michal Koutný, Randy Dunlap, Shrikanth
  Hegde, Vincent Guittot, Zhan Xusheng and Xie Yuanbin"

* tag 'sched-core-2026-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (46 commits)
  sched/eevdf: Clear buddies for preempt_short
  sched/rt: Cleanup global RT bandwidth functions
  sched/rt: Move group schedulability check to sched_rt_global_validate()
  sched/rt: Skip group schedulable check with rt_group_sched=0
  sched/fair: Avoid overflow in enqueue_entity()
  sched: Use u64 for bandwidth ratio calculations
  sched/fair: Prevent negative lag increase during delayed dequeue
  sched/fair: Use sched_energy_enabled()
  sched: Handle blocked-waiter migration (and return migration)
  sched: Move attach_one_task and attach_task helpers to sched.h
  sched: Add logic to zap balance callbacks if we pick again
  sched: Add assert_balance_callbacks_empty helper
  sched/locking: Add special p-&gt;blocked_on==PROXY_WAKING value for proxy return-migration
  sched: Fix modifying donor-&gt;blocked_on without proper locking
  locking: Add task::blocked_lock to serialize blocked_on state
  sched: Fix potentially missing balancing with Proxy Exec
  sched: Minimise repeated sched_proxy_exec() checking
  sched: Make class_schedulers avoid pushing current, and get rid of proxy_tag_curr()
  MAINTAINERS: Add K Prateek Nayak to scheduler reviewers
  sched/core: Get this cpu once in ttwu_queue_cond()
  ...
</content>
</entry>
<entry>
<title>sched/locking: Add special p-&gt;blocked_on==PROXY_WAKING value for proxy return-migration</title>
<updated>2026-04-03T12:23:40+00:00</updated>
<author>
<name>John Stultz</name>
<email>jstultz@google.com</email>
</author>
<published>2026-03-24T19:13:21+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=2d7622669836dcbbb449741b4e6c503ffe005c25'/>
<id>urn:sha1:2d7622669836dcbbb449741b4e6c503ffe005c25</id>
<content type='text'>
As we add functionality to proxy execution, we may migrate a
donor task to a runqueue where it can't run due to cpu affinity.
Thus, we must be careful to ensure we return-migrate the task
back to a cpu in its cpumask when it becomes unblocked.

Peter helpfully provided the following example with pictures:
"Suppose we have a ww_mutex cycle:

                  ,-+-* Mutex-1 &lt;-.
        Task-A ---' |             | ,-- Task-B
                    `-&gt; Mutex-2 *-+-'

Where Task-A holds Mutex-1 and tries to acquire Mutex-2, and
where Task-B holds Mutex-2 and tries to acquire Mutex-1.

Then the blocked_on-&gt;owner chain will go in circles.

        Task-A  -&gt; Mutex-2
          ^          |
          |          v
        Mutex-1 &lt;- Task-B

We need two things:

 - find_proxy_task() to stop iterating the circle;

 - the woken task to 'unblock' and run, such that it can
   back off and retry the transaction.

Now, the current code [without this patch] does:
        __clear_task_blocked_on();      /* clear -&gt;blocked_on, breaking the cycle */
        wake_q_add();                   /* queue the woken task for wakeup */

And surely clearing -&gt;blocked_on is sufficient to break the
cycle.

Suppose it is Task-B that is made to back off, then we have:

  Task-A -&gt; Mutex-2 -&gt; Task-B (no further blocked_on)

and find_proxy_task() would attempt to run Task-B. Or worse, the
scheduler could directly pick Task-B and run it, without ever
getting into find_proxy_task().

Now, here is a problem because Task-B might not be runnable on
the CPU it is currently on; and because !task_is_blocked() we
don't get into the proxy paths, so nobody is going to fix this
up.

Ideally we would have dequeued Task-B alongside clearing
-&gt;blocked_on, but alas, [the lock ordering prevents us from
getting the task_rq_lock() and] spoils things."

Thus we need more than just a binary concept of the task being
blocked on a mutex or not.

So allow setting blocked_on to PROXY_WAKING as a special value
which specifies the task is no longer blocked, but needs to
be evaluated for return migration *before* it can be run.
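
A minimal sketch of the sentinel-value pattern this describes (the
names blocked_on and PROXY_WAKING come from the changelog; the helper
functions and the sentinel encoding are illustrative assumptions, not
the actual implementation):

        /* Sentinel: distinct from NULL and from any real mutex pointer. */
        #define PROXY_WAKING ((struct mutex *)(-1L))

        static inline bool task_is_blocked(struct task_struct *p)
        {
                /* Only a real mutex pointer counts as blocked. */
                return p-&gt;blocked_on &amp;&amp; p-&gt;blocked_on != PROXY_WAKING;
        }

        static inline bool task_needs_return_migration(struct task_struct *p)
        {
                /* Woken, but must be checked for cpu affinity first. */
                return p-&gt;blocked_on == PROXY_WAKING;
        }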

This will then be used in a later patch to handle proxy
return-migration.

Signed-off-by: John Stultz &lt;jstultz@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: K Prateek Nayak &lt;kprateek.nayak@amd.com&gt;
Link: https://patch.msgid.link/20260324191337.1841376-7-jstultz@google.com
</content>
</entry>
<entry>
<title>locking: Add task::blocked_lock to serialize blocked_on state</title>
<updated>2026-04-03T12:23:39+00:00</updated>
<author>
<name>John Stultz</name>
<email>jstultz@google.com</email>
</author>
<published>2026-03-24T19:13:19+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=fa4a1ff8ab235a308d8c983827657a69649185fd'/>
<id>urn:sha1:fa4a1ff8ab235a308d8c983827657a69649185fd</id>
<content type='text'>
So far, we have been able to utilize the mutex::wait_lock
for serializing the blocked_on state, but when we move to
proxying across runqueues, we will need to add more state
and a way to serialize changes to this state in contexts
where we don't hold the mutex::wait_lock.

So introduce the task::blocked_lock, which nests under the
mutex::wait_lock in the locking order, and rework the locking
to use it.
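
Roughly, the locking then nests like this (a sketch following the
changelog's stated lock order; the surrounding code is illustrative,
not the patch itself):

        /* mutex::wait_lock is the outer lock... */
        raw_spin_lock(&amp;lock-&gt;wait_lock);
        /* ...and task::blocked_lock nests inside it. */
        raw_spin_lock(&amp;p-&gt;blocked_lock);
        p-&gt;blocked_on = lock;   /* blocked_on changes serialized here */
        raw_spin_unlock(&amp;p-&gt;blocked_lock);
        raw_spin_unlock(&amp;lock-&gt;wait_lock);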

Signed-off-by: John Stultz &lt;jstultz@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Reviewed-by: K Prateek Nayak &lt;kprateek.nayak@amd.com&gt;
Link: https://patch.msgid.link/20260324191337.1841376-5-jstultz@google.com
</content>
</entry>
<entry>
<title>locking: Add lock context annotations in the spinlock implementation</title>
<updated>2026-03-16T12:16:50+00:00</updated>
<author>
<name>Bart Van Assche</name>
<email>bvanassche@acm.org</email>
</author>
<published>2026-03-13T17:15:09+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=b06e988c4c52ce8750616ea9b23c8bd3b611b931'/>
<id>urn:sha1:b06e988c4c52ce8750616ea9b23c8bd3b611b931</id>
<content type='text'>
Make the spinlock implementation compatible with lock context analysis
(CONTEXT_ANALYSIS := 1) by adding lock context annotations to the
_raw_##op##_...() macros.
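
For reference, lock context annotations on the spinlock API take
roughly this shape on the declarations (a simplified sketch, not the
actual macro changes in this patch):

        void _raw_spin_lock(raw_spinlock_t *lock)   __acquires(lock);
        void _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock);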

Signed-off-by: Bart Van Assche &lt;bvanassche@acm.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260313171510.230998-4-bvanassche@acm.org
</content>
</entry>
<entry>
<title>locking/rwsem: Fix logic error in rwsem_del_waiter()</title>
<updated>2026-03-16T12:16:48+00:00</updated>
<author>
<name>Andrei Vagin</name>
<email>avagin@google.com</email>
</author>
<published>2026-03-14T18:26:07+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=68bcd8b6e0b10d902f7fc8bf3f08f335f5d1640e'/>
<id>urn:sha1:68bcd8b6e0b10d902f7fc8bf3f08f335f5d1640e</id>
<content type='text'>
Commit 1ea4b473504b ("locking/rwsem: Remove the list_head from struct
rw_semaphore") introduced a logic error in rwsem_del_waiter().

The root cause of this issue is an inconsistency in the return values of
__rwsem_del_waiter() and rwsem_del_waiter(). Specifically,
__rwsem_del_waiter() returns true when the wait list becomes empty,
whereas rwsem_del_waiter() is supposed to return true if the wait list
is NOT empty.

This caused a null pointer dereference in rwsem_mark_wake() because it
was being called when sem-&gt;first_waiter was NULL.
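
A minimal sketch of the inversion the fix implies (simplified; the
real helpers take and update more state):

        static inline bool rwsem_del_waiter(struct rw_semaphore *sem,
                                            struct rwsem_waiter *waiter)
        {
                /* __rwsem_del_waiter() returns true when the wait list
                 * became empty; callers here expect true when waiters
                 * remain, so negate the result. */
                return !__rwsem_del_waiter(sem, waiter);
        }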

Fixes: 1ea4b473504b ("locking/rwsem: Remove the list_head from struct rw_semaphore")
Reported-by: syzbot+3d2ff92c67127d337463@syzkaller.appspotmail.com
Signed-off-by: Andrei Vagin &lt;avagin@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Tested-by: syzbot+3d2ff92c67127d337463@syzkaller.appspotmail.com
Link: https://patch.msgid.link/20260314182607.3343346-1-avagin@google.com
</content>
</entry>
<entry>
<title>locking/rwsem: Add context analysis</title>
<updated>2026-03-08T10:06:53+00:00</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2026-03-06T09:43:56+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=739690915ce1f017223ef4e6f3cc966ccfa3c861'/>
<id>urn:sha1:739690915ce1f017223ef4e6f3cc966ccfa3c861</id>
<content type='text'>
Add compiler context analysis annotations.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260306101417.GT1282955@noisy.programming.kicks-ass.net
</content>
</entry>
<entry>
<title>locking/rtmutex: Add context analysis</title>
<updated>2026-03-08T10:06:53+00:00</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2026-01-20T17:17:50+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=90bb681dcdf7e69c90b56a18f06c0389a0810b92'/>
<id>urn:sha1:90bb681dcdf7e69c90b56a18f06c0389a0810b92</id>
<content type='text'>
Add compiler context analysis annotations.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260121111213.851599178@infradead.org
</content>
</entry>
<entry>
<title>locking/mutex: Add context analysis</title>
<updated>2026-03-08T10:06:53+00:00</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2026-01-20T09:06:08+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=5c4326231cde36fd5e90c41e403df9fac6238f4b'/>
<id>urn:sha1:5c4326231cde36fd5e90c41e403df9fac6238f4b</id>
<content type='text'>
Add compiler context analysis annotations.

Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260121111213.745353747@infradead.org
</content>
</entry>
<entry>
<title>locking/mutex: Remove the list_head from struct mutex</title>
<updated>2026-03-08T10:06:52+00:00</updated>
<author>
<name>Matthew Wilcox (Oracle)</name>
<email>willy@infradead.org</email>
</author>
<published>2026-03-05T19:55:43+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=25500ba7e77ce9d3d9b5a1929d41a2ee2e23f6fe'/>
<id>urn:sha1:25500ba7e77ce9d3d9b5a1929d41a2ee2e23f6fe</id>
<content type='text'>
Instead of embedding a list_head in struct mutex, store a pointer to
the first waiter.  The list of waiters remains a doubly linked list so
we can efficiently add to the tail of the list and remove from the front
(or middle) of the list.

Some of the list manipulation becomes more complicated, but it's a
reasonable tradeoff on the slow paths to shrink data structures which
embed a mutex like struct file.

Some of the debug checks have to be deleted because there's no
equivalent to checking them in the new scheme (e.g., an empty
waiter-&gt;list now means the waiter is the only one on the list,
not that it is no longer on the list).
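
Conceptually, the layout change looks like this (field names beyond
those in the changelog are illustrative assumptions):

        struct mutex {
                atomic_long_t           owner;
                raw_spinlock_t          wait_lock;
                struct mutex_waiter     *first_waiter;  /* NULL if uncontended */
        };

        struct mutex_waiter {
                struct list_head        list;   /* circular, links all waiters */
                struct task_struct      *task;
        };

The waiters still link to each other in a circle, so the tail is
reachable as first_waiter-&gt;list.prev, and tail insertion, like
front/middle removal, stays O(1).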

Signed-off-by: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260305195545.3707590-4-willy@infradead.org
</content>
</entry>
<entry>
<title>locking/semaphore: Remove the list_head from struct semaphore</title>
<updated>2026-03-08T10:06:52+00:00</updated>
<author>
<name>Matthew Wilcox (Oracle)</name>
<email>willy@infradead.org</email>
</author>
<published>2026-03-05T19:55:42+00:00</published>
<link rel='alternate' type='text/html' href='http://mirrors.hust.edu.cn/git/lwn.git/commit/?id=b9bdd4b6840454ef87f61b6506c9635c57a81650'/>
<id>urn:sha1:b9bdd4b6840454ef87f61b6506c9635c57a81650</id>
<content type='text'>
Instead of embedding a list_head in struct semaphore, store a pointer to
the first waiter.  The list of waiters remains a doubly linked list so
we can efficiently add to the tail of the list and remove from the front
(or middle) of the list.

Some of the list manipulation becomes more complicated, but it's a
reasonable tradeoff on the slow paths to shrink data structures
which embed a semaphore.
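
As a sketch of why tail insertion stays O(1) with only a first-waiter
pointer (function and field names are illustrative assumptions):

        static void sem_add_waiter(struct semaphore *sem,
                                   struct semaphore_waiter *waiter)
        {
                if (!sem-&gt;first_waiter) {
                        /* First waiter: a circular list of one. */
                        INIT_LIST_HEAD(&amp;waiter-&gt;list);
                        sem-&gt;first_waiter = waiter;
                } else {
                        /* Inserting just before the first waiter of a
                         * circular list appends at the tail. */
                        list_add_tail(&amp;waiter-&gt;list,
                                      &amp;sem-&gt;first_waiter-&gt;list);
                }
        }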

Signed-off-by: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20260305195545.3707590-3-willy@infradead.org
</content>
</entry>
</feed>
