From c190c3b16c0f56ff338df12df53c03859155951b Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Tue, 23 Jun 2015 19:03:45 -0700
Subject: rcu: Switch synchronize_sched_expedited() to stop_one_cpu()

The synchronize_sched_expedited() function currently invokes
try_stop_cpus(), which schedules the stopper kthreads on each online
non-idle CPU, and waits until all those kthreads are running before
letting any of them stop.  This is disastrous for real-time workloads,
which get hit with a preemption that is as long as the longest
scheduling latency on any CPU, including any non-realtime housekeeping
CPUs.  This commit therefore switches to using stop_one_cpu() on each
CPU in turn.  This avoids inflicting the worst-case scheduling latency
on the worst-case CPU onto all other CPUs, and also simplifies the
code a little bit.

Follow-up commits will simplify the counter-snapshotting algorithm and
convert a number of the counters that are now protected by the new
->expedited_mutex to non-atomic.

Signed-off-by: Peter Zijlstra
[ paulmck: Kept stop_one_cpu(), dropped disabling of "guardrails". ]
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 7c0b09d754a1..7c25fe473ad9 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -480,6 +480,7 @@ struct rcu_state {
 						/*  _rcu_barrier(). */
 	/* End of fields guarded by barrier_mutex. */
 
+	struct mutex expedited_mutex;		/* Serializes expediting. */
 	atomic_long_t expedited_start;		/* Starting ticket. */
 	atomic_long_t expedited_done;		/* Done ticket. */
 	atomic_long_t expedited_wrap;		/* # near-wrap incidents. */
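
For context beyond this header-only hunk, here is a minimal sketch of
the loop the commit message describes, in the style of
kernel/rcu/tree.c.  The synchronize_sched_expedited_cpu_stop()
callback and the new ->expedited_mutex exist in the source at this
commit; the idle_cpu() test is a simplified stand-in for the real
dynticks-based idle check, and the surrounding code is an assumption
for illustration, not the literal tree.c change (which this excerpt
does not include):

	/* Serialize expedited grace periods on the new mutex. */
	mutex_lock(&rsp->expedited_mutex);
	get_online_cpus();
	for_each_online_cpu(cpu) {
		/* Skip this CPU; idle_cpu() is a simplified idle check. */
		if (cpu == raw_smp_processor_id() || idle_cpu(cpu))
			continue;
		/*
		 * Stop one CPU and wait for it to finish, rather than
		 * scheduling stopper kthreads on every CPU at once the
		 * way try_stop_cpus() did.
		 */
		stop_one_cpu(cpu, synchronize_sched_expedited_cpu_stop,
			     NULL);
	}
	put_online_cpus();
	mutex_unlock(&rsp->expedited_mutex);

Because each stop_one_cpu() call completes before the next CPU is
stopped, no CPU ever waits behind the stopper-kthread scheduling
latency of any other CPU, which is the real-time improvement the
commit message describes.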