path: root/include/linux/sched.h
author    Peter Zijlstra <peterz@infradead.org> 2020-09-24 13:50:42 +0200
committer Peter Zijlstra <peterz@infradead.org> 2020-11-17 13:15:28 +0100
commit    ec618b84f6e15281cc3660664d34cd0dd2f2579e (patch)
tree      af7536d66f934bb1979bf9e9dd2052082ee9eaea /include/linux/sched.h
parent    f97bb5272d9e95d400d6c8643ebb146b3e3e7842 (diff)
sched: Fix rq->nr_iowait ordering
    schedule()                                ttwu()
      deactivate_task();                        if (p->on_rq && ...) // false
                                                  atomic_dec(&task_rq(p)->nr_iowait);
      if (prev->in_iowait)
        atomic_inc(&rq->nr_iowait);

Allows nr_iowait to be decremented before it gets incremented,
resulting in more dodgy IO-wait numbers than usual.

Note that because we can now do ttwu_queue_wakelist() before
p->on_cpu==0, we lose the natural ordering and have to further delay
the decrement.

Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Link: https://lkml.kernel.org/r/20201117093829.GD3121429@hirez.programming.kicks-ass.net
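The following is a minimal user-space sketch, not the kernel code or the actual fix: it merely models the interleaving described in the diagram above, using two pthreads and C11 atomics (the names on_rq, nr_iowait, schedule_side and ttwu_side are illustrative only). The "waker" thread performs its decrement before the "scheduler" thread performs its increment, so a sampler in between can read a bogus (negative) IO-wait count.

    /* toy model of the nr_iowait ordering bug; compile with: cc -pthread model.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int on_rq = 1;      /* is the task still on the runqueue? */
    static atomic_int nr_iowait = 0;  /* stands in for rq->nr_iowait        */

    static void *schedule_side(void *arg)
    {
            /* deactivate_task(): task leaves the runqueue */
            atomic_store(&on_rq, 0);

            usleep(1000);             /* window in which the waker can run */

            /* if (prev->in_iowait) atomic_inc(&rq->nr_iowait); */
            atomic_fetch_add(&nr_iowait, 1);
            return NULL;
    }

    static void *ttwu_side(void *arg)
    {
            /* if (p->on_rq && ...) observed false, so fall through ... */
            if (!atomic_load(&on_rq))
                    atomic_fetch_sub(&nr_iowait, 1);   /* decrement runs too early */
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, schedule_side, NULL);
            usleep(100);              /* let deactivate_task() happen first */
            pthread_create(&b, NULL, ttwu_side, NULL);

            usleep(500);
            /* a sampler here can see -1: the "dodgy IO-wait numbers" */
            printf("sampled nr_iowait = %d\n", atomic_load(&nr_iowait));

            pthread_join(a, NULL);
            pthread_join(b, NULL);
            printf("final nr_iowait   = %d\n", atomic_load(&nr_iowait));
            return 0;
    }

The commit addresses this by delaying the decrement on the wakeup path until the ordering against the schedule() side is re-established; the sketch above only demonstrates why the original ordering produces skewed counts.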
Diffstat (limited to 'include/linux/sched.h')
0 files changed, 0 insertions, 0 deletions