author     Oleg Nesterov <oleg@redhat.com>            2010-03-15 10:10:10 +0100
committer  Thomas Gleixner <tglx@linutronix.de>       2010-07-20 19:11:46 +0200
commit     ae35e6fffd929f71b6ea1193862d50b56b636a1b
tree       bd05e08973ab163406647734a6e6bdbab747b869
parent     a1e0f4a378e0d6d2f03c59c9699bb7746938c520
sched: move_task_off_dead_cpu(): Take rq->lock around select_fallback_rq()
move_task_off_dead_cpu()->select_fallback_rq() reads/updates ->cpus_allowed
locklessly, so it can race with set_cpus_allowed() running in parallel.
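Schematically, the window looks roughly like this (a simplified sketch of the
call paths, not an exact trace):

    hotplug path                              another CPU
    ------------                              -----------
    move_task_off_dead_cpu(dead_cpu, p)       set_cpus_allowed(p, new_mask)
      select_fallback_rq(dead_cpu, p)           takes p's rq->lock
        reads/updates p->cpus_allowed  <---->   updates p->cpus_allowed
        (rq->lock not held)                     releases rq->lock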
Change it to take rq->lock around select_fallback_rq(). Note that it is not
trivial to move this spin_lock() into select_fallback_rq() itself: we must
recheck that the task has not been migrated after we take the lock, and the
other callers do not need this lock.
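The pattern is the usual decide-under-lock / apply / recheck-and-retry loop.
Purely as an illustration, here is a minimal userspace analogue of that loop
(hypothetical code, not taken from the kernel; struct task, choose_dest() and
apply_move() are made-up stand-ins for the scheduler structures):

#include <pthread.h>
#include <stdio.h>

/*
 * Illustrative userspace analogue only -- not kernel code.  Decide on a
 * destination while holding the lock, drop the lock, apply the move, and
 * retry if the decision was invalidated in between.
 */
struct task {
	pthread_mutex_t lock;
	int cpu;		/* cpu the task currently sits on */
	unsigned int allowed;	/* bitmask of cpus the task may run on */
};

/* Caller holds t->lock.  Pick the lowest-numbered allowed cpu. */
static int choose_dest(struct task *t)
{
	for (int c = 0; c < 32; c++)
		if (t->allowed & (1u << c))
			return c;
	return -1;
}

/* Fails (returns 0) if the task moved or dest is no longer allowed. */
static int apply_move(struct task *t, int src, int dest)
{
	int ok;

	pthread_mutex_lock(&t->lock);
	ok = t->cpu == src && dest >= 0 && (t->allowed & (1u << dest));
	if (ok)
		t->cpu = dest;
	pthread_mutex_unlock(&t->lock);
	return ok;
}

static void move_task_off(struct task *t, int dead_cpu)
{
	int dest = -1, needs_move;

again:
	pthread_mutex_lock(&t->lock);
	needs_move = (t->cpu == dead_cpu);	/* recheck under the lock */
	if (needs_move)
		dest = choose_dest(t);
	pthread_mutex_unlock(&t->lock);

	/* t->allowed may change between unlock and apply_move(): retry */
	if (needs_move && !apply_move(t, dead_cpu, dest))
		goto again;
}

int main(void)
{
	struct task t = { PTHREAD_MUTEX_INITIALIZER, 2, 0x0b };	/* cpus 0,1,3 */

	move_task_off(&t, 2);
	printf("task now on cpu %d\n", t.cpu);
	return 0;
}

In the actual patch below, the roles of the lock, choose_dest() and
apply_move() are played by rq->lock, select_fallback_rq() and
__migrate_task(), with the extra TASK_WAKING check described next.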
To avoid races with the other callers of select_fallback_rq() which rely on
TASK_WAKING, we also check that p->state != TASK_WAKING and do nothing
otherwise. The owner of the TASK_WAKING state must update ->cpus_allowed and
choose the correct CPU anyway, and the subsequent __migrate_task() would be
meaningless because p->se.on_rq must be false.
Alternatively, we could change select_task_rq() to take rq->lock right
after it calls sched_class->select_task_rq(), but this looks a bit ugly.
Also, change move_task_off_dead_cpu() to not assume that irqs are disabled by
the caller, which allows __migrate_task_irq() to be absorbed into it.
[ upstream: 1445c08d06c5594895b4fae952ef8a457e89c390 ]
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100315091010.GA9131@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 kernel/sched.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index d7f6876907e3..59db39610b20 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -7739,29 +7739,29 @@ static int migration_thread(void *data)
 }
 
 #ifdef CONFIG_HOTPLUG_CPU
-
-static int __migrate_task_irq(struct task_struct *p, int src_cpu, int dest_cpu)
-{
-	int ret;
-
-	local_irq_disable();
-	ret = __migrate_task(p, src_cpu, dest_cpu);
-	local_irq_enable();
-	return ret;
-}
-
 /*
  * Figure out where task on dead CPU should go, use force if necessary.
  */
 static void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p)
 {
-	int dest_cpu;
-
+	struct rq *rq = cpu_rq(dead_cpu);
+	int needs_cpu, uninitialized_var(dest_cpu);
+	unsigned long flags;
 again:
-	dest_cpu = select_fallback_rq(dead_cpu, p);
+	local_irq_save(flags);
+
+	raw_spin_lock(&rq->lock);
+	needs_cpu = (task_cpu(p) == dead_cpu) && !(p->state & TASK_WAKING);
+	if (needs_cpu)
+		dest_cpu = select_fallback_rq(dead_cpu, p);
+	raw_spin_unlock(&rq->lock);
 
 	/* It can have affinity changed while we were choosing. */
-	if (unlikely(!__migrate_task_irq(p, dead_cpu, dest_cpu)))
+	if (needs_cpu)
+		needs_cpu = !__migrate_task(p, dead_cpu, dest_cpu);
+	local_irq_restore(flags);
+
+	if (unlikely(needs_cpu))
 		goto again;
 }
 