author | Oleg Nesterov <oleg@redhat.com> | 2016-11-29 18:51:07 +0100 |
---|---|---|
committer | Thomas Gleixner <tglx@linutronix.de> | 2016-12-08 14:36:19 +0100 |
commit | cf380a4a96e2260742051fa7fc831596bb26cc8b (patch) | |
tree | 8f27e5d81c85ab8de3c4a2f92eee70d7ad092022 | |
parent | efb29fbfa50c490dac64a9418ebe553be82df781 (diff) | |
kthread: Don't use to_live_kthread() in kthread_[un]park()
Now that to_kthread() is always valid, change kthread_park() and
kthread_unpark() to use it and kill to_live_kthread().
The conversion of kthread_unpark() is trivial. If KTHREAD_IS_PARKED is set
then the task has called complete(&self->parked), and thus the function
cannot race against a concurrent kthread_stop() and exit.
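For reference, a sketch (not necessarily a verbatim copy of the tree) of the
lookup that replaces to_live_kthread(): after the earlier patches in this
series the struct kthread is kmalloc'ed and stays valid until
free_kthread_struct(), so no vfork_done check is needed.

/*
 * Sketch of the always-valid lookup used by this patch; the real helper
 * lives in kernel/kthread.c.
 */
static inline struct kthread *to_kthread(struct task_struct *k)
{
	WARN_ON(!(k->flags & PF_KTHREAD));
	return (__force void *)k->set_child_tid;
}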
kthread_park() is more tricky, because its semantics are not well
defined. It returns -ENOSYS if the thread exited, but this can never happen,
and as Roman pointed out kthread_park() can obviously block forever if it
races with the exiting kthread.
The usage of kthread_park() in cpuhp code (cpu.c, smpboot.c, stop_machine.c)
is fine. It can never see an exiting/exited kthread: smpboot_destroy_threads()
clears *ht->store, and smpboot_park_thread() checks that it is not NULL under
the same smpboot_threads_lock. cpuhp_threads and cpu_stop_threads never exit,
so the other callers are fine too.
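A simplified sketch of that protection, paraphrasing kernel/smpboot.c of this
era (illustrative, not a verbatim copy): parking only happens on a task that
is still published in *ht->store, and the slot is cleared together with
kthread_stop() under smpboot_threads_lock, so kthread_park() never sees an
exiting kthread.

/* Paraphrased from kernel/smpboot.c; both run under smpboot_threads_lock. */
static void smpboot_park_thread(struct smp_hotplug_thread *ht, unsigned int cpu)
{
	struct task_struct *tsk = *per_cpu_ptr(ht->store, cpu);

	/* NULL once smpboot_destroy_threads() has run, so never an exited task */
	if (tsk && !ht->selfparking)
		kthread_park(tsk);
}

static void smpboot_destroy_threads(struct smp_hotplug_thread *ht)
{
	unsigned int cpu;

	for_each_possible_cpu(cpu) {
		struct task_struct *tsk = *per_cpu_ptr(ht->store, cpu);

		if (tsk) {
			kthread_stop(tsk);
			put_task_struct(tsk);
			*per_cpu_ptr(ht->store, cpu) = NULL;
		}
	}
}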
But it has two more users:
- watchdog_park_threads():
The code is actually correct: get_online_cpus() ensures that
kthread_park() can't race with itself (note that kthread_park() cannot
handle this race correctly), but it should not use kthread_park()
directly.
- drivers/gpu/drm/amd/scheduler/gpu_scheduler.c should not use
kthread_park() either.
kthread_park() must not be called after amd_sched_fini(), which does
kthread_stop(); otherwise even to_live_kthread() is not safe, because the
task_struct can already be freed and sched->thread can point to nowhere
(see the sketch after this list).
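A hypothetical teardown sequence showing the ordering bug described in the
second item; the struct, function and field names below are illustrative,
not the actual amdgpu code.

struct my_sched {
	struct task_struct *thread;
};

static void my_sched_fini(struct my_sched *sched)
{
	kthread_stop(sched->thread);	/* thread exits, last reference may be dropped */
}

static void my_sched_suspend(struct my_sched *sched)
{
	/*
	 * BUG if called after my_sched_fini(): sched->thread may point to a
	 * freed task_struct, so even to_live_kthread() cannot help here.
	 */
	kthread_park(sched->thread);
}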
The usage of kthread_park/unpark should either be restricted to core code
which is properly protected against the exit race, or made robust enough
that it is safe to use in drivers.
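For drivers the safe pattern therefore remains: only park/unpark a kthread
you still own, and never after kthread_stop(). A minimal lifecycle sketch,
assuming a standard parkable thread function (names are illustrative):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static int demo_thread_fn(void *data)
{
	while (!kthread_should_stop()) {
		if (kthread_should_park())
			kthread_parkme();	/* sit here until kthread_unpark() */

		/* ... one unit of real work would go here ... */
		schedule_timeout_interruptible(HZ / 10);
	}
	return 0;
}

static int demo_park_lifecycle(void)
{
	struct task_struct *tsk;

	tsk = kthread_run(demo_thread_fn, NULL, "demo-parkable");
	if (IS_ERR(tsk))
		return PTR_ERR(tsk);

	kthread_park(tsk);	/* returns once the thread has parked itself */
	/* the thread is quiescent here */
	kthread_unpark(tsk);	/* let it run again */

	/* Final teardown: no kthread_park()/kthread_unpark() after this. */
	return kthread_stop(tsk);
}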
To catch potential exit issues, add a WARN_ON(PF_EXITING) for now.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Chunming Zhou <David1.Zhou@amd.com>
Cc: Roman Pen <roman.penyaev@profitbricks.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20161129175107.GA5339@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-rw-r--r-- | kernel/kthread.c | 69 |
1 file changed, 24 insertions(+), 45 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 4dcbc8b5d6b6..01d27164e5b7 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -78,21 +78,6 @@ void free_kthread_struct(struct task_struct *k)
 	kfree(to_kthread(k));
 }
 
-#define __to_kthread(vfork)	\
-	container_of(vfork, struct kthread, exited)
-
-/*
- * TODO: kill it and use to_kthread(). But we still need the users
- * like kthread_stop() which has to sync with the exiting kthread.
- */
-static struct kthread *to_live_kthread(struct task_struct *k)
-{
-	struct completion *vfork = ACCESS_ONCE(k->vfork_done);
-	if (likely(vfork))
-		return __to_kthread(vfork);
-	return NULL;
-}
-
 /**
  * kthread_should_stop - should this kthread return now?
  *
@@ -441,8 +426,18 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
 	return p;
 }
 
-static void __kthread_unpark(struct task_struct *k, struct kthread *kthread)
+/**
+ * kthread_unpark - unpark a thread created by kthread_create().
+ * @k:		thread created by kthread_create().
+ *
+ * Sets kthread_should_park() for @k to return false, wakes it, and
+ * waits for it to return. If the thread is marked percpu then its
+ * bound to the cpu again.
+ */
+void kthread_unpark(struct task_struct *k)
 {
+	struct kthread *kthread = to_kthread(k);
+
 	clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
 	/*
 	 * We clear the IS_PARKED bit here as we don't wait
@@ -460,22 +455,6 @@ static void __kthread_unpark(struct task_struct *k, struct kthread *kthread)
 		wake_up_state(k, TASK_PARKED);
 	}
 }
-
-/**
- * kthread_unpark - unpark a thread created by kthread_create().
- * @k:		thread created by kthread_create().
- *
- * Sets kthread_should_park() for @k to return false, wakes it, and
- * waits for it to return. If the thread is marked percpu then its
- * bound to the cpu again.
- */
-void kthread_unpark(struct task_struct *k)
-{
-	struct kthread *kthread = to_live_kthread(k);
-
-	if (kthread)
-		__kthread_unpark(k, kthread);
-}
 EXPORT_SYMBOL_GPL(kthread_unpark);
 
 /**
@@ -492,20 +471,20 @@ EXPORT_SYMBOL_GPL(kthread_unpark);
  */
 int kthread_park(struct task_struct *k)
 {
-	struct kthread *kthread = to_live_kthread(k);
-	int ret = -ENOSYS;
-
-	if (kthread) {
-		if (!test_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
-			set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
-			if (k != current) {
-				wake_up_process(k);
-				wait_for_completion(&kthread->parked);
-			}
+	struct kthread *kthread = to_kthread(k);
+
+	if (WARN_ON(k->flags & PF_EXITING))
+		return -ENOSYS;
+
+	if (!test_bit(KTHREAD_IS_PARKED, &kthread->flags)) {
+		set_bit(KTHREAD_SHOULD_PARK, &kthread->flags);
+		if (k != current) {
+			wake_up_process(k);
+			wait_for_completion(&kthread->parked);
 		}
-		ret = 0;
 	}
-	return ret;
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(kthread_park);
 
@@ -534,7 +513,7 @@ int kthread_stop(struct task_struct *k)
 	get_task_struct(k);
 	kthread = to_kthread(k);
 	set_bit(KTHREAD_SHOULD_STOP, &kthread->flags);
-	__kthread_unpark(k, kthread);
+	kthread_unpark(k);
 	wake_up_process(k);
 	wait_for_completion(&kthread->exited);
 	ret = k->exit_code;