author | Oleg Nesterov <oleg@redhat.com> | 2012-08-26 21:12:11 +0200
---|---|---
committer | Ingo Molnar <mingo@kernel.org> | 2012-09-13 16:47:34 +0200
commit | 9da33de62431c7839f98156720862262272a8380
tree | 1a05e4bab566cf0aeba5890e536387c0859012ac /kernel/task_work.c
parent | ac3d0da8f3290b3d394cdb7f50604424a7cd6092
task_work: task_work_add() should not succeed after exit_task_work()
Commit ed3e694d ("move exit_task_work() past exit_files() et.al.") destroyed
the add/exit synchronization we had: the caller itself must now ensure that
task_work_add() cannot race with the exiting task.
However, this is neither convenient nor simple, and the only user that tries
to do it is buggy (see the next patch). Unless the task is current, there is
simply no way to do this in general.
Change exit_task_work()->task_work_run() to use the dummy "work_exited"
entry to let task_work_add() know it should fail.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20120826191211.GA4228@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/task_work.c')
-rw-r--r-- | kernel/task_work.c | 22
1 file changed, 16 insertions(+), 6 deletions(-)
```diff
diff --git a/kernel/task_work.c b/kernel/task_work.c
index f13ec0bda1d5..65bd3c92d6f3 100644
--- a/kernel/task_work.c
+++ b/kernel/task_work.c
@@ -2,16 +2,17 @@
 #include <linux/task_work.h>
 #include <linux/tracehook.h>
 
+static struct callback_head work_exited; /* all we need is ->next == NULL */
+
 int
 task_work_add(struct task_struct *task, struct callback_head *work, bool notify)
 {
 	struct callback_head *head;
-	/*
-	 * Not inserting the new work if the task has already passed
-	 * exit_task_work() is the responisbility of callers.
-	 */
+
 	do {
 		head = ACCESS_ONCE(task->task_works);
+		if (unlikely(head == &work_exited))
+			return -ESRCH;
 		work->next = head;
 	} while (cmpxchg(&task->task_works, head, work) != head);
 
@@ -30,7 +31,7 @@ task_work_cancel(struct task_struct *task, task_work_func_t func)
 	 * If cmpxchg() fails we continue without updating pprev.
 	 * Either we raced with task_work_add() which added the
 	 * new entry before this work, we will find it again. Or
-	 * we raced with task_work_run(), *pprev == NULL.
+	 * we raced with task_work_run(), *pprev == NULL/exited.
 	 */
 	raw_spin_lock_irqsave(&task->pi_lock, flags);
 	while ((work = ACCESS_ONCE(*pprev))) {
@@ -51,7 +52,16 @@ void task_work_run(void)
 	struct callback_head *work, *head, *next;
 
 	for (;;) {
-		work = xchg(&task->task_works, NULL);
+		/*
+		 * work->func() can do task_work_add(), do not set
+		 * work_exited unless the list is empty.
+		 */
+		do {
+			work = ACCESS_ONCE(task->task_works);
+			head = !work && (task->flags & PF_EXITING) ?
+				&work_exited : NULL;
+		} while (cmpxchg(&task->task_works, work, head) != work);
+
 		if (!work)
 			break;
 		/*
```
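The close protocol this patch introduces can be sketched in userspace with C11 atomics. This is a simplified model, not the kernel code: `work_add()`, `work_run()`, the `exiting` flag, and the `cb` struct are illustrative names standing in for `task_work_add()`, `task_work_run()`, `PF_EXITING`, and `struct callback_head`, and the sketch runs items LIFO where the kernel reverses the list first.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct cb {
    struct cb *next;
    void (*func)(struct cb *);
};

static struct cb work_exited;                  /* sentinel: list is closed */
static _Atomic(struct cb *) works = NULL;      /* lockless singly linked list */
static bool exiting = false;                   /* stands in for PF_EXITING */

static int ran = 0;
static void bump(struct cb *unused) { (void)unused; ran++; }

/* cmpxchg-push a work item; fails once the sentinel is installed */
static int work_add(struct cb *work)
{
    struct cb *head = atomic_load(&works);
    do {
        if (head == &work_exited)
            return -1;                         /* the kernel returns -ESRCH here */
        work->next = head;
    } while (!atomic_compare_exchange_weak(&works, &head, work));
    return 0;
}

/*
 * Drain the list. Only when the list is observed empty AND we are
 * exiting do we swap in the sentinel: handlers may still queue more
 * work while draining, but late work_add() callers fail cleanly
 * instead of racing with the exit path.
 */
static void work_run(void)
{
    for (;;) {
        struct cb *work = atomic_load(&works);
        struct cb *head;
        do {
            head = (!work && exiting) ? &work_exited : NULL;
        } while (!atomic_compare_exchange_weak(&works, &work, head));
        if (!work)
            break;
        while (work) {                         /* runs LIFO; the kernel reverses first */
            struct cb *next = work->next;
            work->func(work);
            work = next;
        }
    }
}
```

The key design point matches the patch: the sentinel is only installed by a cmpxchg that simultaneously confirms the list is empty, so there is no window where queued work is lost, and every later add observes the sentinel and fails.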