author    Oleg Nesterov <oleg@tv-sign.ru>  2007-11-15 20:57:40 +0100
committer Ingo Molnar <mingo@elte.hu>      2007-11-15 20:57:40 +0100
commit    dae51f56204d33444f61d9e7af3ee70aef55daa4 (patch)
tree      586c209c5b46da902dff912fa96e797840198030 /kernel/sched.c
parent    9778385db35a799d410039be123044a0d3e917a2 (diff)
sched: fix SCHED_FIFO tasks & FAIR_GROUP_SCHED
Suppose that a SCHED_FIFO task does switch_uid(new_user). Now, p->se.cfs_rq
and p->se.parent both point into the old user_struct->tg, because
sched_move_task() doesn't call set_task_cfs_rq() for the !fair_sched_class
case.

Suppose that the old user_struct/task_group is freed/reused, and the task
does sched_setscheduler(SCHED_NORMAL). __setscheduler() sets
fair_sched_class, but doesn't update ->se.cfs_rq/parent, which still point
to the freed memory.

This means that check_preempt_wakeup(), doing

	while (!is_same_group(se, pse)) {
		se = parent_entity(se);
		pse = parent_entity(pse);
	}

may OOPS in a similar way if rq->curr or p did something like the above.

Perhaps we need something like the patch below; note that __setscheduler()
can't do set_task_cfs_rq().

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
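For context (not part of the original commit message): a minimal sketch of
what set_task_cfs_rq() looked like under CONFIG_FAIR_GROUP_SCHED in kernels
of this vintage, assuming the single-argument form the hunk below calls; the
exact body may differ. It re-points the task's scheduling entity at its
current task_group's per-CPU cfs_rq and parent entity, which is precisely
what goes stale after switch_uid().

	#ifdef CONFIG_FAIR_GROUP_SCHED
	/* Sketch: re-point p's scheduling entity at its current task_group. */
	static inline void set_task_cfs_rq(struct task_struct *p)
	{
		p->se.cfs_rq  = task_group(p)->cfs_rq[task_cpu(p)];
		p->se.parent  = task_group(p)->se[task_cpu(p)];
	}
	#else
	static inline void set_task_cfs_rq(struct task_struct *p) { }
	#endif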
Diffstat (limited to 'kernel/sched.c')
-rw-r--r--	kernel/sched.c	4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 908335e1781a..d62b759753f3 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -7087,8 +7087,10 @@ void sched_move_task(struct task_struct *tsk)
 
 	rq = task_rq_lock(tsk, &flags);
 
-	if (tsk->sched_class != &fair_sched_class)
+	if (tsk->sched_class != &fair_sched_class) {
+		set_task_cfs_rq(tsk);
 		goto done;
+	}
 
 	update_rq_clock(rq);
 
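With the hunk applied, the early-exit path of sched_move_task() reads
roughly as below (a sketch; the locals and the fair-class
dequeue/re-enqueue path are abbreviated). The !fair_sched_class case now
refreshes the group pointers before bailing out, so a later
sched_setscheduler(SCHED_NORMAL) finds valid ->se.cfs_rq/->se.parent.

	void sched_move_task(struct task_struct *tsk)
	{
		unsigned long flags;
		struct rq *rq;

		rq = task_rq_lock(tsk, &flags);

		if (tsk->sched_class != &fair_sched_class) {
			/*
			 * Nothing to dequeue/re-enqueue for a non-fair task,
			 * but ->se.cfs_rq/->se.parent must still follow the
			 * task's (possibly new) task_group.
			 */
			set_task_cfs_rq(tsk);
			goto done;
		}

		update_rq_clock(rq);

		/* ... dequeue, set_task_cfs_rq(tsk), re-enqueue for fair tasks ... */

	done:
		task_rq_unlock(rq, &flags);
	}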