author		Joonsoo Kim <iamjoonsoo.kim@lge.com>	2013-04-23 17:27:39 +0900
committer	Ingo Molnar <mingo@kernel.org>	2013-04-24 08:52:44 +0200
commit		cfc03118047172f5bdc58d63c607d16d33ce5305 (patch)
tree		8711e931791f814b3a40956b4a6d0f7d9088c0a6 /kernel/sched
parent		de5eb2dd7f171ee8a45d23cd41aa2efe9ab922b3 (diff)
sched: Don't consider other cpus in our group in case of NEWLY_IDLE
Commit 88b8dac0 made load_balance() consider other cpus in its
group as destinations, regardless of idle type. For NEWLY_IDLE
balancing this is wrong: the motivation of NEWLY_IDLE balancing
is to pull a task onto this cpu so that it does not go idle, and
that motivation does not apply to the other cpus in the group.
So, change the code not to consider other cpus for NEWLY_IDLE
balancing.
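For context, the group-wide behaviour in question is the retry path that commit 88b8dac0 added to load_balance(): when affinity pins some tasks away from the original destination cpu, the pull is retried with another cpu of the destination group as the target. A trimmed sketch of that path, based on the v3.9-era kernel/sched/fair.c (locking and statistics elided):

more_balance:
	/* Try to move tasks from the busiest rq onto env.dst_cpu. */
	cur_ld_moved = move_tasks(&env);
	ld_moved += cur_ld_moved;

	/*
	 * Some tasks could not move to env.dst_cpu because of affinity,
	 * but another cpu in the destination group may be able to take
	 * them: retry the pull with that cpu as the new target.
	 */
	if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0 &&
			lb_iterations++ < max_lb_iterations) {
		env.dst_rq	 = cpu_rq(env.new_dst_cpu);
		env.dst_cpu	 = env.new_dst_cpu;
		env.flags	&= ~LBF_SOME_PINNED;
		env.loop	 = 0;
		goto more_balance;
	}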
With this patch, the assignment 'if (pulled_task)
this_rq->idle_stamp = 0' in idle_balance() becomes correct:
since NEWLY_IDLE balancing no longer considers other cpus, a
non-zero pulled_task guarantees that a task was pulled onto this
runqueue, so clearing 'this_rq->idle_stamp' is valid.
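For reference, the assignment in question sits in idle_balance(); a trimmed sketch, again based on the v3.9-era kernel/sched/fair.c (balance intervals and early exits elided):

void idle_balance(int this_cpu, struct rq *this_rq)
{
	struct sched_domain *sd;
	int pulled_task = 0;

	this_rq->idle_stamp = this_rq->clock;

	rcu_read_lock();
	for_each_domain(this_cpu, sd) {
		int balance = 1;

		if (sd->flags & SD_BALANCE_NEWIDLE)
			pulled_task = load_balance(this_cpu, this_rq,
						   sd, CPU_NEWLY_IDLE, &balance);

		if (pulled_task) {
			/*
			 * After this patch, pulled_task != 0 implies the
			 * task landed on this_rq, so clearing the stamp
			 * is correct.
			 */
			this_rq->idle_stamp = 0;
			break;
		}
	}
	rcu_read_unlock();
}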
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Tested-by: Jason Low <jason.low2@hp.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1366705662-3587-4-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/fair.c	15
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 726e12905725..dfa92b7b3dec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5026,8 +5026,21 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		.cpus		= cpus,
 	};
 
+	/*
+	 * For NEWLY_IDLE load_balancing, we don't need to consider
+	 * other cpus in our group
+	 */
+	if (idle == CPU_NEWLY_IDLE) {
+		env.dst_grpmask = NULL;
+		/*
+		 * we don't care max_lb_iterations in this case,
+		 * in following patch, this will be removed
+		 */
+		max_lb_iterations = 0;
+	} else
+		max_lb_iterations = cpumask_weight(env.dst_grpmask);
+
 	cpumask_copy(cpus, cpu_active_mask);
-	max_lb_iterations = cpumask_weight(env.dst_grpmask);
 
 	schedstat_inc(sd, lb_count[idle]);
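A design note on the hunk above: setting env.dst_grpmask to NULL reuses a path that active balancing already relies on, since the env built in active_load_balance_cpu_stop() carries no group mask. In the 88b8dac0-era can_migrate_task(), a NULL dst_grpmask suppresses the bookkeeping that would arm a retry to a sibling cpu; roughly (a sketch, not the verbatim source):

	/* In can_migrate_task(), when the task is not allowed on dst_cpu: */
	if (!env->dst_grpmask || (env->flags & LBF_SOME_PINNED))
		return 0;	/* no new_dst_cpu recorded, no retry armed */

	new_dst_cpu = cpumask_first_and(env->dst_grpmask,
					tsk_cpus_allowed(p));
	if (new_dst_cpu < nr_cpu_ids) {
		env->flags |= LBF_SOME_PINNED;
		env->new_dst_cpu = new_dst_cpu;
	}
	return 0;

With dst_grpmask == NULL, LBF_SOME_PINNED is never set, so a NEWLY_IDLE pull can only land on this_cpu.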