author		Srikar Dronamraju <srikar@linux.vnet.ibm.com>	2015-06-16 17:26:00 +0530
committer	Ingo Molnar <mingo@kernel.org>	2015-07-07 08:46:10 +0200
commit		44dcb04f0ea8eaac3b9c9d3172416efc5a950214 (patch)
tree		4ea5f6409e7d76401816fe58b02fce6777467825 /kernel/sched
parent		2a1ed24ce94036d00a7c5d5e99a77a80f0aa556a (diff)
sched/numa: Consider 'imbalance_pct' when comparing loads in numa_has_capacity()
This is consistent with all other load-balancing instances, where we absorb
unfairness up to env->imbalance_pct. Absorbing unfairness up to
env->imbalance_pct allows tasks to be pulled to, and retained on, their
preferred nodes.

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1434455762-30857-3-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
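To make the effect of the new comparison concrete, here is a minimal
standalone C sketch. It is not the kernel function itself: the helper name,
the userspace harness, and the sample imbalance_pct of 125 are illustrative
assumptions; only the cross-multiplied comparison mirrors the patched line.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative stand-in for the patched comparison in numa_has_capacity():
 * returns true when the source node's load per unit of compute capacity,
 * inflated by imbalance_pct/100, exceeds the destination's. The
 * cross-multiplied form avoids integer division, as the kernel code does.
 */
static bool src_more_loaded(unsigned long src_load, unsigned long src_cap,
			    unsigned long dst_load, unsigned long dst_cap,
			    unsigned long imbalance_pct)
{
	return src_load * dst_cap * imbalance_pct >
	       dst_load * src_cap * 100;
}

int main(void)
{
	unsigned long cap = 1024;	/* equal compute capacities */
	unsigned long src_load = 900;	/* task's current node */
	unsigned long dst_load = 1000;	/* task's preferred node */

	/* Old behaviour (pct == 100): dst is busier, so no pull; prints 0. */
	printf("pct=100 -> %d\n", src_more_loaded(src_load, cap,
						  dst_load, cap, 100));

	/*
	 * With an assumed pct of 125 the roughly 10% extra load on dst is
	 * absorbed, so the pull toward the preferred node proceeds; prints 1.
	 */
	printf("pct=125 -> %d\n", src_more_loaded(src_load, cap,
						  dst_load, cap, 125));
	return 0;
}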
Diffstat (limited to 'kernel/sched')
-rw-r--r--	kernel/sched/fair.c	5
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 43ee84f05d1e..a53a610095e6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1415,8 +1415,9 @@ static bool numa_has_capacity(struct task_numa_env *env)
 	 * --------------------- vs ---------------------
 	 * src->compute_capacity    dst->compute_capacity
 	 */
-	if (src->load * dst->compute_capacity >
-	    dst->load * src->compute_capacity)
+	if (src->load * dst->compute_capacity * env->imbalance_pct >
+
+			dst->load * src->compute_capacity * 100)
 		return true;
 
 	return false;
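Worked numbers for the hunk above (hypothetical values: equal compute
capacities of 1024, imbalance_pct of 125, src->load = 900, dst->load = 1000):
the old check compares 900*1024 against 1000*1024 and returns false, blocking
the move, while the new check compares 900*1024*125 = 115,200,000 against
1000*1024*100 = 102,400,000 and returns true, absorbing the roughly 10%
imbalance so the task can still reach its preferred node.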