author      Mike Galbraith <efault@gmx.de>    2010-03-11 17:17:13 +0100
committer   Ingo Molnar <mingo@elte.hu>       2010-03-11 18:32:49 +0100
commit      39c0cbe2150cbd848a25ba6cdb271d1ad46818ad
tree        7b9c356b39a2b50219398ce534d7d64e7ab4bf06 /kernel/sched.c
parent      41acab8851a0408c1d5ad6c21a07456f88b54d40
sched: Rate-limit nohz
Entering nohz code on every micro-idle is costing ~10% throughput for netperf
TCP_RR when scheduling cross-cpu. Rate limiting entry fixes this, but raises
ticks a bit. On my Q6600, an idle box goes from ~85 interrupts/sec to 128.
The higher the context switch rate, the more nohz entry costs. With this patch
and some cycle-recovery patches in my tree, the maximum cross-cpu context switch
rate improves by ~16%, a large portion of which is this rate limiting.
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301003.6785.28.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
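
The rate limit implemented below amounts to half a tick period: a CPU that last tried to enter nohz less than (NSEC_PER_SEC / HZ) >> 1 nanoseconds ago skips the nohz path and keeps the periodic tick. As a rough illustration of that threshold (a minimal stand-alone sketch; the HZ values are common CONFIG_HZ settings assumed for the example, not taken from this patch):

#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	/* Assumed CONFIG_HZ settings; the patch itself does not pick HZ. */
	unsigned long hz_values[] = { 100, 250, 1000 };
	unsigned int i;

	for (i = 0; i < sizeof(hz_values) / sizeof(hz_values[0]); i++) {
		unsigned long long tick_ns = NSEC_PER_SEC / hz_values[i];

		/* Threshold used by nohz_ratelimit(): half a tick period. */
		printf("HZ=%-4lu tick=%llu ns, nohz entry rate-limited within %llu ns\n",
		       hz_values[i], tick_ns, tick_ns >> 1);
	}
	return 0;
}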
Diffstat (limited to 'kernel/sched.c')
-rw-r--r--   kernel/sched.c | 12 ++++++++++++
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index a4aa071f08f3..60b1bbe2ad1b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -492,6 +492,7 @@ struct rq {
 	#define CPU_LOAD_IDX_MAX 5
 	unsigned long cpu_load[CPU_LOAD_IDX_MAX];
 #ifdef CONFIG_NO_HZ
+	u64 nohz_stamp;
 	unsigned char in_nohz_recently;
 #endif
 	/* capture load from *all* tasks on this cpu: */
@@ -1228,6 +1229,17 @@ void wake_up_idle_cpu(int cpu)
 	if (!tsk_is_polling(rq->idle))
 		smp_send_reschedule(cpu);
 }
+
+int nohz_ratelimit(int cpu)
+{
+	struct rq *rq = cpu_rq(cpu);
+	u64 diff = rq->clock - rq->nohz_stamp;
+
+	rq->nohz_stamp = rq->clock;
+
+	return diff < (NSEC_PER_SEC / HZ) >> 1;
+}
+
 #endif /* CONFIG_NO_HZ */
 
 static u64 sched_avg_period(void)
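
In the full commit this check is consulted from the tick-stop path (outside this diffstat, which is limited to kernel/sched.c), so a CPU coming out of a micro-idle does not pay the nohz entry cost again immediately. The stand-alone model below mimics the comparison in user space to show that behaviour; the names, the fixed HZ value and the timestamps are assumptions made for the sketch, not kernel code.

#include <stdio.h>

#define NSEC_PER_SEC	1000000000ULL
#define HZ		250	/* assumed CONFIG_HZ for the example */

static unsigned long long nohz_stamp;	/* models rq->nohz_stamp */

static int model_nohz_ratelimit(unsigned long long now_ns)
{
	unsigned long long diff = now_ns - nohz_stamp;

	nohz_stamp = now_ns;

	/* Non-zero: the previous nohz attempt was under half a tick ago. */
	return diff < (NSEC_PER_SEC / HZ) >> 1;
}

int main(void)
{
	/* First idle at t=10ms: far from the last attempt, prints 0 (enter nohz). */
	printf("%d\n", model_nohz_ratelimit(10ULL * 1000 * 1000));
	/* Micro-idle 1ms later: rate-limited, prints 1 (keep the tick). */
	printf("%d\n", model_nohz_ratelimit(11ULL * 1000 * 1000));
	return 0;
}

Note that the stamp is refreshed on every call, so a steady stream of micro-idles stays rate-limited until the gaps between them grow past half a tick.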