author    Peter Zijlstra <a.p.zijlstra@chello.nl>  2011-11-16 14:38:16 +0100
committer Ingo Molnar <mingo@elte.hu>              2011-12-06 08:33:52 +0100
commit    0f5a2601284237e2ba089389fd75d67f77626cef
tree      37eedc660f09a36cfbd6b2a2c28e8cd0d1dbe167 /include/linux/perf_event.h
parent    d6c1c49de577fa292af2449817364b7d89b574d8
perf: Avoid a useless pmu_disable() in the perf-tick
Gleb writes:
> Currently the PMU is disabled and re-enabled on each timer interrupt,
> even when no rotation or frequency adjustment is needed. On Intel CPUs
> this results in two writes to the PERF_GLOBAL_CTRL MSR per tick. On bare
> metal this does not cause a significant slowdown, but when running perf
> in a virtual machine it leads to a 20% slowdown on my machine.
Cure this by keeping a perf_event_context::nr_freq counter that counts the
number of active events requiring frequency adjustment, and use it in the
same fashion as the existing nr_events != nr_active test in
perf_rotate_context().
By ruling out both rotation and frequency adjustment a priori in the
common case, we avoid the otherwise superfluous PMU disable, as the
sketch below illustrates.
Suggested-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-515yhoatehd3gza7we9fapaa@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
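
To make the early exit concrete, here is a minimal sketch of the tick path
described above. Everything except nr_events, nr_active, nr_freq, and the
perf_rotate_context() name is an illustrative stand-in (struct ctx_sketch
and the *_sketch helpers are invented for this example), not the kernel's
actual code:

#include <stdbool.h>

/*
 * Hedged sketch of the early exit described in the commit message;
 * illustrative stand-ins, not the kernel's actual types or functions.
 */
struct ctx_sketch {
	int nr_events;	/* events attached to this context */
	int nr_active;	/* events currently programmed on the PMU */
	int nr_freq;	/* active events needing sample-period adjustment */
};

/* Stand-ins for the PMU hooks; on Intel each costs a PERF_GLOBAL_CTRL write. */
static void pmu_disable_sketch(void) { }
static void pmu_enable_sketch(void) { }
static void adjust_freq_sketch(struct ctx_sketch *ctx) { (void)ctx; }
static void rotate_lists_sketch(struct ctx_sketch *ctx) { (void)ctx; }

static void perf_rotate_context_sketch(struct ctx_sketch *cpu_ctx,
				       struct ctx_sketch *task_ctx)
{
	bool rotate = false, freq = false;

	/* Rotation is only needed when not every event fits on the PMU. */
	if (cpu_ctx->nr_events != cpu_ctx->nr_active)
		rotate = true;
	if (task_ctx && task_ctx->nr_events != task_ctx->nr_active)
		rotate = true;

	/* Frequency adjustment only matters if some active event asked for it. */
	if (cpu_ctx->nr_freq || (task_ctx && task_ctx->nr_freq))
		freq = true;

	/* Common case: nothing to do, so skip both MSR writes entirely. */
	if (!rotate && !freq)
		return;

	pmu_disable_sketch();
	if (freq) {
		adjust_freq_sketch(cpu_ctx);
		if (task_ctx)
			adjust_freq_sketch(task_ctx);
	}
	if (rotate) {
		rotate_lists_sketch(cpu_ctx);
		if (task_ctx)
			rotate_lists_sketch(task_ctx);
	}
	pmu_enable_sketch();
}

The disable/enable pair brackets only the work that actually needs the PMU
stopped, so in the common no-work case neither PERF_GLOBAL_CTRL write
happens at all, which is exactly the overhead Gleb measured in a guest.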
Diffstat (limited to 'include/linux/perf_event.h')
 include/linux/perf_event.h | 1 +
 1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index b1f89122bf6a..cb44c9e75660 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -890,6 +890,7 @@ struct perf_event_context {
 	int				nr_active;
 	int				is_active;
 	int				nr_stat;
+	int				nr_freq;
 	int				rotate_disable;
 	atomic_t			refcount;
 	struct task_struct		*task;
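
A similarly hedged guess at the bookkeeping side, reusing struct ctx_sketch
from the sketch above: the counter moves in lockstep with an event's active
state, so the tick can test it cheaply. The is_freq_event flag stands in
for checking the event's sampling mode; none of these helper names come
from the patch.

/*
 * Illustrative only: how nr_freq could track the number of active
 * frequency-based events, mirroring the nr_active accounting.
 */
static void event_sched_in_sketch(struct ctx_sketch *ctx, bool is_freq_event)
{
	ctx->nr_active++;
	if (is_freq_event)
		ctx->nr_freq++;
}

static void event_sched_out_sketch(struct ctx_sketch *ctx, bool is_freq_event)
{
	ctx->nr_active--;
	if (is_freq_event)
		ctx->nr_freq--;
}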