author | Sean Christopherson <seanjc@google.com> | 2023-11-09 18:28:50 -0800
committer | Sean Christopherson <seanjc@google.com> | 2024-02-01 09:35:47 -0800
commit | b31880ca2f41dc2196e31d97e498b0fa884c2b2a (patch)
tree | f10b1c293f170a21e1ab3f4cc21a79f5f41bad66 /arch/x86/kvm/pmu.c
parent | be6b067dae1573cf4d53c8b08175d8872d82f030 (diff)
download | lwn-b31880ca2f41dc2196e31d97e498b0fa884c2b2a.tar.gz lwn-b31880ca2f41dc2196e31d97e498b0fa884c2b2a.zip
KVM: x86/pmu: Move pmc_idx => pmc translation helper to common code
Add a common helper for *internal* PMC lookups, and delete the ops hook
and Intel's implementation. Keep AMD's implementation, but rename it to
amd_pmu_get_pmc() to make it somewhat more obvious that it's suited for
both KVM-internal and guest-initiated lookups.
Because KVM tracks all counters in a single bitmap, getting a counter when
iterating over a bitmap, e.g. of all valid PMCs, requires a small amount of
math that, while simple, isn't super obvious and doesn't use the same
semantics as PMC lookups from RDPMC! Although AMD doesn't support fixed
counters, the common PMU code still behaves as if there is a split, the
high half of which just happens to always be empty.
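For illustration, a minimal sketch of what such a pmc_idx => pmc translation
helper can look like; the struct kvm_pmu fields and INTEL_PMC_IDX_FIXED are
real, but the exact bounds checks and the helper name are assumptions for the
sketch, not necessarily the code this patch adds:

/*
 * Illustrative sketch only: GP counters occupy the low bits of the bitmap
 * and fixed counters start at INTEL_PMC_IDX_FIXED, so a fixed counter's
 * bitmap index must be rebased before indexing fixed_counters[].  On AMD
 * the fixed "half" simply stays empty.
 */
static inline struct kvm_pmc *pmc_idx_to_pmc_sketch(struct kvm_pmu *pmu, int idx)
{
	if (idx < INTEL_PMC_IDX_FIXED) {
		if (idx >= pmu->nr_arch_gp_counters)
			return NULL;
		return &pmu->gp_counters[idx];
	}

	idx -= INTEL_PMC_IDX_FIXED;
	if (idx >= pmu->nr_arch_fixed_counters)
		return NULL;
	return &pmu->fixed_counters[idx];
}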
Opportunistically add a comment to explain both what is going on and why
KVM uses a single bitmap, e.g. the boilerplate for iterating over separate
bitmaps could be handled via macros, so it's not (just) about deduplicating
code.
Link: https://lore.kernel.org/r/20231110022857.1273836-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Diffstat (limited to 'arch/x86/kvm/pmu.c')
-rw-r--r-- | arch/x86/kvm/pmu.c | 8
1 file changed, 4 insertions, 4 deletions
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index 67d589ac9363..0873937c90bc 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -505,7 +505,7 @@ void kvm_pmu_handle_event(struct kvm_vcpu *vcpu)
 	int bit;
 
 	for_each_set_bit(bit, pmu->reprogram_pmi, X86_PMC_IDX_MAX) {
-		struct kvm_pmc *pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, bit);
+		struct kvm_pmc *pmc = kvm_pmc_idx_to_pmc(pmu, bit);
 
 		if (unlikely(!pmc)) {
 			clear_bit(bit, pmu->reprogram_pmi);
@@ -725,7 +725,7 @@ static void kvm_pmu_reset(struct kvm_vcpu *vcpu)
 	bitmap_zero(pmu->reprogram_pmi, X86_PMC_IDX_MAX);
 
 	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+		pmc = kvm_pmc_idx_to_pmc(pmu, i);
 		if (!pmc)
 			continue;
 
@@ -801,7 +801,7 @@ void kvm_pmu_cleanup(struct kvm_vcpu *vcpu)
 		      pmu->pmc_in_use, X86_PMC_IDX_MAX);
 
 	for_each_set_bit(i, bitmask, X86_PMC_IDX_MAX) {
-		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+		pmc = kvm_pmc_idx_to_pmc(pmu, i);
 
 		if (pmc && pmc->perf_event && !pmc_speculative_in_use(pmc))
 			pmc_stop_counter(pmc);
@@ -856,7 +856,7 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 perf_hw_id)
 	int i;
 
 	for_each_set_bit(i, pmu->all_valid_pmc_idx, X86_PMC_IDX_MAX) {
-		pmc = static_call(kvm_x86_pmu_pmc_idx_to_pmc)(pmu, i);
+		pmc = kvm_pmc_idx_to_pmc(pmu, i);
 		if (!pmc || !pmc_event_is_allowed(pmc))
 			continue;