author		Sean Christopherson <seanjc@google.com>	2023-11-10 05:28:56 +0300
committer	Sean Christopherson <seanjc@google.com>	2024-02-01 20:35:48 +0300
commit		e35529fb4ac930a4a39e0c15bafcb28a30d26611 (patch)
tree		27ef009c0846b25be27ccc2fcd14a9b7272cf475 /arch/x86/kvm/pmu.c
parent		afda2d7666f894d1d7b8406cf54801e6c11f63c2 (diff)
KVM: x86/pmu: Check eventsel first when emulating (branch) insns retired
When triggering events, i.e. emulating PMC events in software, check for
a matching event selector before checking the event is allowed. The "is
allowed" check *might* be cheap, but it could also be very costly, e.g. if
userspace has defined a large PMU event filter. The event selector check,
on the other hand, is all but guaranteed to be <10 uops, e.g. it looks
something like:
0xffffffff8105e615 <+5>: movabs $0xf0000ffff,%rax
0xffffffff8105e61f <+15>: xor %rdi,%rsi
0xffffffff8105e622 <+18>: test %rax,%rsi
0xffffffff8105e625 <+21>: sete %al
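For reference, a minimal C sketch of the selector-match check that the disassembly
above corresponds to. This is illustrative only: eventsel_matches() is a hypothetical
helper, not a KVM function, and the 0xf0000ffff constant mirrors AMD64_RAW_EVENT_MASK_NB
(event select bits 35:32 and 7:0 plus unit mask bits 15:8), matching the movabs immediate:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative stand-in for AMD64_RAW_EVENT_MASK_NB (0xf0000ffff). */
    #define RAW_EVENT_MASK_NB 0xf0000ffffULL

    /* Hypothetical helper: do two event selectors program the same event? */
    static bool eventsel_matches(uint64_t pmc_eventsel, uint64_t eventsel)
    {
    	/* XOR exposes differing bits; the mask keeps event select + unit mask. */
    	return !((pmc_eventsel ^ eventsel) & RAW_EVENT_MASK_NB);
    }

Because the check boils down to an XOR, a TEST and a SETcc, it is far cheaper to run
unconditionally than pmc_event_is_allowed(), which may have to walk a userspace-defined
filter.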
Link: https://lore.kernel.org/r/20231110022857.1273836-10-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Diffstat (limited to 'arch/x86/kvm/pmu.c')
-rw-r--r--	arch/x86/kvm/pmu.c	9
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c
index d891a954b45a..8d81f176ab7b 100644
--- a/arch/x86/kvm/pmu.c
+++ b/arch/x86/kvm/pmu.c
@@ -857,9 +857,6 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		return;
 
 	kvm_for_each_pmc(pmu, pmc, i, bitmap) {
-		if (!pmc_event_is_allowed(pmc))
-			continue;
-
 		/*
 		 * Ignore checks for edge detect (all events currently emulated
 		 * but KVM are always rising edges), pin control (unsupported
@@ -874,11 +871,11 @@ void kvm_pmu_trigger_event(struct kvm_vcpu *vcpu, u64 eventsel)
 		 * might be wrong if they are defined in the future, but so
 		 * could ignoring them, so do the simple thing for now.
 		 */
-		if ((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB)
+		if (((pmc->eventsel ^ eventsel) & AMD64_RAW_EVENT_MASK_NB) ||
+		    !pmc_event_is_allowed(pmc) || !cpl_is_matched(pmc))
 			continue;
 
-		if (cpl_is_matched(pmc))
-			kvm_pmu_incr_counter(pmc);
+		kvm_pmu_incr_counter(pmc);
 	}
 }
 EXPORT_SYMBOL_GPL(kvm_pmu_trigger_event);