| author | Marc Zyngier <maz@kernel.org> | 2020-06-23 10:44:08 +0100 |
|---|---|---|
| committer | Marc Zyngier <maz@kernel.org> | 2020-06-23 11:24:39 +0100 |
| commit | a3f574cd65487cd993f79ab235d70229d9302c1e | |
| tree | 68dcd6b06a1483b31561abdb41630e6a1cdb63c6 /arch/arm64/kvm/vgic | |
| parent | a25e91028ac2f544e0140aff2c9360a0e995dd86 | |
KVM: arm64: vgic-v4: Plug race between non-residency and v4.1 doorbell
When making a vPE non-resident because it has hit a blocking WFI,
the doorbell can fire at any time after the write to the RD.
Crucially, it can fire right between the write to GICR_VPENDBASER
and the write to the pending_last field in the its_vpe structure.
This means that we would overwrite pending_last with stale data,
and potentially not wakeup until some unrelated event (such as
a timer interrupt) puts the vPE back on the CPU.
GICv4 isn't affected by this as we actively mask the doorbell on
entering the guest, while GICv4.1 automatically manages doorbell
delivery without any hypervisor-driven masking.
Use the vpe_lock to synchronize such an update, which solves the
problem altogether.
Fixes: ae699ad348cdc ("irqchip/gic-v4.1: Move doorbell management to the GICv4 abstraction layer")
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
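
To make the race concrete, here is a minimal C sketch of the two paths described above, with no locking. The type and helper names (vpe_sketch, make_vpe_non_resident_sketch(), doorbell_handler_sketch()) are illustrative assumptions rather than the actual KVM/GICv4.1 code; only the interleaving they demonstrate comes from the commit message.

```c
/*
 * Illustrative sketch only: the type and helpers below are simplified
 * stand-ins, not the real KVM/GICv4.1 code. They exist to show the
 * unlocked interleaving that the commit message describes.
 */
#include <linux/irqreturn.h>
#include <linux/types.h>

struct vpe_sketch {
	bool	pending_last;
};

/*
 * Path A: the vCPU blocks on WFI and the vPE is made non-resident.
 * The doorbell may fire at any point after the (elided) write to
 * GICR_VPENDBASER; db_pending was sampled during that sequence and can
 * already be stale by the time it is stored.
 */
static void make_vpe_non_resident_sketch(struct vpe_sketch *vpe, bool db_pending)
{
	/* ... write to GICR_VPENDBASER clearing the Valid bit ... */
	vpe->pending_last = db_pending;	/* can overwrite a newer "true" */
}

/* Path B: the GICv4.1 doorbell interrupt for this vPE. */
static irqreturn_t doorbell_handler_sketch(struct vpe_sketch *vpe)
{
	/*
	 * If this store lands between the GICR_VPENDBASER write and the
	 * pending_last store in Path A, Path A then overwrites it with
	 * stale data and the vCPU may not be woken up until some
	 * unrelated event arrives.
	 */
	vpe->pending_last = true;
	return IRQ_HANDLED;
}
```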
Diffstat (limited to 'arch/arm64/kvm/vgic')
-rw-r--r-- | arch/arm64/kvm/vgic/vgic-v4.c | 8 |
1 file changed, 8 insertions(+), 0 deletions(-)
```diff
diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index 27ac833e5ec7..b5fa73c9fd35 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -90,7 +90,15 @@ static irqreturn_t vgic_v4_doorbell_handler(int irq, void *info)
 	    !irqd_irq_disabled(&irq_to_desc(irq)->irq_data))
 		disable_irq_nosync(irq);
 
+	/*
+	 * The v4.1 doorbell can fire concurrently with the vPE being
+	 * made non-resident. Ensure we only update pending_last
+	 * *after* the non-residency sequence has completed.
+	 */
+	raw_spin_lock(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vpe_lock);
 	vcpu->arch.vgic_cpu.vgic_v3.its_vpe.pending_last = true;
+	raw_spin_unlock(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vpe_lock);
+
 	kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
 	kvm_vcpu_kick(vcpu);
 
```
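
For completeness, a sketch of the locking discipline the fix relies on. It assumes, as the commit message implies, that the non-residency sequence in the GICv4 abstraction layer already runs with vpe_lock held; the names below are illustrative, not the actual driver functions.

```c
/*
 * Sketch of the post-fix locking discipline. Names are illustrative;
 * the real non-residency sequence lives in the GICv4 abstraction layer
 * and is assumed here to hold vpe_lock around its update, per the
 * commit message.
 */
#include <linux/irqreturn.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct vpe_sketch {
	raw_spinlock_t	vpe_lock;
	bool		pending_last;
};

/* Non-residency sequence: GICR_VPENDBASER write and pending_last update. */
static void make_vpe_non_resident_sketch(struct vpe_sketch *vpe, bool db_pending)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&vpe->vpe_lock, flags);
	/* ... write to GICR_VPENDBASER clearing the Valid bit ... */
	vpe->pending_last = db_pending;
	raw_spin_unlock_irqrestore(&vpe->vpe_lock, flags);
}

/* Doorbell handler: now serialized against the sequence above. */
static irqreturn_t doorbell_handler_sketch(struct vpe_sketch *vpe)
{
	/*
	 * By the time this store is performed, any in-flight
	 * non-residency update has fully completed, so "true" can no
	 * longer be overwritten with stale data.
	 */
	raw_spin_lock(&vpe->vpe_lock);
	vpe->pending_last = true;
	raw_spin_unlock(&vpe->vpe_lock);

	return IRQ_HANDLED;
}
```

The handler side can use plain raw_spin_lock() as in the patch, presumably because it runs in hard interrupt context with interrupts already disabled; the _irqsave variant on the sketch's other path is only an assumption about the calling context there.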