author     Linus Torvalds <torvalds@linux-foundation.org>  2026-04-17 07:18:03 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2026-04-17 07:18:03 -0700
commit     01f492e1817e858d1712f2489d0afbaa552f417b
tree       9ba6df223570acd45ccb2ba647407f75f4393eab /arch/loongarch
parent     e55d98e7756135f32150b9b8f75d580d0d4b2dd3
parent     6b802031877a995456c528095c41d1948546bf45
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Paolo Bonzini:
"Arm:
- Add support for tracing in the standalone EL2 hypervisor code,
which should help both debugging and performance analysis. This
uses the new infrastructure for 'remote' trace buffers that can be
exposed by non-kernel entities such as firmware, and which came
through the tracing tree
- Add support for GICv5 Per Processor Interrupts (PPIs), as the
starting point for supporting the new GIC architecture in KVM
- Finally add support for pKVM protected guests, where pages are
unmapped from the host as they are faulted into the guest and can
be shared back from the guest using pKVM hypercalls. Protected
guests are created using a new machine type identifier. As the
elusive guestmem has not yet delivered on its promises, anonymous
memory is also supported
This is only a first step towards full isolation from the host; for
example, the CPU register state and DMA accesses are not yet
isolated. Because this does not yet fully deliver what it promises,
it is hidden behind CONFIG_ARM_PKVM_GUEST +
'kvm-arm.mode=protected', and also triggers TAINT_USER when a VM is
created. Caveat emptor
- Rework the dreaded user_mem_abort() function to make it more
maintainable, reducing the amount of state being exposed to the
various helpers and rendering a substantial amount of state
immutable
- Expand the Stage-2 page table dumper to support NV shadow page
tables on a per-VM basis
- Tidy up the pKVM PSCI proxy code to be slightly less hard to
follow
- Fix both SPE and TRBE in non-VHE configurations so that they do not
generate spurious, out of context table walks that ultimately lead
to very bad HW lockups
- A small set of patches fixing the Stage-2 MMU freeing in error
cases
- Tighten up the accepted SMC immediate value to be only #0 for host
SMCCC calls (see the sketch after this list)
- The usual cleanups and other selftest churn
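[Ed: a rough illustration of the SMC tightening above, referenced from that item. For an SMC trapped from AArch64, ESR_EL2 carries the instruction's 16-bit immediate in its ISS field, so the check boils down to rejecting anything other than #0. The helper name is illustrative, not the actual KVM code.]

```c
#include <linux/bits.h>
#include <linux/types.h>

/* ESR_EL2.ISS[15:0] holds the imm16 of a trapped SMC instruction. */
static inline bool host_smc_imm_is_valid(u64 esr)
{
	/* Only "smc #0" is accepted for host SMCCC calls. */
	return (esr & GENMASK_ULL(15, 0)) == 0;
}
```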
LoongArch:
- Use CSR_CRMD_PLV for kvm_arch_vcpu_in_kernel()
- Add in-kernel DMSINTC irqchip support (a userspace sketch follows)
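[Ed: to make the new in-kernel irqchip concrete, here is a minimal userspace sketch that creates the DMSINTC device and programs its message-address window. It assumes the uapi constants added by this merge (see the diff at the bottom of this page) plus the standard KVM_CREATE_DEVICE/KVM_SET_DEVICE_ATTR ioctls; error handling is trimmed and base/size are placeholders.]

```c
#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

static int create_dmsintc(int vm_fd, uint64_t base, uint64_t size)
{
	struct kvm_create_device cd = {
		.type = KVM_DEV_TYPE_LOONGARCH_DMSINTC,
	};
	struct kvm_device_attr attr = {
		.group = KVM_DEV_LOONGARCH_DMSINTC_GRP_CTRL,
		.attr  = KVM_DEV_LOONGARCH_DMSINTC_MSG_ADDR_BASE,
		.addr  = (uintptr_t)&base,
	};

	/* KVM returns the new device fd in cd.fd. */
	if (ioctl(vm_fd, KVM_CREATE_DEVICE, &cd))
		return -1;

	/* Each attribute may be written exactly once (see dmsintc.c below). */
	if (ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr))
		return -1;

	attr.attr = KVM_DEV_LOONGARCH_DMSINTC_MSG_ADDR_SIZE;
	attr.addr = (uintptr_t)&size;
	return ioctl(cd.fd, KVM_SET_DEVICE_ATTR, &attr);
}
```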
RISC-V:
- Fix steal time shared memory alignment checks
- Fix vector context allocation leak
- Fix array out-of-bounds in pmu_ctr_read() and pmu_fw_ctr_read_hi()
- Fix double-free of sdata in kvm_pmu_clear_snapshot_area()
- Fix integer overflow in kvm_pmu_validate_counter_mask()
- Fix shift-out-of-bounds in make_xfence_request()
- Fix lost write protection on huge pages during dirty logging
- Split huge pages during fault handling for dirty logging
- Skip CSR restore if VCPU is reloaded on the same core
- Implement kvm_arch_has_default_irqchip() for KVM selftests
- Factor out ISA checks into separate sources
- Add hideleg to struct kvm_vcpu_config
- Factor out VCPU config into separate sources
- Support configuration of per-VM HGATP mode from KVM user space
s390:
- Support for ESA (31-bit) guests inside nested hypervisors
- Remove restriction on memslot alignment, which is not needed
anymore with the new gmap code
- Fix LPSW/E to update the bear (which of course is the breaking
event address register)
x86:
- Shut up various UBSAN warnings about reading module parameters
before they are initialized
- Don't zero-allocate page tables that are used for splitting
hugepages in the TDP MMU, as KVM is guaranteed to set all SPTEs in
the page table and thus write all bytes
- As an optimization, bail early when trying to unsync 4KiB mappings
if the target gfn can just be mapped with a 2MiB hugepage
x86 generic:
- Copy single-chunk MMIO write values into struct kvm_vcpu (more
precisely struct kvm_mmio_fragment) to fix stack use-after-free
bugs where KVM would dereference a stack pointer after an exit to
userspace (see the sketch after this list)
- Clean up and comment the emulated MMIO code to try to make it
easier to maintain (not necessarily "easy", but "easier")
- Move VMXON+VMXOFF and EFER.SVME toggling out of KVM (not *all* of
VMX and SVM enabling) as it is needed for trusted I/O
- Advertise support for AVX512 Bit Matrix Multiply (BMM) instructions
- Immediately fail the build if a required #define is missing in one
of KVM's headers that is included multiple times
- Reject SET_GUEST_DEBUG with -EBUSY if there's an already injected
exception, mostly to prevent syzkaller from abusing the uAPI to
trigger WARNs, but also because it can help prevent userspace from
unintentionally crashing the VM
- Exempt SMM from CPUID faulting on Intel, as per the spec
- Misc hardening and cleanup changes
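[Ed: the shape of the MMIO use-after-free fix called out at the top of this list: instead of keeping a pointer to bytes that live on the emulator's stack, the written value is copied into the fragment itself, which is embedded in struct kvm_vcpu and survives the exit to userspace. This is a hedged sketch; the actual field names and sizes in struct kvm_mmio_fragment may differ.]

```c
#include <string.h>

typedef unsigned long long gpa_t;	/* stand-in for the kernel typedef */

struct mmio_fragment_sketch {
	gpa_t gpa;
	unsigned int len;
	unsigned char data[8];	/* single-chunk write value, stored by copy */
};

static void queue_mmio_write(struct mmio_fragment_sketch *frag,
			     gpa_t gpa, const void *val, unsigned int len)
{
	frag->gpa = gpa;
	frag->len = len;
	memcpy(frag->data, val, len);	/* copy now; the caller's buffer may die */
}
```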
x86 (AMD):
- Fix and optimize IRQ window inhibit handling for AVIC; make it
per-vCPU so that KVM doesn't prematurely re-enable AVIC if multiple
vCPUs have to-be-injected IRQs
- Clean up and optimize the OSVW handling, avoiding a bug in which
KVM would overwrite state when enabling virtualization on multiple
CPUs in parallel. This should not be a problem because OSVW should
usually be the same for all CPUs
- Drop a WARN in KVM_MEMORY_ENCRYPT_REG_REGION where KVM complains
about a "too large" size based purely on user input
- Clean up and harden the pinning code for KVM_MEMORY_ENCRYPT_REG_REGION
- Disallow synchronizing a VMSA of an already-launched/encrypted
vCPU, as doing so for an SNP guest will crash the host due to an
RMP violation page fault
- Overhaul KVM's APIs for detecting SEV+ guests so that VM-scoped
queries are required to hold kvm->lock, and enforce it by lockdep.
Fix various bugs where sev_guest() was not ensured to be stable for
the whole duration of a function or ioctl
- Convert a pile of kvm->lock SEV code to guard() (see the sketch
after this list)
- Play nicer with userspace that does not enable
KVM_CAP_EXCEPTION_PAYLOAD, for which KVM needs to set CR2 and DR6
as a response to ioctls such as KVM_GET_VCPU_EVENTS (even if the
payload would end up in EXITINFO2 rather than CR2, for example).
Only set CR2 and DR6 when consumption of the payload is imminent,
but on the other hand force delivery of the payload in all paths
where userspace retrieves CR2 or DR6
- Use vcpu->arch.cr2 when updating vmcb12's CR2 on nested #VMEXIT
instead of vmcb02->save.cr2. The value is out of sync after a
save/restore or after a #PF is injected into L2
- Fix a class of nSVM bugs where some fields written by the CPU are
not synchronized from vmcb02 to cached vmcb12 after VMRUN, and so
are not up-to-date when saved by KVM_GET_NESTED_STATE
- Fix a class of bugs where the ordering between KVM_SET_NESTED_STATE
and KVM_SET_{S}REGS could cause vmcb02 to be incorrectly
initialized after save+restore
- Add a variety of missing nSVM consistency checks
- Fix several bugs where KVM failed to correctly update VMCB fields
on nested #VMEXIT
- Fix several bugs where KVM failed to correctly synthesize #UD or
#GP for SVM-related instructions
- Add support for save+restore of virtualized LBRs (on SVM)
- Refactor various helpers and macros to improve clarity and
(hopefully) make the code easier to maintain
- Aggressively sanitize fields when copying from vmcb12, to guard
against unintentionally allowing L1 to utilize yet-to-be-defined
features
- Fix several bugs where KVM botched rAX legality checks when
emulating SVM instructions. There are remaining issues in that KVM
doesn't handle size prefix overrides for 64-bit guests
- Fail emulation of VMRUN/VMLOAD/VMSAVE if mapping vmcb12 fails
instead of somewhat arbitrarily synthesizing #GP (i.e. don't double
down on AMD's architectural but sketchy behavior of generating #GP
for "unsupported" addresses)
- Cache all used vmcb12 fields to further harden against TOCTOU bugs
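[Ed: the guard() conversion mentioned in this list relies on the kernel's scope-based cleanup helpers from <linux/cleanup.h>: the mutex is released automatically on every return path. Below is a self-contained sketch of the pattern with illustrative names; the converted KVM code applies it to kvm->lock around sev_guest() and friends.]

```c
#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);
static bool demo_ready;

static int demo_op(void)
{
	guard(mutex)(&demo_lock);	/* unlocked automatically at any return */

	if (!demo_ready)		/* stable for the whole scope: lock held */
		return -EAGAIN;

	return 0;			/* no explicit mutex_unlock() needed */
}
```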
x86 (Intel):
- Drop obsolete branch hint prefixes from the VMX instruction macros
- Use ASM_INPUT_RM() in __vmcs_writel() to coerce clang into using a
register input when appropriate (see the sketch below)
- Code cleanups
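[Ed: on the ASM_INPUT_RM() item above: the kernel's compiler headers define ASM_INPUT_RM as the "rm" constraint under GCC but plain "r" under clang, which otherwise tends to pick the memory alternative and spill the operand. A rough sketch of the pattern, not the actual __vmcs_writel() body, which also handles VMX failure reporting.]

```c
#include <linux/compiler_types.h>	/* ASM_INPUT_RM: "rm" on GCC, "r" on clang */

static inline void vmcs_write_sketch(unsigned long field, unsigned long value)
{
	/* VMWRITE takes the value operand from a register or memory. */
	asm volatile("vmwrite %1, %0"
		     : : "r"(field), ASM_INPUT_RM(value)
		     : "cc", "memory");
}
```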
guest_memfd:
- Don't mark guest_memfd folios as accessed, as guest_memfd doesn't
support reclaim, the memory is unevictable, and there is no storage
to write back to
LoongArch selftests:
- Add KVM PMU test cases
s390 selftests:
- Enable more memory selftests
x86 selftests:
- Add support for Hygon CPUs in KVM selftests
- Fix a bug in the MSR test where it would get false failures on
AMD/Hygon CPUs with exactly one of RDPID or RDTSCP
- Add an MADV_COLLAPSE testcase for guest_memfd as a regression test
for a bug where the kernel would attempt to collapse guest_memfd
folios against KVM's will"
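As a footnote on that last selftest item, the core assertion is tiny: MADV_COLLAPSE on a guest_memfd-backed mapping must fail rather than collapse folios behind KVM's back. A hedged sketch, assuming an already-mapped guest_memfd region at addr; the exact expected errno is an assumption here:

```c
#include <errno.h>
#include <stdlib.h>
#include <sys/mman.h>

/* 'addr'/'size' describe a guest_memfd mapping established elsewhere. */
static void assert_no_collapse(void *addr, size_t size)
{
	int ret = madvise(addr, size, MADV_COLLAPSE);

	/* Must be refused; EINVAL is this sketch's guess at the errno. */
	if (ret == 0 || errno != EINVAL)
		abort();
}
```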
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (373 commits)
KVM: x86: use inlines instead of macros for is_sev_*guest
x86/virt: Treat SVM as unsupported when running as an SEV+ guest
KVM: SEV: Goto an existing error label if charging misc_cg for an ASID fails
KVM: SVM: Move lock-protected allocation of SEV ASID into a separate helper
KVM: SEV: use mutex guard in snp_handle_guest_req()
KVM: SEV: use mutex guard in sev_mem_enc_unregister_region()
KVM: SEV: use mutex guard in sev_mem_enc_ioctl()
KVM: SEV: use mutex guard in snp_launch_update()
KVM: SEV: Assert that kvm->lock is held when querying SEV+ support
KVM: SEV: Document that checking for SEV+ guests when reclaiming memory is "safe"
KVM: SEV: Hide "struct kvm_sev_info" behind CONFIG_KVM_AMD_SEV=y
KVM: SEV: WARN on unhandled VM type when initializing VM
KVM: LoongArch: selftests: Add PMU overflow interrupt test
KVM: LoongArch: selftests: Add basic PMU event counting test
KVM: LoongArch: selftests: Add cpucfg read/write helpers
LoongArch: KVM: Add DMSINTC inject msi to vCPU
LoongArch: KVM: Add DMSINTC device support
LoongArch: KVM: Make vcpu_is_preempted() as a macro rather than function
LoongArch: KVM: Move host CSR_GSTAT save and restore in context switch
LoongArch: KVM: Move host CSR_EENTRY save and restore in context switch
...
Diffstat (limited to 'arch/loongarch')
-rw-r--r--  arch/loongarch/include/asm/kvm_dmsintc.h |  27
-rw-r--r--  arch/loongarch/include/asm/kvm_host.h    |   3
-rw-r--r--  arch/loongarch/include/asm/kvm_pch_pic.h |   3
-rw-r--r--  arch/loongarch/include/asm/qspinlock.h   |  26
-rw-r--r--  arch/loongarch/include/uapi/asm/kvm.h    |   4
-rw-r--r--  arch/loongarch/kernel/paravirt.c         |  16
-rw-r--r--  arch/loongarch/kvm/Makefile              |   1
-rw-r--r--  arch/loongarch/kvm/intc/dmsintc.c        | 182
-rw-r--r--  arch/loongarch/kvm/intc/pch_pic.c        |  15
-rw-r--r--  arch/loongarch/kvm/interrupt.c           |   2
-rw-r--r--  arch/loongarch/kvm/irqfd.c               |  10
-rw-r--r--  arch/loongarch/kvm/main.c                |  14
-rw-r--r--  arch/loongarch/kvm/vcpu.c                |  29
13 files changed, 289 insertions, 43 deletions
diff --git a/arch/loongarch/include/asm/kvm_dmsintc.h b/arch/loongarch/include/asm/kvm_dmsintc.h
new file mode 100644
index 000000000000..5a71b9ccbe78
--- /dev/null
+++ b/arch/loongarch/include/asm/kvm_dmsintc.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2025 Loongson Technology Corporation Limited
+ */
+
+#ifndef __ASM_KVM_DMSINTC_H
+#define __ASM_KVM_DMSINTC_H
+
+#include <linux/kvm_types.h>
+
+struct loongarch_dmsintc {
+	struct kvm *kvm;
+	uint64_t msg_addr_base;
+	uint64_t msg_addr_size;
+	uint32_t cpu_mask;
+};
+
+struct dmsintc_state {
+	atomic64_t vector_map[4];
+};
+
+int kvm_loongarch_register_dmsintc_device(void);
+void dmsintc_inject_irq(struct kvm_vcpu *vcpu);
+int dmsintc_set_irq(struct kvm *kvm, u64 addr, int data, int level);
+int dmsintc_deliver_msi_to_vcpu(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 vector, int level);
+
+#endif
diff --git a/arch/loongarch/include/asm/kvm_host.h b/arch/loongarch/include/asm/kvm_host.h
index 19eb5e5c3984..130cedbb6b39 100644
--- a/arch/loongarch/include/asm/kvm_host.h
+++ b/arch/loongarch/include/asm/kvm_host.h
@@ -20,6 +20,7 @@
 #include <asm/inst.h>
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_ipi.h>
+#include <asm/kvm_dmsintc.h>
 #include <asm/kvm_eiointc.h>
 #include <asm/kvm_pch_pic.h>
 #include <asm/loongarch.h>
@@ -133,6 +134,7 @@ struct kvm_arch {
 	s64 time_offset;
 	struct kvm_context __percpu *vmcs;
 	struct loongarch_ipi *ipi;
+	struct loongarch_dmsintc *dmsintc;
 	struct loongarch_eiointc *eiointc;
 	struct loongarch_pch_pic *pch_pic;
 };
@@ -247,6 +249,7 @@ struct kvm_vcpu_arch {
 	struct kvm_mp_state mp_state;
 	/* ipi state */
 	struct ipi_state ipi_state;
+	struct dmsintc_state dmsintc_state;
 	/* cpucfg */
 	u32 cpucfg[KVM_MAX_CPUCFG_REGS];
diff --git a/arch/loongarch/include/asm/kvm_pch_pic.h b/arch/loongarch/include/asm/kvm_pch_pic.h
index 7f33a3039272..e74b3b742634 100644
--- a/arch/loongarch/include/asm/kvm_pch_pic.h
+++ b/arch/loongarch/include/asm/kvm_pch_pic.h
@@ -68,8 +68,9 @@ struct loongarch_pch_pic {
 	uint64_t pch_pic_base;
 };
 
+struct kvm_kernel_irq_routing_entry;
 int kvm_loongarch_register_pch_pic_device(void);
 void pch_pic_set_irq(struct loongarch_pch_pic *s, int irq, int level);
-void pch_msi_set_irq(struct kvm *kvm, int irq, int level);
+int pch_msi_set_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e, int level);
 
 #endif /* __ASM_KVM_PCH_PIC_H */
diff --git a/arch/loongarch/include/asm/qspinlock.h b/arch/loongarch/include/asm/qspinlock.h
index 66244801db67..0ee15b3b3937 100644
--- a/arch/loongarch/include/asm/qspinlock.h
+++ b/arch/loongarch/include/asm/qspinlock.h
@@ -2,11 +2,13 @@
 #ifndef _ASM_LOONGARCH_QSPINLOCK_H
 #define _ASM_LOONGARCH_QSPINLOCK_H
 
+#include <asm/kvm_para.h>
 #include <linux/jump_label.h>
 
 #ifdef CONFIG_PARAVIRT
-
+DECLARE_STATIC_KEY_FALSE(virt_preempt_key);
 DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
+DECLARE_PER_CPU(struct kvm_steal_time, steal_time);
 
 #define virt_spin_lock virt_spin_lock
 
@@ -34,9 +36,25 @@ __retry:
 	return true;
 }
 
-#define vcpu_is_preempted vcpu_is_preempted
-
-bool vcpu_is_preempted(int cpu);
+/*
+ * A macro is better than an inline function here: with a macro, the
+ * cpu parameter is evaluated only when it is used, while with an
+ * inline function it is evaluated even when it is not used, which
+ * may cause cache line thrashing across NUMA nodes.
+ */
+#define vcpu_is_preempted(cpu)						\
+({									\
+	bool __val;							\
+									\
+	if (!static_branch_unlikely(&virt_preempt_key))			\
+		__val = false;						\
+	else {								\
+		struct kvm_steal_time *src;				\
+		src = &per_cpu(steal_time, cpu);			\
+		__val = !!(READ_ONCE(src->preempted) & KVM_VCPU_PREEMPTED); \
+	}								\
+	__val;								\
+})
 
 #endif /* CONFIG_PARAVIRT */
diff --git a/arch/loongarch/include/uapi/asm/kvm.h b/arch/loongarch/include/uapi/asm/kvm.h
index 419647aacdf3..cd0b5c11ca9c 100644
--- a/arch/loongarch/include/uapi/asm/kvm.h
+++ b/arch/loongarch/include/uapi/asm/kvm.h
@@ -155,4 +155,8 @@ struct kvm_iocsr_entry {
 #define KVM_DEV_LOONGARCH_PCH_PIC_GRP_CTRL	0x40000006
 #define KVM_DEV_LOONGARCH_PCH_PIC_CTRL_INIT	0
 
+#define KVM_DEV_LOONGARCH_DMSINTC_GRP_CTRL	0x40000007
+#define KVM_DEV_LOONGARCH_DMSINTC_MSG_ADDR_BASE	0x0
+#define KVM_DEV_LOONGARCH_DMSINTC_MSG_ADDR_SIZE	0x1
+
 #endif /* __UAPI_ASM_LOONGARCH_KVM_H */
diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c
index b74fe6db49ab..10821cce554c 100644
--- a/arch/loongarch/kernel/paravirt.c
+++ b/arch/loongarch/kernel/paravirt.c
@@ -10,9 +10,9 @@
 #include <asm/paravirt.h>
 
 static int has_steal_clock;
-static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
-static DEFINE_STATIC_KEY_FALSE(virt_preempt_key);
+DEFINE_STATIC_KEY_FALSE(virt_preempt_key);
 DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
+DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
 
 static bool steal_acc = true;
 
@@ -260,18 +260,6 @@ static int pv_time_cpu_down_prepare(unsigned int cpu)
 
 	return 0;
 }
-
-bool vcpu_is_preempted(int cpu)
-{
-	struct kvm_steal_time *src;
-
-	if (!static_branch_unlikely(&virt_preempt_key))
-		return false;
-
-	src = &per_cpu(steal_time, cpu);
-	return !!(src->preempted & KVM_VCPU_PREEMPTED);
-}
-EXPORT_SYMBOL(vcpu_is_preempted);
 #endif
 
 static void pv_cpu_reboot(void *unused)
diff --git a/arch/loongarch/kvm/Makefile b/arch/loongarch/kvm/Makefile
index cb41d9265662..ae469edec99c 100644
--- a/arch/loongarch/kvm/Makefile
+++ b/arch/loongarch/kvm/Makefile
@@ -17,6 +17,7 @@ kvm-y += tlb.o
 kvm-y += vcpu.o
 kvm-y += vm.o
 kvm-y += intc/ipi.o
+kvm-y += intc/dmsintc.o
 kvm-y += intc/eiointc.o
 kvm-y += intc/pch_pic.o
 kvm-y += irqfd.o
diff --git a/arch/loongarch/kvm/intc/dmsintc.c b/arch/loongarch/kvm/intc/dmsintc.c
new file mode 100644
index 000000000000..de25735ce039
--- /dev/null
+++ b/arch/loongarch/kvm/intc/dmsintc.c
@@ -0,0 +1,182 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2025 Loongson Technology Corporation Limited
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/kvm_csr.h>
+#include <asm/kvm_dmsintc.h>
+#include <asm/kvm_vcpu.h>
+
+void dmsintc_inject_irq(struct kvm_vcpu *vcpu)
+{
+	unsigned int i;
+	unsigned long vector[4], old;
+	struct dmsintc_state *ds = &vcpu->arch.dmsintc_state;
+
+	if (!ds)
+		return;
+
+	for (i = 0; i < 4; i++) {
+		old = atomic64_read(&(ds->vector_map[i]));
+		if (old)
+			vector[i] = atomic64_xchg(&(ds->vector_map[i]), 0);
+	}
+
+	if (vector[0]) {
+		old = kvm_read_hw_gcsr(LOONGARCH_CSR_ISR0);
+		kvm_write_hw_gcsr(LOONGARCH_CSR_ISR0, vector[0] | old);
+	}
+
+	if (vector[1]) {
+		old = kvm_read_hw_gcsr(LOONGARCH_CSR_ISR1);
+		kvm_write_hw_gcsr(LOONGARCH_CSR_ISR1, vector[1] | old);
+	}
+
+	if (vector[2]) {
+		old = kvm_read_hw_gcsr(LOONGARCH_CSR_ISR2);
+		kvm_write_hw_gcsr(LOONGARCH_CSR_ISR2, vector[2] | old);
+	}
+
+	if (vector[3]) {
+		old = kvm_read_hw_gcsr(LOONGARCH_CSR_ISR3);
+		kvm_write_hw_gcsr(LOONGARCH_CSR_ISR3, vector[3] | old);
+	}
+}
+
+int dmsintc_deliver_msi_to_vcpu(struct kvm *kvm,
+				struct kvm_vcpu *vcpu, u32 vector, int level)
+{
+	struct kvm_interrupt vcpu_irq;
+	struct dmsintc_state *ds = &vcpu->arch.dmsintc_state;
+
+	if (!level)
+		return 0;
+	if (!vcpu || vector >= 256)
+		return -EINVAL;
+	if (!ds)
+		return -ENODEV;
+
+	vcpu_irq.irq = INT_AVEC;
+	set_bit(vector, (unsigned long *)&ds->vector_map);
+	kvm_vcpu_ioctl_interrupt(vcpu, &vcpu_irq);
+	kvm_vcpu_kick(vcpu);
+
+	return 0;
+}
+
+int dmsintc_set_irq(struct kvm *kvm, u64 addr, int data, int level)
+{
+	unsigned int irq, cpu;
+	struct kvm_vcpu *vcpu;
+
+	irq = (addr >> AVEC_IRQ_SHIFT) & AVEC_IRQ_MASK;
+	cpu = (addr >> AVEC_CPU_SHIFT) & kvm->arch.dmsintc->cpu_mask;
+	if (cpu >= KVM_MAX_VCPUS)
+		return -EINVAL;
+	vcpu = kvm_get_vcpu_by_cpuid(kvm, cpu);
+	if (!vcpu)
+		return -EINVAL;
+
+	return dmsintc_deliver_msi_to_vcpu(kvm, vcpu, irq, level);
+}
+
+static int kvm_dmsintc_ctrl_access(struct kvm_device *dev,
+				   struct kvm_device_attr *attr, bool is_write)
+{
+	int addr = attr->attr;
+	unsigned long cpu_bit, val;
+	void __user *data = (void __user *)attr->addr;
+	struct loongarch_dmsintc *s = dev->kvm->arch.dmsintc;
+
+	switch (addr) {
+	case KVM_DEV_LOONGARCH_DMSINTC_MSG_ADDR_BASE:
+		if (is_write) {
+			if (copy_from_user(&val, data, sizeof(s->msg_addr_base)))
+				return -EFAULT;
+			if (s->msg_addr_base)
+				return -EFAULT; /* Duplicate settings are not allowed. */
+			if ((val & (BIT(AVEC_CPU_SHIFT) - 1)) != 0)
+				return -EINVAL;
+			s->msg_addr_base = val;
+			cpu_bit = find_first_bit((unsigned long *)&(s->msg_addr_base), 64) - AVEC_CPU_SHIFT;
+			cpu_bit = min(cpu_bit, AVEC_CPU_BIT);
+			s->cpu_mask = GENMASK(cpu_bit - 1, 0) & AVEC_CPU_MASK;
+		}
+		break;
+	case KVM_DEV_LOONGARCH_DMSINTC_MSG_ADDR_SIZE:
+		if (is_write) {
+			if (copy_from_user(&val, data, sizeof(s->msg_addr_size)))
+				return -EFAULT;
+			if (s->msg_addr_size)
+				return -EFAULT; /* Duplicate settings are not allowed. */
+			s->msg_addr_size = val;
+		}
+		break;
+	default:
+		kvm_err("%s: unknown dmsintc register, addr = %d\n", __func__, addr);
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+static int kvm_dmsintc_set_attr(struct kvm_device *dev,
+				struct kvm_device_attr *attr)
+{
+	switch (attr->group) {
+	case KVM_DEV_LOONGARCH_DMSINTC_GRP_CTRL:
+		return kvm_dmsintc_ctrl_access(dev, attr, true);
+	default:
+		kvm_err("%s: unknown group (%d)\n", __func__, attr->group);
+		return -EINVAL;
+	}
+}
+
+static int kvm_dmsintc_create(struct kvm_device *dev, u32 type)
+{
+	struct kvm *kvm;
+	struct loongarch_dmsintc *s;
+
+	if (!dev) {
+		kvm_err("%s: kvm_device ptr is invalid!\n", __func__);
+		return -EINVAL;
+	}
+
+	kvm = dev->kvm;
+	if (kvm->arch.dmsintc) {
+		kvm_err("%s: LoongArch DMSINTC has already been created!\n", __func__);
+		return -EINVAL;
+	}
+
+	s = kzalloc(sizeof(struct loongarch_dmsintc), GFP_KERNEL);
+	if (!s)
+		return -ENOMEM;
+
+	s->kvm = kvm;
+	kvm->arch.dmsintc = s;
+
+	return 0;
+}
+
+static void kvm_dmsintc_destroy(struct kvm_device *dev)
+{
+	if (!dev || !dev->kvm || !dev->kvm->arch.dmsintc)
+		return;
+
+	kfree(dev->kvm->arch.dmsintc);
+	kfree(dev);
+}
+
+static struct kvm_device_ops kvm_dmsintc_dev_ops = {
+	.name = "kvm-loongarch-dmsintc",
+	.create = kvm_dmsintc_create,
+	.destroy = kvm_dmsintc_destroy,
+	.set_attr = kvm_dmsintc_set_attr,
+};
+
+int kvm_loongarch_register_dmsintc_device(void)
+{
+	return kvm_register_device_ops(&kvm_dmsintc_dev_ops, KVM_DEV_TYPE_LOONGARCH_DMSINTC);
+}
diff --git a/arch/loongarch/kvm/intc/pch_pic.c b/arch/loongarch/kvm/intc/pch_pic.c
index dd7e7f8d53db..aa0ed59ae8cf 100644
--- a/arch/loongarch/kvm/intc/pch_pic.c
+++ b/arch/loongarch/kvm/intc/pch_pic.c
@@ -3,6 +3,7 @@
  * Copyright (C) 2024 Loongson Technology Corporation Limited
  */
 
+#include <asm/kvm_dmsintc.h>
 #include <asm/kvm_eiointc.h>
 #include <asm/kvm_pch_pic.h>
 #include <asm/kvm_vcpu.h>
@@ -67,9 +68,19 @@ void pch_pic_set_irq(struct loongarch_pch_pic *s, int irq, int level)
 }
 
 /* msi irq handler */
-void pch_msi_set_irq(struct kvm *kvm, int irq, int level)
+int pch_msi_set_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e, int level)
 {
-	eiointc_set_irq(kvm->arch.eiointc, irq, level);
+	u64 msg_addr = (((u64)e->msi.address_hi) << 32) | e->msi.address_lo;
+
+	if (cpu_has_msgint && kvm->arch.dmsintc &&
+	    msg_addr >= kvm->arch.dmsintc->msg_addr_base &&
+	    msg_addr < (kvm->arch.dmsintc->msg_addr_base + kvm->arch.dmsintc->msg_addr_size)) {
+		return dmsintc_set_irq(kvm, msg_addr, e->msi.data, level);
+	}
+
+	eiointc_set_irq(kvm->arch.eiointc, e->msi.data, level);
+
+	return 0;
 }
 
 static int loongarch_pch_pic_read(struct loongarch_pch_pic *s, gpa_t addr, int len, void *val)
diff --git a/arch/loongarch/kvm/interrupt.c b/arch/loongarch/kvm/interrupt.c
index fb704f4c8ac5..32930959f7c2 100644
--- a/arch/loongarch/kvm/interrupt.c
+++ b/arch/loongarch/kvm/interrupt.c
@@ -7,6 +7,7 @@
 #include <linux/errno.h>
 #include <asm/kvm_csr.h>
 #include <asm/kvm_vcpu.h>
+#include <asm/kvm_dmsintc.h>
 
 static unsigned int priority_to_irq[EXCCODE_INT_NUM] = {
 	[INT_TI] = CPU_TIMER,
@@ -33,6 +34,7 @@ static int kvm_irq_deliver(struct kvm_vcpu *vcpu, unsigned int priority)
 	irq = priority_to_irq[priority];
 
 	if (kvm_guest_has_msgint(&vcpu->arch) && (priority == INT_AVEC)) {
+		dmsintc_inject_irq(vcpu);
 		set_gcsr_estat(irq);
 		return 1;
 	}
diff --git a/arch/loongarch/kvm/irqfd.c b/arch/loongarch/kvm/irqfd.c
index 9a39627aecf0..f4f953b22419 100644
--- a/arch/loongarch/kvm/irqfd.c
+++ b/arch/loongarch/kvm/irqfd.c
@@ -29,9 +29,7 @@ int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e,
 	if (!level)
 		return -1;
 
-	pch_msi_set_irq(kvm, e->msi.data, level);
-
-	return 0;
+	return pch_msi_set_irq(kvm, e, level);
 }
 
 /*
@@ -71,13 +69,15 @@ int kvm_set_routing_entry(struct kvm *kvm,
 int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
 			      struct kvm *kvm, int irq_source_id, int level, bool line_status)
 {
+	if (!level)
+		return -EWOULDBLOCK;
+
 	switch (e->type) {
 	case KVM_IRQ_ROUTING_IRQCHIP:
 		pch_pic_set_irq(kvm->arch.pch_pic, e->irqchip.pin, level);
 		return 0;
 	case KVM_IRQ_ROUTING_MSI:
-		pch_msi_set_irq(kvm, e->msi.data, level);
-		return 0;
+		return pch_msi_set_irq(kvm, e, level);
 	default:
 		return -EWOULDBLOCK;
 	}
diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
index 2c593ac7892f..76ebff2faedd 100644
--- a/arch/loongarch/kvm/main.c
+++ b/arch/loongarch/kvm/main.c
@@ -271,11 +271,11 @@ void kvm_check_vpid(struct kvm_vcpu *vcpu)
 		 * memory with new address is changed on other VCPUs.
 		 */
 		set_gcsr_llbctl(CSR_LLBCTL_WCLLB);
-	}
 
-	/* Restore GSTAT(0x50).vpid */
-	vpid = (vcpu->arch.vpid & vpid_mask) << CSR_GSTAT_GID_SHIFT;
-	change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
+		/* Restore GSTAT(0x50).vpid */
+		vpid = (vcpu->arch.vpid & vpid_mask) << CSR_GSTAT_GID_SHIFT;
+		change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
+	}
 }
 
 void kvm_init_vmcs(struct kvm *kvm)
@@ -416,6 +416,12 @@ static int kvm_loongarch_env_init(void)
 
 	/* Register LoongArch PCH-PIC interrupt controller interface. */
 	ret = kvm_loongarch_register_pch_pic_device();
+	if (ret)
+		return ret;
+
+	/* Register LoongArch DMSINTC interrupt controller interface. */
+	if (cpu_has_msgint)
+		ret = kvm_loongarch_register_dmsintc_device();
 
 	return ret;
 }
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 831f381a8fd1..e28084c49e68 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -149,14 +149,6 @@ static void kvm_lose_pmu(struct kvm_vcpu *vcpu)
 	kvm_restore_host_pmu(vcpu);
 }
 
-static void kvm_check_pmu(struct kvm_vcpu *vcpu)
-{
-	if (kvm_check_request(KVM_REQ_PMU, vcpu)) {
-		kvm_own_pmu(vcpu);
-		vcpu->arch.aux_inuse |= KVM_LARCH_PMU;
-	}
-}
-
 static void kvm_update_stolen_time(struct kvm_vcpu *vcpu)
 {
 	u32 version;
@@ -232,6 +224,15 @@ static int kvm_check_requests(struct kvm_vcpu *vcpu)
 static void kvm_late_check_requests(struct kvm_vcpu *vcpu)
 {
 	lockdep_assert_irqs_disabled();
+
+	if (!kvm_request_pending(vcpu))
+		return;
+
+	if (kvm_check_request(KVM_REQ_PMU, vcpu)) {
+		kvm_own_pmu(vcpu);
+		vcpu->arch.aux_inuse |= KVM_LARCH_PMU;
+	}
+
 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GPA, vcpu))
 		if (vcpu->arch.flush_gpa != INVALID_GPA) {
 			kvm_flush_tlb_gpa(vcpu, vcpu->arch.flush_gpa);
@@ -312,7 +313,6 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
 		/* Make sure the vcpu mode has been written */
 		smp_store_mb(vcpu->mode, IN_GUEST_MODE);
 		kvm_check_vpid(vcpu);
-		kvm_check_pmu(vcpu);
 
 		/*
 		 * Called after function kvm_check_vpid()
@@ -320,7 +320,6 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
 		 * and it may also clear KVM_REQ_TLB_FLUSH_GPA pending bit
 		 */
 		kvm_late_check_requests(vcpu);
-		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
 
 		/* Clear KVM_LARCH_SWCSR_LATEST as CSR will change when enter guest */
 		vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST;
@@ -402,7 +401,7 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
 	val = gcsr_read(LOONGARCH_CSR_CRMD);
 	preempt_enable();
 
-	return (val & CSR_PRMD_PPLV) == PLV_KERN;
+	return (val & CSR_CRMD_PLV) == PLV_KERN;
 }
 
 #ifdef CONFIG_GUEST_PERF_EVENTS
@@ -1628,9 +1627,11 @@ static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 * If not, any old guest state from this vCPU will have been clobbered.
 	 */
 	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
-	if (migrated || (context->last_vcpu != vcpu))
+	if (migrated || (context->last_vcpu != vcpu)) {
+		context->last_vcpu = vcpu;
 		vcpu->arch.aux_inuse &= ~KVM_LARCH_HWCSR_USABLE;
-	context->last_vcpu = vcpu;
+		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+	}
 
 	/* Restore timer state regardless */
 	kvm_restore_timer(vcpu);
@@ -1698,6 +1699,7 @@ static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	/* Restore Root.GINTC from unused Guest.GINTC register */
 	write_csr_gintc(csr->csrs[LOONGARCH_CSR_GINTC]);
+	write_csr_gstat(csr->csrs[LOONGARCH_CSR_GSTAT]);
 
 	/*
 	 * We should clear linked load bit to break interrupted atomics. This
@@ -1793,6 +1795,7 @@ static int _kvm_vcpu_put(struct kvm_vcpu *vcpu, int cpu)
 		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ISR3);
 	}
 
+	csr->csrs[LOONGARCH_CSR_GSTAT] = read_csr_gstat();
 	vcpu->arch.aux_inuse |= KVM_LARCH_SWCSR_LATEST;
 
 out:
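A standalone illustration of the evaluation-order argument made in the qspinlock.h comment above: a macro leaves its cpu argument untouched on the fast path, whereas a function call would evaluate the argument expression unconditionally. All names below are illustrative:

```c
#include <stdio.h>

static int lookups;

/* Stand-in for an expensive remote per-CPU access. */
static int task_cpu_stub(void)
{
	lookups++;
	return 3;
}

/* Macro form: the 'cpu' expression is evaluated only on the slow path. */
#define preempted_sketch(key_on, cpu) \
	((key_on) ? ((cpu) & 1) : 0)

int main(void)
{
	/* Static key off: task_cpu_stub() is never called. */
	(void)preempted_sketch(0, task_cpu_stub());
	printf("lookups after macro: %d\n", lookups);	/* prints 0 */

	/* An inline function would have evaluated task_cpu_stub() here,
	 * touching the remote data even on the fast path. */
	return 0;
}
```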
