author | Vitaly Kuznetsov <vkuznets@redhat.com> | 2020-07-10 16:11:55 +0200 |
---|---|---|
committer | Paolo Bonzini <pbonzini@redhat.com> | 2020-07-10 12:57:37 -0400 |
commit | a506fdd22342606d22645a6bf90a2d848e92e5d7 (patch) | |
tree | 748c6ca60c6c142cbeef397e57948929a93fe376 /arch/x86/kvm/mmu/mmu.c | |
parent | bf7dea425327c5da12f540a1595f22770597e496 (diff) | |
download | lwn-a506fdd22342606d22645a6bf90a2d848e92e5d7.tar.gz lwn-a506fdd22342606d22645a6bf90a2d848e92e5d7.zip |
KVM: nSVM: implement nested_svm_load_cr3() and use it for host->guest switch
An undesired triple fault gets injected into the L1 guest on SVM when L2 is
launched with certain CR3 values. The triple fault (#TF) is raised by the
mmu_check_root() check in fast_pgd_switch(), and the root cause is that when
kvm_set_cr3() is called from nested_prepare_vmcb_save() with NPT
enabled, CR3 points to an nGPA, so we can't check it with
kvm_is_visible_gfn().
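For reference, a condensed sketch of the check that fires, paraphrased from arch/x86/kvm/mmu/mmu.c of this era (simplified for illustration, not the verbatim source):

```c
/*
 * Condensed from arch/x86/kvm/mmu/mmu.c around this commit; details
 * trimmed. fast_pgd_switch() runs this check on the new root. With
 * NPT enabled the root handed down is an nGPA, not a visible L1 gfn,
 * so a triple fault is queued for L1 spuriously.
 */
static int mmu_check_root(struct kvm_vcpu *vcpu, gfn_t root_gfn)
{
	int ret = 0;

	if (!kvm_is_visible_gfn(vcpu->kvm, root_gfn)) {
		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
		ret = 1;
	}

	return ret;
}
```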
Using the generic kvm_set_cr3() when switching to a nested guest is not
a great idea, as we would have to distinguish between 'real' CR3s and
'nested' CR3s to e.g. avoid calling kvm_mmu_new_pgd() with an nGPA. Following
nVMX, implement a nested-specific nested_svm_load_cr3() to do the job
(sketched below).
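The new helper lives in arch/x86/kvm/svm/nested.c, outside this diffstat. A minimal sketch of its shape, reconstructed from the description above and assuming era-appropriate mainline helpers (rsvd_bits(), cpuid_maxphyaddr(), kvm_register_mark_available(), kvm_init_mmu()); this is illustrative, not the exact patch text:

```c
/*
 * Sketch of the nVMX-style helper introduced by this patch in
 * arch/x86/kvm/svm/nested.c; reconstructed from the commit message,
 * the actual patch may differ (e.g. PDPTR handling is omitted here).
 */
static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
			       bool nested_npt)
{
	/* A CR3 with reserved bits set is invalid regardless of mode. */
	if (cr3 & rsvd_bits(cpuid_maxphyaddr(vcpu), 63))
		return -EINVAL;

	/*
	 * Only a 'real' CR3 goes through the fast PGD switch; a nested
	 * CR3 (nGPA) must not be fed to kvm_mmu_new_pgd().
	 */
	if (!nested_npt)
		kvm_mmu_new_pgd(vcpu, cr3, false, false);

	vcpu->arch.cr3 = cr3;
	kvm_register_mark_available(vcpu, VCPU_EXREG_CR3);

	/* Re-init the MMU instead of a full kvm_mmu_reset_context(). */
	kvm_init_mmu(vcpu, false);

	return 0;
}
```

The nested_npt flag is what lets the helper tell a 'real' CR3 from a nested one: only the former is routed through the fast PGD switch.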
To support the change, nested_svm_load_cr3() needs to be re-ordered
with nested_svm_init_mmu_context().
Note: the current implementation is sub-optimal as we always do a TLB
flush/MMU sync, but this is still an improvement as we at least stop doing
kvm_mmu_reset_context().
Fixes: 7c390d350f8b ("kvm: x86: Add fast CR3 switch code path")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200710141157.1640173-8-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86/kvm/mmu/mmu.c')
-rw-r--r-- | arch/x86/kvm/mmu/mmu.c | 2 |
1 file changed, 2 insertions, 0 deletions
```diff
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 78c88e8aecfa..61c35fec5219 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4952,6 +4952,8 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, u32 cr0, u32 cr4, u32 efer,
 	union kvm_mmu_role new_role =
 		kvm_calc_shadow_mmu_root_page_role(vcpu, false);
 
+	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base, false, false);
+
 	if (new_role.as_u64 != context->mmu_role.as_u64)
 		shadow_mmu_init_context(vcpu, context, cr0, cr4, efer, new_role);
 }
```
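This hunk handles the nested-NPT side of the switch: kvm_init_shadow_npt_mmu() now performs the PGD switch itself, feeding __kvm_mmu_new_pgd() the nested CR3 together with the freshly computed shadow-NPT role, before the role comparison decides whether the context must be re-initialized. Assuming the era's __kvm_mmu_new_pgd() signature, the two trailing 'false' arguments mean the TLB flush and MMU sync are never skipped, which is the sub-optimality the commit message notes.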