author	Lai Jiangshan <jiangshan.ljs@antgroup.com>	2022-03-11 15:03:41 +0800
committer	Paolo Bonzini <pbonzini@redhat.com>	2022-04-02 05:34:42 -0400
commit	5b22bbe717d9b96867783cde380c43f6cc28aafd
tree	e66bdd48a326d5ada5c8c9b73dc42ab809d5651b /arch/x86/kvm/mmu.h
parent	cf1d88b36ba7e83bdaa50bccc4c47864e8f08cbe
KVM: X86: Change the type of access u32 to u64
Change the type of @access from u32 to u64 for FNAME(walk_addr) and
->gva_to_gpa().
The kinds of accesses are usually combinations of UWX, and VMX/SVM's
nested paging adds a new factor: whether the access is for a guest
page table or for the final guest physical address.
And SMAP depends on one more factor for supervisor accesses: whether
the access is explicit or implicit.
So @access in FNAME(walk_addr) and ->gva_to_gpa() should carry all of
this information to do the walk.
Although @access (u32) has enough bits to encode all of these kinds,
this patch extends it to u64 (a layout sketch follows the list below):
o Extra bits will be in the higher 32 bits, so that we can
easily obtain the traditional access mode (UWX) by converting
it to u32.
o Reuse, for the access kind, the values defined by SVM's nested
paging (PFERR_GUEST_FINAL_MASK and PFERR_GUEST_PAGE_MASK), as seen
in @error_code in kvm_handle_page_fault() (a caller-side sketch
follows the diff below).
Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Message-Id: <20220311070346.45023-2-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86/kvm/mmu.h')
-rw-r--r--	arch/x86/kvm/mmu.h	8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index bf8dbc4bb12a..74efeaefa8f8 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -214,8 +214,10 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
  */
 static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 				  unsigned pte_access, unsigned pte_pkey,
-				  unsigned pfec)
+				  u64 access)
 {
+	/* strip nested paging fault error codes */
+	unsigned int pfec = access;
 	int cpl = static_call(kvm_x86_get_cpl)(vcpu);
 	unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
 
@@ -317,12 +319,12 @@ static inline void kvm_update_page_stats(struct kvm *kvm, int level, int count)
 		atomic64_add(count, &kvm->stat.pages[level - 1]);
 }
 
-gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
+gpa_t translate_nested_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u64 access,
 			   struct x86_exception *exception);
 
 static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
 				      struct kvm_mmu *mmu,
-				      gpa_t gpa, u32 access,
+				      gpa_t gpa, u64 access,
 				      struct x86_exception *exception)
 {
 	if (mmu != &vcpu->arch.nested_mmu)
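For the second bullet above, a hypothetical caller-side sketch, reusing the placeholder EX_ACC_* masks from the earlier example. The helper name and the exact filtering are illustrative only, not the kernel's code; it shows why matching the SVM nested-paging error-code bit values lets a fault error code feed the walk access directly:

/* Hypothetical: build a u64 walk access from a nested-paging fault
 * error code, keeping the kind bits unchanged since the values match. */
static u64 ex_build_walk_access(u64 error_code, bool write, bool user, bool fetch)
{
	u64 access = error_code & (EX_ACC_GUEST_FINAL_MASK |
				   EX_ACC_GUEST_PAGE_MASK);

	if (write)
		access |= EX_ACC_WRITE_MASK;
	if (user)
		access |= EX_ACC_USER_MASK;
	if (fetch)
		access |= EX_ACC_FETCH_MASK;
	return access;
}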