author     Joel Schopp <joel.schopp@amd.com>      2014-07-09 11:17:04 -0500
committer  Jiri Slaby <jslaby@suse.cz>            2015-04-30 11:15:10 +0200
commit     628d3e68cb2113cb0c4b935745e35a2efb7e944b (patch)
tree       4f140f1192c4a08ca19eeb34cf6c51f9ceac17a2 /arch/arm
parent     64b4f742f29eb19cd8c669703de4102a67228076 (diff)
arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc
commit dbff124e29fa24aff9705b354b5f4648cd96e0bb upstream.
The current aarch64 calculation for VTTBR_BADDR_MASK masks only 39 bits
and not all the bits in the PA range. This is clearly a bug that
manifests itself on systems that allocate memory in the higher address
space range.
[ Modified from Joel's original patch to be based on PHYS_MASK_SHIFT
instead of a hard-coded value and to move the alignment check of the
allocation to mmu.c. Also added a comment explaining why we hardcode
the IPA range and changed the stage-2 pgd allocation to be based on
the 40 bit IPA range instead of the maximum possible 48 bit PA range.
- Christoffer ]
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Joel Schopp <joel.schopp@amd.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
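The matching arm64 header change that widens VTTBR_BADDR_MASK to the full PA range is outside the arch/arm diffstat shown below. The following standalone sketch uses assumed values only (a 48-bit PHYS_MASK_SHIFT and an illustrative 12-bit base-address alignment, neither taken from the upstream hunk) to show why a mask that stops at bit 39 silently drops the upper bits of a stage-2 pgd allocated above the 39-bit boundary, while a mask derived from PHYS_MASK_SHIFT keeps them:

/*
 * Sketch only; not the upstream kvm_arm.h definitions.  Assumed values:
 * a 48-bit PA range and a 12-bit pgd alignment chosen for illustration.
 */
#include <stdio.h>
#include <stdint.h>

#define PHYS_MASK_SHIFT 48          /* assumed PA range */
#define BADDR_SHIFT     12          /* illustrative pgd alignment */

/* old idea: the base-address mask stops at bit 39 */
#define BADDR_MASK_39BIT \
	(((1ULL << (39 - BADDR_SHIFT)) - 1) << BADDR_SHIFT)
/* fixed idea: the mask covers the whole PA range */
#define BADDR_MASK_FULL \
	(((1ULL << (PHYS_MASK_SHIFT - BADDR_SHIFT)) - 1) << BADDR_SHIFT)

int main(void)
{
	/* a stage-2 pgd that happens to live above the 39-bit boundary */
	uint64_t pgd_phys = 1ULL << 40;

	/* the 39-bit mask drops every set bit: the VTTBR base becomes 0 */
	printf("39-bit mask       : %#llx\n",
	       (unsigned long long)(pgd_phys & BADDR_MASK_39BIT));
	/* the PHYS_MASK_SHIFT-based mask keeps the address intact */
	printf("full PA-range mask: %#llx\n",
	       (unsigned long long)(pgd_phys & BADDR_MASK_FULL));
	return 0;
}

With the mask covering the whole PA range, masking pgd_phys in update_vttbr() would only hide allocator bugs, which is why the hunk below replaces the AND with a BUG_ON() sanity check and simply ORs the VMID into the new VTTBR value.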
Diffstat (limited to 'arch/arm')
-rw-r--r--   arch/arm/kvm/arm.c   4
1 file changed, 2 insertions, 2 deletions
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 8eacf88d68fd..26ca5c694755 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -425,9 +425,9 @@ static void update_vttbr(struct kvm *kvm)
 
 	/* update vttbr to be used with the new vmid */
 	pgd_phys = virt_to_phys(kvm->arch.pgd);
+	BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK;
-	kvm->arch.vttbr = pgd_phys & VTTBR_BADDR_MASK;
-	kvm->arch.vttbr |= vmid;
+	kvm->arch.vttbr = pgd_phys | vmid;
 
 	spin_unlock(&kvm_vmid_lock);
 }