author     Alexander Graf <agraf@suse.de>            2010-08-02 16:08:22 +0200
committer  Avi Kivity <avi@redhat.com>               2010-10-24 10:52:05 +0200
commit     4cb6b7ea0cd085e6613153ad69608cad6421abcc
tree       79c5eae68895c2ac75e09846f44782dd347a0cdd  /arch/powerpc
parent     c60b4cf70127941e2f944a7971a7f6b3ecb367ac
KVM: PPC: Preload magic page when in kernel mode
When the guest jumps into kernel mode and has the magic page mapped, there's a
very high chance that it will also use it. So let's detect that scenario and
map the segment accordingly.
Signed-off-by: Alexander Graf <agraf@suse.de>
Diffstat (limited to 'arch/powerpc')
-rw-r--r-- | arch/powerpc/kvm/book3s.c | 10
1 file changed, 10 insertions, 0 deletions
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 37db61d37041..54ca578239db 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -145,6 +145,16 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
                    (old_msr & (MSR_PR|MSR_IR|MSR_DR))) {
                 kvmppc_mmu_flush_segments(vcpu);
                 kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
+
+                /* Preload magic page segment when in kernel mode */
+                if (!(msr & MSR_PR) && vcpu->arch.magic_page_pa) {
+                        struct kvm_vcpu_arch *a = &vcpu->arch;
+
+                        if (msr & MSR_DR)
+                                kvmppc_mmu_map_segment(vcpu, a->magic_page_ea);
+                        else
+                                kvmppc_mmu_map_segment(vcpu, a->magic_page_pa);
+                }
         }
 
         /* Preload FPU if it's enabled */
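For readers who want to poke at the EA-versus-PA decision outside the kernel, here is a minimal standalone C sketch of the same logic. The MSR_PR/MSR_DR bit positions match Linux's asm/reg.h, but the struct, the helper function, and the sample addresses are hypothetical stand-ins for the vcpu->arch fields, not actual KVM code.

/*
 * Standalone sketch (not kernel code) of the preload decision above.
 * MSR_PR/MSR_DR bit positions follow Linux's asm/reg.h; the struct and
 * addresses are hypothetical stand-ins for vcpu->arch fields.
 */
#include <stdint.h>
#include <stdio.h>

#define MSR_PR (1ULL << 14)     /* problem state: set = user mode */
#define MSR_DR (1ULL << 4)      /* data address translation enabled */

struct magic_page {
        uint64_t ea;    /* effective address the guest mapped it at */
        uint64_t pa;    /* address used while translation is off */
};

/* Return the address whose segment is worth preloading, or 0 for none. */
static uint64_t preload_addr(uint64_t msr, const struct magic_page *mp)
{
        /* Only kernel mode (MSR_PR clear) with a magic page registered. */
        if ((msr & MSR_PR) || !mp->pa)
                return 0;

        /* With data translation on, the guest touches the page through
         * its effective address; otherwise through its untranslated one. */
        return (msr & MSR_DR) ? mp->ea : mp->pa;
}

int main(void)
{
        struct magic_page mp = { .ea = 0xfffff000ULL, .pa = 0x3000ULL };

        printf("DR on : 0x%llx\n", (unsigned long long)preload_addr(MSR_DR, &mp));
        printf("DR off: 0x%llx\n", (unsigned long long)preload_addr(0, &mp));
        printf("user  : 0x%llx\n", (unsigned long long)preload_addr(MSR_PR, &mp));
        return 0;
}

The point the patch relies on is that the same magic page may need a segment mapping under two different addresses: while MSR_DR is set, the guest reaches the page through the effective address it mapped it at, and with translation off it accesses the page by its physical address instead.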