author     Avi Kivity <avi@qumranet.com>    2007-09-16 18:58:32 +0200
committer  Avi Kivity <avi@qumranet.com>    2008-01-30 17:52:48 +0200
commit     c7addb902054195b995114df154e061c7d604f69 (patch)
tree       985910a6c970957126c91e55c55b0e73ae877e0c /drivers/kvm/kvm_main.c
parent     51c6cf662b4b361a09fbd324f4c67875d9bcfbea (diff)
KVM: Allow not-present guest page faults to bypass kvm
There are two classes of page faults trapped by kvm:
- host page faults, where the fault is needed to allow kvm to install
the shadow pte or update the guest accessed and dirty bits
- guest page faults, where the guest has faulted and kvm simply injects
the fault back into the guest to handle
The second class, guest page faults, is pure overhead. We can eliminate
some of it on vmx using the following evil trick (sketched in code after
this list):
- when we set up a shadow page table entry, if the corresponding guest pte
is not present, set up the shadow pte as not present
- if the guest pte _is_ present, mark the shadow pte as present but also
set one of the reserved bits in the shadow pte
- tell the vmx hardware not to trap faults which have the present bit clear
With this, normal page-not-present faults go directly to the guest,
bypassing kvm entirely.
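A minimal standalone sketch of the pte encoding this describes. The
constant names and the choice of bit 62 are illustrative assumptions;
the patch's real values live in the mmu.c half of the change, which
this page does not show:

    #include <stdint.h>
    #include <stdio.h>

    #define PT_PRESENT_MASK       (1ull << 0)
    /* Assumed: any bit reserved in the hardware pte format would do;
     * the actual bit chosen by the patch is not shown on this page. */
    #define SHADOW_RSVD_MARKER    (1ull << 62)

    /* Shadow pte to install for a guest pte kvm has not shadowed yet. */
    static uint64_t shadow_nonpresent_pte(uint64_t guest_pte)
    {
        if (!(guest_pte & PT_PRESENT_MASK))
            /* Guest pte not present: leave the shadow pte not present
             * too, so the resulting fault (error code P=0) is delivered
             * straight to the guest without a vm exit. */
            return 0;
        /* Guest pte present: mark the shadow pte present plus a reserved
         * bit, so the fault (error code P=1, RSVD=1) still exits to kvm,
         * which can then install the real shadow pte. */
        return PT_PRESENT_MASK | SHADOW_RSVD_MARKER;
    }

    int main(void)
    {
        printf("guest pte not present -> shadow pte %#llx\n",
               (unsigned long long)shadow_nonpresent_pte(0));
        printf("guest pte present     -> shadow pte %#llx\n",
               (unsigned long long)shadow_nonpresent_pte(PT_PRESENT_MASK));
        return 0;
    }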
Unfortunately, this trick only works on Intel hardware, as AMD lacks a
way to discriminate among page faults based on error code. It is also
a little risky since it uses reserved bits which might become unreserved
in the future, so a module parameter is provided to disable it.
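The knob itself lives in the vmx.c half of the patch, outside this
diffstat. A sketch of the usual module-parameter pattern, with the name
bypass_guest_pf and the default assumed rather than taken from this page:

    #include <linux/module.h>

    /* Assumed name and default (enabled); with a declaration like this
     * the trick can be turned off at load time, e.g.
     * "modprobe kvm-intel bypass_guest_pf=0". */
    static int bypass_guest_pf = 1;
    module_param(bypass_guest_pf, bool, 0);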
Signed-off-by: Avi Kivity <avi@qumranet.com>
Diffstat (limited to 'drivers/kvm/kvm_main.c')
-rw-r--r--  drivers/kvm/kvm_main.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 710483669f34..82cc7ae0fc83 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -3501,7 +3501,9 @@ int kvm_init_x86(struct kvm_x86_ops *ops, unsigned int vcpu_size,
 	kvm_preempt_ops.sched_in = kvm_sched_in;
 	kvm_preempt_ops.sched_out = kvm_sched_out;
 
-	return r;
+	kvm_mmu_set_nonpresent_ptes(0ull, 0ull);
+
+	return 0;
 
 out_free:
 	kmem_cache_destroy(kvm_vcpu_cache);
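The hunk replaces the leftover "return r" with an explicit success
return and installs all-zero defaults for both "nonpresent" shadow pte
values, i.e. the trick disabled; an architecture backend such as vmx.c
can override them when the bypass is enabled. A sketch of what the
setter plausibly does in the mmu.c half of the patch, inferred from its
name and the description above:

    /* Sketch only; the variable names are assumed, mmu.c is not shown
     * on this page. */
    static u64 shadow_trap_nonpresent_pte;    /* faults exit to kvm     */
    static u64 shadow_notrap_nonpresent_pte;  /* faults go to the guest */

    void kvm_mmu_set_nonpresent_ptes(u64 trap_pte, u64 notrap_pte)
    {
        shadow_trap_nonpresent_pte = trap_pte;
        shadow_notrap_nonpresent_pte = notrap_pte;
    }

With (0ull, 0ull), both flavors are plain zero ptes, so every fault
still exits to kvm, matching the pre-patch behavior on hardware where
the trick is off.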