author    Avi Kivity <avi@redhat.com>  2012-09-16 14:18:51 +0300
committer Avi Kivity <avi@redhat.com>  2012-09-20 13:00:08 +0300
commit    8cbc70696f149e44753b0fe60162b4ff96c2dd2b (patch)
tree      79729287462257080b071f584bdba7a7ef9a25ea /arch/x86/include/asm
parent    3d34adec7081621ff51c195be045b87d75c0c49d (diff)
KVM: MMU: Update accessed and dirty bits after guest pagetable walk
While unspecified, the behaviour of Intel processors is to first perform
the page table walk and then, if the walk was successful, to atomically
update the accessed and dirty bits of the walked paging elements. While
we are not required to follow this exactly, doing so allows us to
perform the access permission check after the walk is complete, rather
than after each walk step.

(The tricky case is SMEP: a zero in any pte's U bit makes the referenced
page a supervisor page, so we can't fault on a one bit during the walk
itself.)

Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Diffstat (limited to 'arch/x86/include/asm')
0 files changed, 0 insertions, 0 deletions