author    | Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> | 2015-03-30 10:41:03 +0530
committer | Michael Ellerman <mpe@ellerman.id.au>              | 2015-04-17 11:23:39 +1000
commit    | 691e95fd7396905a38d98919e9c150dbc3ea21a3 (patch)
tree      | d89b898d4f42d167f0da169f482d7104b46870d8 /arch/powerpc/kvm/e500_mmu_host.c
parent    | dac5657067919161eb3273ca787d8ae9814801e7 (diff)
powerpc/mm/thp: Make page table walk safe against thp split/collapse
We can block a THP split or a hugepage collapse by disabling local irqs.
We send an IPI to all the CPUs in the early part of a split/collapse,
so disabling local irqs ensures the split/collapse cannot make progress.
If the THP is being split we return NULL from
find_linux_pte_or_hugepte(). For all the current callers that should be OK.
We need to be careful if we want to use the returned pte_t pointer outside
the irq-disabled region. With respect to a THP split the pfn remains the same,
but a hugepage collapse will result in a pfn change. There are a few steps
we can take to avoid a hugepage collapse. One way is to take a page
reference inside the irq-disabled region. Another option is to take
mmap_sem so that a parallel collapse cannot happen. We can also
prevent a collapse by taking the pmd_lock. Another method, used by the kvm
subsystem, is to check whether an mmu_notifier update happened in between
using mmu_notifier_retry().
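
[Editor's note: a minimal sketch of the caller pattern the message describes, not code from this patch; the helper name walk_and_pin() is hypothetical, only the kernel calls are real. The page table is walked with local irqs disabled so a split/collapse cannot complete underneath us, and a page reference is taken before irqs are re-enabled so the pfn stays usable afterwards.]

/*
 * Illustrative sketch only (walk_and_pin is not part of this patch):
 * walk the page table with irqs off, then pin the page so a later
 * hugepage collapse cannot change the pfn out from under the caller.
 */
static struct page *walk_and_pin(pgd_t *pgdir, unsigned long addr)
{
	struct page *page = NULL;
	unsigned long flags;
	pte_t *ptep;

	local_irq_save(flags);		/* block THP split/collapse progress */
	ptep = find_linux_pte_or_hugepte(pgdir, addr, NULL);
	if (ptep) {
		pte_t pte = READ_ONCE(*ptep);

		if (pte_present(pte)) {
			page = pte_page(pte);
			/* reference taken inside the irq-disabled region,
			 * as the message suggests, to avoid a collapse */
			get_page(page);
		}
	}
	local_irq_restore(flags);

	return page;			/* caller must put_page() when done */
}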
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Diffstat (limited to 'arch/powerpc/kvm/e500_mmu_host.c')
-rw-r--r-- | arch/powerpc/kvm/e500_mmu_host.c | 14 |
1 file changed, 12 insertions, 2 deletions
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index a1f5b0d4b1d6..4d33e199edcc 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -338,6 +338,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	pte_t *ptep;
 	unsigned int wimg = 0;
 	pgd_t *pgdir;
+	unsigned long flags;

 	/* used to check for invalidations in progress */
 	mmu_seq = kvm->mmu_notifier_seq;
@@ -468,14 +469,23 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,

 	pgdir = vcpu_e500->vcpu.arch.pgdir;
+	/*
+	 * We are just looking at the wimg bits, so we don't
+	 * care much about the trans splitting bit.
+	 * We are holding kvm->mmu_lock so a notifier invalidate
+	 * can't run hence pfn won't change.
+	 */
+	local_irq_save(flags);
 	ptep = find_linux_pte_or_hugepte(pgdir, hva, NULL);
 	if (ptep) {
 		pte_t pte = READ_ONCE(*ptep);

-		if (pte_present(pte))
+		if (pte_present(pte)) {
 			wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) &
 				MAS2_WIMGE_MASK;
-		else {
+			local_irq_restore(flags);
+		} else {
+			local_irq_restore(flags);
 			pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n",
 					__func__, (long)gfn, pfn);
 			ret = -EINVAL;
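
[Editor's note: the hunk above relies on kvm->mmu_lock plus the notifier sequence check taken earlier in kvmppc_e500_shadow_map(). Below is a hedged, self-contained sketch of that mmu_notifier_retry() approach mentioned in the commit message; the function name example_map_fault() and its return values are illustrative and not taken from this patch.]

/*
 * Illustrative only: sample the notifier sequence before the (possibly
 * sleeping) pfn lookup, then recheck it under kvm->mmu_lock before
 * installing the translation; if an invalidate ran in between, retry.
 */
static int example_map_fault(struct kvm *kvm, unsigned long hva)
{
	unsigned long mmu_seq;

	mmu_seq = kvm->mmu_notifier_seq;
	smp_rmb();

	/* ... translate hva to a pfn here, which may sleep ... */

	spin_lock(&kvm->mmu_lock);
	if (mmu_notifier_retry(kvm, mmu_seq)) {
		/* an invalidate ran in between; the pfn may be stale */
		spin_unlock(&kvm->mmu_lock);
		return -EAGAIN;		/* caller retries the fault */
	}
	/* ... install the translation while holding mmu_lock ... */
	spin_unlock(&kvm->mmu_lock);

	return 0;
}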