author     David S. Miller <davem@davemloft.net>            2014-08-04 16:34:01 -0700
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2014-08-14 09:38:26 +0800
commit     b4a8febea9f14c69ec3038cb10c1fcfd2d44401d (patch)
tree       14f4ee0ac5c3d82ec801895874b1c17730402261
parent     a8973758f3bd08a2d102ee96ec28b2fe9242d77b (diff)
sparc64: Do not insert non-valid PTEs into the TSB hash table.
[ Upstream commit 18f38132528c3e603c66ea464727b29e9bbcb91b ]
The assumption was that update_mmu_cache() (and the equivalent for PMDs) would
only be called when the PTE being installed will be accessible by the user.
This is not true for code paths originating from remove_migration_pte().
There are dire consequences for placing a non-valid PTE into the TSB. The TLB
miss framework assumes that when a TSB entry matches we can just load it into
the TLB and return from the TLB miss trap.
So if a non-valid PTE is in there, we will deadlock taking the TLB miss over
and over, never satisfying the miss.
Just exit early from update_mmu_cache() and friends in this situation.
Based upon a report and patch from Christopher Alexander Tobias Schulze.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
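
For readers who do not want to parse the diff below, here is a minimal sketch of what the guard amounts to in update_mmu_cache(). It is an illustration only: the locals, the dcache flushing, and the TSB insertion under mm->context.lock are abbreviated, and the actual change is the patch at the bottom of this page.

    void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
                          pte_t *ptep)
    {
            struct mm_struct *mm = vma->vm_mm;
            pte_t pte = *ptep;      /* the PTE being installed */

            /* A non-valid PTE must never land in the TSB: the TLB miss
             * handler would keep loading the bogus entry, re-fault on the
             * same address, and loop forever without satisfying the miss. */
            if (!pte_accessible(mm, pte))
                    return;

            /* ... existing code continues: take mm->context.lock and
             * insert the translation into the TSB ... */
    }

The PMD path in update_mmu_cache_pmd() gets the equivalent early exit, checking _PAGE_VALID directly, as the second hunk below shows.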
-rw-r--r--  arch/sparc/mm/init_64.c | 8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index ed3c969a5f4c..b08a6279c4ca 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -350,6 +350,10 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *
 
 	mm = vma->vm_mm;
 
+	/* Don't insert a non-valid PTE into the TSB, we'll deadlock. */
+	if (!pte_accessible(mm, pte))
+		return;
+
 	spin_lock_irqsave(&mm->context.lock, flags);
 
 #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
@@ -2614,6 +2618,10 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 	pte = pmd_val(entry);
 
+	/* Don't insert a non-valid PMD into the TSB, we'll deadlock. */
+	if (!(pte & _PAGE_VALID))
+		return;
+
 	/* We are fabricating 8MB pages using 4MB real hw pages. */
 	pte |= (addr & (1UL << REAL_HPAGE_SHIFT));