author	Kirill A. Shutemov <kirill.shutemov@linux.intel.com>	2017-11-27 06:21:26 +0300
committer	Linus Torvalds <torvalds@linux-foundation.org>	2017-11-27 12:26:29 -0800
commit	152e93af3cfe2d29d8136cc0a02a8612507136ee (patch)
tree	19bd28f0ea6af08ba14ae4bfd841b5256f888ee7 /mm/khugepaged.c
parent	a8f97366452ed491d13cf1e44241bc0b5740b1f0 (diff)
mm, thp: Do not make pmd/pud dirty without a reason
Currently we make page table entries dirty all the time, regardless of access type, and we don't even consider whether the mapping is write-protected. The reasoning is that we don't really need dirty tracking on THP, and making the entry dirty upfront may save some time on the first write to the page.

Unfortunately, such an approach may result in a false-positive can_follow_write_pmd() for the huge zero page or a read-only shmem file.

Let's make the page dirty only if we are about to write to it anyway (as we do for small pages).

I've restructured the code to make the entry dirty inside maybe_p[mu]d_mkwrite(). It also takes into account whether the vma is write-protected.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/khugepaged.c')
-rw-r--r--	mm/khugepaged.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ea4ff259b671..db43dc8a8ae6 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1057,7 +1057,7 @@ static void collapse_huge_page(struct mm_struct *mm,
pgtable = pmd_pgtable(_pmd);
_pmd = mk_huge_pmd(new_page, vma->vm_page_prot);
- _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
+ _pmd = maybe_pmd_mkwrite(_pmd, vma, false);
/*
* spin_lock() below is not the equivalent of smp_wmb(), so