author	Shaohua Li <shaohua.li@intel.com>	2010-08-16 09:16:55 +0800
committer	H. Peter Anvin <hpa@zytor.com>	2010-08-23 10:04:57 -0700
commit	61c77326d1df079f202fa79403c3ccd8c5966a81 (patch)
tree	57780e6b94f24f402d1c9036d6e7cf37a359c22f /mm/memory.c
parent	76be97c1fc945db08aae1f1b746012662d643e97 (diff)
x86, mm: Avoid unnecessary TLB flush
In x86, the accessed and dirty bits are set automatically by the CPU when it accesses memory. By the time we reach the flush_tlb_fix_spurious_fault() call below, we have already set the dirty bit in the pte, so there is no need to flush the TLB. This might mean that the TLB entry on some CPUs lacks the dirty bit, but that doesn't matter: when those CPUs write to the page, the hardware checks and sets the bit itself, with no software involvement. A TLB flush at this point, on the other hand, is harmful.

The test creates as many threads as there are CPUs; each thread writes to the same (but randomly chosen) address within the same vma, and the total time is measured. On a 4-socket system the original time is 1.96s, while with the patch it is 0.8s. On a 2-socket system there is a 20% reduction as well. perf shows a lot of time being spent sending and handling IPIs for the TLB flush.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
LKML-Reference: <20100816011655.GA362@sli10-desk.sh.intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
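The hunk below works because flush_tlb_fix_spurious_fault() is an arch-overridable hook. A minimal sketch of the two definitions involved -- assumed here from the companion hunks of this series, which are not shown because this view is limited to mm/memory.c, so the exact form in the tree may differ -- looks roughly like this:

/*
 * include/asm-generic/pgtable.h (sketch): the generic fallback keeps
 * the old behaviour, so architectures that do need a flush after a
 * spurious write fault still get one.
 */
#ifndef flush_tlb_fix_spurious_fault
#define flush_tlb_fix_spurious_fault(vma, address)	\
	flush_tlb_page(vma, address)
#endif

/*
 * arch/x86/include/asm/pgtable.h (sketch): on x86 the CPU rechecks
 * the pte and sets the accessed/dirty bits in hardware, so the
 * override turns the call into a no-op.
 */
#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)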
Diffstat (limited to 'mm/memory.c')
-rw-r--r--	mm/memory.c	| 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 2ed2267439df..a40da6983961 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3147,7 +3147,7 @@ static inline int handle_pte_fault(struct mm_struct *mm,
* with threads.
*/
if (flags & FAULT_FLAG_WRITE)
- flush_tlb_page(vma, address);
+ flush_tlb_fix_spurious_fault(vma, address);
}
unlock:
pte_unmap_unlock(pte, ptl);
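For reference, the kind of microbenchmark described in the commit message might be reconstructed roughly as below. This is a sketch under stated assumptions, not the author's actual test: the round count, the barrier scheme, and the mprotect() cycling used to force repeated concurrent write faults on one private page are all guesses at how to exercise the spurious-fault path (build with gcc -O2 -pthread).

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define ROUNDS 100000			/* hypothetical; not given in the commit */

static char *page;
static long pagesz;
static long off;			/* one offset shared by all threads */
static int stop;
static pthread_barrier_t start_bar, done_bar;

static void *writer(void *arg)
{
	(void)arg;
	for (;;) {
		pthread_barrier_wait(&start_bar);
		if (stop)
			break;
		page[off] = 1;		/* all threads race on one pte */
		pthread_barrier_wait(&done_bar);
	}
	return NULL;
}

int main(void)
{
	int nthreads = sysconf(_SC_NPROCESSORS_ONLN);
	struct timespec t0, t1;
	pthread_t tids[256];
	int i, r;

	pagesz = sysconf(_SC_PAGESIZE);
	page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (page == MAP_FAILED || nthreads < 1 || nthreads > 256)
		return 1;
	off = rand() % pagesz;		/* "same but random address" */
	page[off] = 0;			/* populate the page up front */

	pthread_barrier_init(&start_bar, NULL, nthreads + 1);
	pthread_barrier_init(&done_bar, NULL, nthreads + 1);
	for (i = 0; i < nthreads; i++)
		pthread_create(&tids[i], NULL, writer, NULL);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (r = 0; r < ROUNDS; r++) {
		/*
		 * Write-protect the page and make it writable again while
		 * every writer is parked at the barrier.  For a private
		 * mapping the ptes are typically left write-protected, so
		 * the next write from each thread takes a minor fault;
		 * whichever thread fixes the pte first makes the others'
		 * faults spurious, which is the path this patch changes.
		 */
		mprotect(page, pagesz, PROT_READ);
		mprotect(page, pagesz, PROT_READ | PROT_WRITE);
		pthread_barrier_wait(&start_bar);	/* release writers */
		pthread_barrier_wait(&done_bar);	/* wait for round */
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	stop = 1;
	pthread_barrier_wait(&start_bar);
	for (i = 0; i < nthreads; i++)
		pthread_join(tids[i], NULL);

	printf("%d threads, %d rounds: %.2fs\n", nthreads, ROUNDS,
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}

With the old flush_tlb_page() call, each spurious fault in a run like this would trigger cross-CPU TLB-flush IPIs; with the patch applied, x86 skips the flush entirely, which is where the reported 1.96s to 0.8s improvement comes from.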