author    Shaohua Li <shaohua.li@intel.com>  2011-05-24 17:11:19 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>  2011-05-25 08:39:04 -0700
commit    5f70b962ccc2f2e6259417cf3d1233dc9e16cf5e (patch)
tree      5e3d83554554e3c315a7bab654fc51345078bc9d /mm/mmap.c
parent    34679d7eac9ecc20face093db9aa610f1e9c893a (diff)
mmap: avoid unnecessary anon_vma lock
If we only change vma->vm_end, we can avoid taking the anon_vma lock even if
'insert' isn't NULL, which is the split_vma case.

As I understand it, the lock used to be needed because rmap had to be able to
see the 'insert' VMA while we adjust the old VMA's vm_end (the 'insert' VMA
was linked onto the anon_vma list in __insert_vm_struct). That is no longer
true: the 'insert' VMA is already linked onto the anon_vma list in
__split_vma (via anon_vma_clone()) rather than in __insert_vm_struct, so
there is no race in which rmap could miss a required VMA. The anon_vma lock
is therefore unnecessary here, which saves one lock acquisition in the brk
case and improves scalability.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Rik van Riel <riel@redhat.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
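[Editorial sketch] A minimal user-space model of the locking decision this
patch changes, to make the before/after behaviour concrete. This is not
kernel code: the struct fields and helper names below are illustrative
stand-ins for the condition tested inside vma_adjust(), under the assumption
described in the commit message (split_vma passes a non-NULL 'insert' but
only moves vm_end).

    /* sketch.c - model the vma_adjust() anon_vma locking condition */
    #include <stdbool.h>
    #include <stdio.h>

    struct adjust_ctx {
        bool has_anon_vma;   /* vma->anon_vma != NULL */
        bool insert;         /* 'insert' VMA passed in (split_vma path) */
        bool importer;       /* another VMA imports anon pages from this one */
        bool start_changed;  /* start != vma->vm_start */
    };

    /* Old rule: any non-NULL 'insert' forced the anon_vma lock. */
    static bool old_needs_lock(const struct adjust_ctx *c)
    {
        return c->has_anon_vma &&
               (c->insert || c->importer || c->start_changed);
    }

    /* New rule: a pure vm_end change skips the lock, even with 'insert' set. */
    static bool new_needs_lock(const struct adjust_ctx *c)
    {
        return c->has_anon_vma && (c->importer || c->start_changed);
    }

    int main(void)
    {
        /*
         * split_vma (e.g. brk shrinking a mapping): 'insert' is set,
         * there is no importer, and only vm_end changes.
         */
        struct adjust_ctx split_case = {
            .has_anon_vma  = true,
            .insert        = true,
            .importer      = false,
            .start_changed = false,
        };

        printf("old rule takes lock: %d\n", old_needs_lock(&split_case));
        printf("new rule takes lock: %d\n", new_needs_lock(&split_case));
        return 0;
    }

With the old rule the split case prints 1 (lock taken); with the new rule it
prints 0, which is the lock acquisition the patch eliminates.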
Diffstat (limited to 'mm/mmap.c')
-rw-r--r--  mm/mmap.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index eaec3df82a2b..15b1fae57efe 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -609,7 +609,7 @@ again: remove_next = 1 + (end > next->vm_end);
* lock may be shared between many sibling processes. Skipping
* the lock for brk adjustments makes a difference sometimes.
*/
- if (vma->anon_vma && (insert || importer || start != vma->vm_start)) {
+ if (vma->anon_vma && (importer || start != vma->vm_start)) {
anon_vma = vma->anon_vma;
anon_vma_lock(anon_vma);
}