path: root/arch/arm/include/asm/sync_bitops.h
author:    Will Deacon <will.deacon@arm.com>  2014-02-21 17:01:48 +0100
committer: Russell King <rmk+kernel@arm.linux.org.uk>  2014-02-25 11:32:40 +0000
commit:    1971188aa19651d8f447211c6535fb68661d77c5 (patch)
tree:      bfaccbce3e19113ebd908b59cd7208d8012709ca /arch/arm/include/asm/sync_bitops.h
parent:    c32ffce0f66e5d1d4856254516e24f5ef275cd00 (diff)
ARM: 7985/1: mm: implement pte_accessible for faulting mappings
The pte_accessible macro can be used to identify page table entries capable of being cached by a TLB. In principle, this differs from pte_present, since PROT_NONE mappings are mapped using invalid entries identified as present, and ptes designated as `old' can use either invalid entries or those with the access flag cleared (guaranteed not to be in the TLB).

However, there is a race to take care of, as described in 20841405940e ("mm: fix TLB flush race between migration and change_protection_range"), between a page being migrated and mprotected at the same time. In this case, we can check whether a TLB invalidation is pending for the mm and, if so, temporarily consider PROT_NONE mappings as valid.

This patch implements a quick pte_accessible macro for ARM by simply checking if the pte is valid/present depending on the mm. For classic MMU, these checks are identical and will generate some false positives for PROT_NONE mappings, but this is better than the current asm-generic definition of ((void)(pte),1).

Finally, pte_present_user is moved to use pte_valid (and renamed appropriately), since we don't care about cache flushing for faulting mappings.

Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Diffstat (limited to 'arch/arm/include/asm/sync_bitops.h')
0 files changed, 0 insertions, 0 deletions