author    | Mark Rutland <mark.rutland@arm.com> | 2023-06-05 08:00:58 +0100
committer | Peter Zijlstra <peterz@infradead.org> | 2023-06-05 09:57:13 +0200
commit    | dda5f312bb09e56e7a1c3e3851f2000eb2e9c879 (patch)
tree      | 2c5d77a688caffdffb4b516c1e6b00baeadb1259 /arch/arm/lib/testchangebit.S
parent    | 497cc42bf53b55185ab3d39c634fbf09eb6681ae (diff)
locking/atomic: arm: fix sync ops
The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.
Fix this by defining sync ops with the required barriers.
Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.
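
The shape of the fix can be sketched in C. This is a hedged illustration, not the kernel's actual ARM assembly: GCC's `__sync` builtins (which are fully ordered) stand in for the `ldrex`/`strex` loop bracketed by `dmb` barriers that the fixed assembly ops use, and the function name mirrors the kernel's `sync_test_and_change_bit()` only for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Sketch of a sync op that keeps its full ordering regardless of
 * CONFIG_SMP, so a UP kernel running under an SMP hypervisor (e.g.
 * communicating with Xen) still gets the barriers it needs. */
static bool sync_test_and_change_bit(int nr, volatile unsigned long *addr)
{
	volatile unsigned long *p = addr + nr / BITS_PER_LONG;
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);

	/* __sync_fetch_and_xor() is a fully-ordered atomic XOR: it
	 * toggles the bit and returns the old word, mirroring the
	 * "eor" operation the assembly macros instantiate. */
	unsigned long old = __sync_fetch_and_xor(p, mask);
	return (old & mask) != 0;
}
```

The key point is that the ordering must not be compiled away on UP builds, which is exactly what the regular bitops allowed before this patch.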
Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-2-mark.rutland@arm.com
Diffstat (limited to 'arch/arm/lib/testchangebit.S')
-rw-r--r-- | arch/arm/lib/testchangebit.S | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
index 4ebecc67e6e0..f13fe9bc2399 100644
--- a/arch/arm/lib/testchangebit.S
+++ b/arch/arm/lib/testchangebit.S
@@ -10,3 +10,7 @@
 	.text
 testop	_test_and_change_bit, eor, str
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop	_sync_test_and_change_bit, eor, str
+#endif