author	Michael Ellerman <mpe@ellerman.id.au>	2014-10-14 12:07:56 +1100
committer	Jiri Slaby <jslaby@suse.cz>	2014-10-31 15:11:33 +0100
commit	d9fc4e657d6fe0886a2a960eb09102c27a7babfd (patch)
tree	b5f3f67efd956106ec2e6867de2b240bcb104cc0 /arch
parent	5aaee42d255a81d3b010432265254353946d88b3 (diff)
powerpc: Add smp_mb()s to arch_spin_unlock_wait()
commit 78e05b1421fa41ae8457701140933baa5e7d9479 upstream.
Similar to the previous commit which described why we need to add a
barrier to arch_spin_is_locked(), we have a similar problem with
spin_unlock_wait().
We need a barrier on entry to ensure any spinlock we have previously
taken is visibly locked prior to the load of lock->slock.
It's also not clear if spin_unlock_wait() is intended to have ACQUIRE
semantics. For now be conservative and add a barrier on exit to give it
ACQUIRE semantics.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Diffstat (limited to 'arch')
-rw-r--r--	arch/powerpc/lib/locks.c	4
1 files changed, 4 insertions, 0 deletions
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index 0c9c8d7d0734..170a0346f756 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -70,12 +70,16 @@ void __rw_yield(arch_rwlock_t *rw)
 
 void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
+	smp_mb();
+
 	while (lock->slock) {
 		HMT_low();
 		if (SHARED_PROCESSOR)
 			__spin_yield(lock);
 	}
 	HMT_medium();
+
+	smp_mb();
 }
 
 EXPORT_SYMBOL(arch_spin_unlock_wait);