author    Paul E. McKenney <paulmck@linux.vnet.ibm.com>    2013-12-11 13:59:09 -0800
committer Ingo Molnar <mingo@kernel.org>                   2013-12-16 11:36:15 +0100
commit    17eb88e068430014deb709e5af34197cdf2390c9 (patch)
tree      869d7c1e27ff7eeb2b0b846b8f844d32ac375222 /Documentation/lockdep-design.txt
parent    01352fb81658cbf78c55844de8e3d1d606bbf3f8 (diff)
Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK
Historically, an UNLOCK+LOCK pair executed by one CPU, by one task, or on a
given lock variable has implied a full memory barrier. In a recent LKML
thread, the wisdom of this historical approach was called into question:

	http://www.spinics.net/lists/linux-mm/msg65653.html

in part due to the memory-order complexities of low-handoff-overhead queued
locks on x86 systems.

This patch therefore removes this guarantee from the documentation, and
further documents how to restore it via a new smp_mb__after_unlock_lock()
primitive.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <linux-arch@vger.kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1386799151-2219-6-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
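For context, the usage pattern this patch documents looks roughly like the
following. This is a minimal sketch only: the locks, variables, and function
name are hypothetical illustration, not code taken from the patch itself.

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(lock_m);		/* hypothetical locks for illustration */
	static DEFINE_SPINLOCK(lock_n);

	static int a, b;

	static void order_across_unlock_lock(void)
	{
		spin_lock(&lock_m);
		a = 1;
		spin_unlock(&lock_m);		/* RELEASE of M ... */
		spin_lock(&lock_n);		/* ... immediately followed by ACQUIRE of N */
		smp_mb__after_unlock_lock();	/* promotes the UNLOCK+LOCK pair to a full barrier */
		b = 1;				/* now cannot be observed before the store to a */
		spin_unlock(&lock_n);
	}

Without the smp_mb__after_unlock_lock(), the RELEASE+ACQUIRE sequence above
can no longer be counted on as a full memory barrier on all architectures;
that is precisely the guarantee this patch withdraws from
memory-barriers.txt.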
Diffstat (limited to 'Documentation/lockdep-design.txt')
0 files changed, 0 insertions, 0 deletions