author | Linus Torvalds <torvalds@linux-foundation.org> | 2018-06-04 16:40:11 -0700
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2018-06-04 16:40:11 -0700
commit | 92400b8c8b42e53abb0fcb4ac75cb85d4177a891 (patch)
tree | b6c7ef758d1c2b5e32e2483a0dbde7cd23a6d8a0 /Documentation/core-api
parent | 31a85cb35c82d686a95f903fdf9a346aba818290 (diff)
parent | 1b22fc609cecd1b16c4a015e1a6b3c9717484e3a (diff)
Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
- Lots of tidying-up changes all across the map for Linux's formal
  memory/locking-model tooling, by Alan Stern, Akira Yokosawa, Andrea
  Parri, Paul E. McKenney and SeongJae Park.

  A notable change beyond the overall update of the tooling itself is
  the tidying up of the spin_is_locked() semantics, which spills over
  into the kernel proper as well.
- qspinlock improvements: the locking algorithm now guarantees forward
  progress, whereas the previous mainline implementation could starve
  threads indefinitely in cmpxchg() loops (see the illustrative sketch
  after this list). Also other related cleanups to the qspinlock code
  (Will Deacon)
- misc smaller improvements, cleanups and fixes all across the locking
subsystem
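
To make the forward-progress point concrete, here is a minimal, hypothetical sketch of the kind of unbounded cmpxchg() retry loop a lock slowpath can degenerate into; it is illustrative only and is not the actual qspinlock algorithm:

```c
/*
 * Illustrative sketch only -- not the actual qspinlock code. A naive
 * cmpxchg()-based spinlock like this makes no forward-progress
 * guarantee: an unlucky CPU can lose the cmpxchg() race on every
 * iteration and spin indefinitely while other CPUs keep taking and
 * releasing the lock.
 */
static inline void naive_spin_lock(atomic_t *lock)
{
	for (;;) {
		if (atomic_read(lock) == 0 &&
		    atomic_cmpxchg(lock, 0, 1) == 0)
			return;		/* won the race, lock acquired */
		cpu_relax();		/* lost the race; unordered retry */
	}
}
```

Will Deacon's rework avoids such unbounded retries in the qspinlock slowpath, so a waiter that keeps losing races still makes progress toward acquiring the lock.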
* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (51 commits)
locking/rwsem: Simplify the is-owner-spinnable checks
tools/memory-model: Add reference for 'Simplifying ARM concurrency'
tools/memory-model: Update ASPLOS information
MAINTAINERS, tools/memory-model: Update e-mail address for Andrea Parri
tools/memory-model: Fix coding style in 'lock.cat'
tools/memory-model: Remove out-of-date comments and code from lock.cat
tools/memory-model: Improve mixed-access checking in lock.cat
tools/memory-model: Improve comments in lock.cat
tools/memory-model: Remove duplicated code from lock.cat
tools/memory-model: Flag "cumulativity" and "propagation" tests
tools/memory-model: Add model support for spin_is_locked()
tools/memory-model: Add scripts to test memory model
tools/memory-model: Fix coding style in 'linux-kernel.def'
tools/memory-model: Model 'smp_store_mb()'
tools/memory-model: Update the cheat-sheet to show that smp_mb__after_atomic() orders later RMW operations
tools/memory-model: Improve key for SELF and SV
tools/memory-model: Fix cheat sheet typo
tools/memory-model: Update required version of herdtools7
tools/memory-model: Redefine rb in terms of rcu-fence
tools/memory-model: Rename link and rcu-path to rcu-link and rb
...
Diffstat (limited to 'Documentation/core-api')
-rw-r--r-- | Documentation/core-api/atomic_ops.rst | 13 |
1 file changed, 7 insertions, 6 deletions
diff --git a/Documentation/core-api/atomic_ops.rst b/Documentation/core-api/atomic_ops.rst
index fce929144ccd..2e7165f86f55 100644
--- a/Documentation/core-api/atomic_ops.rst
+++ b/Documentation/core-api/atomic_ops.rst
@@ -111,7 +111,6 @@ If the compiler can prove that do_something() does not store to the
 variable a, then the compiler is within its rights transforming this to
 the following::
 
-	tmp = a;
 	if (a > 0)
 		for (;;)
 			do_something();
@@ -119,7 +118,7 @@ the following::
 If you don't want the compiler to do this (and you probably don't), then
 you should use something like the following::
 
-	while (READ_ONCE(a) < 0)
+	while (READ_ONCE(a) > 0)
 		do_something();
 
 Alternatively, you could place a barrier() call in the loop.
@@ -467,10 +466,12 @@ Like the above, except that these routines return a boolean which
 indicates whether the changed bit was set _BEFORE_ the atomic bit
 operation.
 
-WARNING! It is incredibly important that the value be a boolean,
-ie. "0" or "1". Do not try to be fancy and save a few instructions by
-declaring the above to return "long" and just returning something like
-"old_val & mask" because that will not work.
+
+.. warning::
+	It is incredibly important that the value be a boolean, ie. "0" or "1".
+	Do not try to be fancy and save a few instructions by declaring the
+	above to return "long" and just returning something like "old_val &
+	mask" because that will not work.
 
 For one thing, this return value gets truncated to int in many code
 paths using these interfaces, so on 64-bit if the bit is set in the
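
The truncation hazard that the rewritten warning describes is easy to demonstrate. A minimal, hypothetical userspace sketch, assuming a platform with a 64-bit unsigned long and a 32-bit int:

```c
#include <stdio.h>

int main(void)
{
	/* Suppose bit 32 was set before a hypothetical bit operation. */
	unsigned long mask = 1UL << 32;
	unsigned long old_val = mask;

	long fancy = old_val & mask;	/* nonzero as a long: 0x100000000 */
	int truncated = fancy;		/* call sites often truncate to int */

	/* The bit was set, yet the truncated result reads as 0. */
	printf("fancy = %#lx, truncated = %d\n", fancy, truncated);
	return 0;
}
```

Returning a strict 0/1 boolean instead survives the conversion to int, which is why the documentation insists on it.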