author     Matthew Wilcox <mawilcox@microsoft.com>           2017-02-24 15:00:58 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>    2017-02-24 17:46:57 -0800
commit     e4afd2e5567fc5d59988025f7528f9b4794d86a5 (patch)
tree       d00cf7c2409d3fdeb4dbb81ec04d6cb9c0e7965d /lib/find_bit.c
parent     55ded9551f9a64f2872df77a954d4c30f8958e82 (diff)
lib/find_bit.c: micro-optimise find_next_*_bit
This saves 32 bytes on my x86-64 build, mostly due to alignment
considerations and sharing more code between find_next_bit and
find_next_zero_bit, but it does save a couple of instructions.
There are really two parts to this commit:
- First, in the test (!nbits || start >= nbits), the first half is
  trivially a subset of the second half: since nbits and start are both
  unsigned, nbits == 0 makes start >= nbits true for every possible
  start (see the first sketch below).
- Second, while looking at the disassembly, I noticed that GCC was
  predicting the branch as taken. Since this is a failure case, it's
  clearly the less likely of the two branches, so add an unlikely() to
  override GCC's heuristics (the second sketch below shows the hint).
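A minimal standalone sketch of the first point, assuming nothing beyond plain C
(old_test() and new_test() are made-up helpers, not kernel code): with unsigned
operands, dropping the !nbits check cannot change the result.

#include <assert.h>
#include <limits.h>

/* The original bounds test, with the redundant !nbits check. */
static int old_test(unsigned long start, unsigned long nbits)
{
	return !nbits || start >= nbits;
}

/* The simplified test from this commit. */
static int new_test(unsigned long start, unsigned long nbits)
{
	return start >= nbits;
}

int main(void)
{
	unsigned long starts[] = { 0, 1, 63, 64, ULONG_MAX };
	unsigned long nbits[] = { 0, 1, 64, ULONG_MAX };

	/*
	 * When nbits == 0, start >= nbits holds for every unsigned start,
	 * so the two tests agree on all inputs.
	 */
	for (unsigned i = 0; i < sizeof(starts) / sizeof(starts[0]); i++)
		for (unsigned j = 0; j < sizeof(nbits) / sizeof(nbits[0]); j++)
			assert(old_test(starts[i], nbits[j]) ==
			       new_test(starts[i], nbits[j]));
	return 0;
}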
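And a sketch of the second point: in the kernel, unlikely() expands to GCC's
__builtin_expect(), which tells the compiler the condition is almost always
false so the common path stays on the straight-line fall-through.
bounds_check() below is a hypothetical, userspace stand-in for the check in
_find_next_bit(), not the real function; it needs GCC or Clang for the builtin.

#include <stdio.h>

#define unlikely(x)	__builtin_expect(!!(x), 0)

/* Hypothetical stand-in for the bounds check in _find_next_bit(). */
static unsigned long bounds_check(unsigned long start, unsigned long nbits)
{
	/* Failure path: hinted as the cold, rarely taken branch. */
	if (unlikely(start >= nbits))
		return nbits;

	/* Common path; the real code goes on to scan the bitmap. */
	return start;
}

int main(void)
{
	printf("%lu %lu\n", bounds_check(3, 64), bounds_check(100, 64));
	return 0;
}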
[mawilcox@microsoft.com: v2]
Link: http://lkml.kernel.org/r/1483709016-1834-1-git-send-email-mawilcox@linuxonhyperv.com
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Acked-by: Yury Norov <ynorov@caviumnetworks.com>
Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'lib/find_bit.c')
 lib/find_bit.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/find_bit.c b/lib/find_bit.c
index 18072ea9c20e..6ed74f78380c 100644
--- a/lib/find_bit.c
+++ b/lib/find_bit.c
@@ -33,7 +33,7 @@ static unsigned long _find_next_bit(const unsigned long *addr,
 {
 	unsigned long tmp;
 
-	if (!nbits || start >= nbits)
+	if (unlikely(start >= nbits))
 		return nbits;
 
 	tmp = addr[start / BITS_PER_LONG] ^ invert;
@@ -151,7 +151,7 @@ static unsigned long _find_next_bit_le(const unsigned long *addr,
 {
 	unsigned long tmp;
 
-	if (!nbits || start >= nbits)
+	if (unlikely(start >= nbits))
 		return nbits;
 
 	tmp = addr[start / BITS_PER_LONG] ^ invert;