author    Peter Zijlstra <peterz@infradead.org>  2013-09-27 17:30:03 +0200
committer Ingo Molnar <mingo@kernel.org>  2013-09-28 10:04:47 +0200
commit    75f93fed50c2abadbab6ef546b265f51ca975b27 (patch)
tree      ae531501cb671c948baedb8e07111f8dda2d5036 /include/asm-generic/preempt.h
parent    1a338ac32ca630f67df25b4a16436cccc314e997 (diff)
sched: Revert need_resched() to look at TIF_NEED_RESCHED
Yuanhan reported a serious throughput regression in his pigz benchmark.
Using the ftrace patch I found that several idle paths need more TLC
before we can switch the generic need_resched() over to
preempt_need_resched.

The preemption paths benefit most from preempt_need_resched and do
indeed use it; all other need_resched() users don't really care that
much so reverting need_resched() back to tif_need_resched() is the
simple and safe solution.

Reported-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: lkp@linux.intel.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20130927153003.GF15690@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'include/asm-generic/preempt.h')
-rw-r--r--  include/asm-generic/preempt.h | 8 --------
1 files changed, 0 insertions, 8 deletions
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index 5dc14ed3791c..ddf2b420ac8f 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -85,14 +85,6 @@ static __always_inline bool __preempt_count_dec_and_test(void)
}
/*
- * Returns true when we need to resched -- even if we can not.
- */
-static __always_inline bool need_resched(void)
-{
- return unlikely(test_preempt_need_resched());
-}
-
-/*
* Returns true when we need to resched and can (barring IRQ state).
*/
static __always_inline bool should_resched(void)