author     Dimitri Sivanich <sivanich@sgi.com>    2010-03-01 11:48:15 -0600
committer  Greg Kroah-Hartman <gregkh@suse.de>    2010-04-01 15:58:57 -0700
commit     847d52cc4a3e149949df66d7484aba999dc61e19 (patch)
tree       9c5d99c9e7cc8462c028833f1f81e56bd99c3aac
parent     37c3a08ca3714118fe8f931f28b2cc76bb4ac0e5 (diff)
x86: Fix sched_clock_cpu for systems with unsynchronized TSC
commit 14be1f7454ea96ee614467a49cf018a1a383b189 upstream.

On UV systems, the TSC is not synchronized across blades. The
sched_clock_cpu() function is returning values that can go backwards
(I've seen as much as 8 seconds) when switching between cpus.

As each cpu comes up, early_init_intel() will currently set the
sched_clock_stable flag true. When mark_tsc_unstable() runs, it clears
the flag, but this only occurs once (the first time a cpu comes up whose
TSC is not synchronized with cpu 0). After this, early_init_intel()
will set the flag again as the next cpu comes up.

Only set sched_clock_stable if tsc has not been marked unstable.

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20100301174815.GC8224@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
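The following is a minimal user-space sketch (not kernel code) of the
ordering problem the patch describes. cpu_bringup() and its arguments are
hypothetical stand-ins for early_init_intel() and the per-cpu TSC sync
check; only sched_clock_stable, mark_tsc_unstable() and
check_tsc_unstable() model their kernel namesakes, and the real bring-up
path is of course more involved.

#include <stdio.h>

static int sched_clock_stable;   /* models the kernel flag of the same name */
static int tsc_unstable;         /* latched once mark_tsc_unstable() has run */

/* models mark_tsc_unstable(): permanently latches the unstable state */
static void mark_tsc_unstable(void)
{
	tsc_unstable = 1;
	sched_clock_stable = 0;
}

/* models check_tsc_unstable() */
static int check_tsc_unstable(void)
{
	return tsc_unstable;
}

/* models early_init_intel() plus the sync check as each cpu comes up */
static void cpu_bringup(int cpu, int tsc_synced_with_cpu0, int patched)
{
	if (patched) {
		/* patched behaviour: respect an earlier mark_tsc_unstable() */
		if (!check_tsc_unstable())
			sched_clock_stable = 1;
	} else {
		/* old behaviour: unconditionally re-set the flag */
		sched_clock_stable = 1;
	}

	/* the "unsynchronized" path only marks the TSC unstable once */
	if (!tsc_synced_with_cpu0 && !tsc_unstable)
		mark_tsc_unstable();

	printf("cpu%d up (%s): sched_clock_stable=%d\n",
	       cpu, patched ? "patched" : "unpatched", sched_clock_stable);
}

int main(void)
{
	/* unpatched: cpu2 re-sets the flag after cpu1's bring-up cleared it */
	sched_clock_stable = tsc_unstable = 0;
	cpu_bringup(0, 1, 0);
	cpu_bringup(1, 0, 0);   /* first unsynchronized cpu clears the flag */
	cpu_bringup(2, 0, 0);   /* ...but the flag comes back: the bug */

	/* patched: once the TSC is marked unstable, the flag stays clear */
	sched_clock_stable = tsc_unstable = 0;
	cpu_bringup(0, 1, 1);
	cpu_bringup(1, 0, 1);
	cpu_bringup(2, 0, 1);
	return 0;
}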
 arch/x86/kernel/cpu/intel.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index a2a03cf4a489..2a94890149db 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -70,7 +70,8 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
 	if (c->x86_power & (1 << 8)) {
 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
 		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
-		sched_clock_stable = 1;
+		if (!check_tsc_unstable())
+			sched_clock_stable = 1;
 	}
 
 	/*