author | Thomas Gleixner <tglx@linutronix.de> | 2023-10-02 14:00:05 +0200 |
---|---|---|
committer | Borislav Petkov (AMD) <bp@alien8.de> | 2023-10-24 15:05:55 +0200 |
commit | 7eb314a22800457396f541c655697dabd71e44a7 | |
tree | 381820e5f0cf90a0bf7a88a6b9f908aede78b516 /arch/x86/kernel/cpu/microcode/intel.c | |
parent | 0bf871651211b58c7b19f40b746b646d5311e2ec | |
x86/microcode: Rendezvous and load in NMI
stop_machine() does not prevent the spin-waiting sibling from handling
an NMI, which obviously violates the whole concept of a rendezvous.
Implement a static branch right at the beginning of the NMI handler
which is nopped out except when enabled by the late-loading mechanism.
The late loader enables the static branch before stop_machine() is
invoked. Each CPU has an nmi_enable flag in its control structure which
indicates whether the CPU should enter the update routine.
This is required to bridge the gap between enabling the branch and the
point where the CPU actually has to enter the loader wait loop.
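
As an illustration only (not the patch's actual code), a minimal sketch of such a gate at the top of the NMI path; the identifiers ucode_nmi_rendezvous, ucode_ctrl and load_secondary_wait() are placeholders:

```c
#include <linux/jump_label.h>
#include <linux/percpu.h>

/* Illustrative sketch only -- all identifiers below are placeholders. */
static DEFINE_STATIC_KEY_FALSE(ucode_nmi_rendezvous);

struct ucode_ctrl {
	bool	nmi_enabled;	/* armed right before the CPU self-NMIs */
};
static DEFINE_PER_CPU(struct ucode_ctrl, ucode_ctrl);

/* Placeholder for the rendezvous/update wait loop run in NMI context. */
extern void load_secondary_wait(void);

/* Checked first thing in the NMI path; costs a NOP while the key is off. */
static bool ucode_nmi_handler(void)
{
	/* Static branch: only taken after late loading enabled the key. */
	if (!static_branch_unlikely(&ucode_nmi_rendezvous))
		return false;

	/* Not armed: this NMI is not ours, let the normal handlers run. */
	if (!this_cpu_read(ucode_ctrl.nmi_enabled))
		return false;

	/* Armed: consume the flag and spin in the loader wait loop. */
	this_cpu_write(ucode_ctrl.nmi_enabled, false);
	load_secondary_wait();
	return true;
}
```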
Each CPU which arrives in the stopper thread function sets that flag and
issues a self NMI right afterwards. If the NMI function sees the flag
clear, it returns. If the flag is set, it clears the flag and enters the
rendezvous.
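
Building on the placeholder definitions in the sketch above, the secondary-CPU side of the stopper callback could look roughly like this; apic->send_IPI_self(NMI_VECTOR) is one existing way to raise a self NMI on x86, although the actual patch may use a different wrapper:

```c
#include <asm/apic.h>
#include <asm/irq_vectors.h>

/*
 * Illustrative sketch: arm the per-CPU flag, then kick this CPU with an
 * NMI so the wait loop runs in NMI context rather than in the stopper
 * thread.
 */
static void ucode_kick_self_into_nmi(void)
{
	this_cpu_write(ucode_ctrl.nmi_enabled, true);
	apic->send_IPI_self(NMI_VECTOR);
	/*
	 * A real NMI arriving between the two statements above finds the
	 * flag set and is swallowed by the microcode update; the pending
	 * self NMI then finds the flag clear and lets execution continue
	 * instead of ending up as a spurious NMI.
	 */
}
```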
This is safe against a real NMI which hits between setting the flag and
sending the self NMI. The real NMI will be swallowed by the microcode
update and the self NMI will then let execution continue. Without this,
the result would be a spurious NMI.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20231002115903.489900814@linutronix.de
Diffstat (limited to 'arch/x86/kernel/cpu/microcode/intel.c')
-rw-r--r-- | arch/x86/kernel/cpu/microcode/intel.c | 1 |
1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index e5c5ddfd6831..905ed3b557fb 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -611,6 +611,7 @@ static struct microcode_ops microcode_intel_ops = {
 	.collect_cpu_info = collect_cpu_info,
 	.apply_microcode = apply_microcode_late,
 	.finalize_late_load = finalize_late_load,
+	.use_nmi = IS_ENABLED(CONFIG_X86_64),
 };
 
 static __init void calc_llc_size_per_core(struct cpuinfo_x86 *c)
```
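
The one-line hunk above only marks the Intel ops as NMI-capable on 64-bit kernels. As a hedged sketch (placeholder function name, reusing the illustrative helpers from the sketches above), the core late loader could consult that flag along these lines:

```c
/*
 * Illustrative sketch only: let the ops decide whether a secondary CPU
 * waits for the rendezvous in NMI context or in the stopper thread.
 */
static void ucode_secondary_entry(void)
{
	if (microcode_ops->use_nmi)
		ucode_kick_self_into_nmi();	/* arm flag, self NMI, wait in NMI */
	else
		load_secondary_wait();		/* wait directly in stop_machine() */
}
```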