author     Marc Zyngier <maz@kernel.org>                2020-09-11 19:16:11 +0100
committer  Catalin Marinas <catalin.marinas@arm.com>    2020-09-13 13:18:10 +0100
commit     ed888cb0d1ebce69f12794e89fbd5e2c86d40b8d
tree       e93170f182f651f95ab98fc1b4b2268c1e87881a      /arch/arm64/kernel/cpu_errata.c
parent     e0328feda79d9681b3e3245e6e180295550c8ee9
arm64: Allow CPUs unaffected by ARM erratum 1418040 to come in late
Now that we allow CPUs affected by erratum 1418040 to come in late,
the capability type used for the workaround prevents their unaffected
siblings from coming in late (or from coming back after a suspend or
hotplug-off, which amounts to the same thing).
To allow this, we need to add ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU,
which amounts to setting .type to ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE.
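
For reference, ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE is defined in
arch/arm64/include/asm/cpufeature.h; around this kernel version it
combines the per-CPU scope with both late-CPU flags, roughly:

	/* Per-CPU scope; a late CPU may either have or lack the capability. */
	#define ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE		\
		(ARM64_CPUCAP_SCOPE_LOCAL_CPU		|	\
		 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU	|	\
		 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)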
Fixes: bf87bb0881d0 ("arm64: Allow booting of late CPUs affected by erratum 1418040")
Reported-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Tested-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
Tested-by: Matthias Kaehlcke <mka@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20200911181611.2073183-1-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Diffstat (limited to 'arch/arm64/kernel/cpu_errata.c')
 arch/arm64/kernel/cpu_errata.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index c332d49780dc..560ba69e13c1 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -910,8 +910,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		.desc = "ARM erratum 1418040",
 		.capability = ARM64_WORKAROUND_1418040,
 		ERRATA_MIDR_RANGE_LIST(erratum_1418040_list),
-		.type = (ARM64_CPUCAP_SCOPE_LOCAL_CPU |
-			 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU),
+		/*
+		 * We need to allow affected CPUs to come in late, but
+		 * also need the non-affected CPUs to be able to come
+		 * in at any point in time. Wonderful.
+		 */
+		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
 	},
 #endif
 #ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT
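
The two late-CPU flags are complementary: ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU
allows a late CPU that has the capability to come up after the system has been
finalized without it, while ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU allows a late
CPU that lacks the capability to come up after the system has enabled it. The
snippet below is a self-contained sketch of that decision, not the kernel's
actual verification code; the flag values and the late_cpu_allowed() helper
are invented for illustration.

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative stand-ins for the two late-CPU flag bits (values invented). */
	#define CAP_PERMITTED_FOR_LATE_CPU	(1U << 4)
	#define CAP_OPTIONAL_FOR_LATE_CPU	(1U << 5)

	/*
	 * Sketch of the conflict check applied when a CPU comes online after
	 * the system-wide capability state has been finalized.
	 */
	static bool late_cpu_allowed(unsigned int type, bool cpu_has_cap,
				     bool system_has_cap)
	{
		if (cpu_has_cap && !system_has_cap)
			/* e.g. an affected CPU booting late: needs PERMITTED. */
			return type & CAP_PERMITTED_FOR_LATE_CPU;
		if (!cpu_has_cap && system_has_cap)
			/* e.g. an unaffected sibling resuming: needs OPTIONAL. */
			return type & CAP_OPTIONAL_FOR_LATE_CPU;
		return true;	/* CPU and system agree: no conflict. */
	}

	int main(void)
	{
		unsigned int old_type = CAP_PERMITTED_FOR_LATE_CPU;
		unsigned int new_type = CAP_PERMITTED_FOR_LATE_CPU |
					CAP_OPTIONAL_FOR_LATE_CPU;

		/* An unaffected CPU coming back while the workaround is enabled. */
		printf("old type: %s\n",
		       late_cpu_allowed(old_type, false, true) ? "allowed" : "rejected");
		printf("new type: %s\n",
		       late_cpu_allowed(new_type, false, true) ? "allowed" : "rejected");
		return 0;
	}

With the old type the unaffected sibling is rejected ("old type: rejected"),
while the combined type accepted by this patch lets it back in
("new type: allowed").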