path: root/arch/arm/mach-berlin/platsmp.c
2023-05-29  ARM: mm: Make virt_to_pfn() a static inline  (Linus Walleij)
Making virt_to_pfn() a static inline taking a strongly typed (const void *) makes the contract of passing a pointer of that type to the function explicit, and exposes any misuse of the virt_to_pfn() macro acting polymorphically and accepting many types such as (void *), (uintptr_t) or (unsigned long) as arguments without warnings.

Doing this is a bit intrusive: virt_to_pfn() requires PHYS_PFN_OFFSET and PAGE_SHIFT to be defined, and these are defined in <asm/page.h>, so it must be included *before* <asm/memory.h>. The use of macros was obscuring the unclear inclusion order here: the macros would eventually be resolved, but a static inline like this cannot be compiled with unresolved macros.

The naive solution of including <asm/page.h> at the top of <asm/memory.h> does not work, because <asm/memory.h> sometimes includes <asm/page.h> at the end of itself, which would create a confusing inclusion loop. So instead, take the approach of always unconditionally including <asm/page.h> at the end of <asm/memory.h>.

arch/arm uses <asm/memory.h> explicitly in a lot of places; however, it turns out that if we just unconditionally include <asm/memory.h> into <asm/page.h> and switch all inclusions of <asm/memory.h> to <asm/page.h> instead, we enforce the right order and <asm/memory.h> will always have access to the definitions. Put an inclusion guard in place making it impossible to include <asm/memory.h> explicitly.

Link: https://lore.kernel.org/linux-mm/20220701160004.2ffff4e5ab59a55499f4c736@linux-foundation.org/
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
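Roughly, the before/after shape of the change on ARM (simplified; the real definition lives in the arch/arm headers):

    /* Before: a polymorphic macro that silently accepts (void *),
     * (uintptr_t) or (unsigned long) arguments. */
    #define virt_to_pfn(kaddr) \
            ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
             PHYS_PFN_OFFSET)

    /* After: a static inline with a strongly typed (const void *)
     * parameter, so passing the wrong type now warns.  Note it can
     * only compile if PHYS_PFN_OFFSET and PAGE_SHIFT are already
     * defined, which is what forces the inclusion-order fix. */
    static inline unsigned long virt_to_pfn(const void *kaddr)
    {
            return (((unsigned long)kaddr - PAGE_OFFSET) >> PAGE_SHIFT) +
                   PHYS_PFN_OFFSET;
    }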
2018-05-24  ARM: berlin: switch to SPDX license identifier  (Jisheng Zhang)
Use the appropriate SPDX license identifier and drop the previous boilerplate license text.

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
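The resulting first line of the file looks like this (GPL-2.0 is assumed here; the commit picks whichever identifier matches the boilerplate being dropped):

    // SPDX-License-Identifier: GPL-2.0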
2018-05-24  arm: berlin: remove non-necessary flush_cache_all()  (Jisheng Zhang)
I believe the flush_cache_all() after scu_enable() is there to "Ensure that the data accessed by CPU0 before the SCU was initialised is visible to the other CPUs.", as commented in scu_enable(). So this flush_cache_all() is a duplication; remove it.

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
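A hedged sketch of the simplification; the function and variable names are illustrative rather than the exact platsmp.c contents:

    #include <linux/init.h>
    #include <asm/smp_scu.h>        /* scu_enable() */

    static void __iomem *scu_base;  /* mapped during init; illustrative */

    static void __init berlin_smp_prepare_cpus(unsigned int max_cpus)
    {
            scu_enable(scu_base);
            /* flush_cache_all();  -- removed: scu_enable() already
             * guarantees that data accessed by CPU0 before SCU init
             * is visible to the other CPUs. */
    }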
2017-02-28  ARM: 8646/1: mmu: decouple VECTORS_BASE from Kconfig  (Afzal Mohammed)
For MMU configurations, VECTORS_BASE is always 0xffff0000, so a macro definition will suffice. For no-MMU, the exception base address is determined dynamically in subsequent patches. To preserve bisectability, make the macro applicable to the no-MMU scenario too for now.

Thanks to the 0-DAY kernel test infrastructure that found the bisectability issue.

This macro will be restricted to the MMU case upon dynamically determining the exception base address for no-MMU. Once the exception address is handled dynamically for no-MMU, VECTORS_BASE can be removed from Kconfig.

Signed-off-by: afzal mohammed <afzal.mohd.ma@gmail.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
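The macro in question, roughly; the exact spelling and header location may differ from this sketch:

    /* With an MMU, the high vectors always sit at this fixed
     * virtual address, so a plain macro replaces the old Kconfig
     * symbol.  The no-MMU build temporarily shares it, keeping the
     * series bisectable, until later patches compute the base at
     * runtime. */
    #define VECTORS_BASE    UL(0xffff0000)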
2017-02-28  ARM: 8641/1: treewide: Replace uses of virt_to_phys with __pa_symbol  (Florian Fainelli)
All low-level PM/SMP code using virt_to_phys() should actually use __pa_symbol() against kernel symbols. Update code where relevant to move away from virt_to_phys().

Acked-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
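A minimal before/after sketch of the pattern this treewide patch applies; berlin_secondary_startup is the real symbol, but boot_reg stands in for whichever register the platform actually writes:

    /* Before: virt_to_phys() on a kernel symbol. */
    writel(virt_to_phys(berlin_secondary_startup), boot_reg);

    /* After: __pa_symbol() is the correct primitive for taking the
     * physical address of a kernel symbol such as a secondary-entry
     * function; virt_to_phys() is for linearly mapped memory. */
    writel(__pa_symbol(berlin_secondary_startup), boot_reg);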
2015-12-01  ARM: use const and __initconst for smp_operations  (Masahiro Yamada)
These smp_operations structures are not over-written, so add "const" qualifier and replace __initdata with __initconst. Also, add "static" where it is possible.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Moritz Fischer <moritz.fischer@ettus.com>
Acked-by: Stephen Boyd <sboyd@codeaurora.org> # qcom part
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Patrice Chotard <patrice.chotard@st.com>
Acked-by: Heiko Stuebner <heiko@sntech.de>
Acked-by: Wei Xu <xuwei5@hisilicon.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Shawn Guo <shawnguo@kernel.org>
Acked-by: Matthias Brugger <matthias.bgg@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
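The shape of the change, using the Berlin ops as an example with the member list abbreviated:

    /* Before */
    struct smp_operations berlin_smp_ops __initdata = {
            .smp_boot_secondary     = berlin_boot_secondary,
    };

    /* After: the structure is never written at runtime, so it can
     * be static, const, and placed in the __initconst section. */
    static const struct smp_operations berlin_smp_ops __initconst = {
            .smp_boot_secondary     = berlin_boot_secondary,
    };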
2015-10-15  arm: berlin: add CPU hotplug support  (Jisheng Zhang)
Add CPU hotplug support for Berlin SoCs such as the BG2 and BG2Q. These SoCs don't support powering off a CPU independently, but we still want CPU hotplug support on them. We achieve this goal by putting the dying CPU in WFI state after its coherency is disabled, then asserting the dying CPU's reset bit to put the CPU in the reset state.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
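A hedged sketch of the two halves of this scheme; berlin_assert_cpu_reset() is a hypothetical helper name standing in for the write to the reset register introduced in the previous commit:

    #include <asm/cacheflush.h>     /* v7_exit_coherency_flush() */
    #include <asm/proc-fns.h>       /* cpu_do_idle() */

    /* Runs on the CPU going down: leave SMP coherency, then sit
     * in WFI until another CPU asserts our reset bit. */
    static void berlin_cpu_die(unsigned int cpu)
    {
            v7_exit_coherency_flush(louis);
            while (1)
                    cpu_do_idle();          /* WFI */
    }

    /* Runs on a surviving CPU: hold the dead CPU in reset. */
    static int berlin_cpu_kill(unsigned int cpu)
    {
            berlin_assert_cpu_reset(cpu);   /* hypothetical helper */
            return 1;
    }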
2015-10-15  arm: berlin: use non-self-cleared reset register to reset cpu  (Jisheng Zhang)
In Berlin SoCs, there are two kinds of CPU reset control registers: the first kind's bits are self-cleared after some cycles, while the second kind's bits are not. Previously the first kind of reset control register was used; this patch switches to the second kind to prepare for the next hotplug commit.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
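The distinction, sketched with invented register names (RESET_PULSE and RESET_HOLD are illustrative, not the Berlin register layout):

    /* Kind 1: self-clearing.  The bit drops back to 0 by itself
     * after some cycles, so the CPU comes straight back out of
     * reset and cannot be parked there. */
    writel(BIT(cpu), ctrl_base + RESET_PULSE);

    /* Kind 2: not self-cleared.  The bit stays set until software
     * clears it, so hotplug can hold a dead CPU in reset and
     * release it again when the CPU is plugged back in. */
    writel(readl(ctrl_base + RESET_HOLD) | BIT(cpu),
           ctrl_base + RESET_HOLD);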
2015-06-01  ARM: v7 setup function should invalidate L1 cache  (Russell King)
All ARMv5 and older CPUs invalidate their caches in the early assembly setup function, prior to enabling the MMU. This is because the L1 cache should not contain any data relevant to the execution of the kernel at this point; all data should have been flushed out to memory.

This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed, these typically do not search their caches when caching is disabled (as it needs to be when the MMU is disabled) so this change should be safe.

ARMv7 allows there to be CPUs which search their caches while caching is disabled, and it's permitted that the cache is uninitialised at boot; for these, the architecture reference manual requires that an implementation specific code sequence is used immediately after reset to ensure that the cache is placed into a sane state. Such functionality is definitely outside the remit of the Linux kernel, and must be done by the SoC's firmware before _any_ CPU gets to the Linux kernel.

Changing the data cache clean+invalidate to a mere invalidate allows us to get rid of a lot of platform specific hacks around this issue for their secondary CPU bringup paths - some of which were buggy.

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Shawn Guo <shawn.guo@linaro.org>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Michal Simek <michal.simek@xilinx.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-06-16  ARM: berlin: add SMP support  (Antoine Ténart)
Add SMP support for Berlin SoCs. Secondary CPUs are reset, then execute the instruction we put in the reset exception register, which sets the pc to the address contained in the software reset address register - the physical address of the Berlin secondary startup code. This implementation avoids using the holding-pen lock mechanism.

Signed-off-by: Antoine Ténart <antoine.tenart@free-electrons.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
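A sketch of the boot path described above; RESET_ADDR_REG and berlin_reset_cpu() are illustrative names, not the actual Berlin register layout:

    static int berlin_boot_secondary(unsigned int cpu,
                                     struct task_struct *idle)
    {
            /* 1. Software reset address register <- physical address
             *    of the secondary startup code. */
            writel(virt_to_phys(berlin_secondary_startup),
                   cpu_ctrl + RESET_ADDR_REG);

            /* 2. The reset exception register already holds an
             *    instruction that jumps to that address, so simply
             *    resetting the CPU sends it straight into secondary
             *    startup - no holding pen needed. */
            berlin_reset_cpu(cpu);          /* hypothetical helper */
            return 0;
    }

(Note the virt_to_phys() call here matches this 2014 commit; the 2017 treewide patch above later converts such uses to __pa_symbol().)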