- !LLSC now only needs a single spinlock for atomics and bitops
- Some codegen changes (slight bloat) with generic bitops
1. code increase due to the LD-check-atomic paradigm vs. an unconditional
atomic (which dirties the cache line even if the bit is already set).
So despite the increase, generic is the right thing to do.
2. code decrease (but use of costlier instructions such as DIV vs.
shift-based math) due to signed arithmetic.
This needs to be revisited separately.
arc:
static inline int test_bit(unsigned int nr, const volatile unsigned long *addr)
^^^^^^^^^^^^
generic:
static inline int test_bit(int nr, const volatile unsigned long *addr)
^^^
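For reference, the LD-check-atomic paradigm of the generic (lock based)
bitops looks roughly like this (a simplified sketch in the spirit of
asm-generic/bitops/lock.h, not verbatim):
static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
{
	unsigned long mask = BIT_MASK(nr);

	p += BIT_WORD(nr);

	/* LD-check first: if the bit is already set, skip the atomic
	 * RMW altogether and avoid dirtying the cache line */
	if (READ_ONCE(*p) & mask)
		return 1;

	return !!(atomic_long_fetch_or(mask, (atomic_long_t *)p) & mask);
}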
Link: https://lore.kernel.org/r/20180830135749.GA13005@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[vgupta: wrote patch based on Will's poc, analysed codegen diffs]
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
Some typos were found by the codespell tool:
./intc-compact.c:145: prioity ==> priority
./smp.c:286: recevier ==> receiver
./stacktrace.c:152: prelogue ==> prologue
Fix typos found by codespell.
Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Changcheng Deng <deng.changcheng@zte.com.cn>
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
As pointed out by commit
de9b8f5dcb ("sched: Fix crash trying to dequeue/enqueue the idle thread")
init_idle() can and will be invoked more than once on the same idle
task. At boot time, it is invoked for the boot CPU thread by
sched_init(). Then smp_init() creates the threads for all the secondary
CPUs and invokes init_idle() on them.
As the hotplug machinery brings the secondaries to life, it will issue
calls to idle_thread_get(), which itself invokes init_idle() yet again.
In this case it's invoked twice more per secondary: at _cpu_up(), and at
bringup_cpu().
Given smp_init() already initializes the idle tasks for all *possible*
CPUs, no further initialization should be required. Now, removing
init_idle() from idle_thread_get() exposes some interesting expectations
with regards to the idle task's preempt_count: the secondary startup always
issues a preempt_disable(), requiring some reset of the preempt count to 0
between hot-unplug and hotplug, which is currently served by
idle_thread_get() -> idle_init().
Given the idle task is supposed to have preemption disabled once and never
see it re-enabled, it seems that what we actually want is to initialize its
preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove
init_idle() from idle_thread_get().
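A minimal sketch of the intent (simplified; the exact helper and where
preempt_count lives vary by architecture):
/* idle tasks start with preemption disabled and it stays that way */
static inline void init_idle_preempt_count(struct task_struct *p, int cpu)
{
	task_thread_info(p)->preempt_count = PREEMPT_DISABLED;
}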
Secondary startups were patched via coccinelle:
@begone@
@@
-preempt_disable();
...
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512094636.2958515-1-valentin.schneider@arm.com
When a secondary CPU fails to come up, there is a missing space in the
log:
Timeout: CPU1 FAILED to comeup !!!
Fix it.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Based on 2 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license version 2 as
published by the free software foundation #
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
As we have an option in u-boot to set a CPU mask for running Linux,
we want to pass information to the kernel about which CPU cores should
be brought up. So we patch the kernel dtb in u-boot to set the
possible-cpus property.
This also allows us to have a correctly set up MCIP debug mask.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Please do not apply this to mainline directly, instead please re-run the
coccinelle script shown below and apply its output.
For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't harmful, and changing them results in
churn.
However, for some features, the read/write distinction is critical to
correct operation. To distinguish these cases, separate read/write
accessors must be used. This patch migrates (most) remaining
ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
coccinelle script:
----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()
// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
virtual patch
@ depends on patch @
expression E1, E2;
@@
- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)
@ depends on patch @
expression E;
@@
- ACCESS_ONCE(E)
+ READ_ONCE(E)
----
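For illustration, a typical conversion produced by the script looks
like this (hypothetical snippet):
-	next = ACCESS_ONCE(node->next);
-	ACCESS_ONCE(node->locked) = 1;
+	next = READ_ONCE(node->next);
+	WRITE_ONCE(node->locked, 1);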
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: mpe@ellerman.id.au
Cc: shuah@kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: viro@zeniv.linux.org.uk
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Apart from adding the helper function itself, the rest of the kernel is
converted mechanically using:
git grep -l 'atomic_inc.*mm_users' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_users);/mmget\(\1\);/'
git grep -l 'atomic_inc.*mm_users' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_users);/mmget\(\&\1\);/'
This is needed for a later patch that hooks into the helper, but might
be a worthwhile cleanup on its own.
(Michal Hocko provided most of the kerneldoc comment.)
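For reference, the helper itself is essentially a thin wrapper (sketch):
static inline void mmget(struct mm_struct *mm)
{
	atomic_inc(&mm->mm_users);
}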
Link: http://lkml.kernel.org/r/20161218123229.22952-2-vegard.nossum@oracle.com
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Apart from adding the helper function itself, the rest of the kernel is
converted mechanically using:
git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)->mm_count);/mmgrab\(\1\);/'
git grep -l 'atomic_inc.*mm_count' | xargs sed -i 's/atomic_inc(&\(.*\)\.mm_count);/mmgrab\(\&\1\);/'
This is needed for a later patch that hooks into the helper, but might
be a worthwhile cleanup on its own.
(Michal Hocko provided most of the kerneldoc comment.)
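For reference, the helper itself is essentially a thin wrapper (sketch):
static inline void mmgrab(struct mm_struct *mm)
{
	atomic_inc(&mm->mm_count);
}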
Link: http://lkml.kernel.org/r/20161218123229.22952-1-vegard.nossum@oracle.com
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is needed on HS38 cores for setting up the IO-Coherency aperture
properly.
The polling could perturb the caches and coherency fabric, which could
be wrong in the small window when the Master is setting up the IOC
aperture etc. in arc_cache_init().
We do this only for ARCv2 based builds so as not to affect the EZChip
ARCompact based platform.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
For run-on-reset SMP configs, non-master cores call a routine which
waits until the Master gives them a "go" signal (currently using a shared
mem flag). The same routine then jumps off to the well-known entry point
of all non-Master cores, i.e. @first_lines_of_secondary.
This patch moves the last part out into one single place in the early
boot code.
This is better in terms of abstraction (the wait API only waits and
returns), leaving out the "jump off to" part.
In the actual implementation this requires some restructuring of the
early boot code, as the Master now jumps to BSS setup explicitly,
vs. falling through into it before.
Technically this patch doesn't cause any functional change; it just
moves the ugly #ifdef'ry from assembly code to "C".
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This came up when reviewing code to address the missing IRQ affinity
setting in the AXS103 platform and/or implementing hierarchical IRQ
domains:
- smp_ipi_irq_setup() callers pass a hwirq but it in turn calls
request_percpu_irq() which expects a linux virq. So invoke
irq_find_mapping() to do the conversion
(also make this explicit in code by renaming the args appropriately)
- idu_of_init()/idu_cascade_isr() were similarly using a linux virq where
a hwirq is expected, so do the conversion using the irqd_to_hwirq()
helper (see the sketch below)
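A sketch of the two conversions (simplified; variable names are
illustrative):
/* hwirq -> linux virq, before requesting the per-cpu IRQ */
virq = irq_find_mapping(NULL, hwirq);
request_percpu_irq(virq, isr, "IPI", dev);

/* linux virq -> hwirq, e.g. inside the cascade ISR */
hwirq = irqd_to_hwirq(irq_get_irq_data(virq));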
Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
[vgupta: made changelog a bit more concise]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
In smp_prepare_cpus() we set the present CPU mask for all CPUs in the
range [0..max_cpus] as part of init.
This is done without checking if the mask has already been set.
On the eznps platform this mask is already initialized in
smp_init_cpus(), via the plat_smp_ops.init_early_smp() hook.
So, to avoid overriding the present CPU mask, check the number of bits
set in it: at this point only the boot CPU's bit is expected to be set,
so if no more than one bit is set we can be sure the mask has not been
initialized elsewhere.
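The check boils down to something like (sketch):
/* only the boot CPU's bit should be set, unless a platform hook
 * (e.g. eznps init_early_smp()) already filled the mask in */
if (num_present_cpus() <= 1) {
	for (i = 0; i < max_cpus; i++)
		set_cpu_present(i, true);
}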
Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
In the SMP setup, the master loops over each present CPU calling
cpu_up().
On ARC this returns as soon as the new CPU's status becomes online;
however, the secondary may still be doing HW initialization at the
machine or platform hook level.
So mark the secondary online only after all HW setup is done.
Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
ARC Timers have so far been handled as "legacy", w/o an explicit
description in DT. This poses a challenge for newer platforms wanting
to use them. This series will eventually help move the timers over to DT.
This patch makes a small change: use a CPU notifier to set up the
clockevent on non-boot CPUs, so explicit setup is done only on the boot
CPU (which will later be done via DT).
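The shape of the notifier is roughly (sketch; the handler and the local
setup routine names are assumptions):
static int arc_timer_cpu_notify(struct notifier_block *self,
				unsigned long action, void *hcpu)
{
	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_STARTING:
		/* non-boot CPU coming up: set up its local clockevent */
		arc_local_timer_setup();
		break;
	}
	return NOTIFY_OK;
}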
Signed-off-by: Noam Camus <noamc@ezchip.com>
[vgupta: broken off from a bigger patch]
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- The idea is to remove the API usage since it has a subtle
design flaw: it relies on being called on cpu0 first. This is true for
some early per-cpu irqs such as TIMER/IPI, but not for late-probed
per-cpu peripherals such as perf. And its usage in perf has already
bitten us once: see c6317bc7c5
("ARCv2: perf: Ensure perf intr gets enabled on all cores") where we
ended up open coding it anyways
- The seeming duplication will go away once we start using cpu notifier
for timer setup
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Let the non boot cpus call into idle with the corresponding hotplug state, so
the hotplug core can handle the further bringup. That's a first step to
convert the boot side of the hotplugged cpus to do all the synchronization
with the other side through the state machine. For now it'll only start the
hotplug thread and kick the full bringup of the cpu.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: Rik van Riel <riel@redhat.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
This will better reflect its description, i.e. "any needed setup..."
rather than just an "IPI request".
Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Note this is not part of the platform owned static machine_desc,
but more of the device owned plat_smp_ops (rather misnamed) which an IPI
provider or some such typically defines.
This will help us separate out the IPI registration from the platform
specific init_cpu_smp() into the device specific init_irq_cpu().
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This adds a platform agnostic early SMP init hook which is called on
Master core before calling setup_processor()
setup_arch()
    smp_init_cpus()
        smp_ops.init_early_smp()
    ...
    setup_processor()
How this helps:
- Used for one time init of certain SMP centric IP blocks, before
calling setup_processor() which probes various bits of core,
possibly including this block
- Currently platforms need to call this IP block init from their
init routines, which doesn't make sense as this is specific to the ARC
core, not the platform, and otherwise requires copy/paste in all of
them (and hence is a possible point of failure)
e.g. MCIP init is called from 2 platforms currently (axs10x and sim)
which will go away once we have this.
This change only adds the hooks but they are empty for now. Next commit
will populate them and remove the explicit init calls from platforms.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
For the non halt-on-reset case, all cores start off simultaneously in
@stext. Master core0 proceeds with kernel boot, while the others
spin-wait on @wake_flag being set by the master once it is ready. So NO
hardware assist is needed for the master to "kick" the others.
This patch moves this soft implementation out of mcip.c (as there is no
hardware assist) into common smp.c
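A minimal sketch of the soft kick/wait handshake (simplified; not the
exact code):
static volatile int wake_flag;

static void arc_default_smp_cpu_kick(int cpu, unsigned long pc)
{
	wake_flag = cpu;		/* master: release @cpu */
}

void arc_platform_smp_wait_to_boot(int cpu)
{
	while (wake_flag != cpu)	/* secondary: spin until kicked */
		mb();
	wake_flag = 0;			/* ack, then jump to entry point */
}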
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
This allows platforms to provide their own cpu wakeup routines
as well as IPI send / clear backends, while allowing an SMP kernel w/o
any such backend to build/boot
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
In light of the recent SNAFU with the SMP build, allow a simple platform
to build as SMP but run UP.
* Remove the dependence on the simulation SMP extension, to enable quick
build/test iterations of the SMP kernel.
* In the absence of platform SMP registration, prevent the NULL smp
feature name from borking the system
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The current cpu-private IRQ registration is ugly as it requires
exposing arch_unmask_irq() outside of the intc code.
So switch to percpu IRQ APIs:
-request_percpu_irq [boot core]
-enable_percpu_irq [all cores]
Encapsulated in helper arc_request_percpu_irq()
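The helper boils down to (sketch):
void arc_request_percpu_irq(int irq, int cpu, irq_handler_t isr,
			    const char *irq_nm, void *percpu_dev)
{
	/* boot core registers the IRQ once ... */
	if (!cpu)
		request_percpu_irq(irq, isr, irq_nm, percpu_dev);

	/* ... while each core (boot included) unmasks it locally */
	enable_percpu_irq(irq, 0);
}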
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Handle it just like the timer. The current request_percpu_irq() would
fail on non-boot cpus, and thus the IRQ would remain masked on those
cpus.
[vgupta: fix changelog]
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
* Don't send an IPI if receiver already has a pending IPI.
Atomically piggyback the new msg with pending msg.
* IPI receiver looping on xchg() not required
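With this, the send path becomes roughly (hedged sketch; the per-cpu
@ipi_data pending word is an assumption):
static void ipi_send_msg_one(int cpu, enum ipi_msg_type msg)
{
	unsigned long *pending = &per_cpu(ipi_data, cpu);
	unsigned long old, new;

	do {
		new = old = *pending;
		new |= 1UL << msg;	/* piggyback msg onto pending bits */
	} while (cmpxchg(pending, old, new) != old);

	/* only interrupt the receiver if no msg was already pending */
	if (!old)
		plat_smp_ops.ipi_send(cpu);
}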
References: https://lkml.org/lkml/2013/11/25/232
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The interface is confusing: it feels like we are getting "sender" info,
whereas it is the "receiver", which can very well be retrieved via
smp_processor_id(), if need be.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The current IPI sending callstack needlessly involves cpumask.
arch_send_call_function_single_ipi(cpu) / smp_send_reschedule(cpu)
    ipi_send_msg(cpumask_of(cpu))         --> [cpu to cpumask]
        plat_smp_ops.ipi_send(callmap)
            for_each_cpu(callmap)         --> [cpumask to cpu]
                do_plat_specific_ipi_PER_CPU
Given that current backends are not capable of 1:N IPIs, lets simplify
the interface for now, by keeping "a" cpu all along.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Commit 9a46ad6d6d "smp: make smp_call_function_many() use logic
similar to smp_call_function_single()" has unified the way to handle
single and multiple cross-CPU function calls. Now only one interrupt
is needed for architecture specific code to support generic SMP function
call interfaces, so kill the redundant single function call interrupt.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
- Add mm_cpumask setting (aggregating only, unlike some other arches)
used to restrict the TLB flush cross-calling
- cross-calling versions of TLB flush routines (thanks to Noam)
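In essence (sketch; the flush IPI handler and its args are illustrative
names):
/* switch_mm(): aggregate-only - bits are set, never cleared */
cpumask_set_cpu(smp_processor_id(), mm_cpumask(next_mm));

/* flush_tlb_*(): cross-call only CPUs which ever ran this mm */
on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_page, &ta, 1);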
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/arc uses of the __cpuinit macros from
all C files. Currently arc does not have any __CPUINIT used in
assembly files.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
The generic idle loop implements all functionality. Aside from that it
allows arc to implement the tsk_is_polling() functionality correctly,
despite the patently wrong (now gone) comment in the original arc
cpu_idle() function:
/* Since we SLEEP in idle loop, TIF_POLLING_NRFLAG can't be set */
See kernel/cpu/idle.c
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Cc: Magnus Damm <magnus.damm@gmail.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Tested-by: Vineet Gupta <vgupta@synopsys.com>
Link: http://lkml.kernel.org/r/20130321215233.711253792@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
This again is for the switch from the singleton platform SMP API to the
multi-platform paradigm.
Platform code is not yet set up to populate the callbacks; that happens
in the next commit.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
- platform API is retired and callbacks are used instead
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
The original platform code organization was the singleton design
pattern - only one platform (and board thereof) would build at a time.
Thus any platform/board specific code (e.g. irq init, early init ...)
expected by ARC common code was exported as a well defined set of APIs,
with only ONE instance ever building.
Now with the multiple-platform build requirement, that design no
longer holds - multiple board specific calls need to build at the same
time - so ARC common code can't use the API approach; it needs a
callback based design where each board registers its specific set of
functions, and at runtime, depending on board detection, the callbacks
are used from the registry.
This commit adds all the infrastructure, where board specific callbacks
are specified as a "machine description".
All the hooks are placed in the right spots; no board callbacks are
registered yet (via MACHINE_START/END constructs), so the hooks will
not run.
Next commit will actually convert the platform to this infrastructure.
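A board registration would then look like (illustrative sketch; the
board name and callbacks are made up):
static const char *myboard_compat[] __initconst = {
	"snps,myboard",
	NULL,
};

MACHINE_START(MYBOARD, "myboard")
	.dt_compat	= myboard_compat,
	.init_early	= myboard_init_early,	/* hypothetical callbacks */
	.init_irq	= myboard_init_irq,
MACHINE_END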
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>