Commit Graph

28 Commits

Author SHA1 Message Date
Christoph Lameter
494fc42170 sparc: Replace __get_cpu_var uses
__get_cpu_var() is used for multiple purposes in the kernel source. One of
them is address calculation via the form &__get_cpu_var(x).  This calculates
the address for the instance of the percpu variable of the current processor
based on an offset.

Other use cases are for storing and retrieving data from the current
processor's percpu area.  __get_cpu_var() can be used as an lvalue when
writing data or on the right side of an assignment.

__get_cpu_var() is defined as:

#define __get_cpu_var(var) (*this_cpu_ptr(&(var)))

__get_cpu_var() only ever performs an address calculation.  However, store
and retrieve operations could use a segment prefix (or a global register on
other platforms) to avoid that address calculation entirely.

this_cpu_write() and this_cpu_read() can directly take an offset into a
percpu area and use optimized assembly code to read and write per cpu
variables.

This patch converts __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations that
take the offset directly.  This avoids address calculations and uses fewer
registers in the generated code.

At the end of the patch set all uses of __get_cpu_var have been removed so
the macro is removed too.

The patch set includes passes over all arches as well.  Once these operations
are used throughout, specialized macros can be defined in non-x86 arches as
well in order to optimize per cpu access, e.g. by using a global register
that may be set to the per cpu base.

Transformations done to __get_cpu_var()

1. Determine the address of the percpu instance of the current processor.

	DEFINE_PER_CPU(int, y);
	int *x = &__get_cpu_var(y);

    Converts to

	int *x = this_cpu_ptr(&y);

2. Same as #1 but this time an array structure is involved.

	DEFINE_PER_CPU(int, y[20]);
	int *x = __get_cpu_var(y);

    Converts to

	int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per cpu
variable.

	DEFINE_PER_CPU(int, y);
	int x = __get_cpu_var(y);

   Converts to

	int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct

	DEFINE_PER_CPU(struct mystruct, y);
	struct mystruct x = __get_cpu_var(y);

   Converts to

	memcpy(&x, this_cpu_ptr(&y), sizeof(x));

5. Assignment to a per cpu variable

	DEFINE_PER_CPU(int, y)
	__get_cpu_var(y) = x;

   Converts to

	__this_cpu_write(y, x);

6. Increment/decrement etc. of a per cpu variable

	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y)++

   Converts to

	__this_cpu_inc(y)
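
As a combined illustration of these patterns, a minimal sketch using a
hypothetical percpu counter (the variable and function names are not from
the patch itself):

	#include <linux/percpu.h>

	static DEFINE_PER_CPU(unsigned int, example_count);	/* hypothetical variable */

	/* Caller is assumed to have preemption disabled, as the __this_cpu_*
	 * forms require. */
	static void example_update(void)
	{
		/* old: unsigned int *p = &__get_cpu_var(example_count); */
		unsigned int *p = this_cpu_ptr(&example_count);

		/* old: __get_cpu_var(example_count)++; */
		__this_cpu_inc(example_count);

		/* old: __get_cpu_var(example_count) = *p + 1; */
		__this_cpu_write(example_count, *p + 1);
	}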

Cc: sparclinux@vger.kernel.org
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
2014-08-26 13:45:55 -04:00
David S. Miller
58556104e9 sparc64: Do not disable interrupts in nmi_cpu_busy()
nmi_cpu_busy() is an SMP function call that just makes sure that all of the
cpus are spinning, burning cpu cycles, while the NMI test runs.

It does not need to disable IRQs because we only care about NMIs executing,
which they will even with 'normal' IRQs disabled.

It is not legal to enable hard IRQs in an SMP cross call; in fact, this bug
triggers the BUG check in irq_work_run_list():

	BUG_ON(!irqs_disabled());

because irq_work_run() is now invoked from the tail of
generic_smp_call_function_single_interrupt().
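
A minimal sketch of the rule this enforces (handler and flag names are
hypothetical, not the actual sparc code): an SMP cross-call handler must not
turn hard IRQs back on, since the IRQ-work run at the tail of the cross-call
interrupt asserts that they are still disabled:

	#include <linux/smp.h>
	#include <linux/compiler.h>	/* READ_ONCE() */
	#include <asm/processor.h>	/* cpu_relax() */

	/* Handler run on remote cpus via smp_call_function(); 'done' is a
	 * hypothetical completion flag set by the initiating cpu. */
	static void busy_spin(void *data)
	{
		int *done = data;

		/* No local_irq_enable() here: NMIs are delivered regardless, and
		 * returning with hard IRQs enabled would trip the BUG_ON() above. */
		while (!READ_ONCE(*done))
			cpu_relax();
	}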

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-08-11 20:45:01 -07:00
David S. Miller
16ce8a30e6 sparc64: Normalize NMI watchdog logging and behavior.
Bring this code in line with the perf based generic NMI watchdog
in kernel/watchdog.c (which we should convert over to at some
point).

In particular, don't do anything super fancy when the watchdog
triggers, and specifically don't do a do_exit() which only makes
things worse.

Either panic(), or WARN().  The latter takes care of the useful
actions for us, such as printing a stack backtrace.
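
A sketch of the intended behaviour (names are assumptions, not the actual
handler): either panic outright or let WARN() do the reporting, backtrace
included:

	#include <linux/kernel.h>
	#include <linux/bug.h>

	static void report_hard_lockup(int cpu, bool do_panic)
	{
		if (do_panic)
			panic("Watchdog detected hard LOCKUP on cpu %d", cpu);

		/* WARN() prints the message plus a full stack backtrace. */
		WARN(1, "Watchdog detected hard LOCKUP on cpu %d", cpu);
	}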

Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-03 22:25:33 -07:00
David S. Miller
ce4a925c29 sparc64: Abstract away the %pcr values used to enable/disable NMI
We assumed PCR_PIC_PRIV can always be used to disable it, but that
won't be true for SPARC-T4.

This also allows us to get rid of some messy defines used in only
one location.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-08-18 23:26:19 -07:00
David S. Miller
73a6b0538c sparc64: Abstract away the NMI PIC counter computation.
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-08-18 23:26:18 -07:00
David S. Miller
09d053c797 sparc64: Abstract away PIC register accesses.
And, like for the PCR, allow indexing of different PIC register
numbers.

This also removes all of the non-__KERNEL__ bits from asm/perfctr.h;
nothing kernel-side should include it any more.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-08-18 23:26:14 -07:00
David S. Miller
0bab20ba4c sparc64: Add 'reg_num' argument to pcr_ops methods.
SPARC-T4 and later have multiple PCR registers, one for each
PIC counter.
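
A hedged sketch of the shape of such an interface (the struct and field
names here are illustrative, not necessarily the exact sparc definitions):

	#include <linux/types.h>

	struct pcr_operations {
		u64	(*read_pcr)(unsigned long reg_num);
		void	(*write_pcr)(unsigned long reg_num, u64 val);
	};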

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-08-18 23:04:08 -07:00
David Howells
d550bbd40c Disintegrate asm/system.h for Sparc
Disintegrate asm/system.h for Sparc.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: sparclinux@vger.kernel.org
2012-03-28 18:30:03 +01:00
Paul Gortmaker
066bcaca51 sparc: move symbol exporters to use export.h not module.h
Many of the core sparc kernel files are not modules, but were
including module.h just for exporting symbols.  Now these files can
use the lighter-footprint export.h for this role.
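
The pattern, sketched with a hypothetical symbol (not from the actual
patch): a file that only exports symbols needs export.h, which provides
EXPORT_SYMBOL(), rather than the much heavier module.h:

	/* before:  #include <linux/module.h> */
	#include <linux/export.h>

	int sparc_example_value;		/* hypothetical exported symbol */
	EXPORT_SYMBOL(sparc_example_value);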

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-10-31 19:30:53 -04:00
Peter Zijlstra
004417a6d4 perf, arch: Cleanup perf-pmu init vs lockup-detector
The perf hardware pmu got initialized at various points in the boot,
some before early_initcall(), some after (notably arch_initcall).

The problem is that the NMI lockup detector is run from early_initcall()
and expects the hardware pmu to be present.

Sanitize this by moving all architecture hardware pmu implementations to
initialize at early_initcall() and move the lockup detector to an explicit
initcall right after that.
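
A minimal sketch of the ordering convention this establishes (the function
name is hypothetical): each architecture registers its hardware PMU from an
early_initcall(), so anything at later initcall levels, such as the lockup
detector, can rely on it being present:

	#include <linux/init.h>

	static int __init example_hw_pmu_init(void)
	{
		/* probe the hardware counters and register the PMU with perf */
		return 0;
	}
	early_initcall(example_hw_pmu_init);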

Cc: paulus <paulus@samba.org>
Cc: davem <davem@davemloft.net>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290707759.2145.119.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-11-26 15:14:56 +01:00
David S. Miller
ec687886de sparc64: Run NMIs on the hardirq stack.
Otherwise we can overflow the main stack with the function tracer
enabled.

Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-14 02:04:29 -07:00
David S. Miller
daecbf58a5 sparc64: Use a separate counter for timer interrupts and NMI checks, like x86.
This keeps us from having to use kstat_irqs_cpu() from the NMI handler,
since that is a profiled function.

Instead, we use a currently empty slot in the cpu_data structure.
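
A sketch of the scheme (the names below are assumptions, not the actual
cpu_data field): the timer interrupt bumps a cheap per-cpu counter, and the
NMI handler only considers the cpu wedged if that counter has not moved
across several NMIs:

	#include <linux/percpu.h>
	#include <linux/types.h>

	static DEFINE_PER_CPU(unsigned long, tick_count);	/* bumped by the timer IRQ */
	static DEFINE_PER_CPU(unsigned long, last_tick_seen);
	static DEFINE_PER_CPU(unsigned int, alert_count);

	/* Called from the NMI handler, where preemption is implicitly disabled. */
	static bool cpu_looks_wedged(void)
	{
		unsigned long ticks = __this_cpu_read(tick_count);

		if (ticks == __this_cpu_read(last_tick_seen)) {
			/* no timer interrupt since the previous NMI */
			__this_cpu_inc(alert_count);
			return __this_cpu_read(alert_count) > 30;	/* illustrative threshold */
		}

		__this_cpu_write(last_tick_seen, ticks);
		__this_cpu_write(alert_count, 0);
		return false;
	}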

Signed-off-by: David S. Miller <davem@davemloft.net>
2010-04-12 22:37:07 -07:00
Tejun Heo
ab386128f2 Merge branch 'master' into percpu 2010-02-02 14:38:15 +09:00
Christoph Lameter
dbfc196a3c local_t: Remove leftover local.h
Somehow the local.h was not removed when taking out the local_t usage during
the 2.6.32 merge.

CC: David Miller <davem@davemloft.net>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
2010-01-05 15:35:24 +09:00
David S. Miller
8183e2b384 sparc64: Fix NMI programming when perf events are active.
If perf events are active, we should not reset the %pcr to
PCR_PIC_PRIV.  The perf events code does the management.

Signed-off-by: David S. Miller <davem@davemloft.net>
2010-01-04 15:37:04 -08:00
Rusty Russell
dd17c8f729 percpu: remove per_cpu__ prefix.
Now that the return from alloc_percpu is compatible with the address
of per-cpu vars, it makes sense to hand around the address of per-cpu
variables.  To make this sane, we remove the per_cpu__ prefix we
created to stop people from accidentally using these vars directly.

Now that we have sparse, we can use that (next patch).

tj: * Updated to convert stuff which were missed by or added after the
      original patch.

    * Kill per_cpu_var() macro.
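
A small sketch of what the change enables (variable names are hypothetical):
without the per_cpu__ prefix, a statically defined per-cpu variable and
memory from alloc_percpu() can both be handed around as ordinary per-cpu
pointers:

	#include <linux/percpu.h>

	static DEFINE_PER_CPU(int, static_counter);	/* hypothetical */

	static void example(void)
	{
		int __percpu *dyn = alloc_percpu(int);
		int *a, *b;

		if (!dyn)
			return;

		a = per_cpu_ptr(&static_counter, 0);	/* plain address-of now works */
		b = per_cpu_ptr(dyn, 0);
		*a = *b = 0;

		free_percpu(dyn);
	}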

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
2009-10-29 22:34:15 +09:00
Christoph Lameter
494f6a9e12 this_cpu: Use this_cpu_xx in nmi handling
this_cpu_inc/dec reduces the number of instructions needed.

Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
2009-10-12 19:51:49 +09:00
Ingo Molnar
cdd6c482c9 perf: Do the big rename: Performance Counters -> Performance Events
Bye-bye Performance Counters, welcome Performance Events!

In the past few months the perfcounters subsystem has grown out of its
initial role of counting hardware events, and has become (and is
becoming) a much broader generic event enumeration, reporting, logging,
monitoring, and analysis facility.

Naming its core object 'perf_counter' and naming the subsystem
'perfcounters' has become more and more of a misnomer. With pending
code like hw-breakpoints support the 'counter' name is less and
less appropriate.

All in all, we've decided to rename the subsystem to 'performance
events' and to propagate this rename through all fields, variables
and API names (in an ABI-compatible fashion).

The word 'event' is also a bit shorter than 'counter' - which makes
it slightly more convenient to write/handle as well.

Thanks goes to Stephane Eranian who first observed this misnomer and
suggested a rename.

User-space tooling and ABI compatibility is not affected - this patch
should be function-invariant. (Also, defconfigs were not touched to
keep the size down.)

This patch has been generated via the following script:

  FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

  sed -i \
    -e 's/PERF_EVENT_/PERF_RECORD_/g' \
    -e 's/PERF_COUNTER/PERF_EVENT/g' \
    -e 's/perf_counter/perf_event/g' \
    -e 's/nb_counters/nb_events/g' \
    -e 's/swcounter/swevent/g' \
    -e 's/tpcounter_event/tp_event/g' \
    $FILES

  for N in $(find . -name perf_counter.[ch]); do
    M=$(echo $N | sed 's/perf_counter/perf_event/g')
    mv $N $M
  done

  FILES=$(find . -name perf_event.*)

  sed -i \
    -e 's/COUNTER_MASK/REG_MASK/g' \
    -e 's/COUNTER/EVENT/g' \
    -e 's/\<event\>/event_id/g' \
    -e 's/counter/event/g' \
    -e 's/Counter/Event/g' \
    $FILES

... to keep it as correct as possible. This script can also be
used by anyone who has pending perfcounters patches - it converts
a Linux kernel tree over to the new naming. We tried to time this
change to the point in time where the amount of pending patches
is the smallest: the end of the merge window.

Namespace clashes were fixed up in a preparatory patch - and some
stylistic fallout will be fixed up in a subsequent patch.

( NOTE: 'counters' are still the proper terminology when we deal
  with hardware registers - and these sed scripts are a bit
  over-eager in renaming them. I've undone some of that, but
  in case there's something left where 'counter' would be
  better than 'event' we can undo that on an individual basis
  instead of touching an otherwise nicely automated patch. )

Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-21 14:28:04 +02:00
David S. Miller
cabc5c0f7f Merge branch 'master' of /home/davem/src/GIT/linux-2.6/
Conflicts:
	arch/sparc/Kconfig
2009-09-11 20:35:13 -07:00
David S. Miller
59abbd1e7c sparc64: Initial hw perf counter support.
Only supports one simple counter, and only on UltraSPARC-IIIi chips.

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 06:28:20 -07:00
David S. Miller
2d0740c456 sparc64: Use nmi_enter() and nmi_exit(), as needed.
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-10 05:56:16 -07:00
David S. Miller
d89be56b21 sparc64: Make touch_nmi_watchdog() actually work.
It guards its actions on nmi_watchdog_active, but nothing ever
sets that, and its initial value is zero.
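
A hedged sketch of what a working touch_nmi_watchdog() looks like in this
style (the per-cpu flag name is an assumption): mark every cpu as recently
touched so the next NMI check on each of them is skipped once:

	#include <linux/cpumask.h>
	#include <linux/nmi.h>
	#include <linux/percpu.h>

	static DEFINE_PER_CPU(int, nmi_touched);	/* assumed flag name */

	void touch_nmi_watchdog(void)
	{
		int cpu;

		/* Mark every cpu so its next NMI check is skipped once. */
		for_each_present_cpu(cpu)
			per_cpu(nmi_touched, cpu) = 1;

		touch_softlockup_watchdog();
	}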

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-08 23:29:16 -07:00
David S. Miller
a8f2226455 sparc64: Manage NMI watchdog enabling like x86.
Use a per-cpu 'wd_enabled' boolean and a global atomic_t count
of watchdog-NMI-enabled cpus, which is set to '-1' if something
is wrong with the watchdog and it can't be used.
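
The bookkeeping described above, as a short sketch (declarations only; the
names follow the x86 convention the commit refers to and are illustrative):

	#include <linux/atomic.h>
	#include <linux/percpu.h>

	static DEFINE_PER_CPU(int, wd_enabled);		/* watchdog running on this cpu */
	static atomic_t nmi_active = ATOMIC_INIT(0);	/* enabled cpu count, -1 => unusable */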

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-08 23:16:06 -07:00
David S. Miller
e6617c6ec2 sparc64: Kill spurious NMI watchdog triggers by increasing limit to 30 seconds.
This is a compromise and a temporary workaround for bootup NMI
watchdog triggers some people see with qla2xxx devices present.

This happens when, for example:

CPU 0 is in the driver init, looping: it submits mailbox commands to
load the firmware, then waits for completion.

CPU 1 is receiving the device interrupts.  CPU 1 is where the NMI
watchdog triggers.

CPU 0 is submitting mailbox commands fast enough that by the time CPU
1 returns from the device interrupt handler, a new one is pending.
This sequence runs for more than 5 seconds.

The problematic case is CPU 1's timer interrupt running when the
barrage of device interrupts begins.  Then we have:

	timer interrupt
	return for softirq checking
	pending, thus enable interrupts

		 qla2xxx interrupt
		 return
		 qla2xxx interrupt
		 return
		 ... 5+ seconds pass
		 final qla2xxx interrupt for fw load
		 return

	run timer softirq
	return

At some point in the multi-second qla2xxx interrupt storm we trigger
the NMI watchdog on CPU 1 from the NMI interrupt handler.

The timer softirq, once we get back to running it, is smart enough to
run the timer work enough times to make up for the missed timer
interrupts.

However, the NMI watchdogs (both x86 and sparc) use the timer
interrupt count to notice that the cpu is wedged.  But in the above
scenario we'll receive only one such timer interrupt even once we
finally make it all the way back to running the timer softirq.

The default watchdog trigger point is only 5 seconds, which is pretty
low (the softwatchdog triggers at 60 seconds).  So increase it to 30
seconds for now.

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-09-03 02:35:20 -07:00
David S. Miller
ffaba67409 sparc64: Fix reset hangs on Niagara systems.
Hypervisor versions older than version 1.6.1 cannot handle
leaving the profile counter overflow interrupt chirping
when the system does a soft reset.

So use a reboot notifier to shut off the NMI watchdog.
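
A sketch of the reboot-notifier mechanism used for this (the callback body
is elided and the names are illustrative):

	#include <linux/init.h>
	#include <linux/notifier.h>
	#include <linux/reboot.h>

	static int nmi_shutdown(struct notifier_block *nb, unsigned long cmd, void *p)
	{
		/* stop the profile-counter overflow interrupt on all cpus here */
		return NOTIFY_DONE;
	}

	static struct notifier_block nmi_reboot_notifier = {
		.notifier_call	= nmi_shutdown,
	};

	static int __init nmi_reboot_setup(void)
	{
		return register_reboot_notifier(&nmi_reboot_notifier);
	}
	late_initcall(nmi_reboot_setup);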

Signed-off-by: David S. Miller <davem@davemloft.net>
2009-03-29 15:40:33 -07:00
David S. Miller
dc4ff585ff sparc64: Call dump_stack() in die_nmi().
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-04 13:48:11 -08:00
Stephen Rothwell
47a4a0e766 sparc: fixup for sparseirq changes
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-02-02 22:14:28 -08:00
David S. Miller
e5553a6d04 sparc64: Implement NMI watchdog on capable cpus.
Signed-off-by: David S. Miller <davem@davemloft.net>
2009-01-30 00:03:53 -08:00