This is superfluous, as the default CPU type and family are already
established by the initial cpuinfo definition. Given that we are still
able to probe for the CPU family even if we are not able to detect the
subtype, it's preferable to let the probing code fill out what it can and
leave the rest.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a family member to struct sh_cpuinfo, which allows us to fall
back more on the probe routines to work out what sort of subtype we are
running on. This will be used by the CPU cache initialization code in
order to first do family-level initialization, followed by subtype-level
optimizations.
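A minimal sketch of the resulting layout (the enum values and member
names here are assumptions for illustration, not the verbatim header):

  enum cpu_family {
          CPU_FAMILY_UNKNOWN,
          CPU_FAMILY_SH2,
          CPU_FAMILY_SH3,
          CPU_FAMILY_SH4,
          CPU_FAMILY_SH5,
  };

  struct sh_cpuinfo {
          enum cpu_family family;  /* always known from early probing */
          unsigned int type;       /* subtype; may remain undetected  */
          /* ... remaining members unchanged ... */
  };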
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of reorganizing for allowing nommu to use the new
and generic cache.c; there are no functional changes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly into do_page_fault(), as these require
slow path handling.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This inserts a ULONG_MAX entry at the end of the valid entries in the
stack trace buffer so the default code doesn't need to scan to the end of
available slots. This also makes the trace buffer termination behaviour
consistent with the other architectures.
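The termination convention, sketched (assuming the usual struct
stack_trace fields; this mirrors what the other architectures do):

  #include <linux/stacktrace.h>
  #include <linux/kernel.h>

  static void terminate_trace(struct stack_trace *trace)
  {
          /* Leave a ULONG_MAX sentinel after the last valid entry so
           * consumers can stop there instead of scanning every slot. */
          if (trace->nr_entries < trace->max_entries)
                  trace->entries[trace->nr_entries++] = ULONG_MAX;
  }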
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This flags the default unwinder as reliable, as it tends to be reliable
enough for the purposes of the stacktrace buffer. We leave the unreliable
cases for the unwind methods that we know to be completely broken.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adopts the reliability checks from the x86 stacktrace code so known
bad addresses are not recorded in the stack trace buffer.
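Sketched in C, the check amounts to refusing anything that does not
point into kernel text (the helper name is illustrative;
__kernel_text_address() is the interface the x86 code uses):

  #include <linux/kernel.h>
  #include <linux/stacktrace.h>

  static void save_if_reliable(struct stack_trace *trace, unsigned long addr)
  {
          /* Known-bad addresses (stack noise, stale frames) fail this
           * test and are simply not recorded. */
          if (!__kernel_text_address(addr))
                  return;
          if (trace->nr_entries < trace->max_entries)
                  trace->entries[trace->nr_entries++] = addr;
  }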
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
save_stack_trace_tsk() and friends can be called from atomic context (as
triggered by latencytop), and subsequently hit two problematic allocation
points that were using GFP_KERNEL (these were dwarf_unwind_stack() and
dwarf_frame_alloc_regs()). Convert these over to GFP_ATOMIC and get
latencytop working with the DWARF unwinder.
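The shape of the fix at each call site is a one-flag swap, roughly
(illustrative, not the exact lines from dwarf.c):

  /* before: may sleep, illegal in the atomic paths latencytop hits */
  regs = kzalloc(sizeof(*regs), GFP_KERNEL);

  /* after: never sleeps, safe from atomic context */
  regs = kzalloc(sizeof(*regs), GFP_ATOMIC);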
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Trying to figure out the best value for DWARF_ARCH_UNWIND_OFFSET is
tricky at best. Various things can change the size (and offset from the
beginning of the function) of the prologue. Notably, turning on ftrace
adds calls to mcount at the beginning of functions, thereby pushing the
prologue further into the function.
So replace DWARF_ARCH_UNWIND_OFFSET with some code that continues to
execute CFA instructions until the value of the return address register is
defined. This is safe to do because we know that the return address must
have been pushed onto the frame before our first function call; we just
can't figure out where at compile-time.
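Sketched as C-ish pseudocode (the helper names are illustrative
stand-ins, not the actual unwinder internals):

  /* Instead of stopping at a fixed DWARF_ARCH_UNWIND_OFFSET, keep
   * interpreting the FDE's CFA instructions until the rule for the
   * return address register has been established. */
  while (!return_addr_rule_defined(frame))
          execute_next_cfa_insn(cie, fde, frame);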
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The destination address might be unaligned, so set it with
put_unaligned() for safety. This restores the previous behaviour, albeit
through the proper API.
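Usage is simply the following (illustrative wrapper; the destination
type is what selects the access size):

  #include <asm/unaligned.h>

  static void store_insn(u16 *dst, u16 insn)
  {
          /* dst may be misaligned; put_unaligned() emits a safe,
           * size-selected store instead of a plain *dst = insn. */
          put_unaligned(insn, dst);
  }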
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This was using internal symbols for unaligned accesses, bypassing the
exposed interface for variable sized safe accesses. This converts all of
the __get_unaligned_cpuXX() users over to get_unaligned() directly,
relying on the cast to select the proper internal routine.
Additionally, the __put_unaligned_cpuXX() case is superfluous given that
the destination address is aligned in all of the current cases, so just
drop that outright.
Furthermore, this switches to the asm/unaligned.h header instead of the
asm-generic version, which was silently bypassing the SH-4A optimized
unaligned ops.
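A sketch of the conversion (illustrative variables; the pointer cast
is what selects the access size):

  #include <asm/unaligned.h>      /* not asm-generic/unaligned.h */

  static u32 read_operand(const u8 *addr)
  {
          /* was: __get_unaligned_cpu32(addr) */
          return get_unaligned((const u32 *)addr);
  }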
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Annotate various assembly code paths with CFI assembler directives so
that DWARF unwind info is available for the unwinder.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In order to use DWARF unwinder info the frame register has to contain a
valid value. Whilst GCC takes care of this for C code, we have to do it
ourselves for assembly.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is a first cut at a generic DWARF unwinder for the kernel. It's
still lacking DWARF64 support and the DWARF expression support hasn't
been tested very well but it is generating proper stacktraces on SH for
WARN_ON() and NULL dereferences.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Instead of implementing our own stack unwinder via dump_trace() we
should use the new stack unwinder API because it is more modular. This
change allows us to decouple the interface for generating stacktraces
from the implementation of a stack unwinder.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Provide an interface for registering stack unwinders, where each
unwinder is given a rating that describes its accuracy and
complexity. The more accurate an unwinder is, the more complex it is.
If the current stack unwinder faults, then the stack unwinder with the
next highest accuracy will be used in its place (provided one is
available). For example, this allows unwinders, such as the DWARF
unwinder, to liberally sprinkle BUG()s to catch badly formed DWARF debug
info.
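A minimal sketch of what such an interface looks like (the field and
function names follow this description and are assumptions, not the
verbatim arch/sh header):

  struct unwinder {
          const char *name;
          struct list_head list;
          int rating;             /* higher = more accurate/complex */
          void (*dump)(struct task_struct *task, struct pt_regs *regs,
                       unsigned long *sp, const struct stacktrace_ops *ops,
                       void *data);
  };

  /* Register an unwinder; the core picks the highest-rated one and
   * falls back to the next best if it faults. */
  int unwinder_register(struct unwinder *u);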
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Copy the stacktrace ops code from x86 and provide a central function for
use by functions that need to dump a callstack.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This can use the now generic clear_page() implementation, which is backed
by the sh64 optimized memset routine. This also fixes up the case where
PAGE_SIZE != 4kB.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This consolidates all of the NEFF-based sign extension for SH-5.
In the future the other SH code will need to make use of this as well,
so make it generic in preparation for more 32/64 consolidation.
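The helper boils down to ordinary sign extension from bit NEFF-1 (a
sketch; NEFF is the number of effective address bits on SH-5):

  static inline unsigned long long neff_sign_extend(unsigned long val)
  {
          unsigned long long extended = val;

          /* Copy bit NEFF-1 into all of the bits above it. */
          return (extended & (1ULL << (NEFF - 1))) ?
                  (extended | (~0ULL << NEFF)) : extended;
  }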
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch removes the unused MSTPCRn register definitions
from the SuperH Mobile code for sh7722, sh7723 and sh7724.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds early printk support for SH770x (tested on SH7709 based hp6xx).
Signed-off-by: Rafael Ignacio Zurita <rizurita@yahoo.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This cleans up the irqflags tracing code quite a bit and ties it
in to various missing callsites that caused an imbalance when
CONFIG_PROVE_LOCKING was enabled.
Previously this was catching on:
987 #ifdef CONFIG_PROVE_LOCKING
988 DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
989 DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
990 #endif
991 retval = -EAGAIN;
with hardirqs being doubly enabled, and subsequently bailing out
with the following call trace:
Call trace:
[<88035224>] __lock_acquire+0x616/0x6a6
[<88015a8c>] do_fork+0xf8/0x2b0
[<880331ec>] trace_hardirqs_on_caller+0xd4/0x114
[<88241074>] _spin_unlock_irq+0x20/0x64
[<88035224>] __lock_acquire+0x616/0x6a6
[<8800386c>] kernel_thread+0x48/0x70
[<88024ecc>] ____call_usermodehelper+0x0/0x110
[<88024ecc>] ____call_usermodehelper+0x0/0x110
[<88003894>] kernel_thread_helper+0x0/0x14
[<88024bac>] __call_usermodehelper+0x38/0x70
[<88025dc0>] worker_thread+0x150/0x274
[<88035b9c>] lock_release+0x0/0x198
[<88024b74>] __call_usermodehelper+0x0/0x70
[<88028cf0>] autoremove_wake_function+0x0/0x30
[<88028bf2>] kthread+0x3e/0x70
[<88025c70>] worker_thread+0x0/0x274
[<8800389c>] kernel_thread_helper+0x8/0x14
[<88028bb4>] kthread+0x0/0x70
[<88003894>] kernel_thread_helper+0x0/0x14
Reported-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This reverts commit 1d29ebebcb.
Bumping up the earlytimer initialization causes IRQs to be enabled too
early, which blows up lockdep:
...
NR_IRQS:256 nr_irqs:256
------------[ cut here ]------------
Badness at kernel/lockdep.c:2128
Pid : 0, Comm: swapper
CPU : 0 Not tainted (2.6.31-rc3-00205-g3ed6e12-dirty #2443)
PC is at trace_hardirqs_on_caller+0x48/0x10c
PR is at trace_hardirqs_on_caller+0x3c/0x10c
...
Revert it back to late_time_init time, which fixes up lockdep.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This wires up clear_user_highpage() on SH-4 and subsequently converts the
SH7705 32kB cache mode over to using it. Now that the SH-4 implementation
handles all of the dcache purging directly in the aliasing case, there is
no need to do this in the default clear_page() implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the processor platform device setup
functions from __initcall() and sometimes
device_initcall() to arch_initcall().
This makes sure that the platform devices are
registered a bit earlier so the devices are
available when drivers register using initcall
levels earlier than device_initcall().
A good example is platform devices needed by
i2c-sh_mobile.c which registers a bit earlier
using subsys_initcall().
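The conversion itself is a one-line change per setup function, along
these lines (the function name is illustrative):

  /* was: __initcall(sh7722_devices_setup); */
  arch_initcall(sh7722_devices_setup);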
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the m66592-udc driver to use the on_chip flag
from platform data to enable on chip behaviour instead
of relying on CONFIG_SUPERH_BUILT_IN_M66592 ugliness.
This makes the code cleaner and also allows us to support
both external and internal m66592 with the same kernel.
It also makes the Kconfig part more future proof, since
with this patch we can add support for new processors
with on-chip m66592 without modifying the Kconfig.
The patch adds a m66592 header file for platform data
and ties in platform data to the existing m66592 devices.
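Roughly, the platform data side looks like this (a sketch based on
this description; the field names are assumptions):

  /* include/linux/usb/m66592.h (sketch) */
  struct m66592_platdata {
          unsigned on_chip:1;     /* set for the built-in controller */
  };

  static struct m66592_platdata usbf_platdata = {
          .on_chip = 1,
  };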
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Convert the r8a66597-hcd driver to use the on_chip flag
from platform data to enable on chip behaviour instead
of relying on CONFIG_SUPERH_ON_CHIP_R8A66597 ugliness.
This makes the code cleaner and also allows us to support
both external and internal r8a66597 with the same kernel.
It also makes the Kconfig part more future proof, since
with this patch we can add support for new processors
with on-chip r8a66597 without modifying the Kconfig.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Extend the SuperH hwblk code to support more than one counter.
Contains ground work for the future Runtime PM implementation.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The function prototype for mcount is not defined if we are not building
with ftrace support enabled, so use DECLARE_EXPORT() to stub one in.
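DECLARE_EXPORT() amounts to roughly the following (a sketch of the
idea, not the verbatim macro):

  #include <linux/module.h>

  /* Declare the asm symbol with an opaque prototype and export it,
   * without depending on the ftrace headers for a declaration. */
  #define DECLARE_EXPORT(name)  extern void name(void); EXPORT_SYMBOL(name)

  DECLARE_EXPORT(mcount);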
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
STACK_DEBUG ties in to mcount in order to do function-granular stack
overflow checks as opposed to lazily checking from IRQ context. As the
default is nohz, the frequency of overflow checking is too irregular to
catch much useful information, and so the mcount approach employed by
sparc64 is adopted instead.
This kills off the old check entirely from the do_IRQ() path and now
adopts CONFIG_MCOUNT instead.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a general CONFIG_MCOUNT in order to permit mcount generation
without ftrace support. This is primarily for allowing platforms to
enable aggressive stack overflow checking without having to enable ftrace
support. Based on the sparc64 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Annotate __switch_to() so that the function graph tracer does not try to
trace it. Use __notrace_funcgraph, as opposed to notrace, so that other
tracers can continue to trace __switch_to().
The reason that we don't want to trace __switch_to() with the function
graph tracer is because of how the return address stack in task_struct
is implemented. When we enter __switch_to we store the real return
address on prev's ret_stack. When we return from __switch_to() we've
patched the return address on the kernel stack to be
return_to_handler. When return_to_handler runs, it calls:
-> ftrace_return_to_handler()
-> ftrace_pop_return_trace()
which tries to pop the real return address from current->ret_stack. The
problem being that we stored the return address on prev->ret_stack, but
current now points to next, and next->ret_stack doesn't contain the
correct return address (and is possibly even empty).
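The annotation itself is a one-word change to the definition (a
sketch of the prototype; the prev/next arguments follow the usual
convention):

  #include <linux/ftrace.h>       /* __notrace_funcgraph */

  __notrace_funcgraph struct task_struct *
  __switch_to(struct task_struct *prev, struct task_struct *next);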
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Add both dynamic and static function graph tracer support for sh.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Enable kernel stack checking code in both the dynamic ftrace and mcount
code paths. Check the stack to see if it's overflowing and make sure
that the stack pointer contains an address that's either in init_stack
or after the bss.
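In C terms the check is roughly the following (a sketch; the real test
lives in the mcount assembly, and the end-of-.bss symbol name is an
assumption taken from the linker script):

  extern char _ebss[];    /* end of .bss, from the linker script */

  static inline int stack_is_sane(unsigned long sp)
  {
          unsigned long lo = (unsigned long)&init_thread_union;
          unsigned long hi = lo + THREAD_SIZE;

          /* Valid stacks live either in init_stack or above the end
           * of .bss; anything else indicates an overflow. */
          return (sp >= lo && sp < hi) || sp >= (unsigned long)_ebss;
  }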
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch converts the sh architecture to use the new linker script
macros in include/asm-generic/vmlinux.lds.h.
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that I've added TIF_SYSCALL_FTRACE, the thread flags do not fit into
a single byte any more. Code testing them now needs to be aware of the
upper and lower bytes.
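A small illustration of the pitfall (illustrative values; the point is
the width of the access, not the specific flags):

  unsigned long ti_flags = current_thread_info()->flags;

  /* Broken once a flag lives above bit 7: only sees the low byte. */
  int work_low  = ti_flags & 0xff;

  /* Correct: the mask covers both the lower and the upper byte. */
  int work_full = ti_flags & 0xffff;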
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds cpuidle support for SuperH Mobile.
The sleep mode selected by cpuidle is compared with
the mode selected by the hwblk sleep code and the
best allowed mode is entered.
At this point "Sleep mode" and "Sleep mode + SF" are
supported. This code can easily be extended to support
"Software suspend mode", but the assembly code must
first be updated to avoid losing interrupts.
Also, update the code to only copy the assembly snippet
into internal memory once at bootup.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>