The MSA architecture specification allows hardware not to implement
unaligned vector memory accesses in some or all cases. A typical example
of this is the I6400 core, which does not implement unaligned vector
memory accesses that cross a page boundary. The architecture also
requires that such memory accesses complete successfully as far as
userland is concerned, so the kernel is required to emulate them.
This patch implements support for emulating unaligned MSA ld & st
instructions by copying between the user memory & the task's FP context
in struct thread_struct, updating hardware registers from there as
appropriate in order to avoid saving & restoring the entire vector
context for each unaligned memory access.
Tested both using an I6400 CPU and with a QEMU build hacked to produce
AdEL exceptions for unaligned vector memory accesses.
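A minimal sketch of the approach, assuming the decoded vector register
number in `wd', the data format in `df', the faulting user address (from
BadVAddr) in `addr', an `is_load' flag, and the format-agnostic wrappers
mentioned in the notes below; the ownership test and error paths are
illustrative, not the exact kernel code:

    /* Reuse the already aligned FP context as the staging buffer. */
    union fpureg *fpr = &current->thread.fpu.fpr[wd];

    if (is_load) {
        if (!access_ok(VERIFY_READ, addr, sizeof(*fpr)))
            goto sigbus;
        if (__copy_from_user(fpr, addr, sizeof(*fpr)))
            goto fault;
        preempt_disable();
        if (test_thread_flag(TIF_USEDMSA))
            write_msa_wr(wd, fpr, df);  /* refresh the live register */
        preempt_enable();
    } else {
        preempt_disable();
        if (test_thread_flag(TIF_USEDMSA))
            read_msa_wr(wd, fpr, df);   /* capture the live register */
        preempt_enable();
        if (!access_ok(VERIFY_WRITE, addr, sizeof(*fpr)))
            goto sigbus;
        if (__copy_to_user(addr, fpr, sizeof(*fpr)))
            goto fault;
    }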
[paul.burton@imgtec.com:
- Remove #ifdefs
- Move msa_op into enum major_op rather than #define
- Replace msa_{to,from}_wd with {read,write}_msa_wr_{b,h,w,l} and the
format-agnostic wrappers, removing the custom endian mangling for
big endian systems.
- Restructure the msa_op case in emulate_load_store_insn to share
more code between the load & store cases.
- Avoid the need for a temporary union fpureg on the stack by simply
reusing the already suitably aligned context in struct
thread_struct.
- Use sizeof(*fpr) rather than hardcoding 16 as the size for user
memory checks & copies.
- Stop recalculating the address of the unaligned vector memory access
and rely upon the value read from BadVAddr as we do for other
unaligned memory access instructions.
- Drop the now unused val8 & val16 fields in union fpureg.
- Rewrite commit message.
- General formatting cleanups.]
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-kernel@vger.kernel.org
Cc: Jie Chen <chenj@lemote.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/10573/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel
unaligned accesses") renamed the Load* and Store* defines in unaligned.c
to _Load* and _Store* as part of its fix. One define was missed, which
causes big-endian R6 kernels to fail to build:
arch/mips/kernel/unaligned.c:880:35:
error: implicit declaration of function '_StoreDW'
#define StoreDW(addr, value, res) _StoreDW(addr, value, res)
^
Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Fixes: eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel unaligned accesses")
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10575/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When emulating a regular lh/lw/lhu/sh/sw we need to use the appropriate
instruction if we are in EVA mode. This is necessary for userspace
applications which trigger alignment exceptions. In such cases, the
userspace load/store instruction needs to be emulated with the correct
EVA/non-EVA instruction by the kernel emulator.
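A minimal sketch of that selection for one opcode, assuming the
handler's usual locals (addr, value, res, insn), an EVA variant of the
LoadHW helper (called LoadHWE here), and `is_user_access' as a stand-in
for the real address-space check:

    case lh_op:
        if (!access_ok(VERIFY_READ, addr, 2))
            goto sigbus;

        if (IS_ENABLED(CONFIG_EVA) && is_user_access)
            LoadHWE(addr, value, res);  /* lhe-based helper */
        else
            LoadHW(addr, value, res);   /* plain lh-based helper */

        if (res)
            goto fault;
        compute_return_epc(regs);
        regs->regs[insn.i_format.rt] = value;
        break;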
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Fixes: c1771216ab ("MIPS: kernel: unaligned: Handle unaligned accesses for EVA")
Cc: <stable@vger.kernel.org> # v3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9503/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
It's best to surround such complex macros with do { } while (0)
statements so they can appear as independent logical blocks when used
within other control blocks.
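A self-contained illustration of why the wrapper matters; the macro name
is made up, not one of the kernel's Load*/Store* macros:

    #include <stdio.h>

    /* Without do { } while (0), only the first statement would be
     * governed by the 'if' below, and the caller's trailing ';' would
     * orphan any 'else' that follows. */
    #define EMIT_BOTH(x)                        \
    do {                                        \
        printf("first %d\n", (x));              \
        printf("second %d\n", (x));             \
    } while (0)

    int main(void)
    {
        int cond = 0;

        if (cond)
            EMIT_BOTH(1);       /* expands to a single statement */
        else
            printf("neither\n");

        return 0;
    }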
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # v3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9502/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit c1771216ab ("MIPS: kernel: unaligned: Handle unaligned
accesses for EVA") allowed unaligned accesses to be emulated for
EVA. However, when emulating regular load/store unaligned accesses,
we need to use the appropriate "address space" instructions for that.
Previously, an unaligned load/store instruction in kernel space would
have used the corresponding EVA instructions to emulate it, which led to
segmentation faults because of the address translation that happens
with EVA instructions. This is now fixed by using the EVA instructions
only when emulating EVA unaligned accesses.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Fixes: c1771216ab ("MIPS: kernel: unaligned: Handle unaligned accesses for EVA")
Cc: <stable@vger.kernel.org> # v3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9501/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Rework `process_fpemu_return' and move the IEEE 754 exception
interpretation there from `do_fpe'. Record the cause bits set in FCSR
before they are cleared and pass them through to `process_fpemu_return'
so that `si_code' is also set correctly for SIGFPE signals sent from
emulation, not only for those raised by hardware via the FPE processor
exception.
For simplicity `mipsr2_decoder' assumes `*fcr31' has been preinitialised
and only updates it if an FPU instruction has been emulated, which in
turn is the only case where SIGFPE can be issued here.
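A sketch of the interpretation that moves into `process_fpemu_return',
assuming a local siginfo `si' and the recorded cause bits in `fcr31';
the masks are the standard FCSR cause bits and the priority order shown
is the conventional one, though the exact kernel code may differ:

    si.si_signo = SIGFPE;
    if (fcr31 & FPU_CSR_INV_X)
        si.si_code = FPE_FLTINV;        /* invalid operation */
    else if (fcr31 & FPU_CSR_DIV_X)
        si.si_code = FPE_FLTDIV;        /* division by zero */
    else if (fcr31 & FPU_CSR_OVF_X)
        si.si_code = FPE_FLTOVF;        /* overflow */
    else if (fcr31 & FPU_CSR_UDF_X)
        si.si_code = FPE_FLTUND;        /* underflow */
    else if (fcr31 & FPU_CSR_INE_X)
        si.si_code = FPE_FLTRES;        /* inexact result */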
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9705/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The load/store unaligned instructions have been removed in MIPS R6,
so we need to re-implement the related macros using regular
load/store instructions. Moreover, the coprocessor 2 load/store
instructions have been reallocated in Release 6, so we will handle them
in the emulator instead.
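A simplified illustration of the idea behind the reworked macros (the
real ones are inline asm with exception-table fixups, not plain C; the
helper name is made up):

    /* With lwl/lwr gone in Release 6, an unaligned 32-bit word is
     * assembled from individual byte loads instead. */
    static inline unsigned int load_w_bytewise(const unsigned char *p)
    {
    #ifdef __MIPSEB__
        return ((unsigned int)p[0] << 24) | ((unsigned int)p[1] << 16) |
               ((unsigned int)p[2] <<  8) |  (unsigned int)p[3];
    #else
        return ((unsigned int)p[3] << 24) | ((unsigned int)p[2] << 16) |
               ((unsigned int)p[1] <<  8) |  (unsigned int)p[0];
    #endif
    }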
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
In do_ade(), is_fpu_owner() isn't preempt-safe. For example, when an
unaligned ldc1 is executed, do_cpu() is called and the FPU is then
enabled (and TIF_USEDFPU is set for the current process). Then
do_ade() is called because the access is unaligned. If the current
process is preempted at this point, TIF_USEDFPU is cleared, so when
the process is scheduled again, BUG_ON(!is_fpu_owner()) triggers.
This small program can trigger this BUG in a preemptible kernel:
int main(int argc, char *argv[])
{
    double u64[2];

    while (1) {
        /* u64[] is 8-byte aligned, so loading a doubleword from
         * u64 + 4 bytes is an unaligned access and raises an
         * address error. */
        asm volatile (
            ".set push          \n\t"
            ".set noreorder     \n\t"
            "ldc1 $f3, 4(%0)    \n\t"
            ".set pop           \n\t"
            : /* no outputs */
            : "r" (u64)
            : /* clobbers left empty as in the original report;
                 the loaded value is never consumed */);
    }

    return 0;
}
V2: Remove the BUG_ON() unconditionally due to Paul's suggestion.
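For reference, a simplified sketch of the affected path once the check
is gone (shape only, assuming the handler's locals `res' and
`fault_addr'; not the exact kernel code):

    case ldc1_op:
    case sdc1_op:
        die_if_kernel("Unaligned FP access in kernel code", regs);
        BUG_ON(!used_math());
        /* No BUG_ON(!is_fpu_owner()): ownership may legitimately have
         * been lost to preemption between do_cpu() and do_ade(). */

        lose_fpu(1);    /* save the live FP context, if we still own it */
        res = fpu_emulator_cop1Handler(regs, &current->thread.fpu, 1,
                                       &fault_addr);
        own_fpu(1);     /* restore */
        break;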
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Jie Chen <chenj@lemote.com>
Signed-off-by: Rui Wang <wangr@lemote.com>
Cc: <stable@vger.kernel.org>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Handle unaligned accesses when we access userspace memory in EVA mode.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Use the load/store instruction wrappers from asm/asm.h to
perform such operations when operating in EVA mode.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
It is only used from within a single file, so it should not be globally
visible.
Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5325/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add logic needed to handle unaligned accesses in MIPS16e mode.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Add logic needed to handle unaligned accesses in microMIPS mode.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Add logic needed to do floating point emulation in microMIPS mode.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Having received another series of whitespace patches I decided to do this
once and for all rather than dealing with this kind of patch trickling
in forever.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
None of these files are using modular infrastructure, and build
tests reveal that none of these files are really relying on any
implicit inclusions via module.h either. So delete them.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
The nmi parameter indicated whether we could do wakeups from the current
context; if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Software events are required as part of what the Linux performance
counter subsystem can measure. Here is the list of events added by
this patch:
PERF_COUNT_SW_PAGE_FAULTS
PERF_COUNT_SW_PAGE_FAULTS_MIN
PERF_COUNT_SW_PAGE_FAULTS_MAJ
PERF_COUNT_SW_ALIGNMENT_FAULTS
PERF_COUNT_SW_EMULATION_FAULTS
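As an example, the alignment fault event is counted from the
unaligned-access handler roughly like this (call shape per the perf API
of that time, which still took an `nmi' argument):

    perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS,
                  1, 0, regs, regs->cp0_badvaddr);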
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: linux-mips@linux-mips.org
Cc: a.p.zijlstra@chello.nl
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: jamie.iles@picochip.com
Acked-by: David Daney <ddaney@caviumnetworks.com>
Reviewed-by: Matt Fleming <matt@console-pimps.org>
Patchwork: https://patchwork.linux-mips.org/patch/1686/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Put the original syscall number into ->regs[0] when we leave a syscall
with an error, and use it in the restart logic. Everything else will
have it set to 0 since we pass through SAVE_SOME on all the ways in.
Note that in places like bad_stack and illegal_syscall we leave it 0 -
it's not restartable.
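A simplified sketch of the restart logic this enables in the signal
code, assuming the usual `ka' sigaction pointer; the details shown are
illustrative rather than the exact kernel code:

    if (regs->regs[0]) {                /* left a syscall with an error */
        switch (regs->regs[2]) {        /* v0 carries the error value */
        case ERESTARTNOHAND:
        case ERESTART_RESTARTBLOCK:
            regs->regs[2] = EINTR;
            break;
        case ERESTARTSYS:
            if (!(ka->sa.sa_flags & SA_RESTART)) {
                regs->regs[2] = EINTR;
                break;
            }
            /* fall through */
        case ERESTARTNOINTR:
            regs->regs[2] = regs->regs[0];  /* original syscall number */
            regs->cp0_epc -= 4;             /* re-execute the syscall */
        }
        regs->regs[0] = 0;                  /* never restart twice */
    }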
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-kernel@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/1698/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Away with the daemons of ifdef; get ready for future COP2 users.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Patchwork: http://patchwork.linux-mips.org/patch/708/
When init is started it is SIGNAL_UNKILLABLE. If it were to get an
address error, we would try to send it SIGBUS, but it would be ignored
and the faulting instruction restarted. This results in an endless
loop.
We need to use force_sig() instead so it will actually die and give us
some useful information.
Reported-by: Florian Fainelli <florian@openwrt.org>
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Arguably, using the address error handler has always been ugly. But with
processors that handle unaligned loads and stores in hardware the
current mechanism ceases to work, so switch it to a BREAK instruction
and allocate break code 514 to the FPU emulator.
Yoichi Yuasa provided a build fix for CONFIG_BUG=n.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
debugfs_create_*() returns NULL on error. Make its callers return -ENODEV
on error.
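The resulting pattern, sketched for one of the MIPS debugfs files
(debugfs_create_u32() returned a dentry pointer at the time; names
follow unaligned.c):

    d = debugfs_create_u32("unaligned_instructions", S_IRUGO,
                           mips_debugfs_dir, &unaligned_instructions);
    if (!d)
        return -ENODEV;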
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Acked-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Currently the number of unaligned instructions is counted but not used.
Add a /debug/mips/unaligned_instructions file to show the value,
and add /debug/mips/unaligned_action to control the behaviour upon an
unaligned access. Possible actions are:
0: silently fixup the unaligned access.
1: send SIGBUS.
2: dump registers, process name, etc. and fixup.
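A sketch of how the handler is expected to consult the action value;
the names follow the style of unaligned.c but should be treated as
illustrative rather than the exact code:

    enum {
        UNALIGNED_ACTION_QUIET,     /* 0: silently fix up */
        UNALIGNED_ACTION_SIGNAL,    /* 1: send SIGBUS */
        UNALIGNED_ACTION_SHOW,      /* 2: dump state, then fix up */
    };

    if (unaligned_action == UNALIGNED_ACTION_SIGNAL)
        goto sigbus;
    if (unaligned_action == UNALIGNED_ACTION_SHOW)
        show_registers(regs);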
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The first thing mm.h does is include sched.h, solely for the
can_do_mlock() inline function, which dereferences "current". By dealing
with can_do_mlock(), mm.h can be detached from sched.h, which is good.
See below for why.
This patch
a) removes the unconditional inclusion of sched.h from mm.h
b) makes can_do_mlock() a normal function in mm/mlock.c
c) exports can_do_mlock() so compilation does not break
d) adds sched.h inclusions back to files that were getting it indirectly.
e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that
were getting them indirectly
The net result is:
a) mm.h users get less code to open, read, preprocess, parse, ... if
they don't need sched.h
b) sched.h stops being a dependency for a significant number of files:
on x86_64 allmodconfig, touching sched.h results in a recompile of 4083
files; after the patch it's only 3744 (-8.3%).
Cross-compile tested on
all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
alpha alpha-up
arm
i386 i386-up i386-defconfig i386-allnoconfig
ia64 ia64-up
m68k
mips
parisc parisc-up
powerpc powerpc-up
s390 s390-up
sparc sparc-up
sparc64 sparc64-up
um-x86_64
x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig
as well as my two usual configs.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.
Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!