The architecture requires (R_TYTWB) that an attempt to return from EL3
when SCR_EL3.{NSE,NS} are {1,0} is an illegal exception return. (This
enforces that the CPU can't ever be executing below EL3 with the
NSE,NS bits indicating an invalid security state.)
We were missing this check; add it.
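As an illustration (not the exact patch; names follow the existing SCR_*
definitions in target/arm/cpu.h), the check has roughly this shape:

  /* Illustrative: SCR_EL3.{NSE,NS} == {1,0} is a reserved encoding, so
   * returning from EL3 to a lower EL in that state is an illegal return. */
  if (cur_el == 3 && new_el < 3 &&
      (env->cp15.scr_el3 & (SCR_NSE | SCR_NS)) == SCR_NSE) {
      goto illegal_return;
  }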
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807150618.101357-1-peter.maydell@linaro.org
This is a mandatory feature for Armv8.1 architectures but we don't
state the feature clearly in our emulation list. Also include a
FEAT_CRC32 comment in aarch64_max_tcg_initfn for ease of grepping.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20230824075406.1515566-1-alex.bennee@linaro.org
Cc: qemu-stable@nongnu.org
Message-Id: <20230222110104.3996971-1-alex.bennee@linaro.org>
[PMM: pluralize 'instructions' in docs]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This feature allows the operating system to set TCR_ELx.HWU*
to allow the implementation to use the PBHA bits from the
block and page descriptors for IMPLEMENTATION DEFINED
purposes. Since QEMU has no need to use these bits, we may
simply ignore them.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Like FEAT_TRF (Self-hosted Trace Extension), suppress tracing
external to the cpu, which is out of scope for QEMU.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is only one additional EL1 register modeled, which
also needs to use access_actlr_w.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Access to many of the special registers is enabled or disabled
by ACTLR_EL[23], which we implement as constant 0, meaning
that all writes outside EL3 should trap.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Do not hard-code the constants for Neoverse V1.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When the cpu supports MTE, but the system does not, reduce cpu
support to user instructions at EL0 instead of completely
disabling MTE. If we encounter a cpu implementation which does
something else, we can revisit this setting.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Support all of the easy GM block sizes.
Use direct memory operations, since the pointers are aligned.
While BS=2 (16 bytes, 1 tag) is a legal setting, that requires
an atomic store of one nibble. This is not difficult, but there
is also no point in supporting it until required.
Note that cortex-a710 sets GM blocksize to match its cacheline
size of 64 bytes. I expect many implementations will also
match the cacheline, which makes 16 bytes very unlikely.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230811214031.171020-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Previously we hard-coded the blocksize with GMID_EL1_BS.
But the value we choose for -cpu max does not match the
value that cortex-a710 uses.
Mirror the way we handle dcz_blocksize.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230811214031.171020-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This value is only 4 bits wide.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230811214031.171020-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Try to make the self-reported global hack a little less hackish by
providing a query function instead. As gdb_has_xml was always set if
we negotiated XML, we can now use the presence of ->target_xml as the
test instead.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20230829161528.2707696-12-alex.bennee@linaro.org>
Make the conversion between privilege level and QEMU MMU index
consistent, and afterwards switch to MMU indices 11-15.
Signed-off-by: Helge Deller <deller@gmx.de>
Merge tag 'devel-hppa-priv-cleanup2-pull-request' of https://github.com/hdeller/qemu-hppa into staging
target/hppa: Clean up conversion from/to MMU index and privilege level
Make the conversion between privilege level and QEMU MMU index
consistent, and afterwards switch to MMU indices 11-15.
Signed-off-by: Helge Deller <deller@gmx.de>
# gpg: Signature made Sun 27 Aug 2023 11:17:40 EDT
# gpg: using EDDSA key BCE9123E1AD29F07C049BBDEF712B510A23A0F5F
# gpg: Good signature from "Helge Deller <deller@gmx.de>" [unknown]
# gpg: aka "Helge Deller <deller@kernel.org>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 4544 8228 2CD9 10DB EF3D 25F8 3E5F 3D04 A7A2 4603
# Subkey fingerprint: BCE9 123E 1AD2 9F07 C049 BBDE F712 B510 A23A 0F5F
* tag 'devel-hppa-priv-cleanup2-pull-request' of https://github.com/hdeller/qemu-hppa:
target/hppa: Switch to use MMU indices 11-15
target/hppa: Use privilege helper in hppa_get_physical_address()
target/hppa: Do not use hardcoded value for tlb_flush_*()
target/hppa: Add privilege to MMU index conversion helpers
target/hppa: Add missing PL1 and PL2 privilege levels
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Current QEMU can expose waitpkg to guests when it is available. However,
VMX_SECONDARY_EXEC_ENABLE_USER_WAIT_PAUSE is still not recognized and
masked by QEMU. This can lead to an unexpected situation when an L1
hypervisor wants to expose waitpkg to an L2 guest. The L1 hypervisor can
assume that VMX_SECONDARY_EXEC_ENABLE_USER_WAIT_PAUSE exists as waitpkg is
available. The L1 hypervisor can then accidentally expose waitpkg to the
L2 guest. This will cause an invalid opcode exception in the L2 guest when
it executes waitpkg related instructions.
This patch adds VMX_SECONDARY_EXEC_ENABLE_USER_WAIT_PAUSE support, and
sets up a dependency between the bit and CPUID_7_0_ECX_WAITPKG. QEMU should
not expose the waitpkg feature if VMX_SECONDARY_EXEC_ENABLE_USER_WAIT_PAUSE is
not available, to avoid an unexpected invalid opcode exception in L2 guests.
Signed-off-by: Ake Koomsin <ake@igel.co.jp>
Message-ID: <20230807093339.32091-2-ake@igel.co.jp>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The MMU indices 9-15 will use shorter assembler instructions
when run on an x86-64 host. So, switch over to those to get
smaller code and maybe minimally faster emulation.
Signed-off-by: Helge Deller <deller@gmx.de>
Convert hppa_get_physical_address() to use the privilege helper macro.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Avoid using hardcoded values when calling the tlb_flush*() functions.
Instead, define and use HPPA_MMU_FLUSH_MASK (keeping the current
behavior, which doesn't flush the physical address MMU).
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Add two macros which convert privilege level to/from MMU index:
- PRIV_TO_MMU_IDX(priv)
returns the MMU index for the given privilege level
- MMU_IDX_TO_PRIV(mmu_idx)
returns the corresponding privilege level for this MMU index
The introduction of these macros makes the code easier to read and
will help to improve performance in a follow-up patch.
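As a rough sketch (the real definitions live in target/hppa/cpu.h and
assume the contiguous index layout that the follow-up switch to MMU
indices 11-15 establishes):

  /* Illustrative only: privilege levels 0-3 map onto a contiguous block
   * of MMU indices starting at the kernel index. */
  #define PRIV_TO_MMU_IDX(priv)    (MMU_KERNEL_IDX + (priv))
  #define MMU_IDX_TO_PRIV(mmu_idx) ((mmu_idx) - MMU_KERNEL_IDX)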
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The hppa CPU has 4 privilege levels (0-3).
Mention the missing PL1 and PL2 levels, although the Linux kernel
uses only 0 (KERNEL) and 3 (USER). Not sure about HP-UX.
Signed-off-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230823145542.79633-9-philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The setcond + neg + or sequence is a complex method of
performing a conditional move.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The setcond + neg + and sequence is a complex method of
performing a conditional move.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Changes the address type of the guest memory read/write functions from
target_ulong to abi_ptr. (abi_ptr is currently typedef'd to target_ulong
but that will change in a following commit.) This will reduce the
coupling between accel/ and target/.
Note: Function pointers that point to cpu_[st|ld]*() in target/riscv and
target/rx are also updated in this commit.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230807155706.9580-6-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Changes the signature of the target-defined functions for
inserting/removing hvf hw breakpoints. The address and length arguments
are now of vaddr type, which both matches the type used internally in
accel/hvf/hvf-all.c and makes the api target-agnostic.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230807155706.9580-5-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Changes the signature of the target-defined functions for
inserting/removing kvm hw breakpoints. The address and length arguments
are now of vaddr type, which both matches the type used internally in
accel/kvm/kvm-all.c and makes the api target-agnostic.
Signed-off-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230807155706.9580-4-anjo@rev.ng>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since GDB 13.1 (GDB commit ea3352172), GDB's LoongArch support changed to use
fcc0-7 instead of the fcc register. This commit partially reverts commit
2f149c759 (`target/loongarch: Update gdb_set_fpu() and gdb_get_fpu()`)
to match the behavior of GDB.
Note that it is a breaking change for GDB 13.0 or earlier, but it is
also required for GDB 13.1 or later to work.
Signed-off-by: Jiajie Chen <c@jia.je>
Acked-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230808054315.3391465-1-c@jia.je>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Implement the callback for getting the architecture-dependent CPU
ID. The CPU ID is the physical ID described in the ACPI MADT table;
this will be used for cpu hotplug.
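A sketch of what such a callback looks like (the phy_id field name is an
assumption here, not necessarily the field used in target/loongarch/cpu.c):

  /* Illustrative: return the physical CPU id described in the ACPI MADT. */
  static int64_t loongarch_cpu_get_arch_id(CPUState *cs)
  {
      LoongArchCPU *cpu = LOONGARCH_CPU(cs);
      return cpu->phy_id;   /* assumed field holding the MADT physical id */
  }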
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230824005007.2000525-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Add LoongArch32 cpu la132.
Due to the lack of public documentation of la132, it is currently a
synthetic LoongArch32 cpu model. Details need to be added in the future.
Signed-off-by: Jiajie Chen <c@jia.je>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230822032724.1353391-10-gaosong@loongson.cn>
Message-Id: <20230822071959.35620-4-philmd@linaro.org>
The default check parameter is ALL.
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-ID: <20230822032724.1353391-8-gaosong@loongson.cn>
Message-Id: <20230822071959.35620-2-philmd@linaro.org>
In VA32 mode, BL, JIRL and PC* instructions should sign-extend the low
32-bit result to 64 bits.
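A minimal sketch of the idea (variable and flag names are illustrative,
not the exact patch):

  /* Illustrative: sign-extend the low 32-bit result to 64 bits when the
   * translator is in VA32 mode. */
  if (va32) {
      tcg_gen_ext32s_tl(dest, dest);
  }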
Signed-off-by: Jiajie Chen <c@jia.je>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230822032724.1353391-7-gaosong@loongson.cn>
Message-Id: <20230822071959.35620-1-philmd@linaro.org>
When running in VA32 mode (!LA64 or VA32L[1-3] matching PLV), the virtual
address is truncated to 32 bits before address mapping.
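A sketch of the truncation (illustrative; the actual change is in the
LoongArch MMU/TLB lookup path):

  /* Illustrative: mask the virtual address down to 32 bits in VA32 mode
   * before it is used for address mapping. */
  if (va32) {
      address = (uint32_t)address;
  }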
Signed-off-by: Jiajie Chen <c@jia.je>
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230822032724.1353391-6-gaosong@loongson.cn>
Message-Id: <20230822071405.35386-10-philmd@linaro.org>
Add LA64 and VA32 (32-bit Virtual Address) to DisasContext to allow the
translator to reject doubleword instructions in LA32 mode for example.
Signed-off-by: Jiajie Chen <c@jia.je>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20230822032724.1353391-5-gaosong@loongson.cn>
Message-Id: <20230822071405.35386-5-philmd@linaro.org>
VPPN of TLBEHI/TLBREHI is limited to 19 bits in LA32.
Signed-off-by: Jiajie Chen <c@jia.je>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20230822032724.1353391-4-gaosong@loongson.cn>
Message-Id: <20230822071405.35386-4-philmd@linaro.org>
LA32 uses a different encoding for CSR.DMW and a new direct mapping
mechanism.
Signed-off-by: Jiajie Chen <c@jia.je>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20230822032724.1353391-3-gaosong@loongson.cn>
Message-Id: <20230822071405.35386-3-philmd@linaro.org>
The TLB entry of LA32 lacks NR, NX and RPLV and they are hardwired to
zero in LoongArch32.
Signed-off-by: Jiajie Chen <c@jia.je>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20230822032724.1353391-2-gaosong@loongson.cn>
Message-Id: <20230822071405.35386-2-philmd@linaro.org>
GPRs and PC are 32-bit wide in loongarch32 mode.
Signed-off-by: Jiajie Chen <c@jia.je>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20230817093121.1053890-4-gaosong@loongson.cn>
[PMD: Rebased, set gdb_num_core_regs]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230821125959.28666-9-philmd@linaro.org>
Add an is_la64() function to check whether the current cpucfg[1].arch
equals 2 (LA64).
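A sketch of the helper (treat the register-field accessor and field names
as assumptions):

  /* Illustrative: LA64 is indicated by cpucfg[1].arch == 2. */
  static inline bool is_la64(CPULoongArchState *env)
  {
      return FIELD_EX32(env->cpucfg[1], CPUCFG1, ARCH) == 2;
  }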
Signed-off-by: Jiajie Chen <c@jia.je>
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Message-ID: <20230817093121.1053890-2-gaosong@loongson.cn>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230821125959.28666-7-philmd@linaro.org>
Extract loongarch64 specific code from loongarch_cpu_class_init()
to a new loongarch64_cpu_class_init().
In preparation of supporting loongarch32 cores, rename these
functions using the '64' suffix.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230821125959.28666-6-philmd@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
In preparation of introducing TYPE_LOONGARCH32_CPU, introduce
an abstract TYPE_LOONGARCH64_CPU.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230821125959.28666-5-philmd@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20230817093121.1053890-11-gaosong@loongson.cn>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230821125959.28666-4-philmd@linaro.org>
Commit 228021f05e ("target/loongarch: Add core definition") sets
disas_set_info to loongarch_cpu_disas_set_info. Probably due to
a failed git-rebase, commit ca61e75071 ("target/loongarch: Add gdb
support") also sets it to the same value. Remove the duplication.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230821125959.28666-3-philmd@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Various CSR registers have Read/Write fields. We might
want to see the guest trying to change such registers.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20230821125959.28666-2-philmd@linaro.org>
Signed-off-by: Song Gao <gaosong@loongson.cn>
Currently the emulation of VSTRS recognizes partial matches in the presence
of \0 in the haystack, which, according to the PoP, is not correct:
If the ZS flag is one and a zero byte was detected
in the second operand, then there can not be a
partial match ...
Add a check for this. While at it, fold a number of explicitly handled
special cases into the generic logic.
Cc: qemu-stable@nongnu.org
Reported-by: Claudio Fontana <cfontana@suse.de>
Closes: https://lists.gnu.org/archive/html/qemu-devel/2023-08/msg00633.html
Fixes: 1d706f3141 ("target/s390x: vxeh2: vector string search")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230804233748.218935-3-iii@linux.ibm.com>
Tested-by: Claudio Fontana <cfontana@suse.de>
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Unlike most other instructions that contain an immediate element index,
VREP's one is 16-bit, and not 4-bit. The code uses only 8 bits, so
using, e.g., 0x101 does not lead to a specification exception.
Fix by checking all 16 bits.
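A sketch of the shape of the fix (field and helper names are assumptions,
not the exact patch):

  /* Illustrative: read the full 16-bit immediate element index and reject
   * anything outside the element count for this element size. */
  const uint16_t enr = get_field(s, i2);
  if (enr >= (16 >> es)) {
      gen_program_exception(s, PGM_SPECIFICATION);
      return DISAS_NORETURN;
  }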
Cc: qemu-stable@nongnu.org
Fixes: 28d08731b1 ("s390x/tcg: Implement VECTOR REPLICATE")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230807163459.849766-1-iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The length is always truncated to 16 bytes. Do not probe more than
that.
Cc: qemu-stable@nongnu.org
Fixes: 0e0a5b49ad ("s390x/tcg: Implement VECTOR STORE WITH LENGTH")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230804235624.263260-1-iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
VFMIN and VFMAX should raise a specification exception when bits 1-3
of M5 are set.
Cc: qemu-stable@nongnu.org
Fixes: da4807527f ("s390x/tcg: Implement VECTOR FP (MAXIMUM|MINIMUM)")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230804234621.252522-1-iii@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Output messages are slightly modified to ease selection with wildcards
and to report extra parameters.
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Message-Id: <20230804080415.56852-1-clg@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
When FEAT_RME is implemented, these bits override the value of
CNT[VP]_CTL_EL0.IMASK in Realm and Root state. Move the IRQ state update
into a new gt_update_irq() function and test those bits every time we
recompute the IRQ state.
Since we're removing the IRQ state from some trace events, add a new
trace event for gt_update_irq().
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230809123706.1842548-7-jean-philippe@linaro.org
[PMM: only register change hook if not USER_ONLY and if TCG]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
The AT instruction is UNDEFINED if the {NSE,NS} configuration is
invalid. Add a function to check this on all AT instructions that apply
to an EL lower than 3.
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Message-id: 20230809123706.1842548-6-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
At the moment we only handle Secure and Nonsecure security spaces for
the AT instructions. Add support for Realm and Root.
For AArch64, arm_security_space() gives the desired space. ARM DDI0487J
says (R_NYXTL):
If EL3 is implemented, then when an address translation instruction
that applies to an Exception level lower than EL3 is executed, the
Effective value of SCR_EL3.{NSE, NS} determines the target Security
state that the instruction applies to.
For AArch32, some instructions can access NonSecure space from Secure,
so we still need to pass the state explicitly to do_ats_write().
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-5-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
GPC checks are not performed on the output address for AT instructions,
as stated by ARM DDI 0487J in D8.12.2:
When populating PAR_EL1 with the result of an address translation
instruction, granule protection checks are not performed on the final
output address of a successful translation.
Rename get_phys_addr_with_secure(), since it's only used to handle AT
instructions.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-4-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When HCR_EL2.E2H is enabled, TLB entries are formed using the EL2&0
translation regime, instead of the EL2 translation regime. The TLB VAE2*
instructions invalidate the regime that corresponds to the current value
of HCR_EL2.E2H.
At the moment we only invalidate the EL2 translation regime. This causes
problems with RMM, which issues TLBI VAE2IS instructions with
HCR_EL2.E2H enabled. Update vae2_tlbmask() to take HCR_EL2.E2H into
account.
Add vae2_tlbbits() as well, since the top-byte-ignore configuration is
different between the EL2&0 and EL2 regime.
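A rough sketch of the vae2_tlbmask() idea (the actual helper in
target/arm/helper.c may cover more index bits):

  /* Illustrative: invalidate the EL2&0 regime when E2H is in effect,
   * otherwise the EL2 regime. */
  static int vae2_tlbmask(CPUARMState *env)
  {
      uint64_t hcr = arm_hcr_el2_eff(env);
      if (hcr & HCR_E2H) {
          return ARMMMUIdxBit_E20_2 | ARMMMUIdxBit_E20_2_PAN |
                 ARMMMUIdxBit_E20_0;
      }
      return ARMMMUIdxBit_E2;
  }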
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-3-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In realm state, stage-2 translation tables are fetched from the realm
physical address space (R_PGRQD).
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230809123706.1842548-2-jean-philippe@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The PAR_EL1.SH field documents that for the cases of:
* Device memory
* Normal memory with both Inner and Outer Non-Cacheable
the field should be 0b10 rather than whatever was in the
translation table descriptor field. (In the pseudocode this
is handled by PAREncodeShareability().) Perform this
adjustment when assembling a PAR value.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-16-peter.maydell@linaro.org
When we report faults due to stage 2 faults during a stage 1
page table walk, the 'level' parameter should be the level
of the walk in stage 2 that faulted, not the level of the
walk in stage 1. Correct the reporting of these faults.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-15-peter.maydell@linaro.org
The architecture doesn't permit block descriptors at any arbitrary
level of the page table walk; it depends on the granule size which
levels are permitted. We implemented only a partial version of this
check which assumes that block descriptors are valid at all levels
except level 3, which meant that we wouldn't deliver the Translation
fault for all cases of this sort of guest page table error.
Implement the logic corresponding to the pseudocode
AArch64.DecodeDescriptorType() and AArch64.BlockDescSupported().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-14-peter.maydell@linaro.org
When the MMU is disabled, data accesses should be Device nGnRnE,
Outer Shareable, Untagged. We handle the other cases from
AArch64.S1DisabledOutput() correctly but missed this one.
Device nGnRnE is memattr == 0, so the only part we were missing
was that shareability should be set to 2 for both insn fetches
and data accesses.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-13-peter.maydell@linaro.org
We only use S1Translate::out_secure in two places, where we are
setting up MemTxAttrs for a page table load. We can use
arm_space_is_secure(ptw->out_space) instead, which guarantees
that we're setting the MemTxAttrs secure and space fields
consistently, and allows us to drop the out_secure field in
S1Translate entirely.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-12-peter.maydell@linaro.org
We no longer look at the in_secure field of the S1Translate struct
anyway, so we can remove it and all the code which sets it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-11-peter.maydell@linaro.org
Replace the last uses of ptw->in_secure with appropriate
checks on ptw->in_space.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-10-peter.maydell@linaro.org
When we do a translation in Secure state, the NSTable bits in table
descriptors may downgrade us to NonSecure; we update ptw->in_secure
and ptw->in_space accordingly. We guard that check correctly with a
conditional that means it's only applied for Secure stage 1
translations. However, later on in get_phys_addr_lpae() we fold the
effects of the NSTable bits into the final descriptor attributes
bits, and there we do it unconditionally regardless of the CPU state.
That means that in Realm state (where in_secure is false) we will set
bit 5 in attrs, and later use it to decide to output to non-secure
space.
We don't in fact need to do this folding in at all any more (since
commit 2f1ff4e7b9): if an NSTable bit was set then we have
already set ptw->in_space to ARMSS_NonSecure, and in that situation
we don't look at attrs bit 5. The only thing we still need to deal
with is the real NS bit in the final descriptor word, so we can just
drop the code that ORed in the NSTable bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-9-peter.maydell@linaro.org
Pass an ARMSecuritySpace instead of a bool secure to
arm_is_el2_enabled_secstate(). This doesn't change behaviour.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-8-peter.maydell@linaro.org
arm_hcr_el2_eff_secstate() takes a bool secure, which it uses to
determine whether EL2 is enabled in the current security state.
With the advent of FEAT_RME this is no longer sufficient, because
EL2 can be enabled for Secure state but not for Root, and both
of those will pass 'secure == true' in the callsites in ptw.c.
As it happens in all of our callsites in ptw.c we either avoid making
the call or else avoid using the returned value if we're doing a
translation for Root, so this is not a behaviour change even if the
experimental FEAT_RME is enabled. But it is less confusing in the
ptw.c code if we avoid the use of a bool secure that duplicates some
of the information in the ArmSecuritySpace argument.
Make arm_hcr_el2_eff_secstate() take an ARMSecuritySpace argument
instead. Because we always want to know the HCR_EL2 for the
security state defined by the current effective value of
SCR_EL3.{NSE,NS}, it makes no sense to pass ARMSS_Root here,
and we assert that callers don't do that.
To avoid the assert(), we thus push the call to
arm_hcr_el2_eff_secstate() down into the cases in
regime_translation_disabled() that need it, rather than calling the
function and ignoring the result for the Root space translations.
All other calls to this function in ptw.c are already in places
where we have confirmed that the mmu_idx is a stage 2 translation
or that the regime EL is not 3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-7-peter.maydell@linaro.org
Plumb the ARMSecurityState through to regime_translation_disabled()
rather than just a bool is_secure.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-6-peter.maydell@linaro.org
In commit 6d2654ffac we created the S1Translate struct and
used it to plumb through various arguments that we were previously
passing one-at-a-time to get_phys_addr_v5(), get_phys_addr_v6(), and
get_phys_addr_lpae(). Extend that pattern to get_phys_addr_pmsav5(),
get_phys_addr_pmsav7(), get_phys_addr_pmsav8() and
get_phys_addr_disabled(), so that all the get_phys_addr_* functions
we call from get_phys_addr_nogpc() take the S1Translate struct rather
than the mmu_idx and is_secure bool.
(This refactoring is a prelude to having the called functions look
at ptw->in_space rather than using an is_secure boolean.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-5-peter.maydell@linaro.org
The s1ns bit in ARMMMUFaultInfo is documented as "true if
we faulted on a non-secure IPA while in secure state". Both the
places which look at this bit only do so after having confirmed
that this is a stage 2 fault and we're dealing with Secure EL2,
which leaves the ptw.c code free to set the bit to any random
value in the other cases.
Instead of taking advantage of that freedom, consistently
make the bit be set to false for the "not a stage 2 fault
for Secure EL2" cases. This removes some cases where we
were using an 'is_secure' boolean and leaving the reader
guessing about whether that was the right thing for Realm
and Root cases.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-4-peter.maydell@linaro.org
In S1_ptw_translate() we set up the ARMMMUFaultInfo if the attempt to
translate the page descriptor address into a physical address fails.
This used to only be possible if we are doing a stage 2 ptw for that
descriptor address, and so the code always sets fi->stage2 and
fi->s1ptw to true. However, with FEAT_RME it is also possible for
the lookup of the page descriptor address to fail because of a
Granule Protection Check fault. These should not be reported as
stage 2, otherwise arm_deliver_fault() will incorrectly set
HPFAR_EL2. Similarly the s1ptw bit should only be set for stage 2
faults on stage 1 translation table walks, i.e. not for GPC faults.
Add a comment to the other place where we might detect a
stage2-fault-on-stage-1-ptw, in arm_casq_ptw(), noting why we know in
that case that it must really be a stage 2 fault and not a GPC fault.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-3-peter.maydell@linaro.org
For an Unsupported Atomic Update fault where the stage 1 translation
table descriptor update can't be done because it's to an unsupported
memory type, this is a stage 1 abort (per the Arm ARM R_VSXXT). This
means we should not set fi->s1ptw, because this will cause the code
in the get_phys_addr_lpae() error-exit path to mark it as stage 2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230807141514.19075-2-peter.maydell@linaro.org
On MIPS, QEMU requires KVM_VM_MIPS_VZ type for KVM. Report an error in
such a case, as other architectures do when an error occurs during KVM
type decision.
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-4-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Before this change, the default KVM type, which is used for non-virt
machine models, was 0.
The kernel documentation says:
> On arm64, the physical address size for a VM (IPA Size limit) is
> limited to 40bits by default. The limit can be configured if the host
> supports the extension KVM_CAP_ARM_VM_IPA_SIZE. When supported, use
> KVM_VM_TYPE_ARM_IPA_SIZE(IPA_Bits) to set the size in the machine type
> identifier, where IPA_Bits is the maximum width of any physical
> address used by the VM. The IPA_Bits is encoded in bits[7-0] of the
> machine type identifier.
>
> e.g, to configure a guest to use 48bit physical address size::
>
> vm_fd = ioctl(dev_fd, KVM_CREATE_VM, KVM_VM_TYPE_ARM_IPA_SIZE(48));
>
> The requested size (IPA_Bits) must be:
>
> == =========================================================
> 0 Implies default size, 40bits (for backward compatibility)
> N Implies N bits, where N is a positive integer such that,
> 32 <= N <= Host_IPA_Limit
> == =========================================================
> Host_IPA_Limit is the maximum possible value for IPA_Bits on the host
> and is dependent on the CPU capability and the kernel configuration.
> The limit can be retrieved using KVM_CAP_ARM_VM_IPA_SIZE of the
> KVM_CHECK_EXTENSION ioctl() at run-time.
>
> Creation of the VM will fail if the requested IPA size (whether it is
> implicit or explicit) is unsupported on the host.
https://docs.kernel.org/virt/kvm/api.html#kvm-create-vm
So if Host_IPA_Limit < 40, specifying 0 as the type will fail. This
actually confused libvirt, which uses the "none" machine model to probe
KVM availability, on an M2 MacBook Air.
Fix this by using Host_IPA_Limit as the default type when
KVM_CAP_ARM_VM_IPA_SIZE is available.
Cc: qemu-stable@nongnu.org
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-3-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
kvm_arch_get_default_type() returns the default KVM type. This hook is
particularly useful to derive a KVM type that is valid for "none"
machine model, which is used by libvirt to probe the availability of
KVM.
For MIPS, the existing mips_kvm_type() is reused. This function ensures
the availability of VZ which is mandatory to use KVM on the current
QEMU.
Cc: qemu-stable@nongnu.org
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Message-id: 20230727073134.134102-2-akihiko.odaki@daynix.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: added doc comment for new function]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
cpu->cfg.mvendorid is a 32-bit field and kvm_set_one_reg() always writes
a target_ulong val, i.e. a 64-bit field on a 64-bit host.
Given that we're passing a pointer to the mvendorid field, the reg is
reading 64 bits starting from mvendorid and going 32 bits into the next
field, marchid. Here's an example:
$ ./qemu-system-riscv64 -machine virt,accel=kvm -m 2G -smp 1 \
-cpu rv64,marchid=0xab,mvendorid=0xcd,mimpid=0xef(...)
(inside the guest)
# cat /proc/cpuinfo
processor : 0
hart : 0
isa : rv64imafdc_zicbom_zicboz_zihintpause_zbb_sstc
mmu : sv57
mvendorid : 0xab000000cd
marchid : 0xab
mimpid : 0xef
'mvendorid' was written as a combination of 0xab (the value from the
adjacent field, marchid) and its intended value 0xcd.
Fix it by assigning cpu->cfg.mvendorid to a target_ulong var 'reg' and
using it as the input for kvm_set_one_reg(). Here's the result with this patch
applied and using the same QEMU command line:
# cat /proc/cpuinfo
processor : 0
hart : 0
isa : rv64imafdc_zicbom_zicboz_zihintpause_zbb_sstc
mmu : sv57
mvendorid : 0xcd
marchid : 0xab
mimpid : 0xef
This bug affects only the generic (rv64) CPUs when running with KVM in a
64-bit environment, since the 'host' CPU does not allow the machine IDs to be
changed via the command line.
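A minimal sketch of the fix (the reg-id macro is the one already used by
the target/riscv KVM code; treat its exact name as an assumption here):

  /* Illustrative: copy the 32-bit cfg field into a full target_ulong and
   * pass the copy to kvm_set_one_reg(), so exactly 64 valid bits are read. */
  target_ulong reg = cpu->cfg.mvendorid;
  ret = kvm_set_one_reg(cs, RISCV_CONFIG_REG(env, mvendorid), &reg);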
Fixes: 1fb5a622f7 ("target/riscv: handle mvendorid/marchid/mimpid for KVM CPUs")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Acked-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Message-ID: <20230802180058.281385-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
A patch to pass the correct exception address when handling floating
point exceptions.
Merge tag 'or1k-pull-request-20230809' of https://github.com/stffrdhrn/qemu into staging
OpenRISC FPU Fix for 8.1
A patch to pass the correct exception address when handling floating
point exceptions.
# gpg: Signature made Wed 09 Aug 2023 01:31:23 PM PDT
# gpg: using RSA key D9C47354AEF86C103A25EFF1C3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25 EFF1 C3B3 1C2D 5E66 27E4
* tag 'or1k-pull-request-20230809' of https://github.com/stffrdhrn/qemu:
target/openrisc: Set EPCR to next PC on FPE exceptions
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Until v2.07s, the VRMA page size (L||LP) was encoded in LPCR[VRMASD].
In v3.0 that moved to the partition table PS field.
The powernv machine can now run KVM HPT guests on POWER9/10 CPUs with
this fix and the patch to add ASDR.
Fixes: 3367c62f52 ("target/ppc: Support for POWER9 native hash")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-ID: <20230730111842.39292-1-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
HDEC is defined to not wake from PM state. There is a check in the HDEC
timer to avoid setting the interrupt if we are in a PM state, but no
check on PM entry to lower HDEC if it has already fired. This can cause an
HDECR wakeup and a QEMU abort with an unsupported exception in Power Save
mode.
Fixes: 4b236b621b ("ppc: Initial HDEC support")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-ID: <20230726182230.433945-4-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The ASDR register was introduced in ISA v3.0. It has not been
implemented for HPT. With HPT, ASDR is the format of the slbmte RS
operand (containing VSID), which matches the ppc_slb_t field.
Fixes: 3367c62f52 ("target/ppc: Support for POWER9 native hash")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Message-ID: <20230726182230.433945-2-npiggin@gmail.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
When CR0.TS=1, execution of x87 FPU, MMX, and some SSE instructions will
cause a Device Not Available (DNA) exception (#NM). System software uses
this exception event to lazily context switch FPU state.
Before this patch, enter_mmx helpers may be generated just before #NM
generation, prematurely resetting FPU state before the guest has a
chance to save it.
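As an illustration of the required ordering (helper names are the usual
ones in target/i386/tcg/translate.c, but the exact call sites are not
shown and this is not the literal patch):

  /* Illustrative: raise #NM for CR0.TS=1 before emitting any enter_mmx
   * helper, so FPU/MMX state is not clobbered before the guest saves it. */
  if (s->flags & HF_TS_MASK) {
      gen_exception(s, EXCP07_PREX);
      return;
  }
  /* ...only then emit the enter_mmx helper and the rest of the op. */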
Signed-off-by: Matt Borgerson <contact@mborgerson.com>
Message-ID: <CADc=-s5F10muEhLs4f3mxqsEPAHWj0XFfOC2sfFMVHrk9fcpMg@mail.gmail.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
On hppa the Instruction Address Offset Queue (IAOQ) registers specify the
addresses of the next instructions to be executed. Each generated TB writes
those registers at least once, so they are used heavily in generated code.
Looking at the generated assembly, for an x86-64 host this code
to write the address $0x7ffe826f into iaoq_f is generated:
0x7f73e8000184: c7 85 d4 01 00 00 6f 82 movl $0x7ffe826f, 0x1d4(%rbp)
0x7f73e800018c: fe 7f
0x7f73e800018e: c7 85 d8 01 00 00 73 82 movl $0x7ffe8273, 0x1d8(%rbp)
0x7f73e8000196: fe 7f
With the trivial change of moving the variables iaoq_f and iaoq_b to
the top of struct CPUArchState, the offset from %rbp is reduced (from
0x1d4 to 0), which allows the x86-64 TCG backend to generate 3 fewer bytes
of code per move instruction:
0x7fc1e800018c: c7 45 00 6f 82 fe 7f movl $0x7ffe826f, (%rbp)
0x7fc1e8000193: c7 45 04 73 82 fe 7f movl $0x7ffe8273, 4(%rbp)
Overall this is a reduction of generated code (not a reduction of
number of instructions).
A test run which checks the generated code size by running "/bin/ls"
with qemu-user shows that the code size shrinks from 1616767 to 1569273
bytes, i.e. ~97% of the former size.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: qemu-stable@nongnu.org
The arguments for deposit64 are (value, start, length, fieldval); this
code appears to have been written assuming they were (value, fieldval,
start, length). Reorder the parameters to match the actual function.
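For reference, a small self-contained illustration of the correct
parameter order (this is not the patched semihosting code itself):

  #include "qemu/osdep.h"
  #include "qemu/bitops.h"

  /* deposit64(value, start, length, fieldval): write a 32-bit field into
   * the low half of a 64-bit value. */
  static uint64_t set_low_word(uint64_t value, uint32_t field)
  {
      return deposit64(value, 0, 32, field);
  }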
Cc: qemu-stable@nongnu.org
Fixes: 950272506d ("target/m68k: Use semihosting/syscalls.h")
Reported-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230801154519.3505531-1-peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The arguments for deposit64 are (value, start, length, fieldval); this
code appears to have been written assuming they were (value, fieldval,
start, length). Reorder the parameters to match the actual function.
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Fixes: d1e23cbaa4 ("target/nios2: Use semihosting/syscalls.h")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230731235245.295513-1-keithp@keithp.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Instead of using R_ARG0 (the semihost function number), use R_ARG1
(the provided exit status).
Signed-off-by: Keith Packard <keithp@keithp.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230801152245.332749-1-keithp@keithp.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Coverity points out (CID 1507534, 1507968) that we sometimes access
env->xen_singleshot_timer_ns under the protection of
env->xen_timers_lock and sometimes not.
This isn't always an issue. There are two modes for the timers; if the
kernel supports the EVTCHN_SEND capability then it handles all the timer
hypercalls and delivery internally, and all we use the field for is to
get/set the timer as part of the vCPU state via an ioctl(). If the
kernel doesn't have that support, then we do all the emulation within
qemu, and *those* are the code paths where we actually care about the
locking.
But it doesn't hurt to be a little bit more consistent and avoid having
to explain *why* it's OK.
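A sketch of the more consistent pattern (the accessor is illustrative;
the fields are the ones named above):

  /* Illustrative: always access xen_singleshot_timer_ns with
   * xen_timers_lock held, even on paths where the kernel handles the
   * timer and the lock is not strictly required. */
  static uint64_t get_singleshot_timer_ns(CPUX86State *env)
  {
      QEMU_LOCK_GUARD(&env->xen_timers_lock);
      return env->xen_singleshot_timer_ns;
  }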
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Reviewed-by: Paul Durrant <paul@xen.org>
Message-Id: <20230801175747.145906-3-dwmw2@infradead.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The architecture specification calls for the EPCR to be set to "Address
of next not executed instruction" when there is a floating point
exception (FPE). This was not being done, so fix it by using the same
pattern as syscall. Also, move this logic down so it is done for
instructions not in the delay slot, as called for by the architecture
manual.
Without this patch FPU exceptions will loop, as the exception handling
will always return back to the failed floating point instruction.
This was not noticed in earlier testing because:
1. The compiler usually generates code which clobbers the input operand
such as:
lf.div.s r19,r17,r19
2. The target will store the operation output to the register
before handling the exception. So an operation such as:
float a = 100.0f;
float b = 0.0f;
float c = a / b; /* lf.div.s r19,r17,r19 */
Will first execute:
100 / 0 -> Store inf to c (r19)
-> triggering divide by zero exception
-> handle and return
Then it will execute:
100 / inf -> Store 0 to c (no exception)
To confirm the looping behavior and the fix I used the following:
float fpu_div(float a, float b) {
float c;
asm volatile("lf.div.s %0, %1, %2"
: "+r" (c)
: "r" (a), "r" (b));
return c;
}
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
This solves a problem in which the store to LowCore during tlb_fill
triggers a clean-page TB invalidation for page0 during translation,
which results in an assertion failure for locked pages.
By delaying the store until after the exception has been raised,
we will have unwound the pages locked for translation and the
problem does not arise. There are plenty of other updates to
LowCore while delivering an interrupt/exception; trans_exc_code
does not need to be special.
Reviewed-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The change to use translator_use_goto_tb went too far, as the
CF_SINGLE_STEP flag managed by the translator only handles
gdb single stepping and not the architectural single stepping
modeled in DisasContext.singlestep_enabled.
Fixes: 6e9cc373ec ("target/ppc: Use translator_use_goto_tb")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1795
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Currently we list all the Arm decodetree files together and add them
unconditionally to arm_ss. This means we build them for both
qemu-system-aarch64 and qemu-system-arm. However, some of them are
AArch64-specific, so there is no need to build them for
qemu-system-arm. (Meson is smart enough to notice that the generated
.c.inc file is not used by any objects that go into qemu-system-arm,
so we only unnecessarily run decodetree, not anything more
heavyweight like a recompile or relink, but it's still unnecessary
work.)
Split gen into gen_a32 and gen_a64, and only add gen_a64 for
TARGET_AARCH64 compiles.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230718104628.1137734-1-peter.maydell@linaro.org
In commit 0b188ea05a we changed the implementation of
trans_CSEL() to use tcg_constant_i32(). However, this change
was incorrect, because the implementation of the function
sets up the TCGv_i32 rn and rm to be either zero or else
a TCG temp created in load_reg(), and these TCG temps are
then in both cases written to by the emitted TCG ops.
The result is that we hit a TCG assertion:
qemu-system-arm: ../../tcg/tcg.c:4455: tcg_reg_alloc_mov: Assertion `!temp_readonly(ots)' failed.
(or on a non-debug build, just produce a garbage result)
Adjust the code so that rn and rm are always writeable
temporaries whether the instruction is using the special
case "0" or a normal register as input.
Cc: qemu-stable@nongnu.org
Fixes: 0b188ea05a ("target/arm: Use tcg_constant in trans_CSEL")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230727103906.2641264-1-peter.maydell@linaro.org
When converting to decodetree, the code to rebuild mop for the pair
only made it into trans_STP and not into trans_STGP.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1790
Fixes: 8c212eb659 ("target/arm: Convert load/store-pair to decodetree")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230726165416.309624-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
hw/sd/sdhci: Default I/O ops to little endian
hw/mips/loongson3-virt: Only use default USB if available
hw/char/escc: Implement loopback mode to allow self-testing
target/mips: Avoid overruns and shifts by negative number
target/sparc: Handle FPRS correctly on big-endian hosts
target/tricore: Rename tricore_feature to avoid clash with libcapstone
Merge tag 'misc-fixes-20230725' of https://github.com/philmd/qemu into staging
Misc patches queue
hw/sd/sdhci: Default I/O ops to little endian
hw/mips/loongson3-virt: Only use default USB if available
hw/char/escc: Implement loopback mode to allow self-testing
target/mips: Avoid overruns and shifts by negative number
target/sparc: Handle FPRS correctly on big-endian hosts
target/tricore: Rename tricore_feature to avoid clash with libcapstone
# gpg: Signature made Tue 25 Jul 2023 15:55:07 BST
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* tag 'misc-fixes-20230725' of https://github.com/philmd/qemu:
target/tricore: Rename tricore_feature
target/sparc: Handle FPRS correctly on big-endian hosts
target/mips: Avoid shift by negative number in page_table_walk_refill()
target/mips: Pass directory/leaf shift values to walk_directory()
target/mips/mxu: Avoid overrun in gen_mxu_q8adde()
target/mips/mxu: Avoid overrun in gen_mxu_S32SLT()
target/mips/mxu: Replace magic array size by its definition
hw/char/escc: Implement loopback mode
hw/mips: Improve the default USB settings in the loongson3-virt machine
hw/sd/sdhci: Do not force sdhci_mmio_*_ops onto all SD controllers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
this name is used by capstone and will lead to a build failure of QEMU
when capstone is enabled. So we rename it to tricore_has_feature(), to
match has_feature() in translate.c.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1774
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
this name is used by capstone and will lead to a build failure of QEMU
when capstone is enabled. So we rename it to tricore_has_feature(), to
match has_feature() in translate.c.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1774
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20230721060605.76636-1-kbastian@mail.uni-paderborn.de>
In CPUSparcState we define the fprs field as uint64_t. However we
then refer to it in translate.c via a TCGv_i32 which we set up with
tcg_global_mem_new_i32(). This means that on a big-endian host, when
the guest does something to write to the FPRS register, this value
ends up in the wrong half of the uint64_t, and the QEMU C code that
refers to env->fprs sees the wrong value. The effect of this is that
guest code that enables the FPU crashes with spurious FPU Disabled
exceptions. In particular, this is why
tests/avocado/machine_sparc64_sun4u.py:Sun4uMachine.test_sparc64_sun4u
times out on an s390 host.
There are multiple ways we could fix this; since there are actually
only three bits in the FPRS register and the code in translate.c
would be a bit painful to convert to dealing with a TCGv_i64, change
the type of the CPU state struct field to match what translate.c is
expecting.
(None of the other fields referenced by the r32[] array in
sparc_tcg_init() have the wrong type.)
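As a standalone illustration (not QEMU code), a 32-bit view at offset 0
of a 64-bit field lands on different halves depending on host byte order:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* What a TCGv_i32 placed at the field's offset effectively does:
         * alias the first four bytes of the 64-bit field. */
        union { uint64_t u64; uint32_t u32[2]; } fprs = { .u64 = 0 };

        fprs.u32[0] = 0x4;  /* e.g. the guest sets FPRS.FEF */
        /* Prints 0x4 on a little-endian host but 0x400000000 on a
         * big-endian host, where C code testing the low bits sees 0. */
        printf("fprs as seen by C code: %#llx\n",
               (unsigned long long)fprs.u64);
        return 0;
    }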
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20230717103544.637453-1-peter.maydell@linaro.org>
Coverity points out that in page_table_walk_refill() we can
shift by a negative number, which is undefined behaviour
(CID 1452918, 1452920, 1452922). We already catch the
negative directory_shift and leaf_shift as being a "bail
out early" case, but not until we've already used them to
calculate some offset values.
The shifts can be negative only if ptew > 1, so make the
bail-out-early check look directly at that, and only
calculate the shift amounts and the offsets based on them
after we have done that check. This allows
us to simplify the expressions used to calculate the
shift amounts, use an unsigned type, and avoids the
undefined behaviour.
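A sketch of the reordering (variable names are approximate, not the
exact QEMU hunk):

    if (ptew > 1) {
        /* Bail out before deriving any shift amount from ptew. */
        return false;
    }
    /* ptew is now 0 or 1, so the shifts are never negative and an
     * unsigned type is sufficient. */
    unsigned directory_shift = (ptew == 1) ? native_shift + 1 : native_shift;
    unsigned leaf_shift = (ptew == 1) ? native_shift + 1 : native_shift;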
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
[PMD: Check for ptew > 1, use unsigned type]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230717213504.24777-3-philmd@linaro.org>
We already evaluated directory_shift and leaf_shift in
page_table_walk_refill(), no need to do that again: pass
as argument.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20230717213504.24777-2-philmd@linaro.org>
Coverity reports a potential overrun (CID 1517770):
Overrunning array "mxu_gpr" of 15 8-byte elements at
element index 4294967295 (byte offset 34359738367)
using index "XRb - 1U" (which evaluates to 4294967295).
Add a gen_extract_mxu_gpr() helper similar to
gen_load_mxu_gpr() to safely extract MXU registers.
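A sketch of such a guarded helper: register 0 is a hard-wired zero, so
the backing array is only indexed for registers 1..15 (this mirrors the
description above, not necessarily the exact QEMU code):

    static void gen_extract_mxu_gpr(TCGv dst, unsigned reg,
                                    unsigned ofs, unsigned len)
    {
        if (reg == 0) {
            tcg_gen_movi_tl(dst, 0);          /* XR0 always reads as zero */
        } else if (reg <= 15) {
            /* Index 0..14 only, so "reg - 1" can no longer wrap around. */
            tcg_gen_extract_tl(dst, mxu_gpr[reg - 1], ofs, len);
        }
    }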
Fixes: eb79951ab6 ("target/mips/mxu: Add Q8ADDE ... insns")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230712060806.82323-4-philmd@linaro.org>
Coverity reports a potential overrun (CID 1517769):
Overrunning array "mxu_gpr" of 15 8-byte elements at
element index 4294967295 (byte offset 34359738367)
using index "XRb - 1U" (which evaluates to 4294967295).
Use gen_load_mxu_gpr() to safely load MXU registers.
Fixes: ff7936f009 ("target/mips/mxu: Add S32SLT ... insns")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230712060806.82323-3-philmd@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230712060806.82323-2-philmd@linaro.org>
A lot of the code called from helper_exception_bkpt_insn() is written
assuming A-profile, but we will also call this helper on M-profile
CPUs when they execute a BKPT insn. This used to work by accident,
but recent changes mean that we will hit an assert when some of this
code calls down into lower level functions that end up calling
arm_security_space_below_el3(), arm_el_is_aa64(), and other functions
that now explicitly assert that the guest CPU is not M-profile.
Handle M-profile directly to avoid the assertions:
* in arm_debug_target_el(), M-profile debug exceptions always
go to EL1
* in arm_debug_exception_fsr(), M-profile always uses the short
format FSR (compare commit d7fe699be5, though in this case
the code in arm_v7m_cpu_do_interrupt() does not need to
look at the FSR value at all)
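For the first of these changes, a minimal sketch of the check added at
the top of arm_debug_target_el(), assuming the existing
arm_feature()/ARM_FEATURE_M helpers:

    if (arm_feature(env, ARM_FEATURE_M)) {
        /* M-profile debug exceptions always go to EL1. */
        return 1;
    }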
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1775
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230721143239.1753066-1-peter.maydell@linaro.org
The test in tests/avocado/machine_loongarch.py is currently failing
on big endian hosts like s390x. By comparing the traces between running
the QEMU_EFI.fd bios on an s390x and on an x86 host, it's quickly obvious
that the CSRRD instruction for the CPUID is behaving differently. And
indeed: The code currently does a long read (i.e. 64 bit) from the
address that points to the CPUState->cpu_index field (with tcg_gen_ld_tl()
in the trans_csrrd() function). But this cpu_index field is only an "int"
(i.e. 32 bit). While this dirty pointer magic works on little endian hosts,
it of course fails on big endian hosts. Fix it by using a proper helper
function instead.
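A sketch of such a helper; the helper name here is illustrative, while
env_cpu() and cpu_index are existing QEMU facilities:

    uint64_t helper_csrrd_cpuid(CPULoongArchState *env)
    {
        /* Read the 32-bit cpu_index through C code, so the host's
         * endianness no longer matters, and zero-extend it to 64 bit. */
        return (uint32_t)env_cpu(env)->cpu_index;
    }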
Message-Id: <20230720175307.854460-1-thuth@redhat.com>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Type 13 is reserved, so using it should result in a specification
exception. Due to an off-by-1 error the code triggers an assertion at a
later point in time instead.
Cc: qemu-stable@nongnu.org
Fixes: da4807527f ("s390x/tcg: Implement VECTOR FP (MAXIMUM|MINIMUM)")
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230724082032.66864-8-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
MC requires bit positions 8-11 (upper 4 bits of class) to be zeros,
otherwise it must raise a specification exception.
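A sketch of the added check (shown schematically, not as the exact
hunk): the monitor class sits in the 8-bit I2 field, so a non-zero
value in its upper four bits raises the exception:

    if (get_field(s, i2) & 0xf0) {
        gen_program_exception(s, PGM_SPECIFICATION);
        return DISAS_NORETURN;
    }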
Cc: qemu-stable@nongnu.org
Fixes: 20d143e2ca ("s390x/tcg: Implement MONITOR CALL")
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230724082032.66864-6-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
When the mask is zero, access exceptions should still be recognized for
1 byte at the second-operand address. CC should be set to 0.
Cc: qemu-stable@nongnu.org
Fixes: e023e832d0 ("s390x: translate engine for s390x CPU")
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230724082032.66864-5-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
CONVERT TO LOGICAL/FIXED deviate from IEEE 754 in that they raise an
inexact exception on out-of-range inputs. float_flag_invalid_cvti
aligns nicely with that behavior, so convert it to
S390_IEEE_MASK_INEXACT.
Cc: qemu-stable@nongnu.org
Fixes: defb0e3157 ("s390x: Implement opcode helpers")
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230724082032.66864-4-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
When the mask is zero, access exceptions should still be recognized for
1 byte at the second-operand address. CC should be set to 0.
Cc: qemu-stable@nongnu.org
Fixes: defb0e3157 ("s390x: Implement opcode helpers")
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230724082032.66864-3-iii@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
R2 designates an even-odd register pair; the instruction should raise
a specification exception when R2 is not even.
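A sketch of the corresponding check (schematic rather than the exact
hunk):

    if (get_field(s, r2) & 1) {
        /* R2 must designate the even register of an even-odd pair. */
        gen_program_exception(s, PGM_SPECIFICATION);
        return DISAS_NORETURN;
    }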
Cc: qemu-stable@nongnu.org
Fixes: e023e832d0 ("s390x: translate engine for s390x CPU")
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Message-Id: <20230724082032.66864-2-iii@linux.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The previous check was failing with VLEN=128, ELEN=64, SEW=16 and
LMUL=1/8, which is a valid combination.
Fix the check to allow valid combinations when VLEN is a multiple of
ELEN.
From the specification:
"In general, the requirement is to support LMUL ≥ SEWMIN/ELEN, where
SEWMIN is the narrowest supported SEW value and ELEN is the widest
supported SEW value. In the standard extensions, SEWMIN=8. For standard
vector extensions with ELEN=32, fractional LMULs of 1/2 and 1/4 must be
supported. For standard vector extensions with ELEN=64, fractional LMULs
of 1/2, 1/4, and 1/8 must be supported." Elsewhere in the specification
it makes clear that VLEN>=ELEN.
From inspection this new check allows:
VLEN=ELEN=64 1/2, 1/4, 1/8 for SEW >=8
VLEN=ELEN=32 1/2, 1/4 for SEW >=8
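As a sketch of the quoted rule (not the exact QEMU predicate), a
fractional LMUL of 1/frac has to be accepted whenever
frac <= ELEN / SEW_MIN:

    #include <stdbool.h>

    /* SEW_MIN is 8 in the standard extensions, so ELEN=64 requires
     * 1/2, 1/4 and 1/8, while ELEN=32 requires 1/2 and 1/4. */
    static bool fractional_lmul_required(unsigned elen, unsigned frac)
    {
        const unsigned sew_min = 8;
        return frac <= elen / sew_min;
    }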
Fixes: d9b7609a1f ("target/riscv: rvv-1.0: configure instructions")
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Message-Id: <20230718131316.12283-2-rbradford@rivosinc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Commit bd30559568 made changes in how we're checking and disabling
extensions based on env->priv_ver. One of the changes was to move the
extension disablement code to the end of realize(), so that we can
disable extensions after we've auto-enabled some of them.
An unfortunate side effect of this change started to happen with CPUs
that have an older priv version, like sifive-u54. Starting with commit
2288a5ce43 we're auto-enabling zca, zcd and zcf if RVC is enabled,
but these extensions are priv version 1.12.0. When running a cpu that
has an older priv ver (like sifive-u54) the user is spammed with
warnings like these:
qemu-system-riscv64: warning: disabling zca extension for hart 0x0000000000000000 because privilege spec version does not match
qemu-system-riscv64: warning: disabling zcd extension for hart 0x0000000000000000 because privilege spec version does not match
The warnings are part of the code that disables the extension, but in this
case we're throwing user warnings for stuff that we enabled on our own,
without user intervention. Users are left wondering what they did wrong.
A quick 8.1 fix for this nuisance is to check the CPU priv spec before
auto-enabling zca/zcd/zcf. A more appropriate fix will include a more
robust framework that will account for both priv_ver and user choice
when auto-enabling/disabling extensions, but for 8.1 we'll make do
with this simple check.
It's also worth noting that this is the only case where we're
auto-enabling extensions based on a criterion (in this case RVC) that
doesn't match the priv spec of the extensions we're enabling. There's no
need for more 8.1 band-aids.
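A sketch of the guarded auto-enable; the field names follow QEMU's
RISCVCPUConfig, but the exact hunk may differ:

    if (riscv_has_ext(env, RVC) && env->priv_ver >= PRIV_VERSION_1_12_0) {
        cpu->cfg.ext_zca = true;
        if (riscv_has_ext(env, RVF) && env->misa_mxl_max == MXL_RV32) {
            cpu->cfg.ext_zcf = true;
        }
        if (riscv_has_ext(env, RVD)) {
            cpu->cfg.ext_zcd = true;
        }
    }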
Cc: Conor Dooley <conor@kernel.org>
Fixes: 2288a5ce43 ("target/riscv: add cfg properties for Zc* extension")
Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Weiwei Li <liweiwei@iscas.ac.cn>
Tested-by: Conor Dooley <conor.dooley@microchip.com>
Message-Id: <20230717154141.60898-1-dbarboza@ventanamicro.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
In get_phys_addr_twostage() the code that applies the effects of
VSTCR.{SA,SW} and VTCR.{NSA,NSW} only updates result->f.attrs.secure.
Now we also have f.attrs.space for FEAT_RME, we need to keep the two
in sync.
These bits only have an effect for Secure space translations, not
for Root, so use the input in_space field to determine whether to
apply them rather than the input is_secure. This doesn't actually
make a difference because Root translations are never two-stage,
but it's a little clearer.
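A sketch of the synchronisation described above (not necessarily the
exact hunk); arm_secure_to_space() is an existing helper:

    if (ptw->in_space == ARMSS_Secure) {
        result->f.attrs.secure =
            !(env->cp15.vstcr_el2 & (VSTCR_SA | VSTCR_SW))
            && (ipa_secure
                || !(env->cp15.vtcr_el2 & (VTCR_NSA | VTCR_NSW)));
        result->f.attrs.space = arm_secure_to_space(result->f.attrs.secure);
    }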
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-4-peter.maydell@linaro.org
In commit fe4a5472cc we rearranged the logic in S1_ptw_translate()
so that the debug-access "call get_phys_addr_*" codepath is used both
when S1 is doing ptw reads from stage 2 and when it is doing ptw
reads from physical memory. However, we didn't update the
calculation of s2ptw->in_space and s2ptw->in_secure to account for
the "ptw reads from physical memory" case. This meant that debug
accesses when in Secure state broke.
Create a new function S2_security_space() which returns the
correct security space to use for the ptw load, and use it to
determine the correct .in_secure and .in_space fields for the
stage 2 lookup for the ptw load.
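An illustrative shape for the new helper; the mapping is shown for the
physical-address MMU indexes only, and the real function distinguishes
more cases for genuine stage 2 walks:

    static ARMSecuritySpace S2_security_space(ARMSecuritySpace s1_space,
                                              ARMMMUIdx s2_mmu_idx)
    {
        /* Ptw loads made from physical memory take their security space
         * from the MMU index itself; otherwise keep the stage 1 space. */
        switch (s2_mmu_idx) {
        case ARMMMUIdx_Phys_S:
            return ARMSS_Secure;
        case ARMMMUIdx_Phys_NS:
            return ARMSS_NonSecure;
        case ARMMMUIdx_Phys_Root:
            return ARMSS_Root;
        case ARMMMUIdx_Phys_Realm:
            return ARMSS_Realm;
        default:
            return s1_space;
        }
    }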
Reported-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-3-peter.maydell@linaro.org
Fixes: fe4a5472cc ("target/arm: Use get_phys_addr_with_struct in S1_ptw_translate")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add comments to the in_* fields in the S1Translate struct
that explain what they're doing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230710152130.3928330-2-peter.maydell@linaro.org
Replace the 0/-1 result with true/false.
Invert the sense of the test in all callers.
Document the function.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20230707204054.8792-25-richard.henderson@linaro.org>
GINVI and GINVT operations are supported on MIPS I6400 and I6500 cores,
so indicate that properly in CP0.Config5 register bits [16:15].
Cc: qemu-stable@nongnu.org
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@fungible.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230630072806.3093704-1-marcin.nowakowski@fungible.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The instruction implements the SAD (sum of absolute differences)
operation, which is used in motion estimation algorithms. It handles
four 8-bit data elements in parallel.
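For reference, a plain C model of the per-lane computation (not the TCG
implementation; whether the result is accumulated into the destination
is up to the actual instruction):

    #include <stdint.h>
    #include <stdlib.h>

    /* Sum of absolute differences over four packed 8-bit lanes. */
    static uint32_t sad4_u8(uint32_t a, uint32_t b)
    {
        uint32_t sum = 0;
        for (int i = 0; i < 4; i++) {
            int av = (a >> (8 * i)) & 0xff;
            int bv = (b >> (8 * i)) & 0xff;
            sum += abs(av - bv);
        }
        return sum;
    }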
Signed-off-by: Siarhei Volkau <lis8215@gmail.com>
Message-Id: <20230608104222.1520143-34-lis8215@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The instruction shuffles 8 bytes in two registers according to one
of 4 predefined patterns.
Signed-off-by: Siarhei Volkau <lis8215@gmail.com>
Message-Id: <20230608104222.1520143-33-lis8215@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The instruction is used to multiply and accumulate four 8-bit data
elements in parallel.
Signed-off-by: Siarhei Volkau <lis8215@gmail.com>
Message-Id: <20230608104222.1520143-32-lis8215@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
The instruction is used to determine the sign of four 16-bit packed
data elements in parallel.
Signed-off-by: Siarhei Volkau <lis8215@gmail.com>
Message-Id: <20230608104222.1520143-31-lis8215@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
These instructions do parallel quad 8-bit multiply and accumulate.
They are close to the existing Q8MUL and Q8MULSU, so the generation
function is modified to support all of them.
The patch also fixes the decoding of Q8MULSU according to tests on
hardware.
Signed-off-by: Siarhei Volkau <lis8215@gmail.com>
Message-Id: <20230608104222.1520143-30-lis8215@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
These instructions are:
- single 32-bit
- dual 16-bit packed
- quad 8-bit packed
conditional moves.
They are grouped in pool20 in the source code.
Signed-off-by: Siarhei Volkau <lis8215@gmail.com>
Message-Id: <20230608104222.1520143-29-lis8215@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
These instructions are counterparts of D32/Q16-SLL/SLR/SAR, with the
difference that the shift amount is taken from a GPR.
Signed-off-by: Siarhei Volkau <lis8215@gmail.com>
Message-Id: <20230608104222.1520143-28-lis8215@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
These instructions perform the same data shift in various directions,
so one generation function is implemented for all three.
Signed-off-by: Siarhei Volkau <lis8215@gmail.com>
Message-Id: <20230608104222.1520143-27-lis8215@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>