Rather than have the helper take an optional vector address
override, have its caller modify env->nip itself.
This is more consistent when adding pnv nmi support, and also
with mce injection added later.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200325144147.221875-2-npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
System calls (at least in Linux) use registers r3-r8 for inputs, so
include those registers in the dump.
This also adds a mode for PAPR hcalls, which have a different calling
convention.
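A minimal standalone sketch of such a dump, assuming a toy gpr[] array
indexed by register number rather than QEMU's CPUPPCState; the
hcall/syscall distinction here only changes the label:

    #include <inttypes.h>
    #include <stdio.h>

    /* Illustrative only: gpr[3] stands for r3 and so on. */
    static void dump_syscall_regs_sketch(const uint64_t gpr[32], int is_hcall)
    {
        printf(is_hcall ? "hcall:" : "syscall:");
        for (int r = 3; r <= 8; r++) {          /* r3 .. r8 hold the inputs */
            printf(" r%d=0x%016" PRIx64, r, gpr[r]);
        }
        printf("\n");
    }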
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200317054918.199161-1-npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In a few cases, we're able to remove some manual replication.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We can now unify the implementation of the 3 VSPLTI instructions.
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The gen_gvec_dupi switch is unnecessary with the new function.
Replace it with a local gen_gvec_dup_imm that takes care of the
register to offset conversion and length arguments.
Drop zero_vec and use gen_gvec_dup_imm with 0.
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Currently "cf-core.xml" is sent to GDB when using any m68k flavor. Thing is
it uses the "org.gnu.gdb.coldfire.core" feature name and gdb 8.3 then expects
a coldfire FPU instead of the default m68881 FPU.
This is not OK because the m68881 floats registers are 96 bits wide so it
crashes GDB with the following error message:
(gdb) target remote localhost:7960
Remote debugging using localhost:7960
warning: Register "fp0" has an unsupported size (96 bits)
warning: Register "fp1" has an unsupported size (96 bits)
...
Remote 'g' packet reply is too long (expected 148 bytes, got 180 bytes): \
00000000000[...]0000
With this patch: qemu-system-m68k -M none -cpu m68020 -s -S
(gdb) tar rem :1234
Remote debugging using :1234
warning: No executable has been specified and target does not support
determining executable automatically. Try using the "file" command.
0x00000000 in ?? ()
(gdb) p $fp0
$1 = nan(0xffffffffffffffff)
Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <1588094279-17913-3-git-send-email-frederic.konrad@adacore.com>
Message-Id: <20200430190122.4592-10-alex.bennee@linaro.org>
When converted to use GByteArray in commits 462474d760 and
a010bdbe71, the call to stfq_p() was removed. This call
serialized a float.
Since we now use a GByteArray, we cannot use stfq_p() directly.
Introduce the gdb_get_float64() helper to load a float64 register.
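As a standalone approximation (GLib only, big-endian byte order assumed),
the helper's job looks like this; QEMU's real gdb_get_float64() goes
through stfq_p() and the target's endianness machinery:

    #include <glib.h>
    #include <stdint.h>
    #include <string.h>

    /* Append the eight bytes of a 64-bit float image to the register buffer
     * and report how many bytes were written. */
    static int gdb_get_float64_sketch(GByteArray *buf, double val)
    {
        uint64_t bits;
        uint8_t mem[8];

        memcpy(&bits, &val, sizeof(bits));
        for (int i = 0; i < 8; i++) {
            mem[i] = (uint8_t)(bits >> (56 - i * 8));   /* big-endian */
        }
        g_byte_array_append(buf, mem, sizeof(mem));
        return sizeof(mem);
    }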
Fixes: 462474d760 ("target/m68k: use gdb_get_reg helpers")
Fixes: a010bdbe71 ("extend GByteArray to read register helpers")
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200414163853.12164-3-philmd@redhat.com>
Message-Id: <20200430190122.4592-3-alex.bennee@linaro.org>
We're going to want at least some of the NeonGen* typedefs
for the refactored 32-bit Neon decoder, so move them all
to translate.h since it makes more sense to keep them in
one group.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-23-peter.maydell@linaro.org
Convert the Neon VMUL, VMLA, VMLS and VSHL insns in the
3-reg-same grouping to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-20-peter.maydell@linaro.org
Convert the Neon VQADD/VQSUB insns in the 3-reg-same grouping
to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-19-peter.maydell@linaro.org
Convert the Neon comparison ops in the 3-reg-same grouping
to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-18-peter.maydell@linaro.org
Convert the Neon 3-reg-same VMAX and VMIN insns to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-17-peter.maydell@linaro.org
Convert the Neon logic ops in the 3-reg-same grouping to decodetree.
Note that for the logic ops the 'size' field forms part of their
decode and the actual operations are always bitwise.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-16-peter.maydell@linaro.org
Convert the Neon 3-reg-same VADD and VSUB insns to decodetree.
Note that we don't need the neon_3r_sizes[op] check here because all
size values are OK for VADD and VSUB; we'll add this when we convert
the first insn that has size restrictions.
For this we need one of the GVecGen*Fn typedefs currently in
translate-a64.h; move them all to translate.h as a block so they
are visible to the 32-bit decoder.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-15-peter.maydell@linaro.org
Convert the Neon "load/store single structure to one lane" insns to
decodetree.
As this is the last set of insns in the neon load/store group,
we can remove the whole disas_neon_ls_insn() function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-14-peter.maydell@linaro.org
Convert the Neon "load single structure to all lanes" insns to
decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-13-peter.maydell@linaro.org
Convert the Neon "load/store multiple structures" insns to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-12-peter.maydell@linaro.org
Convert the VFM[AS]L (scalar) insns in the 2reg-scalar-ext group
to decodetree. These are the last ones in the group so we can remove
all the legacy decode for the group.
Note that in disas_thumb2_insn() the parts of this encoding space
where the decodetree decoder returns false will correctly be directed
to illegal_op by the "(insn & (1 << 28))" check so they won't fall
into disas_coproc_insn() by mistake.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-11-peter.maydell@linaro.org
Convert the V[US]DOT (scalar) insns in the 2reg-scalar-ext group
to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-10-peter.maydell@linaro.org
Convert VCMLA (scalar) in the 2reg-scalar-ext group to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-9-peter.maydell@linaro.org
Convert the VFM[AS]L (vector) insns to decodetree. This is the last
insn in the legacy decoder for the 3same_ext group, so we can
delete the legacy decoder function for the group entirely.
Note that in disas_thumb2_insn() the parts of this encoding space
where the decodetree decoder returns false will correctly be directed
to illegal_op by the "(insn & (1 << 28))" check so they won't fall
into disas_coproc_insn() by mistake.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-8-peter.maydell@linaro.org
Convert the V[US]DOT (vector) insns to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-7-peter.maydell@linaro.org
Convert the VCADD (vector) insns to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-6-peter.maydell@linaro.org
Convert the VCMLA (vector) insns in the 3same extension group to
decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-5-peter.maydell@linaro.org
Add the infrastructure for building and invoking a decodetree decoder
for the AArch32 Neon encodings. At the moment the new decoder covers
nothing, so we always fall back to the existing hand-written decode.
We follow the same pattern we did for the VFP decodetree conversion
(commit 78e138bc1f and following): code that deals
with Neon will be moving gradually out to translate-neon.vfp.inc,
which we #include into translate.c.
In order to share the decode files between A32 and T32, we
split Neon into 3 parts:
* data-processing
* load-store
* 'shared' encodings
The first two groups of instructions have similar but not identical
A32 and T32 encodings, so we need to manually transform the T32
encoding into the A32 one before calling the decoder; the third group
covers the Neon instructions which are identical in A32 and T32.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200430181003.21682-4-peter.maydell@linaro.org
We were accidentally permitting decode of Thumb Neon insns even if
the CPU didn't have the FEATURE_NEON bit set, because the feature
check was being done before the call to disas_neon_data_insn() and
disas_neon_ls_insn() in the Arm decoder but was omitted from the
Thumb decoder. Push the feature bit check down into the called
functions so it is done for both Arm and Thumb encodings.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200430181003.21682-3-peter.maydell@linaro.org
Somewhere along the line we accidentally added a duplicate
"using D16-D31 when they don't exist" check to do_vfm_dp()
(probably an artifact of a patch series rebase). Remove it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200430181003.21682-2-peter.maydell@linaro.org
MIDR_EL1 is a 64-bit system register with the top 32-bit being RES0.
Represent it in QEMU's ARMCPU struct with a uint64_t, not a
uint32_t.
This fixes an error when compiling with -Werror=conversion
because we were manipulating the register value using a
local uint64_t variable:
target/arm/cpu64.c: In function ‘aarch64_max_initfn’:
target/arm/cpu64.c:628:21: error: conversion from ‘uint64_t’ {aka ‘long unsigned int’} to ‘uint32_t’ {aka ‘unsigned int’} may change value [-Werror=conversion]
628 | cpu->midr = t;
| ^
and future-proofs us against a possible future architecture
change using some of the top 32 bits.
Suggested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20200428172634.29707-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In aarch64_max_initfn() we update both 32-bit and 64-bit ID
registers. The intended pattern is that for 64-bit ID registers we
use FIELD_DP64 and the uint64_t 't' register, while 32-bit ID
registers use FIELD_DP32 and the uint32_t 'u' register. For
ID_AA64DFR0 we accidentally used 'u', meaning that the top 32 bits of
this 64-bit ID register would end up always zero. Luckily at the
moment that's what they should be anyway, so this bug has no visible
effects.
Use the right-sized variable.
Fixes: 3bec78447a
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20200423110915.10527-1-peter.maydell@linaro.org
The ARMv8.2-TTS2UXN feature extends the XN field in stage 2
translation table descriptors from just bit [54] to bits [54:53],
allowing stage 2 to control execution permissions separately for EL0
and EL1. Implement the new semantics of the XN field and enable
the feature for our 'max' CPU.
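A hedged sketch of the widened XN semantics, with the encoding below
taken as an assumption from the feature description rather than copied
from QEMU's page table walker:

    #include <stdbool.h>
    #include <stdint.h>

    /* XN is now descriptor bits [54:53]; decide execute permission for a
     * stage-1 EL0 or EL1 access. */
    static bool stage2_exec_allowed_sketch(uint64_t desc, bool s1_is_el0)
    {
        unsigned xn = (unsigned)(desc >> 53) & 3;  /* XN[1]=bit 54, XN[0]=bit 53 */

        switch (xn) {
        case 0:  return true;           /* executable at EL1 and EL0 */
        case 1:  return s1_is_el0;      /* executable at EL0 only    */
        case 2:  return false;          /* not executable            */
        default: return !s1_is_el0;     /* 3: executable at EL1 only */
        }
    }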
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330210400.11724-5-peter.maydell@linaro.org
For ARMv8.2-TTS2UXN, the stage 2 page table walk wants to know
whether the stage 1 access is for EL0 or not, because whether
exec permission is given can depend on whether this is an EL0
or EL1 access. Add a new argument to get_phys_addr_lpae() so
the call sites can pass this information in.
Since get_phys_addr_lpae() doesn't already have a doc comment,
add one so we have a place to put the documentation of the
semantics of the new s1_is_el0 argument.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330210400.11724-4-peter.maydell@linaro.org
The access_type argument to get_phys_addr_lpae() is an MMUAccessType;
use the enum constant MMU_DATA_LOAD rather than a literal 0 when we
call it in S1_ptw_translate().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330210400.11724-3-peter.maydell@linaro.org
We define ARMMMUIdx_Stage2 as being an MMU index which uses a QEMU
TLB. However we never actually use the TLB -- all stage 2 lookups
are done by direct calls to get_phys_addr_lpae() followed by a
physical address load via address_space_ld*().
Remove Stage2 from the list of ARM MMU indexes which correspond to
real core MMU indexes, and instead put it in the set of "NOTLB" ARM
MMU indexes.
This allows us to drop NB_MMU_MODES to 11. It also means we can
safely add support for the ARMv8.2-TTS2UXN extension, which adds
permission bits to the stage 2 descriptors which define execute
permission separately for EL0 and EL1; supporting that while keeping
Stage2 in a QEMU TLB would require us to use separate TLBs for
"Stage2 for an EL0 access" and "Stage2 for an EL1 access", which is a
lot of extra complication given we aren't even using the QEMU TLB.
In the process of updating the comment on our MMU index use,
fix a couple of other minor errors:
* NS EL2 EL2&0 was missing from the list in the comment
* some text hadn't been updated from when we bumped NB_MMU_MODES
above 8
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330210400.11724-2-peter.maydell@linaro.org
According to the Arm ARM, VQDMULL is only valid when U=0, while
U=1 is unallocated.
Signed-off-by: Fredrik Strupe <fredrik@strupe.net>
Fixes: 695272dcb9 ("target-arm: Handle UNDEF cases for Neon 3-regs-different-widths")
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will move this code in the next commit. Clean it up
first to avoid checkpatch.pl errors.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200423073358.27155-5-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Make cpu_register() (renamed to arm_cpu_register()) available
from internals.h so we can register CPUs also from other files
in the future.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200423073358.27155-3-philmd@redhat.com
Message-ID: <20190921150420.30743-2-thuth@redhat.com>
[PMD: Only take cpu_register() from Thomas's patch]
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Under KVM these registers are written by the hardware.
Restrict the writefn handlers to TCG to avoid the following
link error when building without TCG:
LINK aarch64-softmmu/qemu-system-aarch64
target/arm/helper.o: In function `do_ats_write':
target/arm/helper.c:3524: undefined reference to `raise_exception'
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200423073358.27155-2-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These instructions are often used in glibc's string routines.
They were the final uses of the 32-bit at a time neon helpers.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200418162808.4680-1-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add the pvr-user2 property to control the user-defined
PVR1 User2 register.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add the pvr-user1 property to control the user-defined
PVR0 User1 field.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add the unaligned-exceptions property to control if the core
traps unaligned memory accesses.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add the div-zero-exception property to control if the core
traps divisions by zero.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add the ill-opcode-exception property to control if illegal
instructions will raise exceptions.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Add the opcode-0x0-illegal CPU property to control if the core
should trap opcode zero as illegal.
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Luc Michel <luc.michel@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
The sifive-e34 cpu type is the same as the sifive-e31 with the
single precision floating-point extension enabled.
Signed-off-by: Corey Wharton <coreyw7@fb.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Message-id: 20200313193429.8035-3-coreyw7@fb.com
Message-Id: <20200313193429.8035-3-coreyw7@fb.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
As per the RISC-V H-Extension v0.5 draft, the Stage2 SV32 page table has
12 bits of VPN[1] and 10 bits of VPN[0]. The additional 2 bits in VPN[1]
are required to handle the 34-bit intermediate physical address coming
from the Stage1 SV32 page table. The 12 bits of VPN[1] imply that the
Stage2 SV32 level-0 page table will be 16KB in size with a total of 4096
entries, where each entry maps 4MB of memory (same as the Stage1 SV32
page table).
The get_physical_address() function is broken for Stage2 SV32 level-0
page table because it incorrectly computes output physical address for
Stage2 SV32 level-0 page table entry.
The root cause of the issue is that get_physical_address() uses the
"widened" variable to compute level-0 physical address mapping which
changes level-0 mapping size (instead of 4MB). We should use the
"widened" variable only for computing index of Stage2 SV32 level-0
page table.
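A small sketch of the corrected indexing, with illustrative helper names;
only the widths stated above (12-bit VPN[1], 10-bit VPN[0], 34-bit
intermediate physical address) are assumed:

    #include <stdint.h>

    /* Only the top-level index is widened; a VPN[1] entry still maps 4MB. */
    static uint32_t s2_sv32_vpn1(uint64_t ipa)
    {
        return (uint32_t)(ipa >> 22) & 0xfff;   /* 12-bit index, 4096 entries */
    }

    static uint32_t s2_sv32_vpn0(uint64_t ipa)
    {
        return (uint32_t)(ipa >> 12) & 0x3ff;   /* 10-bit index, unchanged */
    }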
Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20200330082724.120444-1-anup.patel@wdc.com
Message-Id: <20200330082724.120444-1-anup.patel@wdc.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Take the result of stage-1 and stage-2 page table walks and AND the two
protection flags together. This way we require both to set permissions
instead of just stage-2.
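A minimal sketch of the combining rule, with hypothetical names for the
two inputs:

    /* A permission survives only if both translation stages grant it. */
    static int combine_stage_prot(int s1_prot, int s2_prot)
    {
        return s1_prot & s2_prot;
    }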
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Anup Patel <anup@brainfault.org>
Message-id: 846f1e18f5922d818bc464ec32c144ef314ec724.1585262586.git.alistair.francis@wdc.com
Message-Id: <846f1e18f5922d818bc464ec32c144ef314ec724.1585262586.git.alistair.francis@wdc.com>
When doing the first of a two-stage lookup (Hypervisor extensions) don't
set the current protection flags from the second stage lookup of the
base address PTE.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Anup Patel <anup@brainfault.org>
Message-id: 931db85d6890ed4bc2b527fd1011197cd28299aa.1585262586.git.alistair.francis@wdc.com
Message-Id: <931db85d6890ed4bc2b527fd1011197cd28299aa.1585262586.git.alistair.francis@wdc.com>
The unpack facility is an indication that diagnose 308 subcodes 8-10
are available to the guest. That means that the guest can put itself
into protected mode.
Once it is in protected mode, the hardware stops any attempt of VM
introspection by the hypervisor.
Some features are currently not supported in protected mode:
* vfio devices
* Migration
* Huge page backings
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-17-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
For protected VMs status storing is not done by QEMU anymore.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-15-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
For protected guests, we need to put the IO emulation results into the
SIDA, so SIE will write them into the guest at the next entry.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-14-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
IO instruction data is routed through SIDAD for protected guests, so
addresses do not need to be checked, as this is kernel memory which is
always available.
Also the instruction data always starts at offset 0 of the SIDAD.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-13-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
For protected guests the IPIB is written/read to/from the SIDA, so we
need those accesses to go through s390_cpu_pv_mem_read/write().
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-12-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Handling of CPU reset and setting of the IPL psw from guest storage at
offset 0 is done by an Ultravisor call. Let's only fetch it if
necessary.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-11-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
SCLP for a protected guest is done over the SIDAD, so we need to use
the s390_cpu_pv_mem_* functions to access the SIDAD instead of guest
memory when reading/writing SCBs.
To not confuse the sclp emulation, we set 0x4000 as the SCCB address,
since the function that injects the sclp external interrupt would
reject a zero sccb address.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20200319131921.2367-10-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
For protected guests, we need to put the STSI emulation results into
the SIDA, so SIE will write them into the guest at the next entry.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-9-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Protected guests save the instruction control blocks in the SIDA
instead of QEMU/KVM directly accessing the guest's memory.
Let's introduce new functions to access the SIDA.
The memops for doing so are available with KVM_CAP_S390_PROTECTED, so
let's check for that.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-8-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Protected VMs no longer intercept with code 4 for an instruction
interception. Instead they have codes 104 and 108 for protected
instruction interception and protected instruction notification
respectively.
The 104 mirrors the 4 interception.
The 108 is a notification interception to let KVM and QEMU know that
something changed and we need to update tracking information or
perform specific tasks. It's currently taken for the following
instructions:
* spx (To inform about the changed prefix location)
* sclp (On incorrect SCCB values, so we can inject an IRQ)
* sigp (All but "stop and store status")
* diag308 (Subcodes 0/1)
Of these exits only sclp errors, state changing sigps and diag308 will
reach QEMU. QEMU will do its parts of the job, while the ultravisor
has done the instruction part of the job.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200319131921.2367-7-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The unpack facility provides the means to setup a protected guest. A
protected guest cannot be introspected by the hypervisor or any
user/administrator of the machine it is running on.
Protected guests are encrypted at rest and need a special boot
mechanism via diag308 subcode 8 and 10.
Code 8 sets the PV specific IPLB which is retained separately from
those set via code 5.
Code 10 is used to unpack the VM into protected memory, verify its
integrity and start it.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Co-developed-by: Christian Borntraeger <borntraeger@de.ibm.com> [Changes
to machine]
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20200323083606.24520-1-frankja@linux.ibm.com>
[CH: fixed up KVM_PV_VM_ -> KVM_PV_]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Fixes the following coccinelle warnings:
$ spatch --sp-file --verbose-parsing ... \
scripts/coccinelle/remove_local_err.cocci
...
SUSPICIOUS: a \ character appears outside of a #define at ./target/ppc/translate_init.inc.c:5213
SUSPICIOUS: a \ character appears outside of a #define at ./target/ppc/translate_init.inc.c:5261
SUSPICIOUS: a \ character appears outside of a #define at ./target/microblaze/cpu.c:166
SUSPICIOUS: a \ character appears outside of a #define at ./target/microblaze/cpu.c:167
SUSPICIOUS: a \ character appears outside of a #define at ./target/microblaze/cpu.c:169
SUSPICIOUS: a \ character appears outside of a #define at ./target/microblaze/cpu.c:170
SUSPICIOUS: a \ character appears outside of a #define at ./target/microblaze/cpu.c:171
SUSPICIOUS: a \ character appears outside of a #define at ./target/microblaze/cpu.c:172
SUSPICIOUS: a \ character appears outside of a #define at ./target/microblaze/cpu.c:173
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:5787
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:5789
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:5800
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:5801
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:5802
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:5804
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:5805
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:5806
SUSPICIOUS: a \ character appears outside of a #define at ./target/i386/cpu.c:6329
SUSPICIOUS: a \ character appears outside of a #define at ./hw/sd/sdhci.c:1133
SUSPICIOUS: a \ character appears outside of a #define at ./hw/scsi/scsi-disk.c:3081
SUSPICIOUS: a \ character appears outside of a #define at ./hw/net/virtio-net.c:1529
SUSPICIOUS: a \ character appears outside of a #define at ./hw/riscv/sifive_u.c:468
SUSPICIOUS: a \ character appears outside of a #define at ./dump/dump.c:1895
SUSPICIOUS: a \ character appears outside of a #define at ./block/vhdx.c:2209
SUSPICIOUS: a \ character appears outside of a #define at ./block/vhdx.c:2215
SUSPICIOUS: a \ character appears outside of a #define at ./block/vhdx.c:2221
SUSPICIOUS: a \ character appears outside of a #define at ./block/vhdx.c:2222
SUSPICIOUS: a \ character appears outside of a #define at ./block/replication.c:172
SUSPICIOUS: a \ character appears outside of a #define at ./block/replication.c:173
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20200412223619.11284-2-f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
They are part of the IPL process, so let's put them into the ipl
header.
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20200319131921.2367-2-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
In commit 41a4bf1fea the added code to set the CNP
field in ID_MMFR4 for the AArch64 'max' CPU had a typo
where it used the wrong variable name, resulting in ID_MMFR4
fields AC2, XNX and LSM being wrong. Fix the typo.
Fixes: 41a4bf1fea
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20200422124501.28015-1-peter.maydell@linaro.org
If mtmsr L=1 sets MSR[EE] while there is a maskable exception pending,
it does not cause an interrupt. This causes the test case to hang:
https://lists.gnu.org/archive/html/qemu-ppc/2019-10/msg00826.html
More recently, Linux reduced the occurrence of operations (e.g., rfi)
which stop translation and allow pending interrupts to be processed.
This started causing hangs in Linux boot in long-running kernel tests,
running with '-d int' shows the decrementer stops firing despite DEC
wrapping and MSR[EE]=1.
https://lists.ozlabs.org/pipermail/linuxppc-dev/2020-April/208301.html
The cause is the broken mtmsr L=1 behaviour, which is contrary to the
architecture. From Power ISA v3.0B, p.977, Move To Machine State Register,
Programming Note states:
If MSR[EE]=0 and an External, Decrementer, or Performance Monitor
exception is pending, executing an mtmsrd instruction that sets
MSR[EE] to 1 will cause the interrupt to occur before the next
instruction is executed, if no higher priority exception exists
Fix this by handling L=1 exactly the same way as L=0, modulo the MSR
bits altered.
The confusion arises from L=0 being "context synchronizing" whereas L=1
is "execution synchronizing", which is a weaker semantic. However this
is not a relaxation of the requirement that these exceptions cause
interrupts when MSR[EE]=1 (e.g., when mtmsr executes to completion as
TCG is doing here), rather it specifies how a pipelined processor can
have multiple instructions in flight where one may influence how another
behaves.
Cc: qemu-stable@nongnu.org
Reported-by: Anton Blanchard <anton@ozlabs.org>
Reported-by: Nathan Chancellor <natechancellor@gmail.com>
Tested-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200414111131.465560-1-npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since we now use a GByteArray, we cannot use stfl_p() directly.
Introduce the gdb_get_float32() helper to load a float32 register.
Fixes: a010bdbe71 ("extend GByteArray to read register helpers")
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200414163853.12164-2-philmd@redhat.com>
Message-Id: <20200414200631.12799-14-alex.bennee@linaro.org>
Introduce gdb_get_zeroes() to fill a GByteArray with zeroes.
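A standalone sketch of what such a helper can look like (GLib only; the
real QEMU helper may differ in detail):

    #include <glib.h>
    #include <string.h>

    /* Grow the array and make sure the newly exposed bytes read back as zero. */
    static int gdb_get_zeroes_sketch(GByteArray *array, guint len)
    {
        guint oldlen = array->len;

        g_byte_array_set_size(array, oldlen + len);
        memset(array->data + oldlen, 0, len);
        return len;
    }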
Fixes: a010bdbe71 ("extend GByteArray to read register helpers")
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200414102427.7459-1-philmd@redhat.com>
[AJB: used slightly more gliby set_size approach]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20200414200631.12799-13-alex.bennee@linaro.org>
We should only pass gdb_get_reg16() the GByteArray* object
itself; there is no need to shift. Without this patch, a gdb remote
attach will crash QEMU:
(gdb) target remote :1234
Remote debugging using :1234
Remote communication error. Target disconnected.: Connection reset by peer.
$ qemu-system-x86_64 -m 1G -smp 4 ... -s
ERROR:qemu/gdbstub.c:1843:handle_read_all_regs: assertion failed: (len == gdbserver_state.mem_buf->len)
Bail out! ERROR:qemu/gdbstub.c:1843:handle_read_all_regs: assertion failed: (len == gdbserver_state.mem_buf->len)
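A standalone demonstration of the invariant that broke, using a local
stand-in for gdb_get_reg16() (GLib only, big-endian bytes assumed):
because the helper appends to the array itself, callers must pass the
array unshifted so the returned length stays in sync with mem_buf->len:

    #include <glib.h>
    #include <stdint.h>

    static int gdb_get_reg16_sketch(GByteArray *buf, uint16_t val)
    {
        uint8_t b[2] = { val >> 8, val & 0xff };
        g_byte_array_append(buf, b, 2);
        return 2;
    }

    int main(void)
    {
        GByteArray *mem_buf = g_byte_array_new();
        int len = 0;

        len += gdb_get_reg16_sketch(mem_buf, 0x1234);  /* not: mem_buf + len */
        len += gdb_get_reg16_sketch(mem_buf, 0x5678);
        g_assert((guint)len == mem_buf->len);  /* the assertion that fired above */

        g_byte_array_free(mem_buf, TRUE);
        return 0;
    }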
Fixes: a010bdbe71 ("extend GByteArray to read register helpers")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200409164954.36902-3-peterx@redhat.com>
Message-Id: <20200414200631.12799-12-alex.bennee@linaro.org>
Since a010bdbe71 the gdbstub API takes a GByteArray*. Unfortunately
we forgot to update the gdb_get_reg*() calls. Do it now.
Fixes: a010bdbe71 ("extend GByteArray to read register helpers")
Reported-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200409172509.4078-1-philmd@redhat.com>
Message-Id: <20200414200631.12799-11-alex.bennee@linaro.org>
Patch acb9f95a7c "i386: Fix GCC warning with snprintf when HAX
is enabled" replaced Windows device names with posix device
names. Revert this.
Fixes: acb9f95a7c "i386: Fix GCC warning with snprintf when HAX is enabled"
Signed-off-by: Volker Rümelin <vr_qemu@t-online.de>
Message-Id: <20200322210211.29603-1-vr_qemu@t-online.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Coverity reported a missing fall through comment, add it.
Fixes: e5918d7d7f ("target/rx: TCG translation")
Reported-by: Coverity (CID 1422222 MISSING_BREAK)
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200403184419.28556-1-philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Rather than dynamically allocate, and risk failing to free
when we longjmp out of the translator, allocate the maximum
buffer size based on the maximum supported instruction length.
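A sketch of the shape of that change, with illustrative names and an
assumed upper bound:

    /* A fixed-size buffer sized for the longest supported instruction lives
     * inside the translation context, so nothing needs freeing if we
     * longjmp() out of the translator. */
    #define MAX_INSN_SLOT_LEN 64   /* assumed bound for this sketch */

    typedef struct DisasSlotSketch {
        unsigned char insnbuf[MAX_INSN_SLOT_LEN];   /* was a malloc'd pointer */
        unsigned insn_len;
    } DisasSlotSketch;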
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Richard Henderson <richard.henderson@linaro.org>
Dynamically allocating a new structure within the DisasContext can
potentially leak as we can longjmp out of the translation loop (see
test_phys_mem). The proper fix would be to use static allocation
within the DisasContext, but as the Xtensa translator imports its code
from elsewhere I leave that as an exercise for the maintainer.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20200403191150.863-6-alex.bennee@linaro.org>
The KVM FWNMI capability should be enabled with the "ibm,nmi-register"
rtas call. Although MCEs from KVM will be delivered as architected
interrupts to the guest before "ibm,nmi-register" is called, KVM has
different behaviour depending on whether the guest has enabled FWNMI
(it attempts to do more recovery on behalf of a non-FWNMI guest).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200325142906.221248-2-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
An old comment in get_phys_addr_lpae() claims that the code does not
support the different format TCR for VTCR_EL2. This used to be true
but it is not true now (in particular the aa64_va_parameters() and
aa32_va_parameters() functions correctly handle the different
register format by checking whether the mmu_idx is Stage2).
Remove the out of date parts of the comment.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200331143407.3186-1-peter.maydell@linaro.org
Our implementation of the PSTATE.PAN bit incorrectly cleared all
access permission bits for privileged access to memory which is
user-accessible. It should only affect the privileged read and write
permissions; execute permission is dealt with via XN/PXN instead.
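A tiny sketch of the corrected behaviour, with toy permission bits rather
than QEMU's PAGE_* definitions:

    enum { PROT_R = 1, PROT_W = 2, PROT_X = 4 };

    /* PAN removes privileged read and write access only; execute permission
     * stays under XN/PXN control. */
    static int apply_pan_sketch(int prot)
    {
        return prot & ~(PROT_R | PROT_W);   /* previously all bits were cleared */
    }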
Fixes: 81636b70c2
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200330170651.20901-1-peter.maydell@linaro.org
While support for parsing ieee_half in the XML description was added
to gdb in 2019 (a6d0f249) there is no easy way for the gdbstub to know
if the gdb end will understand it. Disable it for now and allow older
gdbs to successfully connect to the default -cpu max SVE enabled
QEMUs.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200402143913.24005-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit 048c95163b ("target/i386: work around KVM_GET_MSRS bug for
secondary execution controls") added a workaround for KVM pre-dating
commit 6defc591846d ("KVM: nVMX: include conditional controls in /dev/kvm
KVM_GET_MSRS") which wasn't setting certain available controls. The
workaround uses generic CPUID feature bits to set missing VMX controls.
It was found that in some cases it is possible to observe hosts which
have certain CPUID features but lack the corresponding VMX control.
In particular, it was reported that Azure VMs have RDSEED but lack
VMX_SECONDARY_EXEC_RDSEED_EXITING; attempts to enable this feature
bit result in QEMU abort.
Resolve the issue by not applying the workaround when we don't have
to. As there is no good way to find out if KVM has the fix itself, use
95c5c7c77c ("KVM: nVMX: list VMX MSRs in KVM_GET_MSR_INDEX_LIST") instead
as these [are supposed to] come together.
Fixes: 048c95163b ("target/i386: work around KVM_GET_MSRS bug for secondary execution controls")
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200331162752.1209928-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The sequence of instructions exposes an issue:
sti
hlt
Interrupts cannot be delivered to the hvf vCPU after the hlt instruction
because HF_INHIBIT_IRQ_MASK is set just before hlt is handled and never
reset after moving the instruction pointer beyond hlt.
So, after hvf_vcpu_exec() returns, CPU thread gets locked up forever in
qemu_wait_io_event() (cpu_thread_is_idle() evaluates inhibition
flag and considers the CPU idle if the flag is set).
Cc: Cameron Esfahani <dirty@apple.com>
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Message-Id: <20200328174411.51491-1-r.bolshakov@yadro.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The cpu number reporting is handled by KVM and QEMU only fills in the
VM name, uuid and other values.
Unfortunately KVM doesn't report reserved cpus and doesn't even know
they exist until they are created via the ioctl.
So let's fix up the cpu values after KVM has written its values to the
3.2.2 sysib. To be consistent, we use the same code to retrieve the cpu
numbers as the STSI TCG code in target/s390x/misc_helper.c:HELPER(stsi).
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20200331110123.3774-1-frankja@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Current Icelake-Server CPU model lacks all the features enumerated by
MSR_IA32_ARCH_CAPABILITIES.
Add them, so that guests using "Icelake-Server" can see all of them.
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Message-Id: <20200316095605.12318-1-xiaoyao.li@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The CPUID level needs to be set to 0x14 manually on old
machine types if Intel PT is enabled in the guest, e.g.
CPUID[0].EAX(level)=7 and CPUID[7].EBX[25](intel-pt)=1 when
QEMU is started with "-machine pc-i440fx-3.1 -cpu qemu64,+intel-pt".
Some Intel PT capabilities are exposed by leaf 0x14 and the
missing capabilities will cause some MSR accesses to fail.
This patch adds a warning message to inform the user to extend
the CPUID level.
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Message-Id: <1584031686-16444-1-git-send-email-luwei.kang@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
If the system is NUMA-configured, the pkg_offset needs
to be adjusted for EPYC cpu models. Fix it by calling the
model-specific handler.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <158396725589.58170.16424607815207074485.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The APIC ID is decoded based on the sequence sockets->dies->cores->threads.
This works fine for most standard AMD and other vendors' configurations,
but this decoding sequence does not strictly follow AMD's APIC ID
enumeration. In some cases this can cause CPU topology inconsistency.
When booting a guest VM, the kernel tries to validate the topology, and
finds it inconsistent with the enumeration of EPYC cpu models. More
details are in the bug https://bugzilla.redhat.com/show_bug.cgi?id=1728166.
To fix the problem we need to build the topology as per the Processor
Programming Reference (PPR) for AMD Family 17h Model 01h, Revision B1
Processors. The documentation is available from the bugzilla Link below.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
It is also available at
https://www.amd.com/system/files/TechDocs/55570-B1_PUB.zip
Here is the text from the PPR.
Operating systems are expected to use Core::X86::Cpuid::SizeId[ApicIdSize], the
number of least significant bits in the Initial APIC ID that indicate core ID
within a processor, in constructing per-core CPUID masks.
Core::X86::Cpuid::SizeId[ApicIdSize] determines the maximum number of cores
(MNC) that the processor could theoretically support, not the actual number of
cores that are actually implemented or enabled on the processor, as indicated
by Core::X86::Cpuid::SizeId[NC].
Each Core::X86::Apic::ApicId[ApicId] register is preset as follows:
• ApicId[6] = Socket ID.
• ApicId[5:4] = Node ID.
• ApicId[3] = Logical CCX L3 complex ID
• ApicId[2:0]= (SMT) ? {LogicalCoreID[1:0],ThreadId} : {1'b0,LogicalCoreID[1:0]}
The new apic id encoding is enabled for EPYC and EPYC-Rome models.
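A hypothetical helper mirroring the ApicId layout quoted above for the
SMT case; the field widths come from the PPR text, not from QEMU's
topology code:

    #include <stdint.h>

    static uint32_t epyc_apicid_smt_sketch(uint32_t socket, uint32_t node,
                                           uint32_t ccx, uint32_t core,
                                           uint32_t thread)
    {
        return ((socket & 1) << 6) |   /* ApicId[6]   = Socket ID           */
               ((node & 3) << 4) |     /* ApicId[5:4] = Node ID             */
               ((ccx & 1) << 3) |      /* ApicId[3]   = logical CCX (L3) ID */
               ((core & 3) << 1) |     /* ApicId[2:1] = LogicalCoreID[1:0]  */
               (thread & 1);           /* ApicId[0]   = ThreadId            */
    }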
Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <158396724913.58170.3539083528095710811.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add a boolean variable use_epyc_apic_id_encoding in X86CPUDefinition.
This will be set if the cpu model needs to use the new EPYC-based
apic id encoding.
Override the handlers with the EPYC-based handlers if use_epyc_apic_id_encoding
is set. This will be done in x86_cpus_init.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <158396723514.58170.14825482171652019765.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Use the new functions from topology.h and delete the unused code. Given the
sockets, nodes, cores and threads, the new functions generate apic id for EPYC
mode. Removes all the hardcoded values.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <158396722151.58170.8031705769621392927.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
arm_current_el() should be invoked after mode switching. Otherwise, we
get a wrong current EL value, since the current EL is also determined by
the current mode.
Fixes: 4a2696c0d4 ("target/arm: Set PAN bit as required on exception entry")
Signed-off-by: Changbin Du <changbin.du@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20200328140232.17278-1-changbin.du@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Loongson multimedia condition instructions were previously implemented as
writing 0 to rd due to lack of documentation. I have now confirmed the
encoding with Loongson and implemented the instructions correctly.
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Acked-by: Huacai Chen <chenhc@lemote.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Message-Id: <20200324122212.11156-1-jiaxun.yang@flygoat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Notice the magic page during translate, much like we already
do for the arm32 commpage. At runtime, raise an exception to
return to cpu_loop for emulation.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200213032223.14643-4-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
We are not short of numbers for EXCP_*. There is no need to confuse things
by having EXCP_VMEXIT and EXCP_SYSCALL overlap, even though the former is
only used for system mode and the latter is only used for user mode.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200213032223.14643-2-richard.henderson@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The ppc_dcr_read() and ppc_dcr_write() functions call into callbacks
in device code, so we need to hold the QEMU iothread lock while
calling them. This is the case already for the callsites in
kvmppc_handle_dcr_read/write(), but we must also take the lock when
calling the helpers from TCG.
This fixes a bug where attempting to initialise the PPC405EP
SDRAM will cause an assertion when sdram_map_bcr() attempts
to remap memory regions.
Reported-by: Amit Lazar <abasarlaz@hotmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20200322192258.14039-1-peter.maydell@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The new ISA v3.0 slbia variants have not been implemented for TCG,
which can lead to crashing when a POWER9 machine boots Linux using
the hash MMU, for example ("disable_radix" kernel command line).
Add them.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200319064439.1020571-1-npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
[dwg: Fixed compile error for USER_ONLY builds]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
slbia must invalidate TLBs even if it does not remove a valid SLB
entry, because slbmte can overwrite valid entries without removing
their TLBs.
As the architecture says, slbia invalidates all lookaside information,
not conditionally based on if it removed valid entries.
It does not seem possible for POWER8 or earlier Linux kernels to hit
this bug because it never changes its kernel SLB translations, and it
should always have valid entries if any accesses are made to userspace
regions. However other operating systems which may modify SLB entry 0
or do more fancy things with segments might be affected.
When POWER9 slbia support is added in the next patch, this becomes a
real problem because some new slbia variants don't invalidate all
non-zero entries.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200318044135.851716-1-npiggin@gmail.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Coverity reports a BAD_SHIFT with ctz32(imm5), with imm5 == 0.
This is an invalid encoding, but we diagnose that just below
by rejecting size > 3. Avoid the warning by sinking the
computation of index below the check.
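A sketch of the reordering, with ctz32() defined to match the 32-for-zero
behaviour mentioned above and the field layout assumed from the commit
text:

    #include <stdint.h>

    static inline int ctz32(uint32_t v) { return v ? __builtin_ctz(v) : 32; }

    static int decode_dup_element_sketch(uint32_t imm5, int *index)
    {
        int size = ctz32(imm5);        /* 32 when imm5 == 0 */

        if (size > 3) {
            return -1;                 /* invalid encoding, bail out first */
        }
        *index = imm5 >> (size + 1);   /* safe now: size is 0..3 */
        return size;
    }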
Reported-by: Coverity (CID 1421965)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200320160622.8040-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Coverity raised a shed-load of errors cascading from inferring
that clz32(immh) might yield 32, because immh might be 0.
While immh cannot be 0 given the encoding, it is not obvious even to
a human how we've checked that: via the filtering provided by
data_proc_simd[].
Reported-by: Coverity (CID 1421923, and more)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200320160622.8040-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Coverity rightly notes that ctz32(bas) on 0 will return 32,
which makes the len calculation a BAD_SHIFT.
A value of 0 in DBGWCR<n>_EL1.BAS is reserved. Simply move
the existing check we have for this case.
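A rough sketch of the shape of the fix (not QEMU's code): reject the
reserved BAS value before any ctz32()-derived shift is computed:

    #include <stdint.h>

    static inline int ctz32(uint32_t v) { return v ? __builtin_ctz(v) : 32; }

    static int watchpoint_len_from_bas(uint32_t bas, int *byte_offset)
    {
        if (bas == 0) {
            return 0;                           /* reserved: no watchpoint */
        }
        *byte_offset = ctz32(bas);              /* first selected byte */
        return ctz32(~(bas >> *byte_offset));   /* contiguous set bits */
    }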
Reported-by: Coverity (CID 1421964)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20200320160622.8040-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are so many different forms of each RX instruction
that it will be very useful to be able to look at the bytes
to see on which path a bug may lie.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190531134315.4109-24-richard.henderson@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Collected, to be used in the next patch.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190531134315.4109-23-richard.henderson@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Many of the multi-part prints have been eliminated by previous
patches. Eliminate the rest of them.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190531134315.4109-22-richard.henderson@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Note that the ld == 3 case handled by prt_ldmi is decoded as
XCHG_rr and cannot appear here.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190531134315.4109-21-richard.henderson@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This is consistent with prt_ri(). It loads all data before
beginning output and uses exactly one call to prt() to emit
the full instruction.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190531134315.4109-20-richard.henderson@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
We were eliding all zero indexes. It is only ld==0 that does
not have an index in the instruction. This also allows us to
avoid breaking the final print into multiple pieces.
Reviewed-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190531134315.4109-19-richard.henderson@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200224141923.82118-8-ysato@users.sourceforge.jp>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMD: Use newer QOM style, split cpu-qom.h, restrict access to
extable array, use rx_cpu_tlb_fill() extracted from patch of
Yoshinori Sato 'Convert to CPUClass::tlb_fill', call cpu_reset
after qemu_init_vcpu, make rx_crname a function]
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20200224141923.82118-7-ysato@users.sourceforge.jp>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
[PMD: Use GByteArray in gdbstub (rebase commit a010bdbe),
use device_class_set_parent_reset (rebase commit 781c67ca)]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMD: Removed tlb_fill, extracted from patch of Yoshinori Sato
'Convert to CPUClass::tlb_fill']
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200224141923.82118-6-ysato@users.sourceforge.jp>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
This part only supports RXv1 instructions.
Instruction manual:
https://www.renesas.com/us/en/doc/products/mpumcu/doc/rx_family/r01us0032ej0120_rxsm.pdf
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200224141923.82118-5-ysato@users.sourceforge.jp>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-and-machine-pull-request' into staging
x86 and machine queue for 5.0 soft freeze
Bug fixes:
* memory encryption: Disable mem merge
(Dr. David Alan Gilbert)
Features:
* New EPYC CPU definitions (Babu Moger)
* Denverton-v2 CPU model (Tao Xu)
* New 'note' field on versioned CPU models (Tao Xu)
Cleanups:
* x86 CPU topology cleanups (Babu Moger)
* cpu: Use DeviceClass reset instead of a special CPUClass reset
(Peter Maydell)
# gpg: Signature made Wed 18 Mar 2020 01:16:43 GMT
# gpg: using RSA key 5A322FD5ABC4D3DBACCFD1AA2807936F984DC5A6
# gpg: issuer "ehabkost@redhat.com"
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-and-machine-pull-request:
hw/i386: Rename apicid_from_topo_ids to x86_apicid_from_topo_ids
hw/i386: Update structures to save the number of nodes per package
hw/i386: Remove unnecessary initialization in x86_cpu_new
machine: Add SMP Sockets in CpuTopology
hw/i386: Consolidate topology functions
hw/i386: Introduce X86CPUTopoInfo to contain topology info
cpu: Use DeviceClass reset instead of a special CPUClass reset
machine/memory encryption: Disable mem merge
hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs
i386: Add 2nd Generation AMD EPYC processors
i386: Add missing cpu feature bits in EPYC model
target/i386: Add new property note to versioned CPU models
target/i386: Add Denverton-v2 (no MPX) CPU model
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-and-gdbstub-170320-1' into staging
Testing and gdbstub updates:
- docker updates for VirGL
- re-factor gdbstub for static GDBState
- re-factor gdbstub for dynamic arrays
- add SVE support to arm gdbstub
- add some guest debug tests to check-tcg
- add aarch64 userspace register tests
- remove packet size limit to gdbstub
- simplify gdbstub monitor code
- report vContSupported in gdbstub to use proper single-step
# gpg: Signature made Tue 17 Mar 2020 17:47:46 GMT
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-testing-and-gdbstub-170320-1: (28 commits)
gdbstub: Fix single-step issue by confirming 'vContSupported+' feature to gdb
gdbstub: do not split gdb_monitor_write payload
gdbstub: change GDBState.last_packet to GByteArray
tests/tcg/aarch64: add test-sve-ioctl guest-debug test
tests/tcg/aarch64: add SVE ioctl test
tests/tcg/aarch64: add a gdbstub testcase for SVE registers
tests/guest-debug: add a simple test runner
configure: allow user to specify what gdb to use
tests/tcg/aarch64: userspace system register test
target/arm: don't bother with id_aa64pfr0_read for USER_ONLY
target/arm: generate xml description of our SVE registers
target/arm: default SVE length to 64 bytes for linux-user
target/arm: explicitly encode regnum in our XML
target/arm: prepare for multiple dynamic XMLs
gdbstub: extend GByteArray to read register helpers
target/i386: use gdb_get_reg helpers
target/m68k: use gdb_get_reg helpers
target/arm: use gdb_get_reg helpers
gdbstub: add helper for 128 bit registers
gdbstub: move mem_buf to GDBState and use GByteArray
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-5.0-20200317' into staging
ppc patch queue 2020-03-17
Here's my final pull request for the qemu-5.0 soft freeze. Sorry this
is just under the wire - I hit some last minute problems that took a
while to fix up and retest.
Highlights are:
* Numerous fixes for the FWNMI feature
* A handful of cleanups to the device tree construction code
* Numerous fixes for the spapr-vscsi device
* A number of fixes and cleanups for real mode (MMU off) softmmu
handling
* Fixes for handling of the PAPR RMA
* Better handling of hotplug/unplug events during boot
* Assorted other fixes
# gpg: Signature made Tue 17 Mar 2020 09:55:07 GMT
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-5.0-20200317: (45 commits)
pseries: Update SLOF firmware image
ppc/spapr: Ignore common "ibm,nmi-interlock" Linux bug
ppc/spapr: Implement FWNMI System Reset delivery
target/ppc: allow ppc_cpu_do_system_reset to take an alternate vector
ppc/spapr: Allow FWNMI on TCG
ppc/spapr: Fix FWNMI machine check interrupt delivery
ppc/spapr: Add FWNMI System Reset state
ppc/spapr: Change FWNMI names
ppc/spapr: Fix FWNMI machine check failure handling
spapr: Rename DT functions to newer naming convention
spapr: Move creation of ibm,architecture-vec-5 property
spapr: Move creation of ibm,dynamic-reconfiguration-memory dt node
spapr/rtas: Reserve space for RTAS blob and log
pseries: Update SLOF firmware image
ppc/spapr: Move GPRs setup to one place
target/ppc: Fix rlwinm on ppc64
spapr/xive: use SPAPR_IRQ_IPI to define IPI ranges exposed to the guest
hw/scsi/spapr_vscsi: Convert debug fprintf() to trace event
hw/scsi/spapr_vscsi: Prevent buffer overflow
hw/scsi/spapr_vscsi: Do not mix SRP IU size with DMA buffer size
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update structures X86CPUTopoIDs and CPUX86State to hold the number of
nodes per package. This is required to build EPYC mode topology.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <158396720035.58170.1973738805301006456.stgit@naples-babu.amd.com>
Now that we have all the parameters in X86CPUTopoInfo, we can just
pass the structure to calculate the offsets and width.
Signed-off-by: Babu Moger <babu.moger@amd.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <158396717953.58170.5628042059144117669.stgit@naples-babu.amd.com>
The CPUClass has a 'reset' method. This is a legacy from when
TYPE_CPU used not to inherit from TYPE_DEVICE. We don't need it any
more, as we can simply use the TYPE_DEVICE reset. The 'cpu_reset()'
function is kept as the API which most places use to reset a CPU; it
is now a wrapper which calls device_cold_reset() and then the
tracepoint function.
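For illustration, the wrapper ends up with roughly this shape (a sketch; the tracepoint call is assumed to stay as in the existing reset path):

    void cpu_reset(CPUState *cpu)
    {
        /* reset through the normal TYPE_DEVICE machinery ... */
        device_cold_reset(DEVICE(cpu));

        /* ... then keep the existing tracepoint */
        trace_guest_cpu_reset(cpu);
    }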
This change should not cause CPU objects to be reset more often
than they are at the moment, because:
* nobody is directly calling device_cold_reset() or
qdev_reset_all() on CPU objects
* no CPU object is on a qbus, so they will not be reset either
by somebody calling qbus_reset_all()/bus_cold_reset(), or
by the main "reset sysbus and everything in the qbus tree"
reset that most devices are reset by
Note that this does not change the need for each machine or whatever
to use qemu_register_reset() to arrange to call cpu_reset() -- that
is necessary because CPU objects are not on any qbus, so they don't
get reset when the qbus tree rooted at the sysbus bus is reset, and
this isn't being changed here.
All the changes to the files under target/ were made using the
included Coccinelle script, except:
(1) the deletion of the now-inaccurate and not terribly useful
"CPUClass::reset" comments was done with a perl one-liner afterwards:
perl -n -i -e '/ CPUClass::reset/ or print' target/*/*.c
(2) this bit of the s390 change was done by hand, because the
Coccinelle script is not sophisticated enough to handle the
parent_reset call being inside another function:
| @@ -96,8 +96,9 @@ static void s390_cpu_reset(CPUState *s, cpu_reset_type type)
| S390CPU *cpu = S390_CPU(s);
| S390CPUClass *scc = S390_CPU_GET_CLASS(cpu);
| CPUS390XState *env = &cpu->env;
|+ DeviceState *dev = DEVICE(s);
|
|- scc->parent_reset(s);
|+ scc->parent_reset(dev);
| cpu->env.sigp_order = 0;
| s390_cpu_set_state(S390_CPU_STATE_STOPPED, cpu);
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20200303100511.5498-1-peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add support for 2nd Gen AMD EPYC Processors. The model display
name will be EPYC-Rome.
Add the following new feature bits on top of the feature bits from the
first generation EPYC models.
perfctr-core : core performance counter extensions support. Enables the VM to
use extended performance counter support with six programmable
counters instead of four.
clzero : instruction zeroes out the 64-byte cache line specified in RAX.
xsaveerptr : FXSAVE, XSAVE, FXSAVEOPT, XSAVEC, XSAVES always save error
pointers and FXRSTOR, XRSTOR, XRSTORS always restore error
pointers.
wbnoinvd : Write back and do not invalidate cache
ibpb : Indirect Branch Prediction Barrier
amd-stibp : Single Thread Indirect Branch Predictor
clwb : Cache Line Write Back and Retain
xsaves : XSAVES, XRSTORS and IA32_XSS support
rdpid : Read Processor ID instruction support
umip : User-Mode Instruction Prevention support
The reference documents are available at:
https://developer.amd.com/wp-content/resources/55803_0.54-PUB.pdf
https://www.amd.com/system/files/TechDocs/24594.pdf
Depends on following kernel commits:
40bc47b08b6e ("kvm: x86: Enumerate support for CLZERO instruction")
504ce1954fba ("KVM: x86: Expose XSAVEERPTR to the guest")
6d61e3c32248 ("kvm: x86: Expose RDPID in KVM_GET_SUPPORTED_CPUID")
52297436199d ("kvm: svm: Update svm_xsaves_supported")
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <157314966312.23828.17684821666338093910.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Adds the following missing CPUID bits:
perfctr-core : core performance counter extensions support. Enables the VM
to use extended performance counter support with six programmable
counters instead of four.
clzero : instruction zeroes out the 64-byte cache line specified in RAX.
xsaveerptr : FXSAVE, XSAVE, FXSAVEOPT, XSAVEC, XSAVES always save error
pointers and FXRSTOR, XRSTOR, XRSTORS always restore error
pointers.
ibpb : Indirect Branch Prediction Barrier.
xsaves : XSAVES, XRSTORS and IA32_XSS support.
Depends on following kernel commits:
40bc47b08b6e ("kvm: x86: Enumerate support for CLZERO instruction")
504ce1954fba ("KVM: x86: Expose XSAVEERPTR to the guest")
52297436199d ("kvm: svm: Update svm_xsaves_supported")
These new features are added in EPYC-v3. The -cpu help output after the change:
x86 EPYC-v1 AMD EPYC Processor
x86 EPYC-v2 AMD EPYC Processor (with IBPB)
x86 EPYC-v3 AMD EPYC Processor
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <157314965662.23828.3063243729449408327.stgit@naples-babu.amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add additional information to -cpu help to indicate the changes in this
version of the CPU model.
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20200212081328.7385-4-tao3.xu@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Because MPX is being removed from the Linux kernel, remove the MPX feature
from Denverton.
Signed-off-by: Tao Xu <tao3.xu@intel.com>
Message-Id: <20200212081328.7385-2-tao3.xu@intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* Bugfixes all over the place
* get/set_uint cleanups (Felipe)
* Lock guard support (Stefan)
* MemoryRegion ownership cleanup (Philippe)
* AVX512 optimization for buffer_is_zero (Robert)
# gpg: Signature made Tue 17 Mar 2020 15:01:54 GMT
# gpg: using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (62 commits)
hw/arm: Let devices own the MemoryRegion they create
hw/arm: Remove unnecessary memory_region_set_readonly() on ROM alias
hw/ppc/ppc405: Use memory_region_init_rom() with read-only regions
hw/arm/stm32: Use memory_region_init_rom() with read-only regions
hw/char: Let devices own the MemoryRegion they create
hw/riscv: Let devices own the MemoryRegion they create
hw/dma: Let devices own the MemoryRegion they create
hw/display: Let devices own the MemoryRegion they create
hw/core: Let devices own the MemoryRegion they create
scripts/cocci: Patch to let devices own their MemoryRegions
scripts/cocci: Patch to remove unnecessary memory_region_set_readonly()
scripts/cocci: Patch to detect potential use of memory_region_init_rom
hw/sparc: Use memory_region_init_rom() with read-only regions
hw/sh4: Use memory_region_init_rom() with read-only regions
hw/riscv: Use memory_region_init_rom() with read-only regions
hw/ppc: Use memory_region_init_rom() with read-only regions
hw/pci-host: Use memory_region_init_rom() with read-only regions
hw/net: Use memory_region_init_rom() with read-only regions
hw/m68k: Use memory_region_init_rom() with read-only regions
hw/display: Use memory_region_init_rom() with read-only regions
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For system emulation we need to check the state of the GIC before we
report the value. However this isn't relevant to exporting the value
to linux-user, and indeed breaks the exported value as set by
modify_arm_cp_regs.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200316172155.971-20-alex.bennee@linaro.org>
We also expose the helpers to read/write the registers.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200316172155.971-19-alex.bennee@linaro.org>
The Linux kernel chooses the default of 64 bytes for SVE registers on
the basis that it is the largest size on known hardware that won't
grow the signal frame. We still honour the sve-max-vq property and
userspace can expand the number of lanes by calling PR_SVE_SET_VL.
This should not make any difference to SVE-enabled software, as SVE
is of course vector-length agnostic.
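For reference, a minimal userspace sketch of that prctl() call (Linux-specific; the PR_SVE_* numbers come from the kernel headers and are repeated here only as a fallback):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SVE_SET_VL
    #define PR_SVE_SET_VL 50
    #define PR_SVE_GET_VL 51
    #endif

    int main(void)
    {
        /* Ask for a 128-byte vector length; the kernel clamps this to what
         * the implementation (or QEMU's sve-max-vq) actually supports. */
        if (prctl(PR_SVE_SET_VL, 128) < 0) {
            perror("PR_SVE_SET_VL");
            return 1;
        }
        printf("current VL: %d bytes\n", prctl(PR_SVE_GET_VL) & 0xffff);
        return 0;
    }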
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200316172155.971-18-alex.bennee@linaro.org>
This is described as optional but I'm not convinced of the numbering
when multiple target fragments are sent.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200316172155.971-17-alex.bennee@linaro.org>
We will want to generate similar dynamic XML for gdbstub support of
SVE registers (the upstream doesn't use XML). To that end lightly
rename a few things to make the distinction.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200316172155.971-16-alex.bennee@linaro.org>
Instead of passing a pointer to memory, extend the GByteArray to all
the read register helpers. They can then safely append their data in
the normal way. We don't bother with this abstraction for
write registers as we have already ensured the buffer being copied
from is the correct size.
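As a sketch of the helper style this enables (close to, but not necessarily identical to, the gdb_get_reg32() helper), a read helper appends its bytes and reports how many it wrote:

    static inline int gdb_get_reg32(GByteArray *buf, uint32_t val)
    {
        uint32_t swapped = tswap32(val);    /* target byte order for the 'g' packet */

        g_byte_array_append(buf, (uint8_t *)&swapped, sizeof(swapped));
        return sizeof(swapped);
    }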
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Damien Hedde <damien.hedde@greensocs.com>
Message-Id: <20200316172155.971-15-alex.bennee@linaro.org>
This is cleaner than poking memory directly and will make later
clean-ups easier.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200316172155.971-14-alex.bennee@linaro.org>
This is cleaner than poking memory directly and will make later
clean-ups easier.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200316172155.971-13-alex.bennee@linaro.org>
This is cleaner than poking memory directly and will make later
clean-ups easier.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20200316172155.971-12-alex.bennee@linaro.org>
Provide for an alternate delivery location; -1 defaults to the
architected address.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-7-npiggin@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
FWNMI machine check delivery misses a few things that make it fail,
at least with TCG (which we would like to allow in future to improve
testing).
It's not nice to scatter interrupt delivery logic around the tree, so
move it to excp_helper.c and share code where possible.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Message-Id: <20200316142613.121089-5-npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On ppc64, rlwinm cannot just AND with the mask when the shift value is
zero, Mask Begin is greater than Mask End, and the high bits are set to 1.
Note that PowerISA 3.0B says that `rlwinm' uses ROTL32, and ROTL32 is
defined (in 3.3.14) so that the rotated value has two copies of the
lower word of the source value.
This seems to be another incarnation of the fix from 820724d170
("target-ppc: Fix rlwimi, rlwinm, rlwnm again"), except that the
optimization is kept when the mask value is less than 32 bits.
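To make the ROTL32 point concrete, here is an illustrative C model (not the TCG code) of the semantics described above, for sh < 32:

    /* rs is the 64-bit source register, mask the (possibly wrapping, MB > ME)
     * 64-bit mask used by rlwinm */
    static uint64_t rlwinm_model(uint64_t rs, unsigned sh, uint64_t mask)
    {
        uint64_t lo = rs & 0xffffffffull;
        uint64_t dup = (lo << 32) | lo;     /* two copies of the low word */
        uint64_t rot = sh ? (dup << sh) | (dup >> (64 - sh)) : dup;

        /* With sh == 0 and a wrapping mask, the high copy matters, so this
         * is not equivalent to a plain AND of rs with the mask. */
        return rot & mask;
    }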
Fixes: 7b4d326f47 ("target-ppc: Use the new deposit and extract ops")
Cc: qemu-stable@nongnu.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Message-Id: <20200309204557.14836-1-vt@altlinux.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently riscv_cpu_local_irq_pending() is used to find the pending
interrupt, and VS-mode interrupts are shifted to represent S-mode
interrupts in this function. So when the cause returned by this
function is passed to riscv_cpu_do_interrupt() to actually forward the
interrupt, the VS-mode forwarding check does not work as intended and
the interrupt is instead forwarded to the hypervisor. This patch fixes
that issue.
Signed-off-by: Rajnesh Kanwal <rajnesh.kanwal49@gmail.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
As reported in https://bugs.launchpad.net/qemu/+bug/1851939 we weren't
correctly handling illegal instructions based on the value of MSTATUS_TSR
and the current privilege level.
This patch fixes the issue raised in the bug by raising an illegal
instruction exception if TSR is set and we are in S-mode.
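A self-contained toy model of the condition being enforced (the real check lives in QEMU's SRET handling; the constants follow the RISC-V privileged spec):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MSTATUS_TSR (1u << 22)          /* mstatus.TSR: Trap SRET */
    #define PRV_S       1                   /* supervisor privilege level */

    static bool sret_is_illegal(uint32_t mstatus, int priv)
    {
        /* SRET executed in S-mode must trap as illegal when TSR is set */
        return priv == PRV_S && (mstatus & MSTATUS_TSR);
    }

    int main(void)
    {
        printf("%d\n", sret_is_illegal(MSTATUS_TSR, PRV_S));   /* prints 1 */
        return 0;
    }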
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Jonathan Behrens <jonathan@fintelia.io>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
This function calculates the maximum size of the RMA as implied by the
host's page size and the structure of the VRMA (there are a number of other
constraints on the RMA size which will supersede this one in many
circumstances).
The current interface takes the current RMA size estimate, and clamps it
to the VRMA derived size. The only current caller passes in an arguably
wrong value (it will match the current RMA estimate in some but not all
cases).
We want to fix that, but for now keep concerns separated by having the
KVM helper function simply return the VRMA-derived limit, and let the
caller combine it with the other constraints. We call the new function
kvmppc_vrma_limit() to more clearly indicate its limited responsibility.
The helper should only ever be called in the KVM enabled case, so replace
its !CONFIG_KVM stub with an assert() rather than a dummy value.
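Sketched out (the signature here is illustrative, not the exact declaration), the !CONFIG_KVM side becomes an assertion rather than a dummy value:

    #ifndef CONFIG_KVM
    static inline uint64_t kvmppc_vrma_limit(unsigned int hash_shift)
    {
        /* only reachable with KVM enabled, so there is no sensible
         * dummy value to return */
        g_assert_not_reached();
    }
    #endif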
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cedric Le Goater <clg@fr.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Currently, we construct the SLBE used for VRMA translations when the LPCR
is written (which controls some bits in the SLBE), then use it later for
translations.
This is a bit complex and confusing - simplify it by constructing
the SLBE directly from the LPCR when we need it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
When the LPCR is written, we update the env->rmls field with the RMA limit
it implies. Simplify things by just calculating the value directly from
the LPCR value when we need it.
It's possible this is a little slower, but it's unlikely to be significant,
since this is only for real mode accesses in a translation configuration
that's not used very often, and the whole thing is behind the qemu TLB
anyway. Therefore, keeping the number of state variables down and not
having to worry about making sure it's always in sync seems the better
option.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
The table of RMA limits based on the LPCR[RMLS] field is slightly wrong.
We're missing the RMLS == 0 => 256 GiB RMA option, which is available on
POWER8, so add that.
The comment that goes with the table is much more wrong. We *don't* filter
invalid RMLS values when writing the LPCR, and there's not really a
sensible way to do so. Furthermore, while in theory the set of RMLS values
is implementation dependent, it seems in practice the same set has been
available since around POWER4+ up until POWER8, the last model which
supports RMLS at all. So, correct that as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently we use a big switch statement in ppc_hash64_update_rmls() to work
out what the right RMA limit is based on the LPCR[RMLS] field. There's no
formula for this - it's just an arbitrary mapping defined by the existing
CPU implementations - but we can make it a bit more readable by using a
lookup table rather than a switch. In addition we can use the MiB/GiB
symbols to make it a bit clearer.
While there we add a bit of clarity and rationale to the comment about
what happens if the LPCR[RMLS] doesn't contain a valid value.
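A sketch of the lookup-table shape (the RMLS == 0 => 256 GiB entry matches the fix above; the remaining values follow the commonly documented RMLS encodings and should be treated as illustrative):

    #include <stdint.h>

    #define MiB (1ULL << 20)
    #define GiB (1ULL << 30)

    static const uint64_t rma_limits[16] = {
        [0] = 256 * GiB,                    /* POWER8 only */
        [1] = 16 * GiB,
        [2] = 1 * GiB,
        [3] = 64 * MiB,
        [4] = 256 * MiB,
        [7] = 128 * MiB,
        [8] = 32 * MiB,
        /* all other encodings are invalid and stay 0 */
    };

    static uint64_t rma_limit_from_rmls(unsigned rmls)
    {
        return rma_limits[rmls & 0xf];      /* 0 means "no valid RMA limit" */
    }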
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
When we store the Logical Partitioning Control Register (LPCR) we have a
big switch statement to work out which are valid bits for the cpu model
we're emulating.
As well as being ugly, this isn't really conceptually correct, since it is
based on the mmu_model variable, whereas the LPCR isn't (only) about the
MMU, so mmu_model is basically just acting as a proxy for the cpu model.
Handle this in a simpler way, by adding a suitable lpcr_mask to the QOM
class.
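As an illustrative sketch (the names here are not the exact QEMU declarations), the class carries a mask of the LPCR bits the model implements, and the store path simply applies it:

    #include <stdint.h>

    struct ExampleCPUClass {
        uint64_t lpcr_mask;                 /* LPCR bits valid for this CPU model */
    };

    static uint64_t store_lpcr_masked(const struct ExampleCPUClass *pcc,
                                      uint64_t val)
    {
        return val & pcc->lpcr_mask;        /* drop unimplemented bits */
    }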
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Currently we create the Real Mode Offset Register (RMOR) on all Book3S cpus
from POWER7 onwards. However the translation mode which the RMOR controls
is no longer supported in POWER9, and so the register has been removed from
the architecture.
Remove it from our model on POWER9 and POWER10.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>