// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2012-2015 - ARM Ltd
 * Author: Marc Zyngier <marc.zyngier@arm.com>
 */

#include <hyp/adjust_pc.h>

#include <linux/compiler.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/kvm_host.h>
#include <linux/swab.h>

#include <asm/kvm_emulate.h>
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>

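/*
 * A 32bit guest exposes its current data endianness through the SPSR E
 * bit, while a 64bit guest uses SCTLR_EL1.EE.
 */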
static bool __is_be(struct kvm_vcpu *vcpu)
{
	if (vcpu_mode_is_32bit(vcpu))
		return !!(read_sysreg_el2(SYS_SPSR) & PSR_AA32_E_BIT);

	return !!(read_sysreg(SCTLR_EL1) & SCTLR_ELx_EE);
}
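
/*
 * This runs entirely at EL2, hooked into the world-switch code right
 * after the fault information has been populated, so that a trapped
 * GICV access can be proxied without going all the way back to the
 * host kernel. Only the GP registers have been saved/restored at this
 * point, and the emulated instruction is skipped before returning to
 * the guest.
 */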

/*
 * __vgic_v2_perform_cpuif_access -- perform a GICV access on behalf of the
 *				     guest.
 *
 * @vcpu: the offending vcpu
 *
 * Returns:
 *  1: GICV access successfully performed
 *  0: Not a GICV access
 * -1: Illegal GICV access successfully performed
 */
int __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
	struct vgic_dist *vgic = &kvm->arch.vgic;
	phys_addr_t fault_ipa;
	void __iomem *addr;
	int rd;

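	/*
	 * The reported fault IPA is page-aligned; the page offset is
	 * recovered from the low 12 bits of the faulting VA, which are
	 * the same as those of the IPA.
	 */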
	/* Build the full address */
	fault_ipa  = kvm_vcpu_get_fault_ipa(vcpu);
	fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);

	/* If not for GICV, move on */
	if (fault_ipa <  vgic->vgic_cpu_base ||
	    fault_ipa >= (vgic->vgic_cpu_base + KVM_VGIC_V2_CPU_SIZE))
		return 0;

	/* Reject anything but a 32bit access */
	if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32)) {
		__kvm_skip_instr(vcpu);
		return -1;
	}

	/* Not aligned? Don't bother */
	if (fault_ipa & 3) {
		__kvm_skip_instr(vcpu);
		return -1;
	}

	rd = kvm_vcpu_dabt_get_rd(vcpu);
	addr  = kvm_vgic_global_state.vcpu_hyp_va;
	addr += fault_ipa - vgic->vgic_cpu_base;

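	/*
	 * readl_relaxed()/writel_relaxed() swab according to the *host*
	 * endianness. A big-endian guest has already swabbed the data it
	 * is storing, and expects to swab what it loads, so undo/redo
	 * that swab here; a little-endian guest needs nothing extra.
	 */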
	if (kvm_vcpu_dabt_iswrite(vcpu)) {
		u32 data = vcpu_get_reg(vcpu, rd);

		if (__is_be(vcpu)) {
			/* guest pre-swabbed data, undo this for writel() */
			data = __kvm_swab32(data);
		}

		writel_relaxed(data, addr);
	} else {
		u32 data = readl_relaxed(addr);

		if (__is_be(vcpu)) {
			/* guest expects swabbed data */
			data = __kvm_swab32(data);
		}

		vcpu_set_reg(vcpu, rd, data);
	}

	__kvm_skip_instr(vcpu);

	return 1;
}
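
/*
 * A minimal sketch, assuming a world-switch exit handler with an
 * "exit_code" out-parameter and an ARM_EXCEPTION_EL1_SERROR exit code
 * (neither is defined in this file), of how the three return values
 * can be consumed by the caller:
 *
 *	int ret = __vgic_v2_perform_cpuif_access(vcpu);
 *
 *	if (ret == 1)		// access emulated, resume the guest
 *		goto guest;
 *	if (ret == -1)		// illegal access, promote it to an SError
 *		*exit_code = ARM_EXCEPTION_EL1_SERROR;
 *	// ret == 0: not a GICV access, take the normal exit path
 */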