Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm updates from Paolo Bonzini:
 "ARM:

   - Initial infrastructure for shadow stage-2 MMUs, as part of nested
     virtualization enablement

   - Support for userspace changes to the guest CTR_EL0 value, enabling
     (in part) migration of VMs between heterogeneous hardware

   - Fixes + improvements to pKVM's FF-A proxy, adding support for v1.1
     of the protocol

   - FPSIMD/SVE support for nested, including merged trap configuration
     and exception routing

   - New command-line parameter to control the WFx trap behavior under
     KVM

   - Introduce kCFI hardening in the EL2 hypervisor

   - Fixes + cleanups for handling presence/absence of FEAT_TCRX

   - Miscellaneous fixes + documentation updates

  LoongArch:

   - Add paravirt steal time support

   - Add support for KVM_DIRTY_LOG_INITIALLY_SET

   - Add perf kvm-stat support for loongarch

  RISC-V:

   - Redirect AMO load/store access fault traps to guest

   - perf kvm stat support

   - Use guest files for IMSIC virtualization, when available

   ONE_REG support for the Zimop, Zcmop, Zca, Zcf, Zcd, Zcb and Zawrs ISA
   extensions is coming through the RISC-V tree

  s390:

   - Assortment of tiny fixes which are not time critical

  x86:

   - Fixes for Xen emulation

   - Add a global struct to consolidate tracking of host values, e.g.
     EFER

   - Add KVM_CAP_X86_APIC_BUS_CYCLES_NS to allow configuring the
     effective APIC bus frequency, because TDX

   - Print the name of the APICv/AVIC inhibits in the relevant
     tracepoint

   - Clean up KVM's handling of vendor specific emulation to
     consistently act on "compatible with Intel/AMD", versus checking
     for a specific vendor

   - Drop MTRR virtualization, and instead always honor guest PAT on
     CPUs that support self-snoop

   - Update to the newfangled Intel CPU FMS infrastructure

   - Don't advertise IA32_PERF_GLOBAL_OVF_CTRL as an MSR-to-be-saved, as
     it reads '0' and writes from userspace are ignored

   - Misc cleanups

  x86 - MMU:

   - Small cleanups, renames and refactoring extracted from the upcoming
     Intel TDX support

   - Don't allocate kvm_mmu_page.shadowed_translation for shadow pages
     that can't hold leaf SPTEs

   - Unconditionally drop mmu_lock when allocating TDP MMU page tables
     for eager page splitting, to avoid stalling vCPUs when splitting
     huge pages

   - Bug the VM instead of simply warning if KVM tries to split a SPTE
     that is non-present or not-huge. KVM is guaranteed to end up in a
     broken state because the callers fully expect a valid SPTE, so it
     is dangerous to let further MMU changes happen afterwards

  x86 - AMD:

   - Make per-CPU save_area allocations NUMA-aware

   - Force sev_es_host_save_area() to be inlined to avoid calling into
     an instrumentable function from noinstr code

   - Base support for running SEV-SNP guests. API-wise, this includes a
     new KVM_X86_SNP_VM type, encrypting/measuring the initial image into
     guest memory, and finalizing it before launching it. Internally,
     there are some gmem/mmu hooks needed to prepare gmem-allocated
     pages before mapping them into guest private memory ranges

     This includes basic support for attestation guest requests, enough
     to say that KVM supports the GHCB 2.0 specification

     There is no support yet for loading into the firmware those signing
     keys to be used for attestation requests, and therefore no need yet
     for the host to provide certificate data for those keys.

     To support fetching certificate data from userspace, a new KVM exit
     type will be needed.

     An attempt to define a new KVM_EXIT_COCO / KVM_EXIT_COCO_REQ_CERTS
     exit type to handle this was introduced in v1 of this patchset, but
     is still being discussed by the community, so for now this patchset
     only implements a stub version of SNP Extended Guest Requests that
     does not provide certificate data

  x86 - Intel:

   - Remove an unnecessary EPT TLB flush when enabling hardware

   - Fix a series of bugs that cause KVM to fail to detect nested
     pending posted interrupts as valid wake events for a vCPU executing
     HLT in L2 (with HLT-exiting disabled by L1)

   - KVM: x86: Suppress MMIO that is triggered during task switch
     emulation

     Explicitly suppress userspace emulated MMIO exits that are
     triggered when emulating a task switch as KVM doesn't support
     userspace MMIO during complex (multi-step) emulation

     Silently ignoring the exit request can result in the
     WARN_ON_ONCE(vcpu->mmio_needed) firing if KVM exits to userspace
     for some other reason prior to purging mmio_needed

     See commit 0dc902267c ("KVM: x86: Suppress pending MMIO write
     exits if emulator detects exception") for more details on KVM's
     limitations with respect to emulated MMIO during complex emulator
     flows

  Generic:

   - Rename the AS_UNMOVABLE flag that was introduced for KVM to
     AS_INACCESSIBLE, because the special casing needed by these pages
     is not due to just unmovability (and in fact they are only
     unmovable because the CPU cannot access them)

   - New ioctl to populate the KVM page tables in advance, which is
     useful to mitigate KVM page faults during guest boot or after live
     migration. The code will also be used by TDX, but (probably) not
     through the ioctl

   - Enable halt poll shrinking by default, as Intel found it to be a
     clear win

   - Setup empty IRQ routing when creating a VM to avoid having to
     synchronize SRCU when creating a split IRQCHIP on x86

   - Rework the sched_in/out() paths to replace kvm_arch_sched_in() with
     a flag that arch code can use for hooking both sched_in() and
     sched_out()

   - Take the vCPU @id as an "unsigned long" instead of "u32" to avoid
     truncating a bogus value from userspace, e.g. to help userspace
     detect bugs

   - Mark a vCPU as preempted if and only if it's scheduled out while in
     the KVM_RUN loop, e.g. to avoid marking it preempted and thus
     writing guest memory when retrieving guest state during live
     migration blackout

  Selftests:

   - Remove dead code in the memslot modification stress test

   - Treat "branch instructions retired" as supported on all AMD Family
     17h+ CPUs

   - Print the guest pseudo-RNG seed only when it changes, to avoid
     spamming the log for tests that create lots of VMs

   - Make the PMU counters test less flaky when counting LLC cache
     misses by doing CLFLUSH{OPT} in every loop iteration"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (227 commits)
  crypto: ccp: Add the SNP_VLEK_LOAD command
  KVM: x86/pmu: Add kvm_pmu_call() to simplify static calls of kvm_pmu_ops
  KVM: x86: Introduce kvm_x86_call() to simplify static calls of kvm_x86_ops
  KVM: x86: Replace static_call_cond() with static_call()
  KVM: SEV: Provide support for SNP_EXTENDED_GUEST_REQUEST NAE event
  x86/sev: Move sev_guest.h into common SEV header
  KVM: SEV: Provide support for SNP_GUEST_REQUEST NAE event
  KVM: x86: Suppress MMIO that is triggered during task switch emulation
  KVM: x86/mmu: Clean up make_huge_page_split_spte() definition and intro
  KVM: x86/mmu: Bug the VM if KVM tries to split a !hugepage SPTE
  KVM: selftests: x86: Add test for KVM_PRE_FAULT_MEMORY
  KVM: x86: Implement kvm_arch_vcpu_pre_fault_memory()
  KVM: x86/mmu: Make kvm_mmu_do_page_fault() return mapped level
  KVM: x86/mmu: Account pf_{fixed,emulate,spurious} in callers of "do page fault"
  KVM: x86/mmu: Bump pf_taken stat only in the "real" page fault handler
  KVM: Add KVM_PRE_FAULT_MEMORY vcpu ioctl to pre-populate guest memory
  KVM: Document KVM_PRE_FAULT_MEMORY ioctl
  mm, virt: merge AS_UNMOVABLE and AS_INACCESSIBLE
  perf kvm: Add kvm-stat for loongarch64
  LoongArch: KVM: Add PV steal time support in guest side
  ...
Linus Torvalds 2024-07-20 12:41:03 -07:00
commit 2c9b351240
163 changed files with 7813 additions and 2378 deletions

View File

@ -2722,6 +2722,24 @@
[KVM,ARM,EARLY] Allow use of GICv4 for direct
injection of LPIs.
kvm-arm.wfe_trap_policy=
[KVM,ARM] Control when to set WFE instruction trap for
KVM VMs. Traps are allowed but not guaranteed by the
CPU architecture.
trap: set WFE instruction trap
notrap: clear WFE instruction trap
kvm-arm.wfi_trap_policy=
[KVM,ARM] Control when to set WFI instruction trap for
KVM VMs. Traps are allowed but not guaranteed by the
CPU architecture.
trap: set WFI instruction trap
notrap: clear WFI instruction trap
kvm_cma_resv_ratio=n [PPC,EARLY]
Reserves given percentage from system memory area for
contiguous memory allocation for KVM hash pagetable
@ -4036,9 +4054,9 @@
prediction) vulnerability. System may allow data
leaks with this option.
no-steal-acc [X86,PV_OPS,ARM64,PPC/PSERIES,RISCV,EARLY] Disable
paravirtualized steal time accounting. steal time is
computed, but won't influence scheduler behaviour
no-steal-acc [X86,PV_OPS,ARM64,PPC/PSERIES,RISCV,LOONGARCH,EARLY]
Disable paravirtualized steal time accounting. steal time
is computed, but won't influence scheduler behaviour
nosync [HW,M68K] Disables sync negotiation for all devices.

View File

@ -176,6 +176,25 @@ to SNP_CONFIG command defined in the SEV-SNP spec. The current values of
the firmware parameters affected by this command can be queried via
SNP_PLATFORM_STATUS.
2.7 SNP_VLEK_LOAD
-----------------
:Technology: sev-snp
:Type: hypervisor ioctl cmd
:Parameters (in): struct sev_user_data_snp_vlek_load
:Returns (out): 0 on success, -negative on error
When requesting an attestation report a guest is able to specify whether
it wants SNP firmware to sign the report using either a Versioned Chip
Endorsement Key (VCEK), which is derived from chip-unique secrets, or a
Versioned Loaded Endorsement Key (VLEK) which is obtained from an AMD
Key Derivation Service (KDS) and derived from seeds allocated to
enrolled cloud service providers.
In the case of VLEK keys, the SNP_VLEK_LOAD SNP command is used to load
them into the system after obtaining them from the KDS, and corresponds
closely to the SNP_VLEK_LOAD firmware command specified in the SEV-SNP
spec.
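
A minimal userspace sketch, not part of the patch itself: it assumes the command
is reached through the SEV platform device (``/dev/sev``) via the SEV_ISSUE_CMD
ioctl like the other SNP_* platform commands, and the ``snp_load_vlek()`` helper
name is made up for the example. The fields of
``struct sev_user_data_snp_vlek_load`` are left for the caller to fill in from
the wrapped VLEK blob obtained from the KDS::

    #include <errno.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/psp-sev.h>

    /* Sketch only: load a wrapped VLEK previously fetched from the AMD KDS.
     * The caller is expected to have populated *input per the uAPI struct. */
    static int snp_load_vlek(struct sev_user_data_snp_vlek_load *input)
    {
            struct sev_issue_cmd cmd = {
                    .cmd  = SNP_VLEK_LOAD,
                    .data = (uint64_t)(uintptr_t)input,
            };
            int fd = open("/dev/sev", O_RDWR | O_CLOEXEC);
            int ret;

            if (fd < 0)
                    return -errno;

            ret = ioctl(fd, SEV_ISSUE_CMD, &cmd);
            if (ret)
                    ret = -errno;   /* cmd.error holds the firmware status code */

            close(fd);
            return ret;
    }
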
3. SEV-SNP CPUID Enforcement
============================

View File

@ -891,12 +891,12 @@ like this::
The irq_type field has the following values:
- irq_type[0]:
- KVM_ARM_IRQ_TYPE_CPU:
out-of-kernel GIC: irq_id 0 is IRQ, irq_id 1 is FIQ
- irq_type[1]:
- KVM_ARM_IRQ_TYPE_SPI:
in-kernel GIC: SPI, irq_id between 32 and 1019 (incl.)
(the vcpu_index field is ignored)
- irq_type[2]:
- KVM_ARM_IRQ_TYPE_PPI:
in-kernel GIC: PPI, irq_id between 16 and 31 (incl.)
(The irq_id field thus corresponds nicely to the IRQ ID in the ARM GIC specs)
@ -1403,6 +1403,12 @@ Instead, an abort (data abort if the cause of the page-table update
was a load or a store, instruction abort if it was an instruction
fetch) is injected in the guest.
S390:
^^^^^
Returns -EINVAL if the VM has the KVM_VM_S390_UCONTROL flag set.
Returns -EINVAL if called on a protected VM.
4.36 KVM_SET_TSS_ADDR
---------------------
@ -1921,7 +1927,7 @@ flags:
If KVM_MSI_VALID_DEVID is set, devid contains a unique device identifier
for the device that wrote the MSI message. For PCI, this is usually a
BFD identifier in the lower 16 bits.
BDF identifier in the lower 16 bits.
On x86, address_hi is ignored unless the KVM_X2APIC_API_USE_32BIT_IDS
feature of KVM_CAP_X2APIC_API capability is enabled. If it is enabled,
@ -2989,7 +2995,7 @@ flags:
If KVM_MSI_VALID_DEVID is set, devid contains a unique device identifier
for the device that wrote the MSI message. For PCI, this is usually a
BFD identifier in the lower 16 bits.
BDF identifier in the lower 16 bits.
On x86, address_hi is ignored unless the KVM_X2APIC_API_USE_32BIT_IDS
feature of KVM_CAP_X2APIC_API capability is enabled. If it is enabled,
@ -6276,6 +6282,12 @@ state. At VM creation time, all memory is shared, i.e. the PRIVATE attribute
is '0' for all gfns. Userspace can control whether memory is shared/private by
toggling KVM_MEMORY_ATTRIBUTE_PRIVATE via KVM_SET_MEMORY_ATTRIBUTES as needed.
S390:
^^^^^
Returns -EINVAL if the VM has the KVM_VM_S390_UCONTROL flag set.
Returns -EINVAL if called on a protected VM.
4.141 KVM_SET_MEMORY_ATTRIBUTES
-------------------------------
@ -6355,6 +6367,61 @@ a single guest_memfd file, but the bound ranges must not overlap).
See KVM_SET_USER_MEMORY_REGION2 for additional details.
4.143 KVM_PRE_FAULT_MEMORY
--------------------------
:Capability: KVM_CAP_PRE_FAULT_MEMORY
:Architectures: none
:Type: vcpu ioctl
:Parameters: struct kvm_pre_fault_memory (in/out)
:Returns: 0 if at least one page is processed, < 0 on error
Errors:
========== ===============================================================
EINVAL The specified `gpa` and `size` were invalid (e.g. not
page aligned, causes an overflow, or size is zero).
ENOENT The specified `gpa` is outside defined memslots.
EINTR An unmasked signal is pending and no page was processed.
EFAULT The parameter address was invalid.
EOPNOTSUPP Mapping memory for a GPA is unsupported by the
hypervisor, and/or for the current vCPU state/mode.
EIO unexpected error conditions (also causes a WARN)
========== ===============================================================
::
struct kvm_pre_fault_memory {
/* in/out */
__u64 gpa;
__u64 size;
/* in */
__u64 flags;
__u64 padding[5];
};
KVM_PRE_FAULT_MEMORY populates KVM's stage-2 page tables used to map memory
for the current vCPU state. KVM maps memory as if the vCPU generated a
stage-2 read page fault, e.g. faults in memory as needed, but doesn't break
CoW. However, KVM does not mark any newly created stage-2 PTE as Accessed.
In some cases, multiple vCPUs might share the page tables. In this
case, the ioctl can be called in parallel.
When the ioctl returns, the input values are updated to point to the
remaining range. If `size` > 0 on return, the caller can just issue
the ioctl again with the same `struct kvm_pre_fault_memory` argument.
Shadow page tables cannot support this ioctl because they
are indexed by virtual address or nested guest physical address.
Calling this ioctl when the guest is using shadow page tables (for
example because it is running a nested guest with nested page tables)
will fail with `EOPNOTSUPP` even if `KVM_CHECK_EXTENSION` reports
the capability to be present.
`flags` must currently be zero.
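
As a rough illustration of the flow described above (the ``pre_fault_range()``
helper name is made up for the example), a userspace caller might loop until
the remaining ``size`` reaches zero, retrying on EINTR::

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: pre-fault stage-2 mappings for [gpa, gpa + size) on a vCPU fd. */
    static int pre_fault_range(int vcpu_fd, uint64_t gpa, uint64_t size)
    {
            struct kvm_pre_fault_memory range;

            memset(&range, 0, sizeof(range));
            range.gpa  = gpa;
            range.size = size;

            while (range.size) {
                    if (!ioctl(vcpu_fd, KVM_PRE_FAULT_MEMORY, &range))
                            continue;       /* made progress, keep going  */
                    if (errno == EINTR)
                            continue;       /* signal before any progress */
                    return -errno;          /* ENOENT, EOPNOTSUPP, ...    */
            }
            return 0;
    }
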
5. The kvm_run structure
========================
@ -6419,9 +6486,12 @@ More architecture-specific flags detailing state of the VCPU that may
affect the device's behavior. Current defined flags::
/* x86, set if the VCPU is in system management mode */
#define KVM_RUN_X86_SMM (1 << 0)
#define KVM_RUN_X86_SMM (1 << 0)
/* x86, set if bus lock detected in VM */
#define KVM_RUN_BUS_LOCK (1 << 1)
#define KVM_RUN_X86_BUS_LOCK (1 << 1)
/* x86, set if the VCPU is executing a nested (L2) guest */
#define KVM_RUN_X86_GUEST_MODE (1 << 2)
/* arm64, set for KVM_EXIT_DEBUG */
#define KVM_DEBUG_ARCH_HSR_HIGH_VALID (1 << 0)
@ -7767,29 +7837,31 @@ Valid bits in args[0] are::
#define KVM_BUS_LOCK_DETECTION_OFF (1 << 0)
#define KVM_BUS_LOCK_DETECTION_EXIT (1 << 1)
Enabling this capability on a VM provides userspace with a way to select
a policy to handle the bus locks detected in guest. Userspace can obtain
the supported modes from the result of KVM_CHECK_EXTENSION and define it
through the KVM_ENABLE_CAP.
Enabling this capability on a VM provides userspace with a way to select a
policy to handle the bus locks detected in guest. Userspace can obtain the
supported modes from the result of KVM_CHECK_EXTENSION and define it through
the KVM_ENABLE_CAP. The supported modes are mutually-exclusive.
KVM_BUS_LOCK_DETECTION_OFF and KVM_BUS_LOCK_DETECTION_EXIT are supported
currently and mutually exclusive with each other. More bits can be added in
the future.
This capability allows userspace to force VM exits on bus locks detected in the
guest, irrespective whether or not the host has enabled split-lock detection
(which triggers an #AC exception that KVM intercepts). This capability is
intended to mitigate attacks where a malicious/buggy guest can exploit bus
locks to degrade the performance of the whole system.
With KVM_BUS_LOCK_DETECTION_OFF set, bus locks in guest will not cause vm exits
so that no additional actions are needed. This is the default mode.
If KVM_BUS_LOCK_DETECTION_OFF is set, KVM doesn't force guest bus locks to VM
exit, although the host kernel's split-lock #AC detection still applies, if
enabled.
With KVM_BUS_LOCK_DETECTION_EXIT set, vm exits happen when bus lock detected
in VM. KVM just exits to userspace when handling them. Userspace can enforce
its own throttling or other policy based mitigations.
If KVM_BUS_LOCK_DETECTION_EXIT is set, KVM enables a CPU feature that ensures
bus locks in the guest trigger a VM exit, and KVM exits to userspace for all
such VM exits, e.g. to allow userspace to throttle the offending guest and/or
apply some other policy-based mitigation. When exiting to userspace, KVM sets
KVM_RUN_X86_BUS_LOCK in vcpu->run->flags, and conditionally sets the exit_reason
to KVM_EXIT_X86_BUS_LOCK.
This capability is aimed to address the thread that VM can exploit bus locks to
degree the performance of the whole system. Once the userspace enable this
capability and select the KVM_BUS_LOCK_DETECTION_EXIT mode, KVM will set the
KVM_RUN_BUS_LOCK flag in vcpu-run->flags field and exit to userspace. Concerning
the bus lock vm exit can be preempted by a higher priority VM exit, the exit
notifications to userspace can be KVM_EXIT_BUS_LOCK or other reasons.
KVM_RUN_BUS_LOCK flag is used to distinguish between them.
Note! Detected bus locks may be coincident with other exits to userspace, i.e.
KVM_RUN_X86_BUS_LOCK should be checked regardless of the primary exit reason if
userspace wants to take action on all detected bus locks.
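
For illustration only (assuming the capability documented here is
KVM_CAP_X86_BUS_LOCK_EXIT, which is not named in this excerpt), userspace might
opt in and then check the flag on every return from KVM_RUN, whatever the
primary exit reason; the ``account_bus_lock()`` helper is a placeholder for
whatever policy userspace wants to apply::

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: request exit-on-bus-lock for a VM. */
    static int enable_bus_lock_exits(int vm_fd)
    {
            struct kvm_enable_cap cap = {
                    .cap     = KVM_CAP_X86_BUS_LOCK_EXIT,
                    .args[0] = KVM_BUS_LOCK_DETECTION_EXIT,
            };

            return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }

    /* Sketch: KVM_RUN_X86_BUS_LOCK can accompany any exit reason, so check it
     * before dispatching on run->exit_reason; the throttling policy itself is
     * entirely up to userspace. */
    static void account_bus_lock(struct kvm_run *run, unsigned long *nr_bus_locks)
    {
            if (run->flags & KVM_RUN_X86_BUS_LOCK)
                    (*nr_bus_locks)++;
    }
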
7.23 KVM_CAP_PPC_DAWR1
----------------------
@ -7905,10 +7977,10 @@ perform a bulk copy of tags to/from the guest.
7.29 KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM
-------------------------------------
Architectures: x86 SEV enabled
Type: vm
Parameters: args[0] is the fd of the source vm
Returns: 0 on success
:Architectures: x86 SEV enabled
:Type: vm
:Parameters: args[0] is the fd of the source vm
:Returns: 0 on success
This capability enables userspace to migrate the encryption context from the VM
indicated by the fd to the VM this is called on.
@ -7956,7 +8028,11 @@ The valid bits in cap.args[0] are:
When this quirk is disabled, the reset value
is 0x10000 (APIC_LVT_MASKED).
KVM_X86_QUIRK_CD_NW_CLEARED By default, KVM clears CR0.CD and CR0.NW.
KVM_X86_QUIRK_CD_NW_CLEARED By default, KVM clears CR0.CD and CR0.NW on
AMD CPUs to workaround buggy guest firmware
that runs in perpetuity with CR0.CD, i.e.
with caches in "no fill" mode.
When this quirk is disabled, KVM does not
change the value of CR0.CD and CR0.NW.
@ -8073,6 +8149,37 @@ error/annotated fault.
See KVM_EXIT_MEMORY_FAULT for more information.
7.35 KVM_CAP_X86_APIC_BUS_CYCLES_NS
-----------------------------------
:Architectures: x86
:Target: VM
:Parameters: args[0] is the desired APIC bus clock rate, in nanoseconds
:Returns: 0 on success, -EINVAL if args[0] contains an invalid value for the
frequency or if any vCPUs have been created, -ENXIO if a virtual
local APIC has not been created using KVM_CREATE_IRQCHIP.
This capability sets the VM's APIC bus clock frequency, used by KVM's in-kernel
virtual APIC when emulating APIC timers. KVM's default value can be retrieved
by KVM_CHECK_EXTENSION.
Note: Userspace is responsible for correctly configuring CPUID 0x15, a.k.a. the
core crystal clock frequency, if a non-zero CPUID 0x15 is exposed to the guest.
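
A minimal sketch of enabling the capability; the ``set_apic_bus_cycles_ns()``
helper name and the 40 ns value (a 25 MHz APIC bus) are only illustrative
choices, not taken from this document::

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: set the APIC bus cycle time, e.g. 40 ns for a 25 MHz bus. */
    static int set_apic_bus_cycles_ns(int vm_fd, unsigned long long cycle_ns)
    {
            struct kvm_enable_cap cap = {
                    .cap     = KVM_CAP_X86_APIC_BUS_CYCLES_NS,
                    .args[0] = cycle_ns,
            };

            return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }

Per the error conditions above, the call has to land after KVM_CREATE_IRQCHIP
(otherwise -ENXIO) and before the first vCPU is created (otherwise -EINVAL).
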
7.36 KVM_CAP_X86_GUEST_MODE
------------------------------
:Architectures: x86
:Returns: Informational only, -EINVAL on direct KVM_ENABLE_CAP.
The presence of this capability indicates that KVM_RUN will update the
KVM_RUN_X86_GUEST_MODE bit in kvm_run.flags to indicate whether the
vCPU was executing nested guest code when it exited.
KVM exits with the register state of either the L1 or L2 guest
depending on which executed at the time of an exit. Userspace must
take care to differentiate between these cases.
8. Other capabilities.
======================

View File

@ -31,7 +31,7 @@ Groups:
KVM_VGIC_V2_ADDR_TYPE_CPU (rw, 64-bit)
Base address in the guest physical address space of the GIC virtual cpu
interface register mappings. Only valid for KVM_DEV_TYPE_ARM_VGIC_V2.
This address needs to be 4K aligned and the region covers 4 KByte.
This address needs to be 4K aligned and the region covers 8 KByte.
Errors:

View File

@ -79,11 +79,11 @@ adjustment of the polling interval.
Module Parameters
=================
The kvm module has 3 tuneable module parameters to adjust the global max
polling interval as well as the rate at which the polling interval is grown and
shrunk. These variables are defined in include/linux/kvm_host.h and as module
parameters in virt/kvm/kvm_main.c, or arch/powerpc/kvm/book3s_hv.c in the
powerpc kvm-hv case.
The kvm module has 4 tunable module parameters to adjust the global max polling
interval, the initial value (to grow from 0), and the rate at which the polling
interval is grown and shrunk. These variables are defined in
include/linux/kvm_host.h and as module parameters in virt/kvm/kvm_main.c, or
arch/powerpc/kvm/book3s_hv.c in the powerpc kvm-hv case.
+-----------------------+---------------------------+-------------------------+
|Module Parameter | Description | Default Value |
@ -105,7 +105,7 @@ powerpc kvm-hv case.
| | grow_halt_poll_ns() | |
| | function. | |
+-----------------------+---------------------------+-------------------------+
|halt_poll_ns_shrink | The value by which the | 0 |
|halt_poll_ns_shrink | The value by which the | 2 |
| | halt polling interval is | |
| | divided in the | |
| | shrink_halt_poll_ns() | |

View File

@ -466,6 +466,112 @@ issued by the hypervisor to make the guest ready for execution.
Returns: 0 on success, -negative on error
18. KVM_SEV_SNP_LAUNCH_START
----------------------------
The KVM_SEV_SNP_LAUNCH_START command is used for creating the memory encryption
context for the SEV-SNP guest. It must be called prior to issuing
KVM_SEV_SNP_LAUNCH_UPDATE or KVM_SEV_SNP_LAUNCH_FINISH.
Parameters (in): struct kvm_sev_snp_launch_start
Returns: 0 on success, -negative on error
::
struct kvm_sev_snp_launch_start {
__u64 policy; /* Guest policy to use. */
__u8 gosvw[16]; /* Guest OS visible workarounds. */
__u16 flags; /* Must be zero. */
__u8 pad0[6];
__u64 pad1[4];
};
See SNP_LAUNCH_START in the SEV-SNP specification [snp-fw-abi]_ for further
details on the input parameters in ``struct kvm_sev_snp_launch_start``.
19. KVM_SEV_SNP_LAUNCH_UPDATE
-----------------------------
The KVM_SEV_SNP_LAUNCH_UPDATE command is used for loading userspace-provided
data into a guest GPA range, measuring the contents into the SNP guest context
created by KVM_SEV_SNP_LAUNCH_START, and then encrypting/validating that GPA
range so that it will be immediately readable using the encryption key
associated with the guest context once it is booted, after which point it can
attest the measurement associated with its context before unlocking any
secrets.
It is required that the GPA ranges initialized by this command have had the
KVM_MEMORY_ATTRIBUTE_PRIVATE attribute set in advance. See the documentation
for KVM_SET_MEMORY_ATTRIBUTES for more details on this aspect.
Upon success, this command is not guaranteed to have processed the entire
range requested. Instead, the ``gfn_start``, ``uaddr``, and ``len`` fields of
``struct kvm_sev_snp_launch_update`` will be updated to correspond to the
remaining range that has yet to be processed. The caller should continue
calling this command until those fields indicate the entire range has been
processed, e.g. ``len`` is 0, ``gfn_start`` is equal to the last GFN in the
range plus 1, and ``uaddr`` is the last byte of the userspace-provided source
buffer address plus 1. In the case where ``type`` is KVM_SEV_SNP_PAGE_TYPE_ZERO,
``uaddr`` will be ignored completely.
Parameters (in): struct kvm_sev_snp_launch_update
Returns: 0 on success, < 0 on error, -EAGAIN if caller should retry
::
struct kvm_sev_snp_launch_update {
__u64 gfn_start; /* Guest page number to load/encrypt data into. */
__u64 uaddr; /* Userspace address of data to be loaded/encrypted. */
__u64 len; /* 4k-aligned length in bytes to copy into guest memory.*/
__u8 type; /* The type of the guest pages being initialized. */
__u8 pad0;
__u16 flags; /* Must be zero. */
__u32 pad1;
__u64 pad2[4];
};
where the allowed values for page_type are #define'd as::
KVM_SEV_SNP_PAGE_TYPE_NORMAL
KVM_SEV_SNP_PAGE_TYPE_ZERO
KVM_SEV_SNP_PAGE_TYPE_UNMEASURED
KVM_SEV_SNP_PAGE_TYPE_SECRETS
KVM_SEV_SNP_PAGE_TYPE_CPUID
See the SEV-SNP spec [snp-fw-abi]_ for further details on how each page type is
used/measured.
20. KVM_SEV_SNP_LAUNCH_FINISH
-----------------------------
After completion of the SNP guest launch flow, the KVM_SEV_SNP_LAUNCH_FINISH
command can be issued to make the guest ready for execution.
Parameters (in): struct kvm_sev_snp_launch_finish
Returns: 0 on success, -negative on error
::
struct kvm_sev_snp_launch_finish {
__u64 id_block_uaddr;
__u64 id_auth_uaddr;
__u8 id_block_en;
__u8 auth_key_en;
__u8 vcek_disabled;
__u8 host_data[32];
__u8 pad0[3];
__u16 flags; /* Must be zero */
__u64 pad1[4];
};
See SNP_LAUNCH_FINISH in the SEV-SNP specification [snp-fw-abi]_ for further
details on the input parameters in ``struct kvm_sev_snp_launch_finish``.
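
Tying the commands together, a heavily simplified sketch is shown below. It
assumes the commands are issued through the existing KVM_MEMORY_ENCRYPT_OP /
``struct kvm_sev_cmd`` plumbing used by the other KVM_SEV_* commands (not
spelled out in this document), the ``sev_vm_cmd()`` and ``snp_launch_update()``
helper names are made up for the example, and only the LAUNCH_UPDATE retry loop
described in section 19 is shown::

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Assumed plumbing: issue an SEV command against a VM fd. */
    static int sev_vm_cmd(int vm_fd, int sev_fd, uint32_t id, void *data)
    {
            struct kvm_sev_cmd cmd = {
                    .id     = id,
                    .data   = (uint64_t)(uintptr_t)data,
                    .sev_fd = sev_fd,
            };

            return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
    }

    /* Sketch of the LAUNCH_UPDATE loop: keep calling until the remaining
     * length reported back in the struct reaches zero, retrying on EAGAIN. */
    static int snp_launch_update(int vm_fd, int sev_fd, uint64_t gfn_start,
                                 uint64_t uaddr, uint64_t len, uint8_t type)
    {
            struct kvm_sev_snp_launch_update update;
            int ret;

            memset(&update, 0, sizeof(update));
            update.gfn_start = gfn_start;
            update.uaddr     = uaddr;
            update.len       = len;
            update.type      = type;

            do {
                    ret = sev_vm_cmd(vm_fd, sev_fd, KVM_SEV_SNP_LAUNCH_UPDATE,
                                     &update);
            } while (update.len && (!ret || errno == EAGAIN));

            return ret;
    }

KVM_SEV_SNP_LAUNCH_START and KVM_SEV_SNP_LAUNCH_FINISH would be issued through
the same helper with their respective structs.
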
Device attribute API
====================
@ -497,9 +603,11 @@ References
==========
See [white-paper]_, [api-spec]_, [amd-apm]_ and [kvm-forum]_ for more info.
See [white-paper]_, [api-spec]_, [amd-apm]_, [kvm-forum]_, and [snp-fw-abi]_
for more info.
.. [white-paper] https://developer.amd.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf
.. [api-spec] https://support.amd.com/TechDocs/55766_SEV-KM_API_Specification.pdf
.. [amd-apm] https://support.amd.com/TechDocs/24593.pdf (section 15.34)
.. [kvm-forum] https://www.linux-kvm.org/images/7/74/02x08A-Thomas_Lendacky-AMDs_Virtualizatoin_Memory_Encryption_Technology.pdf
.. [snp-fw-abi] https://www.amd.com/system/files/TechDocs/56860.pdf

View File

@ -48,3 +48,21 @@ have the same physical APIC ID, KVM will deliver events targeting that APIC ID
only to the vCPU with the lowest vCPU ID. If KVM_X2APIC_API_USE_32BIT_IDS is
not enabled, KVM follows x86 architecture when processing interrupts (all vCPUs
matching the target APIC ID receive the interrupt).
MTRRs
-----
KVM does not virtualize guest MTRR memory types. KVM emulates accesses to MTRR
MSRs, i.e. {RD,WR}MSR in the guest will behave as expected, but KVM does not
honor guest MTRRs when determining the effective memory type, and instead
treats all of guest memory as having Writeback (WB) MTRRs.
CR0.CD
------
KVM does not virtualize CR0.CD on Intel CPUs. Similar to MTRR MSRs, KVM
emulates CR0.CD accesses so that loads and stores from/to CR0 behave as
expected, but setting CR0.CD=1 has no impact on the cacheability of guest
memory.
Note, this erratum does not affect AMD CPUs, which fully virtualize CR0.CD in
hardware, i.e. put the CPU caches into "no fill" mode when CR0.CD=1, even when
running in the guest.

View File

@ -12248,6 +12248,8 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: kvmarm@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git
F: Documentation/virt/kvm/arm/
F: Documentation/virt/kvm/devices/arm*
F: arch/arm64/include/asm/kvm*
F: arch/arm64/include/uapi/asm/kvm*
F: arch/arm64/kvm/

View File

@ -160,6 +160,7 @@
#define ESR_ELx_Xs_MASK (GENMASK_ULL(4, 0))
/* ISS field definitions for exceptions taken in to Hyp */
#define ESR_ELx_FSC_ADDRSZ (0x00)
#define ESR_ELx_CV (UL(1) << 24)
#define ESR_ELx_COND_SHIFT (20)
#define ESR_ELx_COND_MASK (UL(0xF) << ESR_ELx_COND_SHIFT)
@ -387,6 +388,11 @@
#ifndef __ASSEMBLY__
#include <asm/types.h>
static inline unsigned long esr_brk_comment(unsigned long esr)
{
return esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
}
static inline bool esr_is_data_abort(unsigned long esr)
{
const unsigned long ec = ESR_ELx_EC(esr);
@ -394,6 +400,12 @@ static inline bool esr_is_data_abort(unsigned long esr)
return ec == ESR_ELx_EC_DABT_LOW || ec == ESR_ELx_EC_DABT_CUR;
}
static inline bool esr_is_cfi_brk(unsigned long esr)
{
return ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
(esr_brk_comment(esr) & ~CFI_BRK_IMM_MASK) == CFI_BRK_IMM_BASE;
}
static inline bool esr_fsc_is_translation_fault(unsigned long esr)
{
esr = esr & ESR_ELx_FSC;

View File

@ -102,7 +102,6 @@
#define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
#define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En)
#define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En | HCRX_EL2_EnFPM)
/* TCR_EL2 Registers bits */

View File

@ -232,6 +232,8 @@ extern void __kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
phys_addr_t start, unsigned long pages);
extern void __kvm_tlb_flush_vmid(struct kvm_s2_mmu *mmu);
extern int __kvm_tlbi_s1e2(struct kvm_s2_mmu *mmu, u64 va, u64 sys_encoding);
extern void __kvm_timer_set_cntvoff(u64 cntvoff);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);

View File

@ -11,6 +11,7 @@
#ifndef __ARM64_KVM_EMULATE_H__
#define __ARM64_KVM_EMULATE_H__
#include <linux/bitfield.h>
#include <linux/kvm_host.h>
#include <asm/debug-monitors.h>
@ -55,6 +56,14 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu);
int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2);
int kvm_inject_nested_irq(struct kvm_vcpu *vcpu);
static inline void kvm_inject_nested_sve_trap(struct kvm_vcpu *vcpu)
{
u64 esr = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_SVE) |
ESR_ELx_IL;
kvm_inject_nested_sync(vcpu, esr);
}
#if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__)
static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
{
@ -69,39 +78,17 @@ static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
{
vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
if (has_vhe() || has_hvhe())
vcpu->arch.hcr_el2 |= HCR_E2H;
if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
/* route synchronous external abort exceptions to EL2 */
vcpu->arch.hcr_el2 |= HCR_TEA;
/* trap error record accesses */
vcpu->arch.hcr_el2 |= HCR_TERR;
}
if (!vcpu_has_run_once(vcpu))
vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB)) {
vcpu->arch.hcr_el2 |= HCR_FWB;
} else {
/*
* For non-FWB CPUs, we trap VM ops (HCR_EL2.TVM) until M+C
* get set in SCTLR_EL1 such that we can detect when the guest
* MMU gets turned on and do the necessary cache maintenance
* then.
*/
/*
* For non-FWB CPUs, we trap VM ops (HCR_EL2.TVM) until M+C
* get set in SCTLR_EL1 such that we can detect when the guest
* MMU gets turned on and do the necessary cache maintenance
* then.
*/
if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
vcpu->arch.hcr_el2 |= HCR_TVM;
}
if (cpus_have_final_cap(ARM64_HAS_EVT) &&
!cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE))
vcpu->arch.hcr_el2 |= HCR_TID4;
else
vcpu->arch.hcr_el2 |= HCR_TID2;
if (vcpu_el1_is_32bit(vcpu))
vcpu->arch.hcr_el2 &= ~HCR_RW;
if (kvm_has_mte(vcpu->kvm))
vcpu->arch.hcr_el2 |= HCR_ATA;
}
static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
@ -660,4 +647,50 @@ static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
kvm_write_cptr_el2(val);
}
/*
* Returns a 'sanitised' view of CPTR_EL2, translating from nVHE to the VHE
* format if E2H isn't set.
*/
static inline u64 vcpu_sanitised_cptr_el2(const struct kvm_vcpu *vcpu)
{
u64 cptr = __vcpu_sys_reg(vcpu, CPTR_EL2);
if (!vcpu_el2_e2h_is_set(vcpu))
cptr = translate_cptr_el2_to_cpacr_el1(cptr);
return cptr;
}
static inline bool ____cptr_xen_trap_enabled(const struct kvm_vcpu *vcpu,
unsigned int xen)
{
switch (xen) {
case 0b00:
case 0b10:
return true;
case 0b01:
return vcpu_el2_tge_is_set(vcpu) && !vcpu_is_el2(vcpu);
case 0b11:
default:
return false;
}
}
#define __guest_hyp_cptr_xen_trap_enabled(vcpu, xen) \
(!vcpu_has_nv(vcpu) ? false : \
____cptr_xen_trap_enabled(vcpu, \
SYS_FIELD_GET(CPACR_ELx, xen, \
vcpu_sanitised_cptr_el2(vcpu))))
static inline bool guest_hyp_fpsimd_traps_enabled(const struct kvm_vcpu *vcpu)
{
return __guest_hyp_cptr_xen_trap_enabled(vcpu, FPEN);
}
static inline bool guest_hyp_sve_traps_enabled(const struct kvm_vcpu *vcpu)
{
return __guest_hyp_cptr_xen_trap_enabled(vcpu, ZEN);
}
#endif /* __ARM64_KVM_EMULATE_H__ */

View File

@ -189,6 +189,33 @@ struct kvm_s2_mmu {
uint64_t split_page_chunk_size;
struct kvm_arch *arch;
/*
* For a shadow stage-2 MMU, the virtual vttbr used by the
* host to parse the guest S2.
* This either contains:
* - the virtual VTTBR programmed by the guest hypervisor with
* CnP cleared
* - The value 1 (VMID=0, BADDR=0, CnP=1) if invalid
*
* We also cache the full VTCR which gets used for TLB invalidation,
* taking the ARM ARM's "Any of the bits in VTCR_EL2 are permitted
* to be cached in a TLB" to the letter.
*/
u64 tlb_vttbr;
u64 tlb_vtcr;
/*
* true when this represents a nested context where virtual
* HCR_EL2.VM == 1
*/
bool nested_stage2_enabled;
/*
* 0: Nobody is currently using this, check vttbr for validity
* >0: Somebody is actively using this.
*/
atomic_t refcnt;
};
struct kvm_arch_memory_slot {
@ -256,6 +283,14 @@ struct kvm_arch {
*/
u64 fgu[__NR_FGT_GROUP_IDS__];
/*
* Stage 2 paging state for VMs with nested S2 using a virtual
* VMID.
*/
struct kvm_s2_mmu *nested_mmus;
size_t nested_mmus_size;
int nested_mmus_next;
/* Interrupt controller */
struct vgic_dist vgic;
@ -327,11 +362,11 @@ struct kvm_arch {
* Atomic access to multiple idregs are guarded by kvm_arch.config_lock.
*/
#define IDREG_IDX(id) (((sys_reg_CRm(id) - 1) << 3) | sys_reg_Op2(id))
#define IDX_IDREG(idx) sys_reg(3, 0, 0, ((idx) >> 3) + 1, (idx) & Op2_mask)
#define IDREG(kvm, id) ((kvm)->arch.id_regs[IDREG_IDX(id)])
#define KVM_ARM_ID_REG_NUM (IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
u64 id_regs[KVM_ARM_ID_REG_NUM];
u64 ctr_el0;
/* Masks for VNCR-baked sysregs */
struct kvm_sysreg_masks *sysreg_masks;
@ -423,6 +458,7 @@ enum vcpu_sysreg {
MDCR_EL2, /* Monitor Debug Configuration Register (EL2) */
CPTR_EL2, /* Architectural Feature Trap Register (EL2) */
HACR_EL2, /* Hypervisor Auxiliary Control Register */
ZCR_EL2, /* SVE Control Register (EL2) */
TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */
TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */
TCR_EL2, /* Translation Control Register (EL2) */
@ -867,6 +903,9 @@ struct kvm_vcpu_arch {
#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl)
#define vcpu_sve_zcr_elx(vcpu) \
(unlikely(is_hyp_ctxt(vcpu)) ? ZCR_EL2 : ZCR_EL1)
#define vcpu_sve_state_size(vcpu) ({ \
size_t __size_ret; \
unsigned int __vcpu_vq; \
@ -991,6 +1030,7 @@ static inline bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val)
case DACR32_EL2: *val = read_sysreg_s(SYS_DACR32_EL2); break;
case IFSR32_EL2: *val = read_sysreg_s(SYS_IFSR32_EL2); break;
case DBGVCR32_EL2: *val = read_sysreg_s(SYS_DBGVCR32_EL2); break;
case ZCR_EL1: *val = read_sysreg_s(SYS_ZCR_EL12); break;
default: return false;
}
@ -1036,6 +1076,7 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
case DACR32_EL2: write_sysreg_s(val, SYS_DACR32_EL2); break;
case IFSR32_EL2: write_sysreg_s(val, SYS_IFSR32_EL2); break;
case DBGVCR32_EL2: write_sysreg_s(val, SYS_DBGVCR32_EL2); break;
case ZCR_EL1: write_sysreg_s(val, SYS_ZCR_EL12); break;
default: return false;
}
@ -1145,7 +1186,7 @@ int __init populate_nv_trap_config(void);
bool lock_all_vcpus(struct kvm *kvm);
void unlock_all_vcpus(struct kvm *kvm);
void kvm_init_sysreg(struct kvm_vcpu *);
void kvm_calculate_traps(struct kvm_vcpu *vcpu);
/* MMIO helpers */
void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
@ -1248,7 +1289,6 @@ static inline bool kvm_system_needs_idmapped_vectors(void)
}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
void kvm_arm_init_debug(void);
void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu);
@ -1306,6 +1346,7 @@ void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu);
void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu);
int __init kvm_set_ipa_limit(void);
u32 kvm_get_pa_bits(struct kvm *kvm);
#define __KVM_HAVE_ARCH_VM_ALLOC
struct kvm *kvm_arch_alloc_vm(void);
@ -1355,6 +1396,24 @@ static inline void kvm_hyp_reserve(void) { }
void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
static inline u64 *__vm_id_reg(struct kvm_arch *ka, u32 reg)
{
switch (reg) {
case sys_reg(3, 0, 0, 1, 0) ... sys_reg(3, 0, 0, 7, 7):
return &ka->id_regs[IDREG_IDX(reg)];
case SYS_CTR_EL0:
return &ka->ctr_el0;
default:
WARN_ON_ONCE(1);
return NULL;
}
}
#define kvm_read_vm_id_reg(kvm, reg) \
({ u64 __val = *__vm_id_reg(&(kvm)->arch, reg); __val; })
void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
#define __expand_field_sign_unsigned(id, fld, val) \
((u64)SYS_FIELD_VALUE(id, fld, val))
@ -1371,7 +1430,7 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
#define get_idreg_field_unsigned(kvm, id, fld) \
({ \
u64 __val = IDREG((kvm), SYS_##id); \
u64 __val = kvm_read_vm_id_reg((kvm), SYS_##id); \
FIELD_GET(id##_##fld##_MASK, __val); \
})

View File

@ -124,8 +124,8 @@ void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
#endif
#ifdef __KVM_NVHE_HYPERVISOR__
void __pkvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
phys_addr_t pgd, void *sp, void *cont_fn);
void __pkvm_init_switch_pgd(phys_addr_t pgd, unsigned long sp,
void (*fn)(void));
int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
unsigned long *per_cpu_base, u32 hyp_va_bits);
void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);

View File

@ -98,6 +98,7 @@ alternative_cb_end
#include <asm/mmu_context.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_host.h>
#include <asm/kvm_nested.h>
void kvm_update_va_mask(struct alt_instr *alt,
__le32 *origptr, __le32 *updptr, int nr_inst);
@ -165,6 +166,10 @@ int create_hyp_exec_mappings(phys_addr_t phys_addr, size_t size,
int create_hyp_stack(phys_addr_t phys_addr, unsigned long *haddr);
void __init free_hyp_pgds(void);
void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size);
void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);
void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end);
void stage2_unmap_vm(struct kvm *kvm);
int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type);
void kvm_uninit_stage2_mmu(struct kvm *kvm);
@ -326,5 +331,26 @@ static inline struct kvm *kvm_s2_mmu_to_kvm(struct kvm_s2_mmu *mmu)
{
return container_of(mmu->arch, struct kvm, arch);
}
static inline u64 get_vmid(u64 vttbr)
{
return (vttbr & VTTBR_VMID_MASK(kvm_get_vmid_bits())) >>
VTTBR_VMID_SHIFT;
}
static inline bool kvm_s2_mmu_valid(struct kvm_s2_mmu *mmu)
{
return !(mmu->tlb_vttbr & VTTBR_CNP_BIT);
}
static inline bool kvm_is_nested_s2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu)
{
/*
* Be careful, mmu may not be fully initialised so do not look at
* *any* of its fields.
*/
return &kvm->arch.mmu != mmu;
}
#endif /* __ASSEMBLY__ */
#endif /* __ARM64_KVM_MMU_H__ */

View File

@ -5,6 +5,7 @@
#include <linux/bitfield.h>
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_pgtable.h>
static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu)
{
@ -32,7 +33,7 @@ static inline u64 translate_tcr_el2_to_tcr_el1(u64 tcr)
static inline u64 translate_cptr_el2_to_cpacr_el1(u64 cptr_el2)
{
u64 cpacr_el1 = 0;
u64 cpacr_el1 = CPACR_ELx_RES1;
if (cptr_el2 & CPTR_EL2_TTA)
cpacr_el1 |= CPACR_ELx_TTA;
@ -41,6 +42,8 @@ static inline u64 translate_cptr_el2_to_cpacr_el1(u64 cptr_el2)
if (!(cptr_el2 & CPTR_EL2_TZ))
cpacr_el1 |= CPACR_ELx_ZEN;
cpacr_el1 |= cptr_el2 & (CPTR_EL2_TCPAC | CPTR_EL2_TAM);
return cpacr_el1;
}
@ -61,6 +64,125 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
}
extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
extern void kvm_init_nested(struct kvm *kvm);
extern int kvm_vcpu_init_nested(struct kvm_vcpu *vcpu);
extern void kvm_init_nested_s2_mmu(struct kvm_s2_mmu *mmu);
extern struct kvm_s2_mmu *lookup_s2_mmu(struct kvm_vcpu *vcpu);
union tlbi_info;
extern void kvm_s2_mmu_iterate_by_vmid(struct kvm *kvm, u16 vmid,
const union tlbi_info *info,
void (*)(struct kvm_s2_mmu *,
const union tlbi_info *));
extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
extern void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu);
struct kvm_s2_trans {
phys_addr_t output;
unsigned long block_size;
bool writable;
bool readable;
int level;
u32 esr;
u64 upper_attr;
};
static inline phys_addr_t kvm_s2_trans_output(struct kvm_s2_trans *trans)
{
return trans->output;
}
static inline unsigned long kvm_s2_trans_size(struct kvm_s2_trans *trans)
{
return trans->block_size;
}
static inline u32 kvm_s2_trans_esr(struct kvm_s2_trans *trans)
{
return trans->esr;
}
static inline bool kvm_s2_trans_readable(struct kvm_s2_trans *trans)
{
return trans->readable;
}
static inline bool kvm_s2_trans_writable(struct kvm_s2_trans *trans)
{
return trans->writable;
}
static inline bool kvm_s2_trans_executable(struct kvm_s2_trans *trans)
{
return !(trans->upper_attr & BIT(54));
}
extern int kvm_walk_nested_s2(struct kvm_vcpu *vcpu, phys_addr_t gipa,
struct kvm_s2_trans *result);
extern int kvm_s2_handle_perm_fault(struct kvm_vcpu *vcpu,
struct kvm_s2_trans *trans);
extern int kvm_inject_s2_fault(struct kvm_vcpu *vcpu, u64 esr_el2);
extern void kvm_nested_s2_wp(struct kvm *kvm);
extern void kvm_nested_s2_unmap(struct kvm *kvm);
extern void kvm_nested_s2_flush(struct kvm *kvm);
unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val);
static inline bool kvm_supported_tlbi_s1e1_op(struct kvm_vcpu *vpcu, u32 instr)
{
struct kvm *kvm = vpcu->kvm;
u8 CRm = sys_reg_CRm(instr);
if (!(sys_reg_Op0(instr) == TLBI_Op0 &&
sys_reg_Op1(instr) == TLBI_Op1_EL1))
return false;
if (!(sys_reg_CRn(instr) == TLBI_CRn_XS ||
(sys_reg_CRn(instr) == TLBI_CRn_nXS &&
kvm_has_feat(kvm, ID_AA64ISAR1_EL1, XS, IMP))))
return false;
if (CRm == TLBI_CRm_nROS &&
!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
return false;
if ((CRm == TLBI_CRm_RIS || CRm == TLBI_CRm_ROS ||
CRm == TLBI_CRm_RNS) &&
!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
return false;
return true;
}
static inline bool kvm_supported_tlbi_s1e2_op(struct kvm_vcpu *vpcu, u32 instr)
{
struct kvm *kvm = vpcu->kvm;
u8 CRm = sys_reg_CRm(instr);
if (!(sys_reg_Op0(instr) == TLBI_Op0 &&
sys_reg_Op1(instr) == TLBI_Op1_EL2))
return false;
if (!(sys_reg_CRn(instr) == TLBI_CRn_XS ||
(sys_reg_CRn(instr) == TLBI_CRn_nXS &&
kvm_has_feat(kvm, ID_AA64ISAR1_EL1, XS, IMP))))
return false;
if (CRm == TLBI_CRm_IPAIS || CRm == TLBI_CRm_IPAONS)
return false;
if (CRm == TLBI_CRm_nROS &&
!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
return false;
if ((CRm == TLBI_CRm_RIS || CRm == TLBI_CRm_ROS ||
CRm == TLBI_CRm_RNS) &&
!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
return false;
return true;
}
int kvm_init_nv_sysregs(struct kvm *kvm);
@ -76,4 +198,11 @@ static inline bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
}
#endif
#define KVM_NV_GUEST_MAP_SZ (KVM_PGTABLE_PROT_SW1 | KVM_PGTABLE_PROT_SW0)
static inline u64 kvm_encode_nested_level(struct kvm_s2_trans *trans)
{
return FIELD_PREP(KVM_NV_GUEST_MAP_SZ, trans->level);
}
#endif /* __ARM64_KVM_NESTED_H */

View File

@ -654,6 +654,23 @@
#define OP_AT_S12E0W sys_insn(AT_Op0, 4, AT_CRn, 8, 7)
/* TLBI instructions */
#define TLBI_Op0 1
#define TLBI_Op1_EL1 0 /* Accessible from EL1 or higher */
#define TLBI_Op1_EL2 4 /* Accessible from EL2 or higher */
#define TLBI_CRn_XS 8 /* Extra Slow (the common one) */
#define TLBI_CRn_nXS 9 /* not Extra Slow (which nobody uses)*/
#define TLBI_CRm_IPAIS 0 /* S2 Inner-Shareable */
#define TLBI_CRm_nROS 1 /* non-Range, Outer-Sharable */
#define TLBI_CRm_RIS 2 /* Range, Inner-Sharable */
#define TLBI_CRm_nRIS 3 /* non-Range, Inner-Sharable */
#define TLBI_CRm_IPAONS 4 /* S2 Outer and Non-Shareable */
#define TLBI_CRm_ROS 5 /* Range, Outer-Sharable */
#define TLBI_CRm_RNS 6 /* Range, Non-Sharable */
#define TLBI_CRm_nRNS 7 /* non-Range, Non-Sharable */
#define OP_TLBI_VMALLE1OS sys_insn(1, 0, 8, 1, 0)
#define OP_TLBI_VAE1OS sys_insn(1, 0, 8, 1, 1)
#define OP_TLBI_ASIDE1OS sys_insn(1, 0, 8, 1, 2)

View File

@ -128,6 +128,7 @@ int main(void)
DEFINE(VCPU_FAULT_DISR, offsetof(struct kvm_vcpu, arch.fault.disr_el1));
DEFINE(VCPU_HCR_EL2, offsetof(struct kvm_vcpu, arch.hcr_el2));
DEFINE(CPU_USER_PT_REGS, offsetof(struct kvm_cpu_context, regs));
DEFINE(CPU_ELR_EL2, offsetof(struct kvm_cpu_context, sys_regs[ELR_EL2]));
DEFINE(CPU_RGSR_EL1, offsetof(struct kvm_cpu_context, sys_regs[RGSR_EL1]));
DEFINE(CPU_GCR_EL1, offsetof(struct kvm_cpu_context, sys_regs[GCR_EL1]));
DEFINE(CPU_APIAKEYLO_EL1, offsetof(struct kvm_cpu_context, sys_regs[APIAKEYLO_EL1]));

View File

@ -312,9 +312,7 @@ static int call_break_hook(struct pt_regs *regs, unsigned long esr)
* entirely not preemptible, and we can use rcu list safely here.
*/
list_for_each_entry_rcu(hook, list, node) {
unsigned long comment = esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
if ((comment & ~hook->mask) == hook->imm)
if ((esr_brk_comment(esr) & ~hook->mask) == hook->imm)
fn = hook->fn;
}

View File

@ -1105,8 +1105,6 @@ static struct break_hook ubsan_break_hook = {
};
#endif
#define esr_comment(esr) ((esr) & ESR_ELx_BRK64_ISS_COMMENT_MASK)
/*
* Initial handler for AArch64 BRK exceptions
* This handler only used until debug_traps_init().
@ -1115,15 +1113,15 @@ int __init early_brk64(unsigned long addr, unsigned long esr,
struct pt_regs *regs)
{
#ifdef CONFIG_CFI_CLANG
if ((esr_comment(esr) & ~CFI_BRK_IMM_MASK) == CFI_BRK_IMM_BASE)
if (esr_is_cfi_brk(esr))
return cfi_handler(regs, esr) != DBG_HOOK_HANDLED;
#endif
#ifdef CONFIG_KASAN_SW_TAGS
if ((esr_comment(esr) & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
if ((esr_brk_comment(esr) & ~KASAN_BRK_MASK) == KASAN_BRK_IMM)
return kasan_handler(regs, esr) != DBG_HOOK_HANDLED;
#endif
#ifdef CONFIG_UBSAN_TRAP
if ((esr_comment(esr) & ~UBSAN_BRK_MASK) == UBSAN_BRK_IMM)
if ((esr_brk_comment(esr) & ~UBSAN_BRK_MASK) == UBSAN_BRK_IMM)
return ubsan_handler(regs, esr) != DBG_HOOK_HANDLED;
#endif
return bug_handler(regs, esr) != DBG_HOOK_HANDLED;

View File

@ -48,6 +48,15 @@
static enum kvm_mode kvm_mode = KVM_MODE_DEFAULT;
enum kvm_wfx_trap_policy {
KVM_WFX_NOTRAP_SINGLE_TASK, /* Default option */
KVM_WFX_NOTRAP,
KVM_WFX_TRAP,
};
static enum kvm_wfx_trap_policy kvm_wfi_trap_policy __read_mostly = KVM_WFX_NOTRAP_SINGLE_TASK;
static enum kvm_wfx_trap_policy kvm_wfe_trap_policy __read_mostly = KVM_WFX_NOTRAP_SINGLE_TASK;
DECLARE_KVM_HYP_PER_CPU(unsigned long, kvm_hyp_vector);
DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
@ -170,6 +179,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
mutex_unlock(&kvm->lock);
#endif
kvm_init_nested(kvm);
ret = kvm_share_hyp(kvm, kvm + 1);
if (ret)
return ret;
@ -546,11 +557,32 @@ static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
}
}
static bool kvm_vcpu_should_clear_twi(struct kvm_vcpu *vcpu)
{
if (unlikely(kvm_wfi_trap_policy != KVM_WFX_NOTRAP_SINGLE_TASK))
return kvm_wfi_trap_policy == KVM_WFX_NOTRAP;
return single_task_running() &&
(atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
vcpu->kvm->arch.vgic.nassgireq);
}
static bool kvm_vcpu_should_clear_twe(struct kvm_vcpu *vcpu)
{
if (unlikely(kvm_wfe_trap_policy != KVM_WFX_NOTRAP_SINGLE_TASK))
return kvm_wfe_trap_policy == KVM_WFX_NOTRAP;
return single_task_running();
}
void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
struct kvm_s2_mmu *mmu;
int *last_ran;
if (vcpu_has_nv(vcpu))
kvm_vcpu_load_hw_mmu(vcpu);
mmu = vcpu->arch.hw_mmu;
last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
@ -579,10 +611,15 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
if (kvm_arm_is_pvtime_enabled(&vcpu->arch))
kvm_make_request(KVM_REQ_RECORD_STEAL, vcpu);
if (single_task_running())
vcpu_clear_wfx_traps(vcpu);
if (kvm_vcpu_should_clear_twe(vcpu))
vcpu->arch.hcr_el2 &= ~HCR_TWE;
else
vcpu_set_wfx_traps(vcpu);
vcpu->arch.hcr_el2 |= HCR_TWE;
if (kvm_vcpu_should_clear_twi(vcpu))
vcpu->arch.hcr_el2 &= ~HCR_TWI;
else
vcpu->arch.hcr_el2 |= HCR_TWI;
vcpu_set_pauth_traps(vcpu);
@ -601,6 +638,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
kvm_timer_vcpu_put(vcpu);
kvm_vgic_put(vcpu);
kvm_vcpu_pmu_restore_host(vcpu);
if (vcpu_has_nv(vcpu))
kvm_vcpu_put_hw_mmu(vcpu);
kvm_arm_vmid_clear_active();
vcpu_clear_on_unsupported_cpu(vcpu);
@ -797,7 +836,7 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
* This needs to happen after NV has imposed its own restrictions on
* the feature set
*/
kvm_init_sysreg(vcpu);
kvm_calculate_traps(vcpu);
ret = kvm_timer_enable(vcpu);
if (ret)
@ -1099,7 +1138,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
vcpu_load(vcpu);
if (run->immediate_exit) {
if (!vcpu->wants_to_run) {
ret = -EINTR;
goto out;
}
@ -1419,11 +1458,6 @@ static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features))
return -EINVAL;
/* Disallow NV+SVE for the time being */
if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features) &&
test_bit(KVM_ARM_VCPU_SVE, &features))
return -EINVAL;
if (!test_bit(KVM_ARM_VCPU_EL1_32BIT, &features))
return 0;
@ -1459,6 +1493,10 @@ static int kvm_setup_vcpu(struct kvm_vcpu *vcpu)
if (kvm_vcpu_has_pmu(vcpu) && !kvm->arch.arm_pmu)
ret = kvm_arm_set_default_pmu(kvm);
/* Prepare for nested if required */
if (!ret && vcpu_has_nv(vcpu))
ret = kvm_vcpu_init_nested(vcpu);
return ret;
}
@ -2858,6 +2896,36 @@ static int __init early_kvm_mode_cfg(char *arg)
}
early_param("kvm-arm.mode", early_kvm_mode_cfg);
static int __init early_kvm_wfx_trap_policy_cfg(char *arg, enum kvm_wfx_trap_policy *p)
{
if (!arg)
return -EINVAL;
if (strcmp(arg, "trap") == 0) {
*p = KVM_WFX_TRAP;
return 0;
}
if (strcmp(arg, "notrap") == 0) {
*p = KVM_WFX_NOTRAP;
return 0;
}
return -EINVAL;
}
static int __init early_kvm_wfi_trap_policy_cfg(char *arg)
{
return early_kvm_wfx_trap_policy_cfg(arg, &kvm_wfi_trap_policy);
}
early_param("kvm-arm.wfi_trap_policy", early_kvm_wfi_trap_policy_cfg);
static int __init early_kvm_wfe_trap_policy_cfg(char *arg)
{
return early_kvm_wfx_trap_policy_cfg(arg, &kvm_wfe_trap_policy);
}
early_param("kvm-arm.wfe_trap_policy", early_kvm_wfe_trap_policy_cfg);
enum kvm_mode kvm_get_mode(void)
{
return kvm_mode;


@ -79,6 +79,12 @@ enum cgt_group_id {
CGT_MDCR_E2TB,
CGT_MDCR_TDCC,
CGT_CPACR_E0POE,
CGT_CPTR_TAM,
CGT_CPTR_TCPAC,
CGT_HCRX_TCR2En,
/*
* Anything after this point is a combination of coarse trap
* controls, which must all be evaluated to decide what to do.
@ -89,6 +95,7 @@ enum cgt_group_id {
CGT_HCR_TTLB_TTLBIS,
CGT_HCR_TTLB_TTLBOS,
CGT_HCR_TVM_TRVM,
CGT_HCR_TVM_TRVM_HCRX_TCR2En,
CGT_HCR_TPU_TICAB,
CGT_HCR_TPU_TOCU,
CGT_HCR_NV1_nNV2_ENSCXT,
@ -106,6 +113,8 @@ enum cgt_group_id {
CGT_CNTHCTL_EL1PCTEN = __COMPLEX_CONDITIONS__,
CGT_CNTHCTL_EL1PTEN,
CGT_CPTR_TTA,
/* Must be last */
__NR_CGT_GROUP_IDS__
};
@ -345,6 +354,30 @@ static const struct trap_bits coarse_trap_bits[] = {
.mask = MDCR_EL2_TDCC,
.behaviour = BEHAVE_FORWARD_ANY,
},
[CGT_CPACR_E0POE] = {
.index = CPTR_EL2,
.value = CPACR_ELx_E0POE,
.mask = CPACR_ELx_E0POE,
.behaviour = BEHAVE_FORWARD_ANY,
},
[CGT_CPTR_TAM] = {
.index = CPTR_EL2,
.value = CPTR_EL2_TAM,
.mask = CPTR_EL2_TAM,
.behaviour = BEHAVE_FORWARD_ANY,
},
[CGT_CPTR_TCPAC] = {
.index = CPTR_EL2,
.value = CPTR_EL2_TCPAC,
.mask = CPTR_EL2_TCPAC,
.behaviour = BEHAVE_FORWARD_ANY,
},
[CGT_HCRX_TCR2En] = {
.index = HCRX_EL2,
.value = 0,
.mask = HCRX_EL2_TCR2En,
.behaviour = BEHAVE_FORWARD_ANY,
},
};
#define MCB(id, ...) \
@ -359,6 +392,8 @@ static const enum cgt_group_id *coarse_control_combo[] = {
MCB(CGT_HCR_TTLB_TTLBIS, CGT_HCR_TTLB, CGT_HCR_TTLBIS),
MCB(CGT_HCR_TTLB_TTLBOS, CGT_HCR_TTLB, CGT_HCR_TTLBOS),
MCB(CGT_HCR_TVM_TRVM, CGT_HCR_TVM, CGT_HCR_TRVM),
MCB(CGT_HCR_TVM_TRVM_HCRX_TCR2En,
CGT_HCR_TVM, CGT_HCR_TRVM, CGT_HCRX_TCR2En),
MCB(CGT_HCR_TPU_TICAB, CGT_HCR_TPU, CGT_HCR_TICAB),
MCB(CGT_HCR_TPU_TOCU, CGT_HCR_TPU, CGT_HCR_TOCU),
MCB(CGT_HCR_NV1_nNV2_ENSCXT, CGT_HCR_NV1_nNV2, CGT_HCR_ENSCXT),
@ -410,12 +445,26 @@ static enum trap_behaviour check_cnthctl_el1pten(struct kvm_vcpu *vcpu)
return BEHAVE_FORWARD_ANY;
}
static enum trap_behaviour check_cptr_tta(struct kvm_vcpu *vcpu)
{
u64 val = __vcpu_sys_reg(vcpu, CPTR_EL2);
if (!vcpu_el2_e2h_is_set(vcpu))
val = translate_cptr_el2_to_cpacr_el1(val);
if (val & CPACR_ELx_TTA)
return BEHAVE_FORWARD_ANY;
return BEHAVE_HANDLE_LOCALLY;
}
#define CCC(id, fn) \
[id - __COMPLEX_CONDITIONS__] = fn
static const complex_condition_check ccc[] = {
CCC(CGT_CNTHCTL_EL1PCTEN, check_cnthctl_el1pcten),
CCC(CGT_CNTHCTL_EL1PTEN, check_cnthctl_el1pten),
CCC(CGT_CPTR_TTA, check_cptr_tta),
};
/*
@ -622,6 +671,7 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
SR_TRAP(SYS_MAIR_EL1, CGT_HCR_TVM_TRVM),
SR_TRAP(SYS_AMAIR_EL1, CGT_HCR_TVM_TRVM),
SR_TRAP(SYS_CONTEXTIDR_EL1, CGT_HCR_TVM_TRVM),
SR_TRAP(SYS_TCR2_EL1, CGT_HCR_TVM_TRVM_HCRX_TCR2En),
SR_TRAP(SYS_DC_ZVA, CGT_HCR_TDZ),
SR_TRAP(SYS_DC_GVA, CGT_HCR_TDZ),
SR_TRAP(SYS_DC_GZVA, CGT_HCR_TDZ),
@ -1000,6 +1050,59 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
SR_TRAP(SYS_TRBPTR_EL1, CGT_MDCR_E2TB),
SR_TRAP(SYS_TRBSR_EL1, CGT_MDCR_E2TB),
SR_TRAP(SYS_TRBTRG_EL1, CGT_MDCR_E2TB),
SR_TRAP(SYS_CPACR_EL1, CGT_CPTR_TCPAC),
SR_TRAP(SYS_AMUSERENR_EL0, CGT_CPTR_TAM),
SR_TRAP(SYS_AMCFGR_EL0, CGT_CPTR_TAM),
SR_TRAP(SYS_AMCGCR_EL0, CGT_CPTR_TAM),
SR_TRAP(SYS_AMCNTENCLR0_EL0, CGT_CPTR_TAM),
SR_TRAP(SYS_AMCNTENCLR1_EL0, CGT_CPTR_TAM),
SR_TRAP(SYS_AMCNTENSET0_EL0, CGT_CPTR_TAM),
SR_TRAP(SYS_AMCNTENSET1_EL0, CGT_CPTR_TAM),
SR_TRAP(SYS_AMCR_EL0, CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR0_EL0(0), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR0_EL0(1), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR0_EL0(2), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR0_EL0(3), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(0), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(1), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(2), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(3), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(4), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(5), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(6), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(7), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(8), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(9), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(10), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(11), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(12), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(13), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(14), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVCNTR1_EL0(15), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER0_EL0(0), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER0_EL0(1), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER0_EL0(2), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER0_EL0(3), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(0), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(1), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(2), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(3), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(4), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(5), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(6), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(7), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(8), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(9), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(10), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(11), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(12), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(13), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(14), CGT_CPTR_TAM),
SR_TRAP(SYS_AMEVTYPER1_EL0(15), CGT_CPTR_TAM),
SR_TRAP(SYS_POR_EL0, CGT_CPACR_E0POE),
/* op0=2, op1=1, and CRn<0b1000 */
SR_RANGE_TRAP(sys_reg(2, 1, 0, 0, 0),
sys_reg(2, 1, 7, 15, 7), CGT_CPTR_TTA),
SR_TRAP(SYS_CNTP_TVAL_EL0, CGT_CNTHCTL_EL1PTEN),
SR_TRAP(SYS_CNTP_CVAL_EL0, CGT_CNTHCTL_EL1PTEN),
SR_TRAP(SYS_CNTP_CTL_EL0, CGT_CNTHCTL_EL1PTEN),
@ -1071,6 +1174,7 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
SR_FGT(SYS_TPIDRRO_EL0, HFGxTR, TPIDRRO_EL0, 1),
SR_FGT(SYS_TPIDR_EL1, HFGxTR, TPIDR_EL1, 1),
SR_FGT(SYS_TCR_EL1, HFGxTR, TCR_EL1, 1),
SR_FGT(SYS_TCR2_EL1, HFGxTR, TCR_EL1, 1),
SR_FGT(SYS_SCXTNUM_EL0, HFGxTR, SCXTNUM_EL0, 1),
SR_FGT(SYS_SCXTNUM_EL1, HFGxTR, SCXTNUM_EL1, 1),
SR_FGT(SYS_SCTLR_EL1, HFGxTR, SCTLR_EL1, 1),


@ -178,7 +178,13 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
if (guest_owns_fp_regs()) {
if (vcpu_has_sve(vcpu)) {
__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
u64 zcr = read_sysreg_el1(SYS_ZCR);
/*
* If the vCPU is in the hyp context then ZCR_EL1 is
* loaded with its vEL2 counterpart.
*/
__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr;
/*
* Restore the VL that was saved when bound to the CPU,
@ -189,11 +195,14 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
* Note that this means that at guest exit ZCR_EL1 is
* not necessarily the same as on guest entry.
*
* Restoring the VL isn't needed in VHE mode since
* ZCR_EL2 (accessed via ZCR_EL1) would fulfill the same
* role when doing the save from EL2.
* ZCR_EL2 holds the guest hypervisor's VL when running
* a nested guest, which could be smaller than the
* max for the vCPU. Similar to above, we first need to
* switch to a VL consistent with the layout of the
* vCPU's SVE state. KVM support for NV implies VHE, so
* using the ZCR_EL1 alias is safe.
*/
if (!has_vhe())
if (!has_vhe() || (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)))
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1,
SYS_ZCR_EL1);
}


@ -94,11 +94,19 @@ static int handle_smc(struct kvm_vcpu *vcpu)
}
/*
* Guest access to FP/ASIMD registers are routed to this handler only
* when the system doesn't support FP/ASIMD.
* This handles the cases where the system does not support FP/ASIMD or when
* we are running nested virtualization and the guest hypervisor is trapping
* FP/ASIMD accesses by its own guest.
*
* All other handling of guest vs. host FP/ASIMD register state is handled in
* fixup_guest_exit().
*/
static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
static int kvm_handle_fpasimd(struct kvm_vcpu *vcpu)
{
if (guest_hyp_fpsimd_traps_enabled(vcpu))
return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
/* This is the case when the system doesn't support FP/ASIMD. */
kvm_inject_undefined(vcpu);
return 1;
}
@ -209,6 +217,9 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
*/
static int handle_sve(struct kvm_vcpu *vcpu)
{
if (guest_hyp_sve_traps_enabled(vcpu))
return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
kvm_inject_undefined(vcpu);
return 1;
}
@ -304,7 +315,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BREAKPT_LOW]= kvm_handle_guest_debug,
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
[ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
[ESR_ELx_EC_FP_ASIMD] = kvm_handle_fpasimd,
[ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
};
@ -411,6 +422,20 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
}
static void print_nvhe_hyp_panic(const char *name, u64 panic_addr)
{
kvm_err("nVHE hyp %s at: [<%016llx>] %pB!\n", name, panic_addr,
(void *)(panic_addr + kaslr_offset()));
}
static void kvm_nvhe_report_cfi_failure(u64 panic_addr)
{
print_nvhe_hyp_panic("CFI failure", panic_addr);
if (IS_ENABLED(CONFIG_CFI_PERMISSIVE))
kvm_err(" (CONFIG_CFI_PERMISSIVE ignored for hyp failures)\n");
}
void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
u64 elr_virt, u64 elr_phys,
u64 par, uintptr_t vcpu,
@ -423,7 +448,7 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
if (mode != PSR_MODE_EL2t && mode != PSR_MODE_EL2h) {
kvm_err("Invalid host exception to nVHE hyp!\n");
} else if (ESR_ELx_EC(esr) == ESR_ELx_EC_BRK64 &&
(esr & ESR_ELx_BRK64_ISS_COMMENT_MASK) == BUG_BRK_IMM) {
esr_brk_comment(esr) == BUG_BRK_IMM) {
const char *file = NULL;
unsigned int line = 0;
@ -439,11 +464,11 @@ void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
if (file)
kvm_err("nVHE hyp BUG at: %s:%u!\n", file, line);
else
kvm_err("nVHE hyp BUG at: [<%016llx>] %pB!\n", panic_addr,
(void *)(panic_addr + kaslr_offset()));
print_nvhe_hyp_panic("BUG", panic_addr);
} else if (IS_ENABLED(CONFIG_CFI_CLANG) && esr_is_cfi_brk(esr)) {
kvm_nvhe_report_cfi_failure(panic_addr);
} else {
kvm_err("nVHE hyp panic at: [<%016llx>] %pB!\n", panic_addr,
(void *)(panic_addr + kaslr_offset()));
print_nvhe_hyp_panic("panic", panic_addr);
}
/* Dump the nVHE hypervisor backtrace */


@ -83,6 +83,14 @@ alternative_else_nop_endif
eret
sb
SYM_INNER_LABEL(__guest_exit_restore_elr_and_panic, SYM_L_GLOBAL)
// x2-x29,lr: vcpu regs
// vcpu x0-x1 on the stack
adr_this_cpu x0, kvm_hyp_ctxt, x1
ldr x0, [x0, #CPU_ELR_EL2]
msr elr_el2, x0
SYM_INNER_LABEL(__guest_exit_panic, SYM_L_GLOBAL)
// x2-x29,lr: vcpu regs
// vcpu x0-x1 on the stack


@ -314,11 +314,24 @@ static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
{
/*
* The vCPU's saved SVE state layout always matches the max VL of the
* vCPU. Start off with the max VL so we can load the SVE state.
*/
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
__sve_restore_state(vcpu_sve_pffr(vcpu),
&vcpu->arch.ctxt.fp_regs.fpsr,
true);
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
/*
* The effective VL for a VM could differ from the max VL when running a
* nested guest, as the guest hypervisor could select a smaller VL. Slap
* that into hardware before wrapping up.
*/
if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))
sve_cond_update_zcr_vq(__vcpu_sys_reg(vcpu, ZCR_EL2), SYS_ZCR_EL2);
write_sysreg_el1(__vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)), SYS_ZCR);
}
static inline void __hyp_sve_save_host(void)
@ -354,10 +367,19 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Only handle traps the vCPU can support here: */
switch (esr_ec) {
case ESR_ELx_EC_FP_ASIMD:
/* Forward traps to the guest hypervisor as required */
if (guest_hyp_fpsimd_traps_enabled(vcpu))
return false;
break;
case ESR_ELx_EC_SYS64:
if (WARN_ON_ONCE(!is_hyp_ctxt(vcpu)))
return false;
fallthrough;
case ESR_ELx_EC_SVE:
if (!sve_guest)
return false;
if (guest_hyp_sve_traps_enabled(vcpu))
return false;
break;
default:
return false;
@ -693,7 +715,7 @@ guest:
static inline void __kvm_unexpected_el2_exception(void)
{
extern char __guest_exit_panic[];
extern char __guest_exit_restore_elr_and_panic[];
unsigned long addr, fixup;
struct kvm_exception_table_entry *entry, *end;
unsigned long elr_el2 = read_sysreg(elr_el2);
@ -715,7 +737,8 @@ static inline void __kvm_unexpected_el2_exception(void)
}
/* Trigger a panic after restoring the hyp context. */
write_sysreg(__guest_exit_panic, elr_el2);
this_cpu_ptr(&kvm_hyp_ctxt)->sys_regs[ELR_EL2] = elr_el2;
write_sysreg(__guest_exit_restore_elr_and_panic, elr_el2);
}
#endif /* __ARM64_KVM_HYP_SWITCH_H__ */


@ -55,6 +55,17 @@ static inline bool ctxt_has_s1pie(struct kvm_cpu_context *ctxt)
return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64MMFR3_EL1, S1PIE, IMP);
}
static inline bool ctxt_has_tcrx(struct kvm_cpu_context *ctxt)
{
struct kvm_vcpu *vcpu;
if (!cpus_have_final_cap(ARM64_HAS_TCR2))
return false;
vcpu = ctxt_to_vcpu(ctxt);
return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64MMFR3_EL1, TCRX, IMP);
}
static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
{
ctxt_sys_reg(ctxt, SCTLR_EL1) = read_sysreg_el1(SYS_SCTLR);
@ -62,8 +73,14 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
ctxt_sys_reg(ctxt, TTBR0_EL1) = read_sysreg_el1(SYS_TTBR0);
ctxt_sys_reg(ctxt, TTBR1_EL1) = read_sysreg_el1(SYS_TTBR1);
ctxt_sys_reg(ctxt, TCR_EL1) = read_sysreg_el1(SYS_TCR);
if (cpus_have_final_cap(ARM64_HAS_TCR2))
if (ctxt_has_tcrx(ctxt)) {
ctxt_sys_reg(ctxt, TCR2_EL1) = read_sysreg_el1(SYS_TCR2);
if (ctxt_has_s1pie(ctxt)) {
ctxt_sys_reg(ctxt, PIR_EL1) = read_sysreg_el1(SYS_PIR);
ctxt_sys_reg(ctxt, PIRE0_EL1) = read_sysreg_el1(SYS_PIRE0);
}
}
ctxt_sys_reg(ctxt, ESR_EL1) = read_sysreg_el1(SYS_ESR);
ctxt_sys_reg(ctxt, AFSR0_EL1) = read_sysreg_el1(SYS_AFSR0);
ctxt_sys_reg(ctxt, AFSR1_EL1) = read_sysreg_el1(SYS_AFSR1);
@ -73,10 +90,6 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
ctxt_sys_reg(ctxt, CONTEXTIDR_EL1) = read_sysreg_el1(SYS_CONTEXTIDR);
ctxt_sys_reg(ctxt, AMAIR_EL1) = read_sysreg_el1(SYS_AMAIR);
ctxt_sys_reg(ctxt, CNTKCTL_EL1) = read_sysreg_el1(SYS_CNTKCTL);
if (ctxt_has_s1pie(ctxt)) {
ctxt_sys_reg(ctxt, PIR_EL1) = read_sysreg_el1(SYS_PIR);
ctxt_sys_reg(ctxt, PIRE0_EL1) = read_sysreg_el1(SYS_PIRE0);
}
ctxt_sys_reg(ctxt, PAR_EL1) = read_sysreg_par();
ctxt_sys_reg(ctxt, TPIDR_EL1) = read_sysreg(tpidr_el1);
@ -138,8 +151,14 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
write_sysreg_el1(ctxt_sys_reg(ctxt, CPACR_EL1), SYS_CPACR);
write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR0_EL1), SYS_TTBR0);
write_sysreg_el1(ctxt_sys_reg(ctxt, TTBR1_EL1), SYS_TTBR1);
if (cpus_have_final_cap(ARM64_HAS_TCR2))
if (ctxt_has_tcrx(ctxt)) {
write_sysreg_el1(ctxt_sys_reg(ctxt, TCR2_EL1), SYS_TCR2);
if (ctxt_has_s1pie(ctxt)) {
write_sysreg_el1(ctxt_sys_reg(ctxt, PIR_EL1), SYS_PIR);
write_sysreg_el1(ctxt_sys_reg(ctxt, PIRE0_EL1), SYS_PIRE0);
}
}
write_sysreg_el1(ctxt_sys_reg(ctxt, ESR_EL1), SYS_ESR);
write_sysreg_el1(ctxt_sys_reg(ctxt, AFSR0_EL1), SYS_AFSR0);
write_sysreg_el1(ctxt_sys_reg(ctxt, AFSR1_EL1), SYS_AFSR1);
@ -149,10 +168,6 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
write_sysreg_el1(ctxt_sys_reg(ctxt, CONTEXTIDR_EL1), SYS_CONTEXTIDR);
write_sysreg_el1(ctxt_sys_reg(ctxt, AMAIR_EL1), SYS_AMAIR);
write_sysreg_el1(ctxt_sys_reg(ctxt, CNTKCTL_EL1), SYS_CNTKCTL);
if (ctxt_has_s1pie(ctxt)) {
write_sysreg_el1(ctxt_sys_reg(ctxt, PIR_EL1), SYS_PIR);
write_sysreg_el1(ctxt_sys_reg(ctxt, PIRE0_EL1), SYS_PIRE0);
}
write_sysreg(ctxt_sys_reg(ctxt, PAR_EL1), par_el1);
write_sysreg(ctxt_sys_reg(ctxt, TPIDR_EL1), tpidr_el1);


@ -9,7 +9,7 @@
#include <asm/kvm_host.h>
#define FFA_MIN_FUNC_NUM 0x60
#define FFA_MAX_FUNC_NUM 0x7F
#define FFA_MAX_FUNC_NUM 0xFF
int hyp_ffa_init(void *pages);
bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id);


@ -89,9 +89,9 @@ quiet_cmd_hyprel = HYPREL $@
quiet_cmd_hypcopy = HYPCOPY $@
cmd_hypcopy = $(OBJCOPY) --prefix-symbols=__kvm_nvhe_ $< $@
# Remove ftrace, Shadow Call Stack, and CFI CFLAGS.
# This is equivalent to the 'notrace', '__noscs', and '__nocfi' annotations.
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS) $(CC_FLAGS_CFI), $(KBUILD_CFLAGS))
# Remove ftrace and Shadow Call Stack CFLAGS.
# This is equivalent to the 'notrace' and '__noscs' annotations.
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_FTRACE) $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
# Starting from 13.0.0 llvm emits SHT_REL section '.llvm.call-graph-profile'
# when profile optimization is applied. gen-hyprel does not support SHT_REL and
# causes a build failure. Remove profile optimization flags.


@ -67,6 +67,9 @@ struct kvm_ffa_buffers {
*/
static struct kvm_ffa_buffers hyp_buffers;
static struct kvm_ffa_buffers host_buffers;
static u32 hyp_ffa_version;
static bool has_version_negotiated;
static hyp_spinlock_t version_lock;
static void ffa_to_smccc_error(struct arm_smccc_res *res, u64 ffa_errno)
{
@ -462,7 +465,7 @@ static __always_inline void do_ffa_mem_xfer(const u64 func_id,
memcpy(buf, host_buffers.tx, fraglen);
ep_mem_access = (void *)buf +
ffa_mem_desc_offset(buf, 0, FFA_VERSION_1_0);
ffa_mem_desc_offset(buf, 0, hyp_ffa_version);
offset = ep_mem_access->composite_off;
if (!offset || buf->ep_count != 1 || buf->sender_id != HOST_FFA_ID) {
ret = FFA_RET_INVALID_PARAMETERS;
@ -541,7 +544,7 @@ static void do_ffa_mem_reclaim(struct arm_smccc_res *res,
fraglen = res->a2;
ep_mem_access = (void *)buf +
ffa_mem_desc_offset(buf, 0, FFA_VERSION_1_0);
ffa_mem_desc_offset(buf, 0, hyp_ffa_version);
offset = ep_mem_access->composite_off;
/*
* We can trust the SPMD to get this right, but let's at least
@ -651,91 +654,10 @@ out_handled:
return true;
}
bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
static int hyp_ffa_post_init(void)
{
struct arm_smccc_res res;
/*
* There's no way we can tell what a non-standard SMC call might
* be up to. Ideally, we would terminate these here and return
* an error to the host, but sadly devices make use of custom
* firmware calls for things like power management, debugging,
* RNG access and crash reporting.
*
* Given that the architecture requires us to trust EL3 anyway,
* we forward unrecognised calls on under the assumption that
* the firmware doesn't expose a mechanism to access arbitrary
* non-secure memory. Short of a per-device table of SMCs, this
* is the best we can do.
*/
if (!is_ffa_call(func_id))
return false;
switch (func_id) {
case FFA_FEATURES:
if (!do_ffa_features(&res, host_ctxt))
return false;
goto out_handled;
/* Memory management */
case FFA_FN64_RXTX_MAP:
do_ffa_rxtx_map(&res, host_ctxt);
goto out_handled;
case FFA_RXTX_UNMAP:
do_ffa_rxtx_unmap(&res, host_ctxt);
goto out_handled;
case FFA_MEM_SHARE:
case FFA_FN64_MEM_SHARE:
do_ffa_mem_xfer(FFA_FN64_MEM_SHARE, &res, host_ctxt);
goto out_handled;
case FFA_MEM_RECLAIM:
do_ffa_mem_reclaim(&res, host_ctxt);
goto out_handled;
case FFA_MEM_LEND:
case FFA_FN64_MEM_LEND:
do_ffa_mem_xfer(FFA_FN64_MEM_LEND, &res, host_ctxt);
goto out_handled;
case FFA_MEM_FRAG_TX:
do_ffa_mem_frag_tx(&res, host_ctxt);
goto out_handled;
}
if (ffa_call_supported(func_id))
return false; /* Pass through */
ffa_to_smccc_error(&res, FFA_RET_NOT_SUPPORTED);
out_handled:
ffa_set_retval(host_ctxt, &res);
return true;
}
int hyp_ffa_init(void *pages)
{
struct arm_smccc_res res;
size_t min_rxtx_sz;
void *tx, *rx;
if (kvm_host_psci_config.smccc_version < ARM_SMCCC_VERSION_1_2)
return 0;
arm_smccc_1_1_smc(FFA_VERSION, FFA_VERSION_1_0, 0, 0, 0, 0, 0, 0, &res);
if (res.a0 == FFA_RET_NOT_SUPPORTED)
return 0;
/*
* Firmware returns the maximum supported version of the FF-A
* implementation. Check that the returned version is
* backwards-compatible with the hyp according to the rules in DEN0077A
* v1.1 REL0 13.2.1.
*
* Of course, things are never simple when dealing with firmware. v1.1
* broke ABI with v1.0 on several structures, which is itself
* incompatible with the aforementioned versioning scheme. The
* expectation is that v1.x implementations that do not support the v1.0
* ABI return NOT_SUPPORTED rather than a version number, according to
* DEN0077A v1.1 REL0 18.6.4.
*/
if (FFA_MAJOR_VERSION(res.a0) != 1)
return -EOPNOTSUPP;
struct arm_smccc_res res;
arm_smccc_1_1_smc(FFA_ID_GET, 0, 0, 0, 0, 0, 0, 0, &res);
if (res.a0 != FFA_SUCCESS)
@ -766,6 +688,199 @@ int hyp_ffa_init(void *pages)
if (min_rxtx_sz > PAGE_SIZE)
return -EOPNOTSUPP;
return 0;
}
static void do_ffa_version(struct arm_smccc_res *res,
struct kvm_cpu_context *ctxt)
{
DECLARE_REG(u32, ffa_req_version, ctxt, 1);
if (FFA_MAJOR_VERSION(ffa_req_version) != 1) {
res->a0 = FFA_RET_NOT_SUPPORTED;
return;
}
hyp_spin_lock(&version_lock);
if (has_version_negotiated) {
res->a0 = hyp_ffa_version;
goto unlock;
}
/*
* If the client driver tries to downgrade the version, we need to ask
* first if TEE supports it.
*/
if (FFA_MINOR_VERSION(ffa_req_version) < FFA_MINOR_VERSION(hyp_ffa_version)) {
arm_smccc_1_1_smc(FFA_VERSION, ffa_req_version, 0,
0, 0, 0, 0, 0,
res);
if (res->a0 == FFA_RET_NOT_SUPPORTED)
goto unlock;
hyp_ffa_version = ffa_req_version;
}
if (hyp_ffa_post_init())
res->a0 = FFA_RET_NOT_SUPPORTED;
else {
has_version_negotiated = true;
res->a0 = hyp_ffa_version;
}
unlock:
hyp_spin_unlock(&version_lock);
}
static void do_ffa_part_get(struct arm_smccc_res *res,
struct kvm_cpu_context *ctxt)
{
DECLARE_REG(u32, uuid0, ctxt, 1);
DECLARE_REG(u32, uuid1, ctxt, 2);
DECLARE_REG(u32, uuid2, ctxt, 3);
DECLARE_REG(u32, uuid3, ctxt, 4);
DECLARE_REG(u32, flags, ctxt, 5);
u32 count, partition_sz, copy_sz;
hyp_spin_lock(&host_buffers.lock);
if (!host_buffers.rx) {
ffa_to_smccc_res(res, FFA_RET_BUSY);
goto out_unlock;
}
arm_smccc_1_1_smc(FFA_PARTITION_INFO_GET, uuid0, uuid1,
uuid2, uuid3, flags, 0, 0,
res);
if (res->a0 != FFA_SUCCESS)
goto out_unlock;
count = res->a2;
if (!count)
goto out_unlock;
if (hyp_ffa_version > FFA_VERSION_1_0) {
/* Get the number of partitions deployed in the system */
if (flags & 0x1)
goto out_unlock;
partition_sz = res->a3;
} else {
/* FFA_VERSION_1_0 lacks the size in the response */
partition_sz = FFA_1_0_PARTITON_INFO_SZ;
}
copy_sz = partition_sz * count;
if (copy_sz > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
ffa_to_smccc_res(res, FFA_RET_ABORTED);
goto out_unlock;
}
memcpy(host_buffers.rx, hyp_buffers.rx, copy_sz);
out_unlock:
hyp_spin_unlock(&host_buffers.lock);
}
bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
{
struct arm_smccc_res res;
/*
* There's no way we can tell what a non-standard SMC call might
* be up to. Ideally, we would terminate these here and return
* an error to the host, but sadly devices make use of custom
* firmware calls for things like power management, debugging,
* RNG access and crash reporting.
*
* Given that the architecture requires us to trust EL3 anyway,
* we forward unrecognised calls on under the assumption that
* the firmware doesn't expose a mechanism to access arbitrary
* non-secure memory. Short of a per-device table of SMCs, this
* is the best we can do.
*/
if (!is_ffa_call(func_id))
return false;
if (!has_version_negotiated && func_id != FFA_VERSION) {
ffa_to_smccc_error(&res, FFA_RET_INVALID_PARAMETERS);
goto out_handled;
}
switch (func_id) {
case FFA_FEATURES:
if (!do_ffa_features(&res, host_ctxt))
return false;
goto out_handled;
/* Memory management */
case FFA_FN64_RXTX_MAP:
do_ffa_rxtx_map(&res, host_ctxt);
goto out_handled;
case FFA_RXTX_UNMAP:
do_ffa_rxtx_unmap(&res, host_ctxt);
goto out_handled;
case FFA_MEM_SHARE:
case FFA_FN64_MEM_SHARE:
do_ffa_mem_xfer(FFA_FN64_MEM_SHARE, &res, host_ctxt);
goto out_handled;
case FFA_MEM_RECLAIM:
do_ffa_mem_reclaim(&res, host_ctxt);
goto out_handled;
case FFA_MEM_LEND:
case FFA_FN64_MEM_LEND:
do_ffa_mem_xfer(FFA_FN64_MEM_LEND, &res, host_ctxt);
goto out_handled;
case FFA_MEM_FRAG_TX:
do_ffa_mem_frag_tx(&res, host_ctxt);
goto out_handled;
case FFA_VERSION:
do_ffa_version(&res, host_ctxt);
goto out_handled;
case FFA_PARTITION_INFO_GET:
do_ffa_part_get(&res, host_ctxt);
goto out_handled;
}
if (ffa_call_supported(func_id))
return false; /* Pass through */
ffa_to_smccc_error(&res, FFA_RET_NOT_SUPPORTED);
out_handled:
ffa_set_retval(host_ctxt, &res);
return true;
}
int hyp_ffa_init(void *pages)
{
struct arm_smccc_res res;
void *tx, *rx;
if (kvm_host_psci_config.smccc_version < ARM_SMCCC_VERSION_1_2)
return 0;
arm_smccc_1_1_smc(FFA_VERSION, FFA_VERSION_1_1, 0, 0, 0, 0, 0, 0, &res);
if (res.a0 == FFA_RET_NOT_SUPPORTED)
return 0;
/*
* Firmware returns the maximum supported version of the FF-A
* implementation. Check that the returned version is
* backwards-compatible with the hyp according to the rules in DEN0077A
* v1.1 REL0 13.2.1.
*
* Of course, things are never simple when dealing with firmware. v1.1
* broke ABI with v1.0 on several structures, which is itself
* incompatible with the aforementioned versioning scheme. The
* expectation is that v1.x implementations that do not support the v1.0
* ABI return NOT_SUPPORTED rather than a version number, according to
* DEN0077A v1.1 REL0 18.6.4.
*/
if (FFA_MAJOR_VERSION(res.a0) != 1)
return -EOPNOTSUPP;
if (FFA_MINOR_VERSION(res.a0) < FFA_MINOR_VERSION(FFA_VERSION_1_1))
hyp_ffa_version = res.a0;
else
hyp_ffa_version = FFA_VERSION_1_1;
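To make the capping concrete (firmware versions assumed for illustration): if the firmware reports v1.2, its minor version is not below that of FFA_VERSION_1_1, so hyp_ffa_version is capped at v1.1; if it reports v1.0, hyp_ffa_version becomes v1.0, i.e. the hypervisor never claims a version newer than 1.1 or newer than what the firmware actually implements.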
tx = pages;
pages += KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE;
rx = pages;
@ -787,5 +902,6 @@ int hyp_ffa_init(void *pages)
.lock = __HYP_SPIN_LOCK_UNLOCKED,
};
version_lock = __HYP_SPIN_LOCK_UNLOCKED;
return 0;
}


@ -50,6 +50,9 @@
#ifndef R_AARCH64_ABS64
#define R_AARCH64_ABS64 257
#endif
#ifndef R_AARCH64_ABS32
#define R_AARCH64_ABS32 258
#endif
#ifndef R_AARCH64_PREL64
#define R_AARCH64_PREL64 260
#endif
@ -383,6 +386,9 @@ static void emit_rela_section(Elf64_Shdr *sh_rela)
case R_AARCH64_ABS64:
emit_rela_abs64(rela, sh_orig_name);
break;
/* Allow 32-bit absolute relocation, for kCFI type hashes. */
case R_AARCH64_ABS32:
break;
/* Allow position-relative data relocations. */
case R_AARCH64_PREL64:
case R_AARCH64_PREL32:


@ -197,12 +197,6 @@ SYM_FUNC_END(__host_hvc)
sub x0, sp, x0 // x0'' = sp' - x0' = (sp + x0) - sp = x0
sub sp, sp, x0 // sp'' = sp' - x0 = (sp + x0) - x0 = sp
/* If a guest is loaded, panic out of it. */
stp x0, x1, [sp, #-16]!
get_loaded_vcpu x0, x1
cbnz x0, __guest_exit_panic
add sp, sp, #16
/*
* The panic may not be clean if the exception is taken before the host
* context has been saved by __host_exit or after the hyp context has


@ -5,6 +5,7 @@
*/
#include <linux/arm-smccc.h>
#include <linux/cfi_types.h>
#include <linux/linkage.h>
#include <asm/alternative.h>
@ -265,33 +266,38 @@ alternative_else_nop_endif
SYM_CODE_END(__kvm_handle_stub_hvc)
SYM_FUNC_START(__pkvm_init_switch_pgd)
/*
* void __pkvm_init_switch_pgd(phys_addr_t pgd, unsigned long sp,
* void (*fn)(void));
*
* SYM_TYPED_FUNC_START() allows C to call this ID-mapped function indirectly
* using a physical pointer without triggering a kCFI failure.
*/
SYM_TYPED_FUNC_START(__pkvm_init_switch_pgd)
/* Turn the MMU off */
pre_disable_mmu_workaround
mrs x2, sctlr_el2
bic x3, x2, #SCTLR_ELx_M
msr sctlr_el2, x3
mrs x3, sctlr_el2
bic x4, x3, #SCTLR_ELx_M
msr sctlr_el2, x4
isb
tlbi alle2
/* Install the new pgtables */
ldr x3, [x0, #NVHE_INIT_PGD_PA]
phys_to_ttbr x4, x3
phys_to_ttbr x5, x0
alternative_if ARM64_HAS_CNP
orr x4, x4, #TTBR_CNP_BIT
orr x5, x5, #TTBR_CNP_BIT
alternative_else_nop_endif
msr ttbr0_el2, x4
msr ttbr0_el2, x5
/* Set the new stack pointer */
ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
mov sp, x0
mov sp, x1
/* And turn the MMU back on! */
dsb nsh
isb
set_sctlr_el2 x2
ret x1
set_sctlr_el2 x3
ret x2
SYM_FUNC_END(__pkvm_init_switch_pgd)
.popsection


@ -339,7 +339,7 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
{
struct kvm_nvhe_init_params *params;
void *virt = hyp_phys_to_virt(phys);
void (*fn)(phys_addr_t params_pa, void *finalize_fn_va);
typeof(__pkvm_init_switch_pgd) *fn;
int ret;
BUG_ON(kvm_check_pvm_sysreg_table());
@ -363,7 +363,7 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
/* Jump in the idmap page to switch to the new page-tables */
params = this_cpu_ptr(&kvm_init_params);
fn = (typeof(fn))__hyp_pa(__pkvm_init_switch_pgd);
fn(__hyp_pa(params), __pkvm_init_finalise);
fn(params->pgd_pa, params->stack_hyp_va, __pkvm_init_finalise);
unreachable();
}


@ -65,6 +65,77 @@ static u64 __compute_hcr(struct kvm_vcpu *vcpu)
return hcr | (__vcpu_sys_reg(vcpu, HCR_EL2) & ~NV_HCR_GUEST_EXCLUDE);
}
static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
{
u64 cptr;
/*
* With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
* CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
* except for some missing controls, such as TAM.
* In this case, CPTR_EL2.TAM has the same position with or without
* VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
* shift value for trapping the AMU accesses.
*/
u64 val = CPACR_ELx_TTA | CPTR_EL2_TAM;
if (guest_owns_fp_regs()) {
val |= CPACR_ELx_FPEN;
if (vcpu_has_sve(vcpu))
val |= CPACR_ELx_ZEN;
} else {
__activate_traps_fpsimd32(vcpu);
}
if (!vcpu_has_nv(vcpu))
goto write;
/*
* The architecture is a bit crap (what a surprise): an EL2 guest
* writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA,
* as they are RES0 in the guest's view. To work around it, trap the
* sucker using the very same bit it can't set...
*/
if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu))
val |= CPTR_EL2_TCPAC;
/*
* Layer the guest hypervisor's trap configuration on top of our own if
* we're in a nested context.
*/
if (is_hyp_ctxt(vcpu))
goto write;
cptr = vcpu_sanitised_cptr_el2(vcpu);
/*
* Pay attention, there's some interesting detail here.
*
* The CPTR_EL2.xEN fields are 2 bits wide, although there are only two
* meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest):
*
* - CPTR_EL2.xEN = x0, traps are enabled
* - CPTR_EL2.xEN = x1, traps are disabled
*
* In other words, bit[0] determines if guest accesses trap or not. In
* the interest of simplicity, clear the entire field if the guest
* hypervisor has traps enabled to dispel any illusion of something more
* complicated taking place.
*/
if (!(SYS_FIELD_GET(CPACR_ELx, FPEN, cptr) & BIT(0)))
val &= ~CPACR_ELx_FPEN;
if (!(SYS_FIELD_GET(CPACR_ELx, ZEN, cptr) & BIT(0)))
val &= ~CPACR_ELx_ZEN;
if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
val |= cptr & CPACR_ELx_E0POE;
val |= cptr & CPTR_EL2_TCPAC;
write:
write_sysreg(val, cpacr_el1);
}
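As an illustrative walk-through of the xEN handling above (guest register values assumed): if the guest hypervisor programs an effective CPACR_ELx.FPEN of 0b00 or 0b10, bit[0] is clear, CPACR_ELx_FPEN is stripped from val and the L2 guest's FP/ASIMD accesses trap so they can be forwarded; with FPEN reading 0b01 or 0b11 the host-computed setting is left in place. The same logic applies to ZEN for SVE.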
static void __activate_traps(struct kvm_vcpu *vcpu)
{
u64 val;
@ -91,30 +162,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
}
}
val = read_sysreg(cpacr_el1);
val |= CPACR_ELx_TTA;
val &= ~(CPACR_ELx_ZEN | CPACR_ELx_SMEN);
/*
* With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
* CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
* except for some missing controls, such as TAM.
* In this case, CPTR_EL2.TAM has the same position with or without
* VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
* shift value for trapping the AMU accesses.
*/
val |= CPTR_EL2_TAM;
if (guest_owns_fp_regs()) {
if (vcpu_has_sve(vcpu))
val |= CPACR_ELx_ZEN;
} else {
val &= ~CPACR_ELx_FPEN;
__activate_traps_fpsimd32(vcpu);
}
write_sysreg(val, cpacr_el1);
__activate_cptr_traps(vcpu);
write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el1);
}
@ -266,10 +314,111 @@ static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
__fpsimd_save_state(*host_data_ptr(fpsimd_state));
}
static bool kvm_hyp_handle_tlbi_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
{
int ret = -EINVAL;
u32 instr;
u64 val;
/*
* Ideally, we would never trap on EL2 S1 TLB invalidations using
* the EL1 instructions when the guest's HCR_EL2.{E2H,TGE}=={1,1}.
* But "thanks" to FEAT_NV2, we don't trap writes to HCR_EL2,
* meaning that we can't track changes to the virtual TGE bit. So we
* have to leave HCR_EL2.TTLB set on the host. Oopsie...
*
* Try and handle these invalidation as quickly as possible, without
* fully exiting. Note that we don't need to consider any forwarding
* here, as having E2H+TGE set is the very definition of being
* InHost.
*
* For the lesser hypervisors out there that have failed to get on
* with the VHE program, we can also handle the nVHE style of EL2
* invalidation.
*/
if (!(is_hyp_ctxt(vcpu)))
return false;
instr = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
val = vcpu_get_reg(vcpu, kvm_vcpu_sys_get_rt(vcpu));
if ((kvm_supported_tlbi_s1e1_op(vcpu, instr) &&
vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)) ||
kvm_supported_tlbi_s1e2_op (vcpu, instr))
ret = __kvm_tlbi_s1e2(NULL, val, instr);
if (ret)
return false;
__kvm_skip_instr(vcpu);
return true;
}
static bool kvm_hyp_handle_cpacr_el1(struct kvm_vcpu *vcpu, u64 *exit_code)
{
u64 esr = kvm_vcpu_get_esr(vcpu);
int rt;
if (!is_hyp_ctxt(vcpu) || esr_sys64_to_sysreg(esr) != SYS_CPACR_EL1)
return false;
rt = kvm_vcpu_sys_get_rt(vcpu);
if ((esr & ESR_ELx_SYS64_ISS_DIR_MASK) == ESR_ELx_SYS64_ISS_DIR_READ) {
vcpu_set_reg(vcpu, rt, __vcpu_sys_reg(vcpu, CPTR_EL2));
} else {
vcpu_write_sys_reg(vcpu, vcpu_get_reg(vcpu, rt), CPTR_EL2);
__activate_cptr_traps(vcpu);
}
__kvm_skip_instr(vcpu);
return true;
}
static bool kvm_hyp_handle_zcr_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
{
u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
if (!vcpu_has_nv(vcpu))
return false;
if (sysreg != SYS_ZCR_EL2)
return false;
if (guest_owns_fp_regs())
return false;
/*
* ZCR_EL2 traps are handled in the slow path, with the expectation
* that the guest's FP context has already been loaded onto the CPU.
*
* Load the guest's FP context and unconditionally forward to the
* slow path for handling (i.e. return false).
*/
kvm_hyp_handle_fpsimd(vcpu, exit_code);
return false;
}
static bool kvm_hyp_handle_sysreg_vhe(struct kvm_vcpu *vcpu, u64 *exit_code)
{
if (kvm_hyp_handle_tlbi_el2(vcpu, exit_code))
return true;
if (kvm_hyp_handle_cpacr_el1(vcpu, exit_code))
return true;
if (kvm_hyp_handle_zcr_el2(vcpu, exit_code))
return true;
return kvm_hyp_handle_sysreg(vcpu, exit_code);
}
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
[ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg,
[ESR_ELx_EC_SYS64] = kvm_hyp_handle_sysreg_vhe,
[ESR_ELx_EC_SVE] = kvm_hyp_handle_fpsimd,
[ESR_ELx_EC_FP_ASIMD] = kvm_hyp_handle_fpsimd,
[ESR_ELx_EC_IABT_LOW] = kvm_hyp_handle_iabt_low,
@ -388,7 +537,7 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
return ret;
}
static void __hyp_call_panic(u64 spsr, u64 elr, u64 par)
static void __noreturn __hyp_call_panic(u64 spsr, u64 elr, u64 par)
{
struct kvm_cpu_context *host_ctxt;
struct kvm_vcpu *vcpu;
@ -413,7 +562,6 @@ void __noreturn hyp_panic(void)
u64 par = read_sysreg_par();
__hyp_call_panic(spsr, elr, par);
unreachable();
}
asmlinkage void kvm_unexpected_el2_exception(void)


@ -219,3 +219,150 @@ void __kvm_flush_vm_context(void)
__tlbi(alle1is);
dsb(ish);
}
/*
* TLB invalidation emulation for NV. For any given instruction, we
* perform the following transformations:
*
* - a TLBI targeting EL2 S1 is remapped to EL1 S1
* - a non-shareable TLBI is upgraded to being inner-shareable
* - an outer-shareable TLBI is also mapped to inner-shareable
* - an nXS TLBI is upgraded to XS
*/
int __kvm_tlbi_s1e2(struct kvm_s2_mmu *mmu, u64 va, u64 sys_encoding)
{
struct tlb_inv_context cxt;
int ret = 0;
/*
* The guest will have provided its own DSB ISHST before trapping.
* If it hasn't, that's its own problem, and we won't paper over it
* (plus, there is plenty of extra synchronisation before we even
* get here...).
*/
if (mmu)
enter_vmid_context(mmu, &cxt);
switch (sys_encoding) {
case OP_TLBI_ALLE2:
case OP_TLBI_ALLE2IS:
case OP_TLBI_ALLE2OS:
case OP_TLBI_VMALLE1:
case OP_TLBI_VMALLE1IS:
case OP_TLBI_VMALLE1OS:
case OP_TLBI_ALLE2NXS:
case OP_TLBI_ALLE2ISNXS:
case OP_TLBI_ALLE2OSNXS:
case OP_TLBI_VMALLE1NXS:
case OP_TLBI_VMALLE1ISNXS:
case OP_TLBI_VMALLE1OSNXS:
__tlbi(vmalle1is);
break;
case OP_TLBI_VAE2:
case OP_TLBI_VAE2IS:
case OP_TLBI_VAE2OS:
case OP_TLBI_VAE1:
case OP_TLBI_VAE1IS:
case OP_TLBI_VAE1OS:
case OP_TLBI_VAE2NXS:
case OP_TLBI_VAE2ISNXS:
case OP_TLBI_VAE2OSNXS:
case OP_TLBI_VAE1NXS:
case OP_TLBI_VAE1ISNXS:
case OP_TLBI_VAE1OSNXS:
__tlbi(vae1is, va);
break;
case OP_TLBI_VALE2:
case OP_TLBI_VALE2IS:
case OP_TLBI_VALE2OS:
case OP_TLBI_VALE1:
case OP_TLBI_VALE1IS:
case OP_TLBI_VALE1OS:
case OP_TLBI_VALE2NXS:
case OP_TLBI_VALE2ISNXS:
case OP_TLBI_VALE2OSNXS:
case OP_TLBI_VALE1NXS:
case OP_TLBI_VALE1ISNXS:
case OP_TLBI_VALE1OSNXS:
__tlbi(vale1is, va);
break;
case OP_TLBI_ASIDE1:
case OP_TLBI_ASIDE1IS:
case OP_TLBI_ASIDE1OS:
case OP_TLBI_ASIDE1NXS:
case OP_TLBI_ASIDE1ISNXS:
case OP_TLBI_ASIDE1OSNXS:
__tlbi(aside1is, va);
break;
case OP_TLBI_VAAE1:
case OP_TLBI_VAAE1IS:
case OP_TLBI_VAAE1OS:
case OP_TLBI_VAAE1NXS:
case OP_TLBI_VAAE1ISNXS:
case OP_TLBI_VAAE1OSNXS:
__tlbi(vaae1is, va);
break;
case OP_TLBI_VAALE1:
case OP_TLBI_VAALE1IS:
case OP_TLBI_VAALE1OS:
case OP_TLBI_VAALE1NXS:
case OP_TLBI_VAALE1ISNXS:
case OP_TLBI_VAALE1OSNXS:
__tlbi(vaale1is, va);
break;
case OP_TLBI_RVAE2:
case OP_TLBI_RVAE2IS:
case OP_TLBI_RVAE2OS:
case OP_TLBI_RVAE1:
case OP_TLBI_RVAE1IS:
case OP_TLBI_RVAE1OS:
case OP_TLBI_RVAE2NXS:
case OP_TLBI_RVAE2ISNXS:
case OP_TLBI_RVAE2OSNXS:
case OP_TLBI_RVAE1NXS:
case OP_TLBI_RVAE1ISNXS:
case OP_TLBI_RVAE1OSNXS:
__tlbi(rvae1is, va);
break;
case OP_TLBI_RVALE2:
case OP_TLBI_RVALE2IS:
case OP_TLBI_RVALE2OS:
case OP_TLBI_RVALE1:
case OP_TLBI_RVALE1IS:
case OP_TLBI_RVALE1OS:
case OP_TLBI_RVALE2NXS:
case OP_TLBI_RVALE2ISNXS:
case OP_TLBI_RVALE2OSNXS:
case OP_TLBI_RVALE1NXS:
case OP_TLBI_RVALE1ISNXS:
case OP_TLBI_RVALE1OSNXS:
__tlbi(rvale1is, va);
break;
case OP_TLBI_RVAAE1:
case OP_TLBI_RVAAE1IS:
case OP_TLBI_RVAAE1OS:
case OP_TLBI_RVAAE1NXS:
case OP_TLBI_RVAAE1ISNXS:
case OP_TLBI_RVAAE1OSNXS:
__tlbi(rvaae1is, va);
break;
case OP_TLBI_RVAALE1:
case OP_TLBI_RVAALE1IS:
case OP_TLBI_RVAALE1OS:
case OP_TLBI_RVAALE1NXS:
case OP_TLBI_RVAALE1ISNXS:
case OP_TLBI_RVAALE1OSNXS:
__tlbi(rvaale1is, va);
break;
default:
ret = -EINVAL;
}
dsb(ish);
isb();
if (mmu)
exit_vmid_context(&cxt);
return ret;
}
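As a concrete instance of the rules above (encodings taken from the switch): a guest hypervisor issuing TLBI VAE2 or TLBI VAE2IS on a virtual address falls into the OP_TLBI_VAE2* cases and is emulated with __tlbi(vae1is, va), i.e. remapped from EL2 S1 to EL1 S1 and upgraded to inner-shareable, with the trailing dsb(ish)/isb() providing the completion guarantees.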


@ -328,18 +328,23 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
may_block));
}
static void unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size)
{
__unmap_stage2_range(mmu, start, size, true);
}
void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
{
stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
}
static void stage2_flush_memslot(struct kvm *kvm,
struct kvm_memory_slot *memslot)
{
phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
phys_addr_t end = addr + PAGE_SIZE * memslot->npages;
stage2_apply_range_resched(&kvm->arch.mmu, addr, end, kvm_pgtable_stage2_flush);
kvm_stage2_flush_range(&kvm->arch.mmu, addr, end);
}
/**
@ -362,6 +367,8 @@ static void stage2_flush_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, bkt, slots)
stage2_flush_memslot(kvm, memslot);
kvm_nested_s2_flush(kvm);
write_unlock(&kvm->mmu_lock);
srcu_read_unlock(&kvm->srcu, idx);
}
@ -855,21 +862,9 @@ static struct kvm_pgtable_mm_ops kvm_s2_mm_ops = {
.icache_inval_pou = invalidate_icache_guest_page,
};
/**
* kvm_init_stage2_mmu - Initialise a S2 MMU structure
* @kvm: The pointer to the KVM structure
* @mmu: The pointer to the s2 MMU structure
* @type: The machine type of the virtual machine
*
* Allocates only the stage-2 HW PGD level table(s).
* Note we don't need locking here as this is only called when the VM is
* created, which can only be done once.
*/
int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
static int kvm_init_ipa_range(struct kvm_s2_mmu *mmu, unsigned long type)
{
u32 kvm_ipa_limit = get_kvm_ipa_limit();
int cpu, err;
struct kvm_pgtable *pgt;
u64 mmfr0, mmfr1;
u32 phys_shift;
@ -896,11 +891,51 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
mmu->vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift);
return 0;
}
/**
* kvm_init_stage2_mmu - Initialise a S2 MMU structure
* @kvm: The pointer to the KVM structure
* @mmu: The pointer to the s2 MMU structure
* @type: The machine type of the virtual machine
*
* Allocates only the stage-2 HW PGD level table(s).
* Note we don't need locking here as this is only called in two cases:
*
* - when the VM is created, which can't race against anything
*
* - when secondary kvm_s2_mmu structures are initialised for NV
* guests, and the caller must hold kvm->lock as this is called on a
* per-vcpu basis.
*/
int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long type)
{
int cpu, err;
struct kvm_pgtable *pgt;
/*
* If we already have our page tables in place, and the
* MMU context is the canonical one, we have a bug somewhere,
* as this is only supposed to ever happen once per VM.
*
* Otherwise, we're building nested page tables, and that's
* probably because userspace called KVM_ARM_VCPU_INIT more
* than once on the same vcpu. Since that's actually legal,
* don't kick up a fuss and leave gracefully.
*/
if (mmu->pgt != NULL) {
if (kvm_is_nested_s2_mmu(kvm, mmu))
return 0;
kvm_err("kvm_arch already initialized?\n");
return -EINVAL;
}
err = kvm_init_ipa_range(mmu, type);
if (err)
return err;
pgt = kzalloc(sizeof(*pgt), GFP_KERNEL_ACCOUNT);
if (!pgt)
return -ENOMEM;
@ -925,6 +960,10 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
mmu->pgt = pgt;
mmu->pgd_phys = __pa(pgt->pgd);
if (kvm_is_nested_s2_mmu(kvm, mmu))
kvm_init_nested_s2_mmu(mmu);
return 0;
out_destroy_pgtable:
@ -976,7 +1015,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
if (!(vma->vm_flags & VM_PFNMAP)) {
gpa_t gpa = addr + (vm_start - memslot->userspace_addr);
unmap_stage2_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, vm_end - vm_start);
}
hva = vm_end;
} while (hva < reg_end);
@ -1003,6 +1042,8 @@ void stage2_unmap_vm(struct kvm *kvm)
kvm_for_each_memslot(memslot, bkt, slots)
stage2_unmap_memslot(kvm, memslot);
kvm_nested_s2_unmap(kvm);
write_unlock(&kvm->mmu_lock);
mmap_read_unlock(current->mm);
srcu_read_unlock(&kvm->srcu, idx);
@ -1102,12 +1143,12 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
}
/**
* stage2_wp_range() - write protect stage2 memory region range
* kvm_stage2_wp_range() - write protect stage2 memory region range
* @mmu: The KVM stage-2 MMU pointer
* @addr: Start address of range
* @end: End address of range
*/
static void stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
{
stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect);
}
@ -1138,7 +1179,8 @@ static void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot)
end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
write_lock(&kvm->mmu_lock);
stage2_wp_range(&kvm->arch.mmu, start, end);
kvm_stage2_wp_range(&kvm->arch.mmu, start, end);
kvm_nested_s2_wp(kvm);
write_unlock(&kvm->mmu_lock);
kvm_flush_remote_tlbs_memslot(kvm, memslot);
}
@ -1192,7 +1234,7 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
lockdep_assert_held_write(&kvm->mmu_lock);
stage2_wp_range(&kvm->arch.mmu, start, end);
kvm_stage2_wp_range(&kvm->arch.mmu, start, end);
/*
* Eager-splitting is done when manual-protect is set. We
@ -1204,6 +1246,8 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
*/
if (kvm_dirty_log_manual_protect_and_init_set(kvm))
kvm_mmu_split_huge_pages(kvm, start, end);
kvm_nested_s2_wp(kvm);
}
static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
@ -1375,6 +1419,7 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
}
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_s2_trans *nested,
struct kvm_memory_slot *memslot, unsigned long hva,
bool fault_is_perm)
{
@ -1383,6 +1428,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
bool exec_fault, mte_allowed;
bool device = false, vfio_allow_any_uc = false;
unsigned long mmu_seq;
phys_addr_t ipa = fault_ipa;
struct kvm *kvm = vcpu->kvm;
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
struct vm_area_struct *vma;
@ -1466,10 +1512,38 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
vma_pagesize = 1UL << vma_shift;
if (nested) {
unsigned long max_map_size;
max_map_size = force_pte ? PAGE_SIZE : PUD_SIZE;
ipa = kvm_s2_trans_output(nested);
/*
* If we're about to create a shadow stage 2 entry, then we
* can only create a block mapping if the guest stage 2 page
* table uses at least as big a mapping.
*/
max_map_size = min(kvm_s2_trans_size(nested), max_map_size);
/*
* Be careful that if the mapping size falls between
* two host sizes, take the smallest of the two.
*/
if (max_map_size >= PMD_SIZE && max_map_size < PUD_SIZE)
max_map_size = PMD_SIZE;
else if (max_map_size >= PAGE_SIZE && max_map_size < PMD_SIZE)
max_map_size = PAGE_SIZE;
force_pte = (max_map_size == PAGE_SIZE);
vma_pagesize = min(vma_pagesize, (long)max_map_size);
}
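A worked example of the clamping (a 4K host granule is assumed): a 32MB guest stage-2 block, as produced by a 16K-granule level-2 mapping, falls between the host's 2MB PMD and 1GB PUD sizes, so max_map_size is rounded down to PMD_SIZE and the shadow entry is created at 2MB; a guest mapping smaller than 2MB drives max_map_size down to PAGE_SIZE and force_pte takes effect.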
if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
fault_ipa &= ~(vma_pagesize - 1);
gfn = fault_ipa >> PAGE_SHIFT;
gfn = ipa >> PAGE_SHIFT;
mte_allowed = kvm_vma_mte_allowed(vma);
vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
@ -1520,6 +1594,25 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault && device)
return -ENOEXEC;
/*
* Potentially reduce shadow S2 permissions to match the guest's own
* S2. For exec faults, we'd only reach this point if the guest
* actually allowed it (see kvm_s2_handle_perm_fault).
*
* Also encode the level of the original translation in the SW bits
* of the leaf entry as a proxy for the span of that translation.
* This will be retrieved on TLB invalidation from the guest and
* used to limit the invalidation scope if a TTL hint or a range
* isn't provided.
*/
if (nested) {
writable &= kvm_s2_trans_writable(nested);
if (!kvm_s2_trans_readable(nested))
prot &= ~KVM_PGTABLE_PROT_R;
prot |= kvm_encode_nested_level(nested);
}
read_lock(&kvm->mmu_lock);
pgt = vcpu->arch.hw_mmu->pgt;
if (mmu_invalidate_retry(kvm, mmu_seq)) {
@ -1566,7 +1659,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
prot |= KVM_PGTABLE_PROT_NORMAL_NC;
else
prot |= KVM_PGTABLE_PROT_DEVICE;
} else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC)) {
} else if (cpus_have_final_cap(ARM64_HAS_CACHE_DIC) &&
(!nested || kvm_s2_trans_executable(nested))) {
prot |= KVM_PGTABLE_PROT_X;
}
@ -1575,14 +1669,21 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* permissions only if vma_pagesize equals fault_granule. Otherwise,
* kvm_pgtable_stage2_map() should be called to change block size.
*/
if (fault_is_perm && vma_pagesize == fault_granule)
if (fault_is_perm && vma_pagesize == fault_granule) {
/*
* Drop the SW bits in favour of those stored in the
* PTE, which will be preserved.
*/
prot &= ~KVM_NV_GUEST_MAP_SZ;
ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
else
} else {
ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
__pfn_to_phys(pfn), prot,
memcache,
KVM_PGTABLE_WALK_HANDLE_FAULT |
KVM_PGTABLE_WALK_SHARED);
}
out_unlock:
read_unlock(&kvm->mmu_lock);
@ -1626,8 +1727,10 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
*/
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
{
struct kvm_s2_trans nested_trans, *nested = NULL;
unsigned long esr;
phys_addr_t fault_ipa;
phys_addr_t fault_ipa; /* The address we faulted on */
phys_addr_t ipa; /* Always the IPA in the L1 guest phys space */
struct kvm_memory_slot *memslot;
unsigned long hva;
bool is_iabt, write_fault, writable;
@ -1636,7 +1739,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
esr = kvm_vcpu_get_esr(vcpu);
fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
if (esr_fsc_is_translation_fault(esr)) {
@ -1686,7 +1789,42 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
idx = srcu_read_lock(&vcpu->kvm->srcu);
gfn = fault_ipa >> PAGE_SHIFT;
/*
* We may have faulted on a shadow stage 2 page table if we are
* running a nested guest. In this case, we have to resolve the L2
* IPA to the L1 IPA first, before knowing what kind of memory should
* back the L1 IPA.
*
* If the shadow stage 2 page table walk faults, then we simply inject
* this to the guest and carry on.
*
* If there are no shadow S2 PTs because S2 is disabled, there is
* nothing to walk and we treat it as a 1:1 before going through the
* canonical translation.
*/
if (kvm_is_nested_s2_mmu(vcpu->kvm,vcpu->arch.hw_mmu) &&
vcpu->arch.hw_mmu->nested_stage2_enabled) {
u32 esr;
ret = kvm_walk_nested_s2(vcpu, fault_ipa, &nested_trans);
if (ret) {
esr = kvm_s2_trans_esr(&nested_trans);
kvm_inject_s2_fault(vcpu, esr);
goto out_unlock;
}
ret = kvm_s2_handle_perm_fault(vcpu, &nested_trans);
if (ret) {
esr = kvm_s2_trans_esr(&nested_trans);
kvm_inject_s2_fault(vcpu, esr);
goto out_unlock;
}
ipa = kvm_s2_trans_output(&nested_trans);
nested = &nested_trans;
}
gfn = ipa >> PAGE_SHIFT;
memslot = gfn_to_memslot(vcpu->kvm, gfn);
hva = gfn_to_hva_memslot_prot(memslot, gfn, &writable);
write_fault = kvm_is_write_fault(vcpu);
@ -1730,13 +1868,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
* faulting VA. This is always 12 bits, irrespective
* of the page size.
*/
fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
ret = io_mem_abort(vcpu, fault_ipa);
ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
ret = io_mem_abort(vcpu, ipa);
goto out_unlock;
}
/* Userspace should not be able to register out-of-bounds IPAs */
VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->arch.hw_mmu));
VM_BUG_ON(ipa >= kvm_phys_size(vcpu->arch.hw_mmu));
if (esr_fsc_is_access_flag_fault(esr)) {
handle_access_fault(vcpu, fault_ipa);
@ -1744,7 +1882,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
goto out_unlock;
}
ret = user_mem_abort(vcpu, fault_ipa, memslot, hva,
ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
esr_fsc_is_permission_fault(esr));
if (ret == 0)
ret = 1;
@ -1767,6 +1905,7 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
(range->end - range->start) << PAGE_SHIFT,
range->may_block);
kvm_nested_s2_unmap(kvm);
return false;
}
@ -1780,6 +1919,10 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt,
range->start << PAGE_SHIFT,
size, true);
/*
* TODO: Handle nested_mmu structures here using the reverse mapping in
* a later version of the patch series.
*/
}
bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
@ -2022,11 +2165,6 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
{
}
void kvm_arch_flush_shadow_all(struct kvm *kvm)
{
kvm_uninit_stage2_mmu(kvm);
}
void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot)
{
@ -2034,7 +2172,8 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
phys_addr_t size = slot->npages << PAGE_SHIFT;
write_lock(&kvm->mmu_lock);
unmap_stage2_range(&kvm->arch.mmu, gpa, size);
kvm_stage2_unmap_range(&kvm->arch.mmu, gpa, size);
kvm_nested_s2_unmap(kvm);
write_unlock(&kvm->mmu_lock);
}

(One file's diff suppressed because it is too large to display.)


@ -53,7 +53,7 @@ static u32 __kvm_pmu_event_mask(unsigned int pmuver)
static u32 kvm_pmu_event_mask(struct kvm *kvm)
{
u64 dfr0 = IDREG(kvm, SYS_ID_AA64DFR0_EL1);
u64 dfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64DFR0_EL1);
u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);
return __kvm_pmu_event_mask(pmuver);


@ -268,6 +268,12 @@ void kvm_reset_vcpu(struct kvm_vcpu *vcpu)
preempt_enable();
}
u32 kvm_get_pa_bits(struct kvm *kvm)
{
/* Fixed limit until we can configure ID_AA64MMFR0.PARange */
return kvm_ipa_limit;
}
u32 get_kvm_ipa_limit(void)
{
return kvm_ipa_limit;


@ -121,6 +121,7 @@ static bool get_el2_to_el1_mapping(unsigned int reg,
MAPPED_EL2_SYSREG(AMAIR_EL2, AMAIR_EL1, NULL );
MAPPED_EL2_SYSREG(ELR_EL2, ELR_EL1, NULL );
MAPPED_EL2_SYSREG(SPSR_EL2, SPSR_EL1, NULL );
MAPPED_EL2_SYSREG(ZCR_EL2, ZCR_EL1, NULL );
default:
return false;
}
@ -383,6 +384,12 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
bool was_enabled = vcpu_has_cache_enabled(vcpu);
u64 val, mask, shift;
if (reg_to_encoding(r) == SYS_TCR2_EL1 &&
!kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, TCRX, IMP)) {
kvm_inject_undefined(vcpu);
return false;
}
BUG_ON(!p->is_write);
get_access_mask(r, &mask, &shift);
@ -1565,7 +1572,7 @@ static u64 kvm_read_sanitised_id_reg(struct kvm_vcpu *vcpu,
static u64 read_id_reg(const struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
return IDREG(vcpu->kvm, reg_to_encoding(r));
return kvm_read_vm_id_reg(vcpu->kvm, reg_to_encoding(r));
}
static bool is_feature_id_reg(u32 encoding)
@ -1583,6 +1590,9 @@ static bool is_feature_id_reg(u32 encoding)
*/
static inline bool is_vm_ftr_id_reg(u32 id)
{
if (id == SYS_CTR_EL0)
return true;
return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 &&
sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 &&
sys_reg_CRm(id) < 8);
@ -1851,7 +1861,7 @@ static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
ret = arm64_check_features(vcpu, rd, val);
if (!ret)
IDREG(vcpu->kvm, id) = val;
kvm_set_vm_id_reg(vcpu->kvm, id, val);
mutex_unlock(&vcpu->kvm->arch.config_lock);
@ -1867,6 +1877,18 @@ static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
return ret;
}
void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val)
{
u64 *p = __vm_id_reg(&kvm->arch, reg);
lockdep_assert_held(&kvm->arch.config_lock);
if (KVM_BUG_ON(kvm_vm_has_ran_once(kvm) || !p, kvm))
return;
*p = val;
}
static int get_raz_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
u64 *val)
{
@ -1886,7 +1908,7 @@ static bool access_ctr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
if (p->is_write)
return write_to_read_only(vcpu, p, r);
p->regval = read_sanitised_ftr_reg(SYS_CTR_EL0);
p->regval = kvm_read_vm_id_reg(vcpu->kvm, SYS_CTR_EL0);
return true;
}
@ -2199,6 +2221,40 @@ static u64 reset_hcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
return __vcpu_sys_reg(vcpu, r->reg) = val;
}
static unsigned int sve_el2_visibility(const struct kvm_vcpu *vcpu,
const struct sys_reg_desc *rd)
{
unsigned int r;
r = el2_visibility(vcpu, rd);
if (r)
return r;
return sve_visibility(vcpu, rd);
}
static bool access_zcr_el2(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
unsigned int vq;
if (guest_hyp_sve_traps_enabled(vcpu)) {
kvm_inject_nested_sve_trap(vcpu);
return true;
}
if (!p->is_write) {
p->regval = vcpu_read_sys_reg(vcpu, ZCR_EL2);
return true;
}
vq = SYS_FIELD_GET(ZCR_ELx, LEN, p->regval) + 1;
vq = min(vq, vcpu_sve_max_vq(vcpu));
vcpu_write_sys_reg(vcpu, vq - 1, ZCR_EL2);
return true;
}
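A quick example of the clamping in the write path (vector lengths assumed): on a vCPU whose maximum SVE vector length is 512 bits (vq = 4), a guest hypervisor writing ZCR_EL2 with LEN = 15, i.e. requesting 2048 bits, is limited by min(vq, vcpu_sve_max_vq(vcpu)) and ZCR_EL2 ends up holding 3, the LEN encoding of a 512-bit vector length.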
/*
* Architected system registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@ -2471,11 +2527,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_CCSIDR_EL1), access_ccsidr },
{ SYS_DESC(SYS_CLIDR_EL1), access_clidr, reset_clidr, CLIDR_EL1,
.set_user = set_clidr },
.set_user = set_clidr, .val = ~CLIDR_EL1_RES0 },
{ SYS_DESC(SYS_CCSIDR2_EL1), undef_access },
{ SYS_DESC(SYS_SMIDR_EL1), undef_access },
{ SYS_DESC(SYS_CSSELR_EL1), access_csselr, reset_unknown, CSSELR_EL1 },
{ SYS_DESC(SYS_CTR_EL0), access_ctr },
ID_WRITABLE(CTR_EL0, CTR_EL0_DIC_MASK |
CTR_EL0_IDC_MASK |
CTR_EL0_DminLine_MASK |
CTR_EL0_IminLine_MASK),
{ SYS_DESC(SYS_SVCR), undef_access },
{ PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr,
@ -2688,6 +2747,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG_VNCR(HFGITR_EL2, reset_val, 0),
EL2_REG_VNCR(HACR_EL2, reset_val, 0),
{ SYS_DESC(SYS_ZCR_EL2), .access = access_zcr_el2, .reset = reset_val,
.visibility = sve_el2_visibility, .reg = ZCR_EL2 },
EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
EL2_REG(TTBR0_EL2, access_rw, reset_val, 0),
@ -2741,6 +2803,264 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(SP_EL2, NULL, reset_unknown, 0),
};
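Tying the writable CTR_EL0 entry above back to userspace: the DIC/IDC/DminLine/IminLine masks mean a VMM can now shrink the advertised cache geometry before the VM runs, via the usual KVM_SET_ONE_REG path. A minimal, hedged sketch of that (the helper name and error handling are mine; ARM64_SYS_REG() is the arm64 KVM uapi encoding macro):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <asm/kvm.h>   /* ARM64_SYS_REG(), arm64 builds only */

/* Hypothetical helper: override CTR_EL0 for a vCPU before its first KVM_RUN. */
static int set_guest_ctr_el0(int vcpu_fd, uint64_t ctr)
{
    struct kvm_one_reg reg = {
        .id   = ARM64_SYS_REG(3, 3, 0, 0, 1),   /* CTR_EL0: Op0=3 Op1=3 CRn=0 CRm=0 Op2=1 */
        .addr = (uint64_t)&ctr,
    };

    /* Only DIC, IDC, DminLine and IminLine may deviate, per the masks above. */
    return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}

Per kvm_set_vm_id_reg() earlier in this file, such writes are refused once the VM has run, so this has to happen during initial setup.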
static bool kvm_supported_tlbi_s12_op(struct kvm_vcpu *vpcu, u32 instr)
{
struct kvm *kvm = vpcu->kvm;
u8 CRm = sys_reg_CRm(instr);
if (sys_reg_CRn(instr) == TLBI_CRn_nXS &&
!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, XS, IMP))
return false;
if (CRm == TLBI_CRm_nROS &&
!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
return false;
return true;
}
static bool handle_alle1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
u32 sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
if (!kvm_supported_tlbi_s12_op(vcpu, sys_encoding)) {
kvm_inject_undefined(vcpu);
return false;
}
write_lock(&vcpu->kvm->mmu_lock);
/*
* Drop all shadow S2s, resulting in S1/S2 TLBIs for each of the
* corresponding VMIDs.
*/
kvm_nested_s2_unmap(vcpu->kvm);
write_unlock(&vcpu->kvm->mmu_lock);
return true;
}
static bool kvm_supported_tlbi_ipas2_op(struct kvm_vcpu *vpcu, u32 instr)
{
struct kvm *kvm = vpcu->kvm;
u8 CRm = sys_reg_CRm(instr);
u8 Op2 = sys_reg_Op2(instr);
if (sys_reg_CRn(instr) == TLBI_CRn_nXS &&
!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, XS, IMP))
return false;
if (CRm == TLBI_CRm_IPAIS && (Op2 == 2 || Op2 == 6) &&
!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
return false;
if (CRm == TLBI_CRm_IPAONS && (Op2 == 0 || Op2 == 4) &&
!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
return false;
if (CRm == TLBI_CRm_IPAONS && (Op2 == 3 || Op2 == 7) &&
!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
return false;
return true;
}
/* Only defined here as this is an internal "abstraction" */
union tlbi_info {
struct {
u64 start;
u64 size;
} range;
struct {
u64 addr;
} ipa;
struct {
u64 addr;
u32 encoding;
} va;
};
static void s2_mmu_unmap_range(struct kvm_s2_mmu *mmu,
const union tlbi_info *info)
{
kvm_stage2_unmap_range(mmu, info->range.start, info->range.size);
}
static bool handle_vmalls12e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
u32 sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
u64 limit, vttbr;
if (!kvm_supported_tlbi_s12_op(vcpu, sys_encoding)) {
kvm_inject_undefined(vcpu);
return false;
}
vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
limit = BIT_ULL(kvm_get_pa_bits(vcpu->kvm));
kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
&(union tlbi_info) {
.range = {
.start = 0,
.size = limit,
},
},
s2_mmu_unmap_range);
return true;
}
static bool handle_ripas2e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
u32 sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
u64 base, range, tg, num, scale;
int shift;
if (!kvm_supported_tlbi_ipas2_op(vcpu, sys_encoding)) {
kvm_inject_undefined(vcpu);
return false;
}
/*
* Because the shadow S2 structure doesn't necessarily reflect that
* of the guest's S2 (different base granule size, for example), we
* decide to ignore TTL and only use the described range.
*/
tg = FIELD_GET(GENMASK(47, 46), p->regval);
scale = FIELD_GET(GENMASK(45, 44), p->regval);
num = FIELD_GET(GENMASK(43, 39), p->regval);
base = p->regval & GENMASK(36, 0);
switch(tg) {
case 1:
shift = 12;
break;
case 2:
shift = 14;
break;
case 3:
default: /* IMPDEF: handle tg==0 as 64k */
shift = 16;
break;
}
base <<= shift;
range = __TLBI_RANGE_PAGES(num, scale) << shift;
kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
&(union tlbi_info) {
.range = {
.start = base,
.size = range,
},
},
s2_mmu_unmap_range);
return true;
}
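To make the operand decoding above concrete, here is a small self-contained sketch of the same TG/SCALE/NUM/BASEADDR arithmetic. The page-count formula mirrors arm64's __TLBI_RANGE_PAGES() as I read it, and the operand value is purely illustrative:

#include <stdint.h>
#include <stdio.h>

/* (NUM + 1) pages, scaled by 2^(5*SCALE + 1), as in __TLBI_RANGE_PAGES(). */
#define RANGE_PAGES(num, scale) ((uint64_t)((num) + 1) << (5 * (scale) + 1))

int main(void)
{
    /* Example RIPAS2E1IS operand: TG=0b01 (4K), SCALE=1, NUM=3, BASEADDR=0x1000 */
    uint64_t regval = (1ULL << 46) | (1ULL << 44) | (3ULL << 39) | 0x1000;

    unsigned int tg    = (regval >> 46) & 0x3;
    unsigned int scale = (regval >> 44) & 0x3;
    unsigned int num   = (regval >> 39) & 0x1f;
    uint64_t base      = regval & ((1ULL << 37) - 1);   /* bits [36:0] */
    int shift          = (tg == 1) ? 12 : (tg == 2) ? 14 : 16;

    base <<= shift;
    printf("unmap [%#llx, +%#llx)\n", (unsigned long long)base,
           (unsigned long long)(RANGE_PAGES(num, scale) << shift));
    return 0;
}

With those example values, the handler above would drop 256 pages (1 MiB) of shadow stage-2 mappings starting at IPA 0x1000000.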
static void s2_mmu_unmap_ipa(struct kvm_s2_mmu *mmu,
const union tlbi_info *info)
{
unsigned long max_size;
u64 base_addr;
/*
* We drop a number of things from the supplied value:
*
* - NS bit: we're non-secure only.
*
* - IPA[51:48]: We don't support 52bit IPA just yet...
*
* And of course, adjust the IPA to be on an actual address.
*/
base_addr = (info->ipa.addr & GENMASK_ULL(35, 0)) << 12;
max_size = compute_tlb_inval_range(mmu, info->ipa.addr);
base_addr &= ~(max_size - 1);
kvm_stage2_unmap_range(mmu, base_addr, max_size);
}
static bool handle_ipas2e1is(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
u32 sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
if (!kvm_supported_tlbi_ipas2_op(vcpu, sys_encoding)) {
kvm_inject_undefined(vcpu);
return false;
}
kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
&(union tlbi_info) {
.ipa = {
.addr = p->regval,
},
},
s2_mmu_unmap_ipa);
return true;
}
static void s2_mmu_tlbi_s1e1(struct kvm_s2_mmu *mmu,
const union tlbi_info *info)
{
WARN_ON(__kvm_tlbi_s1e2(mmu, info->va.addr, info->va.encoding));
}
static bool handle_tlbi_el1(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
u32 sys_encoding = sys_insn(p->Op0, p->Op1, p->CRn, p->CRm, p->Op2);
u64 vttbr = vcpu_read_sys_reg(vcpu, VTTBR_EL2);
/*
* If we're here, this is because we've trapped on an EL1 TLBI
* instruction that affects the EL1 translation regime while
* we're running in a context that doesn't allow us to let the
* HW do its thing (aka vEL2):
*
* - HCR_EL2.E2H == 0 : a non-VHE guest
* - HCR_EL2.{E2H,TGE} == { 1, 0 } : a VHE guest in guest mode
*
* We don't expect these helpers to ever be called when running
* in a vEL1 context.
*/
WARN_ON(!vcpu_is_el2(vcpu));
if (!kvm_supported_tlbi_s1e1_op(vcpu, sys_encoding)) {
kvm_inject_undefined(vcpu);
return false;
}
kvm_s2_mmu_iterate_by_vmid(vcpu->kvm, get_vmid(vttbr),
&(union tlbi_info) {
.va = {
.addr = p->regval,
.encoding = sys_encoding,
},
},
s2_mmu_tlbi_s1e1);
return true;
}
#define SYS_INSN(insn, access_fn) \
{ \
SYS_DESC(OP_##insn), \
.access = (access_fn), \
}
static struct sys_reg_desc sys_insn_descs[] = {
{ SYS_DESC(SYS_DC_ISW), access_dcsw },
{ SYS_DESC(SYS_DC_IGSW), access_dcgsw },
@ -2751,9 +3071,147 @@ static struct sys_reg_desc sys_insn_descs[] = {
{ SYS_DESC(SYS_DC_CISW), access_dcsw },
{ SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
{ SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
};
static const struct sys_reg_desc *first_idreg;
SYS_INSN(TLBI_VMALLE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_VAE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_ASIDE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_VAAE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_VALE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_VAALE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAAE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_RVALE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAALE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_VMALLE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_VAE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_ASIDE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_VAAE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_VALE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_VAALE1IS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAAE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_RVALE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAALE1OS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAE1, handle_tlbi_el1),
SYS_INSN(TLBI_RVAAE1, handle_tlbi_el1),
SYS_INSN(TLBI_RVALE1, handle_tlbi_el1),
SYS_INSN(TLBI_RVAALE1, handle_tlbi_el1),
SYS_INSN(TLBI_VMALLE1, handle_tlbi_el1),
SYS_INSN(TLBI_VAE1, handle_tlbi_el1),
SYS_INSN(TLBI_ASIDE1, handle_tlbi_el1),
SYS_INSN(TLBI_VAAE1, handle_tlbi_el1),
SYS_INSN(TLBI_VALE1, handle_tlbi_el1),
SYS_INSN(TLBI_VAALE1, handle_tlbi_el1),
SYS_INSN(TLBI_VMALLE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_ASIDE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAAE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VALE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAALE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAAE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVALE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAALE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VMALLE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_ASIDE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAAE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VALE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAALE1ISNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAAE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVALE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAALE1OSNXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAAE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVALE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_RVAALE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_VMALLE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_ASIDE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAAE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_VALE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_VAALE1NXS, handle_tlbi_el1),
SYS_INSN(TLBI_IPAS2E1IS, handle_ipas2e1is),
SYS_INSN(TLBI_RIPAS2E1IS, handle_ripas2e1is),
SYS_INSN(TLBI_IPAS2LE1IS, handle_ipas2e1is),
SYS_INSN(TLBI_RIPAS2LE1IS, handle_ripas2e1is),
SYS_INSN(TLBI_ALLE2OS, trap_undef),
SYS_INSN(TLBI_VAE2OS, trap_undef),
SYS_INSN(TLBI_ALLE1OS, handle_alle1is),
SYS_INSN(TLBI_VALE2OS, trap_undef),
SYS_INSN(TLBI_VMALLS12E1OS, handle_vmalls12e1is),
SYS_INSN(TLBI_RVAE2IS, trap_undef),
SYS_INSN(TLBI_RVALE2IS, trap_undef),
SYS_INSN(TLBI_ALLE1IS, handle_alle1is),
SYS_INSN(TLBI_VMALLS12E1IS, handle_vmalls12e1is),
SYS_INSN(TLBI_IPAS2E1OS, handle_ipas2e1is),
SYS_INSN(TLBI_IPAS2E1, handle_ipas2e1is),
SYS_INSN(TLBI_RIPAS2E1, handle_ripas2e1is),
SYS_INSN(TLBI_RIPAS2E1OS, handle_ripas2e1is),
SYS_INSN(TLBI_IPAS2LE1OS, handle_ipas2e1is),
SYS_INSN(TLBI_IPAS2LE1, handle_ipas2e1is),
SYS_INSN(TLBI_RIPAS2LE1, handle_ripas2e1is),
SYS_INSN(TLBI_RIPAS2LE1OS, handle_ripas2e1is),
SYS_INSN(TLBI_RVAE2OS, trap_undef),
SYS_INSN(TLBI_RVALE2OS, trap_undef),
SYS_INSN(TLBI_RVAE2, trap_undef),
SYS_INSN(TLBI_RVALE2, trap_undef),
SYS_INSN(TLBI_ALLE1, handle_alle1is),
SYS_INSN(TLBI_VMALLS12E1, handle_vmalls12e1is),
SYS_INSN(TLBI_IPAS2E1ISNXS, handle_ipas2e1is),
SYS_INSN(TLBI_RIPAS2E1ISNXS, handle_ripas2e1is),
SYS_INSN(TLBI_IPAS2LE1ISNXS, handle_ipas2e1is),
SYS_INSN(TLBI_RIPAS2LE1ISNXS, handle_ripas2e1is),
SYS_INSN(TLBI_ALLE2OSNXS, trap_undef),
SYS_INSN(TLBI_VAE2OSNXS, trap_undef),
SYS_INSN(TLBI_ALLE1OSNXS, handle_alle1is),
SYS_INSN(TLBI_VALE2OSNXS, trap_undef),
SYS_INSN(TLBI_VMALLS12E1OSNXS, handle_vmalls12e1is),
SYS_INSN(TLBI_RVAE2ISNXS, trap_undef),
SYS_INSN(TLBI_RVALE2ISNXS, trap_undef),
SYS_INSN(TLBI_ALLE2ISNXS, trap_undef),
SYS_INSN(TLBI_VAE2ISNXS, trap_undef),
SYS_INSN(TLBI_ALLE1ISNXS, handle_alle1is),
SYS_INSN(TLBI_VALE2ISNXS, trap_undef),
SYS_INSN(TLBI_VMALLS12E1ISNXS, handle_vmalls12e1is),
SYS_INSN(TLBI_IPAS2E1OSNXS, handle_ipas2e1is),
SYS_INSN(TLBI_IPAS2E1NXS, handle_ipas2e1is),
SYS_INSN(TLBI_RIPAS2E1NXS, handle_ripas2e1is),
SYS_INSN(TLBI_RIPAS2E1OSNXS, handle_ripas2e1is),
SYS_INSN(TLBI_IPAS2LE1OSNXS, handle_ipas2e1is),
SYS_INSN(TLBI_IPAS2LE1NXS, handle_ipas2e1is),
SYS_INSN(TLBI_RIPAS2LE1NXS, handle_ripas2e1is),
SYS_INSN(TLBI_RIPAS2LE1OSNXS, handle_ripas2e1is),
SYS_INSN(TLBI_RVAE2OSNXS, trap_undef),
SYS_INSN(TLBI_RVALE2OSNXS, trap_undef),
SYS_INSN(TLBI_RVAE2NXS, trap_undef),
SYS_INSN(TLBI_RVALE2NXS, trap_undef),
SYS_INSN(TLBI_ALLE2NXS, trap_undef),
SYS_INSN(TLBI_VAE2NXS, trap_undef),
SYS_INSN(TLBI_ALLE1NXS, handle_alle1is),
SYS_INSN(TLBI_VALE2NXS, trap_undef),
SYS_INSN(TLBI_VMALLS12E1NXS, handle_vmalls12e1is),
};
static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
@ -2762,7 +3220,7 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
if (p->is_write) {
return ignore_write(vcpu, p);
} else {
u64 dfr = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
u64 dfr = kvm_read_vm_id_reg(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
u32 el3 = kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, EL3, IMP);
p->regval = ((SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr) << 28) |
@ -3440,6 +3898,25 @@ static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
return false;
}
static const struct sys_reg_desc *idregs_debug_find(struct kvm *kvm, u8 pos)
{
unsigned long i, idreg_idx = 0;
for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) {
const struct sys_reg_desc *r = &sys_reg_descs[i];
if (!is_vm_ftr_id_reg(reg_to_encoding(r)))
continue;
if (idreg_idx == pos)
return r;
idreg_idx++;
}
return NULL;
}
static void *idregs_debug_start(struct seq_file *s, loff_t *pos)
{
struct kvm *kvm = s->private;
@ -3451,7 +3928,7 @@ static void *idregs_debug_start(struct seq_file *s, loff_t *pos)
if (test_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags) &&
*iter == (u8)~0) {
*iter = *pos;
if (*iter >= KVM_ARM_ID_REG_NUM)
if (!idregs_debug_find(kvm, *iter))
iter = NULL;
} else {
iter = ERR_PTR(-EBUSY);
@ -3468,7 +3945,7 @@ static void *idregs_debug_next(struct seq_file *s, void *v, loff_t *pos)
(*pos)++;
if ((kvm->arch.idreg_debugfs_iter + 1) < KVM_ARM_ID_REG_NUM) {
if (idregs_debug_find(kvm, kvm->arch.idreg_debugfs_iter + 1)) {
kvm->arch.idreg_debugfs_iter++;
return &kvm->arch.idreg_debugfs_iter;
@ -3493,16 +3970,16 @@ static void idregs_debug_stop(struct seq_file *s, void *v)
static int idregs_debug_show(struct seq_file *s, void *v)
{
struct kvm *kvm = s->private;
const struct sys_reg_desc *desc;
struct kvm *kvm = s->private;
desc = first_idreg + kvm->arch.idreg_debugfs_iter;
desc = idregs_debug_find(kvm, kvm->arch.idreg_debugfs_iter);
if (!desc->name)
return 0;
seq_printf(s, "%20s:\t%016llx\n",
desc->name, IDREG(kvm, IDX_IDREG(kvm->arch.idreg_debugfs_iter)));
desc->name, kvm_read_vm_id_reg(kvm, reg_to_encoding(desc)));
return 0;
}
@ -3532,8 +4009,7 @@ static void reset_vm_ftr_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc
if (test_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags))
return;
lockdep_assert_held(&kvm->arch.config_lock);
IDREG(kvm, id) = reg->reset(vcpu, reg);
kvm_set_vm_id_reg(kvm, id, reg->reset(vcpu, reg));
}
static void reset_vcpu_ftr_id_reg(struct kvm_vcpu *vcpu,
@ -3686,8 +4162,8 @@ id_to_sys_reg_desc(struct kvm_vcpu *vcpu, u64 id,
*/
#define FUNCTION_INVARIANT(reg) \
static u64 get_##reg(struct kvm_vcpu *v, \
const struct sys_reg_desc *r) \
static u64 reset_##reg(struct kvm_vcpu *v, \
const struct sys_reg_desc *r) \
{ \
((struct sys_reg_desc *)r)->val = read_sysreg(reg); \
return ((struct sys_reg_desc *)r)->val; \
@ -3697,18 +4173,11 @@ FUNCTION_INVARIANT(midr_el1)
FUNCTION_INVARIANT(revidr_el1)
FUNCTION_INVARIANT(aidr_el1)
static u64 get_ctr_el0(struct kvm_vcpu *v, const struct sys_reg_desc *r)
{
((struct sys_reg_desc *)r)->val = read_sanitised_ftr_reg(SYS_CTR_EL0);
return ((struct sys_reg_desc *)r)->val;
}
/* ->val is filled in by kvm_sys_reg_table_init() */
static struct sys_reg_desc invariant_sys_regs[] __ro_after_init = {
{ SYS_DESC(SYS_MIDR_EL1), NULL, get_midr_el1 },
{ SYS_DESC(SYS_REVIDR_EL1), NULL, get_revidr_el1 },
{ SYS_DESC(SYS_AIDR_EL1), NULL, get_aidr_el1 },
{ SYS_DESC(SYS_CTR_EL0), NULL, get_ctr_el0 },
{ SYS_DESC(SYS_MIDR_EL1), NULL, reset_midr_el1 },
{ SYS_DESC(SYS_REVIDR_EL1), NULL, reset_revidr_el1 },
{ SYS_DESC(SYS_AIDR_EL1), NULL, reset_aidr_el1 },
};
static int get_invariant_sys_reg(u64 id, u64 __user *uaddr)
@ -4019,20 +4488,11 @@ int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *
if (!is_feature_id_reg(encoding) || !reg->set_user)
continue;
/*
* For ID registers, we return the writable mask. Other feature
* registers return a full 64bit mask. That's not necessary
* compliant with a given revision of the architecture, but the
* RES0/RES1 definitions allow us to do that.
*/
if (is_vm_ftr_id_reg(encoding)) {
if (!reg->val ||
(is_aa32_id_reg(encoding) && !kvm_supports_32bit_el0()))
continue;
val = reg->val;
} else {
val = ~0UL;
if (!reg->val ||
(is_aa32_id_reg(encoding) && !kvm_supports_32bit_el0())) {
continue;
}
val = reg->val;
if (put_user(val, (masks + KVM_ARM_FEATURE_ID_RANGE_INDEX(encoding))))
return -EFAULT;
@ -4041,11 +4501,34 @@ int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *
return 0;
}
void kvm_init_sysreg(struct kvm_vcpu *vcpu)
static void vcpu_set_hcr(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
mutex_lock(&kvm->arch.config_lock);
if (has_vhe() || has_hvhe())
vcpu->arch.hcr_el2 |= HCR_E2H;
if (cpus_have_final_cap(ARM64_HAS_RAS_EXTN)) {
/* route synchronous external abort exceptions to EL2 */
vcpu->arch.hcr_el2 |= HCR_TEA;
/* trap error record accesses */
vcpu->arch.hcr_el2 |= HCR_TERR;
}
if (cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
vcpu->arch.hcr_el2 |= HCR_FWB;
if (cpus_have_final_cap(ARM64_HAS_EVT) &&
!cpus_have_final_cap(ARM64_MISMATCHED_CACHE_TYPE) &&
kvm_read_vm_id_reg(kvm, SYS_CTR_EL0) == read_sanitised_ftr_reg(SYS_CTR_EL0))
vcpu->arch.hcr_el2 |= HCR_TID4;
else
vcpu->arch.hcr_el2 |= HCR_TID2;
if (vcpu_el1_is_32bit(vcpu))
vcpu->arch.hcr_el2 &= ~HCR_RW;
if (kvm_has_mte(vcpu->kvm))
vcpu->arch.hcr_el2 |= HCR_ATA;
/*
* In the absence of FGT, we cannot independently trap TLBI
@ -4054,12 +4537,29 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
*/
if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
vcpu->arch.hcr_el2 |= HCR_TTLBOS;
}
void kvm_calculate_traps(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
mutex_lock(&kvm->arch.config_lock);
vcpu_set_hcr(vcpu);
if (cpus_have_final_cap(ARM64_HAS_HCX)) {
vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
/*
* In general, all HCRX_EL2 bits are gated by a feature.
* The only reason we can set SMPME without checking any
* feature is that its effects are not directly observable
* from the guest.
*/
vcpu->arch.hcrx_el2 = HCRX_EL2_SMPME;
if (kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
vcpu->arch.hcrx_el2 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
if (kvm_has_feat(kvm, ID_AA64MMFR3_EL1, TCRX, IMP))
vcpu->arch.hcrx_el2 |= HCRX_EL2_TCR2En;
}
if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
@ -4115,7 +4615,6 @@ out:
int __init kvm_sys_reg_table_init(void)
{
struct sys_reg_params params;
bool valid = true;
unsigned int i;
int ret = 0;
@ -4136,12 +4635,6 @@ int __init kvm_sys_reg_table_init(void)
for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
/* Find the first idreg (SYS_ID_PFR0_EL1) in sys_reg_descs. */
params = encoding_to_params(SYS_ID_PFR0_EL1);
first_idreg = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
if (!first_idreg)
return -EINVAL;
ret = populate_nv_trap_config();
for (i = 0; !ret && i < ARRAY_SIZE(sys_reg_descs); i++)


@ -649,6 +649,17 @@ config PARAVIRT
over full virtualization. However, when run without a hypervisor
the kernel is theoretically slower and slightly larger.
config PARAVIRT_TIME_ACCOUNTING
bool "Paravirtual steal time accounting"
depends on PARAVIRT
help
Select this option to enable fine-grained task steal time
accounting. Time spent executing other tasks in parallel with
the current vCPU is discounted from that vCPU's capacity; the
extra accounting can have a small performance impact.
If in doubt, say N here.
endmenu
config ARCH_SELECT_MEMORY_MODEL


@ -30,12 +30,17 @@
#define KVM_PRIVATE_MEM_SLOTS 0
#define KVM_HALT_POLL_NS_DEFAULT 500000
#define KVM_REQ_TLB_FLUSH_GPA KVM_ARCH_REQ(0)
#define KVM_REQ_STEAL_UPDATE KVM_ARCH_REQ(1)
#define KVM_GUESTDBG_SW_BP_MASK \
(KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP)
#define KVM_GUESTDBG_VALID_MASK \
(KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_SW_BP | KVM_GUESTDBG_SINGLESTEP)
#define KVM_DIRTY_LOG_MANUAL_CAPS \
(KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE | KVM_DIRTY_LOG_INITIALLY_SET)
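Exposing KVM_DIRTY_LOG_INITIALLY_SET in this mask is what lets userspace request the "initially all set" dirty-log behaviour. A hedged sketch of the standard way a VMM opts in (generic KVM_ENABLE_CAP usage; the fd handling is illustrative):

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int enable_initially_set_dirty_log(int vm_fd)
{
    struct kvm_enable_cap cap = {
        .cap  = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
        /* args[0] must be a subset of KVM_DIRTY_LOG_MANUAL_CAPS above. */
        .args = { KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE |
                  KVM_DIRTY_LOG_INITIALLY_SET },
    };

    return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}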
struct kvm_vm_stat {
struct kvm_vm_stat_generic generic;
u64 pages;
@ -190,6 +195,7 @@ struct kvm_vcpu_arch {
/* vcpu's vpid */
u64 vpid;
gpa_t flush_gpa;
/* Frequency of stable timer in Hz */
u64 timer_mhz;
@ -201,6 +207,13 @@ struct kvm_vcpu_arch {
struct kvm_mp_state mp_state;
/* cpucfg */
u32 cpucfg[KVM_MAX_CPUCFG_REGS];
/* paravirt steal time */
struct {
u64 guest_addr;
u64 last_steal;
struct gfn_to_hva_cache cache;
} st;
};
static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg)
@ -261,7 +274,6 @@ static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch)
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}


@ -14,6 +14,7 @@
#define KVM_HCALL_SERVICE HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_SERVICE)
#define KVM_HCALL_FUNC_IPI 1
#define KVM_HCALL_FUNC_NOTIFY 2
#define KVM_HCALL_SWDBG HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_SWDBG)
@ -24,6 +25,16 @@
#define KVM_HCALL_INVALID_CODE -1UL
#define KVM_HCALL_INVALID_PARAMETER -2UL
#define KVM_STEAL_PHYS_VALID BIT_ULL(0)
#define KVM_STEAL_PHYS_MASK GENMASK_ULL(63, 6)
struct kvm_steal_time {
__u64 steal;
__u32 version;
__u32 flags;
__u32 pad[12];
};
/*
* Hypercall interface for KVM hypervisor
*


@ -120,4 +120,9 @@ static inline void kvm_write_reg(struct kvm_vcpu *vcpu, int num, unsigned long v
vcpu->arch.gprs[num] = val;
}
static inline bool kvm_pvtime_supported(void)
{
return !!sched_info_on();
}
#endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */


@ -169,6 +169,7 @@
#define KVM_SIGNATURE "KVM\0"
#define CPUCFG_KVM_FEATURE (CPUCFG_KVM_BASE + 4)
#define KVM_FEATURE_IPI BIT(1)
#define KVM_FEATURE_STEAL_TIME BIT(2)
#ifndef __ASSEMBLY__


@ -18,6 +18,7 @@ static inline u64 paravirt_steal_clock(int cpu)
}
int __init pv_ipi_init(void);
int __init pv_time_init(void);
#else
@ -26,5 +27,9 @@ static inline int pv_ipi_init(void)
return 0;
}
static inline int pv_time_init(void)
{
return 0;
}
#endif // CONFIG_PARAVIRT
#endif


@ -81,7 +81,11 @@ struct kvm_fpu {
#define LOONGARCH_REG_64(TYPE, REG) (TYPE | KVM_REG_SIZE_U64 | (REG << LOONGARCH_REG_SHIFT))
#define KVM_IOC_CSRID(REG) LOONGARCH_REG_64(KVM_REG_LOONGARCH_CSR, REG)
#define KVM_IOC_CPUCFG(REG) LOONGARCH_REG_64(KVM_REG_LOONGARCH_CPUCFG, REG)
/* Device Control API on vcpu fd */
#define KVM_LOONGARCH_VCPU_CPUCFG 0
#define KVM_LOONGARCH_VCPU_PVTIME_CTRL 1
#define KVM_LOONGARCH_VCPU_PVTIME_GPA 0
struct kvm_debug_exit_arch {
};


@ -4,11 +4,14 @@
#include <linux/interrupt.h>
#include <linux/jump_label.h>
#include <linux/kvm_para.h>
#include <linux/reboot.h>
#include <linux/static_call.h>
#include <asm/paravirt.h>
static int has_steal_clock;
struct static_key paravirt_steal_enabled;
struct static_key paravirt_steal_rq_enabled;
static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
static u64 native_steal_clock(int cpu)
{
@ -17,6 +20,34 @@ static u64 native_steal_clock(int cpu)
DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);
static bool steal_acc = true;
static int __init parse_no_stealacc(char *arg)
{
steal_acc = false;
return 0;
}
early_param("no-steal-acc", parse_no_stealacc);
static u64 paravt_steal_clock(int cpu)
{
int version;
u64 steal;
struct kvm_steal_time *src;
src = &per_cpu(steal_time, cpu);
do {
version = src->version;
virt_rmb(); /* Make sure that the version is read before the steal */
steal = src->steal;
virt_rmb(); /* Make sure that the steal is read before the next version */
} while ((version & 1) || (version != src->version));
return steal;
}
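The retry condition above is the usual even/odd version handshake: the host bumps the version to an odd value before touching the payload and back to an even value afterwards, so a reader that observes an odd or changed version simply re-reads. A generic, standalone model of that pairing (not KVM code; the kernel sides use virt_rmb()/smp_wmb() and per-CPU data instead):

#include <stdint.h>

struct sample {
    volatile uint32_t version;   /* even = stable, odd = update in progress */
    volatile uint64_t value;
};

static void producer_update(struct sample *s, uint64_t new_value)
{
    s->version++;                               /* now odd: readers retry */
    __atomic_thread_fence(__ATOMIC_RELEASE);
    s->value = new_value;
    __atomic_thread_fence(__ATOMIC_RELEASE);
    s->version++;                               /* even again: publish */
}

static uint64_t consumer_read(const struct sample *s)
{
    uint32_t v;
    uint64_t val;

    do {
        v = s->version;
        __atomic_thread_fence(__ATOMIC_ACQUIRE);
        val = s->value;
        __atomic_thread_fence(__ATOMIC_ACQUIRE);
    } while ((v & 1) || (v != s->version));

    return val;
}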
#ifdef CONFIG_SMP
static void pv_send_ipi_single(int cpu, unsigned int action)
{
@ -149,3 +180,117 @@ int __init pv_ipi_init(void)
return 0;
}
static int pv_enable_steal_time(void)
{
int cpu = smp_processor_id();
unsigned long addr;
struct kvm_steal_time *st;
if (!has_steal_clock)
return -EPERM;
st = &per_cpu(steal_time, cpu);
addr = per_cpu_ptr_to_phys(st);
/* The whole structure kvm_steal_time should be in one page */
if (PFN_DOWN(addr) != PFN_DOWN(addr + sizeof(*st))) {
pr_warn("Illegal PV steal time addr %lx\n", addr);
return -EFAULT;
}
addr |= KVM_STEAL_PHYS_VALID;
kvm_hypercall2(KVM_HCALL_FUNC_NOTIFY, KVM_FEATURE_STEAL_TIME, addr);
return 0;
}
static void pv_disable_steal_time(void)
{
if (has_steal_clock)
kvm_hypercall2(KVM_HCALL_FUNC_NOTIFY, KVM_FEATURE_STEAL_TIME, 0);
}
#ifdef CONFIG_SMP
static int pv_time_cpu_online(unsigned int cpu)
{
unsigned long flags;
local_irq_save(flags);
pv_enable_steal_time();
local_irq_restore(flags);
return 0;
}
static int pv_time_cpu_down_prepare(unsigned int cpu)
{
unsigned long flags;
local_irq_save(flags);
pv_disable_steal_time();
local_irq_restore(flags);
return 0;
}
#endif
static void pv_cpu_reboot(void *unused)
{
pv_disable_steal_time();
}
static int pv_reboot_notify(struct notifier_block *nb, unsigned long code, void *unused)
{
on_each_cpu(pv_cpu_reboot, NULL, 1);
return NOTIFY_DONE;
}
static struct notifier_block pv_reboot_nb = {
.notifier_call = pv_reboot_notify,
};
int __init pv_time_init(void)
{
int r, feature;
if (!cpu_has_hypervisor)
return 0;
if (!kvm_para_available())
return 0;
feature = read_cpucfg(CPUCFG_KVM_FEATURE);
if (!(feature & KVM_FEATURE_STEAL_TIME))
return 0;
has_steal_clock = 1;
r = pv_enable_steal_time();
if (r < 0) {
has_steal_clock = 0;
return 0;
}
register_reboot_notifier(&pv_reboot_nb);
#ifdef CONFIG_SMP
r = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
"loongarch/pv_time:online",
pv_time_cpu_online, pv_time_cpu_down_prepare);
if (r < 0) {
has_steal_clock = 0;
pr_err("Failed to install cpu hotplug callbacks\n");
return r;
}
#endif
static_call_update(pv_steal_clock, paravt_steal_clock);
static_key_slow_inc(&paravirt_steal_enabled);
#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
if (steal_acc)
static_key_slow_inc(&paravirt_steal_rq_enabled);
#endif
pr_info("Using paravirt steal-time\n");
return 0;
}


@ -15,6 +15,7 @@
#include <asm/cpu-features.h>
#include <asm/loongarch.h>
#include <asm/paravirt.h>
#include <asm/time.h>
u64 cpu_clock_freq;
@ -214,4 +215,5 @@ void __init time_init(void)
constant_clockevent_init();
constant_clocksource_init();
pv_time_init();
}


@ -29,6 +29,7 @@ config KVM
select KVM_MMIO
select HAVE_KVM_READONLY_MEM
select KVM_XFER_TO_GUEST_WORK
select SCHED_INFO
help
Support hosting virtualized guest machines using
hardware virtualization extensions. You will need


@ -24,7 +24,7 @@
static int kvm_emu_cpucfg(struct kvm_vcpu *vcpu, larch_inst inst)
{
int rd, rj;
unsigned int index;
unsigned int index, ret;
if (inst.reg2_format.opcode != cpucfg_op)
return EMULATE_FAIL;
@ -50,7 +50,10 @@ static int kvm_emu_cpucfg(struct kvm_vcpu *vcpu, larch_inst inst)
vcpu->arch.gprs[rd] = *(unsigned int *)KVM_SIGNATURE;
break;
case CPUCFG_KVM_FEATURE:
vcpu->arch.gprs[rd] = KVM_FEATURE_IPI;
ret = KVM_FEATURE_IPI;
if (kvm_pvtime_supported())
ret |= KVM_FEATURE_STEAL_TIME;
vcpu->arch.gprs[rd] = ret;
break;
default:
vcpu->arch.gprs[rd] = 0;
@ -687,6 +690,34 @@ static int kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu)
return RESUME_GUEST;
}
static long kvm_save_notify(struct kvm_vcpu *vcpu)
{
unsigned long id, data;
id = kvm_read_reg(vcpu, LOONGARCH_GPR_A1);
data = kvm_read_reg(vcpu, LOONGARCH_GPR_A2);
switch (id) {
case KVM_FEATURE_STEAL_TIME:
if (!kvm_pvtime_supported())
return KVM_HCALL_INVALID_CODE;
if (data & ~(KVM_STEAL_PHYS_MASK | KVM_STEAL_PHYS_VALID))
return KVM_HCALL_INVALID_PARAMETER;
vcpu->arch.st.guest_addr = data;
if (!(data & KVM_STEAL_PHYS_VALID))
break;
vcpu->arch.st.last_steal = current->sched_info.run_delay;
kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
break;
default:
break;
}
return 0;
}
/*
* kvm_handle_lsx_disabled() - Guest used LSX while disabled in root.
* @vcpu: Virtual CPU context.
@ -758,6 +789,9 @@ static void kvm_handle_service(struct kvm_vcpu *vcpu)
kvm_send_pv_ipi(vcpu);
ret = KVM_HCALL_SUCCESS;
break;
case KVM_HCALL_FUNC_NOTIFY:
ret = kvm_save_notify(vcpu);
break;
default:
ret = KVM_HCALL_INVALID_CODE;
break;


@ -242,6 +242,7 @@ void kvm_check_vpid(struct kvm_vcpu *vcpu)
kvm_update_vpid(vcpu, cpu);
trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
vcpu->cpu = cpu;
kvm_clear_request(KVM_REQ_TLB_FLUSH_GPA, vcpu);
}
/* Restore GSTAT(0x50).vpid */


@ -163,6 +163,7 @@ static kvm_pte_t *kvm_populate_gpa(struct kvm *kvm,
child = kvm_mmu_memory_cache_alloc(cache);
_kvm_pte_init(child, ctx.invalid_ptes[ctx.level - 1]);
smp_wmb(); /* Make pte visible before pmd */
kvm_set_pte(entry, __pa(child));
} else if (kvm_pte_huge(*entry)) {
return entry;
@ -444,6 +445,17 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
enum kvm_mr_change change)
{
int needs_flush;
u32 old_flags = old ? old->flags : 0;
u32 new_flags = new ? new->flags : 0;
bool log_dirty_pages = new_flags & KVM_MEM_LOG_DIRTY_PAGES;
/* Only memslot flag changes are handled here */
if (change != KVM_MR_FLAGS_ONLY)
return;
/* Discard dirty page tracking on readonly memslot */
if ((old_flags & new_flags) & KVM_MEM_READONLY)
return;
/*
* If dirty page logging is enabled, write protect all pages in the slot
@ -454,9 +466,14 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
* MOVE/DELETE: The old mappings will already have been cleaned up by
* kvm_arch_flush_shadow_memslot()
*/
if (change == KVM_MR_FLAGS_ONLY &&
(!(old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
new->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
if (!(old_flags & KVM_MEM_LOG_DIRTY_PAGES) && log_dirty_pages) {
/*
* Initially-all-set does not require write protecting any page
* because they're all assumed to be dirty.
*/
if (kvm_dirty_log_manual_protect_and_init_set(kvm))
return;
spin_lock(&kvm->mmu_lock);
/* Write protect GPA page table entries */
needs_flush = kvm_mkclean_gpa_pt(kvm, new->base_gfn,
@ -540,6 +557,7 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
gfn_t gfn = gpa >> PAGE_SHIFT;
struct kvm *kvm = vcpu->kvm;
struct kvm_memory_slot *slot;
struct page *page;
spin_lock(&kvm->mmu_lock);
@ -551,10 +569,8 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
}
/* Track access to pages marked old */
new = *ptep;
if (!kvm_pte_young(new))
new = kvm_pte_mkyoung(new);
/* call kvm_set_pfn_accessed() after unlock */
new = kvm_pte_mkyoung(*ptep);
/* call kvm_set_pfn_accessed() after unlock */
if (write && !kvm_pte_dirty(new)) {
if (!kvm_pte_write(new)) {
@ -582,19 +598,22 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
if (changed) {
kvm_set_pte(ptep, new);
pfn = kvm_pte_pfn(new);
page = kvm_pfn_to_refcounted_page(pfn);
if (page)
get_page(page);
}
spin_unlock(&kvm->mmu_lock);
/*
* Fixme: pfn may be freed after mmu_lock
* kvm_try_get_pfn(pfn)/kvm_release_pfn pair to prevent this?
*/
if (kvm_pte_young(changed))
kvm_set_pfn_accessed(pfn);
if (changed) {
if (kvm_pte_young(changed))
kvm_set_pfn_accessed(pfn);
if (kvm_pte_dirty(changed)) {
mark_page_dirty(kvm, gfn);
kvm_set_pfn_dirty(pfn);
if (kvm_pte_dirty(changed)) {
mark_page_dirty(kvm, gfn);
kvm_set_pfn_dirty(pfn);
}
if (page)
put_page(page);
}
return ret;
out:
@ -737,6 +756,7 @@ static kvm_pte_t *kvm_split_huge(struct kvm_vcpu *vcpu, kvm_pte_t *ptep, gfn_t g
val += PAGE_SIZE;
}
smp_wmb(); /* Make pte visible before pmd */
/* The later kvm_flush_tlb_gpa() will flush hugepage tlb */
kvm_set_pte(ptep, __pa(child));
@ -858,10 +878,20 @@ retry:
/* Disable dirty logging on HugePages */
level = 0;
if (!fault_supports_huge_mapping(memslot, hva, write)) {
level = 0;
} else {
if (fault_supports_huge_mapping(memslot, hva, write)) {
/* Check the page level in the host MMU */
level = host_pfn_mapping_level(kvm, gfn, memslot);
if (level == 1) {
/*
* Check the page level in the secondary MMU and
* disable the hugepage path if the GPA is already
* mapped there as a normal page
*/
ptep = kvm_populate_gpa(kvm, NULL, gpa, 0);
if (ptep && !kvm_pte_huge(*ptep))
level = 0;
}
if (level == 1) {
gfn = gfn & ~(PTRS_PER_PTE - 1);
pfn = pfn & ~(PTRS_PER_PTE - 1);
@ -892,7 +922,6 @@ retry:
kvm_set_pfn_dirty(pfn);
}
kvm_set_pfn_accessed(pfn);
kvm_release_pfn_clean(pfn);
out:
srcu_read_unlock(&kvm->srcu, srcu_idx);
@ -908,7 +937,8 @@ int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
return ret;
/* Invalidate this entry in the TLB */
kvm_flush_tlb_gpa(vcpu, gpa);
vcpu->arch.flush_gpa = gpa;
kvm_make_request(KVM_REQ_TLB_FLUSH_GPA, vcpu);
return 0;
}


@ -23,10 +23,7 @@ void kvm_flush_tlb_all(void)
void kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa)
{
unsigned long flags;
local_irq_save(flags);
lockdep_assert_irqs_disabled();
gpa &= (PAGE_MASK << 1);
invtlb(INVTLB_GID_ADDR, read_csr_gstat() & CSR_GSTAT_GID, gpa);
local_irq_restore(flags);
}


@ -31,6 +31,50 @@ const struct kvm_stats_header kvm_vcpu_stats_header = {
sizeof(kvm_vcpu_stats_desc),
};
static void kvm_update_stolen_time(struct kvm_vcpu *vcpu)
{
u32 version;
u64 steal;
gpa_t gpa;
struct kvm_memslots *slots;
struct kvm_steal_time __user *st;
struct gfn_to_hva_cache *ghc;
ghc = &vcpu->arch.st.cache;
gpa = vcpu->arch.st.guest_addr;
if (!(gpa & KVM_STEAL_PHYS_VALID))
return;
gpa &= KVM_STEAL_PHYS_MASK;
slots = kvm_memslots(vcpu->kvm);
if (slots->generation != ghc->generation || gpa != ghc->gpa) {
if (kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, gpa, sizeof(*st))) {
ghc->gpa = INVALID_GPA;
return;
}
}
st = (struct kvm_steal_time __user *)ghc->hva;
unsafe_get_user(version, &st->version, out);
if (version & 1)
version += 1; /* first time write, random junk */
version += 1;
unsafe_put_user(version, &st->version, out);
smp_wmb();
unsafe_get_user(steal, &st->steal, out);
steal += current->sched_info.run_delay - vcpu->arch.st.last_steal;
vcpu->arch.st.last_steal = current->sched_info.run_delay;
unsafe_put_user(steal, &st->steal, out);
smp_wmb();
version += 1;
unsafe_put_user(version, &st->version, out);
out:
mark_page_dirty_in_slot(vcpu->kvm, ghc->memslot, gpa_to_gfn(ghc->gpa));
}
/*
* kvm_check_requests - check and handle pending vCPU requests
*
@ -48,9 +92,22 @@ static int kvm_check_requests(struct kvm_vcpu *vcpu)
if (kvm_dirty_ring_check_request(vcpu))
return RESUME_HOST;
if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
kvm_update_stolen_time(vcpu);
return RESUME_GUEST;
}
static void kvm_late_check_requests(struct kvm_vcpu *vcpu)
{
lockdep_assert_irqs_disabled();
if (kvm_check_request(KVM_REQ_TLB_FLUSH_GPA, vcpu))
if (vcpu->arch.flush_gpa != INVALID_GPA) {
kvm_flush_tlb_gpa(vcpu, vcpu->arch.flush_gpa);
vcpu->arch.flush_gpa = INVALID_GPA;
}
}
/*
* Check and handle pending signal and vCPU requests etc
* Run with irq enabled and preempt enabled
@ -101,6 +158,13 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
/* Make sure the vcpu mode has been written */
smp_store_mb(vcpu->mode, IN_GUEST_MODE);
kvm_check_vpid(vcpu);
/*
* Must be called after kvm_check_vpid(), since that updates
* CSR.GSTAT (used by kvm_flush_tlb_gpa()) and may also clear
* the pending KVM_REQ_TLB_FLUSH_GPA bit
*/
kvm_late_check_requests(vcpu);
vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
/* Clear KVM_LARCH_SWCSR_LATEST as CSR will change when enter guest */
vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST;
@ -354,6 +418,17 @@ static int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *val)
return -EINVAL;
if (id == LOONGARCH_CSR_ESTAT) {
preempt_disable();
vcpu_load(vcpu);
/*
* Sync pending interrupts into ESTAT so that no interrupt
* is lost during the VM migration stage
*/
kvm_deliver_intr(vcpu);
vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST;
vcpu_put(vcpu);
preempt_enable();
/* ESTAT IP0~IP7 get from GINTC */
gintc = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 0xff;
*val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_ESTAT) | (gintc << 2);
@ -662,6 +737,16 @@ static int kvm_loongarch_cpucfg_has_attr(struct kvm_vcpu *vcpu,
return -ENXIO;
}
static int kvm_loongarch_pvtime_has_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
if (!kvm_pvtime_supported() ||
attr->attr != KVM_LOONGARCH_VCPU_PVTIME_GPA)
return -ENXIO;
return 0;
}
static int kvm_loongarch_vcpu_has_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
@ -671,6 +756,9 @@ static int kvm_loongarch_vcpu_has_attr(struct kvm_vcpu *vcpu,
case KVM_LOONGARCH_VCPU_CPUCFG:
ret = kvm_loongarch_cpucfg_has_attr(vcpu, attr);
break;
case KVM_LOONGARCH_VCPU_PVTIME_CTRL:
ret = kvm_loongarch_pvtime_has_attr(vcpu, attr);
break;
default:
break;
}
@ -678,7 +766,7 @@ static int kvm_loongarch_vcpu_has_attr(struct kvm_vcpu *vcpu,
return ret;
}
static int kvm_loongarch_get_cpucfg_attr(struct kvm_vcpu *vcpu,
static int kvm_loongarch_cpucfg_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
int ret = 0;
@ -694,6 +782,23 @@ static int kvm_loongarch_get_cpucfg_attr(struct kvm_vcpu *vcpu,
return ret;
}
static int kvm_loongarch_pvtime_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
u64 gpa;
u64 __user *user = (u64 __user *)attr->addr;
if (!kvm_pvtime_supported() ||
attr->attr != KVM_LOONGARCH_VCPU_PVTIME_GPA)
return -ENXIO;
gpa = vcpu->arch.st.guest_addr;
if (put_user(gpa, user))
return -EFAULT;
return 0;
}
static int kvm_loongarch_vcpu_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
@ -701,7 +806,10 @@ static int kvm_loongarch_vcpu_get_attr(struct kvm_vcpu *vcpu,
switch (attr->group) {
case KVM_LOONGARCH_VCPU_CPUCFG:
ret = kvm_loongarch_get_cpucfg_attr(vcpu, attr);
ret = kvm_loongarch_cpucfg_get_attr(vcpu, attr);
break;
case KVM_LOONGARCH_VCPU_PVTIME_CTRL:
ret = kvm_loongarch_pvtime_get_attr(vcpu, attr);
break;
default:
break;
@ -716,6 +824,43 @@ static int kvm_loongarch_cpucfg_set_attr(struct kvm_vcpu *vcpu,
return -ENXIO;
}
static int kvm_loongarch_pvtime_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
int idx, ret = 0;
u64 gpa, __user *user = (u64 __user *)attr->addr;
struct kvm *kvm = vcpu->kvm;
if (!kvm_pvtime_supported() ||
attr->attr != KVM_LOONGARCH_VCPU_PVTIME_GPA)
return -ENXIO;
if (get_user(gpa, user))
return -EFAULT;
if (gpa & ~(KVM_STEAL_PHYS_MASK | KVM_STEAL_PHYS_VALID))
return -EINVAL;
if (!(gpa & KVM_STEAL_PHYS_VALID)) {
vcpu->arch.st.guest_addr = gpa;
return 0;
}
/* Check the address is in a valid memslot */
idx = srcu_read_lock(&kvm->srcu);
if (kvm_is_error_hva(gfn_to_hva(kvm, gpa >> PAGE_SHIFT)))
ret = -EINVAL;
srcu_read_unlock(&kvm->srcu, idx);
if (!ret) {
vcpu->arch.st.guest_addr = gpa;
vcpu->arch.st.last_steal = current->sched_info.run_delay;
kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
}
return ret;
}
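For completeness, the attribute handled above is reached through the generic vCPU device-control ioctls. A hedged userspace sketch (the probe/set sequence is the standard kvm_device_attr pattern; the fd handling and the choice of GPA are illustrative):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_pvtime_gpa(int vcpu_fd, uint64_t gpa)
{
    struct kvm_device_attr attr = {
        .group = 1,              /* KVM_LOONGARCH_VCPU_PVTIME_CTRL */
        .attr  = 0,              /* KVM_LOONGARCH_VCPU_PVTIME_GPA */
        .addr  = (uint64_t)&gpa, /* userspace pointer to the u64 GPA */
    };

    /* gpa should carry KVM_STEAL_PHYS_VALID (bit 0) to actually enable it. */
    if (ioctl(vcpu_fd, KVM_HAS_DEVICE_ATTR, &attr))
        return -1;
    return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
}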
static int kvm_loongarch_vcpu_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
@ -725,6 +870,9 @@ static int kvm_loongarch_vcpu_set_attr(struct kvm_vcpu *vcpu,
case KVM_LOONGARCH_VCPU_CPUCFG:
ret = kvm_loongarch_cpucfg_set_attr(vcpu, attr);
break;
case KVM_LOONGARCH_VCPU_PVTIME_CTRL:
ret = kvm_loongarch_pvtime_set_attr(vcpu, attr);
break;
default:
break;
}
@ -994,6 +1142,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
struct loongarch_csrs *csr;
vcpu->arch.vpid = 0;
vcpu->arch.flush_gpa = INVALID_GPA;
hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
vcpu->arch.swtimer.function = kvm_swtimer_wakeup;
@ -1084,6 +1233,7 @@ static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
/* Control guest page CCA attribute */
change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT);
kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
/* Don't bother restoring registers multiple times unless necessary */
if (vcpu->arch.aux_inuse & KVM_LARCH_HWCSR_USABLE)
@ -1266,7 +1416,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
kvm_complete_iocsr_read(vcpu, run);
}
if (run->immediate_exit)
if (!vcpu->wants_to_run)
return r;
/* Clear exit_reason */


@ -890,7 +890,6 @@ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_free_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}


@ -434,7 +434,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
vcpu->mmio_needed = 0;
}
if (vcpu->run->immediate_exit)
if (!vcpu->wants_to_run)
goto out;
lose_fpu(1);


@ -900,7 +900,6 @@ struct kvm_vcpu_arch {
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}


@ -1852,7 +1852,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
kvm_sigset_activate(vcpu);
if (run->immediate_exit)
if (!vcpu->wants_to_run)
r = -EINTR;
else
r = kvmppc_vcpu_run(vcpu);


@ -1,58 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2021 Western Digital Corporation or its affiliates.
* Copyright (C) 2022 Ventana Micro Systems Inc.
*/
#ifndef __KVM_RISCV_AIA_IMSIC_H
#define __KVM_RISCV_AIA_IMSIC_H
#include <linux/bitops.h>
#define APLIC_MAX_IDC BIT(14)
#define APLIC_MAX_SOURCE 1024
#define APLIC_DOMAINCFG 0x0000
#define APLIC_DOMAINCFG_RDONLY 0x80000000
#define APLIC_DOMAINCFG_IE BIT(8)
#define APLIC_DOMAINCFG_DM BIT(2)
#define APLIC_DOMAINCFG_BE BIT(0)
#define APLIC_SOURCECFG_BASE 0x0004
#define APLIC_SOURCECFG_D BIT(10)
#define APLIC_SOURCECFG_CHILDIDX_MASK 0x000003ff
#define APLIC_SOURCECFG_SM_MASK 0x00000007
#define APLIC_SOURCECFG_SM_INACTIVE 0x0
#define APLIC_SOURCECFG_SM_DETACH 0x1
#define APLIC_SOURCECFG_SM_EDGE_RISE 0x4
#define APLIC_SOURCECFG_SM_EDGE_FALL 0x5
#define APLIC_SOURCECFG_SM_LEVEL_HIGH 0x6
#define APLIC_SOURCECFG_SM_LEVEL_LOW 0x7
#define APLIC_IRQBITS_PER_REG 32
#define APLIC_SETIP_BASE 0x1c00
#define APLIC_SETIPNUM 0x1cdc
#define APLIC_CLRIP_BASE 0x1d00
#define APLIC_CLRIPNUM 0x1ddc
#define APLIC_SETIE_BASE 0x1e00
#define APLIC_SETIENUM 0x1edc
#define APLIC_CLRIE_BASE 0x1f00
#define APLIC_CLRIENUM 0x1fdc
#define APLIC_SETIPNUM_LE 0x2000
#define APLIC_SETIPNUM_BE 0x2004
#define APLIC_GENMSI 0x3000
#define APLIC_TARGET_BASE 0x3004
#define APLIC_TARGET_HART_IDX_SHIFT 18
#define APLIC_TARGET_HART_IDX_MASK 0x3fff
#define APLIC_TARGET_GUEST_IDX_SHIFT 12
#define APLIC_TARGET_GUEST_IDX_MASK 0x3f
#define APLIC_TARGET_IPRIO_MASK 0xff
#define APLIC_TARGET_EIID_MASK 0x7ff
#endif


@ -1,38 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (C) 2021 Western Digital Corporation or its affiliates.
* Copyright (C) 2022 Ventana Micro Systems Inc.
*/
#ifndef __KVM_RISCV_AIA_IMSIC_H
#define __KVM_RISCV_AIA_IMSIC_H
#include <linux/types.h>
#include <asm/csr.h>
#define IMSIC_MMIO_PAGE_SHIFT 12
#define IMSIC_MMIO_PAGE_SZ (1UL << IMSIC_MMIO_PAGE_SHIFT)
#define IMSIC_MMIO_PAGE_LE 0x00
#define IMSIC_MMIO_PAGE_BE 0x04
#define IMSIC_MIN_ID 63
#define IMSIC_MAX_ID 2048
#define IMSIC_EIDELIVERY 0x70
#define IMSIC_EITHRESHOLD 0x72
#define IMSIC_EIP0 0x80
#define IMSIC_EIP63 0xbf
#define IMSIC_EIPx_BITS 32
#define IMSIC_EIE0 0xc0
#define IMSIC_EIE63 0xff
#define IMSIC_EIEx_BITS 32
#define IMSIC_FIRST IMSIC_EIDELIVERY
#define IMSIC_LAST IMSIC_EIE63
#define IMSIC_MMIO_SETIPNUM_LE 0x00
#define IMSIC_MMIO_SETIPNUM_BE 0x04
#endif


@ -287,7 +287,6 @@ struct kvm_vcpu_arch {
};
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER 12


@ -10,12 +10,12 @@
#include <linux/kernel.h>
#include <linux/bitops.h>
#include <linux/irq.h>
#include <linux/irqchip/riscv-imsic.h>
#include <linux/irqdomain.h>
#include <linux/kvm_host.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <asm/cpufeature.h>
#include <asm/kvm_aia_imsic.h>
struct aia_hgei_control {
raw_spinlock_t lock;
@ -394,6 +394,8 @@ int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
{
int ret = -ENOENT;
unsigned long flags;
const struct imsic_global_config *gc;
const struct imsic_local_config *lc;
struct aia_hgei_control *hgctrl = per_cpu_ptr(&aia_hgei, cpu);
if (!kvm_riscv_aia_available() || !hgctrl)
@ -409,11 +411,14 @@ int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
/* TODO: To be updated later by AIA IMSIC HW guest file support */
if (hgei_va)
*hgei_va = NULL;
if (hgei_pa)
*hgei_pa = 0;
gc = imsic_get_global_config();
lc = (gc) ? per_cpu_ptr(gc->local, cpu) : NULL;
if (lc && ret > 0) {
if (hgei_va)
*hgei_va = lc->msi_va + (ret * IMSIC_MMIO_PAGE_SZ);
if (hgei_pa)
*hgei_pa = lc->msi_pa + (ret * IMSIC_MMIO_PAGE_SZ);
}
return ret;
}
@ -605,9 +610,11 @@ void kvm_riscv_aia_disable(void)
int kvm_riscv_aia_init(void)
{
int rc;
const struct imsic_global_config *gc;
if (!riscv_isa_extension_available(NULL, SxAIA))
return -ENODEV;
gc = imsic_get_global_config();
/* Figure-out number of bits in HGEIE */
csr_write(CSR_HGEIE, -1UL);
@ -619,17 +626,17 @@ int kvm_riscv_aia_init(void)
/*
* Number of usable HGEI lines should be minimum of per-HART
* IMSIC guest files and number of bits in HGEIE
*
* TODO: To be updated later by AIA IMSIC HW guest file support
*/
kvm_riscv_aia_nr_hgei = 0;
if (gc)
kvm_riscv_aia_nr_hgei = min((ulong)kvm_riscv_aia_nr_hgei,
BIT(gc->guest_index_bits) - 1);
else
kvm_riscv_aia_nr_hgei = 0;
/*
* Find number of guest MSI IDs
*
* TODO: To be updated later by AIA IMSIC HW guest file support
*/
/* Find number of guest MSI IDs */
kvm_riscv_aia_max_ids = IMSIC_MAX_ID;
if (gc && kvm_riscv_aia_nr_hgei)
kvm_riscv_aia_max_ids = gc->nr_guest_ids + 1;
/* Initialize guest external interrupt line management */
rc = aia_hgei_init();


@ -7,12 +7,12 @@
* Anup Patel <apatel@ventanamicro.com>
*/
#include <linux/irqchip/riscv-aplic.h>
#include <linux/kvm_host.h>
#include <linux/math.h>
#include <linux/spinlock.h>
#include <linux/swab.h>
#include <kvm/iodev.h>
#include <asm/kvm_aia_aplic.h>
struct aplic_irq {
raw_spinlock_t lock;


@ -8,9 +8,9 @@
*/
#include <linux/bits.h>
#include <linux/irqchip/riscv-imsic.h>
#include <linux/kvm_host.h>
#include <linux/uaccess.h>
#include <asm/kvm_aia_imsic.h>
static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
{


@ -9,13 +9,13 @@
#include <linux/atomic.h>
#include <linux/bitmap.h>
#include <linux/irqchip/riscv-imsic.h>
#include <linux/kvm_host.h>
#include <linux/math.h>
#include <linux/spinlock.h>
#include <linux/swab.h>
#include <kvm/iodev.h>
#include <asm/csr.h>
#include <asm/kvm_aia_imsic.h>
#define IMSIC_MAX_EIX (IMSIC_MAX_ID / BITS_PER_TYPE(u64))

arch/riscv/kvm/trace.h (new file, 67 lines)

@ -0,0 +1,67 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Tracepoints for RISC-V KVM
*
* Copyright 2024 Beijing ESWIN Computing Technology Co., Ltd.
*
*/
#if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_KVM_H
#include <linux/tracepoint.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm
TRACE_EVENT(kvm_entry,
TP_PROTO(struct kvm_vcpu *vcpu),
TP_ARGS(vcpu),
TP_STRUCT__entry(
__field(unsigned long, pc)
),
TP_fast_assign(
__entry->pc = vcpu->arch.guest_context.sepc;
),
TP_printk("PC: 0x016%lx", __entry->pc)
);
TRACE_EVENT(kvm_exit,
TP_PROTO(struct kvm_cpu_trap *trap),
TP_ARGS(trap),
TP_STRUCT__entry(
__field(unsigned long, sepc)
__field(unsigned long, scause)
__field(unsigned long, stval)
__field(unsigned long, htval)
__field(unsigned long, htinst)
),
TP_fast_assign(
__entry->sepc = trap->sepc;
__entry->scause = trap->scause;
__entry->stval = trap->stval;
__entry->htval = trap->htval;
__entry->htinst = trap->htinst;
),
TP_printk("SEPC:0x%lx, SCAUSE:0x%lx, STVAL:0x%lx, HTVAL:0x%lx, HTINST:0x%lx",
__entry->sepc,
__entry->scause,
__entry->stval,
__entry->htval,
__entry->htinst)
);
#endif /* _TRACE_KVM_H */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace
/* This part must be outside protection */
#include <trace/define_trace.h>


@ -21,6 +21,9 @@
#include <asm/cacheflush.h>
#include <asm/kvm_vcpu_vector.h>
#define CREATE_TRACE_POINTS
#include "trace.h"
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, ecall_exit_stat),
@ -761,7 +764,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
return ret;
}
if (run->immediate_exit) {
if (!vcpu->wants_to_run) {
kvm_vcpu_srcu_read_unlock(vcpu);
return -EINTR;
}
@ -832,6 +835,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
*/
kvm_riscv_local_tlb_sanitize(vcpu);
trace_kvm_entry(vcpu);
guest_timing_enter_irqoff();
kvm_riscv_vcpu_enter_exit(vcpu);
@ -870,6 +875,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
local_irq_enable();
trace_kvm_exit(&trap);
preempt_enable();
kvm_vcpu_srcu_read_lock(vcpu);


@ -185,6 +185,8 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
case EXC_INST_ILLEGAL:
case EXC_LOAD_MISALIGNED:
case EXC_STORE_MISALIGNED:
case EXC_LOAD_ACCESS:
case EXC_STORE_ACCESS:
if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) {
kvm_riscv_vcpu_trap_redirect(vcpu, trap);
ret = 1;


@ -15,7 +15,6 @@
#include <linux/hrtimer.h>
#include <linux/interrupt.h>
#include <linux/kvm_types.h>
#include <linux/kvm_host.h>
#include <linux/kvm.h>
#include <linux/seqlock.h>
#include <linux/module.h>
@ -1047,7 +1046,6 @@ extern int kvm_s390_gisc_register(struct kvm *kvm, u32 gisc);
extern int kvm_s390_gisc_unregister(struct kvm *kvm, u32 gisc);
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_free_memslot(struct kvm *kvm,
struct kvm_memory_slot *slot) {}
static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}


@ -2997,14 +2997,9 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
break;
}
case KVM_CREATE_IRQCHIP: {
struct kvm_irq_routing_entry routing;
r = -EINVAL;
if (kvm->arch.use_irqchip) {
/* Set up dummy routing. */
memset(&routing, 0, sizeof(routing));
r = kvm_set_irq_routing(kvm, &routing, 0, 0);
}
if (kvm->arch.use_irqchip)
r = 0;
break;
}
case KVM_SET_DEVICE_ATTR: {
@ -5033,7 +5028,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
if (vcpu->kvm->arch.pv.dumping)
return -EINVAL;
if (kvm_run->immediate_exit)
if (!vcpu->wants_to_run)
return -EINTR;
if (kvm_run->kvm_valid_regs & ~KVM_SYNC_S390_VALID_FIELDS ||
@ -5750,6 +5745,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
{
gpa_t size;
if (kvm_is_ucontrol(kvm))
return -EINVAL;
/* When we are protected, we should not change the memory slots */
if (kvm_s390_pv_get_handle(kvm))
return -EINVAL;


@ -1304,10 +1304,24 @@ static int vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
if (rc == -EAGAIN)
rc = 0;
if (rc || scb_s->icptcode || signal_pending(current) ||
kvm_s390_vcpu_has_irq(vcpu, 0) ||
kvm_s390_vcpu_sie_inhibited(vcpu))
/*
* Exit the loop if the guest needs to process the intercept
*/
if (rc || scb_s->icptcode)
break;
/*
* Exit the loop if the host needs to process an intercept,
* but rewind the PSW to re-enter SIE once that's completed
* instead of passing a "no action" intercept to the guest.
*/
if (signal_pending(current) ||
kvm_s390_vcpu_has_irq(vcpu, 0) ||
kvm_s390_vcpu_sie_inhibited(vcpu)) {
kvm_s390_rewind_psw(vcpu, 4);
break;
}
cond_resched();
}
@ -1426,8 +1440,10 @@ int kvm_s390_handle_vsie(struct kvm_vcpu *vcpu)
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
if (signal_pending(current) || kvm_s390_vcpu_has_irq(vcpu, 0) ||
kvm_s390_vcpu_sie_inhibited(vcpu))
kvm_s390_vcpu_sie_inhibited(vcpu)) {
kvm_s390_rewind_psw(vcpu, 4);
return 0;
}
vsie_page = get_vsie_page(vcpu->kvm, scb_addr);
if (IS_ERR(vsie_page))


@ -9,8 +9,7 @@ BUILD_BUG_ON(1)
* "static_call_update()" calls.
*
* KVM_X86_OP_OPTIONAL() can be used for those functions that can have
* a NULL definition, for example if "static_call_cond()" will be used
* at the call sites. KVM_X86_OP_OPTIONAL_RET0() can be used likewise
* a NULL definition. KVM_X86_OP_OPTIONAL_RET0() can be used likewise
* to make a definition optional, but in this case the default will
* be __static_call_return0.
*/
@ -85,7 +84,6 @@ KVM_X86_OP_OPTIONAL(update_cr8_intercept)
KVM_X86_OP(refresh_apicv_exec_ctrl)
KVM_X86_OP_OPTIONAL(hwapic_irr_update)
KVM_X86_OP_OPTIONAL(hwapic_isr_update)
KVM_X86_OP_OPTIONAL_RET0(guest_apic_has_interrupt)
KVM_X86_OP_OPTIONAL(load_eoi_exitmap)
KVM_X86_OP_OPTIONAL(set_virtual_apic_mode)
KVM_X86_OP_OPTIONAL(set_apic_access_page_addr)
@ -103,7 +101,6 @@ KVM_X86_OP(write_tsc_multiplier)
KVM_X86_OP(get_exit_info)
KVM_X86_OP(check_intercept)
KVM_X86_OP(handle_exit_irqoff)
KVM_X86_OP(sched_in)
KVM_X86_OP_OPTIONAL(update_cpu_dirty_logging)
KVM_X86_OP_OPTIONAL(vcpu_blocking)
KVM_X86_OP_OPTIONAL(vcpu_unblocking)
@ -139,6 +136,9 @@ KVM_X86_OP(vcpu_deliver_sipi_vector)
KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
KVM_X86_OP_OPTIONAL(get_untagged_addr)
KVM_X86_OP_OPTIONAL(alloc_apic_backing_page)
KVM_X86_OP_OPTIONAL_RET0(gmem_prepare)
KVM_X86_OP_OPTIONAL_RET0(private_max_mapping_level)
KVM_X86_OP_OPTIONAL(gmem_invalidate)
#undef KVM_X86_OP
#undef KVM_X86_OP_OPTIONAL


@ -9,8 +9,7 @@ BUILD_BUG_ON(1)
* "static_call_update()" calls.
*
* KVM_X86_PMU_OP_OPTIONAL() can be used for those functions that can have
* a NULL definition, for example if "static_call_cond()" will be used
* at the call sites.
* a NULL definition.
*/
KVM_X86_PMU_OP(rdpmc_ecx_to_pmc)
KVM_X86_PMU_OP(msr_idx_to_pmc)


@ -121,6 +121,7 @@
KVM_ARCH_REQ_FLAGS(31, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_HV_TLB_FLUSH \
KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
#define KVM_REQ_UPDATE_PROTECTED_GUEST_STATE KVM_ARCH_REQ(34)
#define CR0_RESERVED_BITS \
(~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \
@ -159,7 +160,6 @@
#define KVM_MIN_FREE_MMU_PAGES 5
#define KVM_REFILL_PAGES 25
#define KVM_MAX_CPUID_ENTRIES 256
#define KVM_NR_FIXED_MTRR_REGION 88
#define KVM_NR_VAR_MTRR 8
#define ASYNC_PF_PER_VCPU 64
@ -533,12 +533,16 @@ struct kvm_pmc {
};
/* More counters may conflict with other existing Architectural MSRs */
#define KVM_INTEL_PMC_MAX_GENERIC 8
#define MSR_ARCH_PERFMON_PERFCTR_MAX (MSR_ARCH_PERFMON_PERFCTR0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
#define MSR_ARCH_PERFMON_EVENTSEL_MAX (MSR_ARCH_PERFMON_EVENTSEL0 + KVM_INTEL_PMC_MAX_GENERIC - 1)
#define KVM_PMC_MAX_FIXED 3
#define MSR_ARCH_PERFMON_FIXED_CTR_MAX (MSR_ARCH_PERFMON_FIXED_CTR0 + KVM_PMC_MAX_FIXED - 1)
#define KVM_AMD_PMC_MAX_GENERIC 6
#define KVM_MAX(a, b) ((a) >= (b) ? (a) : (b))
#define KVM_MAX_NR_INTEL_GP_COUNTERS 8
#define KVM_MAX_NR_AMD_GP_COUNTERS 6
#define KVM_MAX_NR_GP_COUNTERS KVM_MAX(KVM_MAX_NR_INTEL_GP_COUNTERS, \
KVM_MAX_NR_AMD_GP_COUNTERS)
#define KVM_MAX_NR_INTEL_FIXED_COUTNERS 3
#define KVM_MAX_NR_AMD_FIXED_COUTNERS 0
#define KVM_MAX_NR_FIXED_COUNTERS KVM_MAX(KVM_MAX_NR_INTEL_FIXED_COUTNERS, \
KVM_MAX_NR_AMD_FIXED_COUTNERS)
struct kvm_pmu {
u8 version;
@ -546,16 +550,16 @@ struct kvm_pmu {
unsigned nr_arch_fixed_counters;
unsigned available_event_types;
u64 fixed_ctr_ctrl;
u64 fixed_ctr_ctrl_mask;
u64 fixed_ctr_ctrl_rsvd;
u64 global_ctrl;
u64 global_status;
u64 counter_bitmask[2];
u64 global_ctrl_mask;
u64 global_status_mask;
u64 global_ctrl_rsvd;
u64 global_status_rsvd;
u64 reserved_bits;
u64 raw_event_mask;
struct kvm_pmc gp_counters[KVM_INTEL_PMC_MAX_GENERIC];
struct kvm_pmc fixed_counters[KVM_PMC_MAX_FIXED];
struct kvm_pmc gp_counters[KVM_MAX_NR_GP_COUNTERS];
struct kvm_pmc fixed_counters[KVM_MAX_NR_FIXED_COUNTERS];
/*
* Overlay the bitmap with a 64-bit atomic so that all bits can be
@ -571,9 +575,9 @@ struct kvm_pmu {
u64 ds_area;
u64 pebs_enable;
u64 pebs_enable_mask;
u64 pebs_enable_rsvd;
u64 pebs_data_cfg;
u64 pebs_data_cfg_mask;
u64 pebs_data_cfg_rsvd;
/*
* If a guest counter is cross-mapped to host counter with different
@ -604,18 +608,12 @@ enum {
KVM_DEBUGREG_WONT_EXIT = 2,
};
struct kvm_mtrr_range {
u64 base;
u64 mask;
struct list_head node;
};
struct kvm_mtrr {
struct kvm_mtrr_range var_ranges[KVM_NR_VAR_MTRR];
mtrr_type fixed_ranges[KVM_NR_FIXED_MTRR_REGION];
u64 var[KVM_NR_VAR_MTRR * 2];
u64 fixed_64k;
u64 fixed_16k[2];
u64 fixed_4k[8];
u64 deftype;
struct list_head head;
};
/* Hyper-V SynIC timer */
@ -1207,7 +1205,7 @@ enum kvm_apicv_inhibit {
* APIC acceleration is disabled by a module parameter
* and/or not supported in hardware.
*/
APICV_INHIBIT_REASON_DISABLE,
APICV_INHIBIT_REASON_DISABLED,
/*
* APIC acceleration is inhibited because AutoEOI feature is
@ -1277,8 +1275,27 @@ enum kvm_apicv_inhibit {
* mapping between logical ID and vCPU.
*/
APICV_INHIBIT_REASON_LOGICAL_ID_ALIASED,
NR_APICV_INHIBIT_REASONS,
};
#define __APICV_INHIBIT_REASON(reason) \
{ BIT(APICV_INHIBIT_REASON_##reason), #reason }
#define APICV_INHIBIT_REASONS \
__APICV_INHIBIT_REASON(DISABLED), \
__APICV_INHIBIT_REASON(HYPERV), \
__APICV_INHIBIT_REASON(ABSENT), \
__APICV_INHIBIT_REASON(BLOCKIRQ), \
__APICV_INHIBIT_REASON(PHYSICAL_ID_ALIASED), \
__APICV_INHIBIT_REASON(APIC_ID_MODIFIED), \
__APICV_INHIBIT_REASON(APIC_BASE_MODIFIED), \
__APICV_INHIBIT_REASON(NESTED), \
__APICV_INHIBIT_REASON(IRQWIN), \
__APICV_INHIBIT_REASON(PIT_REINJ), \
__APICV_INHIBIT_REASON(SEV), \
__APICV_INHIBIT_REASON(LOGICAL_ID_ALIASED)
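
The table built by APICV_INHIBIT_REASONS above is a list of { mask, "name" }
pairs, which is the format the tracing core's __print_flags() helper consumes;
that is presumably how KVM's tracepoints can print inhibit names rather than
raw bit values. A small standalone stand-in for the same pattern (the bit
positions here are illustrative only, the real ones come from enum
kvm_apicv_inhibit):

#include <stdio.h>
#include <stddef.h>

static const struct { unsigned long mask; const char *name; } inhibit_names[] = {
	{ 1UL << 0, "DISABLED" },
	{ 1UL << 1, "HYPERV" },
	{ 1UL << 3, "BLOCKIRQ" },
};

static void print_inhibits(unsigned long inhibits)
{
	const char *sep = "";
	size_t i;

	/* Print the name of every inhibit bit set in @inhibits, '|'-separated. */
	for (i = 0; i < sizeof(inhibit_names) / sizeof(inhibit_names[0]); i++) {
		if (inhibits & inhibit_names[i].mask) {
			printf("%s%s", sep, inhibit_names[i].name);
			sep = "|";
		}
	}
	printf("\n");
}

int main(void)
{
	print_inhibits((1UL << 0) | (1UL << 3));	/* prints DISABLED|BLOCKIRQ */
	return 0;
}
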
struct kvm_arch {
unsigned long n_used_mmu_pages;
unsigned long n_requested_mmu_pages;
@ -1364,6 +1381,7 @@ struct kvm_arch {
u32 default_tsc_khz;
bool user_set_tsc;
u64 apic_bus_cycle_ns;
seqcount_raw_spinlock_t pvclock_sc;
bool use_master_clock;
@ -1708,13 +1726,11 @@ struct kvm_x86_ops {
void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
void (*enable_irq_window)(struct kvm_vcpu *vcpu);
void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
bool (*check_apicv_inhibit_reasons)(enum kvm_apicv_inhibit reason);
const unsigned long required_apicv_inhibits;
bool allow_apicv_in_x2apic_without_x2apic_virtualization;
void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
void (*hwapic_isr_update)(int isr);
bool (*guest_apic_has_interrupt)(struct kvm_vcpu *vcpu);
void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu);
@ -1749,8 +1765,6 @@ struct kvm_x86_ops {
struct x86_exception *exception);
void (*handle_exit_irqoff)(struct kvm_vcpu *vcpu);
void (*sched_in)(struct kvm_vcpu *vcpu, int cpu);
/*
* Size of the CPU's dirty log buffer, i.e. VMX's PML buffer. A zero
* value indicates CPU dirty logging is unsupported or disabled.
@ -1812,6 +1826,9 @@ struct kvm_x86_ops {
gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
void *(*alloc_apic_backing_page)(struct kvm_vcpu *vcpu);
int (*gmem_prepare)(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
void (*gmem_invalidate)(kvm_pfn_t start, kvm_pfn_t end);
int (*private_max_mapping_level)(struct kvm *kvm, kvm_pfn_t pfn);
};
struct kvm_x86_nested_ops {
@ -1819,7 +1836,7 @@ struct kvm_x86_nested_ops {
bool (*is_exception_vmexit)(struct kvm_vcpu *vcpu, u8 vector,
u32 error_code);
int (*check_events)(struct kvm_vcpu *vcpu);
bool (*has_events)(struct kvm_vcpu *vcpu);
bool (*has_events)(struct kvm_vcpu *vcpu, bool for_injection);
void (*triple_fault)(struct kvm_vcpu *vcpu);
int (*get_state)(struct kvm_vcpu *vcpu,
struct kvm_nested_state __user *user_kvm_nested_state,
@ -1853,11 +1870,13 @@ struct kvm_arch_async_pf {
};
extern u32 __read_mostly kvm_nr_uret_msrs;
extern u64 __read_mostly host_efer;
extern bool __read_mostly allow_smaller_maxphyaddr;
extern bool __read_mostly enable_apicv;
extern struct kvm_x86_ops kvm_x86_ops;
#define kvm_x86_call(func) static_call(kvm_x86_##func)
#define kvm_pmu_call(func) static_call(kvm_x86_pmu_##func)
#define KVM_X86_OP(func) \
DECLARE_STATIC_CALL(kvm_x86_##func, *(((struct kvm_x86_ops *)0)->func));
#define KVM_X86_OP_OPTIONAL KVM_X86_OP
@ -1881,7 +1900,7 @@ void kvm_arch_free_vm(struct kvm *kvm);
static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
{
if (kvm_x86_ops.flush_remote_tlbs &&
!static_call(kvm_x86_flush_remote_tlbs)(kvm))
!kvm_x86_call(flush_remote_tlbs)(kvm))
return 0;
else
return -ENOTSUPP;
@ -1894,7 +1913,7 @@ static inline int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn,
if (!kvm_x86_ops.flush_remote_tlbs_range)
return -EOPNOTSUPP;
return static_call(kvm_x86_flush_remote_tlbs_range)(kvm, gfn, nr_pages);
return kvm_x86_call(flush_remote_tlbs_range)(kvm, gfn, nr_pages);
}
#endif /* CONFIG_HYPERV */
@ -1939,6 +1958,7 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
const struct kvm_memory_slot *memslot);
void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
@ -2292,12 +2312,12 @@ static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
{
static_call_cond(kvm_x86_vcpu_blocking)(vcpu);
kvm_x86_call(vcpu_blocking)(vcpu);
}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
{
static_call_cond(kvm_x86_vcpu_unblocking)(vcpu);
kvm_x86_call(vcpu_unblocking)(vcpu);
}
static inline int kvm_cpu_get_apicid(int mps_cpu)


@ -59,6 +59,14 @@
#define GHCB_MSR_AP_RESET_HOLD_RESULT_POS 12
#define GHCB_MSR_AP_RESET_HOLD_RESULT_MASK GENMASK_ULL(51, 0)
/* Preferred GHCB GPA Request */
#define GHCB_MSR_PREF_GPA_REQ 0x010
#define GHCB_MSR_GPA_VALUE_POS 12
#define GHCB_MSR_GPA_VALUE_MASK GENMASK_ULL(51, 0)
#define GHCB_MSR_PREF_GPA_RESP 0x011
#define GHCB_MSR_PREF_GPA_NONE 0xfffffffffffff
/* GHCB GPA Register */
#define GHCB_MSR_REG_GPA_REQ 0x012
#define GHCB_MSR_REG_GPA_REQ_VAL(v) \
@ -93,11 +101,17 @@ enum psc_op {
/* GHCBData[11:0] */ \
GHCB_MSR_PSC_REQ)
#define GHCB_MSR_PSC_REQ_TO_GFN(msr) (((msr) & GENMASK_ULL(51, 12)) >> 12)
#define GHCB_MSR_PSC_REQ_TO_OP(msr) (((msr) & GENMASK_ULL(55, 52)) >> 52)
#define GHCB_MSR_PSC_RESP 0x015
#define GHCB_MSR_PSC_RESP_VAL(val) \
/* GHCBData[63:32] */ \
(((u64)(val) & GENMASK_ULL(63, 32)) >> 32)
/* Set highest bit as a generic error response */
#define GHCB_MSR_PSC_RESP_ERROR (BIT_ULL(63) | GHCB_MSR_PSC_RESP)
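
To make the GHCBData layout concrete: in a Page State Change request, bits
[11:0] carry the request code, bits [51:12] the GFN, and bits [55:52] the
operation, which is exactly what the _TO_GFN()/_TO_OP() helpers above undo.
A standalone sketch of the round trip, assuming the 0x014 request code and the
private/shared op values match the definitions in this header:

#include <stdint.h>
#include <stdio.h>

/* Pack a PSC request the way a guest would before writing the GHCB MSR:
 * GHCBData[55:52] = op, GHCBData[51:12] = gfn, GHCBData[11:0] = 0x014. */
static uint64_t psc_req_encode(uint64_t gfn, unsigned int op)
{
	return ((uint64_t)(op & 0xf) << 52) |
	       ((gfn & ((1ULL << 40) - 1)) << 12) |
	       0x014;
}

int main(void)
{
	uint64_t msr = psc_req_encode(0x12345, 2 /* shared */);

	/* Same extraction as GHCB_MSR_PSC_REQ_TO_GFN()/_TO_OP() above. */
	uint64_t gfn = (msr >> 12) & ((1ULL << 40) - 1);
	unsigned int op = (msr >> 52) & 0xf;

	printf("msr=0x%llx gfn=0x%llx op=%u\n",
	       (unsigned long long)msr, (unsigned long long)gfn, op);
	return 0;
}
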
/* GHCB Run at VMPL Request/Response */
#define GHCB_MSR_VMPL_REQ 0x016
#define GHCB_MSR_VMPL_REQ_LEVEL(v) \
@ -129,8 +143,19 @@ enum psc_op {
* The VMGEXIT_PSC_MAX_ENTRY determines the size of the PSC structure, which
* is a local stack variable in set_pages_state(). Do not increase this value
* without evaluating the impact to stack usage.
*
* Use VMGEXIT_PSC_MAX_COUNT in cases where the actual GHCB-defined max value
* is needed, such as when processing GHCB requests on the hypervisor side.
*/
#define VMGEXIT_PSC_MAX_ENTRY 64
#define VMGEXIT_PSC_MAX_COUNT 253
#define VMGEXIT_PSC_ERROR_GENERIC (0x100UL << 32)
#define VMGEXIT_PSC_ERROR_INVALID_HDR ((1UL << 32) | 1)
#define VMGEXIT_PSC_ERROR_INVALID_ENTRY ((1UL << 32) | 2)
#define VMGEXIT_PSC_OP_PRIVATE 1
#define VMGEXIT_PSC_OP_SHARED 2
struct psc_hdr {
u16 cur_entry;


@ -91,6 +91,9 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
/* RMPUPDATE detected 4K page and 2MB page overlap. */
#define RMPUPDATE_FAIL_OVERLAP 4
/* PSMASH failed due to concurrent access by another CPU */
#define PSMASH_FAIL_INUSE 3
/* RMP page size */
#define RMP_PG_SIZE_4K 0
#define RMP_PG_SIZE_2M 1
@ -116,6 +119,54 @@ struct snp_req_data {
unsigned int data_npages;
};
#define MAX_AUTHTAG_LEN 32
/* See SNP spec SNP_GUEST_REQUEST section for the structure */
enum msg_type {
SNP_MSG_TYPE_INVALID = 0,
SNP_MSG_CPUID_REQ,
SNP_MSG_CPUID_RSP,
SNP_MSG_KEY_REQ,
SNP_MSG_KEY_RSP,
SNP_MSG_REPORT_REQ,
SNP_MSG_REPORT_RSP,
SNP_MSG_EXPORT_REQ,
SNP_MSG_EXPORT_RSP,
SNP_MSG_IMPORT_REQ,
SNP_MSG_IMPORT_RSP,
SNP_MSG_ABSORB_REQ,
SNP_MSG_ABSORB_RSP,
SNP_MSG_VMRK_REQ,
SNP_MSG_VMRK_RSP,
SNP_MSG_TYPE_MAX
};
enum aead_algo {
SNP_AEAD_INVALID,
SNP_AEAD_AES_256_GCM,
};
struct snp_guest_msg_hdr {
u8 authtag[MAX_AUTHTAG_LEN];
u64 msg_seqno;
u8 rsvd1[8];
u8 algo;
u8 hdr_version;
u16 hdr_sz;
u8 msg_type;
u8 msg_version;
u16 msg_sz;
u32 rsvd2;
u8 msg_vmpck;
u8 rsvd3[35];
} __packed;
struct snp_guest_msg {
struct snp_guest_msg_hdr hdr;
u8 payload[4000];
} __packed;
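
A quick arithmetic check on the message layout above: the header fields add up
to 32 + 8 + 8 + 1 + 1 + 2 + 1 + 1 + 2 + 4 + 1 + 35 = 96 bytes, so together with
the 4000-byte payload the packed message is exactly 4096 bytes, i.e. one page.
If desired, that invariant can be pinned down with compile-time asserts placed
next to the definitions (assuming the structs above are in scope and
static_assert() is available, e.g. via <linux/build_bug.h>):

static_assert(sizeof(struct snp_guest_msg_hdr) == 96,
	      "SNP guest message header must be 96 bytes");
static_assert(sizeof(struct snp_guest_msg) == 4096,
	      "SNP guest message must span exactly one 4K page");
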
struct sev_guest_platform_data {
u64 secrets_gpa;
};


@ -285,7 +285,14 @@ static_assert((X2AVIC_MAX_PHYSICAL_ID & AVIC_PHYSICAL_MAX_INDEX_MASK) == X2AVIC_
#define AVIC_HPA_MASK ~((0xFFFULL << 52) | 0xFFF)
#define SVM_SEV_FEAT_DEBUG_SWAP BIT(5)
#define SVM_SEV_FEAT_SNP_ACTIVE BIT(0)
#define SVM_SEV_FEAT_RESTRICTED_INJECTION BIT(3)
#define SVM_SEV_FEAT_ALTERNATE_INJECTION BIT(4)
#define SVM_SEV_FEAT_DEBUG_SWAP BIT(5)
#define SVM_SEV_FEAT_INT_INJ_MODES \
(SVM_SEV_FEAT_RESTRICTED_INJECTION | \
SVM_SEV_FEAT_ALTERNATE_INJECTION)
struct vmcb_seg {
u16 selector;


@ -106,6 +106,7 @@ struct kvm_ioapic_state {
#define KVM_RUN_X86_SMM (1 << 0)
#define KVM_RUN_X86_BUS_LOCK (1 << 1)
#define KVM_RUN_X86_GUEST_MODE (1 << 2)
/* for KVM_GET_REGS and KVM_SET_REGS */
struct kvm_regs {
@ -697,6 +698,11 @@ enum sev_cmd_id {
/* Second time is the charm; improved versions of the above ioctls. */
KVM_SEV_INIT2,
/* SNP-specific commands */
KVM_SEV_SNP_LAUNCH_START = 100,
KVM_SEV_SNP_LAUNCH_UPDATE,
KVM_SEV_SNP_LAUNCH_FINISH,
KVM_SEV_NR_MAX,
};
@ -824,6 +830,48 @@ struct kvm_sev_receive_update_data {
__u32 pad2;
};
struct kvm_sev_snp_launch_start {
__u64 policy;
__u8 gosvw[16];
__u16 flags;
__u8 pad0[6];
__u64 pad1[4];
};
/* Kept in sync with firmware values for simplicity. */
#define KVM_SEV_SNP_PAGE_TYPE_NORMAL 0x1
#define KVM_SEV_SNP_PAGE_TYPE_ZERO 0x3
#define KVM_SEV_SNP_PAGE_TYPE_UNMEASURED 0x4
#define KVM_SEV_SNP_PAGE_TYPE_SECRETS 0x5
#define KVM_SEV_SNP_PAGE_TYPE_CPUID 0x6
struct kvm_sev_snp_launch_update {
__u64 gfn_start;
__u64 uaddr;
__u64 len;
__u8 type;
__u8 pad0;
__u16 flags;
__u32 pad1;
__u64 pad2[4];
};
#define KVM_SEV_SNP_ID_BLOCK_SIZE 96
#define KVM_SEV_SNP_ID_AUTH_SIZE 4096
#define KVM_SEV_SNP_FINISH_DATA_SIZE 32
struct kvm_sev_snp_launch_finish {
__u64 id_block_uaddr;
__u64 id_auth_uaddr;
__u8 id_block_en;
__u8 auth_key_en;
__u8 vcek_disabled;
__u8 host_data[KVM_SEV_SNP_FINISH_DATA_SIZE];
__u8 pad0[3];
__u16 flags;
__u64 pad1[4];
};
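
The launch structs above are passed through the existing KVM_MEMORY_ENCRYPT_OP
vm ioctl like the other KVM_SEV_* commands. A minimal userspace sketch of the
flow, under the assumption that the VM was created with type KVM_X86_SNP_VM and
initialized with KVM_SEV_INIT2 beforehand; the 0x30000 policy (SMT allowed plus
the mandatory reserved bit), the single LAUNCH_UPDATE region, and the error
handling are illustrative, and the guest_memfd / KVM_SET_MEMORY_ATTRIBUTES
setup that LAUNCH_UPDATE relies on is omitted:

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int snp_cmd(int vm_fd, int sev_fd, __u32 id, void *data)
{
	struct kvm_sev_cmd cmd = {
		.id = id,
		.data = (__u64)(unsigned long)data,
		.sev_fd = sev_fd,		/* fd of /dev/sev */
	};

	return ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
}

/* gfn/uaddr/len describe one region of the initial image, backed by
 * guest_memfd and already marked private. */
static int snp_launch(int vm_fd, int sev_fd, __u64 gfn, void *uaddr, __u64 len)
{
	struct kvm_sev_snp_launch_start start = {
		.policy = 0x30000,
	};
	struct kvm_sev_snp_launch_update update = {
		.gfn_start = gfn,
		.uaddr = (__u64)(unsigned long)uaddr,
		.len = len,
		.type = KVM_SEV_SNP_PAGE_TYPE_NORMAL,
	};
	struct kvm_sev_snp_launch_finish finish = {};

	if (snp_cmd(vm_fd, sev_fd, KVM_SEV_SNP_LAUNCH_START, &start) ||
	    snp_cmd(vm_fd, sev_fd, KVM_SEV_SNP_LAUNCH_UPDATE, &update))
		return -1;

	return snp_cmd(vm_fd, sev_fd, KVM_SEV_SNP_LAUNCH_FINISH, &finish);
}
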
#define KVM_X2APIC_API_USE_32BIT_IDS (1ULL << 0)
#define KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK (1ULL << 1)
@ -874,5 +922,6 @@ struct kvm_hyperv_eventfd {
#define KVM_X86_SW_PROTECTED_VM 1
#define KVM_X86_SEV_VM 2
#define KVM_X86_SEV_ES_VM 3
#define KVM_X86_SNP_VM 4
#endif /* _ASM_X86_KVM_H */


@ -44,6 +44,7 @@ config KVM
select KVM_VFIO
select HAVE_KVM_PM_NOTIFIER if PM
select KVM_GENERIC_HARDWARE_ENABLING
select KVM_GENERIC_PRE_FAULT_MEMORY
select KVM_WERROR if WERROR
help
Support hosting fully virtualized guest machines using hardware
@ -139,6 +140,9 @@ config KVM_AMD_SEV
depends on KVM_AMD && X86_64
depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
select ARCH_HAS_CC_PLATFORM
select KVM_GENERIC_PRIVATE_MEM
select HAVE_KVM_GMEM_PREPARE
select HAVE_KVM_GMEM_INVALIDATE
help
Provides support for launching Encrypted VMs (SEV) and Encrypted VMs
with Encrypted State (SEV-ES) on AMD processors.


@ -335,6 +335,18 @@ static bool kvm_cpuid_has_hyperv(struct kvm_cpuid_entry2 *entries, int nent)
#endif
}
static bool guest_cpuid_is_amd_or_hygon(struct kvm_vcpu *vcpu)
{
struct kvm_cpuid_entry2 *entry;
entry = kvm_find_cpuid_entry(vcpu, 0);
if (!entry)
return false;
return is_guest_vendor_amd(entry->ebx, entry->ecx, entry->edx) ||
is_guest_vendor_hygon(entry->ebx, entry->ecx, entry->edx);
}
static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
{
struct kvm_lapic *apic = vcpu->arch.apic;
@ -388,7 +400,7 @@ static void kvm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
vcpu->arch.cpuid_nent));
/* Invoke the vendor callback only after the above state is updated. */
static_call(kvm_x86_vcpu_after_set_cpuid)(vcpu);
kvm_x86_call(vcpu_after_set_cpuid)(vcpu);
/*
* Except for the MMU, which needs to do its thing any vendor specific


@ -102,24 +102,6 @@ static __always_inline void guest_cpuid_clear(struct kvm_vcpu *vcpu,
*reg &= ~__feature_bit(x86_feature);
}
static inline bool guest_cpuid_is_amd_or_hygon(struct kvm_vcpu *vcpu)
{
struct kvm_cpuid_entry2 *best;
best = kvm_find_cpuid_entry(vcpu, 0);
return best &&
(is_guest_vendor_amd(best->ebx, best->ecx, best->edx) ||
is_guest_vendor_hygon(best->ebx, best->ecx, best->edx));
}
static inline bool guest_cpuid_is_intel(struct kvm_vcpu *vcpu)
{
struct kvm_cpuid_entry2 *best;
best = kvm_find_cpuid_entry(vcpu, 0);
return best && is_guest_vendor_intel(best->ebx, best->ecx, best->edx);
}
static inline bool guest_cpuid_is_amd_compatible(struct kvm_vcpu *vcpu)
{
return vcpu->arch.is_amd_compatible;


@ -2354,50 +2354,6 @@ setup_syscalls_segments(struct desc_struct *cs, struct desc_struct *ss)
ss->avl = 0;
}
static bool vendor_intel(struct x86_emulate_ctxt *ctxt)
{
u32 eax, ebx, ecx, edx;
eax = ecx = 0;
ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx, true);
return is_guest_vendor_intel(ebx, ecx, edx);
}
static bool em_syscall_is_enabled(struct x86_emulate_ctxt *ctxt)
{
const struct x86_emulate_ops *ops = ctxt->ops;
u32 eax, ebx, ecx, edx;
/*
* syscall should always be enabled in longmode - so only become
* vendor specific (cpuid) if other modes are active...
*/
if (ctxt->mode == X86EMUL_MODE_PROT64)
return true;
eax = 0x00000000;
ecx = 0x00000000;
ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx, true);
/*
* remark: Intel CPUs only support "syscall" in 64bit longmode. Also a
* 64bit guest with a 32bit compat-app running will #UD !! While this
* behaviour can be fixed (by emulating) into AMD response - CPUs of
* AMD can't behave like Intel.
*/
if (is_guest_vendor_intel(ebx, ecx, edx))
return false;
if (is_guest_vendor_amd(ebx, ecx, edx) ||
is_guest_vendor_hygon(ebx, ecx, edx))
return true;
/*
* default: (not Intel, not AMD, not Hygon), apply Intel's
* stricter rules...
*/
return false;
}
static int em_syscall(struct x86_emulate_ctxt *ctxt)
{
const struct x86_emulate_ops *ops = ctxt->ops;
@ -2411,7 +2367,15 @@ static int em_syscall(struct x86_emulate_ctxt *ctxt)
ctxt->mode == X86EMUL_MODE_VM86)
return emulate_ud(ctxt);
if (!(em_syscall_is_enabled(ctxt)))
/*
* Intel compatible CPUs only support SYSCALL in 64-bit mode, whereas
* AMD allows SYSCALL in any flavor of protected mode. Note, it's
* infeasible to emulate Intel behavior when running on AMD hardware,
* as SYSCALL won't fault in the "wrong" mode, i.e. there is no #UD
* for KVM to trap-and-emulate, unlike emulating AMD on Intel.
*/
if (ctxt->mode != X86EMUL_MODE_PROT64 &&
ctxt->ops->guest_cpuid_is_intel_compatible(ctxt))
return emulate_ud(ctxt);
ops->get_msr(ctxt, MSR_EFER, &efer);
@ -2471,11 +2435,11 @@ static int em_sysenter(struct x86_emulate_ctxt *ctxt)
return emulate_gp(ctxt, 0);
/*
* Not recognized on AMD in compat mode (but is recognized in legacy
* mode).
* Intel's architecture allows SYSENTER in compatibility mode, but AMD
* does not. Note, AMD does allow SYSENTER in legacy protected mode.
*/
if ((ctxt->mode != X86EMUL_MODE_PROT64) && (efer & EFER_LMA)
&& !vendor_intel(ctxt))
if ((ctxt->mode != X86EMUL_MODE_PROT64) && (efer & EFER_LMA) &&
!ctxt->ops->guest_cpuid_is_intel_compatible(ctxt))
return emulate_ud(ctxt);
/* sysenter/sysexit have not been tested in 64bit mode. */
@ -2647,7 +2611,14 @@ static void string_registers_quirk(struct x86_emulate_ctxt *ctxt)
* manner when ECX is zero due to REP-string optimizations.
*/
#ifdef CONFIG_X86_64
if (ctxt->ad_bytes != 4 || !vendor_intel(ctxt))
u32 eax, ebx, ecx, edx;
if (ctxt->ad_bytes != 4)
return;
eax = ecx = 0;
ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx, true);
if (!is_guest_vendor_intel(ebx, ecx, edx))
return;
*reg_write(ctxt, VCPU_REGS_RCX) = 0;


@ -1417,7 +1417,7 @@ static int kvm_hv_set_msr_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data,
}
/* vmcall/vmmcall */
static_call(kvm_x86_patch_hypercall)(vcpu, instructions + i);
kvm_x86_call(patch_hypercall)(vcpu, instructions + i);
i += 3;
/* ret */
@ -1737,7 +1737,8 @@ static int kvm_hv_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata,
data = (u64)vcpu->arch.virtual_tsc_khz * 1000;
break;
case HV_X64_MSR_APIC_FREQUENCY:
data = APIC_BUS_FREQUENCY;
data = div64_u64(1000000000ULL,
vcpu->kvm->arch.apic_bus_cycle_ns);
break;
default:
kvm_pr_unimpl_rdmsr(vcpu, msr);
@ -1985,7 +1986,7 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
*/
gva = entries[i] & PAGE_MASK;
for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
static_call(kvm_x86_flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
++vcpu->stat.tlb_flush;
}
@ -2526,7 +2527,7 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
* hypercall generates UD from non zero cpl and real mode
* per HYPER-V spec
*/
if (static_call(kvm_x86_get_cpl)(vcpu) != 0 || !is_protmode(vcpu)) {
if (kvm_x86_call(get_cpl)(vcpu) != 0 || !is_protmode(vcpu)) {
kvm_queue_exception(vcpu, UD_VECTOR);
return 1;
}


@ -157,7 +157,7 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu)
{
__kvm_migrate_apic_timer(vcpu);
__kvm_migrate_pit_timer(vcpu);
static_call_cond(kvm_x86_migrate_timers)(vcpu);
kvm_x86_call(migrate_timers)(vcpu);
}
bool kvm_arch_irqfd_allowed(struct kvm *kvm, struct kvm_irqfd *args)


@ -106,7 +106,6 @@ void __kvm_migrate_timers(struct kvm_vcpu *vcpu);
int apic_has_pending_timer(struct kvm_vcpu *vcpu);
int kvm_setup_default_irq_routing(struct kvm *kvm);
int kvm_setup_empty_irq_routing(struct kvm *kvm);
int kvm_irq_delivery_to_apic(struct kvm *kvm, struct kvm_lapic *src,
struct kvm_lapic_irq *irq,
struct dest_map *dest_map);


@ -395,13 +395,6 @@ int kvm_setup_default_irq_routing(struct kvm *kvm)
ARRAY_SIZE(default_routing), 0);
}
static const struct kvm_irq_routing_entry empty_routing[] = {};
int kvm_setup_empty_irq_routing(struct kvm *kvm)
{
return kvm_set_irq_routing(kvm, empty_routing, 0, 0);
}
void kvm_arch_post_irq_routing_update(struct kvm *kvm)
{
if (!irqchip_split(kvm))


@ -98,7 +98,7 @@ static inline unsigned long kvm_register_read_raw(struct kvm_vcpu *vcpu, int reg
return 0;
if (!kvm_register_is_available(vcpu, reg))
static_call(kvm_x86_cache_reg)(vcpu, reg);
kvm_x86_call(cache_reg)(vcpu, reg);
return vcpu->arch.regs[reg];
}
@ -138,7 +138,7 @@ static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
might_sleep(); /* on svm */
if (!kvm_register_is_available(vcpu, VCPU_EXREG_PDPTR))
static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_PDPTR);
kvm_x86_call(cache_reg)(vcpu, VCPU_EXREG_PDPTR);
return vcpu->arch.walk_mmu->pdptrs[index];
}
@ -153,7 +153,7 @@ static inline ulong kvm_read_cr0_bits(struct kvm_vcpu *vcpu, ulong mask)
ulong tmask = mask & KVM_POSSIBLE_CR0_GUEST_BITS;
if ((tmask & vcpu->arch.cr0_guest_owned_bits) &&
!kvm_register_is_available(vcpu, VCPU_EXREG_CR0))
static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR0);
kvm_x86_call(cache_reg)(vcpu, VCPU_EXREG_CR0);
return vcpu->arch.cr0 & mask;
}
@ -175,7 +175,7 @@ static inline ulong kvm_read_cr4_bits(struct kvm_vcpu *vcpu, ulong mask)
ulong tmask = mask & KVM_POSSIBLE_CR4_GUEST_BITS;
if ((tmask & vcpu->arch.cr4_guest_owned_bits) &&
!kvm_register_is_available(vcpu, VCPU_EXREG_CR4))
static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR4);
kvm_x86_call(cache_reg)(vcpu, VCPU_EXREG_CR4);
return vcpu->arch.cr4 & mask;
}
@ -190,7 +190,7 @@ static __always_inline bool kvm_is_cr4_bit_set(struct kvm_vcpu *vcpu,
static inline ulong kvm_read_cr3(struct kvm_vcpu *vcpu)
{
if (!kvm_register_is_available(vcpu, VCPU_EXREG_CR3))
static_call(kvm_x86_cache_reg)(vcpu, VCPU_EXREG_CR3);
kvm_x86_call(cache_reg)(vcpu, VCPU_EXREG_CR3);
return vcpu->arch.cr3;
}


@ -223,6 +223,7 @@ struct x86_emulate_ops {
bool (*guest_has_movbe)(struct x86_emulate_ctxt *ctxt);
bool (*guest_has_fxsr)(struct x86_emulate_ctxt *ctxt);
bool (*guest_has_rdpid)(struct x86_emulate_ctxt *ctxt);
bool (*guest_cpuid_is_intel_compatible)(struct x86_emulate_ctxt *ctxt);
void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked);


@ -738,8 +738,8 @@ static inline void apic_clear_irr(int vec, struct kvm_lapic *apic)
if (unlikely(apic->apicv_active)) {
/* need to update RVI */
kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
static_call_cond(kvm_x86_hwapic_irr_update)(apic->vcpu,
apic_find_highest_irr(apic));
kvm_x86_call(hwapic_irr_update)(apic->vcpu,
apic_find_highest_irr(apic));
} else {
apic->irr_pending = false;
kvm_lapic_clear_vector(vec, apic->regs + APIC_IRR);
@ -765,7 +765,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
* just set SVI.
*/
if (unlikely(apic->apicv_active))
static_call_cond(kvm_x86_hwapic_isr_update)(vec);
kvm_x86_call(hwapic_isr_update)(vec);
else {
++apic->isr_count;
BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
@ -810,7 +810,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
* and must be left alone.
*/
if (unlikely(apic->apicv_active))
static_call_cond(kvm_x86_hwapic_isr_update)(apic_find_highest_isr(apic));
kvm_x86_call(hwapic_isr_update)(apic_find_highest_isr(apic));
else {
--apic->isr_count;
BUG_ON(apic->isr_count < 0);
@ -946,7 +946,7 @@ static int apic_has_interrupt_for_ppr(struct kvm_lapic *apic, u32 ppr)
{
int highest_irr;
if (kvm_x86_ops.sync_pir_to_irr)
highest_irr = static_call(kvm_x86_sync_pir_to_irr)(apic->vcpu);
highest_irr = kvm_x86_call(sync_pir_to_irr)(apic->vcpu);
else
highest_irr = apic_find_highest_irr(apic);
if (highest_irr == -1 || (highest_irr & 0xF0) <= ppr)
@ -1338,8 +1338,8 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
apic->regs + APIC_TMR);
}
static_call(kvm_x86_deliver_interrupt)(apic, delivery_mode,
trig_mode, vector);
kvm_x86_call(deliver_interrupt)(apic, delivery_mode,
trig_mode, vector);
break;
case APIC_DM_REMRD:
@ -1557,7 +1557,8 @@ static u32 apic_get_tmcct(struct kvm_lapic *apic)
remaining = 0;
ns = mod_64(ktime_to_ns(remaining), apic->lapic_timer.period);
return div64_u64(ns, (APIC_BUS_CYCLE_NS * apic->divide_count));
return div64_u64(ns, (apic->vcpu->kvm->arch.apic_bus_cycle_ns *
apic->divide_count));
}
static void __report_tpr_access(struct kvm_lapic *apic, bool write)
@ -1973,7 +1974,8 @@ static void start_sw_tscdeadline(struct kvm_lapic *apic)
static inline u64 tmict_to_ns(struct kvm_lapic *apic, u32 tmict)
{
return (u64)tmict * APIC_BUS_CYCLE_NS * (u64)apic->divide_count;
return (u64)tmict * apic->vcpu->kvm->arch.apic_bus_cycle_ns *
(u64)apic->divide_count;
}
static void update_target_expiration(struct kvm_lapic *apic, uint32_t old_divisor)
@ -2103,7 +2105,7 @@ static void cancel_hv_timer(struct kvm_lapic *apic)
{
WARN_ON(preemptible());
WARN_ON(!apic->lapic_timer.hv_timer_in_use);
static_call(kvm_x86_cancel_hv_timer)(apic->vcpu);
kvm_x86_call(cancel_hv_timer)(apic->vcpu);
apic->lapic_timer.hv_timer_in_use = false;
}
@ -2120,7 +2122,7 @@ static bool start_hv_timer(struct kvm_lapic *apic)
if (!ktimer->tscdeadline)
return false;
if (static_call(kvm_x86_set_hv_timer)(vcpu, ktimer->tscdeadline, &expired))
if (kvm_x86_call(set_hv_timer)(vcpu, ktimer->tscdeadline, &expired))
return false;
ktimer->hv_timer_in_use = true;
@ -2575,7 +2577,7 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
if ((old_value ^ value) & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE)) {
kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
static_call_cond(kvm_x86_set_virtual_apic_mode)(vcpu);
kvm_x86_call(set_virtual_apic_mode)(vcpu);
}
apic->base_address = apic->vcpu->arch.apic_base &
@ -2685,7 +2687,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
u64 msr_val;
int i;
static_call_cond(kvm_x86_apicv_pre_state_restore)(vcpu);
kvm_x86_call(apicv_pre_state_restore)(vcpu);
if (!init_event) {
msr_val = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE;
@ -2740,9 +2742,9 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
vcpu->arch.pv_eoi.msr_val = 0;
apic_update_ppr(apic);
if (apic->apicv_active) {
static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, -1);
static_call_cond(kvm_x86_hwapic_isr_update)(-1);
kvm_x86_call(apicv_post_state_restore)(vcpu);
kvm_x86_call(hwapic_irr_update)(vcpu, -1);
kvm_x86_call(hwapic_isr_update)(-1);
}
vcpu->arch.apic_arb_prio = 0;
@ -2838,7 +2840,7 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu)
vcpu->arch.apic = apic;
if (kvm_x86_ops.alloc_apic_backing_page)
apic->regs = static_call(kvm_x86_alloc_apic_backing_page)(vcpu);
apic->regs = kvm_x86_call(alloc_apic_backing_page)(vcpu);
else
apic->regs = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
if (!apic->regs) {
@ -3017,7 +3019,7 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
struct kvm_lapic *apic = vcpu->arch.apic;
int r;
static_call_cond(kvm_x86_apicv_pre_state_restore)(vcpu);
kvm_x86_call(apicv_pre_state_restore)(vcpu);
kvm_lapic_set_base(vcpu, vcpu->arch.apic_base);
/* set SPIV separately to get count of SW disabled APICs right */
@ -3044,9 +3046,10 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
kvm_lapic_set_reg(apic, APIC_TMCCT, 0);
kvm_apic_update_apicv(vcpu);
if (apic->apicv_active) {
static_call_cond(kvm_x86_apicv_post_state_restore)(vcpu);
static_call_cond(kvm_x86_hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
static_call_cond(kvm_x86_hwapic_isr_update)(apic_find_highest_isr(apic));
kvm_x86_call(apicv_post_state_restore)(vcpu);
kvm_x86_call(hwapic_irr_update)(vcpu,
apic_find_highest_irr(apic));
kvm_x86_call(hwapic_isr_update)(apic_find_highest_isr(apic));
}
kvm_make_request(KVM_REQ_EVENT, vcpu);
if (ioapic_in_kernel(vcpu->kvm))
@ -3334,7 +3337,8 @@ int kvm_apic_accept_events(struct kvm_vcpu *vcpu)
/* evaluate pending_events before reading the vector */
smp_rmb();
sipi_vector = apic->sipi_vector;
static_call(kvm_x86_vcpu_deliver_sipi_vector)(vcpu, sipi_vector);
kvm_x86_call(vcpu_deliver_sipi_vector)(vcpu,
sipi_vector);
vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
}
}


@ -16,8 +16,7 @@
#define APIC_DEST_NOSHORT 0x0
#define APIC_DEST_MASK 0x800
#define APIC_BUS_CYCLE_NS 1
#define APIC_BUS_FREQUENCY (1000000000ULL / APIC_BUS_CYCLE_NS)
#define APIC_BUS_CYCLE_NS_DEFAULT 1
#define APIC_BROADCAST 0xFF
#define X2APIC_BROADCAST 0xFFFFFFFFul
@ -236,7 +235,7 @@ static inline bool kvm_apic_has_pending_init_or_sipi(struct kvm_vcpu *vcpu)
static inline bool kvm_apic_init_sipi_allowed(struct kvm_vcpu *vcpu)
{
return !is_smm(vcpu) &&
!static_call(kvm_x86_apic_init_signal_blocked)(vcpu);
!kvm_x86_call(apic_init_signal_blocked)(vcpu);
}
static inline bool kvm_lowest_prio_delivery(struct kvm_lapic_irq *irq)


@ -57,12 +57,6 @@ static __always_inline u64 rsvd_bits(int s, int e)
return ((2ULL << (e - s)) - 1) << s;
}
/*
* The number of non-reserved physical address bits irrespective of features
* that repurpose legal bits, e.g. MKTME.
*/
extern u8 __read_mostly shadow_phys_bits;
static inline gfn_t kvm_mmu_max_gfn(void)
{
/*
@ -76,30 +70,11 @@ static inline gfn_t kvm_mmu_max_gfn(void)
* than hardware's real MAXPHYADDR. Using the host MAXPHYADDR
* disallows such SPTEs entirely and simplifies the TDP MMU.
*/
int max_gpa_bits = likely(tdp_enabled) ? shadow_phys_bits : 52;
int max_gpa_bits = likely(tdp_enabled) ? kvm_host.maxphyaddr : 52;
return (1ULL << (max_gpa_bits - PAGE_SHIFT)) - 1;
}
static inline u8 kvm_get_shadow_phys_bits(void)
{
/*
* boot_cpu_data.x86_phys_bits is reduced when MKTME or SME are detected
* in CPU detection code, but the processor treats those reduced bits as
* 'keyID' thus they are not reserved bits. Therefore KVM needs to look at
* the physical address bits reported by CPUID.
*/
if (likely(boot_cpu_data.extended_cpuid_level >= 0x80000008))
return cpuid_eax(0x80000008) & 0xff;
/*
* Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
* custom CPUID. Proceed with whatever the kernel found since these features
* aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).
*/
return boot_cpu_data.x86_phys_bits;
}
u8 kvm_mmu_get_max_tdp_level(void);
void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
@ -163,8 +138,8 @@ static inline void kvm_mmu_load_pgd(struct kvm_vcpu *vcpu)
if (!VALID_PAGE(root_hpa))
return;
static_call(kvm_x86_load_mmu_pgd)(vcpu, root_hpa,
vcpu->arch.mmu->root_role.level);
kvm_x86_call(load_mmu_pgd)(vcpu, root_hpa,
vcpu->arch.mmu->root_role.level);
}
static inline void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu,
@ -199,7 +174,7 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
{
/* strip nested paging fault error codes */
unsigned int pfec = access;
unsigned long rflags = static_call(kvm_x86_get_rflags)(vcpu);
unsigned long rflags = kvm_x86_call(get_rflags)(vcpu);
/*
* For explicit supervisor accesses, SMAP is disabled if EFLAGS.AC = 1.
@ -246,14 +221,7 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
return -(u32)fault & errcode;
}
bool __kvm_mmu_honors_guest_mtrrs(bool vm_has_noncoherent_dma);
static inline bool kvm_mmu_honors_guest_mtrrs(struct kvm *kvm)
{
return __kvm_mmu_honors_guest_mtrrs(kvm_arch_has_noncoherent_dma(kvm));
}
void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end);
bool kvm_mmu_may_ignore_guest_pat(void);
int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);


@ -722,7 +722,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
if (sp->role.passthrough)
return sp->gfn;
if (!sp->role.direct)
if (sp->shadowed_translation)
return sp->shadowed_translation[index] >> PAGE_SHIFT;
return sp->gfn + (index << ((sp->role.level - 1) * SPTE_LEVEL_BITS));
@ -736,7 +736,7 @@ static gfn_t kvm_mmu_page_get_gfn(struct kvm_mmu_page *sp, int index)
*/
static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
{
if (sp_has_gptes(sp))
if (sp->shadowed_translation)
return sp->shadowed_translation[index] & ACC_ALL;
/*
@ -757,7 +757,7 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
gfn_t gfn, unsigned int access)
{
if (sp_has_gptes(sp)) {
if (sp->shadowed_translation) {
sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
return;
}
@ -1700,8 +1700,7 @@ static void kvm_mmu_free_shadow_page(struct kvm_mmu_page *sp)
hlist_del(&sp->hash_link);
list_del(&sp->link);
free_page((unsigned long)sp->spt);
if (!sp->role.direct)
free_page((unsigned long)sp->shadowed_translation);
free_page((unsigned long)sp->shadowed_translation);
kmem_cache_free(mmu_page_header_cache, sp);
}
@ -2203,7 +2202,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
sp = kvm_mmu_memory_cache_alloc(caches->page_header_cache);
sp->spt = kvm_mmu_memory_cache_alloc(caches->shadow_page_cache);
if (!role.direct)
if (!role.direct && role.level <= KVM_MAX_HUGEPAGE_LEVEL)
sp->shadowed_translation = kvm_mmu_memory_cache_alloc(caches->shadowed_info_cache);
set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
@ -3308,7 +3307,7 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
return RET_PF_CONTINUE;
}
static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
{
/*
* Page faults with reserved bits set, i.e. faults on MMIO SPTEs, only
@ -3319,6 +3318,26 @@ static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
if (fault->rsvd)
return false;
/*
* For hardware-protected VMs, certain conditions like attempting to
* perform a write to a page which is not in the state that the guest
* expects it to be in can result in a nested/extended #PF. In this
* case, the below code might misconstrue this situation as being the
* result of a write-protected access, and treat it as a spurious case
* rather than taking any action to satisfy the real source of the #PF
* such as generating a KVM_EXIT_MEMORY_FAULT. This can lead to the
* guest spinning on a #PF indefinitely, so don't attempt the fast path
* in this case.
*
* Note that the kvm_mem_is_private() check might race with an
* attribute update, but this will either result in the guest spinning
* on RET_PF_SPURIOUS until the update completes, or an actual spurious
* case might go down the slow path. Either case will resolve itself.
*/
if (kvm->arch.has_private_mem &&
fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
return false;
/*
* #PF can be fast if:
*
@ -3419,7 +3438,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
u64 *sptep;
uint retry_count = 0;
if (!page_fault_can_be_fast(fault))
if (!page_fault_can_be_fast(vcpu->kvm, fault))
return ret;
walk_shadow_page_lockless_begin(vcpu);
@ -3428,7 +3447,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
u64 new_spte;
if (tdp_mmu_enabled)
sptep = kvm_tdp_mmu_fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
sptep = kvm_tdp_mmu_fast_pf_get_last_sptep(vcpu, fault->gfn, &spte);
else
sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte);
@ -3438,7 +3457,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
* available as the vCPU holds a reference to its root(s).
*/
if (WARN_ON_ONCE(!sptep))
spte = REMOVED_SPTE;
spte = FROZEN_SPTE;
if (!is_shadow_present_pte(spte))
break;
@ -4271,7 +4290,16 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
return;
kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, work->arch.error_code, true, NULL);
r = kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, work->arch.error_code,
true, NULL, NULL);
/*
* Account fixed page faults, otherwise they'll never be counted, but
* ignore stats for all other return values. Page-ready "faults" aren't
* truly spurious and never trigger emulation.
*/
if (r == RET_PF_FIXED)
vcpu->stat.pf_fixed++;
}
static inline u8 kvm_max_level_for_order(int order)
@ -4291,6 +4319,25 @@ static inline u8 kvm_max_level_for_order(int order)
return PG_LEVEL_4K;
}
static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
u8 max_level, int gmem_order)
{
u8 req_max_level;
if (max_level == PG_LEVEL_4K)
return PG_LEVEL_4K;
max_level = min(kvm_max_level_for_order(gmem_order), max_level);
if (max_level == PG_LEVEL_4K)
return PG_LEVEL_4K;
req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn);
if (req_max_level)
max_level = min(max_level, req_max_level);
return req_max_level;
}
static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
struct kvm_page_fault *fault)
{
@ -4308,9 +4355,9 @@ static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
return r;
}
fault->max_level = min(kvm_max_level_for_order(max_order),
fault->max_level);
fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
fault->max_level, max_order);
return RET_PF_CONTINUE;
}
@ -4561,7 +4608,10 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
if (WARN_ON_ONCE(error_code >> 32))
error_code = lower_32_bits(error_code);
/* Ensure the above sanity check also covers KVM-defined flags. */
/*
* Restrict KVM-defined flags to bits 63:32 so that it's impossible for
* them to conflict with #PF error codes, which are limited to 32 bits.
*/
BUILD_BUG_ON(lower_32_bits(PFERR_SYNTHETIC_MASK));
vcpu->arch.l1tf_flush_l1d = true;
@ -4621,38 +4671,23 @@ out_unlock:
}
#endif
bool __kvm_mmu_honors_guest_mtrrs(bool vm_has_noncoherent_dma)
bool kvm_mmu_may_ignore_guest_pat(void)
{
/*
* If host MTRRs are ignored (shadow_memtype_mask is non-zero), and the
* VM has non-coherent DMA (DMA doesn't snoop CPU caches), KVM's ABI is
* to honor the memtype from the guest's MTRRs so that guest accesses
* to memory that is DMA'd aren't cached against the guest's wishes.
*
* Note, KVM may still ultimately ignore guest MTRRs for certain PFNs,
* e.g. KVM will force UC memtype for host MMIO.
* When EPT is enabled (shadow_memtype_mask is non-zero), the CPU does
* not support self-snoop (or is affected by an erratum), and the VM
* has non-coherent DMA (DMA doesn't snoop CPU caches), KVM's ABI is to
* honor the memtype from the guest's PAT so that guest accesses to
* memory that is DMA'd aren't cached against the guest's wishes. As a
* result, KVM _may_ ignore guest PAT, whereas without non-coherent DMA,
* KVM _always_ ignores or honors guest PAT, i.e. doesn't toggle SPTE
* bits in response to non-coherent device (un)registration.
*/
return vm_has_noncoherent_dma && shadow_memtype_mask;
return !static_cpu_has(X86_FEATURE_SELFSNOOP) && shadow_memtype_mask;
}
int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
/*
* If the guest's MTRRs may be used to compute the "real" memtype,
* restrict the mapping level to ensure KVM uses a consistent memtype
* across the entire mapping.
*/
if (kvm_mmu_honors_guest_mtrrs(vcpu->kvm)) {
for ( ; fault->max_level > PG_LEVEL_4K; --fault->max_level) {
int page_num = KVM_PAGES_PER_HPAGE(fault->max_level);
gfn_t base = gfn_round_for_level(fault->gfn,
fault->max_level);
if (kvm_mtrr_check_gfn_range_consistency(vcpu, base, page_num))
break;
}
}
#ifdef CONFIG_X86_64
if (tdp_mmu_enabled)
return kvm_tdp_mmu_page_fault(vcpu, fault);
@ -4661,6 +4696,79 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
return direct_page_fault(vcpu, fault);
}
static int kvm_tdp_map_page(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code,
u8 *level)
{
int r;
/*
* Restrict to TDP page fault, since that's the only case where the MMU
* is indexed by GPA.
*/
if (vcpu->arch.mmu->page_fault != kvm_tdp_page_fault)
return -EOPNOTSUPP;
do {
if (signal_pending(current))
return -EINTR;
cond_resched();
r = kvm_mmu_do_page_fault(vcpu, gpa, error_code, true, NULL, level);
} while (r == RET_PF_RETRY);
if (r < 0)
return r;
switch (r) {
case RET_PF_FIXED:
case RET_PF_SPURIOUS:
return 0;
case RET_PF_EMULATE:
return -ENOENT;
case RET_PF_RETRY:
case RET_PF_CONTINUE:
case RET_PF_INVALID:
default:
WARN_ONCE(1, "could not fix page fault during prefault");
return -EIO;
}
}
long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
struct kvm_pre_fault_memory *range)
{
u64 error_code = PFERR_GUEST_FINAL_MASK;
u8 level = PG_LEVEL_4K;
u64 end;
int r;
/*
* reload is efficient when called repeatedly, so we can do it on
* every iteration.
*/
kvm_mmu_reload(vcpu);
if (kvm_arch_has_private_mem(vcpu->kvm) &&
kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
error_code |= PFERR_PRIVATE_ACCESS;
/*
* Shadow paging uses GVA for KVM page faults, so restrict to
* two-dimensional paging.
*/
r = kvm_tdp_map_page(vcpu, range->gpa, error_code, &level);
if (r < 0)
return r;
/*
* If the mapping that covers range->gpa can use a huge page, that
* mapping may start below range->gpa or end after range->gpa + range->size.
*/
end = (range->gpa & KVM_HPAGE_MASK(level)) + KVM_HPAGE_SIZE(level);
return min(range->size, end - range->gpa);
}
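
The arch hook above returns how many bytes it mapped, and the generic side
exposes it to userspace as the new KVM_PRE_FAULT_MEMORY vCPU ioctl (selected by
KVM_GENERIC_PRE_FAULT_MEMORY in the Kconfig hunk earlier). A minimal userspace
sketch, assuming the kernel copies the leftover range back to the caller so the
loop can resume after a signal or partial progress:

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Pre-populate stage-2 mappings for [gpa, gpa + size) before the first run. */
static int prefault_range(int vcpu_fd, __u64 gpa, __u64 size)
{
	struct kvm_pre_fault_memory range = {
		.gpa = gpa,
		.size = size,
	};

	while (range.size) {
		/* On success or interruption, .gpa/.size reflect the progress made. */
		if (!ioctl(vcpu_fd, KVM_PRE_FAULT_MEMORY, &range))
			continue;
		if (errno != EINTR && errno != EAGAIN)
			return -errno;
	}

	return 0;
}
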
static void nonpaging_init_context(struct kvm_mmu *context)
{
context->page_fault = nonpaging_page_fault;
@ -4988,7 +5096,7 @@ static void reset_rsvds_bits_mask_ept(struct kvm_vcpu *vcpu,
static inline u64 reserved_hpa_bits(void)
{
return rsvd_bits(shadow_phys_bits, 63);
return rsvd_bits(kvm_host.maxphyaddr, 63);
}
/*
@ -5633,7 +5741,7 @@ int kvm_mmu_load(struct kvm_vcpu *vcpu)
* stale entries. Flushing on alloc also allows KVM to skip the TLB
* flush when freeing a root (see kvm_tdp_mmu_put_root()).
*/
static_call(kvm_x86_flush_tlb_current)(vcpu);
kvm_x86_call(flush_tlb_current)(vcpu);
out:
return r;
}
@ -5886,14 +5994,24 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err
}
if (r == RET_PF_INVALID) {
vcpu->stat.pf_taken++;
r = kvm_mmu_do_page_fault(vcpu, cr2_or_gpa, error_code, false,
&emulation_type);
&emulation_type, NULL);
if (KVM_BUG_ON(r == RET_PF_INVALID, vcpu->kvm))
return -EIO;
}
if (r < 0)
return r;
if (r == RET_PF_FIXED)
vcpu->stat.pf_fixed++;
else if (r == RET_PF_EMULATE)
vcpu->stat.pf_emulate++;
else if (r == RET_PF_SPURIOUS)
vcpu->stat.pf_spurious++;
if (r != RET_PF_EMULATE)
return 1;
@ -5995,7 +6113,7 @@ void kvm_mmu_invalidate_addr(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
if (is_noncanonical_address(addr, vcpu))
return;
static_call(kvm_x86_flush_tlb_gva)(vcpu, addr);
kvm_x86_call(flush_tlb_gva)(vcpu, addr);
}
if (!mmu->sync_spte)
@ -6787,6 +6905,7 @@ restart:
return need_tlb_flush;
}
EXPORT_SYMBOL_GPL(kvm_zap_gfn_range);
static void kvm_rmap_zap_collapsible_sptes(struct kvm *kvm,
const struct kvm_memory_slot *slot)
@ -6917,7 +7036,6 @@ static unsigned long mmu_shrink_scan(struct shrinker *shrink,
list_for_each_entry(kvm, &vm_list, vm_list) {
int idx;
LIST_HEAD(invalid_list);
/*
* Never scan more than sc->nr_to_scan VM instances.


@ -288,7 +288,8 @@ static inline void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
}
static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
u64 err, bool prefetch, int *emulation_type)
u64 err, bool prefetch,
int *emulation_type, u8 *level)
{
struct kvm_page_fault fault = {
.addr = cr2_or_gpa,
@ -318,14 +319,6 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
fault.slot = kvm_vcpu_gfn_to_memslot(vcpu, fault.gfn);
}
/*
* Async #PF "faults", a.k.a. prefetch faults, are not faults from the
* guest perspective and have already been counted at the time of the
* original fault.
*/
if (!prefetch)
vcpu->stat.pf_taken++;
if (IS_ENABLED(CONFIG_MITIGATION_RETPOLINE) && fault.is_tdp)
r = kvm_tdp_page_fault(vcpu, &fault);
else
@ -344,20 +337,9 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
if (fault.write_fault_to_shadow_pgtable && emulation_type)
*emulation_type |= EMULTYPE_WRITE_PF_TO_SP;
if (level)
*level = fault.goal_level;
/*
* Similar to above, prefetch faults aren't truly spurious, and the
* async #PF path doesn't do emulation. Do count faults that are fixed
* by the async #PF handler though, otherwise they'll never be counted.
*/
if (r == RET_PF_FIXED)
vcpu->stat.pf_fixed++;
else if (prefetch)
;
else if (r == RET_PF_EMULATE)
vcpu->stat.pf_emulate++;
else if (r == RET_PF_SPURIOUS)
vcpu->stat.pf_spurious++;
return r;
}


@ -911,7 +911,8 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
gpa_t pte_gpa;
gfn_t gfn;
if (WARN_ON_ONCE(sp->spt[i] == SHADOW_NONPRESENT_VALUE))
if (WARN_ON_ONCE(sp->spt[i] == SHADOW_NONPRESENT_VALUE ||
!sp->shadowed_translation))
return 0;
first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);


@ -43,7 +43,25 @@ u64 __read_mostly shadow_acc_track_mask;
u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
u8 __read_mostly shadow_phys_bits;
static u8 __init kvm_get_host_maxphyaddr(void)
{
/*
* boot_cpu_data.x86_phys_bits is reduced when MKTME or SME are detected
* in CPU detection code, but the processor treats those reduced bits as
* 'keyID' thus they are not reserved bits. Therefore KVM needs to look at
* the physical address bits reported by CPUID, i.e. the raw MAXPHYADDR,
* when reasoning about CPU behavior with respect to MAXPHYADDR.
*/
if (likely(boot_cpu_data.extended_cpuid_level >= 0x80000008))
return cpuid_eax(0x80000008) & 0xff;
/*
* Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
* custom CPUID. Proceed with whatever the kernel found since these features
* aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).
*/
return boot_cpu_data.x86_phys_bits;
}
void __init kvm_mmu_spte_module_init(void)
{
@ -55,6 +73,8 @@ void __init kvm_mmu_spte_module_init(void)
* will change when the vendor module is (re)loaded.
*/
allow_mmio_caching = enable_mmio_caching;
kvm_host.maxphyaddr = kvm_get_host_maxphyaddr();
}
static u64 generation_mmio_spte_mask(u64 gen)
@ -190,8 +210,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
spte |= PT_PAGE_SIZE_MASK;
if (shadow_memtype_mask)
spte |= static_call(kvm_x86_get_mt_mask)(vcpu, gfn,
kvm_is_mmio_pfn(pfn));
spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn,
kvm_is_mmio_pfn(pfn));
if (host_writable)
spte |= shadow_host_writable_mask;
else
@ -271,18 +291,12 @@ static u64 make_spte_executable(u64 spte)
* This is used during huge page splitting to build the SPTEs that make up the
* new page table.
*/
u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, union kvm_mmu_page_role role,
int index)
u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte,
union kvm_mmu_page_role role, int index)
{
u64 child_spte;
u64 child_spte = huge_spte;
if (WARN_ON_ONCE(!is_shadow_present_pte(huge_spte)))
return 0;
if (WARN_ON_ONCE(!is_large_pte(huge_spte)))
return 0;
child_spte = huge_spte;
KVM_BUG_ON(!is_shadow_present_pte(huge_spte) || !is_large_pte(huge_spte), kvm);
/*
* The child_spte already has the base address of the huge page being
@ -383,7 +397,7 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
* not set any RWX bits.
*/
if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
WARN_ON(mmio_value && (FROZEN_SPTE & mmio_mask) == mmio_value))
mmio_value = 0;
if (!mmio_value)
@ -441,8 +455,6 @@ void kvm_mmu_reset_all_pte_masks(void)
u8 low_phys_bits;
u64 mask;
shadow_phys_bits = kvm_get_shadow_phys_bits();
/*
* If the CPU has 46 or less physical address bits, then set an
* appropriate mask to guard against L1TF attacks. Otherwise, it is
@ -494,7 +506,7 @@ void kvm_mmu_reset_all_pte_masks(void)
* 52-bit physical addresses then there are no reserved PA bits in the
* PTEs and so the reserved PA approach must be disabled.
*/
if (shadow_phys_bits < 52)
if (kvm_host.maxphyaddr < 52)
mask = BIT_ULL(51) | PT_PRESENT_MASK;
else
mask = 0;


@ -202,7 +202,7 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
/*
* If a thread running without exclusive control of the MMU lock must perform a
* multi-part operation on an SPTE, it can set the SPTE to REMOVED_SPTE as a
* multi-part operation on an SPTE, it can set the SPTE to FROZEN_SPTE as a
* non-present intermediate value. Other threads which encounter this value
* should not modify the SPTE.
*
@ -212,14 +212,14 @@ extern u64 __read_mostly shadow_nonpresent_or_rsvd_mask;
*
* Only used by the TDP MMU.
*/
#define REMOVED_SPTE (SHADOW_NONPRESENT_VALUE | 0x5a0ULL)
#define FROZEN_SPTE (SHADOW_NONPRESENT_VALUE | 0x5a0ULL)
/* Removed SPTEs must not be misconstrued as shadow present PTEs. */
static_assert(!(REMOVED_SPTE & SPTE_MMU_PRESENT_MASK));
static_assert(!(FROZEN_SPTE & SPTE_MMU_PRESENT_MASK));
static inline bool is_removed_spte(u64 spte)
static inline bool is_frozen_spte(u64 spte)
{
return spte == REMOVED_SPTE;
return spte == FROZEN_SPTE;
}
/* Get an SPTE's index into its parent's page table (and the spt array). */


@ -365,8 +365,8 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
* value to the removed SPTE value.
*/
for (;;) {
old_spte = kvm_tdp_mmu_write_spte_atomic(sptep, REMOVED_SPTE);
if (!is_removed_spte(old_spte))
old_spte = kvm_tdp_mmu_write_spte_atomic(sptep, FROZEN_SPTE);
if (!is_frozen_spte(old_spte))
break;
cpu_relax();
}
@ -397,11 +397,11 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
* No retry is needed in the atomic update path as the
* sole concern is dropping a Dirty bit, i.e. no other
* task can zap/remove the SPTE as mmu_lock is held for
* write. Marking the SPTE as a removed SPTE is not
* write. Marking the SPTE as a frozen SPTE is not
* strictly necessary for the same reason, but using
* the remove SPTE value keeps the shared/exclusive
* the frozen SPTE value keeps the shared/exclusive
* paths consistent and allows the handle_changed_spte()
* call below to hardcode the new value to REMOVED_SPTE.
* call below to hardcode the new value to FROZEN_SPTE.
*
* Note, even though dropping a Dirty bit is the only
* scenario where a non-atomic update could result in a
@ -413,10 +413,10 @@ static void handle_removed_pt(struct kvm *kvm, tdp_ptep_t pt, bool shared)
* it here.
*/
old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte,
REMOVED_SPTE, level);
FROZEN_SPTE, level);
}
handle_changed_spte(kvm, kvm_mmu_page_as_id(sp), gfn,
old_spte, REMOVED_SPTE, level, shared);
old_spte, FROZEN_SPTE, level, shared);
}
call_rcu(&sp->rcu_head, tdp_mmu_free_sp_rcu_callback);
@ -490,19 +490,19 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
*/
if (!was_present && !is_present) {
/*
* If this change does not involve a MMIO SPTE or removed SPTE,
* If this change does not involve a MMIO SPTE or frozen SPTE,
* it is unexpected. Log the change, though it should not
* impact the guest since both the former and current SPTEs
* are nonpresent.
*/
if (WARN_ON_ONCE(!is_mmio_spte(kvm, old_spte) &&
!is_mmio_spte(kvm, new_spte) &&
!is_removed_spte(new_spte)))
!is_frozen_spte(new_spte)))
pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
"should not be replaced with another,\n"
"different nonpresent SPTE, unless one or both\n"
"are MMIO SPTEs, or the new SPTE is\n"
"a temporary removed SPTE.\n"
"a temporary frozen SPTE.\n"
"as_id: %d gfn: %llx old_spte: %llx new_spte: %llx level: %d",
as_id, gfn, old_spte, new_spte, level);
return;
@ -530,7 +530,8 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
kvm_set_pfn_accessed(spte_to_pfn(old_spte));
}
static inline int __tdp_mmu_set_spte_atomic(struct tdp_iter *iter, u64 new_spte)
static inline int __must_check __tdp_mmu_set_spte_atomic(struct tdp_iter *iter,
u64 new_spte)
{
u64 *sptep = rcu_dereference(iter->sptep);
@ -540,7 +541,7 @@ static inline int __tdp_mmu_set_spte_atomic(struct tdp_iter *iter, u64 new_spte)
* and pre-checking before inserting a new SPTE is advantageous as it
* avoids unnecessary work.
*/
WARN_ON_ONCE(iter->yielded || is_removed_spte(iter->old_spte));
WARN_ON_ONCE(iter->yielded || is_frozen_spte(iter->old_spte));
/*
* Note, fast_pf_fix_direct_spte() can also modify TDP MMU SPTEs and
@ -572,9 +573,9 @@ static inline int __tdp_mmu_set_spte_atomic(struct tdp_iter *iter, u64 new_spte)
* no side-effects other than setting iter->old_spte to the last
* known value of the spte.
*/
static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
struct tdp_iter *iter,
u64 new_spte)
static inline int __must_check tdp_mmu_set_spte_atomic(struct kvm *kvm,
struct tdp_iter *iter,
u64 new_spte)
{
int ret;
@ -590,8 +591,8 @@ static inline int tdp_mmu_set_spte_atomic(struct kvm *kvm,
return 0;
}
static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
struct tdp_iter *iter)
static inline int __must_check tdp_mmu_zap_spte_atomic(struct kvm *kvm,
struct tdp_iter *iter)
{
int ret;
@ -603,26 +604,26 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
* in its place before the TLBs are flushed.
*
* Delay processing of the zapped SPTE until after TLBs are flushed and
* the REMOVED_SPTE is replaced (see below).
* the FROZEN_SPTE is replaced (see below).
*/
ret = __tdp_mmu_set_spte_atomic(iter, REMOVED_SPTE);
ret = __tdp_mmu_set_spte_atomic(iter, FROZEN_SPTE);
if (ret)
return ret;
kvm_flush_remote_tlbs_gfn(kvm, iter->gfn, iter->level);
/*
* No other thread can overwrite the removed SPTE as they must either
* No other thread can overwrite the frozen SPTE as they must either
* wait on the MMU lock or use tdp_mmu_set_spte_atomic() which will not
* overwrite the special removed SPTE value. Use the raw write helper to
* overwrite the special frozen SPTE value. Use the raw write helper to
* avoid an unnecessary check on volatile bits.
*/
__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
/*
* Process the zapped SPTE after flushing TLBs, and after replacing
* REMOVED_SPTE with 0. This minimizes the amount of time vCPUs are
* blocked by the REMOVED_SPTE and reduces contention on the child
* FROZEN_SPTE with 0. This minimizes the amount of time vCPUs are
* blocked by the FROZEN_SPTE and reduces contention on the child
* SPTEs.
*/
handle_changed_spte(kvm, iter->as_id, iter->gfn, iter->old_spte,
@ -652,12 +653,12 @@ static u64 tdp_mmu_set_spte(struct kvm *kvm, int as_id, tdp_ptep_t sptep,
/*
* No thread should be using this function to set SPTEs to or from the
* temporary removed SPTE value.
* temporary frozen SPTE value.
* If operating under the MMU lock in read mode, tdp_mmu_set_spte_atomic
* should be used. If operating under the MMU lock in write mode, the
* use of the removed SPTE should not be necessary.
* use of the frozen SPTE should not be necessary.
*/
WARN_ON_ONCE(is_removed_spte(old_spte) || is_removed_spte(new_spte));
WARN_ON_ONCE(is_frozen_spte(old_spte) || is_frozen_spte(new_spte));
old_spte = kvm_tdp_mmu_write_spte(sptep, old_spte, new_spte, level);
@ -1126,7 +1127,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
* If SPTE has been frozen by another thread, just give up and
* retry, avoiding unnecessary page table allocation and free.
*/
if (is_removed_spte(iter.old_spte))
if (is_frozen_spte(iter.old_spte))
goto retry;
if (iter.level == fault->goal_level)
@ -1339,17 +1340,15 @@ bool kvm_tdp_mmu_wrprot_slot(struct kvm *kvm,
return spte_set;
}
static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(void)
{
struct kvm_mmu_page *sp;
gfp |= __GFP_ZERO;
sp = kmem_cache_alloc(mmu_page_header_cache, gfp);
sp = kmem_cache_zalloc(mmu_page_header_cache, GFP_KERNEL_ACCOUNT);
if (!sp)
return NULL;
sp->spt = (void *)__get_free_page(gfp);
sp->spt = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
if (!sp->spt) {
kmem_cache_free(mmu_page_header_cache, sp);
return NULL;
@ -1358,47 +1357,6 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp)
return sp;
}
static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
struct tdp_iter *iter,
bool shared)
{
struct kvm_mmu_page *sp;
kvm_lockdep_assert_mmu_lock_held(kvm, shared);
/*
* Since we are allocating while under the MMU lock we have to be
* careful about GFP flags. Use GFP_NOWAIT to avoid blocking on direct
* reclaim and to avoid making any filesystem callbacks (which can end
* up invoking KVM MMU notifiers, resulting in a deadlock).
*
* If this allocation fails we drop the lock and retry with reclaim
* allowed.
*/
sp = __tdp_mmu_alloc_sp_for_split(GFP_NOWAIT | __GFP_ACCOUNT);
if (sp)
return sp;
rcu_read_unlock();
if (shared)
read_unlock(&kvm->mmu_lock);
else
write_unlock(&kvm->mmu_lock);
iter->yielded = true;
sp = __tdp_mmu_alloc_sp_for_split(GFP_KERNEL_ACCOUNT);
if (shared)
read_lock(&kvm->mmu_lock);
else
write_lock(&kvm->mmu_lock);
rcu_read_lock();
return sp;
}
/* Note, the caller is responsible for initializing @sp. */
static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
struct kvm_mmu_page *sp, bool shared)
@ -1445,7 +1403,6 @@ static int tdp_mmu_split_huge_pages_root(struct kvm *kvm,
{
struct kvm_mmu_page *sp = NULL;
struct tdp_iter iter;
int ret = 0;
rcu_read_lock();
@ -1469,17 +1426,31 @@ retry:
continue;
if (!sp) {
sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, shared);
rcu_read_unlock();
if (shared)
read_unlock(&kvm->mmu_lock);
else
write_unlock(&kvm->mmu_lock);
sp = tdp_mmu_alloc_sp_for_split();
if (shared)
read_lock(&kvm->mmu_lock);
else
write_lock(&kvm->mmu_lock);
if (!sp) {
ret = -ENOMEM;
trace_kvm_mmu_split_huge_page(iter.gfn,
iter.old_spte,
iter.level, ret);
break;
iter.level, -ENOMEM);
return -ENOMEM;
}
if (iter.yielded)
continue;
rcu_read_lock();
iter.yielded = true;
continue;
}
tdp_mmu_init_child_sp(sp, &iter);
@ -1500,7 +1471,7 @@ retry:
if (sp)
tdp_mmu_free_sp(sp);
return ret;
return 0;
}
@ -1801,12 +1772,11 @@ int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
*
* WARNING: This function is only intended to be called during fast_page_fault.
*/
u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, gfn_t gfn,
u64 *spte)
{
struct tdp_iter iter;
struct kvm_mmu *mmu = vcpu->arch.mmu;
gfn_t gfn = addr >> PAGE_SHIFT;
tdp_ptep_t sptep = NULL;
tdp_mmu_for_each_pte(iter, mmu, gfn, gfn + 1) {


@ -64,7 +64,7 @@ static inline void kvm_tdp_mmu_walk_lockless_end(void)
int kvm_tdp_mmu_get_walk(struct kvm_vcpu *vcpu, u64 addr, u64 *sptes,
int *root_level);
u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, u64 addr,
u64 *kvm_tdp_mmu_fast_pf_get_last_sptep(struct kvm_vcpu *vcpu, gfn_t gfn,
u64 *spte);
#ifdef CONFIG_X86_64
