# SPDX-License-Identifier: GPL-2.0-only
/aarch64/get-reg-list
/aarch64/get-reg-list-sve
/aarch64/vgic_init
/s390x/memop
/s390x/resets
/s390x/sync_regs_test
/x86_64/cr4_cpuid_sync_test
/x86_64/debug_regs
/x86_64/evmcs_test
/x86_64/get_cpuid_test
/x86_64/get_msr_index_features
/x86_64/kvm_pv_test
/x86_64/hyperv_clock
/x86_64/hyperv_cpuid
/x86_64/hyperv_features
/x86_64/mmio_warning_test
/x86_64/platform_info_test
/x86_64/set_boot_cpu_id
/x86_64/set_sregs_test
/x86_64/smm_test
/x86_64/state_test
/x86_64/svm_vmcall_test
/x86_64/sync_regs_test
/x86_64/tsc_msrs_test
/x86_64/userspace_msr_exit_test
/x86_64/vmx_apic_access_test
/x86_64/vmx_close_while_nested_test
/x86_64/vmx_dirty_log_test
/x86_64/vmx_preemption_timer_test
/x86_64/vmx_set_nested_state_test
/x86_64/vmx_tsc_adjust_test
/x86_64/vmx_nested_tsc_scaling_test
/x86_64/xapic_ipi_test
/x86_64/xen_shinfo_test
/x86_64/xen_vmcall_test
/x86_64/xss_msr_test
/x86_64/vmx_pmu_msrs_test
/demand_paging_test
/dirty_log_test
/dirty_log_perf_test
/hardware_disable_test
/kvm_create_max_vcpus

KVM: selftests: Add a test for kvm page table code
This test serves as a performance tester and a bug reproducer for the
kvm page table code (GPA->HPA mappings), so it gives guidance to
people trying to improve kvm.
The function guest_code() can cover the conditions where a single vcpu or
multiple vcpus access guest pages within the same memory region, in three
VM stages (before dirty logging, during dirty logging, after dirty logging).
Besides, the backing src memory type (ANONYMOUS/THP/HUGETLB) of the tested
memory region can be specified by users, which means users can choose
whether normal page mappings or block mappings are created in the test.
If ANONYMOUS memory is specified, kvm will create normal page mappings
for the tested memory region before dirty logging, and update the attributes
of the page mappings from RO to RW during dirty logging. If THP/HUGETLB
memory is specified, kvm will create block mappings for the tested memory
region before dirty logging, split the block mappings into normal page
mappings during dirty logging, and coalesce the page mappings back into
block mappings after dirty logging is stopped.
So in summary, as a performance tester, this test can present the
performance of kvm creating/updating normal page mappings, or the
performance of kvm creating/splitting/recovering block mappings,
through execution time.
When we need to coalesce the page mappings back into block mappings after
dirty logging is stopped, we first have to invalidate *all* the TLB
entries for the page mappings right before installation of the block entry,
because a TLB conflict abort error could occur if we fail to invalidate the
TLB entries fully. We have hit this TLB conflict twice in the aarch64
software implementation and fixed it. As this test can simulate the process
of a VM with block mappings going from dirty logging enabled to dirty
logging stopped, it can also reproduce this TLB conflict abort caused by
inadequate TLB invalidation when coalescing tables.
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Reviewed-by: Ben Gardon <bgardon@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-Id: <20210330080856.14940-11-wangyanan55@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-03-30 16:08:56 +08:00
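The split/coalesce cost the commit message describes scales with the number of page-table entries one block mapping covers, and all of those entries' TLB lines must be invalidated before the block entry is reinstalled. A minimal illustrative sketch (plain Python arithmetic only, not part of the C selftest; sizes assume a typical aarch64 4 KiB granule with 2 MiB blocks):

```python
# Illustrative arithmetic: how many normal-page leaf entries replace one
# set of block mappings when dirty logging forces a split.
PAGE_SIZE = 4 << 10    # 4 KiB granule (assumed typical configuration)
BLOCK_SIZE = 2 << 20   # 2 MiB block mapping (THP/HUGETLB case)

def entries_after_split(region_size, block_size=BLOCK_SIZE, page_size=PAGE_SIZE):
    """Leaf entry counts for a region before and after splitting blocks."""
    blocks = region_size // block_size   # block entries before dirty logging
    pages = region_size // page_size     # page entries during dirty logging
    return blocks, pages

# A 1 GiB test region: 512 block entries become 262144 page entries, all of
# whose TLB entries must be invalidated before coalescing back to blocks.
blocks, pages = entries_after_split(1 << 30)
```

This is only meant to convey the order of magnitude of the invalidation work; the real test measures it directly through execution time.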

/kvm_page_table_test
/memslot_modification_stress_test

KVM: selftests: add a memslot-related performance benchmark
This benchmark contains the following tests:
* Map test, where the host unmaps guest memory while the guest writes to
it (maps it).
The test is designed in a way to make the unmap operation on the host
take a negligible amount of time in comparison with the mapping
operation in the guest.
The test area is actually split in two: the first half is being mapped
by the guest while the second half is being unmapped by the host.
Then a guest <-> host sync happens and the areas are reversed.
* Unmap test, which is broadly similar to the above map test but
designed in the opposite way: the mapping operation in the guest
takes a negligible amount of time in comparison with the unmap
operation on the host.
This test is available in two variants: with a per-page unmap operation
or a chunked one (using a 2 MiB chunk size).
* Move active area test, which involves moving the last (highest gfn)
memslot a bit back and forth on the host while the guest
concurrently writes around the area being moved (including over the
moved memslot).
* Move inactive area test, which is similar to the previous move active
area test, but now all guest writes happen outside of the area being
moved.
* Read / write test, in which the guest writes to the beginning of each
page of the test area while the host writes to the middle of each such
page.
Then each side checks the values the other side has written.
This particular test is not expected to give different results depending
on the particular memslot implementation; it is meant as a rough sanity
check and to provide insight into the expected spread of test results.
Each test performs its operation in a loop until a test period ends
(5 seconds by default, but configurable).
Then the total count of loops done is divided by the actual elapsed
time to give the test result.
The tests have a configurable memslot cap via the "-s" test option; by
default the system maximum is used.
Each test is repeated a particular number of times (by default 20
times), and the best result achieved is printed.
The test memory area is divided equally between memslots; the remainder
is added to the last memslot.
The test area size does not depend on the number of memslots in use.
The tests also measure the time that it took to add all these memslots.
The best result from the tests that use the whole test area is printed
after all the requested tests are done.
In general, these tests are designed to use as much memory as possible
(within reason) while still doing 100+ loops even on high memslot counts
with the default test length.
Increasing the test runtime makes it increasingly more likely that some
event will happen on the system during the test run, which might lower
the test result.
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-Id: <8d31bb3d92bc8fa33a9756fa802ee14266ab994e.1618253574.git.maciej.szmigiero@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-04-13 22:08:28 +08:00

/memslot_perf_test
/set_memory_region_test
/steal_time