vfio/common: Change vfio_devices_all_running_and_saving() logic to equivalent one

vfio_devices_all_running_and_saving() is used to check if migration is
in pre-copy phase. This is done by checking if migration is in setup or
active states and if all VFIO devices are in pre-copy state, i.e.
_SAVING | _RUNNING.

In VFIO migration protocol v2, pre-copy support is optional. Hence, a
matching v2 protocol pre-copy state can't be used here.

As preparation for adding v2 protocol, change
vfio_devices_all_running_and_saving() logic such that it doesn't use the
VFIO pre-copy state.

The new equivalent logic checks if migration is in active state and if
all VFIO devices are in running state [1]. No functional changes
intended.

[1] Note that checking if migration is in setup or active states and if
all VFIO devices are in running state doesn't guarantee that we are in
pre-copy phase; thus, we check that migration is in active state only.

Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Link: https://lore.kernel.org/r/20230216143630.25610-5-avihaih@nvidia.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
commit 8b942af393 (parent b051a3f640)
Author: Avihai Horon, 2023-02-16 16:36:23 +02:00
Committed by: Alex Williamson
@@ -40,6 +40,7 @@
 #include "trace.h"
 #include "qapi/error.h"
 #include "migration/migration.h"
+#include "migration/misc.h"
 #include "sysemu/tpm.h"
 
 VFIOGroupList vfio_group_list =
@@ -363,13 +364,16 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
     return true;
 }
 
-static bool vfio_devices_all_running_and_saving(VFIOContainer *container)
+/*
+ * Check if all VFIO devices are running and migration is active, which is
+ * essentially equivalent to the migration being in pre-copy phase.
+ */
+static bool vfio_devices_all_running_and_mig_active(VFIOContainer *container)
 {
     VFIOGroup *group;
     VFIODevice *vbasedev;
-    MigrationState *ms = migrate_get_current();
 
-    if (!migration_is_setup_or_active(ms->state)) {
+    if (!migration_is_active(migrate_get_current())) {
         return false;
     }
 
@@ -381,8 +385,7 @@ static bool vfio_devices_all_running_and_saving(VFIOContainer *container)
             return false;
         }
 
-        if ((migration->device_state & VFIO_DEVICE_STATE_V1_SAVING) &&
-            (migration->device_state & VFIO_DEVICE_STATE_V1_RUNNING)) {
+        if (migration->device_state & VFIO_DEVICE_STATE_V1_RUNNING) {
             continue;
         } else {
             return false;
@@ -461,7 +464,7 @@ static int vfio_dma_unmap(VFIOContainer *container,
     };
 
     if (iotlb && container->dirty_pages_supported &&
-        vfio_devices_all_running_and_saving(container)) {
+        vfio_devices_all_running_and_mig_active(container)) {
         return vfio_dma_unmap_bitmap(container, iova, size, iotlb);
    }
 
@@ -488,7 +491,7 @@ static int vfio_dma_unmap(VFIOContainer *container,
         return -errno;
     }
 
-    if (iotlb && vfio_devices_all_running_and_saving(container)) {
+    if (iotlb && vfio_devices_all_running_and_mig_active(container)) {
         cpu_physical_memory_set_dirty_range(iotlb->translated_addr, size,
                                             tcg_enabled() ? DIRTY_CLIENTS_ALL :
                                             DIRTY_CLIENTS_NOCODE);