linux-next/virt/kvm

commit 8931a454ae ("KVM: Take mmu_lock when handling MMU notifier iff the hva hits a memslot")
Author: Sean Christopherson <seanjc@google.com>
Date:   2021-04-17 08:31:08 -04:00

Defer acquiring mmu_lock in the MMU notifier paths until a "hit" has been
detected in the memslots, i.e. don't take the lock for notifications that
don't affect the guest.
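
As a sketch of the pattern (the handler signature, the overlap check, and
names like handle_hva_range() and gfn_handler_t are simplified assumptions
for illustration, not the exact kvm_main.c code; SRCU/locking around the
memslot walk is elided):

  typedef bool (*gfn_handler_t)(struct kvm *kvm,
                                struct kvm_memory_slot *slot,
                                unsigned long start, unsigned long end);

  static bool handle_hva_range(struct kvm *kvm, unsigned long start,
                               unsigned long end, gfn_handler_t handler)
  {
      struct kvm_memory_slot *slot;
      bool locked = false, ret = false;

      kvm_for_each_memslot(slot, kvm_memslots(kvm)) {
          unsigned long hva_start = slot->userspace_addr;
          unsigned long hva_end = hva_start +
                                  (slot->npages << PAGE_SHIFT);

          /* Skip memslots that don't overlap [start, end). */
          if (end <= hva_start || start >= hva_end)
              continue;

          /* Take mmu_lock lazily, only on the first hit... */
          if (!locked) {
              locked = true;
              KVM_MMU_LOCK(kvm);
          }
          ret |= handler(kvm, slot, max(start, hva_start),
                         min(end, hva_end));
      }

      /* ...so a miss in every memslot never takes the lock. */
      if (locked)
          KVM_MMU_UNLOCK(kvm);

      return ret;
  }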

For small VMs, spurious locking is a minor annoyance.  And for "volatile"
setups where the majority of notifications _are_ relevant, this barely
qualifies as an optimization.

But, for large VMs (hundreds of threads) with static setups, e.g. no
page migration, no swapping, etc., the vast majority of MMU notifier
callbacks will be unrelated to the guest, e.g. will often be in response
to the userspace VMM adjusting its own virtual address space.  In such
large VMs, acquiring mmu_lock can be painful as it blocks vCPUs from
handling page faults.  In some scenarios it can even be "fatal" in the
sense that it causes unacceptable brownouts, e.g. when rebuilding huge
pages after live migration, where a significant percentage of vCPUs will
be attempting to handle page faults.

x86's TDP MMU implementation is especially susceptible to spurious
locking due to it taking mmu_lock for read when handling page faults.
Because rwlock is fair, a single writer will stall future readers, while
the writer is itself stalled waiting for in-progress readers to complete.
This is exacerbated by the MMU notifiers often firing multiple times in
quick succession, e.g. moving a page will (always?) invoke three separate
notifiers: .invalidate_range_start(), .invalidate_range_end(), and
.change_pte().  Unnecessarily taking mmu_lock each time means even a
single spurious sequence can be problematic.
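
As a minimal sketch of that interaction (kvm->mmu_lock is an rwlock_t
here, per the TDP MMU's rwlock conversion; the comments describe fair
rwlock queueing behavior):

  /* vCPU page fault path (TDP MMU): many vCPUs read-lock in parallel. */
  read_lock(&kvm->mmu_lock);
  /* ... walk the paging structures, install SPTEs ... */
  read_unlock(&kvm->mmu_lock);

  /*
   * MMU notifier path: a lone writer.  Because the rwlock is fair, the
   * moment this writer queues, *new* readers (vCPU faults) block behind
   * it, while the writer itself waits for in-progress readers to drain.
   */
  write_lock(&kvm->mmu_lock);
  /* ... zap/adjust SPTEs for the notified hva range ... */
  write_unlock(&kvm->mmu_lock);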

Note, this optimizes only the unpaired callbacks.  Optimizing the
.invalidate_range_{start,end}() pairs is more complex and will be done in
a future patch.

Suggested-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210402005658.3024832-9-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

File             | Last commit                                                                   | Date
async_pf.c       | mm/gup: remove task_struct pointer for all gup code                           | 2020-08-12 10:58:04 -07:00
async_pf.h       | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 504            | 2019-06-19 17:09:56 +02:00
coalesced_mmio.c | mm, kvm: account kvm_vcpu_mmap to kmemcg                                      | 2020-12-19 11:18:37 -08:00
coalesced_mmio.h | License cleanup: add SPDX GPL-2.0 license identifier to files with no license | 2017-11-02 11:10:55 +01:00
dirty_ring.c     | KVM: x86/mmu: Use an rwlock for the x86 MMU                                   | 2021-02-04 05:27:43 -05:00
eventfd.c        | kvm/eventfd: Drain events from eventfd in irqfd_wakeup()                      | 2020-11-15 09:49:11 -05:00
irqchip.c        | KVM/arm updates for 5.3                                                       | 2019-07-11 15:14:16 +02:00
Kconfig          | entry: Provide infrastructure for work before transitioning to guest mode    | 2020-07-24 15:03:42 +02:00
kvm_main.c       | KVM: Take mmu_lock when handling MMU notifier iff the hva hits a memslot      | 2021-04-17 08:31:08 -04:00
mmu_lock.h       | KVM: x86/mmu: Use an rwlock for the x86 MMU                                   | 2021-02-04 05:27:43 -05:00
vfio.c           | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500            | 2019-06-19 17:09:55 +02:00
vfio.h           | License cleanup: add SPDX GPL-2.0 license identifier to files with no license | 2017-11-02 11:10:55 +01:00