KVM: x86/mmu: Clean up the gorilla math in mmu_topup_memory_caches()
Clean up the minimums in mmu_topup_memory_caches() to document the
driving mechanisms behind the minimums. Now that encountering an empty
cache is unlikely to trigger BUG_ON(), it is less dangerous to be more
precise when defining the minimums.

For rmaps, the logic is 1 parent PTE per level, plus a single rmap, and
prefetched rmaps. The extra objects in the current '8 + PREFETCH'
minimum came about due to an abundance of paranoia in commit c41ef344de
("KVM: MMU: increase per-vcpu rmap cache alloc size"), i.e. it could
have increased the minimum to 2 rmaps. Furthermore, the unexpected
extra rmap case was killed off entirely by commits f759e2b4c7 ("KVM:
MMU: avoid pte_list_desc running out in kvm_mmu_pte_write") and
f5a1e9f895 ("KVM: MMU: remove call to kvm_mmu_pte_write from
walk_addr").

For the so-called page cache, replace '8' with 2*PT64_ROOT_MAX_LEVEL.
The 2x multiplier is needed because the cache is used for both shadow
pages and gfn arrays for indirect MMUs.

And finally, for page headers, replace '4' with PT64_ROOT_MAX_LEVEL.
Note, KVM now supports 5-level paging, i.e. the old minimums that used
a baseline derived from 4-level paging were technically wrong. But,
KVM always allocates roots in a separate flow, e.g. it's impossible in
the current implementation to actually need 5 new shadow pages in a
single flow. Use PT64_ROOT_MAX_LEVEL unmodified instead of subtracting
1, as the direct usage is likely more intuitive to uninformed readers,
and the inflated minimum is unlikely to affect functionality in
practice.

Reviewed-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200703023545.8771-9-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent f3747a5a9e
commit 531281ad98
@@ -1104,14 +1104,17 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu)
 {
 	int r;
 
+	/* 1 rmap, 1 parent PTE per level, and the prefetched rmaps. */
 	r = mmu_topup_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache,
-				   8 + PTE_PREFETCH_NUM);
+				   1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache, 8);
+	r = mmu_topup_memory_cache(&vcpu->arch.mmu_page_cache,
+				   2 * PT64_ROOT_MAX_LEVEL);
 	if (r)
 		return r;
-	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache, 4);
+	return mmu_topup_memory_cache(&vcpu->arch.mmu_page_header_cache,
+				      PT64_ROOT_MAX_LEVEL);
 }
 
 static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
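
To make the arithmetic concrete, below is a minimal standalone sketch
(not kernel code) of the old and new minimums. It assumes the values
the relevant macros had around the time of this commit,
PT64_ROOT_MAX_LEVEL = 5 (5-level paging) and PTE_PREFETCH_NUM = 8, and
restates them locally purely so the example compiles on its own.

#include <stdio.h>

/*
 * Stand-ins for the kernel macros, restated here only to keep the
 * sketch self-contained; the assumed values match x86 KVM at the
 * time of this commit (5-level paging, 8 prefetched PTEs).
 */
#define PT64_ROOT_MAX_LEVEL	5
#define PTE_PREFETCH_NUM	8

int main(void)
{
	/* Old, hand-waved minimums. */
	int old_rmap_min   = 8 + PTE_PREFETCH_NUM;	/* 16 */
	int old_page_min   = 8;
	int old_header_min = 4;

	/* 1 rmap + 1 parent PTE per level + the prefetched rmaps. */
	int rmap_min   = 1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM; /* 14 */
	/* 2x because the cache feeds both shadow pages and gfn arrays. */
	int page_min   = 2 * PT64_ROOT_MAX_LEVEL;		/* 10 */
	/* One page header per level, left unmodified for readability. */
	int header_min = PT64_ROOT_MAX_LEVEL;			/*  5 */

	printf("rmap:   %2d -> %2d\n", old_rmap_min, rmap_min);
	printf("page:   %2d -> %2d\n", old_page_min, page_min);
	printf("header: %2d -> %2d\n", old_header_min, header_min);
	return 0;
}

Worked out, the rmap minimum actually drops from 16 to 14 objects,
while the page and page-header minimums grow (8 -> 10 and 4 -> 5) to
cover 5-level paging; the point of the change is that each number now
documents where it comes from, not tuning.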