/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Kernel-based Virtual Machine driver for Linux
 *
 * This module enables machines with Intel VT-x extensions to run virtual
 * machines without emulation or binary translation.
 *
 * MMU support
 *
 * Copyright (C) 2006 Qumranet, Inc.
 * Copyright 2010 Red Hat, Inc. and/or its affiliates.
 *
 * Authors:
 *   Yaniv Kamay  <yaniv@qumranet.com>
 *   Avi Kivity   <avi@qumranet.com>
 */

/*
 * We need the mmu code to access both 32-bit and 64-bit guest ptes,
 * so the code in this file is compiled twice, once per pte size.
 */
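
/*
 * PTTYPE selects which guest page-table flavor is instantiated below:
 * 64-bit PTEs, 32-bit PTEs, or EPT entries (PTTYPE_EPT).  FNAME() gives
 * each instantiation its own symbol prefix (paging64_*, paging32_* or
 * ept_*) so the variants can coexist in a single build.
 */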
#if PTTYPE == 64
	#define pt_element_t u64
	#define guest_walker guest_walker64
	#define FNAME(name) paging##64_##name
	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
	#define PT_LEVEL_BITS PT64_LEVEL_BITS
	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
	#ifdef CONFIG_X86_64
	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
	#define CMPXCHG cmpxchg
	#else
	#define CMPXCHG cmpxchg64
	#define PT_MAX_FULL_LEVELS 2
	#endif
#elif PTTYPE == 32
	#define pt_element_t u32
	#define guest_walker guest_walker32
	#define FNAME(name) paging##32_##name
	#define PT_BASE_ADDR_MASK PT32_BASE_ADDR_MASK
	#define PT_LVL_ADDR_MASK(lvl) PT32_LVL_ADDR_MASK(lvl)
	#define PT_LVL_OFFSET_MASK(lvl) PT32_LVL_OFFSET_MASK(lvl)
	#define PT_INDEX(addr, level) PT32_INDEX(addr, level)
	#define PT_LEVEL_BITS PT32_LEVEL_BITS
	#define PT_MAX_FULL_LEVELS 2
	#define PT_GUEST_DIRTY_SHIFT PT_DIRTY_SHIFT
	#define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
	#define PT_HAVE_ACCESSED_DIRTY(mmu) true
	#define CMPXCHG cmpxchg
#elif PTTYPE == PTTYPE_EPT
	#define pt_element_t u64
	#define guest_walker guest_walkerEPT
	#define FNAME(name) ept_##name
	#define PT_BASE_ADDR_MASK GUEST_PT64_BASE_ADDR_MASK
	#define PT_LVL_ADDR_MASK(lvl) PT64_LVL_ADDR_MASK(lvl)
	#define PT_LVL_OFFSET_MASK(lvl) PT64_LVL_OFFSET_MASK(lvl)
	#define PT_INDEX(addr, level) PT64_INDEX(addr, level)
	#define PT_LEVEL_BITS PT64_LEVEL_BITS
	#define PT_GUEST_DIRTY_SHIFT 9
	#define PT_GUEST_ACCESSED_SHIFT 8
	#define PT_HAVE_ACCESSED_DIRTY(mmu) ((mmu)->ept_ad)
	#define CMPXCHG cmpxchg64
	#define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
#else
	#error Invalid PTTYPE value
#endif
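
/*
 * PT_GUEST_DIRTY_MASK and PT_GUEST_ACCESSED_MASK are built from whichever
 * shift values the selected PTTYPE block defines above: the architectural
 * accessed/dirty bit positions for legacy paging, or bits 8/9 for EPT,
 * where they are only meaningful when EPT A/D bits are enabled
 * (PT_HAVE_ACCESSED_DIRTY).
 */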
#define PT_GUEST_DIRTY_MASK    (1 << PT_GUEST_DIRTY_SHIFT)
#define PT_GUEST_ACCESSED_MASK (1 << PT_GUEST_ACCESSED_SHIFT)

#define gpte_to_gfn_lvl FNAME(gpte_to_gfn_lvl)
#define gpte_to_gfn(pte) gpte_to_gfn_lvl((pte), PG_LEVEL_4K)

/*
 * The guest_walker structure emulates the behavior of the hardware page
 * table walker.
 */
struct guest_walker {
	int level;
	unsigned max_level;
	gfn_t table_gfn[PT_MAX_FULL_LEVELS];
	pt_element_t ptes[PT_MAX_FULL_LEVELS];
	pt_element_t prefetch_ptes[PTE_PREFETCH_NUM];
	gpa_t pte_gpa[PT_MAX_FULL_LEVELS];
	pt_element_t __user *ptep_user[PT_MAX_FULL_LEVELS];
	bool pte_writable[PT_MAX_FULL_LEVELS];
	unsigned int pt_access[PT_MAX_FULL_LEVELS];
	unsigned int pte_access;
	gfn_t gfn;
	struct x86_exception fault;
};

static gfn_t gpte_to_gfn_lvl(pt_element_t gpte, int lvl)
{
	return (gpte & PT_LVL_ADDR_MASK(lvl)) >> PAGE_SHIFT;
}
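
/*
 * protect_clean_gpte() strips ACC_WRITE_MASK from the access rights when
 * the guest PTE's dirty bit is clear, so the first guest write faults and
 * the dirty bit can be set; dirty gptes keep their write permission.  It
 * is a no-op when the MMU does not track accessed/dirty bits.
 */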
static inline void FNAME(protect_clean_gpte)(struct kvm_mmu *mmu, unsigned *access,
					     unsigned gpte)
{
	unsigned mask;

	/* dirty bit is not supported, so no need to track it */
	if (!PT_HAVE_ACCESSED_DIRTY(mmu))
		return;

	BUILD_BUG_ON(PT_WRITABLE_MASK != ACC_WRITE_MASK);

	mask = (unsigned)~ACC_WRITE_MASK;
	/* Allow write access to dirty gptes */
	mask |= (gpte >> (PT_GUEST_DIRTY_SHIFT - PT_WRITABLE_SHIFT)) &
		PT_WRITABLE_MASK;
	*access &= mask;
}
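
/*
 * A legacy guest PTE is present when its present bit is set; an EPT entry
 * is present when any of its read/write/execute bits (the low three bits)
 * is set, hence the "pte & 7" below.
 */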
static inline int FNAME(is_present_gpte)(unsigned long pte)
{
#if PTTYPE != PTTYPE_EPT
	return pte & PT_PRESENT_MASK;
#else
	return pte & 7;
#endif
}
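
/*
 * Reserved-bit checking: is_bad_mt_xwr() flags EPT entries with an illegal
 * memtype/XWR combination (and compiles down to "return false" for the
 * non-EPT instantiations), while is_rsvd_bits_set() combines it with the
 * generic reserved-bit check for the guest's paging mode.
 */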
static bool FNAME(is_bad_mt_xwr)(struct rsvd_bits_validate *rsvd_check, u64 gpte)
{
#if PTTYPE != PTTYPE_EPT
	return false;
#else
	return __is_bad_mt_xwr(rsvd_check, gpte);
#endif
}

static bool FNAME(is_rsvd_bits_set)(struct kvm_mmu *mmu, u64 gpte, int level)
{
	return __is_rsvd_bits_set(&mmu->guest_rsvd_check, gpte, level) ||
	       FNAME(is_bad_mt_xwr)(&mmu->guest_rsvd_check, gpte);
}
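
/*
 * cmpxchg_gpte() atomically updates a guest PTE through its userspace
 * mapping.  The common case pins the page with get_user_pages_fast() and
 * maps it with kmap_atomic(); VM_PFNMAP VMAs (e.g. guest memory that is
 * not backed by struct pages) are handled by resolving the physical
 * address by hand and mapping it with memremap().  Returns -EFAULT if the
 * PTE cannot be mapped, otherwise whether the PTE changed under us (i.e.
 * the cmpxchg did not observe orig_pte).
 */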
static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
			       pt_element_t __user *ptep_user, unsigned index,
			       pt_element_t orig_pte, pt_element_t new_pte)
{
	int npages;
	pt_element_t ret;
	pt_element_t *table;
	struct page *page;

	npages = get_user_pages_fast((unsigned long)ptep_user, 1, FOLL_WRITE, &page);
	if (likely(npages == 1)) {
		table = kmap_atomic(page);
		ret = CMPXCHG(&table[index], orig_pte, new_pte);
		kunmap_atomic(table);

		kvm_release_page_dirty(page);
	} else {
		struct vm_area_struct *vma;
		unsigned long vaddr = (unsigned long)ptep_user & PAGE_MASK;
		unsigned long pfn;
		unsigned long paddr;

		mmap_read_lock(current->mm);
		vma = find_vma_intersection(current->mm, vaddr, vaddr + PAGE_SIZE);
		if (!vma || !(vma->vm_flags & VM_PFNMAP)) {
			mmap_read_unlock(current->mm);
			return -EFAULT;
		}
		pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
		paddr = pfn << PAGE_SHIFT;
		table = memremap(paddr, PAGE_SIZE, MEMREMAP_WB);
		if (!table) {
			mmap_read_unlock(current->mm);
			return -EFAULT;
		}
		ret = CMPXCHG(&table[index], orig_pte, new_pte);
		memunmap(table);
		mmap_read_unlock(current->mm);
	}

	return (ret != orig_pte);
}
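
/*
 * prefetch_invalid_gpte() returns true, and drops the corresponding
 * shadow PTE, when the guest PTE cannot be prefetched: it is not present,
 * it has reserved bits set, or its accessed bit is clear while the MMU
 * tracks accessed/dirty bits.
 */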
static bool FNAME(prefetch_invalid_gpte)(struct kvm_vcpu *vcpu,
					 struct kvm_mmu_page *sp, u64 *spte,
					 u64 gpte)
{
	if (!FNAME(is_present_gpte)(gpte))
		goto no_present;

	/* if accessed bit is not supported prefetch non accessed gpte */
	if (PT_HAVE_ACCESSED_DIRTY(vcpu->arch.mmu) &&
	    !(gpte & PT_GUEST_ACCESSED_MASK))
		goto no_present;

	if (FNAME(is_rsvd_bits_set)(vcpu->arch.mmu, gpte, PG_LEVEL_4K))
		goto no_present;

	return false;

no_present:
	drop_spte(vcpu->kvm, spte);
	return true;
}
|
|
|
|
|
2016-07-13 06:18:51 +08:00
|
|
|
/*
|
|
|
|
* For PTTYPE_EPT, a page table can be executable but not readable
|
|
|
|
* on supported processors. Therefore, set_spte does not automatically
|
|
|
|
* set bit 0 if execute only is supported. Here, we repurpose ACC_USER_MASK
|
|
|
|
* to signify readability since it isn't used in the EPT case
|
|
|
|
*/
|
2018-07-18 15:57:50 +08:00
|
|
|
static inline unsigned FNAME(gpte_access)(u64 gpte)
|
2013-08-05 16:07:09 +08:00
|
|
|
{
|
|
|
|
unsigned access;
|
2013-08-05 16:07:12 +08:00
|
|
|
#if PTTYPE == PTTYPE_EPT
|
|
|
|
access = ((gpte & VMX_EPT_WRITABLE_MASK) ? ACC_WRITE_MASK : 0) |
|
|
|
|
((gpte & VMX_EPT_EXECUTABLE_MASK) ? ACC_EXEC_MASK : 0) |
|
2016-07-13 06:18:51 +08:00
|
|
|
((gpte & VMX_EPT_READABLE_MASK) ? ACC_USER_MASK : 0);
|
2013-08-05 16:07:12 +08:00
|
|
|
#else
|
2016-02-23 21:19:20 +08:00
|
|
|
BUILD_BUG_ON(ACC_EXEC_MASK != PT_PRESENT_MASK);
|
|
|
|
BUILD_BUG_ON(ACC_EXEC_MASK != 1);
|
|
|
|
access = gpte & (PT_WRITABLE_MASK | PT_USER_MASK | PT_PRESENT_MASK);
|
|
|
|
/* Combine NX with P (which is set here) to get ACC_EXEC_MASK. */
|
|
|
|
access ^= (gpte >> PT64_NX_SHIFT);
|
2013-08-05 16:07:12 +08:00
|
|
|
#endif
|
2013-08-05 16:07:09 +08:00
|
|
|
|
|
|
|
return access;
|
|
|
|
}
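As an aside on the #else branch above, the following standalone sketch (hypothetical SK_* constants, not the kernel's masks) shows why XOR-ing in the shifted NX bit works: ACC_EXEC_MASK aliases PT_PRESENT_MASK (bit 0), and a gpte that reaches this point is always present, so the XOR clears the exec permission exactly when NX=1:

#include <assert.h>
#include <stdint.h>

#define SK_PRESENT  (1ULL << 0)        /* doubles as the exec permission */
#define SK_WRITABLE (1ULL << 1)
#define SK_USER     (1ULL << 2)
#define SK_NX_SHIFT 63

static unsigned sk_gpte_access(uint64_t gpte)
{
        unsigned access = (unsigned)(gpte & (SK_WRITABLE | SK_USER | SK_PRESENT));

        /* Combine NX with P (known to be set) to derive the exec permission. */
        access ^= (unsigned)(gpte >> SK_NX_SHIFT);
        return access;
}

int main(void)
{
        uint64_t nx_gpte = SK_PRESENT | SK_WRITABLE | (1ULL << SK_NX_SHIFT);
        uint64_t x_gpte  = SK_PRESENT | SK_WRITABLE;

        assert(!(sk_gpte_access(nx_gpte) & SK_PRESENT));        /* exec cleared */
        assert(sk_gpte_access(x_gpte) & SK_PRESENT);            /* exec kept */
        return 0;
}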
|
|
|
|
|
2012-09-16 19:18:51 +08:00
|
|
|
static int FNAME(update_accessed_dirty_bits)(struct kvm_vcpu *vcpu,
|
|
|
|
struct kvm_mmu *mmu,
|
|
|
|
struct guest_walker *walker,
|
2020-06-23 05:58:29 +08:00
|
|
|
gpa_t addr, int write_fault)
|
2012-09-16 19:18:51 +08:00
|
|
|
{
|
|
|
|
unsigned level, index;
|
|
|
|
pt_element_t pte, orig_pte;
|
|
|
|
pt_element_t __user *ptep_user;
|
|
|
|
gfn_t table_gfn;
|
|
|
|
int ret;
|
|
|
|
|
2013-08-05 16:07:11 +08:00
|
|
|
/* dirty/accessed bits are not supported, so no need to update them */
|
2017-03-30 17:55:29 +08:00
|
|
|
if (!PT_HAVE_ACCESSED_DIRTY(mmu))
|
2013-08-05 16:07:11 +08:00
|
|
|
return 0;
|
|
|
|
|
2012-09-16 19:18:51 +08:00
|
|
|
for (level = walker->max_level; level >= walker->level; --level) {
|
|
|
|
pte = orig_pte = walker->ptes[level - 1];
|
|
|
|
table_gfn = walker->table_gfn[level - 1];
|
|
|
|
ptep_user = walker->ptep_user[level - 1];
|
|
|
|
index = offset_in_page(ptep_user) / sizeof(pt_element_t);
|
2013-08-05 16:07:10 +08:00
|
|
|
if (!(pte & PT_GUEST_ACCESSED_MASK)) {
|
2012-09-16 19:18:51 +08:00
|
|
|
trace_kvm_mmu_set_accessed_bit(table_gfn, index, sizeof(pte));
|
2013-08-05 16:07:10 +08:00
|
|
|
pte |= PT_GUEST_ACCESSED_MASK;
|
2012-09-16 19:18:51 +08:00
|
|
|
}
|
2013-08-05 16:07:09 +08:00
|
|
|
if (level == walker->level && write_fault &&
|
2013-08-05 16:07:10 +08:00
|
|
|
!(pte & PT_GUEST_DIRTY_MASK)) {
|
2012-09-16 19:18:51 +08:00
|
|
|
trace_kvm_mmu_set_dirty_bit(table_gfn, index, sizeof(pte));
|
2017-05-06 03:25:13 +08:00
|
|
|
#if PTTYPE == PTTYPE_EPT
|
2020-06-23 05:58:32 +08:00
|
|
|
if (kvm_x86_ops.nested_ops->write_log_dirty(vcpu, addr))
|
2017-05-06 03:25:13 +08:00
|
|
|
return -EINVAL;
|
|
|
|
#endif
|
2013-08-05 16:07:10 +08:00
|
|
|
pte |= PT_GUEST_DIRTY_MASK;
|
2012-09-16 19:18:51 +08:00
|
|
|
}
|
|
|
|
if (pte == orig_pte)
|
|
|
|
continue;
|
|
|
|
|
2013-09-09 19:52:33 +08:00
|
|
|
/*
|
|
|
|
* If the slot is read-only, simply do not process the accessed
|
|
|
|
* and dirty bits. This is the correct thing to do if the slot
|
|
|
|
* is ROM, and page tables in read-as-ROM/write-as-MMIO slots
|
|
|
|
* are only supported if the accessed and dirty bits are already
|
|
|
|
* set in the ROM (so that MMIO writes are never needed).
|
|
|
|
*
|
|
|
|
* Note that NPT does not allow this at all and faults, since
|
|
|
|
* it always wants nested page table entries for the guest
|
|
|
|
* page tables to be writable. And EPT works but will simply
|
|
|
|
* overwrite the read-only memory to set the accessed and dirty
|
|
|
|
* bits.
|
|
|
|
*/
|
|
|
|
if (unlikely(!walker->pte_writable[level - 1]))
|
|
|
|
continue;
|
|
|
|
|
2012-09-16 19:18:51 +08:00
|
|
|
ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index, orig_pte, pte);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
2015-04-08 21:39:23 +08:00
|
|
|
kvm_vcpu_mark_page_dirty(vcpu, table_gfn);
|
2016-02-25 02:02:31 +08:00
|
|
|
walker->ptes[level - 1] = pte;
|
2012-09-16 19:18:51 +08:00
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
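The heart of the loop above is the compare-and-swap publish step; a user-space sketch of that pattern (hypothetical SK_* masks, with GCC atomic builtins standing in for FNAME(cmpxchg_gpte)) could look like this:

#include <stdbool.h>
#include <stdint.h>

#define SK_ACCESSED (1ULL << 5)
#define SK_DIRTY    (1ULL << 6)

/* Returns true if the caller should retry the walk. */
static bool sk_update_ad_bits(uint64_t *ptep, bool is_leaf, bool write_fault)
{
        uint64_t orig = __atomic_load_n(ptep, __ATOMIC_RELAXED);
        uint64_t pte = orig | SK_ACCESSED;

        if (is_leaf && write_fault)
                pte |= SK_DIRTY;

        if (pte == orig)
                return false;           /* nothing to publish */

        /* A CAS failure means the pte changed under us: force a re-walk. */
        return !__atomic_compare_exchange_n(ptep, &orig, pte, false,
                                            __ATOMIC_RELAXED, __ATOMIC_RELAXED);
}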
|
|
|
|
|
2016-03-22 16:51:20 +08:00
|
|
|
static inline unsigned FNAME(gpte_pkeys)(struct kvm_vcpu *vcpu, u64 gpte)
|
|
|
|
{
|
|
|
|
unsigned pkeys = 0;
|
|
|
|
#if PTTYPE == 64
|
|
|
|
pte_t pte = {.pte = gpte};
|
|
|
|
|
|
|
|
pkeys = pte_flags_pkey(pte_flags(pte));
|
|
|
|
#endif
|
|
|
|
return pkeys;
|
|
|
|
}
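Outside of the kernel's pte_t helpers the extraction is just a shift and a mask; a standalone sketch assuming the protection key sits in pte bits 62:59 (as it does for 64-bit x86 ptes; the SK_* names are hypothetical):

#include <assert.h>
#include <stdint.h>

#define SK_PKEY_SHIFT 59
#define SK_PKEY_MASK  0xfULL

static unsigned sk_gpte_pkeys(uint64_t gpte)
{
        return (unsigned)((gpte >> SK_PKEY_SHIFT) & SK_PKEY_MASK);
}

int main(void)
{
        assert(sk_gpte_pkeys(5ULL << SK_PKEY_SHIFT) == 5);
        assert(sk_gpte_pkeys(0) == 0);
        return 0;
}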
|
|
|
|
|
KVM: x86/mmu: Optimize and clean up so called "last nonleaf level" logic
Drop the pre-computed last_nonleaf_level, which is arguably wrong and at
best confusing. Per the comment:
Can have large pages at levels 2..last_nonleaf_level-1.
the intent of the variable would appear to be to track what levels can
_legally_ have large pages, but that intent doesn't align with reality.
The computed value will be wrong for 5-level paging, or if 1GB pages are
not supported.
The flawed code is not a problem in practice, because except for 32-bit
PSE paging, bit 7 is reserved if large pages aren't supported at the
level. Take advantage of this invariant and simply omit the level magic
math for 64-bit page tables (including PAE).
For 32-bit paging (non-PAE), the adjustments are needed purely because
bit 7 is ignored if PSE=0. Retain that logic as is, but make
is_last_gpte() unique per PTTYPE so that the PSE check is avoided for
PAE and EPT paging. In the spirit of avoiding branches, bump the "last
nonleaf level" for 32-bit PSE paging by adding the PSE bit itself.
Note, bit 7 is ignored or has other meaning in CR3/EPTP, but despite
FNAME(walk_addr_generic) briefly grabbing CR3/EPTP in "pte", they are
not PTEs and will blow up all the other gpte helpers.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210622175739.3610207-51-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-23 01:57:35 +08:00
|
|
|
static inline bool FNAME(is_last_gpte)(struct kvm_mmu *mmu,
|
|
|
|
unsigned int level, unsigned int gpte)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* For EPT and PAE paging (both variants), bit 7 is either reserved at
|
|
|
|
* all levels or indicates a huge page (ignoring CR3/EPTP). In either
|
|
|
|
* case, bit 7 being set terminates the walk.
|
|
|
|
*/
|
|
|
|
#if PTTYPE == 32
|
|
|
|
/*
|
|
|
|
* 32-bit paging requires special handling because bit 7 is ignored if
|
|
|
|
* CR4.PSE=0, not reserved. Clear bit 7 in the gpte if the level is
|
|
|
|
* greater than the last level for which bit 7 is the PAGE_SIZE bit.
|
|
|
|
*
|
|
|
|
* The RHS has bit 7 set iff level < (2 + PSE). If it is clear, bit 7
|
|
|
|
* is not reserved and does not indicate a large page at this level,
|
|
|
|
* so clear PT_PAGE_SIZE_MASK in gpte if that is the case.
|
|
|
|
*/
|
|
|
|
gpte &= level - (PT32_ROOT_LEVEL + mmu->mmu_role.ext.cr4_pse);
|
|
|
|
#endif
|
|
|
|
/*
|
|
|
|
* PG_LEVEL_4K always terminates. The RHS has bit 7 set
|
|
|
|
* iff level <= PG_LEVEL_4K, which for our purpose means
|
|
|
|
* level == PG_LEVEL_4K; set PT_PAGE_SIZE_MASK in gpte then.
|
|
|
|
*/
|
|
|
|
gpte |= level - PG_LEVEL_4K - 1;
|
|
|
|
|
|
|
|
return gpte & PT_PAGE_SIZE_MASK;
|
|
|
|
}
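A quick standalone harness for the branchless arithmetic above, specialised to 32-bit paging where PT32_ROOT_LEVEL == 2, PG_LEVEL_4K == 1 and bit 7 is PT_PAGE_SIZE_MASK (the harness itself is hypothetical, not kernel code):

#include <assert.h>

#define SK_PAGE_SIZE_MASK 0x80u

static unsigned sk_is_last_gpte32(unsigned level, unsigned gpte, unsigned cr4_pse)
{
        gpte &= level - (2 + cr4_pse);  /* drop bit 7 if PSE can't apply here */
        gpte |= level - 1 - 1;          /* force bit 7 at the 4K level */
        return gpte & SK_PAGE_SIZE_MASK;
}

int main(void)
{
        /* PSE=1, level 2, PS bit set: a 4MB leaf, the walk terminates. */
        assert(sk_is_last_gpte32(2, SK_PAGE_SIZE_MASK, 1));
        /* PSE=0, level 2: bit 7 is ignored by hardware, keep walking. */
        assert(!sk_is_last_gpte32(2, SK_PAGE_SIZE_MASK, 0));
        /* Level 1 always terminates, whatever the gpte looks like. */
        assert(sk_is_last_gpte32(1, 0, 0));
        return 0;
}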
|
2007-01-06 08:36:40 +08:00
|
|
|
/*
|
2019-12-07 07:57:14 +08:00
|
|
|
* Fetch a guest pte for a guest virtual address, or for an L2's GPA.
|
2007-01-06 08:36:40 +08:00
|
|
|
*/
|
2010-09-10 23:30:47 +08:00
|
|
|
static int FNAME(walk_addr_generic)(struct guest_walker *walker,
|
|
|
|
struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
|
2019-12-07 07:57:14 +08:00
|
|
|
gpa_t addr, u32 access)
|
2006-12-10 18:21:36 +08:00
|
|
|
{
|
2012-09-16 19:18:51 +08:00
|
|
|
int ret;
|
2007-10-17 18:18:47 +08:00
|
|
|
pt_element_t pte;
|
treewide: Remove uninitialized_var() usage
Using uninitialized_var() is dangerous as it papers over real bugs[1]
(or can in the future), and suppresses unrelated compiler warnings
(e.g. "unused variable"). If the compiler thinks it is uninitialized,
either simply initialize the variable or make compiler changes.
In preparation for removing[2] the[3] macro[4], remove all remaining
needless uses with the following script:
git grep '\buninitialized_var\b' | cut -d: -f1 | sort -u | \
xargs perl -pi -e \
's/\buninitialized_var\(([^\)]+)\)/\1/g;
s:\s*/\* (GCC be quiet|to make compiler happy) \*/$::g;'
drivers/video/fbdev/riva/riva_hw.c was manually tweaked to avoid
pathological white-space.
No outstanding warnings were found building allmodconfig with GCC 9.3.0
for x86_64, i386, arm64, arm, powerpc, powerpc64le, s390x, mips, sparc64,
alpha, and m68k.
[1] https://lore.kernel.org/lkml/20200603174714.192027-1-glider@google.com/
[2] https://lore.kernel.org/lkml/CA+55aFw+Vbj0i=1TGqCR5vQkCzWJ0QxK6CernOU6eedsudAixw@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CA+55aFwgbgqhbp1fkxvRKEpzyR5J8n1vKT1VZdz9knmPuXhOeg@mail.gmail.com/
[4] https://lore.kernel.org/lkml/CA+55aFz2500WfbKXAx8s67wrm9=yVJu65TpLgN_ybYNv0VEOKA@mail.gmail.com/
Reviewed-by: Leon Romanovsky <leonro@mellanox.com> # drivers/infiniband and mlx4/mlx5
Acked-by: Jason Gunthorpe <jgg@mellanox.com> # IB
Acked-by: Kalle Valo <kvalo@codeaurora.org> # wireless drivers
Reviewed-by: Chao Yu <yuchao0@huawei.com> # erofs
Signed-off-by: Kees Cook <keescook@chromium.org>
2020-06-04 04:09:38 +08:00
|
|
|
pt_element_t __user *ptep_user;
|
[PATCH] KVM: MMU: Shadow page table caching
Define a hashtable for caching shadow page tables. Look up the cache on
context switch (cr3 change) or during page faults.
The key to the cache is a combination of
- the guest page table frame number
- the number of paging levels in the guest
* we can cache real mode, 32-bit mode, pae, and long mode page
tables simultaneously. this is useful for smp bootup.
- the guest page table level
* some kernels use a page as both a page table and a page directory. this
allows multiple shadow pages to exist for that page, one per level
- the "quadrant"
* 32-bit mode page tables span 4MB, whereas a shadow page table spans
2MB. similarly, a 32-bit page directory spans 4GB, while a shadow
page directory spans 1GB. the quadrant allows caching up to 4 shadow page
tables for one guest page in one level.
- a "metaphysical" bit
* for real mode, and for pse pages, there is no guest page table, so set
the bit to avoid write protecting the page.
Signed-off-by: Avi Kivity <avi@qumranet.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2007-01-06 08:36:43 +08:00
|
|
|
gfn_t table_gfn;
|
2017-05-11 19:23:29 +08:00
|
|
|
u64 pt_access, pte_access;
|
|
|
|
unsigned index, accessed_dirty, pte_pkey;
|
2017-03-30 17:55:30 +08:00
|
|
|
unsigned nested_access;
|
2007-10-17 18:18:47 +08:00
|
|
|
gpa_t pte_gpa;
|
2017-03-30 17:55:29 +08:00
|
|
|
bool have_ad;
|
2011-07-01 00:34:56 +08:00
|
|
|
int offset;
|
2017-05-11 19:23:29 +08:00
|
|
|
u64 walk_nx_mask = 0;
|
2011-07-01 00:34:56 +08:00
|
|
|
const int write_fault = access & PFERR_WRITE_MASK;
|
|
|
|
const int user_fault = access & PFERR_USER_MASK;
|
|
|
|
const int fetch_fault = access & PFERR_FETCH_MASK;
|
|
|
|
u16 errcode = 0;
|
2012-09-12 20:12:09 +08:00
|
|
|
gpa_t real_gpa;
|
|
|
|
gfn_t gfn;
|
2006-12-10 18:21:36 +08:00
|
|
|
|
2012-06-20 16:00:00 +08:00
|
|
|
trace_kvm_mmu_pagetable_walk(addr, access);
|
2011-07-01 00:36:07 +08:00
|
|
|
retry_walk:
|
2010-09-10 23:30:47 +08:00
|
|
|
walker->level = mmu->root_level;
|
2020-03-03 10:02:39 +08:00
|
|
|
pte = mmu->get_guest_pgd(vcpu);
|
2017-03-30 17:55:29 +08:00
|
|
|
have_ad = PT_HAVE_ACCESSED_DIRTY(mmu);
|
2010-09-10 23:30:47 +08:00
|
|
|
|
2007-01-06 08:36:41 +08:00
|
|
|
#if PTTYPE == 64
|
2017-05-11 19:23:29 +08:00
|
|
|
walk_nx_mask = 1ULL << PT64_NX_SHIFT;
|
2010-09-10 23:30:47 +08:00
|
|
|
if (walker->level == PT32E_ROOT_LEVEL) {
|
2011-07-28 16:36:17 +08:00
|
|
|
pte = mmu->get_pdptr(vcpu, (addr >> 30) & 3);
|
2009-07-06 17:21:32 +08:00
|
|
|
trace_kvm_mmu_paging_element(pte, walker->level);
|
2013-08-05 16:07:09 +08:00
|
|
|
if (!FNAME(is_present_gpte)(pte))
|
2010-07-06 21:20:43 +08:00
|
|
|
goto error;
|
2007-01-06 08:36:41 +08:00
|
|
|
--walker->level;
|
|
|
|
}
|
|
|
|
#endif
|
2012-09-16 19:18:51 +08:00
|
|
|
walker->max_level = walker->level;
|
2014-10-01 01:49:18 +08:00
|
|
|
ASSERT(!(is_long_mode(vcpu) && !is_pae(vcpu)));
|
2006-12-10 18:21:36 +08:00
|
|
|
|
2017-03-30 17:55:30 +08:00
|
|
|
/*
|
|
|
|
* FIXME: on Intel processors, loads of the PDPTE registers for PAE paging
|
|
|
|
* by the MOV to CR instruction are treated as reads and do not cause the
|
|
|
|
* processor to set the dirty flag in any EPT paging-structure entry.
|
|
|
|
*/
|
|
|
|
nested_access = (have_ad ? PFERR_WRITE_MASK : 0) | PFERR_USER_MASK;
|
|
|
|
|
2017-05-11 19:23:29 +08:00
|
|
|
pte_access = ~0;
|
2012-09-12 20:12:09 +08:00
|
|
|
++walker->level;
|
2007-01-06 08:36:40 +08:00
|
|
|
|
2012-09-12 20:12:09 +08:00
|
|
|
do {
|
2011-04-21 23:34:44 +08:00
|
|
|
unsigned long host_addr;
|
|
|
|
|
2017-05-11 19:23:29 +08:00
|
|
|
pt_access = pte_access;
|
2012-09-12 20:12:09 +08:00
|
|
|
--walker->level;
|
|
|
|
|
2007-10-17 18:18:47 +08:00
|
|
|
index = PT_INDEX(addr, walker->level);
|
2007-11-21 18:35:07 +08:00
|
|
|
table_gfn = gpte_to_gfn(pte);
|
2010-09-10 23:30:52 +08:00
|
|
|
offset = index * sizeof(pt_element_t);
|
|
|
|
pte_gpa = gfn_to_gpa(table_gfn) + offset;
|
2017-10-05 17:10:23 +08:00
|
|
|
|
|
|
|
BUG_ON(walker->level < 1);
|
2007-10-17 18:18:47 +08:00
|
|
|
walker->table_gfn[walker->level - 1] = table_gfn;
|
2007-12-12 08:12:27 +08:00
|
|
|
walker->pte_gpa[walker->level - 1] = pte_gpa;
|
2007-10-17 18:18:47 +08:00
|
|
|
|
2021-11-24 20:20:45 +08:00
|
|
|
real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(table_gfn),
|
|
|
|
nested_access, &walker->fault);
|
2014-09-02 19:18:37 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* FIXME: This can happen if emulation (for an INS/OUTS
|
|
|
|
* instruction) triggers a nested page fault. The exit
|
|
|
|
* qualification / exit info field will incorrectly have
|
|
|
|
* "guest page access" as the nested page fault's cause,
|
|
|
|
* instead of "guest page structure access". To fix this,
|
|
|
|
* the x86_exception struct should be augmented with enough
|
|
|
|
* information to fix the exit_qualification or exit_info_1
|
|
|
|
* fields.
|
|
|
|
*/
|
2020-06-22 23:14:35 +08:00
|
|
|
if (unlikely(real_gpa == UNMAPPED_GVA))
|
2014-09-02 19:23:06 +08:00
|
|
|
return 0;
|
2014-09-02 19:18:37 +08:00
|
|
|
|
2020-06-22 23:14:35 +08:00
|
|
|
host_addr = kvm_vcpu_gfn_to_hva_prot(vcpu, gpa_to_gfn(real_gpa),
|
2013-09-09 19:52:33 +08:00
|
|
|
&walker->pte_writable[walker->level - 1]);
|
2011-07-01 00:34:56 +08:00
|
|
|
if (unlikely(kvm_is_error_hva(host_addr)))
|
|
|
|
goto error;
|
2011-04-21 23:34:44 +08:00
|
|
|
|
|
|
|
ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
|
2020-02-16 00:29:04 +08:00
|
|
|
if (unlikely(__get_user(pte, ptep_user)))
|
2011-07-01 00:34:56 +08:00
|
|
|
goto error;
|
2012-09-16 19:18:51 +08:00
|
|
|
walker->ptep_user[walker->level - 1] = ptep_user;
|
2010-01-15 03:41:27 +08:00
|
|
|
|
2009-07-06 17:21:32 +08:00
|
|
|
trace_kvm_mmu_paging_element(pte, walker->level);
|
2007-10-17 18:18:47 +08:00
|
|
|
|
2017-05-11 19:23:29 +08:00
|
|
|
/*
|
|
|
|
* Inverting the NX bit lets us AND it like the other
|
|
|
|
* permission bits.
|
|
|
|
*/
|
|
|
|
pte_access = pt_access & (pte ^ walk_nx_mask);
|
|
|
|
|
2013-08-05 16:07:09 +08:00
|
|
|
if (unlikely(!FNAME(is_present_gpte)(pte)))
|
2011-07-01 00:34:56 +08:00
|
|
|
goto error;
|
2007-01-26 16:56:41 +08:00
|
|
|
|
KVM: x86/mmu: Micro-optimize nEPT's bad memtype/XWR checks
Rework the handling of nEPT's bad memtype/XWR checks to micro-optimize
the checks as much as possible. Move the check to a separate helper,
__is_bad_mt_xwr(), which allows the guest_rsvd_check usage in
paging_tmpl.h to omit the check entirely for paging32/64 (bad_mt_xwr is
always zero for non-nEPT) while retaining the bitwise-OR of the current
code for the shadow_zero_check in walk_shadow_page_get_mmio_spte().
Add a comment for the bitwise-OR usage in the mmio spte walk to avoid
future attempts to "fix" the code, which is what prompted this
optimization in the first place[*].
Opportunistically remove the superfluous '!= 0' and parentheses, and
use BIT_ULL() instead of open coding its equivalent.
The net effect is that code generation is largely unchanged for
walk_shadow_page_get_mmio_spte(), marginally better for
ept_prefetch_invalid_gpte(), and significantly improved for
paging32/64_prefetch_invalid_gpte().
Note, walk_shadow_page_get_mmio_spte() can't use a templated version of
the memtype/XWR check as it works on the host's shadow PTEs, e.g. it checks that
KVM hasn't borked its EPT tables. Even if it could be templated, the
benefits of having a single implementation far outweigh the few uops
that would be saved for NPT or non-TDP paging, e.g. most compilers
inline it all the way up to kvm_mmu_page_fault().
[*] https://lkml.kernel.org/r/20200108001859.25254-1-sean.j.christopherson@intel.com
Cc: Jim Mattson <jmattson@google.com>
Cc: David Laight <David.Laight@ACULAB.COM>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-01-10 07:06:40 +08:00
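A standalone sketch of the split the commit message describes (hypothetical field and helper names, not the kernel's rsvd_bits_validate structures): a plain reserved-bits test that every PTTYPE performs, plus a separate bad memtype/XWR bitmap test that only the EPT flavour needs to call:

#include <stdbool.h>
#include <stdint.h>

struct sk_rsvd_check {
        uint64_t rsvd_bits_mask;        /* reserved bits at this level */
        uint64_t bad_mt_xwr;            /* bitmap indexed by pte bits 5:0, EPT only */
};

static bool sk_is_rsvd_bits_set(const struct sk_rsvd_check *rc, uint64_t pte)
{
        return pte & rc->rsvd_bits_mask;
}

static bool sk_is_bad_mt_xwr(const struct sk_rsvd_check *rc, uint64_t pte)
{
        /* Index the bitmap with the pte's memtype/XWR bits. */
        return rc->bad_mt_xwr & (1ULL << (pte & 0x3f));
}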
|
|
|
if (unlikely(FNAME(is_rsvd_bits_set)(mmu, pte, walker->level))) {
|
2016-03-25 21:19:35 +08:00
|
|
|
errcode = PFERR_RSVD_MASK | PFERR_PRESENT_MASK;
|
2011-07-01 00:34:56 +08:00
|
|
|
goto error;
|
2010-07-06 21:20:43 +08:00
|
|
|
}
|
2009-03-30 16:21:08 +08:00
|
|
|
|
2007-12-12 08:12:27 +08:00
|
|
|
walker->ptes[walker->level - 1] = pte;
|
KVM: X86: MMU: Use the correct inherited permissions to get shadow page
When computing the access permissions of a shadow page, use the effective
permissions of the walk up to that point, i.e. the logical AND of its parents'
permissions. Two guest PxE entries that point at the same table gfn need to
be shadowed with different shadow pages if their parents' permissions are
different. KVM currently uses the effective permissions of the last
non-leaf entry for all non-leaf entries. Because all non-leaf SPTEs have
full ("uwx") permissions, and the effective permissions are recorded only
in role.access and merged into the leaves, this can lead to incorrect
reuse of a shadow page and eventually to a missing guest protection page
fault.
For example, here is a shared pagetable:
  pgd[]              pud[]              pmd[]             virtual address pointers
                    /->pmd1(u--)->pte1(uw-)->page1 <- ptr1 (u--)
   /->pud1(uw-)--->pmd2(uw-)->pte2(uw-)->page2 <- ptr2 (uw-)
  pgd-|            (shared pmd[] as above)
   \->pud2(u--)--->pmd1(u--)->pte1(uw-)->page1 <- ptr3 (u--)
                    \->pmd2(uw-)->pte2(uw-)->page2 <- ptr4 (u--)
pud1 and pud2 point to the same pmd table, so:
- ptr1 and ptr3 point to the same page.
- ptr2 and ptr4 point to the same page.
(pud1 and pud2 here are pud entries, while pmd1 and pmd2 here are pmd entries)
- First, the guest reads from ptr1 first and KVM prepares a shadow
page table with role.access=u--, from ptr1's pud1 and ptr1's pmd1.
"u--" comes from the effective permissions of pgd, pud1 and
pmd1, which are stored in pt->access. "u--" is used also to get
the pagetable for pud1, instead of "uw-".
- Then the guest writes to ptr2 and KVM reuses pud1, which is present.
The hypervisor sets up a shadow page for ptr2 with pt->access = "uw-",
even though the pud1 pmd (because of the incorrect argument to
kvm_mmu_get_page in the previous step) has role.access="u--".
- Then the guest reads from ptr3. The hypervisor reuses pud1's
shadow pmd for pud2, because both use "u--" for their permissions.
Thus, the shadow pmd already includes entries for both pmd1 and pmd2.
- Finally, the guest writes to ptr4. This causes no vmexit or page fault,
because pud1's shadow page structures included an "uw-" page even though
its role.access was "u--".
Any kind of shared page table can hit a similar problem in a virtual
machine without TDP enabled, whenever the permissions inherited from
different ancestors differ.
In order to fix the problem, we change pt->access to be an array, and
any access in it will not include permissions ANDed from child ptes.
The test code is: https://lore.kernel.org/kvm/20210603050537.19605-1-jiangshanlai@gmail.com/
Remember to test it with TDP disabled.
The problem had existed long before commit 41074d07c78b ("KVM: MMU:
Fix inherited permissions for emulated guest pte updates"), and it
is hard to identify the culprit commit, so there is no Fixes tag here.
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Message-Id: <20210603052455.21023-1-jiangshanlai@gmail.com>
Cc: stable@vger.kernel.org
Fixes: cea0f0e7ea54 ("[PATCH] KVM: MMU: Shadow page table caching")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-03 13:24:55 +08:00
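A toy model of the fix (hypothetical sk_* names, not the kernel structures): record the permissions inherited from the ancestors per level, so two walks that share a lower-level table but arrive with different ancestor permissions stay distinct:

#include <assert.h>

#define SK_ACC_USER  0x1u
#define SK_ACC_WRITE 0x2u

struct sk_walker {
        unsigned pt_access[4];  /* ANDed ancestor permissions, indexed by level - 1 */
};

static void sk_walk(struct sk_walker *w, const unsigned *gpte_access, int levels)
{
        unsigned acc = SK_ACC_USER | SK_ACC_WRITE;

        for (int level = levels; level >= 1; level--) {
                w->pt_access[level - 1] = acc;  /* permissions inherited so far */
                acc &= gpte_access[level - 1];  /* fold in this level's gpte */
        }
}

int main(void)
{
        /* pud grants uw-, so the shared pmd is reached with "uw-". */
        unsigned via_pud1[2] = { [1] = SK_ACC_USER | SK_ACC_WRITE, [0] = SK_ACC_USER };
        /* pud grants u--, so the same pmd is reached with "u--". */
        unsigned via_pud2[2] = { [1] = SK_ACC_USER, [0] = SK_ACC_USER };
        struct sk_walker w1, w2;

        sk_walk(&w1, via_pud1, 2);
        sk_walk(&w2, via_pud2, 2);
        /* Different inherited permissions: the shadow pages must not be shared. */
        assert(w1.pt_access[0] != w2.pt_access[0]);
        return 0;
}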
|
|
|
|
|
|
|
/* Convert to ACC_*_MASK flags for struct guest_walker. */
|
|
|
|
walker->pt_access[walker->level - 1] = FNAME(gpte_access)(pt_access ^ walk_nx_mask);
|
2021-06-23 01:57:35 +08:00
|
|
|
} while (!FNAME(is_last_gpte)(mmu, walker->level, pte));
|
2007-10-17 18:18:47 +08:00
|
|
|
|
2016-03-22 16:51:20 +08:00
|
|
|
pte_pkey = FNAME(gpte_pkeys)(vcpu, pte);
|
2017-05-11 19:23:29 +08:00
|
|
|
accessed_dirty = have_ad ? pte_access & PT_GUEST_ACCESSED_MASK : 0;
|
|
|
|
|
|
|
|
/* Convert to ACC_*_MASK flags for struct guest_walker. */
|
2018-07-18 15:57:50 +08:00
|
|
|
walker->pte_access = FNAME(gpte_access)(pte_access ^ walk_nx_mask);
|
2017-05-11 19:23:29 +08:00
|
|
|
errcode = permission_fault(vcpu, mmu, walker->pte_access, pte_pkey, access);
|
2016-03-08 17:08:16 +08:00
|
|
|
if (unlikely(errcode))
|
2010-07-06 21:20:43 +08:00
|
|
|
goto error;
|
|
|
|
|
2012-09-12 20:12:09 +08:00
|
|
|
gfn = gpte_to_gfn_lvl(pte, walker->level);
|
|
|
|
gfn += (addr & PT_LVL_OFFSET_MASK(walker->level)) >> PAGE_SHIFT;
|
|
|
|
|
2020-04-28 08:54:22 +08:00
|
|
|
if (PTTYPE == 32 && walker->level > PG_LEVEL_4K && is_cpuid_PSE36())
|
2012-09-12 20:12:09 +08:00
|
|
|
gfn += pse36_gfn_delta(pte);
|
|
|
|
|
2021-11-24 20:20:45 +08:00
|
|
|
real_gpa = kvm_translate_gpa(vcpu, mmu, gfn_to_gpa(gfn), access, &walker->fault);
|
2012-09-12 20:12:09 +08:00
|
|
|
if (real_gpa == UNMAPPED_GVA)
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
walker->gfn = real_gpa >> PAGE_SHIFT;
|
|
|
|
|
2012-09-12 18:44:53 +08:00
|
|
|
if (!write_fault)
|
2017-05-11 19:23:29 +08:00
|
|
|
FNAME(protect_clean_gpte)(mmu, &walker->pte_access, pte);
|
2012-12-27 20:44:58 +08:00
|
|
|
else
|
|
|
|
/*
|
2013-08-05 16:07:11 +08:00
|
|
|
* On a write fault, fold the dirty bit into accessed_dirty.
|
|
|
|
* For modes without A/D bit support, accessed_dirty is
|
|
|
|
* always clear.
|
2012-12-27 20:44:58 +08:00
|
|
|
*/
|
2013-08-05 16:07:10 +08:00
|
|
|
accessed_dirty &= pte >>
|
|
|
|
(PT_GUEST_DIRTY_SHIFT - PT_GUEST_ACCESSED_SHIFT);
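A quick standalone check of the shift trick just above (hypothetical SK_* shifts): on a write fault the dirty bit is shifted down onto the accessed-bit position, so accessed_dirty stays nonzero only when both A and D are already set and no update (and hence no cmpxchg or re-walk) is needed:

#include <assert.h>
#include <stdint.h>

#define SK_A_SHIFT 5
#define SK_D_SHIFT 6
#define SK_A_MASK (1ULL << SK_A_SHIFT)
#define SK_D_MASK (1ULL << SK_D_SHIFT)

int main(void)
{
        uint64_t a_only = SK_A_MASK;
        uint64_t a_and_d = SK_A_MASK | SK_D_MASK;
        uint64_t ad;

        ad = a_only & SK_A_MASK;
        ad &= a_only >> (SK_D_SHIFT - SK_A_SHIFT);
        assert(ad == 0);                /* D clear: must go set it */

        ad = a_and_d & SK_A_MASK;
        ad &= a_and_d >> (SK_D_SHIFT - SK_A_SHIFT);
        assert(ad == SK_A_MASK);        /* A and D both set: fast path */
        return 0;
}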
|
2012-09-16 20:03:02 +08:00
|
|
|
|
|
|
|
if (unlikely(!accessed_dirty)) {
|
2020-06-23 05:58:29 +08:00
|
|
|
ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker,
|
|
|
|
addr, write_fault);
|
2012-09-16 20:03:02 +08:00
|
|
|
if (unlikely(ret < 0))
|
|
|
|
goto error;
|
|
|
|
else if (ret)
|
|
|
|
goto retry_walk;
|
|
|
|
}
|
2007-10-17 18:18:47 +08:00
|
|
|
|
2007-12-09 22:15:46 +08:00
|
|
|
pgprintk("%s: pte %llx pte_access %x pt_access %x\n",
|
2021-06-03 13:24:55 +08:00
|
|
|
__func__, (u64)pte, walker->pte_access,
|
|
|
|
walker->pt_access[walker->level - 1]);
|
2007-01-26 16:56:41 +08:00
|
|
|
return 1;
|
|
|
|
|
2010-07-06 21:20:43 +08:00
|
|
|
error:
|
2011-07-01 00:34:56 +08:00
|
|
|
errcode |= write_fault | user_fault;
|
2021-06-23 01:57:20 +08:00
|
|
|
if (fetch_fault && (is_efer_nx(mmu) || is_cr4_smep(mmu)))
|
2011-07-01 00:34:56 +08:00
|
|
|
errcode |= PFERR_FETCH_MASK;
|
2010-09-10 23:30:46 +08:00
|
|
|
|
2011-07-01 00:34:56 +08:00
|
|
|
walker->fault.vector = PF_VECTOR;
|
|
|
|
walker->fault.error_code_valid = true;
|
|
|
|
walker->fault.error_code = errcode;
|
2013-08-06 17:00:32 +08:00
|
|
|
|
|
|
|
#if PTTYPE == PTTYPE_EPT
|
|
|
|
/*
|
|
|
|
* Use PFERR_RSVD_MASK in error_code to tell if EPT
|
|
|
|
* misconfiguration needs to be injected. The detection is
|
|
|
|
* done by is_rsvd_bits_set() above.
|
|
|
|
*
|
|
|
|
* We set up the value of exit_qualification to inject:
|
2018-03-01 02:06:48 +08:00
|
|
|
* [2:0] - Derived from the access bits. The exit_qualification might be
|
|
|
|
* out of date if it is serving an EPT misconfiguration.
|
2013-08-06 17:00:32 +08:00
|
|
|
* [5:3] - Calculated by the page walk of the guest EPT page tables
|
|
|
|
* [7:8] - Derived from [7:8] of real exit_qualification
|
|
|
|
*
|
|
|
|
* The other bits are set to 0.
|
|
|
|
*/
|
|
|
|
if (!(errcode & PFERR_RSVD_MASK)) {
|
2018-03-01 02:06:48 +08:00
|
|
|
vcpu->arch.exit_qualification &= 0x180;
|
|
|
|
if (write_fault)
|
|
|
|
vcpu->arch.exit_qualification |= EPT_VIOLATION_ACC_WRITE;
|
|
|
|
if (user_fault)
|
|
|
|
vcpu->arch.exit_qualification |= EPT_VIOLATION_ACC_READ;
|
|
|
|
if (fetch_fault)
|
|
|
|
vcpu->arch.exit_qualification |= EPT_VIOLATION_ACC_INSTR;
|
2017-05-11 19:23:29 +08:00
|
|
|
vcpu->arch.exit_qualification |= (pte_access & 0x7) << 3;
|
2013-08-06 17:00:32 +08:00
|
|
|
}
|
|
|
|
#endif
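For illustration, a standalone sketch of how the synthesized exit qualification described above is assembled (SK_ACC_WRITE and the bit positions follow the layout in the comment and are otherwise hypothetical): bits [2:0] reflect the attempted access, bits [5:3] the permissions the walked guest EPT entries actually grant:

#include <assert.h>
#include <stdint.h>

#define SK_ACC_WRITE (1u << 1)  /* bit 1: the violation was a write */

int main(void)
{
        uint32_t exit_qual = 0;
        unsigned pte_access = 0x3;      /* guest EPT grants read+write, no exec */

        exit_qual |= SK_ACC_WRITE;              /* [2:0] from the access type */
        exit_qual |= (pte_access & 0x7) << 3;   /* [5:3] from the walked gptes */
        assert(exit_qual == 0x1a);
        return 0;
}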
|
2010-11-29 22:12:30 +08:00
|
|
|
walker->fault.address = addr;
|
|
|
|
walker->fault.nested_page_fault = mmu != vcpu->arch.walk_mmu;
|
2021-02-25 23:41:33 +08:00
|
|
|
walker->fault.async_page_fault = false;
|
2010-09-10 23:30:46 +08:00
|
|
|
|
2010-11-22 23:53:27 +08:00
|
|
|
trace_kvm_mmu_walker_error(walker->fault.error_code);
|
2007-07-23 14:51:39 +08:00
|
|
|
return 0;
|
2006-12-10 18:21:36 +08:00
|
|
|
}
|
|
|
|
|
2010-09-10 23:30:47 +08:00
|
|
|
static int FNAME(walk_addr)(struct guest_walker *walker,
|
2019-12-07 07:57:14 +08:00
|
|
|
struct kvm_vcpu *vcpu, gpa_t addr, u32 access)
|
2010-09-10 23:30:47 +08:00
|
|
|
{
|
2018-10-09 03:28:05 +08:00
|
|
|
return FNAME(walk_addr_generic)(walker, vcpu, vcpu->arch.mmu, addr,
|
2010-09-28 17:03:14 +08:00
|
|
|
access);
|
2010-09-10 23:30:47 +08:00
|
|
|
}
|
|
|
|
|
2012-10-16 20:10:12 +08:00
|
|
|
static bool
|
|
|
|
FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
|
|
|
|
u64 *spte, pt_element_t gpte, bool no_dirty_log)
|
2007-05-01 21:53:31 +08:00
|
|
|
{
|
2021-08-14 04:35:03 +08:00
|
|
|
struct kvm_memory_slot *slot;
|
2007-12-09 23:00:02 +08:00
|
|
|
unsigned pte_access;
|
2012-10-16 20:10:12 +08:00
|
|
|
gfn_t gfn;
|
kvm: rename pfn_t to kvm_pfn_t
To date, we have implemented two I/O usage models for persistent memory,
PMEM (a persistent "ram disk") and DAX (mmap persistent memory into
userspace). This series adds a third, DAX-GUP, that allows DAX mappings
to be the target of direct-i/o. It allows userspace to coordinate
DMA/RDMA from/to persistent memory.
The implementation leverages the ZONE_DEVICE mm-zone that went into
4.3-rc1 (also discussed at kernel summit) to flag pages that are owned
and dynamically mapped by a device driver. The pmem driver, after
mapping a persistent memory range into the system memmap via
devm_memremap_pages(), arranges for DAX to distinguish pfn-only versus
page-backed pmem-pfns via flags in the new pfn_t type.
The DAX code, upon seeing a PFN_DEV+PFN_MAP flagged pfn, flags the
resulting pte(s) inserted into the process page tables with a new
_PAGE_DEVMAP flag. Later, when get_user_pages() is walking ptes it keys
off _PAGE_DEVMAP to pin the device hosting the page range active.
Finally, get_page() and put_page() are modified to take references
against the device driver established page mapping.
Finally, this need for "struct page" for persistent memory requires
memory capacity to store the memmap array. Given the memmap array for a
large pool of persistent may exhaust available DRAM introduce a
mechanism to allocate the memmap from persistent memory. The new
"struct vmem_altmap *" parameter to devm_memremap_pages() enables
arch_add_memory() to use reserved pmem capacity rather than the page
allocator.
This patch (of 18):
The core has developed a need for a "pfn_t" type [1]. Move the existing
pfn_t in KVM to kvm_pfn_t [2].
[1]: https://lists.01.org/pipermail/linux-nvdimm/2015-September/002199.html
[2]: https://lists.01.org/pipermail/linux-nvdimm/2015-September/002218.html
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-16 08:56:11 +08:00
|
|
|
kvm_pfn_t pfn;
|
2007-05-01 21:53:31 +08:00
|
|
|
|
2013-08-05 16:07:09 +08:00
|
|
|
if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
|
2012-10-16 20:10:12 +08:00
|
|
|
return false;
|
2010-11-23 11:08:42 +08:00
|
|
|
|
2008-03-04 04:59:56 +08:00
|
|
|
pgprintk("%s: gpte %llx spte %p\n", __func__, (u64)gpte, spte);
|
2012-10-16 20:10:12 +08:00
|
|
|
|
|
|
|
gfn = gpte_to_gfn(gpte);
|
2018-07-18 15:57:50 +08:00
|
|
|
pte_access = sp->role.access & FNAME(gpte_access)(gpte);
|
2018-10-09 03:28:05 +08:00
|
|
|
FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
|
2021-08-14 04:35:03 +08:00
|
|
|
|
|
|
|
slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn,
|
2012-10-16 20:10:12 +08:00
|
|
|
no_dirty_log && (pte_access & ACC_WRITE_MASK));
|
2021-08-14 04:35:03 +08:00
|
|
|
if (!slot)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
pfn = gfn_to_pfn_memslot_atomic(slot, gfn);
|
2012-10-16 20:10:59 +08:00
|
|
|
if (is_error_pfn(pfn))
|
2012-10-16 20:10:12 +08:00
|
|
|
return false;
|
2011-03-09 15:43:51 +08:00
|
|
|
|
2021-08-14 04:35:03 +08:00
|
|
|
mmu_set_spte(vcpu, slot, spte, pte_access, gfn, pfn, NULL);
|
2019-01-04 08:22:21 +08:00
|
|
|
kvm_release_pfn_clean(pfn);
|
2012-10-16 20:10:12 +08:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2010-07-13 19:27:08 +08:00
|
|
|
static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
|
|
|
|
struct guest_walker *gw, int level)
|
|
|
|
{
|
|
|
|
pt_element_t curr_pte;
|
2010-08-22 19:13:33 +08:00
|
|
|
gpa_t base_gpa, pte_gpa = gw->pte_gpa[level - 1];
|
|
|
|
u64 mask;
|
|
|
|
int r, index;
|
|
|
|
|
2020-04-28 08:54:22 +08:00
|
|
|
if (level == PG_LEVEL_4K) {
|
2010-08-22 19:13:33 +08:00
|
|
|
mask = PTE_PREFETCH_NUM * sizeof(pt_element_t) - 1;
|
|
|
|
base_gpa = pte_gpa & ~mask;
|
|
|
|
index = (pte_gpa - base_gpa) / sizeof(pt_element_t);
|
|
|
|
|
2015-04-08 21:39:23 +08:00
|
|
|
r = kvm_vcpu_read_guest_atomic(vcpu, base_gpa,
|
2010-08-22 19:13:33 +08:00
|
|
|
gw->prefetch_ptes, sizeof(gw->prefetch_ptes));
|
|
|
|
curr_pte = gw->prefetch_ptes[index];
|
|
|
|
} else
|
2015-04-08 21:39:23 +08:00
|
|
|
r = kvm_vcpu_read_guest_atomic(vcpu, pte_gpa,
|
2010-07-13 19:27:08 +08:00
|
|
|
&curr_pte, sizeof(curr_pte));
|
2010-08-22 19:13:33 +08:00
|
|
|
|
2010-07-13 19:27:08 +08:00
|
|
|
return r || curr_pte != gw->ptes[level - 1];
|
|
|
|
}
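The check above is a classic validate-after-use pattern; a user-space sketch of the same idea (sk_gpte_changed() is hypothetical, with memcpy standing in for kvm_vcpu_read_guest_atomic()):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

static bool sk_gpte_changed(const void *guest_table, size_t offset,
                            uint64_t walked_pte)
{
        uint64_t curr_pte;

        /* Re-read the gpte that was used during the walk... */
        memcpy(&curr_pte, (const uint8_t *)guest_table + offset, sizeof(curr_pte));
        /* ...and report any mismatch so the caller can discard its result. */
        return curr_pte != walked_pte;
}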
|
|
|
|
|
2010-08-22 19:13:33 +08:00
|
|
|
static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
|
|
|
|
u64 *sptep)
|
2010-08-22 19:12:48 +08:00
|
|
|
{
|
|
|
|
struct kvm_mmu_page *sp;
|
2010-08-22 19:13:33 +08:00
|
|
|
pt_element_t *gptep = gw->prefetch_ptes;
|
2010-08-22 19:12:48 +08:00
|
|
|
u64 *spte;
|
2010-08-22 19:13:33 +08:00
|
|
|
int i;
|
2010-08-22 19:12:48 +08:00
|
|
|
|
2020-06-23 04:20:33 +08:00
|
|
|
sp = sptep_to_sp(sptep);
|
2010-08-22 19:12:48 +08:00
|
|
|
|
2020-04-28 08:54:22 +08:00
|
|
|
if (sp->role.level > PG_LEVEL_4K)
|
2010-08-22 19:12:48 +08:00
|
|
|
return;
|
|
|
|
|
2021-02-22 10:45:22 +08:00
|
|
|
/*
|
|
|
|
* If addresses are being invalidated, skip prefetching to avoid
|
|
|
|
* accidentally prefetching those addresses.
|
|
|
|
*/
|
|
|
|
if (unlikely(vcpu->kvm->mmu_notifier_count))
|
|
|
|
return;
|
|
|
|
|
2010-08-22 19:12:48 +08:00
|
|
|
if (sp->role.direct)
|
|
|
|
return __direct_pte_prefetch(vcpu, sp, sptep);
|
|
|
|
|
|
|
|
i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
|
|
|
|
spte = sp->spt + i;
|
|
|
|
|
|
|
|
for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
|
|
|
|
if (spte == sptep)
|
|
|
|
continue;
|
|
|
|
|
2011-07-12 03:28:04 +08:00
|
|
|
if (is_shadow_present_pte(*spte))
|
2010-08-22 19:12:48 +08:00
|
|
|
continue;
|
|
|
|
|
2012-10-16 20:10:12 +08:00
|
|
|
if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true))
|
2010-08-22 19:12:48 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
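The index arithmetic above rounds the faulting spte down to the start of its prefetch window; a tiny standalone check assuming a window (PTE_PREFETCH_NUM) of 8 entries:

#include <assert.h>

#define SK_PREFETCH_NUM 8

int main(void)
{
        assert((13 & ~(SK_PREFETCH_NUM - 1)) == 8);
        assert((8  & ~(SK_PREFETCH_NUM - 1)) == 8);
        assert((7  & ~(SK_PREFETCH_NUM - 1)) == 0);
        return 0;
}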
|
|
|
|
|
2006-12-10 18:21:36 +08:00
|
|
|
/*
|
|
|
|
* Fetch a shadow pte for a specific level in the paging hierarchy.
|
2012-10-16 20:08:43 +08:00
|
|
|
* If the guest tries to write a write-protected page, we need to
|
|
|
|
* emulate this operation; return 1 to indicate this case.
|
2006-12-10 18:21:36 +08:00
|
|
|
*/
static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
                        struct guest_walker *gw)
{
        struct kvm_mmu_page *sp = NULL;
        struct kvm_shadow_walk_iterator it;
unsigned int direct_access, access;
        int top_level, ret;
        gfn_t base_gfn = fault->gfn;

        WARN_ON_ONCE(gw->gfn != base_gfn);
        direct_access = gw->pte_access;

        top_level = vcpu->arch.mmu->root_level;
        if (top_level == PT32E_ROOT_LEVEL)
                top_level = PT32_ROOT_LEVEL;
        /*
         * Verify that the top-level gpte is still there.  Since the page
         * is a root page, it is either write protected (and cannot be
         * changed from now on) or it is invalid (in which case, we don't
         * really care if it changes underneath us after this point).
         */
        if (FNAME(gpte_changed)(vcpu, gw, top_level))
                goto out_gpte_changed;

        if (WARN_ON(!VALID_PAGE(vcpu->arch.mmu->root.hpa)))
                goto out_gpte_changed;
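
        /*
         * First phase: walk the shadow page tables alongside the guest walk
         * recorded in @gw, allocating and linking an indirect shadow page
         * for every guest table level that is not yet shadowed.
         */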
        for (shadow_walk_init(&it, vcpu, fault->addr);
             shadow_walk_okay(&it) && it.level > gw->level;
             shadow_walk_next(&it)) {
                gfn_t table_gfn;

clear_sp_write_flooding_count(it.sptep);
                drop_large_spte(vcpu, it.sptep);

                sp = NULL;
                if (!is_shadow_present_pte(*it.sptep)) {
                        table_gfn = gw->table_gfn[it.level - 2];
access = gw->pt_access[it.level - 2];
                        sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
                                              it.level-1, false, access);

                        /*
                         * We must synchronize the pagetable before linking it
                         * because the guest doesn't need to flush the TLB when
                         * the gpte is changed from non-present to present.
                         * Otherwise, the guest may use the wrong mapping.
                         *
                         * For PG_LEVEL_4K, kvm_mmu_get_page() has already
                         * synchronized it transiently via kvm_sync_page().
                         *
                         * For a higher-level page table, we synchronize it via
                         * the slower mmu_sync_children().  If it needs to
                         * break, some progress has been made; return
                         * RET_PF_RETRY and retry on the next #PF.
                         * KVM_REQ_MMU_SYNC is not necessary but it
                         * expedites the process.
                         */
                        if (sp->unsync_children &&
                            mmu_sync_children(vcpu, sp, false))
                                return RET_PF_RETRY;
                }

                /*
                 * Verify that the gpte in the page we've just write
                 * protected is still there.
                 */
                if (FNAME(gpte_changed)(vcpu, gw, it.level - 1))
                        goto out_gpte_changed;

                if (sp)
                        link_shadow_page(vcpu, it.sptep, sp);
        }
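
        /*
         * Second phase: continue the walk below the guest's own mapping
         * level with direct shadow pages, so that a large guest mapping
         * can be shadowed with a smaller host page size when necessary.
         */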
        kvm_mmu_hugepage_adjust(vcpu, fault);

        trace_kvm_mmu_spte_requested(fault);

        for (; shadow_walk_okay(&it); shadow_walk_next(&it)) {
clear_sp_write_flooding_count(it.sptep);

                /*
                 * We cannot overwrite existing page tables with an NX
                 * large page, as the leaf could be executable.
                 */
                if (fault->nx_huge_page_workaround_enabled)
                        disallowed_hugepage_adjust(fault, *it.sptep, it.level);

                base_gfn = fault->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
                if (it.level == fault->goal_level)
                        break;
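
                /*
                 * Not yet at the target level: drop any existing link whose
                 * access bits do not match the required direct access, then
                 * allocate and link a direct shadow page for the next level
                 * if one is not already present.
                 */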
                validate_direct_spte(vcpu, it.sptep, direct_access);

                drop_large_spte(vcpu, it.sptep);

                if (!is_shadow_present_pte(*it.sptep)) {
                        sp = kvm_mmu_get_page(vcpu, base_gfn, fault->addr,
                                              it.level - 1, true, direct_access);
                        link_shadow_page(vcpu, it.sptep, sp);
                        if (fault->huge_page_disallowed &&
                            fault->req_level >= it.level)
                                account_huge_nx_page(vcpu->kvm, sp);
                }
        }
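
        /*
         * The walk above must have stopped exactly at the level chosen by
         * kvm_mmu_hugepage_adjust(); install the final leaf SPTE there and
         * let mmu_set_spte() report whether the fault was fixed, spurious,
         * or needs emulation.
         */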
        if (WARN_ON_ONCE(it.level != fault->goal_level))
                return -EFAULT;

        ret = mmu_set_spte(vcpu, fault->slot, it.sptep, gw->pte_access,
                           base_gfn, fault->pfn, fault);
        if (ret == RET_PF_SPURIOUS)
                return ret;

        FNAME(pte_prefetch)(vcpu, gw, it.sptep);
        ++vcpu->stat.pf_fixed;
        return ret;

out_gpte_changed:
        return RET_PF_RETRY;
}

/*
 * Check whether the mapped gfn can write its own page table within the
 * current mapping.
 *
 * This is a helper for FNAME(page_fault).  When the guest uses a large page
 * to map a writable gfn that is itself in use as a guest page table, KVM
 * must map it with a small page size: the shadow page created for that page
 * table would otherwise stop KVM from using the large page size anyway.
 * Doing this early avoids unnecessary #PFs and emulation.
 *
 * @write_fault_to_shadow_pgtable will return true if the faulting gfn is
 * currently used as a guest page table.
 *
 * Note: the PDPT page table is not checked for PAE-32 bit guests.  That is
 * fine because the PDPT is always shadowed, which means a large page can
 * never be used to map the gfn holding the PDPT anyway.
 */
static bool
FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
                              struct guest_walker *walker, bool user_fault,
                              bool *write_fault_to_shadow_pgtable)
{
        int level;
        gfn_t mask = ~(KVM_PAGES_PER_HPAGE(walker->level) - 1);
        bool self_changed = false;

        if (!(walker->pte_access & ACC_WRITE_MASK ||
              (!is_cr0_wp(vcpu->arch.mmu) && !user_fault)))
                return false;
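
        /*
         * For each guest table used in the walk, XOR-ing its gfn with the
         * faulting gfn yields zero in the bits covered by @mask iff both
         * fall inside the same large-page-sized region, i.e. a huge mapping
         * of the fault would also map the page table itself.  A result of
         * exactly zero means the fault targets the page table page itself.
         */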
        for (level = walker->level; level <= walker->max_level; level++) {
                gfn_t gfn = walker->gfn ^ walker->table_gfn[level - 1];

                self_changed |= !(gfn & mask);
                *write_fault_to_shadow_pgtable |= !gfn;
        }

        return self_changed;
}

/*
 * Page fault handler.  There are several causes for a page fault:
 * - there is no shadow pte for the guest pte
 * - write access through a shadow pte marked read only so that we can set
 *   the dirty bit
 * - write access to a shadow pte marked read only so we can update the page
 *   dirty bitmap, when userspace requests it
 * - mmio access; in this case we will never install a present shadow pte
 * - normal guest page fault due to the guest pte marked not present, not
 *   writable, or not executable
 *
 * Returns: 1 if we need to emulate the instruction, 0 otherwise, or
 *          a negative value on error.
*/
static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
        struct guest_walker walker;
        int r;
        unsigned long mmu_seq;
        bool is_self_change_mapping;

pgprintk("%s: addr %lx err %x\n", __func__, fault->addr, fault->error_code);
        WARN_ON_ONCE(fault->is_tdp);

        /*
         * Look up the guest pte for the faulting address.
         * If PFEC.RSVD is set, this is a shadow page fault.
         * The bit needs to be cleared before walking guest page tables.
         */
        r = FNAME(walk_addr)(&walker, vcpu, fault->addr,
                             fault->error_code & ~PFERR_RSVD_MASK);

/*
         * The page is not mapped by the guest.  Let the guest handle it.
         */
        if (!r) {
                pgprintk("%s: guest page fault\n", __func__);
                if (!fault->prefetch)
                        kvm_inject_emulated_page_fault(vcpu, &walker.fault);

return RET_PF_RETRY;
}

        fault->gfn = walker.gfn;
        fault->slot = kvm_vcpu_gfn_to_memslot(vcpu, fault->gfn);

        if (page_fault_handle_page_track(vcpu, fault)) {
                shadow_page_table_clear_flood(vcpu, fault->addr);
                return RET_PF_EMULATE;
        }

        r = mmu_topup_memory_caches(vcpu, true);
        if (r)
                return r;
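
        /*
         * If the faulting gfn may be mapping one of the guest's own page
         * tables, force a 4K mapping now; see FNAME(is_self_change_mapping)
         * for why mapping it with a large page would be counterproductive.
         */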
        vcpu->arch.write_fault_to_shadow_pgtable = false;

        is_self_change_mapping = FNAME(is_self_change_mapping)(vcpu,
              &walker, fault->user, &vcpu->arch.write_fault_to_shadow_pgtable);

        if (is_self_change_mapping)
                fault->max_level = PG_LEVEL_4K;
        else
                fault->max_level = walker.level;
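
        /*
         * Snapshot the mmu_notifier sequence count before looking up the
         * pfn; it is re-checked under mmu_lock (is_page_fault_stale) so
         * that a concurrent invalidation forces the fault to be retried.
         */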
        mmu_seq = vcpu->kvm->mmu_notifier_seq;
        smp_rmb();

        if (kvm_faultin_pfn(vcpu, fault, &r))
                return r;

        if (handle_abnormal_pfn(vcpu, fault, walker.pte_access, &r))
                return r;

        /*
         * Do not change pte_access if the pfn is a mmio page, otherwise
         * we will cache the incorrect access into mmio spte.
         */
        if (fault->write && !(walker.pte_access & ACC_WRITE_MASK) &&
            !is_cr0_wp(vcpu->arch.mmu) && !fault->user && fault->slot) {
                walker.pte_access |= ACC_WRITE_MASK;
                walker.pte_access &= ~ACC_USER_MASK;

                /*
                 * If we converted a user page to a kernel page so that the
                 * kernel can write to it when cr0.wp=0, then we should
                 * prevent the kernel from executing it if SMEP is enabled.
                 */
                if (is_cr4_smep(vcpu->arch.mmu))
                        walker.pte_access &= ~ACC_EXEC_MASK;
        }

        r = RET_PF_RETRY;
        write_lock(&vcpu->kvm->mmu_lock);

if (is_page_fault_stale(vcpu, fault, mmu_seq))
                goto out_unlock;

        r = make_mmu_pages_available(vcpu);
        if (r)
                goto out_unlock;
        r = FNAME(fetch)(vcpu, fault, &walker);

out_unlock:
        write_unlock(&vcpu->kvm->mmu_lock);
        kvm_release_pfn_clean(fault->pfn);
        return r;
}

static gpa_t FNAME(get_level1_sp_gpa)(struct kvm_mmu_page *sp)
{
        int offset = 0;
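
        /*
         * With 32-bit guest PTEs a 4K guest page table holds 1024 4-byte
         * entries but a shadow page holds only 512 8-byte SPTEs, so each
         * guest table is shadowed by two pages and role.quadrant selects
         * which half this shadow page covers.
         */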
        WARN_ON(sp->role.level != PG_LEVEL_4K);

        if (PTTYPE == 32)
                offset = sp->role.quadrant << PT64_LEVEL_BITS;

        return gfn_to_gpa(sp->gfn) + offset * sizeof(pt_element_t);
}

static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
{
        struct kvm_shadow_walk_iterator iterator;
        struct kvm_mmu_page *sp;
        u64 old_spte;
        int level;
        u64 *sptep;

        vcpu_clear_mmio_info(vcpu, gva);

        /*
         * No need to check the return value here; rmap_can_add() lets us
         * skip the pte prefetch later if the caches could not be topped up.
         */
        mmu_topup_memory_caches(vcpu, true);

        if (!VALID_PAGE(root_hpa)) {
                WARN_ON(1);
                return;
        }

        write_lock(&vcpu->kvm->mmu_lock);
        for_each_shadow_entry_using_root(vcpu, root_hpa, gva, iterator) {
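                /*
                 * Walk from the root towards the leaf SPTE for @gva.  Only
                 * the last-level entry of an unsync page needs work: zap it
                 * and, if possible, refill it from the current guest pte.
                 */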
                level = iterator.level;
                sptep = iterator.sptep;

                sp = sptep_to_sp(sptep);
                old_spte = *sptep;
                if (is_last_spte(old_spte, level)) {
                        pt_element_t gpte;
                        gpa_t pte_gpa;

                        if (!sp->unsync)
                                break;

                        pte_gpa = FNAME(get_level1_sp_gpa)(sp);
                        pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);

                        mmu_page_zap_pte(vcpu->kvm, sp, sptep, NULL);
                        if (is_shadow_present_pte(old_spte))
                                kvm_flush_remote_tlbs_with_address(vcpu->kvm,
                                        sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));

                        if (!rmap_can_add(vcpu))
                                break;
|
|
|
|
|
2015-04-08 21:39:23 +08:00
|
|
|
if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
|
|
|
|
sizeof(pt_element_t)))
|
2011-09-22 16:56:39 +08:00
|
|
|
break;
|
|
|
|
|
2021-09-18 08:56:34 +08:00
|
|
|
FNAME(prefetch_gpte)(vcpu, sp, sptep, gpte, false);
|
2008-12-23 04:49:30 +08:00
|
|
|
}
|
2008-09-24 00:18:35 +08:00
|
|
|
|
2021-09-06 20:25:47 +08:00
|
|
|
if (!sp->unsync_children)
|
2008-12-25 21:19:00 +08:00
|
|
|
break;
|
|
|
|
}
|
2021-02-03 02:57:24 +08:00
|
|
|
write_unlock(&vcpu->kvm->mmu_lock);
|
2008-09-24 00:18:35 +08:00
|
|
|
}
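
/*
 * Translate @addr through the guest page tables tracked by @mmu.  On a
 * failed walk, UNMAPPED_GVA is returned and, if @exception is non-NULL,
 * the fault information is handed back to the caller.
 */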
/* Note, @addr is a GPA when gva_to_gpa() translates an L2 GPA to an L1 GPA. */
static gpa_t FNAME(gva_to_gpa)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
			       gpa_t addr, u32 access,
			       struct x86_exception *exception)
{
	struct guest_walker walker;
	gpa_t gpa = UNMAPPED_GVA;
	int r;

#ifndef CONFIG_X86_64
	/* A 64-bit GVA should be impossible on 32-bit KVM. */
	WARN_ON_ONCE((addr >> 32) && mmu == vcpu->arch.walk_mmu);
#endif

	r = FNAME(walk_addr_generic)(&walker, vcpu, mmu, addr, access);
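
	/*
	 * A successful walk yields the target gfn; splice the page offset
	 * back in.  On failure, hand the fault details back to the caller.
	 */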
	if (r) {
		gpa = gfn_to_gpa(walker.gfn);
		gpa |= addr & ~PAGE_MASK;
	} else if (exception)
		*exception = walker.fault;

	return gpa;
}

/*
 * Using the cached information from sp->gfns is safe because:
 * - The spte has a reference to the struct page, so the pfn for a given gfn
 *   can't change unless all sptes pointing to it are nuked first.
 *
 * Returns
 * < 0: the sp should be zapped
 *   0: the sp is synced and no tlb flushing is required
 * > 0: the sp is synced and tlb flushing is required
 */
static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
{
	union kvm_mmu_page_role mmu_role = vcpu->arch.mmu->mmu_role.base;
	int i;
	bool host_writable;
	gpa_t first_pte_gpa;
	bool flush = false;

	/*
	 * Ignore various flags when verifying that it's safe to sync a shadow
	 * page using the current MMU context.
	 *
	 * - level: not part of the overall MMU role and will never match as
	 *   the MMU's level tracks the root level
	 * - access: updated based on the new guest PTE
	 * - quadrant: not part of the overall MMU role (similar to level)
	 */
	const union kvm_mmu_page_role sync_role_ign = {
		.level = 0xf,
		.access = 0x7,
		.quadrant = 0x3,
	};

	/*
	 * Direct pages can never be unsync, and KVM should never attempt to
	 * sync a shadow page for a different MMU context, e.g. if the role
	 * differs then the memslot lookup (SMM vs. non-SMM) will be bogus, the
	 * reserved bits checks will be wrong, etc...
	 */
	if (WARN_ON_ONCE(sp->role.direct ||
			 (sp->role.word ^ mmu_role.word) & ~sync_role_ign.word))
		return -1;

	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
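
	/*
	 * Re-read every guest PTE backing this shadow page and bring the
	 * corresponding SPTE back in sync: drop entries whose guest PTE is no
	 * longer valid or now points at a different gfn, and recompute the
	 * protections of those that are kept.
	 */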
	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
		u64 *sptep, spte;
		struct kvm_memory_slot *slot;
		unsigned pte_access;
		pt_element_t gpte;
		gpa_t pte_gpa;
		gfn_t gfn;

		if (!sp->spt[i])
			continue;

		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);

		if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
					       sizeof(pt_element_t)))
			return -1;

		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
			flush = true;
			continue;
		}

		gfn = gpte_to_gfn(gpte);
		pte_access = sp->role.access;
		pte_access &= FNAME(gpte_access)(gpte);
		FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);

		if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
			continue;

		if (gfn != sp->gfns[i]) {
			drop_spte(vcpu->kvm, &sp->spt[i]);
			flush = true;
			continue;
		}
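
		/*
		 * The guest PTE still maps the same gfn: rebuild the SPTE with
		 * the existing pfn and the freshly computed protections, and
		 * note whether the update requires a TLB flush.
		 */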
		sptep = &sp->spt[i];
		spte = *sptep;
		host_writable = spte & shadow_host_writable_mask;
		slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn);
		make_spte(vcpu, sp, slot, pte_access, gfn,
			  spte_to_pfn(spte), spte, true, false,
			  host_writable, &spte);

		flush |= mmu_spte_update(sptep, spte);
	}

	return flush;
}
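
/*
 * This header is included once per guest paging mode; tear down the
 * mode-specific macro definitions so it can be re-included with a
 * different PTTYPE.
 */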
#undef pt_element_t
#undef guest_walker
#undef FNAME
#undef PT_BASE_ADDR_MASK
#undef PT_INDEX
#undef PT_LVL_ADDR_MASK
#undef PT_LVL_OFFSET_MASK
#undef PT_LEVEL_BITS
#undef PT_MAX_FULL_LEVELS
#undef gpte_to_gfn
#undef gpte_to_gfn_lvl
#undef CMPXCHG
#undef PT_GUEST_ACCESSED_MASK
#undef PT_GUEST_DIRTY_MASK
#undef PT_GUEST_DIRTY_SHIFT
#undef PT_GUEST_ACCESSED_SHIFT
#undef PT_HAVE_ACCESSED_DIRTY