// SPDX-License-Identifier: GPL-2.0-only
[PATCH] kvm: userspace interface
web site: http://kvm.sourceforge.net
mailing list: kvm-devel@lists.sourceforge.net
(http://lists.sourceforge.net/lists/listinfo/kvm-devel)
The following patchset adds a driver for Intel's hardware virtualization
extensions to the x86 architecture. The driver adds a character device
(/dev/kvm) that exposes the virtualization capabilities to userspace. Using
this driver, a process can run a virtual machine (a "guest") in a fully
virtualized PC containing its own virtual hard disks, network adapters, and
display.
Using this driver, one can start multiple virtual machines on a host.
Each virtual machine is a process on the host; a virtual cpu is a thread in
that process. kill(1), nice(1), top(1) work as expected. In effect, the
driver adds a third execution mode to the existing two: we now have kernel
mode, user mode, and guest mode. Guest mode has its own address space mapping
guest physical memory (which is accessible to user mode by mmap()ing
/dev/kvm). Guest mode has no access to any I/O devices; any such access is
intercepted and directed to user mode for emulation.
The driver supports i386 and x86_64 hosts and guests. All combinations are
allowed except x86_64 guest on i386 host. For i386 guests and hosts, both pae
and non-pae paging modes are supported.
SMP hosts and UP guests are supported. At the moment only Intel
hardware is supported, but AMD virtualization support is being worked on.
Performance currently is non-stellar due to the naive implementation of the
mmu virtualization, which throws away most of the shadow page table entries
every context switch. We plan to address this in two ways:
- cache shadow page tables across tlb flushes
- wait until AMD and Intel release processors with nested page tables
Currently a virtual desktop is responsive but consumes a lot of CPU. Under
Windows I tried playing pinball and watching a few flash movies; with a recent
CPU one can hardly feel the virtualization. Linux/X is slower, probably due
to X being in a separate process.
In addition to the driver, you need a slightly modified qemu to provide I/O
device emulation and the BIOS.
Caveats (akpm: might no longer be true):
- The Windows install currently bluescreens due to a problem with the
virtual APIC. We are working on a fix. A temporary workaround is to
use an existing image or install through qemu
- Windows 64-bit does not work. That's also true for qemu, so it's
probably a problem with the device model.
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
/******************************************************************************
 * emulate.c
 *
 * Generic x86 (32-bit and 64-bit) instruction decoder and emulator.
 *
 * Copyright (c) 2005 Keir Fraser
 *
 * Linux coding style, mod r/m decoder, segment base fixes, real-mode
 * privileged instructions:
 *
 * Copyright (C) 2006 Qumranet
 * Copyright 2010 Red Hat, Inc. and/or its affiliates.
 *
 *   Avi Kivity <avi@qumranet.com>
 *   Yaniv Kamay <yaniv@qumranet.com>
 *
 * From: xen-unstable 10676:af9809f51f81a3c43f276f00c81a52ef558afda4
*/
#include <linux/kvm_host.h>
#include "kvm_cache_regs.h"
#include "kvm_emulate.h"
#include <linux/stringify.h>
#include <asm/debugreg.h>
#include <asm/nospec-branch.h>

#include "x86.h"
#include "tss.h"
#include "mmu.h"
#include "pmu.h"

/*
 * Operand types
 */
#define OpNone 0ull
#define OpImplicit 1ull  /* No generic decode */
#define OpReg 2ull  /* Register */
#define OpMem 3ull  /* Memory */
#define OpAcc 4ull  /* Accumulator: AL/AX/EAX/RAX */
#define OpDI 5ull  /* ES:DI/EDI/RDI */
#define OpMem64 6ull  /* Memory, 64-bit */
#define OpImmUByte 7ull  /* Zero-extended 8-bit immediate */
#define OpDX 8ull  /* DX register */
#define OpCL 9ull  /* CL register (for shifts) */
#define OpImmByte 10ull  /* 8-bit sign extended immediate */
#define OpOne 11ull  /* Implied 1 */
#define OpImm 12ull  /* Sign extended up to 32-bit immediate */
#define OpMem16 13ull  /* Memory operand (16-bit). */
#define OpMem32 14ull  /* Memory operand (32-bit). */
#define OpImmU 15ull  /* Immediate operand, zero extended */
#define OpSI 16ull  /* SI/ESI/RSI */
#define OpImmFAddr 17ull  /* Immediate far address */
#define OpMemFAddr 18ull  /* Far address in memory */
#define OpImmU16 19ull  /* Immediate operand, 16 bits, zero extended */
#define OpES 20ull  /* ES */
#define OpCS 21ull  /* CS */
#define OpSS 22ull  /* SS */
#define OpDS 23ull  /* DS */
#define OpFS 24ull  /* FS */
#define OpGS 25ull  /* GS */
#define OpMem8 26ull  /* 8-bit zero extended memory operand */
#define OpImm64 27ull  /* Sign extended 16/32/64-bit immediate */
#define OpXLat 28ull  /* memory at BX/EBX/RBX + zero-extended AL */
#define OpAccLo 29ull  /* Low part of extended acc (AX/AX/EAX/RAX) */
#define OpAccHi 30ull  /* High part of extended acc (-/DX/EDX/RDX) */

#define OpBits 5  /* Width of operand field */
#define OpMask ((1ull << OpBits) - 1)
/*
 * Opcode effective-address decode tables.
 * Note that we only emulate instructions that have at least one memory
 * operand (excluding implicit stack references). We assume that stack
 * references and instruction fetches will never occur in special memory
 * areas that require emulation. So, for example, 'mov <imm>,<reg>' need
 * not be handled.
 */

/* Operand sizes: 8-bit operands or specified/overridden size. */
#define ByteOp (1<<0) /* 8-bit operands. */
/* Destination operand type. */
#define DstShift 1
#define ImplicitOps (OpImplicit << DstShift)
#define DstReg (OpReg << DstShift)
#define DstMem (OpMem << DstShift)
#define DstAcc (OpAcc << DstShift)
#define DstDI (OpDI << DstShift)
#define DstMem64 (OpMem64 << DstShift)
#define DstMem16 (OpMem16 << DstShift)
#define DstImmUByte (OpImmUByte << DstShift)
#define DstDX (OpDX << DstShift)
#define DstAccLo (OpAccLo << DstShift)
#define DstMask (OpMask << DstShift)
/* Source operand type. */
|
2011-09-13 15:45:47 +08:00
|
|
|
#define SrcShift 6
|
|
|
|
#define SrcNone (OpNone << SrcShift)
|
|
|
|
#define SrcReg (OpReg << SrcShift)
|
|
|
|
#define SrcMem (OpMem << SrcShift)
|
|
|
|
#define SrcMem16 (OpMem16 << SrcShift)
|
|
|
|
#define SrcMem32 (OpMem32 << SrcShift)
|
|
|
|
#define SrcImm (OpImm << SrcShift)
|
|
|
|
#define SrcImmByte (OpImmByte << SrcShift)
|
|
|
|
#define SrcOne (OpOne << SrcShift)
|
|
|
|
#define SrcImmUByte (OpImmUByte << SrcShift)
|
|
|
|
#define SrcImmU (OpImmU << SrcShift)
|
|
|
|
#define SrcSI (OpSI << SrcShift)
|
2013-05-09 17:32:50 +08:00
|
|
|
#define SrcXLat (OpXLat << SrcShift)
|
2011-09-13 15:45:47 +08:00
|
|
|
#define SrcImmFAddr (OpImmFAddr << SrcShift)
|
|
|
|
#define SrcMemFAddr (OpMemFAddr << SrcShift)
|
|
|
|
#define SrcAcc (OpAcc << SrcShift)
|
|
|
|
#define SrcImmU16 (OpImmU16 << SrcShift)
|
2012-12-07 07:55:10 +08:00
|
|
|
#define SrcImm64 (OpImm64 << SrcShift)
|
2011-09-13 15:45:47 +08:00
|
|
|
#define SrcDX (OpDX << SrcShift)
|
2012-01-16 21:08:44 +08:00
|
|
|
#define SrcMem8 (OpMem8 << SrcShift)
|
2013-02-09 17:31:45 +08:00
|
|
|
#define SrcAccHi (OpAccHi << SrcShift)
|
2011-09-13 15:45:47 +08:00
|
|
|
#define SrcMask (OpMask << SrcShift)
|
2011-05-31 02:23:14 +08:00
|
|
|
#define BitOp (1<<11)
|
|
|
|
#define MemAbs (1<<12) /* Memory operand is absolute displacement */
|
|
|
|
#define String (1<<13) /* String instruction (rep capable) */
|
|
|
|
#define Stack (1<<14) /* Stack instruction (push/pop) */
|
|
|
|
#define GroupMask (7<<15) /* Opcode uses one of the group mechanisms */
|
|
|
|
#define Group (1<<15) /* Bits 3:5 of modrm byte extend opcode */
|
|
|
|
#define GroupDual (2<<15) /* Alternate decoding of mod == 3 */
|
|
|
|
#define Prefix (3<<15) /* Instruction varies with 66/f2/f3 prefix */
|
|
|
|
#define RMExt (4<<15) /* Opcode extension in ModRM r/m if mod == 3 */
|
2012-12-20 22:57:43 +08:00
|
|
|
#define Escape (5<<15) /* Escape to coprocessor instruction */
|
2014-11-26 21:47:18 +08:00
|
|
|
#define InstrDual (6<<15) /* Alternate instruction decoding of mod == 3 */
|
2015-01-26 15:32:24 +08:00
|
|
|
#define ModeDual (7<<15) /* Different instruction for 32/64 bit */
|
2011-05-31 02:23:14 +08:00
|
|
|
#define Sse (1<<18) /* SSE Vector instruction */
|
2011-09-13 15:45:44 +08:00
|
|
|
/* Generic ModRM decode. */
|
|
|
|
#define ModRM (1<<19)
|
|
|
|
/* Destination is only written; never read. */
|
|
|
|
#define Mov (1<<20)
|
2009-08-23 19:24:25 +08:00
|
|
|
/* Misc flags */
|
2011-04-04 18:39:26 +08:00
|
|
|
#define Prot (1<<21) /* instruction generates #UD if not in prot-mode */
|
2013-09-22 22:44:52 +08:00
|
|
|
#define EmulateOnUD (1<<22) /* Emulate if unsupported by the host */
|
2010-08-01 20:10:29 +08:00
|
|
|
#define NoAccess (1<<23) /* Don't access memory (lea/invlpg/verr etc) */
|
2010-08-01 19:46:54 +08:00
|
|
|
#define Op3264 (1<<24) /* Operand is 64b in long mode, 32b otherwise */
|
2010-07-26 19:37:47 +08:00
|
|
|
#define Undefined (1<<25) /* No Such Instruction */
|
2010-02-10 20:21:36 +08:00
|
|
|
#define Lock (1<<26) /* lock prefix is allowed for the instruction */
|
2010-02-10 20:21:35 +08:00
|
|
|
#define Priv (1<<27) /* instruction generates #GP if current CPL != 0 */
|
2009-08-23 19:24:25 +08:00
|
|
|
#define No64 (1<<28)
|
KVM: x86: tag the instructions which are used to write page table
The idea is from Avi:
| tag instructions that are typically used to modify the page tables, and
| drop shadow if any other instruction is used.
| The list would include, I'd guess, and, or, bts, btc, mov, xchg, cmpxchg,
| and cmpxchg8b.
This patch is used to tag the instructions and in the later path, shadow page
is dropped if it is written by other instructions
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2011-09-22 16:53:46 +08:00
|
|
|
#define PageTable (1 << 29) /* instruction used to write page table */
|
2013-04-11 16:59:55 +08:00
|
|
|
#define NotImpl (1 << 30) /* instruction is not implemented */
|
2008-12-04 21:26:42 +08:00
|
|
|
/* Source 2 operand type */
|
2013-04-11 16:59:55 +08:00
|
|
|
#define Src2Shift (31)
|
2011-09-13 15:45:43 +08:00
|
|
|
#define Src2None (OpNone << Src2Shift)
|
2013-02-09 17:31:46 +08:00
|
|
|
#define Src2Mem (OpMem << Src2Shift)
|
2011-09-13 15:45:43 +08:00
|
|
|
#define Src2CL (OpCL << Src2Shift)
|
|
|
|
#define Src2ImmByte (OpImmByte << Src2Shift)
|
|
|
|
#define Src2One (OpOne << Src2Shift)
|
|
|
|
#define Src2Imm (OpImm << Src2Shift)
|
2011-09-13 15:45:49 +08:00
|
|
|
#define Src2ES (OpES << Src2Shift)
|
|
|
|
#define Src2CS (OpCS << Src2Shift)
|
|
|
|
#define Src2SS (OpSS << Src2Shift)
|
|
|
|
#define Src2DS (OpDS << Src2Shift)
|
|
|
|
#define Src2FS (OpFS << Src2Shift)
|
|
|
|
#define Src2GS (OpGS << Src2Shift)
|
2011-09-13 15:45:43 +08:00
|
|
|
#define Src2Mask (OpMask << Src2Shift)
|
2012-04-09 23:40:02 +08:00
|
|
|
#define Mmx ((u64)1 << 40) /* MMX Vector instruction */
|
2016-11-09 03:54:17 +08:00
|
|
|
#define AlignMask ((u64)7 << 41)
|
2012-04-09 23:39:59 +08:00
|
|
|
#define Aligned ((u64)1 << 41) /* Explicitly aligned (e.g. MOVDQA) */
|
2016-11-09 03:54:17 +08:00
|
|
|
#define Unaligned ((u64)2 << 41) /* Explicitly unaligned (e.g. MOVDQU) */
|
|
|
|
#define Avx ((u64)3 << 41) /* Advanced Vector Extensions */
|
|
|
|
#define Aligned16 ((u64)4 << 41) /* Aligned to 16 byte boundary (e.g. FXSAVE) */
|
2013-01-04 22:18:48 +08:00
|
|
|
#define Fastop ((u64)1 << 44) /* Use opcode::u.fastop */
|
2013-01-04 22:18:50 +08:00
|
|
|
#define NoWrite ((u64)1 << 45) /* No writeback */
|
2013-02-09 17:31:44 +08:00
|
|
|
#define SrcWrite ((u64)1 << 46) /* Write back src operand */
|
2014-05-26 04:05:21 +08:00
|
|
|
#define NoMod ((u64)1 << 47) /* Mod field is ignored */
|
2014-03-27 18:58:02 +08:00
|
|
|
#define Intercept ((u64)1 << 48) /* Has valid intercept field */
|
|
|
|
#define CheckPerm ((u64)1 << 49) /* Has valid check_perm field */
|
2014-06-18 22:19:35 +08:00
|
|
|
#define PrivUD ((u64)1 << 51) /* #UD instead of #GP on CPL > 0 */
|
2014-10-24 16:35:09 +08:00
|
|
|
#define NearBranch ((u64)1 << 52) /* Near branches */
|
2014-11-02 17:55:00 +08:00
|
|
|
#define No16 ((u64)1 << 53) /* No 16 bit operand */
|
2014-12-25 08:52:21 +08:00
|
|
|
#define IncSP ((u64)1 << 54) /* SP is incremented before ModRM calc */
|
2016-12-15 03:59:23 +08:00
|
|
|
#define TwoMemOp ((u64)1 << 55) /* Instruction has two memory operand */
|

#define DstXacc     (DstAccLo | SrcAccHi | SrcWrite)

#define X2(x...) x, x
#define X3(x...) X2(x), x
#define X4(x...) X2(x), X2(x)
#define X5(x...) X4(x), x
#define X6(x...) X4(x), X2(x)
#define X7(x...) X4(x), X3(x)
#define X8(x...) X4(x), X4(x)
#define X16(x...) X8(x), X8(x)

#define NR_FASTOP (ilog2(sizeof(ulong)) + 1)
#define FASTOP_SIZE 8
struct opcode {
	u64 flags : 56;
	u64 intercept : 8;
	union {
		int (*execute)(struct x86_emulate_ctxt *ctxt);
		const struct opcode *group;
		const struct group_dual *gdual;
		const struct gprefix *gprefix;
		const struct escape *esc;
		const struct instr_dual *idual;
		const struct mode_dual *mdual;
		void (*fastop)(struct fastop *fake);
	} u;
	int (*check_perm)(struct x86_emulate_ctxt *ctxt);
};

struct group_dual {
	struct opcode mod012[8];
	struct opcode mod3[8];
};

struct gprefix {
	struct opcode pfx_no;
	struct opcode pfx_66;
	struct opcode pfx_f2;
	struct opcode pfx_f3;
};

struct escape {
	struct opcode op[8];
	struct opcode high[64];
};

struct instr_dual {
	struct opcode mod012;
	struct opcode mod3;
};

struct mode_dual {
	struct opcode mode32;
	struct opcode mode64;
};

#define EFLG_RESERVED_ZEROS_MASK 0xffc0802a

enum x86_transfer_type {
	X86_TRANSFER_NONE,
	X86_TRANSFER_CALL_JMP,
	X86_TRANSFER_RET,
	X86_TRANSFER_TASK_SWITCH,
};

static ulong reg_read(struct x86_emulate_ctxt *ctxt, unsigned nr)
{
	if (!(ctxt->regs_valid & (1 << nr))) {
		ctxt->regs_valid |= 1 << nr;
		ctxt->_regs[nr] = ctxt->ops->read_gpr(ctxt, nr);
	}
	return ctxt->_regs[nr];
}

static ulong *reg_write(struct x86_emulate_ctxt *ctxt, unsigned nr)
{
	ctxt->regs_valid |= 1 << nr;
	ctxt->regs_dirty |= 1 << nr;
	return &ctxt->_regs[nr];
}

static ulong *reg_rmw(struct x86_emulate_ctxt *ctxt, unsigned nr)
{
	reg_read(ctxt, nr);
	return reg_write(ctxt, nr);
}

static void writeback_registers(struct x86_emulate_ctxt *ctxt)
{
	unsigned reg;

	for_each_set_bit(reg, (ulong *)&ctxt->regs_dirty, 16)
		ctxt->ops->write_gpr(ctxt, reg, ctxt->_regs[reg]);
}

static void invalidate_registers(struct x86_emulate_ctxt *ctxt)
{
	ctxt->regs_dirty = 0;
	ctxt->regs_valid = 0;
}
|
|
|
|
|

/*
 * These EFLAGS bits are restored from saved value during emulation, and
 * any changes are written back to the saved value after emulation.
 */
#define EFLAGS_MASK (X86_EFLAGS_OF|X86_EFLAGS_SF|X86_EFLAGS_ZF|X86_EFLAGS_AF|\
		     X86_EFLAGS_PF|X86_EFLAGS_CF)
|

#ifdef CONFIG_X86_64
#define ON64(x) x
#else
#define ON64(x)
#endif

/*
 * fastop functions have a special calling convention:
 *
 * dst:    rax        (in/out)
 * src:    rdx        (in/out)
 * src2:   rcx        (in)
 * flags:  rflags     (in/out)
 * ex:     rsi        (in:fastop pointer, out:zero if exception)
 *
 * Moreover, they are all exactly FASTOP_SIZE bytes long, so functions for
 * different operand sizes can be reached by calculation, rather than a jump
 * table (which would be bigger than the code).
 */
static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);

#define __FOP_FUNC(name) \
	".align " __stringify(FASTOP_SIZE) " \n\t" \
	".type " name ", @function \n\t" \
	name ":\n\t"

#define FOP_FUNC(name) \
	__FOP_FUNC(#name)

#define __FOP_RET(name) \
	"ret \n\t" \
	".size " name ", .-" name "\n\t"

#define FOP_RET(name) \
	__FOP_RET(#name)

#define FOP_START(op) \
	extern void em_##op(struct fastop *fake); \
	asm(".pushsection .text, \"ax\" \n\t" \
	    ".global em_" #op " \n\t" \
	    ".align " __stringify(FASTOP_SIZE) " \n\t" \
	    "em_" #op ":\n\t"

#define FOP_END \
	    ".popsection")

#define __FOPNOP(name) \
	__FOP_FUNC(name) \
	__FOP_RET(name)

#define FOPNOP() \
	__FOPNOP(__stringify(__UNIQUE_ID(nop)))

#define FOP1E(op,  dst) \
	__FOP_FUNC(#op "_" #dst) \
	"10: " #op " %" #dst " \n\t" \
	__FOP_RET(#op "_" #dst)

#define FOP1EEX(op,  dst) \
	FOP1E(op, dst) _ASM_EXTABLE(10b, kvm_fastop_exception)

#define FASTOP1(op) \
	FOP_START(op) \
	FOP1E(op##b, al) \
	FOP1E(op##w, ax) \
	FOP1E(op##l, eax) \
	ON64(FOP1E(op##q, rax))	\
	FOP_END

/* 1-operand, using src2 (for MUL/DIV r/m) */
#define FASTOP1SRC2(op, name) \
	FOP_START(name) \
	FOP1E(op, cl) \
	FOP1E(op, cx) \
	FOP1E(op, ecx) \
	ON64(FOP1E(op, rcx)) \
	FOP_END

/* 1-operand, using src2 (for MUL/DIV r/m), with exceptions */
#define FASTOP1SRC2EX(op, name) \
	FOP_START(name) \
	FOP1EEX(op, cl) \
	FOP1EEX(op, cx) \
	FOP1EEX(op, ecx) \
	ON64(FOP1EEX(op, rcx)) \
	FOP_END
|
|
|
|
|
#define FOP2E(op,  dst, src) \
	__FOP_FUNC(#op "_" #dst "_" #src) \
	#op " %" #src ", %" #dst " \n\t" \
	__FOP_RET(#op "_" #dst "_" #src)

#define FASTOP2(op) \
	FOP_START(op) \
	FOP2E(op##b, al, dl) \
	FOP2E(op##w, ax, dx) \
	FOP2E(op##l, eax, edx) \
	ON64(FOP2E(op##q, rax, rdx)) \
	FOP_END

/* 2 operand, word only */
#define FASTOP2W(op) \
	FOP_START(op) \
	FOPNOP() \
	FOP2E(op##w, ax, dx) \
	FOP2E(op##l, eax, edx) \
	ON64(FOP2E(op##q, rax, rdx)) \
	FOP_END

/* 2 operand, src is CL */
#define FASTOP2CL(op) \
	FOP_START(op) \
	FOP2E(op##b, al, cl) \
	FOP2E(op##w, ax, cl) \
	FOP2E(op##l, eax, cl) \
	ON64(FOP2E(op##q, rax, cl)) \
	FOP_END

/* 2 operand, src and dest are reversed */
#define FASTOP2R(op, name) \
	FOP_START(name) \
	FOP2E(op##b, dl, al) \
	FOP2E(op##w, dx, ax) \
	FOP2E(op##l, edx, eax) \
	ON64(FOP2E(op##q, rdx, rax)) \
	FOP_END

#define FOP3E(op,  dst, src, src2) \
	__FOP_FUNC(#op "_" #dst "_" #src "_" #src2) \
	#op " %" #src2 ", %" #src ", %" #dst " \n\t"\
	__FOP_RET(#op "_" #dst "_" #src "_" #src2)

/* 3-operand, word-only, src2=cl */
#define FASTOP3WCL(op) \
	FOP_START(op) \
	FOPNOP() \
	FOP3E(op##w, ax, dx, cl) \
	FOP3E(op##l, eax, edx, cl) \
	ON64(FOP3E(op##q, rax, rdx, cl)) \
	FOP_END
|
|
|
|
|
/* Special case for SETcc - 1 instruction per cc */
#define FOP_SETCC(op) \
	".align 4 \n\t" \
	".type " #op ", @function \n\t" \
	#op ": \n\t" \
	#op " %al \n\t" \
	__FOP_RET(#op)

asm(".pushsection .fixup, \"ax\"\n"
    ".global kvm_fastop_exception \n"
    "kvm_fastop_exception: xor %esi, %esi; ret\n"
    ".popsection");

FOP_START(setcc)
FOP_SETCC(seto)
FOP_SETCC(setno)
FOP_SETCC(setc)
FOP_SETCC(setnc)
FOP_SETCC(setz)
FOP_SETCC(setnz)
FOP_SETCC(setbe)
FOP_SETCC(setnbe)
FOP_SETCC(sets)
FOP_SETCC(setns)
FOP_SETCC(setp)
FOP_SETCC(setnp)
FOP_SETCC(setl)
FOP_SETCC(setnl)
FOP_SETCC(setle)
FOP_SETCC(setnle)
FOP_END;

FOP_START(salc)
FOP_FUNC(salc)
"pushf; sbb %al, %al; popf \n\t"
FOP_RET(salc)
FOP_END;
|
|
|
|
|
/*
 * XXX: inoutclob user must know where the argument is being expanded.
 * Relying on CONFIG_CC_HAS_ASM_GOTO would allow us to remove _fault.
 */
#define asm_safe(insn, inoutclob...) \
({ \
	int _fault = 0; \
 \
	asm volatile("1:" insn "\n" \
		     "2:\n" \
		     ".pushsection .fixup, \"ax\"\n" \
		     "3: movl $1, %[_fault]\n" \
		     "   jmp  2b\n" \
		     ".popsection\n" \
		     _ASM_EXTABLE(1b, 3b) \
		     : [_fault] "+qm"(_fault) inoutclob ); \
 \
	_fault ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE; \
})
|
|
|
|
|
static int emulator_check_intercept(struct x86_emulate_ctxt *ctxt,
				    enum x86_intercept intercept,
				    enum x86_intercept_stage stage)
{
	struct x86_instruction_info info = {
		.intercept  = intercept,
		.rep_prefix = ctxt->rep_prefix,
		.modrm_mod  = ctxt->modrm_mod,
		.modrm_reg  = ctxt->modrm_reg,
		.modrm_rm   = ctxt->modrm_rm,
		.src_val    = ctxt->src.val64,
		.dst_val    = ctxt->dst.val64,
		.src_bytes  = ctxt->src.bytes,
		.dst_bytes  = ctxt->dst.bytes,
		.ad_bytes   = ctxt->ad_bytes,
		.next_rip   = ctxt->eip,
	};

	return ctxt->ops->intercept(ctxt, &info, stage);
}

static void assign_masked(ulong *dest, ulong src, ulong mask)
{
	*dest = (*dest & ~mask) | (src & mask);
}

static void assign_register(unsigned long *reg, u64 val, int bytes)
{
	/* The 4-byte case *is* correct: in 64-bit mode we zero-extend. */
	switch (bytes) {
	case 1:
		*(u8 *)reg = (u8)val;
		break;
	case 2:
		*(u16 *)reg = (u16)val;
		break;
	case 4:
		*reg = (u32)val;
		break;	/* 64b: zero-extend */
	case 8:
		*reg = val;
		break;
	}
}

static inline unsigned long ad_mask(struct x86_emulate_ctxt *ctxt)
{
	return (1UL << (ctxt->ad_bytes << 3)) - 1;
}

static ulong stack_mask(struct x86_emulate_ctxt *ctxt)
{
	u16 sel;
	struct desc_struct ss;

	if (ctxt->mode == X86EMUL_MODE_PROT64)
		return ~0UL;
	ctxt->ops->get_segment(ctxt, &sel, &ss, NULL, VCPU_SREG_SS);
	return ~0U >> ((ss.d ^ 1) * 16);  /* d=0: 0xffff; d=1: 0xffffffff */
}

static int stack_size(struct x86_emulate_ctxt *ctxt)
{
	return (__fls(stack_mask(ctxt)) + 1) >> 3;
}

/* Access/update address held in a register, based on addressing mode. */
static inline unsigned long
address_mask(struct x86_emulate_ctxt *ctxt, unsigned long reg)
{
	if (ctxt->ad_bytes == sizeof(unsigned long))
		return reg;
	else
		return reg & ad_mask(ctxt);
}

static inline unsigned long
register_address(struct x86_emulate_ctxt *ctxt, int reg)
{
	return address_mask(ctxt, reg_read(ctxt, reg));
}

static void masked_increment(ulong *reg, ulong mask, int inc)
{
	assign_masked(reg, *reg + inc, mask);
}

static inline void
register_address_increment(struct x86_emulate_ctxt *ctxt, int reg, int inc)
{
	ulong *preg = reg_rmw(ctxt, reg);

	assign_register(preg, *preg + inc, ctxt->ad_bytes);
}

static void rsp_increment(struct x86_emulate_ctxt *ctxt, int inc)
{
	masked_increment(reg_rmw(ctxt, VCPU_REGS_RSP), stack_mask(ctxt), inc);
}

static u32 desc_limit_scaled(struct desc_struct *desc)
{
	u32 limit = get_desc_limit(desc);

	return desc->g ? (limit << 12) | 0xfff : limit;
}

static unsigned long seg_base(struct x86_emulate_ctxt *ctxt, int seg)
{
	if (ctxt->mode == X86EMUL_MODE_PROT64 && seg < VCPU_SREG_FS)
		return 0;

	return ctxt->ops->get_cached_segment_base(ctxt, seg);
}

static int emulate_exception(struct x86_emulate_ctxt *ctxt, int vec,
			     u32 error, bool valid)
{
	WARN_ON(vec > 0x1f);
	ctxt->exception.vector = vec;
	ctxt->exception.error_code = error;
	ctxt->exception.error_code_valid = valid;
	return X86EMUL_PROPAGATE_FAULT;
}

static int emulate_db(struct x86_emulate_ctxt *ctxt)
{
	return emulate_exception(ctxt, DB_VECTOR, 0, false);
}

static int emulate_gp(struct x86_emulate_ctxt *ctxt, int err)
{
	return emulate_exception(ctxt, GP_VECTOR, err, true);
}

static int emulate_ss(struct x86_emulate_ctxt *ctxt, int err)
{
	return emulate_exception(ctxt, SS_VECTOR, err, true);
}

static int emulate_ud(struct x86_emulate_ctxt *ctxt)
{
	return emulate_exception(ctxt, UD_VECTOR, 0, false);
}

static int emulate_ts(struct x86_emulate_ctxt *ctxt, int err)
{
	return emulate_exception(ctxt, TS_VECTOR, err, true);
}

static int emulate_de(struct x86_emulate_ctxt *ctxt)
{
	return emulate_exception(ctxt, DE_VECTOR, 0, false);
}

static int emulate_nm(struct x86_emulate_ctxt *ctxt)
{
	return emulate_exception(ctxt, NM_VECTOR, 0, false);
}

static u16 get_segment_selector(struct x86_emulate_ctxt *ctxt, unsigned seg)
{
	u16 selector;
	struct desc_struct desc;

	ctxt->ops->get_segment(ctxt, &selector, &desc, NULL, seg);
	return selector;
}

static void set_segment_selector(struct x86_emulate_ctxt *ctxt, u16 selector,
				 unsigned seg)
{
	u16 dummy;
	u32 base3;
	struct desc_struct desc;

	ctxt->ops->get_segment(ctxt, &dummy, &desc, &base3, seg);
	ctxt->ops->set_segment(ctxt, selector, &desc, base3, seg);
}

static inline u8 ctxt_virt_addr_bits(struct x86_emulate_ctxt *ctxt)
{
	return (ctxt->ops->get_cr(ctxt, 4) & X86_CR4_LA57) ? 57 : 48;
}

static inline bool emul_is_noncanonical_address(u64 la,
						struct x86_emulate_ctxt *ctxt)
{
	return get_canonical(la, ctxt_virt_addr_bits(ctxt)) != la;
}

/*
 * x86 defines three classes of vector instructions: explicitly
 * aligned, explicitly unaligned, and the rest, which change behaviour
 * depending on whether they're AVX encoded or not.
 *
 * Also included is CMPXCHG16B which is not a vector instruction, yet it is
 * subject to the same check.  FXSAVE and FXRSTOR are checked here too as their
 * 512 bytes of data must be aligned to a 16 byte boundary.
 */
static unsigned insn_alignment(struct x86_emulate_ctxt *ctxt, unsigned size)
{
	u64 alignment = ctxt->d & AlignMask;

	if (likely(size < 16))
		return 1;

	switch (alignment) {
	case Unaligned:
	case Avx:
		return 1;
	case Aligned16:
		return 16;
	case Aligned:
	default:
		return size;
	}
}

static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
				       struct segmented_address addr,
				       unsigned *max_size, unsigned size,
				       bool write, bool fetch,
				       enum x86emul_mode mode, ulong *linear)
{
	struct desc_struct desc;
	bool usable;
	ulong la;
	u32 lim;
	u16 sel;
	u8  va_bits;

	la = seg_base(ctxt, addr.seg) + addr.ea;
	*max_size = 0;
	switch (mode) {
	case X86EMUL_MODE_PROT64:
		*linear = la;
		va_bits = ctxt_virt_addr_bits(ctxt);
		if (get_canonical(la, va_bits) != la)
			goto bad;

		*max_size = min_t(u64, ~0u, (1ull << va_bits) - la);
		if (size > *max_size)
			goto bad;
		break;
	default:
		*linear = la = (u32)la;
		usable = ctxt->ops->get_segment(ctxt, &sel, &desc, NULL,
						addr.seg);
		if (!usable)
			goto bad;
		/* code segment in protected mode or read-only data segment */
		if ((((ctxt->mode != X86EMUL_MODE_REAL) && (desc.type & 8))
		     || !(desc.type & 2)) && write)
			goto bad;
		/* unreadable code segment */
		if (!fetch && (desc.type & 8) && !(desc.type & 2))
			goto bad;
		lim = desc_limit_scaled(&desc);
		if (!(desc.type & 8) && (desc.type & 4)) {
			/* expand-down segment */
			if (addr.ea <= lim)
				goto bad;
			lim = desc.d ? 0xffffffff : 0xffff;
		}
		if (addr.ea > lim)
			goto bad;
		if (lim == 0xffffffff)
			*max_size = ~0u;
		else {
			*max_size = (u64)lim + 1 - addr.ea;
			if (size > *max_size)
				goto bad;
		}
		break;
	}
	if (la & (insn_alignment(ctxt, size) - 1))
		return emulate_gp(ctxt, 0);
	return X86EMUL_CONTINUE;
bad:
	if (addr.seg == VCPU_SREG_SS)
		return emulate_ss(ctxt, 0);
	else
		return emulate_gp(ctxt, 0);
}

static int linearize(struct x86_emulate_ctxt *ctxt,
		     struct segmented_address addr,
		     unsigned size, bool write,
		     ulong *linear)
{
	unsigned max_size;
	return __linearize(ctxt, addr, &max_size, size, write, false,
			   ctxt->mode, linear);
}

static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
			     enum x86emul_mode mode)
{
	ulong linear;
	int rc;
	unsigned max_size;
	struct segmented_address addr = { .seg = VCPU_SREG_CS,
					  .ea = dst };

	if (ctxt->op_bytes != sizeof(unsigned long))
		addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
	rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear);
	if (rc == X86EMUL_CONTINUE)
		ctxt->_eip = addr.ea;
	return rc;
}

static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
{
	return assign_eip(ctxt, dst, ctxt->mode);
}

static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
			  const struct desc_struct *cs_desc)
{
	enum x86emul_mode mode = ctxt->mode;
	int rc;

#ifdef CONFIG_X86_64
	if (ctxt->mode >= X86EMUL_MODE_PROT16) {
		if (cs_desc->l) {
			u64 efer = 0;

			ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
			if (efer & EFER_LMA)
				mode = X86EMUL_MODE_PROT64;
		} else
			mode = X86EMUL_MODE_PROT32; /* temporary value */
	}
#endif
	if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32)
		mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
	rc = assign_eip(ctxt, dst, mode);
	if (rc == X86EMUL_CONTINUE)
		ctxt->mode = mode;
	return rc;
}

static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
{
	return assign_eip_near(ctxt, ctxt->_eip + rel);
}

static int linear_read_system(struct x86_emulate_ctxt *ctxt, ulong linear,
			      void *data, unsigned size)
{
	return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception, true);
}

static int linear_write_system(struct x86_emulate_ctxt *ctxt,
			       ulong linear, void *data,
			       unsigned int size)
{
	return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception, true);
}

static int segmented_read_std(struct x86_emulate_ctxt *ctxt,
			      struct segmented_address addr,
			      void *data,
			      unsigned size)
{
	int rc;
	ulong linear;

	rc = linearize(ctxt, addr, size, false, &linear);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	return ctxt->ops->read_std(ctxt, linear, data, size, &ctxt->exception, false);
}

static int segmented_write_std(struct x86_emulate_ctxt *ctxt,
			       struct segmented_address addr,
			       void *data,
			       unsigned int size)
{
	int rc;
	ulong linear;

	rc = linearize(ctxt, addr, size, true, &linear);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	return ctxt->ops->write_std(ctxt, linear, data, size, &ctxt->exception, false);
}

/*
 * Prefetch the remaining bytes of the instruction without crossing page
 * boundary if they are not in fetch_cache yet.
 */
static int __do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt, int op_size)
{
	int rc;
	unsigned size, max_size;
	unsigned long linear;
	int cur_size = ctxt->fetch.end - ctxt->fetch.data;
	struct segmented_address addr = { .seg = VCPU_SREG_CS,
					   .ea = ctxt->eip + cur_size };

	/*
	 * We do not know exactly how many bytes will be needed, and
	 * __linearize is expensive, so fetch as much as possible.  We
	 * just have to avoid going beyond the 15 byte limit, the end
	 * of the segment, or the end of the page.
	 *
	 * __linearize is called with size 0 so that it does not do any
	 * boundary check itself.  Instead, we use max_size to check
	 * against op_size.
	 */
	rc = __linearize(ctxt, addr, &max_size, 0, false, true, ctxt->mode,
			 &linear);
	if (unlikely(rc != X86EMUL_CONTINUE))
		return rc;

	size = min_t(unsigned, 15UL ^ cur_size, max_size);
	size = min_t(unsigned, size, PAGE_SIZE - offset_in_page(linear));

	/*
	 * One instruction can only straddle two pages,
	 * and one has been loaded at the beginning of
	 * x86_decode_insn.  So, if not enough bytes
	 * still, we must have hit the 15-byte boundary.
	 */
	if (unlikely(size < op_size))
		return emulate_gp(ctxt, 0);

	rc = ctxt->ops->fetch(ctxt, linear, ctxt->fetch.end,
			      size, &ctxt->exception);
	if (unlikely(rc != X86EMUL_CONTINUE))
		return rc;
	ctxt->fetch.end += size;
	return X86EMUL_CONTINUE;
}

static __always_inline int do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt,
					       unsigned size)
{
	unsigned done_size = ctxt->fetch.end - ctxt->fetch.ptr;

	if (unlikely(done_size < size))
		return __do_insn_fetch_bytes(ctxt, size - done_size);
	else
		return X86EMUL_CONTINUE;
}

/* Fetch next part of the instruction being emulated. */
#define insn_fetch(_type, _ctxt)					\
({	_type _x;							\
									\
	rc = do_insn_fetch_bytes(_ctxt, sizeof(_type));			\
	if (rc != X86EMUL_CONTINUE)					\
		goto done;						\
	ctxt->_eip += sizeof(_type);					\
	memcpy(&_x, ctxt->fetch.ptr, sizeof(_type));			\
	ctxt->fetch.ptr += sizeof(_type);				\
	_x;								\
})
|
|
|
|
|
2011-07-30 17:00:17 +08:00
|
|
|
#define insn_fetch_arr(_arr, _size, _ctxt) \
|
2014-05-06 19:05:25 +08:00
|
|
|
({ \
|
|
|
|
rc = do_insn_fetch_bytes(_ctxt, _size); \
|
2011-05-14 23:54:58 +08:00
|
|
|
if (rc != X86EMUL_CONTINUE) \
|
|
|
|
goto done; \
|
2014-05-06 19:05:25 +08:00
|
|
|
ctxt->_eip += (_size); \
|
2014-05-06 22:33:01 +08:00
|
|
|
memcpy(_arr, ctxt->fetch.ptr, _size); \
|
|
|
|
ctxt->fetch.ptr += (_size); \
|
2011-05-14 23:54:58 +08:00
|
|
|
})
|
|
|
|
|
2007-07-17 21:16:11 +08:00
|
|
|
/*
|
|
|
|
* Given the 'reg' portion of a ModRM byte, and a register block, return a
|
|
|
|
* pointer into the block that addresses the relevant register.
|
|
|
|
* @highbyte_regs specifies whether to decode AH,CH,DH,BH.
|
|
|
|
*/
|
2012-08-28 04:46:17 +08:00
|
|
|
static void *decode_register(struct x86_emulate_ctxt *ctxt, u8 modrm_reg,
|
2013-11-04 21:52:41 +08:00
|
|
|
int byteop)
|
{
	void *p;
	int highbyte_regs = (ctxt->rex_prefix == 0) && byteop;
	if (highbyte_regs && modrm_reg >= 4 && modrm_reg < 8)
		p = (unsigned char *)reg_rmw(ctxt, modrm_reg & 3) + 1;
	else
		p = reg_rmw(ctxt, modrm_reg);
	return p;
}

static int read_descriptor(struct x86_emulate_ctxt *ctxt,
			   struct segmented_address addr,
			   u16 *size, unsigned long *address, int op_bytes)
{
	int rc;

	if (op_bytes == 2)
		op_bytes = 3;
	*address = 0;
	rc = segmented_read_std(ctxt, addr, size, 2);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	addr.ea += 2;
	rc = segmented_read_std(ctxt, addr, address, op_bytes);
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
2013-01-20 01:51:56 +08:00
|
|
|
FASTOP2(add);
|
|
|
|
FASTOP2(or);
|
|
|
|
FASTOP2(adc);
|
|
|
|
FASTOP2(sbb);
|
|
|
|
FASTOP2(and);
|
|
|
|
FASTOP2(sub);
|
|
|
|
FASTOP2(xor);
|
|
|
|
FASTOP2(cmp);
|
|
|
|
FASTOP2(test);
|
|
|
|
|
2013-02-09 17:31:48 +08:00
|
|
|
FASTOP1SRC2(mul, mul_ex);
|
|
|
|
FASTOP1SRC2(imul, imul_ex);
|
2013-02-09 17:31:49 +08:00
|
|
|
FASTOP1SRC2EX(div, div_ex);
|
|
|
|
FASTOP1SRC2EX(idiv, idiv_ex);
|
2013-02-09 17:31:48 +08:00
|
|
|
|
2013-01-20 01:51:56 +08:00
|
|
|
FASTOP3WCL(shld);
|
|
|
|
FASTOP3WCL(shrd);
|
|
|
|
|
|
|
|
FASTOP2W(imul);
|
|
|
|
|
|
|
|
FASTOP1(not);
|
|
|
|
FASTOP1(neg);
|
|
|
|
FASTOP1(inc);
|
|
|
|
FASTOP1(dec);
|
|
|
|
|
|
|
|
FASTOP2CL(rol);
|
|
|
|
FASTOP2CL(ror);
|
|
|
|
FASTOP2CL(rcl);
|
|
|
|
FASTOP2CL(rcr);
|
|
|
|
FASTOP2CL(shl);
|
|
|
|
FASTOP2CL(shr);
|
|
|
|
FASTOP2CL(sar);
|
|
|
|
|
|
|
|
FASTOP2W(bsf);
|
|
|
|
FASTOP2W(bsr);
|
|
|
|
FASTOP2W(bt);
|
|
|
|
FASTOP2W(bts);
|
|
|
|
FASTOP2W(btr);
|
|
|
|
FASTOP2W(btc);
|
|
|
|
|
2013-02-09 17:31:51 +08:00
|
|
|
FASTOP2(xadd);
|
|
|
|
|
2014-11-02 17:54:50 +08:00
|
|
|
FASTOP2R(cmp, cmp_r);
|
|
|
|
|
2015-03-30 20:39:21 +08:00
|
|
|
static int em_bsf_c(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
/* If src is zero, do not writeback, but update flags */
|
|
|
|
if (ctxt->src.val == 0)
|
|
|
|
ctxt->dst.type = OP_NONE;
|
|
|
|
return fastop(ctxt, em_bsf);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int em_bsr_c(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
/* If src is zero, do not writeback, but update flags */
|
|
|
|
if (ctxt->src.val == 0)
|
|
|
|
ctxt->dst.type = OP_NONE;
|
|
|
|
return fastop(ctxt, em_bsr);
|
|
|
|
}
|
|
|
|
|
2016-01-23 00:16:12 +08:00
|
|
|
static __always_inline u8 test_cc(unsigned int condition, unsigned long flags)
|
2007-09-15 15:23:07 +08:00
|
|
|
{
|
2013-01-20 01:51:52 +08:00
|
|
|
u8 rc;
|
|
|
|
void (*fop)(void) = (void *)em_setcc + 4 * (condition & 0xf);
|
2007-09-15 15:23:07 +08:00
|
|
|
|
2013-01-20 01:51:52 +08:00
|
|
|
flags = (flags & EFLAGS_MASK) | X86_EFLAGS_IF;
|
2018-01-25 17:58:13 +08:00
|
|
|
asm("push %[flags]; popf; " CALL_NOSPEC
|
|
|
|
: "=a"(rc) : [thunk_target]"r"(fop), [flags]"r"(flags));
|
2013-01-20 01:51:52 +08:00
|
|
|
return rc;
|
2007-09-15 15:23:07 +08:00
|
|
|
}
|
|
|
|
|
2010-08-01 17:53:09 +08:00
|
|
|
static void fetch_register_operand(struct operand *op)
|
|
|
|
{
|
|
|
|
switch (op->bytes) {
|
|
|
|
case 1:
|
|
|
|
op->val = *(u8 *)op->addr.reg;
|
|
|
|
break;
|
|
|
|
case 2:
|
|
|
|
op->val = *(u16 *)op->addr.reg;
|
|
|
|
break;
|
|
|
|
case 4:
|
|
|
|
op->val = *(u32 *)op->addr.reg;
|
|
|
|
break;
|
|
|
|
case 8:
|
|
|
|
op->val = *(u64 *)op->addr.reg;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-12-20 22:57:43 +08:00
|
|
|
static int em_fninit(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
|
|
|
|
return emulate_nm(ctxt);
|
|
|
|
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_get();
|
2012-12-20 22:57:43 +08:00
|
|
|
asm volatile("fninit");
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_put();
|
2012-12-20 22:57:43 +08:00
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int em_fnstcw(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
u16 fcw;
|
|
|
|
|
|
|
|
if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
|
|
|
|
return emulate_nm(ctxt);
|
|
|
|
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_get();
|
2012-12-20 22:57:43 +08:00
|
|
|
asm volatile("fnstcw %0": "+m"(fcw));
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_put();
|
2012-12-20 22:57:43 +08:00
|
|
|
|
|
|
|
ctxt->dst.val = fcw;
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int em_fnstsw(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
u16 fsw;
|
|
|
|
|
|
|
|
if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
|
|
|
|
return emulate_nm(ctxt);
|
|
|
|
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_get();
|
2012-12-20 22:57:43 +08:00
|
|
|
asm volatile("fnstsw %0": "+m"(fsw));
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_put();
|
2012-12-20 22:57:43 +08:00
|
|
|
|
|
|
|
ctxt->dst.val = fsw;
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2011-03-29 17:41:27 +08:00
|
|
|
static void decode_register_operand(struct x86_emulate_ctxt *ctxt,
|
2012-01-16 21:08:45 +08:00
|
|
|
struct operand *op)
|
2007-10-31 16:27:04 +08:00
|
|
|
{
|
2011-06-01 20:34:25 +08:00
|
|
|
unsigned reg = ctxt->modrm_reg;
|
2007-10-31 17:15:56 +08:00
|
|
|
|
2011-06-01 20:34:25 +08:00
|
|
|
if (!(ctxt->d & ModRM))
|
|
|
|
reg = (ctxt->b & 7) | ((ctxt->rex_prefix & 1) << 3);
|
2011-03-29 17:41:27 +08:00
|
|
|
|
2011-06-01 20:34:25 +08:00
|
|
|
if (ctxt->d & Sse) {
|
2011-03-29 17:41:27 +08:00
|
|
|
op->type = OP_XMM;
|
|
|
|
op->bytes = 16;
|
|
|
|
op->addr.xmm = reg;
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_read_sse_reg(reg, &op->vec_val);
|
2011-03-29 17:41:27 +08:00
|
|
|
return;
|
|
|
|
}
|
2012-04-09 23:40:02 +08:00
|
|
|
if (ctxt->d & Mmx) {
|
|
|
|
reg &= 7;
|
|
|
|
op->type = OP_MM;
|
|
|
|
op->bytes = 8;
|
|
|
|
op->addr.mm = reg;
|
|
|
|
return;
|
|
|
|
}
|
2011-03-29 17:41:27 +08:00
|
|
|
|
2007-10-31 16:27:04 +08:00
|
|
|
op->type = OP_REG;
|
2013-11-04 21:52:42 +08:00
|
|
|
op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
|
|
|
|
op->addr.reg = decode_register(ctxt, reg, ctxt->d & ByteOp);
|
|
|
|
|
2010-08-01 17:53:09 +08:00
|
|
|
fetch_register_operand(op);
|
2007-10-31 16:27:04 +08:00
|
|
|
op->orig_val = op->val;
|
|
|
|
}
|
|
|
|
|
2012-06-10 22:15:39 +08:00
|
|
|
static void adjust_modrm_seg(struct x86_emulate_ctxt *ctxt, int base_reg)
|
|
|
|
{
|
|
|
|
if (base_reg == VCPU_REGS_RSP || base_reg == VCPU_REGS_RBP)
|
|
|
|
ctxt->modrm_seg = VCPU_SREG_SS;
|
|
|
|
}
|
|
|
|
|
2007-11-01 12:31:28 +08:00
|
|
|
static int decode_modrm(struct x86_emulate_ctxt *ctxt,
|
2010-08-01 20:40:19 +08:00
|
|
|
struct operand *op)
|
2007-11-01 12:31:28 +08:00
|
|
|
{
|
|
|
|
u8 sib;
|
2014-04-17 00:46:11 +08:00
|
|
|
int index_reg, base_reg, scale;
|
2010-02-12 14:53:59 +08:00
|
|
|
int rc = X86EMUL_CONTINUE;
|
2010-08-01 20:40:19 +08:00
|
|
|
ulong modrm_ea = 0;
|
2007-11-01 12:31:28 +08:00
|
|
|
|
2014-04-17 00:46:11 +08:00
|
|
|
ctxt->modrm_reg = ((ctxt->rex_prefix << 1) & 8); /* REX.R */
|
|
|
|
index_reg = (ctxt->rex_prefix << 2) & 8; /* REX.X */
|
|
|
|
base_reg = (ctxt->rex_prefix << 3) & 8; /* REX.B */
|
2007-11-01 12:31:28 +08:00
|
|
|
|
2014-04-17 00:46:11 +08:00
|
|
|
ctxt->modrm_mod = (ctxt->modrm & 0xc0) >> 6;
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->modrm_reg |= (ctxt->modrm & 0x38) >> 3;
|
2014-04-17 00:46:11 +08:00
|
|
|
ctxt->modrm_rm = base_reg | (ctxt->modrm & 0x07);
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->modrm_seg = VCPU_SREG_DS;
|
2007-11-01 12:31:28 +08:00
|
|
|
|
2014-05-26 04:05:21 +08:00
|
|
|
if (ctxt->modrm_mod == 3 || (ctxt->d & NoMod)) {
|
2010-08-01 20:40:19 +08:00
|
|
|
op->type = OP_REG;
|
2011-06-01 20:34:25 +08:00
|
|
|
op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
|
2013-05-30 22:35:55 +08:00
|
|
|
op->addr.reg = decode_register(ctxt, ctxt->modrm_rm,
|
2013-11-04 21:52:41 +08:00
|
|
|
ctxt->d & ByteOp);
|
2011-06-01 20:34:25 +08:00
|
|
|
if (ctxt->d & Sse) {
|
2011-03-29 17:41:27 +08:00
|
|
|
op->type = OP_XMM;
|
|
|
|
op->bytes = 16;
|
2011-06-01 20:34:25 +08:00
|
|
|
op->addr.xmm = ctxt->modrm_rm;
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_read_sse_reg(ctxt->modrm_rm, &op->vec_val);
|
2011-03-29 17:41:27 +08:00
|
|
|
return rc;
|
|
|
|
}
|
2012-04-09 23:40:02 +08:00
|
|
|
if (ctxt->d & Mmx) {
|
|
|
|
op->type = OP_MM;
|
|
|
|
op->bytes = 8;
|
2014-05-06 20:03:29 +08:00
|
|
|
op->addr.mm = ctxt->modrm_rm & 7;
|
2012-04-09 23:40:02 +08:00
|
|
|
return rc;
|
|
|
|
}
|
2010-08-01 20:40:19 +08:00
|
|
|
fetch_register_operand(op);
|
2007-11-01 12:31:28 +08:00
|
|
|
return rc;
|
|
|
|
}
	op->type = OP_MEM;

	if (ctxt->ad_bytes == 2) {
		unsigned bx = reg_read(ctxt, VCPU_REGS_RBX);
		unsigned bp = reg_read(ctxt, VCPU_REGS_RBP);
		unsigned si = reg_read(ctxt, VCPU_REGS_RSI);
		unsigned di = reg_read(ctxt, VCPU_REGS_RDI);

		/* 16-bit ModR/M decode. */
		switch (ctxt->modrm_mod) {
		case 0:
			if (ctxt->modrm_rm == 6)
				modrm_ea += insn_fetch(u16, ctxt);
			break;
		case 1:
			modrm_ea += insn_fetch(s8, ctxt);
			break;
		case 2:
			modrm_ea += insn_fetch(u16, ctxt);
			break;
		}
		switch (ctxt->modrm_rm) {
		case 0:
			modrm_ea += bx + si;
			break;
		case 1:
			modrm_ea += bx + di;
			break;
		case 2:
			modrm_ea += bp + si;
			break;
		case 3:
			modrm_ea += bp + di;
			break;
		case 4:
			modrm_ea += si;
			break;
		case 5:
			modrm_ea += di;
			break;
		case 6:
			if (ctxt->modrm_mod != 0)
				modrm_ea += bp;
			break;
		case 7:
			modrm_ea += bx;
			break;
		}
		if (ctxt->modrm_rm == 2 || ctxt->modrm_rm == 3 ||
		    (ctxt->modrm_rm == 6 && ctxt->modrm_mod != 0))
			ctxt->modrm_seg = VCPU_SREG_SS;
		modrm_ea = (u16)modrm_ea;
	} else {
		/* 32/64-bit ModR/M decode. */
		if ((ctxt->modrm_rm & 7) == 4) {
			sib = insn_fetch(u8, ctxt);
			index_reg |= (sib >> 3) & 7;
			base_reg |= sib & 7;
			scale = sib >> 6;

			if ((base_reg & 7) == 5 && ctxt->modrm_mod == 0)
				modrm_ea += insn_fetch(s32, ctxt);
			else {
				modrm_ea += reg_read(ctxt, base_reg);
				adjust_modrm_seg(ctxt, base_reg);
				/* Increment ESP on POP [ESP] */
				if ((ctxt->d & IncSP) &&
				    base_reg == VCPU_REGS_RSP)
					modrm_ea += ctxt->op_bytes;
			}
			if (index_reg != 4)
				modrm_ea += reg_read(ctxt, index_reg) << scale;
		} else if ((ctxt->modrm_rm & 7) == 5 && ctxt->modrm_mod == 0) {
			modrm_ea += insn_fetch(s32, ctxt);
			if (ctxt->mode == X86EMUL_MODE_PROT64)
				ctxt->rip_relative = 1;
		} else {
			base_reg = ctxt->modrm_rm;
			modrm_ea += reg_read(ctxt, base_reg);
			adjust_modrm_seg(ctxt, base_reg);
		}
		switch (ctxt->modrm_mod) {
		case 1:
			modrm_ea += insn_fetch(s8, ctxt);
			break;
		case 2:
			modrm_ea += insn_fetch(s32, ctxt);
			break;
		}
	}
	op->addr.mem.ea = modrm_ea;
	if (ctxt->ad_bytes != 8)
		ctxt->memop.addr.mem.ea = (u32)ctxt->memop.addr.mem.ea;

done:
	return rc;
}
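The 16-bit branch above encodes the classic real-mode addressing table: `rm` selects one of eight base/index register combinations, `mod` selects the displacement size, and the result wraps at 16 bits. A standalone sketch of just that arithmetic (the `regs16` struct and `ea16()` helper are illustrative names, not emulator API):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch of 16-bit ModR/M effective-address computation:
 * rm picks the base/index combination, mod picks the displacement, and
 * the sum is truncated to 16 bits, matching modrm_ea = (u16)modrm_ea
 * in the decoder.
 */
struct regs16 { uint16_t bx, bp, si, di; };

static uint32_t ea16(const struct regs16 *r, int mod, int rm, int32_t disp)
{
	uint32_t ea = 0;

	switch (rm) {
	case 0: ea = r->bx + r->si; break;
	case 1: ea = r->bx + r->di; break;
	case 2: ea = r->bp + r->si; break;
	case 3: ea = r->bp + r->di; break;
	case 4: ea = r->si; break;
	case 5: ea = r->di; break;
	case 6: ea = (mod == 0) ? 0 : r->bp; break; /* mod=0, rm=6: pure disp16 */
	case 7: ea = r->bx; break;
	}
	return (uint16_t)(ea + disp);	/* 16-bit wrap-around */
}
```

Note how `[bp+...]` forms default to the SS segment, which is what the `ctxt->modrm_seg = VCPU_SREG_SS` assignment above implements.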

static int decode_abs(struct x86_emulate_ctxt *ctxt,
		      struct operand *op)
{
	int rc = X86EMUL_CONTINUE;

	op->type = OP_MEM;
	switch (ctxt->ad_bytes) {
	case 2:
		op->addr.mem.ea = insn_fetch(u16, ctxt);
		break;
	case 4:
		op->addr.mem.ea = insn_fetch(u32, ctxt);
		break;
	case 8:
		op->addr.mem.ea = insn_fetch(u64, ctxt);
		break;
	}
done:
	return rc;
}

static void fetch_bit_operand(struct x86_emulate_ctxt *ctxt)
{
	long sv = 0, mask;

	if (ctxt->dst.type == OP_MEM && ctxt->src.type == OP_REG) {
		mask = ~((long)ctxt->dst.bytes * 8 - 1);

		if (ctxt->src.bytes == 2)
			sv = (s16)ctxt->src.val & (s16)mask;
		else if (ctxt->src.bytes == 4)
			sv = (s32)ctxt->src.val & (s32)mask;
		else
			sv = (s64)ctxt->src.val & (s64)mask;

		ctxt->dst.addr.mem.ea = address_mask(ctxt,
					ctxt->dst.addr.mem.ea + (sv >> 3));
	}

	/* only subword offset */
	ctxt->src.val &= (ctxt->dst.bytes << 3) - 1;
}
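The function above handles BT-family instructions with a register source, where the bit offset may reach outside the destination operand: the out-of-operand part of the signed offset becomes a byte adjustment to the memory address, and only the residual in-operand bit number is kept. A standalone sketch of that split (the `bit_split()` helper and its names are illustrative only):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch of the bit-operand math: mask off the in-operand
 * bits, shift the remainder down by 3 to get a signed byte adjustment,
 * and keep the low bits as the in-operand bit number.
 */
struct bitaddr { long byte_adj; unsigned bitno; };

static struct bitaddr bit_split(long src, unsigned op_bytes)
{
	long mask = ~((long)op_bytes * 8 - 1);
	long sv = src & mask;		/* whole-operand part of the offset */
	struct bitaddr ba;

	ba.byte_adj = sv >> 3;		/* relies on arithmetic right shift
					 * for negatives, as the emulator does */
	ba.bitno = src & (op_bytes * 8 - 1);	/* only subword offset */
	return ba;
}
```

So for a 32-bit operand, bit offset 100 lands in the dword 12 bytes further on, at bit 4; a negative offset walks backwards through memory.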

static int read_emulated(struct x86_emulate_ctxt *ctxt,
			 unsigned long addr, void *dest, unsigned size)
{
	int rc;
	struct read_cache *mc = &ctxt->mem_read;

	if (mc->pos < mc->end)
		goto read_cached;

	WARN_ON((mc->end + size) >= sizeof(mc->data));

	rc = ctxt->ops->read_emulated(ctxt, addr, mc->data + mc->end, size,
				      &ctxt->exception);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	mc->end += size;

read_cached:
	memcpy(dest, mc->data + mc->pos, size);
	mc->pos += size;
	return X86EMUL_CONTINUE;
}
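The cache here exists because instruction emulation can be restarted: a read that exits to userspace (e.g. MMIO) causes the whole instruction to be re-executed, and the replayed reads must return the same data without touching guest memory again. A standalone sketch of that scheme, with an ordinary byte array standing in for guest memory (the `rcache` struct and `cached_read()` helper are illustrative only):

```c
#include <assert.h>
#include <string.h>

/*
 * Illustrative sketch of read_emulated() caching: the first pass fills
 * data[] and advances end; on replay, pos is reset and reads are served
 * from the cache instead of re-reading memory.
 */
struct rcache { unsigned char data[64]; unsigned pos, end; };

static void cached_read(struct rcache *mc, const unsigned char *mem,
			unsigned long addr, void *dest, unsigned size,
			int *mem_reads)
{
	if (mc->pos >= mc->end) {	/* not cached yet: really read */
		memcpy(mc->data + mc->end, mem + addr, size);
		mc->end += size;
		(*mem_reads)++;
	}
	memcpy(dest, mc->data + mc->pos, size);
	mc->pos += size;
}
```

Resetting `pos` to zero models an instruction restart; `end` is left alone so the replay hits the cache.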

static int segmented_read(struct x86_emulate_ctxt *ctxt,
			  struct segmented_address addr,
			  void *data,
			  unsigned size)
{
	int rc;
	ulong linear;

	rc = linearize(ctxt, addr, size, false, &linear);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	return read_emulated(ctxt, linear, data, size);
}

static int segmented_write(struct x86_emulate_ctxt *ctxt,
			   struct segmented_address addr,
			   const void *data,
			   unsigned size)
{
	int rc;
	ulong linear;

	rc = linearize(ctxt, addr, size, true, &linear);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	return ctxt->ops->write_emulated(ctxt, linear, data, size,
					 &ctxt->exception);
}

static int segmented_cmpxchg(struct x86_emulate_ctxt *ctxt,
			     struct segmented_address addr,
			     const void *orig_data, const void *data,
			     unsigned size)
{
	int rc;
	ulong linear;

	rc = linearize(ctxt, addr, size, true, &linear);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	return ctxt->ops->cmpxchg_emulated(ctxt, linear, orig_data, data,
					   size, &ctxt->exception);
}

static int pio_in_emulated(struct x86_emulate_ctxt *ctxt,
			   unsigned int size, unsigned short port,
			   void *dest)
{
	struct read_cache *rc = &ctxt->io_read;

	if (rc->pos == rc->end) { /* refill pio read ahead */
		unsigned int in_page, n;
		unsigned int count = ctxt->rep_prefix ?
			address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) : 1;
		in_page = (ctxt->eflags & X86_EFLAGS_DF) ?
			offset_in_page(reg_read(ctxt, VCPU_REGS_RDI)) :
			PAGE_SIZE - offset_in_page(reg_read(ctxt, VCPU_REGS_RDI));
		n = min3(in_page, (unsigned int)sizeof(rc->data) / size, count);
		if (n == 0)
			n = 1;
		rc->pos = rc->end = 0;
		if (!ctxt->ops->pio_in_emulated(ctxt, size, port, rc->data, n))
			return 0;
		rc->end = n * size;
	}

	if (ctxt->rep_prefix && (ctxt->d & String) &&
	    !(ctxt->eflags & X86_EFLAGS_DF)) {
		ctxt->dst.data = rc->data + rc->pos;
		ctxt->dst.type = OP_MEM_STR;
		ctxt->dst.count = (rc->end - rc->pos) / size;
		rc->pos = rc->end;
	} else {
		memcpy(dest, rc->data + rc->pos, size);
		rc->pos += size;
	}
	return 1;
}
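The read-ahead batch size above is the minimum of three caps: how much room is left in the destination page, how many units fit in the cache buffer, and the REP count, with a floor of one so a single IN still makes progress. A standalone sketch of just that sizing rule (the `pio_batch()` helper is an illustrative name, not emulator API):

```c
#include <assert.h>

/*
 * Illustrative sketch of the pio read-ahead sizing: take the smallest of
 * (a) the in-page cap, (b) units that fit in the cache buffer, and
 * (c) the REP count, and never return zero.
 */
static unsigned int pio_batch(unsigned int in_page,
			      unsigned int buf_bytes, unsigned int size,
			      unsigned int count)
{
	unsigned int n = in_page;

	if (buf_bytes / size < n)
		n = buf_bytes / size;
	if (count < n)
		n = count;
	return n ? n : 1;
}
```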

static int read_interrupt_descriptor(struct x86_emulate_ctxt *ctxt,
				     u16 index, struct desc_struct *desc)
{
	struct desc_ptr dt;
	ulong addr;

	ctxt->ops->get_idt(ctxt, &dt);

	if (dt.size < index * 8 + 7)
		return emulate_gp(ctxt, index << 3 | 0x2);

	addr = dt.address + index * 8;
	return linear_read_system(ctxt, addr, desc, sizeof(*desc));
}

static void get_descriptor_table_ptr(struct x86_emulate_ctxt *ctxt,
				     u16 selector, struct desc_ptr *dt)
{
	const struct x86_emulate_ops *ops = ctxt->ops;
	u32 base3 = 0;

	if (selector & 1 << 2) {
		struct desc_struct desc;
		u16 sel;

		memset(dt, 0, sizeof(*dt));
		if (!ops->get_segment(ctxt, &sel, &desc, &base3,
				      VCPU_SREG_LDTR))
			return;

		dt->size = desc_limit_scaled(&desc); /* what if limit > 65535? */
		dt->address = get_desc_base(&desc) | ((u64)base3 << 32);
	} else
		ops->get_gdt(ctxt, dt);
}

static int get_descriptor_ptr(struct x86_emulate_ctxt *ctxt,
			      u16 selector, ulong *desc_addr_p)
{
	struct desc_ptr dt;
	u16 index = selector >> 3;
	ulong addr;

	get_descriptor_table_ptr(ctxt, selector, &dt);

	if (dt.size < index * 8 + 7)
		return emulate_gp(ctxt, selector & 0xfffc);

	addr = dt.address + index * 8;

#ifdef CONFIG_X86_64
	if (addr >> 32 != 0) {
		u64 efer = 0;

		ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
		if (!(efer & EFER_LMA))
			addr &= (u32)-1;
	}
#endif

	*desc_addr_p = addr;
	return X86EMUL_CONTINUE;
}
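Both descriptor lookups above use the same selector arithmetic: bits 15:3 of the selector index 8-byte table entries, and an entry whose last byte would lie beyond `dt.size` triggers #GP. A standalone sketch of that check (the `desc_addr()` helper and its -1-on-fault convention are illustrative, standing in for `emulate_gp()`):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch of selector -> descriptor-address math: index
 * 8-byte entries from the table base, rejecting an index whose entry
 * would extend past the table limit.
 */
static int desc_addr(uint64_t table_base, uint32_t table_size,
		     uint16_t selector, uint64_t *addr)
{
	uint16_t index = selector >> 3;

	if (table_size < (uint32_t)index * 8 + 7)
		return -1;	/* the emulator raises #GP(selector & 0xfffc) */
	*addr = table_base + (uint64_t)index * 8;
	return 0;
}
```

The low three selector bits (RPL and the table-indicator bit) do not participate in the indexing; the TI bit only chooses GDT vs LDT, as `get_descriptor_table_ptr()` shows.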

/* allowed just for 8 bytes segments */
static int read_segment_descriptor(struct x86_emulate_ctxt *ctxt,
				   u16 selector, struct desc_struct *desc,
				   ulong *desc_addr_p)
{
	int rc;

	rc = get_descriptor_ptr(ctxt, selector, desc_addr_p);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	return linear_read_system(ctxt, *desc_addr_p, desc, sizeof(*desc));
}

/* allowed only for 8-byte segment descriptors */
static int write_segment_descriptor(struct x86_emulate_ctxt *ctxt,
				    u16 selector, struct desc_struct *desc)
{
	int rc;
	ulong addr;

	rc = get_descriptor_ptr(ctxt, selector, &addr);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	return linear_write_system(ctxt, addr, desc, sizeof(*desc));
}

static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
				     u16 selector, int seg, u8 cpl,
				     enum x86_transfer_type transfer,
				     struct desc_struct *desc)
{
	struct desc_struct seg_desc, old_desc;
	u8 dpl, rpl;
	unsigned err_vec = GP_VECTOR;
	u32 err_code = 0;
	bool null_selector = !(selector & ~0x3); /* 0000-0003 are null */
	ulong desc_addr;
	int ret;
	u16 dummy;
	u32 base3 = 0;

	memset(&seg_desc, 0, sizeof(seg_desc));

	if (ctxt->mode == X86EMUL_MODE_REAL) {
		/* set real mode segment descriptor (keep limit etc. for
		 * unreal mode) */
		ctxt->ops->get_segment(ctxt, &dummy, &seg_desc, NULL, seg);
		set_desc_base(&seg_desc, selector << 4);
		goto load;
	} else if (seg <= VCPU_SREG_GS && ctxt->mode == X86EMUL_MODE_VM86) {
		/* VM86 needs a clean new segment descriptor */
		set_desc_base(&seg_desc, selector << 4);
		set_desc_limit(&seg_desc, 0xffff);
		seg_desc.type = 3;
		seg_desc.p = 1;
		seg_desc.s = 1;
		seg_desc.dpl = 3;
		goto load;
	}

	rpl = selector & 3;

	/* TR should be in GDT only */
	if (seg == VCPU_SREG_TR && (selector & (1 << 2)))
		goto exception;

	/* NULL selector is not valid for TR, CS and (except for long mode) SS */
	if (null_selector) {
		if (seg == VCPU_SREG_CS || seg == VCPU_SREG_TR)
			goto exception;

		if (seg == VCPU_SREG_SS) {
			if (ctxt->mode != X86EMUL_MODE_PROT64 || rpl != cpl)
				goto exception;

			/*
			 * ctxt->ops->set_segment expects the CPL to be in
			 * SS.DPL, so fake an expand-up 32-bit data segment.
			 */
			seg_desc.type = 3;
			seg_desc.p = 1;
			seg_desc.s = 1;
			seg_desc.dpl = cpl;
			seg_desc.d = 1;
			seg_desc.g = 1;
		}

		/* Skip all following checks */
		goto load;
	}

	ret = read_segment_descriptor(ctxt, selector, &seg_desc, &desc_addr);
	if (ret != X86EMUL_CONTINUE)
		return ret;

	err_code = selector & 0xfffc;
	err_vec = (transfer == X86_TRANSFER_TASK_SWITCH) ? TS_VECTOR :
							   GP_VECTOR;

	/* can't load system descriptor into segment selector */
	if (seg <= VCPU_SREG_GS && !seg_desc.s) {
		if (transfer == X86_TRANSFER_CALL_JMP)
			return X86EMUL_UNHANDLEABLE;
		goto exception;
	}

	if (!seg_desc.p) {
		err_vec = (seg == VCPU_SREG_SS) ? SS_VECTOR : NP_VECTOR;
		goto exception;
	}

	dpl = seg_desc.dpl;

	switch (seg) {
	case VCPU_SREG_SS:
		/*
		 * segment is not a writable data segment, or the
		 * selector's RPL != CPL, or the descriptor's DPL != CPL
		 */
		if (rpl != cpl || (seg_desc.type & 0xa) != 0x2 || dpl != cpl)
			goto exception;
		break;

	case VCPU_SREG_CS:
		if (!(seg_desc.type & 8))
			goto exception;

		if (seg_desc.type & 4) {
			/* conforming */
			if (dpl > cpl)
				goto exception;
		} else {
			/* nonconforming */
			if (rpl > cpl || dpl != cpl)
				goto exception;
		}
		/* in long-mode d/b must be clear if l is set */
		if (seg_desc.d && seg_desc.l) {
			u64 efer = 0;

			ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
			if (efer & EFER_LMA)
				goto exception;
		}

		/* CS(RPL) <- CPL */
		selector = (selector & 0xfffc) | cpl;
		break;

	case VCPU_SREG_TR:
		if (seg_desc.s || (seg_desc.type != 1 && seg_desc.type != 9))
			goto exception;
		old_desc = seg_desc;
		seg_desc.type |= 2; /* busy */
		ret = ctxt->ops->cmpxchg_emulated(ctxt, desc_addr, &old_desc, &seg_desc,
						  sizeof(seg_desc), &ctxt->exception);
		if (ret != X86EMUL_CONTINUE)
			return ret;
		break;
	case VCPU_SREG_LDTR:
		if (seg_desc.s || seg_desc.type != 2)
			goto exception;
		break;

	default: /* DS, ES, FS, or GS */
		/*
		 * segment is not a data or readable code segment or
		 * ((segment is a data or nonconforming code segment)
		 * and (both RPL and CPL > DPL))
		 */
		if ((seg_desc.type & 0xa) == 0x8 ||
		    (((seg_desc.type & 0xc) != 0xc) &&
		     (rpl > dpl && cpl > dpl)))
			goto exception;
		break;
	}

	if (seg_desc.s) {
		/* mark segment as accessed */
		if (!(seg_desc.type & 1)) {
			seg_desc.type |= 1;
			ret = write_segment_descriptor(ctxt, selector,
						       &seg_desc);
			if (ret != X86EMUL_CONTINUE)
				return ret;
		}
	} else if (ctxt->mode == X86EMUL_MODE_PROT64) {
		ret = linear_read_system(ctxt, desc_addr + 8, &base3, sizeof(base3));
		if (ret != X86EMUL_CONTINUE)
			return ret;
		if (emul_is_noncanonical_address(get_desc_base(&seg_desc) |
						 ((u64)base3 << 32), ctxt))
			return emulate_gp(ctxt, 0);
	}
load:
	ctxt->ops->set_segment(ctxt, selector, &seg_desc, base3, seg);
	if (desc)
		*desc = seg_desc;
	return X86EMUL_CONTINUE;
exception:
	return emulate_exception(ctxt, err_vec, err_code, true);
}

static int load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
				   u16 selector, int seg)
{
	u8 cpl = ctxt->ops->cpl(ctxt);

	/*
	 * None of MOV, POP and LSS can load a NULL selector in CPL=3, but
	 * they can load it at CPL<3 (Intel's manual says only LSS can,
	 * but it's wrong).
	 *
	 * However, the Intel manual says that putting IST=1/DPL=3 in
	 * an interrupt gate will result in SS=3 (the AMD manual instead
	 * says it doesn't), so allow SS=3 in __load_segment_descriptor
	 * and only forbid it here.
	 */
	if (seg == VCPU_SREG_SS && selector == 3 &&
	    ctxt->mode == X86EMUL_MODE_PROT64)
		return emulate_exception(ctxt, GP_VECTOR, 0, true);

	return __load_segment_descriptor(ctxt, selector, seg, cpl,
					 X86_TRANSFER_NONE, NULL);
}

static void write_register_operand(struct operand *op)
{
	return assign_register(op->addr.reg, op->val, op->bytes);
}
|
|
|
|
|
2013-02-09 17:31:44 +08:00
|
|
|
static int writeback(struct x86_emulate_ctxt *ctxt, struct operand *op)
|
2010-07-29 20:11:52 +08:00
|
|
|
{
|
2013-02-09 17:31:44 +08:00
|
|
|
switch (op->type) {
|
2010-07-29 20:11:52 +08:00
|
|
|
case OP_REG:
|
2013-02-09 17:31:44 +08:00
|
|
|
write_register_operand(op);
|
[PATCH] kvm: userspace interface
web site: http://kvm.sourceforge.net
mailing list: kvm-devel@lists.sourceforge.net
(http://lists.sourceforge.net/lists/listinfo/kvm-devel)
The following patchset adds a driver for Intel's hardware virtualization
extensions to the x86 architecture. The driver adds a character device
(/dev/kvm) that exposes the virtualization capabilities to userspace. Using
this driver, a process can run a virtual machine (a "guest") in a fully
virtualized PC containing its own virtual hard disks, network adapters, and
display.
Using this driver, one can start multiple virtual machines on a host.
Each virtual machine is a process on the host; a virtual cpu is a thread in
that process. kill(1), nice(1), top(1) work as expected. In effect, the
driver adds a third execution mode to the existing two: we now have kernel
mode, user mode, and guest mode. Guest mode has its own address space mapping
guest physical memory (which is accessible to user mode by mmap()ing
/dev/kvm). Guest mode has no access to any I/O devices; any such access is
intercepted and directed to user mode for emulation.
The driver supports i386 and x86_64 hosts and guests. All combinations are
allowed except x86_64 guest on i386 host. For i386 guests and hosts, both pae
and non-pae paging modes are supported.
SMP hosts and UP guests are supported. At the moment only Intel
hardware is supported, but AMD virtualization support is being worked on.
Performance currently is non-stellar due to the naive implementation of the
mmu virtualization, which throws away most of the shadow page table entries
every context switch. We plan to address this in two ways:
- cache shadow page tables across tlb flushes
- wait until AMD and Intel release processors with nested page tables
Currently a virtual desktop is responsive but consumes a lot of CPU. Under
Windows I tried playing pinball and watching a few flash movies; with a recent
CPU one can hardly feel the virtualization. Linux/X is slower, probably due
to X being in a separate process.
In addition to the driver, you need a slightly modified qemu to provide I/O
device emulation and the BIOS.
Caveats (akpm: might no longer be true):
- The Windows install currently bluescreens due to a problem with the
virtual APIC. We are working on a fix. A temporary workaround is to
use an existing image or install through qemu
- Windows 64-bit does not work. That's also true for qemu, so it's
probably a problem with the device model.
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
|
|
|
break;
|
2010-07-29 20:11:52 +08:00
|
|
|
case OP_MEM:
|
2011-06-01 20:34:25 +08:00
|
|
|
if (ctxt->lock_prefix)
|
2014-04-01 19:23:24 +08:00
|
|
|
return segmented_cmpxchg(ctxt,
|
|
|
|
op->addr.mem,
|
|
|
|
&op->orig_val,
|
|
|
|
&op->val,
|
|
|
|
op->bytes);
|
|
|
|
else
|
|
|
|
return segmented_write(ctxt,
|
2013-02-09 17:31:44 +08:00
|
|
|
op->addr.mem,
|
|
|
|
&op->val,
|
|
|
|
op->bytes);
|
2010-03-18 21:20:21 +08:00
|
|
|
break;
|
2012-09-03 20:24:29 +08:00
|
|
|
case OP_MEM_STR:
|
2014-04-01 19:23:24 +08:00
|
|
|
return segmented_write(ctxt,
|
|
|
|
op->addr.mem,
|
|
|
|
op->data,
|
|
|
|
op->bytes * op->count);
|
2012-09-03 20:24:29 +08:00
|
|
|
break;
|
2011-03-29 17:41:27 +08:00
|
|
|
case OP_XMM:
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_write_sse_reg(op->addr.xmm, &op->vec_val);
|
2011-03-29 17:41:27 +08:00
|
|
|
break;
|
2012-04-09 23:40:02 +08:00
|
|
|
case OP_MM:
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_write_mmx_reg(op->addr.mm, &op->mm_val);
|
2012-04-09 23:40:02 +08:00
|
|
|
break;
|
2010-07-29 20:11:52 +08:00
|
|
|
case OP_NONE:
|
|
|
|
/* no writeback */
|
2010-04-29 00:15:26 +08:00
|
|
|
break;
|
2010-07-29 20:11:52 +08:00
|
|
|
default:
|
2010-04-29 00:15:26 +08:00
|
|
|
break;
|
	}
	return X86EMUL_CONTINUE;
}

static int push(struct x86_emulate_ctxt *ctxt, void *data, int bytes)
{
	struct segmented_address addr;

	rsp_increment(ctxt, -bytes);
	addr.ea = reg_read(ctxt, VCPU_REGS_RSP) & stack_mask(ctxt);
	addr.seg = VCPU_SREG_SS;

	return segmented_write(ctxt, addr, data, bytes);
}

static int em_push(struct x86_emulate_ctxt *ctxt)
{
	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return push(ctxt, &ctxt->src.val, ctxt->op_bytes);
}

static int emulate_pop(struct x86_emulate_ctxt *ctxt,
		       void *dest, int len)
{
	int rc;
	struct segmented_address addr;

	addr.ea = reg_read(ctxt, VCPU_REGS_RSP) & stack_mask(ctxt);
	addr.seg = VCPU_SREG_SS;
	rc = segmented_read(ctxt, addr, dest, len);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	rsp_increment(ctxt, len);
	return rc;
}

static int em_pop(struct x86_emulate_ctxt *ctxt)
{
	return emulate_pop(ctxt, &ctxt->dst.val, ctxt->op_bytes);
}

static int emulate_popf(struct x86_emulate_ctxt *ctxt,
			void *dest, int len)
{
	int rc;
	unsigned long val, change_mask;
	int iopl = (ctxt->eflags & X86_EFLAGS_IOPL) >> X86_EFLAGS_IOPL_BIT;
	int cpl = ctxt->ops->cpl(ctxt);

	rc = emulate_pop(ctxt, &val, len);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	change_mask = X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF |
		      X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF |
		      X86_EFLAGS_TF | X86_EFLAGS_DF | X86_EFLAGS_NT |
		      X86_EFLAGS_AC | X86_EFLAGS_ID;

	switch(ctxt->mode) {
	case X86EMUL_MODE_PROT64:
	case X86EMUL_MODE_PROT32:
	case X86EMUL_MODE_PROT16:
		if (cpl == 0)
			change_mask |= X86_EFLAGS_IOPL;
		if (cpl <= iopl)
			change_mask |= X86_EFLAGS_IF;
		break;
	case X86EMUL_MODE_VM86:
		if (iopl < 3)
			return emulate_gp(ctxt, 0);
		change_mask |= X86_EFLAGS_IF;
		break;
	default: /* real mode */
		change_mask |= (X86_EFLAGS_IOPL | X86_EFLAGS_IF);
		break;
	}

	*(unsigned long *)dest =
		(ctxt->eflags & ~change_mask) | (val & change_mask);

	return rc;
}

static int em_popf(struct x86_emulate_ctxt *ctxt)
{
	ctxt->dst.type = OP_REG;
	ctxt->dst.addr.reg = &ctxt->eflags;
	ctxt->dst.bytes = ctxt->op_bytes;
	return emulate_popf(ctxt, &ctxt->dst.val, ctxt->op_bytes);
}

static int em_enter(struct x86_emulate_ctxt *ctxt)
{
	int rc;
	unsigned frame_size = ctxt->src.val;
	unsigned nesting_level = ctxt->src2.val & 31;
	ulong rbp;

	if (nesting_level)
		return X86EMUL_UNHANDLEABLE;

	rbp = reg_read(ctxt, VCPU_REGS_RBP);
	rc = push(ctxt, &rbp, stack_size(ctxt));
	if (rc != X86EMUL_CONTINUE)
		return rc;
	assign_masked(reg_rmw(ctxt, VCPU_REGS_RBP), reg_read(ctxt, VCPU_REGS_RSP),
		      stack_mask(ctxt));
	assign_masked(reg_rmw(ctxt, VCPU_REGS_RSP),
		      reg_read(ctxt, VCPU_REGS_RSP) - frame_size,
		      stack_mask(ctxt));
	return X86EMUL_CONTINUE;
}

static int em_leave(struct x86_emulate_ctxt *ctxt)
{
	assign_masked(reg_rmw(ctxt, VCPU_REGS_RSP), reg_read(ctxt, VCPU_REGS_RBP),
		      stack_mask(ctxt));
	return emulate_pop(ctxt, reg_rmw(ctxt, VCPU_REGS_RBP), ctxt->op_bytes);
}

static int em_push_sreg(struct x86_emulate_ctxt *ctxt)
{
	int seg = ctxt->src2.val;

	ctxt->src.val = get_segment_selector(ctxt, seg);
	if (ctxt->op_bytes == 4) {
		rsp_increment(ctxt, -2);
		ctxt->op_bytes = 2;
	}

	return em_push(ctxt);
}

static int em_pop_sreg(struct x86_emulate_ctxt *ctxt)
{
	int seg = ctxt->src2.val;
	unsigned long selector;
	int rc;

	rc = emulate_pop(ctxt, &selector, 2);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	if (ctxt->modrm_reg == VCPU_SREG_SS)
		ctxt->interruptibility = KVM_X86_SHADOW_INT_MOV_SS;
	if (ctxt->op_bytes > 2)
		rsp_increment(ctxt, ctxt->op_bytes - 2);

	rc = load_segment_descriptor(ctxt, (u16)selector, seg);
	return rc;
}

static int em_pusha(struct x86_emulate_ctxt *ctxt)
{
	unsigned long old_esp = reg_read(ctxt, VCPU_REGS_RSP);
	int rc = X86EMUL_CONTINUE;
	int reg = VCPU_REGS_RAX;

	while (reg <= VCPU_REGS_RDI) {
		(reg == VCPU_REGS_RSP) ?
		(ctxt->src.val = old_esp) : (ctxt->src.val = reg_read(ctxt, reg));

		rc = em_push(ctxt);
		if (rc != X86EMUL_CONTINUE)
			return rc;

		++reg;
	}

	return rc;
}

static int em_pushf(struct x86_emulate_ctxt *ctxt)
{
	ctxt->src.val = (unsigned long)ctxt->eflags & ~X86_EFLAGS_VM;
	return em_push(ctxt);
}

static int em_popa(struct x86_emulate_ctxt *ctxt)
{
	int rc = X86EMUL_CONTINUE;
	int reg = VCPU_REGS_RDI;
	u32 val;

	while (reg >= VCPU_REGS_RAX) {
		if (reg == VCPU_REGS_RSP) {
			rsp_increment(ctxt, ctxt->op_bytes);
			--reg;
		}

		rc = emulate_pop(ctxt, &val, ctxt->op_bytes);
		if (rc != X86EMUL_CONTINUE)
			break;
		assign_register(reg_rmw(ctxt, reg), val, ctxt->op_bytes);
		--reg;
	}
	return rc;
}

static int __emulate_int_real(struct x86_emulate_ctxt *ctxt, int irq)
{
	const struct x86_emulate_ops *ops = ctxt->ops;
	int rc;
	struct desc_ptr dt;
	gva_t cs_addr;
	gva_t eip_addr;
	u16 cs, eip;

	/* TODO: Add limit checks */
	ctxt->src.val = ctxt->eflags;
	rc = em_push(ctxt);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	ctxt->eflags &= ~(X86_EFLAGS_IF | X86_EFLAGS_TF | X86_EFLAGS_AC);

	ctxt->src.val = get_segment_selector(ctxt, VCPU_SREG_CS);
	rc = em_push(ctxt);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	ctxt->src.val = ctxt->_eip;
	rc = em_push(ctxt);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	ops->get_idt(ctxt, &dt);

	eip_addr = dt.address + (irq << 2);
	cs_addr = dt.address + (irq << 2) + 2;

	rc = linear_read_system(ctxt, cs_addr, &cs, 2);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	rc = linear_read_system(ctxt, eip_addr, &eip, 2);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	rc = load_segment_descriptor(ctxt, cs, VCPU_SREG_CS);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	ctxt->_eip = eip;

	return rc;
}

int emulate_int_real(struct x86_emulate_ctxt *ctxt, int irq)
{
	int rc;

	invalidate_registers(ctxt);
	rc = __emulate_int_real(ctxt, irq);
	if (rc == X86EMUL_CONTINUE)
		writeback_registers(ctxt);
	return rc;
}

static int emulate_int(struct x86_emulate_ctxt *ctxt, int irq)
{
	switch(ctxt->mode) {
	case X86EMUL_MODE_REAL:
		return __emulate_int_real(ctxt, irq);
	case X86EMUL_MODE_VM86:
	case X86EMUL_MODE_PROT16:
	case X86EMUL_MODE_PROT32:
	case X86EMUL_MODE_PROT64:
	default:
		/* Protected mode interrupts unimplemented yet */
		return X86EMUL_UNHANDLEABLE;
	}
}

static int emulate_iret_real(struct x86_emulate_ctxt *ctxt)
{
	int rc = X86EMUL_CONTINUE;
	unsigned long temp_eip = 0;
	unsigned long temp_eflags = 0;
	unsigned long cs = 0;
	unsigned long mask = X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF |
			     X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_TF |
			     X86_EFLAGS_IF | X86_EFLAGS_DF | X86_EFLAGS_OF |
			     X86_EFLAGS_IOPL | X86_EFLAGS_NT | X86_EFLAGS_RF |
			     X86_EFLAGS_AC | X86_EFLAGS_ID |
			     X86_EFLAGS_FIXED;
	unsigned long vm86_mask = X86_EFLAGS_VM | X86_EFLAGS_VIF |
				  X86_EFLAGS_VIP;

	/* TODO: Add stack limit check */

	rc = emulate_pop(ctxt, &temp_eip, ctxt->op_bytes);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	if (temp_eip & ~0xffff)
		return emulate_gp(ctxt, 0);

	rc = emulate_pop(ctxt, &cs, ctxt->op_bytes);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	rc = emulate_pop(ctxt, &temp_eflags, ctxt->op_bytes);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	rc = load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	ctxt->_eip = temp_eip;

	if (ctxt->op_bytes == 4)
		ctxt->eflags = ((temp_eflags & mask) | (ctxt->eflags & vm86_mask));
	else if (ctxt->op_bytes == 2) {
		ctxt->eflags &= ~0xffff;
		ctxt->eflags |= temp_eflags;
	}

	ctxt->eflags &= ~EFLG_RESERVED_ZEROS_MASK; /* Clear reserved zeros */
	ctxt->eflags |= X86_EFLAGS_FIXED;
	ctxt->ops->set_nmi_mask(ctxt, false);

	return rc;
}

static int em_iret(struct x86_emulate_ctxt *ctxt)
{
	switch(ctxt->mode) {
	case X86EMUL_MODE_REAL:
		return emulate_iret_real(ctxt);
	case X86EMUL_MODE_VM86:
	case X86EMUL_MODE_PROT16:
	case X86EMUL_MODE_PROT32:
	case X86EMUL_MODE_PROT64:
	default:
		/* iret from protected mode unimplemented yet */
		return X86EMUL_UNHANDLEABLE;
	}
}

static int em_jmp_far(struct x86_emulate_ctxt *ctxt)
{
	int rc;
	unsigned short sel;
	struct desc_struct new_desc;
	u8 cpl = ctxt->ops->cpl(ctxt);

	memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2);

	rc = __load_segment_descriptor(ctxt, sel, VCPU_SREG_CS, cpl,
				       X86_TRANSFER_CALL_JMP,
				       &new_desc);
	if (rc != X86EMUL_CONTINUE)
		return rc;

KVM: x86: Perform limit checks when assigning EIP
If branch (e.g., jmp, ret) causes limit violations, since the target IP >
limit, the #GP exception occurs before the branch. In other words, the RIP
pushed on the stack should be that of the branch and not that of the target.
To do so, we can call __linearize, with new EIP, which also saves us the code
which performs the canonical address checks. On the case of assigning an EIP >=
2^32 (when switching cs.l), we also safe, as __linearize will check the new EIP
does not exceed the limit and would trigger #GP(0) otherwise.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-11-19 23:43:11 +08:00
	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
	/* Error handling is not implemented. */
	if (rc != X86EMUL_CONTINUE)
		return X86EMUL_UNHANDLEABLE;

	return rc;
}

static int em_jmp_abs(struct x86_emulate_ctxt *ctxt)
{
	return assign_eip_near(ctxt, ctxt->src.val);
}

static int em_call_near_abs(struct x86_emulate_ctxt *ctxt)
{
	int rc;
	long int old_eip;

	old_eip = ctxt->_eip;
	rc = assign_eip_near(ctxt, ctxt->src.val);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	ctxt->src.val = old_eip;
	rc = em_push(ctxt);
	return rc;
}

static int em_cmpxchg8b(struct x86_emulate_ctxt *ctxt)
{
	u64 old = ctxt->dst.orig_val64;

	if (ctxt->dst.bytes == 16)
		return X86EMUL_UNHANDLEABLE;

	if (((u32) (old >> 0) != (u32) reg_read(ctxt, VCPU_REGS_RAX)) ||
	    ((u32) (old >> 32) != (u32) reg_read(ctxt, VCPU_REGS_RDX))) {
		*reg_write(ctxt, VCPU_REGS_RAX) = (u32) (old >> 0);
		*reg_write(ctxt, VCPU_REGS_RDX) = (u32) (old >> 32);
		ctxt->eflags &= ~X86_EFLAGS_ZF;
	} else {
		ctxt->dst.val64 = ((u64)reg_read(ctxt, VCPU_REGS_RCX) << 32) |
			(u32) reg_read(ctxt, VCPU_REGS_RBX);

		ctxt->eflags |= X86_EFLAGS_ZF;
	}
	return X86EMUL_CONTINUE;
}

static int em_ret(struct x86_emulate_ctxt *ctxt)
{
	int rc;
	unsigned long eip;

	rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	return assign_eip_near(ctxt, eip);
}

static int em_ret_far(struct x86_emulate_ctxt *ctxt)
{
	int rc;
	unsigned long eip, cs;
	int cpl = ctxt->ops->cpl(ctxt);
	struct desc_struct new_desc;

	rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	rc = emulate_pop(ctxt, &cs, ctxt->op_bytes);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	/* Outer-privilege level return is not implemented */
	if (ctxt->mode >= X86EMUL_MODE_PROT16 && (cs & 3) > cpl)
		return X86EMUL_UNHANDLEABLE;
	rc = __load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS, cpl,
				       X86_TRANSFER_RET,
				       &new_desc);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	rc = assign_eip_far(ctxt, eip, &new_desc);
	/* Error handling is not implemented. */
	if (rc != X86EMUL_CONTINUE)
		return X86EMUL_UNHANDLEABLE;

	return rc;
}

static int em_ret_far_imm(struct x86_emulate_ctxt *ctxt)
{
	int rc;

	rc = em_ret_far(ctxt);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	rsp_increment(ctxt, ctxt->src.val);
	return X86EMUL_CONTINUE;
}

static int em_cmpxchg(struct x86_emulate_ctxt *ctxt)
{
	/* Save real source value, then compare EAX against destination. */
	ctxt->dst.orig_val = ctxt->dst.val;
	ctxt->dst.val = reg_read(ctxt, VCPU_REGS_RAX);
	ctxt->src.orig_val = ctxt->src.val;
	ctxt->src.val = ctxt->dst.orig_val;
	fastop(ctxt, em_cmp);

	if (ctxt->eflags & X86_EFLAGS_ZF) {
		/* Success: write back to memory; no update of EAX */
		ctxt->src.type = OP_NONE;
		ctxt->dst.val = ctxt->src.orig_val;
	} else {
		/* Failure: write the value we saw to EAX. */
		ctxt->src.type = OP_REG;
		ctxt->src.addr.reg = reg_rmw(ctxt, VCPU_REGS_RAX);
		ctxt->src.val = ctxt->dst.orig_val;
		/* Create write-cycle to dest by writing the same value */
		ctxt->dst.val = ctxt->dst.orig_val;
	}
	return X86EMUL_CONTINUE;
}

static int em_lseg(struct x86_emulate_ctxt *ctxt)
{
	int seg = ctxt->src2.val;
	unsigned short sel;
	int rc;

	memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2);

	rc = load_segment_descriptor(ctxt, sel, seg);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	ctxt->dst.val = ctxt->src.val;
	return rc;
}

static int emulator_has_longmode(struct x86_emulate_ctxt *ctxt)
{
KVM: x86: Always use 32-bit SMRAM save state for 32-bit kernels
Invoking the 64-bit variation on a 32-bit kenrel will crash the guest,
trigger a WARN, and/or lead to a buffer overrun in the host, e.g.
rsm_load_state_64() writes r8-r15 unconditionally, but enum kvm_reg and
thus x86_emulate_ctxt._regs only define r8-r15 for CONFIG_X86_64.
KVM allows userspace to report long mode support via CPUID, even though
the guest is all but guaranteed to crash if it actually tries to enable
long mode. But, a pure 32-bit guest that is ignorant of long mode will
happily plod along.
SMM complicates things as 64-bit CPUs use a different SMRAM save state
area. KVM handles this correctly for 64-bit kernels, e.g. uses the
legacy save state map if userspace has hid long mode from the guest,
but doesn't fare well when userspace reports long mode support on a
32-bit host kernel (32-bit KVM doesn't support 64-bit guests).
Since the alternative is to crash the guest, e.g. by not loading state
or explicitly requesting shutdown, unconditionally use the legacy SMRAM
save state map for 32-bit KVM. If a guest has managed to get far enough
to handle SMIs when running under a weird/buggy userspace hypervisor,
then don't deliberately crash the guest since there are no downsides
(from KVM's perspective) to allow it to continue running.
Fixes: 660a5d517aaab ("KVM: x86: save/load state on SMM switch")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-04-02 23:10:48 +08:00
#ifdef CONFIG_X86_64
	return ctxt->ops->guest_has_long_mode(ctxt);
|
|
|
#else
|
|
|
|
return false;
|
|
|
|
#endif
|
2015-05-05 17:50:23 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static void rsm_set_desc_flags(struct desc_struct *desc, u32 flags)
{
	desc->g    = (flags >> 23) & 1;
	desc->d    = (flags >> 22) & 1;
	desc->l    = (flags >> 21) & 1;
	desc->avl  = (flags >> 20) & 1;
	desc->p    = (flags >> 15) & 1;
	desc->dpl  = (flags >> 13) & 3;
	desc->s    = (flags >> 12) & 1;
	desc->type = (flags >>  8) & 15;
}
static int rsm_load_seg_32(struct x86_emulate_ctxt *ctxt, const char *smstate,
			   int n)
{
	struct desc_struct desc;
	int offset;
	u16 selector;

	selector = GET_SMSTATE(u32, smstate, 0x7fa8 + n * 4);

	if (n < 3)
		offset = 0x7f84 + n * 12;
	else
		offset = 0x7f2c + (n - 3) * 12;

	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, offset + 8));
	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4));
	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, offset));
	ctxt->ops->set_segment(ctxt, selector, &desc, 0, n);
	return X86EMUL_CONTINUE;
}
#ifdef CONFIG_X86_64
static int rsm_load_seg_64(struct x86_emulate_ctxt *ctxt, const char *smstate,
			   int n)
{
	struct desc_struct desc;
	int offset;
	u16 selector;
	u32 base3;

	offset = 0x7e00 + n * 16;

	selector = GET_SMSTATE(u16, smstate, offset);
	rsm_set_desc_flags(&desc, GET_SMSTATE(u16, smstate, offset + 2) << 8);
	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, offset + 4));
	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, offset + 8));
	base3 = GET_SMSTATE(u32, smstate, offset + 12);

	ctxt->ops->set_segment(ctxt, selector, &desc, base3, n);
	return X86EMUL_CONTINUE;
}
#endif
static int rsm_enter_protected_mode(struct x86_emulate_ctxt *ctxt,
				    u64 cr0, u64 cr3, u64 cr4)
{
	int bad;
	u64 pcid;

	/* In order to later set CR4.PCIDE, CR3[11:0] must be zero. */
	pcid = 0;
	if (cr4 & X86_CR4_PCIDE) {
		pcid = cr3 & 0xfff;
		cr3 &= ~0xfff;
	}

	bad = ctxt->ops->set_cr(ctxt, 3, cr3);
	if (bad)
		return X86EMUL_UNHANDLEABLE;

	/*
	 * First enable PAE, long mode needs it before CR0.PG = 1 is set.
	 * Then enable protected mode.  However, PCID cannot be enabled
	 * if EFER.LMA=0, so set it separately.
	 */
	bad = ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
	if (bad)
		return X86EMUL_UNHANDLEABLE;

	bad = ctxt->ops->set_cr(ctxt, 0, cr0);
	if (bad)
		return X86EMUL_UNHANDLEABLE;

	if (cr4 & X86_CR4_PCIDE) {
		bad = ctxt->ops->set_cr(ctxt, 4, cr4);
		if (bad)
			return X86EMUL_UNHANDLEABLE;
		if (pcid) {
			bad = ctxt->ops->set_cr(ctxt, 3, cr3 | pcid);
			if (bad)
				return X86EMUL_UNHANDLEABLE;
		}
	}

	return X86EMUL_CONTINUE;
}
static int rsm_load_state_32(struct x86_emulate_ctxt *ctxt,
			     const char *smstate)
{
	struct desc_struct desc;
	struct desc_ptr dt;
	u16 selector;
	u32 val, cr0, cr3, cr4;
	int i;

	cr0 = GET_SMSTATE(u32, smstate, 0x7ffc);
	cr3 = GET_SMSTATE(u32, smstate, 0x7ff8);
	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED;
	ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0);

	for (i = 0; i < 8; i++)
		*reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4);

	val = GET_SMSTATE(u32, smstate, 0x7fcc);

	if (ctxt->ops->set_dr(ctxt, 6, val))
		return X86EMUL_UNHANDLEABLE;

	val = GET_SMSTATE(u32, smstate, 0x7fc8);

	if (ctxt->ops->set_dr(ctxt, 7, val))
		return X86EMUL_UNHANDLEABLE;

	selector = GET_SMSTATE(u32, smstate, 0x7fc4);
	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, 0x7f64));
	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f60));
	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f5c));
	ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_TR);

	selector = GET_SMSTATE(u32, smstate, 0x7fc0);
	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, 0x7f80));
	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7f7c));
	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7f78));
	ctxt->ops->set_segment(ctxt, selector, &desc, 0, VCPU_SREG_LDTR);

	dt.address = GET_SMSTATE(u32, smstate, 0x7f74);
	dt.size = GET_SMSTATE(u32, smstate, 0x7f70);
	ctxt->ops->set_gdt(ctxt, &dt);

	dt.address = GET_SMSTATE(u32, smstate, 0x7f58);
	dt.size = GET_SMSTATE(u32, smstate, 0x7f54);
	ctxt->ops->set_idt(ctxt, &dt);

	for (i = 0; i < 6; i++) {
		int r = rsm_load_seg_32(ctxt, smstate, i);
		if (r != X86EMUL_CONTINUE)
			return r;
	}

	cr4 = GET_SMSTATE(u32, smstate, 0x7f14);

	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7ef8));

	return rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
}
#ifdef CONFIG_X86_64
static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
			     const char *smstate)
{
	struct desc_struct desc;
	struct desc_ptr dt;
	u64 val, cr0, cr3, cr4;
	u32 base3;
	u16 selector;
	int i, r;

	for (i = 0; i < 16; i++)
		*reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8);

	ctxt->_eip   = GET_SMSTATE(u64, smstate, 0x7f78);
	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7f70) | X86_EFLAGS_FIXED;

	val = GET_SMSTATE(u64, smstate, 0x7f68);

	if (ctxt->ops->set_dr(ctxt, 6, val))
		return X86EMUL_UNHANDLEABLE;

	val = GET_SMSTATE(u64, smstate, 0x7f60);

	if (ctxt->ops->set_dr(ctxt, 7, val))
		return X86EMUL_UNHANDLEABLE;

	cr0 = GET_SMSTATE(u64, smstate, 0x7f58);
	cr3 = GET_SMSTATE(u64, smstate, 0x7f50);
	cr4 = GET_SMSTATE(u64, smstate, 0x7f48);
	ctxt->ops->set_smbase(ctxt, GET_SMSTATE(u32, smstate, 0x7f00));
	val = GET_SMSTATE(u64, smstate, 0x7ed0);

	if (ctxt->ops->set_msr(ctxt, MSR_EFER, val & ~EFER_LMA))
		return X86EMUL_UNHANDLEABLE;

	selector = GET_SMSTATE(u32, smstate, 0x7e90);
	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e92) << 8);
	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7e94));
	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, 0x7e98));
	base3 = GET_SMSTATE(u32, smstate, 0x7e9c);
	ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_TR);

	dt.size = GET_SMSTATE(u32, smstate, 0x7e84);
	dt.address = GET_SMSTATE(u64, smstate, 0x7e88);
	ctxt->ops->set_idt(ctxt, &dt);

	selector = GET_SMSTATE(u32, smstate, 0x7e70);
	rsm_set_desc_flags(&desc, GET_SMSTATE(u32, smstate, 0x7e72) << 8);
	set_desc_limit(&desc, GET_SMSTATE(u32, smstate, 0x7e74));
	set_desc_base(&desc,  GET_SMSTATE(u32, smstate, 0x7e78));
	base3 = GET_SMSTATE(u32, smstate, 0x7e7c);
	ctxt->ops->set_segment(ctxt, selector, &desc, base3, VCPU_SREG_LDTR);

	dt.size = GET_SMSTATE(u32, smstate, 0x7e64);
	dt.address = GET_SMSTATE(u64, smstate, 0x7e68);
	ctxt->ops->set_gdt(ctxt, &dt);

	r = rsm_enter_protected_mode(ctxt, cr0, cr3, cr4);
	if (r != X86EMUL_CONTINUE)
		return r;

	for (i = 0; i < 6; i++) {
		r = rsm_load_seg_64(ctxt, smstate, i);
		if (r != X86EMUL_CONTINUE)
			return r;
	}

	return X86EMUL_CONTINUE;
}
#endif
static int em_rsm(struct x86_emulate_ctxt *ctxt)
{
	unsigned long cr0, cr4, efer;
	char buf[512];
	u64 smbase;
	int ret;

KVM: x86: fix emulation of RSM and IRET instructions
On AMD, the effect of set_nmi_mask called by emulate_iret_real and em_rsm
on hflags is reverted later on in x86_emulate_instruction where hflags are
overwritten with ctxt->emul_flags (the kvm_set_hflags call). This manifests
as a hang when rebooting Windows VMs with QEMU, OVMF, and >1 vcpu.
Instead of trying to merge ctxt->emul_flags into vcpu->arch.hflags after
an instruction is emulated, this commit deletes emul_flags altogether and
makes the emulator access vcpu->arch.hflags using two new accessors. This
way all changes, on the emulator side as well as in functions called from
the emulator and accessing vcpu state with emul_to_vcpu, are preserved.
More details on the bug and its manifestation with Windows and OVMF:
It's a KVM bug in the interaction between SMI/SMM and NMI, specific to AMD.
I believe that the SMM part explains why we started seeing this only with
OVMF.
KVM masks and unmasks NMI when entering and leaving SMM. When KVM emulates
the RSM instruction in em_rsm, the set_nmi_mask call doesn't stick because
later on in x86_emulate_instruction we overwrite arch.hflags with
ctxt->emul_flags, effectively reverting the effect of the set_nmi_mask call.
The AMD-specific hflag of interest here is HF_NMI_MASK.
When rebooting the system, Windows sends an NMI IPI to all but the current
cpu to shut them down. Only after all of them are parked in HLT will the
initiating cpu finish the restart. If NMI is masked, other cpus never get
the memo and the initiating cpu spins forever, waiting for
hal!HalpInterruptProcessorsStarted to drop. That's the symptom we observe.
Fixes: a584539b24b8 ("KVM: x86: pass the whole hflags field to emulator and back")
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-04-25 22:42:44 +08:00
	if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_MASK) == 0)
		return emulate_ud(ctxt);

	smbase = ctxt->ops->get_smbase(ctxt);

	ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00, buf, sizeof(buf));
	if (ret != X86EMUL_CONTINUE)
		return X86EMUL_UNHANDLEABLE;

	if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_INSIDE_NMI_MASK) == 0)
		ctxt->ops->set_nmi_mask(ctxt, false);

	ctxt->ops->exiting_smm(ctxt);

	/*
	 * Get back to real mode, to prepare a safe state in which to load
	 * CR0/CR3/CR4/EFER.  It's all a bit more complicated if the vCPU
	 * supports long mode.
	 */
	if (emulator_has_longmode(ctxt)) {
		struct desc_struct cs_desc;

		/* Zero CR4.PCIDE before CR0.PG. */
		cr4 = ctxt->ops->get_cr(ctxt, 4);
		if (cr4 & X86_CR4_PCIDE)
			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);

		/* A 32-bit code segment is required to clear EFER.LMA. */
		memset(&cs_desc, 0, sizeof(cs_desc));
		cs_desc.type = 0xb;
		cs_desc.s = cs_desc.g = cs_desc.p = 1;
		ctxt->ops->set_segment(ctxt, 0, &cs_desc, 0, VCPU_SREG_CS);
	}

	/* For the 64-bit case, this will clear EFER.LMA. */
	cr0 = ctxt->ops->get_cr(ctxt, 0);
	if (cr0 & X86_CR0_PE)
		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));

	if (emulator_has_longmode(ctxt)) {
		/* Clear CR4.PAE before clearing EFER.LME. */
		cr4 = ctxt->ops->get_cr(ctxt, 4);
		if (cr4 & X86_CR4_PAE)
			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);

		/* And finally go back to 32-bit mode. */
		efer = 0;
		ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
	}

	/*
	 * Give pre_leave_smm() a chance to make ISA-specific changes to the
	 * vCPU state (e.g. enter guest mode) before loading state from the SMM
	 * state-save area.
	 */
	if (ctxt->ops->pre_leave_smm(ctxt, buf))
KVM: x86: Emulate triple fault shutdown if RSM emulation fails
Use the recently introduced KVM_REQ_TRIPLE_FAULT to properly emulate
shutdown if RSM from SMM fails.
Note, entering shutdown after clearing the SMM flag and restoring NMI
blocking is architecturally correct with respect to AMD's APM, which KVM
also uses for SMRAM layout and RSM NMI blocking behavior. The APM says:
An RSM causes a processor shutdown if an invalid-state condition is
found in the SMRAM state-save area. Only an external reset, external
processor-initialization, or non-maskable external interrupt (NMI) can
cause the processor to leave the shutdown state.
Of note is processor-initialization (INIT) as a valid shutdown wake
event, as INIT is blocked by SMM, implying that entering shutdown also
forces the CPU out of SMM.
For recent Intel CPUs, restoring NMI blocking is technically wrong, but
so is restoring NMI blocking in the first place, and Intel's RSM
"architecture" is such a mess that just about anything is allowed and can
be justified as micro-architectural behavior.
Per the SDM:
On Pentium 4 and later processors, shutdown will inhibit INTR and A20M
but will not change any of the other inhibits. On these processors,
NMIs will be inhibited if no action is taken in the SMI handler to
uninhibit them (see Section 34.8).
where Section 34.8 says:
When the processor enters SMM while executing an NMI handler, the
processor saves the SMRAM state save map but does not save the
attribute to keep NMI interrupts disabled. Potentially, an NMI could be
latched (while in SMM or upon exit) and serviced upon exit of SMM even
though the previous NMI handler has still not completed.
I.e. RSM unconditionally unblocks NMI, but shutdown on RSM does not,
which is in direct contradiction of KVM's behavior. But, as mentioned
above, KVM follows AMD architecture and restores NMI blocking on RSM, so
that micro-architectural detail is already lost.
And for Pentium era CPUs, SMI# can break shutdown, meaning that at least
some Intel CPUs fully leave SMM when entering shutdown:
In the shutdown state, Intel processors stop executing instructions
until a RESET#, INIT# or NMI# is asserted. While Pentium family
processors recognize the SMI# signal in shutdown state, P6 family and
Intel486 processors do not.
In other words, the fact that Intel CPUs have implemented the two
extremes gives KVM carte blanche when it comes to honoring Intel's
architecture for handling shutdown during RSM.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210609185619.992058-3-seanjc@google.com>
[Return X86EMUL_CONTINUE after triple fault. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-06-10 02:56:12 +08:00
		goto emulate_shutdown;

#ifdef CONFIG_X86_64
	if (emulator_has_longmode(ctxt))
		ret = rsm_load_state_64(ctxt, buf);
	else
#endif
		ret = rsm_load_state_32(ctxt, buf);
	if (ret != X86EMUL_CONTINUE)
		goto emulate_shutdown;

	/*
	 * Note, the ctxt->ops callbacks are responsible for handling side
	 * effects when writing MSRs and CRs, e.g. MMU context resets, CPUID
	 * runtime updates, etc...  If that changes, e.g. this flow is moved
	 * out of the emulator to make it look more like enter_smm(), then
	 * those side effects need to be explicitly handled for both success
	 * and shutdown.
	 */
	return X86EMUL_CONTINUE;
2021-06-10 02:56:12 +08:00
|
|
|
|
|
|
|
emulate_shutdown:
|
|
|
|
ctxt->ops->triple_fault(ctxt);
|
|
|
|
return X86EMUL_CONTINUE;
|
2015-05-07 17:36:11 +08:00
|
|
|
}
|
|
|
|
|
2011-05-15 00:00:52 +08:00
|
|
|
static void
|
2009-06-18 18:56:00 +08:00
|
|
|
setup_syscalls_segments(struct x86_emulate_ctxt *ctxt,
|
2011-05-15 00:00:52 +08:00
|
|
|
struct desc_struct *cs, struct desc_struct *ss)
|
2009-06-18 18:56:00 +08:00
|
|
|
{
|
|
|
|
cs->l = 0; /* will be adjusted later */
|
2010-04-29 00:15:30 +08:00
|
|
|
set_desc_base(cs, 0); /* flat segment */
|
2009-06-18 18:56:00 +08:00
|
|
|
cs->g = 1; /* 4kb granularity */
|
2010-04-29 00:15:30 +08:00
|
|
|
set_desc_limit(cs, 0xfffff); /* 4GB limit */
|
2009-06-18 18:56:00 +08:00
|
|
|
cs->type = 0x0b; /* Read, Execute, Accessed */
|
|
|
|
cs->s = 1;
|
|
|
|
cs->dpl = 0; /* will be adjusted later */
|
2010-04-29 00:15:30 +08:00
|
|
|
cs->p = 1;
|
|
|
|
cs->d = 1;
|
2012-07-25 20:49:42 +08:00
|
|
|
cs->avl = 0;
|
2009-06-18 18:56:00 +08:00
|
|
|
|
2010-04-29 00:15:30 +08:00
|
|
|
set_desc_base(ss, 0); /* flat segment */
|
|
|
|
set_desc_limit(ss, 0xfffff); /* 4GB limit */
|
2009-06-18 18:56:00 +08:00
|
|
|
ss->g = 1; /* 4kb granularity */
|
|
|
|
ss->s = 1;
|
|
|
|
ss->type = 0x03; /* Read/Write, Accessed */
|
2010-04-29 00:15:30 +08:00
|
|
|
ss->d = 1; /* 32bit stack segment */
|
2009-06-18 18:56:00 +08:00
|
|
|
ss->dpl = 0;
|
2010-04-29 00:15:30 +08:00
|
|
|
ss->p = 1;
|
2012-07-25 20:49:42 +08:00
|
|
|
ss->l = 0;
|
|
|
|
ss->avl = 0;
|
2009-06-18 18:56:00 +08:00
|
|
|
}
|
|
|
|
|
2012-02-01 18:23:21 +08:00
|
|
|
static bool vendor_intel(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
u32 eax, ebx, ecx, edx;
|
|
|
|
|
|
|
|
eax = ecx = 0;
|
2020-03-05 09:34:37 +08:00
|
|
|
ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx, true);
|
2020-03-05 09:34:32 +08:00
|
|
|
return is_guest_vendor_intel(ebx, ecx, edx);
|
2012-02-01 18:23:21 +08:00
|
|
|
}
|
|
|
|
|
2012-01-12 23:43:04 +08:00
|
|
|
static bool em_syscall_is_enabled(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2012-08-30 07:30:16 +08:00
|
|
|
const struct x86_emulate_ops *ops = ctxt->ops;
|
2012-01-12 23:43:04 +08:00
|
|
|
u32 eax, ebx, ecx, edx;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* syscall should always be enabled in long mode, so the check only
|
|
|
|
* becomes vendor specific (via CPUID) if other modes are active...
|
|
|
|
*/
|
|
|
|
if (ctxt->mode == X86EMUL_MODE_PROT64)
|
|
|
|
return true;
|
|
|
|
|
|
|
|
eax = 0x00000000;
|
|
|
|
ecx = 0x00000000;
|
2020-03-05 09:34:37 +08:00
|
|
|
ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx, true);
|
2012-06-07 19:10:16 +08:00
|
|
|
/*
|
2020-03-05 09:34:32 +08:00
|
|
|
* remark: Intel CPUs only support "syscall" in 64-bit long mode. Also, a
|
|
|
|
* 64-bit guest with a 32-bit compat app running will #UD! While this
|
|
|
|
* behaviour can be fixed up (by emulating the AMD response), AMD
|
|
|
|
* CPUs can't be made to behave like Intel's.
|
2012-06-07 19:10:16 +08:00
|
|
|
*/
|
2020-03-05 09:34:32 +08:00
|
|
|
if (is_guest_vendor_intel(ebx, ecx, edx))
|
2012-06-07 19:10:16 +08:00
|
|
|
return false;
|
|
|
|
|
2020-03-05 09:34:32 +08:00
|
|
|
if (is_guest_vendor_amd(ebx, ecx, edx) ||
|
|
|
|
is_guest_vendor_hygon(ebx, ecx, edx))
|
2018-09-23 17:36:31 +08:00
|
|
|
return true;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* default: (not Intel, not AMD, not Hygon), apply Intel's
|
|
|
|
* stricter rules...
|
|
|
|
*/
|
2012-01-12 23:43:04 +08:00
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2011-05-29 20:55:10 +08:00
|
|
|
static int em_syscall(struct x86_emulate_ctxt *ctxt)
|
2009-06-18 18:56:00 +08:00
|
|
|
{
|
2012-08-30 07:30:16 +08:00
|
|
|
const struct x86_emulate_ops *ops = ctxt->ops;
|
2010-04-29 00:15:30 +08:00
|
|
|
struct desc_struct cs, ss;
|
2009-06-18 18:56:00 +08:00
|
|
|
u64 msr_data;
|
2010-04-29 00:15:30 +08:00
|
|
|
u16 cs_sel, ss_sel;
|
2011-04-20 20:21:35 +08:00
|
|
|
u64 efer = 0;
|
2009-06-18 18:56:00 +08:00
|
|
|
|
|
|
|
/* syscall is not available in real mode */
|
2010-03-18 21:20:12 +08:00
|
|
|
if (ctxt->mode == X86EMUL_MODE_REAL ||
|
2010-11-22 23:53:25 +08:00
|
|
|
ctxt->mode == X86EMUL_MODE_VM86)
|
|
|
|
return emulate_ud(ctxt);
|
2009-06-18 18:56:00 +08:00
|
|
|
|
2012-01-12 23:43:04 +08:00
|
|
|
if (!(em_syscall_is_enabled(ctxt)))
|
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
2011-04-20 20:21:35 +08:00
|
|
|
ops->get_msr(ctxt, MSR_EFER, &efer);
|
2012-01-12 23:43:04 +08:00
|
|
|
if (!(efer & EFER_SCE))
|
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
2019-11-09 16:58:54 +08:00
|
|
|
setup_syscalls_segments(ctxt, &cs, &ss);
|
2011-04-20 18:37:53 +08:00
|
|
|
ops->get_msr(ctxt, MSR_STAR, &msr_data);
|
2009-06-18 18:56:00 +08:00
|
|
|
msr_data >>= 32;
|
2010-04-29 00:15:30 +08:00
|
|
|
cs_sel = (u16)(msr_data & 0xfffc);
|
|
|
|
ss_sel = (u16)(msr_data + 8);
|
2009-06-18 18:56:00 +08:00
|
|
|
|
2011-04-20 20:21:35 +08:00
|
|
|
if (efer & EFER_LMA) {
|
2010-04-29 00:15:30 +08:00
|
|
|
cs.d = 0;
|
2009-06-18 18:56:00 +08:00
|
|
|
cs.l = 1;
|
|
|
|
}
|
2011-04-27 18:20:30 +08:00
|
|
|
ops->set_segment(ctxt, cs_sel, &cs, 0, VCPU_SREG_CS);
|
|
|
|
ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
|
2009-06-18 18:56:00 +08:00
|
|
|
|
2012-08-28 04:46:17 +08:00
|
|
|
*reg_write(ctxt, VCPU_REGS_RCX) = ctxt->_eip;
|
2011-04-20 20:21:35 +08:00
|
|
|
if (efer & EFER_LMA) {
|
2009-06-18 18:56:00 +08:00
|
|
|
#ifdef CONFIG_X86_64
|
2014-07-21 19:37:30 +08:00
|
|
|
*reg_write(ctxt, VCPU_REGS_R11) = ctxt->eflags;
|
2009-06-18 18:56:00 +08:00
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
ops->get_msr(ctxt,
|
2010-04-29 00:15:28 +08:00
|
|
|
ctxt->mode == X86EMUL_MODE_PROT64 ?
|
|
|
|
MSR_LSTAR : MSR_CSTAR, &msr_data);
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->_eip = msr_data;
|
2009-06-18 18:56:00 +08:00
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
ops->get_msr(ctxt, MSR_SYSCALL_MASK, &msr_data);
|
2014-07-21 19:37:30 +08:00
|
|
|
ctxt->eflags &= ~msr_data;
|
2015-04-08 14:08:14 +08:00
|
|
|
ctxt->eflags |= X86_EFLAGS_FIXED;
|
2009-06-18 18:56:00 +08:00
|
|
|
#endif
|
|
|
|
} else {
|
|
|
|
/* legacy mode */
|
2011-04-20 18:37:53 +08:00
|
|
|
ops->get_msr(ctxt, MSR_STAR, &msr_data);
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->_eip = (u32)msr_data;
|
2009-06-18 18:56:00 +08:00
|
|
|
|
2015-03-29 21:33:03 +08:00
|
|
|
ctxt->eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IF);
|
2009-06-18 18:56:00 +08:00
|
|
|
}
|
|
|
|
|
2017-06-07 21:13:14 +08:00
|
|
|
ctxt->tf = (ctxt->eflags & X86_EFLAGS_TF) != 0;
|
2010-02-18 18:15:02 +08:00
|
|
|
return X86EMUL_CONTINUE;
|
2009-06-18 18:56:00 +08:00
|
|
|
}
|
|
|
|
|
2011-05-29 20:55:10 +08:00
|
|
|
static int em_sysenter(struct x86_emulate_ctxt *ctxt)
|
2009-06-18 18:56:01 +08:00
|
|
|
{
|
2012-08-30 07:30:16 +08:00
|
|
|
const struct x86_emulate_ops *ops = ctxt->ops;
|
2010-04-29 00:15:30 +08:00
|
|
|
struct desc_struct cs, ss;
|
2009-06-18 18:56:01 +08:00
|
|
|
u64 msr_data;
|
2010-04-29 00:15:30 +08:00
|
|
|
u16 cs_sel, ss_sel;
|
2011-04-20 20:21:35 +08:00
|
|
|
u64 efer = 0;
|
2009-06-18 18:56:01 +08:00
|
|
|
|
2011-05-15 00:00:52 +08:00
|
|
|
ops->get_msr(ctxt, MSR_EFER, &efer);
|
2010-02-10 20:21:31 +08:00
|
|
|
/* inject #GP if in real mode */
|
2010-11-22 23:53:25 +08:00
|
|
|
if (ctxt->mode == X86EMUL_MODE_REAL)
|
|
|
|
return emulate_gp(ctxt, 0);
|
2009-06-18 18:56:01 +08:00
|
|
|
|
2012-02-01 18:23:21 +08:00
|
|
|
/*
|
|
|
|
* Not recognized on AMD in compat mode (but is recognized in legacy
|
|
|
|
* mode).
|
|
|
|
*/
|
2015-01-02 05:11:11 +08:00
|
|
|
if ((ctxt->mode != X86EMUL_MODE_PROT64) && (efer & EFER_LMA)
|
2012-02-01 18:23:21 +08:00
|
|
|
&& !vendor_intel(ctxt))
|
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
2014-11-02 17:55:01 +08:00
|
|
|
/* sysenter/sysexit have not been tested in 64bit mode. */
|
2010-11-22 23:53:25 +08:00
|
|
|
if (ctxt->mode == X86EMUL_MODE_PROT64)
|
2014-11-02 17:55:01 +08:00
|
|
|
return X86EMUL_UNHANDLEABLE;
|
2009-06-18 18:56:01 +08:00
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data);
|
2015-01-02 05:11:11 +08:00
|
|
|
if ((msr_data & 0xfffc) == 0x0)
|
|
|
|
return emulate_gp(ctxt, 0);
|
2009-06-18 18:56:01 +08:00
|
|
|
|
2019-11-09 16:58:54 +08:00
|
|
|
setup_syscalls_segments(ctxt, &cs, &ss);
|
2015-03-29 21:33:03 +08:00
|
|
|
ctxt->eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IF);
|
2015-03-29 21:33:04 +08:00
|
|
|
cs_sel = (u16)msr_data & ~SEGMENT_RPL_MASK;
|
2010-04-29 00:15:30 +08:00
|
|
|
ss_sel = cs_sel + 8;
|
2015-01-02 05:11:11 +08:00
|
|
|
if (efer & EFER_LMA) {
|
2010-04-29 00:15:30 +08:00
|
|
|
cs.d = 0;
|
2009-06-18 18:56:01 +08:00
|
|
|
cs.l = 1;
|
|
|
|
}
|
|
|
|
|
2011-04-27 18:20:30 +08:00
|
|
|
ops->set_segment(ctxt, cs_sel, &cs, 0, VCPU_SREG_CS);
|
|
|
|
ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
|
2009-06-18 18:56:01 +08:00
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
ops->get_msr(ctxt, MSR_IA32_SYSENTER_EIP, &msr_data);
|
2015-01-02 05:11:11 +08:00
|
|
|
ctxt->_eip = (efer & EFER_LMA) ? msr_data : (u32)msr_data;
|
2009-06-18 18:56:01 +08:00
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
ops->get_msr(ctxt, MSR_IA32_SYSENTER_ESP, &msr_data);
|
2015-01-02 05:11:11 +08:00
|
|
|
*reg_write(ctxt, VCPU_REGS_RSP) = (efer & EFER_LMA) ? msr_data :
|
|
|
|
(u32)msr_data;
|
2021-02-03 00:55:46 +08:00
|
|
|
if (efer & EFER_LMA)
|
|
|
|
ctxt->mode = X86EMUL_MODE_PROT64;
|
2009-06-18 18:56:01 +08:00
|
|
|
|
2010-02-18 18:15:02 +08:00
|
|
|
return X86EMUL_CONTINUE;
|
2009-06-18 18:56:01 +08:00
|
|
|
}
|
|
|
|
|
2011-05-29 20:55:10 +08:00
|
|
|
static int em_sysexit(struct x86_emulate_ctxt *ctxt)
|
2009-06-18 18:56:02 +08:00
|
|
|
{
|
2012-08-30 07:30:16 +08:00
|
|
|
const struct x86_emulate_ops *ops = ctxt->ops;
|
2010-04-29 00:15:30 +08:00
|
|
|
struct desc_struct cs, ss;
|
2014-09-19 03:39:38 +08:00
|
|
|
u64 msr_data, rcx, rdx;
|
2009-06-18 18:56:02 +08:00
|
|
|
int usermode;
|
2011-05-15 23:25:10 +08:00
|
|
|
u16 cs_sel = 0, ss_sel = 0;
|
2009-06-18 18:56:02 +08:00
|
|
|
|
2010-02-10 20:21:31 +08:00
|
|
|
/* inject #GP if in real mode or Virtual 8086 mode */
|
|
|
|
if (ctxt->mode == X86EMUL_MODE_REAL ||
|
2010-11-22 23:53:25 +08:00
|
|
|
ctxt->mode == X86EMUL_MODE_VM86)
|
|
|
|
return emulate_gp(ctxt, 0);
|
2009-06-18 18:56:02 +08:00
|
|
|
|
2011-05-15 00:00:52 +08:00
|
|
|
setup_syscalls_segments(ctxt, &cs, &ss);
|
2009-06-18 18:56:02 +08:00
|
|
|
|
2011-06-01 20:34:25 +08:00
|
|
|
if ((ctxt->rex_prefix & 0x8) != 0x0)
|
2009-06-18 18:56:02 +08:00
|
|
|
usermode = X86EMUL_MODE_PROT64;
|
|
|
|
else
|
|
|
|
usermode = X86EMUL_MODE_PROT32;
|
|
|
|
|
2014-09-19 03:39:38 +08:00
|
|
|
rcx = reg_read(ctxt, VCPU_REGS_RCX);
|
|
|
|
rdx = reg_read(ctxt, VCPU_REGS_RDX);
|
|
|
|
|
2009-06-18 18:56:02 +08:00
|
|
|
cs.dpl = 3;
|
|
|
|
ss.dpl = 3;
|
2011-04-20 18:37:53 +08:00
|
|
|
ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data);
|
2009-06-18 18:56:02 +08:00
|
|
|
switch (usermode) {
|
|
|
|
case X86EMUL_MODE_PROT32:
|
2010-04-29 00:15:30 +08:00
|
|
|
cs_sel = (u16)(msr_data + 16);
|
2010-11-22 23:53:25 +08:00
|
|
|
if ((msr_data & 0xfffc) == 0x0)
|
|
|
|
return emulate_gp(ctxt, 0);
|
2010-04-29 00:15:30 +08:00
|
|
|
ss_sel = (u16)(msr_data + 24);
|
2014-09-19 03:39:45 +08:00
|
|
|
rcx = (u32)rcx;
|
|
|
|
rdx = (u32)rdx;
|
2009-06-18 18:56:02 +08:00
|
|
|
break;
|
|
|
|
case X86EMUL_MODE_PROT64:
|
2010-04-29 00:15:30 +08:00
|
|
|
cs_sel = (u16)(msr_data + 32);
|
2010-11-22 23:53:25 +08:00
|
|
|
if (msr_data == 0x0)
|
|
|
|
return emulate_gp(ctxt, 0);
|
2010-04-29 00:15:30 +08:00
|
|
|
ss_sel = cs_sel + 8;
|
|
|
|
cs.d = 0;
|
2009-06-18 18:56:02 +08:00
|
|
|
cs.l = 1;
|
2017-08-24 20:27:56 +08:00
|
|
|
if (emul_is_noncanonical_address(rcx, ctxt) ||
|
|
|
|
emul_is_noncanonical_address(rdx, ctxt))
|
2014-09-19 03:39:38 +08:00
|
|
|
return emulate_gp(ctxt, 0);
|
2009-06-18 18:56:02 +08:00
|
|
|
break;
|
|
|
|
}
|
2015-03-29 21:33:04 +08:00
|
|
|
cs_sel |= SEGMENT_RPL_MASK;
|
|
|
|
ss_sel |= SEGMENT_RPL_MASK;
|
2009-06-18 18:56:02 +08:00
|
|
|
|
2011-04-27 18:20:30 +08:00
|
|
|
ops->set_segment(ctxt, cs_sel, &cs, 0, VCPU_SREG_CS);
|
|
|
|
ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
|
2009-06-18 18:56:02 +08:00
|
|
|
|
2014-09-19 03:39:38 +08:00
|
|
|
ctxt->_eip = rdx;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RSP) = rcx;
|
2009-06-18 18:56:02 +08:00
|
|
|
|
2010-02-18 18:15:02 +08:00
|
|
|
return X86EMUL_CONTINUE;
|
2009-06-18 18:56:02 +08:00
|
|
|
}
|
|
|
|
|
2011-05-15 00:00:52 +08:00
|
|
|
static bool emulator_bad_iopl(struct x86_emulate_ctxt *ctxt)
|
2010-02-10 20:21:33 +08:00
|
|
|
{
|
|
|
|
int iopl;
|
|
|
|
if (ctxt->mode == X86EMUL_MODE_REAL)
|
|
|
|
return false;
|
|
|
|
if (ctxt->mode == X86EMUL_MODE_VM86)
|
|
|
|
return true;
|
2015-03-29 21:33:03 +08:00
|
|
|
iopl = (ctxt->eflags & X86_EFLAGS_IOPL) >> X86_EFLAGS_IOPL_BIT;
|
2011-05-15 00:00:52 +08:00
|
|
|
return ctxt->ops->cpl(ctxt) > iopl;
|
2010-02-10 20:21:33 +08:00
|
|
|
}
|
|
|
|
|
2018-03-12 19:12:48 +08:00
|
|
|
#define VMWARE_PORT_VMPORT (0x5658)
|
|
|
|
#define VMWARE_PORT_VMRPC (0x5659)
|
|
|
|
|
2010-02-10 20:21:33 +08:00
|
|
|
static bool emulator_io_port_access_allowed(struct x86_emulate_ctxt *ctxt,
|
|
|
|
u16 port, u16 len)
|
|
|
|
{
|
2012-08-30 07:30:16 +08:00
|
|
|
const struct x86_emulate_ops *ops = ctxt->ops;
|
2010-04-29 00:15:30 +08:00
|
|
|
struct desc_struct tr_seg;
|
2011-03-07 20:55:06 +08:00
|
|
|
u32 base3;
|
2010-02-10 20:21:33 +08:00
|
|
|
int r;
|
2011-04-27 18:20:30 +08:00
|
|
|
u16 tr, io_bitmap_ptr, perm, bit_idx = port & 0x7;
|
2010-02-10 20:21:33 +08:00
|
|
|
unsigned mask = (1 << len) - 1;
|
2011-03-07 20:55:06 +08:00
|
|
|
unsigned long base;
|
2010-02-10 20:21:33 +08:00
|
|
|
|
2018-03-12 19:12:48 +08:00
|
|
|
/*
|
|
|
|
* VMware allows access to these ports even if denied
|
|
|
|
* by TSS I/O permission bitmap. Mimic behavior.
|
|
|
|
*/
|
|
|
|
if (enable_vmware_backdoor &&
|
|
|
|
((port == VMWARE_PORT_VMPORT) || (port == VMWARE_PORT_VMRPC)))
|
|
|
|
return true;
|
|
|
|
|
2011-04-27 18:20:30 +08:00
|
|
|
ops->get_segment(ctxt, &tr, &tr_seg, &base3, VCPU_SREG_TR);
|
2010-04-29 00:15:30 +08:00
|
|
|
if (!tr_seg.p)
|
2010-02-10 20:21:33 +08:00
|
|
|
return false;
|
2010-04-29 00:15:30 +08:00
|
|
|
if (desc_limit_scaled(&tr_seg) < 103)
|
2010-02-10 20:21:33 +08:00
|
|
|
return false;
|
2011-03-07 20:55:06 +08:00
|
|
|
base = get_desc_base(&tr_seg);
|
|
|
|
#ifdef CONFIG_X86_64
|
|
|
|
base |= ((u64)base3) << 32;
|
|
|
|
#endif
|
2018-06-06 23:38:09 +08:00
|
|
|
r = ops->read_std(ctxt, base + 102, &io_bitmap_ptr, 2, NULL, true);
|
2010-02-10 20:21:33 +08:00
|
|
|
if (r != X86EMUL_CONTINUE)
|
|
|
|
return false;
|
2010-04-29 00:15:30 +08:00
|
|
|
if (io_bitmap_ptr + port/8 > desc_limit_scaled(&tr_seg))
|
2010-02-10 20:21:33 +08:00
|
|
|
return false;
|
2018-06-06 23:38:09 +08:00
|
|
|
r = ops->read_std(ctxt, base + io_bitmap_ptr + port/8, &perm, 2, NULL, true);
|
2010-02-10 20:21:33 +08:00
|
|
|
if (r != X86EMUL_CONTINUE)
|
|
|
|
return false;
|
|
|
|
if ((perm >> bit_idx) & mask)
|
|
|
|
return false;
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool emulator_io_permited(struct x86_emulate_ctxt *ctxt,
|
|
|
|
u16 port, u16 len)
|
|
|
|
{
|
2010-08-02 17:47:51 +08:00
|
|
|
if (ctxt->perm_ok)
|
|
|
|
return true;
|
|
|
|
|
2011-05-15 00:00:52 +08:00
|
|
|
if (emulator_bad_iopl(ctxt))
|
|
|
|
if (!emulator_io_port_access_allowed(ctxt, port, len))
|
2010-02-10 20:21:33 +08:00
|
|
|
return false;
|
2010-08-02 17:47:51 +08:00
|
|
|
|
|
|
|
ctxt->perm_ok = true;
|
|
|
|
|
2010-02-10 20:21:33 +08:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
KVM: x86: Fix zero iterations REP-string
When a REP-string is executed in 64-bit mode with an address-size prefix,
ECX/EDI/ESI are used as counter and pointers. When ECX is initially zero, Intel
CPUs clear the high 32-bits of RCX, and recent Intel CPUs update the high bits
of the pointers in MOVS/STOS. This behavior is specific to Intel according to
a few experiments.
As one may guess, this is an undocumented behavior. Yet, it is observable in
the guest, since at least VMX traps REP-INS/OUTS even when ECX=0. Note that
VMware appears to get it right. The behavior can be observed using the
following code:
#include <stdio.h>
#define LOW_MASK (0xffffffff00000000ull)
#define ALL_MASK (0xffffffffffffffffull)
#define TEST(opcode) \
do { \
asm volatile(".byte 0xf2 \n\t .byte 0x67 \n\t .byte " opcode "\n\t" \
: "=S"(s), "=c"(c), "=D"(d) \
: "S"(ALL_MASK), "c"(LOW_MASK), "D"(ALL_MASK)); \
printf("opcode %s rcx=%llx rsi=%llx rdi=%llx\n", \
opcode, c, s, d); \
} while(0)
void main()
{
unsigned long long s, d, c;
iopl(3);
TEST("0x6c");
TEST("0x6d");
TEST("0x6e");
TEST("0x6f");
TEST("0xa4");
TEST("0xa5");
TEST("0xa6");
TEST("0xa7");
TEST("0xaa");
TEST("0xab");
TEST("0xae");
TEST("0xaf");
}
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-28 18:06:01 +08:00
|
|
|
static void string_registers_quirk(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* Intel CPUs mask the counter and pointers in a rather strange
|
|
|
|
* manner when ECX is zero, due to REP-string optimizations.
|
|
|
|
*/
|
|
|
|
#ifdef CONFIG_X86_64
|
|
|
|
if (ctxt->ad_bytes != 4 || !vendor_intel(ctxt))
|
|
|
|
return;
|
|
|
|
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RCX) = 0;
|
|
|
|
|
|
|
|
switch (ctxt->b) {
|
|
|
|
case 0xa4: /* movsb */
|
|
|
|
case 0xa5: /* movsd/w */
|
|
|
|
*reg_rmw(ctxt, VCPU_REGS_RSI) &= (u32)-1;
|
2020-08-24 06:36:59 +08:00
|
|
|
fallthrough;
|
|
|
|
case 0xaa: /* stosb */
|
|
|
|
case 0xab: /* stosd/w */
|
|
|
|
*reg_rmw(ctxt, VCPU_REGS_RDI) &= (u32)-1;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
}
|
|
|
|
|
2010-03-18 21:20:17 +08:00
|
|
|
static void save_state_to_tss16(struct x86_emulate_ctxt *ctxt,
|
|
|
|
struct tss_segment_16 *tss)
|
|
|
|
{
|
2011-06-01 20:34:25 +08:00
|
|
|
tss->ip = ctxt->_eip;
|
2010-03-18 21:20:17 +08:00
|
|
|
tss->flag = ctxt->eflags;
|
2012-08-28 04:46:17 +08:00
|
|
|
tss->ax = reg_read(ctxt, VCPU_REGS_RAX);
|
|
|
|
tss->cx = reg_read(ctxt, VCPU_REGS_RCX);
|
|
|
|
tss->dx = reg_read(ctxt, VCPU_REGS_RDX);
|
|
|
|
tss->bx = reg_read(ctxt, VCPU_REGS_RBX);
|
|
|
|
tss->sp = reg_read(ctxt, VCPU_REGS_RSP);
|
|
|
|
tss->bp = reg_read(ctxt, VCPU_REGS_RBP);
|
|
|
|
tss->si = reg_read(ctxt, VCPU_REGS_RSI);
|
|
|
|
tss->di = reg_read(ctxt, VCPU_REGS_RDI);
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2011-04-27 18:20:30 +08:00
|
|
|
tss->es = get_segment_selector(ctxt, VCPU_SREG_ES);
|
|
|
|
tss->cs = get_segment_selector(ctxt, VCPU_SREG_CS);
|
|
|
|
tss->ss = get_segment_selector(ctxt, VCPU_SREG_SS);
|
|
|
|
tss->ds = get_segment_selector(ctxt, VCPU_SREG_DS);
|
|
|
|
tss->ldt = get_segment_selector(ctxt, VCPU_SREG_LDTR);
|
2010-03-18 21:20:17 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static int load_state_from_tss16(struct x86_emulate_ctxt *ctxt,
|
|
|
|
struct tss_segment_16 *tss)
|
|
|
|
{
|
|
|
|
int ret;
|
2014-05-15 23:56:57 +08:00
|
|
|
u8 cpl;
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->_eip = tss->ip;
|
2010-03-18 21:20:17 +08:00
|
|
|
ctxt->eflags = tss->flag | 2;
|
2012-08-28 04:46:17 +08:00
|
|
|
*reg_write(ctxt, VCPU_REGS_RAX) = tss->ax;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RCX) = tss->cx;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RDX) = tss->dx;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RBX) = tss->bx;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RSP) = tss->sp;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RBP) = tss->bp;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RSI) = tss->si;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RDI) = tss->di;
|
2010-03-18 21:20:17 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* SDM says that segment selectors are loaded before segment
|
|
|
|
* descriptors
|
|
|
|
*/
|
2011-04-27 18:20:30 +08:00
|
|
|
set_segment_selector(ctxt, tss->ldt, VCPU_SREG_LDTR);
|
|
|
|
set_segment_selector(ctxt, tss->es, VCPU_SREG_ES);
|
|
|
|
set_segment_selector(ctxt, tss->cs, VCPU_SREG_CS);
|
|
|
|
set_segment_selector(ctxt, tss->ss, VCPU_SREG_SS);
|
|
|
|
set_segment_selector(ctxt, tss->ds, VCPU_SREG_DS);
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2014-05-15 23:56:57 +08:00
|
|
|
cpl = tss->cs & 3;
|
|
|
|
|
2010-03-18 21:20:17 +08:00
|
|
|
/*
|
2012-06-28 15:19:51 +08:00
|
|
|
* Now load the segment descriptors. If a fault happens at this stage,
|
2010-03-18 21:20:17 +08:00
|
|
|
* it is handled in the context of the new task
|
|
|
|
*/
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->ldt, VCPU_SREG_LDTR, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->es, VCPU_SREG_ES, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->ds, VCPU_SREG_DS, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int task_switch_16(struct x86_emulate_ctxt *ctxt,
|
|
|
|
u16 tss_selector, u16 old_tss_sel,
|
|
|
|
ulong old_tss_base, struct desc_struct *new_desc)
|
|
|
|
{
|
|
|
|
struct tss_segment_16 tss_seg;
|
|
|
|
int ret;
|
2010-11-22 23:53:22 +08:00
|
|
|
u32 new_tss_base = get_desc_base(new_desc);
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2018-10-28 20:58:28 +08:00
|
|
|
ret = linear_read_system(ctxt, old_tss_base, &tss_seg, sizeof(tss_seg));
|
2010-11-22 23:53:24 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
2010-03-18 21:20:17 +08:00
|
|
|
return ret;
|
|
|
|
|
2011-05-15 00:00:52 +08:00
|
|
|
save_state_to_tss16(ctxt, &tss_seg);
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2018-10-28 20:58:28 +08:00
|
|
|
ret = linear_write_system(ctxt, old_tss_base, &tss_seg, sizeof(tss_seg));
|
2010-11-22 23:53:24 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
2010-03-18 21:20:17 +08:00
|
|
|
return ret;
|
|
|
|
|
2018-10-28 20:58:28 +08:00
|
|
|
ret = linear_read_system(ctxt, new_tss_base, &tss_seg, sizeof(tss_seg));
|
2010-11-22 23:53:24 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
2010-03-18 21:20:17 +08:00
|
|
|
return ret;
|
|
|
|
|
|
|
|
if (old_tss_sel != 0xffff) {
|
|
|
|
tss_seg.prev_task_link = old_tss_sel;
|
|
|
|
|
2018-06-06 22:43:02 +08:00
|
|
|
ret = linear_write_system(ctxt, new_tss_base,
|
|
|
|
&tss_seg.prev_task_link,
|
2018-10-28 20:58:28 +08:00
|
|
|
sizeof(tss_seg.prev_task_link));
|
2010-11-22 23:53:24 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
2010-03-18 21:20:17 +08:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2011-05-15 00:00:52 +08:00
|
|
|
return load_state_from_tss16(ctxt, &tss_seg);
|
2010-03-18 21:20:17 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static void save_state_to_tss32(struct x86_emulate_ctxt *ctxt,
|
|
|
|
struct tss_segment_32 *tss)
|
|
|
|
{
|
2014-04-07 23:37:47 +08:00
|
|
|
/* CR3 and ldt selector are not saved intentionally */
|
2011-06-01 20:34:25 +08:00
|
|
|
tss->eip = ctxt->_eip;
|
2010-03-18 21:20:17 +08:00
|
|
|
tss->eflags = ctxt->eflags;
|
2012-08-28 04:46:17 +08:00
|
|
|
tss->eax = reg_read(ctxt, VCPU_REGS_RAX);
|
|
|
|
tss->ecx = reg_read(ctxt, VCPU_REGS_RCX);
|
|
|
|
tss->edx = reg_read(ctxt, VCPU_REGS_RDX);
|
|
|
|
tss->ebx = reg_read(ctxt, VCPU_REGS_RBX);
|
|
|
|
tss->esp = reg_read(ctxt, VCPU_REGS_RSP);
|
|
|
|
tss->ebp = reg_read(ctxt, VCPU_REGS_RBP);
|
|
|
|
tss->esi = reg_read(ctxt, VCPU_REGS_RSI);
|
|
|
|
tss->edi = reg_read(ctxt, VCPU_REGS_RDI);
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2011-04-27 18:20:30 +08:00
|
|
|
tss->es = get_segment_selector(ctxt, VCPU_SREG_ES);
|
|
|
|
tss->cs = get_segment_selector(ctxt, VCPU_SREG_CS);
|
|
|
|
tss->ss = get_segment_selector(ctxt, VCPU_SREG_SS);
|
|
|
|
tss->ds = get_segment_selector(ctxt, VCPU_SREG_DS);
|
|
|
|
tss->fs = get_segment_selector(ctxt, VCPU_SREG_FS);
|
|
|
|
tss->gs = get_segment_selector(ctxt, VCPU_SREG_GS);
|
2010-03-18 21:20:17 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static int load_state_from_tss32(struct x86_emulate_ctxt *ctxt,
|
|
|
|
struct tss_segment_32 *tss)
|
|
|
|
{
|
|
|
|
int ret;
|
2014-05-15 23:56:57 +08:00
|
|
|
u8 cpl;
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2011-05-15 00:00:52 +08:00
|
|
|
if (ctxt->ops->set_cr(ctxt, 3, tss->cr3))
|
2010-11-22 23:53:25 +08:00
|
|
|
return emulate_gp(ctxt, 0);
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->_eip = tss->eip;
|
2010-03-18 21:20:17 +08:00
|
|
|
ctxt->eflags = tss->eflags | 2;
|
2012-02-08 21:34:41 +08:00
|
|
|
|
|
|
|
/* General purpose registers */
|
2012-08-28 04:46:17 +08:00
|
|
|
*reg_write(ctxt, VCPU_REGS_RAX) = tss->eax;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RCX) = tss->ecx;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RDX) = tss->edx;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RBX) = tss->ebx;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RSP) = tss->esp;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RBP) = tss->ebp;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RSI) = tss->esi;
|
|
|
|
*reg_write(ctxt, VCPU_REGS_RDI) = tss->edi;
|
2010-03-18 21:20:17 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* SDM says that segment selectors are loaded before segment
|
2014-05-15 23:56:57 +08:00
|
|
|
* descriptors. This is important because CPL checks will
|
|
|
|
* use CS.RPL.
|
2010-03-18 21:20:17 +08:00
|
|
|
*/
|
2011-04-27 18:20:30 +08:00
|
|
|
set_segment_selector(ctxt, tss->ldt_selector, VCPU_SREG_LDTR);
|
|
|
|
set_segment_selector(ctxt, tss->es, VCPU_SREG_ES);
|
|
|
|
set_segment_selector(ctxt, tss->cs, VCPU_SREG_CS);
|
|
|
|
set_segment_selector(ctxt, tss->ss, VCPU_SREG_SS);
|
|
|
|
set_segment_selector(ctxt, tss->ds, VCPU_SREG_DS);
|
|
|
|
set_segment_selector(ctxt, tss->fs, VCPU_SREG_FS);
|
|
|
|
set_segment_selector(ctxt, tss->gs, VCPU_SREG_GS);
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2012-02-08 21:34:41 +08:00
|
|
|
/*
|
|
|
|
* If we're switching between Protected Mode and VM86, we need to make
|
|
|
|
* sure to update the mode before loading the segment descriptors so
|
|
|
|
* that the selectors are interpreted correctly.
|
|
|
|
*/
|
2014-05-15 23:56:57 +08:00
|
|
|
if (ctxt->eflags & X86_EFLAGS_VM) {
|
2012-02-08 21:34:41 +08:00
|
|
|
ctxt->mode = X86EMUL_MODE_VM86;
|
2014-05-15 23:56:57 +08:00
|
|
|
cpl = 3;
|
|
|
|
} else {
|
2012-02-08 21:34:41 +08:00
|
|
|
ctxt->mode = X86EMUL_MODE_PROT32;
|
2014-05-15 23:56:57 +08:00
|
|
|
cpl = tss->cs & 3;
|
|
|
|
}
|
2012-02-08 21:34:41 +08:00
|
|
|
|
2010-03-18 21:20:17 +08:00
|
|
|
/*
|
2021-03-18 22:28:01 +08:00
|
|
|
* Now load the segment descriptors. If a fault happens at this stage,
|
2010-03-18 21:20:17 +08:00
|
|
|
* it is handled in the context of the new task
|
|
|
|
*/
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->ldt_selector, VCPU_SREG_LDTR,
|
2014-12-25 08:52:19 +08:00
|
|
|
cpl, X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->es, VCPU_SREG_ES, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->ds, VCPU_SREG_DS, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->fs, VCPU_SREG_FS, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
if (ret != X86EMUL_CONTINUE)
|
|
|
|
return ret;
|
2014-09-19 03:39:39 +08:00
|
|
|
ret = __load_segment_descriptor(ctxt, tss->gs, VCPU_SREG_GS, cpl,
|
2014-12-25 08:52:19 +08:00
|
|
|
X86_TRANSFER_TASK_SWITCH, NULL);
|
2010-03-18 21:20:17 +08:00
|
|
|
|
2015-03-29 06:27:17 +08:00
|
|
|
return ret;
|
2010-03-18 21:20:17 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static int task_switch_32(struct x86_emulate_ctxt *ctxt,
			  u16 tss_selector, u16 old_tss_sel,
			  ulong old_tss_base, struct desc_struct *new_desc)
{
	struct tss_segment_32 tss_seg;
	int ret;
	u32 new_tss_base = get_desc_base(new_desc);
	u32 eip_offset = offsetof(struct tss_segment_32, eip);
	u32 ldt_sel_offset = offsetof(struct tss_segment_32, ldt_selector);

	ret = linear_read_system(ctxt, old_tss_base, &tss_seg, sizeof(tss_seg));
	if (ret != X86EMUL_CONTINUE)
		return ret;

	save_state_to_tss32(ctxt, &tss_seg);

	/* Only GP registers and segment selectors are saved */
	ret = linear_write_system(ctxt, old_tss_base + eip_offset, &tss_seg.eip,
				  ldt_sel_offset - eip_offset);
	if (ret != X86EMUL_CONTINUE)
		return ret;

	ret = linear_read_system(ctxt, new_tss_base, &tss_seg, sizeof(tss_seg));
	if (ret != X86EMUL_CONTINUE)
		return ret;

	if (old_tss_sel != 0xffff) {
		tss_seg.prev_task_link = old_tss_sel;

		ret = linear_write_system(ctxt, new_tss_base,
					  &tss_seg.prev_task_link,
					  sizeof(tss_seg.prev_task_link));
		if (ret != X86EMUL_CONTINUE)
			return ret;
	}

	return load_state_from_tss32(ctxt, &tss_seg);
}

static int emulator_do_task_switch(struct x86_emulate_ctxt *ctxt,
				   u16 tss_selector, int idt_index, int reason,
				   bool has_error_code, u32 error_code)
{
	const struct x86_emulate_ops *ops = ctxt->ops;
	struct desc_struct curr_tss_desc, next_tss_desc;
	int ret;
	u16 old_tss_sel = get_segment_selector(ctxt, VCPU_SREG_TR);
	ulong old_tss_base =
		ops->get_cached_segment_base(ctxt, VCPU_SREG_TR);
	u32 desc_limit;
	ulong desc_addr, dr7;

	/* FIXME: old_tss_base == ~0 ? */

	ret = read_segment_descriptor(ctxt, tss_selector, &next_tss_desc, &desc_addr);
	if (ret != X86EMUL_CONTINUE)
		return ret;
	ret = read_segment_descriptor(ctxt, old_tss_sel, &curr_tss_desc, &desc_addr);
	if (ret != X86EMUL_CONTINUE)
		return ret;

	/* FIXME: check that next_tss_desc is tss */

	/*
	 * Check privileges. The three cases are task switch caused by...
	 *
	 * 1. jmp/call/int to task gate: Check against DPL of the task gate
	 * 2. Exception/IRQ/iret: No check is performed
	 * 3. jmp/call to TSS/task-gate: No check is performed since the
	 *    hardware checks it before exiting.
	 */
	if (reason == TASK_SWITCH_GATE) {
		if (idt_index != -1) {
			/* Software interrupts */
			struct desc_struct task_gate_desc;
			int dpl;

			ret = read_interrupt_descriptor(ctxt, idt_index,
							&task_gate_desc);
			if (ret != X86EMUL_CONTINUE)
				return ret;

			dpl = task_gate_desc.dpl;
			if ((tss_selector & 3) > dpl || ops->cpl(ctxt) > dpl)
				return emulate_gp(ctxt, (idt_index << 3) | 0x2);
		}
	}

	desc_limit = desc_limit_scaled(&next_tss_desc);
	if (!next_tss_desc.p ||
	    ((desc_limit < 0x67 && (next_tss_desc.type & 8)) ||
	     desc_limit < 0x2b)) {
		return emulate_ts(ctxt, tss_selector & 0xfffc);
	}

	if (reason == TASK_SWITCH_IRET || reason == TASK_SWITCH_JMP) {
		curr_tss_desc.type &= ~(1 << 1); /* clear busy flag */
		write_segment_descriptor(ctxt, old_tss_sel, &curr_tss_desc);
	}

	if (reason == TASK_SWITCH_IRET)
		ctxt->eflags = ctxt->eflags & ~X86_EFLAGS_NT;

	/* set back link to prev task only if NT bit is set in eflags
	   note that old_tss_sel is not used after this point */
	if (reason != TASK_SWITCH_CALL && reason != TASK_SWITCH_GATE)
		old_tss_sel = 0xffff;

	if (next_tss_desc.type & 8)
		ret = task_switch_32(ctxt, tss_selector, old_tss_sel,
				     old_tss_base, &next_tss_desc);
	else
		ret = task_switch_16(ctxt, tss_selector, old_tss_sel,
				     old_tss_base, &next_tss_desc);
	if (ret != X86EMUL_CONTINUE)
		return ret;

	if (reason == TASK_SWITCH_CALL || reason == TASK_SWITCH_GATE)
		ctxt->eflags = ctxt->eflags | X86_EFLAGS_NT;

	if (reason != TASK_SWITCH_IRET) {
		next_tss_desc.type |= (1 << 1); /* set busy flag */
		write_segment_descriptor(ctxt, tss_selector, &next_tss_desc);
	}

	ops->set_cr(ctxt, 0, ops->get_cr(ctxt, 0) | X86_CR0_TS);
	ops->set_segment(ctxt, tss_selector, &next_tss_desc, 0, VCPU_SREG_TR);

	if (has_error_code) {
		ctxt->op_bytes = ctxt->ad_bytes = (next_tss_desc.type & 8) ? 4 : 2;
		ctxt->lock_prefix = 0;
		ctxt->src.val = (unsigned long) error_code;
		ret = em_push(ctxt);
	}

	ops->get_dr(ctxt, 7, &dr7);
	ops->set_dr(ctxt, 7, dr7 & ~(DR_LOCAL_ENABLE_MASK | DR_LOCAL_SLOWDOWN));

	return ret;
}

int emulator_task_switch(struct x86_emulate_ctxt *ctxt,
			 u16 tss_selector, int idt_index, int reason,
			 bool has_error_code, u32 error_code)
{
	int rc;

	invalidate_registers(ctxt);
	ctxt->_eip = ctxt->eip;
	ctxt->dst.type = OP_NONE;

	rc = emulator_do_task_switch(ctxt, tss_selector, idt_index, reason,
				     has_error_code, error_code);

	if (rc == X86EMUL_CONTINUE) {
		ctxt->eip = ctxt->_eip;
		writeback_registers(ctxt);
	}

	return (rc == X86EMUL_UNHANDLEABLE) ? EMULATION_FAILED : EMULATION_OK;
}

static void string_addr_inc(struct x86_emulate_ctxt *ctxt, int reg,
			    struct operand *op)
{
	int df = (ctxt->eflags & X86_EFLAGS_DF) ? -op->count : op->count;

	register_address_increment(ctxt, reg, df * op->bytes);
	op->addr.mem.ea = register_address(ctxt, reg);
}

static int em_das(struct x86_emulate_ctxt *ctxt)
{
	u8 al, old_al;
	bool af, cf, old_cf;

	cf = ctxt->eflags & X86_EFLAGS_CF;
	al = ctxt->dst.val;

	old_al = al;
	old_cf = cf;
	cf = false;
	af = ctxt->eflags & X86_EFLAGS_AF;
	if ((al & 0x0f) > 9 || af) {
		al -= 6;
		cf = old_cf | (al >= 250);
		af = true;
	} else {
		af = false;
	}
	if (old_al > 0x99 || old_cf) {
		al -= 0x60;
		cf = true;
	}

	ctxt->dst.val = al;
	/* Set PF, ZF, SF */
	ctxt->src.type = OP_IMM;
	ctxt->src.val = 0;
	ctxt->src.bytes = 1;
	fastop(ctxt, em_or);
	ctxt->eflags &= ~(X86_EFLAGS_AF | X86_EFLAGS_CF);
	if (cf)
		ctxt->eflags |= X86_EFLAGS_CF;
	if (af)
		ctxt->eflags |= X86_EFLAGS_AF;
	return X86EMUL_CONTINUE;
}

static int em_aam(struct x86_emulate_ctxt *ctxt)
{
	u8 al, ah;

	if (ctxt->src.val == 0)
		return emulate_de(ctxt);

	al = ctxt->dst.val & 0xff;
	ah = al / ctxt->src.val;
	al %= ctxt->src.val;

	ctxt->dst.val = (ctxt->dst.val & 0xffff0000) | al | (ah << 8);

	/* Set PF, ZF, SF */
	ctxt->src.type = OP_IMM;
	ctxt->src.val = 0;
	ctxt->src.bytes = 1;
	fastop(ctxt, em_or);

	return X86EMUL_CONTINUE;
}

static int em_aad(struct x86_emulate_ctxt *ctxt)
{
	u8 al = ctxt->dst.val & 0xff;
	u8 ah = (ctxt->dst.val >> 8) & 0xff;

	al = (al + (ah * ctxt->src.val)) & 0xff;

	ctxt->dst.val = (ctxt->dst.val & 0xffff0000) | al;

	/* Set PF, ZF, SF */
	ctxt->src.type = OP_IMM;
	ctxt->src.val = 0;
	ctxt->src.bytes = 1;
	fastop(ctxt, em_or);

	return X86EMUL_CONTINUE;
}

static int em_call(struct x86_emulate_ctxt *ctxt)
{
	int rc;
	long rel = ctxt->src.val;

	ctxt->src.val = (unsigned long)ctxt->_eip;
	rc = jmp_rel(ctxt, rel);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	return em_push(ctxt);
}

static int em_call_far(struct x86_emulate_ctxt *ctxt)
{
	u16 sel, old_cs;
	ulong old_eip;
	int rc;
	struct desc_struct old_desc, new_desc;
	const struct x86_emulate_ops *ops = ctxt->ops;
	int cpl = ctxt->ops->cpl(ctxt);
	enum x86emul_mode prev_mode = ctxt->mode;

	old_eip = ctxt->_eip;
	ops->get_segment(ctxt, &old_cs, &old_desc, NULL, VCPU_SREG_CS);

	memcpy(&sel, ctxt->src.valptr + ctxt->op_bytes, 2);
	rc = __load_segment_descriptor(ctxt, sel, VCPU_SREG_CS, cpl,
				       X86_TRANSFER_CALL_JMP, &new_desc);
	if (rc != X86EMUL_CONTINUE)
		return rc;

	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
	if (rc != X86EMUL_CONTINUE)
		goto fail;

	ctxt->src.val = old_cs;
	rc = em_push(ctxt);
	if (rc != X86EMUL_CONTINUE)
		goto fail;

	ctxt->src.val = old_eip;
	rc = em_push(ctxt);
	/* If we failed, we tainted the memory, but the very least we should
	   restore cs */
	if (rc != X86EMUL_CONTINUE) {
		pr_warn_once("faulting far call emulation tainted memory\n");
		goto fail;
	}
	return rc;
fail:
	ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS);
	ctxt->mode = prev_mode;
	return rc;
}

static int em_ret_near_imm(struct x86_emulate_ctxt *ctxt)
{
	int rc;
	unsigned long eip;

	rc = emulate_pop(ctxt, &eip, ctxt->op_bytes);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	rc = assign_eip_near(ctxt, eip);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	rsp_increment(ctxt, ctxt->src.val);
	return X86EMUL_CONTINUE;
}

static int em_xchg(struct x86_emulate_ctxt *ctxt)
{
	/* Write back the register source. */
	ctxt->src.val = ctxt->dst.val;
	write_register_operand(&ctxt->src);

	/* Write back the memory destination with implicit LOCK prefix. */
	ctxt->dst.val = ctxt->src.orig_val;
	ctxt->lock_prefix = 1;
	return X86EMUL_CONTINUE;
}

static int em_imul_3op(struct x86_emulate_ctxt *ctxt)
{
	ctxt->dst.val = ctxt->src2.val;
	return fastop(ctxt, em_imul);
}

static int em_cwd(struct x86_emulate_ctxt *ctxt)
{
	ctxt->dst.type = OP_REG;
	ctxt->dst.bytes = ctxt->src.bytes;
	ctxt->dst.addr.reg = reg_rmw(ctxt, VCPU_REGS_RDX);
	ctxt->dst.val = ~((ctxt->src.val >> (ctxt->src.bytes * 8 - 1)) - 1);

	return X86EMUL_CONTINUE;
}

static int em_rdpid(struct x86_emulate_ctxt *ctxt)
{
	u64 tsc_aux = 0;

	if (ctxt->ops->get_msr(ctxt, MSR_TSC_AUX, &tsc_aux))
		return emulate_ud(ctxt);
	ctxt->dst.val = tsc_aux;
	return X86EMUL_CONTINUE;
}

static int em_rdtsc(struct x86_emulate_ctxt *ctxt)
{
	u64 tsc = 0;

	ctxt->ops->get_msr(ctxt, MSR_IA32_TSC, &tsc);
	*reg_write(ctxt, VCPU_REGS_RAX) = (u32)tsc;
	*reg_write(ctxt, VCPU_REGS_RDX) = tsc >> 32;
	return X86EMUL_CONTINUE;
}

static int em_rdpmc(struct x86_emulate_ctxt *ctxt)
{
	u64 pmc;

	if (ctxt->ops->read_pmc(ctxt, reg_read(ctxt, VCPU_REGS_RCX), &pmc))
		return emulate_gp(ctxt, 0);
	*reg_write(ctxt, VCPU_REGS_RAX) = (u32)pmc;
	*reg_write(ctxt, VCPU_REGS_RDX) = pmc >> 32;
	return X86EMUL_CONTINUE;
}

static int em_mov(struct x86_emulate_ctxt *ctxt)
{
	memcpy(ctxt->dst.valptr, ctxt->src.valptr, sizeof(ctxt->src.valptr));
	return X86EMUL_CONTINUE;
}

static int em_movbe(struct x86_emulate_ctxt *ctxt)
{
	u16 tmp;

	if (!ctxt->ops->guest_has_movbe(ctxt))
		return emulate_ud(ctxt);

	switch (ctxt->op_bytes) {
	case 2:
		/*
		 * From MOVBE definition: "...When the operand size is 16 bits,
		 * the upper word of the destination register remains unchanged
		 * ..."
		 *
		 * Both casting ->valptr and ->val to u16 breaks strict aliasing
		 * rules so we have to do the operation almost per hand.
		 */
		tmp = (u16)ctxt->src.val;
		ctxt->dst.val &= ~0xffffUL;
		ctxt->dst.val |= (unsigned long)swab16(tmp);
		break;
	case 4:
		ctxt->dst.val = swab32((u32)ctxt->src.val);
		break;
	case 8:
		ctxt->dst.val = swab64(ctxt->src.val);
		break;
	default:
		BUG();
	}
	return X86EMUL_CONTINUE;
}

static int em_cr_write(struct x86_emulate_ctxt *ctxt)
{
	if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val))
		return emulate_gp(ctxt, 0);

	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return X86EMUL_CONTINUE;
}

static int em_dr_write(struct x86_emulate_ctxt *ctxt)
{
	unsigned long val;

	if (ctxt->mode == X86EMUL_MODE_PROT64)
		val = ctxt->src.val & ~0ULL;
	else
		val = ctxt->src.val & ~0U;

	/* #UD condition is already handled. */
	if (ctxt->ops->set_dr(ctxt, ctxt->modrm_reg, val) < 0)
		return emulate_gp(ctxt, 0);

	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return X86EMUL_CONTINUE;
}

static int em_wrmsr(struct x86_emulate_ctxt *ctxt)
{
	u64 msr_index = reg_read(ctxt, VCPU_REGS_RCX);
	u64 msr_data;
	int r;

	msr_data = (u32)reg_read(ctxt, VCPU_REGS_RAX)
		| ((u64)reg_read(ctxt, VCPU_REGS_RDX) << 32);
	r = ctxt->ops->set_msr(ctxt, msr_index, msr_data);

	if (r == X86EMUL_IO_NEEDED)
		return r;

	if (r > 0)
		return emulate_gp(ctxt, 0);

	return r < 0 ? X86EMUL_UNHANDLEABLE : X86EMUL_CONTINUE;
}

static int em_rdmsr(struct x86_emulate_ctxt *ctxt)
{
	u64 msr_index = reg_read(ctxt, VCPU_REGS_RCX);
	u64 msr_data;
	int r;

	r = ctxt->ops->get_msr(ctxt, msr_index, &msr_data);

	if (r == X86EMUL_IO_NEEDED)
		return r;

	if (r)
		return emulate_gp(ctxt, 0);

	*reg_write(ctxt, VCPU_REGS_RAX) = (u32)msr_data;
	*reg_write(ctxt, VCPU_REGS_RDX) = msr_data >> 32;
	return X86EMUL_CONTINUE;
}

static int em_store_sreg(struct x86_emulate_ctxt *ctxt, int segment)
{
	if (segment > VCPU_SREG_GS &&
	    (ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
	    ctxt->ops->cpl(ctxt) > 0)
		return emulate_gp(ctxt, 0);

	ctxt->dst.val = get_segment_selector(ctxt, segment);
	if (ctxt->dst.bytes == 4 && ctxt->dst.type == OP_MEM)
		ctxt->dst.bytes = 2;
	return X86EMUL_CONTINUE;
}

static int em_mov_rm_sreg(struct x86_emulate_ctxt *ctxt)
{
	if (ctxt->modrm_reg > VCPU_SREG_GS)
		return emulate_ud(ctxt);

	return em_store_sreg(ctxt, ctxt->modrm_reg);
}

static int em_mov_sreg_rm(struct x86_emulate_ctxt *ctxt)
{
	u16 sel = ctxt->src.val;

	if (ctxt->modrm_reg == VCPU_SREG_CS || ctxt->modrm_reg > VCPU_SREG_GS)
		return emulate_ud(ctxt);

	if (ctxt->modrm_reg == VCPU_SREG_SS)
		ctxt->interruptibility = KVM_X86_SHADOW_INT_MOV_SS;

	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return load_segment_descriptor(ctxt, sel, ctxt->modrm_reg);
}

static int em_sldt(struct x86_emulate_ctxt *ctxt)
{
	return em_store_sreg(ctxt, VCPU_SREG_LDTR);
}

static int em_lldt(struct x86_emulate_ctxt *ctxt)
{
	u16 sel = ctxt->src.val;

	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return load_segment_descriptor(ctxt, sel, VCPU_SREG_LDTR);
}

static int em_str(struct x86_emulate_ctxt *ctxt)
{
	return em_store_sreg(ctxt, VCPU_SREG_TR);
}

static int em_ltr(struct x86_emulate_ctxt *ctxt)
{
	u16 sel = ctxt->src.val;

	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return load_segment_descriptor(ctxt, sel, VCPU_SREG_TR);
}

static int em_invlpg(struct x86_emulate_ctxt *ctxt)
{
	int rc;
	ulong linear;

	rc = linearize(ctxt, ctxt->src.addr.mem, 1, false, &linear);
	if (rc == X86EMUL_CONTINUE)
		ctxt->ops->invlpg(ctxt, linear);
	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return X86EMUL_CONTINUE;
}

static int em_clts(struct x86_emulate_ctxt *ctxt)
{
	ulong cr0;

	cr0 = ctxt->ops->get_cr(ctxt, 0);
	cr0 &= ~X86_CR0_TS;
	ctxt->ops->set_cr(ctxt, 0, cr0);
	return X86EMUL_CONTINUE;
}

static int em_hypercall(struct x86_emulate_ctxt *ctxt)
{
	int rc = ctxt->ops->fix_hypercall(ctxt);

	if (rc != X86EMUL_CONTINUE)
		return rc;

	/* Let the processor re-execute the fixed hypercall */
	ctxt->_eip = ctxt->eip;
	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return X86EMUL_CONTINUE;
}

static int emulate_store_desc_ptr(struct x86_emulate_ctxt *ctxt,
				  void (*get)(struct x86_emulate_ctxt *ctxt,
					      struct desc_ptr *ptr))
{
	struct desc_ptr desc_ptr;

	if ((ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
	    ctxt->ops->cpl(ctxt) > 0)
		return emulate_gp(ctxt, 0);

	if (ctxt->mode == X86EMUL_MODE_PROT64)
		ctxt->op_bytes = 8;
	get(ctxt, &desc_ptr);
	if (ctxt->op_bytes == 2) {
		ctxt->op_bytes = 4;
		desc_ptr.address &= 0x00ffffff;
	}
	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return segmented_write_std(ctxt, ctxt->dst.addr.mem,
				   &desc_ptr, 2 + ctxt->op_bytes);
}

static int em_sgdt(struct x86_emulate_ctxt *ctxt)
{
	return emulate_store_desc_ptr(ctxt, ctxt->ops->get_gdt);
}

static int em_sidt(struct x86_emulate_ctxt *ctxt)
{
	return emulate_store_desc_ptr(ctxt, ctxt->ops->get_idt);
}

static int em_lgdt_lidt(struct x86_emulate_ctxt *ctxt, bool lgdt)
{
	struct desc_ptr desc_ptr;
	int rc;

	if (ctxt->mode == X86EMUL_MODE_PROT64)
		ctxt->op_bytes = 8;
	rc = read_descriptor(ctxt, ctxt->src.addr.mem,
			     &desc_ptr.size, &desc_ptr.address,
			     ctxt->op_bytes);
	if (rc != X86EMUL_CONTINUE)
		return rc;
	if (ctxt->mode == X86EMUL_MODE_PROT64 &&
	    emul_is_noncanonical_address(desc_ptr.address, ctxt))
		return emulate_gp(ctxt, 0);
	if (lgdt)
		ctxt->ops->set_gdt(ctxt, &desc_ptr);
	else
		ctxt->ops->set_idt(ctxt, &desc_ptr);
	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return X86EMUL_CONTINUE;
}

static int em_lgdt(struct x86_emulate_ctxt *ctxt)
{
	return em_lgdt_lidt(ctxt, true);
}

static int em_lidt(struct x86_emulate_ctxt *ctxt)
{
	return em_lgdt_lidt(ctxt, false);
}

static int em_smsw(struct x86_emulate_ctxt *ctxt)
{
	if ((ctxt->ops->get_cr(ctxt, 4) & X86_CR4_UMIP) &&
	    ctxt->ops->cpl(ctxt) > 0)
		return emulate_gp(ctxt, 0);

	if (ctxt->dst.type == OP_MEM)
		ctxt->dst.bytes = 2;
	ctxt->dst.val = ctxt->ops->get_cr(ctxt, 0);
	return X86EMUL_CONTINUE;
}

static int em_lmsw(struct x86_emulate_ctxt *ctxt)
{
	ctxt->ops->set_cr(ctxt, 0, (ctxt->ops->get_cr(ctxt, 0) & ~0x0eul)
			  | (ctxt->src.val & 0x0f));
	ctxt->dst.type = OP_NONE;
	return X86EMUL_CONTINUE;
}

static int em_loop(struct x86_emulate_ctxt *ctxt)
{
	int rc = X86EMUL_CONTINUE;

	register_address_increment(ctxt, VCPU_REGS_RCX, -1);
	if ((address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) != 0) &&
	    (ctxt->b == 0xe2 || test_cc(ctxt->b ^ 0x5, ctxt->eflags)))
		rc = jmp_rel(ctxt, ctxt->src.val);

	return rc;
}

static int em_jcxz(struct x86_emulate_ctxt *ctxt)
{
	int rc = X86EMUL_CONTINUE;

	if (address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) == 0)
		rc = jmp_rel(ctxt, ctxt->src.val);

	return rc;
}

static int em_in(struct x86_emulate_ctxt *ctxt)
{
	if (!pio_in_emulated(ctxt, ctxt->dst.bytes, ctxt->src.val,
			     &ctxt->dst.val))
		return X86EMUL_IO_NEEDED;

	return X86EMUL_CONTINUE;
}

static int em_out(struct x86_emulate_ctxt *ctxt)
{
	ctxt->ops->pio_out_emulated(ctxt, ctxt->src.bytes, ctxt->dst.val,
				    &ctxt->src.val, 1);
	/* Disable writeback. */
	ctxt->dst.type = OP_NONE;
	return X86EMUL_CONTINUE;
}

static int em_cli(struct x86_emulate_ctxt *ctxt)
{
	if (emulator_bad_iopl(ctxt))
		return emulate_gp(ctxt, 0);

	ctxt->eflags &= ~X86_EFLAGS_IF;
	return X86EMUL_CONTINUE;
}

static int em_sti(struct x86_emulate_ctxt *ctxt)
{
	if (emulator_bad_iopl(ctxt))
		return emulate_gp(ctxt, 0);

	ctxt->interruptibility = KVM_X86_SHADOW_INT_STI;
	ctxt->eflags |= X86_EFLAGS_IF;
	return X86EMUL_CONTINUE;
}

static int em_cpuid(struct x86_emulate_ctxt *ctxt)
{
	u32 eax, ebx, ecx, edx;
	u64 msr = 0;

	ctxt->ops->get_msr(ctxt, MSR_MISC_FEATURES_ENABLES, &msr);
	if (msr & MSR_MISC_FEATURES_ENABLES_CPUID_FAULT &&
	    ctxt->ops->cpl(ctxt)) {
		return emulate_gp(ctxt, 0);
	}

	eax = reg_read(ctxt, VCPU_REGS_RAX);
	ecx = reg_read(ctxt, VCPU_REGS_RCX);
	ctxt->ops->get_cpuid(ctxt, &eax, &ebx, &ecx, &edx, false);
	*reg_write(ctxt, VCPU_REGS_RAX) = eax;
	*reg_write(ctxt, VCPU_REGS_RBX) = ebx;
	*reg_write(ctxt, VCPU_REGS_RCX) = ecx;
	*reg_write(ctxt, VCPU_REGS_RDX) = edx;
	return X86EMUL_CONTINUE;
}

static int em_sahf(struct x86_emulate_ctxt *ctxt)
{
	u32 flags;

	flags = X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF | X86_EFLAGS_ZF |
		X86_EFLAGS_SF;
	flags &= *reg_rmw(ctxt, VCPU_REGS_RAX) >> 8;

	ctxt->eflags &= ~0xffUL;
	ctxt->eflags |= flags | X86_EFLAGS_FIXED;
	return X86EMUL_CONTINUE;
}

static int em_lahf(struct x86_emulate_ctxt *ctxt)
{
	*reg_rmw(ctxt, VCPU_REGS_RAX) &= ~0xff00UL;
	*reg_rmw(ctxt, VCPU_REGS_RAX) |= (ctxt->eflags & 0xff) << 8;
	return X86EMUL_CONTINUE;
}

static int em_bswap(struct x86_emulate_ctxt *ctxt)
{
	switch (ctxt->op_bytes) {
#ifdef CONFIG_X86_64
	case 8:
		asm("bswap %0" : "+r"(ctxt->dst.val));
		break;
#endif
	default:
		asm("bswap %0" : "+r"(*(u32 *)&ctxt->dst.val));
		break;
	}
	return X86EMUL_CONTINUE;
}

static int em_clflush(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
/* emulating clflush regardless of cpuid */
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2020-11-03 20:04:00 +08:00
|
|
|
static int em_clflushopt(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
/* emulating clflushopt regardless of cpuid */
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2015-01-26 15:32:24 +08:00
|
|
|
static int em_movsxd(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
ctxt->dst.val = (s32) ctxt->src.val;
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2016-11-10 02:07:06 +08:00
|
|
|
static int check_fxsr(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2019-12-18 05:32:38 +08:00
|
|
|
if (!ctxt->ops->guest_has_fxsr(ctxt))
|
2016-11-10 02:07:06 +08:00
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
|
|
|
if (ctxt->ops->get_cr(ctxt, 0) & (X86_CR0_TS | X86_CR0_EM))
|
|
|
|
return emulate_nm(ctxt);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't emulate a case that should never be hit, instead of working
|
|
|
|
* around a lack of fxsave64/fxrstor64 on old compilers.
|
|
|
|
*/
|
|
|
|
if (ctxt->mode >= X86EMUL_MODE_PROT64)
|
|
|
|
return X86EMUL_UNHANDLEABLE;
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2017-05-31 11:08:38 +08:00
|
|
|
/*
|
|
|
|
* Hardware doesn't save and restore XMM 0-7 without CR4.OSFXSR, but does save
|
|
|
|
* and restore MXCSR.
|
|
|
|
*/
|
|
|
|
static size_t __fxstate_size(int nregs)
|
|
|
|
{
|
|
|
|
return offsetof(struct fxregs_state, xmm_space[0]) + nregs * 16;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline size_t fxstate_size(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
bool cr4_osfxsr;
|
|
|
|
if (ctxt->mode == X86EMUL_MODE_PROT64)
|
|
|
|
return __fxstate_size(16);
|
|
|
|
|
|
|
|
cr4_osfxsr = ctxt->ops->get_cr(ctxt, 4) & X86_CR4_OSFXSR;
|
|
|
|
return __fxstate_size(cr4_osfxsr ? 8 : 0);
|
|
|
|
}
|
|
|
|
|
2016-11-10 02:07:06 +08:00
|
|
|
/*
|
|
|
|
* FXSAVE and FXRSTOR have 4 different formats depending on execution mode,
|
|
|
|
* 1) 16 bit mode
|
|
|
|
* 2) 32 bit mode
|
|
|
|
* - like (1), but FIP and FDP (the FPU instruction and data pointers) are only 16 bit. At least Intel CPUs
|
|
|
|
* preserve whole 32 bit values, though, so (1) and (2) are the same wrt.
|
|
|
|
* save and restore
|
|
|
|
* 3) 64-bit mode with REX.W prefix
|
|
|
|
* - like (2), but XMM 8-15 are being saved and restored
|
|
|
|
* 4) 64-bit mode without REX.W prefix
|
|
|
|
* - like (3), but FIP and FDP are 64 bit
|
|
|
|
*
|
|
|
|
* Emulation uses (3) for (1) and (2) and preserves XMM 8-15 to reach the
|
|
|
|
* desired result. (4) is not emulated.
|
|
|
|
*
|
|
|
|
* Note: Guest and host CPUID.(EAX=07H,ECX=0H):EBX[bit 13] (deprecate FPU CS
|
|
|
|
* and FPU DS) should match.
|
|
|
|
*/
|
|
|
|
static int em_fxsave(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
struct fxregs_state fx_state;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
rc = check_fxsr(ctxt);
|
|
|
|
if (rc != X86EMUL_CONTINUE)
|
|
|
|
return rc;
|
|
|
|
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_get();
|
2020-01-18 03:30:50 +08:00
|
|
|
|
2016-11-10 02:07:06 +08:00
|
|
|
rc = asm_safe("fxsave %[fx]", , [fx] "+m"(fx_state));
|
|
|
|
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_put();
|
2020-01-18 03:30:50 +08:00
|
|
|
|
2016-11-10 02:07:06 +08:00
|
|
|
if (rc != X86EMUL_CONTINUE)
|
|
|
|
return rc;
|
|
|
|
|
2017-05-31 11:08:38 +08:00
|
|
|
return segmented_write_std(ctxt, ctxt->memop.addr.mem, &fx_state,
|
|
|
|
fxstate_size(ctxt));
|
2016-11-10 02:07:06 +08:00
|
|
|
}
|
|
|
|
|
KVM: x86: fix em_fxstor() sleeping while in atomic
Commit 9d643f63128b ("KVM: x86: avoid large stack allocations in
em_fxrstor") optimized the stack size, but introduced a guest memory access
which might sleep while in atomic.
Fix it by introducing, again, a second fxregs_state. Try to avoid
large stacks by using noinline. Add some helpful comments.
Reported by syzbot:
in_atomic(): 1, irqs_disabled(): 0, pid: 2909, name: syzkaller879109
2 locks held by syzkaller879109/2909:
#0: (&vcpu->mutex){+.+.}, at: [<ffffffff8106222c>] vcpu_load+0x1c/0x70
arch/x86/kvm/../../../virt/kvm/kvm_main.c:154
#1: (&kvm->srcu){....}, at: [<ffffffff810dd162>] vcpu_enter_guest
arch/x86/kvm/x86.c:6983 [inline]
#1: (&kvm->srcu){....}, at: [<ffffffff810dd162>] vcpu_run
arch/x86/kvm/x86.c:7061 [inline]
#1: (&kvm->srcu){....}, at: [<ffffffff810dd162>]
kvm_arch_vcpu_ioctl_run+0x1bc2/0x58b0 arch/x86/kvm/x86.c:7222
CPU: 1 PID: 2909 Comm: syzkaller879109 Not tainted 4.13.0-rc4-next-20170811
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
___might_sleep+0x2b2/0x470 kernel/sched/core.c:6014
__might_sleep+0x95/0x190 kernel/sched/core.c:5967
__might_fault+0xab/0x1d0 mm/memory.c:4383
__copy_from_user include/linux/uaccess.h:71 [inline]
__kvm_read_guest_page+0x58/0xa0
arch/x86/kvm/../../../virt/kvm/kvm_main.c:1771
kvm_vcpu_read_guest_page+0x44/0x60
arch/x86/kvm/../../../virt/kvm/kvm_main.c:1791
kvm_read_guest_virt_helper+0x76/0x140 arch/x86/kvm/x86.c:4407
kvm_read_guest_virt_system+0x3c/0x50 arch/x86/kvm/x86.c:4466
segmented_read_std+0x10c/0x180 arch/x86/kvm/emulate.c:819
em_fxrstor+0x27b/0x410 arch/x86/kvm/emulate.c:4022
x86_emulate_insn+0x55d/0x3c50 arch/x86/kvm/emulate.c:5471
x86_emulate_instruction+0x411/0x1ca0 arch/x86/kvm/x86.c:5698
kvm_mmu_page_fault+0x18b/0x2c0 arch/x86/kvm/mmu.c:4854
handle_ept_violation+0x1fc/0x5e0 arch/x86/kvm/vmx.c:6400
vmx_handle_exit+0x281/0x1ab0 arch/x86/kvm/vmx.c:8718
vcpu_enter_guest arch/x86/kvm/x86.c:6999 [inline]
vcpu_run arch/x86/kvm/x86.c:7061 [inline]
kvm_arch_vcpu_ioctl_run+0x1cee/0x58b0 arch/x86/kvm/x86.c:7222
kvm_vcpu_ioctl+0x64c/0x1010 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2591
vfs_ioctl fs/ioctl.c:45 [inline]
do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:685
SYSC_ioctl fs/ioctl.c:700 [inline]
SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x437fc9
RSP: 002b:00007ffc7b4d5ab8 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00000000004002b0 RCX: 0000000000437fc9
RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000005
RBP: 0000000000000086 R08: 0000000000000000 R09: 0000000020ae8000
R10: 0000000000009120 R11: 0000000000000206 R12: 0000000000000000
R13: 0000000000000004 R14: 0000000000000004 R15: 0000000020077000
Fixes: 9d643f63128b ("KVM: x86: avoid large stack allocations in em_fxrstor")
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
2017-11-08 01:04:05 +08:00
|
|
|
/*
|
|
|
|
* FXRSTOR might restore XMM registers not provided by the guest. Fill
|
|
|
|
* in the host registers (via FXSAVE) instead, so they won't be modified.
|
|
|
|
* (preemption has to stay disabled until FXRSTOR).
|
|
|
|
*
|
|
|
|
* Use noinline to keep the stack for other functions called by callers small.
|
|
|
|
*/
|
|
|
|
static noinline int fxregs_fixup(struct fxregs_state *fx_state,
|
|
|
|
const size_t used_size)
|
|
|
|
{
|
|
|
|
struct fxregs_state fx_tmp;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
rc = asm_safe("fxsave %[fx]", , [fx] "+m"(fx_tmp));
|
|
|
|
memcpy((void *)fx_state + used_size, (void *)&fx_tmp + used_size,
|
|
|
|
__fxstate_size(16) - used_size);
|
|
|
|
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
2016-11-10 02:07:06 +08:00
|
|
|
static int em_fxrstor(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
struct fxregs_state fx_state;
|
|
|
|
int rc;
|
2017-05-31 11:08:38 +08:00
|
|
|
size_t size;
|
2016-11-10 02:07:06 +08:00
|
|
|
|
|
|
|
rc = check_fxsr(ctxt);
|
|
|
|
if (rc != X86EMUL_CONTINUE)
|
|
|
|
return rc;
|
|
|
|
|
2017-11-08 01:04:05 +08:00
|
|
|
size = fxstate_size(ctxt);
|
|
|
|
rc = segmented_read_std(ctxt, ctxt->memop.addr.mem, &fx_state, size);
|
|
|
|
if (rc != X86EMUL_CONTINUE)
|
|
|
|
return rc;
|
|
|
|
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_get();
|
2020-01-18 03:30:50 +08:00
|
|
|
|
2017-05-31 11:08:38 +08:00
|
|
|
if (size < __fxstate_size(16)) {
|
2017-11-08 01:04:05 +08:00
|
|
|
rc = fxregs_fixup(&fx_state, size);
|
2017-05-31 11:08:38 +08:00
|
|
|
if (rc != X86EMUL_CONTINUE)
|
|
|
|
goto out;
|
|
|
|
}
|
2016-11-10 02:07:06 +08:00
|
|
|
|
2017-05-31 11:08:38 +08:00
|
|
|
if (fx_state.mxcsr >> 16) {
|
|
|
|
rc = emulate_gp(ctxt, 0);
|
|
|
|
goto out;
|
|
|
|
}
|
2016-11-10 02:07:06 +08:00
|
|
|
|
|
|
|
if (rc == X86EMUL_CONTINUE)
|
|
|
|
rc = asm_safe("fxrstor %[fx]", : [fx] "m"(fx_state));
|
|
|
|
|
2017-05-31 11:08:38 +08:00
|
|
|
out:
|
2021-05-26 16:56:08 +08:00
|
|
|
kvm_fpu_put();
|
2020-01-18 03:30:50 +08:00
|
|
|
|
2016-11-10 02:07:06 +08:00
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
2019-08-13 21:53:32 +08:00
|
|
|
static int em_xsetbv(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
u32 eax, ecx, edx;
|
|
|
|
|
|
|
|
eax = reg_read(ctxt, VCPU_REGS_RAX);
|
|
|
|
edx = reg_read(ctxt, VCPU_REGS_RDX);
|
|
|
|
ecx = reg_read(ctxt, VCPU_REGS_RCX);
|
|
|
|
|
|
|
|
if (ctxt->ops->set_xcr(ctxt, ecx, ((u64)edx << 32) | eax))
|
|
|
|
return emulate_gp(ctxt, 0);
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2011-04-04 18:39:28 +08:00
|
|
|
static bool valid_cr(int nr)
|
|
|
|
{
|
|
|
|
switch (nr) {
|
|
|
|
case 0:
|
|
|
|
case 2 ... 4:
|
|
|
|
case 8:
|
|
|
|
return true;
|
|
|
|
default:
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-04-22 10:21:20 +08:00
|
|
|
static int check_cr_access(struct x86_emulate_ctxt *ctxt)
|
2011-04-04 18:39:28 +08:00
|
|
|
{
|
2011-06-01 20:34:25 +08:00
|
|
|
if (!valid_cr(ctxt->modrm_reg))
|
2011-04-04 18:39:28 +08:00
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2011-04-04 18:39:29 +08:00
|
|
|
static int check_dr7_gd(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
|
|
|
unsigned long dr7;
|
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
ctxt->ops->get_dr(ctxt, 7, &dr7);
|
2011-04-04 18:39:29 +08:00
|
|
|
|
|
|
|
/* Check if DR7.Global_Enable is set */
|
|
|
|
return dr7 & (1 << 13);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int check_dr_read(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2011-06-01 20:34:25 +08:00
|
|
|
int dr = ctxt->modrm_reg;
|
2011-04-04 18:39:29 +08:00
|
|
|
u64 cr4;
|
|
|
|
|
|
|
|
if (dr > 7)
|
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
cr4 = ctxt->ops->get_cr(ctxt, 4);
|
2011-04-04 18:39:29 +08:00
|
|
|
if ((cr4 & X86_CR4_DE) && (dr == 4 || dr == 5))
|
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
2014-11-02 17:54:43 +08:00
|
|
|
if (check_dr7_gd(ctxt)) {
|
|
|
|
ulong dr6;
|
|
|
|
|
|
|
|
ctxt->ops->get_dr(ctxt, 6, &dr6);
|
2019-06-06 06:54:47 +08:00
|
|
|
dr6 &= ~DR_TRAP_BITS;
|
2021-02-02 17:04:31 +08:00
|
|
|
dr6 |= DR6_BD | DR6_ACTIVE_LOW;
|
2014-11-02 17:54:43 +08:00
|
|
|
ctxt->ops->set_dr(ctxt, 6, dr6);
|
2011-04-04 18:39:29 +08:00
|
|
|
return emulate_db(ctxt);
|
2014-11-02 17:54:43 +08:00
|
|
|
}
|
2011-04-04 18:39:29 +08:00
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int check_dr_write(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2011-06-01 20:34:25 +08:00
|
|
|
u64 new_val = ctxt->src.val64;
|
|
|
|
int dr = ctxt->modrm_reg;
|
2011-04-04 18:39:29 +08:00
|
|
|
|
|
|
|
if ((dr == 6 || dr == 7) && (new_val & 0xffffffff00000000ULL))
|
|
|
|
return emulate_gp(ctxt, 0);
|
|
|
|
|
|
|
|
return check_dr_read(ctxt);
|
|
|
|
}
|
|
|
|
|
2011-04-04 18:39:31 +08:00
|
|
|
static int check_svme(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2017-05-19 01:37:32 +08:00
|
|
|
u64 efer = 0;
|
2011-04-04 18:39:31 +08:00
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
|
2011-04-04 18:39:31 +08:00
|
|
|
|
|
|
|
if (!(efer & EFER_SVME))
|
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int check_svme_pa(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2012-08-28 04:46:17 +08:00
|
|
|
u64 rax = reg_read(ctxt, VCPU_REGS_RAX);
|
2011-04-04 18:39:31 +08:00
|
|
|
|
|
|
|
/* Valid physical address? */
|
2011-04-22 00:09:22 +08:00
|
|
|
if (rax & 0xffff000000000000ULL)
|
2011-04-04 18:39:31 +08:00
|
|
|
return emulate_gp(ctxt, 0);
|
|
|
|
|
|
|
|
return check_svme(ctxt);
|
|
|
|
}
|
|
|
|
|
2011-04-04 18:39:32 +08:00
|
|
|
static int check_rdtsc(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2011-04-20 18:37:53 +08:00
|
|
|
u64 cr4 = ctxt->ops->get_cr(ctxt, 4);
|
2011-04-04 18:39:32 +08:00
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
if (cr4 & X86_CR4_TSD && ctxt->ops->cpl(ctxt))
|
2011-04-04 18:39:32 +08:00
|
|
|
return emulate_ud(ctxt);
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2011-04-04 18:39:33 +08:00
|
|
|
static int check_rdpmc(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2011-04-20 18:37:53 +08:00
|
|
|
u64 cr4 = ctxt->ops->get_cr(ctxt, 4);
|
2012-08-28 04:46:17 +08:00
|
|
|
u64 rcx = reg_read(ctxt, VCPU_REGS_RCX);
|
2011-04-04 18:39:33 +08:00
|
|
|
|
2018-03-12 19:12:53 +08:00
|
|
|
/*
|
|
|
|
* VMware allows access to these Pseudo-PMCs even when read via RDPMC
|
|
|
|
* in Ring3 when CR4.PCE=0.
|
|
|
|
*/
|
|
|
|
if (enable_vmware_backdoor && is_vmware_backdoor_pmc(rcx))
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
|
2011-04-20 18:37:53 +08:00
|
|
|
if ((!(cr4 & X86_CR4_PCE) && ctxt->ops->cpl(ctxt)) ||
|
2014-06-02 23:34:09 +08:00
|
|
|
ctxt->ops->check_pmc(ctxt, rcx))
|
2011-04-04 18:39:33 +08:00
|
|
|
return emulate_gp(ctxt, 0);
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2011-04-04 18:39:35 +08:00
|
|
|
static int check_perm_in(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->dst.bytes = min(ctxt->dst.bytes, 4u);
|
|
|
|
if (!emulator_io_permited(ctxt, ctxt->src.val, ctxt->dst.bytes))
|
2011-04-04 18:39:35 +08:00
|
|
|
return emulate_gp(ctxt, 0);
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int check_perm_out(struct x86_emulate_ctxt *ctxt)
|
|
|
|
{
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->src.bytes = min(ctxt->src.bytes, 4u);
|
|
|
|
if (!emulator_io_permited(ctxt, ctxt->dst.val, ctxt->src.bytes))
|
2011-04-04 18:39:35 +08:00
|
|
|
return emulate_gp(ctxt, 0);
|
|
|
|
|
|
|
|
return X86EMUL_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2010-07-29 20:11:53 +08:00
|
|
|
#define D(_y) { .flags = (_y) }
|
2014-03-27 18:58:02 +08:00
|
|
|
#define DI(_y, _i) { .flags = (_y)|Intercept, .intercept = x86_intercept_##_i }
|
|
|
|
#define DIP(_y, _i, _p) { .flags = (_y)|Intercept|CheckPerm, \
|
|
|
|
.intercept = x86_intercept_##_i, .check_perm = (_p) }
|
2013-04-11 16:59:55 +08:00
|
|
|
#define N D(NotImpl)
|
2011-04-04 18:39:31 +08:00
|
|
|
#define EXT(_f, _e) { .flags = ((_f) | RMExt), .u.group = (_e) }
|
2012-04-30 16:46:31 +08:00
|
|
|
#define G(_f, _g) { .flags = ((_f) | Group | ModRM), .u.group = (_g) }
|
|
|
|
#define GD(_f, _g) { .flags = ((_f) | GroupDual | ModRM), .u.gdual = (_g) }
|
2014-11-26 21:47:18 +08:00
|
|
|
#define ID(_f, _i) { .flags = ((_f) | InstrDual | ModRM), .u.idual = (_i) }
|
2015-01-26 15:32:24 +08:00
|
|
|
#define MD(_f, _m) { .flags = ((_f) | ModeDual), .u.mdual = (_m) }
|
2012-12-20 22:57:43 +08:00
|
|
|
#define E(_f, _e) { .flags = ((_f) | Escape | ModRM), .u.esc = (_e) }
|
2010-07-29 20:11:53 +08:00
|
|
|
#define I(_f, _e) { .flags = (_f), .u.execute = (_e) }
|
2013-01-04 22:18:48 +08:00
|
|
|
#define F(_f, _e) { .flags = (_f) | Fastop, .u.fastop = (_e) }
|
2011-04-04 18:39:22 +08:00
|
|
|
#define II(_f, _e, _i) \
|
2014-03-27 18:58:02 +08:00
|
|
|
{ .flags = (_f)|Intercept, .u.execute = (_e), .intercept = x86_intercept_##_i }
|
2011-04-04 18:39:25 +08:00
|
|
|
#define IIP(_f, _e, _i, _p) \
|
2014-03-27 18:58:02 +08:00
|
|
|
{ .flags = (_f)|Intercept|CheckPerm, .u.execute = (_e), \
|
|
|
|
.intercept = x86_intercept_##_i, .check_perm = (_p) }
|
2010-01-21 00:09:23 +08:00
|
|
|
#define GP(_f, _g) { .flags = ((_f) | Prefix), .u.gprefix = (_g) }
|
2010-07-29 20:11:53 +08:00
|
|
|
|
2010-08-26 16:56:06 +08:00
|
|
|
#define D2bv(_f) D((_f) | ByteOp), D(_f)
|
2011-04-04 18:39:35 +08:00
|
|
|
#define D2bvIP(_f, _i, _p) DIP((_f) | ByteOp, _i, _p), DIP(_f, _i, _p)
|
2010-08-26 16:56:06 +08:00
|
|
|
#define I2bv(_f, _e) I((_f) | ByteOp, _e), I(_f, _e)
|
2013-01-04 22:18:53 +08:00
|
|
|
#define F2bv(_f, _e) F((_f) | ByteOp, _e), F(_f, _e)
|
2011-11-22 14:16:54 +08:00
|
|
|
#define I2bvIP(_f, _e, _i, _p) \
|
|
|
|
IIP((_f) | ByteOp, _e, _i, _p), IIP(_f, _e, _i, _p)
|
2010-08-26 16:56:06 +08:00
|
|
|
|
2013-01-04 22:18:54 +08:00
|
|
|
#define F6ALU(_f, _e) F2bv((_f) | DstMem | SrcReg | ModRM, _e), \
|
|
|
|
F2bv(((_f) | DstReg | SrcMem | ModRM) & ~Lock, _e), \
|
|
|
|
F2bv(((_f) & ~Lock) | DstAcc | SrcImm, _e)
|
2010-08-26 23:34:55 +08:00
|
|
|
|
2014-08-29 16:26:55 +08:00
|
|
|
static const struct opcode group7_rm0[] = {
|
|
|
|
N,
|
2015-03-10 03:27:43 +08:00
|
|
|
I(SrcNone | Priv | EmulateOnUD, em_hypercall),
|
2014-08-29 16:26:55 +08:00
|
|
|
N, N, N, N, N, N,
|
|
|
|
};
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group7_rm1[] = {
|
2012-04-30 16:46:31 +08:00
|
|
|
DI(SrcNone | Priv, monitor),
|
|
|
|
DI(SrcNone | Priv, mwait),
|
2011-04-04 18:39:32 +08:00
|
|
|
N, N, N, N, N, N,
|
|
|
|
};
|
|
|
|
|
2019-08-13 21:53:32 +08:00
|
|
|
static const struct opcode group7_rm2[] = {
|
|
|
|
N,
|
|
|
|
II(ImplicitOps | Priv, em_xsetbv, xsetbv),
|
|
|
|
N, N, N, N, N, N,
|
|
|
|
};
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group7_rm3[] = {
|
2012-04-30 16:46:31 +08:00
|
|
|
DIP(SrcNone | Prot | Priv, vmrun, check_svme_pa),
|
2015-03-10 03:27:43 +08:00
|
|
|
II(SrcNone | Prot | EmulateOnUD, em_hypercall, vmmcall),
|
2012-04-30 16:46:31 +08:00
|
|
|
DIP(SrcNone | Prot | Priv, vmload, check_svme_pa),
|
|
|
|
DIP(SrcNone | Prot | Priv, vmsave, check_svme_pa),
|
|
|
|
DIP(SrcNone | Prot | Priv, stgi, check_svme),
|
|
|
|
DIP(SrcNone | Prot | Priv, clgi, check_svme),
|
|
|
|
DIP(SrcNone | Prot | Priv, skinit, check_svme),
|
|
|
|
DIP(SrcNone | Prot | Priv, invlpga, check_svme),
|
2011-04-04 18:39:31 +08:00
|
|
|
};
|
2010-08-26 23:34:55 +08:00
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group7_rm7[] = {
|
2011-04-04 18:39:32 +08:00
|
|
|
N,
|
2012-04-30 16:46:31 +08:00
|
|
|
DIP(SrcNone, rdtscp, check_rdtsc),
|
2011-04-04 18:39:32 +08:00
|
|
|
N, N, N, N, N, N,
|
|
|
|
};
|
KVM: x86 emulator: Use opcode::execute for Group 1, CMPS and SCAS
The following instructions are changed to use opcode::execute.
Group 1 (80-83)
ADD (00-05), OR (08-0D), ADC (10-15), SBB (18-1D), AND (20-25),
SUB (28-2D), XOR (30-35), CMP (38-3D)
CMPS (A6-A7), SCAS (AE-AF)
The last two do the same as CMP in the emulator, so em_cmp() is used.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Avi Kivity <avi@redhat.com>
2011-04-23 17:48:02 +08:00
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group1[] = {
|
2013-01-04 22:18:54 +08:00
|
|
|
F(Lock, em_add),
|
|
|
|
F(Lock | PageTable, em_or),
|
|
|
|
F(Lock, em_adc),
|
|
|
|
F(Lock, em_sbb),
|
|
|
|
F(Lock | PageTable, em_and),
|
|
|
|
F(Lock, em_sub),
|
|
|
|
F(Lock, em_xor),
|
|
|
|
F(NoWrite, em_cmp),
|
2010-07-29 20:11:53 +08:00
|
|
|
};
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group1A[] = {
|
2016-12-15 03:59:23 +08:00
|
|
|
I(DstMem | SrcNone | Mov | Stack | IncSP | TwoMemOp, em_pop), N, N, N, N, N, N, N,
|
2010-07-29 20:11:53 +08:00
|
|
|
};
|
|
|
|
|
2013-01-20 01:51:51 +08:00
|
|
|
static const struct opcode group2[] = {
|
|
|
|
F(DstMem | ModRM, em_rol),
|
|
|
|
F(DstMem | ModRM, em_ror),
|
|
|
|
F(DstMem | ModRM, em_rcl),
|
|
|
|
F(DstMem | ModRM, em_rcr),
|
|
|
|
F(DstMem | ModRM, em_shl),
|
|
|
|
F(DstMem | ModRM, em_shr),
|
|
|
|
F(DstMem | ModRM, em_shl),
|
|
|
|
F(DstMem | ModRM, em_sar),
|
|
|
|
};
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group3[] = {
|
2013-01-04 22:18:54 +08:00
|
|
|
F(DstMem | SrcImm | NoWrite, em_test),
|
|
|
|
F(DstMem | SrcImm | NoWrite, em_test),
|
2013-01-04 22:18:52 +08:00
|
|
|
F(DstMem | SrcNone | Lock, em_not),
|
|
|
|
F(DstMem | SrcNone | Lock, em_neg),
|
2013-02-09 17:31:48 +08:00
|
|
|
F(DstXacc | Src2Mem, em_mul_ex),
|
|
|
|
F(DstXacc | Src2Mem, em_imul_ex),
|
2013-02-09 17:31:49 +08:00
|
|
|
F(DstXacc | Src2Mem, em_div_ex),
|
|
|
|
F(DstXacc | Src2Mem, em_idiv_ex),
|
2010-07-29 20:11:53 +08:00
|
|
|
};
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group4[] = {
|
2013-01-20 01:51:53 +08:00
|
|
|
F(ByteOp | DstMem | SrcNone | Lock, em_inc),
|
|
|
|
F(ByteOp | DstMem | SrcNone | Lock, em_dec),
|
2010-07-29 20:11:53 +08:00
|
|
|
N, N, N, N, N, N,
|
|
|
|
};
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group5[] = {
|
2013-01-20 01:51:53 +08:00
|
|
|
F(DstMem | SrcNone | Lock, em_inc),
|
|
|
|
F(DstMem | SrcNone | Lock, em_dec),
|
2014-10-24 16:35:09 +08:00
|
|
|
I(SrcMem | NearBranch, em_call_near_abs),
|
2015-05-04 01:22:57 +08:00
|
|
|
I(SrcMemFAddr | ImplicitOps, em_call_far),
|
2014-10-24 16:35:09 +08:00
|
|
|
I(SrcMem | NearBranch, em_jmp_abs),
|
2014-09-19 03:39:41 +08:00
|
|
|
I(SrcMemFAddr | ImplicitOps, em_jmp_far),
|
2016-12-15 03:59:23 +08:00
|
|
|
I(SrcMem | Stack | TwoMemOp, em_push), D(Undefined),
|
2010-07-29 20:11:53 +08:00
|
|
|
};
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group6[] = {
|
2016-07-12 16:35:51 +08:00
|
|
|
II(Prot | DstMem, em_sldt, sldt),
|
|
|
|
II(Prot | DstMem, em_str, str),
|
2012-06-13 17:28:33 +08:00
|
|
|
II(Prot | Priv | SrcMem16, em_lldt, lldt),
|
2012-06-13 21:33:29 +08:00
|
|
|
II(Prot | Priv | SrcMem16, em_ltr, ltr),
|
2011-04-04 18:39:30 +08:00
|
|
|
N, N, N, N,
|
|
|
|
};
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct group_dual group7 = { {
|
2014-06-02 23:34:06 +08:00
|
|
|
II(Mov | DstMem, em_sgdt, sgdt),
|
|
|
|
II(Mov | DstMem, em_sidt, sidt),
|
2012-04-30 16:46:31 +08:00
|
|
|
II(SrcMem | Priv, em_lgdt, lgdt),
|
|
|
|
II(SrcMem | Priv, em_lidt, lidt),
|
|
|
|
II(SrcNone | DstMem | Mov, em_smsw, smsw), N,
|
|
|
|
II(SrcMem16 | Mov | Priv, em_lmsw, lmsw),
|
|
|
|
II(SrcMem | ByteOp | Priv | NoAccess, em_invlpg, invlpg),
|
2010-07-29 20:11:53 +08:00
|
|
|
}, {
|
2014-08-29 16:26:55 +08:00
|
|
|
EXT(0, group7_rm0),
|
2011-04-21 17:21:50 +08:00
|
|
|
EXT(0, group7_rm1),
|
2019-08-13 21:53:32 +08:00
|
|
|
EXT(0, group7_rm2),
|
|
|
|
EXT(0, group7_rm3),
|
2012-04-30 16:46:31 +08:00
|
|
|
II(SrcNone | DstMem | Mov, em_smsw, smsw), N,
|
|
|
|
II(SrcMem16 | Mov | Priv, em_lmsw, lmsw),
|
|
|
|
EXT(0, group7_rm7),
|
2010-07-29 20:11:53 +08:00
|
|
|
} };
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct opcode group8[] = {
|
2010-07-29 20:11:53 +08:00
|
|
|
N, N, N, N,
|
2013-01-20 01:51:54 +08:00
|
|
|
F(DstMem | SrcImmByte | NoWrite, em_bt),
|
|
|
|
F(DstMem | SrcImmByte | Lock | PageTable, em_bts),
|
|
|
|
F(DstMem | SrcImmByte | Lock, em_btr),
|
|
|
|
F(DstMem | SrcImmByte | Lock | PageTable, em_btc),
|
2010-07-29 20:11:53 +08:00
|
|
|
};
|
|
|
|
|
2016-07-12 17:04:26 +08:00
|
|
|
/*
|
|
|
|
* The "memory" destination is actually always a register, since we come
|
|
|
|
* from the register case of group9.
|
|
|
|
*/
|
|
|
|
static const struct gprefix pfx_0f_c7_7 = {
|
2021-05-05 01:17:23 +08:00
|
|
|
N, N, N, II(DstMem | ModRM | Op3264 | EmulateOnUD, em_rdpid, rdpid),
|
2016-07-12 17:04:26 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
|
2012-08-30 07:30:15 +08:00
|
|
|
static const struct group_dual group9 = { {
|
2012-04-30 16:46:31 +08:00
|
|
|
N, I(DstMem64 | Lock | PageTable, em_cmpxchg8b), N, N, N, N, N, N,
|
2010-07-29 20:11:53 +08:00
|
|
|
}, {
|
2016-07-12 17:04:26 +08:00
|
|
|
N, N, N, N, N, N, N,
|
|
|
|
GP(0, &pfx_0f_c7_7),
|
2010-07-29 20:11:53 +08:00
|
|
|
} };

static const struct opcode group11[] = {
	I(DstMem | SrcImm | Mov | PageTable, em_mov),
	X7(D(Undefined)),
};

static const struct gprefix pfx_0f_ae_7 = {
	I(SrcMem | ByteOp, em_clflush), I(SrcMem | ByteOp, em_clflushopt), N, N,
};

static const struct group_dual group15 = { {
	I(ModRM | Aligned16, em_fxsave),
	I(ModRM | Aligned16, em_fxrstor),
	N, N, N, N, N, GP(0, &pfx_0f_ae_7),
}, {
	N, N, N, N, N, N, N, N,
} };

static const struct gprefix pfx_0f_6f_0f_7f = {
	I(Mmx, em_mov), I(Sse | Aligned, em_mov), N, I(Sse | Unaligned, em_mov),
};

static const struct instr_dual instr_dual_0f_2b = {
	I(0, em_mov), N
};

static const struct gprefix pfx_0f_2b = {
	ID(0, &instr_dual_0f_2b), ID(0, &instr_dual_0f_2b), N, N,
};

static const struct gprefix pfx_0f_10_0f_11 = {
	I(Unaligned, em_mov), I(Unaligned, em_mov), N, N,
};

static const struct gprefix pfx_0f_28_0f_29 = {
	I(Aligned, em_mov), I(Aligned, em_mov), N, N,
};

static const struct gprefix pfx_0f_e7 = {
	N, I(Sse, em_mov), N, N,
};

static const struct escape escape_d9 = { {
	N, N, N, N, N, N, N, I(DstMem16 | Mov, em_fnstcw),
}, {
	/* 0xC0 - 0xC7 */
	N, N, N, N, N, N, N, N,
	/* 0xC8 - 0xCF */
	N, N, N, N, N, N, N, N,
	/* 0xD0 - 0xD7 */
	N, N, N, N, N, N, N, N,
	/* 0xD8 - 0xDF */
	N, N, N, N, N, N, N, N,
	/* 0xE0 - 0xE7 */
	N, N, N, N, N, N, N, N,
	/* 0xE8 - 0xEF */
	N, N, N, N, N, N, N, N,
	/* 0xF0 - 0xF7 */
	N, N, N, N, N, N, N, N,
	/* 0xF8 - 0xFF */
	N, N, N, N, N, N, N, N,
} };

static const struct escape escape_db = { {
	N, N, N, N, N, N, N, N,
}, {
	/* 0xC0 - 0xC7 */
	N, N, N, N, N, N, N, N,
	/* 0xC8 - 0xCF */
	N, N, N, N, N, N, N, N,
	/* 0xD0 - 0xD7 */
	N, N, N, N, N, N, N, N,
	/* 0xD8 - 0xDF */
	N, N, N, N, N, N, N, N,
	/* 0xE0 - 0xE7 */
	N, N, N, I(ImplicitOps, em_fninit), N, N, N, N,
	/* 0xE8 - 0xEF */
	N, N, N, N, N, N, N, N,
	/* 0xF0 - 0xF7 */
	N, N, N, N, N, N, N, N,
	/* 0xF8 - 0xFF */
	N, N, N, N, N, N, N, N,
} };

static const struct escape escape_dd = { {
	N, N, N, N, N, N, N, I(DstMem16 | Mov, em_fnstsw),
}, {
	/* 0xC0 - 0xC7 */
	N, N, N, N, N, N, N, N,
	/* 0xC8 - 0xCF */
	N, N, N, N, N, N, N, N,
	/* 0xD0 - 0xD7 */
	N, N, N, N, N, N, N, N,
	/* 0xD8 - 0xDF */
	N, N, N, N, N, N, N, N,
	/* 0xE0 - 0xE7 */
	N, N, N, N, N, N, N, N,
	/* 0xE8 - 0xEF */
	N, N, N, N, N, N, N, N,
	/* 0xF0 - 0xF7 */
	N, N, N, N, N, N, N, N,
	/* 0xF8 - 0xFF */
	N, N, N, N, N, N, N, N,
} };

static const struct instr_dual instr_dual_0f_c3 = {
	I(DstMem | SrcReg | ModRM | No16 | Mov, em_mov), N
};

static const struct mode_dual mode_dual_63 = {
	N, I(DstReg | SrcMem32 | ModRM | Mov, em_movsxd)
};

static const struct opcode opcode_table[256] = {
	/* 0x00 - 0x07 */
	F6ALU(Lock, em_add),
	I(ImplicitOps | Stack | No64 | Src2ES, em_push_sreg),
	I(ImplicitOps | Stack | No64 | Src2ES, em_pop_sreg),
	/* 0x08 - 0x0F */
	F6ALU(Lock | PageTable, em_or),
	I(ImplicitOps | Stack | No64 | Src2CS, em_push_sreg),
	N,
	/* 0x10 - 0x17 */
	F6ALU(Lock, em_adc),
	I(ImplicitOps | Stack | No64 | Src2SS, em_push_sreg),
	I(ImplicitOps | Stack | No64 | Src2SS, em_pop_sreg),
	/* 0x18 - 0x1F */
	F6ALU(Lock, em_sbb),
	I(ImplicitOps | Stack | No64 | Src2DS, em_push_sreg),
	I(ImplicitOps | Stack | No64 | Src2DS, em_pop_sreg),
	/* 0x20 - 0x27 */
	F6ALU(Lock | PageTable, em_and), N, N,
	/* 0x28 - 0x2F */
	F6ALU(Lock, em_sub), N, I(ByteOp | DstAcc | No64, em_das),
	/* 0x30 - 0x37 */
	F6ALU(Lock, em_xor), N, N,
	/* 0x38 - 0x3F */
	F6ALU(NoWrite, em_cmp), N, N,
	/* 0x40 - 0x4F */
	X8(F(DstReg, em_inc)), X8(F(DstReg, em_dec)),
	/* 0x50 - 0x57 */
	X8(I(SrcReg | Stack, em_push)),
	/* 0x58 - 0x5F */
	X8(I(DstReg | Stack, em_pop)),
	/* 0x60 - 0x67 */
	I(ImplicitOps | Stack | No64, em_pusha),
	I(ImplicitOps | Stack | No64, em_popa),
	N, MD(ModRM, &mode_dual_63),
	N, N, N, N,
	/* 0x68 - 0x6F */
	I(SrcImm | Mov | Stack, em_push),
	I(DstReg | SrcMem | ModRM | Src2Imm, em_imul_3op),
	I(SrcImmByte | Mov | Stack, em_push),
	I(DstReg | SrcMem | ModRM | Src2ImmByte, em_imul_3op),
	I2bvIP(DstDI | SrcDX | Mov | String | Unaligned, em_in, ins, check_perm_in), /* insb, insw/insd */
	I2bvIP(SrcSI | DstDX | String, em_out, outs, check_perm_out), /* outsb, outsw/outsd */
	/* 0x70 - 0x7F */
	X16(D(SrcImmByte | NearBranch)),
	/* 0x80 - 0x87 */
	G(ByteOp | DstMem | SrcImm, group1),
	G(DstMem | SrcImm, group1),
	G(ByteOp | DstMem | SrcImm | No64, group1),
	G(DstMem | SrcImmByte, group1),
	F2bv(DstMem | SrcReg | ModRM | NoWrite, em_test),
	I2bv(DstMem | SrcReg | ModRM | Lock | PageTable, em_xchg),
	/* 0x88 - 0x8F */
	I2bv(DstMem | SrcReg | ModRM | Mov | PageTable, em_mov),
	I2bv(DstReg | SrcMem | ModRM | Mov, em_mov),
	I(DstMem | SrcNone | ModRM | Mov | PageTable, em_mov_rm_sreg),
	D(ModRM | SrcMem | NoAccess | DstReg),
	I(ImplicitOps | SrcMem16 | ModRM, em_mov_sreg_rm),
	G(0, group1A),
	/* 0x90 - 0x97 */
	DI(SrcAcc | DstReg, pause), X7(D(SrcAcc | DstReg)),
	/* 0x98 - 0x9F */
	D(DstAcc | SrcNone), I(ImplicitOps | SrcAcc, em_cwd),
	I(SrcImmFAddr | No64, em_call_far), N,
	II(ImplicitOps | Stack, em_pushf, pushf),
	II(ImplicitOps | Stack, em_popf, popf),
	I(ImplicitOps, em_sahf), I(ImplicitOps, em_lahf),
	/* 0xA0 - 0xA7 */
	I2bv(DstAcc | SrcMem | Mov | MemAbs, em_mov),
	I2bv(DstMem | SrcAcc | Mov | MemAbs | PageTable, em_mov),
	I2bv(SrcSI | DstDI | Mov | String | TwoMemOp, em_mov),
	F2bv(SrcSI | DstDI | String | NoWrite | TwoMemOp, em_cmp_r),
	/* 0xA8 - 0xAF */
	F2bv(DstAcc | SrcImm | NoWrite, em_test),
	I2bv(SrcAcc | DstDI | Mov | String, em_mov),
	I2bv(SrcSI | DstAcc | Mov | String, em_mov),
	F2bv(SrcAcc | DstDI | String | NoWrite, em_cmp_r),
	/* 0xB0 - 0xB7 */
	X8(I(ByteOp | DstReg | SrcImm | Mov, em_mov)),
	/* 0xB8 - 0xBF */
	X8(I(DstReg | SrcImm64 | Mov, em_mov)),
	/* 0xC0 - 0xC7 */
	G(ByteOp | Src2ImmByte, group2), G(Src2ImmByte, group2),
	I(ImplicitOps | NearBranch | SrcImmU16, em_ret_near_imm),
	I(ImplicitOps | NearBranch, em_ret),
	I(DstReg | SrcMemFAddr | ModRM | No64 | Src2ES, em_lseg),
	I(DstReg | SrcMemFAddr | ModRM | No64 | Src2DS, em_lseg),
	G(ByteOp, group11), G(0, group11),
	/* 0xC8 - 0xCF */
	I(Stack | SrcImmU16 | Src2ImmByte, em_enter), I(Stack, em_leave),
	I(ImplicitOps | SrcImmU16, em_ret_far_imm),
	I(ImplicitOps, em_ret_far),
	D(ImplicitOps), DI(SrcImmByte, intn),
	D(ImplicitOps | No64), II(ImplicitOps, em_iret, iret),
	/* 0xD0 - 0xD7 */
	G(Src2One | ByteOp, group2), G(Src2One, group2),
	G(Src2CL | ByteOp, group2), G(Src2CL, group2),
	I(DstAcc | SrcImmUByte | No64, em_aam),
	I(DstAcc | SrcImmUByte | No64, em_aad),
	F(DstAcc | ByteOp | No64, em_salc),
	I(DstAcc | SrcXLat | ByteOp, em_mov),
	/* 0xD8 - 0xDF */
	N, E(0, &escape_d9), N, E(0, &escape_db), N, E(0, &escape_dd), N, N,
	/* 0xE0 - 0xE7 */
	X3(I(SrcImmByte | NearBranch, em_loop)),
	I(SrcImmByte | NearBranch, em_jcxz),
	I2bvIP(SrcImmUByte | DstAcc, em_in, in, check_perm_in),
	I2bvIP(SrcAcc | DstImmUByte, em_out, out, check_perm_out),
	/* 0xE8 - 0xEF */
	I(SrcImm | NearBranch, em_call), D(SrcImm | ImplicitOps | NearBranch),
	I(SrcImmFAddr | No64, em_jmp_far),
	D(SrcImmByte | ImplicitOps | NearBranch),
	I2bvIP(SrcDX | DstAcc, em_in, in, check_perm_in),
	I2bvIP(SrcAcc | DstDX, em_out, out, check_perm_out),
	/* 0xF0 - 0xF7 */
	N, DI(ImplicitOps, icebp), N, N,
	DI(ImplicitOps | Priv, hlt), D(ImplicitOps),
	G(ByteOp, group3), G(0, group3),
	/* 0xF8 - 0xFF */
	D(ImplicitOps), D(ImplicitOps),
	I(ImplicitOps, em_cli), I(ImplicitOps, em_sti),
	D(ImplicitOps), D(ImplicitOps), G(0, group4), G(0, group5),
};

static const struct opcode twobyte_table[256] = {
	/* 0x00 - 0x0F */
	G(0, group6), GD(0, &group7), N, N,
	N, I(ImplicitOps | EmulateOnUD, em_syscall),
	II(ImplicitOps | Priv, em_clts, clts), N,
	DI(ImplicitOps | Priv, invd), DI(ImplicitOps | Priv, wbinvd), N, N,
	N, D(ImplicitOps | ModRM | SrcMem | NoAccess), N, N,
	/* 0x10 - 0x1F */
	GP(ModRM | DstReg | SrcMem | Mov | Sse, &pfx_0f_10_0f_11),
	GP(ModRM | DstMem | SrcReg | Mov | Sse, &pfx_0f_10_0f_11),
	N, N, N, N, N, N,
	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* 4 * prefetch + 4 * reserved NOP */
	D(ImplicitOps | ModRM | SrcMem | NoAccess), N, N,
	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* 8 * reserved NOP */
	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* 8 * reserved NOP */
	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* 8 * reserved NOP */
	D(ImplicitOps | ModRM | SrcMem | NoAccess), /* NOP + 7 * reserved NOP */
	/* 0x20 - 0x2F */
	DIP(ModRM | DstMem | Priv | Op3264 | NoMod, cr_read, check_cr_access),
	DIP(ModRM | DstMem | Priv | Op3264 | NoMod, dr_read, check_dr_read),
	IIP(ModRM | SrcMem | Priv | Op3264 | NoMod, em_cr_write, cr_write,
	    check_cr_access),
	IIP(ModRM | SrcMem | Priv | Op3264 | NoMod, em_dr_write, dr_write,
	    check_dr_write),
	N, N, N, N,
	GP(ModRM | DstReg | SrcMem | Mov | Sse, &pfx_0f_28_0f_29),
	GP(ModRM | DstMem | SrcReg | Mov | Sse, &pfx_0f_28_0f_29),
	N, GP(ModRM | DstMem | SrcReg | Mov | Sse, &pfx_0f_2b),
	N, N, N, N,
	/* 0x30 - 0x3F */
	II(ImplicitOps | Priv, em_wrmsr, wrmsr),
	IIP(ImplicitOps, em_rdtsc, rdtsc, check_rdtsc),
	II(ImplicitOps | Priv, em_rdmsr, rdmsr),
	IIP(ImplicitOps, em_rdpmc, rdpmc, check_rdpmc),
	I(ImplicitOps | EmulateOnUD, em_sysenter),
	I(ImplicitOps | Priv | EmulateOnUD, em_sysexit),
	N, N,
	N, N, N, N, N, N, N, N,
	/* 0x40 - 0x4F */
	X16(D(DstReg | SrcMem | ModRM)),
	/* 0x50 - 0x5F */
	N, N, N, N, N, N, N, N, N, N, N, N, N, N, N, N,
	/* 0x60 - 0x6F */
	N, N, N, N,
	N, N, N, N,
	N, N, N, N,
	N, N, N, GP(SrcMem | DstReg | ModRM | Mov, &pfx_0f_6f_0f_7f),
	/* 0x70 - 0x7F */
	N, N, N, N,
	N, N, N, N,
	N, N, N, N,
	N, N, N, GP(SrcReg | DstMem | ModRM | Mov, &pfx_0f_6f_0f_7f),
	/* 0x80 - 0x8F */
	X16(D(SrcImm | NearBranch)),
	/* 0x90 - 0x9F */
	X16(D(ByteOp | DstMem | SrcNone | ModRM | Mov)),
	/* 0xA0 - 0xA7 */
	I(Stack | Src2FS, em_push_sreg), I(Stack | Src2FS, em_pop_sreg),
	II(ImplicitOps, em_cpuid, cpuid),
	F(DstMem | SrcReg | ModRM | BitOp | NoWrite, em_bt),
	F(DstMem | SrcReg | Src2ImmByte | ModRM, em_shld),
	F(DstMem | SrcReg | Src2CL | ModRM, em_shld), N, N,
	/* 0xA8 - 0xAF */
	I(Stack | Src2GS, em_push_sreg), I(Stack | Src2GS, em_pop_sreg),
	II(EmulateOnUD | ImplicitOps, em_rsm, rsm),
	F(DstMem | SrcReg | ModRM | BitOp | Lock | PageTable, em_bts),
	F(DstMem | SrcReg | Src2ImmByte | ModRM, em_shrd),
	F(DstMem | SrcReg | Src2CL | ModRM, em_shrd),
	GD(0, &group15), F(DstReg | SrcMem | ModRM, em_imul),
	/* 0xB0 - 0xB7 */
	I2bv(DstMem | SrcReg | ModRM | Lock | PageTable | SrcWrite, em_cmpxchg),
	I(DstReg | SrcMemFAddr | ModRM | Src2SS, em_lseg),
	F(DstMem | SrcReg | ModRM | BitOp | Lock, em_btr),
	I(DstReg | SrcMemFAddr | ModRM | Src2FS, em_lseg),
	I(DstReg | SrcMemFAddr | ModRM | Src2GS, em_lseg),
	D(DstReg | SrcMem8 | ModRM | Mov), D(DstReg | SrcMem16 | ModRM | Mov),
	/* 0xB8 - 0xBF */
	N, N,
	G(BitOp, group8),
	F(DstMem | SrcReg | ModRM | BitOp | Lock | PageTable, em_btc),
	I(DstReg | SrcMem | ModRM, em_bsf_c),
	I(DstReg | SrcMem | ModRM, em_bsr_c),
	D(DstReg | SrcMem8 | ModRM | Mov), D(DstReg | SrcMem16 | ModRM | Mov),
	/* 0xC0 - 0xC7 */
	F2bv(DstMem | SrcReg | ModRM | SrcWrite | Lock, em_xadd),
	N, ID(0, &instr_dual_0f_c3),
	N, N, N, GD(0, &group9),
	/* 0xC8 - 0xCF */
	X8(I(DstReg, em_bswap)),
	/* 0xD0 - 0xDF */
	N, N, N, N, N, N, N, N, N, N, N, N, N, N, N, N,
	/* 0xE0 - 0xEF */
	N, N, N, N, N, N, N, GP(SrcReg | DstMem | ModRM | Mov, &pfx_0f_e7),
	N, N, N, N, N, N, N, N,
	/* 0xF0 - 0xFF */
	N, N, N, N, N, N, N, N, N, N, N, N, N, N, N, N
};

static const struct instr_dual instr_dual_0f_38_f0 = {
	I(DstReg | SrcMem | Mov, em_movbe), N
};

static const struct instr_dual instr_dual_0f_38_f1 = {
	I(DstMem | SrcReg | Mov, em_movbe), N
};

static const struct gprefix three_byte_0f_38_f0 = {
	ID(0, &instr_dual_0f_38_f0), N, N, N
};

static const struct gprefix three_byte_0f_38_f1 = {
	ID(0, &instr_dual_0f_38_f1), N, N, N
};

/*
 * Insns below are selected by the prefix, which is indexed by the third
 * opcode byte.
 */
static const struct opcode opcode_map_0f_38[256] = {
	/* 0x00 - 0x7f */
	X16(N), X16(N), X16(N), X16(N), X16(N), X16(N), X16(N), X16(N),
	/* 0x80 - 0xef */
	X16(N), X16(N), X16(N), X16(N), X16(N), X16(N), X16(N),
	/* 0xf0 - 0xf1 */
	GP(EmulateOnUD | ModRM, &three_byte_0f_38_f0),
	GP(EmulateOnUD | ModRM, &three_byte_0f_38_f1),
	/* 0xf2 - 0xff */
	N, N, X4(N), X8(N)
};

#undef D
#undef N
#undef G
#undef GD
#undef I
#undef GP
#undef EXT
#undef MD
#undef ID

#undef D2bv
#undef D2bvIP
#undef I2bv
#undef I2bvIP
#undef I6ALU

static unsigned imm_size(struct x86_emulate_ctxt *ctxt)
{
	unsigned size;

	size = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
	if (size == 8)
		size = 4;
	return size;
}

static int decode_imm(struct x86_emulate_ctxt *ctxt, struct operand *op,
		      unsigned size, bool sign_extension)
{
	int rc = X86EMUL_CONTINUE;

	op->type = OP_IMM;
	op->bytes = size;
	op->addr.mem.ea = ctxt->_eip;
	/* NB. Immediates are sign-extended as necessary. */
	switch (op->bytes) {
	case 1:
		op->val = insn_fetch(s8, ctxt);
		break;
	case 2:
		op->val = insn_fetch(s16, ctxt);
		break;
	case 4:
		op->val = insn_fetch(s32, ctxt);
		break;
	case 8:
		op->val = insn_fetch(s64, ctxt);
		break;
	}
	if (!sign_extension) {
		switch (op->bytes) {
		case 1:
			op->val &= 0xff;
			break;
		case 2:
			op->val &= 0xffff;
			break;
		case 4:
			op->val &= 0xffffffff;
			break;
		}
	}
done:
	return rc;
}

static int decode_operand(struct x86_emulate_ctxt *ctxt, struct operand *op,
			  unsigned d)
{
	int rc = X86EMUL_CONTINUE;

	switch (d) {
	case OpReg:
		decode_register_operand(ctxt, op);
		break;
	case OpImmUByte:
		rc = decode_imm(ctxt, op, 1, false);
		break;
	case OpMem:
		ctxt->memop.bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
	mem_common:
		*op = ctxt->memop;
		ctxt->memopp = op;
		if (ctxt->d & BitOp)
			fetch_bit_operand(ctxt);
		op->orig_val = op->val;
		break;
	case OpMem64:
		ctxt->memop.bytes = (ctxt->op_bytes == 8) ? 16 : 8;
		goto mem_common;
	case OpAcc:
		op->type = OP_REG;
		op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
		op->addr.reg = reg_rmw(ctxt, VCPU_REGS_RAX);
		fetch_register_operand(op);
		op->orig_val = op->val;
		break;
	case OpAccLo:
		op->type = OP_REG;
		op->bytes = (ctxt->d & ByteOp) ? 2 : ctxt->op_bytes;
		op->addr.reg = reg_rmw(ctxt, VCPU_REGS_RAX);
		fetch_register_operand(op);
		op->orig_val = op->val;
		break;
	case OpAccHi:
		if (ctxt->d & ByteOp) {
			op->type = OP_NONE;
			break;
		}
		op->type = OP_REG;
		op->bytes = ctxt->op_bytes;
		op->addr.reg = reg_rmw(ctxt, VCPU_REGS_RDX);
		fetch_register_operand(op);
		op->orig_val = op->val;
		break;
	case OpDI:
		op->type = OP_MEM;
		op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
		op->addr.mem.ea =
			register_address(ctxt, VCPU_REGS_RDI);
		op->addr.mem.seg = VCPU_SREG_ES;
		op->val = 0;
		op->count = 1;
		break;
	case OpDX:
		op->type = OP_REG;
		op->bytes = 2;
		op->addr.reg = reg_rmw(ctxt, VCPU_REGS_RDX);
		fetch_register_operand(op);
		break;
	case OpCL:
		op->type = OP_IMM;
		op->bytes = 1;
		op->val = reg_read(ctxt, VCPU_REGS_RCX) & 0xff;
		break;
	case OpImmByte:
		rc = decode_imm(ctxt, op, 1, true);
		break;
	case OpOne:
		op->type = OP_IMM;
		op->bytes = 1;
		op->val = 1;
		break;
	case OpImm:
		rc = decode_imm(ctxt, op, imm_size(ctxt), true);
		break;
	case OpImm64:
		rc = decode_imm(ctxt, op, ctxt->op_bytes, true);
		break;
	case OpMem8:
		ctxt->memop.bytes = 1;
		if (ctxt->memop.type == OP_REG) {
			ctxt->memop.addr.reg = decode_register(ctxt,
					ctxt->modrm_rm, true);
			fetch_register_operand(&ctxt->memop);
		}
		goto mem_common;
	case OpMem16:
		ctxt->memop.bytes = 2;
		goto mem_common;
	case OpMem32:
		ctxt->memop.bytes = 4;
		goto mem_common;
	case OpImmU16:
		rc = decode_imm(ctxt, op, 2, false);
		break;
	case OpImmU:
		rc = decode_imm(ctxt, op, imm_size(ctxt), false);
		break;
	case OpSI:
		op->type = OP_MEM;
		op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
		op->addr.mem.ea =
			register_address(ctxt, VCPU_REGS_RSI);
		op->addr.mem.seg = ctxt->seg_override;
		op->val = 0;
		op->count = 1;
		break;
	case OpXLat:
		op->type = OP_MEM;
		op->bytes = (ctxt->d & ByteOp) ? 1 : ctxt->op_bytes;
		op->addr.mem.ea =
			address_mask(ctxt,
				reg_read(ctxt, VCPU_REGS_RBX) +
				(reg_read(ctxt, VCPU_REGS_RAX) & 0xff));
		op->addr.mem.seg = ctxt->seg_override;
		op->val = 0;
		break;
	case OpImmFAddr:
		op->type = OP_IMM;
		op->addr.mem.ea = ctxt->_eip;
		op->bytes = ctxt->op_bytes + 2;
|
|
|
|
insn_fetch_arr(op->valptr, op->bytes, ctxt);
|
|
|
|
break;
|
|
|
|
case OpMemFAddr:
|
|
|
|
ctxt->memop.bytes = ctxt->op_bytes + 2;
|
|
|
|
goto mem_common;
|
2011-09-13 15:45:49 +08:00
|
|
|
case OpES:
|
2014-11-02 17:54:47 +08:00
|
|
|
op->type = OP_IMM;
|
2011-09-13 15:45:49 +08:00
|
|
|
op->val = VCPU_SREG_ES;
|
|
|
|
break;
|
|
|
|
case OpCS:
|
2014-11-02 17:54:47 +08:00
|
|
|
op->type = OP_IMM;
|
2011-09-13 15:45:49 +08:00
|
|
|
op->val = VCPU_SREG_CS;
|
|
|
|
break;
|
|
|
|
case OpSS:
|
2014-11-02 17:54:47 +08:00
|
|
|
op->type = OP_IMM;
|
2011-09-13 15:45:49 +08:00
|
|
|
op->val = VCPU_SREG_SS;
|
|
|
|
break;
|
|
|
|
case OpDS:
|
2014-11-02 17:54:47 +08:00
|
|
|
op->type = OP_IMM;
|
2011-09-13 15:45:49 +08:00
|
|
|
op->val = VCPU_SREG_DS;
|
|
|
|
break;
|
|
|
|
case OpFS:
|
2014-11-02 17:54:47 +08:00
|
|
|
op->type = OP_IMM;
|
2011-09-13 15:45:49 +08:00
|
|
|
op->val = VCPU_SREG_FS;
|
|
|
|
break;
|
|
|
|
case OpGS:
|
2014-11-02 17:54:47 +08:00
|
|
|
op->type = OP_IMM;
|
2011-09-13 15:45:49 +08:00
|
|
|
op->val = VCPU_SREG_GS;
|
|
|
|
break;
|
2011-09-13 15:45:41 +08:00
|
|
|
case OpImplicit:
|
|
|
|
/* Special instructions do their own operand decoding. */
|
|
|
|
default:
|
|
|
|
op->type = OP_NONE; /* Disable writeback. */
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
done:
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len, int emulation_type)
{
	int rc = X86EMUL_CONTINUE;
	int mode = ctxt->mode;
	int def_op_bytes, def_ad_bytes, goffset, simd_prefix;
	bool op_prefix = false;
	bool has_seg_override = false;
	struct opcode opcode;
	u16 dummy;
	struct desc_struct desc;

	ctxt->memop.type = OP_NONE;
	ctxt->memopp = NULL;
	ctxt->_eip = ctxt->eip;
	ctxt->fetch.ptr = ctxt->fetch.data;
	ctxt->fetch.end = ctxt->fetch.data + insn_len;
	ctxt->opcode_len = 1;
	ctxt->intercept = x86_intercept_none;
	if (insn_len > 0)
		memcpy(ctxt->fetch.data, insn, insn_len);
	else {
		rc = __do_insn_fetch_bytes(ctxt, 1);
		if (rc != X86EMUL_CONTINUE)
			goto done;
	}

	switch (mode) {
	case X86EMUL_MODE_REAL:
	case X86EMUL_MODE_VM86:
		def_op_bytes = def_ad_bytes = 2;
		ctxt->ops->get_segment(ctxt, &dummy, &desc, NULL, VCPU_SREG_CS);
		if (desc.d)
			def_op_bytes = def_ad_bytes = 4;
		break;
	case X86EMUL_MODE_PROT16:
		def_op_bytes = def_ad_bytes = 2;
		break;
	case X86EMUL_MODE_PROT32:
		def_op_bytes = def_ad_bytes = 4;
		break;
#ifdef CONFIG_X86_64
	case X86EMUL_MODE_PROT64:
		def_op_bytes = 4;
		def_ad_bytes = 8;
		break;
#endif
	default:
		return EMULATION_FAILED;
	}

	ctxt->op_bytes = def_op_bytes;
	ctxt->ad_bytes = def_ad_bytes;

	/* Legacy prefixes. */
	for (;;) {
		switch (ctxt->b = insn_fetch(u8, ctxt)) {
		case 0x66:	/* operand-size override */
			op_prefix = true;
			/* switch between 2/4 bytes */
			ctxt->op_bytes = def_op_bytes ^ 6;
			break;
		case 0x67:	/* address-size override */
			if (mode == X86EMUL_MODE_PROT64)
				/* switch between 4/8 bytes */
				ctxt->ad_bytes = def_ad_bytes ^ 12;
			else
				/* switch between 2/4 bytes */
				ctxt->ad_bytes = def_ad_bytes ^ 6;
			break;
		case 0x26:	/* ES override */
			has_seg_override = true;
			ctxt->seg_override = VCPU_SREG_ES;
			break;
		case 0x2e:	/* CS override */
			has_seg_override = true;
			ctxt->seg_override = VCPU_SREG_CS;
			break;
		case 0x36:	/* SS override */
			has_seg_override = true;
			ctxt->seg_override = VCPU_SREG_SS;
			break;
		case 0x3e:	/* DS override */
			has_seg_override = true;
			ctxt->seg_override = VCPU_SREG_DS;
			break;
		case 0x64:	/* FS override */
			has_seg_override = true;
			ctxt->seg_override = VCPU_SREG_FS;
			break;
		case 0x65:	/* GS override */
			has_seg_override = true;
			ctxt->seg_override = VCPU_SREG_GS;
			break;
		case 0x40 ... 0x4f: /* REX */
			if (mode != X86EMUL_MODE_PROT64)
				goto done_prefixes;
			ctxt->rex_prefix = ctxt->b;
			continue;
		case 0xf0:	/* LOCK */
			ctxt->lock_prefix = 1;
			break;
		case 0xf2:	/* REPNE/REPNZ */
		case 0xf3:	/* REP/REPE/REPZ */
			ctxt->rep_prefix = ctxt->b;
			break;
		default:
			goto done_prefixes;
		}

		/* Any legacy prefix after a REX prefix nullifies its effect. */

		ctxt->rex_prefix = 0;
	}

done_prefixes:

	/* REX prefix. */
	if (ctxt->rex_prefix & 8)
		ctxt->op_bytes = 8;	/* REX.W */

	/* Opcode byte(s). */
	opcode = opcode_table[ctxt->b];
	/* Two-byte opcode? */
	if (ctxt->b == 0x0f) {
		ctxt->opcode_len = 2;
		ctxt->b = insn_fetch(u8, ctxt);
		opcode = twobyte_table[ctxt->b];

		/* 0F_38 opcode map */
		if (ctxt->b == 0x38) {
			ctxt->opcode_len = 3;
			ctxt->b = insn_fetch(u8, ctxt);
			opcode = opcode_map_0f_38[ctxt->b];
		}
	}
	ctxt->d = opcode.flags;

	if (ctxt->d & ModRM)
		ctxt->modrm = insn_fetch(u8, ctxt);

	/* vex-prefix instructions are not implemented */
	if (ctxt->opcode_len == 1 && (ctxt->b == 0xc5 || ctxt->b == 0xc4) &&
	    (mode == X86EMUL_MODE_PROT64 || (ctxt->modrm & 0xc0) == 0xc0)) {
		ctxt->d = NotImpl;
	}

	while (ctxt->d & GroupMask) {
		switch (ctxt->d & GroupMask) {
		case Group:
			goffset = (ctxt->modrm >> 3) & 7;
			opcode = opcode.u.group[goffset];
			break;
		case GroupDual:
			goffset = (ctxt->modrm >> 3) & 7;
			if ((ctxt->modrm >> 6) == 3)
				opcode = opcode.u.gdual->mod3[goffset];
			else
				opcode = opcode.u.gdual->mod012[goffset];
			break;
		case RMExt:
			goffset = ctxt->modrm & 7;
			opcode = opcode.u.group[goffset];
			break;
		case Prefix:
			if (ctxt->rep_prefix && op_prefix)
				return EMULATION_FAILED;
			simd_prefix = op_prefix ? 0x66 : ctxt->rep_prefix;
			switch (simd_prefix) {
			case 0x00: opcode = opcode.u.gprefix->pfx_no; break;
			case 0x66: opcode = opcode.u.gprefix->pfx_66; break;
			case 0xf2: opcode = opcode.u.gprefix->pfx_f2; break;
			case 0xf3: opcode = opcode.u.gprefix->pfx_f3; break;
			}
			break;
		case Escape:
			if (ctxt->modrm > 0xbf) {
				size_t size = ARRAY_SIZE(opcode.u.esc->high);
				u32 index = array_index_nospec(
					ctxt->modrm - 0xc0, size);

				opcode = opcode.u.esc->high[index];
			} else {
				opcode = opcode.u.esc->op[(ctxt->modrm >> 3) & 7];
			}
			break;
		case InstrDual:
			if ((ctxt->modrm >> 6) == 3)
				opcode = opcode.u.idual->mod3;
			else
				opcode = opcode.u.idual->mod012;
			break;
		case ModeDual:
			if (ctxt->mode == X86EMUL_MODE_PROT64)
				opcode = opcode.u.mdual->mode64;
			else
				opcode = opcode.u.mdual->mode32;
			break;
		default:
			return EMULATION_FAILED;
		}

		ctxt->d &= ~(u64)GroupMask;
		ctxt->d |= opcode.flags;
	}

	/* Unrecognised? */
	if (ctxt->d == 0)
		return EMULATION_FAILED;

	ctxt->execute = opcode.u.execute;

	if (unlikely(emulation_type & EMULTYPE_TRAP_UD) &&
	    likely(!(ctxt->d & EmulateOnUD)))
		return EMULATION_FAILED;

	if (unlikely(ctxt->d &
		     (NotImpl|Stack|Op3264|Sse|Mmx|Intercept|CheckPerm|NearBranch|
		      No16))) {
		/*
		 * These are copied unconditionally here, and checked unconditionally
		 * in x86_emulate_insn.
		 */
		ctxt->check_perm = opcode.check_perm;
		ctxt->intercept = opcode.intercept;

		if (ctxt->d & NotImpl)
			return EMULATION_FAILED;

		if (mode == X86EMUL_MODE_PROT64) {
			if (ctxt->op_bytes == 4 && (ctxt->d & Stack))
				ctxt->op_bytes = 8;
			else if (ctxt->d & NearBranch)
				ctxt->op_bytes = 8;
		}

		if (ctxt->d & Op3264) {
			if (mode == X86EMUL_MODE_PROT64)
				ctxt->op_bytes = 8;
			else
				ctxt->op_bytes = 4;
		}

		if ((ctxt->d & No16) && ctxt->op_bytes == 2)
			ctxt->op_bytes = 4;

		if (ctxt->d & Sse)
			ctxt->op_bytes = 16;
		else if (ctxt->d & Mmx)
			ctxt->op_bytes = 8;
	}

	/* ModRM and SIB bytes. */
	if (ctxt->d & ModRM) {
		rc = decode_modrm(ctxt, &ctxt->memop);
		if (!has_seg_override) {
			has_seg_override = true;
			ctxt->seg_override = ctxt->modrm_seg;
		}
	} else if (ctxt->d & MemAbs)
		rc = decode_abs(ctxt, &ctxt->memop);
	if (rc != X86EMUL_CONTINUE)
		goto done;

	if (!has_seg_override)
		ctxt->seg_override = VCPU_SREG_DS;

	ctxt->memop.addr.mem.seg = ctxt->seg_override;

	/*
	 * Decode and fetch the source operand: register, memory
	 * or immediate.
	 */
	rc = decode_operand(ctxt, &ctxt->src, (ctxt->d >> SrcShift) & OpMask);
	if (rc != X86EMUL_CONTINUE)
		goto done;

	/*
	 * Decode and fetch the second source operand: register, memory
	 * or immediate.
	 */
	rc = decode_operand(ctxt, &ctxt->src2, (ctxt->d >> Src2Shift) & OpMask);
	if (rc != X86EMUL_CONTINUE)
		goto done;

	/* Decode and fetch the destination operand: register or memory. */
	rc = decode_operand(ctxt, &ctxt->dst, (ctxt->d >> DstShift) & OpMask);

	if (ctxt->rip_relative && likely(ctxt->memopp))
		ctxt->memopp->addr.mem.ea = address_mask(ctxt,
					ctxt->memopp->addr.mem.ea + ctxt->_eip);

done:
	if (rc == X86EMUL_PROPAGATE_FAULT)
		ctxt->have_exception = true;
	return (rc != X86EMUL_CONTINUE) ? EMULATION_FAILED : EMULATION_OK;
}

bool x86_page_table_writing_insn(struct x86_emulate_ctxt *ctxt)
{
	return ctxt->d & PageTable;
}

static bool string_insn_completed(struct x86_emulate_ctxt *ctxt)
{
	/* The second termination condition only applies for REPE
	 * and REPNE. Test if the repeat string operation prefix is
	 * REPE/REPZ or REPNE/REPNZ and if it's the case it tests the
	 * corresponding termination condition according to:
	 *     - if REPE/REPZ and ZF = 0 then done
	 *     - if REPNE/REPNZ and ZF = 1 then done
	 */
	if (((ctxt->b == 0xa6) || (ctxt->b == 0xa7) ||
	     (ctxt->b == 0xae) || (ctxt->b == 0xaf))
	    && (((ctxt->rep_prefix == REPE_PREFIX) &&
		 ((ctxt->eflags & X86_EFLAGS_ZF) == 0))
		|| ((ctxt->rep_prefix == REPNE_PREFIX) &&
		    ((ctxt->eflags & X86_EFLAGS_ZF) == X86_EFLAGS_ZF))))
		return true;

	return false;
}

static int flush_pending_x87_faults(struct x86_emulate_ctxt *ctxt)
{
	int rc;

	kvm_fpu_get();
	rc = asm_safe("fwait");
	kvm_fpu_put();

	if (unlikely(rc != X86EMUL_CONTINUE))
		return emulate_exception(ctxt, MF_VECTOR, 0, false);

	return X86EMUL_CONTINUE;
}

static void fetch_possible_mmx_operand(struct operand *op)
{
	if (op->type == OP_MM)
		kvm_read_mmx_reg(op->addr.mm, &op->mm_val);
}

static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop)
{
	ulong flags = (ctxt->eflags & EFLAGS_MASK) | X86_EFLAGS_IF;

	if (!(ctxt->d & ByteOp))
		fop += __ffs(ctxt->dst.bytes) * FASTOP_SIZE;

	asm("push %[flags]; popf; " CALL_NOSPEC " ; pushf; pop %[flags]\n"
	    : "+a"(ctxt->dst.val), "+d"(ctxt->src.val), [flags]"+D"(flags),
	      [thunk_target]"+S"(fop), ASM_CALL_CONSTRAINT
	    : "c"(ctxt->src2.val));

	ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);
	if (!fop) /* exception is returned in fop variable */
		return emulate_de(ctxt);
	return X86EMUL_CONTINUE;
}

void init_decode_cache(struct x86_emulate_ctxt *ctxt)
{
	memset(&ctxt->rip_relative, 0,
	       (void *)&ctxt->modrm - (void *)&ctxt->rip_relative);

	ctxt->io_read.pos = 0;
	ctxt->io_read.end = 0;
	ctxt->mem_read.end = 0;
}

int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
{
	const struct x86_emulate_ops *ops = ctxt->ops;
	int rc = X86EMUL_CONTINUE;
	int saved_dst_type = ctxt->dst.type;
	unsigned emul_flags;

	ctxt->mem_read.pos = 0;

	/* LOCK prefix is allowed only with some instructions */
	if (ctxt->lock_prefix && (!(ctxt->d & Lock) || ctxt->dst.type != OP_MEM)) {
		rc = emulate_ud(ctxt);
		goto done;
	}

	if ((ctxt->d & SrcMask) == SrcMemFAddr && ctxt->src.type != OP_MEM) {
		rc = emulate_ud(ctxt);
		goto done;
	}

	emul_flags = ctxt->ops->get_hflags(ctxt);
	if (unlikely(ctxt->d &
		     (No64|Undefined|Sse|Mmx|Intercept|CheckPerm|Priv|Prot|String))) {
		if ((ctxt->mode == X86EMUL_MODE_PROT64 && (ctxt->d & No64)) ||
				(ctxt->d & Undefined)) {
			rc = emulate_ud(ctxt);
			goto done;
		}

		if (((ctxt->d & (Sse|Mmx)) && ((ops->get_cr(ctxt, 0) & X86_CR0_EM)))
		    || ((ctxt->d & Sse) && !(ops->get_cr(ctxt, 4) & X86_CR4_OSFXSR))) {
			rc = emulate_ud(ctxt);
			goto done;
		}

		if ((ctxt->d & (Sse|Mmx)) && (ops->get_cr(ctxt, 0) & X86_CR0_TS)) {
			rc = emulate_nm(ctxt);
			goto done;
		}

		if (ctxt->d & Mmx) {
			rc = flush_pending_x87_faults(ctxt);
			if (rc != X86EMUL_CONTINUE)
				goto done;
			/*
			 * Now that we know the fpu is exception safe, we can fetch
			 * operands from it.
			 */
			fetch_possible_mmx_operand(&ctxt->src);
			fetch_possible_mmx_operand(&ctxt->src2);
			if (!(ctxt->d & Mov))
				fetch_possible_mmx_operand(&ctxt->dst);
		}

		if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && ctxt->intercept) {
			rc = emulator_check_intercept(ctxt, ctxt->intercept,
						      X86_ICPT_PRE_EXCEPT);
			if (rc != X86EMUL_CONTINUE)
				goto done;
		}

		/* Instruction can only be executed in protected mode */
		if ((ctxt->d & Prot) && ctxt->mode < X86EMUL_MODE_PROT16) {
			rc = emulate_ud(ctxt);
			goto done;
		}

		/* Privileged instruction can be executed only in CPL=0 */
		if ((ctxt->d & Priv) && ops->cpl(ctxt)) {
			if (ctxt->d & PrivUD)
				rc = emulate_ud(ctxt);
			else
				rc = emulate_gp(ctxt, 0);
			goto done;
		}

		/* Do instruction specific permission checks */
		if (ctxt->d & CheckPerm) {
			rc = ctxt->check_perm(ctxt);
			if (rc != X86EMUL_CONTINUE)
				goto done;
		}


		if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && (ctxt->d & Intercept)) {
			rc = emulator_check_intercept(ctxt, ctxt->intercept,
						      X86_ICPT_POST_EXCEPT);
			if (rc != X86EMUL_CONTINUE)
				goto done;
		}

		if (ctxt->rep_prefix && (ctxt->d & String)) {
			/* All REP prefixes have the same first termination condition */
			if (address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) == 0) {
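				/*
				 * Illustration (hypothetical register values,
				 * not an exhaustive spec): a REP MOVSB with a
				 * 0x67 prefix in 64-bit mode uses a 32-bit
				 * address size, so address_mask() keeps only
				 * ECX.  With RCX = 0xffffffff00000000 the
				 * masked count is zero and the instruction
				 * terminates without a single iteration.
				 */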
				string_registers_quirk(ctxt);
				ctxt->eip = ctxt->_eip;
				ctxt->eflags &= ~X86_EFLAGS_RF;
				goto done;
			}
		}
	}

	if ((ctxt->src.type == OP_MEM) && !(ctxt->d & NoAccess)) {
		rc = segmented_read(ctxt, ctxt->src.addr.mem,
				    ctxt->src.valptr, ctxt->src.bytes);
		if (rc != X86EMUL_CONTINUE)
			goto done;
		ctxt->src.orig_val64 = ctxt->src.val64;
	}

	if (ctxt->src2.type == OP_MEM) {
		rc = segmented_read(ctxt, ctxt->src2.addr.mem,
				    &ctxt->src2.val, ctxt->src2.bytes);
		if (rc != X86EMUL_CONTINUE)
			goto done;
	}

	if ((ctxt->d & DstMask) == ImplicitOps)
		goto special_insn;

	if ((ctxt->dst.type == OP_MEM) && !(ctxt->d & Mov)) {
		/* optimisation - avoid slow emulated read if Mov */
		rc = segmented_read(ctxt, ctxt->dst.addr.mem,
				    &ctxt->dst.val, ctxt->dst.bytes);
		if (rc != X86EMUL_CONTINUE) {
			if (!(ctxt->d & NoWrite) &&
			    rc == X86EMUL_PROPAGATE_FAULT &&
			    ctxt->exception.vector == PF_VECTOR)
				ctxt->exception.error_code |= PFERR_WRITE_MASK;
			goto done;
		}
	}
	/* Copy full 64-bit value for CMPXCHG8B. */
	ctxt->dst.orig_val64 = ctxt->dst.val64;

special_insn:

	if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && (ctxt->d & Intercept)) {
		rc = emulator_check_intercept(ctxt, ctxt->intercept,
					      X86_ICPT_POST_MEMACCESS);
		if (rc != X86EMUL_CONTINUE)
			goto done;
	}

	if (ctxt->rep_prefix && (ctxt->d & String))
		ctxt->eflags |= X86_EFLAGS_RF;
	else
		ctxt->eflags &= ~X86_EFLAGS_RF;

	if (ctxt->execute) {
		if (ctxt->d & Fastop)
			rc = fastop(ctxt, ctxt->fop);
		else
			rc = ctxt->execute(ctxt);
		if (rc != X86EMUL_CONTINUE)
			goto done;
		goto writeback;
	}

	if (ctxt->opcode_len == 2)
		goto twobyte_insn;
	else if (ctxt->opcode_len == 3)
		goto threebyte_insn;

	switch (ctxt->b) {
	case 0x70 ... 0x7f: /* jcc (short) */
		if (test_cc(ctxt->b, ctxt->eflags))
			rc = jmp_rel(ctxt, ctxt->src.val);
		break;
	case 0x8d: /* lea r16/r32, m */
		ctxt->dst.val = ctxt->src.addr.mem.ea;
		break;
	case 0x90 ... 0x97: /* nop / xchg reg, rax */
		if (ctxt->dst.addr.reg == reg_rmw(ctxt, VCPU_REGS_RAX))
			ctxt->dst.type = OP_NONE;
		else
			rc = em_xchg(ctxt);
		break;
	case 0x98: /* cbw/cwde/cdqe */
		switch (ctxt->op_bytes) {
		case 2: ctxt->dst.val = (s8)ctxt->dst.val; break;
		case 4: ctxt->dst.val = (s16)ctxt->dst.val; break;
		case 8: ctxt->dst.val = (s32)ctxt->dst.val; break;
		}
		break;
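		/*
		 * Worked example (hypothetical register values): for CBW
		 * (op_bytes == 2), dst.val = 0x80 sign-extends through (s8)
		 * so AX becomes 0xff80 on writeback; for CWDE
		 * (op_bytes == 4), 0x8000 becomes 0xffff8000 in EAX.
		 */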
	case 0xcc: /* int3 */
		rc = emulate_int(ctxt, 3);
		break;
	case 0xcd: /* int n */
		rc = emulate_int(ctxt, ctxt->src.val);
		break;
	case 0xce: /* into */
		if (ctxt->eflags & X86_EFLAGS_OF)
			rc = emulate_int(ctxt, 4);
		break;
	case 0xe9: /* jmp rel */
	case 0xeb: /* jmp rel short */
		rc = jmp_rel(ctxt, ctxt->src.val);
		ctxt->dst.type = OP_NONE; /* Disable writeback. */
		break;
	case 0xf4: /* hlt */
		ctxt->ops->halt(ctxt);
		break;
	case 0xf5: /* cmc */
		/* complement carry flag from eflags reg */
		ctxt->eflags ^= X86_EFLAGS_CF;
		break;
	case 0xf8: /* clc */
		ctxt->eflags &= ~X86_EFLAGS_CF;
		break;
	case 0xf9: /* stc */
		ctxt->eflags |= X86_EFLAGS_CF;
		break;
	case 0xfc: /* cld */
		ctxt->eflags &= ~X86_EFLAGS_DF;
		break;
	case 0xfd: /* std */
		ctxt->eflags |= X86_EFLAGS_DF;
		break;
	default:
		goto cannot_emulate;
	}

	if (rc != X86EMUL_CONTINUE)
		goto done;

writeback:
	if (ctxt->d & SrcWrite) {
		BUG_ON(ctxt->src.type == OP_MEM || ctxt->src.type == OP_MEM_STR);
		rc = writeback(ctxt, &ctxt->src);
		if (rc != X86EMUL_CONTINUE)
			goto done;
	}
	if (!(ctxt->d & NoWrite)) {
		rc = writeback(ctxt, &ctxt->dst);
		if (rc != X86EMUL_CONTINUE)
			goto done;
	}

	/*
	 * Restore dst type in case the decoding will be reused
	 * (happens for string instructions).
	 */
	ctxt->dst.type = saved_dst_type;

	if ((ctxt->d & SrcMask) == SrcSI)
		string_addr_inc(ctxt, VCPU_REGS_RSI, &ctxt->src);

	if ((ctxt->d & DstMask) == DstDI)
		string_addr_inc(ctxt, VCPU_REGS_RDI, &ctxt->dst);

	if (ctxt->rep_prefix && (ctxt->d & String)) {
		unsigned int count;
		struct read_cache *r = &ctxt->io_read;

		if ((ctxt->d & SrcMask) == SrcSI)
			count = ctxt->src.count;
		else
			count = ctxt->dst.count;
		register_address_increment(ctxt, VCPU_REGS_RCX, -count);

		if (!string_insn_completed(ctxt)) {
			/*
			 * Re-enter guest when pio read ahead buffer is empty
			 * or, if it is not used, after each 1024 iteration.
			 */
			if ((r->end != 0 || reg_read(ctxt, VCPU_REGS_RCX) & 0x3ff) &&
			    (r->end == 0 || r->end != r->pos)) {
				/*
				 * Reset read cache. Usually happens before
				 * decode, but since instruction is restarted
				 * we have to do it here.
				 */
				ctxt->mem_read.end = 0;
				writeback_registers(ctxt);
				return EMULATION_RESTART;
			}
			goto done; /* skip rip writeback */
		}
		ctxt->eflags &= ~X86_EFLAGS_RF;
	}
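	/*
	 * Example of the re-entry heuristic above (hypothetical count): a
	 * REP INSB with no read-ahead buffer (r->end == 0) is restarted
	 * internally while RCX & 0x3ff is nonzero, and control returns to
	 * the guest once RCX hits a multiple of 1024 (0x400, 0x800, ...),
	 * i.e. after each batch of 1024 iterations.
	 */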

	ctxt->eip = ctxt->_eip;
	if (ctxt->mode != X86EMUL_MODE_PROT64)
		ctxt->eip = (u32)ctxt->_eip;

done:
	if (rc == X86EMUL_PROPAGATE_FAULT) {
		WARN_ON(ctxt->exception.vector > 0x1f);
		ctxt->have_exception = true;
	}
	if (rc == X86EMUL_INTERCEPTED)
		return EMULATION_INTERCEPTED;

	if (rc == X86EMUL_CONTINUE)
		writeback_registers(ctxt);

	return (rc == X86EMUL_UNHANDLEABLE) ? EMULATION_FAILED : EMULATION_OK;

twobyte_insn:
	switch (ctxt->b) {
	case 0x09: /* wbinvd */
		(ctxt->ops->wbinvd)(ctxt);
		break;
	case 0x08: /* invd */
	case 0x0d: /* GrpP (prefetch) */
	case 0x18: /* Grp16 (prefetch/nop) */
	case 0x1f: /* nop */
		break;
	case 0x20: /* mov cr, reg */
		ctxt->dst.val = ops->get_cr(ctxt, ctxt->modrm_reg);
		break;
	case 0x21: /* mov from dr to reg */
		ops->get_dr(ctxt, ctxt->modrm_reg, &ctxt->dst.val);
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
|
|
|
break;
|
|
|
|
case 0x40 ... 0x4f: /* cmov */
|
2014-06-15 21:13:00 +08:00
|
|
|
if (test_cc(ctxt->b, ctxt->eflags))
|
|
|
|
ctxt->dst.val = ctxt->src.val;
|
2015-03-30 20:39:19 +08:00
|
|
|
else if (ctxt->op_bytes != 4)
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->dst.type = OP_NONE; /* no writeback */
|
[PATCH] kvm: userspace interface
web site: http://kvm.sourceforge.net
mailing list: kvm-devel@lists.sourceforge.net
(http://lists.sourceforge.net/lists/listinfo/kvm-devel)
The following patchset adds a driver for Intel's hardware virtualization
extensions to the x86 architecture. The driver adds a character device
(/dev/kvm) that exposes the virtualization capabilities to userspace. Using
this driver, a process can run a virtual machine (a "guest") in a fully
virtualized PC containing its own virtual hard disks, network adapters, and
display.
Using this driver, one can start multiple virtual machines on a host.
Each virtual machine is a process on the host; a virtual cpu is a thread in
that process. kill(1), nice(1), top(1) work as expected. In effect, the
driver adds a third execution mode to the existing two: we now have kernel
mode, user mode, and guest mode. Guest mode has its own address space mapping
guest physical memory (which is accessible to user mode by mmap()ing
/dev/kvm). Guest mode has no access to any I/O devices; any such access is
intercepted and directed to user mode for emulation.
The driver supports i386 and x86_64 hosts and guests. All combinations are
allowed except x86_64 guest on i386 host. For i386 guests and hosts, both pae
and non-pae paging modes are supported.
SMP hosts and UP guests are supported. At the moment only Intel
hardware is supported, but AMD virtualization support is being worked on.
Performance currently is non-stellar due to the naive implementation of the
mmu virtualization, which throws away most of the shadow page table entries
every context switch. We plan to address this in two ways:
- cache shadow page tables across tlb flushes
- wait until AMD and Intel release processors with nested page tables
Currently a virtual desktop is responsive but consumes a lot of CPU. Under
Windows I tried playing pinball and watching a few flash movies; with a recent
CPU one can hardly feel the virtualization. Linux/X is slower, probably due
to X being in a separate process.
In addition to the driver, you need a slightly modified qemu to provide I/O
device emulation and the BIOS.
Caveats (akpm: might no longer be true):
- The Windows install currently bluescreens due to a problem with the
virtual APIC. We are working on a fix. A temporary workaround is to
use an existing image or install through qemu
- Windows 64-bit does not work. That's also true for qemu, so it's
probably a problem with the device model.
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
|
|
|
break;
|
2009-04-12 18:36:30 +08:00
|
|
|
case 0x80 ... 0x8f: /* jnz rel, etc*/
|
2011-06-01 20:34:25 +08:00
|
|
|
if (test_cc(ctxt->b, ctxt->eflags))
|
2014-09-19 03:39:38 +08:00
|
|
|
rc = jmp_rel(ctxt, ctxt->src.val);
|
2007-11-28 01:30:56 +08:00
|
|
|
break;
|
2010-08-06 17:10:07 +08:00
|
|
|
case 0x90 ... 0x9f: /* setcc r/m8 */
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->dst.val = test_cc(ctxt->b, ctxt->eflags);
|
2010-08-06 17:10:07 +08:00
|
|
|
break;
|
[PATCH] kvm: userspace interface
web site: http://kvm.sourceforge.net
mailing list: kvm-devel@lists.sourceforge.net
(http://lists.sourceforge.net/lists/listinfo/kvm-devel)
The following patchset adds a driver for Intel's hardware virtualization
extensions to the x86 architecture. The driver adds a character device
(/dev/kvm) that exposes the virtualization capabilities to userspace. Using
this driver, a process can run a virtual machine (a "guest") in a fully
virtualized PC containing its own virtual hard disks, network adapters, and
display.
Using this driver, one can start multiple virtual machines on a host.
Each virtual machine is a process on the host; a virtual cpu is a thread in
that process. kill(1), nice(1), top(1) work as expected. In effect, the
driver adds a third execution mode to the existing two: we now have kernel
mode, user mode, and guest mode. Guest mode has its own address space mapping
guest physical memory (which is accessible to user mode by mmap()ing
/dev/kvm). Guest mode has no access to any I/O devices; any such access is
intercepted and directed to user mode for emulation.
The driver supports i386 and x86_64 hosts and guests. All combinations are
allowed except x86_64 guest on i386 host. For i386 guests and hosts, both pae
and non-pae paging modes are supported.
SMP hosts and UP guests are supported. At the moment only Intel
hardware is supported, but AMD virtualization support is being worked on.
Performance currently is non-stellar due to the naive implementation of the
mmu virtualization, which throws away most of the shadow page table entries
every context switch. We plan to address this in two ways:
- cache shadow page tables across tlb flushes
- wait until AMD and Intel release processors with nested page tables
Currently a virtual desktop is responsive but consumes a lot of CPU. Under
Windows I tried playing pinball and watching a few flash movies; with a recent
CPU one can hardly feel the virtualization. Linux/X is slower, probably due
to X being in a separate process.
In addition to the driver, you need a slightly modified qemu to provide I/O
device emulation and the BIOS.
Caveats (akpm: might no longer be true):
- The Windows install currently bluescreens due to a problem with the
virtual APIC. We are working on a fix. A temporary workaround is to
use an existing image or install through qemu
- Windows 64-bit does not work. That's also true for qemu, so it's
probably a problem with the device model.
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
|
|
|
case 0xb6 ... 0xb7: /* movzx */
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->dst.bytes = ctxt->op_bytes;
|
2012-06-12 00:40:15 +08:00
|
|
|
ctxt->dst.val = (ctxt->src.bytes == 1) ? (u8) ctxt->src.val
|
2011-06-01 20:34:25 +08:00
|
|
|
: (u16) ctxt->src.val;
|
[PATCH] kvm: userspace interface
web site: http://kvm.sourceforge.net
mailing list: kvm-devel@lists.sourceforge.net
(http://lists.sourceforge.net/lists/listinfo/kvm-devel)
The following patchset adds a driver for Intel's hardware virtualization
extensions to the x86 architecture. The driver adds a character device
(/dev/kvm) that exposes the virtualization capabilities to userspace. Using
this driver, a process can run a virtual machine (a "guest") in a fully
virtualized PC containing its own virtual hard disks, network adapters, and
display.
Using this driver, one can start multiple virtual machines on a host.
Each virtual machine is a process on the host; a virtual cpu is a thread in
that process. kill(1), nice(1), top(1) work as expected. In effect, the
driver adds a third execution mode to the existing two: we now have kernel
mode, user mode, and guest mode. Guest mode has its own address space mapping
guest physical memory (which is accessible to user mode by mmap()ing
/dev/kvm). Guest mode has no access to any I/O devices; any such access is
intercepted and directed to user mode for emulation.
The driver supports i386 and x86_64 hosts and guests. All combinations are
allowed except x86_64 guest on i386 host. For i386 guests and hosts, both pae
and non-pae paging modes are supported.
SMP hosts and UP guests are supported. At the moment only Intel
hardware is supported, but AMD virtualization support is being worked on.
Performance currently is non-stellar due to the naive implementation of the
mmu virtualization, which throws away most of the shadow page table entries
every context switch. We plan to address this in two ways:
- cache shadow page tables across tlb flushes
- wait until AMD and Intel release processors with nested page tables
Currently a virtual desktop is responsive but consumes a lot of CPU. Under
Windows I tried playing pinball and watching a few flash movies; with a recent
CPU one can hardly feel the virtualization. Linux/X is slower, probably due
to X being in a separate process.
In addition to the driver, you need a slightly modified qemu to provide I/O
device emulation and the BIOS.
Caveats (akpm: might no longer be true):
- The Windows install currently bluescreens due to a problem with the
virtual APIC. We are working on a fix. A temporary workaround is to
use an existing image or install through qemu
- Windows 64-bit does not work. That's also true for qemu, so it's
probably a problem with the device model.
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
|
|
|
break;
|
|
|
|
case 0xbe ... 0xbf: /* movsx */
|
2011-06-01 20:34:25 +08:00
|
|
|
ctxt->dst.bytes = ctxt->op_bytes;
|
2012-06-12 00:40:15 +08:00
|
|
|
ctxt->dst.val = (ctxt->src.bytes == 1) ? (s8) ctxt->src.val :
|
2011-06-01 20:34:25 +08:00
|
|
|
(s16) ctxt->src.val;
|
[PATCH] kvm: userspace interface
web site: http://kvm.sourceforge.net
mailing list: kvm-devel@lists.sourceforge.net
(http://lists.sourceforge.net/lists/listinfo/kvm-devel)
The following patchset adds a driver for Intel's hardware virtualization
extensions to the x86 architecture. The driver adds a character device
(/dev/kvm) that exposes the virtualization capabilities to userspace. Using
this driver, a process can run a virtual machine (a "guest") in a fully
virtualized PC containing its own virtual hard disks, network adapters, and
display.
Using this driver, one can start multiple virtual machines on a host.
Each virtual machine is a process on the host; a virtual cpu is a thread in
that process. kill(1), nice(1), top(1) work as expected. In effect, the
driver adds a third execution mode to the existing two: we now have kernel
mode, user mode, and guest mode. Guest mode has its own address space mapping
guest physical memory (which is accessible to user mode by mmap()ing
/dev/kvm). Guest mode has no access to any I/O devices; any such access is
intercepted and directed to user mode for emulation.
The driver supports i386 and x86_64 hosts and guests. All combinations are
allowed except x86_64 guest on i386 host. For i386 guests and hosts, both pae
and non-pae paging modes are supported.
SMP hosts and UP guests are supported. At the moment only Intel
hardware is supported, but AMD virtualization support is being worked on.
Performance currently is non-stellar due to the naive implementation of the
mmu virtualization, which throws away most of the shadow page table entries
every context switch. We plan to address this in two ways:
- cache shadow page tables across tlb flushes
- wait until AMD and Intel release processors with nested page tables
Currently a virtual desktop is responsive but consumes a lot of CPU. Under
Windows I tried playing pinball and watching a few flash movies; with a recent
CPU one can hardly feel the virtualization. Linux/X is slower, probably due
to X being in a separate process.
In addition to the driver, you need a slightly modified qemu to provide I/O
device emulation and the BIOS.
Caveats (akpm: might no longer be true):
- The Windows install currently bluescreens due to a problem with the
virtual APIC. We are working on a fix. A temporary workaround is to
use an existing image or install through qemu
- Windows 64-bit does not work. That's also true for qemu, so it's
probably a problem with the device model.
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
|
|
|
break;
|
	default:
		goto cannot_emulate;
	}
2010-08-30 22:12:28 +08:00
|
|
|
|
2013-10-29 19:54:10 +08:00
|
|
|
threebyte_insn:
|
|
|
|
|
2010-08-30 22:12:28 +08:00
|
|
|
if (rc != X86EMUL_CONTINUE)
|
|
|
|
goto done;
|
|
|
|
|
[PATCH] kvm: userspace interface
web site: http://kvm.sourceforge.net
mailing list: kvm-devel@lists.sourceforge.net
(http://lists.sourceforge.net/lists/listinfo/kvm-devel)
The following patchset adds a driver for Intel's hardware virtualization
extensions to the x86 architecture. The driver adds a character device
(/dev/kvm) that exposes the virtualization capabilities to userspace. Using
this driver, a process can run a virtual machine (a "guest") in a fully
virtualized PC containing its own virtual hard disks, network adapters, and
display.
Using this driver, one can start multiple virtual machines on a host.
Each virtual machine is a process on the host; a virtual cpu is a thread in
that process. kill(1), nice(1), top(1) work as expected. In effect, the
driver adds a third execution mode to the existing two: we now have kernel
mode, user mode, and guest mode. Guest mode has its own address space mapping
guest physical memory (which is accessible to user mode by mmap()ing
/dev/kvm). Guest mode has no access to any I/O devices; any such access is
intercepted and directed to user mode for emulation.
The driver supports i386 and x86_64 hosts and guests. All combinations are
allowed except x86_64 guest on i386 host. For i386 guests and hosts, both pae
and non-pae paging modes are supported.
SMP hosts and UP guests are supported. At the moment only Intel
hardware is supported, but AMD virtualization support is being worked on.
Performance currently is non-stellar due to the naive implementation of the
mmu virtualization, which throws away most of the shadow page table entries
every context switch. We plan to address this in two ways:
- cache shadow page tables across tlb flushes
- wait until AMD and Intel release processors with nested page tables
Currently a virtual desktop is responsive but consumes a lot of CPU. Under
Windows I tried playing pinball and watching a few flash movies; with a recent
CPU one can hardly feel the virtualization. Linux/X is slower, probably due
to X being in a separate process.
In addition to the driver, you need a slightly modified qemu to provide I/O
device emulation and the BIOS.
Caveats (akpm: might no longer be true):
- The Windows install currently bluescreens due to a problem with the
virtual APIC. We are working on a fix. A temporary workaround is to
use an existing image or install through qemu
- Windows 64-bit does not work. That's also true for qemu, so it's
probably a problem with the device model.
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
|
|
|
goto writeback;
|
|
|
|
|
|
|
|
cannot_emulate:
|
2011-03-28 22:57:49 +08:00
|
|
|
return EMULATION_FAILED;
|
[PATCH] kvm: userspace interface
web site: http://kvm.sourceforge.net
mailing list: kvm-devel@lists.sourceforge.net
(http://lists.sourceforge.net/lists/listinfo/kvm-devel)
The following patchset adds a driver for Intel's hardware virtualization
extensions to the x86 architecture. The driver adds a character device
(/dev/kvm) that exposes the virtualization capabilities to userspace. Using
this driver, a process can run a virtual machine (a "guest") in a fully
virtualized PC containing its own virtual hard disks, network adapters, and
display.
Using this driver, one can start multiple virtual machines on a host.
Each virtual machine is a process on the host; a virtual cpu is a thread in
that process. kill(1), nice(1), top(1) work as expected. In effect, the
driver adds a third execution mode to the existing two: we now have kernel
mode, user mode, and guest mode. Guest mode has its own address space mapping
guest physical memory (which is accessible to user mode by mmap()ing
/dev/kvm). Guest mode has no access to any I/O devices; any such access is
intercepted and directed to user mode for emulation.
The driver supports i386 and x86_64 hosts and guests. All combinations are
allowed except x86_64 guest on i386 host. For i386 guests and hosts, both pae
and non-pae paging modes are supported.
SMP hosts and UP guests are supported. At the moment only Intel
hardware is supported, but AMD virtualization support is being worked on.
Performance currently is non-stellar due to the naive implementation of the
mmu virtualization, which throws away most of the shadow page table entries
every context switch. We plan to address this in two ways:
- cache shadow page tables across tlb flushes
- wait until AMD and Intel release processors with nested page tables
Currently a virtual desktop is responsive but consumes a lot of CPU. Under
Windows I tried playing pinball and watching a few flash movies; with a recent
CPU one can hardly feel the virtualization. Linux/X is slower, probably due
to X being in a separate process.
In addition to the driver, you need a slightly modified qemu to provide I/O
device emulation and the BIOS.
Caveats (akpm: might no longer be true):
- The Windows install currently bluescreens due to a problem with the
virtual APIC. We are working on a fix. A temporary workaround is to
use an existing image or install through qemu
- Windows 64-bit does not work. That's also true for qemu, so it's
probably a problem with the device model.
[bero@arklinux.org: build fix]
[simon.kagstrom@bth.se: build fix, other fixes]
[uril@qumranet.com: KVM: Expose interrupt bitmap]
[akpm@osdl.org: i386 build fix]
[mingo@elte.hu: i386 fixes]
[rdreier@cisco.com: add log levels to all printks]
[randy.dunlap@oracle.com: Fix sparse NULL and C99 struct init warnings]
[anthony@codemonkey.ws: KVM: AMD SVM: 32-bit host support]
Signed-off-by: Yaniv Kamay <yaniv@qumranet.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Cc: Simon Kagstrom <simon.kagstrom@bth.se>
Cc: Bernhard Rosenkraenzer <bero@arklinux.org>
Signed-off-by: Uri Lublin <uril@qumranet.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Anthony Liguori <anthony@codemonkey.ws>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-10 18:21:36 +08:00
|
|
|
}
|
void emulator_invalidate_register_cache(struct x86_emulate_ctxt *ctxt)
{
	invalidate_registers(ctxt);
}

void emulator_writeback_register_cache(struct x86_emulate_ctxt *ctxt)
{
	writeback_registers(ctxt);
}

bool emulator_can_use_gpa(struct x86_emulate_ctxt *ctxt)
{
	if (ctxt->rep_prefix && (ctxt->d & String))
		return false;

	if (ctxt->d & TwoMemOp)
		return false;

	return true;
}