// SPDX-License-Identifier: GPL-2.0
/*
 * handling interprocessor communication
 *
 * Copyright IBM Corp. 2008, 2013
 *
 * Author(s): Carsten Otte <cotte@de.ibm.com>
 *            Christian Borntraeger <borntraeger@de.ibm.com>
 *            Christian Ehrhardt <ehrhardt@de.ibm.com>
 */

#include <linux/kvm.h>
#include <linux/kvm_host.h>
#include <linux/slab.h>
#include <asm/sigp.h>
#include "gaccess.h"
#include "kvm-s390.h"
#include "trace.h"

static int __sigp_sense(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu,
			u64 *reg)
{
	const bool stopped = kvm_s390_test_cpuflags(dst_vcpu, CPUSTAT_STOPPED);
	int rc;
	int ext_call_pending;

	ext_call_pending = kvm_s390_ext_call_pending(dst_vcpu);
	if (!stopped && !ext_call_pending)
		rc = SIGP_CC_ORDER_CODE_ACCEPTED;
	else {
		*reg &= 0xffffffff00000000UL;
		if (ext_call_pending)
			*reg |= SIGP_STATUS_EXT_CALL_PENDING;
		if (stopped)
			*reg |= SIGP_STATUS_STOPPED;
		rc = SIGP_CC_STATUS_STORED;
	}

	VCPU_EVENT(vcpu, 4, "sensed status of cpu %x rc %x", dst_vcpu->vcpu_id,
		   rc);
	return rc;
}

static int __inject_sigp_emergency(struct kvm_vcpu *vcpu,
				   struct kvm_vcpu *dst_vcpu)
{
	struct kvm_s390_irq irq = {
		.type = KVM_S390_INT_EMERGENCY,
		.u.emerg.code = vcpu->vcpu_id,
	};
	int rc = 0;

	rc = kvm_s390_inject_vcpu(dst_vcpu, &irq);
	if (!rc)
		VCPU_EVENT(vcpu, 4, "sent sigp emerg to cpu %x",
			   dst_vcpu->vcpu_id);

	return rc ? rc : SIGP_CC_ORDER_CODE_ACCEPTED;
}

static int __sigp_emergency(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu)
{
	return __inject_sigp_emergency(vcpu, dst_vcpu);
}

static int __sigp_conditional_emergency(struct kvm_vcpu *vcpu,
					struct kvm_vcpu *dst_vcpu,
					u16 asn, u64 *reg)
{
	const u64 psw_int_mask = PSW_MASK_IO | PSW_MASK_EXT;
	u16 p_asn, s_asn;
	psw_t *psw;
	bool idle;

	idle = is_vcpu_idle(vcpu);
	psw = &dst_vcpu->arch.sie_block->gpsw;
	p_asn = dst_vcpu->arch.sie_block->gcr[4] & 0xffff;  /* Primary ASN */
	s_asn = dst_vcpu->arch.sie_block->gcr[3] & 0xffff;  /* Secondary ASN */

	/* Inject the emergency signal? */
	if (!is_vcpu_stopped(vcpu)
	    || (psw->mask & psw_int_mask) != psw_int_mask
	    || (idle && psw->addr != 0)
	    || (!idle && (asn == p_asn || asn == s_asn))) {
		return __inject_sigp_emergency(vcpu, dst_vcpu);
	} else {
		*reg &= 0xffffffff00000000UL;
		*reg |= SIGP_STATUS_INCORRECT_STATE;
		return SIGP_CC_STATUS_STORED;
	}
}

static int __sigp_external_call(struct kvm_vcpu *vcpu,
				struct kvm_vcpu *dst_vcpu, u64 *reg)
{
	struct kvm_s390_irq irq = {
		.type = KVM_S390_INT_EXTERNAL_CALL,
		.u.extcall.code = vcpu->vcpu_id,
	};
	int rc;

	rc = kvm_s390_inject_vcpu(dst_vcpu, &irq);
	if (rc == -EBUSY) {
		*reg &= 0xffffffff00000000UL;
		*reg |= SIGP_STATUS_EXT_CALL_PENDING;
		return SIGP_CC_STATUS_STORED;
	} else if (rc == 0) {
		VCPU_EVENT(vcpu, 4, "sent sigp ext call to cpu %x",
			   dst_vcpu->vcpu_id);
	}

	return rc ? rc : SIGP_CC_ORDER_CODE_ACCEPTED;
}

static int __sigp_stop(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu)
{
	struct kvm_s390_irq irq = {
		.type = KVM_S390_SIGP_STOP,
	};
	int rc;

	rc = kvm_s390_inject_vcpu(dst_vcpu, &irq);
	if (rc == -EBUSY)
		rc = SIGP_CC_BUSY;
	else if (rc == 0)
		VCPU_EVENT(vcpu, 4, "sent sigp stop to cpu %x",
			   dst_vcpu->vcpu_id);

	return rc;
}

static int __sigp_stop_and_store_status(struct kvm_vcpu *vcpu,
					struct kvm_vcpu *dst_vcpu, u64 *reg)
{
	struct kvm_s390_irq irq = {
		.type = KVM_S390_SIGP_STOP,
		.u.stop.flags = KVM_S390_STOP_FLAG_STORE_STATUS,
	};
	int rc;

	rc = kvm_s390_inject_vcpu(dst_vcpu, &irq);
	if (rc == -EBUSY)
		rc = SIGP_CC_BUSY;
	else if (rc == 0)
		VCPU_EVENT(vcpu, 4, "sent sigp stop and store status to cpu %x",
			   dst_vcpu->vcpu_id);

	return rc;
}

static int __sigp_set_arch(struct kvm_vcpu *vcpu, u32 parameter,
			   u64 *status_reg)
{
	*status_reg &= 0xffffffff00000000UL;

	/* Reject set arch order, with czam we're always in z/Arch mode. */
	*status_reg |= SIGP_STATUS_INVALID_PARAMETER;
	return SIGP_CC_STATUS_STORED;
}

static int __sigp_set_prefix(struct kvm_vcpu *vcpu, struct kvm_vcpu *dst_vcpu,
			     u32 address, u64 *reg)
{
	struct kvm_s390_irq irq = {
		.type = KVM_S390_SIGP_SET_PREFIX,
		.u.prefix.address = address & 0x7fffe000u,
	};
	int rc;

	/*
	 * Make sure the new value is valid memory. We only need to check the
	 * first page, since address is 8k aligned and memory pieces are always
	 * at least 1MB aligned and have at least a size of 1MB.
	 */
	if (kvm_is_error_gpa(vcpu->kvm, irq.u.prefix.address)) {
		*reg &= 0xffffffff00000000UL;
		*reg |= SIGP_STATUS_INVALID_PARAMETER;
		return SIGP_CC_STATUS_STORED;
	}

	rc = kvm_s390_inject_vcpu(dst_vcpu, &irq);
	if (rc == -EBUSY) {
		*reg &= 0xffffffff00000000UL;
		*reg |= SIGP_STATUS_INCORRECT_STATE;
		return SIGP_CC_STATUS_STORED;
	}

	return rc;
}

static int __sigp_store_status_at_addr(struct kvm_vcpu *vcpu,
				       struct kvm_vcpu *dst_vcpu,
				       u32 addr, u64 *reg)
{
	int rc;

	if (!kvm_s390_test_cpuflags(dst_vcpu, CPUSTAT_STOPPED)) {
		*reg &= 0xffffffff00000000UL;
		*reg |= SIGP_STATUS_INCORRECT_STATE;
		return SIGP_CC_STATUS_STORED;
	}

	addr &= 0x7ffffe00;
	rc = kvm_s390_store_status_unloaded(dst_vcpu, addr);
	if (rc == -EFAULT) {
		*reg &= 0xffffffff00000000UL;
		*reg |= SIGP_STATUS_INVALID_PARAMETER;
		rc = SIGP_CC_STATUS_STORED;
	}
	return rc;
}

static int __sigp_sense_running(struct kvm_vcpu *vcpu,
				struct kvm_vcpu *dst_vcpu, u64 *reg)
{
	int rc;

	if (!test_kvm_facility(vcpu->kvm, 9)) {
		*reg &= 0xffffffff00000000UL;
		*reg |= SIGP_STATUS_INVALID_ORDER;
		return SIGP_CC_STATUS_STORED;
	}

	if (kvm_s390_test_cpuflags(dst_vcpu, CPUSTAT_RUNNING)) {
		/* running */
		rc = SIGP_CC_ORDER_CODE_ACCEPTED;
	} else {
		/* not running */
		*reg &= 0xffffffff00000000UL;
		*reg |= SIGP_STATUS_NOT_RUNNING;
		rc = SIGP_CC_STATUS_STORED;
	}

	VCPU_EVENT(vcpu, 4, "sensed running status of cpu %x rc %x",
		   dst_vcpu->vcpu_id, rc);

	return rc;
}

static int __prepare_sigp_re_start(struct kvm_vcpu *vcpu,
				   struct kvm_vcpu *dst_vcpu, u8 order_code)
{
	struct kvm_s390_local_interrupt *li = &dst_vcpu->arch.local_int;
	/* handle (RE)START in user space */
	int rc = -EOPNOTSUPP;

	/* make sure we don't race with STOP irq injection */
	spin_lock(&li->lock);
	if (kvm_s390_is_stop_irq_pending(dst_vcpu))
		rc = SIGP_CC_BUSY;
	spin_unlock(&li->lock);

	return rc;
}

static int __prepare_sigp_cpu_reset(struct kvm_vcpu *vcpu,
				    struct kvm_vcpu *dst_vcpu, u8 order_code)
{
	/* handle (INITIAL) CPU RESET in user space */
	return -EOPNOTSUPP;
}

static int __prepare_sigp_unknown(struct kvm_vcpu *vcpu,
				  struct kvm_vcpu *dst_vcpu)
{
	/* handle unknown orders in user space */
	return -EOPNOTSUPP;
}

static int handle_sigp_dst(struct kvm_vcpu *vcpu, u8 order_code,
			   u16 cpu_addr, u32 parameter, u64 *status_reg)
{
	int rc;
	struct kvm_vcpu *dst_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr);

	if (!dst_vcpu)
		return SIGP_CC_NOT_OPERATIONAL;

	/*
	 * SIGP RESTART, SIGP STOP, and SIGP STOP AND STORE STATUS orders
	 * are processed asynchronously. Until the affected VCPU finishes
	 * its work and calls back into KVM to clear the (RESTART or STOP)
	 * interrupt, we need to return any new non-reset orders "busy".
	 *
	 * This is important because a single VCPU could issue:
	 *  1) SIGP STOP $DESTINATION
	 *  2) SIGP SENSE $DESTINATION
	 *
	 * If the SIGP SENSE would not be rejected as "busy", it could
	 * return an incorrect answer as to whether the VCPU is STOPPED
	 * or OPERATING.
	 */
	if (order_code != SIGP_INITIAL_CPU_RESET &&
	    order_code != SIGP_CPU_RESET) {
		/*
		 * Lockless check. Both SIGP STOP and SIGP (RE)START
		 * properly synchronize everything while processing
		 * their orders, while the guest cannot observe a
		 * difference when issuing other orders from two
		 * different VCPUs.
		 */
		if (kvm_s390_is_stop_irq_pending(dst_vcpu) ||
		    kvm_s390_is_restart_irq_pending(dst_vcpu))
			return SIGP_CC_BUSY;
	}

	switch (order_code) {
	case SIGP_SENSE:
		vcpu->stat.instruction_sigp_sense++;
		rc = __sigp_sense(vcpu, dst_vcpu, status_reg);
		break;
	case SIGP_EXTERNAL_CALL:
		vcpu->stat.instruction_sigp_external_call++;
		rc = __sigp_external_call(vcpu, dst_vcpu, status_reg);
		break;
	case SIGP_EMERGENCY_SIGNAL:
		vcpu->stat.instruction_sigp_emergency++;
		rc = __sigp_emergency(vcpu, dst_vcpu);
		break;
	case SIGP_STOP:
		vcpu->stat.instruction_sigp_stop++;
		rc = __sigp_stop(vcpu, dst_vcpu);
		break;
	case SIGP_STOP_AND_STORE_STATUS:
		vcpu->stat.instruction_sigp_stop_store_status++;
		rc = __sigp_stop_and_store_status(vcpu, dst_vcpu, status_reg);
		break;
	case SIGP_STORE_STATUS_AT_ADDRESS:
		vcpu->stat.instruction_sigp_store_status++;
		rc = __sigp_store_status_at_addr(vcpu, dst_vcpu, parameter,
						 status_reg);
		break;
	case SIGP_SET_PREFIX:
		vcpu->stat.instruction_sigp_prefix++;
		rc = __sigp_set_prefix(vcpu, dst_vcpu, parameter, status_reg);
		break;
	case SIGP_COND_EMERGENCY_SIGNAL:
		vcpu->stat.instruction_sigp_cond_emergency++;
		rc = __sigp_conditional_emergency(vcpu, dst_vcpu, parameter,
						  status_reg);
		break;
	case SIGP_SENSE_RUNNING:
		vcpu->stat.instruction_sigp_sense_running++;
		rc = __sigp_sense_running(vcpu, dst_vcpu, status_reg);
		break;
	case SIGP_START:
		vcpu->stat.instruction_sigp_start++;
		rc = __prepare_sigp_re_start(vcpu, dst_vcpu, order_code);
		break;
	case SIGP_RESTART:
		vcpu->stat.instruction_sigp_restart++;
		rc = __prepare_sigp_re_start(vcpu, dst_vcpu, order_code);
		break;
	case SIGP_INITIAL_CPU_RESET:
		vcpu->stat.instruction_sigp_init_cpu_reset++;
		rc = __prepare_sigp_cpu_reset(vcpu, dst_vcpu, order_code);
		break;
	case SIGP_CPU_RESET:
		vcpu->stat.instruction_sigp_cpu_reset++;
		rc = __prepare_sigp_cpu_reset(vcpu, dst_vcpu, order_code);
		break;
	default:
		vcpu->stat.instruction_sigp_unknown++;
		rc = __prepare_sigp_unknown(vcpu, dst_vcpu);
	}

	if (rc == -EOPNOTSUPP)
		VCPU_EVENT(vcpu, 4,
			   "sigp order %u -> cpu %x: handled in user space",
			   order_code, dst_vcpu->vcpu_id);

	return rc;
}

static int handle_sigp_order_in_user_space(struct kvm_vcpu *vcpu, u8 order_code,
					   u16 cpu_addr)
{
	if (!vcpu->kvm->arch.user_sigp)
		return 0;

	switch (order_code) {
	case SIGP_SENSE:
	case SIGP_EXTERNAL_CALL:
	case SIGP_EMERGENCY_SIGNAL:
	case SIGP_COND_EMERGENCY_SIGNAL:
	case SIGP_SENSE_RUNNING:
		return 0;
	/* update counters as we're directly dropping to user space */
	case SIGP_STOP:
		vcpu->stat.instruction_sigp_stop++;
		break;
	case SIGP_STOP_AND_STORE_STATUS:
		vcpu->stat.instruction_sigp_stop_store_status++;
		break;
	case SIGP_STORE_STATUS_AT_ADDRESS:
		vcpu->stat.instruction_sigp_store_status++;
		break;
	case SIGP_STORE_ADDITIONAL_STATUS:
		vcpu->stat.instruction_sigp_store_adtl_status++;
		break;
	case SIGP_SET_PREFIX:
		vcpu->stat.instruction_sigp_prefix++;
		break;
	case SIGP_START:
		vcpu->stat.instruction_sigp_start++;
		break;
	case SIGP_RESTART:
		vcpu->stat.instruction_sigp_restart++;
		break;
	case SIGP_INITIAL_CPU_RESET:
		vcpu->stat.instruction_sigp_init_cpu_reset++;
		break;
	case SIGP_CPU_RESET:
		vcpu->stat.instruction_sigp_cpu_reset++;
		break;
	default:
		vcpu->stat.instruction_sigp_unknown++;
	}
	VCPU_EVENT(vcpu, 3, "SIGP: order %u for CPU %d handled in userspace",
		   order_code, cpu_addr);

	return 1;
}

int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu)
{
	int r1 = (vcpu->arch.sie_block->ipa & 0x00f0) >> 4;
	int r3 = vcpu->arch.sie_block->ipa & 0x000f;
	u32 parameter;
	u16 cpu_addr = vcpu->run->s.regs.gprs[r3];
	u8 order_code;
	int rc;

	/* sigp in userspace can exit */
	if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
		return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);

	order_code = kvm_s390_get_base_disp_rs(vcpu, NULL);
	if (handle_sigp_order_in_user_space(vcpu, order_code, cpu_addr))
		return -EOPNOTSUPP;

	if (r1 % 2)
		parameter = vcpu->run->s.regs.gprs[r1];
	else
		parameter = vcpu->run->s.regs.gprs[r1 + 1];

	trace_kvm_s390_handle_sigp(vcpu, order_code, cpu_addr, parameter);
	switch (order_code) {
	case SIGP_SET_ARCHITECTURE:
		vcpu->stat.instruction_sigp_arch++;
		rc = __sigp_set_arch(vcpu, parameter,
				     &vcpu->run->s.regs.gprs[r1]);
		break;
	default:
		rc = handle_sigp_dst(vcpu, order_code, cpu_addr,
				     parameter,
				     &vcpu->run->s.regs.gprs[r1]);
	}

	if (rc < 0)
		return rc;

	kvm_s390_set_psw_cc(vcpu, rc);
	return 0;
}

/*
 * Handle SIGP partial execution interception.
 *
 * This interception will occur at the source cpu when a source cpu sends an
 * external call to a target cpu and the target cpu has the WAIT bit set in
 * its cpuflags. Interception will occur after the interrupt indicator bits at
 * the target cpu have been set. All error cases will lead to instruction
 * interception, therefore nothing is to be checked or prepared.
 */
int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu)
{
	int r3 = vcpu->arch.sie_block->ipa & 0x000f;
	u16 cpu_addr = vcpu->run->s.regs.gprs[r3];
	struct kvm_vcpu *dest_vcpu;
	u8 order_code = kvm_s390_get_base_disp_rs(vcpu, NULL);

	if (order_code == SIGP_EXTERNAL_CALL) {
		trace_kvm_s390_handle_sigp_pei(vcpu, order_code, cpu_addr);

		dest_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, cpu_addr);
		BUG_ON(dest_vcpu == NULL);

		kvm_s390_vcpu_wakeup(dest_vcpu);
		kvm_s390_set_psw_cc(vcpu, SIGP_CC_ORDER_CODE_ACCEPTED);
		return 0;
	}

	return -EOPNOTSUPP;
}