s390 updates for 6.8 merge window

- Add machine variable capacity information to /proc/sysinfo.
 
 - Limit the waste of page tables and always align the vmalloc area size
   and base address on a segment boundary.
 
 - Fix a memory leak when an attempt to register interruption sub class
   (ISC) for the adjunct-processor (AP) guest failed.
 
 - Reset the response code AP_RESPONSE_INVALID_GISA to
   AP_RESPONSE_INVALID_ADDRESS, which is understandable by the guest, in
   response to a failed interruption sub class (ISC) registration attempt.
 
 - Improve reaction to adjunct-processor (AP) AP_RESPONSE_OTHERWISE_CHANGED
   response code when enabling interrupts on behalf of a guest.
 
 - Fix incorrect sysfs 'status' attribute of adjunct-processor (AP) queue
   device bound to the vfio_ap device driver when the mediated device is
   attached to a guest, but the queue device is not passed through.
 
 - Rework struct ap_card to hold the whole adjunct-processor (AP) card
   hardware information. As a result, all the ugly bit checks are replaced
   by simple evaluations of the required bit fields.
 
 - Improve handling of some weird scenarios between secure execution (SE)
   host and SE guest with adjunct-processor (AP) pass-through support.
 
 - Change local_ctl_set_bit() and local_ctl_clear_bit() so they return the
   previous value of the control register being changed. This is useful if
   a bit is only changed temporarily and the previous content needs to be
   restored.
 
 - The kernel starts with machine checks disabled and is expected to enable
   them once trap_init() is called. However, the implementation allowed
   machine checks earlier than that. Consistently enable them in
   trap_init() only.
 
 - local_mcck_disable() and local_mcck_enable() assume that machine checks
   are always enabled. Instead implement and use local_mcck_save() and
   local_mcck_restore() to disable machine checks and restore the previous
   state.
 
 - Modification of the floating point control (FPC) register of a traced
   process via the ptrace interface may lead to corruption of the FPC
   register of the tracing process. Fix this.
 
 - kvm_arch_vcpu_ioctl_set_fpu() allows setting the floating point control
   (FPC) register of a vCPU, but may lead to corruption of the FPC register
   of the host process. Fix this.
 
 - Use READ_ONCE() to read a vCPU floating point register value from the
   memory mapped area. This ensures that, regardless of code generation,
   the value tested for validity is the same value that is actually used.
 
 - Get rid of test_fp_ctl(), since it is quite subtle to use it correctly.
   Instead copy a new floating point control register value into its save
   area and test the validity of the new value when loading it.
 
 - Remove superfluous save_fpu_regs() call.
 
 - Remove s390 support for ARCH_WANTS_DYNAMIC_TASK_STRUCT. All machines
   have provided the vector facility for many years, so there is no need
   to make the task structure size dependent on it.
 
 - Remove the "novx" kernel command line option, as the vector code has
   run without any problems for many years.
 
 - Add the vector facility to the z13 architecture level set (ALS).
   All hypervisors have supported the vector facility for many years.
   This allows compile time optimizations of the kernel.
 
 - Get rid of MACHINE_HAS_VX and replace it with cpu_has_vx(). As a
   result, the compiled code contains fewer runtime checks and less code.
 
 - Convert pgste_get_lock() and pgste_set_unlock() ASM inlines to C.
 
 - Convert the struct subchannel spinlock from pointer to member.
 -----BEGIN PGP SIGNATURE-----
 
 iI0EABYIADUWIQQrtrZiYVkVzKQcYivNdxKlNrRb8AUCZZxnchccYWdvcmRlZXZA
 bGludXguaWJtLmNvbQAKCRDNdxKlNrRb8PyGAP9Z0Z1Pm3hPf24M4FgY5uuRqBHo
 VoiuohOZRGONwifnsgEAmN/3ba/7d/j0iMwpUdUFouiqtwTUihh15wYx1sH2IA8=
 =F6YD
 -----END PGP SIGNATURE-----

Merge tag 's390-6.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Alexander Gordeev:


* tag 's390-6.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (24 commits)
  Revert "s390: update defconfigs"
  s390/cio: make sch->lock spinlock pointer a member
  s390: update defconfigs
  s390/mm: convert pgste locking functions to C
  s390/fpu: get rid of MACHINE_HAS_VX
  s390/als: add vector facility to z13 architecture level set
  s390/fpu: remove "novx" option
  s390/fpu: remove ARCH_WANTS_DYNAMIC_TASK_STRUCT support
  KVM: s390: remove superfluous save_fpu_regs() call
  s390/fpu: get rid of test_fp_ctl()
  KVM: s390: use READ_ONCE() to read fpc register value
  KVM: s390: fix setting of fpc register
  s390/ptrace: handle setting of fpc register correctly
  s390/nmi: implement and use local_mcck_save() / local_mcck_restore()
  s390/nmi: consistently enable machine checks in trap_init()
  s390/ctlreg: return old register contents when changing bits
  s390/ap: handle outband SE bind state change
  s390/ap: store TAPQ hwinfo in struct ap_card
  s390/vfio-ap: fix sysfs status attribute for AP queue devices
  s390/vfio-ap: improve reaction to response code 07 from PQAP(AQIC) command
  ...
Linus Torvalds 2024-01-10 18:18:20 -08:00
commit de927f6c0b
50 changed files with 517 additions and 455 deletions

@@ -123,7 +123,6 @@ config S390
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select ARCH_USE_SYM_ANNOTATIONS
-	select ARCH_WANTS_DYNAMIC_TASK_STRUCT
 	select ARCH_WANTS_NO_INSTR
 	select ARCH_WANT_DEFAULT_BPF_JIT
 	select ARCH_WANT_IPC_PARSE_VERSION

@@ -274,7 +274,7 @@ void parse_boot_command_line(void)
 			memory_limit = round_down(memparse(val, NULL), PAGE_SIZE);
 		if (!strcmp(param, "vmalloc") && val) {
-			vmalloc_size = round_up(memparse(val, NULL), PAGE_SIZE);
+			vmalloc_size = round_up(memparse(val, NULL), _SEGMENT_SIZE);
 			vmalloc_size_set = 1;
 		}

@@ -255,7 +255,8 @@ static unsigned long setup_kernel_memory_layout(void)
 	VMALLOC_END = MODULES_VADDR;
 	/* allow vmalloc area to occupy up to about 1/2 of the rest virtual space left */
-	vmalloc_size = min(vmalloc_size, round_down(VMALLOC_END / 2, _REGION3_SIZE));
+	vsize = round_down(VMALLOC_END / 2, _SEGMENT_SIZE);
+	vmalloc_size = min(vmalloc_size, vsize);
 	VMALLOC_START = VMALLOC_END - vmalloc_size;
 	/* split remaining virtual space between 1:1 mapping & vmemmap array */
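The hunk above aligns the vmalloc size and base on segment boundaries using standard power-of-two rounding. A minimal userspace sketch of that rounding, assuming the s390 segment size of 1 MiB (helper names here are illustrative, not the kernel's round_up()/round_down() macros):

```c
#include <assert.h>

#define SEGMENT_SIZE (1UL << 20)	/* s390 segment: 1 MiB */

/* round up to the next segment boundary (boundary must be a power of two) */
static unsigned long seg_round_up(unsigned long x)
{
	return (x + SEGMENT_SIZE - 1) & ~(SEGMENT_SIZE - 1);
}

/* round down to the previous segment boundary */
static unsigned long seg_round_down(unsigned long x)
{
	return x & ~(SEGMENT_SIZE - 1);
}
```

Aligning both the size and the base this way means the vmalloc area starts and ends exactly on segment-table entries, so no partially used page tables hang off its boundaries.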

@@ -82,7 +82,7 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src,
 	 * it cannot handle a block of data or less, but otherwise
 	 * it can handle data of arbitrary size
 	 */
-	if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20 || !MACHINE_HAS_VX)
+	if (bytes <= CHACHA_BLOCK_SIZE || nrounds != 20 || !cpu_has_vx())
 		chacha_crypt_generic(state, dst, src, bytes, nrounds);
 	else
 		chacha20_crypt_s390(state, dst, src, bytes,

@@ -88,7 +88,7 @@ static inline bool ap_instructions_available(void)
 }
 
 /* TAPQ register GR2 response struct */
-struct ap_tapq_gr2 {
+struct ap_tapq_hwinfo {
 	union {
 		unsigned long value;
 		struct {
@@ -96,11 +96,13 @@ struct ap_tapq_gr2 {
 			unsigned int apinfo : 32;	/* ap type, ... */
 		};
 		struct {
-			unsigned int s	   : 1;	/* APSC */
-			unsigned int m	   : 1;	/* AP4KM */
-			unsigned int c	   : 1;	/* AP4KC */
-			unsigned int mode  : 3;
-			unsigned int n	   : 1;	/* APXA */
+			unsigned int apsc  : 1;	/* APSC */
+			unsigned int mex4k : 1;	/* AP4KM */
+			unsigned int crt4k : 1;	/* AP4KC */
+			unsigned int cca   : 1;	/* D */
+			unsigned int accel : 1;	/* A */
+			unsigned int ep11  : 1;	/* X */
+			unsigned int apxa  : 1;	/* APXA */
 			unsigned int	   : 1;
 			unsigned int class : 8;
 			unsigned int bs	   : 2;	/* SE bind/assoc */
@@ -126,11 +128,12 @@ struct ap_tapq_gr2 {
 /**
  * ap_tapq(): Test adjunct processor queue.
  * @qid: The AP queue number
- * @info: Pointer to queue descriptor
+ * @info: Pointer to tapq hwinfo struct
  *
  * Returns AP queue status structure.
  */
-static inline struct ap_queue_status ap_tapq(ap_qid_t qid, struct ap_tapq_gr2 *info)
+static inline struct ap_queue_status ap_tapq(ap_qid_t qid,
+					     struct ap_tapq_hwinfo *info)
 {
 	union ap_queue_status_reg reg1;
 	unsigned long reg2;
@@ -158,7 +161,7 @@ static inline struct ap_queue_status ap_tapq(ap_qid_t qid, struct ap_tapq_gr2 *i
 * Returns AP queue status structure.
 */
 static inline struct ap_queue_status ap_test_queue(ap_qid_t qid, int tbit,
-						   struct ap_tapq_gr2 *info)
+						   struct ap_tapq_hwinfo *info)
 {
 	if (tbit)
 		qid |= 1UL << 23; /* set T bit*/
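The hwinfo rework above replaces open-coded mask-and-shift checks with named bit fields over the raw TAPQ GR2 value. A reduced userspace model of the idea (field names follow the diff; widths and bit order here are illustrative, since real bitfield layout is machine- and ABI-dependent):

```c
#include <assert.h>

/* reduced model of struct ap_tapq_hwinfo: one raw word, named bit views */
struct hwinfo_model {
	union {
		unsigned int value;	/* raw register contents */
		struct {
			unsigned int apsc  : 1;
			unsigned int mex4k : 1;
			unsigned int crt4k : 1;
			unsigned int cca   : 1;
			unsigned int accel : 1;
			unsigned int ep11  : 1;
			unsigned int apxa  : 1;
		};
	};
};
```

A check like `hwinfo.ep11` then replaces an "ugly" open-coded `(gr2 >> shift) & 1` test, which is exactly the simplification the pull message describes.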

@@ -141,22 +141,26 @@ static __always_inline void local_ctl_store(unsigned int cr, struct ctlreg *reg)
 		: [cr] "i" (cr));
 }
 
-static __always_inline void local_ctl_set_bit(unsigned int cr, unsigned int bit)
+static __always_inline struct ctlreg local_ctl_set_bit(unsigned int cr, unsigned int bit)
 {
-	struct ctlreg reg;
+	struct ctlreg new, old;
 
-	local_ctl_store(cr, &reg);
-	reg.val |= 1UL << bit;
-	local_ctl_load(cr, &reg);
+	local_ctl_store(cr, &old);
+	new = old;
+	new.val |= 1UL << bit;
+	local_ctl_load(cr, &new);
+	return old;
 }
 
-static __always_inline void local_ctl_clear_bit(unsigned int cr, unsigned int bit)
+static __always_inline struct ctlreg local_ctl_clear_bit(unsigned int cr, unsigned int bit)
 {
-	struct ctlreg reg;
+	struct ctlreg new, old;
 
-	local_ctl_store(cr, &reg);
-	reg.val &= ~(1UL << bit);
-	local_ctl_load(cr, &reg);
+	local_ctl_store(cr, &old);
+	new = old;
+	new.val &= ~(1UL << bit);
+	local_ctl_load(cr, &new);
+	return old;
 }
 
 struct lowcore;
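Returning the old contents enables a set-temporarily-then-restore pattern without a separate store. A userspace model of that pattern with a mock register (not the kernel API; names are illustrative):

```c
#include <assert.h>

struct ctlreg { unsigned long val; };

static struct ctlreg cr0_model;	/* stands in for a real control register */

/* model of local_ctl_set_bit(): set a bit and return the previous contents */
static struct ctlreg ctl_set_bit_model(unsigned int bit)
{
	struct ctlreg old = cr0_model;

	cr0_model.val |= 1UL << bit;
	return old;
}

/* model of local_ctl_load(): put saved contents back */
static void ctl_load_model(struct ctlreg reg)
{
	cr0_model = reg;
}
```

The caller saves the return value, does its temporary work, and loads the saved value back; no stand-alone "store current contents" call is needed.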

@@ -46,26 +46,33 @@
 #include <linux/preempt.h>
 #include <asm/asm-extable.h>
+#include <asm/fpu/internal.h>
 
 void save_fpu_regs(void);
 void load_fpu_regs(void);
 void __load_fpu_regs(void);
 
-static inline int test_fp_ctl(u32 fpc)
+/**
+ * sfpc_safe - Set floating point control register safely.
+ * @fpc: new value for floating point control register
+ *
+ * Set floating point control register. This may lead to an exception,
+ * since a saved value may have been modified by user space (ptrace,
+ * signal return, kvm registers) to an invalid value. In such a case
+ * set the floating point control register to zero.
+ */
+static inline void sfpc_safe(u32 fpc)
 {
-	u32 orig_fpc;
-	int rc;
-
-	asm volatile(
-		"	efpc	%1\n"
-		"	sfpc	%2\n"
-		"0:	sfpc	%1\n"
-		"	la	%0,0\n"
-		"1:\n"
-		EX_TABLE(0b,1b)
-		: "=d" (rc), "=&d" (orig_fpc)
-		: "d" (fpc), "0" (-EINVAL));
-	return rc;
+	asm volatile("\n"
+		"0:	sfpc	%[fpc]\n"
+		"1:	nopr	%%r7\n"
+		".pushsection .fixup, \"ax\"\n"
+		"2:	lghi	%[fpc],0\n"
+		"	jg	0b\n"
+		".popsection\n"
+		EX_TABLE(1b, 2b)
+		: [fpc] "+d" (fpc)
+		: : "memory");
}
 
 #define KERNEL_FPC		1
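sfpc_safe() above loads the register and, if the CPU faults on an invalid value, falls back to zero via the exception table. A userspace model that replaces the hardware fault with an explicit validity check (the mask and names are purely illustrative, not the real FPC layout):

```c
#include <assert.h>

static unsigned int fpc_model;	/* mock floating point control register */

#define FPC_VALID_MASK 0x00fcff77u	/* illustrative mask, not the real one */

/* model of sfpc_safe(): an invalid value degrades to zero instead of failing */
static void sfpc_safe_model(unsigned int fpc)
{
	if (fpc & ~FPC_VALID_MASK)	/* stands in for the faulting sfpc */
		fpc = 0;
	fpc_model = fpc;
}
```

The key property is the same as in the kernel version: the load never reports an error to the caller; a bad saved value simply results in a zeroed control register.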

@@ -10,8 +10,14 @@
 #define _ASM_S390_FPU_INTERNAL_H
 
 #include <linux/string.h>
+#include <asm/facility.h>
 #include <asm/fpu/types.h>
 
+static inline bool cpu_has_vx(void)
+{
+	return likely(test_facility(129));
+}
+
 static inline void save_vx_regs(__vector128 *vxrs)
 {
 	asm volatile(
@@ -41,7 +47,7 @@ static inline void fpregs_store(_s390_fp_regs *fpregs, struct fpu *fpu)
 {
 	fpregs->pad = 0;
 	fpregs->fpc = fpu->fpc;
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		convert_vx_to_fp((freg_t *)&fpregs->fprs, fpu->vxrs);
 	else
 		memcpy((freg_t *)&fpregs->fprs, fpu->fprs,
@@ -51,7 +57,7 @@ static inline void fpregs_store(_s390_fp_regs *fpregs, struct fpu *fpu)
 static inline void fpregs_load(_s390_fp_regs *fpregs, struct fpu *fpu)
 {
 	fpu->fpc = fpregs->fpc;
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		convert_fp_to_vx(fpu->vxrs, (freg_t *)&fpregs->fprs);
 	else
 		memcpy(fpu->fprs, (freg_t *)&fpregs->fprs,
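cpu_has_vx() above is just a facility-bit test, replacing the cached MACHINE_FLAG_VX flag. A userspace model of such a test, assuming s390's MSB-first facility-bit numbering (bit 0 is the leftmost bit of byte 0; the array is a mock of the facility list):

```c
#include <assert.h>
#include <stdbool.h>

static unsigned char facility_list_model[32];	/* mock facility list */

/* model of test_facility(): MSB-first bit numbering as on s390 */
static bool test_facility_model(int nr)
{
	return facility_list_model[nr >> 3] & (0x80 >> (nr & 7));
}

/* model of cpu_has_vx(): the vector facility is facility bit 129 */
static bool cpu_has_vx_model(void)
{
	return test_facility_model(129);
}
```

Because the facility list is constant after boot, the compiler can treat such a predicate as effectively static, which is what lets the kernel drop the separate machine flag.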

@@ -184,11 +184,7 @@ struct thread_struct {
 	struct gs_cb *gs_cb;			/* Current guarded storage cb */
 	struct gs_cb *gs_bc_cb;			/* Broadcast guarded storage cb */
 	struct pgm_tdb trap_tdb;		/* Transaction abort diagnose block */
-	/*
-	 * Warning: 'fpu' is dynamically-sized. It *MUST* be at
-	 * the end.
-	 */
-	struct fpu fpu;			/* FP and VX register save area */
+	struct fpu fpu;				/* FP and VX register save area */
 };
 
 /* Flag to disable transactions. */
@@ -331,14 +327,36 @@ static inline unsigned long __extract_psw(void)
 	return (((unsigned long) reg1) << 32) | ((unsigned long) reg2);
 }
 
-static inline void local_mcck_enable(void)
+static inline unsigned long __local_mcck_save(void)
 {
-	__load_psw_mask(__extract_psw() | PSW_MASK_MCHECK);
+	unsigned long mask = __extract_psw();
+
+	__load_psw_mask(mask & ~PSW_MASK_MCHECK);
+	return mask & PSW_MASK_MCHECK;
+}
+
+#define local_mcck_save(mflags)			\
+do {						\
+	typecheck(unsigned long, mflags);	\
+	mflags = __local_mcck_save();		\
+} while (0)
+
+static inline void local_mcck_restore(unsigned long mflags)
+{
+	unsigned long mask = __extract_psw();
+
+	mask &= ~PSW_MASK_MCHECK;
+	__load_psw_mask(mask | mflags);
 }
 
 static inline void local_mcck_disable(void)
 {
-	__load_psw_mask(__extract_psw() & ~PSW_MASK_MCHECK);
+	__local_mcck_save();
+}
+
+static inline void local_mcck_enable(void)
+{
+	__load_psw_mask(__extract_psw() | PSW_MASK_MCHECK);
 }
 
 /*
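Unlike an unconditional disable/enable pair, save/restore keeps machine checks off if they were already off when the critical section was entered. A userspace model with a mock PSW mask (the constant is a stand-in, not the real PSW bit):

```c
#include <assert.h>

#define MASK_MCHECK (1UL << 13)	/* stand-in for PSW_MASK_MCHECK */

static unsigned long psw_mask_model = MASK_MCHECK | 0x7;	/* mock PSW */

/* model of __local_mcck_save(): disable and report the previous state */
static unsigned long mcck_save_model(void)
{
	unsigned long mask = psw_mask_model;

	psw_mask_model = mask & ~MASK_MCHECK;
	return mask & MASK_MCHECK;
}

/* model of local_mcck_restore(): put the saved state back */
static void mcck_restore_model(unsigned long mflags)
{
	psw_mask_model = (psw_mask_model & ~MASK_MCHECK) | mflags;
}
```

A nested save returns "was already disabled", so the inner restore does not accidentally re-enable machine checks inside an outer critical section.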

@@ -28,7 +28,6 @@
 #define MACHINE_FLAG_TOPOLOGY	BIT(10)
 #define MACHINE_FLAG_TE		BIT(11)
 #define MACHINE_FLAG_TLB_LC	BIT(12)
-#define MACHINE_FLAG_VX		BIT(13)
 #define MACHINE_FLAG_TLB_GUEST	BIT(14)
 #define MACHINE_FLAG_NX		BIT(15)
 #define MACHINE_FLAG_GS		BIT(16)
@@ -90,7 +89,6 @@ extern unsigned long mio_wb_bit_mask;
 #define MACHINE_HAS_TOPOLOGY	(S390_lowcore.machine_flags & MACHINE_FLAG_TOPOLOGY)
 #define MACHINE_HAS_TE		(S390_lowcore.machine_flags & MACHINE_FLAG_TE)
 #define MACHINE_HAS_TLB_LC	(S390_lowcore.machine_flags & MACHINE_FLAG_TLB_LC)
-#define MACHINE_HAS_VX		(S390_lowcore.machine_flags & MACHINE_FLAG_VX)
 #define MACHINE_HAS_TLB_GUEST	(S390_lowcore.machine_flags & MACHINE_FLAG_TLB_GUEST)
 #define MACHINE_HAS_NX		(S390_lowcore.machine_flags & MACHINE_FLAG_NX)
 #define MACHINE_HAS_GS		(S390_lowcore.machine_flags & MACHINE_FLAG_GS)

@@ -40,6 +40,10 @@ struct sysinfo_1_1_1 {
 	unsigned int ncr;
 	unsigned int npr;
 	unsigned int ntr;
+	char reserved_3[4];
+	char model_var_cap[16];
+	unsigned int model_var_cap_rating;
+	unsigned int nvr;
 };
 
 struct sysinfo_1_2_1 {

@@ -29,6 +29,7 @@
 #include <asm/lowcore.h>
 #include <asm/switch_to.h>
 #include <asm/vdso.h>
+#include <asm/fpu/api.h>
 #include "compat_linux.h"
 #include "compat_ptrace.h"
 #include "entry.h"
@@ -98,10 +99,6 @@ static int restore_sigregs32(struct pt_regs *regs, _sigregs32 __user *sregs)
 	if (!is_ri_task(current) && (user_sregs.regs.psw.mask & PSW32_MASK_RI))
 		return -EINVAL;
 
-	/* Test the floating-point-control word. */
-	if (test_fp_ctl(user_sregs.fpregs.fpc))
-		return -EINVAL;
-
 	/* Use regs->psw.mask instead of PSW_USER_BITS to preserve PER bit. */
 	regs->psw.mask = (regs->psw.mask & ~(PSW_MASK_USER | PSW_MASK_RI)) |
 		(__u64)(user_sregs.regs.psw.mask & PSW32_MASK_USER) << 32 |
@@ -137,7 +134,7 @@ static int save_sigregs_ext32(struct pt_regs *regs,
 		return -EFAULT;
 
 	/* Save vector registers to signal stack */
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
 			vxrs[i] = current->thread.fpu.vxrs[i].low;
 		if (__copy_to_user(&sregs_ext->vxrs_low, vxrs,
@@ -165,7 +162,7 @@ static int restore_sigregs_ext32(struct pt_regs *regs,
 			*(__u32 *)&regs->gprs[i] = gprs_high[i];
 
 	/* Restore vector registers from signal stack */
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		if (__copy_from_user(vxrs, &sregs_ext->vxrs_low,
 				     sizeof(sregs_ext->vxrs_low)) ||
 		    __copy_from_user(current->thread.fpu.vxrs + __NUM_VXRS_LOW,
@@ -265,7 +262,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
 	 * the machine supports it
 	 */
 	frame_size = sizeof(*frame) - sizeof(frame->sregs_ext.__reserved);
-	if (!MACHINE_HAS_VX)
+	if (!cpu_has_vx())
 		frame_size -= sizeof(frame->sregs_ext.vxrs_low) +
 			      sizeof(frame->sregs_ext.vxrs_high);
 	frame = get_sigframe(&ksig->ka, regs, frame_size);
@@ -348,11 +345,12 @@ static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set,
 	 * the machine supports it
 	 */
 	uc_flags = UC_GPRS_HIGH;
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		uc_flags |= UC_VXRS;
-	} else
+	} else {
 		frame_size -= sizeof(frame->uc.uc_mcontext_ext.vxrs_low) +
 			      sizeof(frame->uc.uc_mcontext_ext.vxrs_high);
+	}
 	frame = get_sigframe(&ksig->ka, regs, frame_size);
 	if (frame == (void __user *) -1UL)
 		return -EFAULT;

@@ -22,6 +22,7 @@
 #include <asm/ipl.h>
 #include <asm/sclp.h>
 #include <asm/maccess.h>
+#include <asm/fpu/api.h>
 
 #define PTR_ADD(x, y) (((char *) (x)) + ((unsigned long) (y)))
 #define PTR_SUB(x, y) (((char *) (x)) - ((unsigned long) (y)))
@@ -319,7 +320,7 @@ static void *fill_cpu_elf_notes(void *ptr, int cpu, struct save_area *sa)
 	ptr = nt_init(ptr, NT_S390_TODPREG, &sa->todpreg, sizeof(sa->todpreg));
 	ptr = nt_init(ptr, NT_S390_CTRS, &sa->ctrs, sizeof(sa->ctrs));
 	ptr = nt_init(ptr, NT_S390_PREFIX, &sa->prefix, sizeof(sa->prefix));
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		ptr = nt_init(ptr, NT_S390_VXRS_HIGH,
 			      &sa->vxrs_high, sizeof(sa->vxrs_high));
 		ptr = nt_init(ptr, NT_S390_VXRS_LOW,
@@ -343,7 +344,7 @@ static size_t get_cpu_elf_notes_size(void)
 	size += nt_size(NT_S390_TODPREG, sizeof(sa->todpreg));
 	size += nt_size(NT_S390_CTRS, sizeof(sa->ctrs));
 	size += nt_size(NT_S390_PREFIX, sizeof(sa->prefix));
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		size += nt_size(NT_S390_VXRS_HIGH, sizeof(sa->vxrs_high));
 		size += nt_size(NT_S390_VXRS_LOW, sizeof(sa->vxrs_low));
 	}

@@ -229,10 +229,8 @@ static __init void detect_machine_facilities(void)
 	}
 	if (test_facility(51))
 		S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_LC;
-	if (test_facility(129)) {
-		S390_lowcore.machine_flags |= MACHINE_FLAG_VX;
+	if (test_facility(129))
 		system_ctl_set_bit(0, CR0_VECTOR_BIT);
-	}
 	if (test_facility(130))
 		S390_lowcore.machine_flags |= MACHINE_FLAG_NX;
 	if (test_facility(133))
@@ -271,14 +269,6 @@ static inline void setup_access_registers(void)
 	restore_access_regs(acrs);
 }
 
-static int __init disable_vector_extension(char *str)
-{
-	S390_lowcore.machine_flags &= ~MACHINE_FLAG_VX;
-	system_ctl_clear_bit(0, CR0_VECTOR_BIT);
-	return 0;
-}
-early_param("novx", disable_vector_extension);
-
 char __bootdata(early_command_line)[COMMAND_LINE_SIZE];
 static void __init setup_boot_command_line(void)
 {

@@ -24,7 +24,7 @@ void __kernel_fpu_begin(struct kernel_fpu *state, u32 flags)
 	/* Save floating point control */
 	asm volatile("stfpc %0" : "=Q" (state->fpc));
 
-	if (!MACHINE_HAS_VX) {
+	if (!cpu_has_vx()) {
 		if (flags & KERNEL_VXR_V0V7) {
 			/* Save floating-point registers */
 			asm volatile("std 0,%0" : "=Q" (state->fprs[0]));
@@ -106,7 +106,7 @@ void __kernel_fpu_end(struct kernel_fpu *state, u32 flags)
 	/* Restore floating-point controls */
 	asm volatile("lfpc %0" : : "Q" (state->fpc));
 
-	if (!MACHINE_HAS_VX) {
+	if (!cpu_has_vx()) {
 		if (flags & KERNEL_VXR_V0V7) {
 			/* Restore floating-point registers */
 			asm volatile("ld 0,%0" : : "Q" (state->fprs[0]));
@@ -177,11 +177,11 @@ EXPORT_SYMBOL(__kernel_fpu_end);
 
 void __load_fpu_regs(void)
 {
-	struct fpu *state = &current->thread.fpu;
 	unsigned long *regs = current->thread.fpu.regs;
+	struct fpu *state = &current->thread.fpu;
 
-	asm volatile("lfpc %0" : : "Q" (state->fpc));
-	if (likely(MACHINE_HAS_VX)) {
+	sfpc_safe(state->fpc);
+	if (likely(cpu_has_vx())) {
 		asm volatile("lgr 1,%0\n"
 			     "VLM 0,15,0,1\n"
 			     "VLM 16,31,256,1\n"
@@ -232,7 +232,7 @@ void save_fpu_regs(void)
 	regs = current->thread.fpu.regs;
 
 	asm volatile("stfpc %0" : "=Q" (state->fpc));
-	if (likely(MACHINE_HAS_VX)) {
+	if (likely(cpu_has_vx())) {
 		asm volatile("lgr 1,%0\n"
			     "VSTM 0,15,0,1\n"
			     "VSTM 16,31,256,1\n"

@@ -91,7 +91,7 @@ static noinline void __machine_kdump(void *image)
 	}
 	/* Store status of the boot CPU */
 	mcesa = __va(S390_lowcore.mcesad & MCESA_ORIGIN_MASK);
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		save_vx_regs((__vector128 *) mcesa->vector_save_area);
 	if (MACHINE_HAS_GS) {
 		local_ctl_store(2, &cr2_old.reg);

@@ -32,6 +32,7 @@
 #include <asm/asm-offsets.h>
 #include <asm/pai.h>
 #include <asm/vx-insn.h>
+#include <asm/fpu/api.h>
 
 struct mcck_struct {
 	unsigned int kill_task : 1;
@@ -45,7 +46,7 @@ static DEFINE_PER_CPU(struct mcck_struct, cpu_mcck);
 
 static inline int nmi_needs_mcesa(void)
 {
-	return MACHINE_HAS_VX || MACHINE_HAS_GS;
+	return cpu_has_vx() || MACHINE_HAS_GS;
 }
 
 /*
@@ -159,16 +160,17 @@ NOKPROBE_SYMBOL(s390_handle_damage);
 void s390_handle_mcck(void)
 {
 	struct mcck_struct mcck;
+	unsigned long mflags;
 
 	/*
 	 * Disable machine checks and get the current state of accumulated
 	 * machine checks. Afterwards delete the old state and enable machine
 	 * checks again.
 	 */
-	local_mcck_disable();
+	local_mcck_save(mflags);
 	mcck = *this_cpu_ptr(&cpu_mcck);
 	memset(this_cpu_ptr(&cpu_mcck), 0, sizeof(mcck));
-	local_mcck_enable();
+	local_mcck_restore(mflags);
 
 	if (mcck.channel_report)
 		crw_handle_channel_report();
@@ -234,7 +236,7 @@ static int notrace s390_validate_registers(union mci mci)
 	}
 	mcesa = __va(S390_lowcore.mcesad & MCESA_ORIGIN_MASK);
-	if (!MACHINE_HAS_VX) {
+	if (!cpu_has_vx()) {
 		/* Validate floating point registers */
 		asm volatile(
 			"	ld	0,0(%0)\n"

@@ -20,8 +20,10 @@ u64 perf_reg_value(struct pt_regs *regs, int idx)
 		return 0;
 
 	idx -= PERF_REG_S390_FP0;
-	fp = MACHINE_HAS_VX ? *(freg_t *)(current->thread.fpu.vxrs + idx)
-			    : current->thread.fpu.fprs[idx];
+	if (cpu_has_vx())
+		fp = *(freg_t *)(current->thread.fpu.vxrs + idx);
+	else
+		fp = current->thread.fpu.fprs[idx];
 	return fp.ui;
 }

@@ -89,7 +89,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 	 */
 	save_fpu_regs();
 
-	memcpy(dst, src, arch_task_struct_size);
+	*dst = *src;
 	dst->thread.fpu.regs = dst->thread.fpu.fprs;
 
 	/*
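With ARCH_WANTS_DYNAMIC_TASK_STRUCT gone, the task structure has a fixed size known to the compiler, so plain structure assignment can replace the runtime-sized memcpy() above. A sketch of the same idea with a hypothetical fixed-size struct:

```c
#include <assert.h>

struct task_model {	/* hypothetical fixed-size task structure */
	int pid;
	unsigned long fpu_save[16];
};

/* model of the new arch_dup_task_struct() copy: no runtime length needed */
static void dup_task_model(struct task_model *dst, const struct task_model *src)
{
	*dst = *src;	/* size is sizeof(*dst), fixed at compile time */
}
```

Structure assignment also lets the compiler pick the best copy strategy and keeps type checking, which the byte-wise memcpy() with a runtime length did not.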

@@ -201,11 +201,8 @@ static int __init setup_hwcaps(void)
 	if (MACHINE_HAS_TE)
 		elf_hwcap |= HWCAP_TE;
 
-	/*
-	 * Vector extension can be disabled with the "novx" parameter.
-	 * Use MACHINE_HAS_VX instead of facility bit 129.
-	 */
-	if (MACHINE_HAS_VX) {
+	/* vector */
+	if (test_facility(129)) {
 		elf_hwcap |= HWCAP_VXRS;
 		if (test_facility(134))
 			elf_hwcap |= HWCAP_VXRS_BCD;


@@ -30,6 +30,7 @@
 #include <asm/switch_to.h>
 #include <asm/runtime_instr.h>
 #include <asm/facility.h>
+#include <asm/fpu/api.h>
 #include "entry.h"
@@ -254,7 +255,7 @@ static unsigned long __peek_user(struct task_struct *child, addr_t addr)
 	 * or the child->thread.fpu.vxrs array
 	 */
 	offset = addr - offsetof(struct user, regs.fp_regs.fprs);
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		tmp = *(addr_t *)
 		       ((addr_t) child->thread.fpu.vxrs + 2*offset);
 	else
@@ -392,8 +393,7 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
 		/*
 		 * floating point control reg. is in the thread structure
 		 */
-		if ((unsigned int) data != 0 ||
-		    test_fp_ctl(data >> (BITS_PER_LONG - 32)))
+		if ((unsigned int)data != 0)
 			return -EINVAL;
 		child->thread.fpu.fpc = data >> (BITS_PER_LONG - 32);
@@ -403,7 +403,7 @@ static int __poke_user(struct task_struct *child, addr_t addr, addr_t data)
 	 * or the child->thread.fpu.vxrs array
 	 */
 	offset = addr - offsetof(struct user, regs.fp_regs.fprs);
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		*(addr_t *)((addr_t)
 			child->thread.fpu.vxrs + 2*offset) = data;
 	else
@@ -630,7 +630,7 @@ static u32 __peek_user_compat(struct task_struct *child, addr_t addr)
 	 * or the child->thread.fpu.vxrs array
 	 */
 	offset = addr - offsetof(struct compat_user, regs.fp_regs.fprs);
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		tmp = *(__u32 *)
 		       ((addr_t) child->thread.fpu.vxrs + 2*offset);
 	else
@@ -748,8 +748,6 @@ static int __poke_user_compat(struct task_struct *child,
 		/*
 		 * floating point control reg. is in the thread structure
 		 */
-		if (test_fp_ctl(tmp))
-			return -EINVAL;
 		child->thread.fpu.fpc = data;
 	} else if (addr < offsetof(struct compat_user, regs.fp_regs) + sizeof(s390_fp_regs)) {
@@ -758,7 +756,7 @@ static int __poke_user_compat(struct task_struct *child,
 	 * or the child->thread.fpu.vxrs array
 	 */
 	offset = addr - offsetof(struct compat_user, regs.fp_regs.fprs);
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		*(__u32 *)((addr_t)
 			child->thread.fpu.vxrs + 2*offset) = tmp;
 	else
@@ -914,7 +912,7 @@ static int s390_fpregs_set(struct task_struct *target,
 	if (target == current)
 		save_fpu_regs();
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		convert_vx_to_fp(fprs, target->thread.fpu.vxrs);
 	else
 		memcpy(&fprs, target->thread.fpu.fprs, sizeof(fprs));
@@ -926,7 +924,7 @@ static int s390_fpregs_set(struct task_struct *target,
 				0, offsetof(s390_fp_regs, fprs));
 		if (rc)
 			return rc;
-		if (ufpc[1] != 0 || test_fp_ctl(ufpc[0]))
+		if (ufpc[1] != 0)
 			return -EINVAL;
 		target->thread.fpu.fpc = ufpc[0];
 	}
@@ -937,7 +935,7 @@ static int s390_fpregs_set(struct task_struct *target,
 	if (rc)
 		return rc;
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		convert_fp_to_vx(target->thread.fpu.vxrs, fprs);
 	else
 		memcpy(target->thread.fpu.fprs, &fprs, sizeof(fprs));
@@ -988,7 +986,7 @@ static int s390_vxrs_low_get(struct task_struct *target,
 	__u64 vxrs[__NUM_VXRS_LOW];
 	int i;
-	if (!MACHINE_HAS_VX)
+	if (!cpu_has_vx())
 		return -ENODEV;
 	if (target == current)
 		save_fpu_regs();
@@ -1005,7 +1003,7 @@ static int s390_vxrs_low_set(struct task_struct *target,
 	__u64 vxrs[__NUM_VXRS_LOW];
 	int i, rc;
-	if (!MACHINE_HAS_VX)
+	if (!cpu_has_vx())
 		return -ENODEV;
 	if (target == current)
 		save_fpu_regs();
@@ -1025,7 +1023,7 @@ static int s390_vxrs_high_get(struct task_struct *target,
 			      const struct user_regset *regset,
 			      struct membuf to)
 {
-	if (!MACHINE_HAS_VX)
+	if (!cpu_has_vx())
 		return -ENODEV;
 	if (target == current)
 		save_fpu_regs();
@@ -1040,7 +1038,7 @@ static int s390_vxrs_high_set(struct task_struct *target,
 {
 	int rc;
-	if (!MACHINE_HAS_VX)
+	if (!cpu_has_vx())
 		return -ENODEV;
 	if (target == current)
 		save_fpu_regs();


@@ -408,15 +408,15 @@ static void __init setup_lowcore(void)
 	lc->restart_psw.mask = PSW_KERNEL_BITS & ~PSW_MASK_DAT;
 	lc->restart_psw.addr = __pa(restart_int_handler);
-	lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+	lc->external_new_psw.mask = PSW_KERNEL_BITS;
 	lc->external_new_psw.addr = (unsigned long) ext_int_handler;
-	lc->svc_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+	lc->svc_new_psw.mask = PSW_KERNEL_BITS;
 	lc->svc_new_psw.addr = (unsigned long) system_call;
-	lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+	lc->program_new_psw.mask = PSW_KERNEL_BITS;
 	lc->program_new_psw.addr = (unsigned long) pgm_check_handler;
 	lc->mcck_new_psw.mask = PSW_KERNEL_BITS;
 	lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler;
-	lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+	lc->io_new_psw.mask = PSW_KERNEL_BITS;
 	lc->io_new_psw.addr = (unsigned long) io_int_handler;
 	lc->clock_comparator = clock_comparator_max;
 	lc->current_task = (unsigned long)&init_task;
@@ -819,22 +819,6 @@ static void __init setup_randomness(void)
 		static_branch_enable(&s390_arch_random_available);
 }
-/*
- * Find the correct size for the task_struct. This depends on
- * the size of the struct fpu at the end of the thread_struct
- * which is embedded in the task_struct.
- */
-static void __init setup_task_size(void)
-{
-	int task_size = sizeof(struct task_struct);
-	if (!MACHINE_HAS_VX) {
-		task_size -= sizeof(__vector128) * __NUM_VXRS;
-		task_size += sizeof(freg_t) * __NUM_FPRS;
-	}
-	arch_task_struct_size = task_size;
-}
 /*
  * Issue diagnose 318 to set the control program name and
  * version codes.
@@ -927,7 +911,6 @@ void __init setup_arch(char **cmdline_p)
 	os_info_init();
 	setup_ipl();
-	setup_task_size();
 	setup_control_program_code();
 	/* Do some memory reservations *before* memory is added to memblock */


@@ -150,10 +150,6 @@ static int restore_sigregs(struct pt_regs *regs, _sigregs __user *sregs)
 	if (!is_ri_task(current) && (user_sregs.regs.psw.mask & PSW_MASK_RI))
 		return -EINVAL;
-	/* Test the floating-point-control word. */
-	if (test_fp_ctl(user_sregs.fpregs.fpc))
-		return -EINVAL;
 	/* Use regs->psw.mask instead of PSW_USER_BITS to preserve PER bit. */
 	regs->psw.mask = (regs->psw.mask & ~(PSW_MASK_USER | PSW_MASK_RI)) |
 		(user_sregs.regs.psw.mask & (PSW_MASK_USER | PSW_MASK_RI));
@@ -183,7 +179,7 @@ static int save_sigregs_ext(struct pt_regs *regs,
 	int i;
 	/* Save vector registers to signal stack */
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		for (i = 0; i < __NUM_VXRS_LOW; i++)
 			vxrs[i] = current->thread.fpu.vxrs[i].low;
 		if (__copy_to_user(&sregs_ext->vxrs_low, vxrs,
@@ -203,7 +199,7 @@ static int restore_sigregs_ext(struct pt_regs *regs,
 	int i;
 	/* Restore vector registers from signal stack */
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		if (__copy_from_user(vxrs, &sregs_ext->vxrs_low,
				     sizeof(sregs_ext->vxrs_low)) ||
		    __copy_from_user(current->thread.fpu.vxrs + __NUM_VXRS_LOW,
@@ -301,7 +297,7 @@ static int setup_frame(int sig, struct k_sigaction *ka,
 	 * included in the signal frame on a 31-bit system.
 	 */
 	frame_size = sizeof(*frame) - sizeof(frame->sregs_ext);
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		frame_size += sizeof(frame->sregs_ext);
 	frame = get_sigframe(ka, regs, frame_size);
 	if (frame == (void __user *) -1UL)
@@ -378,7 +374,7 @@ static int setup_rt_frame(struct ksignal *ksig, sigset_t *set,
 	 * included in the signal frame on a 31-bit system.
 	 */
 	uc_flags = 0;
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		frame_size += sizeof(_sigregs_ext);
 		uc_flags |= UC_VXRS;
 	}


@@ -582,7 +582,7 @@ int smp_store_status(int cpu)
 	if (__pcpu_sigp_relax(pcpu->address, SIGP_STORE_STATUS_AT_ADDRESS,
			      pa) != SIGP_CC_ORDER_CODE_ACCEPTED)
 		return -EIO;
-	if (!MACHINE_HAS_VX && !MACHINE_HAS_GS)
+	if (!cpu_has_vx() && !MACHINE_HAS_GS)
 		return 0;
 	pa = lc->mcesad & MCESA_ORIGIN_MASK;
 	if (MACHINE_HAS_GS)
@@ -638,7 +638,7 @@ void __init smp_save_dump_ipl_cpu(void)
 	copy_oldmem_kernel(regs, __LC_FPREGS_SAVE_AREA, 512);
 	save_area_add_regs(sa, regs);
 	memblock_free(regs, 512);
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		save_area_add_vxrs(sa, boot_cpu_vector_save_area);
 }
@@ -671,7 +671,7 @@ void __init smp_save_dump_secondary_cpus(void)
 		panic("could not allocate memory for save area\n");
 	__pcpu_sigp_relax(addr, SIGP_STORE_STATUS_AT_ADDRESS, __pa(page));
 	save_area_add_regs(sa, page);
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		__pcpu_sigp_relax(addr, SIGP_STORE_ADDITIONAL_STATUS, __pa(page));
 		save_area_add_vxrs(sa, page);
 	}


@@ -81,10 +81,12 @@ static bool convert_ext_name(unsigned char encoding, char *name, size_t len)
 static void stsi_1_1_1(struct seq_file *m, struct sysinfo_1_1_1 *info)
 {
+	bool has_var_cap;
 	int i;
 	if (stsi(info, 1, 1, 1))
 		return;
+	has_var_cap = !!info->model_var_cap[0];
 	EBCASC(info->manufacturer, sizeof(info->manufacturer));
 	EBCASC(info->type, sizeof(info->type));
 	EBCASC(info->model, sizeof(info->model));
@@ -93,6 +95,8 @@ static void stsi_1_1_1(struct seq_file *m, struct sysinfo_1_1_1 *info)
 	EBCASC(info->model_capacity, sizeof(info->model_capacity));
 	EBCASC(info->model_perm_cap, sizeof(info->model_perm_cap));
 	EBCASC(info->model_temp_cap, sizeof(info->model_temp_cap));
+	if (has_var_cap)
+		EBCASC(info->model_var_cap, sizeof(info->model_var_cap));
 	seq_printf(m, "Manufacturer: %-16.16s\n", info->manufacturer);
 	seq_printf(m, "Type: %-4.4s\n", info->type);
 	if (info->lic)
@@ -120,12 +124,18 @@ static void stsi_1_1_1(struct seq_file *m, struct sysinfo_1_1_1 *info)
 		seq_printf(m, "Model Temp. Capacity: %-16.16s %08u\n",
			   info->model_temp_cap,
			   info->model_temp_cap_rating);
+	if (has_var_cap && info->model_var_cap_rating)
+		seq_printf(m, "Model Var. Capacity: %-16.16s %08u\n",
+			   info->model_var_cap,
+			   info->model_var_cap_rating);
 	if (info->ncr)
 		seq_printf(m, "Nominal Cap. Rating: %08u\n", info->ncr);
 	if (info->npr)
 		seq_printf(m, "Nominal Perm. Rating: %08u\n", info->npr);
 	if (info->ntr)
 		seq_printf(m, "Nominal Temp. Rating: %08u\n", info->ntr);
+	if (has_var_cap && info->nvr)
+		seq_printf(m, "Nominal Var. Rating: %08u\n", info->nvr);
 	if (info->cai) {
 		seq_printf(m, "Capacity Adj. Ind.: %d\n", info->cai);
 		seq_printf(m, "Capacity Ch. Reason: %d\n", info->ccr);


@@ -195,7 +195,7 @@ static void vector_exception(struct pt_regs *regs)
 {
 	int si_code, vic;
-	if (!MACHINE_HAS_VX) {
+	if (!cpu_has_vx()) {
 		do_trap(regs, SIGILL, ILL_ILLOPN, "illegal operation");
 		return;
 	}
@@ -288,6 +288,17 @@ static void __init test_monitor_call(void)
 void __init trap_init(void)
 {
+	unsigned long flags;
+	struct ctlreg cr0;
+
+	local_irq_save(flags);
+	cr0 = local_ctl_clear_bit(0, CR0_LOW_ADDRESS_PROTECTION_BIT);
+	psw_bits(S390_lowcore.external_new_psw).mcheck = 1;
+	psw_bits(S390_lowcore.program_new_psw).mcheck = 1;
+	psw_bits(S390_lowcore.svc_new_psw).mcheck = 1;
+	psw_bits(S390_lowcore.io_new_psw).mcheck = 1;
+	local_ctl_load(0, &cr0);
+	local_irq_restore(flags);
 	local_mcck_enable();
 	test_monitor_call();
 }


@@ -52,6 +52,7 @@ SECTIONS
 		SOFTIRQENTRY_TEXT
 		FTRACE_HOTPATCH_TRAMPOLINES_TEXT
 		*(.text.*_indirect_*)
+		*(.fixup)
 		*(.gnu.warning)
 	. = ALIGN(PAGE_SIZE);
 	_etext = .;		/* End of text section */


@@ -639,7 +639,7 @@ static int __write_machine_check(struct kvm_vcpu *vcpu,
 	rc |= put_guest_lc(vcpu, mci.val, (u64 __user *) __LC_MCCK_CODE);
 	/* Register-save areas */
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		convert_vx_to_fp(fprs, (__vector128 *) vcpu->run->s.regs.vrs);
 		rc |= write_guest_lc(vcpu, __LC_FPREGS_SAVE_AREA, fprs, 128);
 	} else {


@@ -618,7 +618,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = MACHINE_HAS_ESOP;
 		break;
 	case KVM_CAP_S390_VECTOR_REGISTERS:
-		r = MACHINE_HAS_VX;
+		r = test_facility(129);
 		break;
 	case KVM_CAP_S390_RI:
 		r = test_facility(64);
@@ -767,7 +767,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
 		mutex_lock(&kvm->lock);
 		if (kvm->created_vcpus) {
 			r = -EBUSY;
-		} else if (MACHINE_HAS_VX) {
+		} else if (cpu_has_vx()) {
 			set_kvm_facility(kvm->arch.model.fac_mask, 129);
 			set_kvm_facility(kvm->arch.model.fac_list, 129);
 			if (test_facility(134)) {
@@ -3962,9 +3962,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	if (test_kvm_facility(vcpu->kvm, 156))
 		vcpu->run->kvm_valid_regs |= KVM_SYNC_ETOKEN;
 	/* fprs can be synchronized via vrs, even if the guest has no vx. With
-	 * MACHINE_HAS_VX, (load|store)_fpu_regs() will work with vrs format.
+	 * cpu_has_vx(), (load|store)_fpu_regs() will work with vrs format.
 	 */
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		vcpu->run->kvm_valid_regs |= KVM_SYNC_VRS;
 	else
 		vcpu->run->kvm_valid_regs |= KVM_SYNC_FPRS;
@@ -4316,18 +4316,13 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 	vcpu_load(vcpu);
-	if (test_fp_ctl(fpu->fpc)) {
-		ret = -EINVAL;
-		goto out;
-	}
 	vcpu->run->s.regs.fpc = fpu->fpc;
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		convert_fp_to_vx((__vector128 *) vcpu->run->s.regs.vrs,
				 (freg_t *) fpu->fprs);
 	else
 		memcpy(vcpu->run->s.regs.fprs, &fpu->fprs, sizeof(fpu->fprs));
-out:
 	vcpu_put(vcpu);
 	return ret;
 }
@@ -4336,9 +4331,7 @@ int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
 {
 	vcpu_load(vcpu);
-	/* make sure we have the latest values */
-	save_fpu_regs();
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		convert_vx_to_fp((freg_t *) fpu->fprs,
				 (__vector128 *) vcpu->run->s.regs.vrs);
 	else
@@ -4963,14 +4956,11 @@ static void sync_regs(struct kvm_vcpu *vcpu)
 	save_fpu_regs();
 	vcpu->arch.host_fpregs.fpc = current->thread.fpu.fpc;
 	vcpu->arch.host_fpregs.regs = current->thread.fpu.regs;
-	if (MACHINE_HAS_VX)
+	if (cpu_has_vx())
 		current->thread.fpu.regs = vcpu->run->s.regs.vrs;
 	else
 		current->thread.fpu.regs = vcpu->run->s.regs.fprs;
 	current->thread.fpu.fpc = vcpu->run->s.regs.fpc;
-	if (test_fp_ctl(current->thread.fpu.fpc))
-		/* User space provided an invalid FPC, let's clear it */
-		current->thread.fpu.fpc = 0;
 	/* Sync fmt2 only data */
 	if (likely(!kvm_s390_pv_cpu_is_protected(vcpu))) {
@@ -5145,7 +5135,7 @@ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long gpa)
 	gpa -= __LC_FPREGS_SAVE_AREA;
 	/* manually convert vector registers if necessary */
-	if (MACHINE_HAS_VX) {
+	if (cpu_has_vx()) {
 		convert_vx_to_fp(fprs, (__vector128 *) vcpu->run->s.regs.vrs);
 		rc = write_guest_abs(vcpu, gpa + __LC_FPREGS_SAVE_AREA,
				     fprs, 128);


@@ -350,15 +350,15 @@ static noinline int unwindme_func3(struct unwindme *u)
 /* This function must appear in the backtrace. */
 static noinline int unwindme_func2(struct unwindme *u)
 {
-	unsigned long flags;
+	unsigned long flags, mflags;
 	int rc;
 	if (u->flags & UWM_SWITCH_STACK) {
 		local_irq_save(flags);
-		local_mcck_disable();
+		local_mcck_save(mflags);
 		rc = call_on_stack(1, S390_lowcore.nodat_stack,
				   int, unwindme_func3, struct unwindme *, u);
-		local_mcck_enable();
+		local_mcck_restore(mflags);
 		local_irq_restore(flags);
 		return rc;
 	} else {


@@ -125,32 +125,23 @@ static inline pte_t ptep_flush_lazy(struct mm_struct *mm,
 static inline pgste_t pgste_get_lock(pte_t *ptep)
 {
-	unsigned long new = 0;
+	unsigned long value = 0;
 #ifdef CONFIG_PGSTE
-	unsigned long old;
-	asm(
-		"	lg	%0,%2\n"
-		"0:	lgr	%1,%0\n"
-		"	nihh	%0,0xff7f\n"	/* clear PCL bit in old */
-		"	oihh	%1,0x0080\n"	/* set PCL bit in new */
-		"	csg	%0,%1,%2\n"
-		"	jl	0b\n"
-		: "=&d" (old), "=&d" (new), "=Q" (ptep[PTRS_PER_PTE])
-		: "Q" (ptep[PTRS_PER_PTE]) : "cc", "memory");
+	unsigned long *ptr = (unsigned long *)(ptep + PTRS_PER_PTE);
+
+	do {
+		value = __atomic64_or_barrier(PGSTE_PCL_BIT, ptr);
+	} while (value & PGSTE_PCL_BIT);
+	value |= PGSTE_PCL_BIT;
 #endif
-	return __pgste(new);
+	return __pgste(value);
 }
 static inline void pgste_set_unlock(pte_t *ptep, pgste_t pgste)
 {
 #ifdef CONFIG_PGSTE
-	asm(
-		"	nihh	%1,0xff7f\n"	/* clear PCL bit */
-		"	stg	%1,%0\n"
-		: "=Q" (ptep[PTRS_PER_PTE])
-		: "d" (pgste_val(pgste)), "Q" (ptep[PTRS_PER_PTE])
-		: "cc", "memory");
+	barrier();
+	WRITE_ONCE(*(unsigned long *)(ptep + PTRS_PER_PTE), pgste_val(pgste) & ~PGSTE_PCL_BIT);
 #endif
 }


@@ -46,6 +46,7 @@ static struct facility_def facility_defs[] = {
 #endif
 #ifdef CONFIG_HAVE_MARCH_Z13_FEATURES
 			53, /* load-and-zero-rightmost-byte, etc. */
+			129, /* vector */
 #endif
 #ifdef CONFIG_HAVE_MARCH_Z14_FEATURES
 			58, /* miscellaneous-instruction-extension 2 */


@@ -219,16 +219,16 @@ EXPORT_SYMBOL_GPL(chsc_sadc);
 static int s390_subchannel_remove_chpid(struct subchannel *sch, void *data)
 {
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	if (sch->driver && sch->driver->chp_event)
 		if (sch->driver->chp_event(sch, data, CHP_OFFLINE) != 0)
 			goto out_unreg;
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	return 0;
 out_unreg:
 	sch->lpm = 0;
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	css_schedule_eval(sch->schid);
 	return 0;
 }
@@ -258,10 +258,10 @@ void chsc_chp_offline(struct chp_id chpid)
 static int __s390_process_res_acc(struct subchannel *sch, void *data)
 {
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	if (sch->driver && sch->driver->chp_event)
 		sch->driver->chp_event(sch, data, CHP_ONLINE);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	return 0;
 }
@@ -292,10 +292,10 @@ static void s390_process_res_acc(struct chp_link *link)
 static int process_fces_event(struct subchannel *sch, void *data)
 {
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	if (sch->driver && sch->driver->chp_event)
 		sch->driver->chp_event(sch, data, CHP_FCES_EVENT);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	return 0;
 }
@@ -769,11 +769,11 @@ static void __s390_subchannel_vary_chpid(struct subchannel *sch,
 	memset(&link, 0, sizeof(struct chp_link));
 	link.chpid = chpid;
-	spin_lock_irqsave(sch->lock, flags);
+	spin_lock_irqsave(&sch->lock, flags);
 	if (sch->driver && sch->driver->chp_event)
 		sch->driver->chp_event(sch, &link,
				       on ? CHP_VARY_ON : CHP_VARY_OFF);
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 }
 static int s390_subchannel_vary_chpid_off(struct subchannel *sch, void *data)


@@ -211,10 +211,10 @@ static int chsc_async(struct chsc_async_area *chsc_area,
 	chsc_area->header.key = PAGE_DEFAULT_KEY >> 4;
 	while ((sch = chsc_get_next_subchannel(sch))) {
-		spin_lock(sch->lock);
+		spin_lock(&sch->lock);
 		private = dev_get_drvdata(&sch->dev);
 		if (private->request) {
-			spin_unlock(sch->lock);
+			spin_unlock(&sch->lock);
 			ret = -EBUSY;
 			continue;
 		}
@@ -239,7 +239,7 @@ static int chsc_async(struct chsc_async_area *chsc_area,
 		default:
 			ret = -ENODEV;
 		}
-		spin_unlock(sch->lock);
+		spin_unlock(&sch->lock);
 		CHSC_MSG(2, "chsc on 0.%x.%04x returned cc=%d\n",
			 sch->schid.ssid, sch->schid.sch_no, cc);
 		if (ret == -EINPROGRESS)


@@ -546,7 +546,7 @@ static irqreturn_t do_cio_interrupt(int irq, void *dummy)
 		return IRQ_HANDLED;
 	}
 	sch = phys_to_virt(tpi_info->intparm);
-	spin_lock(sch->lock);
+	spin_lock(&sch->lock);
 	/* Store interrupt response block to lowcore. */
 	if (tsch(tpi_info->schid, irb) == 0) {
 		/* Keep subchannel information word up to date. */
@@ -558,7 +558,7 @@ static irqreturn_t do_cio_interrupt(int irq, void *dummy)
 		inc_irq_stat(IRQIO_CIO);
 	} else
 		inc_irq_stat(IRQIO_CIO);
-	spin_unlock(sch->lock);
+	spin_unlock(&sch->lock);
 	return IRQ_HANDLED;
 }
@@ -663,7 +663,7 @@ struct subchannel *cio_probe_console(void)
 	if (IS_ERR(sch))
 		return sch;
-	lockdep_set_class(sch->lock, &console_sch_key);
+	lockdep_set_class(&sch->lock, &console_sch_key);
 	isc_register(CONSOLE_ISC);
 	sch->config.isc = CONSOLE_ISC;
 	sch->config.intparm = (u32)virt_to_phys(sch);


@@ -83,7 +83,7 @@ enum sch_todo {
 /* subchannel data structure used by I/O subroutines */
 struct subchannel {
 	struct subchannel_id schid;
-	spinlock_t *lock;	/* subchannel lock */
+	spinlock_t lock;	/* subchannel lock */
 	struct mutex reg_mutex;
 	enum {
 		SUBCHANNEL_TYPE_IO = 0,


@ -148,16 +148,10 @@ out:
static void css_sch_todo(struct work_struct *work); static void css_sch_todo(struct work_struct *work);
static int css_sch_create_locks(struct subchannel *sch) static void css_sch_create_locks(struct subchannel *sch)
{ {
sch->lock = kmalloc(sizeof(*sch->lock), GFP_KERNEL); spin_lock_init(&sch->lock);
if (!sch->lock)
return -ENOMEM;
spin_lock_init(sch->lock);
mutex_init(&sch->reg_mutex); mutex_init(&sch->reg_mutex);
return 0;
} }
static void css_subchannel_release(struct device *dev) static void css_subchannel_release(struct device *dev)
@ -167,7 +161,6 @@ static void css_subchannel_release(struct device *dev)
sch->config.intparm = 0; sch->config.intparm = 0;
cio_commit_config(sch); cio_commit_config(sch);
kfree(sch->driver_override); kfree(sch->driver_override);
kfree(sch->lock);
kfree(sch); kfree(sch);
} }
@ -219,9 +212,7 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
sch->schib = *schib; sch->schib = *schib;
sch->st = schib->pmcw.st; sch->st = schib->pmcw.st;
ret = css_sch_create_locks(sch); css_sch_create_locks(sch);
if (ret)
goto err;
INIT_WORK(&sch->todo_work, css_sch_todo); INIT_WORK(&sch->todo_work, css_sch_todo);
 	sch->dev.release = &css_subchannel_release;
@@ -233,19 +224,17 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
 	 */
 	ret = dma_set_coherent_mask(&sch->dev, DMA_BIT_MASK(31));
 	if (ret)
-		goto err_lock;
+		goto err;
 	/*
 	 * But we don't have such restrictions imposed on the stuff that
 	 * is handled by the streaming API.
 	 */
 	ret = dma_set_mask(&sch->dev, DMA_BIT_MASK(64));
 	if (ret)
-		goto err_lock;
+		goto err;
 	return sch;
-err_lock:
-	kfree(sch->lock);
 err:
 	kfree(sch);
 	return ERR_PTR(ret);
@@ -604,12 +593,12 @@ static void css_sch_todo(struct work_struct *work)
 	sch = container_of(work, struct subchannel, todo_work);
 	/* Find out todo. */
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	todo = sch->todo;
 	CIO_MSG_EVENT(4, "sch_todo: sch=0.%x.%04x, todo=%d\n", sch->schid.ssid,
 		      sch->schid.sch_no, todo);
 	sch->todo = SCH_TODO_NOTHING;
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	/* Perform todo. */
 	switch (todo) {
 	case SCH_TODO_NOTHING:
@@ -617,9 +606,9 @@ static void css_sch_todo(struct work_struct *work)
 	case SCH_TODO_EVAL:
 		ret = css_evaluate_known_subchannel(sch, 1);
 		if (ret == -EAGAIN) {
-			spin_lock_irq(sch->lock);
+			spin_lock_irq(&sch->lock);
 			css_sched_sch_todo(sch, todo);
-			spin_unlock_irq(sch->lock);
+			spin_unlock_irq(&sch->lock);
 		}
 		break;
 	case SCH_TODO_UNREG:
@@ -1028,12 +1017,7 @@ static int __init setup_css(int nr)
 	css->pseudo_subchannel->dev.parent = &css->device;
 	css->pseudo_subchannel->dev.release = css_subchannel_release;
 	mutex_init(&css->pseudo_subchannel->reg_mutex);
-	ret = css_sch_create_locks(css->pseudo_subchannel);
-	if (ret) {
-		kfree(css->pseudo_subchannel);
-		device_unregister(&css->device);
-		goto out_err;
-	}
+	css_sch_create_locks(css->pseudo_subchannel);
 	dev_set_name(&css->pseudo_subchannel->dev, "defunct");
 	ret = device_register(&css->pseudo_subchannel->dev);
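The hunks above change the subchannel lock from a separately allocated object (`sch->lock`, which every error path had to `kfree()`) to a spinlock embedded in `struct subchannel`, so callers now take `&sch->lock`. A minimal user-space sketch of the same ownership change (the kernel spinlock is modeled by a plain flag, and all names here are illustrative, not from the driver):

```c
#include <stdlib.h>

/*
 * Embedding the lock in the object means one allocation and one free
 * cover everything -- no separate err_lock: unwind path is needed.
 */
struct sch_sketch {
	int lock;	/* embedded: lives and dies with the object */
	int todo;
};

static struct sch_sketch *sch_alloc(void)
{
	/* A single calloc() replaces object + separate lock allocations. */
	return calloc(1, sizeof(struct sch_sketch));
}

static int sch_set_todo(struct sch_sketch *sch, int todo)
{
	sch->lock = 1;		/* stands in for spin_lock_irq(&sch->lock) */
	sch->todo = todo;
	sch->lock = 0;		/* stands in for spin_unlock_irq(&sch->lock) */
	return todo;
}
```

With the lock embedded, freeing the object releases the lock storage as well, which is why the diff can drop the `err_lock:` label and the `kfree(sch->lock)` call.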


@@ -748,7 +748,7 @@ static int io_subchannel_initialize_dev(struct subchannel *sch,
 	mutex_init(&cdev->reg_mutex);
 	atomic_set(&priv->onoff, 0);
-	cdev->ccwlock = sch->lock;
+	cdev->ccwlock = &sch->lock;
 	cdev->dev.parent = &sch->dev;
 	cdev->dev.release = ccw_device_release;
 	cdev->dev.bus = &ccw_bus_type;
@@ -764,9 +764,9 @@ static int io_subchannel_initialize_dev(struct subchannel *sch,
 		goto out_put;
 	}
 	priv->flags.initialized = 1;
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	sch_set_cdev(sch, cdev);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	return 0;
 out_put:
@@ -851,9 +851,9 @@ static void io_subchannel_register(struct ccw_device *cdev)
 		CIO_MSG_EVENT(0, "Could not register ccw dev 0.%x.%04x: %d\n",
 			      cdev->private->dev_id.ssid,
 			      cdev->private->dev_id.devno, ret);
-		spin_lock_irqsave(sch->lock, flags);
+		spin_lock_irqsave(&sch->lock, flags);
 		sch_set_cdev(sch, NULL);
-		spin_unlock_irqrestore(sch->lock, flags);
+		spin_unlock_irqrestore(&sch->lock, flags);
 		mutex_unlock(&cdev->reg_mutex);
 		/* Release initial device reference. */
 		put_device(&cdev->dev);
@@ -904,9 +904,9 @@ static void io_subchannel_recog(struct ccw_device *cdev, struct subchannel *sch)
 	atomic_inc(&ccw_device_init_count);
 	/* Start async. device sensing. */
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	ccw_device_recognition(cdev);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 }
 static int ccw_device_move_to_sch(struct ccw_device *cdev,
@@ -921,12 +921,12 @@ static int ccw_device_move_to_sch(struct ccw_device *cdev,
 		return -ENODEV;
 	if (!sch_is_pseudo_sch(old_sch)) {
-		spin_lock_irq(old_sch->lock);
+		spin_lock_irq(&old_sch->lock);
 		old_enabled = old_sch->schib.pmcw.ena;
 		rc = 0;
 		if (old_enabled)
 			rc = cio_disable_subchannel(old_sch);
-		spin_unlock_irq(old_sch->lock);
+		spin_unlock_irq(&old_sch->lock);
 		if (rc == -EBUSY) {
 			/* Release child reference for new parent. */
 			put_device(&sch->dev);
@@ -944,9 +944,9 @@ static int ccw_device_move_to_sch(struct ccw_device *cdev,
 			      sch->schib.pmcw.dev, rc);
 		if (old_enabled) {
 			/* Try to re-enable the old subchannel. */
-			spin_lock_irq(old_sch->lock);
+			spin_lock_irq(&old_sch->lock);
 			cio_enable_subchannel(old_sch, (u32)virt_to_phys(old_sch));
-			spin_unlock_irq(old_sch->lock);
+			spin_unlock_irq(&old_sch->lock);
 		}
 		/* Release child reference for new parent. */
 		put_device(&sch->dev);
@@ -954,19 +954,19 @@ static int ccw_device_move_to_sch(struct ccw_device *cdev,
 	}
 	/* Clean up old subchannel. */
 	if (!sch_is_pseudo_sch(old_sch)) {
-		spin_lock_irq(old_sch->lock);
+		spin_lock_irq(&old_sch->lock);
 		sch_set_cdev(old_sch, NULL);
-		spin_unlock_irq(old_sch->lock);
+		spin_unlock_irq(&old_sch->lock);
 		css_schedule_eval(old_sch->schid);
 	}
 	/* Release child reference for old parent. */
 	put_device(&old_sch->dev);
 	/* Initialize new subchannel. */
-	spin_lock_irq(sch->lock);
-	cdev->ccwlock = sch->lock;
+	spin_lock_irq(&sch->lock);
+	cdev->ccwlock = &sch->lock;
 	if (!sch_is_pseudo_sch(sch))
 		sch_set_cdev(sch, cdev);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	if (!sch_is_pseudo_sch(sch))
 		css_update_ssd_info(sch);
 	return 0;
@@ -1077,9 +1077,9 @@ static int io_subchannel_probe(struct subchannel *sch)
 	return 0;
 out_schedule:
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	css_sched_sch_todo(sch, SCH_TODO_UNREG);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	return 0;
 }
@@ -1093,10 +1093,10 @@ static void io_subchannel_remove(struct subchannel *sch)
 		goto out_free;
 	ccw_device_unregister(cdev);
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	sch_set_cdev(sch, NULL);
 	set_io_private(sch, NULL);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 out_free:
 	dma_free_coherent(&sch->dev, sizeof(*io_priv->dma_area),
 			  io_priv->dma_area, io_priv->dma_area_dma);
@@ -1203,7 +1203,7 @@ static void io_subchannel_quiesce(struct subchannel *sch)
 	struct ccw_device *cdev;
 	int ret;
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	cdev = sch_get_cdev(sch);
 	if (cio_is_console(sch->schid))
 		goto out_unlock;
@@ -1220,15 +1220,15 @@ static void io_subchannel_quiesce(struct subchannel *sch)
 			ret = ccw_device_cancel_halt_clear(cdev);
 			if (ret == -EBUSY) {
 				ccw_device_set_timeout(cdev, HZ/10);
-				spin_unlock_irq(sch->lock);
+				spin_unlock_irq(&sch->lock);
 				wait_event(cdev->private->wait_q,
 					   cdev->private->state != DEV_STATE_QUIESCE);
-				spin_lock_irq(sch->lock);
+				spin_lock_irq(&sch->lock);
 			}
 		ret = cio_disable_subchannel(sch);
 	}
 out_unlock:
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 }
 static void io_subchannel_shutdown(struct subchannel *sch)
@@ -1439,7 +1439,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
 	enum io_sch_action action;
 	int rc = -EAGAIN;
-	spin_lock_irqsave(sch->lock, flags);
+	spin_lock_irqsave(&sch->lock, flags);
 	if (!device_is_registered(&sch->dev))
 		goto out_unlock;
 	if (work_pending(&sch->todo_work))
@@ -1492,7 +1492,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
 	default:
 		break;
 	}
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 	/* All other actions require process context. */
 	if (!process)
 		goto out;
@@ -1507,9 +1507,9 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
 		break;
 	case IO_SCH_UNREG_CDEV:
 	case IO_SCH_UNREG_ATTACH:
-		spin_lock_irqsave(sch->lock, flags);
+		spin_lock_irqsave(&sch->lock, flags);
 		sch_set_cdev(sch, NULL);
-		spin_unlock_irqrestore(sch->lock, flags);
+		spin_unlock_irqrestore(&sch->lock, flags);
 		/* Unregister ccw device. */
 		ccw_device_unregister(cdev);
 		break;
@@ -1538,9 +1538,9 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
 			put_device(&cdev->dev);
 			goto out;
 		}
-		spin_lock_irqsave(sch->lock, flags);
+		spin_lock_irqsave(&sch->lock, flags);
 		ccw_device_trigger_reprobe(cdev);
-		spin_unlock_irqrestore(sch->lock, flags);
+		spin_unlock_irqrestore(&sch->lock, flags);
 		/* Release reference from get_ccwdev_by_dev_id() */
 		put_device(&cdev->dev);
 		break;
@@ -1550,7 +1550,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
 	return 0;
 out_unlock:
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 out:
 	return rc;
 }
@@ -1846,9 +1846,9 @@ static void ccw_device_todo(struct work_struct *work)
 		css_schedule_eval(sch->schid);
 		fallthrough;
 	case CDEV_TODO_UNREG:
-		spin_lock_irq(sch->lock);
+		spin_lock_irq(&sch->lock);
 		sch_set_cdev(sch, NULL);
-		spin_unlock_irq(sch->lock);
+		spin_unlock_irq(&sch->lock);
 		ccw_device_unregister(cdev);
 		break;
 	default:


@@ -698,29 +698,29 @@ int ccw_device_stlck(struct ccw_device *cdev)
 		return -ENOMEM;
 	init_completion(&data.done);
 	data.rc = -EIO;
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	rc = cio_enable_subchannel(sch, (u32)virt_to_phys(sch));
 	if (rc)
 		goto out_unlock;
 	/* Perform operation. */
 	cdev->private->state = DEV_STATE_STEAL_LOCK;
 	ccw_device_stlck_start(cdev, &data, &buffer[0], &buffer[32]);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	/* Wait for operation to finish. */
 	if (wait_for_completion_interruptible(&data.done)) {
 		/* Got a signal. */
-		spin_lock_irq(sch->lock);
+		spin_lock_irq(&sch->lock);
 		ccw_request_cancel(cdev);
-		spin_unlock_irq(sch->lock);
+		spin_unlock_irq(&sch->lock);
 		wait_for_completion(&data.done);
 	}
 	rc = data.rc;
 	/* Check results. */
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	cio_disable_subchannel(sch);
 	cdev->private->state = DEV_STATE_BOXED;
 out_unlock:
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	kfree(buffer);
 	return rc;


@@ -101,12 +101,12 @@ static void eadm_subchannel_timeout(struct timer_list *t)
 	struct eadm_private *private = from_timer(private, t, timer);
 	struct subchannel *sch = private->sch;
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	EADM_LOG(1, "timeout");
 	EADM_LOG_HEX(1, &sch->schid, sizeof(sch->schid));
 	if (eadm_subchannel_clear(sch))
 		EADM_LOG(0, "clear failed");
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 }
 static void eadm_subchannel_set_timeout(struct subchannel *sch, int expires)
@@ -163,16 +163,16 @@ static struct subchannel *eadm_get_idle_sch(void)
 	spin_lock_irqsave(&list_lock, flags);
 	list_for_each_entry(private, &eadm_list, head) {
 		sch = private->sch;
-		spin_lock(sch->lock);
+		spin_lock(&sch->lock);
 		if (private->state == EADM_IDLE) {
 			private->state = EADM_BUSY;
 			list_move_tail(&private->head, &eadm_list);
-			spin_unlock(sch->lock);
+			spin_unlock(&sch->lock);
 			spin_unlock_irqrestore(&list_lock, flags);
 			return sch;
 		}
-		spin_unlock(sch->lock);
+		spin_unlock(&sch->lock);
 	}
 	spin_unlock_irqrestore(&list_lock, flags);
@@ -190,7 +190,7 @@ int eadm_start_aob(struct aob *aob)
 	if (!sch)
 		return -EBUSY;
-	spin_lock_irqsave(sch->lock, flags);
+	spin_lock_irqsave(&sch->lock, flags);
 	eadm_subchannel_set_timeout(sch, EADM_TIMEOUT);
 	ret = eadm_subchannel_start(sch, aob);
 	if (!ret)
@@ -203,7 +203,7 @@ int eadm_start_aob(struct aob *aob)
 	css_sched_sch_todo(sch, SCH_TODO_EVAL);
 out_unlock:
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 	return ret;
 }
@@ -221,7 +221,7 @@ static int eadm_subchannel_probe(struct subchannel *sch)
 	INIT_LIST_HEAD(&private->head);
 	timer_setup(&private->timer, eadm_subchannel_timeout, 0);
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	set_eadm_private(sch, private);
 	private->state = EADM_IDLE;
 	private->sch = sch;
@@ -229,11 +229,11 @@ static int eadm_subchannel_probe(struct subchannel *sch)
 	ret = cio_enable_subchannel(sch, (u32)virt_to_phys(sch));
 	if (ret) {
 		set_eadm_private(sch, NULL);
-		spin_unlock_irq(sch->lock);
+		spin_unlock_irq(&sch->lock);
 		kfree(private);
 		goto out;
 	}
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	spin_lock_irq(&list_lock);
 	list_add(&private->head, &eadm_list);
@@ -248,7 +248,7 @@ static void eadm_quiesce(struct subchannel *sch)
 	DECLARE_COMPLETION_ONSTACK(completion);
 	int ret;
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	if (private->state != EADM_BUSY)
 		goto disable;
@@ -256,11 +256,11 @@ static void eadm_quiesce(struct subchannel *sch)
 		goto disable;
 	private->completion = &completion;
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	wait_for_completion_io(&completion);
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	private->completion = NULL;
 disable:
@@ -269,7 +269,7 @@ disable:
 		ret = cio_disable_subchannel(sch);
 	} while (ret == -EBUSY);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 }
 static void eadm_subchannel_remove(struct subchannel *sch)
@@ -282,9 +282,9 @@ static void eadm_subchannel_remove(struct subchannel *sch)
 	eadm_quiesce(sch);
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	set_eadm_private(sch, NULL);
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	kfree(private);
 }
@@ -309,7 +309,7 @@ static int eadm_subchannel_sch_event(struct subchannel *sch, int process)
 	struct eadm_private *private;
 	unsigned long flags;
-	spin_lock_irqsave(sch->lock, flags);
+	spin_lock_irqsave(&sch->lock, flags);
 	if (!device_is_registered(&sch->dev))
 		goto out_unlock;
@@ -325,7 +325,7 @@ static int eadm_subchannel_sch_event(struct subchannel *sch, int process)
 	private->state = EADM_IDLE;
 out_unlock:
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 	return 0;
 }


@@ -65,14 +65,14 @@ int vfio_ccw_sch_quiesce(struct subchannel *sch)
 		 * cancel/halt/clear completion.
 		 */
 		private->completion = &completion;
-		spin_unlock_irq(sch->lock);
+		spin_unlock_irq(&sch->lock);
 		if (ret == -EBUSY)
 			wait_for_completion_timeout(&completion, 3*HZ);
 		private->completion = NULL;
 		flush_workqueue(vfio_ccw_work_q);
-		spin_lock_irq(sch->lock);
+		spin_lock_irq(&sch->lock);
 		ret = cio_disable_subchannel(sch);
 	} while (ret == -EBUSY);
@@ -249,7 +249,7 @@ static int vfio_ccw_sch_event(struct subchannel *sch, int process)
 	unsigned long flags;
 	int rc = -EAGAIN;
-	spin_lock_irqsave(sch->lock, flags);
+	spin_lock_irqsave(&sch->lock, flags);
 	if (!device_is_registered(&sch->dev))
 		goto out_unlock;
@@ -264,7 +264,7 @@ static int vfio_ccw_sch_event(struct subchannel *sch, int process)
 	}
 out_unlock:
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 	return rc;
 }


@@ -25,7 +25,7 @@ static int fsm_io_helper(struct vfio_ccw_private *private)
 	unsigned long flags;
 	int ret;
-	spin_lock_irqsave(sch->lock, flags);
+	spin_lock_irqsave(&sch->lock, flags);
 	orb = cp_get_orb(&private->cp, sch);
 	if (!orb) {
@@ -72,7 +72,7 @@ static int fsm_io_helper(struct vfio_ccw_private *private)
 		ret = ccode;
 	}
 out:
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 	return ret;
 }
@@ -83,7 +83,7 @@ static int fsm_do_halt(struct vfio_ccw_private *private)
 	int ccode;
 	int ret;
-	spin_lock_irqsave(sch->lock, flags);
+	spin_lock_irqsave(&sch->lock, flags);
 	VFIO_CCW_TRACE_EVENT(2, "haltIO");
 	VFIO_CCW_TRACE_EVENT(2, dev_name(&sch->dev));
@@ -111,7 +111,7 @@ static int fsm_do_halt(struct vfio_ccw_private *private)
 	default:
 		ret = ccode;
 	}
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 	return ret;
 }
@@ -122,7 +122,7 @@ static int fsm_do_clear(struct vfio_ccw_private *private)
 	int ccode;
 	int ret;
-	spin_lock_irqsave(sch->lock, flags);
+	spin_lock_irqsave(&sch->lock, flags);
 	VFIO_CCW_TRACE_EVENT(2, "clearIO");
 	VFIO_CCW_TRACE_EVENT(2, dev_name(&sch->dev));
@@ -147,7 +147,7 @@ static int fsm_do_clear(struct vfio_ccw_private *private)
 	default:
 		ret = ccode;
 	}
-	spin_unlock_irqrestore(sch->lock, flags);
+	spin_unlock_irqrestore(&sch->lock, flags);
 	return ret;
 }
@@ -376,18 +376,18 @@ static void fsm_open(struct vfio_ccw_private *private,
 	struct subchannel *sch = to_subchannel(private->vdev.dev->parent);
 	int ret;
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	sch->isc = VFIO_CCW_ISC;
 	ret = cio_enable_subchannel(sch, (u32)(unsigned long)sch);
 	if (ret)
 		goto err_unlock;
 	private->state = VFIO_CCW_STATE_IDLE;
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	return;
err_unlock:
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
 }
@@ -397,7 +397,7 @@ static void fsm_close(struct vfio_ccw_private *private,
 	struct subchannel *sch = to_subchannel(private->vdev.dev->parent);
 	int ret;
-	spin_lock_irq(sch->lock);
+	spin_lock_irq(&sch->lock);
 	if (!sch->schib.pmcw.ena)
 		goto err_unlock;
@@ -409,12 +409,12 @@ static void fsm_close(struct vfio_ccw_private *private,
 		goto err_unlock;
 	private->state = VFIO_CCW_STATE_STANDBY;
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	cp_free(&private->cp);
 	return;
err_unlock:
-	spin_unlock_irq(sch->lock);
+	spin_unlock_irq(&sch->lock);
 	vfio_ccw_fsm_event(private, VFIO_CCW_EVENT_NOT_OPER);
 }


@@ -357,13 +357,12 @@ EXPORT_SYMBOL(ap_test_config_ctrl_domain);
  *	   -1 invalid APQN, TAPQ error or AP queue status which
  *	      indicates there is no APQN.
  */
-static int ap_queue_info(ap_qid_t qid, int *q_type, unsigned int *q_fac,
-			 int *q_depth, int *q_ml, bool *q_decfg, bool *q_cstop)
+static int ap_queue_info(ap_qid_t qid, struct ap_tapq_hwinfo *hwinfo,
+			 bool *decfg, bool *cstop)
 {
 	struct ap_queue_status status;
-	struct ap_tapq_gr2 tapq_info;
-	tapq_info.value = 0;
+	hwinfo->value = 0;
 	/* make sure we don't run into a specifiation exception */
 	if (AP_QID_CARD(qid) > ap_max_adapter_id ||
@@ -371,7 +370,7 @@ static int ap_queue_info(ap_qid_t qid, int *q_type, unsigned int *q_fac,
 		return -1;
 	/* call TAPQ on this APQN */
-	status = ap_test_queue(qid, ap_apft_available(), &tapq_info);
+	status = ap_test_queue(qid, ap_apft_available(), hwinfo);
 	switch (status.response_code) {
 	case AP_RESPONSE_NORMAL:
@@ -389,15 +388,11 @@ static int ap_queue_info(ap_qid_t qid, int *q_type, unsigned int *q_fac,
 	}
 	/* There should be at least one of the mode bits set */
-	if (WARN_ON_ONCE(!tapq_info.value))
+	if (WARN_ON_ONCE(!hwinfo->value))
 		return 0;
-	*q_type = tapq_info.at;
-	*q_fac = tapq_info.fac;
-	*q_depth = tapq_info.qd;
-	*q_ml = tapq_info.ml;
-	*q_decfg = status.response_code == AP_RESPONSE_DECONFIGURED;
-	*q_cstop = status.response_code == AP_RESPONSE_CHECKSTOPPED;
+	*decfg = status.response_code == AP_RESPONSE_DECONFIGURED;
+	*cstop = status.response_code == AP_RESPONSE_CHECKSTOPPED;
 	return 1;
 }
@@ -642,11 +637,11 @@ static int ap_uevent(const struct device *dev, struct kobj_uevent_env *env)
 			return rc;
 		/* Add MODE=<accel|cca|ep11> */
-		if (ap_test_bit(&ac->functions, AP_FUNC_ACCEL))
+		if (ac->hwinfo.accel)
 			rc = add_uevent_var(env, "MODE=accel");
-		else if (ap_test_bit(&ac->functions, AP_FUNC_COPRO))
+		else if (ac->hwinfo.cca)
 			rc = add_uevent_var(env, "MODE=cca");
-		else if (ap_test_bit(&ac->functions, AP_FUNC_EP11))
+		else if (ac->hwinfo.ep11)
 			rc = add_uevent_var(env, "MODE=ep11");
 		if (rc)
 			return rc;
@@ -654,11 +649,11 @@ static int ap_uevent(const struct device *dev, struct kobj_uevent_env *env)
 		struct ap_queue *aq = to_ap_queue(&ap_dev->device);
 		/* Add MODE=<accel|cca|ep11> */
-		if (ap_test_bit(&aq->card->functions, AP_FUNC_ACCEL))
+		if (aq->card->hwinfo.accel)
 			rc = add_uevent_var(env, "MODE=accel");
-		else if (ap_test_bit(&aq->card->functions, AP_FUNC_COPRO))
+		else if (aq->card->hwinfo.cca)
 			rc = add_uevent_var(env, "MODE=cca");
-		else if (ap_test_bit(&aq->card->functions, AP_FUNC_EP11))
+		else if (aq->card->hwinfo.ep11)
 			rc = add_uevent_var(env, "MODE=ep11");
 		if (rc)
 			return rc;
@@ -1799,12 +1794,12 @@ static inline void ap_scan_rm_card_dev_and_queue_devs(struct ap_card *ac)
  */
 static inline void ap_scan_domains(struct ap_card *ac)
 {
-	int rc, dom, depth, type, ml;
+	struct ap_tapq_hwinfo hwinfo;
 	bool decfg, chkstop;
 	struct ap_queue *aq;
 	struct device *dev;
-	unsigned int func;
 	ap_qid_t qid;
+	int rc, dom;
 	/*
 	 * Go through the configuration for the domains and compare them
@@ -1827,8 +1822,7 @@ static inline void ap_scan_domains(struct ap_card *ac)
 			goto put_dev_and_continue;
 		}
 		/* domain is valid, get info from this APQN */
-		rc = ap_queue_info(qid, &type, &func, &depth,
-				   &ml, &decfg, &chkstop);
+		rc = ap_queue_info(qid, &hwinfo, &decfg, &chkstop);
 		switch (rc) {
 		case -1:
 			if (dev) {
@@ -1853,6 +1847,7 @@ static inline void ap_scan_domains(struct ap_card *ac)
 			aq->card = ac;
 			aq->config = !decfg;
 			aq->chkstop = chkstop;
+			aq->se_bstate = hwinfo.bs;
 			dev = &aq->ap_dev.device;
 			dev->bus = &ap_bus_type;
 			dev->parent = &ac->ap_dev.device;
@@ -1882,6 +1877,8 @@ static inline void ap_scan_domains(struct ap_card *ac)
 		}
 		/* handle state changes on already existing queue device */
 		spin_lock_bh(&aq->lock);
+		/* SE bind state */
+		aq->se_bstate = hwinfo.bs;
 		/* checkstop state */
 		if (chkstop && !aq->chkstop) {
 			/* checkstop on */
@@ -1955,11 +1952,11 @@ put_dev_and_continue:
  */
 static inline void ap_scan_adapter(int ap)
 {
-	int rc, dom, depth, type, comp_type, ml;
+	struct ap_tapq_hwinfo hwinfo;
+	int rc, dom, comp_type;
 	bool decfg, chkstop;
 	struct ap_card *ac;
 	struct device *dev;
-	unsigned int func;
 	ap_qid_t qid;
 	/* Is there currently a card device for this adapter ? */
@@ -1989,8 +1986,7 @@ static inline void ap_scan_adapter(int ap)
 	for (dom = 0; dom <= ap_max_domain_id; dom++)
 		if (ap_test_config_usage_domain(dom)) {
 			qid = AP_MKQID(ap, dom);
-			if (ap_queue_info(qid, &type, &func, &depth,
-					  &ml, &decfg, &chkstop) > 0)
+			if (ap_queue_info(qid, &hwinfo, &decfg, &chkstop) > 0)
 				break;
 		}
 	if (dom > ap_max_domain_id) {
@@ -2006,7 +2002,7 @@ static inline void ap_scan_adapter(int ap)
 		}
 		return;
 	}
-	if (!type) {
+	if (!hwinfo.at) {
 		/* No apdater type info available, an unusable adapter */
 		if (ac) {
 			AP_DBF_INFO("%s(%d) no valid type (0) info, rm card and queue devs\n",
@@ -2019,18 +2015,18 @@ static inline void ap_scan_adapter(int ap)
 		}
 		return;
 	}
+	hwinfo.value &= TAPQ_CARD_HWINFO_MASK; /* filter card specific hwinfo */
 	if (ac) {
 		/* Check APQN against existing card device for changes */
-		if (ac->raw_hwtype != type) {
+		if (ac->hwinfo.at != hwinfo.at) {
 			AP_DBF_INFO("%s(%d) hwtype %d changed, rm card and queue devs\n",
-				    __func__, ap, type);
+				    __func__, ap, hwinfo.at);
 			ap_scan_rm_card_dev_and_queue_devs(ac);
 			put_device(dev);
 			ac = NULL;
-		} else if ((ac->functions & TAPQ_CARD_FUNC_CMP_MASK) !=
-			   (func & TAPQ_CARD_FUNC_CMP_MASK)) {
+		} else if (ac->hwinfo.fac != hwinfo.fac) {
 			AP_DBF_INFO("%s(%d) functions 0x%08x changed, rm card and queue devs\n",
-				    __func__, ap, func);
+				    __func__, ap, hwinfo.fac);
 			ap_scan_rm_card_dev_and_queue_devs(ac);
 			put_device(dev);
 			ac = NULL;
@@ -2064,13 +2060,13 @@ static inline void ap_scan_adapter(int ap)
 	if (!ac) {
 		/* Build a new card device */
-		comp_type = ap_get_compatible_type(qid, type, func);
+		comp_type = ap_get_compatible_type(qid, hwinfo.at, hwinfo.fac);
 		if (!comp_type) {
 			AP_DBF_WARN("%s(%d) type %d, can't get compatibility type\n",
-				    __func__, ap, type);
+				    __func__, ap, hwinfo.at);
 			return;
 		}
-		ac = ap_card_create(ap, depth, type, comp_type, func, ml);
+		ac = ap_card_create(ap, hwinfo, comp_type);
 		if (!ac) {
 			AP_DBF_WARN("%s(%d) ap_card_create() failed\n",
 				    __func__, ap);
@@ -2101,13 +2097,13 @@ static inline void ap_scan_adapter(int ap)
 		get_device(dev);
 		if (decfg)
 			AP_DBF_INFO("%s(%d) new (decfg) card dev type=%d func=0x%08x created\n",
-				    __func__, ap, type, func);
+				    __func__, ap, hwinfo.at, hwinfo.fac);
 		else if (chkstop)
 			AP_DBF_INFO("%s(%d) new (chkstop) card dev type=%d func=0x%08x created\n",
-				    __func__, ap, type, func);
+				    __func__, ap, hwinfo.at, hwinfo.fac);
 		else
 			AP_DBF_INFO("%s(%d) new card dev type=%d func=0x%08x created\n",
-				    __func__, ap, type, func);
+				    __func__, ap, hwinfo.at, hwinfo.fac);
 	}
 	/* Verify the domains and the queue devices for this card */
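The hunks above replace five separate out-parameters of `ap_queue_info()` (`q_type`, `q_fac`, `q_depth`, `q_ml`, plus the facility word) with a single `struct ap_tapq_hwinfo` that overlays the whole TAPQ GR2 word with named bitfields. A user-space sketch of that idea follows; the field layout below is purely illustrative (the real layout is defined by the s390 AP architecture), and the assertions assume the common little-endian bitfield allocation:

```c
#include <stdint.h>

/*
 * One union carries the whole register value; named bitfields replace
 * scattered shift-and-mask helpers.  Hypothetical layout for illustration.
 */
struct hwinfo_sketch {
	union {
		uint64_t value;		/* raw TAPQ GR2-style word */
		struct {
			uint64_t at : 8;	/* adapter type */
			uint64_t qd : 8;	/* queue depth */
			uint64_t ml : 8;	/* max msg size index */
			uint64_t accel : 1;	/* accelerator mode */
			uint64_t cca : 1;	/* CCA coprocessor mode */
			uint64_t ep11 : 1;	/* EP11 mode */
			uint64_t : 37;		/* reserved */
		};
	};
};

/* One struct return replaces copying five separate out-parameters. */
static struct hwinfo_sketch probe_sketch(uint64_t raw)
{
	struct hwinfo_sketch hw;

	hw.value = raw;
	return hw;
}
```

Passing the struct around by value is what lets the series shrink the `ap_queue_info()` call sites to `ap_queue_info(qid, &hwinfo, &decfg, &chkstop)`.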


@@ -75,16 +75,6 @@ static inline int ap_test_bit(unsigned int *ptr, unsigned int nr)
 #define AP_DEVICE_TYPE_CEX7	13
 #define AP_DEVICE_TYPE_CEX8	14
 
-/*
- * Known function facilities
- */
-#define AP_FUNC_MEX4K 1
-#define AP_FUNC_CRT4K 2
-#define AP_FUNC_COPRO 3
-#define AP_FUNC_ACCEL 4
-#define AP_FUNC_EP11 5
-#define AP_FUNC_APXA 6
-
 /*
  * AP queue state machine states
  */
@@ -182,9 +172,7 @@ struct ap_device {
 struct ap_card {
 	struct ap_device ap_dev;
-	int raw_hwtype;			/* AP raw hardware type. */
-	unsigned int functions;		/* TAPQ GR2 upper 32 facility bits */
-	int queue_depth;		/* AP queue depth.*/
+	struct ap_tapq_hwinfo hwinfo;	/* TAPQ GR2 content */
 	int id;				/* AP card number. */
 	unsigned int maxmsgsize;	/* AP msg limit for this card */
 	bool config;			/* configured state */
@@ -192,7 +180,7 @@ struct ap_card {
 	atomic64_t total_request_count;	/* # requests ever for this AP device.*/
 };
 
-#define TAPQ_CARD_FUNC_CMP_MASK	0xFFFF0000
+#define TAPQ_CARD_HWINFO_MASK	0xFEFF0000FFFF0F0FUL
 #define ASSOC_IDX_INVALID	0x10000
 
 #define to_ap_card(x)	container_of((x), struct ap_card, ap_dev.device)
@@ -206,7 +194,7 @@ struct ap_queue {
 	bool config;			/* configured state */
 	bool chkstop;			/* checkstop state */
 	ap_qid_t qid;			/* AP queue id. */
-	bool se_bound;			/* SE bound state */
+	unsigned int se_bstate;		/* SE bind state (BS) */
 	unsigned int assoc_idx;		/* SE association index */
 	int queue_count;		/* # messages currently on AP queue. */
 	int pendingq_count;		/* # requests on pendingq list. */
@@ -290,8 +278,8 @@ void ap_queue_remove(struct ap_queue *aq);
 void ap_queue_init_state(struct ap_queue *aq);
 void _ap_queue_init_state(struct ap_queue *aq);
 
-struct ap_card *ap_card_create(int id, int queue_depth, int raw_type,
-			       int comp_type, unsigned int functions, int ml);
+struct ap_card *ap_card_create(int id, struct ap_tapq_hwinfo info,
+			       int comp_type);
 
 #define APMASKSIZE (BITS_TO_LONGS(AP_DEVICES) * sizeof(unsigned long))
 #define AQMASKSIZE (BITS_TO_LONGS(AP_DOMAINS) * sizeof(unsigned long))

---- next file ----

@@ -34,7 +34,7 @@ static ssize_t raw_hwtype_show(struct device *dev,
 {
 	struct ap_card *ac = to_ap_card(dev);
 
-	return sysfs_emit(buf, "%d\n", ac->raw_hwtype);
+	return sysfs_emit(buf, "%d\n", ac->hwinfo.at);
 }
 
 static DEVICE_ATTR_RO(raw_hwtype);
@@ -44,7 +44,7 @@ static ssize_t depth_show(struct device *dev, struct device_attribute *attr,
 {
 	struct ap_card *ac = to_ap_card(dev);
 
-	return sysfs_emit(buf, "%d\n", ac->queue_depth);
+	return sysfs_emit(buf, "%d\n", ac->hwinfo.qd);
 }
 
 static DEVICE_ATTR_RO(depth);
@@ -54,7 +54,7 @@ static ssize_t ap_functions_show(struct device *dev,
 {
 	struct ap_card *ac = to_ap_card(dev);
 
-	return sysfs_emit(buf, "0x%08X\n", ac->functions);
+	return sysfs_emit(buf, "0x%08X\n", ac->hwinfo.fac);
 }
 
 static DEVICE_ATTR_RO(ap_functions);
@@ -229,8 +229,8 @@ static void ap_card_device_release(struct device *dev)
 	kfree(ac);
 }
 
-struct ap_card *ap_card_create(int id, int queue_depth, int raw_type,
-			       int comp_type, unsigned int functions, int ml)
+struct ap_card *ap_card_create(int id, struct ap_tapq_hwinfo hwinfo,
+			       int comp_type)
 {
 	struct ap_card *ac;
 
@@ -240,12 +240,10 @@ struct ap_card *ap_card_create(int id, int queue_depth, int raw_type,
 	ac->ap_dev.device.release = ap_card_device_release;
 	ac->ap_dev.device.type = &ap_card_type;
 	ac->ap_dev.device_type = comp_type;
-	ac->raw_hwtype = raw_type;
-	ac->queue_depth = queue_depth;
-	ac->functions = functions;
+	ac->hwinfo = hwinfo;
 	ac->id = id;
-	ac->maxmsgsize = ml > 0 ?
-		ml * AP_TAPQ_ML_FIELD_CHUNK_SIZE : AP_DEFAULT_MAX_MSG_SIZE;
+	ac->maxmsgsize = hwinfo.ml > 0 ?
+		hwinfo.ml * AP_TAPQ_ML_FIELD_CHUNK_SIZE : AP_DEFAULT_MAX_MSG_SIZE;
 
 	return ac;
 }

---- next file ----

@@ -24,13 +24,12 @@ static void __ap_flush_queue(struct ap_queue *aq);
 
 static inline bool ap_q_supports_bind(struct ap_queue *aq)
 {
-	return ap_test_bit(&aq->card->functions, AP_FUNC_EP11) ||
-		ap_test_bit(&aq->card->functions, AP_FUNC_ACCEL);
+	return aq->card->hwinfo.ep11 || aq->card->hwinfo.accel;
 }
 
 static inline bool ap_q_supports_assoc(struct ap_queue *aq)
 {
-	return ap_test_bit(&aq->card->functions, AP_FUNC_EP11);
+	return aq->card->hwinfo.ep11;
 }
 
 static inline bool ap_q_needs_bind(struct ap_queue *aq)
@@ -257,7 +256,7 @@ static enum ap_sm_wait ap_sm_write(struct ap_queue *aq)
 		list_move_tail(&ap_msg->list, &aq->pendingq);
 		aq->requestq_count--;
 		aq->pendingq_count++;
-		if (aq->queue_count < aq->card->queue_depth) {
+		if (aq->queue_count < aq->card->hwinfo.qd) {
 			aq->sm_state = AP_SM_STATE_WORKING;
 			return AP_SM_WAIT_AGAIN;
 		}
@@ -318,7 +317,6 @@ static enum ap_sm_wait ap_sm_reset(struct ap_queue *aq)
 	case AP_RESPONSE_RESET_IN_PROGRESS:
 		aq->sm_state = AP_SM_STATE_RESET_WAIT;
 		aq->rapq_fbit = 0;
-		aq->se_bound = false;
 		return AP_SM_WAIT_LOW_TIMEOUT;
 	default:
 		aq->dev_state = AP_DEV_STATE_ERROR;
@@ -339,17 +337,15 @@ static enum ap_sm_wait ap_sm_reset(struct ap_queue *aq)
 static enum ap_sm_wait ap_sm_reset_wait(struct ap_queue *aq)
 {
 	struct ap_queue_status status;
+	struct ap_tapq_hwinfo hwinfo;
 	void *lsi_ptr;
 
-	if (aq->queue_count > 0 && aq->reply)
-		/* Try to read a completed message and get the status */
-		status = ap_sm_recv(aq);
-	else
-		/* Get the status with TAPQ */
-		status = ap_tapq(aq->qid, NULL);
+	/* Get the status with TAPQ */
+	status = ap_test_queue(aq->qid, 1, &hwinfo);
 
 	switch (status.response_code) {
 	case AP_RESPONSE_NORMAL:
+		aq->se_bstate = hwinfo.bs;
 		lsi_ptr = ap_airq_ptr();
 		if (lsi_ptr && ap_queue_enable_irq(aq, lsi_ptr) == 0)
 			aq->sm_state = AP_SM_STATE_SETIRQ_WAIT;
@@ -421,9 +417,9 @@ static enum ap_sm_wait ap_sm_setirq_wait(struct ap_queue *aq)
 static enum ap_sm_wait ap_sm_assoc_wait(struct ap_queue *aq)
 {
 	struct ap_queue_status status;
-	struct ap_tapq_gr2 info;
+	struct ap_tapq_hwinfo hwinfo;
 
-	status = ap_test_queue(aq->qid, 1, &info);
+	status = ap_test_queue(aq->qid, 1, &hwinfo);
 	/* handle asynchronous error on this queue */
 	if (status.async && status.response_code) {
 		aq->dev_state = AP_DEV_STATE_ERROR;
@@ -442,8 +438,11 @@ static enum ap_sm_wait ap_sm_assoc_wait(struct ap_queue *aq)
 		return AP_SM_WAIT_NONE;
 	}
 
+	/* update queue's SE bind state */
+	aq->se_bstate = hwinfo.bs;
+
 	/* check bs bits */
-	switch (info.bs) {
+	switch (hwinfo.bs) {
 	case AP_BS_Q_USABLE:
 		/* association is through */
 		aq->sm_state = AP_SM_STATE_IDLE;
@@ -460,7 +459,7 @@ static enum ap_sm_wait ap_sm_assoc_wait(struct ap_queue *aq)
 		aq->dev_state = AP_DEV_STATE_ERROR;
 		aq->last_err_rc = status.response_code;
 		AP_DBF_WARN("%s bs 0x%02x on 0x%02x.%04x -> AP_DEV_STATE_ERROR\n",
-			    __func__, info.bs,
+			    __func__, hwinfo.bs,
 			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
 		return AP_SM_WAIT_NONE;
 	}
@@ -687,9 +686,9 @@ static ssize_t ap_functions_show(struct device *dev,
 {
 	struct ap_queue *aq = to_ap_queue(dev);
 	struct ap_queue_status status;
-	struct ap_tapq_gr2 info;
+	struct ap_tapq_hwinfo hwinfo;
 
-	status = ap_test_queue(aq->qid, 1, &info);
+	status = ap_test_queue(aq->qid, 1, &hwinfo);
 	if (status.response_code > AP_RESPONSE_BUSY) {
 		AP_DBF_DBG("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
 			   __func__, status.response_code,
@@ -697,7 +696,7 @@ static ssize_t ap_functions_show(struct device *dev,
 		return -EIO;
 	}
 
-	return sysfs_emit(buf, "0x%08X\n", info.fac);
+	return sysfs_emit(buf, "0x%08X\n", hwinfo.fac);
 }
 
 static DEVICE_ATTR_RO(ap_functions);
@@ -840,19 +839,25 @@ static ssize_t se_bind_show(struct device *dev,
 {
 	struct ap_queue *aq = to_ap_queue(dev);
 	struct ap_queue_status status;
-	struct ap_tapq_gr2 info;
+	struct ap_tapq_hwinfo hwinfo;
 
 	if (!ap_q_supports_bind(aq))
 		return sysfs_emit(buf, "-\n");
 
-	status = ap_test_queue(aq->qid, 1, &info);
+	status = ap_test_queue(aq->qid, 1, &hwinfo);
 	if (status.response_code > AP_RESPONSE_BUSY) {
 		AP_DBF_DBG("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
 			   __func__, status.response_code,
 			   AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
 		return -EIO;
 	}
 
-	switch (info.bs) {
+	/* update queue's SE bind state */
+	spin_lock_bh(&aq->lock);
+	aq->se_bstate = hwinfo.bs;
+	spin_unlock_bh(&aq->lock);
+
+	switch (hwinfo.bs) {
 	case AP_BS_Q_USABLE:
 	case AP_BS_Q_USABLE_NO_SECURE_KEY:
 		return sysfs_emit(buf, "bound\n");
@@ -867,6 +872,7 @@ static ssize_t se_bind_store(struct device *dev,
 {
 	struct ap_queue *aq = to_ap_queue(dev);
 	struct ap_queue_status status;
+	struct ap_tapq_hwinfo hwinfo;
 	bool value;
 	int rc;
 
@@ -878,39 +884,80 @@ static ssize_t se_bind_store(struct device *dev,
 	if (rc)
 		return rc;
 
-	if (value) {
-		/* bind, do BAPQ */
-		spin_lock_bh(&aq->lock);
-		if (aq->sm_state < AP_SM_STATE_IDLE) {
-			spin_unlock_bh(&aq->lock);
-			return -EBUSY;
-		}
-		status = ap_bapq(aq->qid);
-		spin_unlock_bh(&aq->lock);
-		if (!status.response_code) {
-			aq->se_bound = true;
-			AP_DBF_INFO("%s bapq(0x%02x.%04x) success\n", __func__,
-				    AP_QID_CARD(aq->qid),
-				    AP_QID_QUEUE(aq->qid));
-		} else {
-			AP_DBF_WARN("%s RC 0x%02x on bapq(0x%02x.%04x)\n",
-				    __func__, status.response_code,
-				    AP_QID_CARD(aq->qid),
-				    AP_QID_QUEUE(aq->qid));
-			return -EIO;
-		}
-	} else {
-		/* unbind, set F bit arg and trigger RAPQ */
+	if (!value) {
+		/* Unbind. Set F bit arg and trigger RAPQ */
 		spin_lock_bh(&aq->lock);
 		__ap_flush_queue(aq);
 		aq->rapq_fbit = 1;
-		aq->assoc_idx = ASSOC_IDX_INVALID;
-		aq->sm_state = AP_SM_STATE_RESET_START;
-		ap_wait(ap_sm_event(aq, AP_SM_EVENT_POLL));
-		spin_unlock_bh(&aq->lock);
+		_ap_queue_init_state(aq);
+		rc = count;
+		goto out;
 	}
 
-	return count;
+	/* Bind. Check current SE bind state */
+	status = ap_test_queue(aq->qid, 1, &hwinfo);
+	if (status.response_code) {
+		AP_DBF_WARN("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
+			    __func__, status.response_code,
+			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
+		return -EIO;
+	}
+
+	/* Update BS state */
+	spin_lock_bh(&aq->lock);
+	aq->se_bstate = hwinfo.bs;
+	if (hwinfo.bs != AP_BS_Q_AVAIL_FOR_BINDING) {
+		AP_DBF_WARN("%s bind attempt with bs %d on queue 0x%02x.%04x\n",
+			    __func__, hwinfo.bs,
+			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
+		rc = -EINVAL;
+		goto out;
+	}
+
+	/* Check SM state */
+	if (aq->sm_state < AP_SM_STATE_IDLE) {
+		rc = -EBUSY;
+		goto out;
+	}
+
+	/* invoke BAPQ */
+	status = ap_bapq(aq->qid);
+	if (status.response_code) {
+		AP_DBF_WARN("%s RC 0x%02x on bapq(0x%02x.%04x)\n",
+			    __func__, status.response_code,
+			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
+		rc = -EIO;
+		goto out;
+	}
+	aq->assoc_idx = ASSOC_IDX_INVALID;
+
+	/* verify SE bind state */
+	status = ap_test_queue(aq->qid, 1, &hwinfo);
+	if (status.response_code) {
+		AP_DBF_WARN("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
+			    __func__, status.response_code,
+			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
+		rc = -EIO;
+		goto out;
+	}
+	aq->se_bstate = hwinfo.bs;
+	if (!(hwinfo.bs == AP_BS_Q_USABLE ||
+	      hwinfo.bs == AP_BS_Q_USABLE_NO_SECURE_KEY)) {
+		AP_DBF_WARN("%s BAPQ success, but bs shows %d on queue 0x%02x.%04x\n",
+			    __func__, hwinfo.bs,
+			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
+		rc = -EIO;
+		goto out;
+	}
+
+	/* SE bind was successful */
+	AP_DBF_INFO("%s bapq(0x%02x.%04x) success\n", __func__,
+		    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
+	rc = count;
+
+out:
+	spin_unlock_bh(&aq->lock);
+	return rc;
 }
 
 static DEVICE_ATTR_RW(se_bind);
@@ -920,12 +967,12 @@ static ssize_t se_associate_show(struct device *dev,
 {
 	struct ap_queue *aq = to_ap_queue(dev);
 	struct ap_queue_status status;
-	struct ap_tapq_gr2 info;
+	struct ap_tapq_hwinfo hwinfo;
 
 	if (!ap_q_supports_assoc(aq))
 		return sysfs_emit(buf, "-\n");
 
-	status = ap_test_queue(aq->qid, 1, &info);
+	status = ap_test_queue(aq->qid, 1, &hwinfo);
 	if (status.response_code > AP_RESPONSE_BUSY) {
 		AP_DBF_DBG("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
 			   __func__, status.response_code,
@@ -933,7 +980,12 @@ static ssize_t se_associate_show(struct device *dev,
 		return -EIO;
 	}
 
-	switch (info.bs) {
+	/* update queue's SE bind state */
+	spin_lock_bh(&aq->lock);
+	aq->se_bstate = hwinfo.bs;
+	spin_unlock_bh(&aq->lock);
+
+	switch (hwinfo.bs) {
 	case AP_BS_Q_USABLE:
 		if (aq->assoc_idx == ASSOC_IDX_INVALID) {
 			AP_DBF_WARN("%s AP_BS_Q_USABLE but invalid assoc_idx\n", __func__);
@@ -955,6 +1007,7 @@ static ssize_t se_associate_store(struct device *dev,
 {
 	struct ap_queue *aq = to_ap_queue(dev);
 	struct ap_queue_status status;
+	struct ap_tapq_hwinfo hwinfo;
 	unsigned int value;
 	int rc;
 
@@ -968,18 +1021,28 @@ static ssize_t se_associate_store(struct device *dev,
 	if (value >= ASSOC_IDX_INVALID)
 		return -EINVAL;
 
+	/* check current SE bind state */
+	status = ap_test_queue(aq->qid, 1, &hwinfo);
+	if (status.response_code) {
+		AP_DBF_WARN("%s RC 0x%02x on tapq(0x%02x.%04x)\n",
+			    __func__, status.response_code,
+			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
+		return -EIO;
+	}
+
 	spin_lock_bh(&aq->lock);
+	aq->se_bstate = hwinfo.bs;
-	/* sm should be in idle state */
-	if (aq->sm_state != AP_SM_STATE_IDLE) {
-		spin_unlock_bh(&aq->lock);
-		return -EBUSY;
+	if (hwinfo.bs != AP_BS_Q_USABLE_NO_SECURE_KEY) {
+		AP_DBF_WARN("%s association attempt with bs %d on queue 0x%02x.%04x\n",
+			    __func__, hwinfo.bs,
+			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
+		rc = -EINVAL;
+		goto out;
 	}
 
-	/* already associated or association pending ? */
-	if (aq->assoc_idx != ASSOC_IDX_INVALID) {
-		spin_unlock_bh(&aq->lock);
-		return -EINVAL;
+	/* check SM state */
+	if (aq->sm_state != AP_SM_STATE_IDLE) {
+		rc = -EBUSY;
+		goto out;
 	}
 
 	/* trigger the asynchronous association request */
@@ -990,17 +1053,20 @@ static ssize_t se_associate_store(struct device *dev,
 		aq->sm_state = AP_SM_STATE_ASSOC_WAIT;
 		aq->assoc_idx = value;
 		ap_wait(ap_sm_event(aq, AP_SM_EVENT_POLL));
-		spin_unlock_bh(&aq->lock);
 		break;
 	default:
-		spin_unlock_bh(&aq->lock);
 		AP_DBF_WARN("%s RC 0x%02x on aapq(0x%02x.%04x)\n",
 			    __func__, status.response_code,
 			    AP_QID_CARD(aq->qid), AP_QID_QUEUE(aq->qid));
-		return -EIO;
+		rc = -EIO;
+		goto out;
 	}
 
-	return count;
+	rc = count;
+
+out:
+	spin_unlock_bh(&aq->lock);
+	return rc;
 }
 
 static DEVICE_ATTR_RW(se_associate);
@@ -1123,7 +1189,9 @@ bool ap_queue_usable(struct ap_queue *aq)
 	}
 
 	/* SE guest's queues additionally need to be bound */
-	if (ap_q_needs_bind(aq) && !aq->se_bound)
+	if (ap_q_needs_bind(aq) &&
+	    !(aq->se_bstate == AP_BS_Q_USABLE ||
+	      aq->se_bstate == AP_BS_Q_USABLE_NO_SECURE_KEY))
		rc = false;
 
 unlock_and_out:

---- next file ----

@@ -393,8 +393,8 @@ static int ensure_nib_shared(unsigned long addr, struct gmap *gmap)
  * Register the guest ISC to GIB interface and retrieve the
  * host ISC to issue the host side PQAP/AQIC
  *
- * Response.status may be set to AP_RESPONSE_INVALID_ADDRESS in case the
- * vfio_pin_pages failed.
+ * status.response_code may be set to AP_RESPONSE_INVALID_ADDRESS in case the
+ * vfio_pin_pages or kvm_s390_gisc_register failed.
  *
  * Otherwise return the ap_queue_status returned by the ap_aqic(),
  * all retry handling will be done by the guest.
@@ -457,7 +457,8 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
 		VFIO_AP_DBF_WARN("%s: gisc registration failed: nisc=%d, isc=%d, apqn=%#04x\n",
 				 __func__, nisc, isc, q->apqn);
 
-		status.response_code = AP_RESPONSE_INVALID_GISA;
+		vfio_unpin_pages(&q->matrix_mdev->vdev, nib, 1);
+		status.response_code = AP_RESPONSE_INVALID_ADDRESS;
 		return status;
 	}
@@ -475,8 +476,11 @@ static struct ap_queue_status vfio_ap_irq_enable(struct vfio_ap_queue *q,
 		break;
 	case AP_RESPONSE_OTHERWISE_CHANGED:
 		/* We could not modify IRQ settings: clear new configuration */
+		ret = kvm_s390_gisc_unregister(kvm, isc);
+		if (ret)
+			VFIO_AP_DBF_WARN("%s: kvm_s390_gisc_unregister: rc=%d isc=%d, apqn=%#04x\n",
+					 __func__, ret, isc, q->apqn);
 		vfio_unpin_pages(&q->matrix_mdev->vdev, nib, 1);
-		kvm_s390_gisc_unregister(kvm, isc);
 		break;
 	default:
 		pr_warn("%s: apqn %04x: response: %02x\n", __func__, q->apqn,
@@ -1976,6 +1980,7 @@ static ssize_t status_show(struct device *dev,
 {
 	ssize_t nchars = 0;
 	struct vfio_ap_queue *q;
+	unsigned long apid, apqi;
 	struct ap_matrix_mdev *matrix_mdev;
 	struct ap_device *apdev = to_ap_dev(dev);
 
@@ -1983,8 +1988,21 @@ static ssize_t status_show(struct device *dev,
 	q = dev_get_drvdata(&apdev->device);
 	matrix_mdev = vfio_ap_mdev_for_queue(q);
 
+	/* If the queue is assigned to the matrix mediated device, then
+	 * determine whether it is passed through to a guest; otherwise,
+	 * indicate that it is unassigned.
+	 */
 	if (matrix_mdev) {
-		if (matrix_mdev->kvm)
+		apid = AP_QID_CARD(q->apqn);
+		apqi = AP_QID_QUEUE(q->apqn);
+
+		/*
+		 * If the queue is passed through to the guest, then indicate
+		 * that it is in use; otherwise, indicate that it is
+		 * merely assigned to a matrix mediated device.
+		 */
+		if (matrix_mdev->kvm &&
+		    test_bit_inv(apid, matrix_mdev->shadow_apcb.apm) &&
+		    test_bit_inv(apqi, matrix_mdev->shadow_apcb.aqm))
 			nchars = scnprintf(buf, PAGE_SIZE, "%s\n",
 					   AP_QUEUE_IN_USE);
 		else
@@ -2297,7 +2315,7 @@ static void vfio_ap_filter_apid_by_qtype(unsigned long *apm, unsigned long *aqm)
 	bool apid_cleared;
 	struct ap_queue_status status;
 	unsigned long apid, apqi;
-	struct ap_tapq_gr2 info;
+	struct ap_tapq_hwinfo info;
 
 	for_each_set_bit_inv(apid, apm, AP_DEVICES) {
 		apid_cleared = false;

---- next file ----

@@ -673,7 +673,7 @@ static long zcrypt_rsa_modexpo(struct ap_perms *perms,
 	for_each_zcrypt_card(zc) {
 		/* Check for usable accelerator or CCA card */
 		if (!zc->online || !zc->card->config || zc->card->chkstop ||
-		    !(zc->card->functions & 0x18000000))
+		    !(zc->card->hwinfo.accel || zc->card->hwinfo.cca))
 			continue;
 		/* Check for size limits */
 		if (zc->min_mod_size > mex->inputdatalength ||
@@ -778,7 +778,7 @@ static long zcrypt_rsa_crt(struct ap_perms *perms,
 	for_each_zcrypt_card(zc) {
 		/* Check for usable accelerator or CCA card */
 		if (!zc->online || !zc->card->config || zc->card->chkstop ||
-		    !(zc->card->functions & 0x18000000))
+		    !(zc->card->hwinfo.accel || zc->card->hwinfo.cca))
 			continue;
 		/* Check for size limits */
 		if (zc->min_mod_size > crt->inputdatalength ||
@@ -893,7 +893,7 @@ static long _zcrypt_send_cprb(bool userspace, struct ap_perms *perms,
 	for_each_zcrypt_card(zc) {
 		/* Check for usable CCA card */
 		if (!zc->online || !zc->card->config || zc->card->chkstop ||
-		    !(zc->card->functions & 0x10000000))
+		    !zc->card->hwinfo.cca)
 			continue;
 		/* Check for user selected CCA card */
 		if (xcrb->user_defined != AUTOSELECT &&
@@ -1064,7 +1064,7 @@ static long _zcrypt_send_ep11_cprb(bool userspace, struct ap_perms *perms,
 	for_each_zcrypt_card(zc) {
 		/* Check for usable EP11 card */
 		if (!zc->online || !zc->card->config || zc->card->chkstop ||
-		    !(zc->card->functions & 0x04000000))
+		    !zc->card->hwinfo.ep11)
 			continue;
 		/* Check for user selected EP11 card */
 		if (targets &&
@@ -1177,7 +1177,7 @@ static long zcrypt_rng(char *buffer)
 	for_each_zcrypt_card(zc) {
 		/* Check for usable CCA card */
 		if (!zc->online || !zc->card->config || zc->card->chkstop ||
-		    !(zc->card->functions & 0x10000000))
+		    !zc->card->hwinfo.cca)
 			continue;
 		/* get weight index of the card device */
 		wgt = zc->speed_rating[func_code];
@@ -1238,7 +1238,7 @@ static void zcrypt_device_status_mask(struct zcrypt_device_status *devstatus)
 			queue = AP_QID_QUEUE(zq->queue->qid);
 			stat = &devstatus[card * AP_DOMAINS + queue];
 			stat->hwtype = zc->card->ap_dev.device_type;
-			stat->functions = zc->card->functions >> 26;
+			stat->functions = zc->card->hwinfo.fac >> 26;
 			stat->qid = zq->queue->qid;
 			stat->online = zq->online ? 0x01 : 0x00;
 		}
@@ -1263,7 +1263,7 @@ void zcrypt_device_status_mask_ext(struct zcrypt_device_status_ext *devstatus)
 			queue = AP_QID_QUEUE(zq->queue->qid);
 			stat = &devstatus[card * AP_DOMAINS + queue];
 			stat->hwtype = zc->card->ap_dev.device_type;
-			stat->functions = zc->card->functions >> 26;
+			stat->functions = zc->card->hwinfo.fac >> 26;
 			stat->qid = zq->queue->qid;
 			stat->online = zq->online ? 0x01 : 0x00;
 		}
@@ -1286,7 +1286,7 @@ int zcrypt_device_status_ext(int card, int queue,
 			if (card == AP_QID_CARD(zq->queue->qid) &&
 			    queue == AP_QID_QUEUE(zq->queue->qid)) {
 				devstat->hwtype = zc->card->ap_dev.device_type;
-				devstat->functions = zc->card->functions >> 26;
+				devstat->functions = zc->card->hwinfo.fac >> 26;
 				devstat->qid = zq->queue->qid;
 				devstat->online = zq->online ? 0x01 : 0x00;
 				spin_unlock(&zcrypt_list_lock);

---- next file ----

@@ -477,7 +477,7 @@ static int zcrypt_cex4_card_probe(struct ap_device *ap_dev)
 		return -ENOMEM;
 	zc->card = ac;
 	dev_set_drvdata(&ap_dev->device, zc);
-	if (ap_test_bit(&ac->functions, AP_FUNC_ACCEL)) {
+	if (ac->hwinfo.accel) {
 		if (ac->ap_dev.device_type == AP_DEVICE_TYPE_CEX4) {
 			zc->type_string = "CEX4A";
 			zc->user_space_type = ZCRYPT_CEX4;
@@ -506,8 +506,7 @@ static int zcrypt_cex4_card_probe(struct ap_device *ap_dev)
 			zc->user_space_type = ZCRYPT_CEX6;
 		}
 		zc->min_mod_size = CEX4A_MIN_MOD_SIZE;
-		if (ap_test_bit(&ac->functions, AP_FUNC_MEX4K) &&
-		    ap_test_bit(&ac->functions, AP_FUNC_CRT4K)) {
+		if (ac->hwinfo.mex4k && ac->hwinfo.crt4k) {
 			zc->max_mod_size = CEX4A_MAX_MOD_SIZE_4K;
 			zc->max_exp_bit_length =
 				CEX4A_MAX_MOD_SIZE_4K;
@@ -516,7 +515,7 @@ static int zcrypt_cex4_card_probe(struct ap_device *ap_dev)
 			zc->max_exp_bit_length =
 				CEX4A_MAX_MOD_SIZE_2K;
 		}
-	} else if (ap_test_bit(&ac->functions, AP_FUNC_COPRO)) {
+	} else if (ac->hwinfo.cca) {
 		if (ac->ap_dev.device_type == AP_DEVICE_TYPE_CEX4) {
 			zc->type_string = "CEX4C";
 			zc->speed_rating = CEX4C_SPEED_IDX;
@@ -556,7 +555,7 @@ static int zcrypt_cex4_card_probe(struct ap_device *ap_dev)
 		zc->min_mod_size = CEX4C_MIN_MOD_SIZE;
 		zc->max_mod_size = CEX4C_MAX_MOD_SIZE;
 		zc->max_exp_bit_length = CEX4C_MAX_MOD_SIZE;
-	} else if (ap_test_bit(&ac->functions, AP_FUNC_EP11)) {
+	} else if (ac->hwinfo.ep11) {
 		if (ac->ap_dev.device_type == AP_DEVICE_TYPE_CEX4) {
 			zc->type_string = "CEX4P";
 			zc->user_space_type = ZCRYPT_CEX4;
@@ -599,14 +598,14 @@ static int zcrypt_cex4_card_probe(struct ap_device *ap_dev)
 		return rc;
 	}
 
-	if (ap_test_bit(&ac->functions, AP_FUNC_COPRO)) {
+	if (ac->hwinfo.cca) {
 		rc = sysfs_create_group(&ap_dev->device.kobj,
 					&cca_card_attr_grp);
 		if (rc) {
 			zcrypt_card_unregister(zc);
 			zcrypt_card_free(zc);
 		}
-	} else if (ap_test_bit(&ac->functions, AP_FUNC_EP11)) {
+	} else if (ac->hwinfo.ep11) {
 		rc = sysfs_create_group(&ap_dev->device.kobj,
 					&ep11_card_attr_grp);
 		if (rc) {
@@ -627,9 +626,9 @@ static void zcrypt_cex4_card_remove(struct ap_device *ap_dev)
 	struct zcrypt_card *zc = dev_get_drvdata(&ap_dev->device);
 	struct ap_card *ac = to_ap_card(&ap_dev->device);
 
-	if (ap_test_bit(&ac->functions, AP_FUNC_COPRO))
+	if (ac->hwinfo.cca)
 		sysfs_remove_group(&ap_dev->device.kobj, &cca_card_attr_grp);
-	else if (ap_test_bit(&ac->functions, AP_FUNC_EP11))
+	else if (ac->hwinfo.ep11)
 		sysfs_remove_group(&ap_dev->device.kobj, &ep11_card_attr_grp);
 
 	zcrypt_card_unregister(zc);
@@ -654,19 +653,19 @@ static int zcrypt_cex4_queue_probe(struct ap_device *ap_dev)
 	struct zcrypt_queue *zq;
 	int rc;
 
-	if (ap_test_bit(&aq->card->functions, AP_FUNC_ACCEL)) {
+	if (aq->card->hwinfo.accel) {
 		zq = zcrypt_queue_alloc(aq->card->maxmsgsize);
 		if (!zq)
 			return -ENOMEM;
 		zq->ops = zcrypt_msgtype(MSGTYPE50_NAME,
 					 MSGTYPE50_VARIANT_DEFAULT);
-	} else if (ap_test_bit(&aq->card->functions, AP_FUNC_COPRO)) {
+	} else if (aq->card->hwinfo.cca) {
 		zq = zcrypt_queue_alloc(aq->card->maxmsgsize);
 		if (!zq)
 			return -ENOMEM;
 		zq->ops = zcrypt_msgtype(MSGTYPE06_NAME,
 					 MSGTYPE06_VARIANT_DEFAULT);
-	} else if (ap_test_bit(&aq->card->functions, AP_FUNC_EP11)) {
+	} else if (aq->card->hwinfo.ep11) {
 		zq = zcrypt_queue_alloc(aq->card->maxmsgsize);
 		if (!zq)
 			return -ENOMEM;
@@ -689,14 +688,14 @@ static int zcrypt_cex4_queue_probe(struct ap_device *ap_dev)
 		return rc;
 	}
 
-	if (ap_test_bit(&aq->card->functions, AP_FUNC_COPRO)) {
+	if (aq->card->hwinfo.cca) {
 		rc = sysfs_create_group(&ap_dev->device.kobj,
 					&cca_queue_attr_grp);
 		if (rc) {
 			zcrypt_queue_unregister(zq);
 			zcrypt_queue_free(zq);
 		}
-	} else if (ap_test_bit(&aq->card->functions, AP_FUNC_EP11)) {
+	} else if (aq->card->hwinfo.ep11) {
 		rc = sysfs_create_group(&ap_dev->device.kobj,
 					&ep11_queue_attr_grp);
 		if (rc) {
@@ -717,9 +716,9 @@ static void zcrypt_cex4_queue_remove(struct ap_device *ap_dev)
 	struct zcrypt_queue *zq = dev_get_drvdata(&ap_dev->device);
 	struct ap_queue *aq = to_ap_queue(&ap_dev->device);
 
-	if (ap_test_bit(&aq->card->functions, AP_FUNC_COPRO))
+	if (aq->card->hwinfo.cca)
 		sysfs_remove_group(&ap_dev->device.kobj, &cca_queue_attr_grp);
-	else if (ap_test_bit(&aq->card->functions, AP_FUNC_EP11))
+	else if (aq->card->hwinfo.ep11)
 		sysfs_remove_group(&ap_dev->device.kobj, &ep11_queue_attr_grp);
 
 	zcrypt_queue_unregister(zq);

---- next file ----

@@ -158,7 +158,7 @@ static void raid6_s390vx$#_xor_syndrome(int disks, int start, int stop,
 
 static int raid6_s390vx$#_valid(void)
 {
-	return MACHINE_HAS_VX;
+	return cpu_has_vx();
 }
 
 const struct raid6_calls raid6_s390vx$# = {