
SCSI misc on 20161006

This update includes the usual round of major driver updates (hpsa, be2iscsi,
 hisi_sas, zfcp, cxlflash).  There's a new incarnation of hpsa called smartpqi
 for which a driver is added, there's some cleanup work of the ibm vscsi target
 and updates to libfc, plus a whole host of minor fixes and updates and finally
 the removal of several ISA drivers which seem not to have been used for years.
 
 Signed-off-by: James Bottomley <jejb@linux.vnet.ibm.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQIcBAABAgAGBQJX9fZGAAoJEAVr7HOZEZN4TfkP/2bOHBGqyQ16P9jRjWXtC6pJ
 Fp/ZfU6ZrSpcGN49Wr9vyPbpvYKdtIZg3oUs6XhKmnfP+lbeIIJ5jxlEnwBVwWya
 JOOD91o8lLN7zRMuyfYIfgnm4dIU3GSLpnWIyfAhoMH1utiLLcq7s2XEM5girDft
 dVUL20XprtJkVsg2C+hhRAI8PMjWFInadj2eRIHdxJIDC8fXR+w8ojBShou+lf6Q
 /zYPgckTCBlZWIc/ohI3j52r4qmkChgX+3/jR+v9i5bGXjFfpmh0GzxM7tscESSa
 4Y/ZLTg72j/colYkA1jt04YLxA2dQCa6b8DmJIcUTL0WStsJUQH5hFFFHt3mSafI
 HirqRfHpmadHbfi5Kiyx688S5b0oVN4bMxvMoEOAUy7WVaLEr84GJ5VYhoAwkPhL
 USaDx6Hsa1OT0lGYAtyRKOUT/d55grztEOnSxBFiQgRoB8wrGX616Xg8VONy7JZS
 wEZtf1v5K0+ZXJiu4NtY+/RzQdOwu7OQHKfN5mLri8tJ+eo8d88ZwSARJxEZetSM
 P4EVR2ZjhL+Ct78v3i4Yj8FVMXHSzzulj530KQ/U7z/l4c2S54mtEKijDmXmto8k
 baiIah/wgaS/fznoOsJw+Iy/2HqsAtNZsReNcgNPLzfabTBXKSBXJDLmO4d3g/3s
 zwj1m3JtzAx2j3kQrkSv
 =cyTO
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "This update includes the usual round of major driver updates (hpsa,
  be2iscsi, hisi_sas, zfcp, cxlflash). There's a new incarnation of hpsa
  called smartpqi for which a driver is added, there's some cleanup work
  of the ibm vscsi target and updates to libfc, plus a whole host of
  minor fixes and updates and finally the removal of several ISA drivers
  which seem not to have been used for years"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (173 commits)
  scsi: mvsas: Mark symbols static where possible
  scsi: pm8001: Mark symbols static where possible
  scsi: arcmsr: Simplify user_len checking
  scsi: fcoe: fix off by one in eth2fc_speed()
  scsi: dtc: remove from tree
  scsi: t128: remove from tree
  scsi: pas16: remove from tree
  scsi: u14-34f: remove from tree
  scsi: ultrastor: remove from tree
  scsi: in2000: remove from tree
  scsi: wd7000: remove from tree
  scsi: scsi_dh_alua: Fix memory leak in alua_rtpg()
  scsi: lpfc: Mark symbols static where possible
  scsi: hpsa: correct call to hpsa_do_reset
  scsi: ufs: Get a TM service response from the correct offset
  scsi: ibmvfc: Fix I/O hang when port is not mapped
  scsi: megaraid_sas: clean function declarations in megaraid_sas_base.c up
  scsi: ipr: Remove redundant messages at adapter init time
  scsi: ipr: Don't log unnecessary 9084 error details
  scsi: smartpqi: raid bypass lba calculation fix
  ...
Committed by Linus Torvalds on 2016-10-07 09:28:53 -07:00
commit 4dfddf5036
117 changed files with 13025 additions and 13725 deletions


@ -121,7 +121,7 @@ Block library API
below. below.
The block library can be found on GitHub: The block library can be found on GitHub:
http://www.github.com/mikehollinger/ibmcapikv http://github.com/open-power/capiflash
CXL Flash Driver IOCTLs CXL Flash Driver IOCTLs
@ -171,11 +171,30 @@ DK_CXLFLASH_ATTACH
destroyed, the tokens are to be considered stale and subsequent destroyed, the tokens are to be considered stale and subsequent
usage will result in errors. usage will result in errors.
- When a context is no longer needed, the user shall detach from - A valid adapter file descriptor (fd2 >= 0) is only returned on
the context via the DK_CXLFLASH_DETACH ioctl. the initial attach for a context. Subsequent attaches to an
existing context (DK_CXLFLASH_ATTACH_REUSE_CONTEXT flag present)
do not provide the adapter file descriptor as it was previously
made known to the application.
- A close on fd2 will invalidate the tokens. This operation is not - When a context is no longer needed, the user shall detach from
required by the user. the context via the DK_CXLFLASH_DETACH ioctl. When this ioctl
returns with a valid adapter file descriptor and the return flag
DK_CXLFLASH_APP_CLOSE_ADAP_FD is present, the application _must_
close the adapter file descriptor following a successful detach.
- When this ioctl returns with a valid fd2 and the return flag
DK_CXLFLASH_APP_CLOSE_ADAP_FD is present, the application _must_
close fd2 in the following circumstances:
+ Following a successful detach of the last user of the context
+ Following a successful recovery on the context's original fd2
+ In the child process of a fork(), following a clone ioctl,
on the fd2 associated with the source context
- At any time, a close on fd2 will invalidate the tokens. Applications
should exercise caution to only close fd2 when appropriate (outlined
in the previous bullet) to avoid premature loss of I/O.
DK_CXLFLASH_USER_DIRECT DK_CXLFLASH_USER_DIRECT
----------------------- -----------------------
@ -254,6 +273,10 @@ DK_CXLFLASH_DETACH
success, all "tokens" which had been provided to the user from the success, all "tokens" which had been provided to the user from the
DK_CXLFLASH_ATTACH onward are no longer valid. DK_CXLFLASH_ATTACH onward are no longer valid.
When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
attach, the application _must_ close the fd2 associated with the context
following the detach of the final user of the context.
DK_CXLFLASH_VLUN_CLONE DK_CXLFLASH_VLUN_CLONE
---------------------- ----------------------
This ioctl is responsible for cloning a previously created This ioctl is responsible for cloning a previously created
@ -261,7 +284,7 @@ DK_CXLFLASH_VLUN_CLONE
support maintaining user space access to storage after a process support maintaining user space access to storage after a process
forks. Upon success, the child process (which invoked the ioctl) forks. Upon success, the child process (which invoked the ioctl)
will have access to the same LUNs via the same resource handle(s) will have access to the same LUNs via the same resource handle(s)
and fd2 as the parent, but under a different context. as the parent, but under a different context.
Context sharing across processes is not supported with CXL and Context sharing across processes is not supported with CXL and
therefore each fork must be met with establishing a new context therefore each fork must be met with establishing a new context
@ -275,6 +298,12 @@ DK_CXLFLASH_VLUN_CLONE
translation tables are copied from the parent context to the child's translation tables are copied from the parent context to the child's
and then synced with the AFU. and then synced with the AFU.
When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
attach, the application _must_ close the fd2 associated with the source
context (still resident/accessible in the parent process) following the
clone. This is to avoid a stale entry in the file descriptor table of the
child process.
DK_CXLFLASH_VERIFY DK_CXLFLASH_VERIFY
------------------ ------------------
This ioctl is used to detect various changes such as the capacity of This ioctl is used to detect various changes such as the capacity of
@ -309,6 +338,11 @@ DK_CXLFLASH_RECOVER_AFU
at which time the context/resources they held will be freed as part of at which time the context/resources they held will be freed as part of
the release fop. the release fop.
When the DK_CXLFLASH_APP_CLOSE_ADAP_FD flag was returned on a successful
attach, the application _must_ unmap and close the fd2 associated with the
original context following this ioctl returning success and indicating that
the context was recovered (DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET).
DK_CXLFLASH_MANAGE_LUN DK_CXLFLASH_MANAGE_LUN
---------------------- ----------------------
This ioctl is used to switch a LUN from a mode where it is available This ioctl is used to switch a LUN from a mode where it is available
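Taken together, the fd2 rules added to this document (attach, detach, clone, recover) amount to a small user-space pattern, sketched below. This is illustrative only: the real structure layouts and flag names live in include/uapi/scsi/cxlflash_ioctl.h, and the field names used here (adap_fd, context_id, hdr.return_flags) are assumptions that must be checked against that header.

    /*
     * Illustrative sketch only; see include/uapi/scsi/cxlflash_ioctl.h
     * for the authoritative structure and flag definitions.
     */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <scsi/cxlflash_ioctl.h>

    static int attach_then_detach(const char *sdev_path)
    {
        struct dk_cxlflash_attach attach;
        struct dk_cxlflash_detach detach;
        int rc = -1, fd2 = -1;
        int fd = open(sdev_path, O_RDWR);   /* cxlflash-exported disk */

        if (fd < 0)
            return -1;

        memset(&attach, 0, sizeof(attach));
        attach.num_interrupts = 4;
        if (ioctl(fd, DK_CXLFLASH_ATTACH, &attach) < 0)
            goto out;

        /* fd2 is only returned on the initial attach for a context */
        fd2 = attach.adap_fd;

        /* ... DK_CXLFLASH_USER_DIRECT/VIRTUAL setup and I/O go here ... */

        memset(&detach, 0, sizeof(detach));
        detach.context_id = attach.context_id;
        rc = ioctl(fd, DK_CXLFLASH_DETACH, &detach);

        /*
         * Only close fd2 when the kernel asked for it and this was the
         * last user of the context (see the bullets above).
         */
        if (rc == 0 && fd2 >= 0 &&
            (attach.hdr.return_flags & DK_CXLFLASH_APP_CLOSE_ADAP_FD))
            close(fd2);
    out:
        close(fd);
        return rc;
    }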


@ -64,8 +64,6 @@ hpsa.txt
- HP Smart Array Controller SCSI driver. - HP Smart Array Controller SCSI driver.
hptiop.txt hptiop.txt
- HIGHPOINT ROCKETRAID 3xxx RAID DRIVER - HIGHPOINT ROCKETRAID 3xxx RAID DRIVER
in2000.txt
- info on in2000 driver
libsas.txt libsas.txt
- Serial Attached SCSI management layer. - Serial Attached SCSI management layer.
link_power_management_policy.txt link_power_management_policy.txt


@ -1,43 +0,0 @@
README file for the Linux DTC3180/3280 scsi driver.
by Ray Van Tassle (rayvt@comm.mot.com) March 1996
Based on the generic & core NCR5380 code by Drew Eckhard
SCSI device driver for the DTC 3180/3280.
Data Technology Corp---a division of Qume.
The 3280 has a standard floppy interface.
The 3180 does not. Otherwise, they are identical.
The DTC3x80 does not support DMA but it does have Pseudo-DMA which is
supported by the driver.
Its DTC406 scsi chip is supposedly compatible with the NCR 53C400.
It is memory mapped, uses an IRQ, but no dma or io-port. There is
internal DMA, between SCSI bus and an on-chip 128-byte buffer. Double
buffering is done automagically by the chip. Data is transferred
between the on-chip buffer and CPU/RAM via memory moves.
The driver detects the possible memory addresses (jumper selectable):
CC00, DC00, C800, and D800
The possible IRQ's (jumper selectable) are:
IRQ 10, 11, 12, 15
Parity is supported by the chip, but not by this driver.
Information can be obtained from /proc/scsi/dtc3c80/N.
Note on interrupts:
The documentation says that it can be set to interrupt whenever the
on-chip buffer needs CPU attention. I couldn't get this to work. So
the driver polls for data-ready in the pseudo-DMA transfer routine.
The interrupt support routines in the NCR3280.c core modules handle
scsi disconnect/reconnect, and this (mostly) works. However..... I
have tested it with 4 totally different hard drives (both SCSI-1 and
SCSI-2), and one CDROM drive. Interrupts works great for all but one
specific hard drive. For this one, the driver will eventually hang in
the transfer state. I have tested with: "dd bs=4k count=2k
of=/dev/null if=/dev/sdb". It reads ok for a while, then hangs.
After beating my head against this for a couple of weeks, getting
nowhere, I give up. So.....This driver does NOT use interrupts, even
if you have the card jumpered to an IRQ. Probably nobody will ever
care.


@ -1,202 +0,0 @@
UPDATE NEWS: version 1.33 - 26 Aug 98
Interrupt management in this driver has become, over
time, increasingly odd and difficult to explain - this
has been mostly due to my own mental inadequacies. In
recent kernels, it has failed to function at all when
compiled for SMP. I've fixed that problem, and after
taking a fresh look at interrupts in general, greatly
reduced the number of places where they're fiddled
with. Done some heavy testing and it looks very good.
The driver now makes use of the __initfunc() and
__initdata macros to save about 4k of kernel memory.
Once again, the same code works for both 2.0.xx and
2.1.xx kernels.
UPDATE NEWS: version 1.32 - 28 Mar 98
Removed the check for legal IN2000 hardware versions:
It appears that the driver works fine with serial
EPROMs (the 8-pin chip that defines hardware rev) as
old as 2.1, so we'll assume that all cards are OK.
UPDATE NEWS: version 1.31 - 6 Jul 97
Fixed a bug that caused incorrect SCSI status bytes to be
returned from commands sent to LUNs greater than 0. This
means that CDROM changers work now! Fixed a bug in the
handling of command-line arguments when loaded as a module.
Also put all the header data in in2000.h where it belongs.
There are no longer any differences between this driver in
the 2.1.xx source tree and the 2.0.xx tree, as of 2.0.31
and 2.1.45 (or is it .46?) - this makes things much easier
for me...
UPDATE NEWS: version 1.30 - 14 Oct 96
Fixed a bug in the code that sets the transfer direction
bit (DESTID_DPD in the WD_DESTINATION_ID register). There
are quite a few SCSI commands that do a write-to-device;
now we deal with all of them correctly. Thanks to Joerg
Dorchain for catching this one.
UPDATE NEWS: version 1.29 - 24 Sep 96
The memory-mapped hardware on the card is now accessed via
the 'readb()' and 'readl()' macros - required by the new
memory management scheme in the 2.1.x kernel series.
As suggested by Andries Brouwer, 'bios_param()' no longer
forces an artificial 1023 track limit on drives. Also
removed some kludge-code left over from struggles with
older (buggy) compilers.
UPDATE NEWS: version 1.28 - 07 May 96
Tightened up the "interrupts enabled/disabled" discipline
in 'in2000_queuecommand()' and maybe 1 or 2 other places.
I _think_ it may have been a little too lax, causing an
occasional crash during full moon. A fully functional
/proc interface is now in place - if you want to play
with it, start by doing 'cat /proc/scsi/in2000/0'. You
can also use it to change a few run-time parameters on
the fly, but it's mostly for debugging. The curious
should take a good look at 'in2000_proc_info()' in the
in2000.c file to get an understanding of what it's all
about; I figure that people who are really into it will
want to add features suited to their own needs...
Also, sync is now DISABLED by default.
UPDATE NEWS: version 1.27 - 10 Apr 96
Fixed a well-hidden bug in the adaptive-disconnect code
that would show up every now and then during extreme
heavy loads involving 2 or more simultaneously active
devices. Thanks to Joe Mack for keeping my nose to the
grindstone on this one.
UPDATE NEWS: version 1.26 - 07 Mar 96
1.25 had a nasty bug that bit people with swap partitions
and tape drives. Also, in my attempt to guess my way
through Intel assembly language, I made an error in the
inline code for IO writes. Made a few other changes and
repairs - this version (fingers crossed) should work well.
UPDATE NEWS: version 1.25 - 05 Mar 96
Kernel 1.3.70 interrupt mods added; old kernels still OK.
Big help from Bill Earnest and David Willmore on speed
testing and optimizing: I think there's a real improvement
in this area.
New! User-friendly command-line interface for LILO and
module loading - the old method is gone, so you'll need
to read the comments for 'setup_strings' near the top
of in2000.c. For people with CDROM's or other devices
that have a tough time with sync negotiation, you can
now selectively disable sync on individual devices -
search for the 'nosync' keyword in the command-line
comments. Some of you disable the BIOS on the card, which
caused the auto-detect function to fail; there is now a
command-line option to force detection of a ROM-less card.
UPDATE NEWS: version 1.24a - 24 Feb 96
There was a bug in the synchronous transfer code. Only
a few people downloaded before I caught it - could have
been worse.
UPDATE NEWS: version 1.24 - 23 Feb 96
Lots of good changes. Advice from Bill Earnest resulted
in much better detection of cards, more efficient usage
of the fifo, and (hopefully) faster data transfers. The
jury is still out on speed - I hope it's improved some.
One nifty new feature is a cool way of doing disconnect/
reselect. The driver defaults to what I'm calling
'adaptive disconnect' - meaning that each command is
evaluated individually as to whether or not it should be
run with the option to disconnect/reselect (if the device
chooses), or as a "SCSI-bus-hog". When several devices
are operating simultaneously, disconnects are usually an
advantage. In a single device system, or if only 1 device
is being accessed, transfers usually go faster if disconnects
are not allowed.
The default arguments (you get these when you don't give an 'in2000'
command-line argument, or you give a blank argument) will cause
the driver to do adaptive disconnect, synchronous transfers, and a
minimum of debug messages. If you want to fool with the options,
search for 'setup_strings' near the top of the in2000.c file and
check the 'hostdata->args' section in in2000.h - but be warned! Not
everything is working yet (some things will never work, probably).
I believe that disabling disconnects (DIS_NEVER) will allow you
to choose a LEVEL2 value higher than 'L2_BASIC', but I haven't
spent a lot of time testing this. You might try 'ENABLE_CLUSTERING'
to see what happens: my tests showed little difference either way.
There's also a define called 'DEFAULT_SX_PER'; this sets the data
transfer speed for the asynchronous mode. I've put it at 500 ns
despite the fact that the card could handle settings of 376 or
252, because higher speeds may be a problem with poor quality
cables or improper termination; 500 ns is a compromise. You can
choose your own default through the command-line with the
'period' keyword.
------------------------------------------------
*********** DIP switch settings **************
------------------------------------------------
sw1-1 sw1-2 BIOS address (hex)
-----------------------------------------
off off C8000 - CBFF0
on off D8000 - DBFF0
off on D0000 - D3FF0
on on BIOS disabled
sw1-3 sw1-4 IO port address (hex)
------------------------------------
off off 220 - 22F
on off 200 - 20F
off on 110 - 11F
on on 100 - 10F
sw1-5 sw1-6 sw1-7 Interrupt
------------------------------
off off off 15
off on off 14
off off on 11
off on on 10
on - - disabled
sw1-8 function depends on BIOS version. In earlier versions this
controlled synchronous data transfer support for MSDOS:
off = disabled
on = enabled
In later ROMs (starting with 01.3 in April 1994) sw1-8 controls
the "greater than 2 disk drive" feature that first appeared in
MSDOS 5.0 (ignored by Linux):
off = 2 drives maximum
on = 7 drives maximum
sw1-9 Floppy controller
--------------------------
off disabled
on enabled
------------------------------------------------
I should mention that Drew Eckhardt's 'Generic NCR5380' sources
were my main inspiration, with lots of reference to the IN2000
driver currently distributed in the kernel source. I also owe
much to a driver written by Hamish Macdonald for Linux-m68k(!).
And to Eric Wright for being an ALPHA guinea pig. And to Bill
Earnest for 2 tons of great input and information. And to David
Willmore for extensive 'bonnie' testing. And to Joe Mack for
continual testing and feedback.
John Shifflett jshiffle@netcom.com


@ -34,9 +34,6 @@ parameters may be changed at runtime by the command
See drivers/scsi/BusLogic.c, comment before function See drivers/scsi/BusLogic.c, comment before function
BusLogic_ParseDriverOptions(). BusLogic_ParseDriverOptions().
dtc3181e= [HW,SCSI]
See Documentation/scsi/g_NCR5380.txt.
eata= [HW,SCSI] eata= [HW,SCSI]
fdomain= [HW,SCSI] fdomain= [HW,SCSI]
@ -47,9 +44,6 @@ parameters may be changed at runtime by the command
gvp11= [HW,SCSI] gvp11= [HW,SCSI]
in2000= [HW,SCSI]
See header of drivers/scsi/in2000.c.
ips= [HW,SCSI] Adaptec / IBM ServeRAID controller ips= [HW,SCSI] Adaptec / IBM ServeRAID controller
See header of drivers/scsi/ips.c. See header of drivers/scsi/ips.c.
@ -83,9 +77,6 @@ parameters may be changed at runtime by the command
Format: <buffer_size>,<write_threshold> Format: <buffer_size>,<write_threshold>
See also Documentation/scsi/st.txt. See also Documentation/scsi/st.txt.
pas16= [HW,SCSI]
See header of drivers/scsi/pas16.c.
scsi_debug_*= [SCSI] scsi_debug_*= [SCSI]
See drivers/scsi/scsi_debug.c. See drivers/scsi/scsi_debug.c.
@ -119,18 +110,9 @@ parameters may be changed at runtime by the command
sym53c416= [HW,SCSI] sym53c416= [HW,SCSI]
See header of drivers/scsi/sym53c416.c. See header of drivers/scsi/sym53c416.c.
t128= [HW,SCSI]
See header of drivers/scsi/t128.c.
tmscsim= [HW,SCSI] tmscsim= [HW,SCSI]
See comment before function dc390_setup() in See comment before function dc390_setup() in
drivers/scsi/tmscsim.c. drivers/scsi/tmscsim.c.
u14-34f= [HW,SCSI] UltraStor 14F/34F SCSI host adapter
See header of drivers/scsi/u14-34f.c.
wd33c93= [HW,SCSI] wd33c93= [HW,SCSI]
See header of drivers/scsi/wd33c93.c. See header of drivers/scsi/wd33c93.c.
wd7000= [HW,SCSI]
See header of drivers/scsi/wd7000.c.


@ -0,0 +1,80 @@
SMARTPQI - Microsemi Smart PQI Driver
-----------------------------------------
This file describes the smartpqi SCSI driver for Microsemi
(http://www.microsemi.com) PQI controllers. The smartpqi driver
is the next generation SCSI driver for Microsemi Corp. The smartpqi
driver is the first SCSI driver to implement the PQI queuing model.
The smartpqi driver will replace the aacraid driver for Adaptec Series 9
controllers. Customers running an older kernel (Pre-4.9) using an Adaptec
Series 9 controller will have to configure the smartpqi driver or their
volumes will not be added to the OS.
For Microsemi smartpqi controller support, enable the smartpqi driver
when configuring the kernel.
For more information on the PQI Queuing Interface, please see:
http://www.t10.org/drafts.htm
http://www.t10.org/members/w_pqi2.htm
Supported devices:
------------------
<Controller names to be added as they become publically available.>
smartpqi specific entries in /sys
-----------------------------
smartpqi host attributes:
-------------------------
/sys/class/scsi_host/host*/rescan
/sys/class/scsi_host/host*/version
The host rescan attribute is a write only attribute. Writing to this
attribute will trigger the driver to scan for new, changed, or removed
devices and notify the SCSI mid-layer of any changes detected.
The version attribute is read-only and will return the driver version
and the controller firmware version.
For example:
driver: 0.9.13-370
firmware: 0.01-522
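A minimal user-space sketch of driving these two attributes follows; "host0" is an assumption and should be replaced with whichever SCSI host smartpqi registered on the target system.

    #include <stdio.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/sys/class/scsi_host/host0/rescan", "w");

        if (f) {
            fputs("1\n", f);        /* any write triggers a rescan */
            fclose(f);
        }

        f = fopen("/sys/class/scsi_host/host0/version", "r");
        if (f) {
            while (fgets(line, sizeof(line), f))
                fputs(line, stdout);    /* driver + firmware versions */
            fclose(f);
        }
        return 0;
    }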
smartpqi sas device attributes
------------------------------
HBA devices are added to the SAS transport layer. These attributes are
automatically added by the SAS transport layer.
/sys/class/sas_device/end_device-X:X/sas_address
/sys/class/sas_device/end_device-X:X/enclosure_identifier
/sys/class/sas_device/end_device-X:X/scsi_target_id
smartpqi specific ioctls:
-------------------------
For compatibility with applications written for the cciss protocol.
CCISS_DEREGDISK
CCISS_REGNEWDISK
CCISS_REGNEWD
The above three ioctls all do exactly the same thing, which is to cause the driver
to rescan for new devices. This does exactly the same thing as writing to the
smartpqi specific host "rescan" attribute.
CCISS_GETPCIINFO
Returns PCI domain, bus, device and function and "board ID" (PCI subsystem ID).
CCISS_GETDRIVVER
Returns driver version in three bytes encoded as:
(DRIVER_MAJOR << 28) | (DRIVER_MINOR << 24) | (DRIVER_RELEASE << 16) | DRIVER_REVISION;
CCISS_PASSTHRU
Allows "BMIC" and "CISS" commands to be passed through to the Smart Storage Array.
These are used extensively by the SSA Array Configuration Utility, SNMP storage
agents, etc.
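A rough sketch of unpacking the CCISS_GETDRIVVER value per the encoding above; the device node used here is an assumption and should be a node serviced by smartpqi on the target system.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/cciss_ioctl.h>

    int main(void)
    {
        __u32 ver = 0;
        int fd = open("/dev/sg0", O_RDONLY);    /* assumed smartpqi node */

        if (fd < 0 || ioctl(fd, CCISS_GETDRIVVER, &ver) < 0)
            return 1;

        printf("driver %u.%u.%u-%u\n",
               (unsigned)(ver >> 28) & 0xf,     /* DRIVER_MAJOR    */
               (unsigned)(ver >> 24) & 0xf,     /* DRIVER_MINOR    */
               (unsigned)(ver >> 16) & 0xff,    /* DRIVER_RELEASE  */
               (unsigned)(ver & 0xffff));       /* DRIVER_REVISION */
        close(fd);
        return 0;
    }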


@ -7973,6 +7973,18 @@ W: http://www.melexis.com
S: Supported S: Supported
F: drivers/iio/temperature/mlx90614.c F: drivers/iio/temperature/mlx90614.c
MICROSEMI SMART ARRAY SMARTPQI DRIVER (smartpqi)
M: Don Brace <don.brace@microsemi.com>
L: esc.storagedev@microsemi.com
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/smartpqi/smartpqi*.[ch]
F: drivers/scsi/smartpqi/Kconfig
F: drivers/scsi/smartpqi/Makefile
F: include/linux/cciss*.h
F: include/uapi/linux/cciss*.h
F: Documentation/scsi/smartpqi.txt
MN88472 MEDIA DRIVER MN88472 MEDIA DRIVER
M: Antti Palosaari <crope@iki.fi> M: Antti Palosaari <crope@iki.fi>
L: linux-media@vger.kernel.org L: linux-media@vger.kernel.org
@ -8185,20 +8197,16 @@ M: Michael Schmitz <schmitzmic@gmail.com>
L: linux-scsi@vger.kernel.org L: linux-scsi@vger.kernel.org
S: Maintained S: Maintained
F: Documentation/scsi/g_NCR5380.txt F: Documentation/scsi/g_NCR5380.txt
F: Documentation/scsi/dtc3x80.txt
F: drivers/scsi/NCR5380.* F: drivers/scsi/NCR5380.*
F: drivers/scsi/arm/cumana_1.c F: drivers/scsi/arm/cumana_1.c
F: drivers/scsi/arm/oak.c F: drivers/scsi/arm/oak.c
F: drivers/scsi/atari_scsi.* F: drivers/scsi/atari_scsi.*
F: drivers/scsi/dmx3191d.c F: drivers/scsi/dmx3191d.c
F: drivers/scsi/dtc.*
F: drivers/scsi/g_NCR5380.* F: drivers/scsi/g_NCR5380.*
F: drivers/scsi/g_NCR5380_mmio.c F: drivers/scsi/g_NCR5380_mmio.c
F: drivers/scsi/mac_scsi.* F: drivers/scsi/mac_scsi.*
F: drivers/scsi/pas16.*
F: drivers/scsi/sun3_scsi.* F: drivers/scsi/sun3_scsi.*
F: drivers/scsi/sun3_scsi_vme.c F: drivers/scsi/sun3_scsi_vme.c
F: drivers/scsi/t128.*
NCR DUAL 700 SCSI DRIVER (MICROCHANNEL) NCR DUAL 700 SCSI DRIVER (MICROCHANNEL)
M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com> M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
@ -10740,12 +10748,12 @@ S: Maintained
F: drivers/misc/phantom.c F: drivers/misc/phantom.c
F: include/uapi/linux/phantom.h F: include/uapi/linux/phantom.h
SERVER ENGINES 10Gbps iSCSI - BladeEngine 2 DRIVER Emulex 10Gbps iSCSI - OneConnect DRIVER
M: Jayamohan Kallickal <jayamohan.kallickal@avagotech.com> M: Subbu Seetharaman <subbu.seetharaman@broadcom.com>
M: Ketan Mukadam <ketan.mukadam@avagotech.com> M: Ketan Mukadam <ketan.mukadam@broadcom.com>
M: John Soni Jose <sony.john@avagotech.com> M: Jitendra Bhivare <jitendra.bhivare@broadcom.com>
L: linux-scsi@vger.kernel.org L: linux-scsi@vger.kernel.org
W: http://www.avagotech.com W: http://www.broadcom.com
S: Supported S: Supported
F: drivers/scsi/be2iscsi/ F: drivers/scsi/be2iscsi/
@ -12143,12 +12151,6 @@ S: Maintained
F: drivers/tc/ F: drivers/tc/
F: include/linux/tc.h F: include/linux/tc.h
U14-34F SCSI DRIVER
M: Dario Ballabio <ballabio_dario@emc.com>
L: linux-scsi@vger.kernel.org
S: Maintained
F: drivers/scsi/u14-34f.c
UBI FILE SYSTEM (UBIFS) UBI FILE SYSTEM (UBIFS)
M: Richard Weinberger <richard@nod.at> M: Richard Weinberger <richard@nod.at>
M: Artem Bityutskiy <dedekind1@gmail.com> M: Artem Bityutskiy <dedekind1@gmail.com>
@ -12876,12 +12878,6 @@ F: drivers/watchdog/
F: include/linux/watchdog.h F: include/linux/watchdog.h
F: include/uapi/linux/watchdog.h F: include/uapi/linux/watchdog.h
WD7000 SCSI DRIVER
M: Miroslav Zagorac <zaga@fly.cc.fer.hr>
L: linux-scsi@vger.kernel.org
S: Maintained
F: drivers/scsi/wd7000.c
WIIMOTE HID DRIVER WIIMOTE HID DRIVER
M: David Herrmann <dh.herrmann@googlemail.com> M: David Herrmann <dh.herrmann@googlemail.com>
L: linux-input@vger.kernel.org L: linux-input@vger.kernel.org


@ -1865,8 +1865,8 @@ mpt_attach(struct pci_dev *pdev, const struct pci_device_id *id)
snprintf(ioc->reset_work_q_name, MPT_KOBJ_NAME_LEN, snprintf(ioc->reset_work_q_name, MPT_KOBJ_NAME_LEN,
"mpt_poll_%d", ioc->id); "mpt_poll_%d", ioc->id);
ioc->reset_work_q = ioc->reset_work_q = alloc_workqueue(ioc->reset_work_q_name,
create_singlethread_workqueue(ioc->reset_work_q_name); WQ_MEM_RECLAIM, 0);
if (!ioc->reset_work_q) { if (!ioc->reset_work_q) {
printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n", printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n",
ioc->name); ioc->name);
@ -1992,7 +1992,8 @@ mpt_attach(struct pci_dev *pdev, const struct pci_device_id *id)
INIT_LIST_HEAD(&ioc->fw_event_list); INIT_LIST_HEAD(&ioc->fw_event_list);
spin_lock_init(&ioc->fw_event_lock); spin_lock_init(&ioc->fw_event_lock);
snprintf(ioc->fw_event_q_name, MPT_KOBJ_NAME_LEN, "mpt/%d", ioc->id); snprintf(ioc->fw_event_q_name, MPT_KOBJ_NAME_LEN, "mpt/%d", ioc->id);
ioc->fw_event_q = create_singlethread_workqueue(ioc->fw_event_q_name); ioc->fw_event_q = alloc_workqueue(ioc->fw_event_q_name,
WQ_MEM_RECLAIM, 0);
if (!ioc->fw_event_q) { if (!ioc->fw_event_q) {
printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n", printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n",
ioc->name); ioc->name);


@ -1324,9 +1324,12 @@ mptfc_probe(struct pci_dev *pdev, const struct pci_device_id *id)
snprintf(ioc->fc_rescan_work_q_name, sizeof(ioc->fc_rescan_work_q_name), snprintf(ioc->fc_rescan_work_q_name, sizeof(ioc->fc_rescan_work_q_name),
"mptfc_wq_%d", sh->host_no); "mptfc_wq_%d", sh->host_no);
ioc->fc_rescan_work_q = ioc->fc_rescan_work_q =
create_singlethread_workqueue(ioc->fc_rescan_work_q_name); alloc_ordered_workqueue(ioc->fc_rescan_work_q_name,
if (!ioc->fc_rescan_work_q) WQ_MEM_RECLAIM);
if (!ioc->fc_rescan_work_q) {
error = -ENOMEM;
goto out_mptfc_probe; goto out_mptfc_probe;
}
/* /*
* Pre-fetch FC port WWN and stuff... * Pre-fetch FC port WWN and stuff...
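The two fusion hunks above follow the standard conversion from the legacy create_singlethread_workqueue() helper to explicit alloc_workqueue()/alloc_ordered_workqueue() calls carrying WQ_MEM_RECLAIM, so the queues keep a rescuer and can make forward progress under memory pressure. A sketch of the pattern only, not the driver's exact code:

    #include <linux/errno.h>
    #include <linux/workqueue.h>

    struct example_ioc {
        struct workqueue_struct *reset_work_q;
        char reset_work_q_name[32];
    };

    static int example_alloc_wq(struct example_ioc *ioc)
    {
        /* was: ioc->reset_work_q =
         *          create_singlethread_workqueue(ioc->reset_work_q_name); */
        ioc->reset_work_q = alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM,
                                                    ioc->reset_work_q_name);
        return ioc->reset_work_q ? 0 : -ENOMEM;
    }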


@ -3,7 +3,7 @@
* *
* Debug traces for zfcp. * Debug traces for zfcp.
* *
* Copyright IBM Corp. 2002, 2013 * Copyright IBM Corp. 2002, 2016
*/ */
#define KMSG_COMPONENT "zfcp" #define KMSG_COMPONENT "zfcp"
@ -65,7 +65,7 @@ void zfcp_dbf_pl_write(struct zfcp_dbf *dbf, void *data, u16 length, char *area,
* @tag: tag indicating which kind of unsolicited status has been received * @tag: tag indicating which kind of unsolicited status has been received
* @req: request for which a response was received * @req: request for which a response was received
*/ */
void zfcp_dbf_hba_fsf_res(char *tag, struct zfcp_fsf_req *req) void zfcp_dbf_hba_fsf_res(char *tag, int level, struct zfcp_fsf_req *req)
{ {
struct zfcp_dbf *dbf = req->adapter->dbf; struct zfcp_dbf *dbf = req->adapter->dbf;
struct fsf_qtcb_prefix *q_pref = &req->qtcb->prefix; struct fsf_qtcb_prefix *q_pref = &req->qtcb->prefix;
@ -85,6 +85,8 @@ void zfcp_dbf_hba_fsf_res(char *tag, struct zfcp_fsf_req *req)
rec->u.res.req_issued = req->issued; rec->u.res.req_issued = req->issued;
rec->u.res.prot_status = q_pref->prot_status; rec->u.res.prot_status = q_pref->prot_status;
rec->u.res.fsf_status = q_head->fsf_status; rec->u.res.fsf_status = q_head->fsf_status;
rec->u.res.port_handle = q_head->port_handle;
rec->u.res.lun_handle = q_head->lun_handle;
memcpy(rec->u.res.prot_status_qual, &q_pref->prot_status_qual, memcpy(rec->u.res.prot_status_qual, &q_pref->prot_status_qual,
FSF_PROT_STATUS_QUAL_SIZE); FSF_PROT_STATUS_QUAL_SIZE);
@ -97,7 +99,7 @@ void zfcp_dbf_hba_fsf_res(char *tag, struct zfcp_fsf_req *req)
rec->pl_len, "fsf_res", req->req_id); rec->pl_len, "fsf_res", req->req_id);
} }
debug_event(dbf->hba, 1, rec, sizeof(*rec)); debug_event(dbf->hba, level, rec, sizeof(*rec));
spin_unlock_irqrestore(&dbf->hba_lock, flags); spin_unlock_irqrestore(&dbf->hba_lock, flags);
} }
@ -241,7 +243,8 @@ static void zfcp_dbf_set_common(struct zfcp_dbf_rec *rec,
if (sdev) { if (sdev) {
rec->lun_status = atomic_read(&sdev_to_zfcp(sdev)->status); rec->lun_status = atomic_read(&sdev_to_zfcp(sdev)->status);
rec->lun = zfcp_scsi_dev_lun(sdev); rec->lun = zfcp_scsi_dev_lun(sdev);
} } else
rec->lun = ZFCP_DBF_INVALID_LUN;
} }
/** /**
@ -320,13 +323,48 @@ void zfcp_dbf_rec_run(char *tag, struct zfcp_erp_action *erp)
spin_unlock_irqrestore(&dbf->rec_lock, flags); spin_unlock_irqrestore(&dbf->rec_lock, flags);
} }
/**
* zfcp_dbf_rec_run_wka - trace wka port event with info like running recovery
* @tag: identifier for event
* @wka_port: well known address port
* @req_id: request ID to correlate with potential HBA trace record
*/
void zfcp_dbf_rec_run_wka(char *tag, struct zfcp_fc_wka_port *wka_port,
u64 req_id)
{
struct zfcp_dbf *dbf = wka_port->adapter->dbf;
struct zfcp_dbf_rec *rec = &dbf->rec_buf;
unsigned long flags;
spin_lock_irqsave(&dbf->rec_lock, flags);
memset(rec, 0, sizeof(*rec));
rec->id = ZFCP_DBF_REC_RUN;
memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN);
rec->port_status = wka_port->status;
rec->d_id = wka_port->d_id;
rec->lun = ZFCP_DBF_INVALID_LUN;
rec->u.run.fsf_req_id = req_id;
rec->u.run.rec_status = ~0;
rec->u.run.rec_step = ~0;
rec->u.run.rec_action = ~0;
rec->u.run.rec_count = ~0;
debug_event(dbf->rec, 1, rec, sizeof(*rec));
spin_unlock_irqrestore(&dbf->rec_lock, flags);
}
static inline static inline
void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf, void *data, u8 id, u16 len, void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf,
u64 req_id, u32 d_id) char *paytag, struct scatterlist *sg, u8 id, u16 len,
u64 req_id, u32 d_id, u16 cap_len)
{ {
struct zfcp_dbf_san *rec = &dbf->san_buf; struct zfcp_dbf_san *rec = &dbf->san_buf;
u16 rec_len; u16 rec_len;
unsigned long flags; unsigned long flags;
struct zfcp_dbf_pay *payload = &dbf->pay_buf;
u16 pay_sum = 0;
spin_lock_irqsave(&dbf->san_lock, flags); spin_lock_irqsave(&dbf->san_lock, flags);
memset(rec, 0, sizeof(*rec)); memset(rec, 0, sizeof(*rec));
@ -334,10 +372,41 @@ void zfcp_dbf_san(char *tag, struct zfcp_dbf *dbf, void *data, u8 id, u16 len,
rec->id = id; rec->id = id;
rec->fsf_req_id = req_id; rec->fsf_req_id = req_id;
rec->d_id = d_id; rec->d_id = d_id;
rec_len = min(len, (u16)ZFCP_DBF_SAN_MAX_PAYLOAD);
memcpy(rec->payload, data, rec_len);
memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN); memcpy(rec->tag, tag, ZFCP_DBF_TAG_LEN);
rec->pl_len = len; /* full length even if we cap pay below */
if (!sg)
goto out;
rec_len = min_t(unsigned int, sg->length, ZFCP_DBF_SAN_MAX_PAYLOAD);
memcpy(rec->payload, sg_virt(sg), rec_len); /* part of 1st sg entry */
if (len <= rec_len)
goto out; /* skip pay record if full content in rec->payload */
/* if (len > rec_len):
* dump data up to cap_len ignoring small duplicate in rec->payload
*/
spin_lock_irqsave(&dbf->pay_lock, flags);
memset(payload, 0, sizeof(*payload));
memcpy(payload->area, paytag, ZFCP_DBF_TAG_LEN);
payload->fsf_req_id = req_id;
payload->counter = 0;
for (; sg && pay_sum < cap_len; sg = sg_next(sg)) {
u16 pay_len, offset = 0;
while (offset < sg->length && pay_sum < cap_len) {
pay_len = min((u16)ZFCP_DBF_PAY_MAX_REC,
(u16)(sg->length - offset));
/* cap_len <= pay_sum < cap_len+ZFCP_DBF_PAY_MAX_REC */
memcpy(payload->data, sg_virt(sg) + offset, pay_len);
debug_event(dbf->pay, 1, payload,
zfcp_dbf_plen(pay_len));
payload->counter++;
offset += pay_len;
pay_sum += pay_len;
}
}
spin_unlock(&dbf->pay_lock);
out:
debug_event(dbf->san, 1, rec, sizeof(*rec)); debug_event(dbf->san, 1, rec, sizeof(*rec));
spin_unlock_irqrestore(&dbf->san_lock, flags); spin_unlock_irqrestore(&dbf->san_lock, flags);
} }
@ -354,9 +423,62 @@ void zfcp_dbf_san_req(char *tag, struct zfcp_fsf_req *fsf, u32 d_id)
struct zfcp_fsf_ct_els *ct_els = fsf->data; struct zfcp_fsf_ct_els *ct_els = fsf->data;
u16 length; u16 length;
length = (u16)(ct_els->req->length + FC_CT_HDR_LEN); length = (u16)zfcp_qdio_real_bytes(ct_els->req);
zfcp_dbf_san(tag, dbf, sg_virt(ct_els->req), ZFCP_DBF_SAN_REQ, length, zfcp_dbf_san(tag, dbf, "san_req", ct_els->req, ZFCP_DBF_SAN_REQ,
fsf->req_id, d_id); length, fsf->req_id, d_id, length);
}
static u16 zfcp_dbf_san_res_cap_len_if_gpn_ft(char *tag,
struct zfcp_fsf_req *fsf,
u16 len)
{
struct zfcp_fsf_ct_els *ct_els = fsf->data;
struct fc_ct_hdr *reqh = sg_virt(ct_els->req);
struct fc_ns_gid_ft *reqn = (struct fc_ns_gid_ft *)(reqh + 1);
struct scatterlist *resp_entry = ct_els->resp;
struct fc_gpn_ft_resp *acc;
int max_entries, x, last = 0;
if (!(memcmp(tag, "fsscth2", 7) == 0
&& ct_els->d_id == FC_FID_DIR_SERV
&& reqh->ct_rev == FC_CT_REV
&& reqh->ct_in_id[0] == 0
&& reqh->ct_in_id[1] == 0
&& reqh->ct_in_id[2] == 0
&& reqh->ct_fs_type == FC_FST_DIR
&& reqh->ct_fs_subtype == FC_NS_SUBTYPE
&& reqh->ct_options == 0
&& reqh->_ct_resvd1 == 0
&& reqh->ct_cmd == FC_NS_GPN_FT
/* reqh->ct_mr_size can vary so do not match but read below */
&& reqh->_ct_resvd2 == 0
&& reqh->ct_reason == 0
&& reqh->ct_explan == 0
&& reqh->ct_vendor == 0
&& reqn->fn_resvd == 0
&& reqn->fn_domain_id_scope == 0
&& reqn->fn_area_id_scope == 0
&& reqn->fn_fc4_type == FC_TYPE_FCP))
return len; /* not GPN_FT response so do not cap */
acc = sg_virt(resp_entry);
max_entries = (reqh->ct_mr_size * 4 / sizeof(struct fc_gpn_ft_resp))
+ 1 /* zfcp_fc_scan_ports: bytes correct, entries off-by-one
* to account for header as 1st pseudo "entry" */;
/* the basic CT_IU preamble is the same size as one entry in the GPN_FT
* response, allowing us to skip special handling for it - just skip it
*/
for (x = 1; x < max_entries && !last; x++) {
if (x % (ZFCP_FC_GPN_FT_ENT_PAGE + 1))
acc++;
else
acc = sg_virt(++resp_entry);
last = acc->fp_flags & FC_NS_FID_LAST;
}
len = min(len, (u16)(x * sizeof(struct fc_gpn_ft_resp)));
return len; /* cap after last entry */
} }
/** /**
@ -370,9 +492,10 @@ void zfcp_dbf_san_res(char *tag, struct zfcp_fsf_req *fsf)
struct zfcp_fsf_ct_els *ct_els = fsf->data; struct zfcp_fsf_ct_els *ct_els = fsf->data;
u16 length; u16 length;
length = (u16)(ct_els->resp->length + FC_CT_HDR_LEN); length = (u16)zfcp_qdio_real_bytes(ct_els->resp);
zfcp_dbf_san(tag, dbf, sg_virt(ct_els->resp), ZFCP_DBF_SAN_RES, length, zfcp_dbf_san(tag, dbf, "san_res", ct_els->resp, ZFCP_DBF_SAN_RES,
fsf->req_id, 0); length, fsf->req_id, ct_els->d_id,
zfcp_dbf_san_res_cap_len_if_gpn_ft(tag, fsf, length));
} }
/** /**
@ -386,11 +509,13 @@ void zfcp_dbf_san_in_els(char *tag, struct zfcp_fsf_req *fsf)
struct fsf_status_read_buffer *srb = struct fsf_status_read_buffer *srb =
(struct fsf_status_read_buffer *) fsf->data; (struct fsf_status_read_buffer *) fsf->data;
u16 length; u16 length;
struct scatterlist sg;
length = (u16)(srb->length - length = (u16)(srb->length -
offsetof(struct fsf_status_read_buffer, payload)); offsetof(struct fsf_status_read_buffer, payload));
zfcp_dbf_san(tag, dbf, srb->payload.data, ZFCP_DBF_SAN_ELS, length, sg_init_one(&sg, srb->payload.data, length);
fsf->req_id, ntoh24(srb->d_id)); zfcp_dbf_san(tag, dbf, "san_els", &sg, ZFCP_DBF_SAN_ELS, length,
fsf->req_id, ntoh24(srb->d_id), length);
} }
/** /**
@ -399,7 +524,8 @@ void zfcp_dbf_san_in_els(char *tag, struct zfcp_fsf_req *fsf)
* @sc: pointer to struct scsi_cmnd * @sc: pointer to struct scsi_cmnd
* @fsf: pointer to struct zfcp_fsf_req * @fsf: pointer to struct zfcp_fsf_req
*/ */
void zfcp_dbf_scsi(char *tag, struct scsi_cmnd *sc, struct zfcp_fsf_req *fsf) void zfcp_dbf_scsi(char *tag, int level, struct scsi_cmnd *sc,
struct zfcp_fsf_req *fsf)
{ {
struct zfcp_adapter *adapter = struct zfcp_adapter *adapter =
(struct zfcp_adapter *) sc->device->host->hostdata[0]; (struct zfcp_adapter *) sc->device->host->hostdata[0];
@ -442,7 +568,7 @@ void zfcp_dbf_scsi(char *tag, struct scsi_cmnd *sc, struct zfcp_fsf_req *fsf)
} }
} }
debug_event(dbf->scsi, 1, rec, sizeof(*rec)); debug_event(dbf->scsi, level, rec, sizeof(*rec));
spin_unlock_irqrestore(&dbf->scsi_lock, flags); spin_unlock_irqrestore(&dbf->scsi_lock, flags);
} }


@ -2,7 +2,7 @@
* zfcp device driver * zfcp device driver
* debug feature declarations * debug feature declarations
* *
* Copyright IBM Corp. 2008, 2010 * Copyright IBM Corp. 2008, 2015
*/ */
#ifndef ZFCP_DBF_H #ifndef ZFCP_DBF_H
@ -17,6 +17,11 @@
#define ZFCP_DBF_INVALID_LUN 0xFFFFFFFFFFFFFFFFull #define ZFCP_DBF_INVALID_LUN 0xFFFFFFFFFFFFFFFFull
enum zfcp_dbf_pseudo_erp_act_type {
ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD = 0xff,
ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL = 0xfe,
};
/** /**
* struct zfcp_dbf_rec_trigger - trace record for triggered recovery action * struct zfcp_dbf_rec_trigger - trace record for triggered recovery action
* @ready: number of ready recovery actions * @ready: number of ready recovery actions
@ -110,6 +115,7 @@ struct zfcp_dbf_san {
u32 d_id; u32 d_id;
#define ZFCP_DBF_SAN_MAX_PAYLOAD (FC_CT_HDR_LEN + 32) #define ZFCP_DBF_SAN_MAX_PAYLOAD (FC_CT_HDR_LEN + 32)
char payload[ZFCP_DBF_SAN_MAX_PAYLOAD]; char payload[ZFCP_DBF_SAN_MAX_PAYLOAD];
u16 pl_len;
} __packed; } __packed;
/** /**
@ -126,6 +132,8 @@ struct zfcp_dbf_hba_res {
u8 prot_status_qual[FSF_PROT_STATUS_QUAL_SIZE]; u8 prot_status_qual[FSF_PROT_STATUS_QUAL_SIZE];
u32 fsf_status; u32 fsf_status;
u8 fsf_status_qual[FSF_STATUS_QUALIFIER_SIZE]; u8 fsf_status_qual[FSF_STATUS_QUALIFIER_SIZE];
u32 port_handle;
u32 lun_handle;
} __packed; } __packed;
/** /**
@ -279,7 +287,7 @@ static inline
void zfcp_dbf_hba_fsf_resp(char *tag, int level, struct zfcp_fsf_req *req) void zfcp_dbf_hba_fsf_resp(char *tag, int level, struct zfcp_fsf_req *req)
{ {
if (debug_level_enabled(req->adapter->dbf->hba, level)) if (debug_level_enabled(req->adapter->dbf->hba, level))
zfcp_dbf_hba_fsf_res(tag, req); zfcp_dbf_hba_fsf_res(tag, level, req);
} }
/** /**
@ -318,7 +326,7 @@ void _zfcp_dbf_scsi(char *tag, int level, struct scsi_cmnd *scmd,
scmd->device->host->hostdata[0]; scmd->device->host->hostdata[0];
if (debug_level_enabled(adapter->dbf->scsi, level)) if (debug_level_enabled(adapter->dbf->scsi, level))
zfcp_dbf_scsi(tag, scmd, req); zfcp_dbf_scsi(tag, level, scmd, req);
} }
/** /**


@ -3,7 +3,7 @@
* *
* Error Recovery Procedures (ERP). * Error Recovery Procedures (ERP).
* *
* Copyright IBM Corp. 2002, 2010 * Copyright IBM Corp. 2002, 2015
*/ */
#define KMSG_COMPONENT "zfcp" #define KMSG_COMPONENT "zfcp"
@ -1217,8 +1217,14 @@ static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
break; break;
case ZFCP_ERP_ACTION_REOPEN_PORT: case ZFCP_ERP_ACTION_REOPEN_PORT:
if (result == ZFCP_ERP_SUCCEEDED) /* This switch case might also happen after a forced reopen
zfcp_scsi_schedule_rport_register(port); * was successfully done and thus overwritten with a new
* non-forced reopen at `ersfs_2'. In this case, we must not
* do the clean-up of the non-forced version.
*/
if (act->step != ZFCP_ERP_STEP_UNINITIALIZED)
if (result == ZFCP_ERP_SUCCEEDED)
zfcp_scsi_schedule_rport_register(port);
/* fall through */ /* fall through */
case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
put_device(&port->dev); put_device(&port->dev);


@ -3,7 +3,7 @@
* *
* External function declarations. * External function declarations.
* *
* Copyright IBM Corp. 2002, 2010 * Copyright IBM Corp. 2002, 2015
*/ */
#ifndef ZFCP_EXT_H #ifndef ZFCP_EXT_H
@ -35,8 +35,9 @@ extern void zfcp_dbf_adapter_unregister(struct zfcp_adapter *);
extern void zfcp_dbf_rec_trig(char *, struct zfcp_adapter *, extern void zfcp_dbf_rec_trig(char *, struct zfcp_adapter *,
struct zfcp_port *, struct scsi_device *, u8, u8); struct zfcp_port *, struct scsi_device *, u8, u8);
extern void zfcp_dbf_rec_run(char *, struct zfcp_erp_action *); extern void zfcp_dbf_rec_run(char *, struct zfcp_erp_action *);
extern void zfcp_dbf_rec_run_wka(char *, struct zfcp_fc_wka_port *, u64);
extern void zfcp_dbf_hba_fsf_uss(char *, struct zfcp_fsf_req *); extern void zfcp_dbf_hba_fsf_uss(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_fsf_res(char *, struct zfcp_fsf_req *); extern void zfcp_dbf_hba_fsf_res(char *, int, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_bit_err(char *, struct zfcp_fsf_req *); extern void zfcp_dbf_hba_bit_err(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_berr(struct zfcp_dbf *, struct zfcp_fsf_req *); extern void zfcp_dbf_hba_berr(struct zfcp_dbf *, struct zfcp_fsf_req *);
extern void zfcp_dbf_hba_def_err(struct zfcp_adapter *, u64, u16, void **); extern void zfcp_dbf_hba_def_err(struct zfcp_adapter *, u64, u16, void **);
@ -44,7 +45,8 @@ extern void zfcp_dbf_hba_basic(char *, struct zfcp_adapter *);
extern void zfcp_dbf_san_req(char *, struct zfcp_fsf_req *, u32); extern void zfcp_dbf_san_req(char *, struct zfcp_fsf_req *, u32);
extern void zfcp_dbf_san_res(char *, struct zfcp_fsf_req *); extern void zfcp_dbf_san_res(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_san_in_els(char *, struct zfcp_fsf_req *); extern void zfcp_dbf_san_in_els(char *, struct zfcp_fsf_req *);
extern void zfcp_dbf_scsi(char *, struct scsi_cmnd *, struct zfcp_fsf_req *); extern void zfcp_dbf_scsi(char *, int, struct scsi_cmnd *,
struct zfcp_fsf_req *);
/* zfcp_erp.c */ /* zfcp_erp.c */
extern void zfcp_erp_set_adapter_status(struct zfcp_adapter *, u32); extern void zfcp_erp_set_adapter_status(struct zfcp_adapter *, u32);


@ -3,7 +3,7 @@
* *
* Implementation of FSF commands. * Implementation of FSF commands.
* *
* Copyright IBM Corp. 2002, 2013 * Copyright IBM Corp. 2002, 2015
*/ */
#define KMSG_COMPONENT "zfcp" #define KMSG_COMPONENT "zfcp"
@ -508,7 +508,10 @@ static int zfcp_fsf_exchange_config_evaluate(struct zfcp_fsf_req *req)
fc_host_port_type(shost) = FC_PORTTYPE_PTP; fc_host_port_type(shost) = FC_PORTTYPE_PTP;
break; break;
case FSF_TOPO_FABRIC: case FSF_TOPO_FABRIC:
fc_host_port_type(shost) = FC_PORTTYPE_NPORT; if (bottom->connection_features & FSF_FEATURE_NPIV_MODE)
fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
else
fc_host_port_type(shost) = FC_PORTTYPE_NPORT;
break; break;
case FSF_TOPO_AL: case FSF_TOPO_AL:
fc_host_port_type(shost) = FC_PORTTYPE_NLPORT; fc_host_port_type(shost) = FC_PORTTYPE_NLPORT;
@ -613,7 +616,6 @@ static void zfcp_fsf_exchange_port_evaluate(struct zfcp_fsf_req *req)
if (adapter->connection_features & FSF_FEATURE_NPIV_MODE) { if (adapter->connection_features & FSF_FEATURE_NPIV_MODE) {
fc_host_permanent_port_name(shost) = bottom->wwpn; fc_host_permanent_port_name(shost) = bottom->wwpn;
fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
} else } else
fc_host_permanent_port_name(shost) = fc_host_port_name(shost); fc_host_permanent_port_name(shost) = fc_host_port_name(shost);
fc_host_maxframe_size(shost) = bottom->maximum_frame_size; fc_host_maxframe_size(shost) = bottom->maximum_frame_size;
@ -982,8 +984,12 @@ static int zfcp_fsf_setup_ct_els_sbals(struct zfcp_fsf_req *req,
if (zfcp_adapter_multi_buffer_active(adapter)) { if (zfcp_adapter_multi_buffer_active(adapter)) {
if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_req)) if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_req))
return -EIO; return -EIO;
qtcb->bottom.support.req_buf_length =
zfcp_qdio_real_bytes(sg_req);
if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_resp)) if (zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, sg_resp))
return -EIO; return -EIO;
qtcb->bottom.support.resp_buf_length =
zfcp_qdio_real_bytes(sg_resp);
zfcp_qdio_set_data_div(qdio, &req->qdio_req, zfcp_qdio_set_data_div(qdio, &req->qdio_req,
zfcp_qdio_sbale_count(sg_req)); zfcp_qdio_sbale_count(sg_req));
@ -1073,6 +1079,7 @@ int zfcp_fsf_send_ct(struct zfcp_fc_wka_port *wka_port,
req->handler = zfcp_fsf_send_ct_handler; req->handler = zfcp_fsf_send_ct_handler;
req->qtcb->header.port_handle = wka_port->handle; req->qtcb->header.port_handle = wka_port->handle;
ct->d_id = wka_port->d_id;
req->data = ct; req->data = ct;
zfcp_dbf_san_req("fssct_1", req, wka_port->d_id); zfcp_dbf_san_req("fssct_1", req, wka_port->d_id);
@ -1169,6 +1176,7 @@ int zfcp_fsf_send_els(struct zfcp_adapter *adapter, u32 d_id,
hton24(req->qtcb->bottom.support.d_id, d_id); hton24(req->qtcb->bottom.support.d_id, d_id);
req->handler = zfcp_fsf_send_els_handler; req->handler = zfcp_fsf_send_els_handler;
els->d_id = d_id;
req->data = els; req->data = els;
zfcp_dbf_san_req("fssels1", req, d_id); zfcp_dbf_san_req("fssels1", req, d_id);
@ -1575,7 +1583,7 @@ out:
int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port) int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
{ {
struct zfcp_qdio *qdio = wka_port->adapter->qdio; struct zfcp_qdio *qdio = wka_port->adapter->qdio;
struct zfcp_fsf_req *req; struct zfcp_fsf_req *req = NULL;
int retval = -EIO; int retval = -EIO;
spin_lock_irq(&qdio->req_q_lock); spin_lock_irq(&qdio->req_q_lock);
@ -1604,6 +1612,8 @@ int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port)
zfcp_fsf_req_free(req); zfcp_fsf_req_free(req);
out: out:
spin_unlock_irq(&qdio->req_q_lock); spin_unlock_irq(&qdio->req_q_lock);
if (req && !IS_ERR(req))
zfcp_dbf_rec_run_wka("fsowp_1", wka_port, req->req_id);
return retval; return retval;
} }
@ -1628,7 +1638,7 @@ static void zfcp_fsf_close_wka_port_handler(struct zfcp_fsf_req *req)
int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port) int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
{ {
struct zfcp_qdio *qdio = wka_port->adapter->qdio; struct zfcp_qdio *qdio = wka_port->adapter->qdio;
struct zfcp_fsf_req *req; struct zfcp_fsf_req *req = NULL;
int retval = -EIO; int retval = -EIO;
spin_lock_irq(&qdio->req_q_lock); spin_lock_irq(&qdio->req_q_lock);
@ -1657,6 +1667,8 @@ int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port)
zfcp_fsf_req_free(req); zfcp_fsf_req_free(req);
out: out:
spin_unlock_irq(&qdio->req_q_lock); spin_unlock_irq(&qdio->req_q_lock);
if (req && !IS_ERR(req))
zfcp_dbf_rec_run_wka("fscwp_1", wka_port, req->req_id);
return retval; return retval;
} }


@ -3,7 +3,7 @@
* *
* Interface to the FSF support functions. * Interface to the FSF support functions.
* *
* Copyright IBM Corp. 2002, 2010 * Copyright IBM Corp. 2002, 2015
*/ */
#ifndef FSF_H #ifndef FSF_H
@ -436,6 +436,7 @@ struct zfcp_blk_drv_data {
* @handler_data: data passed to handler function * @handler_data: data passed to handler function
* @port: Optional pointer to port for zfcp internal ELS (only test link ADISC) * @port: Optional pointer to port for zfcp internal ELS (only test link ADISC)
* @status: used to pass error status to calling function * @status: used to pass error status to calling function
* @d_id: Destination ID of either open WKA port for CT or of D_ID for ELS
*/ */
struct zfcp_fsf_ct_els { struct zfcp_fsf_ct_els {
struct scatterlist *req; struct scatterlist *req;
@ -444,6 +445,7 @@ struct zfcp_fsf_ct_els {
void *handler_data; void *handler_data;
struct zfcp_port *port; struct zfcp_port *port;
int status; int status;
u32 d_id;
}; };
#endif /* FSF_H */ #endif /* FSF_H */


@ -3,7 +3,7 @@
* *
* Interface to Linux SCSI midlayer. * Interface to Linux SCSI midlayer.
* *
* Copyright IBM Corp. 2002, 2013 * Copyright IBM Corp. 2002, 2015
*/ */
#define KMSG_COMPONENT "zfcp" #define KMSG_COMPONENT "zfcp"
@ -556,6 +556,9 @@ static void zfcp_scsi_rport_register(struct zfcp_port *port)
ids.port_id = port->d_id; ids.port_id = port->d_id;
ids.roles = FC_RPORT_ROLE_FCP_TARGET; ids.roles = FC_RPORT_ROLE_FCP_TARGET;
zfcp_dbf_rec_trig("scpaddy", port->adapter, port, NULL,
ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD,
ZFCP_PSEUDO_ERP_ACTION_RPORT_ADD);
rport = fc_remote_port_add(port->adapter->scsi_host, 0, &ids); rport = fc_remote_port_add(port->adapter->scsi_host, 0, &ids);
if (!rport) { if (!rport) {
dev_err(&port->adapter->ccw_device->dev, dev_err(&port->adapter->ccw_device->dev,
@ -577,6 +580,9 @@ static void zfcp_scsi_rport_block(struct zfcp_port *port)
struct fc_rport *rport = port->rport; struct fc_rport *rport = port->rport;
if (rport) { if (rport) {
zfcp_dbf_rec_trig("scpdely", port->adapter, port, NULL,
ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL,
ZFCP_PSEUDO_ERP_ACTION_RPORT_DEL);
fc_remote_port_delete(rport); fc_remote_port_delete(rport);
port->rport = NULL; port->rport = NULL;
} }


@ -396,18 +396,6 @@ config SCSI_3W_SAS
Please read the comments at the top of Please read the comments at the top of
<file:drivers/scsi/3w-sas.c>. <file:drivers/scsi/3w-sas.c>.
config SCSI_7000FASST
tristate "7000FASST SCSI support"
depends on ISA && SCSI && ISA_DMA_API
select CHECK_SIGNATURE
help
This driver supports the Western Digital 7000 SCSI host adapter
family. Some information is in the source:
<file:drivers/scsi/wd7000.c>.
To compile this driver as a module, choose M here: the
module will be called wd7000.
config SCSI_ACARD config SCSI_ACARD
tristate "ACARD SCSI support" tristate "ACARD SCSI support"
depends on PCI && SCSI depends on PCI && SCSI
@ -512,18 +500,6 @@ config SCSI_ADVANSYS
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called advansys. module will be called advansys.
config SCSI_IN2000
tristate "Always IN2000 SCSI support"
depends on ISA && SCSI
help
This is support for an ISA bus SCSI host adapter. You'll find more
information in <file:Documentation/scsi/in2000.txt>. If it doesn't work
out of the box, you may have to change the jumpers for IRQ or
address selection.
To compile this driver as a module, choose M here: the
module will be called in2000.
config SCSI_ARCMSR config SCSI_ARCMSR
tristate "ARECA (ARC11xx/12xx/13xx/16xx) SATA/SAS RAID Host Adapter" tristate "ARECA (ARC11xx/12xx/13xx/16xx) SATA/SAS RAID Host Adapter"
depends on PCI && SCSI depends on PCI && SCSI
@ -540,6 +516,7 @@ config SCSI_ARCMSR
source "drivers/scsi/esas2r/Kconfig" source "drivers/scsi/esas2r/Kconfig"
source "drivers/scsi/megaraid/Kconfig.megaraid" source "drivers/scsi/megaraid/Kconfig.megaraid"
source "drivers/scsi/mpt3sas/Kconfig" source "drivers/scsi/mpt3sas/Kconfig"
source "drivers/scsi/smartpqi/Kconfig"
source "drivers/scsi/ufs/Kconfig" source "drivers/scsi/ufs/Kconfig"
config SCSI_HPTIOP config SCSI_HPTIOP
@ -660,20 +637,6 @@ config SCSI_DMX3191D
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called dmx3191d. module will be called dmx3191d.
config SCSI_DTC3280
tristate "DTC3180/3280 SCSI support"
depends on ISA && SCSI
select SCSI_SPI_ATTRS
select CHECK_SIGNATURE
help
This is support for DTC 3180/3280 SCSI Host Adapters. Please read
the SCSI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>, and the file
<file:Documentation/scsi/dtc3x80.txt>.
To compile this driver as a module, choose M here: the
module will be called dtc.
config SCSI_EATA config SCSI_EATA
tristate "EATA ISA/EISA/PCI (DPT and generic EATA/DMA-compliant boards) support" tristate "EATA ISA/EISA/PCI (DPT and generic EATA/DMA-compliant boards) support"
depends on (ISA || EISA || PCI) && SCSI && ISA_DMA_API depends on (ISA || EISA || PCI) && SCSI && ISA_DMA_API
@ -1248,20 +1211,6 @@ config SCSI_NCR53C8XX_NO_DISCONNECT
not allow targets to disconnect is not reasonable if there is more not allow targets to disconnect is not reasonable if there is more
than 1 device on a SCSI bus. The normal answer therefore is N. than 1 device on a SCSI bus. The normal answer therefore is N.
config SCSI_PAS16
tristate "PAS16 SCSI support"
depends on ISA && SCSI
select SCSI_SPI_ATTRS
---help---
This is support for a SCSI host adapter. It is explained in section
3.10 of the SCSI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>. If it doesn't work out
of the box, you may have to change some settings in
<file:drivers/scsi/pas16.h>.
To compile this driver as a module, choose M here: the
module will be called pas16.
config SCSI_QLOGIC_FAS config SCSI_QLOGIC_FAS
tristate "Qlogic FAS SCSI support" tristate "Qlogic FAS SCSI support"
depends on ISA && SCSI depends on ISA && SCSI
@ -1382,89 +1331,6 @@ config SCSI_AM53C974
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called am53c974. module will be called am53c974.
config SCSI_T128
tristate "Trantor T128/T128F/T228 SCSI support"
depends on ISA && SCSI
select SCSI_SPI_ATTRS
select CHECK_SIGNATURE
---help---
This is support for a SCSI host adapter. It is explained in section
3.11 of the SCSI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>. If it doesn't work out
of the box, you may have to change some settings in
<file:drivers/scsi/t128.h>. Note that Trantor was purchased by
Adaptec, and some former Trantor products are being sold under the
Adaptec name.
To compile this driver as a module, choose M here: the
module will be called t128.
config SCSI_U14_34F
tristate "UltraStor 14F/34F support"
depends on ISA && SCSI && ISA_DMA_API
---help---
This is support for the UltraStor 14F and 34F SCSI-2 host adapters.
The source at <file:drivers/scsi/u14-34f.c> contains some
information about this hardware. If the driver doesn't work out of
the box, you may have to change some settings in
<file: drivers/scsi/u14-34f.c>. Read the SCSI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>. Note that there is also
another driver for the same hardware: "UltraStor SCSI support",
below. You should say Y to both only if you want 24F support as
well.
To compile this driver as a module, choose M here: the
module will be called u14-34f.
config SCSI_U14_34F_TAGGED_QUEUE
bool "enable tagged command queueing"
depends on SCSI_U14_34F
help
This is a feature of SCSI-2 which improves performance: the host
adapter can send several SCSI commands to a device's queue even if
previous commands haven't finished yet.
This is equivalent to the "u14-34f=tc:y" boot option.
config SCSI_U14_34F_LINKED_COMMANDS
bool "enable elevator sorting"
depends on SCSI_U14_34F
help
This option enables elevator sorting for all probed SCSI disks and
CD-ROMs. It definitely reduces the average seek distance when doing
random seeks, but this does not necessarily result in a noticeable
performance improvement: your mileage may vary...
This is equivalent to the "u14-34f=lc:y" boot option.
config SCSI_U14_34F_MAX_TAGS
int "maximum number of queued commands"
depends on SCSI_U14_34F
default "8"
help
This specifies how many SCSI commands can be maximally queued for
each probed SCSI device. You should reduce the default value of 8
only if you have disks with buggy or limited tagged command support.
Minimum is 2 and maximum is 14. This value is also the window size
used by the elevator sorting option above. The effective value used
by the driver for each probed SCSI device is reported at boot time.
This is equivalent to the "u14-34f=mq:8" boot option.
config SCSI_ULTRASTOR
tristate "UltraStor SCSI support"
depends on X86 && ISA && SCSI && ISA_DMA_API
---help---
This is support for the UltraStor 14F, 24F and 34F SCSI-2 host
adapter family. This driver is explained in section 3.12 of the
SCSI-HOWTO, available from
<http://www.tldp.org/docs.html#howto>. If it doesn't work out
of the box, you may have to change some settings in
<file:drivers/scsi/ultrastor.h>.
Note that there is also another driver for the same hardware:
"UltraStor 14F/34F support", above.
To compile this driver as a module, choose M here: the
module will be called ultrastor.
config SCSI_NSP32 config SCSI_NSP32
tristate "Workbit NinjaSCSI-32Bi/UDE support" tristate "Workbit NinjaSCSI-32Bi/UDE support"
depends on PCI && SCSI && !64BIT depends on PCI && SCSI && !64BIT


@ -61,9 +61,7 @@ obj-$(CONFIG_SCSI_SIM710) += 53c700.o sim710.o
obj-$(CONFIG_SCSI_ADVANSYS) += advansys.o obj-$(CONFIG_SCSI_ADVANSYS) += advansys.o
obj-$(CONFIG_SCSI_BUSLOGIC) += BusLogic.o obj-$(CONFIG_SCSI_BUSLOGIC) += BusLogic.o
obj-$(CONFIG_SCSI_DPT_I2O) += dpt_i2o.o obj-$(CONFIG_SCSI_DPT_I2O) += dpt_i2o.o
obj-$(CONFIG_SCSI_U14_34F) += u14-34f.o
obj-$(CONFIG_SCSI_ARCMSR) += arcmsr/ obj-$(CONFIG_SCSI_ARCMSR) += arcmsr/
obj-$(CONFIG_SCSI_ULTRASTOR) += ultrastor.o
obj-$(CONFIG_SCSI_AHA152X) += aha152x.o obj-$(CONFIG_SCSI_AHA152X) += aha152x.o
obj-$(CONFIG_SCSI_AHA1542) += aha1542.o obj-$(CONFIG_SCSI_AHA1542) += aha1542.o
obj-$(CONFIG_SCSI_AHA1740) += aha1740.o obj-$(CONFIG_SCSI_AHA1740) += aha1740.o
@ -75,7 +73,6 @@ obj-$(CONFIG_SCSI_PM8001) += pm8001/
obj-$(CONFIG_SCSI_ISCI) += isci/ obj-$(CONFIG_SCSI_ISCI) += isci/
obj-$(CONFIG_SCSI_IPS) += ips.o obj-$(CONFIG_SCSI_IPS) += ips.o
obj-$(CONFIG_SCSI_FUTURE_DOMAIN)+= fdomain.o obj-$(CONFIG_SCSI_FUTURE_DOMAIN)+= fdomain.o
obj-$(CONFIG_SCSI_IN2000) += in2000.o
obj-$(CONFIG_SCSI_GENERIC_NCR5380) += g_NCR5380.o obj-$(CONFIG_SCSI_GENERIC_NCR5380) += g_NCR5380.o
obj-$(CONFIG_SCSI_GENERIC_NCR5380_MMIO) += g_NCR5380_mmio.o obj-$(CONFIG_SCSI_GENERIC_NCR5380_MMIO) += g_NCR5380_mmio.o
obj-$(CONFIG_SCSI_NCR53C406A) += NCR53c406a.o obj-$(CONFIG_SCSI_NCR53C406A) += NCR53c406a.o
@ -90,15 +87,12 @@ obj-$(CONFIG_SCSI_QLA_ISCSI) += libiscsi.o qla4xxx/
obj-$(CONFIG_SCSI_LPFC) += lpfc/ obj-$(CONFIG_SCSI_LPFC) += lpfc/
obj-$(CONFIG_SCSI_BFA_FC) += bfa/ obj-$(CONFIG_SCSI_BFA_FC) += bfa/
obj-$(CONFIG_SCSI_CHELSIO_FCOE) += csiostor/ obj-$(CONFIG_SCSI_CHELSIO_FCOE) += csiostor/
obj-$(CONFIG_SCSI_PAS16) += pas16.o
obj-$(CONFIG_SCSI_T128) += t128.o
obj-$(CONFIG_SCSI_DMX3191D) += dmx3191d.o obj-$(CONFIG_SCSI_DMX3191D) += dmx3191d.o
obj-$(CONFIG_SCSI_HPSA) += hpsa.o obj-$(CONFIG_SCSI_HPSA) += hpsa.o
obj-$(CONFIG_SCSI_DTC3280) += dtc.o obj-$(CONFIG_SCSI_SMARTPQI) += smartpqi/
obj-$(CONFIG_SCSI_SYM53C8XX_2) += sym53c8xx_2/ obj-$(CONFIG_SCSI_SYM53C8XX_2) += sym53c8xx_2/
obj-$(CONFIG_SCSI_ZALON) += zalon7xx.o obj-$(CONFIG_SCSI_ZALON) += zalon7xx.o
obj-$(CONFIG_SCSI_EATA_PIO) += eata_pio.o obj-$(CONFIG_SCSI_EATA_PIO) += eata_pio.o
obj-$(CONFIG_SCSI_7000FASST) += wd7000.o
obj-$(CONFIG_SCSI_EATA) += eata.o obj-$(CONFIG_SCSI_EATA) += eata.o
obj-$(CONFIG_SCSI_DC395x) += dc395x.o obj-$(CONFIG_SCSI_DC395x) += dc395x.o
obj-$(CONFIG_SCSI_AM53C974) += esp_scsi.o am53c974.o obj-$(CONFIG_SCSI_AM53C974) += esp_scsi.o am53c974.o


@ -230,13 +230,6 @@ static int NCR5380_poll_politely2(struct Scsi_Host *instance,
return -ETIMEDOUT; return -ETIMEDOUT;
} }
static inline int NCR5380_poll_politely(struct Scsi_Host *instance,
int reg, int bit, int val, int wait)
{
return NCR5380_poll_politely2(instance, reg, bit, val,
reg, bit, val, wait);
}
#if NDEBUG #if NDEBUG
static struct { static struct {
unsigned char mask; unsigned char mask;
@ -1854,11 +1847,11 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance)
/* XXX - need to source or sink data here, as appropriate */ /* XXX - need to source or sink data here, as appropriate */
} }
} else { } else {
/* Break up transfer into 3 ms chunks, /* Transfer a small chunk so that the
* presuming 6 accesses per handshake. * irq mode lock is not held too long.
*/ */
transfersize = min((unsigned long)cmd->SCp.this_residual, transfersize = min(cmd->SCp.this_residual,
hostdata->accesses_per_ms / 2); NCR5380_PIO_CHUNK_SIZE);
len = transfersize; len = transfersize;
NCR5380_transfer_pio(instance, &phase, &len, NCR5380_transfer_pio(instance, &phase, &len,
(unsigned char **)&cmd->SCp.ptr); (unsigned char **)&cmd->SCp.ptr);
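The change above caps each PIO burst at a fixed NCR5380_PIO_CHUNK_SIZE instead of estimating a 3 ms budget from accesses_per_ms, so the irq-mode lock is never held across a large transfer. A rough standalone sketch of the same chunked-copy pattern (the buffer, chunk size and transfer helper below are illustrative stand-ins, not the driver's code):

```c
#include <stdio.h>
#include <string.h>

#define PIO_CHUNK_SIZE 256   /* mirrors NCR5380_PIO_CHUNK_SIZE from the hunk above */

/* Stand-in for NCR5380_transfer_pio(): pretend to move 'len' bytes and
 * report how many were actually transferred. */
static size_t transfer_chunk(const unsigned char *src, size_t len)
{
	(void)src;           /* a real driver would handshake each byte here */
	return len;
}

int main(void)
{
	unsigned char buf[1000];
	const unsigned char *ptr = buf;
	size_t residual = sizeof(buf);

	memset(buf, 0xa5, sizeof(buf));

	/* Move data in bounded chunks so no single pass runs too long,
	 * which is why the driver caps each PIO burst. */
	while (residual > 0) {
		size_t chunk = residual < PIO_CHUNK_SIZE ? residual : PIO_CHUNK_SIZE;
		size_t done = transfer_chunk(ptr, chunk);

		ptr += done;
		residual -= done;
		printf("transferred %zu bytes, %zu remaining\n", done, residual);
	}
	return 0;
}
```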


@ -250,6 +250,8 @@ struct NCR5380_cmd {
#define NCR5380_CMD_SIZE (sizeof(struct NCR5380_cmd)) #define NCR5380_CMD_SIZE (sizeof(struct NCR5380_cmd))
#define NCR5380_PIO_CHUNK_SIZE 256
static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr) static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr)
{ {
return ((struct scsi_cmnd *)ncmd_ptr) - 1; return ((struct scsi_cmnd *)ncmd_ptr) - 1;
@ -292,8 +294,14 @@ static void NCR5380_reselect(struct Scsi_Host *instance);
static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *, struct scsi_cmnd *); static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *, struct scsi_cmnd *);
static int NCR5380_transfer_dma(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data); static int NCR5380_transfer_dma(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data);
static int NCR5380_transfer_pio(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data); static int NCR5380_transfer_pio(struct Scsi_Host *instance, unsigned char *phase, int *count, unsigned char **data);
static int NCR5380_poll_politely(struct Scsi_Host *, int, int, int, int);
static int NCR5380_poll_politely2(struct Scsi_Host *, int, int, int, int, int, int, int); static int NCR5380_poll_politely2(struct Scsi_Host *, int, int, int, int, int, int, int);
static inline int NCR5380_poll_politely(struct Scsi_Host *instance,
int reg, int bit, int val, int wait)
{
return NCR5380_poll_politely2(instance, reg, bit, val,
reg, bit, val, wait);
}
#endif /* __KERNEL__ */ #endif /* __KERNEL__ */
#endif /* NCR5380_H */ #endif /* NCR5380_H */


@ -613,7 +613,7 @@ static int aac_src_restart_adapter(struct aac_dev *dev, int bled)
* @dev: Adapter * @dev: Adapter
* @comm: communications method * @comm: communications method
*/ */
int aac_src_select_comm(struct aac_dev *dev, int comm) static int aac_src_select_comm(struct aac_dev *dev, int comm)
{ {
switch (comm) { switch (comm) {
case AAC_COMM_MESSAGE: case AAC_COMM_MESSAGE:


@ -632,7 +632,7 @@ int asd_init_hw(struct asd_ha_struct *asd_ha)
pci_name(asd_ha->pcidev)); pci_name(asd_ha->pcidev));
return err; return err;
} }
pci_write_config_dword(asd_ha->pcidev, PCIC_HSTPCIX_CNTRL, err = pci_write_config_dword(asd_ha->pcidev, PCIC_HSTPCIX_CNTRL,
v | SC_TMR_DIS); v | SC_TMR_DIS);
if (err) { if (err) {
asd_printk("couldn't disable split completion timer of %s\n", asd_printk("couldn't disable split completion timer of %s\n",


@ -2388,15 +2388,23 @@ static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb,
} }
case ARCMSR_MESSAGE_WRITE_WQBUFFER: { case ARCMSR_MESSAGE_WRITE_WQBUFFER: {
unsigned char *ver_addr; unsigned char *ver_addr;
int32_t user_len, cnt2end; uint32_t user_len;
int32_t cnt2end;
uint8_t *pQbuffer, *ptmpuserbuffer; uint8_t *pQbuffer, *ptmpuserbuffer;
user_len = pcmdmessagefld->cmdmessage.Length;
if (user_len > ARCMSR_API_DATA_BUFLEN) {
retvalue = ARCMSR_MESSAGE_FAIL;
goto message_out;
}
ver_addr = kmalloc(ARCMSR_API_DATA_BUFLEN, GFP_ATOMIC); ver_addr = kmalloc(ARCMSR_API_DATA_BUFLEN, GFP_ATOMIC);
if (!ver_addr) { if (!ver_addr) {
retvalue = ARCMSR_MESSAGE_FAIL; retvalue = ARCMSR_MESSAGE_FAIL;
goto message_out; goto message_out;
} }
ptmpuserbuffer = ver_addr; ptmpuserbuffer = ver_addr;
user_len = pcmdmessagefld->cmdmessage.Length;
memcpy(ptmpuserbuffer, memcpy(ptmpuserbuffer,
pcmdmessagefld->messagedatabuffer, user_len); pcmdmessagefld->messagedatabuffer, user_len);
spin_lock_irqsave(&acb->wqbuffer_lock, flags); spin_lock_irqsave(&acb->wqbuffer_lock, flags);
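The reworked hunk above makes user_len unsigned and checks it against ARCMSR_API_DATA_BUFLEN before the kmalloc and memcpy, closing the overflow that the old ordering allowed. A minimal standalone sketch of that validate-then-copy pattern (the bound, message layout and return codes here are invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define API_DATA_BUFLEN 1032   /* illustrative bound standing in for ARCMSR_API_DATA_BUFLEN */

struct message {               /* invented stand-in for the ioctl payload */
	uint32_t length;
	uint8_t data[4096];
};

/* Copy msg->data into a scratch buffer, rejecting oversized requests
 * before anything is allocated or copied. */
static int handle_write(const struct message *msg)
{
	uint32_t user_len = msg->length;
	uint8_t *scratch;

	if (user_len > API_DATA_BUFLEN)
		return -1;                        /* would overflow the scratch buffer */

	scratch = malloc(API_DATA_BUFLEN);
	if (!scratch)
		return -1;

	memcpy(scratch, msg->data, user_len);     /* now provably in bounds */
	/* ... hand 'scratch' to the adapter's queue here ... */
	free(scratch);
	return 0;
}

int main(void)
{
	static struct message ok = { .length = 100 };
	static struct message bad = { .length = 4000 };

	printf("ok:  %d\n", handle_write(&ok));   /* 0  */
	printf("bad: %d\n", handle_write(&bad));  /* -1 */
	return 0;
}
```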


@ -1,5 +1,5 @@
/** /**
* Copyright (C) 2005 - 2015 Emulex * Copyright (C) 2005 - 2016 Broadcom
* All rights reserved. * All rights reserved.
* *
* This program is free software; you can redistribute it and/or * This program is free software; you can redistribute it and/or
@ -8,7 +8,7 @@
* Public License is included in this distribution in the file called COPYING. * Public License is included in this distribution in the file called COPYING.
* *
* Contact Information: * Contact Information:
* linux-drivers@avagotech.com * linux-drivers@broadcom.com
* *
* Emulex * Emulex
* 3333 Susan Street * 3333 Susan Street
@ -89,7 +89,7 @@ struct be_aic_obj { /* Adaptive interrupt coalescing (AIC) info */
u32 max_eqd; /* in usecs */ u32 max_eqd; /* in usecs */
u32 prev_eqd; /* in usecs */ u32 prev_eqd; /* in usecs */
u32 et_eqd; /* configured val when aic is off */ u32 et_eqd; /* configured val when aic is off */
ulong jiffs; ulong jiffies;
u64 eq_prev; /* Used to calculate eqe */ u64 eq_prev; /* Used to calculate eqe */
}; };
@ -100,7 +100,7 @@ struct be_eq_obj {
struct be_queue_info q; struct be_queue_info q;
struct beiscsi_hba *phba; struct beiscsi_hba *phba;
struct be_queue_info *cq; struct be_queue_info *cq;
struct work_struct work_cqs; /* Work Item */ struct work_struct mcc_work; /* Work Item */
struct irq_poll iopoll; struct irq_poll iopoll;
}; };
@ -111,8 +111,11 @@ struct be_mcc_obj {
struct beiscsi_mcc_tag_state { struct beiscsi_mcc_tag_state {
unsigned long tag_state; unsigned long tag_state;
#define MCC_TAG_STATE_RUNNING 1 #define MCC_TAG_STATE_RUNNING 0
#define MCC_TAG_STATE_TIMEOUT 2 #define MCC_TAG_STATE_TIMEOUT 1
#define MCC_TAG_STATE_ASYNC 2
#define MCC_TAG_STATE_IGNORE 3
void (*cbfn)(struct beiscsi_hba *, unsigned int);
struct be_dma_mem tag_mem_state; struct be_dma_mem tag_mem_state;
}; };

File diff suppressed because it is too large.


@ -1,5 +1,5 @@
/** /**
* Copyright (C) 2005 - 2015 Emulex * Copyright (C) 2005 - 2016 Broadcom
* All rights reserved. * All rights reserved.
* *
* This program is free software; you can redistribute it and/or * This program is free software; you can redistribute it and/or
@ -8,7 +8,7 @@
* Public License is included in this distribution in the file called COPYING. * Public License is included in this distribution in the file called COPYING.
* *
* Contact Information: * Contact Information:
* linux-drivers@avagotech.com * linux-drivers@broadcom.com
* *
* Emulex * Emulex
* 3333 Susan Street * 3333 Susan Street
@ -57,6 +57,7 @@ struct be_mcc_wrb {
#define MCC_STATUS_ILLEGAL_REQUEST 0x2 #define MCC_STATUS_ILLEGAL_REQUEST 0x2
#define MCC_STATUS_ILLEGAL_FIELD 0x3 #define MCC_STATUS_ILLEGAL_FIELD 0x3
#define MCC_STATUS_INSUFFICIENT_BUFFER 0x4 #define MCC_STATUS_INSUFFICIENT_BUFFER 0x4
#define MCC_STATUS_INVALID_LENGTH 0x74
#define CQE_STATUS_COMPL_MASK 0xFFFF #define CQE_STATUS_COMPL_MASK 0xFFFF
#define CQE_STATUS_COMPL_SHIFT 0 /* bits 0 - 15 */ #define CQE_STATUS_COMPL_SHIFT 0 /* bits 0 - 15 */
@ -97,11 +98,23 @@ struct be_mcc_compl {
#define MPU_MAILBOX_DB_RDY_MASK 0x1 /* bit 0 */ #define MPU_MAILBOX_DB_RDY_MASK 0x1 /* bit 0 */
#define MPU_MAILBOX_DB_HI_MASK 0x2 /* bit 1 */ #define MPU_MAILBOX_DB_HI_MASK 0x2 /* bit 1 */
/********** MPU semphore ******************/ /********** MPU semphore: used for SH & BE ******************/
#define MPU_EP_SEMAPHORE_OFFSET 0xac #define SLIPORT_SOFTRESET_OFFSET 0x5c /* CSR BAR offset */
#define EP_SEMAPHORE_POST_STAGE_MASK 0x0000FFFF #define SLIPORT_SEMAPHORE_OFFSET_BEx 0xac /* CSR BAR offset */
#define EP_SEMAPHORE_POST_ERR_MASK 0x1 #define SLIPORT_SEMAPHORE_OFFSET_SH 0x94 /* PCI-CFG offset */
#define EP_SEMAPHORE_POST_ERR_SHIFT 31 #define POST_STAGE_MASK 0x0000FFFF
#define POST_ERROR_BIT 0x80000000
#define POST_ERR_RECOVERY_CODE_MASK 0xF000
/* Soft Reset register masks */
#define SLIPORT_SOFTRESET_SR_MASK 0x00000080 /* SR bit */
/* MPU semphore POST stage values */
#define POST_STAGE_AWAITING_HOST_RDY 0x1 /* FW awaiting goahead from host */
#define POST_STAGE_HOST_RDY 0x2 /* Host has given go-ahed to FW */
#define POST_STAGE_BE_RESET 0x3 /* Host wants to reset chip */
#define POST_STAGE_ARMFW_RDY 0xC000 /* FW is done with POST */
#define POST_STAGE_RECOVERABLE_ERR 0xE000 /* Recoverable err detected */
/********** MCC door bell ************/ /********** MCC door bell ************/
#define DB_MCCQ_OFFSET 0x140 #define DB_MCCQ_OFFSET 0x140
@ -109,9 +122,6 @@ struct be_mcc_compl {
/* Number of entries posted */ /* Number of entries posted */
#define DB_MCCQ_NUM_POSTED_SHIFT 16 /* bits 16 - 29 */ #define DB_MCCQ_NUM_POSTED_SHIFT 16 /* bits 16 - 29 */
/* MPU semphore POST stage values */
#define POST_STAGE_ARMFW_RDY 0xc000 /* FW is done with POST */
/** /**
* When the async bit of mcc_compl is set, the last 4 bytes of * When the async bit of mcc_compl is set, the last 4 bytes of
* mcc_compl is interpreted as follows: * mcc_compl is interpreted as follows:
@ -217,6 +227,7 @@ struct be_mcc_mailbox {
#define OPCODE_COMMON_QUERY_FIRMWARE_CONFIG 58 #define OPCODE_COMMON_QUERY_FIRMWARE_CONFIG 58
#define OPCODE_COMMON_FUNCTION_RESET 61 #define OPCODE_COMMON_FUNCTION_RESET 61
#define OPCODE_COMMON_GET_PORT_NAME 77 #define OPCODE_COMMON_GET_PORT_NAME 77
#define OPCODE_COMMON_SET_FEATURES 191
/** /**
* LIST of opcodes that are common between Initiator and Target * LIST of opcodes that are common between Initiator and Target
@ -345,8 +356,8 @@ struct be_cmd_req_logout_fw_sess {
struct be_cmd_resp_logout_fw_sess { struct be_cmd_resp_logout_fw_sess {
struct be_cmd_resp_hdr hdr; /* dw[4] */ struct be_cmd_resp_hdr hdr; /* dw[4] */
#define BEISCSI_MGMT_SESSION_CLOSE 0x20
uint32_t session_status; uint32_t session_status;
#define BE_SESS_STATUS_CLOSE 0x20
} __packed; } __packed;
struct mgmt_conn_login_options { struct mgmt_conn_login_options {
@ -365,6 +376,14 @@ struct ip_addr_format {
u16 size_of_structure; u16 size_of_structure;
u8 reserved; u8 reserved;
u8 ip_type; u8 ip_type;
#define BEISCSI_IP_TYPE_V4 0x1
#define BEISCSI_IP_TYPE_STATIC_V4 0x3
#define BEISCSI_IP_TYPE_DHCP_V4 0x5
/* type v4 values < type v6 values */
#define BEISCSI_IP_TYPE_V6 0x10
#define BEISCSI_IP_TYPE_ROUTABLE_V6 0x30
#define BEISCSI_IP_TYPE_LINK_LOCAL_V6 0x50
#define BEISCSI_IP_TYPE_AUTO_V6 0x90
u8 addr[16]; u8 addr[16];
u32 rsvd0; u32 rsvd0;
} __packed; } __packed;
@ -430,8 +449,13 @@ struct be_cmd_get_boot_target_req {
struct be_cmd_get_boot_target_resp { struct be_cmd_get_boot_target_resp {
struct be_cmd_resp_hdr hdr; struct be_cmd_resp_hdr hdr;
u32 boot_session_count; u32 boot_session_count;
int boot_session_handle; u32 boot_session_handle;
/**
* FW returns 0xffffffff if it couldn't establish connection with
* configured boot target.
*/
#define BE_BOOT_INVALID_SHANDLE 0xffffffff
}; };
struct be_cmd_reopen_session_req { struct be_cmd_reopen_session_req {
@ -699,16 +723,59 @@ struct be_cmd_get_nic_conf_resp {
u8 mac_address[ETH_ALEN]; u8 mac_address[ETH_ALEN];
} __packed; } __packed;
#define BEISCSI_ALIAS_LEN 32 /******************** Get HBA NAME *******************/
struct be_cmd_hba_name { struct be_cmd_hba_name {
struct be_cmd_req_hdr hdr; struct be_cmd_req_hdr hdr;
u16 flags; u16 flags;
u16 rsvd0; u16 rsvd0;
u8 initiator_name[ISCSI_NAME_LEN]; u8 initiator_name[ISCSI_NAME_LEN];
u8 initiator_alias[BEISCSI_ALIAS_LEN]; #define BE_INI_ALIAS_LEN 32
u8 initiator_alias[BE_INI_ALIAS_LEN];
} __packed; } __packed;
/******************** COMMON SET Features *******************/
#define BE_CMD_SET_FEATURE_UER 0x10
#define BE_CMD_UER_SUPP_BIT 0x1
struct be_uer_req {
u32 uer;
u32 rsvd;
};
struct be_uer_resp {
u32 uer;
u16 ue2rp;
u16 ue2sr;
};
struct be_cmd_set_features {
union {
struct be_cmd_req_hdr req_hdr;
struct be_cmd_resp_hdr resp_hdr;
} h;
u32 feature;
u32 param_len;
union {
struct be_uer_req req;
struct be_uer_resp resp;
u32 rsvd[2];
} param;
} __packed;
int beiscsi_cmd_function_reset(struct beiscsi_hba *phba);
int beiscsi_cmd_special_wrb(struct be_ctrl_info *ctrl, u32 load);
int beiscsi_check_fw_rdy(struct beiscsi_hba *phba);
int beiscsi_init_sliport(struct beiscsi_hba *phba);
int beiscsi_cmd_iscsi_cleanup(struct beiscsi_hba *phba, unsigned short ulp_num);
int beiscsi_detect_ue(struct beiscsi_hba *phba);
int beiscsi_detect_tpe(struct beiscsi_hba *phba);
int beiscsi_cmd_eq_create(struct be_ctrl_info *ctrl, int beiscsi_cmd_eq_create(struct be_ctrl_info *ctrl,
struct be_queue_info *eq, int eq_delay); struct be_queue_info *eq, int eq_delay);
@ -723,24 +790,21 @@ int beiscsi_cmd_mccq_create(struct beiscsi_hba *phba,
struct be_queue_info *mccq, struct be_queue_info *mccq,
struct be_queue_info *cq); struct be_queue_info *cq);
int be_poll_mcc(struct be_ctrl_info *ctrl);
int mgmt_check_supported_fw(struct be_ctrl_info *ctrl,
struct beiscsi_hba *phba);
unsigned int be_cmd_get_initname(struct beiscsi_hba *phba); unsigned int be_cmd_get_initname(struct beiscsi_hba *phba);
void free_mcc_wrb(struct be_ctrl_info *ctrl, unsigned int tag); void free_mcc_wrb(struct be_ctrl_info *ctrl, unsigned int tag);
int be_cmd_modify_eq_delay(struct beiscsi_hba *phba, struct be_set_eqd *, int beiscsi_modify_eq_delay(struct beiscsi_hba *phba, struct be_set_eqd *,
int num); int num);
int beiscsi_mccq_compl_wait(struct beiscsi_hba *phba, int beiscsi_mccq_compl_wait(struct beiscsi_hba *phba,
uint32_t tag, struct be_mcc_wrb **wrb, unsigned int tag,
struct be_mcc_wrb **wrb,
struct be_dma_mem *mbx_cmd_mem); struct be_dma_mem *mbx_cmd_mem);
/*ISCSI Functuions */ int __beiscsi_mcc_compl_status(struct beiscsi_hba *phba,
int be_cmd_fw_initialize(struct be_ctrl_info *ctrl); unsigned int tag,
int be_cmd_fw_uninit(struct be_ctrl_info *ctrl); struct be_mcc_wrb **wrb,
struct be_dma_mem *mbx_cmd_mem);
struct be_mcc_wrb *wrb_from_mbox(struct be_dma_mem *mbox_mem); struct be_mcc_wrb *wrb_from_mbox(struct be_dma_mem *mbox_mem);
int be_mcc_compl_poll(struct beiscsi_hba *phba, unsigned int tag);
void be_mcc_notify(struct beiscsi_hba *phba, unsigned int tag); void be_mcc_notify(struct beiscsi_hba *phba, unsigned int tag);
struct be_mcc_wrb *alloc_mcc_wrb(struct beiscsi_hba *phba, struct be_mcc_wrb *alloc_mcc_wrb(struct beiscsi_hba *phba,
unsigned int *ref_tag); unsigned int *ref_tag);
@ -749,9 +813,6 @@ void beiscsi_process_async_event(struct beiscsi_hba *phba,
int beiscsi_process_mcc_compl(struct be_ctrl_info *ctrl, int beiscsi_process_mcc_compl(struct be_ctrl_info *ctrl,
struct be_mcc_compl *compl); struct be_mcc_compl *compl);
int be_mbox_notify(struct be_ctrl_info *ctrl);
int be_cmd_create_default_pdu_queue(struct be_ctrl_info *ctrl, int be_cmd_create_default_pdu_queue(struct be_ctrl_info *ctrl,
struct be_queue_info *cq, struct be_queue_info *cq,
struct be_queue_info *dq, int length, struct be_queue_info *dq, int length,
@ -767,8 +828,6 @@ int be_cmd_iscsi_post_sgl_pages(struct be_ctrl_info *ctrl,
struct be_dma_mem *q_mem, u32 page_offset, struct be_dma_mem *q_mem, u32 page_offset,
u32 num_pages); u32 num_pages);
int beiscsi_cmd_reset_function(struct beiscsi_hba *phba);
int be_cmd_wrbq_create(struct be_ctrl_info *ctrl, struct be_dma_mem *q_mem, int be_cmd_wrbq_create(struct be_ctrl_info *ctrl, struct be_dma_mem *q_mem,
struct be_queue_info *wrbq, struct be_queue_info *wrbq,
struct hwi_wrb_context *pwrb_context, struct hwi_wrb_context *pwrb_context,
@ -777,6 +836,15 @@ int be_cmd_wrbq_create(struct be_ctrl_info *ctrl, struct be_dma_mem *q_mem,
/* Configuration Functions */ /* Configuration Functions */
int be_cmd_set_vlan(struct beiscsi_hba *phba, uint16_t vlan_tag); int be_cmd_set_vlan(struct beiscsi_hba *phba, uint16_t vlan_tag);
int beiscsi_check_supported_fw(struct be_ctrl_info *ctrl,
struct beiscsi_hba *phba);
int beiscsi_get_fw_config(struct be_ctrl_info *ctrl, struct beiscsi_hba *phba);
int beiscsi_get_port_name(struct be_ctrl_info *ctrl, struct beiscsi_hba *phba);
int beiscsi_set_uer_feature(struct beiscsi_hba *phba);
struct be_default_pdu_context { struct be_default_pdu_context {
u32 dw[4]; u32 dw[4];
} __packed; } __packed;
@ -999,7 +1067,16 @@ struct iscsi_cleanup_req {
u16 chute; u16 chute;
u8 hdr_ring_id; u8 hdr_ring_id;
u8 data_ring_id; u8 data_ring_id;
} __packed;
struct iscsi_cleanup_req_v1 {
struct be_cmd_req_hdr hdr;
u16 chute;
u16 rsvd1;
u16 hdr_ring_id;
u16 rsvd2;
u16 data_ring_id;
u16 rsvd3;
} __packed; } __packed;
struct eq_delay { struct eq_delay {
@ -1368,14 +1445,9 @@ struct be_cmd_get_port_name {
* the cxn * the cxn
*/ */
int beiscsi_pci_soft_reset(struct beiscsi_hba *phba);
int be_chk_reset_complete(struct beiscsi_hba *phba);
void be_wrb_hdr_prepare(struct be_mcc_wrb *wrb, int payload_len, void be_wrb_hdr_prepare(struct be_mcc_wrb *wrb, int payload_len,
bool embedded, u8 sge_cnt); bool embedded, u8 sge_cnt);
void be_cmd_hdr_prepare(struct be_cmd_req_hdr *req_hdr, void be_cmd_hdr_prepare(struct be_cmd_req_hdr *req_hdr,
u8 subsystem, u8 opcode, int cmd_len); u8 subsystem, u8 opcode, int cmd_len);
void beiscsi_fail_session(struct iscsi_cls_session *cls_session);
#endif /* !BEISCSI_CMDS_H */ #endif /* !BEISCSI_CMDS_H */


@ -1,5 +1,5 @@
/** /**
* Copyright (C) 2005 - 2015 Emulex * Copyright (C) 2005 - 2016 Broadcom
* All rights reserved. * All rights reserved.
* *
* This program is free software; you can redistribute it and/or * This program is free software; you can redistribute it and/or
@ -7,10 +7,10 @@
* as published by the Free Software Foundation. The full GNU General * as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING. * Public License is included in this distribution in the file called COPYING.
* *
* Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com) * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com)
* *
* Contact Information: * Contact Information:
* linux-drivers@avagotech.com * linux-drivers@broadcom.com
* *
* Emulex * Emulex
* 3333 Susan Street * 3333 Susan Street
@ -52,22 +52,20 @@ struct iscsi_cls_session *beiscsi_session_create(struct iscsi_endpoint *ep,
if (!ep) { if (!ep) {
printk(KERN_ERR pr_err("beiscsi_session_create: invalid ep\n");
"beiscsi_session_create: invalid ep\n");
return NULL; return NULL;
} }
beiscsi_ep = ep->dd_data; beiscsi_ep = ep->dd_data;
phba = beiscsi_ep->phba; phba = beiscsi_ep->phba;
if (phba->state & BE_ADAPTER_PCI_ERR) { if (!beiscsi_hba_is_online(phba)) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : PCI_ERROR Recovery\n");
return NULL;
} else {
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : In beiscsi_session_create\n"); "BS_%d : HBA in error 0x%lx\n", phba->state);
return NULL;
} }
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : In beiscsi_session_create\n");
if (cmds_max > beiscsi_ep->phba->params.wrbs_per_cxn) { if (cmds_max > beiscsi_ep->phba->params.wrbs_per_cxn) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : Cannot handle %d cmds." "BS_%d : Cannot handle %d cmds."
@ -119,6 +117,16 @@ void beiscsi_session_destroy(struct iscsi_cls_session *cls_session)
iscsi_session_teardown(cls_session); iscsi_session_teardown(cls_session);
} }
/**
* beiscsi_session_fail(): Closing session with appropriate error
* @cls_session: ptr to session
**/
void beiscsi_session_fail(struct iscsi_cls_session *cls_session)
{
iscsi_session_failure(cls_session->dd_data, ISCSI_ERR_CONN_FAILED);
}
/** /**
* beiscsi_conn_create - create an instance of iscsi connection * beiscsi_conn_create - create an instance of iscsi connection
* @cls_session: ptr to iscsi_cls_session * @cls_session: ptr to iscsi_cls_session
@ -237,7 +245,7 @@ int beiscsi_conn_bind(struct iscsi_cls_session *cls_session,
return beiscsi_bindconn_cid(phba, beiscsi_conn, beiscsi_ep->ep_cid); return beiscsi_bindconn_cid(phba, beiscsi_conn, beiscsi_ep->ep_cid);
} }
static int beiscsi_create_ipv4_iface(struct beiscsi_hba *phba) static int beiscsi_iface_create_ipv4(struct beiscsi_hba *phba)
{ {
if (phba->ipv4_iface) if (phba->ipv4_iface)
return 0; return 0;
@ -256,7 +264,7 @@ static int beiscsi_create_ipv4_iface(struct beiscsi_hba *phba)
return 0; return 0;
} }
static int beiscsi_create_ipv6_iface(struct beiscsi_hba *phba) static int beiscsi_iface_create_ipv6(struct beiscsi_hba *phba)
{ {
if (phba->ipv6_iface) if (phba->ipv6_iface)
return 0; return 0;
@ -275,79 +283,31 @@ static int beiscsi_create_ipv6_iface(struct beiscsi_hba *phba)
return 0; return 0;
} }
void beiscsi_create_def_ifaces(struct beiscsi_hba *phba) void beiscsi_iface_create_default(struct beiscsi_hba *phba)
{ {
struct be_cmd_get_if_info_resp *if_info; struct be_cmd_get_if_info_resp *if_info;
if (!mgmt_get_if_info(phba, BE2_IPV4, &if_info)) { if (!beiscsi_if_get_info(phba, BEISCSI_IP_TYPE_V4, &if_info)) {
beiscsi_create_ipv4_iface(phba); beiscsi_iface_create_ipv4(phba);
kfree(if_info); kfree(if_info);
} }
if (!mgmt_get_if_info(phba, BE2_IPV6, &if_info)) { if (!beiscsi_if_get_info(phba, BEISCSI_IP_TYPE_V6, &if_info)) {
beiscsi_create_ipv6_iface(phba); beiscsi_iface_create_ipv6(phba);
kfree(if_info); kfree(if_info);
} }
} }
void beiscsi_destroy_def_ifaces(struct beiscsi_hba *phba) void beiscsi_iface_destroy_default(struct beiscsi_hba *phba)
{ {
if (phba->ipv6_iface) if (phba->ipv6_iface) {
iscsi_destroy_iface(phba->ipv6_iface); iscsi_destroy_iface(phba->ipv6_iface);
if (phba->ipv4_iface) phba->ipv6_iface = NULL;
}
if (phba->ipv4_iface) {
iscsi_destroy_iface(phba->ipv4_iface); iscsi_destroy_iface(phba->ipv4_iface);
} phba->ipv4_iface = NULL;
static int
beiscsi_set_static_ip(struct Scsi_Host *shost,
struct iscsi_iface_param_info *iface_param,
void *data, uint32_t dt_len)
{
struct beiscsi_hba *phba = iscsi_host_priv(shost);
struct iscsi_iface_param_info *iface_ip = NULL;
struct iscsi_iface_param_info *iface_subnet = NULL;
struct nlattr *nla;
int ret;
switch (iface_param->param) {
case ISCSI_NET_PARAM_IPV4_BOOTPROTO:
nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_ADDR);
if (nla)
iface_ip = nla_data(nla);
nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_SUBNET);
if (nla)
iface_subnet = nla_data(nla);
break;
case ISCSI_NET_PARAM_IPV4_ADDR:
iface_ip = iface_param;
nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_SUBNET);
if (nla)
iface_subnet = nla_data(nla);
break;
case ISCSI_NET_PARAM_IPV4_SUBNET:
iface_subnet = iface_param;
nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_ADDR);
if (nla)
iface_ip = nla_data(nla);
break;
default:
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : Unsupported param %d\n",
iface_param->param);
} }
if (!iface_ip || !iface_subnet) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : IP and Subnet Mask required\n");
return -EINVAL;
}
ret = mgmt_set_ip(phba, iface_ip, iface_subnet,
ISCSI_BOOTPROTO_STATIC);
return ret;
} }
/** /**
@ -363,137 +323,141 @@ beiscsi_set_static_ip(struct Scsi_Host *shost,
* Failure: Non-Zero Value * Failure: Non-Zero Value
**/ **/
static int static int
beiscsi_set_vlan_tag(struct Scsi_Host *shost, beiscsi_iface_config_vlan(struct Scsi_Host *shost,
struct iscsi_iface_param_info *iface_param) struct iscsi_iface_param_info *iface_param)
{ {
struct beiscsi_hba *phba = iscsi_host_priv(shost); struct beiscsi_hba *phba = iscsi_host_priv(shost);
int ret; int ret = -EPERM;
/* Get the Interface Handle */
ret = mgmt_get_all_if_id(phba);
if (ret) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : Getting Interface Handle Failed\n");
return ret;
}
switch (iface_param->param) { switch (iface_param->param) {
case ISCSI_NET_PARAM_VLAN_ENABLED: case ISCSI_NET_PARAM_VLAN_ENABLED:
ret = 0;
if (iface_param->value[0] != ISCSI_VLAN_ENABLE) if (iface_param->value[0] != ISCSI_VLAN_ENABLE)
ret = mgmt_set_vlan(phba, BEISCSI_VLAN_DISABLE); ret = beiscsi_if_set_vlan(phba, BEISCSI_VLAN_DISABLE);
break; break;
case ISCSI_NET_PARAM_VLAN_TAG: case ISCSI_NET_PARAM_VLAN_TAG:
ret = mgmt_set_vlan(phba, ret = beiscsi_if_set_vlan(phba,
*((uint16_t *)iface_param->value)); *((uint16_t *)iface_param->value));
break; break;
default:
beiscsi_log(phba, KERN_WARNING, BEISCSI_LOG_CONFIG,
"BS_%d : Unknown Param Type : %d\n",
iface_param->param);
return -ENOSYS;
} }
return ret; return ret;
} }
static int static int
beiscsi_set_ipv4(struct Scsi_Host *shost, beiscsi_iface_config_ipv4(struct Scsi_Host *shost,
struct iscsi_iface_param_info *iface_param, struct iscsi_iface_param_info *info,
void *data, uint32_t dt_len) void *data, uint32_t dt_len)
{ {
struct beiscsi_hba *phba = iscsi_host_priv(shost); struct beiscsi_hba *phba = iscsi_host_priv(shost);
int ret = 0; u8 *ip = NULL, *subnet = NULL, *gw;
struct nlattr *nla;
int ret = -EPERM;
/* Check the param */ /* Check the param */
switch (iface_param->param) { switch (info->param) {
case ISCSI_NET_PARAM_IFACE_ENABLE:
if (info->value[0] == ISCSI_IFACE_ENABLE)
ret = beiscsi_iface_create_ipv4(phba);
else {
iscsi_destroy_iface(phba->ipv4_iface);
phba->ipv4_iface = NULL;
}
break;
case ISCSI_NET_PARAM_IPV4_GW: case ISCSI_NET_PARAM_IPV4_GW:
ret = mgmt_set_gateway(phba, iface_param); gw = info->value;
ret = beiscsi_if_set_gw(phba, BEISCSI_IP_TYPE_V4, gw);
break; break;
case ISCSI_NET_PARAM_IPV4_BOOTPROTO: case ISCSI_NET_PARAM_IPV4_BOOTPROTO:
if (iface_param->value[0] == ISCSI_BOOTPROTO_DHCP) if (info->value[0] == ISCSI_BOOTPROTO_DHCP)
ret = mgmt_set_ip(phba, iface_param, ret = beiscsi_if_en_dhcp(phba, BEISCSI_IP_TYPE_V4);
NULL, ISCSI_BOOTPROTO_DHCP); else if (info->value[0] == ISCSI_BOOTPROTO_STATIC)
else if (iface_param->value[0] == ISCSI_BOOTPROTO_STATIC) /* release DHCP IP address */
ret = beiscsi_set_static_ip(shost, iface_param, ret = beiscsi_if_en_static(phba, BEISCSI_IP_TYPE_V4,
data, dt_len); NULL, NULL);
else else
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : Invalid BOOTPROTO: %d\n", "BS_%d : Invalid BOOTPROTO: %d\n",
iface_param->value[0]); info->value[0]);
break; break;
case ISCSI_NET_PARAM_IFACE_ENABLE: case ISCSI_NET_PARAM_IPV4_ADDR:
if (iface_param->value[0] == ISCSI_IFACE_ENABLE) ip = info->value;
ret = beiscsi_create_ipv4_iface(phba); nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_SUBNET);
else if (nla) {
iscsi_destroy_iface(phba->ipv4_iface); info = nla_data(nla);
subnet = info->value;
}
ret = beiscsi_if_en_static(phba, BEISCSI_IP_TYPE_V4,
ip, subnet);
break; break;
case ISCSI_NET_PARAM_IPV4_SUBNET: case ISCSI_NET_PARAM_IPV4_SUBNET:
case ISCSI_NET_PARAM_IPV4_ADDR: /*
ret = beiscsi_set_static_ip(shost, iface_param, * OPCODE_COMMON_ISCSI_NTWK_MODIFY_IP_ADDR ioctl needs IP
data, dt_len); * and subnet both. Find IP to be applied for this subnet.
*/
subnet = info->value;
nla = nla_find(data, dt_len, ISCSI_NET_PARAM_IPV4_ADDR);
if (nla) {
info = nla_data(nla);
ip = info->value;
}
ret = beiscsi_if_en_static(phba, BEISCSI_IP_TYPE_V4,
ip, subnet);
break; break;
case ISCSI_NET_PARAM_VLAN_ENABLED:
case ISCSI_NET_PARAM_VLAN_TAG:
ret = beiscsi_set_vlan_tag(shost, iface_param);
break;
default:
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : Param %d not supported\n",
iface_param->param);
} }
return ret; return ret;
} }
static int static int
beiscsi_set_ipv6(struct Scsi_Host *shost, beiscsi_iface_config_ipv6(struct Scsi_Host *shost,
struct iscsi_iface_param_info *iface_param, struct iscsi_iface_param_info *iface_param,
void *data, uint32_t dt_len) void *data, uint32_t dt_len)
{ {
struct beiscsi_hba *phba = iscsi_host_priv(shost); struct beiscsi_hba *phba = iscsi_host_priv(shost);
int ret = 0; int ret = -EPERM;
switch (iface_param->param) { switch (iface_param->param) {
case ISCSI_NET_PARAM_IFACE_ENABLE: case ISCSI_NET_PARAM_IFACE_ENABLE:
if (iface_param->value[0] == ISCSI_IFACE_ENABLE) if (iface_param->value[0] == ISCSI_IFACE_ENABLE)
ret = beiscsi_create_ipv6_iface(phba); ret = beiscsi_iface_create_ipv6(phba);
else { else {
iscsi_destroy_iface(phba->ipv6_iface); iscsi_destroy_iface(phba->ipv6_iface);
ret = 0; phba->ipv6_iface = NULL;
} }
break; break;
case ISCSI_NET_PARAM_IPV6_ADDR: case ISCSI_NET_PARAM_IPV6_ADDR:
ret = mgmt_set_ip(phba, iface_param, NULL, ret = beiscsi_if_en_static(phba, BEISCSI_IP_TYPE_V6,
ISCSI_BOOTPROTO_STATIC); iface_param->value, NULL);
break; break;
case ISCSI_NET_PARAM_VLAN_ENABLED:
case ISCSI_NET_PARAM_VLAN_TAG:
ret = beiscsi_set_vlan_tag(shost, iface_param);
break;
default:
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : Param %d not supported\n",
iface_param->param);
} }
return ret; return ret;
} }
int be2iscsi_iface_set_param(struct Scsi_Host *shost, int beiscsi_iface_set_param(struct Scsi_Host *shost,
void *data, uint32_t dt_len) void *data, uint32_t dt_len)
{ {
struct iscsi_iface_param_info *iface_param = NULL; struct iscsi_iface_param_info *iface_param = NULL;
struct beiscsi_hba *phba = iscsi_host_priv(shost); struct beiscsi_hba *phba = iscsi_host_priv(shost);
struct nlattr *attrib; struct nlattr *attrib;
uint32_t rm_len = dt_len; uint32_t rm_len = dt_len;
int ret = 0 ; int ret;
if (phba->state & BE_ADAPTER_PCI_ERR) { if (!beiscsi_hba_is_online(phba)) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : In PCI_ERROR Recovery\n"); "BS_%d : HBA in error 0x%lx\n", phba->state);
return -EBUSY; return -EBUSY;
} }
/* update interface_handle */
ret = beiscsi_if_get_handle(phba);
if (ret) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : Getting Interface Handle Failed\n");
return ret;
}
nla_for_each_attr(attrib, data, dt_len, rm_len) { nla_for_each_attr(attrib, data, dt_len, rm_len) {
iface_param = nla_data(attrib); iface_param = nla_data(attrib);
@ -512,40 +476,58 @@ int be2iscsi_iface_set_param(struct Scsi_Host *shost,
return -EINVAL; return -EINVAL;
} }
switch (iface_param->iface_type) { beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
case ISCSI_IFACE_TYPE_IPV4: "BS_%d : %s.0 set param %d",
ret = beiscsi_set_ipv4(shost, iface_param, (iface_param->iface_type == ISCSI_IFACE_TYPE_IPV4) ?
data, dt_len); "ipv4" : "ipv6", iface_param->param);
break;
case ISCSI_IFACE_TYPE_IPV6: ret = -EPERM;
ret = beiscsi_set_ipv6(shost, iface_param, switch (iface_param->param) {
data, dt_len); case ISCSI_NET_PARAM_VLAN_ENABLED:
case ISCSI_NET_PARAM_VLAN_TAG:
ret = beiscsi_iface_config_vlan(shost, iface_param);
break; break;
default: default:
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, switch (iface_param->iface_type) {
"BS_%d : Invalid iface type :%d passed\n", case ISCSI_IFACE_TYPE_IPV4:
iface_param->iface_type); ret = beiscsi_iface_config_ipv4(shost,
break; iface_param,
data, dt_len);
break;
case ISCSI_IFACE_TYPE_IPV6:
ret = beiscsi_iface_config_ipv6(shost,
iface_param,
data, dt_len);
break;
}
} }
if (ret == -EPERM) {
__beiscsi_log(phba, KERN_ERR,
"BS_%d : %s.0 set param %d not permitted",
(iface_param->iface_type ==
ISCSI_IFACE_TYPE_IPV4) ? "ipv4" : "ipv6",
iface_param->param);
ret = 0;
}
if (ret) if (ret)
return ret; break;
} }
return ret; return ret;
} }
static int be2iscsi_get_if_param(struct beiscsi_hba *phba, static int __beiscsi_iface_get_param(struct beiscsi_hba *phba,
struct iscsi_iface *iface, int param, struct iscsi_iface *iface,
char *buf) int param, char *buf)
{ {
struct be_cmd_get_if_info_resp *if_info; struct be_cmd_get_if_info_resp *if_info;
int len, ip_type = BE2_IPV4; int len, ip_type = BEISCSI_IP_TYPE_V4;
if (iface->iface_type == ISCSI_IFACE_TYPE_IPV6) if (iface->iface_type == ISCSI_IFACE_TYPE_IPV6)
ip_type = BE2_IPV6; ip_type = BEISCSI_IP_TYPE_V6;
len = mgmt_get_if_info(phba, ip_type, &if_info); len = beiscsi_if_get_info(phba, ip_type, &if_info);
if (len) if (len)
return len; return len;
@ -567,24 +549,24 @@ static int be2iscsi_get_if_param(struct beiscsi_hba *phba,
break; break;
case ISCSI_NET_PARAM_VLAN_ENABLED: case ISCSI_NET_PARAM_VLAN_ENABLED:
len = sprintf(buf, "%s\n", len = sprintf(buf, "%s\n",
(if_info->vlan_priority == BEISCSI_VLAN_DISABLE) (if_info->vlan_priority == BEISCSI_VLAN_DISABLE) ?
? "Disabled\n" : "Enabled\n"); "disable" : "enable");
break; break;
case ISCSI_NET_PARAM_VLAN_ID: case ISCSI_NET_PARAM_VLAN_ID:
if (if_info->vlan_priority == BEISCSI_VLAN_DISABLE) if (if_info->vlan_priority == BEISCSI_VLAN_DISABLE)
len = -EINVAL; len = -EINVAL;
else else
len = sprintf(buf, "%d\n", len = sprintf(buf, "%d\n",
(if_info->vlan_priority & (if_info->vlan_priority &
ISCSI_MAX_VLAN_ID)); ISCSI_MAX_VLAN_ID));
break; break;
case ISCSI_NET_PARAM_VLAN_PRIORITY: case ISCSI_NET_PARAM_VLAN_PRIORITY:
if (if_info->vlan_priority == BEISCSI_VLAN_DISABLE) if (if_info->vlan_priority == BEISCSI_VLAN_DISABLE)
len = -EINVAL; len = -EINVAL;
else else
len = sprintf(buf, "%d\n", len = sprintf(buf, "%d\n",
((if_info->vlan_priority >> 13) & ((if_info->vlan_priority >> 13) &
ISCSI_MAX_VLAN_PRIORITY)); ISCSI_MAX_VLAN_PRIORITY));
break; break;
default: default:
WARN_ON(1); WARN_ON(1);
@ -594,18 +576,20 @@ static int be2iscsi_get_if_param(struct beiscsi_hba *phba,
return len; return len;
} }
int be2iscsi_iface_get_param(struct iscsi_iface *iface, int beiscsi_iface_get_param(struct iscsi_iface *iface,
enum iscsi_param_type param_type, enum iscsi_param_type param_type,
int param, char *buf) int param, char *buf)
{ {
struct Scsi_Host *shost = iscsi_iface_to_shost(iface); struct Scsi_Host *shost = iscsi_iface_to_shost(iface);
struct beiscsi_hba *phba = iscsi_host_priv(shost); struct beiscsi_hba *phba = iscsi_host_priv(shost);
struct be_cmd_get_def_gateway_resp gateway; struct be_cmd_get_def_gateway_resp gateway;
int len = -ENOSYS; int len = -EPERM;
if (phba->state & BE_ADAPTER_PCI_ERR) { if (param_type != ISCSI_NET_PARAM)
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, return 0;
"BS_%d : In PCI_ERROR Recovery\n"); if (!beiscsi_hba_is_online(phba)) {
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : HBA in error 0x%lx\n", phba->state);
return -EBUSY; return -EBUSY;
} }
@ -617,19 +601,22 @@ int be2iscsi_iface_get_param(struct iscsi_iface *iface,
case ISCSI_NET_PARAM_VLAN_ENABLED: case ISCSI_NET_PARAM_VLAN_ENABLED:
case ISCSI_NET_PARAM_VLAN_ID: case ISCSI_NET_PARAM_VLAN_ID:
case ISCSI_NET_PARAM_VLAN_PRIORITY: case ISCSI_NET_PARAM_VLAN_PRIORITY:
len = be2iscsi_get_if_param(phba, iface, param, buf); len = __beiscsi_iface_get_param(phba, iface, param, buf);
break; break;
case ISCSI_NET_PARAM_IFACE_ENABLE: case ISCSI_NET_PARAM_IFACE_ENABLE:
len = sprintf(buf, "enabled\n"); if (iface->iface_type == ISCSI_IFACE_TYPE_IPV4)
len = sprintf(buf, "%s\n",
phba->ipv4_iface ? "enable" : "disable");
else if (iface->iface_type == ISCSI_IFACE_TYPE_IPV6)
len = sprintf(buf, "%s\n",
phba->ipv6_iface ? "enable" : "disable");
break; break;
case ISCSI_NET_PARAM_IPV4_GW: case ISCSI_NET_PARAM_IPV4_GW:
memset(&gateway, 0, sizeof(gateway)); memset(&gateway, 0, sizeof(gateway));
len = mgmt_get_gateway(phba, BE2_IPV4, &gateway); len = beiscsi_if_get_gw(phba, BEISCSI_IP_TYPE_V4, &gateway);
if (!len) if (!len)
len = sprintf(buf, "%pI4\n", &gateway.ip_addr.addr); len = sprintf(buf, "%pI4\n", &gateway.ip_addr.addr);
break; break;
default:
len = -ENOSYS;
} }
return len; return len;
@ -647,7 +634,7 @@ int beiscsi_ep_get_param(struct iscsi_endpoint *ep,
enum iscsi_param param, char *buf) enum iscsi_param param, char *buf)
{ {
struct beiscsi_endpoint *beiscsi_ep = ep->dd_data; struct beiscsi_endpoint *beiscsi_ep = ep->dd_data;
int len = 0; int len;
beiscsi_log(beiscsi_ep->phba, KERN_INFO, beiscsi_log(beiscsi_ep->phba, KERN_INFO,
BEISCSI_LOG_CONFIG, BEISCSI_LOG_CONFIG,
@ -659,13 +646,13 @@ int beiscsi_ep_get_param(struct iscsi_endpoint *ep,
len = sprintf(buf, "%hu\n", beiscsi_ep->dst_tcpport); len = sprintf(buf, "%hu\n", beiscsi_ep->dst_tcpport);
break; break;
case ISCSI_PARAM_CONN_ADDRESS: case ISCSI_PARAM_CONN_ADDRESS:
if (beiscsi_ep->ip_type == BE2_IPV4) if (beiscsi_ep->ip_type == BEISCSI_IP_TYPE_V4)
len = sprintf(buf, "%pI4\n", &beiscsi_ep->dst_addr); len = sprintf(buf, "%pI4\n", &beiscsi_ep->dst_addr);
else else
len = sprintf(buf, "%pI6\n", &beiscsi_ep->dst6_addr); len = sprintf(buf, "%pI6\n", &beiscsi_ep->dst6_addr);
break; break;
default: default:
return -ENOSYS; len = -EPERM;
} }
return len; return len;
} }
@ -758,7 +745,7 @@ static void beiscsi_get_port_state(struct Scsi_Host *shost)
struct beiscsi_hba *phba = iscsi_host_priv(shost); struct beiscsi_hba *phba = iscsi_host_priv(shost);
struct iscsi_cls_host *ihost = shost->shost_data; struct iscsi_cls_host *ihost = shost->shost_data;
ihost->port_state = (phba->state & BE_ADAPTER_LINK_UP) ? ihost->port_state = test_bit(BEISCSI_HBA_LINK_UP, &phba->state) ?
ISCSI_PORT_STATE_UP : ISCSI_PORT_STATE_DOWN; ISCSI_PORT_STATE_UP : ISCSI_PORT_STATE_DOWN;
} }
@ -810,16 +797,13 @@ int beiscsi_get_host_param(struct Scsi_Host *shost,
struct beiscsi_hba *phba = iscsi_host_priv(shost); struct beiscsi_hba *phba = iscsi_host_priv(shost);
int status = 0; int status = 0;
if (!beiscsi_hba_is_online(phba)) {
if (phba->state & BE_ADAPTER_PCI_ERR) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : In PCI_ERROR Recovery\n");
return -EBUSY;
} else {
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : In beiscsi_get_host_param," "BS_%d : HBA in error 0x%lx\n", phba->state);
" param = %d\n", param); return -EBUSY;
} }
beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : In beiscsi_get_host_param, param = %d\n", param);
switch (param) { switch (param) {
case ISCSI_HOST_PARAM_HWADDRESS: case ISCSI_HOST_PARAM_HWADDRESS:
@ -961,15 +945,13 @@ int beiscsi_conn_start(struct iscsi_cls_conn *cls_conn)
phba = ((struct beiscsi_conn *)conn->dd_data)->phba; phba = ((struct beiscsi_conn *)conn->dd_data)->phba;
if (phba->state & BE_ADAPTER_PCI_ERR) { if (!beiscsi_hba_is_online(phba)) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : In PCI_ERROR Recovery\n"); "BS_%d : HBA in error 0x%lx\n", phba->state);
return -EBUSY; return -EBUSY;
} else {
beiscsi_log(beiscsi_conn->phba, KERN_INFO,
BEISCSI_LOG_CONFIG,
"BS_%d : In beiscsi_conn_start\n");
} }
beiscsi_log(beiscsi_conn->phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : In beiscsi_conn_start\n");
memset(&params, 0, sizeof(struct beiscsi_offload_params)); memset(&params, 0, sizeof(struct beiscsi_offload_params));
beiscsi_ep = beiscsi_conn->ep; beiscsi_ep = beiscsi_conn->ep;
@ -1186,28 +1168,20 @@ beiscsi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
struct iscsi_endpoint *ep; struct iscsi_endpoint *ep;
int ret; int ret;
if (shost) if (!shost) {
phba = iscsi_host_priv(shost);
else {
ret = -ENXIO; ret = -ENXIO;
printk(KERN_ERR pr_err("beiscsi_ep_connect shost is NULL\n");
"beiscsi_ep_connect shost is NULL\n");
return ERR_PTR(ret); return ERR_PTR(ret);
} }
if (beiscsi_error(phba)) { phba = iscsi_host_priv(shost);
if (!beiscsi_hba_is_online(phba)) {
ret = -EIO; ret = -EIO;
beiscsi_log(phba, KERN_WARNING, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : The FW state Not Stable!!!\n"); "BS_%d : HBA in error 0x%lx\n", phba->state);
return ERR_PTR(ret); return ERR_PTR(ret);
} }
if (!test_bit(BEISCSI_HBA_LINK_UP, &phba->state)) {
if (phba->state & BE_ADAPTER_PCI_ERR) {
ret = -EBUSY;
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG,
"BS_%d : In PCI_ERROR Recovery\n");
return ERR_PTR(ret);
} else if (phba->state & BE_ADAPTER_LINK_DOWN) {
ret = -EBUSY; ret = -EBUSY;
beiscsi_log(phba, KERN_WARNING, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_WARNING, BEISCSI_LOG_CONFIG,
"BS_%d : The Adapter Port state is Down!!!\n"); "BS_%d : The Adapter Port state is Down!!!\n");
@ -1361,9 +1335,9 @@ void beiscsi_ep_disconnect(struct iscsi_endpoint *ep)
tcp_upload_flag = CONNECTION_UPLOAD_ABORT; tcp_upload_flag = CONNECTION_UPLOAD_ABORT;
} }
if (phba->state & BE_ADAPTER_PCI_ERR) { if (!beiscsi_hba_is_online(phba)) {
beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_CONFIG, beiscsi_log(phba, KERN_INFO, BEISCSI_LOG_CONFIG,
"BS_%d : PCI_ERROR Recovery\n"); "BS_%d : HBA in error 0x%lx\n", phba->state);
goto free_ep; goto free_ep;
} }
@ -1386,7 +1360,7 @@ free_ep:
iscsi_destroy_endpoint(beiscsi_ep->openiscsi_ep); iscsi_destroy_endpoint(beiscsi_ep->openiscsi_ep);
} }
umode_t be2iscsi_attr_is_visible(int param_type, int param) umode_t beiscsi_attr_is_visible(int param_type, int param)
{ {
switch (param_type) { switch (param_type) {
case ISCSI_NET_PARAM: case ISCSI_NET_PARAM:


@ -1,5 +1,5 @@
/** /**
* Copyright (C) 2005 - 2015 Avago Technologies * Copyright (C) 2005 - 2016 Broadcom
* All rights reserved. * All rights reserved.
* *
* This program is free software; you can redistribute it and/or * This program is free software; you can redistribute it and/or
@ -7,10 +7,10 @@
* as published by the Free Software Foundation. The full GNU General * as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING. * Public License is included in this distribution in the file called COPYING.
* *
* Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com) * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com)
* *
* Contact Information: * Contact Information:
* linux-drivers@avagotech.com * linux-drivers@broadcom.com
* *
* Avago Technologies * Avago Technologies
* 3333 Susan Street * 3333 Susan Street
@ -23,25 +23,18 @@
#include "be_main.h" #include "be_main.h"
#include "be_mgmt.h" #include "be_mgmt.h"
#define BE2_IPV4 0x1 void beiscsi_iface_create_default(struct beiscsi_hba *phba);
#define BE2_IPV6 0x10
#define BE2_DHCP_V4 0x05
#define NON_BLOCKING 0x0 void beiscsi_iface_destroy_default(struct beiscsi_hba *phba);
#define BLOCKING 0x1
void beiscsi_create_def_ifaces(struct beiscsi_hba *phba); int beiscsi_iface_get_param(struct iscsi_iface *iface,
void beiscsi_destroy_def_ifaces(struct beiscsi_hba *phba);
int be2iscsi_iface_get_param(struct iscsi_iface *iface,
enum iscsi_param_type param_type, enum iscsi_param_type param_type,
int param, char *buf); int param, char *buf);
int be2iscsi_iface_set_param(struct Scsi_Host *shost, int beiscsi_iface_set_param(struct Scsi_Host *shost,
void *data, uint32_t count); void *data, uint32_t count);
umode_t be2iscsi_attr_is_visible(int param_type, int param); umode_t beiscsi_attr_is_visible(int param_type, int param);
void beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn, void beiscsi_offload_connection(struct beiscsi_conn *beiscsi_conn,
struct beiscsi_offload_params *params); struct beiscsi_offload_params *params);
@ -57,6 +50,8 @@ struct iscsi_cls_session *beiscsi_session_create(struct iscsi_endpoint *ep,
void beiscsi_session_destroy(struct iscsi_cls_session *cls_session); void beiscsi_session_destroy(struct iscsi_cls_session *cls_session);
void beiscsi_session_fail(struct iscsi_cls_session *cls_session);
struct iscsi_cls_conn *beiscsi_conn_create(struct iscsi_cls_session struct iscsi_cls_conn *beiscsi_conn_create(struct iscsi_cls_session
*cls_session, uint32_t cid); *cls_session, uint32_t cid);

File diff suppressed because it is too large.


@ -1,5 +1,5 @@
/** /**
* Copyright (C) 2005 - 2015 Emulex * Copyright (C) 2005 - 2016 Broadcom
* All rights reserved. * All rights reserved.
* *
* This program is free software; you can redistribute it and/or * This program is free software; you can redistribute it and/or
@ -7,10 +7,10 @@
* as published by the Free Software Foundation. The full GNU General * as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING. * Public License is included in this distribution in the file called COPYING.
* *
* Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com) * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com)
* *
* Contact Information: * Contact Information:
* linux-drivers@avagotech.com * linux-drivers@broadcom.com
* *
* Emulex * Emulex
* 3333 Susan Street * 3333 Susan Street
@ -36,7 +36,7 @@
#include <scsi/scsi_transport_iscsi.h> #include <scsi/scsi_transport_iscsi.h>
#define DRV_NAME "be2iscsi" #define DRV_NAME "be2iscsi"
#define BUILD_STR "11.0.0.0" #define BUILD_STR "11.2.0.0"
#define BE_NAME "Emulex OneConnect" \ #define BE_NAME "Emulex OneConnect" \
"Open-iSCSI Driver version" BUILD_STR "Open-iSCSI Driver version" BUILD_STR
#define DRV_DESC BE_NAME " " "Driver" #define DRV_DESC BE_NAME " " "Driver"
@ -82,36 +82,12 @@
#define BEISCSI_MAX_FRAGS_INIT 192 #define BEISCSI_MAX_FRAGS_INIT 192
#define BE_NUM_MSIX_ENTRIES 1 #define BE_NUM_MSIX_ENTRIES 1
#define MPU_EP_CONTROL 0
#define MPU_EP_SEMAPHORE 0xac
#define BE2_SOFT_RESET 0x5c
#define BE2_PCI_ONLINE0 0xb0
#define BE2_PCI_ONLINE1 0xb4
#define BE2_SET_RESET 0x80
#define BE2_MPU_IRAM_ONLINE 0x00000080
#define BE_SENSE_INFO_SIZE 258 #define BE_SENSE_INFO_SIZE 258
#define BE_ISCSI_PDU_HEADER_SIZE 64 #define BE_ISCSI_PDU_HEADER_SIZE 64
#define BE_MIN_MEM_SIZE 16384 #define BE_MIN_MEM_SIZE 16384
#define MAX_CMD_SZ 65536 #define MAX_CMD_SZ 65536
#define IIOC_SCSI_DATA 0x05 /* Write Operation */ #define IIOC_SCSI_DATA 0x05 /* Write Operation */
#define INVALID_SESS_HANDLE 0xFFFFFFFF
/**
* Adapter States
**/
#define BE_ADAPTER_LINK_UP 0x001
#define BE_ADAPTER_LINK_DOWN 0x002
#define BE_ADAPTER_PCI_ERR 0x004
#define BE_ADAPTER_CHECK_BOOT 0x008
#define BEISCSI_CLEAN_UNLOAD 0x01
#define BEISCSI_EEH_UNLOAD 0x02
#define BE_GET_BOOT_RETRIES 45
#define BE_GET_BOOT_TO 20
/** /**
* hardware needs the async PDU buffers to be posted in multiples of 8 * hardware needs the async PDU buffers to be posted in multiples of 8
* So have atleast 8 of them by default * So have atleast 8 of them by default
@ -378,7 +354,6 @@ struct beiscsi_hba {
struct sgl_handle **eh_sgl_hndl_base; struct sgl_handle **eh_sgl_hndl_base;
spinlock_t io_sgl_lock; spinlock_t io_sgl_lock;
spinlock_t mgmt_sgl_lock; spinlock_t mgmt_sgl_lock;
spinlock_t isr_lock;
spinlock_t async_pdu_lock; spinlock_t async_pdu_lock;
unsigned int age; unsigned int age;
struct list_head hba_queue; struct list_head hba_queue;
@ -390,7 +365,6 @@ struct beiscsi_hba {
struct ulp_cid_info *cid_array_info[BEISCSI_ULP_COUNT]; struct ulp_cid_info *cid_array_info[BEISCSI_ULP_COUNT];
struct iscsi_endpoint **ep_array; struct iscsi_endpoint **ep_array;
struct beiscsi_conn **conn_table; struct beiscsi_conn **conn_table;
struct iscsi_boot_kset *boot_kset;
struct Scsi_Host *shost; struct Scsi_Host *shost;
struct iscsi_iface *ipv4_iface; struct iscsi_iface *ipv4_iface;
struct iscsi_iface *ipv6_iface; struct iscsi_iface *ipv6_iface;
@ -418,12 +392,33 @@ struct beiscsi_hba {
unsigned long ulp_supported; unsigned long ulp_supported;
} fw_config; } fw_config;
unsigned int state; unsigned long state;
#define BEISCSI_HBA_ONLINE 0
#define BEISCSI_HBA_LINK_UP 1
#define BEISCSI_HBA_BOOT_FOUND 2
#define BEISCSI_HBA_BOOT_WORK 3
#define BEISCSI_HBA_UER_SUPP 4
#define BEISCSI_HBA_PCI_ERR 5
#define BEISCSI_HBA_FW_TIMEOUT 6
#define BEISCSI_HBA_IN_UE 7
#define BEISCSI_HBA_IN_TPE 8
/* error bits */
#define BEISCSI_HBA_IN_ERR ((1 << BEISCSI_HBA_PCI_ERR) | \
(1 << BEISCSI_HBA_FW_TIMEOUT) | \
(1 << BEISCSI_HBA_IN_UE) | \
(1 << BEISCSI_HBA_IN_TPE))
u8 optic_state; u8 optic_state;
int get_boot; struct delayed_work eqd_update;
bool fw_timeout; /* update EQ delay timer every 1000ms */
bool ue_detected; #define BEISCSI_EQD_UPDATE_INTERVAL 1000
struct delayed_work beiscsi_hw_check_task; struct timer_list hw_check;
/* check for UE every 1000ms */
#define BEISCSI_UE_DETECT_INTERVAL 1000
u32 ue2rp;
struct delayed_work recover_port;
struct work_struct sess_work;
bool mac_addr_set; bool mac_addr_set;
u8 mac_address[ETH_ALEN]; u8 mac_address[ETH_ALEN];
@ -435,7 +430,6 @@ struct beiscsi_hba {
struct be_ctrl_info ctrl; struct be_ctrl_info ctrl;
unsigned int generation; unsigned int generation;
unsigned int interface_handle; unsigned int interface_handle;
struct mgmt_session_info boot_sess;
struct invalidate_command_table inv_tbl[128]; struct invalidate_command_table inv_tbl[128];
struct be_aic_obj aic_obj[MAX_CPUS]; struct be_aic_obj aic_obj[MAX_CPUS];
@ -444,8 +438,29 @@ struct beiscsi_hba {
struct scatterlist *sg, struct scatterlist *sg,
uint32_t num_sg, uint32_t xferlen, uint32_t num_sg, uint32_t xferlen,
uint32_t writedir); uint32_t writedir);
struct boot_struct {
int retry;
unsigned int tag;
unsigned int s_handle;
struct be_dma_mem nonemb_cmd;
enum {
BEISCSI_BOOT_REOPEN_SESS = 1,
BEISCSI_BOOT_GET_SHANDLE,
BEISCSI_BOOT_GET_SINFO,
BEISCSI_BOOT_LOGOUT_SESS,
BEISCSI_BOOT_CREATE_KSET,
} action;
struct mgmt_session_info boot_sess;
struct iscsi_boot_kset *boot_kset;
} boot_struct;
struct work_struct boot_work;
}; };
#define beiscsi_hba_in_error(phba) ((phba)->state & BEISCSI_HBA_IN_ERR)
#define beiscsi_hba_is_online(phba) \
(!beiscsi_hba_in_error((phba)) && \
test_bit(BEISCSI_HBA_ONLINE, &phba->state))
struct beiscsi_session { struct beiscsi_session {
struct pci_pool *bhs_pool; struct pci_pool *bhs_pool;
}; };
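With the hunk above, the old int state flags become numbered bits in an unsigned long, and beiscsi_hba_is_online() reports online only while none of the error bits are set. A standalone sketch of the same pattern in plain C (shortened flag names and ordinary bit operations in place of the kernel's test_bit helpers):

```c
#include <stdio.h>

/* Bit positions, mirroring the style of the BEISCSI_HBA_* flags above. */
enum {
	HBA_ONLINE     = 0,
	HBA_LINK_UP    = 1,
	HBA_PCI_ERR    = 5,
	HBA_FW_TIMEOUT = 6,
};

#define HBA_IN_ERR_MASK ((1UL << HBA_PCI_ERR) | (1UL << HBA_FW_TIMEOUT))

/* Same idea as beiscsi_hba_is_online(): online bit set and no error bit set. */
static int hba_is_online(unsigned long state)
{
	return !(state & HBA_IN_ERR_MASK) && (state & (1UL << HBA_ONLINE));
}

int main(void)
{
	unsigned long state = 0;

	state |= 1UL << HBA_ONLINE;                      /* init finished */
	printf("online: %d\n", hba_is_online(state));    /* 1 */

	state |= 1UL << HBA_FW_TIMEOUT;                  /* firmware stopped responding */
	printf("online: %d\n", hba_is_online(state));    /* 0: error bit overrides */
	return 0;
}
```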
@ -508,6 +523,7 @@ struct beiscsi_io_task {
struct sgl_handle *psgl_handle; struct sgl_handle *psgl_handle;
struct beiscsi_conn *conn; struct beiscsi_conn *conn;
struct scsi_cmnd *scsi_cmnd; struct scsi_cmnd *scsi_cmnd;
int num_sg;
struct hwi_wrb_context *pwrb_context; struct hwi_wrb_context *pwrb_context;
unsigned int cmd_sn; unsigned int cmd_sn;
unsigned int flags; unsigned int flags;
@ -592,80 +608,81 @@ struct amap_beiscsi_offload_params {
u8 max_recv_data_segment_length[32]; u8 max_recv_data_segment_length[32];
}; };
/* void hwi_complete_drvr_msgs(struct beiscsi_conn *beiscsi_conn, struct hd_async_handle {
struct beiscsi_hba *phba, struct sol_cqe *psol);*/
struct async_pdu_handle {
struct list_head link; struct list_head link;
struct be_bus_address pa; struct be_bus_address pa;
void *pbuffer; void *pbuffer;
unsigned int consumed; u32 buffer_len;
unsigned char index; u16 index;
unsigned char is_header; u16 cri;
unsigned short cri; u8 is_header;
unsigned long buffer_len; u8 is_final;
}; };
struct hwi_async_entry { /**
struct { * This has list of async PDUs that are waiting to be processed.
unsigned char hdr_received; * Buffers live in this list for a brief duration before they get
unsigned char hdr_len; * processed and posted back to hardware.
unsigned short bytes_received; * Note that we don't really need one cri_wait_queue per async_entry.
* We need one cri_wait_queue per CRI. Its easier to manage if this
* is tagged along with the async_entry.
*/
struct hd_async_entry {
struct cri_wait_queue {
unsigned short hdr_len;
unsigned int bytes_received;
unsigned int bytes_needed; unsigned int bytes_needed;
struct list_head list; struct list_head list;
} wait_queue; } wq;
/* handles posted to FW resides here */
struct list_head header_busy_list; struct hd_async_handle *header;
struct list_head data_busy_list; struct hd_async_handle *data;
}; };
struct hwi_async_pdu_context { struct hd_async_buf_context {
struct { struct be_bus_address pa_base;
struct be_bus_address pa_base; void *va_base;
void *va_base; void *ring_base;
void *ring_base; struct hd_async_handle *handle_base;
struct async_pdu_handle *handle_base; u16 free_entries;
u32 buffer_size;
/**
* Once iSCSI layer finishes processing an async PDU, the
* handles used for the PDU are added to this list.
* They are posted back to FW in groups of 8.
*/
struct list_head free_list;
};
unsigned int host_write_ptr; /**
unsigned int ep_read_ptr; * hd_async_context is declared for each ULP supporting iSCSI function.
unsigned int writables; */
struct hd_async_context {
unsigned int free_entries; struct hd_async_buf_context async_header;
unsigned int busy_entries; struct hd_async_buf_context async_data;
u16 num_entries;
struct list_head free_list; /**
} async_header; * When unsol PDU is in, it needs to be chained till all the bytes are
* received and then processing is done. hd_async_entry is created
struct { * based on the cid_count for each ULP. When unsol PDU comes in based
struct be_bus_address pa_base; * on the conn_id it needs to be added to the correct async_entry wq.
void *va_base; * Below defined cid_to_async_cri_map is used to reterive the
void *ring_base; * async_cri_map for a particular connection.
struct async_pdu_handle *handle_base; *
* This array is initialized after beiscsi_create_wrb_rings returns.
unsigned int host_write_ptr; *
unsigned int ep_read_ptr; * - this method takes more memory space, fixed to 2K
unsigned int writables; * - any support for connections greater than this the array size needs
* to be incremented
unsigned int free_entries; */
unsigned int busy_entries;
struct list_head free_list;
} async_data;
unsigned int buffer_size;
unsigned int num_entries;
#define BE_GET_ASYNC_CRI_FROM_CID(cid) (pasync_ctx->cid_to_async_cri_map[cid]) #define BE_GET_ASYNC_CRI_FROM_CID(cid) (pasync_ctx->cid_to_async_cri_map[cid])
unsigned short cid_to_async_cri_map[BE_MAX_SESSION]; unsigned short cid_to_async_cri_map[BE_MAX_SESSION];
/** /**
* This is a varying size list! Do not add anything * This is a variable size array. Don`t add anything after this field!!
* after this entry!!
*/ */
struct hwi_async_entry *async_entry; struct hd_async_entry *async_entry;
}; };
#define PDUCQE_CODE_MASK 0x0000003F
#define PDUCQE_DPL_MASK 0xFFFF0000
#define PDUCQE_INDEX_MASK 0x0000FFFF
struct i_t_dpdu_cqe { struct i_t_dpdu_cqe {
u32 dw[4]; u32 dw[4];
} __packed; } __packed;
@ -845,7 +862,6 @@ struct wrb_handle *alloc_wrb_handle(struct beiscsi_hba *phba, unsigned int cid,
void void
free_mgmt_sgl_handle(struct beiscsi_hba *phba, struct sgl_handle *psgl_handle); free_mgmt_sgl_handle(struct beiscsi_hba *phba, struct sgl_handle *psgl_handle);
void beiscsi_process_all_cqs(struct work_struct *work);
void beiscsi_free_mgmt_task_handles(struct beiscsi_conn *beiscsi_conn, void beiscsi_free_mgmt_task_handles(struct beiscsi_conn *beiscsi_conn,
struct iscsi_task *task); struct iscsi_task *task);
@ -856,11 +872,6 @@ void hwi_ring_cq_db(struct beiscsi_hba *phba,
unsigned int beiscsi_process_cq(struct be_eq_obj *pbe_eq, int budget); unsigned int beiscsi_process_cq(struct be_eq_obj *pbe_eq, int budget);
void beiscsi_process_mcc_cq(struct beiscsi_hba *phba); void beiscsi_process_mcc_cq(struct beiscsi_hba *phba);
static inline bool beiscsi_error(struct beiscsi_hba *phba)
{
return phba->ue_detected || phba->fw_timeout;
}
struct pdu_nop_out { struct pdu_nop_out {
u32 dw[12]; u32 dw[12];
}; };
@ -1067,11 +1078,18 @@ struct hwi_context_memory {
struct be_queue_info be_cq[MAX_CPUS - 1]; struct be_queue_info be_cq[MAX_CPUS - 1];
struct be_queue_info *be_wrbq; struct be_queue_info *be_wrbq;
/**
* Create array of ULP number for below entries as DEFQ
* will be created for both ULP if iSCSI Protocol is
* loaded on both ULP.
*/
struct be_queue_info be_def_hdrq[BEISCSI_ULP_COUNT]; struct be_queue_info be_def_hdrq[BEISCSI_ULP_COUNT];
struct be_queue_info be_def_dataq[BEISCSI_ULP_COUNT]; struct be_queue_info be_def_dataq[BEISCSI_ULP_COUNT];
struct hwi_async_pdu_context *pasync_ctx[BEISCSI_ULP_COUNT]; struct hd_async_context *pasync_ctx[BEISCSI_ULP_COUNT];
}; };
void beiscsi_start_boot_work(struct beiscsi_hba *phba, unsigned int s_handle);
/* Logging related definitions */ /* Logging related definitions */
#define BEISCSI_LOG_INIT 0x0001 /* Initialization events */ #define BEISCSI_LOG_INIT 0x0001 /* Initialization events */
#define BEISCSI_LOG_MBOX 0x0002 /* Mailbox Events */ #define BEISCSI_LOG_MBOX 0x0002 /* Mailbox Events */

File diff suppressed because it is too large


@ -1,5 +1,5 @@
/** /**
* Copyright (C) 2005 - 2015 Emulex * Copyright (C) 2005 - 2016 Broadcom
* All rights reserved. * All rights reserved.
* *
* This program is free software; you can redistribute it and/or * This program is free software; you can redistribute it and/or
@ -7,10 +7,10 @@
* as published by the Free Software Foundation. The full GNU General * as published by the Free Software Foundation. The full GNU General
* Public License is included in this distribution in the file called COPYING. * Public License is included in this distribution in the file called COPYING.
* *
* Written by: Jayamohan Kallickal (jayamohan.kallickal@avagotech.com) * Written by: Jayamohan Kallickal (jayamohan.kallickal@broadcom.com)
* *
* Contact Information: * Contact Information:
* linux-drivers@avagotech.com * linux-drivers@broadcom.com
* *
* Emulex * Emulex
* 3333 Susan Street * 3333 Susan Street
@ -96,7 +96,6 @@ struct mcc_wrb {
struct mcc_wrb_payload payload; struct mcc_wrb_payload payload;
}; };
int mgmt_epfw_cleanup(struct beiscsi_hba *phba, unsigned short chute);
int mgmt_open_connection(struct beiscsi_hba *phba, int mgmt_open_connection(struct beiscsi_hba *phba,
struct sockaddr *dst_addr, struct sockaddr *dst_addr,
struct beiscsi_endpoint *beiscsi_ep, struct beiscsi_endpoint *beiscsi_ep,
@ -266,50 +265,41 @@ struct beiscsi_endpoint {
u16 cid_vld; u16 cid_vld;
}; };
int mgmt_get_fw_config(struct be_ctrl_info *ctrl,
struct beiscsi_hba *phba);
int mgmt_get_port_name(struct be_ctrl_info *ctrl,
struct beiscsi_hba *phba);
unsigned int mgmt_invalidate_connection(struct beiscsi_hba *phba, unsigned int mgmt_invalidate_connection(struct beiscsi_hba *phba,
struct beiscsi_endpoint *beiscsi_ep, struct beiscsi_endpoint *beiscsi_ep,
unsigned short cid, unsigned short cid,
unsigned short issue_reset, unsigned short issue_reset,
unsigned short savecfg_flag); unsigned short savecfg_flag);
int mgmt_set_ip(struct beiscsi_hba *phba, int beiscsi_if_en_dhcp(struct beiscsi_hba *phba, u32 ip_type);
struct iscsi_iface_param_info *ip_param,
struct iscsi_iface_param_info *subnet_param,
uint32_t boot_proto);
unsigned int mgmt_get_boot_target(struct beiscsi_hba *phba); int beiscsi_if_en_static(struct beiscsi_hba *phba, u32 ip_type,
u8 *ip, u8 *subnet);
unsigned int mgmt_reopen_session(struct beiscsi_hba *phba, int beiscsi_if_set_gw(struct beiscsi_hba *phba, u32 ip_type, u8 *gw);
unsigned int reopen_type,
unsigned sess_handle);
unsigned int mgmt_get_session_info(struct beiscsi_hba *phba, int beiscsi_if_get_gw(struct beiscsi_hba *phba, u32 ip_type,
u32 boot_session_handle, struct be_cmd_get_def_gateway_resp *resp);
struct be_dma_mem *nonemb_cmd);
int mgmt_get_nic_conf(struct beiscsi_hba *phba, int mgmt_get_nic_conf(struct beiscsi_hba *phba,
struct be_cmd_get_nic_conf_resp *mac); struct be_cmd_get_nic_conf_resp *mac);
int mgmt_get_if_info(struct beiscsi_hba *phba, int ip_type, int beiscsi_if_get_info(struct beiscsi_hba *phba, int ip_type,
struct be_cmd_get_if_info_resp **if_info); struct be_cmd_get_if_info_resp **if_info);
int mgmt_get_gateway(struct beiscsi_hba *phba, int ip_type, unsigned int beiscsi_if_get_handle(struct beiscsi_hba *phba);
struct be_cmd_get_def_gateway_resp *gateway);
int mgmt_set_gateway(struct beiscsi_hba *phba, int beiscsi_if_set_vlan(struct beiscsi_hba *phba, uint16_t vlan_tag);
struct iscsi_iface_param_info *gateway_param);
int be_mgmt_get_boot_shandle(struct beiscsi_hba *phba, unsigned int beiscsi_boot_logout_sess(struct beiscsi_hba *phba);
unsigned int *s_handle);
unsigned int mgmt_get_all_if_id(struct beiscsi_hba *phba); unsigned int beiscsi_boot_reopen_sess(struct beiscsi_hba *phba);
int mgmt_set_vlan(struct beiscsi_hba *phba, uint16_t vlan_tag); unsigned int beiscsi_boot_get_sinfo(struct beiscsi_hba *phba);
unsigned int __beiscsi_boot_get_shandle(struct beiscsi_hba *phba, int async);
int beiscsi_boot_get_shandle(struct beiscsi_hba *phba, unsigned int *s_handle);
ssize_t beiscsi_drvr_ver_disp(struct device *dev, ssize_t beiscsi_drvr_ver_disp(struct device *dev,
struct device_attribute *attr, char *buf); struct device_attribute *attr, char *buf);
@ -339,7 +329,6 @@ void beiscsi_offload_cxn_v2(struct beiscsi_offload_params *params,
struct wrb_handle *pwrb_handle, struct wrb_handle *pwrb_handle,
struct hwi_wrb_context *pwrb_context); struct hwi_wrb_context *pwrb_context);
void beiscsi_ue_detect(struct beiscsi_hba *phba);
int be_cmd_modify_eq_delay(struct beiscsi_hba *phba, int be_cmd_modify_eq_delay(struct beiscsi_hba *phba,
struct be_set_eqd *, int num); struct be_set_eqd *, int num);


@ -5827,13 +5827,13 @@ bfa_fcs_lport_get_rport_max_speed(bfa_fcs_lport_t *port)
bfa_port_speed_t max_speed = 0; bfa_port_speed_t max_speed = 0;
struct bfa_port_attr_s port_attr; struct bfa_port_attr_s port_attr;
bfa_port_speed_t port_speed, rport_speed; bfa_port_speed_t port_speed, rport_speed;
bfa_boolean_t trl_enabled = bfa_fcport_is_ratelim(port->fcs->bfa); bfa_boolean_t trl_enabled;
if (port == NULL) if (port == NULL)
return 0; return 0;
fcs = port->fcs; fcs = port->fcs;
trl_enabled = bfa_fcport_is_ratelim(port->fcs->bfa);
/* Get Physical port's current speed */ /* Get Physical port's current speed */
bfa_fcport_get_attr(port->fcs->bfa, &port_attr); bfa_fcport_get_attr(port->fcs->bfa, &port_attr);
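The bfa hunk fixes a dereference-before-NULL-check: trl_enabled was initialised from port->fcs->bfa in its declaration, before the if (port == NULL) guard could reject a NULL port, which made the guard useless. A stand-alone sketch of the hazard and the fix, using simplified stand-in types rather than the real bfa structures:

#include <stddef.h>
#include <stdio.h>

struct bfa { int ratelim; };
struct fcs { struct bfa *bfa; };
struct port { struct fcs *fcs; };

static int is_ratelim(struct bfa *bfa) { return bfa->ratelim; }

static int rport_max_speed(struct port *port)
{
	int trl_enabled;	/* was: int trl_enabled = is_ratelim(port->fcs->bfa); */

	if (port == NULL)	/* ...which dereferenced port before this check could run */
		return 0;

	trl_enabled = is_ratelim(port->fcs->bfa);
	return trl_enabled ? 4 : 16;
}

int main(void)
{
	printf("%d\n", rport_max_speed(NULL));	/* safe only because the call now sits below the check */
	return 0;
}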


@ -254,7 +254,7 @@ int bnx2fc_send_rls(struct bnx2fc_rport *tgt, struct fc_frame *fp)
return rc; return rc;
} }
void bnx2fc_srr_compl(struct bnx2fc_els_cb_arg *cb_arg) static void bnx2fc_srr_compl(struct bnx2fc_els_cb_arg *cb_arg)
{ {
struct bnx2fc_mp_req *mp_req; struct bnx2fc_mp_req *mp_req;
struct fc_frame_header *fc_hdr, *fh; struct fc_frame_header *fc_hdr, *fh;
@ -364,7 +364,7 @@ srr_compl_done:
kref_put(&orig_io_req->refcount, bnx2fc_cmd_release); kref_put(&orig_io_req->refcount, bnx2fc_cmd_release);
} }
void bnx2fc_rec_compl(struct bnx2fc_els_cb_arg *cb_arg) static void bnx2fc_rec_compl(struct bnx2fc_els_cb_arg *cb_arg)
{ {
struct bnx2fc_cmd *orig_io_req, *new_io_req; struct bnx2fc_cmd *orig_io_req, *new_io_req;
struct bnx2fc_cmd *rec_req; struct bnx2fc_cmd *rec_req;


@ -625,7 +625,7 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
* *
* @arg: ptr to bnx2fc_percpu_info structure * @arg: ptr to bnx2fc_percpu_info structure
*/ */
int bnx2fc_percpu_io_thread(void *arg) static int bnx2fc_percpu_io_thread(void *arg)
{ {
struct bnx2fc_percpu_s *p = arg; struct bnx2fc_percpu_s *p = arg;
struct bnx2fc_work *work, *tmp; struct bnx2fc_work *work, *tmp;
@ -1410,9 +1410,10 @@ bind_err:
return NULL; return NULL;
} }
struct bnx2fc_interface *bnx2fc_interface_create(struct bnx2fc_hba *hba, static struct bnx2fc_interface *
struct net_device *netdev, bnx2fc_interface_create(struct bnx2fc_hba *hba,
enum fip_state fip_mode) struct net_device *netdev,
enum fip_state fip_mode)
{ {
struct fcoe_ctlr_device *ctlr_dev; struct fcoe_ctlr_device *ctlr_dev;
struct bnx2fc_interface *interface; struct bnx2fc_interface *interface;
@ -2765,8 +2766,7 @@ static void __exit bnx2fc_mod_exit(void)
* held. * held.
*/ */
mutex_lock(&bnx2fc_dev_lock); mutex_lock(&bnx2fc_dev_lock);
list_splice(&adapter_list, &to_be_deleted); list_splice_init(&adapter_list, &to_be_deleted);
INIT_LIST_HEAD(&adapter_list);
adapter_count = 0; adapter_count = 0;
mutex_unlock(&bnx2fc_dev_lock); mutex_unlock(&bnx2fc_dev_lock);


@ -994,7 +994,7 @@ void bnx2fc_arm_cq(struct bnx2fc_rport *tgt)
} }
struct bnx2fc_work *bnx2fc_alloc_work(struct bnx2fc_rport *tgt, u16 wqe) static struct bnx2fc_work *bnx2fc_alloc_work(struct bnx2fc_rport *tgt, u16 wqe)
{ {
struct bnx2fc_work *work; struct bnx2fc_work *work;
work = kzalloc(sizeof(struct bnx2fc_work), GFP_ATOMIC); work = kzalloc(sizeof(struct bnx2fc_work), GFP_ATOMIC);


@ -1079,7 +1079,7 @@ int bnx2fc_eh_device_reset(struct scsi_cmnd *sc_cmd)
return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET); return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET);
} }
int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req) static int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req)
{ {
struct bnx2fc_rport *tgt = io_req->tgt; struct bnx2fc_rport *tgt = io_req->tgt;
int rc = SUCCESS; int rc = SUCCESS;


@ -1721,7 +1721,7 @@ out:
/* Wake up waiting threads */ /* Wake up waiting threads */
csio_scsi_cmnd(req) = NULL; csio_scsi_cmnd(req) = NULL;
complete_all(&req->cmplobj); complete(&req->cmplobj);
} }
/* /*
@ -1945,6 +1945,7 @@ csio_eh_abort_handler(struct scsi_cmnd *cmnd)
ready = csio_is_lnode_ready(ln); ready = csio_is_lnode_ready(ln);
tmo = CSIO_SCSI_ABRT_TMO_MS; tmo = CSIO_SCSI_ABRT_TMO_MS;
reinit_completion(&ioreq->cmplobj);
spin_lock_irq(&hw->lock); spin_lock_irq(&hw->lock);
rv = csio_do_abrt_cls(hw, ioreq, (ready ? SCSI_ABORT : SCSI_CLOSE)); rv = csio_do_abrt_cls(hw, ioreq, (ready ? SCSI_ABORT : SCSI_CLOSE));
spin_unlock_irq(&hw->lock); spin_unlock_irq(&hw->lock);
@ -1964,8 +1965,6 @@ csio_eh_abort_handler(struct scsi_cmnd *cmnd)
goto inval_scmnd; goto inval_scmnd;
} }
/* Wait for completion */
init_completion(&ioreq->cmplobj);
wait_for_completion_timeout(&ioreq->cmplobj, msecs_to_jiffies(tmo)); wait_for_completion_timeout(&ioreq->cmplobj, msecs_to_jiffies(tmo));
/* FW didn't respond to abort within our timeout */ /* FW didn't respond to abort within our timeout */


@ -822,17 +822,6 @@ static void notify_shutdown(struct cxlflash_cfg *cfg, bool wait)
} }
} }
/**
* cxlflash_shutdown() - shutdown handler
* @pdev: PCI device associated with the host.
*/
static void cxlflash_shutdown(struct pci_dev *pdev)
{
struct cxlflash_cfg *cfg = pci_get_drvdata(pdev);
notify_shutdown(cfg, false);
}
/** /**
* cxlflash_remove() - PCI entry point to tear down host * cxlflash_remove() - PCI entry point to tear down host
* @pdev: PCI device associated with the host. * @pdev: PCI device associated with the host.
@ -844,6 +833,11 @@ static void cxlflash_remove(struct pci_dev *pdev)
struct cxlflash_cfg *cfg = pci_get_drvdata(pdev); struct cxlflash_cfg *cfg = pci_get_drvdata(pdev);
ulong lock_flags; ulong lock_flags;
if (!pci_is_enabled(pdev)) {
pr_debug("%s: Device is disabled\n", __func__);
return;
}
/* If a Task Management Function is active, wait for it to complete /* If a Task Management Function is active, wait for it to complete
* before continuing with remove. * before continuing with remove.
*/ */
@ -1046,6 +1040,8 @@ static int wait_port_online(__be64 __iomem *fc_regs, u32 delay_us, u32 nretry)
do { do {
msleep(delay_us / 1000); msleep(delay_us / 1000);
status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]); status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]);
if (status == U64_MAX)
nretry /= 2;
} while ((status & FC_MTIP_STATUS_MASK) != FC_MTIP_STATUS_ONLINE && } while ((status & FC_MTIP_STATUS_MASK) != FC_MTIP_STATUS_ONLINE &&
nretry--); nretry--);
@ -1077,6 +1073,8 @@ static int wait_port_offline(__be64 __iomem *fc_regs, u32 delay_us, u32 nretry)
do { do {
msleep(delay_us / 1000); msleep(delay_us / 1000);
status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]); status = readq_be(&fc_regs[FC_MTIP_STATUS / 8]);
if (status == U64_MAX)
nretry /= 2;
} while ((status & FC_MTIP_STATUS_MASK) != FC_MTIP_STATUS_OFFLINE && } while ((status & FC_MTIP_STATUS_MASK) != FC_MTIP_STATUS_OFFLINE &&
nretry--); nretry--);
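The added U64_MAX test covers reads that come back as all ones because the adapter has stopped responding on the bus (for instance while EEH recovery is under way); halving the remaining retries keeps the poll from burning the full timeout against a dead register. A stand-alone sketch of that loop shape, with read_status() standing in for the readq_be() of the port status register:

#include <stdint.h>
#include <stdio.h>

#define STATUS_MASK	0xFFULL
#define STATUS_OFFLINE	0x02ULL

/* Stand-in for the MMIO read; an unreachable adapter reads back as all ones. */
static uint64_t read_status(void)
{
	return UINT64_MAX;
}

static int wait_offline(unsigned int nretry)
{
	uint64_t status;

	do {
		/* the driver sleeps delay_us between reads; omitted here */
		status = read_status();
		if (status == UINT64_MAX)
			nretry /= 2;	/* device unreachable: give up sooner */
	} while ((status & STATUS_MASK) != STATUS_OFFLINE && nretry--);

	return (status & STATUS_MASK) == STATUS_OFFLINE;
}

int main(void)
{
	printf("went offline: %d\n", wait_offline(10));
	return 0;
}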
@ -1095,42 +1093,25 @@ static int wait_port_offline(__be64 __iomem *fc_regs, u32 delay_us, u32 nretry)
* online. This toggling action can cause this routine to delay up to a few * online. This toggling action can cause this routine to delay up to a few
* seconds. When configured to use the internal LUN feature of the AFU, a * seconds. When configured to use the internal LUN feature of the AFU, a
* failure to come online is overridden. * failure to come online is overridden.
*
* Return:
* 0 when the WWPN is successfully written and the port comes back online
* -1 when the port fails to go offline or come back up online
*/ */
static int afu_set_wwpn(struct afu *afu, int port, __be64 __iomem *fc_regs, static void afu_set_wwpn(struct afu *afu, int port, __be64 __iomem *fc_regs,
u64 wwpn) u64 wwpn)
{ {
int rc = 0;
set_port_offline(fc_regs); set_port_offline(fc_regs);
if (!wait_port_offline(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US, if (!wait_port_offline(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US,
FC_PORT_STATUS_RETRY_CNT)) { FC_PORT_STATUS_RETRY_CNT)) {
pr_debug("%s: wait on port %d to go offline timed out\n", pr_debug("%s: wait on port %d to go offline timed out\n",
__func__, port); __func__, port);
rc = -1; /* but continue on to leave the port back online */
} }
if (rc == 0) writeq_be(wwpn, &fc_regs[FC_PNAME / 8]);
writeq_be(wwpn, &fc_regs[FC_PNAME / 8]);
/* Always return success after programming WWPN */
rc = 0;
set_port_online(fc_regs); set_port_online(fc_regs);
if (!wait_port_online(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US, if (!wait_port_online(fc_regs, FC_PORT_STATUS_RETRY_INTERVAL_US,
FC_PORT_STATUS_RETRY_CNT)) { FC_PORT_STATUS_RETRY_CNT)) {
pr_err("%s: wait on port %d to go online timed out\n", pr_debug("%s: wait on port %d to go online timed out\n",
__func__, port); __func__, port);
} }
pr_debug("%s: returning rc=%d\n", __func__, rc);
return rc;
} }
/** /**
@ -1187,7 +1168,7 @@ static const struct asyc_intr_info ainfo[] = {
{SISL_ASTATUS_FC0_LOGI_F, "login failed", 0, CLR_FC_ERROR}, {SISL_ASTATUS_FC0_LOGI_F, "login failed", 0, CLR_FC_ERROR},
{SISL_ASTATUS_FC0_LOGI_S, "login succeeded", 0, SCAN_HOST}, {SISL_ASTATUS_FC0_LOGI_S, "login succeeded", 0, SCAN_HOST},
{SISL_ASTATUS_FC0_LINK_DN, "link down", 0, 0}, {SISL_ASTATUS_FC0_LINK_DN, "link down", 0, 0},
{SISL_ASTATUS_FC0_LINK_UP, "link up", 0, SCAN_HOST}, {SISL_ASTATUS_FC0_LINK_UP, "link up", 0, 0},
{SISL_ASTATUS_FC1_OTHER, "other error", 1, CLR_FC_ERROR | LINK_RESET}, {SISL_ASTATUS_FC1_OTHER, "other error", 1, CLR_FC_ERROR | LINK_RESET},
{SISL_ASTATUS_FC1_LOGO, "target initiated LOGO", 1, 0}, {SISL_ASTATUS_FC1_LOGO, "target initiated LOGO", 1, 0},
{SISL_ASTATUS_FC1_CRC_T, "CRC threshold exceeded", 1, LINK_RESET}, {SISL_ASTATUS_FC1_CRC_T, "CRC threshold exceeded", 1, LINK_RESET},
@ -1195,7 +1176,7 @@ static const struct asyc_intr_info ainfo[] = {
{SISL_ASTATUS_FC1_LOGI_F, "login failed", 1, CLR_FC_ERROR}, {SISL_ASTATUS_FC1_LOGI_F, "login failed", 1, CLR_FC_ERROR},
{SISL_ASTATUS_FC1_LOGI_S, "login succeeded", 1, SCAN_HOST}, {SISL_ASTATUS_FC1_LOGI_S, "login succeeded", 1, SCAN_HOST},
{SISL_ASTATUS_FC1_LINK_DN, "link down", 1, 0}, {SISL_ASTATUS_FC1_LINK_DN, "link down", 1, 0},
{SISL_ASTATUS_FC1_LINK_UP, "link up", 1, SCAN_HOST}, {SISL_ASTATUS_FC1_LINK_UP, "link up", 1, 0},
{0x0, "", 0, 0} /* terminator */ {0x0, "", 0, 0} /* terminator */
}; };
@ -1631,15 +1612,10 @@ static int init_global(struct cxlflash_cfg *cfg)
[FC_CRC_THRESH / 8]); [FC_CRC_THRESH / 8]);
/* Set WWPNs. If already programmed, wwpn[i] is 0 */ /* Set WWPNs. If already programmed, wwpn[i] is 0 */
if (wwpn[i] != 0 && if (wwpn[i] != 0)
afu_set_wwpn(afu, i, afu_set_wwpn(afu, i,
&afu->afu_map->global.fc_regs[i][0], &afu->afu_map->global.fc_regs[i][0],
wwpn[i])) { wwpn[i]);
dev_err(dev, "%s: failed to set WWPN on port %d\n",
__func__, i);
rc = -EIO;
goto out;
}
/* Programming WWPN back to back causes additional /* Programming WWPN back to back causes additional
* offline/online transitions and a PLOGI * offline/online transitions and a PLOGI
*/ */
@ -2048,6 +2024,11 @@ retry:
* cxlflash_eh_host_reset_handler() - reset the host adapter * cxlflash_eh_host_reset_handler() - reset the host adapter
* @scp: SCSI command from stack identifying host. * @scp: SCSI command from stack identifying host.
* *
* Following a reset, the state is evaluated again in case an EEH occurred
* during the reset. In such a scenario, the host reset will either yield
* until the EEH recovery is complete or return success or failure based
* upon the current device state.
*
* Return: * Return:
* SUCCESS as defined in scsi/scsi.h * SUCCESS as defined in scsi/scsi.h
* FAILED as defined in scsi/scsi.h * FAILED as defined in scsi/scsi.h
@ -2080,7 +2061,8 @@ static int cxlflash_eh_host_reset_handler(struct scsi_cmnd *scp)
} else } else
cfg->state = STATE_NORMAL; cfg->state = STATE_NORMAL;
wake_up_all(&cfg->reset_waitq); wake_up_all(&cfg->reset_waitq);
break; ssleep(1);
/* fall through */
case STATE_RESET: case STATE_RESET:
wait_event(cfg->reset_waitq, cfg->state != STATE_RESET); wait_event(cfg->reset_waitq, cfg->state != STATE_RESET);
if (cfg->state == STATE_NORMAL) if (cfg->state == STATE_NORMAL)
@ -2596,6 +2578,9 @@ out_remove:
* @pdev: PCI device struct. * @pdev: PCI device struct.
* @state: PCI channel state. * @state: PCI channel state.
* *
* When an EEH occurs during an active reset, wait until the reset is
* complete and then take action based upon the device state.
*
* Return: PCI_ERS_RESULT_NEED_RESET or PCI_ERS_RESULT_DISCONNECT * Return: PCI_ERS_RESULT_NEED_RESET or PCI_ERS_RESULT_DISCONNECT
*/ */
static pci_ers_result_t cxlflash_pci_error_detected(struct pci_dev *pdev, static pci_ers_result_t cxlflash_pci_error_detected(struct pci_dev *pdev,
@ -2609,6 +2594,10 @@ static pci_ers_result_t cxlflash_pci_error_detected(struct pci_dev *pdev,
switch (state) { switch (state) {
case pci_channel_io_frozen: case pci_channel_io_frozen:
wait_event(cfg->reset_waitq, cfg->state != STATE_RESET);
if (cfg->state == STATE_FAILTERM)
return PCI_ERS_RESULT_DISCONNECT;
cfg->state = STATE_RESET; cfg->state = STATE_RESET;
scsi_block_requests(cfg->host); scsi_block_requests(cfg->host);
drain_ioctls(cfg); drain_ioctls(cfg);
@ -2685,7 +2674,7 @@ static struct pci_driver cxlflash_driver = {
.id_table = cxlflash_pci_table, .id_table = cxlflash_pci_table,
.probe = cxlflash_probe, .probe = cxlflash_probe,
.remove = cxlflash_remove, .remove = cxlflash_remove,
.shutdown = cxlflash_shutdown, .shutdown = cxlflash_remove,
.err_handler = &cxlflash_err_handler, .err_handler = &cxlflash_err_handler,
}; };


@ -709,14 +709,13 @@ int cxlflash_disk_release(struct scsi_device *sdev,
* @cfg: Internal structure associated with the host. * @cfg: Internal structure associated with the host.
* @ctxi: Context to release. * @ctxi: Context to release.
* *
* This routine is safe to be called with a non-initialized context * This routine is safe to be called with a non-initialized context.
* and is tolerant of being called with the context's mutex held (it * Also note that the routine conditionally checks for the existence
* will be unlocked if necessary before freeing). Also note that the * of the context control map before clearing the RHT registers and
* routine conditionally checks for the existence of the context control * context capabilities because it is possible to destroy a context
* map before clearing the RHT registers and context capabilities because * while the context is in the error state (previous mapping was
* it is possible to destroy a context while the context is in the error * removed [so there is no need to worry about clearing] and context
* state (previous mapping was removed [so there is no need to worry about * is waiting for a new mapping).
* clearing] and context is waiting for a new mapping).
*/ */
static void destroy_context(struct cxlflash_cfg *cfg, static void destroy_context(struct cxlflash_cfg *cfg,
struct ctx_info *ctxi) struct ctx_info *ctxi)
@ -732,9 +731,6 @@ static void destroy_context(struct cxlflash_cfg *cfg,
writeq_be(0, &ctxi->ctrl_map->rht_cnt_id); writeq_be(0, &ctxi->ctrl_map->rht_cnt_id);
writeq_be(0, &ctxi->ctrl_map->ctx_cap); writeq_be(0, &ctxi->ctrl_map->ctx_cap);
} }
if (mutex_is_locked(&ctxi->mutex))
mutex_unlock(&ctxi->mutex);
} }
/* Free memory associated with context */ /* Free memory associated with context */
@ -792,32 +788,58 @@ err:
* @cfg: Internal structure associated with the host. * @cfg: Internal structure associated with the host.
* @ctx: Previously obtained CXL context reference. * @ctx: Previously obtained CXL context reference.
* @ctxid: Previously obtained process element associated with CXL context. * @ctxid: Previously obtained process element associated with CXL context.
* @adap_fd: Previously obtained adapter fd associated with CXL context.
* @file: Previously obtained file associated with CXL context. * @file: Previously obtained file associated with CXL context.
* @perms: User-specified permissions. * @perms: User-specified permissions.
*
* Upon return, the context is marked as initialized and the context's mutex
* is locked.
*/ */
static void init_context(struct ctx_info *ctxi, struct cxlflash_cfg *cfg, static void init_context(struct ctx_info *ctxi, struct cxlflash_cfg *cfg,
struct cxl_context *ctx, int ctxid, int adap_fd, struct cxl_context *ctx, int ctxid, struct file *file,
struct file *file, u32 perms) u32 perms)
{ {
struct afu *afu = cfg->afu; struct afu *afu = cfg->afu;
ctxi->rht_perms = perms; ctxi->rht_perms = perms;
ctxi->ctrl_map = &afu->afu_map->ctrls[ctxid].ctrl; ctxi->ctrl_map = &afu->afu_map->ctrls[ctxid].ctrl;
ctxi->ctxid = ENCODE_CTXID(ctxi, ctxid); ctxi->ctxid = ENCODE_CTXID(ctxi, ctxid);
ctxi->lfd = adap_fd;
ctxi->pid = current->tgid; /* tgid = pid */ ctxi->pid = current->tgid; /* tgid = pid */
ctxi->ctx = ctx; ctxi->ctx = ctx;
ctxi->cfg = cfg;
ctxi->file = file; ctxi->file = file;
ctxi->initialized = true; ctxi->initialized = true;
mutex_init(&ctxi->mutex); mutex_init(&ctxi->mutex);
kref_init(&ctxi->kref);
INIT_LIST_HEAD(&ctxi->luns); INIT_LIST_HEAD(&ctxi->luns);
INIT_LIST_HEAD(&ctxi->list); /* initialize for list_empty() */ INIT_LIST_HEAD(&ctxi->list); /* initialize for list_empty() */
}
/**
* remove_context() - context kref release handler
* @kref: Kernel reference associated with context to be removed.
*
* When a context no longer has any references it can safely be removed
* from global access and destroyed. Note that it is assumed the thread
* relinquishing access to the context holds its mutex.
*/
static void remove_context(struct kref *kref)
{
struct ctx_info *ctxi = container_of(kref, struct ctx_info, kref);
struct cxlflash_cfg *cfg = ctxi->cfg;
u64 ctxid = DECODE_CTXID(ctxi->ctxid);
/* Remove context from table/error list */
WARN_ON(!mutex_is_locked(&ctxi->mutex));
ctxi->unavail = true;
mutex_unlock(&ctxi->mutex);
mutex_lock(&cfg->ctx_tbl_list_mutex);
mutex_lock(&ctxi->mutex); mutex_lock(&ctxi->mutex);
if (!list_empty(&ctxi->list))
list_del(&ctxi->list);
cfg->ctx_tbl[ctxid] = NULL;
mutex_unlock(&cfg->ctx_tbl_list_mutex);
mutex_unlock(&ctxi->mutex);
/* Context now completely uncoupled/unreachable */
destroy_context(cfg, ctxi);
} }
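remove_context() is the kref release handler: the final kref_put() on a context unhooks it from the table and error list and then destroys it, and the unlock/relock sequence keeps ctx_tbl_list_mutex from being acquired while the context mutex is already held, matching the ordering used elsewhere in the driver. Stripped of the locking, the reference counting is the ordinary pattern; a minimal user-space sketch with a plain counter standing in for struct kref:

#include <stdio.h>
#include <stdlib.h>

struct ctx {
	int refcount;
	/* ...resources owned by the context... */
};

static void ctx_release(struct ctx *c)
{
	/* last reference dropped: analogous to remove_context() above */
	printf("context destroyed\n");
	free(c);
}

static void ctx_get(struct ctx *c) { c->refcount++; }

static void ctx_put(struct ctx *c)
{
	if (--c->refcount == 0)
		ctx_release(c);
}

int main(void)
{
	struct ctx *c = calloc(1, sizeof(*c));

	if (!c)
		return 1;
	c->refcount = 1;	/* kref_init() in init_context() */
	ctx_get(c);		/* kref_get() when a second LUN reuses the context */
	ctx_put(c);		/* first detach */
	ctx_put(c);		/* last detach: the release handler runs */
	return 0;
}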
/** /**
@ -845,7 +867,6 @@ static int _cxlflash_disk_detach(struct scsi_device *sdev,
int i; int i;
int rc = 0; int rc = 0;
int lfd;
u64 ctxid = DECODE_CTXID(detach->context_id), u64 ctxid = DECODE_CTXID(detach->context_id),
rctxid = detach->context_id; rctxid = detach->context_id;
@ -887,40 +908,13 @@ static int _cxlflash_disk_detach(struct scsi_device *sdev,
break; break;
} }
/* Tear down context following last LUN cleanup */ /*
if (list_empty(&ctxi->luns)) { * Release the context reference and the sdev reference that
ctxi->unavail = true; * bound this LUN to the context.
mutex_unlock(&ctxi->mutex); */
mutex_lock(&cfg->ctx_tbl_list_mutex); if (kref_put(&ctxi->kref, remove_context))
mutex_lock(&ctxi->mutex);
/* Might not have been in error list so conditionally remove */
if (!list_empty(&ctxi->list))
list_del(&ctxi->list);
cfg->ctx_tbl[ctxid] = NULL;
mutex_unlock(&cfg->ctx_tbl_list_mutex);
mutex_unlock(&ctxi->mutex);
lfd = ctxi->lfd;
destroy_context(cfg, ctxi);
ctxi = NULL;
put_ctx = false; put_ctx = false;
/*
* As a last step, clean up external resources when not
* already on an external cleanup thread, i.e.: close(adap_fd).
*
* NOTE: this will free up the context from the CXL services,
* allowing it to dole out the same context_id on a future
* (or even currently in-flight) disk_attach operation.
*/
if (lfd != -1)
sys_close(lfd);
}
/* Release the sdev reference that bound this LUN to the context */
scsi_device_put(sdev); scsi_device_put(sdev);
out: out:
if (put_ctx) if (put_ctx)
put_context(ctxi); put_context(ctxi);
@ -941,34 +935,18 @@ static int cxlflash_disk_detach(struct scsi_device *sdev,
* *
* This routine is the release handler for the fops registered with * This routine is the release handler for the fops registered with
* the CXL services on an initial attach for a context. It is called * the CXL services on an initial attach for a context. It is called
* when a close is performed on the adapter file descriptor returned * when a close (explicitly by the user or as part of a process tear
* to the user. Programmatically, the user is not required to perform * down) is performed on the adapter file descriptor returned to the
* the close, as it is handled internally via the detach ioctl when * user. The user should be aware that explicitly performing a close
* a context is being removed. Note that nothing prevents the user * is considered catastrophic and subsequent usage of the superpipe API
* from performing a close, but the user should be aware that doing * with previously saved off tokens will fail.
* so is considered catastrophic and subsequent usage of the superpipe
* API with previously saved off tokens will fail.
* *
* When initiated from an external close (either by the user or via * This routine derives the context reference and calls detach for
* a process tear down), the routine derives the context reference * each LUN associated with the context.The final detach operation
* and calls detach for each LUN associated with the context. The * each LUN associated with the context. The final detach operation
* final detach operation will cause the context itself to be freed. * CXL process element (context id) lookup fails (a case that should
* Note that the saved off lfd is reset prior to calling detach to * theoretically never occur), every call into this routine results
* signify that the final detach should not perform a close. * in a complete freeing of a context.
*
* When initiated from a detach operation as part of the tear down
* of a context, the context is first completely freed and then the
* close is performed. This routine will fail to derive the context
* reference (due to the context having already been freed) and then
* call into the CXL release entry point.
*
* Thus, with exception to when the CXL process element (context id)
* lookup fails (a case that should theoretically never occur), every
* call into this routine results in a complete freeing of a context.
*
* As part of the detach, all per-context resources associated with the LUN
* are cleaned up. When detaching the last LUN for a context, the context
* itself is cleaned up and released.
* *
* Return: 0 on success * Return: 0 on success
*/ */
@ -1006,11 +984,8 @@ static int cxlflash_cxl_release(struct inode *inode, struct file *file)
goto out; goto out;
} }
dev_dbg(dev, "%s: close(%d) for context %d\n", dev_dbg(dev, "%s: close for context %d\n", __func__, ctxid);
__func__, ctxi->lfd, ctxid);
/* Reset the file descriptor to indicate we're on a close() thread */
ctxi->lfd = -1;
detach.context_id = ctxi->ctxid; detach.context_id = ctxi->ctxid;
list_for_each_entry_safe(lun_access, t, &ctxi->luns, list) list_for_each_entry_safe(lun_access, t, &ctxi->luns, list)
_cxlflash_disk_detach(lun_access->sdev, ctxi, &detach); _cxlflash_disk_detach(lun_access->sdev, ctxi, &detach);
@ -1110,8 +1085,7 @@ static int cxlflash_mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
goto err; goto err;
} }
dev_dbg(dev, "%s: fault(%d) for context %d\n", dev_dbg(dev, "%s: fault for context %d\n", __func__, ctxid);
__func__, ctxi->lfd, ctxid);
if (likely(!ctxi->err_recovery_active)) { if (likely(!ctxi->err_recovery_active)) {
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
@ -1186,8 +1160,7 @@ static int cxlflash_cxl_mmap(struct file *file, struct vm_area_struct *vma)
goto out; goto out;
} }
dev_dbg(dev, "%s: mmap(%d) for context %d\n", dev_dbg(dev, "%s: mmap for context %d\n", __func__, ctxid);
__func__, ctxi->lfd, ctxid);
rc = cxl_fd_mmap(file, vma); rc = cxl_fd_mmap(file, vma);
if (likely(!rc)) { if (likely(!rc)) {
@ -1377,12 +1350,12 @@ static int cxlflash_disk_attach(struct scsi_device *sdev,
lun_access->lli = lli; lun_access->lli = lli;
lun_access->sdev = sdev; lun_access->sdev = sdev;
/* Non-NULL context indicates reuse */ /* Non-NULL context indicates reuse (another context reference) */
if (ctxi) { if (ctxi) {
dev_dbg(dev, "%s: Reusing context for LUN! (%016llX)\n", dev_dbg(dev, "%s: Reusing context for LUN! (%016llX)\n",
__func__, rctxid); __func__, rctxid);
kref_get(&ctxi->kref);
list_add(&lun_access->list, &ctxi->luns); list_add(&lun_access->list, &ctxi->luns);
fd = ctxi->lfd;
goto out_attach; goto out_attach;
} }
@ -1430,7 +1403,7 @@ static int cxlflash_disk_attach(struct scsi_device *sdev,
perms = SISL_RHT_PERM(attach->hdr.flags + 1); perms = SISL_RHT_PERM(attach->hdr.flags + 1);
/* Context mutex is locked upon return */ /* Context mutex is locked upon return */
init_context(ctxi, cfg, ctx, ctxid, fd, file, perms); init_context(ctxi, cfg, ctx, ctxid, file, perms);
rc = afu_attach(cfg, ctxi); rc = afu_attach(cfg, ctxi);
if (unlikely(rc)) { if (unlikely(rc)) {
@ -1445,7 +1418,6 @@ static int cxlflash_disk_attach(struct scsi_device *sdev,
* knows about us yet; we can be the only one holding our mutex. * knows about us yet; we can be the only one holding our mutex.
*/ */
list_add(&lun_access->list, &ctxi->luns); list_add(&lun_access->list, &ctxi->luns);
mutex_unlock(&ctxi->mutex);
mutex_lock(&cfg->ctx_tbl_list_mutex); mutex_lock(&cfg->ctx_tbl_list_mutex);
mutex_lock(&ctxi->mutex); mutex_lock(&ctxi->mutex);
cfg->ctx_tbl[ctxid] = ctxi; cfg->ctx_tbl[ctxid] = ctxi;
@ -1453,7 +1425,11 @@ static int cxlflash_disk_attach(struct scsi_device *sdev,
fd_install(fd, file); fd_install(fd, file);
out_attach: out_attach:
attach->hdr.return_flags = 0; if (fd != -1)
attach->hdr.return_flags = DK_CXLFLASH_APP_CLOSE_ADAP_FD;
else
attach->hdr.return_flags = 0;
attach->context_id = ctxi->ctxid; attach->context_id = ctxi->ctxid;
attach->block_size = gli->blk_len; attach->block_size = gli->blk_len;
attach->mmio_size = sizeof(afu->afu_map->hosts[0].harea); attach->mmio_size = sizeof(afu->afu_map->hosts[0].harea);
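For applications, the visible change is that attach (and recover) now report through DK_CXLFLASH_APP_CLOSE_ADAP_FD in hdr.return_flags that the caller owns the adapter file descriptor and must close() it after detaching, rather than the driver closing it internally. A hedged user-space sketch of honouring that flag; it assumes the uapi header is installed as <scsi/cxlflash_ioctl.h> and omits filling in the request header before the call:

#include <scsi/cxlflash_ioctl.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* disk_fd: an open file descriptor on the cxlflash block device. */
static int attach_context(int disk_fd, struct dk_cxlflash_attach *attach,
			  bool *app_owns_adap_fd)
{
	if (ioctl(disk_fd, DK_CXLFLASH_ATTACH, attach))
		return -1;

	/*
	 * When the flag is set, attach->adap_fd belongs to the application
	 * and must be close()d once the context has been detached.
	 */
	*app_owns_adap_fd = !!(attach->hdr.return_flags &
			       DK_CXLFLASH_APP_CLOSE_ADAP_FD);
	return 0;
}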
@ -1494,7 +1470,7 @@ err:
file = NULL; file = NULL;
} }
/* Cleanup our context; safe to call even with mutex locked */ /* Cleanup our context */
if (ctxi) { if (ctxi) {
destroy_context(cfg, ctxi); destroy_context(cfg, ctxi);
ctxi = NULL; ctxi = NULL;
@ -1509,16 +1485,19 @@ err:
* recover_context() - recovers a context in error * recover_context() - recovers a context in error
* @cfg: Internal structure associated with the host. * @cfg: Internal structure associated with the host.
* @ctxi: Context to release. * @ctxi: Context to release.
* @adap_fd: Adapter file descriptor associated with new/recovered context.
* *
* Re-establishes the state for a context-in-error. * Re-establishes the state for a context-in-error.
* *
* Return: 0 on success, -errno on failure * Return: 0 on success, -errno on failure
*/ */
static int recover_context(struct cxlflash_cfg *cfg, struct ctx_info *ctxi) static int recover_context(struct cxlflash_cfg *cfg,
struct ctx_info *ctxi,
int *adap_fd)
{ {
struct device *dev = &cfg->dev->dev; struct device *dev = &cfg->dev->dev;
int rc = 0; int rc = 0;
int old_fd, fd = -1; int fd = -1;
int ctxid = -1; int ctxid = -1;
struct file *file; struct file *file;
struct cxl_context *ctx; struct cxl_context *ctx;
@ -1566,9 +1545,7 @@ static int recover_context(struct cxlflash_cfg *cfg, struct ctx_info *ctxi)
* No error paths after this point. Once the fd is installed it's * No error paths after this point. Once the fd is installed it's
* visible to user space and can't be undone safely on this thread. * visible to user space and can't be undone safely on this thread.
*/ */
old_fd = ctxi->lfd;
ctxi->ctxid = ENCODE_CTXID(ctxi, ctxid); ctxi->ctxid = ENCODE_CTXID(ctxi, ctxid);
ctxi->lfd = fd;
ctxi->ctx = ctx; ctxi->ctx = ctx;
ctxi->file = file; ctxi->file = file;
@ -1585,9 +1562,7 @@ static int recover_context(struct cxlflash_cfg *cfg, struct ctx_info *ctxi)
cfg->ctx_tbl[ctxid] = ctxi; cfg->ctx_tbl[ctxid] = ctxi;
mutex_unlock(&cfg->ctx_tbl_list_mutex); mutex_unlock(&cfg->ctx_tbl_list_mutex);
fd_install(fd, file); fd_install(fd, file);
*adap_fd = fd;
/* Release the original adapter fd and associated CXL resources */
sys_close(old_fd);
out: out:
dev_dbg(dev, "%s: returning ctxid=%d fd=%d rc=%d\n", dev_dbg(dev, "%s: returning ctxid=%d fd=%d rc=%d\n",
__func__, ctxid, fd, rc); __func__, ctxid, fd, rc);
@ -1646,6 +1621,7 @@ static int cxlflash_afu_recover(struct scsi_device *sdev,
rctxid = recover->context_id; rctxid = recover->context_id;
long reg; long reg;
int lretry = 20; /* up to 2 seconds */ int lretry = 20; /* up to 2 seconds */
int new_adap_fd = -1;
int rc = 0; int rc = 0;
atomic_inc(&cfg->recovery_threads); atomic_inc(&cfg->recovery_threads);
@ -1675,7 +1651,7 @@ retry:
if (ctxi->err_recovery_active) { if (ctxi->err_recovery_active) {
retry_recover: retry_recover:
rc = recover_context(cfg, ctxi); rc = recover_context(cfg, ctxi, &new_adap_fd);
if (unlikely(rc)) { if (unlikely(rc)) {
dev_err(dev, "%s: Recovery failed for context %llu (rc=%d)\n", dev_err(dev, "%s: Recovery failed for context %llu (rc=%d)\n",
__func__, ctxid, rc); __func__, ctxid, rc);
@ -1697,9 +1673,9 @@ retry_recover:
ctxi->err_recovery_active = false; ctxi->err_recovery_active = false;
recover->context_id = ctxi->ctxid; recover->context_id = ctxi->ctxid;
recover->adap_fd = ctxi->lfd; recover->adap_fd = new_adap_fd;
recover->mmio_size = sizeof(afu->afu_map->hosts[0].harea); recover->mmio_size = sizeof(afu->afu_map->hosts[0].harea);
recover->hdr.return_flags |= recover->hdr.return_flags = DK_CXLFLASH_APP_CLOSE_ADAP_FD |
DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET; DK_CXLFLASH_RECOVER_AFU_CONTEXT_RESET;
goto out; goto out;
} }


@ -100,13 +100,14 @@ struct ctx_info {
struct cxl_ioctl_start_work work; struct cxl_ioctl_start_work work;
u64 ctxid; u64 ctxid;
int lfd;
pid_t pid; pid_t pid;
bool initialized; bool initialized;
bool unavail; bool unavail;
bool err_recovery_active; bool err_recovery_active;
struct mutex mutex; /* Context protection */ struct mutex mutex; /* Context protection */
struct kref kref;
struct cxl_context *ctx; struct cxl_context *ctx;
struct cxlflash_cfg *cfg;
struct list_head luns; /* LUNs attached to this context */ struct list_head luns; /* LUNs attached to this context */
const struct vm_operations_struct *cxl_mmap_vmops; const struct vm_operations_struct *cxl_mmap_vmops;
struct file *file; struct file *file;


@ -1135,14 +1135,13 @@ int cxlflash_disk_clone(struct scsi_device *sdev,
ctxid_dst = DECODE_CTXID(clone->context_id_dst), ctxid_dst = DECODE_CTXID(clone->context_id_dst),
rctxid_src = clone->context_id_src, rctxid_src = clone->context_id_src,
rctxid_dst = clone->context_id_dst; rctxid_dst = clone->context_id_dst;
int adap_fd_src = clone->adap_fd_src;
int i, j; int i, j;
int rc = 0; int rc = 0;
bool found; bool found;
LIST_HEAD(sidecar); LIST_HEAD(sidecar);
pr_debug("%s: ctxid_src=%llu ctxid_dst=%llu adap_fd_src=%d\n", pr_debug("%s: ctxid_src=%llu ctxid_dst=%llu\n",
__func__, ctxid_src, ctxid_dst, adap_fd_src); __func__, ctxid_src, ctxid_dst);
/* Do not clone yourself */ /* Do not clone yourself */
if (unlikely(rctxid_src == rctxid_dst)) { if (unlikely(rctxid_src == rctxid_dst)) {
@ -1166,13 +1165,6 @@ int cxlflash_disk_clone(struct scsi_device *sdev,
goto out; goto out;
} }
if (unlikely(adap_fd_src != ctxi_src->lfd)) {
pr_debug("%s: Invalid source adapter fd! (%d)\n",
__func__, adap_fd_src);
rc = -EINVAL;
goto out;
}
/* Verify there is no open resource handle in the destination context */ /* Verify there is no open resource handle in the destination context */
for (i = 0; i < MAX_RHT_PER_CONTEXT; i++) for (i = 0; i < MAX_RHT_PER_CONTEXT; i++)
if (ctxi_dst->rht_start[i].nmask != 0) { if (ctxi_dst->rht_start[i].nmask != 0) {
@ -1257,7 +1249,6 @@ int cxlflash_disk_clone(struct scsi_device *sdev,
out_success: out_success:
list_splice(&sidecar, &ctxi_dst->luns); list_splice(&sidecar, &ctxi_dst->luns);
sys_close(adap_fd_src);
/* fall through */ /* fall through */
out: out:


@ -583,6 +583,7 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
sdev_printk(KERN_ERR, sdev, "%s: rtpg retry\n", sdev_printk(KERN_ERR, sdev, "%s: rtpg retry\n",
ALUA_DH_NAME); ALUA_DH_NAME);
scsi_print_sense_hdr(sdev, ALUA_DH_NAME, &sense_hdr); scsi_print_sense_hdr(sdev, ALUA_DH_NAME, &sense_hdr);
kfree(buff);
return err; return err;
} }
sdev_printk(KERN_ERR, sdev, "%s: rtpg failed\n", sdev_printk(KERN_ERR, sdev, "%s: rtpg failed\n",
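The alua hunk plugs a leak on the retry path: buff is allocated earlier in alua_rtpg() and the other exits release it, but this early return did not. The shape of the bug, reduced to a stand-alone sketch with stand-in names:

#include <stdlib.h>

/* Stand-in for submitting the RTPG command; pretend it asks for a retry. */
static int submit_rtpg(unsigned char *buff) { (void)buff; return -1; }

static int rtpg(void)
{
	unsigned char *buff = malloc(512);
	int err;

	if (!buff)
		return -1;

	err = submit_rtpg(buff);
	if (err) {
		free(buff);	/* the added kfree(): every exit after the allocation must release buff */
		return err;
	}

	free(buff);
	return 0;
}

int main(void)
{
	return rtpg() ? 1 : 0;
}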


@ -1,447 +0,0 @@
/*
* DTC 3180/3280 driver, by
* Ray Van Tassle rayvt@comm.mot.com
*
* taken from ...
* Trantor T128/T128F/T228 driver by...
*
* Drew Eckhardt
* Visionary Computing
* (Unix and Linux consulting and custom programming)
* drew@colorado.edu
* +1 (303) 440-4894
*/
/*
* The card is detected and initialized in one of several ways :
* 1. Autoprobe (default) - since the board is memory mapped,
* a BIOS signature is scanned for to locate the registers.
* An interrupt is triggered to autoprobe for the interrupt
* line.
*
* 2. With command line overrides - dtc=address,irq may be
* used on the LILO command line to override the defaults.
*
*/
/*----------------------------------------------------------------*/
/* the following will set the monitor border color (useful to find
where something crashed or gets stuck at */
/* 1 = blue
2 = green
3 = cyan
4 = red
5 = magenta
6 = yellow
7 = white
*/
#if 0
#define rtrc(i) {inb(0x3da); outb(0x31, 0x3c0); outb((i), 0x3c0);}
#else
#define rtrc(i) {}
#endif
#include <linux/module.h>
#include <linux/blkdev.h>
#include <linux/string.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <scsi/scsi_host.h>
#include "dtc.h"
#include "NCR5380.h"
/*
* The DTC3180 & 3280 boards are memory mapped.
*
*/
/*
*/
/* Offset from DTC_5380_OFFSET */
#define DTC_CONTROL_REG 0x100 /* rw */
#define D_CR_ACCESS 0x80 /* ro set=can access 3280 registers */
#define CSR_DIR_READ 0x40 /* rw direction, 1 = read 0 = write */
#define CSR_RESET 0x80 /* wo Resets 53c400 */
#define CSR_5380_REG 0x80 /* ro 5380 registers can be accessed */
#define CSR_TRANS_DIR 0x40 /* rw Data transfer direction */
#define CSR_SCSI_BUFF_INTR 0x20 /* rw Enable int on transfer ready */
#define CSR_5380_INTR 0x10 /* rw Enable 5380 interrupts */
#define CSR_SHARED_INTR 0x08 /* rw Interrupt sharing */
#define CSR_HOST_BUF_NOT_RDY 0x04 /* ro Host buffer not ready */
#define CSR_SCSI_BUF_RDY 0x02 /* ro SCSI buffer ready */
#define CSR_GATED_5380_IRQ 0x01 /* ro Last block xferred */
#define CSR_INT_BASE (CSR_SCSI_BUFF_INTR | CSR_5380_INTR)
#define DTC_BLK_CNT 0x101 /* rw
* # of 128-byte blocks to transfer */
#define D_CR_ACCESS 0x80 /* ro set=can access 3280 registers */
#define DTC_SWITCH_REG 0x3982 /* ro - DIP switches */
#define DTC_RESUME_XFER 0x3982 /* wo - resume data xfer
* after disconnect/reconnect*/
#define DTC_5380_OFFSET 0x3880 /* 8 registers here, see NCR5380.h */
/*!!!! for dtc, it's a 128 byte buffer at 3900 !!! */
#define DTC_DATA_BUF 0x3900 /* rw 128 bytes long */
static struct override {
unsigned int address;
int irq;
} overrides
#ifdef OVERRIDE
[] __initdata = OVERRIDE;
#else
[4] __initdata = {
{ 0, IRQ_AUTO }, { 0, IRQ_AUTO }, { 0, IRQ_AUTO }, { 0, IRQ_AUTO }
};
#endif
#define NO_OVERRIDES ARRAY_SIZE(overrides)
static struct base {
unsigned long address;
int noauto;
} bases[] __initdata = {
{ 0xcc000, 0 },
{ 0xc8000, 0 },
{ 0xdc000, 0 },
{ 0xd8000, 0 }
};
#define NO_BASES ARRAY_SIZE(bases)
static const struct signature {
const char *string;
int offset;
} signatures[] = {
{"DATA TECHNOLOGY CORPORATION BIOS", 0x25},
};
#define NO_SIGNATURES ARRAY_SIZE(signatures)
#ifndef MODULE
/*
* Function : dtc_setup(char *str, int *ints)
*
* Purpose : LILO command line initialization of the overrides array,
*
* Inputs : str - unused, ints - array of integer parameters with ints[0]
* equal to the number of ints.
*
*/
static int __init dtc_setup(char *str)
{
static int commandline_current;
int i;
int ints[10];
get_options(str, ARRAY_SIZE(ints), ints);
if (ints[0] != 2)
printk("dtc_setup: usage dtc=address,irq\n");
else if (commandline_current < NO_OVERRIDES) {
overrides[commandline_current].address = ints[1];
overrides[commandline_current].irq = ints[2];
for (i = 0; i < NO_BASES; ++i)
if (bases[i].address == ints[1]) {
bases[i].noauto = 1;
break;
}
++commandline_current;
}
return 1;
}
__setup("dtc=", dtc_setup);
#endif
/*
* Function : int dtc_detect(struct scsi_host_template * tpnt)
*
* Purpose : detects and initializes DTC 3180/3280 controllers
* that were autoprobed, overridden on the LILO command line,
* or specified at compile time.
*
* Inputs : tpnt - template for this SCSI adapter.
*
* Returns : 1 if a host adapter was found, 0 if not.
*
*/
static int __init dtc_detect(struct scsi_host_template * tpnt)
{
static int current_override, current_base;
struct Scsi_Host *instance;
unsigned int addr;
void __iomem *base;
int sig, count;
for (count = 0; current_override < NO_OVERRIDES; ++current_override) {
addr = 0;
base = NULL;
if (overrides[current_override].address) {
addr = overrides[current_override].address;
base = ioremap(addr, 0x2000);
if (!base)
addr = 0;
} else
for (; !addr && (current_base < NO_BASES); ++current_base) {
dprintk(NDEBUG_INIT, "dtc: probing address 0x%08x\n",
(unsigned int)bases[current_base].address);
if (bases[current_base].noauto)
continue;
base = ioremap(bases[current_base].address, 0x2000);
if (!base)
continue;
for (sig = 0; sig < NO_SIGNATURES; ++sig) {
if (check_signature(base + signatures[sig].offset, signatures[sig].string, strlen(signatures[sig].string))) {
addr = bases[current_base].address;
dprintk(NDEBUG_INIT, "dtc: detected board\n");
goto found;
}
}
iounmap(base);
}
dprintk(NDEBUG_INIT, "dtc: addr = 0x%08x\n", addr);
if (!addr)
break;
found:
instance = scsi_register(tpnt, sizeof(struct NCR5380_hostdata));
if (instance == NULL)
goto out_unmap;
instance->base = addr;
((struct NCR5380_hostdata *)(instance)->hostdata)->base = base;
if (NCR5380_init(instance, FLAG_LATE_DMA_SETUP))
goto out_unregister;
NCR5380_maybe_reset_bus(instance);
NCR5380_write(DTC_CONTROL_REG, CSR_5380_INTR); /* Enable int's */
if (overrides[current_override].irq != IRQ_AUTO)
instance->irq = overrides[current_override].irq;
else
instance->irq = NCR5380_probe_irq(instance, DTC_IRQS);
/* Compatibility with documented NCR5380 kernel parameters */
if (instance->irq == 255)
instance->irq = NO_IRQ;
/* With interrupts enabled, it will sometimes hang when doing heavy
* reads. So better not enable them until I finger it out. */
instance->irq = NO_IRQ;
if (instance->irq != NO_IRQ)
if (request_irq(instance->irq, dtc_intr, 0,
"dtc", instance)) {
printk(KERN_ERR "scsi%d : IRQ%d not free, interrupts disabled\n", instance->host_no, instance->irq);
instance->irq = NO_IRQ;
}
if (instance->irq == NO_IRQ) {
printk(KERN_WARNING "scsi%d : interrupts not enabled. for better interactive performance,\n", instance->host_no);
printk(KERN_WARNING "scsi%d : please jumper the board for a free IRQ.\n", instance->host_no);
}
dprintk(NDEBUG_INIT, "scsi%d : irq = %d\n",
instance->host_no, instance->irq);
++current_override;
++count;
}
return count;
out_unregister:
scsi_unregister(instance);
out_unmap:
iounmap(base);
return count;
}
/*
* Function : int dtc_biosparam(Disk * disk, struct block_device *dev, int *ip)
*
* Purpose : Generates a BIOS / DOS compatible H-C-S mapping for
* the specified device / size.
*
* Inputs : size = size of device in sectors (512 bytes), dev = block device
* major / minor, ip[] = {heads, sectors, cylinders}
*
* Returns : always 0 (success), initializes ip
*
*/
/*
* XXX Most SCSI boards use this mapping, I could be incorrect. Some one
* using hard disks on a trantor should verify that this mapping corresponds
* to that used by the BIOS / ASPI driver by running the linux fdisk program
* and matching the H_C_S coordinates to what DOS uses.
*/
static int dtc_biosparam(struct scsi_device *sdev, struct block_device *dev,
sector_t capacity, int *ip)
{
int size = capacity;
ip[0] = 64;
ip[1] = 32;
ip[2] = size >> 11;
return 0;
}
/****************************************************************
* Function : int NCR5380_pread (struct Scsi_Host *instance,
* unsigned char *dst, int len)
*
* Purpose : Fast 5380 pseudo-dma read function, reads len bytes to
* dst
*
* Inputs : dst = destination, len = length in bytes
*
* Returns : 0 on success, non zero on a failure such as a watchdog
* timeout.
*/
static inline int dtc_pread(struct Scsi_Host *instance,
unsigned char *dst, int len)
{
unsigned char *d = dst;
int i; /* For counting time spent in the poll-loop */
struct NCR5380_hostdata *hostdata = shost_priv(instance);
i = 0;
if (instance->irq == NO_IRQ)
NCR5380_write(DTC_CONTROL_REG, CSR_DIR_READ);
else
NCR5380_write(DTC_CONTROL_REG, CSR_DIR_READ | CSR_INT_BASE);
NCR5380_write(DTC_BLK_CNT, len >> 7); /* Block count */
rtrc(1);
while (len > 0) {
rtrc(2);
while (NCR5380_read(DTC_CONTROL_REG) & CSR_HOST_BUF_NOT_RDY)
++i;
rtrc(3);
memcpy_fromio(d, hostdata->base + DTC_DATA_BUF, 128);
d += 128;
len -= 128;
rtrc(7);
/*** with int's on, it sometimes hangs after here.
* Looks like something makes HBNR go away. */
}
rtrc(4);
while (!(NCR5380_read(DTC_CONTROL_REG) & D_CR_ACCESS))
++i;
rtrc(0);
return (0);
}
/****************************************************************
* Function : int NCR5380_pwrite (struct Scsi_Host *instance,
* unsigned char *src, int len)
*
* Purpose : Fast 5380 pseudo-dma write function, transfers len bytes from
* src
*
* Inputs : src = source, len = length in bytes
*
* Returns : 0 on success, non zero on a failure such as a watchdog
* timeout.
*/
static inline int dtc_pwrite(struct Scsi_Host *instance,
unsigned char *src, int len)
{
int i;
struct NCR5380_hostdata *hostdata = shost_priv(instance);
if (instance->irq == NO_IRQ)
NCR5380_write(DTC_CONTROL_REG, 0);
else
NCR5380_write(DTC_CONTROL_REG, CSR_5380_INTR);
NCR5380_write(DTC_BLK_CNT, len >> 7); /* Block count */
for (i = 0; len > 0; ++i) {
rtrc(5);
/* Poll until the host buffer can accept data. */
while (NCR5380_read(DTC_CONTROL_REG) & CSR_HOST_BUF_NOT_RDY)
++i;
rtrc(3);
memcpy_toio(hostdata->base + DTC_DATA_BUF, src, 128);
src += 128;
len -= 128;
}
rtrc(4);
while (!(NCR5380_read(DTC_CONTROL_REG) & D_CR_ACCESS))
++i;
rtrc(6);
/* Wait until the last byte has been sent to the disk */
while (!(NCR5380_read(TARGET_COMMAND_REG) & TCR_LAST_BYTE_SENT))
++i;
rtrc(7);
/* Check for parity error here. fixme. */
rtrc(0);
return (0);
}
static int dtc_dma_xfer_len(struct scsi_cmnd *cmd)
{
int transfersize = cmd->transfersize;
/* Limit transfers to 32K, for xx400 & xx406
* pseudoDMA that transfers in 128 bytes blocks.
*/
if (transfersize > 32 * 1024 && cmd->SCp.this_residual &&
!(cmd->SCp.this_residual % transfersize))
transfersize = 32 * 1024;
return transfersize;
}
MODULE_LICENSE("GPL");
#include "NCR5380.c"
static int dtc_release(struct Scsi_Host *shost)
{
struct NCR5380_hostdata *hostdata = shost_priv(shost);
if (shost->irq != NO_IRQ)
free_irq(shost->irq, shost);
NCR5380_exit(shost);
scsi_unregister(shost);
iounmap(hostdata->base);
return 0;
}
static struct scsi_host_template driver_template = {
.name = "DTC 3180/3280",
.detect = dtc_detect,
.release = dtc_release,
.proc_name = "dtc3x80",
.info = dtc_info,
.queuecommand = dtc_queue_command,
.eh_abort_handler = dtc_abort,
.eh_bus_reset_handler = dtc_bus_reset,
.bios_param = dtc_biosparam,
.can_queue = 32,
.this_id = 7,
.sg_tablesize = SG_ALL,
.cmd_per_lun = 2,
.use_clustering = DISABLE_CLUSTERING,
.cmd_size = NCR5380_CMD_SIZE,
.max_sectors = 128,
};
#include "scsi_module.c"


@ -1,42 +0,0 @@
/*
* DTC controller, taken from T128 driver by...
* Copyright 1993, Drew Eckhardt
* Visionary Computing
* (Unix and Linux consulting and custom programming)
* drew@colorado.edu
* +1 (303) 440-4894
*/
#ifndef DTC3280_H
#define DTC3280_H
#define NCR5380_implementation_fields \
void __iomem *base
#define DTC_address(reg) \
(((struct NCR5380_hostdata *)shost_priv(instance))->base + DTC_5380_OFFSET + reg)
#define NCR5380_read(reg) (readb(DTC_address(reg)))
#define NCR5380_write(reg, value) (writeb(value, DTC_address(reg)))
#define NCR5380_dma_xfer_len(instance, cmd, phase) \
dtc_dma_xfer_len(cmd)
#define NCR5380_dma_recv_setup dtc_pread
#define NCR5380_dma_send_setup dtc_pwrite
#define NCR5380_dma_residual(instance) (0)
#define NCR5380_intr dtc_intr
#define NCR5380_queue_command dtc_queue_command
#define NCR5380_abort dtc_abort
#define NCR5380_bus_reset dtc_bus_reset
#define NCR5380_info dtc_info
#define NCR5380_io_delay(x) udelay(x)
/* 15 12 11 10
1001 1100 0000 0000 */
#define DTC_IRQS 0x9c00
#endif /* DTC3280_H */


@ -963,10 +963,6 @@ bool esas2r_init_adapter_struct(struct esas2r_adapter *a,
/* initialize the allocated memory */ /* initialize the allocated memory */
if (test_bit(AF_FIRST_INIT, &a->flags)) { if (test_bit(AF_FIRST_INIT, &a->flags)) {
memset(a->req_table, 0,
(num_requests + num_ae_requests +
1) * sizeof(struct esas2r_request *));
esas2r_targ_db_initialize(a); esas2r_targ_db_initialize(a);
/* prime parts of the inbound list */ /* prime parts of the inbound list */


@ -194,7 +194,7 @@ static ssize_t write_hw(struct file *file, struct kobject *kobj,
int length = min(sizeof(struct atto_ioctl), count); int length = min(sizeof(struct atto_ioctl), count);
if (!a->local_atto_ioctl) { if (!a->local_atto_ioctl) {
a->local_atto_ioctl = kzalloc(sizeof(struct atto_ioctl), a->local_atto_ioctl = kmalloc(sizeof(struct atto_ioctl),
GFP_KERNEL); GFP_KERNEL);
if (a->local_atto_ioctl == NULL) { if (a->local_atto_ioctl == NULL) {
esas2r_log(ESAS2R_LOG_WARN, esas2r_log(ESAS2R_LOG_WARN,


@ -83,6 +83,41 @@ static struct notifier_block libfcoe_notifier = {
.notifier_call = libfcoe_device_notification, .notifier_call = libfcoe_device_notification,
}; };
static const struct {
u32 fc_port_speed;
#define SPEED_2000 2000
#define SPEED_4000 4000
#define SPEED_8000 8000
#define SPEED_16000 16000
#define SPEED_32000 32000
u32 eth_port_speed;
} fcoe_port_speed_mapping[] = {
{ FC_PORTSPEED_1GBIT, SPEED_1000 },
{ FC_PORTSPEED_2GBIT, SPEED_2000 },
{ FC_PORTSPEED_4GBIT, SPEED_4000 },
{ FC_PORTSPEED_8GBIT, SPEED_8000 },
{ FC_PORTSPEED_10GBIT, SPEED_10000 },
{ FC_PORTSPEED_16GBIT, SPEED_16000 },
{ FC_PORTSPEED_20GBIT, SPEED_20000 },
{ FC_PORTSPEED_25GBIT, SPEED_25000 },
{ FC_PORTSPEED_32GBIT, SPEED_32000 },
{ FC_PORTSPEED_40GBIT, SPEED_40000 },
{ FC_PORTSPEED_50GBIT, SPEED_50000 },
{ FC_PORTSPEED_100GBIT, SPEED_100000 },
};
static inline u32 eth2fc_speed(u32 eth_port_speed)
{
int i;
for (i = 0; i < ARRAY_SIZE(fcoe_port_speed_mapping); i++) {
if (fcoe_port_speed_mapping[i].eth_port_speed == eth_port_speed)
return fcoe_port_speed_mapping[i].fc_port_speed;
}
return FC_PORTSPEED_UNKNOWN;
}
/** /**
* fcoe_link_speed_update() - Update the supported and actual link speeds * fcoe_link_speed_update() - Update the supported and actual link speeds
* @lport: The local port to update speeds for * @lport: The local port to update speeds for
@ -126,23 +161,7 @@ int fcoe_link_speed_update(struct fc_lport *lport)
SUPPORTED_40000baseLR4_Full)) SUPPORTED_40000baseLR4_Full))
lport->link_supported_speeds |= FC_PORTSPEED_40GBIT; lport->link_supported_speeds |= FC_PORTSPEED_40GBIT;
switch (ecmd.base.speed) { lport->link_speed = eth2fc_speed(ecmd.base.speed);
case SPEED_1000:
lport->link_speed = FC_PORTSPEED_1GBIT;
break;
case SPEED_10000:
lport->link_speed = FC_PORTSPEED_10GBIT;
break;
case SPEED_20000:
lport->link_speed = FC_PORTSPEED_20GBIT;
break;
case SPEED_40000:
lport->link_speed = FC_PORTSPEED_40GBIT;
break;
default:
lport->link_speed = FC_PORTSPEED_UNKNOWN;
break;
}
return 0; return 0;
} }
return -1; return -1;


@ -23,7 +23,7 @@
#include <scsi/sas_ata.h> #include <scsi/sas_ata.h>
#include <scsi/libsas.h> #include <scsi/libsas.h>
#define DRV_VERSION "v1.5" #define DRV_VERSION "v1.6"
#define HISI_SAS_MAX_PHYS 9 #define HISI_SAS_MAX_PHYS 9
#define HISI_SAS_MAX_QUEUES 32 #define HISI_SAS_MAX_QUEUES 32
@ -56,6 +56,11 @@ enum dev_status {
HISI_SAS_DEV_EH, HISI_SAS_DEV_EH,
}; };
enum {
HISI_SAS_INT_ABT_CMD = 0,
HISI_SAS_INT_ABT_DEV = 1,
};
enum hisi_sas_dev_type { enum hisi_sas_dev_type {
HISI_SAS_DEV_TYPE_STP = 0, HISI_SAS_DEV_TYPE_STP = 0,
HISI_SAS_DEV_TYPE_SSP, HISI_SAS_DEV_TYPE_SSP,
@ -89,6 +94,13 @@ struct hisi_sas_port {
struct hisi_sas_cq { struct hisi_sas_cq {
struct hisi_hba *hisi_hba; struct hisi_hba *hisi_hba;
int rd_point;
int id;
};
struct hisi_sas_dq {
struct hisi_hba *hisi_hba;
int wr_point;
int id; int id;
}; };
@ -146,6 +158,9 @@ struct hisi_sas_hw {
struct hisi_sas_slot *slot); struct hisi_sas_slot *slot);
int (*prep_stp)(struct hisi_hba *hisi_hba, int (*prep_stp)(struct hisi_hba *hisi_hba,
struct hisi_sas_slot *slot); struct hisi_sas_slot *slot);
int (*prep_abort)(struct hisi_hba *hisi_hba,
struct hisi_sas_slot *slot,
int device_id, int abort_flag, int tag_to_abort);
int (*slot_complete)(struct hisi_hba *hisi_hba, int (*slot_complete)(struct hisi_hba *hisi_hba,
struct hisi_sas_slot *slot, int abort); struct hisi_sas_slot *slot, int abort);
void (*phy_enable)(struct hisi_hba *hisi_hba, int phy_no); void (*phy_enable)(struct hisi_hba *hisi_hba, int phy_no);
@ -185,6 +200,7 @@ struct hisi_hba {
struct Scsi_Host *shost; struct Scsi_Host *shost;
struct hisi_sas_cq cq[HISI_SAS_MAX_QUEUES]; struct hisi_sas_cq cq[HISI_SAS_MAX_QUEUES];
struct hisi_sas_dq dq[HISI_SAS_MAX_QUEUES];
struct hisi_sas_phy phy[HISI_SAS_MAX_PHYS]; struct hisi_sas_phy phy[HISI_SAS_MAX_PHYS];
struct hisi_sas_port port[HISI_SAS_MAX_PHYS]; struct hisi_sas_port port[HISI_SAS_MAX_PHYS];


@ -17,6 +17,10 @@
static int hisi_sas_debug_issue_ssp_tmf(struct domain_device *device, static int hisi_sas_debug_issue_ssp_tmf(struct domain_device *device,
u8 *lun, struct hisi_sas_tmf_task *tmf); u8 *lun, struct hisi_sas_tmf_task *tmf);
static int
hisi_sas_internal_task_abort(struct hisi_hba *hisi_hba,
struct domain_device *device,
int abort_flag, int tag);
static struct hisi_hba *dev_to_hisi_hba(struct domain_device *device) static struct hisi_hba *dev_to_hisi_hba(struct domain_device *device)
{ {
@ -93,7 +97,7 @@ void hisi_sas_slot_task_free(struct hisi_hba *hisi_hba, struct sas_task *task,
slot->task = NULL; slot->task = NULL;
slot->port = NULL; slot->port = NULL;
hisi_sas_slot_index_free(hisi_hba, slot->idx); hisi_sas_slot_index_free(hisi_hba, slot->idx);
memset(slot, 0, sizeof(*slot)); /* slot memory is fully zeroed when it is reused */
} }
EXPORT_SYMBOL_GPL(hisi_sas_slot_task_free); EXPORT_SYMBOL_GPL(hisi_sas_slot_task_free);
@ -116,6 +120,14 @@ static int hisi_sas_task_prep_ata(struct hisi_hba *hisi_hba,
return hisi_hba->hw->prep_stp(hisi_hba, slot); return hisi_hba->hw->prep_stp(hisi_hba, slot);
} }
static int hisi_sas_task_prep_abort(struct hisi_hba *hisi_hba,
struct hisi_sas_slot *slot,
int device_id, int abort_flag, int tag_to_abort)
{
return hisi_hba->hw->prep_abort(hisi_hba, slot,
device_id, abort_flag, tag_to_abort);
}
/* /*
* This function will issue an abort TMF regardless of whether the * This function will issue an abort TMF regardless of whether the
* task is in the sdev or not. Then it will do the task complete * task is in the sdev or not. Then it will do the task complete
@ -192,27 +204,13 @@ static int hisi_sas_task_prep(struct sas_task *task, struct hisi_hba *hisi_hba,
return rc; return rc;
} }
port = device->port->lldd_port; port = device->port->lldd_port;
if (port && !port->port_attached && !tmf) { if (port && !port->port_attached) {
if (sas_protocol_ata(task->task_proto)) { dev_info(dev, "task prep: %s port%d not attach device\n",
struct task_status_struct *ts = &task->task_status; (sas_protocol_ata(task->task_proto)) ?
"SATA/STP" : "SAS",
device->port->id);
dev_info(dev, return SAS_PHY_DOWN;
"task prep: SATA/STP port%d not attach device\n",
device->port->id);
ts->resp = SAS_TASK_COMPLETE;
ts->stat = SAS_PHY_DOWN;
task->task_done(task);
} else {
struct task_status_struct *ts = &task->task_status;
dev_info(dev,
"task prep: SAS port%d does not attach device\n",
device->port->id);
ts->resp = SAS_TASK_UNDELIVERED;
ts->stat = SAS_PHY_DOWN;
task->task_done(task);
}
return 0;
} }
if (!sas_protocol_ata(task->task_proto)) { if (!sas_protocol_ata(task->task_proto)) {
@ -609,6 +607,9 @@ static void hisi_sas_dev_gone(struct domain_device *device)
dev_info(dev, "found dev[%lld:%x] is gone\n", dev_info(dev, "found dev[%lld:%x] is gone\n",
sas_dev->device_id, sas_dev->dev_type); sas_dev->device_id, sas_dev->dev_type);
hisi_sas_internal_task_abort(hisi_hba, device,
HISI_SAS_INT_ABT_DEV, 0);
hisi_hba->hw->free_device(hisi_hba, sas_dev); hisi_hba->hw->free_device(hisi_hba, sas_dev);
device->lldd_dev = NULL; device->lldd_dev = NULL;
memset(sas_dev, 0, sizeof(*sas_dev)); memset(sas_dev, 0, sizeof(*sas_dev));
@ -728,6 +729,12 @@ static int hisi_sas_exec_internal_tmf_task(struct domain_device *device,
break; break;
} }
if (task->task_status.resp == SAS_TASK_COMPLETE &&
task->task_status.stat == TMF_RESP_FUNC_SUCC) {
res = TMF_RESP_FUNC_SUCC;
break;
}
if (task->task_status.resp == SAS_TASK_COMPLETE && if (task->task_status.resp == SAS_TASK_COMPLETE &&
task->task_status.stat == SAS_DATA_UNDERRUN) { task->task_status.stat == SAS_DATA_UNDERRUN) {
/* no error, but return the number of bytes of /* no error, but return the number of bytes of
@ -826,18 +833,22 @@ static int hisi_sas_abort_task(struct sas_task *task)
} }
} }
hisi_sas_internal_task_abort(hisi_hba, device,
HISI_SAS_INT_ABT_CMD, tag);
} else if (task->task_proto & SAS_PROTOCOL_SATA || } else if (task->task_proto & SAS_PROTOCOL_SATA ||
task->task_proto & SAS_PROTOCOL_STP) { task->task_proto & SAS_PROTOCOL_STP) {
if (task->dev->dev_type == SAS_SATA_DEV) { if (task->dev->dev_type == SAS_SATA_DEV) {
struct hisi_slot_info *slot = task->lldd_task; hisi_sas_internal_task_abort(hisi_hba, device,
HISI_SAS_INT_ABT_DEV, 0);
dev_notice(dev, "abort task: hba=%p task=%p slot=%p\n",
hisi_hba, task, slot);
task->task_state_flags |= SAS_TASK_STATE_ABORTED;
rc = TMF_RESP_FUNC_COMPLETE; rc = TMF_RESP_FUNC_COMPLETE;
goto out;
} }
} else if (task->task_proto & SAS_PROTOCOL_SMP) {
/* SMP */
struct hisi_sas_slot *slot = task->lldd_task;
u32 tag = slot->idx;
hisi_sas_internal_task_abort(hisi_hba, device,
HISI_SAS_INT_ABT_CMD, tag);
} }
out: out:
@ -954,6 +965,157 @@ static int hisi_sas_query_task(struct sas_task *task)
return rc; return rc;
} }
static int
hisi_sas_internal_abort_task_exec(struct hisi_hba *hisi_hba, u64 device_id,
struct sas_task *task, int abort_flag,
int task_tag)
{
struct domain_device *device = task->dev;
struct hisi_sas_device *sas_dev = device->lldd_dev;
struct device *dev = &hisi_hba->pdev->dev;
struct hisi_sas_port *port;
struct hisi_sas_slot *slot;
struct hisi_sas_cmd_hdr *cmd_hdr_base;
int dlvry_queue_slot, dlvry_queue, n_elem = 0, rc, slot_idx;
if (!device->port)
return -1;
port = device->port->lldd_port;
/* simply get a slot and send abort command */
rc = hisi_sas_slot_index_alloc(hisi_hba, &slot_idx);
if (rc)
goto err_out;
rc = hisi_hba->hw->get_free_slot(hisi_hba, &dlvry_queue,
&dlvry_queue_slot);
if (rc)
goto err_out_tag;
slot = &hisi_hba->slot_info[slot_idx];
memset(slot, 0, sizeof(struct hisi_sas_slot));
slot->idx = slot_idx;
slot->n_elem = n_elem;
slot->dlvry_queue = dlvry_queue;
slot->dlvry_queue_slot = dlvry_queue_slot;
cmd_hdr_base = hisi_hba->cmd_hdr[dlvry_queue];
slot->cmd_hdr = &cmd_hdr_base[dlvry_queue_slot];
slot->task = task;
slot->port = port;
task->lldd_task = slot;
memset(slot->cmd_hdr, 0, sizeof(struct hisi_sas_cmd_hdr));
rc = hisi_sas_task_prep_abort(hisi_hba, slot, device_id,
abort_flag, task_tag);
if (rc)
goto err_out_tag;
/* Port structure is static for the HBA, so
* even if the port is deformed it is ok
* to reference.
*/
list_add_tail(&slot->entry, &port->list);
spin_lock(&task->task_state_lock);
task->task_state_flags |= SAS_TASK_AT_INITIATOR;
spin_unlock(&task->task_state_lock);
hisi_hba->slot_prep = slot;
sas_dev->running_req++;
/* send abort command to our chip */
hisi_hba->hw->start_delivery(hisi_hba);
return 0;
err_out_tag:
hisi_sas_slot_index_free(hisi_hba, slot_idx);
err_out:
dev_err(dev, "internal abort task prep: failed[%d]!\n", rc);
return rc;
}
/**
* hisi_sas_internal_task_abort -- execute an internal
* abort command for single IO command or a device
* @hisi_hba: host controller struct
* @device: domain device
* @abort_flag: mode of operation, device or single IO
* @tag: tag of IO to be aborted (only relevant to single
* IO mode)
*/
static int
hisi_sas_internal_task_abort(struct hisi_hba *hisi_hba,
struct domain_device *device,
int abort_flag, int tag)
{
struct sas_task *task;
struct hisi_sas_device *sas_dev = device->lldd_dev;
struct device *dev = &hisi_hba->pdev->dev;
int res;
unsigned long flags;
if (!hisi_hba->hw->prep_abort)
return -EOPNOTSUPP;
task = sas_alloc_slow_task(GFP_KERNEL);
if (!task)
return -ENOMEM;
task->dev = device;
task->task_proto = device->tproto;
task->task_done = hisi_sas_task_done;
task->slow_task->timer.data = (unsigned long)task;
task->slow_task->timer.function = hisi_sas_tmf_timedout;
task->slow_task->timer.expires = jiffies + 20*HZ;
add_timer(&task->slow_task->timer);
/* Lock as we are alloc'ing a slot, which cannot be interrupted */
spin_lock_irqsave(&hisi_hba->lock, flags);
res = hisi_sas_internal_abort_task_exec(hisi_hba, sas_dev->device_id,
task, abort_flag, tag);
spin_unlock_irqrestore(&hisi_hba->lock, flags);
if (res) {
del_timer(&task->slow_task->timer);
dev_err(dev, "internal task abort: executing internal task failed: %d\n",
res);
goto exit;
}
wait_for_completion(&task->slow_task->completion);
res = TMF_RESP_FUNC_FAILED;
if (task->task_status.resp == SAS_TASK_COMPLETE &&
task->task_status.stat == TMF_RESP_FUNC_COMPLETE) {
res = TMF_RESP_FUNC_COMPLETE;
goto exit;
}
/* TMF timed out, return direct. */
if ((task->task_state_flags & SAS_TASK_STATE_ABORTED)) {
if (!(task->task_state_flags & SAS_TASK_STATE_DONE)) {
dev_err(dev, "internal task abort: timeout.\n");
if (task->lldd_task) {
struct hisi_sas_slot *slot = task->lldd_task;
hisi_sas_slot_task_free(hisi_hba, task, slot);
}
}
}
exit:
dev_info(dev, "internal task abort: task to dev %016llx task=%p "
"resp: 0x%x sts 0x%x\n",
SAS_ADDR(device->sas_addr),
task,
task->task_status.resp, /* 0 is complete, -1 is undelivered */
task->task_status.stat);
sas_free_task(task);
return res;
}
static void hisi_sas_port_formed(struct asd_sas_phy *sas_phy) static void hisi_sas_port_formed(struct asd_sas_phy *sas_phy)
{ {
hisi_sas_port_notify_formed(sas_phy); hisi_sas_port_notify_formed(sas_phy);
@ -1063,11 +1225,16 @@ static int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost)
for (i = 0; i < hisi_hba->queue_count; i++) { for (i = 0; i < hisi_hba->queue_count; i++) {
struct hisi_sas_cq *cq = &hisi_hba->cq[i]; struct hisi_sas_cq *cq = &hisi_hba->cq[i];
struct hisi_sas_dq *dq = &hisi_hba->dq[i];
/* Completion queue structure */ /* Completion queue structure */
cq->id = i; cq->id = i;
cq->hisi_hba = hisi_hba; cq->hisi_hba = hisi_hba;
/* Delivery queue structure */
dq->id = i;
dq->hisi_hba = hisi_hba;
/* Delivery queue */ /* Delivery queue */
s = sizeof(struct hisi_sas_cmd_hdr) * HISI_SAS_QUEUE_SLOTS; s = sizeof(struct hisi_sas_cmd_hdr) * HISI_SAS_QUEUE_SLOTS;
hisi_hba->cmd_hdr[i] = dma_alloc_coherent(dev, s, hisi_hba->cmd_hdr[i] = dma_alloc_coherent(dev, s,
@ -1128,7 +1295,7 @@ static int hisi_sas_alloc(struct hisi_hba *hisi_hba, struct Scsi_Host *shost)
memset(hisi_hba->breakpoint, 0, s); memset(hisi_hba->breakpoint, 0, s);
hisi_hba->slot_index_count = max_command_entries; hisi_hba->slot_index_count = max_command_entries;
s = hisi_hba->slot_index_count / sizeof(unsigned long); s = hisi_hba->slot_index_count / BITS_PER_BYTE;
hisi_hba->slot_index_tags = devm_kzalloc(dev, s, GFP_KERNEL); hisi_hba->slot_index_tags = devm_kzalloc(dev, s, GFP_KERNEL);
if (!hisi_hba->slot_index_tags) if (!hisi_hba->slot_index_tags)
goto err_out; goto err_out;
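One detail in the allocation hunk above: slot_index_tags is a bitmap with one bit per command slot, so its byte size is the slot count divided by BITS_PER_BYTE. The old divisor, sizeof(unsigned long), only matches that on 64-bit builds; on 32-bit it would allocate twice the needed size. Illustrative arithmetic only, assuming 1024 slots:

	/* One bit per slot:  1024 / BITS_PER_BYTE         = 128 bytes
	 * Old expression:    1024 / sizeof(unsigned long) = 128 on 64-bit,
	 *                                                   256 on 32-bit */
	size_t bitmap_bytes = max_command_entries / BITS_PER_BYTE;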
@ -1272,6 +1439,12 @@ static struct Scsi_Host *hisi_sas_shost_alloc(struct platform_device *pdev,
&hisi_hba->queue_count)) &hisi_hba->queue_count))
goto err_out; goto err_out;
if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
dev_err(dev, "No usable DMA addressing method\n");
goto err_out;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
hisi_hba->regs = devm_ioremap_resource(dev, res); hisi_hba->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(hisi_hba->regs)) if (IS_ERR(hisi_hba->regs))
@ -1319,13 +1492,6 @@ int hisi_sas_probe(struct platform_device *pdev,
hisi_hba = shost_priv(shost); hisi_hba = shost_priv(shost);
platform_set_drvdata(pdev, sha); platform_set_drvdata(pdev, sha);
if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
dev_err(dev, "No usable DMA addressing method\n");
rc = -EIO;
goto err_out_ha;
}
phy_nr = port_nr = hisi_hba->n_phy; phy_nr = port_nr = hisi_hba->n_phy;
arr_phy = devm_kcalloc(dev, phy_nr, sizeof(void *), GFP_KERNEL); arr_phy = devm_kcalloc(dev, phy_nr, sizeof(void *), GFP_KERNEL);


@ -490,25 +490,17 @@ static void config_id_frame_v1_hw(struct hisi_hba *hisi_hba, int phy_no)
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD0, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD0,
__swab32(identify_buffer[0])); __swab32(identify_buffer[0]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD1, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD1,
identify_buffer[2]); __swab32(identify_buffer[1]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD2, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD2,
identify_buffer[1]); __swab32(identify_buffer[2]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD3, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD3,
identify_buffer[4]); __swab32(identify_buffer[3]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD4, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD4,
identify_buffer[3]); __swab32(identify_buffer[4]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD5, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD5,
__swab32(identify_buffer[5])); __swab32(identify_buffer[5]));
} }
static void init_id_frame_v1_hw(struct hisi_hba *hisi_hba)
{
int i;
for (i = 0; i < hisi_hba->n_phy; i++)
config_id_frame_v1_hw(hisi_hba, i);
}
static void setup_itct_v1_hw(struct hisi_hba *hisi_hba, static void setup_itct_v1_hw(struct hisi_hba *hisi_hba,
struct hisi_sas_device *sas_dev) struct hisi_sas_device *sas_dev)
{ {
@ -774,8 +766,6 @@ static int hw_init_v1_hw(struct hisi_hba *hisi_hba)
msleep(100); msleep(100);
init_reg_v1_hw(hisi_hba); init_reg_v1_hw(hisi_hba);
init_id_frame_v1_hw(hisi_hba);
return 0; return 0;
} }
@ -875,12 +865,13 @@ static int get_wideport_bitmap_v1_hw(struct hisi_hba *hisi_hba, int port_id)
static int get_free_slot_v1_hw(struct hisi_hba *hisi_hba, int *q, int *s) static int get_free_slot_v1_hw(struct hisi_hba *hisi_hba, int *q, int *s)
{ {
struct device *dev = &hisi_hba->pdev->dev; struct device *dev = &hisi_hba->pdev->dev;
struct hisi_sas_dq *dq;
u32 r, w; u32 r, w;
int queue = hisi_hba->queue; int queue = hisi_hba->queue;
while (1) { while (1) {
w = hisi_sas_read32_relaxed(hisi_hba, dq = &hisi_hba->dq[queue];
DLVRY_Q_0_WR_PTR + (queue * 0x14)); w = dq->wr_point;
r = hisi_sas_read32_relaxed(hisi_hba, r = hisi_sas_read32_relaxed(hisi_hba,
DLVRY_Q_0_RD_PTR + (queue * 0x14)); DLVRY_Q_0_RD_PTR + (queue * 0x14));
if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) { if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) {
@ -903,10 +894,11 @@ static void start_delivery_v1_hw(struct hisi_hba *hisi_hba)
{ {
int dlvry_queue = hisi_hba->slot_prep->dlvry_queue; int dlvry_queue = hisi_hba->slot_prep->dlvry_queue;
int dlvry_queue_slot = hisi_hba->slot_prep->dlvry_queue_slot; int dlvry_queue_slot = hisi_hba->slot_prep->dlvry_queue_slot;
struct hisi_sas_dq *dq = &hisi_hba->dq[dlvry_queue];
hisi_sas_write32(hisi_hba, dq->wr_point = ++dlvry_queue_slot % HISI_SAS_QUEUE_SLOTS;
DLVRY_Q_0_WR_PTR + (dlvry_queue * 0x14), hisi_sas_write32(hisi_hba, DLVRY_Q_0_WR_PTR + (dlvry_queue * 0x14),
++dlvry_queue_slot % HISI_SAS_QUEUE_SLOTS); dq->wr_point);
} }
static int prep_prd_sge_v1_hw(struct hisi_hba *hisi_hba, static int prep_prd_sge_v1_hw(struct hisi_hba *hisi_hba,
@ -1565,14 +1557,11 @@ static irqreturn_t cq_interrupt_v1_hw(int irq, void *p)
struct hisi_sas_complete_v1_hdr *complete_queue = struct hisi_sas_complete_v1_hdr *complete_queue =
(struct hisi_sas_complete_v1_hdr *) (struct hisi_sas_complete_v1_hdr *)
hisi_hba->complete_hdr[queue]; hisi_hba->complete_hdr[queue];
u32 irq_value, rd_point, wr_point; u32 irq_value, rd_point = cq->rd_point, wr_point;
irq_value = hisi_sas_read32(hisi_hba, OQ_INT_SRC); irq_value = hisi_sas_read32(hisi_hba, OQ_INT_SRC);
hisi_sas_write32(hisi_hba, OQ_INT_SRC, 1 << queue); hisi_sas_write32(hisi_hba, OQ_INT_SRC, 1 << queue);
rd_point = hisi_sas_read32(hisi_hba,
COMPL_Q_0_RD_PTR + (0x14 * queue));
wr_point = hisi_sas_read32(hisi_hba, wr_point = hisi_sas_read32(hisi_hba,
COMPL_Q_0_WR_PTR + (0x14 * queue)); COMPL_Q_0_WR_PTR + (0x14 * queue));
@ -1600,6 +1589,7 @@ static irqreturn_t cq_interrupt_v1_hw(int irq, void *p)
} }
/* update rd_point */ /* update rd_point */
cq->rd_point = rd_point;
hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + (0x14 * queue), rd_point); hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + (0x14 * queue), rd_point);
return IRQ_HANDLED; return IRQ_HANDLED;
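The interrupt-path change above keeps the completion-queue read pointer in the new cq->rd_point field instead of re-reading COMPL_Q_0_RD_PTR from the chip on every interrupt. A condensed sketch of the resulting consume loop, using names from the hunks (illustrative only, slot handling elided):

	u32 rd = cq->rd_point;
	u32 wr = hisi_sas_read32(hisi_hba, COMPL_Q_0_WR_PTR + 0x14 * queue);

	while (rd != wr) {
		/* ... complete the slot referenced by entry 'rd' ... */
		rd = (rd + 1) % HISI_SAS_QUEUE_SLOTS;
	}

	/* publish the consumed position back to the hardware */
	cq->rd_point = rd;
	hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + 0x14 * queue, rd);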


@ -117,6 +117,8 @@
#define SL_CONTROL (PORT_BASE + 0x94) #define SL_CONTROL (PORT_BASE + 0x94)
#define SL_CONTROL_NOTIFY_EN_OFF 0 #define SL_CONTROL_NOTIFY_EN_OFF 0
#define SL_CONTROL_NOTIFY_EN_MSK (0x1 << SL_CONTROL_NOTIFY_EN_OFF) #define SL_CONTROL_NOTIFY_EN_MSK (0x1 << SL_CONTROL_NOTIFY_EN_OFF)
#define SL_CONTROL_CTA_OFF 17
#define SL_CONTROL_CTA_MSK (0x1 << SL_CONTROL_CTA_OFF)
#define TX_ID_DWORD0 (PORT_BASE + 0x9c) #define TX_ID_DWORD0 (PORT_BASE + 0x9c)
#define TX_ID_DWORD1 (PORT_BASE + 0xa0) #define TX_ID_DWORD1 (PORT_BASE + 0xa0)
#define TX_ID_DWORD2 (PORT_BASE + 0xa4) #define TX_ID_DWORD2 (PORT_BASE + 0xa4)
@ -124,6 +126,9 @@
#define TX_ID_DWORD4 (PORT_BASE + 0xaC) #define TX_ID_DWORD4 (PORT_BASE + 0xaC)
#define TX_ID_DWORD5 (PORT_BASE + 0xb0) #define TX_ID_DWORD5 (PORT_BASE + 0xb0)
#define TX_ID_DWORD6 (PORT_BASE + 0xb4) #define TX_ID_DWORD6 (PORT_BASE + 0xb4)
#define TXID_AUTO (PORT_BASE + 0xb8)
#define TXID_AUTO_CT3_OFF 1
#define TXID_AUTO_CT3_MSK (0x1 << TXID_AUTO_CT3_OFF)
#define RX_IDAF_DWORD0 (PORT_BASE + 0xc4) #define RX_IDAF_DWORD0 (PORT_BASE + 0xc4)
#define RX_IDAF_DWORD1 (PORT_BASE + 0xc8) #define RX_IDAF_DWORD1 (PORT_BASE + 0xc8)
#define RX_IDAF_DWORD2 (PORT_BASE + 0xcc) #define RX_IDAF_DWORD2 (PORT_BASE + 0xcc)
@ -174,6 +179,10 @@
/* HW dma structures */ /* HW dma structures */
/* Delivery queue header */ /* Delivery queue header */
/* dw0 */ /* dw0 */
#define CMD_HDR_ABORT_FLAG_OFF 0
#define CMD_HDR_ABORT_FLAG_MSK (0x3 << CMD_HDR_ABORT_FLAG_OFF)
#define CMD_HDR_ABORT_DEVICE_TYPE_OFF 2
#define CMD_HDR_ABORT_DEVICE_TYPE_MSK (0x1 << CMD_HDR_ABORT_DEVICE_TYPE_OFF)
#define CMD_HDR_RESP_REPORT_OFF 5 #define CMD_HDR_RESP_REPORT_OFF 5
#define CMD_HDR_RESP_REPORT_MSK (0x1 << CMD_HDR_RESP_REPORT_OFF) #define CMD_HDR_RESP_REPORT_MSK (0x1 << CMD_HDR_RESP_REPORT_OFF)
#define CMD_HDR_TLR_CTRL_OFF 6 #define CMD_HDR_TLR_CTRL_OFF 6
@ -214,6 +223,8 @@
#define CMD_HDR_DIF_SGL_LEN_MSK (0xffff << CMD_HDR_DIF_SGL_LEN_OFF) #define CMD_HDR_DIF_SGL_LEN_MSK (0xffff << CMD_HDR_DIF_SGL_LEN_OFF)
#define CMD_HDR_DATA_SGL_LEN_OFF 16 #define CMD_HDR_DATA_SGL_LEN_OFF 16
#define CMD_HDR_DATA_SGL_LEN_MSK (0xffff << CMD_HDR_DATA_SGL_LEN_OFF) #define CMD_HDR_DATA_SGL_LEN_MSK (0xffff << CMD_HDR_DATA_SGL_LEN_OFF)
#define CMD_HDR_ABORT_IPTT_OFF 16
#define CMD_HDR_ABORT_IPTT_MSK (0xffff << CMD_HDR_ABORT_IPTT_OFF)
/* Completion header */ /* Completion header */
/* dw0 */ /* dw0 */
@ -221,6 +232,13 @@
#define CMPLT_HDR_RSPNS_XFRD_MSK (0x1 << CMPLT_HDR_RSPNS_XFRD_OFF) #define CMPLT_HDR_RSPNS_XFRD_MSK (0x1 << CMPLT_HDR_RSPNS_XFRD_OFF)
#define CMPLT_HDR_ERX_OFF 12 #define CMPLT_HDR_ERX_OFF 12
#define CMPLT_HDR_ERX_MSK (0x1 << CMPLT_HDR_ERX_OFF) #define CMPLT_HDR_ERX_MSK (0x1 << CMPLT_HDR_ERX_OFF)
#define CMPLT_HDR_ABORT_STAT_OFF 13
#define CMPLT_HDR_ABORT_STAT_MSK (0x7 << CMPLT_HDR_ABORT_STAT_OFF)
/* abort_stat */
#define STAT_IO_NOT_VALID 0x1
#define STAT_IO_NO_DEVICE 0x2
#define STAT_IO_COMPLETE 0x3
#define STAT_IO_ABORTED 0x4
/* dw1 */ /* dw1 */
#define CMPLT_HDR_IPTT_OFF 0 #define CMPLT_HDR_IPTT_OFF 0
#define CMPLT_HDR_IPTT_MSK (0xffff << CMPLT_HDR_IPTT_OFF) #define CMPLT_HDR_IPTT_MSK (0xffff << CMPLT_HDR_IPTT_OFF)
@ -549,25 +567,17 @@ static void config_id_frame_v2_hw(struct hisi_hba *hisi_hba, int phy_no)
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD0, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD0,
__swab32(identify_buffer[0])); __swab32(identify_buffer[0]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD1, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD1,
identify_buffer[2]); __swab32(identify_buffer[1]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD2, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD2,
identify_buffer[1]); __swab32(identify_buffer[2]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD3, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD3,
identify_buffer[4]); __swab32(identify_buffer[3]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD4, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD4,
identify_buffer[3]); __swab32(identify_buffer[4]));
hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD5, hisi_sas_phy_write32(hisi_hba, phy_no, TX_ID_DWORD5,
__swab32(identify_buffer[5])); __swab32(identify_buffer[5]));
} }
static void init_id_frame_v2_hw(struct hisi_hba *hisi_hba)
{
int i;
for (i = 0; i < hisi_hba->n_phy; i++)
config_id_frame_v2_hw(hisi_hba, i);
}
static void setup_itct_v2_hw(struct hisi_hba *hisi_hba, static void setup_itct_v2_hw(struct hisi_hba *hisi_hba,
struct hisi_sas_device *sas_dev) struct hisi_sas_device *sas_dev)
{ {
@ -589,6 +599,7 @@ static void setup_itct_v2_hw(struct hisi_hba *hisi_hba,
qw0 = HISI_SAS_DEV_TYPE_SSP << ITCT_HDR_DEV_TYPE_OFF; qw0 = HISI_SAS_DEV_TYPE_SSP << ITCT_HDR_DEV_TYPE_OFF;
break; break;
case SAS_SATA_DEV: case SAS_SATA_DEV:
case SAS_SATA_PENDING:
if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
qw0 = HISI_SAS_DEV_TYPE_STP << ITCT_HDR_DEV_TYPE_OFF; qw0 = HISI_SAS_DEV_TYPE_STP << ITCT_HDR_DEV_TYPE_OFF;
else else
@ -672,9 +683,7 @@ static int reset_hw_v2_hw(struct hisi_hba *hisi_hba)
else else
reset_val = 0x7ffff; reset_val = 0x7ffff;
/* Disable all of the DQ */ hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0);
for (i = 0; i < HISI_SAS_MAX_QUEUES; i++)
hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0);
/* Disable all of the PHYs */ /* Disable all of the PHYs */
for (i = 0; i < hisi_hba->n_phy; i++) { for (i = 0; i < hisi_hba->n_phy; i++) {
@ -810,6 +819,8 @@ static void init_reg_v2_hw(struct hisi_hba *hisi_hba)
hisi_sas_phy_write32(hisi_hba, i, PROG_PHY_LINK_RATE, 0x855); hisi_sas_phy_write32(hisi_hba, i, PROG_PHY_LINK_RATE, 0x855);
hisi_sas_phy_write32(hisi_hba, i, SAS_PHY_CTRL, 0x30b9908); hisi_sas_phy_write32(hisi_hba, i, SAS_PHY_CTRL, 0x30b9908);
hisi_sas_phy_write32(hisi_hba, i, SL_TOUT_CFG, 0x7d7d7d7d); hisi_sas_phy_write32(hisi_hba, i, SL_TOUT_CFG, 0x7d7d7d7d);
hisi_sas_phy_write32(hisi_hba, i, SL_CONTROL, 0x0);
hisi_sas_phy_write32(hisi_hba, i, TXID_AUTO, 0x2);
hisi_sas_phy_write32(hisi_hba, i, DONE_RECEIVED_TIME, 0x10); hisi_sas_phy_write32(hisi_hba, i, DONE_RECEIVED_TIME, 0x10);
hisi_sas_phy_write32(hisi_hba, i, CHL_INT0, 0xffffffff); hisi_sas_phy_write32(hisi_hba, i, CHL_INT0, 0xffffffff);
hisi_sas_phy_write32(hisi_hba, i, CHL_INT1, 0xffffffff); hisi_sas_phy_write32(hisi_hba, i, CHL_INT1, 0xffffffff);
@ -901,8 +912,6 @@ static int hw_init_v2_hw(struct hisi_hba *hisi_hba)
msleep(100); msleep(100);
init_reg_v2_hw(hisi_hba); init_reg_v2_hw(hisi_hba);
init_id_frame_v2_hw(hisi_hba);
return 0; return 0;
} }
@ -952,14 +961,8 @@ static void start_phys_v2_hw(unsigned long data)
static void phys_init_v2_hw(struct hisi_hba *hisi_hba) static void phys_init_v2_hw(struct hisi_hba *hisi_hba)
{ {
int i;
struct timer_list *timer = &hisi_hba->timer; struct timer_list *timer = &hisi_hba->timer;
for (i = 0; i < hisi_hba->n_phy; i++) {
hisi_sas_phy_write32(hisi_hba, i, CHL_INT2_MSK, 0x6a);
hisi_sas_phy_read32(hisi_hba, i, CHL_INT2_MSK);
}
setup_timer(timer, start_phys_v2_hw, (unsigned long)hisi_hba); setup_timer(timer, start_phys_v2_hw, (unsigned long)hisi_hba);
mod_timer(timer, jiffies + HZ); mod_timer(timer, jiffies + HZ);
} }
@ -1010,12 +1013,13 @@ static int get_wideport_bitmap_v2_hw(struct hisi_hba *hisi_hba, int port_id)
static int get_free_slot_v2_hw(struct hisi_hba *hisi_hba, int *q, int *s) static int get_free_slot_v2_hw(struct hisi_hba *hisi_hba, int *q, int *s)
{ {
struct device *dev = &hisi_hba->pdev->dev; struct device *dev = &hisi_hba->pdev->dev;
struct hisi_sas_dq *dq;
u32 r, w; u32 r, w;
int queue = hisi_hba->queue; int queue = hisi_hba->queue;
while (1) { while (1) {
w = hisi_sas_read32_relaxed(hisi_hba, dq = &hisi_hba->dq[queue];
DLVRY_Q_0_WR_PTR + (queue * 0x14)); w = dq->wr_point;
r = hisi_sas_read32_relaxed(hisi_hba, r = hisi_sas_read32_relaxed(hisi_hba,
DLVRY_Q_0_RD_PTR + (queue * 0x14)); DLVRY_Q_0_RD_PTR + (queue * 0x14));
if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) { if (r == (w+1) % HISI_SAS_QUEUE_SLOTS) {
@ -1038,9 +1042,11 @@ static void start_delivery_v2_hw(struct hisi_hba *hisi_hba)
{ {
int dlvry_queue = hisi_hba->slot_prep->dlvry_queue; int dlvry_queue = hisi_hba->slot_prep->dlvry_queue;
int dlvry_queue_slot = hisi_hba->slot_prep->dlvry_queue_slot; int dlvry_queue_slot = hisi_hba->slot_prep->dlvry_queue_slot;
struct hisi_sas_dq *dq = &hisi_hba->dq[dlvry_queue];
dq->wr_point = ++dlvry_queue_slot % HISI_SAS_QUEUE_SLOTS;
hisi_sas_write32(hisi_hba, DLVRY_Q_0_WR_PTR + (dlvry_queue * 0x14), hisi_sas_write32(hisi_hba, DLVRY_Q_0_WR_PTR + (dlvry_queue * 0x14),
++dlvry_queue_slot % HISI_SAS_QUEUE_SLOTS); dq->wr_point);
} }
static int prep_prd_sge_v2_hw(struct hisi_hba *hisi_hba, static int prep_prd_sge_v2_hw(struct hisi_hba *hisi_hba,
@ -1563,6 +1569,30 @@ slot_complete_v2_hw(struct hisi_hba *hisi_hba, struct hisi_sas_slot *slot,
goto out; goto out;
} }
/* Use SAS+TMF status codes */
switch ((complete_hdr->dw0 & CMPLT_HDR_ABORT_STAT_MSK)
>> CMPLT_HDR_ABORT_STAT_OFF) {
case STAT_IO_ABORTED:
/* this io has been aborted by abort command */
ts->stat = SAS_ABORTED_TASK;
goto out;
case STAT_IO_COMPLETE:
/* internal abort command complete */
ts->stat = TMF_RESP_FUNC_COMPLETE;
goto out;
case STAT_IO_NO_DEVICE:
ts->stat = TMF_RESP_FUNC_COMPLETE;
goto out;
case STAT_IO_NOT_VALID:
/* abort single io, controller don't find
* the io need to abort
*/
ts->stat = TMF_RESP_FUNC_FAILED;
goto out;
default:
break;
}
if ((complete_hdr->dw0 & CMPLT_HDR_ERX_MSK) && if ((complete_hdr->dw0 & CMPLT_HDR_ERX_MSK) &&
(!(complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK))) { (!(complete_hdr->dw0 & CMPLT_HDR_RSPNS_XFRD_MSK))) {
@ -1775,6 +1805,32 @@ static int prep_ata_v2_hw(struct hisi_hba *hisi_hba,
return 0; return 0;
} }
static int prep_abort_v2_hw(struct hisi_hba *hisi_hba,
struct hisi_sas_slot *slot,
int device_id, int abort_flag, int tag_to_abort)
{
struct sas_task *task = slot->task;
struct domain_device *dev = task->dev;
struct hisi_sas_cmd_hdr *hdr = slot->cmd_hdr;
struct hisi_sas_port *port = slot->port;
/* dw0 */
hdr->dw0 = cpu_to_le32((5 << CMD_HDR_CMD_OFF) | /*abort*/
(port->id << CMD_HDR_PORT_OFF) |
((dev_is_sata(dev) ? 1:0) <<
CMD_HDR_ABORT_DEVICE_TYPE_OFF) |
(abort_flag << CMD_HDR_ABORT_FLAG_OFF));
/* dw1 */
hdr->dw1 = cpu_to_le32(device_id << CMD_HDR_DEV_ID_OFF);
/* dw7 */
hdr->dw7 = cpu_to_le32(tag_to_abort << CMD_HDR_ABORT_IPTT_OFF);
hdr->transfer_tags = cpu_to_le32(slot->idx);
return 0;
}
static int phy_up_v2_hw(int phy_no, struct hisi_hba *hisi_hba) static int phy_up_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
{ {
int i, res = 0; int i, res = 0;
@ -1818,9 +1874,6 @@ static int phy_up_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
frame_rcvd[i] = __swab32(idaf); frame_rcvd[i] = __swab32(idaf);
} }
/* Get the linkrates */
link_rate = hisi_sas_read32(hisi_hba, PHY_CONN_RATE);
link_rate = (link_rate >> (phy_no * 4)) & 0xf;
sas_phy->linkrate = link_rate; sas_phy->linkrate = link_rate;
hard_phy_linkrate = hisi_sas_phy_read32(hisi_hba, phy_no, hard_phy_linkrate = hisi_sas_phy_read32(hisi_hba, phy_no,
HARD_PHY_LINKRATE); HARD_PHY_LINKRATE);
@ -1855,16 +1908,21 @@ end:
static int phy_down_v2_hw(int phy_no, struct hisi_hba *hisi_hba) static int phy_down_v2_hw(int phy_no, struct hisi_hba *hisi_hba)
{ {
int res = 0; int res = 0;
u32 phy_cfg, phy_state; u32 phy_state, sl_ctrl, txid_auto;
hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 1); hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 1);
phy_cfg = hisi_sas_phy_read32(hisi_hba, phy_no, PHY_CFG);
phy_state = hisi_sas_read32(hisi_hba, PHY_STATE); phy_state = hisi_sas_read32(hisi_hba, PHY_STATE);
hisi_sas_phy_down(hisi_hba, phy_no, (phy_state & 1 << phy_no) ? 1 : 0); hisi_sas_phy_down(hisi_hba, phy_no, (phy_state & 1 << phy_no) ? 1 : 0);
sl_ctrl = hisi_sas_phy_read32(hisi_hba, phy_no, SL_CONTROL);
hisi_sas_phy_write32(hisi_hba, phy_no, SL_CONTROL,
sl_ctrl & ~SL_CONTROL_CTA_MSK);
txid_auto = hisi_sas_phy_read32(hisi_hba, phy_no, TXID_AUTO);
hisi_sas_phy_write32(hisi_hba, phy_no, TXID_AUTO,
txid_auto | TXID_AUTO_CT3_MSK);
hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0, CHL_INT0_NOT_RDY_MSK); hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT0, CHL_INT0_NOT_RDY_MSK);
hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 0); hisi_sas_phy_write32(hisi_hba, phy_no, PHYCTRL_NOT_RDY_MSK, 0);
@ -1986,7 +2044,7 @@ static irqreturn_t cq_interrupt_v2_hw(int irq_no, void *p)
struct hisi_sas_slot *slot; struct hisi_sas_slot *slot;
struct hisi_sas_itct *itct; struct hisi_sas_itct *itct;
struct hisi_sas_complete_v2_hdr *complete_queue; struct hisi_sas_complete_v2_hdr *complete_queue;
u32 irq_value, rd_point, wr_point, dev_id; u32 irq_value, rd_point = cq->rd_point, wr_point, dev_id;
int queue = cq->id; int queue = cq->id;
complete_queue = hisi_hba->complete_hdr[queue]; complete_queue = hisi_hba->complete_hdr[queue];
@ -1994,8 +2052,6 @@ static irqreturn_t cq_interrupt_v2_hw(int irq_no, void *p)
hisi_sas_write32(hisi_hba, OQ_INT_SRC, 1 << queue); hisi_sas_write32(hisi_hba, OQ_INT_SRC, 1 << queue);
rd_point = hisi_sas_read32(hisi_hba, COMPL_Q_0_RD_PTR +
(0x14 * queue));
wr_point = hisi_sas_read32(hisi_hba, COMPL_Q_0_WR_PTR + wr_point = hisi_sas_read32(hisi_hba, COMPL_Q_0_WR_PTR +
(0x14 * queue)); (0x14 * queue));
@ -2043,6 +2099,7 @@ static irqreturn_t cq_interrupt_v2_hw(int irq_no, void *p)
} }
/* update rd_point */ /* update rd_point */
cq->rd_point = rd_point;
hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + (0x14 * queue), rd_point); hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + (0x14 * queue), rd_point);
return IRQ_HANDLED; return IRQ_HANDLED;
} }
@ -2239,6 +2296,7 @@ static const struct hisi_sas_hw hisi_sas_v2_hw = {
.prep_smp = prep_smp_v2_hw, .prep_smp = prep_smp_v2_hw,
.prep_ssp = prep_ssp_v2_hw, .prep_ssp = prep_ssp_v2_hw,
.prep_stp = prep_ata_v2_hw, .prep_stp = prep_ata_v2_hw,
.prep_abort = prep_abort_v2_hw,
.get_free_slot = get_free_slot_v2_hw, .get_free_slot = get_free_slot_v2_hw,
.start_delivery = start_delivery_v2_hw, .start_delivery = start_delivery_v2_hw,
.slot_complete = slot_complete_v2_hw, .slot_complete = slot_complete_v2_hw,


@ -246,10 +246,6 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
shost->dma_dev = dma_dev; shost->dma_dev = dma_dev;
error = device_add(&shost->shost_gendev);
if (error)
goto out_destroy_freelist;
/* /*
* Increase usage count temporarily here so that calling * Increase usage count temporarily here so that calling
* scsi_autopm_put_host() will trigger runtime idle if there is * scsi_autopm_put_host() will trigger runtime idle if there is
@ -260,6 +256,10 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
pm_runtime_enable(&shost->shost_gendev); pm_runtime_enable(&shost->shost_gendev);
device_enable_async_suspend(&shost->shost_gendev); device_enable_async_suspend(&shost->shost_gendev);
error = device_add(&shost->shost_gendev);
if (error)
goto out_destroy_freelist;
scsi_host_set_state(shost, SHOST_RUNNING); scsi_host_set_state(shost, SHOST_RUNNING);
get_device(shost->shost_gendev.parent); get_device(shost->shost_gendev.parent);
@ -309,6 +309,10 @@ int scsi_add_host_with_dma(struct Scsi_Host *shost, struct device *dev,
out_del_gendev: out_del_gendev:
device_del(&shost->shost_gendev); device_del(&shost->shost_gendev);
out_destroy_freelist: out_destroy_freelist:
device_disable_async_suspend(&shost->shost_gendev);
pm_runtime_disable(&shost->shost_gendev);
pm_runtime_set_suspended(&shost->shost_gendev);
pm_runtime_put_noidle(&shost->shost_gendev);
scsi_destroy_command_freelist(shost); scsi_destroy_command_freelist(shost);
out_destroy_tags: out_destroy_tags:
if (shost_use_blk_mq(shost)) if (shost_use_blk_mq(shost))


@ -293,6 +293,8 @@ static int detect_controller_lockup(struct ctlr_info *h);
static void hpsa_disable_rld_caching(struct ctlr_info *h); static void hpsa_disable_rld_caching(struct ctlr_info *h);
static inline int hpsa_scsi_do_report_phys_luns(struct ctlr_info *h, static inline int hpsa_scsi_do_report_phys_luns(struct ctlr_info *h,
struct ReportExtendedLUNdata *buf, int bufsize); struct ReportExtendedLUNdata *buf, int bufsize);
static bool hpsa_vpd_page_supported(struct ctlr_info *h,
unsigned char scsi3addr[], u8 page);
static int hpsa_luns_changed(struct ctlr_info *h); static int hpsa_luns_changed(struct ctlr_info *h);
static bool hpsa_cmd_dev_match(struct ctlr_info *h, struct CommandList *c, static bool hpsa_cmd_dev_match(struct ctlr_info *h, struct CommandList *c,
struct hpsa_scsi_dev_t *dev, struct hpsa_scsi_dev_t *dev,
@ -2388,7 +2390,8 @@ static void hpsa_cmd_free_and_done(struct ctlr_info *h,
struct CommandList *c, struct scsi_cmnd *cmd) struct CommandList *c, struct scsi_cmnd *cmd)
{ {
hpsa_cmd_resolve_and_free(h, c); hpsa_cmd_resolve_and_free(h, c);
cmd->scsi_done(cmd); if (cmd && cmd->scsi_done)
cmd->scsi_done(cmd);
} }
static void hpsa_retry_cmd(struct ctlr_info *h, struct CommandList *c) static void hpsa_retry_cmd(struct ctlr_info *h, struct CommandList *c)
@ -2489,7 +2492,17 @@ static void complete_scsi_command(struct CommandList *cp)
ei = cp->err_info; ei = cp->err_info;
cmd = cp->scsi_cmd; cmd = cp->scsi_cmd;
h = cp->h; h = cp->h;
if (!cmd->device) {
cmd->result = DID_NO_CONNECT << 16;
return hpsa_cmd_free_and_done(h, cp, cmd);
}
dev = cmd->device->hostdata; dev = cmd->device->hostdata;
if (!dev) {
cmd->result = DID_NO_CONNECT << 16;
return hpsa_cmd_free_and_done(h, cp, cmd);
}
c2 = &h->ioaccel2_cmd_pool[cp->cmdindex]; c2 = &h->ioaccel2_cmd_pool[cp->cmdindex];
scsi_dma_unmap(cmd); /* undo the DMA mappings */ scsi_dma_unmap(cmd); /* undo the DMA mappings */
@ -2504,8 +2517,15 @@ static void complete_scsi_command(struct CommandList *cp)
cmd->result = (DID_OK << 16); /* host byte */ cmd->result = (DID_OK << 16); /* host byte */
cmd->result |= (COMMAND_COMPLETE << 8); /* msg byte */ cmd->result |= (COMMAND_COMPLETE << 8); /* msg byte */
if (cp->cmd_type == CMD_IOACCEL2 || cp->cmd_type == CMD_IOACCEL1) if (cp->cmd_type == CMD_IOACCEL2 || cp->cmd_type == CMD_IOACCEL1) {
atomic_dec(&cp->phys_disk->ioaccel_cmds_out); if (dev->physical_device && dev->expose_device &&
dev->removed) {
cmd->result = DID_NO_CONNECT << 16;
return hpsa_cmd_free_and_done(h, cp, cmd);
}
if (likely(cp->phys_disk != NULL))
atomic_dec(&cp->phys_disk->ioaccel_cmds_out);
}
/* /*
* We check for lockup status here as it may be set for * We check for lockup status here as it may be set for
@ -3074,11 +3094,19 @@ static void hpsa_get_raid_level(struct ctlr_info *h,
buf = kzalloc(64, GFP_KERNEL); buf = kzalloc(64, GFP_KERNEL);
if (!buf) if (!buf)
return; return;
rc = hpsa_scsi_do_inquiry(h, scsi3addr, VPD_PAGE | 0xC1, buf, 64);
if (!hpsa_vpd_page_supported(h, scsi3addr,
HPSA_VPD_LV_DEVICE_GEOMETRY))
goto exit;
rc = hpsa_scsi_do_inquiry(h, scsi3addr, VPD_PAGE |
HPSA_VPD_LV_DEVICE_GEOMETRY, buf, 64);
if (rc == 0) if (rc == 0)
*raid_level = buf[8]; *raid_level = buf[8];
if (*raid_level > RAID_UNKNOWN) if (*raid_level > RAID_UNKNOWN)
*raid_level = RAID_UNKNOWN; *raid_level = RAID_UNKNOWN;
exit:
kfree(buf); kfree(buf);
return; return;
} }
@ -3436,7 +3464,7 @@ static void hpsa_get_sas_address(struct ctlr_info *h, unsigned char *scsi3addr,
} }
/* Get a device id from inquiry page 0x83 */ /* Get a device id from inquiry page 0x83 */
static int hpsa_vpd_page_supported(struct ctlr_info *h, static bool hpsa_vpd_page_supported(struct ctlr_info *h,
unsigned char scsi3addr[], u8 page) unsigned char scsi3addr[], u8 page)
{ {
int rc; int rc;
@ -3446,7 +3474,7 @@ static int hpsa_vpd_page_supported(struct ctlr_info *h,
buf = kzalloc(256, GFP_KERNEL); buf = kzalloc(256, GFP_KERNEL);
if (!buf) if (!buf)
return 0; return false;
/* Get the size of the page list first */ /* Get the size of the page list first */
rc = hpsa_scsi_do_inquiry(h, scsi3addr, rc = hpsa_scsi_do_inquiry(h, scsi3addr,
@ -3473,10 +3501,10 @@ static int hpsa_vpd_page_supported(struct ctlr_info *h,
goto exit_supported; goto exit_supported;
exit_unsupported: exit_unsupported:
kfree(buf); kfree(buf);
return 0; return false;
exit_supported: exit_supported:
kfree(buf); kfree(buf);
return 1; return true;
} }
static void hpsa_get_ioaccel_status(struct ctlr_info *h, static void hpsa_get_ioaccel_status(struct ctlr_info *h,
@ -3525,18 +3553,25 @@ static int hpsa_get_device_id(struct ctlr_info *h, unsigned char *scsi3addr,
int rc; int rc;
unsigned char *buf; unsigned char *buf;
if (buflen > 16) /* Does controller have VPD for device id? */
buflen = 16; if (!hpsa_vpd_page_supported(h, scsi3addr, HPSA_VPD_LV_DEVICE_ID))
return 1; /* not supported */
buf = kzalloc(64, GFP_KERNEL); buf = kzalloc(64, GFP_KERNEL);
if (!buf) if (!buf)
return -ENOMEM; return -ENOMEM;
rc = hpsa_scsi_do_inquiry(h, scsi3addr, VPD_PAGE | 0x83, buf, 64);
if (rc == 0) rc = hpsa_scsi_do_inquiry(h, scsi3addr, VPD_PAGE |
memcpy(device_id, &buf[index], buflen); HPSA_VPD_LV_DEVICE_ID, buf, 64);
if (rc == 0) {
if (buflen > 16)
buflen = 16;
memcpy(device_id, &buf[8], buflen);
}
kfree(buf); kfree(buf);
return rc != 0; return rc; /*0 - got id, otherwise, didn't */
} }
static int hpsa_scsi_do_report_luns(struct ctlr_info *h, int logical, static int hpsa_scsi_do_report_luns(struct ctlr_info *h, int logical,
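The device-id change above copies from buf[8] because an INQUIRY VPD page 0x83 response carries a four-byte page header followed by designation descriptors that each have their own four-byte header, so the first identifier's payload starts at offset 8. A sketch of the assumed layout (per SPC; illustrative only):

	/* VPD page 0x83 byte offsets:
	 *   0..3  page header (page code, page length)
	 *   4..7  header of the first designation descriptor
	 *   8..   the designator itself - the bytes copied into device_id */
	memcpy(device_id, &buf[8], buflen);	/* buflen already capped at 16 */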
@ -3807,8 +3842,15 @@ static int hpsa_update_device_info(struct ctlr_info *h,
sizeof(this_device->model)); sizeof(this_device->model));
memset(this_device->device_id, 0, memset(this_device->device_id, 0,
sizeof(this_device->device_id)); sizeof(this_device->device_id));
hpsa_get_device_id(h, scsi3addr, this_device->device_id, 8, if (hpsa_get_device_id(h, scsi3addr, this_device->device_id, 8,
sizeof(this_device->device_id)); sizeof(this_device->device_id)))
dev_err(&h->pdev->dev,
"hpsa%d: %s: can't get device id for host %d:C0:T%d:L%d\t%s\t%.16s\n",
h->ctlr, __func__,
h->scsi_host->host_no,
this_device->target, this_device->lun,
scsi_device_type(this_device->devtype),
this_device->model);
if ((this_device->devtype == TYPE_DISK || if ((this_device->devtype == TYPE_DISK ||
this_device->devtype == TYPE_ZBC) && this_device->devtype == TYPE_ZBC) &&
@ -4034,7 +4076,17 @@ static void hpsa_get_ioaccel_drive_info(struct ctlr_info *h,
struct bmic_identify_physical_device *id_phys) struct bmic_identify_physical_device *id_phys)
{ {
int rc; int rc;
struct ext_report_lun_entry *rle = &rlep->LUN[rle_index]; struct ext_report_lun_entry *rle;
/*
* external targets don't support BMIC
*/
if (dev->external) {
dev->queue_depth = 7;
return;
}
rle = &rlep->LUN[rle_index];
dev->ioaccel_handle = rle->ioaccel_handle; dev->ioaccel_handle = rle->ioaccel_handle;
if ((rle->device_flags & 0x08) && dev->ioaccel_handle) if ((rle->device_flags & 0x08) && dev->ioaccel_handle)
@ -4270,6 +4322,11 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h)
lunaddrbytes = figure_lunaddrbytes(h, raid_ctlr_position, lunaddrbytes = figure_lunaddrbytes(h, raid_ctlr_position,
i, nphysicals, nlogicals, physdev_list, logdev_list); i, nphysicals, nlogicals, physdev_list, logdev_list);
/* Determine if this is a lun from an external target array */
tmpdevice->external =
figure_external_status(h, raid_ctlr_position, i,
nphysicals, nlocal_logicals);
/* /*
* Skip over some devices such as a spare. * Skip over some devices such as a spare.
*/ */
@ -4295,11 +4352,6 @@ static void hpsa_update_scsi_devices(struct ctlr_info *h)
continue; continue;
} }
/* Determine if this is a lun from an external target array */
tmpdevice->external =
figure_external_status(h, raid_ctlr_position, i,
nphysicals, nlocal_logicals);
figure_bus_target_lun(h, lunaddrbytes, tmpdevice); figure_bus_target_lun(h, lunaddrbytes, tmpdevice);
hpsa_update_device_supports_aborts(h, tmpdevice, lunaddrbytes); hpsa_update_device_supports_aborts(h, tmpdevice, lunaddrbytes);
this_device = currentsd[ncurrent]; this_device = currentsd[ncurrent];
@ -4513,7 +4565,9 @@ static int fixup_ioaccel_cdb(u8 *cdb, int *cdb_len)
case READ_6: case READ_6:
case READ_12: case READ_12:
if (*cdb_len == 6) { if (*cdb_len == 6) {
block = get_unaligned_be16(&cdb[2]); block = (((cdb[1] & 0x1F) << 16) |
(cdb[2] << 8) |
cdb[3]);
block_cnt = cdb[4]; block_cnt = cdb[4];
if (block_cnt == 0) if (block_cnt == 0)
block_cnt = 256; block_cnt = 256;
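The fixup_ioaccel_cdb hunk above, together with the matching changes later in set_encrypt_ioaccel2 and the RAID-map path, restores the full 6-byte CDB logical block address: a READ(6)/WRITE(6) LBA is 21 bits wide, spread over the low five bits of byte 1 plus bytes 2 and 3, so get_unaligned_be16(&cdb[2]) silently dropped the top five bits. A worked decode for illustration:

	/* 6-byte CDB: LBA = bits 4..0 of byte 1, then bytes 2 and 3;
	 * a transfer length of 0 means 256 blocks. */
	u32 lba     = ((cdb[1] & 0x1F) << 16) | (cdb[2] << 8) | cdb[3];
	u32 nblocks = cdb[4] ? cdb[4] : 256;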
@ -4638,6 +4692,9 @@ static int hpsa_scsi_ioaccel_direct_map(struct ctlr_info *h,
struct scsi_cmnd *cmd = c->scsi_cmd; struct scsi_cmnd *cmd = c->scsi_cmd;
struct hpsa_scsi_dev_t *dev = cmd->device->hostdata; struct hpsa_scsi_dev_t *dev = cmd->device->hostdata;
if (!dev)
return -1;
c->phys_disk = dev; c->phys_disk = dev;
return hpsa_scsi_ioaccel_queue_command(h, c, dev->ioaccel_handle, return hpsa_scsi_ioaccel_queue_command(h, c, dev->ioaccel_handle,
@ -4670,9 +4727,11 @@ static void set_encrypt_ioaccel2(struct ctlr_info *h,
*/ */
switch (cmd->cmnd[0]) { switch (cmd->cmnd[0]) {
/* Required? 6-byte cdbs eliminated by fixup_ioaccel_cdb */ /* Required? 6-byte cdbs eliminated by fixup_ioaccel_cdb */
case WRITE_6:
case READ_6: case READ_6:
first_block = get_unaligned_be16(&cmd->cmnd[2]); case WRITE_6:
first_block = (((cmd->cmnd[1] & 0x1F) << 16) |
(cmd->cmnd[2] << 8) |
cmd->cmnd[3]);
break; break;
case WRITE_10: case WRITE_10:
case READ_10: case READ_10:
@ -4714,6 +4773,12 @@ static int hpsa_scsi_ioaccel2_queue_command(struct ctlr_info *h,
u32 len; u32 len;
u32 total_len = 0; u32 total_len = 0;
if (!cmd->device)
return -1;
if (!cmd->device->hostdata)
return -1;
BUG_ON(scsi_sg_count(cmd) > h->maxsgentries); BUG_ON(scsi_sg_count(cmd) > h->maxsgentries);
if (fixup_ioaccel_cdb(cdb, &cdb_len)) { if (fixup_ioaccel_cdb(cdb, &cdb_len)) {
@ -4822,6 +4887,12 @@ static int hpsa_scsi_ioaccel_queue_command(struct ctlr_info *h,
struct CommandList *c, u32 ioaccel_handle, u8 *cdb, int cdb_len, struct CommandList *c, u32 ioaccel_handle, u8 *cdb, int cdb_len,
u8 *scsi3addr, struct hpsa_scsi_dev_t *phys_disk) u8 *scsi3addr, struct hpsa_scsi_dev_t *phys_disk)
{ {
if (!c->scsi_cmd->device)
return -1;
if (!c->scsi_cmd->device->hostdata)
return -1;
/* Try to honor the device's queue depth */ /* Try to honor the device's queue depth */
if (atomic_inc_return(&phys_disk->ioaccel_cmds_out) > if (atomic_inc_return(&phys_disk->ioaccel_cmds_out) >
phys_disk->queue_depth) { phys_disk->queue_depth) {
@ -4902,12 +4973,17 @@ static int hpsa_scsi_ioaccel_raid_map(struct ctlr_info *h,
#endif #endif
int offload_to_mirror; int offload_to_mirror;
if (!dev)
return -1;
/* check for valid opcode, get LBA and block count */ /* check for valid opcode, get LBA and block count */
switch (cmd->cmnd[0]) { switch (cmd->cmnd[0]) {
case WRITE_6: case WRITE_6:
is_write = 1; is_write = 1;
case READ_6: case READ_6:
first_block = get_unaligned_be16(&cmd->cmnd[2]); first_block = (((cmd->cmnd[1] & 0x1F) << 16) |
(cmd->cmnd[2] << 8) |
cmd->cmnd[3]);
block_cnt = cmd->cmnd[4]; block_cnt = cmd->cmnd[4];
if (block_cnt == 0) if (block_cnt == 0)
block_cnt = 256; block_cnt = 256;
@ -5314,6 +5390,9 @@ static int hpsa_ioaccel_submit(struct ctlr_info *h,
struct hpsa_scsi_dev_t *dev = cmd->device->hostdata; struct hpsa_scsi_dev_t *dev = cmd->device->hostdata;
int rc = IO_ACCEL_INELIGIBLE; int rc = IO_ACCEL_INELIGIBLE;
if (!dev)
return SCSI_MLQUEUE_HOST_BUSY;
cmd->host_scribble = (unsigned char *) c; cmd->host_scribble = (unsigned char *) c;
if (dev->offload_enabled) { if (dev->offload_enabled) {
@ -5852,6 +5931,9 @@ static void setup_ioaccel2_abort_cmd(struct CommandList *c, struct ctlr_info *h,
struct scsi_cmnd *scmd = command_to_abort->scsi_cmd; struct scsi_cmnd *scmd = command_to_abort->scsi_cmd;
struct hpsa_scsi_dev_t *dev = scmd->device->hostdata; struct hpsa_scsi_dev_t *dev = scmd->device->hostdata;
if (!dev)
return;
/* /*
* We're overlaying struct hpsa_tmf_struct on top of something which * We're overlaying struct hpsa_tmf_struct on top of something which
* was allocated as a struct io_accel2_cmd, so we better be sure it * was allocated as a struct io_accel2_cmd, so we better be sure it
@ -5935,7 +6017,7 @@ static int hpsa_send_reset_as_abort_ioaccel2(struct ctlr_info *h,
"Reset as abort: Resetting physical device at scsi3addr 0x%02x%02x%02x%02x%02x%02x%02x%02x\n", "Reset as abort: Resetting physical device at scsi3addr 0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
psa[0], psa[1], psa[2], psa[3], psa[0], psa[1], psa[2], psa[3],
psa[4], psa[5], psa[6], psa[7]); psa[4], psa[5], psa[6], psa[7]);
rc = hpsa_do_reset(h, dev, psa, HPSA_RESET_TYPE_TARGET, reply_queue); rc = hpsa_do_reset(h, dev, psa, HPSA_PHYS_TARGET_RESET, reply_queue);
if (rc != 0) { if (rc != 0) {
dev_warn(&h->pdev->dev, dev_warn(&h->pdev->dev,
"Reset as abort: Failed on physical device at scsi3addr 0x%02x%02x%02x%02x%02x%02x%02x%02x\n", "Reset as abort: Failed on physical device at scsi3addr 0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
@ -5972,6 +6054,9 @@ static int hpsa_send_abort_ioaccel2(struct ctlr_info *h,
struct io_accel2_cmd *c2; struct io_accel2_cmd *c2;
dev = abort->scsi_cmd->device->hostdata; dev = abort->scsi_cmd->device->hostdata;
if (!dev)
return -1;
if (!dev->offload_enabled && !dev->hba_ioaccel_enabled) if (!dev->offload_enabled && !dev->hba_ioaccel_enabled)
return -1; return -1;


@ -312,7 +312,6 @@ struct offline_device_entry {
#define HPSA_DEVICE_RESET_MSG 1 #define HPSA_DEVICE_RESET_MSG 1
#define HPSA_RESET_TYPE_CONTROLLER 0x00 #define HPSA_RESET_TYPE_CONTROLLER 0x00
#define HPSA_RESET_TYPE_BUS 0x01 #define HPSA_RESET_TYPE_BUS 0x01
#define HPSA_RESET_TYPE_TARGET 0x03
#define HPSA_RESET_TYPE_LUN 0x04 #define HPSA_RESET_TYPE_LUN 0x04
#define HPSA_PHYS_TARGET_RESET 0x99 /* not defined by cciss spec */ #define HPSA_PHYS_TARGET_RESET 0x99 /* not defined by cciss spec */
#define HPSA_MSG_SEND_RETRY_LIMIT 10 #define HPSA_MSG_SEND_RETRY_LIMIT 10


@ -157,6 +157,7 @@
/* VPD Inquiry types */ /* VPD Inquiry types */
#define HPSA_VPD_SUPPORTED_PAGES 0x00 #define HPSA_VPD_SUPPORTED_PAGES 0x00
#define HPSA_VPD_LV_DEVICE_ID 0x83
#define HPSA_VPD_LV_DEVICE_GEOMETRY 0xC1 #define HPSA_VPD_LV_DEVICE_GEOMETRY 0xC1
#define HPSA_VPD_LV_IOACCEL_STATUS 0xC2 #define HPSA_VPD_LV_IOACCEL_STATUS 0xC2
#define HPSA_VPD_LV_STATUS 0xC3 #define HPSA_VPD_LV_STATUS 0xC3


@ -52,6 +52,7 @@ static unsigned int max_requests = IBMVFC_MAX_REQUESTS_DEFAULT;
static unsigned int disc_threads = IBMVFC_MAX_DISC_THREADS; static unsigned int disc_threads = IBMVFC_MAX_DISC_THREADS;
static unsigned int ibmvfc_debug = IBMVFC_DEBUG; static unsigned int ibmvfc_debug = IBMVFC_DEBUG;
static unsigned int log_level = IBMVFC_DEFAULT_LOG_LEVEL; static unsigned int log_level = IBMVFC_DEFAULT_LOG_LEVEL;
static unsigned int cls3_error = IBMVFC_CLS3_ERROR;
static LIST_HEAD(ibmvfc_head); static LIST_HEAD(ibmvfc_head);
static DEFINE_SPINLOCK(ibmvfc_driver_lock); static DEFINE_SPINLOCK(ibmvfc_driver_lock);
static struct scsi_transport_template *ibmvfc_transport_template; static struct scsi_transport_template *ibmvfc_transport_template;
@ -86,6 +87,9 @@ MODULE_PARM_DESC(debug, "Enable driver debug information. "
module_param_named(log_level, log_level, uint, 0); module_param_named(log_level, log_level, uint, 0);
MODULE_PARM_DESC(log_level, "Set to 0 - 4 for increasing verbosity of device driver. " MODULE_PARM_DESC(log_level, "Set to 0 - 4 for increasing verbosity of device driver. "
"[Default=" __stringify(IBMVFC_DEFAULT_LOG_LEVEL) "]"); "[Default=" __stringify(IBMVFC_DEFAULT_LOG_LEVEL) "]");
module_param_named(cls3_error, cls3_error, uint, 0);
MODULE_PARM_DESC(cls3_error, "Enable FC Class 3 Error Recovery. "
"[Default=" __stringify(IBMVFC_CLS3_ERROR) "]");
static const struct { static const struct {
u16 status; u16 status;
@ -717,7 +721,6 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
spin_lock_irqsave(vhost->host->host_lock, flags); spin_lock_irqsave(vhost->host->host_lock, flags);
vhost->state = IBMVFC_NO_CRQ; vhost->state = IBMVFC_NO_CRQ;
vhost->logged_in = 0; vhost->logged_in = 0;
ibmvfc_set_host_action(vhost, IBMVFC_HOST_ACTION_NONE);
/* Clean out the queue */ /* Clean out the queue */
memset(crq->msgs, 0, PAGE_SIZE); memset(crq->msgs, 0, PAGE_SIZE);
@ -1335,6 +1338,9 @@ static int ibmvfc_map_sg_data(struct scsi_cmnd *scmd,
struct srp_direct_buf *data = &vfc_cmd->ioba; struct srp_direct_buf *data = &vfc_cmd->ioba;
struct ibmvfc_host *vhost = dev_get_drvdata(dev); struct ibmvfc_host *vhost = dev_get_drvdata(dev);
if (cls3_error)
vfc_cmd->flags |= cpu_to_be16(IBMVFC_CLASS_3_ERR);
sg_mapped = scsi_dma_map(scmd); sg_mapped = scsi_dma_map(scmd);
if (!sg_mapped) { if (!sg_mapped) {
vfc_cmd->flags |= cpu_to_be16(IBMVFC_NO_MEM_DESC); vfc_cmd->flags |= cpu_to_be16(IBMVFC_NO_MEM_DESC);
@ -3381,6 +3387,10 @@ static void ibmvfc_tgt_send_prli(struct ibmvfc_target *tgt)
prli->parms.type = IBMVFC_SCSI_FCP_TYPE; prli->parms.type = IBMVFC_SCSI_FCP_TYPE;
prli->parms.flags = cpu_to_be16(IBMVFC_PRLI_EST_IMG_PAIR); prli->parms.flags = cpu_to_be16(IBMVFC_PRLI_EST_IMG_PAIR);
prli->parms.service_parms = cpu_to_be32(IBMVFC_PRLI_INITIATOR_FUNC); prli->parms.service_parms = cpu_to_be32(IBMVFC_PRLI_INITIATOR_FUNC);
prli->parms.service_parms |= cpu_to_be32(IBMVFC_PRLI_READ_FCP_XFER_RDY_DISABLED);
if (cls3_error)
prli->parms.service_parms |= cpu_to_be32(IBMVFC_PRLI_RETRY);
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT); ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT);
if (ibmvfc_send_event(evt, vhost, default_timeout)) { if (ibmvfc_send_event(evt, vhost, default_timeout)) {


@ -54,6 +54,7 @@
#define IBMVFC_DEV_LOSS_TMO (5 * 60) #define IBMVFC_DEV_LOSS_TMO (5 * 60)
#define IBMVFC_DEFAULT_LOG_LEVEL 2 #define IBMVFC_DEFAULT_LOG_LEVEL 2
#define IBMVFC_MAX_CDB_LEN 16 #define IBMVFC_MAX_CDB_LEN 16
#define IBMVFC_CLS3_ERROR 0
/* /*
* Ensure we have resources for ERP and initialization: * Ensure we have resources for ERP and initialization:


@ -1606,8 +1606,6 @@ static void ibmvscsis_send_messages(struct scsi_info *vscsi)
if (!(vscsi->flags & RESPONSE_Q_DOWN)) { if (!(vscsi->flags & RESPONSE_Q_DOWN)) {
list_for_each_entry_safe(cmd, nxt, &vscsi->waiting_rsp, list) { list_for_each_entry_safe(cmd, nxt, &vscsi->waiting_rsp, list) {
pr_debug("send_messages cmd %p\n", cmd);
iue = cmd->iue; iue = cmd->iue;
crq->valid = VALID_CMD_RESP_EL; crq->valid = VALID_CMD_RESP_EL;
@ -1934,6 +1932,8 @@ static int ibmvscsis_drop_nexus(struct ibmvscsis_tport *tport)
/* /*
* Release the SCSI I_T Nexus to the emulated ibmvscsis Target Port * Release the SCSI I_T Nexus to the emulated ibmvscsis Target Port
*/ */
target_wait_for_sess_cmds(se_sess);
transport_deregister_session_configfs(se_sess);
transport_deregister_session(se_sess); transport_deregister_session(se_sess);
tport->ibmv_nexus = NULL; tport->ibmv_nexus = NULL;
kfree(nexus); kfree(nexus);
@ -1978,7 +1978,7 @@ static long ibmvscsis_srp_login(struct scsi_info *vscsi,
reason = SRP_LOGIN_REJ_MULTI_CHANNEL_UNSUPPORTED; reason = SRP_LOGIN_REJ_MULTI_CHANNEL_UNSUPPORTED;
else if (fmt->buffers & (~SUPPORTED_FORMATS)) else if (fmt->buffers & (~SUPPORTED_FORMATS))
reason = SRP_LOGIN_REJ_UNSUPPORTED_DESCRIPTOR_FMT; reason = SRP_LOGIN_REJ_UNSUPPORTED_DESCRIPTOR_FMT;
else if ((fmt->buffers | SUPPORTED_FORMATS) == 0) else if ((fmt->buffers & SUPPORTED_FORMATS) == 0)
reason = SRP_LOGIN_REJ_UNSUPPORTED_DESCRIPTOR_FMT; reason = SRP_LOGIN_REJ_UNSUPPORTED_DESCRIPTOR_FMT;
if (vscsi->state == SRP_PROCESSING) if (vscsi->state == SRP_PROCESSING)
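The login-descriptor check fixed above is a classic bitmask slip: with '|', the expression can never be zero once SUPPORTED_FORMATS has any bit set, so an initiator advertising none of the supported buffer formats was never rejected. The corrected test, shown again for reference:

	/* Reject a login whose requested buffer formats share no bits
	 * with the formats this target supports. */
	if ((fmt->buffers & SUPPORTED_FORMATS) == 0)
		reason = SRP_LOGIN_REJ_UNSUPPORTED_DESCRIPTOR_FMT;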
@ -2554,10 +2554,6 @@ static void ibmvscsis_parse_cmd(struct scsi_info *vscsi,
srp->lun.scsi_lun[0] &= 0x3f; srp->lun.scsi_lun[0] &= 0x3f;
pr_debug("calling submit_cmd, se_cmd %p, lun 0x%llx, cdb 0x%x, attr:%d\n",
&cmd->se_cmd, scsilun_to_int(&srp->lun), (int)srp->cdb[0],
attr);
rc = target_submit_cmd(&cmd->se_cmd, nexus->se_sess, srp->cdb, rc = target_submit_cmd(&cmd->se_cmd, nexus->se_sess, srp->cdb,
cmd->sense_buf, scsilun_to_int(&srp->lun), cmd->sense_buf, scsilun_to_int(&srp->lun),
data_len, attr, dir, 0); data_len, attr, dir, 0);
@ -3142,8 +3138,6 @@ static int ibmvscsis_rdma(struct ibmvscsis_cmd *cmd, struct scatterlist *sg,
long tx_len; long tx_len;
long rc = 0; long rc = 0;
pr_debug("rdma: dir %d, bytes 0x%x\n", dir, bytes);
if (bytes == 0) if (bytes == 0)
return 0; return 0;
@ -3192,12 +3186,6 @@ static int ibmvscsis_rdma(struct ibmvscsis_cmd *cmd, struct scatterlist *sg,
vscsi->dds.window[LOCAL].liobn, vscsi->dds.window[LOCAL].liobn,
server_ioba); server_ioba);
} else { } else {
/* write to client */
struct srp_cmd *srp = (struct srp_cmd *)iue->sbuf->buf;
if (!READ_CMD(srp->cdb))
print_hex_dump_bytes(" data:", DUMP_PREFIX_NONE,
sg_virt(sgp), buf_len);
/* The h_copy_rdma will cause phyp, running in another /* The h_copy_rdma will cause phyp, running in another
* partition, to read memory, so we need to make sure * partition, to read memory, so we need to make sure
* the data has been written out, hence these syncs. * the data has been written out, hence these syncs.
@ -3322,12 +3310,9 @@ cmd_work:
rc = ibmvscsis_trans_event(vscsi, crq); rc = ibmvscsis_trans_event(vscsi, crq);
} else if (vscsi->flags & TRANS_EVENT) { } else if (vscsi->flags & TRANS_EVENT) {
/* /*
* if a tranport event has occurred leave * if a transport event has occurred leave
* everything but transport events on the queue * everything but transport events on the queue
*/ *
pr_debug("handle_crq, ignoring\n");
/*
* need to decrement the queue index so we can * need to decrement the queue index so we can
* look at the elment again * look at the elment again
*/ */
@ -3461,6 +3446,7 @@ static int ibmvscsis_probe(struct vio_dev *vdev,
vscsi->map_ioba = dma_map_single(&vdev->dev, vscsi->map_buf, PAGE_SIZE, vscsi->map_ioba = dma_map_single(&vdev->dev, vscsi->map_buf, PAGE_SIZE,
DMA_BIDIRECTIONAL); DMA_BIDIRECTIONAL);
if (dma_mapping_error(&vdev->dev, vscsi->map_ioba)) { if (dma_mapping_error(&vdev->dev, vscsi->map_ioba)) {
rc = -ENOMEM;
dev_err(&vscsi->dev, "probe: error mapping command buffer\n"); dev_err(&vscsi->dev, "probe: error mapping command buffer\n");
goto free_buf; goto free_buf;
} }
@ -3693,12 +3679,9 @@ static void ibmvscsis_release_cmd(struct se_cmd *se_cmd)
se_cmd); se_cmd);
struct scsi_info *vscsi = cmd->adapter; struct scsi_info *vscsi = cmd->adapter;
pr_debug("release_cmd %p, flags %d\n", se_cmd, cmd->flags);
spin_lock_bh(&vscsi->intr_lock); spin_lock_bh(&vscsi->intr_lock);
/* Remove from active_q */ /* Remove from active_q */
list_del(&cmd->list); list_move_tail(&cmd->list, &vscsi->waiting_rsp);
list_add_tail(&cmd->list, &vscsi->waiting_rsp);
ibmvscsis_send_messages(vscsi); ibmvscsis_send_messages(vscsi);
spin_unlock_bh(&vscsi->intr_lock); spin_unlock_bh(&vscsi->intr_lock);
} }
@ -3715,9 +3698,6 @@ static int ibmvscsis_write_pending(struct se_cmd *se_cmd)
struct iu_entry *iue = cmd->iue; struct iu_entry *iue = cmd->iue;
int rc; int rc;
pr_debug("write_pending, se_cmd %p, length 0x%x\n",
se_cmd, se_cmd->data_length);
rc = srp_transfer_data(cmd, &vio_iu(iue)->srp.cmd, ibmvscsis_rdma, rc = srp_transfer_data(cmd, &vio_iu(iue)->srp.cmd, ibmvscsis_rdma,
1, 1); 1, 1);
if (rc) { if (rc) {
@ -3756,9 +3736,6 @@ static int ibmvscsis_queue_data_in(struct se_cmd *se_cmd)
uint len = 0; uint len = 0;
int rc; int rc;
pr_debug("queue_data_in, se_cmd %p, length 0x%x\n",
se_cmd, se_cmd->data_length);
rc = srp_transfer_data(cmd, &vio_iu(iue)->srp.cmd, ibmvscsis_rdma, 1, rc = srp_transfer_data(cmd, &vio_iu(iue)->srp.cmd, ibmvscsis_rdma, 1,
1); 1);
if (rc) { if (rc) {

File diff suppressed because it is too large


@ -1,412 +0,0 @@
/*
* in2000.h - Linux device driver definitions for the
* Always IN2000 ISA SCSI card.
*
* IMPORTANT: This file is for version 1.33 - 26/Aug/1998
*
* Copyright (c) 1996 John Shifflett, GeoLog Consulting
* john@geolog.com
* jshiffle@netcom.com
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef IN2000_H
#define IN2000_H
#include <asm/io.h>
#define PROC_INTERFACE /* add code for /proc/scsi/in2000/xxx interface */
#ifdef PROC_INTERFACE
#define PROC_STATISTICS /* add code for keeping various real time stats */
#endif
#define SYNC_DEBUG /* extra info on sync negotiation printed */
#define DEBUGGING_ON /* enable command-line debugging bitmask */
#define DEBUG_DEFAULTS 0 /* default bitmask - change from command-line */
#ifdef __i386__
#define FAST_READ_IO /* No problems with these on my machine */
#define FAST_WRITE_IO
#endif
#ifdef DEBUGGING_ON
#define DB(f,a) if (hostdata->args & (f)) a;
#define CHECK_NULL(p,s) /* if (!(p)) {printk("\n"); while (1) printk("NP:%s\r",(s));} */
#else
#define DB(f,a)
#define CHECK_NULL(p,s)
#endif
#define uchar unsigned char
#define read1_io(a) (inb(hostdata->io_base+(a)))
#define read2_io(a) (inw(hostdata->io_base+(a)))
#define write1_io(b,a) (outb((b),hostdata->io_base+(a)))
#define write2_io(w,a) (outw((w),hostdata->io_base+(a)))
#ifdef __i386__
/* These inline assembly defines are derived from a patch
* sent to me by Bill Earnest. He's done a lot of very
* valuable thinking, testing, and coding during his effort
* to squeeze more speed out of this driver. I really think
* that we are doing IO at close to the maximum now with
* the fifo. (And yes, insw uses 'edi' while outsw uses
* 'esi'. Thanks Bill!)
*/
#define FAST_READ2_IO() \
({ \
int __dummy_1,__dummy_2; \
__asm__ __volatile__ ("\n \
cld \n \
orl %%ecx, %%ecx \n \
jz 1f \n \
rep \n \
insw (%%dx),%%es:(%%edi) \n \
1: " \
: "=D" (sp) ,"=c" (__dummy_1) ,"=d" (__dummy_2) /* output */ \
: "2" (f), "0" (sp), "1" (i) /* input */ \
); /* trashed */ \
})
#define FAST_WRITE2_IO() \
({ \
int __dummy_1,__dummy_2; \
__asm__ __volatile__ ("\n \
cld \n \
orl %%ecx, %%ecx \n \
jz 1f \n \
rep \n \
outsw %%ds:(%%esi),(%%dx) \n \
1: " \
: "=S" (sp) ,"=c" (__dummy_1) ,"=d" (__dummy_2)/* output */ \
: "2" (f), "0" (sp), "1" (i) /* input */ \
); /* trashed */ \
})
#endif
/* IN2000 io_port offsets */
#define IO_WD_ASR 0x00 /* R - 3393 auxstat reg */
#define ASR_INT 0x80
#define ASR_LCI 0x40
#define ASR_BSY 0x20
#define ASR_CIP 0x10
#define ASR_PE 0x02
#define ASR_DBR 0x01
#define IO_WD_ADDR 0x00 /* W - 3393 address reg */
#define IO_WD_DATA 0x01 /* R/W - rest of 3393 regs */
#define IO_FIFO 0x02 /* R/W - in2000 dual-port fifo (16 bits) */
#define IN2000_FIFO_SIZE 2048 /* fifo capacity in bytes */
#define IO_CARD_RESET 0x03 /* W - in2000 start master reset */
#define IO_FIFO_COUNT 0x04 /* R - in2000 fifo counter */
#define IO_FIFO_WRITE 0x05 /* W - clear fifo counter, start write */
#define IO_FIFO_READ 0x07 /* W - start fifo read */
#define IO_LED_OFF 0x08 /* W - turn off in2000 activity LED */
#define IO_SWITCHES 0x08 /* R - read in2000 dip switch */
#define SW_ADDR0 0x01 /* bit 0 = bit 0 of index to io addr */
#define SW_ADDR1 0x02 /* bit 1 = bit 1 of index io addr */
#define SW_DISINT 0x04 /* bit 2 true if ints disabled */
#define SW_INT0 0x08 /* bit 3 = bit 0 of index to interrupt */
#define SW_INT1 0x10 /* bit 4 = bit 1 of index to interrupt */
#define SW_INT_SHIFT 3 /* shift right this amount to right justify int bits */
#define SW_SYNC_DOS5 0x20 /* bit 5 used by Always BIOS */
#define SW_FLOPPY 0x40 /* bit 6 true if floppy enabled */
#define SW_BIT7 0x80 /* bit 7 hardwired true (ground) */
#define IO_LED_ON 0x09 /* W - turn on in2000 activity LED */
#define IO_HARDWARE 0x0a /* R - read in2000 hardware rev, stop reset */
#define IO_INTR_MASK 0x0c /* W - in2000 interrupt mask reg */
#define IMASK_WD 0x01 /* WD33c93 interrupt mask */
#define IMASK_FIFO 0x02 /* FIFO interrupt mask */
/* wd register names */
#define WD_OWN_ID 0x00
#define WD_CONTROL 0x01
#define WD_TIMEOUT_PERIOD 0x02
#define WD_CDB_1 0x03
#define WD_CDB_2 0x04
#define WD_CDB_3 0x05
#define WD_CDB_4 0x06
#define WD_CDB_5 0x07
#define WD_CDB_6 0x08
#define WD_CDB_7 0x09
#define WD_CDB_8 0x0a
#define WD_CDB_9 0x0b
#define WD_CDB_10 0x0c
#define WD_CDB_11 0x0d
#define WD_CDB_12 0x0e
#define WD_TARGET_LUN 0x0f
#define WD_COMMAND_PHASE 0x10
#define WD_SYNCHRONOUS_TRANSFER 0x11
#define WD_TRANSFER_COUNT_MSB 0x12
#define WD_TRANSFER_COUNT 0x13
#define WD_TRANSFER_COUNT_LSB 0x14
#define WD_DESTINATION_ID 0x15
#define WD_SOURCE_ID 0x16
#define WD_SCSI_STATUS 0x17
#define WD_COMMAND 0x18
#define WD_DATA 0x19
#define WD_QUEUE_TAG 0x1a
#define WD_AUXILIARY_STATUS 0x1f
/* WD commands */
#define WD_CMD_RESET 0x00
#define WD_CMD_ABORT 0x01
#define WD_CMD_ASSERT_ATN 0x02
#define WD_CMD_NEGATE_ACK 0x03
#define WD_CMD_DISCONNECT 0x04
#define WD_CMD_RESELECT 0x05
#define WD_CMD_SEL_ATN 0x06
#define WD_CMD_SEL 0x07
#define WD_CMD_SEL_ATN_XFER 0x08
#define WD_CMD_SEL_XFER 0x09
#define WD_CMD_RESEL_RECEIVE 0x0a
#define WD_CMD_RESEL_SEND 0x0b
#define WD_CMD_WAIT_SEL_RECEIVE 0x0c
#define WD_CMD_TRANS_ADDR 0x18
#define WD_CMD_TRANS_INFO 0x20
#define WD_CMD_TRANSFER_PAD 0x21
#define WD_CMD_SBT_MODE 0x80
/* SCSI Bus Phases */
#define PHS_DATA_OUT 0x00
#define PHS_DATA_IN 0x01
#define PHS_COMMAND 0x02
#define PHS_STATUS 0x03
#define PHS_MESS_OUT 0x06
#define PHS_MESS_IN 0x07
/* Command Status Register definitions */
/* reset state interrupts */
#define CSR_RESET 0x00
#define CSR_RESET_AF 0x01
/* successful completion interrupts */
#define CSR_RESELECT 0x10
#define CSR_SELECT 0x11
#define CSR_SEL_XFER_DONE 0x16
#define CSR_XFER_DONE 0x18
/* paused or aborted interrupts */
#define CSR_MSGIN 0x20
#define CSR_SDP 0x21
#define CSR_SEL_ABORT 0x22
#define CSR_RESEL_ABORT 0x25
#define CSR_RESEL_ABORT_AM 0x27
#define CSR_ABORT 0x28
/* terminated interrupts */
#define CSR_INVALID 0x40
#define CSR_UNEXP_DISC 0x41
#define CSR_TIMEOUT 0x42
#define CSR_PARITY 0x43
#define CSR_PARITY_ATN 0x44
#define CSR_BAD_STATUS 0x45
#define CSR_UNEXP 0x48
/* service required interrupts */
#define CSR_RESEL 0x80
#define CSR_RESEL_AM 0x81
#define CSR_DISC 0x85
#define CSR_SRV_REQ 0x88
/* Own ID/CDB Size register */
#define OWNID_EAF 0x08
#define OWNID_EHP 0x10
#define OWNID_RAF 0x20
#define OWNID_FS_8 0x00
#define OWNID_FS_12 0x40
#define OWNID_FS_16 0x80
/* Control register */
#define CTRL_HSP 0x01
#define CTRL_HA 0x02
#define CTRL_IDI 0x04
#define CTRL_EDI 0x08
#define CTRL_HHP 0x10
#define CTRL_POLLED 0x00
#define CTRL_BURST 0x20
#define CTRL_BUS 0x40
#define CTRL_DMA 0x80
/* Timeout Period register */
#define TIMEOUT_PERIOD_VALUE 20 /* results in 200 ms. */
/* Synchronous Transfer Register */
#define STR_FSS 0x80
/* Destination ID register */
#define DSTID_DPD 0x40
#define DATA_OUT_DIR 0
#define DATA_IN_DIR 1
#define DSTID_SCC 0x80
/* Source ID register */
#define SRCID_MASK 0x07
#define SRCID_SIV 0x08
#define SRCID_DSP 0x20
#define SRCID_ES 0x40
#define SRCID_ER 0x80
#define ILLEGAL_STATUS_BYTE 0xff
#define DEFAULT_SX_PER 500 /* (ns) fairly safe */
#define DEFAULT_SX_OFF 0 /* aka async */
#define OPTIMUM_SX_PER 252 /* (ns) best we can do (mult-of-4) */
#define OPTIMUM_SX_OFF 12 /* size of in2000 fifo */
struct sx_period {
unsigned int period_ns;
uchar reg_value;
};
struct IN2000_hostdata {
struct Scsi_Host *next;
uchar chip; /* what kind of wd33c93 chip? */
uchar microcode; /* microcode rev if 'B' */
unsigned short io_base; /* IO port base */
unsigned int dip_switch; /* dip switch settings */
unsigned int hrev; /* hardware revision of card */
volatile uchar busy[8]; /* index = target, bit = lun */
volatile Scsi_Cmnd *input_Q; /* commands waiting to be started */
volatile Scsi_Cmnd *selecting; /* trying to select this command */
volatile Scsi_Cmnd *connected; /* currently connected command */
volatile Scsi_Cmnd *disconnected_Q;/* commands waiting for reconnect */
uchar state; /* what we are currently doing */
uchar fifo; /* what the FIFO is up to */
uchar level2; /* extent to which Level-2 commands are used */
uchar disconnect; /* disconnect/reselect policy */
unsigned int args; /* set from command-line argument */
uchar incoming_msg[8]; /* filled during message_in phase */
int incoming_ptr; /* mainly used with EXTENDED messages */
uchar outgoing_msg[8]; /* send this during next message_out */
int outgoing_len; /* length of outgoing message */
unsigned int default_sx_per; /* default transfer period for SCSI bus */
uchar sync_xfer[8]; /* sync_xfer reg settings per target */
uchar sync_stat[8]; /* status of sync negotiation per target */
uchar sync_off; /* bit mask: don't use sync with these targets */
#ifdef PROC_INTERFACE
uchar proc; /* bit mask: what's in proc output */
#ifdef PROC_STATISTICS
unsigned long cmd_cnt[8]; /* # of commands issued per target */
unsigned long int_cnt; /* # of interrupts serviced */
unsigned long disc_allowed_cnt[8]; /* # of disconnects allowed per target */
unsigned long disc_done_cnt[8]; /* # of disconnects done per target*/
#endif
#endif
};
/* defines for hostdata->chip */
#define C_WD33C93 0
#define C_WD33C93A 1
#define C_WD33C93B 2
#define C_UNKNOWN_CHIP 100
/* defines for hostdata->state */
#define S_UNCONNECTED 0
#define S_SELECTING 1
#define S_RUNNING_LEVEL2 2
#define S_CONNECTED 3
#define S_PRE_TMP_DISC 4
#define S_PRE_CMP_DISC 5
/* defines for hostdata->fifo */
#define FI_FIFO_UNUSED 0
#define FI_FIFO_READING 1
#define FI_FIFO_WRITING 2
/* defines for hostdata->level2 */
/* NOTE: only the first 3 are trustworthy at this point -
* having trouble when more than 1 device is reading/writing
* at the same time...
*/
#define L2_NONE 0 /* no combination commands - we get lots of ints */
#define L2_SELECT 1 /* start with SEL_ATN_XFER, but never resume it */
#define L2_BASIC 2 /* resume after STATUS ints & RDP messages */
#define L2_DATA 3 /* resume after DATA_IN/OUT ints */
#define L2_MOST 4 /* resume after anything except a RESELECT int */
#define L2_RESELECT 5 /* resume after everything, including RESELECT ints */
#define L2_ALL 6 /* always resume */
/* defines for hostdata->disconnect */
#define DIS_NEVER 0
#define DIS_ADAPTIVE 1
#define DIS_ALWAYS 2
/* defines for hostdata->args */
#define DB_TEST 1<<0
#define DB_FIFO 1<<1
#define DB_QUEUE_COMMAND 1<<2
#define DB_EXECUTE 1<<3
#define DB_INTR 1<<4
#define DB_TRANSFER 1<<5
#define DB_MASK 0x3f
#define A_NO_SCSI_RESET 1<<15
/* defines for hostdata->sync_xfer[] */
#define SS_UNSET 0
#define SS_FIRST 1
#define SS_WAITING 2
#define SS_SET 3
/* defines for hostdata->proc */
#define PR_VERSION 1<<0
#define PR_INFO 1<<1
#define PR_STATISTICS 1<<2
#define PR_CONNECTED 1<<3
#define PR_INPUTQ 1<<4
#define PR_DISCQ 1<<5
#define PR_TEST 1<<6
#define PR_STOP 1<<7
# include <linux/init.h>
# include <linux/spinlock.h>
# define in2000__INITFUNC(function) __initfunc(function)
# define in2000__INIT __init
# define in2000__INITDATA __initdata
# define CLISPIN_LOCK(host,flags) spin_lock_irqsave(host->host_lock, flags)
# define CLISPIN_UNLOCK(host,flags) spin_unlock_irqrestore(host->host_lock, \
flags)
static int in2000_detect(struct scsi_host_template *) in2000__INIT;
static int in2000_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
static int in2000_abort(Scsi_Cmnd *);
static void in2000_setup(char *, int *) in2000__INIT;
static int in2000_biosparam(struct scsi_device *, struct block_device *,
sector_t, int *);
static int in2000_bus_reset(Scsi_Cmnd *);
#define IN2000_CAN_Q 16
#define IN2000_SG SG_ALL
#define IN2000_CPL 2
#define IN2000_HOST_ID 7
#endif /* IN2000_H */


@ -493,15 +493,15 @@ struct ipr_error_table_t ipr_error_table[] = {
"9072: Link not operational transition"}, "9072: Link not operational transition"},
{0x066B8200, 0, IPR_DEFAULT_LOG_LEVEL, {0x066B8200, 0, IPR_DEFAULT_LOG_LEVEL,
"9032: Array exposed but still protected"}, "9032: Array exposed but still protected"},
{0x066B8300, 0, IPR_DEFAULT_LOG_LEVEL + 1, {0x066B8300, 0, IPR_DEBUG_LOG_LEVEL,
"70DD: Device forced failed by disrupt device command"}, "70DD: Device forced failed by disrupt device command"},
{0x066B9100, 0, IPR_DEFAULT_LOG_LEVEL, {0x066B9100, 0, IPR_DEFAULT_LOG_LEVEL,
"4061: Multipath redundancy level got better"}, "4061: Multipath redundancy level got better"},
{0x066B9200, 0, IPR_DEFAULT_LOG_LEVEL, {0x066B9200, 0, IPR_DEFAULT_LOG_LEVEL,
"4060: Multipath redundancy level got worse"}, "4060: Multipath redundancy level got worse"},
{0x06808100, 0, IPR_DEFAULT_LOG_LEVEL, {0x06808100, 0, IPR_DEBUG_LOG_LEVEL,
"9083: Device raw mode enabled"}, "9083: Device raw mode enabled"},
{0x06808200, 0, IPR_DEFAULT_LOG_LEVEL, {0x06808200, 0, IPR_DEBUG_LOG_LEVEL,
"9084: Device raw mode disabled"}, "9084: Device raw mode disabled"},
{0x07270000, 0, 0, {0x07270000, 0, 0,
"Failure due to other device"}, "Failure due to other device"},
@ -1473,7 +1473,7 @@ static void ipr_process_ccn(struct ipr_cmnd *ipr_cmd)
struct ipr_hostrcb *hostrcb = ipr_cmd->u.hostrcb; struct ipr_hostrcb *hostrcb = ipr_cmd->u.hostrcb;
u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc); u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
list_del(&hostrcb->queue); list_del_init(&hostrcb->queue);
list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
if (ioasc) { if (ioasc) {
@ -2552,6 +2552,23 @@ static void ipr_handle_log_data(struct ipr_ioa_cfg *ioa_cfg,
} }
} }
static struct ipr_hostrcb *ipr_get_free_hostrcb(struct ipr_ioa_cfg *ioa)
{
struct ipr_hostrcb *hostrcb;
hostrcb = list_first_entry_or_null(&ioa->hostrcb_free_q,
struct ipr_hostrcb, queue);
if (unlikely(!hostrcb)) {
dev_info(&ioa->pdev->dev, "Reclaiming async error buffers.");
hostrcb = list_first_entry_or_null(&ioa->hostrcb_report_q,
struct ipr_hostrcb, queue);
}
list_del_init(&hostrcb->queue);
return hostrcb;
}
/** /**
* ipr_process_error - Op done function for an adapter error log. * ipr_process_error - Op done function for an adapter error log.
* @ipr_cmd: ipr command struct * @ipr_cmd: ipr command struct
@ -2569,13 +2586,14 @@ static void ipr_process_error(struct ipr_cmnd *ipr_cmd)
struct ipr_hostrcb *hostrcb = ipr_cmd->u.hostrcb; struct ipr_hostrcb *hostrcb = ipr_cmd->u.hostrcb;
u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc); u32 ioasc = be32_to_cpu(ipr_cmd->s.ioasa.hdr.ioasc);
u32 fd_ioasc; u32 fd_ioasc;
char *envp[] = { "ASYNC_ERR_LOG=1", NULL };
if (ioa_cfg->sis64) if (ioa_cfg->sis64)
fd_ioasc = be32_to_cpu(hostrcb->hcam.u.error64.fd_ioasc); fd_ioasc = be32_to_cpu(hostrcb->hcam.u.error64.fd_ioasc);
else else
fd_ioasc = be32_to_cpu(hostrcb->hcam.u.error.fd_ioasc); fd_ioasc = be32_to_cpu(hostrcb->hcam.u.error.fd_ioasc);
list_del(&hostrcb->queue); list_del_init(&hostrcb->queue);
list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q);
if (!ioasc) { if (!ioasc) {
@ -2588,6 +2606,10 @@ static void ipr_process_error(struct ipr_cmnd *ipr_cmd)
"Host RCB failed with IOASC: 0x%08X\n", ioasc); "Host RCB failed with IOASC: 0x%08X\n", ioasc);
} }
list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_report_q);
hostrcb = ipr_get_free_hostrcb(ioa_cfg);
kobject_uevent_env(&ioa_cfg->host->shost_dev.kobj, KOBJ_CHANGE, envp);
ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_LOG_DATA, hostrcb); ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_LOG_DATA, hostrcb);
} }
@ -4095,6 +4117,64 @@ static struct device_attribute ipr_ioa_fw_type_attr = {
.show = ipr_show_fw_type .show = ipr_show_fw_type
}; };
static ssize_t ipr_read_async_err_log(struct file *filep, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf,
loff_t off, size_t count)
{
struct device *cdev = container_of(kobj, struct device, kobj);
struct Scsi_Host *shost = class_to_shost(cdev);
struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
struct ipr_hostrcb *hostrcb;
unsigned long lock_flags = 0;
int ret;
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
hostrcb = list_first_entry_or_null(&ioa_cfg->hostrcb_report_q,
struct ipr_hostrcb, queue);
if (!hostrcb) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return 0;
}
ret = memory_read_from_buffer(buf, count, &off, &hostrcb->hcam,
sizeof(hostrcb->hcam));
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return ret;
}
static ssize_t ipr_next_async_err_log(struct file *filep, struct kobject *kobj,
struct bin_attribute *bin_attr, char *buf,
loff_t off, size_t count)
{
struct device *cdev = container_of(kobj, struct device, kobj);
struct Scsi_Host *shost = class_to_shost(cdev);
struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
struct ipr_hostrcb *hostrcb;
unsigned long lock_flags = 0;
spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
hostrcb = list_first_entry_or_null(&ioa_cfg->hostrcb_report_q,
struct ipr_hostrcb, queue);
if (!hostrcb) {
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return count;
}
/* Reclaim hostrcb before exit */
list_move_tail(&hostrcb->queue, &ioa_cfg->hostrcb_free_q);
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
return count;
}
static struct bin_attribute ipr_ioa_async_err_log = {
.attr = {
.name = "async_err_log",
.mode = S_IRUGO | S_IWUSR,
},
.size = 0,
.read = ipr_read_async_err_log,
.write = ipr_next_async_err_log
};
static struct device_attribute *ipr_ioa_attrs[] = { static struct device_attribute *ipr_ioa_attrs[] = {
&ipr_fw_version_attr, &ipr_fw_version_attr,
&ipr_log_level_attr, &ipr_log_level_attr,
@ -7026,8 +7106,7 @@ static int ipr_ioa_reset_done(struct ipr_cmnd *ipr_cmd)
{ {
struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg; struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
struct ipr_resource_entry *res; struct ipr_resource_entry *res;
struct ipr_hostrcb *hostrcb, *temp; int j;
int i = 0, j;
ENTER; ENTER;
ioa_cfg->in_reset_reload = 0; ioa_cfg->in_reset_reload = 0;
@ -7048,12 +7127,16 @@ static int ipr_ioa_reset_done(struct ipr_cmnd *ipr_cmd)
} }
schedule_work(&ioa_cfg->work_q); schedule_work(&ioa_cfg->work_q);
list_for_each_entry_safe(hostrcb, temp, &ioa_cfg->hostrcb_free_q, queue) { for (j = 0; j < IPR_NUM_HCAMS; j++) {
list_del(&hostrcb->queue); list_del_init(&ioa_cfg->hostrcb[j]->queue);
if (i++ < IPR_NUM_LOG_HCAMS) if (j < IPR_NUM_LOG_HCAMS)
ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_LOG_DATA, hostrcb); ipr_send_hcam(ioa_cfg,
IPR_HCAM_CDB_OP_CODE_LOG_DATA,
ioa_cfg->hostrcb[j]);
else else
ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE, hostrcb); ipr_send_hcam(ioa_cfg,
IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE,
ioa_cfg->hostrcb[j]);
} }
scsi_report_bus_reset(ioa_cfg->host, IPR_VSET_BUS); scsi_report_bus_reset(ioa_cfg->host, IPR_VSET_BUS);
@ -7966,7 +8049,8 @@ static int ipr_ioafp_identify_hrrq(struct ipr_cmnd *ipr_cmd)
ENTER; ENTER;
ipr_cmd->job_step = ipr_ioafp_std_inquiry; ipr_cmd->job_step = ipr_ioafp_std_inquiry;
dev_info(&ioa_cfg->pdev->dev, "Starting IOA initialization sequence.\n"); if (ioa_cfg->identify_hrrq_index == 0)
dev_info(&ioa_cfg->pdev->dev, "Starting IOA initialization sequence.\n");
if (ioa_cfg->identify_hrrq_index < ioa_cfg->hrrq_num) { if (ioa_cfg->identify_hrrq_index < ioa_cfg->hrrq_num) {
hrrq = &ioa_cfg->hrrq[ioa_cfg->identify_hrrq_index]; hrrq = &ioa_cfg->hrrq[ioa_cfg->identify_hrrq_index];
@ -8335,7 +8419,7 @@ static void ipr_get_unit_check_buffer(struct ipr_ioa_cfg *ioa_cfg)
hostrcb = list_entry(ioa_cfg->hostrcb_free_q.next, hostrcb = list_entry(ioa_cfg->hostrcb_free_q.next,
struct ipr_hostrcb, queue); struct ipr_hostrcb, queue);
list_del(&hostrcb->queue); list_del_init(&hostrcb->queue);
memset(&hostrcb->hcam, 0, sizeof(hostrcb->hcam)); memset(&hostrcb->hcam, 0, sizeof(hostrcb->hcam));
rc = ipr_get_ldump_data_section(ioa_cfg, rc = ipr_get_ldump_data_section(ioa_cfg,
@ -9332,7 +9416,7 @@ static void ipr_free_mem(struct ipr_ioa_cfg *ioa_cfg)
dma_free_coherent(&ioa_cfg->pdev->dev, ioa_cfg->cfg_table_size, dma_free_coherent(&ioa_cfg->pdev->dev, ioa_cfg->cfg_table_size,
ioa_cfg->u.cfg_table, ioa_cfg->cfg_table_dma); ioa_cfg->u.cfg_table, ioa_cfg->cfg_table_dma);
for (i = 0; i < IPR_NUM_HCAMS; i++) { for (i = 0; i < IPR_MAX_HCAMS; i++) {
dma_free_coherent(&ioa_cfg->pdev->dev, dma_free_coherent(&ioa_cfg->pdev->dev,
sizeof(struct ipr_hostrcb), sizeof(struct ipr_hostrcb),
ioa_cfg->hostrcb[i], ioa_cfg->hostrcb[i],
@ -9572,7 +9656,7 @@ static int ipr_alloc_mem(struct ipr_ioa_cfg *ioa_cfg)
if (!ioa_cfg->u.cfg_table) if (!ioa_cfg->u.cfg_table)
goto out_free_host_rrq; goto out_free_host_rrq;
for (i = 0; i < IPR_NUM_HCAMS; i++) { for (i = 0; i < IPR_MAX_HCAMS; i++) {
ioa_cfg->hostrcb[i] = dma_alloc_coherent(&pdev->dev, ioa_cfg->hostrcb[i] = dma_alloc_coherent(&pdev->dev,
sizeof(struct ipr_hostrcb), sizeof(struct ipr_hostrcb),
&ioa_cfg->hostrcb_dma[i], &ioa_cfg->hostrcb_dma[i],
@ -9714,6 +9798,7 @@ static void ipr_init_ioa_cfg(struct ipr_ioa_cfg *ioa_cfg,
INIT_LIST_HEAD(&ioa_cfg->hostrcb_free_q); INIT_LIST_HEAD(&ioa_cfg->hostrcb_free_q);
INIT_LIST_HEAD(&ioa_cfg->hostrcb_pending_q); INIT_LIST_HEAD(&ioa_cfg->hostrcb_pending_q);
INIT_LIST_HEAD(&ioa_cfg->hostrcb_report_q);
INIT_LIST_HEAD(&ioa_cfg->free_res_q); INIT_LIST_HEAD(&ioa_cfg->free_res_q);
INIT_LIST_HEAD(&ioa_cfg->used_res_q); INIT_LIST_HEAD(&ioa_cfg->used_res_q);
INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread); INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread);
@ -10352,6 +10437,8 @@ static void ipr_remove(struct pci_dev *pdev)
&ipr_trace_attr); &ipr_trace_attr);
ipr_remove_dump_file(&ioa_cfg->host->shost_dev.kobj, ipr_remove_dump_file(&ioa_cfg->host->shost_dev.kobj,
&ipr_dump_attr); &ipr_dump_attr);
sysfs_remove_bin_file(&ioa_cfg->host->shost_dev.kobj,
&ipr_ioa_async_err_log);
scsi_remove_host(ioa_cfg->host); scsi_remove_host(ioa_cfg->host);
__ipr_remove(pdev); __ipr_remove(pdev);
@ -10400,10 +10487,25 @@ static int ipr_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
return rc; return rc;
} }
rc = sysfs_create_bin_file(&ioa_cfg->host->shost_dev.kobj,
&ipr_ioa_async_err_log);
if (rc) {
ipr_remove_dump_file(&ioa_cfg->host->shost_dev.kobj,
&ipr_dump_attr);
ipr_remove_trace_file(&ioa_cfg->host->shost_dev.kobj,
&ipr_trace_attr);
scsi_remove_host(ioa_cfg->host);
__ipr_remove(pdev);
return rc;
}
rc = ipr_create_dump_file(&ioa_cfg->host->shost_dev.kobj, rc = ipr_create_dump_file(&ioa_cfg->host->shost_dev.kobj,
&ipr_dump_attr); &ipr_dump_attr);
if (rc) { if (rc) {
sysfs_remove_bin_file(&ioa_cfg->host->shost_dev.kobj,
&ipr_ioa_async_err_log);
ipr_remove_trace_file(&ioa_cfg->host->shost_dev.kobj, ipr_remove_trace_file(&ioa_cfg->host->shost_dev.kobj,
&ipr_trace_attr); &ipr_trace_attr);
scsi_remove_host(ioa_cfg->host); scsi_remove_host(ioa_cfg->host);
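
The async error log plumbing added above works as a small hand-off queue: when an error HCAM completes, ipr_process_error() parks the hostrcb on hostrcb_report_q, pulls a fresh buffer with ipr_get_free_hostrcb(), and emits a KOBJ_CHANGE uevent carrying ASYNC_ERR_LOG=1; userspace then reads the oldest record through the async_err_log binary attribute and writes to it to hand the buffer back to the free queue. For illustration only (not part of this commit), a minimal consumer could look like the sketch below; the sysfs path and buffer size are assumptions.

/*
 * Illustrative userspace consumer for the new "async_err_log" binary
 * sysfs attribute -- a sketch only, not part of the commit.
 * Assumptions: the attribute appears under the SCSI host class device
 * (the host0 path below is a guess) and each read() at offset 0 returns
 * the oldest queued HCAM; a read of 0 bytes means the queue is empty.
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	const char *path = "/sys/class/scsi_host/host0/async_err_log";
	char buf[32768];	/* assumed to be at least one HCAM record */
	ssize_t n;
	int fd;

	fd = open(path, O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (;;) {
		n = read(fd, buf, sizeof(buf));	/* oldest record on hostrcb_report_q */
		if (n <= 0)
			break;			/* report queue drained (or error) */

		fwrite(buf, 1, (size_t)n, stdout);	/* hand the raw HCAM to a logger */

		/* Any write acknowledges the record: the driver moves the
		 * hostrcb back to its free queue. */
		if (write(fd, "1", 1) < 0) {
			perror("write");
			break;
		}
		lseek(fd, 0, SEEK_SET);		/* rewind before reading the next record */
	}

	close(fd);
	return 0;
}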


@ -154,7 +154,9 @@
#define IPR_DEFAULT_MAX_ERROR_DUMP 984 #define IPR_DEFAULT_MAX_ERROR_DUMP 984
#define IPR_NUM_LOG_HCAMS 2 #define IPR_NUM_LOG_HCAMS 2
#define IPR_NUM_CFG_CHG_HCAMS 2 #define IPR_NUM_CFG_CHG_HCAMS 2
#define IPR_NUM_HCAM_QUEUE 12
#define IPR_NUM_HCAMS (IPR_NUM_LOG_HCAMS + IPR_NUM_CFG_CHG_HCAMS) #define IPR_NUM_HCAMS (IPR_NUM_LOG_HCAMS + IPR_NUM_CFG_CHG_HCAMS)
#define IPR_MAX_HCAMS (IPR_NUM_HCAMS + IPR_NUM_HCAM_QUEUE)
#define IPR_MAX_SIS64_TARGETS_PER_BUS 1024 #define IPR_MAX_SIS64_TARGETS_PER_BUS 1024
#define IPR_MAX_SIS64_LUNS_PER_TARGET 0xffffffff #define IPR_MAX_SIS64_LUNS_PER_TARGET 0xffffffff
@ -1504,6 +1506,7 @@ struct ipr_ioa_cfg {
u8 log_level; u8 log_level;
#define IPR_MAX_LOG_LEVEL 4 #define IPR_MAX_LOG_LEVEL 4
#define IPR_DEFAULT_LOG_LEVEL 2 #define IPR_DEFAULT_LOG_LEVEL 2
#define IPR_DEBUG_LOG_LEVEL 3
#define IPR_NUM_TRACE_INDEX_BITS 8 #define IPR_NUM_TRACE_INDEX_BITS 8
#define IPR_NUM_TRACE_ENTRIES (1 << IPR_NUM_TRACE_INDEX_BITS) #define IPR_NUM_TRACE_ENTRIES (1 << IPR_NUM_TRACE_INDEX_BITS)
@ -1532,10 +1535,11 @@ struct ipr_ioa_cfg {
char ipr_hcam_label[8]; char ipr_hcam_label[8];
#define IPR_HCAM_LABEL "hcams" #define IPR_HCAM_LABEL "hcams"
struct ipr_hostrcb *hostrcb[IPR_NUM_HCAMS]; struct ipr_hostrcb *hostrcb[IPR_MAX_HCAMS];
dma_addr_t hostrcb_dma[IPR_NUM_HCAMS]; dma_addr_t hostrcb_dma[IPR_MAX_HCAMS];
struct list_head hostrcb_free_q; struct list_head hostrcb_free_q;
struct list_head hostrcb_pending_q; struct list_head hostrcb_pending_q;
struct list_head hostrcb_report_q;
struct ipr_hrr_queue hrrq[IPR_MAX_HRRQ_NUM]; struct ipr_hrr_queue hrrq[IPR_MAX_HRRQ_NUM];
u32 hrrq_num; u32 hrrq_num;


@ -1837,7 +1837,6 @@ static void fc_exch_reset(struct fc_exch *ep)
int rc = 1; int rc = 1;
spin_lock_bh(&ep->ex_lock); spin_lock_bh(&ep->ex_lock);
fc_exch_abort_locked(ep, 0);
ep->state |= FC_EX_RST_CLEANUP; ep->state |= FC_EX_RST_CLEANUP;
fc_exch_timer_cancel(ep); fc_exch_timer_cancel(ep);
if (ep->esb_stat & ESB_ST_REC_QUAL) if (ep->esb_stat & ESB_ST_REC_QUAL)


@ -457,6 +457,9 @@ static void fc_rport_enter_delete(struct fc_rport_priv *rdata,
*/ */
static int fc_rport_logoff(struct fc_rport_priv *rdata) static int fc_rport_logoff(struct fc_rport_priv *rdata)
{ {
struct fc_lport *lport = rdata->local_port;
u32 port_id = rdata->ids.port_id;
mutex_lock(&rdata->rp_mutex); mutex_lock(&rdata->rp_mutex);
FC_RPORT_DBG(rdata, "Remove port\n"); FC_RPORT_DBG(rdata, "Remove port\n");
@ -466,6 +469,15 @@ static int fc_rport_logoff(struct fc_rport_priv *rdata)
FC_RPORT_DBG(rdata, "Port in Delete state, not removing\n"); FC_RPORT_DBG(rdata, "Port in Delete state, not removing\n");
goto out; goto out;
} }
/*
* FC-LS states:
* To explicitly Logout, the initiating Nx_Port shall terminate
* other open Sequences that it initiated with the destination
* Nx_Port prior to performing Logout.
*/
lport->tt.exch_mgr_reset(lport, 0, port_id);
lport->tt.exch_mgr_reset(lport, port_id, 0);
fc_rport_enter_logo(rdata); fc_rport_enter_logo(rdata);
/* /*
@ -547,16 +559,24 @@ static void fc_rport_timeout(struct work_struct *work)
*/ */
static void fc_rport_error(struct fc_rport_priv *rdata, struct fc_frame *fp) static void fc_rport_error(struct fc_rport_priv *rdata, struct fc_frame *fp)
{ {
struct fc_lport *lport = rdata->local_port;
FC_RPORT_DBG(rdata, "Error %ld in state %s, retries %d\n", FC_RPORT_DBG(rdata, "Error %ld in state %s, retries %d\n",
IS_ERR(fp) ? -PTR_ERR(fp) : 0, IS_ERR(fp) ? -PTR_ERR(fp) : 0,
fc_rport_state(rdata), rdata->retries); fc_rport_state(rdata), rdata->retries);
switch (rdata->rp_state) { switch (rdata->rp_state) {
case RPORT_ST_FLOGI: case RPORT_ST_FLOGI:
case RPORT_ST_PLOGI:
rdata->flags &= ~FC_RP_STARTED; rdata->flags &= ~FC_RP_STARTED;
fc_rport_enter_delete(rdata, RPORT_EV_FAILED); fc_rport_enter_delete(rdata, RPORT_EV_FAILED);
break; break;
case RPORT_ST_PLOGI:
if (lport->point_to_multipoint) {
rdata->flags &= ~FC_RP_STARTED;
fc_rport_enter_delete(rdata, RPORT_EV_FAILED);
} else
fc_rport_enter_logo(rdata);
break;
case RPORT_ST_RTV: case RPORT_ST_RTV:
fc_rport_enter_ready(rdata); fc_rport_enter_ready(rdata);
break; break;
@ -1877,7 +1897,7 @@ static void fc_rport_recv_prlo_req(struct fc_rport_priv *rdata,
spp->spp_type_ext = rspp->spp_type_ext; spp->spp_type_ext = rspp->spp_type_ext;
spp->spp_flags = FC_SPP_RESP_ACK; spp->spp_flags = FC_SPP_RESP_ACK;
fc_rport_enter_delete(rdata, RPORT_EV_LOGO); fc_rport_enter_prli(rdata);
fc_fill_reply_hdr(fp, rx_fp, FC_RCTL_ELS_REP, 0); fc_fill_reply_hdr(fp, rx_fp, FC_RCTL_ELS_REP, 0);
lport->tt.frame_send(lport, fp); lport->tt.frame_send(lport, fp);
@ -1915,7 +1935,7 @@ static void fc_rport_recv_logo_req(struct fc_lport *lport, struct fc_frame *fp)
FC_RPORT_DBG(rdata, "Received LOGO request while in state %s\n", FC_RPORT_DBG(rdata, "Received LOGO request while in state %s\n",
fc_rport_state(rdata)); fc_rport_state(rdata));
fc_rport_enter_delete(rdata, RPORT_EV_LOGO); fc_rport_enter_delete(rdata, RPORT_EV_STOP);
mutex_unlock(&rdata->rp_mutex); mutex_unlock(&rdata->rp_mutex);
kref_put(&rdata->kref, rdata->local_port->tt.rport_destroy); kref_put(&rdata->kref, rdata->local_port->tt.rport_destroy);
} else } else


@ -1535,7 +1535,7 @@ lpfc_fdmi_num_disc_check(struct lpfc_vport *vport)
} }
/* Routines for all individual HBA attributes */ /* Routines for all individual HBA attributes */
int static int
lpfc_fdmi_hba_attr_wwnn(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad) lpfc_fdmi_hba_attr_wwnn(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad)
{ {
struct lpfc_fdmi_attr_entry *ae; struct lpfc_fdmi_attr_entry *ae;
@ -1551,7 +1551,7 @@ lpfc_fdmi_hba_attr_wwnn(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad)
ad->AttrType = cpu_to_be16(RHBA_NODENAME); ad->AttrType = cpu_to_be16(RHBA_NODENAME);
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_manufacturer(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_manufacturer(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1573,7 +1573,7 @@ lpfc_fdmi_hba_attr_manufacturer(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_sn(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad) lpfc_fdmi_hba_attr_sn(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad)
{ {
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
@ -1594,7 +1594,7 @@ lpfc_fdmi_hba_attr_sn(struct lpfc_vport *vport, struct lpfc_fdmi_attr_def *ad)
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_model(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_model(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1615,7 +1615,7 @@ lpfc_fdmi_hba_attr_model(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_description(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_description(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1637,7 +1637,7 @@ lpfc_fdmi_hba_attr_description(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_hdw_ver(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_hdw_ver(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1669,7 +1669,7 @@ lpfc_fdmi_hba_attr_hdw_ver(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_drvr_ver(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_drvr_ver(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1690,7 +1690,7 @@ lpfc_fdmi_hba_attr_drvr_ver(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_rom_ver(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_rom_ver(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1715,7 +1715,7 @@ lpfc_fdmi_hba_attr_rom_ver(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_fmw_ver(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_fmw_ver(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1736,7 +1736,7 @@ lpfc_fdmi_hba_attr_fmw_ver(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_os_ver(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_os_ver(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1759,7 +1759,7 @@ lpfc_fdmi_hba_attr_os_ver(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_ct_len(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_ct_len(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1775,7 +1775,7 @@ lpfc_fdmi_hba_attr_ct_len(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_symbolic_name(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_symbolic_name(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1794,7 +1794,7 @@ lpfc_fdmi_hba_attr_symbolic_name(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_vendor_info(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_vendor_info(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1811,7 +1811,7 @@ lpfc_fdmi_hba_attr_vendor_info(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_num_ports(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_num_ports(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1828,7 +1828,7 @@ lpfc_fdmi_hba_attr_num_ports(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_fabric_wwnn(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_fabric_wwnn(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1846,7 +1846,7 @@ lpfc_fdmi_hba_attr_fabric_wwnn(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_bios_ver(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_bios_ver(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1867,7 +1867,7 @@ lpfc_fdmi_hba_attr_bios_ver(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_bios_state(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_bios_state(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1884,7 +1884,7 @@ lpfc_fdmi_hba_attr_bios_state(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_hba_attr_vendor_id(struct lpfc_vport *vport, lpfc_fdmi_hba_attr_vendor_id(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1906,7 +1906,7 @@ lpfc_fdmi_hba_attr_vendor_id(struct lpfc_vport *vport,
} }
/* Routines for all individual PORT attributes */ /* Routines for all individual PORT attributes */
int static int
lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport, lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1925,7 +1925,7 @@ lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_support_speed(struct lpfc_vport *vport, lpfc_fdmi_port_attr_support_speed(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -1975,7 +1975,7 @@ lpfc_fdmi_port_attr_support_speed(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_speed(struct lpfc_vport *vport, lpfc_fdmi_port_attr_speed(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2039,7 +2039,7 @@ lpfc_fdmi_port_attr_speed(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_max_frame(struct lpfc_vport *vport, lpfc_fdmi_port_attr_max_frame(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2059,7 +2059,7 @@ lpfc_fdmi_port_attr_max_frame(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_os_devname(struct lpfc_vport *vport, lpfc_fdmi_port_attr_os_devname(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2081,7 +2081,7 @@ lpfc_fdmi_port_attr_os_devname(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_host_name(struct lpfc_vport *vport, lpfc_fdmi_port_attr_host_name(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2102,7 +2102,7 @@ lpfc_fdmi_port_attr_host_name(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_wwnn(struct lpfc_vport *vport, lpfc_fdmi_port_attr_wwnn(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2120,7 +2120,7 @@ lpfc_fdmi_port_attr_wwnn(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_wwpn(struct lpfc_vport *vport, lpfc_fdmi_port_attr_wwpn(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2138,7 +2138,7 @@ lpfc_fdmi_port_attr_wwpn(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_symbolic_name(struct lpfc_vport *vport, lpfc_fdmi_port_attr_symbolic_name(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2156,7 +2156,7 @@ lpfc_fdmi_port_attr_symbolic_name(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_port_type(struct lpfc_vport *vport, lpfc_fdmi_port_attr_port_type(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2175,7 +2175,7 @@ lpfc_fdmi_port_attr_port_type(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_class(struct lpfc_vport *vport, lpfc_fdmi_port_attr_class(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2190,7 +2190,7 @@ lpfc_fdmi_port_attr_class(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_fabric_wwpn(struct lpfc_vport *vport, lpfc_fdmi_port_attr_fabric_wwpn(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2208,7 +2208,7 @@ lpfc_fdmi_port_attr_fabric_wwpn(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport, lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2227,7 +2227,7 @@ lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_port_state(struct lpfc_vport *vport, lpfc_fdmi_port_attr_port_state(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2243,7 +2243,7 @@ lpfc_fdmi_port_attr_port_state(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_num_disc(struct lpfc_vport *vport, lpfc_fdmi_port_attr_num_disc(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2259,7 +2259,7 @@ lpfc_fdmi_port_attr_num_disc(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_port_attr_nportid(struct lpfc_vport *vport, lpfc_fdmi_port_attr_nportid(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2274,7 +2274,7 @@ lpfc_fdmi_port_attr_nportid(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_smart_attr_service(struct lpfc_vport *vport, lpfc_fdmi_smart_attr_service(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2295,7 +2295,7 @@ lpfc_fdmi_smart_attr_service(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_smart_attr_guid(struct lpfc_vport *vport, lpfc_fdmi_smart_attr_guid(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2316,7 +2316,7 @@ lpfc_fdmi_smart_attr_guid(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_smart_attr_version(struct lpfc_vport *vport, lpfc_fdmi_smart_attr_version(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2337,7 +2337,7 @@ lpfc_fdmi_smart_attr_version(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_smart_attr_model(struct lpfc_vport *vport, lpfc_fdmi_smart_attr_model(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2358,7 +2358,7 @@ lpfc_fdmi_smart_attr_model(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_smart_attr_port_info(struct lpfc_vport *vport, lpfc_fdmi_smart_attr_port_info(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2378,7 +2378,7 @@ lpfc_fdmi_smart_attr_port_info(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_smart_attr_qos(struct lpfc_vport *vport, lpfc_fdmi_smart_attr_qos(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {
@ -2393,7 +2393,7 @@ lpfc_fdmi_smart_attr_qos(struct lpfc_vport *vport,
return size; return size;
} }
int static int
lpfc_fdmi_smart_attr_security(struct lpfc_vport *vport, lpfc_fdmi_smart_attr_security(struct lpfc_vport *vport,
struct lpfc_fdmi_attr_def *ad) struct lpfc_fdmi_attr_def *ad)
{ {


@ -4617,7 +4617,7 @@ lpfc_els_disc_plogi(struct lpfc_vport *vport)
return sentplogi; return sentplogi;
} }
uint32_t static uint32_t
lpfc_rdp_res_link_service(struct fc_rdp_link_service_desc *desc, lpfc_rdp_res_link_service(struct fc_rdp_link_service_desc *desc,
uint32_t word0) uint32_t word0)
{ {
@ -4629,7 +4629,7 @@ lpfc_rdp_res_link_service(struct fc_rdp_link_service_desc *desc,
return sizeof(struct fc_rdp_link_service_desc); return sizeof(struct fc_rdp_link_service_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_sfp_desc(struct fc_rdp_sfp_desc *desc, lpfc_rdp_res_sfp_desc(struct fc_rdp_sfp_desc *desc,
uint8_t *page_a0, uint8_t *page_a2) uint8_t *page_a0, uint8_t *page_a2)
{ {
@ -4694,7 +4694,7 @@ lpfc_rdp_res_sfp_desc(struct fc_rdp_sfp_desc *desc,
return sizeof(struct fc_rdp_sfp_desc); return sizeof(struct fc_rdp_sfp_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_link_error(struct fc_rdp_link_error_status_desc *desc, lpfc_rdp_res_link_error(struct fc_rdp_link_error_status_desc *desc,
READ_LNK_VAR *stat) READ_LNK_VAR *stat)
{ {
@ -4723,7 +4723,7 @@ lpfc_rdp_res_link_error(struct fc_rdp_link_error_status_desc *desc,
return sizeof(struct fc_rdp_link_error_status_desc); return sizeof(struct fc_rdp_link_error_status_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_bbc_desc(struct fc_rdp_bbc_desc *desc, READ_LNK_VAR *stat, lpfc_rdp_res_bbc_desc(struct fc_rdp_bbc_desc *desc, READ_LNK_VAR *stat,
struct lpfc_vport *vport) struct lpfc_vport *vport)
{ {
@ -4748,7 +4748,7 @@ lpfc_rdp_res_bbc_desc(struct fc_rdp_bbc_desc *desc, READ_LNK_VAR *stat,
return sizeof(struct fc_rdp_bbc_desc); return sizeof(struct fc_rdp_bbc_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_oed_temp_desc(struct lpfc_hba *phba, lpfc_rdp_res_oed_temp_desc(struct lpfc_hba *phba,
struct fc_rdp_oed_sfp_desc *desc, uint8_t *page_a2) struct fc_rdp_oed_sfp_desc *desc, uint8_t *page_a2)
{ {
@ -4776,7 +4776,7 @@ lpfc_rdp_res_oed_temp_desc(struct lpfc_hba *phba,
return sizeof(struct fc_rdp_oed_sfp_desc); return sizeof(struct fc_rdp_oed_sfp_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_oed_voltage_desc(struct lpfc_hba *phba, lpfc_rdp_res_oed_voltage_desc(struct lpfc_hba *phba,
struct fc_rdp_oed_sfp_desc *desc, struct fc_rdp_oed_sfp_desc *desc,
uint8_t *page_a2) uint8_t *page_a2)
@ -4805,7 +4805,7 @@ lpfc_rdp_res_oed_voltage_desc(struct lpfc_hba *phba,
return sizeof(struct fc_rdp_oed_sfp_desc); return sizeof(struct fc_rdp_oed_sfp_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_oed_txbias_desc(struct lpfc_hba *phba, lpfc_rdp_res_oed_txbias_desc(struct lpfc_hba *phba,
struct fc_rdp_oed_sfp_desc *desc, struct fc_rdp_oed_sfp_desc *desc,
uint8_t *page_a2) uint8_t *page_a2)
@ -4834,7 +4834,7 @@ lpfc_rdp_res_oed_txbias_desc(struct lpfc_hba *phba,
return sizeof(struct fc_rdp_oed_sfp_desc); return sizeof(struct fc_rdp_oed_sfp_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_oed_txpower_desc(struct lpfc_hba *phba, lpfc_rdp_res_oed_txpower_desc(struct lpfc_hba *phba,
struct fc_rdp_oed_sfp_desc *desc, struct fc_rdp_oed_sfp_desc *desc,
uint8_t *page_a2) uint8_t *page_a2)
@ -4864,7 +4864,7 @@ lpfc_rdp_res_oed_txpower_desc(struct lpfc_hba *phba,
} }
uint32_t static uint32_t
lpfc_rdp_res_oed_rxpower_desc(struct lpfc_hba *phba, lpfc_rdp_res_oed_rxpower_desc(struct lpfc_hba *phba,
struct fc_rdp_oed_sfp_desc *desc, struct fc_rdp_oed_sfp_desc *desc,
uint8_t *page_a2) uint8_t *page_a2)
@ -4893,7 +4893,7 @@ lpfc_rdp_res_oed_rxpower_desc(struct lpfc_hba *phba,
return sizeof(struct fc_rdp_oed_sfp_desc); return sizeof(struct fc_rdp_oed_sfp_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_opd_desc(struct fc_rdp_opd_sfp_desc *desc, lpfc_rdp_res_opd_desc(struct fc_rdp_opd_sfp_desc *desc,
uint8_t *page_a0, struct lpfc_vport *vport) uint8_t *page_a0, struct lpfc_vport *vport)
{ {
@ -4907,7 +4907,7 @@ lpfc_rdp_res_opd_desc(struct fc_rdp_opd_sfp_desc *desc,
return sizeof(struct fc_rdp_opd_sfp_desc); return sizeof(struct fc_rdp_opd_sfp_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_fec_desc(struct fc_fec_rdp_desc *desc, READ_LNK_VAR *stat) lpfc_rdp_res_fec_desc(struct fc_fec_rdp_desc *desc, READ_LNK_VAR *stat)
{ {
if (bf_get(lpfc_read_link_stat_gec2, stat) == 0) if (bf_get(lpfc_read_link_stat_gec2, stat) == 0)
@ -4924,7 +4924,7 @@ lpfc_rdp_res_fec_desc(struct fc_fec_rdp_desc *desc, READ_LNK_VAR *stat)
return sizeof(struct fc_fec_rdp_desc); return sizeof(struct fc_fec_rdp_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_speed(struct fc_rdp_port_speed_desc *desc, struct lpfc_hba *phba) lpfc_rdp_res_speed(struct fc_rdp_port_speed_desc *desc, struct lpfc_hba *phba)
{ {
uint16_t rdp_cap = 0; uint16_t rdp_cap = 0;
@ -4986,7 +4986,7 @@ lpfc_rdp_res_speed(struct fc_rdp_port_speed_desc *desc, struct lpfc_hba *phba)
return sizeof(struct fc_rdp_port_speed_desc); return sizeof(struct fc_rdp_port_speed_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_diag_port_names(struct fc_rdp_port_name_desc *desc, lpfc_rdp_res_diag_port_names(struct fc_rdp_port_name_desc *desc,
struct lpfc_hba *phba) struct lpfc_hba *phba)
{ {
@ -5003,7 +5003,7 @@ lpfc_rdp_res_diag_port_names(struct fc_rdp_port_name_desc *desc,
return sizeof(struct fc_rdp_port_name_desc); return sizeof(struct fc_rdp_port_name_desc);
} }
uint32_t static uint32_t
lpfc_rdp_res_attach_port_names(struct fc_rdp_port_name_desc *desc, lpfc_rdp_res_attach_port_names(struct fc_rdp_port_name_desc *desc,
struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
{ {
@ -5027,7 +5027,7 @@ lpfc_rdp_res_attach_port_names(struct fc_rdp_port_name_desc *desc,
return sizeof(struct fc_rdp_port_name_desc); return sizeof(struct fc_rdp_port_name_desc);
} }
void static void
lpfc_els_rdp_cmpl(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context, lpfc_els_rdp_cmpl(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context,
int status) int status)
{ {
@ -5165,7 +5165,7 @@ free_rdp_context:
kfree(rdp_context); kfree(rdp_context);
} }
int static int
lpfc_get_rdp_info(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context) lpfc_get_rdp_info(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context)
{ {
LPFC_MBOXQ_t *mbox = NULL; LPFC_MBOXQ_t *mbox = NULL;
@ -7995,7 +7995,7 @@ lpfc_els_unsol_event(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
} }
} }
void static void
lpfc_start_fdmi(struct lpfc_vport *vport) lpfc_start_fdmi(struct lpfc_vport *vport)
{ {
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;


@ -2260,7 +2260,7 @@ lpfc_sli4_dump_cfg_rg23(struct lpfc_hba *phba, struct lpfcMboxq *mbox)
return 0; return 0;
} }
void static void
lpfc_mbx_cmpl_rdp_link_stat(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq) lpfc_mbx_cmpl_rdp_link_stat(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
{ {
MAILBOX_t *mb; MAILBOX_t *mb;
@ -2281,7 +2281,7 @@ mbx_failed:
rdp_context->cmpl(phba, rdp_context, rc); rdp_context->cmpl(phba, rdp_context, rc);
} }
void static void
lpfc_mbx_cmpl_rdp_page_a2(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox) lpfc_mbx_cmpl_rdp_page_a2(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox)
{ {
struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) mbox->context1; struct lpfc_dmabuf *mp = (struct lpfc_dmabuf *) mbox->context1;


@ -5689,7 +5689,7 @@ lpfc_sli4_dealloc_extent(struct lpfc_hba *phba, uint16_t type)
return rc; return rc;
} }
void static void
lpfc_set_features(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox, lpfc_set_features(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox,
uint32_t feature) uint32_t feature)
{ {
@ -8968,7 +8968,7 @@ lpfc_sli_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
* Since ABORTS must go on the same WQ of the command they are * Since ABORTS must go on the same WQ of the command they are
* aborting, we use command's fcp_wqidx. * aborting, we use command's fcp_wqidx.
*/ */
int static int
lpfc_sli_calc_ring(struct lpfc_hba *phba, uint32_t ring_number, lpfc_sli_calc_ring(struct lpfc_hba *phba, uint32_t ring_number,
struct lpfc_iocbq *piocb) struct lpfc_iocbq *piocb)
{ {


@ -189,25 +189,12 @@ u32
megasas_build_and_issue_cmd(struct megasas_instance *instance, megasas_build_and_issue_cmd(struct megasas_instance *instance,
struct scsi_cmnd *scmd); struct scsi_cmnd *scmd);
static void megasas_complete_cmd_dpc(unsigned long instance_addr); static void megasas_complete_cmd_dpc(unsigned long instance_addr);
void
megasas_release_fusion(struct megasas_instance *instance);
int
megasas_ioc_init_fusion(struct megasas_instance *instance);
void
megasas_free_cmds_fusion(struct megasas_instance *instance);
u8
megasas_get_map_info(struct megasas_instance *instance);
int
megasas_sync_map_info(struct megasas_instance *instance);
int int
wait_and_poll(struct megasas_instance *instance, struct megasas_cmd *cmd, wait_and_poll(struct megasas_instance *instance, struct megasas_cmd *cmd,
int seconds); int seconds);
void megasas_reset_reply_desc(struct megasas_instance *instance);
void megasas_fusion_ocr_wq(struct work_struct *work); void megasas_fusion_ocr_wq(struct work_struct *work);
static int megasas_get_ld_vf_affiliation(struct megasas_instance *instance, static int megasas_get_ld_vf_affiliation(struct megasas_instance *instance,
int initial); int initial);
int megasas_check_mpio_paths(struct megasas_instance *instance,
struct scsi_cmnd *scmd);
int int
megasas_issue_dcmd(struct megasas_instance *instance, struct megasas_cmd *cmd) megasas_issue_dcmd(struct megasas_instance *instance, struct megasas_cmd *cmd)
@ -5036,7 +5023,7 @@ static int megasas_init_fw(struct megasas_instance *instance)
/* Find first memory bar */ /* Find first memory bar */
bar_list = pci_select_bars(instance->pdev, IORESOURCE_MEM); bar_list = pci_select_bars(instance->pdev, IORESOURCE_MEM);
instance->bar = find_first_bit(&bar_list, sizeof(unsigned long)); instance->bar = find_first_bit(&bar_list, BITS_PER_LONG);
if (pci_request_selected_regions(instance->pdev, 1<<instance->bar, if (pci_request_selected_regions(instance->pdev, 1<<instance->bar,
"megasas: LSI")) { "megasas: LSI")) {
dev_printk(KERN_DEBUG, &instance->pdev->dev, "IO memory region busy!\n"); dev_printk(KERN_DEBUG, &instance->pdev->dev, "IO memory region busy!\n");
@ -5782,7 +5769,7 @@ static int megasas_probe_one(struct pci_dev *pdev,
&instance->consumer_h); &instance->consumer_h);
if (!instance->producer || !instance->consumer) { if (!instance->producer || !instance->consumer) {
dev_printk(KERN_DEBUG, &pdev->dev, "Failed to allocate" dev_printk(KERN_DEBUG, &pdev->dev, "Failed to allocate "
"memory for producer, consumer\n"); "memory for producer, consumer\n");
goto fail_alloc_dma_buf; goto fail_alloc_dma_buf;
} }
@ -6711,14 +6698,9 @@ static int megasas_mgmt_ioctl_fw(struct file *file, unsigned long arg)
unsigned long flags; unsigned long flags;
u32 wait_time = MEGASAS_RESET_WAIT_TIME; u32 wait_time = MEGASAS_RESET_WAIT_TIME;
ioc = kmalloc(sizeof(*ioc), GFP_KERNEL); ioc = memdup_user(user_ioc, sizeof(*ioc));
if (!ioc) if (IS_ERR(ioc))
return -ENOMEM; return PTR_ERR(ioc);
if (copy_from_user(ioc, user_ioc, sizeof(*ioc))) {
error = -EFAULT;
goto out_kfree_ioc;
}
instance = megasas_lookup_instance(ioc->host_no); instance = megasas_lookup_instance(ioc->host_no);
if (!instance) { if (!instance) {
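
The megasas_mgmt_ioctl_fw() hunk above is the stock kmalloc()-plus-copy_from_user() to memdup_user() conversion: memdup_user() allocates, copies from userspace, and reports failure through ERR_PTR(), which is why the separate -EFAULT path and its kfree label disappear. A generic sketch of the idiom, with hypothetical names (not this driver's), follows.

/*
 * Sketch of the memdup_user() idiom used in the hunk above.  The
 * function and structure names here are hypothetical; only the pattern
 * (allocate + copy_from_user collapsed into one call that returns an
 * ERR_PTR() on -ENOMEM/-EFAULT) reflects the actual change.
 */
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/err.h>
#include <linux/uaccess.h>

struct example_ioc {
	u16 host_no;
	/* ... */
};

static int example_ioctl(void __user *user_ioc)
{
	struct example_ioc *ioc;

	ioc = memdup_user(user_ioc, sizeof(*ioc));
	if (IS_ERR(ioc))
		return PTR_ERR(ioc);	/* -ENOMEM or -EFAULT */

	/* ... use ioc ... */

	kfree(ioc);
	return 0;
}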


@ -991,5 +991,14 @@ union desc_value {
} u; } u;
}; };
void megasas_free_cmds_fusion(struct megasas_instance *instance);
int megasas_ioc_init_fusion(struct megasas_instance *instance);
u8 megasas_get_map_info(struct megasas_instance *instance);
int megasas_sync_map_info(struct megasas_instance *instance);
void megasas_release_fusion(struct megasas_instance *instance);
void megasas_reset_reply_desc(struct megasas_instance *instance);
int megasas_check_mpio_paths(struct megasas_instance *instance,
struct scsi_cmnd *scmd);
void megasas_fusion_ocr_wq(struct work_struct *work);
#endif /* _MEGARAID_SAS_FUSION_H_ */ #endif /* _MEGARAID_SAS_FUSION_H_ */


@ -98,7 +98,7 @@ MODULE_PARM_DESC(mpt3sas_fwfault_debug,
" enable detection of firmware fault and halt firmware - (default=0)"); " enable detection of firmware fault and halt firmware - (default=0)");
static int static int
_base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc, int sleep_flag); _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc);
/** /**
* _scsih_set_fwfault_debug - global setting of ioc->fwfault_debug. * _scsih_set_fwfault_debug - global setting of ioc->fwfault_debug.
@ -218,8 +218,7 @@ _base_fault_reset_work(struct work_struct *work)
ioc->non_operational_loop = 0; ioc->non_operational_loop = 0;
if ((doorbell & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_OPERATIONAL) { if ((doorbell & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_OPERATIONAL) {
rc = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, rc = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
pr_warn(MPT3SAS_FMT "%s: hard reset: %s\n", ioc->name, pr_warn(MPT3SAS_FMT "%s: hard reset: %s\n", ioc->name,
__func__, (rc == 0) ? "success" : "failed"); __func__, (rc == 0) ? "success" : "failed");
doorbell = mpt3sas_base_get_iocstate(ioc, 0); doorbell = mpt3sas_base_get_iocstate(ioc, 0);
@ -2040,7 +2039,7 @@ _base_enable_msix(struct MPT3SAS_ADAPTER *ioc)
* mpt3sas_base_unmap_resources - free controller resources * mpt3sas_base_unmap_resources - free controller resources
* @ioc: per adapter object * @ioc: per adapter object
*/ */
void static void
mpt3sas_base_unmap_resources(struct MPT3SAS_ADAPTER *ioc) mpt3sas_base_unmap_resources(struct MPT3SAS_ADAPTER *ioc)
{ {
struct pci_dev *pdev = ioc->pdev; struct pci_dev *pdev = ioc->pdev;
@ -2145,7 +2144,7 @@ mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc)
_base_mask_interrupts(ioc); _base_mask_interrupts(ioc);
r = _base_get_ioc_facts(ioc, CAN_SLEEP); r = _base_get_ioc_facts(ioc);
if (r) if (r)
goto out_fail; goto out_fail;
@ -3183,12 +3182,11 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
/** /**
* _base_allocate_memory_pools - allocate start of day memory pools * _base_allocate_memory_pools - allocate start of day memory pools
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 success, anything else error * Returns 0 success, anything else error
*/ */
static int static int
_base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
{ {
struct mpt3sas_facts *facts; struct mpt3sas_facts *facts;
u16 max_sge_elements; u16 max_sge_elements;
@ -3658,29 +3656,25 @@ mpt3sas_base_get_iocstate(struct MPT3SAS_ADAPTER *ioc, int cooked)
* _base_wait_on_iocstate - waiting on a particular ioc state * _base_wait_on_iocstate - waiting on a particular ioc state
* @ioc_state: controller state { READY, OPERATIONAL, or RESET } * @ioc_state: controller state { READY, OPERATIONAL, or RESET }
* @timeout: timeout in second * @timeout: timeout in second
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_wait_on_iocstate(struct MPT3SAS_ADAPTER *ioc, u32 ioc_state, int timeout, _base_wait_on_iocstate(struct MPT3SAS_ADAPTER *ioc, u32 ioc_state, int timeout)
int sleep_flag)
{ {
u32 count, cntdn; u32 count, cntdn;
u32 current_state; u32 current_state;
count = 0; count = 0;
cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout; cntdn = 1000 * timeout;
do { do {
current_state = mpt3sas_base_get_iocstate(ioc, 1); current_state = mpt3sas_base_get_iocstate(ioc, 1);
if (current_state == ioc_state) if (current_state == ioc_state)
return 0; return 0;
if (count && current_state == MPI2_IOC_STATE_FAULT) if (count && current_state == MPI2_IOC_STATE_FAULT)
break; break;
if (sleep_flag == CAN_SLEEP)
usleep_range(1000, 1500); usleep_range(1000, 1500);
else
udelay(500);
count++; count++;
} while (--cntdn); } while (--cntdn);
@ -3692,24 +3686,22 @@ _base_wait_on_iocstate(struct MPT3SAS_ADAPTER *ioc, u32 ioc_state, int timeout,
* a write to the doorbell) * a write to the doorbell)
* @ioc: per adapter object * @ioc: per adapter object
* @timeout: timeout in second * @timeout: timeout in second
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
* *
* Notes: MPI2_HIS_IOC2SYS_DB_STATUS - set to one when IOC writes to doorbell. * Notes: MPI2_HIS_IOC2SYS_DB_STATUS - set to one when IOC writes to doorbell.
*/ */
static int static int
_base_diag_reset(struct MPT3SAS_ADAPTER *ioc, int sleep_flag); _base_diag_reset(struct MPT3SAS_ADAPTER *ioc);
static int static int
_base_wait_for_doorbell_int(struct MPT3SAS_ADAPTER *ioc, int timeout, _base_wait_for_doorbell_int(struct MPT3SAS_ADAPTER *ioc, int timeout)
int sleep_flag)
{ {
u32 cntdn, count; u32 cntdn, count;
u32 int_status; u32 int_status;
count = 0; count = 0;
cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout; cntdn = 1000 * timeout;
do { do {
int_status = readl(&ioc->chip->HostInterruptStatus); int_status = readl(&ioc->chip->HostInterruptStatus);
if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) { if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
@ -3718,10 +3710,8 @@ _base_wait_for_doorbell_int(struct MPT3SAS_ADAPTER *ioc, int timeout,
ioc->name, __func__, count, timeout)); ioc->name, __func__, count, timeout));
return 0; return 0;
} }
if (sleep_flag == CAN_SLEEP)
usleep_range(1000, 1500); usleep_range(1000, 1500);
else
udelay(500);
count++; count++;
} while (--cntdn); } while (--cntdn);
@ -3731,11 +3721,38 @@ _base_wait_for_doorbell_int(struct MPT3SAS_ADAPTER *ioc, int timeout,
return -EFAULT; return -EFAULT;
} }
static int
_base_spin_on_doorbell_int(struct MPT3SAS_ADAPTER *ioc, int timeout)
{
u32 cntdn, count;
u32 int_status;
count = 0;
cntdn = 2000 * timeout;
do {
int_status = readl(&ioc->chip->HostInterruptStatus);
if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
dhsprintk(ioc, pr_info(MPT3SAS_FMT
"%s: successful count(%d), timeout(%d)\n",
ioc->name, __func__, count, timeout));
return 0;
}
udelay(500);
count++;
} while (--cntdn);
pr_err(MPT3SAS_FMT
"%s: failed due to timeout count(%d), int_status(%x)!\n",
ioc->name, __func__, count, int_status);
return -EFAULT;
}
/** /**
* _base_wait_for_doorbell_ack - waiting for controller to read the doorbell. * _base_wait_for_doorbell_ack - waiting for controller to read the doorbell.
* @ioc: per adapter object * @ioc: per adapter object
* @timeout: timeout in second * @timeout: timeout in second
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
* *
@ -3743,15 +3760,14 @@ _base_wait_for_doorbell_int(struct MPT3SAS_ADAPTER *ioc, int timeout,
* doorbell. * doorbell.
*/ */
static int static int
_base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout, _base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout)
int sleep_flag)
{ {
u32 cntdn, count; u32 cntdn, count;
u32 int_status; u32 int_status;
u32 doorbell; u32 doorbell;
count = 0; count = 0;
cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout; cntdn = 1000 * timeout;
do { do {
int_status = readl(&ioc->chip->HostInterruptStatus); int_status = readl(&ioc->chip->HostInterruptStatus);
if (!(int_status & MPI2_HIS_SYS2IOC_DB_STATUS)) { if (!(int_status & MPI2_HIS_SYS2IOC_DB_STATUS)) {
@ -3769,10 +3785,7 @@ _base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout,
} else if (int_status == 0xFFFFFFFF) } else if (int_status == 0xFFFFFFFF)
goto out; goto out;
if (sleep_flag == CAN_SLEEP) usleep_range(1000, 1500);
usleep_range(1000, 1500);
else
udelay(500);
count++; count++;
} while (--cntdn); } while (--cntdn);
@ -3787,20 +3800,18 @@ _base_wait_for_doorbell_ack(struct MPT3SAS_ADAPTER *ioc, int timeout,
* _base_wait_for_doorbell_not_used - waiting for doorbell to not be in use * _base_wait_for_doorbell_not_used - waiting for doorbell to not be in use
* @ioc: per adapter object * @ioc: per adapter object
* @timeout: timeout in second * @timeout: timeout in second
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
* *
*/ */
static int static int
_base_wait_for_doorbell_not_used(struct MPT3SAS_ADAPTER *ioc, int timeout, _base_wait_for_doorbell_not_used(struct MPT3SAS_ADAPTER *ioc, int timeout)
int sleep_flag)
{ {
u32 cntdn, count; u32 cntdn, count;
u32 doorbell_reg; u32 doorbell_reg;
count = 0; count = 0;
cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout; cntdn = 1000 * timeout;
do { do {
doorbell_reg = readl(&ioc->chip->Doorbell); doorbell_reg = readl(&ioc->chip->Doorbell);
if (!(doorbell_reg & MPI2_DOORBELL_USED)) { if (!(doorbell_reg & MPI2_DOORBELL_USED)) {
@ -3809,10 +3820,8 @@ _base_wait_for_doorbell_not_used(struct MPT3SAS_ADAPTER *ioc, int timeout,
ioc->name, __func__, count, timeout)); ioc->name, __func__, count, timeout));
return 0; return 0;
} }
if (sleep_flag == CAN_SLEEP)
usleep_range(1000, 1500); usleep_range(1000, 1500);
else
udelay(500);
count++; count++;
} while (--cntdn); } while (--cntdn);
@ -3827,13 +3836,11 @@ _base_wait_for_doorbell_not_used(struct MPT3SAS_ADAPTER *ioc, int timeout,
* @ioc: per adapter object * @ioc: per adapter object
* @reset_type: currently only supports: MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET * @reset_type: currently only supports: MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET
* @timeout: timeout in second * @timeout: timeout in second
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_send_ioc_reset(struct MPT3SAS_ADAPTER *ioc, u8 reset_type, int timeout, _base_send_ioc_reset(struct MPT3SAS_ADAPTER *ioc, u8 reset_type, int timeout)
int sleep_flag)
{ {
u32 ioc_state; u32 ioc_state;
int r = 0; int r = 0;
@ -3852,12 +3859,11 @@ _base_send_ioc_reset(struct MPT3SAS_ADAPTER *ioc, u8 reset_type, int timeout,
writel(reset_type << MPI2_DOORBELL_FUNCTION_SHIFT, writel(reset_type << MPI2_DOORBELL_FUNCTION_SHIFT,
&ioc->chip->Doorbell); &ioc->chip->Doorbell);
if ((_base_wait_for_doorbell_ack(ioc, 15, sleep_flag))) { if ((_base_wait_for_doorbell_ack(ioc, 15))) {
r = -EFAULT; r = -EFAULT;
goto out; goto out;
} }
ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, timeout);
timeout, sleep_flag);
if (ioc_state) { if (ioc_state) {
pr_err(MPT3SAS_FMT pr_err(MPT3SAS_FMT
"%s: failed going to ready state (ioc_state=0x%x)\n", "%s: failed going to ready state (ioc_state=0x%x)\n",
@ -3879,18 +3885,16 @@ _base_send_ioc_reset(struct MPT3SAS_ADAPTER *ioc, u8 reset_type, int timeout,
* @reply_bytes: reply length * @reply_bytes: reply length
* @reply: pointer to reply payload * @reply: pointer to reply payload
* @timeout: timeout in second * @timeout: timeout in second
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes, _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
u32 *request, int reply_bytes, u16 *reply, int timeout, int sleep_flag) u32 *request, int reply_bytes, u16 *reply, int timeout)
{ {
MPI2DefaultReply_t *default_reply = (MPI2DefaultReply_t *)reply; MPI2DefaultReply_t *default_reply = (MPI2DefaultReply_t *)reply;
int i; int i;
u8 failed; u8 failed;
u16 dummy;
__le32 *mfp; __le32 *mfp;
/* make sure doorbell is not in use */ /* make sure doorbell is not in use */
@ -3911,7 +3915,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
((request_bytes/4)<<MPI2_DOORBELL_ADD_DWORDS_SHIFT)), ((request_bytes/4)<<MPI2_DOORBELL_ADD_DWORDS_SHIFT)),
&ioc->chip->Doorbell); &ioc->chip->Doorbell);
if ((_base_wait_for_doorbell_int(ioc, 5, NO_SLEEP))) { if ((_base_spin_on_doorbell_int(ioc, 5))) {
pr_err(MPT3SAS_FMT pr_err(MPT3SAS_FMT
"doorbell handshake int failed (line=%d)\n", "doorbell handshake int failed (line=%d)\n",
ioc->name, __LINE__); ioc->name, __LINE__);
@ -3919,7 +3923,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
} }
writel(0, &ioc->chip->HostInterruptStatus); writel(0, &ioc->chip->HostInterruptStatus);
if ((_base_wait_for_doorbell_ack(ioc, 5, sleep_flag))) { if ((_base_wait_for_doorbell_ack(ioc, 5))) {
pr_err(MPT3SAS_FMT pr_err(MPT3SAS_FMT
"doorbell handshake ack failed (line=%d)\n", "doorbell handshake ack failed (line=%d)\n",
ioc->name, __LINE__); ioc->name, __LINE__);
@ -3929,7 +3933,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
/* send message 32-bits at a time */ /* send message 32-bits at a time */
for (i = 0, failed = 0; i < request_bytes/4 && !failed; i++) { for (i = 0, failed = 0; i < request_bytes/4 && !failed; i++) {
writel(cpu_to_le32(request[i]), &ioc->chip->Doorbell); writel(cpu_to_le32(request[i]), &ioc->chip->Doorbell);
if ((_base_wait_for_doorbell_ack(ioc, 5, sleep_flag))) if ((_base_wait_for_doorbell_ack(ioc, 5)))
failed = 1; failed = 1;
} }
@ -3941,7 +3945,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
} }
/* now wait for the reply */ /* now wait for the reply */
if ((_base_wait_for_doorbell_int(ioc, timeout, sleep_flag))) { if ((_base_wait_for_doorbell_int(ioc, timeout))) {
pr_err(MPT3SAS_FMT pr_err(MPT3SAS_FMT
"doorbell handshake int failed (line=%d)\n", "doorbell handshake int failed (line=%d)\n",
ioc->name, __LINE__); ioc->name, __LINE__);
@ -3952,7 +3956,7 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
reply[0] = le16_to_cpu(readl(&ioc->chip->Doorbell) reply[0] = le16_to_cpu(readl(&ioc->chip->Doorbell)
& MPI2_DOORBELL_DATA_MASK); & MPI2_DOORBELL_DATA_MASK);
writel(0, &ioc->chip->HostInterruptStatus); writel(0, &ioc->chip->HostInterruptStatus);
if ((_base_wait_for_doorbell_int(ioc, 5, sleep_flag))) { if ((_base_wait_for_doorbell_int(ioc, 5))) {
pr_err(MPT3SAS_FMT pr_err(MPT3SAS_FMT
"doorbell handshake int failed (line=%d)\n", "doorbell handshake int failed (line=%d)\n",
ioc->name, __LINE__); ioc->name, __LINE__);
@ -3963,22 +3967,22 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
writel(0, &ioc->chip->HostInterruptStatus); writel(0, &ioc->chip->HostInterruptStatus);
for (i = 2; i < default_reply->MsgLength * 2; i++) { for (i = 2; i < default_reply->MsgLength * 2; i++) {
if ((_base_wait_for_doorbell_int(ioc, 5, sleep_flag))) { if ((_base_wait_for_doorbell_int(ioc, 5))) {
pr_err(MPT3SAS_FMT pr_err(MPT3SAS_FMT
"doorbell handshake int failed (line=%d)\n", "doorbell handshake int failed (line=%d)\n",
ioc->name, __LINE__); ioc->name, __LINE__);
return -EFAULT; return -EFAULT;
} }
if (i >= reply_bytes/2) /* overflow case */ if (i >= reply_bytes/2) /* overflow case */
dummy = readl(&ioc->chip->Doorbell); readl(&ioc->chip->Doorbell);
else else
reply[i] = le16_to_cpu(readl(&ioc->chip->Doorbell) reply[i] = le16_to_cpu(readl(&ioc->chip->Doorbell)
& MPI2_DOORBELL_DATA_MASK); & MPI2_DOORBELL_DATA_MASK);
writel(0, &ioc->chip->HostInterruptStatus); writel(0, &ioc->chip->HostInterruptStatus);
} }
_base_wait_for_doorbell_int(ioc, 5, sleep_flag); _base_wait_for_doorbell_int(ioc, 5);
if (_base_wait_for_doorbell_not_used(ioc, 5, sleep_flag) != 0) { if (_base_wait_for_doorbell_not_used(ioc, 5) != 0) {
dhsprintk(ioc, pr_info(MPT3SAS_FMT dhsprintk(ioc, pr_info(MPT3SAS_FMT
"doorbell is in use (line=%d)\n", ioc->name, __LINE__)); "doorbell is in use (line=%d)\n", ioc->name, __LINE__));
} }
@ -4015,7 +4019,6 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc,
{ {
u16 smid; u16 smid;
u32 ioc_state; u32 ioc_state;
unsigned long timeleft;
bool issue_reset = false; bool issue_reset = false;
int rc; int rc;
void *request; void *request;
@ -4068,7 +4071,7 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc,
ioc->ioc_link_reset_in_progress = 1; ioc->ioc_link_reset_in_progress = 1;
init_completion(&ioc->base_cmds.done); init_completion(&ioc->base_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->base_cmds.done, wait_for_completion_timeout(&ioc->base_cmds.done,
msecs_to_jiffies(10000)); msecs_to_jiffies(10000));
if ((mpi_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET || if ((mpi_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET ||
mpi_request->Operation == MPI2_SAS_OP_PHY_LINK_RESET) && mpi_request->Operation == MPI2_SAS_OP_PHY_LINK_RESET) &&
@ -4093,8 +4096,7 @@ mpt3sas_base_sas_iounit_control(struct MPT3SAS_ADAPTER *ioc,
issue_host_reset: issue_host_reset:
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
ioc->base_cmds.status = MPT3_CMD_NOT_USED; ioc->base_cmds.status = MPT3_CMD_NOT_USED;
rc = -EFAULT; rc = -EFAULT;
out: out:
@ -4119,7 +4121,6 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc,
{ {
u16 smid; u16 smid;
u32 ioc_state; u32 ioc_state;
unsigned long timeleft;
bool issue_reset = false; bool issue_reset = false;
int rc; int rc;
void *request; void *request;
@ -4170,7 +4171,7 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc,
memcpy(request, mpi_request, sizeof(Mpi2SepReply_t)); memcpy(request, mpi_request, sizeof(Mpi2SepReply_t));
init_completion(&ioc->base_cmds.done); init_completion(&ioc->base_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->base_cmds.done, wait_for_completion_timeout(&ioc->base_cmds.done,
msecs_to_jiffies(10000)); msecs_to_jiffies(10000));
if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s: timeout\n", pr_err(MPT3SAS_FMT "%s: timeout\n",
@ -4191,8 +4192,7 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc,
issue_host_reset: issue_host_reset:
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
ioc->base_cmds.status = MPT3_CMD_NOT_USED; ioc->base_cmds.status = MPT3_CMD_NOT_USED;
rc = -EFAULT; rc = -EFAULT;
out: out:
@ -4203,12 +4203,11 @@ mpt3sas_base_scsi_enclosure_processor(struct MPT3SAS_ADAPTER *ioc,
/** /**
* _base_get_port_facts - obtain port facts reply and save in ioc * _base_get_port_facts - obtain port facts reply and save in ioc
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_get_port_facts(struct MPT3SAS_ADAPTER *ioc, int port, int sleep_flag) _base_get_port_facts(struct MPT3SAS_ADAPTER *ioc, int port)
{ {
Mpi2PortFactsRequest_t mpi_request; Mpi2PortFactsRequest_t mpi_request;
Mpi2PortFactsReply_t mpi_reply; Mpi2PortFactsReply_t mpi_reply;
@ -4224,7 +4223,7 @@ _base_get_port_facts(struct MPT3SAS_ADAPTER *ioc, int port, int sleep_flag)
mpi_request.Function = MPI2_FUNCTION_PORT_FACTS; mpi_request.Function = MPI2_FUNCTION_PORT_FACTS;
mpi_request.PortNumber = port; mpi_request.PortNumber = port;
r = _base_handshake_req_reply_wait(ioc, mpi_request_sz, r = _base_handshake_req_reply_wait(ioc, mpi_request_sz,
(u32 *)&mpi_request, mpi_reply_sz, (u16 *)&mpi_reply, 5, CAN_SLEEP); (u32 *)&mpi_request, mpi_reply_sz, (u16 *)&mpi_reply, 5);
if (r != 0) { if (r != 0) {
pr_err(MPT3SAS_FMT "%s: handshake failed (r=%d)\n", pr_err(MPT3SAS_FMT "%s: handshake failed (r=%d)\n",
@ -4247,13 +4246,11 @@ _base_get_port_facts(struct MPT3SAS_ADAPTER *ioc, int port, int sleep_flag)
* _base_wait_for_iocstate - Wait until the card is in READY or OPERATIONAL * _base_wait_for_iocstate - Wait until the card is in READY or OPERATIONAL
* @ioc: per adapter object * @ioc: per adapter object
* @timeout: * @timeout:
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_wait_for_iocstate(struct MPT3SAS_ADAPTER *ioc, int timeout, _base_wait_for_iocstate(struct MPT3SAS_ADAPTER *ioc, int timeout)
int sleep_flag)
{ {
u32 ioc_state; u32 ioc_state;
int rc; int rc;
@ -4287,8 +4284,7 @@ _base_wait_for_iocstate(struct MPT3SAS_ADAPTER *ioc, int timeout,
goto issue_diag_reset; goto issue_diag_reset;
} }
ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, timeout);
timeout, sleep_flag);
if (ioc_state) { if (ioc_state) {
dfailprintk(ioc, printk(MPT3SAS_FMT dfailprintk(ioc, printk(MPT3SAS_FMT
"%s: failed going to ready state (ioc_state=0x%x)\n", "%s: failed going to ready state (ioc_state=0x%x)\n",
@ -4297,19 +4293,18 @@ _base_wait_for_iocstate(struct MPT3SAS_ADAPTER *ioc, int timeout,
} }
issue_diag_reset: issue_diag_reset:
rc = _base_diag_reset(ioc, sleep_flag); rc = _base_diag_reset(ioc);
return rc; return rc;
} }
/** /**
* _base_get_ioc_facts - obtain ioc facts reply and save in ioc * _base_get_ioc_facts - obtain ioc facts reply and save in ioc
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc)
{ {
Mpi2IOCFactsRequest_t mpi_request; Mpi2IOCFactsRequest_t mpi_request;
Mpi2IOCFactsReply_t mpi_reply; Mpi2IOCFactsReply_t mpi_reply;
@ -4319,7 +4314,7 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
dinitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name, dinitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name,
__func__)); __func__));
r = _base_wait_for_iocstate(ioc, 10, sleep_flag); r = _base_wait_for_iocstate(ioc, 10);
if (r) { if (r) {
dfailprintk(ioc, printk(MPT3SAS_FMT dfailprintk(ioc, printk(MPT3SAS_FMT
"%s: failed getting to correct state\n", "%s: failed getting to correct state\n",
@ -4331,7 +4326,7 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
memset(&mpi_request, 0, mpi_request_sz); memset(&mpi_request, 0, mpi_request_sz);
mpi_request.Function = MPI2_FUNCTION_IOC_FACTS; mpi_request.Function = MPI2_FUNCTION_IOC_FACTS;
r = _base_handshake_req_reply_wait(ioc, mpi_request_sz, r = _base_handshake_req_reply_wait(ioc, mpi_request_sz,
(u32 *)&mpi_request, mpi_reply_sz, (u16 *)&mpi_reply, 5, CAN_SLEEP); (u32 *)&mpi_request, mpi_reply_sz, (u16 *)&mpi_reply, 5);
if (r != 0) { if (r != 0) {
pr_err(MPT3SAS_FMT "%s: handshake failed (r=%d)\n", pr_err(MPT3SAS_FMT "%s: handshake failed (r=%d)\n",
@ -4391,12 +4386,11 @@ _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
/** /**
* _base_send_ioc_init - send ioc_init to firmware * _base_send_ioc_init - send ioc_init to firmware
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc)
{ {
Mpi2IOCInitRequest_t mpi_request; Mpi2IOCInitRequest_t mpi_request;
Mpi2IOCInitReply_t mpi_reply; Mpi2IOCInitReply_t mpi_reply;
@ -4479,8 +4473,7 @@ _base_send_ioc_init(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
r = _base_handshake_req_reply_wait(ioc, r = _base_handshake_req_reply_wait(ioc,
sizeof(Mpi2IOCInitRequest_t), (u32 *)&mpi_request, sizeof(Mpi2IOCInitRequest_t), (u32 *)&mpi_request,
sizeof(Mpi2IOCInitReply_t), (u16 *)&mpi_reply, 10, sizeof(Mpi2IOCInitReply_t), (u16 *)&mpi_reply, 10);
sleep_flag);
if (r != 0) { if (r != 0) {
pr_err(MPT3SAS_FMT "%s: handshake failed (r=%d)\n", pr_err(MPT3SAS_FMT "%s: handshake failed (r=%d)\n",
@ -4555,16 +4548,14 @@ mpt3sas_port_enable_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index,
/** /**
* _base_send_port_enable - send port_enable(discovery stuff) to firmware * _base_send_port_enable - send port_enable(discovery stuff) to firmware
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_send_port_enable(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) _base_send_port_enable(struct MPT3SAS_ADAPTER *ioc)
{ {
Mpi2PortEnableRequest_t *mpi_request; Mpi2PortEnableRequest_t *mpi_request;
Mpi2PortEnableReply_t *mpi_reply; Mpi2PortEnableReply_t *mpi_reply;
unsigned long timeleft;
int r = 0; int r = 0;
u16 smid; u16 smid;
u16 ioc_status; u16 ioc_status;
@ -4592,8 +4583,7 @@ _base_send_port_enable(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
init_completion(&ioc->port_enable_cmds.done); init_completion(&ioc->port_enable_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->port_enable_cmds.done, wait_for_completion_timeout(&ioc->port_enable_cmds.done, 300*HZ);
300*HZ);
if (!(ioc->port_enable_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->port_enable_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s: timeout\n", pr_err(MPT3SAS_FMT "%s: timeout\n",
ioc->name, __func__); ioc->name, __func__);
@ -4737,15 +4727,13 @@ _base_unmask_events(struct MPT3SAS_ADAPTER *ioc, u16 event)
/** /**
* _base_event_notification - send event notification * _base_event_notification - send event notification
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_event_notification(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) _base_event_notification(struct MPT3SAS_ADAPTER *ioc)
{ {
Mpi2EventNotificationRequest_t *mpi_request; Mpi2EventNotificationRequest_t *mpi_request;
unsigned long timeleft;
u16 smid; u16 smid;
int r = 0; int r = 0;
int i; int i;
@ -4777,7 +4765,7 @@ _base_event_notification(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
cpu_to_le32(ioc->event_masks[i]); cpu_to_le32(ioc->event_masks[i]);
init_completion(&ioc->base_cmds.done); init_completion(&ioc->base_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->base_cmds.done, 30*HZ); wait_for_completion_timeout(&ioc->base_cmds.done, 30*HZ);
if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s: timeout\n", pr_err(MPT3SAS_FMT "%s: timeout\n",
ioc->name, __func__); ioc->name, __func__);
@ -4827,19 +4815,18 @@ mpt3sas_base_validate_event_type(struct MPT3SAS_ADAPTER *ioc, u32 *event_type)
return; return;
mutex_lock(&ioc->base_cmds.mutex); mutex_lock(&ioc->base_cmds.mutex);
_base_event_notification(ioc, CAN_SLEEP); _base_event_notification(ioc);
mutex_unlock(&ioc->base_cmds.mutex); mutex_unlock(&ioc->base_cmds.mutex);
} }
/** /**
* _base_diag_reset - the "big hammer" start of day reset * _base_diag_reset - the "big hammer" start of day reset
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_diag_reset(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) _base_diag_reset(struct MPT3SAS_ADAPTER *ioc)
{ {
u32 host_diagnostic; u32 host_diagnostic;
u32 ioc_state; u32 ioc_state;
@ -4867,10 +4854,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
writel(MPI2_WRSEQ_6TH_KEY_VALUE, &ioc->chip->WriteSequence); writel(MPI2_WRSEQ_6TH_KEY_VALUE, &ioc->chip->WriteSequence);
/* wait 100 msec */ /* wait 100 msec */
if (sleep_flag == CAN_SLEEP) msleep(100);
msleep(100);
else
mdelay(100);
if (count++ > 20) if (count++ > 20)
goto out; goto out;
@ -4890,10 +4874,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
&ioc->chip->HostDiagnostic); &ioc->chip->HostDiagnostic);
/*This delay allows the chip PCIe hardware time to finish reset tasks*/ /*This delay allows the chip PCIe hardware time to finish reset tasks*/
if (sleep_flag == CAN_SLEEP) msleep(MPI2_HARD_RESET_PCIE_FIRST_READ_DELAY_MICRO_SEC/1000);
msleep(MPI2_HARD_RESET_PCIE_FIRST_READ_DELAY_MICRO_SEC/1000);
else
mdelay(MPI2_HARD_RESET_PCIE_FIRST_READ_DELAY_MICRO_SEC/1000);
/* Approximately 300 second max wait */ /* Approximately 300 second max wait */
for (count = 0; count < (300000000 / for (count = 0; count < (300000000 /
@ -4906,13 +4887,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
if (!(host_diagnostic & MPI2_DIAG_RESET_ADAPTER)) if (!(host_diagnostic & MPI2_DIAG_RESET_ADAPTER))
break; break;
/* Wait to pass the second read delay window */ msleep(MPI2_HARD_RESET_PCIE_SECOND_READ_DELAY_MICRO_SEC / 1000);
if (sleep_flag == CAN_SLEEP)
msleep(MPI2_HARD_RESET_PCIE_SECOND_READ_DELAY_MICRO_SEC
/ 1000);
else
mdelay(MPI2_HARD_RESET_PCIE_SECOND_READ_DELAY_MICRO_SEC
/ 1000);
} }
if (host_diagnostic & MPI2_DIAG_HCB_MODE) { if (host_diagnostic & MPI2_DIAG_HCB_MODE) {
@ -4941,8 +4916,7 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
drsprintk(ioc, pr_info(MPT3SAS_FMT drsprintk(ioc, pr_info(MPT3SAS_FMT
"Wait for FW to go to the READY state\n", ioc->name)); "Wait for FW to go to the READY state\n", ioc->name));
ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, 20, ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, 20);
sleep_flag);
if (ioc_state) { if (ioc_state) {
pr_err(MPT3SAS_FMT pr_err(MPT3SAS_FMT
"%s: failed going to ready state (ioc_state=0x%x)\n", "%s: failed going to ready state (ioc_state=0x%x)\n",
@ -4961,14 +4935,12 @@ _base_diag_reset(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
/** /**
* _base_make_ioc_ready - put controller in READY state * _base_make_ioc_ready - put controller in READY state
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* @type: FORCE_BIG_HAMMER or SOFT_RESET * @type: FORCE_BIG_HAMMER or SOFT_RESET
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_make_ioc_ready(struct MPT3SAS_ADAPTER *ioc, int sleep_flag, _base_make_ioc_ready(struct MPT3SAS_ADAPTER *ioc, enum reset_type type)
enum reset_type type)
{ {
u32 ioc_state; u32 ioc_state;
int rc; int rc;
@ -4995,10 +4967,7 @@ _base_make_ioc_ready(struct MPT3SAS_ADAPTER *ioc, int sleep_flag,
ioc->name, __func__, ioc_state); ioc->name, __func__, ioc_state);
return -EFAULT; return -EFAULT;
} }
if (sleep_flag == CAN_SLEEP) ssleep(1);
ssleep(1);
else
mdelay(1000);
ioc_state = mpt3sas_base_get_iocstate(ioc, 0); ioc_state = mpt3sas_base_get_iocstate(ioc, 0);
} }
} }
@ -5024,24 +4993,23 @@ _base_make_ioc_ready(struct MPT3SAS_ADAPTER *ioc, int sleep_flag,
if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_OPERATIONAL) if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_OPERATIONAL)
if (!(_base_send_ioc_reset(ioc, if (!(_base_send_ioc_reset(ioc,
MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET, 15, CAN_SLEEP))) { MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET, 15))) {
return 0; return 0;
} }
issue_diag_reset: issue_diag_reset:
rc = _base_diag_reset(ioc, CAN_SLEEP); rc = _base_diag_reset(ioc);
return rc; return rc;
} }
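
Pieced together from the hunks above, the READY-state logic now reads as a plain, always-sleepable sequence; a condensed sketch (details of the RESET/FAULT handling omitted):

        ioc_state = mpt3sas_base_get_iocstate(ioc, 0);
        /* ... if the IOC is mid-reset, re-poll with ssleep(1) between reads ... */

        if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_OPERATIONAL)
                if (!_base_send_ioc_reset(ioc,
                                MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET, 15))
                        return 0;       /* message unit reset was enough */

        /* otherwise fall through to the "big hammer" diagnostic reset */
        return _base_diag_reset(ioc);
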
/** /**
* _base_make_ioc_operational - put controller in OPERATIONAL state * _base_make_ioc_operational - put controller in OPERATIONAL state
* @ioc: per adapter object * @ioc: per adapter object
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
static int static int
_base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc)
{ {
int r, i, index; int r, i, index;
unsigned long flags; unsigned long flags;
@ -5160,7 +5128,7 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
} }
skip_init_reply_post_free_queue: skip_init_reply_post_free_queue:
r = _base_send_ioc_init(ioc, sleep_flag); r = _base_send_ioc_init(ioc);
if (r) if (r)
return r; return r;
@ -5186,13 +5154,11 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
skip_init_reply_post_host_index: skip_init_reply_post_host_index:
_base_unmask_interrupts(ioc); _base_unmask_interrupts(ioc);
r = _base_event_notification(ioc, sleep_flag); r = _base_event_notification(ioc);
if (r) if (r)
return r; return r;
if (sleep_flag == CAN_SLEEP) _base_static_config_pages(ioc);
_base_static_config_pages(ioc);
if (ioc->is_driver_loading) { if (ioc->is_driver_loading) {
@ -5211,7 +5177,7 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
return r; /* scan_start and scan_finished support */ return r; /* scan_start and scan_finished support */
} }
r = _base_send_port_enable(ioc, sleep_flag); r = _base_send_port_enable(ioc);
if (r) if (r)
return r; return r;
@ -5235,7 +5201,7 @@ mpt3sas_base_free_resources(struct MPT3SAS_ADAPTER *ioc)
if (ioc->chip_phys && ioc->chip) { if (ioc->chip_phys && ioc->chip) {
_base_mask_interrupts(ioc); _base_mask_interrupts(ioc);
ioc->shost_recovery = 1; ioc->shost_recovery = 1;
_base_make_ioc_ready(ioc, CAN_SLEEP, SOFT_RESET); _base_make_ioc_ready(ioc, SOFT_RESET);
ioc->shost_recovery = 0; ioc->shost_recovery = 0;
} }
@ -5292,7 +5258,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
goto out_free_resources; goto out_free_resources;
pci_set_drvdata(ioc->pdev, ioc->shost); pci_set_drvdata(ioc->pdev, ioc->shost);
r = _base_get_ioc_facts(ioc, CAN_SLEEP); r = _base_get_ioc_facts(ioc);
if (r) if (r)
goto out_free_resources; goto out_free_resources;
@ -5326,7 +5292,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
ioc->build_sg_mpi = &_base_build_sg; ioc->build_sg_mpi = &_base_build_sg;
ioc->build_zero_len_sge_mpi = &_base_build_zero_len_sge; ioc->build_zero_len_sge_mpi = &_base_build_zero_len_sge;
r = _base_make_ioc_ready(ioc, CAN_SLEEP, SOFT_RESET); r = _base_make_ioc_ready(ioc, SOFT_RESET);
if (r) if (r)
goto out_free_resources; goto out_free_resources;
@ -5338,12 +5304,12 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
} }
for (i = 0 ; i < ioc->facts.NumberOfPorts; i++) { for (i = 0 ; i < ioc->facts.NumberOfPorts; i++) {
r = _base_get_port_facts(ioc, i, CAN_SLEEP); r = _base_get_port_facts(ioc, i);
if (r) if (r)
goto out_free_resources; goto out_free_resources;
} }
r = _base_allocate_memory_pools(ioc, CAN_SLEEP); r = _base_allocate_memory_pools(ioc);
if (r) if (r)
goto out_free_resources; goto out_free_resources;
@ -5429,7 +5395,7 @@ mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc)
if (ioc->hba_mpi_version_belonged == MPI26_VERSION) if (ioc->hba_mpi_version_belonged == MPI26_VERSION)
_base_unmask_events(ioc, MPI2_EVENT_ACTIVE_CABLE_EXCEPTION); _base_unmask_events(ioc, MPI2_EVENT_ACTIVE_CABLE_EXCEPTION);
r = _base_make_ioc_operational(ioc, CAN_SLEEP); r = _base_make_ioc_operational(ioc);
if (r) if (r)
goto out_free_resources; goto out_free_resources;
@ -5565,21 +5531,18 @@ _base_reset_handler(struct MPT3SAS_ADAPTER *ioc, int reset_phase)
/** /**
* _wait_for_commands_to_complete - wait for pending commands to complete * _wait_for_commands_to_complete - wait for pending commands to complete
* @ioc: Pointer to MPT_ADAPTER structure * @ioc: Pointer to MPT_ADAPTER structure
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* *
* This function waits (3s) for all pending commands to complete * This function waits (3s) for all pending commands to complete
* prior to putting controller in reset. * prior to putting controller in reset.
*/ */
static void static void
_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) _wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc)
{ {
u32 ioc_state; u32 ioc_state;
unsigned long flags; unsigned long flags;
u16 i; u16 i;
ioc->pending_io_count = 0; ioc->pending_io_count = 0;
if (sleep_flag != CAN_SLEEP)
return;
ioc_state = mpt3sas_base_get_iocstate(ioc, 0); ioc_state = mpt3sas_base_get_iocstate(ioc, 0);
if ((ioc_state & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_OPERATIONAL) if ((ioc_state & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_OPERATIONAL)
@ -5602,13 +5565,12 @@ _wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc, int sleep_flag)
/** /**
* mpt3sas_base_hard_reset_handler - reset controller * mpt3sas_base_hard_reset_handler - reset controller
* @ioc: Pointer to MPT_ADAPTER structure * @ioc: Pointer to MPT_ADAPTER structure
* @sleep_flag: CAN_SLEEP or NO_SLEEP
* @type: FORCE_BIG_HAMMER or SOFT_RESET * @type: FORCE_BIG_HAMMER or SOFT_RESET
* *
* Returns 0 for success, non-zero for failure. * Returns 0 for success, non-zero for failure.
*/ */
int int
mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc, int sleep_flag, mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc,
enum reset_type type) enum reset_type type)
{ {
int r; int r;
@ -5629,13 +5591,6 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc, int sleep_flag,
if (mpt3sas_fwfault_debug) if (mpt3sas_fwfault_debug)
mpt3sas_halt_firmware(ioc); mpt3sas_halt_firmware(ioc);
/* TODO - What we really should be doing is pulling
* out all the code associated with NO_SLEEP; its never used.
* That is legacy code from mpt fusion driver, ported over.
* I will leave this BUG_ON here for now till its been resolved.
*/
BUG_ON(sleep_flag == NO_SLEEP);
/* wait for an active reset in progress to complete */ /* wait for an active reset in progress to complete */
if (!mutex_trylock(&ioc->reset_in_progress_mutex)) { if (!mutex_trylock(&ioc->reset_in_progress_mutex)) {
do { do {
@ -5660,9 +5615,9 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc, int sleep_flag,
is_fault = 1; is_fault = 1;
} }
_base_reset_handler(ioc, MPT3_IOC_PRE_RESET); _base_reset_handler(ioc, MPT3_IOC_PRE_RESET);
_wait_for_commands_to_complete(ioc, sleep_flag); _wait_for_commands_to_complete(ioc);
_base_mask_interrupts(ioc); _base_mask_interrupts(ioc);
r = _base_make_ioc_ready(ioc, sleep_flag, type); r = _base_make_ioc_ready(ioc, type);
if (r) if (r)
goto out; goto out;
_base_reset_handler(ioc, MPT3_IOC_AFTER_RESET); _base_reset_handler(ioc, MPT3_IOC_AFTER_RESET);
@ -5675,7 +5630,7 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc, int sleep_flag,
r = -EFAULT; r = -EFAULT;
goto out; goto out;
} }
r = _base_get_ioc_facts(ioc, CAN_SLEEP); r = _base_get_ioc_facts(ioc);
if (r) if (r)
goto out; goto out;
@ -5684,7 +5639,7 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc, int sleep_flag,
"Please reboot the system and ensure that the correct" "Please reboot the system and ensure that the correct"
" firmware version is running\n", ioc->name); " firmware version is running\n", ioc->name);
r = _base_make_ioc_operational(ioc, sleep_flag); r = _base_make_ioc_operational(ioc);
if (!r) if (!r)
_base_reset_handler(ioc, MPT3_IOC_DONE_RESET); _base_reset_handler(ioc, MPT3_IOC_DONE_RESET);

View File

@ -119,10 +119,6 @@
#define MPT_MAX_CALLBACKS 32 #define MPT_MAX_CALLBACKS 32
#define CAN_SLEEP 1
#define NO_SLEEP 0
#define INTERNAL_CMDS_COUNT 10 /* reserved cmds */ #define INTERNAL_CMDS_COUNT 10 /* reserved cmds */
/* reserved for issuing internally framed scsi io cmds */ /* reserved for issuing internally framed scsi io cmds */
#define INTERNAL_SCSIIO_CMDS_COUNT 3 #define INTERNAL_SCSIIO_CMDS_COUNT 3
@ -478,7 +474,7 @@ struct _sas_device {
u8 pfa_led_on; u8 pfa_led_on;
u8 pend_sas_rphy_add; u8 pend_sas_rphy_add;
u8 enclosure_level; u8 enclosure_level;
u8 connector_name[4]; u8 connector_name[5];
struct kref refcount; struct kref refcount;
}; };
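
The one-byte growth of connector_name is presumably for NUL termination: the MPI SAS Device Page 0 ConnectorName field is four characters with no terminator, and the driver logs it with "%s", so a fifth byte lets it be stored as a proper C string. A sketch of the idea (an assumption, not the driver's actual copy code):

static void example_store_connector_name(struct _sas_device *sas_device,
                const u8 raw_name[4])
{
        memcpy(sas_device->connector_name, raw_name, 4);
        sas_device->connector_name[4] = '\0';   /* needs the 5th byte */
}
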
@ -794,16 +790,6 @@ struct reply_post_struct {
dma_addr_t reply_post_free_dma; dma_addr_t reply_post_free_dma;
}; };
/**
* enum mutex_type - task management mutex type
* @TM_MUTEX_OFF: mutex is not required because the calling function is acquiring it
* @TM_MUTEX_ON: mutex is required
*/
enum mutex_type {
TM_MUTEX_OFF = 0,
TM_MUTEX_ON = 1,
};
typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc); typedef void (*MPT3SAS_FLUSH_RUNNING_CMDS)(struct MPT3SAS_ADAPTER *ioc);
/** /**
* struct MPT3SAS_ADAPTER - per adapter struct * struct MPT3SAS_ADAPTER - per adapter struct
@ -1229,7 +1215,7 @@ int mpt3sas_base_attach(struct MPT3SAS_ADAPTER *ioc);
void mpt3sas_base_detach(struct MPT3SAS_ADAPTER *ioc); void mpt3sas_base_detach(struct MPT3SAS_ADAPTER *ioc);
int mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc); int mpt3sas_base_map_resources(struct MPT3SAS_ADAPTER *ioc);
void mpt3sas_base_free_resources(struct MPT3SAS_ADAPTER *ioc); void mpt3sas_base_free_resources(struct MPT3SAS_ADAPTER *ioc);
int mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc, int sleep_flag, int mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc,
enum reset_type type); enum reset_type type);
void *mpt3sas_base_get_msg_frame(struct MPT3SAS_ADAPTER *ioc, u16 smid); void *mpt3sas_base_get_msg_frame(struct MPT3SAS_ADAPTER *ioc, u16 smid);
@ -1291,7 +1277,11 @@ void mpt3sas_scsih_reset_handler(struct MPT3SAS_ADAPTER *ioc, int reset_phase);
int mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, int mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle,
uint channel, uint id, uint lun, u8 type, u16 smid_task, uint channel, uint id, uint lun, u8 type, u16 smid_task,
ulong timeout, enum mutex_type m_type); ulong timeout);
int mpt3sas_scsih_issue_locked_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle,
uint channel, uint id, uint lun, u8 type, u16 smid_task,
ulong timeout);
void mpt3sas_scsih_set_tm_flag(struct MPT3SAS_ADAPTER *ioc, u16 handle); void mpt3sas_scsih_set_tm_flag(struct MPT3SAS_ADAPTER *ioc, u16 handle);
void mpt3sas_scsih_clear_tm_flag(struct MPT3SAS_ADAPTER *ioc, u16 handle); void mpt3sas_scsih_clear_tm_flag(struct MPT3SAS_ADAPTER *ioc, u16 handle);
void mpt3sas_expander_remove(struct MPT3SAS_ADAPTER *ioc, u64 sas_address); void mpt3sas_expander_remove(struct MPT3SAS_ADAPTER *ioc, u64 sas_address);

View File

@ -285,7 +285,6 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
{ {
u16 smid; u16 smid;
u32 ioc_state; u32 ioc_state;
unsigned long timeleft;
Mpi2ConfigRequest_t *config_request; Mpi2ConfigRequest_t *config_request;
int r; int r;
u8 retry_count, issue_host_reset = 0; u8 retry_count, issue_host_reset = 0;
@ -386,8 +385,7 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
_config_display_some_debug(ioc, smid, "config_request", NULL); _config_display_some_debug(ioc, smid, "config_request", NULL);
init_completion(&ioc->config_cmds.done); init_completion(&ioc->config_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->config_cmds.done, wait_for_completion_timeout(&ioc->config_cmds.done, timeout*HZ);
timeout*HZ);
if (!(ioc->config_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->config_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s: timeout\n", pr_err(MPT3SAS_FMT "%s: timeout\n",
ioc->name, __func__); ioc->name, __func__);
@ -491,8 +489,7 @@ _config_request(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
mutex_unlock(&ioc->config_cmds.mutex); mutex_unlock(&ioc->config_cmds.mutex);
if (issue_host_reset) if (issue_host_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
return r; return r;
} }

View File

@ -518,7 +518,7 @@ mpt3sas_ctl_reset_handler(struct MPT3SAS_ADAPTER *ioc, int reset_phase)
* *
* Called when application request fasyn callback handler. * Called when application request fasyn callback handler.
*/ */
int static int
_ctl_fasync(int fd, struct file *filep, int mode) _ctl_fasync(int fd, struct file *filep, int mode)
{ {
return fasync_helper(fd, filep, mode, &async_queue); return fasync_helper(fd, filep, mode, &async_queue);
@ -530,7 +530,7 @@ _ctl_fasync(int fd, struct file *filep, int mode)
* @wait - * @wait -
* *
*/ */
unsigned int static unsigned int
_ctl_poll(struct file *filep, poll_table *wait) _ctl_poll(struct file *filep, poll_table *wait)
{ {
struct MPT3SAS_ADAPTER *ioc; struct MPT3SAS_ADAPTER *ioc;
@ -641,9 +641,8 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
MPI2RequestHeader_t *mpi_request = NULL, *request; MPI2RequestHeader_t *mpi_request = NULL, *request;
MPI2DefaultReply_t *mpi_reply; MPI2DefaultReply_t *mpi_reply;
u32 ioc_state; u32 ioc_state;
u16 ioc_status;
u16 smid; u16 smid;
unsigned long timeout, timeleft; unsigned long timeout;
u8 issue_reset; u8 issue_reset;
u32 sz; u32 sz;
void *psge; void *psge;
@ -914,8 +913,7 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
timeout = MPT3_IOCTL_DEFAULT_TIMEOUT; timeout = MPT3_IOCTL_DEFAULT_TIMEOUT;
else else
timeout = karg.timeout; timeout = karg.timeout;
timeleft = wait_for_completion_timeout(&ioc->ctl_cmds.done, wait_for_completion_timeout(&ioc->ctl_cmds.done, timeout*HZ);
timeout*HZ);
if (mpi_request->Function == MPI2_FUNCTION_SCSI_TASK_MGMT) { if (mpi_request->Function == MPI2_FUNCTION_SCSI_TASK_MGMT) {
Mpi2SCSITaskManagementRequest_t *tm_request = Mpi2SCSITaskManagementRequest_t *tm_request =
(Mpi2SCSITaskManagementRequest_t *)mpi_request; (Mpi2SCSITaskManagementRequest_t *)mpi_request;
@ -938,7 +936,6 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
} }
mpi_reply = ioc->ctl_cmds.reply; mpi_reply = ioc->ctl_cmds.reply;
ioc_status = le16_to_cpu(mpi_reply->IOCStatus) & MPI2_IOCSTATUS_MASK;
if (mpi_reply->Function == MPI2_FUNCTION_SCSI_TASK_MGMT && if (mpi_reply->Function == MPI2_FUNCTION_SCSI_TASK_MGMT &&
(ioc->logging_level & MPT_DEBUG_TM)) { (ioc->logging_level & MPT_DEBUG_TM)) {
@ -1001,13 +998,11 @@ _ctl_do_mpt_command(struct MPT3SAS_ADAPTER *ioc, struct mpt3_ioctl_command karg,
ioc->name, ioc->name,
le16_to_cpu(mpi_request->FunctionDependent1)); le16_to_cpu(mpi_request->FunctionDependent1));
mpt3sas_halt_firmware(ioc); mpt3sas_halt_firmware(ioc);
mpt3sas_scsih_issue_tm(ioc, mpt3sas_scsih_issue_locked_tm(ioc,
le16_to_cpu(mpi_request->FunctionDependent1), 0, 0, le16_to_cpu(mpi_request->FunctionDependent1), 0, 0,
0, MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 30, 0, MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 30);
TM_MUTEX_ON);
} else } else
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
} }
out: out:
@ -1220,8 +1215,7 @@ _ctl_do_reset(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
dctlprintk(ioc, pr_info(MPT3SAS_FMT "%s: enter\n", ioc->name, dctlprintk(ioc, pr_info(MPT3SAS_FMT "%s: enter\n", ioc->name,
__func__)); __func__));
retval = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, retval = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
pr_info(MPT3SAS_FMT "host reset: %s\n", pr_info(MPT3SAS_FMT "host reset: %s\n",
ioc->name, ((!retval) ? "SUCCESS" : "FAILED")); ioc->name, ((!retval) ? "SUCCESS" : "FAILED"));
return 0; return 0;
@ -1381,7 +1375,6 @@ _ctl_diag_register_2(struct MPT3SAS_ADAPTER *ioc,
Mpi2DiagBufferPostRequest_t *mpi_request; Mpi2DiagBufferPostRequest_t *mpi_request;
Mpi2DiagBufferPostReply_t *mpi_reply; Mpi2DiagBufferPostReply_t *mpi_reply;
u8 buffer_type; u8 buffer_type;
unsigned long timeleft;
u16 smid; u16 smid;
u16 ioc_status; u16 ioc_status;
u32 ioc_state; u32 ioc_state;
@ -1499,7 +1492,7 @@ _ctl_diag_register_2(struct MPT3SAS_ADAPTER *ioc,
init_completion(&ioc->ctl_cmds.done); init_completion(&ioc->ctl_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->ctl_cmds.done, wait_for_completion_timeout(&ioc->ctl_cmds.done,
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ); MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) {
@ -1538,8 +1531,7 @@ _ctl_diag_register_2(struct MPT3SAS_ADAPTER *ioc,
issue_host_reset: issue_host_reset:
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
out: out:
@ -1800,7 +1792,6 @@ mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
u16 ioc_status; u16 ioc_status;
u32 ioc_state; u32 ioc_state;
int rc; int rc;
unsigned long timeleft;
dctlprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name, dctlprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name,
__func__)); __func__));
@ -1848,7 +1839,7 @@ mpt3sas_send_diag_release(struct MPT3SAS_ADAPTER *ioc, u8 buffer_type,
init_completion(&ioc->ctl_cmds.done); init_completion(&ioc->ctl_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->ctl_cmds.done, wait_for_completion_timeout(&ioc->ctl_cmds.done,
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ); MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) {
@ -1974,8 +1965,7 @@ _ctl_diag_release(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
rc = mpt3sas_send_diag_release(ioc, buffer_type, &issue_reset); rc = mpt3sas_send_diag_release(ioc, buffer_type, &issue_reset);
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
return rc; return rc;
} }
@ -1995,7 +1985,7 @@ _ctl_diag_read_buffer(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
Mpi2DiagBufferPostReply_t *mpi_reply; Mpi2DiagBufferPostReply_t *mpi_reply;
int rc, i; int rc, i;
u8 buffer_type; u8 buffer_type;
unsigned long timeleft, request_size, copy_size; unsigned long request_size, copy_size;
u16 smid; u16 smid;
u16 ioc_status; u16 ioc_status;
u8 issue_reset = 0; u8 issue_reset = 0;
@ -2116,7 +2106,7 @@ _ctl_diag_read_buffer(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
init_completion(&ioc->ctl_cmds.done); init_completion(&ioc->ctl_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->ctl_cmds.done, wait_for_completion_timeout(&ioc->ctl_cmds.done,
MPT3_IOCTL_DEFAULT_TIMEOUT*HZ); MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->ctl_cmds.status & MPT3_CMD_COMPLETE)) {
@ -2155,8 +2145,7 @@ _ctl_diag_read_buffer(struct MPT3SAS_ADAPTER *ioc, void __user *arg)
issue_host_reset: issue_host_reset:
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
out: out:
@ -2352,7 +2341,7 @@ out_unlock_pciaccess:
* @cmd - ioctl opcode * @cmd - ioctl opcode
* @arg - * @arg -
*/ */
long static long
_ctl_ioctl(struct file *file, unsigned int cmd, unsigned long arg) _ctl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{ {
long ret; long ret;
@ -2372,7 +2361,7 @@ _ctl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
* @cmd - ioctl opcode * @cmd - ioctl opcode
* @arg - * @arg -
*/ */
long static long
_ctl_mpt2_ioctl(struct file *file, unsigned int cmd, unsigned long arg) _ctl_mpt2_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{ {
long ret; long ret;
@ -2392,7 +2381,7 @@ _ctl_mpt2_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
* *
* This routine handles 32 bit applications in 64bit os. * This routine handles 32 bit applications in 64bit os.
*/ */
long static long
_ctl_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg) _ctl_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
{ {
long ret; long ret;
@ -2410,7 +2399,7 @@ _ctl_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
* *
* This routine handles 32 bit applications in 64bit os. * This routine handles 32 bit applications in 64bit os.
*/ */
long static long
_ctl_mpt2_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg) _ctl_mpt2_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
{ {
long ret; long ret;

View File

@ -1195,7 +1195,7 @@ _scsih_scsi_lookup_find_by_lun(struct MPT3SAS_ADAPTER *ioc, int id,
* *
* Returns queue depth. * Returns queue depth.
*/ */
int static int
scsih_change_queue_depth(struct scsi_device *sdev, int qdepth) scsih_change_queue_depth(struct scsi_device *sdev, int qdepth)
{ {
struct Scsi_Host *shost = sdev->host; struct Scsi_Host *shost = sdev->host;
@ -1244,7 +1244,7 @@ scsih_change_queue_depth(struct scsi_device *sdev, int qdepth)
* Returns 0 if ok. Any other return is assumed to be an error and * Returns 0 if ok. Any other return is assumed to be an error and
* the device is ignored. * the device is ignored.
*/ */
int static int
scsih_target_alloc(struct scsi_target *starget) scsih_target_alloc(struct scsi_target *starget)
{ {
struct Scsi_Host *shost = dev_to_shost(&starget->dev); struct Scsi_Host *shost = dev_to_shost(&starget->dev);
@ -1311,7 +1311,7 @@ scsih_target_alloc(struct scsi_target *starget)
* *
* Returns nothing. * Returns nothing.
*/ */
void static void
scsih_target_destroy(struct scsi_target *starget) scsih_target_destroy(struct scsi_target *starget)
{ {
struct Scsi_Host *shost = dev_to_shost(&starget->dev); struct Scsi_Host *shost = dev_to_shost(&starget->dev);
@ -1320,7 +1320,6 @@ scsih_target_destroy(struct scsi_target *starget)
struct _sas_device *sas_device; struct _sas_device *sas_device;
struct _raid_device *raid_device; struct _raid_device *raid_device;
unsigned long flags; unsigned long flags;
struct sas_rphy *rphy;
sas_target_priv_data = starget->hostdata; sas_target_priv_data = starget->hostdata;
if (!sas_target_priv_data) if (!sas_target_priv_data)
@ -1339,7 +1338,6 @@ scsih_target_destroy(struct scsi_target *starget)
} }
spin_lock_irqsave(&ioc->sas_device_lock, flags); spin_lock_irqsave(&ioc->sas_device_lock, flags);
rphy = dev_to_rphy(starget->dev.parent);
sas_device = __mpt3sas_get_sdev_from_target(ioc, sas_target_priv_data); sas_device = __mpt3sas_get_sdev_from_target(ioc, sas_target_priv_data);
if (sas_device && (sas_device->starget == starget) && if (sas_device && (sas_device->starget == starget) &&
(sas_device->id == starget->id) && (sas_device->id == starget->id) &&
@ -1369,7 +1367,7 @@ scsih_target_destroy(struct scsi_target *starget)
* Returns 0 if ok. Any other return is assumed to be an error and * Returns 0 if ok. Any other return is assumed to be an error and
* the device is ignored. * the device is ignored.
*/ */
int static int
scsih_slave_alloc(struct scsi_device *sdev) scsih_slave_alloc(struct scsi_device *sdev)
{ {
struct Scsi_Host *shost; struct Scsi_Host *shost;
@ -1434,7 +1432,7 @@ scsih_slave_alloc(struct scsi_device *sdev)
* *
* Returns nothing. * Returns nothing.
*/ */
void static void
scsih_slave_destroy(struct scsi_device *sdev) scsih_slave_destroy(struct scsi_device *sdev)
{ {
struct MPT3SAS_TARGET *sas_target_priv_data; struct MPT3SAS_TARGET *sas_target_priv_data;
@ -1527,7 +1525,7 @@ _scsih_display_sata_capabilities(struct MPT3SAS_ADAPTER *ioc,
* scsih_is_raid - return boolean indicating device is raid volume * scsih_is_raid - return boolean indicating device is raid volume
* @dev the device struct object * @dev the device struct object
*/ */
int static int
scsih_is_raid(struct device *dev) scsih_is_raid(struct device *dev)
{ {
struct scsi_device *sdev = to_scsi_device(dev); struct scsi_device *sdev = to_scsi_device(dev);
@ -1542,7 +1540,7 @@ scsih_is_raid(struct device *dev)
* scsih_get_resync - get raid volume resync percent complete * scsih_get_resync - get raid volume resync percent complete
* @dev the device struct object * @dev the device struct object
*/ */
void static void
scsih_get_resync(struct device *dev) scsih_get_resync(struct device *dev)
{ {
struct scsi_device *sdev = to_scsi_device(dev); struct scsi_device *sdev = to_scsi_device(dev);
@ -1603,7 +1601,7 @@ scsih_get_resync(struct device *dev)
* scsih_get_state - get raid volume level * scsih_get_state - get raid volume level
* @dev the device struct object * @dev the device struct object
*/ */
void static void
scsih_get_state(struct device *dev) scsih_get_state(struct device *dev)
{ {
struct scsi_device *sdev = to_scsi_device(dev); struct scsi_device *sdev = to_scsi_device(dev);
@ -1805,7 +1803,7 @@ _scsih_enable_tlr(struct MPT3SAS_ADAPTER *ioc, struct scsi_device *sdev)
* Returns 0 if ok. Any other return is assumed to be an error and * Returns 0 if ok. Any other return is assumed to be an error and
* the device is ignored. * the device is ignored.
*/ */
int static int
scsih_slave_configure(struct scsi_device *sdev) scsih_slave_configure(struct scsi_device *sdev)
{ {
struct Scsi_Host *shost = sdev->host; struct Scsi_Host *shost = sdev->host;
@ -2021,7 +2019,7 @@ scsih_slave_configure(struct scsi_device *sdev)
* *
* Return nothing. * Return nothing.
*/ */
int static int
scsih_bios_param(struct scsi_device *sdev, struct block_device *bdev, scsih_bios_param(struct scsi_device *sdev, struct block_device *bdev,
sector_t capacity, int params[]) sector_t capacity, int params[])
{ {
@ -2201,7 +2199,6 @@ mpt3sas_scsih_clear_tm_flag(struct MPT3SAS_ADAPTER *ioc, u16 handle)
* @type: MPI2_SCSITASKMGMT_TASKTYPE__XXX (defined in mpi2_init.h) * @type: MPI2_SCSITASKMGMT_TASKTYPE__XXX (defined in mpi2_init.h)
* @smid_task: smid assigned to the task * @smid_task: smid assigned to the task
* @timeout: timeout in seconds * @timeout: timeout in seconds
* @m_type: TM_MUTEX_ON or TM_MUTEX_OFF
* Context: user * Context: user
* *
* A generic API for sending task management requests to firmware. * A generic API for sending task management requests to firmware.
@ -2212,60 +2209,51 @@ mpt3sas_scsih_clear_tm_flag(struct MPT3SAS_ADAPTER *ioc, u16 handle)
*/ */
int int
mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, uint channel, mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, uint channel,
uint id, uint lun, u8 type, u16 smid_task, ulong timeout, uint id, uint lun, u8 type, u16 smid_task, ulong timeout)
enum mutex_type m_type)
{ {
Mpi2SCSITaskManagementRequest_t *mpi_request; Mpi2SCSITaskManagementRequest_t *mpi_request;
Mpi2SCSITaskManagementReply_t *mpi_reply; Mpi2SCSITaskManagementReply_t *mpi_reply;
u16 smid = 0; u16 smid = 0;
u32 ioc_state; u32 ioc_state;
unsigned long timeleft;
struct scsiio_tracker *scsi_lookup = NULL; struct scsiio_tracker *scsi_lookup = NULL;
int rc; int rc;
u16 msix_task = 0; u16 msix_task = 0;
if (m_type == TM_MUTEX_ON) lockdep_assert_held(&ioc->tm_cmds.mutex);
mutex_lock(&ioc->tm_cmds.mutex);
if (ioc->tm_cmds.status != MPT3_CMD_NOT_USED) { if (ioc->tm_cmds.status != MPT3_CMD_NOT_USED) {
pr_info(MPT3SAS_FMT "%s: tm_cmd busy!!!\n", pr_info(MPT3SAS_FMT "%s: tm_cmd busy!!!\n",
__func__, ioc->name); __func__, ioc->name);
rc = FAILED; return FAILED;
goto err_out;
} }
if (ioc->shost_recovery || ioc->remove_host || if (ioc->shost_recovery || ioc->remove_host ||
ioc->pci_error_recovery) { ioc->pci_error_recovery) {
pr_info(MPT3SAS_FMT "%s: host reset in progress!\n", pr_info(MPT3SAS_FMT "%s: host reset in progress!\n",
__func__, ioc->name); __func__, ioc->name);
rc = FAILED; return FAILED;
goto err_out;
} }
ioc_state = mpt3sas_base_get_iocstate(ioc, 0); ioc_state = mpt3sas_base_get_iocstate(ioc, 0);
if (ioc_state & MPI2_DOORBELL_USED) { if (ioc_state & MPI2_DOORBELL_USED) {
dhsprintk(ioc, pr_info(MPT3SAS_FMT dhsprintk(ioc, pr_info(MPT3SAS_FMT
"unexpected doorbell active!\n", ioc->name)); "unexpected doorbell active!\n", ioc->name));
rc = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, rc = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER); return (!rc) ? SUCCESS : FAILED;
rc = (!rc) ? SUCCESS : FAILED;
goto err_out;
} }
if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) { if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
mpt3sas_base_fault_info(ioc, ioc_state & mpt3sas_base_fault_info(ioc, ioc_state &
MPI2_DOORBELL_DATA_MASK); MPI2_DOORBELL_DATA_MASK);
rc = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, rc = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER); return (!rc) ? SUCCESS : FAILED;
rc = (!rc) ? SUCCESS : FAILED;
goto err_out;
} }
smid = mpt3sas_base_get_smid_hpr(ioc, ioc->tm_cb_idx); smid = mpt3sas_base_get_smid_hpr(ioc, ioc->tm_cb_idx);
if (!smid) { if (!smid) {
pr_err(MPT3SAS_FMT "%s: failed obtaining a smid\n", pr_err(MPT3SAS_FMT "%s: failed obtaining a smid\n",
ioc->name, __func__); ioc->name, __func__);
rc = FAILED; return FAILED;
goto err_out;
} }
if (type == MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK) if (type == MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK)
@ -2292,19 +2280,17 @@ mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, uint channel,
else else
msix_task = 0; msix_task = 0;
mpt3sas_base_put_smid_hi_priority(ioc, smid, msix_task); mpt3sas_base_put_smid_hi_priority(ioc, smid, msix_task);
timeleft = wait_for_completion_timeout(&ioc->tm_cmds.done, timeout*HZ); wait_for_completion_timeout(&ioc->tm_cmds.done, timeout*HZ);
if (!(ioc->tm_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->tm_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s: timeout\n", pr_err(MPT3SAS_FMT "%s: timeout\n",
ioc->name, __func__); ioc->name, __func__);
_debug_dump_mf(mpi_request, _debug_dump_mf(mpi_request,
sizeof(Mpi2SCSITaskManagementRequest_t)/4); sizeof(Mpi2SCSITaskManagementRequest_t)/4);
if (!(ioc->tm_cmds.status & MPT3_CMD_RESET)) { if (!(ioc->tm_cmds.status & MPT3_CMD_RESET)) {
rc = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, rc = mpt3sas_base_hard_reset_handler(ioc,
FORCE_BIG_HAMMER); FORCE_BIG_HAMMER);
rc = (!rc) ? SUCCESS : FAILED; rc = (!rc) ? SUCCESS : FAILED;
ioc->tm_cmds.status = MPT3_CMD_NOT_USED; goto out;
mpt3sas_scsih_clear_tm_flag(ioc, handle);
goto err_out;
} }
} }
@ -2356,17 +2342,23 @@ mpt3sas_scsih_issue_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle, uint channel,
break; break;
} }
out:
mpt3sas_scsih_clear_tm_flag(ioc, handle); mpt3sas_scsih_clear_tm_flag(ioc, handle);
ioc->tm_cmds.status = MPT3_CMD_NOT_USED; ioc->tm_cmds.status = MPT3_CMD_NOT_USED;
if (m_type == TM_MUTEX_ON)
mutex_unlock(&ioc->tm_cmds.mutex);
return rc; return rc;
}
err_out: int mpt3sas_scsih_issue_locked_tm(struct MPT3SAS_ADAPTER *ioc, u16 handle,
if (m_type == TM_MUTEX_ON) uint channel, uint id, uint lun, u8 type, u16 smid_task, ulong timeout)
mutex_unlock(&ioc->tm_cmds.mutex); {
return rc; int ret;
mutex_lock(&ioc->tm_cmds.mutex);
ret = mpt3sas_scsih_issue_tm(ioc, handle, channel, id, lun, type,
smid_task, timeout);
mutex_unlock(&ioc->tm_cmds.mutex);
return ret;
} }
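
The split gives two entry points with an explicit lock discipline: mpt3sas_scsih_issue_tm() now asserts, via lockdep_assert_held(), that the caller already owns ioc->tm_cmds.mutex, while mpt3sas_scsih_issue_locked_tm() takes and drops the mutex itself, which is what the error-handling paths below and the ioctl path earlier in the patch use. A usage sketch, paraphrasing the call sites in this patch:

        /* typical EH path: the wrapper handles ioc->tm_cmds.mutex */
        r = mpt3sas_scsih_issue_locked_tm(ioc, handle, scmd->device->channel,
                        scmd->device->id, scmd->device->lun,
                        MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid, 30);

        /* a caller that already holds the mutex uses the bare function */
        mutex_lock(&ioc->tm_cmds.mutex);
        r = mpt3sas_scsih_issue_tm(ioc, handle, channel, id, lun,
                        MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET, 0, 30);
        mutex_unlock(&ioc->tm_cmds.mutex);
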
/** /**
@ -2439,7 +2431,7 @@ _scsih_tm_display_info(struct MPT3SAS_ADAPTER *ioc, struct scsi_cmnd *scmd)
* *
* Returns SUCCESS if command aborted else FAILED * Returns SUCCESS if command aborted else FAILED
*/ */
int static int
scsih_abort(struct scsi_cmnd *scmd) scsih_abort(struct scsi_cmnd *scmd)
{ {
struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host); struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
@ -2482,9 +2474,9 @@ scsih_abort(struct scsi_cmnd *scmd)
mpt3sas_halt_firmware(ioc); mpt3sas_halt_firmware(ioc);
handle = sas_device_priv_data->sas_target->handle; handle = sas_device_priv_data->sas_target->handle;
r = mpt3sas_scsih_issue_tm(ioc, handle, scmd->device->channel, r = mpt3sas_scsih_issue_locked_tm(ioc, handle, scmd->device->channel,
scmd->device->id, scmd->device->lun, scmd->device->id, scmd->device->lun,
MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid, 30, TM_MUTEX_ON); MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid, 30);
out: out:
sdev_printk(KERN_INFO, scmd->device, "task abort: %s scmd(%p)\n", sdev_printk(KERN_INFO, scmd->device, "task abort: %s scmd(%p)\n",
@ -2498,7 +2490,7 @@ scsih_abort(struct scsi_cmnd *scmd)
* *
* Returns SUCCESS if command aborted else FAILED * Returns SUCCESS if command aborted else FAILED
*/ */
int static int
scsih_dev_reset(struct scsi_cmnd *scmd) scsih_dev_reset(struct scsi_cmnd *scmd)
{ {
struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host); struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
@ -2541,9 +2533,9 @@ scsih_dev_reset(struct scsi_cmnd *scmd)
goto out; goto out;
} }
r = mpt3sas_scsih_issue_tm(ioc, handle, scmd->device->channel, r = mpt3sas_scsih_issue_locked_tm(ioc, handle, scmd->device->channel,
scmd->device->id, scmd->device->lun, scmd->device->id, scmd->device->lun,
MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET, 0, 30, TM_MUTEX_ON); MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET, 0, 30);
out: out:
sdev_printk(KERN_INFO, scmd->device, "device reset: %s scmd(%p)\n", sdev_printk(KERN_INFO, scmd->device, "device reset: %s scmd(%p)\n",
@ -2561,7 +2553,7 @@ scsih_dev_reset(struct scsi_cmnd *scmd)
* *
* Returns SUCCESS if command aborted else FAILED * Returns SUCCESS if command aborted else FAILED
*/ */
int static int
scsih_target_reset(struct scsi_cmnd *scmd) scsih_target_reset(struct scsi_cmnd *scmd)
{ {
struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host); struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
@ -2603,9 +2595,9 @@ scsih_target_reset(struct scsi_cmnd *scmd)
goto out; goto out;
} }
r = mpt3sas_scsih_issue_tm(ioc, handle, scmd->device->channel, r = mpt3sas_scsih_issue_locked_tm(ioc, handle, scmd->device->channel,
scmd->device->id, 0, MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, scmd->device->id, 0, MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0,
30, TM_MUTEX_ON); 30);
out: out:
starget_printk(KERN_INFO, starget, "target reset: %s scmd(%p)\n", starget_printk(KERN_INFO, starget, "target reset: %s scmd(%p)\n",
@ -2624,7 +2616,7 @@ scsih_target_reset(struct scsi_cmnd *scmd)
* *
* Returns SUCCESS if command aborted else FAILED * Returns SUCCESS if command aborted else FAILED
*/ */
int static int
scsih_host_reset(struct scsi_cmnd *scmd) scsih_host_reset(struct scsi_cmnd *scmd)
{ {
struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host); struct MPT3SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
@ -2641,8 +2633,7 @@ scsih_host_reset(struct scsi_cmnd *scmd)
goto out; goto out;
} }
retval = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, retval = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
r = (retval < 0) ? FAILED : SUCCESS; r = (retval < 0) ? FAILED : SUCCESS;
out: out:
pr_info(MPT3SAS_FMT "host reset: %s scmd(%p)\n", pr_info(MPT3SAS_FMT "host reset: %s scmd(%p)\n",
@ -3455,7 +3446,7 @@ _scsih_tm_volume_tr_complete(struct MPT3SAS_ADAPTER *ioc, u16 smid,
* *
* Context - processed in interrupt context. * Context - processed in interrupt context.
*/ */
void static void
_scsih_issue_delayed_event_ack(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 event, _scsih_issue_delayed_event_ack(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 event,
u32 event_context) u32 event_context)
{ {
@ -3494,7 +3485,7 @@ _scsih_issue_delayed_event_ack(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 event,
* *
* Context - processed in interrupt context. * Context - processed in interrupt context.
*/ */
void static void
_scsih_issue_delayed_sas_io_unit_ctrl(struct MPT3SAS_ADAPTER *ioc, _scsih_issue_delayed_sas_io_unit_ctrl(struct MPT3SAS_ADAPTER *ioc,
u16 smid, u16 handle) u16 smid, u16 handle)
{ {
@ -4032,7 +4023,7 @@ _scsih_eedp_error_handling(struct scsi_cmnd *scmd, u16 ioc_status)
* SCSI_MLQUEUE_DEVICE_BUSY if the device queue is full, or * SCSI_MLQUEUE_DEVICE_BUSY if the device queue is full, or
* SCSI_MLQUEUE_HOST_BUSY if the entire host queue is full * SCSI_MLQUEUE_HOST_BUSY if the entire host queue is full
*/ */
int static int
scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd) scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
{ {
struct MPT3SAS_ADAPTER *ioc = shost_priv(shost); struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
@ -4701,7 +4692,7 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
le16_to_cpu(mpi_reply->DevHandle)); le16_to_cpu(mpi_reply->DevHandle));
mpt3sas_trigger_scsi(ioc, data.skey, data.asc, data.ascq); mpt3sas_trigger_scsi(ioc, data.skey, data.asc, data.ascq);
if (!(ioc->logging_level & MPT_DEBUG_REPLY) && if ((ioc->logging_level & MPT_DEBUG_REPLY) &&
((scmd->sense_buffer[2] == UNIT_ATTENTION) || ((scmd->sense_buffer[2] == UNIT_ATTENTION) ||
(scmd->sense_buffer[2] == MEDIUM_ERROR) || (scmd->sense_buffer[2] == MEDIUM_ERROR) ||
(scmd->sense_buffer[2] == HARDWARE_ERROR))) (scmd->sense_buffer[2] == HARDWARE_ERROR)))
@ -5380,8 +5371,9 @@ _scsih_check_device(struct MPT3SAS_ADAPTER *ioc,
MPI2_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) { MPI2_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) {
sas_device->enclosure_level = sas_device->enclosure_level =
le16_to_cpu(sas_device_pg0.EnclosureLevel); le16_to_cpu(sas_device_pg0.EnclosureLevel);
memcpy(&sas_device->connector_name[0], memcpy(sas_device->connector_name,
&sas_device_pg0.ConnectorName[0], 4); sas_device_pg0.ConnectorName, 4);
sas_device->connector_name[4] = '\0';
} else { } else {
sas_device->enclosure_level = 0; sas_device->enclosure_level = 0;
sas_device->connector_name[0] = '\0'; sas_device->connector_name[0] = '\0';
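This hunk and the similar one just below make the same fix: ConnectorName is a fixed four-byte firmware field with no terminator, so the driver's copy is now explicitly NUL-terminated before the name can be printed. A minimal sketch of the pattern, with illustrative buffer names and sizes rather than the driver's actual structures:

        #include <string.h>

        /* Copy a fixed-width, non-terminated firmware field and terminate it
         * explicitly so it is safe to treat as a C string (e.g. print with %s). */
        static void copy_connector_name(char dst[5], const unsigned char src[4])
        {
                memcpy(dst, src, 4);    /* firmware supplies exactly 4 bytes, no NUL */
                dst[4] = '\0';          /* terminator lives in the fifth byte */
        }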
@ -5508,8 +5500,9 @@ _scsih_add_device(struct MPT3SAS_ADAPTER *ioc, u16 handle, u8 phy_num,
if (sas_device_pg0.Flags & MPI2_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) { if (sas_device_pg0.Flags & MPI2_SAS_DEVICE0_FLAGS_ENCL_LEVEL_VALID) {
sas_device->enclosure_level = sas_device->enclosure_level =
le16_to_cpu(sas_device_pg0.EnclosureLevel); le16_to_cpu(sas_device_pg0.EnclosureLevel);
memcpy(&sas_device->connector_name[0], memcpy(sas_device->connector_name,
&sas_device_pg0.ConnectorName[0], 4); sas_device_pg0.ConnectorName, 4);
sas_device->connector_name[4] = '\0';
} else { } else {
sas_device->enclosure_level = 0; sas_device->enclosure_level = 0;
sas_device->connector_name[0] = '\0'; sas_device->connector_name[0] = '\0';
@ -6087,8 +6080,7 @@ _scsih_sas_broadcast_primitive_event(struct MPT3SAS_ADAPTER *ioc,
spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags); spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
r = mpt3sas_scsih_issue_tm(ioc, handle, 0, 0, lun, r = mpt3sas_scsih_issue_tm(ioc, handle, 0, 0, lun,
MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK, smid, 30, MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK, smid, 30);
TM_MUTEX_OFF);
if (r == FAILED) { if (r == FAILED) {
sdev_printk(KERN_WARNING, sdev, sdev_printk(KERN_WARNING, sdev,
"mpt3sas_scsih_issue_tm: FAILED when sending " "mpt3sas_scsih_issue_tm: FAILED when sending "
@ -6128,8 +6120,8 @@ _scsih_sas_broadcast_primitive_event(struct MPT3SAS_ADAPTER *ioc,
goto out_no_lock; goto out_no_lock;
r = mpt3sas_scsih_issue_tm(ioc, handle, sdev->channel, sdev->id, r = mpt3sas_scsih_issue_tm(ioc, handle, sdev->channel, sdev->id,
sdev->lun, MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid, 30, sdev->lun, MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid,
TM_MUTEX_OFF); 30);
if (r == FAILED) { if (r == FAILED) {
sdev_printk(KERN_WARNING, sdev, sdev_printk(KERN_WARNING, sdev,
"mpt3sas_scsih_issue_tm: ABORT_TASK: FAILED : " "mpt3sas_scsih_issue_tm: ABORT_TASK: FAILED : "
@ -6297,8 +6289,7 @@ _scsih_ir_fastpath(struct MPT3SAS_ADAPTER *ioc, u16 handle, u8 phys_disk_num)
mutex_unlock(&ioc->scsih_cmds.mutex); mutex_unlock(&ioc->scsih_cmds.mutex);
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
return rc; return rc;
} }
@ -6311,11 +6302,10 @@ _scsih_ir_fastpath(struct MPT3SAS_ADAPTER *ioc, u16 handle, u8 phys_disk_num)
static void static void
_scsih_reprobe_lun(struct scsi_device *sdev, void *no_uld_attach) _scsih_reprobe_lun(struct scsi_device *sdev, void *no_uld_attach)
{ {
int rc;
sdev->no_uld_attach = no_uld_attach ? 1 : 0; sdev->no_uld_attach = no_uld_attach ? 1 : 0;
sdev_printk(KERN_INFO, sdev, "%s raid component\n", sdev_printk(KERN_INFO, sdev, "%s raid component\n",
sdev->no_uld_attach ? "hidding" : "exposing"); sdev->no_uld_attach ? "hidding" : "exposing");
rc = scsi_device_reprobe(sdev); WARN_ON(scsi_device_reprobe(sdev));
} }
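The hunk above swaps a return value that was stored and then ignored for a WARN_ON() around the same call. A one-line sketch of the idiom, using the call shown in the diff:

        /* Consume scsi_device_reprobe()'s result instead of parking it in an
         * unused local: a failure now logs a warning with a backtrace rather
         * than disappearing silently. */
        WARN_ON(scsi_device_reprobe(sdev));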
/** /**
@ -8137,7 +8127,7 @@ _scsih_ir_shutdown(struct MPT3SAS_ADAPTER *ioc)
* Routine called when unloading the driver. * Routine called when unloading the driver.
* Return nothing. * Return nothing.
*/ */
void scsih_remove(struct pci_dev *pdev) static void scsih_remove(struct pci_dev *pdev)
{ {
struct Scsi_Host *shost = pci_get_drvdata(pdev); struct Scsi_Host *shost = pci_get_drvdata(pdev);
struct MPT3SAS_ADAPTER *ioc = shost_priv(shost); struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
@ -8210,7 +8200,7 @@ void scsih_remove(struct pci_dev *pdev)
* *
* Return nothing. * Return nothing.
*/ */
void static void
scsih_shutdown(struct pci_dev *pdev) scsih_shutdown(struct pci_dev *pdev)
{ {
struct Scsi_Host *shost = pci_get_drvdata(pdev); struct Scsi_Host *shost = pci_get_drvdata(pdev);
@ -8451,7 +8441,7 @@ _scsih_probe_devices(struct MPT3SAS_ADAPTER *ioc)
* of scanning the entire bus. In our implemention, we will kick off * of scanning the entire bus. In our implemention, we will kick off
* firmware discovery. * firmware discovery.
*/ */
void static void
scsih_scan_start(struct Scsi_Host *shost) scsih_scan_start(struct Scsi_Host *shost)
{ {
struct MPT3SAS_ADAPTER *ioc = shost_priv(shost); struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
@ -8478,7 +8468,7 @@ scsih_scan_start(struct Scsi_Host *shost)
* scsi_host and the elapsed time of the scan in jiffies. In our implemention, * scsi_host and the elapsed time of the scan in jiffies. In our implemention,
* we wait for firmware discovery to complete, then return 1. * we wait for firmware discovery to complete, then return 1.
*/ */
int static int
scsih_scan_finished(struct Scsi_Host *shost, unsigned long time) scsih_scan_finished(struct Scsi_Host *shost, unsigned long time)
{ {
struct MPT3SAS_ADAPTER *ioc = shost_priv(shost); struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
@ -8608,7 +8598,7 @@ static struct raid_function_template mpt3sas_raid_functions = {
* MPI25_VERSION for SAS 3.0 HBA devices, and * MPI25_VERSION for SAS 3.0 HBA devices, and
* MPI26 VERSION for Cutlass & Invader SAS 3.0 HBA devices * MPI26 VERSION for Cutlass & Invader SAS 3.0 HBA devices
*/ */
u16 static u16
_scsih_determine_hba_mpi_version(struct pci_dev *pdev) _scsih_determine_hba_mpi_version(struct pci_dev *pdev)
{ {
@ -8660,7 +8650,7 @@ _scsih_determine_hba_mpi_version(struct pci_dev *pdev)
* *
* Returns 0 success, anything else error. * Returns 0 success, anything else error.
*/ */
int static int
_scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id) _scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{ {
struct MPT3SAS_ADAPTER *ioc; struct MPT3SAS_ADAPTER *ioc;
@ -8869,7 +8859,7 @@ out_add_shost_fail:
* *
* Returns 0 success, anything else error. * Returns 0 success, anything else error.
*/ */
int static int
scsih_suspend(struct pci_dev *pdev, pm_message_t state) scsih_suspend(struct pci_dev *pdev, pm_message_t state)
{ {
struct Scsi_Host *shost = pci_get_drvdata(pdev); struct Scsi_Host *shost = pci_get_drvdata(pdev);
@ -8896,7 +8886,7 @@ scsih_suspend(struct pci_dev *pdev, pm_message_t state)
* *
* Returns 0 success, anything else error. * Returns 0 success, anything else error.
*/ */
int static int
scsih_resume(struct pci_dev *pdev) scsih_resume(struct pci_dev *pdev)
{ {
struct Scsi_Host *shost = pci_get_drvdata(pdev); struct Scsi_Host *shost = pci_get_drvdata(pdev);
@ -8916,7 +8906,7 @@ scsih_resume(struct pci_dev *pdev)
if (r) if (r)
return r; return r;
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, SOFT_RESET); mpt3sas_base_hard_reset_handler(ioc, SOFT_RESET);
scsi_unblock_requests(shost); scsi_unblock_requests(shost);
mpt3sas_base_start_watchdog(ioc); mpt3sas_base_start_watchdog(ioc);
return 0; return 0;
@ -8933,7 +8923,7 @@ scsih_resume(struct pci_dev *pdev)
* Return value: * Return value:
* PCI_ERS_RESULT_NEED_RESET or PCI_ERS_RESULT_DISCONNECT * PCI_ERS_RESULT_NEED_RESET or PCI_ERS_RESULT_DISCONNECT
*/ */
pci_ers_result_t static pci_ers_result_t
scsih_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state) scsih_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
{ {
struct Scsi_Host *shost = pci_get_drvdata(pdev); struct Scsi_Host *shost = pci_get_drvdata(pdev);
@ -8970,7 +8960,7 @@ scsih_pci_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
* code after the PCI slot has been reset, just before we * code after the PCI slot has been reset, just before we
* should resume normal operations. * should resume normal operations.
*/ */
pci_ers_result_t static pci_ers_result_t
scsih_pci_slot_reset(struct pci_dev *pdev) scsih_pci_slot_reset(struct pci_dev *pdev)
{ {
struct Scsi_Host *shost = pci_get_drvdata(pdev); struct Scsi_Host *shost = pci_get_drvdata(pdev);
@ -8987,8 +8977,7 @@ scsih_pci_slot_reset(struct pci_dev *pdev)
if (rc) if (rc)
return PCI_ERS_RESULT_DISCONNECT; return PCI_ERS_RESULT_DISCONNECT;
rc = mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, rc = mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
pr_warn(MPT3SAS_FMT "hard reset: %s\n", ioc->name, pr_warn(MPT3SAS_FMT "hard reset: %s\n", ioc->name,
(rc == 0) ? "success" : "failed"); (rc == 0) ? "success" : "failed");
@ -9007,7 +8996,7 @@ scsih_pci_slot_reset(struct pci_dev *pdev)
* OK to resume normal operation. Use completion to allow * OK to resume normal operation. Use completion to allow
* halted scsi ops to resume. * halted scsi ops to resume.
*/ */
void static void
scsih_pci_resume(struct pci_dev *pdev) scsih_pci_resume(struct pci_dev *pdev)
{ {
struct Scsi_Host *shost = pci_get_drvdata(pdev); struct Scsi_Host *shost = pci_get_drvdata(pdev);
@ -9024,7 +9013,7 @@ scsih_pci_resume(struct pci_dev *pdev)
* scsih_pci_mmio_enabled - Enable MMIO and dump debug registers * scsih_pci_mmio_enabled - Enable MMIO and dump debug registers
* @pdev: pointer to PCI device * @pdev: pointer to PCI device
*/ */
pci_ers_result_t static pci_ers_result_t
scsih_pci_mmio_enabled(struct pci_dev *pdev) scsih_pci_mmio_enabled(struct pci_dev *pdev)
{ {
struct Scsi_Host *shost = pci_get_drvdata(pdev); struct Scsi_Host *shost = pci_get_drvdata(pdev);
@ -9152,7 +9141,7 @@ static struct pci_driver mpt3sas_driver = {
* *
* Returns 0 success, anything else error. * Returns 0 success, anything else error.
*/ */
int static int
scsih_init(void) scsih_init(void)
{ {
mpt2_ids = 0; mpt2_ids = 0;
@ -9202,7 +9191,7 @@ scsih_init(void)
* *
* Returns 0 success, anything else error. * Returns 0 success, anything else error.
*/ */
void static void
scsih_exit(void) scsih_exit(void)
{ {



@ -300,7 +300,6 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc,
int rc; int rc;
u16 smid; u16 smid;
u32 ioc_state; u32 ioc_state;
unsigned long timeleft;
void *psge; void *psge;
u8 issue_reset = 0; u8 issue_reset = 0;
void *data_out = NULL; void *data_out = NULL;
@ -394,8 +393,7 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc,
ioc->name, (unsigned long long)sas_address)); ioc->name, (unsigned long long)sas_address));
init_completion(&ioc->transport_cmds.done); init_completion(&ioc->transport_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->transport_cmds.done, wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
10*HZ);
if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s: timeout\n", pr_err(MPT3SAS_FMT "%s: timeout\n",
@ -446,8 +444,7 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc,
issue_host_reset: issue_host_reset:
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
out: out:
ioc->transport_cmds.status = MPT3_CMD_NOT_USED; ioc->transport_cmds.status = MPT3_CMD_NOT_USED;
if (data_out) if (data_out)
@ -1107,7 +1104,6 @@ _transport_get_expander_phy_error_log(struct MPT3SAS_ADAPTER *ioc,
int rc; int rc;
u16 smid; u16 smid;
u32 ioc_state; u32 ioc_state;
unsigned long timeleft;
void *psge; void *psge;
u8 issue_reset = 0; u8 issue_reset = 0;
void *data_out = NULL; void *data_out = NULL;
@ -1203,8 +1199,7 @@ _transport_get_expander_phy_error_log(struct MPT3SAS_ADAPTER *ioc,
phy->number)); phy->number));
init_completion(&ioc->transport_cmds.done); init_completion(&ioc->transport_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->transport_cmds.done, wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
10*HZ);
if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s: timeout\n", pr_err(MPT3SAS_FMT "%s: timeout\n",
@ -1253,8 +1248,7 @@ _transport_get_expander_phy_error_log(struct MPT3SAS_ADAPTER *ioc,
issue_host_reset: issue_host_reset:
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
out: out:
ioc->transport_cmds.status = MPT3_CMD_NOT_USED; ioc->transport_cmds.status = MPT3_CMD_NOT_USED;
if (data_out) if (data_out)
@ -1421,7 +1415,6 @@ _transport_expander_phy_control(struct MPT3SAS_ADAPTER *ioc,
int rc; int rc;
u16 smid; u16 smid;
u32 ioc_state; u32 ioc_state;
unsigned long timeleft;
void *psge; void *psge;
u8 issue_reset = 0; u8 issue_reset = 0;
void *data_out = NULL; void *data_out = NULL;
@ -1522,8 +1515,7 @@ _transport_expander_phy_control(struct MPT3SAS_ADAPTER *ioc,
phy->number, phy_operation)); phy->number, phy_operation));
init_completion(&ioc->transport_cmds.done); init_completion(&ioc->transport_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->transport_cmds.done, wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
10*HZ);
if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s: timeout\n", pr_err(MPT3SAS_FMT "%s: timeout\n",
@ -1564,8 +1556,7 @@ _transport_expander_phy_control(struct MPT3SAS_ADAPTER *ioc,
issue_host_reset: issue_host_reset:
if (issue_reset) if (issue_reset)
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
out: out:
ioc->transport_cmds.status = MPT3_CMD_NOT_USED; ioc->transport_cmds.status = MPT3_CMD_NOT_USED;
if (data_out) if (data_out)
@ -1899,7 +1890,6 @@ _transport_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
int rc; int rc;
u16 smid; u16 smid;
u32 ioc_state; u32 ioc_state;
unsigned long timeleft;
void *psge; void *psge;
u8 issue_reset = 0; u8 issue_reset = 0;
dma_addr_t dma_addr_in = 0; dma_addr_t dma_addr_in = 0;
@ -2043,8 +2033,7 @@ _transport_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
init_completion(&ioc->transport_cmds.done); init_completion(&ioc->transport_cmds.done);
mpt3sas_base_put_smid_default(ioc, smid); mpt3sas_base_put_smid_default(ioc, smid);
timeleft = wait_for_completion_timeout(&ioc->transport_cmds.done, wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
10*HZ);
if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) { if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
pr_err(MPT3SAS_FMT "%s : timeout\n", pr_err(MPT3SAS_FMT "%s : timeout\n",
@ -2103,8 +2092,7 @@ _transport_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
issue_host_reset: issue_host_reset:
if (issue_reset) { if (issue_reset) {
mpt3sas_base_hard_reset_handler(ioc, CAN_SLEEP, mpt3sas_base_hard_reset_handler(ioc, FORCE_BIG_HAMMER);
FORCE_BIG_HAMMER);
rc = -ETIMEDOUT; rc = -ETIMEDOUT;
} }


@ -136,7 +136,8 @@ static void mvs_64xx_phy_reset(struct mvs_info *mvi, u32 phy_id, int hard)
} }
} }
void mvs_64xx_clear_srs_irq(struct mvs_info *mvi, u8 reg_set, u8 clear_all) static void
mvs_64xx_clear_srs_irq(struct mvs_info *mvi, u8 reg_set, u8 clear_all)
{ {
void __iomem *regs = mvi->regs; void __iomem *regs = mvi->regs;
u32 tmp; u32 tmp;
@ -563,7 +564,7 @@ static u8 mvs_64xx_assign_reg_set(struct mvs_info *mvi, u8 *tfs)
return MVS_ID_NOT_MAPPED; return MVS_ID_NOT_MAPPED;
} }
void mvs_64xx_make_prd(struct scatterlist *scatter, int nr, void *prd) static void mvs_64xx_make_prd(struct scatterlist *scatter, int nr, void *prd)
{ {
int i; int i;
struct scatterlist *sg; struct scatterlist *sg;
@ -633,7 +634,7 @@ static void mvs_64xx_phy_work_around(struct mvs_info *mvi, int i)
mvs_write_port_vsr_data(mvi, i, tmp); mvs_write_port_vsr_data(mvi, i, tmp);
} }
void mvs_64xx_phy_set_link_rate(struct mvs_info *mvi, u32 phy_id, static void mvs_64xx_phy_set_link_rate(struct mvs_info *mvi, u32 phy_id,
struct sas_phy_linkrates *rates) struct sas_phy_linkrates *rates)
{ {
u32 lrmin = 0, lrmax = 0; u32 lrmin = 0, lrmax = 0;
@ -668,20 +669,20 @@ static void mvs_64xx_clear_active_cmds(struct mvs_info *mvi)
} }
u32 mvs_64xx_spi_read_data(struct mvs_info *mvi) static u32 mvs_64xx_spi_read_data(struct mvs_info *mvi)
{ {
void __iomem *regs = mvi->regs_ex; void __iomem *regs = mvi->regs_ex;
return ior32(SPI_DATA_REG_64XX); return ior32(SPI_DATA_REG_64XX);
} }
void mvs_64xx_spi_write_data(struct mvs_info *mvi, u32 data) static void mvs_64xx_spi_write_data(struct mvs_info *mvi, u32 data)
{ {
void __iomem *regs = mvi->regs_ex; void __iomem *regs = mvi->regs_ex;
iow32(SPI_DATA_REG_64XX, data); iow32(SPI_DATA_REG_64XX, data);
} }
int mvs_64xx_spi_buildcmd(struct mvs_info *mvi, static int mvs_64xx_spi_buildcmd(struct mvs_info *mvi,
u32 *dwCmd, u32 *dwCmd,
u8 cmd, u8 cmd,
u8 read, u8 read,
@ -705,7 +706,7 @@ int mvs_64xx_spi_buildcmd(struct mvs_info *mvi,
} }
int mvs_64xx_spi_issuecmd(struct mvs_info *mvi, u32 cmd) static int mvs_64xx_spi_issuecmd(struct mvs_info *mvi, u32 cmd)
{ {
void __iomem *regs = mvi->regs_ex; void __iomem *regs = mvi->regs_ex;
int retry; int retry;
@ -720,7 +721,7 @@ int mvs_64xx_spi_issuecmd(struct mvs_info *mvi, u32 cmd)
return 0; return 0;
} }
int mvs_64xx_spi_waitdataready(struct mvs_info *mvi, u32 timeout) static int mvs_64xx_spi_waitdataready(struct mvs_info *mvi, u32 timeout)
{ {
void __iomem *regs = mvi->regs_ex; void __iomem *regs = mvi->regs_ex;
u32 i, dwTmp; u32 i, dwTmp;
@ -735,7 +736,7 @@ int mvs_64xx_spi_waitdataready(struct mvs_info *mvi, u32 timeout)
return -1; return -1;
} }
void mvs_64xx_fix_dma(struct mvs_info *mvi, u32 phy_mask, static void mvs_64xx_fix_dma(struct mvs_info *mvi, u32 phy_mask,
int buf_len, int from, void *prd) int buf_len, int from, void *prd)
{ {
int i; int i;


@ -48,8 +48,8 @@ static void mvs_94xx_detect_porttype(struct mvs_info *mvi, int i)
} }
} }
void set_phy_tuning(struct mvs_info *mvi, int phy_id, static void set_phy_tuning(struct mvs_info *mvi, int phy_id,
struct phy_tuning phy_tuning) struct phy_tuning phy_tuning)
{ {
u32 tmp, setting_0 = 0, setting_1 = 0; u32 tmp, setting_0 = 0, setting_1 = 0;
u8 i; u8 i;
@ -110,8 +110,8 @@ void set_phy_tuning(struct mvs_info *mvi, int phy_id,
} }
} }
void set_phy_ffe_tuning(struct mvs_info *mvi, int phy_id, static void set_phy_ffe_tuning(struct mvs_info *mvi, int phy_id,
struct ffe_control ffe) struct ffe_control ffe)
{ {
u32 tmp; u32 tmp;
@ -177,7 +177,7 @@ void set_phy_ffe_tuning(struct mvs_info *mvi, int phy_id,
} }
/*Notice: this function must be called when phy is disabled*/ /*Notice: this function must be called when phy is disabled*/
void set_phy_rate(struct mvs_info *mvi, int phy_id, u8 rate) static void set_phy_rate(struct mvs_info *mvi, int phy_id, u8 rate)
{ {
union reg_phy_cfg phy_cfg, phy_cfg_tmp; union reg_phy_cfg phy_cfg, phy_cfg_tmp;
mvs_write_port_vsr_addr(mvi, phy_id, VSR_PHY_MODE2); mvs_write_port_vsr_addr(mvi, phy_id, VSR_PHY_MODE2);
@ -679,7 +679,8 @@ static void mvs_94xx_command_active(struct mvs_info *mvi, u32 slot_idx)
} }
} }
void mvs_94xx_clear_srs_irq(struct mvs_info *mvi, u8 reg_set, u8 clear_all) static void
mvs_94xx_clear_srs_irq(struct mvs_info *mvi, u8 reg_set, u8 clear_all)
{ {
void __iomem *regs = mvi->regs; void __iomem *regs = mvi->regs;
u32 tmp; u32 tmp;
@ -906,8 +907,8 @@ static void mvs_94xx_fix_phy_info(struct mvs_info *mvi, int i,
} }
void mvs_94xx_phy_set_link_rate(struct mvs_info *mvi, u32 phy_id, static void mvs_94xx_phy_set_link_rate(struct mvs_info *mvi, u32 phy_id,
struct sas_phy_linkrates *rates) struct sas_phy_linkrates *rates)
{ {
u32 lrmax = 0; u32 lrmax = 0;
u32 tmp; u32 tmp;
@ -936,25 +937,25 @@ static void mvs_94xx_clear_active_cmds(struct mvs_info *mvi)
} }
u32 mvs_94xx_spi_read_data(struct mvs_info *mvi) static u32 mvs_94xx_spi_read_data(struct mvs_info *mvi)
{ {
void __iomem *regs = mvi->regs_ex - 0x10200; void __iomem *regs = mvi->regs_ex - 0x10200;
return mr32(SPI_RD_DATA_REG_94XX); return mr32(SPI_RD_DATA_REG_94XX);
} }
void mvs_94xx_spi_write_data(struct mvs_info *mvi, u32 data) static void mvs_94xx_spi_write_data(struct mvs_info *mvi, u32 data)
{ {
void __iomem *regs = mvi->regs_ex - 0x10200; void __iomem *regs = mvi->regs_ex - 0x10200;
mw32(SPI_RD_DATA_REG_94XX, data); mw32(SPI_RD_DATA_REG_94XX, data);
} }
int mvs_94xx_spi_buildcmd(struct mvs_info *mvi, static int mvs_94xx_spi_buildcmd(struct mvs_info *mvi,
u32 *dwCmd, u32 *dwCmd,
u8 cmd, u8 cmd,
u8 read, u8 read,
u8 length, u8 length,
u32 addr u32 addr
) )
{ {
void __iomem *regs = mvi->regs_ex - 0x10200; void __iomem *regs = mvi->regs_ex - 0x10200;
@ -974,7 +975,7 @@ int mvs_94xx_spi_buildcmd(struct mvs_info *mvi,
} }
int mvs_94xx_spi_issuecmd(struct mvs_info *mvi, u32 cmd) static int mvs_94xx_spi_issuecmd(struct mvs_info *mvi, u32 cmd)
{ {
void __iomem *regs = mvi->regs_ex - 0x10200; void __iomem *regs = mvi->regs_ex - 0x10200;
mw32(SPI_CTRL_REG_94XX, cmd | SPI_CTRL_SpiStart_94XX); mw32(SPI_CTRL_REG_94XX, cmd | SPI_CTRL_SpiStart_94XX);
@ -982,7 +983,7 @@ int mvs_94xx_spi_issuecmd(struct mvs_info *mvi, u32 cmd)
return 0; return 0;
} }
int mvs_94xx_spi_waitdataready(struct mvs_info *mvi, u32 timeout) static int mvs_94xx_spi_waitdataready(struct mvs_info *mvi, u32 timeout)
{ {
void __iomem *regs = mvi->regs_ex - 0x10200; void __iomem *regs = mvi->regs_ex - 0x10200;
u32 i, dwTmp; u32 i, dwTmp;
@ -997,8 +998,8 @@ int mvs_94xx_spi_waitdataready(struct mvs_info *mvi, u32 timeout)
return -1; return -1;
} }
void mvs_94xx_fix_dma(struct mvs_info *mvi, u32 phy_mask, static void mvs_94xx_fix_dma(struct mvs_info *mvi, u32 phy_mask,
int buf_len, int from, void *prd) int buf_len, int from, void *prd)
{ {
int i; int i;
struct mvs_prd *buf_prd = prd; struct mvs_prd *buf_prd = prd;


@ -74,7 +74,7 @@ void mvs_tag_init(struct mvs_info *mvi)
mvs_tag_clear(mvi, i); mvs_tag_clear(mvi, i);
} }
struct mvs_info *mvs_find_dev_mvi(struct domain_device *dev) static struct mvs_info *mvs_find_dev_mvi(struct domain_device *dev)
{ {
unsigned long i = 0, j = 0, hi = 0; unsigned long i = 0, j = 0, hi = 0;
struct sas_ha_struct *sha = dev->port->ha; struct sas_ha_struct *sha = dev->port->ha;
@ -102,7 +102,7 @@ struct mvs_info *mvs_find_dev_mvi(struct domain_device *dev)
} }
int mvs_find_dev_phyno(struct domain_device *dev, int *phyno) static int mvs_find_dev_phyno(struct domain_device *dev, int *phyno)
{ {
unsigned long i = 0, j = 0, n = 0, num = 0; unsigned long i = 0, j = 0, n = 0, num = 0;
struct mvs_device *mvi_dev = (struct mvs_device *)dev->lldd_dev; struct mvs_device *mvi_dev = (struct mvs_device *)dev->lldd_dev;
@ -1158,7 +1158,7 @@ void mvs_port_deformed(struct asd_sas_phy *sas_phy)
mvs_port_notify_deformed(sas_phy, 1); mvs_port_notify_deformed(sas_phy, 1);
} }
struct mvs_device *mvs_alloc_dev(struct mvs_info *mvi) static struct mvs_device *mvs_alloc_dev(struct mvs_info *mvi)
{ {
u32 dev; u32 dev;
for (dev = 0; dev < MVS_MAX_DEVICES; dev++) { for (dev = 0; dev < MVS_MAX_DEVICES; dev++) {
@ -1175,7 +1175,7 @@ struct mvs_device *mvs_alloc_dev(struct mvs_info *mvi)
return NULL; return NULL;
} }
void mvs_free_dev(struct mvs_device *mvi_dev) static void mvs_free_dev(struct mvs_device *mvi_dev)
{ {
u32 id = mvi_dev->device_id; u32 id = mvi_dev->device_id;
memset(mvi_dev, 0, sizeof(*mvi_dev)); memset(mvi_dev, 0, sizeof(*mvi_dev));
@ -1185,7 +1185,7 @@ void mvs_free_dev(struct mvs_device *mvi_dev)
mvi_dev->taskfileset = MVS_ID_NOT_MAPPED; mvi_dev->taskfileset = MVS_ID_NOT_MAPPED;
} }
int mvs_dev_found_notify(struct domain_device *dev, int lock) static int mvs_dev_found_notify(struct domain_device *dev, int lock)
{ {
unsigned long flags = 0; unsigned long flags = 0;
int res = 0; int res = 0;
@ -1241,7 +1241,7 @@ int mvs_dev_found(struct domain_device *dev)
return mvs_dev_found_notify(dev, 1); return mvs_dev_found_notify(dev, 1);
} }
void mvs_dev_gone_notify(struct domain_device *dev) static void mvs_dev_gone_notify(struct domain_device *dev)
{ {
unsigned long flags = 0; unsigned long flags = 0;
struct mvs_device *mvi_dev = dev->lldd_dev; struct mvs_device *mvi_dev = dev->lldd_dev;
@ -1611,7 +1611,7 @@ static int mvs_sata_done(struct mvs_info *mvi, struct sas_task *task,
return stat; return stat;
} }
void mvs_set_sense(u8 *buffer, int len, int d_sense, static void mvs_set_sense(u8 *buffer, int len, int d_sense,
int key, int asc, int ascq) int key, int asc, int ascq)
{ {
memset(buffer, 0, len); memset(buffer, 0, len);
@ -1650,7 +1650,7 @@ void mvs_set_sense(u8 *buffer, int len, int d_sense,
return; return;
} }
void mvs_fill_ssp_resp_iu(struct ssp_response_iu *iu, static void mvs_fill_ssp_resp_iu(struct ssp_response_iu *iu,
u8 key, u8 asc, u8 asc_q) u8 key, u8 asc, u8 asc_q)
{ {
iu->datapres = 2; iu->datapres = 2;


@ -1,565 +0,0 @@
/*
* This driver adapted from Drew Eckhardt's Trantor T128 driver
*
* Copyright 1993, Drew Eckhardt
* Visionary Computing
* (Unix and Linux consulting and custom programming)
* drew@colorado.edu
* +1 (303) 666-5836
*
* ( Based on T128 - DISTRIBUTION RELEASE 3. )
*
* Modified to work with the Pro Audio Spectrum/Studio 16
* by John Weidman.
*
*
* For more information, please consult
*
* Media Vision
* (510) 770-8600
* (800) 348-7116
*/
/*
* The card is detected and initialized in one of several ways :
* 1. Autoprobe (default) - There are many different models of
* the Pro Audio Spectrum/Studio 16, and I only have one of
* them, so this may require a little tweaking. An interrupt
* is triggered to autoprobe for the interrupt line. Note:
* with the newer model boards, the interrupt is set via
* software after reset using the default_irq for the
* current board number.
*
* 2. With command line overrides - pas16=port,irq may be
* used on the LILO command line to override the defaults.
*
* 3. With the PAS16_OVERRIDE compile time define. This is
* specified as an array of address, irq tuples. Ie, for
* one board at the default 0x388 address, IRQ10, I could say
* -DPAS16_OVERRIDE={{0x388, 10}}
* NOTE: Untested.
*
* 4. When included as a module, with arguments passed on the command line:
* pas16_irq=xx the interrupt
* pas16_addr=xx the port
* e.g. "modprobe pas16 pas16_addr=0x388 pas16_irq=5"
*
* Note that if the override methods are used, place holders must
* be specified for other boards in the system.
*
*
* Configuration notes :
* The current driver does not support interrupt sharing with the
* sound portion of the card. If you use the same irq for the
* scsi port and sound you will have problems. Either use
* a different irq for the scsi port or don't use interrupts
* for the scsi port.
*
* If you have problems with your card not being recognized, use
* the LILO command line override. Try to get it recognized without
* interrupts. Ie, for a board at the default 0x388 base port,
* boot: linux pas16=0x388,0
*
* NO_IRQ (0) should be specified for no interrupt,
* IRQ_AUTO (254) to autoprobe for an IRQ line if overridden
* on the command line.
*/
#include <linux/module.h>
#include <asm/io.h>
#include <asm/dma.h>
#include <linux/blkdev.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <scsi/scsi_host.h>
#include "pas16.h"
#include "NCR5380.h"
static unsigned short pas16_addr;
static int pas16_irq;
static const int scsi_irq_translate[] =
{ 0, 0, 1, 2, 3, 4, 5, 6, 0, 0, 7, 8, 9, 0, 10, 11 };
/* The default_irqs array contains values used to set the irq into the
* board via software (as must be done on newer model boards without
* irq jumpers on the board). The first value in the array will be
* assigned to logical board 0, the next to board 1, etc.
*/
static int default_irqs[] __initdata =
{ PAS16_DEFAULT_BOARD_1_IRQ,
PAS16_DEFAULT_BOARD_2_IRQ,
PAS16_DEFAULT_BOARD_3_IRQ,
PAS16_DEFAULT_BOARD_4_IRQ
};
static struct override {
unsigned short io_port;
int irq;
} overrides
#ifdef PAS16_OVERRIDE
[] __initdata = PAS16_OVERRIDE;
#else
[4] __initdata = {{0,IRQ_AUTO}, {0,IRQ_AUTO}, {0,IRQ_AUTO},
{0,IRQ_AUTO}};
#endif
#define NO_OVERRIDES ARRAY_SIZE(overrides)
static struct base {
unsigned short io_port;
int noauto;
} bases[] __initdata =
{ {PAS16_DEFAULT_BASE_1, 0},
{PAS16_DEFAULT_BASE_2, 0},
{PAS16_DEFAULT_BASE_3, 0},
{PAS16_DEFAULT_BASE_4, 0}
};
#define NO_BASES ARRAY_SIZE(bases)
static const unsigned short pas16_offset[ 8 ] =
{
0x1c00, /* OUTPUT_DATA_REG */
0x1c01, /* INITIATOR_COMMAND_REG */
0x1c02, /* MODE_REG */
0x1c03, /* TARGET_COMMAND_REG */
0x3c00, /* STATUS_REG ro, SELECT_ENABLE_REG wo */
0x3c01, /* BUS_AND_STATUS_REG ro, START_DMA_SEND_REG wo */
0x3c02, /* INPUT_DATA_REGISTER ro, (N/A on PAS16 ?)
* START_DMA_TARGET_RECEIVE_REG wo
*/
0x3c03, /* RESET_PARITY_INTERRUPT_REG ro,
* START_DMA_INITIATOR_RECEIVE_REG wo
*/
};
/*
* Function : enable_board( int board_num, unsigned short port )
*
* Purpose : set address in new model board
*
* Inputs : board_num - logical board number 0-3, port - base address
*
*/
static void __init
enable_board( int board_num, unsigned short port )
{
outb( 0xbc + board_num, MASTER_ADDRESS_PTR );
outb( port >> 2, MASTER_ADDRESS_PTR );
}
/*
* Function : init_board( unsigned short port, int irq )
*
* Purpose : Set the board up to handle the SCSI interface
*
* Inputs : port - base address of the board,
* irq - irq to assign to the SCSI port
* force_irq - set it even if it conflicts with sound driver
*
*/
static void __init
init_board( unsigned short io_port, int irq, int force_irq )
{
unsigned int tmp;
unsigned int pas_irq_code;
/* Initialize the SCSI part of the board */
outb( 0x30, io_port + P_TIMEOUT_COUNTER_REG ); /* Timeout counter */
outb( 0x01, io_port + P_TIMEOUT_STATUS_REG_OFFSET ); /* Reset TC */
outb( 0x01, io_port + WAIT_STATE ); /* 1 Wait state */
inb(io_port + pas16_offset[RESET_PARITY_INTERRUPT_REG]);
/* Set the SCSI interrupt pointer without mucking up the sound
* interrupt pointer in the same byte.
*/
pas_irq_code = ( irq < 16 ) ? scsi_irq_translate[irq] : 0;
tmp = inb( io_port + IO_CONFIG_3 );
if( (( tmp & 0x0f ) == pas_irq_code) && pas_irq_code > 0
&& !force_irq )
{
printk( "pas16: WARNING: Can't use same irq as sound "
"driver -- interrupts disabled\n" );
/* Set up the drive parameters, disable 5380 interrupts */
outb( 0x4d, io_port + SYS_CONFIG_4 );
}
else
{
tmp = ( tmp & 0x0f ) | ( pas_irq_code << 4 );
outb( tmp, io_port + IO_CONFIG_3 );
/* Set up the drive parameters and enable 5380 interrupts */
outb( 0x6d, io_port + SYS_CONFIG_4 );
}
}
/*
* Function : pas16_hw_detect( unsigned short board_num )
*
* Purpose : determine if a pas16 board is present
*
* Inputs : board_num - logical board number ( 0 - 3 )
*
* Returns : 0 if board not found, 1 if found.
*/
static int __init
pas16_hw_detect( unsigned short board_num )
{
unsigned char board_rev, tmp;
unsigned short io_port = bases[ board_num ].io_port;
/* See if we can find a PAS16 board at the address associated
* with this logical board number.
*/
/* First, attempt to take a newer model board out of reset and
* give it a base address. This shouldn't affect older boards.
*/
enable_board( board_num, io_port );
/* Now see if it looks like a PAS16 board */
board_rev = inb( io_port + PCB_CONFIG );
if( board_rev == 0xff )
return 0;
tmp = board_rev ^ 0xe0;
outb( tmp, io_port + PCB_CONFIG );
tmp = inb( io_port + PCB_CONFIG );
outb( board_rev, io_port + PCB_CONFIG );
if( board_rev != tmp ) /* Not a PAS-16 */
return 0;
if( ( inb( io_port + OPERATION_MODE_1 ) & 0x03 ) != 0x03 )
return 0; /* return if no SCSI interface found */
/* Mediavision has some new model boards that return ID bits
* that indicate a SCSI interface, but they're not (LMS). We'll
* put in an additional test to try to weed them out.
*/
outb(0x01, io_port + WAIT_STATE); /* 1 Wait state */
outb(0x20, io_port + pas16_offset[MODE_REG]); /* Is it really SCSI? */
if (inb(io_port + pas16_offset[MODE_REG]) != 0x20) /* Write to a reg. */
return 0; /* and try to read */
outb(0x00, io_port + pas16_offset[MODE_REG]); /* it back. */
if (inb(io_port + pas16_offset[MODE_REG]) != 0x00)
return 0;
return 1;
}
#ifndef MODULE
/*
* Function : pas16_setup(char *str, int *ints)
*
* Purpose : LILO command line initialization of the overrides array,
*
* Inputs : str - unused, ints - array of integer parameters with ints[0]
* equal to the number of ints.
*
*/
static int __init pas16_setup(char *str)
{
static int commandline_current;
int i;
int ints[10];
get_options(str, ARRAY_SIZE(ints), ints);
if (ints[0] != 2)
printk("pas16_setup : usage pas16=io_port,irq\n");
else
if (commandline_current < NO_OVERRIDES) {
overrides[commandline_current].io_port = (unsigned short) ints[1];
overrides[commandline_current].irq = ints[2];
for (i = 0; i < NO_BASES; ++i)
if (bases[i].io_port == (unsigned short) ints[1]) {
bases[i].noauto = 1;
break;
}
++commandline_current;
}
return 1;
}
__setup("pas16=", pas16_setup);
#endif
/*
* Function : int pas16_detect(struct scsi_host_template * tpnt)
*
* Purpose : detects and initializes PAS16 controllers
* that were autoprobed, overridden on the LILO command line,
* or specified at compile time.
*
* Inputs : tpnt - template for this SCSI adapter.
*
* Returns : 1 if a host adapter was found, 0 if not.
*
*/
static int __init pas16_detect(struct scsi_host_template *tpnt)
{
static int current_override;
static unsigned short current_base;
struct Scsi_Host *instance;
unsigned short io_port;
int count;
if (pas16_addr != 0) {
overrides[0].io_port = pas16_addr;
/*
* This is how we avoid seeing more than
* one host adapter at the same I/O port.
* Cribbed shamelessly from pas16_setup().
*/
for (count = 0; count < NO_BASES; ++count)
if (bases[count].io_port == pas16_addr) {
bases[count].noauto = 1;
break;
}
}
if (pas16_irq != 0)
overrides[0].irq = pas16_irq;
for (count = 0; current_override < NO_OVERRIDES; ++current_override) {
io_port = 0;
if (overrides[current_override].io_port)
{
io_port = overrides[current_override].io_port;
enable_board( current_override, io_port );
init_board( io_port, overrides[current_override].irq, 1 );
}
else
for (; !io_port && (current_base < NO_BASES); ++current_base) {
dprintk(NDEBUG_INIT, "pas16: probing io_port 0x%04x\n",
(unsigned int)bases[current_base].io_port);
if ( !bases[current_base].noauto &&
pas16_hw_detect( current_base ) ){
io_port = bases[current_base].io_port;
init_board( io_port, default_irqs[ current_base ], 0 );
dprintk(NDEBUG_INIT, "pas16: detected board\n");
}
}
dprintk(NDEBUG_INIT, "pas16: io_port = 0x%04x\n",
(unsigned int)io_port);
if (!io_port)
break;
instance = scsi_register (tpnt, sizeof(struct NCR5380_hostdata));
if(instance == NULL)
goto out;
instance->io_port = io_port;
if (NCR5380_init(instance, FLAG_DMA_FIXUP | FLAG_LATE_DMA_SETUP))
goto out_unregister;
NCR5380_maybe_reset_bus(instance);
if (overrides[current_override].irq != IRQ_AUTO)
instance->irq = overrides[current_override].irq;
else
instance->irq = NCR5380_probe_irq(instance, PAS16_IRQS);
/* Compatibility with documented NCR5380 kernel parameters */
if (instance->irq == 255)
instance->irq = NO_IRQ;
if (instance->irq != NO_IRQ)
if (request_irq(instance->irq, pas16_intr, 0,
"pas16", instance)) {
printk("scsi%d : IRQ%d not free, interrupts disabled\n",
instance->host_no, instance->irq);
instance->irq = NO_IRQ;
}
if (instance->irq == NO_IRQ) {
printk("scsi%d : interrupts not enabled. for better interactive performance,\n", instance->host_no);
printk("scsi%d : please jumper the board for a free IRQ.\n", instance->host_no);
/* Disable 5380 interrupts, leave drive params the same */
outb( 0x4d, io_port + SYS_CONFIG_4 );
outb( (inb(io_port + IO_CONFIG_3) & 0x0f), io_port + IO_CONFIG_3 );
}
dprintk(NDEBUG_INIT, "scsi%d : irq = %d\n",
instance->host_no, instance->irq);
++current_override;
++count;
}
return count;
out_unregister:
scsi_unregister(instance);
out:
return count;
}
/*
* Function : int pas16_biosparam(Disk *disk, struct block_device *dev, int *ip)
*
* Purpose : Generates a BIOS / DOS compatible H-C-S mapping for
* the specified device / size.
*
* Inputs : size = size of device in sectors (512 bytes), dev = block device
* major / minor, ip[] = {heads, sectors, cylinders}
*
* Returns : always 0 (success), initializes ip
*
*/
/*
* XXX Most SCSI boards use this mapping, I could be incorrect. Some one
* using hard disks on a trantor should verify that this mapping corresponds
* to that used by the BIOS / ASPI driver by running the linux fdisk program
* and matching the H_C_S coordinates to what DOS uses.
*/
static int pas16_biosparam(struct scsi_device *sdev, struct block_device *dev,
sector_t capacity, int *ip)
{
int size = capacity;
ip[0] = 64;
ip[1] = 32;
ip[2] = size >> 11; /* I think I have it as /(32*64) */
if( ip[2] > 1024 ) { /* yes, >, not >= */
ip[0]=255;
ip[1]=63;
ip[2]=size/(63*255);
if( ip[2] > 1023 ) /* yes >1023... */
ip[2] = 1023;
}
return 0;
}
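/*
 * Worked example (illustrative, not part of the original pas16.c): a 2 GiB
 * disk holds 4,194,304 512-byte sectors.  The default 64-head/32-sector
 * geometry would give 4194304 >> 11 = 2048 cylinders; since that exceeds
 * 1024, the fallback path above reports 255 heads, 63 sectors per track and
 * 4194304 / (63 * 255) = 261 cylinders.
 */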
/*
* Function : int pas16_pread (struct Scsi_Host *instance,
* unsigned char *dst, int len)
*
* Purpose : Fast 5380 pseudo-dma read function, transfers len bytes to
* dst
*
* Inputs : dst = destination, len = length in bytes
*
* Returns : 0 on success, non zero on a failure such as a watchdog
* timeout.
*/
static inline int pas16_pread(struct Scsi_Host *instance,
unsigned char *dst, int len)
{
register unsigned char *d = dst;
register unsigned short reg = (unsigned short) (instance->io_port +
P_DATA_REG_OFFSET);
register int i = len;
int ii = 0;
while ( !(inb(instance->io_port + P_STATUS_REG_OFFSET) & P_ST_RDY) )
++ii;
insb( reg, d, i );
if ( inb(instance->io_port + P_TIMEOUT_STATUS_REG_OFFSET) & P_TS_TIM) {
outb( P_TS_CT, instance->io_port + P_TIMEOUT_STATUS_REG_OFFSET);
printk("scsi%d : watchdog timer fired in NCR5380_pread()\n",
instance->host_no);
return -1;
}
return 0;
}
/*
* Function : int pas16_pwrite (struct Scsi_Host *instance,
* unsigned char *src, int len)
*
* Purpose : Fast 5380 pseudo-dma write function, transfers len bytes from
* src
*
* Inputs : src = source, len = length in bytes
*
* Returns : 0 on success, non zero on a failure such as a watchdog
* timeout.
*/
static inline int pas16_pwrite(struct Scsi_Host *instance,
unsigned char *src, int len)
{
register unsigned char *s = src;
register unsigned short reg = (instance->io_port + P_DATA_REG_OFFSET);
register int i = len;
int ii = 0;
while ( !((inb(instance->io_port + P_STATUS_REG_OFFSET)) & P_ST_RDY) )
++ii;
outsb( reg, s, i );
if (inb(instance->io_port + P_TIMEOUT_STATUS_REG_OFFSET) & P_TS_TIM) {
outb( P_TS_CT, instance->io_port + P_TIMEOUT_STATUS_REG_OFFSET);
printk("scsi%d : watchdog timer fired in NCR5380_pwrite()\n",
instance->host_no);
return -1;
}
return 0;
}
#include "NCR5380.c"
static int pas16_release(struct Scsi_Host *shost)
{
if (shost->irq != NO_IRQ)
free_irq(shost->irq, shost);
NCR5380_exit(shost);
scsi_unregister(shost);
return 0;
}
static struct scsi_host_template driver_template = {
.name = "Pro Audio Spectrum-16 SCSI",
.detect = pas16_detect,
.release = pas16_release,
.proc_name = "pas16",
.info = pas16_info,
.queuecommand = pas16_queue_command,
.eh_abort_handler = pas16_abort,
.eh_bus_reset_handler = pas16_bus_reset,
.bios_param = pas16_biosparam,
.can_queue = 32,
.this_id = 7,
.sg_tablesize = SG_ALL,
.cmd_per_lun = 2,
.use_clustering = DISABLE_CLUSTERING,
.cmd_size = NCR5380_CMD_SIZE,
.max_sectors = 128,
};
#include "scsi_module.c"
#ifdef MODULE
module_param(pas16_addr, ushort, 0);
module_param(pas16_irq, int, 0);
#endif
MODULE_LICENSE("GPL");


@ -1,121 +0,0 @@
/*
* This driver adapted from Drew Eckhardt's Trantor T128 driver
*
* Copyright 1993, Drew Eckhardt
* Visionary Computing
* (Unix and Linux consulting and custom programming)
* drew@colorado.edu
* +1 (303) 666-5836
*
* ( Based on T128 - DISTRIBUTION RELEASE 3. )
*
* Modified to work with the Pro Audio Spectrum/Studio 16
* by John Weidman.
*
*
* For more information, please consult
*
* Media Vision
* (510) 770-8600
* (800) 348-7116
*/
#ifndef PAS16_H
#define PAS16_H
#define PAS16_DEFAULT_BASE_1 0x388
#define PAS16_DEFAULT_BASE_2 0x384
#define PAS16_DEFAULT_BASE_3 0x38c
#define PAS16_DEFAULT_BASE_4 0x288
#define PAS16_DEFAULT_BOARD_1_IRQ 10
#define PAS16_DEFAULT_BOARD_2_IRQ 12
#define PAS16_DEFAULT_BOARD_3_IRQ 14
#define PAS16_DEFAULT_BOARD_4_IRQ 15
/*
* The Pro Audio Spectrum boards are I/O mapped. They use a Zilog 5380
* SCSI controller, which is the equivalent of NCR's 5380. "Pseudo-DMA"
* architecture is used, where a PAL drives the DMA signals on the 5380
* allowing fast, blind transfers with proper handshaking.
*/
/* The Time-out Counter register is used to safe-guard against a stuck
* bus (in the case of RDY driven handshake) or a stuck byte (if 16-Bit
* DMA conversion is used). The counter uses a 28.224MHz clock
* divided by 14 as its clock source. In the case of a stuck byte in
* the holding register, an interrupt is generated (and mixed with the
* one with the drive) using the CD-ROM interrupt pointer.
*/
#define P_TIMEOUT_COUNTER_REG 0x4000
#define P_TC_DISABLE 0x80 /* Set to 0 to enable timeout int. */
/* Bits D6-D0 contain timeout count */
#define P_TIMEOUT_STATUS_REG_OFFSET 0x4001
#define P_TS_TIM 0x80 /* check timeout status */
/* Bits D6-D4 N/U */
#define P_TS_ARM_DRQ_INT 0x08 /* Arm DRQ Int. When set high,
* the next rising edge will
* cause a CD-ROM interrupt.
* When set low, the interrupt
* will be cleared. There is
* no status available for
* this interrupt.
*/
#define P_TS_ENABLE_TO_ERR_INTERRUPT /* Enable timeout error int. */
#define P_TS_ENABLE_WAIT /* Enable Wait */
#define P_TS_CT 0x01 /* clear timeout. Note: writing
* to this register clears the
* timeout error int. or status
*/
/*
* The data register reads/writes to/from the 5380 in pseudo-DMA mode
*/
#define P_DATA_REG_OFFSET 0x5c00 /* rw */
#define P_STATUS_REG_OFFSET 0x5c01 /* ro */
#define P_ST_RDY 0x80 /* 5380 DDRQ Status */
#define P_IRQ_STATUS 0x5c03
#define P_IS_IRQ 0x80 /* DIRQ status */
#define PCB_CONFIG 0x803
#define MASTER_ADDRESS_PTR 0x9a01 /* Fixed position - no relo */
#define SYS_CONFIG_4 0x8003
#define WAIT_STATE 0xbc00
#define OPERATION_MODE_1 0xec03
#define IO_CONFIG_3 0xf002
#define NCR5380_implementation_fields /* none */
#define PAS16_io_port(reg) (instance->io_port + pas16_offset[(reg)])
#define NCR5380_read(reg) ( inb(PAS16_io_port(reg)) )
#define NCR5380_write(reg, value) ( outb((value),PAS16_io_port(reg)) )
#define NCR5380_dma_xfer_len(instance, cmd, phase) (cmd->transfersize)
#define NCR5380_dma_recv_setup pas16_pread
#define NCR5380_dma_send_setup pas16_pwrite
#define NCR5380_dma_residual(instance) (0)
#define NCR5380_intr pas16_intr
#define NCR5380_queue_command pas16_queue_command
#define NCR5380_abort pas16_abort
#define NCR5380_bus_reset pas16_bus_reset
#define NCR5380_info pas16_info
/* 15 14 12 10 7 5 3
1101 0100 1010 1000 */
#define PAS16_IRQS 0xd4a8
#endif /* PAS16_H */


@ -4492,8 +4492,8 @@ pm8001_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
* @num: the inbound queue number * @num: the inbound queue number
* @phy_id: the phy id which we wanted to start up. * @phy_id: the phy id which we wanted to start up.
*/ */
int pm8001_chip_phy_stop_req(struct pm8001_hba_info *pm8001_ha, static int pm8001_chip_phy_stop_req(struct pm8001_hba_info *pm8001_ha,
u8 phy_id) u8 phy_id)
{ {
struct phy_stop_req payload; struct phy_stop_req payload;
struct inbound_queue_table *circularQ; struct inbound_queue_table *circularQ;


@ -527,7 +527,7 @@ void pm8001_ccb_task_free(struct pm8001_hba_info *pm8001_ha,
* pm8001_alloc_dev - find a empty pm8001_device * pm8001_alloc_dev - find a empty pm8001_device
* @pm8001_ha: our hba card information * @pm8001_ha: our hba card information
*/ */
struct pm8001_device *pm8001_alloc_dev(struct pm8001_hba_info *pm8001_ha) static struct pm8001_device *pm8001_alloc_dev(struct pm8001_hba_info *pm8001_ha)
{ {
u32 dev; u32 dev;
for (dev = 0; dev < PM8001_MAX_DEVICES; dev++) { for (dev = 0; dev < PM8001_MAX_DEVICES; dev++) {


@ -306,7 +306,7 @@ static int pmcraid_change_queue_depth(struct scsi_device *scsi_dev, int depth)
* Return Value * Return Value
* None * None
*/ */
void pmcraid_init_cmdblk(struct pmcraid_cmd *cmd, int index) static void pmcraid_init_cmdblk(struct pmcraid_cmd *cmd, int index)
{ {
struct pmcraid_ioarcb *ioarcb = &(cmd->ioa_cb->ioarcb); struct pmcraid_ioarcb *ioarcb = &(cmd->ioa_cb->ioarcb);
dma_addr_t dma_addr = cmd->ioa_cb_bus_addr; dma_addr_t dma_addr = cmd->ioa_cb_bus_addr;
@ -401,7 +401,7 @@ static struct pmcraid_cmd *pmcraid_get_free_cmd(
* Return Value: * Return Value:
* nothing * nothing
*/ */
void pmcraid_return_cmd(struct pmcraid_cmd *cmd) static void pmcraid_return_cmd(struct pmcraid_cmd *cmd)
{ {
struct pmcraid_instance *pinstance = cmd->drv_inst; struct pmcraid_instance *pinstance = cmd->drv_inst;
unsigned long lock_flags; unsigned long lock_flags;
@ -1710,7 +1710,7 @@ static struct pmcraid_ioasc_error *pmcraid_get_error_info(u32 ioasc)
* @ioasc: ioasc code * @ioasc: ioasc code
* @cmd: pointer to command that resulted in 'ioasc' * @cmd: pointer to command that resulted in 'ioasc'
*/ */
void pmcraid_ioasc_logger(u32 ioasc, struct pmcraid_cmd *cmd) static void pmcraid_ioasc_logger(u32 ioasc, struct pmcraid_cmd *cmd)
{ {
struct pmcraid_ioasc_error *error_info = pmcraid_get_error_info(ioasc); struct pmcraid_ioasc_error *error_info = pmcraid_get_error_info(ioasc);
@ -3137,7 +3137,7 @@ static int pmcraid_eh_host_reset_handler(struct scsi_cmnd *scmd)
* returns pointer pmcraid_ioadl_desc, initialized to point to internal * returns pointer pmcraid_ioadl_desc, initialized to point to internal
* or external IOADLs * or external IOADLs
*/ */
struct pmcraid_ioadl_desc * static struct pmcraid_ioadl_desc *
pmcraid_init_ioadls(struct pmcraid_cmd *cmd, int sgcount) pmcraid_init_ioadls(struct pmcraid_cmd *cmd, int sgcount)
{ {
struct pmcraid_ioadl_desc *ioadl; struct pmcraid_ioadl_desc *ioadl;


@ -278,16 +278,6 @@
struct req_que; struct req_que;
struct qla_tgt_sess; struct qla_tgt_sess;
/*
* (sd.h is not exported, hence local inclusion)
* Data Integrity Field tuple.
*/
struct sd_dif_tuple {
__be16 guard_tag; /* Checksum */
__be16 app_tag; /* Opaque storage */
__be32 ref_tag; /* Target LBA or indirect LBA */
};
/* /*
* SCSI Request Block * SCSI Request Block
*/ */
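The struct removed above duplicated the generic T10 Protection Information tuple; this series moves qla2xxx (and, further down, scsi_debug) onto the shared definition instead. For reference, the shared tuple carries the same three fields — a sketch of its layout as it appears in include/linux/t10-pi.h around this release, reproduced from memory of that header and therefore indicative only:

        #include <linux/types.h>

        /* Generic T10 PI tuple that replaces the driver-local sd_dif_tuple. */
        struct t10_pi_tuple {
                __be16 guard_tag;       /* checksum of the data block */
                __be16 app_tag;         /* opaque application storage */
                __be32 ref_tag;         /* target LBA or indirect LBA */
        };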


@ -1828,7 +1828,7 @@ qla2x00_handle_dif_error(srb_t *sp, struct sts_entry_24xx *sts24)
if (scsi_prot_sg_count(cmd)) { if (scsi_prot_sg_count(cmd)) {
uint32_t i, j = 0, k = 0, num_ent; uint32_t i, j = 0, k = 0, num_ent;
struct scatterlist *sg; struct scatterlist *sg;
struct sd_dif_tuple *spt; struct t10_pi_tuple *spt;
/* Patch the corresponding protection tags */ /* Patch the corresponding protection tags */
scsi_for_each_prot_sg(cmd, sg, scsi_for_each_prot_sg(cmd, sg,


@ -899,12 +899,12 @@ qla2x00_wait_for_hba_ready(scsi_qla_host_t *vha)
struct qla_hw_data *ha = vha->hw; struct qla_hw_data *ha = vha->hw;
scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev); scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
while (((qla2x00_reset_active(vha)) || ha->dpc_active || while ((qla2x00_reset_active(vha) || ha->dpc_active ||
ha->flags.mbox_busy) || ha->flags.mbox_busy) ||
test_bit(FX00_RESET_RECOVERY, &vha->dpc_flags) || test_bit(FX00_RESET_RECOVERY, &vha->dpc_flags) ||
test_bit(FX00_TARGET_SCAN, &vha->dpc_flags)) { test_bit(FX00_TARGET_SCAN, &vha->dpc_flags)) {
if (test_bit(UNLOADING, &base_vha->dpc_flags)) if (test_bit(UNLOADING, &base_vha->dpc_flags))
break; break;
msleep(1000); msleep(1000);
} }
} }
@ -4694,7 +4694,7 @@ retry_unlock:
qla83xx_wait_logic(); qla83xx_wait_logic();
retry++; retry++;
ql_dbg(ql_dbg_p3p, base_vha, 0xb064, ql_dbg(ql_dbg_p3p, base_vha, 0xb064,
"Failed to release IDC lock, retyring=%d\n", retry); "Failed to release IDC lock, retrying=%d\n", retry);
goto retry_unlock; goto retry_unlock;
} }
} else if (retry < 10) { } else if (retry < 10) {
@ -4702,7 +4702,7 @@ retry_unlock:
qla83xx_wait_logic(); qla83xx_wait_logic();
retry++; retry++;
ql_dbg(ql_dbg_p3p, base_vha, 0xb065, ql_dbg(ql_dbg_p3p, base_vha, 0xb065,
"Failed to read drv-lockid, retyring=%d\n", retry); "Failed to read drv-lockid, retrying=%d\n", retry);
goto retry_unlock; goto retry_unlock;
} }
@ -4718,7 +4718,7 @@ retry_unlock2:
qla83xx_wait_logic(); qla83xx_wait_logic();
retry++; retry++;
ql_dbg(ql_dbg_p3p, base_vha, 0xb066, ql_dbg(ql_dbg_p3p, base_vha, 0xb066,
"Failed to release IDC lock, retyring=%d\n", retry); "Failed to release IDC lock, retrying=%d\n", retry);
goto retry_unlock2; goto retry_unlock2;
} }
} }


@ -1843,7 +1843,7 @@ static uint32_t ql4_84xx_poll_wait_for_ready(struct scsi_qla_host *ha,
return rval; return rval;
} }
uint32_t ql4_84xx_ipmdio_rd_reg(struct scsi_qla_host *ha, uint32_t addr1, static uint32_t ql4_84xx_ipmdio_rd_reg(struct scsi_qla_host *ha, uint32_t addr1,
uint32_t addr3, uint32_t mask, uint32_t addr, uint32_t addr3, uint32_t mask, uint32_t addr,
uint32_t *data_ptr) uint32_t *data_ptr)
{ {


@ -42,6 +42,7 @@
#include <linux/atomic.h> #include <linux/atomic.h>
#include <linux/hrtimer.h> #include <linux/hrtimer.h>
#include <linux/uuid.h> #include <linux/uuid.h>
#include <linux/t10-pi.h>
#include <net/checksum.h> #include <net/checksum.h>
@ -627,7 +628,7 @@ static LIST_HEAD(sdebug_host_list);
static DEFINE_SPINLOCK(sdebug_host_list_lock); static DEFINE_SPINLOCK(sdebug_host_list_lock);
static unsigned char *fake_storep; /* ramdisk storage */ static unsigned char *fake_storep; /* ramdisk storage */
static struct sd_dif_tuple *dif_storep; /* protection info */ static struct t10_pi_tuple *dif_storep; /* protection info */
static void *map_storep; /* provisioning map */ static void *map_storep; /* provisioning map */
static unsigned long map_size; static unsigned long map_size;
@ -682,7 +683,7 @@ static void *fake_store(unsigned long long lba)
return fake_storep + lba * sdebug_sector_size; return fake_storep + lba * sdebug_sector_size;
} }
static struct sd_dif_tuple *dif_store(sector_t sector) static struct t10_pi_tuple *dif_store(sector_t sector)
{ {
sector = sector_div(sector, sdebug_store_sectors); sector = sector_div(sector, sdebug_store_sectors);
@ -1349,7 +1350,7 @@ static int resp_inquiry(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
} else if (0x86 == cmd[2]) { /* extended inquiry */ } else if (0x86 == cmd[2]) { /* extended inquiry */
arr[1] = cmd[2]; /*sanity */ arr[1] = cmd[2]; /*sanity */
arr[3] = 0x3c; /* number of following entries */ arr[3] = 0x3c; /* number of following entries */
if (sdebug_dif == SD_DIF_TYPE3_PROTECTION) if (sdebug_dif == T10_PI_TYPE3_PROTECTION)
arr[4] = 0x4; /* SPT: GRD_CHK:1 */ arr[4] = 0x4; /* SPT: GRD_CHK:1 */
else if (have_dif_prot) else if (have_dif_prot)
arr[4] = 0x5; /* SPT: GRD_CHK:1, REF_CHK:1 */ arr[4] = 0x5; /* SPT: GRD_CHK:1, REF_CHK:1 */
@ -2430,7 +2431,7 @@ static __be16 dif_compute_csum(const void *buf, int len)
return csum; return csum;
} }
static int dif_verify(struct sd_dif_tuple *sdt, const void *data, static int dif_verify(struct t10_pi_tuple *sdt, const void *data,
sector_t sector, u32 ei_lba) sector_t sector, u32 ei_lba)
{ {
__be16 csum = dif_compute_csum(data, sdebug_sector_size); __be16 csum = dif_compute_csum(data, sdebug_sector_size);
@ -2442,13 +2443,13 @@ static int dif_verify(struct sd_dif_tuple *sdt, const void *data,
be16_to_cpu(csum)); be16_to_cpu(csum));
return 0x01; return 0x01;
} }
if (sdebug_dif == SD_DIF_TYPE1_PROTECTION && if (sdebug_dif == T10_PI_TYPE1_PROTECTION &&
be32_to_cpu(sdt->ref_tag) != (sector & 0xffffffff)) { be32_to_cpu(sdt->ref_tag) != (sector & 0xffffffff)) {
pr_err("REF check failed on sector %lu\n", pr_err("REF check failed on sector %lu\n",
(unsigned long)sector); (unsigned long)sector);
return 0x03; return 0x03;
} }
if (sdebug_dif == SD_DIF_TYPE2_PROTECTION && if (sdebug_dif == T10_PI_TYPE2_PROTECTION &&
be32_to_cpu(sdt->ref_tag) != ei_lba) { be32_to_cpu(sdt->ref_tag) != ei_lba) {
pr_err("REF check failed on sector %lu\n", pr_err("REF check failed on sector %lu\n",
(unsigned long)sector); (unsigned long)sector);
@ -2504,7 +2505,7 @@ static int prot_verify_read(struct scsi_cmnd *SCpnt, sector_t start_sec,
unsigned int sectors, u32 ei_lba) unsigned int sectors, u32 ei_lba)
{ {
unsigned int i; unsigned int i;
struct sd_dif_tuple *sdt; struct t10_pi_tuple *sdt;
sector_t sector; sector_t sector;
for (i = 0; i < sectors; i++, ei_lba++) { for (i = 0; i < sectors; i++, ei_lba++) {
@ -2580,13 +2581,13 @@ static int resp_read_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
break; break;
} }
if (unlikely(have_dif_prot && check_prot)) { if (unlikely(have_dif_prot && check_prot)) {
if (sdebug_dif == SD_DIF_TYPE2_PROTECTION && if (sdebug_dif == T10_PI_TYPE2_PROTECTION &&
(cmd[1] & 0xe0)) { (cmd[1] & 0xe0)) {
mk_sense_invalid_opcode(scp); mk_sense_invalid_opcode(scp);
return check_condition_result; return check_condition_result;
} }
if ((sdebug_dif == SD_DIF_TYPE1_PROTECTION || if ((sdebug_dif == T10_PI_TYPE1_PROTECTION ||
sdebug_dif == SD_DIF_TYPE3_PROTECTION) && sdebug_dif == T10_PI_TYPE3_PROTECTION) &&
(cmd[1] & 0xe0) == 0) (cmd[1] & 0xe0) == 0)
sdev_printk(KERN_ERR, scp->device, "Unprotected RD " sdev_printk(KERN_ERR, scp->device, "Unprotected RD "
"to DIF device\n"); "to DIF device\n");
@ -2696,7 +2697,7 @@ static int prot_verify_write(struct scsi_cmnd *SCpnt, sector_t start_sec,
unsigned int sectors, u32 ei_lba) unsigned int sectors, u32 ei_lba)
{ {
int ret; int ret;
struct sd_dif_tuple *sdt; struct t10_pi_tuple *sdt;
void *daddr; void *daddr;
sector_t sector = start_sec; sector_t sector = start_sec;
int ppage_offset; int ppage_offset;
@ -2722,7 +2723,7 @@ static int prot_verify_write(struct scsi_cmnd *SCpnt, sector_t start_sec,
} }
for (ppage_offset = 0; ppage_offset < piter.length; for (ppage_offset = 0; ppage_offset < piter.length;
ppage_offset += sizeof(struct sd_dif_tuple)) { ppage_offset += sizeof(struct t10_pi_tuple)) {
/* If we're at the end of the current /* If we're at the end of the current
* data page advance to the next one * data page advance to the next one
*/ */
@ -2893,13 +2894,13 @@ static int resp_write_dt0(struct scsi_cmnd *scp, struct sdebug_dev_info *devip)
break; break;
} }
if (unlikely(have_dif_prot && check_prot)) { if (unlikely(have_dif_prot && check_prot)) {
if (sdebug_dif == SD_DIF_TYPE2_PROTECTION && if (sdebug_dif == T10_PI_TYPE2_PROTECTION &&
(cmd[1] & 0xe0)) { (cmd[1] & 0xe0)) {
mk_sense_invalid_opcode(scp); mk_sense_invalid_opcode(scp);
return check_condition_result; return check_condition_result;
} }
if ((sdebug_dif == SD_DIF_TYPE1_PROTECTION || if ((sdebug_dif == T10_PI_TYPE1_PROTECTION ||
sdebug_dif == SD_DIF_TYPE3_PROTECTION) && sdebug_dif == T10_PI_TYPE3_PROTECTION) &&
(cmd[1] & 0xe0) == 0) (cmd[1] & 0xe0) == 0)
sdev_printk(KERN_ERR, scp->device, "Unprotected WR " sdev_printk(KERN_ERR, scp->device, "Unprotected WR "
"to DIF device\n"); "to DIF device\n");
@ -3135,13 +3136,13 @@ static int resp_comp_write(struct scsi_cmnd *scp,
num = cmd[13]; /* 1 to a maximum of 255 logical blocks */ num = cmd[13]; /* 1 to a maximum of 255 logical blocks */
if (0 == num) if (0 == num)
return 0; /* degenerate case, not an error */ return 0; /* degenerate case, not an error */
if (sdebug_dif == SD_DIF_TYPE2_PROTECTION && if (sdebug_dif == T10_PI_TYPE2_PROTECTION &&
(cmd[1] & 0xe0)) { (cmd[1] & 0xe0)) {
mk_sense_invalid_opcode(scp); mk_sense_invalid_opcode(scp);
return check_condition_result; return check_condition_result;
} }
if ((sdebug_dif == SD_DIF_TYPE1_PROTECTION || if ((sdebug_dif == T10_PI_TYPE1_PROTECTION ||
sdebug_dif == SD_DIF_TYPE3_PROTECTION) && sdebug_dif == T10_PI_TYPE3_PROTECTION) &&
(cmd[1] & 0xe0) == 0) (cmd[1] & 0xe0) == 0)
sdev_printk(KERN_ERR, scp->device, "Unprotected WR " sdev_printk(KERN_ERR, scp->device, "Unprotected WR "
"to DIF device\n"); "to DIF device\n");
@ -4939,12 +4940,11 @@ static int __init scsi_debug_init(void)
} }
switch (sdebug_dif) { switch (sdebug_dif) {
case T10_PI_TYPE0_PROTECTION:
case SD_DIF_TYPE0_PROTECTION:
break; break;
case SD_DIF_TYPE1_PROTECTION: case T10_PI_TYPE1_PROTECTION:
case SD_DIF_TYPE2_PROTECTION: case T10_PI_TYPE2_PROTECTION:
case SD_DIF_TYPE3_PROTECTION: case T10_PI_TYPE3_PROTECTION:
have_dif_prot = true; have_dif_prot = true;
break; break;
@ -5026,7 +5026,7 @@ static int __init scsi_debug_init(void)
if (sdebug_dix) { if (sdebug_dix) {
int dif_size; int dif_size;
dif_size = sdebug_store_sectors * sizeof(struct sd_dif_tuple); dif_size = sdebug_store_sectors * sizeof(struct t10_pi_tuple);
dif_storep = vmalloc(dif_size); dif_storep = vmalloc(dif_size);
pr_err("dif_storep %u bytes @ %p\n", dif_size, dif_storep); pr_err("dif_storep %u bytes @ %p\n", dif_size, dif_storep);
@ -5480,19 +5480,19 @@ static int sdebug_driver_probe(struct device * dev)
switch (sdebug_dif) { switch (sdebug_dif) {
case SD_DIF_TYPE1_PROTECTION: case T10_PI_TYPE1_PROTECTION:
hprot = SHOST_DIF_TYPE1_PROTECTION; hprot = SHOST_DIF_TYPE1_PROTECTION;
if (sdebug_dix) if (sdebug_dix)
hprot |= SHOST_DIX_TYPE1_PROTECTION; hprot |= SHOST_DIX_TYPE1_PROTECTION;
break; break;
case SD_DIF_TYPE2_PROTECTION: case T10_PI_TYPE2_PROTECTION:
hprot = SHOST_DIF_TYPE2_PROTECTION; hprot = SHOST_DIF_TYPE2_PROTECTION;
if (sdebug_dix) if (sdebug_dix)
hprot |= SHOST_DIX_TYPE2_PROTECTION; hprot |= SHOST_DIX_TYPE2_PROTECTION;
break; break;
case SD_DIF_TYPE3_PROTECTION: case T10_PI_TYPE3_PROTECTION:
hprot = SHOST_DIF_TYPE3_PROTECTION; hprot = SHOST_DIF_TYPE3_PROTECTION;
if (sdebug_dix) if (sdebug_dix)
hprot |= SHOST_DIX_TYPE3_PROTECTION; hprot |= SHOST_DIX_TYPE3_PROTECTION;

View File

@ -86,12 +86,14 @@ extern void scsi_device_unbusy(struct scsi_device *sdev);
extern void scsi_queue_insert(struct scsi_cmnd *cmd, int reason); extern void scsi_queue_insert(struct scsi_cmnd *cmd, int reason);
extern void scsi_io_completion(struct scsi_cmnd *, unsigned int); extern void scsi_io_completion(struct scsi_cmnd *, unsigned int);
extern void scsi_run_host_queues(struct Scsi_Host *shost); extern void scsi_run_host_queues(struct Scsi_Host *shost);
extern void scsi_requeue_run_queue(struct work_struct *work);
extern struct request_queue *scsi_alloc_queue(struct scsi_device *sdev); extern struct request_queue *scsi_alloc_queue(struct scsi_device *sdev);
extern struct request_queue *scsi_mq_alloc_queue(struct scsi_device *sdev); extern struct request_queue *scsi_mq_alloc_queue(struct scsi_device *sdev);
extern int scsi_mq_setup_tags(struct Scsi_Host *shost); extern int scsi_mq_setup_tags(struct Scsi_Host *shost);
extern void scsi_mq_destroy_tags(struct Scsi_Host *shost); extern void scsi_mq_destroy_tags(struct Scsi_Host *shost);
extern int scsi_init_queue(void); extern int scsi_init_queue(void);
extern void scsi_exit_queue(void); extern void scsi_exit_queue(void);
extern void scsi_evt_thread(struct work_struct *work);
struct request_queue; struct request_queue;
struct request; struct request;
extern struct kmem_cache *scsi_sdb_cache; extern struct kmem_cache *scsi_sdb_cache;

View File

@ -217,8 +217,6 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
struct scsi_device *sdev; struct scsi_device *sdev;
int display_failure_msg = 1, ret; int display_failure_msg = 1, ret;
struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
extern void scsi_evt_thread(struct work_struct *work);
extern void scsi_requeue_run_queue(struct work_struct *work);
sdev = kzalloc(sizeof(*sdev) + shost->transportt->device_size, sdev = kzalloc(sizeof(*sdev) + shost->transportt->device_size,
GFP_ATOMIC); GFP_ATOMIC);

View File

@ -52,6 +52,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/pm_runtime.h> #include <linux/pm_runtime.h>
#include <linux/pr.h> #include <linux/pr.h>
#include <linux/t10-pi.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <asm/unaligned.h> #include <asm/unaligned.h>
@ -314,7 +315,7 @@ protection_type_store(struct device *dev, struct device_attribute *attr,
if (err) if (err)
return err; return err;
if (val >= 0 && val <= SD_DIF_TYPE3_PROTECTION) if (val >= 0 && val <= T10_PI_TYPE3_PROTECTION)
sdkp->protection_type = val; sdkp->protection_type = val;
return count; return count;
@ -332,7 +333,7 @@ protection_mode_show(struct device *dev, struct device_attribute *attr,
dif = scsi_host_dif_capable(sdp->host, sdkp->protection_type); dif = scsi_host_dif_capable(sdp->host, sdkp->protection_type);
dix = scsi_host_dix_capable(sdp->host, sdkp->protection_type); dix = scsi_host_dix_capable(sdp->host, sdkp->protection_type);
if (!dix && scsi_host_dix_capable(sdp->host, SD_DIF_TYPE0_PROTECTION)) { if (!dix && scsi_host_dix_capable(sdp->host, T10_PI_TYPE0_PROTECTION)) {
dif = 0; dif = 0;
dix = 1; dix = 1;
} }
@ -608,7 +609,7 @@ static unsigned char sd_setup_protect_cmnd(struct scsi_cmnd *scmd,
scmd->prot_flags |= SCSI_PROT_GUARD_CHECK; scmd->prot_flags |= SCSI_PROT_GUARD_CHECK;
} }
if (dif != SD_DIF_TYPE3_PROTECTION) { /* DIX/DIF Type 0, 1, 2 */ if (dif != T10_PI_TYPE3_PROTECTION) { /* DIX/DIF Type 0, 1, 2 */
scmd->prot_flags |= SCSI_PROT_REF_INCREMENT; scmd->prot_flags |= SCSI_PROT_REF_INCREMENT;
if (bio_integrity_flagged(bio, BIP_CTRL_NOCHECK) == false) if (bio_integrity_flagged(bio, BIP_CTRL_NOCHECK) == false)
@ -1031,7 +1032,7 @@ static int sd_setup_read_write_cmnd(struct scsi_cmnd *SCpnt)
else else
protect = 0; protect = 0;
if (protect && sdkp->protection_type == SD_DIF_TYPE2_PROTECTION) { if (protect && sdkp->protection_type == T10_PI_TYPE2_PROTECTION) {
SCpnt->cmnd = mempool_alloc(sd_cdb_pool, GFP_ATOMIC); SCpnt->cmnd = mempool_alloc(sd_cdb_pool, GFP_ATOMIC);
if (unlikely(SCpnt->cmnd == NULL)) { if (unlikely(SCpnt->cmnd == NULL)) {
@ -1997,7 +1998,7 @@ static int sd_read_protection_type(struct scsi_disk *sdkp, unsigned char *buffer
type = ((buffer[12] >> 1) & 7) + 1; /* P_TYPE 0 = Type 1 */ type = ((buffer[12] >> 1) & 7) + 1; /* P_TYPE 0 = Type 1 */
if (type > SD_DIF_TYPE3_PROTECTION) if (type > T10_PI_TYPE3_PROTECTION)
ret = -ENODEV; ret = -ENODEV;
else if (scsi_host_dif_capable(sdp->host, type)) else if (scsi_host_dif_capable(sdp->host, type))
ret = 1; ret = 1;

View File

@ -156,27 +156,6 @@ static inline unsigned int logical_to_bytes(struct scsi_device *sdev, sector_t b
return blocks * sdev->sector_size; return blocks * sdev->sector_size;
} }
/*
* A DIF-capable target device can be formatted with different
* protection schemes. Currently 0 through 3 are defined:
*
* Type 0 is regular (unprotected) I/O
*
* Type 1 defines the contents of the guard and reference tags
*
* Type 2 defines the contents of the guard and reference tags and
* uses 32-byte commands to seed the latter
*
* Type 3 defines the contents of the guard tag only
*/
enum sd_dif_target_protection_types {
SD_DIF_TYPE0_PROTECTION = 0x0,
SD_DIF_TYPE1_PROTECTION = 0x1,
SD_DIF_TYPE2_PROTECTION = 0x2,
SD_DIF_TYPE3_PROTECTION = 0x3,
};
/* /*
* Look up the DIX operation based on whether the command is read or * Look up the DIX operation based on whether the command is read or
* write and whether dix and dif are enabled. * write and whether dix and dif are enabled.
@ -239,15 +218,6 @@ static inline unsigned int sd_prot_flag_mask(unsigned int prot_op)
return flag_mask[prot_op]; return flag_mask[prot_op];
} }
/*
* Data Integrity Field tuple.
*/
struct sd_dif_tuple {
__be16 guard_tag; /* Checksum */
__be16 app_tag; /* Opaque storage */
__be32 ref_tag; /* Target LBA or indirect LBA */
};
#ifdef CONFIG_BLK_DEV_INTEGRITY #ifdef CONFIG_BLK_DEV_INTEGRITY
extern void sd_dif_config_host(struct scsi_disk *); extern void sd_dif_config_host(struct scsi_disk *);
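For reference, the two sd.h-private definitions deleted above (the protection-type enum and the DIF tuple) are superseded by generic ones in include/linux/t10-pi.h. A sketch of those replacements, reproduced from memory of the header around this series (the enum name and exact layout should be checked against the tree, so treat this as an assumption rather than a quotation; __be16/__be32 are the kernel's big-endian integer types):

struct t10_pi_tuple {
	__be16 guard_tag;	/* checksum over the data interval */
	__be16 app_tag;		/* opaque storage */
	__be32 ref_tag;		/* target LBA or indirect LBA */
};

enum t10_dif_type {
	T10_PI_TYPE0_PROTECTION = 0x0,	/* regular (unprotected) I/O */
	T10_PI_TYPE1_PROTECTION = 0x1,	/* guard and reference tags */
	T10_PI_TYPE2_PROTECTION = 0x2,	/* guard and reference tags, 32-byte CDBs */
	T10_PI_TYPE3_PROTECTION = 0x3,	/* guard tag only */
};

This is why the T10_PI_* renames in the surrounding hunks are mechanical: the tuple layout and the numeric values of the four protection types are unchanged, only the definitions now live in one shared header.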

View File

@ -60,14 +60,14 @@ void sd_dif_config_host(struct scsi_disk *sdkp)
/* Enable DMA of protection information */ /* Enable DMA of protection information */
if (scsi_host_get_guard(sdkp->device->host) & SHOST_DIX_GUARD_IP) { if (scsi_host_get_guard(sdkp->device->host) & SHOST_DIX_GUARD_IP) {
if (type == SD_DIF_TYPE3_PROTECTION) if (type == T10_PI_TYPE3_PROTECTION)
bi.profile = &t10_pi_type3_ip; bi.profile = &t10_pi_type3_ip;
else else
bi.profile = &t10_pi_type1_ip; bi.profile = &t10_pi_type1_ip;
bi.flags |= BLK_INTEGRITY_IP_CHECKSUM; bi.flags |= BLK_INTEGRITY_IP_CHECKSUM;
} else } else
if (type == SD_DIF_TYPE3_PROTECTION) if (type == T10_PI_TYPE3_PROTECTION)
bi.profile = &t10_pi_type3_crc; bi.profile = &t10_pi_type3_crc;
else else
bi.profile = &t10_pi_type1_crc; bi.profile = &t10_pi_type1_crc;
@ -82,7 +82,7 @@ void sd_dif_config_host(struct scsi_disk *sdkp)
if (!sdkp->ATO) if (!sdkp->ATO)
goto out; goto out;
if (type == SD_DIF_TYPE3_PROTECTION) if (type == T10_PI_TYPE3_PROTECTION)
bi.tag_size = sizeof(u16) + sizeof(u32); bi.tag_size = sizeof(u16) + sizeof(u32);
else else
bi.tag_size = sizeof(u16); bi.tag_size = sizeof(u16);
@ -121,7 +121,7 @@ void sd_dif_prepare(struct scsi_cmnd *scmd)
sdkp = scsi_disk(scmd->request->rq_disk); sdkp = scsi_disk(scmd->request->rq_disk);
if (sdkp->protection_type == SD_DIF_TYPE3_PROTECTION) if (sdkp->protection_type == T10_PI_TYPE3_PROTECTION)
return; return;
phys = scsi_prot_ref_tag(scmd); phys = scsi_prot_ref_tag(scmd);
@ -172,7 +172,7 @@ void sd_dif_complete(struct scsi_cmnd *scmd, unsigned int good_bytes)
sdkp = scsi_disk(scmd->request->rq_disk); sdkp = scsi_disk(scmd->request->rq_disk);
if (sdkp->protection_type == SD_DIF_TYPE3_PROTECTION || good_bytes == 0) if (sdkp->protection_type == T10_PI_TYPE3_PROTECTION || good_bytes == 0)
return; return;
intervals = good_bytes / scsi_prot_interval(scmd); intervals = good_bytes / scsi_prot_interval(scmd);

View File

@ -79,18 +79,7 @@ static void sg_proc_cleanup(void);
*/ */
#define SG_MAX_CDB_SIZE 252 #define SG_MAX_CDB_SIZE 252
/* #define SG_DEFAULT_TIMEOUT mult_frac(SG_DEFAULT_TIMEOUT_USER, HZ, USER_HZ)
* Suppose you want to calculate the formula muldiv(x,m,d)=int(x * m / d)
* Then when using 32 bit integers x * m may overflow during the calculation.
* Replacing muldiv(x) by muldiv(x)=((x % d) * m) / d + int(x / d) * m
* calculates the same, but prevents the overflow when both m and d
* are "small" numbers (like HZ and USER_HZ).
* Of course an overflow is inavoidable if the result of muldiv doesn't fit
* in 32 bits.
*/
#define MULDIV(X,MUL,DIV) ((((X % DIV) * MUL) / DIV) + ((X / DIV) * MUL))
#define SG_DEFAULT_TIMEOUT MULDIV(SG_DEFAULT_TIMEOUT_USER, HZ, USER_HZ)
int sg_big_buff = SG_DEF_RESERVED_SIZE; int sg_big_buff = SG_DEF_RESERVED_SIZE;
/* N.B. This variable is readable and writeable via /* N.B. This variable is readable and writeable via
@ -884,10 +873,11 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg)
return result; return result;
if (val < 0) if (val < 0)
return -EIO; return -EIO;
if (val >= MULDIV (INT_MAX, USER_HZ, HZ)) if (val >= mult_frac((s64)INT_MAX, USER_HZ, HZ))
val = MULDIV (INT_MAX, USER_HZ, HZ); val = min_t(s64, mult_frac((s64)INT_MAX, USER_HZ, HZ),
INT_MAX);
sfp->timeout_user = val; sfp->timeout_user = val;
sfp->timeout = MULDIV (val, HZ, USER_HZ); sfp->timeout = mult_frac(val, HZ, USER_HZ);
return 0; return 0;
case SG_GET_TIMEOUT: /* N.B. User receives timeout as return value */ case SG_GET_TIMEOUT: /* N.B. User receives timeout as return value */
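The MULDIV() macro removed above and the kernel's mult_frac(), which replaces it, rely on the same identity: x * m / d is rewritten as (x / d) * m + ((x % d) * m) / d, so the intermediate products stay small when m and d are small (HZ and USER_HZ). A standalone sketch in plain userspace C (MULT_FRAC below restates the identity locally and is not the kernel header's macro; the tick rates are made-up example values):

#include <stdio.h>

#define MULDIV(x, mul, div)	((((x) % (div)) * (mul)) / (div) + ((x) / (div)) * (mul))
#define MULT_FRAC(x, numer, denom) \
	(((x) / (denom)) * (numer) + (((x) % (denom)) * (numer)) / (denom))

int main(void)
{
	unsigned int user_hz = 100, hz = 250;	/* assumed tick rates */
	unsigned int t_user = 100000000;	/* ~10^6 seconds expressed in USER_HZ ticks */

	/* The naive t_user * hz (2.5e10) would wrap a 32-bit integer; the split form does not. */
	printf("MULDIV:    %u jiffies\n", MULDIV(t_user, hz, user_hz));
	printf("mult_frac: %u jiffies\n", MULT_FRAC(t_user, hz, user_hz));
	return 0;
}

Both lines print 250000000 (10^6 seconds at HZ=250), even though the direct product would have overflowed 32-bit arithmetic.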

View File

@ -0,0 +1,54 @@
#
# Kernel configuration file for the SMARTPQI
#
# Copyright (c) 2016 Microsemi Corporation
# Copyright (c) 2016 PMC-Sierra, Inc.
# (mailto:esc.storagedev@microsemi.com)
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; version 2
# of the License.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# NO WARRANTY
# THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
# CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
# LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
# solely responsible for determining the appropriateness of using and
# distributing the Program and assumes all risks associated with its
# exercise of rights under this Agreement, including but not limited to
# the risks and costs of program errors, damage to or loss of data,
# programs or equipment, and unavailability or interruption of operations.
# DISCLAIMER OF LIABILITY
# NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
# HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
config SCSI_SMARTPQI
tristate "Microsemi PQI Driver"
depends on PCI && SCSI && !S390
select SCSI_SAS_ATTRS
select RAID_ATTRS
---help---
This driver supports Microsemi PQI controllers.
<http://www.microsemi.com>
To compile this driver as a module, choose M here: the
module will be called smartpqi.
Note: the aacraid driver will not manage a smartpqi
controller. You need to enable smartpqi for smartpqi
controllers. For more information, please see
Documentation/scsi/smartpqi.txt
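To tie the entry above together, a minimal .config fragment for building the new driver as a module might look like the following (a sketch only; SCSI_SMARTPQI is the symbol introduced here, PCI and SCSI are the standard options it depends on, and SCSI_SAS_ATTRS/RAID_ATTRS are pulled in automatically by the select statements):

CONFIG_PCI=y
CONFIG_SCSI=y
CONFIG_SCSI_SMARTPQI=m

With that set, the module is built as smartpqi.ko from the objects listed in the Makefile that follows.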

View File

@ -0,0 +1,3 @@
ccflags-y += -I.
obj-m += smartpqi.o
smartpqi-objs := smartpqi_init.o smartpqi_sis.o smartpqi_sas_transport.o

File diff suppressed because it is too large

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff