
Merge branch 'upstream'

Jeff Garzik 2005-10-09 09:44:07 -04:00
commit f58f8be7f6
206 changed files with 8272 additions and 2313 deletions


@ -787,6 +787,722 @@ and other resources, etc.
!Idrivers/scsi/libata-scsi.c
</chapter>
<chapter id="ataExceptions">
<title>ATA errors &amp; exceptions</title>
<para>
This chapter tries to identify what error/exception conditions exist
for ATA/ATAPI devices and to describe how they should be handled in
an implementation-neutral way.
</para>
<para>
The term 'error' is used to describe conditions where either an
explicit error condition is reported by the device or a command has
timed out.
</para>
<para>
The term 'exception' is used either to describe exceptional
conditions which are not errors (say, power or hotplug events), or
to describe both errors and non-error exceptional conditions. Where
an explicit distinction between error and exception is necessary, the
term 'non-error exception' is used.
</para>
<sect1 id="excat">
<title>Exception categories</title>
<para>
Exceptions are described primarily with respect to the legacy
taskfile + bus master IDE interface. If a controller provides
another, better mechanism for error reporting, mapping it onto the
categories described below shouldn't be difficult.
</para>
<para>
In the following sections, two recovery actions - reset and
reconfiguring transport - are mentioned. These are described
further in <xref linkend="exrec"/>.
</para>
<sect2 id="excatHSMviolation">
<title>HSM violation</title>
<para>
This error is indicated when the STATUS value doesn't match the HSM
requirement while issuing or executing any ATA/ATAPI command.
</para>
<itemizedlist>
<title>Examples</title>
<listitem>
<para>
ATA_STATUS doesn't contain !BSY &amp;&amp; DRDY &amp;&amp; !DRQ while trying
to issue a command.
</para>
</listitem>
<listitem>
<para>
!BSY &amp;&amp; !DRQ during PIO data transfer.
</para>
</listitem>
<listitem>
<para>
DRQ on command completion.
</para>
</listitem>
<listitem>
<para>
!BSY &amp;&amp; ERR after CDB tranfer starts but before the
last byte of CDB is transferred. ATA/ATAPI standard states
that &quot;The device shall not terminate the PACKET command
with an error before the last byte of the command packet has
been written&quot; in the error outputs description of PACKET
command and the state diagram doesn't include such
transitions.
</para>
</listitem>
</itemizedlist>
<para>
In these cases, HSM is violated and not much information
regarding the error can be acquired from the STATUS or ERROR
register. IOW, this error can be anything - a driver bug, a
faulty device, the controller and/or the cable.
</para>
<para>
As the HSM is violated, a reset is necessary to restore a known
state. Reconfiguring the transport for a lower speed might be
helpful too, as transmission errors sometimes cause this kind of
error.
</para>
</sect2>
<sect2 id="excatDevErr">
<title>ATA/ATAPI device error (non-NCQ / non-CHECK CONDITION)</title>
<para>
These are errors detected and reported by ATA/ATAPI devices
indicating device problems. For this type of error, the STATUS
and ERROR register values are valid and describe the error
condition. Note that some ATA bus errors are detected by
ATA/ATAPI devices and reported using the same mechanism as
device errors. Those cases are described later in this
section.
</para>
<para>
For ATA commands, this type of error is indicated by !BSY
&amp;&amp; ERR during command execution and on completion.
</para>
<para>For ATAPI commands,</para>
<itemizedlist>
<listitem>
<para>
!BSY &amp;&amp; ERR &amp;&amp; ABRT right after issuing PACKET
indicates that PACKET command is not supported and falls in
this category.
</para>
</listitem>
<listitem>
<para>
!BSY &amp;&amp; ERR(==CHK) &amp;&amp; !ABRT after the last
byte of CDB is transferred indicates CHECK CONDITION and
doesn't fall in this category.
</para>
</listitem>
<listitem>
<para>
!BSY &amp;&amp; ERR(==CHK) &amp;&amp; ABRT after the last byte
of CDB is transferred *probably* indicates CHECK CONDITION and
doesn't fall in this category.
</para>
</listitem>
</itemizedlist>
<para>
Of the errors detected as above, the following are not ATA/ATAPI
device errors but ATA bus errors and should be handled
according to <xref linkend="excatATAbusErr"/>.
</para>
<variablelist>
<varlistentry>
<term>CRC error during data transfer</term>
<listitem>
<para>
This is indicated by the ICRC bit in the ERROR register and
means that corruption occurred during data transfer. Up to
ATA/ATAPI-7, the standard specifies that this bit is only
applicable to UDMA transfers, but ATA/ATAPI-8 draft revision
1f says that the bit may be applicable to multiword DMA and
PIO.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ABRT error during data transfer or on completion</term>
<listitem>
<para>
Up to ATA/ATAPI-7, the standard specifies that ABRT could be
set on ICRC errors and in cases where a device is not able
to complete a command. Combined with the fact that MWDMA
and PIO transfer errors aren't allowed to use the ICRC bit up
to ATA/ATAPI-7, this seems to imply that the ABRT bit alone
could indicate transfer errors.
</para>
<para>
However, ATA/ATAPI-8 draft revision 1f removes the statement
that ICRC errors can turn on ABRT. So, this is kind of a
gray area. Some heuristics are needed here.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
ATA/ATAPI device errors can be further categorized as follows.
</para>
<variablelist>
<varlistentry>
<term>Media errors</term>
<listitem>
<para>
This is indicated by the UNC bit in the ERROR register. ATA
devices report a UNC error only after a certain number of
retries cannot recover the data, so there's not much else to
do other than notify the upper layer.
</para>
<para>
READ and WRITE commands report the CHS or LBA of the first
failed sector, but the ATA/ATAPI standard specifies that the
amount of transferred data on error completion is
indeterminate, so we cannot assume that sectors preceding
the failed sector have been transferred and thus cannot
complete those sectors successfully as SCSI does.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Media changed / media change requested error</term>
<listitem>
<para>
&lt;&lt;TODO: fill here&gt;&gt;
</para>
</listitem>
</varlistentry>
<varlistentry><term>Address error</term>
<listitem>
<para>
This is indicated by the IDNF bit in the ERROR register.
Report to the upper layer.
</para>
</listitem>
</varlistentry>
<varlistentry><term>Other errors</term>
<listitem>
<para>
This can be an invalid command or parameter, indicated by the
ABRT ERROR bit, or some other error condition. Note that the
ABRT bit can indicate a lot of things, including ICRC and
Address errors. Heuristics are needed.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
Depending on the command, not all STATUS/ERROR bits are
applicable. These non-applicable bits are marked with
&quot;na&quot; in the output descriptions, but up to ATA/ATAPI-7
no definition of &quot;na&quot; can be found. However,
ATA/ATAPI-8 draft revision 1f describes &quot;N/A&quot; as
follows.
</para>
<blockquote>
<variablelist>
<varlistentry><term>3.2.3.3a N/A</term>
<listitem>
<para>
A keyword that indicates a field has no defined value in
this standard and should not be checked by the host or
device. N/A fields should be cleared to zero.
</para>
</listitem>
</varlistentry>
</variablelist>
</blockquote>
<para>
So, it seems reasonable to assume that &quot;na&quot; bits are
cleared to zero by devices and thus need no explicit masking.
</para>
</sect2>
<sect2 id="excatATAPIcc">
<title>ATAPI device CHECK CONDITION</title>
<para>
An ATAPI device CHECK CONDITION error is indicated by a set CHK
bit (ERR bit) in the STATUS register after the last byte of the
CDB is transferred for a PACKET command. For this kind of error,
sense data should be acquired to gather information regarding
the error. The REQUEST SENSE packet command should be used to
acquire the sense data.
</para>
<para>
Once sense data is acquired, this type of error can be
handled similarly to other SCSI errors. Note that sense data
may indicate an ATA bus error (e.g. Sense Key 04h HARDWARE ERROR
&amp;&amp; ASC/ASCQ 47h/00h SCSI PARITY ERROR). In such
cases, the error should be considered an ATA bus error and
handled according to <xref linkend="excatATAbusErr"/>.
</para>
</sect2>
<sect2 id="excatNCQerr">
<title>ATA device error (NCQ)</title>
<para>
An NCQ command error is indicated by a cleared BSY and a set ERR
bit during the NCQ command phase (one or more NCQ commands
outstanding). Although the STATUS and ERROR registers will
contain valid values describing the error, READ LOG EXT is
required to clear the error condition, determine which command
has failed and acquire more information.
</para>
<para>
READ LOG EXT Log Page 10h reports which tag has failed and the
taskfile register values describing the error. With this
information the failed command can be handled as a normal ATA
command error as in <xref linkend="excatDevErr"/> and all
other in-flight commands must be retried. Note that this
retry should not be counted - it's likely that commands
retried this way would have completed normally if it were not
for the failed command.
</para>
<para>
Note that ATA bus errors can be reported as ATA device NCQ
errors. This should be handled as described in <xref
linkend="excatATAbusErr"/>.
</para>
<para>
If READ LOG EXT Log Page 10h fails or reports NQ, we're
thoroughly screwed. This condition should be treated
according to <xref linkend="excatHSMviolation"/>.
</para>
</sect2>
<sect2 id="excatATAbusErr">
<title>ATA bus error</title>
<para>
An ATA bus error means that data corruption occurred during
transmission over the ATA bus (SATA or PATA). This type of error
can be indicated by
</para>
<itemizedlist>
<listitem>
<para>
ICRC or ABRT error as described in <xref linkend="excatDevErr"/>.
</para>
</listitem>
<listitem>
<para>
Controller-specific error completion with error information
indicating transmission error.
</para>
</listitem>
<listitem>
<para>
On some controllers, command timeout. In this case, there may
be a mechanism to determine that the timeout is due to
transmission error.
</para>
</listitem>
<listitem>
<para>
Unknown/random errors, timeouts and all sorts of weirdities.
</para>
</listitem>
</itemizedlist>
<para>
As described above, transmission errors can cause a wide variety
of symptoms ranging from device ICRC errors to random device
lockups, and, in many cases, there is no way to tell whether an
error condition is due to a transmission error or not;
therefore, it's necessary to employ some kind of heuristic
when dealing with errors and timeouts. For example,
encountering repetitive ABRT errors for a known supported
command is likely to indicate an ATA bus error.
</para>
<para>
Once it's determined that ATA bus errors have possibly
occurred, lowering the ATA bus transmission speed is one of the
actions which may alleviate the problem. See <xref
linkend="exrecReconf"/> for more information.
</para>
</sect2>
<sect2 id="excatPCIbusErr">
<title>PCI bus error</title>
<para>
Data corruption or other failures during transmission over PCI
(or another system bus). For standard BMDMA, this is indicated
by the Error bit in the BMDMA Status register. This type of
error must be logged, as it indicates that something is very
wrong with the system. Resetting the host controller is
recommended.
</para>
</sect2>
<sect2 id="excatLateCompletion">
<title>Late completion</title>
<para>
This occurs when a timeout expires and the timeout handler finds
that the timed-out command has completed successfully or with an
error. This is usually caused by lost interrupts. This type of
error must be logged. Resetting the host controller is
recommended.
</para>
</sect2>
<sect2 id="excatUnknown">
<title>Unknown error (timeout)</title>
<para>
This is when a timeout occurs and the command is still being
processed, or the host and device are in an unknown state. When
this occurs, the HSM could be in any valid or invalid state. To
bring the device to a known state and make it forget about the
timed-out command, resetting is necessary. The timed-out
command may be retried.
</para>
<para>
Timeouts can also be caused by transmission errors. Refer to
<xref linkend="excatATAbusErr"/> for more details.
</para>
</sect2>
<sect2 id="excatHoplugPM">
<title>Hotplug and power management exceptions</title>
<para>
&lt;&lt;TODO: fill here&gt;&gt;
</para>
</sect2>
</sect1>
<sect1 id="exrec">
<title>EH recovery actions</title>
<para>
This section discusses several important recovery actions.
</para>
<sect2 id="exrecClr">
<title>Clearing error condition</title>
<para>
Many controllers require their error registers to be cleared by
the error handler. Different controllers may have different
requirements.
</para>
<para>
For SATA, it's strongly recommended to clear at least the SError
register during error handling.
</para>
</sect2>
<sect2 id="exrecRst">
<title>Reset</title>
<para>
During EH, resetting is necessary in the following cases.
</para>
<itemizedlist>
<listitem>
<para>
HSM is in unknown or invalid state
</para>
</listitem>
<listitem>
<para>
HBA is in unknown or invalid state
</para>
</listitem>
<listitem>
<para>
EH needs to make HBA/device forget about in-flight commands
</para>
</listitem>
<listitem>
<para>
HBA/device behaves weirdly
</para>
</listitem>
</itemizedlist>
<para>
Resetting during EH might be a good idea regardless of the error
condition, to improve EH robustness. Whether to reset both or
either one of the HBA and the device depends on the situation,
but the following scheme is recommended.
</para>
<itemizedlist>
<listitem>
<para>
When it's known that the HBA is in a ready state but the
ATA/ATAPI device is in an unknown state, reset only the device.
</para>
</listitem>
<listitem>
<para>
If the HBA is in an unknown state, reset both the HBA and the device.
</para>
</listitem>
</itemizedlist>
<para>
HBA resetting is implementation specific. For a controller
complying with the taskfile/BMDMA PCI IDE interface, stopping the
active DMA transaction may be sufficient iff BMDMA state is the
only HBA context. But even mostly taskfile/BMDMA PCI IDE
compliant controllers may have implementation-specific
requirements and mechanisms to reset themselves. This must be
addressed by the specific drivers.
</para>
<para>
OTOH, the ATA/ATAPI standard describes in detail ways to reset
ATA/ATAPI devices.
</para>
<variablelist>
<varlistentry><term>PATA hardware reset</term>
<listitem>
<para>
This is a hardware-initiated device reset signalled by
asserting the PATA RESET- signal. There is no standard way to
initiate a hardware reset from software, although some
hardware provides registers that allow the driver to directly
tweak the RESET- signal.
</para>
</listitem>
</varlistentry>
<varlistentry><term>Software reset</term>
<listitem>
<para>
This is achieved by turning the CONTROL SRST bit on for at
least 5us. Both PATA and SATA support it but, in the case of
SATA, this may require controller-specific support, as the
second Register FIS to clear SRST should be transmitted
while the BSY bit is still set. Note that on PATA, this resets
both the master and slave devices on a channel.
</para>
</listitem>
</varlistentry>
<varlistentry><term>EXECUTE DEVICE DIAGNOSTIC command</term>
<listitem>
<para>
Although the ATA/ATAPI standard doesn't describe it exactly,
EDD implies some level of resetting, possibly a level similar
to that of software reset. The host-side EDD protocol can be
handled with normal command processing, and most SATA
controllers should be able to handle EDDs just like other
commands. As with software reset, EDD affects both devices on
a PATA bus.
</para>
<para>
Although EDD does reset devices, this doesn't suit error
handling, as EDD cannot be issued while BSY is set and it's
unclear how it will act when the device is in an unknown/weird
state.
</para>
</listitem>
</varlistentry>
<varlistentry><term>ATAPI DEVICE RESET command</term>
<listitem>
<para>
This is very similar to software reset except that reset
can be restricted to the selected device without affecting
the other device sharing the cable.
</para>
</listitem>
</varlistentry>
<varlistentry><term>SATA phy reset</term>
<listitem>
<para>
This is the preferred way of resetting a SATA device. In
effect, it's identical to PATA hardware reset. Note that
this can be done with the standard SCR Control register.
As such, it's usually easier to implement than software
reset.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
One more thing to consider when resetting devices is that
resetting clears certain configuration parameters and they
need to be set to their previous or newly adjusted values
after reset.
</para>
<para>
The parameters affected are:
</para>
<itemizedlist>
<listitem>
<para>
CHS set up with INITIALIZE DEVICE PARAMETERS (seldom used)
</para>
</listitem>
<listitem>
<para>
Parameters set with SET FEATURES including transfer mode setting
</para>
</listitem>
<listitem>
<para>
Block count set with SET MULTIPLE MODE
</para>
</listitem>
<listitem>
<para>
Other parameters (SET MAX, MEDIA LOCK...)
</para>
</listitem>
</itemizedlist>
<para>
The ATA/ATAPI standard specifies that some parameters must be
maintained across a hardware or software reset, but it doesn't
strictly specify all of them. Always reconfiguring the needed
parameters after a reset is required for robustness. Note that
this also applies when resuming from deep sleep (power-off).
</para>
<para>
Also, the ATA/ATAPI standard requires that IDENTIFY DEVICE /
IDENTIFY PACKET DEVICE be issued after any configuration
parameter is updated or after a hardware reset, and that the
result be used for further operation. The OS driver is required
to implement a revalidation mechanism to support this.
</para>
</sect2>
<sect2 id="exrecReconf">
<title>Reconfigure transport</title>
<para>
For both PATA and SATA, a lot of corners are cut in cheap
connectors, cables and controllers, and it's quite common to see
a high transmission error rate. This can be mitigated by
lowering the transmission speed.
</para>
<para>
The following is a possible scheme Jeff Garzik suggested.
</para>
<blockquote>
<para>
If more than $N (3?) transmission errors happen in 15 minutes,
</para>
<itemizedlist>
<listitem>
<para>
if SATA, decrease SATA PHY speed. if speed cannot be decreased,
</para>
</listitem>
<listitem>
<para>
decrease UDMA xfer speed. if at UDMA0, switch to PIO4,
</para>
</listitem>
<listitem>
<para>
decrease PIO xfer speed. if at PIO3, complain, but continue
</para>
</listitem>
</itemizedlist>
</blockquote>
</sect2>
</sect1>
</chapter>
<chapter id="PiixInt">
<title>ata_piix Internals</title>
!Idrivers/scsi/ata_piix.c


@ -305,7 +305,7 @@ point out some special detail about the sign-off.
The canonical patch subject line is:
Subject: [PATCH 001/123] [<area>:] <explanation>
Subject: [PATCH 001/123] subsystem: summary phrase
The canonical patch message body contains the following:
@ -330,9 +330,25 @@ alphabetically by subject line - pretty much any email reader will
support that - because the sequence number is zero-padded,
the numerical and alphabetic sort is the same.
See further details on how to phrase the "<explanation>" in the
"Subject:" line in Andrew Morton's "The perfect patch", referenced
below.
The "subsystem" in the email's Subject should identify which
area or subsystem of the kernel is being patched.
The "summary phrase" in the email's Subject should concisely
describe the patch which that email contains. The "summary
phrase" should not be a filename. Do not use the same "summary
phrase" for every patch in a whole patch series.
Bear in mind that the "summary phrase" of your email becomes
a globally-unique identifier for that patch. It propagates
all the way into the git changelog. The "summary phrase" may
later be used in developer discussions which refer to the patch.
People will want to google for the "summary phrase" to read
discussion regarding that patch.
A couple of example Subjects:
Subject: [patch 2/5] ext2: improve scalability of bitmap searching
Subject: [PATCHv2 001/207] x86: fix eflags tracking
The "from" line must be the very first line in the message body,
and has the form:


@ -355,10 +355,14 @@ ip_dynaddr - BOOLEAN
Default: 0
icmp_echo_ignore_all - BOOLEAN
If set non-zero, then the kernel will ignore all ICMP ECHO
requests sent to it.
Default: 0
icmp_echo_ignore_broadcasts - BOOLEAN
If either is set to true, then the kernel will ignore either all
ICMP ECHO requests sent to it or just those to broadcast/multicast
addresses, respectively.
If set non-zero, then the kernel will ignore all ICMP ECHO and
TIMESTAMP requests sent to it via broadcast/multicast.
Default: 1
icmp_ratelimit - INTEGER
Limit the maximal rates for sending ICMP packets whose type matches


@ -305,7 +305,7 @@ long execve(const char *filename, char **argv, char **envp)
"Ir" (THREAD_START_SP - sizeof(regs)),
"r" (&regs),
"Ir" (sizeof(regs))
: "r0", "r1", "r2", "r3", "ip", "memory");
: "r0", "r1", "r2", "r3", "ip", "lr", "memory");
out:
return ret;


@ -504,7 +504,7 @@ asmlinkage int arm_syscall(int no, struct pt_regs *regs)
bad_access:
spin_unlock(&mm->page_table_lock);
/* simulate a read access fault */
/* simulate a write access fault */
do_DataAbort(addr, 15 + (1 << 11), regs);
return -1;
}


@ -28,14 +28,15 @@
#include <linux/module.h>
#include <asm/arch/imxfb.h>
#include <asm/hardware.h>
#include <asm/arch/imx-regs.h>
#include <asm/mach/map.h>
void imx_gpio_mode(int gpio_mode)
{
unsigned int pin = gpio_mode & GPIO_PIN_MASK;
unsigned int port = (gpio_mode & GPIO_PORT_MASK) >> 5;
unsigned int ocr = (gpio_mode & GPIO_OCR_MASK) >> 10;
unsigned int port = (gpio_mode & GPIO_PORT_MASK) >> GPIO_PORT_SHIFT;
unsigned int ocr = (gpio_mode & GPIO_OCR_MASK) >> GPIO_OCR_SHIFT;
unsigned int tmp;
/* Pullup enable */
@ -57,7 +58,7 @@ void imx_gpio_mode(int gpio_mode)
GPR(port) &= ~(1<<pin);
/* use as gpio? */
if( ocr == 3 )
if(gpio_mode & GPIO_GIUS)
GIUS(port) |= (1<<pin);
else
GIUS(port) &= ~(1<<pin);
@ -72,20 +73,20 @@ void imx_gpio_mode(int gpio_mode)
tmp |= (ocr << (pin*2));
OCR1(port) = tmp;
if( gpio_mode & GPIO_AOUT )
ICONFA1(port) &= ~( 3<<(pin*2));
if( gpio_mode & GPIO_BOUT )
ICONFB1(port) &= ~( 3<<(pin*2));
ICONFA1(port) &= ~( 3<<(pin*2));
ICONFA1(port) |= ((gpio_mode >> GPIO_AOUT_SHIFT) & 3) << (pin * 2);
ICONFB1(port) &= ~( 3<<(pin*2));
ICONFB1(port) |= ((gpio_mode >> GPIO_BOUT_SHIFT) & 3) << (pin * 2);
} else {
tmp = OCR2(port);
tmp &= ~( 3<<((pin-16)*2));
tmp |= (ocr << ((pin-16)*2));
OCR2(port) = tmp;
if( gpio_mode & GPIO_AOUT )
ICONFA2(port) &= ~( 3<<((pin-16)*2));
if( gpio_mode & GPIO_BOUT )
ICONFB2(port) &= ~( 3<<((pin-16)*2));
ICONFA2(port) &= ~( 3<<((pin-16)*2));
ICONFA2(port) |= ((gpio_mode >> GPIO_AOUT_SHIFT) & 3) << ((pin-16) * 2);
ICONFB2(port) &= ~( 3<<((pin-16)*2));
ICONFB2(port) |= ((gpio_mode >> GPIO_BOUT_SHIFT) & 3) << ((pin-16) * 2);
}
}


@ -55,7 +55,7 @@ static void __init
mx1ads_init(void)
{
#ifdef CONFIG_LEDS
imx_gpio_mode(GPIO_PORTA | GPIO_OUT | GPIO_GPIO | 2);
imx_gpio_mode(GPIO_PORTA | GPIO_OUT | 2);
#endif
platform_add_devices(devices, ARRAY_SIZE(devices));
}


@ -370,21 +370,21 @@ config CPU_BIG_ENDIAN
config CPU_ICACHE_DISABLE
bool "Disable I-Cache"
depends on CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020
depends on CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020 || CPU_V6
help
Say Y here to disable the processor instruction cache. Unless
you have a reason not to or are unsure, say N.
config CPU_DCACHE_DISABLE
bool "Disable D-Cache"
depends on CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020
depends on CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020 || CPU_V6
help
Say Y here to disable the processor data cache. Unless
you have a reason not to or are unsure, say N.
config CPU_DCACHE_WRITETHROUGH
bool "Force write through D-cache"
depends on (CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020) && !CPU_DCACHE_DISABLE
depends on (CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM1020 || CPU_V6) && !CPU_DCACHE_DISABLE
default y if CPU_ARM925T
help
Say Y here to use the data cache in writethrough mode. Unless you
@ -399,7 +399,7 @@ config CPU_CACHE_ROUND_ROBIN
config CPU_BPREDICT_DISABLE
bool "Disable branch prediction"
depends on CPU_ARM1020
depends on CPU_ARM1020 || CPU_V6
help
Say Y here to disable branch prediction. If unsure, say N.


@ -1016,6 +1016,11 @@ ia64_mca_cmc_int_handler(int cmc_irq, void *arg, struct pt_regs *ptregs)
cmc_polling_enabled = 1;
spin_unlock(&cmc_history_lock);
/* If we're being hit with CMC interrupts, we won't
* ever execute the schedule_work() below. Need to
* disable CMC interrupts on this processor now.
*/
ia64_mca_cmc_vector_disable(NULL);
schedule_work(&cmc_disable_work);
/*


@ -195,7 +195,7 @@ via_calibrate_decr(void)
;
dend = get_dec();
tb_ticks_per_jiffy = (dstart - dend) / (6 * (HZ/100));
tb_ticks_per_jiffy = (dstart - dend) / ((6 * HZ)/100);
tb_to_us = mulhwu_scale_factor(dstart - dend, 60000);
printk(KERN_INFO "via_calibrate_decr: ticks per jiffy = %u (%u ticks)\n",


@ -25,62 +25,6 @@ source "init/Kconfig"
menu "General machine setup"
config VT
bool
select INPUT
default y
---help---
If you say Y here, you will get support for terminal devices with
display and keyboard devices. These are called "virtual" because you
can run several virtual terminals (also called virtual consoles) on
one physical terminal. This is rather useful, for example one
virtual terminal can collect system messages and warnings, another
one can be used for a text-mode user session, and a third could run
an X session, all in parallel. Switching between virtual terminals
is done with certain key combinations, usually Alt-<function key>.
The setterm command ("man setterm") can be used to change the
properties (such as colors or beeping) of a virtual terminal. The
man page console_codes(4) ("man console_codes") contains the special
character sequences that can be used to change those properties
directly. The fonts used on virtual terminals can be changed with
the setfont ("man setfont") command and the key bindings are defined
with the loadkeys ("man loadkeys") command.
You need at least one virtual terminal device in order to make use
of your keyboard and monitor. Therefore, only people configuring an
embedded system would want to say N here in order to save some
memory; the only way to log into such a system is then via a serial
or network connection.
If unsure, say Y, or else you won't be able to do much with your new
shiny Linux system :-)
config VT_CONSOLE
bool
default y
---help---
The system console is the device which receives all kernel messages
and warnings and which allows logins in single user mode. If you
answer Y here, a virtual terminal (the device used to interact with
a physical terminal) can be used as system console. This is the most
common mode of operations, so you should say Y here unless you want
the kernel messages be output only to a serial port (in which case
you should say Y to "Console on serial port", below).
If you do say Y here, by default the currently visible virtual
terminal (/dev/tty0) will be used as system console. You can change
that with a kernel command line option such as "console=tty3" which
would use the third virtual terminal as system console. (Try "man
bootparam" or see the documentation of your boot loader (lilo or
loadlin) about how to pass options to the kernel at boot time.)
If unsure, say Y.
config HW_CONSOLE
bool
default y
config SMP
bool "Symmetric multi-processing support (does not work on sun4/sun4c)"
depends on BROKEN


@ -457,7 +457,7 @@ void __init time_init(void)
sbus_time_init();
}
extern __inline__ unsigned long do_gettimeoffset(void)
static inline unsigned long do_gettimeoffset(void)
{
return (*master_l10_counter >> 10) & 0x1fffff;
}


@ -260,7 +260,7 @@ static inline pte_t srmmu_pte_modify(pte_t pte, pgprot_t newprot)
{ return __pte((pte_val(pte) & SRMMU_CHG_MASK) | pgprot_val(newprot)); }
/* to find an entry in a top-level page table... */
extern inline pgd_t *srmmu_pgd_offset(struct mm_struct * mm, unsigned long address)
static inline pgd_t *srmmu_pgd_offset(struct mm_struct * mm, unsigned long address)
{ return mm->pgd + (address >> SRMMU_PGDIR_SHIFT); }
/* Find an entry in the second-level page table.. */


@ -97,8 +97,8 @@ do_fpdis:
faddd %f0, %f2, %f4
fmuld %f0, %f2, %f6
ldxa [%g3] ASI_DMMU, %g5
cplus_fptrap_insn_1:
sethi %hi(0), %g2
sethi %hi(sparc64_kern_sec_context), %g2
ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
stxa %g2, [%g3] ASI_DMMU
membar #Sync
add %g6, TI_FPREGS + 0xc0, %g2
@ -126,8 +126,8 @@ cplus_fptrap_insn_1:
fzero %f34
ldxa [%g3] ASI_DMMU, %g5
add %g6, TI_FPREGS, %g1
cplus_fptrap_insn_2:
sethi %hi(0), %g2
sethi %hi(sparc64_kern_sec_context), %g2
ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
stxa %g2, [%g3] ASI_DMMU
membar #Sync
add %g6, TI_FPREGS + 0x40, %g2
@ -153,8 +153,8 @@ cplus_fptrap_insn_2:
3: mov SECONDARY_CONTEXT, %g3
add %g6, TI_FPREGS, %g1
ldxa [%g3] ASI_DMMU, %g5
cplus_fptrap_insn_3:
sethi %hi(0), %g2
sethi %hi(sparc64_kern_sec_context), %g2
ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
stxa %g2, [%g3] ASI_DMMU
membar #Sync
mov 0x40, %g2
@ -319,8 +319,8 @@ do_fptrap_after_fsr:
stx %g3, [%g6 + TI_GSR]
mov SECONDARY_CONTEXT, %g3
ldxa [%g3] ASI_DMMU, %g5
cplus_fptrap_insn_4:
sethi %hi(0), %g2
sethi %hi(sparc64_kern_sec_context), %g2
ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
stxa %g2, [%g3] ASI_DMMU
membar #Sync
add %g6, TI_FPREGS, %g2
@ -341,33 +341,6 @@ cplus_fptrap_insn_4:
ba,pt %xcc, etrap
wr %g0, 0, %fprs
cplus_fptrap_1:
sethi %hi(CTX_CHEETAH_PLUS_CTX0), %g2
.globl cheetah_plus_patch_fpdis
cheetah_plus_patch_fpdis:
/* We configure the dTLB512_0 for 4MB pages and the
* dTLB512_1 for 8K pages when in context zero.
*/
sethi %hi(cplus_fptrap_1), %o0
lduw [%o0 + %lo(cplus_fptrap_1)], %o1
set cplus_fptrap_insn_1, %o2
stw %o1, [%o2]
flush %o2
set cplus_fptrap_insn_2, %o2
stw %o1, [%o2]
flush %o2
set cplus_fptrap_insn_3, %o2
stw %o1, [%o2]
flush %o2
set cplus_fptrap_insn_4, %o2
stw %o1, [%o2]
flush %o2
retl
nop
/* The registers for cross calls will be:
*
* DATA 0: [low 32-bits] Address of function to call, jmp to this


@ -68,12 +68,8 @@ etrap_irq:
wrpr %g3, 0, %otherwin
wrpr %g2, 0, %wstate
cplus_etrap_insn_1:
sethi %hi(0), %g3
sllx %g3, 32, %g3
cplus_etrap_insn_2:
sethi %hi(0), %g2
or %g3, %g2, %g3
sethi %hi(sparc64_kern_pri_context), %g2
ldx [%g2 + %lo(sparc64_kern_pri_context)], %g3
stxa %g3, [%l4] ASI_DMMU
flush %l6
wr %g0, ASI_AIUS, %asi
@ -215,12 +211,8 @@ scetrap: rdpr %pil, %g2
mov PRIMARY_CONTEXT, %l4
wrpr %g3, 0, %otherwin
wrpr %g2, 0, %wstate
cplus_etrap_insn_3:
sethi %hi(0), %g3
sllx %g3, 32, %g3
cplus_etrap_insn_4:
sethi %hi(0), %g2
or %g3, %g2, %g3
sethi %hi(sparc64_kern_pri_context), %g2
ldx [%g2 + %lo(sparc64_kern_pri_context)], %g3
stxa %g3, [%l4] ASI_DMMU
flush %l6
@ -264,38 +256,3 @@ cplus_etrap_insn_4:
#undef TASK_REGOFF
#undef ETRAP_PSTATE1
cplus_einsn_1:
sethi %uhi(CTX_CHEETAH_PLUS_NUC), %g3
cplus_einsn_2:
sethi %hi(CTX_CHEETAH_PLUS_CTX0), %g2
.globl cheetah_plus_patch_etrap
cheetah_plus_patch_etrap:
/* We configure the dTLB512_0 for 4MB pages and the
* dTLB512_1 for 8K pages when in context zero.
*/
sethi %hi(cplus_einsn_1), %o0
sethi %hi(cplus_etrap_insn_1), %o2
lduw [%o0 + %lo(cplus_einsn_1)], %o1
or %o2, %lo(cplus_etrap_insn_1), %o2
stw %o1, [%o2]
flush %o2
sethi %hi(cplus_etrap_insn_3), %o2
or %o2, %lo(cplus_etrap_insn_3), %o2
stw %o1, [%o2]
flush %o2
sethi %hi(cplus_einsn_2), %o0
sethi %hi(cplus_etrap_insn_2), %o2
lduw [%o0 + %lo(cplus_einsn_2)], %o1
or %o2, %lo(cplus_etrap_insn_2), %o2
stw %o1, [%o2]
flush %o2
sethi %hi(cplus_etrap_insn_4), %o2
or %o2, %lo(cplus_etrap_insn_4), %o2
stw %o1, [%o2]
flush %o2
retl
nop


@ -325,23 +325,7 @@ cheetah_tlb_fixup:
1: sethi %hi(tlb_type), %g1
stw %g2, [%g1 + %lo(tlb_type)]
BRANCH_IF_CHEETAH_PLUS_OR_FOLLOWON(g1,g7,1f)
ba,pt %xcc, 2f
nop
1: /* Patch context register writes to support nucleus page
* size correctly.
*/
call cheetah_plus_patch_etrap
nop
call cheetah_plus_patch_rtrap
nop
call cheetah_plus_patch_fpdis
nop
call cheetah_plus_patch_winfixup
nop
2: /* Patch copy/page operations to cheetah optimized versions. */
/* Patch copy/page operations to cheetah optimized versions. */
call cheetah_patch_copyops
nop
call cheetah_patch_copy_page
@ -484,20 +468,13 @@ spitfire_vpte_base:
call prom_set_trap_table
sethi %hi(sparc64_ttable_tl0), %o0
BRANCH_IF_CHEETAH_PLUS_OR_FOLLOWON(g2,g3,1f)
ba,pt %xcc, 2f
nop
1: /* Start using proper page size encodings in ctx register. */
sethi %uhi(CTX_CHEETAH_PLUS_NUC), %g3
/* Start using proper page size encodings in ctx register. */
sethi %hi(sparc64_kern_pri_context), %g3
ldx [%g3 + %lo(sparc64_kern_pri_context)], %g2
mov PRIMARY_CONTEXT, %g1
sllx %g3, 32, %g3
sethi %hi(CTX_CHEETAH_PLUS_CTX0), %g2
or %g3, %g2, %g3
stxa %g3, [%g1] ASI_DMMU
stxa %g2, [%g1] ASI_DMMU
membar #Sync
2:
rdpr %pstate, %o1
or %o1, PSTATE_IE, %o1
wrpr %o1, 0, %pstate


@ -256,9 +256,8 @@ rt_continue: ldx [%sp + PTREGS_OFF + PT_V9_G1], %g1
brnz,pn %l3, kern_rtt
mov PRIMARY_CONTEXT, %l7
ldxa [%l7 + %l7] ASI_DMMU, %l0
cplus_rtrap_insn_1:
sethi %hi(0), %l1
sllx %l1, 32, %l1
sethi %hi(sparc64_kern_pri_nuc_bits), %l1
ldx [%l1 + %lo(sparc64_kern_pri_nuc_bits)], %l1
or %l0, %l1, %l0
stxa %l0, [%l7] ASI_DMMU
flush %g6
@ -345,21 +344,3 @@ kern_fpucheck: ldub [%g6 + TI_FPDEPTH], %l5
wr %g0, FPRS_DU, %fprs
ba,pt %xcc, rt_continue
stb %l5, [%g6 + TI_FPDEPTH]
cplus_rinsn_1:
sethi %uhi(CTX_CHEETAH_PLUS_NUC), %l1
.globl cheetah_plus_patch_rtrap
cheetah_plus_patch_rtrap:
/* We configure the dTLB512_0 for 4MB pages and the
* dTLB512_1 for 8K pages when in context zero.
*/
sethi %hi(cplus_rinsn_1), %o0
sethi %hi(cplus_rtrap_insn_1), %o2
lduw [%o0 + %lo(cplus_rinsn_1)], %o1
or %o2, %lo(cplus_rtrap_insn_1), %o2
stw %o1, [%o2]
flush %o2
retl
nop


@ -187,17 +187,13 @@ int prom_callback(long *args)
}
if ((va >= KERNBASE) && (va < (KERNBASE + (4 * 1024 * 1024)))) {
unsigned long kernel_pctx = 0;
if (tlb_type == cheetah_plus)
kernel_pctx |= (CTX_CHEETAH_PLUS_NUC |
CTX_CHEETAH_PLUS_CTX0);
extern unsigned long sparc64_kern_pri_context;
/* Spitfire Errata #32 workaround */
__asm__ __volatile__("stxa %0, [%1] %2\n\t"
"flush %%g6"
: /* No outputs */
: "r" (kernel_pctx),
: "r" (sparc64_kern_pri_context),
"r" (PRIMARY_CONTEXT),
"i" (ASI_DMMU));


@ -336,20 +336,13 @@ do_unlock:
call init_irqwork_curcpu
nop
BRANCH_IF_CHEETAH_PLUS_OR_FOLLOWON(g2,g3,1f)
ba,pt %xcc, 2f
nop
1: /* Start using proper page size encodings in ctx register. */
sethi %uhi(CTX_CHEETAH_PLUS_NUC), %g3
/* Start using proper page size encodings in ctx register. */
sethi %hi(sparc64_kern_pri_context), %g3
ldx [%g3 + %lo(sparc64_kern_pri_context)], %g2
mov PRIMARY_CONTEXT, %g1
sllx %g3, 32, %g3
sethi %hi(CTX_CHEETAH_PLUS_CTX0), %g2
or %g3, %g2, %g3
stxa %g3, [%g1] ASI_DMMU
stxa %g2, [%g1] ASI_DMMU
membar #Sync
2:
rdpr %pstate, %o1
or %o1, PSTATE_IE, %o1
wrpr %o1, 0, %pstate


@ -16,23 +16,14 @@
.text
set_pcontext:
cplus_winfixup_insn_1:
sethi %hi(0), %l1
sethi %hi(sparc64_kern_pri_context), %l1
ldx [%l1 + %lo(sparc64_kern_pri_context)], %l1
mov PRIMARY_CONTEXT, %g1
sllx %l1, 32, %l1
cplus_winfixup_insn_2:
sethi %hi(0), %g2
or %l1, %g2, %l1
stxa %l1, [%g1] ASI_DMMU
flush %g6
retl
nop
cplus_wfinsn_1:
sethi %uhi(CTX_CHEETAH_PLUS_NUC), %l1
cplus_wfinsn_2:
sethi %hi(CTX_CHEETAH_PLUS_CTX0), %g2
.align 32
/* Here are the rules, pay attention.
@ -395,23 +386,3 @@ window_dax_from_user_common:
add %sp, PTREGS_OFF, %o0
ba,pt %xcc, rtrap
clr %l6
.globl cheetah_plus_patch_winfixup
cheetah_plus_patch_winfixup:
sethi %hi(cplus_wfinsn_1), %o0
sethi %hi(cplus_winfixup_insn_1), %o2
lduw [%o0 + %lo(cplus_wfinsn_1)], %o1
or %o2, %lo(cplus_winfixup_insn_1), %o2
stw %o1, [%o2]
flush %o2
sethi %hi(cplus_wfinsn_2), %o0
sethi %hi(cplus_winfixup_insn_2), %o2
lduw [%o0 + %lo(cplus_wfinsn_2)], %o1
or %o2, %lo(cplus_winfixup_insn_2), %o2
stw %o1, [%o2]
flush %o2
retl
nop


@ -133,6 +133,12 @@ extern unsigned int sparc_ramdisk_size;
struct page *mem_map_zero __read_mostly;
unsigned int sparc64_highest_unlocked_tlb_ent __read_mostly;
unsigned long sparc64_kern_pri_context __read_mostly;
unsigned long sparc64_kern_pri_nuc_bits __read_mostly;
unsigned long sparc64_kern_sec_context __read_mostly;
int bigkernel = 0;
/* XXX Tune this... */
@ -362,6 +368,7 @@ struct linux_prom_translation {
unsigned long data;
};
static struct linux_prom_translation prom_trans[512] __initdata;
static unsigned int prom_trans_ents __initdata;
extern unsigned long prom_boot_page;
extern void prom_remap(unsigned long physpage, unsigned long virtpage, int mmu_ihandle);
@ -375,57 +382,7 @@ unsigned long kern_locked_tte_data;
unsigned long prom_pmd_phys __read_mostly;
unsigned int swapper_pgd_zero __read_mostly;
/* Allocate power-of-2 aligned chunks from the end of the
* kernel image. Return physical address.
*/
static inline unsigned long early_alloc_phys(unsigned long size)
{
unsigned long base;
BUILD_BUG_ON(size & (size - 1));
kern_size = (kern_size + (size - 1)) & ~(size - 1);
base = kern_base + kern_size;
kern_size += size;
return base;
}
static inline unsigned long load_phys32(unsigned long pa)
{
unsigned long val;
__asm__ __volatile__("lduwa [%1] %2, %0"
: "=&r" (val)
: "r" (pa), "i" (ASI_PHYS_USE_EC));
return val;
}
static inline unsigned long load_phys64(unsigned long pa)
{
unsigned long val;
__asm__ __volatile__("ldxa [%1] %2, %0"
: "=&r" (val)
: "r" (pa), "i" (ASI_PHYS_USE_EC));
return val;
}
static inline void store_phys32(unsigned long pa, unsigned long val)
{
__asm__ __volatile__("stwa %0, [%1] %2"
: /* no outputs */
: "r" (val), "r" (pa), "i" (ASI_PHYS_USE_EC));
}
static inline void store_phys64(unsigned long pa, unsigned long val)
{
__asm__ __volatile__("stxa %0, [%1] %2"
: /* no outputs */
: "r" (val), "r" (pa), "i" (ASI_PHYS_USE_EC));
}
static pmd_t *prompmd __read_mostly;
#define BASE_PAGE_SIZE 8192
@ -435,34 +392,28 @@ static inline void store_phys64(unsigned long pa, unsigned long val)
*/
unsigned long prom_virt_to_phys(unsigned long promva, int *error)
{
unsigned long pmd_phys = (prom_pmd_phys +
((promva >> 23) & 0x7ff) * sizeof(pmd_t));
unsigned long pte_phys;
pmd_t pmd_ent;
pte_t pte_ent;
pmd_t *pmdp = prompmd + ((promva >> 23) & 0x7ff);
pte_t *ptep;
unsigned long base;
pmd_val(pmd_ent) = load_phys32(pmd_phys);
if (pmd_none(pmd_ent)) {
if (pmd_none(*pmdp)) {
if (error)
*error = 1;
return 0;
}
pte_phys = (unsigned long)pmd_val(pmd_ent) << 11UL;
pte_phys += ((promva >> 13) & 0x3ff) * sizeof(pte_t);
pte_val(pte_ent) = load_phys64(pte_phys);
if (!pte_present(pte_ent)) {
ptep = (pte_t *)__pmd_page(*pmdp) + ((promva >> 13) & 0x3ff);
if (!pte_present(*ptep)) {
if (error)
*error = 1;
return 0;
}
if (error) {
*error = 0;
return pte_val(pte_ent);
return pte_val(*ptep);
}
base = pte_val(pte_ent) & _PAGE_PADDR;
return (base + (promva & (BASE_PAGE_SIZE - 1)));
base = pte_val(*ptep) & _PAGE_PADDR;
return base + (promva & (BASE_PAGE_SIZE - 1));
}
/* The obp translations are saved based on 8k pagesize, since obp can
@ -475,25 +426,20 @@ static void __init build_obp_range(unsigned long start, unsigned long end, unsig
unsigned long vaddr;
for (vaddr = start; vaddr < end; vaddr += BASE_PAGE_SIZE) {
unsigned long val, pte_phys, pmd_phys;
pmd_t pmd_ent;
int i;
unsigned long val;
pmd_t *pmd;
pte_t *pte;
pmd_phys = (prom_pmd_phys +
(((vaddr >> 23) & 0x7ff) * sizeof(pmd_t)));
pmd_val(pmd_ent) = load_phys32(pmd_phys);
if (pmd_none(pmd_ent)) {
pte_phys = early_alloc_phys(BASE_PAGE_SIZE);
for (i = 0; i < BASE_PAGE_SIZE / sizeof(pte_t); i++)
store_phys64(pte_phys+i*sizeof(pte_t),0);
pmd_val(pmd_ent) = pte_phys >> 11UL;
store_phys32(pmd_phys, pmd_val(pmd_ent));
pmd = prompmd + ((vaddr >> 23) & 0x7ff);
if (pmd_none(*pmd)) {
pte = __alloc_bootmem(BASE_PAGE_SIZE, BASE_PAGE_SIZE,
PAGE_SIZE);
if (!pte)
prom_halt();
memset(pte, 0, BASE_PAGE_SIZE);
pmd_set(pmd, pte);
}
pte_phys = (unsigned long)pmd_val(pmd_ent) << 11UL;
pte_phys += (((vaddr >> 13) & 0x3ff) * sizeof(pte_t));
pte = (pte_t *) __pmd_page(*pmd) + ((vaddr >> 13) & 0x3ff);
val = data;
@ -501,7 +447,8 @@ static void __init build_obp_range(unsigned long start, unsigned long end, unsig
if (tlb_type == spitfire)
val &= ~0x0003fe0000000000UL;
store_phys64(pte_phys, val | _PAGE_MODIFIED);
set_pte_at(&init_mm, vaddr, pte,
__pte(val | _PAGE_MODIFIED));
data += BASE_PAGE_SIZE;
}
@ -514,13 +461,17 @@ static inline int in_obp_range(unsigned long vaddr)
}
#define OBP_PMD_SIZE 2048
static void __init build_obp_pgtable(int prom_trans_ents)
static void __init build_obp_pgtable(void)
{
unsigned long i;
prom_pmd_phys = early_alloc_phys(OBP_PMD_SIZE);
for (i = 0; i < OBP_PMD_SIZE; i += 4)
store_phys32(prom_pmd_phys + i, 0);
prompmd = __alloc_bootmem(OBP_PMD_SIZE, OBP_PMD_SIZE, PAGE_SIZE);
if (!prompmd)
prom_halt();
memset(prompmd, 0, OBP_PMD_SIZE);
prom_pmd_phys = __pa(prompmd);
for (i = 0; i < prom_trans_ents; i++) {
unsigned long start, end;
@ -540,7 +491,7 @@ static void __init build_obp_pgtable(int prom_trans_ents)
/* Read OBP translations property into 'prom_trans[]'.
* Return the number of entries.
*/
static int __init read_obp_translations(void)
static void __init read_obp_translations(void)
{
int n, node;
@ -561,8 +512,10 @@ static int __init read_obp_translations(void)
prom_printf("prom_mappings: Couldn't get property.\n");
prom_halt();
}
n = n / sizeof(struct linux_prom_translation);
return n;
prom_trans_ents = n;
}
static void __init remap_kernel(void)
@ -582,28 +535,38 @@ static void __init remap_kernel(void)
prom_dtlb_load(tlb_ent, tte_data, tte_vaddr);
prom_itlb_load(tlb_ent, tte_data, tte_vaddr);
if (bigkernel) {
prom_dtlb_load(tlb_ent - 1,
tlb_ent -= 1;
prom_dtlb_load(tlb_ent,
tte_data + 0x400000,
tte_vaddr + 0x400000);
prom_itlb_load(tlb_ent - 1,
prom_itlb_load(tlb_ent,
tte_data + 0x400000,
tte_vaddr + 0x400000);
}
sparc64_highest_unlocked_tlb_ent = tlb_ent - 1;
if (tlb_type == cheetah_plus) {
sparc64_kern_pri_context = (CTX_CHEETAH_PLUS_CTX0 |
CTX_CHEETAH_PLUS_NUC);
sparc64_kern_pri_nuc_bits = CTX_CHEETAH_PLUS_NUC;
sparc64_kern_sec_context = CTX_CHEETAH_PLUS_CTX0;
}
}
static void __init inherit_prom_mappings(void)
{
int n;
n = read_obp_translations();
build_obp_pgtable(n);
static void __init inherit_prom_mappings_pre(void)
{
read_obp_translations();
/* Now fixup OBP's idea about where we really are mapped. */
prom_printf("Remapping the kernel... ");
remap_kernel();
prom_printf("done.\n");
}
static void __init inherit_prom_mappings_post(void)
{
build_obp_pgtable();
register_prom_callbacks();
}
@ -788,8 +751,8 @@ void inherit_locked_prom_mappings(int save_p)
}
}
if (tlb_type == spitfire) {
int high = SPITFIRE_HIGHEST_LOCKED_TLBENT - bigkernel;
for (i = 0; i < high; i++) {
int high = sparc64_highest_unlocked_tlb_ent;
for (i = 0; i <= high; i++) {
unsigned long data;
/* Spitfire Errata #32 workaround */
@ -877,9 +840,9 @@ void inherit_locked_prom_mappings(int save_p)
}
}
} else if (tlb_type == cheetah || tlb_type == cheetah_plus) {
int high = CHEETAH_HIGHEST_LOCKED_TLBENT - bigkernel;
int high = sparc64_highest_unlocked_tlb_ent;
for (i = 0; i < high; i++) {
for (i = 0; i <= high; i++) {
unsigned long data;
data = cheetah_get_ldtlb_data(i);
@ -1556,8 +1519,7 @@ void __init paging_init(void)
swapper_pgd_zero = pgd_val(swapper_pg_dir[0]);
/* Inherit non-locked OBP mappings. */
inherit_prom_mappings();
inherit_prom_mappings_pre();
/* Ok, we can use our TLB miss and window trap handlers safely.
* We need to do a quick peek here to see if we are on StarFire
@ -1568,15 +1530,23 @@ void __init paging_init(void)
extern void setup_tba(int);
setup_tba(this_is_starfire);
}
inherit_locked_prom_mappings(1);
__flush_tlb_all();
/* Everything from this point forward, until we are done with
* inherit_prom_mappings_post(), must complete successfully
* without calling into the firmware. The firmware page tables
* have not been built, but we are running on the Linux kernel's
* trap table.
*/
/* Setup bootmem... */
pages_avail = 0;
last_valid_pfn = end_pfn = bootmem_init(&pages_avail);
inherit_prom_mappings_post();
inherit_locked_prom_mappings(1);
#ifdef CONFIG_DEBUG_PAGEALLOC
kernel_physical_mapping_init();
#endif


@ -15,16 +15,6 @@ extern void save_registers(int pid, union uml_pt_regs *regs);
extern void restore_registers(int pid, union uml_pt_regs *regs);
extern void init_registers(int pid);
extern void get_safe_registers(unsigned long * regs);
extern void get_thread_regs(union uml_pt_regs *uml_regs, void *buffer);
#endif
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/


@ -218,10 +218,6 @@ struct syscall_args {
case RBP: UPT_RBP(regs) = __upt_val; break; \
case ORIG_RAX: UPT_ORIG_RAX(regs) = __upt_val; break; \
case CS: UPT_CS(regs) = __upt_val; break; \
case DS: UPT_DS(regs) = __upt_val; break; \
case ES: UPT_ES(regs) = __upt_val; break; \
case FS: UPT_FS(regs) = __upt_val; break; \
case GS: UPT_GS(regs) = __upt_val; break; \
case EFLAGS: UPT_EFLAGS(regs) = __upt_val; break; \
default : \
panic("Bad register in UPT_SET : %d\n", reg); \


@ -62,13 +62,7 @@ void show_stack(struct task_struct *task, unsigned long *esp)
if (esp == NULL) {
if (task != current && task != NULL) {
/* XXX: Isn't this bogus? I.e. isn't this the
* *userspace* stack of this task? If not so, use this
* even when task == current (as in i386).
*/
esp = (unsigned long *) KSTK_ESP(task);
/* Which one? No actual difference - just coding style.*/
//esp = (unsigned long *) PT_REGS_IP(&task->thread.regs);
} else {
esp = (unsigned long *) &esp;
}
@ -84,5 +78,5 @@ void show_stack(struct task_struct *task, unsigned long *esp)
}
printk("Call Trace: \n");
show_trace(current, esp);
show_trace(task, esp);
}


@ -5,6 +5,7 @@
#include <errno.h>
#include <string.h>
#include <setjmp.h>
#include "sysdep/ptrace_user.h"
#include "sysdep/ptrace.h"
#include "uml-config.h"
@ -126,13 +127,11 @@ void get_safe_registers(unsigned long *regs)
memcpy(regs, exec_regs, HOST_FRAME_SIZE * sizeof(unsigned long));
}
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/
void get_thread_regs(union uml_pt_regs *uml_regs, void *buffer)
{
struct __jmp_buf_tag *jmpbuf = buffer;
UPT_SET(uml_regs, EIP, jmpbuf->__jmpbuf[JB_PC]);
UPT_SET(uml_regs, UESP, jmpbuf->__jmpbuf[JB_SP]);
UPT_SET(uml_regs, EBP, jmpbuf->__jmpbuf[JB_BP]);
}


@ -5,6 +5,7 @@
#include <errno.h>
#include <string.h>
#include <setjmp.h>
#include "ptrace_user.h"
#include "uml-config.h"
#include "skas_ptregs.h"
@ -74,13 +75,11 @@ void get_safe_registers(unsigned long *regs)
memcpy(regs, exec_regs, HOST_FRAME_SIZE * sizeof(unsigned long));
}
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/
void get_thread_regs(union uml_pt_regs *uml_regs, void *buffer)
{
struct __jmp_buf_tag *jmpbuf = buffer;
UPT_SET(uml_regs, RIP, jmpbuf->__jmpbuf[JB_PC]);
UPT_SET(uml_regs, RSP, jmpbuf->__jmpbuf[JB_RSP]);
UPT_SET(uml_regs, RBP, jmpbuf->__jmpbuf[JB_RBP]);
}


@ -88,9 +88,7 @@ void show_trace(struct task_struct* task, unsigned long * stack)
task = current;
if (task != current) {
//ebp = (unsigned long) KSTK_EBP(task);
/* Which one? No actual difference - just coding style.*/
ebp = (unsigned long) PT_REGS_EBP(&task->thread.regs);
ebp = (unsigned long) KSTK_EBP(task);
} else {
asm ("movl %%ebp, %0" : "=r" (ebp) : );
}
@ -99,15 +97,6 @@ void show_trace(struct task_struct* task, unsigned long * stack)
((unsigned long)stack & (~(THREAD_SIZE - 1)));
print_context_stack(context, stack, ebp);
/*while (((long) stack & (THREAD_SIZE-1)) != 0) {
addr = *stack;
if (__kernel_text_address(addr)) {
printk("%08lx: [<%08lx>]", (unsigned long) stack, addr);
print_symbol(" %s", addr);
printk("\n");
}
stack++;
}*/
printk("\n");
}


@ -46,7 +46,7 @@ void foo(void)
OFFSET(HOST_SC_FP_ST, _fpstate, _st);
OFFSET(HOST_SC_FXSR_ENV, _fpstate, _fxsr_env);
DEFINE_LONGS(HOST_FRAME_SIZE, FRAME_SIZE);
DEFINE(HOST_FRAME_SIZE, FRAME_SIZE);
DEFINE_LONGS(HOST_FP_SIZE, sizeof(struct user_i387_struct));
DEFINE_LONGS(HOST_XFP_SIZE, sizeof(struct user_fxsr_struct));


@ -270,26 +270,26 @@ ENTRY(level3_kernel_pgt)
.org 0x4000
ENTRY(level2_ident_pgt)
/* 40MB for bootup. */
.quad 0x0000000000000183
.quad 0x0000000000200183
.quad 0x0000000000400183
.quad 0x0000000000600183
.quad 0x0000000000800183
.quad 0x0000000000A00183
.quad 0x0000000000C00183
.quad 0x0000000000E00183
.quad 0x0000000001000183
.quad 0x0000000001200183
.quad 0x0000000001400183
.quad 0x0000000001600183
.quad 0x0000000001800183
.quad 0x0000000001A00183
.quad 0x0000000001C00183
.quad 0x0000000001E00183
.quad 0x0000000002000183
.quad 0x0000000002200183
.quad 0x0000000002400183
.quad 0x0000000002600183
.quad 0x0000000000000083
.quad 0x0000000000200083
.quad 0x0000000000400083
.quad 0x0000000000600083
.quad 0x0000000000800083
.quad 0x0000000000A00083
.quad 0x0000000000C00083
.quad 0x0000000000E00083
.quad 0x0000000001000083
.quad 0x0000000001200083
.quad 0x0000000001400083
.quad 0x0000000001600083
.quad 0x0000000001800083
.quad 0x0000000001A00083
.quad 0x0000000001C00083
.quad 0x0000000001E00083
.quad 0x0000000002000083
.quad 0x0000000002200083
.quad 0x0000000002400083
.quad 0x0000000002600083
/* Temporary mappings for the super early allocator in arch/x86_64/mm/init.c */
.globl temp_boot_pmds
temp_boot_pmds:


@ -967,13 +967,12 @@ static int __cpuinit intel_num_cpu_cores(struct cpuinfo_x86 *c)
static void srat_detect_node(void)
{
#ifdef CONFIG_NUMA
unsigned apicid, node;
unsigned node;
int cpu = smp_processor_id();
/* Don't do the funky fallback heuristics the AMD version employs
for now. */
apicid = phys_proc_id[cpu];
node = apicid_to_node[apicid];
node = apicid_to_node[hard_smp_processor_id()];
if (node == NUMA_NO_NODE)
node = 0;
cpu_to_node[cpu] = node;


@ -178,14 +178,12 @@ fore200e_irq_itoa(int irq)
static void*
fore200e_kmalloc(int size, int flags)
fore200e_kmalloc(int size, unsigned int __nocast flags)
{
void* chunk = kmalloc(size, flags);
void *chunk = kzalloc(size, flags);
if (chunk)
memset(chunk, 0x00, size);
else
printk(FORE200E "kmalloc() failed, requested size = %d, flags = 0x%x\n", size, flags);
if (!chunk)
printk(FORE200E "kmalloc() failed, requested size = %d, flags = 0x%x\n", size, flags);
return chunk;
}


@ -47,7 +47,7 @@ MODULE_PARM_DESC(cards_limit, "Maximum number of graphics cards");
MODULE_PARM_DESC(debug, "Enable debug output");
module_param_named(cards_limit, drm_cards_limit, int, 0444);
module_param_named(debug, drm_debug, int, 0666);
module_param_named(debug, drm_debug, int, 0600);
drm_head_t **drm_heads;
struct drm_sysfs_class *drm_class;


@ -69,7 +69,8 @@ int cn_already_initialized = 0;
* a new message.
*
*/
int cn_netlink_send(struct cn_msg *msg, u32 __group, int gfp_mask)
int cn_netlink_send(struct cn_msg *msg, u32 __group,
unsigned int __nocast gfp_mask)
{
struct cn_callback_entry *__cbq;
unsigned int size;


@ -503,6 +503,25 @@ err_free_aux:
return err;
}
static void mthca_free_icms(struct mthca_dev *mdev)
{
u8 status;
mthca_free_icm_table(mdev, mdev->mcg_table.table);
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
mthca_free_icm_table(mdev, mdev->srq_table.table);
mthca_free_icm_table(mdev, mdev->cq_table.table);
mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
mthca_unmap_eq_icm(mdev);
mthca_UNMAP_ICM_AUX(mdev, &status);
mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
}
static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
{
struct mthca_dev_lim dev_lim;
@ -580,18 +599,7 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
return 0;
err_free_icm:
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
mthca_free_icm_table(mdev, mdev->srq_table.table);
mthca_free_icm_table(mdev, mdev->cq_table.table);
mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
mthca_unmap_eq_icm(mdev);
mthca_UNMAP_ICM_AUX(mdev, &status);
mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
mthca_free_icms(mdev);
err_stop_fw:
mthca_UNMAP_FA(mdev, &status);
@ -611,18 +619,7 @@ static void mthca_close_hca(struct mthca_dev *mdev)
mthca_CLOSE_HCA(mdev, 0, &status);
if (mthca_is_memfree(mdev)) {
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
mthca_free_icm_table(mdev, mdev->srq_table.table);
mthca_free_icm_table(mdev, mdev->cq_table.table);
mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
mthca_unmap_eq_icm(mdev);
mthca_UNMAP_ICM_AUX(mdev, &status);
mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
mthca_free_icms(mdev);
mthca_UNMAP_FA(mdev, &status);
mthca_free_icm(mdev, mdev->fw.arbel.fw_icm);


@ -474,7 +474,7 @@ err:
spin_unlock(&priv->lock);
}
static void path_lookup(struct sk_buff *skb, struct net_device *dev)
static void ipoib_path_lookup(struct sk_buff *skb, struct net_device *dev)
{
struct ipoib_dev_priv *priv = netdev_priv(skb->dev);
@ -569,7 +569,7 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (skb->dst && skb->dst->neighbour) {
if (unlikely(!*to_ipoib_neigh(skb->dst->neighbour))) {
path_lookup(skb, dev);
ipoib_path_lookup(skb, dev);
goto out;
}


@ -642,8 +642,6 @@ static void __exit ucb1x00_exit(void)
module_init(ucb1x00_init);
module_exit(ucb1x00_exit);
EXPORT_SYMBOL(ucb1x00_class);
EXPORT_SYMBOL(ucb1x00_io_set_dir);
EXPORT_SYMBOL(ucb1x00_io_write);
EXPORT_SYMBOL(ucb1x00_io_read);


@ -106,8 +106,6 @@ struct ucb1x00_irq {
void (*fn)(int, void *);
};
extern struct class ucb1x00_class;
struct ucb1x00 {
spinlock_t lock;
struct mcp *mcp;


@ -1655,7 +1655,7 @@ config LAN_SAA9730
config NET_POCKET
bool "Pocket and portable adapters"
depends on NET_ETHERNET && ISA
depends on NET_ETHERNET && PARPORT
---help---
Cute little network (Ethernet) devices which attach to the parallel
port ("pocket adapters"), commonly used with laptops. If you have
@ -1679,7 +1679,7 @@ config NET_POCKET
config ATP
tristate "AT-LAN-TEC/RealTek pocket adapter support"
depends on NET_POCKET && ISA && X86
depends on NET_POCKET && PARPORT && X86
select CRC32
---help---
This is a network (Ethernet) device which attaches to your parallel
@ -1694,7 +1694,7 @@ config ATP
config DE600
tristate "D-Link DE600 pocket adapter support"
depends on NET_POCKET && ISA
depends on NET_POCKET && PARPORT
---help---
This is a network (Ethernet) device which attaches to your parallel
port. Read <file:Documentation/networking/DLINK.txt> as well as the
@ -1709,7 +1709,7 @@ config DE600
config DE620
tristate "D-Link DE620 pocket adapter support"
depends on NET_POCKET && ISA
depends on NET_POCKET && PARPORT
---help---
This is a network (Ethernet) device which attaches to your parallel
port. Read <file:Documentation/networking/DLINK.txt> as well as the


@ -487,6 +487,8 @@
* * Added xmit_hash_policy_layer34()
* - Modified by Jay Vosburgh <fubar@us.ibm.com> to also support mode 4.
* Set version to 2.6.3.
* 2005/09/26 - Jay Vosburgh <fubar@us.ibm.com>
* - Removed backwards compatibility for old ifenslaves. Version 2.6.4.
*/
//#define BONDING_DEBUG 1
@ -595,14 +597,7 @@ static int arp_ip_count = 0;
static int bond_mode = BOND_MODE_ROUNDROBIN;
static int xmit_hashtype= BOND_XMIT_POLICY_LAYER2;
static int lacp_fast = 0;
static int app_abi_ver = 0;
static int orig_app_abi_ver = -1; /* This is used to save the first ABI version
* we receive from the application. Once set,
* it won't be changed, and the module will
* refuse to enslave/release interfaces if the
* command comes from an application using
* another ABI version.
*/
struct bond_parm_tbl {
char *modename;
int mode;
@ -1294,12 +1289,13 @@ static void bond_mc_list_destroy(struct bonding *bond)
/*
* Copy all the Multicast addresses from src to the bonding device dst
*/
static int bond_mc_list_copy(struct dev_mc_list *mc_list, struct bonding *bond, int gpf_flag)
static int bond_mc_list_copy(struct dev_mc_list *mc_list, struct bonding *bond,
unsigned int __nocast gfp_flag)
{
struct dev_mc_list *dmi, *new_dmi;
for (dmi = mc_list; dmi; dmi = dmi->next) {
new_dmi = kmalloc(sizeof(struct dev_mc_list), gpf_flag);
new_dmi = kmalloc(sizeof(struct dev_mc_list), gfp_flag);
if (!new_dmi) {
/* FIXME: Potential memory leak !!! */
@ -1702,51 +1698,29 @@ static int bond_enslave(struct net_device *bond_dev, struct net_device *slave_de
}
}
if (app_abi_ver >= 1) {
/* The application is using an ABI, which requires the
* slave interface to be closed.
*/
if ((slave_dev->flags & IFF_UP)) {
printk(KERN_ERR DRV_NAME
": Error: %s is up\n",
slave_dev->name);
res = -EPERM;
goto err_undo_flags;
}
/*
* Old ifenslave binaries are no longer supported. These can
* be identified with moderate accuracy by the state of the slave:
* the current ifenslave will set the interface down prior to
* enslaving it; the old ifenslave will not.
*/
if ((slave_dev->flags & IFF_UP)) {
printk(KERN_ERR DRV_NAME ": %s is up. "
"This may be due to an out of date ifenslave.\n",
slave_dev->name);
res = -EPERM;
goto err_undo_flags;
}
if (slave_dev->set_mac_address == NULL) {
printk(KERN_ERR DRV_NAME
": Error: The slave device you specified does "
"not support setting the MAC address.\n");
printk(KERN_ERR
"Your kernel likely does not support slave "
"devices.\n");
if (slave_dev->set_mac_address == NULL) {
printk(KERN_ERR DRV_NAME
": Error: The slave device you specified does "
"not support setting the MAC address.\n");
printk(KERN_ERR
"Your kernel likely does not support slave devices.\n");
res = -EOPNOTSUPP;
goto err_undo_flags;
}
} else {
/* The application is not using an ABI, which requires the
* slave interface to be open.
*/
if (!(slave_dev->flags & IFF_UP)) {
printk(KERN_ERR DRV_NAME
": Error: %s is not running\n",
slave_dev->name);
res = -EINVAL;
goto err_undo_flags;
}
if ((bond->params.mode == BOND_MODE_8023AD) ||
(bond->params.mode == BOND_MODE_TLB) ||
(bond->params.mode == BOND_MODE_ALB)) {
printk(KERN_ERR DRV_NAME
": Error: to use %s mode, you must upgrade "
"ifenslave.\n",
bond_mode_name(bond->params.mode));
res = -EOPNOTSUPP;
goto err_undo_flags;
}
res = -EOPNOTSUPP;
goto err_undo_flags;
}
new_slave = kmalloc(sizeof(struct slave), GFP_KERNEL);
@ -1762,41 +1736,36 @@ static int bond_enslave(struct net_device *bond_dev, struct net_device *slave_de
*/
new_slave->original_flags = slave_dev->flags;
if (app_abi_ver >= 1) {
/* save slave's original ("permanent") mac address for
* modes that needs it, and for restoring it upon release,
* and then set it to the master's address
*/
memcpy(new_slave->perm_hwaddr, slave_dev->dev_addr, ETH_ALEN);
/*
* Save slave's original ("permanent") mac address for modes
* that need it, and for restoring it upon release, and then
* set it to the master's address
*/
memcpy(new_slave->perm_hwaddr, slave_dev->dev_addr, ETH_ALEN);
/* set slave to master's mac address
* The application already set the master's
* mac address to that of the first slave
*/
memcpy(addr.sa_data, bond_dev->dev_addr, bond_dev->addr_len);
addr.sa_family = slave_dev->type;
res = dev_set_mac_address(slave_dev, &addr);
if (res) {
dprintk("Error %d calling set_mac_address\n", res);
goto err_free;
}
/*
* Set slave to master's mac address. The application already
* set the master's mac address to that of the first slave
*/
memcpy(addr.sa_data, bond_dev->dev_addr, bond_dev->addr_len);
addr.sa_family = slave_dev->type;
res = dev_set_mac_address(slave_dev, &addr);
if (res) {
dprintk("Error %d calling set_mac_address\n", res);
goto err_free;
}
/* open the slave since the application closed it */
res = dev_open(slave_dev);
if (res) {
dprintk("Openning slave %s failed\n", slave_dev->name);
goto err_restore_mac;
}
/* open the slave since the application closed it */
res = dev_open(slave_dev);
if (res) {
dprintk("Openning slave %s failed\n", slave_dev->name);
goto err_restore_mac;
}
res = netdev_set_master(slave_dev, bond_dev);
if (res) {
dprintk("Error %d calling netdev_set_master\n", res);
if (app_abi_ver < 1) {
goto err_free;
} else {
goto err_close;
}
goto err_close;
}
new_slave->dev = slave_dev;
@ -1997,39 +1966,6 @@ static int bond_enslave(struct net_device *bond_dev, struct net_device *slave_de
write_unlock_bh(&bond->lock);
if (app_abi_ver < 1) {
/*
* !!! This is to support old versions of ifenslave.
* We can remove this in 2.5 because our ifenslave takes
* care of this for us.
* We check to see if the master has a mac address yet.
* If not, we'll give it the mac address of our slave device.
*/
int ndx = 0;
for (ndx = 0; ndx < bond_dev->addr_len; ndx++) {
dprintk("Checking ndx=%d of bond_dev->dev_addr\n",
ndx);
if (bond_dev->dev_addr[ndx] != 0) {
dprintk("Found non-zero byte at ndx=%d\n",
ndx);
break;
}
}
if (ndx == bond_dev->addr_len) {
/*
* We got all the way through the address and it was
* all 0's.
*/
dprintk("%s doesn't have a MAC address yet. \n",
bond_dev->name);
dprintk("Going to give assign it from %s.\n",
slave_dev->name);
bond_sethwaddr(bond_dev, slave_dev);
}
}
printk(KERN_INFO DRV_NAME
": %s: enslaving %s as a%s interface with a%s link.\n",
bond_dev->name, slave_dev->name,
@ -2227,12 +2163,10 @@ static int bond_release(struct net_device *bond_dev, struct net_device *slave_de
/* close slave before restoring its mac address */
dev_close(slave_dev);
if (app_abi_ver >= 1) {
/* restore original ("permanent") mac address */
memcpy(addr.sa_data, slave->perm_hwaddr, ETH_ALEN);
addr.sa_family = slave_dev->type;
dev_set_mac_address(slave_dev, &addr);
}
/* restore original ("permanent") mac address */
memcpy(addr.sa_data, slave->perm_hwaddr, ETH_ALEN);
addr.sa_family = slave_dev->type;
dev_set_mac_address(slave_dev, &addr);
/* restore the original state of the
* IFF_NOARP flag that might have been
@ -2320,12 +2254,10 @@ static int bond_release_all(struct net_device *bond_dev)
/* close slave before restoring its mac address */
dev_close(slave_dev);
if (app_abi_ver >= 1) {
/* restore original ("permanent") mac address*/
memcpy(addr.sa_data, slave->perm_hwaddr, ETH_ALEN);
addr.sa_family = slave_dev->type;
dev_set_mac_address(slave_dev, &addr);
}
/* restore original ("permanent") mac address*/
memcpy(addr.sa_data, slave->perm_hwaddr, ETH_ALEN);
addr.sa_family = slave_dev->type;
dev_set_mac_address(slave_dev, &addr);
/* restore the original state of the IFF_NOARP flag that might have
* been set by bond_set_slave_inactive_flags()
@ -2423,57 +2355,6 @@ static int bond_ioctl_change_active(struct net_device *bond_dev, struct net_devi
return res;
}
static int bond_ethtool_ioctl(struct net_device *bond_dev, struct ifreq *ifr)
{
struct ethtool_drvinfo info;
void __user *addr = ifr->ifr_data;
uint32_t cmd;
if (get_user(cmd, (uint32_t __user *)addr)) {
return -EFAULT;
}
switch (cmd) {
case ETHTOOL_GDRVINFO:
if (copy_from_user(&info, addr, sizeof(info))) {
return -EFAULT;
}
if (strcmp(info.driver, "ifenslave") == 0) {
int new_abi_ver;
char *endptr;
new_abi_ver = simple_strtoul(info.fw_version,
&endptr, 0);
if (*endptr) {
printk(KERN_ERR DRV_NAME
": Error: got invalid ABI "
"version from application\n");
return -EINVAL;
}
if (orig_app_abi_ver == -1) {
orig_app_abi_ver = new_abi_ver;
}
app_abi_ver = new_abi_ver;
}
strncpy(info.driver, DRV_NAME, 32);
strncpy(info.version, DRV_VERSION, 32);
snprintf(info.fw_version, 32, "%d", BOND_ABI_VERSION);
if (copy_to_user(addr, &info, sizeof(info))) {
return -EFAULT;
}
return 0;
default:
return -EOPNOTSUPP;
}
}
static int bond_info_query(struct net_device *bond_dev, struct ifbond *info)
{
struct bonding *bond = bond_dev->priv;
@ -2776,7 +2657,7 @@ static u32 bond_glean_dev_ip(struct net_device *dev)
return 0;
rcu_read_lock();
idev = __in_dev_get(dev);
idev = __in_dev_get_rcu(dev);
if (!idev)
goto out;
@ -3442,16 +3323,11 @@ static void bond_info_show_slave(struct seq_file *seq, const struct slave *slave
seq_printf(seq, "Link Failure Count: %d\n",
slave->link_failure_count);
if (app_abi_ver >= 1) {
seq_printf(seq,
"Permanent HW addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
slave->perm_hwaddr[0],
slave->perm_hwaddr[1],
slave->perm_hwaddr[2],
slave->perm_hwaddr[3],
slave->perm_hwaddr[4],
slave->perm_hwaddr[5]);
}
seq_printf(seq,
"Permanent HW addr: %02x:%02x:%02x:%02x:%02x:%02x\n",
slave->perm_hwaddr[0], slave->perm_hwaddr[1],
slave->perm_hwaddr[2], slave->perm_hwaddr[3],
slave->perm_hwaddr[4], slave->perm_hwaddr[5]);
if (bond->params.mode == BOND_MODE_8023AD) {
const struct aggregator *agg
@ -4010,15 +3886,12 @@ static int bond_do_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cmd
struct ifslave k_sinfo;
struct ifslave __user *u_sinfo = NULL;
struct mii_ioctl_data *mii = NULL;
int prev_abi_ver = orig_app_abi_ver;
int res = 0;
dprintk("bond_ioctl: master=%s, cmd=%d\n",
bond_dev->name, cmd);
switch (cmd) {
case SIOCETHTOOL:
return bond_ethtool_ioctl(bond_dev, ifr);
case SIOCGMIIPHY:
mii = if_mii(ifr);
if (!mii) {
@ -4090,21 +3963,6 @@ static int bond_do_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cmd
return -EPERM;
}
if (orig_app_abi_ver == -1) {
/* no orig_app_abi_ver was provided yet, so we'll use the
* current one from now on, even if it's 0
*/
orig_app_abi_ver = app_abi_ver;
} else if (orig_app_abi_ver != app_abi_ver) {
printk(KERN_ERR DRV_NAME
": Error: already using ifenslave ABI version %d; to "
"upgrade ifenslave to version %d, you must first "
"reload bonding.\n",
orig_app_abi_ver, app_abi_ver);
return -EINVAL;
}
slave_dev = dev_get_by_name(ifr->ifr_slave);
dprintk("slave_dev=%p: \n", slave_dev);
@ -4137,14 +3995,6 @@ static int bond_do_ioctl(struct net_device *bond_dev, struct ifreq *ifr, int cmd
dev_put(slave_dev);
}
if (res < 0) {
/* The ioctl failed, so there's no point in changing the
* orig_app_abi_ver. We'll restore its value just in case
* we've changed it earlier in this function.
*/
orig_app_abi_ver = prev_abi_ver;
}
return res;
}
@ -4578,9 +4428,18 @@ static inline void bond_set_mode_ops(struct bonding *bond, int mode)
}
}
static void bond_ethtool_get_drvinfo(struct net_device *bond_dev,
struct ethtool_drvinfo *drvinfo)
{
strncpy(drvinfo->driver, DRV_NAME, 32);
strncpy(drvinfo->version, DRV_VERSION, 32);
snprintf(drvinfo->fw_version, 32, "%d", BOND_ABI_VERSION);
}
static struct ethtool_ops bond_ethtool_ops = {
.get_tx_csum = ethtool_op_get_tx_csum,
.get_sg = ethtool_op_get_sg,
.get_drvinfo = bond_ethtool_get_drvinfo,
};
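The hunks above retire bonding's private SIOCETHTOOL handler (and with it the whole ifenslave ABI-version negotiation) in favor of the generic ethtool_ops mechanism, where the core does the copy_from_user()/copy_to_user() work and the driver only fills in answers. A minimal sketch of the pattern, using a hypothetical "foo" driver:

	/* minimal ethtool_ops sketch; the ethtool core dispatches
	 * ETHTOOL_GDRVINFO here instead of the driver parsing ifr->ifr_data */
	static void foo_get_drvinfo(struct net_device *dev,
				    struct ethtool_drvinfo *info)
	{
		strncpy(info->driver, "foo", 32);
		strncpy(info->version, "1.0", 32);
	}

	static struct ethtool_ops foo_ethtool_ops = {
		.get_drvinfo	= foo_get_drvinfo,
	};

	/* at init time: dev->ethtool_ops = &foo_ethtool_ops; */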
/*


@ -40,8 +40,8 @@
#include "bond_3ad.h"
#include "bond_alb.h"
#define DRV_VERSION "2.6.3"
#define DRV_RELDATE "June 8, 2005"
#define DRV_VERSION "2.6.4"
#define DRV_RELDATE "September 26, 2005"
#define DRV_NAME "bonding"
#define DRV_DESCRIPTION "Ethernet Channel Bonding Driver"


@ -4423,18 +4423,14 @@ static struct {
#define CAS_REG_LEN (sizeof(ethtool_register_table)/sizeof(int))
#define CAS_MAX_REGS (sizeof (u32)*CAS_REG_LEN)
static u8 *cas_get_regs(struct cas *cp)
static void cas_read_regs(struct cas *cp, u8 *ptr, int len)
{
u8 *ptr = kmalloc(CAS_MAX_REGS, GFP_KERNEL);
u8 *p;
int i;
unsigned long flags;
if (!ptr)
return NULL;
spin_lock_irqsave(&cp->lock, flags);
for (i = 0, p = ptr; i < CAS_REG_LEN ; i ++, p += sizeof(u32)) {
for (i = 0, p = ptr; i < len ; i ++, p += sizeof(u32)) {
u16 hval;
u32 val;
if (ethtool_register_table[i].offsets < 0) {
@ -4447,8 +4443,6 @@ static u8 *cas_get_regs(struct cas *cp)
memcpy(p, (u8 *)&val, sizeof(u32));
}
spin_unlock_irqrestore(&cp->lock, flags);
return ptr;
}
static struct net_device_stats *cas_get_stats(struct net_device *dev)
@ -4561,316 +4555,251 @@ static void cas_set_multicast(struct net_device *dev)
spin_unlock_irqrestore(&cp->lock, flags);
}
/* Eventually add support for changing the advertisement
* on autoneg.
*/
static int cas_ethtool_ioctl(struct net_device *dev, void __user *ep_user)
static void cas_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
{
struct cas *cp = netdev_priv(dev);
strncpy(info->driver, DRV_MODULE_NAME, ETHTOOL_BUSINFO_LEN);
strncpy(info->version, DRV_MODULE_VERSION, ETHTOOL_BUSINFO_LEN);
info->fw_version[0] = '\0';
strncpy(info->bus_info, pci_name(cp->pdev), ETHTOOL_BUSINFO_LEN);
info->regdump_len = cp->casreg_len < CAS_MAX_REGS ?
cp->casreg_len : CAS_MAX_REGS;
info->n_stats = CAS_NUM_STAT_KEYS;
}
static int cas_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct cas *cp = netdev_priv(dev);
u16 bmcr;
int full_duplex, speed, pause;
struct ethtool_cmd ecmd;
unsigned long flags;
enum link_state linkstate = link_up;
if (copy_from_user(&ecmd, ep_user, sizeof(ecmd)))
return -EFAULT;
switch(ecmd.cmd) {
case ETHTOOL_GDRVINFO: {
struct ethtool_drvinfo info = { .cmd = ETHTOOL_GDRVINFO };
strncpy(info.driver, DRV_MODULE_NAME,
ETHTOOL_BUSINFO_LEN);
strncpy(info.version, DRV_MODULE_VERSION,
ETHTOOL_BUSINFO_LEN);
info.fw_version[0] = '\0';
strncpy(info.bus_info, pci_name(cp->pdev),
ETHTOOL_BUSINFO_LEN);
info.regdump_len = cp->casreg_len < CAS_MAX_REGS ?
cp->casreg_len : CAS_MAX_REGS;
info.n_stats = CAS_NUM_STAT_KEYS;
if (copy_to_user(ep_user, &info, sizeof(info)))
return -EFAULT;
return 0;
cmd->advertising = 0;
cmd->supported = SUPPORTED_Autoneg;
if (cp->cas_flags & CAS_FLAG_1000MB_CAP) {
cmd->supported |= SUPPORTED_1000baseT_Full;
cmd->advertising |= ADVERTISED_1000baseT_Full;
}
case ETHTOOL_GSET:
ecmd.advertising = 0;
ecmd.supported = SUPPORTED_Autoneg;
if (cp->cas_flags & CAS_FLAG_1000MB_CAP) {
ecmd.supported |= SUPPORTED_1000baseT_Full;
ecmd.advertising |= ADVERTISED_1000baseT_Full;
/* Record PHY settings if HW is on. */
spin_lock_irqsave(&cp->lock, flags);
bmcr = 0;
linkstate = cp->lstate;
if (CAS_PHY_MII(cp->phy_type)) {
cmd->port = PORT_MII;
cmd->transceiver = (cp->cas_flags & CAS_FLAG_SATURN) ?
XCVR_INTERNAL : XCVR_EXTERNAL;
cmd->phy_address = cp->phy_addr;
cmd->advertising |= ADVERTISED_TP | ADVERTISED_MII |
ADVERTISED_10baseT_Half |
ADVERTISED_10baseT_Full |
ADVERTISED_100baseT_Half |
ADVERTISED_100baseT_Full;
cmd->supported |=
(SUPPORTED_10baseT_Half |
SUPPORTED_10baseT_Full |
SUPPORTED_100baseT_Half |
SUPPORTED_100baseT_Full |
SUPPORTED_TP | SUPPORTED_MII);
if (cp->hw_running) {
cas_mif_poll(cp, 0);
bmcr = cas_phy_read(cp, MII_BMCR);
cas_read_mii_link_mode(cp, &full_duplex,
&speed, &pause);
cas_mif_poll(cp, 1);
}
/* Record PHY settings if HW is on. */
spin_lock_irqsave(&cp->lock, flags);
bmcr = 0;
linkstate = cp->lstate;
if (CAS_PHY_MII(cp->phy_type)) {
ecmd.port = PORT_MII;
ecmd.transceiver = (cp->cas_flags & CAS_FLAG_SATURN) ?
XCVR_INTERNAL : XCVR_EXTERNAL;
ecmd.phy_address = cp->phy_addr;
ecmd.advertising |= ADVERTISED_TP | ADVERTISED_MII |
ADVERTISED_10baseT_Half |
ADVERTISED_10baseT_Full |
ADVERTISED_100baseT_Half |
ADVERTISED_100baseT_Full;
} else {
cmd->port = PORT_FIBRE;
cmd->transceiver = XCVR_INTERNAL;
cmd->phy_address = 0;
cmd->supported |= SUPPORTED_FIBRE;
cmd->advertising |= ADVERTISED_FIBRE;
ecmd.supported |=
(SUPPORTED_10baseT_Half |
SUPPORTED_10baseT_Full |
SUPPORTED_100baseT_Half |
SUPPORTED_100baseT_Full |
SUPPORTED_TP | SUPPORTED_MII);
if (cp->hw_running) {
cas_mif_poll(cp, 0);
bmcr = cas_phy_read(cp, MII_BMCR);
cas_read_mii_link_mode(cp, &full_duplex,
&speed, &pause);
cas_mif_poll(cp, 1);
}
} else {
ecmd.port = PORT_FIBRE;
ecmd.transceiver = XCVR_INTERNAL;
ecmd.phy_address = 0;
ecmd.supported |= SUPPORTED_FIBRE;
ecmd.advertising |= ADVERTISED_FIBRE;
if (cp->hw_running) {
/* pcs uses the same bits as mii */
bmcr = readl(cp->regs + REG_PCS_MII_CTRL);
cas_read_pcs_link_mode(cp, &full_duplex,
&speed, &pause);
}
if (cp->hw_running) {
/* pcs uses the same bits as mii */
bmcr = readl(cp->regs + REG_PCS_MII_CTRL);
cas_read_pcs_link_mode(cp, &full_duplex,
&speed, &pause);
}
spin_unlock_irqrestore(&cp->lock, flags);
}
spin_unlock_irqrestore(&cp->lock, flags);
if (bmcr & BMCR_ANENABLE) {
ecmd.advertising |= ADVERTISED_Autoneg;
ecmd.autoneg = AUTONEG_ENABLE;
ecmd.speed = ((speed == 10) ?
SPEED_10 :
((speed == 1000) ?
SPEED_1000 : SPEED_100));
ecmd.duplex = full_duplex ? DUPLEX_FULL : DUPLEX_HALF;
if (bmcr & BMCR_ANENABLE) {
cmd->advertising |= ADVERTISED_Autoneg;
cmd->autoneg = AUTONEG_ENABLE;
cmd->speed = ((speed == 10) ?
SPEED_10 :
((speed == 1000) ?
SPEED_1000 : SPEED_100));
cmd->duplex = full_duplex ? DUPLEX_FULL : DUPLEX_HALF;
} else {
cmd->autoneg = AUTONEG_DISABLE;
cmd->speed =
(bmcr & CAS_BMCR_SPEED1000) ?
SPEED_1000 :
((bmcr & BMCR_SPEED100) ? SPEED_100:
SPEED_10);
cmd->duplex =
(bmcr & BMCR_FULLDPLX) ?
DUPLEX_FULL : DUPLEX_HALF;
}
if (linkstate != link_up) {
/* Force these to "unknown" if the link is not up and
* autonegotiation is enabled. We can set the link
* speed to 0, but not cmd->duplex,
* because its legal values are 0 and 1. Ethtool will
* print the value reported in parentheses after the
* word "Unknown" for unrecognized values.
*
* If in forced mode, we report the speed and duplex
* settings that we configured.
*/
if (cp->link_cntl & BMCR_ANENABLE) {
cmd->speed = 0;
cmd->duplex = 0xff;
} else {
ecmd.autoneg = AUTONEG_DISABLE;
ecmd.speed =
(bmcr & CAS_BMCR_SPEED1000) ?
SPEED_1000 :
((bmcr & BMCR_SPEED100) ? SPEED_100:
SPEED_10);
ecmd.duplex =
(bmcr & BMCR_FULLDPLX) ?
cmd->speed = SPEED_10;
if (cp->link_cntl & BMCR_SPEED100) {
cmd->speed = SPEED_100;
} else if (cp->link_cntl & CAS_BMCR_SPEED1000) {
cmd->speed = SPEED_1000;
}
cmd->duplex = (cp->link_cntl & BMCR_FULLDPLX)?
DUPLEX_FULL : DUPLEX_HALF;
}
if (linkstate != link_up) {
/* Force these to "unknown" if the link is not up and
* autonegotiation is enabled. We can set the link
* speed to 0, but not ecmd.duplex,
* because its legal values are 0 and 1. Ethtool will
* print the value reported in parentheses after the
* word "Unknown" for unrecognized values.
*
* If in forced mode, we report the speed and duplex
* settings that we configured.
*/
if (cp->link_cntl & BMCR_ANENABLE) {
ecmd.speed = 0;
ecmd.duplex = 0xff;
} else {
ecmd.speed = SPEED_10;
if (cp->link_cntl & BMCR_SPEED100) {
ecmd.speed = SPEED_100;
} else if (cp->link_cntl & CAS_BMCR_SPEED1000) {
ecmd.speed = SPEED_1000;
}
ecmd.duplex = (cp->link_cntl & BMCR_FULLDPLX)?
DUPLEX_FULL : DUPLEX_HALF;
}
}
if (copy_to_user(ep_user, &ecmd, sizeof(ecmd)))
return -EFAULT;
return 0;
case ETHTOOL_SSET:
if (!capable(CAP_NET_ADMIN))
return -EPERM;
/* Verify the settings we care about. */
if (ecmd.autoneg != AUTONEG_ENABLE &&
ecmd.autoneg != AUTONEG_DISABLE)
return -EINVAL;
if (ecmd.autoneg == AUTONEG_DISABLE &&
((ecmd.speed != SPEED_1000 &&
ecmd.speed != SPEED_100 &&
ecmd.speed != SPEED_10) ||
(ecmd.duplex != DUPLEX_HALF &&
ecmd.duplex != DUPLEX_FULL)))
return -EINVAL;
/* Apply settings and restart link process. */
spin_lock_irqsave(&cp->lock, flags);
cas_begin_auto_negotiation(cp, &ecmd);
spin_unlock_irqrestore(&cp->lock, flags);
return 0;
case ETHTOOL_NWAY_RST:
if ((cp->link_cntl & BMCR_ANENABLE) == 0)
return -EINVAL;
/* Restart link process. */
spin_lock_irqsave(&cp->lock, flags);
cas_begin_auto_negotiation(cp, NULL);
spin_unlock_irqrestore(&cp->lock, flags);
return 0;
case ETHTOOL_GWOL:
case ETHTOOL_SWOL:
break; /* doesn't exist */
/* get link status */
case ETHTOOL_GLINK: {
struct ethtool_value edata = { .cmd = ETHTOOL_GLINK };
edata.data = (cp->lstate == link_up);
if (copy_to_user(ep_user, &edata, sizeof(edata)))
return -EFAULT;
return 0;
}
/* get message-level */
case ETHTOOL_GMSGLVL: {
struct ethtool_value edata = { .cmd = ETHTOOL_GMSGLVL };
edata.data = cp->msg_enable;
if (copy_to_user(ep_user, &edata, sizeof(edata)))
return -EFAULT;
return 0;
}
/* set message-level */
case ETHTOOL_SMSGLVL: {
struct ethtool_value edata;
if (!capable(CAP_NET_ADMIN)) {
return (-EPERM);
}
if (copy_from_user(&edata, ep_user, sizeof(edata)))
return -EFAULT;
cp->msg_enable = edata.data;
return 0;
}
case ETHTOOL_GREGS: {
struct ethtool_regs edata;
u8 *ptr;
int len = cp->casreg_len < CAS_MAX_REGS ?
cp->casreg_len: CAS_MAX_REGS;
if (copy_from_user(&edata, ep_user, sizeof (edata)))
return -EFAULT;
if (edata.len > len)
edata.len = len;
edata.version = 0;
if (copy_to_user (ep_user, &edata, sizeof(edata)))
return -EFAULT;
/* cas_get_regs handles locks (cp->lock). */
ptr = cas_get_regs(cp);
if (ptr == NULL)
return -ENOMEM;
if (copy_to_user(ep_user + sizeof (edata), ptr, edata.len))
return -EFAULT;
kfree(ptr);
return (0);
}
case ETHTOOL_GSTRINGS: {
struct ethtool_gstrings edata;
int len;
if (copy_from_user(&edata, ep_user, sizeof(edata)))
return -EFAULT;
len = edata.len;
switch(edata.string_set) {
case ETH_SS_STATS:
edata.len = (len < CAS_NUM_STAT_KEYS) ?
len : CAS_NUM_STAT_KEYS;
if (copy_to_user(ep_user, &edata, sizeof(edata)))
return -EFAULT;
if (copy_to_user(ep_user + sizeof(edata),
&ethtool_cassini_statnames,
(edata.len * ETH_GSTRING_LEN)))
return -EFAULT;
return 0;
default:
return -EINVAL;
}
}
case ETHTOOL_GSTATS: {
int i = 0;
u64 *tmp;
struct ethtool_stats edata;
struct net_device_stats *stats;
int len;
if (copy_from_user(&edata, ep_user, sizeof(edata)))
return -EFAULT;
len = edata.n_stats;
stats = cas_get_stats(cp->dev);
edata.cmd = ETHTOOL_GSTATS;
edata.n_stats = (len < CAS_NUM_STAT_KEYS) ?
len : CAS_NUM_STAT_KEYS;
if (copy_to_user(ep_user, &edata, sizeof (edata)))
return -EFAULT;
tmp = kmalloc(sizeof(u64)*CAS_NUM_STAT_KEYS, GFP_KERNEL);
if (tmp) {
tmp[i++] = stats->collisions;
tmp[i++] = stats->rx_bytes;
tmp[i++] = stats->rx_crc_errors;
tmp[i++] = stats->rx_dropped;
tmp[i++] = stats->rx_errors;
tmp[i++] = stats->rx_fifo_errors;
tmp[i++] = stats->rx_frame_errors;
tmp[i++] = stats->rx_length_errors;
tmp[i++] = stats->rx_over_errors;
tmp[i++] = stats->rx_packets;
tmp[i++] = stats->tx_aborted_errors;
tmp[i++] = stats->tx_bytes;
tmp[i++] = stats->tx_dropped;
tmp[i++] = stats->tx_errors;
tmp[i++] = stats->tx_fifo_errors;
tmp[i++] = stats->tx_packets;
BUG_ON(i != CAS_NUM_STAT_KEYS);
i = copy_to_user(ep_user + sizeof(edata),
tmp, sizeof(u64)*edata.n_stats);
kfree(tmp);
} else {
return -ENOMEM;
}
if (i)
return -EFAULT;
return 0;
}
}
return -EOPNOTSUPP;
return 0;
}
static int cas_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct cas *cp = netdev_priv(dev);
unsigned long flags;
/* Verify the settings we care about. */
if (cmd->autoneg != AUTONEG_ENABLE &&
cmd->autoneg != AUTONEG_DISABLE)
return -EINVAL;
if (cmd->autoneg == AUTONEG_DISABLE &&
((cmd->speed != SPEED_1000 &&
cmd->speed != SPEED_100 &&
cmd->speed != SPEED_10) ||
(cmd->duplex != DUPLEX_HALF &&
cmd->duplex != DUPLEX_FULL)))
return -EINVAL;
/* Apply settings and restart link process. */
spin_lock_irqsave(&cp->lock, flags);
cas_begin_auto_negotiation(cp, cmd);
spin_unlock_irqrestore(&cp->lock, flags);
return 0;
}
static int cas_nway_reset(struct net_device *dev)
{
struct cas *cp = netdev_priv(dev);
unsigned long flags;
if ((cp->link_cntl & BMCR_ANENABLE) == 0)
return -EINVAL;
/* Restart link process. */
spin_lock_irqsave(&cp->lock, flags);
cas_begin_auto_negotiation(cp, NULL);
spin_unlock_irqrestore(&cp->lock, flags);
return 0;
}
static u32 cas_get_link(struct net_device *dev)
{
struct cas *cp = netdev_priv(dev);
return cp->lstate == link_up;
}
static u32 cas_get_msglevel(struct net_device *dev)
{
struct cas *cp = netdev_priv(dev);
return cp->msg_enable;
}
static void cas_set_msglevel(struct net_device *dev, u32 value)
{
struct cas *cp = netdev_priv(dev);
cp->msg_enable = value;
}
static int cas_get_regs_len(struct net_device *dev)
{
struct cas *cp = netdev_priv(dev);
return cp->casreg_len < CAS_MAX_REGS ? cp->casreg_len: CAS_MAX_REGS;
}
static void cas_get_regs(struct net_device *dev, struct ethtool_regs *regs,
void *p)
{
struct cas *cp = netdev_priv(dev);
regs->version = 0;
/* cas_read_regs handles locks (cp->lock). */
cas_read_regs(cp, p, regs->len / sizeof(u32));
}
static int cas_get_stats_count(struct net_device *dev)
{
return CAS_NUM_STAT_KEYS;
}
static void cas_get_strings(struct net_device *dev, u32 stringset, u8 *data)
{
memcpy(data, &ethtool_cassini_statnames,
CAS_NUM_STAT_KEYS * ETH_GSTRING_LEN);
}
static void cas_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *estats, u64 *data)
{
struct cas *cp = netdev_priv(dev);
struct net_device_stats *stats = cas_get_stats(cp->dev);
int i = 0;
data[i++] = stats->collisions;
data[i++] = stats->rx_bytes;
data[i++] = stats->rx_crc_errors;
data[i++] = stats->rx_dropped;
data[i++] = stats->rx_errors;
data[i++] = stats->rx_fifo_errors;
data[i++] = stats->rx_frame_errors;
data[i++] = stats->rx_length_errors;
data[i++] = stats->rx_over_errors;
data[i++] = stats->rx_packets;
data[i++] = stats->tx_aborted_errors;
data[i++] = stats->tx_bytes;
data[i++] = stats->tx_dropped;
data[i++] = stats->tx_errors;
data[i++] = stats->tx_fifo_errors;
data[i++] = stats->tx_packets;
BUG_ON(i != CAS_NUM_STAT_KEYS);
}
static struct ethtool_ops cas_ethtool_ops = {
.get_drvinfo = cas_get_drvinfo,
.get_settings = cas_get_settings,
.set_settings = cas_set_settings,
.nway_reset = cas_nway_reset,
.get_link = cas_get_link,
.get_msglevel = cas_get_msglevel,
.set_msglevel = cas_set_msglevel,
.get_regs_len = cas_get_regs_len,
.get_regs = cas_get_regs,
.get_stats_count = cas_get_stats_count,
.get_strings = cas_get_strings,
.get_ethtool_stats = cas_get_ethtool_stats,
};
static int cas_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
{
struct cas *cp = netdev_priv(dev);
@ -4883,10 +4812,6 @@ static int cas_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
*/
down(&cp->pm_sem);
switch (cmd) {
case SIOCETHTOOL:
rc = cas_ethtool_ioctl(dev, ifr->ifr_data);
break;
case SIOCGMIIPHY: /* Get address of MII PHY in use. */
data->phy_id = cp->phy_addr;
/* Fallthrough... */
@ -5112,6 +5037,7 @@ static int __devinit cas_init_one(struct pci_dev *pdev,
dev->get_stats = cas_get_stats;
dev->set_multicast_list = cas_set_multicast;
dev->do_ioctl = cas_ioctl;
dev->ethtool_ops = &cas_ethtool_ops;
dev->tx_timeout = cas_tx_timeout;
dev->watchdog_timeo = CAS_TX_TIMEOUT;
dev->change_mtu = cas_change_mtu;
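The cassini conversion above breaks one large cas_ethtool_ioctl() switch into discrete ethtool_ops callbacks. For ETHTOOL_GREGS in particular, the core now sizes the dump from get_regs_len() and passes get_regs() a buffer it has already allocated, which is why the old kmalloc()/copy_to_user()/kfree() bookkeeping disappears. A rough sketch of that contract, with a hypothetical register reader:

	/* get_regs_len()/get_regs() contract sketch: the core allocates
	 * regs->len bytes (taken from get_regs_len()) and the driver
	 * simply fills the buffer it is handed */
	static int foo_get_regs_len(struct net_device *dev)
	{
		return 16 * sizeof(u32);		/* hypothetical count */
	}

	static void foo_get_regs(struct net_device *dev,
				 struct ethtool_regs *regs, void *p)
	{
		regs->version = 0;
		foo_read_regs(dev, p, regs->len / sizeof(u32));	/* hypothetical */
	}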


@ -1875,6 +1875,9 @@ static int emac_init_device(struct ocp_device *ocpdev, struct ibm_ocp_mal *mal)
rc = -ENODEV;
goto bail;
}
/* Disable any PHY features not supported by the platform */
ep->phy_mii.def->features &= ~emacdata->phy_feat_exc;
/* Setup initial PHY config & startup aneg */
if (ep->phy_mii.def->ops->init)
@ -1882,6 +1885,34 @@ static int emac_init_device(struct ocp_device *ocpdev, struct ibm_ocp_mal *mal)
netif_carrier_off(ndev);
if (ep->phy_mii.def->features & SUPPORTED_Autoneg)
ep->want_autoneg = 1;
else {
ep->want_autoneg = 0;
/* Select highest supported speed/duplex */
if (ep->phy_mii.def->features & SUPPORTED_1000baseT_Full) {
ep->phy_mii.speed = SPEED_1000;
ep->phy_mii.duplex = DUPLEX_FULL;
} else if (ep->phy_mii.def->features &
SUPPORTED_1000baseT_Half) {
ep->phy_mii.speed = SPEED_1000;
ep->phy_mii.duplex = DUPLEX_HALF;
} else if (ep->phy_mii.def->features &
SUPPORTED_100baseT_Full) {
ep->phy_mii.speed = SPEED_100;
ep->phy_mii.duplex = DUPLEX_FULL;
} else if (ep->phy_mii.def->features &
SUPPORTED_100baseT_Half) {
ep->phy_mii.speed = SPEED_100;
ep->phy_mii.duplex = DUPLEX_HALF;
} else if (ep->phy_mii.def->features &
SUPPORTED_10baseT_Full) {
ep->phy_mii.speed = SPEED_10;
ep->phy_mii.duplex = DUPLEX_FULL;
} else {
ep->phy_mii.speed = SPEED_10;
ep->phy_mii.duplex = DUPLEX_HALF;
}
}
emac_start_link(ep, NULL);
/* read the MAC Address */
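When the PHY cannot autonegotiate, the new emac code above picks the fastest mode it advertises, walking from 1000/full down to the 10/half default. The same ladder could be written table-driven; a sketch reusing the SUPPORTED_* bits from the hunk:

	/* table-driven variant of the fixed speed/duplex fallback (sketch) */
	static const struct {
		u32 feature;
		int speed;
		int duplex;
	} emac_fallback[] = {
		{ SUPPORTED_1000baseT_Full, SPEED_1000, DUPLEX_FULL },
		{ SUPPORTED_1000baseT_Half, SPEED_1000, DUPLEX_HALF },
		{ SUPPORTED_100baseT_Full,  SPEED_100,  DUPLEX_FULL },
		{ SUPPORTED_100baseT_Half,  SPEED_100,  DUPLEX_HALF },
		{ SUPPORTED_10baseT_Full,   SPEED_10,   DUPLEX_FULL },
		{ SUPPORTED_10baseT_Half,   SPEED_10,   DUPLEX_HALF },	/* default */
	};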


@ -584,7 +584,7 @@ static inline int ns83820_add_rx_skb(struct ns83820 *dev, struct sk_buff *skb)
return 0;
}
static inline int rx_refill(struct net_device *ndev, int gfp)
static inline int rx_refill(struct net_device *ndev, unsigned int __nocast gfp)
{
struct ns83820 *dev = PRIV(ndev);
unsigned i;


@ -1832,7 +1832,7 @@ static void fill_multicast_tbl(int count, struct dev_mc_list *addrs,
{
struct dev_mc_list *mc_addr;
for (mc_addr = addrs; mc_addr && --count > 0; mc_addr = mc_addr->next) {
for (mc_addr = addrs; mc_addr && count-- > 0; mc_addr = mc_addr->next) {
u_int position = ether_crc(6, mc_addr->dmi_addr);
#ifndef final_version /* Verify multicast address. */
if ((mc_addr->dmi_addr[0] & 1) == 0)
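The one-character change above fixes an off-by-one: with the pre-decrement test, a table of count addresses only ever hashed count - 1 of them, silently dropping the last entry; the post-decrement form visits all count entries. A standalone demonstration:

	#include <stdio.h>

	int main(void)
	{
		int count, visited;

		count = 2; visited = 0;
		while (--count > 0)		/* old test */
			visited++;
		printf("pre-decrement:  %d\n", visited);	/* prints 1 */

		count = 2; visited = 0;
		while (count-- > 0)		/* new test */
			visited++;
		printf("post-decrement: %d\n", visited);	/* prints 2 */
		return 0;
	}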


@ -2837,21 +2837,29 @@ static void skge_netpoll(struct net_device *dev)
static int skge_set_mac_address(struct net_device *dev, void *p)
{
struct skge_port *skge = netdev_priv(dev);
struct sockaddr *addr = p;
int err = 0;
struct skge_hw *hw = skge->hw;
unsigned port = skge->port;
const struct sockaddr *addr = p;
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
skge_down(dev);
spin_lock_bh(&hw->phy_lock);
memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN);
memcpy_toio(skge->hw->regs + B2_MAC_1 + skge->port*8,
memcpy_toio(hw->regs + B2_MAC_1 + port*8,
dev->dev_addr, ETH_ALEN);
memcpy_toio(skge->hw->regs + B2_MAC_2 + skge->port*8,
memcpy_toio(hw->regs + B2_MAC_2 + port*8,
dev->dev_addr, ETH_ALEN);
if (dev->flags & IFF_UP)
err = skge_up(dev);
return err;
if (hw->chip_id == CHIP_ID_GENESIS)
xm_outaddr(hw, port, XM_SA, dev->dev_addr);
else {
gma_set_addr(hw, port, GM_SRC_ADDR_1L, dev->dev_addr);
gma_set_addr(hw, port, GM_SRC_ADDR_2L, dev->dev_addr);
}
spin_unlock_bh(&hw->phy_lock);
return 0;
}
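The rewritten skge_set_mac_address() no longer bounces the link by calling skge_down()/skge_up() around the update. Instead it serializes with the PHY code via hw->phy_lock, rewrites both on-chip copies of the station address, and reprograms the MAC for the chip at hand (XMAC on Genesis, GMA registers otherwise). The shape of the in-place update:

	/* in-place MAC update sketch: hold the existing lock instead of
	 * taking the port down and up again */
	spin_lock_bh(&hw->phy_lock);
	memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN);
	/* ... write dev->dev_addr into the hardware address registers ... */
	spin_unlock_bh(&hw->phy_lock);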
static const struct {


@ -133,14 +133,18 @@
- finally added firmware (GPL'ed by Adaptec)
- removed compatibility code for 2.2.x
LK1.4.2.1 (Ion Badulescu)
- fixed 32/64 bit issues on i386 + CONFIG_HIGHMEM
- added 32-bit padding to outgoing skb's, removed previous workaround
TODO: - fix forced speed/duplexing code (broken a long time ago, when
somebody converted the driver to use the generic MII code)
- fix VLAN support
*/
#define DRV_NAME "starfire"
#define DRV_VERSION "1.03+LK1.4.2"
#define DRV_RELDATE "January 19, 2005"
#define DRV_VERSION "1.03+LK1.4.2.1"
#define DRV_RELDATE "October 3, 2005"
#include <linux/config.h>
#include <linux/version.h>
@ -165,6 +169,14 @@ TODO: - fix forced speed/duplexing code (broken a long time ago, when
* of length 1. If and when this is fixed, the #define below can be removed.
*/
#define HAS_BROKEN_FIRMWARE
/*
* If using the broken firmware, data must be padded to the next 32-bit boundary.
*/
#ifdef HAS_BROKEN_FIRMWARE
#define PADDING_MASK 3
#endif
/*
* Define this if using the driver with the zero-copy patch
*/
@ -257,9 +269,10 @@ static int full_duplex[MAX_UNITS] = {0, };
* This SUCKS.
* We need a much better method to determine if dma_addr_t is 64-bit.
*/
#if (defined(__i386__) && defined(CONFIG_HIGHMEM) && (LINUX_VERSION_CODE > 0x20500 || defined(CONFIG_HIGHMEM64G))) || defined(__x86_64__) || defined (__ia64__) || defined(__mips64__) || (defined(__mips__) && defined(CONFIG_HIGHMEM) && defined(CONFIG_64BIT_PHYS_ADDR))
#if (defined(__i386__) && defined(CONFIG_HIGHMEM64G)) || defined(__x86_64__) || defined (__ia64__) || defined(__mips64__) || (defined(__mips__) && defined(CONFIG_HIGHMEM) && defined(CONFIG_64BIT_PHYS_ADDR))
/* 64-bit dma_addr_t */
#define ADDR_64BITS /* This chip uses 64 bit addresses. */
#define netdrv_addr_t u64
#define cpu_to_dma(x) cpu_to_le64(x)
#define dma_to_cpu(x) le64_to_cpu(x)
#define RX_DESC_Q_ADDR_SIZE RxDescQAddr64bit
@ -268,6 +281,7 @@ static int full_duplex[MAX_UNITS] = {0, };
#define TX_COMPL_Q_ADDR_SIZE TxComplQAddr64bit
#define RX_DESC_ADDR_SIZE RxDescAddr64bit
#else /* 32-bit dma_addr_t */
#define netdrv_addr_t u32
#define cpu_to_dma(x) cpu_to_le32(x)
#define dma_to_cpu(x) le32_to_cpu(x)
#define RX_DESC_Q_ADDR_SIZE RxDescQAddr32bit
@ -1333,21 +1347,10 @@ static int start_tx(struct sk_buff *skb, struct net_device *dev)
}
#if defined(ZEROCOPY) && defined(HAS_BROKEN_FIRMWARE)
{
int has_bad_length = 0;
if (skb_first_frag_len(skb) == 1)
has_bad_length = 1;
else {
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
if (skb_shinfo(skb)->frags[i].size == 1) {
has_bad_length = 1;
break;
}
}
if (has_bad_length)
skb_checksum_help(skb, 0);
if (skb->ip_summed == CHECKSUM_HW) {
skb = skb_padto(skb, (skb->len + PADDING_MASK) & ~PADDING_MASK);
if (skb == NULL)
return NETDEV_TX_OK;
}
#endif /* ZEROCOPY && HAS_BROKEN_FIRMWARE */
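(skb->len + PADDING_MASK) & ~PADDING_MASK rounds the frame length up to the next multiple of four (PADDING_MASK is 3), and skb_padto() zero-extends the buffer to that size, returning NULL if reallocation fails. The arithmetic, checked standalone:

	#include <stdio.h>

	#define PADDING_MASK 3	/* pad to a 32-bit boundary, as above */

	int main(void)
	{
		unsigned int lens[] = { 60, 61, 62, 63, 64 };
		unsigned int i;

		for (i = 0; i < 5; i++)
			printf("%u -> %u\n", lens[i],
			       (lens[i] + PADDING_MASK) & ~PADDING_MASK);
		/* 60->60, 61->64, 62->64, 63->64, 64->64 */
		return 0;
	}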
@ -2127,13 +2130,12 @@ static int __init starfire_init (void)
#endif
#endif
#ifndef ADDR_64BITS
/* we can do this test only at run-time... sigh */
if (sizeof(dma_addr_t) == sizeof(u64)) {
printk("This driver has not been ported to this 64-bit architecture yet\n");
if (sizeof(dma_addr_t) != sizeof(netdrv_addr_t)) {
printk("This driver has dma_addr_t issues, please send email to maintainer\n");
return -ENODEV;
}
#endif /* not ADDR_64BITS */
return pci_module_init (&starfire_driver);
}


@ -1035,7 +1035,8 @@ struct gem {
#define ALIGNED_RX_SKB_ADDR(addr) \
((((unsigned long)(addr) + (64UL - 1UL)) & ~(64UL - 1UL)) - (unsigned long)(addr))
static __inline__ struct sk_buff *gem_alloc_skb(int size, int gfp_flags)
static __inline__ struct sk_buff *gem_alloc_skb(int size,
unsigned int __nocast gfp_flags)
{
struct sk_buff *skb = alloc_skb(size + 64, gfp_flags);


@ -67,8 +67,8 @@
#define DRV_MODULE_NAME "tg3"
#define PFX DRV_MODULE_NAME ": "
#define DRV_MODULE_VERSION "3.41"
#define DRV_MODULE_RELDATE "September 27, 2005"
#define DRV_MODULE_VERSION "3.42"
#define DRV_MODULE_RELDATE "Oct 3, 2005"
#define TG3_DEF_MAC_MODE 0
#define TG3_DEF_RX_MODE 0
@ -9284,8 +9284,8 @@ static int __devinit tg3_get_invariants(struct tg3 *tp)
static struct pci_device_id write_reorder_chipsets[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_AMD,
PCI_DEVICE_ID_AMD_FE_GATE_700C) },
{ PCI_DEVICE(PCI_VENDOR_ID_AMD,
PCI_DEVICE_ID_AMD_K8_NB) },
{ PCI_DEVICE(PCI_VENDOR_ID_VIA,
PCI_DEVICE_ID_VIA_8385_0) },
{ },
};
u32 misc_ctrl_reg;
@ -9300,15 +9300,6 @@ static int __devinit tg3_get_invariants(struct tg3 *tp)
tp->tg3_flags2 |= TG3_FLG2_SUN_570X;
#endif
/* If we have an AMD 762 or K8 chipset, write
* reordering to the mailbox registers done by the host
* controller can cause major troubles. We read back from
* every mailbox register write to force the writes to be
* posted to the chip in order.
*/
if (pci_dev_present(write_reorder_chipsets))
tp->tg3_flags |= TG3_FLAG_MBOX_WRITE_REORDER;
/* Force memory write invalidate off. If we leave it on,
* then on 5700_BX chips we have to enable a workaround.
* The workaround is to set the TG3PCI_DMA_RW_CTRL boundary
@ -9439,6 +9430,16 @@ static int __devinit tg3_get_invariants(struct tg3 *tp)
if (pci_find_capability(tp->pdev, PCI_CAP_ID_EXP) != 0)
tp->tg3_flags2 |= TG3_FLG2_PCI_EXPRESS;
/* If we have an AMD 762 or VIA K8T800 chipset, write
* reordering to the mailbox registers done by the host
* controller can cause major troubles. We read back from
* every mailbox register write to force the writes to be
* posted to the chip in order.
*/
if (pci_dev_present(write_reorder_chipsets) &&
!(tp->tg3_flags2 & TG3_FLG2_PCI_EXPRESS))
tp->tg3_flags |= TG3_FLAG_MBOX_WRITE_REORDER;
if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5703 &&
tp->pci_lat_timer < 64) {
tp->pci_lat_timer = 64;
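The tg3 hunks above narrow the mailbox write-reordering workaround: the AMD K8 northbridge entry is replaced by the VIA K8T800, and the flag is now set only when such a chipset is present and the board is not PCI Express. The workaround itself, per the comment, is to flush every posted mailbox write with a read-back:

	/* posted-write flush sketch: on host bridges that reorder writes,
	 * reading the register back forces earlier writes to reach the
	 * device in order */
	writel(val, mbox_reg);
	(void) readl(mbox_reg);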


@ -531,7 +531,6 @@ static int __devinit ibmtr_probe1(struct net_device *dev, int PIOaddr)
if (!time_after(jiffies, timeout)) continue;
DPRINTK( "Hardware timeout during initialization.\n");
iounmap(t_mmio);
kfree(ti);
return -ENODEV;
}
ti->sram_phys =
@ -645,7 +644,6 @@ static int __devinit ibmtr_probe1(struct net_device *dev, int PIOaddr)
DPRINTK("Unknown shared ram paging info %01X\n",
ti->shared_ram_paging);
iounmap(t_mmio);
kfree(ti);
return -ENODEV;
break;
} /*end switch shared_ram_paging */
@ -675,7 +673,6 @@ static int __devinit ibmtr_probe1(struct net_device *dev, int PIOaddr)
"driver limit (%05x), adapter not started.\n",
chk_base, ibmtr_mem_base + IBMTR_SHARED_RAM_SIZE);
iounmap(t_mmio);
kfree(ti);
return -ENODEV;
} else { /* seems cool, record what we have figured out */
ti->sram_base = new_base >> 12;
@ -690,7 +687,6 @@ static int __devinit ibmtr_probe1(struct net_device *dev, int PIOaddr)
DPRINTK("Could not grab irq %d. Halting Token Ring driver.\n",
irq);
iounmap(t_mmio);
kfree(ti);
return -ENODEV;
}
/*?? Now, allocate some of the PIO PORTs for this driver.. */
@ -699,7 +695,6 @@ static int __devinit ibmtr_probe1(struct net_device *dev, int PIOaddr)
DPRINTK("Could not grab PIO range. Halting driver.\n");
free_irq(dev->irq, dev);
iounmap(t_mmio);
kfree(ti);
return -EBUSY;
}
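Each removed kfree(ti) above paired a manual free with memory the driver core owns: ti points into the net_device private area, so these probe error paths were freeing storage that is reclaimed when the device itself is released, i.e. a double free. The ownership rule in sketch form (an assumption drawn from the error paths above):

	struct tok_info *ti = netdev_priv(dev);
	/* on error: release irqs, mappings and I/O regions here, but leave
	 * ti alone; the caller's free_netdev(dev) frees it with the device */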


@ -172,7 +172,7 @@ void t21142_lnk_change(struct net_device *dev, int csr5)
int i;
for (i = 0; i < tp->mtable->leafcount; i++)
if (tp->mtable->mleaf[i].media == dev->if_port) {
int startup = ! ((tp->chip_id == DC21143 && tp->revision == 65));
int startup = ! ((tp->chip_id == DC21143 && (tp->revision == 48 || tp->revision == 65)));
tp->cur_index = i;
tulip_select_media(dev, startup);
setup_done = 1;


@ -57,6 +57,7 @@
#include <linux/ioport.h> /* request_region(), release_region() */
#include <linux/wanrouter.h> /* WAN router definitions */
#include <linux/wanpipe.h> /* WANPIPE common user API definitions */
#include <linux/rcupdate.h>
#include <linux/in.h>
#include <asm/io.h> /* phys_to_virt() */
@ -1268,37 +1269,41 @@ unsigned long get_ip_address(struct net_device *dev, int option)
struct in_ifaddr *ifaddr;
struct in_device *in_dev;
unsigned long addr = 0;
if ((in_dev = __in_dev_get(dev)) == NULL){
return 0;
rcu_read_lock();
if ((in_dev = __in_dev_get_rcu(dev)) == NULL){
goto out;
}
if ((ifaddr = in_dev->ifa_list)== NULL ){
return 0;
goto out;
}
switch (option){
case WAN_LOCAL_IP:
return ifaddr->ifa_local;
addr = ifaddr->ifa_local;
break;
case WAN_POINTOPOINT_IP:
return ifaddr->ifa_address;
addr = ifaddr->ifa_address;
break;
case WAN_NETMASK_IP:
return ifaddr->ifa_mask;
addr = ifaddr->ifa_mask;
break;
case WAN_BROADCAST_IP:
return ifaddr->ifa_broadcast;
addr = ifaddr->ifa_broadcast;
break;
default:
return 0;
break;
}
return 0;
out:
rcu_read_unlock();
return addr;
}
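This hunk, like the sppp, strip, parisc led and qeth hunks that follow, converts an __in_dev_get() caller to __in_dev_get_rcu(): the in_device is looked up and dereferenced entirely inside an RCU read-side critical section, and the early returns are funneled through a label so rcu_read_unlock() always runs. The pattern in isolation:

	/* RCU read-side pattern used above */
	rcu_read_lock();
	in_dev = __in_dev_get_rcu(dev);
	if (in_dev && in_dev->ifa_list)
		addr = in_dev->ifa_list->ifa_local;
	rcu_read_unlock();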
void add_gateway(sdla_t *card, struct net_device *dev)


@ -769,7 +769,7 @@ static void sppp_cisco_input (struct sppp *sp, struct sk_buff *skb)
u32 addr = 0, mask = ~0; /* FIXME: is the mask correct? */
#ifdef CONFIG_INET
rcu_read_lock();
if ((in_dev = __in_dev_get(dev)) != NULL)
if ((in_dev = __in_dev_get_rcu(dev)) != NULL)
{
for (ifa=in_dev->ifa_list; ifa != NULL;
ifa=ifa->ifa_next) {


@ -503,9 +503,14 @@ static int orinoco_xmit(struct sk_buff *skb, struct net_device *dev)
return 0;
}
/* Length of the packet body */
/* FIXME: what if the skb is smaller than this? */
len = max_t(int,skb->len - ETH_HLEN, ETH_ZLEN - ETH_HLEN);
/* Check packet length, pad short packets, round up odd length */
len = max_t(int, ALIGN(skb->len, 2), ETH_ZLEN);
if (skb->len < len) {
skb = skb_padto(skb, len);
if (skb == NULL)
goto fail;
}
len -= ETH_HLEN;
eh = (struct ethhdr *)skb->data;
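max_t(int, ALIGN(skb->len, 2), ETH_ZLEN) rounds an odd payload up to an even length and enforces the 60-byte Ethernet minimum in one expression; skb_padto() then zero-fills the buffer to that target, closing the hole flagged by the removed FIXME (an undersized skb would previously be read past its end). Worked standalone:

	#include <stdio.h>

	#define ETH_ZLEN 60			/* minimum Ethernet frame */
	#define ALIGN2(x) (((x) + 1u) & ~1u)	/* stand-in for ALIGN(x, 2) */

	int main(void)
	{
		unsigned int lens[] = { 42, 59, 60, 61, 1500 };
		unsigned int i, len;

		for (i = 0; i < 5; i++) {
			len = ALIGN2(lens[i]) > ETH_ZLEN ? ALIGN2(lens[i])
							 : ETH_ZLEN;
			printf("skb->len %4u -> %4u\n", lens[i], len);
		}
		/* 42->60, 59->60, 60->60, 61->62, 1500->1500 */
		return 0;
	}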
@ -557,8 +562,7 @@ static int orinoco_xmit(struct sk_buff *skb, struct net_device *dev)
p = skb->data;
}
/* Round up for odd length packets */
err = hermes_bap_pwrite(hw, USER_BAP, p, ALIGN(data_len, 2),
err = hermes_bap_pwrite(hw, USER_BAP, p, data_len,
txfid, data_off);
if (err) {
printk(KERN_ERR "%s: Error %d writing packet to BAP\n",


@ -1352,7 +1352,7 @@ static unsigned char *strip_make_packet(unsigned char *buffer,
struct in_device *in_dev;
rcu_read_lock();
in_dev = __in_dev_get(strip_info->dev);
in_dev = __in_dev_get_rcu(strip_info->dev);
if (in_dev == NULL) {
rcu_read_unlock();
return NULL;
@ -1508,7 +1508,7 @@ static void strip_send(struct strip *strip_info, struct sk_buff *skb)
brd = addr = 0;
rcu_read_lock();
in_dev = __in_dev_get(strip_info->dev);
in_dev = __in_dev_get_rcu(strip_info->dev);
if (in_dev) {
if (in_dev->ifa_list) {
brd = in_dev->ifa_list->ifa_broadcast;


@ -37,6 +37,7 @@
#include <linux/proc_fs.h>
#include <linux/ctype.h>
#include <linux/blkdev.h>
#include <linux/rcupdate.h>
#include <asm/io.h>
#include <asm/processor.h>
#include <asm/hardware.h>
@ -358,9 +359,10 @@ static __inline__ int led_get_net_activity(void)
/* we are running as tasklet, so locking dev_base
* for reading should be OK */
read_lock(&dev_base_lock);
rcu_read_lock();
for (dev = dev_base; dev; dev = dev->next) {
struct net_device_stats *stats;
struct in_device *in_dev = __in_dev_get(dev);
struct in_device *in_dev = __in_dev_get_rcu(dev);
if (!in_dev || !in_dev->ifa_list)
continue;
if (LOOPBACK(in_dev->ifa_list->ifa_local))
@ -371,6 +373,7 @@ static __inline__ int led_get_net_activity(void)
rx_total += stats->rx_packets;
tx_total += stats->tx_packets;
}
rcu_read_unlock();
read_unlock(&dev_base_lock);
retval = 0;


@ -686,6 +686,7 @@ struct qeth_seqno {
__u32 pdu_hdr;
__u32 pdu_hdr_ack;
__u16 ipa;
__u32 pkt_seqno;
};
struct qeth_reply {
@ -848,6 +849,7 @@ qeth_realloc_headroom(struct qeth_card *card, struct sk_buff **skb, int size)
"on interface %s", QETH_CARD_IFNAME(card));
return -ENOMEM;
}
kfree_skb(*skb);
*skb = new_skb;
}
return 0;
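The added kfree_skb(*skb) plugs a leak: when qeth_realloc_headroom() substitutes a larger copy, the original skb previously went out of scope with its reference still held. The replace-in-place shape (a sketch; qeth's exact allocation call is outside the excerpt):

	/* drop the old buffer before overwriting the caller's pointer */
	new_skb = skb_realloc_headroom(*skb, size);
	if (!new_skb)
		return -ENOMEM;
	kfree_skb(*skb);	/* old skb is unreachable after this */
	*skb = new_skb;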


@ -511,7 +511,7 @@ static int
__qeth_set_offline(struct ccwgroup_device *cgdev, int recovery_mode)
{
struct qeth_card *card = (struct qeth_card *) cgdev->dev.driver_data;
int rc = 0;
int rc = 0, rc2 = 0, rc3 = 0;
enum qeth_card_states recover_flag;
QETH_DBF_TEXT(setup, 3, "setoffl");
@ -523,11 +523,13 @@ __qeth_set_offline(struct ccwgroup_device *cgdev, int recovery_mode)
CARD_BUS_ID(card));
return -ERESTARTSYS;
}
if ((rc = ccw_device_set_offline(CARD_DDEV(card))) ||
(rc = ccw_device_set_offline(CARD_WDEV(card))) ||
(rc = ccw_device_set_offline(CARD_RDEV(card)))) {
rc = ccw_device_set_offline(CARD_DDEV(card));
rc2 = ccw_device_set_offline(CARD_WDEV(card));
rc3 = ccw_device_set_offline(CARD_RDEV(card));
if (!rc)
rc = (rc2) ? rc2 : rc3;
if (rc)
QETH_DBF_TEXT_(setup, 2, "1err%d", rc);
}
if (recover_flag == CARD_STATE_UP)
card->state = CARD_STATE_RECOVER;
qeth_notify_processes();
@ -1046,6 +1048,7 @@ qeth_setup_card(struct qeth_card *card)
spin_lock_init(&card->vlanlock);
card->vlangrp = NULL;
#endif
spin_lock_init(&card->lock);
spin_lock_init(&card->ip_lock);
spin_lock_init(&card->thread_mask_lock);
card->thread_start_mask = 0;
@ -1626,16 +1629,6 @@ qeth_cmd_timeout(unsigned long data)
spin_unlock_irqrestore(&reply->card->lock, flags);
}
static void
qeth_reset_ip_addresses(struct qeth_card *card)
{
QETH_DBF_TEXT(trace, 2, "rstipadd");
qeth_clear_ip_list(card, 0, 1);
/* this function will also schedule the SET_IP_THREAD */
qeth_set_multicast_list(card->dev);
}
static struct qeth_ipa_cmd *
qeth_check_ipa_data(struct qeth_card *card, struct qeth_cmd_buffer *iob)
{
@ -1664,9 +1657,8 @@ qeth_check_ipa_data(struct qeth_card *card, struct qeth_cmd_buffer *iob)
"IP address reset.\n",
QETH_CARD_IFNAME(card),
card->info.chpid);
card->lan_online = 1;
netif_carrier_on(card->dev);
qeth_reset_ip_addresses(card);
qeth_schedule_recovery(card);
return NULL;
case IPA_CMD_REGISTER_LOCAL_ADDR:
QETH_DBF_TEXT(trace,3, "irla");
@ -2387,6 +2379,7 @@ qeth_layer2_rebuild_skb(struct qeth_card *card, struct sk_buff *skb,
skb_pull(skb, VLAN_HLEN);
}
#endif
*((__u32 *)skb->cb) = ++card->seqno.pkt_seqno;
return vlan_id;
}
@ -3014,7 +3007,7 @@ qeth_alloc_buffer_pool(struct qeth_card *card)
return -ENOMEM;
}
for(j = 0; j < QETH_MAX_BUFFER_ELEMENTS(card); ++j){
ptr = (void *) __get_free_page(GFP_KERNEL);
ptr = (void *) __get_free_page(GFP_KERNEL|GFP_DMA);
if (!ptr) {
while (j > 0)
free_page((unsigned long)
@ -3058,7 +3051,8 @@ qeth_alloc_qdio_buffers(struct qeth_card *card)
if (card->qdio.state == QETH_QDIO_ALLOCATED)
return 0;
card->qdio.in_q = kmalloc(sizeof(struct qeth_qdio_q), GFP_KERNEL);
card->qdio.in_q = kmalloc(sizeof(struct qeth_qdio_q),
GFP_KERNEL|GFP_DMA);
if (!card->qdio.in_q)
return - ENOMEM;
QETH_DBF_TEXT(setup, 2, "inq");
@ -3083,7 +3077,7 @@ qeth_alloc_qdio_buffers(struct qeth_card *card)
}
for (i = 0; i < card->qdio.no_out_queues; ++i){
card->qdio.out_qs[i] = kmalloc(sizeof(struct qeth_qdio_out_q),
GFP_KERNEL);
GFP_KERNEL|GFP_DMA);
if (!card->qdio.out_qs[i]){
while (i > 0)
kfree(card->qdio.out_qs[--i]);
@ -5200,7 +5194,7 @@ qeth_free_vlan_addresses4(struct qeth_card *card, unsigned short vid)
if (!card->vlangrp)
return;
rcu_read_lock();
in_dev = __in_dev_get(card->vlangrp->vlan_devices[vid]);
in_dev = __in_dev_get_rcu(card->vlangrp->vlan_devices[vid]);
if (!in_dev)
goto out;
for (ifa = in_dev->ifa_list; ifa; ifa = ifa->ifa_next) {
@ -6470,6 +6464,9 @@ qeth_query_ipassists_cb(struct qeth_card *card, struct qeth_reply *reply,
if (cmd->hdr.prot_version == QETH_PROT_IPV4) {
card->options.ipa4.supported_funcs = cmd->hdr.ipa_supported;
card->options.ipa4.enabled_funcs = cmd->hdr.ipa_enabled;
/* Disable IPV6 support hard coded for Hipersockets */
if(card->info.type == QETH_CARD_TYPE_IQD)
card->options.ipa4.supported_funcs &= ~IPA_IPV6;
} else {
#ifdef CONFIG_QETH_IPV6
card->options.ipa6.supported_funcs = cmd->hdr.ipa_supported;
@ -7725,7 +7722,7 @@ qeth_arp_constructor(struct neighbour *neigh)
goto out;
rcu_read_lock();
in_dev = rcu_dereference(__in_dev_get(dev));
in_dev = __in_dev_get_rcu(dev);
if (in_dev == NULL) {
rcu_read_unlock();
return -EINVAL;


@ -60,6 +60,7 @@
Remove un-needed eh_abort handler.
Add support for embedded firmware error strings.
2.26.02.003 - Correctly handle single sgl's with use_sg=1.
2.26.02.004 - Add support for 9550SX controllers.
*/
#include <linux/module.h>
@ -82,7 +83,7 @@
#include "3w-9xxx.h"
/* Globals */
#define TW_DRIVER_VERSION "2.26.02.003"
#define TW_DRIVER_VERSION "2.26.02.004"
static TW_Device_Extension *twa_device_extension_list[TW_MAX_SLOT];
static unsigned int twa_device_extension_count;
static int twa_major = -1;
@ -892,11 +893,6 @@ static int twa_decode_bits(TW_Device_Extension *tw_dev, u32 status_reg_value)
writel(TW_CONTROL_CLEAR_QUEUE_ERROR, TW_CONTROL_REG_ADDR(tw_dev));
}
if (status_reg_value & TW_STATUS_SBUF_WRITE_ERROR) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0xf, "SBUF Write Error: clearing");
writel(TW_CONTROL_CLEAR_SBUF_WRITE_ERROR, TW_CONTROL_REG_ADDR(tw_dev));
}
if (status_reg_value & TW_STATUS_MICROCONTROLLER_ERROR) {
if (tw_dev->reset_print == 0) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x10, "Microcontroller Error: clearing");
@ -930,6 +926,36 @@ out:
return retval;
} /* End twa_empty_response_queue() */
/* This function will clear the pchip/response queue on 9550SX */
static int twa_empty_response_queue_large(TW_Device_Extension *tw_dev)
{
u32 status_reg_value, response_que_value;
int count = 0, retval = 1;
if (tw_dev->tw_pci_dev->device == PCI_DEVICE_ID_3WARE_9550SX) {
status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
while (((status_reg_value & TW_STATUS_RESPONSE_QUEUE_EMPTY) == 0) && (count < TW_MAX_RESPONSE_DRAIN)) {
response_que_value = readl(TW_RESPONSE_QUEUE_REG_ADDR_LARGE(tw_dev));
if ((response_que_value & TW_9550SX_DRAIN_COMPLETED) == TW_9550SX_DRAIN_COMPLETED) {
/* P-chip settle time */
msleep(500);
retval = 0;
goto out;
}
status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
count++;
}
if (count == TW_MAX_RESPONSE_DRAIN)
goto out;
retval = 0;
} else
retval = 0;
out:
return retval;
} /* End twa_empty_response_queue_large() */
/* This function passes sense keys from firmware to scsi layer */
static int twa_fill_sense(TW_Device_Extension *tw_dev, int request_id, int copy_sense, int print_host)
{
@ -1613,8 +1639,16 @@ static int twa_reset_sequence(TW_Device_Extension *tw_dev, int soft_reset)
int tries = 0, retval = 1, flashed = 0, do_soft_reset = soft_reset;
while (tries < TW_MAX_RESET_TRIES) {
if (do_soft_reset)
if (do_soft_reset) {
TW_SOFT_RESET(tw_dev);
/* Clear pchip/response queue on 9550SX */
if (twa_empty_response_queue_large(tw_dev)) {
TW_PRINTK(tw_dev->host, TW_DRIVER, 0x36, "Response queue (large) empty failed during reset sequence");
do_soft_reset = 1;
tries++;
continue;
}
}
/* Make sure controller is in a good state */
if (twa_poll_status(tw_dev, TW_STATUS_MICROCONTROLLER_READY | (do_soft_reset == 1 ? TW_STATUS_ATTENTION_INTERRUPT : 0), 60)) {
@ -2034,7 +2068,10 @@ static int __devinit twa_probe(struct pci_dev *pdev, const struct pci_device_id
goto out_free_device_extension;
}
mem_addr = pci_resource_start(pdev, 1);
if (pdev->device == PCI_DEVICE_ID_3WARE_9000)
mem_addr = pci_resource_start(pdev, 1);
else
mem_addr = pci_resource_start(pdev, 2);
/* Save base address */
tw_dev->base_addr = ioremap(mem_addr, PAGE_SIZE);
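The probe change above accounts for the 9550SX exposing its register window through a different PCI BAR than the 9000 series. Condensed to one expression:

	/* BAR selection as introduced above: 9000-series registers sit in
	 * BAR 1, the 9550SX exposes them through BAR 2 */
	mem_addr = pci_resource_start(pdev,
			pdev->device == PCI_DEVICE_ID_3WARE_9000 ? 1 : 2);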
@ -2148,6 +2185,8 @@ static void twa_remove(struct pci_dev *pdev)
static struct pci_device_id twa_pci_tbl[] __devinitdata = {
{ PCI_VENDOR_ID_3WARE, PCI_DEVICE_ID_3WARE_9000,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{ PCI_VENDOR_ID_3WARE, PCI_DEVICE_ID_3WARE_9550SX,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
{ }
};
MODULE_DEVICE_TABLE(pci, twa_pci_tbl);


@ -267,7 +267,6 @@ static twa_message_type twa_error_table[] = {
#define TW_CONTROL_CLEAR_PARITY_ERROR 0x00800000
#define TW_CONTROL_CLEAR_QUEUE_ERROR 0x00400000
#define TW_CONTROL_CLEAR_PCI_ABORT 0x00100000
#define TW_CONTROL_CLEAR_SBUF_WRITE_ERROR 0x00000008
/* Status register bit definitions */
#define TW_STATUS_MAJOR_VERSION_MASK 0xF0000000
@ -285,9 +284,8 @@ static twa_message_type twa_error_table[] = {
#define TW_STATUS_MICROCONTROLLER_READY 0x00002000
#define TW_STATUS_COMMAND_QUEUE_EMPTY 0x00001000
#define TW_STATUS_EXPECTED_BITS 0x00002000
#define TW_STATUS_UNEXPECTED_BITS 0x00F00008
#define TW_STATUS_SBUF_WRITE_ERROR 0x00000008
#define TW_STATUS_VALID_INTERRUPT 0x00DF0008
#define TW_STATUS_UNEXPECTED_BITS 0x00F00000
#define TW_STATUS_VALID_INTERRUPT 0x00DF0000
/* RESPONSE QUEUE BIT DEFINITIONS */
#define TW_RESPONSE_ID_MASK 0x00000FF0
@ -324,9 +322,9 @@ static twa_message_type twa_error_table[] = {
/* Compatibility defines */
#define TW_9000_ARCH_ID 0x5
#define TW_CURRENT_DRIVER_SRL 28
#define TW_CURRENT_DRIVER_BUILD 9
#define TW_CURRENT_DRIVER_BRANCH 4
#define TW_CURRENT_DRIVER_SRL 30
#define TW_CURRENT_DRIVER_BUILD 80
#define TW_CURRENT_DRIVER_BRANCH 0
/* Phase defines */
#define TW_PHASE_INITIAL 0
@ -334,6 +332,7 @@ static twa_message_type twa_error_table[] = {
#define TW_PHASE_SGLIST 2
/* Misc defines */
#define TW_9550SX_DRAIN_COMPLETED 0xFFFF
#define TW_SECTOR_SIZE 512
#define TW_ALIGNMENT_9000 4 /* 4 bytes */
#define TW_ALIGNMENT_9000_SGL 0x3
@ -417,6 +416,9 @@ static twa_message_type twa_error_table[] = {
#ifndef PCI_DEVICE_ID_3WARE_9000
#define PCI_DEVICE_ID_3WARE_9000 0x1002
#endif
#ifndef PCI_DEVICE_ID_3WARE_9550SX
#define PCI_DEVICE_ID_3WARE_9550SX 0x1003
#endif
/* Bitmask macros to eliminate bitfields */
@ -443,6 +445,7 @@ static twa_message_type twa_error_table[] = {
#define TW_STATUS_REG_ADDR(x) ((unsigned char __iomem *)x->base_addr + 0x4)
#define TW_COMMAND_QUEUE_REG_ADDR(x) (sizeof(dma_addr_t) > 4 ? ((unsigned char __iomem *)x->base_addr + 0x20) : ((unsigned char __iomem *)x->base_addr + 0x8))
#define TW_RESPONSE_QUEUE_REG_ADDR(x) ((unsigned char __iomem *)x->base_addr + 0xC)
#define TW_RESPONSE_QUEUE_REG_ADDR_LARGE(x) ((unsigned char __iomem *)x->base_addr + 0x30)
#define TW_CLEAR_ALL_INTERRUPTS(x) (writel(TW_STATUS_VALID_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
#define TW_CLEAR_ATTENTION_INTERRUPT(x) (writel(TW_CONTROL_CLEAR_ATTENTION_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
#define TW_CLEAR_HOST_INTERRUPT(x) (writel(TW_CONTROL_CLEAR_HOST_INTERRUPT, TW_CONTROL_REG_ADDR(x)))


@ -99,6 +99,7 @@ obj-$(CONFIG_SCSI_DC395x) += dc395x.o
obj-$(CONFIG_SCSI_DC390T) += tmscsim.o
obj-$(CONFIG_MEGARAID_LEGACY) += megaraid.o
obj-$(CONFIG_MEGARAID_NEWGEN) += megaraid/
obj-$(CONFIG_MEGARAID_SAS) += megaraid/
obj-$(CONFIG_SCSI_ACARD) += atp870u.o
obj-$(CONFIG_SCSI_SUNESP) += esp.o
obj-$(CONFIG_SCSI_GDTH) += gdth.o


@ -313,18 +313,37 @@ int aac_get_containers(struct aac_dev *dev)
}
dresp = (struct aac_mount *)fib_data(fibptr);
if ((le32_to_cpu(dresp->status) == ST_OK) &&
(le32_to_cpu(dresp->mnt[0].vol) == CT_NONE)) {
dinfo->command = cpu_to_le32(VM_NameServe64);
dinfo->count = cpu_to_le32(index);
dinfo->type = cpu_to_le32(FT_FILESYS);
if (fib_send(ContainerCommand,
fibptr,
sizeof(struct aac_query_mount),
FsaNormal,
1, 1,
NULL, NULL) < 0)
continue;
} else
dresp->mnt[0].capacityhigh = 0;
dprintk ((KERN_DEBUG
"VM_NameServe cid=%d status=%d vol=%d state=%d cap=%u\n",
"VM_NameServe cid=%d status=%d vol=%d state=%d cap=%llu\n",
(int)index, (int)le32_to_cpu(dresp->status),
(int)le32_to_cpu(dresp->mnt[0].vol),
(int)le32_to_cpu(dresp->mnt[0].state),
(unsigned)le32_to_cpu(dresp->mnt[0].capacity)));
((u64)le32_to_cpu(dresp->mnt[0].capacity)) +
(((u64)le32_to_cpu(dresp->mnt[0].capacityhigh)) << 32)));
if ((le32_to_cpu(dresp->status) == ST_OK) &&
(le32_to_cpu(dresp->mnt[0].vol) != CT_NONE) &&
(le32_to_cpu(dresp->mnt[0].state) != FSCS_HIDDEN)) {
fsa_dev_ptr[index].valid = 1;
fsa_dev_ptr[index].type = le32_to_cpu(dresp->mnt[0].vol);
fsa_dev_ptr[index].size = le32_to_cpu(dresp->mnt[0].capacity);
fsa_dev_ptr[index].size
= ((u64)le32_to_cpu(dresp->mnt[0].capacity)) +
(((u64)le32_to_cpu(dresp->mnt[0].capacityhigh)) << 32);
if (le32_to_cpu(dresp->mnt[0].state) & FSCS_READONLY)
fsa_dev_ptr[index].ro = 1;
}
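The capacityhigh plumbing above widens container sizes to 64 bits: VM_NameServe64 replies carry the sector count as two 32-bit halves, reassembled as low | (high << 32), while the VM_NameServe fallback forces the high half to zero. The arithmetic, checked standalone:

	#include <stdio.h>

	int main(void)
	{
		unsigned int capacity = 0, capacityhigh = 1;	/* sample halves */
		unsigned long long sectors =
			(unsigned long long)capacity |
			((unsigned long long)capacityhigh << 32);

		/* 4294967296 sectors of 512 bytes = 2 TiB */
		printf("sectors=%llu bytes=%llu\n", sectors, sectors * 512);
		return 0;
	}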
@ -460,7 +479,7 @@ static int aac_get_container_name(struct scsi_cmnd * scsicmd, int cid)
* is updated in the struct fsa_dev_info structure rather than returned.
*/
static int probe_container(struct aac_dev *dev, int cid)
int probe_container(struct aac_dev *dev, int cid)
{
struct fsa_dev_info *fsa_dev_ptr;
int status;
@ -496,12 +515,30 @@ static int probe_container(struct aac_dev *dev, int cid)
dresp = (struct aac_mount *) fib_data(fibptr);
if ((le32_to_cpu(dresp->status) == ST_OK) &&
(le32_to_cpu(dresp->mnt[0].vol) == CT_NONE)) {
dinfo->command = cpu_to_le32(VM_NameServe64);
dinfo->count = cpu_to_le32(cid);
dinfo->type = cpu_to_le32(FT_FILESYS);
if (fib_send(ContainerCommand,
fibptr,
sizeof(struct aac_query_mount),
FsaNormal,
1, 1,
NULL, NULL) < 0)
goto error;
} else
dresp->mnt[0].capacityhigh = 0;
if ((le32_to_cpu(dresp->status) == ST_OK) &&
(le32_to_cpu(dresp->mnt[0].vol) != CT_NONE) &&
(le32_to_cpu(dresp->mnt[0].state) != FSCS_HIDDEN)) {
fsa_dev_ptr[cid].valid = 1;
fsa_dev_ptr[cid].type = le32_to_cpu(dresp->mnt[0].vol);
fsa_dev_ptr[cid].size = le32_to_cpu(dresp->mnt[0].capacity);
fsa_dev_ptr[cid].size
= ((u64)le32_to_cpu(dresp->mnt[0].capacity)) +
(((u64)le32_to_cpu(dresp->mnt[0].capacityhigh)) << 32);
if (le32_to_cpu(dresp->mnt[0].state) & FSCS_READONLY)
fsa_dev_ptr[cid].ro = 1;
}
@ -655,7 +692,7 @@ int aac_get_adapter_info(struct aac_dev* dev)
fibptr,
sizeof(*info),
FsaNormal,
1, 1,
-1, 1, /* First `interrupt' command uses special wait */
NULL,
NULL);
@ -806,8 +843,8 @@ int aac_get_adapter_info(struct aac_dev* dev)
if (!(dev->raw_io_interface)) {
dev->scsi_host_ptr->sg_tablesize = (dev->max_fib_size -
sizeof(struct aac_fibhdr) -
sizeof(struct aac_write) + sizeof(struct sgmap)) /
sizeof(struct sgmap);
sizeof(struct aac_write) + sizeof(struct sgentry)) /
sizeof(struct sgentry);
if (dev->dac_support) {
/*
* 38 scatter gather elements
@ -816,8 +853,8 @@ int aac_get_adapter_info(struct aac_dev* dev)
(dev->max_fib_size -
sizeof(struct aac_fibhdr) -
sizeof(struct aac_write64) +
sizeof(struct sgmap64)) /
sizeof(struct sgmap64);
sizeof(struct sgentry64)) /
sizeof(struct sgentry64);
}
dev->scsi_host_ptr->max_sectors = AAC_MAX_32BIT_SGBCOUNT;
if(!(dev->adapter_info.options & AAC_OPT_NEW_COMM)) {
@ -854,7 +891,40 @@ static void io_callback(void *context, struct fib * fibptr)
dev = (struct aac_dev *)scsicmd->device->host->hostdata;
cid = ID_LUN_TO_CONTAINER(scsicmd->device->id, scsicmd->device->lun);
dprintk((KERN_DEBUG "io_callback[cpu %d]: lba = %u, t = %ld.\n", smp_processor_id(), ((scsicmd->cmnd[1] & 0x1F) << 16) | (scsicmd->cmnd[2] << 8) | scsicmd->cmnd[3], jiffies));
if (nblank(dprintk(x))) {
u64 lba;
switch (scsicmd->cmnd[0]) {
case WRITE_6:
case READ_6:
lba = ((scsicmd->cmnd[1] & 0x1F) << 16) |
(scsicmd->cmnd[2] << 8) | scsicmd->cmnd[3];
break;
case WRITE_16:
case READ_16:
lba = ((u64)scsicmd->cmnd[2] << 56) |
((u64)scsicmd->cmnd[3] << 48) |
((u64)scsicmd->cmnd[4] << 40) |
((u64)scsicmd->cmnd[5] << 32) |
((u64)scsicmd->cmnd[6] << 24) |
(scsicmd->cmnd[7] << 16) |
(scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
break;
case WRITE_12:
case READ_12:
lba = ((u64)scsicmd->cmnd[2] << 24) |
(scsicmd->cmnd[3] << 16) |
(scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
break;
default:
lba = ((u64)scsicmd->cmnd[2] << 24) |
(scsicmd->cmnd[3] << 16) |
(scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
break;
}
printk(KERN_DEBUG
"io_callback[cpu %d]: lba = %llu, t = %ld.\n",
smp_processor_id(), (unsigned long long)lba, jiffies);
}
if (fibptr == NULL)
BUG();
@ -895,7 +965,7 @@ static void io_callback(void *context, struct fib * fibptr)
static int aac_read(struct scsi_cmnd * scsicmd, int cid)
{
u32 lba;
u64 lba;
u32 count;
int status;
@ -907,23 +977,69 @@ static int aac_read(struct scsi_cmnd * scsicmd, int cid)
/*
* Get block address and transfer length
*/
if (scsicmd->cmnd[0] == READ_6) /* 6 byte command */
{
switch (scsicmd->cmnd[0]) {
case READ_6:
dprintk((KERN_DEBUG "aachba: received a read(6) command on id %d.\n", cid));
lba = ((scsicmd->cmnd[1] & 0x1F) << 16) | (scsicmd->cmnd[2] << 8) | scsicmd->cmnd[3];
lba = ((scsicmd->cmnd[1] & 0x1F) << 16) |
(scsicmd->cmnd[2] << 8) | scsicmd->cmnd[3];
count = scsicmd->cmnd[4];
if (count == 0)
count = 256;
} else {
break;
case READ_16:
dprintk((KERN_DEBUG "aachba: received a read(16) command on id %d.\n", cid));
lba = ((u64)scsicmd->cmnd[2] << 56) |
((u64)scsicmd->cmnd[3] << 48) |
((u64)scsicmd->cmnd[4] << 40) |
((u64)scsicmd->cmnd[5] << 32) |
((u64)scsicmd->cmnd[6] << 24) |
(scsicmd->cmnd[7] << 16) |
(scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
count = (scsicmd->cmnd[10] << 24) |
(scsicmd->cmnd[11] << 16) |
(scsicmd->cmnd[12] << 8) | scsicmd->cmnd[13];
break;
case READ_12:
dprintk((KERN_DEBUG "aachba: received a read(12) command on id %d.\n", cid));
lba = ((u64)scsicmd->cmnd[2] << 24) |
(scsicmd->cmnd[3] << 16) |
(scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
count = (scsicmd->cmnd[6] << 24) |
(scsicmd->cmnd[7] << 16) |
(scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
break;
default:
dprintk((KERN_DEBUG "aachba: received a read(10) command on id %d.\n", cid));
lba = (scsicmd->cmnd[2] << 24) | (scsicmd->cmnd[3] << 16) | (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
lba = ((u64)scsicmd->cmnd[2] << 24) |
(scsicmd->cmnd[3] << 16) |
(scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
count = (scsicmd->cmnd[7] << 8) | scsicmd->cmnd[8];
break;
}
dprintk((KERN_DEBUG "aac_read[cpu %d]: lba = %u, t = %ld.\n",
dprintk((KERN_DEBUG "aac_read[cpu %d]: lba = %llu, t = %ld.\n",
smp_processor_id(), (unsigned long long)lba, jiffies));
if ((!(dev->raw_io_interface) || !(dev->raw_io_64)) &&
(lba & 0xffffffff00000000LL)) {
dprintk((KERN_DEBUG "aac_read: Illegal lba\n"));
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 |
SAM_STAT_CHECK_CONDITION;
set_sense((u8 *) &dev->fsa_dev[cid].sense_data,
HARDWARE_ERROR,
SENCODE_INTERNAL_TARGET_FAILURE,
ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0,
0, 0);
memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
(sizeof(dev->fsa_dev[cid].sense_data) > sizeof(scsicmd->sense_buffer))
? sizeof(scsicmd->sense_buffer)
: sizeof(dev->fsa_dev[cid].sense_data));
scsicmd->scsi_done(scsicmd);
return 0;
}
/*
* Allocate and initialize a Fib
*/
@ -936,8 +1052,8 @@ static int aac_read(struct scsi_cmnd * scsicmd, int cid)
if (dev->raw_io_interface) {
struct aac_raw_io *readcmd;
readcmd = (struct aac_raw_io *) fib_data(cmd_fibcontext);
readcmd->block[0] = cpu_to_le32(lba);
readcmd->block[1] = 0;
readcmd->block[0] = cpu_to_le32((u32)(lba&0xffffffff));
readcmd->block[1] = cpu_to_le32((u32)((lba&0xffffffff00000000LL)>>32));
readcmd->count = cpu_to_le32(count<<9);
readcmd->cid = cpu_to_le16(cid);
readcmd->flags = cpu_to_le16(1);
@ -964,7 +1080,7 @@ static int aac_read(struct scsi_cmnd * scsicmd, int cid)
readcmd->command = cpu_to_le32(VM_CtHostRead64);
readcmd->cid = cpu_to_le16(cid);
readcmd->sector_count = cpu_to_le16(count);
readcmd->block = cpu_to_le32(lba);
readcmd->block = cpu_to_le32((u32)(lba&0xffffffff));
readcmd->pad = 0;
readcmd->flags = 0;
@ -989,7 +1105,7 @@ static int aac_read(struct scsi_cmnd * scsicmd, int cid)
readcmd = (struct aac_read *) fib_data(cmd_fibcontext);
readcmd->command = cpu_to_le32(VM_CtBlockRead);
readcmd->cid = cpu_to_le32(cid);
readcmd->block = cpu_to_le32(lba);
readcmd->block = cpu_to_le32((u32)(lba&0xffffffff));
readcmd->count = cpu_to_le32(count * 512);
aac_build_sg(scsicmd, &readcmd->sg);
@ -1031,7 +1147,7 @@ static int aac_read(struct scsi_cmnd * scsicmd, int cid)
static int aac_write(struct scsi_cmnd * scsicmd, int cid)
{
u32 lba;
u64 lba;
u32 count;
int status;
u16 fibsize;
@ -1048,13 +1164,48 @@ static int aac_write(struct scsi_cmnd * scsicmd, int cid)
count = scsicmd->cmnd[4];
if (count == 0)
count = 256;
} else if (scsicmd->cmnd[0] == WRITE_16) { /* 16 byte command */
dprintk((KERN_DEBUG "aachba: received a write(16) command on id %d.\n", cid));
lba = ((u64)scsicmd->cmnd[2] << 56) |
((u64)scsicmd->cmnd[3] << 48) |
((u64)scsicmd->cmnd[4] << 40) |
((u64)scsicmd->cmnd[5] << 32) |
((u64)scsicmd->cmnd[6] << 24) |
(scsicmd->cmnd[7] << 16) |
(scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
count = (scsicmd->cmnd[10] << 24) | (scsicmd->cmnd[11] << 16) |
(scsicmd->cmnd[12] << 8) | scsicmd->cmnd[13];
} else if (scsicmd->cmnd[0] == WRITE_12) { /* 12 byte command */
dprintk((KERN_DEBUG "aachba: received a write(12) command on id %d.\n", cid));
lba = ((u64)scsicmd->cmnd[2] << 24) | (scsicmd->cmnd[3] << 16)
| (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
count = (scsicmd->cmnd[6] << 24) | (scsicmd->cmnd[7] << 16)
| (scsicmd->cmnd[8] << 8) | scsicmd->cmnd[9];
} else {
dprintk((KERN_DEBUG "aachba: received a write(10) command on id %d.\n", cid));
lba = (scsicmd->cmnd[2] << 24) | (scsicmd->cmnd[3] << 16) | (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
lba = ((u64)scsicmd->cmnd[2] << 24) | (scsicmd->cmnd[3] << 16) | (scsicmd->cmnd[4] << 8) | scsicmd->cmnd[5];
count = (scsicmd->cmnd[7] << 8) | scsicmd->cmnd[8];
}
dprintk((KERN_DEBUG "aac_write[cpu %d]: lba = %u, t = %ld.\n",
dprintk((KERN_DEBUG "aac_write[cpu %d]: lba = %llu, t = %ld.\n",
smp_processor_id(), (unsigned long long)lba, jiffies));
if ((!(dev->raw_io_interface) || !(dev->raw_io_64))
&& (lba & 0xffffffff00000000LL)) {
dprintk((KERN_DEBUG "aac_write: Illegal lba\n"));
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_CHECK_CONDITION;
set_sense((u8 *) &dev->fsa_dev[cid].sense_data,
HARDWARE_ERROR,
SENCODE_INTERNAL_TARGET_FAILURE,
ASENCODE_INTERNAL_TARGET_FAILURE, 0, 0,
0, 0);
memcpy(scsicmd->sense_buffer, &dev->fsa_dev[cid].sense_data,
(sizeof(dev->fsa_dev[cid].sense_data) > sizeof(scsicmd->sense_buffer))
? sizeof(scsicmd->sense_buffer)
: sizeof(dev->fsa_dev[cid].sense_data));
scsicmd->scsi_done(scsicmd);
return 0;
}
/*
* Allocate and initialize a Fib then setup a BlockWrite command
*/
@ -1068,8 +1219,8 @@ static int aac_write(struct scsi_cmnd * scsicmd, int cid)
if (dev->raw_io_interface) {
struct aac_raw_io *writecmd;
writecmd = (struct aac_raw_io *) fib_data(cmd_fibcontext);
writecmd->block[0] = cpu_to_le32(lba);
writecmd->block[1] = 0;
writecmd->block[0] = cpu_to_le32((u32)(lba&0xffffffff));
writecmd->block[1] = cpu_to_le32((u32)((lba&0xffffffff00000000LL)>>32));
writecmd->count = cpu_to_le32(count<<9);
writecmd->cid = cpu_to_le16(cid);
writecmd->flags = 0;
@ -1096,7 +1247,7 @@ static int aac_write(struct scsi_cmnd * scsicmd, int cid)
writecmd->command = cpu_to_le32(VM_CtHostWrite64);
writecmd->cid = cpu_to_le16(cid);
writecmd->sector_count = cpu_to_le16(count);
writecmd->block = cpu_to_le32(lba);
writecmd->block = cpu_to_le32((u32)(lba&0xffffffff));
writecmd->pad = 0;
writecmd->flags = 0;
@ -1121,7 +1272,7 @@ static int aac_write(struct scsi_cmnd * scsicmd, int cid)
writecmd = (struct aac_write *) fib_data(cmd_fibcontext);
writecmd->command = cpu_to_le32(VM_CtBlockWrite);
writecmd->cid = cpu_to_le32(cid);
writecmd->block = cpu_to_le32(lba);
writecmd->block = cpu_to_le32((u32)(lba&0xffffffff));
writecmd->count = cpu_to_le32(count * 512);
writecmd->sg.count = cpu_to_le32(1);
/* ->stable is not used - it did mean which type of write */
@ -1310,11 +1461,18 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
*/
if ((fsa_dev_ptr[cid].valid & 1) == 0) {
switch (scsicmd->cmnd[0]) {
case SERVICE_ACTION_IN:
if (!(dev->raw_io_interface) ||
!(dev->raw_io_64) ||
((scsicmd->cmnd[1] & 0x1f) != SAI_READ_CAPACITY_16))
break;
case INQUIRY:
case READ_CAPACITY:
case TEST_UNIT_READY:
spin_unlock_irq(host->host_lock);
probe_container(dev, cid);
if ((fsa_dev_ptr[cid].valid & 1) == 0)
fsa_dev_ptr[cid].valid = 0;
spin_lock_irq(host->host_lock);
if (fsa_dev_ptr[cid].valid == 0) {
scsicmd->result = DID_NO_CONNECT << 16;
@ -1375,7 +1533,6 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
memset(&inq_data, 0, sizeof (struct inquiry_data));
inq_data.inqd_ver = 2; /* claim compliance to SCSI-2 */
inq_data.inqd_dtq = 0x80; /* set RMB bit to one indicating that the medium is removable */
inq_data.inqd_rdf = 2; /* A response data format value of two indicates that the data shall be in the format specified in SCSI-2 */
inq_data.inqd_len = 31;
/*Format for "pad2" is RelAdr | WBus32 | WBus16 | Sync | Linked |Reserved| CmdQue | SftRe */
@ -1397,13 +1554,55 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
aac_internal_transfer(scsicmd, &inq_data, 0, sizeof(inq_data));
return aac_get_container_name(scsicmd, cid);
}
case SERVICE_ACTION_IN:
if (!(dev->raw_io_interface) ||
!(dev->raw_io_64) ||
((scsicmd->cmnd[1] & 0x1f) != SAI_READ_CAPACITY_16))
break;
{
u64 capacity;
char cp[12];
unsigned int offset = 0;
dprintk((KERN_DEBUG "READ CAPACITY_16 command.\n"));
capacity = fsa_dev_ptr[cid].size - 1;
if (scsicmd->cmnd[13] > 12) {
offset = scsicmd->cmnd[13] - 12;
if (offset > sizeof(cp))
break;
memset(cp, 0, offset);
aac_internal_transfer(scsicmd, cp, 0, offset);
}
cp[0] = (capacity >> 56) & 0xff;
cp[1] = (capacity >> 48) & 0xff;
cp[2] = (capacity >> 40) & 0xff;
cp[3] = (capacity >> 32) & 0xff;
cp[4] = (capacity >> 24) & 0xff;
cp[5] = (capacity >> 16) & 0xff;
cp[6] = (capacity >> 8) & 0xff;
cp[7] = (capacity >> 0) & 0xff;
cp[8] = 0;
cp[9] = 0;
cp[10] = 2;
cp[11] = 0;
aac_internal_transfer(scsicmd, cp, offset, sizeof(cp));
/* Do not cache partition table for arrays */
scsicmd->device->removable = 1;
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_GOOD;
scsicmd->scsi_done(scsicmd);
return 0;
}
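/*
 * Worked example of the READ CAPACITY(16) payload built above
 * (illustration only): for a container of 0x12345678 blocks the last
 * LBA is 0x12345677, so
 *
 *	cp[0..7]  = 00 00 00 00 12 34 56 77  (returned logical block address)
 *	cp[8..11] = 00 00 02 00              (block length = 512)
 *
 * A minimal big-endian store sketch (hypothetical helper, not part of
 * the driver):
 *
 *	static void put_be64(u8 *p, u64 v)
 *	{
 *		int i;
 *
 *		for (i = 0; i < 8; i++)
 *			p[i] = v >> (56 - 8 * i);	// bytes MSB first
 *	}
 */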
case READ_CAPACITY:
{
u32 capacity;
char cp[8];
dprintk((KERN_DEBUG "READ CAPACITY command.\n"));
if (fsa_dev_ptr[cid].size <= 0x100000000LL)
if (fsa_dev_ptr[cid].size <= 0x100000000ULL)
capacity = fsa_dev_ptr[cid].size - 1;
else
capacity = (u32)-1;
@ -1417,6 +1616,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
cp[6] = 2;
cp[7] = 0;
aac_internal_transfer(scsicmd, cp, 0, sizeof(cp));
/* Do not cache partition table for arrays */
scsicmd->device->removable = 1;
scsicmd->result = DID_OK << 16 | COMMAND_COMPLETE << 8 | SAM_STAT_GOOD;
scsicmd->scsi_done(scsicmd);
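/*
 * Note on the clamp above (descriptive): READ CAPACITY(10) can only
 * return a 32-bit last LBA, so any container of 0x100000000 or more
 * 512-byte blocks (2TB and up) reports 0xffffffff, the standard SCSI
 * cue for the initiator to retry with READ CAPACITY(16) via
 * SERVICE ACTION IN, handled earlier in this switch.
 */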
@ -1497,6 +1698,8 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
{
case READ_6:
case READ_10:
case READ_12:
case READ_16:
/*
* Hack to keep track of ordinal number of the device that
* corresponds to a container. Needed to convert
@ -1504,17 +1707,19 @@ int aac_scsi_cmd(struct scsi_cmnd * scsicmd)
*/
spin_unlock_irq(host->host_lock);
if (scsicmd->request->rq_disk)
memcpy(fsa_dev_ptr[cid].devname,
scsicmd->request->rq_disk->disk_name,
8);
if (scsicmd->request->rq_disk)
strlcpy(fsa_dev_ptr[cid].devname,
scsicmd->request->rq_disk->disk_name,
min(sizeof(fsa_dev_ptr[cid].devname),
sizeof(scsicmd->request->rq_disk->disk_name) + 1));
ret = aac_read(scsicmd, cid);
spin_lock_irq(host->host_lock);
return ret;
case WRITE_6:
case WRITE_10:
case WRITE_12:
case WRITE_16:
spin_unlock_irq(host->host_lock);
ret = aac_write(scsicmd, cid);
spin_lock_irq(host->host_lock);
@ -1745,6 +1950,8 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
case WRITE_10:
case READ_12:
case WRITE_12:
case READ_16:
case WRITE_16:
if(le32_to_cpu(srbreply->data_xfer_length) < scsicmd->underflow ) {
printk(KERN_WARNING"aacraid: SCSI CMD underflow\n");
} else {
@ -1850,8 +2057,8 @@ static void aac_srb_callback(void *context, struct fib * fibptr)
sizeof(scsicmd->sense_buffer) :
le32_to_cpu(srbreply->sense_data_size);
#ifdef AAC_DETAILED_STATUS_INFO
dprintk((KERN_WARNING "aac_srb_callback: check condition, status = %d len=%d\n",
le32_to_cpu(srbreply->status), len));
printk(KERN_WARNING "aac_srb_callback: check condition, status = %d len=%d\n",
le32_to_cpu(srbreply->status), len);
#endif
memcpy(scsicmd->sense_buffer, srbreply->sense_data, len);


@ -1,6 +1,10 @@
#if (!defined(dprintk))
# define dprintk(x)
#endif
/* eg: if (nblank(dprintk(x))) */
#define _nblank(x) #x
#define nblank(x) _nblank(x)[0]
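/*
 * How the nblank() trick works (descriptive note): _nblank(x) turns its
 * argument into a string literal and nblank(x) reads that string's first
 * character.  When dprintk(x) expands to nothing the string is "" and
 * the first character is '\0', i.e. a compile-time false; when dprintk
 * is a real statement the first character is non-zero.  So code such as
 *
 *	if (nblank(dprintk(x))) {
 *		... debug-only work ...
 *	}
 *
 * is a constant condition the compiler can discard entirely when
 * dprintk is stubbed out.
 */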
/*------------------------------------------------------------------------------
* D E F I N E S
@ -302,7 +306,6 @@ enum aac_queue_types {
*/
#define FsaNormal 1
#define FsaHigh 2
/*
* Define the FIB. The FIB is the where all the requested data and
@ -546,8 +549,6 @@ struct aac_queue {
/* This is only valid for adapter to host command queues. */
spinlock_t *lock; /* Spinlock for this queue; must take this lock before accessing the queue */
spinlock_t lockdata; /* Actual lock (used only on one side of the lock) */
unsigned long SavedIrql; /* Previous IRQL when the spin lock is taken */
u32 padding; /* Padding - FIXME - can remove I believe */
struct list_head cmdq; /* A queue of FIBs which need to be processed by the FS thread. This is */
/* only valid for command queues which receive entries from the adapter. */
struct list_head pendingq; /* A queue of outstanding fib's to the adapter. */
@ -776,7 +777,9 @@ struct fsa_dev_info {
u64 last;
u64 size;
u32 type;
u32 config_waiting_on;
u16 queue_depth;
u8 config_needed;
u8 valid;
u8 ro;
u8 locked;
@ -1012,6 +1015,7 @@ struct aac_dev
/* macro side-effects BEWARE */
# define raw_io_interface \
init->InitStructRevision==cpu_to_le32(ADAPTER_INIT_STRUCT_REVISION_4)
u8 raw_io_64;
u8 printf_enabled;
};
@ -1362,8 +1366,10 @@ struct aac_srb_reply
#define VM_CtBlockVerify64 18
#define VM_CtHostRead64 19
#define VM_CtHostWrite64 20
#define VM_DrvErrTblLog 21
#define VM_NameServe64 22
#define MAX_VMCOMMAND_NUM 21 /* used for sizing stats array - leave last */
#define MAX_VMCOMMAND_NUM 23 /* used for sizing stats array - leave last */
/*
* Descriptive information (eg, vital stats)
@ -1472,6 +1478,7 @@ struct aac_mntent {
manager (eg, filesystem) */
__le32 altoid; /* != oid <==> snapshot or
broken mirror exists */
__le32 capacityhigh;
};
#define FSCS_NOTCLEAN 0x0001 /* fsck is necessary before mounting */
@ -1707,6 +1714,7 @@ extern struct aac_common aac_config;
#define AifCmdJobProgress 2 /* Progress report */
#define AifJobCtrZero 101 /* Array Zero progress */
#define AifJobStsSuccess 1 /* Job completes */
#define AifJobStsRunning 102 /* Job running */
#define AifCmdAPIReport 3 /* Report from other user of API */
#define AifCmdDriverNotify 4 /* Notify host driver of event */
#define AifDenMorphComplete 200 /* A morph operation completed */
@ -1777,6 +1785,7 @@ int fib_adapter_complete(struct fib * fibptr, unsigned short size);
struct aac_driver_ident* aac_get_driver_ident(int devtype);
int aac_get_adapter_info(struct aac_dev* dev);
int aac_send_shutdown(struct aac_dev *dev);
int probe_container(struct aac_dev *dev, int cid);
extern int numacb;
extern int acbsize;
extern char aac_driver_version[];


@ -195,7 +195,7 @@ int aac_send_shutdown(struct aac_dev * dev)
fibctx,
sizeof(struct aac_close),
FsaNormal,
1, 1,
-2 /* Timeout silently */, 1,
NULL, NULL);
if (status == 0)
@ -313,8 +313,15 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
dev->max_fib_size = sizeof(struct hw_fib);
dev->sg_tablesize = host->sg_tablesize = (dev->max_fib_size
- sizeof(struct aac_fibhdr)
- sizeof(struct aac_write) + sizeof(struct sgmap))
/ sizeof(struct sgmap);
- sizeof(struct aac_write) + sizeof(struct sgentry))
/ sizeof(struct sgentry);
dev->raw_io_64 = 0;
if ((!aac_adapter_sync_cmd(dev, GET_ADAPTER_PROPERTIES,
0, 0, 0, 0, 0, 0, status+0, status+1, status+2, NULL, NULL)) &&
(status[0] == 0x00000001)) {
if (status[1] & AAC_OPT_NEW_COMM_64)
dev->raw_io_64 = 1;
}
if ((!aac_adapter_sync_cmd(dev, GET_COMM_PREFERRED_SETTINGS,
0, 0, 0, 0, 0, 0,
status+0, status+1, status+2, status+3, status+4))
@ -342,8 +349,8 @@ struct aac_dev *aac_init_adapter(struct aac_dev *dev)
dev->max_fib_size = 512;
dev->sg_tablesize = host->sg_tablesize
= (512 - sizeof(struct aac_fibhdr)
- sizeof(struct aac_write) + sizeof(struct sgmap))
/ sizeof(struct sgmap);
- sizeof(struct aac_write) + sizeof(struct sgentry))
/ sizeof(struct sgentry);
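/*
 * Worked example of the sizing formula above, with sizes assumed purely
 * for illustration: if sizeof(struct aac_fibhdr) were 32,
 * sizeof(struct aac_write) 24 and sizeof(struct sgentry) 8, a 512-byte
 * FIB would hold (512 - 32 - 24 + 8) / 8 = 58 entries.  The added
 * sizeof(struct sgentry) term puts back the first element already
 * embedded in struct aac_write's scatter-gather map.
 */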
host->can_queue = AAC_NUM_IO_FIB;
} else if (acbsize == 2048) {
host->max_sectors = 512;


@ -39,7 +39,9 @@
#include <linux/completion.h>
#include <linux/blkdev.h>
#include <scsi/scsi_host.h>
#include <scsi/scsi_device.h>
#include <asm/semaphore.h>
#include <asm/delay.h>
#include "aacraid.h"
@ -269,40 +271,22 @@ static int aac_get_entry (struct aac_dev * dev, u32 qid, struct aac_entry **entr
/* Interrupt Moderation, only interrupt for first two entries */
if (idx != le32_to_cpu(*(q->headers.consumer))) {
if (--idx == 0) {
if (qid == AdapHighCmdQueue)
idx = ADAP_HIGH_CMD_ENTRIES;
else if (qid == AdapNormCmdQueue)
if (qid == AdapNormCmdQueue)
idx = ADAP_NORM_CMD_ENTRIES;
else if (qid == AdapHighRespQueue)
idx = ADAP_HIGH_RESP_ENTRIES;
else if (qid == AdapNormRespQueue)
else
idx = ADAP_NORM_RESP_ENTRIES;
}
if (idx != le32_to_cpu(*(q->headers.consumer)))
*nonotify = 1;
}
if (qid == AdapHighCmdQueue) {
if (*index >= ADAP_HIGH_CMD_ENTRIES)
*index = 0;
} else if (qid == AdapNormCmdQueue) {
if (qid == AdapNormCmdQueue) {
if (*index >= ADAP_NORM_CMD_ENTRIES)
*index = 0; /* Wrap to front of the Producer Queue. */
}
else if (qid == AdapHighRespQueue)
{
if (*index >= ADAP_HIGH_RESP_ENTRIES)
*index = 0;
}
else if (qid == AdapNormRespQueue)
{
} else {
if (*index >= ADAP_NORM_RESP_ENTRIES)
*index = 0; /* Wrap to front of the Producer Queue. */
}
else {
printk("aacraid: invalid qid\n");
BUG();
}
if ((*index + 1) == le32_to_cpu(*(q->headers.consumer))) { /* Queue is full */
printk(KERN_WARNING "Queue %d full, %u outstanding.\n",
@ -334,12 +318,8 @@ static int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_f
{
struct aac_entry * entry = NULL;
int map = 0;
struct aac_queue * q = &dev->queues->queue[qid];
spin_lock_irqsave(q->lock, q->SavedIrql);
if (qid == AdapHighCmdQueue || qid == AdapNormCmdQueue)
{
if (qid == AdapNormCmdQueue) {
/* if no entries wait for some if caller wants to */
while (!aac_get_entry(dev, qid, &entry, index, nonotify))
{
@ -350,9 +330,7 @@ static int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_f
*/
entry->size = cpu_to_le32(le16_to_cpu(hw_fib->header.Size));
map = 1;
}
else if (qid == AdapHighRespQueue || qid == AdapNormRespQueue)
{
} else {
while(!aac_get_entry(dev, qid, &entry, index, nonotify))
{
/* if no entries wait for some if caller wants to */
@ -375,42 +353,6 @@ static int aac_queue_get(struct aac_dev * dev, u32 * index, u32 qid, struct hw_f
return 0;
}
/**
* aac_insert_entry - insert a queue entry
* @dev: Adapter
* @index: Index of entry to insert
* @qid: Queue number
* @nonotify: Suppress adapter notification
*
* Gets the next free QE off the requested priority adapter command
* queue and associates the Fib with the QE. The QE represented by
* index is ready to insert on the queue when this routine returns
* success.
*/
static int aac_insert_entry(struct aac_dev * dev, u32 index, u32 qid, unsigned long nonotify)
{
struct aac_queue * q = &dev->queues->queue[qid];
if(q == NULL)
BUG();
*(q->headers.producer) = cpu_to_le32(index + 1);
spin_unlock_irqrestore(q->lock, q->SavedIrql);
if (qid == AdapHighCmdQueue ||
qid == AdapNormCmdQueue ||
qid == AdapHighRespQueue ||
qid == AdapNormRespQueue)
{
if (!nonotify)
aac_adapter_notify(dev, qid);
}
else
printk("Suprise insert!\n");
return 0;
}
/*
* Define the highest level of host to adapter communication routines.
* These routines will support host to adapter FS communication. These
@ -439,12 +381,13 @@ static int aac_insert_entry(struct aac_dev * dev, u32 index, u32 qid, unsigned l
int fib_send(u16 command, struct fib * fibptr, unsigned long size, int priority, int wait, int reply, fib_callback callback, void * callback_data)
{
u32 index;
u32 qid;
struct aac_dev * dev = fibptr->dev;
unsigned long nointr = 0;
struct hw_fib * hw_fib = fibptr->hw_fib;
struct aac_queue * q;
unsigned long flags = 0;
unsigned long qflags;
if (!(hw_fib->header.XferState & cpu_to_le32(HostOwned)))
return -EBUSY;
/*
@ -497,26 +440,8 @@ int fib_send(u16 command, struct fib * fibptr, unsigned long size, int priority
* Get a queue entry, connect the FIB to it and notify the
* adapter that a command is ready.
*/
if (priority == FsaHigh) {
hw_fib->header.XferState |= cpu_to_le32(HighPriority);
qid = AdapHighCmdQueue;
} else {
hw_fib->header.XferState |= cpu_to_le32(NormalPriority);
qid = AdapNormCmdQueue;
}
q = &dev->queues->queue[qid];
hw_fib->header.XferState |= cpu_to_le32(NormalPriority);
if(wait)
spin_lock_irqsave(&fibptr->event_lock, flags);
if(aac_queue_get( dev, &index, qid, hw_fib, 1, fibptr, &nointr)<0)
return -EWOULDBLOCK;
dprintk((KERN_DEBUG "fib_send: inserting a queue entry at index %d.\n",index));
dprintk((KERN_DEBUG "Fib contents:.\n"));
dprintk((KERN_DEBUG " Command = %d.\n", hw_fib->header.Command));
dprintk((KERN_DEBUG " XferState = %x.\n", hw_fib->header.XferState));
dprintk((KERN_DEBUG " hw_fib va being sent=%p\n",fibptr->hw_fib));
dprintk((KERN_DEBUG " hw_fib pa being sent=%lx\n",(ulong)fibptr->hw_fib_pa));
dprintk((KERN_DEBUG " fib being sent=%p\n",fibptr));
/*
* Fill in the Callback and CallbackContext if we are not
* going to wait.
@ -525,22 +450,67 @@ int fib_send(u16 command, struct fib * fibptr, unsigned long size, int priority
fibptr->callback = callback;
fibptr->callback_data = callback_data;
}
FIB_COUNTER_INCREMENT(aac_config.FibsSent);
list_add_tail(&fibptr->queue, &q->pendingq);
q->numpending++;
fibptr->done = 0;
fibptr->flags = 0;
if(aac_insert_entry(dev, index, qid, (nointr & aac_config.irq_mod)) < 0)
return -EWOULDBLOCK;
FIB_COUNTER_INCREMENT(aac_config.FibsSent);
dprintk((KERN_DEBUG "fib_send: inserting a queue entry at index %d.\n",index));
dprintk((KERN_DEBUG "Fib contents:.\n"));
dprintk((KERN_DEBUG " Command = %d.\n", hw_fib->header.Command));
dprintk((KERN_DEBUG " XferState = %x.\n", hw_fib->header.XferState));
dprintk((KERN_DEBUG " hw_fib va being sent=%p\n",fibptr->hw_fib));
dprintk((KERN_DEBUG " hw_fib pa being sent=%lx\n",(ulong)fibptr->hw_fib_pa));
dprintk((KERN_DEBUG " fib being sent=%p\n",fibptr));
q = &dev->queues->queue[AdapNormCmdQueue];
if(wait)
spin_lock_irqsave(&fibptr->event_lock, flags);
spin_lock_irqsave(q->lock, qflags);
aac_queue_get( dev, &index, AdapNormCmdQueue, hw_fib, 1, fibptr, &nointr);
list_add_tail(&fibptr->queue, &q->pendingq);
q->numpending++;
*(q->headers.producer) = cpu_to_le32(index + 1);
spin_unlock_irqrestore(q->lock, qflags);
if (!(nointr & aac_config.irq_mod))
aac_adapter_notify(dev, AdapNormCmdQueue);
/*
* If the caller wanted us to wait for response wait now.
*/
if (wait) {
spin_unlock_irqrestore(&fibptr->event_lock, flags);
down(&fibptr->event_wait);
/* Only set for first known interruptible command */
if (wait < 0) {
/*
* *VERY* Dangerous to time out a command, the
* assumption is made that we have no hope of
* functioning because an interrupt routing or other
* hardware failure has occurred.
*/
unsigned long count = 36000000L; /* 3 minutes */
unsigned long qflags;
while (down_trylock(&fibptr->event_wait)) {
if (--count == 0) {
spin_lock_irqsave(q->lock, qflags);
q->numpending--;
list_del(&fibptr->queue);
spin_unlock_irqrestore(q->lock, qflags);
if (wait == -1) {
printk(KERN_ERR "aacraid: fib_send: first asynchronous command timed out.\n"
"Usually a result of a PCI interrupt routing problem;\n"
"update mother board BIOS or consider utilizing one of\n"
"the SAFE mode kernel options (acpi, apic etc)\n");
}
return -ETIMEDOUT;
}
udelay(5);
}
} else
down(&fibptr->event_wait);
if(fibptr->done == 0)
BUG();
@ -622,15 +592,9 @@ void aac_consumer_free(struct aac_dev * dev, struct aac_queue *q, u32 qid)
case HostNormCmdQueue:
notify = HostNormCmdNotFull;
break;
case HostHighCmdQueue:
notify = HostHighCmdNotFull;
break;
case HostNormRespQueue:
notify = HostNormRespNotFull;
break;
case HostHighRespQueue:
notify = HostHighRespNotFull;
break;
default:
BUG();
return;
@ -652,9 +616,13 @@ int fib_adapter_complete(struct fib * fibptr, unsigned short size)
{
struct hw_fib * hw_fib = fibptr->hw_fib;
struct aac_dev * dev = fibptr->dev;
struct aac_queue * q;
unsigned long nointr = 0;
if (hw_fib->header.XferState == 0)
unsigned long qflags;
if (hw_fib->header.XferState == 0) {
return 0;
}
/*
* If we plan to do anything check the structure type first.
*/
@ -669,37 +637,21 @@ int fib_adapter_complete(struct fib * fibptr, unsigned short size)
* send the completed cdb to the adapter.
*/
if (hw_fib->header.XferState & cpu_to_le32(SentFromAdapter)) {
u32 index;
hw_fib->header.XferState |= cpu_to_le32(HostProcessed);
if (hw_fib->header.XferState & cpu_to_le32(HighPriority)) {
u32 index;
if (size)
{
size += sizeof(struct aac_fibhdr);
if (size > le16_to_cpu(hw_fib->header.SenderSize))
return -EMSGSIZE;
hw_fib->header.Size = cpu_to_le16(size);
}
if(aac_queue_get(dev, &index, AdapHighRespQueue, hw_fib, 1, NULL, &nointr) < 0) {
return -EWOULDBLOCK;
}
if (aac_insert_entry(dev, index, AdapHighRespQueue, (nointr & (int)aac_config.irq_mod)) != 0) {
}
} else if (hw_fib->header.XferState &
cpu_to_le32(NormalPriority)) {
u32 index;
if (size) {
size += sizeof(struct aac_fibhdr);
if (size > le16_to_cpu(hw_fib->header.SenderSize))
return -EMSGSIZE;
hw_fib->header.Size = cpu_to_le16(size);
}
if (aac_queue_get(dev, &index, AdapNormRespQueue, hw_fib, 1, NULL, &nointr) < 0)
return -EWOULDBLOCK;
if (aac_insert_entry(dev, index, AdapNormRespQueue, (nointr & (int)aac_config.irq_mod)) != 0)
{
}
if (size) {
size += sizeof(struct aac_fibhdr);
if (size > le16_to_cpu(hw_fib->header.SenderSize))
return -EMSGSIZE;
hw_fib->header.Size = cpu_to_le16(size);
}
q = &dev->queues->queue[AdapNormRespQueue];
spin_lock_irqsave(q->lock, qflags);
aac_queue_get(dev, &index, AdapNormRespQueue, hw_fib, 1, NULL, &nointr);
*(q->headers.producer) = cpu_to_le32(index + 1);
spin_unlock_irqrestore(q->lock, qflags);
if (!(nointr & (int)aac_config.irq_mod))
aac_adapter_notify(dev, AdapNormRespQueue);
}
else
{
@ -791,6 +743,268 @@ void aac_printf(struct aac_dev *dev, u32 val)
memset(cp, 0, 256);
}
/**
* aac_handle_aif - Handle a message from the firmware
* @dev: Which adapter this fib is from
* @fibptr: Pointer to fibptr from adapter
*
* This routine handles a driver notify fib from the adapter and
* dispatches it to the appropriate routine for handling.
*/
static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
{
struct hw_fib * hw_fib = fibptr->hw_fib;
struct aac_aifcmd * aifcmd = (struct aac_aifcmd *)hw_fib->data;
int busy;
u32 container;
struct scsi_device *device;
enum {
NOTHING,
DELETE,
ADD,
CHANGE
} device_config_needed;
/* Sniff for container changes */
if (!dev)
return;
container = (u32)-1;
/*
* We have set this up to try and minimize the number of
* re-configures that take place. As a result of this when
* certain AIF's come in we will set a flag waiting for another
* type of AIF before setting the re-config flag.
*/
switch (le32_to_cpu(aifcmd->command)) {
case AifCmdDriverNotify:
switch (le32_to_cpu(((u32 *)aifcmd->data)[0])) {
/*
* Morph or Expand complete
*/
case AifDenMorphComplete:
case AifDenVolumeExtendComplete:
container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
if (container >= dev->maximum_num_containers)
break;
/*
* Find the Scsi_Device associated with the SCSI
* address. Make sure we have the right array, and if
* so set the flag to initiate a new re-config once we
* see an AifEnConfigChange AIF come through.
*/
if ((dev != NULL) && (dev->scsi_host_ptr != NULL)) {
device = scsi_device_lookup(dev->scsi_host_ptr,
CONTAINER_TO_CHANNEL(container),
CONTAINER_TO_ID(container),
CONTAINER_TO_LUN(container));
if (device) {
dev->fsa_dev[container].config_needed = CHANGE;
dev->fsa_dev[container].config_waiting_on = AifEnConfigChange;
scsi_device_put(device);
}
}
}
/*
* If we are waiting on something and this happens to be
* that thing then set the re-configure flag.
*/
if (container != (u32)-1) {
if (container >= dev->maximum_num_containers)
break;
if (dev->fsa_dev[container].config_waiting_on ==
le32_to_cpu(*(u32 *)aifcmd->data))
dev->fsa_dev[container].config_waiting_on = 0;
} else for (container = 0;
container < dev->maximum_num_containers; ++container) {
if (dev->fsa_dev[container].config_waiting_on ==
le32_to_cpu(*(u32 *)aifcmd->data))
dev->fsa_dev[container].config_waiting_on = 0;
}
break;
case AifCmdEventNotify:
switch (le32_to_cpu(((u32 *)aifcmd->data)[0])) {
/*
* Add an Array.
*/
case AifEnAddContainer:
container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
if (container >= dev->maximum_num_containers)
break;
dev->fsa_dev[container].config_needed = ADD;
dev->fsa_dev[container].config_waiting_on =
AifEnConfigChange;
break;
/*
* Delete an Array.
*/
case AifEnDeleteContainer:
container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
if (container >= dev->maximum_num_containers)
break;
dev->fsa_dev[container].config_needed = DELETE;
dev->fsa_dev[container].config_waiting_on =
AifEnConfigChange;
break;
/*
* Container change detected. If we currently are not
* waiting on something else, setup to wait on a Config Change.
*/
case AifEnContainerChange:
container = le32_to_cpu(((u32 *)aifcmd->data)[1]);
if (container >= dev->maximum_num_containers)
break;
if (dev->fsa_dev[container].config_waiting_on)
break;
dev->fsa_dev[container].config_needed = CHANGE;
dev->fsa_dev[container].config_waiting_on =
AifEnConfigChange;
break;
case AifEnConfigChange:
break;
}
/*
* If we are waiting on something and this happens to be
* that thing then set the re-configure flag.
*/
if (container != (u32)-1) {
if (container >= dev->maximum_num_containers)
break;
if (dev->fsa_dev[container].config_waiting_on ==
le32_to_cpu(*(u32 *)aifcmd->data))
dev->fsa_dev[container].config_waiting_on = 0;
} else for (container = 0;
container < dev->maximum_num_containers; ++container) {
if (dev->fsa_dev[container].config_waiting_on ==
le32_to_cpu(*(u32 *)aifcmd->data))
dev->fsa_dev[container].config_waiting_on = 0;
}
break;
case AifCmdJobProgress:
/*
* These are job progress AIF's. When a Clear is being
* done on a container it is initially created then hidden from
* the OS. When the clear completes we don't get a config
* change so we monitor the job status complete on a clear then
* wait for a container change.
*/
if ((((u32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero))
&& ((((u32 *)aifcmd->data)[6] == ((u32 *)aifcmd->data)[5])
|| (((u32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsSuccess)))) {
for (container = 0;
container < dev->maximum_num_containers;
++container) {
/*
* Stomp on all config sequencing for all
* containers?
*/
dev->fsa_dev[container].config_waiting_on =
AifEnContainerChange;
dev->fsa_dev[container].config_needed = ADD;
}
}
if ((((u32 *)aifcmd->data)[1] == cpu_to_le32(AifJobCtrZero))
&& (((u32 *)aifcmd->data)[6] == 0)
&& (((u32 *)aifcmd->data)[4] == cpu_to_le32(AifJobStsRunning))) {
for (container = 0;
container < dev->maximum_num_containers;
++container) {
/*
* Stomp on all config sequencing for all
* containers?
*/
dev->fsa_dev[container].config_waiting_on =
AifEnContainerChange;
dev->fsa_dev[container].config_needed = DELETE;
}
}
break;
}
device_config_needed = NOTHING;
for (container = 0; container < dev->maximum_num_containers;
++container) {
if ((dev->fsa_dev[container].config_waiting_on == 0)
&& (dev->fsa_dev[container].config_needed != NOTHING)) {
device_config_needed =
dev->fsa_dev[container].config_needed;
dev->fsa_dev[container].config_needed = NOTHING;
break;
}
}
if (device_config_needed == NOTHING)
return;
/*
* If we decided that a re-configuration needs to be done,
* schedule it here on the way out the door; please close the door
* behind you.
*/
busy = 0;
/*
* Find the Scsi_Device associated with the SCSI address,
* and mark it as changed, invalidating the cache. This deals
* with changes to existing device IDs.
*/
if (!dev || !dev->scsi_host_ptr)
return;
/*
* force reload of disk info via probe_container
*/
if ((device_config_needed == CHANGE)
&& (dev->fsa_dev[container].valid == 1))
dev->fsa_dev[container].valid = 2;
if ((device_config_needed == CHANGE) ||
(device_config_needed == ADD))
probe_container(dev, container);
device = scsi_device_lookup(dev->scsi_host_ptr,
CONTAINER_TO_CHANNEL(container),
CONTAINER_TO_ID(container),
CONTAINER_TO_LUN(container));
if (device) {
switch (device_config_needed) {
case DELETE:
scsi_remove_device(device);
break;
case CHANGE:
if (!dev->fsa_dev[container].valid) {
scsi_remove_device(device);
break;
}
scsi_rescan_device(&device->sdev_gendev);
default:
break;
}
scsi_device_put(device);
}
if (device_config_needed == ADD) {
scsi_add_device(dev->scsi_host_ptr,
CONTAINER_TO_CHANNEL(container),
CONTAINER_TO_ID(container),
CONTAINER_TO_LUN(container));
}
}
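/*
 * Summary of the sequencing implemented above (descriptive note): an
 * add/delete/morph AIF only records config_needed and arms
 * config_waiting_on with the follow-up AIF type it expects (usually
 * AifEnConfigChange); when that follow-up arrives the wait is cleared,
 * and the scan near the end picks the first container whose wait is
 * clear and whose config_needed is set, then performs the matching
 * scsi_add_device()/scsi_remove_device()/rescan.
 */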
/**
* aac_command_thread - command processing thread
* @dev: Adapter to monitor
@ -805,7 +1019,6 @@ int aac_command_thread(struct aac_dev * dev)
{
struct hw_fib *hw_fib, *hw_newfib;
struct fib *fib, *newfib;
struct aac_queue_block *queues = dev->queues;
struct aac_fib_context *fibctx;
unsigned long flags;
DECLARE_WAITQUEUE(wait, current);
@ -825,21 +1038,22 @@ int aac_command_thread(struct aac_dev * dev)
* Let the DPC know it has a place to send the AIF's to.
*/
dev->aif_thread = 1;
add_wait_queue(&queues->queue[HostNormCmdQueue].cmdready, &wait);
add_wait_queue(&dev->queues->queue[HostNormCmdQueue].cmdready, &wait);
set_current_state(TASK_INTERRUPTIBLE);
dprintk ((KERN_INFO "aac_command_thread start\n"));
while(1)
{
spin_lock_irqsave(queues->queue[HostNormCmdQueue].lock, flags);
while(!list_empty(&(queues->queue[HostNormCmdQueue].cmdq))) {
spin_lock_irqsave(dev->queues->queue[HostNormCmdQueue].lock, flags);
while(!list_empty(&(dev->queues->queue[HostNormCmdQueue].cmdq))) {
struct list_head *entry;
struct aac_aifcmd * aifcmd;
set_current_state(TASK_RUNNING);
entry = queues->queue[HostNormCmdQueue].cmdq.next;
entry = dev->queues->queue[HostNormCmdQueue].cmdq.next;
list_del(entry);
spin_unlock_irqrestore(queues->queue[HostNormCmdQueue].lock, flags);
spin_unlock_irqrestore(dev->queues->queue[HostNormCmdQueue].lock, flags);
fib = list_entry(entry, struct fib, fiblink);
/*
* We will process the FIB here or pass it to a
@ -860,6 +1074,7 @@ int aac_command_thread(struct aac_dev * dev)
aifcmd = (struct aac_aifcmd *) hw_fib->data;
if (aifcmd->command == cpu_to_le32(AifCmdDriverNotify)) {
/* Handle Driver Notify Events */
aac_handle_aif(dev, fib);
*(__le32 *)hw_fib->data = cpu_to_le32(ST_OK);
fib_adapter_complete(fib, (u16)sizeof(u32));
} else {
@ -869,9 +1084,62 @@ int aac_command_thread(struct aac_dev * dev)
u32 time_now, time_last;
unsigned long flagv;
unsigned num;
struct hw_fib ** hw_fib_pool, ** hw_fib_p;
struct fib ** fib_pool, ** fib_p;
/* Sniff events */
if ((aifcmd->command ==
cpu_to_le32(AifCmdEventNotify)) ||
(aifcmd->command ==
cpu_to_le32(AifCmdJobProgress))) {
aac_handle_aif(dev, fib);
}
time_now = jiffies/HZ;
/*
* Warning: no sleep allowed while
* holding spinlock. We take the estimate
* and pre-allocate a set of fibs outside the
* lock.
*/
num = le32_to_cpu(dev->init->AdapterFibsSize)
/ sizeof(struct hw_fib); /* some extra */
spin_lock_irqsave(&dev->fib_lock, flagv);
entry = dev->fib_list.next;
while (entry != &dev->fib_list) {
entry = entry->next;
++num;
}
spin_unlock_irqrestore(&dev->fib_lock, flagv);
hw_fib_pool = NULL;
fib_pool = NULL;
if (num
&& ((hw_fib_pool = kmalloc(sizeof(struct hw_fib *) * num, GFP_KERNEL)))
&& ((fib_pool = kmalloc(sizeof(struct fib *) * num, GFP_KERNEL)))) {
hw_fib_p = hw_fib_pool;
fib_p = fib_pool;
while (hw_fib_p < &hw_fib_pool[num]) {
if (!(*(hw_fib_p++) = kmalloc(sizeof(struct hw_fib), GFP_KERNEL))) {
--hw_fib_p;
break;
}
if (!(*(fib_p++) = kmalloc(sizeof(struct fib), GFP_KERNEL))) {
kfree(*(--hw_fib_p));
break;
}
}
if ((num = hw_fib_p - hw_fib_pool) == 0) {
kfree(fib_pool);
fib_pool = NULL;
kfree(hw_fib_pool);
hw_fib_pool = NULL;
}
} else if (hw_fib_pool) {
kfree(hw_fib_pool);
hw_fib_pool = NULL;
}
spin_lock_irqsave(&dev->fib_lock, flagv);
entry = dev->fib_list.next;
/*
@ -880,6 +1148,8 @@ int aac_command_thread(struct aac_dev * dev)
* fib, and then set the event to wake up the
* thread that is waiting for it.
*/
hw_fib_p = hw_fib_pool;
fib_p = fib_pool;
while (entry != &dev->fib_list) {
/*
* Extract the fibctx
@ -912,9 +1182,11 @@ int aac_command_thread(struct aac_dev * dev)
* Warning: no sleep allowed while
* holding spinlock
*/
hw_newfib = kmalloc(sizeof(struct hw_fib), GFP_ATOMIC);
newfib = kmalloc(sizeof(struct fib), GFP_ATOMIC);
if (newfib && hw_newfib) {
if (hw_fib_p < &hw_fib_pool[num]) {
hw_newfib = *hw_fib_p;
*(hw_fib_p++) = NULL;
newfib = *fib_p;
*(fib_p++) = NULL;
/*
* Make the copy of the FIB
*/
@ -929,15 +1201,11 @@ int aac_command_thread(struct aac_dev * dev)
fibctx->count++;
/*
* Set the event to wake up the
* thread that will waiting.
* thread that is waiting.
*/
up(&fibctx->wait_sem);
} else {
printk(KERN_WARNING "aifd: didn't allocate NewFib.\n");
if(newfib)
kfree(newfib);
if(hw_newfib)
kfree(hw_newfib);
}
entry = entry->next;
}
@ -947,21 +1215,38 @@ int aac_command_thread(struct aac_dev * dev)
*(__le32 *)hw_fib->data = cpu_to_le32(ST_OK);
fib_adapter_complete(fib, sizeof(u32));
spin_unlock_irqrestore(&dev->fib_lock, flagv);
/* Free up the remaining resources */
hw_fib_p = hw_fib_pool;
fib_p = fib_pool;
while (hw_fib_p < &hw_fib_pool[num]) {
if (*hw_fib_p)
kfree(*hw_fib_p);
if (*fib_p)
kfree(*fib_p);
++fib_p;
++hw_fib_p;
}
if (hw_fib_pool)
kfree(hw_fib_pool);
if (fib_pool)
kfree(fib_pool);
}
spin_lock_irqsave(queues->queue[HostNormCmdQueue].lock, flags);
kfree(fib);
spin_lock_irqsave(dev->queues->queue[HostNormCmdQueue].lock, flags);
}
/*
* There are no more AIF's
*/
spin_unlock_irqrestore(queues->queue[HostNormCmdQueue].lock, flags);
spin_unlock_irqrestore(dev->queues->queue[HostNormCmdQueue].lock, flags);
schedule();
if(signal_pending(current))
break;
set_current_state(TASK_INTERRUPTIBLE);
}
remove_wait_queue(&queues->queue[HostNormCmdQueue].cmdready, &wait);
if (dev->queues)
remove_wait_queue(&dev->queues->queue[HostNormCmdQueue].cmdready, &wait);
dev->aif_thread = 0;
complete_and_exit(&dev->aif_completion, 0);
return 0;
}


@ -748,7 +748,8 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
unique_id++;
}
if (pci_enable_device(pdev))
error = pci_enable_device(pdev);
if (error)
goto out;
if (pci_set_dma_mask(pdev, 0xFFFFFFFFULL) ||
@ -772,6 +773,7 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
shost->irq = pdev->irq;
shost->base = pci_resource_start(pdev, 0);
shost->unique_id = unique_id;
shost->max_cmd_len = 16;
aac = (struct aac_dev *)shost->hostdata;
aac->scsi_host_ptr = shost;
@ -799,7 +801,9 @@ static int __devinit aac_probe_one(struct pci_dev *pdev,
goto out_free_fibs;
aac->maximum_num_channels = aac_drivers[index].channels;
aac_get_adapter_info(aac);
error = aac_get_adapter_info(aac);
if (error < 0)
goto out_deinit;
/*
* Lets override negotiations and drop the maximum SG limit to 34
@ -927,8 +931,8 @@ static int __init aac_init(void)
printk(KERN_INFO "Adaptec %s driver (%s)\n",
AAC_DRIVERNAME, aac_driver_version);
error = pci_module_init(&aac_pci_driver);
if (error)
error = pci_register_driver(&aac_pci_driver);
if (error < 0)
return error;
aac_cfg_major = register_chrdev( 0, "aac", &aac_cfg_fops);


@ -672,17 +672,36 @@ static irqreturn_t ahci_interrupt (int irq, void *dev_instance, struct pt_regs *
for (i = 0; i < host_set->n_ports; i++) {
struct ata_port *ap;
u32 tmp;
VPRINTK("port %u\n", i);
if (!(irq_stat & (1 << i)))
continue;
ap = host_set->ports[i];
tmp = irq_stat & (1 << i);
if (tmp && ap) {
if (ap) {
struct ata_queued_cmd *qc;
qc = ata_qc_from_tag(ap, ap->active_tag);
if (ahci_host_intr(ap, qc))
irq_ack |= (1 << i);
if (!ahci_host_intr(ap, qc))
if (ata_ratelimit()) {
struct pci_dev *pdev =
to_pci_dev(ap->host_set->dev);
printk(KERN_WARNING
"ahci(%s): unhandled interrupt on port %u\n",
pci_name(pdev), i);
}
VPRINTK("port %u\n", i);
} else {
VPRINTK("port %u (no irq)\n", i);
if (ata_ratelimit()) {
struct pci_dev *pdev =
to_pci_dev(ap->host_set->dev);
printk(KERN_WARNING
"ahci(%s): interrupt on disabled port %u\n",
pci_name(pdev), i);
}
}
irq_ack |= (1 << i);
}
if (irq_ack) {


@ -112,6 +112,9 @@ aic7770_remove(struct device *dev)
struct ahc_softc *ahc = dev_get_drvdata(dev);
u_long s;
if (ahc->platform_data && ahc->platform_data->host)
scsi_remove_host(ahc->platform_data->host);
ahc_lock(ahc, &s);
ahc_intr_enable(ahc, FALSE);
ahc_unlock(ahc, &s);


@ -1192,11 +1192,6 @@ ahd_platform_free(struct ahd_softc *ahd)
int i, j;
if (ahd->platform_data != NULL) {
if (ahd->platform_data->host != NULL) {
scsi_remove_host(ahd->platform_data->host);
scsi_host_put(ahd->platform_data->host);
}
/* destroy all of the device and target objects */
for (i = 0; i < AHD_NUM_TARGETS; i++) {
starget = ahd->platform_data->starget[i];
@ -1226,6 +1221,9 @@ ahd_platform_free(struct ahd_softc *ahd)
release_mem_region(ahd->platform_data->mem_busaddr,
0x1000);
}
if (ahd->platform_data->host)
scsi_host_put(ahd->platform_data->host);
free(ahd->platform_data, M_DEVBUF);
}
}


@ -95,6 +95,9 @@ ahd_linux_pci_dev_remove(struct pci_dev *pdev)
struct ahd_softc *ahd = pci_get_drvdata(pdev);
u_long s;
if (ahd->platform_data && ahd->platform_data->host)
scsi_remove_host(ahd->platform_data->host);
ahd_lock(ahd, &s);
ahd_intr_enable(ahd, FALSE);
ahd_unlock(ahd, &s);


@ -1209,11 +1209,6 @@ ahc_platform_free(struct ahc_softc *ahc)
int i, j;
if (ahc->platform_data != NULL) {
if (ahc->platform_data->host != NULL) {
scsi_remove_host(ahc->platform_data->host);
scsi_host_put(ahc->platform_data->host);
}
/* destroy all of the device and target objects */
for (i = 0; i < AHC_NUM_TARGETS; i++) {
starget = ahc->platform_data->starget[i];
@ -1242,6 +1237,9 @@ ahc_platform_free(struct ahc_softc *ahc)
0x1000);
}
if (ahc->platform_data->host)
scsi_host_put(ahc->platform_data->host);
free(ahc->platform_data, M_DEVBUF);
}
}


@ -143,6 +143,9 @@ ahc_linux_pci_dev_remove(struct pci_dev *pdev)
struct ahc_softc *ahc = pci_get_drvdata(pdev);
u_long s;
if (ahc->platform_data && ahc->platform_data->host)
scsi_remove_host(ahc->platform_data->host);
ahc_lock(ahc, &s);
ahc_intr_enable(ahc, FALSE);
ahc_unlock(ahc, &s);


@ -176,6 +176,7 @@ void scsi_remove_host(struct Scsi_Host *shost)
transport_unregister_device(&shost->shost_gendev);
class_device_unregister(&shost->shost_classdev);
device_del(&shost->shost_gendev);
scsi_proc_hostdir_rm(shost->hostt);
}
EXPORT_SYMBOL(scsi_remove_host);
@ -262,7 +263,6 @@ static void scsi_host_dev_release(struct device *dev)
if (shost->work_q)
destroy_workqueue(shost->work_q);
scsi_proc_hostdir_rm(shost->hostt);
scsi_destroy_command_freelist(shost);
kfree(shost->shost_data);


@ -48,6 +48,7 @@
#include <linux/completion.h>
#include <linux/suspend.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <scsi/scsi.h>
#include "scsi.h"
#include "scsi_priv.h"
@ -70,7 +71,6 @@ static int fgb(u32 bitmap);
static int ata_choose_xfer_mode(struct ata_port *ap,
u8 *xfer_mode_out,
unsigned int *xfer_shift_out);
static int ata_qc_complete_noop(struct ata_queued_cmd *qc, u8 drv_stat);
static void __ata_qc_complete(struct ata_queued_cmd *qc);
static unsigned int ata_unique_id = 1;
@ -3132,52 +3132,6 @@ fsm_start:
goto fsm_start;
}
static void atapi_request_sense(struct ata_port *ap, struct ata_device *dev,
struct scsi_cmnd *cmd)
{
DECLARE_COMPLETION(wait);
struct ata_queued_cmd *qc;
unsigned long flags;
int rc;
DPRINTK("ATAPI request sense\n");
qc = ata_qc_new_init(ap, dev);
BUG_ON(qc == NULL);
/* FIXME: is this needed? */
memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
ata_sg_init_one(qc, cmd->sense_buffer, sizeof(cmd->sense_buffer));
qc->dma_dir = DMA_FROM_DEVICE;
memset(&qc->cdb, 0, ap->cdb_len);
qc->cdb[0] = REQUEST_SENSE;
qc->cdb[4] = SCSI_SENSE_BUFFERSIZE;
qc->tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
qc->tf.command = ATA_CMD_PACKET;
qc->tf.protocol = ATA_PROT_ATAPI;
qc->tf.lbam = (8 * 1024) & 0xff;
qc->tf.lbah = (8 * 1024) >> 8;
qc->nbytes = SCSI_SENSE_BUFFERSIZE;
qc->waiting = &wait;
qc->complete_fn = ata_qc_complete_noop;
spin_lock_irqsave(&ap->host_set->lock, flags);
rc = ata_qc_issue(qc);
spin_unlock_irqrestore(&ap->host_set->lock, flags);
if (rc)
ata_port_disable(ap);
else
wait_for_completion(&wait);
DPRINTK("EXIT\n");
}
/**
* ata_qc_timeout - Handle timeout of queued command
* @qc: Command that timed out
@ -3297,14 +3251,14 @@ void ata_eng_timeout(struct ata_port *ap)
DPRINTK("ENTER\n");
qc = ata_qc_from_tag(ap, ap->active_tag);
if (!qc) {
if (qc)
ata_qc_timeout(qc);
else {
printk(KERN_ERR "ata%u: BUG: timeout without command\n",
ap->id);
goto out;
}
ata_qc_timeout(qc);
out:
DPRINTK("EXIT\n");
}
@ -3373,7 +3327,7 @@ struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap,
return qc;
}
static int ata_qc_complete_noop(struct ata_queued_cmd *qc, u8 drv_stat)
int ata_qc_complete_noop(struct ata_queued_cmd *qc, u8 drv_stat)
{
return 0;
}
@ -4409,7 +4363,7 @@ int ata_device_add(struct ata_probe_ent *ent)
for (i = 0; i < count; i++) {
struct ata_port *ap = host_set->ports[i];
scsi_scan_host(ap->host);
ata_scsi_scan_host(ap);
}
dev_set_drvdata(dev, host_set);
@ -4569,85 +4523,87 @@ void ata_pci_host_stop (struct ata_host_set *host_set)
* ata_pci_init_native_mode - Initialize native-mode driver
* @pdev: pci device to be initialized
* @port: array[2] of pointers to port info structures.
* @ports: bitmap of ports present
*
* Utility function which allocates and initializes an
* ata_probe_ent structure for a standard dual-port
* PIO-based IDE controller. The returned ata_probe_ent
* structure can be passed to ata_device_add(). The returned
* ata_probe_ent structure should then be freed with kfree().
*
* The caller need only pass the address of the primary port, the
* secondary will be deduced automatically. If the device has non
* standard secondary port mappings this function can be called twice,
* once for each interface.
*/
struct ata_probe_ent *
ata_pci_init_native_mode(struct pci_dev *pdev, struct ata_port_info **port)
ata_pci_init_native_mode(struct pci_dev *pdev, struct ata_port_info **port, int ports)
{
struct ata_probe_ent *probe_ent =
ata_probe_ent_alloc(pci_dev_to_dev(pdev), port[0]);
int p = 0;
if (!probe_ent)
return NULL;
probe_ent->n_ports = 2;
probe_ent->irq = pdev->irq;
probe_ent->irq_flags = SA_SHIRQ;
probe_ent->port[0].cmd_addr = pci_resource_start(pdev, 0);
probe_ent->port[0].altstatus_addr =
probe_ent->port[0].ctl_addr =
pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS;
probe_ent->port[0].bmdma_addr = pci_resource_start(pdev, 4);
if (ports & ATA_PORT_PRIMARY) {
probe_ent->port[p].cmd_addr = pci_resource_start(pdev, 0);
probe_ent->port[p].altstatus_addr =
probe_ent->port[p].ctl_addr =
pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS;
probe_ent->port[p].bmdma_addr = pci_resource_start(pdev, 4);
ata_std_ports(&probe_ent->port[p]);
p++;
}
probe_ent->port[1].cmd_addr = pci_resource_start(pdev, 2);
probe_ent->port[1].altstatus_addr =
probe_ent->port[1].ctl_addr =
pci_resource_start(pdev, 3) | ATA_PCI_CTL_OFS;
probe_ent->port[1].bmdma_addr = pci_resource_start(pdev, 4) + 8;
ata_std_ports(&probe_ent->port[0]);
ata_std_ports(&probe_ent->port[1]);
if (ports & ATA_PORT_SECONDARY) {
probe_ent->port[p].cmd_addr = pci_resource_start(pdev, 2);
probe_ent->port[p].altstatus_addr =
probe_ent->port[p].ctl_addr =
pci_resource_start(pdev, 3) | ATA_PCI_CTL_OFS;
probe_ent->port[p].bmdma_addr = pci_resource_start(pdev, 4) + 8;
ata_std_ports(&probe_ent->port[p]);
p++;
}
probe_ent->n_ports = p;
return probe_ent;
}
static struct ata_probe_ent *
ata_pci_init_legacy_mode(struct pci_dev *pdev, struct ata_port_info **port,
struct ata_probe_ent **ppe2)
static struct ata_probe_ent *ata_pci_init_legacy_port(struct pci_dev *pdev, struct ata_port_info **port, int port_num)
{
struct ata_probe_ent *probe_ent, *probe_ent2;
struct ata_probe_ent *probe_ent;
probe_ent = ata_probe_ent_alloc(pci_dev_to_dev(pdev), port[0]);
if (!probe_ent)
return NULL;
probe_ent2 = ata_probe_ent_alloc(pci_dev_to_dev(pdev), port[1]);
if (!probe_ent2) {
kfree(probe_ent);
return NULL;
}
probe_ent->n_ports = 1;
probe_ent->irq = 14;
probe_ent->hard_port_no = 0;
probe_ent->legacy_mode = 1;
probe_ent->n_ports = 1;
probe_ent->hard_port_no = port_num;
probe_ent2->n_ports = 1;
probe_ent2->irq = 15;
probe_ent2->hard_port_no = 1;
probe_ent2->legacy_mode = 1;
probe_ent->port[0].cmd_addr = 0x1f0;
probe_ent->port[0].altstatus_addr =
probe_ent->port[0].ctl_addr = 0x3f6;
probe_ent->port[0].bmdma_addr = pci_resource_start(pdev, 4);
probe_ent2->port[0].cmd_addr = 0x170;
probe_ent2->port[0].altstatus_addr =
probe_ent2->port[0].ctl_addr = 0x376;
probe_ent2->port[0].bmdma_addr = pci_resource_start(pdev, 4)+8;
switch(port_num)
{
case 0:
probe_ent->irq = 14;
probe_ent->port[0].cmd_addr = 0x1f0;
probe_ent->port[0].altstatus_addr =
probe_ent->port[0].ctl_addr = 0x3f6;
break;
case 1:
probe_ent->irq = 15;
probe_ent->port[0].cmd_addr = 0x170;
probe_ent->port[0].altstatus_addr =
probe_ent->port[0].ctl_addr = 0x376;
break;
}
probe_ent->port[0].bmdma_addr = pci_resource_start(pdev, 4) + 8 * port_num;
ata_std_ports(&probe_ent->port[0]);
ata_std_ports(&probe_ent2->port[0]);
*ppe2 = probe_ent2;
return probe_ent;
}
@ -4676,7 +4632,7 @@ ata_pci_init_legacy_mode(struct pci_dev *pdev, struct ata_port_info **port,
int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
unsigned int n_ports)
{
struct ata_probe_ent *probe_ent, *probe_ent2 = NULL;
struct ata_probe_ent *probe_ent = NULL, *probe_ent2 = NULL;
struct ata_port_info *port[2];
u8 tmp8, mask;
unsigned int legacy_mode = 0;
@ -4693,7 +4649,7 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
if ((port[0]->host_flags & ATA_FLAG_NO_LEGACY) == 0
&& (pdev->class >> 8) == PCI_CLASS_STORAGE_IDE) {
/* TODO: support transitioning to native mode? */
/* TODO: What if one channel is in native mode ... */
pci_read_config_byte(pdev, PCI_CLASS_PROG, &tmp8);
mask = (1 << 2) | (1 << 0);
if ((tmp8 & mask) != mask)
@ -4701,11 +4657,20 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
}
/* FIXME... */
if ((!legacy_mode) && (n_ports > 1)) {
printk(KERN_ERR "ata: BUG: native mode, n_ports > 1\n");
return -EINVAL;
if ((!legacy_mode) && (n_ports > 2)) {
printk(KERN_ERR "ata: BUG: native mode, n_ports > 2\n");
n_ports = 2;
/* For now */
}
/* FIXME: Really for ATA it isn't safe because the device may be
multi-purpose and we want to leave it alone if it was already
enabled. Secondly, for shared use, as Arjan says, we want refcounting.
Checking dev->is_enabled is insufficient as this is not set at
boot for the primary video which is BIOS enabled
*/
rc = pci_enable_device(pdev);
if (rc)
return rc;
@ -4716,6 +4681,7 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
goto err_out;
}
/* FIXME: Should use platform specific mappers for legacy port ranges */
if (legacy_mode) {
if (!request_region(0x1f0, 8, "libata")) {
struct resource *conflict, res;
@ -4760,10 +4726,17 @@ int ata_pci_init_one (struct pci_dev *pdev, struct ata_port_info **port_info,
goto err_out_regions;
if (legacy_mode) {
probe_ent = ata_pci_init_legacy_mode(pdev, port, &probe_ent2);
} else
probe_ent = ata_pci_init_native_mode(pdev, port);
if (!probe_ent) {
if (legacy_mode & (1 << 0))
probe_ent = ata_pci_init_legacy_port(pdev, port, 0);
if (legacy_mode & (1 << 1))
probe_ent2 = ata_pci_init_legacy_port(pdev, port, 1);
} else {
if (n_ports == 2)
probe_ent = ata_pci_init_native_mode(pdev, port, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
else
probe_ent = ata_pci_init_native_mode(pdev, port, ATA_PORT_PRIMARY);
}
if (!probe_ent && !probe_ent2) {
rc = -ENOMEM;
goto err_out_regions;
}
@ -4875,6 +4848,27 @@ static void __exit ata_exit(void)
module_init(ata_init);
module_exit(ata_exit);
static unsigned long ratelimit_time;
static spinlock_t ata_ratelimit_lock = SPIN_LOCK_UNLOCKED;
int ata_ratelimit(void)
{
int rc;
unsigned long flags;
spin_lock_irqsave(&ata_ratelimit_lock, flags);
if (time_after(jiffies, ratelimit_time)) {
rc = 1;
ratelimit_time = jiffies + (HZ/5);
} else
rc = 0;
spin_unlock_irqrestore(&ata_ratelimit_lock, flags);
return rc;
}
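/*
 * Usage sketch (illustrative): ata_ratelimit() returns 1 at most once
 * per HZ/5 jiffies, so a warning gated on it is emitted no more than
 * five times per second regardless of interrupt rate:
 *
 *	if (ata_ratelimit())
 *		printk(KERN_WARNING "ahci: unhandled interrupt\n");
 */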
/*
* libata is essentially a library of internal helper functions for
* low-level ATA host controller drivers. As such, the API/ABI is
@ -4916,6 +4910,7 @@ EXPORT_SYMBOL_GPL(sata_phy_reset);
EXPORT_SYMBOL_GPL(__sata_phy_reset);
EXPORT_SYMBOL_GPL(ata_bus_reset);
EXPORT_SYMBOL_GPL(ata_port_disable);
EXPORT_SYMBOL_GPL(ata_ratelimit);
EXPORT_SYMBOL_GPL(ata_scsi_ioctl);
EXPORT_SYMBOL_GPL(ata_scsi_queuecmd);
EXPORT_SYMBOL_GPL(ata_scsi_error);


@ -49,6 +49,14 @@ static struct ata_device *
ata_scsi_find_dev(struct ata_port *ap, struct scsi_device *scsidev);
static void ata_scsi_invalid_field(struct scsi_cmnd *cmd,
void (*done)(struct scsi_cmnd *))
{
ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, 0x24, 0x0);
/* "Invalid field in cbd" */
done(cmd);
}
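/*
 * Note (descriptive): the sense triple set above is sense key 0x05
 * (ILLEGAL REQUEST), ASC 0x24, ASCQ 0x00 - the standard "INVALID FIELD
 * IN CDB" response for a well-formed command carrying an option libata
 * does not implement; the xlat helpers below reuse the same triple.
 */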
/**
* ata_std_bios_param - generic bios head/sector/cylinder calculator used by sd.
* @sdev: SCSI device for which BIOS geometry is to be determined
@ -182,7 +190,6 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
{
struct scsi_cmnd *cmd = qc->scsicmd;
u8 err = 0;
unsigned char *sb = cmd->sense_buffer;
/* Based on the 3ware driver translation table */
static unsigned char sense_table[][4] = {
/* BBD|ECC|ID|MAR */
@ -225,8 +232,6 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
};
int i = 0;
cmd->result = SAM_STAT_CHECK_CONDITION;
/*
* Is this an error we can process/parse
*/
@ -281,11 +286,9 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
/* Look for best matches first */
if((sense_table[i][0] & err) == sense_table[i][0])
{
sb[0] = 0x70;
sb[2] = sense_table[i][1];
sb[7] = 0x0a;
sb[12] = sense_table[i][2];
sb[13] = sense_table[i][3];
ata_scsi_set_sense(cmd, sense_table[i][1] /* sk */,
sense_table[i][2] /* asc */,
sense_table[i][3] /* ascq */ );
return;
}
i++;
@ -300,11 +303,9 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
{
if(stat_table[i][0] & drv_stat)
{
sb[0] = 0x70;
sb[2] = stat_table[i][1];
sb[7] = 0x0a;
sb[12] = stat_table[i][2];
sb[13] = stat_table[i][3];
ata_scsi_set_sense(cmd, stat_table[i][1] /* sk */,
stat_table[i][2] /* asc */,
stat_table[i][3] /* ascq */ );
return;
}
i++;
@ -313,15 +314,12 @@ void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat)
printk(KERN_ERR "ata%u: called with no error (%02X)!\n", qc->ap->id, drv_stat);
/* additional-sense-code[-qualifier] */
sb[0] = 0x70;
sb[2] = MEDIUM_ERROR;
sb[7] = 0x0A;
if (cmd->sc_data_direction == DMA_FROM_DEVICE) {
sb[12] = 0x11; /* "unrecovered read error" */
sb[13] = 0x04;
ata_scsi_set_sense(cmd, MEDIUM_ERROR, 0x11, 0x4);
/* "unrecovered read error" */
} else {
sb[12] = 0x0C; /* "write error - */
sb[13] = 0x02; /* auto-reallocation failed" */
ata_scsi_set_sense(cmd, MEDIUM_ERROR, 0xc, 0x2);
/* "write error - auto-reallocation failed" */
}
}
@ -430,15 +428,26 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc,
; /* ignore IMMED bit, violates sat-r05 */
}
if (scsicmd[4] & 0x2)
return 1; /* LOEJ bit set not supported */
goto invalid_fld; /* LOEJ bit set not supported */
if (((scsicmd[4] >> 4) & 0xf) != 0)
return 1; /* power conditions not supported */
goto invalid_fld; /* power conditions not supported */
if (scsicmd[4] & 0x1) {
tf->nsect = 1; /* 1 sector, lba=0 */
tf->lbah = 0x0;
tf->lbam = 0x0;
tf->lbal = 0x0;
tf->device |= ATA_LBA;
if (qc->dev->flags & ATA_DFLAG_LBA) {
qc->tf.flags |= ATA_TFLAG_LBA;
tf->lbah = 0x0;
tf->lbam = 0x0;
tf->lbal = 0x0;
tf->device |= ATA_LBA;
} else {
/* CHS */
tf->lbal = 0x1; /* sect */
tf->lbam = 0x0; /* cyl low */
tf->lbah = 0x0; /* cyl high */
}
tf->command = ATA_CMD_VERIFY; /* READ VERIFY */
} else {
tf->nsect = 0; /* time period value (0 implies now) */
@ -453,6 +462,11 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc,
*/
return 0;
invalid_fld:
ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x24, 0x0);
/* "Invalid field in cbd" */
return 1;
}
@ -487,6 +501,99 @@ static unsigned int ata_scsi_flush_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
return 0;
}
/**
* scsi_6_lba_len - Get LBA and transfer length
* @scsicmd: SCSI command to translate
*
* Calculate LBA and transfer length for 6-byte commands.
*
* RETURNS:
* @plba: the LBA
* @plen: the transfer length
*/
static void scsi_6_lba_len(u8 *scsicmd, u64 *plba, u32 *plen)
{
u64 lba = 0;
u32 len = 0;
VPRINTK("six-byte command\n");
lba |= ((u64)scsicmd[2]) << 8;
lba |= ((u64)scsicmd[3]);
len |= ((u32)scsicmd[4]);
*plba = lba;
*plen = len;
}
/**
* scsi_10_lba_len - Get LBA and transfer length
* @scsicmd: SCSI command to translate
*
* Calculate LBA and transfer length for 10-byte commands.
*
* RETURNS:
* @plba: the LBA
* @plen: the transfer length
*/
static void scsi_10_lba_len(u8 *scsicmd, u64 *plba, u32 *plen)
{
u64 lba = 0;
u32 len = 0;
VPRINTK("ten-byte command\n");
lba |= ((u64)scsicmd[2]) << 24;
lba |= ((u64)scsicmd[3]) << 16;
lba |= ((u64)scsicmd[4]) << 8;
lba |= ((u64)scsicmd[5]);
len |= ((u32)scsicmd[7]) << 8;
len |= ((u32)scsicmd[8]);
*plba = lba;
*plen = len;
}
/**
* scsi_16_lba_len - Get LBA and transfer length
* @scsicmd: SCSI command to translate
*
* Calculate LBA and transfer length for 16-byte commands.
*
* RETURNS:
* @plba: the LBA
* @plen: the transfer length
*/
static void scsi_16_lba_len(u8 *scsicmd, u64 *plba, u32 *plen)
{
u64 lba = 0;
u32 len = 0;
VPRINTK("sixteen-byte command\n");
lba |= ((u64)scsicmd[2]) << 56;
lba |= ((u64)scsicmd[3]) << 48;
lba |= ((u64)scsicmd[4]) << 40;
lba |= ((u64)scsicmd[5]) << 32;
lba |= ((u64)scsicmd[6]) << 24;
lba |= ((u64)scsicmd[7]) << 16;
lba |= ((u64)scsicmd[8]) << 8;
lba |= ((u64)scsicmd[9]);
len |= ((u32)scsicmd[10]) << 24;
len |= ((u32)scsicmd[11]) << 16;
len |= ((u32)scsicmd[12]) << 8;
len |= ((u32)scsicmd[13]);
*plba = lba;
*plen = len;
}
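All three helpers decode big-endian CDB fields: the most significant byte of the LBA sits at the lowest offset, so each byte is shifted into place. A standalone userspace sketch (illustration only, not part of the patch) exercising the 10-byte decode on a sample READ(10) CDB:

#include <stdint.h>
#include <stdio.h>

/* Userspace copy of the 10-byte decode above, for illustration. */
static void scsi_10_lba_len(const uint8_t *scsicmd, uint64_t *plba, uint32_t *plen)
{
	uint64_t lba = 0;
	uint32_t len = 0;

	lba |= ((uint64_t)scsicmd[2]) << 24;
	lba |= ((uint64_t)scsicmd[3]) << 16;
	lba |= ((uint64_t)scsicmd[4]) << 8;
	lba |= ((uint64_t)scsicmd[5]);
	len |= ((uint32_t)scsicmd[7]) << 8;
	len |= ((uint32_t)scsicmd[8]);

	*plba = lba;
	*plen = len;
}

int main(void)
{
	/* READ(10), LBA 0x00012345, transfer length 8 blocks */
	uint8_t cdb[10] = { 0x28, 0, 0x00, 0x01, 0x23, 0x45, 0, 0x00, 0x08, 0 };
	uint64_t lba;
	uint32_t len;

	scsi_10_lba_len(cdb, &lba, &len);
	printf("lba=0x%llx len=%u\n", (unsigned long long)lba, len); /* lba=0x12345 len=8 */
	return 0;
}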
/**
* ata_scsi_verify_xlat - Translate SCSI VERIFY command into an ATA one
* @qc: Storage for translated ATA taskfile
@ -508,53 +615,31 @@ static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
unsigned int lba = tf->flags & ATA_TFLAG_LBA;
unsigned int lba48 = tf->flags & ATA_TFLAG_LBA48;
u64 dev_sectors = qc->dev->n_sectors;
u64 block = 0;
u32 n_block = 0;
u64 block;
u32 n_block;
tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
tf->protocol = ATA_PROT_NODATA;
if (scsicmd[0] == VERIFY) {
block |= ((u64)scsicmd[2]) << 24;
block |= ((u64)scsicmd[3]) << 16;
block |= ((u64)scsicmd[4]) << 8;
block |= ((u64)scsicmd[5]);
n_block |= ((u32)scsicmd[7]) << 8;
n_block |= ((u32)scsicmd[8]);
}
else if (scsicmd[0] == VERIFY_16) {
block |= ((u64)scsicmd[2]) << 56;
block |= ((u64)scsicmd[3]) << 48;
block |= ((u64)scsicmd[4]) << 40;
block |= ((u64)scsicmd[5]) << 32;
block |= ((u64)scsicmd[6]) << 24;
block |= ((u64)scsicmd[7]) << 16;
block |= ((u64)scsicmd[8]) << 8;
block |= ((u64)scsicmd[9]);
n_block |= ((u32)scsicmd[10]) << 24;
n_block |= ((u32)scsicmd[11]) << 16;
n_block |= ((u32)scsicmd[12]) << 8;
n_block |= ((u32)scsicmd[13]);
}
if (scsicmd[0] == VERIFY)
scsi_10_lba_len(scsicmd, &block, &n_block);
else if (scsicmd[0] == VERIFY_16)
scsi_16_lba_len(scsicmd, &block, &n_block);
else
return 1;
goto invalid_fld;
if (!n_block)
return 1;
goto nothing_to_do;
if (block >= dev_sectors)
return 1;
goto out_of_range;
if ((block + n_block) > dev_sectors)
return 1;
goto out_of_range;
if (lba48) {
if (n_block > (64 * 1024))
return 1;
goto invalid_fld;
} else {
if (n_block > 256)
return 1;
goto invalid_fld;
}
if (lba) {
@ -589,14 +674,15 @@ static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
head = track % dev->heads;
sect = (u32)block % dev->sectors + 1;
DPRINTK("block[%u] track[%u] cyl[%u] head[%u] sect[%u] \n", (u32)block, track, cyl, head, sect);
DPRINTK("block %u track %u cyl %u head %u sect %u\n",
(u32)block, track, cyl, head, sect);
/* Check whether the converted CHS can fit.
Cylinder: 0-65535
Head: 0-15
Sector: 1-255 */
if ((cyl >> 16) || (head >> 4) || (sect >> 8) || (!sect))
return 1;
goto out_of_range;
tf->command = ATA_CMD_VERIFY;
tf->nsect = n_block & 0xff; /* Sector count 0 means 256 sectors */
@ -607,6 +693,20 @@ static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
}
return 0;
invalid_fld:
ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x24, 0x0);
/* "Invalid field in cbd" */
return 1;
out_of_range:
ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x21, 0x0);
/* "Logical Block Address out of range" */
return 1;
nothing_to_do:
qc->scsicmd->result = SAM_STAT_GOOD;
return 1;
}
/**
@ -635,8 +735,8 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
struct ata_device *dev = qc->dev;
unsigned int lba = tf->flags & ATA_TFLAG_LBA;
unsigned int lba48 = tf->flags & ATA_TFLAG_LBA48;
u64 block = 0;
u32 n_block = 0;
u64 block;
u32 n_block;
tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
tf->protocol = qc->dev->xfer_protocol;
@ -650,56 +750,44 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
}
/* Calculate the SCSI LBA and transfer length. */
if (scsicmd[0] == READ_10 || scsicmd[0] == WRITE_10) {
block |= ((u64)scsicmd[2]) << 24;
block |= ((u64)scsicmd[3]) << 16;
block |= ((u64)scsicmd[4]) << 8;
block |= ((u64)scsicmd[5]);
switch (scsicmd[0]) {
case READ_10:
case WRITE_10:
scsi_10_lba_len(scsicmd, &block, &n_block);
break;
case READ_6:
case WRITE_6:
scsi_6_lba_len(scsicmd, &block, &n_block);
n_block |= ((u32)scsicmd[7]) << 8;
n_block |= ((u32)scsicmd[8]);
VPRINTK("ten-byte command\n");
} else if (scsicmd[0] == READ_6 || scsicmd[0] == WRITE_6) {
block |= ((u64)scsicmd[2]) << 8;
block |= ((u64)scsicmd[3]);
n_block |= ((u32)scsicmd[4]);
/* for 6-byte r/w commands, transfer length 0
* means 256 blocks of data, not 0 block.
*/
if (!n_block)
n_block = 256;
VPRINTK("six-byte command\n");
} else if (scsicmd[0] == READ_16 || scsicmd[0] == WRITE_16) {
block |= ((u64)scsicmd[2]) << 56;
block |= ((u64)scsicmd[3]) << 48;
block |= ((u64)scsicmd[4]) << 40;
block |= ((u64)scsicmd[5]) << 32;
block |= ((u64)scsicmd[6]) << 24;
block |= ((u64)scsicmd[7]) << 16;
block |= ((u64)scsicmd[8]) << 8;
block |= ((u64)scsicmd[9]);
n_block |= ((u32)scsicmd[10]) << 24;
n_block |= ((u32)scsicmd[11]) << 16;
n_block |= ((u32)scsicmd[12]) << 8;
n_block |= ((u32)scsicmd[13]);
VPRINTK("sixteen-byte command\n");
} else {
break;
case READ_16:
case WRITE_16:
scsi_16_lba_len(scsicmd, &block, &n_block);
break;
default:
DPRINTK("no-byte command\n");
return 1;
goto invalid_fld;
}
/* Check and compose ATA command */
if (!n_block)
/* In ATA, sector count 0 means 256 or 65536 sectors, not 0 sectors. */
return 1;
/* For 10-byte and 16-byte SCSI R/W commands, transfer
* length 0 means transfer 0 block of data.
* However, for ATA R/W commands, sector count 0 means
* 256 or 65536 sectors, not 0 sectors as in SCSI.
*/
goto nothing_to_do;
if (lba) {
if (lba48) {
/* The request -may- be too large for LBA48. */
if ((block >> 48) || (n_block > 65536))
return 1;
goto out_of_range;
tf->hob_nsect = (n_block >> 8) & 0xff;
@ -711,11 +799,11 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
/* The request -may- be too large for LBA28. */
if ((block >> 28) || (n_block > 256))
return 1;
goto out_of_range;
tf->device |= (block >> 24) & 0xf;
}
qc->nsect = n_block;
tf->nsect = n_block & 0xff;
@ -730,24 +818,24 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
/* The request -may- be too large for CHS addressing. */
if ((block >> 28) || (n_block > 256))
return 1;
goto out_of_range;
/* Convert LBA to CHS */
track = (u32)block / dev->sectors;
cyl = track / dev->heads;
head = track % dev->heads;
sect = (u32)block % dev->sectors + 1;
DPRINTK("block[%u] track[%u] cyl[%u] head[%u] sect[%u] \n",
DPRINTK("block %u track %u cyl %u head %u sect %u\n",
(u32)block, track, cyl, head, sect);
/* Check whether the converted CHS can fit.
Cylinder: 0-65535
Head: 0-15
Sector: 1-255 */
if ((cyl >> 16) || (head >> 4) || (sect >> 8) || (!sect))
return 1;
if ((cyl >> 16) || (head >> 4) || (sect >> 8) || (!sect))
goto out_of_range;
qc->nsect = n_block;
tf->nsect = n_block & 0xff; /* Sector count 0 means 256 sectors */
tf->lbal = sect;
@ -757,6 +845,20 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc, u8 *scsicmd)
}
return 0;
invalid_fld:
ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x24, 0x0);
/* "Invalid field in cbd" */
return 1;
out_of_range:
ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x21, 0x0);
/* "Logical Block Address out of range" */
return 1;
nothing_to_do:
qc->scsicmd->result = SAM_STAT_GOOD;
return 1;
}
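The nothing_to_do label exists because of the semantic mismatch the comment above spells out: a 10- or 16-byte SCSI R/W command with transfer length 0 means "transfer nothing", while an ATA sector count of 0 means 256 sectors (65536 for LBA48). A hedged userspace sketch of that mapping, simplified to a single count field (the real taskfile splits LBA48 counts across nsect and hob_nsect):

#include <stdint.h>
#include <stdio.h>

/*
 * Illustration only: map a SCSI transfer length to an ATA sector count
 * field, mirroring the checks in ata_scsi_rw_xlat(). Returns -1 where
 * the driver would take the nothing_to_do / out_of_range exits.
 */
static int scsi_len_to_ata_nsect(uint32_t n_block, int lba48, uint32_t *nsect_field)
{
	uint32_t max = lba48 ? 65536 : 256;

	if (n_block == 0)	/* SCSI: zero blocks requested, nothing to do */
		return -1;
	if (n_block > max)	/* would not fit the taskfile count field */
		return -1;
	/* ATA encodes the maximum (256 or 65536) as 0 in the count field */
	*nsect_field = n_block & (max - 1);
	return 0;
}

int main(void)
{
	uint32_t f;

	if (scsi_len_to_ata_nsect(256, 0, &f) == 0)
		printf("256 blocks -> nsect field 0x%02x\n", (unsigned)f); /* 0x00 */
	if (scsi_len_to_ata_nsect(0, 0, &f) != 0)
		printf("0 blocks -> early completion, no ATA command\n");
	return 0;
}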
static int ata_scsi_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
@ -788,6 +890,12 @@ static int ata_scsi_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
* This function sets up an ata_queued_cmd structure for the
* SCSI command, and sends that ata_queued_cmd to the hardware.
*
* The xlat_func argument (actor) returns 0 if it is ready to execute
* the ATA command, or 1 to finish translation early. If 1 is
* returned, cmd->result (and possibly cmd->sense_buffer) is assumed
* to be set, reflecting an error condition or clean (early)
* termination.
*
* LOCKING:
* spin_lock_irqsave(host_set lock)
*/
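Every ..._xlat() routine reworked in this patch follows that contract. A schematic, hypothetical translator in userspace C (the stand-in types, demo_xlat, and its 0x01 field check are invented for illustration):

#include <stdint.h>
#include <stdio.h>

/* Stand-in types so the 0/1 contract can be shown outside the kernel. */
#define SAM_STAT_CHECK_CONDITION 0x02

struct scsi_cmnd { int result; };
struct ata_queued_cmd { struct scsi_cmnd *scsicmd; uint8_t tf_command; };

/*
 * Hypothetical translator: return 0 with a filled taskfile (ready to
 * issue), or 1 after setting cmd->result to finish translation early.
 */
static unsigned int demo_xlat(struct ata_queued_cmd *qc, const uint8_t *scsicmd)
{
	if (scsicmd[1] & 0x01) {	/* pretend this field is unsupported */
		/* real code: ata_scsi_set_sense(qc->scsicmd, ILLEGAL_REQUEST, 0x24, 0x0) */
		qc->scsicmd->result = SAM_STAT_CHECK_CONDITION;
		return 1;		/* early finish: complete cmd, no ATA command */
	}
	qc->tf_command = 0x40;		/* e.g. ATA READ VERIFY SECTORS */
	return 0;			/* taskfile ready: issue to hardware */
}

int main(void)
{
	struct scsi_cmnd cmd = { 0 };
	struct ata_queued_cmd qc = { &cmd, 0 };
	uint8_t cdb[6] = { 0x2f, 0x01, 0, 0, 0, 0 };

	if (demo_xlat(&qc, cdb))
		printf("early finish, result 0x%02x\n", cmd.result);
	return 0;
}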
@ -804,7 +912,7 @@ static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev,
qc = ata_scsi_qc_new(ap, dev, cmd, done);
if (!qc)
return;
goto err_mem;
/* data is present; dma-map it */
if (cmd->sc_data_direction == DMA_FROM_DEVICE ||
@ -812,7 +920,7 @@ static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev,
if (unlikely(cmd->request_bufflen < 1)) {
printk(KERN_WARNING "ata%u(%u): WARNING: zero len r/w req\n",
ap->id, dev->devno);
goto err_out;
goto err_did;
}
if (cmd->use_sg)
@ -827,19 +935,28 @@ static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev,
qc->complete_fn = ata_scsi_qc_complete;
if (xlat_func(qc, scsicmd))
goto err_out;
goto early_finish;
/* select device, send command to hardware */
if (ata_qc_issue(qc))
goto err_out;
goto err_did;
VPRINTK("EXIT\n");
return;
err_out:
early_finish:
ata_qc_free(qc);
done(cmd);
DPRINTK("EXIT - early finish (good or error)\n");
return;
err_did:
ata_qc_free(qc);
ata_bad_cdb(cmd, done);
DPRINTK("EXIT - badcmd\n");
err_mem:
cmd->result = (DID_ERROR << 16);
done(cmd);
DPRINTK("EXIT - internal\n");
return;
}
/**
@ -906,7 +1023,8 @@ static inline void ata_scsi_rbuf_put(struct scsi_cmnd *cmd, u8 *buf)
* Mapping the response buffer, calling the command's handler,
* and handling the handler's return value. This return value
* indicates whether the handler wishes the SCSI command to be
* completed successfully, or not.
* completed successfully (0), or not (in which case cmd->result
* and sense buffer are assumed to be set).
*
* LOCKING:
* spin_lock_irqsave(host_set lock)
@ -925,12 +1043,9 @@ void ata_scsi_rbuf_fill(struct ata_scsi_args *args,
rc = actor(args, rbuf, buflen);
ata_scsi_rbuf_put(cmd, rbuf);
if (rc)
ata_bad_cdb(cmd, args->done);
else {
if (rc == 0)
cmd->result = SAM_STAT_GOOD;
args->done(cmd);
}
args->done(cmd);
}
/**
@ -1236,8 +1351,16 @@ unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf,
* in the same manner)
*/
page_control = scsicmd[2] >> 6;
if ((page_control != 0) && (page_control != 3))
return 1;
switch (page_control) {
case 0: /* current */
break; /* supported */
case 3: /* saved */
goto saving_not_supp;
case 1: /* changeable */
case 2: /* defaults */
default:
goto invalid_fld;
}
if (six_byte)
output_len = 4;
@ -1268,7 +1391,7 @@ unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf,
break;
default: /* invalid page code */
return 1;
goto invalid_fld;
}
if (six_byte) {
@ -1281,6 +1404,16 @@ unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf,
}
return 0;
invalid_fld:
ata_scsi_set_sense(args->cmd, ILLEGAL_REQUEST, 0x24, 0x0);
/* "Invalid field in cbd" */
return 1;
saving_not_supp:
ata_scsi_set_sense(args->cmd, ILLEGAL_REQUEST, 0x39, 0x0);
/* "Saving parameters not supported" */
return 1;
}
/**
@ -1379,6 +1512,34 @@ unsigned int ata_scsiop_report_luns(struct ata_scsi_args *args, u8 *rbuf,
return 0;
}
/**
* ata_scsi_set_sense - Set SCSI sense data and status
* @cmd: SCSI request to be handled
* @sk: SCSI-defined sense key
* @asc: SCSI-defined additional sense code
* @ascq: SCSI-defined additional sense code qualifier
*
* Helper function that builds a valid fixed-format sense block with
* a current response code, the given sense key (sk), additional
* sense code (asc) and additional sense code qualifier (ascq),
* together with a SCSI command status of %SAM_STAT_CHECK_CONDITION
* and DRIVER_SENSE set in the upper bits of scsi_cmnd::result.
*
* LOCKING:
* Not required
*/
void ata_scsi_set_sense(struct scsi_cmnd *cmd, u8 sk, u8 asc, u8 ascq)
{
cmd->result = (DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION;
cmd->sense_buffer[0] = 0x70; /* fixed format, current */
cmd->sense_buffer[2] = sk;
cmd->sense_buffer[7] = 18 - 8; /* additional sense length */
cmd->sense_buffer[12] = asc;
cmd->sense_buffer[13] = ascq;
}
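Laid out concretely, the 18-byte fixed-format block produced by the ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, 0x24, 0x0) calls in this patch looks like this (userspace sketch, illustration only):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t sb[18] = { 0 };

	sb[0]  = 0x70;		/* response code: fixed format, current error */
	sb[2]  = 0x05;		/* sense key: ILLEGAL REQUEST */
	sb[7]  = 18 - 8;	/* additional sense length: bytes 8..17 follow */
	sb[12] = 0x24;		/* ASC:  INVALID FIELD IN CDB */
	sb[13] = 0x00;		/* ASCQ */

	for (int i = 0; i < 18; i++)
		printf("%02x%c", sb[i], i == 17 ? '\n' : ' ');
	return 0;
}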
/**
* ata_scsi_badcmd - End a SCSI request with an error
* @cmd: SCSI request to be handled
@ -1397,30 +1558,84 @@ unsigned int ata_scsiop_report_luns(struct ata_scsi_args *args, u8 *rbuf,
void ata_scsi_badcmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *), u8 asc, u8 ascq)
{
DPRINTK("ENTER\n");
cmd->result = SAM_STAT_CHECK_CONDITION;
cmd->sense_buffer[0] = 0x70;
cmd->sense_buffer[2] = ILLEGAL_REQUEST;
cmd->sense_buffer[7] = 14 - 8; /* addnl. sense len. FIXME: correct? */
cmd->sense_buffer[12] = asc;
cmd->sense_buffer[13] = ascq;
ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, asc, ascq);
done(cmd);
}
void atapi_request_sense(struct ata_port *ap, struct ata_device *dev,
struct scsi_cmnd *cmd)
{
DECLARE_COMPLETION(wait);
struct ata_queued_cmd *qc;
unsigned long flags;
int rc;
DPRINTK("ATAPI request sense\n");
qc = ata_qc_new_init(ap, dev);
BUG_ON(qc == NULL);
/* FIXME: is this needed? */
memset(cmd->sense_buffer, 0, sizeof(cmd->sense_buffer));
ata_sg_init_one(qc, cmd->sense_buffer, sizeof(cmd->sense_buffer));
qc->dma_dir = DMA_FROM_DEVICE;
memset(&qc->cdb, 0, ap->cdb_len);
qc->cdb[0] = REQUEST_SENSE;
qc->cdb[4] = SCSI_SENSE_BUFFERSIZE;
qc->tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
qc->tf.command = ATA_CMD_PACKET;
qc->tf.protocol = ATA_PROT_ATAPI;
qc->tf.lbam = (8 * 1024) & 0xff;
qc->tf.lbah = (8 * 1024) >> 8;
qc->nbytes = SCSI_SENSE_BUFFERSIZE;
qc->waiting = &wait;
qc->complete_fn = ata_qc_complete_noop;
spin_lock_irqsave(&ap->host_set->lock, flags);
rc = ata_qc_issue(qc);
spin_unlock_irqrestore(&ap->host_set->lock, flags);
if (rc)
ata_port_disable(ap);
else
wait_for_completion(&wait);
DPRINTK("EXIT\n");
}
static int atapi_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
{
struct scsi_cmnd *cmd = qc->scsicmd;
if (unlikely(drv_stat & (ATA_ERR | ATA_BUSY | ATA_DRQ))) {
VPRINTK("ENTER, drv_stat == 0x%x\n", drv_stat);
if (unlikely(drv_stat & (ATA_BUSY | ATA_DRQ)))
ata_to_sense_error(qc, drv_stat);
else if (unlikely(drv_stat & ATA_ERR)) {
DPRINTK("request check condition\n");
/* FIXME: command completion with check condition
* but no sense causes the error handler to run,
* which then issues REQUEST SENSE, fills in the sense
* buffer, and completes the command (for the second
* time). We need to issue REQUEST SENSE some other
* way, to avoid completing the command twice.
*/
cmd->result = SAM_STAT_CHECK_CONDITION;
qc->scsidone(cmd);
return 1;
} else {
}
else {
u8 *scsicmd = cmd->cmnd;
if (scsicmd[0] == INQUIRY) {
@ -1428,15 +1643,30 @@ static int atapi_qc_complete(struct ata_queued_cmd *qc, u8 drv_stat)
unsigned int buflen;
buflen = ata_scsi_rbuf_get(cmd, &buf);
buf[2] = 0x5;
buf[3] = (buf[3] & 0xf0) | 2;
/* ATAPI devices typically report zero for their SCSI version,
* and sometimes deviate from the spec WRT response data
* format. If SCSI version is reported as zero like normal,
* then we make the following fixups: 1) Fake MMC-5 version,
* to indicate to the Linux scsi midlayer this is a modern
* device. 2) Ensure response data format / ATAPI information
* are always correct.
*/
/* FIXME: do we ever override EVPD pages and the like, with
* this code?
*/
if (buf[2] == 0) {
buf[2] = 0x5;
buf[3] = 0x32;
}
ata_scsi_rbuf_put(cmd, buf);
}
cmd->result = SAM_STAT_GOOD;
}
qc->scsidone(cmd);
return 0;
}
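The INQUIRY fixup above is small enough to model directly. A userspace sketch (illustration only; the sample reply bytes are made up) of the version/format rewrite applied to INQUIRY data:

#include <stdint.h>
#include <stdio.h>

/*
 * If the ATAPI device reports SCSI version 0, claim a modern version
 * and force a sane response data format, per the comment above.
 */
static void fixup_inquiry(uint8_t *buf)
{
	if (buf[2] == 0) {
		buf[2] = 0x5;	/* fake MMC-5 version */
		buf[3] = 0x32;	/* fixed response data format value */
	}
}

int main(void)
{
	uint8_t inq[36] = { 0x05, 0x80, 0x00, 0x00 };	/* hypothetical ATAPI reply */

	fixup_inquiry(inq);
	printf("version=0x%02x format=0x%02x\n", inq[2], inq[3]);
	return 0;
}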
/**
@ -1697,7 +1927,7 @@ void ata_scsi_simulate(u16 *id,
case INQUIRY:
if (scsicmd[1] & 2) /* is CmdDt set? */
ata_bad_cdb(cmd, done);
ata_scsi_invalid_field(cmd, done);
else if ((scsicmd[1] & 1) == 0) /* is EVPD clear? */
ata_scsi_rbuf_fill(&args, ata_scsiop_inq_std);
else if (scsicmd[2] == 0x00)
@ -1707,7 +1937,7 @@ void ata_scsi_simulate(u16 *id,
else if (scsicmd[2] == 0x83)
ata_scsi_rbuf_fill(&args, ata_scsiop_inq_83);
else
ata_bad_cdb(cmd, done);
ata_scsi_invalid_field(cmd, done);
break;
case MODE_SENSE:
@ -1717,7 +1947,7 @@ void ata_scsi_simulate(u16 *id,
case MODE_SELECT: /* unconditionally return */
case MODE_SELECT_10: /* bad-field-in-cdb */
ata_bad_cdb(cmd, done);
ata_scsi_invalid_field(cmd, done);
break;
case READ_CAPACITY:
@ -1728,7 +1958,7 @@ void ata_scsi_simulate(u16 *id,
if ((scsicmd[1] & 0x1f) == SAI_READ_CAPACITY_16)
ata_scsi_rbuf_fill(&args, ata_scsiop_read_cap);
else
ata_bad_cdb(cmd, done);
ata_scsi_invalid_field(cmd, done);
break;
case REPORT_LUNS:
@ -1740,8 +1970,26 @@ void ata_scsi_simulate(u16 *id,
/* all other commands */
default:
ata_bad_scsiop(cmd, done);
ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, 0x20, 0x0);
/* "Invalid command operation code" */
done(cmd);
break;
}
}
void ata_scsi_scan_host(struct ata_port *ap)
{
struct ata_device *dev;
unsigned int i;
if (ap->flags & ATA_FLAG_PORT_DISABLED)
return;
for (i = 0; i < ATA_MAX_DEVICES; i++) {
dev = &ap->device[i];
if (ata_dev_present(dev))
scsi_scan_target(&ap->host->shost_gendev, 0, i, 0, 0);
}
}


@ -39,6 +39,7 @@ struct ata_scsi_args {
/* libata-core.c */
extern int atapi_enabled;
extern int ata_qc_complete_noop(struct ata_queued_cmd *qc, u8 drv_stat);
extern struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap,
struct ata_device *dev);
extern void ata_qc_free(struct ata_queued_cmd *qc);
@ -51,6 +52,9 @@ extern void swap_buf_le16(u16 *buf, unsigned int buf_words);
/* libata-scsi.c */
extern void atapi_request_sense(struct ata_port *ap, struct ata_device *dev,
struct scsi_cmnd *cmd);
extern void ata_scsi_scan_host(struct ata_port *ap);
extern void ata_to_sense_error(struct ata_queued_cmd *qc, u8 drv_stat);
extern int ata_scsi_error(struct Scsi_Host *host);
extern unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf,
@ -76,18 +80,10 @@ extern unsigned int ata_scsiop_report_luns(struct ata_scsi_args *args, u8 *rbuf,
extern void ata_scsi_badcmd(struct scsi_cmnd *cmd,
void (*done)(struct scsi_cmnd *),
u8 asc, u8 ascq);
extern void ata_scsi_set_sense(struct scsi_cmnd *cmd,
u8 sk, u8 asc, u8 ascq);
extern void ata_scsi_rbuf_fill(struct ata_scsi_args *args,
unsigned int (*actor) (struct ata_scsi_args *args,
u8 *rbuf, unsigned int buflen));
static inline void ata_bad_scsiop(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
{
ata_scsi_badcmd(cmd, done, 0x20, 0x00);
}
static inline void ata_bad_cdb(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *))
{
ata_scsi_badcmd(cmd, done, 0x24, 0x00);
}
#endif /* __LIBATA_H__ */


@ -973,10 +973,10 @@ lpfc_get_host_fabric_name (struct Scsi_Host *shost)
if ((phba->fc_flag & FC_FABRIC) ||
((phba->fc_topology == TOPOLOGY_LOOP) &&
(phba->fc_flag & FC_PUBLIC_LOOP)))
node_name = wwn_to_u64(phba->fc_fabparam.nodeName.wwn);
node_name = wwn_to_u64(phba->fc_fabparam.nodeName.u.wwn);
else
/* fabric is local port if there is no F/FL_Port */
node_name = wwn_to_u64(phba->fc_nodename.wwn);
node_name = wwn_to_u64(phba->fc_nodename.u.wwn);
spin_unlock_irq(shost->host_lock);
@ -1110,7 +1110,7 @@ lpfc_get_starget_node_name(struct scsi_target *starget)
/* Search the mapped list for this target ID */
list_for_each_entry(ndlp, &phba->fc_nlpmap_list, nlp_listp) {
if (starget->id == ndlp->nlp_sid) {
node_name = wwn_to_u64(ndlp->nlp_nodename.wwn);
node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
break;
}
}
@ -1131,7 +1131,7 @@ lpfc_get_starget_port_name(struct scsi_target *starget)
/* Search the mapped list for this target ID */
list_for_each_entry(ndlp, &phba->fc_nlpmap_list, nlp_listp) {
if (starget->id == ndlp->nlp_sid) {
port_name = wwn_to_u64(ndlp->nlp_portname.wwn);
port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
break;
}
}


@ -1019,8 +1019,8 @@ lpfc_register_remote_port(struct lpfc_hba * phba,
struct fc_rport_identifiers rport_ids;
/* Remote port has reappeared. Re-register w/ FC transport */
rport_ids.node_name = wwn_to_u64(ndlp->nlp_nodename.wwn);
rport_ids.port_name = wwn_to_u64(ndlp->nlp_portname.wwn);
rport_ids.node_name = wwn_to_u64(ndlp->nlp_nodename.u.wwn);
rport_ids.port_name = wwn_to_u64(ndlp->nlp_portname.u.wwn);
rport_ids.port_id = ndlp->nlp_DID;
rport_ids.roles = FC_RPORT_ROLE_UNKNOWN;
if (ndlp->nlp_type & NLP_FCP_TARGET)


@ -280,9 +280,9 @@ struct lpfc_name {
#define NAME_CCITT_GR_TYPE 0xE
uint8_t IEEEextLsb; /* FC Word 0, bit 16:23, IEEE extended Lsb */
uint8_t IEEE[6]; /* FC IEEE address */
};
} s;
uint8_t wwn[8];
};
} u;
};
struct csp {
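What this hunk does: the anonymous struct in struct lpfc_name is wrapped in a named union u, so the same eight bytes can be read either field-by-field (u.s.IEEE and friends) or as a raw worldwide name (u.wwn) for wwn_to_u64(); that aliasing is what the call-site changes in this merge rely on. A reduced userspace model (the two-byte stand-in collapses the bitfields shown above, and wwn_to_u64 is reimplemented here as the usual big-endian pack, as an assumption about the transport helper):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reduced model of the reworked struct lpfc_name, illustration only. */
struct lpfc_name {
	union {
		struct {
			uint8_t name_and_ext[2];	/* stand-in for the bitfields */
			uint8_t IEEE[6];		/* FC IEEE address */
		} s;
		uint8_t wwn[8];
	} u;
};

static uint64_t wwn_to_u64(const uint8_t wwn[8])
{
	uint64_t v = 0;
	for (int i = 0; i < 8; i++)
		v = (v << 8) | wwn[i];	/* big-endian pack */
	return v;
}

int main(void)
{
	static const uint8_t sample[8] = { 0x20, 0x00, 0x00, 0xe0, 0x8b, 0x01, 0x02, 0x03 };
	struct lpfc_name n;

	memcpy(n.u.wwn, sample, 8);
	/* Same bytes, two views: */
	printf("wwn = 0x%016llx, IEEE[0] = 0x%02x\n",
	       (unsigned long long)wwn_to_u64(n.u.wwn), n.u.s.IEEE[0]);
	return 0;
}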


@ -285,7 +285,7 @@ lpfc_config_port_post(struct lpfc_hba * phba)
if (phba->SerialNumber[0] == 0) {
uint8_t *outptr;
outptr = (uint8_t *) & phba->fc_nodename.IEEE[0];
outptr = &phba->fc_nodename.u.s.IEEE[0];
for (i = 0; i < 12; i++) {
status = *outptr++;
j = ((status & 0xf0) >> 4);
@ -1523,8 +1523,8 @@ lpfc_pci_probe_one(struct pci_dev *pdev, const struct pci_device_id *pid)
* Must done after lpfc_sli_hba_setup()
*/
fc_host_node_name(host) = wwn_to_u64(phba->fc_nodename.wwn);
fc_host_port_name(host) = wwn_to_u64(phba->fc_portname.wwn);
fc_host_node_name(host) = wwn_to_u64(phba->fc_nodename.u.wwn);
fc_host_port_name(host) = wwn_to_u64(phba->fc_portname.u.wwn);
fc_host_supported_classes(host) = FC_COS_CLASS3;
memset(fc_host_supported_fc4s(host), 0,


@ -621,8 +621,6 @@ mega_build_cmd(adapter_t *adapter, Scsi_Cmnd *cmd, int *busy)
if(islogical) {
switch (cmd->cmnd[0]) {
case TEST_UNIT_READY:
memset(cmd->request_buffer, 0, cmd->request_bufflen);
#if MEGA_HAVE_CLUSTERING
/*
* Do we support clustering and is the support enabled
@ -652,11 +650,28 @@ mega_build_cmd(adapter_t *adapter, Scsi_Cmnd *cmd, int *busy)
return NULL;
#endif
case MODE_SENSE:
case MODE_SENSE: {
char *buf;
if (cmd->use_sg) {
struct scatterlist *sg;
sg = (struct scatterlist *)cmd->request_buffer;
buf = kmap_atomic(sg->page, KM_IRQ0) +
sg->offset;
} else
buf = cmd->request_buffer;
memset(cmd->request_buffer, 0, cmd->cmnd[4]);
memset(buf, 0, cmd->cmnd[4]);
if (cmd->use_sg) {
struct scatterlist *sg;
sg = (struct scatterlist *)cmd->request_buffer;
kunmap_atomic(buf - sg->offset, KM_IRQ0);
}
cmd->result = (DID_OK << 16);
cmd->scsi_done(cmd);
return NULL;
}
case READ_CAPACITY:
case INQUIRY:
@ -1685,14 +1700,23 @@ mega_rundoneq (adapter_t *adapter)
static void
mega_free_scb(adapter_t *adapter, scb_t *scb)
{
unsigned long length;
switch( scb->dma_type ) {
case MEGA_DMA_TYPE_NONE:
break;
case MEGA_BULK_DATA:
if (scb->cmd->use_sg == 0)
length = scb->cmd->request_bufflen;
else {
struct scatterlist *sgl =
(struct scatterlist *)scb->cmd->request_buffer;
length = sgl->length;
}
pci_unmap_page(adapter->dev, scb->dma_h_bulkdata,
scb->cmd->request_bufflen, scb->dma_direction);
length, scb->dma_direction);
break;
case MEGA_SGLIST:
@ -1741,6 +1765,7 @@ mega_build_sglist(adapter_t *adapter, scb_t *scb, u32 *buf, u32 *len)
struct scatterlist *sgl;
struct page *page;
unsigned long offset;
unsigned int length;
Scsi_Cmnd *cmd;
int sgcnt;
int idx;
@ -1748,14 +1773,23 @@ mega_build_sglist(adapter_t *adapter, scb_t *scb, u32 *buf, u32 *len)
cmd = scb->cmd;
/* Scatter-gather not used */
if( !cmd->use_sg ) {
if( cmd->use_sg == 0 || (cmd->use_sg == 1 &&
!adapter->has_64bit_addr)) {
page = virt_to_page(cmd->request_buffer);
offset = offset_in_page(cmd->request_buffer);
if (cmd->use_sg == 0) {
page = virt_to_page(cmd->request_buffer);
offset = offset_in_page(cmd->request_buffer);
length = cmd->request_bufflen;
} else {
sgl = (struct scatterlist *)cmd->request_buffer;
page = sgl->page;
offset = sgl->offset;
length = sgl->length;
}
scb->dma_h_bulkdata = pci_map_page(adapter->dev,
page, offset,
cmd->request_bufflen,
length,
scb->dma_direction);
scb->dma_type = MEGA_BULK_DATA;
@ -1765,14 +1799,14 @@ mega_build_sglist(adapter_t *adapter, scb_t *scb, u32 *buf, u32 *len)
*/
if( adapter->has_64bit_addr ) {
scb->sgl64[0].address = scb->dma_h_bulkdata;
scb->sgl64[0].length = cmd->request_bufflen;
scb->sgl64[0].length = length;
*buf = (u32)scb->sgl_dma_addr;
*len = (u32)cmd->request_bufflen;
*len = (u32)length;
return 1;
}
else {
*buf = (u32)scb->dma_h_bulkdata;
*len = (u32)cmd->request_bufflen;
*len = (u32)length;
}
return 0;
}
@ -1791,27 +1825,23 @@ mega_build_sglist(adapter_t *adapter, scb_t *scb, u32 *buf, u32 *len)
if( sgcnt > adapter->sglen ) BUG();
*len = 0;
for( idx = 0; idx < sgcnt; idx++, sgl++ ) {
if( adapter->has_64bit_addr ) {
scb->sgl64[idx].address = sg_dma_address(sgl);
scb->sgl64[idx].length = sg_dma_len(sgl);
*len += scb->sgl64[idx].length = sg_dma_len(sgl);
}
else {
scb->sgl[idx].address = sg_dma_address(sgl);
scb->sgl[idx].length = sg_dma_len(sgl);
*len += scb->sgl[idx].length = sg_dma_len(sgl);
}
}
/* Reset pointer and length fields */
*buf = scb->sgl_dma_addr;
/*
* For passthru command, dataxferlen must be set, even for commands
* with a sg list
*/
*len = (u32)cmd->request_bufflen;
/* Return count of SG requests */
return sgcnt;
}
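The accumulation above replaces the old *len = cmd->request_bufflen, presumably so the byte count always matches what was actually DMA-mapped. The chained assignment *len += scb->sgl[idx].length = sg_dma_len(sgl) stores each segment length and adds it to the running total in one statement; a plain-C equivalent (the segment lengths are made up):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Stand-ins for sg_dma_len() of three mapped entries. */
	uint32_t dma_len[3] = { 4096, 8192, 512 };
	uint32_t sgl_length[3];
	uint32_t len = 0;

	for (int i = 0; i < 3; i++)
		len += sgl_length[i] = dma_len[i];	/* store and accumulate */

	printf("total transfer length = %u\n", (unsigned)len);	/* 12800 */
	return 0;
}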


@ -76,3 +76,12 @@ config MEGARAID_LEGACY
To compile this driver as a module, choose M here: the
module will be called megaraid
endif
config MEGARAID_SAS
tristate "LSI Logic MegaRAID SAS RAID Module"
depends on PCI && SCSI
help
Module for LSI Logic's SAS-based RAID controllers.
To compile this driver as a module, choose M here: the
module will be called megaraid_sas.


@ -1,2 +1,3 @@
obj-$(CONFIG_MEGARAID_MM) += megaraid_mm.o
obj-$(CONFIG_MEGARAID_MAILBOX) += megaraid_mbox.o
obj-$(CONFIG_MEGARAID_SAS) += megaraid_sas.o

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -330,6 +330,8 @@ qla2x00_update_login_fcport(scsi_qla_host_t *ha, struct mbx_entry *mbxstat,
fcport->flags &= ~FCF_FAILOVER_NEEDED;
fcport->iodesc_idx_sent = IODESC_INVALID_INDEX;
atomic_set(&fcport->state, FCS_ONLINE);
if (fcport->rport)
fc_remote_port_unblock(fcport->rport);
}

File diff suppressed because it is too large


@ -29,6 +29,8 @@
* NV-specific details such as register offsets, SATA phy location,
* hotplug info, etc.
*
* 0.09
* - Fixed bug introduced by 0.08's MCP51 and MCP55 support.
*
* 0.08
* - Added support for MCP51 and MCP55.
@ -132,9 +134,7 @@ enum nv_host_type
GENERIC,
NFORCE2,
NFORCE3,
CK804,
MCP51,
MCP55
CK804
};
static struct pci_device_id nv_pci_tbl[] = {
@ -153,13 +153,13 @@ static struct pci_device_id nv_pci_tbl[] = {
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_SATA2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP51 },
PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP51 },
PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP55 },
PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP55 },
PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC },
{ PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID,
PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_STORAGE_IDE<<8, 0xffff00, GENERIC },
@ -405,7 +405,7 @@ static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
rc = -ENOMEM;
ppi = &nv_port_info;
probe_ent = ata_pci_init_native_mode(pdev, &ppi);
probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
if (!probe_ent)
goto err_out_regions;


@ -263,7 +263,7 @@ static int sis_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_out_regions;
ppi = &sis_port_info;
probe_ent = ata_pci_init_native_mode(pdev, &ppi);
probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
if (!probe_ent) {
rc = -ENOMEM;
goto err_out_regions;


@ -202,7 +202,7 @@ static int uli_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_out_regions;
ppi = &uli_port_info;
probe_ent = ata_pci_init_native_mode(pdev, &ppi);
probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
if (!probe_ent) {
rc = -ENOMEM;
goto err_out_regions;


@ -212,7 +212,7 @@ static struct ata_probe_ent *vt6420_init_probe_ent(struct pci_dev *pdev)
struct ata_probe_ent *probe_ent;
struct ata_port_info *ppi = &svia_port_info;
probe_ent = ata_pci_init_native_mode(pdev, &ppi);
probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY);
if (!probe_ent)
return NULL;


@ -587,6 +587,7 @@ static int scsi_probe_lun(struct scsi_device *sdev, char *inq_result,
if (sdev->scsi_level >= 2 ||
(sdev->scsi_level == 1 && (inq_result[3] & 0x0f) == 1))
sdev->scsi_level++;
sdev->sdev_target->scsi_level = sdev->scsi_level;
return 0;
}
@ -771,6 +772,15 @@ static int scsi_add_lun(struct scsi_device *sdev, char *inq_result, int *bflags)
return SCSI_SCAN_LUN_PRESENT;
}
static inline void scsi_destroy_sdev(struct scsi_device *sdev)
{
if (sdev->host->hostt->slave_destroy)
sdev->host->hostt->slave_destroy(sdev);
transport_destroy_device(&sdev->sdev_gendev);
put_device(&sdev->sdev_gendev);
}
/**
* scsi_probe_and_add_lun - probe a LUN, if a LUN is found add it
* @starget: pointer to target device structure
@ -803,9 +813,9 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
* The rescan flag is used as an optimization, the first scan of a
* host adapter calls into here with rescan == 0.
*/
if (rescan) {
sdev = scsi_device_lookup_by_target(starget, lun);
if (sdev) {
sdev = scsi_device_lookup_by_target(starget, lun);
if (sdev) {
if (rescan || sdev->sdev_state != SDEV_CREATED) {
SCSI_LOG_SCAN_BUS(3, printk(KERN_INFO
"scsi scan: device exists on %s\n",
sdev->sdev_gendev.bus_id));
@ -820,9 +830,9 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
sdev->model);
return SCSI_SCAN_LUN_PRESENT;
}
}
sdev = scsi_alloc_sdev(starget, lun, hostdata);
scsi_device_put(sdev);
} else
sdev = scsi_alloc_sdev(starget, lun, hostdata);
if (!sdev)
goto out;
@ -877,12 +887,8 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
res = SCSI_SCAN_NO_RESPONSE;
}
}
} else {
if (sdev->host->hostt->slave_destroy)
sdev->host->hostt->slave_destroy(sdev);
transport_destroy_device(&sdev->sdev_gendev);
put_device(&sdev->sdev_gendev);
}
} else
scsi_destroy_sdev(sdev);
out:
return res;
}
@ -1054,7 +1060,7 @@ EXPORT_SYMBOL(int_to_scsilun);
* 0: scan completed (or no memory, so further scanning is futile)
* 1: no report lun scan, or not configured
**/
static int scsi_report_lun_scan(struct scsi_device *sdev, int bflags,
static int scsi_report_lun_scan(struct scsi_target *starget, int bflags,
int rescan)
{
char devname[64];
@ -1067,7 +1073,8 @@ static int scsi_report_lun_scan(struct scsi_device *sdev, int bflags,
struct scsi_lun *lunp, *lun_data;
u8 *data;
struct scsi_sense_hdr sshdr;
struct scsi_target *starget = scsi_target(sdev);
struct scsi_device *sdev;
struct Scsi_Host *shost = dev_to_shost(&starget->dev);
/*
* Only support SCSI-3 and up devices if BLIST_NOREPORTLUN is not set.
@ -1075,15 +1082,23 @@ static int scsi_report_lun_scan(struct scsi_device *sdev, int bflags,
* support more than 8 LUNs.
*/
if ((bflags & BLIST_NOREPORTLUN) ||
sdev->scsi_level < SCSI_2 ||
(sdev->scsi_level < SCSI_3 &&
(!(bflags & BLIST_REPORTLUN2) || sdev->host->max_lun <= 8)) )
starget->scsi_level < SCSI_2 ||
(starget->scsi_level < SCSI_3 &&
(!(bflags & BLIST_REPORTLUN2) || shost->max_lun <= 8)) )
return 1;
if (bflags & BLIST_NOLUN)
return 0;
if (!(sdev = scsi_device_lookup_by_target(starget, 0))) {
sdev = scsi_alloc_sdev(starget, 0, NULL);
if (!sdev)
return 0;
if (scsi_device_get(sdev))
return 0;
}
sprintf(devname, "host %d channel %d id %d",
sdev->host->host_no, sdev->channel, sdev->id);
shost->host_no, sdev->channel, sdev->id);
/*
* Allocate enough to hold the header (the same size as one scsi_lun)
@ -1098,8 +1113,10 @@ static int scsi_report_lun_scan(struct scsi_device *sdev, int bflags,
length = (max_scsi_report_luns + 1) * sizeof(struct scsi_lun);
lun_data = kmalloc(length, GFP_ATOMIC |
(sdev->host->unchecked_isa_dma ? __GFP_DMA : 0));
if (!lun_data)
if (!lun_data) {
printk(ALLOC_FAILURE_MSG, __FUNCTION__);
goto out;
}
scsi_cmd[0] = REPORT_LUNS;
@ -1201,10 +1218,6 @@ static int scsi_report_lun_scan(struct scsi_device *sdev, int bflags,
for (i = 0; i < sizeof(struct scsi_lun); i++)
printk("%02x", data[i]);
printk(" has a LUN larger than currently supported.\n");
} else if (lun == 0) {
/*
* LUN 0 has already been scanned.
*/
} else if (lun > sdev->host->max_lun) {
printk(KERN_WARNING "scsi: %s lun%d has a LUN larger"
" than allowed by the host adapter\n",
@ -1227,13 +1240,13 @@ static int scsi_report_lun_scan(struct scsi_device *sdev, int bflags,
}
kfree(lun_data);
return 0;
out:
/*
* We are out of memory, don't try scanning any further.
*/
printk(ALLOC_FAILURE_MSG, __FUNCTION__);
scsi_device_put(sdev);
if (sdev->sdev_state == SDEV_CREATED)
/*
* the sdev we used didn't appear in the report luns scan
*/
scsi_destroy_sdev(sdev);
return 0;
}
@ -1299,7 +1312,6 @@ static void __scsi_scan_target(struct device *parent, unsigned int channel,
struct Scsi_Host *shost = dev_to_shost(parent);
int bflags = 0;
int res;
struct scsi_device *sdev = NULL;
struct scsi_target *starget;
if (shost->this_id == id)
@ -1325,27 +1337,16 @@ static void __scsi_scan_target(struct device *parent, unsigned int channel,
* Scan LUN 0, if there is some response, scan further. Ideally, we
* would not configure LUN 0 until all LUNs are scanned.
*/
res = scsi_probe_and_add_lun(starget, 0, &bflags, &sdev, rescan, NULL);
if (res == SCSI_SCAN_LUN_PRESENT) {
if (scsi_report_lun_scan(sdev, bflags, rescan) != 0)
res = scsi_probe_and_add_lun(starget, 0, &bflags, NULL, rescan, NULL);
if (res == SCSI_SCAN_LUN_PRESENT || res == SCSI_SCAN_TARGET_PRESENT) {
if (scsi_report_lun_scan(starget, bflags, rescan) != 0)
/*
* The REPORT LUN did not scan the target,
* do a sequential scan.
*/
scsi_sequential_lun_scan(starget, bflags,
res, sdev->scsi_level, rescan);
} else if (res == SCSI_SCAN_TARGET_PRESENT) {
/*
* There's a target here, but lun 0 is offline so we
* can't use the report_lun scan. Fall back to a
* sequential lun scan with a bflags of SPARSELUN and
* a default scsi level of SCSI_2
*/
scsi_sequential_lun_scan(starget, BLIST_SPARSELUN,
SCSI_SCAN_TARGET_PRESENT, SCSI_2, rescan);
res, starget->scsi_level, rescan);
}
if (sdev)
scsi_device_put(sdev);
out_reap:
/* now determine if the target has any children at all
@ -1542,10 +1543,7 @@ void scsi_free_host_dev(struct scsi_device *sdev)
{
BUG_ON(sdev->id != sdev->host->this_id);
if (sdev->host->hostt->slave_destroy)
sdev->host->hostt->slave_destroy(sdev);
transport_destroy_device(&sdev->sdev_gendev);
put_device(&sdev->sdev_gendev);
scsi_destroy_sdev(sdev);
}
EXPORT_SYMBOL(scsi_free_host_dev);


@ -628,17 +628,16 @@ sas_rphy_delete(struct sas_rphy *rphy)
struct Scsi_Host *shost = dev_to_shost(parent->dev.parent);
struct sas_host_attrs *sas_host = to_sas_host_attrs(shost);
transport_destroy_device(&rphy->dev);
scsi_remove_target(dev);
scsi_remove_target(&rphy->dev);
transport_remove_device(dev);
device_del(dev);
transport_destroy_device(dev);
spin_lock(&sas_host->lock);
list_del(&rphy->list);
spin_unlock(&sas_host->lock);
transport_remove_device(dev);
device_del(dev);
transport_destroy_device(dev);
put_device(&parent->dev);
}
EXPORT_SYMBOL(sas_rphy_delete);


@ -518,11 +518,7 @@ static void sunsu_change_mouse_baud(struct uart_sunsu_port *up)
quot = up->port.uartclk / (16 * new_baud);
spin_unlock(&up->port.lock);
sunsu_change_speed(&up->port, up->cflag, 0, quot);
spin_lock(&up->port.lock);
}
static void receive_kbd_ms_chars(struct uart_sunsu_port *up, struct pt_regs *regs, int is_break)


@ -108,7 +108,7 @@ static int bfs_create(struct inode * dir, struct dentry * dentry, int mode,
inode->i_mapping->a_ops = &bfs_aops;
inode->i_mode = mode;
inode->i_ino = ino;
BFS_I(inode)->i_dsk_ino = cpu_to_le16(ino);
BFS_I(inode)->i_dsk_ino = ino;
BFS_I(inode)->i_sblock = 0;
BFS_I(inode)->i_eblock = 0;
insert_inode_hash(inode);


@ -357,28 +357,46 @@ static int bfs_fill_super(struct super_block *s, void *data, int silent)
}
info->si_blocks = (le32_to_cpu(bfs_sb->s_end) + 1)>>BFS_BSIZE_BITS; /* for statfs(2) */
info->si_freeb = (le32_to_cpu(bfs_sb->s_end) + 1 - cpu_to_le32(bfs_sb->s_start))>>BFS_BSIZE_BITS;
info->si_freeb = (le32_to_cpu(bfs_sb->s_end) + 1 - le32_to_cpu(bfs_sb->s_start))>>BFS_BSIZE_BITS;
info->si_freei = 0;
info->si_lf_eblk = 0;
info->si_lf_sblk = 0;
info->si_lf_ioff = 0;
bh = NULL;
for (i=BFS_ROOT_INO; i<=info->si_lasti; i++) {
inode = iget(s,i);
if (BFS_I(inode)->i_dsk_ino == 0)
info->si_freei++;
else {
set_bit(i, info->si_imap);
info->si_freeb -= inode->i_blocks;
if (BFS_I(inode)->i_eblock > info->si_lf_eblk) {
info->si_lf_eblk = BFS_I(inode)->i_eblock;
info->si_lf_sblk = BFS_I(inode)->i_sblock;
info->si_lf_ioff = BFS_INO2OFF(i);
}
struct bfs_inode *di;
int block = (i - BFS_ROOT_INO)/BFS_INODES_PER_BLOCK + 1;
int off = (i - BFS_ROOT_INO) % BFS_INODES_PER_BLOCK;
unsigned long sblock, eblock;
if (!off) {
brelse(bh);
bh = sb_bread(s, block);
}
if (!bh)
continue;
di = (struct bfs_inode *)bh->b_data + off;
if (!di->i_ino) {
info->si_freei++;
continue;
}
set_bit(i, info->si_imap);
info->si_freeb -= BFS_FILEBLOCKS(di);
sblock = le32_to_cpu(di->i_sblock);
eblock = le32_to_cpu(di->i_eblock);
if (eblock > info->si_lf_eblk) {
info->si_lf_eblk = eblock;
info->si_lf_sblk = sblock;
info->si_lf_ioff = BFS_INO2OFF(i);
}
iput(inode);
}
brelse(bh);
if (!(s->s_flags & MS_RDONLY)) {
mark_buffer_dirty(bh);
mark_buffer_dirty(info->si_sbh);
s->s_dirt = 1;
}
dump_imap("read_super", s);


@ -1551,19 +1551,19 @@ do_link:
if (nd->last_type != LAST_NORM)
goto exit;
if (nd->last.name[nd->last.len]) {
putname(nd->last.name);
__putname(nd->last.name);
goto exit;
}
error = -ELOOP;
if (count++==32) {
putname(nd->last.name);
__putname(nd->last.name);
goto exit;
}
dir = nd->dentry;
down(&dir->d_inode->i_sem);
path.dentry = __lookup_hash(&nd->last, nd->dentry, nd);
path.mnt = nd->mnt;
putname(nd->last.name);
__putname(nd->last.name);
goto do_last;
}


@ -102,6 +102,9 @@ ToDo/Notes:
inode instead of a vfs inode as parameter.
- Fix the definition of the CHKD ntfs record magic. It had an off by
two error causing it to be CHKB instead of CHKD.
- Fix a stupid bug in __ntfs_bitmap_set_bits_in_run() which caused the
count to become negative and hence we had a wild memset() scribbling
all over the system's RAM.
2.1.23 - Implement extension of resident files and make writing safe as well as
many bug fixes, cleanups, and enhancements...


@ -1,7 +1,7 @@
/*
* bitmap.c - NTFS kernel bitmap handling. Part of the Linux-NTFS project.
*
* Copyright (c) 2004 Anton Altaparmakov
* Copyright (c) 2004-2005 Anton Altaparmakov
*
* This program/include file is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as published
@ -90,7 +90,8 @@ int __ntfs_bitmap_set_bits_in_run(struct inode *vi, const s64 start_bit,
/* If the first byte is partial, modify the appropriate bits in it. */
if (bit) {
u8 *byte = kaddr + pos;
while ((bit & 7) && cnt--) {
while ((bit & 7) && cnt) {
cnt--;
if (value)
*byte |= 1 << bit++;
else
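The ChangeLog entry for this fix notes that the count went negative. The reason: in while ((bit & 7) && cnt--), the post-decrement also fires on the final, failing test, so when the loop ends because the count ran out, cnt is left at -1 and the later whole-byte memset() computes a huge length. A minimal standalone demonstration (not NTFS code):

#include <stdio.h>

int main(void)
{
	long cnt;
	int bit;

	/* Buggy form: the post-decrement fires on the exiting test, too. */
	cnt = 2;
	bit = 1;
	while ((bit & 7) && cnt--)
		bit++;
	printf("buggy exit: cnt = %ld\n", cnt);	/* -1: count underflowed */

	/* Fixed form: decrement only inside the body. */
	cnt = 2;
	bit = 1;
	while ((bit & 7) && cnt) {
		cnt--;
		bit++;
	}
	printf("fixed exit: cnt = %ld\n", cnt);	/* 0 */
	return 0;
}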


@ -309,7 +309,7 @@ typedef le16 MFT_RECORD_FLAGS;
* Note: The _LE versions will return a CPU endian formatted value!
*/
#define MFT_REF_MASK_CPU 0x0000ffffffffffffULL
#define MFT_REF_MASK_LE const_cpu_to_le64(0x0000ffffffffffffULL)
#define MFT_REF_MASK_LE const_cpu_to_le64(MFT_REF_MASK_CPU)
typedef u64 MFT_REF;
typedef le64 leMFT_REF;


@ -58,7 +58,8 @@ static inline MFT_RECORD *map_mft_record_page(ntfs_inode *ni)
* overflowing the unsigned long, but I don't think we would ever get
* here if the volume was that big...
*/
index = ni->mft_no << vol->mft_record_size_bits >> PAGE_CACHE_SHIFT;
index = (u64)ni->mft_no << vol->mft_record_size_bits >>
PAGE_CACHE_SHIFT;
ofs = (ni->mft_no << vol->mft_record_size_bits) & ~PAGE_CACHE_MASK;
i_size = i_size_read(mft_vi);
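The (u64) cast matters on 32-bit systems: ni->mft_no << vol->mft_record_size_bits is otherwise computed in unsigned long, so the byte offset loses its high bits before the >> PAGE_CACHE_SHIFT. A minimal demonstration (the record and page sizes are assumed values, not taken from the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t mft_no = 0x500000;	/* large enough that <<10 overflows 32 bits */
	unsigned rec_bits = 10;		/* 1 KiB mft records (assumed) */
	unsigned page_shift = 12;	/* 4 KiB pages */

	uint32_t bad  = (mft_no << rec_bits) >> page_shift;	/* truncated at 32 bits */
	uint64_t good = ((uint64_t)mft_no << rec_bits) >> page_shift;

	printf("32-bit index: 0x%x\n", (unsigned)bad);		/* 0x40000 */
	printf("64-bit index: 0x%llx\n", (unsigned long long)good); /* 0x140000 */
	return 0;
}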

Some files were not shown because too many files have changed in this diff.