Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1443 commits)
  phy/marvell: add 88ec048 support
  igb: Program MDICNFG register prior to PHY init
  e1000e: correct MAC-PHY interconnect register offset for 82579
  hso: Add new product ID
  can: Add driver for esd CAN-USB/2 device
  l2tp: fix export of header file for userspace
  can-raw: Fix skb_orphan_try handling
  Revert "net: remove zap_completion_queue"
  net: cleanup inclusion
  phy/marvell: add 88e1121 interface mode support
  u32: negative offset fix
  net: Fix a typo from "dev" to "ndev"
  igb: Use irq_synchronize per vector when using MSI-X
  ixgbevf: fix null pointer dereference due to filter being set for VLAN 0
  e1000e: Fix irq_synchronize in MSI-X case
  e1000e: register pm_qos request on hardware activation
  ip_fragment: fix subtracting PPPOE_SES_HLEN from mtu twice
  net: Add getsockopt support for TCP thin-streams
  cxgb4: update driver version
  cxgb4: add new PCI IDs
  ...

Manually fix up conflicts in:
 - drivers/net/e1000e/netdev.c: due to pm_qos registration
   infrastructure changes
 - drivers/net/phy/marvell.c: conflict between adding 88ec048 support
   and cleaning up the IDs
 - drivers/net/wireless/ipw2x00/ipw2100.c: trivial ipw2100_pm_qos_req
   conflict (registration change vs marking it static)
Committed by Linus Torvalds on 2010-08-04 11:47:58 -07:00 (commit 6ba74014c1).
1177 changed files with 73408 additions and 50484 deletions


@ -303,15 +303,6 @@ Who: Johannes Berg <johannes@sipsolutions.net>
---------------------------
What: CONFIG_NF_CT_ACCT
When: 2.6.29
Why: Accounting can now be enabled/disabled without kernel recompilation.
Currently used only to set a default value for a feature that is also
controlled by a kernel/module/sysfs/sysctl parameter.
Who: Krzysztof Piotr Oledzki <ole@ans.pl>
---------------------------
What: sysfs ui for changing p4-clockmod parameters
When: September 2009
Why: See commits 129f8ae9b1b5be94517da76009ea956e89104ce8 and


@ -124,6 +124,8 @@ ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>
<hostname> Name of the client. May be supplied by autoconfiguration,
but its absence will not trigger autoconfiguration.
If specified and DHCP is used, the user-provided hostname will
be carried in the DHCP request to hopefully update the DNS record.
Default: Client IP address is used in ASCII notation.
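For illustration only (all values below are made-up examples), a complete
ip= line following the syntax above might look like:

	ip=192.168.0.10:192.168.0.1:192.168.0.254:255.255.255.0:myclient:eth0:off

where 192.168.0.10 is the client address, 192.168.0.1 the server,
192.168.0.254 the gateway, and autoconfiguration is disabled.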


@ -113,12 +113,16 @@ char *driver_name
int (*load_firmware)(struct capi_ctr *ctrlr, capiloaddata *ldata)
(optional) pointer to a callback function for sending firmware and
configuration data to the device
The function may return before the operation has completed.
Completion must be signalled by a call to capi_ctr_ready().
Return value: 0 on success, error code on error
Called in process context.
void (*reset_ctr)(struct capi_ctr *ctrlr)
(optional) pointer to a callback function for performing a reset on
the device, releasing all registered applications
(optional) pointer to a callback function for stopping the device,
releasing all registered applications
The function may return before the operation has completed.
Completion must be signalled by a call to capi_ctr_down().
Called in process context.
void (*register_appl)(struct capi_ctr *ctrlr, u16 applid,
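As an illustrative aside (not part of the interface description above), a
controller driver might wire up these two callbacks roughly as sketched
below. The mydrv_* names are hypothetical; only the callback signatures,
the driver_name field and the capi_ctr_ready()/capi_ctr_down() completion
calls are taken from the text above.

static int mydrv_load_firmware(struct capi_ctr *ctrl, capiloaddata *ldata)
{
	/* hand firmware and configuration data to the hardware here ... */

	/* once the device reports that it is up (done inline here for
	 * brevity, normally from a completion handler), signal readiness: */
	capi_ctr_ready(ctrl);
	return 0;
}

static void mydrv_reset_ctr(struct capi_ctr *ctrl)
{
	/* stop the device and release all registered applications ... */

	capi_ctr_down(ctrl);	/* signal that the reset has completed */
}

static struct capi_ctr mydrv_ctr = {
	.driver_name	= "mydrv",
	.load_firmware	= mydrv_load_firmware,
	.reset_ctr	= mydrv_reset_ctr,
	/* register_appl, release_appl, send_message etc. omitted */
};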


@ -47,9 +47,9 @@ GigaSet 307x Device Driver
1.2. Software
--------
The driver works with ISDN4linux and so can be used with any software
which is able to use ISDN4linux for ISDN connections (voice or data).
Experimental Kernel CAPI support is available as a compilation option.
The driver works with the Kernel CAPI subsystem as well as the old
ISDN4Linux subsystem, so it can be used with any software which is able
to use CAPI 2.0 or ISDN4Linux for ISDN connections (voice or data).
There are some user space tools available at
http://sourceforge.net/projects/gigaset307x/
@ -152,61 +152,42 @@ GigaSet 307x Device Driver
- GIGVER_FWBASE: retrieve the firmware version of the base
Upon return, version[] is filled with the requested version information.
2.3. ISDN4linux
----------
This is the "normal" mode of operation. After loading the module you can
set up the ISDN system just as you'd do with any ISDN card supported by
the ISDN4Linux subsystem. Most distributions provide some configuration
utility. If not, you can use some HOWTOs like
http://www.linuxhaven.de/dlhp/HOWTO/DE-ISDN-HOWTO-5.html
If this doesn't work, because you have some device like SX100 where
debug output (see section 3.2.) shows something like this when dialing
CMD Received: ERROR
Available Params: 0
Connection State: 0, Response: -1
gigaset_process_response: resp_code -1 in ConState 0 !
Timeout occurred
you probably need to use unimodem mode. (see section 2.5.)
2.4. CAPI
2.3. CAPI
----
If the driver is compiled with CAPI support (kernel configuration option
GIGASET_CAPI, experimental) it can also be used with CAPI 2.0 kernel and
user space applications. For user space access, the module capi.ko must
be loaded. The capiinit command (included in the capi4k-utils package)
does this for you.
GIGASET_CAPI) the devices will show up as CAPI controllers as soon as the
corresponding driver module is loaded, and can then be used with CAPI 2.0
kernel and user space applications. For user space access, the module
capi.ko must be loaded.
The CAPI variant of the driver supports legacy ISDN4Linux applications
via the capidrv compatibility driver. The kernel module capidrv.ko must
be loaded explicitly with the command
Legacy ISDN4Linux applications are supported via the capidrv
compatibility driver. The kernel module capidrv.ko must be loaded
explicitly with the command
modprobe capidrv
if needed, and cannot be unloaded again without unloading the driver
first. (These are limitations of capidrv.)
The note about unimodem mode in the preceding section applies here, too.
Most distributions handle loading and unloading of the various CAPI
modules automatically via the command capiinit(1) from the capi4k-utils
package or a similar mechanism. Note that capiinit(1) cannot unload the
Gigaset drivers because it doesn't support more than one module per
driver.
2.4. ISDN4Linux
----------
If the driver is compiled without CAPI support (native ISDN4Linux
variant), it registers the device with the legacy ISDN4Linux subsystem
after loading the module. It can then be used with ISDN4Linux
applications only. Most distributions provide some configuration utility
for setting up that subsystem. Otherwise you can use some HOWTOs like
http://www.linuxhaven.de/dlhp/HOWTO/DE-ISDN-HOWTO-5.html
2.5. Unimodem mode
-------------
This is needed for some devices [e.g. SX100] as they have problems with
the "normal" commands.
If you have installed the command line tool gigacontr, you can enter
unimodem mode using
gigacontr --mode unimodem
You can switch back using
gigacontr --mode isdn
You can also put the driver directly into Unimodem mode when it's loaded,
by passing the module parameter startmode=0 to the hardware specific
module, e.g.
modprobe usb_gigaset startmode=0
or by adding a line like
options usb_gigaset startmode=0
to an appropriate module configuration file, like /etc/modprobe.d/gigaset
or /etc/modprobe.conf.local.
In this mode the device works like a modem connected to a serial port
(the /dev/ttyGU0, ... mentioned above) which understands the commands
ATZ init, reset
=> OK or ERROR
ATD
@ -234,6 +215,31 @@ GigaSet 307x Device Driver
to an appropriate module configuration file, like /etc/modprobe.d/gigaset
or /etc/modprobe.conf.local.
Unimodem mode is needed to make some devices [e.g. SX100] work which
do not support the regular Gigaset command set. If debug output (see
section 3.2.) shows something like this when dialing:
CMD Received: ERROR
Available Params: 0
Connection State: 0, Response: -1
gigaset_process_response: resp_code -1 in ConState 0 !
Timeout occurred
then switching to unimodem mode may help.
If you have installed the command line tool gigacontr, you can enter
unimodem mode using
gigacontr --mode unimodem
You can switch back using
gigacontr --mode isdn
You can also put the driver directly into Unimodem mode when it's loaded,
by passing the module parameter startmode=0 to the hardware specific
module, e.g.
modprobe usb_gigaset startmode=0
or by adding a line like
options usb_gigaset startmode=0
to an appropriate module configuration file, like /etc/modprobe.d/gigaset
or /etc/modprobe.conf.local.
2.6. Call-ID (CID) mode
------------------
Call-IDs are numbers used to tag commands to, and responses from, the
@ -263,7 +269,22 @@ GigaSet 307x Device Driver
change its CID mode while the driver is loaded, eg.
echo 0 > /sys/class/tty/ttyGU0/cidmode
2.7. Unregistered Wireless Devices (M101/M105)
2.7. Dialing Numbers
---------------
The called party number provided by an application for dialing out must
be a public network number according to the local dialing plan, without
any dial prefix for getting an outside line.
Internal calls can be made by providing an internal extension number
prefixed with "**" (two asterisks) as the called party number. So to dial
eg. the first registered DECT handset, give "**11" as the called party
number. Dialing "***" (three asterisks) calls all extensions
simultaneously (global call).
This holds for both CAPI 2.0 and ISDN4Linux applications. Unimodem mode
does not support internal calls.
2.8. Unregistered Wireless Devices (M101/M105)
-----------------------------------------
The main purpose of the ser_gigaset and usb_gigaset drivers is to allow
the M101 and M105 wireless devices to be used as ISDN devices for ISDN


@ -1598,8 +1598,7 @@ and is between 256 and 4096 characters. It is defined in the file
[NETFILTER] Enable connection tracking flow accounting
0 to disable accounting
1 to enable accounting
Default value depends on CONFIG_NF_CT_ACCT that is
going to be removed in 2.6.29.
Default value is 0.
nfsaddrs= [NFS]
See Documentation/filesystems/nfs/nfsroot.txt.


@ -171,7 +171,7 @@ Where the supported parameter are:
led
Can be used to turn on experimental LED code.
0 = Off, 1 = On. Default is 0.
0 = Off, 1 = On. Default is 1.
mode
Can be used to set the default mode of the adapter.


@ -49,6 +49,7 @@ Table of Contents
3.3 Configuring Bonding Manually with Ifenslave
3.3.1 Configuring Multiple Bonds Manually
3.4 Configuring Bonding Manually via Sysfs
3.5 Overriding Configuration for Special Cases
4. Querying Bonding Configuration
4.1 Bonding Configuration
@ -1318,8 +1319,87 @@ echo 2000 > /sys/class/net/bond1/bonding/arp_interval
echo +eth2 > /sys/class/net/bond1/bonding/slaves
echo +eth3 > /sys/class/net/bond1/bonding/slaves
3.5 Overriding Configuration for Special Cases
----------------------------------------------
When using the bonding driver, the physical port which transmits a frame is
typically selected by the bonding driver, and is not relevant to the user or
system administrator. The output port is simply selected using the policies of
the selected bonding mode. On occasion however, it is helpful to direct certain
classes of traffic to certain physical interfaces on output to implement
slightly more complex policies. For example, to reach a web server over a
bonded interface in which eth0 connects to a private network, while eth1
connects via a public network, it may be desirable to bias the bond to send said
traffic over eth0 first, using eth1 only as a fallback, while all other traffic
can safely be sent over either interface. Such configurations may be achieved
using the traffic control utilities inherent in linux.
4. Querying Bonding Configuration
By default the bonding driver is multiqueue aware and 16 queues are created
when the driver initializes (see Documentation/networking/multiqueue.txt
for details). If more or fewer queues are desired, the module parameter
tx_queues can be used to change this value. There is no sysfs parameter
available as the allocation is done at module init time.
The output of the file /proc/net/bonding/bondX has changed so the output Queue
ID is now printed for each slave:
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1a:a0:12:8f:cb
Slave queue ID: 0
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1a:a0:12:8f:cc
Slave queue ID: 2
The queue_id for a slave can be set using the command:
# echo "eth1:2" > /sys/class/net/bond0/bonding/queue_id
Set the queue_id for each interface that needs one with a separate call
like the one above, until the desired values are set for all interfaces. On
distributions that allow configuration via initscripts, multiple 'queue_id'
arguments can be added to BONDING_OPTS to set all needed slave queues.
These queue id's can be used in conjunction with the tc utility to configure
a multiqueue qdisc and filters to bias certain traffic to transmit on certain
slave devices. For instance, say we wanted, in the above configuration to
force all traffic bound to 192.168.1.100 to use eth1 in the bond as its output
device. The following commands would accomplish this:
# tc qdisc add dev bond0 handle 1 root multiq
# tc filter add dev bond0 protocol ip parent 1: prio 1 u32 match ip dst \
192.168.1.100 action skbedit queue_mapping 2
These commands tell the kernel to attach a multiqueue queue discipline to the
bond0 interface and filter traffic enqueued to it, such that packets with a dst
ip of 192.168.1.100 have their output queue mapping value overwritten to 2.
This value is then passed into the driver, causing the normal output path
selection policy to be overridden, selecting instead qid 2, which maps to eth1.
Note that qid values begin at 1. Qid 0 is reserved to indicate to the driver
that normal output policy selection should take place. One benefit of simply
leaving the qid for a slave at 0 is the multiqueue awareness now present in the
bonding driver. This awareness allows tc filters to be placed on
slave devices as well as bond devices and the bonding driver will simply act as
a pass-through for selecting output queues on the slave device rather than
output port selection.
This feature first appeared in bonding driver version 3.7.0 and support for
output slave selection was limited to round-robin and active-backup modes.
4 Querying Bonding Configuration
=================================
4.1 Bonding Configuration


@ -0,0 +1,208 @@
- CAIF SPI porting -
- CAIF SPI basics:
Running CAIF over SPI needs some extra setup, owing to the nature of SPI.
Two extra GPIOs have been added in order to negotiate the transfers
between the master and the slave. The minimum requirement for running
CAIF over SPI is a SPI slave chip and two GPIOs (more details below).
Please note that running as a slave implies that you need to keep up
with the master clock. An overrun or underrun event is fatal.
- CAIF SPI framework:
To make porting as easy as possible, the CAIF SPI has been divided in
two parts. The first part (called the interface part) deals with all
generic functionality such as length framing, SPI frame negotiation
and SPI frame delivery and transmission. The other part is the CAIF
SPI slave device part, which is the module that you have to write if
you want to run SPI CAIF on a new hardware. This part takes care of
the physical hardware, both with regard to SPI and to GPIOs.
- Implementing a CAIF SPI device:
- Functionality provided by the CAIF SPI slave device:
In order to implement a SPI device you will, as a minimum,
need to implement the following
functions:
int (*init_xfer) (struct cfspi_xfer * xfer, struct cfspi_dev *dev):
This function is called by the CAIF SPI interface to give
you a chance to set up your hardware to be ready to receive
a stream of data from the master. The xfer structure contains
both physical and logical addresses, as well as the total length
of the transfer in both directions. The dev parameter can be used
to map to different CAIF SPI slave devices.
void (*sig_xfer) (bool xfer, struct cfspi_dev *dev):
This function is called by the CAIF SPI interface when the output
(SPI_INT) GPIO needs to change state. The boolean value of the xfer
variable indicates whether the GPIO should be asserted (HIGH) or
deasserted (LOW). The dev parameter can be used to map to different CAIF
SPI slave devices.
- Functionality provided by the CAIF SPI interface:
void (*ss_cb) (bool assert, struct cfspi_ifc *ifc);
This function is called by the CAIF SPI slave device in order to
signal a change of state of the input GPIO (SS) to the interface.
Only active edges need to be reported.
This function can be called from IRQ context (recommended in order
not to introduce latency). The ifc parameter should be the pointer
returned from the platform probe function in the SPI device structure.
void (*xfer_done_cb) (struct cfspi_ifc *ifc);
This function is called by the CAIF SPI slave device in order to
report that a transfer is completed. This function should only be
called once both the transmission and the reception are completed.
This function can be called from IRQ context (recommended in order
not to introduce latency). The ifc parameter should be the pointer
returned from the platform probe function in the SPI device structure.
- Connecting the bits and pieces:
- Filling in the SPI slave device structure:
Connect the necessary callback functions.
Indicate clock speed (used to calculate toggle delays).
Chose a suitable name (helps debugging if you use several CAIF
SPI slave devices).
Assign your private data (can be used to map to your structure).
- Filling in the SPI slave platform device structure:
Add name of driver to connect to ("cfspi_sspi").
Assign the SPI slave device structure as platform data.
- Padding:
In order to optimize throughput, a number of SPI padding options are provided.
Padding can be enabled independently for uplink and downlink transfers.
Padding can be enabled for the head, the tail and for the total frame size.
The padding needs to be correctly configured on both sides of the link.
The padding can be changed via module parameters in cfspi_sspi.c or via
the sysfs directory of the cfspi_sspi driver (before device registration).
- CAIF SPI device template:
/*
* Copyright (C) ST-Ericsson AB 2010
* Author: Daniel Martensson / Daniel.Martensson@stericsson.com
* License terms: GNU General Public License (GPL), version 2.
*
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/wait.h>
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <net/caif/caif_spi.h>
MODULE_LICENSE("GPL");
struct sspi_struct {
struct cfspi_dev sdev;
struct cfspi_xfer *xfer;
};
static struct sspi_struct slave;
static struct platform_device slave_device;
static irqreturn_t sspi_irq(int irq, void *arg)
{
/* You only need to trigger on an edge to the active state of the
* SS signal. Once an edge is detected, the ss_cb() function should be
* called with the parameter assert set to true. It is OK
* (and even advised) to call the ss_cb() function in IRQ context in
* order not to add any delay. */
return IRQ_HANDLED;
}
static void sspi_complete(void *context)
{
/* Normally the DMA or the SPI framework will call you back
* in something similar to this. The only thing you need to
* do is to call the xfer_done_cb() function, providing the pointer
* to the CAIF SPI interface. It is OK to call this function
* from IRQ context. */
}
static int sspi_init_xfer(struct cfspi_xfer *xfer, struct cfspi_dev *dev)
{
/* Store transfer info. For a normal implementation you should
* set up your DMA here and make sure that you are ready to
* receive the data from the master SPI. */
struct sspi_struct *sspi = (struct sspi_struct *)dev->priv;
sspi->xfer = xfer;
return 0;
}
void sspi_sig_xfer(bool xfer, struct cfspi_dev *dev)
{
/* If xfer is true then you should assert the SPI_INT to indicate to
* the master that you are ready to receive the data from the master
* SPI. If xfer is false then you should de-assert SPI_INT to indicate
* that the transfer is done.
*/
struct sspi_struct *sspi = (struct sspi_struct *)dev->priv;
}
static void sspi_release(struct device *dev)
{
/*
* Here you should release your SPI device resources.
*/
}
static int __init sspi_init(void)
{
/* Here you should initialize your SPI device by providing the
* necessary functions, clock speed, name and private data. Once
* done, you can register your device with the
* platform_device_register() function. This function will return
* with the CAIF SPI interface initialized. This is probably also
* the place where you should set up your GPIOs, interrupts and SPI
* resources. */
int res = 0;
/* Initialize slave device. */
slave.sdev.init_xfer = sspi_init_xfer;
slave.sdev.sig_xfer = sspi_sig_xfer;
slave.sdev.clk_mhz = 13;
slave.sdev.priv = &slave;
slave.sdev.name = "spi_sspi";
slave_device.dev.release = sspi_release;
/* Initialize platform device. */
slave_device.name = "cfspi_sspi";
slave_device.dev.platform_data = &slave.sdev;
/* Register platform device. */
res = platform_device_register(&slave_device);
if (res) {
printk(KERN_WARNING "sspi_init: failed to register dev.\n");
return -ENODEV;
}
return res;
}
static void __exit sspi_exit(void)
{
platform_device_del(&slave_device);
}
module_init(sspi_init);
module_exit(sspi_exit);


@ -903,7 +903,7 @@ arp_ignore - INTEGER
arp_notify - BOOLEAN
Define mode for notification of address and device changes.
0 - (default): do nothing
1 - Generate gratuitous arp replies when device is brought up
1 - Generate gratuitous arp requests when device is brought up
or hardware address changes.
arp_accept - BOOLEAN


@ -493,6 +493,32 @@ The user can also use poll() to check if a buffer is available:
pfd.events = POLLOUT;
retval = poll(&pfd, 1, timeout);
-------------------------------------------------------------------------------
+ PACKET_TIMESTAMP
-------------------------------------------------------------------------------
The PACKET_TIMESTAMP setting determines the source of the timestamp in
the packet meta information. If your NIC is capable of timestamping
packets in hardware, you can request those hardware timestamps to be used.
Note: you may need to enable the generation of hardware timestamps with
SIOCSHWTSTAMP.
PACKET_TIMESTAMP accepts the same integer bit field as
SO_TIMESTAMPING. However, only the SOF_TIMESTAMPING_SYS_HARDWARE
and SOF_TIMESTAMPING_RAW_HARDWARE values are recognized by
PACKET_TIMESTAMP. SOF_TIMESTAMPING_SYS_HARDWARE takes precedence over
SOF_TIMESTAMPING_RAW_HARDWARE if both bits are set.
int req = 0;
req |= SOF_TIMESTAMPING_SYS_HARDWARE;
setsockopt(fd, SOL_PACKET, PACKET_TIMESTAMP, (void *) &req, sizeof(req))
If PACKET_TIMESTAMP is not set, a software timestamp generated inside
the networking stack is used (the behavior before this setting was added).
See include/linux/net_tstamp.h and Documentation/networking/timestamping
for more information on hardware timestamps.
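For reference, a slightly fuller version of the snippet above might look as
follows. This is only a sketch: it assumes a process allowed to open
PF_PACKET sockets (CAP_NET_RAW), does minimal error handling, and requests
both hardware timestamp flags (SYS_HARDWARE takes precedence if both are
set, as noted above).

#include <stdio.h>
#include <sys/socket.h>
#include <arpa/inet.h>		/* htons() */
#include <linux/if_ether.h>	/* ETH_P_ALL */
#include <linux/if_packet.h>	/* PACKET_TIMESTAMP */
#include <linux/net_tstamp.h>	/* SOF_TIMESTAMPING_* */

int main(void)
{
	int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	int req = 0;

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* ask for hardware timestamps on this packet socket */
	req |= SOF_TIMESTAMPING_SYS_HARDWARE;
	req |= SOF_TIMESTAMPING_RAW_HARDWARE;
	if (setsockopt(fd, SOL_PACKET, PACKET_TIMESTAMP,
		       (void *) &req, sizeof(req)) < 0)
		perror("setsockopt(PACKET_TIMESTAMP)");

	/* ... set up the RX ring and read the timestamps from the
	 * tpacket headers as described earlier in this document ... */
	return 0;
}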
--------------------------------------------------------------------------------
+ THANKS
--------------------------------------------------------------------------------


@ -151,6 +151,8 @@ Examples:
pgset stop aborts injection. Also, ^C aborts generator.
pgset "rate 300M" set rate to 300 Mb/s
pgset "ratep 1000000" set rate to 1Mpps
Example scripts
===============
@ -241,6 +243,9 @@ src6
flows
flowlen
rate
ratep
References:
ftp://robur.slu.se/pub/Linux/net-development/pktgen-testing/
ftp://robur.slu.se/pub/Linux/net-development/pktgen-testing/examples/


@ -313,11 +313,9 @@ S: Maintained
F: drivers/hwmon/adm1029.c
ADM8211 WIRELESS DRIVER
M: Michael Wu <flamingice@sourmilk.net>
L: linux-wireless@vger.kernel.org
W: http://linuxwireless.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mwu/mac80211-drivers.git
S: Maintained
S: Orphan
F: drivers/net/wireless/adm8211.*
ADT746X FAN DRIVER
@ -1369,7 +1367,7 @@ BROADCOM BNX2X 10 GIGABIT ETHERNET DRIVER
M: Eilon Greenstein <eilong@broadcom.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/bnx2x*
F: drivers/net/bnx2x/
BROADCOM TG3 GIGABIT ETHERNET DRIVER
M: Matt Carlson <mcarlson@broadcom.com>
@ -1771,6 +1769,13 @@ W: http://www.openfabrics.org
S: Supported
F: drivers/infiniband/hw/cxgb4/
CXGB4VF ETHERNET DRIVER (CXGB4VF)
M: Casey Leedom <leedom@chelsio.com>
L: netdev@vger.kernel.org
W: http://www.chelsio.com
S: Supported
F: drivers/net/cxgb4vf/
CYBERPRO FB DRIVER
M: Russell King <linux@arm.linux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@ -3686,7 +3691,7 @@ F: include/linux/mv643xx.h
MARVELL MWL8K WIRELESS DRIVER
M: Lennert Buytenhek <buytenh@wantstofly.org>
L: linux-wireless@vger.kernel.org
S: Maintained
S: Odd Fixes
F: drivers/net/wireless/mwl8k.c
MARVELL SOC MMC/SD/SDIO CONTROLLER DRIVER
@ -3915,17 +3920,19 @@ L: netem@lists.linux-foundation.org
S: Maintained
F: net/sched/sch_netem.c
NETERION (S2IO) 10GbE DRIVER (xframe/vxge)
M: Ramkrishna Vepa <ram.vepa@neterion.com>
M: Rastapur Santosh <santosh.rastapur@neterion.com>
M: Sivakumar Subramani <sivakumar.subramani@neterion.com>
M: Sreenivasa Honnur <sreenivasa.honnur@neterion.com>
NETERION 10GbE DRIVERS (s2io/vxge)
M: Ramkrishna Vepa <ramkrishna.vepa@exar.com>
M: Sivakumar Subramani <sivakumar.subramani@exar.com>
M: Sreenivasa Honnur <sreenivasa.honnur@exar.com>
M: Jon Mason <jon.mason@exar.com>
L: netdev@vger.kernel.org
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/Linux?Anonymous
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/X3100Linux?Anonymous
S: Supported
F: Documentation/networking/s2io.txt
F: drivers/net/s2io*
F: Documentation/networking/vxge.txt
F: drivers/net/vxge/
NETFILTER/IPTABLES/IPCHAINS
P: Rusty Russell
@ -4272,10 +4279,9 @@ F: include/scsi/osd_*
F: fs/exofs/
P54 WIRELESS DRIVER
M: Michael Wu <flamingice@sourmilk.net>
M: Christian Lamparter <chunkeey@googlemail.com>
L: linux-wireless@vger.kernel.org
W: http://prism54.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mwu/mac80211-drivers.git
W: http://wireless.kernel.org/en/users/Drivers/p54
S: Maintained
F: drivers/net/wireless/p54/
@ -4537,7 +4543,7 @@ PRISM54 WIRELESS DRIVER
M: "Luis R. Rodriguez" <mcgrof@gmail.com>
L: linux-wireless@vger.kernel.org
W: http://prism54.org
S: Maintained
S: Obsolete
F: drivers/net/wireless/prism54/
PROMISE DC4030 CACHING DISK CONTROLLER DRIVER
@ -4733,9 +4739,8 @@ S: Maintained
F: drivers/rapidio/
RAYLINK/WEBGEAR 802.11 WIRELESS LAN DRIVER
M: Corey Thomas <coreythomas@charter.net>
L: linux-wireless@vger.kernel.org
S: Maintained
S: Orphan
F: drivers/net/wireless/ray*
RCUTORTURE MODULE
@ -6068,10 +6073,9 @@ F: Documentation/video4linux/zc0301.txt
F: drivers/media/video/zc0301/
USB ZD1201 DRIVER
M: Jeroen Vreeken <pe1rxq@amsat.org>
L: linux-usb@vger.kernel.org
L: linux-wireless@vger.kernel.org
W: http://linux-lc100020.sourceforge.net
S: Maintained
S: Orphan
F: drivers/net/wireless/zd1201.*
USB ZR364XX DRIVER
@ -6259,14 +6263,6 @@ F: Documentation/watchdog/
F: drivers/watchdog/
F: include/linux/watchdog.h
WAVELAN NETWORK DRIVER & WIRELESS EXTENSIONS
M: Jean Tourrilhes <jt@hpl.hp.com>
L: linux-wireless@vger.kernel.org
W: http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/
S: Maintained
F: Documentation/networking/wavelan.txt
F: drivers/staging/wavelan/
WD7000 SCSI DRIVER
M: Miroslav Zagorac <zaga@fly.cc.fer.hr>
L: linux-scsi@vger.kernel.org


@ -101,10 +101,7 @@ extern struct dentry *of_debugfs_root;
* MicroBlaze doesn't handle unaligned accesses in hardware.
*
* Based on this we force the IP header alignment in network drivers.
* We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
* cacheline alignment of buffers.
*/
#define NET_IP_ALIGN 2
#define NET_SKB_PAD L1_CACHE_BYTES
#endif /* _ASM_MICROBLAZE_SYSTEM_H */


@ -515,11 +515,8 @@ __cmpxchg_local(volatile void *ptr, unsigned long old, unsigned long new,
* powers of 2 writes until it reaches sufficient alignment).
*
* Based on this we disable the IP header alignment in network drivers.
* We also modify NET_SKB_PAD to be a cacheline in size, thus maintaining
* cacheline alignment of buffers.
*/
#define NET_IP_ALIGN 0
#define NET_SKB_PAD L1_CACHE_BYTES
#define cmpxchg64(ptr, o, n) \
({ \


@ -85,7 +85,8 @@ static void appldata_get_net_sum_data(void *data)
rcu_read_lock();
for_each_netdev_rcu(&init_net, dev) {
const struct net_device_stats *stats = dev_get_stats(dev);
struct rtnl_link_stats64 temp;
const struct net_device_stats *stats = dev_get_stats(dev, &temp);
rx_packets += stats->rx_packets;
tx_packets += stats->tx_packets;


@ -25,11 +25,6 @@
#include "net_kern.h"
#include "net_user.h"
static inline void set_ether_mac(struct net_device *dev, unsigned char *addr)
{
memcpy(dev->dev_addr, addr, ETH_ALEN);
}
#define DRIVER_NAME "uml-netdev"
static DEFINE_SPINLOCK(opened_lock);
@ -266,7 +261,7 @@ static int uml_net_set_mac(struct net_device *dev, void *addr)
struct sockaddr *hwaddr = addr;
spin_lock_irq(&lp->lock);
set_ether_mac(dev, hwaddr->sa_data);
eth_mac_addr(dev, hwaddr->sa_data);
spin_unlock_irq(&lp->lock);
return 0;
@ -380,7 +375,6 @@ static const struct net_device_ops uml_netdev_ops = {
.ndo_tx_timeout = uml_net_tx_timeout,
.ndo_set_mac_address = uml_net_set_mac,
.ndo_change_mtu = uml_net_change_mtu,
.ndo_set_mac_address = eth_mac_addr,
.ndo_validate_addr = eth_validate_addr,
};
@ -478,7 +472,7 @@ static void eth_configure(int n, void *init, char *mac,
((*transport->user->init)(&lp->user, dev) != 0))
goto out_unregister;
set_ether_mac(dev, device->mac);
eth_mac_addr(dev, device->mac);
dev->mtu = transport->user->mtu;
dev->netdev_ops = &uml_netdev_ops;
dev->ethtool_ops = &uml_net_ethtool_ops;


@ -457,4 +457,11 @@ static __always_inline void rdtsc_barrier(void)
alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
}
/*
* We handle most unaligned accesses in hardware. On the other hand
* unaligned DMA can be quite expensive on some Nehalem processors.
*
* Based on this we disable the IP header alignment in network drivers.
*/
#define NET_IP_ALIGN 0
#endif /* _ASM_X86_SYSTEM_H */


@ -177,7 +177,7 @@ config ATM_ZATM_DEBUG
config ATM_NICSTAR
tristate "IDT 77201 (NICStAR) (ForeRunnerLE)"
depends on PCI && !64BIT && VIRT_TO_BUS
depends on PCI
help
The NICStAR chipset family is used in a large number of ATM NICs for
25 and for 155 Mbps, including IDT cards and the Fore ForeRunnerLE


@ -40,6 +40,42 @@ struct adummy_dev {
static LIST_HEAD(adummy_devs);
static ssize_t __set_signal(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t len)
{
struct atm_dev *atm_dev = container_of(dev, struct atm_dev, class_dev);
int signal;
if (sscanf(buf, "%d", &signal) == 1) {
if (signal < ATM_PHY_SIG_LOST || signal > ATM_PHY_SIG_FOUND)
signal = ATM_PHY_SIG_UNKNOWN;
atm_dev_signal_change(atm_dev, signal);
return 1;
}
return -EINVAL;
}
static ssize_t __show_signal(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct atm_dev *atm_dev = container_of(dev, struct atm_dev, class_dev);
return sprintf(buf, "%d\n", atm_dev->signal);
}
static DEVICE_ATTR(signal, 0644, __show_signal, __set_signal);
static struct attribute *adummy_attrs[] = {
&dev_attr_signal.attr,
NULL
};
static struct attribute_group adummy_group_attrs = {
.name = NULL, /* We want them in dev's root folder */
.attrs = adummy_attrs
};
static int __init
adummy_start(struct atm_dev *dev)
{
@ -128,6 +164,9 @@ static int __init adummy_init(void)
adummy_dev->atm_dev = atm_dev;
atm_dev->dev_data = adummy_dev;
if (sysfs_create_group(&atm_dev->class_dev.kobj, &adummy_group_attrs))
dev_err(&atm_dev->class_dev, "Could not register attrs for adummy\n");
if (adummy_start(atm_dev)) {
printk(KERN_ERR DEV_LABEL ": adummy_start() failed\n");
err = -ENODEV;


@ -2371,10 +2371,8 @@ MODULE_PARM_DESC(pci_lat, "PCI latency in bus cycles");
/********** module entry **********/
static struct pci_device_id amb_pci_tbl[] = {
{ PCI_VENDOR_ID_MADGE, PCI_DEVICE_ID_MADGE_AMBASSADOR, PCI_ANY_ID, PCI_ANY_ID,
0, 0, 0 },
{ PCI_VENDOR_ID_MADGE, PCI_DEVICE_ID_MADGE_AMBASSADOR_BAD, PCI_ANY_ID, PCI_ANY_ID,
0, 0, 0 },
{ PCI_VDEVICE(MADGE, PCI_DEVICE_ID_MADGE_AMBASSADOR), 0 },
{ PCI_VDEVICE(MADGE, PCI_DEVICE_ID_MADGE_AMBASSADOR_BAD), 0 },
{ 0, }
};


@ -2269,10 +2269,8 @@ out0:
static struct pci_device_id eni_pci_tbl[] = {
{ PCI_VENDOR_ID_EF, PCI_DEVICE_ID_EF_ATM_FPGA, PCI_ANY_ID, PCI_ANY_ID,
0, 0, 0 /* FPGA */ },
{ PCI_VENDOR_ID_EF, PCI_DEVICE_ID_EF_ATM_ASIC, PCI_ANY_ID, PCI_ANY_ID,
0, 0, 1 /* ASIC */ },
{ PCI_VDEVICE(EF, PCI_DEVICE_ID_EF_ATM_FPGA), 0 /* FPGA */ },
{ PCI_VDEVICE(EF, PCI_DEVICE_ID_EF_ATM_ASIC), 1 /* ASIC */ },
{ 0, }
};
MODULE_DEVICE_TABLE(pci,eni_pci_tbl);


@ -2027,10 +2027,8 @@ static void __devexit firestream_remove_one (struct pci_dev *pdev)
}
static struct pci_device_id firestream_pci_tbl[] = {
{ PCI_VENDOR_ID_FUJITSU_ME, PCI_DEVICE_ID_FUJITSU_FS50,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, FS_IS50},
{ PCI_VENDOR_ID_FUJITSU_ME, PCI_DEVICE_ID_FUJITSU_FS155,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, FS_IS155},
{ PCI_VDEVICE(FUJITSU_ME, PCI_DEVICE_ID_FUJITSU_FS50), FS_IS50},
{ PCI_VDEVICE(FUJITSU_ME, PCI_DEVICE_ID_FUJITSU_FS155), FS_IS155},
{ 0, }
};


@ -67,6 +67,7 @@
#include <linux/timer.h>
#include <linux/interrupt.h>
#include <linux/dma-mapping.h>
#include <linux/bitmap.h>
#include <linux/slab.h>
#include <asm/io.h>
#include <asm/byteorder.h>
@ -778,61 +779,39 @@ he_init_cs_block_rcm(struct he_dev *he_dev)
static int __devinit
he_init_group(struct he_dev *he_dev, int group)
{
struct he_buff *heb, *next;
dma_addr_t mapping;
int i;
/* small buffer pool */
he_dev->rbps_pool = pci_pool_create("rbps", he_dev->pci_dev,
CONFIG_RBPS_BUFSIZE, 8, 0);
if (he_dev->rbps_pool == NULL) {
hprintk("unable to create rbps pages\n");
he_writel(he_dev, 0x0, G0_RBPS_S + (group * 32));
he_writel(he_dev, 0x0, G0_RBPS_T + (group * 32));
he_writel(he_dev, 0x0, G0_RBPS_QI + (group * 32));
he_writel(he_dev, RBP_THRESH(0x1) | RBP_QSIZE(0x0),
G0_RBPS_BS + (group * 32));
/* bitmap table */
he_dev->rbpl_table = kmalloc(BITS_TO_LONGS(RBPL_TABLE_SIZE)
* sizeof(unsigned long), GFP_KERNEL);
if (!he_dev->rbpl_table) {
hprintk("unable to allocate rbpl bitmap table\n");
return -ENOMEM;
}
bitmap_zero(he_dev->rbpl_table, RBPL_TABLE_SIZE);
he_dev->rbps_base = pci_alloc_consistent(he_dev->pci_dev,
CONFIG_RBPS_SIZE * sizeof(struct he_rbp), &he_dev->rbps_phys);
if (he_dev->rbps_base == NULL) {
hprintk("failed to alloc rbps_base\n");
goto out_destroy_rbps_pool;
/* rbpl_virt 64-bit pointers */
he_dev->rbpl_virt = kmalloc(RBPL_TABLE_SIZE
* sizeof(struct he_buff *), GFP_KERNEL);
if (!he_dev->rbpl_virt) {
hprintk("unable to allocate rbpl virt table\n");
goto out_free_rbpl_table;
}
memset(he_dev->rbps_base, 0, CONFIG_RBPS_SIZE * sizeof(struct he_rbp));
he_dev->rbps_virt = kmalloc(CONFIG_RBPS_SIZE * sizeof(struct he_virt), GFP_KERNEL);
if (he_dev->rbps_virt == NULL) {
hprintk("failed to alloc rbps_virt\n");
goto out_free_rbps_base;
}
for (i = 0; i < CONFIG_RBPS_SIZE; ++i) {
dma_addr_t dma_handle;
void *cpuaddr;
cpuaddr = pci_pool_alloc(he_dev->rbps_pool, GFP_KERNEL|GFP_DMA, &dma_handle);
if (cpuaddr == NULL)
goto out_free_rbps_virt;
he_dev->rbps_virt[i].virt = cpuaddr;
he_dev->rbps_base[i].status = RBP_LOANED | RBP_SMALLBUF | (i << RBP_INDEX_OFF);
he_dev->rbps_base[i].phys = dma_handle;
}
he_dev->rbps_tail = &he_dev->rbps_base[CONFIG_RBPS_SIZE - 1];
he_writel(he_dev, he_dev->rbps_phys, G0_RBPS_S + (group * 32));
he_writel(he_dev, RBPS_MASK(he_dev->rbps_tail),
G0_RBPS_T + (group * 32));
he_writel(he_dev, CONFIG_RBPS_BUFSIZE/4,
G0_RBPS_BS + (group * 32));
he_writel(he_dev,
RBP_THRESH(CONFIG_RBPS_THRESH) |
RBP_QSIZE(CONFIG_RBPS_SIZE - 1) |
RBP_INT_ENB,
G0_RBPS_QI + (group * 32));
/* large buffer pool */
he_dev->rbpl_pool = pci_pool_create("rbpl", he_dev->pci_dev,
CONFIG_RBPL_BUFSIZE, 8, 0);
CONFIG_RBPL_BUFSIZE, 64, 0);
if (he_dev->rbpl_pool == NULL) {
hprintk("unable to create rbpl pool\n");
goto out_free_rbps_virt;
goto out_free_rbpl_virt;
}
he_dev->rbpl_base = pci_alloc_consistent(he_dev->pci_dev,
@ -842,30 +821,29 @@ he_init_group(struct he_dev *he_dev, int group)
goto out_destroy_rbpl_pool;
}
memset(he_dev->rbpl_base, 0, CONFIG_RBPL_SIZE * sizeof(struct he_rbp));
he_dev->rbpl_virt = kmalloc(CONFIG_RBPL_SIZE * sizeof(struct he_virt), GFP_KERNEL);
if (he_dev->rbpl_virt == NULL) {
hprintk("failed to alloc rbpl_virt\n");
goto out_free_rbpl_base;
}
INIT_LIST_HEAD(&he_dev->rbpl_outstanding);
for (i = 0; i < CONFIG_RBPL_SIZE; ++i) {
dma_addr_t dma_handle;
void *cpuaddr;
cpuaddr = pci_pool_alloc(he_dev->rbpl_pool, GFP_KERNEL|GFP_DMA, &dma_handle);
if (cpuaddr == NULL)
goto out_free_rbpl_virt;
heb = pci_pool_alloc(he_dev->rbpl_pool, GFP_KERNEL|GFP_DMA, &mapping);
if (!heb)
goto out_free_rbpl;
heb->mapping = mapping;
list_add(&heb->entry, &he_dev->rbpl_outstanding);
he_dev->rbpl_virt[i].virt = cpuaddr;
he_dev->rbpl_base[i].status = RBP_LOANED | (i << RBP_INDEX_OFF);
he_dev->rbpl_base[i].phys = dma_handle;
set_bit(i, he_dev->rbpl_table);
he_dev->rbpl_virt[i] = heb;
he_dev->rbpl_hint = i + 1;
he_dev->rbpl_base[i].idx = i << RBP_IDX_OFFSET;
he_dev->rbpl_base[i].phys = mapping + offsetof(struct he_buff, data);
}
he_dev->rbpl_tail = &he_dev->rbpl_base[CONFIG_RBPL_SIZE - 1];
he_writel(he_dev, he_dev->rbpl_phys, G0_RBPL_S + (group * 32));
he_writel(he_dev, RBPL_MASK(he_dev->rbpl_tail),
G0_RBPL_T + (group * 32));
he_writel(he_dev, CONFIG_RBPL_BUFSIZE/4,
he_writel(he_dev, (CONFIG_RBPL_BUFSIZE - sizeof(struct he_buff))/4,
G0_RBPL_BS + (group * 32));
he_writel(he_dev,
RBP_THRESH(CONFIG_RBPL_THRESH) |
@ -879,7 +857,7 @@ he_init_group(struct he_dev *he_dev, int group)
CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq), &he_dev->rbrq_phys);
if (he_dev->rbrq_base == NULL) {
hprintk("failed to allocate rbrq\n");
goto out_free_rbpl_virt;
goto out_free_rbpl;
}
memset(he_dev->rbrq_base, 0, CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq));
@ -920,33 +898,20 @@ out_free_rbpq_base:
pci_free_consistent(he_dev->pci_dev, CONFIG_RBRQ_SIZE *
sizeof(struct he_rbrq), he_dev->rbrq_base,
he_dev->rbrq_phys);
i = CONFIG_RBPL_SIZE;
out_free_rbpl_virt:
while (i--)
pci_pool_free(he_dev->rbpl_pool, he_dev->rbpl_virt[i].virt,
he_dev->rbpl_base[i].phys);
kfree(he_dev->rbpl_virt);
out_free_rbpl:
list_for_each_entry_safe(heb, next, &he_dev->rbpl_outstanding, entry)
pci_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
out_free_rbpl_base:
pci_free_consistent(he_dev->pci_dev, CONFIG_RBPL_SIZE *
sizeof(struct he_rbp), he_dev->rbpl_base,
he_dev->rbpl_phys);
out_destroy_rbpl_pool:
pci_pool_destroy(he_dev->rbpl_pool);
out_free_rbpl_virt:
kfree(he_dev->rbpl_virt);
out_free_rbpl_table:
kfree(he_dev->rbpl_table);
i = CONFIG_RBPS_SIZE;
out_free_rbps_virt:
while (i--)
pci_pool_free(he_dev->rbps_pool, he_dev->rbps_virt[i].virt,
he_dev->rbps_base[i].phys);
kfree(he_dev->rbps_virt);
out_free_rbps_base:
pci_free_consistent(he_dev->pci_dev, CONFIG_RBPS_SIZE *
sizeof(struct he_rbp), he_dev->rbps_base,
he_dev->rbps_phys);
out_destroy_rbps_pool:
pci_pool_destroy(he_dev->rbps_pool);
return -ENOMEM;
}
@ -1002,7 +967,8 @@ he_init_irq(struct he_dev *he_dev)
he_writel(he_dev, 0x0, GRP_54_MAP);
he_writel(he_dev, 0x0, GRP_76_MAP);
if (request_irq(he_dev->pci_dev->irq, he_irq_handler, IRQF_DISABLED|IRQF_SHARED, DEV_LABEL, he_dev)) {
if (request_irq(he_dev->pci_dev->irq,
he_irq_handler, IRQF_SHARED, DEV_LABEL, he_dev)) {
hprintk("irq %d already in use\n", he_dev->pci_dev->irq);
return -EINVAL;
}
@ -1576,9 +1542,10 @@ he_start(struct atm_dev *dev)
static void
he_stop(struct he_dev *he_dev)
{
u16 command;
u32 gen_cntl_0, reg;
struct he_buff *heb, *next;
struct pci_dev *pci_dev;
u32 gen_cntl_0, reg;
u16 command;
pci_dev = he_dev->pci_dev;
@ -1619,37 +1586,19 @@ he_stop(struct he_dev *he_dev)
he_dev->hsp, he_dev->hsp_phys);
if (he_dev->rbpl_base) {
int i;
list_for_each_entry_safe(heb, next, &he_dev->rbpl_outstanding, entry)
pci_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
for (i = 0; i < CONFIG_RBPL_SIZE; ++i) {
void *cpuaddr = he_dev->rbpl_virt[i].virt;
dma_addr_t dma_handle = he_dev->rbpl_base[i].phys;
pci_pool_free(he_dev->rbpl_pool, cpuaddr, dma_handle);
}
pci_free_consistent(he_dev->pci_dev, CONFIG_RBPL_SIZE
* sizeof(struct he_rbp), he_dev->rbpl_base, he_dev->rbpl_phys);
}
kfree(he_dev->rbpl_virt);
kfree(he_dev->rbpl_table);
if (he_dev->rbpl_pool)
pci_pool_destroy(he_dev->rbpl_pool);
if (he_dev->rbps_base) {
int i;
for (i = 0; i < CONFIG_RBPS_SIZE; ++i) {
void *cpuaddr = he_dev->rbps_virt[i].virt;
dma_addr_t dma_handle = he_dev->rbps_base[i].phys;
pci_pool_free(he_dev->rbps_pool, cpuaddr, dma_handle);
}
pci_free_consistent(he_dev->pci_dev, CONFIG_RBPS_SIZE
* sizeof(struct he_rbp), he_dev->rbps_base, he_dev->rbps_phys);
}
if (he_dev->rbps_pool)
pci_pool_destroy(he_dev->rbps_pool);
if (he_dev->rbrq_base)
pci_free_consistent(he_dev->pci_dev, CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq),
he_dev->rbrq_base, he_dev->rbrq_phys);
@ -1679,13 +1628,13 @@ static struct he_tpd *
__alloc_tpd(struct he_dev *he_dev)
{
struct he_tpd *tpd;
dma_addr_t dma_handle;
dma_addr_t mapping;
tpd = pci_pool_alloc(he_dev->tpd_pool, GFP_ATOMIC|GFP_DMA, &dma_handle);
tpd = pci_pool_alloc(he_dev->tpd_pool, GFP_ATOMIC|GFP_DMA, &mapping);
if (tpd == NULL)
return NULL;
tpd->status = TPD_ADDR(dma_handle);
tpd->status = TPD_ADDR(mapping);
tpd->reserved = 0;
tpd->iovec[0].addr = 0; tpd->iovec[0].len = 0;
tpd->iovec[1].addr = 0; tpd->iovec[1].len = 0;
@ -1714,13 +1663,12 @@ he_service_rbrq(struct he_dev *he_dev, int group)
struct he_rbrq *rbrq_tail = (struct he_rbrq *)
((unsigned long)he_dev->rbrq_base |
he_dev->hsp->group[group].rbrq_tail);
struct he_rbp *rbp = NULL;
unsigned cid, lastcid = -1;
unsigned buf_len = 0;
struct sk_buff *skb;
struct atm_vcc *vcc = NULL;
struct he_vcc *he_vcc;
struct he_iovec *iov;
struct he_buff *heb, *next;
int i;
int pdus_assembled = 0;
int updated = 0;
@ -1740,44 +1688,35 @@ he_service_rbrq(struct he_dev *he_dev, int group)
RBRQ_CON_CLOSED(he_dev->rbrq_head) ? " CON_CLOSED" : "",
RBRQ_HBUF_ERR(he_dev->rbrq_head) ? " HBUF_ERR" : "");
if (RBRQ_ADDR(he_dev->rbrq_head) & RBP_SMALLBUF)
rbp = &he_dev->rbps_base[RBP_INDEX(RBRQ_ADDR(he_dev->rbrq_head))];
else
rbp = &he_dev->rbpl_base[RBP_INDEX(RBRQ_ADDR(he_dev->rbrq_head))];
buf_len = RBRQ_BUFLEN(he_dev->rbrq_head) * 4;
cid = RBRQ_CID(he_dev->rbrq_head);
i = RBRQ_ADDR(he_dev->rbrq_head) >> RBP_IDX_OFFSET;
heb = he_dev->rbpl_virt[i];
cid = RBRQ_CID(he_dev->rbrq_head);
if (cid != lastcid)
vcc = __find_vcc(he_dev, cid);
lastcid = cid;
if (vcc == NULL) {
hprintk("vcc == NULL (cid 0x%x)\n", cid);
if (!RBRQ_HBUF_ERR(he_dev->rbrq_head))
rbp->status &= ~RBP_LOANED;
if (vcc == NULL || (he_vcc = HE_VCC(vcc)) == NULL) {
hprintk("vcc/he_vcc == NULL (cid 0x%x)\n", cid);
if (!RBRQ_HBUF_ERR(he_dev->rbrq_head)) {
clear_bit(i, he_dev->rbpl_table);
list_del(&heb->entry);
pci_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
}
goto next_rbrq_entry;
}
he_vcc = HE_VCC(vcc);
if (he_vcc == NULL) {
hprintk("he_vcc == NULL (cid 0x%x)\n", cid);
if (!RBRQ_HBUF_ERR(he_dev->rbrq_head))
rbp->status &= ~RBP_LOANED;
goto next_rbrq_entry;
}
if (RBRQ_HBUF_ERR(he_dev->rbrq_head)) {
hprintk("HBUF_ERR! (cid 0x%x)\n", cid);
atomic_inc(&vcc->stats->rx_drop);
goto return_host_buffers;
}
he_vcc->iov_tail->iov_base = RBRQ_ADDR(he_dev->rbrq_head);
he_vcc->iov_tail->iov_len = buf_len;
he_vcc->pdu_len += buf_len;
++he_vcc->iov_tail;
heb->len = RBRQ_BUFLEN(he_dev->rbrq_head) * 4;
clear_bit(i, he_dev->rbpl_table);
list_move_tail(&heb->entry, &he_vcc->buffers);
he_vcc->pdu_len += heb->len;
if (RBRQ_CON_CLOSED(he_dev->rbrq_head)) {
lastcid = -1;
@ -1786,12 +1725,6 @@ he_service_rbrq(struct he_dev *he_dev, int group)
goto return_host_buffers;
}
#ifdef notdef
if ((he_vcc->iov_tail - he_vcc->iov_head) > HE_MAXIOV) {
hprintk("iovec full! cid 0x%x\n", cid);
goto return_host_buffers;
}
#endif
if (!RBRQ_END_PDU(he_dev->rbrq_head))
goto next_rbrq_entry;
@ -1819,15 +1752,8 @@ he_service_rbrq(struct he_dev *he_dev, int group)
__net_timestamp(skb);
for (iov = he_vcc->iov_head;
iov < he_vcc->iov_tail; ++iov) {
if (iov->iov_base & RBP_SMALLBUF)
memcpy(skb_put(skb, iov->iov_len),
he_dev->rbps_virt[RBP_INDEX(iov->iov_base)].virt, iov->iov_len);
else
memcpy(skb_put(skb, iov->iov_len),
he_dev->rbpl_virt[RBP_INDEX(iov->iov_base)].virt, iov->iov_len);
}
list_for_each_entry(heb, &he_vcc->buffers, entry)
memcpy(skb_put(skb, heb->len), &heb->data, heb->len);
switch (vcc->qos.aal) {
case ATM_AAL0:
@ -1867,17 +1793,9 @@ he_service_rbrq(struct he_dev *he_dev, int group)
return_host_buffers:
++pdus_assembled;
for (iov = he_vcc->iov_head;
iov < he_vcc->iov_tail; ++iov) {
if (iov->iov_base & RBP_SMALLBUF)
rbp = &he_dev->rbps_base[RBP_INDEX(iov->iov_base)];
else
rbp = &he_dev->rbpl_base[RBP_INDEX(iov->iov_base)];
rbp->status &= ~RBP_LOANED;
}
he_vcc->iov_tail = he_vcc->iov_head;
list_for_each_entry_safe(heb, next, &he_vcc->buffers, entry)
pci_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
INIT_LIST_HEAD(&he_vcc->buffers);
he_vcc->pdu_len = 0;
next_rbrq_entry:
@ -1978,27 +1896,46 @@ next_tbrq_entry:
}
}
static void
he_service_rbpl(struct he_dev *he_dev, int group)
{
struct he_rbp *newtail;
struct he_rbp *new_tail;
struct he_rbp *rbpl_head;
struct he_buff *heb;
dma_addr_t mapping;
int i;
int moved = 0;
rbpl_head = (struct he_rbp *) ((unsigned long)he_dev->rbpl_base |
RBPL_MASK(he_readl(he_dev, G0_RBPL_S)));
for (;;) {
newtail = (struct he_rbp *) ((unsigned long)he_dev->rbpl_base |
new_tail = (struct he_rbp *) ((unsigned long)he_dev->rbpl_base |
RBPL_MASK(he_dev->rbpl_tail+1));
/* table 3.42 -- rbpl_tail should never be set to rbpl_head */
if ((newtail == rbpl_head) || (newtail->status & RBP_LOANED))
if (new_tail == rbpl_head)
break;
newtail->status |= RBP_LOANED;
he_dev->rbpl_tail = newtail;
i = find_next_zero_bit(he_dev->rbpl_table, RBPL_TABLE_SIZE, he_dev->rbpl_hint);
if (i > (RBPL_TABLE_SIZE - 1)) {
i = find_first_zero_bit(he_dev->rbpl_table, RBPL_TABLE_SIZE);
if (i > (RBPL_TABLE_SIZE - 1))
break;
}
he_dev->rbpl_hint = i + 1;
heb = pci_pool_alloc(he_dev->rbpl_pool, GFP_ATOMIC|GFP_DMA, &mapping);
if (!heb)
break;
heb->mapping = mapping;
list_add(&heb->entry, &he_dev->rbpl_outstanding);
he_dev->rbpl_virt[i] = heb;
set_bit(i, he_dev->rbpl_table);
new_tail->idx = i << RBP_IDX_OFFSET;
new_tail->phys = mapping + offsetof(struct he_buff, data);
he_dev->rbpl_tail = new_tail;
++moved;
}
@ -2006,33 +1943,6 @@ he_service_rbpl(struct he_dev *he_dev, int group)
he_writel(he_dev, RBPL_MASK(he_dev->rbpl_tail), G0_RBPL_T);
}
static void
he_service_rbps(struct he_dev *he_dev, int group)
{
struct he_rbp *newtail;
struct he_rbp *rbps_head;
int moved = 0;
rbps_head = (struct he_rbp *) ((unsigned long)he_dev->rbps_base |
RBPS_MASK(he_readl(he_dev, G0_RBPS_S)));
for (;;) {
newtail = (struct he_rbp *) ((unsigned long)he_dev->rbps_base |
RBPS_MASK(he_dev->rbps_tail+1));
/* table 3.42 -- rbps_tail should never be set to rbps_head */
if ((newtail == rbps_head) || (newtail->status & RBP_LOANED))
break;
newtail->status |= RBP_LOANED;
he_dev->rbps_tail = newtail;
++moved;
}
if (moved)
he_writel(he_dev, RBPS_MASK(he_dev->rbps_tail), G0_RBPS_T);
}
static void
he_tasklet(unsigned long data)
{
@ -2055,10 +1965,8 @@ he_tasklet(unsigned long data)
HPRINTK("rbrq%d threshold\n", group);
/* fall through */
case ITYPE_RBRQ_TIMER:
if (he_service_rbrq(he_dev, group)) {
if (he_service_rbrq(he_dev, group))
he_service_rbpl(he_dev, group);
he_service_rbps(he_dev, group);
}
break;
case ITYPE_TBRQ_THRESH:
HPRINTK("tbrq%d threshold\n", group);
@ -2070,7 +1978,7 @@ he_tasklet(unsigned long data)
he_service_rbpl(he_dev, group);
break;
case ITYPE_RBPS_THRESH:
he_service_rbps(he_dev, group);
/* shouldn't happen unless small buffers enabled */
break;
case ITYPE_PHY:
HPRINTK("phy interrupt\n");
@ -2098,7 +2006,6 @@ he_tasklet(unsigned long data)
he_service_rbrq(he_dev, 0);
he_service_rbpl(he_dev, 0);
he_service_rbps(he_dev, 0);
he_service_tbrq(he_dev, 0);
break;
default:
@ -2252,7 +2159,7 @@ he_open(struct atm_vcc *vcc)
return -ENOMEM;
}
he_vcc->iov_tail = he_vcc->iov_head;
INIT_LIST_HEAD(&he_vcc->buffers);
he_vcc->pdu_len = 0;
he_vcc->rc_index = -1;
@ -2406,8 +2313,8 @@ he_open(struct atm_vcc *vcc)
goto open_failed;
}
rsr1 = RSR1_GROUP(0);
rsr4 = RSR4_GROUP(0);
rsr1 = RSR1_GROUP(0) | RSR1_RBPL_ONLY;
rsr4 = RSR4_GROUP(0) | RSR4_RBPL_ONLY;
rsr0 = vcc->qos.rxtp.traffic_class == ATM_UBR ?
(RSR0_EPD_ENABLE|RSR0_PPD_ENABLE) : 0;
@ -2963,8 +2870,7 @@ module_param(sdh, bool, 0);
MODULE_PARM_DESC(sdh, "use SDH framing (default 0)");
static struct pci_device_id he_pci_tbl[] = {
{ PCI_VENDOR_ID_FORE, PCI_DEVICE_ID_FORE_HE, PCI_ANY_ID, PCI_ANY_ID,
0, 0, 0 },
{ PCI_VDEVICE(FORE, PCI_DEVICE_ID_FORE_HE), 0 },
{ 0, }
};


@ -67,11 +67,6 @@
#define CONFIG_RBPL_BUFSIZE 4096
#define RBPL_MASK(x) (((unsigned long)(x))&((CONFIG_RBPL_SIZE<<3)-1))
#define CONFIG_RBPS_SIZE 1024
#define CONFIG_RBPS_THRESH 64
#define CONFIG_RBPS_BUFSIZE 128
#define RBPS_MASK(x) (((unsigned long)(x))&((CONFIG_RBPS_SIZE<<3)-1))
/* 5.1.3 initialize connection memory */
#define CONFIG_RSRA 0x00000
@ -203,36 +198,37 @@ struct he_hsp {
} group[HE_NUM_GROUPS];
};
/* figure 2.9 receive buffer pools */
/*
* figure 2.9 receive buffer pools
*
* since a virtual address might be more than 32 bits, we store an index
* in the virt member of he_rbp. NOTE: the lower six bits in the rbrq
* addr member are used for buffer status further limiting us to 26 bits.
*/
struct he_rbp {
volatile u32 phys;
volatile u32 status;
volatile u32 idx; /* virt */
};
/* NOTE: it is suggested that virt be the virtual address of the host
buffer. on a 64-bit machine, this would not work. Instead, we
store the real virtual address in another list, and store an index
(and buffer status) in the virt member.
*/
#define RBP_IDX_OFFSET 6
#define RBP_INDEX_OFF 6
#define RBP_INDEX(x) (((long)(x) >> RBP_INDEX_OFF) & 0xffff)
#define RBP_LOANED 0x80000000
#define RBP_SMALLBUF 0x40000000
/*
* the he dma engine will try to hold an extra 16 buffers in its local
* caches. and add a couple buffers for safety.
*/
struct he_virt {
void *virt;
#define RBPL_TABLE_SIZE (CONFIG_RBPL_SIZE + 16 + 2)
struct he_buff {
struct list_head entry;
dma_addr_t mapping;
unsigned long len;
u8 data[];
};
#define RBPL_ALIGNMENT CONFIG_RBPL_SIZE
#define RBPS_ALIGNMENT CONFIG_RBPS_SIZE
#ifdef notyet
struct he_group {
u32 rpbs_size, rpbs_qsize;
struct he_rbp rbps_ba;
u32 rpbl_size, rpbl_qsize;
struct he_rpb_entry *rbpl_ba;
};
@ -297,18 +293,15 @@ struct he_dev {
struct he_rbrq *rbrq_base, *rbrq_head;
int rbrq_peak;
struct he_buff **rbpl_virt;
unsigned long *rbpl_table;
unsigned long rbpl_hint;
struct pci_pool *rbpl_pool;
dma_addr_t rbpl_phys;
struct he_rbp *rbpl_base, *rbpl_tail;
struct he_virt *rbpl_virt;
struct list_head rbpl_outstanding;
int rbpl_peak;
struct pci_pool *rbps_pool;
dma_addr_t rbps_phys;
struct he_rbp *rbps_base, *rbps_tail;
struct he_virt *rbps_virt;
int rbps_peak;
dma_addr_t tbrq_phys;
struct he_tbrq *tbrq_base, *tbrq_head;
int tbrq_peak;
@ -321,20 +314,12 @@ struct he_dev {
struct he_dev *next;
};
struct he_iovec
{
u32 iov_base;
u32 iov_len;
};
#define HE_MAXIOV 20
struct he_vcc
{
struct he_iovec iov_head[HE_MAXIOV];
struct he_iovec *iov_tail;
struct list_head buffers;
int pdu_len;
int rc_index;
wait_queue_head_t rx_waitq;


@ -126,7 +126,7 @@ static void idt77105_restart_timer_func(unsigned long dummy)
istat = GET(ISTAT); /* side effect: clears all interrupt status bits */
if (istat & IDT77105_ISTAT_GOODSIG) {
/* Found signal again */
dev->signal = ATM_PHY_SIG_FOUND;
atm_dev_signal_change(dev, ATM_PHY_SIG_FOUND);
printk(KERN_NOTICE "%s(itf %d): signal detected again\n",
dev->type,dev->number);
/* flush the receive FIFO */
@ -222,7 +222,7 @@ static void idt77105_int(struct atm_dev *dev)
/* Rx Signal Condition Change - line went up or down */
if (istat & IDT77105_ISTAT_GOODSIG) { /* signal detected again */
/* This should not happen (restart timer does it) but JIC */
dev->signal = ATM_PHY_SIG_FOUND;
atm_dev_signal_change(dev, ATM_PHY_SIG_FOUND);
} else { /* signal lost */
/*
* Disable interrupts and stop all transmission and
@ -235,7 +235,7 @@ static void idt77105_int(struct atm_dev *dev)
IDT77105_MCR_DRIC|
IDT77105_MCR_HALTTX
) & ~IDT77105_MCR_EIP, MCR);
dev->signal = ATM_PHY_SIG_LOST;
atm_dev_signal_change(dev, ATM_PHY_SIG_LOST);
printk(KERN_NOTICE "%s(itf %d): signal lost\n",
dev->type,dev->number);
}
@ -272,8 +272,9 @@ static int idt77105_start(struct atm_dev *dev)
memset(&PRIV(dev)->stats,0,sizeof(struct idt77105_stats));
/* initialise dev->signal from Good Signal Bit */
dev->signal = GET(ISTAT) & IDT77105_ISTAT_GOODSIG ? ATM_PHY_SIG_FOUND :
ATM_PHY_SIG_LOST;
atm_dev_signal_change(dev,
GET(ISTAT) & IDT77105_ISTAT_GOODSIG ?
ATM_PHY_SIG_FOUND : ATM_PHY_SIG_LOST);
if (dev->signal == ATM_PHY_SIG_LOST)
printk(KERN_WARNING "%s(itf %d): no signal\n",dev->type,
dev->number);


@ -3364,7 +3364,7 @@ init_card(struct atm_dev *dev)
writel(SAR_STAT_TMROF, SAR_REG_STAT);
}
IPRINTK("%s: Request IRQ ... ", card->name);
if (request_irq(pcidev->irq, idt77252_interrupt, IRQF_DISABLED|IRQF_SHARED,
if (request_irq(pcidev->irq, idt77252_interrupt, IRQF_SHARED,
card->name, card) != 0) {
printk("%s: can't allocate IRQ.\n", card->name);
deinit_card(card);
@ -3779,8 +3779,7 @@ err_out_disable_pdev:
static struct pci_device_id idt77252_pci_tbl[] =
{
{ PCI_VENDOR_ID_IDT, PCI_DEVICE_ID_IDT_IDT77252,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
{ PCI_VDEVICE(IDT, PCI_DEVICE_ID_IDT_IDT77252), 0 },
{ 0, }
};

(File diff suppressed because it is too large.)


@ -1,5 +1,4 @@
/******************************************************************************
*
/*
* nicstar.h
*
* Header file for the nicstar device driver.
@ -8,29 +7,26 @@
* PowerPC support by Jay Talbott (jay_talbott@mcg.mot.com) April 1999
*
* (C) INESC 1998
*
******************************************************************************/
*/
#ifndef _LINUX_NICSTAR_H_
#define _LINUX_NICSTAR_H_
/* Includes *******************************************************************/
/* Includes */
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/idr.h>
#include <linux/uio.h>
#include <linux/skbuff.h>
#include <linux/atmdev.h>
#include <linux/atm_nicstar.h>
/* Options ********************************************************************/
/* Options */
#define NS_MAX_CARDS 4 /* Maximum number of NICStAR based cards
controlled by the device driver. Must
be <= 5 */
be <= 5 */
#undef RCQ_SUPPORT /* Do not define this for now */
@ -43,7 +39,7 @@
#define NS_VPIBITS 2 /* 0, 1, 2, or 8 */
#define NS_MAX_RCTSIZE 4096 /* Number of entries. 4096 or 16384.
Define 4096 only if (all) your card(s)
Define 4096 only if (all) your card(s)
have 32K x 32bit SRAM, in which case
setting this to 16384 will just waste a
lot of memory.
@ -51,33 +47,32 @@
128K x 32bit SRAM will limit the maximum
VCI. */
/*#define NS_PCI_LATENCY 64*/ /* Must be a multiple of 32 */
/*#define NS_PCI_LATENCY 64*//* Must be a multiple of 32 */
/* Number of buffers initially allocated */
#define NUM_SB 32 /* Must be even */
#define NUM_LB 24 /* Must be even */
#define NUM_HB 8 /* Pre-allocated huge buffers */
#define NUM_IOVB 48 /* Iovec buffers */
#define NUM_SB 32 /* Must be even */
#define NUM_LB 24 /* Must be even */
#define NUM_HB 8 /* Pre-allocated huge buffers */
#define NUM_IOVB 48 /* Iovec buffers */
/* Lower level for count of buffers */
#define MIN_SB 8 /* Must be even */
#define MIN_LB 8 /* Must be even */
#define MIN_SB 8 /* Must be even */
#define MIN_LB 8 /* Must be even */
#define MIN_HB 6
#define MIN_IOVB 8
/* Upper level for count of buffers */
#define MAX_SB 64 /* Must be even, <= 508 */
#define MAX_LB 48 /* Must be even, <= 508 */
#define MAX_SB 64 /* Must be even, <= 508 */
#define MAX_LB 48 /* Must be even, <= 508 */
#define MAX_HB 10
#define MAX_IOVB 80
/* These are the absolute maximum allowed for the ioctl() */
#define TOP_SB 256 /* Must be even, <= 508 */
#define TOP_LB 128 /* Must be even, <= 508 */
#define TOP_SB 256 /* Must be even, <= 508 */
#define TOP_LB 128 /* Must be even, <= 508 */
#define TOP_HB 64
#define TOP_IOVB 256
#define MAX_TBD_PER_VC 1 /* Number of TBDs before a TSR */
#define MAX_TBD_PER_SCQ 10 /* Only meaningful for variable rate SCQs */
@ -89,15 +84,12 @@
#define PCR_TOLERANCE (1.0001)
/* ESI stuff ******************************************************************/
/* ESI stuff */
#define NICSTAR_EPROM_MAC_ADDR_OFFSET 0x6C
#define NICSTAR_EPROM_MAC_ADDR_OFFSET_ALT 0xF6
/* #defines *******************************************************************/
/* #defines */
#define NS_IOREMAP_SIZE 4096
@ -123,22 +115,19 @@
#define NS_SMSKBSIZE (NS_SMBUFSIZE + NS_AAL0_HEADER)
#define NS_LGSKBSIZE (NS_SMBUFSIZE + NS_LGBUFSIZE)
/* NICStAR structures located in host memory */
/* NICStAR structures located in host memory **********************************/
/* RSQ - Receive Status Queue
/*
* RSQ - Receive Status Queue
*
* Written by the NICStAR, read by the device driver.
*/
typedef struct ns_rsqe
{
u32 word_1;
u32 buffer_handle;
u32 final_aal5_crc32;
u32 word_4;
typedef struct ns_rsqe {
u32 word_1;
u32 buffer_handle;
u32 final_aal5_crc32;
u32 word_4;
} ns_rsqe;
#define ns_rsqe_vpi(ns_rsqep) \
@ -175,30 +164,27 @@ typedef struct ns_rsqe
#define ns_rsqe_cellcount(ns_rsqep) \
(le32_to_cpu((ns_rsqep)->word_4) & 0x000001FF)
#define ns_rsqe_init(ns_rsqep) \
((ns_rsqep)->word_4 = cpu_to_le32(0x00000000))
((ns_rsqep)->word_4 = cpu_to_le32(0x00000000))
#define NS_RSQ_NUM_ENTRIES (NS_RSQSIZE / 16)
#define NS_RSQ_ALIGNMENT NS_RSQSIZE
/* RCQ - Raw Cell Queue
/*
* RCQ - Raw Cell Queue
*
* Written by the NICStAR, read by the device driver.
*/
typedef struct cell_payload
{
u32 word[12];
typedef struct cell_payload {
u32 word[12];
} cell_payload;
typedef struct ns_rcqe
{
u32 word_1;
u32 word_2;
u32 word_3;
u32 word_4;
cell_payload payload;
typedef struct ns_rcqe {
u32 word_1;
u32 word_2;
u32 word_3;
u32 word_4;
cell_payload payload;
} ns_rcqe;
#define NS_RCQE_SIZE 64 /* bytes */
@ -210,28 +196,25 @@ typedef struct ns_rcqe
#define ns_rcqe_nextbufhandle(ns_rcqep) \
(le32_to_cpu((ns_rcqep)->word_2))
/* SCQ - Segmentation Channel Queue
/*
* SCQ - Segmentation Channel Queue
*
* Written by the device driver, read by the NICStAR.
*/
typedef struct ns_scqe
{
u32 word_1;
u32 word_2;
u32 word_3;
u32 word_4;
typedef struct ns_scqe {
u32 word_1;
u32 word_2;
u32 word_3;
u32 word_4;
} ns_scqe;
/* NOTE: SCQ entries can be either a TBD (Transmit Buffer Descriptors)
or TSR (Transmit Status Requests) */
or TSR (Transmit Status Requests) */
#define NS_SCQE_TYPE_TBD 0x00000000
#define NS_SCQE_TYPE_TSR 0x80000000
#define NS_TBD_EOPDU 0x40000000
#define NS_TBD_AAL0 0x00000000
#define NS_TBD_AAL34 0x04000000
@ -253,10 +236,9 @@ typedef struct ns_scqe
#define ns_tbd_mkword_4(gfc, vpi, vci, pt, clp) \
(cpu_to_le32((gfc) << 28 | (vpi) << 20 | (vci) << 4 | (pt) << 1 | (clp)))
#define NS_TSR_INTENABLE 0x20000000
#define NS_TSR_SCDISVBR 0xFFFF /* Use as scdi for VBR SCD */
#define NS_TSR_SCDISVBR 0xFFFF /* Use as scdi for VBR SCD */
#define ns_tsr_mkword_1(flags) \
(cpu_to_le32(NS_SCQE_TYPE_TSR | (flags)))
@ -273,22 +255,20 @@ typedef struct ns_scqe
#define NS_SCQE_SIZE 16
/* TSQ - Transmit Status Queue
/*
* TSQ - Transmit Status Queue
*
* Written by the NICStAR, read by the device driver.
*/
typedef struct ns_tsi
{
u32 word_1;
u32 word_2;
typedef struct ns_tsi {
u32 word_1;
u32 word_2;
} ns_tsi;
/* NOTE: The first word can be a status word copied from the TSR which
originated the TSI, or a timer overflow indicator. In this last
case, the value of the first word is all zeroes. */
originated the TSI, or a timer overflow indicator. In this last
case, the value of the first word is all zeroes. */
#define NS_TSI_EMPTY 0x80000000
#define NS_TSI_TIMESTAMP_MASK 0x00FFFFFF
@ -301,12 +281,10 @@ typedef struct ns_tsi
#define ns_tsi_init(ns_tsip) \
((ns_tsip)->word_2 = cpu_to_le32(NS_TSI_EMPTY))
#define NS_TSQSIZE 8192
#define NS_TSQ_NUM_ENTRIES 1024
#define NS_TSQ_ALIGNMENT 8192
#define NS_TSI_SCDISVBR NS_TSR_SCDISVBR
#define ns_tsi_tmrof(ns_tsip) \
@ -316,26 +294,22 @@ typedef struct ns_tsi
#define ns_tsi_getscqpos(ns_tsip) \
(le32_to_cpu((ns_tsip)->word_1) & 0x00007FFF)
/* NICStAR structures located in local SRAM */
/* NICStAR structures located in local SRAM ***********************************/
/* RCT - Receive Connection Table
/*
* RCT - Receive Connection Table
*
* Written by both the NICStAR and the device driver.
*/
typedef struct ns_rcte
{
u32 word_1;
u32 buffer_handle;
u32 dma_address;
u32 aal5_crc32;
typedef struct ns_rcte {
u32 word_1;
u32 buffer_handle;
u32 dma_address;
u32 aal5_crc32;
} ns_rcte;
#define NS_RCTE_BSFB 0x00200000 /* Rev. D only */
#define NS_RCTE_BSFB 0x00200000 /* Rev. D only */
#define NS_RCTE_NZGFC 0x00100000
#define NS_RCTE_CONNECTOPEN 0x00080000
#define NS_RCTE_AALMASK 0x00070000
@ -358,25 +332,21 @@ typedef struct ns_rcte
#define NS_RCT_ENTRY_SIZE 4 /* Number of dwords */
/* NOTE: We could make macros to contruct the first word of the RCTE,
but that doesn't seem to make much sense... */
but that doesn't seem to make much sense... */
/* FBD - Free Buffer Descriptor
/*
* FBD - Free Buffer Descriptor
*
* Written by the device driver using via the command register.
*/
typedef struct ns_fbd
{
u32 buffer_handle;
u32 dma_address;
typedef struct ns_fbd {
u32 buffer_handle;
u32 dma_address;
} ns_fbd;
/* TST - Transmit Schedule Table
/*
* TST - Transmit Schedule Table
*
* Written by the device driver.
*/
@ -385,40 +355,38 @@ typedef u32 ns_tste;
#define NS_TST_OPCODE_MASK 0x60000000
#define NS_TST_OPCODE_NULL 0x00000000 /* Insert null cell */
#define NS_TST_OPCODE_FIXED 0x20000000 /* Cell from a fixed rate channel */
#define NS_TST_OPCODE_NULL 0x00000000 /* Insert null cell */
#define NS_TST_OPCODE_FIXED 0x20000000 /* Cell from a fixed rate channel */
#define NS_TST_OPCODE_VARIABLE 0x40000000
#define NS_TST_OPCODE_END 0x60000000 /* Jump */
#define NS_TST_OPCODE_END 0x60000000 /* Jump */
#define ns_tste_make(opcode, sramad) (opcode | sramad)
/* NOTE:
- When the opcode is FIXED, sramad specifies the SRAM address of the
SCD for that fixed rate channel.
SCD for that fixed rate channel.
- When the opcode is END, sramad specifies the SRAM address of the
location of the next TST entry to read.
location of the next TST entry to read.
*/
/* SCD - Segmentation Channel Descriptor
/*
* SCD - Segmentation Channel Descriptor
*
* Written by both the device driver and the NICStAR
*/
typedef struct ns_scd
{
u32 word_1;
u32 word_2;
u32 partial_aal5_crc;
u32 reserved;
ns_scqe cache_a;
ns_scqe cache_b;
typedef struct ns_scd {
u32 word_1;
u32 word_2;
u32 partial_aal5_crc;
u32 reserved;
ns_scqe cache_a;
ns_scqe cache_b;
} ns_scd;
#define NS_SCD_BASE_MASK_VAR 0xFFFFE000 /* Variable rate */
#define NS_SCD_BASE_MASK_FIX 0xFFFFFC00 /* Fixed rate */
#define NS_SCD_BASE_MASK_VAR 0xFFFFE000 /* Variable rate */
#define NS_SCD_BASE_MASK_FIX 0xFFFFFC00 /* Fixed rate */
#define NS_SCD_TAIL_MASK_VAR 0x00001FF0
#define NS_SCD_TAIL_MASK_FIX 0x000003F0
#define NS_SCD_HEAD_MASK_VAR 0x00001FF0
@ -426,13 +394,9 @@ typedef struct ns_scd
#define NS_SCD_XMITFOREVER 0x02000000
/* NOTE: There are other fields in word 2 of the SCD, but as they should
not be needed in the device driver they are not defined here. */
/* NICStAR local SRAM memory map **********************************************/
not be needed in the device driver they are not defined here. */
/* NICStAR local SRAM memory map */
#define NS_RCT 0x00000
#define NS_RCT_32_END 0x03FFF
@ -455,100 +419,93 @@ typedef struct ns_scd
#define NS_LGFBQ 0x1FC00
#define NS_LGFBQ_END 0x1FFFF
/* NISCtAR operation registers ************************************************/
/* NISCtAR operation registers */
/* See Section 3.4 of `IDT77211 NICStAR User Manual' from www.idt.com */
enum ns_regs
{
DR0 = 0x00, /* Data Register 0 R/W*/
DR1 = 0x04, /* Data Register 1 W */
DR2 = 0x08, /* Data Register 2 W */
DR3 = 0x0C, /* Data Register 3 W */
CMD = 0x10, /* Command W */
CFG = 0x14, /* Configuration R/W */
STAT = 0x18, /* Status R/W */
RSQB = 0x1C, /* Receive Status Queue Base W */
RSQT = 0x20, /* Receive Status Queue Tail R */
RSQH = 0x24, /* Receive Status Queue Head W */
CDC = 0x28, /* Cell Drop Counter R/clear */
VPEC = 0x2C, /* VPI/VCI Lookup Error Count R/clear */
ICC = 0x30, /* Invalid Cell Count R/clear */
RAWCT = 0x34, /* Raw Cell Tail R */
TMR = 0x38, /* Timer R */
TSTB = 0x3C, /* Transmit Schedule Table Base R/W */
TSQB = 0x40, /* Transmit Status Queue Base W */
TSQT = 0x44, /* Transmit Status Queue Tail R */
TSQH = 0x48, /* Transmit Status Queue Head W */
GP = 0x4C, /* General Purpose R/W */
VPM = 0x50 /* VPI/VCI Mask W */
enum ns_regs {
DR0 = 0x00, /* Data Register 0 R/W */
DR1 = 0x04, /* Data Register 1 W */
DR2 = 0x08, /* Data Register 2 W */
DR3 = 0x0C, /* Data Register 3 W */
CMD = 0x10, /* Command W */
CFG = 0x14, /* Configuration R/W */
STAT = 0x18, /* Status R/W */
RSQB = 0x1C, /* Receive Status Queue Base W */
RSQT = 0x20, /* Receive Status Queue Tail R */
RSQH = 0x24, /* Receive Status Queue Head W */
CDC = 0x28, /* Cell Drop Counter R/clear */
VPEC = 0x2C, /* VPI/VCI Lookup Error Count R/clear */
ICC = 0x30, /* Invalid Cell Count R/clear */
RAWCT = 0x34, /* Raw Cell Tail R */
TMR = 0x38, /* Timer R */
TSTB = 0x3C, /* Transmit Schedule Table Base R/W */
TSQB = 0x40, /* Transmit Status Queue Base W */
TSQT = 0x44, /* Transmit Status Queue Tail R */
TSQH = 0x48, /* Transmit Status Queue Head W */
GP = 0x4C, /* General Purpose R/W */
VPM = 0x50 /* VPI/VCI Mask W */
};
/* NICStAR commands issued to the CMD register ********************************/
/* NICStAR commands issued to the CMD register */
/* Top 4 bits are command opcode, lower 28 are parameters. */
#define NS_CMD_NO_OPERATION 0x00000000
/* params always 0 */
/* params always 0 */
#define NS_CMD_OPENCLOSE_CONNECTION 0x20000000
/* b19{1=open,0=close} b18-2{SRAM addr} */
/* b19{1=open,0=close} b18-2{SRAM addr} */
#define NS_CMD_WRITE_SRAM 0x40000000
/* b18-2{SRAM addr} b1-0{burst size} */
/* b18-2{SRAM addr} b1-0{burst size} */
#define NS_CMD_READ_SRAM 0x50000000
/* b18-2{SRAM addr} */
/* b18-2{SRAM addr} */
#define NS_CMD_WRITE_FREEBUFQ 0x60000000
/* b0{large buf indicator} */
/* b0{large buf indicator} */
#define NS_CMD_READ_UTILITY 0x80000000
/* b8{1=select UTL_CS1} b9{1=select UTL_CS0} b7-0{bus addr} */
/* b8{1=select UTL_CS1} b9{1=select UTL_CS0} b7-0{bus addr} */
#define NS_CMD_WRITE_UTILITY 0x90000000
/* b8{1=select UTL_CS1} b9{1=select UTL_CS0} b7-0{bus addr} */
/* b8{1=select UTL_CS1} b9{1=select UTL_CS0} b7-0{bus addr} */
#define NS_CMD_OPEN_CONNECTION (NS_CMD_OPENCLOSE_CONNECTION | 0x00080000)
#define NS_CMD_CLOSE_CONNECTION NS_CMD_OPENCLOSE_CONNECTION
/* NICStAR configuration bits */
/* NICStAR configuration bits *************************************************/
#define NS_CFG_SWRST 0x80000000 /* Software Reset */
#define NS_CFG_RXPATH 0x20000000 /* Receive Path Enable */
#define NS_CFG_SMBUFSIZE_MASK 0x18000000 /* Small Receive Buffer Size */
#define NS_CFG_LGBUFSIZE_MASK 0x06000000 /* Large Receive Buffer Size */
#define NS_CFG_EFBIE 0x01000000 /* Empty Free Buffer Queue
Interrupt Enable */
#define NS_CFG_RSQSIZE_MASK 0x00C00000 /* Receive Status Queue Size */
#define NS_CFG_ICACCEPT 0x00200000 /* Invalid Cell Accept */
#define NS_CFG_IGNOREGFC 0x00100000 /* Ignore General Flow Control */
#define NS_CFG_VPIBITS_MASK 0x000C0000 /* VPI/VCI Bits Size Select */
#define NS_CFG_RCTSIZE_MASK 0x00030000 /* Receive Connection Table Size */
#define NS_CFG_VCERRACCEPT 0x00008000 /* VPI/VCI Error Cell Accept */
#define NS_CFG_RXINT_MASK 0x00007000 /* End of Receive PDU Interrupt
Handling */
#define NS_CFG_RAWIE 0x00000800 /* Raw Cell Qu' Interrupt Enable */
#define NS_CFG_RSQAFIE 0x00000400 /* Receive Queue Almost Full
Interrupt Enable */
#define NS_CFG_RXRM 0x00000200 /* Receive RM Cells */
#define NS_CFG_TMRROIE 0x00000080 /* Timer Roll Over Interrupt
Enable */
#define NS_CFG_TXEN 0x00000020 /* Transmit Operation Enable */
#define NS_CFG_TXIE 0x00000010 /* Transmit Status Interrupt
Enable */
#define NS_CFG_TXURIE 0x00000008 /* Transmit Under-run Interrupt
Enable */
#define NS_CFG_UMODE 0x00000004 /* Utopia Mode (cell/byte) Select */
#define NS_CFG_TSQFIE 0x00000002 /* Transmit Status Queue Full
Interrupt Enable */
#define NS_CFG_PHYIE 0x00000001 /* PHY Interrupt Enable */
#define NS_CFG_SWRST 0x80000000 /* Software Reset */
#define NS_CFG_RXPATH 0x20000000 /* Receive Path Enable */
#define NS_CFG_SMBUFSIZE_MASK 0x18000000 /* Small Receive Buffer Size */
#define NS_CFG_LGBUFSIZE_MASK 0x06000000 /* Large Receive Buffer Size */
#define NS_CFG_EFBIE 0x01000000 /* Empty Free Buffer Queue
Interrupt Enable */
#define NS_CFG_RSQSIZE_MASK 0x00C00000 /* Receive Status Queue Size */
#define NS_CFG_ICACCEPT 0x00200000 /* Invalid Cell Accept */
#define NS_CFG_IGNOREGFC 0x00100000 /* Ignore General Flow Control */
#define NS_CFG_VPIBITS_MASK 0x000C0000 /* VPI/VCI Bits Size Select */
#define NS_CFG_RCTSIZE_MASK 0x00030000 /* Receive Connection Table Size */
#define NS_CFG_VCERRACCEPT 0x00008000 /* VPI/VCI Error Cell Accept */
#define NS_CFG_RXINT_MASK 0x00007000 /* End of Receive PDU Interrupt
Handling */
#define NS_CFG_RAWIE 0x00000800 /* Raw Cell Qu' Interrupt Enable */
#define NS_CFG_RSQAFIE 0x00000400 /* Receive Queue Almost Full
Interrupt Enable */
#define NS_CFG_RXRM 0x00000200 /* Receive RM Cells */
#define NS_CFG_TMRROIE 0x00000080 /* Timer Roll Over Interrupt
Enable */
#define NS_CFG_TXEN 0x00000020 /* Transmit Operation Enable */
#define NS_CFG_TXIE 0x00000010 /* Transmit Status Interrupt
Enable */
#define NS_CFG_TXURIE 0x00000008 /* Transmit Under-run Interrupt
Enable */
#define NS_CFG_UMODE 0x00000004 /* Utopia Mode (cell/byte) Select */
#define NS_CFG_TSQFIE 0x00000002 /* Transmit Status Queue Full
Interrupt Enable */
#define NS_CFG_PHYIE 0x00000001 /* PHY Interrupt Enable */
#define NS_CFG_SMBUFSIZE_48 0x00000000
#define NS_CFG_SMBUFSIZE_96 0x08000000
@ -579,33 +536,29 @@ enum ns_regs
#define NS_CFG_RXINT_624US 0x00003000
#define NS_CFG_RXINT_899US 0x00004000
/* NICStAR STATus bits */
/* NICStAR STATus bits ********************************************************/
#define NS_STAT_SFBQC_MASK 0xFF000000 /* hi 8 bits Small Buffer Queue Count */
#define NS_STAT_LFBQC_MASK 0x00FF0000 /* hi 8 bits Large Buffer Queue Count */
#define NS_STAT_TSIF 0x00008000 /* Transmit Status Queue Indicator */
#define NS_STAT_TXICP 0x00004000 /* Transmit Incomplete PDU */
#define NS_STAT_TSQF 0x00001000 /* Transmit Status Queue Full */
#define NS_STAT_TMROF 0x00000800 /* Timer Overflow */
#define NS_STAT_PHYI 0x00000400 /* PHY Device Interrupt */
#define NS_STAT_CMDBZ 0x00000200 /* Command Busy */
#define NS_STAT_SFBQF 0x00000100 /* Small Buffer Queue Full */
#define NS_STAT_LFBQF 0x00000080 /* Large Buffer Queue Full */
#define NS_STAT_RSQF 0x00000040 /* Receive Status Queue Full */
#define NS_STAT_EOPDU 0x00000020 /* End of PDU */
#define NS_STAT_RAWCF 0x00000010 /* Raw Cell Flag */
#define NS_STAT_SFBQE 0x00000008 /* Small Buffer Queue Empty */
#define NS_STAT_LFBQE 0x00000004 /* Large Buffer Queue Empty */
#define NS_STAT_RSQAF 0x00000002 /* Receive Status Queue Almost Full */
#define NS_STAT_SFBQC_MASK 0xFF000000 /* hi 8 bits Small Buffer Queue Count */
#define NS_STAT_LFBQC_MASK 0x00FF0000 /* hi 8 bits Large Buffer Queue Count */
#define NS_STAT_TSIF 0x00008000 /* Transmit Status Queue Indicator */
#define NS_STAT_TXICP 0x00004000 /* Transmit Incomplete PDU */
#define NS_STAT_TSQF 0x00001000 /* Transmit Status Queue Full */
#define NS_STAT_TMROF 0x00000800 /* Timer Overflow */
#define NS_STAT_PHYI 0x00000400 /* PHY Device Interrupt */
#define NS_STAT_CMDBZ 0x00000200 /* Command Busy */
#define NS_STAT_SFBQF 0x00000100 /* Small Buffer Queue Full */
#define NS_STAT_LFBQF 0x00000080 /* Large Buffer Queue Full */
#define NS_STAT_RSQF 0x00000040 /* Receive Status Queue Full */
#define NS_STAT_EOPDU 0x00000020 /* End of PDU */
#define NS_STAT_RAWCF 0x00000010 /* Raw Cell Flag */
#define NS_STAT_SFBQE 0x00000008 /* Small Buffer Queue Empty */
#define NS_STAT_LFBQE 0x00000004 /* Large Buffer Queue Empty */
#define NS_STAT_RSQAF 0x00000002 /* Receive Status Queue Almost Full */
#define ns_stat_sfbqc_get(stat) (((stat) & NS_STAT_SFBQC_MASK) >> 23)
#define ns_stat_lfbqc_get(stat) (((stat) & NS_STAT_LFBQC_MASK) >> 15)
/* #defines which depend on other #defines ************************************/
/* #defines which depend on other #defines */
#define NS_TST0 NS_TST_FRSCD
#define NS_TST1 (NS_TST_FRSCD + NS_TST_NUM_ENTRIES + 1)
@ -672,8 +625,7 @@ enum ns_regs
#define NS_CFG_TSQFIE_OPT 0x00000000
#endif /* ENABLE_TSQFIE */
/* PCI stuff ******************************************************************/
/* PCI stuff */
#ifndef PCI_VENDOR_ID_IDT
#define PCI_VENDOR_ID_IDT 0x111D
@ -683,138 +635,124 @@ enum ns_regs
#define PCI_DEVICE_ID_IDT_IDT77201 0x0001
#endif /* PCI_DEVICE_ID_IDT_IDT77201 */
/* Device driver structures */
/* Device driver structures ***************************************************/
struct ns_skb_cb {
u32 buf_type; /* BUF_SM/BUF_LG/BUF_NONE */
struct ns_skb_prv {
u32 buf_type; /* BUF_SM/BUF_LG/BUF_NONE */
u32 dma;
int iovcnt;
};
#define NS_SKB_CB(skb) ((struct ns_skb_cb *)((skb)->cb))
#define NS_PRV_BUFTYPE(skb) \
(((struct ns_skb_prv *)(ATM_SKB(skb)+1))->buf_type)
#define NS_PRV_DMA(skb) \
(((struct ns_skb_prv *)(ATM_SKB(skb)+1))->dma)
#define NS_PRV_IOVCNT(skb) \
(((struct ns_skb_prv *)(ATM_SKB(skb)+1))->iovcnt)
typedef struct tsq_info
{
void *org;
ns_tsi *base;
ns_tsi *next;
ns_tsi *last;
typedef struct tsq_info {
void *org;
dma_addr_t dma;
ns_tsi *base;
ns_tsi *next;
ns_tsi *last;
} tsq_info;
typedef struct scq_info
{
void *org;
ns_scqe *base;
ns_scqe *last;
ns_scqe *next;
volatile ns_scqe *tail; /* Not related to the nicstar register */
unsigned num_entries;
struct sk_buff **skb; /* Pointer to an array of pointers
to the sk_buffs used for tx */
u32 scd; /* SRAM address of the corresponding
SCD */
int tbd_count; /* Only meaningful on variable rate */
wait_queue_head_t scqfull_waitq;
volatile char full; /* SCQ full indicator */
spinlock_t lock; /* SCQ spinlock */
typedef struct scq_info {
void *org;
dma_addr_t dma;
ns_scqe *base;
ns_scqe *last;
ns_scqe *next;
volatile ns_scqe *tail; /* Not related to the nicstar register */
unsigned num_entries;
struct sk_buff **skb; /* Pointer to an array of pointers
to the sk_buffs used for tx */
u32 scd; /* SRAM address of the corresponding
SCD */
int tbd_count; /* Only meaningful on variable rate */
wait_queue_head_t scqfull_waitq;
volatile char full; /* SCQ full indicator */
spinlock_t lock; /* SCQ spinlock */
} scq_info;
typedef struct rsq_info
{
void *org;
ns_rsqe *base;
ns_rsqe *next;
ns_rsqe *last;
typedef struct rsq_info {
void *org;
dma_addr_t dma;
ns_rsqe *base;
ns_rsqe *next;
ns_rsqe *last;
} rsq_info;
typedef struct skb_pool
{
volatile int count; /* number of buffers in the queue */
struct sk_buff_head queue;
typedef struct skb_pool {
volatile int count; /* number of buffers in the queue */
struct sk_buff_head queue;
} skb_pool;
/* NOTE: for small and large buffer pools, the count is not used, as the
actual value used for buffer management is the one read from the
card. */
typedef struct vc_map
{
volatile unsigned int tx:1; /* TX vc? */
volatile unsigned int rx:1; /* RX vc? */
struct atm_vcc *tx_vcc, *rx_vcc;
struct sk_buff *rx_iov; /* RX iovector skb */
scq_info *scq; /* To keep track of the SCQ */
u32 cbr_scd; /* SRAM address of the corresponding
SCD. 0x00000000 for UBR/VBR/ABR */
int tbd_count;
typedef struct vc_map {
volatile unsigned int tx:1; /* TX vc? */
volatile unsigned int rx:1; /* RX vc? */
struct atm_vcc *tx_vcc, *rx_vcc;
struct sk_buff *rx_iov; /* RX iovector skb */
scq_info *scq; /* To keep track of the SCQ */
u32 cbr_scd; /* SRAM address of the corresponding
SCD. 0x00000000 for UBR/VBR/ABR */
int tbd_count;
} vc_map;
struct ns_skb_data
{
struct atm_vcc *vcc;
int iovcnt;
};
#define NS_SKB(skb) (((struct ns_skb_data *) (skb)->cb))
typedef struct ns_dev
{
int index; /* Card ID to the device driver */
int sram_size; /* In k x 32bit words. 32 or 128 */
void __iomem *membase; /* Card's memory base address */
unsigned long max_pcr;
int rct_size; /* Number of entries */
int vpibits;
int vcibits;
struct pci_dev *pcidev;
struct atm_dev *atmdev;
tsq_info tsq;
rsq_info rsq;
scq_info *scq0, *scq1, *scq2; /* VBR SCQs */
skb_pool sbpool; /* Small buffers */
skb_pool lbpool; /* Large buffers */
skb_pool hbpool; /* Pre-allocated huge buffers */
skb_pool iovpool; /* iovector buffers */
volatile int efbie; /* Empty free buf. queue int. enabled */
volatile u32 tst_addr; /* SRAM address of the TST in use */
volatile int tst_free_entries;
vc_map vcmap[NS_MAX_RCTSIZE];
vc_map *tste2vc[NS_TST_NUM_ENTRIES];
vc_map *scd2vc[NS_FRSCD_NUM];
buf_nr sbnr;
buf_nr lbnr;
buf_nr hbnr;
buf_nr iovnr;
int sbfqc;
int lbfqc;
u32 sm_handle;
u32 sm_addr;
u32 lg_handle;
u32 lg_addr;
struct sk_buff *rcbuf; /* Current raw cell buffer */
u32 rawch; /* Raw cell queue head */
unsigned intcnt; /* Interrupt counter */
spinlock_t int_lock; /* Interrupt lock */
spinlock_t res_lock; /* Card resource lock */
typedef struct ns_dev {
int index; /* Card ID to the device driver */
int sram_size; /* In k x 32bit words. 32 or 128 */
void __iomem *membase; /* Card's memory base address */
unsigned long max_pcr;
int rct_size; /* Number of entries */
int vpibits;
int vcibits;
struct pci_dev *pcidev;
struct idr idr;
struct atm_dev *atmdev;
tsq_info tsq;
rsq_info rsq;
scq_info *scq0, *scq1, *scq2; /* VBR SCQs */
skb_pool sbpool; /* Small buffers */
skb_pool lbpool; /* Large buffers */
skb_pool hbpool; /* Pre-allocated huge buffers */
skb_pool iovpool; /* iovector buffers */
volatile int efbie; /* Empty free buf. queue int. enabled */
volatile u32 tst_addr; /* SRAM address of the TST in use */
volatile int tst_free_entries;
vc_map vcmap[NS_MAX_RCTSIZE];
vc_map *tste2vc[NS_TST_NUM_ENTRIES];
vc_map *scd2vc[NS_FRSCD_NUM];
buf_nr sbnr;
buf_nr lbnr;
buf_nr hbnr;
buf_nr iovnr;
int sbfqc;
int lbfqc;
struct sk_buff *sm_handle;
u32 sm_addr;
struct sk_buff *lg_handle;
u32 lg_addr;
struct sk_buff *rcbuf; /* Current raw cell buffer */
struct ns_rcqe *rawcell;
u32 rawch; /* Raw cell queue head */
unsigned intcnt; /* Interrupt counter */
spinlock_t int_lock; /* Interrupt lock */
spinlock_t res_lock; /* Card resource lock */
} ns_dev;
/* NOTE: Each tste2vc entry relates a given TST entry to the corresponding
CBR vc. If the entry is not allocated, it must be NULL.
There are two TSTs so the driver can modify them on the fly
without stopping the transmission.
scd2vc allows us to find out unused fixed rate SCDs, because
they must have a NULL pointer here. */
CBR vc. If the entry is not allocated, it must be NULL.
There are two TSTs so the driver can modify them on the fly
without stopping the transmission.
scd2vc allows us to find out unused fixed rate SCDs, because
they must have a NULL pointer here. */
#endif /* _LINUX_NICSTAR_H_ */


@ -13,15 +13,15 @@ typedef void __iomem *virt_addr_t;
#define CYCLE_DELAY 5
/* This was the original definition
/*
This was the original definition
#define osp_MicroDelay(microsec) \
do { int _i = 4*microsec; while (--_i > 0) { __SLOW_DOWN_IO; }} while (0)
*/
#define osp_MicroDelay(microsec) {unsigned long useconds = (microsec); \
udelay((useconds));}
/* The following tables represent the timing diagrams found in
/*
* The following tables represent the timing diagrams found in
* the Data Sheet for the Xicor X25020 EEProm. The #defines below
* represent the bits in the NICStAR's General Purpose register
* that must be toggled for the corresponding actions on the EEProm
@ -31,86 +31,80 @@ typedef void __iomem *virt_addr_t;
/* Write Data To EEProm from SI line on rising edge of CLK */
/* Read Data From EEProm on falling edge of CLK */
#define CS_HIGH 0x0002 /* Chip select high */
#define CS_LOW 0x0000 /* Chip select low (active low)*/
#define CLK_HIGH 0x0004 /* Clock high */
#define CLK_LOW 0x0000 /* Clock low */
#define SI_HIGH 0x0001 /* Serial input data high */
#define SI_LOW 0x0000 /* Serial input data low */
#define CS_HIGH 0x0002 /* Chip select high */
#define CS_LOW 0x0000 /* Chip select low (active low) */
#define CLK_HIGH 0x0004 /* Clock high */
#define CLK_LOW 0x0000 /* Clock low */
#define SI_HIGH 0x0001 /* Serial input data high */
#define SI_LOW 0x0000 /* Serial input data low */
/* Read Status Register = 0000 0101b */
#if 0
static u_int32_t rdsrtab[] =
{
CS_HIGH | CLK_HIGH,
CS_LOW | CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH, /* 1 */
CLK_LOW | SI_LOW,
CLK_HIGH, /* 0 */
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH /* 1 */
static u_int32_t rdsrtab[] = {
CS_HIGH | CLK_HIGH,
CS_LOW | CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH, /* 1 */
CLK_LOW | SI_LOW,
CLK_HIGH, /* 0 */
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH /* 1 */
};
#endif /* 0 */
#endif /* 0 */
/* Read from EEPROM = 0000 0011b */
static u_int32_t readtab[] =
{
/*
CS_HIGH | CLK_HIGH,
*/
CS_LOW | CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH, /* 1 */
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH /* 1 */
static u_int32_t readtab[] = {
/*
CS_HIGH | CLK_HIGH,
*/
CS_LOW | CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW,
CLK_HIGH, /* 0 */
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH, /* 1 */
CLK_LOW | SI_HIGH,
CLK_HIGH | SI_HIGH /* 1 */
};
/* Clock to read from/write to the eeprom */
static u_int32_t clocktab[] =
{
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW
static u_int32_t clocktab[] = {
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW,
CLK_HIGH,
CLK_LOW
};
#define NICSTAR_REG_WRITE(bs, reg, val) \
while ( readl(bs + STAT) & 0x0200 ) ; \
writel((val),(base)+(reg))
@ -124,153 +118,131 @@ static u_int32_t clocktab[] =
* register.
*/
#if 0
u_int32_t
nicstar_read_eprom_status( virt_addr_t base )
u_int32_t nicstar_read_eprom_status(virt_addr_t base)
{
u_int32_t val;
u_int32_t rbyte;
int32_t i, j;
u_int32_t val;
u_int32_t rbyte;
int32_t i, j;
/* Send read instruction */
val = NICSTAR_REG_READ( base, NICSTAR_REG_GENERAL_PURPOSE ) & 0xFFFFFFF0;
/* Send read instruction */
val = NICSTAR_REG_READ(base, NICSTAR_REG_GENERAL_PURPOSE) & 0xFFFFFFF0;
for (i=0; i<ARRAY_SIZE(rdsrtab); i++)
{
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE,
(val | rdsrtab[i]) );
osp_MicroDelay( CYCLE_DELAY );
}
for (i = 0; i < ARRAY_SIZE(rdsrtab); i++) {
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | rdsrtab[i]));
osp_MicroDelay(CYCLE_DELAY);
}
/* Done sending instruction - now pull data off of bit 16, MSB first */
/* Data clocked out of eeprom on falling edge of clock */
/* Done sending instruction - now pull data off of bit 16, MSB first */
/* Data clocked out of eeprom on falling edge of clock */
rbyte = 0;
for (i=7, j=0; i>=0; i--)
{
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++]) );
rbyte |= (((NICSTAR_REG_READ( base, NICSTAR_REG_GENERAL_PURPOSE)
& 0x00010000) >> 16) << i);
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++]) );
osp_MicroDelay( CYCLE_DELAY );
}
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE, 2 );
osp_MicroDelay( CYCLE_DELAY );
return rbyte;
rbyte = 0;
for (i = 7, j = 0; i >= 0; i--) {
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++]));
rbyte |= (((NICSTAR_REG_READ(base, NICSTAR_REG_GENERAL_PURPOSE)
& 0x00010000) >> 16) << i);
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++]));
osp_MicroDelay(CYCLE_DELAY);
}
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE, 2);
osp_MicroDelay(CYCLE_DELAY);
return rbyte;
}
#endif /* 0 */
#endif /* 0 */
/*
* This routine will clock the Read_data function into the X2520
* eeprom, followed by the address to read from, through the NicSTaR's General
* Purpose register.
*/
static u_int8_t
read_eprom_byte(virt_addr_t base, u_int8_t offset)
static u_int8_t read_eprom_byte(virt_addr_t base, u_int8_t offset)
{
u_int32_t val = 0;
int i,j=0;
u_int8_t tempread = 0;
u_int32_t val = 0;
int i, j = 0;
u_int8_t tempread = 0;
val = NICSTAR_REG_READ( base, NICSTAR_REG_GENERAL_PURPOSE ) & 0xFFFFFFF0;
val = NICSTAR_REG_READ(base, NICSTAR_REG_GENERAL_PURPOSE) & 0xFFFFFFF0;
/* Send READ instruction */
for (i=0; i<ARRAY_SIZE(readtab); i++)
{
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE,
(val | readtab[i]) );
osp_MicroDelay( CYCLE_DELAY );
}
/* Send READ instruction */
for (i = 0; i < ARRAY_SIZE(readtab); i++) {
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | readtab[i]));
osp_MicroDelay(CYCLE_DELAY);
}
/* Next, we need to send the byte address to read from */
for (i=7; i>=0; i--)
{
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++] | ((offset >> i) & 1) ) );
osp_MicroDelay(CYCLE_DELAY);
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++] | ((offset >> i) & 1) ) );
osp_MicroDelay( CYCLE_DELAY );
}
/* Next, we need to send the byte address to read from */
for (i = 7; i >= 0; i--) {
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++] | ((offset >> i) & 1)));
osp_MicroDelay(CYCLE_DELAY);
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++] | ((offset >> i) & 1)));
osp_MicroDelay(CYCLE_DELAY);
}
j = 0;
/* Now, we can read data from the eeprom by clocking it in */
for (i=7; i>=0; i--)
{
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++]) );
osp_MicroDelay( CYCLE_DELAY );
tempread |= (((NICSTAR_REG_READ( base, NICSTAR_REG_GENERAL_PURPOSE )
& 0x00010000) >> 16) << i);
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++]) );
osp_MicroDelay( CYCLE_DELAY );
}
j = 0;
NICSTAR_REG_WRITE( base, NICSTAR_REG_GENERAL_PURPOSE, 2 );
osp_MicroDelay( CYCLE_DELAY );
return tempread;
/* Now, we can read data from the eeprom by clocking it in */
for (i = 7; i >= 0; i--) {
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++]));
osp_MicroDelay(CYCLE_DELAY);
tempread |=
(((NICSTAR_REG_READ(base, NICSTAR_REG_GENERAL_PURPOSE)
& 0x00010000) >> 16) << i);
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | clocktab[j++]));
osp_MicroDelay(CYCLE_DELAY);
}
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE, 2);
osp_MicroDelay(CYCLE_DELAY);
return tempread;
}
static void
nicstar_init_eprom( virt_addr_t base )
static void nicstar_init_eprom(virt_addr_t base)
{
u_int32_t val;
u_int32_t val;
/*
* turn chip select off
*/
val = NICSTAR_REG_READ(base, NICSTAR_REG_GENERAL_PURPOSE) & 0xFFFFFFF0;
/*
* turn chip select off
*/
val = NICSTAR_REG_READ(base, NICSTAR_REG_GENERAL_PURPOSE) & 0xFFFFFFF0;
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | CS_HIGH | CLK_HIGH));
osp_MicroDelay( CYCLE_DELAY );
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | CS_HIGH | CLK_HIGH));
osp_MicroDelay(CYCLE_DELAY);
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | CS_HIGH | CLK_LOW));
osp_MicroDelay( CYCLE_DELAY );
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | CS_HIGH | CLK_LOW));
osp_MicroDelay(CYCLE_DELAY);
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | CS_HIGH | CLK_HIGH));
osp_MicroDelay( CYCLE_DELAY );
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | CS_HIGH | CLK_HIGH));
osp_MicroDelay(CYCLE_DELAY);
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | CS_HIGH | CLK_LOW));
osp_MicroDelay( CYCLE_DELAY );
NICSTAR_REG_WRITE(base, NICSTAR_REG_GENERAL_PURPOSE,
(val | CS_HIGH | CLK_LOW));
osp_MicroDelay(CYCLE_DELAY);
}
/*
* This routine will be the interface to the ReadPromByte function
* above.
*/
*/
static void
nicstar_read_eprom(
virt_addr_t base,
u_int8_t prom_offset,
u_int8_t *buffer,
u_int32_t nbytes )
nicstar_read_eprom(virt_addr_t base,
u_int8_t prom_offset, u_int8_t * buffer, u_int32_t nbytes)
{
u_int i;
for (i=0; i<nbytes; i++)
{
buffer[i] = read_eprom_byte( base, prom_offset );
++prom_offset;
osp_MicroDelay( CYCLE_DELAY );
}
u_int i;
for (i = 0; i < nbytes; i++) {
buffer[i] = read_eprom_byte(base, prom_offset);
++prom_offset;
osp_MicroDelay(CYCLE_DELAY);
}
}
/*
void osp_MicroDelay(int x) {
}
*/


@ -383,7 +383,7 @@ static int process_status(struct solos_card *card, int port, struct sk_buff *skb
/* Anything but 'Showtime' is down */
if (strcmp(state_str, "Showtime")) {
card->atmdev[port]->signal = ATM_PHY_SIG_LOST;
atm_dev_signal_change(card->atmdev[port], ATM_PHY_SIG_LOST);
release_vccs(card->atmdev[port]);
dev_info(&card->dev->dev, "Port %d: %s\n", port, state_str);
return 0;
@ -401,7 +401,7 @@ static int process_status(struct solos_card *card, int port, struct sk_buff *skb
snr[0]?", SNR ":"", snr, attn[0]?", Attn ":"", attn);
card->atmdev[port]->link_rate = rate_down / 424;
card->atmdev[port]->signal = ATM_PHY_SIG_FOUND;
atm_dev_signal_change(card->atmdev[port], ATM_PHY_SIG_FOUND);
return 0;
}
@ -1246,7 +1246,7 @@ static int atm_init(struct solos_card *card)
card->atmdev[i]->ci_range.vci_bits = 16;
card->atmdev[i]->dev_data = card;
card->atmdev[i]->phy_data = (void *)(unsigned long)i;
card->atmdev[i]->signal = ATM_PHY_SIG_UNKNOWN;
atm_dev_signal_change(card->atmdev[i], ATM_PHY_SIG_UNKNOWN);
skb = alloc_skb(sizeof(*header), GFP_ATOMIC);
if (!skb) {


@ -291,8 +291,9 @@ static int suni_ioctl(struct atm_dev *dev,unsigned int cmd,void __user *arg)
static void poll_los(struct atm_dev *dev)
{
dev->signal = GET(RSOP_SIS) & SUNI_RSOP_SIS_LOSV ? ATM_PHY_SIG_LOST :
ATM_PHY_SIG_FOUND;
atm_dev_signal_change(dev,
GET(RSOP_SIS) & SUNI_RSOP_SIS_LOSV ?
ATM_PHY_SIG_LOST : ATM_PHY_SIG_FOUND);
}


@ -1637,10 +1637,8 @@ out_free:
MODULE_LICENSE("GPL");
static struct pci_device_id zatm_pci_tbl[] __devinitdata = {
{ PCI_VENDOR_ID_ZEITNET, PCI_DEVICE_ID_ZEITNET_1221,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, ZATM_COPPER },
{ PCI_VENDOR_ID_ZEITNET, PCI_DEVICE_ID_ZEITNET_1225,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
{ PCI_VDEVICE(ZEITNET, PCI_DEVICE_ID_ZEITNET_1221), ZATM_COPPER },
{ PCI_VDEVICE(ZEITNET, PCI_DEVICE_ID_ZEITNET_1225), 0 },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, zatm_pci_tbl);


@ -1819,3 +1819,67 @@ void device_shutdown(void)
spin_unlock(&devices_kset->list_lock);
async_synchronize_full();
}
/*
* Device logging functions
*/
#ifdef CONFIG_PRINTK
static int __dev_printk(const char *level, const struct device *dev,
struct va_format *vaf)
{
if (!dev)
return printk("%s(NULL device *): %pV", level, vaf);
return printk("%s%s %s: %pV",
level, dev_driver_string(dev), dev_name(dev), vaf);
}
int dev_printk(const char *level, const struct device *dev,
const char *fmt, ...)
{
struct va_format vaf;
va_list args;
int r;
va_start(args, fmt);
vaf.fmt = fmt;
vaf.va = &args;
r = __dev_printk(level, dev, &vaf);
va_end(args);
return r;
}
EXPORT_SYMBOL(dev_printk);
#define define_dev_printk_level(func, kern_level) \
int func(const struct device *dev, const char *fmt, ...) \
{ \
struct va_format vaf; \
va_list args; \
int r; \
\
va_start(args, fmt); \
\
vaf.fmt = fmt; \
vaf.va = &args; \
\
r = __dev_printk(kern_level, dev, &vaf); \
va_end(args); \
\
return r; \
} \
EXPORT_SYMBOL(func);
define_dev_printk_level(dev_emerg, KERN_EMERG);
define_dev_printk_level(dev_alert, KERN_ALERT);
define_dev_printk_level(dev_crit, KERN_CRIT);
define_dev_printk_level(dev_err, KERN_ERR);
define_dev_printk_level(dev_warn, KERN_WARNING);
define_dev_printk_level(dev_notice, KERN_NOTICE);
define_dev_printk_level(_dev_info, KERN_INFO);
#endif
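These consolidated helpers back the dev_err()/dev_warn()/dev_info() calls that drivers already make via <linux/device.h>. A minimal, hypothetical caller sketch (the probe function, device and messages are illustrative only, not part of this patch):

#include <linux/device.h>
#include <linux/pci.h>

/* Hypothetical PCI probe; only the logging calls are the point here. */
static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	if (!pdev->irq) {
		dev_err(&pdev->dev, "no IRQ assigned, aborting probe\n");
		return -ENODEV;
	}
	dev_info(&pdev->dev, "using IRQ %d\n", pdev->irq);
	return 0;
}

Every level funnels into __dev_printk(), so output is uniformly prefixed with the driver and device name (e.g. "foo 0000:00:19.0: using IRQ 16") rather than each caller formatting its own prefix.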


@ -58,6 +58,18 @@ config BT_HCIUART_BCSP
Say Y here to compile support for HCI BCSP protocol.
config BT_HCIUART_ATH3K
bool "Atheros AR300x serial support"
depends on BT_HCIUART
help
HCIATH3K (HCI Atheros AR300x) is a serial protocol for
communication between host and Atheros AR300x Bluetooth devices.
This protocol enables AR300x chips to be enabled with
power management support.
Enable this if you have Atheros AR300x serial Bluetooth device.
Say Y here to compile support for HCI UART ATH3K protocol.
config BT_HCIUART_LL
bool "HCILL protocol support"
depends on BT_HCIUART


@ -26,4 +26,5 @@ hci_uart-y := hci_ldisc.o
hci_uart-$(CONFIG_BT_HCIUART_H4) += hci_h4.o
hci_uart-$(CONFIG_BT_HCIUART_BCSP) += hci_bcsp.o
hci_uart-$(CONFIG_BT_HCIUART_LL) += hci_ll.o
hci_uart-$(CONFIG_BT_HCIUART_ATH3K) += hci_ath.o
hci_uart-objs := $(hci_uart-y)


@ -224,7 +224,7 @@ static int bcm203x_probe(struct usb_interface *intf, const struct usb_device_id
BT_DBG("firmware data %p size %zu", firmware->data, firmware->size);
data->fw_data = kmalloc(firmware->size, GFP_KERNEL);
data->fw_data = kmemdup(firmware->data, firmware->size, GFP_KERNEL);
if (!data->fw_data) {
BT_ERR("Can't allocate memory for firmware image");
release_firmware(firmware);
@ -234,7 +234,6 @@ static int bcm203x_probe(struct usb_interface *intf, const struct usb_device_id
return -ENOMEM;
}
memcpy(data->fw_data, firmware->data, firmware->size);
data->fw_size = firmware->size;
data->fw_sent = 0;


@ -62,7 +62,7 @@ struct hci_vendor_hdr {
__u8 type;
__le16 snum;
__le16 dlen;
} __attribute__ ((packed));
} __packed;
static int bpa10x_recv(struct hci_dev *hdev, int queue, void *buf, int count)
{


@ -216,7 +216,7 @@ static const struct file_operations btmrvl_gpiogap_fops = {
static ssize_t btmrvl_hscmd_write(struct file *file, const char __user *ubuf,
size_t count, loff_t *ppos)
{
struct btmrvl_private *priv = (struct btmrvl_private *) file->private_data;
struct btmrvl_private *priv = file->private_data;
char buf[16];
long result, ret;


@ -76,6 +76,7 @@ struct btmrvl_private {
int (*hw_host_to_card) (struct btmrvl_private *priv,
u8 *payload, u16 nb);
int (*hw_wakeup_firmware) (struct btmrvl_private *priv);
int (*hw_process_int_status) (struct btmrvl_private *priv);
spinlock_t driver_lock; /* spinlock used by driver */
#ifdef CONFIG_DEBUG_FS
void *debugfs_data;
@ -118,13 +119,13 @@ struct btmrvl_cmd {
__le16 ocf_ogf;
u8 length;
u8 data[4];
} __attribute__ ((packed));
} __packed;
struct btmrvl_event {
u8 ec; /* event counter */
u8 length;
u8 data[4];
} __attribute__ ((packed));
} __packed;
/* Prototype of global function */


@ -502,14 +502,17 @@ static int btmrvl_service_main_thread(void *data)
spin_lock_irqsave(&priv->driver_lock, flags);
if (adapter->int_count) {
adapter->int_count = 0;
spin_unlock_irqrestore(&priv->driver_lock, flags);
priv->hw_process_int_status(priv);
} else if (adapter->ps_state == PS_SLEEP &&
!skb_queue_empty(&adapter->tx_queue)) {
spin_unlock_irqrestore(&priv->driver_lock, flags);
adapter->wakeup_tries++;
priv->hw_wakeup_firmware(priv);
continue;
} else {
spin_unlock_irqrestore(&priv->driver_lock, flags);
}
spin_unlock_irqrestore(&priv->driver_lock, flags);
if (adapter->ps_state == PS_SLEEP)
continue;


@ -47,6 +47,7 @@
* module_exit function is called.
*/
static u8 user_rmmod;
static u8 sdio_ireg;
static const struct btmrvl_sdio_device btmrvl_sdio_sd6888 = {
.helper = "sd8688_helper.bin",
@ -83,10 +84,10 @@ static int btmrvl_sdio_read_fw_status(struct btmrvl_sdio_card *card, u16 *dat)
*dat = 0;
fws0 = sdio_readb(card->func, CARD_FW_STATUS0_REG, &ret);
if (ret)
return -EIO;
if (!ret)
fws1 = sdio_readb(card->func, CARD_FW_STATUS1_REG, &ret);
fws1 = sdio_readb(card->func, CARD_FW_STATUS1_REG, &ret);
if (ret)
return -EIO;
@ -216,7 +217,7 @@ static int btmrvl_sdio_download_helper(struct btmrvl_sdio_card *card)
tmphlprbufsz = ALIGN_SZ(BTM_UPLD_SIZE, BTSDIO_DMA_ALIGN);
tmphlprbuf = kmalloc(tmphlprbufsz, GFP_KERNEL);
tmphlprbuf = kzalloc(tmphlprbufsz, GFP_KERNEL);
if (!tmphlprbuf) {
BT_ERR("Unable to allocate buffer for helper."
" Terminating download");
@ -224,8 +225,6 @@ static int btmrvl_sdio_download_helper(struct btmrvl_sdio_card *card)
goto done;
}
memset(tmphlprbuf, 0, tmphlprbufsz);
helperbuf = (u8 *) ALIGN_ADDR(tmphlprbuf, BTSDIO_DMA_ALIGN);
/* Perform helper data transfer */
@ -318,7 +317,7 @@ static int btmrvl_sdio_download_fw_w_helper(struct btmrvl_sdio_card *card)
BT_DBG("Downloading FW image (%d bytes)", firmwarelen);
tmpfwbufsz = ALIGN_SZ(BTM_UPLD_SIZE, BTSDIO_DMA_ALIGN);
tmpfwbuf = kmalloc(tmpfwbufsz, GFP_KERNEL);
tmpfwbuf = kzalloc(tmpfwbufsz, GFP_KERNEL);
if (!tmpfwbuf) {
BT_ERR("Unable to allocate buffer for firmware."
" Terminating download");
@ -326,8 +325,6 @@ static int btmrvl_sdio_download_fw_w_helper(struct btmrvl_sdio_card *card)
goto done;
}
memset(tmpfwbuf, 0, tmpfwbufsz);
/* Ensure aligned firmware buffer */
fwbuf = (u8 *) ALIGN_ADDR(tmpfwbuf, BTSDIO_DMA_ALIGN);
@ -555,78 +552,79 @@ exit:
return ret;
}
static int btmrvl_sdio_get_int_status(struct btmrvl_private *priv, u8 * ireg)
static int btmrvl_sdio_process_int_status(struct btmrvl_private *priv)
{
int ret;
u8 sdio_ireg = 0;
ulong flags;
u8 ireg;
struct btmrvl_sdio_card *card = priv->btmrvl_dev.card;
*ireg = 0;
spin_lock_irqsave(&priv->driver_lock, flags);
ireg = sdio_ireg;
sdio_ireg = 0;
spin_unlock_irqrestore(&priv->driver_lock, flags);
sdio_ireg = sdio_readb(card->func, HOST_INTSTATUS_REG, &ret);
if (ret) {
BT_ERR("sdio_readb: read int status register failed");
ret = -EIO;
goto done;
}
if (sdio_ireg != 0) {
/*
* DN_LD_HOST_INT_STATUS and/or UP_LD_HOST_INT_STATUS
* Clear the interrupt status register and re-enable the
* interrupt.
*/
BT_DBG("sdio_ireg = 0x%x", sdio_ireg);
sdio_writeb(card->func, ~(sdio_ireg) & (DN_LD_HOST_INT_STATUS |
UP_LD_HOST_INT_STATUS),
HOST_INTSTATUS_REG, &ret);
if (ret) {
BT_ERR("sdio_writeb: clear int status register "
"failed");
ret = -EIO;
goto done;
}
}
if (sdio_ireg & DN_LD_HOST_INT_STATUS) {
sdio_claim_host(card->func);
if (ireg & DN_LD_HOST_INT_STATUS) {
if (priv->btmrvl_dev.tx_dnld_rdy)
BT_DBG("tx_done already received: "
" int_status=0x%x", sdio_ireg);
" int_status=0x%x", ireg);
else
priv->btmrvl_dev.tx_dnld_rdy = true;
}
if (sdio_ireg & UP_LD_HOST_INT_STATUS)
if (ireg & UP_LD_HOST_INT_STATUS)
btmrvl_sdio_card_to_host(priv);
*ireg = sdio_ireg;
sdio_release_host(card->func);
ret = 0;
done:
return ret;
return 0;
}
static void btmrvl_sdio_interrupt(struct sdio_func *func)
{
struct btmrvl_private *priv;
struct hci_dev *hcidev;
struct btmrvl_sdio_card *card;
ulong flags;
u8 ireg = 0;
int ret;
card = sdio_get_drvdata(func);
if (card && card->priv) {
priv = card->priv;
hcidev = priv->btmrvl_dev.hcidev;
if (btmrvl_sdio_get_int_status(priv, &ireg))
BT_ERR("reading HOST_INT_STATUS_REG failed");
else
BT_DBG("HOST_INT_STATUS_REG %#x", ireg);
btmrvl_interrupt(priv);
if (!card || !card->priv) {
BT_ERR("sbi_interrupt(%p) card or priv is "
"NULL, card=%p\n", func, card);
return;
}
priv = card->priv;
ireg = sdio_readb(card->func, HOST_INTSTATUS_REG, &ret);
if (ret) {
BT_ERR("sdio_readb: read int status register failed");
return;
}
if (ireg != 0) {
/*
* DN_LD_HOST_INT_STATUS and/or UP_LD_HOST_INT_STATUS
* Clear the interrupt status register and re-enable the
* interrupt.
*/
BT_DBG("ireg = 0x%x", ireg);
sdio_writeb(card->func, ~(ireg) & (DN_LD_HOST_INT_STATUS |
UP_LD_HOST_INT_STATUS),
HOST_INTSTATUS_REG, &ret);
if (ret) {
BT_ERR("sdio_writeb: clear int status register failed");
return;
}
}
spin_lock_irqsave(&priv->driver_lock, flags);
sdio_ireg |= ireg;
spin_unlock_irqrestore(&priv->driver_lock, flags);
btmrvl_interrupt(priv);
}
static int btmrvl_sdio_register_dev(struct btmrvl_sdio_card *card)
@ -930,6 +928,7 @@ static int btmrvl_sdio_probe(struct sdio_func *func,
/* Initialize the interface specific function pointers */
priv->hw_host_to_card = btmrvl_sdio_host_to_card;
priv->hw_wakeup_firmware = btmrvl_sdio_wakeup_fw;
priv->hw_process_int_status = btmrvl_sdio_process_int_status;
if (btmrvl_register_hdev(priv)) {
BT_ERR("Register hdev failed!");


@ -59,6 +59,9 @@ static struct usb_device_id btusb_table[] = {
/* Generic Bluetooth USB device */
{ USB_DEVICE_INFO(0xe0, 0x01, 0x01) },
/* Apple iMac11,1 */
{ USB_DEVICE(0x05ac, 0x8215) },
/* AVM BlueFRITZ! USB v2.0 */
{ USB_DEVICE(0x057c, 0x3800) },
@ -146,6 +149,7 @@ static struct usb_device_id blacklist_table[] = {
#define BTUSB_BULK_RUNNING 1
#define BTUSB_ISOC_RUNNING 2
#define BTUSB_SUSPENDING 3
#define BTUSB_DID_ISO_RESUME 4
struct btusb_data {
struct hci_dev *hdev;
@ -179,7 +183,6 @@ struct btusb_data {
unsigned int sco_num;
int isoc_altsetting;
int suspend_count;
int did_iso_resume:1;
};
static int inc_tx(struct btusb_data *data)
@ -807,7 +810,7 @@ static void btusb_work(struct work_struct *work)
int err;
if (hdev->conn_hash.sco_num > 0) {
if (!data->did_iso_resume) {
if (!test_bit(BTUSB_DID_ISO_RESUME, &data->flags)) {
err = usb_autopm_get_interface(data->isoc);
if (err < 0) {
clear_bit(BTUSB_ISOC_RUNNING, &data->flags);
@ -815,7 +818,7 @@ static void btusb_work(struct work_struct *work)
return;
}
data->did_iso_resume = 1;
set_bit(BTUSB_DID_ISO_RESUME, &data->flags);
}
if (data->isoc_altsetting != 2) {
clear_bit(BTUSB_ISOC_RUNNING, &data->flags);
@ -836,10 +839,8 @@ static void btusb_work(struct work_struct *work)
usb_kill_anchored_urbs(&data->isoc_anchor);
__set_isoc_interface(hdev, 0);
if (data->did_iso_resume) {
data->did_iso_resume = 0;
if (test_and_clear_bit(BTUSB_DID_ISO_RESUME, &data->flags))
usb_autopm_put_interface(data->isoc);
}
}
}


@ -104,7 +104,7 @@ typedef struct {
u8 type;
u8 zero;
u16 len;
} __attribute__ ((packed)) nsh_t; /* Nokia Specific Header */
} __packed nsh_t; /* Nokia Specific Header */
#define NSHL 4 /* Nokia Specific Header Length */

drivers/bluetooth/hci_ath.c (new file, 235 lines)

@ -0,0 +1,235 @@
/*
* Atheros Communication Bluetooth HCIATH3K UART protocol
*
* HCIATH3K (HCI Atheros AR300x Protocol) is a Atheros Communication's
* power management protocol extension to H4 to support AR300x Bluetooth Chip.
*
* Copyright (c) 2009-2010 Atheros Communications Inc.
*
* Acknowledgements:
* This file is based on hci_h4.c, which was written
* by Maxim Krasnyansky and Marcel Holtmann.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/tty.h>
#include <linux/errno.h>
#include <linux/ioctl.h>
#include <linux/skbuff.h>
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include "hci_uart.h"
struct ath_struct {
struct hci_uart *hu;
unsigned int cur_sleep;
struct sk_buff_head txq;
struct work_struct ctxtsw;
};
static int ath_wakeup_ar3k(struct tty_struct *tty)
{
struct termios settings;
int status = tty->driver->ops->tiocmget(tty, NULL);
if (status & TIOCM_CTS)
return status;
/* Disable Automatic RTSCTS */
n_tty_ioctl_helper(tty, NULL, TCGETS, (unsigned long)&settings);
settings.c_cflag &= ~CRTSCTS;
n_tty_ioctl_helper(tty, NULL, TCSETS, (unsigned long)&settings);
/* Clear RTS first */
status = tty->driver->ops->tiocmget(tty, NULL);
tty->driver->ops->tiocmset(tty, NULL, 0x00, TIOCM_RTS);
mdelay(20);
/* Set RTS, wake up board */
status = tty->driver->ops->tiocmget(tty, NULL);
tty->driver->ops->tiocmset(tty, NULL, TIOCM_RTS, 0x00);
mdelay(20);
status = tty->driver->ops->tiocmget(tty, NULL);
n_tty_ioctl_helper(tty, NULL, TCGETS, (unsigned long)&settings);
settings.c_cflag |= CRTSCTS;
n_tty_ioctl_helper(tty, NULL, TCSETS, (unsigned long)&settings);
return status;
}
static void ath_hci_uart_work(struct work_struct *work)
{
int status;
struct ath_struct *ath;
struct hci_uart *hu;
struct tty_struct *tty;
ath = container_of(work, struct ath_struct, ctxtsw);
hu = ath->hu;
tty = hu->tty;
/* verify and wake up controller */
if (ath->cur_sleep) {
status = ath_wakeup_ar3k(tty);
if (!(status & TIOCM_CTS))
return;
}
/* Ready to send Data */
clear_bit(HCI_UART_SENDING, &hu->tx_state);
hci_uart_tx_wakeup(hu);
}
/* Initialize protocol */
static int ath_open(struct hci_uart *hu)
{
struct ath_struct *ath;
BT_DBG("hu %p", hu);
ath = kzalloc(sizeof(*ath), GFP_ATOMIC);
if (!ath)
return -ENOMEM;
skb_queue_head_init(&ath->txq);
hu->priv = ath;
ath->hu = hu;
INIT_WORK(&ath->ctxtsw, ath_hci_uart_work);
return 0;
}
/* Flush protocol data */
static int ath_flush(struct hci_uart *hu)
{
struct ath_struct *ath = hu->priv;
BT_DBG("hu %p", hu);
skb_queue_purge(&ath->txq);
return 0;
}
/* Close protocol */
static int ath_close(struct hci_uart *hu)
{
struct ath_struct *ath = hu->priv;
BT_DBG("hu %p", hu);
skb_queue_purge(&ath->txq);
cancel_work_sync(&ath->ctxtsw);
hu->priv = NULL;
kfree(ath);
return 0;
}
#define HCI_OP_ATH_SLEEP 0xFC04
/* Enqueue frame for transmission */
static int ath_enqueue(struct hci_uart *hu, struct sk_buff *skb)
{
struct ath_struct *ath = hu->priv;
if (bt_cb(skb)->pkt_type == HCI_SCODATA_PKT) {
kfree_skb(skb);
return 0;
}
/*
* Update power management enable flag with parameters of
* HCI sleep enable vendor specific HCI command.
*/
if (bt_cb(skb)->pkt_type == HCI_COMMAND_PKT) {
struct hci_command_hdr *hdr = (void *)skb->data;
if (__le16_to_cpu(hdr->opcode) == HCI_OP_ATH_SLEEP)
ath->cur_sleep = skb->data[HCI_COMMAND_HDR_SIZE];
}
BT_DBG("hu %p skb %p", hu, skb);
/* Prepend skb with frame type */
memcpy(skb_push(skb, 1), &bt_cb(skb)->pkt_type, 1);
skb_queue_tail(&ath->txq, skb);
set_bit(HCI_UART_SENDING, &hu->tx_state);
schedule_work(&ath->ctxtsw);
return 0;
}
static struct sk_buff *ath_dequeue(struct hci_uart *hu)
{
struct ath_struct *ath = hu->priv;
return skb_dequeue(&ath->txq);
}
/* Recv data */
static int ath_recv(struct hci_uart *hu, void *data, int count)
{
if (hci_recv_stream_fragment(hu->hdev, data, count) < 0)
BT_ERR("Frame Reassembly Failed");
return count;
}
static struct hci_uart_proto athp = {
.id = HCI_UART_ATH3K,
.open = ath_open,
.close = ath_close,
.recv = ath_recv,
.enqueue = ath_enqueue,
.dequeue = ath_dequeue,
.flush = ath_flush,
};
int __init ath_init(void)
{
int err = hci_uart_register_proto(&athp);
if (!err)
BT_INFO("HCIATH3K protocol initialized");
else
BT_ERR("HCIATH3K protocol registration failed");
return err;
}
int __exit ath_deinit(void)
{
return hci_uart_unregister_proto(&athp);
}
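Binding this protocol to a serial port is done from user space (normally by hciattach): push the N_HCI line discipline onto the tty, then issue the HCIUARTSETFLAGS/HCIUARTSETPROTO ioctls added to hci_uart.h later in this diff. A minimal, hypothetical sketch; N_HCI is assumed to be 15 as in <linux/tty.h>, the ioctl numbers are copied from hci_uart.h, and the device path is illustrative:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#define N_HCI			15	/* assumed value, from <linux/tty.h> */
#define HCIUARTSETPROTO		_IOW('U', 200, int)
#define HCIUARTSETFLAGS		_IOW('U', 203, int)
#define HCI_UART_ATH3K		5

int main(void)
{
	int ldisc = N_HCI;
	int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);	/* illustrative path */

	if (fd < 0)
		return 1;
	if (ioctl(fd, TIOCSETD, &ldisc) < 0)		/* attach the HCI line discipline */
		return 1;
	if (ioctl(fd, HCIUARTSETFLAGS, 0) < 0)		/* must be set before the proto */
		return 1;
	if (ioctl(fd, HCIUARTSETPROTO, HCI_UART_ATH3K) < 0)	/* binds hci_ath, registers the hdev */
		return 1;
	pause();	/* keep the tty open; closing it unregisters the hdev */
	return 0;
}

hciattach performs essentially the same ldisc/ioctl sequence after its vendor-specific firmware and baud-rate setup for the port.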


@ -739,7 +739,7 @@ static struct hci_uart_proto bcsp = {
.flush = bcsp_flush
};
int bcsp_init(void)
int __init bcsp_init(void)
{
int err = hci_uart_register_proto(&bcsp);
@ -751,7 +751,7 @@ int bcsp_init(void)
return err;
}
int bcsp_deinit(void)
int __exit bcsp_deinit(void)
{
return hci_uart_unregister_proto(&bcsp);
}


@ -151,107 +151,8 @@ static inline int h4_check_data_len(struct h4_struct *h4, int len)
/* Recv data */
static int h4_recv(struct hci_uart *hu, void *data, int count)
{
struct h4_struct *h4 = hu->priv;
register char *ptr;
struct hci_event_hdr *eh;
struct hci_acl_hdr *ah;
struct hci_sco_hdr *sh;
register int len, type, dlen;
BT_DBG("hu %p count %d rx_state %ld rx_count %ld",
hu, count, h4->rx_state, h4->rx_count);
ptr = data;
while (count) {
if (h4->rx_count) {
len = min_t(unsigned int, h4->rx_count, count);
memcpy(skb_put(h4->rx_skb, len), ptr, len);
h4->rx_count -= len; count -= len; ptr += len;
if (h4->rx_count)
continue;
switch (h4->rx_state) {
case H4_W4_DATA:
BT_DBG("Complete data");
hci_recv_frame(h4->rx_skb);
h4->rx_state = H4_W4_PACKET_TYPE;
h4->rx_skb = NULL;
continue;
case H4_W4_EVENT_HDR:
eh = hci_event_hdr(h4->rx_skb);
BT_DBG("Event header: evt 0x%2.2x plen %d", eh->evt, eh->plen);
h4_check_data_len(h4, eh->plen);
continue;
case H4_W4_ACL_HDR:
ah = hci_acl_hdr(h4->rx_skb);
dlen = __le16_to_cpu(ah->dlen);
BT_DBG("ACL header: dlen %d", dlen);
h4_check_data_len(h4, dlen);
continue;
case H4_W4_SCO_HDR:
sh = hci_sco_hdr(h4->rx_skb);
BT_DBG("SCO header: dlen %d", sh->dlen);
h4_check_data_len(h4, sh->dlen);
continue;
}
}
/* H4_W4_PACKET_TYPE */
switch (*ptr) {
case HCI_EVENT_PKT:
BT_DBG("Event packet");
h4->rx_state = H4_W4_EVENT_HDR;
h4->rx_count = HCI_EVENT_HDR_SIZE;
type = HCI_EVENT_PKT;
break;
case HCI_ACLDATA_PKT:
BT_DBG("ACL packet");
h4->rx_state = H4_W4_ACL_HDR;
h4->rx_count = HCI_ACL_HDR_SIZE;
type = HCI_ACLDATA_PKT;
break;
case HCI_SCODATA_PKT:
BT_DBG("SCO packet");
h4->rx_state = H4_W4_SCO_HDR;
h4->rx_count = HCI_SCO_HDR_SIZE;
type = HCI_SCODATA_PKT;
break;
default:
BT_ERR("Unknown HCI packet type %2.2x", (__u8)*ptr);
hu->hdev->stat.err_rx++;
ptr++; count--;
continue;
};
ptr++; count--;
/* Allocate packet */
h4->rx_skb = bt_skb_alloc(HCI_MAX_FRAME_SIZE, GFP_ATOMIC);
if (!h4->rx_skb) {
BT_ERR("Can't allocate mem for new packet");
h4->rx_state = H4_W4_PACKET_TYPE;
h4->rx_count = 0;
return -ENOMEM;
}
h4->rx_skb->dev = (void *) hu->hdev;
bt_cb(h4->rx_skb)->pkt_type = type;
}
if (hci_recv_stream_fragment(hu->hdev, data, count) < 0)
BT_ERR("Frame Reassembly Failed");
return count;
}
@ -272,7 +173,7 @@ static struct hci_uart_proto h4p = {
.flush = h4_flush,
};
int h4_init(void)
int __init h4_init(void)
{
int err = hci_uart_register_proto(&h4p);
@ -284,7 +185,7 @@ int h4_init(void)
return err;
}
int h4_deinit(void)
int __exit h4_deinit(void)
{
return hci_uart_unregister_proto(&h4p);
}


@ -210,7 +210,6 @@ static int hci_uart_close(struct hci_dev *hdev)
static int hci_uart_send_frame(struct sk_buff *skb)
{
struct hci_dev* hdev = (struct hci_dev *) skb->dev;
struct tty_struct *tty;
struct hci_uart *hu;
if (!hdev) {
@ -222,7 +221,6 @@ static int hci_uart_send_frame(struct sk_buff *skb)
return -EBUSY;
hu = (struct hci_uart *) hdev->driver_data;
tty = hu->tty;
BT_DBG("%s: type %d len %d", hdev->name, bt_cb(skb)->pkt_type, skb->len);
@ -397,6 +395,9 @@ static int hci_uart_register_dev(struct hci_uart *hu)
if (!reset)
set_bit(HCI_QUIRK_NO_RESET, &hdev->quirks);
if (test_bit(HCI_UART_RAW_DEVICE, &hu->hdev_flags))
set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
if (hci_register_dev(hdev) < 0) {
BT_ERR("Can't register HCI device");
hci_free_dev(hdev);
@ -477,6 +478,15 @@ static int hci_uart_tty_ioctl(struct tty_struct *tty, struct file * file,
return hu->hdev->id;
return -EUNATCH;
case HCIUARTSETFLAGS:
if (test_bit(HCI_UART_PROTO_SET, &hu->flags))
return -EBUSY;
hu->hdev_flags = arg;
break;
case HCIUARTGETFLAGS:
return hu->hdev_flags;
default:
err = n_tty_ioctl_helper(tty, file, cmd, arg);
break;
@ -542,6 +552,9 @@ static int __init hci_uart_init(void)
#ifdef CONFIG_BT_HCIUART_LL
ll_init();
#endif
#ifdef CONFIG_BT_HCIUART_ATH3K
ath_init();
#endif
return 0;
}
@ -559,6 +572,9 @@ static void __exit hci_uart_exit(void)
#ifdef CONFIG_BT_HCIUART_LL
ll_deinit();
#endif
#ifdef CONFIG_BT_HCIUART_ATH3K
ath_deinit();
#endif
/* Release tty registration of line discipline */
if ((err = tty_unregister_ldisc(N_HCI)))


@ -74,7 +74,7 @@ enum hcill_states_e {
struct hcill_cmd {
u8 cmd;
} __attribute__((packed));
} __packed;
struct ll_struct {
unsigned long rx_state;
@ -517,7 +517,7 @@ static struct hci_uart_proto llp = {
.flush = ll_flush,
};
int ll_init(void)
int __init ll_init(void)
{
int err = hci_uart_register_proto(&llp);
@ -529,7 +529,7 @@ int ll_init(void)
return err;
}
int ll_deinit(void)
int __exit ll_deinit(void)
{
return hci_uart_unregister_proto(&llp);
}


@ -31,15 +31,20 @@
#define HCIUARTSETPROTO _IOW('U', 200, int)
#define HCIUARTGETPROTO _IOR('U', 201, int)
#define HCIUARTGETDEVICE _IOR('U', 202, int)
#define HCIUARTSETFLAGS _IOW('U', 203, int)
#define HCIUARTGETFLAGS _IOR('U', 204, int)
/* UART protocols */
#define HCI_UART_MAX_PROTO 5
#define HCI_UART_MAX_PROTO 6
#define HCI_UART_H4 0
#define HCI_UART_BCSP 1
#define HCI_UART_3WIRE 2
#define HCI_UART_H4DS 3
#define HCI_UART_LL 4
#define HCI_UART_ATH3K 5
#define HCI_UART_RAW_DEVICE 0
struct hci_uart;
@ -57,6 +62,7 @@ struct hci_uart {
struct tty_struct *tty;
struct hci_dev *hdev;
unsigned long flags;
unsigned long hdev_flags;
struct hci_uart_proto *proto;
void *priv;
@ -66,7 +72,7 @@ struct hci_uart {
spinlock_t rx_lock;
};
/* HCI_UART flag bits */
/* HCI_UART proto flag bits */
#define HCI_UART_PROTO_SET 0
/* TX states */
@ -91,3 +97,8 @@ int bcsp_deinit(void);
int ll_init(void);
int ll_deinit(void);
#endif
#ifdef CONFIG_BT_HCIUART_ATH3K
int ath_init(void);
int ath_deinit(void);
#endif


@ -215,7 +215,7 @@ static int addr4_resolve(struct sockaddr_in *src_in,
neigh = neigh_lookup(&arp_tbl, &rt->rt_gateway, rt->idev->dev);
if (!neigh || !(neigh->nud_state & NUD_VALID)) {
neigh_event_send(rt->u.dst.neighbour, NULL);
neigh_event_send(rt->dst.neighbour, NULL);
ret = -ENODATA;
if (neigh)
goto release;


@ -1364,7 +1364,7 @@ static int pass_accept_req(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
__func__);
goto reject;
}
dst = &rt->u.dst;
dst = &rt->dst;
l2t = t3_l2t_get(tdev, dst->neighbour, dst->neighbour->dev);
if (!l2t) {
printk(KERN_ERR MOD "%s - failed to allocate l2t entry!\n",
@ -1932,7 +1932,7 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
err = -EHOSTUNREACH;
goto fail3;
}
ep->dst = &rt->u.dst;
ep->dst = &rt->dst;
/* get a l2t entry */
ep->l2t = t3_l2t_get(ep->com.tdev, ep->dst->neighbour,

View File

@ -1365,7 +1365,7 @@ static int pass_accept_req(struct c4iw_dev *dev, struct sk_buff *skb)
__func__);
goto reject;
}
dst = &rt->u.dst;
dst = &rt->dst;
if (dst->neighbour->dev->flags & IFF_LOOPBACK) {
pdev = ip_dev_find(&init_net, peer_ip);
BUG_ON(!pdev);
@ -1939,7 +1939,7 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
err = -EHOSTUNREACH;
goto fail3;
}
ep->dst = &rt->u.dst;
ep->dst = &rt->dst;
/* get a l2t entry */
if (ep->dst->neighbour->dev->flags & IFF_LOOPBACK) {

View File

@ -1146,7 +1146,7 @@ static int nes_addr_resolve_neigh(struct nes_vnic *nesvnic, u32 dst_ip, int arpi
}
if ((neigh == NULL) || (!(neigh->nud_state & NUD_VALID)))
neigh_event_send(rt->u.dst.neighbour, NULL);
neigh_event_send(rt->dst.neighbour, NULL);
ip_rt_put(rt);
return rc;

View File

@ -1567,6 +1567,12 @@ static int nes_netdev_set_settings(struct net_device *netdev, struct ethtool_cmd
}
static int nes_netdev_set_flags(struct net_device *netdev, u32 flags)
{
return ethtool_op_set_flags(netdev, flags, ETH_FLAG_LRO);
}
static const struct ethtool_ops nes_ethtool_ops = {
.get_link = ethtool_op_get_link,
.get_settings = nes_netdev_get_settings,
@ -1588,7 +1594,7 @@ static const struct ethtool_ops nes_ethtool_ops = {
.get_tso = ethtool_op_get_tso,
.set_tso = ethtool_op_set_tso,
.get_flags = ethtool_op_get_flags,
.set_flags = ethtool_op_set_flags,
.set_flags = nes_netdev_set_flags,
};

View File

@ -147,6 +147,11 @@ static void ipoib_get_ethtool_stats(struct net_device *dev,
data[index++] = priv->lro.lro_mgr.stats.no_desc;
}
static int ipoib_set_flags(struct net_device *dev, u32 flags)
{
return ethtool_op_set_flags(dev, flags, ETH_FLAG_LRO);
}
static const struct ethtool_ops ipoib_ethtool_ops = {
.get_drvinfo = ipoib_get_drvinfo,
.get_rx_csum = ipoib_get_rx_csum,
@ -154,7 +159,7 @@ static const struct ethtool_ops ipoib_ethtool_ops = {
.get_coalesce = ipoib_get_coalesce,
.set_coalesce = ipoib_set_coalesce,
.get_flags = ethtool_op_get_flags,
.set_flags = ethtool_op_set_flags,
.set_flags = ipoib_set_flags,
.get_strings = ipoib_get_strings,
.get_sset_count = ipoib_get_sset_count,
.get_ethtool_stats = ipoib_get_ethtool_stats,
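
Both the nes and ipoib hunks above follow the same pattern: ethtool_op_set_flags() now takes a third argument naming the flags the driver actually supports, so the generic helper can no longer be wired into ethtool_ops directly. A minimal sketch of that pattern, with hypothetical foo_* names:

#include <linux/netdevice.h>
#include <linux/ethtool.h>

/* hypothetical driver that only lets userspace toggle LRO */
static int foo_set_flags(struct net_device *dev, u32 flags)
{
	/* third argument: mask of flags userspace may change */
	return ethtool_op_set_flags(dev, flags, ETH_FLAG_LRO);
}

static const struct ethtool_ops foo_ethtool_ops = {
	.get_flags = ethtool_op_get_flags,
	.set_flags = foo_set_flags,
};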

View File

@ -20,7 +20,6 @@
#include <linux/signal.h>
#include <linux/mutex.h>
#include <linux/mm.h>
#include <linux/smp_lock.h>
#include <linux/timer.h>
#include <linux/wait.h>
#include <linux/tty.h>
@ -50,6 +49,7 @@ MODULE_LICENSE("GPL");
/* -------- driver information -------------------------------------- */
static DEFINE_MUTEX(capi_mutex);
static struct class *capi_class;
static int capi_major = 68; /* allocated */
@ -691,7 +691,7 @@ unlock_out:
static ssize_t
capi_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
{
struct capidev *cdev = (struct capidev *)file->private_data;
struct capidev *cdev = file->private_data;
struct sk_buff *skb;
size_t copied;
int err;
@ -726,7 +726,7 @@ capi_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
static ssize_t
capi_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
{
struct capidev *cdev = (struct capidev *)file->private_data;
struct capidev *cdev = file->private_data;
struct sk_buff *skb;
u16 mlen;
@ -773,7 +773,7 @@ capi_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos
static unsigned int
capi_poll(struct file *file, poll_table * wait)
{
struct capidev *cdev = (struct capidev *)file->private_data;
struct capidev *cdev = file->private_data;
unsigned int mask = 0;
if (!cdev->ap.applid)
@ -985,9 +985,9 @@ capi_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
int ret;
lock_kernel();
mutex_lock(&capi_mutex);
ret = capi_ioctl(file, cmd, arg);
unlock_kernel();
mutex_unlock(&capi_mutex);
return ret;
}
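
The capi.c change above is one instance of the BKL push-down repeated across the ISDN drivers in this merge (divert, diva maint, hysdn, isdn_common, mISDN timerdev): a file-scope mutex replaces lock_kernel()/unlock_kernel() around the unlocked_ioctl path. The generic shape, sketched with hypothetical foo_* names:

#include <linux/fs.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(foo_mutex);

/* pre-existing handler, unchanged by the conversion */
static long foo_ioctl(struct file *file, unsigned int cmd, unsigned long arg);

static long foo_unlocked_ioctl(struct file *file, unsigned int cmd,
			       unsigned long arg)
{
	long ret;

	mutex_lock(&foo_mutex);			/* was: lock_kernel() */
	ret = foo_ioctl(file, cmd, arg);
	mutex_unlock(&foo_mutex);		/* was: unlock_kernel() */
	return ret;
}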

View File

@ -1450,12 +1450,9 @@ static void handle_dtrace_data(capidrv_contr *card,
}
for (p = data, end = data+len; p < end; p++) {
u8 w;
PUTBYTE_TO_STATUS(card, ' ');
w = (*p >> 4) & 0xf;
PUTBYTE_TO_STATUS(card, (w < 10) ? '0'+w : 'A'-10+w);
w = *p & 0xf;
PUTBYTE_TO_STATUS(card, (w < 10) ? '0'+w : 'A'-10+w);
PUTBYTE_TO_STATUS(card, hex_asc_hi(*p));
PUTBYTE_TO_STATUS(card, hex_asc_lo(*p));
}
PUTBYTE_TO_STATUS(card, '\n');
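
The loop removed above (like the QuickHex() hunk in hisax further down) hand-rolled nibble-to-ASCII conversion; hex_asc_hi()/hex_asc_lo() from <linux/kernel.h> do the same in one call each, emitting lowercase hex digits. A small illustrative helper, not taken from the patch:

#include <linux/kernel.h>
#include <linux/types.h>

/* write the two lowercase hex digits of 'byte' into out[0]/out[1] */
static void put_hex_byte(char *out, u8 byte)
{
	out[0] = hex_asc_hi(byte);	/* bits 7..4 */
	out[1] = hex_asc_lo(byte);	/* bits 3..0 */
}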

View File

@ -20,7 +20,7 @@
#include <linux/sched.h>
#include <linux/isdnif.h>
#include <net/net_namespace.h>
#include <linux/smp_lock.h>
#include <linux/mutex.h>
#include "isdn_divert.h"
@ -28,6 +28,7 @@
/* Variables for interface queue */
/*********************************/
ulong if_used = 0; /* number of interface users */
static DEFINE_MUTEX(isdn_divert_mutex);
static struct divert_info *divert_info_head = NULL; /* head of queue */
static struct divert_info *divert_info_tail = NULL; /* pointer to last entry */
static DEFINE_SPINLOCK(divert_info_lock);/* lock for queue */
@ -261,9 +262,9 @@ static long isdn_divert_ioctl(struct file *file, uint cmd, ulong arg)
{
long ret;
lock_kernel();
mutex_lock(&isdn_divert_mutex);
ret = isdn_divert_ioctl_unlocked(file, cmd, arg);
unlock_kernel();
mutex_unlock(&isdn_divert_mutex);
return ret;
}

View File

@ -17,8 +17,7 @@ menuconfig ISDN_DRV_GIGASET
if ISDN_DRV_GIGASET
config GIGASET_CAPI
bool "Gigaset CAPI support (EXPERIMENTAL)"
depends on EXPERIMENTAL
bool "Gigaset CAPI support"
depends on ISDN_CAPI='y'||(ISDN_CAPI='m'&&ISDN_DRV_GIGASET='m')
default ISDN_I4L='n'
help
@ -27,6 +26,7 @@ config GIGASET_CAPI
subsystem you'll have to enable the capidrv glue driver.
(select ISDN_CAPI_CAPIDRV.)
Say N to build the old native ISDN4Linux variant.
If unsure, say Y.
config GIGASET_I4L
bool

View File

@ -1188,24 +1188,6 @@ static void write_iso_tasklet(unsigned long data)
break;
}
}
#ifdef CONFIG_GIGASET_DEBUG
/* check assumption on remaining frames */
for (; i < BAS_NUMFRAMES; i++) {
ifd = &urb->iso_frame_desc[i];
if (ifd->status != -EINPROGRESS
|| ifd->actual_length != 0) {
dev_warn(cs->dev,
"isochronous write: frame %d: %s, "
"%d of %d bytes sent\n",
i, get_usb_statmsg(ifd->status),
ifd->actual_length, ifd->length);
offset = (ifd->offset +
ifd->actual_length)
% BAS_OUTBUFSIZE;
break;
}
}
#endif
break;
case -EPIPE: /* stall - probably underrun */
dev_err(cs->dev, "isochronous write stalled\n");
@ -1913,65 +1895,41 @@ static int start_cbsend(struct cardstate *cs)
* USB transmission is started if necessary.
* parameters:
* cs controller state structure
* buf command string to send
* len number of bytes to send (max. IF_WRITEBUF)
* wake_tasklet tasklet to run when transmission is completed
* (NULL if none)
* cb command buffer structure
* return value:
* number of bytes queued on success
* error code < 0 on error
*/
static int gigaset_write_cmd(struct cardstate *cs,
const unsigned char *buf, int len,
struct tasklet_struct *wake_tasklet)
static int gigaset_write_cmd(struct cardstate *cs, struct cmdbuf_t *cb)
{
struct cmdbuf_t *cb;
unsigned long flags;
int rc;
gigaset_dbg_buffer(cs->mstate != MS_LOCKED ?
DEBUG_TRANSCMD : DEBUG_LOCKCMD,
"CMD Transmit", len, buf);
if (len <= 0) {
/* nothing to do */
rc = 0;
goto notqueued;
}
"CMD Transmit", cb->len, cb->buf);
/* translate "+++" escape sequence sent as a single separate command
* into "close AT channel" command for error recovery
* The next command will reopen the AT channel automatically.
*/
if (len == 3 && !memcmp(buf, "+++", 3)) {
if (cb->len == 3 && !memcmp(cb->buf, "+++", 3)) {
kfree(cb);
rc = req_submit(cs->bcs, HD_CLOSE_ATCHANNEL, 0, BAS_TIMEOUT);
goto notqueued;
if (cb->wake_tasklet)
tasklet_schedule(cb->wake_tasklet);
return rc < 0 ? rc : cb->len;
}
if (len > IF_WRITEBUF)
len = IF_WRITEBUF;
cb = kmalloc(sizeof(struct cmdbuf_t) + len, GFP_ATOMIC);
if (!cb) {
dev_err(cs->dev, "%s: out of memory\n", __func__);
rc = -ENOMEM;
goto notqueued;
}
memcpy(cb->buf, buf, len);
cb->len = len;
cb->offset = 0;
cb->next = NULL;
cb->wake_tasklet = wake_tasklet;
spin_lock_irqsave(&cs->cmdlock, flags);
cb->prev = cs->lastcmdbuf;
if (cs->lastcmdbuf)
cs->lastcmdbuf->next = cb;
else {
cs->cmdbuf = cb;
cs->curlen = len;
cs->curlen = cb->len;
}
cs->cmdbytes += len;
cs->cmdbytes += cb->len;
cs->lastcmdbuf = cb;
spin_unlock_irqrestore(&cs->cmdlock, flags);
@ -1988,12 +1946,7 @@ static int gigaset_write_cmd(struct cardstate *cs,
}
rc = start_cbsend(cs);
spin_unlock_irqrestore(&cs->lock, flags);
return rc < 0 ? rc : len;
notqueued: /* request handled without queuing */
if (wake_tasklet)
tasklet_schedule(wake_tasklet);
return rc;
return rc < 0 ? rc : cb->len;
}
/* gigaset_write_room

View File

@ -45,6 +45,7 @@
#define CAPI_FACILITY_LI 0x0005
#define CAPI_SUPPSVC_GETSUPPORTED 0x0000
#define CAPI_SUPPSVC_LISTEN 0x0001
/* missing from capiutil.h */
#define CAPIMSG_PLCI_PART(m) CAPIMSG_U8(m, 9)
@ -270,9 +271,13 @@ static inline void dump_rawmsg(enum debuglevel level, const char *tag,
kfree(dbgline);
if (CAPIMSG_COMMAND(data) == CAPI_DATA_B3 &&
(CAPIMSG_SUBCOMMAND(data) == CAPI_REQ ||
CAPIMSG_SUBCOMMAND(data) == CAPI_IND) &&
CAPIMSG_DATALEN(data) > 0) {
CAPIMSG_SUBCOMMAND(data) == CAPI_IND)) {
l = CAPIMSG_DATALEN(data);
gig_dbg(level, " DataLength=%d", l);
if (l <= 0 || !(gigaset_debuglevel & DEBUG_LLDATA))
return;
if (l > 64)
l = 64; /* arbitrary limit */
dbgline = kmalloc(3*l, GFP_ATOMIC);
if (!dbgline)
return;
@ -378,13 +383,13 @@ void gigaset_skb_sent(struct bc_state *bcs, struct sk_buff *dskb)
++bcs->trans_up;
if (!ap) {
dev_err(cs->dev, "%s: no application\n", __func__);
gig_dbg(DEBUG_MCMD, "%s: application gone", __func__);
return;
}
/* don't send further B3 messages if disconnected */
if (bcs->apconnstate < APCONN_ACTIVE) {
gig_dbg(DEBUG_LLDATA, "disconnected, discarding ack");
gig_dbg(DEBUG_MCMD, "%s: disconnected", __func__);
return;
}
@ -422,13 +427,14 @@ void gigaset_skb_rcvd(struct bc_state *bcs, struct sk_buff *skb)
bcs->trans_down++;
if (!ap) {
dev_err(cs->dev, "%s: no application\n", __func__);
gig_dbg(DEBUG_MCMD, "%s: application gone", __func__);
dev_kfree_skb_any(skb);
return;
}
/* don't send further B3 messages if disconnected */
if (bcs->apconnstate < APCONN_ACTIVE) {
gig_dbg(DEBUG_LLDATA, "disconnected, discarding data");
gig_dbg(DEBUG_MCMD, "%s: disconnected", __func__);
dev_kfree_skb_any(skb);
return;
}
@ -454,7 +460,7 @@ void gigaset_skb_rcvd(struct bc_state *bcs, struct sk_buff *skb)
/* Data64 parameter not present */
/* emit message */
dump_rawmsg(DEBUG_LLDATA, "DATA_B3_IND", skb->data);
dump_rawmsg(DEBUG_MCMD, __func__, skb->data);
capi_ctr_handle_message(&iif->ctr, ap->id, skb);
}
EXPORT_SYMBOL_GPL(gigaset_skb_rcvd);
@ -747,7 +753,7 @@ void gigaset_isdn_connD(struct bc_state *bcs)
ap = bcs->ap;
if (!ap) {
spin_unlock_irqrestore(&bcs->aplock, flags);
dev_err(cs->dev, "%s: no application\n", __func__);
gig_dbg(DEBUG_CMD, "%s: application gone", __func__);
return;
}
if (bcs->apconnstate == APCONN_NONE) {
@ -843,7 +849,7 @@ void gigaset_isdn_connB(struct bc_state *bcs)
ap = bcs->ap;
if (!ap) {
spin_unlock_irqrestore(&bcs->aplock, flags);
dev_err(cs->dev, "%s: no application\n", __func__);
gig_dbg(DEBUG_CMD, "%s: application gone", __func__);
return;
}
if (!bcs->apconnstate) {
@ -901,13 +907,12 @@ void gigaset_isdn_connB(struct bc_state *bcs)
*/
void gigaset_isdn_hupB(struct bc_state *bcs)
{
struct cardstate *cs = bcs->cs;
struct gigaset_capi_appl *ap = bcs->ap;
/* ToDo: assure order of DISCONNECT_B3_IND and DISCONNECT_IND ? */
if (!ap) {
dev_err(cs->dev, "%s: no application\n", __func__);
gig_dbg(DEBUG_CMD, "%s: application gone", __func__);
return;
}
@ -978,6 +983,9 @@ static void gigaset_register_appl(struct capi_ctr *ctr, u16 appl,
struct cardstate *cs = ctr->driverdata;
struct gigaset_capi_appl *ap;
gig_dbg(DEBUG_CMD, "%s [%u] l3cnt=%u blkcnt=%u blklen=%u",
__func__, appl, rp->level3cnt, rp->datablkcnt, rp->datablklen);
list_for_each_entry(ap, &iif->appls, ctrlist)
if (ap->id == appl) {
dev_notice(cs->dev,
@ -1062,6 +1070,8 @@ static void gigaset_release_appl(struct capi_ctr *ctr, u16 appl)
struct gigaset_capi_appl *ap, *tmp;
unsigned ch;
gig_dbg(DEBUG_CMD, "%s [%u]", __func__, appl);
list_for_each_entry_safe(ap, tmp, &iif->appls, ctrlist)
if (ap->id == appl) {
/* remove from any channels */
@ -1142,7 +1152,7 @@ static void do_facility_req(struct gigaset_capi_ctr *iif,
case CAPI_FACILITY_SUPPSVC:
/* decode Function parameter */
pparam = cmsg->FacilityRequestParameter;
if (pparam == NULL || *pparam < 2) {
if (pparam == NULL || pparam[0] < 2) {
dev_notice(cs->dev, "%s: %s missing\n", "FACILITY_REQ",
"Facility Request Parameter");
send_conf(iif, ap, skb, CapiIllMessageParmCoding);
@ -1159,8 +1169,32 @@ static void do_facility_req(struct gigaset_capi_ctr *iif,
/* Supported Services: none */
capimsg_setu32(confparam, 6, 0);
break;
case CAPI_SUPPSVC_LISTEN:
if (pparam[0] < 7 || pparam[3] < 4) {
dev_notice(cs->dev, "%s: %s missing\n",
"FACILITY_REQ", "Notification Mask");
send_conf(iif, ap, skb,
CapiIllMessageParmCoding);
return;
}
if (CAPIMSG_U32(pparam, 4) != 0) {
dev_notice(cs->dev,
"%s: unsupported supplementary service notification mask 0x%x\n",
"FACILITY_REQ", CAPIMSG_U32(pparam, 4));
info = CapiFacilitySpecificFunctionNotSupported;
confparam[3] = 2; /* length */
capimsg_setu16(confparam, 4,
CapiSupplementaryServiceNotSupported);
}
info = CapiSuccess;
confparam[3] = 2; /* length */
capimsg_setu16(confparam, 4, CapiSuccess);
break;
/* ToDo: add supported services */
default:
dev_notice(cs->dev,
"%s: unsupported supplementary service function 0x%04x\n",
"FACILITY_REQ", function);
info = CapiFacilitySpecificFunctionNotSupported;
/* Supplementary Service specific parameter */
confparam[3] = 2; /* length */
@ -1951,11 +1985,7 @@ static void do_data_b3_req(struct gigaset_capi_ctr *iif,
u16 handle = CAPIMSG_HANDLE_REQ(skb->data);
/* frequent message, avoid _cmsg overhead */
dump_rawmsg(DEBUG_LLDATA, "DATA_B3_REQ", skb->data);
gig_dbg(DEBUG_LLDATA,
"Receiving data from LL (ch: %d, flg: %x, sz: %d|%d)",
channel, flags, msglen, datalen);
dump_rawmsg(DEBUG_MCMD, __func__, skb->data);
/* check parameters */
if (channel == 0 || channel > cs->channels || ncci != 1) {
@ -2064,7 +2094,7 @@ static void do_data_b3_resp(struct gigaset_capi_ctr *iif,
struct gigaset_capi_appl *ap,
struct sk_buff *skb)
{
dump_rawmsg(DEBUG_LLDATA, __func__, skb->data);
dump_rawmsg(DEBUG_MCMD, __func__, skb->data);
dev_kfree_skb_any(skb);
}

View File

@ -791,8 +791,6 @@ struct cardstate *gigaset_initcs(struct gigaset_driver *drv, int channels,
spin_unlock_irqrestore(&cs->lock, flags);
setup_timer(&cs->timer, timer_tick, (unsigned long) cs);
cs->timer.expires = jiffies + msecs_to_jiffies(GIG_TICK);
/* FIXME: can jiffies increase too much until the timer is added?
* Same problem(?) with mod_timer() in timer_tick(). */
add_timer(&cs->timer);
gig_dbg(DEBUG_INIT, "cs initialized");

View File

@ -35,53 +35,40 @@
#define RT_RING 2
#define RT_NUMBER 3
#define RT_STRING 4
#define RT_HEX 5
#define RT_ZCAU 6
/* Possible ASCII responses */
#define RSP_OK 0
#define RSP_BUSY 1
#define RSP_CONNECT 2
#define RSP_ERROR 1
#define RSP_ZGCI 3
#define RSP_RING 4
#define RSP_ZAOC 5
#define RSP_ZCSTR 6
#define RSP_ZCFGT 7
#define RSP_ZCFG 8
#define RSP_ZCCR 9
#define RSP_EMPTY 10
#define RSP_ZLOG 11
#define RSP_ZCAU 12
#define RSP_ZMWI 13
#define RSP_ZABINFO 14
#define RSP_ZSMLSTCHG 15
#define RSP_ZVLS 5
#define RSP_ZCAU 6
/* responses with values to store in at_state */
/* - numeric */
#define RSP_VAR 100
#define RSP_ZSAU (RSP_VAR + VAR_ZSAU)
#define RSP_ZDLE (RSP_VAR + VAR_ZDLE)
#define RSP_ZVLS (RSP_VAR + VAR_ZVLS)
#define RSP_ZCTP (RSP_VAR + VAR_ZCTP)
/* - string */
#define RSP_STR (RSP_VAR + VAR_NUM)
#define RSP_NMBR (RSP_STR + STR_NMBR)
#define RSP_ZCPN (RSP_STR + STR_ZCPN)
#define RSP_ZCON (RSP_STR + STR_ZCON)
#define RSP_ZBC (RSP_STR + STR_ZBC)
#define RSP_ZHLC (RSP_STR + STR_ZHLC)
#define RSP_ERROR -1 /* ERROR */
#define RSP_WRONG_CID -2 /* unknown cid in cmd */
#define RSP_UNKNOWN -4 /* unknown response */
#define RSP_FAIL -5 /* internal error */
#define RSP_INVAL -6 /* invalid response */
#define RSP_NODEV -9 /* device not connected */
#define RSP_NONE -19
#define RSP_STRING -20
#define RSP_NULL -21
#define RSP_RETRYFAIL -22
#define RSP_RETRY -23
#define RSP_SKIP -24
#define RSP_INIT -27
#define RSP_ANY -26
#define RSP_LAST -28
#define RSP_NODEV -9
/* actions for process_response */
#define ACT_NOTHING 0
@ -259,13 +246,6 @@ struct reply_t gigaset_tab_nocid[] =
/* misc. */
{RSP_ERROR, -1, -1, -1, -1, -1, {ACT_ERROR} },
{RSP_ZCFGT, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZCFG, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZLOG, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZMWI, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZABINFO, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZSMLSTCHG, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZCAU, -1, -1, -1, -1, -1, {ACT_ZCAU} },
{RSP_NONE, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ANY, -1, -1, -1, -1, -1, {ACT_WARN} },
@ -361,10 +341,6 @@ struct reply_t gigaset_tab_cid[] =
/* misc. */
{RSP_ZCON, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZCCR, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZAOC, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZCSTR, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ZCAU, -1, -1, -1, -1, -1, {ACT_ZCAU} },
{RSP_NONE, -1, -1, -1, -1, -1, {ACT_DEBUG} },
{RSP_ANY, -1, -1, -1, -1, -1, {ACT_WARN} },
@ -387,20 +363,11 @@ static const struct resp_type_t {
{"ZVLS", RSP_ZVLS, RT_NUMBER},
{"ZCTP", RSP_ZCTP, RT_NUMBER},
{"ZDLE", RSP_ZDLE, RT_NUMBER},
{"ZCFGT", RSP_ZCFGT, RT_NUMBER},
{"ZCCR", RSP_ZCCR, RT_NUMBER},
{"ZMWI", RSP_ZMWI, RT_NUMBER},
{"ZHLC", RSP_ZHLC, RT_STRING},
{"ZBC", RSP_ZBC, RT_STRING},
{"NMBR", RSP_NMBR, RT_STRING},
{"ZCPN", RSP_ZCPN, RT_STRING},
{"ZCON", RSP_ZCON, RT_STRING},
{"ZAOC", RSP_ZAOC, RT_STRING},
{"ZCSTR", RSP_ZCSTR, RT_STRING},
{"ZCFG", RSP_ZCFG, RT_HEX},
{"ZLOG", RSP_ZLOG, RT_NOTHING},
{"ZABINFO", RSP_ZABINFO, RT_NOTHING},
{"ZSMLSTCHG", RSP_ZSMLSTCHG, RT_NOTHING},
{NULL, 0, 0}
};
@ -418,64 +385,18 @@ static const struct zsau_resp_t {
{NULL, ZSAU_UNKNOWN}
};
/*
* Get integer from char-pointer
*/
static int isdn_getnum(char *p)
{
int v = -1;
gig_dbg(DEBUG_EVENT, "string: %s", p);
while (*p >= '0' && *p <= '9')
v = ((v < 0) ? 0 : (v * 10)) + (int) ((*p++) - '0');
if (*p)
v = -1; /* invalid Character */
return v;
}
/*
* Get integer from char-pointer
*/
static int isdn_gethex(char *p)
{
int v = 0;
int c;
gig_dbg(DEBUG_EVENT, "string: %s", p);
if (!*p)
return -1;
do {
if (v > (INT_MAX - 15) / 16)
return -1;
c = *p;
if (c >= '0' && c <= '9')
c -= '0';
else if (c >= 'a' && c <= 'f')
c -= 'a' - 10;
else if (c >= 'A' && c <= 'F')
c -= 'A' - 10;
else
return -1;
v = v * 16 + c;
} while (*++p);
return v;
}
/* retrieve CID from parsed response
* returns 0 if no CID, -1 if invalid CID, or CID value 1..65535
*/
static int cid_of_response(char *s)
{
int cid;
unsigned long cid;
int rc;
if (s[-1] != ';')
return 0; /* no CID separator */
cid = isdn_getnum(s);
if (cid < 0)
rc = strict_strtoul(s, 10, &cid);
if (rc)
return 0; /* CID not numeric */
if (cid < 1 || cid > 65535)
return -1; /* CID out of range */
@ -588,10 +509,10 @@ void gigaset_handle_modem_response(struct cardstate *cs)
break;
if (!rt->response) {
event->type = RSP_UNKNOWN;
dev_warn(cs->dev,
"unknown modem response: %s\n",
argv[curarg]);
event->type = RSP_NONE;
gig_dbg(DEBUG_EVENT,
"unknown modem response: '%s'\n",
argv[curarg]);
break;
}
@ -645,27 +566,27 @@ void gigaset_handle_modem_response(struct cardstate *cs)
case RT_ZCAU:
event->parameter = -1;
if (curarg + 1 < params) {
i = isdn_gethex(argv[curarg]);
j = isdn_gethex(argv[curarg + 1]);
if (i >= 0 && i < 256 && j >= 0 && j < 256)
event->parameter = (unsigned) i << 8
| j;
curarg += 2;
unsigned long type, value;
i = strict_strtoul(argv[curarg++], 16, &type);
j = strict_strtoul(argv[curarg++], 16, &value);
if (i == 0 && type < 256 &&
j == 0 && value < 256)
event->parameter = (type << 8) | value;
} else
curarg = params - 1;
break;
case RT_NUMBER:
case RT_HEX:
event->parameter = -1;
if (curarg < params) {
if (param_type == RT_HEX)
event->parameter =
isdn_gethex(argv[curarg]);
else
event->parameter =
isdn_getnum(argv[curarg]);
++curarg;
} else
event->parameter = -1;
unsigned long res;
int rc;
rc = strict_strtoul(argv[curarg++], 10, &res);
if (rc == 0)
event->parameter = res;
}
gig_dbg(DEBUG_EVENT, "parameter==%d", event->parameter);
break;
}
@ -797,48 +718,27 @@ static void schedule_init(struct cardstate *cs, int state)
static void send_command(struct cardstate *cs, const char *cmd, int cid,
int dle, gfp_t kmallocflags)
{
size_t cmdlen, buflen;
char *cmdpos, *cmdbuf, *cmdtail;
struct cmdbuf_t *cb;
size_t buflen;
cmdlen = strlen(cmd);
buflen = 11 + cmdlen;
if (unlikely(buflen <= cmdlen)) {
dev_err(cs->dev, "integer overflow in buflen\n");
buflen = strlen(cmd) + 12; /* DLE ( A T 1 2 3 4 5 <cmd> DLE ) \0 */
cb = kmalloc(sizeof(struct cmdbuf_t) + buflen, kmallocflags);
if (!cb) {
dev_err(cs->dev, "%s: out of memory\n", __func__);
return;
}
cmdbuf = kmalloc(buflen, kmallocflags);
if (unlikely(!cmdbuf)) {
dev_err(cs->dev, "out of memory\n");
return;
}
cmdpos = cmdbuf + 9;
cmdtail = cmdpos + cmdlen;
memcpy(cmdpos, cmd, cmdlen);
if (cid > 0 && cid <= 65535) {
do {
*--cmdpos = '0' + cid % 10;
cid /= 10;
++cmdlen;
} while (cid);
}
cmdlen += 2;
*--cmdpos = 'T';
*--cmdpos = 'A';
if (dle) {
cmdlen += 4;
*--cmdpos = '(';
*--cmdpos = 0x10;
*cmdtail++ = 0x10;
*cmdtail++ = ')';
}
cs->ops->write_cmd(cs, cmdpos, cmdlen, NULL);
kfree(cmdbuf);
if (cid > 0 && cid <= 65535)
cb->len = snprintf(cb->buf, buflen,
dle ? "\020(AT%d%s\020)" : "AT%d%s",
cid, cmd);
else
cb->len = snprintf(cb->buf, buflen,
dle ? "\020(AT%s\020)" : "AT%s",
cmd);
cb->offset = 0;
cb->next = NULL;
cb->wake_tasklet = NULL;
cs->ops->write_cmd(cs, cb);
}
static struct at_state_t *at_state_from_cid(struct cardstate *cs, int cid)
@ -1240,8 +1140,22 @@ static void do_action(int action, struct cardstate *cs,
break;
case ACT_HUPMODEM:
/* send "+++" (hangup in unimodem mode) */
if (cs->connected)
cs->ops->write_cmd(cs, "+++", 3, NULL);
if (cs->connected) {
struct cmdbuf_t *cb;
cb = kmalloc(sizeof(struct cmdbuf_t) + 3, GFP_ATOMIC);
if (!cb) {
dev_err(cs->dev, "%s: out of memory\n",
__func__);
return;
}
memcpy(cb->buf, "+++", 3);
cb->len = 3;
cb->offset = 0;
cb->next = NULL;
cb->wake_tasklet = NULL;
cs->ops->write_cmd(cs, cb);
}
break;
case ACT_RING:
/* get fresh AT state structure for new CID */
@ -1855,19 +1769,13 @@ static void process_command_flags(struct cardstate *cs)
gig_dbg(DEBUG_EVENT, "Scheduling PC_CIDMODE");
cs->commands_pending = 1;
return;
#ifdef GIG_MAYINITONDIAL
case M_UNKNOWN:
schedule_init(cs, MS_INIT);
return;
#endif
}
bcs->at_state.pending_commands &= ~PC_CID;
cs->curchannel = bcs->channel;
#ifdef GIG_RETRYCID
cs->retry_count = 2;
#else
cs->retry_count = 1;
#endif
schedule_sequence(cs, &cs->at_state, SEQ_CID);
return;
}
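
The parsing changes above replace the driver-private isdn_getnum()/isdn_gethex() helpers with strict_strtoul(), which returns 0 only if the whole string is a valid number in the given base (base 0 auto-detects a 0x prefix, as the hysdn hunk later in this merge relies on). A sketch of the resulting idiom, loosely modelled on cid_of_response() rather than copied from the patch:

#include <linux/kernel.h>

/* parse a decimal call ID in the range 1..65535; returns -1 on error */
static int parse_cid(const char *s)
{
	unsigned long cid;

	if (strict_strtoul(s, 10, &cid))	/* non-zero: malformed number */
		return -1;
	if (cid < 1 || cid > 65535)
		return -1;
	return cid;
}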

View File

@ -46,13 +46,6 @@
#define RBUFSIZE 8192
/* compile time options */
#define GIG_MAJOR 0
#define GIG_MAYINITONDIAL
#define GIG_RETRYCID
#define GIG_X75
#define GIG_TICK 100 /* in milliseconds */
/* timeout values (unit: 1 sec) */
@ -193,9 +186,8 @@ void gigaset_dbg_buffer(enum debuglevel level, const unsigned char *msg,
/* variables in struct at_state_t */
#define VAR_ZSAU 0
#define VAR_ZDLE 1
#define VAR_ZVLS 2
#define VAR_ZCTP 3
#define VAR_NUM 4
#define VAR_ZCTP 2
#define VAR_NUM 3
#define STR_NMBR 0
#define STR_ZCPN 1
@ -574,9 +566,7 @@ struct bas_bc_state {
struct gigaset_ops {
/* Called from ev-layer.c/interface.c for sending AT commands to the
device */
int (*write_cmd)(struct cardstate *cs,
const unsigned char *buf, int len,
struct tasklet_struct *wake_tasklet);
int (*write_cmd)(struct cardstate *cs, struct cmdbuf_t *cb);
/* Called from interface.c for additional device control */
int (*write_room)(struct cardstate *cs);

View File

@ -419,6 +419,8 @@ oom:
dev_err(bcs->cs->dev, "out of memory\n");
for (i = 0; i < AT_NUM; ++i)
kfree(commands[i]);
kfree(commands);
gigaset_free_channel(bcs);
return -ENOMEM;
}
@ -643,9 +645,7 @@ int gigaset_isdn_regdev(struct cardstate *cs, const char *isdnid)
iif->maxbufsize = MAX_BUF_SIZE;
iif->features = ISDN_FEATURE_L2_TRANS |
ISDN_FEATURE_L2_HDLC |
#ifdef GIG_X75
ISDN_FEATURE_L2_X75I |
#endif
ISDN_FEATURE_L3_TRANS |
ISDN_FEATURE_P_EURO;
iif->hl_hdrlen = HW_HDR_LEN; /* Area for storing ack */

View File

@ -339,7 +339,8 @@ static int if_tiocmset(struct tty_struct *tty, struct file *file,
static int if_write(struct tty_struct *tty, const unsigned char *buf, int count)
{
struct cardstate *cs;
int retval = -ENODEV;
struct cmdbuf_t *cb;
int retval;
cs = (struct cardstate *) tty->driver_data;
if (!cs) {
@ -355,18 +356,39 @@ static int if_write(struct tty_struct *tty, const unsigned char *buf, int count)
if (!cs->connected) {
gig_dbg(DEBUG_IF, "not connected");
retval = -ENODEV;
} else if (!cs->open_count)
goto done;
}
if (!cs->open_count) {
dev_warn(cs->dev, "%s: device not opened\n", __func__);
else if (cs->mstate != MS_LOCKED) {
retval = -ENODEV;
goto done;
}
if (cs->mstate != MS_LOCKED) {
dev_warn(cs->dev, "can't write to unlocked device\n");
retval = -EBUSY;
} else {
retval = cs->ops->write_cmd(cs, buf, count,
&cs->if_wake_tasklet);
goto done;
}
if (count <= 0) {
/* nothing to do */
retval = 0;
goto done;
}
mutex_unlock(&cs->mutex);
cb = kmalloc(sizeof(struct cmdbuf_t) + count, GFP_KERNEL);
if (!cb) {
dev_err(cs->dev, "%s: out of memory\n", __func__);
retval = -ENOMEM;
goto done;
}
memcpy(cb->buf, buf, count);
cb->len = count;
cb->offset = 0;
cb->next = NULL;
cb->wake_tasklet = &cs->if_wake_tasklet;
retval = cs->ops->write_cmd(cs, cb);
done:
mutex_unlock(&cs->mutex);
return retval;
}
@ -655,7 +677,6 @@ void gigaset_if_initdriver(struct gigaset_driver *drv, const char *procname,
goto enomem;
tty->magic = TTY_DRIVER_MAGIC,
tty->major = GIG_MAJOR,
tty->type = TTY_DRIVER_TYPE_SERIAL,
tty->subtype = SERIAL_TYPE_NORMAL,
tty->flags = TTY_DRIVER_REAL_RAW | TTY_DRIVER_DYNAMIC_DEV;

View File

@ -241,30 +241,13 @@ static void flush_send_queue(struct cardstate *cs)
* return value:
* number of bytes queued, or error code < 0
*/
static int gigaset_write_cmd(struct cardstate *cs, const unsigned char *buf,
int len, struct tasklet_struct *wake_tasklet)
static int gigaset_write_cmd(struct cardstate *cs, struct cmdbuf_t *cb)
{
struct cmdbuf_t *cb;
unsigned long flags;
gigaset_dbg_buffer(cs->mstate != MS_LOCKED ?
DEBUG_TRANSCMD : DEBUG_LOCKCMD,
"CMD Transmit", len, buf);
if (len <= 0)
return 0;
cb = kmalloc(sizeof(struct cmdbuf_t) + len, GFP_ATOMIC);
if (!cb) {
dev_err(cs->dev, "%s: out of memory!\n", __func__);
return -ENOMEM;
}
memcpy(cb->buf, buf, len);
cb->len = len;
cb->offset = 0;
cb->next = NULL;
cb->wake_tasklet = wake_tasklet;
"CMD Transmit", cb->len, cb->buf);
spin_lock_irqsave(&cs->cmdlock, flags);
cb->prev = cs->lastcmdbuf;
@ -272,9 +255,9 @@ static int gigaset_write_cmd(struct cardstate *cs, const unsigned char *buf,
cs->lastcmdbuf->next = cb;
else {
cs->cmdbuf = cb;
cs->curlen = len;
cs->curlen = cb->len;
}
cs->cmdbytes += len;
cs->cmdbytes += cb->len;
cs->lastcmdbuf = cb;
spin_unlock_irqrestore(&cs->cmdlock, flags);
@ -282,7 +265,7 @@ static int gigaset_write_cmd(struct cardstate *cs, const unsigned char *buf,
if (cs->connected)
tasklet_schedule(&cs->write_tasklet);
spin_unlock_irqrestore(&cs->lock, flags);
return len;
return cb->len;
}
/*

View File

@ -39,7 +39,8 @@ MODULE_PARM_DESC(cidmode, "Call-ID mode");
#define GIGASET_MODULENAME "usb_gigaset"
#define GIGASET_DEVNAME "ttyGU"
#define IF_WRITEBUF 2000 /* arbitrary limit */
/* length limit according to Siemens 3070usb-protokoll.doc ch. 2.1 */
#define IF_WRITEBUF 264
/* Values for the Gigaset M105 Data */
#define USB_M105_VENDOR_ID 0x0681
@ -493,29 +494,13 @@ static int send_cb(struct cardstate *cs, struct cmdbuf_t *cb)
}
/* Send command to device. */
static int gigaset_write_cmd(struct cardstate *cs, const unsigned char *buf,
int len, struct tasklet_struct *wake_tasklet)
static int gigaset_write_cmd(struct cardstate *cs, struct cmdbuf_t *cb)
{
struct cmdbuf_t *cb;
unsigned long flags;
gigaset_dbg_buffer(cs->mstate != MS_LOCKED ?
DEBUG_TRANSCMD : DEBUG_LOCKCMD,
"CMD Transmit", len, buf);
if (len <= 0)
return 0;
cb = kmalloc(sizeof(struct cmdbuf_t) + len, GFP_ATOMIC);
if (!cb) {
dev_err(cs->dev, "%s: out of memory\n", __func__);
return -ENOMEM;
}
memcpy(cb->buf, buf, len);
cb->len = len;
cb->offset = 0;
cb->next = NULL;
cb->wake_tasklet = wake_tasklet;
"CMD Transmit", cb->len, cb->buf);
spin_lock_irqsave(&cs->cmdlock, flags);
cb->prev = cs->lastcmdbuf;
@ -523,9 +508,9 @@ static int gigaset_write_cmd(struct cardstate *cs, const unsigned char *buf,
cs->lastcmdbuf->next = cb;
else {
cs->cmdbuf = cb;
cs->curlen = len;
cs->curlen = cb->len;
}
cs->cmdbytes += len;
cs->cmdbytes += cb->len;
cs->lastcmdbuf = cb;
spin_unlock_irqrestore(&cs->cmdlock, flags);
@ -533,7 +518,7 @@ static int gigaset_write_cmd(struct cardstate *cs, const unsigned char *buf,
if (cs->connected)
tasklet_schedule(&cs->write_tasklet);
spin_unlock_irqrestore(&cs->lock, flags);
return len;
return cb->len;
}
static int gigaset_write_room(struct cardstate *cs)

View File

@ -14,7 +14,7 @@
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/poll.h>
#include <linux/smp_lock.h>
#include <linux/mutex.h>
#include <asm/uaccess.h>
#include "platform.h"
@ -22,6 +22,7 @@
#include "divasync.h"
#include "debug_if.h"
static DEFINE_MUTEX(maint_mutex);
static char *main_revision = "$Revision: 1.32.6.10 $";
static int major;
@ -130,7 +131,7 @@ static int maint_open(struct inode *ino, struct file *filep)
{
int ret;
lock_kernel();
mutex_lock(&maint_mutex);
/* only one open is allowed, so we test
it atomically */
if (test_and_set_bit(0, &opened))
@ -139,7 +140,7 @@ static int maint_open(struct inode *ino, struct file *filep)
filep->private_data = NULL;
ret = nonseekable_open(ino, filep);
}
unlock_kernel();
mutex_unlock(&maint_mutex);
return ret;
}

View File

@ -18,7 +18,6 @@
#include <linux/proc_fs.h>
#include <linux/skbuff.h>
#include <linux/seq_file.h>
#include <linux/smp_lock.h>
#include <asm/uaccess.h>
#include "platform.h"
@ -402,7 +401,6 @@ static unsigned int um_idi_poll(struct file *file, poll_table * wait)
static int um_idi_open(struct inode *inode, struct file *file)
{
cycle_kernel_lock();
return (0);
}

View File

@ -21,7 +21,6 @@
#include <linux/list.h>
#include <linux/poll.h>
#include <linux/kmod.h>
#include <linux/smp_lock.h>
#include "platform.h"
#undef ID_MASK
@ -113,41 +112,40 @@ typedef struct _diva_os_thread_dpc {
This table should be sorted by PCI device ID
*/
static struct pci_device_id divas_pci_tbl[] = {
/* Diva Server BRI-2M PCI 0xE010 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_MAESTRA,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_MAESTRA_PCI},
/* Diva Server 4BRI-8M PCI 0xE012 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_MAESTRAQ,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_Q_8M_PCI},
/* Diva Server 4BRI-8M 2.0 PCI 0xE013 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_MAESTRAQ_U,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_Q_8M_V2_PCI},
/* Diva Server PRI-30M PCI 0xE014 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_MAESTRAP,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_P_30M_PCI},
/* Diva Server PRI 2.0 adapter 0xE015 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_MAESTRAP_2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_P_30M_V2_PCI},
/* Diva Server Voice 4BRI-8M PCI 0xE016 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_4BRI_VOIP,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_VOICE_Q_8M_PCI},
/* Diva Server Voice 4BRI-8M 2.0 PCI 0xE017 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_4BRI_2_VOIP,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI},
/* Diva Server BRI-2M 2.0 PCI 0xE018 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_BRI2M_2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_B_2M_V2_PCI},
/* Diva Server Voice PRI 2.0 PCI 0xE019 */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_MAESTRAP_2_VOIP,
PCI_ANY_ID, PCI_ANY_ID, 0, 0,
CARDTYPE_DIVASRV_VOICE_P_30M_V2_PCI},
/* Diva Server 2FX 0xE01A */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_2F,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_B_2F_PCI},
/* Diva Server Voice BRI-2M 2.0 PCI 0xE01B */
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_BRI2M_2_VOIP,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI},
{0,} /* 0 terminated list. */
/* Diva Server BRI-2M PCI 0xE010 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRA),
CARDTYPE_MAESTRA_PCI },
/* Diva Server 4BRI-8M PCI 0xE012 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAQ),
CARDTYPE_DIVASRV_Q_8M_PCI },
/* Diva Server 4BRI-8M 2.0 PCI 0xE013 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAQ_U),
CARDTYPE_DIVASRV_Q_8M_V2_PCI },
/* Diva Server PRI-30M PCI 0xE014 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAP),
CARDTYPE_DIVASRV_P_30M_PCI },
/* Diva Server PRI 2.0 adapter 0xE015 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAP_2),
CARDTYPE_DIVASRV_P_30M_V2_PCI },
/* Diva Server Voice 4BRI-8M PCI 0xE016 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_4BRI_VOIP),
CARDTYPE_DIVASRV_VOICE_Q_8M_PCI },
/* Diva Server Voice 4BRI-8M 2.0 PCI 0xE017 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_4BRI_2_VOIP),
CARDTYPE_DIVASRV_VOICE_Q_8M_V2_PCI },
/* Diva Server BRI-2M 2.0 PCI 0xE018 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_BRI2M_2),
CARDTYPE_DIVASRV_B_2M_V2_PCI },
/* Diva Server Voice PRI 2.0 PCI 0xE019 */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRAP_2_VOIP),
CARDTYPE_DIVASRV_VOICE_P_30M_V2_PCI },
/* Diva Server 2FX 0xE01A */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_2F),
CARDTYPE_DIVASRV_B_2F_PCI },
/* Diva Server Voice BRI-2M 2.0 PCI 0xE01B */
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_BRI2M_2_VOIP),
CARDTYPE_DIVASRV_VOICE_B_2M_V2_PCI },
{ 0, } /* 0 terminated list. */
};
MODULE_DEVICE_TABLE(pci, divas_pci_tbl);
@ -581,7 +579,6 @@ xdi_copy_from_user(void *os_handle, void *dst, const void __user *src, int lengt
*/
static int divas_open(struct inode *inode, struct file *file)
{
cycle_kernel_lock();
return (0);
}

View File

@ -5354,12 +5354,9 @@ static struct pci_device_id hfmultipci_ids[] __devinitdata = {
{ PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_HFCE1, PCI_VENDOR_ID_CCD,
PCI_SUBDEVICE_ID_CCD_JHSE1, 0, 0, H(25)}, /* Junghanns E1 */
{ PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_HFC4S, PCI_ANY_ID, PCI_ANY_ID,
0, 0, 0},
{ PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_HFC8S, PCI_ANY_ID, PCI_ANY_ID,
0, 0, 0},
{ PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_HFCE1, PCI_ANY_ID, PCI_ANY_ID,
0, 0, 0},
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_HFC4S), 0 },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_HFC8S), 0 },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_HFCE1), 0 },
{0, }
};
#undef H

View File

@ -2188,52 +2188,52 @@ static const struct _hfc_map hfc_map[] =
static struct pci_device_id hfc_ids[] =
{
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_2BD0,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[0]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B000,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[1]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B006,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[2]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B007,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[3]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B008,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[4]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B009,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[5]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00A,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[6]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00B,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[7]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00C,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[8]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B100,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[9]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B700,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[10]},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B701,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[11]},
{PCI_VENDOR_ID_ABOCOM, PCI_DEVICE_ID_ABOCOM_2BD1,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[12]},
{PCI_VENDOR_ID_ASUSTEK, PCI_DEVICE_ID_ASUSTEK_0675,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[13]},
{PCI_VENDOR_ID_BERKOM, PCI_DEVICE_ID_BERKOM_T_CONCEPT,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[14]},
{PCI_VENDOR_ID_BERKOM, PCI_DEVICE_ID_BERKOM_A1T,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[15]},
{PCI_VENDOR_ID_ANIGMA, PCI_DEVICE_ID_ANIGMA_MC145575,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[16]},
{PCI_VENDOR_ID_ZOLTRIX, PCI_DEVICE_ID_ZOLTRIX_2BD0,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[17]},
{PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_E,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[18]},
{PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_E,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[19]},
{PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_A,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[20]},
{PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_A,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[21]},
{PCI_VENDOR_ID_SITECOM, PCI_DEVICE_ID_SITECOM_DC105V2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &hfc_map[22]},
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_2BD0),
(unsigned long) &hfc_map[0] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B000),
(unsigned long) &hfc_map[1] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B006),
(unsigned long) &hfc_map[2] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B007),
(unsigned long) &hfc_map[3] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B008),
(unsigned long) &hfc_map[4] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B009),
(unsigned long) &hfc_map[5] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00A),
(unsigned long) &hfc_map[6] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00B),
(unsigned long) &hfc_map[7] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00C),
(unsigned long) &hfc_map[8] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B100),
(unsigned long) &hfc_map[9] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B700),
(unsigned long) &hfc_map[10] },
{ PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B701),
(unsigned long) &hfc_map[11] },
{ PCI_VDEVICE(ABOCOM, PCI_DEVICE_ID_ABOCOM_2BD1),
(unsigned long) &hfc_map[12] },
{ PCI_VDEVICE(ASUSTEK, PCI_DEVICE_ID_ASUSTEK_0675),
(unsigned long) &hfc_map[13] },
{ PCI_VDEVICE(BERKOM, PCI_DEVICE_ID_BERKOM_T_CONCEPT),
(unsigned long) &hfc_map[14] },
{ PCI_VDEVICE(BERKOM, PCI_DEVICE_ID_BERKOM_A1T),
(unsigned long) &hfc_map[15] },
{ PCI_VDEVICE(ANIGMA, PCI_DEVICE_ID_ANIGMA_MC145575),
(unsigned long) &hfc_map[16] },
{ PCI_VDEVICE(ZOLTRIX, PCI_DEVICE_ID_ZOLTRIX_2BD0),
(unsigned long) &hfc_map[17] },
{ PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_E),
(unsigned long) &hfc_map[18] },
{ PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_E),
(unsigned long) &hfc_map[19] },
{ PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_A),
(unsigned long) &hfc_map[20] },
{ PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_A),
(unsigned long) &hfc_map[21] },
{ PCI_VDEVICE(SITECOM, PCI_DEVICE_ID_SITECOM_DC105V2),
(unsigned long) &hfc_map[22] },
{},
};

View File

@ -125,36 +125,25 @@ struct inf_hw {
#define PCI_SUB_ID_SEDLBAUER 0x01
static struct pci_device_id infineon_ids[] __devinitdata = {
{ PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_DIVA20,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_DIVA20},
{ PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_DIVA20_U,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_DIVA20U},
{ PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_DIVA201,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_DIVA201},
{ PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_DIVA202,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_DIVA202},
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA20), INF_DIVA20 },
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA20_U), INF_DIVA20U },
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA201), INF_DIVA201 },
{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA202), INF_DIVA202 },
{ PCI_VENDOR_ID_TIGERJET, PCI_DEVICE_ID_TIGERJET_100,
PCI_SUBVENDOR_SEDLBAUER_PCI, PCI_SUB_ID_SEDLBAUER, 0, 0,
INF_SPEEDWIN},
INF_SPEEDWIN },
{ PCI_VENDOR_ID_TIGERJET, PCI_DEVICE_ID_TIGERJET_100,
PCI_SUBVENDOR_HST_SAPHIR3, PCI_SUB_ID_SEDLBAUER, 0, 0, INF_SAPHIR3},
{ PCI_VENDOR_ID_ELSA, PCI_DEVICE_ID_ELSA_MICROLINK,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_QS1000},
{ PCI_VENDOR_ID_ELSA, PCI_DEVICE_ID_ELSA_QS3000,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_QS3000},
{ PCI_VENDOR_ID_SATSAGEM, PCI_DEVICE_ID_SATSAGEM_NICCY,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_NICCY},
PCI_SUBVENDOR_HST_SAPHIR3, PCI_SUB_ID_SEDLBAUER, 0, 0, INF_SAPHIR3 },
{ PCI_VDEVICE(ELSA, PCI_DEVICE_ID_ELSA_MICROLINK), INF_QS1000 },
{ PCI_VDEVICE(ELSA, PCI_DEVICE_ID_ELSA_QS3000), INF_QS3000 },
{ PCI_VDEVICE(SATSAGEM, PCI_DEVICE_ID_SATSAGEM_NICCY), INF_NICCY },
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050,
PCI_VENDOR_ID_BERKOM, PCI_DEVICE_ID_BERKOM_SCITEL_QUADRO, 0, 0,
INF_SCT_1},
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_R685,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_GAZEL_R685},
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_R753,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_GAZEL_R753},
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_DJINN_ITOO,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_GAZEL_R753},
{ PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_OLITEC,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, INF_GAZEL_R753},
INF_SCT_1 },
{ PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_R685), INF_GAZEL_R685 },
{ PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_R753), INF_GAZEL_R753 },
{ PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_DJINN_ITOO), INF_GAZEL_R753 },
{ PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_OLITEC), INF_GAZEL_R753 },
{ }
};
MODULE_DEVICE_TABLE(pci, infineon_ids);

View File

@ -1909,68 +1909,68 @@ static void EChannel_proc_rcv(struct hisax_d_if *d_if)
static struct pci_device_id hisax_pci_tbl[] __devinitdata = {
#ifdef CONFIG_HISAX_FRITZPCI
{PCI_VENDOR_ID_AVM, PCI_DEVICE_ID_AVM_A1, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VDEVICE(AVM, PCI_DEVICE_ID_AVM_A1) },
#endif
#ifdef CONFIG_HISAX_DIEHLDIVA
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_DIVA20, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_DIVA20_U, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_DIVA201, PCI_ANY_ID, PCI_ANY_ID},
//#########################################################################################
{PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_DIVA202, PCI_ANY_ID, PCI_ANY_ID},
//#########################################################################################
{PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA20) },
{PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA20_U) },
{PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA201) },
/*##########################################################################*/
{PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_DIVA202) },
/*##########################################################################*/
#endif
#ifdef CONFIG_HISAX_ELSA
{PCI_VENDOR_ID_ELSA, PCI_DEVICE_ID_ELSA_MICROLINK, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_ELSA, PCI_DEVICE_ID_ELSA_QS3000, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VDEVICE(ELSA, PCI_DEVICE_ID_ELSA_MICROLINK) },
{PCI_VDEVICE(ELSA, PCI_DEVICE_ID_ELSA_QS3000) },
#endif
#ifdef CONFIG_HISAX_GAZEL
{PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_R685, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_R753, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_DJINN_ITOO, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_OLITEC, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_R685) },
{PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_R753) },
{PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_DJINN_ITOO) },
{PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_OLITEC) },
#endif
#ifdef CONFIG_HISAX_SCT_QUADRO
{PCI_VENDOR_ID_PLX, PCI_DEVICE_ID_PLX_9050, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VDEVICE(PLX, PCI_DEVICE_ID_PLX_9050) },
#endif
#ifdef CONFIG_HISAX_NICCY
{PCI_VENDOR_ID_SATSAGEM, PCI_DEVICE_ID_SATSAGEM_NICCY, PCI_ANY_ID,PCI_ANY_ID},
{PCI_VDEVICE(SATSAGEM, PCI_DEVICE_ID_SATSAGEM_NICCY) },
#endif
#ifdef CONFIG_HISAX_SEDLBAUER
{PCI_VENDOR_ID_TIGERJET, PCI_DEVICE_ID_TIGERJET_100, PCI_ANY_ID,PCI_ANY_ID},
{PCI_VDEVICE(TIGERJET, PCI_DEVICE_ID_TIGERJET_100) },
#endif
#if defined(CONFIG_HISAX_NETJET) || defined(CONFIG_HISAX_NETJET_U)
{PCI_VENDOR_ID_TIGERJET, PCI_DEVICE_ID_TIGERJET_300, PCI_ANY_ID,PCI_ANY_ID},
{PCI_VDEVICE(TIGERJET, PCI_DEVICE_ID_TIGERJET_300) },
#endif
#if defined(CONFIG_HISAX_TELESPCI) || defined(CONFIG_HISAX_SCT_QUADRO)
{PCI_VENDOR_ID_ZORAN, PCI_DEVICE_ID_ZORAN_36120, PCI_ANY_ID,PCI_ANY_ID},
{PCI_VDEVICE(ZORAN, PCI_DEVICE_ID_ZORAN_36120) },
#endif
#ifdef CONFIG_HISAX_W6692
{PCI_VENDOR_ID_DYNALINK, PCI_DEVICE_ID_DYNALINK_IS64PH, PCI_ANY_ID,PCI_ANY_ID},
{PCI_VENDOR_ID_WINBOND2, PCI_DEVICE_ID_WINBOND2_6692, PCI_ANY_ID,PCI_ANY_ID},
{PCI_VDEVICE(DYNALINK, PCI_DEVICE_ID_DYNALINK_IS64PH) },
{PCI_VDEVICE(WINBOND2, PCI_DEVICE_ID_WINBOND2_6692) },
#endif
#ifdef CONFIG_HISAX_HFC_PCI
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_2BD0, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B000, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B006, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B007, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B008, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B009, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00A, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00B, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B00C, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B100, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B700, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_CCD, PCI_DEVICE_ID_CCD_B701, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_ABOCOM, PCI_DEVICE_ID_ABOCOM_2BD1, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_ASUSTEK, PCI_DEVICE_ID_ASUSTEK_0675, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_BERKOM, PCI_DEVICE_ID_BERKOM_T_CONCEPT, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_BERKOM, PCI_DEVICE_ID_BERKOM_A1T, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_ANIGMA, PCI_DEVICE_ID_ANIGMA_MC145575, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_ZOLTRIX, PCI_DEVICE_ID_ZOLTRIX_2BD0, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_E, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_E, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_A, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VENDOR_ID_DIGI, PCI_DEVICE_ID_DIGI_DF_M_A, PCI_ANY_ID, PCI_ANY_ID},
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_2BD0) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B000) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B006) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B007) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B008) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B009) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00A) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00B) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B00C) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B100) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B700) },
{PCI_VDEVICE(CCD, PCI_DEVICE_ID_CCD_B701) },
{PCI_VDEVICE(ABOCOM, PCI_DEVICE_ID_ABOCOM_2BD1) },
{PCI_VDEVICE(ASUSTEK, PCI_DEVICE_ID_ASUSTEK_0675) },
{PCI_VDEVICE(BERKOM, PCI_DEVICE_ID_BERKOM_T_CONCEPT) },
{PCI_VDEVICE(BERKOM, PCI_DEVICE_ID_BERKOM_A1T) },
{PCI_VDEVICE(ANIGMA, PCI_DEVICE_ID_ANIGMA_MC145575) },
{PCI_VDEVICE(ZOLTRIX, PCI_DEVICE_ID_ZOLTRIX_2BD0) },
{PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_E) },
{PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_E) },
{PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_IOM2_A) },
{PCI_VDEVICE(DIGI, PCI_DEVICE_ID_DIGI_DF_M_A) },
#endif
{ } /* Terminating entry */
};
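
The ID-table rewrites above (divas, hfcmulti, hfc_pci, mISDNinfineon, hisax) are mechanical: PCI_VDEVICE(VENDOR, dev) from <linux/pci.h> expands to the vendor/device pair plus PCI_ANY_ID subvendor/subdevice and zeroed class fields, leaving only driver_data to spell out. The two entry styles below are therefore equivalent (the table name is illustrative; the driver_data value is taken from the divas table above):

#include <linux/pci.h>

static struct pci_device_id example_ids[] = {
	/* long form */
	{ PCI_VENDOR_ID_EICON, PCI_DEVICE_ID_EICON_MAESTRA,
	  PCI_ANY_ID, PCI_ANY_ID, 0, 0, CARDTYPE_MAESTRA_PCI },
	/* same entry via PCI_VDEVICE() */
	{ PCI_VDEVICE(EICON, PCI_DEVICE_ID_EICON_MAESTRA),
	  CARDTYPE_MAESTRA_PCI },
	{ 0, }
};
MODULE_DEVICE_TABLE(pci, example_ids);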

View File

@ -1152,20 +1152,11 @@ QuickHex(char *txt, u_char * p, int cnt)
{
register int i;
register char *t = txt;
register u_char w;
for (i = 0; i < cnt; i++) {
*t++ = ' ';
w = (p[i] >> 4) & 0x0f;
if (w < 10)
*t++ = '0' + w;
else
*t++ = 'A' - 10 + w;
w = p[i] & 0x0f;
if (w < 10)
*t++ = '0' + w;
else
*t++ = 'A' - 10 + w;
*t++ = hex_asc_hi(p[i]);
*t++ = hex_asc_lo(p[i]);
}
*t++ = 0;
return (t - txt);

View File

@ -17,11 +17,12 @@
#include <linux/proc_fs.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/mutex.h>
#include <net/net_namespace.h>
#include "hysdn_defs.h"
static DEFINE_MUTEX(hysdn_conf_mutex);
static char *hysdn_procconf_revision = "$Revision: 1.8.6.4 $";
#define INFO_OUT_LEN 80 /* length of info line including lf */
@ -234,7 +235,7 @@ hysdn_conf_open(struct inode *ino, struct file *filep)
char *cp, *tmp;
/* now search the addressed card */
lock_kernel();
mutex_lock(&hysdn_conf_mutex);
card = card_root;
while (card) {
pd = card->procconf;
@ -243,7 +244,7 @@ hysdn_conf_open(struct inode *ino, struct file *filep)
card = card->next; /* search next entry */
}
if (!card) {
unlock_kernel();
mutex_unlock(&hysdn_conf_mutex);
return (-ENODEV); /* device is unknown/invalid */
}
if (card->debug_flags & (LOG_PROC_OPEN | LOG_PROC_ALL))
@ -255,7 +256,7 @@ hysdn_conf_open(struct inode *ino, struct file *filep)
/* write only access -> write boot file or conf line */
if (!(cnf = kmalloc(sizeof(struct conf_writedata), GFP_KERNEL))) {
unlock_kernel();
mutex_unlock(&hysdn_conf_mutex);
return (-EFAULT);
}
cnf->card = card;
@ -267,7 +268,7 @@ hysdn_conf_open(struct inode *ino, struct file *filep)
/* read access -> output card info data */
if (!(tmp = kmalloc(INFO_OUT_LEN * 2 + 2, GFP_KERNEL))) {
unlock_kernel();
mutex_unlock(&hysdn_conf_mutex);
return (-EFAULT); /* out of memory */
}
filep->private_data = tmp; /* start of string */
@ -301,10 +302,10 @@ hysdn_conf_open(struct inode *ino, struct file *filep)
*cp++ = '\n';
*cp = 0; /* end of string */
} else { /* simultaneous read/write access forbidden ! */
unlock_kernel();
mutex_unlock(&hysdn_conf_mutex);
return (-EPERM); /* no permission this time */
}
unlock_kernel();
mutex_unlock(&hysdn_conf_mutex);
return nonseekable_open(ino, filep);
} /* hysdn_conf_open */
@ -319,7 +320,7 @@ hysdn_conf_close(struct inode *ino, struct file *filep)
int retval = 0;
struct proc_dir_entry *pd;
lock_kernel();
mutex_lock(&hysdn_conf_mutex);
/* search the addressed card */
card = card_root;
while (card) {
@ -329,7 +330,7 @@ hysdn_conf_close(struct inode *ino, struct file *filep)
card = card->next; /* search next entry */
}
if (!card) {
unlock_kernel();
mutex_unlock(&hysdn_conf_mutex);
return (-ENODEV); /* device is unknown/invalid */
}
if (card->debug_flags & (LOG_PROC_OPEN | LOG_PROC_ALL))
@ -352,7 +353,7 @@ hysdn_conf_close(struct inode *ino, struct file *filep)
kfree(filep->private_data); /* release memory */
}
unlock_kernel();
mutex_unlock(&hysdn_conf_mutex);
return (retval);
} /* hysdn_conf_close */

View File

@ -15,13 +15,15 @@
#include <linux/proc_fs.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/smp_lock.h>
#include <linux/mutex.h>
#include <linux/kernel.h>
#include "hysdn_defs.h"
/* the proc subdir for the interface is defined in the procconf module */
extern struct proc_dir_entry *hysdn_proc_entry;
static DEFINE_MUTEX(hysdn_log_mutex);
static void put_log_buffer(hysdn_card * card, char *cp);
/*************************************************/
@ -154,10 +156,9 @@ static ssize_t
hysdn_log_write(struct file *file, const char __user *buf, size_t count, loff_t * off)
{
unsigned long u = 0;
int found = 0;
unsigned char *cp, valbuf[128];
long base = 10;
hysdn_card *card = (hysdn_card *) file->private_data;
int rc;
unsigned char valbuf[128];
hysdn_card *card = file->private_data;
if (count > (sizeof(valbuf) - 1))
count = sizeof(valbuf) - 1; /* limit length */
@ -165,32 +166,10 @@ hysdn_log_write(struct file *file, const char __user *buf, size_t count, loff_t
return (-EFAULT); /* copy failed */
valbuf[count] = 0; /* terminating 0 */
cp = valbuf;
if ((count > 2) && (valbuf[0] == '0') && (valbuf[1] == 'x')) {
cp += 2; /* pointer after hex modifier */
base = 16;
}
/* scan the input for debug flags */
while (*cp) {
if ((*cp >= '0') && (*cp <= '9')) {
found = 1;
u *= base; /* adjust to next digit */
u += *cp++ - '0';
continue;
}
if (base != 16)
break; /* end of number */
if ((*cp >= 'a') && (*cp <= 'f')) {
found = 1;
u *= base; /* adjust to next digit */
u += *cp++ - 'a' + 10;
continue;
}
break; /* terminated */
}
rc = strict_strtoul(valbuf, 0, &u);
if (found) {
if (rc == 0) {
card->debug_flags = u; /* remember debug flags */
hysdn_addlog(card, "debug set to 0x%lx", card->debug_flags);
}
@ -251,7 +230,7 @@ hysdn_log_open(struct inode *ino, struct file *filep)
struct procdata *pd = NULL;
unsigned long flags;
lock_kernel();
mutex_lock(&hysdn_log_mutex);
card = card_root;
while (card) {
pd = card->proclog;
@ -260,7 +239,7 @@ hysdn_log_open(struct inode *ino, struct file *filep)
card = card->next; /* search next entry */
}
if (!card) {
unlock_kernel();
mutex_unlock(&hysdn_log_mutex);
return (-ENODEV); /* device is unknown/invalid */
}
filep->private_data = card; /* remember our own card */
@ -278,10 +257,10 @@ hysdn_log_open(struct inode *ino, struct file *filep)
filep->private_data = &pd->log_head;
spin_unlock_irqrestore(&card->hysdn_lock, flags);
} else { /* simultaneous read/write access forbidden ! */
unlock_kernel();
mutex_unlock(&hysdn_log_mutex);
return (-EPERM); /* no permission this time */
}
unlock_kernel();
mutex_unlock(&hysdn_log_mutex);
return nonseekable_open(ino, filep);
} /* hysdn_log_open */
@ -300,7 +279,7 @@ hysdn_log_close(struct inode *ino, struct file *filep)
hysdn_card *card;
int retval = 0;
lock_kernel();
mutex_lock(&hysdn_log_mutex);
if ((filep->f_mode & (FMODE_READ | FMODE_WRITE)) == FMODE_WRITE) {
/* write only access -> write debug level written */
retval = 0; /* success */
@ -339,7 +318,7 @@ hysdn_log_close(struct inode *ino, struct file *filep)
kfree(inf);
}
} /* read access */
unlock_kernel();
mutex_unlock(&hysdn_log_mutex);
return (retval);
} /* hysdn_log_close */

View File

@ -17,7 +17,7 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/isdn.h>
#include <linux/smp_lock.h>
#include <linux/mutex.h>
#include "isdn_common.h"
#include "isdn_tty.h"
#include "isdn_net.h"
@ -42,6 +42,7 @@ MODULE_LICENSE("GPL");
isdn_dev *dev;
static DEFINE_MUTEX(isdn_mutex);
static char *isdn_revision = "$Revision: 1.1.2.3 $";
extern char *isdn_net_revision;
@ -1070,7 +1071,7 @@ isdn_read(struct file *file, char __user *buf, size_t count, loff_t * off)
int retval;
char *p;
lock_kernel();
mutex_lock(&isdn_mutex);
if (minor == ISDN_MINOR_STATUS) {
if (!file->private_data) {
if (file->f_flags & O_NONBLOCK) {
@ -1163,7 +1164,7 @@ isdn_read(struct file *file, char __user *buf, size_t count, loff_t * off)
#endif
retval = -ENODEV;
out:
unlock_kernel();
mutex_unlock(&isdn_mutex);
return retval;
}
@ -1180,7 +1181,7 @@ isdn_write(struct file *file, const char __user *buf, size_t count, loff_t * off
if (!dev->drivers)
return -ENODEV;
lock_kernel();
mutex_lock(&isdn_mutex);
if (minor <= ISDN_MINOR_BMAX) {
printk(KERN_WARNING "isdn_write minor %d obsolete!\n", minor);
drvidx = isdn_minor2drv(minor);
@ -1225,7 +1226,7 @@ isdn_write(struct file *file, const char __user *buf, size_t count, loff_t * off
#endif
retval = -ENODEV;
out:
unlock_kernel();
mutex_unlock(&isdn_mutex);
return retval;
}
@ -1236,7 +1237,7 @@ isdn_poll(struct file *file, poll_table * wait)
unsigned int minor = iminor(file->f_path.dentry->d_inode);
int drvidx = isdn_minor2drv(minor - ISDN_MINOR_CTRL);
lock_kernel();
mutex_lock(&isdn_mutex);
if (minor == ISDN_MINOR_STATUS) {
poll_wait(file, &(dev->info_waitq), wait);
/* mask = POLLOUT | POLLWRNORM; */
@ -1266,7 +1267,7 @@ isdn_poll(struct file *file, poll_table * wait)
#endif
mask = POLLERR;
out:
unlock_kernel();
mutex_unlock(&isdn_mutex);
return mask;
}
@ -1727,9 +1728,9 @@ isdn_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
int ret;
lock_kernel();
mutex_lock(&isdn_mutex);
ret = isdn_ioctl(file, cmd, arg);
unlock_kernel();
mutex_unlock(&isdn_mutex);
return ret;
}
@ -1745,7 +1746,7 @@ isdn_open(struct inode *ino, struct file *filep)
int chidx;
int retval = -ENODEV;
lock_kernel();
mutex_lock(&isdn_mutex);
if (minor == ISDN_MINOR_STATUS) {
infostruct *p;
@ -1796,7 +1797,7 @@ isdn_open(struct inode *ino, struct file *filep)
#endif
out:
nonseekable_open(ino, filep);
unlock_kernel();
mutex_unlock(&isdn_mutex);
return retval;
}
@ -1805,7 +1806,7 @@ isdn_close(struct inode *ino, struct file *filep)
{
uint minor = iminor(ino);
lock_kernel();
mutex_lock(&isdn_mutex);
if (minor == ISDN_MINOR_STATUS) {
infostruct *p = dev->infochain;
infostruct *q = NULL;
@ -1839,7 +1840,7 @@ isdn_close(struct inode *ino, struct file *filep)
#endif
out:
unlock_kernel();
mutex_unlock(&isdn_mutex);
return 0;
}

View File

@ -2924,16 +2924,17 @@ isdn_net_getcfg(isdn_net_ioctl_cfg * cfg)
cfg->dialtimeout = lp->dialtimeout >= 0 ? lp->dialtimeout / HZ : -1;
cfg->dialwait = lp->dialwait / HZ;
if (lp->slave) {
if (strlen(lp->slave->name) > 8)
if (strlen(lp->slave->name) >= 10)
strcpy(cfg->slave, "too-long");
else
strcpy(cfg->slave, lp->slave->name);
} else
cfg->slave[0] = '\0';
if (lp->master) {
if (strlen(lp->master->name) > 8)
if (strlen(lp->master->name) >= 10)
strcpy(cfg->master, "too-long");
strcpy(cfg->master, lp->master->name);
else
strcpy(cfg->master, lp->master->name);
} else
cfg->master[0] = '\0';
return 0;

View File

@ -449,14 +449,9 @@ static int get_filter(void __user *arg, struct sock_filter **p)
/* uprog.len is unsigned short, so no overflow here */
len = uprog.len * sizeof(struct sock_filter);
code = kmalloc(len, GFP_KERNEL);
if (code == NULL)
return -ENOMEM;
if (copy_from_user(code, uprog.filter, len)) {
kfree(code);
return -EFAULT;
}
code = memdup_user(uprog.filter, len);
if (IS_ERR(code))
return PTR_ERR(code);
err = sk_chk_filter(code, uprog.len);
if (err) {
@ -482,7 +477,7 @@ isdn_ppp_ioctl(int min, struct file *file, unsigned int cmd, unsigned long arg)
struct isdn_ppp_comp_data data;
void __user *argp = (void __user *)arg;
is = (struct ippp_struct *) file->private_data;
is = file->private_data;
lp = is->lp;
if (is->debug & 0x1)

View File

@ -2636,12 +2636,6 @@ isdn_tty_modem_result(int code, modem_info * info)
if ((info->flags & ISDN_ASYNC_CLOSING) || (!info->tty)) {
return;
}
#ifdef CONFIG_ISDN_AUDIO
if ( !info->vonline )
tty_ldisc_flush(info->tty);
#else
tty_ldisc_flush(info->tty);
#endif
if ((info->flags & ISDN_ASYNC_CHECK_CD) &&
(!((info->flags & ISDN_ASYNC_CALLOUT_ACTIVE) &&
(info->flags & ISDN_ASYNC_CALLOUT_NOHUP)))) {

View File

@ -24,9 +24,10 @@
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/mISDNif.h>
#include <linux/smp_lock.h>
#include <linux/mutex.h>
#include "core.h"
static DEFINE_MUTEX(mISDN_mutex);
static u_int *debug;
@ -224,7 +225,7 @@ mISDN_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
if (*debug & DEBUG_TIMER)
printk(KERN_DEBUG "%s(%p, %x, %lx)\n", __func__,
filep, cmd, arg);
lock_kernel();
mutex_lock(&mISDN_mutex);
switch (cmd) {
case IMADDTIMER:
if (get_user(tout, (int __user *)arg)) {
@ -256,7 +257,7 @@ mISDN_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
default:
ret = -EINVAL;
}
unlock_kernel();
mutex_unlock(&mISDN_mutex);
return ret;
}

View File

@ -411,14 +411,10 @@ static int pcbit_writecmd(const u_char __user *buf, int len, int driver, int cha
return -EINVAL;
}
cbuf = kmalloc(len, GFP_KERNEL);
if (!cbuf)
return -ENOMEM;
cbuf = memdup_user(buf, len);
if (IS_ERR(cbuf))
return PTR_ERR(cbuf);
if (copy_from_user(cbuf, buf, len)) {
kfree(cbuf);
return -EFAULT;
}
memcpy_toio(dev->sh_mem, cbuf, len);
kfree(cbuf);
return len;
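This hunk, like the earlier get_filter() change and the sc_ioctl() hunk that follows, folds the open-coded kmalloc() + copy_from_user() pair into memdup_user(), which allocates and copies in one step and reports failure as an ERR_PTR-encoded error rather than NULL. A small sketch of the resulting shape, with ubuf, len and use_buffer() as hypothetical placeholders:

	void *kbuf;

	kbuf = memdup_user(ubuf, len);	/* GFP_KERNEL allocation + copy */
	if (IS_ERR(kbuf))
		return PTR_ERR(kbuf);	/* -ENOMEM or -EFAULT */

	use_buffer(kbuf, len);		/* hypothetical consumer */
	kfree(kbuf);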

View File

@ -215,19 +215,13 @@ int sc_ioctl(int card, scs_ioctl *data)
pr_debug("%s: DCBIOSETSPID: ioctl received\n",
sc_adapter[card]->devicename);
spid = kmalloc(SCIOC_SPIDSIZE, GFP_KERNEL);
if(!spid) {
kfree(rcvmsg);
return -ENOMEM;
}
/*
* Get the spid from user space
*/
if (copy_from_user(spid, data->dataptr, SCIOC_SPIDSIZE)) {
spid = memdup_user(data->dataptr, SCIOC_SPIDSIZE);
if (IS_ERR(spid)) {
kfree(rcvmsg);
kfree(spid);
return -EFAULT;
return PTR_ERR(spid);
}
pr_debug("%s: SCIOCSETSPID: setting channel %d spid to %s\n",
@ -296,18 +290,13 @@ int sc_ioctl(int card, scs_ioctl *data)
pr_debug("%s: SCIOSETDN: ioctl received\n",
sc_adapter[card]->devicename);
dn = kmalloc(SCIOC_DNSIZE, GFP_KERNEL);
if (!dn) {
kfree(rcvmsg);
return -ENOMEM;
}
/*
* Get the spid from user space
*/
if (copy_from_user(dn, data->dataptr, SCIOC_DNSIZE)) {
dn = memdup_user(data->dataptr, SCIOC_DNSIZE);
if (IS_ERR(dn)) {
kfree(rcvmsg);
kfree(dn);
return -EFAULT;
return PTR_ERR(dn);
}
pr_debug("%s: SCIOCSETDN: setting channel %d dn to %s\n",

View File

@ -34,7 +34,7 @@ struct mc32_mailbox
{
u16 mbox;
u16 data[1];
} __attribute((packed));
} __packed;
struct skb_header
{
@ -43,7 +43,7 @@ struct skb_header
u16 next; /* Do not change! */
u16 length;
u32 data;
} __attribute((packed));
} __packed;
struct mc32_stats
{
@ -68,7 +68,7 @@ struct mc32_stats
u32 dataA[6];
u16 dataB[5];
u32 dataC[14];
} __attribute((packed));
} __packed;
#define STATUS_MASK 0x0F
#define COMPLETED (1<<7)
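The annotation change in this header is purely syntactic: __packed is the kernel's shorthand for __attribute__((packed)) (the original spelling here even omitted the usual double underscores around "attribute"), so member layout and the absence of padding are unchanged. For illustration, on a hypothetical struct:

struct example_hdr {
	u16 mbox;
	u16 length;
	u32 data;
} __packed;		/* same layout as __attribute__((packed)) */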

View File

@ -435,7 +435,6 @@ MODULE_DEVICE_TABLE(pci, vortex_pci_tbl);
First the windows. There are eight register windows, with the command
and status registers available in each.
*/
#define EL3WINDOW(win_num) iowrite16(SelectWindow + (win_num), ioaddr + EL3_CMD)
#define EL3_CMD 0x0e
#define EL3_STATUS 0x0e
@ -645,10 +644,51 @@ struct vortex_private {
u16 deferred; /* Resend these interrupts when we
* bale from the ISR */
u16 io_size; /* Size of PCI region (for release_region) */
spinlock_t lock; /* Serialise access to device & its vortex_private */
struct mii_if_info mii; /* MII lib hooks/info */
/* Serialises access to hardware other than MII and variables below.
* The lock hierarchy is rtnl_lock > lock > mii_lock > window_lock. */
spinlock_t lock;
spinlock_t mii_lock; /* Serialises access to MII */
struct mii_if_info mii; /* MII lib hooks/info */
spinlock_t window_lock; /* Serialises access to windowed regs */
int window; /* Register window */
};
static void window_set(struct vortex_private *vp, int window)
{
if (window != vp->window) {
iowrite16(SelectWindow + window, vp->ioaddr + EL3_CMD);
vp->window = window;
}
}
#define DEFINE_WINDOW_IO(size) \
static u ## size \
window_read ## size(struct vortex_private *vp, int window, int addr) \
{ \
unsigned long flags; \
u ## size ret; \
spin_lock_irqsave(&vp->window_lock, flags); \
window_set(vp, window); \
ret = ioread ## size(vp->ioaddr + addr); \
spin_unlock_irqrestore(&vp->window_lock, flags); \
return ret; \
} \
static void \
window_write ## size(struct vortex_private *vp, u ## size value, \
int window, int addr) \
{ \
unsigned long flags; \
spin_lock_irqsave(&vp->window_lock, flags); \
window_set(vp, window); \
iowrite ## size(value, vp->ioaddr + addr); \
spin_unlock_irqrestore(&vp->window_lock, flags); \
}
DEFINE_WINDOW_IO(8)
DEFINE_WINDOW_IO(16)
DEFINE_WINDOW_IO(32)
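The DEFINE_WINDOW_IO() macro above generates window_read8/16/32() and window_write8/16/32() helpers that take the register window as an explicit argument and perform the select-window + access pair as one operation under window_lock (nested below lock and mii_lock per the hierarchy comment). That is what lets the rest of the patch drop the scattered EL3WINDOW() calls. A before/after sketch using Wn4_Media from this driver:

	/* before: window selected globally, then a plain MMIO read */
	EL3WINDOW(4);
	media_status = ioread16(ioaddr + Wn4_Media);

	/* after: window selection and read are one locked operation,
	 * so no caller depends on which window is currently selected */
	media_status = window_read16(vp, 4, Wn4_Media);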
#ifdef CONFIG_PCI
#define DEVICE_PCI(dev) (((dev)->bus == &pci_bus_type) ? to_pci_dev((dev)) : NULL)
#else
@ -711,7 +751,7 @@ static int vortex_probe1(struct device *gendev, void __iomem *ioaddr, int irq,
static int vortex_up(struct net_device *dev);
static void vortex_down(struct net_device *dev, int final);
static int vortex_open(struct net_device *dev);
static void mdio_sync(void __iomem *ioaddr, int bits);
static void mdio_sync(struct vortex_private *vp, int bits);
static int mdio_read(struct net_device *dev, int phy_id, int location);
static void mdio_write(struct net_device *vp, int phy_id, int location, int value);
static void vortex_timer(unsigned long arg);
@ -980,10 +1020,16 @@ static int __devinit vortex_init_one(struct pci_dev *pdev,
ioaddr = pci_iomap(pdev, pci_bar, 0);
if (!ioaddr) /* If mapping fails, fall-back to BAR 0... */
ioaddr = pci_iomap(pdev, 0, 0);
if (!ioaddr) {
pci_disable_device(pdev);
rc = -ENOMEM;
goto out;
}
rc = vortex_probe1(&pdev->dev, ioaddr, pdev->irq,
ent->driver_data, unit);
if (rc < 0) {
pci_iounmap(pdev, ioaddr);
pci_disable_device(pdev);
goto out;
}
@ -1119,6 +1165,7 @@ static int __devinit vortex_probe1(struct device *gendev,
vp->has_nway = (vci->drv_flags & HAS_NWAY) ? 1 : 0;
vp->io_size = vci->io_size;
vp->card_idx = card_idx;
vp->window = -1;
/* module list only for Compaq device */
if (gendev == NULL) {
@ -1154,6 +1201,8 @@ static int __devinit vortex_probe1(struct device *gendev,
}
spin_lock_init(&vp->lock);
spin_lock_init(&vp->mii_lock);
spin_lock_init(&vp->window_lock);
vp->gendev = gendev;
vp->mii.dev = dev;
vp->mii.mdio_read = mdio_read;
@ -1205,7 +1254,6 @@ static int __devinit vortex_probe1(struct device *gendev,
vp->mii.force_media = vp->full_duplex;
vp->options = option;
/* Read the station address from the EEPROM. */
EL3WINDOW(0);
{
int base;
@ -1218,14 +1266,15 @@ static int __devinit vortex_probe1(struct device *gendev,
for (i = 0; i < 0x40; i++) {
int timer;
iowrite16(base + i, ioaddr + Wn0EepromCmd);
window_write16(vp, base + i, 0, Wn0EepromCmd);
/* Pause for at least 162 us. for the read to take place. */
for (timer = 10; timer >= 0; timer--) {
udelay(162);
if ((ioread16(ioaddr + Wn0EepromCmd) & 0x8000) == 0)
if ((window_read16(vp, 0, Wn0EepromCmd) &
0x8000) == 0)
break;
}
eeprom[i] = ioread16(ioaddr + Wn0EepromData);
eeprom[i] = window_read16(vp, 0, Wn0EepromData);
}
}
for (i = 0; i < 0x18; i++)
@ -1250,9 +1299,8 @@ static int __devinit vortex_probe1(struct device *gendev,
pr_err("*** EEPROM MAC address is invalid.\n");
goto free_ring; /* With every pack */
}
EL3WINDOW(2);
for (i = 0; i < 6; i++)
iowrite8(dev->dev_addr[i], ioaddr + i);
window_write8(vp, dev->dev_addr[i], 2, i);
if (print_info)
pr_cont(", IRQ %d\n", dev->irq);
@ -1261,8 +1309,7 @@ static int __devinit vortex_probe1(struct device *gendev,
pr_warning(" *** Warning: IRQ %d is unlikely to work! ***\n",
dev->irq);
EL3WINDOW(4);
step = (ioread8(ioaddr + Wn4_NetDiag) & 0x1e) >> 1;
step = (window_read8(vp, 4, Wn4_NetDiag) & 0x1e) >> 1;
if (print_info) {
pr_info(" product code %02x%02x rev %02x.%d date %02d-%02d-%02d\n",
eeprom[6]&0xff, eeprom[6]>>8, eeprom[0x14],
@ -1285,17 +1332,15 @@ static int __devinit vortex_probe1(struct device *gendev,
(unsigned long long)pci_resource_start(pdev, 2),
vp->cb_fn_base);
}
EL3WINDOW(2);
n = ioread16(ioaddr + Wn2_ResetOptions) & ~0x4010;
n = window_read16(vp, 2, Wn2_ResetOptions) & ~0x4010;
if (vp->drv_flags & INVERT_LED_PWR)
n |= 0x10;
if (vp->drv_flags & INVERT_MII_PWR)
n |= 0x4000;
iowrite16(n, ioaddr + Wn2_ResetOptions);
window_write16(vp, n, 2, Wn2_ResetOptions);
if (vp->drv_flags & WNO_XCVR_PWR) {
EL3WINDOW(0);
iowrite16(0x0800, ioaddr);
window_write16(vp, 0x0800, 0, 0);
}
}
@ -1313,14 +1358,13 @@ static int __devinit vortex_probe1(struct device *gendev,
{
static const char * const ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
unsigned int config;
EL3WINDOW(3);
vp->available_media = ioread16(ioaddr + Wn3_Options);
vp->available_media = window_read16(vp, 3, Wn3_Options);
if ((vp->available_media & 0xff) == 0) /* Broken 3c916 */
vp->available_media = 0x40;
config = ioread32(ioaddr + Wn3_Config);
config = window_read32(vp, 3, Wn3_Config);
if (print_info) {
pr_debug(" Internal config register is %4.4x, transceivers %#x.\n",
config, ioread16(ioaddr + Wn3_Options));
config, window_read16(vp, 3, Wn3_Options));
pr_info(" %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
8 << RAM_SIZE(config),
RAM_WIDTH(config) ? "word" : "byte",
@ -1346,11 +1390,10 @@ static int __devinit vortex_probe1(struct device *gendev,
if ((vp->available_media & 0x40) || (vci->drv_flags & HAS_NWAY) ||
dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
int phy, phy_idx = 0;
EL3WINDOW(4);
mii_preamble_required++;
if (vp->drv_flags & EXTRA_PREAMBLE)
mii_preamble_required++;
mdio_sync(ioaddr, 32);
mdio_sync(vp, 32);
mdio_read(dev, 24, MII_BMSR);
for (phy = 0; phy < 32 && phy_idx < 1; phy++) {
int mii_status, phyx;
@ -1478,18 +1521,17 @@ static void
vortex_set_duplex(struct net_device *dev)
{
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
pr_info("%s: setting %s-duplex.\n",
dev->name, (vp->full_duplex) ? "full" : "half");
EL3WINDOW(3);
/* Set the full-duplex bit. */
iowrite16(((vp->info1 & 0x8000) || vp->full_duplex ? 0x20 : 0) |
(vp->large_frames ? 0x40 : 0) |
((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ?
0x100 : 0),
ioaddr + Wn3_MAC_Ctrl);
window_write16(vp,
((vp->info1 & 0x8000) || vp->full_duplex ? 0x20 : 0) |
(vp->large_frames ? 0x40 : 0) |
((vp->full_duplex && vp->flow_ctrl && vp->partner_flow_ctrl) ?
0x100 : 0),
3, Wn3_MAC_Ctrl);
}
static void vortex_check_media(struct net_device *dev, unsigned int init)
@ -1529,8 +1571,7 @@ vortex_up(struct net_device *dev)
}
/* Before initializing select the active media port. */
EL3WINDOW(3);
config = ioread32(ioaddr + Wn3_Config);
config = window_read32(vp, 3, Wn3_Config);
if (vp->media_override != 7) {
pr_info("%s: Media override to transceiver %d (%s).\n",
@ -1577,10 +1618,9 @@ vortex_up(struct net_device *dev)
config = BFINS(config, dev->if_port, 20, 4);
if (vortex_debug > 6)
pr_debug("vortex_up(): writing 0x%x to InternalConfig\n", config);
iowrite32(config, ioaddr + Wn3_Config);
window_write32(vp, config, 3, Wn3_Config);
if (dev->if_port == XCVR_MII || dev->if_port == XCVR_NWAY) {
EL3WINDOW(4);
mii_reg1 = mdio_read(dev, vp->phys[0], MII_BMSR);
mii_reg5 = mdio_read(dev, vp->phys[0], MII_LPA);
vp->partner_flow_ctrl = ((mii_reg5 & 0x0400) != 0);
@ -1601,51 +1641,46 @@ vortex_up(struct net_device *dev)
iowrite16(SetStatusEnb | 0x00, ioaddr + EL3_CMD);
if (vortex_debug > 1) {
EL3WINDOW(4);
pr_debug("%s: vortex_up() irq %d media status %4.4x.\n",
dev->name, dev->irq, ioread16(ioaddr + Wn4_Media));
dev->name, dev->irq, window_read16(vp, 4, Wn4_Media));
}
/* Set the station address and mask in window 2 each time opened. */
EL3WINDOW(2);
for (i = 0; i < 6; i++)
iowrite8(dev->dev_addr[i], ioaddr + i);
window_write8(vp, dev->dev_addr[i], 2, i);
for (; i < 12; i+=2)
iowrite16(0, ioaddr + i);
window_write16(vp, 0, 2, i);
if (vp->cb_fn_base) {
unsigned short n = ioread16(ioaddr + Wn2_ResetOptions) & ~0x4010;
unsigned short n = window_read16(vp, 2, Wn2_ResetOptions) & ~0x4010;
if (vp->drv_flags & INVERT_LED_PWR)
n |= 0x10;
if (vp->drv_flags & INVERT_MII_PWR)
n |= 0x4000;
iowrite16(n, ioaddr + Wn2_ResetOptions);
window_write16(vp, n, 2, Wn2_ResetOptions);
}
if (dev->if_port == XCVR_10base2)
/* Start the thinnet transceiver. We should really wait 50ms...*/
iowrite16(StartCoax, ioaddr + EL3_CMD);
if (dev->if_port != XCVR_NWAY) {
EL3WINDOW(4);
iowrite16((ioread16(ioaddr + Wn4_Media) & ~(Media_10TP|Media_SQE)) |
media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
window_write16(vp,
(window_read16(vp, 4, Wn4_Media) &
~(Media_10TP|Media_SQE)) |
media_tbl[dev->if_port].media_bits,
4, Wn4_Media);
}
/* Switch to the stats window, and clear all stats by reading. */
iowrite16(StatsDisable, ioaddr + EL3_CMD);
EL3WINDOW(6);
for (i = 0; i < 10; i++)
ioread8(ioaddr + i);
ioread16(ioaddr + 10);
ioread16(ioaddr + 12);
window_read8(vp, 6, i);
window_read16(vp, 6, 10);
window_read16(vp, 6, 12);
/* New: On the Vortex we must also clear the BadSSD counter. */
EL3WINDOW(4);
ioread8(ioaddr + 12);
window_read8(vp, 4, 12);
/* ..and on the Boomerang we enable the extra statistics bits. */
iowrite16(0x0040, ioaddr + Wn4_NetDiag);
/* Switch to register set 7 for normal use. */
EL3WINDOW(7);
window_write16(vp, 0x0040, 4, Wn4_NetDiag);
if (vp->full_bus_master_rx) { /* Boomerang bus master. */
vp->cur_rx = vp->dirty_rx = 0;
@ -1763,7 +1798,7 @@ vortex_timer(unsigned long data)
void __iomem *ioaddr = vp->ioaddr;
int next_tick = 60*HZ;
int ok = 0;
int media_status, old_window;
int media_status;
if (vortex_debug > 2) {
pr_debug("%s: Media selection timer tick happened, %s.\n",
@ -1771,10 +1806,7 @@ vortex_timer(unsigned long data)
pr_debug("dev->watchdog_timeo=%d\n", dev->watchdog_timeo);
}
disable_irq_lockdep(dev->irq);
old_window = ioread16(ioaddr + EL3_CMD) >> 13;
EL3WINDOW(4);
media_status = ioread16(ioaddr + Wn4_Media);
media_status = window_read16(vp, 4, Wn4_Media);
switch (dev->if_port) {
case XCVR_10baseT: case XCVR_100baseTx: case XCVR_100baseFx:
if (media_status & Media_LnkBeat) {
@ -1794,10 +1826,7 @@ vortex_timer(unsigned long data)
case XCVR_MII: case XCVR_NWAY:
{
ok = 1;
/* Interrupts are already disabled */
spin_lock(&vp->lock);
vortex_check_media(dev, 0);
spin_unlock(&vp->lock);
}
break;
default: /* Other media types handled by Tx timeouts. */
@ -1816,6 +1845,8 @@ vortex_timer(unsigned long data)
if (!ok) {
unsigned int config;
spin_lock_irq(&vp->lock);
do {
dev->if_port = media_tbl[dev->if_port].next;
} while ( ! (vp->available_media & media_tbl[dev->if_port].mask));
@ -1830,19 +1861,22 @@ vortex_timer(unsigned long data)
dev->name, media_tbl[dev->if_port].name);
next_tick = media_tbl[dev->if_port].wait;
}
iowrite16((media_status & ~(Media_10TP|Media_SQE)) |
media_tbl[dev->if_port].media_bits, ioaddr + Wn4_Media);
window_write16(vp,
(media_status & ~(Media_10TP|Media_SQE)) |
media_tbl[dev->if_port].media_bits,
4, Wn4_Media);
EL3WINDOW(3);
config = ioread32(ioaddr + Wn3_Config);
config = window_read32(vp, 3, Wn3_Config);
config = BFINS(config, dev->if_port, 20, 4);
iowrite32(config, ioaddr + Wn3_Config);
window_write32(vp, config, 3, Wn3_Config);
iowrite16(dev->if_port == XCVR_10base2 ? StartCoax : StopCoax,
ioaddr + EL3_CMD);
if (vortex_debug > 1)
pr_debug("wrote 0x%08x to Wn3_Config\n", config);
/* AKPM: FIXME: Should reset Rx & Tx here. P60 of 3c90xc.pdf */
spin_unlock_irq(&vp->lock);
}
leave_media_alone:
@ -1850,8 +1884,6 @@ leave_media_alone:
pr_debug("%s: Media selection timer finished, %s.\n",
dev->name, media_tbl[dev->if_port].name);
EL3WINDOW(old_window);
enable_irq_lockdep(dev->irq);
mod_timer(&vp->timer, RUN_AT(next_tick));
if (vp->deferred)
iowrite16(FakeIntr, ioaddr + EL3_CMD);
@ -1865,12 +1897,11 @@ static void vortex_tx_timeout(struct net_device *dev)
pr_err("%s: transmit timed out, tx_status %2.2x status %4.4x.\n",
dev->name, ioread8(ioaddr + TxStatus),
ioread16(ioaddr + EL3_STATUS));
EL3WINDOW(4);
pr_err(" diagnostics: net %04x media %04x dma %08x fifo %04x\n",
ioread16(ioaddr + Wn4_NetDiag),
ioread16(ioaddr + Wn4_Media),
window_read16(vp, 4, Wn4_NetDiag),
window_read16(vp, 4, Wn4_Media),
ioread32(ioaddr + PktStatus),
ioread16(ioaddr + Wn4_FIFODiag));
window_read16(vp, 4, Wn4_FIFODiag));
/* Slight code bloat to be user friendly. */
if ((ioread8(ioaddr + TxStatus) & 0x88) == 0x88)
pr_err("%s: Transmitter encountered 16 collisions --"
@ -1917,9 +1948,6 @@ static void vortex_tx_timeout(struct net_device *dev)
/* Issue Tx Enable */
iowrite16(TxEnable, ioaddr + EL3_CMD);
dev->trans_start = jiffies; /* prevent tx timeout */
/* Switch to register set 7 for normal use. */
EL3WINDOW(7);
}
/*
@ -1980,10 +2008,10 @@ vortex_error(struct net_device *dev, int status)
ioread16(ioaddr + EL3_STATUS) & StatsFull) {
pr_warning("%s: Updating statistics failed, disabling "
"stats as an interrupt source.\n", dev->name);
EL3WINDOW(5);
iowrite16(SetIntrEnb | (ioread16(ioaddr + 10) & ~StatsFull), ioaddr + EL3_CMD);
iowrite16(SetIntrEnb |
(window_read16(vp, 5, 10) & ~StatsFull),
ioaddr + EL3_CMD);
vp->intr_enable &= ~StatsFull;
EL3WINDOW(7);
DoneDidThat++;
}
}
@ -1993,8 +2021,7 @@ vortex_error(struct net_device *dev, int status)
}
if (status & HostError) {
u16 fifo_diag;
EL3WINDOW(4);
fifo_diag = ioread16(ioaddr + Wn4_FIFODiag);
fifo_diag = window_read16(vp, 4, Wn4_FIFODiag);
pr_err("%s: Host error, FIFO diagnostic register %4.4x.\n",
dev->name, fifo_diag);
/* Adapter failure requires Tx/Rx reset and reinit. */
@ -2043,9 +2070,13 @@ vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (vp->bus_master) {
/* Set the bus-master controller to transfer the packet. */
int len = (skb->len + 3) & ~3;
iowrite32(vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len, PCI_DMA_TODEVICE),
ioaddr + Wn7_MasterAddr);
vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len,
PCI_DMA_TODEVICE);
spin_lock_irq(&vp->window_lock);
window_set(vp, 7);
iowrite32(vp->tx_skb_dma, ioaddr + Wn7_MasterAddr);
iowrite16(len, ioaddr + Wn7_MasterLen);
spin_unlock_irq(&vp->window_lock);
vp->tx_skb = skb;
iowrite16(StartDMADown, ioaddr + EL3_CMD);
/* netif_wake_queue() will be called at the DMADone interrupt. */
@ -2217,6 +2248,9 @@ vortex_interrupt(int irq, void *dev_id)
pr_debug("%s: interrupt, status %4.4x, latency %d ticks.\n",
dev->name, status, ioread8(ioaddr + Timer));
spin_lock(&vp->window_lock);
window_set(vp, 7);
do {
if (vortex_debug > 5)
pr_debug("%s: In interrupt loop, status %4.4x.\n",
@ -2275,6 +2309,8 @@ vortex_interrupt(int irq, void *dev_id)
iowrite16(AckIntr | IntReq | IntLatch, ioaddr + EL3_CMD);
} while ((status = ioread16(ioaddr + EL3_STATUS)) & (IntLatch | RxComplete));
spin_unlock(&vp->window_lock);
if (vortex_debug > 4)
pr_debug("%s: exiting interrupt, status %4.4x.\n",
dev->name, status);
@ -2760,85 +2796,58 @@ static struct net_device_stats *vortex_get_stats(struct net_device *dev)
static void update_stats(void __iomem *ioaddr, struct net_device *dev)
{
struct vortex_private *vp = netdev_priv(dev);
int old_window = ioread16(ioaddr + EL3_CMD);
if (old_window == 0xffff) /* Chip suspended or ejected. */
return;
/* Unlike the 3c5x9 we need not turn off stats updates while reading. */
/* Switch to the stats window, and read everything. */
EL3WINDOW(6);
dev->stats.tx_carrier_errors += ioread8(ioaddr + 0);
dev->stats.tx_heartbeat_errors += ioread8(ioaddr + 1);
dev->stats.tx_window_errors += ioread8(ioaddr + 4);
dev->stats.rx_fifo_errors += ioread8(ioaddr + 5);
dev->stats.tx_packets += ioread8(ioaddr + 6);
dev->stats.tx_packets += (ioread8(ioaddr + 9)&0x30) << 4;
/* Rx packets */ ioread8(ioaddr + 7); /* Must read to clear */
dev->stats.tx_carrier_errors += window_read8(vp, 6, 0);
dev->stats.tx_heartbeat_errors += window_read8(vp, 6, 1);
dev->stats.tx_window_errors += window_read8(vp, 6, 4);
dev->stats.rx_fifo_errors += window_read8(vp, 6, 5);
dev->stats.tx_packets += window_read8(vp, 6, 6);
dev->stats.tx_packets += (window_read8(vp, 6, 9) &
0x30) << 4;
/* Rx packets */ window_read8(vp, 6, 7); /* Must read to clear */
/* Don't bother with register 9, an extension of registers 6&7.
If we do use the 6&7 values the atomic update assumption above
is invalid. */
dev->stats.rx_bytes += ioread16(ioaddr + 10);
dev->stats.tx_bytes += ioread16(ioaddr + 12);
dev->stats.rx_bytes += window_read16(vp, 6, 10);
dev->stats.tx_bytes += window_read16(vp, 6, 12);
/* Extra stats for get_ethtool_stats() */
vp->xstats.tx_multiple_collisions += ioread8(ioaddr + 2);
vp->xstats.tx_single_collisions += ioread8(ioaddr + 3);
vp->xstats.tx_deferred += ioread8(ioaddr + 8);
EL3WINDOW(4);
vp->xstats.rx_bad_ssd += ioread8(ioaddr + 12);
vp->xstats.tx_multiple_collisions += window_read8(vp, 6, 2);
vp->xstats.tx_single_collisions += window_read8(vp, 6, 3);
vp->xstats.tx_deferred += window_read8(vp, 6, 8);
vp->xstats.rx_bad_ssd += window_read8(vp, 4, 12);
dev->stats.collisions = vp->xstats.tx_multiple_collisions
+ vp->xstats.tx_single_collisions
+ vp->xstats.tx_max_collisions;
{
u8 up = ioread8(ioaddr + 13);
u8 up = window_read8(vp, 4, 13);
dev->stats.rx_bytes += (up & 0x0f) << 16;
dev->stats.tx_bytes += (up & 0xf0) << 12;
}
EL3WINDOW(old_window >> 13);
}
static int vortex_nway_reset(struct net_device *dev)
{
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
unsigned long flags;
int rc;
spin_lock_irqsave(&vp->lock, flags);
EL3WINDOW(4);
rc = mii_nway_restart(&vp->mii);
spin_unlock_irqrestore(&vp->lock, flags);
return rc;
return mii_nway_restart(&vp->mii);
}
static int vortex_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
unsigned long flags;
int rc;
spin_lock_irqsave(&vp->lock, flags);
EL3WINDOW(4);
rc = mii_ethtool_gset(&vp->mii, cmd);
spin_unlock_irqrestore(&vp->lock, flags);
return rc;
return mii_ethtool_gset(&vp->mii, cmd);
}
static int vortex_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
unsigned long flags;
int rc;
spin_lock_irqsave(&vp->lock, flags);
EL3WINDOW(4);
rc = mii_ethtool_sset(&vp->mii, cmd);
spin_unlock_irqrestore(&vp->lock, flags);
return rc;
return mii_ethtool_sset(&vp->mii, cmd);
}
static u32 vortex_get_msglevel(struct net_device *dev)
@ -2909,6 +2918,36 @@ static void vortex_get_drvinfo(struct net_device *dev,
}
}
static void vortex_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
struct vortex_private *vp = netdev_priv(dev);
spin_lock_irq(&vp->lock);
wol->supported = WAKE_MAGIC;
wol->wolopts = 0;
if (vp->enable_wol)
wol->wolopts |= WAKE_MAGIC;
spin_unlock_irq(&vp->lock);
}
static int vortex_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
struct vortex_private *vp = netdev_priv(dev);
if (wol->wolopts & ~WAKE_MAGIC)
return -EINVAL;
spin_lock_irq(&vp->lock);
if (wol->wolopts & WAKE_MAGIC)
vp->enable_wol = 1;
else
vp->enable_wol = 0;
acpi_set_WOL(dev);
spin_unlock_irq(&vp->lock);
return 0;
}
static const struct ethtool_ops vortex_ethtool_ops = {
.get_drvinfo = vortex_get_drvinfo,
.get_strings = vortex_get_strings,
@ -2920,6 +2959,8 @@ static const struct ethtool_ops vortex_ethtool_ops = {
.set_settings = vortex_set_settings,
.get_link = ethtool_op_get_link,
.nway_reset = vortex_nway_reset,
.get_wol = vortex_get_wol,
.set_wol = vortex_set_wol,
};
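The new vortex_get_wol()/vortex_set_wol() hooks expose the driver's enable_wol flag through the standard ethtool Wake-on-LAN interface, so it can be queried and changed at run time. From user space this corresponds to the ETHTOOL_GWOL/ETHTOOL_SWOL ioctl pair (ethtool's "wol g" setting); a rough user-space sketch, with the interface name passed in as a placeholder:

#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: request magic-packet wake-up on one interface. */
static int enable_wol_magic(const char *ifname)
{
	struct ethtool_wolinfo wol = { .cmd = ETHTOOL_SWOL, .wolopts = WAKE_MAGIC };
	struct ifreq ifr;
	int fd, ret;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&wol;

	ret = ioctl(fd, SIOCETHTOOL, &ifr);	/* ends up in the driver's .set_wol */
	close(fd);
	return ret;
}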
#ifdef CONFIG_PCI
@ -2930,7 +2971,6 @@ static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
int err;
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
unsigned long flags;
pci_power_t state = 0;
@ -2942,7 +2982,6 @@ static int vortex_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
if(state != 0)
pci_set_power_state(VORTEX_PCI(vp), PCI_D0);
spin_lock_irqsave(&vp->lock, flags);
EL3WINDOW(4);
err = generic_mii_ioctl(&vp->mii, if_mii(rq), cmd, NULL);
spin_unlock_irqrestore(&vp->lock, flags);
if(state != 0)
@ -2985,8 +3024,6 @@ static void set_rx_mode(struct net_device *dev)
static void set_8021q_mode(struct net_device *dev, int enable)
{
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
int old_window = ioread16(ioaddr + EL3_CMD);
int mac_ctrl;
if ((vp->drv_flags&IS_CYCLONE) || (vp->drv_flags&IS_TORNADO)) {
@ -2997,28 +3034,23 @@ static void set_8021q_mode(struct net_device *dev, int enable)
if (enable)
max_pkt_size += 4; /* 802.1Q VLAN tag */
EL3WINDOW(3);
iowrite16(max_pkt_size, ioaddr+Wn3_MaxPktSize);
window_write16(vp, max_pkt_size, 3, Wn3_MaxPktSize);
/* set VlanEtherType to let the hardware checksumming
treat tagged frames correctly */
EL3WINDOW(7);
iowrite16(VLAN_ETHER_TYPE, ioaddr+Wn7_VlanEtherType);
window_write16(vp, VLAN_ETHER_TYPE, 7, Wn7_VlanEtherType);
} else {
/* on older cards we have to enable large frames */
vp->large_frames = dev->mtu > 1500 || enable;
EL3WINDOW(3);
mac_ctrl = ioread16(ioaddr+Wn3_MAC_Ctrl);
mac_ctrl = window_read16(vp, 3, Wn3_MAC_Ctrl);
if (vp->large_frames)
mac_ctrl |= 0x40;
else
mac_ctrl &= ~0x40;
iowrite16(mac_ctrl, ioaddr+Wn3_MAC_Ctrl);
window_write16(vp, mac_ctrl, 3, Wn3_MAC_Ctrl);
}
EL3WINDOW(old_window);
}
#else
@ -3037,7 +3069,10 @@ static void set_8021q_mode(struct net_device *dev, int enable)
/* The maximum data clock rate is 2.5 Mhz. The minimum timing is usually
met by back-to-back PCI I/O cycles, but we insert a delay to avoid
"overclocking" issues. */
#define mdio_delay() ioread32(mdio_addr)
static void mdio_delay(struct vortex_private *vp)
{
window_read32(vp, 4, Wn4_PhysicalMgmt);
}
#define MDIO_SHIFT_CLK 0x01
#define MDIO_DIR_WRITE 0x04
@ -3048,16 +3083,15 @@ static void set_8021q_mode(struct net_device *dev, int enable)
/* Generate the preamble required for initial synchronization and
a few older transceivers. */
static void mdio_sync(void __iomem *ioaddr, int bits)
static void mdio_sync(struct vortex_private *vp, int bits)
{
void __iomem *mdio_addr = ioaddr + Wn4_PhysicalMgmt;
/* Establish sync by sending at least 32 logic ones. */
while (-- bits >= 0) {
iowrite16(MDIO_DATA_WRITE1, mdio_addr);
mdio_delay();
iowrite16(MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK, mdio_addr);
mdio_delay();
window_write16(vp, MDIO_DATA_WRITE1, 4, Wn4_PhysicalMgmt);
mdio_delay(vp);
window_write16(vp, MDIO_DATA_WRITE1 | MDIO_SHIFT_CLK,
4, Wn4_PhysicalMgmt);
mdio_delay(vp);
}
}
@ -3065,59 +3099,70 @@ static int mdio_read(struct net_device *dev, int phy_id, int location)
{
int i;
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
int read_cmd = (0xf6 << 10) | (phy_id << 5) | location;
unsigned int retval = 0;
void __iomem *mdio_addr = ioaddr + Wn4_PhysicalMgmt;
spin_lock_bh(&vp->mii_lock);
if (mii_preamble_required)
mdio_sync(ioaddr, 32);
mdio_sync(vp, 32);
/* Shift the read command bits out. */
for (i = 14; i >= 0; i--) {
int dataval = (read_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
iowrite16(dataval, mdio_addr);
mdio_delay();
iowrite16(dataval | MDIO_SHIFT_CLK, mdio_addr);
mdio_delay();
window_write16(vp, dataval, 4, Wn4_PhysicalMgmt);
mdio_delay(vp);
window_write16(vp, dataval | MDIO_SHIFT_CLK,
4, Wn4_PhysicalMgmt);
mdio_delay(vp);
}
/* Read the two transition, 16 data, and wire-idle bits. */
for (i = 19; i > 0; i--) {
iowrite16(MDIO_ENB_IN, mdio_addr);
mdio_delay();
retval = (retval << 1) | ((ioread16(mdio_addr) & MDIO_DATA_READ) ? 1 : 0);
iowrite16(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
mdio_delay();
window_write16(vp, MDIO_ENB_IN, 4, Wn4_PhysicalMgmt);
mdio_delay(vp);
retval = (retval << 1) |
((window_read16(vp, 4, Wn4_PhysicalMgmt) &
MDIO_DATA_READ) ? 1 : 0);
window_write16(vp, MDIO_ENB_IN | MDIO_SHIFT_CLK,
4, Wn4_PhysicalMgmt);
mdio_delay(vp);
}
spin_unlock_bh(&vp->mii_lock);
return retval & 0x20000 ? 0xffff : retval>>1 & 0xffff;
}
static void mdio_write(struct net_device *dev, int phy_id, int location, int value)
{
struct vortex_private *vp = netdev_priv(dev);
void __iomem *ioaddr = vp->ioaddr;
int write_cmd = 0x50020000 | (phy_id << 23) | (location << 18) | value;
void __iomem *mdio_addr = ioaddr + Wn4_PhysicalMgmt;
int i;
spin_lock_bh(&vp->mii_lock);
if (mii_preamble_required)
mdio_sync(ioaddr, 32);
mdio_sync(vp, 32);
/* Shift the command bits out. */
for (i = 31; i >= 0; i--) {
int dataval = (write_cmd&(1<<i)) ? MDIO_DATA_WRITE1 : MDIO_DATA_WRITE0;
iowrite16(dataval, mdio_addr);
mdio_delay();
iowrite16(dataval | MDIO_SHIFT_CLK, mdio_addr);
mdio_delay();
window_write16(vp, dataval, 4, Wn4_PhysicalMgmt);
mdio_delay(vp);
window_write16(vp, dataval | MDIO_SHIFT_CLK,
4, Wn4_PhysicalMgmt);
mdio_delay(vp);
}
/* Leave the interface idle. */
for (i = 1; i >= 0; i--) {
iowrite16(MDIO_ENB_IN, mdio_addr);
mdio_delay();
iowrite16(MDIO_ENB_IN | MDIO_SHIFT_CLK, mdio_addr);
mdio_delay();
window_write16(vp, MDIO_ENB_IN, 4, Wn4_PhysicalMgmt);
mdio_delay(vp);
window_write16(vp, MDIO_ENB_IN | MDIO_SHIFT_CLK,
4, Wn4_PhysicalMgmt);
mdio_delay(vp);
}
spin_unlock_bh(&vp->mii_lock);
}
/* ACPI: Advanced Configuration and Power Interface. */
@ -3131,8 +3176,7 @@ static void acpi_set_WOL(struct net_device *dev)
if (vp->enable_wol) {
/* Power up on: 1==Downloaded Filter, 2==Magic Packets, 4==Link Status. */
EL3WINDOW(7);
iowrite16(2, ioaddr + 0x0c);
window_write16(vp, 2, 7, 0x0c);
/* The RxFilter must accept the WOL frames. */
iowrite16(SetRxFilter|RxStation|RxMulticast|RxBroadcast, ioaddr + EL3_CMD);
iowrite16(RxEnable, ioaddr + EL3_CMD);

View File

@ -322,7 +322,7 @@ struct cp_dma_stats {
__le32 rx_ok_mcast;
__le16 tx_abort;
__le16 tx_underrun;
} __attribute__((packed));
} __packed;
struct cp_extra_stats {
unsigned long rx_frags;

View File

@ -662,7 +662,7 @@ static const struct ethtool_ops rtl8139_ethtool_ops;
/* read MMIO register */
#define RTL_R8(reg) ioread8 (ioaddr + (reg))
#define RTL_R16(reg) ioread16 (ioaddr + (reg))
#define RTL_R32(reg) ((unsigned long) ioread32 (ioaddr + (reg)))
#define RTL_R32(reg) ioread32 (ioaddr + (reg))
static const u16 rtl8139_intr_mask =
@ -862,7 +862,7 @@ retry:
/* if unknown chip, assume array element #0, original RTL-8139 in this case */
i = 0;
dev_dbg(&pdev->dev, "unknown chip version, assuming RTL-8139\n");
dev_dbg(&pdev->dev, "TxConfig = 0x%lx\n", RTL_R32 (TxConfig));
dev_dbg(&pdev->dev, "TxConfig = 0x%x\n", RTL_R32 (TxConfig));
tp->chipset = 0;
match:
@ -1643,7 +1643,7 @@ static void rtl8139_tx_timeout_task (struct work_struct *work)
netdev_dbg(dev, "Tx queue start entry %ld dirty entry %ld\n",
tp->cur_tx, tp->dirty_tx);
for (i = 0; i < NUM_TX_DESC; i++)
netdev_dbg(dev, "Tx descriptor %d is %08lx%s\n",
netdev_dbg(dev, "Tx descriptor %d is %08x%s\n",
i, RTL_R32(TxStatus0 + (i * 4)),
i == tp->dirty_tx % NUM_TX_DESC ?
" (queue head)" : "");
@ -2487,7 +2487,7 @@ static void __set_rx_mode (struct net_device *dev)
int rx_mode;
u32 tmp;
netdev_dbg(dev, "rtl8139_set_rx_mode(%04x) done -- Rx config %08lx\n",
netdev_dbg(dev, "rtl8139_set_rx_mode(%04x) done -- Rx config %08x\n",
dev->flags, RTL_R32(RxConfig));
/* Note: do not reorder, GCC is clever about common statements. */
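Dropping the unsigned long cast from RTL_R32() is what drives the companion %08lx to %08x format changes: ioread32() yields a 32-bit value, and the printk-style format has to match the actual promoted type or the compiler's format checking complains. A minimal fragment in the same spirit (val is hypothetical):

	u32 val = RTL_R32(TxConfig);

	/* a u32 prints with %x / %08x; %lx would expect an unsigned long */
	netdev_dbg(dev, "TxConfig = 0x%08x\n", val);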

View File

@ -525,7 +525,21 @@ static irqreturn_t i596_error(int irq, void *dev_id)
}
#endif
static inline void init_rx_bufs(struct net_device *dev)
static inline void remove_rx_bufs(struct net_device *dev)
{
struct i596_private *lp = dev->ml_priv;
struct i596_rbd *rbd;
int i;
for (i = 0, rbd = lp->rbds; i < rx_ring_size; i++, rbd++) {
if (rbd->skb == NULL)
break;
dev_kfree_skb(rbd->skb);
rbd->skb = NULL;
}
}
static inline int init_rx_bufs(struct net_device *dev)
{
struct i596_private *lp = dev->ml_priv;
int i;
@ -537,8 +551,11 @@ static inline void init_rx_bufs(struct net_device *dev)
for (i = 0, rbd = lp->rbds; i < rx_ring_size; i++, rbd++) {
struct sk_buff *skb = dev_alloc_skb(PKT_BUF_SZ);
if (skb == NULL)
panic("82596: alloc_skb() failed");
if (skb == NULL) {
remove_rx_bufs(dev);
return -ENOMEM;
}
skb->dev = dev;
rbd->v_next = rbd+1;
rbd->b_next = WSWAPrbd(virt_to_bus(rbd+1));
@ -574,19 +591,8 @@ static inline void init_rx_bufs(struct net_device *dev)
rfd->v_next = lp->rfds;
rfd->b_next = WSWAPrfd(virt_to_bus(lp->rfds));
rfd->cmd = CMD_EOL|CMD_FLEX;
}
static inline void remove_rx_bufs(struct net_device *dev)
{
struct i596_private *lp = dev->ml_priv;
struct i596_rbd *rbd;
int i;
for (i = 0, rbd = lp->rbds; i < rx_ring_size; i++, rbd++) {
if (rbd->skb == NULL)
break;
dev_kfree_skb(rbd->skb);
}
return 0;
}
@ -1009,20 +1015,35 @@ static int i596_open(struct net_device *dev)
}
#ifdef ENABLE_MVME16x_NET
if (MACH_IS_MVME16x) {
if (request_irq(0x56, i596_error, 0, "i82596_error", dev))
return -EAGAIN;
if (request_irq(0x56, i596_error, 0, "i82596_error", dev)) {
res = -EAGAIN;
goto err_irq_dev;
}
}
#endif
init_rx_bufs(dev);
res = init_rx_bufs(dev);
if (res)
goto err_irq_56;
netif_start_queue(dev);
/* Initialize the 82596 memory */
if (init_i596_mem(dev)) {
res = -EAGAIN;
free_irq(dev->irq, dev);
goto err_queue;
}
return 0;
err_queue:
netif_stop_queue(dev);
remove_rx_bufs(dev);
err_irq_56:
#ifdef ENABLE_MVME16x_NET
free_irq(0x56, dev);
err_irq_dev:
#endif
free_irq(dev->irq, dev);
return res;
}
@ -1488,6 +1509,9 @@ static int i596_close(struct net_device *dev)
}
#endif
#ifdef ENABLE_MVME16x_NET
free_irq(0x56, dev);
#endif
free_irq(dev->irq, dev);
remove_rx_bufs(dev);
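The reworked open path above replaces the old panic() on receive-buffer allocation failure with a proper error return and unwinds whatever was already set up through labelled exits taken in reverse order (err_queue, err_irq_56, err_irq_dev). Reduced to a shape sketch with hypothetical step names:

static int my_open(struct net_device *dev)
{
	int res;

	res = grab_irqs(dev);			/* hypothetical step 1 */
	if (res)
		return res;

	res = alloc_rx_buffers(dev);		/* hypothetical step 2 */
	if (res)
		goto err_irqs;

	res = init_hardware(dev);		/* hypothetical step 3 */
	if (res)
		goto err_buffers;

	return 0;

err_buffers:					/* undo in reverse order */
	free_rx_buffers(dev);
err_irqs:
	release_irqs(dev);
	return res;
}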

View File

@ -530,14 +530,15 @@ config SH_ETH
depends on SUPERH && \
(CPU_SUBTYPE_SH7710 || CPU_SUBTYPE_SH7712 || \
CPU_SUBTYPE_SH7763 || CPU_SUBTYPE_SH7619 || \
CPU_SUBTYPE_SH7724)
CPU_SUBTYPE_SH7724 || CPU_SUBTYPE_SH7757)
select CRC32
select MII
select MDIO_BITBANG
select PHYLIB
help
Renesas SuperH Ethernet device driver.
This driver support SH7710, SH7712, SH7763, SH7619, and SH7724.
This driver supports the following CPUs:
- SH7710, SH7712, SH7763, SH7619, SH7724, and SH7757.
config SUNLANCE
tristate "Sun LANCE support"
@ -1463,7 +1464,7 @@ config FORCEDETH
config CS89x0
tristate "CS89x0 support"
depends on NET_ETHERNET && (ISA || EISA || MACH_IXDP2351 \
|| ARCH_IXDP2X01 || ARCH_PNX010X || MACH_MX31ADS)
|| ARCH_IXDP2X01 || MACH_MX31ADS)
---help---
Support for CS89x0 chipset based Ethernet cards. If you have a
network (Ethernet) card of this type, say Y and read the
@ -1477,7 +1478,7 @@ config CS89x0
config CS89x0_NONISA_IRQ
def_bool y
depends on CS89x0 != n
depends on MACH_IXDP2351 || ARCH_IXDP2X01 || ARCH_PNX010X || MACH_MX31ADS
depends on MACH_IXDP2351 || ARCH_IXDP2X01 || MACH_MX31ADS
config TC35815
tristate "TOSHIBA TC35815 Ethernet support"
@ -1659,6 +1660,7 @@ config R6040
depends on NET_PCI && PCI
select CRC32
select MII
select PHYLIB
help
This is a driver for the R6040 Fast Ethernet MACs found in the
RDC R-321x System-on-chips.
@ -1748,11 +1750,12 @@ config TLAN
Please email feedback to <torben.mathiasen@compaq.com>.
config KS8842
tristate "Micrel KSZ8842"
depends on HAS_IOMEM
tristate "Micrel KSZ8841/42 with generic bus interface"
depends on HAS_IOMEM && DMA_ENGINE
help
This platform driver is for Micrel KSZ8842 / KS8842
2-port ethernet switch chip (managed, VLAN, QoS).
This platform driver is for KSZ8841(1-port) / KS8842(2-port)
ethernet switch chip (managed, VLAN, QoS) from Micrel or
Timberdale(FPGA).
config KS8851
tristate "Micrel KS8851 SPI"
@ -2601,6 +2604,29 @@ config CHELSIO_T4
To compile this driver as a module choose M here; the module
will be called cxgb4.
config CHELSIO_T4VF_DEPENDS
tristate
depends on PCI && INET
default y
config CHELSIO_T4VF
tristate "Chelsio Communications T4 Virtual Function Ethernet support"
depends on CHELSIO_T4VF_DEPENDS
help
This driver supports Chelsio T4-based gigabit and 10Gb Ethernet
adapters with PCI-E SR-IOV Virtual Functions.
For general information about Chelsio and our products, visit
our website at <http://www.chelsio.com>.
For customer support, please visit our customer support page at
<http://www.chelsio.com/support.htm>.
Please send feedback to <linux-bugs@chelsio.com>.
To compile this driver as a module choose M here; the module
will be called cxgb4vf.
config EHEA
tristate "eHEA Ethernet support"
depends on IBMEBUS && INET && SPARSEMEM
@ -2614,7 +2640,6 @@ config EHEA
config ENIC
tristate "Cisco VIC Ethernet NIC Support"
depends on PCI && INET
select INET_LRO
help
This enables the support for the Cisco VIC Ethernet card.

View File

@ -20,6 +20,7 @@ obj-$(CONFIG_IP1000) += ipg.o
obj-$(CONFIG_CHELSIO_T1) += chelsio/
obj-$(CONFIG_CHELSIO_T3) += cxgb3/
obj-$(CONFIG_CHELSIO_T4) += cxgb4/
obj-$(CONFIG_CHELSIO_T4VF) += cxgb4vf/
obj-$(CONFIG_EHEA) += ehea/
obj-$(CONFIG_CAN) += can/
obj-$(CONFIG_BONDING) += bonding/
@ -83,8 +84,7 @@ obj-$(CONFIG_FEALNX) += fealnx.o
obj-$(CONFIG_TIGON3) += tg3.o
obj-$(CONFIG_BNX2) += bnx2.o
obj-$(CONFIG_CNIC) += cnic.o
obj-$(CONFIG_BNX2X) += bnx2x.o
bnx2x-objs := bnx2x_main.o bnx2x_link.o
obj-$(CONFIG_BNX2X) += bnx2x/
spidernet-y += spider_net.o spider_net_ethtool.o
obj-$(CONFIG_SPIDER_NET) += spidernet.o sungem_phy.o
obj-$(CONFIG_GELIC_NET) += ps3_gelic.o
@ -275,7 +275,7 @@ obj-$(CONFIG_USB_USBNET) += usb/
obj-$(CONFIG_USB_ZD1201) += usb/
obj-$(CONFIG_USB_IPHETH) += usb/
obj-y += wireless/
obj-$(CONFIG_WLAN) += wireless/
obj-$(CONFIG_NET_TULIP) += tulip/
obj-$(CONFIG_HAMRADIO) += hamradio/
obj-$(CONFIG_IRDA) += irda/

View File

@ -218,12 +218,6 @@ static struct devprobe2 isa_probes[] __initdata = {
#ifdef CONFIG_EL1 /* 3c501 */
{el1_probe, 0},
#endif
#ifdef CONFIG_WAVELAN /* WaveLAN */
{wavelan_probe, 0},
#endif
#ifdef CONFIG_ARLAN /* Aironet */
{arlan_probe, 0},
#endif
#ifdef CONFIG_EL16 /* 3c507 */
{el16_probe, 0},
#endif

View File

@ -211,7 +211,7 @@ static int __init ac_probe1(int ioaddr, struct net_device *dev)
retval = request_irq(dev->irq, ei_interrupt, 0, DRV_NAME, dev);
if (retval) {
printk (" nothing! Unable to get IRQ %d.\n", dev->irq);
goto out1;
goto out;
}
printk(" IRQ %d, %s port\n", dev->irq, port_name[dev->if_port]);

View File

@ -37,69 +37,6 @@
#define VERSION "arcnet: cap mode (`c') encapsulation support loaded.\n"
static void rx(struct net_device *dev, int bufnum,
struct archdr *pkthdr, int length);
static int build_header(struct sk_buff *skb,
struct net_device *dev,
unsigned short type,
uint8_t daddr);
static int prepare_tx(struct net_device *dev, struct archdr *pkt, int length,
int bufnum);
static int ack_tx(struct net_device *dev, int acked);
static struct ArcProto capmode_proto =
{
'r',
XMTU,
0,
rx,
build_header,
prepare_tx,
NULL,
ack_tx
};
static void arcnet_cap_init(void)
{
int count;
for (count = 1; count <= 8; count++)
if (arc_proto_map[count] == arc_proto_default)
arc_proto_map[count] = &capmode_proto;
/* for cap mode, we only set the bcast proto if there's no better one */
if (arc_bcast_proto == arc_proto_default)
arc_bcast_proto = &capmode_proto;
arc_proto_default = &capmode_proto;
arc_raw_proto = &capmode_proto;
}
#ifdef MODULE
static int __init capmode_module_init(void)
{
printk(VERSION);
arcnet_cap_init();
return 0;
}
static void __exit capmode_module_exit(void)
{
arcnet_unregister_proto(&capmode_proto);
}
module_init(capmode_module_init);
module_exit(capmode_module_exit);
MODULE_LICENSE("GPL");
#endif /* MODULE */
/* packet receiver */
static void rx(struct net_device *dev, int bufnum,
struct archdr *pkthdr, int length)
@ -231,65 +168,107 @@ static int prepare_tx(struct net_device *dev, struct archdr *pkt, int length,
BUGMSG(D_DURING, "prepare_tx: length=%d ofs=%d\n",
length,ofs);
// Copy the arcnet-header + the protocol byte down:
/* Copy the arcnet-header + the protocol byte down: */
lp->hw.copy_to_card(dev, bufnum, 0, hard, ARC_HDR_SIZE);
lp->hw.copy_to_card(dev, bufnum, ofs, &pkt->soft.cap.proto,
sizeof(pkt->soft.cap.proto));
// Skip the extra integer we have written into it as a cookie
// but write the rest of the message:
/* Skip the extra integer we have written into it as a cookie
but write the rest of the message: */
lp->hw.copy_to_card(dev, bufnum, ofs+1,
((unsigned char*)&pkt->soft.cap.mes),length-1);
lp->lastload_dest = hard->dest;
return 1; /* done */
return 1; /* done */
}
static int ack_tx(struct net_device *dev, int acked)
{
struct arcnet_local *lp = netdev_priv(dev);
struct sk_buff *ackskb;
struct archdr *ackpkt;
int length=sizeof(struct arc_cap);
struct arcnet_local *lp = netdev_priv(dev);
struct sk_buff *ackskb;
struct archdr *ackpkt;
int length=sizeof(struct arc_cap);
BUGMSG(D_DURING, "capmode: ack_tx: protocol: %x: result: %d\n",
lp->outgoing.skb->protocol, acked);
BUGMSG(D_DURING, "capmode: ack_tx: protocol: %x: result: %d\n",
lp->outgoing.skb->protocol, acked);
BUGLVL(D_SKB) arcnet_dump_skb(dev, lp->outgoing.skb, "ack_tx");
BUGLVL(D_SKB) arcnet_dump_skb(dev, lp->outgoing.skb, "ack_tx");
/* Now alloc a skb to send back up through the layers: */
ackskb = alloc_skb(length + ARC_HDR_SIZE , GFP_ATOMIC);
if (ackskb == NULL) {
BUGMSG(D_NORMAL, "Memory squeeze, can't acknowledge.\n");
goto free_outskb;
}
/* Now alloc a skb to send back up through the layers: */
ackskb = alloc_skb(length + ARC_HDR_SIZE , GFP_ATOMIC);
if (ackskb == NULL) {
BUGMSG(D_NORMAL, "Memory squeeze, can't acknowledge.\n");
goto free_outskb;
}
skb_put(ackskb, length + ARC_HDR_SIZE );
ackskb->dev = dev;
skb_put(ackskb, length + ARC_HDR_SIZE );
ackskb->dev = dev;
skb_reset_mac_header(ackskb);
ackpkt = (struct archdr *)skb_mac_header(ackskb);
/* skb_pull(ackskb, ARC_HDR_SIZE); */
skb_reset_mac_header(ackskb);
ackpkt = (struct archdr *)skb_mac_header(ackskb);
/* skb_pull(ackskb, ARC_HDR_SIZE); */
skb_copy_from_linear_data(lp->outgoing.skb, ackpkt,
ARC_HDR_SIZE + sizeof(struct arc_cap));
ackpkt->soft.cap.proto = 0; /* using protocol 0 for acknowledge */
ackpkt->soft.cap.mes.ack=acked;
skb_copy_from_linear_data(lp->outgoing.skb, ackpkt,
ARC_HDR_SIZE + sizeof(struct arc_cap));
ackpkt->soft.cap.proto=0; /* using protocol 0 for acknowledge */
ackpkt->soft.cap.mes.ack=acked;
BUGMSG(D_PROTO, "Ackknowledge for cap packet %x.\n",
*((int*)&ackpkt->soft.cap.cookie[0]));
BUGMSG(D_PROTO, "Ackknowledge for cap packet %x.\n",
*((int*)&ackpkt->soft.cap.cookie[0]));
ackskb->protocol = cpu_to_be16(ETH_P_ARCNET);
ackskb->protocol = cpu_to_be16(ETH_P_ARCNET);
BUGLVL(D_SKB) arcnet_dump_skb(dev, ackskb, "ack_tx_recv");
netif_rx(ackskb);
BUGLVL(D_SKB) arcnet_dump_skb(dev, ackskb, "ack_tx_recv");
netif_rx(ackskb);
free_outskb:
dev_kfree_skb_irq(lp->outgoing.skb);
lp->outgoing.proto = NULL; /* We are always finished when in this protocol */
free_outskb:
dev_kfree_skb_irq(lp->outgoing.skb);
lp->outgoing.proto = NULL; /* We are always finished when in this protocol */
return 0;
return 0;
}
static struct ArcProto capmode_proto =
{
'r',
XMTU,
0,
rx,
build_header,
prepare_tx,
NULL,
ack_tx
};
static void arcnet_cap_init(void)
{
int count;
for (count = 1; count <= 8; count++)
if (arc_proto_map[count] == arc_proto_default)
arc_proto_map[count] = &capmode_proto;
/* for cap mode, we only set the bcast proto if there's no better one */
if (arc_bcast_proto == arc_proto_default)
arc_bcast_proto = &capmode_proto;
arc_proto_default = &capmode_proto;
arc_raw_proto = &capmode_proto;
}
static int __init capmode_module_init(void)
{
printk(VERSION);
arcnet_cap_init();
return 0;
}
static void __exit capmode_module_exit(void)
{
arcnet_unregister_proto(&capmode_proto);
}
module_init(capmode_module_init);
module_exit(capmode_module_exit);
MODULE_LICENSE("GPL");

View File

@ -90,14 +90,14 @@ static int __init com20020isa_probe(struct net_device *dev)
outb(0, _INTMASK);
dev->irq = probe_irq_off(airqmask);
if (dev->irq <= 0) {
if ((int)dev->irq <= 0) {
BUGMSG(D_INIT_REASONS, "Autoprobe IRQ failed first time\n");
airqmask = probe_irq_on();
outb(NORXflag, _INTMASK);
udelay(5);
outb(0, _INTMASK);
dev->irq = probe_irq_off(airqmask);
if (dev->irq <= 0) {
if ((int)dev->irq <= 0) {
BUGMSG(D_NORMAL, "Autoprobe IRQ failed.\n");
err = -ENODEV;
goto out;

View File

@ -213,7 +213,7 @@ static int __init com90io_probe(struct net_device *dev)
outb(0, _INTMASK);
dev->irq = probe_irq_off(airqmask);
if (dev->irq <= 0) {
if ((int)dev->irq <= 0) {
BUGMSG(D_INIT_REASONS, "Autoprobe IRQ failed\n");
goto err_out;
}
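The (int) casts in these two probe hunks matter because probe_irq_off() returns a plain int that can legitimately be zero or negative, while the irq field it is stored in is (presumably) unsigned; compared as unsigned, a negative result wraps to a huge positive number and the failure test never fires. A minimal illustration of the pitfall:

	unsigned int irq = (unsigned int)-5;	/* e.g. a failed probe result */

	if (irq <= 0)				/* unsigned compare: false */
		pr_info("never reached\n");

	if ((int)irq <= 0)			/* signed compare: failure caught */
		pr_info("autoprobe failed\n");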

View File

@ -738,6 +738,17 @@ static void eth_set_mcast_list(struct net_device *dev)
struct netdev_hw_addr *ha;
u8 diffs[ETH_ALEN], *addr;
int i;
static const u8 allmulti[] = { 0x01, 0x00, 0x00, 0x00, 0x00, 0x00 };
if (dev->flags & IFF_ALLMULTI) {
for (i = 0; i < ETH_ALEN; i++) {
__raw_writel(allmulti[i], &port->regs->mcast_addr[i]);
__raw_writel(allmulti[i], &port->regs->mcast_mask[i]);
}
__raw_writel(DEFAULT_RX_CNTRL0 | RX_CNTRL0_ADDR_FLTR_EN,
&port->regs->rx_control[0]);
return;
}
if ((dev->flags & IFF_PROMISC) || netdev_mc_empty(dev)) {
__raw_writel(DEFAULT_RX_CNTRL0 & ~RX_CNTRL0_ADDR_FLTR_EN,
@ -771,7 +782,8 @@ static int eth_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
if (!netif_running(dev))
return -EINVAL;
return phy_mii_ioctl(port->phydev, if_mii(req), cmd);
return phy_mii_ioctl(port->phydev, req, cmd);
}
/* ethtool support */

View File

@ -822,6 +822,9 @@ static int w90p910_ether_open(struct net_device *dev)
w90p910_set_global_maccmd(dev);
w90p910_enable_rx(dev, 1);
clk_enable(ether->rmiiclk);
clk_enable(ether->clk);
ether->rx_packets = 0x0;
ether->rx_bytes = 0x0;

View File

@ -811,10 +811,8 @@ static int net_close(struct net_device *dev)
/* No statistic counters on the chip to update. */
/* Disable the IRQ on boards of fmv18x where it is feasible. */
if (lp->jumpered) {
if (lp->jumpered)
outb(0x00, ioaddr + IOCONFIG1);
free_irq(dev->irq, dev);
}
/* Power-down the chip. Green, green, green! */
outb(0x00, ioaddr + CONFIG_1);

Some files were not shown because too many files have changed in this diff.