
Merge branch 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6

* 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6: (107 commits)
  smc911x: fix compilation breakage when debug is on
  [netdrvr] eexpress: minor corrections
  add NAPI support to sb1250-mac.c
  ixgb: ROUND_UP macro cleanup in drivers/net/ixgb
  e1000: ROUND_UP macro cleanup in drivers/net/e1000
  Generic HDLC sparse annotations
  e100: Optionally use I/O mode only to access register space
  e100: allow bad MAC address when running with invalid eeprom csum
  ehea: fix for dlpar support
  ehea: fix for sysfs entries
  3C509: Remove unnecessary include of <linux/pm_legacy.h>
  NetXen: Fix for vmalloc issues
  NetXen: Fixes for Power PC architecture
  NetXen: Port swap feature for multi port cards
  NetXen: Removal of redundant macros
  NetXen: Multi PCI support for Quad cards
  NetXen: Removal of redundant argument passing
  NetXen: Use multiple PCI functions
  [netdrvr e100] experiment with doing RX in a similar manner to eepro100
  [PATCH] ieee80211: add missing global needed by IEEE80211_DEBUG_XXXX
  ...
Linus Torvalds 2007-04-29 10:48:48 -07:00
commit e389f9aec6
170 changed files with 29443 additions and 6174 deletions


@ -236,6 +236,12 @@ X!Ilib/string.c
!Enet/core/dev.c
!Enet/ethernet/eth.c
!Iinclude/linux/etherdevice.h
!Edrivers/net/phy/phy.c
!Idrivers/net/phy/phy.c
!Edrivers/net/phy/phy_device.c
!Idrivers/net/phy/phy_device.c
!Edrivers/net/phy/mdio_bus.c
!Idrivers/net/phy/mdio_bus.c
<!-- FIXME: Removed for now since no structured comments in source
X!Enet/core/wireless.c
-->


@ -2,35 +2,88 @@
BCM43xx Linux Driver Project
============================
About this software
-------------------
The goal of this project is to develop a linux driver for Broadcom
BCM43xx chips, based on the specification at
http://bcm-specs.sipsolutions.net/
The project page is http://bcm43xx.berlios.de/
Requirements
Introduction
------------
1) Linux Kernel 2.6.16 or later
http://www.kernel.org/
Many of the wireless devices found in modern notebook computers are
based on the wireless chips produced by Broadcom. These devices have
been a problem for Linux users as there is no open-source driver
available. In addition, Broadcom has not released specifications
for the device, and driver availability has been limited to the
binary-only form used in the GPL versions of AP hardware such as the
Linksys WRT54G, and the Windows and OS X drivers. Before this project
began, the only way to use these devices was to use the Windows or
OS X drivers with either the Linuxant or ndiswrapper modules. There
is a strong penalty if this method is used, as loading the binary-only
module "taints" the kernel, and no kernel developer will help diagnose
any kernel problems.
You may want to configure your kernel with:
Development
-----------
CONFIG_DEBUG_FS (optional):
-> Kernel hacking
-> Debug Filesystem
This driver has been developed using
a clean-room technique that is described at
http://bcm-specs.sipsolutions.net/ReverseEngineeringProcess. For legal
reasons, none of the clean-room crew works on the Linux driver,
and none of the Linux developers sees anything but the specifications,
which are the ultimate product of the reverse-engineering group.
2) SoftMAC IEEE 802.11 Networking Stack extension and patched ieee80211
modules:
http://softmac.sipsolutions.net/
Software
--------
3) Firmware Files
Since the release of the 2.6.17 kernel, the bcm43xx driver has been
distributed with the kernel source, and is prebuilt in most, if not
all, distributions. There is, however, additional software that is
required. The firmware used by the chip is the intellectual property
of Broadcom and they have not given the bcm43xx team redistribution
rights to this firmware. Since we cannot legally redistribute
the firmware, we cannot include it with the driver. Furthermore, it
cannot be placed in the downloadable archives of any distributing
organization; therefore, the user is responsible for obtaining the
firmware and placing it in the appropriate location so that the driver
can find it when initializing.
Please try fwcutter. Fwcutter can extract the firmware from various
binary driver files. It supports driver files from Windows, MacOS and
Linux. You can get fwcutter from http://bcm43xx.berlios.de/.
Also, fwcutter comes with a README file for further instructions.
To help with this process, the bcm43xx developers provide a separate
program named bcm43xx-fwcutter to "cut" the firmware out of a
Windows or OS X driver and write the extracted files to the proper
location. This program is usually provided with the distribution;
however, it may be downloaded from
http://developer.berlios.de/project/showfiles.php?group_id=4547
The firmware is available in two versions. V3 firmware is used with
the in-kernel bcm43xx driver that uses a software MAC layer called
SoftMAC, and will have a microcode revision of 0x127 or smaller. The
V4 firmware is used by an out-of-kernel driver employing a variation of
the Devicescape MAC layer known as d80211. Once bcm43xx-d80211 reaches
a satisfactory level of development, it will replace bcm43xx-softmac
in the kernel as it is much more flexible and powerful.
A source for the latest V3 firmware is
http://downloads.openwrt.org/sources/wl_apsta-3.130.20.0.o
Once this file is downloaded, the command
'bcm43xx-fwcutter -w <dir> <filename>'
will extract the microcode and write it to directory
<dir>. The correct directory will depend on your distribution;
however, most use '/lib/firmware'. Once this step is completed,
the bcm43xx driver should load when the system is booted. To see
any messages relating to the driver, issue the command 'dmesg |
grep bcm43xx' from a terminal window. If there are any problems,
please send that output to Bcm43xx-dev@lists.berlios.de.
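As a rough end-to-end sketch (the URL and the /lib/firmware directory are
the ones mentioned above; wget and modprobe are merely examples of how the
file might be fetched and the driver loaded, so adjust for your system):
    # fetch a known V3 driver binary
    wget http://downloads.openwrt.org/sources/wl_apsta-3.130.20.0.o
    # extract the microcode into the firmware directory
    bcm43xx-fwcutter -w /lib/firmware wl_apsta-3.130.20.0.o
    # load the driver and check its messages
    modprobe bcm43xx
    dmesg | grep bcm43xx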
Although the driver has been in-kernel since 2.6.17, the earliest
version is quite limited in its capability. Patches that include
all features of later versions are available for the stable kernel
versions from 2.6.18. These will be needed if you use a BCM4318,
or a PCI Express version (BCM4311 and BCM4312). In addition, if you
have an early BCM4306 and more than 1 GB RAM, your kernel will need
to be patched. These patches, which are being updated regularly,
are available at ftp://lwfinger.dynalias.org/patches. Look for
combined_2.6.YY.patch. Of course you will need kernel source downloaded
from kernel.org, or the source from your distribution.
If you build your own kernel, please enable CONFIG_BCM43XX_DEBUG
and CONFIG_IEEE80211_SOFTMAC_DEBUG. The log information provided is
essential for solving any problems.
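For illustration only, applying one of these combined patches to a vanilla
tree might look like the following (YY stands for the kernel minor version,
exactly as in the patch file name; the steps are a generic sketch, not
distribution-specific instructions):
    cd linux-2.6.YY
    patch -p1 < combined_2.6.YY.patch
    # enable CONFIG_BCM43XX_DEBUG and CONFIG_IEEE80211_SOFTMAC_DEBUG here
    make menuconfig
    make && make modules_install && make install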


@ -1582,9 +1582,9 @@ S: Supported
HOST AP DRIVER
P: Jouni Malinen
M: jkmaline@cc.hut.fi
M: j@w1.fi
L: hostap@shmoo.com (subscribers-only)
L: linux-wireless@vger.kernel.org
L: hostap@shmoo.com
W: http://hostap.epitest.fi/
S: Maintained
@ -1811,6 +1811,7 @@ P: Jeff Kirsher
M: jeffrey.t.kirsher@intel.com
P: Auke Kok
M: auke-jan.h.kok@intel.com
L: e1000-devel@lists.sourceforge.net
W: http://sourceforge.net/projects/e1000/
S: Supported
@ -1825,6 +1826,7 @@ P: Jeff Kirsher
M: jeffrey.t.kirsher@intel.com
P: Auke Kok
M: auke-jan.h.kok@intel.com
L: e1000-devel@lists.sourceforge.net
W: http://sourceforge.net/projects/e1000/
S: Supported
@ -1839,6 +1841,7 @@ P: Jesse Brandeburg
M: jesse.brandeburg@intel.com
P: Auke Kok
M: auke-jan.h.kok@intel.com
L: e1000-devel@lists.sourceforge.net
W: http://sourceforge.net/projects/e1000/
S: Supported
@ -2500,6 +2503,19 @@ M: adaplas@gmail.com
L: linux-fbdev-devel@lists.sourceforge.net (subscribers-only)
S: Maintained
NETERION (S2IO) Xframe 10GbE DRIVER
P: Ramkrishna Vepa
M: ram.vepa@neterion.com
P: Rastapur Santosh
M: santosh.rastapur@neterion.com
P: Sivakumar Subramani
M: sivakumar.subramani@neterion.com
P: Sreenivasa Honnur
M: sreenivasa.honnur@neterion.com
L: netdev@vger.kernel.org
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/TitleIndex?anonymous
S: Supported
OPENCORES I2C BUS DRIVER
P: Peter Korsgaard
M: jacmet@sunsite.dk


@ -17,7 +17,8 @@
# 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
#
obj-y := sim_setup.o sim_mem.o sim_time.o sim_int.o sim_cmdline.o
obj-y := sim_platform.o sim_setup.o sim_mem.o sim_time.o sim_int.o \
sim_cmdline.o
obj-$(CONFIG_EARLY_PRINTK) += sim_console.o
obj-$(CONFIG_SMP) += sim_smp.o


@ -0,0 +1,35 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2007 by Ralf Baechle (ralf@linux-mips.org)
*/
#include <linux/init.h>
#include <linux/if_ether.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
static char mipsnet_string[] = "mipsnet";
static struct platform_device eth1_device = {
.name = mipsnet_string,
.id = 0,
};
/*
* Create a platform device for the MIPSsim Ethernet (mipsnet)
* interface emulated by the simulator.
*/
static int __init mipsnet_devinit(void)
{
int err;
err = platform_device_register(&eth1_device);
if (err)
printk(KERN_ERR "%s: registration failed\n", mipsnet_string);
return err;
}
device_initcall(mipsnet_devinit);


@ -210,6 +210,9 @@ int ucc_fast_init(struct ucc_fast_info * uf_info, struct ucc_fast_private ** ucc
uf_regs = uccf->uf_regs;
uccf->p_ucce = (u32 *) & (uf_regs->ucce);
uccf->p_uccm = (u32 *) & (uf_regs->uccm);
#ifdef CONFIG_UGETH_TX_ON_DEMAND
uccf->p_utodr = (u16 *) & (uf_regs->utodr);
#endif
#ifdef STATISTICS
uccf->tx_frames = 0;
uccf->rx_frames = 0;


@ -3,7 +3,7 @@
*
* Michael MIC (IEEE 802.11i/TKIP) keyed digest
*
* Copyright (c) 2004 Jouni Malinen <jkmaline@cc.hut.fi>
* Copyright (c) 2004 Jouni Malinen <j@w1.fi>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
@ -173,4 +173,4 @@ module_exit(michael_mic_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Michael MIC");
MODULE_AUTHOR("Jouni Malinen <jkmaline@cc.hut.fi>");
MODULE_AUTHOR("Jouni Malinen <j@w1.fi>");


@ -83,7 +83,6 @@ static int max_interrupt_work = 10;
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/pm.h>
#include <linux/pm_legacy.h>
#include <linux/skbuff.h>
#include <linux/delay.h> /* for udelay() */
#include <linux/spinlock.h>


@ -486,8 +486,8 @@ config SGI_IOC3_ETH_HW_TX_CSUM
enables offloading for checksums on transmit. If unsure, say Y.
config MIPS_SIM_NET
tristate "MIPS simulator Network device (EXPERIMENTAL)"
depends on MIPS_SIM && EXPERIMENTAL
tristate "MIPS simulator Network device"
depends on NET_ETHERNET && MIPS_SIM
help
The MIPSNET device is a simple Ethernet network device which is
emulated by the MIPS Simulator.
@ -1444,7 +1444,8 @@ config CS89x0
config TC35815
tristate "TOSHIBA TC35815 Ethernet support"
depends on NET_PCI && PCI && TOSHIBA_JMR3927
depends on NET_PCI && PCI && MIPS
select MII
config DGRS
tristate "Digi Intl. RightSwitch SE-X support"
@ -2291,14 +2292,10 @@ config UGETH_FILTERING
bool "Mac address filtering support"
depends on UCC_GETH
config UGETH_TX_ON_DEMOND
bool "Transmit on Demond support"
config UGETH_TX_ON_DEMAND
bool "Transmit on Demand support"
depends on UCC_GETH
config UGETH_HAS_GIGA
bool
depends on UCC_GETH && PPC_MPC836x
config MV643XX_ETH
tristate "MV-643XX Ethernet support"
depends on MOMENCO_OCELOT_C || MOMENCO_JAGUAR_ATX || MV64360 || MOMENCO_OCELOT_3 || (PPC_MULTIPLATFORM && PPC32)


@ -18,7 +18,7 @@ gianfar_driver-objs := gianfar.o \
gianfar_sysfs.o
obj-$(CONFIG_UCC_GETH) += ucc_geth_driver.o
ucc_geth_driver-objs := ucc_geth.o ucc_geth_phy.o
ucc_geth_driver-objs := ucc_geth.o ucc_geth_mii.o
#
# link order important here


@ -4,8 +4,6 @@
obj-$(CONFIG_CHELSIO_T1) += cxgb.o
cxgb-$(CONFIG_CHELSIO_T1_1G) += ixf1010.o mac.o mv88e1xxx.o vsc7326.o vsc8244.o
cxgb-$(CONFIG_CHELSIO_T1_1G) += mac.o mv88e1xxx.o vsc7326.o
cxgb-objs := cxgb2.o espi.o tp.o pm3393.o sge.o subr.o \
mv88x201x.o my3126.o $(cxgb-y)


@ -322,9 +322,9 @@ struct board_info {
unsigned char mdio_mdiinv;
unsigned char mdio_mdc;
unsigned char mdio_phybaseaddr;
struct gmac *gmac;
struct gphy *gphy;
struct mdio_ops *mdio_ops;
const struct gmac *gmac;
const struct gphy *gphy;
const struct mdio_ops *mdio_ops;
const char *desc;
};


@ -100,7 +100,7 @@ struct cphy {
u32 elmer_gpo;
struct cphy_ops *ops; /* PHY operations */
const struct cphy_ops *ops; /* PHY operations */
int (*mdio_read)(adapter_t *adapter, int phy_addr, int mmd_addr,
int reg_addr, unsigned int *val);
int (*mdio_write)(adapter_t *adapter, int phy_addr, int mmd_addr,
@ -136,7 +136,7 @@ static inline int simple_mdio_write(struct cphy *cphy, int reg,
/* Convenience initializer */
static inline void cphy_init(struct cphy *phy, adapter_t *adapter,
int phy_addr, struct cphy_ops *phy_ops,
struct mdio_ops *mdio_ops)
const struct mdio_ops *mdio_ops)
{
phy->adapter = adapter;
phy->addr = phy_addr;
@ -151,7 +151,7 @@ static inline void cphy_init(struct cphy *phy, adapter_t *adapter,
struct gphy {
/* Construct a PHY instance with the given PHY address */
struct cphy *(*create)(adapter_t *adapter, int phy_addr,
struct mdio_ops *mdio_ops);
const struct mdio_ops *mdio_ops);
/*
* Reset the PHY chip. This resets the whole PHY chip, not individual
@ -160,11 +160,9 @@ struct gphy {
int (*reset)(adapter_t *adapter);
};
extern struct gphy t1_my3126_ops;
extern struct gphy t1_mv88e1xxx_ops;
extern struct gphy t1_vsc8244_ops;
extern struct gphy t1_xpak_ops;
extern struct gphy t1_mv88x201x_ops;
extern struct gphy t1_dummy_phy_ops;
extern const struct gphy t1_my3126_ops;
extern const struct gphy t1_mv88e1xxx_ops;
extern const struct gphy t1_vsc8244_ops;
extern const struct gphy t1_mv88x201x_ops;
#endif /* _CXGB_CPHY_H_ */


@ -126,7 +126,7 @@ typedef struct _cmac_instance cmac_instance;
struct cmac {
struct cmac_statistics stats;
adapter_t *adapter;
struct cmac_ops *ops;
const struct cmac_ops *ops;
cmac_instance *instance;
};
@ -136,11 +136,7 @@ struct gmac {
int (*reset)(adapter_t *);
};
extern struct gmac t1_pm3393_ops;
extern struct gmac t1_chelsio_mac_ops;
extern struct gmac t1_vsc7321_ops;
extern struct gmac t1_vsc7326_ops;
extern struct gmac t1_ixf1010_ops;
extern struct gmac t1_dummy_mac_ops;
extern const struct gmac t1_pm3393_ops;
extern const struct gmac t1_vsc7326_ops;
#endif /* _CXGB_GMAC_H_ */


@ -1,505 +0,0 @@
/* $Date: 2005/11/12 02:13:49 $ $RCSfile: ixf1010.c,v $ $Revision: 1.36 $ */
#include "gmac.h"
#include "elmer0.h"
/* Update fast changing statistics every 15 seconds */
#define STATS_TICK_SECS 15
/* 30 minutes for full statistics update */
#define MAJOR_UPDATE_TICKS (1800 / STATS_TICK_SECS)
/*
* The IXF1010 can handle frames up to 16383 bytes but it's optimized for
* frames up to 9831 (0x2667) bytes, so we limit jumbo frame size to this.
* This length includes ethernet header and FCS.
*/
#define MAX_FRAME_SIZE 0x2667
/* MAC registers */
enum {
/* Per-port registers */
REG_MACADDR_LOW = 0,
REG_MACADDR_HIGH = 0x4,
REG_FDFC_TYPE = 0xC,
REG_FC_TX_TIMER_VALUE = 0x1c,
REG_IPG_RX_TIME1 = 0x28,
REG_IPG_RX_TIME2 = 0x2c,
REG_IPG_TX_TIME = 0x30,
REG_PAUSE_THRES = 0x38,
REG_MAX_FRAME_SIZE = 0x3c,
REG_RGMII_SPEED = 0x40,
REG_FC_ENABLE = 0x48,
REG_DISCARD_CTRL_FRAMES = 0x54,
REG_DIVERSE_CONFIG = 0x60,
REG_RX_FILTER = 0x64,
REG_MC_ADDR_LOW = 0x68,
REG_MC_ADDR_HIGH = 0x6c,
REG_RX_OCTETS_OK = 0x80,
REG_RX_OCTETS_BAD = 0x84,
REG_RX_UC_PKTS = 0x88,
REG_RX_MC_PKTS = 0x8c,
REG_RX_BC_PKTS = 0x90,
REG_RX_FCS_ERR = 0xb0,
REG_RX_TAGGED = 0xb4,
REG_RX_DATA_ERR = 0xb8,
REG_RX_ALIGN_ERR = 0xbc,
REG_RX_LONG_ERR = 0xc0,
REG_RX_JABBER_ERR = 0xc4,
REG_RX_PAUSE_FRAMES = 0xc8,
REG_RX_UNKNOWN_CTRL_FRAMES = 0xcc,
REG_RX_VERY_LONG_ERR = 0xd0,
REG_RX_RUNT_ERR = 0xd4,
REG_RX_SHORT_ERR = 0xd8,
REG_RX_SYMBOL_ERR = 0xe4,
REG_TX_OCTETS_OK = 0x100,
REG_TX_OCTETS_BAD = 0x104,
REG_TX_UC_PKTS = 0x108,
REG_TX_MC_PKTS = 0x10c,
REG_TX_BC_PKTS = 0x110,
REG_TX_EXCESSIVE_LEN_DROP = 0x14c,
REG_TX_UNDERRUN = 0x150,
REG_TX_TAGGED = 0x154,
REG_TX_PAUSE_FRAMES = 0x15C,
/* Global registers */
REG_PORT_ENABLE = 0x1400,
REG_JTAG_ID = 0x1430,
RX_FIFO_HIGH_WATERMARK_BASE = 0x1600,
RX_FIFO_LOW_WATERMARK_BASE = 0x1628,
RX_FIFO_FRAMES_REMOVED_BASE = 0x1650,
REG_RX_ERR_DROP = 0x167c,
REG_RX_FIFO_OVERFLOW_EVENT = 0x1680,
TX_FIFO_HIGH_WATERMARK_BASE = 0x1800,
TX_FIFO_LOW_WATERMARK_BASE = 0x1828,
TX_FIFO_XFER_THRES_BASE = 0x1850,
REG_TX_FIFO_OVERFLOW_EVENT = 0x1878,
REG_TX_FIFO_OOS_EVENT = 0x1884,
TX_FIFO_FRAMES_REMOVED_BASE = 0x1888,
REG_SPI_RX_BURST = 0x1c00,
REG_SPI_RX_TRAINING = 0x1c04,
REG_SPI_RX_CALENDAR = 0x1c08,
REG_SPI_TX_SYNC = 0x1c0c
};
enum { /* RMON registers */
REG_RxOctetsTotalOK = 0x80,
REG_RxOctetsBad = 0x84,
REG_RxUCPkts = 0x88,
REG_RxMCPkts = 0x8c,
REG_RxBCPkts = 0x90,
REG_RxJumboPkts = 0xac,
REG_RxFCSErrors = 0xb0,
REG_RxDataErrors = 0xb8,
REG_RxAlignErrors = 0xbc,
REG_RxLongErrors = 0xc0,
REG_RxJabberErrors = 0xc4,
REG_RxPauseMacControlCounter = 0xc8,
REG_RxVeryLongErrors = 0xd0,
REG_RxRuntErrors = 0xd4,
REG_RxShortErrors = 0xd8,
REG_RxSequenceErrors = 0xe0,
REG_RxSymbolErrors = 0xe4,
REG_TxOctetsTotalOK = 0x100,
REG_TxOctetsBad = 0x104,
REG_TxUCPkts = 0x108,
REG_TxMCPkts = 0x10c,
REG_TxBCPkts = 0x110,
REG_TxJumboPkts = 0x12C,
REG_TxTotalCollisions = 0x134,
REG_TxExcessiveLengthDrop = 0x14c,
REG_TxUnderrun = 0x150,
REG_TxCRCErrors = 0x158,
REG_TxPauseFrames = 0x15c
};
enum {
DIVERSE_CONFIG_PAD_ENABLE = 0x80,
DIVERSE_CONFIG_CRC_ADD = 0x40
};
#define MACREG_BASE 0
#define MACREG(mac, mac_reg) ((mac)->instance->mac_base + (mac_reg))
struct _cmac_instance {
u32 mac_base;
u32 index;
u32 version;
u32 ticks;
};
static void disable_port(struct cmac *mac)
{
u32 val;
t1_tpi_read(mac->adapter, REG_PORT_ENABLE, &val);
val &= ~(1 << mac->instance->index);
t1_tpi_write(mac->adapter, REG_PORT_ENABLE, val);
}
/*
* Read the current values of the RMON counters and add them to the cumulative
* port statistics. The HW RMON counters are cleared by this operation.
*/
static void port_stats_update(struct cmac *mac)
{
static struct {
unsigned int reg;
unsigned int offset;
} hw_stats[] = {
#define HW_STAT(name, stat_name) \
{ REG_##name, \
(&((struct cmac_statistics *)NULL)->stat_name) - (u64 *)NULL }
/* Rx stats */
HW_STAT(RxOctetsTotalOK, RxOctetsOK),
HW_STAT(RxOctetsBad, RxOctetsBad),
HW_STAT(RxUCPkts, RxUnicastFramesOK),
HW_STAT(RxMCPkts, RxMulticastFramesOK),
HW_STAT(RxBCPkts, RxBroadcastFramesOK),
HW_STAT(RxJumboPkts, RxJumboFramesOK),
HW_STAT(RxFCSErrors, RxFCSErrors),
HW_STAT(RxAlignErrors, RxAlignErrors),
HW_STAT(RxLongErrors, RxFrameTooLongErrors),
HW_STAT(RxVeryLongErrors, RxFrameTooLongErrors),
HW_STAT(RxPauseMacControlCounter, RxPauseFrames),
HW_STAT(RxDataErrors, RxDataErrors),
HW_STAT(RxJabberErrors, RxJabberErrors),
HW_STAT(RxRuntErrors, RxRuntErrors),
HW_STAT(RxShortErrors, RxRuntErrors),
HW_STAT(RxSequenceErrors, RxSequenceErrors),
HW_STAT(RxSymbolErrors, RxSymbolErrors),
/* Tx stats (skip collision stats as we are full-duplex only) */
HW_STAT(TxOctetsTotalOK, TxOctetsOK),
HW_STAT(TxOctetsBad, TxOctetsBad),
HW_STAT(TxUCPkts, TxUnicastFramesOK),
HW_STAT(TxMCPkts, TxMulticastFramesOK),
HW_STAT(TxBCPkts, TxBroadcastFramesOK),
HW_STAT(TxJumboPkts, TxJumboFramesOK),
HW_STAT(TxPauseFrames, TxPauseFrames),
HW_STAT(TxExcessiveLengthDrop, TxLengthErrors),
HW_STAT(TxUnderrun, TxUnderrun),
HW_STAT(TxCRCErrors, TxFCSErrors)
}, *p = hw_stats;
u64 *stats = (u64 *) &mac->stats;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(hw_stats); i++) {
u32 val;
t1_tpi_read(mac->adapter, MACREG(mac, p->reg), &val);
stats[p->offset] += val;
}
}
/* No-op interrupt operation as this MAC does not support interrupts */
static int mac_intr_op(struct cmac *mac)
{
return 0;
}
/* Expect MAC address to be in network byte order. */
static int mac_set_address(struct cmac *mac, u8 addr[6])
{
u32 addr_lo, addr_hi;
addr_lo = addr[2];
addr_lo = (addr_lo << 8) | addr[3];
addr_lo = (addr_lo << 8) | addr[4];
addr_lo = (addr_lo << 8) | addr[5];
addr_hi = addr[0];
addr_hi = (addr_hi << 8) | addr[1];
t1_tpi_write(mac->adapter, MACREG(mac, REG_MACADDR_LOW), addr_lo);
t1_tpi_write(mac->adapter, MACREG(mac, REG_MACADDR_HIGH), addr_hi);
return 0;
}
static int mac_get_address(struct cmac *mac, u8 addr[6])
{
u32 addr_lo, addr_hi;
t1_tpi_read(mac->adapter, MACREG(mac, REG_MACADDR_LOW), &addr_lo);
t1_tpi_read(mac->adapter, MACREG(mac, REG_MACADDR_HIGH), &addr_hi);
addr[0] = (u8) (addr_hi >> 8);
addr[1] = (u8) addr_hi;
addr[2] = (u8) (addr_lo >> 24);
addr[3] = (u8) (addr_lo >> 16);
addr[4] = (u8) (addr_lo >> 8);
addr[5] = (u8) addr_lo;
return 0;
}
/* This is intended to reset a port, not the whole MAC */
static int mac_reset(struct cmac *mac)
{
return 0;
}
static int mac_set_rx_mode(struct cmac *mac, struct t1_rx_mode *rm)
{
u32 val, new_mode;
adapter_t *adapter = mac->adapter;
u32 addr_lo, addr_hi;
u8 *addr;
t1_tpi_read(adapter, MACREG(mac, REG_RX_FILTER), &val);
new_mode = val & ~7;
if (!t1_rx_mode_promisc(rm) && mac->instance->version > 0)
new_mode |= 1; /* only set if version > 0 due to erratum */
if (!t1_rx_mode_promisc(rm) && !t1_rx_mode_allmulti(rm)
&& t1_rx_mode_mc_cnt(rm) <= 1)
new_mode |= 2;
if (new_mode != val)
t1_tpi_write(adapter, MACREG(mac, REG_RX_FILTER), new_mode);
switch (t1_rx_mode_mc_cnt(rm)) {
case 0:
t1_tpi_write(adapter, MACREG(mac, REG_MC_ADDR_LOW), 0);
t1_tpi_write(adapter, MACREG(mac, REG_MC_ADDR_HIGH), 0);
break;
case 1:
addr = t1_get_next_mcaddr(rm);
addr_lo = (addr[2] << 24) | (addr[3] << 16) | (addr[4] << 8) |
addr[5];
addr_hi = (addr[0] << 8) | addr[1];
t1_tpi_write(adapter, MACREG(mac, REG_MC_ADDR_LOW), addr_lo);
t1_tpi_write(adapter, MACREG(mac, REG_MC_ADDR_HIGH), addr_hi);
break;
default:
break;
}
return 0;
}
static int mac_set_mtu(struct cmac *mac, int mtu)
{
/* MAX_FRAME_SIZE includes header + FCS, mtu doesn't */
if (mtu > (MAX_FRAME_SIZE - 14 - 4))
return -EINVAL;
t1_tpi_write(mac->adapter, MACREG(mac, REG_MAX_FRAME_SIZE),
mtu + 14 + 4);
return 0;
}
static int mac_set_speed_duplex_fc(struct cmac *mac, int speed, int duplex,
int fc)
{
u32 val;
if (speed >= 0 && speed != SPEED_100 && speed != SPEED_1000)
return -1;
if (duplex >= 0 && duplex != DUPLEX_FULL)
return -1;
if (speed >= 0) {
val = speed == SPEED_100 ? 1 : 2;
t1_tpi_write(mac->adapter, MACREG(mac, REG_RGMII_SPEED), val);
}
t1_tpi_read(mac->adapter, MACREG(mac, REG_FC_ENABLE), &val);
val &= ~3;
if (fc & PAUSE_RX)
val |= 1;
if (fc & PAUSE_TX)
val |= 2;
t1_tpi_write(mac->adapter, MACREG(mac, REG_FC_ENABLE), val);
return 0;
}
static int mac_get_speed_duplex_fc(struct cmac *mac, int *speed, int *duplex,
int *fc)
{
u32 val;
if (duplex)
*duplex = DUPLEX_FULL;
if (speed) {
t1_tpi_read(mac->adapter, MACREG(mac, REG_RGMII_SPEED),
&val);
*speed = (val & 2) ? SPEED_1000 : SPEED_100;
}
if (fc) {
t1_tpi_read(mac->adapter, MACREG(mac, REG_FC_ENABLE), &val);
*fc = 0;
if (val & 1)
*fc |= PAUSE_RX;
if (val & 2)
*fc |= PAUSE_TX;
}
return 0;
}
static void enable_port(struct cmac *mac)
{
u32 val;
u32 index = mac->instance->index;
adapter_t *adapter = mac->adapter;
t1_tpi_read(adapter, MACREG(mac, REG_DIVERSE_CONFIG), &val);
val |= DIVERSE_CONFIG_CRC_ADD | DIVERSE_CONFIG_PAD_ENABLE;
t1_tpi_write(adapter, MACREG(mac, REG_DIVERSE_CONFIG), val);
if (mac->instance->version > 0)
t1_tpi_write(adapter, MACREG(mac, REG_RX_FILTER), 3);
else /* Don't enable unicast address filtering due to IXF1010 bug */
t1_tpi_write(adapter, MACREG(mac, REG_RX_FILTER), 2);
t1_tpi_read(adapter, REG_RX_ERR_DROP, &val);
val |= (1 << index);
t1_tpi_write(adapter, REG_RX_ERR_DROP, val);
/*
* Clear the port RMON registers by adding their current values to the
* cumulative port stats and then clearing the stats. Really.
*/
port_stats_update(mac);
memset(&mac->stats, 0, sizeof(struct cmac_statistics));
mac->instance->ticks = 0;
t1_tpi_read(adapter, REG_PORT_ENABLE, &val);
val |= (1 << index);
t1_tpi_write(adapter, REG_PORT_ENABLE, val);
index <<= 2;
if (is_T2(adapter)) {
/* T204: set the Fifo water level & threshold */
t1_tpi_write(adapter, RX_FIFO_HIGH_WATERMARK_BASE + index, 0x740);
t1_tpi_write(adapter, RX_FIFO_LOW_WATERMARK_BASE + index, 0x730);
t1_tpi_write(adapter, TX_FIFO_HIGH_WATERMARK_BASE + index, 0x600);
t1_tpi_write(adapter, TX_FIFO_LOW_WATERMARK_BASE + index, 0x1d0);
t1_tpi_write(adapter, TX_FIFO_XFER_THRES_BASE + index, 0x1100);
} else {
/*
* Set the TX Fifo Threshold to 0x400 instead of 0x100 to work around
* Underrun problem. Intel has blessed this solution.
*/
t1_tpi_write(adapter, TX_FIFO_XFER_THRES_BASE + index, 0x400);
}
}
/* IXF1010 ports do not have separate enables for TX and RX */
static int mac_enable(struct cmac *mac, int which)
{
if (which & (MAC_DIRECTION_RX | MAC_DIRECTION_TX))
enable_port(mac);
return 0;
}
static int mac_disable(struct cmac *mac, int which)
{
if (which & (MAC_DIRECTION_RX | MAC_DIRECTION_TX))
disable_port(mac);
return 0;
}
#define RMON_UPDATE(mac, name, stat_name) \
t1_tpi_read((mac)->adapter, MACREG(mac, REG_##name), &val); \
(mac)->stats.stat_name += val;
/*
* This function is called periodically to accumulate the current values of the
* RMON counters into the port statistics. Since the counters are only 32 bits
* some of them can overflow in less than a minute at GigE speeds, so this
* function should be called every 30 seconds or so.
*
* To cut down on reading costs we update only the octet counters at each tick
* and do a full update at major ticks, which can be every 30 minutes or more.
*/
static const struct cmac_statistics *mac_update_statistics(struct cmac *mac,
int flag)
{
if (flag == MAC_STATS_UPDATE_FULL ||
MAJOR_UPDATE_TICKS <= mac->instance->ticks) {
port_stats_update(mac);
mac->instance->ticks = 0;
} else {
u32 val;
RMON_UPDATE(mac, RxOctetsTotalOK, RxOctetsOK);
RMON_UPDATE(mac, TxOctetsTotalOK, TxOctetsOK);
mac->instance->ticks++;
}
return &mac->stats;
}
static void mac_destroy(struct cmac *mac)
{
kfree(mac);
}
static struct cmac_ops ixf1010_ops = {
.destroy = mac_destroy,
.reset = mac_reset,
.interrupt_enable = mac_intr_op,
.interrupt_disable = mac_intr_op,
.interrupt_clear = mac_intr_op,
.enable = mac_enable,
.disable = mac_disable,
.set_mtu = mac_set_mtu,
.set_rx_mode = mac_set_rx_mode,
.set_speed_duplex_fc = mac_set_speed_duplex_fc,
.get_speed_duplex_fc = mac_get_speed_duplex_fc,
.statistics_update = mac_update_statistics,
.macaddress_get = mac_get_address,
.macaddress_set = mac_set_address,
};
static int ixf1010_mac_reset(adapter_t *adapter)
{
u32 val;
t1_tpi_read(adapter, A_ELMER0_GPO, &val);
if ((val & 1) != 0) {
val &= ~1;
t1_tpi_write(adapter, A_ELMER0_GPO, val);
udelay(2);
}
val |= 1;
t1_tpi_write(adapter, A_ELMER0_GPO, val);
udelay(2);
t1_tpi_write(adapter, REG_PORT_ENABLE, 0);
return 0;
}
static struct cmac *ixf1010_mac_create(adapter_t *adapter, int index)
{
struct cmac *mac;
u32 val;
if (index > 9)
return NULL;
mac = kzalloc(sizeof(*mac) + sizeof(cmac_instance), GFP_KERNEL);
if (!mac)
return NULL;
mac->ops = &ixf1010_ops;
mac->instance = (cmac_instance *)(mac + 1);
mac->instance->mac_base = MACREG_BASE + (index * 0x200);
mac->instance->index = index;
mac->adapter = adapter;
mac->instance->ticks = 0;
t1_tpi_read(adapter, REG_JTAG_ID, &val);
mac->instance->version = val >> 28;
return mac;
}
struct gmac t1_ixf1010_ops = {
STATS_TICK_SECS,
ixf1010_mac_create,
ixf1010_mac_reset
};


@ -363,6 +363,6 @@ static struct cmac *mac_create(adapter_t *adapter, int index)
return mac;
}
struct gmac t1_chelsio_mac_ops = {
const struct gmac t1_chelsio_mac_ops = {
.create = mac_create
};


@ -354,7 +354,7 @@ static struct cphy_ops mv88e1xxx_ops = {
};
static struct cphy *mv88e1xxx_phy_create(adapter_t *adapter, int phy_addr,
struct mdio_ops *mdio_ops)
const struct mdio_ops *mdio_ops)
{
struct cphy *cphy = kzalloc(sizeof(*cphy), GFP_KERNEL);
@ -390,7 +390,7 @@ static int mv88e1xxx_phy_reset(adapter_t* adapter)
return 0;
}
struct gphy t1_mv88e1xxx_ops = {
mv88e1xxx_phy_create,
mv88e1xxx_phy_reset
const struct gphy t1_mv88e1xxx_ops = {
.create = mv88e1xxx_phy_create,
.reset = mv88e1xxx_phy_reset
};


@ -208,7 +208,7 @@ static struct cphy_ops mv88x201x_ops = {
};
static struct cphy *mv88x201x_phy_create(adapter_t *adapter, int phy_addr,
struct mdio_ops *mdio_ops)
const struct mdio_ops *mdio_ops)
{
u32 val;
struct cphy *cphy = kzalloc(sizeof(*cphy), GFP_KERNEL);
@ -252,7 +252,7 @@ static int mv88x201x_phy_reset(adapter_t *adapter)
return 0;
}
struct gphy t1_mv88x201x_ops = {
mv88x201x_phy_create,
mv88x201x_phy_reset
const struct gphy t1_mv88x201x_ops = {
.create = mv88x201x_phy_create,
.reset = mv88x201x_phy_reset
};


@ -166,7 +166,7 @@ static struct cphy_ops my3126_ops = {
};
static struct cphy *my3126_phy_create(adapter_t *adapter,
int phy_addr, struct mdio_ops *mdio_ops)
int phy_addr, const struct mdio_ops *mdio_ops)
{
struct cphy *cphy = kzalloc(sizeof (*cphy), GFP_KERNEL);
@ -201,7 +201,7 @@ static int my3126_phy_reset(adapter_t * adapter)
return 0;
}
struct gphy t1_my3126_ops = {
my3126_phy_create,
my3126_phy_reset
const struct gphy t1_my3126_ops = {
.create = my3126_phy_create,
.reset = my3126_phy_reset
};


@ -807,8 +807,8 @@ static int pm3393_mac_reset(adapter_t * adapter)
return successful_reset ? 0 : 1;
}
struct gmac t1_pm3393_ops = {
STATS_TICK_SECS,
pm3393_mac_create,
pm3393_mac_reset
const struct gmac t1_pm3393_ops = {
.stats_update_period = STATS_TICK_SECS,
.create = pm3393_mac_create,
.reset = pm3393_mac_reset,
};


@ -321,10 +321,10 @@ static int mi1_mdio_write(adapter_t *adapter, int phy_addr, int mmd_addr,
}
#if defined(CONFIG_CHELSIO_T1_1G) || defined(CONFIG_CHELSIO_T1_COUGAR)
static struct mdio_ops mi1_mdio_ops = {
mi1_mdio_init,
mi1_mdio_read,
mi1_mdio_write
static const struct mdio_ops mi1_mdio_ops = {
.init = mi1_mdio_init,
.read = mi1_mdio_read,
.write = mi1_mdio_write
};
#endif
@ -377,10 +377,10 @@ static int mi1_mdio_ext_write(adapter_t *adapter, int phy_addr, int mmd_addr,
return 0;
}
static struct mdio_ops mi1_mdio_ext_ops = {
mi1_mdio_init,
mi1_mdio_ext_read,
mi1_mdio_ext_write
static const struct mdio_ops mi1_mdio_ext_ops = {
.init = mi1_mdio_init,
.read = mi1_mdio_ext_read,
.write = mi1_mdio_ext_write
};
enum {
@ -392,63 +392,136 @@ enum {
CH_BRD_N204_4CU,
};
static struct board_info t1_board[] = {
static const struct board_info t1_board[] = {
{
.board = CHBT_BOARD_CHT110,
.port_number = 1,
.caps = SUPPORTED_10000baseT_Full,
.chip_term = CHBT_TERM_T1,
.chip_mac = CHBT_MAC_PM3393,
.chip_phy = CHBT_PHY_MY3126,
.clock_core = 125000000,
.clock_mc3 = 150000000,
.clock_mc4 = 125000000,
.espi_nports = 1,
.clock_elmer0 = 44,
.mdio_mdien = 1,
.mdio_mdiinv = 1,
.mdio_mdc = 1,
.mdio_phybaseaddr = 1,
.gmac = &t1_pm3393_ops,
.gphy = &t1_my3126_ops,
.mdio_ops = &mi1_mdio_ext_ops,
.desc = "Chelsio T110 1x10GBase-CX4 TOE",
},
{ CHBT_BOARD_CHT110, 1/*ports#*/,
SUPPORTED_10000baseT_Full /*caps*/, CHBT_TERM_T1,
CHBT_MAC_PM3393, CHBT_PHY_MY3126,
125000000/*clk-core*/, 150000000/*clk-mc3*/, 125000000/*clk-mc4*/,
1/*espi-ports*/, 0/*clk-cspi*/, 44/*clk-elmer0*/, 1/*mdien*/,
1/*mdiinv*/, 1/*mdc*/, 1/*phybaseaddr*/, &t1_pm3393_ops,
&t1_my3126_ops, &mi1_mdio_ext_ops,
"Chelsio T110 1x10GBase-CX4 TOE" },
{
.board = CHBT_BOARD_N110,
.port_number = 1,
.caps = SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE,
.chip_term = CHBT_TERM_T1,
.chip_mac = CHBT_MAC_PM3393,
.chip_phy = CHBT_PHY_88X2010,
.clock_core = 125000000,
.espi_nports = 1,
.clock_elmer0 = 44,
.mdio_mdien = 0,
.mdio_mdiinv = 0,
.mdio_mdc = 1,
.mdio_phybaseaddr = 0,
.gmac = &t1_pm3393_ops,
.gphy = &t1_mv88x201x_ops,
.mdio_ops = &mi1_mdio_ext_ops,
.desc = "Chelsio N110 1x10GBaseX NIC",
},
{ CHBT_BOARD_N110, 1/*ports#*/,
SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE /*caps*/, CHBT_TERM_T1,
CHBT_MAC_PM3393, CHBT_PHY_88X2010,
125000000/*clk-core*/, 0/*clk-mc3*/, 0/*clk-mc4*/,
1/*espi-ports*/, 0/*clk-cspi*/, 44/*clk-elmer0*/, 0/*mdien*/,
0/*mdiinv*/, 1/*mdc*/, 0/*phybaseaddr*/, &t1_pm3393_ops,
&t1_mv88x201x_ops, &mi1_mdio_ext_ops,
"Chelsio N110 1x10GBaseX NIC" },
{
.board = CHBT_BOARD_N210,
.port_number = 1,
.caps = SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE,
.chip_term = CHBT_TERM_T2,
.chip_mac = CHBT_MAC_PM3393,
.chip_phy = CHBT_PHY_88X2010,
.clock_core = 125000000,
.espi_nports = 1,
.clock_elmer0 = 44,
.mdio_mdien = 0,
.mdio_mdiinv = 0,
.mdio_mdc = 1,
.mdio_phybaseaddr = 0,
.gmac = &t1_pm3393_ops,
.gphy = &t1_mv88x201x_ops,
.mdio_ops = &mi1_mdio_ext_ops,
.desc = "Chelsio N210 1x10GBaseX NIC",
},
{ CHBT_BOARD_N210, 1/*ports#*/,
SUPPORTED_10000baseT_Full | SUPPORTED_FIBRE /*caps*/, CHBT_TERM_T2,
CHBT_MAC_PM3393, CHBT_PHY_88X2010,
125000000/*clk-core*/, 0/*clk-mc3*/, 0/*clk-mc4*/,
1/*espi-ports*/, 0/*clk-cspi*/, 44/*clk-elmer0*/, 0/*mdien*/,
0/*mdiinv*/, 1/*mdc*/, 0/*phybaseaddr*/, &t1_pm3393_ops,
&t1_mv88x201x_ops, &mi1_mdio_ext_ops,
"Chelsio N210 1x10GBaseX NIC" },
{
.board = CHBT_BOARD_CHT210,
.port_number = 1,
.caps = SUPPORTED_10000baseT_Full,
.chip_term = CHBT_TERM_T2,
.chip_mac = CHBT_MAC_PM3393,
.chip_phy = CHBT_PHY_88X2010,
.clock_core = 125000000,
.clock_mc3 = 133000000,
.clock_mc4 = 125000000,
.espi_nports = 1,
.clock_elmer0 = 44,
.mdio_mdien = 0,
.mdio_mdiinv = 0,
.mdio_mdc = 1,
.mdio_phybaseaddr = 0,
.gmac = &t1_pm3393_ops,
.gphy = &t1_mv88x201x_ops,
.mdio_ops = &mi1_mdio_ext_ops,
.desc = "Chelsio T210 1x10GBaseX TOE",
},
{ CHBT_BOARD_CHT210, 1/*ports#*/,
SUPPORTED_10000baseT_Full /*caps*/, CHBT_TERM_T2,
CHBT_MAC_PM3393, CHBT_PHY_88X2010,
125000000/*clk-core*/, 133000000/*clk-mc3*/, 125000000/*clk-mc4*/,
1/*espi-ports*/, 0/*clk-cspi*/, 44/*clk-elmer0*/, 0/*mdien*/,
0/*mdiinv*/, 1/*mdc*/, 0/*phybaseaddr*/, &t1_pm3393_ops,
&t1_mv88x201x_ops, &mi1_mdio_ext_ops,
"Chelsio T210 1x10GBaseX TOE" },
{ CHBT_BOARD_CHT210, 1/*ports#*/,
SUPPORTED_10000baseT_Full /*caps*/, CHBT_TERM_T2,
CHBT_MAC_PM3393, CHBT_PHY_MY3126,
125000000/*clk-core*/, 133000000/*clk-mc3*/, 125000000/*clk-mc4*/,
1/*espi-ports*/, 0/*clk-cspi*/, 44/*clk-elmer0*/, 1/*mdien*/,
1/*mdiinv*/, 1/*mdc*/, 1/*phybaseaddr*/, &t1_pm3393_ops,
&t1_my3126_ops, &mi1_mdio_ext_ops,
"Chelsio T210 1x10GBase-CX4 TOE" },
{
.board = CHBT_BOARD_CHT210,
.port_number = 1,
.caps = SUPPORTED_10000baseT_Full,
.chip_term = CHBT_TERM_T2,
.chip_mac = CHBT_MAC_PM3393,
.chip_phy = CHBT_PHY_MY3126,
.clock_core = 125000000,
.clock_mc3 = 133000000,
.clock_mc4 = 125000000,
.espi_nports = 1,
.clock_elmer0 = 44,
.mdio_mdien = 1,
.mdio_mdiinv = 1,
.mdio_mdc = 1,
.mdio_phybaseaddr = 1,
.gmac = &t1_pm3393_ops,
.gphy = &t1_my3126_ops,
.mdio_ops = &mi1_mdio_ext_ops,
.desc = "Chelsio T210 1x10GBase-CX4 TOE",
},
#ifdef CONFIG_CHELSIO_T1_1G
{ CHBT_BOARD_CHN204, 4/*ports#*/,
SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | SUPPORTED_100baseT_Half |
SUPPORTED_100baseT_Full | SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg |
SUPPORTED_PAUSE | SUPPORTED_TP /*caps*/, CHBT_TERM_T2, CHBT_MAC_VSC7321, CHBT_PHY_88E1111,
100000000/*clk-core*/, 0/*clk-mc3*/, 0/*clk-mc4*/,
4/*espi-ports*/, 0/*clk-cspi*/, 44/*clk-elmer0*/, 0/*mdien*/,
0/*mdiinv*/, 1/*mdc*/, 4/*phybaseaddr*/, &t1_vsc7326_ops,
&t1_mv88e1xxx_ops, &mi1_mdio_ops,
"Chelsio N204 4x100/1000BaseT NIC" },
{
.board = CHBT_BOARD_CHN204,
.port_number = 4,
.caps = SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full
| SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full
| SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg |
SUPPORTED_PAUSE | SUPPORTED_TP,
.chip_term = CHBT_TERM_T2,
.chip_mac = CHBT_MAC_VSC7321,
.chip_phy = CHBT_PHY_88E1111,
.clock_core = 100000000,
.espi_nports = 4,
.clock_elmer0 = 44,
.mdio_mdien = 0,
.mdio_mdiinv = 0,
.mdio_mdc = 0,
.mdio_phybaseaddr = 4,
.gmac = &t1_vsc7326_ops,
.gphy = &t1_mv88e1xxx_ops,
.mdio_ops = &mi1_mdio_ops,
.desc = "Chelsio N204 4x100/1000BaseT NIC",
},
#endif
};


@ -723,7 +723,7 @@ static int vsc7326_mac_reset(adapter_t *adapter)
return 0;
}
struct gmac t1_vsc7326_ops = {
const struct gmac t1_vsc7326_ops = {
.stats_update_period = STATS_TICK_SECS,
.create = vsc7326_mac_create,
.reset = vsc7326_mac_reset,


@ -1,367 +0,0 @@
/*
* This file is part of the Chelsio T2 Ethernet driver.
*
* Copyright (C) 2005 Chelsio Communications. All rights reserved.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the LICENSE file included in this
* release for licensing terms and conditions.
*/
#include "common.h"
#include "cphy.h"
#include "elmer0.h"
#ifndef ADVERTISE_PAUSE_CAP
# define ADVERTISE_PAUSE_CAP 0x400
#endif
#ifndef ADVERTISE_PAUSE_ASYM
# define ADVERTISE_PAUSE_ASYM 0x800
#endif
/* Gigabit MII registers */
#ifndef MII_CTRL1000
# define MII_CTRL1000 9
#endif
#ifndef ADVERTISE_1000FULL
# define ADVERTISE_1000FULL 0x200
# define ADVERTISE_1000HALF 0x100
#endif
/* VSC8244 PHY specific registers. */
enum {
VSC8244_INTR_ENABLE = 25,
VSC8244_INTR_STATUS = 26,
VSC8244_AUX_CTRL_STAT = 28,
};
enum {
VSC_INTR_RX_ERR = 1 << 0,
VSC_INTR_MS_ERR = 1 << 1, /* master/slave resolution error */
VSC_INTR_CABLE = 1 << 2, /* cable impairment */
VSC_INTR_FALSE_CARR = 1 << 3, /* false carrier */
VSC_INTR_MEDIA_CHG = 1 << 4, /* AMS media change */
VSC_INTR_RX_FIFO = 1 << 5, /* Rx FIFO over/underflow */
VSC_INTR_TX_FIFO = 1 << 6, /* Tx FIFO over/underflow */
VSC_INTR_DESCRAMBL = 1 << 7, /* descrambler lock-lost */
VSC_INTR_SYMBOL_ERR = 1 << 8, /* symbol error */
VSC_INTR_NEG_DONE = 1 << 10, /* autoneg done */
VSC_INTR_NEG_ERR = 1 << 11, /* autoneg error */
VSC_INTR_LINK_CHG = 1 << 13, /* link change */
VSC_INTR_ENABLE = 1 << 15, /* interrupt enable */
};
#define CFG_CHG_INTR_MASK (VSC_INTR_LINK_CHG | VSC_INTR_NEG_ERR | \
VSC_INTR_NEG_DONE)
#define INTR_MASK (CFG_CHG_INTR_MASK | VSC_INTR_TX_FIFO | VSC_INTR_RX_FIFO | \
VSC_INTR_ENABLE)
/* PHY specific auxiliary control & status register fields */
#define S_ACSR_ACTIPHY_TMR 0
#define M_ACSR_ACTIPHY_TMR 0x3
#define V_ACSR_ACTIPHY_TMR(x) ((x) << S_ACSR_ACTIPHY_TMR)
#define S_ACSR_SPEED 3
#define M_ACSR_SPEED 0x3
#define G_ACSR_SPEED(x) (((x) >> S_ACSR_SPEED) & M_ACSR_SPEED)
#define S_ACSR_DUPLEX 5
#define F_ACSR_DUPLEX (1 << S_ACSR_DUPLEX)
#define S_ACSR_ACTIPHY 6
#define F_ACSR_ACTIPHY (1 << S_ACSR_ACTIPHY)
/*
* Reset the PHY. This PHY completes reset immediately so we never wait.
*/
static int vsc8244_reset(struct cphy *cphy, int wait)
{
int err;
unsigned int ctl;
err = simple_mdio_read(cphy, MII_BMCR, &ctl);
if (err)
return err;
ctl &= ~BMCR_PDOWN;
ctl |= BMCR_RESET;
return simple_mdio_write(cphy, MII_BMCR, ctl);
}
static int vsc8244_intr_enable(struct cphy *cphy)
{
simple_mdio_write(cphy, VSC8244_INTR_ENABLE, INTR_MASK);
/* Enable interrupts through Elmer */
if (t1_is_asic(cphy->adapter)) {
u32 elmer;
t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
elmer |= ELMER0_GP_BIT1;
if (is_T2(cphy->adapter))
elmer |= ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4;
t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
}
return 0;
}
static int vsc8244_intr_disable(struct cphy *cphy)
{
simple_mdio_write(cphy, VSC8244_INTR_ENABLE, 0);
if (t1_is_asic(cphy->adapter)) {
u32 elmer;
t1_tpi_read(cphy->adapter, A_ELMER0_INT_ENABLE, &elmer);
elmer &= ~ELMER0_GP_BIT1;
if (is_T2(cphy->adapter))
elmer &= ~(ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4);
t1_tpi_write(cphy->adapter, A_ELMER0_INT_ENABLE, elmer);
}
return 0;
}
static int vsc8244_intr_clear(struct cphy *cphy)
{
u32 val;
u32 elmer;
/* Clear PHY interrupts by reading the register. */
simple_mdio_read(cphy, VSC8244_INTR_ENABLE, &val);
if (t1_is_asic(cphy->adapter)) {
t1_tpi_read(cphy->adapter, A_ELMER0_INT_CAUSE, &elmer);
elmer |= ELMER0_GP_BIT1;
if (is_T2(cphy->adapter))
elmer |= ELMER0_GP_BIT2|ELMER0_GP_BIT3|ELMER0_GP_BIT4;
t1_tpi_write(cphy->adapter, A_ELMER0_INT_CAUSE, elmer);
}
return 0;
}
/*
* Force the PHY speed and duplex. This also disables auto-negotiation, except
* for 1Gb/s, where auto-negotiation is mandatory.
*/
static int vsc8244_set_speed_duplex(struct cphy *phy, int speed, int duplex)
{
int err;
unsigned int ctl;
err = simple_mdio_read(phy, MII_BMCR, &ctl);
if (err)
return err;
if (speed >= 0) {
ctl &= ~(BMCR_SPEED100 | BMCR_SPEED1000 | BMCR_ANENABLE);
if (speed == SPEED_100)
ctl |= BMCR_SPEED100;
else if (speed == SPEED_1000)
ctl |= BMCR_SPEED1000;
}
if (duplex >= 0) {
ctl &= ~(BMCR_FULLDPLX | BMCR_ANENABLE);
if (duplex == DUPLEX_FULL)
ctl |= BMCR_FULLDPLX;
}
if (ctl & BMCR_SPEED1000) /* auto-negotiation required for 1Gb/s */
ctl |= BMCR_ANENABLE;
return simple_mdio_write(phy, MII_BMCR, ctl);
}
int t1_mdio_set_bits(struct cphy *phy, int mmd, int reg, unsigned int bits)
{
int ret;
unsigned int val;
ret = mdio_read(phy, mmd, reg, &val);
if (!ret)
ret = mdio_write(phy, mmd, reg, val | bits);
return ret;
}
static int vsc8244_autoneg_enable(struct cphy *cphy)
{
return t1_mdio_set_bits(cphy, 0, MII_BMCR,
BMCR_ANENABLE | BMCR_ANRESTART);
}
static int vsc8244_autoneg_restart(struct cphy *cphy)
{
return t1_mdio_set_bits(cphy, 0, MII_BMCR, BMCR_ANRESTART);
}
static int vsc8244_advertise(struct cphy *phy, unsigned int advertise_map)
{
int err;
unsigned int val = 0;
err = simple_mdio_read(phy, MII_CTRL1000, &val);
if (err)
return err;
val &= ~(ADVERTISE_1000HALF | ADVERTISE_1000FULL);
if (advertise_map & ADVERTISED_1000baseT_Half)
val |= ADVERTISE_1000HALF;
if (advertise_map & ADVERTISED_1000baseT_Full)
val |= ADVERTISE_1000FULL;
err = simple_mdio_write(phy, MII_CTRL1000, val);
if (err)
return err;
val = 1;
if (advertise_map & ADVERTISED_10baseT_Half)
val |= ADVERTISE_10HALF;
if (advertise_map & ADVERTISED_10baseT_Full)
val |= ADVERTISE_10FULL;
if (advertise_map & ADVERTISED_100baseT_Half)
val |= ADVERTISE_100HALF;
if (advertise_map & ADVERTISED_100baseT_Full)
val |= ADVERTISE_100FULL;
if (advertise_map & ADVERTISED_PAUSE)
val |= ADVERTISE_PAUSE_CAP;
if (advertise_map & ADVERTISED_ASYM_PAUSE)
val |= ADVERTISE_PAUSE_ASYM;
return simple_mdio_write(phy, MII_ADVERTISE, val);
}
static int vsc8244_get_link_status(struct cphy *cphy, int *link_ok,
int *speed, int *duplex, int *fc)
{
unsigned int bmcr, status, lpa, adv;
int err, sp = -1, dplx = -1, pause = 0;
err = simple_mdio_read(cphy, MII_BMCR, &bmcr);
if (!err)
err = simple_mdio_read(cphy, MII_BMSR, &status);
if (err)
return err;
if (link_ok) {
/*
* BMSR_LSTATUS is latch-low, so if it is 0 we need to read it
* once more to get the current link state.
*/
if (!(status & BMSR_LSTATUS))
err = simple_mdio_read(cphy, MII_BMSR, &status);
if (err)
return err;
*link_ok = (status & BMSR_LSTATUS) != 0;
}
if (!(bmcr & BMCR_ANENABLE)) {
dplx = (bmcr & BMCR_FULLDPLX) ? DUPLEX_FULL : DUPLEX_HALF;
if (bmcr & BMCR_SPEED1000)
sp = SPEED_1000;
else if (bmcr & BMCR_SPEED100)
sp = SPEED_100;
else
sp = SPEED_10;
} else if (status & BMSR_ANEGCOMPLETE) {
err = simple_mdio_read(cphy, VSC8244_AUX_CTRL_STAT, &status);
if (err)
return err;
dplx = (status & F_ACSR_DUPLEX) ? DUPLEX_FULL : DUPLEX_HALF;
sp = G_ACSR_SPEED(status);
if (sp == 0)
sp = SPEED_10;
else if (sp == 1)
sp = SPEED_100;
else
sp = SPEED_1000;
if (fc && dplx == DUPLEX_FULL) {
err = simple_mdio_read(cphy, MII_LPA, &lpa);
if (!err)
err = simple_mdio_read(cphy, MII_ADVERTISE,
&adv);
if (err)
return err;
if (lpa & adv & ADVERTISE_PAUSE_CAP)
pause = PAUSE_RX | PAUSE_TX;
else if ((lpa & ADVERTISE_PAUSE_CAP) &&
(lpa & ADVERTISE_PAUSE_ASYM) &&
(adv & ADVERTISE_PAUSE_ASYM))
pause = PAUSE_TX;
else if ((lpa & ADVERTISE_PAUSE_ASYM) &&
(adv & ADVERTISE_PAUSE_CAP))
pause = PAUSE_RX;
}
}
if (speed)
*speed = sp;
if (duplex)
*duplex = dplx;
if (fc)
*fc = pause;
return 0;
}
static int vsc8244_intr_handler(struct cphy *cphy)
{
unsigned int cause;
int err, cphy_cause = 0;
err = simple_mdio_read(cphy, VSC8244_INTR_STATUS, &cause);
if (err)
return err;
cause &= INTR_MASK;
if (cause & CFG_CHG_INTR_MASK)
cphy_cause |= cphy_cause_link_change;
if (cause & (VSC_INTR_RX_FIFO | VSC_INTR_TX_FIFO))
cphy_cause |= cphy_cause_fifo_error;
return cphy_cause;
}
static void vsc8244_destroy(struct cphy *cphy)
{
kfree(cphy);
}
static struct cphy_ops vsc8244_ops = {
.destroy = vsc8244_destroy,
.reset = vsc8244_reset,
.interrupt_enable = vsc8244_intr_enable,
.interrupt_disable = vsc8244_intr_disable,
.interrupt_clear = vsc8244_intr_clear,
.interrupt_handler = vsc8244_intr_handler,
.autoneg_enable = vsc8244_autoneg_enable,
.autoneg_restart = vsc8244_autoneg_restart,
.advertise = vsc8244_advertise,
.set_speed_duplex = vsc8244_set_speed_duplex,
.get_link_status = vsc8244_get_link_status
};
static struct cphy* vsc8244_phy_create(adapter_t *adapter, int phy_addr,
struct mdio_ops *mdio_ops)
{
struct cphy *cphy = kzalloc(sizeof(*cphy), GFP_KERNEL);
if (!cphy)
return NULL;
cphy_init(cphy, adapter, phy_addr, &vsc8244_ops, mdio_ops);
return cphy;
}
static int vsc8244_phy_reset(adapter_t* adapter)
{
return 0;
}
struct gphy t1_vsc8244_ops = {
vsc8244_phy_create,
vsc8244_phy_reset
};


@ -1,172 +0,0 @@
/* $Date: 2005/11/23 16:28:53 $ $RCSfile: vsc8244_reg.h,v $ $Revision: 1.1 $ */
#ifndef CHELSIO_MV8E1XXX_H
#define CHELSIO_MV8E1XXX_H
#ifndef BMCR_SPEED1000
# define BMCR_SPEED1000 0x40
#endif
#ifndef ADVERTISE_PAUSE
# define ADVERTISE_PAUSE 0x400
#endif
#ifndef ADVERTISE_PAUSE_ASYM
# define ADVERTISE_PAUSE_ASYM 0x800
#endif
/* Gigabit MII registers */
#define MII_GBMR 1 /* 1000Base-T mode register */
#define MII_GBCR 9 /* 1000Base-T control register */
#define MII_GBSR 10 /* 1000Base-T status register */
/* 1000Base-T control register fields */
#define GBCR_ADV_1000HALF 0x100
#define GBCR_ADV_1000FULL 0x200
#define GBCR_PREFER_MASTER 0x400
#define GBCR_MANUAL_AS_MASTER 0x800
#define GBCR_MANUAL_CONFIG_ENABLE 0x1000
/* 1000Base-T status register fields */
#define GBSR_LP_1000HALF 0x400
#define GBSR_LP_1000FULL 0x800
#define GBSR_REMOTE_OK 0x1000
#define GBSR_LOCAL_OK 0x2000
#define GBSR_LOCAL_MASTER 0x4000
#define GBSR_MASTER_FAULT 0x8000
/* Vitesse PHY interrupt status bits. */
#if 0
#define VSC8244_INTR_JABBER 0x0001
#define VSC8244_INTR_POLARITY_CHNG 0x0002
#define VSC8244_INTR_ENG_DETECT_CHNG 0x0010
#define VSC8244_INTR_DOWNSHIFT 0x0020
#define VSC8244_INTR_MDI_XOVER_CHNG 0x0040
#define VSC8244_INTR_FIFO_OVER_UNDER 0x0080
#define VSC8244_INTR_FALSE_CARRIER 0x0100
#define VSC8244_INTR_SYMBOL_ERROR 0x0200
#define VSC8244_INTR_LINK_CHNG 0x0400
#define VSC8244_INTR_AUTONEG_DONE 0x0800
#define VSC8244_INTR_PAGE_RECV 0x1000
#define VSC8244_INTR_DUPLEX_CHNG 0x2000
#define VSC8244_INTR_SPEED_CHNG 0x4000
#define VSC8244_INTR_AUTONEG_ERR 0x8000
#else
//#define VSC8244_INTR_JABBER 0x0001
//#define VSC8244_INTR_POLARITY_CHNG 0x0002
//#define VSC8244_INTR_BIT2 0x0004
//#define VSC8244_INTR_BIT3 0x0008
#define VSC8244_INTR_RX_ERR 0x0001
#define VSC8244_INTR_MASTER_SLAVE 0x0002
#define VSC8244_INTR_CABLE_IMPAIRED 0x0004
#define VSC8244_INTR_FALSE_CARRIER 0x0008
//#define VSC8244_INTR_ENG_DETECT_CHNG 0x0010
//#define VSC8244_INTR_DOWNSHIFT 0x0020
//#define VSC8244_INTR_MDI_XOVER_CHNG 0x0040
//#define VSC8244_INTR_FIFO_OVER_UNDER 0x0080
#define VSC8244_INTR_BIT4 0x0010
#define VSC8244_INTR_FIFO_RX 0x0020
#define VSC8244_INTR_FIFO_OVER_UNDER 0x0040
#define VSC8244_INTR_LOCK_LOST 0x0080
//#define VSC8244_INTR_FALSE_CARRIER 0x0100
//#define VSC8244_INTR_SYMBOL_ERROR 0x0200
//#define VSC8244_INTR_LINK_CHNG 0x0400
//#define VSC8244_INTR_AUTONEG_DONE 0x0800
#define VSC8244_INTR_SYMBOL_ERROR 0x0100
#define VSC8244_INTR_ENG_DETECT_CHNG 0x0200
#define VSC8244_INTR_AUTONEG_DONE 0x0400
#define VSC8244_INTR_AUTONEG_ERR 0x0800
//#define VSC8244_INTR_PAGE_RECV 0x1000
//#define VSC8244_INTR_DUPLEX_CHNG 0x2000
//#define VSC8244_INTR_SPEED_CHNG 0x4000
//#define VSC8244_INTR_AUTONEG_ERR 0x8000
#define VSC8244_INTR_DUPLEX_CHNG 0x1000
#define VSC8244_INTR_LINK_CHNG 0x2000
#define VSC8244_INTR_SPEED_CHNG 0x4000
#define VSC8244_INTR_STATUS 0x8000
#endif
/* Vitesse PHY specific registers. */
#define VSC8244_SPECIFIC_CNTRL_REGISTER 16
#define VSC8244_SPECIFIC_STATUS_REGISTER 0x1c
#define VSC8244_INTERRUPT_ENABLE_REGISTER 0x19
#define VSC8244_INTERRUPT_STATUS_REGISTER 0x1a
#define VSC8244_EXT_PHY_SPECIFIC_CNTRL_REGISTER 20
#define VSC8244_RECV_ERR_CNTR_REGISTER 21
#define VSC8244_RES_REGISTER 22
#define VSC8244_GLOBAL_STATUS_REGISTER 23
#define VSC8244_LED_CONTROL_REGISTER 24
#define VSC8244_MANUAL_LED_OVERRIDE_REGISTER 25
#define VSC8244_EXT_PHY_SPECIFIC_CNTRL_2_REGISTER 26
#define VSC8244_EXT_PHY_SPECIFIC_STATUS_REGISTER 27
#define VSC8244_VIRTUAL_CABLE_TESTER_REGISTER 28
#define VSC8244_EXTENDED_ADDR_REGISTER 29
#define VSC8244_EXTENDED_REGISTER 30
/* PHY specific control register fields */
#define S_PSCR_MDI_XOVER_MODE 5
#define M_PSCR_MDI_XOVER_MODE 0x3
#define V_PSCR_MDI_XOVER_MODE(x) ((x) << S_PSCR_MDI_XOVER_MODE)
#define G_PSCR_MDI_XOVER_MODE(x) (((x) >> S_PSCR_MDI_XOVER_MODE) & M_PSCR_MDI_XOVER_MODE)
/* Extended PHY specific control register fields */
#define S_DOWNSHIFT_ENABLE 8
#define V_DOWNSHIFT_ENABLE (1 << S_DOWNSHIFT_ENABLE)
#define S_DOWNSHIFT_CNT 9
#define M_DOWNSHIFT_CNT 0x7
#define V_DOWNSHIFT_CNT(x) ((x) << S_DOWNSHIFT_CNT)
#define G_DOWNSHIFT_CNT(x) (((x) >> S_DOWNSHIFT_CNT) & M_DOWNSHIFT_CNT)
/* PHY specific status register fields */
#define S_PSSR_JABBER 0
#define V_PSSR_JABBER (1 << S_PSSR_JABBER)
#define S_PSSR_POLARITY 1
#define V_PSSR_POLARITY (1 << S_PSSR_POLARITY)
#define S_PSSR_RX_PAUSE 2
#define V_PSSR_RX_PAUSE (1 << S_PSSR_RX_PAUSE)
#define S_PSSR_TX_PAUSE 3
#define V_PSSR_TX_PAUSE (1 << S_PSSR_TX_PAUSE)
#define S_PSSR_ENERGY_DETECT 4
#define V_PSSR_ENERGY_DETECT (1 << S_PSSR_ENERGY_DETECT)
#define S_PSSR_DOWNSHIFT_STATUS 5
#define V_PSSR_DOWNSHIFT_STATUS (1 << S_PSSR_DOWNSHIFT_STATUS)
#define S_PSSR_MDI 6
#define V_PSSR_MDI (1 << S_PSSR_MDI)
#define S_PSSR_CABLE_LEN 7
#define M_PSSR_CABLE_LEN 0x7
#define V_PSSR_CABLE_LEN(x) ((x) << S_PSSR_CABLE_LEN)
#define G_PSSR_CABLE_LEN(x) (((x) >> S_PSSR_CABLE_LEN) & M_PSSR_CABLE_LEN)
//#define S_PSSR_LINK 10
//#define S_PSSR_LINK 13
#define S_PSSR_LINK 2
#define V_PSSR_LINK (1 << S_PSSR_LINK)
//#define S_PSSR_STATUS_RESOLVED 11
//#define S_PSSR_STATUS_RESOLVED 10
#define S_PSSR_STATUS_RESOLVED 15
#define V_PSSR_STATUS_RESOLVED (1 << S_PSSR_STATUS_RESOLVED)
#define S_PSSR_PAGE_RECEIVED 12
#define V_PSSR_PAGE_RECEIVED (1 << S_PSSR_PAGE_RECEIVED)
//#define S_PSSR_DUPLEX 13
//#define S_PSSR_DUPLEX 12
#define S_PSSR_DUPLEX 5
#define V_PSSR_DUPLEX (1 << S_PSSR_DUPLEX)
//#define S_PSSR_SPEED 14
//#define S_PSSR_SPEED 14
#define S_PSSR_SPEED 3
#define M_PSSR_SPEED 0x3
#define V_PSSR_SPEED(x) ((x) << S_PSSR_SPEED)
#define G_PSSR_SPEED(x) (((x) >> S_PSSR_SPEED) & M_PSSR_SPEED)
#endif


@ -159,7 +159,7 @@
#define DRV_NAME "e100"
#define DRV_EXT "-NAPI"
#define DRV_VERSION "3.5.17-k2"DRV_EXT
#define DRV_VERSION "3.5.17-k4"DRV_EXT
#define DRV_DESCRIPTION "Intel(R) PRO/100 Network Driver"
#define DRV_COPYRIGHT "Copyright(c) 1999-2006 Intel Corporation"
#define PFX DRV_NAME ": "
@ -174,10 +174,13 @@ MODULE_VERSION(DRV_VERSION);
static int debug = 3;
static int eeprom_bad_csum_allow = 0;
static int use_io = 0;
module_param(debug, int, 0);
module_param(eeprom_bad_csum_allow, int, 0);
module_param(use_io, int, 0);
MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
MODULE_PARM_DESC(eeprom_bad_csum_allow, "Allow bad eeprom checksums");
MODULE_PARM_DESC(use_io, "Force use of i/o access mode");
#define DPRINTK(nlevel, klevel, fmt, args...) \
(void)((NETIF_MSG_##nlevel & nic->msg_enable) && \
printk(KERN_##klevel PFX "%s: %s: " fmt, nic->netdev->name, \
@ -282,12 +285,6 @@ enum scb_status {
rus_mask = 0x3C,
};
enum ru_state {
RU_SUSPENDED = 0,
RU_RUNNING = 1,
RU_UNINITIALIZED = -1,
};
enum scb_stat_ack {
stat_ack_not_ours = 0x00,
stat_ack_sw_gen = 0x04,
@ -529,7 +526,6 @@ struct nic {
struct rx *rx_to_use;
struct rx *rx_to_clean;
struct rfd blank_rfd;
enum ru_state ru_running;
spinlock_t cb_lock ____cacheline_aligned;
spinlock_t cmd_lock;
@ -591,7 +587,7 @@ static inline void e100_write_flush(struct nic *nic)
{
/* Flush previous PCI writes through intermediate bridges
* by doing a benign read */
(void)readb(&nic->csr->scb.status);
(void)ioread8(&nic->csr->scb.status);
}
static void e100_enable_irq(struct nic *nic)
@ -599,7 +595,7 @@ static void e100_enable_irq(struct nic *nic)
unsigned long flags;
spin_lock_irqsave(&nic->cmd_lock, flags);
writeb(irq_mask_none, &nic->csr->scb.cmd_hi);
iowrite8(irq_mask_none, &nic->csr->scb.cmd_hi);
e100_write_flush(nic);
spin_unlock_irqrestore(&nic->cmd_lock, flags);
}
@ -609,7 +605,7 @@ static void e100_disable_irq(struct nic *nic)
unsigned long flags;
spin_lock_irqsave(&nic->cmd_lock, flags);
writeb(irq_mask_all, &nic->csr->scb.cmd_hi);
iowrite8(irq_mask_all, &nic->csr->scb.cmd_hi);
e100_write_flush(nic);
spin_unlock_irqrestore(&nic->cmd_lock, flags);
}
@ -618,11 +614,11 @@ static void e100_hw_reset(struct nic *nic)
{
/* Put CU and RU into idle with a selective reset to get
* device off of PCI bus */
writel(selective_reset, &nic->csr->port);
iowrite32(selective_reset, &nic->csr->port);
e100_write_flush(nic); udelay(20);
/* Now fully reset device */
writel(software_reset, &nic->csr->port);
iowrite32(software_reset, &nic->csr->port);
e100_write_flush(nic); udelay(20);
/* Mask off our interrupt line - it's unmasked after reset */
@ -639,7 +635,7 @@ static int e100_self_test(struct nic *nic)
nic->mem->selftest.signature = 0;
nic->mem->selftest.result = 0xFFFFFFFF;
writel(selftest | dma_addr, &nic->csr->port);
iowrite32(selftest | dma_addr, &nic->csr->port);
e100_write_flush(nic);
/* Wait 10 msec for self-test to complete */
msleep(10);
@ -677,23 +673,23 @@ static void e100_eeprom_write(struct nic *nic, u16 addr_len, u16 addr, u16 data)
for(j = 0; j < 3; j++) {
/* Chip select */
writeb(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
iowrite8(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
e100_write_flush(nic); udelay(4);
for(i = 31; i >= 0; i--) {
ctrl = (cmd_addr_data[j] & (1 << i)) ?
eecs | eedi : eecs;
writeb(ctrl, &nic->csr->eeprom_ctrl_lo);
iowrite8(ctrl, &nic->csr->eeprom_ctrl_lo);
e100_write_flush(nic); udelay(4);
writeb(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
iowrite8(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
e100_write_flush(nic); udelay(4);
}
/* Wait 10 msec for cmd to complete */
msleep(10);
/* Chip deselect */
writeb(0, &nic->csr->eeprom_ctrl_lo);
iowrite8(0, &nic->csr->eeprom_ctrl_lo);
e100_write_flush(nic); udelay(4);
}
};
@ -709,21 +705,21 @@ static u16 e100_eeprom_read(struct nic *nic, u16 *addr_len, u16 addr)
cmd_addr_data = ((op_read << *addr_len) | addr) << 16;
/* Chip select */
writeb(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
iowrite8(eecs | eesk, &nic->csr->eeprom_ctrl_lo);
e100_write_flush(nic); udelay(4);
/* Bit-bang to read word from eeprom */
for(i = 31; i >= 0; i--) {
ctrl = (cmd_addr_data & (1 << i)) ? eecs | eedi : eecs;
writeb(ctrl, &nic->csr->eeprom_ctrl_lo);
iowrite8(ctrl, &nic->csr->eeprom_ctrl_lo);
e100_write_flush(nic); udelay(4);
writeb(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
iowrite8(ctrl | eesk, &nic->csr->eeprom_ctrl_lo);
e100_write_flush(nic); udelay(4);
/* Eeprom drives a dummy zero to EEDO after receiving
* complete address. Use this to adjust addr_len. */
ctrl = readb(&nic->csr->eeprom_ctrl_lo);
ctrl = ioread8(&nic->csr->eeprom_ctrl_lo);
if(!(ctrl & eedo) && i > 16) {
*addr_len -= (i - 16);
i = 17;
@ -733,7 +729,7 @@ static u16 e100_eeprom_read(struct nic *nic, u16 *addr_len, u16 addr)
}
/* Chip deselect */
writeb(0, &nic->csr->eeprom_ctrl_lo);
iowrite8(0, &nic->csr->eeprom_ctrl_lo);
e100_write_flush(nic); udelay(4);
return le16_to_cpu(data);
@ -804,7 +800,7 @@ static int e100_exec_cmd(struct nic *nic, u8 cmd, dma_addr_t dma_addr)
/* Previous command is accepted when SCB clears */
for(i = 0; i < E100_WAIT_SCB_TIMEOUT; i++) {
if(likely(!readb(&nic->csr->scb.cmd_lo)))
if(likely(!ioread8(&nic->csr->scb.cmd_lo)))
break;
cpu_relax();
if(unlikely(i > E100_WAIT_SCB_FAST))
@ -816,8 +812,8 @@ static int e100_exec_cmd(struct nic *nic, u8 cmd, dma_addr_t dma_addr)
}
if(unlikely(cmd != cuc_resume))
writel(dma_addr, &nic->csr->scb.gen_ptr);
writeb(cmd, &nic->csr->scb.cmd_lo);
iowrite32(dma_addr, &nic->csr->scb.gen_ptr);
iowrite8(cmd, &nic->csr->scb.cmd_lo);
err_unlock:
spin_unlock_irqrestore(&nic->cmd_lock, flags);
@ -895,7 +891,7 @@ static u16 mdio_ctrl(struct nic *nic, u32 addr, u32 dir, u32 reg, u16 data)
*/
spin_lock_irqsave(&nic->mdio_lock, flags);
for (i = 100; i; --i) {
if (readl(&nic->csr->mdi_ctrl) & mdi_ready)
if (ioread32(&nic->csr->mdi_ctrl) & mdi_ready)
break;
udelay(20);
}
@ -905,11 +901,11 @@ static u16 mdio_ctrl(struct nic *nic, u32 addr, u32 dir, u32 reg, u16 data)
spin_unlock_irqrestore(&nic->mdio_lock, flags);
return 0; /* No way to indicate timeout error */
}
writel((reg << 16) | (addr << 21) | dir | data, &nic->csr->mdi_ctrl);
iowrite32((reg << 16) | (addr << 21) | dir | data, &nic->csr->mdi_ctrl);
for (i = 0; i < 100; i++) {
udelay(20);
if ((data_out = readl(&nic->csr->mdi_ctrl)) & mdi_ready)
if ((data_out = ioread32(&nic->csr->mdi_ctrl)) & mdi_ready)
break;
}
spin_unlock_irqrestore(&nic->mdio_lock, flags);
@ -951,7 +947,7 @@ static void e100_get_defaults(struct nic *nic)
((nic->mac >= mac_82558_D101_A4) ? cb_cid : cb_i));
/* Template for a freshly allocated RFD */
nic->blank_rfd.command = cpu_to_le16(cb_el);
nic->blank_rfd.command = cpu_to_le16(cb_el & cb_s);
nic->blank_rfd.rbd = 0xFFFFFFFF;
nic->blank_rfd.size = cpu_to_le16(VLAN_ETH_FRAME_LEN);
@ -1318,7 +1314,7 @@ static inline int e100_exec_cb_wait(struct nic *nic, struct sk_buff *skb,
}
/* ack any interupts, something could have been set */
writeb(~0, &nic->csr->scb.stat_ack);
iowrite8(~0, &nic->csr->scb.stat_ack);
/* if the command failed, or is not OK, notify and return */
if (!counter || !(cb->status & cpu_to_le16(cb_ok))) {
@ -1580,7 +1576,7 @@ static void e100_watchdog(unsigned long data)
* accidentally, due to hardware that shares a register between the
* interrupt mask bit and the SW Interrupt generation bit */
spin_lock_irq(&nic->cmd_lock);
writeb(readb(&nic->csr->scb.cmd_hi) | irq_sw_gen,&nic->csr->scb.cmd_hi);
iowrite8(ioread8(&nic->csr->scb.cmd_hi) | irq_sw_gen,&nic->csr->scb.cmd_hi);
e100_write_flush(nic);
spin_unlock_irq(&nic->cmd_lock);
@ -1746,19 +1742,11 @@ static int e100_alloc_cbs(struct nic *nic)
return 0;
}
static inline void e100_start_receiver(struct nic *nic, struct rx *rx)
static inline void e100_start_receiver(struct nic *nic)
{
if(!nic->rxs) return;
if(RU_SUSPENDED != nic->ru_running) return;
/* handle init time starts */
if(!rx) rx = nic->rxs;
/* (Re)start RU if suspended or idle and RFA is non-NULL */
if(rx->skb) {
e100_exec_cmd(nic, ruc_start, rx->dma_addr);
nic->ru_running = RU_RUNNING;
}
/* Start if RFA is non-NULL */
if(nic->rx_to_clean->skb)
e100_exec_cmd(nic, ruc_start, nic->rx_to_clean->dma_addr);
}
#define RFD_BUF_LEN (sizeof(struct rfd) + VLAN_ETH_FRAME_LEN)
@ -1787,7 +1775,7 @@ static int e100_rx_alloc_skb(struct nic *nic, struct rx *rx)
put_unaligned(cpu_to_le32(rx->dma_addr),
(u32 *)&prev_rfd->link);
wmb();
prev_rfd->command &= ~cpu_to_le16(cb_el);
prev_rfd->command &= ~cpu_to_le16(cb_el & cb_s);
pci_dma_sync_single_for_device(nic->pdev, rx->prev->dma_addr,
sizeof(struct rfd), PCI_DMA_TODEVICE);
}
@ -1825,10 +1813,6 @@ static int e100_rx_indicate(struct nic *nic, struct rx *rx,
pci_unmap_single(nic->pdev, rx->dma_addr,
RFD_BUF_LEN, PCI_DMA_FROMDEVICE);
/* this allows for a fast restart without re-enabling interrupts */
if(le16_to_cpu(rfd->command) & cb_el)
nic->ru_running = RU_SUSPENDED;
/* Pull off the RFD and put the actual data (minus eth hdr) */
skb_reserve(skb, sizeof(struct rfd));
skb_put(skb, actual_size);
@ -1859,45 +1843,18 @@ static void e100_rx_clean(struct nic *nic, unsigned int *work_done,
unsigned int work_to_do)
{
struct rx *rx;
int restart_required = 0;
struct rx *rx_to_start = NULL;
/* are we already rnr? then pay attention!!! this ensures that
* the state machine progression never allows a start with a
* partially cleaned list, avoiding a race between hardware
* and rx_to_clean when in NAPI mode */
if(RU_SUSPENDED == nic->ru_running)
restart_required = 1;
/* Indicate newly arrived packets */
for(rx = nic->rx_to_clean; rx->skb; rx = nic->rx_to_clean = rx->next) {
int err = e100_rx_indicate(nic, rx, work_done, work_to_do);
if(-EAGAIN == err) {
/* hit quota so have more work to do, restart once
* cleanup is complete */
restart_required = 0;
break;
} else if(-ENODATA == err)
if(e100_rx_indicate(nic, rx, work_done, work_to_do))
break; /* No more to clean */
}
/* save our starting point as the place we'll restart the receiver */
if(restart_required)
rx_to_start = nic->rx_to_clean;
/* Alloc new skbs to refill list */
for(rx = nic->rx_to_use; !rx->skb; rx = nic->rx_to_use = rx->next) {
if(unlikely(e100_rx_alloc_skb(nic, rx)))
break; /* Better luck next time (see watchdog) */
}
if(restart_required) {
// ack the rnr?
writeb(stat_ack_rnr, &nic->csr->scb.stat_ack);
e100_start_receiver(nic, rx_to_start);
if(work_done)
(*work_done)++;
}
}
static void e100_rx_clean_list(struct nic *nic)
@ -1905,8 +1862,6 @@ static void e100_rx_clean_list(struct nic *nic)
struct rx *rx;
unsigned int i, count = nic->params.rfds.count;
nic->ru_running = RU_UNINITIALIZED;
if(nic->rxs) {
for(rx = nic->rxs, i = 0; i < count; rx++, i++) {
if(rx->skb) {
@ -1928,7 +1883,6 @@ static int e100_rx_alloc_list(struct nic *nic)
unsigned int i, count = nic->params.rfds.count;
nic->rx_to_use = nic->rx_to_clean = NULL;
nic->ru_running = RU_UNINITIALIZED;
if(!(nic->rxs = kcalloc(count, sizeof(struct rx), GFP_ATOMIC)))
return -ENOMEM;
@ -1943,7 +1897,6 @@ static int e100_rx_alloc_list(struct nic *nic)
}
nic->rx_to_use = nic->rx_to_clean = nic->rxs;
nic->ru_running = RU_SUSPENDED;
return 0;
}
@ -1952,7 +1905,7 @@ static irqreturn_t e100_intr(int irq, void *dev_id)
{
struct net_device *netdev = dev_id;
struct nic *nic = netdev_priv(netdev);
u8 stat_ack = readb(&nic->csr->scb.stat_ack);
u8 stat_ack = ioread8(&nic->csr->scb.stat_ack);
DPRINTK(INTR, DEBUG, "stat_ack = 0x%02X\n", stat_ack);
@ -1961,11 +1914,7 @@ static irqreturn_t e100_intr(int irq, void *dev_id)
return IRQ_NONE;
/* Ack interrupt(s) */
writeb(stat_ack, &nic->csr->scb.stat_ack);
/* We hit Receive No Resource (RNR); restart RU after cleaning */
if(stat_ack & stat_ack_rnr)
nic->ru_running = RU_SUSPENDED;
iowrite8(stat_ack, &nic->csr->scb.stat_ack);
if(likely(netif_rx_schedule_prep(netdev))) {
e100_disable_irq(nic);
@ -2058,7 +2007,7 @@ static int e100_up(struct nic *nic)
if((err = e100_hw_init(nic)))
goto err_clean_cbs;
e100_set_multicast_list(nic->netdev);
e100_start_receiver(nic, NULL);
e100_start_receiver(nic);
mod_timer(&nic->watchdog, jiffies);
if((err = request_irq(nic->pdev->irq, e100_intr, IRQF_SHARED,
nic->netdev->name, nic->netdev)))
@ -2107,7 +2056,7 @@ static void e100_tx_timeout_task(struct work_struct *work)
struct net_device *netdev = nic->netdev;
DPRINTK(TX_ERR, DEBUG, "scb.status=0x%02X\n",
readb(&nic->csr->scb.status));
ioread8(&nic->csr->scb.status));
e100_down(netdev_priv(netdev));
e100_up(netdev_priv(netdev));
}
@ -2139,7 +2088,7 @@ static int e100_loopback_test(struct nic *nic, enum loopback loopback_mode)
mdio_write(nic->netdev, nic->mii.phy_id, MII_BMCR,
BMCR_LOOPBACK);
e100_start_receiver(nic, NULL);
e100_start_receiver(nic);
if(!(skb = netdev_alloc_skb(nic->netdev, ETH_DATA_LEN))) {
err = -ENOMEM;
@ -2230,9 +2179,9 @@ static void e100_get_regs(struct net_device *netdev,
int i;
regs->version = (1 << 24) | nic->rev_id;
buff[0] = readb(&nic->csr->scb.cmd_hi) << 24 |
readb(&nic->csr->scb.cmd_lo) << 16 |
readw(&nic->csr->scb.status);
buff[0] = ioread8(&nic->csr->scb.cmd_hi) << 24 |
ioread8(&nic->csr->scb.cmd_lo) << 16 |
ioread16(&nic->csr->scb.status);
for(i = E100_PHY_REGS; i >= 0; i--)
buff[1 + E100_PHY_REGS - i] =
mdio_read(netdev, nic->mii.phy_id, i);
@ -2604,7 +2553,10 @@ static int __devinit e100_probe(struct pci_dev *pdev,
SET_MODULE_OWNER(netdev);
SET_NETDEV_DEV(netdev, &pdev->dev);
nic->csr = ioremap(pci_resource_start(pdev, 0), sizeof(struct csr));
if (use_io)
DPRINTK(PROBE, INFO, "using i/o access mode\n");
nic->csr = pci_iomap(pdev, (use_io ? 1 : 0), sizeof(struct csr));
if(!nic->csr) {
DPRINTK(PROBE, ERR, "Cannot map device registers, aborting.\n");
err = -ENOMEM;
@ -2651,11 +2603,16 @@ static int __devinit e100_probe(struct pci_dev *pdev,
memcpy(netdev->dev_addr, nic->eeprom, ETH_ALEN);
memcpy(netdev->perm_addr, nic->eeprom, ETH_ALEN);
if(!is_valid_ether_addr(netdev->perm_addr)) {
DPRINTK(PROBE, ERR, "Invalid MAC address from "
"EEPROM, aborting.\n");
err = -EAGAIN;
goto err_out_free;
if (!is_valid_ether_addr(netdev->perm_addr)) {
if (!eeprom_bad_csum_allow) {
DPRINTK(PROBE, ERR, "Invalid MAC address from "
"EEPROM, aborting.\n");
err = -EAGAIN;
goto err_out_free;
} else {
DPRINTK(PROBE, ERR, "Invalid MAC address from EEPROM, "
"you MUST configure one.\n");
}
}
/* Wol magic packet can be enabled from eeprom */
@ -2676,7 +2633,7 @@ static int __devinit e100_probe(struct pci_dev *pdev,
DPRINTK(PROBE, INFO, "addr 0x%llx, irq %d, "
"MAC addr %02X:%02X:%02X:%02X:%02X:%02X\n",
(unsigned long long)pci_resource_start(pdev, 0), pdev->irq,
(unsigned long long)pci_resource_start(pdev, use_io ? 1 : 0), pdev->irq,
netdev->dev_addr[0], netdev->dev_addr[1], netdev->dev_addr[2],
netdev->dev_addr[3], netdev->dev_addr[4], netdev->dev_addr[5]);
@ -2685,7 +2642,7 @@ static int __devinit e100_probe(struct pci_dev *pdev,
err_out_free:
e100_free(nic);
err_out_iounmap:
iounmap(nic->csr);
pci_iounmap(pdev, nic->csr);
err_out_free_res:
pci_release_regions(pdev);
err_out_disable_pdev:

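Note: the e100 hunks above switch every register access from readb()/writeb()/readl()/writel() on an ioremap()ed pointer to the ioread/iowrite helpers on a cookie returned by pci_iomap(), which is what lets the new use_io parameter select the port-I/O BAR with no other code changes. A hedged sketch of that pattern follows; everything named demo_*, the fictitious 0x1234:0x5678 ID and the byte register at offset 0x00 are placeholders, not values from this patch.

/*
 * Minimal sketch only, assuming a device with a memory BAR 0 and an
 * I/O BAR 1; not a real driver.
 */
#include <linux/module.h>
#include <linux/pci.h>

static int use_io;		/* 1 = map the I/O BAR instead of the memory BAR */
module_param(use_io, int, 0);

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *regs;
	int err = pci_enable_device(pdev);

	if (err)
		return err;

	/* One accessor family works for both MMIO and port I/O resources. */
	regs = pci_iomap(pdev, use_io ? 1 : 0, 0);
	if (!regs) {
		pci_disable_device(pdev);
		return -ENOMEM;
	}

	iowrite8(ioread8(regs + 0x00), regs + 0x00);	/* read, then write back */

	pci_iounmap(pdev, regs);
	pci_disable_device(pdev);
	return -ENODEV;			/* sketch only: never claim the device */
}

static struct pci_device_id demo_ids[] = {
	{ PCI_DEVICE(0x1234, 0x5678) },	/* fictitious ID */
	{ }
};

static struct pci_driver demo_driver = {
	.name		= "pci_iomap_demo",
	.id_table	= demo_ids,
	.probe		= demo_probe,
};

static int __init demo_init(void)
{
	return pci_register_driver(&demo_driver);
}

static void __exit demo_exit(void)
{
	pci_unregister_driver(&demo_driver);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");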

@ -155,9 +155,6 @@ struct e1000_adapter;
/* Number of packet split data buffers (not including the header buffer) */
#define PS_PAGE_BUFFERS MAX_PS_BUFFERS-1
/* only works for sizes that are powers of 2 */
#define E1000_ROUNDUP(i, size) ((i) = (((i) + (size) - 1) & ~((size) - 1)))
/* wrapper around a pointer to a socket buffer,
* so a DMA handle can be stored along with the buffer */
struct e1000_buffer {


@ -654,14 +654,11 @@ e1000_set_ringparam(struct net_device *netdev,
e1000_mac_type mac_type = adapter->hw.mac_type;
struct e1000_tx_ring *txdr, *tx_old;
struct e1000_rx_ring *rxdr, *rx_old;
int i, err, tx_ring_size, rx_ring_size;
int i, err;
if ((ring->rx_mini_pending) || (ring->rx_jumbo_pending))
return -EINVAL;
tx_ring_size = sizeof(struct e1000_tx_ring) * adapter->num_tx_queues;
rx_ring_size = sizeof(struct e1000_rx_ring) * adapter->num_rx_queues;
while (test_and_set_bit(__E1000_RESETTING, &adapter->flags))
msleep(1);
@ -672,11 +669,11 @@ e1000_set_ringparam(struct net_device *netdev,
rx_old = adapter->rx_ring;
err = -ENOMEM;
txdr = kzalloc(tx_ring_size, GFP_KERNEL);
txdr = kcalloc(adapter->num_tx_queues, sizeof(struct e1000_tx_ring), GFP_KERNEL);
if (!txdr)
goto err_alloc_tx;
rxdr = kzalloc(rx_ring_size, GFP_KERNEL);
rxdr = kcalloc(adapter->num_rx_queues, sizeof(struct e1000_rx_ring), GFP_KERNEL);
if (!rxdr)
goto err_alloc_rx;
@ -686,12 +683,12 @@ e1000_set_ringparam(struct net_device *netdev,
rxdr->count = max(ring->rx_pending,(uint32_t)E1000_MIN_RXD);
rxdr->count = min(rxdr->count,(uint32_t)(mac_type < e1000_82544 ?
E1000_MAX_RXD : E1000_MAX_82544_RXD));
E1000_ROUNDUP(rxdr->count, REQ_RX_DESCRIPTOR_MULTIPLE);
rxdr->count = ALIGN(rxdr->count, REQ_RX_DESCRIPTOR_MULTIPLE);
txdr->count = max(ring->tx_pending,(uint32_t)E1000_MIN_TXD);
txdr->count = min(txdr->count,(uint32_t)(mac_type < e1000_82544 ?
E1000_MAX_TXD : E1000_MAX_82544_TXD));
E1000_ROUNDUP(txdr->count, REQ_TX_DESCRIPTOR_MULTIPLE);
txdr->count = ALIGN(txdr->count, REQ_TX_DESCRIPTOR_MULTIPLE);
for (i = 0; i < adapter->num_tx_queues; i++)
txdr[i].count = txdr->count;
@ -742,7 +739,7 @@ err_setup:
uint32_t pat, value; \
uint32_t test[] = \
{0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF}; \
for (pat = 0; pat < sizeof(test)/sizeof(test[0]); pat++) { \
for (pat = 0; pat < ARRAY_SIZE(test); pat++) { \
E1000_WRITE_REG(&adapter->hw, R, (test[pat] & W)); \
value = E1000_READ_REG(&adapter->hw, R); \
if (value != (test[pat] & W & M)) { \
@ -1053,23 +1050,24 @@ e1000_setup_desc_rings(struct e1000_adapter *adapter)
struct e1000_rx_ring *rxdr = &adapter->test_rx_ring;
struct pci_dev *pdev = adapter->pdev;
uint32_t rctl;
int size, i, ret_val;
int i, ret_val;
/* Setup Tx descriptor ring and Tx buffers */
if (!txdr->count)
txdr->count = E1000_DEFAULT_TXD;
size = txdr->count * sizeof(struct e1000_buffer);
if (!(txdr->buffer_info = kmalloc(size, GFP_KERNEL))) {
if (!(txdr->buffer_info = kcalloc(txdr->count,
sizeof(struct e1000_buffer),
GFP_KERNEL))) {
ret_val = 1;
goto err_nomem;
}
memset(txdr->buffer_info, 0, size);
txdr->size = txdr->count * sizeof(struct e1000_tx_desc);
E1000_ROUNDUP(txdr->size, 4096);
if (!(txdr->desc = pci_alloc_consistent(pdev, txdr->size, &txdr->dma))) {
txdr->size = ALIGN(txdr->size, 4096);
if (!(txdr->desc = pci_alloc_consistent(pdev, txdr->size,
&txdr->dma))) {
ret_val = 2;
goto err_nomem;
}
@ -1116,12 +1114,12 @@ e1000_setup_desc_rings(struct e1000_adapter *adapter)
if (!rxdr->count)
rxdr->count = E1000_DEFAULT_RXD;
size = rxdr->count * sizeof(struct e1000_buffer);
if (!(rxdr->buffer_info = kmalloc(size, GFP_KERNEL))) {
if (!(rxdr->buffer_info = kcalloc(rxdr->count,
sizeof(struct e1000_buffer),
GFP_KERNEL))) {
ret_val = 4;
goto err_nomem;
}
memset(rxdr->buffer_info, 0, size);
rxdr->size = rxdr->count * sizeof(struct e1000_rx_desc);
if (!(rxdr->desc = pci_alloc_consistent(pdev, rxdr->size, &rxdr->dma))) {

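Note: the e1000 ethtool hunks above retire the driver-private E1000_ROUNDUP() macro, which rewrote its argument in place, in favour of the kernel-wide ALIGN(), which simply returns the value rounded up to the next multiple of a power of two. The rounding itself is unchanged; a stand-alone check (plain user-space C, not driver code, with the macro restated locally):

#include <assert.h>
#include <stdio.h>

/* Essentially the kernel's power-of-two ALIGN(), restated for user space. */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned int count = 250;

	/* Old style mutated in place: E1000_ROUNDUP(count, 8);
	 * new style assigns the rounded value back explicitly. */
	count = ALIGN(count, 8u);
	assert(count == 256);

	printf("ALIGN(4000, 4096) = %u\n", ALIGN(4000u, 4096u));	/* 4096 */
	printf("ALIGN(4096, 4096) = %u\n", ALIGN(4096u, 4096u));	/* 4096 */
	return 0;
}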

@ -748,9 +748,9 @@ e1000_reset(struct e1000_adapter *adapter)
VLAN_TAG_SIZE;
min_tx_space = min_rx_space;
min_tx_space *= 2;
E1000_ROUNDUP(min_tx_space, 1024);
min_tx_space = ALIGN(min_tx_space, 1024);
min_tx_space >>= 10;
E1000_ROUNDUP(min_rx_space, 1024);
min_rx_space = ALIGN(min_rx_space, 1024);
min_rx_space >>= 10;
/* If current Tx allocation is less than the min Tx FIFO size,
@ -1354,31 +1354,27 @@ e1000_sw_init(struct e1000_adapter *adapter)
static int __devinit
e1000_alloc_queues(struct e1000_adapter *adapter)
{
int size;
size = sizeof(struct e1000_tx_ring) * adapter->num_tx_queues;
adapter->tx_ring = kmalloc(size, GFP_KERNEL);
adapter->tx_ring = kcalloc(adapter->num_tx_queues,
sizeof(struct e1000_tx_ring), GFP_KERNEL);
if (!adapter->tx_ring)
return -ENOMEM;
memset(adapter->tx_ring, 0, size);
size = sizeof(struct e1000_rx_ring) * adapter->num_rx_queues;
adapter->rx_ring = kmalloc(size, GFP_KERNEL);
adapter->rx_ring = kcalloc(adapter->num_rx_queues,
sizeof(struct e1000_rx_ring), GFP_KERNEL);
if (!adapter->rx_ring) {
kfree(adapter->tx_ring);
return -ENOMEM;
}
memset(adapter->rx_ring, 0, size);
#ifdef CONFIG_E1000_NAPI
size = sizeof(struct net_device) * adapter->num_rx_queues;
adapter->polling_netdev = kmalloc(size, GFP_KERNEL);
adapter->polling_netdev = kcalloc(adapter->num_rx_queues,
sizeof(struct net_device),
GFP_KERNEL);
if (!adapter->polling_netdev) {
kfree(adapter->tx_ring);
kfree(adapter->rx_ring);
return -ENOMEM;
}
memset(adapter->polling_netdev, 0, size);
#endif
return E1000_SUCCESS;
@ -1560,7 +1556,7 @@ e1000_setup_tx_resources(struct e1000_adapter *adapter,
/* round up to nearest 4K */
txdr->size = txdr->count * sizeof(struct e1000_tx_desc);
E1000_ROUNDUP(txdr->size, 4096);
txdr->size = ALIGN(txdr->size, 4096);
txdr->desc = pci_alloc_consistent(pdev, txdr->size, &txdr->dma);
if (!txdr->desc) {
@ -1774,18 +1770,18 @@ e1000_setup_rx_resources(struct e1000_adapter *adapter,
}
memset(rxdr->buffer_info, 0, size);
size = sizeof(struct e1000_ps_page) * rxdr->count;
rxdr->ps_page = kmalloc(size, GFP_KERNEL);
rxdr->ps_page = kcalloc(rxdr->count, sizeof(struct e1000_ps_page),
GFP_KERNEL);
if (!rxdr->ps_page) {
vfree(rxdr->buffer_info);
DPRINTK(PROBE, ERR,
"Unable to allocate memory for the receive descriptor ring\n");
return -ENOMEM;
}
memset(rxdr->ps_page, 0, size);
size = sizeof(struct e1000_ps_page_dma) * rxdr->count;
rxdr->ps_page_dma = kmalloc(size, GFP_KERNEL);
rxdr->ps_page_dma = kcalloc(rxdr->count,
sizeof(struct e1000_ps_page_dma),
GFP_KERNEL);
if (!rxdr->ps_page_dma) {
vfree(rxdr->buffer_info);
kfree(rxdr->ps_page);
@ -1793,7 +1789,6 @@ e1000_setup_rx_resources(struct e1000_adapter *adapter,
"Unable to allocate memory for the receive descriptor ring\n");
return -ENOMEM;
}
memset(rxdr->ps_page_dma, 0, size);
if (adapter->hw.mac_type <= e1000_82547_rev_2)
desc_len = sizeof(struct e1000_rx_desc);
@ -1803,7 +1798,7 @@ e1000_setup_rx_resources(struct e1000_adapter *adapter,
/* Round up to nearest 4K */
rxdr->size = rxdr->count * desc_len;
E1000_ROUNDUP(rxdr->size, 4096);
rxdr->size = ALIGN(rxdr->size, 4096);
rxdr->desc = pci_alloc_consistent(pdev, rxdr->size, &rxdr->dma);
@ -2667,7 +2662,7 @@ e1000_watchdog(unsigned long data)
netif_carrier_on(netdev);
netif_wake_queue(netdev);
mod_timer(&adapter->phy_info_timer, jiffies + 2 * HZ);
mod_timer(&adapter->phy_info_timer, round_jiffies(jiffies + 2 * HZ));
adapter->smartspeed = 0;
} else {
/* make sure the receive unit is started */
@ -2684,7 +2679,7 @@ e1000_watchdog(unsigned long data)
DPRINTK(LINK, INFO, "NIC Link is Down\n");
netif_carrier_off(netdev);
netif_stop_queue(netdev);
mod_timer(&adapter->phy_info_timer, jiffies + 2 * HZ);
mod_timer(&adapter->phy_info_timer, round_jiffies(jiffies + 2 * HZ));
/* 80003ES2LAN workaround--
* For packet buffer work-around on link down event;
@ -2736,7 +2731,7 @@ e1000_watchdog(unsigned long data)
e1000_rar_set(&adapter->hw, adapter->hw.mac_addr, 0);
/* Reset the timer */
mod_timer(&adapter->watchdog_timer, jiffies + 2 * HZ);
mod_timer(&adapter->watchdog_timer, round_jiffies(jiffies + 2 * HZ));
}
enum latency_range {
@ -3175,7 +3170,7 @@ e1000_82547_fifo_workaround(struct e1000_adapter *adapter, struct sk_buff *skb)
uint32_t fifo_space = adapter->tx_fifo_size - adapter->tx_fifo_head;
uint32_t skb_fifo_len = skb->len + E1000_FIFO_HDR;
E1000_ROUNDUP(skb_fifo_len, E1000_FIFO_HDR);
skb_fifo_len = ALIGN(skb_fifo_len, E1000_FIFO_HDR);
if (adapter->link_duplex != HALF_DUPLEX)
goto no_fifo_stall_required;

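Note: the e1000 main-driver hunks above continue the ALIGN() conversion, replace kmalloc()+memset() pairs with kcalloc(), and route the watchdog and phy-info timers through round_jiffies(). A hedged kernel-style sketch of the latter two idioms; struct demo_ring, the queue count and the watchdog helper are invented, only kcalloc(), mod_timer() and round_jiffies() are the real kernel APIs.

#include <linux/slab.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

struct demo_ring {
	unsigned int count;
};

static struct demo_ring *demo_alloc_rings(unsigned int num_queues)
{
	/* kcalloc() returns zeroed memory and checks n * size for overflow,
	 * replacing the old kmalloc() + memset() pair with one call. */
	return kcalloc(num_queues, sizeof(struct demo_ring), GFP_KERNEL);
}

static void demo_rearm_watchdog(struct timer_list *watchdog)
{
	/* round_jiffies() pushes the expiry onto a whole-second boundary so
	 * periodic timers from many drivers tend to fire together. */
	mod_timer(watchdog, round_jiffies(jiffies + 2 * HZ));
}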

@ -305,7 +305,7 @@ e1000_check_options(struct e1000_adapter *adapter)
if (num_TxDescriptors > bd) {
tx_ring->count = TxDescriptors[bd];
e1000_validate_option(&tx_ring->count, &opt, adapter);
E1000_ROUNDUP(tx_ring->count,
tx_ring->count = ALIGN(tx_ring->count,
REQ_TX_DESCRIPTOR_MULTIPLE);
} else {
tx_ring->count = opt.def;
@ -331,7 +331,7 @@ e1000_check_options(struct e1000_adapter *adapter)
if (num_RxDescriptors > bd) {
rx_ring->count = RxDescriptors[bd];
e1000_validate_option(&rx_ring->count, &opt, adapter);
E1000_ROUNDUP(rx_ring->count,
rx_ring->count = ALIGN(rx_ring->count,
REQ_RX_DESCRIPTOR_MULTIPLE);
} else {
rx_ring->count = opt.def;


@ -115,6 +115,7 @@
#include <linux/mca-legacy.h>
#include <linux/spinlock.h>
#include <linux/bitops.h>
#include <linux/jiffies.h>
#include <asm/system.h>
#include <asm/io.h>
@ -556,7 +557,7 @@ static void unstick_cu(struct net_device *dev)
if (lp->started)
{
if ((jiffies - dev->trans_start)>50)
if (time_after(jiffies, dev->trans_start + 50))
{
if (lp->tx_link==lp->last_tx_restart)
{
@ -612,7 +613,7 @@ static void unstick_cu(struct net_device *dev)
}
else
{
if ((jiffies-lp->init_time)>10)
if (time_after(jiffies, lp->init_time + 10))
{
unsigned short status = scb_status(dev);
printk(KERN_WARNING "%s: i82586 startup timed out, status %04x, resetting...\n",
@ -776,7 +777,7 @@ static unsigned short eexp_start_irq(struct net_device *dev,
static void eexp_cmd_clear(struct net_device *dev)
{
unsigned long int oldtime = jiffies;
while (scb_rdcmd(dev) && ((jiffies-oldtime)<10));
while (scb_rdcmd(dev) && (time_before(jiffies, oldtime + 10)));
if (scb_rdcmd(dev)) {
printk("%s: command didn't clear\n", dev->name);
}
@ -1649,7 +1650,7 @@ eexp_set_multicast(struct net_device *dev)
#endif
oj = jiffies;
while ((SCB_CUstat(scb_status(dev)) == 2) &&
((jiffies-oj) < 2000));
(time_before(jiffies, oj + 2000)));
if (SCB_CUstat(scb_status(dev)) == 2)
printk("%s: warning, CU didn't stop\n", dev->name);
lp->started &= ~(STARTED_CU);

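Note: the eexpress hunks above replace raw jiffies subtraction with time_after()/time_before(), which remain correct across a jiffies wrap-around because they compare through a signed difference. A stand-alone restatement of the trick (user-space C, not driver code; the kernel macros also type-check their arguments, which is omitted here):

#include <stdio.h>

#define time_after(a, b)	((long)((b) - (a)) < 0)
#define time_before(a, b)	time_after(b, a)

int main(void)
{
	unsigned long almost_wrapped = (unsigned long)-5;	/* 5 ticks before the wrap */
	unsigned long just_wrapped   = 10;			/* 15 ticks later, after the wrap */

	/* A plain ">" comparison would get this wrong once the counter wraps. */
	printf("time_after(just_wrapped, almost_wrapped) = %d\n",
	       time_after(just_wrapped, almost_wrapped));	/* prints 1 */
	printf("time_before(almost_wrapped, just_wrapped) = %d\n",
	       time_before(almost_wrapped, just_wrapped));	/* prints 1 */
	return 0;
}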

@ -39,7 +39,7 @@
#include <asm/io.h>
#define DRV_NAME "ehea"
#define DRV_VERSION "EHEA_0046"
#define DRV_VERSION "EHEA_0058"
#define EHEA_MSG_DEFAULT (NETIF_MSG_LINK | NETIF_MSG_TIMER \
| NETIF_MSG_RX_ERR | NETIF_MSG_TX_ERR)
@ -78,10 +78,7 @@
#define EHEA_RQ2_PKT_SIZE 1522
#define EHEA_L_PKT_SIZE 256 /* low latency */
#define EHEA_POLL_MAX_RWQE 1000
/* Send completion signaling */
#define EHEA_SIG_IV_LONG 1
/* Protection Domain Identifier */
#define EHEA_PD_ID 0xaabcdeff
@ -108,11 +105,7 @@
#define EHEA_CACHE_LINE 128
/* Memory Regions */
#define EHEA_MR_MAX_TX_PAGES 20
#define EHEA_MR_TX_DATA_PN 3
#define EHEA_MR_ACC_CTRL 0x00800000
#define EHEA_RWQES_PER_MR_RQ2 10
#define EHEA_RWQES_PER_MR_RQ3 10
#define EHEA_WATCH_DOG_TIMEOUT 10*HZ
@ -311,6 +304,7 @@ struct ehea_cq {
* Memory Region
*/
struct ehea_mr {
struct ehea_adapter *adapter;
u64 handle;
u64 vaddr;
u32 lkey;
@ -319,17 +313,12 @@ struct ehea_mr {
/*
* Port state information
*/
struct port_state {
int poll_max_processed;
struct port_stats {
int poll_receive_errors;
int ehea_poll;
int queue_stopped;
int min_swqe_avail;
u64 sqc_stop_sum;
int pkt_send;
int pkt_xmit;
int send_tasklet;
int nwqe;
int err_tcp_cksum;
int err_ip_cksum;
int err_frame_crc;
};
#define EHEA_IRQ_NAME_SIZE 20
@ -348,6 +337,7 @@ struct ehea_q_skb_arr {
* Port resources
*/
struct ehea_port_res {
struct port_stats p_stats;
struct ehea_mr send_mr; /* send memory region */
struct ehea_mr recv_mr; /* receive memory region */
spinlock_t xmit_lock;
@ -357,9 +347,8 @@ struct ehea_port_res {
struct ehea_qp *qp;
struct ehea_cq *send_cq;
struct ehea_cq *recv_cq;
struct ehea_eq *send_eq;
struct ehea_eq *recv_eq;
spinlock_t send_lock;
struct ehea_eq *eq;
struct net_device *d_netdev;
struct ehea_q_skb_arr rq1_skba;
struct ehea_q_skb_arr rq2_skba;
struct ehea_q_skb_arr rq3_skba;
@ -369,21 +358,18 @@ struct ehea_port_res {
int swqe_refill_th;
atomic_t swqe_avail;
int swqe_ll_count;
int swqe_count;
u32 swqe_id_counter;
u64 tx_packets;
struct tasklet_struct send_comp_task;
spinlock_t recv_lock;
struct port_state p_state;
u64 rx_packets;
u32 poll_counter;
};
#define EHEA_MAX_PORTS 16
struct ehea_adapter {
u64 handle;
u8 num_ports;
struct ehea_port *port[16];
struct ibmebus_dev *ebus_dev;
struct ehea_port *port[EHEA_MAX_PORTS];
struct ehea_eq *neq; /* notification event queue */
struct workqueue_struct *ehea_wq;
struct tasklet_struct neq_tasklet;
@ -406,7 +392,7 @@ struct ehea_port {
struct net_device *netdev;
struct net_device_stats stats;
struct ehea_port_res port_res[EHEA_MAX_PORT_RES];
struct device_node *of_dev_node; /* Open Firmware Device Node */
struct of_device ofdev; /* Open Firmware Device */
struct ehea_mc_list *mc_list; /* Multicast MAC addresses */
struct vlan_group *vgrp;
struct ehea_eq *qp_eq;
@ -415,7 +401,9 @@ struct ehea_port {
char int_aff_name[EHEA_IRQ_NAME_SIZE];
int allmulti; /* Indicates IFF_ALLMULTI state */
int promisc; /* Indicates IFF_PROMISC state */
int num_tx_qps;
int num_add_tx_qps;
int num_mcs;
int resets;
u64 mac_addr;
u32 logical_port_id;


@ -144,8 +144,8 @@ static int ehea_nway_reset(struct net_device *dev)
static void ehea_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
strlcpy(info->driver, DRV_NAME, sizeof(info->driver) - 1);
strlcpy(info->version, DRV_VERSION, sizeof(info->version) - 1);
strlcpy(info->driver, DRV_NAME, sizeof(info->driver));
strlcpy(info->version, DRV_VERSION, sizeof(info->version));
}
static u32 ehea_get_msglevel(struct net_device *dev)
@ -166,33 +166,23 @@ static u32 ehea_get_rx_csum(struct net_device *dev)
}
static char ehea_ethtool_stats_keys[][ETH_GSTRING_LEN] = {
{"poll_max_processed"},
{"queue_stopped"},
{"min_swqe_avail"},
{"poll_receive_err"},
{"pkt_send"},
{"pkt_xmit"},
{"send_tasklet"},
{"ehea_poll"},
{"nwqe"},
{"swqe_available_0"},
{"sig_comp_iv"},
{"swqe_refill_th"},
{"port resets"},
{"rxo"},
{"rx64"},
{"rx65"},
{"rx128"},
{"rx256"},
{"rx512"},
{"rx1024"},
{"txo"},
{"tx64"},
{"tx65"},
{"tx128"},
{"tx256"},
{"tx512"},
{"tx1024"},
{"Receive errors"},
{"TCP cksum errors"},
{"IP cksum errors"},
{"Frame cksum errors"},
{"num SQ stopped"},
{"SQ stopped"},
{"PR0 free_swqes"},
{"PR1 free_swqes"},
{"PR2 free_swqes"},
{"PR3 free_swqes"},
{"PR4 free_swqes"},
{"PR5 free_swqes"},
{"PR6 free_swqes"},
{"PR7 free_swqes"},
};
static void ehea_get_strings(struct net_device *dev, u32 stringset, u8 *data)
@ -211,63 +201,44 @@ static int ehea_get_stats_count(struct net_device *dev)
static void ehea_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *stats, u64 *data)
{
u64 hret;
int i;
int i, k, tmp;
struct ehea_port *port = netdev_priv(dev);
struct ehea_adapter *adapter = port->adapter;
struct ehea_port_res *pr = &port->port_res[0];
struct port_state *p_state = &pr->p_state;
struct hcp_ehea_port_cb6 *cb6;
for (i = 0; i < ehea_get_stats_count(dev); i++)
data[i] = 0;
i = 0;
data[i++] = p_state->poll_max_processed;
data[i++] = p_state->queue_stopped;
data[i++] = p_state->min_swqe_avail;
data[i++] = p_state->poll_receive_errors;
data[i++] = p_state->pkt_send;
data[i++] = p_state->pkt_xmit;
data[i++] = p_state->send_tasklet;
data[i++] = p_state->ehea_poll;
data[i++] = p_state->nwqe;
data[i++] = atomic_read(&port->port_res[0].swqe_avail);
data[i++] = port->sig_comp_iv;
data[i++] = port->port_res[0].swqe_refill_th;
data[i++] = port->resets;
cb6 = kzalloc(PAGE_SIZE, GFP_KERNEL);
if (!cb6) {
ehea_error("no mem for cb6");
return;
}
for (k = 0, tmp = 0; k < EHEA_MAX_PORT_RES; k++)
tmp += port->port_res[k].p_stats.poll_receive_errors;
data[i++] = tmp;
hret = ehea_h_query_ehea_port(adapter->handle, port->logical_port_id,
H_PORT_CB6, H_PORT_CB6_ALL, cb6);
if (netif_msg_hw(port))
ehea_dump(cb6, sizeof(*cb6), "ehea_get_ethtool_stats");
for (k = 0, tmp = 0; k < EHEA_MAX_PORT_RES; k++)
tmp += port->port_res[k].p_stats.err_tcp_cksum;
data[i++] = tmp;
if (hret == H_SUCCESS) {
data[i++] = cb6->rxo;
data[i++] = cb6->rx64;
data[i++] = cb6->rx65;
data[i++] = cb6->rx128;
data[i++] = cb6->rx256;
data[i++] = cb6->rx512;
data[i++] = cb6->rx1024;
data[i++] = cb6->txo;
data[i++] = cb6->tx64;
data[i++] = cb6->tx65;
data[i++] = cb6->tx128;
data[i++] = cb6->tx256;
data[i++] = cb6->tx512;
data[i++] = cb6->tx1024;
} else
ehea_error("query_ehea_port failed");
for (k = 0, tmp = 0; k < EHEA_MAX_PORT_RES; k++)
tmp += port->port_res[k].p_stats.err_ip_cksum;
data[i++] = tmp;
for (k = 0, tmp = 0; k < EHEA_MAX_PORT_RES; k++)
tmp += port->port_res[k].p_stats.err_frame_crc;
data[i++] = tmp;
for (k = 0, tmp = 0; k < EHEA_MAX_PORT_RES; k++)
tmp += port->port_res[k].p_stats.queue_stopped;
data[i++] = tmp;
for (k = 0, tmp = 0; k < EHEA_MAX_PORT_RES; k++)
tmp |= port->port_res[k].queue_stopped;
data[i++] = tmp;
for (k = 0; k < 8; k++)
data[i++] = atomic_read(&port->port_res[k].swqe_avail);
kfree(cb6);
}
const struct ethtool_ops ehea_ethtool_ops = {

File diff suppressed because it is too large.


@ -478,12 +478,14 @@ u64 ehea_h_disable_and_get_hea(const u64 adapter_handle, const u64 qp_handle)
0, 0, 0, 0, 0, 0); /* R7-R12 */
}
u64 ehea_h_free_resource(const u64 adapter_handle, const u64 res_handle)
u64 ehea_h_free_resource(const u64 adapter_handle, const u64 res_handle,
u64 force_bit)
{
return ehea_plpar_hcall_norets(H_FREE_RESOURCE,
adapter_handle, /* R4 */
res_handle, /* R5 */
0, 0, 0, 0, 0); /* R6-R10 */
force_bit,
0, 0, 0, 0); /* R7-R10 */
}
u64 ehea_h_alloc_resource_mr(const u64 adapter_handle, const u64 vaddr,


@ -414,7 +414,11 @@ u64 ehea_h_register_rpage(const u64 adapter_handle,
u64 ehea_h_disable_and_get_hea(const u64 adapter_handle, const u64 qp_handle);
u64 ehea_h_free_resource(const u64 adapter_handle, const u64 res_handle);
#define FORCE_FREE 1
#define NORMAL_FREE 0
u64 ehea_h_free_resource(const u64 adapter_handle, const u64 res_handle,
u64 force_bit);
u64 ehea_h_alloc_resource_mr(const u64 adapter_handle, const u64 vaddr,
const u64 length, const u32 access_ctrl,


@ -197,7 +197,7 @@ out_kill_hwq:
hw_queue_dtor(&cq->hw_queue);
out_freeres:
ehea_h_free_resource(adapter->handle, cq->fw_handle);
ehea_h_free_resource(adapter->handle, cq->fw_handle, FORCE_FREE);
out_freemem:
kfree(cq);
@ -206,25 +206,38 @@ out_nomem:
return NULL;
}
u64 ehea_destroy_cq_res(struct ehea_cq *cq, u64 force)
{
u64 hret;
u64 adapter_handle = cq->adapter->handle;
/* deregister all previous registered pages */
hret = ehea_h_free_resource(adapter_handle, cq->fw_handle, force);
if (hret != H_SUCCESS)
return hret;
hw_queue_dtor(&cq->hw_queue);
kfree(cq);
return hret;
}
int ehea_destroy_cq(struct ehea_cq *cq)
{
u64 adapter_handle, hret;
u64 hret;
if (!cq)
return 0;
adapter_handle = cq->adapter->handle;
if ((hret = ehea_destroy_cq_res(cq, NORMAL_FREE)) == H_R_STATE) {
ehea_error_data(cq->adapter, cq->fw_handle);
hret = ehea_destroy_cq_res(cq, FORCE_FREE);
}
/* deregister all previous registered pages */
hret = ehea_h_free_resource(adapter_handle, cq->fw_handle);
if (hret != H_SUCCESS) {
ehea_error("destroy CQ failed");
return -EIO;
}
hw_queue_dtor(&cq->hw_queue);
kfree(cq);
return 0;
}
@ -297,7 +310,7 @@ out_kill_hwq:
hw_queue_dtor(&eq->hw_queue);
out_freeres:
ehea_h_free_resource(adapter->handle, eq->fw_handle);
ehea_h_free_resource(adapter->handle, eq->fw_handle, FORCE_FREE);
out_freemem:
kfree(eq);
@ -316,27 +329,41 @@ struct ehea_eqe *ehea_poll_eq(struct ehea_eq *eq)
return eqe;
}
int ehea_destroy_eq(struct ehea_eq *eq)
u64 ehea_destroy_eq_res(struct ehea_eq *eq, u64 force)
{
u64 hret;
unsigned long flags;
if (!eq)
return 0;
spin_lock_irqsave(&eq->spinlock, flags);
hret = ehea_h_free_resource(eq->adapter->handle, eq->fw_handle);
hret = ehea_h_free_resource(eq->adapter->handle, eq->fw_handle, force);
spin_unlock_irqrestore(&eq->spinlock, flags);
if (hret != H_SUCCESS) {
ehea_error("destroy_eq failed");
return -EIO;
}
if (hret != H_SUCCESS)
return hret;
hw_queue_dtor(&eq->hw_queue);
kfree(eq);
return hret;
}
int ehea_destroy_eq(struct ehea_eq *eq)
{
u64 hret;
if (!eq)
return 0;
if ((hret = ehea_destroy_eq_res(eq, NORMAL_FREE)) == H_R_STATE) {
ehea_error_data(eq->adapter, eq->fw_handle);
hret = ehea_destroy_eq_res(eq, FORCE_FREE);
}
if (hret != H_SUCCESS) {
ehea_error("destroy EQ failed");
return -EIO;
}
return 0;
}
@ -471,41 +498,56 @@ out_kill_hwsq:
out_freeres:
ehea_h_disable_and_get_hea(adapter->handle, qp->fw_handle);
ehea_h_free_resource(adapter->handle, qp->fw_handle);
ehea_h_free_resource(adapter->handle, qp->fw_handle, FORCE_FREE);
out_freemem:
kfree(qp);
return NULL;
}
int ehea_destroy_qp(struct ehea_qp *qp)
u64 ehea_destroy_qp_res(struct ehea_qp *qp, u64 force)
{
u64 hret;
struct ehea_qp_init_attr *qp_attr = &qp->init_attr;
u64 hret;
struct ehea_qp_init_attr *qp_attr = &qp->init_attr;
if (!qp)
return 0;
ehea_h_disable_and_get_hea(qp->adapter->handle, qp->fw_handle);
hret = ehea_h_free_resource(qp->adapter->handle, qp->fw_handle);
if (hret != H_SUCCESS) {
ehea_error("destroy_qp failed");
return -EIO;
}
ehea_h_disable_and_get_hea(qp->adapter->handle, qp->fw_handle);
hret = ehea_h_free_resource(qp->adapter->handle, qp->fw_handle, force);
if (hret != H_SUCCESS)
return hret;
hw_queue_dtor(&qp->hw_squeue);
hw_queue_dtor(&qp->hw_rqueue1);
hw_queue_dtor(&qp->hw_squeue);
hw_queue_dtor(&qp->hw_rqueue1);
if (qp_attr->rq_count > 1)
hw_queue_dtor(&qp->hw_rqueue2);
if (qp_attr->rq_count > 2)
hw_queue_dtor(&qp->hw_rqueue3);
kfree(qp);
if (qp_attr->rq_count > 1)
hw_queue_dtor(&qp->hw_rqueue2);
if (qp_attr->rq_count > 2)
hw_queue_dtor(&qp->hw_rqueue3);
kfree(qp);
return 0;
return hret;
}
int ehea_reg_mr_adapter(struct ehea_adapter *adapter)
int ehea_destroy_qp(struct ehea_qp *qp)
{
u64 hret;
if (!qp)
return 0;
if ((hret = ehea_destroy_qp_res(qp, NORMAL_FREE)) == H_R_STATE) {
ehea_error_data(qp->adapter, qp->fw_handle);
hret = ehea_destroy_qp_res(qp, FORCE_FREE);
}
if (hret != H_SUCCESS) {
ehea_error("destroy QP failed");
return -EIO;
}
return 0;
}
int ehea_reg_kernel_mr(struct ehea_adapter *adapter, struct ehea_mr *mr)
{
int i, k, ret;
u64 hret, pt_abs, start, end, nr_pages;
@ -526,14 +568,14 @@ int ehea_reg_mr_adapter(struct ehea_adapter *adapter)
hret = ehea_h_alloc_resource_mr(adapter->handle, start, end - start,
acc_ctrl, adapter->pd,
&adapter->mr.handle, &adapter->mr.lkey);
&mr->handle, &mr->lkey);
if (hret != H_SUCCESS) {
ehea_error("alloc_resource_mr failed");
ret = -EIO;
goto out;
}
adapter->mr.vaddr = KERNELBASE;
mr->vaddr = KERNELBASE;
k = 0;
while (nr_pages > 0) {
@ -545,7 +587,7 @@ int ehea_reg_mr_adapter(struct ehea_adapter *adapter)
EHEA_PAGESIZE)));
hret = ehea_h_register_rpage_mr(adapter->handle,
adapter->mr.handle, 0,
mr->handle, 0,
0, (u64)pt_abs,
num_pages);
nr_pages -= num_pages;
@ -554,34 +596,68 @@ int ehea_reg_mr_adapter(struct ehea_adapter *adapter)
(k * EHEA_PAGESIZE)));
hret = ehea_h_register_rpage_mr(adapter->handle,
adapter->mr.handle, 0,
mr->handle, 0,
0, abs_adr,1);
nr_pages--;
}
if ((hret != H_SUCCESS) && (hret != H_PAGE_REGISTERED)) {
ehea_h_free_resource(adapter->handle,
adapter->mr.handle);
ehea_error("register_rpage_mr failed: hret = %lX",
hret);
mr->handle, FORCE_FREE);
ehea_error("register_rpage_mr failed");
ret = -EIO;
goto out;
}
}
if (hret != H_SUCCESS) {
ehea_h_free_resource(adapter->handle, adapter->mr.handle);
ehea_error("register_rpage failed for last page: hret = %lX",
hret);
ehea_h_free_resource(adapter->handle, mr->handle,
FORCE_FREE);
ehea_error("register_rpage failed for last page");
ret = -EIO;
goto out;
}
mr->adapter = adapter;
ret = 0;
out:
kfree(pt);
return ret;
}
int ehea_rem_mr(struct ehea_mr *mr)
{
u64 hret;
if (!mr || !mr->adapter)
return -EINVAL;
hret = ehea_h_free_resource(mr->adapter->handle, mr->handle,
FORCE_FREE);
if (hret != H_SUCCESS) {
ehea_error("destroy MR failed");
return -EIO;
}
return 0;
}
int ehea_gen_smr(struct ehea_adapter *adapter, struct ehea_mr *old_mr,
struct ehea_mr *shared_mr)
{
u64 hret;
hret = ehea_h_register_smr(adapter->handle, old_mr->handle,
old_mr->vaddr, EHEA_MR_ACC_CTRL,
adapter->pd, shared_mr);
if (hret != H_SUCCESS)
return -EIO;
shared_mr->adapter = adapter;
return 0;
}
void print_error_data(u64 *data)
{
int length;
@ -597,6 +673,14 @@ void print_error_data(u64 *data)
ehea_error("QP (resource=%lX) state: AER=0x%lX, AERR=0x%lX, "
"port=%lX", resource, data[6], data[12], data[22]);
if (type == 0x4) /* Completion Queue */
ehea_error("CQ (resource=%lX) state: AER=0x%lX", resource,
data[6]);
if (type == 0x3) /* Event Queue */
ehea_error("EQ (resource=%lX) state: AER=0x%lX", resource,
data[6]);
ehea_dump(data, length, "error data");
}

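Note: the ehea queue-management hunks above refactor each destroy path into a *_res() helper that takes a force bit: the caller first tries a normal free, and only if the hypervisor reports the resource still in use (H_R_STATE) does it dump the error data and free again with FORCE_FREE. A hedged, driver-independent sketch of that control flow; every identifier here is invented, only the shape of the retry mirrors the change.

#include <stdio.h>

#define DEMO_OK    0
#define DEMO_BUSY  1		/* stands in for H_R_STATE */

#define NORMAL_FREE 0
#define FORCE_FREE  1

static int demo_free(unsigned long handle, int force)
{
	/* pretend the first, non-forced attempt finds the resource busy */
	return force ? DEMO_OK : DEMO_BUSY;
}

static void demo_dump_state(unsigned long handle)
{
	printf("resource %#lx still busy, dumping error data\n", handle);
}

static int demo_destroy(unsigned long handle)
{
	int ret = demo_free(handle, NORMAL_FREE);

	if (ret == DEMO_BUSY) {
		demo_dump_state(handle);
		ret = demo_free(handle, FORCE_FREE);
	}
	return ret == DEMO_OK ? 0 : -1;
}

int main(void)
{
	return demo_destroy(0x1234);
}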

@ -142,6 +142,8 @@ struct ehea_rwqe {
#define EHEA_CQE_STAT_ERR_MASK 0x721F
#define EHEA_CQE_STAT_FAT_ERR_MASK 0x1F
#define EHEA_CQE_STAT_ERR_TCP 0x4000
#define EHEA_CQE_STAT_ERR_IP 0x2000
#define EHEA_CQE_STAT_ERR_CRC 0x1000
struct ehea_cqe {
u64 wr_id; /* work request ID from WQE */
@ -320,6 +322,11 @@ static inline struct ehea_cqe *ehea_poll_rq1(struct ehea_qp *qp, int *wqe_index)
return hw_qeit_get_valid(queue);
}
static inline void ehea_inc_cq(struct ehea_cq *cq)
{
hw_qeit_inc(&cq->hw_queue);
}
static inline void ehea_inc_rq1(struct ehea_qp *qp)
{
hw_qeit_inc(&qp->hw_rqueue1);
@ -327,7 +334,7 @@ static inline void ehea_inc_rq1(struct ehea_qp *qp)
static inline struct ehea_cqe *ehea_poll_cq(struct ehea_cq *my_cq)
{
return hw_qeit_get_inc_valid(&my_cq->hw_queue);
return hw_qeit_get_valid(&my_cq->hw_queue);
}
#define EHEA_CQ_REGISTER_ORIG 0
@ -356,7 +363,12 @@ struct ehea_qp *ehea_create_qp(struct ehea_adapter * adapter, u32 pd,
int ehea_destroy_qp(struct ehea_qp *qp);
int ehea_reg_mr_adapter(struct ehea_adapter *adapter);
int ehea_reg_kernel_mr(struct ehea_adapter *adapter, struct ehea_mr *mr);
int ehea_gen_smr(struct ehea_adapter *adapter, struct ehea_mr *old_mr,
struct ehea_mr *shared_mr);
int ehea_rem_mr(struct ehea_mr *mr);
void ehea_error_data(struct ehea_adapter *adapter, u64 res_handle);


@ -415,11 +415,18 @@ static int ser12_open(struct net_device *dev)
if (!dev || !bc)
return -ENXIO;
if (!dev->base_addr || dev->base_addr > 0x1000-SER12_EXTENT ||
dev->irq < 2 || dev->irq > 15)
if (!dev->base_addr || dev->base_addr > 0xffff-SER12_EXTENT ||
dev->irq < 2 || dev->irq > NR_IRQS) {
printk(KERN_INFO "baycom_ser_fdx: invalid portnumber (max %u) "
"or irq (2 <= irq <= %d)\n",
0xffff-SER12_EXTENT, NR_IRQS);
return -ENXIO;
if (bc->baud < 300 || bc->baud > 4800)
}
if (bc->baud < 300 || bc->baud > 4800) {
printk(KERN_INFO "baycom_ser_fdx: invalid baudrate "
"(300...4800)\n");
return -EINVAL;
}
if (!request_region(dev->base_addr, SER12_EXTENT, "baycom_ser_fdx")) {
printk(KERN_WARNING "BAYCOM_SER_FSX: I/O port 0x%04lx busy \n",
dev->base_addr);


@ -93,7 +93,7 @@ static void ibmveth_proc_unregister_driver(void);
static void ibmveth_proc_register_adapter(struct ibmveth_adapter *adapter);
static void ibmveth_proc_unregister_adapter(struct ibmveth_adapter *adapter);
static irqreturn_t ibmveth_interrupt(int irq, void *dev_instance);
static inline void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter);
static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter);
static struct kobj_type ktype_veth_pool;
#ifdef CONFIG_PROC_FS
@ -389,7 +389,7 @@ static void ibmveth_rxq_recycle_buffer(struct ibmveth_adapter *adapter)
}
}
static inline void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter)
static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter)
{
ibmveth_remove_buffer_from_pool(adapter, adapter->rx_queue.queue_addr[adapter->rx_queue.index].correlator);
@ -953,14 +953,16 @@ static int __devinit ibmveth_probe(struct vio_dev *dev, const struct vio_device_
ibmveth_debug_printk_no_adapter("entering ibmveth_probe for UA 0x%x\n",
dev->unit_address);
mac_addr_p = (unsigned char *) vio_get_attribute(dev, VETH_MAC_ADDR, 0);
mac_addr_p = (unsigned char *) vio_get_attribute(dev,
VETH_MAC_ADDR, NULL);
if(!mac_addr_p) {
printk(KERN_ERR "(%s:%3.3d) ERROR: Can't find VETH_MAC_ADDR "
"attribute\n", __FILE__, __LINE__);
return 0;
}
mcastFilterSize_p= (unsigned int *) vio_get_attribute(dev, VETH_MCAST_FILTER_SIZE, 0);
mcastFilterSize_p = (unsigned int *) vio_get_attribute(dev,
VETH_MCAST_FILTER_SIZE, NULL);
if(!mcastFilterSize_p) {
printk(KERN_ERR "(%s:%3.3d) ERROR: Can't find "
"VETH_MCAST_FILTER_SIZE attribute\n",


@ -111,9 +111,6 @@ struct ixgb_adapter;
/* How many Rx Buffers do we bundle into one write to the hardware ? */
#define IXGB_RX_BUFFER_WRITE 8 /* Must be power of 2 */
/* only works for sizes that are powers of 2 */
#define IXGB_ROUNDUP(i, size) ((i) = (((i) + (size) - 1) & ~((size) - 1)))
/* wrapper around a pointer to a socket buffer,
* so a DMA handle can be stored along with the buffer */
struct ixgb_buffer {


@ -577,11 +577,11 @@ ixgb_set_ringparam(struct net_device *netdev,
rxdr->count = max(ring->rx_pending,(uint32_t)MIN_RXD);
rxdr->count = min(rxdr->count,(uint32_t)MAX_RXD);
IXGB_ROUNDUP(rxdr->count, IXGB_REQ_RX_DESCRIPTOR_MULTIPLE);
rxdr->count = ALIGN(rxdr->count, IXGB_REQ_RX_DESCRIPTOR_MULTIPLE);
txdr->count = max(ring->tx_pending,(uint32_t)MIN_TXD);
txdr->count = min(txdr->count,(uint32_t)MAX_TXD);
IXGB_ROUNDUP(txdr->count, IXGB_REQ_TX_DESCRIPTOR_MULTIPLE);
txdr->count = ALIGN(txdr->count, IXGB_REQ_TX_DESCRIPTOR_MULTIPLE);
if(netif_running(adapter->netdev)) {
/* Try to get new resources before deleting old */


@ -685,7 +685,7 @@ ixgb_setup_tx_resources(struct ixgb_adapter *adapter)
/* round up to nearest 4K */
txdr->size = txdr->count * sizeof(struct ixgb_tx_desc);
IXGB_ROUNDUP(txdr->size, 4096);
txdr->size = ALIGN(txdr->size, 4096);
txdr->desc = pci_alloc_consistent(pdev, txdr->size, &txdr->dma);
if(!txdr->desc) {
@ -774,7 +774,7 @@ ixgb_setup_rx_resources(struct ixgb_adapter *adapter)
/* Round up to nearest 4K */
rxdr->size = rxdr->count * sizeof(struct ixgb_rx_desc);
IXGB_ROUNDUP(rxdr->size, 4096);
rxdr->size = ALIGN(rxdr->size, 4096);
rxdr->desc = pci_alloc_consistent(pdev, rxdr->size, &rxdr->dma);


@ -245,8 +245,6 @@ ixgb_validate_option(int *value, struct ixgb_option *opt)
return -1;
}
#define LIST_LEN(l) (sizeof(l) / sizeof(l[0]))
/**
* ixgb_check_options - Range Checking for Command Line Parameters
* @adapter: board private structure
@ -284,7 +282,7 @@ ixgb_check_options(struct ixgb_adapter *adapter)
} else {
tx_ring->count = opt.def;
}
IXGB_ROUNDUP(tx_ring->count, IXGB_REQ_TX_DESCRIPTOR_MULTIPLE);
tx_ring->count = ALIGN(tx_ring->count, IXGB_REQ_TX_DESCRIPTOR_MULTIPLE);
}
{ /* Receive Descriptor Count */
struct ixgb_option opt = {
@ -303,7 +301,7 @@ ixgb_check_options(struct ixgb_adapter *adapter)
} else {
rx_ring->count = opt.def;
}
IXGB_ROUNDUP(rx_ring->count, IXGB_REQ_RX_DESCRIPTOR_MULTIPLE);
rx_ring->count = ALIGN(rx_ring->count, IXGB_REQ_RX_DESCRIPTOR_MULTIPLE);
}
{ /* Receive Checksum Offload Enable */
struct ixgb_option opt = {
@ -335,7 +333,7 @@ ixgb_check_options(struct ixgb_adapter *adapter)
.name = "Flow Control",
.err = "reading default settings from EEPROM",
.def = ixgb_fc_tx_pause,
.arg = { .l = { .nr = LIST_LEN(fc_list),
.arg = { .l = { .nr = ARRAY_SIZE(fc_list),
.p = fc_list }}
};

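Note: the ixgb option-parsing hunks above drop the local LIST_LEN() helper in favour of ARRAY_SIZE(). The same idea in stand-alone form (the sample data here is arbitrary; the kernel version additionally fails the build if the argument is a pointer rather than a true array):

#include <stdio.h>

#define ARRAY_SIZE(arr)	(sizeof(arr) / sizeof((arr)[0]))

int main(void)
{
	int fc_values[] = { 3, 1, 4, 1, 5, 9 };	/* arbitrary sample data */
	size_t i;

	for (i = 0; i < ARRAY_SIZE(fc_values); i++)
		printf("entry %zu = %d\n", i, fc_values[i]);

	return 0;
}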

@ -33,6 +33,13 @@
#include <linux/ethtool.h>
#include <linux/mii.h>
/**
* mii_ethtool_gset - get settings that are specified in @ecmd
* @mii: MII interface
* @ecmd: requested ethtool_cmd
*
* Returns 0 for success, negative on error.
*/
int mii_ethtool_gset(struct mii_if_info *mii, struct ethtool_cmd *ecmd)
{
struct net_device *dev = mii->dev;
@ -114,6 +121,13 @@ int mii_ethtool_gset(struct mii_if_info *mii, struct ethtool_cmd *ecmd)
return 0;
}
/**
* mii_ethtool_sset - set settings that are specified in @ecmd
* @mii: MII interface
* @ecmd: requested ethtool_cmd
*
* Returns 0 for success, negative on error.
*/
int mii_ethtool_sset(struct mii_if_info *mii, struct ethtool_cmd *ecmd)
{
struct net_device *dev = mii->dev;
@ -207,6 +221,10 @@ int mii_ethtool_sset(struct mii_if_info *mii, struct ethtool_cmd *ecmd)
return 0;
}
/**
* mii_check_gmii_support - check if the MII supports Gb interfaces
* @mii: the MII interface
*/
int mii_check_gmii_support(struct mii_if_info *mii)
{
int reg;
@ -221,6 +239,12 @@ int mii_check_gmii_support(struct mii_if_info *mii)
return 0;
}
/**
* mii_link_ok - is link status up/ok
* @mii: the MII interface
*
* Returns 1 if the MII reports link status up/ok, 0 otherwise.
*/
int mii_link_ok (struct mii_if_info *mii)
{
/* first, a dummy read, needed to latch some MII phys */
@ -230,6 +254,12 @@ int mii_link_ok (struct mii_if_info *mii)
return 0;
}
/**
* mii_nway_restart - restart NWay (autonegotiation) for this interface
* @mii: the MII interface
*
* Returns 0 on success, negative on error.
*/
int mii_nway_restart (struct mii_if_info *mii)
{
int bmcr;
@ -247,6 +277,14 @@ int mii_nway_restart (struct mii_if_info *mii)
return r;
}
/**
* mii_check_link - check MII link status
* @mii: MII interface
*
* If the link status changed (previous != current), call
* netif_carrier_on() if current link status is Up or call
* netif_carrier_off() if current link status is Down.
*/
void mii_check_link (struct mii_if_info *mii)
{
int cur_link = mii_link_ok(mii);
@ -258,6 +296,15 @@ void mii_check_link (struct mii_if_info *mii)
netif_carrier_off(mii->dev);
}
/**
* mii_check_media - check the MII interface for a duplex change
* @mii: the MII interface
* @ok_to_print: OK to print link up/down messages
* @init_media: OK to save duplex mode in @mii
*
* Returns 1 if the duplex mode changed, 0 if not.
* If the media type is forced, always returns 0.
*/
unsigned int mii_check_media (struct mii_if_info *mii,
unsigned int ok_to_print,
unsigned int init_media)
@ -326,6 +373,16 @@ unsigned int mii_check_media (struct mii_if_info *mii,
return 0; /* duplex did not change */
}
/**
* generic_mii_ioctl - main MII ioctl interface
* @mii_if: the MII interface
* @mii_data: MII ioctl data structure
* @cmd: MII ioctl command
* @duplex_chg_out: pointer to @duplex_changed status if there was no
* ioctl error
*
* Returns 0 on success, negative on error.
*/
int generic_mii_ioctl(struct mii_if_info *mii_if,
struct mii_ioctl_data *mii_data, int cmd,
unsigned int *duplex_chg_out)

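Note: the mii.c hunks above only add kernel-doc for the existing helpers, but the documentation is easier to follow with the usual wiring in mind: a driver fills a struct mii_if_info with its MDIO accessors once and then leans on the helpers for link polling and ethtool. A hedged sketch; the demo_* names and the PHY address are invented, while the fields and calls are the real <linux/mii.h> API.

#include <linux/mii.h>
#include <linux/netdevice.h>

struct demo_priv {
	struct mii_if_info mii;
};

static int demo_mdio_read(struct net_device *dev, int phy_id, int reg)
{
	/* a real driver bit-bangs or pokes an MDIO controller here */
	return 0xffff;
}

static void demo_mdio_write(struct net_device *dev, int phy_id, int reg, int val)
{
}

static void demo_mii_setup(struct net_device *dev, struct demo_priv *priv)
{
	priv->mii.dev = dev;
	priv->mii.phy_id = 1;			/* assumed PHY address */
	priv->mii.phy_id_mask = 0x1f;
	priv->mii.reg_num_mask = 0x1f;
	priv->mii.mdio_read = demo_mdio_read;
	priv->mii.mdio_write = demo_mdio_write;
}

static void demo_watchdog(struct demo_priv *priv)
{
	/* logs a link message and updates duplex bookkeeping on change */
	mii_check_media(&priv->mii, 1, 0);
}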

@ -26,8 +26,6 @@ struct mipsnet_priv {
struct net_device_stats stats;
};
static struct platform_device *mips_plat_dev;
static char mipsnet_string[] = "mipsnet";
/*
@ -297,64 +295,17 @@ static struct device_driver mipsnet_driver = {
.remove = __devexit_p(mipsnet_device_remove),
};
static void mipsnet_platform_release(struct device *device)
{
struct platform_device *pldev;
/* free device */
pldev = to_platform_device(device);
kfree(pldev);
}
static int __init mipsnet_init_module(void)
{
struct platform_device *pldev;
int err;
printk(KERN_INFO "MIPSNet Ethernet driver. Version: %s. "
"(c)2005 MIPS Technologies, Inc.\n", MIPSNET_VERSION);
if (driver_register(&mipsnet_driver)) {
err = driver_register(&mipsnet_driver);
if (err)
printk(KERN_ERR "Driver registration failed\n");
err = -ENODEV;
goto out;
}
if (!(pldev = kmalloc (sizeof (*pldev), GFP_KERNEL))) {
err = -ENOMEM;
goto out_unregister_driver;
}
memset (pldev, 0, sizeof (*pldev));
pldev->name = mipsnet_string;
pldev->id = 0;
pldev->dev.release = mipsnet_platform_release;
if (platform_device_register(pldev)) {
err = -ENODEV;
goto out_free_pldev;
}
if (!pldev->dev.driver) {
/*
* The driver was not bound to this device, there was
* no hardware at this address. Unregister it, as the
* release fuction will take care of freeing the
* allocated structure
*/
platform_device_unregister (pldev);
}
mips_plat_dev = pldev;
return 0;
out_free_pldev:
kfree(pldev);
out_unregister_driver:
driver_unregister(&mipsnet_driver);
out:
return err;
}


@ -51,8 +51,8 @@
#include "mv643xx_eth.h"
/* Static function declarations */
static void eth_port_uc_addr_get(struct net_device *dev,
unsigned char *MacAddr);
static void eth_port_uc_addr_get(unsigned int port_num, unsigned char *p_addr);
static void eth_port_uc_addr_set(unsigned int port_num, unsigned char *p_addr);
static void eth_port_set_multicast_list(struct net_device *);
static void mv643xx_eth_port_enable_tx(unsigned int port_num,
unsigned int queues);
@ -1381,7 +1381,7 @@ static int mv643xx_eth_probe(struct platform_device *pdev)
port_num = mp->port_num = pd->port_number;
/* set default config values */
eth_port_uc_addr_get(dev, dev->dev_addr);
eth_port_uc_addr_get(port_num, dev->dev_addr);
mp->rx_ring_size = MV643XX_ETH_PORT_DEFAULT_RECEIVE_QUEUE_SIZE;
mp->tx_ring_size = MV643XX_ETH_PORT_DEFAULT_TRANSMIT_QUEUE_SIZE;
@ -1839,26 +1839,9 @@ static void eth_port_start(struct net_device *dev)
}
/*
* eth_port_uc_addr_set - This function Set the port Unicast address.
*
* DESCRIPTION:
* This function Set the port Ethernet MAC address.
*
* INPUT:
* unsigned int eth_port_num Port number.
* char * p_addr Address to be set
*
* OUTPUT:
* Set MAC address low and high registers. also calls
* eth_port_set_filter_table_entry() to set the unicast
* table with the proper information.
*
* RETURN:
* N/A.
*
* eth_port_uc_addr_set - Write a MAC address into the port's hw registers
*/
static void eth_port_uc_addr_set(unsigned int eth_port_num,
unsigned char *p_addr)
static void eth_port_uc_addr_set(unsigned int port_num, unsigned char *p_addr)
{
unsigned int mac_h;
unsigned int mac_l;
@ -1868,40 +1851,24 @@ static void eth_port_uc_addr_set(unsigned int eth_port_num,
mac_h = (p_addr[0] << 24) | (p_addr[1] << 16) | (p_addr[2] << 8) |
(p_addr[3] << 0);
mv_write(MV643XX_ETH_MAC_ADDR_LOW(eth_port_num), mac_l);
mv_write(MV643XX_ETH_MAC_ADDR_HIGH(eth_port_num), mac_h);
mv_write(MV643XX_ETH_MAC_ADDR_LOW(port_num), mac_l);
mv_write(MV643XX_ETH_MAC_ADDR_HIGH(port_num), mac_h);
/* Accept frames of this address */
table = MV643XX_ETH_DA_FILTER_UNICAST_TABLE_BASE(eth_port_num);
/* Accept frames with this address */
table = MV643XX_ETH_DA_FILTER_UNICAST_TABLE_BASE(port_num);
eth_port_set_filter_table_entry(table, p_addr[5] & 0x0f);
}
/*
* eth_port_uc_addr_get - This function retrieves the port Unicast address
* (MAC address) from the ethernet hw registers.
*
* DESCRIPTION:
* This function retrieves the port Ethernet MAC address.
*
* INPUT:
* unsigned int eth_port_num Port number.
* char *MacAddr pointer where the MAC address is stored
*
* OUTPUT:
* Copy the MAC address to the location pointed to by MacAddr
*
* RETURN:
* N/A.
*
* eth_port_uc_addr_get - Read the MAC address from the port's hw registers
*/
static void eth_port_uc_addr_get(struct net_device *dev, unsigned char *p_addr)
static void eth_port_uc_addr_get(unsigned int port_num, unsigned char *p_addr)
{
struct mv643xx_private *mp = netdev_priv(dev);
unsigned int mac_h;
unsigned int mac_l;
mac_h = mv_read(MV643XX_ETH_MAC_ADDR_HIGH(mp->port_num));
mac_l = mv_read(MV643XX_ETH_MAC_ADDR_LOW(mp->port_num));
mac_h = mv_read(MV643XX_ETH_MAC_ADDR_HIGH(port_num));
mac_l = mv_read(MV643XX_ETH_MAC_ADDR_LOW(port_num));
p_addr[0] = (mac_h >> 24) & 0xff;
p_addr[1] = (mac_h >> 16) & 0xff;

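Note: the mv643xx hunks above read and write the unicast MAC address through a pair of 32-bit registers, packing bytes 0-3 into the high word and, in the part of the driver beyond this excerpt, bytes 4-5 into the low word. A stand-alone round-trip of that packing (the low-word layout is assumed from the visible unpack code):

#include <assert.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char in[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	unsigned char out[6];
	unsigned int mac_h, mac_l;

	/* pack: bytes 0-3 into the high word, bytes 4-5 into the low word */
	mac_h = ((unsigned int)in[0] << 24) | (in[1] << 16) | (in[2] << 8) | in[3];
	mac_l = (in[4] << 8) | in[5];

	/* unpack, mirroring eth_port_uc_addr_get() above */
	out[0] = (mac_h >> 24) & 0xff;
	out[1] = (mac_h >> 16) & 0xff;
	out[2] = (mac_h >> 8) & 0xff;
	out[3] = mac_h & 0xff;
	out[4] = (mac_l >> 8) & 0xff;
	out[5] = mac_l & 0xff;

	assert(memcmp(in, out, sizeof(in)) == 0);
	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
	       out[0], out[1], out[2], out[3], out[4], out[5]);
	return 0;
}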

@ -346,10 +346,6 @@ static void eth_port_init(struct mv643xx_private *mp);
static void eth_port_reset(unsigned int eth_port_num);
static void eth_port_start(struct net_device *dev);
/* Port MAC address routines */
static void eth_port_uc_addr_set(unsigned int eth_port_num,
unsigned char *p_addr);
/* PHY and MIB routines */
static void ethernet_phy_reset(unsigned int eth_port_num);


@ -64,9 +64,9 @@
#include "netxen_nic_hw.h"
#define _NETXEN_NIC_LINUX_MAJOR 3
#define _NETXEN_NIC_LINUX_MINOR 3
#define _NETXEN_NIC_LINUX_SUBVERSION 3
#define NETXEN_NIC_LINUX_VERSIONID "3.3.3"
#define _NETXEN_NIC_LINUX_MINOR 4
#define _NETXEN_NIC_LINUX_SUBVERSION 2
#define NETXEN_NIC_LINUX_VERSIONID "3.4.2"
#define NUM_FLASH_SECTORS (64)
#define FLASH_SECTOR_SIZE (64 * 1024)
@ -131,8 +131,8 @@ extern struct workqueue_struct *netxen_workq;
#define FIRST_PAGE_GROUP_START 0
#define FIRST_PAGE_GROUP_END 0x100000
#define SECOND_PAGE_GROUP_START 0x4000000
#define SECOND_PAGE_GROUP_END 0x66BC000
#define SECOND_PAGE_GROUP_START 0x6000000
#define SECOND_PAGE_GROUP_END 0x68BC000
#define THIRD_PAGE_GROUP_START 0x70E4000
#define THIRD_PAGE_GROUP_END 0x8000000
@ -205,6 +205,8 @@ enum {
#define MAX_CMD_DESCRIPTORS 1024
#define MAX_RCV_DESCRIPTORS 16384
#define MAX_CMD_DESCRIPTORS_HOST (MAX_CMD_DESCRIPTORS / 4)
#define MAX_RCV_DESCRIPTORS_1G (MAX_RCV_DESCRIPTORS / 4)
#define MAX_JUMBO_RCV_DESCRIPTORS 1024
#define MAX_LRO_RCV_DESCRIPTORS 64
#define MAX_RCVSTATUS_DESCRIPTORS MAX_RCV_DESCRIPTORS
@ -230,7 +232,9 @@ enum {
(((index) + (count)) & ((length) - 1))
#define MPORT_SINGLE_FUNCTION_MODE 0x1111
#define MPORT_MULTI_FUNCTION_MODE 0x2222
#include "netxen_nic_phan_reg.h"
extern unsigned long long netxen_dma_mask;
extern unsigned long last_schedule_time;
@ -300,6 +304,8 @@ struct netxen_ring_ctx {
#define netxen_set_cmd_desc_port(cmd_desc, var) \
((cmd_desc)->port_ctxid |= ((var) & 0x0F))
#define netxen_set_cmd_desc_ctxid(cmd_desc, var) \
((cmd_desc)->port_ctxid |= ((var) & 0xF0))
#define netxen_set_cmd_desc_flags(cmd_desc, val) \
((cmd_desc)->flags_opcode &= ~cpu_to_le16(0x7f), \
@ -442,7 +448,7 @@ struct status_desc {
/* Bit pattern: 0-6 lro_count indicates frag sequence,
7 last_frag indicates last frag */
u8 lro;
} __attribute__ ((aligned(8)));
} __attribute__ ((aligned(16)));
enum {
NETXEN_RCV_PEG_0 = 0,
@ -703,10 +709,8 @@ extern char netxen_nic_driver_name[];
#else
#define DPRINTK(klevel, fmt, args...) do { \
printk(KERN_##klevel PFX "%s: %s: " fmt, __FUNCTION__,\
(adapter != NULL && \
adapter->port[0] != NULL && \
adapter->port[0]->netdev != NULL) ? \
adapter->port[0]->netdev->name : NULL, \
(adapter != NULL && adapter->netdev != NULL) ? \
adapter->netdev->name : NULL, \
## args); } while(0)
#endif
@ -722,6 +726,18 @@ struct netxen_skb_frag {
u32 length;
};
#define _netxen_set_bits(config_word, start, bits, val) {\
unsigned long long __tmask = (((1ULL << (bits)) - 1) << (start));\
unsigned long long __tvalue = (val); \
(config_word) &= ~__tmask; \
(config_word) |= (((__tvalue) << (start)) & __tmask); \
}
#define _netxen_clear_bits(config_word, start, bits) {\
unsigned long long __tmask = (((1ULL << (bits)) - 1) << (start)); \
(config_word) &= ~__tmask; \
}
/* Following defines are for the state of the buffers */
#define NETXEN_BUFFER_FREE 0
#define NETXEN_BUFFER_BUSY 1
@ -766,6 +782,8 @@ struct netxen_hardware_context {
void __iomem *pci_base0;
void __iomem *pci_base1;
void __iomem *pci_base2;
unsigned long first_page_group_end;
unsigned long first_page_group_start;
void __iomem *db_base;
unsigned long db_len;
@ -780,6 +798,7 @@ struct netxen_hardware_context {
struct pci_dev *cmd_desc_pdev;
dma_addr_t cmd_desc_phys_addr;
struct netxen_adapter *adapter;
int pci_func;
};
#define RCV_RING_LRO RCV_DESC_LRO
@ -788,17 +807,27 @@ struct netxen_hardware_context {
#define ETHERNET_FCS_SIZE 4
struct netxen_adapter_stats {
u64 ints;
u64 hostints;
u64 otherints;
u64 process_rcv;
u64 process_xmit;
u64 noxmitdone;
u64 xmitcsummed;
u64 post_called;
u64 posted;
u64 lastposted;
u64 goodskbposts;
u64 rcvdbadskb;
u64 xmitcalled;
u64 xmitedframes;
u64 xmitfinished;
u64 badskblen;
u64 nocmddescriptor;
u64 polled;
u64 uphappy;
u64 updropped;
u64 uplcong;
u64 uphcong;
u64 upmcong;
u64 updunno;
u64 skbfreed;
u64 txdropped;
u64 txnullskb;
u64 csummed;
u64 no_rcv;
u64 rxbytes;
u64 txbytes;
u64 ints;
};
/*
@ -846,13 +875,20 @@ struct netxen_dummy_dma {
struct netxen_adapter {
struct netxen_hardware_context ahw;
int port_count; /* Number of configured ports */
int active_ports; /* Number of open ports */
struct netxen_port *port[NETXEN_MAX_PORTS]; /* ptr to each port */
struct netxen_adapter *master;
struct net_device *netdev;
struct pci_dev *pdev;
struct net_device_stats net_stats;
unsigned char mac_addr[ETH_ALEN];
int mtu;
int portnum;
spinlock_t tx_lock;
spinlock_t lock;
struct work_struct watchdog_task;
struct timer_list watchdog_timer;
struct work_struct tx_timeout_task;
u32 curr_window;
@ -875,6 +911,15 @@ struct netxen_adapter {
u32 temp;
struct netxen_adapter_stats stats;
u16 portno;
u16 link_speed;
u16 link_duplex;
u16 state;
u16 link_autoneg;
int rcsum;
int status;
spinlock_t stats_lock;
struct netxen_cmd_buffer *cmd_buf_arr; /* Command buffers for xmit */
@ -891,65 +936,23 @@ struct netxen_adapter {
struct netxen_ring_ctx *ctx_desc;
struct pci_dev *ctx_desc_pdev;
dma_addr_t ctx_desc_phys_addr;
int (*enable_phy_interrupts) (struct netxen_adapter *, int);
int (*disable_phy_interrupts) (struct netxen_adapter *, int);
int (*enable_phy_interrupts) (struct netxen_adapter *);
int (*disable_phy_interrupts) (struct netxen_adapter *);
void (*handle_phy_intr) (struct netxen_adapter *);
int (*macaddr_set) (struct netxen_port *, netxen_ethernet_macaddr_t);
int (*set_mtu) (struct netxen_port *, int);
int (*set_promisc) (struct netxen_adapter *, int,
netxen_niu_prom_mode_t);
int (*unset_promisc) (struct netxen_adapter *, int,
netxen_niu_prom_mode_t);
int (*phy_read) (struct netxen_adapter *, long phy, long reg, u32 *);
int (*phy_write) (struct netxen_adapter *, long phy, long reg, u32 val);
int (*macaddr_set) (struct netxen_adapter *, netxen_ethernet_macaddr_t);
int (*set_mtu) (struct netxen_adapter *, int);
int (*set_promisc) (struct netxen_adapter *, netxen_niu_prom_mode_t);
int (*unset_promisc) (struct netxen_adapter *, netxen_niu_prom_mode_t);
int (*phy_read) (struct netxen_adapter *, long reg, u32 *);
int (*phy_write) (struct netxen_adapter *, long reg, u32 val);
int (*init_port) (struct netxen_adapter *, int);
void (*init_niu) (struct netxen_adapter *);
int (*stop_port) (struct netxen_adapter *, int);
int (*stop_port) (struct netxen_adapter *);
}; /* netxen_adapter structure */
/* Max number of xmit producer threads that can run simultaneously */
#define MAX_XMIT_PRODUCERS 16
struct netxen_port_stats {
u64 rcvdbadskb;
u64 xmitcalled;
u64 xmitedframes;
u64 xmitfinished;
u64 badskblen;
u64 nocmddescriptor;
u64 polled;
u64 uphappy;
u64 updropped;
u64 uplcong;
u64 uphcong;
u64 upmcong;
u64 updunno;
u64 skbfreed;
u64 txdropped;
u64 txnullskb;
u64 csummed;
u64 no_rcv;
u64 rxbytes;
u64 txbytes;
};
struct netxen_port {
struct netxen_adapter *adapter;
u16 portnum; /* GBE port number */
u16 link_speed;
u16 link_duplex;
u16 link_autoneg;
int flags;
struct net_device *netdev;
struct pci_dev *pdev;
struct net_device_stats net_stats;
struct netxen_port_stats stats;
struct work_struct tx_timeout_task;
};
#define PCI_OFFSET_FIRST_RANGE(adapter, off) \
((adapter)->ahw.pci_base0 + (off))
#define PCI_OFFSET_SECOND_RANGE(adapter, off) \
@ -987,32 +990,26 @@ static inline void __iomem *pci_base(struct netxen_adapter *adapter,
return NULL;
}
int netxen_niu_xgbe_enable_phy_interrupts(struct netxen_adapter *adapter,
int port);
int netxen_niu_gbe_enable_phy_interrupts(struct netxen_adapter *adapter,
int port);
int netxen_niu_xgbe_disable_phy_interrupts(struct netxen_adapter *adapter,
int port);
int netxen_niu_gbe_disable_phy_interrupts(struct netxen_adapter *adapter,
int port);
int netxen_niu_xgbe_clear_phy_interrupts(struct netxen_adapter *adapter,
int port);
int netxen_niu_gbe_clear_phy_interrupts(struct netxen_adapter *adapter,
int port);
int netxen_niu_xgbe_enable_phy_interrupts(struct netxen_adapter *adapter);
int netxen_niu_gbe_enable_phy_interrupts(struct netxen_adapter *adapter);
int netxen_niu_xgbe_disable_phy_interrupts(struct netxen_adapter *adapter);
int netxen_niu_gbe_disable_phy_interrupts(struct netxen_adapter *adapter);
int netxen_niu_xgbe_clear_phy_interrupts(struct netxen_adapter *adapter);
int netxen_niu_gbe_clear_phy_interrupts(struct netxen_adapter *adapter);
void netxen_nic_xgbe_handle_phy_intr(struct netxen_adapter *adapter);
void netxen_nic_gbe_handle_phy_intr(struct netxen_adapter *adapter);
void netxen_niu_gbe_set_mii_mode(struct netxen_adapter *adapter, int port,
long enable);
void netxen_niu_gbe_set_gmii_mode(struct netxen_adapter *adapter, int port,
long enable);
int netxen_niu_gbe_phy_read(struct netxen_adapter *adapter, long phy, long reg,
int netxen_niu_gbe_phy_read(struct netxen_adapter *adapter, long reg,
__u32 * readval);
int netxen_niu_gbe_phy_write(struct netxen_adapter *adapter, long phy,
int netxen_niu_gbe_phy_write(struct netxen_adapter *adapter,
long reg, __u32 val);
/* Functions available from netxen_nic_hw.c */
int netxen_nic_set_mtu_xgb(struct netxen_port *port, int new_mtu);
int netxen_nic_set_mtu_gb(struct netxen_port *port, int new_mtu);
int netxen_nic_set_mtu_xgb(struct netxen_adapter *adapter, int new_mtu);
int netxen_nic_set_mtu_gb(struct netxen_adapter *adapter, int new_mtu);
void netxen_nic_init_niu_gb(struct netxen_adapter *adapter);
void netxen_nic_pci_change_crbwindow(struct netxen_adapter *adapter, u32 wndw);
void netxen_nic_reg_write(struct netxen_adapter *adapter, u64 off, u32 val);
@ -1027,6 +1024,7 @@ int netxen_nic_hw_write_wx(struct netxen_adapter *adapter, u64 off, void *data,
int len);
void netxen_crb_writelit_adapter(struct netxen_adapter *adapter,
unsigned long off, int data);
int netxen_nic_erase_pxe(struct netxen_adapter *adapter);
/* Functions from netxen_nic_init.c */
void netxen_free_adapter_offload(struct netxen_adapter *adapter);
@ -1051,11 +1049,8 @@ int netxen_do_rom_se(struct netxen_adapter *adapter, int addr);
/* Functions from netxen_nic_isr.c */
void netxen_nic_isr_other(struct netxen_adapter *adapter);
void netxen_indicate_link_status(struct netxen_adapter *adapter, u32 port,
u32 link);
void netxen_handle_port_int(struct netxen_adapter *adapter, u32 port,
u32 enable);
void netxen_nic_stop_all_ports(struct netxen_adapter *adapter);
void netxen_indicate_link_status(struct netxen_adapter *adapter, u32 link);
void netxen_handle_port_int(struct netxen_adapter *adapter, u32 enable);
void netxen_initialize_adapter_sw(struct netxen_adapter *adapter);
void netxen_initialize_adapter_hw(struct netxen_adapter *adapter);
void *netxen_alloc(struct pci_dev *pdev, size_t sz, dma_addr_t * ptr,
@ -1110,6 +1105,7 @@ static inline void netxen_nic_enable_int(struct netxen_adapter *adapter)
if (!(adapter->flags & NETXEN_NIC_MSI_ENABLED)) {
mask = 0xbff;
writel(0X0, NETXEN_CRB_NORMALIZE(adapter, CRB_INT_VECTOR));
writel(mask, PCI_OFFSET_SECOND_RANGE(adapter,
ISR_INT_TARGET_MASK));
}
@ -1174,4 +1170,5 @@ extern int netxen_rom_fast_read(struct netxen_adapter *adapter, int addr,
extern struct ethtool_ops netxen_nic_ethtool_ops;
extern int physical_port[]; /* physical port # from virtual port.*/
#endif /* __NETXEN_NIC_H_ */

View File

@ -40,8 +40,8 @@
#include <linux/ethtool.h>
#include <linux/version.h>
#include "netxen_nic_hw.h"
#include "netxen_nic.h"
#include "netxen_nic_hw.h"
#include "netxen_nic_phan_reg.h"
struct netxen_nic_stats {
@ -50,8 +50,8 @@ struct netxen_nic_stats {
int stat_offset;
};
#define NETXEN_NIC_STAT(m) sizeof(((struct netxen_port *)0)->m), \
offsetof(struct netxen_port, m)
#define NETXEN_NIC_STAT(m) sizeof(((struct netxen_adapter *)0)->m), \
offsetof(struct netxen_adapter, m)
#define NETXEN_NIC_PORT_WINDOW 0x10000
#define NETXEN_NIC_INVALID_DATA 0xDEADBEEF
@ -100,8 +100,7 @@ static int netxen_nic_get_eeprom_len(struct net_device *dev)
static void
netxen_nic_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *drvinfo)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
u32 fw_major = 0;
u32 fw_minor = 0;
u32 fw_build = 0;
@ -115,7 +114,7 @@ netxen_nic_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *drvinfo)
fw_build = readl(NETXEN_CRB_NORMALIZE(adapter, NETXEN_FW_VERSION_SUB));
sprintf(drvinfo->fw_version, "%d.%d.%d", fw_major, fw_minor, fw_build);
strncpy(drvinfo->bus_info, pci_name(port->pdev), 32);
strncpy(drvinfo->bus_info, pci_name(adapter->pdev), 32);
drvinfo->n_stats = NETXEN_NIC_STATS_LEN;
drvinfo->testinfo_len = NETXEN_NIC_TEST_LEN;
drvinfo->regdump_len = NETXEN_NIC_REGS_LEN;
@ -125,8 +124,7 @@ netxen_nic_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *drvinfo)
static int
netxen_nic_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
struct netxen_board_info *boardinfo = &adapter->ahw.boardcfg;
/* read which mode */
@ -146,8 +144,8 @@ netxen_nic_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
ecmd->port = PORT_TP;
if (netif_running(dev)) {
ecmd->speed = port->link_speed;
ecmd->duplex = port->link_duplex;
ecmd->speed = adapter->link_speed;
ecmd->duplex = adapter->link_duplex;
} else
return -EIO; /* link absent */
} else if (adapter->ahw.board_type == NETXEN_NIC_XGBE) {
@ -165,7 +163,7 @@ netxen_nic_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
} else
return -EIO;
ecmd->phy_address = port->portnum;
ecmd->phy_address = adapter->portnum;
ecmd->transceiver = XCVR_EXTERNAL;
switch ((netxen_brdtype_t) boardinfo->board_type) {
@ -179,7 +177,7 @@ netxen_nic_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
ecmd->port = PORT_TP;
ecmd->autoneg = (boardinfo->board_type ==
NETXEN_BRDTYPE_P2_SB31_10G_CX4) ?
(AUTONEG_DISABLE) : (port->link_autoneg);
(AUTONEG_DISABLE) : (adapter->link_autoneg);
break;
case NETXEN_BRDTYPE_P2_SB31_10G_HMEZ:
case NETXEN_BRDTYPE_P2_SB31_10G_IMEZ:
@ -206,23 +204,22 @@ netxen_nic_get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
static int
netxen_nic_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
__u32 status;
/* read which mode */
if (adapter->ahw.board_type == NETXEN_NIC_GBE) {
/* autonegotiation */
if (adapter->phy_write
&& adapter->phy_write(adapter, port->portnum,
&& adapter->phy_write(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_AUTONEG,
ecmd->autoneg) != 0)
return -EIO;
else
port->link_autoneg = ecmd->autoneg;
adapter->link_autoneg = ecmd->autoneg;
if (adapter->phy_read
&& adapter->phy_read(adapter, port->portnum,
&& adapter->phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_PHY_STATUS,
&status) != 0)
return -EIO;
@ -245,13 +242,13 @@ netxen_nic_set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
if (ecmd->duplex == DUPLEX_FULL)
netxen_set_phy_duplex(status);
if (adapter->phy_write
&& adapter->phy_write(adapter, port->portnum,
&& adapter->phy_write(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_PHY_STATUS,
*((int *)&status)) != 0)
return -EIO;
else {
port->link_speed = ecmd->speed;
port->link_duplex = ecmd->duplex;
adapter->link_speed = ecmd->speed;
adapter->link_duplex = ecmd->duplex;
}
} else
return -EOPNOTSUPP;
@ -360,15 +357,14 @@ static struct netxen_niu_regs niu_registers[] = {
static void
netxen_nic_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *p)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
__u32 mode, *regs_buff = p;
void __iomem *addr;
int i, window;
memset(p, 0, NETXEN_NIC_REGS_LEN);
regs->version = (1 << 24) | (adapter->ahw.revision_id << 16) |
(port->pdev)->device;
(adapter->pdev)->device;
/* which mode */
NETXEN_NIC_LOCKED_READ_REG(NETXEN_NIU_MODE, &regs_buff[0]);
mode = regs_buff[0];
@ -383,7 +379,8 @@ netxen_nic_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *p)
for (i = 3; niu_registers[mode].reg[i - 3] != -1; i++) {
/* GB: port specific registers */
if (mode == 0 && i >= 19)
window = port->portnum * NETXEN_NIC_PORT_WINDOW;
window = physical_port[adapter->portnum] *
NETXEN_NIC_PORT_WINDOW;
NETXEN_NIC_LOCKED_READ_REG(niu_registers[mode].
reg[i - 3] + window,
@ -395,15 +392,14 @@ netxen_nic_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *p)
static u32 netxen_nic_test_link(struct net_device *dev)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
__u32 status;
int val;
/* read which mode */
if (adapter->ahw.board_type == NETXEN_NIC_GBE) {
if (adapter->phy_read
&& adapter->phy_read(adapter, port->portnum,
&& adapter->phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_PHY_STATUS,
&status) != 0)
return -EIO;
@ -422,15 +418,15 @@ static int
netxen_nic_get_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
u8 * bytes)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
int offset;
int ret;
if (eeprom->len == 0)
return -EINVAL;
eeprom->magic = (port->pdev)->vendor | ((port->pdev)->device << 16);
eeprom->magic = (adapter->pdev)->vendor |
((adapter->pdev)->device << 16);
offset = eeprom->offset;
ret = netxen_rom_fast_read_words(adapter, offset, bytes,
@ -445,8 +441,7 @@ static int
netxen_nic_set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
u8 * bytes)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
int offset = eeprom->offset;
static int flash_start;
static int ready_to_flash;
@ -516,8 +511,7 @@ netxen_nic_set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
static void
netxen_nic_get_ringparam(struct net_device *dev, struct ethtool_ringparam *ring)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
int i;
ring->rx_pending = 0;
@ -541,19 +535,45 @@ static void
netxen_nic_get_pauseparam(struct net_device *dev,
struct ethtool_pauseparam *pause)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
__u32 val;
int port = physical_port[adapter->portnum];
if (adapter->ahw.board_type == NETXEN_NIC_GBE) {
if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS))
return;
/* get flow control settings */
netxen_nic_read_w0(adapter,
NETXEN_NIU_GB_MAC_CONFIG_0(port->portnum),
&val);
netxen_nic_read_w0(adapter, NETXEN_NIU_GB_MAC_CONFIG_0(port),
&val);
pause->rx_pause = netxen_gb_get_rx_flowctl(val);
pause->tx_pause = netxen_gb_get_tx_flowctl(val);
/* get autoneg settings */
pause->autoneg = port->link_autoneg;
netxen_nic_read_w0(adapter, NETXEN_NIU_GB_PAUSE_CTL, &val);
switch (port) {
case 0:
pause->tx_pause = !(netxen_gb_get_gb0_mask(val));
break;
case 1:
pause->tx_pause = !(netxen_gb_get_gb1_mask(val));
break;
case 2:
pause->tx_pause = !(netxen_gb_get_gb2_mask(val));
break;
case 3:
default:
pause->tx_pause = !(netxen_gb_get_gb3_mask(val));
break;
}
} else if (adapter->ahw.board_type == NETXEN_NIC_XGBE) {
if ((port < 0) || (port > NETXEN_NIU_MAX_XG_PORTS))
return;
pause->rx_pause = 1;
netxen_nic_read_w0(adapter, NETXEN_NIU_XG_PAUSE_CTL, &val);
if (port == 0)
pause->tx_pause = !(netxen_xg_get_xg0_mask(val));
else
pause->tx_pause = !(netxen_xg_get_xg1_mask(val));
} else {
printk(KERN_ERR"%s: Unknown board type: %x\n",
netxen_nic_driver_name, adapter->ahw.board_type);
}
}
@ -561,42 +581,76 @@ static int
netxen_nic_set_pauseparam(struct net_device *dev,
struct ethtool_pauseparam *pause)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(dev);
__u32 val;
unsigned int autoneg;
int port = physical_port[adapter->portnum];
/* read mode */
if (adapter->ahw.board_type == NETXEN_NIC_GBE) {
if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS))
return -EIO;
/* set flow control */
netxen_nic_read_w0(adapter,
NETXEN_NIU_GB_MAC_CONFIG_0(port->portnum),
(u32 *) & val);
if (pause->tx_pause)
netxen_gb_tx_flowctl(val);
else
netxen_gb_unset_tx_flowctl(val);
NETXEN_NIU_GB_MAC_CONFIG_0(port), &val);
if (pause->rx_pause)
netxen_gb_rx_flowctl(val);
else
netxen_gb_unset_rx_flowctl(val);
netxen_nic_write_w0(adapter,
NETXEN_NIU_GB_MAC_CONFIG_0(port->portnum),
*&val);
netxen_nic_write_w0(adapter, NETXEN_NIU_GB_MAC_CONFIG_0(port),
val);
/* set autoneg */
autoneg = pause->autoneg;
if (adapter->phy_write
&& adapter->phy_write(adapter, port->portnum,
NETXEN_NIU_GB_MII_MGMT_ADDR_AUTONEG,
autoneg) != 0)
return -EIO;
else {
port->link_autoneg = pause->autoneg;
return 0;
netxen_nic_read_w0(adapter, NETXEN_NIU_GB_PAUSE_CTL, &val);
switch (port) {
case 0:
if (pause->tx_pause)
netxen_gb_unset_gb0_mask(val);
else
netxen_gb_set_gb0_mask(val);
break;
case 1:
if (pause->tx_pause)
netxen_gb_unset_gb1_mask(val);
else
netxen_gb_set_gb1_mask(val);
break;
case 2:
if (pause->tx_pause)
netxen_gb_unset_gb2_mask(val);
else
netxen_gb_set_gb2_mask(val);
break;
case 3:
default:
if (pause->tx_pause)
netxen_gb_unset_gb3_mask(val);
else
netxen_gb_set_gb3_mask(val);
break;
}
} else
return -EOPNOTSUPP;
netxen_nic_write_w0(adapter, NETXEN_NIU_GB_PAUSE_CTL, val);
} else if (adapter->ahw.board_type == NETXEN_NIC_XGBE) {
if ((port < 0) || (port > NETXEN_NIU_MAX_XG_PORTS))
return -EIO;
netxen_nic_read_w0(adapter, NETXEN_NIU_XG_PAUSE_CTL, &val);
if (port == 0) {
if (pause->tx_pause)
netxen_xg_unset_xg0_mask(val);
else
netxen_xg_set_xg0_mask(val);
} else {
if (pause->tx_pause)
netxen_xg_unset_xg1_mask(val);
else
netxen_xg_set_xg1_mask(val);
}
netxen_nic_write_w0(adapter, NETXEN_NIU_XG_PAUSE_CTL, val);
} else {
printk(KERN_ERR "%s: Unknown board type: %x\n",
netxen_nic_driver_name,
adapter->ahw.board_type);
}
return 0;
}
static int netxen_nic_reg_test(struct net_device *dev)
@ -627,23 +681,12 @@ static void
netxen_nic_diag_test(struct net_device *dev, struct ethtool_test *eth_test,
u64 * data)
{
if (eth_test->flags == ETH_TEST_FL_OFFLINE) { /* offline tests */
/* link test */
if ((data[1] = (u64) netxen_nic_test_link(dev)))
eth_test->flags |= ETH_TEST_FL_FAILED;
/* register tests */
if ((data[0] = netxen_nic_reg_test(dev)))
eth_test->flags |= ETH_TEST_FL_FAILED;
} else { /* online tests */
/* register tests */
if((data[0] = netxen_nic_reg_test(dev)))
eth_test->flags |= ETH_TEST_FL_FAILED;
/* link test */
if ((data[1] = (u64) netxen_nic_test_link(dev)))
eth_test->flags |= ETH_TEST_FL_FAILED;
}
memset(data, 0, sizeof(uint64_t) * NETXEN_NIC_TEST_LEN);
if ((data[0] = netxen_nic_reg_test(dev)))
eth_test->flags |= ETH_TEST_FL_FAILED;
/* link test */
if ((data[1] = (u64) netxen_nic_test_link(dev)))
eth_test->flags |= ETH_TEST_FL_FAILED;
}
static void
@ -675,12 +718,13 @@ static void
netxen_nic_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *stats, u64 * data)
{
struct netxen_port *port = netdev_priv(dev);
struct netxen_adapter *adapter = netdev_priv(dev);
int index;
for (index = 0; index < NETXEN_NIC_STATS_LEN; index++) {
char *p =
(char *)port + netxen_nic_gstrings_stats[index].stat_offset;
(char *)adapter +
netxen_nic_gstrings_stats[index].stat_offset;
data[index] =
(netxen_nic_gstrings_stats[index].sizeof_stat ==
sizeof(u64)) ? *(u64 *) p : *(u32 *) p;

View File

@ -467,6 +467,8 @@ enum {
#define NETXEN_PCI_OCM1 (0x05100000UL)
#define NETXEN_PCI_OCM1_MAX (0x051fffffUL)
#define NETXEN_PCI_CRBSPACE (0x06000000UL)
#define NETXEN_PCI_128MB_SIZE (0x08000000UL)
#define NETXEN_PCI_32MB_SIZE (0x02000000UL)
#define NETXEN_CRB_CAM NETXEN_PCI_CRB_WINDOW(NETXEN_HW_PX_MAP_CRB_CAM)
@ -484,6 +486,7 @@ enum {
/* 10 seconds before we give up */
#define NETXEN_NIU_PHY_WAITMAX 50
#define NETXEN_NIU_MAX_GBE_PORTS 4
#define NETXEN_NIU_MAX_XG_PORTS 2
#define NETXEN_NIU_MODE (NETXEN_CRB_NIU + 0x00000)
@ -527,6 +530,7 @@ enum {
#define NETXEN_NIU_XG_PAUSE_CTL (NETXEN_CRB_NIU + 0x00098)
#define NETXEN_NIU_XG_PAUSE_LEVEL (NETXEN_CRB_NIU + 0x000dc)
#define NETXEN_NIU_XG_SEL (NETXEN_CRB_NIU + 0x00128)
#define NETXEN_NIU_GB_PAUSE_CTL (NETXEN_CRB_NIU + 0x0030c)
#define NETXEN_NIU_FULL_LEVEL_XG (NETXEN_CRB_NIU + 0x00450)
@ -649,11 +653,19 @@ enum {
#define PCIX_MS_WINDOW (0x10204)
#define PCIX_SN_WINDOW (0x10208)
#define PCIX_CRB_WINDOW (0x10210)
#define PCIX_CRB_WINDOW_F0 (0x10210)
#define PCIX_CRB_WINDOW_F1 (0x10230)
#define PCIX_CRB_WINDOW_F2 (0x10250)
#define PCIX_CRB_WINDOW_F3 (0x10270)
#define PCIX_TARGET_STATUS (0x10118)
#define PCIX_TARGET_MASK (0x10128)
#define PCIX_MSI_F0 (0x13000)
#define PCIX_MSI_F1 (0x13004)
#define PCIX_MSI_F2 (0x13008)
#define PCIX_MSI_F3 (0x1300c)
#define PCIX_MSI_F(i) (0x13000+((i)*4))
#define PCIX_PS_MEM_SPACE (0x90000)
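(Illustrative, not part of the commit.) The new PCIX_MSI_F(i) macro is just the indexed form of the four per-function MSI target registers added above, spaced 4 bytes apart; a tiny compile-and-run check under that assumption:

#include <assert.h>

/* Register offsets copied from the defines above. */
#define PCIX_MSI_F0	(0x13000)
#define PCIX_MSI_F3	(0x1300c)
#define PCIX_MSI_F(i)	(0x13000+((i)*4))

int main(void)
{
	/* One MSI target register per PCI function, 4 bytes apart, so the
	 * indexed macro reproduces the individual defines. */
	assert(PCIX_MSI_F(0) == PCIX_MSI_F0);
	assert(PCIX_MSI_F(3) == PCIX_MSI_F3);
	return 0;
}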

View File

@ -33,10 +33,225 @@
#include "netxen_nic.h"
#include "netxen_nic_hw.h"
#define DEFINE_GLOBAL_RECV_CRB
#include "netxen_nic_phan_reg.h"
#include <net/ip.h>
struct netxen_recv_crb recv_crb_registers[] = {
/*
* Instance 0.
*/
{
/* rcv_desc_crb: */
{
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x100),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x104),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x108),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x10c),
},
/* Jumbo frames */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x110),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x114),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x118),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x11c),
},
/* LRO */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x120),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x124),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x128),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x12c),
}
},
/* crb_rcvstatus_ring: */
NETXEN_NIC_REG(0x130),
/* crb_rcv_status_producer: */
NETXEN_NIC_REG(0x134),
/* crb_rcv_status_consumer: */
NETXEN_NIC_REG(0x138),
/* crb_rcvpeg_state: */
NETXEN_NIC_REG(0x13c),
/* crb_status_ring_size */
NETXEN_NIC_REG(0x140),
},
/*
* Instance 1,
*/
{
/* rcv_desc_crb: */
{
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x144),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x148),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x14c),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x150),
},
/* Jumbo frames */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x154),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x158),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x15c),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x160),
},
/* LRO */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x164),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x168),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x16c),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x170),
}
},
/* crb_rcvstatus_ring: */
NETXEN_NIC_REG(0x174),
/* crb_rcv_status_producer: */
NETXEN_NIC_REG(0x178),
/* crb_rcv_status_consumer: */
NETXEN_NIC_REG(0x17c),
/* crb_rcvpeg_state: */
NETXEN_NIC_REG(0x180),
/* crb_status_ring_size */
NETXEN_NIC_REG(0x184),
},
/*
* Instance 2,
*/
{
{
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x1d8),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x1dc),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x1f0),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x1f4),
},
/* Jumbo frames */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x1f8),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x1fc),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x200),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x204),
},
/* LRO */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x208),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x20c),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x210),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x214),
}
},
/* crb_rcvstatus_ring: */
NETXEN_NIC_REG(0x218),
/* crb_rcv_status_producer: */
NETXEN_NIC_REG(0x21c),
/* crb_rcv_status_consumer: */
NETXEN_NIC_REG(0x220),
/* crb_rcvpeg_state: */
NETXEN_NIC_REG(0x224),
/* crb_status_ring_size */
NETXEN_NIC_REG(0x228),
},
/*
* Instance 3,
*/
{
{
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x22c),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x230),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x234),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x238),
},
/* Jumbo frames */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x23c),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x240),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x244),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x248),
},
/* LRO */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x24c),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x250),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x254),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x258),
}
},
/* crb_rcvstatus_ring: */
NETXEN_NIC_REG(0x25c),
/* crb_rcv_status_producer: */
NETXEN_NIC_REG(0x260),
/* crb_rcv_status_consumer: */
NETXEN_NIC_REG(0x264),
/* crb_rcvpeg_state: */
NETXEN_NIC_REG(0x268),
/* crb_status_ring_size */
NETXEN_NIC_REG(0x26c),
},
};
u64 ctx_addr_sig_regs[][3] = {
{NETXEN_NIC_REG(0x188), NETXEN_NIC_REG(0x18c), NETXEN_NIC_REG(0x1c0)},
{NETXEN_NIC_REG(0x190), NETXEN_NIC_REG(0x194), NETXEN_NIC_REG(0x1c4)},
{NETXEN_NIC_REG(0x198), NETXEN_NIC_REG(0x19c), NETXEN_NIC_REG(0x1c8)},
{NETXEN_NIC_REG(0x1a0), NETXEN_NIC_REG(0x1a4), NETXEN_NIC_REG(0x1cc)}
};
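(Illustrative sketch, not part of the commit.) recv_crb_registers[] now carries one full set of receive CRB offsets per context/PCI function, and later hunks index it with adapter->portnum instead of the hard-coded instance 0. A stripped-down user-space sketch of that lookup; the struct fields and raw offsets are simplified stand-ins and the NETXEN_NIC_REG() wrapper is omitted.

#include <stdio.h>

/* Simplified stand-ins for the driver structures; only the fields needed
 * to show the per-port lookup are kept. */
struct rcv_desc_crb { unsigned long crb_rcv_producer_offset; };
struct netxen_recv_crb { struct rcv_desc_crb rcv_desc_crb[3]; };

/* Producer offsets taken from the four instances above
 * (normal, jumbo and LRO rings for each context). */
static const struct netxen_recv_crb recv_crb_registers[4] = {
	{ { { 0x100 }, { 0x110 }, { 0x120 } } },
	{ { { 0x144 }, { 0x154 }, { 0x164 } } },
	{ { { 0x1d8 }, { 0x1f8 }, { 0x208 } } },
	{ { { 0x22c }, { 0x23c }, { 0x24c } } },
};

int main(void)
{
	int portnum = 2, ringid = 0;

	/* With multi PCI function support the driver indexes by the
	 * adapter's port number instead of hard-coding instance 0. */
	printf("rx producer CRB for port %d, ring %d: 0x%lx\n", portnum, ringid,
	       recv_crb_registers[portnum].rcv_desc_crb[ringid].crb_rcv_producer_offset);
	return 0;
}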
/* PCI Windowing for DDR regions. */
#define ADDR_IN_RANGE(addr, low, high) \
@ -70,8 +285,7 @@ void netxen_free_hw_resources(struct netxen_adapter *adapter);
int netxen_nic_set_mac(struct net_device *netdev, void *p)
{
struct netxen_port *port = netdev_priv(netdev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(netdev);
struct sockaddr *addr = p;
if (netif_running(netdev))
@ -84,7 +298,7 @@ int netxen_nic_set_mac(struct net_device *netdev, void *p)
memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
if (adapter->macaddr_set)
adapter->macaddr_set(port, addr->sa_data);
adapter->macaddr_set(adapter, addr->sa_data);
return 0;
}
@ -94,56 +308,19 @@ int netxen_nic_set_mac(struct net_device *netdev, void *p)
*/
void netxen_nic_set_multi(struct net_device *netdev)
{
struct netxen_port *port = netdev_priv(netdev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(netdev);
struct dev_mc_list *mc_ptr;
__u32 netxen_mac_addr_cntl_data = 0;
mc_ptr = netdev->mc_list;
if (netdev->flags & IFF_PROMISC) {
if (adapter->set_promisc)
adapter->set_promisc(adapter,
port->portnum,
NETXEN_NIU_PROMISC_MODE);
} else {
if (adapter->unset_promisc &&
adapter->ahw.boardcfg.board_type
!= NETXEN_BRDTYPE_P2_SB31_10G_IMEZ)
if (adapter->unset_promisc)
adapter->unset_promisc(adapter,
port->portnum,
NETXEN_NIU_NON_PROMISC_MODE);
}
if (adapter->ahw.board_type == NETXEN_NIC_XGBE) {
netxen_nic_mcr_set_mode_select(netxen_mac_addr_cntl_data, 0x03);
netxen_nic_mcr_set_id_pool0(netxen_mac_addr_cntl_data, 0x00);
netxen_nic_mcr_set_id_pool1(netxen_mac_addr_cntl_data, 0x00);
netxen_nic_mcr_set_id_pool2(netxen_mac_addr_cntl_data, 0x00);
netxen_nic_mcr_set_id_pool3(netxen_mac_addr_cntl_data, 0x00);
netxen_nic_mcr_set_enable_xtnd0(netxen_mac_addr_cntl_data);
netxen_nic_mcr_set_enable_xtnd1(netxen_mac_addr_cntl_data);
netxen_nic_mcr_set_enable_xtnd2(netxen_mac_addr_cntl_data);
netxen_nic_mcr_set_enable_xtnd3(netxen_mac_addr_cntl_data);
} else {
netxen_nic_mcr_set_mode_select(netxen_mac_addr_cntl_data, 0x00);
netxen_nic_mcr_set_id_pool0(netxen_mac_addr_cntl_data, 0x00);
netxen_nic_mcr_set_id_pool1(netxen_mac_addr_cntl_data, 0x01);
netxen_nic_mcr_set_id_pool2(netxen_mac_addr_cntl_data, 0x02);
netxen_nic_mcr_set_id_pool3(netxen_mac_addr_cntl_data, 0x03);
}
writel(netxen_mac_addr_cntl_data,
NETXEN_CRB_NORMALIZE(adapter, NETXEN_MAC_ADDR_CNTL_REG));
if (adapter->ahw.board_type == NETXEN_NIC_XGBE) {
writel(netxen_mac_addr_cntl_data,
NETXEN_CRB_NORMALIZE(adapter,
NETXEN_MULTICAST_ADDR_HI_0));
} else {
writel(netxen_mac_addr_cntl_data,
NETXEN_CRB_NORMALIZE(adapter,
NETXEN_MULTICAST_ADDR_HI_1));
}
netxen_mac_addr_cntl_data = 0;
writel(netxen_mac_addr_cntl_data,
NETXEN_CRB_NORMALIZE(adapter, NETXEN_NIU_GB_DROP_WRONGADDR));
}
/*
@ -152,8 +329,7 @@ void netxen_nic_set_multi(struct net_device *netdev)
*/
int netxen_nic_change_mtu(struct net_device *netdev, int mtu)
{
struct netxen_port *port = netdev_priv(netdev);
struct netxen_adapter *adapter = port->adapter;
struct netxen_adapter *adapter = netdev_priv(netdev);
int eff_mtu = mtu + NETXEN_ENET_HEADER_SIZE + NETXEN_ETH_FCS_SIZE;
if ((eff_mtu > NETXEN_MAX_MTU) || (eff_mtu < NETXEN_MIN_MTU)) {
@ -163,7 +339,7 @@ int netxen_nic_change_mtu(struct net_device *netdev, int mtu)
}
if (adapter->set_mtu)
adapter->set_mtu(port, mtu);
adapter->set_mtu(adapter, mtu);
netdev->mtu = mtu;
return 0;
@ -180,9 +356,9 @@ int netxen_nic_hw_resources(struct netxen_adapter *adapter)
void *addr;
int loops = 0, err = 0;
int ctx, ring;
u32 card_cmdring = 0;
struct netxen_recv_context *recv_ctx;
struct netxen_rcv_desc_ctx *rcv_desc;
int func_id = adapter->portnum;
DPRINTK(INFO, "crb_base: %lx %x", NETXEN_PCI_CRBSPACE,
PCI_OFFSET_SECOND_RANGE(adapter, NETXEN_PCI_CRBSPACE));
@ -191,11 +367,6 @@ int netxen_nic_hw_resources(struct netxen_adapter *adapter)
DPRINTK(INFO, "cam RAM: %lx %x", NETXEN_CAM_RAM_BASE,
pci_base_offset(adapter, NETXEN_CAM_RAM_BASE));
/* Window 1 call */
card_cmdring = readl(NETXEN_CRB_NORMALIZE(adapter, CRB_CMDPEG_CMDRING));
DPRINTK(INFO, "Command Peg sends 0x%x for cmdring base\n",
card_cmdring);
for (ctx = 0; ctx < MAX_RCV_CTX; ++ctx) {
DPRINTK(INFO, "Command Peg ready..waiting for rcv peg\n");
@ -229,7 +400,7 @@ int netxen_nic_hw_resources(struct netxen_adapter *adapter)
(dma_addr_t *) & adapter->ctx_desc_phys_addr,
&adapter->ctx_desc_pdev);
printk("ctx_desc_phys_addr: 0x%llx\n",
printk(KERN_INFO "ctx_desc_phys_addr: 0x%llx\n",
(unsigned long long) adapter->ctx_desc_phys_addr);
if (addr == NULL) {
DPRINTK(ERR, "bad return from pci_alloc_consistent\n");
@ -238,6 +409,7 @@ int netxen_nic_hw_resources(struct netxen_adapter *adapter)
}
memset(addr, 0, sizeof(struct netxen_ring_ctx));
adapter->ctx_desc = (struct netxen_ring_ctx *)addr;
adapter->ctx_desc->ctx_id = cpu_to_le32(adapter->portnum);
adapter->ctx_desc->cmd_consumer_offset =
cpu_to_le64(adapter->ctx_desc_phys_addr +
sizeof(struct netxen_ring_ctx));
@ -249,7 +421,7 @@ int netxen_nic_hw_resources(struct netxen_adapter *adapter)
adapter->max_tx_desc_count,
(dma_addr_t *) & hw->cmd_desc_phys_addr,
&adapter->ahw.cmd_desc_pdev);
printk("cmd_desc_phys_addr: 0x%llx\n",
printk(KERN_INFO "cmd_desc_phys_addr: 0x%llx\n",
(unsigned long long) hw->cmd_desc_phys_addr);
if (addr == NULL) {
@ -308,11 +480,11 @@ int netxen_nic_hw_resources(struct netxen_adapter *adapter)
/* Window = 1 */
writel(lower32(adapter->ctx_desc_phys_addr),
NETXEN_CRB_NORMALIZE(adapter, CRB_CTX_ADDR_REG_LO));
NETXEN_CRB_NORMALIZE(adapter, CRB_CTX_ADDR_REG_LO(func_id)));
writel(upper32(adapter->ctx_desc_phys_addr),
NETXEN_CRB_NORMALIZE(adapter, CRB_CTX_ADDR_REG_HI));
writel(NETXEN_CTX_SIGNATURE,
NETXEN_CRB_NORMALIZE(adapter, CRB_CTX_SIGNATURE_REG));
NETXEN_CRB_NORMALIZE(adapter, CRB_CTX_ADDR_REG_HI(func_id)));
writel(NETXEN_CTX_SIGNATURE | func_id,
NETXEN_CRB_NORMALIZE(adapter, CRB_CTX_SIGNATURE_REG(func_id)));
return err;
}
@ -339,10 +511,6 @@ void netxen_free_hw_resources(struct netxen_adapter *adapter)
adapter->ahw.cmd_desc_phys_addr);
adapter->ahw.cmd_desc_head = NULL;
}
/* Special handling: there are 2 ports on this board */
if (adapter->ahw.boardcfg.board_type == NETXEN_BRDTYPE_P2_SB31_10G_IMEZ) {
adapter->ahw.max_ports = 2;
}
for (ctx = 0; ctx < MAX_RCV_CTX; ++ctx) {
recv_ctx = &adapter->recv_ctx[ctx];
@ -385,7 +553,6 @@ void netxen_tso_check(struct netxen_adapter *adapter,
return;
}
}
adapter->stats.xmitcsummed++;
desc->tcp_hdr_offset = skb_transport_offset(skb);
desc->ip_hdr_offset = skb_network_offset(skb);
}
@ -475,7 +642,30 @@ void netxen_nic_pci_change_crbwindow(struct netxen_adapter *adapter, u32 wndw)
if (adapter->curr_window == wndw)
return;
switch(adapter->ahw.pci_func) {
case 0:
offset = PCI_OFFSET_SECOND_RANGE(adapter,
NETXEN_PCIX_PH_REG(PCIX_CRB_WINDOW));
break;
case 1:
offset = PCI_OFFSET_SECOND_RANGE(adapter,
NETXEN_PCIX_PH_REG(PCIX_CRB_WINDOW_F1));
break;
case 2:
offset = PCI_OFFSET_SECOND_RANGE(adapter,
NETXEN_PCIX_PH_REG(PCIX_CRB_WINDOW_F2));
break;
case 3:
offset = PCI_OFFSET_SECOND_RANGE(adapter,
NETXEN_PCIX_PH_REG(PCIX_CRB_WINDOW_F3));
break;
default:
printk(KERN_INFO "Changing the window for PCI function"
"%d\n", adapter->ahw.pci_func);
offset = PCI_OFFSET_SECOND_RANGE(adapter,
NETXEN_PCIX_PH_REG(PCIX_CRB_WINDOW));
break;
}
/*
* Move the CRB window.
* We need to write to the "direct access" region of PCI
@ -484,9 +674,6 @@ void netxen_nic_pci_change_crbwindow(struct netxen_adapter *adapter, u32 wndw)
* register address is received by PCI. The direct region bypasses
* the CRB bus.
*/
offset =
PCI_OFFSET_SECOND_RANGE(adapter,
NETXEN_PCIX_PH_REG(PCIX_CRB_WINDOW));
if (wndw & 0x1)
wndw = NETXEN_WINDOW_ONE;
@ -504,7 +691,10 @@ void netxen_nic_pci_change_crbwindow(struct netxen_adapter *adapter, u32 wndw)
count++;
}
adapter->curr_window = wndw;
if (wndw == NETXEN_WINDOW_ONE)
adapter->curr_window = 1;
else
adapter->curr_window = 0;
}
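(Illustrative, not part of the commit.) With multiple PCI functions, each function gets its own CRB window register (F0-F3 above, 0x20 apart), and netxen_nic_pci_change_crbwindow() now selects it with a switch on ahw.pci_func. Below is a table-driven, user-space equivalent of just that selection step, assuming the offsets defined in the header above.

#include <stdio.h>

/* Per-function CRB window registers, copied from the header above. */
#define PCIX_CRB_WINDOW_F0	(0x10210)
#define PCIX_CRB_WINDOW_F1	(0x10230)
#define PCIX_CRB_WINDOW_F2	(0x10250)
#define PCIX_CRB_WINDOW_F3	(0x10270)

/* Table-driven equivalent of the switch in netxen_nic_pci_change_crbwindow():
 * each PCI function moves its own CRB window, registers 0x20 apart. */
static unsigned long crb_window_reg(int pci_func)
{
	static const unsigned long regs[] = {
		PCIX_CRB_WINDOW_F0, PCIX_CRB_WINDOW_F1,
		PCIX_CRB_WINDOW_F2, PCIX_CRB_WINDOW_F3,
	};

	if (pci_func < 0 || pci_func > 3)
		return PCIX_CRB_WINDOW_F0;	/* mirrors the driver's default case */
	return regs[pci_func];
}

int main(void)
{
	int func;

	for (func = 0; func < 4; func++)
		printf("pci func %d -> CRB window register 0x%lx\n",
		       func, crb_window_reg(func));
	return 0;
}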
void netxen_load_firmware(struct netxen_adapter *adapter)
@ -749,6 +939,17 @@ netxen_nic_pci_set_window(struct netxen_adapter *adapter,
return addr;
}
int
netxen_nic_erase_pxe(struct netxen_adapter *adapter)
{
if (netxen_rom_fast_write(adapter, PXE_START, 0) == -1) {
printk(KERN_ERR "%s: erase pxe failed\n",
netxen_nic_driver_name);
return -1;
}
return 0;
}
int netxen_nic_get_board_info(struct netxen_adapter *adapter)
{
int rv = 0;
@ -810,43 +1011,29 @@ int netxen_nic_get_board_info(struct netxen_adapter *adapter)
/* NIU access sections */
int netxen_nic_set_mtu_gb(struct netxen_port *port, int new_mtu)
int netxen_nic_set_mtu_gb(struct netxen_adapter *adapter, int new_mtu)
{
struct netxen_adapter *adapter = port->adapter;
netxen_nic_write_w0(adapter,
NETXEN_NIU_GB_MAX_FRAME_SIZE(port->portnum),
new_mtu);
NETXEN_NIU_GB_MAX_FRAME_SIZE(
physical_port[adapter->portnum]), new_mtu);
return 0;
}
int netxen_nic_set_mtu_xgb(struct netxen_port *port, int new_mtu)
int netxen_nic_set_mtu_xgb(struct netxen_adapter *adapter, int new_mtu)
{
struct netxen_adapter *adapter = port->adapter;
new_mtu += NETXEN_NIU_HDRSIZE + NETXEN_NIU_TLRSIZE;
if (port->portnum == 0)
netxen_nic_write_w0(adapter, NETXEN_NIU_XGE_MAX_FRAME_SIZE, new_mtu);
else if (port->portnum == 1)
netxen_nic_write_w0(adapter, NETXEN_NIU_XG1_MAX_FRAME_SIZE, new_mtu);
if (physical_port[adapter->portnum] == 0)
netxen_nic_write_w0(adapter, NETXEN_NIU_XGE_MAX_FRAME_SIZE,
new_mtu);
else
netxen_nic_write_w0(adapter, NETXEN_NIU_XG1_MAX_FRAME_SIZE,
new_mtu);
return 0;
}
void netxen_nic_init_niu_gb(struct netxen_adapter *adapter)
{
int portno;
for (portno = 0; portno < NETXEN_NIU_MAX_GBE_PORTS; portno++)
netxen_niu_gbe_init_port(adapter, portno);
}
void netxen_nic_stop_all_ports(struct netxen_adapter *adapter)
{
int port_nr;
struct netxen_port *port;
for (port_nr = 0; port_nr < adapter->ahw.max_ports; port_nr++) {
port = adapter->port[port_nr];
if (adapter->stop_port)
adapter->stop_port(adapter, port->portnum);
}
netxen_niu_gbe_init_port(adapter, physical_port[adapter->portnum]);
}
void
@ -865,9 +1052,8 @@ netxen_crb_writelit_adapter(struct netxen_adapter *adapter, unsigned long off,
}
}
void netxen_nic_set_link_parameters(struct netxen_port *port)
void netxen_nic_set_link_parameters(struct netxen_adapter *adapter)
{
struct netxen_adapter *adapter = port->adapter;
__u32 status;
__u32 autoneg;
__u32 mode;
@ -876,47 +1062,47 @@ void netxen_nic_set_link_parameters(struct netxen_port *port)
if (netxen_get_niu_enable_ge(mode)) { /* Gb 10/100/1000 Mbps mode */
if (adapter->phy_read
&& adapter->
phy_read(adapter, port->portnum,
phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_PHY_STATUS,
&status) == 0) {
if (netxen_get_phy_link(status)) {
switch (netxen_get_phy_speed(status)) {
case 0:
port->link_speed = SPEED_10;
adapter->link_speed = SPEED_10;
break;
case 1:
port->link_speed = SPEED_100;
adapter->link_speed = SPEED_100;
break;
case 2:
port->link_speed = SPEED_1000;
adapter->link_speed = SPEED_1000;
break;
default:
port->link_speed = -1;
adapter->link_speed = -1;
break;
}
switch (netxen_get_phy_duplex(status)) {
case 0:
port->link_duplex = DUPLEX_HALF;
adapter->link_duplex = DUPLEX_HALF;
break;
case 1:
port->link_duplex = DUPLEX_FULL;
adapter->link_duplex = DUPLEX_FULL;
break;
default:
port->link_duplex = -1;
adapter->link_duplex = -1;
break;
}
if (adapter->phy_read
&& adapter->
phy_read(adapter, port->portnum,
phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_AUTONEG,
&autoneg) != 0)
port->link_autoneg = autoneg;
adapter->link_autoneg = autoneg;
} else
goto link_down;
} else {
link_down:
port->link_speed = -1;
port->link_duplex = -1;
adapter->link_speed = -1;
adapter->link_duplex = -1;
}
}
}
@ -930,7 +1116,7 @@ void netxen_nic_flash_print(struct netxen_adapter *adapter)
char brd_name[NETXEN_MAX_SHORT_NAME];
struct netxen_new_user_info user_info;
int i, addr = USER_START;
u32 *ptr32;
__le32 *ptr32;
struct netxen_board_info *board_info = &(adapter->ahw.boardcfg);
if (board_info->magic != NETXEN_BDINFO_MAGIC) {
@ -956,7 +1142,6 @@ void netxen_nic_flash_print(struct netxen_adapter *adapter)
netxen_nic_driver_name);
return;
}
*ptr32 = le32_to_cpu(*ptr32);
ptr32++;
addr += sizeof(u32);
}

View File

@ -6,12 +6,12 @@
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston,
@ -87,7 +87,7 @@ struct netxen_adapter;
*(u32 *)Y = readl((void __iomem*) addr);
struct netxen_port;
void netxen_nic_set_link_parameters(struct netxen_port *port);
void netxen_nic_set_link_parameters(struct netxen_adapter *adapter);
void netxen_nic_flash_print(struct netxen_adapter *adapter);
int netxen_nic_hw_write_wx(struct netxen_adapter *adapter, u64 off,
void *data, int len);
@ -220,6 +220,69 @@ typedef enum {
_netxen_crb_get_bit(config_word, 1)
#define netxen_get_gb_mii_mgmt_notvalid(config_word) \
_netxen_crb_get_bit(config_word, 2)
/*
* NIU XG Pause Ctl Register
*
* Bit 0 : xg0_mask => 1:disable tx pause frames
* Bit 1 : xg0_request => 1:request single pause frame
* Bit 2 : xg0_on_off => 1:request is pause on, 0:off
* Bit 3 : xg1_mask => 1:disable tx pause frames
* Bit 4 : xg1_request => 1:request single pause frame
* Bit 5 : xg1_on_off => 1:request is pause on, 0:off
*/
#define netxen_xg_set_xg0_mask(config_word) \
((config_word) |= 1 << 0)
#define netxen_xg_set_xg1_mask(config_word) \
((config_word) |= 1 << 3)
#define netxen_xg_get_xg0_mask(config_word) \
_netxen_crb_get_bit((config_word), 0)
#define netxen_xg_get_xg1_mask(config_word) \
_netxen_crb_get_bit((config_word), 3)
#define netxen_xg_unset_xg0_mask(config_word) \
((config_word) &= ~(1 << 0))
#define netxen_xg_unset_xg1_mask(config_word) \
((config_word) &= ~(1 << 3))
/*
* NIU GB Pause Ctl Register
*
* Bit 0 : gb0_mask => 1:disable tx pause frames (GbE port 0)
* Bit 2 : gb1_mask => 1:disable tx pause frames (GbE port 1)
* Bit 4 : gb2_mask => 1:disable tx pause frames (GbE port 2)
* Bit 6 : gb3_mask => 1:disable tx pause frames (GbE port 3)
*/
#define netxen_gb_set_gb0_mask(config_word) \
((config_word) |= 1 << 0)
#define netxen_gb_set_gb1_mask(config_word) \
((config_word) |= 1 << 2)
#define netxen_gb_set_gb2_mask(config_word) \
((config_word) |= 1 << 4)
#define netxen_gb_set_gb3_mask(config_word) \
((config_word) |= 1 << 6)
#define netxen_gb_get_gb0_mask(config_word) \
_netxen_crb_get_bit((config_word), 0)
#define netxen_gb_get_gb1_mask(config_word) \
_netxen_crb_get_bit((config_word), 2)
#define netxen_gb_get_gb2_mask(config_word) \
_netxen_crb_get_bit((config_word), 4)
#define netxen_gb_get_gb3_mask(config_word) \
_netxen_crb_get_bit((config_word), 6)
#define netxen_gb_unset_gb0_mask(config_word) \
((config_word) &= ~(1 << 0))
#define netxen_gb_unset_gb1_mask(config_word) \
((config_word) &= ~(1 << 2))
#define netxen_gb_unset_gb2_mask(config_word) \
((config_word) &= ~(1 << 4))
#define netxen_gb_unset_gb3_mask(config_word) \
((config_word) &= ~(1 << 6))
/*
* PHY-Specific MII control/status registers.
@ -452,21 +515,21 @@ typedef enum {
((config) |= (((val) & 0x0f) << 28))
/* Set promiscuous mode for a GbE interface */
int netxen_niu_set_promiscuous_mode(struct netxen_adapter *adapter, int port,
int netxen_niu_set_promiscuous_mode(struct netxen_adapter *adapter,
netxen_niu_prom_mode_t mode);
int netxen_niu_xg_set_promiscuous_mode(struct netxen_adapter *adapter,
int port, netxen_niu_prom_mode_t mode);
netxen_niu_prom_mode_t mode);
/* get/set the MAC address for a given MAC */
int netxen_niu_macaddr_get(struct netxen_adapter *adapter, int port,
int netxen_niu_macaddr_get(struct netxen_adapter *adapter,
netxen_ethernet_macaddr_t * addr);
int netxen_niu_macaddr_set(struct netxen_port *port,
int netxen_niu_macaddr_set(struct netxen_adapter *adapter,
netxen_ethernet_macaddr_t addr);
/* XG versons */
int netxen_niu_xg_macaddr_get(struct netxen_adapter *adapter, int port,
int netxen_niu_xg_macaddr_get(struct netxen_adapter *adapter,
netxen_ethernet_macaddr_t * addr);
int netxen_niu_xg_macaddr_set(struct netxen_port *port,
int netxen_niu_xg_macaddr_set(struct netxen_adapter *adapter,
netxen_ethernet_macaddr_t addr);
/* Generic enable for GbE ports. Will detect the speed of the link. */
@ -475,8 +538,8 @@ int netxen_niu_gbe_init_port(struct netxen_adapter *adapter, int port);
int netxen_niu_xg_init_port(struct netxen_adapter *adapter, int port);
/* Disable a GbE interface */
int netxen_niu_disable_gbe_port(struct netxen_adapter *adapter, int port);
int netxen_niu_disable_gbe_port(struct netxen_adapter *adapter);
int netxen_niu_disable_xg_port(struct netxen_adapter *adapter, int port);
int netxen_niu_disable_xg_port(struct netxen_adapter *adapter);
#endif /* __NETXEN_NIC_HW_H_ */

View File

@ -139,7 +139,7 @@ int netxen_init_firmware(struct netxen_adapter *adapter)
return err;
}
/* Window 1 call */
writel(MPORT_SINGLE_FUNCTION_MODE,
writel(MPORT_MULTI_FUNCTION_MODE,
NETXEN_CRB_NORMALIZE(adapter, CRB_MPORT_MODE));
writel(PHAN_INITIALIZE_ACK,
NETXEN_CRB_NORMALIZE(adapter, CRB_CMDPEG_STATE));
@ -226,7 +226,6 @@ void netxen_initialize_adapter_ops(struct netxen_adapter *adapter)
adapter->unset_promisc = netxen_niu_set_promiscuous_mode;
adapter->phy_read = netxen_niu_gbe_phy_read;
adapter->phy_write = netxen_niu_gbe_phy_write;
adapter->init_port = netxen_niu_gbe_init_port;
adapter->init_niu = netxen_nic_init_niu_gb;
adapter->stop_port = netxen_niu_disable_gbe_port;
break;
@ -277,8 +276,8 @@ u32 netxen_decode_crb_addr(u32 addr)
return (pci_base + offset);
}
static long rom_max_timeout = 10000;
static long rom_lock_timeout = 1000000;
static long rom_max_timeout = 100;
static long rom_lock_timeout = 10000;
static long rom_write_timeout = 700;
static inline int rom_lock(struct netxen_adapter *adapter)
@ -438,9 +437,9 @@ do_rom_fast_read_words(struct netxen_adapter *adapter, int addr,
for (addridx = addr; addridx < (addr + size); addridx += 4) {
ret = do_rom_fast_read(adapter, addridx, (int *)bytes);
*(int *)bytes = cpu_to_le32(*(int *)bytes);
if (ret != 0)
break;
*(int *)bytes = cpu_to_le32(*(int *)bytes);
bytes += 4;
}
@ -499,7 +498,6 @@ static inline int do_rom_fast_write_words(struct netxen_adapter *adapter,
int data;
data = le32_to_cpu((*(u32*)bytes));
ret = do_rom_fast_write(adapter, addridx, data);
if (ret < 0)
return ret;
@ -953,7 +951,8 @@ void netxen_phantom_init(struct netxen_adapter *adapter, int pegtune_val)
if (!pegtune_val) {
val = readl(NETXEN_CRB_NORMALIZE(adapter, CRB_CMDPEG_STATE));
while (val != PHAN_INITIALIZE_COMPLETE && loops < 200000) {
while (val != PHAN_INITIALIZE_COMPLETE &&
val != PHAN_INITIALIZE_ACK && loops < 200000) {
udelay(100);
schedule();
val =
@ -990,9 +989,7 @@ int netxen_nic_rx_has_work(struct netxen_adapter *adapter)
static inline int netxen_nic_check_temp(struct netxen_adapter *adapter)
{
int port_num;
struct netxen_port *port;
struct net_device *netdev;
struct net_device *netdev = adapter->netdev;
uint32_t temp, temp_state, temp_val;
int rv = 0;
@ -1006,14 +1003,9 @@ static inline int netxen_nic_check_temp(struct netxen_adapter *adapter)
"%s: Device temperature %d degrees C exceeds"
" maximum allowed. Hardware has been shut down.\n",
netxen_nic_driver_name, temp_val);
for (port_num = 0; port_num < adapter->ahw.max_ports;
port_num++) {
port = adapter->port[port_num];
netdev = port->netdev;
netif_carrier_off(netdev);
netif_stop_queue(netdev);
}
netif_carrier_off(netdev);
netif_stop_queue(netdev);
rv = 1;
} else if (temp_state == NX_TEMP_WARN) {
if (adapter->temp == NX_TEMP_NORMAL) {
@ -1037,29 +1029,23 @@ static inline int netxen_nic_check_temp(struct netxen_adapter *adapter)
void netxen_watchdog_task(struct work_struct *work)
{
int port_num;
struct netxen_port *port;
struct net_device *netdev;
struct netxen_adapter *adapter =
container_of(work, struct netxen_adapter, watchdog_task);
if (netxen_nic_check_temp(adapter))
if ((adapter->portnum == 0) && netxen_nic_check_temp(adapter))
return;
for (port_num = 0; port_num < adapter->ahw.max_ports; port_num++) {
port = adapter->port[port_num];
netdev = port->netdev;
if ((netif_running(netdev)) && !netif_carrier_ok(netdev)) {
printk(KERN_INFO "%s port %d, %s carrier is now ok\n",
netxen_nic_driver_name, port_num, netdev->name);
netif_carrier_on(netdev);
}
if (netif_queue_stopped(netdev))
netif_wake_queue(netdev);
netdev = adapter->netdev;
if ((netif_running(netdev)) && !netif_carrier_ok(netdev)) {
printk(KERN_INFO "%s port %d, %s carrier is now ok\n",
netxen_nic_driver_name, adapter->portnum, netdev->name);
netif_carrier_on(netdev);
}
if (netif_queue_stopped(netdev))
netif_wake_queue(netdev);
if (adapter->handle_phy_intr)
adapter->handle_phy_intr(adapter);
mod_timer(&adapter->watchdog_timer, jiffies + 2 * HZ);
@ -1074,9 +1060,8 @@ void
netxen_process_rcv(struct netxen_adapter *adapter, int ctxid,
struct status_desc *desc)
{
struct netxen_port *port = adapter->port[netxen_get_sts_port(desc)];
struct pci_dev *pdev = port->pdev;
struct net_device *netdev = port->netdev;
struct pci_dev *pdev = adapter->pdev;
struct net_device *netdev = adapter->netdev;
int index = netxen_get_sts_refhandle(desc);
struct netxen_recv_context *recv_ctx = &(adapter->recv_ctx[ctxid]);
struct netxen_rx_buffer *buffer;
@ -1126,7 +1111,7 @@ netxen_process_rcv(struct netxen_adapter *adapter, int ctxid,
skb = (struct sk_buff *)buffer->skb;
if (likely(netxen_get_sts_status(desc) == STATUS_CKSUM_OK)) {
port->stats.csummed++;
adapter->stats.csummed++;
skb->ip_summed = CHECKSUM_UNNECESSARY;
}
if (desc_ctx == RCV_DESC_LRO_CTXID) {
@ -1146,27 +1131,27 @@ netxen_process_rcv(struct netxen_adapter *adapter, int ctxid,
*/
switch (ret) {
case NET_RX_SUCCESS:
port->stats.uphappy++;
adapter->stats.uphappy++;
break;
case NET_RX_CN_LOW:
port->stats.uplcong++;
adapter->stats.uplcong++;
break;
case NET_RX_CN_MOD:
port->stats.upmcong++;
adapter->stats.upmcong++;
break;
case NET_RX_CN_HIGH:
port->stats.uphcong++;
adapter->stats.uphcong++;
break;
case NET_RX_DROP:
port->stats.updropped++;
adapter->stats.updropped++;
break;
default:
port->stats.updunno++;
adapter->stats.updunno++;
break;
}
@ -1178,14 +1163,13 @@ netxen_process_rcv(struct netxen_adapter *adapter, int ctxid,
/*
* We just consumed one buffer so post a buffer.
*/
adapter->stats.post_called++;
buffer->skb = NULL;
buffer->state = NETXEN_BUFFER_FREE;
buffer->lro_current_frags = 0;
buffer->lro_expected_frags = 0;
port->stats.no_rcv++;
port->stats.rxbytes += length;
adapter->stats.no_rcv++;
adapter->stats.rxbytes += length;
}
/* Process Receive status ring */
@ -1226,7 +1210,6 @@ u32 netxen_process_rcv_ring(struct netxen_adapter *adapter, int ctxid, int max)
/* update the consumer index in phantom */
if (count) {
adapter->stats.process_rcv++;
recv_ctx->status_rx_consumer = consumer;
recv_ctx->status_rx_producer = producer;
@ -1249,13 +1232,10 @@ int netxen_process_cmd_ring(unsigned long data)
int count1 = 0;
int count2 = 0;
struct netxen_cmd_buffer *buffer;
struct netxen_port *port; /* port #1 */
struct netxen_port *nport;
struct pci_dev *pdev;
struct netxen_skb_frag *frag;
u32 i;
struct sk_buff *skb = NULL;
int p;
int done;
spin_lock(&adapter->tx_lock);
@ -1276,7 +1256,6 @@ int netxen_process_cmd_ring(unsigned long data)
}
adapter->proc_cmd_buf_counter++;
adapter->stats.process_xmit++;
/*
* Not needed - does not seem to be used anywhere.
* adapter->cmd_consumer = consumer;
@ -1285,8 +1264,7 @@ int netxen_process_cmd_ring(unsigned long data)
while ((last_consumer != consumer) && (count1 < MAX_STATUS_HANDLE)) {
buffer = &adapter->cmd_buf_arr[last_consumer];
port = adapter->port[buffer->port];
pdev = port->pdev;
pdev = adapter->pdev;
frag = &buffer->frag_array[0];
skb = buffer->skb;
if (skb && (cmpxchg(&buffer->skb, skb, 0) == skb)) {
@ -1299,24 +1277,23 @@ int netxen_process_cmd_ring(unsigned long data)
PCI_DMA_TODEVICE);
}
port->stats.skbfreed++;
adapter->stats.skbfreed++;
dev_kfree_skb_any(skb);
skb = NULL;
} else if (adapter->proc_cmd_buf_counter == 1) {
port->stats.txnullskb++;
adapter->stats.txnullskb++;
}
if (unlikely(netif_queue_stopped(port->netdev)
&& netif_carrier_ok(port->netdev))
&& ((jiffies - port->netdev->trans_start) >
port->netdev->watchdog_timeo)) {
SCHEDULE_WORK(&port->tx_timeout_task);
if (unlikely(netif_queue_stopped(adapter->netdev)
&& netif_carrier_ok(adapter->netdev))
&& ((jiffies - adapter->netdev->trans_start) >
adapter->netdev->watchdog_timeo)) {
SCHEDULE_WORK(&adapter->tx_timeout_task);
}
last_consumer = get_next_index(last_consumer,
adapter->max_tx_desc_count);
count1++;
}
adapter->stats.noxmitdone += count1;
count2 = 0;
spin_lock(&adapter->tx_lock);
@ -1336,13 +1313,10 @@ int netxen_process_cmd_ring(unsigned long data)
}
}
if (count1 || count2) {
for (p = 0; p < adapter->ahw.max_ports; p++) {
nport = adapter->port[p];
if (netif_queue_stopped(nport->netdev)
&& (nport->flags & NETXEN_NETDEV_STATUS)) {
netif_wake_queue(nport->netdev);
nport->flags &= ~NETXEN_NETDEV_STATUS;
}
if (netif_queue_stopped(adapter->netdev)
&& (adapter->flags & NETXEN_NETDEV_STATUS)) {
netif_wake_queue(adapter->netdev);
adapter->flags &= ~NETXEN_NETDEV_STATUS;
}
}
/*
@ -1388,7 +1362,6 @@ void netxen_post_rx_buffers(struct netxen_adapter *adapter, u32 ctx, u32 ringid)
netxen_ctx_msg msg = 0;
dma_addr_t dma;
adapter->stats.post_called++;
rcv_desc = &recv_ctx->rcv_desc[ringid];
producer = rcv_desc->producer;
@ -1441,8 +1414,6 @@ void netxen_post_rx_buffers(struct netxen_adapter *adapter, u32 ctx, u32 ringid)
if (count) {
rcv_desc->begin_alloc = index;
rcv_desc->rcv_pending += count;
adapter->stats.lastposted = count;
adapter->stats.posted += count;
rcv_desc->producer = producer;
if (rcv_desc->rcv_free >= 32) {
rcv_desc->rcv_free = 0;
@ -1450,7 +1421,8 @@ void netxen_post_rx_buffers(struct netxen_adapter *adapter, u32 ctx, u32 ringid)
writel((producer - 1) &
(rcv_desc->max_rx_desc_count - 1),
NETXEN_CRB_NORMALIZE(adapter,
recv_crb_registers[0].
recv_crb_registers[
adapter->portnum].
rcv_desc_crb[ringid].
crb_rcv_producer_offset));
/*
@ -1463,7 +1435,7 @@ void netxen_post_rx_buffers(struct netxen_adapter *adapter, u32 ctx, u32 ringid)
((producer -
1) & (rcv_desc->
max_rx_desc_count - 1)));
netxen_set_msg_ctxid(msg, 0);
netxen_set_msg_ctxid(msg, adapter->portnum);
netxen_set_msg_opcode(msg, NETXEN_RCV_PRODUCER(ringid));
writel(msg,
DB_NORMALIZE(adapter,
@ -1485,7 +1457,6 @@ void netxen_post_rx_buffers_nodb(struct netxen_adapter *adapter, uint32_t ctx,
int count = 0;
int index = 0;
adapter->stats.post_called++;
rcv_desc = &recv_ctx->rcv_desc[ringid];
producer = rcv_desc->producer;
@ -1532,8 +1503,6 @@ void netxen_post_rx_buffers_nodb(struct netxen_adapter *adapter, uint32_t ctx,
if (count) {
rcv_desc->begin_alloc = index;
rcv_desc->rcv_pending += count;
adapter->stats.lastposted = count;
adapter->stats.posted += count;
rcv_desc->producer = producer;
if (rcv_desc->rcv_free >= 32) {
rcv_desc->rcv_free = 0;
@ -1541,7 +1510,8 @@ void netxen_post_rx_buffers_nodb(struct netxen_adapter *adapter, uint32_t ctx,
writel((producer - 1) &
(rcv_desc->max_rx_desc_count - 1),
NETXEN_CRB_NORMALIZE(adapter,
recv_crb_registers[0].
recv_crb_registers[
adapter->portnum].
rcv_desc_crb[ringid].
crb_rcv_producer_offset));
wmb();
@ -1562,13 +1532,7 @@ int netxen_nic_tx_has_work(struct netxen_adapter *adapter)
void netxen_nic_clear_stats(struct netxen_adapter *adapter)
{
struct netxen_port *port;
int port_num;
memset(&adapter->stats, 0, sizeof(adapter->stats));
for (port_num = 0; port_num < adapter->ahw.max_ports; port_num++) {
port = adapter->port[port_num];
memset(&port->stats, 0, sizeof(port->stats));
}
return;
}

View File

@ -6,12 +6,12 @@
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place - Suite 330, Boston,
@ -40,35 +40,35 @@
*/
struct net_device_stats *netxen_nic_get_stats(struct net_device *netdev)
{
struct netxen_port *port = netdev_priv(netdev);
struct net_device_stats *stats = &port->net_stats;
struct netxen_adapter *adapter = netdev_priv(netdev);
struct net_device_stats *stats = &adapter->net_stats;
memset(stats, 0, sizeof(*stats));
/* total packets received */
stats->rx_packets = port->stats.no_rcv;
stats->rx_packets = adapter->stats.no_rcv;
/* total packets transmitted */
stats->tx_packets = port->stats.xmitedframes + port->stats.xmitfinished;
stats->tx_packets = adapter->stats.xmitedframes +
adapter->stats.xmitfinished;
/* total bytes received */
stats->rx_bytes = port->stats.rxbytes;
stats->rx_bytes = adapter->stats.rxbytes;
/* total bytes transmitted */
stats->tx_bytes = port->stats.txbytes;
stats->tx_bytes = adapter->stats.txbytes;
/* bad packets received */
stats->rx_errors = port->stats.rcvdbadskb;
stats->rx_errors = adapter->stats.rcvdbadskb;
/* packet transmit problems */
stats->tx_errors = port->stats.nocmddescriptor;
stats->tx_errors = adapter->stats.nocmddescriptor;
/* no space in linux buffers */
stats->rx_dropped = port->stats.updropped;
stats->rx_dropped = adapter->stats.updropped;
/* no space available in linux */
stats->tx_dropped = port->stats.txdropped;
stats->tx_dropped = adapter->stats.txdropped;
return stats;
}
void netxen_indicate_link_status(struct netxen_adapter *adapter, u32 portno,
u32 link)
void netxen_indicate_link_status(struct netxen_adapter *adapter, u32 link)
{
struct net_device *netdev = (adapter->port[portno])->netdev;
struct net_device *netdev = adapter->netdev;
if (link)
netif_carrier_on(netdev);
@ -76,15 +76,13 @@ void netxen_indicate_link_status(struct netxen_adapter *adapter, u32 portno,
netif_carrier_off(netdev);
}
void netxen_handle_port_int(struct netxen_adapter *adapter, u32 portno,
u32 enable)
void netxen_handle_port_int(struct netxen_adapter *adapter, u32 enable)
{
__u32 int_src;
struct netxen_port *port;
/* This should clear the interrupt source */
if (adapter->phy_read)
adapter->phy_read(adapter, portno,
adapter->phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_INT_STATUS,
&int_src);
if (int_src == 0) {
@ -92,9 +90,7 @@ void netxen_handle_port_int(struct netxen_adapter *adapter, u32 portno,
return;
}
if (adapter->disable_phy_interrupts)
adapter->disable_phy_interrupts(adapter, portno);
port = adapter->port[portno];
adapter->disable_phy_interrupts(adapter);
if (netxen_get_phy_int_jabber(int_src))
DPRINTK(INFO, "Jabber interrupt \n");
@ -115,64 +111,57 @@ void netxen_handle_port_int(struct netxen_adapter *adapter, u32 portno,
DPRINTK(INFO, "SPEED CHANGED OR LINK STATUS CHANGED \n");
if (adapter->phy_read
&& adapter->phy_read(adapter, portno,
&& adapter->phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_PHY_STATUS,
&status) == 0) {
if (netxen_get_phy_int_link_status_changed(int_src)) {
if (netxen_get_phy_link(status)) {
netxen_niu_gbe_init_port(adapter,
portno);
printk("%s: %s Link UP\n",
printk(KERN_INFO "%s: %s Link UP\n",
netxen_nic_driver_name,
port->netdev->name);
adapter->netdev->name);
} else {
printk("%s: %s Link DOWN\n",
printk(KERN_INFO "%s: %s Link DOWN\n",
netxen_nic_driver_name,
port->netdev->name);
adapter->netdev->name);
}
netxen_indicate_link_status(adapter, portno,
netxen_indicate_link_status(adapter,
netxen_get_phy_link
(status));
}
}
}
if (adapter->enable_phy_interrupts)
adapter->enable_phy_interrupts(adapter, portno);
adapter->enable_phy_interrupts(adapter);
}
void netxen_nic_isr_other(struct netxen_adapter *adapter)
{
u32 portno;
int portno = adapter->portnum;
u32 val, linkup, qg_linksup;
/* verify the offset */
val = readl(NETXEN_CRB_NORMALIZE(adapter, CRB_XG_STATE));
val = val >> physical_port[adapter->portnum];
if (val == adapter->ahw.qg_linksup)
return;
qg_linksup = adapter->ahw.qg_linksup;
adapter->ahw.qg_linksup = val;
DPRINTK(INFO, "link update 0x%08x\n", val);
for (portno = 0; portno < NETXEN_NIU_MAX_GBE_PORTS; portno++) {
linkup = val & 1;
if (linkup != (qg_linksup & 1)) {
printk(KERN_INFO "%s: %s PORT %d link %s\n",
adapter->port[portno]->netdev->name,
netxen_nic_driver_name, portno,
((linkup == 0) ? "down" : "up"));
netxen_indicate_link_status(adapter, portno, linkup);
if (linkup)
netxen_nic_set_link_parameters(adapter->
port[portno]);
}
val = val >> 1;
qg_linksup = qg_linksup >> 1;
linkup = val & 1;
if (linkup != (qg_linksup & 1)) {
printk(KERN_INFO "%s: %s PORT %d link %s\n",
adapter->netdev->name,
netxen_nic_driver_name, portno,
((linkup == 0) ? "down" : "up"));
netxen_indicate_link_status(adapter, linkup);
if (linkup)
netxen_nic_set_link_parameters(adapter);
}
adapter->stats.otherints++;
}
void netxen_nic_gbe_handle_phy_intr(struct netxen_adapter *adapter)
@ -182,26 +171,28 @@ void netxen_nic_gbe_handle_phy_intr(struct netxen_adapter *adapter)
void netxen_nic_xgbe_handle_phy_intr(struct netxen_adapter *adapter)
{
struct net_device *netdev = adapter->port[0]->netdev;
u32 val;
struct net_device *netdev = adapter->netdev;
u32 val, val1;
/* WINDOW = 1 */
val = readl(NETXEN_CRB_NORMALIZE(adapter, CRB_XG_STATE));
val >>= (physical_port[adapter->portnum] * 8);
val1 = val & 0xff;
if (adapter->ahw.xg_linkup == 1 && val != XG_LINK_UP) {
if (adapter->ahw.xg_linkup == 1 && val1 != XG_LINK_UP) {
printk(KERN_INFO "%s: %s NIC Link is down\n",
netxen_nic_driver_name, netdev->name);
adapter->ahw.xg_linkup = 0;
/* read twice to clear sticky bits */
/* WINDOW = 0 */
netxen_nic_read_w0(adapter, NETXEN_NIU_XG_STATUS, &val);
netxen_nic_read_w0(adapter, NETXEN_NIU_XG_STATUS, &val);
netxen_nic_read_w0(adapter, NETXEN_NIU_XG_STATUS, &val1);
netxen_nic_read_w0(adapter, NETXEN_NIU_XG_STATUS, &val1);
if ((val & 0xffb) != 0xffb) {
printk(KERN_INFO "%s ISR: Sync/Align BAD: 0x%08x\n",
netxen_nic_driver_name, val);
netxen_nic_driver_name, val1);
}
} else if (adapter->ahw.xg_linkup == 0 && val == XG_LINK_UP) {
} else if (adapter->ahw.xg_linkup == 0 && val1 == XG_LINK_UP) {
printk(KERN_INFO "%s: %s NIC Link is up\n",
netxen_nic_driver_name, netdev->name);
adapter->ahw.xg_linkup = 1;

File diff suppressed because it is too large

View File

@ -88,12 +88,13 @@ static inline int phy_unlock(struct netxen_adapter *adapter)
* -1 on error
*
*/
int netxen_niu_gbe_phy_read(struct netxen_adapter *adapter, long phy,
long reg, __u32 * readval)
int netxen_niu_gbe_phy_read(struct netxen_adapter *adapter, long reg,
__u32 * readval)
{
long timeout = 0;
long result = 0;
long restore = 0;
long phy = physical_port[adapter->portnum];
__u32 address;
__u32 command;
__u32 status;
@ -183,12 +184,13 @@ int netxen_niu_gbe_phy_read(struct netxen_adapter *adapter, long phy,
* -1 on error
*
*/
int netxen_niu_gbe_phy_write(struct netxen_adapter *adapter,
long phy, long reg, __u32 val)
int netxen_niu_gbe_phy_write(struct netxen_adapter *adapter, long reg,
__u32 val)
{
long timeout = 0;
long result = 0;
long restore = 0;
long phy = physical_port[adapter->portnum];
__u32 address;
__u32 command;
__u32 status;
@ -258,15 +260,13 @@ int netxen_niu_gbe_phy_write(struct netxen_adapter *adapter,
return result;
}
int netxen_niu_xgbe_enable_phy_interrupts(struct netxen_adapter *adapter,
int port)
int netxen_niu_xgbe_enable_phy_interrupts(struct netxen_adapter *adapter)
{
netxen_crb_writelit_adapter(adapter, NETXEN_NIU_INT_MASK, 0x3f);
return 0;
}
int netxen_niu_gbe_enable_phy_interrupts(struct netxen_adapter *adapter,
int port)
int netxen_niu_gbe_enable_phy_interrupts(struct netxen_adapter *adapter)
{
int result = 0;
__u32 enable = 0;
@ -275,7 +275,7 @@ int netxen_niu_gbe_enable_phy_interrupts(struct netxen_adapter *adapter,
netxen_set_phy_int_speed_changed(enable);
if (0 !=
netxen_niu_gbe_phy_write(adapter, port,
netxen_niu_gbe_phy_write(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_INT_ENABLE,
enable))
result = -EIO;
@ -283,38 +283,34 @@ int netxen_niu_gbe_enable_phy_interrupts(struct netxen_adapter *adapter,
return result;
}
int netxen_niu_xgbe_disable_phy_interrupts(struct netxen_adapter *adapter,
int port)
int netxen_niu_xgbe_disable_phy_interrupts(struct netxen_adapter *adapter)
{
netxen_crb_writelit_adapter(adapter, NETXEN_NIU_INT_MASK, 0x7f);
return 0;
}
int netxen_niu_gbe_disable_phy_interrupts(struct netxen_adapter *adapter,
int port)
int netxen_niu_gbe_disable_phy_interrupts(struct netxen_adapter *adapter)
{
int result = 0;
if (0 !=
netxen_niu_gbe_phy_write(adapter, port,
netxen_niu_gbe_phy_write(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_INT_ENABLE, 0))
result = -EIO;
return result;
}
int netxen_niu_xgbe_clear_phy_interrupts(struct netxen_adapter *adapter,
int port)
int netxen_niu_xgbe_clear_phy_interrupts(struct netxen_adapter *adapter)
{
netxen_crb_writelit_adapter(adapter, NETXEN_NIU_ACTIVE_INT, -1);
return 0;
}
int netxen_niu_gbe_clear_phy_interrupts(struct netxen_adapter *adapter,
int port)
int netxen_niu_gbe_clear_phy_interrupts(struct netxen_adapter *adapter)
{
int result = 0;
if (0 !=
netxen_niu_gbe_phy_write(adapter, port,
netxen_niu_gbe_phy_write(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_INT_STATUS,
-EIO))
result = -EIO;
@ -355,9 +351,9 @@ void netxen_niu_gbe_set_mii_mode(struct netxen_adapter *adapter,
0x5);
}
if (netxen_niu_gbe_enable_phy_interrupts(adapter, port))
if (netxen_niu_gbe_enable_phy_interrupts(adapter))
printk(KERN_ERR PFX "ERROR enabling PHY interrupts\n");
if (netxen_niu_gbe_clear_phy_interrupts(adapter, port))
if (netxen_niu_gbe_clear_phy_interrupts(adapter))
printk(KERN_ERR PFX "ERROR clearing PHY interrupts\n");
}
@ -393,9 +389,9 @@ void netxen_niu_gbe_set_gmii_mode(struct netxen_adapter *adapter,
0x5);
}
if (netxen_niu_gbe_enable_phy_interrupts(adapter, port))
if (netxen_niu_gbe_enable_phy_interrupts(adapter))
printk(KERN_ERR PFX "ERROR enabling PHY interrupts\n");
if (netxen_niu_gbe_clear_phy_interrupts(adapter, port))
if (netxen_niu_gbe_clear_phy_interrupts(adapter))
printk(KERN_ERR PFX "ERROR clearing PHY interrupts\n");
}
@ -404,11 +400,11 @@ int netxen_niu_gbe_init_port(struct netxen_adapter *adapter, int port)
int result = 0;
__u32 status;
if (adapter->disable_phy_interrupts)
adapter->disable_phy_interrupts(adapter, port);
adapter->disable_phy_interrupts(adapter);
mdelay(2);
if (0 ==
netxen_niu_gbe_phy_read(adapter, port,
netxen_niu_gbe_phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_PHY_STATUS,
&status)) {
if (netxen_get_phy_link(status)) {
@ -439,13 +435,13 @@ int netxen_niu_gbe_init_port(struct netxen_adapter *adapter, int port)
| NETXEN_GB_MAC_ENABLE_TX_RX
|
NETXEN_GB_MAC_PAUSED_FRMS);
if (netxen_niu_gbe_clear_phy_interrupts(adapter, port))
if (netxen_niu_gbe_clear_phy_interrupts(adapter))
printk(KERN_ERR PFX
"ERROR clearing PHY interrupts\n");
if (netxen_niu_gbe_enable_phy_interrupts(adapter, port))
if (netxen_niu_gbe_enable_phy_interrupts(adapter))
printk(KERN_ERR PFX
"ERROR enabling PHY interrupts\n");
if (netxen_niu_gbe_clear_phy_interrupts(adapter, port))
if (netxen_niu_gbe_clear_phy_interrupts(adapter))
printk(KERN_ERR PFX
"ERROR clearing PHY interrupts\n");
result = -1;
@ -458,26 +454,18 @@ int netxen_niu_gbe_init_port(struct netxen_adapter *adapter, int port)
int netxen_niu_xg_init_port(struct netxen_adapter *adapter, int port)
{
u32 reg = 0, ret = 0;
u32 reg;
u32 portnum = physical_port[adapter->portnum];
if (adapter->ahw.boardcfg.board_type == NETXEN_BRDTYPE_P2_SB31_10G_IMEZ) {
netxen_crb_writelit_adapter(adapter,
NETXEN_NIU_XG1_CONFIG_0, 0x5);
/* XXX hack for Mez cards: both ports in promisc mode */
netxen_nic_hw_read_wx(adapter,
NETXEN_NIU_XGE_CONFIG_1, &reg, 4);
reg = (reg | 0x2000UL);
netxen_crb_writelit_adapter(adapter,
NETXEN_NIU_XGE_CONFIG_1, reg);
reg = 0;
netxen_nic_hw_read_wx(adapter,
NETXEN_NIU_XG1_CONFIG_1, &reg, 4);
reg = (reg | 0x2000UL);
netxen_crb_writelit_adapter(adapter,
NETXEN_NIU_XG1_CONFIG_1, reg);
}
netxen_crb_writelit_adapter(adapter,
NETXEN_NIU_XGE_CONFIG_0+(0x10000*portnum), 0x5);
netxen_nic_hw_read_wx(adapter,
NETXEN_NIU_XGE_CONFIG_1+(0x10000*portnum), &reg, 4);
reg = (reg & ~0x2000UL);
netxen_crb_writelit_adapter(adapter,
NETXEN_NIU_XGE_CONFIG_1+(0x10000*portnum), reg);
return ret;
return 0;
}
/*
@ -498,7 +486,7 @@ int netxen_niu_gbe_handle_phy_interrupt(struct netxen_adapter *adapter,
* The read of the PHY INT status will clear the pending
* interrupt status
*/
if (netxen_niu_gbe_phy_read(adapter, port,
if (netxen_niu_gbe_phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_INT_STATUS,
&int_src) != 0)
result = -EINVAL;
@ -535,7 +523,7 @@ int netxen_niu_gbe_handle_phy_interrupt(struct netxen_adapter *adapter,
printk(KERN_INFO PFX
"speed_changed or link status changed");
if (netxen_niu_gbe_phy_read
(adapter, port,
(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_PHY_STATUS,
&status) == 0) {
if (netxen_get_phy_speed(status) == 2) {
@ -581,10 +569,11 @@ int netxen_niu_gbe_handle_phy_interrupt(struct netxen_adapter *adapter,
* Note that the passed-in value must already be in network byte order.
*/
int netxen_niu_macaddr_get(struct netxen_adapter *adapter,
int phy, netxen_ethernet_macaddr_t * addr)
netxen_ethernet_macaddr_t * addr)
{
u32 stationhigh;
u32 stationlow;
int phy = physical_port[adapter->portnum];
u8 val[8];
if (addr == NULL)
@ -610,13 +599,12 @@ int netxen_niu_macaddr_get(struct netxen_adapter *adapter,
* Set the station MAC address.
* Note that the passed-in value must already be in network byte order.
*/
int netxen_niu_macaddr_set(struct netxen_port *port,
int netxen_niu_macaddr_set(struct netxen_adapter *adapter,
netxen_ethernet_macaddr_t addr)
{
u8 temp[4];
u32 val;
struct netxen_adapter *adapter = port->adapter;
int phy = port->portnum;
int phy = physical_port[adapter->portnum];
unsigned char mac_addr[6];
int i;
@ -634,7 +622,7 @@ int netxen_niu_macaddr_set(struct netxen_port *port,
(adapter, NETXEN_NIU_GB_STATION_ADDR_0(phy), &val, 4))
return -2;
netxen_niu_macaddr_get(adapter, phy,
netxen_niu_macaddr_get(adapter,
(netxen_ethernet_macaddr_t *) mac_addr);
if (memcmp(mac_addr, addr, 6) == 0)
break;
@ -642,7 +630,7 @@ int netxen_niu_macaddr_set(struct netxen_port *port,
if (i == 10) {
printk(KERN_ERR "%s: cannot set Mac addr for %s\n",
netxen_nic_driver_name, port->netdev->name);
netxen_nic_driver_name, adapter->netdev->name);
printk(KERN_ERR "MAC address set: "
"%02x:%02x:%02x:%02x:%02x:%02x.\n",
addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
@ -735,13 +723,13 @@ int netxen_niu_enable_gbe_port(struct netxen_adapter *adapter,
}
/* Disable a GbE interface */
int netxen_niu_disable_gbe_port(struct netxen_adapter *adapter, int port)
int netxen_niu_disable_gbe_port(struct netxen_adapter *adapter)
{
__u32 mac_cfg0;
u32 port = physical_port[adapter->portnum];
if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS))
return -EINVAL;
mac_cfg0 = 0;
netxen_gb_soft_reset(mac_cfg0);
if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_GB_MAC_CONFIG_0(port),
@ -751,13 +739,13 @@ int netxen_niu_disable_gbe_port(struct netxen_adapter *adapter, int port)
}
/* Disable an XG interface */
int netxen_niu_disable_xg_port(struct netxen_adapter *adapter, int port)
int netxen_niu_disable_xg_port(struct netxen_adapter *adapter)
{
__u32 mac_cfg;
u32 port = physical_port[adapter->portnum];
if (port != 0)
return -EINVAL;
mac_cfg = 0;
netxen_xg_soft_reset(mac_cfg);
if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XGE_CONFIG_0,
@ -767,10 +755,11 @@ int netxen_niu_disable_xg_port(struct netxen_adapter *adapter, int port)
}
/* Set promiscuous mode for a GbE interface */
int netxen_niu_set_promiscuous_mode(struct netxen_adapter *adapter, int port,
int netxen_niu_set_promiscuous_mode(struct netxen_adapter *adapter,
netxen_niu_prom_mode_t mode)
{
__u32 reg;
u32 port = physical_port[adapter->portnum];
if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS))
return -EINVAL;
@ -824,25 +813,50 @@ int netxen_niu_set_promiscuous_mode(struct netxen_adapter *adapter, int port,
* Set the MAC address for an XG port
* Note that the passed-in value must already be in network byte order.
*/
int netxen_niu_xg_macaddr_set(struct netxen_port *port,
int netxen_niu_xg_macaddr_set(struct netxen_adapter *adapter,
netxen_ethernet_macaddr_t addr)
{
int phy = physical_port[adapter->portnum];
u8 temp[4];
u32 val;
struct netxen_adapter *adapter = port->adapter;
if ((phy < 0) || (phy > NETXEN_NIU_MAX_XG_PORTS))
return -EIO;
temp[0] = temp[1] = 0;
memcpy(temp + 2, addr, 2);
val = le32_to_cpu(*(__le32 *)temp);
if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XGE_STATION_ADDR_0_1,
&val, 4))
switch (phy) {
case 0:
memcpy(temp + 2, addr, 2);
val = le32_to_cpu(*(__le32 *)temp);
if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XGE_STATION_ADDR_0_1,
&val, 4))
return -EIO;
memcpy(&temp, ((u8 *) addr) + 2, sizeof(__le32));
val = le32_to_cpu(*(__le32 *)temp);
if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XGE_STATION_ADDR_0_HI,
&val, 4))
memcpy(&temp, ((u8 *) addr) + 2, sizeof(__le32));
val = le32_to_cpu(*(__le32 *)temp);
if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XGE_STATION_ADDR_0_HI,
&val, 4))
return -EIO;
break;
case 1:
memcpy(temp + 2, addr, 2);
val = le32_to_cpu(*(__le32 *)temp);
if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XG1_STATION_ADDR_0_1,
&val, 4))
return -EIO;
memcpy(&temp, ((u8 *) addr) + 2, sizeof(__le32));
val = le32_to_cpu(*(__le32 *)temp);
if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XG1_STATION_ADDR_0_HI,
&val, 4))
return -EIO;
break;
default:
printk(KERN_ERR "Unknown port %d\n", phy);
break;
}
return 0;
}
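Standalone sketch of how the switch above splits a six-byte station address across the two 32-bit NIU registers: octets 0-1 land in the upper bytes of the ADDR_0_1 word and octets 2-5 fill the ADDR_0_HI word, mirroring the memcpy()/le32 handling on a little-endian host. Register names appear only in the comments; no hardware access is attempted here.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
        const uint8_t mac[6] = { 0x00, 0x0e, 0x1e, 0x12, 0x34, 0x56 };
        uint8_t tmp[4] = { 0, 0, 0, 0 };
        uint32_t lo, hi;

        memcpy(tmp + 2, mac, 2);        /* octets 0-1 -> XGE_STATION_ADDR_0_1 */
        memcpy(&lo, tmp, 4);
        memcpy(&hi, mac + 2, 4);        /* octets 2-5 -> XGE_STATION_ADDR_0_HI */

        printf("ADDR_0_1 = 0x%08x  ADDR_0_HI = 0x%08x\n", lo, hi);
        return 0;
}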
@ -851,9 +865,10 @@ int netxen_niu_xg_macaddr_set(struct netxen_port *port,
* Return the current station MAC address.
* Note that the passed-in value must already be in network byte order.
*/
int netxen_niu_xg_macaddr_get(struct netxen_adapter *adapter, int phy,
int netxen_niu_xg_macaddr_get(struct netxen_adapter *adapter,
netxen_ethernet_macaddr_t * addr)
{
int phy = physical_port[adapter->portnum];
u32 stationhigh;
u32 stationlow;
u8 val[8];
@ -878,21 +893,24 @@ int netxen_niu_xg_macaddr_get(struct netxen_adapter *adapter, int phy,
}
int netxen_niu_xg_set_promiscuous_mode(struct netxen_adapter *adapter,
int port, netxen_niu_prom_mode_t mode)
netxen_niu_prom_mode_t mode)
{
__u32 reg;
u32 port = physical_port[adapter->portnum];
if ((port < 0) || (port > NETXEN_NIU_MAX_GBE_PORTS))
if ((port < 0) || (port > NETXEN_NIU_MAX_XG_PORTS))
return -EINVAL;
if (netxen_nic_hw_read_wx(adapter, NETXEN_NIU_XGE_CONFIG_1, &reg, 4))
return -EIO;
if (netxen_nic_hw_read_wx(adapter,
NETXEN_NIU_XGE_CONFIG_1 + (0x10000 * port), &reg, 4))
return -EIO;
if (mode == NETXEN_NIU_PROMISC_MODE)
reg = (reg | 0x2000UL);
else
reg = (reg & ~0x2000UL);
netxen_crb_writelit_adapter(adapter, NETXEN_NIU_XGE_CONFIG_1, reg);
netxen_crb_writelit_adapter(adapter,
NETXEN_NIU_XGE_CONFIG_1 + (0x10000 * port), reg);
return 0;
}
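The promiscuous-mode change above also shows the per-port addressing this patch introduces: each XG port's CONFIG block sits 0x10000 above the previous one, and bit 13 (0x2000) of CONFIG_1 toggles promiscuous mode. A standalone sketch, with a placeholder register offset since the real one comes from the NetXen register headers:

#include <stdio.h>
#include <stdint.h>

#define NIU_XGE_CONFIG_1   0x1200cU     /* placeholder offset, not the real one */
#define XGE_PORT_STRIDE    0x10000U
#define XGE_PROMISC_BIT    0x2000U

int main(void)
{
        uint32_t reg = 0;               /* value previously read from CONFIG_1 */
        int port;

        for (port = 0; port < 2; port++) {
                uint32_t addr = NIU_XGE_CONFIG_1 + XGE_PORT_STRIDE * port;
                uint32_t val  = reg | XGE_PROMISC_BIT;  /* enable promiscuous mode */

                printf("port %d: write 0x%08x to 0x%08x\n", port, val, addr);
        }
        return 0;
}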

@ -100,8 +100,21 @@
#define CRB_CMD_PRODUCER_OFFSET_1 NETXEN_NIC_REG(0x1ac)
#define CRB_CMD_CONSUMER_OFFSET_1 NETXEN_NIC_REG(0x1b0)
#define CRB_CMD_PRODUCER_OFFSET_2 NETXEN_NIC_REG(0x1b8)
#define CRB_CMD_CONSUMER_OFFSET_2 NETXEN_NIC_REG(0x1bc)
// 1c0 to 1cc used for signature reg
#define CRB_CMD_PRODUCER_OFFSET_3 NETXEN_NIC_REG(0x1d0)
#define CRB_CMD_CONSUMER_OFFSET_3 NETXEN_NIC_REG(0x1d4)
#define CRB_TEMP_STATE NETXEN_NIC_REG(0x1b4)
#define CRB_V2P_0 NETXEN_NIC_REG(0x290)
#define CRB_V2P_1 NETXEN_NIC_REG(0x294)
#define CRB_V2P_2 NETXEN_NIC_REG(0x298)
#define CRB_V2P_3 NETXEN_NIC_REG(0x29c)
#define CRB_V2P(port) (CRB_V2P_0+((port)*4))
#define CRB_DRIVER_VERSION NETXEN_NIC_REG(0x2a0)
/* used for ethtool tests */
#define CRB_SCRATCHPAD_TEST NETXEN_NIC_REG(0x280)
@ -139,128 +152,13 @@ struct netxen_recv_crb {
};
#if defined(DEFINE_GLOBAL_RECV_CRB)
struct netxen_recv_crb recv_crb_registers[] = {
/*
* Instance 0.
*/
{
/* rcv_desc_crb: */
{
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x100),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x104),
/* crb_gloablrcv_ring: */
NETXEN_NIC_REG(0x108),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x10c),
},
/* Jumbo frames */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x110),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x114),
/* crb_gloablrcv_ring: */
NETXEN_NIC_REG(0x118),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x11c),
},
/* LRO */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x120),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x124),
/* crb_gloablrcv_ring: */
NETXEN_NIC_REG(0x128),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x12c),
}
},
/* crb_rcvstatus_ring: */
NETXEN_NIC_REG(0x130),
/* crb_rcv_status_producer: */
NETXEN_NIC_REG(0x134),
/* crb_rcv_status_consumer: */
NETXEN_NIC_REG(0x138),
/* crb_rcvpeg_state: */
NETXEN_NIC_REG(0x13c),
/* crb_status_ring_size */
NETXEN_NIC_REG(0x140),
},
/*
* Instance 1,
*/
{
/* rcv_desc_crb: */
{
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x144),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x148),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x14c),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x150),
},
/* Jumbo frames */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x154),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x158),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x15c),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x160),
},
/* LRO */
{
/* crb_rcv_producer_offset: */
NETXEN_NIC_REG(0x164),
/* crb_rcv_consumer_offset: */
NETXEN_NIC_REG(0x168),
/* crb_globalrcv_ring: */
NETXEN_NIC_REG(0x16c),
/* crb_rcv_ring_size */
NETXEN_NIC_REG(0x170),
}
},
/* crb_rcvstatus_ring: */
NETXEN_NIC_REG(0x174),
/* crb_rcv_status_producer: */
NETXEN_NIC_REG(0x178),
/* crb_rcv_status_consumer: */
NETXEN_NIC_REG(0x17c),
/* crb_rcvpeg_state: */
NETXEN_NIC_REG(0x180),
/* crb_status_ring_size */
NETXEN_NIC_REG(0x184),
},
};
u64 ctx_addr_sig_regs[][3] = {
{NETXEN_NIC_REG(0x188), NETXEN_NIC_REG(0x18c), NETXEN_NIC_REG(0x1c0)},
{NETXEN_NIC_REG(0x190), NETXEN_NIC_REG(0x194), NETXEN_NIC_REG(0x1c4)},
{NETXEN_NIC_REG(0x198), NETXEN_NIC_REG(0x19c), NETXEN_NIC_REG(0x1c8)},
{NETXEN_NIC_REG(0x1a0), NETXEN_NIC_REG(0x1a4), NETXEN_NIC_REG(0x1cc)}
};
#else
extern struct netxen_recv_crb recv_crb_registers[];
extern u64 ctx_addr_sig_regs[][3];
#define CRB_CTX_ADDR_REG_LO (ctx_addr_sig_regs[0][0])
#define CRB_CTX_ADDR_REG_HI (ctx_addr_sig_regs[0][2])
#define CRB_CTX_SIGNATURE_REG (ctx_addr_sig_regs[0][1])
#endif /* DEFINE_GLOBAL_RECEIVE_CRB */
#define CRB_CTX_ADDR_REG_LO(FUNC_ID) (ctx_addr_sig_regs[FUNC_ID][0])
#define CRB_CTX_ADDR_REG_HI(FUNC_ID) (ctx_addr_sig_regs[FUNC_ID][2])
#define CRB_CTX_SIGNATURE_REG(FUNC_ID) (ctx_addr_sig_regs[FUNC_ID][1])
/*
* Temperature control.

@ -253,12 +253,12 @@ struct pcnet32_access {
* so the structure should be allocated using pci_alloc_consistent().
*/
struct pcnet32_private {
struct pcnet32_init_block init_block;
struct pcnet32_init_block *init_block;
/* The Tx and Rx ring entries must be aligned on 16-byte boundaries in 32bit mode. */
struct pcnet32_rx_head *rx_ring;
struct pcnet32_tx_head *tx_ring;
dma_addr_t dma_addr;/* DMA address of beginning of this
object, returned by pci_alloc_consistent */
dma_addr_t init_dma_addr;/* DMA address of beginning of the init block,
returned by pci_alloc_consistent */
struct pci_dev *pci_dev;
const char *name;
/* The saved address of a sent-in-place packet/buffer, for skfree(). */
@ -653,7 +653,7 @@ static void pcnet32_realloc_rx_ring(struct net_device *dev,
static void pcnet32_purge_rx_ring(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int i;
/* free all allocated skbuffs */
@ -681,7 +681,7 @@ static void pcnet32_poll_controller(struct net_device *dev)
static int pcnet32_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long flags;
int r = -EOPNOTSUPP;
@ -696,7 +696,7 @@ static int pcnet32_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
static int pcnet32_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long flags;
int r = -EOPNOTSUPP;
@ -711,7 +711,7 @@ static int pcnet32_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
static void pcnet32_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
strcpy(info->driver, DRV_NAME);
strcpy(info->version, DRV_VERSION);
@ -723,7 +723,7 @@ static void pcnet32_get_drvinfo(struct net_device *dev,
static u32 pcnet32_get_link(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long flags;
int r;
@ -743,19 +743,19 @@ static u32 pcnet32_get_link(struct net_device *dev)
static u32 pcnet32_get_msglevel(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
return lp->msg_enable;
}
static void pcnet32_set_msglevel(struct net_device *dev, u32 value)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
lp->msg_enable = value;
}
static int pcnet32_nway_reset(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long flags;
int r = -EOPNOTSUPP;
@ -770,7 +770,7 @@ static int pcnet32_nway_reset(struct net_device *dev)
static void pcnet32_get_ringparam(struct net_device *dev,
struct ethtool_ringparam *ering)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
ering->tx_max_pending = TX_MAX_RING_SIZE;
ering->tx_pending = lp->tx_ring_size;
@ -781,7 +781,7 @@ static void pcnet32_get_ringparam(struct net_device *dev,
static int pcnet32_set_ringparam(struct net_device *dev,
struct ethtool_ringparam *ering)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long flags;
unsigned int size;
ulong ioaddr = dev->base_addr;
@ -847,7 +847,7 @@ static int pcnet32_self_test_count(struct net_device *dev)
static void pcnet32_ethtool_test(struct net_device *dev,
struct ethtool_test *test, u64 * data)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int rc;
if (test->flags == ETH_TEST_FL_OFFLINE) {
@ -868,7 +868,7 @@ static void pcnet32_ethtool_test(struct net_device *dev,
static int pcnet32_loopback_test(struct net_device *dev, uint64_t * data1)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
struct pcnet32_access *a = &lp->a; /* access to registers */
ulong ioaddr = dev->base_addr; /* card base I/O address */
struct sk_buff *skb; /* sk buff */
@ -1047,7 +1047,7 @@ static int pcnet32_loopback_test(struct net_device *dev, uint64_t * data1)
static void pcnet32_led_blink_callback(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
struct pcnet32_access *a = &lp->a;
ulong ioaddr = dev->base_addr;
unsigned long flags;
@ -1064,7 +1064,7 @@ static void pcnet32_led_blink_callback(struct net_device *dev)
static int pcnet32_phys_id(struct net_device *dev, u32 data)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
struct pcnet32_access *a = &lp->a;
ulong ioaddr = dev->base_addr;
unsigned long flags;
@ -1109,7 +1109,7 @@ static int pcnet32_suspend(struct net_device *dev, unsigned long *flags,
int can_sleep)
{
int csr5;
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
struct pcnet32_access *a = &lp->a;
ulong ioaddr = dev->base_addr;
int ticks;
@ -1257,7 +1257,7 @@ static void pcnet32_rx_entry(struct net_device *dev,
static int pcnet32_rx(struct net_device *dev, int quota)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int entry = lp->cur_rx & lp->rx_mod_mask;
struct pcnet32_rx_head *rxp = &lp->rx_ring[entry];
int npackets = 0;
@ -1282,7 +1282,7 @@ static int pcnet32_rx(struct net_device *dev, int quota)
static int pcnet32_tx(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned int dirty_tx = lp->dirty_tx;
int delta;
int must_restart = 0;
@ -1381,7 +1381,7 @@ static int pcnet32_tx(struct net_device *dev)
#ifdef CONFIG_PCNET32_NAPI
static int pcnet32_poll(struct net_device *dev, int *budget)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int quota = min(dev->quota, *budget);
unsigned long ioaddr = dev->base_addr;
unsigned long flags;
@ -1428,7 +1428,7 @@ static int pcnet32_poll(struct net_device *dev, int *budget)
#define PCNET32_MAX_PHYS 32
static int pcnet32_get_regs_len(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int j = lp->phycount * PCNET32_REGS_PER_PHY;
return ((PCNET32_NUM_REGS + j) * sizeof(u16));
@ -1439,7 +1439,7 @@ static void pcnet32_get_regs(struct net_device *dev, struct ethtool_regs *regs,
{
int i, csr0;
u16 *buff = ptr;
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
struct pcnet32_access *a = &lp->a;
ulong ioaddr = dev->base_addr;
unsigned long flags;
@ -1592,7 +1592,6 @@ static int __devinit
pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
{
struct pcnet32_private *lp;
dma_addr_t lp_dma_addr;
int i, media;
int fdx, mii, fset, dxsuflo;
int chip_version;
@ -1714,7 +1713,7 @@ pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
dxsuflo = 1;
}
dev = alloc_etherdev(0);
dev = alloc_etherdev(sizeof(*lp));
if (!dev) {
if (pcnet32_debug & NETIF_MSG_PROBE)
printk(KERN_ERR PFX "Memory allocation failed.\n");
@ -1805,25 +1804,22 @@ pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
}
dev->base_addr = ioaddr;
lp = netdev_priv(dev);
/* pci_alloc_consistent returns page-aligned memory, so we do not have to check the alignment */
if ((lp =
pci_alloc_consistent(pdev, sizeof(*lp), &lp_dma_addr)) == NULL) {
if ((lp->init_block =
pci_alloc_consistent(pdev, sizeof(*lp->init_block), &lp->init_dma_addr)) == NULL) {
if (pcnet32_debug & NETIF_MSG_PROBE)
printk(KERN_ERR PFX
"Consistent memory allocation failed.\n");
ret = -ENOMEM;
goto err_free_netdev;
}
memset(lp, 0, sizeof(*lp));
lp->dma_addr = lp_dma_addr;
lp->pci_dev = pdev;
spin_lock_init(&lp->lock);
SET_MODULE_OWNER(dev);
SET_NETDEV_DEV(dev, &pdev->dev);
dev->priv = lp;
lp->name = chipname;
lp->shared_irq = shared;
lp->tx_ring_size = TX_RING_SIZE; /* default tx ring size */
@ -1870,23 +1866,21 @@ pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
&& dev->dev_addr[2] == 0x75)
lp->options = PCNET32_PORT_FD | PCNET32_PORT_GPSI;
lp->init_block.mode = le16_to_cpu(0x0003); /* Disable Rx and Tx. */
lp->init_block.tlen_rlen =
lp->init_block->mode = le16_to_cpu(0x0003); /* Disable Rx and Tx. */
lp->init_block->tlen_rlen =
le16_to_cpu(lp->tx_len_bits | lp->rx_len_bits);
for (i = 0; i < 6; i++)
lp->init_block.phys_addr[i] = dev->dev_addr[i];
lp->init_block.filter[0] = 0x00000000;
lp->init_block.filter[1] = 0x00000000;
lp->init_block.rx_ring = (u32) le32_to_cpu(lp->rx_ring_dma_addr);
lp->init_block.tx_ring = (u32) le32_to_cpu(lp->tx_ring_dma_addr);
lp->init_block->phys_addr[i] = dev->dev_addr[i];
lp->init_block->filter[0] = 0x00000000;
lp->init_block->filter[1] = 0x00000000;
lp->init_block->rx_ring = (u32) le32_to_cpu(lp->rx_ring_dma_addr);
lp->init_block->tx_ring = (u32) le32_to_cpu(lp->tx_ring_dma_addr);
/* switch pcnet32 to 32bit mode */
a->write_bcr(ioaddr, 20, 2);
a->write_csr(ioaddr, 1, (lp->dma_addr + offsetof(struct pcnet32_private,
init_block)) & 0xffff);
a->write_csr(ioaddr, 2, (lp->dma_addr + offsetof(struct pcnet32_private,
init_block)) >> 16);
a->write_csr(ioaddr, 1, (lp->init_dma_addr & 0xffff));
a->write_csr(ioaddr, 2, (lp->init_dma_addr >> 16));
if (pdev) { /* use the IRQ provided by PCI */
dev->irq = pdev->irq;
@ -1992,7 +1986,8 @@ pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
err_free_ring:
pcnet32_free_ring(dev);
err_free_consistent:
pci_free_consistent(lp->pci_dev, sizeof(*lp), lp, lp->dma_addr);
pci_free_consistent(lp->pci_dev, sizeof(*lp->init_block),
lp->init_block, lp->init_dma_addr);
err_free_netdev:
free_netdev(dev);
err_release_region:
@ -2003,7 +1998,7 @@ pcnet32_probe1(unsigned long ioaddr, int shared, struct pci_dev *pdev)
/* if any allocation fails, caller must also call pcnet32_free_ring */
static int pcnet32_alloc_ring(struct net_device *dev, char *name)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
lp->tx_ring = pci_alloc_consistent(lp->pci_dev,
sizeof(struct pcnet32_tx_head) *
@ -2070,7 +2065,7 @@ static int pcnet32_alloc_ring(struct net_device *dev, char *name)
static void pcnet32_free_ring(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
kfree(lp->tx_skbuff);
lp->tx_skbuff = NULL;
@ -2103,7 +2098,7 @@ static void pcnet32_free_ring(struct net_device *dev)
static int pcnet32_open(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long ioaddr = dev->base_addr;
u16 val;
int i;
@ -2134,8 +2129,7 @@ static int pcnet32_open(struct net_device *dev)
"%s: pcnet32_open() irq %d tx/rx rings %#x/%#x init %#x.\n",
dev->name, dev->irq, (u32) (lp->tx_ring_dma_addr),
(u32) (lp->rx_ring_dma_addr),
(u32) (lp->dma_addr +
offsetof(struct pcnet32_private, init_block)));
(u32) (lp->init_dma_addr));
/* set/reset autoselect bit */
val = lp->a.read_bcr(ioaddr, 2) & ~2;
@ -2274,7 +2268,7 @@ static int pcnet32_open(struct net_device *dev)
}
#endif
lp->init_block.mode =
lp->init_block->mode =
le16_to_cpu((lp->options & PCNET32_PORT_PORTSEL) << 7);
pcnet32_load_multicast(dev);
@ -2284,12 +2278,8 @@ static int pcnet32_open(struct net_device *dev)
}
/* Re-initialize the PCNET32, and start it when done. */
lp->a.write_csr(ioaddr, 1, (lp->dma_addr +
offsetof(struct pcnet32_private,
init_block)) & 0xffff);
lp->a.write_csr(ioaddr, 2,
(lp->dma_addr +
offsetof(struct pcnet32_private, init_block)) >> 16);
lp->a.write_csr(ioaddr, 1, (lp->init_dma_addr & 0xffff));
lp->a.write_csr(ioaddr, 2, (lp->init_dma_addr >> 16));
lp->a.write_csr(ioaddr, CSR4, 0x0915); /* auto tx pad */
lp->a.write_csr(ioaddr, CSR0, CSR0_INIT);
@ -2316,8 +2306,7 @@ static int pcnet32_open(struct net_device *dev)
printk(KERN_DEBUG
"%s: pcnet32 open after %d ticks, init block %#x csr0 %4.4x.\n",
dev->name, i,
(u32) (lp->dma_addr +
offsetof(struct pcnet32_private, init_block)),
(u32) (lp->init_dma_addr),
lp->a.read_csr(ioaddr, CSR0));
spin_unlock_irqrestore(&lp->lock, flags);
@ -2355,7 +2344,7 @@ static int pcnet32_open(struct net_device *dev)
static void pcnet32_purge_tx_ring(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int i;
for (i = 0; i < lp->tx_ring_size; i++) {
@ -2375,7 +2364,7 @@ static void pcnet32_purge_tx_ring(struct net_device *dev)
/* Initialize the PCNET32 Rx and Tx rings. */
static int pcnet32_init_ring(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int i;
lp->tx_full = 0;
@ -2417,12 +2406,12 @@ static int pcnet32_init_ring(struct net_device *dev)
lp->tx_dma_addr[i] = 0;
}
lp->init_block.tlen_rlen =
lp->init_block->tlen_rlen =
le16_to_cpu(lp->tx_len_bits | lp->rx_len_bits);
for (i = 0; i < 6; i++)
lp->init_block.phys_addr[i] = dev->dev_addr[i];
lp->init_block.rx_ring = (u32) le32_to_cpu(lp->rx_ring_dma_addr);
lp->init_block.tx_ring = (u32) le32_to_cpu(lp->tx_ring_dma_addr);
lp->init_block->phys_addr[i] = dev->dev_addr[i];
lp->init_block->rx_ring = (u32) le32_to_cpu(lp->rx_ring_dma_addr);
lp->init_block->tx_ring = (u32) le32_to_cpu(lp->tx_ring_dma_addr);
wmb(); /* Make sure all changes are visible */
return 0;
}
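The init-block rework shown in these pcnet32 hunks gives the block its own pci_alloc_consistent() allocation and programs its bus address straight into CSR1/CSR2 as two 16-bit halves. A standalone sketch of just the address split; the CSR numbers match the write_csr() calls above:

#include <stdio.h>
#include <stdint.h>

static void program_init_block(uint32_t init_dma_addr,
                               uint16_t *csr1, uint16_t *csr2)
{
        *csr1 = init_dma_addr & 0xffff; /* a.write_csr(ioaddr, 1, ...) */
        *csr2 = init_dma_addr >> 16;    /* a.write_csr(ioaddr, 2, ...) */
}

int main(void)
{
        uint16_t csr1, csr2;

        program_init_block(0x1ffe4000u, &csr1, &csr2);
        printf("CSR1 = 0x%04x, CSR2 = 0x%04x\n", csr1, csr2);
        return 0;
}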
@ -2433,7 +2422,7 @@ static int pcnet32_init_ring(struct net_device *dev)
*/
static void pcnet32_restart(struct net_device *dev, unsigned int csr0_bits)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long ioaddr = dev->base_addr;
int i;
@ -2463,7 +2452,7 @@ static void pcnet32_restart(struct net_device *dev, unsigned int csr0_bits)
static void pcnet32_tx_timeout(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long ioaddr = dev->base_addr, flags;
spin_lock_irqsave(&lp->lock, flags);
@ -2504,7 +2493,7 @@ static void pcnet32_tx_timeout(struct net_device *dev)
static int pcnet32_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long ioaddr = dev->base_addr;
u16 status;
int entry;
@ -2569,7 +2558,7 @@ pcnet32_interrupt(int irq, void *dev_id)
int boguscnt = max_interrupt_work;
ioaddr = dev->base_addr;
lp = dev->priv;
lp = netdev_priv(dev);
spin_lock(&lp->lock);
@ -2651,7 +2640,7 @@ pcnet32_interrupt(int irq, void *dev_id)
static int pcnet32_close(struct net_device *dev)
{
unsigned long ioaddr = dev->base_addr;
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long flags;
del_timer_sync(&lp->watchdog_timer);
@ -2692,7 +2681,7 @@ static int pcnet32_close(struct net_device *dev)
static struct net_device_stats *pcnet32_get_stats(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long ioaddr = dev->base_addr;
unsigned long flags;
@ -2706,8 +2695,8 @@ static struct net_device_stats *pcnet32_get_stats(struct net_device *dev)
/* taken from the sunlance driver, which it took from the depca driver */
static void pcnet32_load_multicast(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
volatile struct pcnet32_init_block *ib = &lp->init_block;
struct pcnet32_private *lp = netdev_priv(dev);
volatile struct pcnet32_init_block *ib = lp->init_block;
volatile u16 *mcast_table = (u16 *) & ib->filter;
struct dev_mc_list *dmi = dev->mc_list;
unsigned long ioaddr = dev->base_addr;
@ -2756,7 +2745,7 @@ static void pcnet32_load_multicast(struct net_device *dev)
static void pcnet32_set_multicast_list(struct net_device *dev)
{
unsigned long ioaddr = dev->base_addr, flags;
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int csr15, suspended;
spin_lock_irqsave(&lp->lock, flags);
@ -2767,12 +2756,12 @@ static void pcnet32_set_multicast_list(struct net_device *dev)
if (netif_msg_hw(lp))
printk(KERN_INFO "%s: Promiscuous mode enabled.\n",
dev->name);
lp->init_block.mode =
lp->init_block->mode =
le16_to_cpu(0x8000 | (lp->options & PCNET32_PORT_PORTSEL) <<
7);
lp->a.write_csr(ioaddr, CSR15, csr15 | 0x8000);
} else {
lp->init_block.mode =
lp->init_block->mode =
le16_to_cpu((lp->options & PCNET32_PORT_PORTSEL) << 7);
lp->a.write_csr(ioaddr, CSR15, csr15 & 0x7fff);
pcnet32_load_multicast(dev);
@ -2795,7 +2784,7 @@ static void pcnet32_set_multicast_list(struct net_device *dev)
/* This routine assumes that the lp->lock is held */
static int mdio_read(struct net_device *dev, int phy_id, int reg_num)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long ioaddr = dev->base_addr;
u16 val_out;
@ -2811,7 +2800,7 @@ static int mdio_read(struct net_device *dev, int phy_id, int reg_num)
/* This routine assumes that the lp->lock is held */
static void mdio_write(struct net_device *dev, int phy_id, int reg_num, int val)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long ioaddr = dev->base_addr;
if (!lp->mii)
@ -2823,7 +2812,7 @@ static void mdio_write(struct net_device *dev, int phy_id, int reg_num, int val)
static int pcnet32_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int rc;
unsigned long flags;
@ -2841,7 +2830,7 @@ static int pcnet32_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
static int pcnet32_check_otherphy(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
struct mii_if_info mii = lp->mii_if;
u16 bmcr;
int i;
@ -2888,7 +2877,7 @@ static int pcnet32_check_otherphy(struct net_device *dev)
static void pcnet32_check_media(struct net_device *dev, int verbose)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
int curr_link;
int prev_link = netif_carrier_ok(dev) ? 1 : 0;
u32 bcr9;
@ -2944,7 +2933,7 @@ static void pcnet32_check_media(struct net_device *dev, int verbose)
static void pcnet32_watchdog(struct net_device *dev)
{
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unsigned long flags;
/* Print the link status if it has changed */
@ -2960,12 +2949,13 @@ static void __devexit pcnet32_remove_one(struct pci_dev *pdev)
struct net_device *dev = pci_get_drvdata(pdev);
if (dev) {
struct pcnet32_private *lp = dev->priv;
struct pcnet32_private *lp = netdev_priv(dev);
unregister_netdev(dev);
pcnet32_free_ring(dev);
release_region(dev->base_addr, PCNET32_TOTAL_SIZE);
pci_free_consistent(lp->pci_dev, sizeof(*lp), lp, lp->dma_addr);
pci_free_consistent(lp->pci_dev, sizeof(*lp->init_block),
lp->init_block, lp->init_dma_addr);
free_netdev(dev);
pci_disable_device(pdev);
pci_set_drvdata(pdev, NULL);
@ -3040,12 +3030,13 @@ static void __exit pcnet32_cleanup_module(void)
struct net_device *next_dev;
while (pcnet32_dev) {
struct pcnet32_private *lp = pcnet32_dev->priv;
struct pcnet32_private *lp = netdev_priv(pcnet32_dev);
next_dev = lp->next;
unregister_netdev(pcnet32_dev);
pcnet32_free_ring(pcnet32_dev);
release_region(pcnet32_dev->base_addr, PCNET32_TOTAL_SIZE);
pci_free_consistent(lp->pci_dev, sizeof(*lp), lp, lp->dma_addr);
pci_free_consistent(lp->pci_dev, sizeof(*lp->init_block),
lp->init_block, lp->init_dma_addr);
free_netdev(pcnet32_dev);
pcnet32_dev = next_dev;
}
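The recurring dev->priv to netdev_priv() conversion in this file goes hand in hand with carving the private struct out of the net_device allocation itself (alloc_etherdev(sizeof(*lp)) above). A hedged kernel-style sketch of that pattern; struct example_priv and its field are illustrative only:

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

struct example_priv {
        int msg_enable;                 /* illustrative field */
};

static struct net_device *example_alloc(void)
{
        struct net_device *dev = alloc_etherdev(sizeof(struct example_priv));
        struct example_priv *lp;

        if (!dev)
                return NULL;

        lp = netdev_priv(dev);          /* storage is already part of *dev */
        lp->msg_enable = 0;
        return dev;
}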

@ -35,10 +35,14 @@
#include <asm/irq.h>
#include <asm/uaccess.h>
/* mdiobus_register
/**
* mdiobus_register - bring up all the PHYs on a given bus and attach them to bus
* @bus: target mii_bus
*
* description: Called by a bus driver to bring up all the PHYs
* on a given bus, and attach them to the bus
* Description: Called by a bus driver to bring up all the PHYs
* on a given bus, and attach them to the bus.
*
* Returns 0 on success or < 0 on error.
*/
int mdiobus_register(struct mii_bus *bus)
{
@ -114,10 +118,13 @@ void mdiobus_unregister(struct mii_bus *bus)
}
EXPORT_SYMBOL(mdiobus_unregister);
/* mdio_bus_match
/**
* mdio_bus_match - determine if given PHY driver supports the given PHY device
* @dev: target PHY device
* @drv: given PHY driver
*
* description: Given a PHY device, and a PHY driver, return 1 if
* the driver supports the device. Otherwise, return 0
* Description: Given a PHY device, and a PHY driver, return 1 if
* the driver supports the device. Otherwise, return 0.
*/
static int mdio_bus_match(struct device *dev, struct device_driver *drv)
{

@ -39,7 +39,9 @@
#include <asm/irq.h>
#include <asm/uaccess.h>
/* Convenience function to print out the current phy status
/**
* phy_print_status - Convenience function to print out the current phy status
* @phydev: the phy_device struct
*/
void phy_print_status(struct phy_device *phydev)
{
@ -55,10 +57,15 @@ void phy_print_status(struct phy_device *phydev)
EXPORT_SYMBOL(phy_print_status);
/* Convenience functions for reading/writing a given PHY
* register. They MUST NOT be called from interrupt context,
/**
* phy_read - Convenience function for reading a given PHY register
* @phydev: the phy_device struct
* @regnum: register number to read
*
* NOTE: MUST NOT be called from interrupt context,
* because the bus read/write functions may wait for an interrupt
* to conclude the operation. */
* to conclude the operation.
*/
int phy_read(struct phy_device *phydev, u16 regnum)
{
int retval;
@ -72,6 +79,16 @@ int phy_read(struct phy_device *phydev, u16 regnum)
}
EXPORT_SYMBOL(phy_read);
/**
* phy_write - Convenience function for writing a given PHY register
* @phydev: the phy_device struct
* @regnum: register number to write
* @val: value to write to @regnum
*
* NOTE: MUST NOT be called from interrupt context,
* because the bus read/write functions may wait for an interrupt
* to conclude the operation.
*/
int phy_write(struct phy_device *phydev, u16 regnum, u16 val)
{
int err;
@ -85,7 +102,15 @@ int phy_write(struct phy_device *phydev, u16 regnum, u16 val)
}
EXPORT_SYMBOL(phy_write);
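Hedged usage sketch for the two helpers documented above, from process context only as the notes require. Constants come from <linux/mii.h>; the helper name is illustrative:

#include <linux/mii.h>
#include <linux/phy.h>

static int example_link_is_up(struct phy_device *phydev)
{
        int bmsr = phy_read(phydev, MII_BMSR);

        if (bmsr < 0)
                return bmsr;            /* MDIO read failed */

        return (bmsr & BMSR_LSTATUS) ? 1 : 0;
}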
/**
* phy_clear_interrupt - Ack the phy device's interrupt
* @phydev: the phy_device struct
*
* If the @phydev driver has an ack_interrupt function, call it to
* ack and clear the phy device's interrupt.
*
* Returns 0 on success or < 0 on error.
*/
int phy_clear_interrupt(struct phy_device *phydev)
{
int err = 0;
@ -96,7 +121,13 @@ int phy_clear_interrupt(struct phy_device *phydev)
return err;
}
/**
* phy_config_interrupt - configure the PHY device for the requested interrupts
* @phydev: the phy_device struct
* @interrupts: interrupt flags to configure for this @phydev
*
* Returns 0 on success or < 0 on error.
*/
int phy_config_interrupt(struct phy_device *phydev, u32 interrupts)
{
int err = 0;
@ -109,9 +140,11 @@ int phy_config_interrupt(struct phy_device *phydev, u32 interrupts)
}
/* phy_aneg_done
/**
* phy_aneg_done - return auto-negotiation status
* @phydev: target phy_device struct
*
* description: Reads the status register and returns 0 either if
* Description: Reads the status register and returns 0 either if
* auto-negotiation is incomplete, or if there was an error.
* Returns BMSR_ANEGCOMPLETE if auto-negotiation is done.
*/
@ -173,9 +206,12 @@ static const struct phy_setting settings[] = {
#define MAX_NUM_SETTINGS (sizeof(settings)/sizeof(struct phy_setting))
/* phy_find_setting
/**
* phy_find_setting - find a PHY settings array entry that matches speed & duplex
* @speed: speed to match
* @duplex: duplex to match
*
* description: Searches the settings array for the setting which
* Description: Searches the settings array for the setting which
* matches the desired speed and duplex, and returns the index
* of that setting. Returns the index of the last setting if
* none of the others match.
@ -192,11 +228,12 @@ static inline int phy_find_setting(int speed, int duplex)
return idx < MAX_NUM_SETTINGS ? idx : MAX_NUM_SETTINGS - 1;
}
/* phy_find_valid
* idx: The first index in settings[] to search
* features: A mask of the valid settings
/**
* phy_find_valid - find a PHY setting that matches the requested features mask
* @idx: The first index in settings[] to search
* @features: A mask of the valid settings
*
* description: Returns the index of the first valid setting less
* Description: Returns the index of the first valid setting less
* than or equal to the one pointed to by idx, as determined by
* the mask in features. Returns the index of the last setting
* if nothing else matches.
@ -209,11 +246,13 @@ static inline int phy_find_valid(int idx, u32 features)
return idx < MAX_NUM_SETTINGS ? idx : MAX_NUM_SETTINGS - 1;
}
/* phy_sanitize_settings
/**
* phy_sanitize_settings - make sure the PHY is set to supported speed and duplex
* @phydev: the target phy_device struct
*
* description: Make sure the PHY is set to supported speeds and
* Description: Make sure the PHY is set to supported speeds and
* duplexes. Drop down by one in this order: 1000/FULL,
* 1000/HALF, 100/FULL, 100/HALF, 10/FULL, 10/HALF
* 1000/HALF, 100/FULL, 100/HALF, 10/FULL, 10/HALF.
*/
void phy_sanitize_settings(struct phy_device *phydev)
{
@ -232,16 +271,17 @@ void phy_sanitize_settings(struct phy_device *phydev)
}
EXPORT_SYMBOL(phy_sanitize_settings);
/* phy_ethtool_sset:
* A generic ethtool sset function. Handles all the details
/**
* phy_ethtool_sset - generic ethtool sset function, handles all the details
* @phydev: target phy_device struct
* @cmd: ethtool_cmd
*
* A few notes about parameter checking:
* - We don't set port or transceiver, so we don't care what they
* were set to.
* - phy_start_aneg() will make sure forced settings are sane, and
* choose the next best ones from the ones selected, so we don't
* care if ethtool tries to give us bad values
*
* care if ethtool tries to give us bad values.
*/
int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd)
{
@ -304,9 +344,15 @@ int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd)
}
EXPORT_SYMBOL(phy_ethtool_gset);
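A MAC driver that keeps the phy_device it got from phy_connect() can defer its ethtool get/set hooks to the two helpers above. A hedged sketch; struct example_priv is illustrative:

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/phy.h>

struct example_priv {
        struct phy_device *phydev;
};

static int example_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
        struct example_priv *priv = netdev_priv(dev);

        return phy_ethtool_gset(priv->phydev, cmd);
}

static int example_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
        struct example_priv *priv = netdev_priv(dev);

        return phy_ethtool_sset(priv->phydev, cmd);
}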
/* Note that this function is currently incompatible with the
/**
* phy_mii_ioctl - generic PHY MII ioctl interface
* @phydev: the phy_device struct
* @mii_data: MII ioctl data
* @cmd: ioctl cmd to execute
*
* Note that this function is currently incompatible with the
* PHYCONTROL layer. It changes registers without regard to
* current state. Use at own risk
* current state. Use at own risk.
*/
int phy_mii_ioctl(struct phy_device *phydev,
struct mii_ioctl_data *mii_data, int cmd)
@ -336,6 +382,12 @@ int phy_mii_ioctl(struct phy_device *phydev,
phydev->duplex = DUPLEX_FULL;
else
phydev->duplex = DUPLEX_HALF;
if ((!phydev->autoneg) &&
(val & BMCR_SPEED1000))
phydev->speed = SPEED_1000;
else if ((!phydev->autoneg) &&
(val & BMCR_SPEED100))
phydev->speed = SPEED_100;
break;
case MII_ADVERTISE:
phydev->advertising = val;
@ -358,13 +410,14 @@ int phy_mii_ioctl(struct phy_device *phydev,
return 0;
}
/* phy_start_aneg
/**
* phy_start_aneg - start auto-negotiation for this PHY device
* @phydev: the phy_device struct
*
* description: Sanitizes the settings (if we're not
* autonegotiating them), and then calls the driver's
* config_aneg function. If the PHYCONTROL Layer is operating,
* we change the state to reflect the beginning of
* Auto-negotiation or forcing.
* Description: Sanitizes the settings (if we're not autonegotiating
* them), and then calls the driver's config_aneg function.
* If the PHYCONTROL Layer is operating, we change the state to
* reflect the beginning of Auto-negotiation or forcing.
*/
int phy_start_aneg(struct phy_device *phydev)
{
@ -400,15 +453,19 @@ EXPORT_SYMBOL(phy_start_aneg);
static void phy_change(struct work_struct *work);
static void phy_timer(unsigned long data);
/* phy_start_machine:
/**
* phy_start_machine - start PHY state machine tracking
* @phydev: the phy_device struct
* @handler: callback function for state change notifications
*
* description: The PHY infrastructure can run a state machine
* Description: The PHY infrastructure can run a state machine
* which tracks whether the PHY is starting up, negotiating,
* etc. This function starts the timer which tracks the state
* of the PHY. If you want to be notified when the state
* changes, pass in the callback, otherwise, pass NULL. If you
* of the PHY. If you want to be notified when the state changes,
* pass in the callback @handler, otherwise, pass NULL. If you
* want to maintain your own state machine, do not call this
* function. */
* function.
*/
void phy_start_machine(struct phy_device *phydev,
void (*handler)(struct net_device *))
{
@ -420,9 +477,11 @@ void phy_start_machine(struct phy_device *phydev,
mod_timer(&phydev->phy_timer, jiffies + HZ);
}
/* phy_stop_machine
/**
* phy_stop_machine - stop the PHY state machine tracking
* @phydev: target phy_device struct
*
* description: Stops the state machine timer, sets the state to UP
* Description: Stops the state machine timer, sets the state to UP
* (unless it wasn't up yet). This function must be called BEFORE
* phy_detach.
*/
@ -438,12 +497,14 @@ void phy_stop_machine(struct phy_device *phydev)
phydev->adjust_state = NULL;
}
/* phy_force_reduction
/**
* phy_force_reduction - reduce PHY speed/duplex settings by one step
* @phydev: target phy_device struct
*
* description: Reduces the speed/duplex settings by
* one notch. The order is so:
* 1000/FULL, 1000/HALF, 100/FULL, 100/HALF,
* 10/FULL, 10/HALF. The function bottoms out at 10/HALF.
* Description: Reduces the speed/duplex settings by one notch,
* in this order--
* 1000/FULL, 1000/HALF, 100/FULL, 100/HALF, 10/FULL, 10/HALF.
* The function bottoms out at 10/HALF.
*/
static void phy_force_reduction(struct phy_device *phydev)
{
@ -464,7 +525,9 @@ static void phy_force_reduction(struct phy_device *phydev)
}
/* phy_error:
/**
* phy_error - enter HALTED state for this PHY device
* @phydev: target phy_device struct
*
* Moves the PHY to the HALTED state in response to a read
* or write error, and tells the controller the link is down.
@ -478,9 +541,12 @@ void phy_error(struct phy_device *phydev)
spin_unlock(&phydev->lock);
}
/* phy_interrupt
/**
* phy_interrupt - PHY interrupt handler
* @irq: interrupt line
* @phy_dat: phy_device pointer
*
* description: When a PHY interrupt occurs, the handler disables
* Description: When a PHY interrupt occurs, the handler disables
* interrupts, and schedules a work task to clear the interrupt.
*/
static irqreturn_t phy_interrupt(int irq, void *phy_dat)
@ -501,7 +567,10 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
return IRQ_HANDLED;
}
/* Enable the interrupts from the PHY side */
/**
* phy_enable_interrupts - Enable the interrupts from the PHY side
* @phydev: target phy_device struct
*/
int phy_enable_interrupts(struct phy_device *phydev)
{
int err;
@ -517,7 +586,10 @@ int phy_enable_interrupts(struct phy_device *phydev)
}
EXPORT_SYMBOL(phy_enable_interrupts);
/* Disable the PHY interrupts from the PHY side */
/**
* phy_disable_interrupts - Disable the PHY interrupts from the PHY side
* @phydev: target phy_device struct
*/
int phy_disable_interrupts(struct phy_device *phydev)
{
int err;
@ -543,13 +615,15 @@ phy_err:
}
EXPORT_SYMBOL(phy_disable_interrupts);
/* phy_start_interrupts
/**
* phy_start_interrupts - request and enable interrupts for a PHY device
* @phydev: target phy_device struct
*
* description: Request the interrupt for the given PHY. If
* this fails, then we set irq to PHY_POLL.
* Description: Request the interrupt for the given PHY.
* If this fails, then we set irq to PHY_POLL.
* Otherwise, we enable the interrupts in the PHY.
* Returns 0 on success.
* This should only be called with a valid IRQ number.
* Returns 0 on success or < 0 on error.
*/
int phy_start_interrupts(struct phy_device *phydev)
{
@ -574,6 +648,10 @@ int phy_start_interrupts(struct phy_device *phydev)
}
EXPORT_SYMBOL(phy_start_interrupts);
/**
* phy_stop_interrupts - disable interrupts from a PHY device
* @phydev: target phy_device struct
*/
int phy_stop_interrupts(struct phy_device *phydev)
{
int err;
@ -596,7 +674,10 @@ int phy_stop_interrupts(struct phy_device *phydev)
EXPORT_SYMBOL(phy_stop_interrupts);
/* Scheduled by the phy_interrupt/timer to handle PHY changes */
/**
* phy_change - Scheduled by the phy_interrupt/timer to handle PHY changes
* @work: work_struct that describes the work to be done
*/
static void phy_change(struct work_struct *work)
{
int err;
@ -630,7 +711,10 @@ phy_err:
phy_error(phydev);
}
/* Bring down the PHY link, and stop checking the status. */
/**
* phy_stop - Bring down the PHY link, and stop checking the status
* @phydev: target phy_device struct
*/
void phy_stop(struct phy_device *phydev)
{
spin_lock(&phydev->lock);
@ -659,9 +743,11 @@ out_unlock:
}
/* phy_start
/**
* phy_start - start or restart a PHY device
* @phydev: target phy_device struct
*
* description: Indicates the attached device's readiness to
* Description: Indicates the attached device's readiness to
* handle PHY-related work. Used during startup to start the
* PHY, and after a call to phy_stop() to resume operation.
* Also used to indicate the MDIO bus has cleared an error

View File

@ -74,11 +74,13 @@ struct phy_device* phy_device_create(struct mii_bus *bus, int addr, int phy_id)
}
EXPORT_SYMBOL(phy_device_create);
/* get_phy_device
/**
* get_phy_device - reads the specified PHY device and returns its @phy_device struct
* @bus: the target MII bus
* @addr: PHY address on the MII bus
*
* description: Reads the ID registers of the PHY at addr on the
* bus, then allocates and returns the phy_device to
* represent it.
* Description: Reads the ID registers of the PHY at @addr on the
* @bus, then allocates and returns the phy_device to represent it.
*/
struct phy_device * get_phy_device(struct mii_bus *bus, int addr)
{
@ -112,23 +114,33 @@ struct phy_device * get_phy_device(struct mii_bus *bus, int addr)
return dev;
}
/* phy_prepare_link:
/**
* phy_prepare_link - prepares the PHY layer to monitor link status
* @phydev: target phy_device struct
* @handler: callback function for link status change notifications
*
* description: Tells the PHY infrastructure to handle the
* Description: Tells the PHY infrastructure to handle the
* gory details on monitoring link status (whether through
* polling or an interrupt), and to call back to the
* connected device driver when the link status changes.
* If you want to monitor your own link state, don't call
* this function */
* this function.
*/
void phy_prepare_link(struct phy_device *phydev,
void (*handler)(struct net_device *))
{
phydev->adjust_link = handler;
}
/* phy_connect:
/**
* phy_connect - connect an ethernet device to a PHY device
* @dev: the network device to connect
* @phy_id: the PHY device to connect
* @handler: callback function for state change notifications
* @flags: PHY device's dev_flags
* @interface: PHY device's interface
*
* description: Convenience function for connecting ethernet
* Description: Convenience function for connecting ethernet
* devices to PHY devices. The default behavior is for
* the PHY infrastructure to handle everything, and only notify
* the connected driver when the link status changes. If you
@ -158,6 +170,10 @@ struct phy_device * phy_connect(struct net_device *dev, const char *phy_id,
}
EXPORT_SYMBOL(phy_connect);
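Hedged sketch of the connect-and-start sequence from a MAC driver's open() path, using the parameters documented above. The bus-id string format and the private struct are illustrative; error handling assumes phy_connect() returns an ERR_PTR value on failure:

#include <linux/err.h>
#include <linux/netdevice.h>
#include <linux/phy.h>

struct example_priv {
        struct phy_device *phydev;
};

static void example_adjust_link(struct net_device *dev)
{
        struct example_priv *priv = netdev_priv(dev);

        phy_print_status(priv->phydev);
}

static int example_connect_phy(struct net_device *dev, const char *bus_id)
{
        struct example_priv *priv = netdev_priv(dev);

        priv->phydev = phy_connect(dev, bus_id, &example_adjust_link,
                                   0 /* flags */, PHY_INTERFACE_MODE_MII);
        if (IS_ERR(priv->phydev))
                return PTR_ERR(priv->phydev);

        phy_start(priv->phydev);        /* let the state machine run */
        return 0;
}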
/**
* phy_disconnect - disable interrupts, stop state machine, and detach a PHY device
* @phydev: target phy_device struct
*/
void phy_disconnect(struct phy_device *phydev)
{
if (phydev->irq > 0)
@ -171,21 +187,25 @@ void phy_disconnect(struct phy_device *phydev)
}
EXPORT_SYMBOL(phy_disconnect);
/* phy_attach:
*
* description: Called by drivers to attach to a particular PHY
* device. The phy_device is found, and properly hooked up
* to the phy_driver. If no driver is attached, then the
* genphy_driver is used. The phy_device is given a ptr to
* the attaching device, and given a callback for link status
* change. The phy_device is returned to the attaching
* driver.
*/
static int phy_compare_id(struct device *dev, void *data)
{
return strcmp((char *)data, dev->bus_id) ? 0 : 1;
}
/**
* phy_attach - attach a network device to a particular PHY device
* @dev: network device to attach
* @phy_id: PHY device to attach
* @flags: PHY device's dev_flags
* @interface: PHY device's interface
*
* Description: Called by drivers to attach to a particular PHY
* device. The phy_device is found, and properly hooked up
* to the phy_driver. If no driver is attached, then the
* genphy_driver is used. The phy_device is given a ptr to
* the attaching device, and given a callback for link status
* change. The phy_device is returned to the attaching driver.
*/
struct phy_device *phy_attach(struct net_device *dev,
const char *phy_id, u32 flags, phy_interface_t interface)
{
@ -246,6 +266,10 @@ struct phy_device *phy_attach(struct net_device *dev,
}
EXPORT_SYMBOL(phy_attach);
/**
* phy_detach - detach a PHY device from its network device
* @phydev: target phy_device struct
*/
void phy_detach(struct phy_device *phydev)
{
phydev->attached_dev = NULL;
@ -262,11 +286,13 @@ EXPORT_SYMBOL(phy_detach);
/* Generic PHY support and helper functions */
/* genphy_config_advert
/**
* genphy_config_advert - sanitize and advertise auto-negotation parameters
* @phydev: target phy_device struct
*
* description: Writes MII_ADVERTISE with the appropriate values,
* Description: Writes MII_ADVERTISE with the appropriate values,
* after sanitizing the values to make sure we only advertise
* what is supported
* what is supported.
*/
int genphy_config_advert(struct phy_device *phydev)
{
@ -328,11 +354,14 @@ int genphy_config_advert(struct phy_device *phydev)
}
EXPORT_SYMBOL(genphy_config_advert);
/* genphy_setup_forced
/**
* genphy_setup_forced - configures/forces speed/duplex from @phydev
* @phydev: target phy_device struct
*
* description: Configures MII_BMCR to force speed/duplex
* Description: Configures MII_BMCR to force speed/duplex
* to the values in phydev. Assumes that the values are valid.
* Please see phy_sanitize_settings() */
* Please see phy_sanitize_settings().
*/
int genphy_setup_forced(struct phy_device *phydev)
{
int ctl = BMCR_RESET;
@ -361,7 +390,10 @@ int genphy_setup_forced(struct phy_device *phydev)
}
/* Enable and Restart Autonegotiation */
/**
* genphy_restart_aneg - Enable and Restart Autonegotiation
* @phydev: target phy_device struct
*/
int genphy_restart_aneg(struct phy_device *phydev)
{
int ctl;
@ -382,11 +414,13 @@ int genphy_restart_aneg(struct phy_device *phydev)
}
/* genphy_config_aneg
/**
* genphy_config_aneg - restart auto-negotiation or write BMCR
* @phydev: target phy_device struct
*
* description: If auto-negotiation is enabled, we configure the
* Description: If auto-negotiation is enabled, we configure the
* advertising, and then restart auto-negotiation. If it is not
* enabled, then we write the BMCR
* enabled, then we write the BMCR.
*/
int genphy_config_aneg(struct phy_device *phydev)
{
@ -406,11 +440,13 @@ int genphy_config_aneg(struct phy_device *phydev)
}
EXPORT_SYMBOL(genphy_config_aneg);
/* genphy_update_link
/**
* genphy_update_link - update link status in @phydev
* @phydev: target phy_device struct
*
* description: Update the value in phydev->link to reflect the
* Description: Update the value in phydev->link to reflect the
* current link value. In order to do this, we need to read
* the status register twice, keeping the second value
* the status register twice, keeping the second value.
*/
int genphy_update_link(struct phy_device *phydev)
{
@ -437,9 +473,11 @@ int genphy_update_link(struct phy_device *phydev)
}
EXPORT_SYMBOL(genphy_update_link);
/* genphy_read_status
/**
* genphy_read_status - check the link status and update current link state
* @phydev: target phy_device struct
*
* description: Check the link, then figure out the current state
* Description: Check the link, then figure out the current state
* by comparing what we advertise with what the link partner
* advertises. Start by checking the gigabit possibilities,
* then move on to 10/100.
@ -579,9 +617,11 @@ static int genphy_config_init(struct phy_device *phydev)
}
/* phy_probe
/**
* phy_probe - probe and init a PHY device
* @dev: device to probe and init
*
* description: Take care of setting up the phy_device structure,
* Description: Take care of setting up the phy_device structure,
* set the state to READY (the driver's init function should
* set it to STARTING if needed).
*/
@ -643,6 +683,10 @@ static int phy_remove(struct device *dev)
return 0;
}
/**
* phy_driver_register - register a phy_driver with the PHY layer
* @new_driver: new phy_driver to register
*/
int phy_driver_register(struct phy_driver *new_driver)
{
int retval;

@ -39,7 +39,7 @@
#define DRV_NAME "qla3xxx"
#define DRV_STRING "QLogic ISP3XXX Network Driver"
#define DRV_VERSION "v2.03.00-k3"
#define DRV_VERSION "v2.03.00-k4"
#define PFX DRV_NAME " "
static const char ql3xxx_driver_name[] = DRV_NAME;
@ -71,6 +71,30 @@ static struct pci_device_id ql3xxx_pci_tbl[] __devinitdata = {
MODULE_DEVICE_TABLE(pci, ql3xxx_pci_tbl);
/*
* These are the known PHY's which are used
*/
typedef enum {
PHY_TYPE_UNKNOWN = 0,
PHY_VITESSE_VSC8211,
PHY_AGERE_ET1011C,
MAX_PHY_DEV_TYPES
} PHY_DEVICE_et;
typedef struct {
PHY_DEVICE_et phyDevice;
u32 phyIdOUI;
u16 phyIdModel;
char *name;
} PHY_DEVICE_INFO_t;
static const PHY_DEVICE_INFO_t PHY_DEVICES[] =
{{PHY_TYPE_UNKNOWN, 0x000000, 0x0, "PHY_TYPE_UNKNOWN"},
{PHY_VITESSE_VSC8211, 0x0003f1, 0xb, "PHY_VITESSE_VSC8211"},
{PHY_AGERE_ET1011C, 0x00a0bc, 0x1, "PHY_AGERE_ET1011C"},
};
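Standalone sketch of how a table like PHY_DEVICES above is scanned once the OUI and model have been pulled out of the two PHY ID registers (getPhyType() further down does the extraction); the values mirror the table entries:

#include <stdio.h>
#include <stdint.h>

struct phy_entry {
        uint32_t oui;
        uint16_t model;
        const char *name;
};

static const struct phy_entry phys[] = {
        { 0x000000, 0x0, "PHY_TYPE_UNKNOWN"    },
        { 0x0003f1, 0xb, "PHY_VITESSE_VSC8211" },
        { 0x00a0bc, 0x1, "PHY_AGERE_ET1011C"   },
};

int main(void)
{
        uint32_t oui = 0x00a0bc;        /* as decoded from the ID registers */
        uint16_t model = 0x1;
        size_t i;

        for (i = 0; i < sizeof(phys) / sizeof(phys[0]); i++)
                if (phys[i].oui == oui && phys[i].model == model)
                        printf("matched %s\n", phys[i].name);
        return 0;
}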
/*
* Caller must take hw_lock.
*/
@ -662,7 +686,7 @@ static u8 ql_mii_disable_scan_mode(struct ql3_adapter *qdev)
}
static int ql_mii_write_reg_ex(struct ql3_adapter *qdev,
u16 regAddr, u16 value, u32 mac_index)
u16 regAddr, u16 value, u32 phyAddr)
{
struct ql3xxx_port_registers __iomem *port_regs =
qdev->mem_map_registers;
@ -680,7 +704,7 @@ static int ql_mii_write_reg_ex(struct ql3_adapter *qdev,
}
ql_write_page0_reg(qdev, &port_regs->macMIIMgmtAddrReg,
PHYAddr[mac_index] | regAddr);
phyAddr | regAddr);
ql_write_page0_reg(qdev, &port_regs->macMIIMgmtDataReg, value);
@ -701,7 +725,7 @@ static int ql_mii_write_reg_ex(struct ql3_adapter *qdev,
}
static int ql_mii_read_reg_ex(struct ql3_adapter *qdev, u16 regAddr,
u16 * value, u32 mac_index)
u16 * value, u32 phyAddr)
{
struct ql3xxx_port_registers __iomem *port_regs =
qdev->mem_map_registers;
@ -720,7 +744,7 @@ static int ql_mii_read_reg_ex(struct ql3_adapter *qdev, u16 regAddr,
}
ql_write_page0_reg(qdev, &port_regs->macMIIMgmtAddrReg,
PHYAddr[mac_index] | regAddr);
phyAddr | regAddr);
ql_write_page0_reg(qdev, &port_regs->macMIIMgmtControlReg,
(MAC_MII_CONTROL_RC << 16));
@ -850,28 +874,31 @@ static void ql_petbi_start_neg(struct ql3_adapter *qdev)
}
static void ql_petbi_reset_ex(struct ql3_adapter *qdev, u32 mac_index)
static void ql_petbi_reset_ex(struct ql3_adapter *qdev)
{
ql_mii_write_reg_ex(qdev, PETBI_CONTROL_REG, PETBI_CTRL_SOFT_RESET,
mac_index);
PHYAddr[qdev->mac_index]);
}
static void ql_petbi_start_neg_ex(struct ql3_adapter *qdev, u32 mac_index)
static void ql_petbi_start_neg_ex(struct ql3_adapter *qdev)
{
u16 reg;
/* Enable Auto-negotiation sense */
ql_mii_read_reg_ex(qdev, PETBI_TBI_CTRL, &reg, mac_index);
ql_mii_read_reg_ex(qdev, PETBI_TBI_CTRL, &reg,
PHYAddr[qdev->mac_index]);
reg |= PETBI_TBI_AUTO_SENSE;
ql_mii_write_reg_ex(qdev, PETBI_TBI_CTRL, reg, mac_index);
ql_mii_write_reg_ex(qdev, PETBI_TBI_CTRL, reg,
PHYAddr[qdev->mac_index]);
ql_mii_write_reg_ex(qdev, PETBI_NEG_ADVER,
PETBI_NEG_PAUSE | PETBI_NEG_DUPLEX, mac_index);
PETBI_NEG_PAUSE | PETBI_NEG_DUPLEX,
PHYAddr[qdev->mac_index]);
ql_mii_write_reg_ex(qdev, PETBI_CONTROL_REG,
PETBI_CTRL_AUTO_NEG | PETBI_CTRL_RESTART_NEG |
PETBI_CTRL_FULL_DUPLEX | PETBI_CTRL_SPEED_1000,
mac_index);
PHYAddr[qdev->mac_index]);
}
static void ql_petbi_init(struct ql3_adapter *qdev)
@ -880,10 +907,10 @@ static void ql_petbi_init(struct ql3_adapter *qdev)
ql_petbi_start_neg(qdev);
}
static void ql_petbi_init_ex(struct ql3_adapter *qdev, u32 mac_index)
static void ql_petbi_init_ex(struct ql3_adapter *qdev)
{
ql_petbi_reset_ex(qdev, mac_index);
ql_petbi_start_neg_ex(qdev, mac_index);
ql_petbi_reset_ex(qdev);
ql_petbi_start_neg_ex(qdev);
}
static int ql_is_petbi_neg_pause(struct ql3_adapter *qdev)
@ -896,33 +923,128 @@ static int ql_is_petbi_neg_pause(struct ql3_adapter *qdev)
return (reg & PETBI_NEG_PAUSE_MASK) == PETBI_NEG_PAUSE;
}
static void phyAgereSpecificInit(struct ql3_adapter *qdev, u32 miiAddr)
{
printk(KERN_INFO "%s: enabling Agere specific PHY\n", qdev->ndev->name);
/* power down device bit 11 = 1 */
ql_mii_write_reg_ex(qdev, 0x00, 0x1940, miiAddr);
/* enable diagnostic mode bit 2 = 1 */
ql_mii_write_reg_ex(qdev, 0x12, 0x840e, miiAddr);
/* 1000MB amplitude adjust (see Agere errata) */
ql_mii_write_reg_ex(qdev, 0x10, 0x8805, miiAddr);
/* 1000MB amplitude adjust (see Agere errata) */
ql_mii_write_reg_ex(qdev, 0x11, 0xf03e, miiAddr);
/* 100MB amplitude adjust (see Agere errata) */
ql_mii_write_reg_ex(qdev, 0x10, 0x8806, miiAddr);
/* 100MB amplitude adjust (see Agere errata) */
ql_mii_write_reg_ex(qdev, 0x11, 0x003e, miiAddr);
/* 10MB amplitude adjust (see Agere errata) */
ql_mii_write_reg_ex(qdev, 0x10, 0x8807, miiAddr);
/* 10MB amplitude adjust (see Agere errata) */
ql_mii_write_reg_ex(qdev, 0x11, 0x1f00, miiAddr);
/* point to hidden reg 0x2806 */
ql_mii_write_reg_ex(qdev, 0x10, 0x2806, miiAddr);
/* Write new PHYAD w/bit 5 set */
ql_mii_write_reg_ex(qdev, 0x11, 0x0020 | (PHYAddr[qdev->mac_index] >> 8), miiAddr);
/*
* Disable diagnostic mode bit 2 = 0
* Power up device bit 11 = 0
* Link up (on) and activity (blink)
*/
ql_mii_write_reg(qdev, 0x12, 0x840a);
ql_mii_write_reg(qdev, 0x00, 0x1140);
ql_mii_write_reg(qdev, 0x1c, 0xfaf0);
}
static PHY_DEVICE_et getPhyType (struct ql3_adapter *qdev,
u16 phyIdReg0, u16 phyIdReg1)
{
PHY_DEVICE_et result = PHY_TYPE_UNKNOWN;
u32 oui;
u16 model;
int i;
if (phyIdReg0 == 0xffff) {
return result;
}
if (phyIdReg1 == 0xffff) {
return result;
}
/* oui is split between two registers */
oui = (phyIdReg0 << 6) | ((phyIdReg1 & PHY_OUI_1_MASK) >> 10);
model = (phyIdReg1 & PHY_MODEL_MASK) >> 4;
/* Scan table for this PHY */
for(i = 0; i < MAX_PHY_DEV_TYPES; i++) {
if ((oui == PHY_DEVICES[i].phyIdOUI) && (model == PHY_DEVICES[i].phyIdModel))
{
result = PHY_DEVICES[i].phyDevice;
printk(KERN_INFO "%s: Phy: %s\n",
qdev->ndev->name, PHY_DEVICES[i].name);
break;
}
}
return result;
}
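
For illustration, the OUI/model split used by getPhyType() can be exercised in isolation. The register values in the sketch below are hypothetical (chosen by inverting the mask arithmetic so the decode lands on the Vitesse VSC8211 entry, OUI 0x0003f1 / model 0xb); they are not values read from hardware and are not part of the patch.

#include <stdio.h>

#define PHY_OUI_1_MASK  0xfc00
#define PHY_MODEL_MASK  0x03f0

int main(void)
{
	unsigned short reg2 = 0x000f;	/* hypothetical PHY_ID_0_REG value */
	unsigned short reg3 = 0xc4b2;	/* hypothetical PHY_ID_1_REG value (revision 2) */
	unsigned int oui, model;

	/* OUI is split between the two ID registers; the model sits in bits 9:4 */
	oui   = (reg2 << 6) | ((reg3 & PHY_OUI_1_MASK) >> 10);
	model = (reg3 & PHY_MODEL_MASK) >> 4;

	printf("oui=0x%06x model=0x%x\n", oui, model);	/* prints oui=0x0003f1 model=0xb */
	return 0;
}
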
static int ql_phy_get_speed(struct ql3_adapter *qdev)
{
u16 reg;
switch(qdev->phyType) {
case PHY_AGERE_ET1011C:
{
if (ql_mii_read_reg(qdev, 0x1A, &reg) < 0)
return 0;
reg = (reg >> 8) & 3;
break;
}
default:
if (ql_mii_read_reg(qdev, AUX_CONTROL_STATUS, &reg) < 0)
return 0;
reg = (((reg & 0x18) >> 3) & 3);
}
if (reg == 2)
switch(reg) {
case 2:
return SPEED_1000;
else if (reg == 1)
case 1:
return SPEED_100;
else if (reg == 0)
case 0:
return SPEED_10;
else
default:
return -1;
}
}
static int ql_is_full_dup(struct ql3_adapter *qdev)
{
u16 reg;
if (ql_mii_read_reg(qdev, AUX_CONTROL_STATUS, &reg) < 0)
return 0;
return (reg & PHY_AUX_DUPLEX_STAT) != 0;
switch(qdev->phyType) {
case PHY_AGERE_ET1011C:
{
if (ql_mii_read_reg(qdev, 0x1A, &reg))
return 0;
return ((reg & 0x0080) && (reg & 0x1000)) != 0;
}
case PHY_VITESSE_VSC8211:
default:
{
if (ql_mii_read_reg(qdev, AUX_CONTROL_STATUS, &reg) < 0)
return 0;
return (reg & PHY_AUX_DUPLEX_STAT) != 0;
}
}
}
static int ql_is_phy_neg_pause(struct ql3_adapter *qdev)
@ -935,6 +1057,73 @@ static int ql_is_phy_neg_pause(struct ql3_adapter *qdev)
return (reg & PHY_NEG_PAUSE) != 0;
}
static int PHY_Setup(struct ql3_adapter *qdev)
{
u16 reg1;
u16 reg2;
bool agereAddrChangeNeeded = false;
u32 miiAddr = 0;
int err;
/* Determine the PHY we are using by reading the ID's */
err = ql_mii_read_reg(qdev, PHY_ID_0_REG, &reg1);
if(err != 0) {
printk(KERN_ERR "%s: Could not read from reg PHY_ID_0_REG\n",
qdev->ndev->name);
return err;
}
err = ql_mii_read_reg(qdev, PHY_ID_1_REG, &reg2);
if(err != 0) {
printk(KERN_ERR "%s: Could not read from reg PHY_ID_0_REG\n",
qdev->ndev->name);
return err;
}
/* Check if we have an Agere PHY */
if ((reg1 == 0xffff) || (reg2 == 0xffff)) {
/* Determine which MII address we should be using,
based on the index of the card */
if (qdev->mac_index == 0) {
miiAddr = MII_AGERE_ADDR_1;
} else {
miiAddr = MII_AGERE_ADDR_2;
}
err = ql_mii_read_reg_ex(qdev, PHY_ID_0_REG, &reg1, miiAddr);
if(err != 0) {
printk(KERN_ERR "%s: Could not read from reg PHY_ID_0_REG after Agere detected\n",
qdev->ndev->name);
return err;
}
err = ql_mii_read_reg_ex(qdev, PHY_ID_1_REG, &reg2, miiAddr);
if(err != 0) {
printk(KERN_ERR "%s: Could not read from reg PHY_ID_0_REG after Agere detected\n",
qdev->ndev->name);
return err;
}
/* We need to remember to initialize the Agere PHY */
agereAddrChangeNeeded = true;
}
/* Determine the particular PHY we have on board to apply
PHY specific initializations */
qdev->phyType = getPhyType(qdev, reg1, reg2);
if ((qdev->phyType == PHY_AGERE_ET1011C) && agereAddrChangeNeeded) {
/* need this here so address gets changed */
phyAgereSpecificInit(qdev, miiAddr);
} else if (qdev->phyType == PHY_TYPE_UNKNOWN) {
printk(KERN_ERR "%s: PHY is unknown\n", qdev->ndev->name);
return -EIO;
}
return 0;
}
/*
* Caller holds hw_lock.
*/
@ -1205,15 +1394,14 @@ static int ql_link_down_detect_clear(struct ql3_adapter *qdev)
/*
* Caller holds hw_lock.
*/
static int ql_this_adapter_controls_port(struct ql3_adapter *qdev,
u32 mac_index)
static int ql_this_adapter_controls_port(struct ql3_adapter *qdev)
{
struct ql3xxx_port_registers __iomem *port_regs =
qdev->mem_map_registers;
u32 bitToCheck = 0;
u32 temp;
switch (mac_index) {
switch (qdev->mac_index) {
case 0:
bitToCheck = PORT_STATUS_F1_ENABLED;
break;
@ -1238,27 +1426,96 @@ static int ql_this_adapter_controls_port(struct ql3_adapter *qdev,
}
}
static void ql_phy_reset_ex(struct ql3_adapter *qdev, u32 mac_index)
static void ql_phy_reset_ex(struct ql3_adapter *qdev)
{
ql_mii_write_reg_ex(qdev, CONTROL_REG, PHY_CTRL_SOFT_RESET, mac_index);
ql_mii_write_reg_ex(qdev, CONTROL_REG, PHY_CTRL_SOFT_RESET,
PHYAddr[qdev->mac_index]);
}
static void ql_phy_start_neg_ex(struct ql3_adapter *qdev, u32 mac_index)
static void ql_phy_start_neg_ex(struct ql3_adapter *qdev)
{
u16 reg;
u16 portConfiguration;
ql_mii_write_reg_ex(qdev, PHY_NEG_ADVER,
PHY_NEG_PAUSE | PHY_NEG_ADV_SPEED | 1, mac_index);
if(qdev->phyType == PHY_AGERE_ET1011C) {
/* turn off external loopback */
ql_mii_write_reg(qdev, 0x13, 0x0000);
}
ql_mii_read_reg_ex(qdev, CONTROL_REG, &reg, mac_index);
ql_mii_write_reg_ex(qdev, CONTROL_REG, reg | PHY_CTRL_RESTART_NEG,
mac_index);
if(qdev->mac_index == 0)
portConfiguration = qdev->nvram_data.macCfg_port0.portConfiguration;
else
portConfiguration = qdev->nvram_data.macCfg_port1.portConfiguration;
/* Some HBA's in the field are set to 0 and they need to
be reinterpreted with a default value */
if(portConfiguration == 0)
portConfiguration = PORT_CONFIG_DEFAULT;
/* Set the 1000 advertisements */
ql_mii_read_reg_ex(qdev, PHY_GIG_CONTROL, &reg,
PHYAddr[qdev->mac_index]);
reg &= ~PHY_GIG_ALL_PARAMS;
if(portConfiguration &
PORT_CONFIG_FULL_DUPLEX_ENABLED &
PORT_CONFIG_1000MB_SPEED) {
reg |= PHY_GIG_ADV_1000F;
}
if(portConfiguration &
PORT_CONFIG_HALF_DUPLEX_ENABLED &
PORT_CONFIG_1000MB_SPEED) {
reg |= PHY_GIG_ADV_1000H;
}
ql_mii_write_reg_ex(qdev, PHY_GIG_CONTROL, reg,
PHYAddr[qdev->mac_index]);
/* Set the 10/100 & pause negotiation advertisements */
ql_mii_read_reg_ex(qdev, PHY_NEG_ADVER, &reg,
PHYAddr[qdev->mac_index]);
reg &= ~PHY_NEG_ALL_PARAMS;
if(portConfiguration & PORT_CONFIG_SYM_PAUSE_ENABLED)
reg |= PHY_NEG_ASY_PAUSE | PHY_NEG_SYM_PAUSE;
if(portConfiguration & PORT_CONFIG_FULL_DUPLEX_ENABLED) {
if(portConfiguration & PORT_CONFIG_100MB_SPEED)
reg |= PHY_NEG_ADV_100F;
if(portConfiguration & PORT_CONFIG_10MB_SPEED)
reg |= PHY_NEG_ADV_10F;
}
if(portConfiguration & PORT_CONFIG_HALF_DUPLEX_ENABLED) {
if(portConfiguration & PORT_CONFIG_100MB_SPEED)
reg |= PHY_NEG_ADV_100H;
if(portConfiguration & PORT_CONFIG_10MB_SPEED)
reg |= PHY_NEG_ADV_10H;
}
if(portConfiguration &
PORT_CONFIG_1000MB_SPEED) {
reg |= 1;
}
ql_mii_write_reg_ex(qdev, PHY_NEG_ADVER, reg,
PHYAddr[qdev->mac_index]);
ql_mii_read_reg_ex(qdev, CONTROL_REG, &reg, PHYAddr[qdev->mac_index]);
ql_mii_write_reg_ex(qdev, CONTROL_REG,
reg | PHY_CTRL_RESTART_NEG | PHY_CTRL_AUTO_NEG,
PHYAddr[qdev->mac_index]);
}
static void ql_phy_init_ex(struct ql3_adapter *qdev, u32 mac_index)
static void ql_phy_init_ex(struct ql3_adapter *qdev)
{
ql_phy_reset_ex(qdev, mac_index);
ql_phy_start_neg_ex(qdev, mac_index);
ql_phy_reset_ex(qdev);
PHY_Setup(qdev);
ql_phy_start_neg_ex(qdev);
}
/*
@ -1295,14 +1552,17 @@ static int ql_port_start(struct ql3_adapter *qdev)
{
if(ql_sem_spinlock(qdev, QL_PHY_GIO_SEM_MASK,
(QL_RESOURCE_BITS_BASE_CODE | (qdev->mac_index) *
2) << 7))
2) << 7)) {
printk(KERN_ERR "%s: Could not get hw lock for GIO\n",
qdev->ndev->name);
return -1;
}
if (ql_is_fiber(qdev)) {
ql_petbi_init(qdev);
} else {
/* Copper port */
ql_phy_init_ex(qdev, qdev->mac_index);
ql_phy_init_ex(qdev);
}
ql_sem_unlock(qdev, QL_PHY_GIO_SEM_MASK);
@ -1453,7 +1713,7 @@ static void ql_link_state_machine(struct ql3_adapter *qdev)
*/
static void ql_get_phy_owner(struct ql3_adapter *qdev)
{
if (ql_this_adapter_controls_port(qdev, qdev->mac_index))
if (ql_this_adapter_controls_port(qdev))
set_bit(QL_LINK_MASTER,&qdev->flags);
else
clear_bit(QL_LINK_MASTER,&qdev->flags);
@ -1467,11 +1727,11 @@ static void ql_init_scan_mode(struct ql3_adapter *qdev)
ql_mii_enable_scan_mode(qdev);
if (test_bit(QL_LINK_OPTICAL,&qdev->flags)) {
if (ql_this_adapter_controls_port(qdev, qdev->mac_index))
ql_petbi_init_ex(qdev, qdev->mac_index);
if (ql_this_adapter_controls_port(qdev))
ql_petbi_init_ex(qdev);
} else {
if (ql_this_adapter_controls_port(qdev, qdev->mac_index))
ql_phy_init_ex(qdev, qdev->mac_index);
if (ql_this_adapter_controls_port(qdev))
ql_phy_init_ex(qdev);
}
}
@ -1624,6 +1884,23 @@ static void ql_set_msglevel(struct net_device *ndev, u32 value)
qdev->msg_enable = value;
}
static void ql_get_pauseparam(struct net_device *ndev,
struct ethtool_pauseparam *pause)
{
struct ql3_adapter *qdev = netdev_priv(ndev);
struct ql3xxx_port_registers __iomem *port_regs = qdev->mem_map_registers;
u32 reg;
if(qdev->mac_index == 0)
reg = ql_read_page0_reg(qdev, &port_regs->mac0ConfigReg);
else
reg = ql_read_page0_reg(qdev, &port_regs->mac1ConfigReg);
pause->autoneg = ql_get_auto_cfg_status(qdev);
pause->rx_pause = (reg & MAC_CONFIG_REG_RF) >> 2;
pause->tx_pause = (reg & MAC_CONFIG_REG_TF) >> 1;
}
static const struct ethtool_ops ql3xxx_ethtool_ops = {
.get_settings = ql_get_settings,
.get_drvinfo = ql_get_drvinfo,
@ -1631,6 +1908,7 @@ static const struct ethtool_ops ql3xxx_ethtool_ops = {
.get_link = ethtool_op_get_link,
.get_msglevel = ql_get_msglevel,
.set_msglevel = ql_set_msglevel,
.get_pauseparam = ql_get_pauseparam,
};
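
The new .get_pauseparam hook is reached through the generic SIOCETHTOOL ioctl, the same path "ethtool -a" takes. Below is a minimal user-space sketch of that call path, assuming a hypothetical interface name of eth0 and using only the standard <linux/ethtool.h> definitions; nothing in the sketch is part of the patch itself.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
	struct ethtool_pauseparam pp;
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);	/* hypothetical interface */

	memset(&pp, 0, sizeof(pp));
	pp.cmd = ETHTOOL_GPAUSEPARAM;	/* dispatched to the driver's get_pauseparam */
	ifr.ifr_data = (char *)&pp;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_GPAUSEPARAM");
		return 1;
	}
	printf("autoneg=%u rx_pause=%u tx_pause=%u\n",
	       pp.autoneg, pp.rx_pause, pp.tx_pause);
	return 0;
}
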
static int ql_populate_free_queue(struct ql3_adapter *qdev)
@ -1815,14 +2093,14 @@ invalid_seg_count:
atomic_inc(&qdev->tx_count);
}
void ql_get_sbuf(struct ql3_adapter *qdev)
static void ql_get_sbuf(struct ql3_adapter *qdev)
{
if (++qdev->small_buf_index == NUM_SMALL_BUFFERS)
qdev->small_buf_index = 0;
qdev->small_buf_release_cnt++;
}
struct ql_rcv_buf_cb *ql_get_lbuf(struct ql3_adapter *qdev)
static struct ql_rcv_buf_cb *ql_get_lbuf(struct ql3_adapter *qdev)
{
struct ql_rcv_buf_cb *lrg_buf_cb = NULL;
lrg_buf_cb = &qdev->lrg_buf[qdev->lrg_buf_index];
@ -3074,6 +3352,7 @@ static int ql_adapter_initialize(struct ql3_adapter *qdev)
goto out;
}
PHY_Setup(qdev);
ql_init_scan_mode(qdev);
ql_get_phy_owner(qdev);

View File

@ -293,6 +293,16 @@ struct net_rsp_iocb {
#define MII_SCAN_REGISTER 0x00000001
#define PHY_ID_0_REG 2
#define PHY_ID_1_REG 3
#define PHY_OUI_1_MASK 0xfc00
#define PHY_MODEL_MASK 0x03f0
/* Address for the Agere Phy */
#define MII_AGERE_ADDR_1 0x00001000
#define MII_AGERE_ADDR_2 0x00001100
/* 32-bit ispControlStatus */
enum {
ISP_CONTROL_NP_MASK = 0x0003,
@ -789,6 +799,7 @@ enum {
PHY_CTRL_LOOPBACK = 0x4000,
PETBI_CONTROL_REG = 0x00,
PETBI_CTRL_ALL_PARAMS = 0x7140,
PETBI_CTRL_SOFT_RESET = 0x8000,
PETBI_CTRL_AUTO_NEG = 0x1000,
PETBI_CTRL_RESTART_NEG = 0x0200,
@ -811,6 +822,23 @@ enum {
PETBI_EXPANSION_REG = 0x06,
PETBI_EXP_PAGE_RX = 0x0002,
PHY_GIG_CONTROL = 9,
PHY_GIG_ENABLE_MAN = 0x1000, /* Enable Master/Slave Manual Config*/
PHY_GIG_SET_MASTER = 0x0800, /* Set Master (slave if clear)*/
PHY_GIG_ALL_PARAMS = 0x0300,
PHY_GIG_ADV_1000F = 0x0200,
PHY_GIG_ADV_1000H = 0x0100,
PHY_NEG_ADVER = 4,
PHY_NEG_ALL_PARAMS = 0x0fe0,
PHY_NEG_ASY_PAUSE = 0x0800,
PHY_NEG_SYM_PAUSE = 0x0400,
PHY_NEG_ADV_SPEED = 0x01e0,
PHY_NEG_ADV_100F = 0x0100,
PHY_NEG_ADV_100H = 0x0080,
PHY_NEG_ADV_10F = 0x0040,
PHY_NEG_ADV_10H = 0x0020,
PETBI_TBI_CTRL = 0x11,
PETBI_TBI_RESET = 0x8000,
PETBI_TBI_AUTO_SENSE = 0x0100,
@ -826,8 +854,7 @@ enum {
PHY_AUX_RESET_STICK = 0x0002,
PHY_NEG_PAUSE = 0x0400,
PHY_CTRL_SOFT_RESET = 0x8000,
PHY_NEG_ADVER = 4,
PHY_NEG_ADV_SPEED = 0x01e0,
PHY_CTRL_AUTO_NEG = 0x1000,
PHY_CTRL_RESTART_NEG = 0x0200,
};
enum {
@ -892,6 +919,7 @@ enum {EEPROM_SIZE = FM93C86A_SIZE_16,
u16 pauseThreshold_mac;
u16 resumeThreshold_mac;
u16 portConfiguration;
#define PORT_CONFIG_DEFAULT 0xf700
#define PORT_CONFIG_AUTO_NEG_ENABLED 0x8000
#define PORT_CONFIG_SYM_PAUSE_ENABLED 0x4000
#define PORT_CONFIG_FULL_DUPLEX_ENABLED 0x2000
@ -1259,6 +1287,7 @@ struct ql3_adapter {
struct delayed_work tx_timeout_work;
u32 max_frame_size;
u32 device_id;
u16 phyType;
};
#endif /* _QLA3XXX_H_ */

View File

@ -1,6 +1,6 @@
/************************************************************************
* regs.h: A Linux PCI-X Ethernet driver for Neterion 10GbE Server NIC
* Copyright(c) 2002-2005 Neterion Inc.
* Copyright(c) 2002-2007 Neterion Inc.
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.

View File

@ -1,6 +1,6 @@
/************************************************************************
* s2io.c: A Linux PCI-X Ethernet driver for Neterion 10GbE Server NIC
* Copyright(c) 2002-2005 Neterion Inc.
* Copyright(c) 2002-2007 Neterion Inc.
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
@ -84,7 +84,7 @@
#include "s2io.h"
#include "s2io-regs.h"
#define DRV_VERSION "2.0.17.1"
#define DRV_VERSION "2.0.22.1"
/* S2io Driver name & version. */
static char s2io_driver_name[] = "Neterion";
@ -316,7 +316,7 @@ static void s2io_vlan_rx_register(struct net_device *dev,
}
/* A flag indicating whether 'RX_PA_CFG_STRIP_VLAN_TAG' bit is set or not */
int vlan_strip_flag;
static int vlan_strip_flag;
/* Unregister the vlan */
static void s2io_vlan_rx_kill_vid(struct net_device *dev, unsigned long vid)
@ -394,7 +394,6 @@ static const u64 fix_mac[] = {
END_SIGN
};
MODULE_AUTHOR("Raghavendra Koushik <raghavendra.koushik@neterion.com>");
MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_VERSION);
@ -516,7 +515,7 @@ static int init_shared_mem(struct s2io_nic *nic)
mac_control->fifos[i].list_info = kmalloc(list_holder_size,
GFP_KERNEL);
if (!mac_control->fifos[i].list_info) {
DBG_PRINT(ERR_DBG,
DBG_PRINT(INFO_DBG,
"Malloc failed for list_info\n");
return -ENOMEM;
}
@ -542,9 +541,9 @@ static int init_shared_mem(struct s2io_nic *nic)
tmp_v = pci_alloc_consistent(nic->pdev,
PAGE_SIZE, &tmp_p);
if (!tmp_v) {
DBG_PRINT(ERR_DBG,
DBG_PRINT(INFO_DBG,
"pci_alloc_consistent ");
DBG_PRINT(ERR_DBG, "failed for TxDL\n");
DBG_PRINT(INFO_DBG, "failed for TxDL\n");
return -ENOMEM;
}
/* If we got a zero DMA address(can happen on
@ -561,9 +560,9 @@ static int init_shared_mem(struct s2io_nic *nic)
tmp_v = pci_alloc_consistent(nic->pdev,
PAGE_SIZE, &tmp_p);
if (!tmp_v) {
DBG_PRINT(ERR_DBG,
DBG_PRINT(INFO_DBG,
"pci_alloc_consistent ");
DBG_PRINT(ERR_DBG, "failed for TxDL\n");
DBG_PRINT(INFO_DBG, "failed for TxDL\n");
return -ENOMEM;
}
}
@ -2187,7 +2186,7 @@ static int fill_rxd_3buf(struct s2io_nic *nic, struct RxD_t *rxdp, struct \
/* skb_shinfo(skb)->frag_list will have L4 data payload */
skb_shinfo(skb)->frag_list = dev_alloc_skb(dev->mtu + ALIGN_SIZE);
if (skb_shinfo(skb)->frag_list == NULL) {
DBG_PRINT(ERR_DBG, "%s: dev_alloc_skb failed\n ", dev->name);
DBG_PRINT(INFO_DBG, "%s: dev_alloc_skb failed\n ", dev->name);
return -ENOMEM ;
}
frag_list = skb_shinfo(skb)->frag_list;
@ -2242,6 +2241,7 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
struct buffAdd *ba;
unsigned long flags;
struct RxD_t *first_rxdp = NULL;
u64 Buffer0_ptr = 0, Buffer1_ptr = 0;
mac_control = &nic->mac_control;
config = &nic->config;
@ -2313,8 +2313,8 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
/* allocate skb */
skb = dev_alloc_skb(size);
if(!skb) {
DBG_PRINT(ERR_DBG, "%s: Out of ", dev->name);
DBG_PRINT(ERR_DBG, "memory to allocate SKBs\n");
DBG_PRINT(INFO_DBG, "%s: Out of ", dev->name);
DBG_PRINT(INFO_DBG, "memory to allocate SKBs\n");
if (first_rxdp) {
wmb();
first_rxdp->Control_1 |= RXD_OWN_XENA;
@ -2342,7 +2342,14 @@ static int fill_rx_buffers(struct s2io_nic *nic, int ring_no)
* payload
*/
/* save the buffer pointers to avoid frequent dma mapping */
Buffer0_ptr = ((struct RxD3*)rxdp)->Buffer0_ptr;
Buffer1_ptr = ((struct RxD3*)rxdp)->Buffer1_ptr;
memset(rxdp, 0, sizeof(struct RxD3));
/* restore the buffer pointers for dma sync*/
((struct RxD3*)rxdp)->Buffer0_ptr = Buffer0_ptr;
((struct RxD3*)rxdp)->Buffer1_ptr = Buffer1_ptr;
ba = &mac_control->rings[ring_no].ba[block_no][off];
skb_reserve(skb, BUF0_LEN);
tmp = (u64)(unsigned long) skb->data;
@ -2573,8 +2580,8 @@ static int s2io_poll(struct net_device *dev, int *budget)
for (i = 0; i < config->rx_ring_num; i++) {
if (fill_rx_buffers(nic, i) == -ENOMEM) {
DBG_PRINT(ERR_DBG, "%s:Out of memory", dev->name);
DBG_PRINT(ERR_DBG, " in Rx Poll!!\n");
DBG_PRINT(INFO_DBG, "%s:Out of memory", dev->name);
DBG_PRINT(INFO_DBG, " in Rx Poll!!\n");
break;
}
}
@ -2590,8 +2597,8 @@ no_rx:
for (i = 0; i < config->rx_ring_num; i++) {
if (fill_rx_buffers(nic, i) == -ENOMEM) {
DBG_PRINT(ERR_DBG, "%s:Out of memory", dev->name);
DBG_PRINT(ERR_DBG, " in Rx Poll!!\n");
DBG_PRINT(INFO_DBG, "%s:Out of memory", dev->name);
DBG_PRINT(INFO_DBG, " in Rx Poll!!\n");
break;
}
}
@ -2640,8 +2647,8 @@ static void s2io_netpoll(struct net_device *dev)
for (i = 0; i < config->rx_ring_num; i++) {
if (fill_rx_buffers(nic, i) == -ENOMEM) {
DBG_PRINT(ERR_DBG, "%s:Out of memory", dev->name);
DBG_PRINT(ERR_DBG, " in Rx Netpoll!!\n");
DBG_PRINT(INFO_DBG, "%s:Out of memory", dev->name);
DBG_PRINT(INFO_DBG, " in Rx Netpoll!!\n");
break;
}
}
@ -3307,6 +3314,7 @@ static void s2io_reset(struct s2io_nic * sp)
u16 subid, pci_cmd;
int i;
u16 val16;
unsigned long long reset_cnt = 0;
DBG_PRINT(INIT_DBG,"%s - Resetting XFrame card %s\n",
__FUNCTION__, sp->dev->name);
@ -3372,6 +3380,11 @@ new_way:
/* Reset device statistics maintained by OS */
memset(&sp->stats, 0, sizeof (struct net_device_stats));
/* save reset count */
reset_cnt = sp->mac_control.stats_info->sw_stat.soft_reset_cnt;
memset(sp->mac_control.stats_info, 0, sizeof(struct stat_block));
/* restore reset count */
sp->mac_control.stats_info->sw_stat.soft_reset_cnt = reset_cnt;
/* SXE-002: Configure link and activity LED to turn it off */
subid = sp->pdev->subsystem_device;
@ -3659,7 +3672,7 @@ static int s2io_enable_msi_x(struct s2io_nic *nic)
nic->entries = kmalloc(MAX_REQUESTED_MSI_X * sizeof(struct msix_entry),
GFP_KERNEL);
if (nic->entries == NULL) {
DBG_PRINT(ERR_DBG, "%s: Memory allocation failed\n", __FUNCTION__);
DBG_PRINT(INFO_DBG, "%s: Memory allocation failed\n", __FUNCTION__);
return -ENOMEM;
}
memset(nic->entries, 0, MAX_REQUESTED_MSI_X * sizeof(struct msix_entry));
@ -3668,7 +3681,7 @@ static int s2io_enable_msi_x(struct s2io_nic *nic)
kmalloc(MAX_REQUESTED_MSI_X * sizeof(struct s2io_msix_entry),
GFP_KERNEL);
if (nic->s2io_entries == NULL) {
DBG_PRINT(ERR_DBG, "%s: Memory allocation failed\n", __FUNCTION__);
DBG_PRINT(INFO_DBG, "%s: Memory allocation failed\n", __FUNCTION__);
kfree(nic->entries);
return -ENOMEM;
}
@ -4019,7 +4032,7 @@ static int s2io_chk_rx_buffers(struct s2io_nic *sp, int rng_n)
DBG_PRINT(INTR_DBG, "%s: Rx BD hit ", __FUNCTION__);
DBG_PRINT(INTR_DBG, "PANIC levels\n");
if ((ret = fill_rx_buffers(sp, rng_n)) == -ENOMEM) {
DBG_PRINT(ERR_DBG, "Out of memory in %s",
DBG_PRINT(INFO_DBG, "Out of memory in %s",
__FUNCTION__);
clear_bit(0, (&sp->tasklet_status));
return -1;
@ -4029,8 +4042,8 @@ static int s2io_chk_rx_buffers(struct s2io_nic *sp, int rng_n)
tasklet_schedule(&sp->task);
} else if (fill_rx_buffers(sp, rng_n) == -ENOMEM) {
DBG_PRINT(ERR_DBG, "%s:Out of memory", sp->dev->name);
DBG_PRINT(ERR_DBG, " in Rx Intr!!\n");
DBG_PRINT(INFO_DBG, "%s:Out of memory", sp->dev->name);
DBG_PRINT(INFO_DBG, " in Rx Intr!!\n");
}
return 0;
}
@ -4279,9 +4292,7 @@ static void s2io_updt_stats(struct s2io_nic *sp)
if (cnt == 5)
break; /* Updt failed */
} while(1);
} else {
memset(sp->mac_control.stats_info, 0, sizeof(struct stat_block));
}
}
}
/**
@ -5949,12 +5960,12 @@ static void s2io_tasklet(unsigned long dev_addr)
for (i = 0; i < config->rx_ring_num; i++) {
ret = fill_rx_buffers(sp, i);
if (ret == -ENOMEM) {
DBG_PRINT(ERR_DBG, "%s: Out of ",
DBG_PRINT(INFO_DBG, "%s: Out of ",
dev->name);
DBG_PRINT(ERR_DBG, "memory in tasklet\n");
break;
} else if (ret == -EFILL) {
DBG_PRINT(ERR_DBG,
DBG_PRINT(INFO_DBG,
"%s: Rx Ring %d is full\n",
dev->name, i);
break;
@ -6065,8 +6076,8 @@ static int set_rxd_buffer_pointer(struct s2io_nic *sp, struct RxD_t *rxdp,
} else {
*skb = dev_alloc_skb(size);
if (!(*skb)) {
DBG_PRINT(ERR_DBG, "%s: Out of ", dev->name);
DBG_PRINT(ERR_DBG, "memory to allocate SKBs\n");
DBG_PRINT(INFO_DBG, "%s: Out of ", dev->name);
DBG_PRINT(INFO_DBG, "memory to allocate SKBs\n");
return -ENOMEM ;
}
/* storing the mapped addr in a temp variable
@ -6088,7 +6099,7 @@ static int set_rxd_buffer_pointer(struct s2io_nic *sp, struct RxD_t *rxdp,
} else {
*skb = dev_alloc_skb(size);
if (!(*skb)) {
DBG_PRINT(ERR_DBG, "%s: dev_alloc_skb failed\n",
DBG_PRINT(INFO_DBG, "%s: dev_alloc_skb failed\n",
dev->name);
return -ENOMEM;
}
@ -6115,7 +6126,7 @@ static int set_rxd_buffer_pointer(struct s2io_nic *sp, struct RxD_t *rxdp,
} else {
*skb = dev_alloc_skb(size);
if (!(*skb)) {
DBG_PRINT(ERR_DBG, "%s: dev_alloc_skb failed\n",
DBG_PRINT(INFO_DBG, "%s: dev_alloc_skb failed\n",
dev->name);
return -ENOMEM;
}
@ -6616,7 +6627,6 @@ static int rx_osm_handler(struct ring_info *ring_data, struct RxD_t * rxdp)
/* Updating statistics */
rxdp->Host_Control = 0;
sp->rx_pkt_count++;
sp->stats.rx_packets++;
if (sp->rxd_mode == RXD_MODE_1) {
int len = RXD_GET_BUFFER0_SIZE_1(rxdp->Control_2);
@ -7252,7 +7262,7 @@ s2io_init_nic(struct pci_dev *pdev, const struct pci_device_id *pre)
goto register_failed;
}
s2io_vpd_read(sp);
DBG_PRINT(ERR_DBG, "Copyright(c) 2002-2005 Neterion Inc.\n");
DBG_PRINT(ERR_DBG, "Copyright(c) 2002-2007 Neterion Inc.\n");
DBG_PRINT(ERR_DBG, "%s: Neterion %s (rev %d)\n",dev->name,
sp->product_name, get_xena_rev_id(sp->pdev));
DBG_PRINT(ERR_DBG, "%s: Driver version %s\n", dev->name,

View File

@ -1,6 +1,6 @@
/************************************************************************
* s2io.h: A Linux PCI-X Ethernet driver for Neterion 10GbE Server NIC
* Copyright(c) 2002-2005 Neterion Inc.
* Copyright(c) 2002-2007 Neterion Inc.
* This software may be used and distributed according to the terms of
* the GNU General Public License (GPL), incorporated herein by reference.
@ -760,7 +760,6 @@ struct s2io_nic {
#define MAX_SUPPORTED_MULTICASTS MAX_MAC_SUPPORTED
struct mac_addr def_mac_addr[MAX_MAC_SUPPORTED];
struct mac_addr pre_mac_addr[MAX_MAC_SUPPORTED];
struct net_device_stats stats;
int high_dma_flag;
@ -794,11 +793,6 @@ struct s2io_nic {
u16 all_multi_pos;
u16 promisc_flg;
u16 tx_pkt_count;
u16 rx_pkt_count;
u16 tx_err_count;
u16 rx_err_count;
/* Id timer, used to blink NIC to physically identify NIC. */
struct timer_list id_timer;

View File

@ -95,19 +95,28 @@ MODULE_PARM_DESC(full_duplex, "1-" __MODULE_STRING(MAX_UNITS));
#endif
#ifdef CONFIG_SBMAC_COALESCE
static int int_pktcnt = 0;
module_param(int_pktcnt, int, S_IRUGO);
MODULE_PARM_DESC(int_pktcnt, "Packet count");
static int int_pktcnt_tx = 255;
module_param(int_pktcnt_tx, int, S_IRUGO);
MODULE_PARM_DESC(int_pktcnt_tx, "TX packet count");
static int int_timeout = 0;
module_param(int_timeout, int, S_IRUGO);
MODULE_PARM_DESC(int_timeout, "Timeout value");
static int int_timeout_tx = 255;
module_param(int_timeout_tx, int, S_IRUGO);
MODULE_PARM_DESC(int_timeout_tx, "TX timeout value");
static int int_pktcnt_rx = 64;
module_param(int_pktcnt_rx, int, S_IRUGO);
MODULE_PARM_DESC(int_pktcnt_rx, "RX packet count");
static int int_timeout_rx = 64;
module_param(int_timeout_rx, int, S_IRUGO);
MODULE_PARM_DESC(int_timeout_rx, "RX timeout value");
#endif
#include <asm/sibyte/sb1250.h>
#if defined(CONFIG_SIBYTE_BCM1x55) || defined(CONFIG_SIBYTE_BCM1x80)
#include <asm/sibyte/bcm1480_regs.h>
#include <asm/sibyte/bcm1480_int.h>
#define R_MAC_DMA_OODPKTLOST_RX R_MAC_DMA_OODPKTLOST
#elif defined(CONFIG_SIBYTE_SB1250) || defined(CONFIG_SIBYTE_BCM112X)
#include <asm/sibyte/sb1250_regs.h>
#include <asm/sibyte/sb1250_int.h>
@ -155,8 +164,8 @@ typedef enum { sbmac_state_uninit, sbmac_state_off, sbmac_state_on,
#define NUMCACHEBLKS(x) (((x)+SMP_CACHE_BYTES-1)/SMP_CACHE_BYTES)
#define SBMAC_MAX_TXDESCR 32
#define SBMAC_MAX_RXDESCR 32
#define SBMAC_MAX_TXDESCR 256
#define SBMAC_MAX_RXDESCR 256
#define ETHER_ALIGN 2
#define ETHER_ADDR_LEN 6
@ -185,10 +194,10 @@ typedef struct sbmacdma_s {
* associated with it.
*/
struct sbmac_softc *sbdma_eth; /* back pointer to associated MAC */
int sbdma_channel; /* channel number */
struct sbmac_softc *sbdma_eth; /* back pointer to associated MAC */
int sbdma_channel; /* channel number */
int sbdma_txdir; /* direction (1=transmit) */
int sbdma_maxdescr; /* total # of descriptors in ring */
int sbdma_maxdescr; /* total # of descriptors in ring */
#ifdef CONFIG_SBMAC_COALESCE
int sbdma_int_pktcnt; /* # descriptors rx/tx before interrupt*/
int sbdma_int_timeout; /* # usec rx/tx interrupt */
@ -197,13 +206,16 @@ typedef struct sbmacdma_s {
volatile void __iomem *sbdma_config0; /* DMA config register 0 */
volatile void __iomem *sbdma_config1; /* DMA config register 1 */
volatile void __iomem *sbdma_dscrbase; /* Descriptor base address */
volatile void __iomem *sbdma_dscrcnt; /* Descriptor count register */
volatile void __iomem *sbdma_dscrcnt; /* Descriptor count register */
volatile void __iomem *sbdma_curdscr; /* current descriptor address */
volatile void __iomem *sbdma_oodpktlost;/* pkt drop (rx only) */
/*
* This stuff is for maintenance of the ring
*/
sbdmadscr_t *sbdma_dscrtable_unaligned;
sbdmadscr_t *sbdma_dscrtable; /* base of descriptor table */
sbdmadscr_t *sbdma_dscrtable_end; /* end of descriptor table */
@ -286,8 +298,8 @@ static int sbdma_add_rcvbuffer(sbmacdma_t *d,struct sk_buff *m);
static int sbdma_add_txbuffer(sbmacdma_t *d,struct sk_buff *m);
static void sbdma_emptyring(sbmacdma_t *d);
static void sbdma_fillring(sbmacdma_t *d);
static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d);
static void sbdma_tx_process(struct sbmac_softc *sc,sbmacdma_t *d);
static int sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d, int work_to_do, int poll);
static void sbdma_tx_process(struct sbmac_softc *sc,sbmacdma_t *d, int poll);
static int sbmac_initctx(struct sbmac_softc *s);
static void sbmac_channel_start(struct sbmac_softc *s);
static void sbmac_channel_stop(struct sbmac_softc *s);
@ -308,6 +320,8 @@ static struct net_device_stats *sbmac_get_stats(struct net_device *dev);
static void sbmac_set_rx_mode(struct net_device *dev);
static int sbmac_mii_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
static int sbmac_close(struct net_device *dev);
static int sbmac_poll(struct net_device *poll_dev, int *budget);
static int sbmac_mii_poll(struct sbmac_softc *s,int noisy);
static int sbmac_mii_probe(struct net_device *dev);
@ -679,6 +693,10 @@ static void sbdma_initctx(sbmacdma_t *d,
int txrx,
int maxdescr)
{
#ifdef CONFIG_SBMAC_COALESCE
int int_pktcnt, int_timeout;
#endif
/*
* Save away interesting stuff in the structure
*/
@ -728,6 +746,11 @@ static void sbdma_initctx(sbmacdma_t *d,
s->sbm_base + R_MAC_DMA_REGISTER(txrx,chan,R_MAC_DMA_DSCR_CNT);
d->sbdma_curdscr =
s->sbm_base + R_MAC_DMA_REGISTER(txrx,chan,R_MAC_DMA_CUR_DSCRADDR);
if (d->sbdma_txdir)
d->sbdma_oodpktlost = NULL;
else
d->sbdma_oodpktlost =
s->sbm_base + R_MAC_DMA_REGISTER(txrx,chan,R_MAC_DMA_OODPKTLOST_RX);
/*
* Allocate memory for the ring
@ -735,6 +758,7 @@ static void sbdma_initctx(sbmacdma_t *d,
d->sbdma_maxdescr = maxdescr;
d->sbdma_dscrtable_unaligned =
d->sbdma_dscrtable = (sbdmadscr_t *)
kmalloc((d->sbdma_maxdescr+1)*sizeof(sbdmadscr_t), GFP_KERNEL);
@ -765,12 +789,14 @@ static void sbdma_initctx(sbmacdma_t *d,
* Setup Rx/Tx DMA coalescing defaults
*/
int_pktcnt = (txrx == DMA_TX) ? int_pktcnt_tx : int_pktcnt_rx;
if ( int_pktcnt ) {
d->sbdma_int_pktcnt = int_pktcnt;
} else {
d->sbdma_int_pktcnt = 1;
}
int_timeout = (txrx == DMA_TX) ? int_timeout_tx : int_timeout_rx;
if ( int_timeout ) {
d->sbdma_int_timeout = int_timeout;
} else {
@ -1125,32 +1151,63 @@ static void sbdma_fillring(sbmacdma_t *d)
}
}
#ifdef CONFIG_NET_POLL_CONTROLLER
static void sbmac_netpoll(struct net_device *netdev)
{
struct sbmac_softc *sc = netdev_priv(netdev);
int irq = sc->sbm_dev->irq;
__raw_writeq(0, sc->sbm_imr);
sbmac_intr(irq, netdev, NULL);
#ifdef CONFIG_SBMAC_COALESCE
__raw_writeq(((M_MAC_INT_EOP_COUNT | M_MAC_INT_EOP_TIMER) << S_MAC_TX_CH0) |
((M_MAC_INT_EOP_COUNT | M_MAC_INT_EOP_TIMER) << S_MAC_RX_CH0),
sc->sbm_imr);
#else
__raw_writeq((M_MAC_INT_CHANNEL << S_MAC_TX_CH0) |
(M_MAC_INT_CHANNEL << S_MAC_RX_CH0), sc->sbm_imr);
#endif
}
#endif
/**********************************************************************
* SBDMA_RX_PROCESS(sc,d)
* SBDMA_RX_PROCESS(sc,d,work_to_do,poll)
*
* Process "completed" receive buffers on the specified DMA channel.
* Note that this isn't really ideal for priority channels, since
* it processes all of the packets on a given channel before
* returning.
*
* Input parameters:
* sc - softc structure
* d - DMA channel context
* sc - softc structure
* d - DMA channel context
* work_to_do - no. of packets to process before enabling interrupt
* again (for NAPI)
* poll - 1: using polling (for NAPI)
*
* Return value:
* nothing
********************************************************************* */
static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
static int sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d,
int work_to_do, int poll)
{
int curidx;
int hwidx;
sbdmadscr_t *dsc;
struct sk_buff *sb;
int len;
int work_done = 0;
int dropped = 0;
for (;;) {
prefetch(d);
again:
/* Check if the HW dropped any frames */
sc->sbm_stats.rx_fifo_errors
+= __raw_readq(sc->sbm_rxdma.sbdma_oodpktlost) & 0xffff;
__raw_writeq(0, sc->sbm_rxdma.sbdma_oodpktlost);
while (work_to_do-- > 0) {
/*
* figure out where we are (as an index) and where
* the hardware is (also as an index)
@ -1162,7 +1219,12 @@ static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
* (sbdma_remptr) and the physical address (sbdma_curdscr CSR)
*/
curidx = d->sbdma_remptr - d->sbdma_dscrtable;
dsc = d->sbdma_remptr;
curidx = dsc - d->sbdma_dscrtable;
prefetch(dsc);
prefetch(&d->sbdma_ctxtable[curidx]);
hwidx = (int) (((__raw_readq(d->sbdma_curdscr) & M_DMA_CURDSCR_ADDR) -
d->sbdma_dscrtable_phys) / sizeof(sbdmadscr_t));
@ -1173,13 +1235,12 @@ static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
*/
if (curidx == hwidx)
break;
goto done;
/*
* Otherwise, get the packet's sk_buff ptr back
*/
dsc = &(d->sbdma_dscrtable[curidx]);
sb = d->sbdma_ctxtable[curidx];
d->sbdma_ctxtable[curidx] = NULL;
@ -1191,7 +1252,7 @@ static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
* receive ring.
*/
if (!(dsc->dscr_a & M_DMA_ETHRX_BAD)) {
if (likely (!(dsc->dscr_a & M_DMA_ETHRX_BAD))) {
/*
* Add a new buffer to replace the old one. If we fail
@ -1199,9 +1260,14 @@ static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
* packet and put it right back on the receive ring.
*/
if (sbdma_add_rcvbuffer(d,NULL) == -ENOBUFS) {
sc->sbm_stats.rx_dropped++;
if (unlikely (sbdma_add_rcvbuffer(d,NULL) ==
-ENOBUFS)) {
sc->sbm_stats.rx_dropped++;
sbdma_add_rcvbuffer(d,sb); /* re-add old buffer */
/* No point in continuing at the moment */
printk(KERN_ERR "dropped packet (1)\n");
d->sbdma_remptr = SBDMA_NEXTBUF(d,sbdma_remptr);
goto done;
} else {
/*
* Set length into the packet
@ -1213,8 +1279,6 @@ static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
* receive ring. Pass the buffer to
* the kernel
*/
sc->sbm_stats.rx_bytes += len;
sc->sbm_stats.rx_packets++;
sb->protocol = eth_type_trans(sb,d->sbdma_eth->sbm_dev);
/* Check hw IPv4/TCP checksum if supported */
if (sc->rx_hw_checksum == ENABLE) {
@ -1226,8 +1290,22 @@ static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
sb->ip_summed = CHECKSUM_NONE;
}
}
prefetch(sb->data);
prefetch((const void *)(((char *)sb->data)+32));
if (poll)
dropped = netif_receive_skb(sb);
else
dropped = netif_rx(sb);
netif_rx(sb);
if (dropped == NET_RX_DROP) {
sc->sbm_stats.rx_dropped++;
d->sbdma_remptr = SBDMA_NEXTBUF(d,sbdma_remptr);
goto done;
}
else {
sc->sbm_stats.rx_bytes += len;
sc->sbm_stats.rx_packets++;
}
}
} else {
/*
@ -1244,12 +1322,16 @@ static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
*/
d->sbdma_remptr = SBDMA_NEXTBUF(d,sbdma_remptr);
work_done++;
}
if (!poll) {
work_to_do = 32;
goto again; /* collect fifo drop statistics again */
}
done:
return work_done;
}
/**********************************************************************
* SBDMA_TX_PROCESS(sc,d)
*
@ -1261,22 +1343,30 @@ static void sbdma_rx_process(struct sbmac_softc *sc,sbmacdma_t *d)
*
* Input parameters:
* sc - softc structure
* d - DMA channel context
* d - DMA channel context
* poll - 1: using polling (for NAPI)
*
* Return value:
* nothing
********************************************************************* */
static void sbdma_tx_process(struct sbmac_softc *sc,sbmacdma_t *d)
static void sbdma_tx_process(struct sbmac_softc *sc,sbmacdma_t *d, int poll)
{
int curidx;
int hwidx;
sbdmadscr_t *dsc;
struct sk_buff *sb;
unsigned long flags;
int packets_handled = 0;
spin_lock_irqsave(&(sc->sbm_lock), flags);
if (d->sbdma_remptr == d->sbdma_addptr)
goto end_unlock;
hwidx = (int) (((__raw_readq(d->sbdma_curdscr) & M_DMA_CURDSCR_ADDR) -
d->sbdma_dscrtable_phys) / sizeof(sbdmadscr_t));
for (;;) {
/*
* figure out where we are (as an index) and where
@ -1290,8 +1380,6 @@ static void sbdma_tx_process(struct sbmac_softc *sc,sbmacdma_t *d)
*/
curidx = d->sbdma_remptr - d->sbdma_dscrtable;
hwidx = (int) (((__raw_readq(d->sbdma_curdscr) & M_DMA_CURDSCR_ADDR) -
d->sbdma_dscrtable_phys) / sizeof(sbdmadscr_t));
/*
* If they're the same, that means we've processed all
@ -1329,6 +1417,8 @@ static void sbdma_tx_process(struct sbmac_softc *sc,sbmacdma_t *d)
d->sbdma_remptr = SBDMA_NEXTBUF(d,sbdma_remptr);
packets_handled++;
}
/*
@ -1337,8 +1427,10 @@ static void sbdma_tx_process(struct sbmac_softc *sc,sbmacdma_t *d)
* watermark on the transmit queue.
*/
netif_wake_queue(d->sbdma_eth->sbm_dev);
if (packets_handled)
netif_wake_queue(d->sbdma_eth->sbm_dev);
end_unlock:
spin_unlock_irqrestore(&(sc->sbm_lock), flags);
}
@ -1412,9 +1504,9 @@ static int sbmac_initctx(struct sbmac_softc *s)
static void sbdma_uninitctx(struct sbmacdma_s *d)
{
if (d->sbdma_dscrtable) {
kfree(d->sbdma_dscrtable);
d->sbdma_dscrtable = NULL;
if (d->sbdma_dscrtable_unaligned) {
kfree(d->sbdma_dscrtable_unaligned);
d->sbdma_dscrtable_unaligned = d->sbdma_dscrtable = NULL;
}
if (d->sbdma_ctxtable) {
@ -1612,15 +1704,9 @@ static void sbmac_channel_start(struct sbmac_softc *s)
#endif
#ifdef CONFIG_SBMAC_COALESCE
/*
* Accept any TX interrupt and EOP count/timer RX interrupts on ch 0
*/
__raw_writeq(((M_MAC_INT_EOP_COUNT | M_MAC_INT_EOP_TIMER) << S_MAC_TX_CH0) |
((M_MAC_INT_EOP_COUNT | M_MAC_INT_EOP_TIMER) << S_MAC_RX_CH0), s->sbm_imr);
#else
/*
* Accept any kind of interrupt on TX and RX DMA channel 0
*/
__raw_writeq((M_MAC_INT_CHANNEL << S_MAC_TX_CH0) |
(M_MAC_INT_CHANNEL << S_MAC_RX_CH0), s->sbm_imr);
#endif
@ -2053,57 +2139,46 @@ static irqreturn_t sbmac_intr(int irq,void *dev_instance)
uint64_t isr;
int handled = 0;
for (;;) {
/*
* Read the ISR (this clears the bits in the real
* register, except for counter addr)
*/
/*
* Read the ISR (this clears the bits in the real
* register, except for counter addr)
*/
isr = __raw_readq(sc->sbm_isr) & ~M_MAC_COUNTER_ADDR;
isr = __raw_readq(sc->sbm_isr) & ~M_MAC_COUNTER_ADDR;
if (isr == 0)
return IRQ_RETVAL(0);
handled = 1;
if (isr == 0)
break;
/*
* Transmits on channel 0
*/
handled = 1;
/*
* Transmits on channel 0
*/
if (isr & (M_MAC_INT_CHANNEL << S_MAC_TX_CH0)) {
sbdma_tx_process(sc,&(sc->sbm_txdma));
if (isr & (M_MAC_INT_CHANNEL << S_MAC_TX_CH0)) {
sbdma_tx_process(sc,&(sc->sbm_txdma), 0);
#ifdef CONFIG_NETPOLL_TRAP
if (netpoll_trap()) {
if (test_and_clear_bit(__LINK_STATE_XOFF, &dev->state))
__netif_schedule(dev);
}
#endif
}
/*
* Receives on channel 0
*/
/*
* It's important to test all the bits (or at least the
* EOP_SEEN bit) when deciding to do the RX process
* particularly when coalescing, to make sure we
* take care of the following:
*
* If you have some packets waiting (have been received
* but no interrupt) and get a TX interrupt before
* the RX timer or counter expires, reading the ISR
* above will clear the timer and counter, and you
* won't get another interrupt until a packet shows
* up to start the timer again. Testing
* EOP_SEEN here takes care of this case.
* (EOP_SEEN is part of M_MAC_INT_CHANNEL << S_MAC_RX_CH0)
*/
if (isr & (M_MAC_INT_CHANNEL << S_MAC_RX_CH0)) {
sbdma_rx_process(sc,&(sc->sbm_rxdma));
if (isr & (M_MAC_INT_CHANNEL << S_MAC_RX_CH0)) {
if (netif_rx_schedule_prep(dev)) {
__raw_writeq(0, sc->sbm_imr);
__netif_rx_schedule(dev);
/* Depend on the exit from poll to reenable intr */
}
else {
/* may leave some packets behind */
sbdma_rx_process(sc,&(sc->sbm_rxdma),
SBMAC_MAX_RXDESCR * 2, 0);
}
}
return IRQ_RETVAL(handled);
}
/**********************************************************************
* SBMAC_START_TX(skb,dev)
*
@ -2233,8 +2308,6 @@ static void sbmac_setmulti(struct sbmac_softc *sc)
}
}
#if defined(SBMAC_ETH0_HWADDR) || defined(SBMAC_ETH1_HWADDR) || defined(SBMAC_ETH2_HWADDR) || defined(SBMAC_ETH3_HWADDR)
/**********************************************************************
* SBMAC_PARSE_XDIGIT(str)
@ -2397,8 +2470,13 @@ static int sbmac_init(struct net_device *dev, int idx)
dev->do_ioctl = sbmac_mii_ioctl;
dev->tx_timeout = sbmac_tx_timeout;
dev->watchdog_timeo = TX_TIMEOUT;
dev->poll = sbmac_poll;
dev->weight = 16;
dev->change_mtu = sb1250_change_mtu;
#ifdef CONFIG_NET_POLL_CONTROLLER
dev->poll_controller = sbmac_netpoll;
#endif
/* This is needed for PASS2 for Rx H/W checksum feature */
sbmac_set_iphdr_offset(sc);
@ -2796,7 +2874,39 @@ static int sbmac_close(struct net_device *dev)
return 0;
}
static int sbmac_poll(struct net_device *dev, int *budget)
{
int work_to_do;
int work_done;
struct sbmac_softc *sc = netdev_priv(dev);
work_to_do = min(*budget, dev->quota);
work_done = sbdma_rx_process(sc, &(sc->sbm_rxdma), work_to_do, 1);
if (work_done > work_to_do)
printk(KERN_ERR "%s exceeded work_to_do budget=%d quota=%d work-done=%d\n",
sc->sbm_dev->name, *budget, dev->quota, work_done);
sbdma_tx_process(sc, &(sc->sbm_txdma), 1);
*budget -= work_done;
dev->quota -= work_done;
if (work_done < work_to_do) {
netif_rx_complete(dev);
#ifdef CONFIG_SBMAC_COALESCE
__raw_writeq(((M_MAC_INT_EOP_COUNT | M_MAC_INT_EOP_TIMER) << S_MAC_TX_CH0) |
((M_MAC_INT_EOP_COUNT | M_MAC_INT_EOP_TIMER) << S_MAC_RX_CH0),
sc->sbm_imr);
#else
__raw_writeq((M_MAC_INT_CHANNEL << S_MAC_TX_CH0) |
(M_MAC_INT_CHANNEL << S_MAC_RX_CH0), sc->sbm_imr);
#endif
}
return (work_done >= work_to_do);
}
#if defined(SBMAC_ETH0_HWADDR) || defined(SBMAC_ETH1_HWADDR) || defined(SBMAC_ETH2_HWADDR) || defined(SBMAC_ETH3_HWADDR)
static void
@ -2883,7 +2993,7 @@ sbmac_init_module(void)
/*
* The R_MAC_ETHERNET_ADDR register will be set to some nonzero
* value for us by the firmware if we're going to use this MAC.
* value for us by the firmware if we are going to use this MAC.
* If we find a zero, skip this MAC.
*/

View File

@ -624,7 +624,7 @@ static inline void setup_rx_ring(struct sgiseeq_rx_desc *buf, int nbufs)
#define ALIGNED(x) ((((unsigned long)(x)) + 0xf) & ~(0xf))
static int sgiseeq_init(struct hpc3_regs* hpcregs, int irq)
static int sgiseeq_init(struct hpc3_regs* hpcregs, int irq, int has_eeprom)
{
struct sgiseeq_init_block *sr;
struct sgiseeq_private *sp;
@ -650,7 +650,9 @@ static int sgiseeq_init(struct hpc3_regs* hpcregs, int irq)
#define EADDR_NVOFS 250
for (i = 0; i < 3; i++) {
unsigned short tmp = ip22_nvram_read(EADDR_NVOFS / 2 + i);
unsigned short tmp = has_eeprom ?
ip22_eeprom_read(&hpcregs->eeprom, EADDR_NVOFS / 2+i) :
ip22_nvram_read(EADDR_NVOFS / 2+i);
dev->dev_addr[2 * i] = tmp >> 8;
dev->dev_addr[2 * i + 1] = tmp & 0xff;
@ -678,6 +680,11 @@ static int sgiseeq_init(struct hpc3_regs* hpcregs, int irq)
setup_rx_ring(sp->rx_desc, SEEQ_RX_BUFFERS);
setup_tx_ring(sp->tx_desc, SEEQ_TX_BUFFERS);
/* Setup PIO and DMA transfer timing */
sp->hregs->pconfig = 0x161;
sp->hregs->dconfig = HPC3_EDCFG_FIRQ | HPC3_EDCFG_FEOP |
HPC3_EDCFG_FRXDC | HPC3_EDCFG_PTO | 0x026;
/* Setup PIO and DMA transfer timing */
sp->hregs->pconfig = 0x161;
sp->hregs->dconfig = HPC3_EDCFG_FIRQ | HPC3_EDCFG_FEOP |
@ -729,8 +736,23 @@ err_out:
static int __init sgiseeq_probe(void)
{
unsigned int tmp, ret1, ret2 = 0;
/* On board adapter on 1st HPC is always present */
return sgiseeq_init(hpc3c0, SGI_ENET_IRQ);
ret1 = sgiseeq_init(hpc3c0, SGI_ENET_IRQ, 0);
/* Let's see if second HPC is there */
if (!(ip22_is_fullhouse()) &&
get_dbe(tmp, (unsigned int *)&hpc3c1->pbdma[1]) == 0) {
sgimc->giopar |= SGIMC_GIOPAR_MASTEREXP1 |
SGIMC_GIOPAR_EXP164 |
SGIMC_GIOPAR_HPC264;
hpc3c1->pbus_piocfg[0][0] = 0x3ffff;
/* interrupt/config register on Challenge S Mezz board */
hpc3c1->pbus_extregs[0][0] = 0x30;
ret2 = sgiseeq_init(hpc3c1, SGI_GIO_0_IRQ, 1);
}
return (ret1 & ret2) ? ret1 : 0;
}
static void __exit sgiseeq_exit(void)

View File

@ -5123,7 +5123,12 @@ static int skge_resume(struct pci_dev *pdev)
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
pci_enable_device(pdev);
ret = pci_enable_device(pdev);
if (ret) {
printk(KERN_WARNING "sk98lin: unable to enable device %s "
"in resume\n", dev->name);
goto err_out;
}
pci_set_master(pdev);
if (pAC->GIni.GIMacsFound == 2)
ret = request_irq(dev->irq, SkGeIsr, IRQF_SHARED, "sk98lin", dev);
@ -5131,10 +5136,8 @@ static int skge_resume(struct pci_dev *pdev)
ret = request_irq(dev->irq, SkGeIsrOnePort, IRQF_SHARED, "sk98lin", dev);
if (ret) {
printk(KERN_WARNING "sk98lin: unable to acquire IRQ %d\n", dev->irq);
pAC->AllocFlag &= ~SK_ALLOC_IRQ;
dev->irq = 0;
pci_disable_device(pdev);
return -EBUSY;
ret = -EBUSY;
goto err_out_disable_pdev;
}
netif_device_attach(dev);
@ -5151,6 +5154,13 @@ static int skge_resume(struct pci_dev *pdev)
}
return 0;
err_out_disable_pdev:
pci_disable_device(pdev);
err_out:
pAC->AllocFlag &= ~SK_ALLOC_IRQ;
dev->irq = 0;
return ret;
}
#else
#define skge_suspend NULL

View File

@ -1,84 +0,0 @@
/******************************************************************************
*
* (C)Copyright 1998,1999 SysKonnect,
* a business unit of Schneider & Koch & Co. Datensysteme GmbH.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* The information in this file is provided "AS IS" without warranty.
*
******************************************************************************/
/*
* Definition of the Error Log Structure
* This structure will be copied into the Error Log buffer
* during the NDIS General Request ReadErrorLog by the MAC Driver
*/
struct s_error_log {
/*
* place holder for token ring adapter error log (zeros)
*/
u_char reserved_0 ; /* byte 0 inside Error Log */
u_char reserved_1 ; /* byte 1 */
u_char reserved_2 ; /* byte 2 */
u_char reserved_3 ; /* byte 3 */
u_char reserved_4 ; /* byte 4 */
u_char reserved_5 ; /* byte 5 */
u_char reserved_6 ; /* byte 6 */
u_char reserved_7 ; /* byte 7 */
u_char reserved_8 ; /* byte 8 */
u_char reserved_9 ; /* byte 9 */
u_char reserved_10 ; /* byte 10 */
u_char reserved_11 ; /* byte 11 */
u_char reserved_12 ; /* byte 12 */
u_char reserved_13 ; /* byte 13 */
/*
* FDDI link statistics
*/
/*
* smt error low
*/
#define SMT_ERL_AEB (1<<15) /* A elast. buffer */
#define SMT_ERL_BLC (1<<14) /* B link error condition */
#define SMT_ERL_ALC (1<<13) /* A link error condition */
#define SMT_ERL_NCC (1<<12) /* not copied condition */
#define SMT_ERL_FEC (1<<11) /* frame error condition */
/*
* smt event low
*/
#define SMT_EVL_NCE (1<<5)
u_short smt_error_low ; /* byte 14/15 */
u_short smt_error_high ; /* byte 16/17 */
u_short smt_event_low ; /* byte 18/19 */
u_short smt_event_high ; /* byte 20/21 */
u_short connection_policy_violation ; /* byte 22/23 */
u_short port_event ; /* byte 24/25 */
u_short set_count_low ; /* byte 26/27 */
u_short set_count_high ; /* byte 28/29 */
u_short aci_id_code ; /* byte 30/31 */
u_short purge_frame_counter ; /* byte 32/33 */
/*
* CMT and RMT state machines
*/
u_short ecm_state ; /* byte 34/35 */
u_short pcm_a_state ; /* byte 36/37 */
u_short pcm_b_state ; /* byte 38/39 */
u_short cfm_state ; /* byte 40/41 */
u_short rmt_state ; /* byte 42/43 */
u_short not_used[30] ; /* byte 44-103 */
u_short ucode_version_level ; /* byte 104/105 */
u_short not_used_1 ; /* byte 106/107 */
u_short not_used_2 ; /* byte 108/109 */
} ;

View File

@ -42,7 +42,7 @@
#include "skge.h"
#define DRV_NAME "skge"
#define DRV_VERSION "1.10"
#define DRV_VERSION "1.11"
#define PFX DRV_NAME " "
#define DEFAULT_TX_RING_SIZE 128
@ -2621,6 +2621,7 @@ static int skge_down(struct net_device *dev)
static inline int skge_avail(const struct skge_ring *ring)
{
smp_mb();
return ((ring->to_clean > ring->to_use) ? 0 : ring->count)
+ (ring->to_clean - ring->to_use) - 1;
}
@ -2709,6 +2710,8 @@ static int skge_xmit_frame(struct sk_buff *skb, struct net_device *dev)
dev->name, e - skge->tx_ring.start, skb->len);
skge->tx_ring.to_use = e->next;
smp_wmb();
if (skge_avail(&skge->tx_ring) <= TX_LOW_WATER) {
pr_debug("%s: transmit queue full\n", dev->name);
netif_stop_queue(dev);
@ -2726,8 +2729,6 @@ static void skge_tx_free(struct skge_port *skge, struct skge_element *e,
{
struct pci_dev *pdev = skge->hw->pdev;
BUG_ON(!e->skb);
/* skb header vs. fragment */
if (control & BMU_STF)
pci_unmap_single(pdev, pci_unmap_addr(e, mapaddr),
@ -2745,7 +2746,6 @@ static void skge_tx_free(struct skge_port *skge, struct skge_element *e,
dev_kfree_skb(e->skb);
}
e->skb = NULL;
}
/* Free all buffers in transmit ring */
@ -3017,21 +3017,29 @@ static void skge_tx_done(struct net_device *dev)
skge_write8(skge->hw, Q_ADDR(txqaddr[skge->port], Q_CSR), CSR_IRQ_CL_F);
netif_tx_lock(dev);
for (e = ring->to_clean; e != ring->to_use; e = e->next) {
struct skge_tx_desc *td = e->desc;
u32 control = ((const struct skge_tx_desc *) e->desc)->control;
if (td->control & BMU_OWN)
if (control & BMU_OWN)
break;
skge_tx_free(skge, e, td->control);
skge_tx_free(skge, e, control);
}
skge->tx_ring.to_clean = e;
if (skge_avail(&skge->tx_ring) > TX_LOW_WATER)
netif_wake_queue(dev);
/* Can run lockless until we need to synchronize to restart queue. */
smp_mb();
netif_tx_unlock(dev);
if (unlikely(netif_queue_stopped(dev) &&
skge_avail(&skge->tx_ring) > TX_LOW_WATER)) {
netif_tx_lock(dev);
if (unlikely(netif_queue_stopped(dev) &&
skge_avail(&skge->tx_ring) > TX_LOW_WATER)) {
netif_wake_queue(dev);
}
netif_tx_unlock(dev);
}
}
static int skge_poll(struct net_device *dev, int *budget)

View File

@ -232,7 +232,6 @@ enum {
IS_R2_PAR_ERR = 1<<0, /* Queue R2 Parity Error */
IS_ERR_MSK = IS_IRQ_MST_ERR | IS_IRQ_STAT
| IS_NO_STAT_M1 | IS_NO_STAT_M2
| IS_RAM_RD_PAR | IS_RAM_WR_PAR
| IS_M1_PAR_ERR | IS_M2_PAR_ERR
| IS_R1_PAR_ERR | IS_R2_PAR_ERR,
@ -2447,15 +2446,15 @@ enum pause_status {
struct skge_port {
u32 msg_enable;
struct skge_hw *hw;
struct net_device *netdev;
int port;
u32 msg_enable;
struct skge_ring tx_ring;
struct skge_ring rx_ring;
struct net_device_stats net_stats;
struct skge_ring rx_ring ____cacheline_aligned_in_smp;
unsigned int rx_buf_size;
struct timer_list link_timer;
enum pause_control flow_control;
@ -2471,7 +2470,8 @@ struct skge_port {
void *mem; /* PCI memory for rings */
dma_addr_t dma;
unsigned long mem_size;
unsigned int rx_buf_size;
struct net_device_stats net_stats;
};

View File

@ -499,7 +499,7 @@ static inline void smc911x_rcv(struct net_device *dev)
SMC_SET_RX_CFG(RX_CFG_RX_END_ALGN4_ | ((2<<8) & RX_CFG_RXDOFF_));
SMC_PULL_DATA(data, pkt_len+2+3);
DBG(SMC_DEBUG_PKTS, "%s: Received packet\n", dev->name,);
DBG(SMC_DEBUG_PKTS, "%s: Received packet\n", dev->name);
PRINT_PKT(data, ((pkt_len - 4) <= 64) ? pkt_len - 4 : 64);
dev->last_rx = jiffies;
skb->protocol = eth_type_trans(skb, dev);

File diff suppressed because it is too large

View File

@ -55,9 +55,6 @@
TODO
Implement pci_driver::suspend() and pci_driver::resume()
power management methods.
Check on 64 bit boxes.
Check and fix on big endian boxes.
@ -125,6 +122,11 @@
#define DM9801_NOISE_FLOOR 8
#define DM9802_NOISE_FLOOR 5
#define DMFE_WOL_LINKCHANGE 0x20000000
#define DMFE_WOL_SAMPLEPACKET 0x10000000
#define DMFE_WOL_MAGICPACKET 0x08000000
#define DMFE_10MHF 0
#define DMFE_100MHF 1
#define DMFE_10MFD 4
@ -251,6 +253,7 @@ struct dmfe_board_info {
u8 wait_reset; /* Hardware failed, need to reset */
u8 dm910x_chk_mode; /* Operating mode check */
u8 first_in_callback; /* Flag to record state */
u8 wol_mode; /* user WOL settings */
struct timer_list timer;
/* System defined statistic counter */
@ -431,6 +434,7 @@ static int __devinit dmfe_init_one (struct pci_dev *pdev,
db->chip_id = ent->driver_data;
db->ioaddr = pci_resource_start(pdev, 0);
db->chip_revision = dev_rev;
db->wol_mode = 0;
db->pdev = pdev;
@ -1065,7 +1069,11 @@ static void dmfe_set_filter_mode(struct DEVICE * dev)
spin_unlock_irqrestore(&db->lock, flags);
}
static void netdev_get_drvinfo(struct net_device *dev,
/*
* Ethtool interface
*/
static void dmfe_ethtool_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
struct dmfe_board_info *np = netdev_priv(dev);
@ -1079,9 +1087,35 @@ static void netdev_get_drvinfo(struct net_device *dev,
dev->base_addr, dev->irq);
}
static int dmfe_ethtool_set_wol(struct net_device *dev,
struct ethtool_wolinfo *wolinfo)
{
struct dmfe_board_info *db = netdev_priv(dev);
if (wolinfo->wolopts & (WAKE_UCAST | WAKE_MCAST | WAKE_BCAST |
WAKE_ARP | WAKE_MAGICSECURE))
return -EOPNOTSUPP;
db->wol_mode = wolinfo->wolopts;
return 0;
}
static void dmfe_ethtool_get_wol(struct net_device *dev,
struct ethtool_wolinfo *wolinfo)
{
struct dmfe_board_info *db = netdev_priv(dev);
wolinfo->supported = WAKE_PHY | WAKE_MAGIC;
wolinfo->wolopts = db->wol_mode;
return;
}
static const struct ethtool_ops netdev_ethtool_ops = {
.get_drvinfo = netdev_get_drvinfo,
.get_drvinfo = dmfe_ethtool_get_drvinfo,
.get_link = ethtool_op_get_link,
.set_wol = dmfe_ethtool_set_wol,
.get_wol = dmfe_ethtool_get_wol,
};
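
As with the pause-parameter sketch earlier, the new WOL hooks are reached through SIOCETHTOOL, here with ETHTOOL_GWOL/ETHTOOL_SWOL. A hedged user-space fragment follows, again assuming a hypothetical eth0 and standard uapi definitions only; the ethtool_call() helper is invented for brevity and is not part of the patch.

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static int ethtool_call(int fd, const char *ifname, void *cmd)
{
	struct ifreq ifr;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)cmd;
	return ioctl(fd, SIOCETHTOOL, &ifr);
}

int main(void)
{
	struct ethtool_wolinfo wol;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	memset(&wol, 0, sizeof(wol));
	wol.cmd = ETHTOOL_GWOL;			/* -> the driver's get_wol hook */
	if (fd < 0 || ethtool_call(fd, "eth0", &wol) < 0) {
		perror("ETHTOOL_GWOL");
		return 1;
	}

	if (wol.supported & WAKE_MAGIC) {
		wol.cmd = ETHTOOL_SWOL;		/* -> the driver's set_wol hook */
		wol.wolopts = WAKE_MAGIC;	/* request wake on magic packet only */
		if (ethtool_call(fd, "eth0", &wol) < 0)
			perror("ETHTOOL_SWOL");
	}
	return 0;
}
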
/*
@ -2050,11 +2084,85 @@ static struct pci_device_id dmfe_pci_tbl[] = {
MODULE_DEVICE_TABLE(pci, dmfe_pci_tbl);
#ifdef CONFIG_PM
static int dmfe_suspend(struct pci_dev *pci_dev, pm_message_t state)
{
struct net_device *dev = pci_get_drvdata(pci_dev);
struct dmfe_board_info *db = netdev_priv(dev);
u32 tmp;
/* Disable upper layer interface */
netif_device_detach(dev);
/* Disable Tx/Rx */
db->cr6_data &= ~(CR6_RXSC | CR6_TXSC);
update_cr6(db->cr6_data, dev->base_addr);
/* Disable Interrupt */
outl(0, dev->base_addr + DCR7);
outl(inl (dev->base_addr + DCR5), dev->base_addr + DCR5);
/* Free RX buffers */
dmfe_free_rxbuffer(db);
/* Enable WOL */
pci_read_config_dword(pci_dev, 0x40, &tmp);
tmp &= ~(DMFE_WOL_LINKCHANGE|DMFE_WOL_MAGICPACKET);
if (db->wol_mode & WAKE_PHY)
tmp |= DMFE_WOL_LINKCHANGE;
if (db->wol_mode & WAKE_MAGIC)
tmp |= DMFE_WOL_MAGICPACKET;
pci_write_config_dword(pci_dev, 0x40, tmp);
pci_enable_wake(pci_dev, PCI_D3hot, 1);
pci_enable_wake(pci_dev, PCI_D3cold, 1);
/* Power down device*/
pci_set_power_state(pci_dev, pci_choose_state (pci_dev,state));
pci_save_state(pci_dev);
return 0;
}
static int dmfe_resume(struct pci_dev *pci_dev)
{
struct net_device *dev = pci_get_drvdata(pci_dev);
u32 tmp;
pci_restore_state(pci_dev);
pci_set_power_state(pci_dev, PCI_D0);
/* Re-initialize DM910X board */
dmfe_init_dm910x(dev);
/* Disable WOL */
pci_read_config_dword(pci_dev, 0x40, &tmp);
tmp &= ~(DMFE_WOL_LINKCHANGE | DMFE_WOL_MAGICPACKET);
pci_write_config_dword(pci_dev, 0x40, tmp);
pci_enable_wake(pci_dev, PCI_D3hot, 0);
pci_enable_wake(pci_dev, PCI_D3cold, 0);
/* Restart upper layer interface */
netif_device_attach(dev);
return 0;
}
#else
#define dmfe_suspend NULL
#define dmfe_resume NULL
#endif
static struct pci_driver dmfe_driver = {
.name = "dmfe",
.id_table = dmfe_pci_tbl,
.probe = dmfe_init_one,
.remove = __devexit_p(dmfe_remove_one),
.suspend = dmfe_suspend,
.resume = dmfe_resume
};
MODULE_AUTHOR("Sten Wang, sten_wang@davicom.com.tw");

View File

@ -673,7 +673,7 @@ irqreturn_t tulip_interrupt(int irq, void *dev_instance)
if (tp->link_change)
(tp->link_change)(dev, csr5);
}
if (csr5 & SytemError) {
if (csr5 & SystemError) {
int error = (csr5 >> 23) & 7;
/* oops, we hit a PCI error. The code produced corresponds
* to the reason:
@ -743,7 +743,7 @@ irqreturn_t tulip_interrupt(int irq, void *dev_instance)
TxFIFOUnderflow |
TxJabber |
TPLnkFail |
SytemError )) != 0);
SystemError )) != 0);
#else
} while ((csr5 & (NormalIntr|AbnormalIntr)) != 0);

View File

@ -44,8 +44,10 @@ static const unsigned char comet_miireg2offset[32] = {
/* MII transceiver control section.
Read and write the MII registers using software-generated serial
MDIO protocol. See the MII specifications or DP83840A data sheet
for details. */
MDIO protocol.
See IEEE 802.3-2002.pdf (Section 2, Chapter "22.2.4 Management functions")
or DP83840A data sheet for more details.
*/
int tulip_mdio_read(struct net_device *dev, int phy_id, int location)
{
@ -261,24 +263,56 @@ void tulip_select_media(struct net_device *dev, int startup)
u16 *reset_sequence = &((u16*)(p+3))[init_length];
int reset_length = p[2 + init_length*2];
misc_info = reset_sequence + reset_length;
if (startup)
if (startup) {
int timeout = 10; /* max 1 ms */
for (i = 0; i < reset_length; i++)
iowrite32(get_u16(&reset_sequence[i]) << 16, ioaddr + CSR15);
/* flush posted writes */
ioread32(ioaddr + CSR15);
/* Sect 3.10.3 in DP83840A.pdf (p39) */
udelay(500);
/* Section 4.2 in DP83840A.pdf (p43) */
/* and IEEE 802.3 "22.2.4.1.1 Reset" */
while (timeout-- &&
(tulip_mdio_read (dev, phy_num, MII_BMCR) & BMCR_RESET))
udelay(100);
}
for (i = 0; i < init_length; i++)
iowrite32(get_u16(&init_sequence[i]) << 16, ioaddr + CSR15);
ioread32(ioaddr + CSR15); /* flush posted writes */
} else {
u8 *init_sequence = p + 2;
u8 *reset_sequence = p + 3 + init_length;
int reset_length = p[2 + init_length];
misc_info = (u16*)(reset_sequence + reset_length);
if (startup) {
int timeout = 10; /* max 1 ms */
iowrite32(mtable->csr12dir | 0x100, ioaddr + CSR12);
for (i = 0; i < reset_length; i++)
iowrite32(reset_sequence[i], ioaddr + CSR12);
/* flush posted writes */
ioread32(ioaddr + CSR12);
/* Sect 3.10.3 in DP83840A.pdf (p39) */
udelay(500);
/* Section 4.2 in DP83840A.pdf (p43) */
/* and IEEE 802.3 "22.2.4.1.1 Reset" */
while (timeout-- &&
(tulip_mdio_read (dev, phy_num, MII_BMCR) & BMCR_RESET))
udelay(100);
}
for (i = 0; i < init_length; i++)
iowrite32(init_sequence[i], ioaddr + CSR12);
ioread32(ioaddr + CSR12); /* flush posted writes */
}
tmp_info = get_u16(&misc_info[1]);
if (tmp_info)
tp->advertising[phy_num] = tmp_info | 1;
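The added code above waits for the self-clearing BMCR reset bit with a bounded poll, as IEEE 802.3 clause 22.2.4.1.1 requires after a software reset. As a generic illustration (not the tulip code itself), the same wait can be written as a small helper around any MDIO read routine with the usual (dev, phy_id, reg) signature:

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/mii.h>
#include <linux/netdevice.h>

static int example_wait_phy_reset(struct net_device *dev, int phy_id,
                                  int (*mdio_read)(struct net_device *, int, int))
{
        int timeout = 10;               /* 10 * 100us = 1ms upper bound */

        /* BMCR_RESET is self-clearing; poll until the PHY drops it */
        while (timeout-- &&
               (mdio_read(dev, phy_id, MII_BMCR) & BMCR_RESET))
                udelay(100);

        return timeout < 0 ? -ETIMEDOUT : 0;
}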


@ -132,7 +132,7 @@ enum pci_cfg_driver_reg {
/* The bits in the CSR5 status registers, mostly interrupt sources. */
enum status_bits {
TimerInt = 0x800,
SytemError = 0x2000,
SystemError = 0x2000,
TPLnkFail = 0x1000,
TPLnkPass = 0x10,
NormalIntr = 0x10000,
@ -482,8 +482,11 @@ static inline void tulip_stop_rxtx(struct tulip_private *tp)
udelay(10);
if (!i)
printk(KERN_DEBUG "%s: tulip_stop_rxtx() failed\n",
pci_name(tp->pdev));
printk(KERN_DEBUG "%s: tulip_stop_rxtx() failed"
" (CSR5 0x%x CSR6 0x%x)\n",
pci_name(tp->pdev),
ioread32(ioaddr + CSR5),
ioread32(ioaddr + CSR6));
}
}


@ -17,11 +17,11 @@
#define DRV_NAME "tulip"
#ifdef CONFIG_TULIP_NAPI
#define DRV_VERSION "1.1.14-NAPI" /* Keep at least for test */
#define DRV_VERSION "1.1.15-NAPI" /* Keep at least for test */
#else
#define DRV_VERSION "1.1.14"
#define DRV_VERSION "1.1.15"
#endif
#define DRV_RELDATE "May 11, 2002"
#define DRV_RELDATE "Feb 27, 2007"
#include <linux/module.h>


@ -1147,7 +1147,7 @@ static irqreturn_t intr_handler(int irq, void *dev_instance)
}
/* Abnormal error summary/uncommon events handlers. */
if (intr_status & (AbnormalIntr | TxFIFOUnderflow | SytemError |
if (intr_status & (AbnormalIntr | TxFIFOUnderflow | SystemError |
TimerInt | TxDied))
netdev_error(dev, intr_status);

File diff suppressed because it is too large.


@ -28,6 +28,8 @@
#include <asm/ucc.h>
#include <asm/ucc_fast.h>
#include "ucc_geth_mii.h"
#define NUM_TX_QUEUES 8
#define NUM_RX_QUEUES 8
#define NUM_BDS_IN_PREFETCHED_BDS 4
@ -36,15 +38,6 @@
#define ENET_INIT_PARAM_MAX_ENTRIES_RX 9
#define ENET_INIT_PARAM_MAX_ENTRIES_TX 8
struct ucc_mii_mng {
u32 miimcfg; /* MII management configuration reg */
u32 miimcom; /* MII management command reg */
u32 miimadd; /* MII management address reg */
u32 miimcon; /* MII management control reg */
u32 miimstat; /* MII management status reg */
u32 miimind; /* MII management indication reg */
} __attribute__ ((packed));
struct ucc_geth {
struct ucc_fast uccf;
@ -53,7 +46,7 @@ struct ucc_geth {
u32 ipgifg; /* interframe gap reg. */
u32 hafdup; /* half-duplex reg. */
u8 res1[0x10];
struct ucc_mii_mng miimng; /* MII management structure */
u8 miimng[0x18]; /* MII management structure moved to _mii.h */
u32 ifctl; /* interface control reg */
u32 ifstat; /* interface status reg */
u32 macstnaddr1; /* mac station address part 1 reg */
@ -212,6 +205,9 @@ struct ucc_geth {
#define UCCE_OTHER (UCCE_SCAR | UCCE_GRA | UCCE_CBPR | UCCE_BSY |\
UCCE_RXC | UCCE_TXC | UCCE_TXE)
#define UCCE_RX_EVENTS (UCCE_RXF | UCCE_BSY)
#define UCCE_TX_EVENTS (UCCE_TXB | UCCE_TXE)
/* UCC GETH UPSMR (Protocol Specific Mode Register) */
#define UPSMR_ECM 0x04000000 /* Enable CAM
Miss or
@ -381,66 +377,6 @@ struct ucc_geth {
#define UCCS_MPD 0x01 /* Magic Packet
Detected */
/* UCC GETH MIIMCFG (MII Management Configuration Register) */
#define MIIMCFG_RESET_MANAGEMENT 0x80000000 /* Reset
management */
#define MIIMCFG_NO_PREAMBLE 0x00000010 /* Preamble
suppress */
#define MIIMCFG_CLOCK_DIVIDE_SHIFT (31 - 31) /* clock divide
<< shift */
#define MIIMCFG_CLOCK_DIVIDE_MAX 0xf /* clock divide max val
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_2 0x00000000 /* divide by 2 */
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_4 0x00000001 /* divide by 4 */
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_6 0x00000002 /* divide by 6 */
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_8 0x00000003 /* divide by 8 */
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_10 0x00000004 /* divide by 10
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_14 0x00000005 /* divide by 14
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_16 0x00000008 /* divide by 16
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_20 0x00000006 /* divide by 20
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_28 0x00000007 /* divide by 28
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_32 0x00000009 /* divide by 32
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_48 0x0000000a /* divide by 48
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_64 0x0000000b /* divide by 64
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_80 0x0000000c /* divide by 80
*/
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_112 0x0000000d /* divide by
112 */
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_160 0x0000000e /* divide by
160 */
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_224 0x0000000f /* divide by
224 */
/* UCC GETH MIIMCOM (MII Management Command Register) */
#define MIIMCOM_SCAN_CYCLE 0x00000002 /* Scan cycle */
#define MIIMCOM_READ_CYCLE 0x00000001 /* Read cycle */
/* UCC GETH MIIMADD (MII Management Address Register) */
#define MIIMADD_PHY_ADDRESS_SHIFT (31 - 23) /* PHY Address
<< shift */
#define MIIMADD_PHY_REGISTER_SHIFT (31 - 31) /* PHY Register
<< shift */
/* UCC GETH MIIMCON (MII Management Control Register) */
#define MIIMCON_PHY_CONTROL_SHIFT (31 - 31) /* PHY Control
<< shift */
#define MIIMCON_PHY_STATUS_SHIFT (31 - 31) /* PHY Status
<< shift */
/* UCC GETH MIIMIND (MII Management Indicator Register) */
#define MIIMIND_NOT_VALID 0x00000004 /* Not valid */
#define MIIMIND_SCAN 0x00000002 /* Scan in
progress */
#define MIIMIND_BUSY 0x00000001
/* UCC GETH IFSTAT (Interface Status Register) */
#define IFSTAT_EXCESS_DEFER 0x00000200 /* Excessive
transmission
@ -931,8 +867,7 @@ struct ucc_geth_hardware_statistics {
#define UCC_GETH_SCHEDULER_ALIGNMENT 4 /* This is a guess */
#define UCC_GETH_TX_STATISTICS_ALIGNMENT 4 /* This is a guess */
#define UCC_GETH_RX_STATISTICS_ALIGNMENT 4 /* This is a guess */
#define UCC_GETH_RX_INTERRUPT_COALESCING_ALIGNMENT 4 /* This is a
guess */
#define UCC_GETH_RX_INTERRUPT_COALESCING_ALIGNMENT 64
#define UCC_GETH_RX_BD_QUEUES_ALIGNMENT 8 /* This is a guess */
#define UCC_GETH_RX_PREFETCHED_BDS_ALIGNMENT 128 /* This is a guess */
#define UCC_GETH_RX_EXTENDED_FILTERING_GLOBAL_PARAMETERS_ALIGNMENT 4 /* This
@ -1009,15 +944,6 @@ struct ucc_geth_hardware_statistics {
register */
#define UCC_GETH_MACCFG1_INIT 0
#define UCC_GETH_MACCFG2_INIT (MACCFG2_RESERVED_1)
#define UCC_GETH_MIIMCFG_MNGMNT_CLC_DIV_INIT \
(MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_112)
/* Ethernet speed */
enum enet_speed {
ENET_SPEED_10BT, /* 10 Base T */
ENET_SPEED_100BT, /* 100 Base T */
ENET_SPEED_1000BT /* 1000 Base T */
};
/* Ethernet Address Type. */
enum enet_addr_type {
@ -1026,22 +952,6 @@ enum enet_addr_type {
ENET_ADDR_TYPE_BROADCAST
};
/* TBI / MII Set Register */
enum enet_tbi_mii_reg {
ENET_TBI_MII_CR = 0x00, /* Control (CR ) */
ENET_TBI_MII_SR = 0x01, /* Status (SR ) */
ENET_TBI_MII_ANA = 0x04, /* AN advertisement (ANA ) */
ENET_TBI_MII_ANLPBPA = 0x05, /* AN link partner base page ability
(ANLPBPA) */
ENET_TBI_MII_ANEX = 0x06, /* AN expansion (ANEX ) */
ENET_TBI_MII_ANNPT = 0x07, /* AN next page transmit (ANNPT ) */
ENET_TBI_MII_ANLPANP = 0x08, /* AN link partner ability next page
(ANLPANP) */
ENET_TBI_MII_EXST = 0x0F, /* Extended status (EXST ) */
ENET_TBI_MII_JD = 0x10, /* Jitter diagnostics (JD ) */
ENET_TBI_MII_TBICON = 0x11 /* TBI control (TBICON ) */
};
/* UCC GETH 82xx Ethernet Address Recognition Location */
enum ucc_geth_enet_address_recognition_location {
UCC_GETH_ENET_ADDRESS_RECOGNITION_LOCATION_STATION_ADDRESS,/* station
@ -1239,8 +1149,7 @@ struct ucc_geth_info {
u16 pausePeriod;
u16 extensionField;
u8 phy_address;
u32 board_flags;
u32 phy_interrupt;
u32 mdio_bus;
u8 weightfactor[NUM_TX_QUEUES];
u8 interruptcoalescingmaxvalue[NUM_RX_QUEUES];
u8 l2qt[UCC_GETH_VLAN_PRIORITY_MAX];
@ -1249,7 +1158,6 @@ struct ucc_geth_info {
u8 iphoffset[TX_IP_OFFSET_ENTRY_MAX];
u16 bdRingLenTx[NUM_TX_QUEUES];
u16 bdRingLenRx[NUM_RX_QUEUES];
enum enet_interface enet_interface;
enum ucc_geth_num_of_station_addresses numStationAddresses;
enum qe_fltr_largest_external_tbl_lookup_key_size
largestexternallookupkeysize;
@ -1326,9 +1234,11 @@ struct ucc_geth_private {
/* index of the first skb which hasn't been transmitted yet. */
u16 skb_dirtytx[NUM_TX_QUEUES];
struct work_struct tq;
struct timer_list phy_info_timer;
struct ugeth_mii_info *mii_info;
struct phy_device *phydev;
phy_interface_t phy_interface;
int max_speed;
uint32_t msg_enable;
int oldspeed;
int oldduplex;
int oldlink;
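The new phydev, phy_interface, oldspeed, oldduplex and oldlink fields above are the usual state a phylib adjust_link callback keeps in order to detect link parameter changes. A hypothetical sketch of such a callback, with example_priv standing in for the driver's private structure (the MAC reprogramming step is left as a comment):

#include <linux/netdevice.h>
#include <linux/phy.h>

struct example_priv {
        struct phy_device *phydev;
        int oldlink;
        int oldspeed;
        int oldduplex;
};

static void example_adjust_link(struct net_device *dev)
{
        struct example_priv *priv = netdev_priv(dev);
        struct phy_device *phydev = priv->phydev;
        int new_state = 0;

        if (phydev->link) {
                if (phydev->duplex != priv->oldduplex) {
                        new_state = 1;
                        priv->oldduplex = phydev->duplex;
                }
                if (phydev->speed != priv->oldspeed) {
                        new_state = 1;
                        priv->oldspeed = phydev->speed;
                }
                if (!priv->oldlink) {
                        new_state = 1;
                        priv->oldlink = 1;
                }
        } else if (priv->oldlink) {
                new_state = 1;
                priv->oldlink = 0;
                priv->oldspeed = 0;
                priv->oldduplex = -1;
        }

        if (new_state) {
                /* reprogram the MAC for the new speed/duplex here */
        }
}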

drivers/net/ucc_geth_mii.c (new file, 279 lines)

@ -0,0 +1,279 @@
/*
* drivers/net/ucc_geth_mii.c
*
* Gianfar Ethernet Driver -- MIIM bus implementation
* Provides Bus interface for MIIM regs
*
* Author: Li Yang
*
* Copyright (c) 2002-2004 Freescale Semiconductor, Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*/
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/unistd.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <asm/ocp.h>
#include <linux/crc32.h>
#include <linux/mii.h>
#include <linux/phy.h>
#include <linux/fsl_devices.h>
#include <asm/of_platform.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/uaccess.h>
#include <asm/ucc.h>
#include "ucc_geth_mii.h"
#include "ucc_geth.h"
#define DEBUG
#ifdef DEBUG
#define vdbg(format, arg...) printk(KERN_DEBUG , format "\n" , ## arg)
#else
#define vdbg(format, arg...) do {} while(0)
#endif
#define DRV_DESC "QE UCC Ethernet Controller MII Bus"
#define DRV_NAME "fsl-uec_mdio"
/* Write value to the PHY for this device to the register at regnum, */
/* waiting until the write is done before it returns. All PHY */
/* configuration has to be done through the master UEC MIIM regs */
int uec_mdio_write(struct mii_bus *bus, int mii_id, int regnum, u16 value)
{
struct ucc_mii_mng __iomem *regs = (void __iomem *)bus->priv;
/* Setting up the MII Management Address Register */
out_be32(&regs->miimadd,
(mii_id << MIIMADD_PHY_ADDRESS_SHIFT) | regnum);
/* Setting up the MII Management Control Register with the value */
out_be32(&regs->miimcon, value);
/* Wait till MII management write is complete */
while ((in_be32(&regs->miimind)) & MIIMIND_BUSY)
cpu_relax();
return 0;
}
/* Reads from register regnum in the PHY for device dev, */
/* returning the value. Clears miimcom first. All PHY */
/* configuration has to be done through the TSEC1 MIIM regs */
int uec_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
{
struct ucc_mii_mng __iomem *regs = (void __iomem *)bus->priv;
u16 value;
/* Setting up the MII Management Address Register */
out_be32(&regs->miimadd,
(mii_id << MIIMADD_PHY_ADDRESS_SHIFT) | regnum);
/* Clear miimcom, perform an MII management read cycle */
out_be32(&regs->miimcom, 0);
out_be32(&regs->miimcom, MIIMCOM_READ_CYCLE);
/* Wait till MII management read is complete */
while ((in_be32(&regs->miimind)) & (MIIMIND_BUSY | MIIMIND_NOT_VALID))
cpu_relax();
/* Read MII management status */
value = in_be32(&regs->miimstat);
return value;
}
/* Reset the MIIM registers, and wait for the bus to free */
int uec_mdio_reset(struct mii_bus *bus)
{
struct ucc_mii_mng __iomem *regs = (void __iomem *)bus->priv;
unsigned int timeout = PHY_INIT_TIMEOUT;
spin_lock_bh(&bus->mdio_lock);
/* Reset the management interface */
out_be32(&regs->miimcfg, MIIMCFG_RESET_MANAGEMENT);
/* Setup the MII Mgmt clock speed */
out_be32(&regs->miimcfg, MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_112);
/* Wait until the bus is free */
while ((in_be32(&regs->miimind) & MIIMIND_BUSY) && timeout--)
cpu_relax();
spin_unlock_bh(&bus->mdio_lock);
if (timeout <= 0) {
printk(KERN_ERR "%s: The MII Bus is stuck!\n", bus->name);
return -EBUSY;
}
return 0;
}
static int uec_mdio_probe(struct of_device *ofdev, const struct of_device_id *match)
{
struct device *device = &ofdev->dev;
struct device_node *np = ofdev->node, *tempnp = NULL;
struct device_node *child = NULL;
struct ucc_mii_mng __iomem *regs;
struct mii_bus *new_bus;
struct resource res;
int k, err = 0;
new_bus = kzalloc(sizeof(struct mii_bus), GFP_KERNEL);
if (NULL == new_bus)
return -ENOMEM;
new_bus->name = "UCC Ethernet Controller MII Bus";
new_bus->read = &uec_mdio_read;
new_bus->write = &uec_mdio_write;
new_bus->reset = &uec_mdio_reset;
memset(&res, 0, sizeof(res));
err = of_address_to_resource(np, 0, &res);
if (err)
goto reg_map_fail;
new_bus->id = res.start;
new_bus->irq = kmalloc(32 * sizeof(int), GFP_KERNEL);
if (NULL == new_bus->irq) {
err = -ENOMEM;
goto reg_map_fail;
}
for (k = 0; k < 32; k++)
new_bus->irq[k] = PHY_POLL;
while ((child = of_get_next_child(np, child)) != NULL) {
int irq = irq_of_parse_and_map(child, 0);
if (irq != NO_IRQ) {
const u32 *id = get_property(child, "reg", NULL);
new_bus->irq[*id] = irq;
}
}
/* Set the base address */
regs = ioremap(res.start, sizeof(struct ucc_mii_mng));
if (NULL == regs) {
err = -ENOMEM;
goto ioremap_fail;
}
new_bus->priv = (void __force *)regs;
new_bus->dev = device;
dev_set_drvdata(device, new_bus);
/* Read MII management master from device tree */
while ((tempnp = of_find_compatible_node(tempnp, "network", "ucc_geth"))
!= NULL) {
struct resource tempres;
err = of_address_to_resource(tempnp, 0, &tempres);
if (err)
goto bus_register_fail;
/* if our mdio regs fall within this UCC regs range */
if ((res.start >= tempres.start) &&
(res.end <= tempres.end)) {
/* set this UCC to be the MII master */
const u32 *id = get_property(tempnp, "device-id", NULL);
if (id == NULL)
goto bus_register_fail;
ucc_set_qe_mux_mii_mng(*id - 1);
/* assign the TBI an address which won't
* conflict with the PHYs */
out_be32(&regs->utbipar, UTBIPAR_INIT_TBIPA);
break;
}
}
err = mdiobus_register(new_bus);
if (0 != err) {
printk(KERN_ERR "%s: Cannot register as MDIO bus\n",
new_bus->name);
goto bus_register_fail;
}
return 0;
bus_register_fail:
iounmap(regs);
ioremap_fail:
kfree(new_bus->irq);
reg_map_fail:
kfree(new_bus);
return err;
}
int uec_mdio_remove(struct of_device *ofdev)
{
struct device *device = &ofdev->dev;
struct mii_bus *bus = dev_get_drvdata(device);
mdiobus_unregister(bus);
dev_set_drvdata(device, NULL);
iounmap((void __iomem *)bus->priv);
bus->priv = NULL;
kfree(bus);
return 0;
}
static struct of_device_id uec_mdio_match[] = {
{
.type = "mdio",
.compatible = "ucc_geth_phy",
},
{},
};
MODULE_DEVICE_TABLE(of, uec_mdio_match);
static struct of_platform_driver uec_mdio_driver = {
.name = DRV_NAME,
.probe = uec_mdio_probe,
.remove = uec_mdio_remove,
.match_table = uec_mdio_match,
};
int __init uec_mdio_init(void)
{
return of_register_platform_driver(&uec_mdio_driver);
}
void __exit uec_mdio_exit(void)
{
of_unregister_platform_driver(&uec_mdio_driver);
}
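With the MDIO bus registered above, a MAC driver of this kernel generation would normally attach to its PHY through phylib rather than touching the MIIM registers directly. A hedged sketch follows, assuming the phy_connect() signature, PHY_ID_FMT and PHY_INTERFACE_MODE_GMII of this era; MY_MDIO_BUS_ID, MY_PHY_ADDR and the my_* functions are made-up illustrations, not part of this commit:

#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/phy.h>

#define MY_MDIO_BUS_ID  0xe0102120      /* hypothetical: the bus 'id' set in probe */
#define MY_PHY_ADDR     0               /* hypothetical PHY address on that bus */

static void my_adjust_link(struct net_device *dev)
{
        /* react to link/speed/duplex changes reported by phylib */
}

static int my_attach_phy(struct net_device *dev, struct phy_device **out)
{
        struct phy_device *phydev;
        char bus_id[32];

        /* phylib identifies a PHY as "<bus id>:<address>" */
        snprintf(bus_id, sizeof(bus_id), PHY_ID_FMT, MY_MDIO_BUS_ID, MY_PHY_ADDR);

        phydev = phy_connect(dev, bus_id, &my_adjust_link, 0,
                             PHY_INTERFACE_MODE_GMII);
        if (IS_ERR(phydev))
                return PTR_ERR(phydev);

        *out = phydev;
        phy_start(phydev);              /* begin link management */
        return 0;
}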

drivers/net/ucc_geth_mii.h (new file, 100 lines)

@ -0,0 +1,100 @@
/*
* drivers/net/ucc_geth_mii.h
*
* Gianfar Ethernet Driver -- MII Management Bus Implementation
* Driver for the MDIO bus controller in the Gianfar register space
*
* Author: Andy Fleming
* Maintainer: Kumar Gala
*
* Copyright (c) 2002-2004 Freescale Semiconductor, Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*/
#ifndef __UEC_MII_H
#define __UEC_MII_H
/* UCC GETH MIIMCFG (MII Management Configuration Register) */
#define MIIMCFG_RESET_MANAGEMENT 0x80000000 /* Reset
management */
#define MIIMCFG_NO_PREAMBLE 0x00000010 /* Preamble
suppress */
#define MIIMCFG_CLOCK_DIVIDE_SHIFT (31 - 31) /* clock divide
<< shift */
#define MIIMCFG_CLOCK_DIVIDE_MAX 0xf /* max clock divide */
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_2 0x00000000
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_4 0x00000001
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_6 0x00000002
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_8 0x00000003
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_10 0x00000004
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_14 0x00000005
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_16 0x00000008
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_20 0x00000006
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_28 0x00000007
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_32 0x00000009
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_48 0x0000000a
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_64 0x0000000b
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_80 0x0000000c
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_112 0x0000000d
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_160 0x0000000e
#define MIIMCFG_MANAGEMENT_CLOCK_DIVIDE_BY_224 0x0000000f
/* UCC GETH MIIMCOM (MII Management Command Register) */
#define MIIMCOM_SCAN_CYCLE 0x00000002 /* Scan cycle */
#define MIIMCOM_READ_CYCLE 0x00000001 /* Read cycle */
/* UCC GETH MIIMADD (MII Management Address Register) */
#define MIIMADD_PHY_ADDRESS_SHIFT (31 - 23) /* PHY Address
<< shift */
#define MIIMADD_PHY_REGISTER_SHIFT (31 - 31) /* PHY Register
<< shift */
/* UCC GETH MIIMCON (MII Management Control Register) */
#define MIIMCON_PHY_CONTROL_SHIFT (31 - 31) /* PHY Control
<< shift */
#define MIIMCON_PHY_STATUS_SHIFT (31 - 31) /* PHY Status
<< shift */
/* UCC GETH MIIMIND (MII Management Indicator Register) */
#define MIIMIND_NOT_VALID 0x00000004 /* Not valid */
#define MIIMIND_SCAN 0x00000002 /* Scan in
progress */
#define MIIMIND_BUSY 0x00000001
/* Initial TBI Physical Address */
#define UTBIPAR_INIT_TBIPA 0x1f
struct ucc_mii_mng {
u32 miimcfg; /* MII management configuration reg */
u32 miimcom; /* MII management command reg */
u32 miimadd; /* MII management address reg */
u32 miimcon; /* MII management control reg */
u32 miimstat; /* MII management status reg */
u32 miimind; /* MII management indication reg */
u8 notcare[28]; /* Space holder */
u32 utbipar; /* TBI phy address reg */
} __attribute__ ((packed));
/* TBI / MII Set Register */
enum enet_tbi_mii_reg {
ENET_TBI_MII_CR = 0x00, /* Control */
ENET_TBI_MII_SR = 0x01, /* Status */
ENET_TBI_MII_ANA = 0x04, /* AN advertisement */
ENET_TBI_MII_ANLPBPA = 0x05, /* AN link partner base page ability */
ENET_TBI_MII_ANEX = 0x06, /* AN expansion */
ENET_TBI_MII_ANNPT = 0x07, /* AN next page transmit */
ENET_TBI_MII_ANLPANP = 0x08, /* AN link partner ability next page */
ENET_TBI_MII_EXST = 0x0F, /* Extended status */
ENET_TBI_MII_JD = 0x10, /* Jitter diagnostics */
ENET_TBI_MII_TBICON = 0x11 /* TBI control */
};
int uec_mdio_read(struct mii_bus *bus, int mii_id, int regnum);
int uec_mdio_write(struct mii_bus *bus, int mii_id, int regnum, u16 value);
int __init uec_mdio_init(void);
void __exit uec_mdio_exit(void);
#endif /* __UEC_MII_H */
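The divide-by-N values above exist to keep MDC at or below the 2.5 MHz ceiling of IEEE 802.3 clause 22 (MDC = source clock / divider). The following throwaway user-space calculation, assuming a purely hypothetical 266 MHz source clock (the real clock depends on the board's QE configuration), shows why a large divider such as divide-by-112, the value uec_mdio_reset() programs, is a comfortable default:

#include <stdio.h>

int main(void)
{
        /* dividers available in the MIIMCFG table above */
        static const unsigned int dividers[] = {
                2, 4, 6, 8, 10, 14, 16, 20, 28, 32, 48, 64, 80, 112, 160, 224
        };
        const unsigned long src_hz = 266000000UL;   /* hypothetical source clock */
        const unsigned long mdc_max = 2500000UL;    /* IEEE 802.3 clause 22 limit */
        unsigned int i;

        for (i = 0; i < sizeof(dividers) / sizeof(dividers[0]); i++) {
                if (src_hz / dividers[i] <= mdc_max) {
                        printf("divide by %u -> MDC = %lu Hz\n",
                               dividers[i], src_hz / dividers[i]);
                        break;
                }
        }
        return 0;
}

With those numbers the first divider that satisfies the limit is 112, giving an MDC of 2.375 MHz.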


@ -1,785 +0,0 @@
/*
* Copyright (C) Freescale Semicondutor, Inc. 2006. All rights reserved.
*
* Author: Shlomi Gridish <gridish@freescale.com>
*
* Description:
* UCC GETH Driver -- PHY handling
*
* Changelog:
* Jun 28, 2006 Li Yang <LeoLi@freescale.com>
* - Rearrange code and style fixes
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*/
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/version.h>
#include <linux/crc32.h>
#include <linux/mii.h>
#include <linux/ethtool.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/uaccess.h>
#include "ucc_geth.h"
#include "ucc_geth_phy.h"
#define ugphy_printk(level, format, arg...) \
printk(level format "\n", ## arg)
#define ugphy_dbg(format, arg...) \
ugphy_printk(KERN_DEBUG, format , ## arg)
#define ugphy_err(format, arg...) \
ugphy_printk(KERN_ERR, format , ## arg)
#define ugphy_info(format, arg...) \
ugphy_printk(KERN_INFO, format , ## arg)
#define ugphy_warn(format, arg...) \
ugphy_printk(KERN_WARNING, format , ## arg)
#ifdef UGETH_VERBOSE_DEBUG
#define ugphy_vdbg ugphy_dbg
#else
#define ugphy_vdbg(fmt, args...) do { } while (0)
#endif /* UGETH_VERBOSE_DEBUG */
static void config_genmii_advert(struct ugeth_mii_info *mii_info);
static void genmii_setup_forced(struct ugeth_mii_info *mii_info);
static void genmii_restart_aneg(struct ugeth_mii_info *mii_info);
static int gbit_config_aneg(struct ugeth_mii_info *mii_info);
static int genmii_config_aneg(struct ugeth_mii_info *mii_info);
static int genmii_update_link(struct ugeth_mii_info *mii_info);
static int genmii_read_status(struct ugeth_mii_info *mii_info);
static u16 ucc_geth_phy_read(struct ugeth_mii_info *mii_info, u16 regnum)
{
u16 retval;
unsigned long flags;
ugphy_vdbg("%s: IN", __FUNCTION__);
spin_lock_irqsave(&mii_info->mdio_lock, flags);
retval = mii_info->mdio_read(mii_info->dev, mii_info->mii_id, regnum);
spin_unlock_irqrestore(&mii_info->mdio_lock, flags);
return retval;
}
static void ucc_geth_phy_write(struct ugeth_mii_info *mii_info, u16 regnum, u16 val)
{
unsigned long flags;
ugphy_vdbg("%s: IN", __FUNCTION__);
spin_lock_irqsave(&mii_info->mdio_lock, flags);
mii_info->mdio_write(mii_info->dev, mii_info->mii_id, regnum, val);
spin_unlock_irqrestore(&mii_info->mdio_lock, flags);
}
/* Write value to the PHY for this device to the register at regnum, */
/* waiting until the write is done before it returns. All PHY */
/* configuration has to be done through the TSEC1 MIIM regs */
void write_phy_reg(struct net_device *dev, int mii_id, int regnum, int value)
{
struct ucc_geth_private *ugeth = netdev_priv(dev);
struct ucc_mii_mng *mii_regs;
enum enet_tbi_mii_reg mii_reg = (enum enet_tbi_mii_reg) regnum;
u32 tmp_reg;
ugphy_vdbg("%s: IN", __FUNCTION__);
spin_lock_irq(&ugeth->lock);
mii_regs = ugeth->mii_info->mii_regs;
/* Set this UCC to be the master of the MII managment */
ucc_set_qe_mux_mii_mng(ugeth->ug_info->uf_info.ucc_num);
/* Stop the MII management read cycle */
out_be32(&mii_regs->miimcom, 0);
/* Setting up the MII Mangement Address Register */
tmp_reg = ((u32) mii_id << MIIMADD_PHY_ADDRESS_SHIFT) | mii_reg;
out_be32(&mii_regs->miimadd, tmp_reg);
/* Setting up the MII Mangement Control Register with the value */
out_be32(&mii_regs->miimcon, (u32) value);
/* Wait till MII management write is complete */
while ((in_be32(&mii_regs->miimind)) & MIIMIND_BUSY)
cpu_relax();
spin_unlock_irq(&ugeth->lock);
udelay(10000);
}
/* Reads from register regnum in the PHY for device dev, */
/* returning the value. Clears miimcom first. All PHY */
/* configuration has to be done through the TSEC1 MIIM regs */
int read_phy_reg(struct net_device *dev, int mii_id, int regnum)
{
struct ucc_geth_private *ugeth = netdev_priv(dev);
struct ucc_mii_mng *mii_regs;
enum enet_tbi_mii_reg mii_reg = (enum enet_tbi_mii_reg) regnum;
u32 tmp_reg;
u16 value;
ugphy_vdbg("%s: IN", __FUNCTION__);
spin_lock_irq(&ugeth->lock);
mii_regs = ugeth->mii_info->mii_regs;
/* Setting up the MII Mangement Address Register */
tmp_reg = ((u32) mii_id << MIIMADD_PHY_ADDRESS_SHIFT) | mii_reg;
out_be32(&mii_regs->miimadd, tmp_reg);
/* Perform an MII management read cycle */
out_be32(&mii_regs->miimcom, MIIMCOM_READ_CYCLE);
/* Wait till MII management write is complete */
while ((in_be32(&mii_regs->miimind)) & MIIMIND_BUSY)
cpu_relax();
udelay(10000);
/* Read MII management status */
value = (u16) in_be32(&mii_regs->miimstat);
out_be32(&mii_regs->miimcom, 0);
if (value == 0xffff)
ugphy_warn("read wrong value : mii_id %d,mii_reg %d, base %08x",
mii_id, mii_reg, (u32) & (mii_regs->miimcfg));
spin_unlock_irq(&ugeth->lock);
return (value);
}
void mii_clear_phy_interrupt(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
if (mii_info->phyinfo->ack_interrupt)
mii_info->phyinfo->ack_interrupt(mii_info);
}
void mii_configure_phy_interrupt(struct ugeth_mii_info *mii_info,
u32 interrupts)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
mii_info->interrupts = interrupts;
if (mii_info->phyinfo->config_intr)
mii_info->phyinfo->config_intr(mii_info);
}
/* Writes MII_ADVERTISE with the appropriate values, after
* sanitizing advertise to make sure only supported features
* are advertised
*/
static void config_genmii_advert(struct ugeth_mii_info *mii_info)
{
u32 advertise;
u16 adv;
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Only allow advertising what this PHY supports */
mii_info->advertising &= mii_info->phyinfo->features;
advertise = mii_info->advertising;
/* Setup standard advertisement */
adv = ucc_geth_phy_read(mii_info, MII_ADVERTISE);
adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4);
if (advertise & ADVERTISED_10baseT_Half)
adv |= ADVERTISE_10HALF;
if (advertise & ADVERTISED_10baseT_Full)
adv |= ADVERTISE_10FULL;
if (advertise & ADVERTISED_100baseT_Half)
adv |= ADVERTISE_100HALF;
if (advertise & ADVERTISED_100baseT_Full)
adv |= ADVERTISE_100FULL;
ucc_geth_phy_write(mii_info, MII_ADVERTISE, adv);
}
static void genmii_setup_forced(struct ugeth_mii_info *mii_info)
{
u16 ctrl;
u32 features = mii_info->phyinfo->features;
ugphy_vdbg("%s: IN", __FUNCTION__);
ctrl = ucc_geth_phy_read(mii_info, MII_BMCR);
ctrl &=
~(BMCR_FULLDPLX | BMCR_SPEED100 | BMCR_SPEED1000 | BMCR_ANENABLE);
ctrl |= BMCR_RESET;
switch (mii_info->speed) {
case SPEED_1000:
if (features & (SUPPORTED_1000baseT_Half
| SUPPORTED_1000baseT_Full)) {
ctrl |= BMCR_SPEED1000;
break;
}
mii_info->speed = SPEED_100;
case SPEED_100:
if (features & (SUPPORTED_100baseT_Half
| SUPPORTED_100baseT_Full)) {
ctrl |= BMCR_SPEED100;
break;
}
mii_info->speed = SPEED_10;
case SPEED_10:
if (features & (SUPPORTED_10baseT_Half
| SUPPORTED_10baseT_Full))
break;
default: /* Unsupported speed! */
ugphy_err("%s: Bad speed!", mii_info->dev->name);
break;
}
ucc_geth_phy_write(mii_info, MII_BMCR, ctrl);
}
/* Enable and Restart Autonegotiation */
static void genmii_restart_aneg(struct ugeth_mii_info *mii_info)
{
u16 ctl;
ugphy_vdbg("%s: IN", __FUNCTION__);
ctl = ucc_geth_phy_read(mii_info, MII_BMCR);
ctl |= (BMCR_ANENABLE | BMCR_ANRESTART);
ucc_geth_phy_write(mii_info, MII_BMCR, ctl);
}
static int gbit_config_aneg(struct ugeth_mii_info *mii_info)
{
u16 adv;
u32 advertise;
ugphy_vdbg("%s: IN", __FUNCTION__);
if (mii_info->autoneg) {
/* Configure the ADVERTISE register */
config_genmii_advert(mii_info);
advertise = mii_info->advertising;
adv = ucc_geth_phy_read(mii_info, MII_1000BASETCONTROL);
adv &= ~(MII_1000BASETCONTROL_FULLDUPLEXCAP |
MII_1000BASETCONTROL_HALFDUPLEXCAP);
if (advertise & SUPPORTED_1000baseT_Half)
adv |= MII_1000BASETCONTROL_HALFDUPLEXCAP;
if (advertise & SUPPORTED_1000baseT_Full)
adv |= MII_1000BASETCONTROL_FULLDUPLEXCAP;
ucc_geth_phy_write(mii_info, MII_1000BASETCONTROL, adv);
/* Start/Restart aneg */
genmii_restart_aneg(mii_info);
} else
genmii_setup_forced(mii_info);
return 0;
}
static int genmii_config_aneg(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
if (mii_info->autoneg) {
config_genmii_advert(mii_info);
genmii_restart_aneg(mii_info);
} else
genmii_setup_forced(mii_info);
return 0;
}
static int genmii_update_link(struct ugeth_mii_info *mii_info)
{
u16 status;
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Do a fake read */
ucc_geth_phy_read(mii_info, MII_BMSR);
/* Read link and autonegotiation status */
status = ucc_geth_phy_read(mii_info, MII_BMSR);
if ((status & BMSR_LSTATUS) == 0)
mii_info->link = 0;
else
mii_info->link = 1;
/* If we are autonegotiating, and not done,
* return an error */
if (mii_info->autoneg && !(status & BMSR_ANEGCOMPLETE))
return -EAGAIN;
return 0;
}
static int genmii_read_status(struct ugeth_mii_info *mii_info)
{
u16 status;
int err;
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Update the link, but return if there
* was an error */
err = genmii_update_link(mii_info);
if (err)
return err;
if (mii_info->autoneg) {
status = ucc_geth_phy_read(mii_info, MII_LPA);
if (status & (LPA_10FULL | LPA_100FULL))
mii_info->duplex = DUPLEX_FULL;
else
mii_info->duplex = DUPLEX_HALF;
if (status & (LPA_100FULL | LPA_100HALF))
mii_info->speed = SPEED_100;
else
mii_info->speed = SPEED_10;
mii_info->pause = 0;
}
/* On non-aneg, we assume what we put in BMCR is the speed,
* though magic-aneg shouldn't prevent this case from occurring
*/
return 0;
}
static int marvell_init(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
ucc_geth_phy_write(mii_info, 0x14, 0x0cd2);
ucc_geth_phy_write(mii_info, 0x1b,
(ucc_geth_phy_read(mii_info, 0x1b) & ~0x000f) | 0x000b);
ucc_geth_phy_write(mii_info, MII_BMCR,
ucc_geth_phy_read(mii_info, MII_BMCR) | BMCR_RESET);
msleep(4000);
return 0;
}
static int marvell_config_aneg(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
/* The Marvell PHY has an errata which requires
* that certain registers get written in order
* to restart autonegotiation */
ucc_geth_phy_write(mii_info, MII_BMCR, BMCR_RESET);
ucc_geth_phy_write(mii_info, 0x1d, 0x1f);
ucc_geth_phy_write(mii_info, 0x1e, 0x200c);
ucc_geth_phy_write(mii_info, 0x1d, 0x5);
ucc_geth_phy_write(mii_info, 0x1e, 0);
ucc_geth_phy_write(mii_info, 0x1e, 0x100);
gbit_config_aneg(mii_info);
return 0;
}
static int marvell_read_status(struct ugeth_mii_info *mii_info)
{
u16 status;
int err;
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Update the link, but return if there
* was an error */
err = genmii_update_link(mii_info);
if (err)
return err;
/* If the link is up, read the speed and duplex */
/* If we aren't autonegotiating, assume speeds
* are as set */
if (mii_info->autoneg && mii_info->link) {
int speed;
status = ucc_geth_phy_read(mii_info, MII_M1011_PHY_SPEC_STATUS);
/* Get the duplexity */
if (status & MII_M1011_PHY_SPEC_STATUS_FULLDUPLEX)
mii_info->duplex = DUPLEX_FULL;
else
mii_info->duplex = DUPLEX_HALF;
/* Get the speed */
speed = status & MII_M1011_PHY_SPEC_STATUS_SPD_MASK;
switch (speed) {
case MII_M1011_PHY_SPEC_STATUS_1000:
mii_info->speed = SPEED_1000;
break;
case MII_M1011_PHY_SPEC_STATUS_100:
mii_info->speed = SPEED_100;
break;
default:
mii_info->speed = SPEED_10;
break;
}
mii_info->pause = 0;
}
return 0;
}
static int marvell_ack_interrupt(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Clear the interrupts by reading the reg */
ucc_geth_phy_read(mii_info, MII_M1011_IEVENT);
return 0;
}
static int marvell_config_intr(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
if (mii_info->interrupts == MII_INTERRUPT_ENABLED)
ucc_geth_phy_write(mii_info, MII_M1011_IMASK, MII_M1011_IMASK_INIT);
else
ucc_geth_phy_write(mii_info, MII_M1011_IMASK, MII_M1011_IMASK_CLEAR);
return 0;
}
static int cis820x_init(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
ucc_geth_phy_write(mii_info, MII_CIS8201_AUX_CONSTAT,
MII_CIS8201_AUXCONSTAT_INIT);
ucc_geth_phy_write(mii_info, MII_CIS8201_EXT_CON1, MII_CIS8201_EXTCON1_INIT);
return 0;
}
static int cis820x_read_status(struct ugeth_mii_info *mii_info)
{
u16 status;
int err;
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Update the link, but return if there
* was an error */
err = genmii_update_link(mii_info);
if (err)
return err;
/* If the link is up, read the speed and duplex */
/* If we aren't autonegotiating, assume speeds
* are as set */
if (mii_info->autoneg && mii_info->link) {
int speed;
status = ucc_geth_phy_read(mii_info, MII_CIS8201_AUX_CONSTAT);
if (status & MII_CIS8201_AUXCONSTAT_DUPLEX)
mii_info->duplex = DUPLEX_FULL;
else
mii_info->duplex = DUPLEX_HALF;
speed = status & MII_CIS8201_AUXCONSTAT_SPEED;
switch (speed) {
case MII_CIS8201_AUXCONSTAT_GBIT:
mii_info->speed = SPEED_1000;
break;
case MII_CIS8201_AUXCONSTAT_100:
mii_info->speed = SPEED_100;
break;
default:
mii_info->speed = SPEED_10;
break;
}
}
return 0;
}
static int cis820x_ack_interrupt(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
ucc_geth_phy_read(mii_info, MII_CIS8201_ISTAT);
return 0;
}
static int cis820x_config_intr(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
if (mii_info->interrupts == MII_INTERRUPT_ENABLED)
ucc_geth_phy_write(mii_info, MII_CIS8201_IMASK, MII_CIS8201_IMASK_MASK);
else
ucc_geth_phy_write(mii_info, MII_CIS8201_IMASK, 0);
return 0;
}
#define DM9161_DELAY 10
static int dm9161_read_status(struct ugeth_mii_info *mii_info)
{
u16 status;
int err;
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Update the link, but return if there
* was an error */
err = genmii_update_link(mii_info);
if (err)
return err;
/* If the link is up, read the speed and duplex */
/* If we aren't autonegotiating, assume speeds
* are as set */
if (mii_info->autoneg && mii_info->link) {
status = ucc_geth_phy_read(mii_info, MII_DM9161_SCSR);
if (status & (MII_DM9161_SCSR_100F | MII_DM9161_SCSR_100H))
mii_info->speed = SPEED_100;
else
mii_info->speed = SPEED_10;
if (status & (MII_DM9161_SCSR_100F | MII_DM9161_SCSR_10F))
mii_info->duplex = DUPLEX_FULL;
else
mii_info->duplex = DUPLEX_HALF;
}
return 0;
}
static int dm9161_config_aneg(struct ugeth_mii_info *mii_info)
{
struct dm9161_private *priv = mii_info->priv;
ugphy_vdbg("%s: IN", __FUNCTION__);
if (0 == priv->resetdone)
return -EAGAIN;
return 0;
}
static void dm9161_timer(unsigned long data)
{
struct ugeth_mii_info *mii_info = (struct ugeth_mii_info *)data;
struct dm9161_private *priv = mii_info->priv;
u16 status = ucc_geth_phy_read(mii_info, MII_BMSR);
ugphy_vdbg("%s: IN", __FUNCTION__);
if (status & BMSR_ANEGCOMPLETE) {
priv->resetdone = 1;
} else
mod_timer(&priv->timer, jiffies + DM9161_DELAY * HZ);
}
static int dm9161_init(struct ugeth_mii_info *mii_info)
{
struct dm9161_private *priv;
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Allocate the private data structure */
priv = kmalloc(sizeof(struct dm9161_private), GFP_KERNEL);
if (NULL == priv)
return -ENOMEM;
mii_info->priv = priv;
/* Reset is not done yet */
priv->resetdone = 0;
ucc_geth_phy_write(mii_info, MII_BMCR,
ucc_geth_phy_read(mii_info, MII_BMCR) | BMCR_RESET);
ucc_geth_phy_write(mii_info, MII_BMCR,
ucc_geth_phy_read(mii_info, MII_BMCR) & ~BMCR_ISOLATE);
config_genmii_advert(mii_info);
/* Start/Restart aneg */
genmii_config_aneg(mii_info);
/* Start a timer for DM9161_DELAY seconds to wait
* for the PHY to be ready */
init_timer(&priv->timer);
priv->timer.function = &dm9161_timer;
priv->timer.data = (unsigned long)mii_info;
mod_timer(&priv->timer, jiffies + DM9161_DELAY * HZ);
return 0;
}
static void dm9161_close(struct ugeth_mii_info *mii_info)
{
struct dm9161_private *priv = mii_info->priv;
ugphy_vdbg("%s: IN", __FUNCTION__);
del_timer_sync(&priv->timer);
kfree(priv);
}
static int dm9161_ack_interrupt(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Clear the interrupts by reading the reg */
ucc_geth_phy_read(mii_info, MII_DM9161_INTR);
return 0;
}
static int dm9161_config_intr(struct ugeth_mii_info *mii_info)
{
ugphy_vdbg("%s: IN", __FUNCTION__);
if (mii_info->interrupts == MII_INTERRUPT_ENABLED)
ucc_geth_phy_write(mii_info, MII_DM9161_INTR, MII_DM9161_INTR_INIT);
else
ucc_geth_phy_write(mii_info, MII_DM9161_INTR, MII_DM9161_INTR_STOP);
return 0;
}
/* Cicada 820x */
static struct phy_info phy_info_cis820x = {
.phy_id = 0x000fc440,
.name = "Cicada Cis8204",
.phy_id_mask = 0x000fffc0,
.features = MII_GBIT_FEATURES,
.init = &cis820x_init,
.config_aneg = &gbit_config_aneg,
.read_status = &cis820x_read_status,
.ack_interrupt = &cis820x_ack_interrupt,
.config_intr = &cis820x_config_intr,
};
static struct phy_info phy_info_dm9161 = {
.phy_id = 0x0181b880,
.phy_id_mask = 0x0ffffff0,
.name = "Davicom DM9161E",
.init = dm9161_init,
.config_aneg = dm9161_config_aneg,
.read_status = dm9161_read_status,
.close = dm9161_close,
};
static struct phy_info phy_info_dm9161a = {
.phy_id = 0x0181b8a0,
.phy_id_mask = 0x0ffffff0,
.name = "Davicom DM9161A",
.features = MII_BASIC_FEATURES,
.init = dm9161_init,
.config_aneg = dm9161_config_aneg,
.read_status = dm9161_read_status,
.ack_interrupt = dm9161_ack_interrupt,
.config_intr = dm9161_config_intr,
.close = dm9161_close,
};
static struct phy_info phy_info_marvell = {
.phy_id = 0x01410c00,
.phy_id_mask = 0xffffff00,
.name = "Marvell 88E11x1",
.features = MII_GBIT_FEATURES,
.init = &marvell_init,
.config_aneg = &marvell_config_aneg,
.read_status = &marvell_read_status,
.ack_interrupt = &marvell_ack_interrupt,
.config_intr = &marvell_config_intr,
};
static struct phy_info phy_info_genmii = {
.phy_id = 0x00000000,
.phy_id_mask = 0x00000000,
.name = "Generic MII",
.features = MII_BASIC_FEATURES,
.config_aneg = genmii_config_aneg,
.read_status = genmii_read_status,
};
static struct phy_info *phy_info[] = {
&phy_info_cis820x,
&phy_info_marvell,
&phy_info_dm9161,
&phy_info_dm9161a,
&phy_info_genmii,
NULL
};
/* Use the PHY ID registers to determine what type of PHY is attached
* to device dev. return a struct phy_info structure describing that PHY
*/
struct phy_info *get_phy_info(struct ugeth_mii_info *mii_info)
{
u16 phy_reg;
u32 phy_ID;
int i;
struct phy_info *theInfo = NULL;
struct net_device *dev = mii_info->dev;
ugphy_vdbg("%s: IN", __FUNCTION__);
/* Grab the bits from PHYIR1, and put them in the upper half */
phy_reg = ucc_geth_phy_read(mii_info, MII_PHYSID1);
phy_ID = (phy_reg & 0xffff) << 16;
/* Grab the bits from PHYIR2, and put them in the lower half */
phy_reg = ucc_geth_phy_read(mii_info, MII_PHYSID2);
phy_ID |= (phy_reg & 0xffff);
/* loop through all the known PHY types, and find one that */
/* matches the ID we read from the PHY. */
for (i = 0; phy_info[i]; i++)
if (phy_info[i]->phy_id == (phy_ID & phy_info[i]->phy_id_mask)){
theInfo = phy_info[i];
break;
}
/* This shouldn't happen, as we have generic PHY support */
if (theInfo == NULL) {
ugphy_info("%s: PHY id %x is not supported!", dev->name,
phy_ID);
return NULL;
} else {
ugphy_info("%s: PHY is %s (%x)", dev->name, theInfo->name,
phy_ID);
}
return theInfo;
}


@ -1,217 +0,0 @@
/*
* Copyright (C) Freescale Semicondutor, Inc. 2006. All rights reserved.
*
* Author: Shlomi Gridish <gridish@freescale.com>
*
* Description:
* UCC GETH Driver -- PHY handling
*
* Changelog:
* Jun 28, 2006 Li Yang <LeoLi@freescale.com>
* - Rearrange code and style fixes
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*
*/
#ifndef __UCC_GETH_PHY_H__
#define __UCC_GETH_PHY_H__
#define MII_end ((u32)-2)
#define MII_read ((u32)-1)
#define MIIMIND_BUSY 0x00000001
#define MIIMIND_NOTVALID 0x00000004
#define UGETH_AN_TIMEOUT 2000
/* 1000BT control (Marvell & BCM54xx at least) */
#define MII_1000BASETCONTROL 0x09
#define MII_1000BASETCONTROL_FULLDUPLEXCAP 0x0200
#define MII_1000BASETCONTROL_HALFDUPLEXCAP 0x0100
/* Cicada Extended Control Register 1 */
#define MII_CIS8201_EXT_CON1 0x17
#define MII_CIS8201_EXTCON1_INIT 0x0000
/* Cicada Interrupt Mask Register */
#define MII_CIS8201_IMASK 0x19
#define MII_CIS8201_IMASK_IEN 0x8000
#define MII_CIS8201_IMASK_SPEED 0x4000
#define MII_CIS8201_IMASK_LINK 0x2000
#define MII_CIS8201_IMASK_DUPLEX 0x1000
#define MII_CIS8201_IMASK_MASK 0xf000
/* Cicada Interrupt Status Register */
#define MII_CIS8201_ISTAT 0x1a
#define MII_CIS8201_ISTAT_STATUS 0x8000
#define MII_CIS8201_ISTAT_SPEED 0x4000
#define MII_CIS8201_ISTAT_LINK 0x2000
#define MII_CIS8201_ISTAT_DUPLEX 0x1000
/* Cicada Auxiliary Control/Status Register */
#define MII_CIS8201_AUX_CONSTAT 0x1c
#define MII_CIS8201_AUXCONSTAT_INIT 0x0004
#define MII_CIS8201_AUXCONSTAT_DUPLEX 0x0020
#define MII_CIS8201_AUXCONSTAT_SPEED 0x0018
#define MII_CIS8201_AUXCONSTAT_GBIT 0x0010
#define MII_CIS8201_AUXCONSTAT_100 0x0008
/* 88E1011 PHY Status Register */
#define MII_M1011_PHY_SPEC_STATUS 0x11
#define MII_M1011_PHY_SPEC_STATUS_1000 0x8000
#define MII_M1011_PHY_SPEC_STATUS_100 0x4000
#define MII_M1011_PHY_SPEC_STATUS_SPD_MASK 0xc000
#define MII_M1011_PHY_SPEC_STATUS_FULLDUPLEX 0x2000
#define MII_M1011_PHY_SPEC_STATUS_RESOLVED 0x0800
#define MII_M1011_PHY_SPEC_STATUS_LINK 0x0400
#define MII_M1011_IEVENT 0x13
#define MII_M1011_IEVENT_CLEAR 0x0000
#define MII_M1011_IMASK 0x12
#define MII_M1011_IMASK_INIT 0x6400
#define MII_M1011_IMASK_CLEAR 0x0000
#define MII_DM9161_SCR 0x10
#define MII_DM9161_SCR_INIT 0x0610
/* DM9161 Specified Configuration and Status Register */
#define MII_DM9161_SCSR 0x11
#define MII_DM9161_SCSR_100F 0x8000
#define MII_DM9161_SCSR_100H 0x4000
#define MII_DM9161_SCSR_10F 0x2000
#define MII_DM9161_SCSR_10H 0x1000
/* DM9161 Interrupt Register */
#define MII_DM9161_INTR 0x15
#define MII_DM9161_INTR_PEND 0x8000
#define MII_DM9161_INTR_DPLX_MASK 0x0800
#define MII_DM9161_INTR_SPD_MASK 0x0400
#define MII_DM9161_INTR_LINK_MASK 0x0200
#define MII_DM9161_INTR_MASK 0x0100
#define MII_DM9161_INTR_DPLX_CHANGE 0x0010
#define MII_DM9161_INTR_SPD_CHANGE 0x0008
#define MII_DM9161_INTR_LINK_CHANGE 0x0004
#define MII_DM9161_INTR_INIT 0x0000
#define MII_DM9161_INTR_STOP \
(MII_DM9161_INTR_DPLX_MASK | MII_DM9161_INTR_SPD_MASK \
| MII_DM9161_INTR_LINK_MASK | MII_DM9161_INTR_MASK)
/* DM9161 10BT Configuration/Status */
#define MII_DM9161_10BTCSR 0x12
#define MII_DM9161_10BTCSR_INIT 0x7800
#define MII_BASIC_FEATURES (SUPPORTED_10baseT_Half | \
SUPPORTED_10baseT_Full | \
SUPPORTED_100baseT_Half | \
SUPPORTED_100baseT_Full | \
SUPPORTED_Autoneg | \
SUPPORTED_TP | \
SUPPORTED_MII)
#define MII_GBIT_FEATURES (MII_BASIC_FEATURES | \
SUPPORTED_1000baseT_Half | \
SUPPORTED_1000baseT_Full)
#define MII_READ_COMMAND 0x00000001
#define MII_INTERRUPT_DISABLED 0x0
#define MII_INTERRUPT_ENABLED 0x1
/* Taken from mii_if_info and sungem_phy.h */
struct ugeth_mii_info {
/* Information about the PHY type */
/* And management functions */
struct phy_info *phyinfo;
struct ucc_mii_mng *mii_regs;
/* forced speed & duplex (no autoneg)
* partner speed & duplex & pause (autoneg)
*/
int speed;
int duplex;
int pause;
/* The most recently read link state */
int link;
/* Enabled Interrupts */
u32 interrupts;
u32 advertising;
int autoneg;
int mii_id;
/* private data pointer */
/* For use by PHYs to maintain extra state */
void *priv;
/* Provided by host chip */
struct net_device *dev;
/* A lock to ensure that only one thing can read/write
* the MDIO bus at a time */
spinlock_t mdio_lock;
/* Provided by ethernet driver */
int (*mdio_read) (struct net_device * dev, int mii_id, int reg);
void (*mdio_write) (struct net_device * dev, int mii_id, int reg,
int val);
};
/* struct phy_info: a structure which defines attributes for a PHY
*
* id will contain a number which represents the PHY. During
* startup, the driver will poll the PHY to find out what its
* UID--as defined by registers 2 and 3--is. The 32-bit result
* gotten from the PHY will be ANDed with phy_id_mask to
* discard any bits which may change based on revision numbers
* unimportant to functionality
*
* There are 6 commands which take a ugeth_mii_info structure.
* Each PHY must declare config_aneg, and read_status.
*/
struct phy_info {
u32 phy_id;
char *name;
unsigned int phy_id_mask;
u32 features;
/* Called to initialize the PHY */
int (*init) (struct ugeth_mii_info * mii_info);
/* Called to suspend the PHY for power */
int (*suspend) (struct ugeth_mii_info * mii_info);
/* Reconfigures autonegotiation (or disables it) */
int (*config_aneg) (struct ugeth_mii_info * mii_info);
/* Determines the negotiated speed and duplex */
int (*read_status) (struct ugeth_mii_info * mii_info);
/* Clears any pending interrupts */
int (*ack_interrupt) (struct ugeth_mii_info * mii_info);
/* Enables or disables interrupts */
int (*config_intr) (struct ugeth_mii_info * mii_info);
/* Clears up any memory if needed */
void (*close) (struct ugeth_mii_info * mii_info);
};
struct phy_info *get_phy_info(struct ugeth_mii_info *mii_info);
void write_phy_reg(struct net_device *dev, int mii_id, int regnum, int value);
int read_phy_reg(struct net_device *dev, int mii_id, int regnum);
void mii_clear_phy_interrupt(struct ugeth_mii_info *mii_info);
void mii_configure_phy_interrupt(struct ugeth_mii_info *mii_info,
u32 interrupts);
struct dm9161_private {
struct timer_list timer;
int resetdone;
};
#endif /* __UCC_GETH_PHY_H__ */


@ -37,16 +37,16 @@
struct hdlc_header {
u8 address;
u8 control;
u16 protocol;
__be16 protocol;
}__attribute__ ((packed));
struct cisco_packet {
u32 type; /* code */
u32 par1;
u32 par2;
u16 rel; /* reliability */
u32 time;
__be32 type; /* code */
__be32 par1;
__be32 par2;
__be16 rel; /* reliability */
__be32 time;
}__attribute__ ((packed));
#define CISCO_PACKET_LEN 18
#define CISCO_BIG_PACKET_LEN 20
@ -97,7 +97,7 @@ static int cisco_hard_header(struct sk_buff *skb, struct net_device *dev,
static void cisco_keepalive_send(struct net_device *dev, u32 type,
u32 par1, u32 par2)
__be32 par1, __be32 par2)
{
struct sk_buff *skb;
struct cisco_packet *data;
@ -115,9 +115,9 @@ static void cisco_keepalive_send(struct net_device *dev, u32 type,
data = (struct cisco_packet*)(skb->data + 4);
data->type = htonl(type);
data->par1 = htonl(par1);
data->par2 = htonl(par2);
data->rel = 0xFFFF;
data->par1 = par1;
data->par2 = par2;
data->rel = __constant_htons(0xFFFF);
/* we will need do_div here if 1000 % HZ != 0 */
data->time = htonl((jiffies - INITIAL_JIFFIES) * (1000 / HZ));
@ -193,7 +193,7 @@ static int cisco_rx(struct sk_buff *skb)
case CISCO_ADDR_REQ: /* Stolen from syncppp.c :-) */
in_dev = dev->ip_ptr;
addr = 0;
mask = ~0; /* is the mask correct? */
mask = __constant_htonl(~0); /* is the mask correct? */
if (in_dev != NULL) {
struct in_ifaddr **ifap = &in_dev->ifa_list;
@ -245,7 +245,7 @@ static int cisco_rx(struct sk_buff *skb)
} /* switch(protocol) */
printk(KERN_INFO "%s: Unsupported protocol %x\n", dev->name,
data->protocol);
ntohs(data->protocol));
dev_kfree_skb_any(skb);
return NET_RX_DROP;
@ -270,8 +270,9 @@ static void cisco_timer(unsigned long arg)
netif_dormant_on(dev);
}
cisco_keepalive_send(dev, CISCO_KEEPALIVE_REQ, ++state(hdlc)->txseq,
state(hdlc)->rxseq);
cisco_keepalive_send(dev, CISCO_KEEPALIVE_REQ,
htonl(++state(hdlc)->txseq),
htonl(state(hdlc)->rxseq));
state(hdlc)->request_sent = 1;
state(hdlc)->timer.expires = jiffies +
state(hdlc)->settings.interval * HZ;
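The conversion above from plain u16/u32 to __be16/__be32 is what lets sparse's endianness checking (make C=1 with -D__CHECK_ENDIAN__) flag any host-order/network-order mix-up. A minimal sketch of the pattern with an illustrative struct, not the hdlc_cisco one: wire-format fields carry the __be types and are converted only at the boundaries.

#include <linux/types.h>
#include <asm/byteorder.h>

struct example_hdr {
        __be16 protocol;
        __be32 par1;
} __attribute__ ((packed));

static void example_fill(struct example_hdr *hdr, u16 proto, u32 par1)
{
        hdr->protocol = htons(proto);   /* host -> network at assignment */
        hdr->par1 = htonl(par1);
}

static u16 example_proto(const struct example_hdr *hdr)
{
        return ntohs(hdr->protocol);    /* network -> host when consumed */
}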


@ -288,31 +288,31 @@ static int fr_hard_header(struct sk_buff **skb_p, u16 dlci)
struct sk_buff *skb = *skb_p;
switch (skb->protocol) {
case __constant_ntohs(NLPID_CCITT_ANSI_LMI):
case __constant_htons(NLPID_CCITT_ANSI_LMI):
head_len = 4;
skb_push(skb, head_len);
skb->data[3] = NLPID_CCITT_ANSI_LMI;
break;
case __constant_ntohs(NLPID_CISCO_LMI):
case __constant_htons(NLPID_CISCO_LMI):
head_len = 4;
skb_push(skb, head_len);
skb->data[3] = NLPID_CISCO_LMI;
break;
case __constant_ntohs(ETH_P_IP):
case __constant_htons(ETH_P_IP):
head_len = 4;
skb_push(skb, head_len);
skb->data[3] = NLPID_IP;
break;
case __constant_ntohs(ETH_P_IPV6):
case __constant_htons(ETH_P_IPV6):
head_len = 4;
skb_push(skb, head_len);
skb->data[3] = NLPID_IPV6;
break;
case __constant_ntohs(ETH_P_802_3):
case __constant_htons(ETH_P_802_3):
head_len = 10;
if (skb_headroom(skb) < head_len) {
struct sk_buff *skb2 = skb_realloc_headroom(skb,
@ -340,7 +340,7 @@ static int fr_hard_header(struct sk_buff **skb_p, u16 dlci)
skb->data[5] = FR_PAD;
skb->data[6] = FR_PAD;
skb->data[7] = FR_PAD;
*(u16*)(skb->data + 8) = skb->protocol;
*(__be16*)(skb->data + 8) = skb->protocol;
}
dlci_to_q922(skb->data, dlci);
@ -974,8 +974,8 @@ static int fr_rx(struct sk_buff *skb)
} else if (skb->len > 10 && data[3] == FR_PAD &&
data[4] == NLPID_SNAP && data[5] == FR_PAD) {
u16 oui = ntohs(*(u16*)(data + 6));
u16 pid = ntohs(*(u16*)(data + 8));
u16 oui = ntohs(*(__be16*)(data + 6));
u16 pid = ntohs(*(__be16*)(data + 8));
skb_pull(skb, 10);
switch ((((u32)oui) << 16) | pid) {
@ -1127,7 +1127,7 @@ static int fr_add_pvc(struct net_device *frad, unsigned int dlci, int type)
memcpy(dev->dev_addr, "\x00\x01", 2);
get_random_bytes(dev->dev_addr + 2, ETH_ALEN - 2);
} else {
*(u16*)dev->dev_addr = htons(dlci);
*(__be16*)dev->dev_addr = htons(dlci);
dlci_to_q922(dev->broadcast, dlci);
}
dev->hard_start_xmit = pvc_xmit;
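The frame-relay hunk above switches its case labels from __constant_ntohs() to __constant_htons(). Since skb->protocol is stored in network byte order and a case label must be a compile-time constant, it is the constant, not the variable, that gets byte-swapped. A small hypothetical example of the same construction (the classify function is illustrative; the ETH_P_* values are the standard ones):

#include <linux/if_ether.h>
#include <linux/skbuff.h>

static int example_classify(const struct sk_buff *skb)
{
        switch (skb->protocol) {
        case __constant_htons(ETH_P_IP):        /* compile-time swapped constant */
                return 4;
        case __constant_htons(ETH_P_IPV6):
                return 6;
        default:
                return -1;
        }
}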


@ -265,6 +265,19 @@ config IPW2200_DEBUG
If you are not sure, say N here.
config LIBERTAS_USB
tristate "Marvell Libertas 8388 802.11a/b/g cards"
depends on NET_RADIO && USB
select FW_LOADER
---help---
A driver for Marvell Libertas 8388 USB devices.
config LIBERTAS_USB_DEBUG
bool "Enable full debugging output in the Libertas USB module."
depends on LIBERTAS_USB
---help---
Debugging support.
config AIRO
tristate "Cisco/Aironet 34X/35X/4500/4800 ISA and PCI cards"
depends on ISA_DMA_API && WLAN_80211 && (PCI || BROKEN)


@ -43,3 +43,4 @@ obj-$(CONFIG_PCMCIA_RAYCS) += ray_cs.o
obj-$(CONFIG_PCMCIA_WL3501) += wl3501_cs.o
obj-$(CONFIG_USB_ZD1201) += zd1201.o
obj-$(CONFIG_LIBERTAS_USB) += libertas/


@ -1,25 +0,0 @@
README
------
This directory is mostly for Wireless LAN drivers, in their
various incarnations (ISA, PCI, Pcmcia...).
This separate directory is needed because a lot of driver work
on different bus (typically PCI + Pcmcia) and share 95% of the
code. This allow the code and the config options to be in one single
place instead of scattered all over the driver tree, which is never
100% satisfactory.
Note : if you want more info on the topic of Wireless LANs,
you are kindly invited to have a look at the Wireless Howto :
http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/
Some Wireless LAN drivers, like orinoco_cs, require the use of
Wireless Tools to be configured :
http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/Tools.html
Special notes for distribution maintainers :
1) wvlan_cs will be discontinued soon in favor of orinoco_cs
2) Please add Wireless Tools support in your scripts
Have fun...
Jean


@ -1145,6 +1145,7 @@ static void airo_networks_free(struct airo_info *ai);
struct airo_info {
struct net_device_stats stats;
struct net_device *dev;
struct list_head dev_list;
/* Note, we can have MAX_FIDS outstanding. FIDs are 16-bits, so we
use the high bit to mark whether it is in use. */
#define MAX_FIDS 6
@ -2360,6 +2361,21 @@ static int airo_change_mtu(struct net_device *dev, int new_mtu)
return 0;
}
static LIST_HEAD(airo_devices);
static void add_airo_dev(struct airo_info *ai)
{
/* Upper layers already keep track of PCI devices,
* so we only need to remember our non-PCI cards. */
if (!ai->pci)
list_add_tail(&ai->dev_list, &airo_devices);
}
static void del_airo_dev(struct airo_info *ai)
{
if (!ai->pci)
list_del(&ai->dev_list);
}
static int airo_close(struct net_device *dev) {
struct airo_info *ai = dev->priv;
@ -2381,8 +2397,6 @@ static int airo_close(struct net_device *dev) {
return 0;
}
static void del_airo_dev( struct net_device *dev );
void stop_airo_card( struct net_device *dev, int freeres )
{
struct airo_info *ai = dev->priv;
@ -2434,14 +2448,12 @@ void stop_airo_card( struct net_device *dev, int freeres )
}
}
crypto_free_cipher(ai->tfm);
del_airo_dev( dev );
del_airo_dev(ai);
free_netdev( dev );
}
EXPORT_SYMBOL(stop_airo_card);
static int add_airo_dev( struct net_device *dev );
static int wll_header_parse(struct sk_buff *skb, unsigned char *haddr)
{
memcpy(haddr, skb_mac_header(skb) + 10, ETH_ALEN);
@ -2740,8 +2752,6 @@ static int airo_networks_allocate(struct airo_info *ai)
static void airo_networks_free(struct airo_info *ai)
{
if (!ai->networks)
return;
kfree(ai->networks);
ai->networks = NULL;
}
@ -2816,12 +2826,10 @@ static struct net_device *_init_airo_card( unsigned short irq, int port,
if (IS_ERR(ai->airo_thread_task))
goto err_out_free;
ai->tfm = NULL;
rc = add_airo_dev( dev );
if (rc)
goto err_out_thr;
add_airo_dev(ai);
if (airo_networks_allocate (ai))
goto err_out_unlink;
goto err_out_thr;
airo_networks_initialize (ai);
/* The Airo-specific entries in the device structure. */
@ -2937,9 +2945,8 @@ err_out_irq:
free_irq(dev->irq, dev);
err_out_nets:
airo_networks_free(ai);
err_out_unlink:
del_airo_dev(dev);
err_out_thr:
del_airo_dev(ai);
set_bit(JOB_DIE, &ai->jobs);
kthread_stop(ai->airo_thread_task);
err_out_free:
@ -5535,11 +5542,6 @@ static int proc_close( struct inode *inode, struct file *file )
return 0;
}
static struct net_device_list {
struct net_device *dev;
struct net_device_list *next;
} *airo_devices;
/* Since the card doesn't automatically switch to the right WEP mode,
we will make it do it. If the card isn't associated, every secs we
will switch WEP modes to see if that will help. If the card is
@ -5582,26 +5584,6 @@ static void timer_func( struct net_device *dev ) {
apriv->expires = RUN_AT(HZ*3);
}
static int add_airo_dev( struct net_device *dev ) {
struct net_device_list *node = kmalloc( sizeof( *node ), GFP_KERNEL );
if ( !node )
return -ENOMEM;
node->dev = dev;
node->next = airo_devices;
airo_devices = node;
return 0;
}
static void del_airo_dev( struct net_device *dev ) {
struct net_device_list **p = &airo_devices;
while( *p && ( (*p)->dev != dev ) )
p = &(*p)->next;
if ( *p && (*p)->dev == dev )
*p = (*p)->next;
}
#ifdef CONFIG_PCI
static int __devinit airo_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *pent)
@ -5625,6 +5607,10 @@ static int __devinit airo_pci_probe(struct pci_dev *pdev,
static void __devexit airo_pci_remove(struct pci_dev *pdev)
{
struct net_device *dev = pci_get_drvdata(pdev);
airo_print_info(dev->name, "Unregistering...");
stop_airo_card(dev, 1);
}
static int airo_pci_suspend(struct pci_dev *pdev, pm_message_t state)
@ -5750,9 +5736,11 @@ static int __init airo_init_module( void )
static void __exit airo_cleanup_module( void )
{
while( airo_devices ) {
airo_print_info(airo_devices->dev->name, "Unregistering...\n");
stop_airo_card( airo_devices->dev, 1 );
struct airo_info *ai;
while(!list_empty(&airo_devices)) {
ai = list_entry(airo_devices.next, struct airo_info, dev_list);
airo_print_info(ai->dev->name, "Unregistering...");
stop_airo_card(ai->dev, 1);
}
#ifdef CONFIG_PCI
pci_unregister_driver(&airo_driver);
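The airo change above drops the hand-rolled struct net_device_list in favour of the kernel's generic list_head, which removes the allocation failure path and the manual pointer walking. A stand-alone sketch of that idiom, with illustrative names:

#include <linux/list.h>
#include <linux/slab.h>

struct example_card {
        struct list_head dev_list;      /* linkage into the global list */
        /* ... per-card state ... */
};

static LIST_HEAD(example_cards);

static void example_add(struct example_card *card)
{
        list_add_tail(&card->dev_list, &example_cards);
}

static void example_remove_all(void)
{
        struct example_card *card;

        while (!list_empty(&example_cards)) {
                card = list_entry(example_cards.next,
                                  struct example_card, dev_list);
                list_del(&card->dev_list);
                kfree(card);
        }
}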


@ -277,11 +277,14 @@
#define BCM43xx_SBTMSTATELOW_REJECT 0x02
#define BCM43xx_SBTMSTATELOW_CLOCK 0x10000
#define BCM43xx_SBTMSTATELOW_FORCE_GATE_CLOCK 0x20000
#define BCM43xx_SBTMSTATELOW_G_MODE_ENABLE 0x20000000
/* sbtmstatehigh state flags */
#define BCM43xx_SBTMSTATEHIGH_SERROR 0x00000001
#define BCM43xx_SBTMSTATEHIGH_BUSY 0x00000004
#define BCM43xx_SBTMSTATEHIGH_TIMEOUT 0x00000020
#define BCM43xx_SBTMSTATEHIGH_G_PHY_AVAIL 0x00010000
#define BCM43xx_SBTMSTATEHIGH_A_PHY_AVAIL 0x00020000
#define BCM43xx_SBTMSTATEHIGH_COREFLAGS 0x1FFF0000
#define BCM43xx_SBTMSTATEHIGH_DMA64BIT 0x10000000
#define BCM43xx_SBTMSTATEHIGH_GATEDCLK 0x20000000


@ -32,7 +32,7 @@
#include <linux/netdevice.h>
#include <linux/pci.h>
#include <linux/string.h>
#include <linux/utsrelease.h>
#include <linux/utsname.h>
static void bcm43xx_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
@ -40,7 +40,7 @@ static void bcm43xx_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *
struct bcm43xx_private *bcm = bcm43xx_priv(dev);
strncpy(info->driver, KBUILD_MODNAME, sizeof(info->driver));
strncpy(info->version, UTS_RELEASE, sizeof(info->version));
strncpy(info->version, utsname()->release, sizeof(info->version));
strncpy(info->bus_info, pci_name(bcm->pci_dev), ETHTOOL_BUSINFO_LEN);
}


@ -1407,7 +1407,7 @@ void bcm43xx_wireless_core_reset(struct bcm43xx_private *bcm, int connect_phy)
& ~(BCM43xx_SBF_MAC_ENABLED | 0x00000002));
} else {
if (connect_phy)
flags |= 0x20000000;
flags |= BCM43xx_SBTMSTATELOW_G_MODE_ENABLE;
bcm43xx_phy_connect(bcm, connect_phy);
bcm43xx_core_enable(bcm, flags);
bcm43xx_write16(bcm, 0x03E6, 0x0000);
@ -3604,7 +3604,7 @@ int bcm43xx_select_wireless_core(struct bcm43xx_private *bcm,
u32 sbtmstatelow;
sbtmstatelow = bcm43xx_read32(bcm, BCM43xx_CIR_SBTMSTATELOW);
sbtmstatelow |= 0x20000000;
sbtmstatelow |= BCM43xx_SBTMSTATELOW_G_MODE_ENABLE;
bcm43xx_write32(bcm, BCM43xx_CIR_SBTMSTATELOW, sbtmstatelow);
}
err = wireless_core_up(bcm, 1);


@ -168,16 +168,16 @@ int bcm43xx_phy_connect(struct bcm43xx_private *bcm, int connect)
flags = bcm43xx_read32(bcm, BCM43xx_CIR_SBTMSTATEHIGH);
if (connect) {
if (!(flags & 0x00010000))
if (!(flags & BCM43xx_SBTMSTATEHIGH_G_PHY_AVAIL))
return -ENODEV;
flags = bcm43xx_read32(bcm, BCM43xx_CIR_SBTMSTATELOW);
flags |= (0x800 << 18);
flags |= BCM43xx_SBTMSTATELOW_G_MODE_ENABLE;
bcm43xx_write32(bcm, BCM43xx_CIR_SBTMSTATELOW, flags);
} else {
if (!(flags & 0x00020000))
if (!(flags & BCM43xx_SBTMSTATEHIGH_A_PHY_AVAIL))
return -ENODEV;
flags = bcm43xx_read32(bcm, BCM43xx_CIR_SBTMSTATELOW);
flags &= ~(0x800 << 18);
flags &= ~BCM43xx_SBTMSTATELOW_G_MODE_ENABLE;
bcm43xx_write32(bcm, BCM43xx_CIR_SBTMSTATELOW, flags);
}
out:
@ -300,16 +300,20 @@ static void bcm43xx_phy_agcsetup(struct bcm43xx_private *bcm)
if (phy->rev > 2) {
bcm43xx_phy_write(bcm, 0x0422, 0x287A);
bcm43xx_phy_write(bcm, 0x0420, (bcm43xx_phy_read(bcm, 0x0420) & 0x0FFF) | 0x3000);
bcm43xx_phy_write(bcm, 0x0420, (bcm43xx_phy_read(bcm, 0x0420)
& 0x0FFF) | 0x3000);
}
bcm43xx_phy_write(bcm, 0x04A8, (bcm43xx_phy_read(bcm, 0x04A8) & 0x8080) | 0x7874);
bcm43xx_phy_write(bcm, 0x04A8, (bcm43xx_phy_read(bcm, 0x04A8) & 0x8080)
| 0x7874);
bcm43xx_phy_write(bcm, 0x048E, 0x1C00);
if (phy->rev == 1) {
bcm43xx_phy_write(bcm, 0x04AB, (bcm43xx_phy_read(bcm, 0x04AB) & 0xF0FF) | 0x0600);
bcm43xx_phy_write(bcm, 0x04AB, (bcm43xx_phy_read(bcm, 0x04AB)
& 0xF0FF) | 0x0600);
bcm43xx_phy_write(bcm, 0x048B, 0x005E);
bcm43xx_phy_write(bcm, 0x048C, (bcm43xx_phy_read(bcm, 0x048C) & 0xFF00) | 0x001E);
bcm43xx_phy_write(bcm, 0x048C, (bcm43xx_phy_read(bcm, 0x048C)
& 0xFF00) | 0x001E);
bcm43xx_phy_write(bcm, 0x048D, 0x0002);
}
@ -335,7 +339,8 @@ static void bcm43xx_phy_setupg(struct bcm43xx_private *bcm)
if (phy->rev == 1) {
bcm43xx_phy_write(bcm, 0x0406, 0x4F19);
bcm43xx_phy_write(bcm, BCM43xx_PHY_G_CRS,
(bcm43xx_phy_read(bcm, BCM43xx_PHY_G_CRS) & 0xFC3F) | 0x0340);
(bcm43xx_phy_read(bcm, BCM43xx_PHY_G_CRS)
& 0xFC3F) | 0x0340);
bcm43xx_phy_write(bcm, 0x042C, 0x005A);
bcm43xx_phy_write(bcm, 0x0427, 0x001A);

View File

@ -48,6 +48,10 @@ void bcm43xx_raw_phy_unlock(struct bcm43xx_private *bcm);
local_irq_restore(flags); \
} while (0)
/* Card uses the loopback gain stuff */
#define has_loopback_gain(phy) \
(((phy)->rev > 1) || ((phy)->connected))
u16 bcm43xx_phy_read(struct bcm43xx_private *bcm, u16 offset);
void bcm43xx_phy_write(struct bcm43xx_private *bcm, u16 offset, u16 val);

View File

@ -1343,11 +1343,110 @@ u16 bcm43xx_radio_calibrationvalue(struct bcm43xx_private *bcm)
return ret;
}
#define LPD(L, P, D) (((L) << 2) | ((P) << 1) | ((D) << 0))
static u16 bcm43xx_get_812_value(struct bcm43xx_private *bcm, u8 lpd)
{
struct bcm43xx_phyinfo *phy = bcm43xx_current_phy(bcm);
struct bcm43xx_radioinfo *radio = bcm43xx_current_radio(bcm);
u16 loop_or = 0;
u16 adj_loopback_gain = phy->loopback_gain[0];
u8 loop;
u16 extern_lna_control;
if (!phy->connected)
return 0;
if (!has_loopback_gain(phy)) {
if (phy->rev < 7 || !(bcm->sprom.boardflags
& BCM43xx_BFL_EXTLNA)) {
switch (lpd) {
case LPD(0, 1, 1):
return 0x0FB2;
case LPD(0, 0, 1):
return 0x00B2;
case LPD(1, 0, 1):
return 0x30B2;
case LPD(1, 0, 0):
return 0x30B3;
default:
assert(0);
}
} else {
switch (lpd) {
case LPD(0, 1, 1):
return 0x8FB2;
case LPD(0, 0, 1):
return 0x80B2;
case LPD(1, 0, 1):
return 0x20B2;
case LPD(1, 0, 0):
return 0x20B3;
default:
assert(0);
}
}
} else {
if (radio->revision == 8)
adj_loopback_gain += 0x003E;
else
adj_loopback_gain += 0x0026;
if (adj_loopback_gain >= 0x46) {
adj_loopback_gain -= 0x46;
extern_lna_control = 0x3000;
} else if (adj_loopback_gain >= 0x3A) {
adj_loopback_gain -= 0x3A;
extern_lna_control = 0x2000;
} else if (adj_loopback_gain >= 0x2E) {
adj_loopback_gain -= 0x2E;
extern_lna_control = 0x1000;
} else {
adj_loopback_gain -= 0x10;
extern_lna_control = 0x0000;
}
for (loop = 0; loop < 16; loop++) {
u16 tmp = adj_loopback_gain - 6 * loop;
if (tmp < 6)
break;
}
loop_or = (loop << 8) | extern_lna_control;
if (phy->rev >= 7 && bcm->sprom.boardflags
& BCM43xx_BFL_EXTLNA) {
if (extern_lna_control)
loop_or |= 0x8000;
switch (lpd) {
case LPD(0, 1, 1):
return 0x8F92;
case LPD(0, 0, 1):
return (0x8092 | loop_or);
case LPD(1, 0, 1):
return (0x2092 | loop_or);
case LPD(1, 0, 0):
return (0x2093 | loop_or);
default:
assert(0);
}
} else {
switch (lpd) {
case LPD(0, 1, 1):
return 0x0F92;
case LPD(0, 0, 1):
case LPD(1, 0, 1):
return (0x0092 | loop_or);
case LPD(1, 0, 0):
return (0x0093 | loop_or);
default:
assert(0);
}
}
}
return 0;
}
u16 bcm43xx_radio_init2050(struct bcm43xx_private *bcm)
{
struct bcm43xx_phyinfo *phy = bcm43xx_current_phy(bcm);
struct bcm43xx_radioinfo *radio = bcm43xx_current_radio(bcm);
u16 backup[19] = { 0 };
u16 backup[21] = { 0 };
u16 ret;
u16 i, j;
u32 tmp1 = 0, tmp2 = 0;
@ -1373,19 +1472,36 @@ u16 bcm43xx_radio_init2050(struct bcm43xx_private *bcm)
backup[8] = bcm43xx_phy_read(bcm, BCM43xx_PHY_G_CRS);
backup[9] = bcm43xx_phy_read(bcm, 0x0802);
bcm43xx_phy_write(bcm, 0x0814,
(bcm43xx_phy_read(bcm, 0x0814) | 0x0003));
(bcm43xx_phy_read(bcm, 0x0814)
| 0x0003));
bcm43xx_phy_write(bcm, 0x0815,
(bcm43xx_phy_read(bcm, 0x0815) & 0xFFFC));
(bcm43xx_phy_read(bcm, 0x0815)
& 0xFFFC));
bcm43xx_phy_write(bcm, BCM43xx_PHY_G_CRS,
(bcm43xx_phy_read(bcm, BCM43xx_PHY_G_CRS) & 0x7FFF));
(bcm43xx_phy_read(bcm, BCM43xx_PHY_G_CRS)
& 0x7FFF));
bcm43xx_phy_write(bcm, 0x0802,
(bcm43xx_phy_read(bcm, 0x0802) & 0xFFFC));
bcm43xx_phy_write(bcm, 0x0811, 0x01B3);
bcm43xx_phy_write(bcm, 0x0812, 0x0FB2);
if (phy->rev > 1) { /* loopback gain enabled */
backup[19] = bcm43xx_phy_read(bcm, 0x080F);
backup[20] = bcm43xx_phy_read(bcm, 0x0810);
if (phy->rev >= 3)
bcm43xx_phy_write(bcm, 0x080F, 0xC020);
else
bcm43xx_phy_write(bcm, 0x080F, 0x8020);
bcm43xx_phy_write(bcm, 0x0810, 0x0000);
}
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm, LPD(0, 1, 1)));
if (phy->rev < 7 || !(bcm->sprom.boardflags
& BCM43xx_BFL_EXTLNA))
bcm43xx_phy_write(bcm, 0x0811, 0x01B3);
else
bcm43xx_phy_write(bcm, 0x0811, 0x09B3);
}
bcm43xx_write16(bcm, BCM43xx_MMIO_PHY_RADIO,
(bcm43xx_read16(bcm, BCM43xx_MMIO_PHY_RADIO) | 0x8000));
}
bcm43xx_write16(bcm, BCM43xx_MMIO_PHY_RADIO,
(bcm43xx_read16(bcm, BCM43xx_MMIO_PHY_RADIO) | 0x8000));
backup[10] = bcm43xx_phy_read(bcm, 0x0035);
bcm43xx_phy_write(bcm, 0x0035,
(bcm43xx_phy_read(bcm, 0x0035) & 0xFF7F));
@ -1397,10 +1513,12 @@ u16 bcm43xx_radio_init2050(struct bcm43xx_private *bcm)
bcm43xx_write16(bcm, 0x03E6, 0x0122);
} else {
if (phy->analog >= 2)
bcm43xx_phy_write(bcm, 0x0003, (bcm43xx_phy_read(bcm, 0x0003)
& 0xFFBF) | 0x0040);
bcm43xx_phy_write(bcm, 0x0003,
(bcm43xx_phy_read(bcm, 0x0003)
& 0xFFBF) | 0x0040);
bcm43xx_write16(bcm, BCM43xx_MMIO_CHANNEL_EXT,
(bcm43xx_read16(bcm, BCM43xx_MMIO_CHANNEL_EXT) | 0x2000));
(bcm43xx_read16(bcm, BCM43xx_MMIO_CHANNEL_EXT)
| 0x2000));
}
ret = bcm43xx_radio_calibrationvalue(bcm);
@ -1408,16 +1526,25 @@ u16 bcm43xx_radio_init2050(struct bcm43xx_private *bcm)
if (phy->type == BCM43xx_PHYTYPE_B)
bcm43xx_radio_write16(bcm, 0x0078, 0x0026);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm, LPD(0, 1, 1)));
bcm43xx_phy_write(bcm, 0x0015, 0xBFAF);
bcm43xx_phy_write(bcm, 0x002B, 0x1403);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x00B2);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm, LPD(0, 0, 1)));
bcm43xx_phy_write(bcm, 0x0015, 0xBFA0);
bcm43xx_radio_write16(bcm, 0x0051,
(bcm43xx_radio_read16(bcm, 0x0051) | 0x0004));
bcm43xx_radio_write16(bcm, 0x0052, 0x0000);
bcm43xx_radio_write16(bcm, 0x0043,
(bcm43xx_radio_read16(bcm, 0x0043) & 0xFFF0) | 0x0009);
if (radio->revision == 8)
bcm43xx_radio_write16(bcm, 0x0043, 0x001F);
else {
bcm43xx_radio_write16(bcm, 0x0052, 0x0000);
bcm43xx_radio_write16(bcm, 0x0043,
(bcm43xx_radio_read16(bcm, 0x0043) & 0xFFF0)
| 0x0009);
}
bcm43xx_phy_write(bcm, 0x0058, 0x0000);
for (i = 0; i < 16; i++) {
@ -1425,21 +1552,25 @@ u16 bcm43xx_radio_init2050(struct bcm43xx_private *bcm)
bcm43xx_phy_write(bcm, 0x0059, 0xC810);
bcm43xx_phy_write(bcm, 0x0058, 0x000D);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x30B2);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm, LPD(1, 0, 1)));
bcm43xx_phy_write(bcm, 0x0015, 0xAFB0);
udelay(10);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x30B2);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm, LPD(1, 0, 1)));
bcm43xx_phy_write(bcm, 0x0015, 0xEFB0);
udelay(10);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x30B2);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm, LPD(1, 0, 0)));
bcm43xx_phy_write(bcm, 0x0015, 0xFFF0);
udelay(10);
udelay(20);
tmp1 += bcm43xx_phy_read(bcm, 0x002D);
bcm43xx_phy_write(bcm, 0x0058, 0x0000);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x30B2);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm, LPD(1, 0, 1)));
bcm43xx_phy_write(bcm, 0x0015, 0xAFB0);
}
@ -1457,21 +1588,29 @@ u16 bcm43xx_radio_init2050(struct bcm43xx_private *bcm)
bcm43xx_phy_write(bcm, 0x0059, 0xC810);
bcm43xx_phy_write(bcm, 0x0058, 0x000D);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x30B2);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm,
LPD(1, 0, 1)));
bcm43xx_phy_write(bcm, 0x0015, 0xAFB0);
udelay(10);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x30B2);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm,
LPD(1, 0, 1)));
bcm43xx_phy_write(bcm, 0x0015, 0xEFB0);
udelay(10);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x30B3); /* 0x30B3 is not a typo */
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm,
LPD(1, 0, 0)));
bcm43xx_phy_write(bcm, 0x0015, 0xFFF0);
udelay(10);
tmp2 += bcm43xx_phy_read(bcm, 0x002D);
bcm43xx_phy_write(bcm, 0x0058, 0x0000);
if (phy->connected)
bcm43xx_phy_write(bcm, 0x0812, 0x30B2);
bcm43xx_phy_write(bcm, 0x0812,
bcm43xx_get_812_value(bcm,
LPD(1, 0, 1)));
bcm43xx_phy_write(bcm, 0x0015, 0xAFB0);
}
tmp2++;
@ -1497,15 +1636,20 @@ u16 bcm43xx_radio_init2050(struct bcm43xx_private *bcm)
bcm43xx_phy_write(bcm, 0x0030, backup[2]);
bcm43xx_write16(bcm, 0x03EC, backup[3]);
} else {
bcm43xx_write16(bcm, BCM43xx_MMIO_PHY_RADIO,
(bcm43xx_read16(bcm, BCM43xx_MMIO_PHY_RADIO) & 0x7FFF));
if (phy->connected) {
bcm43xx_write16(bcm, BCM43xx_MMIO_PHY_RADIO,
(bcm43xx_read16(bcm,
BCM43xx_MMIO_PHY_RADIO) & 0x7FFF));
bcm43xx_phy_write(bcm, 0x0811, backup[4]);
bcm43xx_phy_write(bcm, 0x0812, backup[5]);
bcm43xx_phy_write(bcm, 0x0814, backup[6]);
bcm43xx_phy_write(bcm, 0x0815, backup[7]);
bcm43xx_phy_write(bcm, BCM43xx_PHY_G_CRS, backup[8]);
bcm43xx_phy_write(bcm, 0x0802, backup[9]);
if (phy->rev > 1) {
bcm43xx_phy_write(bcm, 0x080F, backup[19]);
bcm43xx_phy_write(bcm, 0x0810, backup[20]);
}
}
}
if (i >= 15)
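
The new bcm43xx_get_812_value() helper above keys its lookup off the LPD(L, P, D) macro, which packs three one-bit flags into a small integer so that a single switch statement can select the PHY register 0x0812 value. The stand-alone sketch below is an illustration only; it reproduces just the "no loopback gain, no external LNA" branch of the helper to show the encoding.

#include <assert.h>
#include <stdio.h>

/* Pack three single-bit flags into one value, as in the driver. */
#define LPD(L, P, D) (((L) << 2) | ((P) << 1) | ((D) << 0))

/* Mirrors only the first branch of bcm43xx_get_812_value(); the
 * loopback-gain and external-LNA branches return different values. */
static unsigned int value_812(unsigned int lpd)
{
	switch (lpd) {
	case LPD(0, 1, 1): return 0x0FB2;
	case LPD(0, 0, 1): return 0x00B2;
	case LPD(1, 0, 1): return 0x30B2;
	case LPD(1, 0, 0): return 0x30B3;
	default: assert(0); return 0;
	}
}

int main(void)
{
	/* LPD(1, 0, 1) packs to binary 101, i.e. 5. */
	printf("LPD(1,0,1) = %u -> 0x%04X\n",
	       LPD(1, 0, 1), value_812(LPD(1, 0, 1)));
	return 0;
}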

View File

@ -1,8 +1,8 @@
/*
* Intersil Prism2 driver with Host AP (software access point) support
* Copyright (c) 2001-2002, SSH Communications Security Corp and Jouni Malinen
* <jkmaline@cc.hut.fi>
* Copyright (c) 2002-2005, Jouni Malinen <jkmaline@cc.hut.fi>
* <j@w1.fi>
* Copyright (c) 2002-2005, Jouni Malinen <j@w1.fi>
*
* This file is to be included into hostap.c when S/W AP functionality is
* compiled.

View File

@ -368,9 +368,9 @@ enum {
#define PRISM2_HOSTAPD_MAX_BUF_SIZE 1024
#define PRISM2_HOSTAPD_RID_HDR_LEN \
((int) (&((struct prism2_hostapd_param *) 0)->u.rid.data))
offsetof(struct prism2_hostapd_param, u.rid.data)
#define PRISM2_HOSTAPD_GENERIC_ELEMENT_HDR_LEN \
((int) (&((struct prism2_hostapd_param *) 0)->u.generic_elem.data))
offsetof(struct prism2_hostapd_param, u.generic_elem.data)
/* Maximum length for algorithm names (-1 for nul termination) used in ioctl()
*/
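
The hostap_common.h hunk above replaces the hand-rolled null-pointer-cast header-length macros with the standard offsetof() from <stddef.h>, which is well defined and avoids the pointer-to-int cast. A small sketch of the idiom follows, using a hypothetical struct in place of struct prism2_hostapd_param.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-in for struct prism2_hostapd_param, for illustration. */
struct fake_param {
	unsigned int cmd;
	union {
		struct {
			unsigned short rid;
			unsigned short len;
			unsigned char data[8];
		} rid;
	} u;
};

/* Header length = offset of the variable-length payload within the struct. */
#define FAKE_RID_HDR_LEN offsetof(struct fake_param, u.rid.data)

int main(void)
{
	printf("RID header length: %zu bytes\n", FAKE_RID_HDR_LEN);
	return 0;
}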

Some files were not shown because too many files have changed in this diff.