Occasionally we may see an interrupt without an event in the EQ.
In INTx mode, we currently check the event queue and return IRQ_NONE, causing
the IRQ to be disabled ("nobody cared"). Instead, read the CEV_ISR
register to check for the existence of the interrupt.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
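A minimal sketch of the ISR check described above, assuming a hypothetical adapter structure with an ioremapped CSR window; the CEV_ISR name comes from the commit text, while the offset, field, and function names are illustrative:

#include <linux/interrupt.h>
#include <linux/io.h>

#define CEV_ISR0_OFFSET		0xc18	/* hypothetical register offset */

struct be_adapter_sketch {
	void __iomem *csr;		/* ioremapped control/status registers */
	/* ... event queue state ... */
};

static irqreturn_t be_intx_sketch(int irq, void *dev)
{
	struct be_adapter_sketch *adapter = dev;
	u32 isr;

	/*
	 * Don't decide "not ours" from the event queue alone: the interrupt
	 * can fire before the event is visible in the EQ.  Read CEV_ISR to
	 * see whether the device really raised this interrupt.
	 */
	isr = ioread32(adapter->csr + CEV_ISR0_OFFSET);
	if (!isr)
		return IRQ_NONE;	/* truly not ours */

	/* ... process events; they may appear in the EQ slightly later ... */
	return IRQ_HANDLED;
}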
While testing the driver on PPC, we ran into a crash with LRO and jumbo frames.
With CONFIG_PPC_64K_PAGES configured (a default on PPC), MAX_SKB_FRAGS drops to 3 and we were crossing the array limits of skb_shinfo(skb)->frags[].
Now we coalesce frags from the same physical page into one slot in
skb_shinfo(skb)->frags[] and move to the next index when a frag comes from a
different physical page.
This patch is against the net-2.6 tree.
Signed-off-by: Ajit Khaparde <ajitk@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
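A hedged sketch of the coalescing rule, written against the 2009-era skb_frag_t layout (page, page_offset, size); the rx_page_info structure and helper name are hypothetical:

#include <linux/skbuff.h>

struct rx_page_info_sketch {		/* hypothetical per-frag bookkeeping */
	struct page *page;
	u16 page_offset;
	u16 size;
};

/*
 * Append one received fragment to the skb.  If it sits on the same
 * physical page as (and directly follows) the previous fragment, grow
 * that frag instead of consuming a new frags[] slot, so jumbo frames
 * still fit within MAX_SKB_FRAGS when PAGE_SIZE is 64K.
 */
static void be_add_frag_sketch(struct sk_buff *skb, u16 *idx,
			       struct rx_page_info_sketch *info)
{
	skb_frag_t *prev = *idx ? &skb_shinfo(skb)->frags[*idx - 1] : NULL;

	if (prev && prev->page == info->page &&
	    prev->page_offset + prev->size == info->page_offset) {
		prev->size += info->size;	/* same page: coalesce */
	} else {
		skb_frag_t *frag = &skb_shinfo(skb)->frags[*idx];

		frag->page = info->page;	/* new page: next slot */
		frag->page_offset = info->page_offset;
		frag->size = info->size;
		skb_shinfo(skb)->nr_frags = ++(*idx);
	}
	skb->len += info->size;
	skb->data_len += info->size;
}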
Receive and process async link status notifications from the BE instead of polling
for link status in the be_worker thread.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Clean up the multicast_set method to avoid an extra copy of mc_list
and unwanted promiscuous settings being pushed to the BE.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently the multicast_set and promiscuous_config cmds -- which may be called in BH context --
use the blocking MCC mbox to post cmds.
An mbox cmd is protected by spin_lock(cmd_lock) rather than spin_lock_bh(), as it is undesirable
to disable BHs while a blocking mbox cmd is in progress (it can take long to finish).
This can deadlock against a cmd already in progress in process context.
So these two cmds, when issued from BH context, must use the MCC queue to post cmds.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, all cmds use the blocking MCC mbox to post cmds. An mbox cmd is protected
by spin_lock(cmd_lock) rather than spin_lock_bh(), as it is undesirable
to disable BHs while a blocking mbox cmd is in progress (it can take long to finish).
This can deadlock against a cmd already in progress in process context. Instead, cmds that may be
called in BH context must use the MCC queue to post cmds. The cmd completions
are received in a separate completion queue and the events are placed in the tx-event
queue.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
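To make the locking hazard concrete, here is a hedged sketch with assumed names (cmd_lock and the helpers are illustrative): if process context holds the mbox lock with BHs enabled and a BH on the same CPU then tries to take it, that CPU spins forever, so BH-context callers post to the MCC queue instead.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(cmd_lock);	/* protects the single blocking mbox */

/* Process context: it is acceptable to block until the fw completes. */
static int be_mbox_cmd_sketch(void *cmd)
{
	spin_lock(&cmd_lock);		/* BHs deliberately left enabled:
					 * the cmd can take long to finish */
	/* ... write the cmd to the mbox and poll for its completion ... */
	spin_unlock(&cmd_lock);
	return 0;
}

/*
 * BH context (e.g. the multicast/promiscuous paths): taking cmd_lock here
 * would spin forever if this CPU's process context already holds it, since
 * the BH never yields the CPU back.  Post to the MCC queue instead and reap
 * the completion later from the MCC completion queue.
 */
static int be_mcc_cmd_sketch(void *cmd)
{
	/* ... build a wrb and post it to the MCC queue without blocking ... */
	return 0;
}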
In the tx queue destroy path, be_tx_q_clean() is currently called after the tx queues are freed; it must be called before.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
be_rx_compl_get() must not reset (via the valid word) the rx_compl, as the rx_compl has not been processed yet; it must be reset after it is processed.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
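A hedged sketch of the valid-word handling, with hypothetical structure and helper names; the point is that the fetch helper leaves the entry intact and a separate reset runs only after processing:

#include <linux/types.h>

/* Hypothetical rx completion entry; hw sets a valid word on new entries. */
struct be_rx_compl_sketch {
	u32 dw[4];			/* descriptor words, dw[3] holds valid */
};

static struct be_rx_compl_sketch *
be_rx_compl_get_sketch(struct be_rx_compl_sketch *ring, u16 *tail, u16 len)
{
	struct be_rx_compl_sketch *compl = &ring[*tail];

	if (!compl->dw[3])
		return NULL;		/* nothing new from the hardware */
	*tail = (*tail + 1) % len;
	return compl;			/* do NOT clear the valid word here */
}

static void be_rx_compl_reset_sketch(struct be_rx_compl_sketch *compl)
{
	compl->dw[3] = 0;		/* clear only after processing is done */
}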
Rx stats are not getting updated when an rx_compl with only one frag is received in the non-LRO path.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix the netdev stat rx_errors to cover length-related errors and checksum errors, and rx_dropped to count the packets dropped due to lack of buffers.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
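A hedged sketch of the stats mapping, assuming hypothetical per-driver hardware counters; the struct net_device_stats fields are the standard ones:

#include <linux/netdevice.h>

struct be_err_counters_sketch {		/* hypothetical hw counters */
	unsigned long rx_length_errs;	/* frames too short or too long */
	unsigned long rx_csum_errs;	/* bad IP/TCP/UDP checksum */
	unsigned long rx_post_fail;	/* frames dropped: no rx buffers posted */
};

static void be_fill_stats_sketch(struct net_device_stats *stats,
				 struct be_err_counters_sketch *hw)
{
	/* rx_errors counts genuinely errored frames: length + checksum */
	stats->rx_errors = hw->rx_length_errs + hw->rx_csum_errs;
	stats->rx_length_errors = hw->rx_length_errs;

	/* rx_dropped counts frames lost for lack of posted rx buffers */
	stats->rx_dropped = hw->rx_post_fail;
}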
Use cancel_delayed_work_sync() instead of cancel_delayed_work() to reliably kill be_worker(), as it rearms itself.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
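A minimal sketch of why the _sync variant matters for a self-rearming worker; the callback and variable names are illustrative, the workqueue calls are the real API:

#include <linux/workqueue.h>
#include <linux/jiffies.h>

static struct delayed_work be_worker_sketch;

static void be_worker_fn_sketch(struct work_struct *work)
{
	/* ... poll stats, replenish rx buffers, etc ... */

	/* the worker rearms itself every second */
	schedule_delayed_work(&be_worker_sketch, msecs_to_jiffies(1000));
}

static void be_start_worker_sketch(void)
{
	INIT_DELAYED_WORK(&be_worker_sketch, be_worker_fn_sketch);
	schedule_delayed_work(&be_worker_sketch, msecs_to_jiffies(1000));
}

static void be_stop_worker_sketch(void)
{
	/*
	 * cancel_delayed_work() only removes a pending timer; a callback
	 * that is already running will requeue itself and live on.
	 * cancel_delayed_work_sync() also waits for a running callback and
	 * handles the case where it requeues itself, so the worker is
	 * reliably stopped.
	 */
	cancel_delayed_work_sync(&be_worker_sketch);
}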
This driver does not indicate support for frag lists.
Furthermore, even if it did, the code is walking the frag
lists incorrectly. The idiom is:
for (iter = skb_shinfo(skb)->frag_list; iter; iter = iter->next)
but it's doing:
for (iter = skb_shinfo(skb)->frag_list; iter;
iter = skb_shinfo(iter)->frag_list)
which would never work. And this proves that this driver never
saw an SKB with active frag lists.
So just remove the code altogether and the driver TX path becomes
much simpler.
Signed-off-by: David S. Miller <davem@davemloft.net>
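For reference, a hedged sketch of the correct walk quoted above, in context; the per-segment handling is left as a comment:

#include <linux/skbuff.h>

static void tx_walk_frag_list_sketch(struct sk_buff *skb)
{
	struct sk_buff *iter;

	/* walk the chained skbs hanging off the head skb's frag_list */
	for (iter = skb_shinfo(skb)->frag_list; iter; iter = iter->next) {
		/* ... map and queue iter->data / iter->len here ... */
	}
}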
Follow-up of commits 9d21493b4b
and 08baf56108
(net: tx scalability works : trans_start)
(net: txq_trans_update() helper)
Now that the core network stack takes care of trans_start updates, don't do it
in drivers themselves, if possible. Multi-queue drivers can
avoid one cache miss (on dev->trans_start) in their start_xmit()
handler.
Exceptions are NETIF_F_LLTX drivers (vxge & tehuti).
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
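A hedged before/after sketch for a non-LLTX driver's start_xmit(); the driver function is illustrative, txq_trans_update() is the helper added by the commits above:

#include <linux/netdevice.h>

static int foo_start_xmit_sketch(struct sk_buff *skb, struct net_device *dev)
{
	/* ... DMA-map the skb and ring the doorbell ... */

	/*
	 * No longer needed: after ndo_start_xmit() the core calls
	 * txq_trans_update(), which refreshes the tx queue's trans_start
	 * for non-LLTX drivers, so the line below only costs a cache miss.
	 *
	 * dev->trans_start = jiffies;
	 */
	return NETDEV_TX_OK;
}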
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (64 commits)
phylib: Fix delay argument of schedule_delayed_work
NET/ixgbe: Fix powering off during shutdown
NET/e1000e: Fix powering off during shutdown
NET/e1000: Fix powering off during shutdown
packet: avoid warnings when high-order page allocation fails
gianfar: stop send queue before resetting gianfar
myri10ge: again fix lro_gen_skb() alignment
declance: convert to net_device_ops
bfin_mac: convert to net_device_ops
au1000: convert to net_device_ops
atarilance: convert to net_device_ops
a2065: convert to net_device_ops
ixgbe: update real_num_tx_queues on changing num_rx_queues
ixgbe: fix tx queue index
Revert "rose: zero length frame filtering in af_rose.c"
sfc: Use correct macro to set event bitfield
sfc: Match calls to netif_napi_add() and netif_napi_del()
bonding: Remove debug printk
e1000/e1000: fix compile warning
ehea: Fix incomplete conversion to net_device_ops
...
This patch fixes the default value of pause auto-negotiation supported
by PCS.
Signed-off-by: Ajit Khaparde <ajitk@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Enables the Rx checksum feature by default.
- Disables support for IPv6 TSO.
- Changes the Rx path to handle Rx completions with various checksum options.
Signed-off-by: Ajit Khaparde <ajitk@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
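A hedged sketch of the checksum handling in the rx path; the completion flags are hypothetical names for whatever the hardware reports, while CHECKSUM_UNNECESSARY/CHECKSUM_NONE are the standard skb settings:

#include <linux/skbuff.h>

struct be_rxcp_flags_sketch {		/* hypothetical decoded rx_compl flags */
	bool ipcksm;			/* hw validated the IP header checksum */
	bool tcpcksm;			/* hw validated the TCP/UDP checksum */
	bool ipv6;			/* IPv6 frame: no IP header checksum */
};

static void be_rx_csum_sketch(struct sk_buff *skb,
			      struct be_rxcp_flags_sketch *cp)
{
	/*
	 * Only claim CHECKSUM_UNNECESSARY when the hardware verified the
	 * checksums relevant to this frame; otherwise let the stack check
	 * them in software.
	 */
	if (cp->tcpcksm && (cp->ipv6 || cp->ipcksm))
		skb->ip_summed = CHECKSUM_UNNECESSARY;
	else
		skb->ip_summed = CHECKSUM_NONE;
}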
This is the second pass over the old DMA_nBIT_MASK macros, and there are not
so many of them left, so I put them into one patch. I hope this is the last round.
After this, the definition of the old DMA_nBIT_MASK macro can be removed.
Signed-off-by: Yang Hongyang <yanghy@cn.fujitsu.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Tony Lindgren <tony@atomide.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Greg KH <greg@kroah.com>
Cc: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
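The conversion itself is mechanical; a hedged example with a hypothetical probe helper:

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int foo_set_dma_mask_sketch(struct pci_dev *pdev)
{
	/* old style being removed:  pci_set_dma_mask(pdev, DMA_64BIT_MASK); */
	if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64)))
		return 0;

	/* fall back to 32-bit addressing if 64-bit DMA is not available */
	return pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
}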
The benet driver is doing a 64-bit divide, which is not supported in the
Linux kernel on 32-bit architectures. The correct way to do this is to
use do_div(). Compile-tested on i386 only.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
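For reference, a hedged sketch of the do_div() pattern; the variable names are illustrative:

#include <asm/div64.h>
#include <linux/types.h>

static u64 bytes_per_sec_sketch(u64 bytes, u32 msecs)
{
	u64 rate = bytes * 1000;

	/*
	 * A plain "rate / msecs" with a 64-bit dividend would need libgcc's
	 * __udivdi3, which the kernel does not provide on 32-bit
	 * architectures.  do_div() divides in place: the quotient lands in
	 * 'rate' and the remainder is returned (discarded here).
	 */
	do_div(rate, msecs);
	return rate;
}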
Please accept this patch to clean up the rx/tx rate calculations as follows:
- check for jiffies wraparound
- remove the typecast of a denominator
- do the rate calculation only periodically, in workqueue context
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
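A hedged sketch of a wraparound-safe, periodic rate calculation; the bookkeeping structure is hypothetical, time_after(), jiffies_to_msecs() and do_div() are the real helpers:

#include <linux/jiffies.h>
#include <asm/div64.h>

struct rate_state_sketch {		/* hypothetical per-ring bookkeeping */
	unsigned long prev_jiffies;
	u64 prev_bytes;
	u64 cur_bytes;
	u32 rate_bps;
};

static void be_update_rate_sketch(struct rate_state_sketch *rs)
{
	unsigned long now = jiffies;
	u64 rate;
	u32 ms;

	/* time_after() compares correctly even across a jiffies wraparound */
	if (!time_after(now, rs->prev_jiffies + HZ))
		return;			/* recalculate about once a second */

	ms = jiffies_to_msecs(now - rs->prev_jiffies);
	rate = (rs->cur_bytes - rs->prev_bytes) * 8 * 1000;
	do_div(rate, ms);		/* 64-bit divide, safe on 32-bit too */
	rs->rate_bps = rate;

	rs->prev_jiffies = now;
	rs->prev_bytes = rs->cur_bytes;
}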
This is a patch to reconfigure VLAN IDs during an interface down/up cycle.
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is a patch to replenish the rx-queue when it is in a starved
state (due to out-of-memory conditions).
Signed-off-by: Sathya Perla <sathyap@serverengines.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
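A hedged sketch of the starvation recovery, with hypothetical field and helper names: once an out-of-memory spell leaves the ring empty, no completions arrive to trigger reposting from the poll path, so the periodic worker refills it.

#include <linux/atomic.h>

struct be_rx_ring_sketch {		/* hypothetical rx ring bookkeeping */
	atomic_t posted;		/* buffers currently owned by hardware */
};

/* normally called from the NAPI poll path after reaping completions */
static void be_post_rx_frags_sketch(struct be_rx_ring_sketch *rxq)
{
	/* ... allocate pages and post them; may post fewer (or none)
	 * than wanted under memory pressure ... */
}

/* called periodically from the be_worker delayed work */
static void be_rx_replenish_sketch(struct be_rx_ring_sketch *rxq)
{
	if (atomic_read(&rxq->posted) == 0)
		be_post_rx_frags_sketch(rxq);
}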