With the change in logic for taking the FW dump across multiple drivers,
there is no longer a need to hold the api lock in the FW dump path.
Instead, use rtnl_lock() to synchronize access to the FW dump data structures.
Signed-off-by: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the presence of multiple functions, the current driver implementation does not
guarantee that the FW dump is taken by the same function that forced it.
Change this by adding a FW reset owner flag that can be changed in the device
reset path, and only when a function determines that it needs to reset it.
Signed-off-by: Sritej Velaga <sritej.velaga@qlogic.com>
Signed-off-by: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Update the ixgbe driver version string to better match the Source Driver
with similar device support. Likewise update to the current LAD Linux
versioning scheme.
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com>
Tested-by: Evan Swanson <evan.swanson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change fixes an issue where we would trigger a NULL pointer dereference
or specify the wrong ring when the rings were restored. This change makes
certain that the DROP queue is a static value, and all other rings are
based on the ring offsets for the PF.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Disabling Rx checksumming leads to performance degradation due to
RSC causing packets to have incorrect checksums.
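A minimal sketch of the resulting guard, assuming the usual ixgbe flag names (IXGBE_FLAG2_RSC_CAPABLE/IXGBE_FLAG2_RSC_ENABLED) rather than the exact hunk:

/* Sketch only: keep RSC off while Rx checksum offload is disabled, since
 * RSC-coalesced frames would otherwise carry incorrect checksums. */
if (!(netdev->features & NETIF_F_RXCSUM))
        adapter->flags2 &= ~IXGBE_FLAG2_RSC_ENABLED;
else if (adapter->flags2 & IXGBE_FLAG2_RSC_CAPABLE)
        adapter->flags2 |= IXGBE_FLAG2_RSC_ENABLED;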
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Evan Swanson <evan.swanson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Move reset code into a separate function to allow for reuse in other
parts of the code.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Move setting RSC into a separate function to allow for reuse in other
parts of the code.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to allow nfc to insert and remove filters in order
to test the ethtool interface, which includes its own rules manager.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This code adds support for displaying the filters that were added via the
nfc interface. This is primarily to test the interface for now, but I am
also looking into the feasibility of moving all of the ntuple filter code
in ixgbe over to the nfc interface since it seems to be better implemented.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change adds basic support for obtaining RSS ring counts.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This change is meant to update the internal framework of ixgbe so that
perfect filters can be stored and tracked via software.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
I am removing the requirement that Ntuple filters have the same
number of queues and requirements as ATR. As a result this change will
make it so that all the Ntuple flag does is disable ATR for now.
This change fixes an issue in which we were incorrectly re-enabling ATR
when we exited perfect filter mode. This was due to the fact that the
logic assumed RSS and DCB were mutually exclusive which is no longer the
case.
To correct this we just need to add a check to guarantee DCB is disabled
before re-enabling ATR.
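An illustrative sketch of the added guard, using the common ixgbe flag names as assumptions rather than the exact hunk:

/* Re-enable ATR only when leaving ntuple (perfect filter) mode and DCB is
 * not enabled; ATR still cannot coexist with DCB. */
if (!(netdev->features & NETIF_F_NTUPLE) &&
    !(adapter->flags & IXGBE_FLAG_DCB_ENABLED))
        adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;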
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Due to numerous issues in ntuple filters it has been decided to move the
interface over to the network flow classification interface. As a first
step to achieving this, I need to remove the old ntuple interface.
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
It needs to be available even when CONFIG_INET is not set.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reported-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
These sk_buff structs were allocated with nlmsg_new(), so they should
be freed with nlmsg_free().
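A hedged sketch of the pattern (build_msg() is a placeholder, not the actual call site):

struct sk_buff *skb;

skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
if (!skb)
        return -ENOMEM;

if (build_msg(skb) < 0) {
        nlmsg_free(skb);        /* matches nlmsg_new(); not kfree_skb() */
        return -EMSGSIZE;
}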
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This adds support for configuring the minimum number of links that
must be active before asserting carrier. It is similar to the Cisco
EtherChannel min-links feature. This allows setting the minimum number
of member ports that must be up (link-up state) before marking the
bond device as up (carrier on). This is useful for situations where
higher level services such as clustering want to ensure a minimum
number of low bandwidth links are active before switchover.
See:
http://bugzilla.vyatta.com/show_bug.cgi?id=7196
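A rough sketch of the carrier decision this enables; the function name and exact placement are assumptions for illustration, not the bonding driver code itself:

static void example_bond_set_carrier(struct bonding *bond, int links_up)
{
        /* Refuse to assert carrier until at least min_links slaves have link. */
        if (bond->params.min_links && links_up < bond->params.min_links)
                netif_carrier_off(bond->dev);
        else if (links_up)
                netif_carrier_on(bond->dev);
}

With min_links left at its default of 0 the first test never fires, so existing setups keep their current behavior.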
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Flavio Leitner <fbl@redhat.com>
Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are enough instances of this:
iph->frag_off & htons(IP_MF | IP_OFFSET)
that a helper function is probably warranted.
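A sketch of what such a helper might look like; the exact name and header placement here are assumptions:

/* e.g. in include/net/ip.h */
static inline bool ip_is_fragment(const struct iphdr *iph)
{
        /* True for any fragment: MF set, or a non-zero fragment offset. */
        return (iph->frag_off & htons(IP_MF | IP_OFFSET)) != 0;
}

Call sites would then read if (ip_is_fragment(ip_hdr(skb))) instead of open-coding the mask.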
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix section mismatch warning:
WARNING: drivers/net/irda/smsc-ircc2.o(.devinit.text+0x1a7): Section mismatch in reference from the function smsc_ircc_pnp_probe() to the function .init.text:smsc_ircc_open()
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove the linux/mm.h inclusion from netdevice.h -- it's unused (I've checked manually).
To prevent mm.h inclusion via other channels, also extract the "enum dma_data_direction"
definition into a separate header. This tiny piece is what glues netdevice.h to mm.h
via "netdevice.h => dmaengine.h => dma-mapping.h => scatterlist.h => mm.h".
Removal of mm.h from scatterlist.h was tried and found not feasible
on most archs, so the link is cut off earlier.
Hopefully people are OK with the tiny include file.
Note that mm_types.h is still dragged in, but that is a separate story.
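The extracted piece is essentially just the enum; a sketch of the tiny header (its name, e.g. linux/dma-direction.h, is an assumption):

#ifndef _LINUX_DMA_DIRECTION_H
#define _LINUX_DMA_DIRECTION_H

enum dma_data_direction {
        DMA_BIDIRECTIONAL = 0,
        DMA_TO_DEVICE = 1,
        DMA_FROM_DEVICE = 2,
        DMA_NONE = 3,
};

#endif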
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add missing error checking before calling nla_parse_nested().
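A hedged sketch of the pattern; the attribute, table, and policy names are placeholders rather than the actual dcbnl hunk:

/* Bail out if the attribute is absent before parsing its nested payload. */
if (!tb[DCB_ATTR_IEEE])
        return -EINVAL;

err = nla_parse_nested(ieee, DCB_ATTR_IEEE_MAX, tb[DCB_ATTR_IEEE],
                       dcbnl_ieee_policy);
if (err)
        return err;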
Reported-by: Mark Rustad <mark.d.rustad@intel.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix the incorrect return type on dcb_setapp(); this routine
returns negative error codes. All call sites of
dcb_setapp() already assign the return value to an int,
so there is no need to update drivers.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With multiple APP entries per selector and protocol, drivers
or stacks may want to pick a specific value or stripe traffic
across many priorities. Also, if an APP entry in use is
deleted, the stack/driver may want to choose from the existing
APP entries.
To facilitate this, and to avoid duplicating the code to walk
the APP ring, provide a routine dcb_ieee_getapp_mask() that
returns a u8 bitmask of all priorities set for the specified
selector and protocol. This routine and bitmask are a helper
for DCB kernel users.
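A hedged usage sketch, with FCoE as an illustrative selector/protocol pair:

static u8 example_fcoe_prio_mask(struct net_device *netdev)
{
        struct dcb_app app = {
                .selector = IEEE_8021QAZ_APP_SEL_ETHERTYPE,
                .protocol = ETH_P_FCOE,
        };

        /* Bitmask of every priority currently set for this selector/protocol. */
        return dcb_ieee_getapp_mask(netdev, &app);
}

A caller could then pick a single entry from the mask, e.g. ffs(mask) - 1 for the lowest advertised priority.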
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that we allow multiple IEEE App entries we need a way
to remove specific entries. To do this, add the dcb_ieee_delapp()
routine.
Additionally, drivers may need to remove the APP entry from
their firmware tables. Add a dcb ops routine to handle this.
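A sketch of how a driver might hook the op; the dcbnl_rtnl_ops member name and the split of work are assumptions for illustration:

static int example_ieee_delapp(struct net_device *dev, struct dcb_app *app)
{
        /* Remove the matching APP entry from the device/firmware table here,
         * then drop it from the kernel's APP list as well. */
        return dcb_ieee_delapp(dev, app);
}

static const struct dcbnl_rtnl_ops example_dcbnl_ops = {
        .ieee_delapp    = example_ieee_delapp,
};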
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This adds a setapp routine for IEEE802.1Qaz encoded APP data types.
The IEEE 802.1Qaz spec encodes the priority bits differently and
allows for multiple APP data entries of the same selector and
protocol. Trying to force these to use the same set routines was
becoming tedious. Furthermore, userspace could probably enforce
the correct semantics, but expecting drivers to do this seems
error prone in the firmware case.
For these reasons, add dcb_ieee_setapp(), which understands the
IEEE 802.1Qaz encoded form.
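A hedged example of the encoded form this accepts; the protocol and priority values are purely illustrative:

/* Map iSCSI (TCP port 3260) to priority 4 using the IEEE encoding. */
struct dcb_app app = {
        .selector = IEEE_8021QAZ_APP_SEL_STREAM,  /* TCP/SCTP port selector */
        .protocol = 3260,
        .priority = 4,  /* a plain priority value, not a CEE-style bitmap */
};
int err = dcb_ieee_setapp(netdev, &app);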
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that dcbnl is being used in many cases by more
than a single agent, it is beneficial to be notified
when some entity, either a driver or user space, has
changed the DCB attributes.
Today, applications either end up polling the interface
or relying on a user space database to maintain the DCB
state and post events. Polling is a poor solution for
obvious reasons, and relying on a user space database
has its own downsides. Namely, it has created strange
boot dependencies: the database must be populated
before any application dependent on DCB attributes
starts, or the application falls back to a polling loop.
Populating the database requires negotiating link
settings with the peer and can take anywhere from less
than a second up to a few seconds depending on the switch
implementation.
Perhaps more importantly, if another application or an
embedded agent sets a DCB link attribute, the database
has no way of knowing other than by polling the kernel.
This prevents applications from responding quickly to
changes in link events, which, at least in the FCoE case
and probably for any other protocol expecting a lossless
link, may result in IO errors.
By adding a multicast group for DCB, we have a clean way
to disseminate kernel DCB link attributes up to user
space. This avoids the need for user space to maintain
a coherent database and disperse events that potentially
do not reflect the current link state.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Adding the capabilities bitmask to the get_ieee response allows
user space to determine the current DCBX mode, either CEE or IEEE.
This is useful with devices that support switching between modes,
where knowing the current state is relevant.
Derived from work by Mark Rustad
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Also change the iSCSI RQ doorbell size from 16B to 64B to match the new firmware.
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds two tracepoints to get the status of a socket receive queue
and related parameters.
One tracepoint is added to sock_queue_rcv_skb. It records the rcvbuf size
and its usage. The other tracepoint is added to __sk_mem_schedule and
it records the memory limits for sockets and the current usage.
By using these tracepoints we are able to know in detail why the kernel
dropped the packet.
Signed-off-by: Satoru Moriya <satoru.moriya@hds.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds a tracepoint to __udp_queue_rcv_skb to get the
return value of ip_queue_rcv_skb. It indicates why the kernel drops
a packet at this point.
ip_queue_rcv_skb returns the following values in the packet drop case:
rcvbuf is full: -ENOMEM
sk_filter returns an error: -EINVAL, -EACCES, -ENOMEM, etc.
__sk_mem_schedule returns an error: -ENOBUFS
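A sketch of the hook; the tracepoint name is an assumption for illustration:

rc = ip_queue_rcv_skb(sk, skb);
if (rc < 0) {
        /* Record which error caused the drop (-ENOMEM, -EACCES, ...). */
        trace_udp_fail_queue_rcv_skb(rc, sk);
        kfree_skb(skb);
        return -1;
}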
Signed-off-by: Satoru Moriya <satoru.moriya@hds.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It was suggested by "make versioncheck" that the following includes of
linux/version.h are redundant:
/home/jj/src/linux-2.6/net/caif/caif_dev.c: 14 linux/version.h not needed.
/home/jj/src/linux-2.6/net/caif/chnl_net.c: 10 linux/version.h not needed.
/home/jj/src/linux-2.6/net/ipv4/gre.c: 19 linux/version.h not needed.
/home/jj/src/linux-2.6/net/netfilter/ipset/ip_set_core.c: 20 linux/version.h not needed.
/home/jj/src/linux-2.6/net/netfilter/xt_set.c: 16 linux/version.h not needed.
and it seems that it is right.
Beyond manually inspecting the source files I also did a few build
tests with various configs to confirm that including the header in
those files is indeed not needed.
Here's a patch to remove the pointless includes.
Signed-off-by: Jesper Juhl <jj@chaosbits.net>
Acked-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch enables software (and phy device) transmit time stamping.
Compile tested only.
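For reference, software TX time stamping in a MAC driver usually comes down to one call in the xmit path; its exact placement in this driver is an assumption:

/* In ndo_start_xmit(), just before handing the buffer to the hardware: */
skb_tx_timestamp(skb);  /* software (and phy device) TX timestamp hook */

Calling it any later risks touching an skb that the completion interrupt may already have freed, which is the subject of the next fix.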
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
Because the socket buffer is freed in the completion interrupt, it is not
safe to access it after submitting it to the hardware.
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert the xen driver to the 64 bit statistics interface.
Use stats_sync to ensure that 64 bit updates are read atomically on 32 bit platforms.
Put hot statistics into a per-cpu table.
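A condensed sketch of the pattern; the struct, field, and priv->stats names are illustrative rather than the exact xen-netfront code:

struct example_stats {
        u64                     rx_packets;
        u64                     rx_bytes;
        u64                     tx_packets;
        u64                     tx_bytes;
        struct u64_stats_sync   syncp;
};

/* Writer side, on the per-CPU copy, so there is no cross-CPU contention: */
struct example_stats *stats = this_cpu_ptr(priv->stats);

u64_stats_update_begin(&stats->syncp);
stats->tx_packets++;
stats->tx_bytes += skb->len;
u64_stats_update_end(&stats->syncp);

/* Reader side retries if a 32 bit platform caught an update in progress: */
unsigned int start;
u64 packets;

do {
        start = u64_stats_fetch_begin(&stats->syncp);
        packets = stats->tx_packets;
} while (u64_stats_fetch_retry(&stats->syncp, start));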
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Need to add a stat_sync wrapper around 64 bit statistic values.
Fix a wraparound bug in the lockup detector where it was unsafely comparing
a 64 bit value that is not atomic. Since we only care about detecting activity,
just looking at the current low order bits will work.
Remove unused entries in the old vxge_sw_stats structure.
Change the error counters to unsigned long since they won't grow so large
as to need 64 bits.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert the intermediate functional block (ifb) device to use 64 bit stats.
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Jamal Hadi Salim <hadi@cyberus.ca>
Signed-off-by: David S. Miller <davem@davemloft.net>
Unnecessary casts of void * clutter the code.
These are the casts remaining after several specific
patches to remove netdev_priv and dev_priv.
Done via coccinelle script (and a little editing):
$ cat cast_void_pointer.cocci
@@
type T;
T *pt;
void *pv;
@@
- pt = (T *)pv;
+ pt = pv;
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Acked-By: Chris Snook <chris.snook@gmail.com>
Acked-by: Jon Mason <jdmason@kudzu.us>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: David Dillow <dave@thedillows.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently a single PCI pool is used across all CPUs, and that
doesn't scale up as the number of CPUs increases, so this
patch adds a per-CPU PCI pool for DDP setup; that aligns
well with the FCoE stack, as it already has per-CPU exch locking.
Add per-CPU PCI pool alloc and free in
ixgbe_fcoe_ddp_pools_alloc and ixgbe_fcoe_ddp_pools_free,
and use the CPU-specific pool during DDP setup.
Re-arranged the ixgbe_fcoe struct (checked with pahole) to have
fewer holes after adding the pools pointer.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds support for Dell CEM (Comprehensive Embedded Management).
This consists of informing the management firmware of the driver version
during probe on 82599 and X540 HW.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com>
Tested-by: Evan Swanson <evan.swanson@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The ixgbe_dcb_txq_to_tc() routine was used to map TX rings to
a DCB traffic class. Now that a tx_ring has a DCB traffic class
associated with it, this routine is no longer needed.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Now the flow director's perfect filters feature can coexist with DCB.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This bit mask is wrong: DCBX_HOST is always set. It went unnoticed up
until now because lldpad reprograms the device on a link
event. However, this is still wrong, and it is best not to be
mis-configured for some time immediately following ixgbe_up().
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Set up the RSS redirection table to be compatible with multiple packet
buffers. Currently, this works on 82599 devices because the RSS
redirection index is masked by the number of queues per packet
buffer.
This sets the cap on the RSS table to maxq.
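A hedged sketch of the idea; variable names and the exact write pattern are assumptions rather than the real ixgbe_setup_mrqc() hunk:

u32 reta = 0;
int i, j;

/* Fill the 128-entry redirection table, wrapping at maxq so the index
 * never exceeds the queues available in one packet buffer. */
for (i = 0, j = 0; i < 128; i++, j++) {
        if (j == maxq)
                j = 0;
        reta = (reta << 8) | j;
        if ((i & 3) == 3)
                IXGBE_WRITE_REG(hw, IXGBE_RETA(i >> 2), reta);
}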
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The tx_idx and rx_idx values are swapped on 82598 devices
with DCB enabled.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The number of TX and RX queues allocated depends on the device
type, the current features set, online CPUs, and various
compile flags.
To enable DCB with multiple queues and allow it to coexist with
all the features currently implemented, it has to set up a valid
queue count. This is done at init time using the FDIR and RSS
max queue counts and allowing each TC to allocate a queue per
CPU.
DCB will now use available queues up to (8 x TCs); this is a somewhat
arbitrary cap but allows DCB to use up to 64 queues. It is easy to
increase this later if that is needed.
This is prep work to enable Flow Director with DCB. After this
DCB can easily coexist with existing features and no longer
needs its own DCB feature ring.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
ixgbe devices support different numbers of packet buffers, either
8 or 4. Here we only allocate the minimal number of packet
buffers required to implement the net_device's number of traffic
classes.
Fewer traffic classes allow for larger packet buffers in
hardware. Also, more Tx/Rx queues can be given to each
traffic class.
This patch is mostly about propagating the number of traffic
classes through the init path. Specifically, this adds the 4TC
cases to the MRQC and MTQC setup routines. Also, ixgbe_setup_tc()
was sanitized to handle other traffic class values.
Finally, changing the number of packet buffers in the hardware
requires a device reinit, so this moves the reinit work
from DCB into the main ixgbe_setup_tc() routine to consolidate
the reset code. Now the dcbnl_xxx ops call ixgbe_setup_tc() to
configure packet buffers if needed.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The MRQC and MTQC registers are configured in the main
setup path but are also reconfigured in the DCB setup
path. The DCB path fixes the DCB configuration by configuring
the SECTXMINIFG gap which is required for DCB pause
to operate correctly.
This patch reduces the duplicate code and does all setup
in ixgbe_setup_mtqc() and ixgbe_setup_mrqc().
Additionally, this removes the IXGBE_QDE write. This write never
set the WRITE bit in the register, so it was not
actually doing anything. Also, this was meant to clear the register,
but it is never set and defaults to zero. If this is
needed for SR-IOV it should be added correctly in a follow-up
patch, but it has never been working so removing it here
should be OK.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Consolidate packet buffer allocation currently being
done in the DCB path and main path. This allows the
feature set and packet buffer requirements to be done
once.
This is prep work to allow DCB to coexist with other
features, namely flow director.
CC: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Replace duplicated code in the if/else branches with a single
check and a call to ixgbe_init_interrupt_scheme().
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Tested-by: Ross Brattain <ross.b.brattain@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Use standard format for net_device_ops (without &)
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-by: Greg Rose <Gregory.v.rose@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>