Move the commit and abort routines to the bottom of the source code
file. This change is required by the follow-up patches that add the
set, chain and table transaction support.
This patch is just a cleanup that makes several functions accessible
without having to declare their prototypes.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This patch generalises the existing rule transaction infrastructure
so it can be used to handle set, table and chain object transactions
as well. The transaction provides a data area that stores private
information depending on the transaction type.
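Roughly, the generalised transaction object can be pictured like this (an
illustrative sketch only; the actual field names and layout in the tree may
differ):

    struct nft_trans {
            struct list_head        list;           /* per-batch transaction list */
            int                     msg_type;       /* e.g. NFT_MSG_NEWRULE, NFT_MSG_NEWSET */
            struct nft_ctx          ctx;            /* family/table/chain context */
            char                    data[];         /* type-specific private area */
    };

Type-specific helpers can then interpret the data area as the rule, set,
chain or table specific transaction payload.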
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The new transaction infrastructure updates the family, table and chain
objects in the context structure, so let's deconstify them. While at it,
move the context structure initialization routine to the top of the
source file, as it will also be used by the table and chain routines.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
NFT_META_BRI_IIFNAME to get packet input bridge interface name
NFT_META_BRI_OIFNAME to get packet output bridge interface name
Such meta keys are accessible only through the NFPROTO_BRIDGE family, via a
dedicated nft meta module: nft_meta_bridge.
Suggested-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This will be useful for creating network-family-dedicated META expressions,
as for NFPROTO_BRIDGE for instance.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This ensures a family-specific expression gets selected in preference to
family-agnostic ones.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
We currently have a limit of 8 * PAGE_SIZE anonymous sets. Lift that limit
by continuing the scan if the entire page is exhausted.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This patch adds set_elems notifications. When a set_elem is
added/deleted, all listening peers in userspace will receive the
corresponding notification.
Signed-off-by: Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@gnumonks.org>
Now that nf_tables performs global accounting of set elements, it is not
needed in the hash type anymore.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The current set selection simply chooses the first set type that provides
the requested features, which always results in the rbtree being chosen
by virtue of being the first set in the list.
What we actually want to do is choose the implementation that can provide
the requested features and is optimal from either a performance or memory
perspective depending on the characteristics of the elements and the
preferences specified by the user.
The elements are not known when creating a set. Even if we would provide
them for anonymous (literal) sets, we'd still have standalone sets where
the elements are not known in advance. We therefore need an abstract
description of the data characteristics.
The kernel already knows the size of the key; this patch starts by
introducing a nested set description which so far contains only the maximum
number of elements. Based on this, the set implementations are changed to
provide an estimate of the required amount of memory and the lookup
complexity class.
The set ops have a new callback ->estimate() that is invoked during set
selection. It receives a structure containing the attributes known to the
kernel and is supposed to populate a struct nft_set_estimate with the
complexity class and, in case the size is known, the complete amount of
memory required, or the amount of memory required per element otherwise.
Based on the policy specified by the user (performance/memory, defaulting
to performance) the kernel will then select the best suited implementation.
Even if the chosen set implementation would allow adding more than the
specified maximum number of elements, the limit is enforced, since other
implementations might not be able to hold more than the maximum based on
which they were selected.
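As a sketch of the interface described above (illustrative; the structure
and callback names such as nft_set_desc follow the description here, but the
exact in-tree definitions may differ):

    enum nft_set_class {
            NFT_SET_CLASS_O_1,              /* constant lookup time */
            NFT_SET_CLASS_O_LOG_N,          /* logarithmic lookup time */
            NFT_SET_CLASS_O_N,              /* linear lookup time */
    };

    struct nft_set_estimate {
            unsigned int            size;   /* estimated memory consumption */
            enum nft_set_class      class;  /* lookup complexity class */
    };

    /* filled in by each set backend during set selection */
    bool (*estimate)(const struct nft_set_desc *desc, u32 features,
                     struct nft_set_estimate *est);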
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
For values spanning multiple registers, we need to validate the length
of data loads. In order to add this to nft_ct, we need the length from
key validation. Split the nft_ct_init() function into two functions
for the get and set operations as preparation for that.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
For values spanning multiple registers, we need to validate the length
of data loads. In order to add this to nft_meta, we need the length from
key validation. Split the nft_meta_init() function into two functions
for the get and set operations as preparation for that.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Pablo Neira Ayuso <pablo@gnumonks.org>
The set operation for ct mark is only valid if CONFIG_NF_CONNTRACK_MARK is
enabled.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Replace the test in zap_completion_queue() that decides when it is safe to
free skbs in hard irq context with skb_irq_freeable(), ensuring we only
free skbs when it is actually safe, and removing the possibility of subtle
problems.
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently netpoll and skb_release_head_state assume that an skb is
freeable in hard irq context except when skb->destructor is set.
The reality is far from this. So add a function skb_irq_freeable to
compute the full test and in the process be the living documentation
of what the requirements are for actually freeing an skb in hard irq
context.
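The helper boils down to checking, roughly, something like the following
(a sketch; the exact set of fields checked depends on the kernel
configuration and may differ from this):

    static inline bool skb_irq_freeable(const struct sk_buff *skb)
    {
            return !skb->destructor &&
    #if IS_ENABLED(CONFIG_XFRM)
                   !skb->sp &&
    #endif
    #if IS_ENABLED(CONFIG_NF_CONNTRACK)
                   !skb->nfct &&
    #endif
                   !skb->_skb_refdst &&
                   !skb_has_frag_list(skb);
    }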
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'linux-can-fixes-for-3.15-20140401' of git://gitorious.org/linux-can/linux-can
linux-can-fixes-for-3.15-20140401
Marc Kleine-Budde says:
====================
this is a pull request of 16 patches for the 3.15 release cycle.
Bjorn Van Tilt contributes a patch which fixes a memory leak in usb_8dev's
usb_8dev_start_xmit() error path. A patch by Robert Schwebel fixes a typo in
the can documentation. The remaining patches all target the c_can driver. Two
of them are by me; they add a missing netif_napi_del() and return value
checking. Thomas Gleixner contributes 12 patches, which address several
shortcomings in the driver like hardware initialisation, concurrency, message
ordering and poor performance.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 2b3d7b758c687 ("qlcnic: Add VXLAN Rx offload support") uses
vxlan_get_rx_port(), which caused a build failure when VXLAN=m.
This patch fixes the build failure by adding a dependency on VXLAN
in the Kconfig entry of the qlcnic module and using vxlan_get_rx_port()
and its supporting code accordingly.
Signed-off-by: Shahed Shaikh <shahed.shaikh@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit fixes a build error reported by Fengguang, that is
triggered when CONFIG_NETWORK_PHY_TIMESTAMPING is not set:
ERROR: "ptp_classify_raw" [drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.ko] undefined!
The fix is to introduce its own file for the PTP BPF classifier,
so that PTP_1588_CLOCK and/or NETWORK_PHY_TIMESTAMPING can select
it independently from each other. The IXP4xx driver on ARM needs to
select it as well, since it does not seem to select PTP_1588_CLOCK
or anything similar that would pull it in automatically.
This also allows for hiding all of the internals of the BPF PTP
program inside that file, and only exporting relevant API bits
to drivers.
This patch also adds kdoc documentation for the ptp_classify_raw()
API to make it clear that it can return PTP_CLASS_* defines. Also,
the BPF program has been translated into bpf_asm code, so that it
can be more easily read and altered (extensively documented in [1]).
In the kernel tree under tools/net/ we have bpf_asm and bpf_dbg
tools, so the commented program can simply be translated via
`./bpf_asm -c prog` where prog is a file that contains the
commented code. This makes it easily readable/verifiable and when
there's a need to change something, jump offsets etc do not need
to be replaced manually which can be very error prone. Instead,
a newly translated version via bpf_asm can simply replace the old
code. I have checked opcode diffs before/after and it's the very
same filter.
[1] Documentation/networking/filter.txt
Fixes: 164d8c6665 ("net: ptp: do not reimplement PTP/BPF classifier")
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Jiri Benc <jbenc@redhat.com>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The "core_ops" variable isn't referenced outside this file and Sparse
complains about it:
drivers/net/ethernet/samsung/sxgbe/sxgbe_core.c:239:29: warning:
symbol 'core_ops' was not declared. Should it be static?
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Bitwise '|' was intended here instead of logical '||'.
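For illustration only (not the driver's actual code), the difference matters
when combining register flag bits:

    #define FLAG_A  0x01
    #define FLAG_B  0x10

    reg = FLAG_A || FLAG_B;         /* logical OR: evaluates to 1 */
    reg = FLAG_A | FLAG_B;          /* bitwise OR: evaluates to 0x11 */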
Fixes: 1edb9ca69e ('net: sxgbe: add basic framework for Samsung 10Gb ethernet driver')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
"err" is always zero at this point so we always unregister and free the
mdio_bus before returning success. This seems like leftover code and
I have deleted it.
Fixes: 1edb9ca69e ('net: sxgbe: add basic framework for Samsung 10Gb ethernet driver')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When using the "separate_tx_channels=1" module parameter, the TX queues are
initially numbered starting from the first TX-only channel number (after all the
RX-only channels). efx_set_channels() renumbers the queues so that they are
indexed from zero.
On EF10, the TX queues need to be relabelled in this way before calling the
dimension_resources NIC type operation, otherwise the TX queue PIO buffers can be
linked to the wrong VIs when using "separate_tx_channels=1".
Added comments to explain UC/WC mappings for PIO buffers.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When netback discovers that a frontend is sending malformed packets, it
disables the interface which serves that frontend.
However, disabling a network interface involves taking a mutex, which
cannot be done in softirq context, so we need to defer this process to
kthread context.
This patch does the following:
1. introduce a flag to indicate the interface is disabled.
2. check that flag in TX path, don't do any work if it's true.
3. check that flag in RX path, turn off that interface if it's true.
The reason to disable it in the RX path is that RX uses a kthread. After
this change the behavior of netback is still consistent -- it won't do
any TX work for a rogue frontend, and the interface will eventually be
turned off.
Also change a "continue" to "break" after xenvif_fatal_tx_err, as it
doesn't make sense to continue processing packets if the frontend is rogue.
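A rough sketch of the checks described above (names are illustrative, not
necessarily the exact netback symbols):

    /* TX path, softirq context: do no work at all for a disabled vif */
    if (unlikely(vif->disabled))
            break;

    /* RX path, kthread context: mutexes may be taken here, so this is
     * where the interface is actually turned off */
    if (unlikely(vif->disabled && netif_carrier_ok(vif->dev)))
            xenvif_carrier_off(vif);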
This is a fix for XSA-90.
Reported-by: Török Edwin <edwin@etorok.net>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make sure that vxlan_get_rx_port() is present in the kernel build in a manner
consistent with mlx4; otherwise mlx4 can be built-in while vxlan is a module and
the build fails at the linking phase. Add CONFIG_MLX4_EN_VXLAN for that.
Also, #ifdef the advertisement and implementation of the mlx4 vxlan ndo
calls and related code under this config directive.
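As a sketch of what the #ifdef'ed advertisement looks like (illustrative; the
ndo and handler names are assumed from the VXLAN offload API of that era, not
quoted from the patch):

    #ifdef CONFIG_MLX4_EN_VXLAN
            .ndo_add_vxlan_port     = mlx4_en_add_vxlan_port,
            .ndo_del_vxlan_port     = mlx4_en_del_vxlan_port,
    #endif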
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce a CONFIG_BE2NET_VXLAN define to control be2net's build
dependency on the VXLAN driver.
Without this fix, the kernel build fails when the VxLAN driver is
selected to be built as a module while be2net is built-in.
Fixes: c9c47142 ("be2net: csum, tso and rss steering offload support for VxLAN")
Signed-off-by: Sathya Perla <sathya.perla@emulex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit 9b2777d608 (ieee802154: add TX power control to wpan_phy)
and following erroneously added CSMA and CCA parameters for 802.15.4
devices as PHY parameters, while they are actually MAC parameters and
can differ for any two WPAN instances. Since it is now sensible to have
multiple WPAN devices with differing CSMA/CCA parameters, make these
parameters MAC parameters instead.
Signed-off-by: Phoebe Buckheister <phoebe.buckheister@itwm.fraunhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
All 802.15.4 PHY devices with drivers in tree can support only one WPAN
at any given time, yet the stack allows arbitrarily many WPAN devices to
be created and up at the same time. This cannot work with what the
hardware provides, and in the current implementation, provides an easy
DoS vector to any process on the system that may call socket() and
sendmsg().
Thus, allow only one WPAN per PHY to be up at once, just like mac80211
does for managed devices.
Signed-off-by: Phoebe Buckheister <phoebe.buckheister@itwm.fraunhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
This minor patch fixes the following warning when doing
a `make htmldocs`:
DOCPROC Documentation/DocBook/networking.xml
Warning(.../net/core/filter.c:135): No description found for parameter 'insn'
Warning(.../net/core/filter.c:135): Excess function parameter 'fentry' description in '__sk_run_filter'
HTML Documentation/DocBook/networking.html
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
nla_strcmp compares the string length plus one, so it's implicitly
including the nul-termination in the comparison.
    int nla_strcmp(const struct nlattr *nla, const char *str)
    {
            int len = strlen(str) + 1;
            ...
            d = memcmp(nla_data(nla), str, len);
However, if NLA_STRING is used, userspace can send us a string without
the nul-termination. This is a problem since the string
comparison will not match, as the last byte may not be the
nul-termination.
Fix this by skipping the comparison of the nul-termination if the
attribute data is nul-terminated. Suggested by Thomas Graf.
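A sketch of a fix along those lines (illustrative, not necessarily the exact
patch):

    int nla_strcmp(const struct nlattr *nla, const char *str)
    {
            int len = strlen(str);
            char *buf = nla_data(nla);
            int attrlen = nla_len(nla);
            int d;

            /* ignore a trailing nul-termination in the attribute payload */
            if (attrlen > 0 && buf[attrlen - 1] == '\0')
                    attrlen--;

            d = attrlen - len;
            if (d == 0)
                    d = memcmp(nla_data(nla), str, len);

            return d;
    }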
Cc: Florian Westphal <fw@strlen.de>
Cc: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is no point in toggling the RX LED for every packet. Especially if
we have a full FIFO we want to avoid everything we can.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The function loads the message object from the hardware to get the
payload length. The previous patch stores that information in an
array, so we can avoid the hardware access.
Remove the hardware access and move the led toggle outside of the
spinlocked region. Toggle the led only once when at least one packet
has been received.
Binary size shrinks along with the code
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
We can avoid the HW access in the TX cleanup path for retrieving the DLC
of the sent packet if we store the DLC in a private array.
Ideally this should be handled in the can_echo_skb functions, but I
leave that exercise to the CAN folks.
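Illustrative fragments of the idea (the array and field names are made up
for the example, not necessarily those used by the driver):

    /* xmit path: remember the DLC of the frame placed into TX object idx */
    priv->dlc[idx] = frame->can_dlc;

    /* TX cleanup: account the transmitted bytes from the cached value
     * instead of reading the message object back from the hardware */
    stats->tx_bytes += priv->dlc[idx];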
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
commit 4ce78a838c (can: c_can: Speed up rx_poll function) hyped a
performance improvement by reducing the access to the interrupt
pending register from a dual 16 bit to a single 16 bit access. Wow!
Thereby it crippled the driver by casting the 16 msg objects in stone,
which is completely braindead as contemporary hardware has up to 128
message objects. Supporting larger object buffers is a major surgery,
but it'd be definitely worth it especially as the driver does not
support HW message filtering ....
The logic of the "FIFO" implementation is to split the FIFO in half.
For the lower half we read the buffers and clear the interrupt pending
bit, but keep the newdat bit set, so the HW will queue above those
buffers.
When we read out the last low buffer then we reenable all the low half
buffers by clearing the newdat bit.
The upper half buffers clear the newdat and the interrupt pending bit
right away as we know that the lower half bits are clear and give us a
headstart against the hardware.
Now the implementation is:

    transfer_message_object()
    read_object_and_put_into_skb();
    if (obj < END_OF_LOW_BUF)
            clear_intpending(obj)
    else if (obj > END_OF_LOW_BUF)
            clear_intpending_and_newdat(obj)
    else if (obj == END_OF_LOW_BUF)
            clear_newdat_of_all_low_objects()
The hardware allows us to avoid most of the mess simply because we can
tell the transfer_message_object() function to clear bits right away.
So we can be clever and do:

    if (obj <= END_OF_LOW_BUF)
            ctrl = TRANSFER_MSG | CLEAR_INTPND;
    else
            ctrl = TRANSFER_MSG | CLEAR_INTPND | CLEAR_NEWDAT;
    transfer_message_object(ctrl)
    read_object_and_put_into_skb();
    if (obj == END_OF_LOW_BUF)
            clear_newdat_of_all_low_objects()
So we save a complete control operation on all message objects except
the one which is the end of the low buffer. That's a few microseconds
per object.
I'm not adding a boasting profile to that, simply because it's
self-explanatory.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[mkl: adjusted subject and commit message]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
If every other line contains line breaks, that's a clear sign of
indentation level madness. Split out the inner loop and move the code
to a separate function. gcc creates slightly worse code for that, but
we'll fix that in the next step.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[mkl: adjusted subject]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The network core does not serialize the access to the hardware. The
xmit-related code lets the following happen:

    CPU0                            CPU1
    interrupt()
      do_poll()
        c_can_do_tx()
          Fiddle with HW and        xmit()
          internal data               Fiddle with HW and
                                      internal data

due to the complete lack of serialization.
Add proper locking.
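A generic illustration of the kind of serialization needed (the lock name is
made up for the example and this is not necessarily the driver's actual fix):

    static netdev_tx_t c_can_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            struct c_can_priv *priv = netdev_priv(dev);

            spin_lock_bh(&priv->xmit_lock);
            /* ... program the message interface, update internal data ... */
            spin_unlock_bh(&priv->xmit_lock);

            return NETDEV_TX_OK;
    }

    /* c_can_do_tx(), called from NAPI poll, takes the same lock around its
     * hardware access so the two paths can no longer interleave */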
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The rx_poll code has the following gem:
    if (msg_ctrl_save & IF_MCONT_EOB)
            return num_rx_pkts;
The EOB bit is the indicator for the hardware that this is the last
configured FIFO object. But this object can contain valid data, if we
manage to free up objects before the overrun case hits.
Now if the code exits due to the EOB bit set, then this buffer is
stale and the interrupt bit and NewDat bit of the buffer are still
set. Results in a nice interrupt storm unless we come into an overrun
situation where the MSGLST bit gets set.
ksoftirqd/0-3 [000] ..s. 79.124101: c_can_poll: rx_poll: val: 00008001 pend 00008001
ksoftirqd/0-3 [000] ..s. 79.124176: c_can_poll: rx_poll: val: 00008000 pend 00008000
ksoftirqd/0-3 [000] ..s. 79.124187: c_can_poll: rx_poll: val: 00008002 pend 00008002
ksoftirqd/0-3 [000] ..s. 79.124256: c_can_poll: rx_poll: val: 00008000 pend 00008000
ksoftirqd/0-3 [000] ..s. 79.124267: c_can_poll: rx_poll: val: 00008000 pend 00008000
The amazing thing is that the check of the MSGLST (aka overrun bit)
used to be after the check of the EOB bit. That was "fixed" in commit
5d0f801a2c (can: c_can: Fix RX message handling, handle lost message
before EOB). But the author of this "fix" did not even understand that
the EOB check is broken as well.
Again a simple solution: Remove the EOB exit.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[mkl: adjusted subject and commit message]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The lost message handling is broken in several ways.
1) Clearing the message lost flag is done by writing 0 to the
message control register of the object.
#define IF_MCONT_CLR_MSGLST (0 << 14)
That clears the object buffer configuration in the worst case,
which results in a loss of the EOB flag. That leaves the FIFO chain
without a limit and causes a complete lockup of the HW.
2) In case the error skb allocation fails, the code happily
claims that it handed down a packet. Just an accounting bug, but ....
3) The code adds a lot of pointless overhead to that error case, where
we need to get stuff done as fast as possible to avoid more packet
loss.
- printk an annoying error message
- reread the object buffer for nothing
Fix is simple again:
- Use the already known MSGCTRL content and only clear the MSGLST bit
- Fix the buffer accounting by adding a proper return code
- Remove the pointless operations
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The buffer handling of c_can has been broken forever. That leads to
message reordering:
ksoftirqd/0-3 [000] ..s. 79.123776: c_can_poll: rx_poll: val: 00007fff
ksoftirqd/0-3 [000] ..s. 79.124101: c_can_poll: rx_poll: val: 00008001
What happens is:
    CPU                                 HW
                                        queue new packet into obj 16 (0-15 are busy)
    read obj 1-15
    return because pending is 0
                                        set pending obj 16 -> pending reg 8000
                                        queue new packet into obj 1
                                        set pending obj 1 -> pending reg 8001
So the current algorithm reads the newest message first, which
violates the ordering rules of CAN.
Add proper handling of that situation by analyzing the contents of the
pending register for gaps.
This does NOT fix the message object corruption which can lead to
interrupt storms. That's addressed in the next patches.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[mkl: adjusted subject]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The hardware has two message control interfaces, but the code only uses the
first one. So on SMP the following can be observed:
    CPU0                            CPU1
    rx_poll()
      write IF1                     xmit()
      write IF1
                                      write IF1
That results in corrupted message object configurations. The TX/RX access is not
globally serialized; it's only serialized on a single core.
Simple solution: Let RX use IF1 and TX use IF2 and all is good.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
The function is broken in several ways:
- The function does not wait for the init to complete.
That can take quite some microseconds.
- No protection against being called for two chips at the same
time. SMP is such a new thing, right?
Clear the start and the init done bit unconditionally and wait for both bits to
be clear.
In the enable path set the init bit and wait for the init done bit.
Add proper locking.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
According to the documentation the CPU must wait for CONTROL_INIT to
be cleared before writing to the baudrate registers.
Signed-off-by: Benedikt Spranger <b.spranger@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
This patch fixes the name of the parameter to configure the sample point used
in iproute2's ip command. The correct writing is "sample-point" not
"sample_point".
Signed-off-by: Robert Schwebel <r.schwebel@pengutronix.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Fixed a memory leak when an error occurred in the transmit function. In the
error handling the urb wasn't freed before returning. There was also a call to
the usb_unanchor_urb() function but the urb wasn't anchored.
Signed-off-by: Bjorn Van Tilt <bjorn.vantilt@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates
This series contains fixes to e1000e, igb, ixgbe, ixgbevf, i40e and
i40evf.
David provides a fix for e1000e to resolve an issue where the device is
capable of transmitting packets but is unable to receive packets until
a previously introduced workaround is called.
Jakub Kicinski provides PTP fixes for ixgbe, which include removing a
redundant if clause and making sure we are not generating both a software and
a hardware timestamp, as well as fixing a race condition and leaking of skbs
when multiple transmit rings try to claim time stamping.
Jean Sacren fixes a function declaration in ixgbe which was introduced
in commit c97506ab0e ("ixgbe: Add check for FW veto bit"). In addition
fixes a function header comment in i40e and fixes the error checking
by binding the check to the pertinent DMA bit mask.
Mark provides several fixes for ixgbe and ixgbevf. Most notable are fixes
to resolve namespace issues and fix RCU warnings induced by LER for ixgbe
and ixgbevf.
Joe Perches fixes up unnecessary casts in i40e and i40evf.
Peter Senna Tschudin fixes igb to use pci_iounmap when the virtual mapping
was done with pci_iomap.
====================
Resolve some rcu warnings produced when LER actions take place.
This appears to be due to not holding the rtnl lock when calling
ixgbe_down, so hold the lock. Also avoid disabling the device
when it is already disabled. This check is necessary because the
callback can be called more than once in some cases.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Resolve some rcu warnings produced when LER actions take place.
This appears to be due to not holding the rtnl lock when calling
ixgbe_down, so hold the lock. Also avoid disabling the device
when it is already disabled. This check is necessary because the
callback can be called more than once in some cases.
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Use pci_iounmap instead of iounmap when the virtual mapping was done
with pci_iomap. A simplified version of the semantic patch that finds this
issue is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
@r@
expression addr;
@@
addr = pci_iomap(...)
@rr@
expression r.addr;
@@
* iounmap(addr)
// </smpl>
Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>