Commit Graph

David Rientjes
356ff8a9a7 Revert "mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask"
This reverts commit 89c83fb539.

This should have been done as part of 2f0799a0ff ("mm, thp: restore
node-local hugepage allocations").  The movement of the thp allocation
policy from alloc_pages_vma() to alloc_hugepage_direct_gfpmask() was
intended to only set __GFP_THISNODE for mempolicies that are not
MPOL_BIND whereas the revert could set this regardless of mempolicy.

While the check for MPOL_BIND between alloc_hugepage_direct_gfpmask()
and alloc_pages_vma() was racy, that has since been removed since the
revert.  What is left is the possibility to use __GFP_THISNODE in
policy_node() when it is unexpected because the special handling for
hugepages in alloc_pages_vma() was removed as part of the consolidation.

Secondly, prior to 89c83fb539, alloc_pages_vma() implemented a somewhat
different policy for hugepage allocations, which were allocated through
alloc_hugepage_vma().  For hugepage allocations, if the allocating
process's node is in the set of allowed nodes, allocate with
__GFP_THISNODE for that node (for MPOL_PREFERRED, use that node with
__GFP_THISNODE instead).  This was changed for shmem_alloc_hugepage() to
allow fallback to other nodes in 89c83fb539 as it did for new_page() in
mm/mempolicy.c which is functionally different behavior and removes the
requirement to only allocate hugepages locally.

So this commit does a full revert of 89c83fb539 instead of the partial
revert that was done in 2f0799a0ff.  The result is the same thp
allocation policy for 4.20 that was in 4.19.
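
For reference, a condensed sketch of the 4.19-era hugepage branch of
alloc_pages_vma() that this revert restores; this is abbreviated from the
upstream code, not the literal diff:

if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage) {
        int hpage_node = node;  /* local node passed in by the caller */

        /* only allocate on the local (or explicitly preferred) node,
         * without falling back to remote nodes
         */
        if (pol->mode == MPOL_PREFERRED && !(pol->flags & MPOL_F_LOCAL))
                hpage_node = pol->v.preferred_node;

        nmask = policy_nodemask(gfp, pol);
        if (!nmask || node_isset(hpage_node, *nmask)) {
                page = __alloc_pages_node(hpage_node,
                                          gfp | __GFP_THISNODE, order);
                goto out;
        }
}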

Fixes: 89c83fb539 ("mm, thp: consolidate THP gfp handling into alloc_hugepage_direct_gfpmask")
Fixes: 2f0799a0ff ("mm, thp: restore node-local hugepage allocations")
Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-12-08 10:26:20 -08:00
Benjamin Herrenschmidt
5b3279e2cb Revert "net/ibm/emac: wrong bit is used for STA control"
This reverts commit 624ca9c33c.

This commit is completely bogus. The STACR register has two formats, old
and new, depending on the version of the IP block used. There's a pair of
device-tree properties that can be used to specify the format used:

	has-inverted-stacr-oc
	has-new-stacr-staopc

What this commit did was to change the bit definition used with the old
parts to match the new parts. This of course breaks the driver on all
the old ones.

Instead, the author should have set the appropriate properties in the
device-tree for the variant used on his board.
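
For illustration, the driver is expected to pick the STACR layout from those
properties at probe time, roughly along these lines (the feature-flag names
are recalled from the emac driver and may not be exact):

/* rough sketch of the probe-time property checks */
if (of_property_read_bool(np, "has-new-stacr-staopc"))
        dev->features |= EMAC_FTR_HAS_NEW_STACR;
if (of_property_read_bool(np, "has-inverted-stacr-oc"))
        dev->features |= EMAC_FTR_STACR_OC_INVERT;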

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 22:37:27 -08:00
David S. Miller
83af01ba1c Merge branch 'tc-testing-next'
Lucas Bates says:

====================
tc-testing: implement command timeouts and better results tracking

Patch 1 adds a timeout feature for any command tdc launches in a subshell.
This prevents tdc from hanging indefinitely.

Patches 2-4 introduce a new method for tracking and generating test case
results, and implements it across the core script and all applicable
plugins.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:39:03 -08:00
Lucas Bates
8d189159ac tc-testing: gitignore, ignore generated test results
Ignore any .tap or .xml test result files generated by tdc.

Additionally, ignore plugin symlinks.

Signed-off-by: Lucas Bates <lucasb@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:39:03 -08:00
Lucas Bates
915c158dea tc-testing: Implement the TdcResults module in tdc
In tdc and the valgrind plugin, begin using the TdcResults module
to track executed test cases.

Signed-off-by: Lucas Bates <lucasb@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:39:03 -08:00
Lucas Bates
dfe465d33e tc-testing: Add new TdcResults module
This module includes new classes for tdc to use in keeping track
of test case results, instead of generating and tracking a lengthy
string.

The new module can be extended to support multiple formal
test result formats to be friendlier to automation.

Signed-off-by: Lucas Bates <lucasb@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:39:03 -08:00
Lucas Bates
d37e56df23 tc-testing: Add command timeout feature to tdc
Using an attribute set in the tdc_config.py file, limit the
amount of time tdc will wait for an executed command to
complete and prevent the script from hanging entirely.

This timeout will be applied to all executed commands.

Signed-off-by: Lucas Bates <lucasb@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:39:03 -08:00
David S. Miller
8b78903bc5 Merge branch 'skb-headroom-slab-out-of-bounds'
Stefano Brivio says:

====================
Fix slab out-of-bounds on insufficient headroom for IPv6 packets

Patch 1/2 fixes a slab out-of-bounds occurring with short SCTP packets over
IPv4 over L2TP over IPv6 on a configuration with relatively low MAX_HEADER.

Patch 2/2 makes sure we avoid writing before the allocated buffer in
neigh_hh_output() in case the headroom is enough for the unaligned hardware
header size, but not enough for the aligned one, and that we warn if we hit
this condition.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:24:40 -08:00
Stefano Brivio
e6ac64d4c4 neighbour: Avoid writing before skb->head in neigh_hh_output()
While skb_push() makes the kernel panic if the skb headroom is less than
the unaligned hardware header size, it will proceed normally in case we
copy more than that because of alignment, and we'll silently corrupt
adjacent slabs.

In the case fixed by the previous patch,
"ipv6: Check available headroom in ip6_xmit() even without options", we
end up in neigh_hh_output() with 14 bytes headroom, 14 bytes hardware
header and write 16 bytes, starting 2 bytes before the allocated buffer.

Always check we're not writing before skb->head and, if the headroom is
not enough, warn and drop the packet.

v2:
 - instead of panicking with BUG_ON(), WARN_ON_ONCE() and drop the packet
   (Eric Dumazet)
 - if we avoid the panic, though, we need to explicitly check the headroom
   before the memcpy(), otherwise we'll have corrupted slabs on a running
   kernel, after we warn
 - use __skb_push() instead of skb_push(), as the headroom check is
   already implemented here explicitly (Eric Dumazet)
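
A condensed sketch of the resulting check, per the description above (the
seqlock loop that guards the copy is omitted here):

hh_alen = HH_DATA_ALIGN(hh->hh_len);            /* aligned size actually copied */

if (WARN_ON_ONCE(skb_headroom(skb) < hh_alen)) {
        kfree_skb(skb);                         /* would write before skb->head */
        return NET_XMIT_DROP;
}

memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
__skb_push(skb, hh->hh_len);                    /* headroom already checked above */
return dev_queue_xmit(skb);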

Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:24:40 -08:00
Stefano Brivio
66033f47ca ipv6: Check available headroom in ip6_xmit() even without options
Even if we send an IPv6 packet without options, MAX_HEADER might not be
enough to account for the additional headroom required by alignment of
hardware headers.

On a configuration without HYPERV_NET, WLAN, AX25, and with IPV6_TUNNEL,
sending short SCTP packets over IPv4 over L2TP over IPv6, we start with
100 bytes of allocated headroom in sctp_packet_transmit(), end up with 54
bytes after l2tp_xmit_skb(), and 14 bytes in ip6_finish_output2().

Those would be enough to append our 14 bytes header, but we're going to
align that to 16 bytes, and write 2 bytes out of the allocated slab in
neigh_hh_output().

KASan says:

[  264.967848] ==================================================================
[  264.967861] BUG: KASAN: slab-out-of-bounds in ip6_finish_output2+0x1aec/0x1c70
[  264.967866] Write of size 16 at addr 000000006af1c7fe by task netperf/6201
[  264.967870]
[  264.967876] CPU: 0 PID: 6201 Comm: netperf Not tainted 4.20.0-rc4+ #1
[  264.967881] Hardware name: IBM 2827 H43 400 (z/VM 6.4.0)
[  264.967887] Call Trace:
[  264.967896] ([<00000000001347d6>] show_stack+0x56/0xa0)
[  264.967903]  [<00000000017e379c>] dump_stack+0x23c/0x290
[  264.967912]  [<00000000007bc594>] print_address_description+0xf4/0x290
[  264.967919]  [<00000000007bc8fc>] kasan_report+0x13c/0x240
[  264.967927]  [<000000000162f5e4>] ip6_finish_output2+0x1aec/0x1c70
[  264.967935]  [<000000000163f890>] ip6_finish_output+0x430/0x7f0
[  264.967943]  [<000000000163fe44>] ip6_output+0x1f4/0x580
[  264.967953]  [<000000000163882a>] ip6_xmit+0xfea/0x1ce8
[  264.967963]  [<00000000017396e2>] inet6_csk_xmit+0x282/0x3f8
[  264.968033]  [<000003ff805fb0ba>] l2tp_xmit_skb+0xe02/0x13e0 [l2tp_core]
[  264.968037]  [<000003ff80631192>] l2tp_eth_dev_xmit+0xda/0x150 [l2tp_eth]
[  264.968041]  [<0000000001220020>] dev_hard_start_xmit+0x268/0x928
[  264.968069]  [<0000000001330e8e>] sch_direct_xmit+0x7ae/0x1350
[  264.968071]  [<000000000122359c>] __dev_queue_xmit+0x2b7c/0x3478
[  264.968075]  [<00000000013d2862>] ip_finish_output2+0xce2/0x11a0
[  264.968078]  [<00000000013d9b14>] ip_finish_output+0x56c/0x8c8
[  264.968081]  [<00000000013ddd1e>] ip_output+0x226/0x4c0
[  264.968083]  [<00000000013dbd6c>] __ip_queue_xmit+0x894/0x1938
[  264.968100]  [<000003ff80bc3a5c>] sctp_packet_transmit+0x29d4/0x3648 [sctp]
[  264.968116]  [<000003ff80b7bf68>] sctp_outq_flush_ctrl.constprop.5+0x8d0/0xe50 [sctp]
[  264.968131]  [<000003ff80b7c716>] sctp_outq_flush+0x22e/0x7d8 [sctp]
[  264.968146]  [<000003ff80b35c68>] sctp_cmd_interpreter.isra.16+0x530/0x6800 [sctp]
[  264.968161]  [<000003ff80b3410a>] sctp_do_sm+0x222/0x648 [sctp]
[  264.968177]  [<000003ff80bbddac>] sctp_primitive_ASSOCIATE+0xbc/0xf8 [sctp]
[  264.968192]  [<000003ff80b93328>] __sctp_connect+0x830/0xc20 [sctp]
[  264.968208]  [<000003ff80bb11ce>] sctp_inet_connect+0x2e6/0x378 [sctp]
[  264.968212]  [<0000000001197942>] __sys_connect+0x21a/0x450
[  264.968215]  [<000000000119aff8>] sys_socketcall+0x3d0/0xb08
[  264.968218]  [<000000000184ea7a>] system_call+0x2a2/0x2c0

[...]

Just like ip_finish_output2() does for IPv4, check that we have enough
headroom in ip6_xmit(), and reallocate it if we don't.

This issue is older than git history.
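
A condensed sketch of the check described above, mirroring what the
options-only path already did (error handling abbreviated):

/* account for the hardware header even when there are no options */
head_room = sizeof(struct ipv6hdr) + LL_RESERVED_SPACE(dst->dev);
if (opt)
        head_room += opt->opt_nflen + opt->opt_flen;

if (unlikely(skb_headroom(skb) < head_room)) {
        struct sk_buff *skb2 = skb_realloc_headroom(skb, head_room);

        if (!skb2) {
                kfree_skb(skb);
                return -ENOBUFS;
        }
        if (skb->sk)
                skb_set_owner_w(skb2, skb->sk);
        consume_skb(skb);
        skb = skb2;
}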

Reported-by: Jianlin Shi <jishi@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:24:40 -08:00
Eric Dumazet
f9bfe4e6a9 tcp: lack of available data can also cause TSO defer
tcp_tso_should_defer() can return true in three different cases :

 1) We are cwnd-limited
 2) We are rwnd-limited
 3) We are application limited.

Neal pointed out that my recent fix went too far, since
it assumed that if we were not in case 1), we must be rwnd-limited.

Fix this by properly populating the is_cwnd_limited and
is_rwnd_limited booleans.

After this change, we can finally move the silly check for FIN
flag only for the application-limited case.

The same move for EOR bit will be handled in net-next,
since commit 1c09f7d073 ("tcp: do not try to defer skbs
with eor mark (MSG_EOR)") is scheduled for linux-4.21

Tested by running 200 concurrent netperf -t TCP_RR -- -r 60000,100
and checking that none of them was rwnd_limited in the chrono_stat
output from the "ss -ti" command.
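
The shape of the change, condensed (variable names follow
tcp_tso_should_defer(); this is a sketch, not the full function):

if (cong_win < send_win) {
        if (cong_win <= skb->len) {
                *is_cwnd_limited = true;
                return true;            /* defer: cwnd is the limit */
        }
} else {
        if (send_win <= skb->len) {
                *is_rwnd_limited = true;
                return true;            /* defer: rwnd is the limit */
        }
}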

Fixes: 41727549de ("tcp: Do not underestimate rwnd_limited")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Suggested-by: Neal Cardwell <ncardwell@google.com>
Reviewed-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Reviewed-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:18:22 -08:00
yupeng
0fbe82e628 net: call sk_dst_reset when set SO_DONTROUTE
After setting SO_DONTROUTE to 1, the IP layer should not route packets if
the destination IP address is not in link scope. But if the socket has
cached the dst_entry, such packets would be routed until the sk_dst_cache
expires. So we should clear the sk_dst_cache when a user sets the
SO_DONTROUTE option. Below are server/client Python scripts which
can reproduce this issue:

server side code:

==========================================================================
import socket
import struct
import time

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('0.0.0.0', 9000))
s.listen(1)
sock, addr = s.accept()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_DONTROUTE, struct.pack('i', 1))
while True:
    sock.send(b'foo')
    time.sleep(1)
==========================================================================

client side code:
==========================================================================
import socket
import time

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('server_address', 9000))
while True:
    data = s.recv(1024)
    print(data)
==========================================================================
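
The kernel-side change itself is small; roughly, the SO_DONTROUTE case in
sock_setsockopt() gains an sk_dst_reset() call (sketch, not the literal
diff):

case SO_DONTROUTE:
        sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
        sk_dst_reset(sk);       /* drop any cached route so the flag takes effect */
        break;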

Signed-off-by: yupeng <yupeng0921@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:11:54 -08:00
David Ahern
58956317c8 neighbor: Improve garbage collection
The existing garbage collection algorithm has a number of problems:

1. The gc algorithm will not evict PERMANENT entries as those entries
   are managed by userspace, yet the existing algorithm walks the entire
   hash table which means it always considers PERMANENT entries when
   looking for entries to evict. In some use cases (e.g., EVPN) there
   can be tens of thousands of PERMANENT entries leading to wasted
   CPU cycles when gc kicks in. As an example, with 32k permanent
   entries, neigh_alloc has been observed taking more than 4 msec per
   invocation.

2. Currently, when the number of neighbor entries hits gc_thresh2 and
   the last flush for the table was more than 5 seconds ago, gc kicks in
   and walks the entire hash table, evicting *all* entries not in PERMANENT
   or REACHABLE state and not marked as externally learned. There is no
   discriminator on when the neigh entry was created or if it just moved
   from REACHABLE to another NUD_VALID state (e.g., NUD_STALE).

   It is possible for entries to be created or for established neighbor
   entries to be moved to STALE (e.g., an external node sends an ARP
   request) right before the 5 second window lapses:

        -----|---------x|----------|-----
            t-5         t         t+5

   If that happens those entries are evicted during gc causing unnecessary
   thrashing on neighbor entries and userspace caches trying to track them.

   Further, this contradicts the description of gc_thresh2 which says
   "Entries older than 5 seconds will be cleared".

   One workaround is to make gc_thresh2 == gc_thresh3 but that negates the
   whole point of having separate thresholds.

3. Clearing *all* neigh non-PERMANENT/REACHABLE/externally learned entries
   when gc_thresh2 is exceeded is overkill and contributes to thrashing,
   especially during startup.

This patch addresses these problems as follows:

1. Use of a separate list_head to track entries that can be garbage
   collected along with a separate counter. PERMANENT entries are not
   added to this list.

   The gc_thresh parameters are only compared to the new counter, not the
   total entries in the table. The forced_gc function is updated to only
   walk this new gc_list looking for entries to evict.

2. Entries are added at the tail of the list and removed from the
   front.

3. Entries are only evicted if they were last updated more than 5 seconds
   ago, adhering to the original intent of gc_thresh2.

4. Forced gc is stopped once the number of gc_entries drops below
   gc_thresh2.

5. Since gc checks do not apply to PERMANENT entries, gc levels are skipped
   when allocating a new neighbor for a PERMANENT entry. By extension this
   means there are no explicit limits on the number of PERMANENT entries
   that can be created, but this is no different than FIB entries or FDB
   entries.
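
A rough sketch of the forced-gc walk described in points 1-4 above; field
and helper names are approximations of the patch, not a verbatim copy:

/* walk only the entries eligible for gc, oldest first */
list_for_each_entry_safe(n, tmp, &tbl->gc_list, gc_list) {
        if (refcount_read(&n->refcnt) == 1) {
                bool remove = false;

                write_lock(&n->lock);
                if (n->nud_state == NUD_FAILED ||
                    time_after(jiffies - 5 * HZ, n->updated))
                        remove = true;          /* untouched for more than 5 seconds */
                write_unlock(&n->lock);

                if (remove && neigh_remove_one(n, tbl))
                        shrunk++;
                if (shrunk >= max_clean)        /* back below gc_thresh2 */
                        break;
        }
}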

Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 16:03:10 -08:00
David S. Miller
12edfdfc79 Merge branch 'hns3-error-handling'
Salil Mehta says:

====================
net: hns3: Additions/optimizations related to HNS3 H/W err handling

This patch set primarily makes the following additions and optimizations
related to error handling in the HNS3 Ethernet driver:

 1. Name changes for enable and process functions and minor loop
    optimizations. [PATCH 1-6]
 2. Modify query and clearing of RAS errors using a new set of commands,
    because the module-specific commands for clearing RCB PPP PF and SSU
    are obsolete. [PATCH 7]
 3. Delete logging of 1-bit RAS errors in the HNS3 driver as these never
    get reported to the driver. [PATCH 8]
 4. Add handling of NIC hw errors reported through MSIx rather than
    PCIe AER channel. [PATCH 9]
 5. Add handling for the HW RAS and MSIx errors in the modules MAC, PPP
    PF, MSIx SRAM, RCB and SSU. [PATCH 10-13]
 6. Add handling of RoCEE RAS errors. [PATCH 14]
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:02 -08:00
Shiju Jose
630ba007f4 net: hns3: add handling of RDMA RAS errors
This patch handles the RDMA RAS errors.
1. Enable the RAS interrupt, print detailed error info and clear the error status.
2. Do a CORE reset to recover when these non-fatal errors happen.

Signed-off-by: Xiaofei Tan <tanxiaofei@huawei.com>
Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
c3529177db net: hns3: handle hw errors of SSU
This patch enables and handles hw errors of the Storage Switch Unit (SSU).

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
f69b10b317 net: hns3: handle hw errors of PPU(RCB)
This patch enables and handles hw RAS and MSIx errors of PPU (RCB).

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
8fc9d3e3b4 net: hns3: handle hw errors of PPP PF
This patch handles PF hw errors of PPP (Programmable Packet Processor).

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
7838f908e2 net: hns3: add handling of hw errors of MAC
This patch adds enabling and handling of hw errors of
the MAC block.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Salil Mehta
f6162d4412 net: hns3: add handling of hw errors reported through MSIX
This patch adds handling for HNS3 hardware errors (non-standard)
which are reported through MSIX interrupts and not through the
PCIe AER channel.

These MSIX-reported hardware errors are handled using the common
misc. interrupt handler. Hardware-error-related registers cannot
be cleared in the context of the interrupt received, as they
require *heavy* access to hardware using IMP (Integrated
Management Processor) commands. Hence, we defer the clearing
of such error events until a later time.

Since we have deferred the exact identification of errors, we
will also have to defer the level of recovery/reset which
might be required. Hence, a new reset type, UNKNOWN reset,
has been introduced, which effectively defers the assertion
of the reset until we know what kind of errors occurred.

Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
8bb147927c net: hns3: deleted logging 1 bit errors
This patch deletes logging of 1-bit errors for the following reasons.
1. AER does not notify device drivers of 1-bit errors.
   However, AER reports 1-bit errors to userspace through the
   trace_aer_event for logging in the rasdaemon.
2. Firmware clears the status of 1-bit errors in the hw registers.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
332fbf5765 net: hns3: add handling of hw ras errors using new set of commands
1. This patch adds handling of hw RAS errors using the new set of
   common commands.
2. Updates the error message tables to match the register names and
   the error status returned by the commands.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
481a626a60 net: hns3: add optimization in the hclge_hw_error_set_state
1. This patch adds a minor loop optimization in the
   hclge_hw_error_set_state function.
2. Adds logging of the module's name if it fails to configure the
   error interrupts.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
381c356e95 net: hns3: rename process_hw_error function
This patch renames the process_hw_error function to
handle_hw_ras_error to match the purpose of the function.
This is because hw errors reported through RAS and MSIx
interrupts will be handled separately.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
166b04c3ee net: hns3: deletes unnecessary settings of the descriptor data
This patch deletes the unnecessary setting of the descriptor data
to 0 when disabling error interrupts, because that is already done
by the hclge_cmd_setup_basic_desc function.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
f3fa4a94db net: hns3: re-enable error interrupts on hw reset
This patch adds a call to the hclge_hw_error_set_state function
to re-enable the error interrupts that are disabled on
hw reset.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:01 -08:00
Shiju Jose
98da4027af net: hns3: rename enable error interrupt functions
This patch
- renames the enable error interrupt functions, since these functions
  are used both to enable and to disable error interrupts.

- removes redundant logs from the enable error interrupt functions.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:00 -08:00
Shiju Jose
fe0f7d698d net: hns3: remove existing process error functions and reorder hw_blk table
1. The command interface for querying and clearing hw errors has
   changed, which requires new process-error functions to be added.
   This patch removes all the current process-error functions and
   associated definitions. The new functions to handle RAS errors
   are added later in this patch set.

2. Fix the ordering issue of the hw_blk table.

Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 15:57:00 -08:00
Linus Torvalds
5f179793f0 virtio fixes
A couple of last-minute fixes.
 
 Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJcCdNyAAoJECgfDbjSjVRpNAoH/A2eW0+UjIQar+jPKh1XPwN6
 uOoPAS3AnXGC0qhlJb2/W77intpLmF/SMkOSNBfvg1MXYTPJRGXcjS7v7446qpZf
 iCk/UEDN3Ck0OuxoR4AfO5qkbZSDGBGYDoSvB4JLufy9PTaNCrd+gBM349Fg1cRc
 lkK6aXLe7vydB+x0rLyfCrBEccZ8UtmCGs1FIzd9bvDgRGzlPnPGwavX0w8lx7Jn
 ZtQClajG2BjF0kMu+1hW9m781yjh7TYwshG2lYhQama5a8x3X5jOIUDNknfw3uQd
 crWOPPlzIELJcWrmG2/psydNGcq1b0S6vhFbpO7BiFpvCvdW5hAMk75c7hBWpls=
 =DLS0
 -----END PGP SIGNATURE-----

Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost

Pull vhost/virtio fixes from Michael Tsirkin:
 "A couple of last-minute fixes"

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  vhost/vsock: fix use-after-free in network stack callers
  virtio/s390: fix race in ccw_io_helper()
  virtio/s390: avoid race on vcdev->config
  vhost/vsock: fix reset orphans race with close timeout
2018-12-07 14:34:10 -08:00
Linus Torvalds
b8bf4692c9 - Avoid sending IPIs with interrupts disabled
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE5RElWfyWxS+3PLO2a9axLQDIXvEFAlwKvQgACgkQa9axLQDI
 XvH+uw//UTEMR3jPre+hLqXQ+/hWWu9yYQeD9PVsyce6a7dIJ2NsL0Y9k4eORArw
 2rftLWq1r2CjndlULwxge01WQ2RXGiDgQ5GFutwGw/6kWid/JPnHj4yhDyDz3mR7
 DtioRhVUx1X3I+gQIqlGZSlF24wHcJfcbbVDBpydpW428Qo0OqM4GZAiCQha7QCZ
 HUMFZgWrYh6vsSe77151cRqoe8gozwlJ3aYrgf1E7cORCVRCbc2uyKLNihizmzIr
 3FwSPPn6zVPiKHfoYPjoyIBGd5Um3I1MgUhRVY/w4sZbylsdVx9whJbC6J5rg3xu
 5gCU3B0tJbZCQJQlJwtUrWNocJ0CGwK20wBukpWtHPyIZg4lf9etzthXEP1Pqn6m
 WT7LXj3SMV0qykp8mPes+AbSKJHOhE9/yoS0wu1As/mGA6sbnu0p/T77DeovChC/
 MTvNFQTvd1MudqOGbvMJrWtyrKG5dPJsE1RZ/xdgqA+jEfgJ7foZz8XCnT8Ypf9X
 Gl0JwcnmhNRXXCZ6FQweyHUgkUMFyLce1YnvtTmWr9QP94HG32IpNBqOLhaIX7Nz
 2NvENq6pmKzzOBOXB8IWz29ZFUhaiM0gaI5BT8XmMQHI6q/Xctv04KY91aJ+RAoU
 zkVt5n1e2gIW3z7oAA6hfRbn7JRvF7TWNaGjRb23783Y3nBHQtw=
 =812H
 -----END PGP SIGNATURE-----

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fix from Catalin Marinas:
 "Avoid sending IPIs with interrupts disabled"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: hibernate: Avoid sending cross-calling with interrupts disabled
2018-12-07 14:18:49 -08:00
Alexei Starovoitov
6baefa1aa4 Merge branch 'support-alu32_arsh'
Jiong Wang says:

====================
BPF_ALU | BPF_ARSH | BPF_* were rejected by commit: 7891a87efc
("bpf: arsh is not supported in 32 bit alu thus reject it"). As explained
in the commit message, this was because there was no complete support for
them in the interpreter and the various JIT compilation back-ends.

This patch set is a follow-up which completes the missing bits. It also
paves the way for running BPF programs compiled with ALU32 instructions
enabled by specifying -mattr=+alu32 to LLVM, in which case there are likely
to be more BPF_ALU | BPF_ARSH insns that would trigger the rejection code.

test_verifier.c is updated accordingly.

I have tested this patch set on x86-64 and NFP; I need help with review and
testing of the arch changes (mips/ppc/s390).

Note, there might be a merge conflict on the mips change, which is better
applied on top of:

  commit: 20b880a05f06 ("mips: bpf: fix encoding bug for mm_srlv32_op"),

which is on mips-fixes branch at the moment.

Thanks.

v1->v2:
 - Fix ppc implementation bug. Should zero high bits explicitly.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:49 -08:00
Jiong Wang
c099f3f413 selftests: bpf: update testcases for BPF_ALU | BPF_ARSH
"arsh32 on imm" and "arsh32 on reg" now are accepted. Also added two new
testcases to make sure arsh32 won't be treated as arsh64 during
interpretation or JIT code-gen for which case the high bits will be moved
into low halve that the testcases could catch them.

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
c49f7dbd4f bpf: verifier remove the rejection on BPF_ALU | BPF_ARSH
This patch removes the rejection of BPF_ALU | BPF_ARSH, as it is now
supported by the interpreter and all JIT back-ends.

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
2dc6b100f9 bpf: interpreter support BPF_ALU | BPF_ARSH
This patch implements interpreting BPF_ALU | BPF_ARSH: do an arithmetic
right shift on the low 32-bit sub-register, and zero the high 32 bits.
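
A tiny standalone C demo of the semantics described above, handy for
sanity-checking JIT back-ends (illustrative only, not kernel code; it relies
on the usual arithmetic behavior of >> on negative values):

#include <stdint.h>
#include <stdio.h>

/* 32-bit ARSH: arithmetic shift on the low 32 bits, upper 32 bits zeroed */
static uint64_t alu32_arsh(uint64_t dst, uint32_t shift)
{
        return (uint64_t)(uint32_t)((int32_t)(uint32_t)dst >> shift);
}

int main(void)
{
        /* low half 0x80000000 is negative as s32, so sign bits fill from
         * the left, but nothing leaks into the upper 32 bits of the result
         */
        printf("%#llx\n",
               (unsigned long long)alu32_arsh(0xffffffff80000000ULL, 4));
        return 0;       /* prints 0xf8000000 */
}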

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
84708c1386 nfp: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*
BPF_X support needs indirect shift mode; please see the code comments for
details.

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
f860203b01 s390: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*
This patch implements code-gen for BPF_ALU | BPF_ARSH | BPF_*.

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
44cf43c04b ppc: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_*
This patch implements code-gen for BPF_ALU | BPF_ARSH | BPF_*.

Cc: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
Cc: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
ee94b90c8a mips: bpf: implement jitting of BPF_ALU | BPF_ARSH | BPF_X
Jitting of BPF_K is supported already, but not BPF_X. This patch completes
the support for the latter on both MIPS and microMIPS.

Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Acked-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:30:48 -08:00
Jiong Wang
17f6c83fb5 mips: bpf: fix encoding bug for mm_srlv32_op
For micro-mips, srlv inside POOL32A encoding space should use 0x50
sub-opcode, NOT 0x90.

Some early versions of the ISA doc describe the encoding as 0x90 for both
srlv and srav; this looks to me like a typo. I checked the Binutils
libopcode implementation, which uses 0x50 for srlv and 0x90 for srav.

v1->v2:
  - Keep mm_srlv32_op sorted by value.

Fixes: f31318fdf3 ("MIPS: uasm: Add srlv uasm instruction")
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jiong Wang <jiong.wang@netronome.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2018-12-07 13:29:48 -08:00
Linus Torvalds
1cdc3624a1 Fixes for stackleak
- Remove tracing for inserted stack depth marking function (Anders Roxell)
 - Move gcc-plugin pass location to avoid objtool warnings (Alexander Popov)
 -----BEGIN PGP SIGNATURE-----
 Comment: Kees Cook <kees@outflux.net>
 
 iQJKBAABCgA0FiEEpcP2jyKd1g9yPm4TiXL039xtwCYFAlwKp1IWHGtlZXNjb29r
 QGNocm9taXVtLm9yZwAKCRCJcvTf3G3AJuT9D/9DP75YerfMFxiVx8BsFnGVfPW3
 QWa/nf2c8VMhmouQ9OI8j8Nj+T4q5VXewbGC5I0F6b2YsIPjHOwK0PR557xn7jRi
 7bY3aTRzJs4v+dDYkXqTkGx4zQ9FSD3NDM0T4vtnVGEdOmojcvoLX6+V7WArOTaa
 M9oP4iNn5/+Z677HyMP3DyTY093WpCx0fNOAf1HI/kpM3TPVJiE5OLXBZY957N01
 eBrt0WHJkmaZkHeqUkK06RTxYzIKBQqFRw77pPiKq79ETxBEwHOgU2hmwwHBv4+h
 u6TQmy7aVsUiXfS1GVvkNkX/jCNxYuK8kP5dsd+cQKn3AfkDHj3RvBTOvrkD0xyF
 7F9Toz/Wpw8+/YVx8ks6cNrssmEq4rd6T7MJcoud1TwEG1o/bSUbPc4uednuIUGL
 sB4J6sxApL2vaZtgqUePVZZJVKwiryFa8LymihkHMfPU4dgCycrYLGa3A1ju9WVs
 psGYhFTEfC1KVLgTmfwZlxz/FWbRmSERRF7cl9cdw8mdlqkKxP1C//VgsdJXOnnW
 c51BS+XK9OI8HTYXmWah82ysuCE7qou4DUJA91jhyza5tEp2V5C0uhOQz2odFcBF
 8axjqExFr4YfAwIgtGOClPA0e5CaB4ASRbOIs8+WL03LiNbfP/p6+92TpnwaP637
 Q5CbAMIfKqNpqAcAJg==
 =1JZ6
 -----END PGP SIGNATURE-----

Merge tag 'gcc-plugins-v4.20-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull gcc stackleak plugin fixes from Kees Cook:

 - Remove tracing for inserted stack depth marking function (Anders
   Roxell)

 - Move gcc-plugin pass location to avoid objtool warnings (Alexander
   Popov)

* tag 'gcc-plugins-v4.20-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  stackleak: Register the 'stackleak_cleanup' pass before the '*free_cfg' pass
  stackleak: Mark stackleak_track_stack() as notrace
2018-12-07 13:13:07 -08:00
Linus Torvalds
52ab2ec005 Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:

 - Disable the new crypto stats interface as it's still being changed

 - Fix potential uses-after-free in cbc/cfb/pcbc.

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: user - Disable statistics interface
  crypto: do not free algorithm before using
2018-12-07 13:07:10 -08:00
David S. Miller
9f4c2cffd0 Merge branch 'mlxsw-Un-offload-FDB-on-NVE-detach-attach'
Ido Schimmel says:

====================
mlxsw: Un/offload FDB on NVE detach/attach

Petr says:

When a VXLAN device is attached to a bridge of a driver capable of
offloading such, or upped, the FDB entries already present at the device
need to be offloaded. Similarly when an offloaded VXLAN device ceases
being interesting (it is downed, or detached, or a front-panel port
netdevice is detached from the bridge that the VXLAN device is attached
to), any offloaded FDB entries need to be unoffloaded and unmarked. This
attach / detach processing is implemented in this patchset.

In patch #1, a code pattern is extracted into a named function for
easier reuse.

In patch #2, vxlan_fdb_replay() is added to send
SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE for each FDB entry with a given VNI.
The intention is that the offloading driver will interpret these events
like any other and thus offload the FDB entries that existed prior to
VXLAN attach.

In patches #3 and #4, the functions vxlan_fdb_clear_offload() resp.
br_fdb_clear_offload() are added. These clear the offloaded flag at
matching FDB entries.

In patches #5-#9, we introduce FID-type-specific and NVE-type-specific
ops necessary to properly abstract invocations of the replay/clear
functions.

Finally patch #10 implements the FDB management.

In patch #11, the mlxsw-specific test case is extended to check that the
management of offload marks under the newly-supported situations is
correct. Patch #12, from Ido, exercises the new code paths in actual
functional test.

v2:
- Patch #1:
    - Modify vxlan_fdb_switchdev_notifier_info() to initialize the
      structure through a passed-in pointer argument, instead of returning
      it as a value.
- Patch #2:
    - Adapt to API change in vxlan_fdb_switchdev_notifier_info()
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:09 -08:00
Ido Schimmel
55939b262a selftests: forwarding: Add PVID test case for VXLAN with VLAN-aware bridges
When using VLAN-aware bridges with VXLAN, the VLAN that is mapped to the
VNI of the VXLAN device is that which is configured as "pvid untagged"
on the corresponding bridge port.

When these flags are toggled or when the VLAN is deleted entirely,
remote hosts should not be able to receive packets from the VTEP.

Add a test case for above mentioned scenarios.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:08 -08:00
Petr Machata
0efe9ed98d selftests: mlxsw: vxlan: Test FDB un/marking on VXLAN join/leave
When a VXLAN device is attached to an offloaded bridge, or when a
front-panel port is attached to a bridge that already has a VXLAN
device, mlxsw should offload the existing offloadable FDB entries.
Similarly, when a VXLAN device is downed, the FDB entries are unoffloaded
and the marks thus need to be cleared. The same applies when a front-panel
port device is attached to a bridge with a VXLAN device, or when VLAN flags
are tweaked on a VXLAN port attached to a VLAN-aware bridge.

Test that the replaying / clearing logic works by observing transitions
in presence of offload marks under different scenarios.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:08 -08:00
Petr Machata
8a5969d8a8 mlxsw: spectrum_nve: Un/offload FDB on nve_fid_disable/enable
Any existing NVE FDB entries need to be offloaded when NVE is enabled
for a given FID. Recent patches have added an fdb_replay op for this, so
just invoke it from mlxsw_sp_nve_fid_enable().

When NVE is disabled on a FID, any existing FDB offloaded marks need to
be cleared on NVE device as well as on its bridge master. An op to
handle this, fdb_clear_offload, has been added to FID ops and NVE ops in
previous patches. Add code to resolve the NVE device, NVE type, and
dispatch to both fdb_clear_offload ops.
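
Illustrative only: the dispatch described above might look roughly like the
snippet below. The surrounding mlxsw types and exact argument lists are
guessed, while mlxsw_sp_fid_fdb_clear_offload() and the fdb_clear_offload
op are the ones named elsewhere in this series:

/* guessed shape of the disable path: clear marks on the NVE device via the
 * per-NVE-type op, then on its bridge master via the FID op
 */
ops->fdb_clear_offload(nve_dev, vni);
mlxsw_sp_fid_fdb_clear_offload(fid, nve_dev);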

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:08 -08:00
Petr Machata
83de78831b mlxsw: spectrum: Add mlxsw_sp_fid_ops.fdb_clear_offload
If there are any offloaded FDB entries at the bridge master of an NVE
device at the time that it's un-offloaded, their offloaded marks need to
be cleared. How that is done depends on whether the bridge in question is
VLAN-aware. Therefore add a per-FID-type operation.

Implement the operation for the 802.1q and 802.1d bridges.

Add and publish a function mlxsw_sp_fid_fdb_clear_offload() to dispatch
to the new operation according to FID type.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:08 -08:00
Petr Machata
b73ef0e0ee mlxsw: spectrum_nve: Add mlxsw_sp_nve_ops.fdb_clear_offload
If there are any offloaded FDB entries at an NVE device at the time that
it's un-offloaded, their offloaded marks need to be cleared. How that is
done depends on NVE device type, and therefore add a per-NVE-type
operation.

Implement the operation for the sole NVE device type currently supported
by mlxsw, VXLAN.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:08 -08:00
Petr Machata
a6ef5a48a3 mlxsw: spectrum_nve: Add mlxsw_sp_nve_ops.fdb_replay
A replay of FDB needs to be performed so that the FDB entries existing
at the NVE device are offloaded. How the replay is done depends on NVE
device type, and therefore add a per-NVE-type operation.

Implement the operation for the sole NVE device type currently supported
by mlxsw, VXLAN.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:08 -08:00
Petr Machata
34139ede05 mlxsw: spectrum_switchdev: Publish mlxsw_sp_switchdev_notifier
The notifier block will need to be passed to vxlan_fdb_replay() in a
follow-up patch.

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:08 -08:00
Petr Machata
2a36c12520 mlxsw: spectrum: Track NVE type at FIDs
A follow-up patch will add support for replay and for clearing of
offload marks. These are NVE type-sensitive operations, and to be able
to dispatch them properly, a FID needs to know what NVE type is attached
to it.

Therefore, track the NVE type at struct mlxsw_sp_fid. Extend
mlxsw_sp_fid_vni_set() to take it as an argument, and add
mlxsw_sp_fid_nve_type().

Signed-off-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-07 12:59:08 -08:00