Commit Graph

768121 Commits

Author SHA1 Message Date
David S. Miller
f5c64e566c Merge branch 'mlxsw-VRRP'
Ido Schimmel says:

====================
mlxsw: Add VRRP support

When a router that is acting as the default gateway of a host stops
functioning, the host will encounter packet loss until the router starts
functioning again.

To increase the reliability of the default gateway without performing
reconfiguration on the host, a host can use a Virtual Router Redundancy
Protocol (VRRP) Router. This virtual router is composed of several
routers, where only one is actually forwarding packets from the host (the
master router) while the other routers act as backup routers. The
election of the master router is determined by the VRRP protocol [1].

Packets addressed to the virtual router are always sent to the virtual
router MAC address (IPv4: 00-00-5E-00-01-XX, IPv6: 00-00-5E-00-02-XX).
Such packets can only be accepted by the master router and must be
discarded by the backup routers.

In Linux, VRRP is usually implemented by configuring a macvlan with the
virtual router MAC on top of the router interface that is connected to
the host / LAN. The macvlan on the master router is assigned the virtual
IP (VIP) that the host uses as its gateway.

In order to support VRRP in mlxsw, we first need to enable macvlan upper
devices on top of mlxsw netdevs and their uppers. This is done by the
first patch, which also takes care of sanitizing macvlan configurations
that are not currently supported by the driver.

The second patch directs packets whose destination MAC matches one of the
macvlans to the router so that they will undergo an L3 lookup. This is
consistent with the kernel's behavior where the macvlan's Rx handler
will re-inject such packets to the Rx path so that they will be picked
up by the IPvX protocol handlers and undergo an L3 lookup. Note that the
driver prevents the macvlans from being enslaved to other devices, to
ensure the packets will be picked up by the protocol handler and not by
another Rx handler.

The third patch adds packet traps for VRRP control packets for both IPv4
and IPv6. Finally, the last patch optimizes the reception of VRRP MACs
by potentially skipping one L2 lookup for them.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-14 11:23:26 -07:00
Ido Schimmel
c3a495409a mlxsw: spectrum_router: Optimize processing of VRRP MACs
Hosts using a VRRP router send their packets with a destination MAC of
the VRRP router which is of the following form [1]:

IPv4 - 00-00-5E-00-01-{VRID}
IPv6 - 00-00-5E-00-02-{VRID}

Where VRID is the ID of the virtual router. Such packets are directed to
the router block in the ASIC by an FDB entry that was added in the
previous patch.

However, in certain cases it is possible to skip this FDB lookup and
send such packets directly to the router. This is accomplished by adding
these special MAC addresses to the RIF cache. If the cache is hit, the
packet will skip the L2 lookup and ingress the router with the RIF
specified in the cache entry.

1. https://tools.ietf.org/html/rfc5798#section-7.3
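
As a side note, a minimal userspace sketch (not driver code; the helper
name is made up) that builds the IPv4 virtual router MAC for a given
VRID:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* RFC 5798: the IPv4 VRRP virtual MAC is 00-00-5E-00-01-{VRID};
   * IPv6 uses 00-00-5E-00-02-{VRID}. */
  static void vrrp4_vmac(uint8_t vrid, uint8_t mac[6])
  {
          static const uint8_t prefix[5] = { 0x00, 0x00, 0x5e, 0x00, 0x01 };

          memcpy(mac, prefix, sizeof(prefix));
          mac[5] = vrid;
  }

  int main(void)
  {
          uint8_t mac[6];

          vrrp4_vmac(7, mac);
          printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
                 mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
          return 0;
  }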

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-14 11:23:26 -07:00
Ido Schimmel
11566d34f8 mlxsw: spectrum: Add VRRP traps
Virtual Router Redundancy Protocol packets are used to communicate the
state of the Master router associated with the virtual router ID (VRID).

These are link-local multicast packets sent with IP protocol 112 that
are trapped in the router block in the ASIC.

Add a trap for these packets and mark the trapped packets to prevent
them from potentially being re-flooded by the bridge driver.
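
For reference, a userspace sketch of the match (assumptions: the buffer
starts at the IPv4 header, and 224.0.0.18 is the IPv4 VRRP group from
RFC 5798; the text above only says link-local multicast):

  #include <arpa/inet.h>
  #include <netinet/ip.h>
  #include <stdbool.h>
  #include <stddef.h>

  #define VRRP_PROTO 112  /* IP protocol number used by VRRP */

  /* Return true if the buffer looks like an IPv4 VRRP control packet. */
  static bool is_vrrp4_control(const void *buf, size_t len)
  {
          const struct iphdr *iph = buf;

          if (len < sizeof(*iph))
                  return false;
          return iph->protocol == VRRP_PROTO &&
                 iph->daddr == htonl(0xe0000012); /* 224.0.0.18 */
  }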

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-14 11:23:26 -07:00
Ido Schimmel
2db9937804 mlxsw: spectrum_router: Direct macvlans' MACs to router
An IP packet received on a netdev with a macvlan upper whose MAC matches
the packet's destination MAC will be re-injected into the Rx path as if it
was received by the macvlan, and will undergo an L3 lookup.

Reflect this functionality to the ASIC by programming FDB entries that
will direct MACs of macvlan uppers to the router.

In a similar fashion to router interfaces (RIFs) that are programmed
upon the addition of the first IP address on an interface and destroyed
upon the removal of the last IP address, the FDB entries for the macvlan
are added and destroyed based on the addition of the first and removal
of the last IP address on the macvlan.

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-14 11:23:26 -07:00
Ido Schimmel
c55161852f mlxsw: spectrum: Enable macvlan upper devices
In order to allow more unicast MAC addresses (e.g., VRRP virtual MAC) to
be directed to the router we need to enable macvlan uppers on top of
mlxsw netdevs.

Allow macvlan upper devices on top of mlxsw netdevs and sanitize
configurations that can't work. For example, a macvlan can't be enslaved
to a bridge as without ACLs the device doesn't take the destination MAC
into account when classifying a packet to a bridge instance (i.e., a
FID).

Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: Petr Machata <petrm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-14 11:23:25 -07:00
Yafang Shao
ff0432e5a8 tcp: remove redundant rcv_nxt update
tcp_rcv_nxt_update() is already executed in tcp_data_queue().
This line is redundant.

See below,
	tcp_queue_rcv
		tcp_rcv_nxt_update(tcp_sk(sk), TCP_SKB_CB(skb)->end_seq);
	tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq); <<<< redundant

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-14 11:21:40 -07:00
Okash Khawaja
2d3feca8c4 bpf: btf: print map dump and lookup with btf info
This patch augments the output of bpftool's map dump and map lookup
commands to print data alongside btf info, if the corresponding btf
info is available. The outputs for each of map dump and map lookup
commands are augmented in two ways:

1. when neither -j nor -p is supplied, btf-ful map data is printed
whose aim is human readability. This means no commitments for json- or
backward- compatibility.

2. when either -j or -p is supplied, a new json object named
"formatted" is added for each key-value pair. This object contains the
same data as the key-value pair, but with btf info. "formatted" object
promises json- and backward- compatibility. Below is a sample output.

$ bpftool map dump -p id 8
[{
        "key": ["0x0f","0x00","0x00","0x00"
        ],
        "value": ["0x03", "0x00", "0x00", "0x00", ...
        ],
        "formatted": {
                "key": 15,
                "value": {
                        "int_field":  3,
                        ...
                }
        }
}
]

This patch calls the btf_dumper introduced in the previous patch to
accomplish the above. Note that btf-ful info is only displayed if btf
data for the given map is available. Otherwise the existing output is
displayed as-is.

Signed-off-by: Okash Khawaja <osk@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-14 13:00:40 +02:00
Okash Khawaja
b12d6ec097 bpf: btf: add btf print functionality
This consumes functionality exported in the previous patch. It does the
main job of printing with BTF data. This is used in the following patch
to provide a more readable output of a map's dump. It relies on
json_writer to do json printing. Below is sample output where map keys
are ints and values are of type struct A:

typedef int int_type;
enum E {
        E0,
        E1,
};

struct B {
        int x;
        int y;
};

struct A {
        int m;
        unsigned long long n;
        char o;
        int p[8];
        int q[4][8];
        enum E r;
        void *s;
        struct B t;
        const int u;
        int_type v;
        unsigned int w1: 3;
        unsigned int w2: 3;
};

$ sudo bpftool map dump id 14
[{
        "key": 0,
        "value": {
            "m": 1,
            "n": 2,
            "o": "c",
            "p": [15,16,17,18,15,16,17,18
            ],
            "q": [[25,26,27,28,25,26,27,28
                ],[35,36,37,38,35,36,37,38
                ],[45,46,47,48,45,46,47,48
                ],[55,56,57,58,55,56,57,58
                ]
            ],
            "r": 1,
            "s": 0x7ffd80531cf8,
            "t": {
                "x": 5,
                "y": 10
            },
            "u": 100,
            "v": 20,
            "w1": 0x7,
            "w2": 0x3
        }
    }
]

This patch uses json's {} and [] to imply struct/union and array. More
explicit information can be added later. For example, a command line
option can be introduced to print whether a key or value is struct
or union, name of a struct etc. This will however come at the expense
of duplicating info when, for example, printing an array of structs.
enums are printed as ints without their names.

Signed-off-by: Okash Khawaja <osk@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-14 13:00:40 +02:00
Okash Khawaja
92b57121ca bpf: btf: export btf types and name by offset from lib
This patch introduces the btf__resolve_type() function and exports two
existing functions from libbpf. btf__resolve_type follows modifier
types like const and typedef until it hits a type which actually takes
up memory, and then returns it. This function follows a similar pattern
to btf__resolve_size, but instead of computing the size, it just
returns the type.

These functions will be used in the following patch, which parses
information inside an array of `struct btf_type *`. btf_name_by_offset
is used for printing variable names.
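
As a rough illustration of the resolve loop (toy descriptors, not
libbpf's actual struct btf_type layout), following modifiers until a
type that takes up memory looks like:

  #include <stdint.h>

  /* Toy stand-ins for BTF kinds; the real code walks struct btf_type. */
  enum toy_kind {
          KIND_INT, KIND_STRUCT, KIND_CONST, KIND_TYPEDEF, KIND_VOLATILE,
  };

  struct toy_type {
          enum toy_kind kind;
          uint32_t next;          /* referenced type id, for modifier kinds */
  };

  /* Follow const/typedef/volatile chains until a sized type is reached. */
  static int32_t toy_resolve_type(const struct toy_type *types, uint32_t id)
  {
          int limit = 64;         /* guard against malformed type cycles */

          while (limit--) {
                  switch (types[id].kind) {
                  case KIND_CONST:
                  case KIND_TYPEDEF:
                  case KIND_VOLATILE:
                          id = types[id].next;
                          break;
                  default:
                          return id;      /* int, struct, ...: takes memory */
                  }
          }
          return -1;
  }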

Signed-off-by: Okash Khawaja <osk@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-14 13:00:40 +02:00
Jakub Kicinski
db42a21a1e tools: include reallocarray feature test in FEATURE_TESTS_BASIC
perf propagates its feature check results to libbpf.  This means the
features perf probes for must be a superset of libbpf's required
features.  perf depends on FEATURE_TESTS_BASIC for its list of
features.

Commit 531b014e7a ("tools: bpf: make use of reallocarray") added a
reallocarray use to libbpf, so make perf also perform the reallocarray
feature check.
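
For context, the reallocarray() this feature test probes for is
essentially realloc() plus an overflow check on nmemb * size; a generic
fallback sketch (not the exact tools/ implementation):

  #include <errno.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* realloc() an array of nmemb elements of size bytes each, refusing
   * to silently wrap around when nmemb * size overflows. */
  static void *my_reallocarray(void *ptr, size_t nmemb, size_t size)
  {
          if (size && nmemb > SIZE_MAX / size) {
                  errno = ENOMEM;
                  return NULL;
          }
          return realloc(ptr, nmemb * size);
  }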

Fixes: 531b014e7a ("tools: bpf: make use of reallocarray")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-14 10:38:48 +02:00
kbuild test robot
9cee8c4375 net: mvpp2: mvpp2_cls_flow_get() can be static
Fixes: f9358e12a0 ("net: mvpp2: split ingress traffic into multiple flows")
Signed-off-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-13 20:21:56 -07:00
Linus Walleij
6eb9c9dafd of: mdio: Support fixed links in of_phy_get_and_connect()
By a simple extension of of_phy_get_and_connect(), drivers
that connect over e.g. RGMII can also support
fixed links, so in addition to:

ethernet-port {
	phy-mode = "rgmii";
	phy-handle = <&foo>;
};

This setup with a fixed-link node and no phy-handle will
now also work just fine:

ethernet-port {
	phy-mode = "rgmii";
	fixed-link {
		speed = <1000>;
		full-duplex;
		pause;
	};
};

This is very helpful for connecting random ethernet ports
to e.g. DSA switches that typically reside on fixed links.

The phy-mode is still there as the fixed link in this case
is still an RGMII link.

Tested on the Cortina Gemini driver with the Vitesse DSA
router chip on a fixed 1Gbit link.

Suggested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-13 18:25:14 -07:00
Vlad Buslov
01683a1469 net: sched: refactor flower walk to iterate over idr
Extend struct tcf_walker with an additional 'cookie' field. It is intended
to be used by classifier walk implementations to continue iteration
directly from a particular filter, instead of iterating 'skip' number of
times.

Change the flower walk implementation to save the filter handle in
'cookie'. Each time the flower walk is called, it looks up the filter with
the saved handle directly with idr, instead of iterating over the filter
linked list 'skip' number of times. This change improves the complexity of
dumping the flower classifier from quadratic to linearithmic (assuming idr
lookup has logarithmic complexity).
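
A toy userspace analogue of the idea (a sorted handle array stands in
for the idr; all names are made up): the walker resumes from the saved
cookie with a logarithmic lookup instead of skipping 'skip' entries
linearly on every call.

  #include <stddef.h>
  #include <stdint.h>

  struct toy_walker {
          uint32_t cookie;        /* handle of the next filter to visit */
  };

  /* Binary search for the first handle >= cookie, standing in for an
   * idr lookup on the real filter handles. */
  static size_t resume_index(const uint32_t *handles, size_t n, uint32_t cookie)
  {
          size_t lo = 0, hi = n;

          while (lo < hi) {
                  size_t mid = lo + (hi - lo) / 2;

                  if (handles[mid] < cookie)
                          lo = mid + 1;
                  else
                          hi = mid;
          }
          return lo;
  }

  /* Visit filters starting from the saved cookie, updating it so the
   * next walk call continues where this one stopped. */
  static void toy_walk(struct toy_walker *w, const uint32_t *handles,
                       size_t n, int (*fn)(uint32_t handle))
  {
          for (size_t i = resume_index(handles, n, w->cookie); i < n; i++) {
                  w->cookie = handles[i];
                  if (fn(handles[i]))
                          return; /* dump buffer full; resume on next call */
          }
  }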

Reviewed-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reported-by: Simon Horman <simon.horman@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-13 18:24:27 -07:00
Jesper Dangaard Brouer
d23b27c02f samples/bpf: xdp_redirect_cpu handle parsing of double VLAN tagged packets
People noticed that the code matches on the IEEE 802.1ad (ETH_P_8021AD)
ethertype, and this implies Q-in-Q or double tagged VLANs.  Thus, we had
better parse the next VLAN header too.  It is even marked as a TODO.

This is relevant for real world use-cases, as XDP cpumap redirect can be
used when the NIC RSS hashing is broken.  E.g. the ixgbe driver HW cannot
handle double tagged VLAN packets, and places everything into a single
RX queue.  Using cpumap redirect, users can redistribute traffic across
CPUs to solve this, which is faster than the network stack's RPS solution.

It is left as an exercise how to distribute the packets across CPUs.  It
would be convenient to use the RX hash, but that is not _yet_ exposed
to XDP programs. For now, users can code their own hash, as I've demonstrated
in the Suricata code (where Q-in-Q is handled correctly).
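
For reference, the parsing logic in plain C (a userspace sketch, not
the BPF sample itself): after the Ethernet header, peel off up to two
VLAN tags, covering 802.1ad (outer) plus 802.1Q (inner), before looking
at the encapsulated protocol.

  #include <arpa/inet.h>
  #include <linux/if_ether.h>     /* ETH_HLEN, ETH_P_8021Q, ETH_P_8021AD */
  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  struct vlan_hdr {               /* mirrors the kernel's struct vlan_hdr */
          uint16_t h_vlan_TCI;
          uint16_t h_vlan_encapsulated_proto;
  };

  /* Return the inner ethertype (host order) after skipping up to two
   * VLAN headers, or -1 if the packet is too short. */
  static int eth_inner_proto(const uint8_t *pkt, size_t len, size_t *off)
  {
          struct ethhdr eth;
          uint16_t proto;
          int i;

          if (len < ETH_HLEN)
                  return -1;
          memcpy(&eth, pkt, ETH_HLEN);
          proto = ntohs(eth.h_proto);
          *off = ETH_HLEN;

          for (i = 0; i < 2; i++) {       /* single and double tagged */
                  struct vlan_hdr vh;

                  if (proto != ETH_P_8021Q && proto != ETH_P_8021AD)
                          break;
                  if (*off + sizeof(vh) > len)
                          return -1;
                  memcpy(&vh, pkt + *off, sizeof(vh));
                  proto = ntohs(vh.h_vlan_encapsulated_proto);
                  *off += sizeof(vh);
          }
          return proto;
  }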

Reported-by: Florian Maury <florian.maury-cv@x-cli.eu>
Reported-by: Marek Majkowski <marek@cloudflare.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-14 00:52:54 +02:00
Nikolay Aleksandrov
c921c2077b net: ipmr: add support for passing full packet on wrong vif
This patch adds support for IGMPMSG_WRVIFWHOLE, which is used to pass the
full packet and the real vif id when the incoming interface is wrong.
While the RP and FHR are setting up state, we need to be sending the
registers encapsulated with all the data inside, otherwise we lose it.
The RP then decapsulates it and forwards it to the interested parties.
Currently with WRONGVIF we can only be sending empty register packets
and will lose that data.
This behaviour can be enabled by using MRT_PIM with
val == IGMPMSG_WRVIFWHOLE. This doesn't prevent IGMPMSG_WRONGVIF from
happening; it happens in addition to it, and it is controlled by the same
throttling parameters as WRONGVIF (i.e. 1 packet per 3 seconds currently).
Both messages are generated to keep backwards compatibility and avoid
breaking someone who was enabling MRT_PIM with val == 4, since any
positive val is accepted and treated the same.
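
A hedged userspace sketch of how a routing daemon might opt in (error
handling trimmed; the IGMPMSG_WRVIFWHOLE value of 4 is an assumption
for headers that predate this patch):

  #include <linux/mroute.h>       /* MRT_INIT, MRT_PIM, IGMPMSG_* */
  #include <netinet/in.h>
  #include <sys/socket.h>

  #ifndef IGMPMSG_WRVIFWHOLE
  #define IGMPMSG_WRVIFWHOLE 4    /* assumed value, matches "val == 4" above */
  #endif

  static int enable_wrvifwhole(void)
  {
          int fd = socket(AF_INET, SOCK_RAW, IPPROTO_IGMP);
          int one = 1;
          int val = IGMPMSG_WRVIFWHOLE;

          if (fd < 0)
                  return -1;
          /* Become the multicast routing daemon for this namespace. */
          if (setsockopt(fd, IPPROTO_IP, MRT_INIT, &one, sizeof(one)) < 0)
                  return -1;
          /* MRT_PIM with val == IGMPMSG_WRVIFWHOLE requests WRVIFWHOLE
           * messages in addition to the usual WRONGVIF ones. */
          if (setsockopt(fd, IPPROTO_IP, MRT_PIM, &val, sizeof(val)) < 0)
                  return -1;
          return fd;
  }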

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-13 14:21:16 -07:00
Daniel Borkmann
ee15f7cdf0 Merge branch 'bpf-xdp-driver-and-hw'
Jakub Kicinski says:

====================
This set is adding support for loading driver and offload XDP
at the same time.  This enables advanced use cases where some
of the work is offloaded to the NIC and some is done by the host.
Separate netlink attributes are added for each mode of operation.
Driver callbacks for offload are cleaned up a little, including
removal of .prog_attached flag.
====================

Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 21:54:57 +02:00
Jakub Kicinski
5f4284015e nfp: add support for simultaneous driver and hw XDP
Split handling of offloaded and driver programs completely.  Since
offloaded programs always come with XDP_FLAGS_HW_MODE set, in reality
there could be no sharing anyway; programs would only be installed
either in the driver or in hardware.  Splitting the handling allows us to
install programs in HW and in the driver at the same time.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 20:26:35 +02:00
Jakub Kicinski
99dadb6e3e selftests/bpf: add test for multiple programs
Add tests for having an XDP program attached in the driver and
another one attached in HW simultaneously.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 20:26:35 +02:00
Jakub Kicinski
799e173d71 netdevsim: add support for simultaneous driver and hw XDP
Allow netdevsim to accept driver and offload attachment of XDP
BPF programs at the same time.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 20:26:35 +02:00
Jakub Kicinski
a25717d2b6 xdp: support simultaneous driver and hw XDP attachment
Split the query of HW-attached program from the software one.
Introduce new .ndo_bpf command to query HW-attached program.
This will allow drivers to install different programs in HW
and SW at the same time.  Netlink can now also carry multiple
programs on dump (in which case mode will be set to
XDP_ATTACHED_MULTI and the user has to check the per-attachment point
attributes; IFLA_XDP_PROG_ID will not be present).  We reuse
IFLA_XDP_PROG_ID skb space for second mode, so rtnl_xdp_size()
doesn't need to be updated.

Note that the installation side is still not there, since all
drivers currently reject installing more than one program at
the time.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 20:26:35 +02:00
Jakub Kicinski
05296620f6 xdp: factor out common program/flags handling from drivers
Basic operations drivers perform during xdp setup and query can
be moved to helpers in the core.  Encapsulate program and flags
into a structure and add helpers.  Note that the structure is
intended as the "main" program information source in the driver.
Most drivers will additionally place the program pointer in their
fast path or ring structures.

The helpers don't have a huge impact now, but they will
decrease the code duplication when programs can be installed
in HW and driver at the same time.  Encapsulating the basic
operations in helpers will hopefully also reduce the number
of changes to drivers which adopt them.

The helpers could really be static inline, but they depend on the
definition of struct netdev_bpf, which means they'd have
to be placed in netdevice.h, an already 4500-line header.
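
A generic sketch of the pattern in plain C (names made up, not the
kernel's actual helpers): program pointer and flags live together in
one structure, and small helpers handle setup, flag validation and
query.

  #include <stdbool.h>
  #include <stdint.h>

  /* One record per attachment point: the "main" source of truth about
   * which program is attached and with which flags. */
  struct toy_xdp_attachment {
          const void *prog;       /* opaque stand-in for the BPF program */
          uint32_t prog_id;
          uint32_t flags;
  };

  /* Reject a request whose flags disagree with the existing attachment. */
  static bool toy_xdp_flags_ok(const struct toy_xdp_attachment *info,
                               uint32_t new_flags)
  {
          return !info->prog || info->flags == new_flags;
  }

  /* Record the outcome of a successful setup (or a detach). */
  static void toy_xdp_setup(struct toy_xdp_attachment *info,
                            const void *prog, uint32_t prog_id, uint32_t flags)
  {
          info->prog = prog;
          info->prog_id = prog ? prog_id : 0;
          info->flags = prog ? flags : 0;
  }

  /* Answer a query: report the attached program's id, if any. */
  static uint32_t toy_xdp_query(const struct toy_xdp_attachment *info)
  {
          return info->prog_id;
  }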

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 20:26:35 +02:00
Jakub Kicinski
6b86758973 xdp: don't make drivers report attachment mode
prog_attached of struct netdev_bpf should have been superseded
by simply setting prog_id a long time ago, but we kept it around
to allow offloading drivers to communicate the attachment mode (drv
vs hw).  Subsequently drivers were also allowed to report back
attachment flags (prog_flags), and since nowadays only programs
attached with XDP_FLAGS_HW_MODE can get offloaded, we can tell
the attachment mode from the flags the driver reports.  Remove the
prog_attached member.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 20:26:35 +02:00
Jakub Kicinski
4f91da26c8 xdp: add per mode attributes for attached programs
In preparation for support of simultaneous driver and hardware XDP
support add per-mode attributes.  The catch-all IFLA_XDP_PROG_ID
will still be reported, but user space can now also access the
program ID in a new IFLA_XDP_<mode>_PROG_ID attribute.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 20:26:35 +02:00
Daniel Borkmann
9c48b1d116 Merge branch 'bpf-arm-jit-improvements'
Russell King says:

====================
Four further JIT compiler improvements for 32-bit ARM.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:42 +02:00
Russell King
b18bea2a45 ARM: net: bpf: improve 64-bit ALU implementation
Improve the 64-bit ALU implementation from:

  movw    r8, #65532
  movt    r8, #65535
  movw    r9, #65535
  movt    r9, #65535
  ldr     r7, [fp, #-44]
  adds    r7, r7, r8
  str     r7, [fp, #-44]
  ldr     r7, [fp, #-40]
  adc     r7, r7, r9
  str     r7, [fp, #-40]

to:

  movw    r8, #65532
  movt    r8, #65535
  movw    r9, #65535
  movt    r9, #65535
  ldrd    r6, [fp, #-44]
  adds    r6, r6, r8
  adc     r7, r7, r9
  strd    r6, [fp, #-44]

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:42 +02:00
Russell King
c5eae69257 ARM: net: bpf: improve 64-bit store implementation
Improve the 64-bit store implementation from:

  ldr     r6, [fp, #-8]
  str     r8, [r6]
  ldr     r6, [fp, #-8]
  mov     r7, #4
  add     r7, r6, r7
  str     r9, [r7]

to:

  ldr     r6, [fp, #-8]
  str     r8, [r6]
  str     r9, [r6, #4]

We leave the store as two separate STR instructions rather than using
STRD as the store may not be aligned, and STR can handle misalignment.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:42 +02:00
Russell King
077513b894 ARM: net: bpf: improve 64-bit sign-extended immediate load
Improve the 64-bit sign-extended immediate load from:

  mov     r6, #1
  str     r6, [fp, #-52]  ; 0xffffffcc
  mov     r6, #0
  str     r6, [fp, #-48]  ; 0xffffffd0

to:

  mov     r6, #1
  mov     r7, #0
  strd    r6, [fp, #-52]  ; 0xffffffcc

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:41 +02:00
Russell King
f9ff5018c1 ARM: net: bpf: improve 64-bit load immediate implementation
Rather than writing each 32-bit half of the 64-bit immediate value
separately when the register is on the stack:

  movw    r6, #45056      ; 0xb000
  movt    r6, #60979      ; 0xee33
  str     r6, [fp, #-44]  ; 0xffffffd4
  mov     r6, #0
  str     r6, [fp, #-40]  ; 0xffffffd8

arrange to use the double-word store when available instead:

  movw    r6, #45056      ; 0xb000
  movt    r6, #60979      ; 0xee33
  mov     r7, #0
  strd    r6, [fp, #-44]  ; 0xffffffd4

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-07-13 15:26:41 +02:00
Linus Walleij
430ac34de9 net: gemini: Indicate that we can handle jumboframes
The hardware supposedly handles frames of up to 10236 bytes and
implements .ndo_change_mtu(), so accept 10236 minus the ethernet
header of a VLAN tagged frame on the netdevices. Use
ETH_MIN_MTU as the minimum MTU.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:39:15 -07:00
Linus Walleij
06d5151312 net: gemini: Move main init to port
The initialization sequence for the ethernet, setting up
interrupt routing and such things, needs to be done after
both the ports are clocked and reset. Before this the
config will not "take". Move the initialization to the
port probe function and keep track of the init status in
the state.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:39:15 -07:00
Linus Walleij
60cc7767b9 net: gemini: Allow multiple ports to instantiate
The code was not tested with two ports actually in use at
the same time. (I blame this on the lack of actual hardware using
that feature.) Now, after locating a system using both ports,
add the necessary fixes to make both ports come up.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:39:15 -07:00
Linus Walleij
9ab5c929e6 net: gemini: Improve connection prints
Switch over to using a module parameter and debug prints
that can be controlled by it or by ethtool, like everyone
else. Demote all other prints to debug messages.

The phy_print_status() was already in place, albeit never
really used because the debuglevel hiding it had to be
set up using ethtool.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:39:15 -07:00
Linus Walleij
cedca41801 net: gemini: Look up L3 maxlen from table
The code to calculate the hardware register enumerator
for the maximum L3 length isn't entirely simple to read.
Use the existing defines and rewrite the function into a
table look-up.

Acked-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:39:15 -07:00
David S. Miller
750c721ee0 Merge branch 'devlink-Add-support-for-region-access'
Alex Vesker says:

====================
devlink: Add support for region access

This is a proposal which will allow access to driver defined address
regions using devlink. Each device can create its supported address
regions and register them. A device which exposes a region will allow
access to it using devlink.

The suggested implementation will allow exposing regions to the user,
reading and dumping snapshots taken from different regions.
A snapshot represents a memory image of a region taken by the driver.

If a device collects a snapshot of an address region, it can later be
exposed using the devlink region read or dump commands.
This functionality allows future analyses to be done on the
snapshots.

The major benefit of this support is not only to provide access to
internal address regions which were inaccessible to the user but also
to provide an additional way to debug complex error states using the
region snapshots.

Implemented commands:
$ devlink region help
$ devlink region show [ DEV/REGION ]
$ devlink region del DEV/REGION snapshot SNAPSHOT_ID
$ devlink region dump DEV/REGION [ snapshot SNAPSHOT_ID ]
$ devlink region read DEV/REGION [ snapshot SNAPSHOT_ID ]
	address ADDRESS length LENGTH

Show all of the exposed regions with region sizes:
$ devlink region show
pci/0000:00:05.0/cr-space: size 1048576 snapshot [1 2]
pci/0000:00:05.0/fw-health: size 64 snapshot [1 2]

Delete a snapshot using:
$ devlink region del pci/0000:00:05.0/cr-space snapshot 1

Dump a snapshot:
$ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
0000000000000010 0000 0000 ffff ff04 0029 8c00 0028 8cc8
0000000000000020 0016 0bb8 0016 1720 0000 0000 c00f 3ffc
0000000000000030 bada cce5 bada cce5 bada cce5 bada cce5

Read a specific part of a snapshot:
$ devlink region read pci/0000:00:05.0/fw-health snapshot 1 address 0
	length 16
0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30

For more information you can check the devlink-region.8 man page.

Future:
There is a plan to extend the support to include a write command,
as well as performing read and dump on a live region.

v1->v2:
-Add a parameter to enable devlink region snapshot
-Allocate snapshot memory using kvmalloc
-Introduce destructor function devlink_snapshot_data_dest_t to avoid
 double allocation

v2->v3:
-Fix incorrect comment in devlink.h for DEVLINK_ATTR_REGION_SIZE
 from u32 to u64
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:14 -07:00
Alex Vesker
3c641ba4a8 net/mlx4_core: Use devlink region_snapshot parameter
This parameter enables capturing a region snapshot of the crspace
during critical errors. The default value of this parameter is
disabled; it can be enabled using devlink param commands.
It is possible to configure it during runtime and also at driver init.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
f6a69885f2 devlink: Add generic parameters region_snapshot
region_snapshot - When set, enables capturing region snapshots

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Moshe Shemesh <moshe@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
bedc989b0c net/mlx4_core: Add Crdump FW snapshot support
Crdump allows the driver to create a snapshot of the FW PCI
crspace and health buffer during a critical FW issue.
In case of a FW command timeout, FW getting stuck or a non-zero
value on the catastrophic buffer, a snapshot will be taken.

The snapshot is exposed using devlink: the cr-space and fw-health
address regions are registered on init and snapshots are attached
once a new snapshot is collected by the driver.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
523f9eb1ef net/mlx4_core: Add health buffer address capability
The health buffer address is a 32-bit PCI address offset provided by
the FW. This offset is used for reading FW health debug data
located in the shared CR space. CR space is accessible to both the
driver and the FW and allows for different queries and configurations.
The health buffer size is always 64B of readable data followed by a
lock which is used to block volatile CR space access.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
4e54795a27 devlink: Add support for region snapshot read command
Add support for DEVLINK_CMD_REGION_READ_GET, used for both reading
and dumping region data. Read allows reading from a region-specific
address for a given length. Dump allows reading the full region.
If only a snapshot ID is provided, a snapshot dump will be done.
If a snapshot ID, address and length are provided, a snapshot read
will be done.

This is used for snapshot access and will be used in the same
way to access current data on the region.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
866319bb94 devlink: Add support for region snapshot delete command
Add support for DEVLINK_CMD_REGION_DEL used
for deleting a snapshot from a region. The snapshot ID is required.
Also added notification support for NEW and DEL of snapshots.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
a006d467fb devlink: Extend the support querying for region snapshot IDs
Extend the support for the DEVLINK_CMD_REGION_GET command to also
return the IDs of the snapshots currently present on the region.
Each reply will include a nested snapshots attribute that
can contain multiple snapshot attributes, each with an ID.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
d8db7ea55f devlink: Add support for region get command
Add support for the DEVLINK_CMD_REGION_GET command, which is used for
querying the supported DEV/REGION values of devlink devices.
The support is both for doit and dumpit.

Reply includes:
  BUS_NAME, DEVICE_NAME, REGION_NAME, REGION_SIZE

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
d7e5272282 devlink: Add support for creating region snapshots
Each device address region can store multiple snapshots;
each snapshot is identified using a different numerical ID.
This ID is used when deleting a snapshot or showing an address
region specific snapshot. This patch exposes a callback to add
a new snapshot to an address region.
The snapshot will be deleted using the destructor function
when destroying a region or when a snapshot delete command
is received from the devlink user tool.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:13 -07:00
Alex Vesker
ccadfa444b devlink: Add callback to query for snapshot id before snapshot create
To restrict the driver's snapshot ID selection, a new callback
is introduced for the driver to get the snapshot ID before creating
a new snapshot. This will also allow giving the same ID to multiple
snapshots taken of different regions at the same time.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:12 -07:00
Alex Vesker
b16ebe925a devlink: Add support for creating and destroying regions
This allows a device to register its supported address regions.
Each address region can be accessed directly, for example by reading
the snapshots taken of this address space.
Drivers are not limited in the name selection for different regions.
Examples of region names: pci cr-space, register-space.

Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:37:12 -07:00
David S. Miller
23c9ef2b6e Merge branch 'mvpp2-add-RSS-support'
Maxime Chevallier says:

====================
net: mvpp2: add RSS support

This series adds support for RSS on PPv2. There already was some code to
handle the RSS tables, but the driver was missing all the classification
steps required to actually use these tables.

RSS is used through the classifier, using at least 2 lookups:
 - One using the C2 engine, a TCAM engine that matches the packet based on
   some header-extracted fields, assigns the default rx queue for that
   packet and tags it for RSS
 - One using the C3Hx engine, which computes the hash that's used to perform
   the lookup in the RSS table.

Since RSS spreads the load across CPUs, we need to make sure that packets
from the same flow are always assigned the same rx queue, to prevent
re-ordering.

This series therefore adds a classification step based on the Header Parser,
which separates ingress traffic into 52 flows, based on some L2, L3 and L4
parameters.

Patches 1 and 2 fix some header issues from the driver splitting

Patches 3 to 7 make sure the correct receive queue setup is used for RSS

Patches 8 to 14 deal with the way we handle the RSS tables

Patch 15 implements basic classifier configuration, by using it to assign the
default receive queue

Patch 16 implements the ingress traffic splitting into multiple flows

Patch 17 adds RSS support, by using the needed classification steps

Patch 18 adds the required ethtool ops to configure the flow hash parameters

This was tested on MacchiatoBin, giving some nice performance improvements
using ip forwarding (going from 5Gbps to 9.6Gbps total throughput).

RSS is disabled by default.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:30:49 -07:00
Maxime Chevallier
436d4fdb20 net: mvpp2: allow setting RSS flow hash parameters with ethtool
This commit allows setting the RSS hash generation parameters from
ethtool. When setting parameters for a given flow type from ethtool
(e.g. tcp4), all the corresponding flows in the flow table are updated,
according to the supported hash parameters.

For example, when configuring TCP over IPv4 hash parameters to be
src/dst IP  + src/dst port ("ethtool -N eth0 rx-flow-hash tcp4 sdfn"),
we only set the "src/dst port" hash parameters on the non-fragmented TCP
over IPv4 flows.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:30:49 -07:00
Maxime Chevallier
d33ec45250 net: mvpp2: add an RSS classification step for each flow
One of the classification actions that can be performed is to compute a
hash of the packet header based on some header fields, and look up an RSS
table based on this hash to determine the final RxQ.

This is done by adding one lookup entry per flow per port, so that we
can configure the hash generation parameters for each flow and each
port.

There are 2 possible engines that can be used for RSS hash generation:

 - C3HA, which generates a hash based on up to 4 header-extracted fields
 - C3HB, which does the same as C3HA, but also includes L4 info in the hash

There are a lot of fields that can be extracted from the header. For now,
we only use the ones that we can configure using ethtool:
 - DST MAC address
 - L3 info
 - Source IP
 - Destination IP
 - Source port
 - Destination port

The C3HB engine is selected when we use L4 fields (src/dst port).

               Header parser          Dec table
 Ingress pkt  +-------------+ flow id +----------------------------+
------------->| TCAM + SRAM |-------->|TCP IPv4 w/ VLAN, not frag  |
              +-------------+         |TCP IPv4 w/o VLAN, not frag |
                                      |TCP IPv4 w/ VLAN, frag      |--+
                                      |etc.                        |  |
                                      +----------------------------+  |
                                                                      |
                                            Flow table                |
  +---------+   +------------+         +--------------------------+   |
  | RSS tbl |<--| Classifier |<--------| flow 0: C2 lookup        |   |
  +---------+   +------------+         |         C3 lookup port 0 |   |
                 |         |           |         C3 lookup port 1 |   |
         +-----------+ +-------------+ |         ...              |   |
         | C2 engine | | C3H engines | | flow 1: C2 lookup        |<--+
         +-----------+ +-------------+ |         C3 lookup port 0 |
                                       |         ...              |
                                       | ...                      |
                                       | flow 51 : C2 lookup      |
                                       |           ...            |
                                       +--------------------------+

The C2 engine also gains the role of enabling and disabling the RSS
table lookup for this packet.
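
Conceptually, the C3H step boils down to hashing the extracted fields
and indexing an RSS table with the result to pick the final RxQ; a toy
userspace model (the hash, table size and field set are illustrative,
not the hardware's):

  #include <stdint.h>

  #define TOY_RSS_TABLE_ENTRIES 32

  struct toy_flow_fields {
          uint32_t src_ip, dst_ip;        /* L3 fields selected via ethtool */
          uint16_t src_port, dst_port;    /* hashed only when L4 is enabled */
  };

  /* Toy hash over the selected header fields (the C3H engines differ). */
  static uint32_t toy_hash(const struct toy_flow_fields *f, int use_l4)
  {
          uint32_t h = f->src_ip * 2654435761u ^ f->dst_ip * 2246822519u;

          if (use_l4)
                  h ^= ((uint32_t)f->src_port << 16 | f->dst_port) * 3266489917u;
          return h;
  }

  /* RSS table lookup: the hash picks an entry, the entry names the RxQ. */
  static uint8_t toy_rss_rxq(const uint8_t table[TOY_RSS_TABLE_ENTRIES],
                             const struct toy_flow_fields *f, int use_l4)
  {
          return table[toy_hash(f, use_l4) % TOY_RSS_TABLE_ENTRIES];
  }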

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:30:49 -07:00
Maxime Chevallier
f9358e12a0 net: mvpp2: split ingress traffic into multiple flows
The PPv2 classifier makes it possible to perform classification operations
on each ingress packet, based on the flow the packet is assigned to.

The current code uses only 1 flow per port, and the only classification
action consists of assigning the rx queue to the packet, depending on the
port.

In preparation for adding RSS support, we have to split all incoming
traffic into different flows. Since RSS assigns a rx queue depending on
the hash of some header fields, we have to make sure that the hash is
generated in a consistent way for all packets in the same flow.

What we call a "flow" is actually a set of attributes attached to a
packet that depends on various L2/L3/L4 info.

This patch introduces 52 flows, which are a combination of various L2, L3
and L4 attributes :
 - Whether or not the packet has a VLAN tag
 - Whether the packet is IPv4, IPv6 or something else
 - Whether the packet is TCP, UDP or something else
 - Whether or not the packet is fragmented at L3 level.

The flow is associated to a packet by the Header Parser. Each flow
corresponds to an entry in the decoding table. This entry then points to
the sequence of classification lookups to be performed by the
classifier, represented in the flow table.

For now, the only lookup we perform is a C2 lookup to set the default
rx queue.

               Header parser          Dec table
 Ingress pkt  +-------------+ flow id +----------------------------+
------------->| TCAM + SRAM |-------->|TCP IPv4 w/ VLAN, not frag  |
              +-------------+         |TCP IPv4 w/o VLAN, not frag |
                                      |TCP IPv4 w/ VLAN, frag      |--+
                                      |etc.                        |  |
                                      +----------------------------+  |
                                                                      |
                                           Flow table                 |
                +------------+        +---------------------+         |
     To RxQ <---| Classifier |<-------| flow 0: C2 lookup   |<--------+
                +------------+        | flow 1: C2 lookup   |
                       |              | ...                 |
                +------------+        | flow 51 : C2 lookup |
                | C2 engine  |        +---------------------+
                +------------+
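
A toy model of the flow split (the attribute encoding and flow
numbering are illustrative; the real decoding table has 52 entries
covering more combinations):

  #include <stdbool.h>

  enum toy_l3 { TOY_L3_OTHER, TOY_L3_IP4, TOY_L3_IP6 };
  enum toy_l4 { TOY_L4_OTHER, TOY_L4_TCP, TOY_L4_UDP };

  struct toy_pkt_attrs {
          bool has_vlan;          /* VLAN tag present */
          enum toy_l3 l3;         /* IPv4, IPv6 or something else */
          enum toy_l4 l4;         /* TCP, UDP or something else */
          bool l3_frag;           /* fragmented at L3 level */
  };

  /* Fold the L2/L3/L4 attributes into one flow id, so that all packets
   * sharing these attributes hash their RxQ consistently. */
  static unsigned int toy_flow_id(const struct toy_pkt_attrs *a)
  {
          return ((unsigned int)a->l3 * 3 + a->l4) * 4 +
                 (a->has_vlan ? 2 : 0) + (a->l3_frag ? 1 : 0);
  }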

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:30:49 -07:00
Maxime Chevallier
b1a962c62c net: mvpp2: use classifier to assign default rx queue
The PPv2 controller has a classifier that can perform multiple lookup
operations for each packet, using different engines.

One of these engines is the C2 engine, which performs TCAM based lookups
on data extracted from the packet header. When a packet matches an
entry, the engine sets various attributes, used to perform
classification operations.

One of these attributes is the rx queue in which the packet should be sent.
The current code uses the lookup_id table (also called decoding table)
to assign the rx queue. However, this only works if we use one entry per
port in the decoding table, which won't be the case once we add RSS
lookups.

This patch uses the C2 engine to assign the rx queue to each packet.

The C2 engine is used through the flow table, which dictates what
classification operations are done for a given flow.

Right now, we have one flow per port, which contains every ingress
packet for this port.

Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-07-12 17:30:49 -07:00