// SPDX-License-Identifier: GPL-2.0-or-later
/* SCTP kernel implementation
 * (C) Copyright IBM Corp. 2001, 2004
 * Copyright (c) 1999-2000 Cisco, Inc.
 * Copyright (c) 1999-2001 Motorola, Inc.
 * Copyright (c) 2001 Intel Corp.
 * Copyright (c) 2001 Nokia, Inc.
 * Copyright (c) 2001 La Monte H.P. Yarroll
 *
 * This file is part of the SCTP kernel implementation
 *
 * Initialization/cleanup for SCTP protocol support.
 *
 * Please send any bug reports or fixes you make to the
 * email address(es):
 *    lksctp developers <linux-sctp@vger.kernel.org>
 *
 * Written or modified by:
 *    La Monte H.P. Yarroll <piggy@acm.org>
 *    Karl Knutson <karl@athena.chicago.il.us>
 *    Jon Grimm <jgrimm@us.ibm.com>
 *    Sridhar Samudrala <sri@us.ibm.com>
 *    Daisy Chang <daisyc@us.ibm.com>
 *    Ardelle Fan <ardelle.fan@intel.com>
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/module.h>
#include <linux/init.h>
#include <linux/netdevice.h>
#include <linux/inetdevice.h>
#include <linux/seq_file.h>
#include <linux/memblock.h>
#include <linux/highmem.h>
#include <linux/slab.h>
#include <net/net_namespace.h>
#include <net/protocol.h>
#include <net/ip.h>
#include <net/ipv6.h>
#include <net/route.h>
#include <net/sctp/sctp.h>
#include <net/addrconf.h>
#include <net/inet_common.h>
#include <net/inet_ecn.h>
#include <net/udp_tunnel.h>
#include <net/inet_dscp.h>

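/* Upper bound on the port hash table; the table is sized at init time
 * and rounded down to a power of two, which the port hash function
 * requires (see the sizing logic in sctp_init()).
 */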
#define MAX_SCTP_PORT_HASH_ENTRIES (64 * 1024)

/* Global data structures. */
struct sctp_globals sctp_globals __read_mostly;

struct idr sctp_assocs_id;
DEFINE_SPINLOCK(sctp_assocs_id_lock);

static struct sctp_pf *sctp_pf_inet6_specific;
static struct sctp_pf *sctp_pf_inet_specific;
static struct sctp_af *sctp_af_v4_specific;
static struct sctp_af *sctp_af_v6_specific;

struct kmem_cache *sctp_chunk_cachep __read_mostly;
struct kmem_cache *sctp_bucket_cachep __read_mostly;

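/* Memory accounting limits: sysctl_sctp_mem is [min, pressure, max] in
 * pages; the rmem/wmem triples are [min, default, max] in bytes. All are
 * filled in at protocol init time.
 */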
long sysctl_sctp_mem[3];
int sysctl_sctp_rmem[3];
int sysctl_sctp_wmem[3];

/* Private helper to extract ipv4 addresses and stash them in
 * the protocol structure.
 */
static void sctp_v4_copy_addrlist(struct list_head *addrlist,
				  struct net_device *dev)
{
	struct in_device *in_dev;
	struct in_ifaddr *ifa;
	struct sctp_sockaddr_entry *addr;

	rcu_read_lock();
	if ((in_dev = __in_dev_get_rcu(dev)) == NULL) {
		rcu_read_unlock();
		return;
	}

	in_dev_for_each_ifa_rcu(ifa, in_dev) {
		/* Add the address to the local list. */
		addr = kzalloc(sizeof(*addr), GFP_ATOMIC);
		if (addr) {
			addr->a.v4.sin_family = AF_INET;
			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
			addr->valid = 1;
			INIT_LIST_HEAD(&addr->list);
			list_add_tail(&addr->list, addrlist);
		}
	}

	rcu_read_unlock();
}

/* Extract our IP addresses from the system and stash them in the
 * protocol structure.
 */
static void sctp_get_local_addr_list(struct net *net)
{
	struct net_device *dev;
	struct list_head *pos;
	struct sctp_af *af;

	rcu_read_lock();
	for_each_netdev_rcu(net, dev) {
		list_for_each(pos, &sctp_address_families) {
			af = list_entry(pos, struct sctp_af, list);
			af->copy_addrlist(&net->sctp.local_addr_list, dev);
		}
	}
	rcu_read_unlock();
}

/* Free the existing local addresses. */
static void sctp_free_local_addr_list(struct net *net)
{
	struct sctp_sockaddr_entry *addr;
	struct list_head *pos, *temp;

	list_for_each_safe(pos, temp, &net->sctp.local_addr_list) {
		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
		list_del(pos);
		kfree(addr);
	}
}

/* Copy the local addresses which are valid for 'scope' into 'bp'. */
int sctp_copy_local_addr_list(struct net *net, struct sctp_bind_addr *bp,
			      enum sctp_scope scope, gfp_t gfp, int copy_flags)
{
	struct sctp_sockaddr_entry *addr;
	union sctp_addr laddr;
	int error = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(addr, &net->sctp.local_addr_list, list) {
		if (!addr->valid)
			continue;
		if (!sctp_in_scope(net, &addr->a, scope))
			continue;

		/* Now that the address is in scope, check to see if
		 * the address type is really supported by the local
		 * sock as well as the remote peer.
		 */
		if (addr->a.sa.sa_family == AF_INET &&
		    (!(copy_flags & SCTP_ADDR4_ALLOWED) ||
		     !(copy_flags & SCTP_ADDR4_PEERSUPP)))
			continue;
		if (addr->a.sa.sa_family == AF_INET6 &&
		    (!(copy_flags & SCTP_ADDR6_ALLOWED) ||
		     !(copy_flags & SCTP_ADDR6_PEERSUPP)))
			continue;

		laddr = addr->a;
		/* also works for setting ipv6 address port */
		laddr.v4.sin_port = htons(bp->port);
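		/* Skip addresses that are already in the bind address list. */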
		if (sctp_bind_addr_state(bp, &laddr) != -1)
			continue;

		error = sctp_add_bind_addr(bp, &addr->a, sizeof(addr->a),
					   SCTP_ADDR_SRC, GFP_ATOMIC);
		if (error)
			break;
	}

	rcu_read_unlock();
	return error;
}

/* Copy over any ip options */
static void sctp_v4_copy_ip_options(struct sock *sk, struct sock *newsk)
{
	struct inet_sock *newinet, *inet = inet_sk(sk);
	struct ip_options_rcu *inet_opt, *newopt = NULL;

	newinet = inet_sk(newsk);

	rcu_read_lock();
	inet_opt = rcu_dereference(inet->inet_opt);
	if (inet_opt) {
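		/* The option bytes live inline right after the struct, so a
		 * single allocation of header plus optlen covers the copy.
		 */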
		newopt = sock_kmalloc(newsk, sizeof(*inet_opt) +
				      inet_opt->opt.optlen, GFP_ATOMIC);
		if (newopt)
			memcpy(newopt, inet_opt, sizeof(*inet_opt) +
			       inet_opt->opt.optlen);
		else
			pr_err("%s: Failed to copy ip options\n", __func__);
	}
	RCU_INIT_POINTER(newinet->inet_opt, newopt);
	rcu_read_unlock();
}

/* Account for the IP options */
static int sctp_v4_ip_options_len(struct sock *sk)
{
	struct inet_sock *inet = inet_sk(sk);
	struct ip_options_rcu *inet_opt;
	int len = 0;

	rcu_read_lock();
	inet_opt = rcu_dereference(inet->inet_opt);
	if (inet_opt)
		len = inet_opt->opt.optlen;

	rcu_read_unlock();
	return len;
}

/* Initialize an sctp_addr from an incoming skb. */
static void sctp_v4_from_skb(union sctp_addr *addr, struct sk_buff *skb,
			     int is_saddr)
{
	/* Always called on head skb, so this is safe */
	struct sctphdr *sh = sctp_hdr(skb);
	struct sockaddr_in *sa = &addr->v4;

	addr->v4.sin_family = AF_INET;

	if (is_saddr) {
		sa->sin_port = sh->source;
		sa->sin_addr.s_addr = ip_hdr(skb)->saddr;
	} else {
		sa->sin_port = sh->dest;
		sa->sin_addr.s_addr = ip_hdr(skb)->daddr;
	}
	memset(sa->sin_zero, 0, sizeof(sa->sin_zero));
}

/* Initialize an sctp_addr from a socket. */
static void sctp_v4_from_sk(union sctp_addr *addr, struct sock *sk)
{
	addr->v4.sin_family = AF_INET;
	addr->v4.sin_port = 0;
	addr->v4.sin_addr.s_addr = inet_sk(sk)->inet_rcv_saddr;
	memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
}

/* Initialize sk->sk_rcv_saddr from sctp_addr. */
static void sctp_v4_to_sk_saddr(union sctp_addr *addr, struct sock *sk)
{
	inet_sk(sk)->inet_rcv_saddr = addr->v4.sin_addr.s_addr;
}

/* Initialize sk->sk_daddr from sctp_addr. */
static void sctp_v4_to_sk_daddr(union sctp_addr *addr, struct sock *sk)
{
	inet_sk(sk)->inet_daddr = addr->v4.sin_addr.s_addr;
}

/* Initialize an sctp_addr from an address parameter. */
static bool sctp_v4_from_addr_param(union sctp_addr *addr,
				    union sctp_addr_param *param,
				    __be16 port, int iif)
{
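	/* Reject parameters too short to carry an IPv4 address. */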
	if (ntohs(param->v4.param_hdr.length) < sizeof(struct sctp_ipv4addr_param))
		return false;

	addr->v4.sin_family = AF_INET;
	addr->v4.sin_port = port;
	addr->v4.sin_addr.s_addr = param->v4.addr.s_addr;
	memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));

	return true;
}

/* Initialize an address parameter from an sctp_addr and return the length
 * of the address parameter.
 */
static int sctp_v4_to_addr_param(const union sctp_addr *addr,
				 union sctp_addr_param *param)
{
	int length = sizeof(struct sctp_ipv4addr_param);

	param->v4.param_hdr.type = SCTP_PARAM_IPV4_ADDRESS;
	param->v4.param_hdr.length = htons(length);
	param->v4.addr.s_addr = addr->v4.sin_addr.s_addr;

	return length;
}

/* Initialize an sctp_addr from the source address of a flowi4. */
static void sctp_v4_dst_saddr(union sctp_addr *saddr, struct flowi4 *fl4,
			      __be16 port)
{
	saddr->v4.sin_family = AF_INET;
	saddr->v4.sin_port = port;
	saddr->v4.sin_addr.s_addr = fl4->saddr;
	memset(saddr->v4.sin_zero, 0, sizeof(saddr->v4.sin_zero));
}

/* Compare two addresses exactly. */
static int sctp_v4_cmp_addr(const union sctp_addr *addr1,
			    const union sctp_addr *addr2)
{
	if (addr1->sa.sa_family != addr2->sa.sa_family)
		return 0;
	if (addr1->v4.sin_port != addr2->v4.sin_port)
		return 0;
	if (addr1->v4.sin_addr.s_addr != addr2->v4.sin_addr.s_addr)
		return 0;

	return 1;
}

/* Initialize addr struct to INADDR_ANY. */
static void sctp_v4_inaddr_any(union sctp_addr *addr, __be16 port)
{
	addr->v4.sin_family = AF_INET;
	addr->v4.sin_addr.s_addr = htonl(INADDR_ANY);
	addr->v4.sin_port = port;
	memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
}

/* Is this a wildcard address? */
static int sctp_v4_is_any(const union sctp_addr *addr)
{
	return htonl(INADDR_ANY) == addr->v4.sin_addr.s_addr;
}

/* This function checks if the address is a valid address to be used for
 * SCTP binding.
 *
 * Output:
 * Return 0 - If the address is a non-unicast or an illegal address.
 * Return 1 - If the address is a unicast.
 */
static int sctp_v4_addr_valid(union sctp_addr *addr,
			      struct sctp_sock *sp,
			      const struct sk_buff *skb)
{
	/* IPv4 addresses not allowed */
	if (sp && ipv6_only_sock(sctp_opt2sk(sp)))
		return 0;

	/* Is this a non-unicast address or an unusable SCTP address? */
	if (IS_IPV4_UNUSABLE_ADDRESS(addr->v4.sin_addr.s_addr))
		return 0;

	/* Is this a broadcast address? */
	if (skb && skb_rtable(skb)->rt_flags & RTCF_BROADCAST)
		return 0;

	return 1;
}

/* Should this be available for binding? */
static int sctp_v4_available(union sctp_addr *addr, struct sctp_sock *sp)
{
	struct sock *sk = &sp->inet.sk;
	struct net *net = sock_net(sk);
	int tb_id = RT_TABLE_LOCAL;
	int ret;

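	/* If the socket is bound to an L3 master (VRF) device, look the
	 * address up in that device's FIB table rather than the default
	 * local table.
	 */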
	tb_id = l3mdev_fib_table_by_index(net, sk->sk_bound_dev_if) ?: tb_id;
	ret = inet_addr_type_table(net, addr->v4.sin_addr.s_addr, tb_id);
	if (addr->v4.sin_addr.s_addr != htonl(INADDR_ANY) &&
	    ret != RTN_LOCAL &&
	    !inet_test_bit(FREEBIND, sk) &&
	    !READ_ONCE(net->ipv4.sysctl_ip_nonlocal_bind))
		return 0;

	if (ipv6_only_sock(sctp_opt2sk(sp)))
		return 0;

	return 1;
}

/* Checking the loopback, private and other address scopes as defined in
 * RFC 1918. The IPv4 scoping is based on the draft for SCTP IPv4
 * scoping <draft-stewart-tsvwg-sctp-ipv4-00.txt>.
 *
 * Level 0 - unusable SCTP addresses
 * Level 1 - loopback address
 * Level 2 - link-local addresses
 * Level 3 - private addresses
 * Level 4 - global addresses
 * For the INIT and INIT-ACK address list, let L be the level of the
 * requested destination address; the sender and receiver SHOULD include
 * all of their addresses with level greater than or equal to L.
 *
 * IPv4 scoping can be controlled through the sysctl option
 * net.sctp.addr_scope_policy.
 */
static enum sctp_scope sctp_v4_scope(union sctp_addr *addr)
{
	enum sctp_scope retval;

	/* Check for unusable SCTP addresses. */
	if (IS_IPV4_UNUSABLE_ADDRESS(addr->v4.sin_addr.s_addr)) {
		retval = SCTP_SCOPE_UNUSABLE;
	} else if (ipv4_is_loopback(addr->v4.sin_addr.s_addr)) {
		retval = SCTP_SCOPE_LOOPBACK;
	} else if (ipv4_is_linklocal_169(addr->v4.sin_addr.s_addr)) {
		retval = SCTP_SCOPE_LINK;
	} else if (ipv4_is_private_10(addr->v4.sin_addr.s_addr) ||
		   ipv4_is_private_172(addr->v4.sin_addr.s_addr) ||
		   ipv4_is_private_192(addr->v4.sin_addr.s_addr) ||
		   ipv4_is_test_198(addr->v4.sin_addr.s_addr)) {
		retval = SCTP_SCOPE_PRIVATE;
	} else {
		retval = SCTP_SCOPE_GLOBAL;
	}

	return retval;
}

/* Returns a valid dst cache entry for the given source and destination ip
 * addresses. If an association is passed, tries to get a dst entry with a
 * source address that matches an address in the bind address list.
 */
static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
			    struct flowi *fl, struct sock *sk)
{
	struct sctp_association *asoc = t->asoc;
	struct rtable *rt;
	struct flowi _fl;
	struct flowi4 *fl4 = &_fl.u.ip4;
	struct sctp_bind_addr *bp;
	struct sctp_sockaddr_entry *laddr;
	struct dst_entry *dst = NULL;
	union sctp_addr *daddr = &t->ipaddr;
	union sctp_addr dst_saddr;
	u8 tos = READ_ONCE(inet_sk(sk)->tos);

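	/* A per-transport DSCP, if set, overrides the socket's tos. */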
	if (t->dscp & SCTP_DSCP_SET_MASK)
		tos = t->dscp & SCTP_DSCP_VAL_MASK;

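	/* Do all lookups on a local flowi (_fl) and copy it out to *fl only
	 * together with a matching dst, so a failed attempt cannot leave
	 * stale flow information behind.
	 */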
	memset(&_fl, 0x0, sizeof(_fl));
	fl4->daddr = daddr->v4.sin_addr.s_addr;
	fl4->fl4_dport = daddr->v4.sin_port;
	fl4->flowi4_proto = IPPROTO_SCTP;
	if (asoc) {
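		/* Only the DSCP bits go into the route key; the ECN bits
		 * are masked out by INET_DSCP_MASK.
		 */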
		fl4->flowi4_tos = tos & INET_DSCP_MASK;
		fl4->flowi4_scope = ip_sock_rt_scope(asoc->base.sk);
		fl4->flowi4_oif = asoc->base.sk->sk_bound_dev_if;
		fl4->fl4_sport = htons(asoc->base.bind_addr.port);
	}
	if (saddr) {
		fl4->saddr = saddr->v4.sin_addr.s_addr;
		if (!fl4->fl4_sport)
			fl4->fl4_sport = saddr->v4.sin_port;
	}

	pr_debug("%s: dst:%pI4, src:%pI4 - ", __func__, &fl4->daddr,
		 &fl4->saddr);

	rt = ip_route_output_key(sock_net(sk), fl4);
	if (!IS_ERR(rt)) {
		dst = &rt->dst;
		t->dst = dst;
		memcpy(fl, &_fl, sizeof(_fl));
	}

	/* If there is no association or if a source address is passed, no
	 * more validation is required.
	 */
	if (!asoc || saddr)
		goto out;

	bp = &asoc->base.bind_addr;

	if (dst) {
		/* Walk through the bind address list and look for a bind
		 * address that matches the source address of the returned dst.
		 */
		sctp_v4_dst_saddr(&dst_saddr, fl4, htons(bp->port));
		rcu_read_lock();
		list_for_each_entry_rcu(laddr, &bp->address_list, list) {
			if (!laddr->valid || (laddr->state == SCTP_ADDR_DEL) ||
			    (laddr->state != SCTP_ADDR_SRC &&
			     !asoc->src_out_of_asoc_ok))
				continue;
			if (sctp_v4_cmp_addr(&dst_saddr, &laddr->a))
				goto out_unlock;
		}
		rcu_read_unlock();

		/* None of the bound addresses match the source address of the
		 * dst. So release it.
		 */
		dst_release(dst);
		dst = NULL;
	}

	/* Walk through the bind address list and try to get a dst that
	 * matches a bind address as the source address.
	 */
	rcu_read_lock();
	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
		struct net_device *odev;

		if (!laddr->valid)
			continue;
		if (laddr->state != SCTP_ADDR_SRC ||
		    AF_INET != laddr->a.sa.sa_family)
			continue;

		fl4->fl4_sport = laddr->a.v4.sin_port;
		flowi4_update_output(fl4, asoc->base.sk->sk_bound_dev_if,
				     daddr->v4.sin_addr.s_addr,
				     laddr->a.v4.sin_addr.s_addr);

		rt = ip_route_output_key(sock_net(sk), fl4);
		if (IS_ERR(rt))
			continue;

		/* Ensure the src address belongs to the output
		 * interface.
		 */
		odev = __ip_dev_find(sock_net(sk), laddr->a.v4.sin_addr.s_addr,
				     false);
		if (!odev || odev->ifindex != fl4->flowi4_oif) {
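			/* The address does not sit on the routed output
			 * device; keep the first such route as a fallback
			 * and release the rest.
			 */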
			if (!dst) {
				dst = &rt->dst;
				t->dst = dst;
				memcpy(fl, &_fl, sizeof(_fl));
			} else {
				dst_release(&rt->dst);
			}
			continue;
		}
|
sctp: fix src address selection if using secondary addresses
In short, sctp is likely to incorrectly choose src address if socket is
bound to secondary addresses. This patch fixes it by adding a new check
that checks if such src address belongs to the interface that routing
identified as output.
This is enough to avoid rp_filter drops on remote peer.
Details:
Currently, sctp will do a routing attempt without specifying the src
address and compare the returned value (preferred source) with the
addresses that the socket is bound to. When using secondary addresses,
this will not match.
Then it will try specifying each of the addresses that the socket is
bound to and re-routing, checking if that address is valid as src for
that dst. Thing is, this check alone is weak:
# ip r l
192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.149
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.147
# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:15:18:6a brd ff:ff:ff:ff:ff:ff
inet 192.168.122.147/24 brd 192.168.122.255 scope global dynamic eth0
valid_lft 2160sec preferred_lft 2160sec
inet 192.168.122.148/24 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe15:186a/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:b3:91:46 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.149/24 brd 192.168.100.255 scope global dynamic eth1
valid_lft 2162sec preferred_lft 2162sec
inet 192.168.100.148/24 scope global secondary eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:feb3:9146/64 scope link
valid_lft forever preferred_lft forever
4: ens9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:05:47:ee brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe05:47ee/64 scope link
valid_lft forever preferred_lft forever
# ip r g 192.168.100.193 from 192.168.122.148
192.168.100.193 from 192.168.122.148 dev eth1
cache
Even if you specify an interface:
# ip r g 192.168.100.193 from 192.168.122.148 oif eth1
192.168.100.193 from 192.168.122.148 dev eth1
cache
Although this would be valid, peers using rp_filter will drop such
packets as their src doesn't match the routes for that interface.
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-07-17 23:34:18 +08:00
|
|
|
|
2018-02-06 03:48:14 +08:00
|
|
|
dst_release(dst);
|
2015-07-17 23:34:17 +08:00
|
|
|
dst = &rt->dst;
|
sctp: fix possibly using a bad saddr with a given dst
Under certain circumstances, depending on the order of addresses on the
interfaces, it could be that sctp_v[46]_get_dst() would return a dst
with a mismatched struct flowi.
For example, if when walking through the bind addresses and the first
one is not a match, it saves the dst as a fallback (added in
410f03831c07), but not the flowi. Then if the next one is also not a
match, the previous dst will be returned but with the flowi information
for the 2nd address, which is wrong.
The fix is to use a locally stored flowi that can be used for such
attempts, and copy it to the parameter only in case it is a possible
match, together with the corresponding dst entry.
The patch updates IPv6 code mostly just to be in sync. Even though the issue
is also present there, it fallback is not expected to work with IPv6.
Fixes: 410f03831c07 ("sctp: add routing output fallback")
Reported-by: Jin Meng <meng.a.jin@nokia-sbell.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Tested-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-27 07:47:46 +08:00
|
|
|
t->dst = dst;
|
|
|
|
memcpy(fl, &_fl, sizeof(_fl));
|
2015-07-17 23:34:17 +08:00
|
|
|
break;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
out_unlock:
|
2007-09-17 07:03:28 +08:00
|
|
|
rcu_read_unlock();
|
2005-04-17 06:20:36 +08:00
|
|
|
out:
|
sctp: fix possibly using a bad saddr with a given dst
Under certain circumstances, depending on the order of addresses on the
interfaces, it could be that sctp_v[46]_get_dst() would return a dst
with a mismatched struct flowi.
For example, if when walking through the bind addresses and the first
one is not a match, it saves the dst as a fallback (added in
410f03831c07), but not the flowi. Then if the next one is also not a
match, the previous dst will be returned but with the flowi information
for the 2nd address, which is wrong.
The fix is to use a locally stored flowi that can be used for such
attempts, and copy it to the parameter only in case it is a possible
match, together with the corresponding dst entry.
The patch updates IPv6 code mostly just to be in sync. Even though the issue
is also present there, it fallback is not expected to work with IPv6.
Fixes: 410f03831c07 ("sctp: add routing output fallback")
Reported-by: Jin Meng <meng.a.jin@nokia-sbell.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Tested-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-27 07:47:46 +08:00
|
|
|
if (dst) {
|
net: sctp: rework debugging framework to use pr_debug and friends
We should get rid of all own SCTP debug printk macros and use the ones
that the kernel offers anyway instead. This makes the code more readable
and conform to the kernel code, and offers all the features of dynamic
debbuging that pr_debug() et al has, such as only turning on/off portions
of debug messages at runtime through debugfs. The runtime cost of having
CONFIG_DYNAMIC_DEBUG enabled, but none of the debug statements printing,
is negligible [1]. If kernel debugging is completly turned off, then these
statements will also compile into "empty" functions.
While we're at it, we also need to change the Kconfig option as it /now/
only refers to the ifdef'ed code portions in outqueue.c that enable further
debugging/tracing of SCTP transaction fields. Also, since SCTP_ASSERT code
was enabled with this Kconfig option and has now been removed, we
transform those code parts into WARNs resp. where appropriate BUG_ONs so
that those bugs can be more easily detected as probably not many people
have SCTP debugging permanently turned on.
To turn on all SCTP debugging, the following steps are needed:
# mount -t debugfs none /sys/kernel/debug
# echo -n 'module sctp +p' > /sys/kernel/debug/dynamic_debug/control
This can be done more fine-grained on a per file, per line basis and others
as described in [2].
[1] https://www.kernel.org/doc/ols/2009/ols2009-pages-39-46.pdf
[2] Documentation/dynamic-debug-howto.txt
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-06-29 01:49:40 +08:00
|
|
|
pr_debug("rt_dst:%pI4, rt_src:%pI4\n",
|
sctp: fix possibly using a bad saddr with a given dst
Under certain circumstances, depending on the order of addresses on the
interfaces, it could be that sctp_v[46]_get_dst() would return a dst
with a mismatched struct flowi.
For example, if when walking through the bind addresses and the first
one is not a match, it saves the dst as a fallback (added in
410f03831c07), but not the flowi. Then if the next one is also not a
match, the previous dst will be returned but with the flowi information
for the 2nd address, which is wrong.
The fix is to use a locally stored flowi that can be used for such
attempts, and copy it to the parameter only in case it is a possible
match, together with the corresponding dst entry.
The patch updates IPv6 code mostly just to be in sync. Even though the issue
is also present there, it fallback is not expected to work with IPv6.
Fixes: 410f03831c07 ("sctp: add routing output fallback")
Reported-by: Jin Meng <meng.a.jin@nokia-sbell.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Tested-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-27 07:47:46 +08:00
|
|
|
&fl->u.ip4.daddr, &fl->u.ip4.saddr);
|
|
|
|
} else {
|
|
|
|
t->dst = NULL;
|
2013-06-29 01:49:40 +08:00
|
|
|
pr_debug("no route\n");
|
2020-03-27 07:47:46 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* For v4, the source address is cached in the route entry(dst). So no need
|
|
|
|
* to cache it separately and hence this is an empty routine.
|
|
|
|
*/
|
2008-05-29 18:55:05 +08:00
|
|
|
static void sctp_v4_get_saddr(struct sctp_sock *sk,
|
2011-04-27 05:51:31 +08:00
|
|
|
struct sctp_transport *t,
|
|
|
|
struct flowi *fl)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-04-27 05:51:31 +08:00
|
|
|
union sctp_addr *saddr = &t->saddr;
|
2024-04-29 21:30:09 +08:00
|
|
|
struct rtable *rt = dst_rtable(t->dst);
|
2005-11-12 08:05:55 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
if (rt) {
|
|
|
|
saddr->v4.sin_family = AF_INET;
|
2011-05-10 05:49:13 +08:00
|
|
|
saddr->v4.sin_addr.s_addr = fl->u.ip4.saddr;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* What interface did this skb arrive on? */
|
|
|
|
static int sctp_v4_skb_iif(const struct sk_buff *skb)
|
|
|
|
{
|
2012-07-24 07:29:00 +08:00
|
|
|
return inet_iif(skb);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2022-11-17 04:01:19 +08:00
|
|
|
static int sctp_v4_skb_sdif(const struct sk_buff *skb)
|
|
|
|
{
|
|
|
|
return inet_sdif(skb);
|
|
|
|
}
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/* Was this packet marked by Explicit Congestion Notification? */
|
|
|
|
static int sctp_v4_is_ce(const struct sk_buff *skb)
|
|
|
|
{
|
2007-04-21 13:47:35 +08:00
|
|
|
return INET_ECN_is_ce(ip_hdr(skb)->tos);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Create and initialize a new sk for the socket returned by accept(). */
|
|
|
|
static struct sock *sctp_v4_create_accept_sk(struct sock *sk,
|
2017-03-09 16:09:05 +08:00
|
|
|
struct sctp_association *asoc,
|
|
|
|
bool kern)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2008-03-26 01:26:21 +08:00
|
|
|
struct sock *newsk = sk_alloc(sock_net(sk), PF_INET, GFP_KERNEL,
|
2017-03-09 16:09:05 +08:00
|
|
|
sk->sk_prot, kern);
|
2009-02-13 16:33:44 +08:00
|
|
|
struct inet_sock *newinet;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
if (!newsk)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
sock_init_data(NULL, newsk);
|
|
|
|
|
2009-02-13 16:33:44 +08:00
|
|
|
sctp_copy_sock(newsk, sk, asoc);
|
2005-04-17 06:20:36 +08:00
|
|
|
sock_reset_flag(newsk, SOCK_ZAPPED);
|
|
|
|
|
2018-02-25 00:18:51 +08:00
|
|
|
sctp_v4_copy_ip_options(sk, newsk);
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
newinet = inet_sk(newsk);
|
|
|
|
|
2009-10-15 14:30:45 +08:00
|
|
|
newinet->inet_daddr = asoc->peer.primary_addr.v4.sin_addr.s_addr;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
if (newsk->sk_prot->init(newsk)) {
|
|
|
|
sk_common_release(newsk);
|
|
|
|
newsk = NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
return newsk;
|
|
|
|
}
|
|
|
|
|
2014-07-31 02:40:53 +08:00
|
|
|
static int sctp_v4_addr_to_user(struct sctp_sock *sp, union sctp_addr *addr)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2014-07-31 02:40:53 +08:00
|
|
|
/* No address mapping for V4 sockets */
|
2019-03-31 16:58:15 +08:00
|
|
|
memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
|
2014-07-31 02:40:53 +08:00
|
|
|
return sizeof(struct sockaddr_in);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Dump the v4 addr to the seq file. */
|
|
|
|
static void sctp_v4_seq_dump_addr(struct seq_file *seq, union sctp_addr *addr)
|
|
|
|
{
|
2008-10-31 15:54:56 +08:00
|
|
|
seq_printf(seq, "%pI4 ", &addr->v4.sin_addr);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2008-06-05 03:40:15 +08:00
|
|
|
static void sctp_v4_ecn_capable(struct sock *sk)
|
|
|
|
{
|
|
|
|
INET_ECN_xmit(sk);
|
|
|
|
}
|
|
|
|
|
2017-10-24 16:45:31 +08:00
|
|
|
static void sctp_addr_wq_timeout_handler(struct timer_list *t)
|
2011-04-26 18:32:51 +08:00
|
|
|
{
|
2017-10-24 16:45:31 +08:00
|
|
|
struct net *net = from_timer(net, t, sctp.addr_wq_timer);
|
2011-04-26 18:32:51 +08:00
|
|
|
struct sctp_sockaddr_entry *addrw, *temp;
|
|
|
|
struct sctp_sock *sp;
|
|
|
|
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_lock_bh(&net->sctp.addr_wq_lock);
|
2011-04-26 18:32:51 +08:00
|
|
|
|
2012-08-06 16:42:04 +08:00
|
|
|
list_for_each_entry_safe(addrw, temp, &net->sctp.addr_waitq, list) {
|
2013-06-29 01:49:40 +08:00
|
|
|
pr_debug("%s: the first ent in wq:%p is addr:%pISc for cmd:%d at "
|
|
|
|
"entry:%p\n", __func__, &net->sctp.addr_waitq, &addrw->a.sa,
|
|
|
|
addrw->state, addrw);
|
2011-04-26 18:32:51 +08:00
|
|
|
|
2011-12-10 17:48:31 +08:00
|
|
|
#if IS_ENABLED(CONFIG_IPV6)
|
2011-04-26 18:32:51 +08:00
|
|
|
/* Now we send an ASCONF for each association */
|
|
|
|
/* Note: we currently don't handle link-local IPv6 addresses */
|
|
|
|
if (addrw->a.sa.sa_family == AF_INET6) {
|
|
|
|
struct in6_addr *in6;
|
|
|
|
|
|
|
|
if (ipv6_addr_type(&addrw->a.v6.sin6_addr) &
|
|
|
|
IPV6_ADDR_LINKLOCAL)
|
|
|
|
goto free_next;
|
|
|
|
|
|
|
|
in6 = (struct in6_addr *)&addrw->a.v6.sin6_addr;
|
2012-08-06 16:42:04 +08:00
|
|
|
if (ipv6_chk_addr(net, in6, NULL, 0) == 0 &&
|
2011-04-26 18:32:51 +08:00
|
|
|
addrw->state == SCTP_ADDR_NEW) {
|
|
|
|
unsigned long timeo_val;
|
|
|
|
|
2013-06-29 01:49:40 +08:00
|
|
|
pr_debug("%s: this is on DAD, trying %d sec "
|
|
|
|
"later\n", __func__,
|
|
|
|
SCTP_ADDRESS_TICK_DELAY);
|
|
|
|
|
2011-04-26 18:32:51 +08:00
|
|
|
timeo_val = jiffies;
|
|
|
|
timeo_val += msecs_to_jiffies(SCTP_ADDRESS_TICK_DELAY);
|
2012-08-06 16:42:04 +08:00
|
|
|
mod_timer(&net->sctp.addr_wq_timer, timeo_val);
|
2011-04-26 18:32:51 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2011-06-07 04:05:55 +08:00
|
|
|
#endif
|
2012-08-06 16:42:04 +08:00
|
|
|
list_for_each_entry(sp, &net->sctp.auto_asconf_splist, auto_asconf_list) {
|
2011-04-26 18:32:51 +08:00
|
|
|
struct sock *sk;
|
|
|
|
|
|
|
|
sk = sctp_opt2sk(sp);
|
|
|
|
/* ignore bound-specific endpoints */
|
|
|
|
if (!sctp_is_ep_boundall(sk))
|
|
|
|
continue;
|
2014-01-21 15:44:12 +08:00
|
|
|
bh_lock_sock(sk);
|
2011-04-26 18:32:51 +08:00
|
|
|
if (sctp_asconf_mgmt(sp, addrw) < 0)
|
2013-06-29 01:49:40 +08:00
|
|
|
pr_debug("%s: sctp_asconf_mgmt failed\n", __func__);
|
2014-01-21 15:44:12 +08:00
|
|
|
bh_unlock_sock(sk);
|
2011-04-26 18:32:51 +08:00
|
|
|
}
|
2012-06-18 19:04:55 +08:00
|
|
|
#if IS_ENABLED(CONFIG_IPV6)
|
2011-04-26 18:32:51 +08:00
|
|
|
free_next:
|
2012-06-18 19:04:55 +08:00
|
|
|
#endif
|
2011-04-26 18:32:51 +08:00
|
|
|
list_del(&addrw->list);
|
|
|
|
kfree(addrw);
|
|
|
|
}
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_unlock_bh(&net->sctp.addr_wq_lock);
|
2011-04-26 18:32:51 +08:00
|
|
|
}
|
|
|
|
|
2012-08-06 16:42:04 +08:00
|
|
|
static void sctp_free_addr_wq(struct net *net)
|
2011-04-26 18:32:51 +08:00
|
|
|
{
|
|
|
|
struct sctp_sockaddr_entry *addrw;
|
|
|
|
struct sctp_sockaddr_entry *temp;
|
|
|
|
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_lock_bh(&net->sctp.addr_wq_lock);
|
|
|
|
del_timer(&net->sctp.addr_wq_timer);
|
|
|
|
list_for_each_entry_safe(addrw, temp, &net->sctp.addr_waitq, list) {
|
2011-04-26 18:32:51 +08:00
|
|
|
list_del(&addrw->list);
|
|
|
|
kfree(addrw);
|
|
|
|
}
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_unlock_bh(&net->sctp.addr_wq_lock);
|
2011-04-26 18:32:51 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* lookup the entry for the same address in the addr_waitq
|
|
|
|
* sctp_addr_wq MUST be locked
|
|
|
|
*/
|
2012-08-06 16:42:04 +08:00
|
|
|
static struct sctp_sockaddr_entry *sctp_addr_wq_lookup(struct net *net,
|
|
|
|
struct sctp_sockaddr_entry *addr)
|
2011-04-26 18:32:51 +08:00
|
|
|
{
|
|
|
|
struct sctp_sockaddr_entry *addrw;
|
|
|
|
|
2012-08-06 16:42:04 +08:00
|
|
|
list_for_each_entry(addrw, &net->sctp.addr_waitq, list) {
|
2011-04-26 18:32:51 +08:00
|
|
|
if (addrw->a.sa.sa_family != addr->a.sa.sa_family)
|
|
|
|
continue;
|
|
|
|
if (addrw->a.sa.sa_family == AF_INET) {
|
|
|
|
if (addrw->a.v4.sin_addr.s_addr ==
|
|
|
|
addr->a.v4.sin_addr.s_addr)
|
|
|
|
return addrw;
|
|
|
|
} else if (addrw->a.sa.sa_family == AF_INET6) {
|
|
|
|
if (ipv6_addr_equal(&addrw->a.v6.sin6_addr,
|
|
|
|
&addr->a.v6.sin6_addr))
|
|
|
|
return addrw;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2012-08-06 16:42:04 +08:00
|
|
|
void sctp_addr_wq_mgmt(struct net *net, struct sctp_sockaddr_entry *addr, int cmd)
|
2011-04-26 18:32:51 +08:00
|
|
|
{
|
|
|
|
struct sctp_sockaddr_entry *addrw;
|
|
|
|
unsigned long timeo_val;
|
|
|
|
|
|
|
|
/* first, we check if an opposite message already exists in the queue.
|
|
|
|
* If we find such a message, it is removed.
|
|
|
|
* This operation is a bit stupid, but the DHCP client attaches the
|
|
|
|
* new address after a couple of additions and deletions of that address.
|
|
|
|
*/
|
|
|
|
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_lock_bh(&net->sctp.addr_wq_lock);
|
2011-04-26 18:32:51 +08:00
|
|
|
/* Offsets existing events in addr_wq */
|
2012-08-06 16:42:04 +08:00
|
|
|
addrw = sctp_addr_wq_lookup(net, addr);
|
2011-04-26 18:32:51 +08:00
|
|
|
if (addrw) {
|
|
|
|
if (addrw->state != cmd) {
|
2013-06-29 01:49:40 +08:00
|
|
|
pr_debug("%s: offsets existing entry for %d, addr:%pISc "
|
|
|
|
"in wq:%p\n", __func__, addrw->state, &addrw->a.sa,
|
|
|
|
&net->sctp.addr_waitq);
|
|
|
|
|
2011-04-26 18:32:51 +08:00
|
|
|
list_del(&addrw->list);
|
|
|
|
kfree(addrw);
|
|
|
|
}
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_unlock_bh(&net->sctp.addr_wq_lock);
|
2011-04-26 18:32:51 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* OK, we have to add the new address to the wait queue */
|
|
|
|
addrw = kmemdup(addr, sizeof(struct sctp_sockaddr_entry), GFP_ATOMIC);
|
|
|
|
if (addrw == NULL) {
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_unlock_bh(&net->sctp.addr_wq_lock);
|
2011-04-26 18:32:51 +08:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
addrw->state = cmd;
|
2012-08-06 16:42:04 +08:00
|
|
|
list_add_tail(&addrw->list, &net->sctp.addr_waitq);
|
2013-06-29 01:49:40 +08:00
|
|
|
|
|
|
|
pr_debug("%s: add new entry for cmd:%d, addr:%pISc in wq:%p\n",
|
|
|
|
__func__, addrw->state, &addrw->a.sa, &net->sctp.addr_waitq);
|
2011-04-26 18:32:51 +08:00
|
|
|
|
2012-08-06 16:42:04 +08:00
|
|
|
if (!timer_pending(&net->sctp.addr_wq_timer)) {
|
2011-04-26 18:32:51 +08:00
|
|
|
timeo_val = jiffies;
|
|
|
|
timeo_val += msecs_to_jiffies(SCTP_ADDRESS_TICK_DELAY);
|
2012-08-06 16:42:04 +08:00
|
|
|
mod_timer(&net->sctp.addr_wq_timer, timeo_val);
|
2011-04-26 18:32:51 +08:00
|
|
|
}
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_unlock_bh(&net->sctp.addr_wq_lock);
|
2011-04-26 18:32:51 +08:00
|
|
|
}
|
|
|
|
|
2007-09-17 07:02:12 +08:00
|
|
|
/* Event handler for inet address addition/deletion events.
|
|
|
|
* The sctp_local_addr_list needs to be protected by a spin lock since
|
|
|
|
* multiple notifiers (say IPv4 and IPv6) may be running at the same
|
|
|
|
* time and thus corrupt the list.
|
|
|
|
* The reader side is protected with RCU.
|
|
|
|
*/
|
2006-12-21 08:08:22 +08:00
|
|
|
static int sctp_inetaddr_event(struct notifier_block *this, unsigned long ev,
|
|
|
|
void *ptr)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2006-12-14 08:26:26 +08:00
|
|
|
struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
|
2007-09-17 07:02:12 +08:00
|
|
|
struct sctp_sockaddr_entry *addr = NULL;
|
|
|
|
struct sctp_sockaddr_entry *temp;
|
2012-08-06 16:42:04 +08:00
|
|
|
struct net *net = dev_net(ifa->ifa_dev->dev);
|
2008-03-12 09:05:02 +08:00
|
|
|
int found = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-12-14 08:26:26 +08:00
|
|
|
switch (ev) {
|
|
|
|
case NETDEV_UP:
|
2019-01-14 18:34:02 +08:00
|
|
|
addr = kzalloc(sizeof(*addr), GFP_ATOMIC);
|
2006-12-14 08:26:26 +08:00
|
|
|
if (addr) {
|
|
|
|
addr->a.v4.sin_family = AF_INET;
|
|
|
|
addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
|
2007-09-17 07:02:12 +08:00
|
|
|
addr->valid = 1;
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_lock_bh(&net->sctp.local_addr_lock);
|
|
|
|
list_add_tail_rcu(&addr->list, &net->sctp.local_addr_list);
|
|
|
|
sctp_addr_wq_mgmt(net, addr, SCTP_ADDR_NEW);
|
|
|
|
spin_unlock_bh(&net->sctp.local_addr_lock);
|
2006-12-14 08:26:26 +08:00
|
|
|
}
|
|
|
|
break;
|
|
|
|
case NETDEV_DOWN:
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_lock_bh(&net->sctp.local_addr_lock);
|
2007-09-17 07:02:12 +08:00
|
|
|
list_for_each_entry_safe(addr, temp,
|
2012-08-06 16:42:04 +08:00
|
|
|
&net->sctp.local_addr_list, list) {
|
2008-04-13 09:40:38 +08:00
|
|
|
if (addr->a.sa.sa_family == AF_INET &&
|
|
|
|
addr->a.v4.sin_addr.s_addr ==
|
|
|
|
ifa->ifa_local) {
|
2012-08-06 16:42:04 +08:00
|
|
|
sctp_addr_wq_mgmt(net, addr, SCTP_ADDR_DEL);
|
2008-03-12 09:05:02 +08:00
|
|
|
found = 1;
|
2007-09-17 07:02:12 +08:00
|
|
|
addr->valid = 0;
|
|
|
|
list_del_rcu(&addr->list);
|
2006-12-14 08:26:26 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2012-08-06 16:42:04 +08:00
|
|
|
spin_unlock_bh(&net->sctp.local_addr_lock);
|
2008-03-12 09:05:02 +08:00
|
|
|
if (found)
|
2011-03-15 18:05:02 +08:00
|
|
|
kfree_rcu(addr, rcu);
|
2006-12-14 08:26:26 +08:00
|
|
|
break;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
return NOTIFY_DONE;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Initialize the control inode/socket with a control endpoint data
|
|
|
|
* structure. This endpoint is reserved exclusively for the OOTB processing.
|
|
|
|
*/
|
2012-08-06 16:43:06 +08:00
|
|
|
static int sctp_ctl_sock_init(struct net *net)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
int err;
|
2009-03-04 19:20:26 +08:00
|
|
|
sa_family_t family = PF_INET;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
if (sctp_get_pf_specific(PF_INET6))
|
|
|
|
family = PF_INET6;
|
|
|
|
|
2012-08-06 16:43:06 +08:00
|
|
|
err = inet_ctl_sock_create(&net->sctp.ctl_sock, family,
|
|
|
|
SOCK_SEQPACKET, IPPROTO_SCTP, net);
|
2009-03-04 19:20:26 +08:00
|
|
|
|
|
|
|
/* If IPv6 socket could not be created, try the IPv4 socket */
|
|
|
|
if (err < 0 && family == PF_INET6)
|
2012-08-06 16:43:06 +08:00
|
|
|
err = inet_ctl_sock_create(&net->sctp.ctl_sock, AF_INET,
|
2009-03-04 19:20:26 +08:00
|
|
|
SOCK_SEQPACKET, IPPROTO_SCTP,
|
2012-08-06 16:43:06 +08:00
|
|
|
net);
|
2009-03-04 19:20:26 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
if (err < 0) {
|
2010-08-24 21:21:08 +08:00
|
|
|
pr_err("Failed to create the SCTP control socket\n");
|
2005-04-17 06:20:36 +08:00
|
|
|
return err;
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-10-29 15:04:58 +08:00
|
|
|
static int sctp_udp_rcv(struct sock *sk, struct sk_buff *skb)
|
|
|
|
{
|
2020-10-29 15:05:03 +08:00
|
|
|
SCTP_INPUT_CB(skb)->encap_port = udp_hdr(skb)->source;
|
|
|
|
|
2020-10-29 15:04:58 +08:00
|
|
|
skb_set_transport_header(skb, sizeof(struct udphdr));
|
|
|
|
sctp_rcv(skb);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
int sctp_udp_sock_start(struct net *net)
|
|
|
|
{
|
|
|
|
struct udp_tunnel_sock_cfg tuncfg = {NULL};
|
|
|
|
struct udp_port_cfg udp_conf = {0};
|
|
|
|
struct socket *sock;
|
|
|
|
int err;
|
|
|
|
|
|
|
|
udp_conf.family = AF_INET;
|
|
|
|
udp_conf.local_ip.s_addr = htonl(INADDR_ANY);
|
|
|
|
udp_conf.local_udp_port = htons(net->sctp.udp_port);
|
|
|
|
err = udp_sock_create(net, &udp_conf, &sock);
|
|
|
|
if (err) {
|
|
|
|
pr_err("Failed to create the SCTP UDP tunneling v4 sock\n");
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
tuncfg.encap_type = 1;
|
|
|
|
tuncfg.encap_rcv = sctp_udp_rcv;
|
2021-06-23 02:05:00 +08:00
|
|
|
tuncfg.encap_err_lookup = sctp_udp_v4_err;
|
2020-10-29 15:04:58 +08:00
|
|
|
setup_udp_tunnel_sock(net, sock, &tuncfg);
|
|
|
|
net->sctp.udp4_sock = sock->sk;
|
|
|
|
|
2020-10-29 15:04:59 +08:00
|
|
|
#if IS_ENABLED(CONFIG_IPV6)
|
|
|
|
memset(&udp_conf, 0, sizeof(udp_conf));
|
|
|
|
|
|
|
|
udp_conf.family = AF_INET6;
|
|
|
|
udp_conf.local_ip6 = in6addr_any;
|
|
|
|
udp_conf.local_udp_port = htons(net->sctp.udp_port);
|
|
|
|
udp_conf.use_udp6_rx_checksums = true;
|
|
|
|
udp_conf.ipv6_v6only = true;
|
|
|
|
err = udp_sock_create(net, &udp_conf, &sock);
|
|
|
|
if (err) {
|
|
|
|
pr_err("Failed to create the SCTP UDP tunneling v6 sock\n");
|
|
|
|
udp_tunnel_sock_release(net->sctp.udp4_sock->sk_socket);
|
|
|
|
net->sctp.udp4_sock = NULL;
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
|
|
|
tuncfg.encap_type = 1;
|
|
|
|
tuncfg.encap_rcv = sctp_udp_rcv;
|
2021-06-23 02:05:00 +08:00
|
|
|
tuncfg.encap_err_lookup = sctp_udp_v6_err;
|
2020-10-29 15:04:59 +08:00
|
|
|
setup_udp_tunnel_sock(net, sock, &tuncfg);
|
|
|
|
net->sctp.udp6_sock = sock->sk;
|
|
|
|
#endif
|
|
|
|
|
2020-10-29 15:04:58 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
void sctp_udp_sock_stop(struct net *net)
|
|
|
|
{
|
|
|
|
if (net->sctp.udp4_sock) {
|
|
|
|
udp_tunnel_sock_release(net->sctp.udp4_sock->sk_socket);
|
|
|
|
net->sctp.udp4_sock = NULL;
|
|
|
|
}
|
2020-10-29 15:04:59 +08:00
|
|
|
if (net->sctp.udp6_sock) {
|
|
|
|
udp_tunnel_sock_release(net->sctp.udp6_sock->sk_socket);
|
|
|
|
net->sctp.udp6_sock = NULL;
|
|
|
|
}
|
2020-10-29 15:04:58 +08:00
|
|
|
}
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/* Register address family specific functions. */
|
|
|
|
int sctp_register_af(struct sctp_af *af)
|
|
|
|
{
|
|
|
|
switch (af->sa_family) {
|
|
|
|
case AF_INET:
|
|
|
|
if (sctp_af_v4_specific)
|
|
|
|
return 0;
|
|
|
|
sctp_af_v4_specific = af;
|
|
|
|
break;
|
|
|
|
case AF_INET6:
|
|
|
|
if (sctp_af_v6_specific)
|
|
|
|
return 0;
|
|
|
|
sctp_af_v6_specific = af;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
INIT_LIST_HEAD(&af->list);
|
|
|
|
list_add_tail(&af->list, &sctp_address_families);
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Get the table of functions for manipulating a particular address
|
|
|
|
* family.
|
|
|
|
*/
|
|
|
|
struct sctp_af *sctp_get_af_specific(sa_family_t family)
|
|
|
|
{
|
|
|
|
switch (family) {
|
|
|
|
case AF_INET:
|
|
|
|
return sctp_af_v4_specific;
|
|
|
|
case AF_INET6:
|
|
|
|
return sctp_af_v6_specific;
|
|
|
|
default:
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Common code to initialize an AF_INET msg_name. */
|
|
|
|
static void sctp_inet_msgname(char *msgname, int *addr_len)
|
|
|
|
{
|
|
|
|
struct sockaddr_in *sin;
|
|
|
|
|
|
|
|
sin = (struct sockaddr_in *)msgname;
|
|
|
|
*addr_len = sizeof(struct sockaddr_in);
|
|
|
|
sin->sin_family = AF_INET;
|
|
|
|
memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Copy the peer's primary address as the msg_name. */
|
|
|
|
static void sctp_inet_event_msgname(struct sctp_ulpevent *event, char *msgname,
|
|
|
|
int *addr_len)
|
|
|
|
{
|
|
|
|
struct sockaddr_in *sin, *sinfrom;
|
|
|
|
|
|
|
|
if (msgname) {
|
|
|
|
struct sctp_association *asoc;
|
|
|
|
|
|
|
|
asoc = event->asoc;
|
|
|
|
sctp_inet_msgname(msgname, addr_len);
|
|
|
|
sin = (struct sockaddr_in *)msgname;
|
|
|
|
sinfrom = &asoc->peer.primary_addr.v4;
|
|
|
|
sin->sin_port = htons(asoc->peer.port);
|
|
|
|
sin->sin_addr.s_addr = sinfrom->sin_addr.s_addr;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Initialize and copy out a msgname from an inbound skb. */
|
|
|
|
static void sctp_inet_skb_msgname(struct sk_buff *skb, char *msgname, int *len)
|
|
|
|
{
|
|
|
|
if (msgname) {
|
2007-03-14 00:59:32 +08:00
|
|
|
struct sctphdr *sh = sctp_hdr(skb);
|
|
|
|
struct sockaddr_in *sin = (struct sockaddr_in *)msgname;
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
sctp_inet_msgname(msgname, len);
|
|
|
|
sin->sin_port = sh->source;
|
2007-04-21 13:47:35 +08:00
|
|
|
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Do we support this AF? */
|
|
|
|
static int sctp_inet_af_supported(sa_family_t family, struct sctp_sock *sp)
|
|
|
|
{
|
|
|
|
/* PF_INET only supports AF_INET addresses. */
|
2010-09-23 04:43:57 +08:00
|
|
|
return AF_INET == family;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Address matching with wildcards allowed. */
|
|
|
|
static int sctp_inet_cmp_addr(const union sctp_addr *addr1,
|
|
|
|
const union sctp_addr *addr2,
|
|
|
|
struct sctp_sock *opt)
|
|
|
|
{
|
|
|
|
/* PF_INET only supports AF_INET addresses. */
|
|
|
|
if (addr1->sa.sa_family != addr2->sa.sa_family)
|
|
|
|
return 0;
|
2008-03-18 13:44:53 +08:00
|
|
|
if (htonl(INADDR_ANY) == addr1->v4.sin_addr.s_addr ||
|
|
|
|
htonl(INADDR_ANY) == addr2->v4.sin_addr.s_addr)
|
2005-04-17 06:20:36 +08:00
|
|
|
return 1;
|
|
|
|
if (addr1->v4.sin_addr.s_addr == addr2->v4.sin_addr.s_addr)
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Verify that provided sockaddr looks bindable. Common verification has
|
|
|
|
* already been taken care of.
|
|
|
|
*/
|
|
|
|
static int sctp_inet_bind_verify(struct sctp_sock *opt, union sctp_addr *addr)
|
|
|
|
{
|
|
|
|
return sctp_v4_available(addr, opt);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Verify that sockaddr looks sendable. Common verification has already
|
|
|
|
* been taken care of.
|
|
|
|
*/
|
|
|
|
static int sctp_inet_send_verify(struct sctp_sock *opt, union sctp_addr *addr)
|
|
|
|
{
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Fill in Supported Address Type information for INIT and INIT-ACK
|
|
|
|
* chunks. Returns number of addresses supported.
|
|
|
|
*/
|
|
|
|
static int sctp_inet_supported_addrs(const struct sctp_sock *opt,
|
2006-11-21 09:25:49 +08:00
|
|
|
__be16 *types)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
types[0] = SCTP_PARAM_IPV4_ADDRESS;
|
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Wrapper routine that calls the ip transmit routine. */
|
sctp: support for sending packet over udp4 sock
This patch does what the rfc6951#section-5.3 says for ipv4:
"Within the UDP header, the source port MUST be the local UDP
encapsulation port number of the SCTP stack, and the destination port
MUST be the remote UDP encapsulation port number maintained for the
association and the destination address to which the packet is sent
(see Section 5.1).
Because the SCTP packet is the UDP payload, the length of the UDP
packet MUST be the length of the SCTP packet plus the size of the UDP
header.
The SCTP checksum MUST be computed for IPv4 and IPv6, and the UDP
checksum SHOULD be computed for IPv4 and IPv6."
Some places need to be adjusted in sctp_packet_transmit():
1. For non-gso packets, when the transport's encap_port is set, the sctp
checksum has to be done in sctp_packet_pack(), as the outer
udp will use ip_summed = CHECKSUM_PARTIAL to do the offload
setting for the checksum.
2. Delay calling dst_clone() and skb_dst_set() for non-udp packets
until sctp_v4_xmit(), as for udp packets, skb_dst_set() is not
needed before calling udp_tunnel_xmit_skb().
Then in sctp_v4_xmit():
1. Go to udp_tunnel_xmit_skb() only when transport->encap_port and
net->sctp.udp_port are both set, as one is needed for the dst port
and the other for the src port.
2. For gso packets, SKB_GSO_UDP_TUNNEL_CSUM is set for gso_type, and
with this the udp checksum can be done in __skb_udp_tunnel_segment()
for each segment after the sctp gso.
3. inner_mac_header and inner_transport_header are set, as these
will be needed in __skb_udp_tunnel_segment() to find the right
headers.
4. df and ttl are calculated, as these are the params required by
udp_tunnel_xmit_skb().
5. The nocheck param has to be false, as "the UDP checksum SHOULD be
computed for IPv4 and IPv6", says rfc6951#section-5.3.
v1->v2:
- Use sp->udp_port instead in sctp_v4_xmit(), which is safer.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-10-29 15:05:06 +08:00
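For reference, a hedged user-space sketch of configuring the remote
encapsulation port via the SCTP_REMOTE_UDP_ENCAPS_PORT sockopt from the same
series (port 9899, the IANA sctp-tunneling port, is only an example; headers
must be recent enough to expose struct sctp_udpencaps):

	#include <netinet/in.h>
	#include <netinet/sctp.h>
	#include <string.h>
	#include <sys/socket.h>

	/* Sketch: ask the stack to wrap packets to this peer in UDP.
	 * The receiving side also needs its tunneling socket enabled,
	 * e.g. via: sysctl net.sctp.udp_port=9899
	 */
	static int set_remote_encap_port(int fd, const struct sockaddr_in *peer)
	{
		struct sctp_udpencaps encaps;

		memset(&encaps, 0, sizeof(encaps));
		memcpy(&encaps.sue_address, peer, sizeof(*peer));
		encaps.sue_port = htons(9899);	/* remote UDP encap port */

		return setsockopt(fd, IPPROTO_SCTP,
				  SCTP_REMOTE_UDP_ENCAPS_PORT,
				  &encaps, sizeof(encaps));
	}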
|
|
|
static inline int sctp_v4_xmit(struct sk_buff *skb, struct sctp_transport *t)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2020-10-29 15:05:06 +08:00
|
|
|
struct dst_entry *dst = dst_clone(t->dst);
|
|
|
|
struct flowi4 *fl4 = &t->fl.u.ip4;
|
|
|
|
struct sock *sk = skb->sk;
|
|
|
|
struct inet_sock *inet = inet_sk(sk);
|
2023-09-22 11:42:16 +08:00
|
|
|
__u8 dscp = READ_ONCE(inet->tos);
|
2020-10-29 15:05:06 +08:00
|
|
|
__be16 df = 0;
|
2008-08-04 12:15:08 +08:00
|
|
|
|
2013-06-29 01:49:40 +08:00
|
|
|
pr_debug("%s: skb:%p, len:%d, src:%pI4, dst:%pI4\n", __func__, skb,
|
2020-10-29 15:05:06 +08:00
|
|
|
skb->len, &fl4->saddr, &fl4->daddr);
|
|
|
|
|
|
|
|
if (t->dscp & SCTP_DSCP_SET_MASK)
|
|
|
|
dscp = t->dscp & SCTP_DSCP_VAL_MASK;
|
2018-07-02 18:21:12 +08:00
|
|
|
|
2020-10-29 15:05:06 +08:00
|
|
|
inet->pmtudisc = t->param_flags & SPP_PMTUD_ENABLE ? IP_PMTUDISC_DO
|
|
|
|
: IP_PMTUDISC_DONT;
|
|
|
|
SCTP_INC_STATS(sock_net(sk), SCTP_MIB_OUTSCTPPACKS);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2020-10-29 15:05:06 +08:00
|
|
|
if (!t->encap_port || !sctp_sk(sk)->udp_port) {
|
|
|
|
skb_dst_set(skb, dst);
|
|
|
|
return __ip_queue_xmit(sk, skb, &t->fl, dscp);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (skb_is_gso(skb))
|
|
|
|
skb_shinfo(skb)->gso_type |= SKB_GSO_UDP_TUNNEL_CSUM;
|
2008-08-04 12:15:08 +08:00
|
|
|
|
2020-10-29 15:05:06 +08:00
|
|
|
if (ip_dont_fragment(sk, dst) && !skb->ignore_df)
|
|
|
|
df = htons(IP_DF);
|
2013-06-29 01:49:40 +08:00
|
|
|
|
2020-10-29 15:05:06 +08:00
|
|
|
skb->encapsulation = 1;
|
|
|
|
skb_reset_inner_mac_header(skb);
|
|
|
|
skb_reset_inner_transport_header(skb);
|
|
|
|
skb_set_inner_ipproto(skb, IPPROTO_SCTP);
|
2024-04-29 21:30:09 +08:00
|
|
|
udp_tunnel_xmit_skb(dst_rtable(dst), sk, skb, fl4->saddr,
|
2020-10-29 15:05:06 +08:00
|
|
|
fl4->daddr, dscp, ip4_dst_hoplimit(dst), df,
|
|
|
|
sctp_sk(sk)->udp_port, t->encap_port, false, false);
|
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2008-02-15 22:53:59 +08:00
|
|
|
static struct sctp_af sctp_af_inet;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
static struct sctp_pf sctp_pf_inet = {
|
|
|
|
.event_msgname = sctp_inet_event_msgname,
|
|
|
|
.skb_msgname = sctp_inet_skb_msgname,
|
|
|
|
.af_supported = sctp_inet_af_supported,
|
|
|
|
.cmp_addr = sctp_inet_cmp_addr,
|
|
|
|
.bind_verify = sctp_inet_bind_verify,
|
|
|
|
.send_verify = sctp_inet_send_verify,
|
|
|
|
.supported_addrs = sctp_inet_supported_addrs,
|
|
|
|
.create_accept_sk = sctp_v4_create_accept_sk,
|
2014-07-31 02:40:53 +08:00
|
|
|
.addr_to_user = sctp_v4_addr_to_user,
|
|
|
|
.to_sk_saddr = sctp_v4_to_sk_saddr,
|
|
|
|
.to_sk_daddr = sctp_v4_to_sk_daddr,
|
2018-02-25 00:18:51 +08:00
|
|
|
.copy_ip_options = sctp_v4_copy_ip_options,
|
2008-02-15 22:53:59 +08:00
|
|
|
.af = &sctp_af_inet
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
/* Notifier for inetaddr addition/deletion events. */
|
|
|
|
static struct notifier_block sctp_inetaddr_notifier = {
|
|
|
|
.notifier_call = sctp_inetaddr_event,
|
|
|
|
};
|
|
|
|
|
|
|
|
/* Socket operations. */
|
2005-12-23 04:49:22 +08:00
|
|
|
static const struct proto_ops inet_seqpacket_ops = {
|
2006-03-21 14:48:35 +08:00
|
|
|
.family = PF_INET,
|
|
|
|
.owner = THIS_MODULE,
|
|
|
|
.release = inet_release, /* Needs to be wrapped... */
|
|
|
|
.bind = inet_bind,
|
sctp: fix the issue that flags are ignored when using kernel_connect
Now sctp uses inet_dgram_connect as its proto_ops .connect, and the flags
param can't be passed into its proto .connect, where these flags are really
needed.
sctp works around it by getting the flags from the socket file in
__sctp_connect. This works for connecting from userspace, as the user sock
inherently has a socket file, and it passes f_flags as the flags param into
the proto_ops .connect.
However, the sock created by sock_create_kern doesn't have a socket file,
and it passes the flags (like O_NONBLOCK) by using the flags param in
kernel_connect, which calls proto_ops .connect later.
So to fix it, this patch defines a new proto_ops .connect for sctp,
sctp_inet_connect, which calls __sctp_connect() directly with this
flags param. After this, sctp's proto .connect can be removed.
Note that sctp_inet_connect doesn't need to do some checks that are not
needed for sctp, which makes things better than with inet_dgram_connect.
Suggested-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Michal Kubecek <mkubecek@suse.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
2018-05-20 16:39:10 +08:00
|
|
|
.connect = sctp_inet_connect,
|
2006-03-21 14:48:35 +08:00
|
|
|
.socketpair = sock_no_socketpair,
|
|
|
|
.accept = inet_accept,
|
|
|
|
.getname = inet_getname, /* Semantics are different. */
|
2018-06-29 00:43:44 +08:00
|
|
|
.poll = sctp_poll,
|
2006-03-21 14:48:35 +08:00
|
|
|
.ioctl = inet_ioctl,
|
2019-04-18 04:51:48 +08:00
|
|
|
.gettstamp = sock_gettstamp,
|
2006-03-21 14:48:35 +08:00
|
|
|
.listen = sctp_inet_listen,
|
|
|
|
.shutdown = inet_shutdown, /* Looks harmless. */
|
|
|
|
.setsockopt = sock_common_setsockopt, /* IP_SOL IP_OPTION is a problem */
|
|
|
|
.getsockopt = sock_common_getsockopt,
|
|
|
|
.sendmsg = inet_sendmsg,
|
2016-07-22 21:25:42 +08:00
|
|
|
.recvmsg = inet_recvmsg,
|
2006-03-21 14:48:35 +08:00
|
|
|
.mmap = sock_no_mmap,
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
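To make the kernel_connect() path from the commit message concrete, here is a minimal hedged sketch of an in-kernel caller; the helper name and error handling are illustrative, but sock_create_kern() and kernel_connect() are the real entry points the message refers to.

	/* Hedged sketch: an in-kernel SCTP connect. With no socket file,
	 * O_NONBLOCK can only travel via kernel_connect()'s flags param,
	 * which sctp_inet_connect() now forwards into __sctp_connect().
	 */
	static int sctp_kern_connect_sketch(struct net *net,
					    struct sockaddr_in *peer)
	{
		struct socket *sock;
		int err;

		err = sock_create_kern(net, PF_INET, SOCK_STREAM,
				       IPPROTO_SCTP, &sock);
		if (err)
			return err;

		err = kernel_connect(sock, (struct sockaddr *)peer,
				     sizeof(*peer), O_NONBLOCK);
		if (err && err != -EINPROGRESS)
			sock_release(sock);
		return err;
	}
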
/* Registration with AF_INET family. */
static struct inet_protosw sctp_seqpacket_protosw = {
	.type = SOCK_SEQPACKET,
	.protocol = IPPROTO_SCTP,
	.prot = &sctp_prot,
	.ops = &inet_seqpacket_ops,
	.flags = SCTP_PROTOSW_FLAG
};

static struct inet_protosw sctp_stream_protosw = {
	.type = SOCK_STREAM,
	.protocol = IPPROTO_SCTP,
	.prot = &sctp_prot,
	.ops = &inet_seqpacket_ops,
	.flags = SCTP_PROTOSW_FLAG
};

static int sctp4_rcv(struct sk_buff *skb)
{
	SCTP_INPUT_CB(skb)->encap_port = 0;
	return sctp_rcv(skb);
}

/* Register with IP layer. */
static const struct net_protocol sctp_protocol = {
	.handler = sctp4_rcv,
	.err_handler = sctp_v4_err,
	.no_policy = 1,
	.icmp_strict_tag_validation = 1,
};

/* IPv4 address related functions. */
static struct sctp_af sctp_af_inet = {
	.sa_family = AF_INET,
	.sctp_xmit = sctp_v4_xmit,
	.setsockopt = ip_setsockopt,
	.getsockopt = ip_getsockopt,
	.get_dst = sctp_v4_get_dst,
	.get_saddr = sctp_v4_get_saddr,
	.copy_addrlist = sctp_v4_copy_addrlist,
	.from_skb = sctp_v4_from_skb,
	.from_sk = sctp_v4_from_sk,
	.from_addr_param = sctp_v4_from_addr_param,
	.to_addr_param = sctp_v4_to_addr_param,
	.cmp_addr = sctp_v4_cmp_addr,
	.addr_valid = sctp_v4_addr_valid,
	.inaddr_any = sctp_v4_inaddr_any,
	.is_any = sctp_v4_is_any,
	.available = sctp_v4_available,
	.scope = sctp_v4_scope,
	.skb_iif = sctp_v4_skb_iif,
	.skb_sdif = sctp_v4_skb_sdif,
	.is_ce = sctp_v4_is_ce,
	.seq_dump_addr = sctp_v4_seq_dump_addr,
	.ecn_capable = sctp_v4_ecn_capable,
	.net_header_len = sizeof(struct iphdr),
	.sockaddr_len = sizeof(struct sockaddr_in),
	.ip_options_len = sctp_v4_ip_options_len,
};

struct sctp_pf *sctp_get_pf_specific(sa_family_t family)
{
	switch (family) {
	case PF_INET:
		return sctp_pf_inet_specific;
	case PF_INET6:
		return sctp_pf_inet6_specific;
	default:
		return NULL;
	}
}

/* Register the PF specific function table. */
int sctp_register_pf(struct sctp_pf *pf, sa_family_t family)
{
	switch (family) {
	case PF_INET:
		if (sctp_pf_inet_specific)
			return 0;
		sctp_pf_inet_specific = pf;
		break;
	case PF_INET6:
		if (sctp_pf_inet6_specific)
			return 0;
		sctp_pf_inet6_specific = pf;
		break;
	default:
		return 0;
	}
	return 1;
}

static inline int init_sctp_mibs(struct net *net)
{
	net->sctp.sctp_statistics = alloc_percpu(struct sctp_mib);
	if (!net->sctp.sctp_statistics)
		return -ENOMEM;
	return 0;
}

static inline void cleanup_sctp_mibs(struct net *net)
{
	free_percpu(net->sctp.sctp_statistics);
}
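For context, a hedged sketch of how the per-cpu MIB allocated above is consumed elsewhere in the stack; SCTP_INC_STATS() is the usual wrapper macro, but its exact definition lives in the sctp headers, not in this file.

	/* Hedged sketch: SCTP_INC_STATS() performs a per-cpu increment
	 * on net->sctp.sctp_statistics, the block allocated above.
	 */
	static inline void sctp_count_out_pkt(struct net *net)
	{
		SCTP_INC_STATS(net, SCTP_MIB_OUTSCTPPACKS);
	}
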

static void sctp_v4_pf_init(void)
{
	/* Initialize the SCTP specific PF functions. */
	sctp_register_pf(&sctp_pf_inet, PF_INET);
	sctp_register_af(&sctp_af_inet);
}

static void sctp_v4_pf_exit(void)
{
	list_del(&sctp_af_inet.list);
}

static int sctp_v4_protosw_init(void)
{
	int rc;

	rc = proto_register(&sctp_prot, 1);
	if (rc)
		return rc;

	/* Register SCTP(UDP and TCP style) with socket layer. */
	inet_register_protosw(&sctp_seqpacket_protosw);
	inet_register_protosw(&sctp_stream_protosw);

	return 0;
}

static void sctp_v4_protosw_exit(void)
{
	inet_unregister_protosw(&sctp_stream_protosw);
	inet_unregister_protosw(&sctp_seqpacket_protosw);
	proto_unregister(&sctp_prot);
}

static int sctp_v4_add_protocol(void)
{
	/* Register notifier for inet address additions/deletions. */
	register_inetaddr_notifier(&sctp_inetaddr_notifier);

	/* Register SCTP with inet layer. */
	if (inet_add_protocol(&sctp_protocol, IPPROTO_SCTP) < 0)
		return -EAGAIN;

	return 0;
}

static void sctp_v4_del_protocol(void)
{
	inet_del_protocol(&sctp_protocol, IPPROTO_SCTP);
	unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
}

sctp: fix race on protocol/netns initialization

Consider the sctp module being loaded on demand because a user is
creating an sctp socket. During initialization, sctp adds the new
protocol type and then initializes the pernet subsys:

	status = sctp_v4_protosw_init();
	if (status)
		goto err_protosw_init;
	status = sctp_v6_protosw_init();
	if (status)
		goto err_v6_protosw_init;
	status = register_pernet_subsys(&sctp_net_ops);

The problem is that after those calls to sctp_v{4,6}_protosw_init(),
userspace can create SCTP sockets as if the module were already fully
loaded. If that happens, one possible effect is that we will have
readers of the net->sctp.local_addr_list list earlier than expected,
and sctp_net_init() takes no precautions while dealing with that list,
leading to a potential panic, but not limited to that, as
sctp_sock_init() will copy a bunch of blank/partially initialized
values from net->sctp.

The race happens like this:

	CPU 0				| CPU 1
	socket()			|
	 __sock_create			| socket()
	  inet_create			|  __sock_create
	   list_for_each_entry_rcu(	|
	      answer, &inetsw[sock->type], |
	      list) {			|   inet_create
	      /* no hits */		|
	   if (unlikely(err)) {		|
	      ...			|
	      request_module()		|
	      /* socket creation is blocked |
	       * until the module is fully loaded |
	       */			|
	       sctp_init		|
	        sctp_v4_protosw_init	|
	         inet_register_protosw	|
	          list_add_rcu(&p->list, |
	                       last_perm); |
	                                | list_for_each_entry_rcu(
	                                |    answer, &inetsw[sock->type],
	        sctp_v6_protosw_init	|    list) {
	                                |    /* hit, so assumes protocol
	                                |     * is already loaded
	                                |     */
	                                | /* socket creation continues
	                                |  * before netns is initialized
	                                |  */
	        register_pernet_subsys	|

Simply inverting the initialization order between
register_pernet_subsys() and sctp_v4_protosw_init() is not possible,
because register_pernet_subsys() will create a control sctp socket, so
the protocol must already be visible by then. Deferring the socket
creation to a workqueue is not good either, especially because we lose
the ability to handle its errors.

So, as suggested by Vlad, the fix is to split netns initialization into
two moments: defaults and control socket, so that the defaults are
already loaded by the time we register the protocol, while the control
socket initialization is kept at the same moment it happens today.

Fixes: 4db67e808640 ("sctp: Make the address lists per network namespace")
Signed-off-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
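Condensed, the registration order this fix establishes in sctp_init() looks like the sketch below; the full code, including its error unwinding, appears later in this file.

	/* Sketch of the fixed ordering: per-netns defaults are in place
	 * before the protocol becomes visible, and the control socket
	 * (which needs the protocol registered) comes last.
	 */
	status = register_pernet_subsys(&sctp_defaults_ops);	/* defaults first */
	if (status)
		goto err_register_defaults;

	status = sctp_v4_protosw_init();	/* protocol becomes visible here */
	if (status)
		goto err_protosw_init;

	status = sctp_v6_protosw_init();
	if (status)
		goto err_v6_protosw_init;

	status = register_pernet_subsys(&sctp_ctrlsock_ops);	/* ctl sock last */
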

static int __net_init sctp_defaults_init(struct net *net)
{
	int status;

	/*
	 * 14. Suggested SCTP Protocol Parameter Values
	 */
	/* The following protocol parameters are RECOMMENDED: */
	/* RTO.Initial - 3 seconds */
	net->sctp.rto_initial = SCTP_RTO_INITIAL;
	/* RTO.Min - 1 second */
	net->sctp.rto_min = SCTP_RTO_MIN;
	/* RTO.Max - 60 seconds */
	net->sctp.rto_max = SCTP_RTO_MAX;
	/* RTO.Alpha - 1/8 */
	net->sctp.rto_alpha = SCTP_RTO_ALPHA;
	/* RTO.Beta - 1/4 */
	net->sctp.rto_beta = SCTP_RTO_BETA;

	/* Valid.Cookie.Life - 60 seconds */
	net->sctp.valid_cookie_life = SCTP_DEFAULT_COOKIE_LIFE;

	/* Whether Cookie Preservative is enabled (1) or not (0) */
	net->sctp.cookie_preserve_enable = 1;

	/* Default sctp sockets to use md5 as their hmac alg */
#if defined(CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5)
	net->sctp.sctp_hmac_alg = "md5";
#elif defined(CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1)
	net->sctp.sctp_hmac_alg = "sha1";
#else
	net->sctp.sctp_hmac_alg = NULL;
#endif

	/* Max.Burst - 4 */
	net->sctp.max_burst = SCTP_DEFAULT_MAX_BURST;

	/* Disable Primary Path Switchover by default */
	net->sctp.ps_retrans = SCTP_PS_RETRANS_MAX;

	/* Enable pf state by default */
	net->sctp.pf_enable = 1;

	/* RFC 7829, section 3, point 12 says the stack SHOULD expose the
	 * PF state of its destination addresses to the ULP, but also
	 * recommends allowing the ULP to remain ignorant of sctp-pf,
	 * retaining the simpler RFC 4960 state transition model.
	 * pf_expose (per netns, sock and asoc; tunable via sysctl)
	 * selects between the two, and also gates whether
	 * SCTP_GET_PEER_ADDR_INFO may query the state of a sctp-pf peer
	 * address (section 7.3). Keep it unset by default to stay
	 * compatible with old applications.
	 */
	net->sctp.pf_expose = SCTP_PF_EXPOSE_UNSET;

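For context, a hedged userspace sketch of opting a socket in to PF-state exposure. The sockopt SCTP_EXPOSE_POTENTIALLY_FAILED_STATE, SCTP_PF_EXPOSE_ENABLE and SCTP_FUTURE_ASSOC come from the same patch series and are assumptions here, not defined in this file.

	/* Hypothetical userspace sketch (names assumed from the pf_expose
	 * series): enable PF-state notifications on one socket.
	 */
	static int expose_pf_state(int fd)
	{
		struct sctp_assoc_value av = {
			.assoc_id = SCTP_FUTURE_ASSOC,		/* assumed */
			.assoc_value = SCTP_PF_EXPOSE_ENABLE,	/* assumed */
		};

		return setsockopt(fd, IPPROTO_SCTP,
				  SCTP_EXPOSE_POTENTIALLY_FAILED_STATE, /* assumed */
				  &av, sizeof(av));
	}
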
	/* Association.Max.Retrans - 10 attempts
	 * Path.Max.Retrans - 5 attempts (per destination address)
	 * Max.Init.Retransmits - 8 attempts
	 */
	net->sctp.max_retrans_association = 10;
	net->sctp.max_retrans_path = 5;
	net->sctp.max_retrans_init = 8;

	/* Sendbuffer growth - do per-socket accounting */
	net->sctp.sndbuf_policy = 0;

	/* Rcvbuffer growth - do per-socket accounting */
	net->sctp.rcvbuf_policy = 0;

	/* HB.interval - 30 seconds */
	net->sctp.hb_interval = SCTP_DEFAULT_TIMEOUT_HEARTBEAT;

	/* delayed SACK timeout */
	net->sctp.sack_timeout = SCTP_DEFAULT_TIMEOUT_SACK;

	/* Disable ADDIP by default. */
	net->sctp.addip_enable = 0;
	net->sctp.addip_noauth = 0;
	net->sctp.default_auto_asconf = 0;

	/* Enable PR-SCTP by default. */
	net->sctp.prsctp_enable = 1;

	/* Disable RECONF by default. */
	net->sctp.reconf_enable = 0;

	/* Disable AUTH by default. */
	net->sctp.auth_enable = 0;

	/* Enable ECN by default. */
	net->sctp.ecn_enable = 1;

	/* Set UDP tunneling listening port to 0 by default */
	net->sctp.udp_port = 0;

	/* Set remote encap port to 0 by default */
	net->sctp.encap_port = 0;

	/* Set SCOPE policy to enabled */
	net->sctp.scope_policy = SCTP_SCOPE_POLICY_ENABLE;

	/* Set the default rwnd update threshold */
	net->sctp.rwnd_upd_shift = SCTP_DEFAULT_RWND_SHIFT;

	/* Initialize maximum autoclose timeout. */
	net->sctp.max_autoclose = INT_MAX / HZ;

#ifdef CONFIG_NET_L3_MASTER_DEV
	net->sctp.l3mdev_accept = 1;
#endif

	status = sctp_sysctl_net_register(net);
	if (status)
		goto err_sysctl_register;

	/* Allocate and initialise sctp mibs. */
	status = init_sctp_mibs(net);
	if (status)
		goto err_init_mibs;

#ifdef CONFIG_PROC_FS
	/* Initialize proc fs directory. */
	status = sctp_proc_init(net);
	if (status)
		goto err_init_proc;
#endif

	sctp_dbg_objcnt_init(net);

	/* Initialize the local address list. */
	INIT_LIST_HEAD(&net->sctp.local_addr_list);
	spin_lock_init(&net->sctp.local_addr_lock);
	sctp_get_local_addr_list(net);

	/* Initialize the address event list */
	INIT_LIST_HEAD(&net->sctp.addr_waitq);
	INIT_LIST_HEAD(&net->sctp.auto_asconf_splist);
	spin_lock_init(&net->sctp.addr_wq_lock);
	net->sctp.addr_wq_timer.expires = 0;
	timer_setup(&net->sctp.addr_wq_timer, sctp_addr_wq_timeout_handler, 0);

	return 0;

#ifdef CONFIG_PROC_FS
err_init_proc:
	cleanup_sctp_mibs(net);
#endif
err_init_mibs:
	sctp_sysctl_net_unregister(net);
err_sysctl_register:
	return status;
}

static void __net_exit sctp_defaults_exit(struct net *net)
{
	/* Free the local address list */
	sctp_free_addr_wq(net);
	sctp_free_local_addr_list(net);

#ifdef CONFIG_PROC_FS
	remove_proc_subtree("sctp", net->proc_net);
	net->sctp.proc_net_sctp = NULL;
#endif
	cleanup_sctp_mibs(net);
	sctp_sysctl_net_unregister(net);
}

static struct pernet_operations sctp_defaults_ops = {
	.init = sctp_defaults_init,
	.exit = sctp_defaults_exit,
};

static int __net_init sctp_ctrlsock_init(struct net *net)
{
	int status;

	/* Initialize the control inode/socket for handling OOTB packets. */
	status = sctp_ctl_sock_init(net);
	if (status)
		pr_err("Failed to initialize the SCTP control sock\n");

	return status;
}

static void __net_exit sctp_ctrlsock_exit(struct net *net)
{
	/* Free the control endpoint. */
	inet_ctl_sock_destroy(net->sctp.ctl_sock);
}

static struct pernet_operations sctp_ctrlsock_ops = {
	.init = sctp_ctrlsock_init,
	.exit = sctp_ctrlsock_exit,
};

/* Initialize the universe into something sensible. */
static __init int sctp_init(void)
{
	unsigned long nr_pages = totalram_pages();
	unsigned long limit;
	unsigned long goal;
	int max_entry_order;
	int num_entries;
	int max_share;
	int status;
	int order;
	int i;

	sock_skb_cb_check_size(sizeof(struct sctp_ulpevent));

	/* Allocate bind_bucket and chunk caches. */
	status = -ENOBUFS;
	sctp_bucket_cachep = KMEM_CACHE(sctp_bind_bucket, SLAB_HWCACHE_ALIGN);
	if (!sctp_bucket_cachep)
		goto out;

	sctp_chunk_cachep = KMEM_CACHE(sctp_chunk, SLAB_HWCACHE_ALIGN);
	if (!sctp_chunk_cachep)
		goto err_chunk_cachep;

	status = percpu_counter_init(&sctp_sockets_allocated, 0, GFP_KERNEL);
	if (status)
		goto err_percpu_counter_init;

	/* Implementation specific variables. */

	/* Initialize default stream count setup information. */
	sctp_max_instreams = SCTP_DEFAULT_INSTREAMS;
	sctp_max_outstreams = SCTP_DEFAULT_OUTSTREAMS;

	/* Initialize handle used for association ids. */
	idr_init(&sctp_assocs_id);

	limit = nr_free_buffer_pages() / 8;
	limit = max(limit, 128UL);
	sysctl_sctp_mem[0] = limit / 4 * 3;
	sysctl_sctp_mem[1] = limit;
	sysctl_sctp_mem[2] = sysctl_sctp_mem[0] * 2;

	/* Set per-socket limits to no more than 1/128 the pressure threshold */
	limit = (sysctl_sctp_mem[1]) << (PAGE_SHIFT - 7);
	max_share = min(4UL*1024*1024, limit);

	sysctl_sctp_rmem[0] = PAGE_SIZE; /* give each asoc 1 page min */
	sysctl_sctp_rmem[1] = 1500 * SKB_TRUESIZE(1);
	sysctl_sctp_rmem[2] = max(sysctl_sctp_rmem[1], max_share);

	sysctl_sctp_wmem[0] = PAGE_SIZE;
	sysctl_sctp_wmem[1] = 16*1024;
	sysctl_sctp_wmem[2] = max(64*1024, max_share);

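To make the sizing arithmetic concrete, a worked example with hypothetical numbers follows (PAGE_SIZE 4096, so PAGE_SHIFT 12; the page count is invented for illustration).

	/* Worked example (hypothetical): nr_free_buffer_pages() = 1,000,000
	 *   limit            = 1000000 / 8           = 125000 pages
	 *   sysctl_sctp_mem  = { 93750, 125000, 187500 }        (pages)
	 *   limit (bytes)    = 125000 << (12 - 7)    = 4,000,000
	 *   max_share        = min(4194304, 4000000) = 4,000,000
	 *   sysctl_sctp_wmem = { 4096, 16384, 4000000 }         (bytes)
	 */
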
sctp: Fix port hash table size computation

Dmitry Vyukov noted recently that the sctp_port_hashtable had an error
in its size computation, observing that the current method never
guaranteed that the hashsize (measured in number of entries) would be a
power of two, which the input hash function for that table requires.
The root cause of the problem is that two values need to be computed
(one, the allocation order of the storage required, as passed to
__get_free_pages, and two, the number of entries for the hash table).
Both need to be a power of two, but for different reasons, and the
existing code simply computed one order value and used it as the basis
for both, which is wrong (i.e. it assumed that
((1 << order) * PAGE_SIZE) / sizeof(bucket) is still a power of two
when it is not).

To fix this, we change the logic slightly. We start by computing a goal
allocation order (which is limited by the maximum size hash table we
want to support). Then we attempt to allocate that size table,
decreasing the order until a successful allocation is made. Then, with
the resultant successful order, we compute the number of buckets that
hash table supports, which we then round down to the nearest power of
two, giving us the number of entries the table actually supports.

I've tested this locally here, using non-debug and spinlock-debug
kernels, and the number of entries in the hashtable consistently works
out to be a power of two in all cases.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
CC: Dmitry Vyukov <dvyukov@google.com>
CC: Vladislav Yasevich <vyasevich@gmail.com>
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>

	/* Size and allocate the association hash table.
	 * The methodology is similar to that of the tcp hash tables.
	 * Though not identical. Start by getting a goal size.
	 */
	if (nr_pages >= (128 * 1024))
		goal = nr_pages >> (22 - PAGE_SHIFT);
	else
		goal = nr_pages >> (24 - PAGE_SHIFT);

	/* Then compute the page order for said goal */
	order = get_order(goal);

	/* Now compute the required page order for the maximum sized table we
	 * want to create
	 */
	max_entry_order = get_order(MAX_SCTP_PORT_HASH_ENTRIES *
				    sizeof(struct sctp_bind_hashbucket));

	/* Limit the page order by that maximum hash table size */
	order = min(order, max_entry_order);

	/* Allocate and initialize the endpoint hash table. */
	sctp_ep_hashsize = 64;
	sctp_ep_hashtable =
		kmalloc_array(64, sizeof(struct sctp_hashbucket), GFP_KERNEL);
	if (!sctp_ep_hashtable) {
		pr_err("Failed endpoint_hash alloc\n");
		status = -ENOMEM;
		goto err_ehash_alloc;
	}
	for (i = 0; i < sctp_ep_hashsize; i++) {
		rwlock_init(&sctp_ep_hashtable[i].lock);
		INIT_HLIST_HEAD(&sctp_ep_hashtable[i].chain);
	}

	/* Allocate and initialize the SCTP port hash table.
	 * Note that order is initialized to start at the max sized
	 * table we want to support. If we can't get that many pages,
	 * reduce the order and try again.
	 */
	do {
		sctp_port_hashtable = (struct sctp_bind_hashbucket *)
			__get_free_pages(GFP_KERNEL | __GFP_NOWARN, order);
	} while (!sctp_port_hashtable && --order > 0);

	if (!sctp_port_hashtable) {
		pr_err("Failed bind hash alloc\n");
		status = -ENOMEM;
		goto err_bhash_alloc;
	}

	/* Now compute the number of entries that will fit in the
	 * port hash space we allocated
	 */
	num_entries = (1UL << order) * PAGE_SIZE /
		      sizeof(struct sctp_bind_hashbucket);

	/* And finish by rounding it down to the nearest power of two.
	 * This wastes some memory of course, but it's needed because
	 * the hash function operates based on the assumption that
	 * the number of entries is a power of two.
	 */
	sctp_port_hashsize = rounddown_pow_of_two(num_entries);

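A worked example with hypothetical numbers shows why the rounding matters; a bucket size that is not a power of two (as the commit message above notes can happen with spinlock debugging) makes the raw quotient unusable for power-of-two hash masking.

	/* Worked example (hypothetical): order = 3, PAGE_SIZE = 4096,
	 * sizeof(struct sctp_bind_hashbucket) = 24:
	 *   num_entries        = (1 << 3) * 4096 / 24 = 1365
	 *   sctp_port_hashsize = rounddown_pow_of_two(1365) = 1024
	 * Using 1365 directly would break the power-of-two masking the
	 * hash function relies on.
	 */
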
	for (i = 0; i < sctp_port_hashsize; i++) {
		spin_lock_init(&sctp_port_hashtable[i].lock);
		INIT_HLIST_HEAD(&sctp_port_hashtable[i].chain);
	}

	status = sctp_transport_hashtable_init();
	if (status)
		goto err_thash_alloc;

	pr_info("Hash tables configured (bind %d/%d)\n", sctp_port_hashsize,
		num_entries);

	sctp_sysctl_register();

	INIT_LIST_HEAD(&sctp_address_families);
	sctp_v4_pf_init();
	sctp_v6_pf_init();
	sctp_sched_ops_init();

	status = register_pernet_subsys(&sctp_defaults_ops);
	if (status)
		goto err_register_defaults;

	status = sctp_v4_protosw_init();
	if (status)
		goto err_protosw_init;

	status = sctp_v6_protosw_init();
	if (status)
		goto err_v6_protosw_init;

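	/* Ordering note: sctp_ctrlsock_ops is registered only after the
	 * protosw entries above are in place, because its ->init() creates
	 * the per-netns control socket and therefore needs the SCTP
	 * protocol to be visible already (see the commit message above).
	 */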
	status = register_pernet_subsys(&sctp_ctrlsock_ops);
	if (status)
		goto err_register_ctrlsock;

	status = sctp_v4_add_protocol();
	if (status)
		goto err_add_protocol;

	/* Register SCTP with inet6 layer. */
	status = sctp_v6_add_protocol();
	if (status)
		goto err_v6_add_protocol;

	if (sctp_offload_init() < 0)
		pr_crit("%s: Cannot add SCTP protocol offload\n", __func__);
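	/* Note: an sctp_offload_init() failure is logged but deliberately
	 * not treated as fatal; sctp_init() still returns success here.
	 */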

out:
	return status;

err_v6_add_protocol:
	sctp_v4_del_protocol();
err_add_protocol:
	unregister_pernet_subsys(&sctp_ctrlsock_ops);
err_register_ctrlsock:
	sctp_v6_protosw_exit();
err_v6_protosw_init:
	sctp_v4_protosw_exit();
err_protosw_init:
	unregister_pernet_subsys(&sctp_defaults_ops);
err_register_defaults:
	sctp_v4_pf_exit();
	sctp_v6_pf_exit();
	sctp_sysctl_unregister();
	free_pages((unsigned long)sctp_port_hashtable,
		   get_order(sctp_port_hashsize *
			     sizeof(struct sctp_bind_hashbucket)));
err_bhash_alloc:
	sctp_transport_hashtable_destroy();
err_thash_alloc:
	kfree(sctp_ep_hashtable);
err_ehash_alloc:
	percpu_counter_destroy(&sctp_sockets_allocated);
err_percpu_counter_init:
	kmem_cache_destroy(sctp_chunk_cachep);
err_chunk_cachep:
	kmem_cache_destroy(sctp_bucket_cachep);
	goto out;
}
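The error labels above unwind in exact reverse order of the setup steps, the
standard kernel goto-cleanup idiom: each label undoes the last step that
succeeded and falls through to the next. A minimal sketch of the idiom, with
hypothetical setup_a()/setup_b() helpers (not kernel APIs):

static int __init example_init(void)
{
	int err;

	err = setup_a();	/* first resource */
	if (err)
		return err;

	err = setup_b();	/* depends on a */
	if (err)
		goto err_b;

	return 0;

err_b:
	teardown_a();		/* undo in reverse order */
	return err;
}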

/* Exit handler for the SCTP protocol. */
static __exit void sctp_exit(void)
{
	/* BUG.  This should probably do something useful like clean
	 * up all the remaining associations and all that memory.
	 */

	/* Unregister with inet6/inet layers. */
	sctp_v6_del_protocol();
	sctp_v4_del_protocol();

	unregister_pernet_subsys(&sctp_ctrlsock_ops);

	/* Free protosw registrations */
	sctp_v6_protosw_exit();
	sctp_v4_protosw_exit();

	unregister_pernet_subsys(&sctp_defaults_ops);

	/* Unregister with socket layer. */
	sctp_v6_pf_exit();
	sctp_v4_pf_exit();

	sctp_sysctl_unregister();

	free_pages((unsigned long)sctp_port_hashtable,
		   get_order(sctp_port_hashsize *
			     sizeof(struct sctp_bind_hashbucket)));
	kfree(sctp_ep_hashtable);
	sctp_transport_hashtable_destroy();

	percpu_counter_destroy(&sctp_sockets_allocated);

	rcu_barrier(); /* Wait for completion of call_rcu()'s */
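	/* Note: without the rcu_barrier() above, call_rcu() callbacks queued
	 * by SCTP could still be pending and reference objects from the
	 * caches destroyed below.
	 */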

	kmem_cache_destroy(sctp_chunk_cachep);
	kmem_cache_destroy(sctp_bucket_cachep);
}

module_init(sctp_init);
module_exit(sctp_exit);

/*
 * __stringify doesn't like enums, so use the IPPROTO_SCTP value (132)
 * directly.
 */
MODULE_ALIAS("net-pf-" __stringify(PF_INET) "-proto-132");
MODULE_ALIAS("net-pf-" __stringify(PF_INET6) "-proto-132");
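Why the literal 132: __stringify() stringifies its argument after macro
expansion, and PF_INET/PF_INET6 are macros, but IPPROTO_SCTP is an enum
constant that the preprocessor never expands. With the kernel's definition
from include/linux/stringify.h:

#define __stringify_1(x...)	#x
#define __stringify(x...)	__stringify_1(x)

/* __stringify(PF_INET)      -> "2"            (macro, expanded first) */
/* __stringify(IPPROTO_SCTP) -> "IPPROTO_SCTP" (enum, not expanded)    */

so the first alias resolves to "net-pf-2-proto-132", which is (roughly) the
string the socket core passes to request_module() when an SCTP socket is
requested and no handler is registered yet, as in the race diagram above.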
MODULE_AUTHOR("Linux Kernel SCTP developers <linux-sctp@vger.kernel.org>");
MODULE_DESCRIPTION("Support for the SCTP protocol (RFC2960)");
module_param_named(no_checksums, sctp_checksum_disable, bool, 0644);
MODULE_PARM_DESC(no_checksums, "Disable checksums computing and verification");
MODULE_LICENSE("GPL");
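With the 0644 permissions, no_checksums can be set at load time (e.g.
"modprobe sctp no_checksums=1") and is also exposed writable at runtime
under /sys/module/sctp/parameters/no_checksums.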