License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- Files that already had some variant of a license header in them were
included as candidates (even if <5 lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, in some cases with
confirmation by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _RDS_IB_H
#define _RDS_IB_H

#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include "rds.h"
#include "rdma_transport.h"

#define RDS_IB_MAX_SGE			8
#define RDS_IB_RECV_SGE			2

#define RDS_IB_DEFAULT_RECV_WR		1024
#define RDS_IB_DEFAULT_SEND_WR		256
#define RDS_IB_DEFAULT_FR_WR		512

#define RDS_IB_DEFAULT_RETRY_COUNT	1

#define RDS_IB_SUPPORTED_PROTOCOLS	0x00000003	/* minor versions supported */

#define RDS_IB_RECYCLE_BATCH_COUNT	32

#define RDS_IB_WC_MAX			32

extern struct rw_semaphore rds_ib_devices_lock;
extern struct list_head rds_ib_devices;

/*
 * IB posts RDS_FRAG_SIZE fragments of pages to the receive queues to
 * try and minimize the amount of memory tied up in both the device and
 * socket receive queues.
 */
struct rds_page_frag {
	struct list_head	f_item;
	struct list_head	f_cache_entry;
	struct scatterlist	f_sg;
};

struct rds_ib_incoming {
	struct list_head	ii_frags;
	struct list_head	ii_cache_entry;
	struct rds_incoming	ii_inc;
};

struct rds_ib_cache_head {
	struct list_head	*first;
	unsigned long		count;
};

struct rds_ib_refill_cache {
	struct rds_ib_cache_head __percpu *percpu;
	struct list_head	*xfer;
	struct list_head	*ready;
};

/* This is the common structure for the IB private data exchange in setting up
 * an RDS connection. The exchange is different for IPv4 and IPv6 connections.
 * The reason is that the address size is different and the addresses
 * exchanged are in the beginning of the structure. Hence it is not possible
 * to maintain interoperability if the same structure is used.
 */
struct rds_ib_conn_priv_cmn {
	u8	ricpc_protocol_major;
	u8	ricpc_protocol_minor;
	__be16	ricpc_protocol_minor_mask;	/* bitmask */
	u8	ricpc_dp_toss;
	u8	ripc_reserved1;
	__be16	ripc_reserved2;
	__be64	ricpc_ack_seq;
	__be32	ricpc_credit;	/* non-zero enables flow ctl */
};

struct rds_ib_connect_private {
	/* Add new fields at the end, and don't permute existing fields. */
	__be32				dp_saddr;
	__be32				dp_daddr;
	struct rds_ib_conn_priv_cmn	dp_cmn;
};

struct rds6_ib_connect_private {
	/* Add new fields at the end, and don't permute existing fields. */
	struct in6_addr			dp_saddr;
	struct in6_addr			dp_daddr;
	struct rds_ib_conn_priv_cmn	dp_cmn;
};

#define dp_protocol_major	dp_cmn.ricpc_protocol_major
#define dp_protocol_minor	dp_cmn.ricpc_protocol_minor
#define dp_protocol_minor_mask	dp_cmn.ricpc_protocol_minor_mask
#define dp_ack_seq		dp_cmn.ricpc_ack_seq
#define dp_credit		dp_cmn.ricpc_credit

union rds_ib_conn_priv {
	struct rds_ib_connect_private	ricp_v4;
	struct rds6_ib_connect_private	ricp_v6;
};

struct rds_ib_send_work {
	void			*s_op;
	union {
		struct ib_send_wr	s_wr;
		struct ib_rdma_wr	s_rdma_wr;
		struct ib_atomic_wr	s_atomic_wr;
	};
	struct ib_sge		s_sge[RDS_IB_MAX_SGE];
	unsigned long		s_queued;
};

struct rds_ib_recv_work {
	struct rds_ib_incoming	*r_ibinc;
	struct rds_page_frag	*r_frag;
	struct ib_recv_wr	r_wr;
	struct ib_sge		r_sge[2];
};

struct rds_ib_work_ring {
	u32		w_nr;
	u32		w_alloc_ptr;
	u32		w_alloc_ctr;
	u32		w_free_ptr;
	atomic_t	w_free_ctr;
};

/* Rings are posted with all the allocations they'll need to queue the
 * incoming message to the receiving socket so this can't fail.
 * All fragments start with a header, so we can make sure we're not receiving
 * garbage, and we can tell a small 8 byte fragment from an ACK frame.
 */
struct rds_ib_ack_state {
	u64		ack_next;
	u64		ack_recv;
	unsigned int	ack_required:1;
	unsigned int	ack_next_valid:1;
	unsigned int	ack_recv_valid:1;
};

struct rds_ib_device;

struct rds_ib_connection {

	struct list_head	ib_node;
	struct rds_ib_device	*rds_ibdev;
	struct rds_connection	*conn;

	/* alphabet soup, IBTA style */
	struct rdma_cm_id	*i_cm_id;
	struct ib_pd		*i_pd;
	struct ib_cq		*i_send_cq;
	struct ib_cq		*i_recv_cq;
	struct ib_wc		i_send_wc[RDS_IB_WC_MAX];
	struct ib_wc		i_recv_wc[RDS_IB_WC_MAX];

	/* To control the number of wrs from fastreg */
	atomic_t		i_fastreg_wrs;
	atomic_t		i_fastreg_inuse_count;

	/* interrupt handling */
	struct tasklet_struct	i_send_tasklet;
	struct tasklet_struct	i_recv_tasklet;

	/* tx */
	struct rds_ib_work_ring	i_send_ring;
	struct rm_data_op	*i_data_op;
	struct rds_header	**i_send_hdrs;
	dma_addr_t		*i_send_hdrs_dma;
	struct rds_ib_send_work *i_sends;
	atomic_t		i_signaled_sends;

	/* rx */
	struct mutex		i_recv_mutex;
	struct rds_ib_work_ring	i_recv_ring;
	struct rds_ib_incoming	*i_ibinc;
	u32			i_recv_data_rem;
	struct rds_header	**i_recv_hdrs;
	dma_addr_t		*i_recv_hdrs_dma;
	struct rds_ib_recv_work *i_recvs;
	u64			i_ack_recv;	/* last ACK received */
	struct rds_ib_refill_cache i_cache_incs;
	struct rds_ib_refill_cache i_cache_frags;
	atomic_t		i_cache_allocs;

	/* sending acks */
	unsigned long		i_ack_flags;
#ifdef KERNEL_HAS_ATOMIC64
	atomic64_t		i_ack_next;	/* next ACK to send */
#else
	spinlock_t		i_ack_lock;	/* protect i_ack_next */
	u64			i_ack_next;	/* next ACK to send */
#endif
	struct rds_header	*i_ack;
	struct ib_send_wr	i_ack_wr;
	struct ib_sge		i_ack_sge;
	dma_addr_t		i_ack_dma;
	unsigned long		i_ack_queued;

	/* Flow control related information
	 *
	 * Our algorithm uses a pair of variables that we need to access
	 * atomically - one for the send credits, and one for the posted
	 * recv credits we need to transfer to the remote.
	 * Rather than protect them using a slow spinlock, we put both into
	 * a single atomic_t and update it using cmpxchg.
	 */
	atomic_t		i_credits;

	/* Protocol version specific information */
	unsigned int		i_flowctl:1;	/* enable/disable flow ctl */

	/* Batched completions */
	unsigned int		i_unsignaled_wrs;

	/* Endpoint role in connection */
	bool			i_active_side;
	atomic_t		i_cq_quiesce;

	/* Send/Recv vectors */
	int			i_scq_vector;
	int			i_rcq_vector;
	u8			i_sl;
};

/* This assumes that atomic_t is at least 32 bits */
#define IB_GET_SEND_CREDITS(v)	((v) & 0xffff)
#define IB_GET_POST_CREDITS(v)	((v) >> 16)
#define IB_SET_SEND_CREDITS(v)	((v) & 0xffff)
#define IB_SET_POST_CREDITS(v)	((v) << 16)

struct rds_ib_ipaddr {
	struct list_head	list;
	__be32			ipaddr;
	struct rcu_head		rcu;
};

enum {
	RDS_IB_MR_8K_POOL,
	RDS_IB_MR_1M_POOL,
};

struct rds_ib_device {
	struct list_head	list;
	struct list_head	ipaddr_list;
	struct list_head	conn_list;
	struct ib_device	*dev;
	struct ib_pd		*pd;
	u8			odp_capable:1;

	unsigned int		max_mrs;
	struct rds_ib_mr_pool	*mr_1m_pool;
	struct rds_ib_mr_pool	*mr_8k_pool;
	unsigned int		max_8k_mrs;
	unsigned int		max_1m_mrs;
	int			max_sge;
	unsigned int		max_wrs;
	unsigned int		max_initiator_depth;
	unsigned int		max_responder_resources;
	spinlock_t		spinlock;	/* protect the above */
	refcount_t		refcount;
	struct work_struct	free_work;
	int			*vector_load;
};

#define rdsibdev_to_node(rdsibdev) ibdev_to_node(rdsibdev->dev)

/* bits for i_ack_flags */
#define IB_ACK_IN_FLIGHT	0
#define IB_ACK_REQUESTED	1

/* Magic WR_ID for ACKs */
#define RDS_IB_ACK_WR_ID	(~(u64) 0)

struct rds_ib_statistics {
	uint64_t	s_ib_connect_raced;
	uint64_t	s_ib_listen_closed_stale;
	uint64_t	s_ib_evt_handler_call;
	uint64_t	s_ib_tasklet_call;
	uint64_t	s_ib_tx_cq_event;
	uint64_t	s_ib_tx_ring_full;
	uint64_t	s_ib_tx_throttle;
	uint64_t	s_ib_tx_sg_mapping_failure;
	uint64_t	s_ib_tx_stalled;
	uint64_t	s_ib_tx_credit_updates;
	uint64_t	s_ib_rx_cq_event;
	uint64_t	s_ib_rx_ring_empty;
	uint64_t	s_ib_rx_refill_from_cq;
	uint64_t	s_ib_rx_refill_from_thread;
	uint64_t	s_ib_rx_alloc_limit;
	uint64_t	s_ib_rx_total_frags;
	uint64_t	s_ib_rx_total_incs;
	uint64_t	s_ib_rx_credit_updates;
	uint64_t	s_ib_ack_sent;
	uint64_t	s_ib_ack_send_failure;
	uint64_t	s_ib_ack_send_delayed;
	uint64_t	s_ib_ack_send_piggybacked;
	uint64_t	s_ib_ack_received;
	uint64_t	s_ib_rdma_mr_8k_alloc;
	uint64_t	s_ib_rdma_mr_8k_free;
	uint64_t	s_ib_rdma_mr_8k_used;
	uint64_t	s_ib_rdma_mr_8k_pool_flush;
	uint64_t	s_ib_rdma_mr_8k_pool_wait;
	uint64_t	s_ib_rdma_mr_8k_pool_depleted;
	uint64_t	s_ib_rdma_mr_1m_alloc;
	uint64_t	s_ib_rdma_mr_1m_free;
	uint64_t	s_ib_rdma_mr_1m_used;
	uint64_t	s_ib_rdma_mr_1m_pool_flush;
	uint64_t	s_ib_rdma_mr_1m_pool_wait;
	uint64_t	s_ib_rdma_mr_1m_pool_depleted;
	uint64_t	s_ib_rdma_mr_8k_reused;
	uint64_t	s_ib_rdma_mr_1m_reused;
	uint64_t	s_ib_atomic_cswp;
	uint64_t	s_ib_atomic_fadd;
	uint64_t	s_ib_recv_added_to_cache;
	uint64_t	s_ib_recv_removed_from_cache;
};

extern struct workqueue_struct *rds_ib_wq;

/*
 * Fake ib_dma_sync_sg_for_{cpu,device} as long as ib_verbs.h
 * doesn't define it.
 */
static inline void rds_ib_dma_sync_sg_for_cpu(struct ib_device *dev,
					      struct scatterlist *sglist,
					      unsigned int sg_dma_len,
					      int direction)
{
	struct scatterlist *sg;
	unsigned int i;

	for_each_sg(sglist, sg, sg_dma_len, i) {
		ib_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
					   sg_dma_len(sg), direction);
	}
}
#define ib_dma_sync_sg_for_cpu	rds_ib_dma_sync_sg_for_cpu

static inline void rds_ib_dma_sync_sg_for_device(struct ib_device *dev,
						 struct scatterlist *sglist,
						 unsigned int sg_dma_len,
						 int direction)
{
	struct scatterlist *sg;
	unsigned int i;

	for_each_sg(sglist, sg, sg_dma_len, i) {
		ib_dma_sync_single_for_device(dev, sg_dma_address(sg),
					      sg_dma_len(sg), direction);
	}
}
#define ib_dma_sync_sg_for_device	rds_ib_dma_sync_sg_for_device


/* ib.c */
extern struct rds_transport rds_ib_transport;
struct rds_ib_device *rds_ib_get_client_data(struct ib_device *device);
void rds_ib_dev_put(struct rds_ib_device *rds_ibdev);
extern struct ib_client rds_ib_client;

extern unsigned int rds_ib_retry_count;

extern spinlock_t ib_nodev_conns_lock;
extern struct list_head ib_nodev_conns;

/* ib_cm.c */
int rds_ib_conn_alloc(struct rds_connection *conn, gfp_t gfp);
void rds_ib_conn_free(void *arg);
int rds_ib_conn_path_connect(struct rds_conn_path *cp);
void rds_ib_conn_path_shutdown(struct rds_conn_path *cp);
void rds_ib_state_change(struct sock *sk);
int rds_ib_listen_init(void);
void rds_ib_listen_stop(void);
__printf(2, 3)
void __rds_ib_conn_error(struct rds_connection *conn, const char *, ...);
int rds_ib_cm_handle_connect(struct rdma_cm_id *cm_id,
			     struct rdma_cm_event *event, bool isv6);
int rds_ib_cm_initiate_connect(struct rdma_cm_id *cm_id, bool isv6);
void rds_ib_cm_connect_complete(struct rds_connection *conn,
				struct rdma_cm_event *event);

#define rds_ib_conn_error(conn, fmt...) \
	__rds_ib_conn_error(conn, KERN_WARNING "RDS/IB: " fmt)

/* ib_rdma.c */
int rds_ib_update_ipaddr(struct rds_ib_device *rds_ibdev,
			 struct in6_addr *ipaddr);
void rds_ib_add_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn);
void rds_ib_remove_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn);
void rds_ib_destroy_nodev_conns(void);
void rds_ib_mr_cqe_handler(struct rds_ib_connection *ic, struct ib_wc *wc);

/* ib_recv.c */
int rds_ib_recv_init(void);
void rds_ib_recv_exit(void);
int rds_ib_recv_path(struct rds_conn_path *conn);
int rds_ib_recv_alloc_caches(struct rds_ib_connection *ic, gfp_t gfp);
void rds_ib_recv_free_caches(struct rds_ib_connection *ic);
void rds_ib_recv_refill(struct rds_connection *conn, int prefill, gfp_t gfp);
void rds_ib_inc_free(struct rds_incoming *inc);
int rds_ib_inc_copy_to_user(struct rds_incoming *inc, struct iov_iter *to);
void rds_ib_recv_cqe_handler(struct rds_ib_connection *ic, struct ib_wc *wc,
			     struct rds_ib_ack_state *state);
void rds_ib_recv_tasklet_fn(unsigned long data);
void rds_ib_recv_init_ring(struct rds_ib_connection *ic);
void rds_ib_recv_clear_ring(struct rds_ib_connection *ic);
void rds_ib_recv_init_ack(struct rds_ib_connection *ic);
void rds_ib_attempt_ack(struct rds_ib_connection *ic);
void rds_ib_ack_send_complete(struct rds_ib_connection *ic);
u64 rds_ib_piggyb_ack(struct rds_ib_connection *ic);
void rds_ib_set_ack(struct rds_ib_connection *ic, u64 seq, int ack_required);

/* ib_ring.c */
void rds_ib_ring_init(struct rds_ib_work_ring *ring, u32 nr);
void rds_ib_ring_resize(struct rds_ib_work_ring *ring, u32 nr);
u32 rds_ib_ring_alloc(struct rds_ib_work_ring *ring, u32 val, u32 *pos);
void rds_ib_ring_free(struct rds_ib_work_ring *ring, u32 val);
void rds_ib_ring_unalloc(struct rds_ib_work_ring *ring, u32 val);
int rds_ib_ring_empty(struct rds_ib_work_ring *ring);
int rds_ib_ring_low(struct rds_ib_work_ring *ring);
u32 rds_ib_ring_oldest(struct rds_ib_work_ring *ring);
u32 rds_ib_ring_completed(struct rds_ib_work_ring *ring, u32 wr_id, u32 oldest);
extern wait_queue_head_t rds_ib_ring_empty_wait;

/* ib_send.c */
void rds_ib_xmit_path_complete(struct rds_conn_path *cp);
int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
		unsigned int hdr_off, unsigned int sg, unsigned int off);
void rds_ib_send_cqe_handler(struct rds_ib_connection *ic, struct ib_wc *wc);
void rds_ib_send_init_ring(struct rds_ib_connection *ic);
void rds_ib_send_clear_ring(struct rds_ib_connection *ic);
int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op);
void rds_ib_send_add_credits(struct rds_connection *conn, unsigned int credits);
void rds_ib_advertise_credits(struct rds_connection *conn, unsigned int posted);
int rds_ib_send_grab_credits(struct rds_ib_connection *ic, u32 wanted,
			     u32 *adv_credits, int need_posted, int max_posted);
int rds_ib_xmit_atomic(struct rds_connection *conn, struct rm_atomic_op *op);

/* ib_stats.c */
DECLARE_PER_CPU_SHARED_ALIGNED(struct rds_ib_statistics, rds_ib_stats);
#define rds_ib_stats_inc(member) rds_stats_inc_which(rds_ib_stats, member)
#define rds_ib_stats_add(member, count) \
		rds_stats_add_which(rds_ib_stats, member, count)
unsigned int rds_ib_stats_info_copy(struct rds_info_iterator *iter,
				    unsigned int avail);

/* ib_sysctl.c */
int rds_ib_sysctl_init(void);
void rds_ib_sysctl_exit(void);
extern unsigned long rds_ib_sysctl_max_send_wr;
extern unsigned long rds_ib_sysctl_max_recv_wr;
extern unsigned long rds_ib_sysctl_max_unsig_wrs;
extern unsigned long rds_ib_sysctl_max_unsig_bytes;
extern unsigned long rds_ib_sysctl_max_recv_allocation;
extern unsigned int rds_ib_sysctl_flow_control;

#endif