/*
 * Copyright (c) 2004, 2005 Topspin Communications.  All rights reserved.
 * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
 * Copyright (c) 2005, 2006 Cisco Systems, Inc. All rights reserved.
 * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
 * Copyright (c) 2004 Voltaire, Inc. All rights reserved.
 *
 * This software is available to you under a choice of one of two
 * licenses.  You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <linux/gfp.h>
#include <linux/hardirq.h>
#include <linux/sched.h>

#include <asm/io.h>

#include <rdma/ib_pack.h>

#include "mthca_dev.h"
#include "mthca_cmd.h"
#include "mthca_memfree.h"

enum {
	MTHCA_MAX_DIRECT_CQ_SIZE = 4 * PAGE_SIZE
};

enum {
	MTHCA_CQ_ENTRY_SIZE = 0x20
};

enum {
	MTHCA_ATOMIC_BYTE_LEN = 8
};

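/*
 * Note: each CQE is MTHCA_CQ_ENTRY_SIZE = 0x20 = 32 bytes.  CQ buffers
 * up to MTHCA_MAX_DIRECT_CQ_SIZE are allocated as one contiguous
 * ("direct") chunk; larger CQs use a page list (see get_cqe_from_buf()
 * below for how an entry index is resolved in either layout).
 */
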
/*
 * Must be packed because start is 64 bits but only aligned to 32 bits.
 */
struct mthca_cq_context {
	__be32 flags;
	__be64 start;
	__be32 logsize_usrpage;
	__be32 error_eqn;	/* Tavor only */
	__be32 comp_eqn;
	__be32 pd;
	__be32 lkey;
	__be32 last_notified_index;
	__be32 solicit_producer_index;
	__be32 consumer_index;
	__be32 producer_index;
	__be32 cqn;
	__be32 ci_db;		/* Arbel only */
	__be32 state_db;	/* Arbel only */
	u32    reserved;
} __attribute__((packed));

#define MTHCA_CQ_STATUS_OK          ( 0 << 28)
#define MTHCA_CQ_STATUS_OVERFLOW    ( 9 << 28)
#define MTHCA_CQ_STATUS_WRITE_FAIL  (10 << 28)
#define MTHCA_CQ_FLAG_TR            ( 1 << 18)
#define MTHCA_CQ_FLAG_OI            ( 1 << 17)
#define MTHCA_CQ_STATE_DISARMED     ( 0 <<  8)
#define MTHCA_CQ_STATE_ARMED        ( 1 <<  8)
#define MTHCA_CQ_STATE_ARMED_SOL    ( 4 <<  8)
#define MTHCA_EQ_STATE_FIRED        (10 <<  8)

enum {
	MTHCA_ERROR_CQE_OPCODE_MASK = 0xfe
};

enum {
	SYNDROME_LOCAL_LENGTH_ERR	 = 0x01,
	SYNDROME_LOCAL_QP_OP_ERR	 = 0x02,
	SYNDROME_LOCAL_EEC_OP_ERR	 = 0x03,
	SYNDROME_LOCAL_PROT_ERR		 = 0x04,
	SYNDROME_WR_FLUSH_ERR		 = 0x05,
	SYNDROME_MW_BIND_ERR		 = 0x06,
	SYNDROME_BAD_RESP_ERR		 = 0x10,
	SYNDROME_LOCAL_ACCESS_ERR	 = 0x11,
	SYNDROME_REMOTE_INVAL_REQ_ERR	 = 0x12,
	SYNDROME_REMOTE_ACCESS_ERR	 = 0x13,
	SYNDROME_REMOTE_OP_ERR		 = 0x14,
	SYNDROME_RETRY_EXC_ERR		 = 0x15,
	SYNDROME_RNR_RETRY_EXC_ERR	 = 0x16,
	SYNDROME_LOCAL_RDD_VIOL_ERR	 = 0x20,
	SYNDROME_REMOTE_INVAL_RD_REQ_ERR = 0x21,
	SYNDROME_REMOTE_ABORTED_ERR	 = 0x22,
	SYNDROME_INVAL_EECN_ERR		 = 0x23,
	SYNDROME_INVAL_EEC_STATE_ERR	 = 0x24
};

struct mthca_cqe {
	__be32 my_qpn;
	__be32 my_ee;
	__be32 rqpn;
	u8     sl_ipok;
	u8     g_mlpath;
	__be16 rlid;
	__be32 imm_etype_pkey_eec;
	__be32 byte_cnt;
	__be32 wqe;
	u8     opcode;
	u8     is_send;
	u8     reserved;
	u8     owner;
};

struct mthca_err_cqe {
	__be32 my_qpn;
	u32    reserved1[3];
	u8     syndrome;
	u8     vendor_err;
	__be16 db_cnt;
	u32    reserved2;
	__be32 wqe;
	u8     opcode;
	u8     reserved3[2];
	u8     owner;
};

#define MTHCA_CQ_ENTRY_OWNER_SW      (0 << 7)
#define MTHCA_CQ_ENTRY_OWNER_HW      (1 << 7)

#define MTHCA_TAVOR_CQ_DB_INC_CI       (1 << 24)
#define MTHCA_TAVOR_CQ_DB_REQ_NOT      (2 << 24)
#define MTHCA_TAVOR_CQ_DB_REQ_NOT_SOL  (3 << 24)
#define MTHCA_TAVOR_CQ_DB_SET_CI       (4 << 24)
#define MTHCA_TAVOR_CQ_DB_REQ_NOT_MULT (5 << 24)

#define MTHCA_ARBEL_CQ_DB_REQ_NOT_SOL  (1 << 24)
#define MTHCA_ARBEL_CQ_DB_REQ_NOT      (2 << 24)
#define MTHCA_ARBEL_CQ_DB_REQ_NOT_MULT (3 << 24)

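/*
 * In the doorbell words above, bits 31:24 encode the doorbell command
 * and the low 24 bits carry the CQN; the second 32-bit word of the
 * 64-bit doorbell carries a command parameter (see the mthca_write64()
 * callers below).
 */
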
static inline struct mthca_cqe *get_cqe_from_buf(struct mthca_cq_buf *buf,
						 int entry)
{
	if (buf->is_direct)
		return buf->queue.direct.buf + (entry * MTHCA_CQ_ENTRY_SIZE);
	else
		return buf->queue.page_list[entry * MTHCA_CQ_ENTRY_SIZE / PAGE_SIZE].buf
			+ (entry * MTHCA_CQ_ENTRY_SIZE) % PAGE_SIZE;
}

static inline struct mthca_cqe *get_cqe(struct mthca_cq *cq, int entry)
{
	return get_cqe_from_buf(&cq->buf, entry);
}

static inline struct mthca_cqe *cqe_sw(struct mthca_cqe *cqe)
{
	return MTHCA_CQ_ENTRY_OWNER_HW & cqe->owner ? NULL : cqe;
}

static inline struct mthca_cqe *next_cqe_sw(struct mthca_cq *cq)
{
	return cqe_sw(get_cqe(cq, cq->cons_index & cq->ibcq.cqe));
}

static inline void set_cqe_hw(struct mthca_cqe *cqe)
{
	cqe->owner = MTHCA_CQ_ENTRY_OWNER_HW;
}

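/*
 * CQE ownership handshake: the top bit of cqe->owner says who may touch
 * an entry.  Hardware hands an entry to software by clearing
 * MTHCA_CQ_ENTRY_OWNER_HW when it writes a completion; cqe_sw() returns
 * NULL while the bit is still set.  Once software has consumed an entry
 * it gives it back with set_cqe_hw().  cq->ibcq.cqe is the CQ size minus
 * one, so "index & cq->ibcq.cqe" wraps the power-of-two ring.
 */
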
static void dump_cqe(struct mthca_dev *dev, void *cqe_ptr)
{
	__be32 *cqe = cqe_ptr;

	(void) cqe;	/* avoid warning if mthca_dbg compiled away... */
	mthca_dbg(dev, "CQE contents %08x %08x %08x %08x %08x %08x %08x %08x\n",
		  be32_to_cpu(cqe[0]), be32_to_cpu(cqe[1]), be32_to_cpu(cqe[2]),
		  be32_to_cpu(cqe[3]), be32_to_cpu(cqe[4]), be32_to_cpu(cqe[5]),
		  be32_to_cpu(cqe[6]), be32_to_cpu(cqe[7]));
}

/*
 * incr is ignored in native Arbel (mem-free) mode, so cq->cons_index
 * should be correct before calling update_cons_index().
 */
static inline void update_cons_index(struct mthca_dev *dev, struct mthca_cq *cq,
				     int incr)
{
	if (mthca_is_memfree(dev)) {
		*cq->set_ci_db = cpu_to_be32(cq->cons_index);
		wmb();
	} else {
		mthca_write64(MTHCA_TAVOR_CQ_DB_INC_CI | cq->cqn, incr - 1,
			      dev->kar + MTHCA_CQ_DOORBELL,
			      MTHCA_GET_DOORBELL_LOCK(&dev->doorbell_lock));
		/*
		 * Make sure doorbells don't leak out of CQ spinlock
		 * and reach the HCA out of order:
		 */
		mmiowb();
	}
}

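/*
 * Two doorbell flavors: mem-free (Arbel) HCAs read the consumer index
 * from a doorbell record in host memory, so updating *cq->set_ci_db is
 * enough; Tavor needs an explicit MMIO doorbell, with the number of
 * entries consumed (incr - 1) as the parameter.
 */
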
void mthca_cq_completion(struct mthca_dev *dev, u32 cqn)
{
	struct mthca_cq *cq;

	cq = mthca_array_get(&dev->cq_table.cq, cqn & (dev->limits.num_cqs - 1));

	if (!cq) {
		mthca_warn(dev, "Completion event for bogus CQ %08x\n", cqn);
		return;
	}

	++cq->arm_sn;

	cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
}

void mthca_cq_event(struct mthca_dev *dev, u32 cqn,
		    enum ib_event_type event_type)
{
	struct mthca_cq *cq;
	struct ib_event event;

	spin_lock(&dev->cq_table.lock);

	cq = mthca_array_get(&dev->cq_table.cq, cqn & (dev->limits.num_cqs - 1));
	if (cq)
		++cq->refcount;

	spin_unlock(&dev->cq_table.lock);

	if (!cq) {
		mthca_warn(dev, "Async event for bogus CQ %08x\n", cqn);
		return;
	}

	event.device      = &dev->ib_dev;
	event.event       = event_type;
	event.element.cq  = &cq->ibcq;
	if (cq->ibcq.event_handler)
		cq->ibcq.event_handler(&event, cq->ibcq.cq_context);

	spin_lock(&dev->cq_table.lock);
	if (!--cq->refcount)
		wake_up(&cq->wait);
	spin_unlock(&dev->cq_table.lock);
}

static inline int is_recv_cqe(struct mthca_cqe *cqe)
{
	if ((cqe->opcode & MTHCA_ERROR_CQE_OPCODE_MASK) ==
	    MTHCA_ERROR_CQE_OPCODE_MASK)
		return !(cqe->opcode & 0x01);
	else
		return !(cqe->is_send & 0x80);
}

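/*
 * For error CQEs (opcode matches MTHCA_ERROR_CQE_OPCODE_MASK), bit 0 of
 * the opcode distinguishes send from receive completions; for successful
 * CQEs the top bit of is_send does.  mthca_poll_one() applies the same
 * test when computing is_send.
 */
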
void mthca_cq_clean(struct mthca_dev *dev, struct mthca_cq *cq, u32 qpn,
		    struct mthca_srq *srq)
{
	struct mthca_cqe *cqe;
	u32 prod_index;
	int i, nfreed = 0;

	spin_lock_irq(&cq->lock);

	/*
	 * First we need to find the current producer index, so we
	 * know where to start cleaning from.  It doesn't matter if HW
	 * adds new entries after this loop -- the QP we're worried
	 * about is already in RESET, so the new entries won't come
	 * from our QP and therefore don't need to be checked.
	 */
	for (prod_index = cq->cons_index;
	     cqe_sw(get_cqe(cq, prod_index & cq->ibcq.cqe));
	     ++prod_index)
		if (prod_index == cq->cons_index + cq->ibcq.cqe)
			break;

	if (0)
		mthca_dbg(dev, "Cleaning QPN %06x from CQN %06x; ci %d, pi %d\n",
			  qpn, cq->cqn, cq->cons_index, prod_index);

	/*
	 * Now sweep backwards through the CQ, removing CQ entries
	 * that match our QP by copying older entries on top of them.
	 */
	while ((int) --prod_index - (int) cq->cons_index >= 0) {
		cqe = get_cqe(cq, prod_index & cq->ibcq.cqe);
		if (cqe->my_qpn == cpu_to_be32(qpn)) {
			if (srq && is_recv_cqe(cqe))
				mthca_free_srq_wqe(srq, be32_to_cpu(cqe->wqe));
			++nfreed;
		} else if (nfreed)
			memcpy(get_cqe(cq, (prod_index + nfreed) & cq->ibcq.cqe),
			       cqe, MTHCA_CQ_ENTRY_SIZE);
	}

	if (nfreed) {
		for (i = 0; i < nfreed; ++i)
			set_cqe_hw(get_cqe(cq, (cq->cons_index + i) & cq->ibcq.cqe));
		wmb();
		cq->cons_index += nfreed;
		update_cons_index(dev, cq, nfreed);
	}

	spin_unlock_irq(&cq->lock);
}

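/*
 * Illustrative example of the sweep above: with entries (oldest first)
 * A, Q1, B, Q2 and QP Q being cleaned, the backwards pass shifts the
 * survivors toward the producer end (B over Q2, then A over B's old
 * slot), leaving two free slots at the consumer end; those are handed
 * back to hardware and the consumer index advanced by nfreed = 2.
 */
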
void mthca_cq_resize_copy_cqes(struct mthca_cq *cq)
{
	int i;

	/*
	 * In Tavor mode, the hardware keeps the consumer and producer
	 * indices mod the CQ size.  Since we might be making the CQ
	 * bigger, we need to deal with the case where the producer
	 * index wrapped around before the CQ was resized.
	 */
	if (!mthca_is_memfree(to_mdev(cq->ibcq.device)) &&
	    cq->ibcq.cqe < cq->resize_buf->cqe) {
		cq->cons_index &= cq->ibcq.cqe;
		if (cqe_sw(get_cqe(cq, cq->ibcq.cqe)))
			cq->cons_index -= cq->ibcq.cqe + 1;
	}

	for (i = cq->cons_index; cqe_sw(get_cqe(cq, i & cq->ibcq.cqe)); ++i)
		memcpy(get_cqe_from_buf(&cq->resize_buf->buf,
					i & cq->resize_buf->cqe),
		       get_cqe(cq, i & cq->ibcq.cqe), MTHCA_CQ_ENTRY_SIZE);
}

int mthca_alloc_cq_buf(struct mthca_dev *dev, struct mthca_cq_buf *buf, int nent)
{
	int ret;
	int i;

	ret = mthca_buf_alloc(dev, nent * MTHCA_CQ_ENTRY_SIZE,
			      MTHCA_MAX_DIRECT_CQ_SIZE,
			      &buf->queue, &buf->is_direct,
			      &dev->driver_pd, 1, &buf->mr);
	if (ret)
		return ret;

	for (i = 0; i < nent; ++i)
		set_cqe_hw(get_cqe_from_buf(buf, i));

	return 0;
}

void mthca_free_cq_buf(struct mthca_dev *dev, struct mthca_cq_buf *buf, int cqe)
{
	mthca_buf_free(dev, (cqe + 1) * MTHCA_CQ_ENTRY_SIZE, &buf->queue,
		       buf->is_direct, &buf->mr);
}

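/*
 * Note the asymmetry: mthca_alloc_cq_buf() takes the number of entries,
 * while mthca_free_cq_buf() takes ibcq.cqe, i.e. nent - 1, hence the
 * (cqe + 1) scaling above.
 */
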
static void handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
			     struct mthca_qp *qp, int wqe_index, int is_send,
			     struct mthca_err_cqe *cqe,
			     struct ib_wc *entry, int *free_cqe)
{
	int dbd;
	__be32 new_wqe;

	if (cqe->syndrome == SYNDROME_LOCAL_QP_OP_ERR) {
		mthca_dbg(dev, "local QP operation err "
			  "(QPN %06x, WQE @ %08x, CQN %06x, index %d)\n",
			  be32_to_cpu(cqe->my_qpn), be32_to_cpu(cqe->wqe),
			  cq->cqn, cq->cons_index);
		dump_cqe(dev, cqe);
	}

	/*
	 * For completions in error, only work request ID, status, vendor error
	 * (and freed resource count for RD) have to be set.
	 */
	switch (cqe->syndrome) {
	case SYNDROME_LOCAL_LENGTH_ERR:
		entry->status = IB_WC_LOC_LEN_ERR;
		break;
	case SYNDROME_LOCAL_QP_OP_ERR:
		entry->status = IB_WC_LOC_QP_OP_ERR;
		break;
	case SYNDROME_LOCAL_EEC_OP_ERR:
		entry->status = IB_WC_LOC_EEC_OP_ERR;
		break;
	case SYNDROME_LOCAL_PROT_ERR:
		entry->status = IB_WC_LOC_PROT_ERR;
		break;
	case SYNDROME_WR_FLUSH_ERR:
		entry->status = IB_WC_WR_FLUSH_ERR;
		break;
	case SYNDROME_MW_BIND_ERR:
		entry->status = IB_WC_MW_BIND_ERR;
		break;
	case SYNDROME_BAD_RESP_ERR:
		entry->status = IB_WC_BAD_RESP_ERR;
		break;
	case SYNDROME_LOCAL_ACCESS_ERR:
		entry->status = IB_WC_LOC_ACCESS_ERR;
		break;
	case SYNDROME_REMOTE_INVAL_REQ_ERR:
		entry->status = IB_WC_REM_INV_REQ_ERR;
		break;
	case SYNDROME_REMOTE_ACCESS_ERR:
		entry->status = IB_WC_REM_ACCESS_ERR;
		break;
	case SYNDROME_REMOTE_OP_ERR:
		entry->status = IB_WC_REM_OP_ERR;
		break;
	case SYNDROME_RETRY_EXC_ERR:
		entry->status = IB_WC_RETRY_EXC_ERR;
		break;
	case SYNDROME_RNR_RETRY_EXC_ERR:
		entry->status = IB_WC_RNR_RETRY_EXC_ERR;
		break;
	case SYNDROME_LOCAL_RDD_VIOL_ERR:
		entry->status = IB_WC_LOC_RDD_VIOL_ERR;
		break;
	case SYNDROME_REMOTE_INVAL_RD_REQ_ERR:
		entry->status = IB_WC_REM_INV_RD_REQ_ERR;
		break;
	case SYNDROME_REMOTE_ABORTED_ERR:
		entry->status = IB_WC_REM_ABORT_ERR;
		break;
	case SYNDROME_INVAL_EECN_ERR:
		entry->status = IB_WC_INV_EECN_ERR;
		break;
	case SYNDROME_INVAL_EEC_STATE_ERR:
		entry->status = IB_WC_INV_EEC_STATE_ERR;
		break;
	default:
		entry->status = IB_WC_GENERAL_ERR;
		break;
	}

	entry->vendor_err = cqe->vendor_err;

	/*
	 * Mem-free HCAs always generate one CQE per WQE, even in the
	 * error case, so we don't have to check the doorbell count, etc.
	 */
	if (mthca_is_memfree(dev))
		return;

	mthca_free_err_wqe(dev, qp, is_send, wqe_index, &dbd, &new_wqe);

	/*
	 * If we're at the end of the WQE chain, or we've used up our
	 * doorbell count, free the CQE.  Otherwise just update it for
	 * the next poll operation.
	 */
	if (!(new_wqe & cpu_to_be32(0x3f)) || (!cqe->db_cnt && dbd))
		return;

	be16_add_cpu(&cqe->db_cnt, -dbd);
	cqe->wqe      = new_wqe;
	cqe->syndrome = SYNDROME_WR_FLUSH_ERR;

	*free_cqe = 0;
}
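
/*
 * On Tavor, one error CQE can stand for several WQEs (tracked by the
 * doorbell count), so the entry above may be rewritten in place as a
 * WR_FLUSH_ERR completion and reported again on subsequent polls until
 * the WQE chain is drained; *free_cqe = 0 keeps it from being recycled.
 */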

static inline int mthca_poll_one(struct mthca_dev *dev,
				 struct mthca_cq *cq,
				 struct mthca_qp **cur_qp,
				 int *freed,
				 struct ib_wc *entry)
{
	struct mthca_wq *wq;
	struct mthca_cqe *cqe;
	int wqe_index;
	int is_error;
	int is_send;
	int free_cqe = 1;
	int err = 0;
	u16 checksum;

	cqe = next_cqe_sw(cq);
	if (!cqe)
		return -EAGAIN;

	/*
	 * Make sure we read CQ entry contents after we've checked the
	 * ownership bit.
	 */
	rmb();

	if (0) {
		mthca_dbg(dev, "%x/%d: CQE -> QPN %06x, WQE @ %08x\n",
			  cq->cqn, cq->cons_index, be32_to_cpu(cqe->my_qpn),
			  be32_to_cpu(cqe->wqe));
		dump_cqe(dev, cqe);
	}

	is_error = (cqe->opcode & MTHCA_ERROR_CQE_OPCODE_MASK) ==
		MTHCA_ERROR_CQE_OPCODE_MASK;
	is_send  = is_error ? cqe->opcode & 0x01 : cqe->is_send & 0x80;

	if (!*cur_qp || be32_to_cpu(cqe->my_qpn) != (*cur_qp)->qpn) {
		/*
		 * We do not have to take the QP table lock here,
		 * because CQs will be locked while QPs are removed
		 * from the table.
		 */
		*cur_qp = mthca_array_get(&dev->qp_table.qp,
					  be32_to_cpu(cqe->my_qpn) &
					  (dev->limits.num_qps - 1));
		if (!*cur_qp) {
			mthca_warn(dev, "CQ entry for unknown QP %06x\n",
				   be32_to_cpu(cqe->my_qpn) & 0xffffff);
			err = -EINVAL;
			goto out;
		}
	}

	entry->qp = &(*cur_qp)->ibqp;

	if (is_send) {
		wq = &(*cur_qp)->sq;
		wqe_index = ((be32_to_cpu(cqe->wqe) - (*cur_qp)->send_wqe_offset)
			     >> wq->wqe_shift);
		entry->wr_id = (*cur_qp)->wrid[wqe_index +
					       (*cur_qp)->rq.max];
	} else if ((*cur_qp)->ibqp.srq) {
		struct mthca_srq *srq = to_msrq((*cur_qp)->ibqp.srq);
		u32 wqe = be32_to_cpu(cqe->wqe);
		wq = NULL;
		wqe_index = wqe >> srq->wqe_shift;
		entry->wr_id = srq->wrid[wqe_index];
		mthca_free_srq_wqe(srq, wqe);
	} else {
		s32 wqe;
		wq = &(*cur_qp)->rq;
		wqe = be32_to_cpu(cqe->wqe);
		wqe_index = wqe >> wq->wqe_shift;
		/*
		 * WQE addr == base - 1 might be reported in receive completion
		 * with error instead of (rq size - 1) by Sinai FW 1.0.800 and
		 * Arbel FW 5.1.400.  This bug should be fixed in later FW revs.
		 */
		if (unlikely(wqe_index < 0))
			wqe_index = wq->max - 1;
		entry->wr_id = (*cur_qp)->wrid[wqe_index];
	}

	if (wq) {
		if (wq->last_comp < wqe_index)
			wq->tail += wqe_index - wq->last_comp;
		else
			wq->tail += wqe_index + wq->max - wq->last_comp;

		wq->last_comp = wqe_index;
	}

	if (is_error) {
		handle_error_cqe(dev, cq, *cur_qp, wqe_index, is_send,
				 (struct mthca_err_cqe *) cqe,
				 entry, &free_cqe);
		goto out;
	}

	if (is_send) {
		entry->wc_flags = 0;
		switch (cqe->opcode) {
		case MTHCA_OPCODE_RDMA_WRITE:
			entry->opcode    = IB_WC_RDMA_WRITE;
			break;
		case MTHCA_OPCODE_RDMA_WRITE_IMM:
			entry->opcode    = IB_WC_RDMA_WRITE;
			entry->wc_flags |= IB_WC_WITH_IMM;
			break;
		case MTHCA_OPCODE_SEND:
			entry->opcode    = IB_WC_SEND;
			break;
		case MTHCA_OPCODE_SEND_IMM:
			entry->opcode    = IB_WC_SEND;
			entry->wc_flags |= IB_WC_WITH_IMM;
			break;
		case MTHCA_OPCODE_RDMA_READ:
			entry->opcode    = IB_WC_RDMA_READ;
			entry->byte_len  = be32_to_cpu(cqe->byte_cnt);
			break;
		case MTHCA_OPCODE_ATOMIC_CS:
			entry->opcode    = IB_WC_COMP_SWAP;
			entry->byte_len  = MTHCA_ATOMIC_BYTE_LEN;
			break;
		case MTHCA_OPCODE_ATOMIC_FA:
			entry->opcode    = IB_WC_FETCH_ADD;
			entry->byte_len  = MTHCA_ATOMIC_BYTE_LEN;
			break;
		case MTHCA_OPCODE_BIND_MW:
			entry->opcode    = IB_WC_BIND_MW;
			break;
		default:
			entry->opcode    = MTHCA_OPCODE_INVALID;
			break;
		}
	} else {
		entry->byte_len = be32_to_cpu(cqe->byte_cnt);
		switch (cqe->opcode & 0x1f) {
		case IB_OPCODE_SEND_LAST_WITH_IMMEDIATE:
		case IB_OPCODE_SEND_ONLY_WITH_IMMEDIATE:
			entry->wc_flags = IB_WC_WITH_IMM;
			entry->ex.imm_data = cqe->imm_etype_pkey_eec;
			entry->opcode = IB_WC_RECV;
			break;
		case IB_OPCODE_RDMA_WRITE_LAST_WITH_IMMEDIATE:
		case IB_OPCODE_RDMA_WRITE_ONLY_WITH_IMMEDIATE:
			entry->wc_flags = IB_WC_WITH_IMM;
			entry->ex.imm_data = cqe->imm_etype_pkey_eec;
			entry->opcode = IB_WC_RECV_RDMA_WITH_IMM;
			break;
		default:
			entry->wc_flags = 0;
			entry->opcode = IB_WC_RECV;
			break;
		}
		entry->slid = be16_to_cpu(cqe->rlid);
		entry->sl = cqe->sl_ipok >> 4;
		entry->src_qp = be32_to_cpu(cqe->rqpn) & 0xffffff;
		entry->dlid_path_bits = cqe->g_mlpath & 0x7f;
		entry->pkey_index = be32_to_cpu(cqe->imm_etype_pkey_eec) >> 16;
		entry->wc_flags |= cqe->g_mlpath & 0x80 ? IB_WC_GRH : 0;
		checksum = (be32_to_cpu(cqe->rqpn) >> 24) |
			   ((be32_to_cpu(cqe->my_ee) >> 16) & 0xff00);
		entry->wc_flags |= (cqe->sl_ipok & 1 && checksum == 0xffff) ?
				   IB_WC_IP_CSUM_OK : 0;
	}

	entry->status = IB_WC_SUCCESS;

out:
	if (likely(free_cqe)) {
		set_cqe_hw(cqe);
		++(*freed);
		++cq->cons_index;
	}

	return err;
}
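
/*
 * mthca_poll_one() returns -EAGAIN when no software-owned CQE is
 * available; mthca_poll_cq() below treats that as "ring empty" rather
 * than an error when computing its return value.
 */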

int mthca_poll_cq(struct ib_cq *ibcq, int num_entries,
		  struct ib_wc *entry)
{
	struct mthca_dev *dev = to_mdev(ibcq->device);
	struct mthca_cq *cq = to_mcq(ibcq);
	struct mthca_qp *qp = NULL;
	unsigned long flags;
	int err = 0;
	int freed = 0;
	int npolled;

	spin_lock_irqsave(&cq->lock, flags);

	npolled = 0;
repoll:
	while (npolled < num_entries) {
		err = mthca_poll_one(dev, cq, &qp,
				     &freed, entry + npolled);
		if (err)
			break;
		++npolled;
	}

	if (freed) {
		wmb();
		update_cons_index(dev, cq, freed);
	}

	/*
	 * If a CQ resize is in progress and we discovered that the
	 * old buffer is empty, then peek in the new buffer, and if
	 * it's not empty, switch to the new buffer and continue
	 * polling there.
	 */
	if (unlikely(err == -EAGAIN && cq->resize_buf &&
		     cq->resize_buf->state == CQ_RESIZE_READY)) {
		/*
		 * In Tavor mode, the hardware keeps the producer
		 * index modulo the CQ size.  Since we might be making
		 * the CQ bigger, we need to mask our consumer index
		 * using the size of the old CQ buffer before looking
		 * in the new CQ buffer.
		 */
		if (!mthca_is_memfree(dev))
			cq->cons_index &= cq->ibcq.cqe;

		if (cqe_sw(get_cqe_from_buf(&cq->resize_buf->buf,
					    cq->cons_index & cq->resize_buf->cqe))) {
			struct mthca_cq_buf tbuf;
			int tcqe;

			tbuf         = cq->buf;
			tcqe         = cq->ibcq.cqe;
			cq->buf      = cq->resize_buf->buf;
			cq->ibcq.cqe = cq->resize_buf->cqe;

			cq->resize_buf->buf   = tbuf;
			cq->resize_buf->cqe   = tcqe;
			cq->resize_buf->state = CQ_RESIZE_SWAPPED;

			goto repoll;
		}
	}

	spin_unlock_irqrestore(&cq->lock, flags);

	return err == 0 || err == -EAGAIN ? npolled : err;
}

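/*
 * Completion events are only generated for CQEs added after a CQ is
 * armed, so a consumer that polls and then arms can miss a completion
 * that lands in between.  A typical consumer therefore repolls after
 * arming -- an illustrative sketch (handle() is a placeholder):
 *
 *	do {
 *		while (ib_poll_cq(cq, 1, &wc) > 0)
 *			handle(&wc);
 *	} while (ib_req_notify_cq(cq, IB_CQ_NEXT_COMP |
 *				      IB_CQ_REPORT_MISSED_EVENTS) > 0);
 */
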
int mthca_tavor_arm_cq(struct ib_cq *cq, enum ib_cq_notify_flags flags)
{
	u32 dbhi = ((flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED ?
		    MTHCA_TAVOR_CQ_DB_REQ_NOT_SOL :
		    MTHCA_TAVOR_CQ_DB_REQ_NOT) |
		to_mcq(cq)->cqn;

	mthca_write64(dbhi, 0xffffffff, to_mdev(cq->device)->kar + MTHCA_CQ_DOORBELL,
		      MTHCA_GET_DOORBELL_LOCK(&to_mdev(cq->device)->doorbell_lock));

	return 0;
}

int mthca_arbel_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify_flags flags)
{
	struct mthca_cq *cq = to_mcq(ibcq);
	__be32 db_rec[2];
	u32 dbhi;
	u32 sn = cq->arm_sn & 3;

	db_rec[0] = cpu_to_be32(cq->cons_index);
	db_rec[1] = cpu_to_be32((cq->cqn << 8) | (2 << 5) | (sn << 3) |
				((flags & IB_CQ_SOLICITED_MASK) ==
				 IB_CQ_SOLICITED ? 1 : 2));

	mthca_write_db_rec(db_rec, cq->arm_db);

	/*
	 * Make sure that the doorbell record in host memory is
	 * written before ringing the doorbell via PCI MMIO.
	 */
	wmb();

	dbhi = (sn << 28) |
		((flags & IB_CQ_SOLICITED_MASK) == IB_CQ_SOLICITED ?
		 MTHCA_ARBEL_CQ_DB_REQ_NOT_SOL :
		 MTHCA_ARBEL_CQ_DB_REQ_NOT) | cq->cqn;

	mthca_write64(dbhi, cq->cons_index,
		      to_mdev(ibcq->device)->kar + MTHCA_CQ_DOORBELL,
		      MTHCA_GET_DOORBELL_LOCK(&to_mdev(ibcq->device)->doorbell_lock));

	return 0;
}

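/*
 * Arming encodes solicited-only vs. any-completion in the doorbell.
 * Arbel additionally carries the low two bits of cq->arm_sn, which
 * mthca_cq_completion() increments on every completion event --
 * presumably letting the HCA distinguish a fresh re-arm from a stale
 * one.  sn is written both to the doorbell record and the MMIO word.
 */
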
int mthca_init_cq(struct mthca_dev *dev, int nent,
		  struct mthca_ucontext *ctx, u32 pdn,
		  struct mthca_cq *cq)
{
	struct mthca_mailbox *mailbox;
	struct mthca_cq_context *cq_context;
	int err = -ENOMEM;

	cq->ibcq.cqe  = nent - 1;
	cq->is_kernel = !ctx;

	cq->cqn = mthca_alloc(&dev->cq_table.alloc);
	if (cq->cqn == -1)
		return -ENOMEM;

	if (mthca_is_memfree(dev)) {
		err = mthca_table_get(dev, dev->cq_table.table, cq->cqn);
		if (err)
			goto err_out;

		if (cq->is_kernel) {
			cq->arm_sn = 1;

			err = -ENOMEM;

			cq->set_ci_db_index = mthca_alloc_db(dev, MTHCA_DB_TYPE_CQ_SET_CI,
							     cq->cqn, &cq->set_ci_db);
			if (cq->set_ci_db_index < 0)
				goto err_out_icm;

			cq->arm_db_index = mthca_alloc_db(dev, MTHCA_DB_TYPE_CQ_ARM,
							  cq->cqn, &cq->arm_db);
			if (cq->arm_db_index < 0)
				goto err_out_ci;
		}
	}

	mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
	if (IS_ERR(mailbox))
		goto err_out_arm;

	cq_context = mailbox->buf;

	if (cq->is_kernel) {
		err = mthca_alloc_cq_buf(dev, &cq->buf, nent);
		if (err)
			goto err_out_mailbox;
	}

	spin_lock_init(&cq->lock);
	cq->refcount = 1;
	init_waitqueue_head(&cq->wait);
	mutex_init(&cq->mutex);

	memset(cq_context, 0, sizeof *cq_context);
	cq_context->flags           = cpu_to_be32(MTHCA_CQ_STATUS_OK      |
						  MTHCA_CQ_STATE_DISARMED |
						  MTHCA_CQ_FLAG_TR);
	cq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24);
	if (ctx)
		cq_context->logsize_usrpage |= cpu_to_be32(ctx->uar.index);
	else
		cq_context->logsize_usrpage |= cpu_to_be32(dev->driver_uar.index);
	cq_context->error_eqn       = cpu_to_be32(dev->eq_table.eq[MTHCA_EQ_ASYNC].eqn);
	cq_context->comp_eqn        = cpu_to_be32(dev->eq_table.eq[MTHCA_EQ_COMP].eqn);
	cq_context->pd              = cpu_to_be32(pdn);
	cq_context->lkey            = cpu_to_be32(cq->buf.mr.ibmr.lkey);
	cq_context->cqn             = cpu_to_be32(cq->cqn);

	if (mthca_is_memfree(dev)) {
		cq_context->ci_db    = cpu_to_be32(cq->set_ci_db_index);
		cq_context->state_db = cpu_to_be32(cq->arm_db_index);
	}

	err = mthca_SW2HW_CQ(dev, mailbox, cq->cqn);
	if (err) {
		mthca_warn(dev, "SW2HW_CQ failed (%d)\n", err);
		goto err_out_free_mr;
	}

	spin_lock_irq(&dev->cq_table.lock);
	if (mthca_array_set(&dev->cq_table.cq,
			    cq->cqn & (dev->limits.num_cqs - 1),
			    cq)) {
		spin_unlock_irq(&dev->cq_table.lock);
		goto err_out_free_mr;
	}
	spin_unlock_irq(&dev->cq_table.lock);

	cq->cons_index = 0;

	mthca_free_mailbox(dev, mailbox);

	return 0;

err_out_free_mr:
	if (cq->is_kernel)
		mthca_free_cq_buf(dev, &cq->buf, cq->ibcq.cqe);

err_out_mailbox:
	mthca_free_mailbox(dev, mailbox);

err_out_arm:
	if (cq->is_kernel && mthca_is_memfree(dev))
		mthca_free_db(dev, MTHCA_DB_TYPE_CQ_ARM, cq->arm_db_index);

err_out_ci:
	if (cq->is_kernel && mthca_is_memfree(dev))
		mthca_free_db(dev, MTHCA_DB_TYPE_CQ_SET_CI, cq->set_ci_db_index);

err_out_icm:
	mthca_table_put(dev, dev->cq_table.table, cq->cqn);

err_out:
	mthca_free(&dev->cq_table.alloc, cq->cqn);

	return err;
}

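/*
 * Note: nent must be a power of two -- the log size is taken as
 * ffs(nent) - 1 above, and nent - 1 is used as the ring mask everywhere
 * else in this file.
 */
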
static inline int get_cq_refcount(struct mthca_dev *dev, struct mthca_cq *cq)
{
	int c;

	spin_lock_irq(&dev->cq_table.lock);
	c = cq->refcount;
	spin_unlock_irq(&dev->cq_table.lock);

	return c;
}

void mthca_free_cq(struct mthca_dev *dev,
		   struct mthca_cq *cq)
{
	struct mthca_mailbox *mailbox;
	int err;

	mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
	if (IS_ERR(mailbox)) {
		mthca_warn(dev, "No memory for mailbox to free CQ.\n");
		return;
	}

	err = mthca_HW2SW_CQ(dev, mailbox, cq->cqn);
	if (err)
		mthca_warn(dev, "HW2SW_CQ failed (%d)\n", err);

	if (0) {
		__be32 *ctx = mailbox->buf;
		int j;

		printk(KERN_ERR "context for CQN %x (cons index %x, next sw %d)\n",
		       cq->cqn, cq->cons_index,
		       cq->is_kernel ? !!next_cqe_sw(cq) : 0);
		for (j = 0; j < 16; ++j)
			printk(KERN_ERR "[%2x] %08x\n", j * 4, be32_to_cpu(ctx[j]));
	}

	spin_lock_irq(&dev->cq_table.lock);
	mthca_array_clear(&dev->cq_table.cq,
			  cq->cqn & (dev->limits.num_cqs - 1));
	--cq->refcount;
	spin_unlock_irq(&dev->cq_table.lock);

	if (dev->mthca_flags & MTHCA_FLAG_MSI_X)
		synchronize_irq(dev->eq_table.eq[MTHCA_EQ_COMP].msi_x_vector);
	else
		synchronize_irq(dev->pdev->irq);

	wait_event(cq->wait, !get_cq_refcount(dev, cq));

	if (cq->is_kernel) {
		mthca_free_cq_buf(dev, &cq->buf, cq->ibcq.cqe);
		if (mthca_is_memfree(dev)) {
			mthca_free_db(dev, MTHCA_DB_TYPE_CQ_ARM,    cq->arm_db_index);
			mthca_free_db(dev, MTHCA_DB_TYPE_CQ_SET_CI, cq->set_ci_db_index);
		}
	}

	mthca_table_put(dev, dev->cq_table.table, cq->cqn);
	mthca_free(&dev->cq_table.alloc, cq->cqn);
	mthca_free_mailbox(dev, mailbox);
}

int mthca_init_cq_table(struct mthca_dev *dev)
{
	int err;

	spin_lock_init(&dev->cq_table.lock);

	err = mthca_alloc_init(&dev->cq_table.alloc,
			       dev->limits.num_cqs,
			       (1 << 24) - 1,
			       dev->limits.reserved_cqs);
	if (err)
		return err;

	err = mthca_array_init(&dev->cq_table.cq,
			       dev->limits.num_cqs);
	if (err)
		mthca_alloc_cleanup(&dev->cq_table.alloc);

	return err;
}

void mthca_cleanup_cq_table(struct mthca_dev *dev)
{
	mthca_array_cleanup(&dev->cq_table.cq, dev->limits.num_cqs);
	mthca_alloc_cleanup(&dev->cq_table.alloc);
}