Auto-update from upstream

Len Brown 2005-08-29 17:02:17 -04:00
commit 27a639a92d
238 changed files with 13074 additions and 6198 deletions

View File

@ -0,0 +1,288 @@
-------
PHY Abstraction Layer
(Updated 2005-07-21)
Purpose
Most network devices consist of a set of registers which provide an interface
to a MAC layer, which communicates with the physical connection through a
PHY. The PHY concerns itself with negotiating link parameters with the link
partner on the other side of the network connection (typically, an ethernet
cable), and provides a register interface to allow drivers to determine what
settings were chosen, and to configure what settings are allowed.
While these devices are distinct from the network devices, and conform to a
standard layout for the registers, it has been common practice to integrate
the PHY management code with the network driver. This has resulted in large
amounts of redundant code. Also, on embedded systems with multiple (and
sometimes quite different) ethernet controllers connected to the same
management bus, it is difficult to ensure safe use of the bus.
Since the PHYs are devices, and the management busses through which they are
accessed are, in fact, busses, the PHY Abstraction Layer treats them as such.
In doing so, it has these goals:
1) Increase code-reuse
2) Increase overall code-maintainability
3) Speed development time for new network drivers, and for new systems
Basically, this layer is meant to provide an interface to PHY devices which
allows network driver writers to write as little code as possible, while
still providing a full feature set.
The MDIO bus
Most network devices are connected to a PHY by means of a management bus.
Different devices use different busses (though some share common interfaces).
In order to take advantage of the PAL, each bus interface needs to be
registered as a distinct device.
1) read and write functions must be implemented. Their prototypes are:
int write(struct mii_bus *bus, int mii_id, int regnum, u16 value);
int read(struct mii_bus *bus, int mii_id, int regnum);
mii_id is the address on the bus for the PHY, and regnum is the register
number. These functions are guaranteed not to be called from interrupt
time, so it is safe for them to block, waiting for an interrupt to signal
the operation is complete.
2) A reset function is necessary. This is used to return the bus to an
initialized state.
3) A probe function is needed. This function should set up anything the bus
driver needs, set up the mii_bus structure, and register with the PAL using
mdiobus_register. Similarly, there's a remove function to undo all of
that (use mdiobus_unregister).
4) Like any driver, the device_driver structure must be configured, and
init/exit functions are used to register the driver.
5) The bus must also be declared somewhere as a device, and registered.
As an example of how one driver implemented an mdio bus driver, see
drivers/net/gianfar_mii.c and arch/ppc/syslib/mpc85xx_devices.c.
Connecting to a PHY
Sometime during startup, the network driver needs to establish a connection
between the PHY device and the network device. By this time, the PHY's bus
and its drivers need to have been loaded, so the PHY is ready for the
connection.
At this point, there are several ways to connect to the PHY:
1) The PAL handles everything, and only calls the network driver when
the link state changes, so it can react.
2) The PAL handles everything except interrupts (usually because the
controller has the interrupt registers).
3) The PAL handles everything, but checks in with the driver every second,
allowing the network driver to react first to any changes before the PAL
does.
4) The PAL serves only as a library of functions, with the network device
manually calling functions to update status and configure the PHY.
Letting the PHY Abstraction Layer do Everything
If you choose option 1 (the hope is that every driver can, but the PAL
remains useful to drivers that can't), connecting to the PHY is simple:
First, you need a function to react to changes in the link state. This
function has the following prototype:
static void adjust_link(struct net_device *dev);
Next, you need to know the device name of the PHY connected to this device.
The name will look something like "phy0:0", where the first number is the
bus id, and the second is the PHY's address on that bus.
Now, to connect, just call this function:
phydev = phy_connect(dev, phy_name, &adjust_link, flags);
phydev is a pointer to the phy_device structure which represents the PHY. If
phy_connect is successful, it will return the pointer. dev, here, is the
pointer to your net_device. Once done, this function will have started the
PHY's software state machine, and registered for the PHY's interrupt, if it
has one. The phydev structure will be populated with information about the
current state, though the PHY will not yet be truly operational at this
point.
flags is a u32 which can optionally contain phy-specific flags.
This is useful if the system has put hardware restrictions on
the PHY/controller, of which the PHY needs to be aware.
Now just make sure that phydev->supported and phydev->advertising have any
values pruned from them which don't make sense for your controller (a 10/100
controller may be connected to a gigabit capable PHY, so you would need to
mask off SUPPORTED_1000baseT*). See include/linux/ethtool.h for definitions
for these bitfields. Note that you should not SET any bits, or the PHY may
get put into an unsupported state.
Lastly, once the controller is ready to handle network traffic, you call
phy_start(phydev). This tells the PAL that you are ready, and configures the
PHY to connect to the network. If you want to handle your own interrupts,
just set phydev->irq to PHY_IGNORE_INTERRUPT before you call phy_start.
Similarly, if you don't want to use interrupts, set phydev->irq to PHY_POLL.
When you want to disconnect from the network (even if just briefly), you call
phy_stop(phydev).
Keeping Close Tabs on the PAL
It is possible that the PAL's built-in state machine needs a little help to
keep your network device and the PHY properly in sync. If so, you can
register a helper function when connecting to the PHY, which will be called
every second before the state machine reacts to any changes. To do this, you
need to manually call phy_attach() and phy_prepare_link(), and then call
phy_start_machine() with the second argument set to point to your special
handler.
Currently there are no examples of how to use this functionality, and testing
of it has been limited because the author does not have any drivers which use
it (they all use option 1). So caveat emptor.
Doing it all yourself
There's a remote chance that the PAL's built-in state machine cannot track
the complex interactions between the PHY and your network device. If this is
so, you can simply call phy_attach(), and not call phy_start_machine or
phy_prepare_link(). This will mean that phydev->state is entirely yours to
handle (phy_start and phy_stop toggle between some of the states, so you
might need to avoid them).
An effort has been made to make sure that useful functionality can be
accessed without the state-machine running, and most of these functions are
descended from functions which did not interact with a complex state-machine.
However, again, no effort has been made so far to test running without the
state machine, so proceed with caution.
Here is a brief rundown of the functions:
int phy_read(struct phy_device *phydev, u16 regnum);
int phy_write(struct phy_device *phydev, u16 regnum, u16 val);
Simple read/write primitives. They invoke the bus's read/write function
pointers.
void phy_print_status(struct phy_device *phydev);
A convenience function to print out the PHY status neatly.
int phy_clear_interrupt(struct phy_device *phydev);
int phy_config_interrupt(struct phy_device *phydev, u32 interrupts);
Clear the PHY's interrupt, and configure which ones are allowed,
respectively. Currently only supports all on, or all off.
int phy_enable_interrupts(struct phy_device *phydev);
int phy_disable_interrupts(struct phy_device *phydev);
Functions which enable/disable PHY interrupts, clearing them
before and after, respectively.
int phy_start_interrupts(struct phy_device *phydev);
int phy_stop_interrupts(struct phy_device *phydev);
Requests the IRQ for the PHY interrupts, then enables them for
start, or disables then frees them for stop.
struct phy_device * phy_attach(struct net_device *dev, const char *phy_id,
u32 flags);
Attaches a network device to a particular PHY, binding the PHY to a generic
driver if none was found during bus initialization. Passes in
any phy-specific flags as needed.
int phy_start_aneg(struct phy_device *phydev);
Using variables inside the phydev structure, either configures advertising
and resets autonegotiation, or disables autonegotiation, and configures
forced settings.
static inline int phy_read_status(struct phy_device *phydev);
Fills the phydev structure with up-to-date information about the current
settings in the PHY.
void phy_sanitize_settings(struct phy_device *phydev);
Resolves differences between currently desired settings, and
supported settings for the given PHY device. Does not make
the changes in the hardware, though.
int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd);
int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd);
Ethtool convenience functions.
int phy_mii_ioctl(struct phy_device *phydev,
struct mii_ioctl_data *mii_data, int cmd);
The MII ioctl. Note that this function will completely screw up the state
machine if you write registers like BMCR, BMSR, ADVERTISE, etc. Best to
use this only to write registers which are not standard, and don't set off
a renegotiation.
PHY Device Drivers
With the PHY Abstraction Layer, adding support for new PHYs is
quite easy. In some cases, no work is required at all! However,
many PHYs require a little hand-holding to get up-and-running.
Generic PHY driver
If the desired PHY doesn't have any errata, quirks, or special
features you want to support, then it may be best to not add
support, and let the PHY Abstraction Layer's Generic PHY Driver
do all of the work.
Writing a PHY driver
If you do need to write a PHY driver, the first thing to do is
make sure it can be matched with an appropriate PHY device.
This is done during bus initialization by reading the device's
UID (stored in registers 2 and 3), ANDing it with each driver's
phy_id_mask, and comparing the result with that driver's phy_id
field. The driver also needs a name. Here's an example:
static struct phy_driver dm9161_driver = {
	.phy_id		= 0x0181b880,
	.name		= "Davicom DM9161E",
	.phy_id_mask	= 0x0ffffff0,
	...
};
Next, you need to specify what features (speed, duplex, autoneg,
etc) your PHY device and driver support. Most PHYs support
PHY_BASIC_FEATURES, but you can look in include/mii.h for other
features.
Each driver consists of a number of function pointers:
config_init: configures PHY into a sane state after a reset.
For instance, a Davicom PHY requires descrambling disabled.
probe: Does any setup needed by the driver
suspend/resume: power management
config_aneg: Changes the speed/duplex/negotiation settings
read_status: Reads the current speed/duplex/negotiation settings
ack_interrupt: Clear a pending interrupt
config_intr: Enable or disable interrupts
remove: Does any driver take-down
Of these, only config_aneg and read_status are required to be
assigned by the driver code. The rest are optional. Also, it is
preferred to use the generic phy driver's versions of these two
functions if at all possible: genphy_read_status and
genphy_config_aneg. If this is not possible, it is likely that
you only need to perform some actions before and after invoking
these functions, and so your functions will wrap the generic
ones.
Feel free to look at the Marvell, Cicada, and Davicom drivers in
drivers/net/phy/ for examples (the lxt and qsemi drivers have
not been tested as of this writing).

View File

@ -1,8 +1,8 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 13
EXTRAVERSION =-rc7
NAME=Woozy Numbat
EXTRAVERSION =
NAME=Affluent Albatross
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"

View File

@ -566,13 +566,12 @@ handle_signal(int sig, struct k_sigaction *ka, siginfo_t *info,
if (ka->sa.sa_flags & SA_RESETHAND)
ka->sa.sa_handler = SIG_DFL;
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
static inline void

View File

@ -635,10 +635,6 @@ config PM
and the Battery Powered Linux mini-HOWTO, available from
<http://www.tldp.org/docs.html#howto>.
Note that, even if you say N here, Linux on the x86 architecture
will issue the hlt instruction if nothing is to be done, thereby
sending the processor to sleep and saving power.
config APM
tristate "Advanced Power Management Emulation"
depends on PM
@ -650,12 +646,6 @@ config APM
battery status information, and user-space programs will receive
notification of APM "events" (e.g. battery status change).
If you select "Y" here, you can disable actual use of the APM
BIOS by passing the "apm=off" option to the kernel at boot time.
Note that the APM support is almost completely disabled for
machines with more than one CPU.
In order to use APM, you will need supporting software. For location
and more information, read <file:Documentation/pm.txt> and the
Battery Powered Linux mini-HOWTO, available from
@ -665,39 +655,12 @@ config APM
manpage ("man 8 hdparm") for that), and it doesn't turn off
VESA-compliant "green" monitors.
This driver does not support the TI 4000M TravelMate and the ACER
486/DX4/75 because they don't have compliant BIOSes. Many "green"
desktop machines also don't have compliant BIOSes, and this driver
may cause those machines to panic during the boot phase.
Generally, if you don't have a battery in your machine, there isn't
much point in using this driver and you should say N. If you get
random kernel OOPSes or reboots that don't seem to be related to
anything, try disabling/enabling this option (or disabling/enabling
APM in your BIOS).
Some other things you should try when experiencing seemingly random,
"weird" problems:
1) make sure that you have enough swap space and that it is
enabled.
2) pass the "no-hlt" option to the kernel
3) switch on floating point emulation in the kernel and pass
the "no387" option to the kernel
4) pass the "floppy=nodma" option to the kernel
5) pass the "mem=4M" option to the kernel (thereby disabling
all but the first 4 MB of RAM)
6) make sure that the CPU is not over clocked.
7) read the sig11 FAQ at <http://www.bitwizard.nl/sig11/>
8) disable the cache from your BIOS settings
9) install a fan for the video card or exchange video RAM
10) install a better fan for the CPU
11) exchange RAM chips
12) exchange the motherboard.
To compile this driver as a module, choose M here: the
module will be called apm.
endmenu
source "net/Kconfig"
@ -752,6 +715,8 @@ source "drivers/hwmon/Kconfig"
source "drivers/misc/Kconfig"
source "drivers/mfd/Kconfig"
source "drivers/media/Kconfig"
source "drivers/video/Kconfig"

View File

@ -1,6 +1,9 @@
config ICST525
bool
config ARM_GIC
bool
config ICST307
bool

View File

@ -4,6 +4,7 @@
obj-y += rtctime.o
obj-$(CONFIG_ARM_AMBA) += amba.o
obj-$(CONFIG_ARM_GIC) += gic.o
obj-$(CONFIG_ICST525) += icst525.o
obj-$(CONFIG_ICST307) += icst307.o
obj-$(CONFIG_SA1111) += sa1111.o

arch/arm/common/gic.c (new file, 166 lines)
View File

@ -0,0 +1,166 @@
/*
* linux/arch/arm/common/gic.c
*
* Copyright (C) 2002 ARM Limited, All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Interrupt architecture for the GIC:
*
* o There is one Interrupt Distributor, which receives interrupts
* from system devices and sends them to the Interrupt Controllers.
*
* o There is one CPU Interface per CPU, which sends interrupts sent
* by the Distributor, and interrupts generated locally, to the
* associated CPU.
*
* Note that IRQs 0-31 are special - they are local to each CPU.
* As such, the enable set/clear, pending set/clear and active bit
* registers are banked per-cpu for these sources.
*/
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/smp.h>
#include <asm/irq.h>
#include <asm/io.h>
#include <asm/mach/irq.h>
#include <asm/hardware/gic.h>
static void __iomem *gic_dist_base;
static void __iomem *gic_cpu_base;
/*
* Routines to acknowledge, disable and enable interrupts
*
* Linux assumes that when we're done with an interrupt we need to
* unmask it, in the same way we need to unmask an interrupt when
* we first enable it.
*
* The GIC has a separate notion of "end of interrupt" to re-enable
* an interrupt after handling, in order to support hardware
* prioritisation.
*
* We can make the GIC behave in the way that Linux expects by making
* our "acknowledge" routine disable the interrupt, then mark it as
* complete.
*/
static void gic_ack_irq(unsigned int irq)
{
u32 mask = 1 << (irq % 32);
writel(mask, gic_dist_base + GIC_DIST_ENABLE_CLEAR + (irq / 32) * 4);
writel(irq, gic_cpu_base + GIC_CPU_EOI);
}
static void gic_mask_irq(unsigned int irq)
{
u32 mask = 1 << (irq % 32);
writel(mask, gic_dist_base + GIC_DIST_ENABLE_CLEAR + (irq / 32) * 4);
}
static void gic_unmask_irq(unsigned int irq)
{
u32 mask = 1 << (irq % 32);
writel(mask, gic_dist_base + GIC_DIST_ENABLE_SET + (irq / 32) * 4);
}
static void gic_set_cpu(struct irqdesc *desc, unsigned int irq, unsigned int cpu)
{
void __iomem *reg = gic_dist_base + GIC_DIST_TARGET + (irq & ~3);
unsigned int shift = (irq % 4) * 8;
u32 val;
val = readl(reg) & ~(0xff << shift);
val |= 1 << (cpu + shift);
writel(val, reg);
}
static struct irqchip gic_chip = {
.ack = gic_ack_irq,
.mask = gic_mask_irq,
.unmask = gic_unmask_irq,
#ifdef CONFIG_SMP
.set_cpu = gic_set_cpu,
#endif
};
void __init gic_dist_init(void __iomem *base)
{
unsigned int max_irq, i;
u32 cpumask = 1 << smp_processor_id();
cpumask |= cpumask << 8;
cpumask |= cpumask << 16;
gic_dist_base = base;
writel(0, base + GIC_DIST_CTRL);
/*
* Find out how many interrupts are supported.
*/
max_irq = readl(base + GIC_DIST_CTR) & 0x1f;
max_irq = (max_irq + 1) * 32;
/*
* The GIC only supports up to 1020 interrupt sources.
* Limit this to either the architected maximum, or the
* platform maximum.
*/
if (max_irq > max(1020, NR_IRQS))
max_irq = max(1020, NR_IRQS);
/*
* Set all global interrupts to be level triggered, active low.
*/
for (i = 32; i < max_irq; i += 16)
writel(0, base + GIC_DIST_CONFIG + i * 4 / 16);
/*
* Set all global interrupts to this CPU only.
*/
for (i = 32; i < max_irq; i += 4)
writel(cpumask, base + GIC_DIST_TARGET + i * 4 / 4);
/*
* Set priority on all interrupts.
*/
for (i = 0; i < max_irq; i += 4)
writel(0xa0a0a0a0, base + GIC_DIST_PRI + i * 4 / 4);
/*
* Disable all interrupts.
*/
for (i = 0; i < max_irq; i += 32)
writel(0xffffffff, base + GIC_DIST_ENABLE_CLEAR + i * 4 / 32);
/*
* Setup the Linux IRQ subsystem.
*/
for (i = 29; i < max_irq; i++) {
set_irq_chip(i, &gic_chip);
set_irq_handler(i, do_level_IRQ);
set_irq_flags(i, IRQF_VALID | IRQF_PROBE);
}
writel(1, base + GIC_DIST_CTRL);
}
void __cpuinit gic_cpu_init(void __iomem *base)
{
gic_cpu_base = base;
writel(0xf0, base + GIC_CPU_PRIMASK);
writel(1, base + GIC_CPU_CTRL);
}
#ifdef CONFIG_SMP
void gic_raise_softirq(cpumask_t cpumask, unsigned int irq)
{
unsigned long map = *cpus_addr(cpumask);
writel(map << 16 | irq, gic_dist_base + GIC_DIST_SOFTINT);
}
#endif

View File

@ -658,11 +658,12 @@ handle_signal(unsigned long sig, struct k_sigaction *ka,
/*
* Block the signal if we were unsuccessful.
*/
if (ret != 0 || !(ka->sa.sa_flags & SA_NODEFER)) {
if (ret != 0) {
spin_lock_irq(&tsk->sighand->siglock);
sigorsets(&tsk->blocked, &tsk->blocked,
&ka->sa.sa_mask);
sigaddset(&tsk->blocked, sig);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&tsk->blocked, sig);
recalc_sigpending();
spin_unlock_irq(&tsk->sighand->siglock);
}

View File

@ -36,7 +36,7 @@ static struct flash_platform_data coyote_flash_data = {
static struct resource coyote_flash_resource = {
.start = COYOTE_FLASH_BASE,
.end = COYOTE_FLASH_BASE + COYOTE_FLASH_SIZE,
.end = COYOTE_FLASH_BASE + COYOTE_FLASH_SIZE - 1,
.flags = IORESOURCE_MEM,
};

View File

@ -114,7 +114,7 @@ static struct flash_platform_data gtwx5715_flash_data = {
static struct resource gtwx5715_flash_resource = {
.start = GTWX5715_FLASH_BASE,
.end = GTWX5715_FLASH_BASE + GTWX5715_FLASH_SIZE,
.end = GTWX5715_FLASH_BASE + GTWX5715_FLASH_SIZE - 1,
.flags = IORESOURCE_MEM,
};

View File

@ -36,7 +36,7 @@ static struct flash_platform_data ixdp425_flash_data = {
static struct resource ixdp425_flash_resource = {
.start = IXDP425_FLASH_BASE,
.end = IXDP425_FLASH_BASE + IXDP425_FLASH_SIZE,
.end = IXDP425_FLASH_BASE + IXDP425_FLASH_SIZE - 1,
.flags = IORESOURCE_MEM,
};

View File

@ -35,6 +35,7 @@
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
#include <asm/arch/assabet.h>
#include <asm/arch/mcp.h>
#include "generic.h"
@ -198,6 +199,11 @@ static struct irda_platform_data assabet_irda_data = {
.set_speed = assabet_irda_set_speed,
};
static struct mcp_plat_data assabet_mcp_data = {
.mccr0 = MCCR0_ADM,
.sclk_rate = 11981000,
};
static void __init assabet_init(void)
{
/*
@ -246,6 +252,7 @@ static void __init assabet_init(void)
sa11x0_set_flash_data(&assabet_flash_data, assabet_flash_resources,
ARRAY_SIZE(assabet_flash_resources));
sa11x0_set_irda_data(&assabet_irda_data);
sa11x0_set_mcp_data(&assabet_mcp_data);
}
/*

View File

@ -29,6 +29,7 @@
#include <asm/mach/serial_sa1100.h>
#include <asm/arch/cerf.h>
#include <asm/arch/mcp.h>
#include "generic.h"
static struct resource cerfuart2_resources[] = {
@ -116,10 +117,16 @@ static void __init cerf_map_io(void)
GPDR |= CERF_GPIO_CF_RESET;
}
static struct mcp_plat_data cerf_mcp_data = {
.mccr0 = MCCR0_ADM,
.sclk_rate = 11981000,
};
static void __init cerf_init(void)
{
platform_add_devices(cerf_devices, ARRAY_SIZE(cerf_devices));
sa11x0_set_flash_data(&cerf_flash_data, &cerf_flash_resource, 1);
sa11x0_set_mcp_data(&cerf_mcp_data);
}
MACHINE_START(CERF, "Intrinsyc CerfBoard/CerfCube")

View File

@ -221,6 +221,11 @@ static struct platform_device sa11x0mcp_device = {
.resource = sa11x0mcp_resources,
};
void sa11x0_set_mcp_data(struct mcp_plat_data *data)
{
sa11x0mcp_device.dev.platform_data = data;
}
static struct resource sa11x0ssp_resources[] = {
[0] = {
.start = 0x80070000,

View File

@ -34,5 +34,8 @@ struct resource;
extern void sa11x0_set_flash_data(struct flash_platform_data *flash,
struct resource *res, int nr);
struct sa11x0_ssp_plat_ops;
extern void sa11x0_set_ssp_data(struct sa11x0_ssp_plat_ops *ops);
struct irda_platform_data;
void sa11x0_set_irda_data(struct irda_platform_data *irda);

View File

@ -13,12 +13,23 @@
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
#include <asm/arch/mcp.h>
#include "generic.h"
#warning "include/asm/arch-sa1100/ide.h needs fixing for lart"
static struct mcp_plat_data lart_mcp_data = {
.mccr0 = MCCR0_ADM,
.sclk_rate = 11981000,
};
static void __init lart_init(void)
{
sa11x0_set_mcp_data(&lart_mcp_data);
}
static struct map_desc lart_io_desc[] __initdata = {
/* virtual physical length type */
{ 0xe8000000, 0x00000000, 0x00400000, MT_DEVICE }, /* main flash memory */
@ -47,5 +58,6 @@ MACHINE_START(LART, "LART")
.boot_params = 0xc0000100,
.map_io = lart_map_io,
.init_irq = sa1100_init_irq,
.init_machine = lart_init,
.timer = &sa1100_timer,
MACHINE_END

View File

@ -18,6 +18,7 @@
#include <asm/mach/flash.h>
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
#include <asm/arch/mcp.h>
#include <asm/arch/shannon.h>
#include "generic.h"
@ -52,9 +53,15 @@ static struct resource shannon_flash_resource = {
.flags = IORESOURCE_MEM,
};
static struct mcp_plat_data shannon_mcp_data = {
.mccr0 = MCCR0_ADM,
.sclk_rate = 11981000,
};
static void __init shannon_init(void)
{
sa11x0_set_flash_data(&shannon_flash_data, &shannon_flash_resource, 1);
sa11x0_set_mcp_data(&shannon_mcp_data);
}
static void __init shannon_map_io(void)

View File

@ -23,6 +23,7 @@
#include <asm/mach/flash.h>
#include <asm/mach/map.h>
#include <asm/mach/serial_sa1100.h>
#include <asm/arch/mcp.h>
#include <asm/arch/simpad.h>
#include <linux/serial_core.h>
@ -123,6 +124,11 @@ static struct resource simpad_flash_resources [] = {
}
};
static struct mcp_plat_data simpad_mcp_data = {
.mccr0 = MCCR0_ADM,
.sclk_rate = 11981000,
};
static void __init simpad_map_io(void)
@ -157,6 +163,7 @@ static void __init simpad_map_io(void)
sa11x0_set_flash_data(&simpad_flash_data, simpad_flash_resources,
ARRAY_SIZE(simpad_flash_resources));
sa11x0_set_mcp_data(&simpad_mcp_data);
}
static void simpad_power_off(void)

View File

@ -454,14 +454,13 @@ handle_signal(unsigned long sig, siginfo_t *info, sigset_t *oldset,
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&tsk->sighand->siglock);
sigorsets(&tsk->blocked, &tsk->blocked,
&ka->sa.sa_mask);
spin_lock_irq(&tsk->sighand->siglock);
sigorsets(&tsk->blocked, &tsk->blocked,
&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&tsk->blocked, sig);
recalc_sigpending();
spin_unlock_irq(&tsk->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&tsk->sighand->siglock);
return;
}

View File

@ -517,13 +517,12 @@ handle_signal(int canrestart, unsigned long sig,
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -568,13 +568,12 @@ handle_signal(int canrestart, unsigned long sig,
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -506,13 +506,12 @@ static void handle_signal(unsigned long sig, siginfo_t *info,
else
setup_frame(sig, ka, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
} /* end handle_signal() */
/*****************************************************************************/

View File

@ -488,13 +488,12 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
else
setup_frame(sig, ka, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -577,10 +577,11 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
else
ret = setup_frame(sig, ka, oldset, regs);
if (ret && !(ka->sa.sa_flags & SA_NODEFER)) {
if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
sigaddset(&current->blocked,sig);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}

View File

@ -467,15 +467,12 @@ handle_signal (unsigned long sig, struct k_sigaction *ka, siginfo_t *info, sigse
if (!setup_frame(sig, ka, info, oldset, scr))
return 0;
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
{
sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
sigaddset(&current->blocked, sig);
recalc_sigpending();
}
spin_unlock_irq(&current->sighand->siglock);
}
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
return 1;
}

View File

@ -341,13 +341,12 @@ handle_signal(unsigned long sig, struct k_sigaction *ka, siginfo_t *info,
/* Set up the stack frame */
setup_rt_frame(sig, ka, info, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -732,13 +732,12 @@ handle_signal(int sig, struct k_sigaction *ka, siginfo_t *info,
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -155,13 +155,12 @@ static inline void handle_signal(unsigned long sig, siginfo_t *info,
else
setup_irix_frame(ka, regs, sig, oldset);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
asmlinkage int do_irix_signal(sigset_t *oldset, struct pt_regs *regs)

View File

@ -425,13 +425,12 @@ static inline void handle_signal(unsigned long sig, siginfo_t *info,
setup_frame(ka, regs, sig, oldset);
#endif
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
extern int do_signal32(sigset_t *oldset, struct pt_regs *regs);

View File

@ -751,13 +751,12 @@ static inline void handle_signal(unsigned long sig, siginfo_t *info,
else
setup_frame(ka, regs, sig, oldset);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
int do_signal32(sigset_t *oldset, struct pt_regs *regs)

View File

@ -517,13 +517,12 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
if (!setup_rt_frame(sig, ka, info, oldset, regs, in_syscall))
return 0;
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
return 1;
}

View File

@ -759,13 +759,12 @@ int do_signal(sigset_t *oldset, struct pt_regs *regs)
else
handle_signal(signr, &ka, &info, oldset, regs, newsp);
if (!(ka.sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka.sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka.sa.sa_mask);
if (!(ka.sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
return 1;
}

View File

@ -481,10 +481,11 @@ static int handle_signal(unsigned long sig, struct k_sigaction *ka,
/* Set up Signal Frame */
ret = setup_rt_frame(sig, ka, info, oldset, regs);
if (ret && !(ka->sa.sa_flags & SA_NODEFER)) {
if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
sigaddset(&current->blocked,sig);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}

View File

@ -976,11 +976,12 @@ int do_signal32(sigset_t *oldset, struct pt_regs *regs)
else
ret = handle_signal32(signr, &ka, &info, oldset, regs, newsp);
if (ret && !(ka.sa.sa_flags & SA_NODEFER)) {
if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked,
&ka.sa.sa_mask);
sigaddset(&current->blocked, signr);
if (!(ka.sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}

View File

@ -637,12 +637,11 @@ handle_signal32(unsigned long sig, struct k_sigaction *ka,
else
setup_frame32(sig, ka, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}

View File

@ -429,13 +429,12 @@ handle_signal(unsigned long sig, struct k_sigaction *ka,
else
setup_frame(sig, ka, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -546,13 +546,12 @@ handle_signal(unsigned long sig, struct k_sigaction *ka, siginfo_t *info,
if (ka->sa.sa_flags & SA_ONESHOT)
ka->sa.sa_handler = SIG_DFL;
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -664,13 +664,12 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
else
setup_frame(sig, ka, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -1034,13 +1034,12 @@ handle_signal(unsigned long signr, struct k_sigaction *ka,
else
setup_frame(&ka->sa, regs, signr, oldset, info);
}
if (!(ka->sa.sa_flags & SA_NOMASK)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NOMASK))
sigaddset(&current->blocked, signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs,

View File

@ -574,13 +574,12 @@ static inline void handle_signal(unsigned long signr, struct k_sigaction *ka,
{
setup_rt_frame(ka, regs, signr, oldset,
(ka->sa.sa_flags & SA_SIGINFO) ? info : NULL);
if (!(ka->sa.sa_flags & SA_NOMASK)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NOMASK))
sigaddset(&current->blocked,signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs,

View File

@ -1325,13 +1325,12 @@ static inline void handle_signal32(unsigned long signr, struct k_sigaction *ka,
else
setup_frame32(&ka->sa, regs, signr, oldset, info);
}
if (!(ka->sa.sa_flags & SA_NOMASK)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NOMASK))
sigaddset(&current->blocked,signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
static inline void syscall_restart32(unsigned long orig_i0, struct pt_regs *regs,

View File

@ -9,19 +9,11 @@
*
*/
#include <linux/types.h>
#include <linux/kdev_t.h>
#include <linux/time.h>
#include <linux/devfs_fs_kernel.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/smp_lock.h>
#include <linux/miscdevice.h>
#include <asm/uaccess.h>
#include <asm/irq.h>
#include <asm/pgtable.h>
#include "mem_user.h"
#include "user_util.h"
@ -31,35 +23,22 @@ static unsigned long p_buf = 0;
static char *v_buf = NULL;
static ssize_t
mmapper_read(struct file *file, char *buf, size_t count, loff_t *ppos)
mmapper_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
{
if(*ppos > mmapper_size)
return -EINVAL;
if(count + *ppos > mmapper_size)
count = count + *ppos - mmapper_size;
if(count < 0)
return -EINVAL;
copy_to_user(buf,&v_buf[*ppos],count);
return count;
return simple_read_from_buffer(buf, count, ppos, v_buf, mmapper_size);
}
static ssize_t
mmapper_write(struct file *file, const char *buf, size_t count, loff_t *ppos)
mmapper_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
{
if(*ppos > mmapper_size)
if (*ppos > mmapper_size)
return -EINVAL;
if(count + *ppos > mmapper_size)
count = count + *ppos - mmapper_size;
if (count > mmapper_size - *ppos)
count = mmapper_size - *ppos;
if(count < 0)
return -EINVAL;
copy_from_user(&v_buf[*ppos],buf,count);
if (copy_from_user(&v_buf[*ppos], buf, count))
return -EFAULT;
return count;
}
@ -77,7 +56,6 @@ mmapper_mmap(struct file *file, struct vm_area_struct * vma)
int ret = -EINVAL;
int size;
lock_kernel();
if (vma->vm_pgoff != 0)
goto out;
@ -92,7 +70,6 @@ mmapper_mmap(struct file *file, struct vm_area_struct * vma)
goto out;
ret = 0;
out:
unlock_kernel();
return ret;
}

View File

@ -87,12 +87,12 @@ static int handle_signal(struct pt_regs *regs, unsigned long signr,
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
force_sigsegv(signr, current);
}
else if(!(ka->sa.sa_flags & SA_NODEFER)){
} else {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked,
&ka->sa.sa_mask);
sigaddset(&current->blocked, signr);
if(!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}

View File

@ -462,13 +462,12 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
else
setup_frame(sig, ka, oldset, regs);
if (!(ka->sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
/*

View File

@ -185,6 +185,40 @@ unsigned long __init e820_end_of_ram(void)
}
/*
* Compute how much memory is missing in a range.
* Unlike the other functions in this file the arguments are in page numbers.
*/
unsigned long __init
e820_hole_size(unsigned long start_pfn, unsigned long end_pfn)
{
unsigned long ram = 0;
unsigned long start = start_pfn << PAGE_SHIFT;
unsigned long end = end_pfn << PAGE_SHIFT;
int i;
for (i = 0; i < e820.nr_map; i++) {
struct e820entry *ei = &e820.map[i];
unsigned long last, addr;
if (ei->type != E820_RAM ||
ei->addr+ei->size <= start ||
ei->addr >= end)
continue;
addr = round_up(ei->addr, PAGE_SIZE);
if (addr < start)
addr = start;
last = round_down(ei->addr + ei->size, PAGE_SIZE);
if (last >= end)
last = end;
if (last > addr)
ram += last - addr;
}
return ((end - start) - ram) >> PAGE_SHIFT;
}
/*
* Mark e820 reserved areas as busy for the resource manager.
*/
void __init e820_reserve_resources(void)

View File

@ -394,10 +394,11 @@ handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka,
#endif
ret = setup_rt_frame(sig, ka, info, oldset, regs);
if (ret && !(ka->sa.sa_flags & SA_NODEFER)) {
if (ret) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked,&current->blocked,&ka->sa.sa_mask);
sigaddset(&current->blocked,sig);
if (!(ka->sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked,sig);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}

View File

@ -322,18 +322,26 @@ void zap_low_mappings(void)
void __init paging_init(void)
{
{
unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
unsigned long zones_size[MAX_NR_ZONES];
unsigned long holes[MAX_NR_ZONES];
unsigned int max_dma;
memset(zones_size, 0, sizeof(zones_size));
memset(holes, 0, sizeof(holes));
max_dma = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
if (end_pfn < max_dma)
if (end_pfn < max_dma) {
zones_size[ZONE_DMA] = end_pfn;
else {
holes[ZONE_DMA] = e820_hole_size(0, end_pfn);
} else {
zones_size[ZONE_DMA] = max_dma;
holes[ZONE_DMA] = e820_hole_size(0, max_dma);
zones_size[ZONE_NORMAL] = end_pfn - max_dma;
holes[ZONE_NORMAL] = e820_hole_size(max_dma, end_pfn);
}
free_area_init(zones_size);
free_area_init_node(0, NODE_DATA(0), zones_size,
__pa(PAGE_OFFSET) >> PAGE_SHIFT, holes);
}
return;
}

View File

@ -126,9 +126,11 @@ void __init setup_node_zones(int nodeid)
{
unsigned long start_pfn, end_pfn;
unsigned long zones[MAX_NR_ZONES];
unsigned long holes[MAX_NR_ZONES];
unsigned long dma_end_pfn;
memset(zones, 0, sizeof(unsigned long) * MAX_NR_ZONES);
memset(holes, 0, sizeof(unsigned long) * MAX_NR_ZONES);
start_pfn = node_start_pfn(nodeid);
end_pfn = node_end_pfn(nodeid);
@ -139,13 +141,17 @@ void __init setup_node_zones(int nodeid)
dma_end_pfn = __pa(MAX_DMA_ADDRESS) >> PAGE_SHIFT;
if (start_pfn < dma_end_pfn) {
zones[ZONE_DMA] = dma_end_pfn - start_pfn;
holes[ZONE_DMA] = e820_hole_size(start_pfn, dma_end_pfn);
zones[ZONE_NORMAL] = end_pfn - dma_end_pfn;
holes[ZONE_NORMAL] = e820_hole_size(dma_end_pfn, end_pfn);
} else {
zones[ZONE_NORMAL] = end_pfn - start_pfn;
holes[ZONE_NORMAL] = e820_hole_size(start_pfn, end_pfn);
}
free_area_init_node(nodeid, NODE_DATA(nodeid), zones,
start_pfn, NULL);
start_pfn, holes);
}
void __init numa_init_array(void)

View File

@ -702,12 +702,11 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
if (ka.sa.sa_flags & SA_ONESHOT)
ka.sa.sa_handler = SIG_DFL;
if (!(ka.sa.sa_flags & SA_NODEFER)) {
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked, &ka.sa.sa_mask);
spin_lock_irq(&current->sighand->siglock);
sigorsets(&current->blocked, &current->blocked, &ka.sa.sa_mask);
if (!(ka.sa.sa_flags & SA_NODEFER))
sigaddset(&current->blocked, signr);
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
}
recalc_sigpending();
spin_unlock_irq(&current->sighand->siglock);
return 1;
}

View File

@ -48,6 +48,8 @@ source "drivers/hwmon/Kconfig"
source "drivers/misc/Kconfig"
source "drivers/mfd/Kconfig"
source "drivers/media/Kconfig"
source "drivers/video/Kconfig"

View File

@ -26,7 +26,7 @@ obj-$(CONFIG_FB_INTEL) += video/intelfb/
obj-$(CONFIG_SERIO) += input/serio/
obj-y += serial/
obj-$(CONFIG_PARPORT) += parport/
obj-y += base/ block/ misc/ net/ media/
obj-y += base/ block/ misc/ mfd/ net/ media/
obj-$(CONFIG_NUBUS) += nubus/
obj-$(CONFIG_ATM) += atm/
obj-$(CONFIG_PPC_PMAC) += macintosh/

View File

@ -55,7 +55,11 @@ void acpi_power_off(void)
static int acpi_shutdown(struct sys_device *x)
{
return acpi_sleep_prepare(ACPI_STATE_S5);
if (system_state == SYSTEM_POWER_OFF) {
/* Prepare if we are going to power off the system */
return acpi_sleep_prepare(ACPI_STATE_S5);
}
return 0;
}
static struct sysdev_class acpi_sysclass = {

View File

@ -2433,7 +2433,7 @@ static int con_open(struct tty_struct *tty, struct file *filp)
int ret = 0;
acquire_console_sem();
if (tty->count == 1) {
if (tty->driver_data == NULL) {
ret = vc_allocate(currcons);
if (ret == 0) {
struct vc_data *vc = vc_cons[currcons].d;

View File

@ -325,7 +325,7 @@ int adm1026_attach_adapter(struct i2c_adapter *adapter)
int adm1026_detach_client(struct i2c_client *client)
{
i2c_detach_client(client);
kfree(client);
kfree(i2c_get_clientdata(client));
return 0;
}

View File

@ -845,7 +845,7 @@ static int adm1031_detach_client(struct i2c_client *client)
if ((ret = i2c_detach_client(client)) != 0) {
return ret;
}
kfree(client);
kfree(i2c_get_clientdata(client));
return 0;
}

View File

@ -1,5 +1,3 @@
EXTRA_CFLAGS += -Idrivers/infiniband/include
obj-$(CONFIG_INFINIBAND) += ib_core.o ib_mad.o ib_sa.o \
ib_cm.o ib_umad.o ib_ucm.o
obj-$(CONFIG_INFINIBAND_USER_VERBS) += ib_uverbs.o

View File

@ -1,9 +1,10 @@
/*
* Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
* Copyright (c) 2004 Infinicon Corporation. All rights reserved.
* Copyright (c) 2004 Intel Corporation. All rights reserved.
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2004 Voltaire Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved.
* Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Intel Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -40,7 +41,7 @@
#include <asm/bug.h>
#include <ib_smi.h>
#include <rdma/ib_smi.h>
#include "smi.h"
#include "agent_priv.h"

View File

@ -1,9 +1,9 @@
/*
* Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
* Copyright (c) 2004 Infinicon Corporation. All rights reserved.
* Copyright (c) 2004 Intel Corporation. All rights reserved.
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2004 Voltaire Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved.
* Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Intel Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU

View File

@ -1,5 +1,8 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Intel Corporation. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -32,12 +35,11 @@
* $Id: cache.c 1349 2004-12-16 21:09:43Z roland $
*/
#include <linux/version.h>
#include <linux/module.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <ib_cache.h>
#include <rdma/ib_cache.h>
#include "core_priv.h"

View File

@ -43,8 +43,8 @@
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <ib_cache.h>
#include <ib_cm.h>
#include <rdma/ib_cache.h>
#include <rdma/ib_cm.h>
#include "cm_msgs.h"
MODULE_AUTHOR("Sean Hefty");
@ -83,7 +83,7 @@ struct cm_port {
struct cm_device {
struct list_head list;
struct ib_device *device;
u64 ca_guid;
__be64 ca_guid;
struct cm_port port[0];
};
@ -100,8 +100,8 @@ struct cm_work {
struct list_head list;
struct cm_port *port;
struct ib_mad_recv_wc *mad_recv_wc; /* Received MADs */
u32 local_id; /* Established / timewait */
u32 remote_id;
__be32 local_id; /* Established / timewait */
__be32 remote_id;
struct ib_cm_event cm_event;
struct ib_sa_path_rec path[0];
};
@ -110,8 +110,8 @@ struct cm_timewait_info {
struct cm_work work; /* Must be first. */
struct rb_node remote_qp_node;
struct rb_node remote_id_node;
u64 remote_ca_guid;
u32 remote_qpn;
__be64 remote_ca_guid;
__be32 remote_qpn;
u8 inserted_remote_qp;
u8 inserted_remote_id;
};
@ -132,11 +132,11 @@ struct cm_id_private {
struct cm_av alt_av;
void *private_data;
u64 tid;
u32 local_qpn;
u32 remote_qpn;
u32 sq_psn;
u32 rq_psn;
__be64 tid;
__be32 local_qpn;
__be32 remote_qpn;
__be32 sq_psn;
__be32 rq_psn;
int timeout_ms;
enum ib_mtu path_mtu;
u8 private_data_len;
@ -253,7 +253,7 @@ static void cm_set_ah_attr(struct ib_ah_attr *ah_attr, u8 port_num,
u16 dlid, u8 sl, u16 src_path_bits)
{
memset(ah_attr, 0, sizeof ah_attr);
ah_attr->dlid = be16_to_cpu(dlid);
ah_attr->dlid = dlid;
ah_attr->sl = sl;
ah_attr->src_path_bits = src_path_bits;
ah_attr->port_num = port_num;
@ -264,7 +264,7 @@ static void cm_init_av_for_response(struct cm_port *port,
{
av->port = port;
av->pkey_index = wc->pkey_index;
cm_set_ah_attr(&av->ah_attr, port->port_num, cpu_to_be16(wc->slid),
cm_set_ah_attr(&av->ah_attr, port->port_num, wc->slid,
wc->sl, wc->dlid_path_bits);
}
@ -295,8 +295,9 @@ static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av)
return ret;
av->port = port;
cm_set_ah_attr(&av->ah_attr, av->port->port_num, path->dlid,
path->sl, path->slid & 0x7F);
cm_set_ah_attr(&av->ah_attr, av->port->port_num,
be16_to_cpu(path->dlid), path->sl,
be16_to_cpu(path->slid) & 0x7F);
av->packet_life_time = path->packet_life_time;
return 0;
}
@ -309,26 +310,26 @@ static int cm_alloc_id(struct cm_id_private *cm_id_priv)
do {
spin_lock_irqsave(&cm.lock, flags);
ret = idr_get_new_above(&cm.local_id_table, cm_id_priv, 1,
(int *) &cm_id_priv->id.local_id);
(__force int *) &cm_id_priv->id.local_id);
spin_unlock_irqrestore(&cm.lock, flags);
} while( (ret == -EAGAIN) && idr_pre_get(&cm.local_id_table, GFP_KERNEL) );
return ret;
}
static void cm_free_id(u32 local_id)
static void cm_free_id(__be32 local_id)
{
unsigned long flags;
spin_lock_irqsave(&cm.lock, flags);
idr_remove(&cm.local_id_table, (int) local_id);
idr_remove(&cm.local_id_table, (__force int) local_id);
spin_unlock_irqrestore(&cm.lock, flags);
}
static struct cm_id_private * cm_get_id(u32 local_id, u32 remote_id)
static struct cm_id_private * cm_get_id(__be32 local_id, __be32 remote_id)
{
struct cm_id_private *cm_id_priv;
cm_id_priv = idr_find(&cm.local_id_table, (int) local_id);
cm_id_priv = idr_find(&cm.local_id_table, (__force int) local_id);
if (cm_id_priv) {
if (cm_id_priv->id.remote_id == remote_id)
atomic_inc(&cm_id_priv->refcount);
@ -339,7 +340,7 @@ static struct cm_id_private * cm_get_id(u32 local_id, u32 remote_id)
return cm_id_priv;
}
static struct cm_id_private * cm_acquire_id(u32 local_id, u32 remote_id)
static struct cm_id_private * cm_acquire_id(__be32 local_id, __be32 remote_id)
{
struct cm_id_private *cm_id_priv;
unsigned long flags;
@ -356,8 +357,8 @@ static struct cm_id_private * cm_insert_listen(struct cm_id_private *cm_id_priv)
struct rb_node **link = &cm.listen_service_table.rb_node;
struct rb_node *parent = NULL;
struct cm_id_private *cur_cm_id_priv;
u64 service_id = cm_id_priv->id.service_id;
u64 service_mask = cm_id_priv->id.service_mask;
__be64 service_id = cm_id_priv->id.service_id;
__be64 service_mask = cm_id_priv->id.service_mask;
while (*link) {
parent = *link;
@ -376,7 +377,7 @@ static struct cm_id_private * cm_insert_listen(struct cm_id_private *cm_id_priv)
return NULL;
}
static struct cm_id_private * cm_find_listen(u64 service_id)
static struct cm_id_private * cm_find_listen(__be64 service_id)
{
struct rb_node *node = cm.listen_service_table.rb_node;
struct cm_id_private *cm_id_priv;
@ -400,8 +401,8 @@ static struct cm_timewait_info * cm_insert_remote_id(struct cm_timewait_info
struct rb_node **link = &cm.remote_id_table.rb_node;
struct rb_node *parent = NULL;
struct cm_timewait_info *cur_timewait_info;
u64 remote_ca_guid = timewait_info->remote_ca_guid;
u32 remote_id = timewait_info->work.remote_id;
__be64 remote_ca_guid = timewait_info->remote_ca_guid;
__be32 remote_id = timewait_info->work.remote_id;
while (*link) {
parent = *link;
@ -424,8 +425,8 @@ static struct cm_timewait_info * cm_insert_remote_id(struct cm_timewait_info
return NULL;
}
static struct cm_timewait_info * cm_find_remote_id(u64 remote_ca_guid,
u32 remote_id)
static struct cm_timewait_info * cm_find_remote_id(__be64 remote_ca_guid,
__be32 remote_id)
{
struct rb_node *node = cm.remote_id_table.rb_node;
struct cm_timewait_info *timewait_info;
@ -453,8 +454,8 @@ static struct cm_timewait_info * cm_insert_remote_qpn(struct cm_timewait_info
struct rb_node **link = &cm.remote_qp_table.rb_node;
struct rb_node *parent = NULL;
struct cm_timewait_info *cur_timewait_info;
u64 remote_ca_guid = timewait_info->remote_ca_guid;
u32 remote_qpn = timewait_info->remote_qpn;
__be64 remote_ca_guid = timewait_info->remote_ca_guid;
__be32 remote_qpn = timewait_info->remote_qpn;
while (*link) {
parent = *link;
@ -484,7 +485,7 @@ static struct cm_id_private * cm_insert_remote_sidr(struct cm_id_private
struct rb_node *parent = NULL;
struct cm_id_private *cur_cm_id_priv;
union ib_gid *port_gid = &cm_id_priv->av.dgid;
u32 remote_id = cm_id_priv->id.remote_id;
__be32 remote_id = cm_id_priv->id.remote_id;
while (*link) {
parent = *link;
@ -598,7 +599,7 @@ static void cm_cleanup_timewait(struct cm_timewait_info *timewait_info)
spin_unlock_irqrestore(&cm.lock, flags);
}
static struct cm_timewait_info * cm_create_timewait_info(u32 local_id)
static struct cm_timewait_info * cm_create_timewait_info(__be32 local_id)
{
struct cm_timewait_info *timewait_info;
@ -715,14 +716,15 @@ retest:
EXPORT_SYMBOL(ib_destroy_cm_id);
int ib_cm_listen(struct ib_cm_id *cm_id,
u64 service_id,
u64 service_mask)
__be64 service_id,
__be64 service_mask)
{
struct cm_id_private *cm_id_priv, *cur_cm_id_priv;
unsigned long flags;
int ret = 0;
service_mask = service_mask ? service_mask : ~0ULL;
service_mask = service_mask ? service_mask :
__constant_cpu_to_be64(~0ULL);
service_id &= service_mask;
if ((service_id & IB_SERVICE_ID_AGN_MASK) == IB_CM_ASSIGN_SERVICE_ID &&
(service_id != IB_CM_ASSIGN_SERVICE_ID))
@ -735,8 +737,8 @@ int ib_cm_listen(struct ib_cm_id *cm_id,
spin_lock_irqsave(&cm.lock, flags);
if (service_id == IB_CM_ASSIGN_SERVICE_ID) {
cm_id->service_id = __cpu_to_be64(cm.listen_service_id++);
cm_id->service_mask = ~0ULL;
cm_id->service_id = cpu_to_be64(cm.listen_service_id++);
cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
} else {
cm_id->service_id = service_id;
cm_id->service_mask = service_mask;
@ -752,18 +754,19 @@ int ib_cm_listen(struct ib_cm_id *cm_id,
}
EXPORT_SYMBOL(ib_cm_listen);
static u64 cm_form_tid(struct cm_id_private *cm_id_priv,
enum cm_msg_sequence msg_seq)
static __be64 cm_form_tid(struct cm_id_private *cm_id_priv,
enum cm_msg_sequence msg_seq)
{
u64 hi_tid, low_tid;
hi_tid = ((u64) cm_id_priv->av.port->mad_agent->hi_tid) << 32;
low_tid = (u64) (cm_id_priv->id.local_id | (msg_seq << 30));
low_tid = (u64) ((__force u32)cm_id_priv->id.local_id |
(msg_seq << 30));
return cpu_to_be64(hi_tid | low_tid);
}
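For reference, cm_form_tid() packs three things into the 64-bit TID: the MAD agent's hi_tid in bits 63..32, the message sequence in bits 31..30, and the raw local ID bits below that. A hypothetical decoder, assuming local IDs stay under 2^30 so the fields do not overlap:

    static void example_decode_tid(__be64 tid, u32 *hi, u32 *seq, u32 *id)
    {
            u64 v = be64_to_cpu(tid);

            *hi  = v >> 32;                 /* mad_agent->hi_tid */
            *seq = (v >> 30) & 0x3;         /* enum cm_msg_sequence */
            *id  = v & 0x3fffffff;          /* raw (__force) local_id bits */
    }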
static void cm_format_mad_hdr(struct ib_mad_hdr *hdr,
enum cm_msg_attr_id attr_id, u64 tid)
__be16 attr_id, __be64 tid)
{
hdr->base_version = IB_MGMT_BASE_VERSION;
hdr->mgmt_class = IB_MGMT_CLASS_CM;
@ -896,7 +899,7 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
goto error1;
}
cm_id->service_id = param->service_id;
cm_id->service_mask = ~0ULL;
cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
cm_id_priv->timeout_ms = cm_convert_to_ms(
param->primary_path->packet_life_time) * 2 +
cm_convert_to_ms(
@ -963,7 +966,7 @@ static int cm_issue_rej(struct cm_port *port,
rej_msg->remote_comm_id = rcv_msg->local_comm_id;
rej_msg->local_comm_id = rcv_msg->remote_comm_id;
cm_rej_set_msg_rejected(rej_msg, msg_rejected);
rej_msg->reason = reason;
rej_msg->reason = cpu_to_be16(reason);
if (ari && ari_length) {
cm_rej_set_reject_info_len(rej_msg, ari_length);
@ -977,8 +980,8 @@ static int cm_issue_rej(struct cm_port *port,
return ret;
}
static inline int cm_is_active_peer(u64 local_ca_guid, u64 remote_ca_guid,
u32 local_qpn, u32 remote_qpn)
static inline int cm_is_active_peer(__be64 local_ca_guid, __be64 remote_ca_guid,
__be32 local_qpn, __be32 remote_qpn)
{
return (be64_to_cpu(local_ca_guid) > be64_to_cpu(remote_ca_guid) ||
((local_ca_guid == remote_ca_guid) &&
@ -1137,7 +1140,7 @@ static void cm_format_rej(struct cm_rej_msg *rej_msg,
break;
}
rej_msg->reason = reason;
rej_msg->reason = cpu_to_be16(reason);
if (ari && ari_length) {
cm_rej_set_reject_info_len(rej_msg, ari_length);
memcpy(rej_msg->ari, ari, ari_length);
@ -1276,7 +1279,7 @@ static int cm_req_handler(struct cm_work *work)
cm_id_priv->id.cm_handler = listen_cm_id_priv->id.cm_handler;
cm_id_priv->id.context = listen_cm_id_priv->id.context;
cm_id_priv->id.service_id = req_msg->service_id;
cm_id_priv->id.service_mask = ~0ULL;
cm_id_priv->id.service_mask = __constant_cpu_to_be64(~0ULL);
cm_format_paths_from_req(req_msg, &work->path[0], &work->path[1]);
ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av);
@ -1969,7 +1972,7 @@ static void cm_format_rej_event(struct cm_work *work)
param = &work->cm_event.param.rej_rcvd;
param->ari = rej_msg->ari;
param->ari_length = cm_rej_get_reject_info_len(rej_msg);
param->reason = rej_msg->reason;
param->reason = __be16_to_cpu(rej_msg->reason);
work->cm_event.private_data = &rej_msg->private_data;
}
@ -1978,20 +1981,20 @@ static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg)
struct cm_timewait_info *timewait_info;
struct cm_id_private *cm_id_priv;
unsigned long flags;
u32 remote_id;
__be32 remote_id;
remote_id = rej_msg->local_comm_id;
if (rej_msg->reason == IB_CM_REJ_TIMEOUT) {
if (__be16_to_cpu(rej_msg->reason) == IB_CM_REJ_TIMEOUT) {
spin_lock_irqsave(&cm.lock, flags);
timewait_info = cm_find_remote_id( *((u64 *) rej_msg->ari),
timewait_info = cm_find_remote_id( *((__be64 *) rej_msg->ari),
remote_id);
if (!timewait_info) {
spin_unlock_irqrestore(&cm.lock, flags);
return NULL;
}
cm_id_priv = idr_find(&cm.local_id_table,
(int) timewait_info->work.local_id);
(__force int) timewait_info->work.local_id);
if (cm_id_priv) {
if (cm_id_priv->id.remote_id == remote_id)
atomic_inc(&cm_id_priv->refcount);
@ -2032,7 +2035,7 @@ static int cm_rej_handler(struct cm_work *work)
/* fall through */
case IB_CM_REQ_RCVD:
case IB_CM_MRA_REQ_SENT:
if (rej_msg->reason == IB_CM_REJ_STALE_CONN)
if (__be16_to_cpu(rej_msg->reason) == IB_CM_REJ_STALE_CONN)
cm_enter_timewait(cm_id_priv);
else
cm_reset_to_idle(cm_id_priv);
@ -2553,7 +2556,7 @@ static void cm_format_sidr_req(struct cm_sidr_req_msg *sidr_req_msg,
cm_format_mad_hdr(&sidr_req_msg->hdr, CM_SIDR_REQ_ATTR_ID,
cm_form_tid(cm_id_priv, CM_MSG_SEQUENCE_SIDR));
sidr_req_msg->request_id = cm_id_priv->id.local_id;
sidr_req_msg->pkey = param->pkey;
sidr_req_msg->pkey = cpu_to_be16(param->pkey);
sidr_req_msg->service_id = param->service_id;
if (param->private_data && param->private_data_len)
@ -2580,7 +2583,7 @@ int ib_send_cm_sidr_req(struct ib_cm_id *cm_id,
goto out;
cm_id->service_id = param->service_id;
cm_id->service_mask = ~0ULL;
cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
cm_id_priv->timeout_ms = param->timeout_ms;
cm_id_priv->max_cm_retries = param->max_cm_retries;
ret = cm_alloc_msg(cm_id_priv, &msg);
@ -2621,7 +2624,7 @@ static void cm_format_sidr_req_event(struct cm_work *work,
sidr_req_msg = (struct cm_sidr_req_msg *)
work->mad_recv_wc->recv_buf.mad;
param = &work->cm_event.param.sidr_req_rcvd;
param->pkey = sidr_req_msg->pkey;
param->pkey = __be16_to_cpu(sidr_req_msg->pkey);
param->listen_id = listen_id;
param->device = work->port->mad_agent->device;
param->port = work->port->port_num;
@ -2645,7 +2648,7 @@ static int cm_sidr_req_handler(struct cm_work *work)
sidr_req_msg = (struct cm_sidr_req_msg *)
work->mad_recv_wc->recv_buf.mad;
wc = work->mad_recv_wc->wc;
cm_id_priv->av.dgid.global.subnet_prefix = wc->slid;
cm_id_priv->av.dgid.global.subnet_prefix = cpu_to_be64(wc->slid);
cm_id_priv->av.dgid.global.interface_id = 0;
cm_init_av_for_response(work->port, work->mad_recv_wc->wc,
&cm_id_priv->av);
@ -2673,7 +2676,7 @@ static int cm_sidr_req_handler(struct cm_work *work)
cm_id_priv->id.cm_handler = cur_cm_id_priv->id.cm_handler;
cm_id_priv->id.context = cur_cm_id_priv->id.context;
cm_id_priv->id.service_id = sidr_req_msg->service_id;
cm_id_priv->id.service_mask = ~0ULL;
cm_id_priv->id.service_mask = __constant_cpu_to_be64(~0ULL);
cm_format_sidr_req_event(work, &cur_cm_id_priv->id);
cm_process_work(cm_id_priv, work);
@ -3175,10 +3178,10 @@ int ib_cm_init_qp_attr(struct ib_cm_id *cm_id,
}
EXPORT_SYMBOL(ib_cm_init_qp_attr);
static u64 cm_get_ca_guid(struct ib_device *device)
static __be64 cm_get_ca_guid(struct ib_device *device)
{
struct ib_device_attr *device_attr;
u64 guid;
__be64 guid;
int ret;
device_attr = kmalloc(sizeof *device_attr, GFP_KERNEL);


@ -34,7 +34,7 @@
#if !defined(CM_MSGS_H)
#define CM_MSGS_H
#include <ib_mad.h>
#include <rdma/ib_mad.h>
/*
* Parameters to routines below should be in network-byte order, and values
@ -43,19 +43,17 @@
#define IB_CM_CLASS_VERSION 2 /* IB specification 1.2 */
enum cm_msg_attr_id {
CM_REQ_ATTR_ID = __constant_htons(0x0010),
CM_MRA_ATTR_ID = __constant_htons(0x0011),
CM_REJ_ATTR_ID = __constant_htons(0x0012),
CM_REP_ATTR_ID = __constant_htons(0x0013),
CM_RTU_ATTR_ID = __constant_htons(0x0014),
CM_DREQ_ATTR_ID = __constant_htons(0x0015),
CM_DREP_ATTR_ID = __constant_htons(0x0016),
CM_SIDR_REQ_ATTR_ID = __constant_htons(0x0017),
CM_SIDR_REP_ATTR_ID = __constant_htons(0x0018),
CM_LAP_ATTR_ID = __constant_htons(0x0019),
CM_APR_ATTR_ID = __constant_htons(0x001A)
};
#define CM_REQ_ATTR_ID __constant_htons(0x0010)
#define CM_MRA_ATTR_ID __constant_htons(0x0011)
#define CM_REJ_ATTR_ID __constant_htons(0x0012)
#define CM_REP_ATTR_ID __constant_htons(0x0013)
#define CM_RTU_ATTR_ID __constant_htons(0x0014)
#define CM_DREQ_ATTR_ID __constant_htons(0x0015)
#define CM_DREP_ATTR_ID __constant_htons(0x0016)
#define CM_SIDR_REQ_ATTR_ID __constant_htons(0x0017)
#define CM_SIDR_REP_ATTR_ID __constant_htons(0x0018)
#define CM_LAP_ATTR_ID __constant_htons(0x0019)
#define CM_APR_ATTR_ID __constant_htons(0x001A)
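The switch from an enum to #defines here is a sparse fix rather than a cleanup: enumerators are plain ints, so wrapping __constant_htons() values in an enum silently discarded their __be16 annotation, while a macro keeps it. Illustrative use (assuming the now-__be16 attr_id field):

    static void example_set_attr_id(struct ib_mad_hdr *hdr)
    {
            /* Expands to a __be16 constant; assigns to the __be16
             * attr_id field with no cast and no sparse warning. */
            hdr->attr_id = CM_REQ_ATTR_ID;
    }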
enum cm_msg_sequence {
CM_MSG_SEQUENCE_REQ,
@ -67,35 +65,35 @@ enum cm_msg_sequence {
struct cm_req_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 rsvd4;
u64 service_id;
u64 local_ca_guid;
u32 rsvd24;
u32 local_qkey;
__be32 local_comm_id;
__be32 rsvd4;
__be64 service_id;
__be64 local_ca_guid;
__be32 rsvd24;
__be32 local_qkey;
/* local QPN:24, responder resources:8 */
u32 offset32;
__be32 offset32;
/* local EECN:24, initiator depth:8 */
u32 offset36;
__be32 offset36;
/*
* remote EECN:24, remote CM response timeout:5,
* transport service type:2, end-to-end flow control:1
*/
u32 offset40;
__be32 offset40;
/* starting PSN:24, local CM response timeout:5, retry count:3 */
u32 offset44;
u16 pkey;
__be32 offset44;
__be16 pkey;
/* path MTU:4, RDC exists:1, RNR retry count:3. */
u8 offset50;
/* max CM Retries:4, SRQ:1, rsvd:3 */
u8 offset51;
u16 primary_local_lid;
u16 primary_remote_lid;
__be16 primary_local_lid;
__be16 primary_remote_lid;
union ib_gid primary_local_gid;
union ib_gid primary_remote_gid;
/* flow label:20, rsvd:6, packet rate:6 */
u32 primary_offset88;
__be32 primary_offset88;
u8 primary_traffic_class;
u8 primary_hop_limit;
/* SL:4, subnet local:1, rsvd:3 */
@ -103,12 +101,12 @@ struct cm_req_msg {
/* local ACK timeout:5, rsvd:3 */
u8 primary_offset95;
u16 alt_local_lid;
u16 alt_remote_lid;
__be16 alt_local_lid;
__be16 alt_remote_lid;
union ib_gid alt_local_gid;
union ib_gid alt_remote_gid;
/* flow label:20, rsvd:6, packet rate:6 */
u32 alt_offset132;
__be32 alt_offset132;
u8 alt_traffic_class;
u8 alt_hop_limit;
/* SL:4, subnet local:1, rsvd:3 */
@ -120,12 +118,12 @@ struct cm_req_msg {
} __attribute__ ((packed));
static inline u32 cm_req_get_local_qpn(struct cm_req_msg *req_msg)
static inline __be32 cm_req_get_local_qpn(struct cm_req_msg *req_msg)
{
return cpu_to_be32(be32_to_cpu(req_msg->offset32) >> 8);
}
static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, u32 qpn)
static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, __be32 qpn)
{
req_msg->offset32 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(req_msg->offset32) &
@ -208,13 +206,13 @@ static inline void cm_req_set_flow_ctrl(struct cm_req_msg *req_msg,
0xFFFFFFFE));
}
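These accessors all follow one pattern for IBA fields that pack a 24-bit value above an 8-bit one in a single big-endian word: convert to host order, shift or mask, convert back. A generic sketch of the pair (names illustrative, not from the patch):

    /* Top 24 bits of a __be32 word, returned still big-endian-typed. */
    static inline __be32 example_get_hi24(__be32 word)
    {
            return cpu_to_be32(be32_to_cpu(word) >> 8);
    }

    static inline void example_set_hi24(__be32 *word, __be32 val)
    {
            *word = cpu_to_be32((be32_to_cpu(val) << 8) |
                                (be32_to_cpu(*word) & 0x000000FF));
    }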
static inline u32 cm_req_get_starting_psn(struct cm_req_msg *req_msg)
static inline __be32 cm_req_get_starting_psn(struct cm_req_msg *req_msg)
{
return cpu_to_be32(be32_to_cpu(req_msg->offset44) >> 8);
}
static inline void cm_req_set_starting_psn(struct cm_req_msg *req_msg,
u32 starting_psn)
__be32 starting_psn)
{
req_msg->offset44 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) |
(be32_to_cpu(req_msg->offset44) & 0x000000FF));
@ -288,13 +286,13 @@ static inline void cm_req_set_srq(struct cm_req_msg *req_msg, u8 srq)
((srq & 0x1) << 3));
}
static inline u32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg)
static inline __be32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg)
{
return cpu_to_be32((be32_to_cpu(req_msg->primary_offset88) >> 12));
return cpu_to_be32(be32_to_cpu(req_msg->primary_offset88) >> 12);
}
static inline void cm_req_set_primary_flow_label(struct cm_req_msg *req_msg,
u32 flow_label)
__be32 flow_label)
{
req_msg->primary_offset88 = cpu_to_be32(
(be32_to_cpu(req_msg->primary_offset88) &
@ -350,13 +348,13 @@ static inline void cm_req_set_primary_local_ack_timeout(struct cm_req_msg *req_m
(local_ack_timeout << 3));
}
static inline u32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg)
static inline __be32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg)
{
return cpu_to_be32((be32_to_cpu(req_msg->alt_offset132) >> 12));
return cpu_to_be32(be32_to_cpu(req_msg->alt_offset132) >> 12);
}
static inline void cm_req_set_alt_flow_label(struct cm_req_msg *req_msg,
u32 flow_label)
__be32 flow_label)
{
req_msg->alt_offset132 = cpu_to_be32(
(be32_to_cpu(req_msg->alt_offset132) &
@ -422,8 +420,8 @@ enum cm_msg_response {
struct cm_mra_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 remote_comm_id;
__be32 local_comm_id;
__be32 remote_comm_id;
/* message MRAed:2, rsvd:6 */
u8 offset8;
/* service timeout:5, rsvd:3 */
@ -458,13 +456,13 @@ static inline void cm_mra_set_service_timeout(struct cm_mra_msg *mra_msg,
struct cm_rej_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 remote_comm_id;
__be32 local_comm_id;
__be32 remote_comm_id;
/* message REJected:2, rsvd:6 */
u8 offset8;
/* reject info length:7, rsvd:1. */
u8 offset9;
u16 reason;
__be16 reason;
u8 ari[IB_CM_REJ_ARI_LENGTH];
u8 private_data[IB_CM_REJ_PRIVATE_DATA_SIZE];
@ -495,45 +493,45 @@ static inline void cm_rej_set_reject_info_len(struct cm_rej_msg *rej_msg,
struct cm_rep_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 remote_comm_id;
u32 local_qkey;
__be32 local_comm_id;
__be32 remote_comm_id;
__be32 local_qkey;
/* local QPN:24, rsvd:8 */
u32 offset12;
__be32 offset12;
/* local EECN:24, rsvd:8 */
u32 offset16;
__be32 offset16;
/* starting PSN:24 rsvd:8 */
u32 offset20;
__be32 offset20;
u8 resp_resources;
u8 initiator_depth;
/* target ACK delay:5, failover accepted:2, end-to-end flow control:1 */
u8 offset26;
/* RNR retry count:3, SRQ:1, rsvd:5 */
u8 offset27;
u64 local_ca_guid;
__be64 local_ca_guid;
u8 private_data[IB_CM_REP_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
static inline u32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg)
static inline __be32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg)
{
return cpu_to_be32(be32_to_cpu(rep_msg->offset12) >> 8);
}
static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, u32 qpn)
static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, __be32 qpn)
{
rep_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(rep_msg->offset12) & 0x000000FF));
}
static inline u32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg)
static inline __be32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg)
{
return cpu_to_be32(be32_to_cpu(rep_msg->offset20) >> 8);
}
static inline void cm_rep_set_starting_psn(struct cm_rep_msg *rep_msg,
u32 starting_psn)
__be32 starting_psn)
{
rep_msg->offset20 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) |
(be32_to_cpu(rep_msg->offset20) & 0x000000FF));
@ -600,8 +598,8 @@ static inline void cm_rep_set_srq(struct cm_rep_msg *rep_msg, u8 srq)
struct cm_rtu_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 remote_comm_id;
__be32 local_comm_id;
__be32 remote_comm_id;
u8 private_data[IB_CM_RTU_PRIVATE_DATA_SIZE];
@ -610,21 +608,21 @@ struct cm_rtu_msg {
struct cm_dreq_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 remote_comm_id;
__be32 local_comm_id;
__be32 remote_comm_id;
/* remote QPN/EECN:24, rsvd:8 */
u32 offset8;
__be32 offset8;
u8 private_data[IB_CM_DREQ_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
static inline u32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg)
static inline __be32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg)
{
return cpu_to_be32(be32_to_cpu(dreq_msg->offset8) >> 8);
}
static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, u32 qpn)
static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, __be32 qpn)
{
dreq_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(dreq_msg->offset8) & 0x000000FF));
@ -633,8 +631,8 @@ static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, u32 qpn)
struct cm_drep_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 remote_comm_id;
__be32 local_comm_id;
__be32 remote_comm_id;
u8 private_data[IB_CM_DREP_PRIVATE_DATA_SIZE];
@ -643,37 +641,37 @@ struct cm_drep_msg {
struct cm_lap_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 remote_comm_id;
__be32 local_comm_id;
__be32 remote_comm_id;
u32 rsvd8;
__be32 rsvd8;
/* remote QPN/EECN:24, remote CM response timeout:5, rsvd:3 */
u32 offset12;
u32 rsvd16;
__be32 offset12;
__be32 rsvd16;
u16 alt_local_lid;
u16 alt_remote_lid;
__be16 alt_local_lid;
__be16 alt_remote_lid;
union ib_gid alt_local_gid;
union ib_gid alt_remote_gid;
/* flow label:20, rsvd:4, traffic class:8 */
u32 offset56;
__be32 offset56;
u8 alt_hop_limit;
/* rsvd:2, packet rate:6 */
uint8_t offset61;
u8 offset61;
/* SL:4, subnet local:1, rsvd:3 */
uint8_t offset62;
u8 offset62;
/* local ACK timeout:5, rsvd:3 */
uint8_t offset63;
u8 offset63;
u8 private_data[IB_CM_LAP_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
static inline u32 cm_lap_get_remote_qpn(struct cm_lap_msg *lap_msg)
static inline __be32 cm_lap_get_remote_qpn(struct cm_lap_msg *lap_msg)
{
return cpu_to_be32(be32_to_cpu(lap_msg->offset12) >> 8);
}
static inline void cm_lap_set_remote_qpn(struct cm_lap_msg *lap_msg, u32 qpn)
static inline void cm_lap_set_remote_qpn(struct cm_lap_msg *lap_msg, __be32 qpn)
{
lap_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(lap_msg->offset12) &
@ -693,17 +691,17 @@ static inline void cm_lap_set_remote_resp_timeout(struct cm_lap_msg *lap_msg,
0xFFFFFF07));
}
static inline u32 cm_lap_get_flow_label(struct cm_lap_msg *lap_msg)
static inline __be32 cm_lap_get_flow_label(struct cm_lap_msg *lap_msg)
{
return be32_to_cpu(lap_msg->offset56) >> 12;
return cpu_to_be32(be32_to_cpu(lap_msg->offset56) >> 12);
}
static inline void cm_lap_set_flow_label(struct cm_lap_msg *lap_msg,
u32 flow_label)
__be32 flow_label)
{
lap_msg->offset56 = cpu_to_be32((flow_label << 12) |
(be32_to_cpu(lap_msg->offset56) &
0x00000FFF));
lap_msg->offset56 = cpu_to_be32(
(be32_to_cpu(lap_msg->offset56) & 0x00000FFF) |
(be32_to_cpu(flow_label) << 12));
}
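Unlike most hunks in this file, the LAP flow-label pair is a behavior fix, not just a retyping: the old getter returned host order despite callers expecting network order, and the old setter shifted the label without any conversion. With both sides converting consistently, set-then-get is now an identity for any 20-bit label; an illustrative check:

    static void example_flow_label_roundtrip(struct cm_lap_msg *lap_msg)
    {
            __be32 fl = cpu_to_be32(0xABCDE);       /* any 20-bit label */

            cm_lap_set_flow_label(lap_msg, fl);
            BUG_ON(cm_lap_get_flow_label(lap_msg) != fl);
    }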
static inline u8 cm_lap_get_traffic_class(struct cm_lap_msg *lap_msg)
@ -766,8 +764,8 @@ static inline void cm_lap_set_local_ack_timeout(struct cm_lap_msg *lap_msg,
struct cm_apr_msg {
struct ib_mad_hdr hdr;
u32 local_comm_id;
u32 remote_comm_id;
__be32 local_comm_id;
__be32 remote_comm_id;
u8 info_length;
u8 ap_status;
@ -779,10 +777,10 @@ struct cm_apr_msg {
struct cm_sidr_req_msg {
struct ib_mad_hdr hdr;
u32 request_id;
u16 pkey;
u16 rsvd;
u64 service_id;
__be32 request_id;
__be16 pkey;
__be16 rsvd;
__be64 service_id;
u8 private_data[IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
@ -790,26 +788,26 @@ struct cm_sidr_req_msg {
struct cm_sidr_rep_msg {
struct ib_mad_hdr hdr;
u32 request_id;
__be32 request_id;
u8 status;
u8 info_length;
u16 rsvd;
__be16 rsvd;
/* QPN:24, rsvd:8 */
u32 offset8;
u64 service_id;
u32 qkey;
__be32 offset8;
__be64 service_id;
__be32 qkey;
u8 info[IB_CM_SIDR_REP_INFO_LENGTH];
u8 private_data[IB_CM_SIDR_REP_PRIVATE_DATA_SIZE];
} __attribute__ ((packed));
static inline u32 cm_sidr_rep_get_qpn(struct cm_sidr_rep_msg *sidr_rep_msg)
static inline __be32 cm_sidr_rep_get_qpn(struct cm_sidr_rep_msg *sidr_rep_msg)
{
return cpu_to_be32(be32_to_cpu(sidr_rep_msg->offset8) >> 8);
}
static inline void cm_sidr_rep_set_qpn(struct cm_sidr_rep_msg *sidr_rep_msg,
u32 qpn)
__be32 qpn)
{
sidr_rep_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
(be32_to_cpu(sidr_rep_msg->offset8) &


@ -38,7 +38,7 @@
#include <linux/list.h>
#include <linux/spinlock.h>
#include <ib_verbs.h>
#include <rdma/ib_verbs.h>
int ib_device_register_sysfs(struct ib_device *device);
void ib_device_unregister_sysfs(struct ib_device *device);


@ -1,5 +1,6 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU


@ -39,7 +39,7 @@
#include <linux/jhash.h>
#include <linux/kthread.h>
#include <ib_fmr_pool.h>
#include <rdma/ib_fmr_pool.h>
#include "core_priv.h"
@ -334,6 +334,7 @@ void ib_destroy_fmr_pool(struct ib_fmr_pool *pool)
{
struct ib_pool_fmr *fmr;
struct ib_pool_fmr *tmp;
LIST_HEAD(fmr_list);
int i;
kthread_stop(pool->thread);
@ -341,6 +342,11 @@ void ib_destroy_fmr_pool(struct ib_fmr_pool *pool)
i = 0;
list_for_each_entry_safe(fmr, tmp, &pool->free_list, list) {
if (fmr->remap_count) {
INIT_LIST_HEAD(&fmr_list);
list_add_tail(&fmr->fmr->list, &fmr_list);
ib_unmap_fmr(&fmr_list);
}
ib_dealloc_fmr(fmr->fmr);
list_del(&fmr->list);
kfree(fmr);


@ -693,7 +693,8 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
goto out;
}
build_smp_wc(send_wr->wr_id, smp->dr_slid, send_wr->wr.ud.pkey_index,
build_smp_wc(send_wr->wr_id, be16_to_cpu(smp->dr_slid),
send_wr->wr.ud.pkey_index,
send_wr->wr.ud.port_num, &mad_wc);
/* No GRH for DR SMP */
@ -1554,7 +1555,7 @@ static int is_data_mad(struct ib_mad_agent_private *mad_agent_priv,
}
struct ib_mad_send_wr_private*
ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, u64 tid)
ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, __be64 tid)
{
struct ib_mad_send_wr_private *mad_send_wr;
@ -1597,7 +1598,7 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
struct ib_mad_send_wr_private *mad_send_wr;
struct ib_mad_send_wc mad_send_wc;
unsigned long flags;
u64 tid;
__be64 tid;
INIT_LIST_HEAD(&mad_recv_wc->rmpp_list);
list_add(&mad_recv_wc->recv_buf.list, &mad_recv_wc->rmpp_list);
@ -2165,7 +2166,8 @@ static void local_completions(void *data)
* Defined behavior is to complete response
* before request
*/
build_smp_wc(local->wr_id, IB_LID_PERMISSIVE,
build_smp_wc(local->wr_id,
be16_to_cpu(IB_LID_PERMISSIVE),
0 /* pkey index */,
recv_mad_agent->agent.port_num, &wc);
@ -2294,7 +2296,7 @@ static void timeout_sends(void *data)
spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
}
static void ib_mad_thread_completion_handler(struct ib_cq *cq)
static void ib_mad_thread_completion_handler(struct ib_cq *cq, void *arg)
{
struct ib_mad_port_private *port_priv = cq->cq_context;
@ -2574,8 +2576,7 @@ static int ib_mad_port_open(struct ib_device *device,
cq_size = (IB_MAD_QP_SEND_SIZE + IB_MAD_QP_RECV_SIZE) * 2;
port_priv->cq = ib_create_cq(port_priv->device,
(ib_comp_handler)
ib_mad_thread_completion_handler,
ib_mad_thread_completion_handler,
NULL, port_priv, cq_size);
if (IS_ERR(port_priv->cq)) {
printk(KERN_ERR PFX "Couldn't create ib_mad CQ\n");


@ -40,8 +40,8 @@
#include <linux/pci.h>
#include <linux/kthread.h>
#include <linux/workqueue.h>
#include <ib_mad.h>
#include <ib_smi.h>
#include <rdma/ib_mad.h>
#include <rdma/ib_smi.h>
#define PFX "ib_mad: "
@ -121,7 +121,7 @@ struct ib_mad_send_wr_private {
struct ib_send_wr send_wr;
struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG];
u64 wr_id; /* client WR ID */
u64 tid;
__be64 tid;
unsigned long timeout;
int retries;
int retry;
@ -144,7 +144,7 @@ struct ib_mad_local_private {
struct ib_send_wr send_wr;
struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG];
u64 wr_id; /* client WR ID */
u64 tid;
__be64 tid;
};
struct ib_mad_mgmt_method_table {
@ -210,7 +210,7 @@ extern kmem_cache_t *ib_mad_cache;
int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr);
struct ib_mad_send_wr_private *
ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, u64 tid);
ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, __be64 tid);
void ib_mad_complete_send_wr(struct ib_mad_send_wr_private *mad_send_wr,
struct ib_mad_send_wc *mad_send_wc);


@ -61,7 +61,7 @@ struct mad_rmpp_recv {
int seg_num;
int newwin;
u64 tid;
__be64 tid;
u32 src_qp;
u16 slid;
u8 mgmt_class;
@ -100,6 +100,121 @@ void ib_cancel_rmpp_recvs(struct ib_mad_agent_private *agent)
}
}
static int data_offset(u8 mgmt_class)
{
if (mgmt_class == IB_MGMT_CLASS_SUBN_ADM)
return offsetof(struct ib_sa_mad, data);
else if ((mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) &&
(mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END))
return offsetof(struct ib_vendor_mad, data);
else
return offsetof(struct ib_rmpp_mad, data);
}
static void format_ack(struct ib_rmpp_mad *ack,
struct ib_rmpp_mad *data,
struct mad_rmpp_recv *rmpp_recv)
{
unsigned long flags;
memcpy(&ack->mad_hdr, &data->mad_hdr,
data_offset(data->mad_hdr.mgmt_class));
ack->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
ack->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ACK;
ib_set_rmpp_flags(&ack->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
spin_lock_irqsave(&rmpp_recv->lock, flags);
rmpp_recv->last_ack = rmpp_recv->seg_num;
ack->rmpp_hdr.seg_num = cpu_to_be32(rmpp_recv->seg_num);
ack->rmpp_hdr.paylen_newwin = cpu_to_be32(rmpp_recv->newwin);
spin_unlock_irqrestore(&rmpp_recv->lock, flags);
}
static void ack_recv(struct mad_rmpp_recv *rmpp_recv,
struct ib_mad_recv_wc *recv_wc)
{
struct ib_mad_send_buf *msg;
struct ib_send_wr *bad_send_wr;
int hdr_len, ret;
hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp,
recv_wc->wc->pkey_index, rmpp_recv->ah, 1,
hdr_len, sizeof(struct ib_rmpp_mad) - hdr_len,
GFP_KERNEL);
if (!msg)
return;
format_ack((struct ib_rmpp_mad *) msg->mad,
(struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv);
ret = ib_post_send_mad(&rmpp_recv->agent->agent, &msg->send_wr,
&bad_send_wr);
if (ret)
ib_free_send_mad(msg);
}
static int alloc_response_msg(struct ib_mad_agent *agent,
struct ib_mad_recv_wc *recv_wc,
struct ib_mad_send_buf **msg)
{
struct ib_mad_send_buf *m;
struct ib_ah *ah;
int hdr_len;
ah = ib_create_ah_from_wc(agent->qp->pd, recv_wc->wc,
recv_wc->recv_buf.grh, agent->port_num);
if (IS_ERR(ah))
return PTR_ERR(ah);
hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
m = ib_create_send_mad(agent, recv_wc->wc->src_qp,
recv_wc->wc->pkey_index, ah, 1, hdr_len,
sizeof(struct ib_rmpp_mad) - hdr_len,
GFP_KERNEL);
if (IS_ERR(m)) {
ib_destroy_ah(ah);
return PTR_ERR(m);
}
*msg = m;
return 0;
}
static void free_msg(struct ib_mad_send_buf *msg)
{
ib_destroy_ah(msg->send_wr.wr.ud.ah);
ib_free_send_mad(msg);
}
static void nack_recv(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *recv_wc, u8 rmpp_status)
{
struct ib_mad_send_buf *msg;
struct ib_rmpp_mad *rmpp_mad;
struct ib_send_wr *bad_send_wr;
int ret;
ret = alloc_response_msg(&agent->agent, recv_wc, &msg);
if (ret)
return;
rmpp_mad = (struct ib_rmpp_mad *) msg->mad;
memcpy(rmpp_mad, recv_wc->recv_buf.mad,
data_offset(recv_wc->recv_buf.mad->mad_hdr.mgmt_class));
rmpp_mad->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
rmpp_mad->rmpp_hdr.rmpp_version = IB_MGMT_RMPP_VERSION;
rmpp_mad->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ABORT;
ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
rmpp_mad->rmpp_hdr.rmpp_status = rmpp_status;
rmpp_mad->rmpp_hdr.seg_num = 0;
rmpp_mad->rmpp_hdr.paylen_newwin = 0;
ret = ib_post_send_mad(&agent->agent, &msg->send_wr, &bad_send_wr);
if (ret)
free_msg(msg);
}
static void recv_timeout_handler(void *data)
{
struct mad_rmpp_recv *rmpp_recv = data;
@ -115,8 +230,8 @@ static void recv_timeout_handler(void *data)
list_del(&rmpp_recv->list);
spin_unlock_irqrestore(&rmpp_recv->agent->lock, flags);
/* TODO: send abort. */
rmpp_wc = rmpp_recv->rmpp_wc;
nack_recv(rmpp_recv->agent, rmpp_wc, IB_MGMT_RMPP_STATUS_T2L);
destroy_rmpp_recv(rmpp_recv);
ib_free_recv_mad(rmpp_wc);
}
@ -230,60 +345,6 @@ insert_rmpp_recv(struct ib_mad_agent_private *agent,
return cur_rmpp_recv;
}
static int data_offset(u8 mgmt_class)
{
if (mgmt_class == IB_MGMT_CLASS_SUBN_ADM)
return offsetof(struct ib_sa_mad, data);
else if ((mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) &&
(mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END))
return offsetof(struct ib_vendor_mad, data);
else
return offsetof(struct ib_rmpp_mad, data);
}
static void format_ack(struct ib_rmpp_mad *ack,
struct ib_rmpp_mad *data,
struct mad_rmpp_recv *rmpp_recv)
{
unsigned long flags;
memcpy(&ack->mad_hdr, &data->mad_hdr,
data_offset(data->mad_hdr.mgmt_class));
ack->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
ack->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ACK;
ib_set_rmpp_flags(&ack->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
spin_lock_irqsave(&rmpp_recv->lock, flags);
rmpp_recv->last_ack = rmpp_recv->seg_num;
ack->rmpp_hdr.seg_num = cpu_to_be32(rmpp_recv->seg_num);
ack->rmpp_hdr.paylen_newwin = cpu_to_be32(rmpp_recv->newwin);
spin_unlock_irqrestore(&rmpp_recv->lock, flags);
}
static void ack_recv(struct mad_rmpp_recv *rmpp_recv,
struct ib_mad_recv_wc *recv_wc)
{
struct ib_mad_send_buf *msg;
struct ib_send_wr *bad_send_wr;
int hdr_len, ret;
hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp,
recv_wc->wc->pkey_index, rmpp_recv->ah, 1,
hdr_len, sizeof(struct ib_rmpp_mad) - hdr_len,
GFP_KERNEL);
if (!msg)
return;
format_ack((struct ib_rmpp_mad *) msg->mad,
(struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv);
ret = ib_post_send_mad(&rmpp_recv->agent->agent, &msg->send_wr,
&bad_send_wr);
if (ret)
ib_free_send_mad(msg);
}
static inline int get_last_flag(struct ib_mad_recv_buf *seg)
{
struct ib_rmpp_mad *rmpp_mad;
@ -559,6 +620,34 @@ static int send_next_seg(struct ib_mad_send_wr_private *mad_send_wr)
return ib_send_mad(mad_send_wr);
}
static void abort_send(struct ib_mad_agent_private *agent, __be64 tid,
u8 rmpp_status)
{
struct ib_mad_send_wr_private *mad_send_wr;
struct ib_mad_send_wc wc;
unsigned long flags;
spin_lock_irqsave(&agent->lock, flags);
mad_send_wr = ib_find_send_mad(agent, tid);
if (!mad_send_wr)
goto out; /* Unmatched send */
if ((mad_send_wr->last_ack == mad_send_wr->total_seg) ||
(!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS))
goto out; /* Send is already done */
ib_mark_mad_done(mad_send_wr);
spin_unlock_irqrestore(&agent->lock, flags);
wc.status = IB_WC_REM_ABORT_ERR;
wc.vendor_err = rmpp_status;
wc.wr_id = mad_send_wr->wr_id;
ib_mad_complete_send_wr(mad_send_wr, &wc);
return;
out:
spin_unlock_irqrestore(&agent->lock, flags);
}
static void process_rmpp_ack(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
{
@ -568,11 +657,21 @@ static void process_rmpp_ack(struct ib_mad_agent_private *agent,
int seg_num, newwin, ret;
rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
if (rmpp_mad->rmpp_hdr.rmpp_status)
if (rmpp_mad->rmpp_hdr.rmpp_status) {
abort_send(agent, rmpp_mad->mad_hdr.tid,
IB_MGMT_RMPP_STATUS_BAD_STATUS);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
return;
}
seg_num = be32_to_cpu(rmpp_mad->rmpp_hdr.seg_num);
newwin = be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin);
if (newwin < seg_num) {
abort_send(agent, rmpp_mad->mad_hdr.tid,
IB_MGMT_RMPP_STATUS_W2S);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_W2S);
return;
}
spin_lock_irqsave(&agent->lock, flags);
mad_send_wr = ib_find_send_mad(agent, rmpp_mad->mad_hdr.tid);
@ -583,8 +682,13 @@ static void process_rmpp_ack(struct ib_mad_agent_private *agent,
(!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS))
goto out; /* Send is already done */
if (seg_num > mad_send_wr->total_seg)
goto out; /* Bad ACK */
if (seg_num > mad_send_wr->total_seg || seg_num > mad_send_wr->newwin) {
spin_unlock_irqrestore(&agent->lock, flags);
abort_send(agent, rmpp_mad->mad_hdr.tid,
IB_MGMT_RMPP_STATUS_S2B);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_S2B);
return;
}
if (newwin < mad_send_wr->newwin || seg_num < mad_send_wr->last_ack)
goto out; /* Old ACK */
@ -628,6 +732,72 @@ out:
spin_unlock_irqrestore(&agent->lock, flags);
}
static struct ib_mad_recv_wc *
process_rmpp_data(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
{
struct ib_rmpp_hdr *rmpp_hdr;
u8 rmpp_status;
rmpp_hdr = &((struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad)->rmpp_hdr;
if (rmpp_hdr->rmpp_status) {
rmpp_status = IB_MGMT_RMPP_STATUS_BAD_STATUS;
goto bad;
}
if (rmpp_hdr->seg_num == __constant_htonl(1)) {
if (!(ib_get_rmpp_flags(rmpp_hdr) & IB_MGMT_RMPP_FLAG_FIRST)) {
rmpp_status = IB_MGMT_RMPP_STATUS_BAD_SEG;
goto bad;
}
return start_rmpp(agent, mad_recv_wc);
} else {
if (ib_get_rmpp_flags(rmpp_hdr) & IB_MGMT_RMPP_FLAG_FIRST) {
rmpp_status = IB_MGMT_RMPP_STATUS_BAD_SEG;
goto bad;
}
return continue_rmpp(agent, mad_recv_wc);
}
bad:
nack_recv(agent, mad_recv_wc, rmpp_status);
ib_free_recv_mad(mad_recv_wc);
return NULL;
}
static void process_rmpp_stop(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
{
struct ib_rmpp_mad *rmpp_mad;
rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
if (rmpp_mad->rmpp_hdr.rmpp_status != IB_MGMT_RMPP_STATUS_RESX) {
abort_send(agent, rmpp_mad->mad_hdr.tid,
IB_MGMT_RMPP_STATUS_BAD_STATUS);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
} else
abort_send(agent, rmpp_mad->mad_hdr.tid,
rmpp_mad->rmpp_hdr.rmpp_status);
}
static void process_rmpp_abort(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
{
struct ib_rmpp_mad *rmpp_mad;
rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
if (rmpp_mad->rmpp_hdr.rmpp_status < IB_MGMT_RMPP_STATUS_ABORT_MIN ||
rmpp_mad->rmpp_hdr.rmpp_status > IB_MGMT_RMPP_STATUS_ABORT_MAX) {
abort_send(agent, rmpp_mad->mad_hdr.tid,
IB_MGMT_RMPP_STATUS_BAD_STATUS);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
} else
abort_send(agent, rmpp_mad->mad_hdr.tid,
rmpp_mad->rmpp_hdr.rmpp_status);
}
struct ib_mad_recv_wc *
ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent,
struct ib_mad_recv_wc *mad_recv_wc)
@ -638,23 +808,29 @@ ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent,
if (!(rmpp_mad->rmpp_hdr.rmpp_rtime_flags & IB_MGMT_RMPP_FLAG_ACTIVE))
return mad_recv_wc;
if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION)
if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION) {
abort_send(agent, rmpp_mad->mad_hdr.tid,
IB_MGMT_RMPP_STATUS_UNV);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_UNV);
goto out;
}
switch (rmpp_mad->rmpp_hdr.rmpp_type) {
case IB_MGMT_RMPP_TYPE_DATA:
if (rmpp_mad->rmpp_hdr.seg_num == __constant_htonl(1))
return start_rmpp(agent, mad_recv_wc);
else
return continue_rmpp(agent, mad_recv_wc);
return process_rmpp_data(agent, mad_recv_wc);
case IB_MGMT_RMPP_TYPE_ACK:
process_rmpp_ack(agent, mad_recv_wc);
break;
case IB_MGMT_RMPP_TYPE_STOP:
process_rmpp_stop(agent, mad_recv_wc);
break;
case IB_MGMT_RMPP_TYPE_ABORT:
/* TODO: process_rmpp_nack(agent, mad_recv_wc); */
process_rmpp_abort(agent, mad_recv_wc);
break;
default:
abort_send(agent, rmpp_mad->mad_hdr.tid,
IB_MGMT_RMPP_STATUS_BADT);
nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BADT);
break;
}
out:
@ -714,7 +890,10 @@ int ib_process_rmpp_send_wc(struct ib_mad_send_wr_private *mad_send_wr,
if (rmpp_mad->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA) {
msg = (struct ib_mad_send_buf *) (unsigned long)
mad_send_wc->wr_id;
ib_free_send_mad(msg);
if (rmpp_mad->rmpp_hdr.rmpp_type == IB_MGMT_RMPP_TYPE_ACK)
ib_free_send_mad(msg);
else
free_msg(msg);
return IB_RMPP_RESULT_INTERNAL; /* ACK, STOP, or ABORT */
}
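The ACK/NACK split on free mirrors an ownership difference set up earlier: ack_recv() reuses the address handle cached in rmpp_recv, while nack_recv() gets a fresh AH from alloc_response_msg(), so only the latter's buffers must go through free_msg(), which destroys the AH before freeing. An illustrative helper capturing the rule:

    static void example_free_rmpp_response(struct ib_mad_send_buf *msg,
                                           int is_ack)
    {
            if (is_ack)
                    ib_free_send_mad(msg);  /* AH belongs to rmpp_recv */
            else
                    free_msg(msg);          /* destroys the private AH too */
    }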


@ -1,5 +1,6 @@
/*
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -32,7 +33,7 @@
* $Id: packer.c 1349 2004-12-16 21:09:43Z roland $
*/
#include <ib_pack.h>
#include <rdma/ib_pack.h>
static u64 value_read(int offset, int size, void *structure)
{


@ -1,6 +1,6 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Voltaire, Inc. All rights reserved.
* Copyright (c) 2005 Voltaire, Inc.  All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -44,8 +44,8 @@
#include <linux/kref.h>
#include <linux/idr.h>
#include <ib_pack.h>
#include <ib_sa.h>
#include <rdma/ib_pack.h>
#include <rdma/ib_sa.h>
MODULE_AUTHOR("Roland Dreier");
MODULE_DESCRIPTION("InfiniBand subnet administration query support");


@ -1,9 +1,10 @@
/*
* Copyright (c) 2004 Mellanox Technologies Ltd. All rights reserved.
* Copyright (c) 2004 Infinicon Corporation. All rights reserved.
* Copyright (c) 2004 Intel Corporation. All rights reserved.
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2004 Voltaire Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Mellanox Technologies Ltd. All rights reserved.
* Copyright (c) 2004, 2005 Infinicon Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Intel Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Topspin Corporation. All rights reserved.
* Copyright (c) 2004, 2005 Voltaire Corporation. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -36,7 +37,7 @@
* $Id: smi.c 1389 2004-12-27 22:56:47Z roland $
*/
#include <ib_smi.h>
#include <rdma/ib_smi.h>
#include "smi.h"
/*


@ -1,5 +1,7 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies Ltd. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -34,7 +36,7 @@
#include "core_priv.h"
#include <ib_mad.h>
#include <rdma/ib_mad.h>
struct ib_port {
struct kobject kobj;
@ -253,14 +255,14 @@ static ssize_t show_port_gid(struct ib_port *p, struct port_attribute *attr,
return ret;
return sprintf(buf, "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
be16_to_cpu(((u16 *) gid.raw)[0]),
be16_to_cpu(((u16 *) gid.raw)[1]),
be16_to_cpu(((u16 *) gid.raw)[2]),
be16_to_cpu(((u16 *) gid.raw)[3]),
be16_to_cpu(((u16 *) gid.raw)[4]),
be16_to_cpu(((u16 *) gid.raw)[5]),
be16_to_cpu(((u16 *) gid.raw)[6]),
be16_to_cpu(((u16 *) gid.raw)[7]));
be16_to_cpu(((__be16 *) gid.raw)[0]),
be16_to_cpu(((__be16 *) gid.raw)[1]),
be16_to_cpu(((__be16 *) gid.raw)[2]),
be16_to_cpu(((__be16 *) gid.raw)[3]),
be16_to_cpu(((__be16 *) gid.raw)[4]),
be16_to_cpu(((__be16 *) gid.raw)[5]),
be16_to_cpu(((__be16 *) gid.raw)[6]),
be16_to_cpu(((__be16 *) gid.raw)[7]));
}
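The sysfs changes are pure annotation: GID and GUID bytes already sit in network order in the raw buffer, so the casts just name that fact for sparse before be16_to_cpu() swaps each group for printing. A minimal sketch of the same pattern as a standalone helper (hypothetical name):

    static int example_format_gid(char *buf, const u8 *raw)
    {
            const __be16 *p = (const __be16 *) raw;

            return sprintf(buf,
                           "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
                           be16_to_cpu(p[0]), be16_to_cpu(p[1]),
                           be16_to_cpu(p[2]), be16_to_cpu(p[3]),
                           be16_to_cpu(p[4]), be16_to_cpu(p[5]),
                           be16_to_cpu(p[6]), be16_to_cpu(p[7]));
    }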
static ssize_t show_port_pkey(struct ib_port *p, struct port_attribute *attr,
@ -332,11 +334,11 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
break;
case 16:
ret = sprintf(buf, "%u\n",
be16_to_cpup((u16 *)(out_mad->data + 40 + offset / 8)));
be16_to_cpup((__be16 *)(out_mad->data + 40 + offset / 8)));
break;
case 32:
ret = sprintf(buf, "%u\n",
be32_to_cpup((u32 *)(out_mad->data + 40 + offset / 8)));
be32_to_cpup((__be32 *)(out_mad->data + 40 + offset / 8)));
break;
default:
ret = 0;
@ -598,10 +600,10 @@ static ssize_t show_sys_image_guid(struct class_device *cdev, char *buf)
return ret;
return sprintf(buf, "%04x:%04x:%04x:%04x\n",
be16_to_cpu(((u16 *) &attr.sys_image_guid)[0]),
be16_to_cpu(((u16 *) &attr.sys_image_guid)[1]),
be16_to_cpu(((u16 *) &attr.sys_image_guid)[2]),
be16_to_cpu(((u16 *) &attr.sys_image_guid)[3]));
be16_to_cpu(((__be16 *) &attr.sys_image_guid)[0]),
be16_to_cpu(((__be16 *) &attr.sys_image_guid)[1]),
be16_to_cpu(((__be16 *) &attr.sys_image_guid)[2]),
be16_to_cpu(((__be16 *) &attr.sys_image_guid)[3]));
}
static ssize_t show_node_guid(struct class_device *cdev, char *buf)
@ -615,10 +617,10 @@ static ssize_t show_node_guid(struct class_device *cdev, char *buf)
return ret;
return sprintf(buf, "%04x:%04x:%04x:%04x\n",
be16_to_cpu(((u16 *) &attr.node_guid)[0]),
be16_to_cpu(((u16 *) &attr.node_guid)[1]),
be16_to_cpu(((u16 *) &attr.node_guid)[2]),
be16_to_cpu(((u16 *) &attr.node_guid)[3]));
be16_to_cpu(((__be16 *) &attr.node_guid)[0]),
be16_to_cpu(((__be16 *) &attr.node_guid)[1]),
be16_to_cpu(((__be16 *) &attr.node_guid)[2]),
be16_to_cpu(((__be16 *) &attr.node_guid)[3]));
}
static CLASS_DEVICE_ATTR(node_type, S_IRUGO, show_node_type, NULL);


@ -1,5 +1,6 @@
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -73,14 +74,18 @@ static struct semaphore ctx_id_mutex;
static struct idr ctx_id_table;
static int ctx_id_rover = 0;
static struct ib_ucm_context *ib_ucm_ctx_get(int id)
static struct ib_ucm_context *ib_ucm_ctx_get(struct ib_ucm_file *file, int id)
{
struct ib_ucm_context *ctx;
down(&ctx_id_mutex);
ctx = idr_find(&ctx_id_table, id);
if (ctx)
ctx->ref++;
if (!ctx)
ctx = ERR_PTR(-ENOENT);
else if (ctx->file != file)
ctx = ERR_PTR(-EINVAL);
else
atomic_inc(&ctx->ref);
up(&ctx_id_mutex);
return ctx;
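The reworked lookup folds the old ref++/ownership checks into one place and reports failure as an ERR_PTR instead of NULL, which is what lets the command handlers below collapse to a single pattern (illustrative, mirroring ib_ucm_listen and ib_ucm_establish):

    static ssize_t example_handler(struct ib_ucm_file *file, int id)
    {
            struct ib_ucm_context *ctx;
            ssize_t result;

            ctx = ib_ucm_ctx_get(file, id);         /* checks file ownership */
            if (IS_ERR(ctx))
                    return PTR_ERR(ctx);

            result = ib_cm_establish(ctx->cm_id);   /* any per-id operation */
            ib_ucm_ctx_put(ctx);
            return result;
    }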
@ -88,21 +93,37 @@ static struct ib_ucm_context *ib_ucm_ctx_get(int id)
static void ib_ucm_ctx_put(struct ib_ucm_context *ctx)
{
if (atomic_dec_and_test(&ctx->ref))
wake_up(&ctx->wait);
}
static ssize_t ib_ucm_destroy_ctx(struct ib_ucm_file *file, int id)
{
struct ib_ucm_context *ctx;
struct ib_ucm_event *uevent;
down(&ctx_id_mutex);
ctx->ref--;
if (!ctx->ref)
ctx = idr_find(&ctx_id_table, id);
if (!ctx)
ctx = ERR_PTR(-ENOENT);
else if (ctx->file != file)
ctx = ERR_PTR(-EINVAL);
else
idr_remove(&ctx_id_table, ctx->id);
up(&ctx_id_mutex);
if (ctx->ref)
return;
if (IS_ERR(ctx))
return PTR_ERR(ctx);
down(&ctx->file->mutex);
atomic_dec(&ctx->ref);
wait_event(ctx->wait, !atomic_read(&ctx->ref));
/* No new events will be generated after destroying the cm_id. */
if (!IS_ERR(ctx->cm_id))
ib_destroy_cm_id(ctx->cm_id);
/* Cleanup events not yet reported to the user. */
down(&file->mutex);
list_del(&ctx->file_list);
while (!list_empty(&ctx->events)) {
@ -117,13 +138,10 @@ static void ib_ucm_ctx_put(struct ib_ucm_context *ctx)
kfree(uevent);
}
up(&file->mutex);
up(&ctx->file->mutex);
ucm_dbg("Destroyed CM ID <%d>\n", ctx->id);
ib_destroy_cm_id(ctx->cm_id);
kfree(ctx);
return 0;
}
static struct ib_ucm_context *ib_ucm_ctx_alloc(struct ib_ucm_file *file)
@ -135,11 +153,11 @@ static struct ib_ucm_context *ib_ucm_ctx_alloc(struct ib_ucm_file *file)
if (!ctx)
return NULL;
ctx->ref = 1; /* user reference */
atomic_set(&ctx->ref, 1);
init_waitqueue_head(&ctx->wait);
ctx->file = file;
INIT_LIST_HEAD(&ctx->events);
init_MUTEX(&ctx->mutex);
list_add_tail(&ctx->file_list, &file->ctxs);
@ -177,8 +195,8 @@ static void ib_ucm_event_path_get(struct ib_ucm_path_rec *upath,
if (!kpath || !upath)
return;
memcpy(upath->dgid, kpath->dgid.raw, sizeof(union ib_gid));
memcpy(upath->sgid, kpath->sgid.raw, sizeof(union ib_gid));
memcpy(upath->dgid, kpath->dgid.raw, sizeof *upath->dgid);
memcpy(upath->sgid, kpath->sgid.raw, sizeof *upath->sgid);
upath->dlid = kpath->dlid;
upath->slid = kpath->slid;
@ -201,10 +219,11 @@ static void ib_ucm_event_path_get(struct ib_ucm_path_rec *upath,
kpath->packet_life_time_selector;
}
static void ib_ucm_event_req_get(struct ib_ucm_req_event_resp *ureq,
static void ib_ucm_event_req_get(struct ib_ucm_context *ctx,
struct ib_ucm_req_event_resp *ureq,
struct ib_cm_req_event_param *kreq)
{
ureq->listen_id = (long)kreq->listen_id->context;
ureq->listen_id = ctx->id;
ureq->remote_ca_guid = kreq->remote_ca_guid;
ureq->remote_qkey = kreq->remote_qkey;
@ -240,34 +259,11 @@ static void ib_ucm_event_rep_get(struct ib_ucm_rep_event_resp *urep,
urep->srq = krep->srq;
}
static void ib_ucm_event_rej_get(struct ib_ucm_rej_event_resp *urej,
struct ib_cm_rej_event_param *krej)
{
urej->reason = krej->reason;
}
static void ib_ucm_event_mra_get(struct ib_ucm_mra_event_resp *umra,
struct ib_cm_mra_event_param *kmra)
{
umra->timeout = kmra->service_timeout;
}
static void ib_ucm_event_lap_get(struct ib_ucm_lap_event_resp *ulap,
struct ib_cm_lap_event_param *klap)
{
ib_ucm_event_path_get(&ulap->path, klap->alternate_path);
}
static void ib_ucm_event_apr_get(struct ib_ucm_apr_event_resp *uapr,
struct ib_cm_apr_event_param *kapr)
{
uapr->status = kapr->ap_status;
}
static void ib_ucm_event_sidr_req_get(struct ib_ucm_sidr_req_event_resp *ureq,
static void ib_ucm_event_sidr_req_get(struct ib_ucm_context *ctx,
struct ib_ucm_sidr_req_event_resp *ureq,
struct ib_cm_sidr_req_event_param *kreq)
{
ureq->listen_id = (long)kreq->listen_id->context;
ureq->listen_id = ctx->id;
ureq->pkey = kreq->pkey;
}
@ -279,19 +275,18 @@ static void ib_ucm_event_sidr_rep_get(struct ib_ucm_sidr_rep_event_resp *urep,
urep->qpn = krep->qpn;
};
static int ib_ucm_event_process(struct ib_cm_event *evt,
static int ib_ucm_event_process(struct ib_ucm_context *ctx,
struct ib_cm_event *evt,
struct ib_ucm_event *uvt)
{
void *info = NULL;
int result;
switch (evt->event) {
case IB_CM_REQ_RECEIVED:
ib_ucm_event_req_get(&uvt->resp.u.req_resp,
ib_ucm_event_req_get(ctx, &uvt->resp.u.req_resp,
&evt->param.req_rcvd);
uvt->data_len = IB_CM_REQ_PRIVATE_DATA_SIZE;
uvt->resp.present |= (evt->param.req_rcvd.primary_path ?
IB_UCM_PRES_PRIMARY : 0);
uvt->resp.present = IB_UCM_PRES_PRIMARY;
uvt->resp.present |= (evt->param.req_rcvd.alternate_path ?
IB_UCM_PRES_ALTERNATE : 0);
break;
@ -299,57 +294,46 @@ static int ib_ucm_event_process(struct ib_cm_event *evt,
ib_ucm_event_rep_get(&uvt->resp.u.rep_resp,
&evt->param.rep_rcvd);
uvt->data_len = IB_CM_REP_PRIVATE_DATA_SIZE;
break;
case IB_CM_RTU_RECEIVED:
uvt->data_len = IB_CM_RTU_PRIVATE_DATA_SIZE;
uvt->resp.u.send_status = evt->param.send_status;
break;
case IB_CM_DREQ_RECEIVED:
uvt->data_len = IB_CM_DREQ_PRIVATE_DATA_SIZE;
uvt->resp.u.send_status = evt->param.send_status;
break;
case IB_CM_DREP_RECEIVED:
uvt->data_len = IB_CM_DREP_PRIVATE_DATA_SIZE;
uvt->resp.u.send_status = evt->param.send_status;
break;
case IB_CM_MRA_RECEIVED:
ib_ucm_event_mra_get(&uvt->resp.u.mra_resp,
&evt->param.mra_rcvd);
uvt->resp.u.mra_resp.timeout =
evt->param.mra_rcvd.service_timeout;
uvt->data_len = IB_CM_MRA_PRIVATE_DATA_SIZE;
break;
case IB_CM_REJ_RECEIVED:
ib_ucm_event_rej_get(&uvt->resp.u.rej_resp,
&evt->param.rej_rcvd);
uvt->resp.u.rej_resp.reason = evt->param.rej_rcvd.reason;
uvt->data_len = IB_CM_REJ_PRIVATE_DATA_SIZE;
uvt->info_len = evt->param.rej_rcvd.ari_length;
info = evt->param.rej_rcvd.ari;
break;
case IB_CM_LAP_RECEIVED:
ib_ucm_event_lap_get(&uvt->resp.u.lap_resp,
&evt->param.lap_rcvd);
ib_ucm_event_path_get(&uvt->resp.u.lap_resp.path,
evt->param.lap_rcvd.alternate_path);
uvt->data_len = IB_CM_LAP_PRIVATE_DATA_SIZE;
uvt->resp.present |= (evt->param.lap_rcvd.alternate_path ?
IB_UCM_PRES_ALTERNATE : 0);
uvt->resp.present = IB_UCM_PRES_ALTERNATE;
break;
case IB_CM_APR_RECEIVED:
ib_ucm_event_apr_get(&uvt->resp.u.apr_resp,
&evt->param.apr_rcvd);
uvt->resp.u.apr_resp.status = evt->param.apr_rcvd.ap_status;
uvt->data_len = IB_CM_APR_PRIVATE_DATA_SIZE;
uvt->info_len = evt->param.apr_rcvd.info_len;
info = evt->param.apr_rcvd.apr_info;
break;
case IB_CM_SIDR_REQ_RECEIVED:
ib_ucm_event_sidr_req_get(&uvt->resp.u.sidr_req_resp,
ib_ucm_event_sidr_req_get(ctx, &uvt->resp.u.sidr_req_resp,
&evt->param.sidr_req_rcvd);
uvt->data_len = IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE;
break;
case IB_CM_SIDR_REP_RECEIVED:
ib_ucm_event_sidr_rep_get(&uvt->resp.u.sidr_rep_resp,
@ -357,43 +341,35 @@ static int ib_ucm_event_process(struct ib_cm_event *evt,
uvt->data_len = IB_CM_SIDR_REP_PRIVATE_DATA_SIZE;
uvt->info_len = evt->param.sidr_rep_rcvd.info_len;
info = evt->param.sidr_rep_rcvd.info;
break;
default:
uvt->resp.u.send_status = evt->param.send_status;
break;
}
if (uvt->data_len && evt->private_data) {
if (uvt->data_len) {
uvt->data = kmalloc(uvt->data_len, GFP_KERNEL);
if (!uvt->data) {
result = -ENOMEM;
goto error;
}
if (!uvt->data)
goto err1;
memcpy(uvt->data, evt->private_data, uvt->data_len);
uvt->resp.present |= IB_UCM_PRES_DATA;
}
if (uvt->info_len && info) {
if (uvt->info_len) {
uvt->info = kmalloc(uvt->info_len, GFP_KERNEL);
if (!uvt->info) {
result = -ENOMEM;
goto error;
}
if (!uvt->info)
goto err2;
memcpy(uvt->info, info, uvt->info_len);
uvt->resp.present |= IB_UCM_PRES_INFO;
}
return 0;
error:
kfree(uvt->info);
err2:
kfree(uvt->data);
return result;
err1:
return -ENOMEM;
}
static int ib_ucm_event_handler(struct ib_cm_id *cm_id,
@ -403,63 +379,42 @@ static int ib_ucm_event_handler(struct ib_cm_id *cm_id,
struct ib_ucm_context *ctx;
int result = 0;
int id;
/*
* lookup correct context based on event type.
*/
switch (event->event) {
case IB_CM_REQ_RECEIVED:
id = (long)event->param.req_rcvd.listen_id->context;
break;
case IB_CM_SIDR_REQ_RECEIVED:
id = (long)event->param.sidr_req_rcvd.listen_id->context;
break;
default:
id = (long)cm_id->context;
break;
}
ucm_dbg("Event. CM ID <%d> event <%d>\n", id, event->event);
ctx = ib_ucm_ctx_get(id);
if (!ctx)
return -ENOENT;
ctx = cm_id->context;
if (event->event == IB_CM_REQ_RECEIVED ||
event->event == IB_CM_SIDR_REQ_RECEIVED)
id = IB_UCM_CM_ID_INVALID;
else
id = ctx->id;
uevent = kmalloc(sizeof(*uevent), GFP_KERNEL);
if (!uevent) {
result = -ENOMEM;
goto done;
}
if (!uevent)
goto err1;
memset(uevent, 0, sizeof(*uevent));
uevent->resp.id = id;
uevent->resp.event = event->event;
result = ib_ucm_event_process(event, uevent);
result = ib_ucm_event_process(ctx, event, uevent);
if (result)
goto done;
goto err2;
uevent->ctx = ctx;
uevent->cm_id = ((event->event == IB_CM_REQ_RECEIVED ||
event->event == IB_CM_SIDR_REQ_RECEIVED ) ?
cm_id : NULL);
uevent->cm_id = (id == IB_UCM_CM_ID_INVALID) ? cm_id : NULL;
down(&ctx->file->mutex);
list_add_tail(&uevent->file_list, &ctx->file->events);
list_add_tail(&uevent->ctx_list, &ctx->events);
wake_up_interruptible(&ctx->file->poll_wait);
up(&ctx->file->mutex);
done:
ctx->error = result;
ib_ucm_ctx_put(ctx); /* func reference */
return result;
return 0;
err2:
kfree(uevent);
err1:
/* Destroy new cm_id's */
return (id == IB_UCM_CM_ID_INVALID);
}
static ssize_t ib_ucm_event(struct ib_ucm_file *file,
@ -517,9 +472,8 @@ static ssize_t ib_ucm_event(struct ib_ucm_file *file,
goto done;
}
ctx->cm_id = uevent->cm_id;
ctx->cm_id->cm_handler = ib_ucm_event_handler;
ctx->cm_id->context = (void *)(unsigned long)ctx->id;
ctx->cm_id = uevent->cm_id;
ctx->cm_id->context = ctx;
uevent->resp.id = ctx->id;
@ -585,30 +539,29 @@ static ssize_t ib_ucm_create_id(struct ib_ucm_file *file,
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
down(&file->mutex);
ctx = ib_ucm_ctx_alloc(file);
up(&file->mutex);
if (!ctx)
return -ENOMEM;
ctx->cm_id = ib_create_cm_id(ib_ucm_event_handler,
(void *)(unsigned long)ctx->id);
if (!ctx->cm_id) {
result = -ENOMEM;
goto err_cm;
ctx->cm_id = ib_create_cm_id(ib_ucm_event_handler, ctx);
if (IS_ERR(ctx->cm_id)) {
result = PTR_ERR(ctx->cm_id);
goto err;
}
resp.id = ctx->id;
if (copy_to_user((void __user *)(unsigned long)cmd.response,
&resp, sizeof(resp))) {
result = -EFAULT;
goto err_ret;
goto err;
}
return 0;
err_ret:
ib_destroy_cm_id(ctx->cm_id);
err_cm:
ib_ucm_ctx_put(ctx); /* user reference */
err:
ib_ucm_destroy_ctx(file, ctx->id);
return result;
}
@ -617,19 +570,11 @@ static ssize_t ib_ucm_destroy_id(struct ib_ucm_file *file,
int in_len, int out_len)
{
struct ib_ucm_destroy_id cmd;
struct ib_ucm_context *ctx;
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx)
return -ENOENT;
ib_ucm_ctx_put(ctx); /* user reference */
ib_ucm_ctx_put(ctx); /* func reference */
return 0;
return ib_ucm_destroy_ctx(file, cmd.id);
}
static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file,
@ -647,15 +592,9 @@ static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file,
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx)
return -ENOENT;
down(&ctx->file->mutex);
if (ctx->file != file) {
result = -EINVAL;
goto done;
}
ctx = ib_ucm_ctx_get(file, cmd.id);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
resp.service_id = ctx->cm_id->service_id;
resp.service_mask = ctx->cm_id->service_mask;
@ -666,9 +605,7 @@ static ssize_t ib_ucm_attr_id(struct ib_ucm_file *file,
&resp, sizeof(resp)))
result = -EFAULT;
done:
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
ib_ucm_ctx_put(ctx);
return result;
}
@ -683,19 +620,12 @@ static ssize_t ib_ucm_listen(struct ib_ucm_file *file,
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx)
return -ENOENT;
ctx = ib_ucm_ctx_get(file, cmd.id);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
result = ib_cm_listen(ctx->cm_id, cmd.service_id,
cmd.service_mask);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
result = ib_cm_listen(ctx->cm_id, cmd.service_id, cmd.service_mask);
ib_ucm_ctx_put(ctx);
return result;
}
@ -710,18 +640,12 @@ static ssize_t ib_ucm_establish(struct ib_ucm_file *file,
if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
return -EFAULT;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx)
return -ENOENT;
ctx = ib_ucm_ctx_get(file, cmd.id);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
result = ib_cm_establish(ctx->cm_id);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
result = ib_cm_establish(ctx->cm_id);
ib_ucm_ctx_put(ctx);
return result;
}
@ -768,8 +692,8 @@ static int ib_ucm_path_get(struct ib_sa_path_rec **path, u64 src)
return -EFAULT;
}
memcpy(sa_path->dgid.raw, ucm_path.dgid, sizeof(union ib_gid));
memcpy(sa_path->sgid.raw, ucm_path.sgid, sizeof(union ib_gid));
memcpy(sa_path->dgid.raw, ucm_path.dgid, sizeof sa_path->dgid);
memcpy(sa_path->sgid.raw, ucm_path.sgid, sizeof sa_path->sgid);
sa_path->dlid = ucm_path.dlid;
sa_path->slid = ucm_path.slid;
@ -839,25 +763,17 @@ static ssize_t ib_ucm_send_req(struct ib_ucm_file *file,
param.max_cm_retries = cmd.max_cm_retries;
param.srq = cmd.srq;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx) {
result = -ENOENT;
goto done;
}
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
ctx = ib_ucm_ctx_get(file, cmd.id);
if (!IS_ERR(ctx)) {
result = ib_send_cm_req(ctx->cm_id, &param);
ib_ucm_ctx_put(ctx);
} else
result = PTR_ERR(ctx);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(param.private_data);
kfree(param.primary_path);
kfree(param.alternate_path);
return result;
}
@ -890,23 +806,14 @@ static ssize_t ib_ucm_send_rep(struct ib_ucm_file *file,
param.rnr_retry_count = cmd.rnr_retry_count;
param.srq = cmd.srq;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx) {
result = -ENOENT;
goto done;
}
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
ctx = ib_ucm_ctx_get(file, cmd.id);
if (!IS_ERR(ctx)) {
result = ib_send_cm_rep(ctx->cm_id, &param);
ib_ucm_ctx_put(ctx);
} else
result = PTR_ERR(ctx);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(param.private_data);
return result;
}
@ -928,23 +835,14 @@ static ssize_t ib_ucm_send_private_data(struct ib_ucm_file *file,
if (result)
return result;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx) {
result = -ENOENT;
goto done;
}
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
ctx = ib_ucm_ctx_get(file, cmd.id);
if (!IS_ERR(ctx)) {
result = func(ctx->cm_id, private_data, cmd.len);
ib_ucm_ctx_put(ctx);
} else
result = PTR_ERR(ctx);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(private_data);
return result;
}
@ -995,26 +893,17 @@ static ssize_t ib_ucm_send_info(struct ib_ucm_file *file,
if (result)
goto done;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx) {
result = -ENOENT;
goto done;
}
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
result = func(ctx->cm_id, cmd.status,
info, cmd.info_len,
ctx = ib_ucm_ctx_get(file, cmd.id);
if (!IS_ERR(ctx)) {
result = func(ctx->cm_id, cmd.status, info, cmd.info_len,
data, cmd.data_len);
ib_ucm_ctx_put(ctx);
} else
result = PTR_ERR(ctx);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(data);
kfree(info);
return result;
}
@ -1048,24 +937,14 @@ static ssize_t ib_ucm_send_mra(struct ib_ucm_file *file,
if (result)
return result;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx) {
result = -ENOENT;
goto done;
}
ctx = ib_ucm_ctx_get(file, cmd.id);
if (!IS_ERR(ctx)) {
result = ib_send_cm_mra(ctx->cm_id, cmd.timeout, data, cmd.len);
ib_ucm_ctx_put(ctx);
} else
result = PTR_ERR(ctx);
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
result = ib_send_cm_mra(ctx->cm_id, cmd.timeout,
data, cmd.len);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(data);
return result;
}
@ -1090,24 +969,16 @@ static ssize_t ib_ucm_send_lap(struct ib_ucm_file *file,
if (result)
goto done;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx) {
result = -ENOENT;
goto done;
}
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
ctx = ib_ucm_ctx_get(file, cmd.id);
if (!IS_ERR(ctx)) {
result = ib_send_cm_lap(ctx->cm_id, path, data, cmd.len);
ib_ucm_ctx_put(ctx);
} else
result = PTR_ERR(ctx);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(data);
kfree(path);
return result;
}
@ -1140,24 +1011,16 @@ static ssize_t ib_ucm_send_sidr_req(struct ib_ucm_file *file,
param.max_cm_retries = cmd.max_cm_retries;
param.pkey = cmd.pkey;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx) {
result = -ENOENT;
goto done;
}
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
ctx = ib_ucm_ctx_get(file, cmd.id);
if (!IS_ERR(ctx)) {
result = ib_send_cm_sidr_req(ctx->cm_id, &param);
ib_ucm_ctx_put(ctx);
} else
result = PTR_ERR(ctx);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(param.private_data);
kfree(param.path);
return result;
}
@ -1184,30 +1047,22 @@ static ssize_t ib_ucm_send_sidr_rep(struct ib_ucm_file *file,
if (result)
goto done;
param.qp_num = cmd.qpn;
param.qkey = cmd.qkey;
param.status = cmd.status;
param.info_length = cmd.info_len;
param.private_data_len = cmd.data_len;
param.qp_num = cmd.qpn;
param.qkey = cmd.qkey;
param.status = cmd.status;
param.info_length = cmd.info_len;
param.private_data_len = cmd.data_len;
ctx = ib_ucm_ctx_get(cmd.id);
if (!ctx) {
result = -ENOENT;
goto done;
}
down(&ctx->file->mutex);
if (ctx->file != file)
result = -EINVAL;
else
ctx = ib_ucm_ctx_get(file, cmd.id);
if (!IS_ERR(ctx)) {
result = ib_send_cm_sidr_rep(ctx->cm_id, &param);
ib_ucm_ctx_put(ctx);
} else
result = PTR_ERR(ctx);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* func reference */
done:
kfree(param.private_data);
kfree(param.info);
return result;
}
@ -1305,22 +1160,17 @@ static int ib_ucm_close(struct inode *inode, struct file *filp)
struct ib_ucm_context *ctx;
down(&file->mutex);
while (!list_empty(&file->ctxs)) {
ctx = list_entry(file->ctxs.next,
struct ib_ucm_context, file_list);
up(&ctx->file->mutex);
ib_ucm_ctx_put(ctx); /* user reference */
up(&file->mutex);
ib_ucm_destroy_ctx(file, ctx->id);
down(&file->mutex);
}
up(&file->mutex);
kfree(file);
ucm_dbg("Deleted struct\n");
return 0;
}

View File

@ -40,17 +40,15 @@
#include <linux/cdev.h>
#include <linux/idr.h>
#include <ib_cm.h>
#include <ib_user_cm.h>
#include <rdma/ib_cm.h>
#include <rdma/ib_user_cm.h>
#define IB_UCM_CM_ID_INVALID 0xffffffff
struct ib_ucm_file {
struct semaphore mutex;
struct file *filp;
/*
* list of pending events
*/
struct list_head ctxs; /* list of active connections */
struct list_head events; /* list of pending events */
wait_queue_head_t poll_wait;
@ -58,12 +56,11 @@ struct ib_ucm_file {
struct ib_ucm_context {
int id;
int ref;
int error;
wait_queue_head_t wait;
atomic_t ref;
struct ib_ucm_file *file;
struct ib_cm_id *cm_id;
struct semaphore mutex;
struct list_head events; /* list of pending events. */
struct list_head file_list; /* member in file ctx list */

View File

@ -1,5 +1,6 @@
/*
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -34,7 +35,7 @@
#include <linux/errno.h>
#include <ib_pack.h>
#include <rdma/ib_pack.h>
#define STRUCT_FIELD(header, field) \
.struct_offset_bytes = offsetof(struct ib_unpacked_ ## header, field), \
@ -194,6 +195,7 @@ void ib_ud_header_init(int payload_bytes,
struct ib_ud_header *header)
{
int header_len;
u16 packet_length;
memset(header, 0, sizeof *header);
@ -208,7 +210,7 @@ void ib_ud_header_init(int payload_bytes,
header->lrh.link_version = 0;
header->lrh.link_next_header =
grh_present ? IB_LNH_IBA_GLOBAL : IB_LNH_IBA_LOCAL;
header->lrh.packet_length = (IB_LRH_BYTES +
packet_length = (IB_LRH_BYTES +
IB_BTH_BYTES +
IB_DETH_BYTES +
payload_bytes +
@ -217,8 +219,7 @@ void ib_ud_header_init(int payload_bytes,
header->grh_present = grh_present;
if (grh_present) {
header->lrh.packet_length += IB_GRH_BYTES / 4;
packet_length += IB_GRH_BYTES / 4;
header->grh.ip_version = 6;
header->grh.payload_length =
cpu_to_be16((IB_BTH_BYTES +
@ -229,7 +230,7 @@ void ib_ud_header_init(int payload_bytes,
header->grh.next_header = 0x1b;
}
cpu_to_be16s(&header->lrh.packet_length);
header->lrh.packet_length = cpu_to_be16(packet_length);
if (header->immediate_present)
header->bth.opcode = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
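
The packet_length rework above builds the LRH length in host byte order and byte-swaps it exactly once at the end, rather than repeatedly mutating a big-endian field in place. LRH lengths count 32-bit words; a small standalone check of the arithmetic, with a hypothetical 256-byte payload and the header sizes from ib_pack.h (LRH 8, BTH 12, DETH 8, GRH 40 bytes, plus the 4-byte ICRC):

#include <stdio.h>

int main(void)
{
	int payload_bytes = 256;	/* hypothetical payload */
	int words;

	/* LRH + BTH + DETH + payload + ICRC, rounded up to whole words */
	words = (8 + 12 + 8 + payload_bytes + 4 + 3) / 4;	/* 72 */
	words += 40 / 4;	/* a GRH adds 10 more words when present */
	printf("lrh.packet_length = %d words\n", words);	/* 82 */
	return 0;
}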

View File

@ -1,6 +1,6 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Voltaire, Inc. All rights reserved.
* Copyright (c) 2005 Voltaire, Inc. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
@ -49,8 +49,8 @@
#include <asm/uaccess.h>
#include <asm/semaphore.h>
#include <ib_mad.h>
#include <ib_user_mad.h>
#include <rdma/ib_mad.h>
#include <rdma/ib_user_mad.h>
MODULE_AUTHOR("Roland Dreier");
MODULE_DESCRIPTION("InfiniBand userspace MAD packet access");
@ -271,7 +271,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
struct ib_send_wr *bad_wr;
struct ib_rmpp_mad *rmpp_mad;
u8 method;
u64 *tid;
__be64 *tid;
int ret, length, hdr_len, data_len, rmpp_hdr_size;
int rmpp_active = 0;
@ -316,7 +316,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
if (packet->mad.hdr.grh_present) {
ah_attr.ah_flags = IB_AH_GRH;
memcpy(ah_attr.grh.dgid.raw, packet->mad.hdr.gid, 16);
ah_attr.grh.flow_label = packet->mad.hdr.flow_label;
ah_attr.grh.flow_label = be32_to_cpu(packet->mad.hdr.flow_label);
ah_attr.grh.hop_limit = packet->mad.hdr.hop_limit;
ah_attr.grh.traffic_class = packet->mad.hdr.traffic_class;
}

View File

@ -1,6 +1,8 @@
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2005 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -43,8 +45,8 @@
#include <linux/kref.h>
#include <linux/idr.h>
#include <ib_verbs.h>
#include <ib_user_verbs.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_user_verbs.h>
struct ib_uverbs_device {
int devnum;
@ -97,10 +99,12 @@ extern struct idr ib_uverbs_mw_idr;
extern struct idr ib_uverbs_ah_idr;
extern struct idr ib_uverbs_cq_idr;
extern struct idr ib_uverbs_qp_idr;
extern struct idr ib_uverbs_srq_idr;
void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context);
void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr);
void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr);
void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr);
int ib_umem_get(struct ib_device *dev, struct ib_umem *mem,
void *addr, size_t size, int write);
@ -129,5 +133,8 @@ IB_UVERBS_DECLARE_CMD(modify_qp);
IB_UVERBS_DECLARE_CMD(destroy_qp);
IB_UVERBS_DECLARE_CMD(attach_mcast);
IB_UVERBS_DECLARE_CMD(detach_mcast);
IB_UVERBS_DECLARE_CMD(create_srq);
IB_UVERBS_DECLARE_CMD(modify_srq);
IB_UVERBS_DECLARE_CMD(destroy_srq);
#endif /* UVERBS_H */

View File

@ -724,6 +724,7 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file,
struct ib_uobject *uobj;
struct ib_pd *pd;
struct ib_cq *scq, *rcq;
struct ib_srq *srq;
struct ib_qp *qp;
struct ib_qp_init_attr attr;
int ret;
@ -747,10 +748,12 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file,
pd = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle);
scq = idr_find(&ib_uverbs_cq_idr, cmd.send_cq_handle);
rcq = idr_find(&ib_uverbs_cq_idr, cmd.recv_cq_handle);
srq = cmd.is_srq ? idr_find(&ib_uverbs_srq_idr, cmd.srq_handle) : NULL;
if (!pd || pd->uobject->context != file->ucontext ||
!scq || scq->uobject->context != file->ucontext ||
!rcq || rcq->uobject->context != file->ucontext) {
!rcq || rcq->uobject->context != file->ucontext ||
(cmd.is_srq && (!srq || srq->uobject->context != file->ucontext))) {
ret = -EINVAL;
goto err_up;
}
@ -759,7 +762,7 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file,
attr.qp_context = file;
attr.send_cq = scq;
attr.recv_cq = rcq;
attr.srq = NULL;
attr.srq = srq;
attr.sq_sig_type = cmd.sq_sig_all ? IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR;
attr.qp_type = cmd.qp_type;
@ -1004,3 +1007,178 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file,
return ret ? ret : in_len;
}
ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file,
const char __user *buf, int in_len,
int out_len)
{
struct ib_uverbs_create_srq cmd;
struct ib_uverbs_create_srq_resp resp;
struct ib_udata udata;
struct ib_uobject *uobj;
struct ib_pd *pd;
struct ib_srq *srq;
struct ib_srq_init_attr attr;
int ret;
if (out_len < sizeof resp)
return -ENOSPC;
if (copy_from_user(&cmd, buf, sizeof cmd))
return -EFAULT;
INIT_UDATA(&udata, buf + sizeof cmd,
(unsigned long) cmd.response + sizeof resp,
in_len - sizeof cmd, out_len - sizeof resp);
uobj = kmalloc(sizeof *uobj, GFP_KERNEL);
if (!uobj)
return -ENOMEM;
down(&ib_uverbs_idr_mutex);
pd = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle);
if (!pd || pd->uobject->context != file->ucontext) {
ret = -EINVAL;
goto err_up;
}
attr.event_handler = ib_uverbs_srq_event_handler;
attr.srq_context = file;
attr.attr.max_wr = cmd.max_wr;
attr.attr.max_sge = cmd.max_sge;
attr.attr.srq_limit = cmd.srq_limit;
uobj->user_handle = cmd.user_handle;
uobj->context = file->ucontext;
srq = pd->device->create_srq(pd, &attr, &udata);
if (IS_ERR(srq)) {
ret = PTR_ERR(srq);
goto err_up;
}
srq->device = pd->device;
srq->pd = pd;
srq->uobject = uobj;
srq->event_handler = attr.event_handler;
srq->srq_context = attr.srq_context;
atomic_inc(&pd->usecnt);
atomic_set(&srq->usecnt, 0);
memset(&resp, 0, sizeof resp);
retry:
if (!idr_pre_get(&ib_uverbs_srq_idr, GFP_KERNEL)) {
ret = -ENOMEM;
goto err_destroy;
}
ret = idr_get_new(&ib_uverbs_srq_idr, srq, &uobj->id);
if (ret == -EAGAIN)
goto retry;
if (ret)
goto err_destroy;
resp.srq_handle = uobj->id;
spin_lock_irq(&file->ucontext->lock);
list_add_tail(&uobj->list, &file->ucontext->srq_list);
spin_unlock_irq(&file->ucontext->lock);
if (copy_to_user((void __user *) (unsigned long) cmd.response,
&resp, sizeof resp)) {
ret = -EFAULT;
goto err_list;
}
up(&ib_uverbs_idr_mutex);
return in_len;
err_list:
spin_lock_irq(&file->ucontext->lock);
list_del(&uobj->list);
spin_unlock_irq(&file->ucontext->lock);
err_destroy:
ib_destroy_srq(srq);
err_up:
up(&ib_uverbs_idr_mutex);
kfree(uobj);
return ret;
}
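
The retry loop above is the standard two-step idr allocation idiom of this era: idr_pre_get() preallocates with GFP_KERNEL, idr_get_new() consumes the preallocation under ib_uverbs_idr_mutex, and -EAGAIN means a racing caller used the preallocated node first. The same pattern in isolation, with a hypothetical table and object:

/*
 * Classic idr_pre_get()/idr_get_new() retry idiom, as used above to
 * assign resp.srq_handle; returns 0 and stores the new id on success.
 */
static int example_alloc_id(struct idr *table, void *obj, int *id)
{
	int ret;

	do {
		if (!idr_pre_get(table, GFP_KERNEL))
			return -ENOMEM;
		ret = idr_get_new(table, obj, id);
	} while (ret == -EAGAIN);

	return ret;
}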
ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file,
const char __user *buf, int in_len,
int out_len)
{
struct ib_uverbs_modify_srq cmd;
struct ib_srq *srq;
struct ib_srq_attr attr;
int ret;
if (copy_from_user(&cmd, buf, sizeof cmd))
return -EFAULT;
down(&ib_uverbs_idr_mutex);
srq = idr_find(&ib_uverbs_srq_idr, cmd.srq_handle);
if (!srq || srq->uobject->context != file->ucontext) {
ret = -EINVAL;
goto out;
}
attr.max_wr = cmd.max_wr;
attr.max_sge = cmd.max_sge;
attr.srq_limit = cmd.srq_limit;
ret = ib_modify_srq(srq, &attr, cmd.attr_mask);
out:
up(&ib_uverbs_idr_mutex);
return ret ? ret : in_len;
}
ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file,
const char __user *buf, int in_len,
int out_len)
{
struct ib_uverbs_destroy_srq cmd;
struct ib_srq *srq;
struct ib_uobject *uobj;
int ret = -EINVAL;
if (copy_from_user(&cmd, buf, sizeof cmd))
return -EFAULT;
down(&ib_uverbs_idr_mutex);
srq = idr_find(&ib_uverbs_srq_idr, cmd.srq_handle);
if (!srq || srq->uobject->context != file->ucontext)
goto out;
uobj = srq->uobject;
ret = ib_destroy_srq(srq);
if (ret)
goto out;
idr_remove(&ib_uverbs_srq_idr, cmd.srq_handle);
spin_lock_irq(&file->ucontext->lock);
list_del(&uobj->list);
spin_unlock_irq(&file->ucontext->lock);
kfree(uobj);
out:
up(&ib_uverbs_idr_mutex);
return ret ? ret : in_len;
}

View File

@ -1,6 +1,8 @@
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2005 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -67,6 +69,7 @@ DEFINE_IDR(ib_uverbs_mw_idr);
DEFINE_IDR(ib_uverbs_ah_idr);
DEFINE_IDR(ib_uverbs_cq_idr);
DEFINE_IDR(ib_uverbs_qp_idr);
DEFINE_IDR(ib_uverbs_srq_idr);
static spinlock_t map_lock;
static DECLARE_BITMAP(dev_map, IB_UVERBS_MAX_DEVICES);
@ -91,6 +94,9 @@ static ssize_t (*uverbs_cmd_table[])(struct ib_uverbs_file *file,
[IB_USER_VERBS_CMD_DESTROY_QP] = ib_uverbs_destroy_qp,
[IB_USER_VERBS_CMD_ATTACH_MCAST] = ib_uverbs_attach_mcast,
[IB_USER_VERBS_CMD_DETACH_MCAST] = ib_uverbs_detach_mcast,
[IB_USER_VERBS_CMD_CREATE_SRQ] = ib_uverbs_create_srq,
[IB_USER_VERBS_CMD_MODIFY_SRQ] = ib_uverbs_modify_srq,
[IB_USER_VERBS_CMD_DESTROY_SRQ] = ib_uverbs_destroy_srq,
};
static struct vfsmount *uverbs_event_mnt;
@ -125,18 +131,26 @@ static int ib_dealloc_ucontext(struct ib_ucontext *context)
kfree(uobj);
}
/* XXX Free SRQs */
list_for_each_entry_safe(uobj, tmp, &context->srq_list, list) {
struct ib_srq *srq = idr_find(&ib_uverbs_srq_idr, uobj->id);
idr_remove(&ib_uverbs_srq_idr, uobj->id);
ib_destroy_srq(srq);
list_del(&uobj->list);
kfree(uobj);
}
/* XXX Free MWs */
list_for_each_entry_safe(uobj, tmp, &context->mr_list, list) {
struct ib_mr *mr = idr_find(&ib_uverbs_mr_idr, uobj->id);
struct ib_device *mrdev = mr->device;
struct ib_umem_object *memobj;
idr_remove(&ib_uverbs_mr_idr, uobj->id);
ib_dereg_mr(mr);
memobj = container_of(uobj, struct ib_umem_object, uobject);
ib_umem_release_on_close(mr->device, &memobj->umem);
ib_umem_release_on_close(mrdev, &memobj->umem);
list_del(&uobj->list);
kfree(memobj);
@ -343,6 +357,13 @@ void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr)
event->event);
}
void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr)
{
ib_uverbs_async_handler(context_ptr,
event->element.srq->uobject->user_handle,
event->event);
}
static void ib_uverbs_event_handler(struct ib_event_handler *handler,
struct ib_event *event)
{

View File

@ -1,6 +1,7 @@
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU

View File

@ -4,6 +4,7 @@
* Copyright (c) 2004 Intel Corporation. All rights reserved.
* Copyright (c) 2004 Topspin Corporation. All rights reserved.
* Copyright (c) 2004 Voltaire Corporation. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
*
* This software is available to you under a choice of one of two
@ -40,8 +41,8 @@
#include <linux/errno.h>
#include <linux/err.h>
#include <ib_verbs.h>
#include <ib_cache.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_cache.h>
/* Protection domains */
@ -153,6 +154,66 @@ int ib_destroy_ah(struct ib_ah *ah)
}
EXPORT_SYMBOL(ib_destroy_ah);
/* Shared receive queues */
struct ib_srq *ib_create_srq(struct ib_pd *pd,
struct ib_srq_init_attr *srq_init_attr)
{
struct ib_srq *srq;
if (!pd->device->create_srq)
return ERR_PTR(-ENOSYS);
srq = pd->device->create_srq(pd, srq_init_attr, NULL);
if (!IS_ERR(srq)) {
srq->device = pd->device;
srq->pd = pd;
srq->uobject = NULL;
srq->event_handler = srq_init_attr->event_handler;
srq->srq_context = srq_init_attr->srq_context;
atomic_inc(&pd->usecnt);
atomic_set(&srq->usecnt, 0);
}
return srq;
}
EXPORT_SYMBOL(ib_create_srq);
int ib_modify_srq(struct ib_srq *srq,
struct ib_srq_attr *srq_attr,
enum ib_srq_attr_mask srq_attr_mask)
{
return srq->device->modify_srq(srq, srq_attr, srq_attr_mask);
}
EXPORT_SYMBOL(ib_modify_srq);
int ib_query_srq(struct ib_srq *srq,
struct ib_srq_attr *srq_attr)
{
return srq->device->query_srq ?
srq->device->query_srq(srq, srq_attr) : -ENOSYS;
}
EXPORT_SYMBOL(ib_query_srq);
int ib_destroy_srq(struct ib_srq *srq)
{
struct ib_pd *pd;
int ret;
if (atomic_read(&srq->usecnt))
return -EBUSY;
pd = srq->pd;
ret = srq->device->destroy_srq(srq);
if (!ret)
atomic_dec(&pd->usecnt);
return ret;
}
EXPORT_SYMBOL(ib_destroy_srq);
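
These exports cover the whole kernel-side SRQ life cycle: create against a PD, modify (including arming a limit), optionally query, and destroy once no QP references remain. A hypothetical in-kernel consumer might look like the sketch below; the handler, queue sizes, and the IB_SRQ_LIMIT arming are illustrative rather than taken from an existing ULP:

static void example_srq_event(struct ib_event *event, void *srq_context)
{
	/* e.g. post more receive WRs when the SRQ limit event fires */
}

static int example_use_srq(struct ib_pd *pd)
{
	struct ib_srq_init_attr init_attr = {
		.event_handler	= example_srq_event,
		.attr		= { .max_wr = 128, .max_sge = 1 },
	};
	struct ib_srq_attr attr = { .srq_limit = 16 };
	struct ib_srq *srq;
	int ret;

	srq = ib_create_srq(pd, &init_attr);
	if (IS_ERR(srq))
		return PTR_ERR(srq);

	/* arm a low-watermark (limit) event before using the SRQ */
	ret = ib_modify_srq(srq, &attr, IB_SRQ_LIMIT);

	ib_destroy_srq(srq);
	return ret;
}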
/* Queue pairs */
struct ib_qp *ib_create_qp(struct ib_pd *pd,

View File

@ -1,5 +1,3 @@
EXTRA_CFLAGS += -Idrivers/infiniband/include
ifdef CONFIG_INFINIBAND_MTHCA_DEBUG
EXTRA_CFLAGS += -DDEBUG
endif
@ -9,4 +7,4 @@ obj-$(CONFIG_INFINIBAND_MTHCA) += ib_mthca.o
ib_mthca-y := mthca_main.o mthca_cmd.o mthca_profile.o mthca_reset.o \
mthca_allocator.o mthca_eq.o mthca_pd.o mthca_cq.o \
mthca_mr.o mthca_qp.o mthca_av.o mthca_mcg.o mthca_mad.o \
mthca_provider.o mthca_memfree.o mthca_uar.o
mthca_provider.o mthca_memfree.o mthca_uar.o mthca_srq.o

View File

@ -177,3 +177,119 @@ void mthca_array_cleanup(struct mthca_array *array, int nent)
kfree(array->page_list);
}
/*
* Handling for queue buffers -- we allocate a bunch of memory and
* register it in a memory region at HCA virtual address 0. If the
* requested size is > max_direct, we split the allocation into
* multiple pages, so we don't require too much contiguous memory.
*/
int mthca_buf_alloc(struct mthca_dev *dev, int size, int max_direct,
union mthca_buf *buf, int *is_direct, struct mthca_pd *pd,
int hca_write, struct mthca_mr *mr)
{
int err = -ENOMEM;
int npages, shift;
u64 *dma_list = NULL;
dma_addr_t t;
int i;
if (size <= max_direct) {
*is_direct = 1;
npages = 1;
shift = get_order(size) + PAGE_SHIFT;
buf->direct.buf = dma_alloc_coherent(&dev->pdev->dev,
size, &t, GFP_KERNEL);
if (!buf->direct.buf)
return -ENOMEM;
pci_unmap_addr_set(&buf->direct, mapping, t);
memset(buf->direct.buf, 0, size);
while (t & ((1 << shift) - 1)) {
--shift;
npages *= 2;
}
dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
if (!dma_list)
goto err_free;
for (i = 0; i < npages; ++i)
dma_list[i] = t + i * (1 << shift);
} else {
*is_direct = 0;
npages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
shift = PAGE_SHIFT;
dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
if (!dma_list)
return -ENOMEM;
buf->page_list = kmalloc(npages * sizeof *buf->page_list,
GFP_KERNEL);
if (!buf->page_list)
goto err_out;
for (i = 0; i < npages; ++i)
buf->page_list[i].buf = NULL;
for (i = 0; i < npages; ++i) {
buf->page_list[i].buf =
dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
&t, GFP_KERNEL);
if (!buf->page_list[i].buf)
goto err_free;
dma_list[i] = t;
pci_unmap_addr_set(&buf->page_list[i], mapping, t);
memset(buf->page_list[i].buf, 0, PAGE_SIZE);
}
}
err = mthca_mr_alloc_phys(dev, pd->pd_num,
dma_list, shift, npages,
0, size,
MTHCA_MPT_FLAG_LOCAL_READ |
(hca_write ? MTHCA_MPT_FLAG_LOCAL_WRITE : 0),
mr);
if (err)
goto err_free;
kfree(dma_list);
return 0;
err_free:
mthca_buf_free(dev, size, buf, *is_direct, NULL);
err_out:
kfree(dma_list);
return err;
}
void mthca_buf_free(struct mthca_dev *dev, int size, union mthca_buf *buf,
int is_direct, struct mthca_mr *mr)
{
int i;
if (mr)
mthca_free_mr(dev, mr);
if (is_direct)
dma_free_coherent(&dev->pdev->dev, size, buf->direct.buf,
pci_unmap_addr(&buf->direct, mapping));
else {
for (i = 0; i < (size + PAGE_SIZE - 1) / PAGE_SIZE; ++i)
dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
buf->page_list[i].buf,
pci_unmap_addr(&buf->page_list[i],
mapping));
kfree(buf->page_list);
}
}
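
The subtle part of mthca_buf_alloc() above is the while loop in the direct case: dma_alloc_coherent() may return a buffer whose bus address is not aligned to the full allocation order, so the loop halves the chunk size (doubling npages) until the address is aligned, then describes the buffer to the HCA as npages chunks of 1 << shift bytes. The same arithmetic in isolation, as a user-space sketch with hypothetical values:

#include <stdio.h>

int main(void)
{
	unsigned long long t = 0x12345000ULL;	/* hypothetical bus address */
	int shift = 14;		/* get_order(16384) + PAGE_SHIFT */
	int npages = 1, i;

	while (t & ((1ULL << shift) - 1)) {	/* not 16K-aligned... */
		--shift;			/* ...try 8K, then 4K */
		npages *= 2;
	}
	/* ends with shift == 12, npages == 4: four page-sized chunks */
	for (i = 0; i < npages; ++i)
		printf("chunk %d at %#llx\n", i,
		       t + ((unsigned long long) i << shift));
	return 0;
}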

View File

@ -35,22 +35,22 @@
#include <linux/init.h>
#include <ib_verbs.h>
#include <ib_cache.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_cache.h>
#include "mthca_dev.h"
struct mthca_av {
u32 port_pd;
u8 reserved1;
u8 g_slid;
u16 dlid;
u8 reserved2;
u8 gid_index;
u8 msg_sr;
u8 hop_limit;
u32 sl_tclass_flowlabel;
u32 dgid[4];
__be32 port_pd;
u8 reserved1;
u8 g_slid;
__be16 dlid;
u8 reserved2;
u8 gid_index;
u8 msg_sr;
u8 hop_limit;
__be32 sl_tclass_flowlabel;
__be32 dgid[4];
};
int mthca_create_ah(struct mthca_dev *dev,
@ -128,7 +128,7 @@ on_hca_fail:
av, (unsigned long) ah->avdma);
for (j = 0; j < 8; ++j)
printk(KERN_DEBUG " [%2x] %08x\n",
j * 4, be32_to_cpu(((u32 *) av)[j]));
j * 4, be32_to_cpu(((__be32 *) av)[j]));
}
if (ah->type == MTHCA_AH_ON_HCA) {
@ -169,7 +169,7 @@ int mthca_read_ah(struct mthca_dev *dev, struct mthca_ah *ah,
header->lrh.service_level = be32_to_cpu(ah->av->sl_tclass_flowlabel) >> 28;
header->lrh.destination_lid = ah->av->dlid;
header->lrh.source_lid = ah->av->g_slid & 0x7f;
header->lrh.source_lid = cpu_to_be16(ah->av->g_slid & 0x7f);
if (ah->av->g_slid & 0x80) {
header->grh_present = 1;
header->grh.traffic_class =

View File

@ -1,5 +1,6 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -36,7 +37,7 @@
#include <linux/pci.h>
#include <linux/errno.h>
#include <asm/io.h>
#include <ib_mad.h>
#include <rdma/ib_mad.h>
#include "mthca_dev.h"
#include "mthca_config_reg.h"
@ -108,6 +109,7 @@ enum {
CMD_SW2HW_SRQ = 0x35,
CMD_HW2SW_SRQ = 0x36,
CMD_QUERY_SRQ = 0x37,
CMD_ARM_SRQ = 0x40,
/* QP/EE commands */
CMD_RST2INIT_QPEE = 0x19,
@ -219,20 +221,20 @@ static int mthca_cmd_post(struct mthca_dev *dev,
* (and some architectures such as ia64 implement memcpy_toio
* in terms of writeb).
*/
__raw_writel(cpu_to_be32(in_param >> 32), dev->hcr + 0 * 4);
__raw_writel(cpu_to_be32(in_param & 0xfffffffful), dev->hcr + 1 * 4);
__raw_writel(cpu_to_be32(in_modifier), dev->hcr + 2 * 4);
__raw_writel(cpu_to_be32(out_param >> 32), dev->hcr + 3 * 4);
__raw_writel(cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4);
__raw_writel(cpu_to_be32(token << 16), dev->hcr + 5 * 4);
__raw_writel((__force u32) cpu_to_be32(in_param >> 32), dev->hcr + 0 * 4);
__raw_writel((__force u32) cpu_to_be32(in_param & 0xfffffffful), dev->hcr + 1 * 4);
__raw_writel((__force u32) cpu_to_be32(in_modifier), dev->hcr + 2 * 4);
__raw_writel((__force u32) cpu_to_be32(out_param >> 32), dev->hcr + 3 * 4);
__raw_writel((__force u32) cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4);
__raw_writel((__force u32) cpu_to_be32(token << 16), dev->hcr + 5 * 4);
/* __raw_writel may not order writes. */
wmb();
__raw_writel(cpu_to_be32((1 << HCR_GO_BIT) |
(event ? (1 << HCA_E_BIT) : 0) |
(op_modifier << HCR_OPMOD_SHIFT) |
op), dev->hcr + 6 * 4);
__raw_writel((__force u32) cpu_to_be32((1 << HCR_GO_BIT) |
(event ? (1 << HCA_E_BIT) : 0) |
(op_modifier << HCR_OPMOD_SHIFT) |
op), dev->hcr + 6 * 4);
out:
up(&dev->cmd.hcr_sem);
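
The (__force u32) casts added above are sparse annotations, not behavior changes: cpu_to_be32() returns a __be32, __raw_writel() takes a plain u32, and the cast tells sparse that handing a big-endian value to the raw accessor is intentional, since the HCR expects big-endian words. The pattern in miniature; the helper and register are hypothetical:

/*
 * Write a host-order value to a big-endian device register without
 * tripping sparse's __bitwise checking; __force only silences the
 * endianness warning, the emitted code is unchanged.
 */
static inline void write_be_reg(void __iomem *reg, u32 host_val)
{
	__raw_writel((__force u32) cpu_to_be32(host_val), reg);
}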
@ -273,12 +275,14 @@ static int mthca_cmd_poll(struct mthca_dev *dev,
goto out;
}
if (out_is_imm) {
memcpy_fromio(out_param, dev->hcr + HCR_OUT_PARAM_OFFSET, sizeof (u64));
be64_to_cpus(out_param);
}
if (out_is_imm)
*out_param =
(u64) be32_to_cpu((__force __be32)
__raw_readl(dev->hcr + HCR_OUT_PARAM_OFFSET)) << 32 |
(u64) be32_to_cpu((__force __be32)
__raw_readl(dev->hcr + HCR_OUT_PARAM_OFFSET + 4));
*status = be32_to_cpu(__raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24;
*status = be32_to_cpu((__force __be32) __raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24;
out:
up(&dev->cmd.poll_sem);
@ -1029,6 +1033,8 @@ int mthca_QUERY_DEV_LIM(struct mthca_dev *dev,
mthca_dbg(dev, "Max QPs: %d, reserved QPs: %d, entry size: %d\n",
dev_lim->max_qps, dev_lim->reserved_qps, dev_lim->qpc_entry_sz);
mthca_dbg(dev, "Max SRQs: %d, reserved SRQs: %d, entry size: %d\n",
dev_lim->max_srqs, dev_lim->reserved_srqs, dev_lim->srq_entry_sz);
mthca_dbg(dev, "Max CQs: %d, reserved CQs: %d, entry size: %d\n",
dev_lim->max_cqs, dev_lim->reserved_cqs, dev_lim->cqc_entry_sz);
mthca_dbg(dev, "Max EQs: %d, reserved EQs: %d, entry size: %d\n",
@ -1082,6 +1088,34 @@ out:
return err;
}
static void get_board_id(void *vsd, char *board_id)
{
int i;
#define VSD_OFFSET_SIG1 0x00
#define VSD_OFFSET_SIG2 0xde
#define VSD_OFFSET_MLX_BOARD_ID 0xd0
#define VSD_OFFSET_TS_BOARD_ID 0x20
#define VSD_SIGNATURE_TOPSPIN 0x5ad
memset(board_id, 0, MTHCA_BOARD_ID_LEN);
if (be16_to_cpup(vsd + VSD_OFFSET_SIG1) == VSD_SIGNATURE_TOPSPIN &&
be16_to_cpup(vsd + VSD_OFFSET_SIG2) == VSD_SIGNATURE_TOPSPIN) {
strlcpy(board_id, vsd + VSD_OFFSET_TS_BOARD_ID, MTHCA_BOARD_ID_LEN);
} else {
/*
* The board ID is a string but the firmware byte
* swaps each 4-byte word before passing it back to
* us. Therefore we need to swab it before printing.
*/
for (i = 0; i < 4; ++i)
((u32 *) board_id)[i] =
swab32(*(u32 *) (vsd + VSD_OFFSET_MLX_BOARD_ID + i * 4));
}
}
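
The swab32() loop above undoes the firmware's per-word byte swap of the Mellanox-format board ID. A standalone illustration of why that recovers the string; the board name and raw bytes are made up, and like the driver code it is endian-neutral because the load and store swaps cancel:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static uint32_t swab32(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
	       ((x << 8) & 0x00ff0000u) | (x << 24);
}

int main(void)
{
	/* "MT23108" as the firmware returns it: each 4-byte word reversed */
	const char raw[8] = { '3', '2', 'T', 'M', '\0', '8', '0', '1' };
	char board_id[9] = { 0 };
	uint32_t w;
	int i;

	for (i = 0; i < 2; ++i) {
		memcpy(&w, raw + i * 4, 4);
		w = swab32(w);
		memcpy(board_id + i * 4, &w, 4);
	}
	printf("%s\n", board_id);	/* prints "MT23108" */
	return 0;
}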
int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
struct mthca_adapter *adapter, u8 *status)
{
@ -1094,6 +1128,7 @@ int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
#define QUERY_ADAPTER_DEVICE_ID_OFFSET 0x04
#define QUERY_ADAPTER_REVISION_ID_OFFSET 0x08
#define QUERY_ADAPTER_INTA_PIN_OFFSET 0x10
#define QUERY_ADAPTER_VSD_OFFSET 0x20
mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
if (IS_ERR(mailbox))
@ -1111,6 +1146,9 @@ int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
MTHCA_GET(adapter->revision_id, outbox, QUERY_ADAPTER_REVISION_ID_OFFSET);
MTHCA_GET(adapter->inta_pin, outbox, QUERY_ADAPTER_INTA_PIN_OFFSET);
get_board_id(outbox + QUERY_ADAPTER_VSD_OFFSET / 4,
adapter->board_id);
out:
mthca_free_mailbox(dev, mailbox);
return err;
@ -1121,7 +1159,7 @@ int mthca_INIT_HCA(struct mthca_dev *dev,
u8 *status)
{
struct mthca_mailbox *mailbox;
u32 *inbox;
__be32 *inbox;
int err;
#define INIT_HCA_IN_SIZE 0x200
@ -1247,10 +1285,8 @@ int mthca_INIT_IB(struct mthca_dev *dev,
#define INIT_IB_FLAG_SIG (1 << 18)
#define INIT_IB_FLAG_NG (1 << 17)
#define INIT_IB_FLAG_G0 (1 << 16)
#define INIT_IB_FLAG_1X (1 << 8)
#define INIT_IB_FLAG_4X (1 << 9)
#define INIT_IB_FLAG_12X (1 << 11)
#define INIT_IB_VL_SHIFT 4
#define INIT_IB_PORT_WIDTH_SHIFT 8
#define INIT_IB_MTU_SHIFT 12
#define INIT_IB_MAX_GID_OFFSET 0x06
#define INIT_IB_MAX_PKEY_OFFSET 0x0a
@ -1266,12 +1302,11 @@ int mthca_INIT_IB(struct mthca_dev *dev,
memset(inbox, 0, INIT_IB_IN_SIZE);
flags = 0;
flags |= param->enable_1x ? INIT_IB_FLAG_1X : 0;
flags |= param->enable_4x ? INIT_IB_FLAG_4X : 0;
flags |= param->set_guid0 ? INIT_IB_FLAG_G0 : 0;
flags |= param->set_node_guid ? INIT_IB_FLAG_NG : 0;
flags |= param->set_si_guid ? INIT_IB_FLAG_SIG : 0;
flags |= param->vl_cap << INIT_IB_VL_SHIFT;
flags |= param->port_width << INIT_IB_PORT_WIDTH_SHIFT;
flags |= param->mtu_cap << INIT_IB_MTU_SHIFT;
MTHCA_PUT(inbox, flags, INIT_IB_FLAGS_OFFSET);
@ -1342,7 +1377,7 @@ int mthca_MAP_ICM(struct mthca_dev *dev, struct mthca_icm *icm, u64 virt, u8 *st
int mthca_MAP_ICM_page(struct mthca_dev *dev, u64 dma_addr, u64 virt, u8 *status)
{
struct mthca_mailbox *mailbox;
u64 *inbox;
__be64 *inbox;
int err;
mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
@ -1468,6 +1503,27 @@ int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
CMD_TIME_CLASS_A, status);
}
int mthca_SW2HW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int srq_num, u8 *status)
{
return mthca_cmd(dev, mailbox->dma, srq_num, 0, CMD_SW2HW_SRQ,
CMD_TIME_CLASS_A, status);
}
int mthca_HW2SW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int srq_num, u8 *status)
{
return mthca_cmd_box(dev, 0, mailbox->dma, srq_num, 0,
CMD_HW2SW_SRQ,
CMD_TIME_CLASS_A, status);
}
int mthca_ARM_SRQ(struct mthca_dev *dev, int srq_num, int limit, u8 *status)
{
return mthca_cmd(dev, limit, srq_num, 0, CMD_ARM_SRQ,
CMD_TIME_CLASS_B, status);
}
int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
int is_ee, struct mthca_mailbox *mailbox, u32 optmask,
u8 *status)
@ -1513,7 +1569,7 @@ int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
if (i % 8 == 0)
printk(" [%02x] ", i * 4);
printk(" %08x",
be32_to_cpu(((u32 *) mailbox->buf)[i + 2]));
be32_to_cpu(((__be32 *) mailbox->buf)[i + 2]));
if ((i + 1) % 8 == 0)
printk("\n");
}
@ -1533,7 +1589,7 @@ int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
if (i % 8 == 0)
printk("[%02x] ", i * 4);
printk(" %08x",
be32_to_cpu(((u32 *) mailbox->buf)[i + 2]));
be32_to_cpu(((__be32 *) mailbox->buf)[i + 2]));
if ((i + 1) % 8 == 0)
printk("\n");
}

View File

@ -1,5 +1,6 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -35,7 +36,7 @@
#ifndef MTHCA_CMD_H
#define MTHCA_CMD_H
#include <ib_verbs.h>
#include <rdma/ib_verbs.h>
#define MTHCA_MAILBOX_SIZE 4096
@ -183,10 +184,11 @@ struct mthca_dev_lim {
};
struct mthca_adapter {
u32 vendor_id;
u32 device_id;
u32 revision_id;
u8 inta_pin;
u32 vendor_id;
u32 device_id;
u32 revision_id;
char board_id[MTHCA_BOARD_ID_LEN];
u8 inta_pin;
};
struct mthca_init_hca_param {
@ -218,8 +220,7 @@ struct mthca_init_hca_param {
};
struct mthca_init_ib_param {
int enable_1x;
int enable_4x;
int port_width;
int vl_cap;
int mtu_cap;
u16 gid_cap;
@ -297,6 +298,11 @@ int mthca_SW2HW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int cq_num, u8 *status);
int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int cq_num, u8 *status);
int mthca_SW2HW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int srq_num, u8 *status);
int mthca_HW2SW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
int srq_num, u8 *status);
int mthca_ARM_SRQ(struct mthca_dev *dev, int srq_num, int limit, u8 *status);
int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
int is_ee, struct mthca_mailbox *mailbox, u32 optmask,
u8 *status);

View File

@ -1,5 +1,6 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU

View File

@ -2,6 +2,8 @@
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -37,7 +39,7 @@
#include <linux/init.h>
#include <linux/hardirq.h>
#include <ib_pack.h>
#include <rdma/ib_pack.h>
#include "mthca_dev.h"
#include "mthca_cmd.h"
@ -55,21 +57,21 @@ enum {
* Must be packed because start is 64 bits but only aligned to 32 bits.
*/
struct mthca_cq_context {
u32 flags;
u64 start;
u32 logsize_usrpage;
u32 error_eqn; /* Tavor only */
u32 comp_eqn;
u32 pd;
u32 lkey;
u32 last_notified_index;
u32 solicit_producer_index;
u32 consumer_index;
u32 producer_index;
u32 cqn;
u32 ci_db; /* Arbel only */
u32 state_db; /* Arbel only */
u32 reserved;
__be32 flags;
__be64 start;
__be32 logsize_usrpage;
__be32 error_eqn; /* Tavor only */
__be32 comp_eqn;
__be32 pd;
__be32 lkey;
__be32 last_notified_index;
__be32 solicit_producer_index;
__be32 consumer_index;
__be32 producer_index;
__be32 cqn;
__be32 ci_db; /* Arbel only */
__be32 state_db; /* Arbel only */
u32 reserved;
} __attribute__((packed));
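
The packed attribute matters here because, as the comment above the struct says, the 64-bit start field sits at byte offset 4: without packing the compiler would pad it out to natural 8-byte alignment and the context would no longer match the hardware layout. A minimal illustration with placeholder names:

struct hw_ctx_unpacked {
	__be32 flags;
	__be64 start;	/* padded to offset 8 on most ABIs: wrong layout */
};

struct hw_ctx_packed {
	__be32 flags;
	__be64 start;	/* stays at byte offset 4, matching the HCA */
} __attribute__((packed));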
#define MTHCA_CQ_STATUS_OK ( 0 << 28)
@ -108,31 +110,31 @@ enum {
};
struct mthca_cqe {
u32 my_qpn;
u32 my_ee;
u32 rqpn;
u16 sl_g_mlpath;
u16 rlid;
u32 imm_etype_pkey_eec;
u32 byte_cnt;
u32 wqe;
u8 opcode;
u8 is_send;
u8 reserved;
u8 owner;
__be32 my_qpn;
__be32 my_ee;
__be32 rqpn;
__be16 sl_g_mlpath;
__be16 rlid;
__be32 imm_etype_pkey_eec;
__be32 byte_cnt;
__be32 wqe;
u8 opcode;
u8 is_send;
u8 reserved;
u8 owner;
};
struct mthca_err_cqe {
u32 my_qpn;
u32 reserved1[3];
u8 syndrome;
u8 reserved2;
u16 db_cnt;
u32 reserved3;
u32 wqe;
u8 opcode;
u8 reserved4[2];
u8 owner;
__be32 my_qpn;
u32 reserved1[3];
u8 syndrome;
u8 reserved2;
__be16 db_cnt;
u32 reserved3;
__be32 wqe;
u8 opcode;
u8 reserved4[2];
u8 owner;
};
#define MTHCA_CQ_ENTRY_OWNER_SW (0 << 7)
@ -191,7 +193,7 @@ static void dump_cqe(struct mthca_dev *dev, void *cqe_ptr)
static inline void update_cons_index(struct mthca_dev *dev, struct mthca_cq *cq,
int incr)
{
u32 doorbell[2];
__be32 doorbell[2];
if (mthca_is_memfree(dev)) {
*cq->set_ci_db = cpu_to_be32(cq->cons_index);
@ -222,7 +224,8 @@ void mthca_cq_event(struct mthca_dev *dev, u32 cqn)
cq->ibcq.comp_handler(&cq->ibcq, cq->ibcq.cq_context);
}
void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn)
void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn,
struct mthca_srq *srq)
{
struct mthca_cq *cq;
struct mthca_cqe *cqe;
@ -263,8 +266,11 @@ void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn)
*/
while (prod_index > cq->cons_index) {
cqe = get_cqe(cq, (prod_index - 1) & cq->ibcq.cqe);
if (cqe->my_qpn == cpu_to_be32(qpn))
if (cqe->my_qpn == cpu_to_be32(qpn)) {
if (srq)
mthca_free_srq_wqe(srq, be32_to_cpu(cqe->wqe));
++nfreed;
}
else if (nfreed)
memcpy(get_cqe(cq, (prod_index - 1 + nfreed) &
cq->ibcq.cqe),
@ -291,7 +297,7 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
{
int err;
int dbd;
u32 new_wqe;
__be32 new_wqe;
if (cqe->syndrome == SYNDROME_LOCAL_QP_OP_ERR) {
mthca_dbg(dev, "local QP operation err "
@ -365,6 +371,13 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
break;
}
/*
* Mem-free HCAs always generate one CQE per WQE, even in the
* error case, so we don't have to check the doorbell count, etc.
*/
if (mthca_is_memfree(dev))
return 0;
err = mthca_free_err_wqe(dev, qp, is_send, wqe_index, &dbd, &new_wqe);
if (err)
return err;
@ -373,12 +386,8 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
* If we're at the end of the WQE chain, or we've used up our
* doorbell count, free the CQE. Otherwise just update it for
* the next poll operation.
*
* This does not apply to mem-free HCAs: they don't use the
* doorbell count field, and so we should always free the CQE.
*/
if (mthca_is_memfree(dev) ||
!(new_wqe & cpu_to_be32(0x3f)) || (!cqe->db_cnt && dbd))
if (!(new_wqe & cpu_to_be32(0x3f)) || (!cqe->db_cnt && dbd))
return 0;
cqe->db_cnt = cpu_to_be16(be16_to_cpu(cqe->db_cnt) - dbd);
@ -450,23 +459,27 @@ static inline int mthca_poll_one(struct mthca_dev *dev,
>> wq->wqe_shift);
entry->wr_id = (*cur_qp)->wrid[wqe_index +
(*cur_qp)->rq.max];
} else if ((*cur_qp)->ibqp.srq) {
struct mthca_srq *srq = to_msrq((*cur_qp)->ibqp.srq);
u32 wqe = be32_to_cpu(cqe->wqe);
wq = NULL;
wqe_index = wqe >> srq->wqe_shift;
entry->wr_id = srq->wrid[wqe_index];
mthca_free_srq_wqe(srq, wqe);
} else {
wq = &(*cur_qp)->rq;
wqe_index = be32_to_cpu(cqe->wqe) >> wq->wqe_shift;
entry->wr_id = (*cur_qp)->wrid[wqe_index];
}
if (wq->last_comp < wqe_index)
wq->tail += wqe_index - wq->last_comp;
else
wq->tail += wqe_index + wq->max - wq->last_comp;
if (wq) {
if (wq->last_comp < wqe_index)
wq->tail += wqe_index - wq->last_comp;
else
wq->tail += wqe_index + wq->max - wq->last_comp;
wq->last_comp = wqe_index;
if (0)
mthca_dbg(dev, "%s completion for QP %06x, index %d (nr %d)\n",
is_send ? "Send" : "Receive",
(*cur_qp)->qpn, wqe_index, wq->max);
wq->last_comp = wqe_index;
}
if (is_error) {
err = handle_error_cqe(dev, cq, *cur_qp, wqe_index, is_send,
@ -584,13 +597,13 @@ int mthca_poll_cq(struct ib_cq *ibcq, int num_entries,
int mthca_tavor_arm_cq(struct ib_cq *cq, enum ib_cq_notify notify)
{
u32 doorbell[2];
__be32 doorbell[2];
doorbell[0] = cpu_to_be32((notify == IB_CQ_SOLICITED ?
MTHCA_TAVOR_CQ_DB_REQ_NOT_SOL :
MTHCA_TAVOR_CQ_DB_REQ_NOT) |
to_mcq(cq)->cqn);
doorbell[1] = 0xffffffff;
doorbell[1] = (__force __be32) 0xffffffff;
mthca_write64(doorbell,
to_mdev(cq->device)->kar + MTHCA_CQ_DOORBELL,
@ -602,9 +615,9 @@ int mthca_tavor_arm_cq(struct ib_cq *cq, enum ib_cq_notify notify)
int mthca_arbel_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify)
{
struct mthca_cq *cq = to_mcq(ibcq);
u32 doorbell[2];
__be32 doorbell[2];
u32 sn;
u32 ci;
__be32 ci;
sn = cq->arm_sn & 3;
ci = cpu_to_be32(cq->cons_index);
@ -637,113 +650,8 @@ int mthca_arbel_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify)
static void mthca_free_cq_buf(struct mthca_dev *dev, struct mthca_cq *cq)
{
int i;
int size;
if (cq->is_direct)
dma_free_coherent(&dev->pdev->dev,
(cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE,
cq->queue.direct.buf,
pci_unmap_addr(&cq->queue.direct,
mapping));
else {
size = (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE;
for (i = 0; i < (size + PAGE_SIZE - 1) / PAGE_SIZE; ++i)
if (cq->queue.page_list[i].buf)
dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
cq->queue.page_list[i].buf,
pci_unmap_addr(&cq->queue.page_list[i],
mapping));
kfree(cq->queue.page_list);
}
}
static int mthca_alloc_cq_buf(struct mthca_dev *dev, int size,
struct mthca_cq *cq)
{
int err = -ENOMEM;
int npages, shift;
u64 *dma_list = NULL;
dma_addr_t t;
int i;
if (size <= MTHCA_MAX_DIRECT_CQ_SIZE) {
cq->is_direct = 1;
npages = 1;
shift = get_order(size) + PAGE_SHIFT;
cq->queue.direct.buf = dma_alloc_coherent(&dev->pdev->dev,
size, &t, GFP_KERNEL);
if (!cq->queue.direct.buf)
return -ENOMEM;
pci_unmap_addr_set(&cq->queue.direct, mapping, t);
memset(cq->queue.direct.buf, 0, size);
while (t & ((1 << shift) - 1)) {
--shift;
npages *= 2;
}
dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
if (!dma_list)
goto err_free;
for (i = 0; i < npages; ++i)
dma_list[i] = t + i * (1 << shift);
} else {
cq->is_direct = 0;
npages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
shift = PAGE_SHIFT;
dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
if (!dma_list)
return -ENOMEM;
cq->queue.page_list = kmalloc(npages * sizeof *cq->queue.page_list,
GFP_KERNEL);
if (!cq->queue.page_list)
goto err_out;
for (i = 0; i < npages; ++i)
cq->queue.page_list[i].buf = NULL;
for (i = 0; i < npages; ++i) {
cq->queue.page_list[i].buf =
dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
&t, GFP_KERNEL);
if (!cq->queue.page_list[i].buf)
goto err_free;
dma_list[i] = t;
pci_unmap_addr_set(&cq->queue.page_list[i], mapping, t);
memset(cq->queue.page_list[i].buf, 0, PAGE_SIZE);
}
}
err = mthca_mr_alloc_phys(dev, dev->driver_pd.pd_num,
dma_list, shift, npages,
0, size,
MTHCA_MPT_FLAG_LOCAL_WRITE |
MTHCA_MPT_FLAG_LOCAL_READ,
&cq->mr);
if (err)
goto err_free;
kfree(dma_list);
return 0;
err_free:
mthca_free_cq_buf(dev, cq);
err_out:
kfree(dma_list);
return err;
mthca_buf_free(dev, (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE,
&cq->queue, cq->is_direct, &cq->mr);
}
int mthca_init_cq(struct mthca_dev *dev, int nent,
@ -795,7 +703,9 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
cq_context = mailbox->buf;
if (cq->is_kernel) {
err = mthca_alloc_cq_buf(dev, size, cq);
err = mthca_buf_alloc(dev, size, MTHCA_MAX_DIRECT_CQ_SIZE,
&cq->queue, &cq->is_direct,
&dev->driver_pd, 1, &cq->mr);
if (err)
goto err_out_mailbox;
@ -811,7 +721,6 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
cq_context->flags = cpu_to_be32(MTHCA_CQ_STATUS_OK |
MTHCA_CQ_STATE_DISARMED |
MTHCA_CQ_FLAG_TR);
cq_context->start = cpu_to_be64(0);
cq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24);
if (ctx)
cq_context->logsize_usrpage |= cpu_to_be32(ctx->uar.index);
@ -857,10 +766,8 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
return 0;
err_out_free_mr:
if (cq->is_kernel) {
mthca_free_mr(dev, &cq->mr);
if (cq->is_kernel)
mthca_free_cq_buf(dev, cq);
}
err_out_mailbox:
mthca_free_mailbox(dev, mailbox);
@ -904,7 +811,7 @@ void mthca_free_cq(struct mthca_dev *dev,
mthca_warn(dev, "HW2SW_CQ returned status 0x%02x\n", status);
if (0) {
u32 *ctx = mailbox->buf;
__be32 *ctx = mailbox->buf;
int j;
printk(KERN_ERR "context for CQN %x (cons index %x, next sw %d)\n",
@ -928,7 +835,6 @@ void mthca_free_cq(struct mthca_dev *dev,
wait_event(cq->wait, !atomic_read(&cq->refcount));
if (cq->is_kernel) {
mthca_free_mr(dev, &cq->mr);
mthca_free_cq_buf(dev, cq);
if (mthca_is_memfree(dev)) {
mthca_free_db(dev, MTHCA_DB_TYPE_CQ_ARM, cq->arm_db_index);

View File

@ -2,6 +2,8 @@
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -66,6 +68,10 @@ enum {
MTHCA_MAX_PORTS = 2
};
enum {
MTHCA_BOARD_ID_LEN = 64
};
enum {
MTHCA_EQ_CONTEXT_SIZE = 0x40,
MTHCA_CQ_CONTEXT_SIZE = 0x40,
@ -142,6 +148,7 @@ struct mthca_limits {
int reserved_mcgs;
int num_pds;
int reserved_pds;
u8 port_width_cap;
};
struct mthca_alloc {
@ -211,6 +218,13 @@ struct mthca_cq_table {
struct mthca_icm_table *table;
};
struct mthca_srq_table {
struct mthca_alloc alloc;
spinlock_t lock;
struct mthca_array srq;
struct mthca_icm_table *table;
};
struct mthca_qp_table {
struct mthca_alloc alloc;
u32 rdb_base;
@ -246,6 +260,7 @@ struct mthca_dev {
unsigned long device_cap_flags;
u32 rev_id;
char board_id[MTHCA_BOARD_ID_LEN];
/* firmware info */
u64 fw_ver;
@ -291,6 +306,7 @@ struct mthca_dev {
struct mthca_mr_table mr_table;
struct mthca_eq_table eq_table;
struct mthca_cq_table cq_table;
struct mthca_srq_table srq_table;
struct mthca_qp_table qp_table;
struct mthca_av_table av_table;
struct mthca_mcg_table mcg_table;
@ -331,14 +347,13 @@ extern void __buggy_use_of_MTHCA_PUT(void);
#define MTHCA_PUT(dest, source, offset) \
do { \
__typeof__(source) *__p = \
(__typeof__(source) *) ((char *) (dest) + (offset)); \
void *__d = ((char *) (dest) + (offset)); \
switch (sizeof(source)) { \
case 1: *__p = (source); break; \
case 2: *__p = cpu_to_be16(source); break; \
case 4: *__p = cpu_to_be32(source); break; \
case 8: *__p = cpu_to_be64(source); break; \
default: __buggy_use_of_MTHCA_PUT(); \
case 1: *(u8 *) __d = (source); break; \
case 2: *(__be16 *) __d = cpu_to_be16(source); break; \
case 4: *(__be32 *) __d = cpu_to_be32(source); break; \
case 8: *(__be64 *) __d = cpu_to_be64(source); break; \
default: __buggy_use_of_MTHCA_PUT(); \
} \
} while (0)
@ -354,12 +369,18 @@ int mthca_array_set(struct mthca_array *array, int index, void *value);
void mthca_array_clear(struct mthca_array *array, int index);
int mthca_array_init(struct mthca_array *array, int nent);
void mthca_array_cleanup(struct mthca_array *array, int nent);
int mthca_buf_alloc(struct mthca_dev *dev, int size, int max_direct,
union mthca_buf *buf, int *is_direct, struct mthca_pd *pd,
int hca_write, struct mthca_mr *mr);
void mthca_buf_free(struct mthca_dev *dev, int size, union mthca_buf *buf,
int is_direct, struct mthca_mr *mr);
int mthca_init_uar_table(struct mthca_dev *dev);
int mthca_init_pd_table(struct mthca_dev *dev);
int mthca_init_mr_table(struct mthca_dev *dev);
int mthca_init_eq_table(struct mthca_dev *dev);
int mthca_init_cq_table(struct mthca_dev *dev);
int mthca_init_srq_table(struct mthca_dev *dev);
int mthca_init_qp_table(struct mthca_dev *dev);
int mthca_init_av_table(struct mthca_dev *dev);
int mthca_init_mcg_table(struct mthca_dev *dev);
@ -369,6 +390,7 @@ void mthca_cleanup_pd_table(struct mthca_dev *dev);
void mthca_cleanup_mr_table(struct mthca_dev *dev);
void mthca_cleanup_eq_table(struct mthca_dev *dev);
void mthca_cleanup_cq_table(struct mthca_dev *dev);
void mthca_cleanup_srq_table(struct mthca_dev *dev);
void mthca_cleanup_qp_table(struct mthca_dev *dev);
void mthca_cleanup_av_table(struct mthca_dev *dev);
void mthca_cleanup_mcg_table(struct mthca_dev *dev);
@ -419,7 +441,19 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
void mthca_free_cq(struct mthca_dev *dev,
struct mthca_cq *cq);
void mthca_cq_event(struct mthca_dev *dev, u32 cqn);
void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn);
void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn,
struct mthca_srq *srq);
int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd,
struct ib_srq_attr *attr, struct mthca_srq *srq);
void mthca_free_srq(struct mthca_dev *dev, struct mthca_srq *srq);
void mthca_srq_event(struct mthca_dev *dev, u32 srqn,
enum ib_event_type event_type);
void mthca_free_srq_wqe(struct mthca_srq *srq, u32 wqe_addr);
int mthca_tavor_post_srq_recv(struct ib_srq *srq, struct ib_recv_wr *wr,
struct ib_recv_wr **bad_wr);
int mthca_arbel_post_srq_recv(struct ib_srq *srq, struct ib_recv_wr *wr,
struct ib_recv_wr **bad_wr);
void mthca_qp_event(struct mthca_dev *dev, u32 qpn,
enum ib_event_type event_type);
@ -433,7 +467,7 @@ int mthca_arbel_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
int mthca_arbel_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
struct ib_recv_wr **bad_wr);
int mthca_free_err_wqe(struct mthca_dev *dev, struct mthca_qp *qp, int is_send,
int index, int *dbd, u32 *new_wqe);
int index, int *dbd, __be32 *new_wqe);
int mthca_alloc_qp(struct mthca_dev *dev,
struct mthca_pd *pd,
struct mthca_cq *send_cq,

View File

@ -1,6 +1,7 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -57,13 +58,13 @@ static inline void mthca_write64_raw(__be64 val, void __iomem *dest)
__raw_writeq((__force u64) val, dest);
}
static inline void mthca_write64(u32 val[2], void __iomem *dest,
static inline void mthca_write64(__be32 val[2], void __iomem *dest,
spinlock_t *doorbell_lock)
{
__raw_writeq(*(u64 *) val, dest);
}
static inline void mthca_write_db_rec(u32 val[2], u32 *db)
static inline void mthca_write_db_rec(__be32 val[2], __be32 *db)
{
*(u64 *) db = *(u64 *) val;
}
@ -86,18 +87,18 @@ static inline void mthca_write64_raw(__be64 val, void __iomem *dest)
__raw_writel(((__force u32 *) &val)[1], dest + 4);
}
static inline void mthca_write64(u32 val[2], void __iomem *dest,
static inline void mthca_write64(__be32 val[2], void __iomem *dest,
spinlock_t *doorbell_lock)
{
unsigned long flags;
spin_lock_irqsave(doorbell_lock, flags);
__raw_writel(val[0], dest);
__raw_writel(val[1], dest + 4);
__raw_writel((__force u32) val[0], dest);
__raw_writel((__force u32) val[1], dest + 4);
spin_unlock_irqrestore(doorbell_lock, flags);
}
static inline void mthca_write_db_rec(u32 val[2], u32 *db)
static inline void mthca_write_db_rec(__be32 val[2], __be32 *db)
{
db[0] = val[0];
wmb();

View File

@ -1,5 +1,6 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -51,18 +52,18 @@ enum {
* Must be packed because start is 64 bits but only aligned to 32 bits.
*/
struct mthca_eq_context {
u32 flags;
u64 start;
u32 logsize_usrpage;
u32 tavor_pd; /* reserved for Arbel */
u8 reserved1[3];
u8 intr;
u32 arbel_pd; /* lost_count for Tavor */
u32 lkey;
u32 reserved2[2];
u32 consumer_index;
u32 producer_index;
u32 reserved3[4];
__be32 flags;
__be64 start;
__be32 logsize_usrpage;
__be32 tavor_pd; /* reserved for Arbel */
u8 reserved1[3];
u8 intr;
__be32 arbel_pd; /* lost_count for Tavor */
__be32 lkey;
u32 reserved2[2];
__be32 consumer_index;
__be32 producer_index;
u32 reserved3[4];
} __attribute__((packed));
#define MTHCA_EQ_STATUS_OK ( 0 << 28)
@ -127,28 +128,28 @@ struct mthca_eqe {
union {
u32 raw[6];
struct {
u32 cqn;
__be32 cqn;
} __attribute__((packed)) comp;
struct {
u16 reserved1;
u16 token;
u32 reserved2;
u8 reserved3[3];
u8 status;
u64 out_param;
u16 reserved1;
__be16 token;
u32 reserved2;
u8 reserved3[3];
u8 status;
__be64 out_param;
} __attribute__((packed)) cmd;
struct {
u32 qpn;
__be32 qpn;
} __attribute__((packed)) qp;
struct {
u32 cqn;
u32 reserved1;
u8 reserved2[3];
u8 syndrome;
__be32 cqn;
u32 reserved1;
u8 reserved2[3];
u8 syndrome;
} __attribute__((packed)) cq_err;
struct {
u32 reserved1[2];
u32 port;
u32 reserved1[2];
__be32 port;
} __attribute__((packed)) port_change;
} event;
u8 reserved3[3];
@ -167,7 +168,7 @@ static inline u64 async_mask(struct mthca_dev *dev)
static inline void tavor_set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u32 ci)
{
u32 doorbell[2];
__be32 doorbell[2];
doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_SET_CI | eq->eqn);
doorbell[1] = cpu_to_be32(ci & (eq->nent - 1));
@ -190,8 +191,8 @@ static inline void arbel_set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u
{
/* See comment in tavor_set_eq_ci() above. */
wmb();
__raw_writel(cpu_to_be32(ci), dev->eq_regs.arbel.eq_set_ci_base +
eq->eqn * 8);
__raw_writel((__force u32) cpu_to_be32(ci),
dev->eq_regs.arbel.eq_set_ci_base + eq->eqn * 8);
/* We still want ordering, just not swabbing, so add a barrier */
mb();
}
@ -206,7 +207,7 @@ static inline void set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u32 ci)
static inline void tavor_eq_req_not(struct mthca_dev *dev, int eqn)
{
u32 doorbell[2];
__be32 doorbell[2];
doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_REQ_NOT | eqn);
doorbell[1] = 0;
@ -224,7 +225,7 @@ static inline void arbel_eq_req_not(struct mthca_dev *dev, u32 eqn_mask)
static inline void disarm_cq(struct mthca_dev *dev, int eqn, int cqn)
{
if (!mthca_is_memfree(dev)) {
u32 doorbell[2];
__be32 doorbell[2];
doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_DISARM_CQ | eqn);
doorbell[1] = cpu_to_be32(cqn);

View File

@ -1,5 +1,7 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -32,9 +34,9 @@
* $Id: mthca_mad.c 1349 2004-12-16 21:09:43Z roland $
*/
#include <ib_verbs.h>
#include <ib_mad.h>
#include <ib_smi.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_mad.h>
#include <rdma/ib_smi.h>
#include "mthca_dev.h"
#include "mthca_cmd.h"
@ -192,7 +194,7 @@ int mthca_process_mad(struct ib_device *ibdev,
{
int err;
u8 status;
u16 slid = in_wc ? in_wc->slid : IB_LID_PERMISSIVE;
u16 slid = in_wc ? in_wc->slid : be16_to_cpu(IB_LID_PERMISSIVE);
/* Forward locally generated traps to the SM */
if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP &&

View File

@ -1,6 +1,7 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@ -34,7 +35,6 @@
*/
#include <linux/config.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>
@ -171,6 +171,7 @@ static int __devinit mthca_dev_lim(struct mthca_dev *mdev, struct mthca_dev_lim
mdev->limits.reserved_mrws = dev_lim->reserved_mrws;
mdev->limits.reserved_uars = dev_lim->reserved_uars;
mdev->limits.reserved_pds = dev_lim->reserved_pds;
mdev->limits.port_width_cap = dev_lim->max_port_width;
/* IB_DEVICE_RESIZE_MAX_WR not supported by driver.
May be doable since hardware supports it for SRQ.
@ -212,7 +213,6 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev)
struct mthca_dev_lim dev_lim;
struct mthca_profile profile;
struct mthca_init_hca_param init_hca;
struct mthca_adapter adapter;
err = mthca_SYS_EN(mdev, &status);
if (err) {
@ -253,6 +253,8 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev)
profile = default_profile;
profile.num_uar = dev_lim.uar_size / PAGE_SIZE;
profile.uarc_size = 0;
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
profile.num_srq = dev_lim.max_srqs;
err = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca);
if (err < 0)
@ -270,26 +272,8 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev)
goto err_disable;
}
err = mthca_QUERY_ADAPTER(mdev, &adapter, &status);
if (err) {
mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n");
goto err_close;
}
if (status) {
mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, "
"aborting.\n", status);
err = -EINVAL;
goto err_close;
}
mdev->eq_table.inta_pin = adapter.inta_pin;
mdev->rev_id = adapter.revision_id;
return 0;
err_close:
mthca_CLOSE_HCA(mdev, 0, &status);
err_disable:
mthca_SYS_DIS(mdev, &status);
@ -442,15 +426,29 @@ static int __devinit mthca_init_icm(struct mthca_dev *mdev,
}
mdev->cq_table.table = mthca_alloc_icm_table(mdev, init_hca->cqc_base,
dev_lim->cqc_entry_sz,
mdev->limits.num_cqs,
mdev->limits.reserved_cqs, 0);
dev_lim->cqc_entry_sz,
mdev->limits.num_cqs,
mdev->limits.reserved_cqs, 0);
if (!mdev->cq_table.table) {
mthca_err(mdev, "Failed to map CQ context memory, aborting.\n");
err = -ENOMEM;
goto err_unmap_rdb;
}
if (mdev->mthca_flags & MTHCA_FLAG_SRQ) {
mdev->srq_table.table =
mthca_alloc_icm_table(mdev, init_hca->srqc_base,
dev_lim->srq_entry_sz,
mdev->limits.num_srqs,
mdev->limits.reserved_srqs, 0);
if (!mdev->srq_table.table) {
mthca_err(mdev, "Failed to map SRQ context memory, "
"aborting.\n");
err = -ENOMEM;
goto err_unmap_cq;
}
}
/*
* It's not strictly required, but for simplicity just map the
* whole multicast group table now. The table isn't very big
@ -466,11 +464,15 @@ static int __devinit mthca_init_icm(struct mthca_dev *mdev,
if (!mdev->mcg_table.table) {
mthca_err(mdev, "Failed to map MCG context memory, aborting.\n");
err = -ENOMEM;
goto err_unmap_cq;
goto err_unmap_srq;
}
return 0;
err_unmap_srq:
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
mthca_free_icm_table(mdev, mdev->srq_table.table);
err_unmap_cq:
mthca_free_icm_table(mdev, mdev->cq_table.table);
@ -506,7 +508,6 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
struct mthca_dev_lim dev_lim;
struct mthca_profile profile;
struct mthca_init_hca_param init_hca;
struct mthca_adapter adapter;
u64 icm_size;
u8 status;
int err;
@ -551,6 +552,8 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
profile = default_profile;
profile.num_uar = dev_lim.uar_size / PAGE_SIZE;
profile.num_udav = 0;
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
profile.num_srq = dev_lim.max_srqs;
icm_size = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca);
if ((int) icm_size < 0) {
@ -574,24 +577,11 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
goto err_free_icm;
}
err = mthca_QUERY_ADAPTER(mdev, &adapter, &status);
if (err) {
mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n");
goto err_free_icm;
}
if (status) {
mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, "
"aborting.\n", status);
err = -EINVAL;
goto err_free_icm;
}
mdev->eq_table.inta_pin = adapter.inta_pin;
mdev->rev_id = adapter.revision_id;
return 0;
err_free_icm:
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
mthca_free_icm_table(mdev, mdev->srq_table.table);
mthca_free_icm_table(mdev, mdev->cq_table.table);
mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
@ -614,12 +604,70 @@ err_disable:
return err;
}
static void mthca_close_hca(struct mthca_dev *mdev)
{
u8 status;
mthca_CLOSE_HCA(mdev, 0, &status);
if (mthca_is_memfree(mdev)) {
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
mthca_free_icm_table(mdev, mdev->srq_table.table);
mthca_free_icm_table(mdev, mdev->cq_table.table);
mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
mthca_unmap_eq_icm(mdev);
mthca_UNMAP_ICM_AUX(mdev, &status);
mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
mthca_UNMAP_FA(mdev, &status);
mthca_free_icm(mdev, mdev->fw.arbel.fw_icm);
if (!(mdev->mthca_flags & MTHCA_FLAG_NO_LAM))
mthca_DISABLE_LAM(mdev, &status);
} else
mthca_SYS_DIS(mdev, &status);
}
static int __devinit mthca_init_hca(struct mthca_dev *mdev)
{
u8 status;
int err;
struct mthca_adapter adapter;
if (mthca_is_memfree(mdev))
return mthca_init_arbel(mdev);
err = mthca_init_arbel(mdev);
else
return mthca_init_tavor(mdev);
err = mthca_init_tavor(mdev);
if (err)
return err;
err = mthca_QUERY_ADAPTER(mdev, &adapter, &status);
if (err) {
mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n");
goto err_close;
}
if (status) {
mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, "
"aborting.\n", status);
err = -EINVAL;
goto err_close;
}
mdev->eq_table.inta_pin = adapter.inta_pin;
mdev->rev_id = adapter.revision_id;
memcpy(mdev->board_id, adapter.board_id, sizeof mdev->board_id);
return 0;
err_close:
mthca_close_hca(mdev);
return err;
}
static int __devinit mthca_setup_hca(struct mthca_dev *dev)
@@ -709,11 +757,18 @@ static int __devinit mthca_setup_hca(struct mthca_dev *dev)
goto err_cmd_poll;
}
err = mthca_init_srq_table(dev);
if (err) {
mthca_err(dev, "Failed to initialize "
"shared receive queue table, aborting.\n");
goto err_cq_table_free;
}
err = mthca_init_qp_table(dev);
if (err) {
mthca_err(dev, "Failed to initialize "
"queue pair table, aborting.\n");
goto err_cq_table_free;
goto err_srq_table_free;
}
err = mthca_init_av_table(dev);
@@ -738,6 +793,9 @@ err_av_table_free:
err_qp_table_free:
mthca_cleanup_qp_table(dev);
err_srq_table_free:
mthca_cleanup_srq_table(dev);
err_cq_table_free:
mthca_cleanup_cq_table(dev);
@@ -844,33 +902,6 @@ static int __devinit mthca_enable_msi_x(struct mthca_dev *mdev)
return 0;
}
static void mthca_close_hca(struct mthca_dev *mdev)
{
u8 status;
mthca_CLOSE_HCA(mdev, 0, &status);
if (mthca_is_memfree(mdev)) {
mthca_free_icm_table(mdev, mdev->cq_table.table);
mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
mthca_unmap_eq_icm(mdev);
mthca_UNMAP_ICM_AUX(mdev, &status);
mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
mthca_UNMAP_FA(mdev, &status);
mthca_free_icm(mdev, mdev->fw.arbel.fw_icm);
if (!(mdev->mthca_flags & MTHCA_FLAG_NO_LAM))
mthca_DISABLE_LAM(mdev, &status);
} else
mthca_SYS_DIS(mdev, &status);
}
/* Types of supported HCA */
enum {
TAVOR, /* MT23108 */
@@ -887,9 +918,9 @@ static struct {
int is_memfree;
int is_pcie;
} mthca_hca_table[] = {
[TAVOR] = { .latest_fw = MTHCA_FW_VER(3, 3, 2), .is_memfree = 0, .is_pcie = 0 },
[ARBEL_COMPAT] = { .latest_fw = MTHCA_FW_VER(4, 6, 2), .is_memfree = 0, .is_pcie = 1 },
[ARBEL_NATIVE] = { .latest_fw = MTHCA_FW_VER(5, 0, 1), .is_memfree = 1, .is_pcie = 1 },
[TAVOR] = { .latest_fw = MTHCA_FW_VER(3, 3, 3), .is_memfree = 0, .is_pcie = 0 },
[ARBEL_COMPAT] = { .latest_fw = MTHCA_FW_VER(4, 7, 0), .is_memfree = 0, .is_pcie = 1 },
[ARBEL_NATIVE] = { .latest_fw = MTHCA_FW_VER(5, 1, 0), .is_memfree = 1, .is_pcie = 1 },
[SINAI] = { .latest_fw = MTHCA_FW_VER(1, 0, 1), .is_memfree = 1, .is_pcie = 1 }
};
@@ -1051,6 +1082,7 @@ err_cleanup:
mthca_cleanup_mcg_table(mdev);
mthca_cleanup_av_table(mdev);
mthca_cleanup_qp_table(mdev);
mthca_cleanup_srq_table(mdev);
mthca_cleanup_cq_table(mdev);
mthca_cmd_use_polling(mdev);
mthca_cleanup_eq_table(mdev);
@@ -1100,6 +1132,7 @@ static void __devexit mthca_remove_one(struct pci_dev *pdev)
mthca_cleanup_mcg_table(mdev);
mthca_cleanup_av_table(mdev);
mthca_cleanup_qp_table(mdev);
mthca_cleanup_srq_table(mdev);
mthca_cleanup_cq_table(mdev);
mthca_cmd_use_polling(mdev);
mthca_cleanup_eq_table(mdev);

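The mthca_main.c hunks above add an err_unmap_srq label to the ICM setup
unwind and move mthca_close_hca() ahead of mthca_init_hca(), so that the
now-shared QUERY_ADAPTER error path can unwind through it. Below is a minimal
sketch of the goto-unwind idiom these hunks extend; the alloc_table_*() and
free_table_*() helpers are hypothetical stand-ins, not driver functions.

/* Acquire in order, release in reverse on failure; each new resource
 * adds one label, exactly as err_unmap_srq was added above.
 * Hypothetical helpers standing in for mthca_alloc_icm_table() etc. */
extern int  alloc_table_a(void), alloc_table_b(void), alloc_table_c(void);
extern void free_table_a(void),  free_table_b(void);

static int init_tables(void)
{
	int err;

	err = alloc_table_a();
	if (err)
		return err;

	err = alloc_table_b();
	if (err)
		goto err_free_a;

	err = alloc_table_c();
	if (err)
		goto err_free_b;	/* newest resource unwinds first */

	return 0;

err_free_b:
	free_table_b();
err_free_a:
	free_table_a();
	return err;
}
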
View File

@@ -42,10 +42,10 @@ enum {
};
struct mthca_mgm {
u32 next_gid_index;
u32 reserved[3];
u8 gid[16];
u32 qp[MTHCA_QP_PER_MGM];
__be32 next_gid_index;
u32 reserved[3];
u8 gid[16];
__be32 qp[MTHCA_QP_PER_MGM];
};
static const u8 zero_gid[16]; /* automatically initialized to 0 */
@@ -94,10 +94,14 @@ static int find_mgm(struct mthca_dev *dev,
if (0)
mthca_dbg(dev, "Hash for %04x:%04x:%04x:%04x:"
"%04x:%04x:%04x:%04x is %04x\n",
be16_to_cpu(((u16 *) gid)[0]), be16_to_cpu(((u16 *) gid)[1]),
be16_to_cpu(((u16 *) gid)[2]), be16_to_cpu(((u16 *) gid)[3]),
be16_to_cpu(((u16 *) gid)[4]), be16_to_cpu(((u16 *) gid)[5]),
be16_to_cpu(((u16 *) gid)[6]), be16_to_cpu(((u16 *) gid)[7]),
be16_to_cpu(((__be16 *) gid)[0]),
be16_to_cpu(((__be16 *) gid)[1]),
be16_to_cpu(((__be16 *) gid)[2]),
be16_to_cpu(((__be16 *) gid)[3]),
be16_to_cpu(((__be16 *) gid)[4]),
be16_to_cpu(((__be16 *) gid)[5]),
be16_to_cpu(((__be16 *) gid)[6]),
be16_to_cpu(((__be16 *) gid)[7]),
*hash);
*index = *hash;
@@ -258,14 +262,14 @@ int mthca_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
if (index == -1) {
mthca_err(dev, "MGID %04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x "
"not found\n",
be16_to_cpu(((u16 *) gid->raw)[0]),
be16_to_cpu(((u16 *) gid->raw)[1]),
be16_to_cpu(((u16 *) gid->raw)[2]),
be16_to_cpu(((u16 *) gid->raw)[3]),
be16_to_cpu(((u16 *) gid->raw)[4]),
be16_to_cpu(((u16 *) gid->raw)[5]),
be16_to_cpu(((u16 *) gid->raw)[6]),
be16_to_cpu(((u16 *) gid->raw)[7]));
be16_to_cpu(((__be16 *) gid->raw)[0]),
be16_to_cpu(((__be16 *) gid->raw)[1]),
be16_to_cpu(((__be16 *) gid->raw)[2]),
be16_to_cpu(((__be16 *) gid->raw)[3]),
be16_to_cpu(((__be16 *) gid->raw)[4]),
be16_to_cpu(((__be16 *) gid->raw)[5]),
be16_to_cpu(((__be16 *) gid->raw)[6]),
be16_to_cpu(((__be16 *) gid->raw)[7]));
err = -EINVAL;
goto out;
}

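The mthca_mcg.c hunks are part of a sparse endianness cleanup: fields holding
wire-format data become __be32/__be16 so that sparse (with __CHECK_ENDIAN__)
can flag missing byte swaps, and the debug printouts cast through __be16 *
before be16_to_cpu(). A hedged sketch of the annotation pattern follows; the
struct and helpers are illustrative, not taken from the driver.

#include <linux/types.h>
#include <asm/byteorder.h>

struct wire_entry {			/* illustrative example struct */
	__be32 qpn;			/* big-endian, as on the wire */
};

static u32 get_qpn(const struct wire_entry *e)
{
	/* be32_to_cpu() converts to host order; assigning a __be32
	 * straight to a u32 would now draw a sparse warning. */
	return be32_to_cpu(e->qpn);
}

static void set_qpn(struct wire_entry *e, u32 qpn)
{
	e->qpn = cpu_to_be32(qpn);	/* host order back to wire order */
}
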
View File

@@ -1,6 +1,7 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -285,6 +286,7 @@ struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev,
{
struct mthca_icm_table *table;
int num_icm;
unsigned chunk_size;
int i;
u8 status;
@@ -305,7 +307,11 @@ struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev,
table->icm[i] = NULL;
for (i = 0; i * MTHCA_TABLE_CHUNK_SIZE < reserved * obj_size; ++i) {
table->icm[i] = mthca_alloc_icm(dev, MTHCA_TABLE_CHUNK_SIZE >> PAGE_SHIFT,
chunk_size = MTHCA_TABLE_CHUNK_SIZE;
if ((i + 1) * MTHCA_TABLE_CHUNK_SIZE > nobj * obj_size)
chunk_size = nobj * obj_size - i * MTHCA_TABLE_CHUNK_SIZE;
table->icm[i] = mthca_alloc_icm(dev, chunk_size >> PAGE_SHIFT,
(use_lowmem ? GFP_KERNEL : GFP_HIGHUSER) |
__GFP_NOWARN);
if (!table->icm[i])
@@ -481,7 +487,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
}
}
int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, u32 **db)
int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, __be32 **db)
{
int group;
int start, end, dir;
@@ -564,7 +570,7 @@ found:
page->db_rec[j] = cpu_to_be64((qn << 8) | (type << 5));
*db = (u32 *) &page->db_rec[j];
*db = (__be32 *) &page->db_rec[j];
out:
up(&dev->db_tab->mutex);

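The mthca_alloc_icm_table() change stops over-allocating the final ICM chunk:
instead of always mapping a full MTHCA_TABLE_CHUNK_SIZE, the last chunk is
clamped to the bytes the objects actually need. Assuming the driver's 256 KB
(1 << 18) chunk size and, purely for illustration, obj_size = 64 with
nobj = 100000: 25 chunks are needed, and the clamp shrinks chunk 24 from
262144 bytes to 100000 * 64 - 24 * 262144 = 108544 bytes.

/* The clamp, standalone (mirrors the hunk above): */
unsigned chunk_size = MTHCA_TABLE_CHUNK_SIZE;

if ((i + 1) * MTHCA_TABLE_CHUNK_SIZE > nobj * obj_size)
	chunk_size = nobj * obj_size - i * MTHCA_TABLE_CHUNK_SIZE;
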
View File

@@ -1,6 +1,7 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -137,7 +138,7 @@ enum {
struct mthca_db_page {
DECLARE_BITMAP(used, MTHCA_DB_REC_PER_PAGE);
u64 *db_rec;
__be64 *db_rec;
dma_addr_t mapping;
};
@@ -172,7 +173,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
int mthca_init_db_tab(struct mthca_dev *dev);
void mthca_cleanup_db_tab(struct mthca_dev *dev);
int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, u32 **db);
int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, __be32 **db);
void mthca_free_db(struct mthca_dev *dev, int type, int db_index);
#endif /* MTHCA_MEMFREE_H */

View File

@@ -1,5 +1,6 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -50,18 +51,18 @@ struct mthca_mtt {
* Must be packed because mtt_seg is 64 bits but only aligned to 32 bits.
*/
struct mthca_mpt_entry {
u32 flags;
u32 page_size;
u32 key;
u32 pd;
u64 start;
u64 length;
u32 lkey;
u32 window_count;
u32 window_count_limit;
u64 mtt_seg;
u32 mtt_sz; /* Arbel only */
u32 reserved[2];
__be32 flags;
__be32 page_size;
__be32 key;
__be32 pd;
__be64 start;
__be64 length;
__be32 lkey;
__be32 window_count;
__be32 window_count_limit;
__be64 mtt_seg;
__be32 mtt_sz; /* Arbel only */
u32 reserved[2];
} __attribute__((packed));
#define MTHCA_MPT_FLAG_SW_OWNS (0xfUL << 28)
@@ -247,7 +248,7 @@ int mthca_write_mtt(struct mthca_dev *dev, struct mthca_mtt *mtt,
int start_index, u64 *buffer_list, int list_len)
{
struct mthca_mailbox *mailbox;
u64 *mtt_entry;
__be64 *mtt_entry;
int err = 0;
u8 status;
int i;
@@ -389,7 +390,7 @@ int mthca_mr_alloc(struct mthca_dev *dev, u32 pd, int buffer_size_shift,
for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) {
if (i % 4 == 0)
printk("[%02x] ", i * 4);
printk(" %08x", be32_to_cpu(((u32 *) mpt_entry)[i]));
printk(" %08x", be32_to_cpu(((__be32 *) mpt_entry)[i]));
if ((i + 1) % 4 == 0)
printk("\n");
}
@@ -458,7 +459,7 @@ int mthca_mr_alloc_phys(struct mthca_dev *dev, u32 pd,
static void mthca_free_region(struct mthca_dev *dev, u32 lkey)
{
mthca_table_put(dev, dev->mr_table.mpt_table,
arbel_key_to_hw_index(lkey));
key_to_hw_index(dev, lkey));
mthca_free(&dev->mr_table.mpt_alloc, key_to_hw_index(dev, lkey));
}
@@ -562,7 +563,7 @@ int mthca_fmr_alloc(struct mthca_dev *dev, u32 pd,
for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) {
if (i % 4 == 0)
printk("[%02x] ", i * 4);
printk(" %08x", be32_to_cpu(((u32 *) mpt_entry)[i]));
printk(" %08x", be32_to_cpu(((__be32 *) mpt_entry)[i]));
if ((i + 1) % 4 == 0)
printk("\n");
}
@@ -669,7 +670,7 @@ int mthca_tavor_map_phys_fmr(struct ib_fmr *ibfmr, u64 *page_list,
mpt_entry.length = cpu_to_be64(list_len * (1ull << fmr->attr.page_size));
mpt_entry.start = cpu_to_be64(iova);
writel(mpt_entry.lkey, &fmr->mem.tavor.mpt->key);
__raw_writel((__force u32) mpt_entry.lkey, &fmr->mem.tavor.mpt->key);
memcpy_toio(&fmr->mem.tavor.mpt->start, &mpt_entry.start,
offsetof(struct mthca_mpt_entry, window_count) -
offsetof(struct mthca_mpt_entry, start));

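The mthca_tavor_map_phys_fmr() hunk swaps writel() for __raw_writel() with a
__force cast: mpt_entry.lkey is already big-endian (__be32), so it must reach
the device without the byte swap writel() performs on big-endian hosts, and
__force tells sparse the type reinterpretation is deliberate. A minimal sketch
under those assumptions; the register pointer and key value are illustrative.

#include <linux/types.h>
#include <asm/byteorder.h>
#include <asm/io.h>

static void write_key(void __iomem *reg, u32 key)
{
	__be32 lkey = cpu_to_be32(key);	/* already wire order */

	/* __raw_writel() stores the bytes untouched; writel() would
	 * swap them again on a big-endian machine. */
	__raw_writel((__force u32) lkey, reg);
}
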
View File

@@ -1,6 +1,7 @@
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU

View File

@@ -1,5 +1,6 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -101,6 +102,7 @@ u64 mthca_make_profile(struct mthca_dev *dev,
profile[MTHCA_RES_UARC].size = request->uarc_size;
profile[MTHCA_RES_QP].num = request->num_qp;
profile[MTHCA_RES_SRQ].num = request->num_srq;
profile[MTHCA_RES_EQP].num = request->num_qp;
profile[MTHCA_RES_RDB].num = request->num_qp * request->rdb_per_qp;
profile[MTHCA_RES_CQ].num = request->num_cq;

View File

@@ -1,5 +1,6 @@
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -41,6 +42,7 @@
struct mthca_profile {
int num_qp;
int rdb_per_qp;
int num_srq;
int num_cq;
int num_mcg;
int num_mpt;

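num_srq joins struct mthca_profile so mthca_make_profile() can size the ICM
for the new SRQ context table; the mthca_init_arbel() hunk earlier fills it
in from the device limits, roughly:

profile = default_profile;
if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
	profile.num_srq = dev_lim.max_srqs;
icm_size = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca);
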
View File

@@ -2,6 +2,8 @@
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -34,7 +36,7 @@
* $Id: mthca_provider.c 1397 2004-12-28 05:09:00Z roland $
*/
#include <ib_smi.h>
#include <rdma/ib_smi.h>
#include <linux/mm.h>
#include "mthca_dev.h"
@@ -79,10 +81,10 @@ static int mthca_query_device(struct ib_device *ibdev,
}
props->device_cap_flags = mdev->device_cap_flags;
props->vendor_id = be32_to_cpup((u32 *) (out_mad->data + 36)) &
props->vendor_id = be32_to_cpup((__be32 *) (out_mad->data + 36)) &
0xffffff;
props->vendor_part_id = be16_to_cpup((u16 *) (out_mad->data + 30));
props->hw_ver = be16_to_cpup((u16 *) (out_mad->data + 32));
props->vendor_part_id = be16_to_cpup((__be16 *) (out_mad->data + 30));
props->hw_ver = be16_to_cpup((__be16 *) (out_mad->data + 32));
memcpy(&props->sys_image_guid, out_mad->data + 4, 8);
memcpy(&props->node_guid, out_mad->data + 12, 8);
@@ -118,6 +120,8 @@ static int mthca_query_port(struct ib_device *ibdev,
if (!in_mad || !out_mad)
goto out;
memset(props, 0, sizeof *props);
memset(in_mad, 0, sizeof *in_mad);
in_mad->base_version = 1;
in_mad->mgmt_class = IB_MGMT_CLASS_SUBN_LID_ROUTED;
@@ -136,16 +140,17 @@
goto out;
}
props->lid = be16_to_cpup((u16 *) (out_mad->data + 16));
props->lid = be16_to_cpup((__be16 *) (out_mad->data + 16));
props->lmc = out_mad->data[34] & 0x7;
props->sm_lid = be16_to_cpup((u16 *) (out_mad->data + 18));
props->sm_lid = be16_to_cpup((__be16 *) (out_mad->data + 18));
props->sm_sl = out_mad->data[36] & 0xf;
props->state = out_mad->data[32] & 0xf;
props->phys_state = out_mad->data[33] >> 4;
props->port_cap_flags = be32_to_cpup((u32 *) (out_mad->data + 20));
props->port_cap_flags = be32_to_cpup((__be32 *) (out_mad->data + 20));
props->gid_tbl_len = to_mdev(ibdev)->limits.gid_table_len;
props->max_msg_sz = 0x80000000;
props->pkey_tbl_len = to_mdev(ibdev)->limits.pkey_table_len;
props->qkey_viol_cntr = be16_to_cpup((u16 *) (out_mad->data + 48));
props->qkey_viol_cntr = be16_to_cpup((__be16 *) (out_mad->data + 48));
props->active_width = out_mad->data[31] & 0xf;
props->active_speed = out_mad->data[35] >> 4;
@@ -221,7 +226,7 @@ static int mthca_query_pkey(struct ib_device *ibdev,
goto out;
}
*pkey = be16_to_cpu(((u16 *) out_mad->data)[index % 32]);
*pkey = be16_to_cpu(((__be16 *) out_mad->data)[index % 32]);
out:
kfree(in_mad);
@@ -420,6 +425,77 @@ static int mthca_ah_destroy(struct ib_ah *ah)
return 0;
}
static struct ib_srq *mthca_create_srq(struct ib_pd *pd,
struct ib_srq_init_attr *init_attr,
struct ib_udata *udata)
{
struct mthca_create_srq ucmd;
struct mthca_ucontext *context = NULL;
struct mthca_srq *srq;
int err;
srq = kmalloc(sizeof *srq, GFP_KERNEL);
if (!srq)
return ERR_PTR(-ENOMEM);
if (pd->uobject) {
context = to_mucontext(pd->uobject->context);
if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd)) {
	err = -EFAULT;
	goto err_free;
}
err = mthca_map_user_db(to_mdev(pd->device), &context->uar,
context->db_tab, ucmd.db_index,
ucmd.db_page);
if (err)
goto err_free;
srq->mr.ibmr.lkey = ucmd.lkey;
srq->db_index = ucmd.db_index;
}
err = mthca_alloc_srq(to_mdev(pd->device), to_mpd(pd),
&init_attr->attr, srq);
if (err && pd->uobject)
mthca_unmap_user_db(to_mdev(pd->device), &context->uar,
context->db_tab, ucmd.db_index);
if (err)
goto err_free;
if (context && ib_copy_to_udata(udata, &srq->srqn, sizeof (__u32))) {
mthca_free_srq(to_mdev(pd->device), srq);
err = -EFAULT;
goto err_free;
}
return &srq->ibsrq;
err_free:
kfree(srq);
return ERR_PTR(err);
}
static int mthca_destroy_srq(struct ib_srq *srq)
{
struct mthca_ucontext *context;
if (srq->uobject) {
context = to_mucontext(srq->uobject->context);
mthca_unmap_user_db(to_mdev(srq->device), &context->uar,
context->db_tab, to_msrq(srq)->db_index);
}
mthca_free_srq(to_mdev(srq->device), to_msrq(srq));
kfree(srq);
return 0;
}
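
The two functions above plug shared receive queues into the verbs layer:
mthca_create_srq() copies the user's command structure in, maps the userspace
doorbell page, allocates the SRQ, and hands the SRQ number back through udata;
mthca_destroy_srq() undoes the mapping. From a kernel consumer's side the new
verb is reached via ib_create_srq(); a hedged sketch, with illustrative sizes
and an existing protection domain pd assumed:

#include <rdma/ib_verbs.h>

static struct ib_srq *make_srq(struct ib_pd *pd)
{
	struct ib_srq_init_attr init_attr = {
		.attr = {
			.max_wr	 = 128,	/* illustrative queue depth */
			.max_sge = 4,	/* illustrative s/g limit */
		},
	};

	/* returns ERR_PTR() on failure, like the provider code above */
	return ib_create_srq(pd, &init_attr);
}
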
static struct ib_qp *mthca_create_qp(struct ib_pd *pd,
struct ib_qp_init_attr *init_attr,
struct ib_udata *udata)
@@ -956,14 +1032,22 @@ static ssize_t show_hca(struct class_device *cdev, char *buf)
}
}
static ssize_t show_board(struct class_device *cdev, char *buf)
{
struct mthca_dev *dev = container_of(cdev, struct mthca_dev, ib_dev.class_dev);
return sprintf(buf, "%.*s\n", MTHCA_BOARD_ID_LEN, dev->board_id);
}
static CLASS_DEVICE_ATTR(hw_rev, S_IRUGO, show_rev, NULL);
static CLASS_DEVICE_ATTR(fw_ver, S_IRUGO, show_fw_ver, NULL);
static CLASS_DEVICE_ATTR(hca_type, S_IRUGO, show_hca, NULL);
static CLASS_DEVICE_ATTR(board_id, S_IRUGO, show_board, NULL);
static struct class_device_attribute *mthca_class_attributes[] = {
&class_device_attr_hw_rev,
&class_device_attr_fw_ver,
&class_device_attr_hca_type
&class_device_attr_hca_type,
&class_device_attr_board_id
};
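
show_board() exposes the new board_id through sysfs alongside hw_rev, fw_ver
and hca_type. In the class_device era a read-only attribute pairs a show
callback with CLASS_DEVICE_ATTR; a minimal sketch with an illustrative
attribute name:

static ssize_t show_example(struct class_device *cdev, char *buf)
{
	/* a show callback returns the number of bytes placed in buf */
	return sprintf(buf, "%s\n", "example-value");
}
static CLASS_DEVICE_ATTR(example, S_IRUGO, show_example, NULL);
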
int mthca_register_device(struct mthca_dev *dev)
@@ -990,6 +1074,17 @@ int mthca_register_device(struct mthca_dev *dev)
dev->ib_dev.dealloc_pd = mthca_dealloc_pd;
dev->ib_dev.create_ah = mthca_ah_create;
dev->ib_dev.destroy_ah = mthca_ah_destroy;
if (dev->mthca_flags & MTHCA_FLAG_SRQ) {
dev->ib_dev.create_srq = mthca_create_srq;
dev->ib_dev.destroy_srq = mthca_destroy_srq;
if (mthca_is_memfree(dev))
dev->ib_dev.post_srq_recv = mthca_arbel_post_srq_recv;
else
dev->ib_dev.post_srq_recv = mthca_tavor_post_srq_recv;
}
dev->ib_dev.create_qp = mthca_create_qp;
dev->ib_dev.modify_qp = mthca_modify_qp;
dev->ib_dev.destroy_qp = mthca_destroy_qp;

Some files were not shown because too many files have changed in this diff.