IPMI Driver updates for 3.19

For the following changes:
   - Quite a few bug fixes
   - A new driver for the powernv platform
   - A new driver for the SMBus interface from the IPMI 2.0 specification
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iEYEABECAAYFAlSKGRsACgkQIXnXXONXEReJngCeJsMEdY9pBJ9GEppkiSv0HG74
 VR8AoJb3PQ2SfqmAbT0RgACWEkSuWCdj
 =9NNo
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.code.sf.net/p/openipmi/linux-ipmi

Pull IPMI driver updates from Corey Minyard:
  - Quite a few bug fixes
  - A new driver for the powernv platform
  - A new driver for the SMBus interface from the IPMI 2.0 specification

* tag 'for-linus' of git://git.code.sf.net/p/openipmi/linux-ipmi:
  ipmi: Check the BT interrupt enable periodically
  ipmi: Fix attention handling for system interfaces
  ipmi: Periodically check to see if irqs and messages are set right
  drivers/char/ipmi: Add powernv IPMI driver
  ipmi: Add SMBus interface driver (SSIF)
  ipmi: Remove the now unused priority from SMI sender
  ipmi: Remove the now unnecessary message queue
  ipmi: Make the message handler easier to use for SMI interfaces
  ipmi: Move message sending into its own function
  ipmi: rename waiting_msgs to waiting_rcv_msgs
  ipmi: Fix handling of BMC flags
  ipmi: Initialize BMC device attributes
  ipmi: Unregister previously registered driver in error case
  ipmi: Use the proper type for acpi_handle
  ipmi: Fix a bug in hot add/remove
  ipmi: Remove useless sysfs_name parameters
  ipmi: clean up the device handling for the bmc device
  ipmi: Move the address source to string to ipmi-generic code
  ipmi: Ignore SSIF in the PNP handling
Committed by Linus Torvalds on 2014-12-12 14:49:56 -08:00 (commit eea0cf3fcd); 9 changed files with 2831 additions and 518 deletions.

Documentation/IPMI.txt

@ -42,7 +42,13 @@ The driver interface depends on your hardware. If your system
properly provides the SMBIOS info for IPMI, the driver will detect it
and just work. If you have a board with a standard interface (These
will generally be either "KCS", "SMIC", or "BT", consult your hardware
manual), choose the 'IPMI SI handler' option.
manual), choose the 'IPMI SI handler' option. A driver also exists
for direct I2C access to the IPMI management controller. Some boards
support this, but it is not known whether it will work on every
board. For this, choose the 'IPMI SMBus handler' option, but be
ready to do some investigating to see whether it works on your
system if the SMBIOS/ACPI information is wrong or not present. It
is fairly safe to have both of these enabled and let the drivers
auto-detect what is present.
You should generally enable ACPI on your system, as systems with IPMI
can have ACPI tables describing them.
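For example, a kernel configuration fragment that enables both
handlers as modules might look like the following (a sketch using the
config options from this series; CONFIG_IPMI_HANDLER and
CONFIG_IPMI_DEVICE_INTERFACE are the usual companion options):

  CONFIG_IPMI_HANDLER=m
  CONFIG_IPMI_DEVICE_INTERFACE=m
  CONFIG_IPMI_SI=m
  CONFIG_IPMI_SSIF=m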
@ -52,7 +58,8 @@ their job correctly, the IPMI controller should be automatically
detected (via ACPI or SMBIOS tables) and should just work. Sadly,
many boards do not have this information. The driver attempts
standard defaults, but they may not work. If you fall into this
situation, you need to read the section below named 'The SI Driver'.
situation, you need to read the section below named 'The SI Driver'
or 'The SMBus Driver' on how to hand-configure your system.
IPMI defines a standard watchdog timer. You can enable this with the
'IPMI Watchdog Timer' config option. If you compile the driver into
@ -97,7 +104,12 @@ driver, each open file for this device ties in to the message handler
as an IPMI user.
ipmi_si - A driver for various system interfaces. This supports KCS,
SMIC, and BT interfaces.
SMIC, and BT interfaces. Unless you have an SMBus interface or your
own custom interface, you probably need to use this.
ipmi_ssif - A driver for accessing BMCs on the SMBus. It uses the
I2C kernel driver's SMBus interfaces to send and receive IPMI messages
over the SMBus.
ipmi_watchdog - IPMI requires systems to have a very capable watchdog
timer. This driver implements the standard Linux watchdog timer
@ -476,6 +488,62 @@ for specifying an interface. Note that when removing an interface,
only the first three parameters (si type, address type, and address)
are used for the comparison. Any options are ignored for removing.
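For example, hot-removing a hand-configured KCS interface through the
hotmod parameter might look like this (illustrative address; any
trailing options are ignored on removal, as noted above):

  echo "remove,kcs,i/o,0xca2" > /sys/module/ipmi_si/parameters/hotmod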
The SMBus Driver (SSIF)
-----------------------
The SMBus driver allows up to 4 SMBus devices to be configured in the
system. By default, the driver will only register with something it
finds in DMI or ACPI tables. You can change this
at module load time (for a module) with:
modprobe ipmi_ssif
  addr=<i2caddr1>[,<i2caddr2>[,...]]
  adapter=<adapter1>[,<adapter2>[,...]]
  dbg=<flags1>[,<flags2>[,...]]
  slave_addrs=<addr1>[,<addr2>[,...]]
  [dbg_probe=1]
The addresses are normal I2C addresses. The adapter is the string
name of the adapter, as shown in /sys/class/i2c-adapter/i2c-<n>/name.
It is *NOT* i2c-<n> itself.
The debug flags are bit flags for each BMC found; they are:
IPMI messages: 1, driver state: 2, timing: 4, I2C probe: 8
Setting dbg_probe to 1 will enable debugging of the probing and
detection process for BMCs on the SMBusses.
The slave_addrs parameter specifies the IPMI address of the local
BMC. This is
usually 0x20 and the driver defaults to that, but in case it's not, it
can be specified when the driver starts up.
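As an illustration, loading the driver for a BMC at a known I2C
address on a named adapter might look like the following (the
address, adapter name, and debug flags here are example values, not
defaults):

  modprobe ipmi_ssif addr=0x42 adapter="SMBus I801 adapter" slave_addrs=0x20 dbg=1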
Discovering the IPMI compliant BMC on the SMBus can cause devices on
the I2C bus to fail. The SMBus driver writes a "Get Device ID" IPMI
message as a block write to the I2C bus and waits for a response.
This action can be detrimental to some I2C devices. It is highly
recommended that the known I2C address be given to the SMBus driver
via the addr parameter unless you have DMI or ACPI data to tell the
driver what to use.
When compiled into the kernel, the addresses can be specified on the
kernel command line as:
ipmi_ssif.addr=<i2caddr1>[,<i2caddr2>[...]]
ipmi_ssif.adapter=<adapter1>[,<adapter2>[...]]
ipmi_ssif.dbg=<flags1>[,<flags2>[...]]
ipmi_ssif.dbg_probe=1
ipmi_ssif.slave_addrs=<addr1>[,<addr2>[...]]
These are the same options as on the module command line.
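For example, the modprobe invocation shown earlier, expressed on the
kernel command line (same illustrative values):

  ipmi_ssif.addr=0x42 ipmi_ssif.adapter="SMBus I801 adapter" ipmi_ssif.slave_addrs=0x20 ipmi_ssif.dbg=1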
The I2C driver does not support non-blocking access or polling, so
this driver cannot send IPMI panic events, extend the watchdog at
panic time, or perform other panic-related IPMI functions without
special kernel patches and driver modifications. You can get those
at the openipmi web page.
The driver supports hot add and remove of interfaces through the I2C
sysfs interface.
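For example, hot-adding an SSIF device through the generic I2C sysfs
interface might look like this (a sketch; the bus number and address
depend on your system, and "ipmi_ssif" is the driver's I2C device
name):

  echo ipmi_ssif 0x42 > /sys/bus/i2c/devices/i2c-0/new_device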
Other Pieces
------------

drivers/char/ipmi/Kconfig

@ -62,6 +62,20 @@ config IPMI_SI_PROBE_DEFAULTS
only be available on older systems if the "ipmi_si_intf.trydefaults=1"
boot argument is passed.
config IPMI_SSIF
tristate 'IPMI SMBus handler (SSIF)'
select I2C
help
Provides a driver for an SMBus interface to a BMC, meaning that you
have a driver that must be accessed over an I2C bus instead of a
standard interface. This module requires I2C support.
config IPMI_POWERNV
depends on PPC_POWERNV
tristate 'POWERNV (OPAL firmware) IPMI interface'
help
Provides a driver for OPAL firmware-based IPMI interfaces.
config IPMI_WATCHDOG
tristate 'IPMI Watchdog Timer'
help

drivers/char/ipmi/Makefile

@ -7,5 +7,7 @@ ipmi_si-y := ipmi_si_intf.o ipmi_kcs_sm.o ipmi_smic_sm.o ipmi_bt_sm.o
obj-$(CONFIG_IPMI_HANDLER) += ipmi_msghandler.o
obj-$(CONFIG_IPMI_DEVICE_INTERFACE) += ipmi_devintf.o
obj-$(CONFIG_IPMI_SI) += ipmi_si.o
obj-$(CONFIG_IPMI_SSIF) += ipmi_ssif.o
obj-$(CONFIG_IPMI_POWERNV) += ipmi_powernv.o
obj-$(CONFIG_IPMI_WATCHDOG) += ipmi_watchdog.o
obj-$(CONFIG_IPMI_POWEROFF) += ipmi_poweroff.o

drivers/char/ipmi/ipmi_msghandler.c

@ -56,6 +56,8 @@ static int ipmi_init_msghandler(void);
static void smi_recv_tasklet(unsigned long);
static void handle_new_recv_msgs(ipmi_smi_t intf);
static void need_waiter(ipmi_smi_t intf);
static int handle_one_recv_msg(ipmi_smi_t intf,
struct ipmi_smi_msg *msg);
static int initialized;
@ -191,12 +193,12 @@ struct ipmi_proc_entry {
#endif
struct bmc_device {
struct platform_device *dev;
struct platform_device pdev;
struct ipmi_device_id id;
unsigned char guid[16];
int guid_set;
struct kref refcount;
char name[16];
struct kref usecount;
/* bmc device attributes */
struct device_attribute device_id_attr;
@ -210,6 +212,7 @@ struct bmc_device {
struct device_attribute guid_attr;
struct device_attribute aux_firmware_rev_attr;
};
#define to_bmc_device(x) container_of((x), struct bmc_device, pdev.dev)
/*
* Various statistics for IPMI, these index stats[] in the ipmi_smi
@ -323,6 +326,9 @@ struct ipmi_smi {
struct kref refcount;
/* Set when the interface is being unregistered. */
bool in_shutdown;
/* Used for a list of interfaces. */
struct list_head link;
@ -341,7 +347,6 @@ struct ipmi_smi {
struct bmc_device *bmc;
char *my_dev_name;
char *sysfs_name;
/*
* This is the lower-layer's sender routine. Note that you
@ -377,11 +382,16 @@ struct ipmi_smi {
* periodic timer interrupt. The tasklet is for handling received
* messages directly from the handler.
*/
spinlock_t waiting_msgs_lock;
struct list_head waiting_msgs;
spinlock_t waiting_rcv_msgs_lock;
struct list_head waiting_rcv_msgs;
atomic_t watchdog_pretimeouts_to_deliver;
struct tasklet_struct recv_tasklet;
spinlock_t xmit_msgs_lock;
struct list_head xmit_msgs;
struct ipmi_smi_msg *curr_msg;
struct list_head hp_xmit_msgs;
/*
* The list of command receivers that are registered for commands
* on this interface.
@ -474,6 +484,18 @@ static DEFINE_MUTEX(smi_watchers_mutex);
#define ipmi_get_stat(intf, stat) \
((unsigned int) atomic_read(&(intf)->stats[IPMI_STAT_ ## stat]))
static char *addr_src_to_str[] = { "invalid", "hotmod", "hardcoded", "SPMI",
"ACPI", "SMBIOS", "PCI",
"device-tree", "default" };
const char *ipmi_addr_src_to_str(enum ipmi_addr_src src)
{
if (src > SI_DEFAULT)
src = 0; /* Invalid */
return addr_src_to_str[src];
}
EXPORT_SYMBOL(ipmi_addr_src_to_str);
static int is_lan_addr(struct ipmi_addr *addr)
{
return addr->addr_type == IPMI_LAN_ADDR_TYPE;
@ -517,7 +539,7 @@ static void clean_up_interface_data(ipmi_smi_t intf)
tasklet_kill(&intf->recv_tasklet);
free_smi_msg_list(&intf->waiting_msgs);
free_smi_msg_list(&intf->waiting_rcv_msgs);
free_recv_msg_list(&intf->waiting_events);
/*
@ -1473,6 +1495,30 @@ static inline void format_lan_msg(struct ipmi_smi_msg *smi_msg,
smi_msg->msgid = msgid;
}
static void smi_send(ipmi_smi_t intf, struct ipmi_smi_handlers *handlers,
struct ipmi_smi_msg *smi_msg, int priority)
{
int run_to_completion = intf->run_to_completion;
unsigned long flags;
if (!run_to_completion)
spin_lock_irqsave(&intf->xmit_msgs_lock, flags);
if (intf->curr_msg) {
if (priority > 0)
list_add_tail(&smi_msg->link, &intf->hp_xmit_msgs);
else
list_add_tail(&smi_msg->link, &intf->xmit_msgs);
smi_msg = NULL;
} else {
intf->curr_msg = smi_msg;
}
if (!run_to_completion)
spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
if (smi_msg)
handlers->sender(intf->send_info, smi_msg);
}
/*
* Separate from ipmi_request so that the user does not have to be
* supplied in certain circumstances (mainly at panic time). If
@ -1497,7 +1543,6 @@ static int i_ipmi_request(ipmi_user_t user,
struct ipmi_smi_msg *smi_msg;
struct ipmi_recv_msg *recv_msg;
unsigned long flags;
struct ipmi_smi_handlers *handlers;
if (supplied_recv)
@ -1520,8 +1565,7 @@ static int i_ipmi_request(ipmi_user_t user,
}
rcu_read_lock();
handlers = intf->handlers;
if (!handlers) {
if (intf->in_shutdown) {
rv = -ENODEV;
goto out_err;
}
@ -1856,7 +1900,7 @@ static int i_ipmi_request(ipmi_user_t user,
}
#endif
handlers->sender(intf->send_info, smi_msg, priority);
smi_send(intf, intf->handlers, smi_msg, priority);
rcu_read_unlock();
return 0;
@ -2153,7 +2197,7 @@ static void remove_proc_entries(ipmi_smi_t smi)
static int __find_bmc_guid(struct device *dev, void *data)
{
unsigned char *id = data;
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return memcmp(bmc->guid, id, 16) == 0;
}
@ -2164,7 +2208,7 @@ static struct bmc_device *ipmi_find_bmc_guid(struct device_driver *drv,
dev = driver_find_device(drv, NULL, guid, __find_bmc_guid);
if (dev)
return dev_get_drvdata(dev);
return to_bmc_device(dev);
else
return NULL;
}
@ -2177,7 +2221,7 @@ struct prod_dev_id {
static int __find_bmc_prod_dev_id(struct device *dev, void *data)
{
struct prod_dev_id *id = data;
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return (bmc->id.product_id == id->product_id
&& bmc->id.device_id == id->device_id);
@ -2195,7 +2239,7 @@ static struct bmc_device *ipmi_find_bmc_prod_dev_id(
dev = driver_find_device(drv, NULL, &id, __find_bmc_prod_dev_id);
if (dev)
return dev_get_drvdata(dev);
return to_bmc_device(dev);
else
return NULL;
}
@ -2204,84 +2248,92 @@ static ssize_t device_id_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 10, "%u\n", bmc->id.device_id);
}
DEVICE_ATTR(device_id, S_IRUGO, device_id_show, NULL);
static ssize_t provides_dev_sdrs_show(struct device *dev,
struct device_attribute *attr,
char *buf)
static ssize_t provides_device_sdrs_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 10, "%u\n",
(bmc->id.device_revision & 0x80) >> 7);
}
DEVICE_ATTR(provides_device_sdrs, S_IRUGO, provides_device_sdrs_show, NULL);
static ssize_t revision_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 20, "%u\n",
bmc->id.device_revision & 0x0F);
}
DEVICE_ATTR(revision, S_IRUGO, revision_show, NULL);
static ssize_t firmware_rev_show(struct device *dev,
struct device_attribute *attr,
char *buf)
static ssize_t firmware_revision_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 20, "%u.%x\n", bmc->id.firmware_revision_1,
bmc->id.firmware_revision_2);
}
DEVICE_ATTR(firmware_revision, S_IRUGO, firmware_revision_show, NULL);
static ssize_t ipmi_version_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 20, "%u.%u\n",
ipmi_version_major(&bmc->id),
ipmi_version_minor(&bmc->id));
}
DEVICE_ATTR(ipmi_version, S_IRUGO, ipmi_version_show, NULL);
static ssize_t add_dev_support_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 10, "0x%02x\n",
bmc->id.additional_device_support);
}
DEVICE_ATTR(additional_device_support, S_IRUGO, add_dev_support_show, NULL);
static ssize_t manufacturer_id_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 20, "0x%6.6x\n", bmc->id.manufacturer_id);
}
DEVICE_ATTR(manufacturer_id, S_IRUGO, manufacturer_id_show, NULL);
static ssize_t product_id_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 10, "0x%4.4x\n", bmc->id.product_id);
}
DEVICE_ATTR(product_id, S_IRUGO, product_id_show, NULL);
static ssize_t aux_firmware_rev_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 21, "0x%02x 0x%02x 0x%02x 0x%02x\n",
bmc->id.aux_firmware_revision[3],
@ -2289,174 +2341,96 @@ static ssize_t aux_firmware_rev_show(struct device *dev,
bmc->id.aux_firmware_revision[1],
bmc->id.aux_firmware_revision[0]);
}
DEVICE_ATTR(aux_firmware_revision, S_IRUGO, aux_firmware_rev_show, NULL);
static ssize_t guid_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct bmc_device *bmc = dev_get_drvdata(dev);
struct bmc_device *bmc = to_bmc_device(dev);
return snprintf(buf, 100, "%Lx%Lx\n",
(long long) bmc->guid[0],
(long long) bmc->guid[8]);
}
DEVICE_ATTR(guid, S_IRUGO, guid_show, NULL);
static void remove_files(struct bmc_device *bmc)
static struct attribute *bmc_dev_attrs[] = {
&dev_attr_device_id.attr,
&dev_attr_provides_device_sdrs.attr,
&dev_attr_revision.attr,
&dev_attr_firmware_revision.attr,
&dev_attr_ipmi_version.attr,
&dev_attr_additional_device_support.attr,
&dev_attr_manufacturer_id.attr,
&dev_attr_product_id.attr,
NULL
};
static struct attribute_group bmc_dev_attr_group = {
.attrs = bmc_dev_attrs,
};
static const struct attribute_group *bmc_dev_attr_groups[] = {
&bmc_dev_attr_group,
NULL
};
static struct device_type bmc_device_type = {
.groups = bmc_dev_attr_groups,
};
static void
release_bmc_device(struct device *dev)
{
if (!bmc->dev)
return;
device_remove_file(&bmc->dev->dev,
&bmc->device_id_attr);
device_remove_file(&bmc->dev->dev,
&bmc->provides_dev_sdrs_attr);
device_remove_file(&bmc->dev->dev,
&bmc->revision_attr);
device_remove_file(&bmc->dev->dev,
&bmc->firmware_rev_attr);
device_remove_file(&bmc->dev->dev,
&bmc->version_attr);
device_remove_file(&bmc->dev->dev,
&bmc->add_dev_support_attr);
device_remove_file(&bmc->dev->dev,
&bmc->manufacturer_id_attr);
device_remove_file(&bmc->dev->dev,
&bmc->product_id_attr);
if (bmc->id.aux_firmware_revision_set)
device_remove_file(&bmc->dev->dev,
&bmc->aux_firmware_rev_attr);
if (bmc->guid_set)
device_remove_file(&bmc->dev->dev,
&bmc->guid_attr);
kfree(to_bmc_device(dev));
}
static void
cleanup_bmc_device(struct kref *ref)
{
struct bmc_device *bmc;
struct bmc_device *bmc = container_of(ref, struct bmc_device, usecount);
bmc = container_of(ref, struct bmc_device, refcount);
if (bmc->id.aux_firmware_revision_set)
device_remove_file(&bmc->pdev.dev,
&bmc->aux_firmware_rev_attr);
if (bmc->guid_set)
device_remove_file(&bmc->pdev.dev,
&bmc->guid_attr);
remove_files(bmc);
platform_device_unregister(bmc->dev);
kfree(bmc);
platform_device_unregister(&bmc->pdev);
}
static void ipmi_bmc_unregister(ipmi_smi_t intf)
{
struct bmc_device *bmc = intf->bmc;
if (intf->sysfs_name) {
sysfs_remove_link(&intf->si_dev->kobj, intf->sysfs_name);
kfree(intf->sysfs_name);
intf->sysfs_name = NULL;
}
sysfs_remove_link(&intf->si_dev->kobj, "bmc");
if (intf->my_dev_name) {
sysfs_remove_link(&bmc->dev->dev.kobj, intf->my_dev_name);
sysfs_remove_link(&bmc->pdev.dev.kobj, intf->my_dev_name);
kfree(intf->my_dev_name);
intf->my_dev_name = NULL;
}
mutex_lock(&ipmidriver_mutex);
kref_put(&bmc->refcount, cleanup_bmc_device);
kref_put(&bmc->usecount, cleanup_bmc_device);
intf->bmc = NULL;
mutex_unlock(&ipmidriver_mutex);
}
static int create_files(struct bmc_device *bmc)
static int create_bmc_files(struct bmc_device *bmc)
{
int err;
bmc->device_id_attr.attr.name = "device_id";
bmc->device_id_attr.attr.mode = S_IRUGO;
bmc->device_id_attr.show = device_id_show;
sysfs_attr_init(&bmc->device_id_attr.attr);
bmc->provides_dev_sdrs_attr.attr.name = "provides_device_sdrs";
bmc->provides_dev_sdrs_attr.attr.mode = S_IRUGO;
bmc->provides_dev_sdrs_attr.show = provides_dev_sdrs_show;
sysfs_attr_init(&bmc->provides_dev_sdrs_attr.attr);
bmc->revision_attr.attr.name = "revision";
bmc->revision_attr.attr.mode = S_IRUGO;
bmc->revision_attr.show = revision_show;
sysfs_attr_init(&bmc->revision_attr.attr);
bmc->firmware_rev_attr.attr.name = "firmware_revision";
bmc->firmware_rev_attr.attr.mode = S_IRUGO;
bmc->firmware_rev_attr.show = firmware_rev_show;
sysfs_attr_init(&bmc->firmware_rev_attr.attr);
bmc->version_attr.attr.name = "ipmi_version";
bmc->version_attr.attr.mode = S_IRUGO;
bmc->version_attr.show = ipmi_version_show;
sysfs_attr_init(&bmc->version_attr.attr);
bmc->add_dev_support_attr.attr.name = "additional_device_support";
bmc->add_dev_support_attr.attr.mode = S_IRUGO;
bmc->add_dev_support_attr.show = add_dev_support_show;
sysfs_attr_init(&bmc->add_dev_support_attr.attr);
bmc->manufacturer_id_attr.attr.name = "manufacturer_id";
bmc->manufacturer_id_attr.attr.mode = S_IRUGO;
bmc->manufacturer_id_attr.show = manufacturer_id_show;
sysfs_attr_init(&bmc->manufacturer_id_attr.attr);
bmc->product_id_attr.attr.name = "product_id";
bmc->product_id_attr.attr.mode = S_IRUGO;
bmc->product_id_attr.show = product_id_show;
sysfs_attr_init(&bmc->product_id_attr.attr);
bmc->guid_attr.attr.name = "guid";
bmc->guid_attr.attr.mode = S_IRUGO;
bmc->guid_attr.show = guid_show;
sysfs_attr_init(&bmc->guid_attr.attr);
bmc->aux_firmware_rev_attr.attr.name = "aux_firmware_revision";
bmc->aux_firmware_rev_attr.attr.mode = S_IRUGO;
bmc->aux_firmware_rev_attr.show = aux_firmware_rev_show;
sysfs_attr_init(&bmc->aux_firmware_rev_attr.attr);
err = device_create_file(&bmc->dev->dev,
&bmc->device_id_attr);
if (err)
goto out;
err = device_create_file(&bmc->dev->dev,
&bmc->provides_dev_sdrs_attr);
if (err)
goto out_devid;
err = device_create_file(&bmc->dev->dev,
&bmc->revision_attr);
if (err)
goto out_sdrs;
err = device_create_file(&bmc->dev->dev,
&bmc->firmware_rev_attr);
if (err)
goto out_rev;
err = device_create_file(&bmc->dev->dev,
&bmc->version_attr);
if (err)
goto out_firm;
err = device_create_file(&bmc->dev->dev,
&bmc->add_dev_support_attr);
if (err)
goto out_version;
err = device_create_file(&bmc->dev->dev,
&bmc->manufacturer_id_attr);
if (err)
goto out_add_dev;
err = device_create_file(&bmc->dev->dev,
&bmc->product_id_attr);
if (err)
goto out_manu;
if (bmc->id.aux_firmware_revision_set) {
err = device_create_file(&bmc->dev->dev,
bmc->aux_firmware_rev_attr.attr.name = "aux_firmware_revision";
err = device_create_file(&bmc->pdev.dev,
&bmc->aux_firmware_rev_attr);
if (err)
goto out_prod_id;
goto out;
}
if (bmc->guid_set) {
err = device_create_file(&bmc->dev->dev,
bmc->guid_attr.attr.name = "guid";
err = device_create_file(&bmc->pdev.dev,
&bmc->guid_attr);
if (err)
goto out_aux_firm;
@ -2466,44 +2440,17 @@ static int create_files(struct bmc_device *bmc)
out_aux_firm:
if (bmc->id.aux_firmware_revision_set)
device_remove_file(&bmc->dev->dev,
device_remove_file(&bmc->pdev.dev,
&bmc->aux_firmware_rev_attr);
out_prod_id:
device_remove_file(&bmc->dev->dev,
&bmc->product_id_attr);
out_manu:
device_remove_file(&bmc->dev->dev,
&bmc->manufacturer_id_attr);
out_add_dev:
device_remove_file(&bmc->dev->dev,
&bmc->add_dev_support_attr);
out_version:
device_remove_file(&bmc->dev->dev,
&bmc->version_attr);
out_firm:
device_remove_file(&bmc->dev->dev,
&bmc->firmware_rev_attr);
out_rev:
device_remove_file(&bmc->dev->dev,
&bmc->revision_attr);
out_sdrs:
device_remove_file(&bmc->dev->dev,
&bmc->provides_dev_sdrs_attr);
out_devid:
device_remove_file(&bmc->dev->dev,
&bmc->device_id_attr);
out:
return err;
}
static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
const char *sysfs_name)
static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum)
{
int rv;
struct bmc_device *bmc = intf->bmc;
struct bmc_device *old_bmc;
int size;
char dummy[1];
mutex_lock(&ipmidriver_mutex);
@ -2527,7 +2474,7 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
intf->bmc = old_bmc;
bmc = old_bmc;
kref_get(&bmc->refcount);
kref_get(&bmc->usecount);
mutex_unlock(&ipmidriver_mutex);
printk(KERN_INFO
@ -2537,12 +2484,12 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
bmc->id.product_id,
bmc->id.device_id);
} else {
char name[14];
unsigned char orig_dev_id = bmc->id.device_id;
int warn_printed = 0;
snprintf(name, sizeof(name),
snprintf(bmc->name, sizeof(bmc->name),
"ipmi_bmc.%4.4x", bmc->id.product_id);
bmc->pdev.name = bmc->name;
while (ipmi_find_bmc_prod_dev_id(&ipmidriver.driver,
bmc->id.product_id,
@ -2566,23 +2513,16 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
}
}
bmc->dev = platform_device_alloc(name, bmc->id.device_id);
if (!bmc->dev) {
mutex_unlock(&ipmidriver_mutex);
printk(KERN_ERR
"ipmi_msghandler:"
" Unable to allocate platform device\n");
return -ENOMEM;
}
bmc->dev->dev.driver = &ipmidriver.driver;
dev_set_drvdata(&bmc->dev->dev, bmc);
kref_init(&bmc->refcount);
bmc->pdev.dev.driver = &ipmidriver.driver;
bmc->pdev.id = bmc->id.device_id;
bmc->pdev.dev.release = release_bmc_device;
bmc->pdev.dev.type = &bmc_device_type;
kref_init(&bmc->usecount);
rv = platform_device_add(bmc->dev);
rv = platform_device_register(&bmc->pdev);
mutex_unlock(&ipmidriver_mutex);
if (rv) {
platform_device_put(bmc->dev);
bmc->dev = NULL;
put_device(&bmc->pdev.dev);
printk(KERN_ERR
"ipmi_msghandler:"
" Unable to register bmc device: %d\n",
@ -2594,10 +2534,10 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
return rv;
}
rv = create_files(bmc);
rv = create_bmc_files(bmc);
if (rv) {
mutex_lock(&ipmidriver_mutex);
platform_device_unregister(bmc->dev);
platform_device_unregister(&bmc->pdev);
mutex_unlock(&ipmidriver_mutex);
return rv;
@ -2614,44 +2554,26 @@ static int ipmi_bmc_register(ipmi_smi_t intf, int ifnum,
* create symlink from system interface device to bmc device
* and back.
*/
intf->sysfs_name = kstrdup(sysfs_name, GFP_KERNEL);
if (!intf->sysfs_name) {
rv = -ENOMEM;
printk(KERN_ERR
"ipmi_msghandler: allocate link to BMC: %d\n",
rv);
goto out_err;
}
rv = sysfs_create_link(&intf->si_dev->kobj,
&bmc->dev->dev.kobj, intf->sysfs_name);
rv = sysfs_create_link(&intf->si_dev->kobj, &bmc->pdev.dev.kobj, "bmc");
if (rv) {
kfree(intf->sysfs_name);
intf->sysfs_name = NULL;
printk(KERN_ERR
"ipmi_msghandler: Unable to create bmc symlink: %d\n",
rv);
goto out_err;
}
size = snprintf(dummy, 0, "ipmi%d", ifnum);
intf->my_dev_name = kmalloc(size+1, GFP_KERNEL);
intf->my_dev_name = kasprintf(GFP_KERNEL, "ipmi%d", ifnum);
if (!intf->my_dev_name) {
kfree(intf->sysfs_name);
intf->sysfs_name = NULL;
rv = -ENOMEM;
printk(KERN_ERR
"ipmi_msghandler: allocate link from BMC: %d\n",
rv);
goto out_err;
}
snprintf(intf->my_dev_name, size+1, "ipmi%d", ifnum);
rv = sysfs_create_link(&bmc->dev->dev.kobj, &intf->si_dev->kobj,
rv = sysfs_create_link(&bmc->pdev.dev.kobj, &intf->si_dev->kobj,
intf->my_dev_name);
if (rv) {
kfree(intf->sysfs_name);
intf->sysfs_name = NULL;
kfree(intf->my_dev_name);
intf->my_dev_name = NULL;
printk(KERN_ERR
@ -2850,7 +2772,6 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
void *send_info,
struct ipmi_device_id *device_id,
struct device *si_dev,
const char *sysfs_name,
unsigned char slave_addr)
{
int i, j;
@ -2909,12 +2830,15 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
#ifdef CONFIG_PROC_FS
mutex_init(&intf->proc_entry_lock);
#endif
spin_lock_init(&intf->waiting_msgs_lock);
INIT_LIST_HEAD(&intf->waiting_msgs);
spin_lock_init(&intf->waiting_rcv_msgs_lock);
INIT_LIST_HEAD(&intf->waiting_rcv_msgs);
tasklet_init(&intf->recv_tasklet,
smi_recv_tasklet,
(unsigned long) intf);
atomic_set(&intf->watchdog_pretimeouts_to_deliver, 0);
spin_lock_init(&intf->xmit_msgs_lock);
INIT_LIST_HEAD(&intf->xmit_msgs);
INIT_LIST_HEAD(&intf->hp_xmit_msgs);
spin_lock_init(&intf->events_lock);
atomic_set(&intf->event_waiters, 0);
intf->ticks_to_req_ev = IPMI_REQUEST_EV_TIME;
@ -2984,7 +2908,7 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
if (rv == 0)
rv = add_proc_entries(intf, i);
rv = ipmi_bmc_register(intf, i, sysfs_name);
rv = ipmi_bmc_register(intf, i);
out:
if (rv) {
@ -3014,12 +2938,50 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
}
EXPORT_SYMBOL(ipmi_register_smi);
static void deliver_smi_err_response(ipmi_smi_t intf,
struct ipmi_smi_msg *msg,
unsigned char err)
{
msg->rsp[0] = msg->data[0] | 4;
msg->rsp[1] = msg->data[1];
msg->rsp[2] = err;
msg->rsp_size = 3;
/* It's an error, so it will never requeue, no need to check return. */
handle_one_recv_msg(intf, msg);
}
static void cleanup_smi_msgs(ipmi_smi_t intf)
{
int i;
struct seq_table *ent;
struct ipmi_smi_msg *msg;
struct list_head *entry;
struct list_head tmplist;
/* Clear out our transmit queues and hold the messages. */
INIT_LIST_HEAD(&tmplist);
list_splice_tail(&intf->hp_xmit_msgs, &tmplist);
list_splice_tail(&intf->xmit_msgs, &tmplist);
/* Current message first, to preserve order */
while (intf->curr_msg && !list_empty(&intf->waiting_rcv_msgs)) {
/* Wait for the message to clear out. */
schedule_timeout(1);
}
/* No need for locks, the interface is down. */
/*
* Return errors for all pending messages in queue and in the
* tables waiting for remote responses.
*/
while (!list_empty(&tmplist)) {
entry = tmplist.next;
list_del(entry);
msg = list_entry(entry, struct ipmi_smi_msg, link);
deliver_smi_err_response(intf, msg, IPMI_ERR_UNSPECIFIED);
}
for (i = 0; i < IPMI_IPMB_NUM_SEQ; i++) {
ent = &(intf->seq_table[i]);
if (!ent->inuse)
@ -3031,20 +2993,33 @@ static void cleanup_smi_msgs(ipmi_smi_t intf)
int ipmi_unregister_smi(ipmi_smi_t intf)
{
struct ipmi_smi_watcher *w;
int intf_num = intf->intf_num;
int intf_num = intf->intf_num;
ipmi_user_t user;
ipmi_bmc_unregister(intf);
mutex_lock(&smi_watchers_mutex);
mutex_lock(&ipmi_interfaces_mutex);
intf->intf_num = -1;
intf->handlers = NULL;
intf->in_shutdown = true;
list_del_rcu(&intf->link);
mutex_unlock(&ipmi_interfaces_mutex);
synchronize_rcu();
cleanup_smi_msgs(intf);
/* Clean up the effects of users on the lower-level software. */
mutex_lock(&ipmi_interfaces_mutex);
rcu_read_lock();
list_for_each_entry_rcu(user, &intf->users, link) {
module_put(intf->handlers->owner);
if (intf->handlers->dec_usecount)
intf->handlers->dec_usecount(intf->send_info);
}
rcu_read_unlock();
intf->handlers = NULL;
mutex_unlock(&ipmi_interfaces_mutex);
remove_proc_entries(intf);
/*
@ -3134,7 +3109,6 @@ static int handle_ipmb_get_msg_cmd(ipmi_smi_t intf,
ipmi_user_t user = NULL;
struct ipmi_ipmb_addr *ipmb_addr;
struct ipmi_recv_msg *recv_msg;
struct ipmi_smi_handlers *handlers;
if (msg->rsp_size < 10) {
/* Message not big enough, just ignore it. */
@ -3188,9 +3162,8 @@ static int handle_ipmb_get_msg_cmd(ipmi_smi_t intf,
}
#endif
rcu_read_lock();
handlers = intf->handlers;
if (handlers) {
handlers->sender(intf->send_info, msg, 0);
if (!intf->in_shutdown) {
smi_send(intf, intf->handlers, msg, 0);
/*
* We used the message, so return the value
* that causes it to not be freed or
@ -3857,32 +3830,32 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
/* See if any waiting messages need to be processed. */
if (!run_to_completion)
spin_lock_irqsave(&intf->waiting_msgs_lock, flags);
while (!list_empty(&intf->waiting_msgs)) {
smi_msg = list_entry(intf->waiting_msgs.next,
spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
while (!list_empty(&intf->waiting_rcv_msgs)) {
smi_msg = list_entry(intf->waiting_rcv_msgs.next,
struct ipmi_smi_msg, link);
list_del(&smi_msg->link);
if (!run_to_completion)
spin_unlock_irqrestore(&intf->waiting_msgs_lock, flags);
spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
flags);
rv = handle_one_recv_msg(intf, smi_msg);
if (!run_to_completion)
spin_lock_irqsave(&intf->waiting_msgs_lock, flags);
if (rv == 0) {
/* Message handled */
ipmi_free_smi_msg(smi_msg);
} else if (rv < 0) {
/* Fatal error on the message, del but don't free. */
} else {
spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
if (rv > 0) {
/*
* To preserve message order, quit if we
* can't handle a message.
*/
list_add(&smi_msg->link, &intf->waiting_msgs);
break;
} else {
list_del(&smi_msg->link);
if (rv == 0)
/* Message handled */
ipmi_free_smi_msg(smi_msg);
/* If rv < 0, fatal error, del but don't free. */
}
}
if (!run_to_completion)
spin_unlock_irqrestore(&intf->waiting_msgs_lock, flags);
spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock, flags);
/*
* If the pretimout count is non-zero, decrement one from it and
@ -3903,7 +3876,41 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
static void smi_recv_tasklet(unsigned long val)
{
handle_new_recv_msgs((ipmi_smi_t) val);
unsigned long flags = 0; /* keep us warning-free. */
ipmi_smi_t intf = (ipmi_smi_t) val;
int run_to_completion = intf->run_to_completion;
struct ipmi_smi_msg *newmsg = NULL;
/*
* Start the next message if available.
*
* Do this here, not in the actual receiver, because we may deadlock
* because the lower layer is allowed to hold locks while calling
* message delivery.
*/
if (!run_to_completion)
spin_lock_irqsave(&intf->xmit_msgs_lock, flags);
if (intf->curr_msg == NULL && !intf->in_shutdown) {
struct list_head *entry = NULL;
/* Pick the high priority queue first. */
if (!list_empty(&intf->hp_xmit_msgs))
entry = intf->hp_xmit_msgs.next;
else if (!list_empty(&intf->xmit_msgs))
entry = intf->xmit_msgs.next;
if (entry) {
list_del(entry);
newmsg = list_entry(entry, struct ipmi_smi_msg, link);
intf->curr_msg = newmsg;
}
}
if (!run_to_completion)
spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
if (newmsg)
intf->handlers->sender(intf->send_info, newmsg);
handle_new_recv_msgs(intf);
}
/* Handle a new message from the lower layer. */
@ -3911,13 +3918,16 @@ void ipmi_smi_msg_received(ipmi_smi_t intf,
struct ipmi_smi_msg *msg)
{
unsigned long flags = 0; /* keep us warning-free. */
int run_to_completion;
int run_to_completion = intf->run_to_completion;
if ((msg->data_size >= 2)
&& (msg->data[0] == (IPMI_NETFN_APP_REQUEST << 2))
&& (msg->data[1] == IPMI_SEND_MSG_CMD)
&& (msg->user_data == NULL)) {
if (intf->in_shutdown)
goto free_msg;
/*
* This is the local response to a command send, start
* the timer for these. The user_data will not be
@ -3953,29 +3963,40 @@ void ipmi_smi_msg_received(ipmi_smi_t intf,
/* The message was sent, start the timer. */
intf_start_seq_timer(intf, msg->msgid);
free_msg:
ipmi_free_smi_msg(msg);
goto out;
} else {
/*
* To preserve message order, we keep a queue and deliver from
* a tasklet.
*/
if (!run_to_completion)
spin_lock_irqsave(&intf->waiting_rcv_msgs_lock, flags);
list_add_tail(&msg->link, &intf->waiting_rcv_msgs);
if (!run_to_completion)
spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
flags);
}
/*
* To preserve message order, if the list is not empty, we
* tack this message onto the end of the list.
*/
run_to_completion = intf->run_to_completion;
if (!run_to_completion)
spin_lock_irqsave(&intf->waiting_msgs_lock, flags);
list_add_tail(&msg->link, &intf->waiting_msgs);
spin_lock_irqsave(&intf->xmit_msgs_lock, flags);
if (msg == intf->curr_msg)
intf->curr_msg = NULL;
if (!run_to_completion)
spin_unlock_irqrestore(&intf->waiting_msgs_lock, flags);
spin_unlock_irqrestore(&intf->xmit_msgs_lock, flags);
tasklet_schedule(&intf->recv_tasklet);
out:
return;
if (run_to_completion)
smi_recv_tasklet((unsigned long) intf);
else
tasklet_schedule(&intf->recv_tasklet);
}
EXPORT_SYMBOL(ipmi_smi_msg_received);
void ipmi_smi_watchdog_pretimeout(ipmi_smi_t intf)
{
if (intf->in_shutdown)
return;
atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
tasklet_schedule(&intf->recv_tasklet);
}
@ -4017,7 +4038,7 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
struct ipmi_recv_msg *msg;
struct ipmi_smi_handlers *handlers;
if (intf->intf_num == -1)
if (intf->in_shutdown)
return;
if (!ent->inuse)
@ -4082,8 +4103,7 @@ static void check_msg_timeout(ipmi_smi_t intf, struct seq_table *ent,
ipmi_inc_stat(intf,
retransmitted_ipmb_commands);
intf->handlers->sender(intf->send_info,
smi_msg, 0);
smi_send(intf, intf->handlers, smi_msg, 0);
} else
ipmi_free_smi_msg(smi_msg);
@ -4145,15 +4165,12 @@ static unsigned int ipmi_timeout_handler(ipmi_smi_t intf, long timeout_period)
static void ipmi_request_event(ipmi_smi_t intf)
{
struct ipmi_smi_handlers *handlers;
/* No event requests when in maintenance mode. */
if (intf->maintenance_mode_enable)
return;
handlers = intf->handlers;
if (handlers)
handlers->request_events(intf->send_info);
if (!intf->in_shutdown)
intf->handlers->request_events(intf->send_info);
}
static struct timer_list ipmi_timer;
@ -4548,6 +4565,7 @@ static int ipmi_init_msghandler(void)
proc_ipmi_root = proc_mkdir("ipmi", NULL);
if (!proc_ipmi_root) {
printk(KERN_ERR PFX "Unable to create IPMI proc dir");
driver_unregister(&ipmidriver.driver);
return -ENOMEM;
}

drivers/char/ipmi/ipmi_powernv.c

@ -0,0 +1,310 @@
/*
* PowerNV OPAL IPMI driver
*
* Copyright 2014 IBM Corp.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*/
#define pr_fmt(fmt) "ipmi-powernv: " fmt
#include <linux/ipmi_smi.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/of.h>
#include <asm/opal.h>
struct ipmi_smi_powernv {
u64 interface_id;
struct ipmi_device_id ipmi_id;
ipmi_smi_t intf;
u64 event;
struct notifier_block event_nb;
/**
* We assume that there can only be one outstanding request, so
* keep the pending message in cur_msg. We protect this from concurrent
* updates through send & recv calls, (and consequently opal_msg, which
* is in-use when cur_msg is set) with msg_lock
*/
spinlock_t msg_lock;
struct ipmi_smi_msg *cur_msg;
struct opal_ipmi_msg *opal_msg;
};
static int ipmi_powernv_start_processing(void *send_info, ipmi_smi_t intf)
{
struct ipmi_smi_powernv *smi = send_info;
smi->intf = intf;
return 0;
}
static void send_error_reply(struct ipmi_smi_powernv *smi,
struct ipmi_smi_msg *msg, u8 completion_code)
{
msg->rsp[0] = msg->data[0] | 0x4;
msg->rsp[1] = msg->data[1];
msg->rsp[2] = completion_code;
msg->rsp_size = 3;
ipmi_smi_msg_received(smi->intf, msg);
}
static void ipmi_powernv_send(void *send_info, struct ipmi_smi_msg *msg)
{
struct ipmi_smi_powernv *smi = send_info;
struct opal_ipmi_msg *opal_msg;
unsigned long flags;
int comp, rc;
size_t size;
/* ensure data_len will fit in the opal_ipmi_msg buffer... */
if (msg->data_size > IPMI_MAX_MSG_LENGTH) {
comp = IPMI_REQ_LEN_EXCEEDED_ERR;
goto err;
}
/* ... and that we at least have netfn and cmd bytes */
if (msg->data_size < 2) {
comp = IPMI_REQ_LEN_INVALID_ERR;
goto err;
}
spin_lock_irqsave(&smi->msg_lock, flags);
if (smi->cur_msg) {
comp = IPMI_NODE_BUSY_ERR;
goto err_unlock;
}
/* format our data for the OPAL API */
opal_msg = smi->opal_msg;
opal_msg->version = OPAL_IPMI_MSG_FORMAT_VERSION_1;
opal_msg->netfn = msg->data[0];
opal_msg->cmd = msg->data[1];
if (msg->data_size > 2)
memcpy(opal_msg->data, msg->data + 2, msg->data_size - 2);
/* data_size already includes the netfn and cmd bytes */
size = sizeof(*opal_msg) + msg->data_size - 2;
pr_devel("%s: opal_ipmi_send(0x%llx, %p, %ld)\n", __func__,
smi->interface_id, opal_msg, size);
rc = opal_ipmi_send(smi->interface_id, opal_msg, size);
pr_devel("%s: -> %d\n", __func__, rc);
if (!rc) {
smi->cur_msg = msg;
spin_unlock_irqrestore(&smi->msg_lock, flags);
return;
}
comp = IPMI_ERR_UNSPECIFIED;
err_unlock:
spin_unlock_irqrestore(&smi->msg_lock, flags);
err:
send_error_reply(smi, msg, comp);
}
static int ipmi_powernv_recv(struct ipmi_smi_powernv *smi)
{
struct opal_ipmi_msg *opal_msg;
struct ipmi_smi_msg *msg;
unsigned long flags;
uint64_t size;
int rc;
pr_devel("%s: opal_ipmi_recv(%llx, msg, sz)\n", __func__,
smi->interface_id);
spin_lock_irqsave(&smi->msg_lock, flags);
if (!smi->cur_msg) {
spin_unlock_irqrestore(&smi->msg_lock, flags);
pr_warn("no current message?\n");
return 0;
}
msg = smi->cur_msg;
opal_msg = smi->opal_msg;
size = cpu_to_be64(sizeof(*opal_msg) + IPMI_MAX_MSG_LENGTH);
rc = opal_ipmi_recv(smi->interface_id,
opal_msg,
&size);
size = be64_to_cpu(size);
pr_devel("%s: -> %d (size %lld)\n", __func__,
rc, rc == 0 ? size : 0);
if (rc) {
spin_unlock_irqrestore(&smi->msg_lock, flags);
ipmi_free_smi_msg(msg);
return 0;
}
if (size < sizeof(*opal_msg)) {
spin_unlock_irqrestore(&smi->msg_lock, flags);
pr_warn("unexpected IPMI message size %lld\n", size);
return 0;
}
if (opal_msg->version != OPAL_IPMI_MSG_FORMAT_VERSION_1) {
spin_unlock_irqrestore(&smi->msg_lock, flags);
pr_warn("unexpected IPMI message format (version %d)\n",
opal_msg->version);
return 0;
}
msg->rsp[0] = opal_msg->netfn;
msg->rsp[1] = opal_msg->cmd;
if (size > sizeof(*opal_msg))
memcpy(&msg->rsp[2], opal_msg->data, size - sizeof(*opal_msg));
msg->rsp_size = 2 + size - sizeof(*opal_msg);
smi->cur_msg = NULL;
spin_unlock_irqrestore(&smi->msg_lock, flags);
ipmi_smi_msg_received(smi->intf, msg);
return 0;
}
static void ipmi_powernv_request_events(void *send_info)
{
}
static void ipmi_powernv_set_run_to_completion(void *send_info,
bool run_to_completion)
{
}
static void ipmi_powernv_poll(void *send_info)
{
struct ipmi_smi_powernv *smi = send_info;
ipmi_powernv_recv(smi);
}
static struct ipmi_smi_handlers ipmi_powernv_smi_handlers = {
.owner = THIS_MODULE,
.start_processing = ipmi_powernv_start_processing,
.sender = ipmi_powernv_send,
.request_events = ipmi_powernv_request_events,
.set_run_to_completion = ipmi_powernv_set_run_to_completion,
.poll = ipmi_powernv_poll,
};
static int ipmi_opal_event(struct notifier_block *nb,
unsigned long events, void *change)
{
struct ipmi_smi_powernv *smi = container_of(nb,
struct ipmi_smi_powernv, event_nb);
if (events & smi->event)
ipmi_powernv_recv(smi);
return 0;
}
static int ipmi_powernv_probe(struct platform_device *pdev)
{
struct ipmi_smi_powernv *ipmi;
struct device *dev;
u32 prop;
int rc;
if (!pdev || !pdev->dev.of_node)
return -ENODEV;
dev = &pdev->dev;
ipmi = devm_kzalloc(dev, sizeof(*ipmi), GFP_KERNEL);
if (!ipmi)
return -ENOMEM;
spin_lock_init(&ipmi->msg_lock);
rc = of_property_read_u32(dev->of_node, "ibm,ipmi-interface-id",
&prop);
if (rc) {
dev_warn(dev, "No interface ID property\n");
goto err_free;
}
ipmi->interface_id = prop;
rc = of_property_read_u32(dev->of_node, "interrupts", &prop);
if (rc) {
dev_warn(dev, "No interrupts property\n");
goto err_free;
}
ipmi->event = 1ull << prop;
ipmi->event_nb.notifier_call = ipmi_opal_event;
rc = opal_notifier_register(&ipmi->event_nb);
if (rc) {
dev_warn(dev, "OPAL notifier registration failed (%d)\n", rc);
goto err_free;
}
ipmi->opal_msg = devm_kmalloc(dev,
sizeof(*ipmi->opal_msg) + IPMI_MAX_MSG_LENGTH,
GFP_KERNEL);
if (!ipmi->opal_msg) {
rc = -ENOMEM;
goto err_unregister;
}
/* todo: query actual ipmi_device_id */
rc = ipmi_register_smi(&ipmi_powernv_smi_handlers, ipmi,
&ipmi->ipmi_id, dev, 0);
if (rc) {
dev_warn(dev, "IPMI SMI registration failed (%d)\n", rc);
goto err_free_msg;
}
dev_set_drvdata(dev, ipmi);
return 0;
err_free_msg:
devm_kfree(dev, ipmi->opal_msg);
err_unregister:
opal_notifier_unregister(&ipmi->event_nb);
err_free:
devm_kfree(dev, ipmi);
return rc;
}
static int ipmi_powernv_remove(struct platform_device *pdev)
{
struct ipmi_smi_powernv *smi = dev_get_drvdata(&pdev->dev);
ipmi_unregister_smi(smi->intf);
opal_notifier_unregister(&smi->event_nb);
return 0;
}
static const struct of_device_id ipmi_powernv_match[] = {
{ .compatible = "ibm,opal-ipmi" },
{ },
};
static struct platform_driver powernv_ipmi_driver = {
.driver = {
.name = "ipmi-powernv",
.owner = THIS_MODULE,
.of_match_table = ipmi_powernv_match,
},
.probe = ipmi_powernv_probe,
.remove = ipmi_powernv_remove,
};
module_platform_driver(powernv_ipmi_driver);
MODULE_DEVICE_TABLE(of, ipmi_powernv_match);
MODULE_DESCRIPTION("powernv IPMI driver");
MODULE_AUTHOR("Jeremy Kerr <jk@ozlabs.org>");
MODULE_LICENSE("GPL");

drivers/char/ipmi/ipmi_si_intf.c

@ -92,12 +92,9 @@ enum si_intf_state {
SI_GETTING_FLAGS,
SI_GETTING_EVENTS,
SI_CLEARING_FLAGS,
SI_CLEARING_FLAGS_THEN_SET_IRQ,
SI_GETTING_MESSAGES,
SI_ENABLE_INTERRUPTS1,
SI_ENABLE_INTERRUPTS2,
SI_DISABLE_INTERRUPTS1,
SI_DISABLE_INTERRUPTS2
SI_CHECKING_ENABLES,
SI_SETTING_ENABLES
/* FIXME - add watchdog stuff. */
};
@ -111,10 +108,6 @@ enum si_type {
};
static char *si_to_str[] = { "kcs", "smic", "bt" };
static char *ipmi_addr_src_to_str[] = { NULL, "hotmod", "hardcoded", "SPMI",
"ACPI", "SMBIOS", "PCI",
"device-tree", "default" };
#define DEVICE_NAME "ipmi_si"
static struct platform_driver ipmi_driver;
@ -174,8 +167,7 @@ struct smi_info {
struct si_sm_handlers *handlers;
enum si_type si_type;
spinlock_t si_lock;
struct list_head xmit_msgs;
struct list_head hp_xmit_msgs;
struct ipmi_smi_msg *waiting_msg;
struct ipmi_smi_msg *curr_msg;
enum si_intf_state si_state;
@ -254,9 +246,6 @@ struct smi_info {
/* The time (in jiffies) the last timeout occurred at. */
unsigned long last_timeout_jiffies;
/* Used to gracefully stop the timer without race conditions. */
atomic_t stop_operation;
/* Are we waiting for the events, pretimeouts, received msgs? */
atomic_t need_watch;
@ -268,6 +257,16 @@ struct smi_info {
*/
bool interrupt_disabled;
/*
* Does the BMC support events?
*/
bool supports_event_msg_buff;
/*
* Did we get an attention that we did not handle?
*/
bool got_attn;
/* From the get device id response... */
struct ipmi_device_id device_id;
@ -332,7 +331,10 @@ static void deliver_recv_msg(struct smi_info *smi_info,
struct ipmi_smi_msg *msg)
{
/* Deliver the message to the upper layer. */
ipmi_smi_msg_received(smi_info->intf, msg);
if (smi_info->intf)
ipmi_smi_msg_received(smi_info->intf, msg);
else
ipmi_free_smi_msg(msg);
}
static void return_hosed_msg(struct smi_info *smi_info, int cCode)
@ -356,28 +358,18 @@ static void return_hosed_msg(struct smi_info *smi_info, int cCode)
static enum si_sm_result start_next_msg(struct smi_info *smi_info)
{
int rv;
struct list_head *entry = NULL;
#ifdef DEBUG_TIMING
struct timeval t;
#endif
/* Pick the high priority queue first. */
if (!list_empty(&(smi_info->hp_xmit_msgs))) {
entry = smi_info->hp_xmit_msgs.next;
} else if (!list_empty(&(smi_info->xmit_msgs))) {
entry = smi_info->xmit_msgs.next;
}
if (!entry) {
if (!smi_info->waiting_msg) {
smi_info->curr_msg = NULL;
rv = SI_SM_IDLE;
} else {
int err;
list_del(entry);
smi_info->curr_msg = list_entry(entry,
struct ipmi_smi_msg,
link);
smi_info->curr_msg = smi_info->waiting_msg;
smi_info->waiting_msg = NULL;
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
printk(KERN_DEBUG "**Start2: %d.%9.9d\n", t.tv_sec, t.tv_usec);
@ -401,22 +393,7 @@ static enum si_sm_result start_next_msg(struct smi_info *smi_info)
return rv;
}
static void start_enable_irq(struct smi_info *smi_info)
{
unsigned char msg[2];
/*
* If we are enabling interrupts, we have to tell the
* BMC to use them.
*/
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD;
smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2);
smi_info->si_state = SI_ENABLE_INTERRUPTS1;
}
static void start_disable_irq(struct smi_info *smi_info)
static void start_check_enables(struct smi_info *smi_info)
{
unsigned char msg[2];
@ -424,7 +401,7 @@ static void start_disable_irq(struct smi_info *smi_info)
msg[1] = IPMI_GET_BMC_GLOBAL_ENABLES_CMD;
smi_info->handlers->start_transaction(smi_info->si_sm, msg, 2);
smi_info->si_state = SI_DISABLE_INTERRUPTS1;
smi_info->si_state = SI_CHECKING_ENABLES;
}
static void start_clear_flags(struct smi_info *smi_info)
@ -440,6 +417,32 @@ static void start_clear_flags(struct smi_info *smi_info)
smi_info->si_state = SI_CLEARING_FLAGS;
}
static void start_getting_msg_queue(struct smi_info *smi_info)
{
smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
smi_info->curr_msg->data[1] = IPMI_GET_MSG_CMD;
smi_info->curr_msg->data_size = 2;
smi_info->handlers->start_transaction(
smi_info->si_sm,
smi_info->curr_msg->data,
smi_info->curr_msg->data_size);
smi_info->si_state = SI_GETTING_MESSAGES;
}
static void start_getting_events(struct smi_info *smi_info)
{
smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
smi_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD;
smi_info->curr_msg->data_size = 2;
smi_info->handlers->start_transaction(
smi_info->si_sm,
smi_info->curr_msg->data,
smi_info->curr_msg->data_size);
smi_info->si_state = SI_GETTING_EVENTS;
}
static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val)
{
smi_info->last_timeout_jiffies = jiffies;
@ -453,22 +456,45 @@ static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val)
* polled until we can allocate some memory. Once we have some
* memory, we will re-enable the interrupt.
*/
static inline void disable_si_irq(struct smi_info *smi_info)
static inline bool disable_si_irq(struct smi_info *smi_info)
{
if ((smi_info->irq) && (!smi_info->interrupt_disabled)) {
start_disable_irq(smi_info);
smi_info->interrupt_disabled = true;
if (!atomic_read(&smi_info->stop_operation))
smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES);
start_check_enables(smi_info);
return true;
}
return false;
}
static inline void enable_si_irq(struct smi_info *smi_info)
static inline bool enable_si_irq(struct smi_info *smi_info)
{
if ((smi_info->irq) && (smi_info->interrupt_disabled)) {
start_enable_irq(smi_info);
smi_info->interrupt_disabled = false;
start_check_enables(smi_info);
return true;
}
return false;
}
/*
* Allocate a message. If unable to allocate, start the interrupt
* disable process and return NULL. If able to allocate but
* interrupts are disabled, free the message and return NULL after
* starting the interrupt enable process.
*/
static struct ipmi_smi_msg *alloc_msg_handle_irq(struct smi_info *smi_info)
{
struct ipmi_smi_msg *msg;
msg = ipmi_alloc_smi_msg();
if (!msg) {
if (!disable_si_irq(smi_info))
smi_info->si_state = SI_NORMAL;
} else if (enable_si_irq(smi_info)) {
ipmi_free_smi_msg(msg);
msg = NULL;
}
return msg;
}
static void handle_flags(struct smi_info *smi_info)
@ -480,45 +506,22 @@ static void handle_flags(struct smi_info *smi_info)
start_clear_flags(smi_info);
smi_info->msg_flags &= ~WDT_PRE_TIMEOUT_INT;
ipmi_smi_watchdog_pretimeout(smi_info->intf);
if (smi_info->intf)
ipmi_smi_watchdog_pretimeout(smi_info->intf);
} else if (smi_info->msg_flags & RECEIVE_MSG_AVAIL) {
/* Messages available. */
smi_info->curr_msg = ipmi_alloc_smi_msg();
if (!smi_info->curr_msg) {
disable_si_irq(smi_info);
smi_info->si_state = SI_NORMAL;
smi_info->curr_msg = alloc_msg_handle_irq(smi_info);
if (!smi_info->curr_msg)
return;
}
enable_si_irq(smi_info);
smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
smi_info->curr_msg->data[1] = IPMI_GET_MSG_CMD;
smi_info->curr_msg->data_size = 2;
smi_info->handlers->start_transaction(
smi_info->si_sm,
smi_info->curr_msg->data,
smi_info->curr_msg->data_size);
smi_info->si_state = SI_GETTING_MESSAGES;
start_getting_msg_queue(smi_info);
} else if (smi_info->msg_flags & EVENT_MSG_BUFFER_FULL) {
/* Events available. */
smi_info->curr_msg = ipmi_alloc_smi_msg();
if (!smi_info->curr_msg) {
disable_si_irq(smi_info);
smi_info->si_state = SI_NORMAL;
smi_info->curr_msg = alloc_msg_handle_irq(smi_info);
if (!smi_info->curr_msg)
return;
}
enable_si_irq(smi_info);
smi_info->curr_msg->data[0] = (IPMI_NETFN_APP_REQUEST << 2);
smi_info->curr_msg->data[1] = IPMI_READ_EVENT_MSG_BUFFER_CMD;
smi_info->curr_msg->data_size = 2;
smi_info->handlers->start_transaction(
smi_info->si_sm,
smi_info->curr_msg->data,
smi_info->curr_msg->data_size);
smi_info->si_state = SI_GETTING_EVENTS;
start_getting_events(smi_info);
} else if (smi_info->msg_flags & OEM_DATA_AVAIL &&
smi_info->oem_data_avail_handler) {
if (smi_info->oem_data_avail_handler(smi_info))
@ -527,6 +530,55 @@ static void handle_flags(struct smi_info *smi_info)
smi_info->si_state = SI_NORMAL;
}
/*
* Global enables we care about.
*/
#define GLOBAL_ENABLES_MASK (IPMI_BMC_EVT_MSG_BUFF | IPMI_BMC_RCV_MSG_INTR | \
IPMI_BMC_EVT_MSG_INTR)
static u8 current_global_enables(struct smi_info *smi_info, u8 base,
bool *irq_on)
{
u8 enables = 0;
if (smi_info->supports_event_msg_buff)
enables |= IPMI_BMC_EVT_MSG_BUFF;
else
enables &= ~IPMI_BMC_EVT_MSG_BUFF;
if (smi_info->irq && !smi_info->interrupt_disabled)
enables |= IPMI_BMC_RCV_MSG_INTR;
else
enables &= ~IPMI_BMC_RCV_MSG_INTR;
if (smi_info->supports_event_msg_buff &&
smi_info->irq && !smi_info->interrupt_disabled)
enables |= IPMI_BMC_EVT_MSG_INTR;
else
enables &= ~IPMI_BMC_EVT_MSG_INTR;
*irq_on = enables & (IPMI_BMC_EVT_MSG_INTR | IPMI_BMC_RCV_MSG_INTR);
return enables;
}
static void check_bt_irq(struct smi_info *smi_info, bool irq_on)
{
u8 irqstate = smi_info->io.inputb(&smi_info->io, IPMI_BT_INTMASK_REG);
irqstate &= IPMI_BT_INTMASK_ENABLE_IRQ_BIT;
if ((bool)irqstate == irq_on)
return;
if (irq_on)
smi_info->io.outputb(&smi_info->io, IPMI_BT_INTMASK_REG,
IPMI_BT_INTMASK_ENABLE_IRQ_BIT);
else
smi_info->io.outputb(&smi_info->io, IPMI_BT_INTMASK_REG, 0);
}
static void handle_transaction_done(struct smi_info *smi_info)
{
struct ipmi_smi_msg *msg;
@ -581,7 +633,6 @@ static void handle_transaction_done(struct smi_info *smi_info)
}
case SI_CLEARING_FLAGS:
case SI_CLEARING_FLAGS_THEN_SET_IRQ:
{
unsigned char msg[3];
@ -592,10 +643,7 @@ static void handle_transaction_done(struct smi_info *smi_info)
dev_warn(smi_info->dev,
"Error clearing flags: %2.2x\n", msg[2]);
}
if (smi_info->si_state == SI_CLEARING_FLAGS_THEN_SET_IRQ)
start_enable_irq(smi_info);
else
smi_info->si_state = SI_NORMAL;
smi_info->si_state = SI_NORMAL;
break;
}
@ -675,9 +723,11 @@ static void handle_transaction_done(struct smi_info *smi_info)
break;
}
case SI_ENABLE_INTERRUPTS1:
case SI_CHECKING_ENABLES:
{
unsigned char msg[4];
u8 enables;
bool irq_on;
/* We got the flags from the SMI, now handle them. */
smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
@ -687,70 +737,53 @@ static void handle_transaction_done(struct smi_info *smi_info)
dev_warn(smi_info->dev,
"Maybe ok, but ipmi might run very slowly.\n");
smi_info->si_state = SI_NORMAL;
} else {
break;
}
enables = current_global_enables(smi_info, 0, &irq_on);
if (smi_info->si_type == SI_BT)
/* BT has its own interrupt enable bit. */
check_bt_irq(smi_info, irq_on);
if (enables != (msg[3] & GLOBAL_ENABLES_MASK)) {
/* Enables are not correct, fix them. */
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
msg[2] = (msg[3] |
IPMI_BMC_RCV_MSG_INTR |
IPMI_BMC_EVT_MSG_INTR);
msg[2] = enables | (msg[3] & ~GLOBAL_ENABLES_MASK);
smi_info->handlers->start_transaction(
smi_info->si_sm, msg, 3);
smi_info->si_state = SI_ENABLE_INTERRUPTS2;
}
break;
}
case SI_ENABLE_INTERRUPTS2:
{
unsigned char msg[4];
/* We got the flags from the SMI, now handle them. */
smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
if (msg[2] != 0) {
dev_warn(smi_info->dev,
"Couldn't set irq info: %x.\n", msg[2]);
dev_warn(smi_info->dev,
"Maybe ok, but ipmi might run very slowly.\n");
} else
smi_info->interrupt_disabled = false;
smi_info->si_state = SI_NORMAL;
break;
}
case SI_DISABLE_INTERRUPTS1:
{
unsigned char msg[4];
/* We got the flags from the SMI, now handle them. */
smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
if (msg[2] != 0) {
dev_warn(smi_info->dev, "Could not disable interrupts"
", failed get.\n");
smi_info->si_state = SI_SETTING_ENABLES;
} else if (smi_info->supports_event_msg_buff) {
smi_info->curr_msg = ipmi_alloc_smi_msg();
if (!smi_info->curr_msg) {
smi_info->si_state = SI_NORMAL;
break;
}
start_getting_msg_queue(smi_info);
} else {
smi_info->si_state = SI_NORMAL;
} else {
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
msg[2] = (msg[3] &
~(IPMI_BMC_RCV_MSG_INTR |
IPMI_BMC_EVT_MSG_INTR));
smi_info->handlers->start_transaction(
smi_info->si_sm, msg, 3);
smi_info->si_state = SI_DISABLE_INTERRUPTS2;
}
break;
}
case SI_DISABLE_INTERRUPTS2:
case SI_SETTING_ENABLES:
{
unsigned char msg[4];
/* We got the flags from the SMI, now handle them. */
smi_info->handlers->get_result(smi_info->si_sm, msg, 4);
if (msg[2] != 0) {
dev_warn(smi_info->dev, "Could not disable interrupts"
", failed set.\n");
if (msg[2] != 0)
dev_warn(smi_info->dev,
"Could not set the global enables: 0x%x.\n",
msg[2]);
if (smi_info->supports_event_msg_buff) {
smi_info->curr_msg = ipmi_alloc_smi_msg();
if (!smi_info->curr_msg) {
smi_info->si_state = SI_NORMAL;
break;
}
start_getting_msg_queue(smi_info);
} else {
smi_info->si_state = SI_NORMAL;
}
smi_info->si_state = SI_NORMAL;
break;
}
}
@ -808,25 +841,35 @@ static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
* We prefer handling attn over new messages. But don't do
* this if there is not yet an upper layer to handle anything.
*/
if (likely(smi_info->intf) && si_sm_result == SI_SM_ATTN) {
if (likely(smi_info->intf) &&
(si_sm_result == SI_SM_ATTN || smi_info->got_attn)) {
unsigned char msg[2];
smi_inc_stat(smi_info, attentions);
if (smi_info->si_state != SI_NORMAL) {
/*
* We got an ATTN, but we are doing something else.
* Handle the ATTN later.
*/
smi_info->got_attn = true;
} else {
smi_info->got_attn = false;
smi_inc_stat(smi_info, attentions);
/*
* Got a attn, send down a get message flags to see
* what's causing it. It would be better to handle
* this in the upper layer, but due to the way
* interrupts work with the SMI, that's not really
* possible.
*/
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_MSG_FLAGS_CMD;
/*
* Got a attn, send down a get message flags to see
* what's causing it. It would be better to handle
* this in the upper layer, but due to the way
* interrupts work with the SMI, that's not really
* possible.
*/
msg[0] = (IPMI_NETFN_APP_REQUEST << 2);
msg[1] = IPMI_GET_MSG_FLAGS_CMD;
smi_info->handlers->start_transaction(
smi_info->si_sm, msg, 2);
smi_info->si_state = SI_GETTING_FLAGS;
goto restart;
smi_info->handlers->start_transaction(
smi_info->si_sm, msg, 2);
smi_info->si_state = SI_GETTING_FLAGS;
goto restart;
}
}
/* If we are currently idle, try to start the next message. */
@ -846,19 +889,21 @@ static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
*/
atomic_set(&smi_info->req_events, 0);
		/*
		 * Take this opportunity to check the interrupt and
		 * message enable state for the BMC.  The BMC can be
		 * asynchronously reset, and may thus get interrupts
		 * disabled and messages disabled.
		 */
		if (smi_info->supports_event_msg_buff || smi_info->irq) {
			start_check_enables(smi_info);
		} else {
			smi_info->curr_msg = alloc_msg_handle_irq(smi_info);
			if (!smi_info->curr_msg)
				goto out;

			start_getting_events(smi_info);
		}
goto restart;
}
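
The idle path now has two strategies. A compact model of the decision (hypothetical helper, assuming only the two inputs shown):

#include <stdbool.h>

enum idle_action { CHECK_ENABLES, READ_EVENT_BUFFER };

/* With an IRQ or an event message buffer, re-verify the global
 * enables (a BMC can be reset behind the driver's back); otherwise
 * poll the event buffer directly. */
static enum idle_action idle_next_step(bool supports_event_msg_buff,
				       bool have_irq)
{
	if (supports_event_msg_buff || have_irq)
		return CHECK_ENABLES;		/* start_check_enables()  */
	return READ_EVENT_BUFFER;		/* start_getting_events() */
}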
out:
@@ -879,8 +924,7 @@ static void check_start_timer_thread(struct smi_info *smi_info)
}
static void sender(void *send_info,
		   struct ipmi_smi_msg *msg)
{
struct smi_info *smi_info = send_info;
enum si_sm_result result;
@@ -889,14 +933,8 @@ static void sender(void *send_info,
struct timeval t;
#endif
BUG_ON(smi_info->waiting_msg);
smi_info->waiting_msg = msg;
#ifdef DEBUG_TIMING
do_gettimeofday(&t);
@@ -905,16 +943,16 @@ static void sender(void *send_info,
if (smi_info->run_to_completion) {
		/*
		 * If we are running to completion, start it and run
		 * transactions until everything is clear.
		 */
		smi_info->curr_msg = smi_info->waiting_msg;
		smi_info->waiting_msg = NULL;

		/*
		 * Run to completion means we are single-threaded, no
		 * need for locks.
		 */
result = smi_event_handler(smi_info, 0);
while (result != SI_SM_IDLE) {
@@ -926,11 +964,6 @@ static void sender(void *send_info,
}
spin_lock_irqsave(&smi_info->si_lock, flags);
check_start_timer_thread(smi_info);
spin_unlock_irqrestore(&smi_info->si_lock, flags);
}
@@ -1068,8 +1101,7 @@ static void request_events(void *send_info)
{
struct smi_info *smi_info = send_info;
	if (!smi_info->has_event_buffer)
return;
atomic_set(&smi_info->req_events, 1);
@@ -1697,7 +1729,7 @@ static int parse_str(struct hotmod_vals *v, int *val, char *name, char **curr)
}
*s = '\0';
s++;
	for (i = 0; v[i].name; i++) {
if (strcmp(*curr, v[i].name) == 0) {
*val = v[i].val;
*curr = s;
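
This one-identifier fix hides a real bug: parse_str() ignored its v argument and always walked the global hotmod_ops table, so lookups in any other table returned the wrong values. A stand-alone version of the corrected lookup (table contents invented; the driver's hotmod_si table maps interface names to SI type values in the same way):

#include <string.h>

struct hotmod_vals {
	const char *name;
	int val;
};

static const struct hotmod_vals example_vals[] = {
	{ "kcs",  1 },
	{ "smic", 2 },
	{ "bt",   3 },
	{ NULL,   0 }
};

/* The corrected lookup: walk the table passed in, not a global. */
static int find_val(const struct hotmod_vals *v, const char *name, int *val)
{
	int i;

	for (i = 0; v[i].name; i++) {
		if (strcmp(name, v[i].name) == 0) {
			*val = v[i].val;
			return 0;
		}
	}
	return -1;	/* not found */
}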
@@ -2133,6 +2165,9 @@ static int try_init_spmi(struct SPMITable *spmi)
case 3: /* BT */
info->si_type = SI_BT;
break;
case 4: /* SSIF, just ignore */
kfree(info);
return -EIO;
default:
printk(KERN_INFO PFX "Unknown ACPI/SPMI SI type %d\n",
spmi->InterfaceType);
@@ -2250,6 +2285,8 @@ static int ipmi_pnp_probe(struct pnp_dev *dev,
case 3:
info->si_type = SI_BT;
break;
case 4: /* SSIF, just ignore */
goto err_free;
default:
dev_info(&dev->dev, "unknown IPMI type %lld\n", tmp);
goto err_free;
@@ -2913,9 +2950,11 @@ static int try_enable_event_buffer(struct smi_info *smi_info)
goto out;
}
	if (resp[3] & IPMI_BMC_EVT_MSG_BUFF) {
/* buffer is already enabled, nothing to do. */
smi_info->supports_event_msg_buff = true;
goto out;
}
msg[0] = IPMI_NETFN_APP_REQUEST << 2;
msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD;
@@ -2948,6 +2987,9 @@ static int try_enable_event_buffer(struct smi_info *smi_info)
* that the event buffer is not supported.
*/
rv = -ENOENT;
else
smi_info->supports_event_msg_buff = true;
out:
kfree(resp);
return rv;
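
Both exits of this function now record whether the BMC has an event message buffer, which the new enables-checking code depends on. A condensed, compilable model of that bookkeeping (model_enable_event_buffer() is invented; the real function issues IPMI commands):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

#define IPMI_BMC_EVT_MSG_BUFF	0x04	/* as in the driver */

static int model_enable_event_buffer(uint8_t enables, bool set_succeeded,
				     bool *supports_event_msg_buff)
{
	if (enables & IPMI_BMC_EVT_MSG_BUFF) {
		*supports_event_msg_buff = true;	/* already enabled */
		return 0;
	}
	if (!set_succeeded)
		return -ENOENT;		/* BMC has no event buffer */
	*supports_event_msg_buff = true;
	return 0;
}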
@@ -3188,15 +3230,10 @@ static void setup_xaction_handlers(struct smi_info *smi_info)
static inline void wait_for_timer_and_thread(struct smi_info *smi_info)
{
	if (smi_info->thread != NULL)
		kthread_stop(smi_info->thread);

	if (smi_info->timer_running)
		del_timer_sync(&smi_info->si_timer);
}
static struct ipmi_default_vals
@@ -3274,8 +3311,8 @@ static int add_smi(struct smi_info *new_smi)
int rv = 0;
printk(KERN_INFO PFX "Adding %s-specified %s state machine",
	       ipmi_addr_src_to_str(new_smi->addr_source),
	       si_to_str[new_smi->si_type]);
mutex_lock(&smi_infos_lock);
if (!is_new_interface(new_smi)) {
printk(KERN_CONT " duplicate interface\n");
@@ -3305,7 +3342,7 @@ static int try_smi_init(struct smi_info *new_smi)
printk(KERN_INFO PFX "Trying %s-specified %s state"
" machine at %s address 0x%lx, slave address 0x%x,"
" irq %d\n",
	       ipmi_addr_src_to_str(new_smi->addr_source),
si_to_str[new_smi->si_type],
addr_space_to_str[new_smi->io.addr_type],
new_smi->io.addr_data,
@@ -3371,8 +3408,7 @@ static int try_smi_init(struct smi_info *new_smi)
setup_oem_data_handler(new_smi);
setup_xaction_handlers(new_smi);
	new_smi->waiting_msg = NULL;
new_smi->curr_msg = NULL;
atomic_set(&new_smi->req_events, 0);
new_smi->run_to_completion = false;
@@ -3380,7 +3416,6 @@ static int try_smi_init(struct smi_info *new_smi)
atomic_set(&new_smi->stats[i], 0);
new_smi->interrupt_disabled = true;
atomic_set(&new_smi->need_watch, 0);
new_smi->intf_num = smi_num;
smi_num++;
@@ -3394,9 +3429,15 @@ static int try_smi_init(struct smi_info *new_smi)
* timer to avoid racing with the timer.
*/
start_clear_flags(new_smi);
/*
* IRQ is defined to be set when non-zero. req_events will
* cause a global flags check that will enable interrupts.
*/
if (new_smi->irq) {
new_smi->interrupt_disabled = false;
atomic_set(&new_smi->req_events, 1);
}
if (!new_smi->dev) {
/*
@@ -3428,7 +3469,6 @@ static int try_smi_init(struct smi_info *new_smi)
new_smi,
&new_smi->device_id,
new_smi->dev,
"bmc",
new_smi->slave_addr);
if (rv) {
dev_err(new_smi->dev, "Unable to register device: error %d\n",
@@ -3466,15 +3506,15 @@ static int try_smi_init(struct smi_info *new_smi)
return 0;
out_err_stop_timer:
wait_for_timer_and_thread(new_smi);
out_err:
new_smi->interrupt_disabled = true;
if (new_smi->intf) {
ipmi_smi_t intf = new_smi->intf;
new_smi->intf = NULL;
ipmi_unregister_smi(intf);
}
if (new_smi->irq_cleanup) {
@@ -3653,60 +3693,49 @@ module_init(init_ipmi_si);
static void cleanup_one_si(struct smi_info *to_clean)
{
int rv = 0;
if (!to_clean)
return;
if (to_clean->intf) {
ipmi_smi_t intf = to_clean->intf;
to_clean->intf = NULL;
rv = ipmi_unregister_smi(intf);
if (rv) {
pr_err(PFX "Unable to unregister device: errno=%d\n",
rv);
}
}
if (to_clean->dev)
dev_set_drvdata(to_clean->dev, NULL);
list_del(&to_clean->link);
	/*
	 * Make sure that interrupts, the timer and the thread are
	 * stopped and will not run again.
	 */
	if (to_clean->irq_cleanup)
		to_clean->irq_cleanup(to_clean);
	wait_for_timer_and_thread(to_clean);

	/*
	 * Timeouts are stopped, now make sure the interrupts are off
	 * in the BMC.  Note that timers and CPU interrupts are off,
	 * so no need for locks.
	 */
	while (to_clean->curr_msg || (to_clean->si_state != SI_NORMAL)) {
		poll(to_clean);
		schedule_timeout_uninterruptible(1);
	}
	disable_si_irq(to_clean);
	while (to_clean->curr_msg || (to_clean->si_state != SI_NORMAL)) {
		poll(to_clean);
		schedule_timeout_uninterruptible(1);
	}
if (to_clean->handlers)
to_clean->handlers->cleanup(to_clean->si_sm);

File diff suppressed because it is too large


@@ -37,6 +37,7 @@
#include <linux/list.h>
#include <linux/proc_fs.h>
#include <linux/acpi.h> /* For acpi_handle */
struct module;
struct device;
@@ -278,15 +279,18 @@ enum ipmi_addr_src {
SI_INVALID = 0, SI_HOTMOD, SI_HARDCODED, SI_SPMI, SI_ACPI, SI_SMBIOS,
SI_PCI, SI_DEVICETREE, SI_DEFAULT
};
const char *ipmi_addr_src_to_str(enum ipmi_addr_src src);
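
Since the string table is no longer exported, callers go through this function, which can bounds-check its argument; see the add_smi() and try_smi_init() hunks above. A hypothetical call site (kernel context assumed; PFX as used in the driver):

	pr_info(PFX "Adding %s-specified interface\n",
		ipmi_addr_src_to_str(SI_ACPI));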
union ipmi_smi_info_union {
#ifdef CONFIG_ACPI
/*
* the acpi_info element is defined for the SI_ACPI
* address type
*/
struct {
		acpi_handle acpi_handle;
} acpi_info;
#endif
};
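
With the union member carrying its proper type, consumers no longer need a cast. A hypothetical accessor (example_bmc_handle() is invented; addr_info is the union's field name in struct ipmi_smi_info below):

#ifdef CONFIG_ACPI
static acpi_handle example_bmc_handle(const struct ipmi_smi_info *info)
{
	return info->addr_info.acpi_info.acpi_handle;
}
#endif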
struct ipmi_smi_info {


@@ -98,12 +98,11 @@ struct ipmi_smi_handlers {
operation is not allowed to fail. If an error occurs, it
should report back the error in a received message. It may
do this in the current call context, since no write locks
	   are held when this is run.  Messages are delivered one at
	   a time by the message handler; a new message will not be
	   delivered until the previous message is returned. */
	void (*sender)(void *send_info,
		       struct ipmi_smi_msg *msg);
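
For backend authors, the serialized contract means the old priority queues can collapse into a one-deep mailbox. A hedged sketch of a conforming sender (struct example_smi and example_kick_state_machine() are invented, not from this patch set):

	static void example_sender(void *send_info, struct ipmi_smi_msg *msg)
	{
		struct example_smi *smi = send_info;	/* invented type */

		BUG_ON(smi->waiting_msg);  /* at most one message pending */
		smi->waiting_msg = msg;
		example_kick_state_machine(smi);	/* invented helper */
	}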
/* Called by the upper layer to request that we try to get
events from the BMC we are attached to. */
@@ -212,7 +211,6 @@ int ipmi_register_smi(struct ipmi_smi_handlers *handlers,
void *send_info,
struct ipmi_device_id *device_id,
struct device *dev,
unsigned char slave_addr);
/*