/*
 * Copyright (c) 2009, Microsoft Corporation.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms and conditions of the GNU General Public License,
 * version 2, as published by the Free Software Foundation.
 *
 * This program is distributed in the hope it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
 * Place - Suite 330, Boston, MA 02111-1307 USA.
 *
 * Authors:
 *   Haiyang Zhang <haiyangz@microsoft.com>
 *   Hank Janssen  <hjanssen@microsoft.com>
 *   K. Y. Srinivasan <kys@microsoft.com>
 *
 */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/init.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/sysctl.h>
#include <linux/slab.h>
#include <linux/acpi.h>
#include <linux/completion.h>
#include <linux/hyperv.h>
#include <linux/kernel_stat.h>
#include <linux/clockchips.h>
#include <linux/cpu.h>
#include <linux/sched/task_stack.h>

#include <asm/mshyperv.h>
#include <linux/notifier.h>
#include <linux/ptrace.h>
#include <linux/screen_info.h>
#include <linux/kdebug.h>
#include <linux/efi.h>
#include <linux/random.h>
#include "hyperv_vmbus.h"

struct vmbus_dynid {
	struct list_head node;
	struct hv_vmbus_device_id id;
};

static struct acpi_device *hv_acpi_dev;

static struct completion probe_event;

static int hyperv_cpuhp_online;

static void *hv_panic_page;

static int hyperv_panic_event(struct notifier_block *nb, unsigned long val,
			      void *args)
{
	struct pt_regs *regs;

	regs = current_pt_regs();

	hyperv_report_panic(regs, val);
	return NOTIFY_DONE;
}

static int hyperv_die_event(struct notifier_block *nb, unsigned long val,
			    void *args)
{
	struct die_args *die = (struct die_args *)args;
	struct pt_regs *regs = die->regs;

	hyperv_report_panic(regs, val);
	return NOTIFY_DONE;
}

static struct notifier_block hyperv_die_block = {
	.notifier_call = hyperv_die_event,
};
static struct notifier_block hyperv_panic_block = {
	.notifier_call = hyperv_panic_event,
};

static const char *fb_mmio_name = "fb_range";
static struct resource *fb_mmio;
static struct resource *hyperv_mmio;
static DEFINE_SEMAPHORE(hyperv_mmio_lock);

static int vmbus_exists(void)
{
	if (hv_acpi_dev == NULL)
		return -ENODEV;

	return 0;
}

#define VMBUS_ALIAS_LEN ((sizeof((struct hv_vmbus_device_id *)0)->guid) * 2)
static void print_alias_name(struct hv_device *hv_dev, char *alias_name)
{
	int i;

	for (i = 0; i < VMBUS_ALIAS_LEN; i += 2)
		sprintf(&alias_name[i], "%02x", hv_dev->dev_type.b[i/2]);
}

static u8 channel_monitor_group(const struct vmbus_channel *channel)
{
	return (u8)channel->offermsg.monitorid / 32;
}

static u8 channel_monitor_offset(const struct vmbus_channel *channel)
{
	return (u8)channel->offermsg.monitorid % 32;
}

static u32 channel_pending(const struct vmbus_channel *channel,
			   const struct hv_monitor_page *monitor_page)
{
	u8 monitor_group = channel_monitor_group(channel);

	return monitor_page->trigger_group[monitor_group].pending;
}

static u32 channel_latency(const struct vmbus_channel *channel,
			   const struct hv_monitor_page *monitor_page)
{
	u8 monitor_group = channel_monitor_group(channel);
	u8 monitor_offset = channel_monitor_offset(channel);

	return monitor_page->latency[monitor_group][monitor_offset];
}

static u32 channel_conn_id(struct vmbus_channel *channel,
			   struct hv_monitor_page *monitor_page)
{
	u8 monitor_group = channel_monitor_group(channel);
	u8 monitor_offset = channel_monitor_offset(channel);

	return monitor_page->parameter[monitor_group][monitor_offset].connectionid.u.id;
}

static ssize_t id_show(struct device *dev, struct device_attribute *dev_attr,
		       char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n", hv_dev->channel->offermsg.child_relid);
}
static DEVICE_ATTR_RO(id);

static ssize_t state_show(struct device *dev, struct device_attribute *dev_attr,
			  char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n", hv_dev->channel->state);
}
static DEVICE_ATTR_RO(state);

static ssize_t monitor_id_show(struct device *dev,
			       struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n", hv_dev->channel->offermsg.monitorid);
}
static DEVICE_ATTR_RO(monitor_id);

static ssize_t class_id_show(struct device *dev,
			     struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "{%pUl}\n",
		       hv_dev->channel->offermsg.offer.if_type.b);
}
static DEVICE_ATTR_RO(class_id);

static ssize_t device_id_show(struct device *dev,
			      struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "{%pUl}\n",
		       hv_dev->channel->offermsg.offer.if_instance.b);
}
static DEVICE_ATTR_RO(device_id);

static ssize_t modalias_show(struct device *dev,
			     struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	char alias_name[VMBUS_ALIAS_LEN + 1];

	print_alias_name(hv_dev, alias_name);
	return sprintf(buf, "vmbus:%s\n", alias_name);
}
static DEVICE_ATTR_RO(modalias);

#ifdef CONFIG_NUMA
static ssize_t numa_node_show(struct device *dev,
			      struct device_attribute *attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;

	return sprintf(buf, "%d\n", hv_dev->channel->numa_node);
}
static DEVICE_ATTR_RO(numa_node);
#endif

static ssize_t server_monitor_pending_show(struct device *dev,
					   struct device_attribute *dev_attr,
					   char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n",
		       channel_pending(hv_dev->channel,
				       vmbus_connection.monitor_pages[0]));
}
static DEVICE_ATTR_RO(server_monitor_pending);

static ssize_t client_monitor_pending_show(struct device *dev,
					   struct device_attribute *dev_attr,
					   char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n",
		       channel_pending(hv_dev->channel,
				       vmbus_connection.monitor_pages[1]));
}
static DEVICE_ATTR_RO(client_monitor_pending);

static ssize_t server_monitor_latency_show(struct device *dev,
					   struct device_attribute *dev_attr,
					   char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n",
		       channel_latency(hv_dev->channel,
				       vmbus_connection.monitor_pages[0]));
}
static DEVICE_ATTR_RO(server_monitor_latency);

static ssize_t client_monitor_latency_show(struct device *dev,
					   struct device_attribute *dev_attr,
					   char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n",
		       channel_latency(hv_dev->channel,
				       vmbus_connection.monitor_pages[1]));
}
static DEVICE_ATTR_RO(client_monitor_latency);

static ssize_t server_monitor_conn_id_show(struct device *dev,
					   struct device_attribute *dev_attr,
					   char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n",
		       channel_conn_id(hv_dev->channel,
				       vmbus_connection.monitor_pages[0]));
}
static DEVICE_ATTR_RO(server_monitor_conn_id);

static ssize_t client_monitor_conn_id_show(struct device *dev,
					   struct device_attribute *dev_attr,
					   char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);

	if (!hv_dev->channel)
		return -ENODEV;
	return sprintf(buf, "%d\n",
		       channel_conn_id(hv_dev->channel,
				       vmbus_connection.monitor_pages[1]));
}
static DEVICE_ATTR_RO(client_monitor_conn_id);

static ssize_t out_intr_mask_show(struct device *dev,
				  struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info outbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->outbound, &outbound);
	return sprintf(buf, "%d\n", outbound.current_interrupt_mask);
}
static DEVICE_ATTR_RO(out_intr_mask);

static ssize_t out_read_index_show(struct device *dev,
				   struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info outbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->outbound, &outbound);
	return sprintf(buf, "%d\n", outbound.current_read_index);
}
static DEVICE_ATTR_RO(out_read_index);

static ssize_t out_write_index_show(struct device *dev,
				    struct device_attribute *dev_attr,
				    char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info outbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->outbound, &outbound);
	return sprintf(buf, "%d\n", outbound.current_write_index);
}
static DEVICE_ATTR_RO(out_write_index);

static ssize_t out_read_bytes_avail_show(struct device *dev,
					 struct device_attribute *dev_attr,
					 char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info outbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->outbound, &outbound);
	return sprintf(buf, "%d\n", outbound.bytes_avail_toread);
}
static DEVICE_ATTR_RO(out_read_bytes_avail);

static ssize_t out_write_bytes_avail_show(struct device *dev,
					  struct device_attribute *dev_attr,
					  char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info outbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->outbound, &outbound);
	return sprintf(buf, "%d\n", outbound.bytes_avail_towrite);
}
static DEVICE_ATTR_RO(out_write_bytes_avail);

static ssize_t in_intr_mask_show(struct device *dev,
				 struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info inbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->inbound, &inbound);
	return sprintf(buf, "%d\n", inbound.current_interrupt_mask);
}
static DEVICE_ATTR_RO(in_intr_mask);

static ssize_t in_read_index_show(struct device *dev,
				  struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info inbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->inbound, &inbound);
	return sprintf(buf, "%d\n", inbound.current_read_index);
}
static DEVICE_ATTR_RO(in_read_index);

static ssize_t in_write_index_show(struct device *dev,
				   struct device_attribute *dev_attr, char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info inbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->inbound, &inbound);
	return sprintf(buf, "%d\n", inbound.current_write_index);
}
static DEVICE_ATTR_RO(in_write_index);

static ssize_t in_read_bytes_avail_show(struct device *dev,
					struct device_attribute *dev_attr,
					char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info inbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->inbound, &inbound);
	return sprintf(buf, "%d\n", inbound.bytes_avail_toread);
}
static DEVICE_ATTR_RO(in_read_bytes_avail);

static ssize_t in_write_bytes_avail_show(struct device *dev,
					 struct device_attribute *dev_attr,
					 char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct hv_ring_buffer_debug_info inbound;

	if (!hv_dev->channel)
		return -ENODEV;
	hv_ringbuffer_get_debuginfo(&hv_dev->channel->inbound, &inbound);
	return sprintf(buf, "%d\n", inbound.bytes_avail_towrite);
}
static DEVICE_ATTR_RO(in_write_bytes_avail);

static ssize_t channel_vp_mapping_show(struct device *dev,
				       struct device_attribute *dev_attr,
				       char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct vmbus_channel *channel = hv_dev->channel, *cur_sc;
	unsigned long flags;
	int buf_size = PAGE_SIZE, n_written, tot_written;
	struct list_head *cur;

	if (!channel)
		return -ENODEV;

	tot_written = snprintf(buf, buf_size, "%u:%u\n",
		channel->offermsg.child_relid, channel->target_cpu);

	spin_lock_irqsave(&channel->lock, flags);

	list_for_each(cur, &channel->sc_list) {
		if (tot_written >= buf_size - 1)
			break;

		cur_sc = list_entry(cur, struct vmbus_channel, sc_list);
		n_written = scnprintf(buf + tot_written,
				     buf_size - tot_written,
				     "%u:%u\n",
				     cur_sc->offermsg.child_relid,
				     cur_sc->target_cpu);
		tot_written += n_written;
	}

	spin_unlock_irqrestore(&channel->lock, flags);

	return tot_written;
}
static DEVICE_ATTR_RO(channel_vp_mapping);

static ssize_t vendor_show(struct device *dev,
			   struct device_attribute *dev_attr,
			   char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	return sprintf(buf, "0x%x\n", hv_dev->vendor_id);
}
static DEVICE_ATTR_RO(vendor);

static ssize_t device_show(struct device *dev,
			   struct device_attribute *dev_attr,
			   char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	return sprintf(buf, "0x%x\n", hv_dev->device_id);
}
static DEVICE_ATTR_RO(device);

/* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */
static struct attribute *vmbus_dev_attrs[] = {
	&dev_attr_id.attr,
	&dev_attr_state.attr,
	&dev_attr_monitor_id.attr,
	&dev_attr_class_id.attr,
	&dev_attr_device_id.attr,
	&dev_attr_modalias.attr,
#ifdef CONFIG_NUMA
	&dev_attr_numa_node.attr,
#endif
	&dev_attr_server_monitor_pending.attr,
	&dev_attr_client_monitor_pending.attr,
	&dev_attr_server_monitor_latency.attr,
	&dev_attr_client_monitor_latency.attr,
	&dev_attr_server_monitor_conn_id.attr,
	&dev_attr_client_monitor_conn_id.attr,
	&dev_attr_out_intr_mask.attr,
	&dev_attr_out_read_index.attr,
	&dev_attr_out_write_index.attr,
	&dev_attr_out_read_bytes_avail.attr,
	&dev_attr_out_write_bytes_avail.attr,
	&dev_attr_in_intr_mask.attr,
	&dev_attr_in_read_index.attr,
	&dev_attr_in_write_index.attr,
	&dev_attr_in_read_bytes_avail.attr,
	&dev_attr_in_write_bytes_avail.attr,
	&dev_attr_channel_vp_mapping.attr,
	&dev_attr_vendor.attr,
	&dev_attr_device.attr,
	NULL,
};
ATTRIBUTE_GROUPS(vmbus_dev);

/*
 * vmbus_uevent - add uevent for our device
 *
 * This routine is invoked when a device is added or removed on the vmbus to
 * generate a uevent to udev in the userspace. The udev will then look at its
 * rules and the uevent generated here to load the appropriate driver.
 *
 * The alias string will be of the form vmbus:guid where guid is the string
 * representation of the device guid (each byte of the guid will be
 * represented with two hex characters).
 */
static int vmbus_uevent(struct device *device, struct kobj_uevent_env *env)
{
	struct hv_device *dev = device_to_hv_device(device);
	int ret;
	char alias_name[VMBUS_ALIAS_LEN + 1];

	print_alias_name(dev, alias_name);
	ret = add_uevent_var(env, "MODALIAS=vmbus:%s", alias_name);
	return ret;
}

static const uuid_le null_guid;

static inline bool is_null_guid(const uuid_le *guid)
{
	if (uuid_le_cmp(*guid, null_guid))
		return false;
	return true;
}

/*
 * Return a matching hv_vmbus_device_id pointer.
 * If there is no match, return NULL.
 */
static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv,
							const uuid_le *guid)
{
	const struct hv_vmbus_device_id *id = NULL;
	struct vmbus_dynid *dynid;

	/* Look at the dynamic ids first, before the static ones */
	spin_lock(&drv->dynids.lock);
	list_for_each_entry(dynid, &drv->dynids.list, node) {
		if (!uuid_le_cmp(dynid->id.guid, *guid)) {
			id = &dynid->id;
			break;
		}
	}
	spin_unlock(&drv->dynids.lock);

	if (id)
		return id;

	id = drv->id_table;
	if (id == NULL)
		return NULL; /* empty device table */

	for (; !is_null_guid(&id->guid); id++)
		if (!uuid_le_cmp(id->guid, *guid))
			return id;

	return NULL;
}

/* vmbus_add_dynid - add a new device ID to this driver and re-probe devices */
static int vmbus_add_dynid(struct hv_driver *drv, uuid_le *guid)
{
	struct vmbus_dynid *dynid;

	dynid = kzalloc(sizeof(*dynid), GFP_KERNEL);
	if (!dynid)
		return -ENOMEM;

	dynid->id.guid = *guid;

	spin_lock(&drv->dynids.lock);
	list_add_tail(&dynid->node, &drv->dynids.list);
	spin_unlock(&drv->dynids.lock);

	return driver_attach(&drv->driver);
}

static void vmbus_free_dynids(struct hv_driver *drv)
{
	struct vmbus_dynid *dynid, *n;

	spin_lock(&drv->dynids.lock);
	list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) {
		list_del(&dynid->node);
		kfree(dynid);
	}
	spin_unlock(&drv->dynids.lock);
}

/*
 * new_id_store - sysfs frontend to vmbus_add_dynid()
 *
 * Allow GUIDs to be added to an existing driver via sysfs.
 */
static ssize_t new_id_store(struct device_driver *driver, const char *buf,
			    size_t count)
{
	struct hv_driver *drv = drv_to_hv_drv(driver);
	uuid_le guid;
	ssize_t retval;

	retval = uuid_le_to_bin(buf, &guid);
	if (retval)
		return retval;

	if (hv_vmbus_get_id(drv, &guid))
		return -EEXIST;

	retval = vmbus_add_dynid(drv, &guid);
	if (retval)
		return retval;
	return count;
}
static DRIVER_ATTR_WO(new_id);

/*
 * remove_id_store - remove a vmbus device ID from this driver
 *
 * Removes a dynamic vmbus device ID from this driver.
 */
static ssize_t remove_id_store(struct device_driver *driver, const char *buf,
			       size_t count)
{
	struct hv_driver *drv = drv_to_hv_drv(driver);
	struct vmbus_dynid *dynid, *n;
	uuid_le guid;
	ssize_t retval;

	retval = uuid_le_to_bin(buf, &guid);
	if (retval)
		return retval;

	retval = -ENODEV;
	spin_lock(&drv->dynids.lock);
	list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) {
		struct hv_vmbus_device_id *id = &dynid->id;

		if (!uuid_le_cmp(id->guid, guid)) {
			list_del(&dynid->node);
			kfree(dynid);
			retval = count;
			break;
		}
	}
	spin_unlock(&drv->dynids.lock);

	return retval;
}
static DRIVER_ATTR_WO(remove_id);

static struct attribute *vmbus_drv_attrs[] = {
	&driver_attr_new_id.attr,
	&driver_attr_remove_id.attr,
	NULL,
};
ATTRIBUTE_GROUPS(vmbus_drv);


/*
 * vmbus_match - Attempt to match the specified device to the specified driver
 */
static int vmbus_match(struct device *device, struct device_driver *driver)
{
	struct hv_driver *drv = drv_to_hv_drv(driver);
	struct hv_device *hv_dev = device_to_hv_device(device);

	/* The hv_sock driver handles all hv_sock offers. */
	if (is_hvsock_channel(hv_dev->channel))
		return drv->hvsock;

	if (hv_vmbus_get_id(drv, &hv_dev->dev_type))
		return 1;

	return 0;
}

/*
 * vmbus_probe - Add the new vmbus's child device
 */
static int vmbus_probe(struct device *child_device)
{
	int ret = 0;
	struct hv_driver *drv =
			drv_to_hv_drv(child_device->driver);
	struct hv_device *dev = device_to_hv_device(child_device);
	const struct hv_vmbus_device_id *dev_id;

	dev_id = hv_vmbus_get_id(drv, &dev->dev_type);
	if (drv->probe) {
		ret = drv->probe(dev, dev_id);
		if (ret != 0)
			pr_err("probe failed for device %s (%d)\n",
			       dev_name(child_device), ret);

	} else {
		pr_err("probe not set for driver %s\n",
		       dev_name(child_device));
		ret = -ENODEV;
	}
	return ret;
}
2011-03-16 06:03:40 +08:00
|
|
|
/*
|
|
|
|
* vmbus_remove - Remove a vmbus device
|
|
|
|
*/
|
|
|
|
static int vmbus_remove(struct device *child_device)
|
|
|
|
{
|
2015-03-01 03:18:16 +08:00
|
|
|
struct hv_driver *drv;
|
2011-04-30 04:45:12 +08:00
|
|
|
struct hv_device *dev = device_to_hv_device(child_device);
|
2011-03-16 06:03:40 +08:00
|
|
|
|
2015-03-01 03:18:16 +08:00
|
|
|
if (child_device->driver) {
|
|
|
|
drv = drv_to_hv_drv(child_device->driver);
|
|
|
|
if (drv->remove)
|
|
|
|
drv->remove(dev);
|
|
|
|
}
|
2011-03-16 06:03:40 +08:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-03-16 06:03:41 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* vmbus_shutdown - Shutdown a vmbus device
|
|
|
|
*/
|
|
|
|
static void vmbus_shutdown(struct device *child_device)
|
|
|
|
{
|
|
|
|
struct hv_driver *drv;
|
2011-04-30 04:45:14 +08:00
|
|
|
struct hv_device *dev = device_to_hv_device(child_device);
|
2011-03-16 06:03:41 +08:00
|
|
|
|
|
|
|
|
|
|
|
/* The device may not be attached yet */
|
|
|
|
if (!child_device->driver)
|
|
|
|
return;
|
|
|
|
|
|
|
|
drv = drv_to_hv_drv(child_device->driver);
|
|
|
|
|
2011-04-30 04:45:14 +08:00
|
|
|
if (drv->shutdown)
|
|
|
|
drv->shutdown(dev);
|
2011-03-16 06:03:41 +08:00
|
|
|
}
|
|
|
|
|
2011-03-16 06:03:42 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* vmbus_device_release - Final callback release of the vmbus child device
|
|
|
|
*/
|
|
|
|
static void vmbus_device_release(struct device *device)
|
|
|
|
{
|
2011-06-07 06:50:04 +08:00
|
|
|
struct hv_device *hv_dev = device_to_hv_device(device);
|
Drivers: hv: vmbus: fix rescind-offer handling for device without a driver
In the path vmbus_onoffer_rescind() -> vmbus_device_unregister() ->
device_unregister() -> ... -> __device_release_driver(), we can see for a
device without a driver loaded: dev->driver is NULL, so
dev->bus->remove(dev), namely vmbus_remove(), isn't invoked.
As a result, vmbus_remove() -> hv_process_channel_removal() isn't invoked
and some cleanups (like sending a CHANNELMSG_RELID_RELEASED message to the
host) aren't done.
We can demo the issue this way:
1. rmmod hv_utils;
2. disable the Heartbeat Integration Service in Hyper-V Manager and lsvmbus
shows the device disappears.
3. re-enable the Heartbeat in Hyper-V Manager and modprobe hv_utils, but
lsvmbus shows the device can't appear again.
This is because the host thinks the VM hasn't released the relid, so can't
re-offer the device to the VM.
We can fix the issue by moving hv_process_channel_removal()
from vmbus_close_internal() to vmbus_device_release(), since the latter is
always invoked on device_unregister(), whether or not the dev has a driver
loaded.
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-12-15 08:01:49 +08:00
|
|
|
struct vmbus_channel *channel = hv_dev->channel;
|
2011-03-16 06:03:42 +08:00
|
|
|
|
2017-05-01 07:21:18 +08:00
|
|
|
mutex_lock(&vmbus_connection.channel_mutex);
|
2017-09-30 12:09:36 +08:00
|
|
|
hv_process_channel_removal(channel->offermsg.child_relid);
|
2017-05-01 07:21:18 +08:00
|
|
|
mutex_unlock(&vmbus_connection.channel_mutex);
|
2011-06-07 06:50:04 +08:00
|
|
|
kfree(hv_dev);
|
2011-03-16 06:03:42 +08:00
|
|
|
|
|
|
|
}
|
|
|
|
|
2009-07-28 04:47:24 +08:00
|
|
|
/* The one and only one */
|
2011-04-30 04:45:08 +08:00
|
|
|
static struct bus_type hv_bus = {
|
|
|
|
.name = "vmbus",
|
|
|
|
.match = vmbus_match,
|
|
|
|
.shutdown = vmbus_shutdown,
|
|
|
|
.remove = vmbus_remove,
|
|
|
|
.probe = vmbus_probe,
|
|
|
|
.uevent = vmbus_uevent,
|
2016-12-04 04:34:39 +08:00
|
|
|
.dev_groups = vmbus_dev_groups,
|
|
|
|
.drv_groups = vmbus_drv_groups,
|
2009-07-14 07:02:34 +08:00
|
|
|
};
|
|
|
|
|
2010-12-16 02:48:08 +08:00
|
|
|
struct onmessage_work_context {
|
|
|
|
struct work_struct work;
|
|
|
|
struct hv_message msg;
|
|
|
|
};
|
|
|
|
|
|
|
|
static void vmbus_onmessage_work(struct work_struct *work)
|
|
|
|
{
|
|
|
|
struct onmessage_work_context *ctx;
|
|
|
|
|
2015-02-28 03:25:54 +08:00
|
|
|
/* Do not process messages if we're in DISCONNECTED state */
|
|
|
|
if (vmbus_connection.conn_state == DISCONNECTED)
|
|
|
|
return;
|
|
|
|
|
2010-12-16 02:48:08 +08:00
|
|
|
ctx = container_of(work, struct onmessage_work_context,
|
|
|
|
work);
|
|
|
|
vmbus_onmessage(&ctx->msg);
|
|
|
|
kfree(ctx);
|
|
|
|
}
|
|
|
|
|
2017-02-12 14:02:19 +08:00
|
|
|
static void hv_process_timer_expiration(struct hv_message *msg,
|
|
|
|
struct hv_per_cpu_context *hv_cpu)
|
2015-01-10 15:54:32 +08:00
|
|
|
{
|
2017-02-12 14:02:19 +08:00
|
|
|
struct clock_event_device *dev = hv_cpu->clk_evt;
|
2015-01-10 15:54:32 +08:00
|
|
|
|
|
|
|
if (dev->event_handler)
|
|
|
|
dev->event_handler(dev);
|
|
|
|
|
2016-05-01 10:21:34 +08:00
|
|
|
vmbus_signal_eom(msg, HVMSG_TIMER_EXPIRED);
|
2015-01-10 15:54:32 +08:00
|
|
|
}
|
|
|
|
|
2016-02-27 07:13:21 +08:00
|
|
|
void vmbus_on_msg_dpc(unsigned long data)
|
2010-12-03 03:59:22 +08:00
|
|
|
{
|
2017-02-12 14:02:19 +08:00
|
|
|
struct hv_per_cpu_context *hv_cpu = (void *)data;
|
|
|
|
void *page_addr = hv_cpu->synic_message_page;
|
2010-12-03 03:59:22 +08:00
|
|
|
struct hv_message *msg = (struct hv_message *)page_addr +
|
|
|
|
VMBUS_MESSAGE_SINT;
|
2015-03-28 00:10:08 +08:00
|
|
|
struct vmbus_channel_message_header *hdr;
|
2017-03-05 09:27:16 +08:00
|
|
|
const struct vmbus_channel_message_table_entry *entry;
|
2010-12-16 02:48:08 +08:00
|
|
|
struct onmessage_work_context *ctx;
|
2016-05-01 10:21:34 +08:00
|
|
|
u32 message_type = msg->header.message_type;
|
2010-12-03 03:59:22 +08:00
|
|
|
|
2016-05-01 10:21:34 +08:00
|
|
|
if (message_type == HVMSG_NONE)
|
2016-02-27 07:13:15 +08:00
|
|
|
/* no msg */
|
|
|
|
return;
|
2015-03-28 00:10:08 +08:00
|
|
|
|
2016-02-27 07:13:15 +08:00
|
|
|
hdr = (struct vmbus_channel_message_header *)msg->u.payload;
|
2015-03-28 00:10:08 +08:00
|
|
|
|
2017-10-30 03:21:00 +08:00
|
|
|
trace_vmbus_on_msg_dpc(hdr);
|
|
|
|
|
2016-02-27 07:13:15 +08:00
|
|
|
if (hdr->msgtype >= CHANNELMSG_COUNT) {
|
|
|
|
WARN_ONCE(1, "unknown msgtype=%d\n", hdr->msgtype);
|
|
|
|
goto msg_handled;
|
|
|
|
}
|
2015-03-28 00:10:08 +08:00
|
|
|
|
2016-02-27 07:13:15 +08:00
|
|
|
entry = &channel_message_table[hdr->msgtype];
|
|
|
|
if (entry->handler_type == VMHT_BLOCKING) {
|
|
|
|
ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
|
|
|
|
if (ctx == NULL)
|
|
|
|
return;
|
2015-03-28 00:10:08 +08:00
|
|
|
|
2016-02-27 07:13:15 +08:00
|
|
|
INIT_WORK(&ctx->work, vmbus_onmessage_work);
|
|
|
|
memcpy(&ctx->msg, msg, sizeof(*msg));
|
2015-03-28 00:10:08 +08:00
|
|
|
|
2017-05-01 07:21:18 +08:00
|
|
|
/*
|
|
|
|
* The host can generate a rescind message while we
|
|
|
|
* may still be handling the original offer. We deal with
|
|
|
|
* this condition by ensuring the processing is done on the
|
|
|
|
* same CPU.
|
|
|
|
*/
|
|
|
|
switch (hdr->msgtype) {
|
|
|
|
case CHANNELMSG_RESCIND_CHANNELOFFER:
|
|
|
|
/*
|
|
|
|
		 * If we are handling the rescind message,
|
|
|
|
* schedule the work on the global work queue.
|
|
|
|
*/
|
|
|
|
schedule_work_on(vmbus_connection.connect_cpu,
|
|
|
|
&ctx->work);
|
|
|
|
break;
|
|
|
|
|
|
|
|
case CHANNELMSG_OFFERCHANNEL:
|
|
|
|
atomic_inc(&vmbus_connection.offer_in_progress);
|
|
|
|
queue_work_on(vmbus_connection.connect_cpu,
|
|
|
|
vmbus_connection.work_queue,
|
|
|
|
&ctx->work);
|
|
|
|
break;
|
|
|
|
|
|
|
|
default:
|
|
|
|
queue_work(vmbus_connection.work_queue, &ctx->work);
|
|
|
|
}
|
2016-02-27 07:13:15 +08:00
|
|
|
} else
|
|
|
|
entry->message_handler(hdr);
|
2010-12-03 03:59:22 +08:00
|
|
|
|
2015-03-28 00:10:08 +08:00
|
|
|
msg_handled:
|
2016-05-01 10:21:34 +08:00
|
|
|
vmbus_signal_eom(msg, message_type);
|
2010-12-03 03:59:22 +08:00
|
|
|
}
|
|
|
|
|
2017-02-12 14:02:20 +08:00
|
|
|
|
2017-02-12 14:02:21 +08:00
|
|
|
/*
|
|
|
|
* Direct callback for channels using other deferred processing
|
|
|
|
*/
|
|
|
|
static void vmbus_channel_isr(struct vmbus_channel *channel)
|
|
|
|
{
|
|
|
|
void (*callback_fn)(void *);
|
|
|
|
|
|
|
|
callback_fn = READ_ONCE(channel->onchannel_callback);
|
|
|
|
if (likely(callback_fn != NULL))
|
|
|
|
(*callback_fn)(channel->channel_callback_context);
|
|
|
|
}
|
|
|
|
|
2017-02-12 14:02:20 +08:00
|
|
|
/*
|
|
|
|
* Schedule all channels with events pending
|
|
|
|
*/
|
|
|
|
static void vmbus_chan_sched(struct hv_per_cpu_context *hv_cpu)
|
|
|
|
{
|
|
|
|
unsigned long *recv_int_page;
|
|
|
|
u32 maxbits, relid;
|
|
|
|
|
|
|
|
if (vmbus_proto_version < VERSION_WIN8) {
|
|
|
|
maxbits = MAX_NUM_CHANNELS_SUPPORTED;
|
|
|
|
recv_int_page = vmbus_connection.recv_int_page;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* When the host is win8 and beyond, the event page
|
|
|
|
* can be directly checked to get the id of the channel
|
|
|
|
* that has the interrupt pending.
|
|
|
|
*/
|
|
|
|
void *page_addr = hv_cpu->synic_event_page;
|
|
|
|
union hv_synic_event_flags *event
|
|
|
|
= (union hv_synic_event_flags *)page_addr +
|
|
|
|
VMBUS_MESSAGE_SINT;
|
|
|
|
|
|
|
|
maxbits = HV_EVENT_FLAGS_COUNT;
|
|
|
|
recv_int_page = event->flags;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (unlikely(!recv_int_page))
|
|
|
|
return;
|
|
|
|
|
|
|
|
for_each_set_bit(relid, recv_int_page, maxbits) {
|
|
|
|
struct vmbus_channel *channel;
|
|
|
|
|
|
|
|
if (!sync_test_and_clear_bit(relid, recv_int_page))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* Special case - vmbus channel protocol msg */
|
|
|
|
if (relid == 0)
|
|
|
|
continue;
|
|
|
|
|
2017-03-05 09:13:57 +08:00
|
|
|
rcu_read_lock();
|
|
|
|
|
2017-02-12 14:02:20 +08:00
|
|
|
/* Find channel based on relid */
|
2017-03-05 09:13:57 +08:00
|
|
|
list_for_each_entry_rcu(channel, &hv_cpu->chan_list, percpu_list) {
|
2017-02-12 14:02:21 +08:00
|
|
|
if (channel->offermsg.child_relid != relid)
|
|
|
|
continue;
|
|
|
|
|
2017-08-12 01:03:59 +08:00
|
|
|
if (channel->rescind)
|
|
|
|
continue;
|
|
|
|
|
2017-10-30 03:21:16 +08:00
|
|
|
trace_vmbus_chan_sched(channel);
|
|
|
|
|
2017-10-30 02:33:40 +08:00
|
|
|
++channel->interrupts;
|
|
|
|
|
2017-02-12 14:02:21 +08:00
|
|
|
switch (channel->callback_mode) {
|
|
|
|
case HV_CALL_ISR:
|
|
|
|
vmbus_channel_isr(channel);
|
2017-02-12 14:02:20 +08:00
|
|
|
break;
|
2017-02-12 14:02:21 +08:00
|
|
|
|
|
|
|
case HV_CALL_BATCHED:
|
|
|
|
hv_begin_read(&channel->inbound);
|
|
|
|
/* fallthrough */
|
|
|
|
case HV_CALL_DIRECT:
|
|
|
|
tasklet_schedule(&channel->callback_event);
|
2017-02-12 14:02:20 +08:00
|
|
|
}
|
|
|
|
}
|
2017-03-05 09:13:57 +08:00
|
|
|
|
|
|
|
rcu_read_unlock();
|
2017-02-12 14:02:20 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2014-03-05 20:42:14 +08:00
|
|
|
static void vmbus_isr(void)
|
2010-12-03 03:59:22 +08:00
|
|
|
{
|
2017-02-12 14:02:19 +08:00
|
|
|
struct hv_per_cpu_context *hv_cpu
|
|
|
|
= this_cpu_ptr(hv_context.cpu_context);
|
|
|
|
void *page_addr = hv_cpu->synic_event_page;
|
2010-12-03 03:59:22 +08:00
|
|
|
struct hv_message *msg;
|
|
|
|
union hv_synic_event_flags *event;
|
2011-08-28 02:31:35 +08:00
|
|
|
bool handled = false;
|
2010-12-03 03:59:22 +08:00
|
|
|
|
2017-02-12 14:02:19 +08:00
|
|
|
if (unlikely(page_addr == NULL))
|
2014-03-05 20:42:14 +08:00
|
|
|
return;
|
2012-12-01 22:46:55 +08:00
|
|
|
|
|
|
|
event = (union hv_synic_event_flags *)page_addr +
|
|
|
|
VMBUS_MESSAGE_SINT;
|
2011-09-01 05:35:56 +08:00
|
|
|
/*
|
|
|
|
* Check for events before checking for messages. This is the order
|
|
|
|
* in which events and messages are checked in Windows guests on
|
|
|
|
* Hyper-V, and the Windows team suggested we do the same.
|
|
|
|
*/
|
2010-12-03 03:59:22 +08:00
|
|
|
|
2012-12-01 22:46:49 +08:00
|
|
|
if ((vmbus_proto_version == VERSION_WS2008) ||
|
|
|
|
(vmbus_proto_version == VERSION_WIN7)) {
|
2010-12-03 03:59:22 +08:00
|
|
|
|
2012-12-01 22:46:49 +08:00
|
|
|
/* Since we are a child, we only need to check bit 0 */
|
2017-02-06 08:20:31 +08:00
|
|
|
if (sync_test_and_clear_bit(0, event->flags))
|
2012-12-01 22:46:49 +08:00
|
|
|
handled = true;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* Our host is win8 or above. The signaling mechanism
|
|
|
|
* has changed and we can directly look at the event page.
|
|
|
|
	 * If bit n is set then we have an interrupt on the channel
|
|
|
|
* whose id is n.
|
|
|
|
*/
|
2011-08-28 02:31:35 +08:00
|
|
|
handled = true;
|
|
|
|
}
|
2011-03-16 06:03:43 +08:00
|
|
|
|
2012-12-01 22:46:49 +08:00
|
|
|
if (handled)
|
2017-02-12 14:02:20 +08:00
|
|
|
vmbus_chan_sched(hv_cpu);
|
2012-12-01 22:46:49 +08:00
|
|
|
|
2017-02-12 14:02:19 +08:00
|
|
|
page_addr = hv_cpu->synic_message_page;
|
2011-09-01 05:35:56 +08:00
|
|
|
msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
|
|
|
|
|
|
|
|
/* Check if there are actual msgs to be processed */
|
2015-01-10 15:54:32 +08:00
|
|
|
if (msg->header.message_type != HVMSG_NONE) {
|
|
|
|
if (msg->header.message_type == HVMSG_TIMER_EXPIRED)
|
2017-02-12 14:02:19 +08:00
|
|
|
hv_process_timer_expiration(msg, hv_cpu);
|
2015-01-10 15:54:32 +08:00
|
|
|
else
|
2017-02-12 14:02:19 +08:00
|
|
|
tasklet_schedule(&hv_cpu->msg_dpc);
|
2015-01-10 15:54:32 +08:00
|
|
|
}
|
2016-05-02 14:14:34 +08:00
|
|
|
|
|
|
|
add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR, 0);
|
2011-03-16 06:03:43 +08:00
|
|
|
}
|
|
|
|
|
2018-07-08 10:56:51 +08:00
|
|
|
/*
|
|
|
|
* Boolean to control whether to report panic messages over Hyper-V.
|
|
|
|
*
|
|
|
|
* It can be set via /proc/sys/kernel/hyperv/record_panic_msg
|
|
|
|
*/
|
|
|
|
static int sysctl_record_panic_msg = 1;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Callback from kmsg_dump. Grab as much as possible from the end of the kmsg
|
|
|
|
* buffer and call into Hyper-V to transfer the data.
|
|
|
|
*/
|
|
|
|
static void hv_kmsg_dump(struct kmsg_dumper *dumper,
|
|
|
|
enum kmsg_dump_reason reason)
|
|
|
|
{
|
|
|
|
size_t bytes_written;
|
|
|
|
phys_addr_t panic_pa;
|
|
|
|
|
|
|
|
/* We are only interested in panics. */
|
|
|
|
if ((reason != KMSG_DUMP_PANIC) || (!sysctl_record_panic_msg))
|
|
|
|
return;
|
|
|
|
|
|
|
|
panic_pa = virt_to_phys(hv_panic_page);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Write dump contents to the page. No need to synchronize; panic should
|
|
|
|
* be single-threaded.
|
|
|
|
*/
|
2018-07-29 05:58:45 +08:00
|
|
|
kmsg_dump_get_buffer(dumper, true, hv_panic_page, PAGE_SIZE,
|
|
|
|
&bytes_written);
|
|
|
|
if (bytes_written)
|
|
|
|
hyperv_report_panic_msg(panic_pa, bytes_written);
|
2018-07-08 10:56:51 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static struct kmsg_dumper hv_kmsg_dumper = {
|
|
|
|
.dump = hv_kmsg_dump,
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct ctl_table_header *hv_ctl_table_hdr;
|
|
|
|
static int zero;
|
|
|
|
static int one = 1;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* sysctl option to allow the user to control whether kmsg data should be
|
|
|
|
* reported to Hyper-V on panic.
|
|
|
|
*/
|
|
|
|
static struct ctl_table hv_ctl_table[] = {
|
|
|
|
{
|
|
|
|
.procname = "hyperv_record_panic_msg",
|
|
|
|
.data = &sysctl_record_panic_msg,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_dointvec_minmax,
|
|
|
|
.extra1 = &zero,
|
|
|
|
.extra2 = &one
|
|
|
|
},
|
|
|
|
{}
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct ctl_table hv_root_table[] = {
|
|
|
|
{
|
|
|
|
.procname = "kernel",
|
|
|
|
.mode = 0555,
|
|
|
|
.child = hv_ctl_table
|
|
|
|
},
|
|
|
|
{}
|
|
|
|
};
|
2015-02-28 03:25:51 +08:00
|
|
|
|
2010-03-05 06:11:00 +08:00
|
|
|
/*
|
2009-09-02 22:11:14 +08:00
|
|
|
 * vmbus_bus_init - Main vmbus driver initialization routine.
|
|
|
|
*
|
|
|
|
* Here, we
|
2010-03-12 06:51:23 +08:00
|
|
|
* - initialize the vmbus driver context
|
|
|
|
* - invoke the vmbus hv main init routine
|
|
|
|
* - retrieve the channel offers
|
2009-09-02 22:11:14 +08:00
|
|
|
*/
|
2015-12-15 08:01:46 +08:00
|
|
|
static int vmbus_bus_init(void)
|
2009-07-14 07:02:34 +08:00
|
|
|
{
|
2009-09-02 22:11:14 +08:00
|
|
|
int ret;
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2010-12-03 04:08:08 +08:00
|
|
|
	/* Hypervisor initialization: set up the hypercall page, etc. */
|
|
|
|
ret = hv_init();
|
2009-09-02 22:11:14 +08:00
|
|
|
if (ret != 0) {
|
2011-03-30 04:58:47 +08:00
|
|
|
pr_err("Unable to initialize the hypervisor - 0x%x\n", ret);
|
2011-06-07 06:50:08 +08:00
|
|
|
return ret;
|
2009-07-14 07:02:34 +08:00
|
|
|
}
|
|
|
|
|
2011-04-30 04:45:08 +08:00
|
|
|
ret = bus_register(&hv_bus);
|
2011-06-07 06:50:08 +08:00
|
|
|
if (ret)
|
2017-01-29 03:37:14 +08:00
|
|
|
return ret;
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2014-03-05 20:42:14 +08:00
|
|
|
hv_setup_vmbus_irq(vmbus_isr);
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2013-06-19 11:28:10 +08:00
|
|
|
ret = hv_synic_alloc();
|
|
|
|
if (ret)
|
|
|
|
goto err_alloc;
|
2011-03-16 06:03:33 +08:00
|
|
|
/*
|
2013-02-18 03:30:44 +08:00
|
|
|
* Initialize the per-cpu interrupt state and
|
2011-03-16 06:03:33 +08:00
|
|
|
* connect to the host.
|
|
|
|
*/
|
2017-12-23 02:19:02 +08:00
|
|
|
ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hyperv/vmbus:online",
|
2016-12-08 06:53:11 +08:00
|
|
|
hv_synic_init, hv_synic_cleanup);
|
|
|
|
if (ret < 0)
|
|
|
|
goto err_alloc;
|
|
|
|
hyperv_cpuhp_online = ret;
|
|
|
|
|
2011-03-16 06:03:33 +08:00
|
|
|
ret = vmbus_connect();
|
2011-09-01 05:35:55 +08:00
|
|
|
if (ret)
|
2015-12-15 08:01:38 +08:00
|
|
|
goto err_connect;
|
2011-03-16 06:03:33 +08:00
|
|
|
|
2015-03-01 03:39:01 +08:00
|
|
|
/*
|
|
|
|
* Only register if the crash MSRs are available
|
|
|
|
*/
|
2015-08-02 07:08:20 +08:00
|
|
|
if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
|
2018-07-08 10:56:51 +08:00
|
|
|
u64 hyperv_crash_ctl;
|
|
|
|
/*
|
|
|
|
* Sysctl registration is not fatal, since by default
|
|
|
|
* reporting is enabled.
|
|
|
|
*/
|
|
|
|
hv_ctl_table_hdr = register_sysctl_table(hv_root_table);
|
|
|
|
if (!hv_ctl_table_hdr)
|
|
|
|
			pr_err("Hyper-V: sysctl table register error\n");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Register for panic kmsg callback only if the right
|
|
|
|
* capability is supported by the hypervisor.
|
|
|
|
*/
|
2018-07-29 05:58:47 +08:00
|
|
|
hv_get_crash_ctl(hyperv_crash_ctl);
|
2018-07-08 10:56:51 +08:00
|
|
|
if (hyperv_crash_ctl & HV_CRASH_CTL_CRASH_NOTIFY_MSG) {
|
|
|
|
hv_panic_page = (void *)get_zeroed_page(GFP_KERNEL);
|
|
|
|
if (hv_panic_page) {
|
|
|
|
ret = kmsg_dump_register(&hv_kmsg_dumper);
|
|
|
|
if (ret)
|
|
|
|
pr_err("Hyper-V: kmsg dump register "
|
|
|
|
"error 0x%x\n", ret);
|
|
|
|
} else
|
|
|
|
pr_err("Hyper-V: panic message page memory "
|
|
|
|
				       "allocation failed\n");
|
|
|
|
}
|
|
|
|
|
2015-08-02 07:08:10 +08:00
|
|
|
register_die_notifier(&hyperv_die_block);
|
2015-03-01 03:39:01 +08:00
|
|
|
atomic_notifier_chain_register(&panic_notifier_list,
|
|
|
|
&hyperv_panic_block);
|
|
|
|
}
|
|
|
|
|
2010-12-03 00:50:58 +08:00
|
|
|
vmbus_request_offers();
|
2010-05-29 07:22:44 +08:00
|
|
|
|
2011-06-07 06:50:08 +08:00
|
|
|
return 0;
|
2011-09-01 05:35:55 +08:00
|
|
|
|
2015-12-15 08:01:38 +08:00
|
|
|
err_connect:
|
2016-12-08 06:53:11 +08:00
|
|
|
cpuhp_remove_state(hyperv_cpuhp_online);
|
2013-06-19 11:28:10 +08:00
|
|
|
err_alloc:
|
|
|
|
hv_synic_free();
|
2014-03-05 20:42:14 +08:00
|
|
|
hv_remove_vmbus_irq();
|
2011-09-01 05:35:55 +08:00
|
|
|
|
|
|
|
bus_unregister(&hv_bus);
|
2018-07-08 10:56:51 +08:00
|
|
|
free_page((unsigned long)hv_panic_page);
|
2018-07-29 05:58:46 +08:00
|
|
|
unregister_sysctl_table(hv_ctl_table_hdr);
|
|
|
|
hv_ctl_table_hdr = NULL;
|
2011-09-01 05:35:55 +08:00
|
|
|
return ret;
|
2009-07-14 07:02:34 +08:00
|
|
|
}
|
|
|
|
|
2009-09-02 22:11:14 +08:00
|
|
|
/**
|
2015-08-05 15:52:37 +08:00
|
|
|
 * __vmbus_driver_register() - Register a vmbus driver
|
|
|
|
* @hv_driver: Pointer to driver structure you want to register
|
2011-08-26 06:07:32 +08:00
|
|
|
* @owner: owner module of the drv
|
|
|
|
* @mod_name: module name string
|
2010-03-05 06:11:00 +08:00
|
|
|
*
|
|
|
|
* Registers the given driver with Linux through the 'driver_register()' call
|
2011-08-26 06:07:32 +08:00
|
|
|
* and sets up the hyper-v vmbus handling for this driver.
|
2010-03-05 06:11:00 +08:00
|
|
|
* It will return the state of the 'driver_register()' call.
|
|
|
|
*
|
2009-09-02 22:11:14 +08:00
|
|
|
*/
|
2011-08-26 06:07:32 +08:00
|
|
|
int __vmbus_driver_register(struct hv_driver *hv_driver, struct module *owner, const char *mod_name)
|
2009-07-14 07:02:34 +08:00
|
|
|
{
|
2009-07-28 04:47:36 +08:00
|
|
|
int ret;
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2011-08-26 06:07:32 +08:00
|
|
|
pr_info("registering driver %s\n", hv_driver->name);
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2011-12-02 01:59:34 +08:00
|
|
|
ret = vmbus_exists();
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
|
2011-08-26 06:07:32 +08:00
|
|
|
hv_driver->driver.name = hv_driver->name;
|
|
|
|
hv_driver->driver.owner = owner;
|
|
|
|
hv_driver->driver.mod_name = mod_name;
|
|
|
|
hv_driver->driver.bus = &hv_bus;
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2016-12-04 04:34:39 +08:00
|
|
|
spin_lock_init(&hv_driver->dynids.lock);
|
|
|
|
INIT_LIST_HEAD(&hv_driver->dynids.list);
|
|
|
|
|
2011-08-26 06:07:32 +08:00
|
|
|
ret = driver_register(&hv_driver->driver);
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2009-07-28 04:47:36 +08:00
|
|
|
return ret;
|
2009-07-14 07:02:34 +08:00
|
|
|
}
|
2011-08-26 06:07:32 +08:00
|
|
|
EXPORT_SYMBOL_GPL(__vmbus_driver_register);
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2009-09-02 22:11:14 +08:00
|
|
|
/**
|
2011-08-26 06:07:32 +08:00
|
|
|
 * vmbus_driver_unregister() - Unregister a vmbus driver
|
2015-08-05 15:52:37 +08:00
|
|
|
* @hv_driver: Pointer to driver structure you want to
|
|
|
|
* un-register
|
2010-03-05 06:11:00 +08:00
|
|
|
*
|
2011-08-26 06:07:32 +08:00
|
|
|
 * Un-register the given driver that was previously registered with a call to
|
|
|
|
* vmbus_driver_register()
|
2009-09-02 22:11:14 +08:00
|
|
|
*/
|
2011-08-26 06:07:32 +08:00
|
|
|
void vmbus_driver_unregister(struct hv_driver *hv_driver)
|
2009-07-14 07:02:34 +08:00
|
|
|
{
|
2011-08-26 06:07:32 +08:00
|
|
|
pr_info("unregistering driver %s\n", hv_driver->name);
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2016-12-04 04:34:39 +08:00
|
|
|
if (!vmbus_exists()) {
|
2011-12-28 05:49:37 +08:00
|
|
|
driver_unregister(&hv_driver->driver);
|
2016-12-04 04:34:39 +08:00
|
|
|
vmbus_free_dynids(hv_driver);
|
|
|
|
}
|
2009-07-14 07:02:34 +08:00
|
|
|
}
|
2011-08-26 06:07:32 +08:00
|
|
|
EXPORT_SYMBOL_GPL(vmbus_driver_unregister);
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2017-09-22 11:58:49 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Called when last reference to channel is gone.
|
|
|
|
*/
|
|
|
|
static void vmbus_chan_release(struct kobject *kobj)
|
|
|
|
{
|
|
|
|
struct vmbus_channel *channel
|
|
|
|
= container_of(kobj, struct vmbus_channel, kobj);
|
|
|
|
|
|
|
|
kfree_rcu(channel, rcu);
|
|
|
|
}
|
|
|
|
|
|
|
|
struct vmbus_chan_attribute {
|
|
|
|
struct attribute attr;
|
|
|
|
ssize_t (*show)(const struct vmbus_channel *chan, char *buf);
|
|
|
|
ssize_t (*store)(struct vmbus_channel *chan,
|
|
|
|
const char *buf, size_t count);
|
|
|
|
};
|
|
|
|
#define VMBUS_CHAN_ATTR(_name, _mode, _show, _store) \
|
|
|
|
struct vmbus_chan_attribute chan_attr_##_name \
|
|
|
|
= __ATTR(_name, _mode, _show, _store)
|
|
|
|
#define VMBUS_CHAN_ATTR_RW(_name) \
|
|
|
|
struct vmbus_chan_attribute chan_attr_##_name = __ATTR_RW(_name)
|
|
|
|
#define VMBUS_CHAN_ATTR_RO(_name) \
|
|
|
|
struct vmbus_chan_attribute chan_attr_##_name = __ATTR_RO(_name)
|
|
|
|
#define VMBUS_CHAN_ATTR_WO(_name) \
|
|
|
|
struct vmbus_chan_attribute chan_attr_##_name = __ATTR_WO(_name)
|
|
|
|
|
|
|
|
static ssize_t vmbus_chan_attr_show(struct kobject *kobj,
|
|
|
|
struct attribute *attr, char *buf)
|
|
|
|
{
|
|
|
|
const struct vmbus_chan_attribute *attribute
|
|
|
|
= container_of(attr, struct vmbus_chan_attribute, attr);
|
|
|
|
const struct vmbus_channel *chan
|
|
|
|
= container_of(kobj, struct vmbus_channel, kobj);
|
|
|
|
|
|
|
|
if (!attribute->show)
|
|
|
|
return -EIO;
|
|
|
|
|
|
|
|
return attribute->show(chan, buf);
|
|
|
|
}
|
|
|
|
|
|
|
|
static const struct sysfs_ops vmbus_chan_sysfs_ops = {
|
|
|
|
.show = vmbus_chan_attr_show,
|
|
|
|
};
|
|
|
|
|
|
|
|
static ssize_t out_mask_show(const struct vmbus_channel *channel, char *buf)
|
|
|
|
{
|
|
|
|
const struct hv_ring_buffer_info *rbi = &channel->outbound;
|
|
|
|
|
|
|
|
return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask);
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR_RO(out_mask);
|
2017-09-22 11:58:49 +08:00
|
|
|
|
|
|
|
static ssize_t in_mask_show(const struct vmbus_channel *channel, char *buf)
|
|
|
|
{
|
|
|
|
const struct hv_ring_buffer_info *rbi = &channel->inbound;
|
|
|
|
|
|
|
|
return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask);
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR_RO(in_mask);
|
2017-09-22 11:58:49 +08:00
|
|
|
|
|
|
|
static ssize_t read_avail_show(const struct vmbus_channel *channel, char *buf)
|
|
|
|
{
|
|
|
|
const struct hv_ring_buffer_info *rbi = &channel->inbound;
|
|
|
|
|
|
|
|
return sprintf(buf, "%u\n", hv_get_bytes_to_read(rbi));
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR_RO(read_avail);
|
2017-09-22 11:58:49 +08:00
|
|
|
|
|
|
|
static ssize_t write_avail_show(const struct vmbus_channel *channel, char *buf)
|
|
|
|
{
|
|
|
|
const struct hv_ring_buffer_info *rbi = &channel->outbound;
|
|
|
|
|
|
|
|
return sprintf(buf, "%u\n", hv_get_bytes_to_write(rbi));
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR_RO(write_avail);
|
2017-09-22 11:58:49 +08:00
|
|
|
|
|
|
|
static ssize_t show_target_cpu(const struct vmbus_channel *channel, char *buf)
|
|
|
|
{
|
|
|
|
return sprintf(buf, "%u\n", channel->target_cpu);
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR(cpu, S_IRUGO, show_target_cpu, NULL);
|
2017-09-22 11:58:49 +08:00
|
|
|
|
|
|
|
static ssize_t channel_pending_show(const struct vmbus_channel *channel,
|
|
|
|
char *buf)
|
|
|
|
{
|
|
|
|
return sprintf(buf, "%d\n",
|
|
|
|
channel_pending(channel,
|
|
|
|
vmbus_connection.monitor_pages[1]));
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR(pending, S_IRUGO, channel_pending_show, NULL);
|
2017-09-22 11:58:49 +08:00
|
|
|
|
|
|
|
static ssize_t channel_latency_show(const struct vmbus_channel *channel,
|
|
|
|
char *buf)
|
|
|
|
{
|
|
|
|
return sprintf(buf, "%d\n",
|
|
|
|
channel_latency(channel,
|
|
|
|
vmbus_connection.monitor_pages[1]));
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR(latency, S_IRUGO, channel_latency_show, NULL);
|
2017-09-22 11:58:49 +08:00
|
|
|
|
2017-10-30 02:33:40 +08:00
|
|
|
static ssize_t channel_interrupts_show(const struct vmbus_channel *channel, char *buf)
|
|
|
|
{
|
|
|
|
return sprintf(buf, "%llu\n", channel->interrupts);
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR(interrupts, S_IRUGO, channel_interrupts_show, NULL);
|
2017-10-30 02:33:40 +08:00
|
|
|
|
|
|
|
static ssize_t channel_events_show(const struct vmbus_channel *channel, char *buf)
|
|
|
|
{
|
|
|
|
return sprintf(buf, "%llu\n", channel->sig_events);
|
|
|
|
}
|
2018-01-05 06:13:25 +08:00
|
|
|
static VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL);
|
2017-10-30 02:33:40 +08:00
|
|
|
|
2018-01-10 02:29:06 +08:00
|
|
|
static ssize_t subchannel_monitor_id_show(const struct vmbus_channel *channel,
|
|
|
|
char *buf)
|
|
|
|
{
|
|
|
|
return sprintf(buf, "%u\n", channel->offermsg.monitorid);
|
|
|
|
}
|
|
|
|
static VMBUS_CHAN_ATTR(monitor_id, S_IRUGO, subchannel_monitor_id_show, NULL);
|
|
|
|
|
|
|
|
static ssize_t subchannel_id_show(const struct vmbus_channel *channel,
|
|
|
|
char *buf)
|
|
|
|
{
|
|
|
|
return sprintf(buf, "%u\n",
|
|
|
|
channel->offermsg.offer.sub_channel_index);
|
|
|
|
}
|
|
|
|
static VMBUS_CHAN_ATTR_RO(subchannel_id);
|
|
|
|
|
2017-09-22 11:58:49 +08:00
|
|
|
static struct attribute *vmbus_chan_attrs[] = {
|
|
|
|
&chan_attr_out_mask.attr,
|
|
|
|
&chan_attr_in_mask.attr,
|
|
|
|
&chan_attr_read_avail.attr,
|
|
|
|
&chan_attr_write_avail.attr,
|
|
|
|
&chan_attr_cpu.attr,
|
|
|
|
&chan_attr_pending.attr,
|
|
|
|
&chan_attr_latency.attr,
|
2017-10-30 02:33:40 +08:00
|
|
|
&chan_attr_interrupts.attr,
|
|
|
|
&chan_attr_events.attr,
|
2018-01-10 02:29:06 +08:00
|
|
|
&chan_attr_monitor_id.attr,
|
|
|
|
&chan_attr_subchannel_id.attr,
|
2017-09-22 11:58:49 +08:00
|
|
|
NULL
|
|
|
|
};
|
|
|
|
|
|
|
|
static struct kobj_type vmbus_chan_ktype = {
|
|
|
|
.sysfs_ops = &vmbus_chan_sysfs_ops,
|
|
|
|
.release = vmbus_chan_release,
|
|
|
|
.default_attrs = vmbus_chan_attrs,
|
|
|
|
};
|
|
|
|
|
|
|
|
/*
|
|
|
|
 * vmbus_add_channel_kobj - set up a sub-directory under device/channels
|
|
|
|
*/
|
|
|
|
int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
|
|
|
|
{
|
|
|
|
struct kobject *kobj = &channel->kobj;
|
|
|
|
u32 relid = channel->offermsg.child_relid;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
kobj->kset = dev->channels_kset;
|
|
|
|
ret = kobject_init_and_add(kobj, &vmbus_chan_ktype, NULL,
|
|
|
|
"%u", relid);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
kobject_uevent(kobj, KOBJ_ADD);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2010-03-05 06:11:00 +08:00
|
|
|
/*
|
2011-09-08 22:24:12 +08:00
|
|
|
* vmbus_device_create - Creates and registers a new child device
|
2010-03-05 06:11:00 +08:00
|
|
|
* on the vmbus.
|
2009-09-02 22:11:14 +08:00
|
|
|
*/
|
2014-06-03 23:38:15 +08:00
|
|
|
struct hv_device *vmbus_device_create(const uuid_le *type,
|
|
|
|
const uuid_le *instance,
|
|
|
|
struct vmbus_channel *channel)
|
2009-07-14 07:02:34 +08:00
|
|
|
{
|
2009-07-28 23:32:53 +08:00
|
|
|
struct hv_device *child_device_obj;
|
2009-07-14 07:02:34 +08:00
|
|
|
|
2011-03-08 05:35:48 +08:00
|
|
|
child_device_obj = kzalloc(sizeof(struct hv_device), GFP_KERNEL);
|
|
|
|
if (!child_device_obj) {
|
2011-03-30 04:58:47 +08:00
|
|
|
pr_err("Unable to allocate device object for child device\n");
|
2009-07-14 07:02:34 +08:00
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2010-10-22 00:05:27 +08:00
|
|
|
child_device_obj->channel = channel;
|
2011-08-26 00:48:28 +08:00
|
|
|
memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le));
|
2011-01-27 04:12:11 +08:00
|
|
|
memcpy(&child_device_obj->dev_instance, instance,
|
2011-08-26 00:48:28 +08:00
|
|
|
sizeof(uuid_le));
|
2015-12-26 12:00:30 +08:00
|
|
|
child_device_obj->vendor_id = 0x1414; /* MSFT vendor ID */
|
2009-07-14 07:02:34 +08:00
|
|
|
|
|
|
|
|
|
|
|
return child_device_obj;
|
|
|
|
}
|
|
|
|
|
/*
 * vmbus_device_register - Register the child device
 */
int vmbus_device_register(struct hv_device *child_device_obj)
{
	struct kobject *kobj = &child_device_obj->device.kobj;
	int ret;

	dev_set_name(&child_device_obj->device, "%pUl",
		     child_device_obj->channel->offermsg.offer.if_instance.b);

	child_device_obj->device.bus = &hv_bus;
	child_device_obj->device.parent = &hv_acpi_dev->dev;
	child_device_obj->device.release = vmbus_device_release;

	/*
	 * Register with the LDM. This will kick off the driver/device
	 * binding...which will eventually call vmbus_match() and vmbus_probe()
	 */
	ret = device_register(&child_device_obj->device);
	if (ret) {
		pr_err("Unable to register child device\n");
		return ret;
	}

	child_device_obj->channels_kset = kset_create_and_add("channels",
							      NULL, kobj);
	if (!child_device_obj->channels_kset) {
		ret = -ENOMEM;
		goto err_dev_unregister;
	}

	ret = vmbus_add_channel_kobj(child_device_obj,
				     child_device_obj->channel);
	if (ret) {
		pr_err("Unable to register primary channel\n");
		goto err_kset_unregister;
	}

	return 0;

err_kset_unregister:
	kset_unregister(child_device_obj->channels_kset);

err_dev_unregister:
	device_unregister(&child_device_obj->device);
	return ret;
}

/*
 * vmbus_device_unregister - Remove the specified child device
 * from the vmbus.
 */
void vmbus_device_unregister(struct hv_device *device_obj)
{
	pr_debug("child device %s unregistered\n",
		 dev_name(&device_obj->device));

	kset_unregister(device_obj->channels_kset);

	/*
	 * Kick off the process of unregistering the device.
	 * This will call vmbus_remove() and eventually vmbus_device_release()
	 */
	device_unregister(&device_obj->device);
}

/*
 * VMBUS is an acpi enumerated device. Get the information we
 * need from DSDT.
 */
#define VTPM_BASE_ADDRESS 0xfed40000
static acpi_status vmbus_walk_resources(struct acpi_resource *res, void *ctx)
{
	resource_size_t start = 0;
	resource_size_t end = 0;
	struct resource *new_res;
	struct resource **old_res = &hyperv_mmio;
	struct resource **prev_res = NULL;

	switch (res->type) {

	/*
	 * "Address" descriptors are for bus windows. Ignore
	 * "memory" descriptors, which are for registers on
	 * devices.
	 */
	case ACPI_RESOURCE_TYPE_ADDRESS32:
		start = res->data.address32.address.minimum;
		end = res->data.address32.address.maximum;
		break;

	case ACPI_RESOURCE_TYPE_ADDRESS64:
		start = res->data.address64.address.minimum;
		end = res->data.address64.address.maximum;
		break;

	default:
		/* Unused resource type */
		return AE_OK;

	}
	/*
	 * Ignore ranges that are below 1MB, as they're not
	 * necessary or useful here.
	 */
	if (end < 0x100000)
		return AE_OK;

	new_res = kzalloc(sizeof(*new_res), GFP_ATOMIC);
	if (!new_res)
		return AE_NO_MEMORY;

	/* If this range overlaps the virtual TPM, truncate it. */
	if (end > VTPM_BASE_ADDRESS && start < VTPM_BASE_ADDRESS)
		end = VTPM_BASE_ADDRESS;

	new_res->name = "hyperv mmio";
	new_res->flags = IORESOURCE_MEM;
	new_res->start = start;
	new_res->end = end;

	/*
	 * If two ranges are adjacent, merge them.
	 */
	do {
		if (!*old_res) {
			*old_res = new_res;
			break;
		}

		if (((*old_res)->end + 1) == new_res->start) {
			(*old_res)->end = new_res->end;
			kfree(new_res);
			break;
		}

		if ((*old_res)->start == new_res->end + 1) {
			(*old_res)->start = new_res->start;
			kfree(new_res);
			break;
		}

		if ((*old_res)->start > new_res->end) {
			new_res->sibling = *old_res;
			if (prev_res)
				(*prev_res)->sibling = new_res;
			*old_res = new_res;
			break;
		}

		prev_res = old_res;
		old_res = &(*old_res)->sibling;

	} while (1);

	return AE_OK;
}

static int vmbus_acpi_remove(struct acpi_device *device)
{
	struct resource *cur_res;
	struct resource *next_res;

	if (hyperv_mmio) {
		if (fb_mmio) {
			__release_region(hyperv_mmio, fb_mmio->start,
					 resource_size(fb_mmio));
			fb_mmio = NULL;
		}

		for (cur_res = hyperv_mmio; cur_res; cur_res = next_res) {
			next_res = cur_res->sibling;
			kfree(cur_res);
		}
	}

	return 0;
}

static void vmbus_reserve_fb(void)
{
	int size;
	/*
	 * Make a claim for the frame buffer in the resource tree under the
	 * first node, which will be the one below 4GB.  The length seems to
	 * be underreported, particularly in a Generation 1 VM.  So start out
	 * reserving a larger area and make it smaller until it succeeds.
	 */

	if (screen_info.lfb_base) {
		if (efi_enabled(EFI_BOOT))
			size = max_t(__u32, screen_info.lfb_size, 0x800000);
		else
			size = max_t(__u32, screen_info.lfb_size, 0x4000000);

		for (; !fb_mmio && (size >= 0x100000); size >>= 1) {
			fb_mmio = __request_region(hyperv_mmio,
						   screen_info.lfb_base, size,
						   fb_mmio_name, 0);
		}
	}
}

/**
 * vmbus_allocate_mmio() - Pick a memory-mapped I/O range.
 * @new:		If successful, supplied a pointer to the
 *			allocated MMIO space.
 * @device_obj:		Identifies the caller
 * @min:		Minimum guest physical address of the
 *			allocation
 * @max:		Maximum guest physical address
 * @size:		Size of the range to be allocated
 * @align:		Alignment of the range to be allocated
 * @fb_overlap_ok:	Whether this allocation can be allowed
 *			to overlap the video frame buffer.
 *
 * This function walks the resources granted to VMBus by the
 * _CRS object in the ACPI namespace underneath the parent
 * "bridge" whether that's a root PCI bus in the Generation 1
 * case or a Module Device in the Generation 2 case.  It then
 * attempts to allocate from the global MMIO pool in a way that
 * matches the constraints supplied in these parameters and by
 * that _CRS.
 *
 * Return: 0 on success, -errno on failure
 */
int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
			resource_size_t min, resource_size_t max,
			resource_size_t size, resource_size_t align,
			bool fb_overlap_ok)
{
	struct resource *iter, *shadow;
	resource_size_t range_min, range_max, start;
	const char *dev_n = dev_name(&device_obj->device);
	int retval;

	retval = -ENXIO;
	down(&hyperv_mmio_lock);

	/*
	 * If overlaps with frame buffers are allowed, then first attempt to
	 * make the allocation from within the reserved region.  Because it
	 * is already reserved, no shadow allocation is necessary.
	 */
	if (fb_overlap_ok && fb_mmio && !(min > fb_mmio->end) &&
	    !(max < fb_mmio->start)) {

		range_min = fb_mmio->start;
		range_max = fb_mmio->end;
		start = (range_min + align - 1) & ~(align - 1);
		for (; start + size - 1 <= range_max; start += align) {
			*new = request_mem_region_exclusive(start, size, dev_n);
			if (*new) {
				retval = 0;
				goto exit;
			}
		}
	}

	for (iter = hyperv_mmio; iter; iter = iter->sibling) {
		if ((iter->start >= max) || (iter->end <= min))
			continue;

		range_min = iter->start;
		range_max = iter->end;
		start = (range_min + align - 1) & ~(align - 1);
		for (; start + size - 1 <= range_max; start += align) {
			shadow = __request_region(iter, start, size, NULL,
						  IORESOURCE_BUSY);
			if (!shadow)
				continue;

			*new = request_mem_region_exclusive(start, size, dev_n);
			if (*new) {
				shadow->name = (char *)*new;
				retval = 0;
				goto exit;
			}

			__release_region(iter, start, size);
		}
	}

exit:
	up(&hyperv_mmio_lock);
	return retval;
}
EXPORT_SYMBOL_GPL(vmbus_allocate_mmio);

/**
 * vmbus_free_mmio() - Free a memory-mapped I/O range.
 * @start:		Base address of region to release.
 * @size:		Size of the range to be released.
 *
 * This function releases anything requested by
 * vmbus_allocate_mmio().
 */
void vmbus_free_mmio(resource_size_t start, resource_size_t size)
{
	struct resource *iter;

	down(&hyperv_mmio_lock);
	for (iter = hyperv_mmio; iter; iter = iter->sibling) {
		if ((iter->start >= start + size) || (iter->end <= start))
			continue;

		__release_region(iter, start, size);
	}
	release_mem_region(start, size);
	up(&hyperv_mmio_lock);
}
EXPORT_SYMBOL_GPL(vmbus_free_mmio);

static int vmbus_acpi_add(struct acpi_device *device)
{
	acpi_status result;
	int ret_val = -ENODEV;
	struct acpi_device *ancestor;

	hv_acpi_dev = device;

	result = acpi_walk_resources(device->handle, METHOD_NAME__CRS,
					vmbus_walk_resources, NULL);

	if (ACPI_FAILURE(result))
		goto acpi_walk_err;
	/*
	 * Some ancestor of the vmbus acpi device (Gen1 or Gen2
	 * firmware) is the VMOD that has the mmio ranges. Get that.
	 */
	for (ancestor = device->parent; ancestor; ancestor = ancestor->parent) {
		result = acpi_walk_resources(ancestor->handle, METHOD_NAME__CRS,
					     vmbus_walk_resources, NULL);

		if (ACPI_FAILURE(result))
			continue;
		if (hyperv_mmio) {
			vmbus_reserve_fb();
			break;
		}
	}
	ret_val = 0;

acpi_walk_err:
	complete(&probe_event);
	if (ret_val)
		vmbus_acpi_remove(device);
	return ret_val;
}

static const struct acpi_device_id vmbus_acpi_device_ids[] = {
	{"VMBUS", 0},
	{"VMBus", 0},
	{"", 0},
};
MODULE_DEVICE_TABLE(acpi, vmbus_acpi_device_ids);

static struct acpi_driver vmbus_acpi_driver = {
	.name = "vmbus",
	.ids = vmbus_acpi_device_ids,
	.ops = {
		.add = vmbus_acpi_add,
		.remove = vmbus_acpi_remove,
	},
};

static void hv_kexec_handler(void)
{
	hv_synic_clockevents_cleanup();
	vmbus_initiate_unload(false);
	vmbus_connection.conn_state = DISCONNECTED;
	/* Make sure conn_state is set as hv_synic_cleanup checks for it */
	mb();
	cpuhp_remove_state(hyperv_cpuhp_online);
	hyperv_cleanup();
}

static void hv_crash_handler(struct pt_regs *regs)
{
	vmbus_initiate_unload(true);
	/*
	 * In crash handler we can't schedule synic cleanup for all CPUs,
	 * doing the cleanup for current CPU only. This should be sufficient
	 * for kdump.
	 */
	vmbus_connection.conn_state = DISCONNECTED;
	hv_synic_cleanup(smp_processor_id());
	hyperv_cleanup();
}

static int __init hv_acpi_init(void)
{
	int ret, t;

	if (!hv_is_hyperv_initialized())
		return -ENODEV;

	init_completion(&probe_event);

	/*
	 * Get ACPI resources first.
	 */
	ret = acpi_bus_register_driver(&vmbus_acpi_driver);

	if (ret)
		return ret;

	t = wait_for_completion_timeout(&probe_event, 5*HZ);
	if (t == 0) {
		ret = -ETIMEDOUT;
		goto cleanup;
	}

	ret = vmbus_bus_init();
	if (ret)
		goto cleanup;

	hv_setup_kexec_handler(hv_kexec_handler);
	hv_setup_crash_handler(hv_crash_handler);

	return 0;

cleanup:
	acpi_bus_unregister_driver(&vmbus_acpi_driver);
	hv_acpi_dev = NULL;
	return ret;
}

static void __exit vmbus_exit(void)
{
	int cpu;

	hv_remove_kexec_handler();
	hv_remove_crash_handler();
	vmbus_connection.conn_state = DISCONNECTED;
	hv_synic_clockevents_cleanup();
	vmbus_disconnect();
	hv_remove_vmbus_irq();
	for_each_online_cpu(cpu) {
		struct hv_per_cpu_context *hv_cpu
			= per_cpu_ptr(hv_context.cpu_context, cpu);

		tasklet_kill(&hv_cpu->msg_dpc);
	}
	vmbus_free_channels();

	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
		kmsg_dump_unregister(&hv_kmsg_dumper);
		unregister_die_notifier(&hyperv_die_block);
		atomic_notifier_chain_unregister(&panic_notifier_list,
						 &hyperv_panic_block);
	}

	free_page((unsigned long)hv_panic_page);
	unregister_sysctl_table(hv_ctl_table_hdr);
	hv_ctl_table_hdr = NULL;
	bus_unregister(&hv_bus);

	cpuhp_remove_state(hyperv_cpuhp_online);
	hv_synic_free();
	acpi_bus_unregister_driver(&vmbus_acpi_driver);
}

MODULE_LICENSE("GPL");

subsys_initcall(hv_acpi_init);
module_exit(vmbus_exit);