// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2007-2008 Advanced Micro Devices, Inc.
 * Author: Joerg Roedel <jroedel@suse.de>
*/
#define pr_fmt(fmt)	"iommu: " fmt
iommu/core: split mapping to page sizes as supported by the hardware
When mapping a memory region, split it to page sizes as supported
by the iommu hardware. Always prefer bigger pages, when possible,
in order to reduce the TLB pressure.
The logic to do that is now added to the IOMMU core, so neither the iommu
drivers themselves nor users of the IOMMU API have to duplicate it.
This allows a more lenient granularity of mappings; traditionally the
IOMMU API took 'order' (of a page) as a mapping size, and directly let
the low level iommu drivers handle the mapping, but now that the IOMMU
core can split arbitrary memory regions into pages, we can remove this
limitation, so users don't have to split those regions by themselves.
Currently the supported page sizes are advertised once and they then
remain static. That works well for OMAP and MSM but it would probably
not fly well with intel's hardware, where the page size capabilities
seem to have the potential to be different between several DMA
remapping devices.
register_iommu() currently sets a default pgsize behavior, so we can convert
the IOMMU drivers in subsequent patches. After all the drivers
are converted, the temporary default settings will be removed.
Mainline users of the IOMMU API (kvm and omap-iovmm) are adapted
to deal with bytes instead of page order.
Many thanks to Joerg Roedel <Joerg.Roedel@amd.com> for significant review!
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Cc: David Brown <davidb@codeaurora.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <Joerg.Roedel@amd.com>
Cc: Stepan Moskovchenko <stepanm@codeaurora.org>
Cc: KyongHo Cho <pullip.cho@samsung.com>
Cc: Hiroshi DOYU <hdoyu@nvidia.com>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
#include <linux/amba/bus.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/bits.h>
#include <linux/bug.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/export.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/host1x_context_bus.h>
#include <linux/iommu.h>
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using its own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assigned ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Returns the group number
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
#include <linux/idr.h>
#include <linux/err.h>
#include <linux/pci.h>
#include <linux/pci-ats.h>
#include <linux/bitops.h>
#include <linux/platform_device.h>
#include <linux/property.h>
#include <linux/fsl/mc.h>
#include <linux/module.h>
#include <linux/cc_platform.h>
#include <linux/cdx/cdx_bus.h>
#include <trace/events/iommu.h>
#include <linux/sched/mm.h>
#include <linux/msi.h>
#include "dma-iommu.h"
#include "iommu-priv.h"
static struct kset *iommu_group_kset;
static DEFINE_IDA(iommu_group_ida);
static DEFINE_IDA(iommu_global_pasid_ida);

static unsigned int iommu_def_domain_type __read_mostly;
static bool iommu_dma_strict __read_mostly = IS_ENABLED(CONFIG_IOMMU_DEFAULT_DMA_STRICT);
static u32 iommu_cmd_line __read_mostly;
struct iommu_group {
	struct kobject kobj;
	struct kobject *devices_kobj;
	struct list_head devices;
	struct xarray pasid_array;
	struct mutex mutex;
	void *iommu_data;
	void (*iommu_data_release)(void *iommu_data);
	char *name;
	int id;
	struct iommu_domain *default_domain;
	struct iommu_domain *blocking_domain;
	struct iommu_domain *domain;
	struct list_head entry;
	unsigned int owner_cnt;
	void *owner;
};

struct group_device {
	struct list_head list;
	struct device *dev;
	char *name;
};

/* Iterate over each struct group_device in a struct iommu_group */
#define for_each_group_device(group, pos) \
	list_for_each_entry(pos, &(group)->devices, list)
struct iommu_group_attribute {
	struct attribute attr;
	ssize_t (*show)(struct iommu_group *group, char *buf);
	ssize_t (*store)(struct iommu_group *group,
			 const char *buf, size_t count);
};

static const char * const iommu_group_resv_type_string[] = {
	[IOMMU_RESV_DIRECT]			= "direct",
	[IOMMU_RESV_DIRECT_RELAXABLE]		= "direct-relaxable",
	[IOMMU_RESV_RESERVED]			= "reserved",
	[IOMMU_RESV_MSI]			= "msi",
	[IOMMU_RESV_SW_MSI]			= "msi",
};

#define IOMMU_CMD_LINE_DMA_API		BIT(0)
#define IOMMU_CMD_LINE_STRICT		BIT(1)

static int iommu_bus_notifier(struct notifier_block *nb,
			      unsigned long action, void *data);
static void iommu_release_device(struct device *dev);
static struct iommu_domain *
__iommu_group_domain_alloc(struct iommu_group *group, unsigned int type);
static int __iommu_attach_device(struct iommu_domain *domain,
				 struct device *dev);
static int __iommu_attach_group(struct iommu_domain *domain,
				struct iommu_group *group);

enum {
	IOMMU_SET_DOMAIN_MUST_SUCCEED = 1 << 0,
};

static int __iommu_device_set_domain(struct iommu_group *group,
				     struct device *dev,
				     struct iommu_domain *new_domain,
				     unsigned int flags);
static int __iommu_group_set_domain_internal(struct iommu_group *group,
					     struct iommu_domain *new_domain,
					     unsigned int flags);
static int __iommu_group_set_domain(struct iommu_group *group,
				    struct iommu_domain *new_domain)
{
	return __iommu_group_set_domain_internal(group, new_domain, 0);
}

static void __iommu_group_set_domain_nofail(struct iommu_group *group,
					    struct iommu_domain *new_domain)
{
	WARN_ON(__iommu_group_set_domain_internal(
		group, new_domain, IOMMU_SET_DOMAIN_MUST_SUCCEED));
}

static int iommu_setup_default_domain(struct iommu_group *group,
				      int target_type);
static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
					       struct device *dev);
static ssize_t iommu_group_store_type(struct iommu_group *group,
				      const char *buf, size_t count);
static struct group_device *iommu_group_alloc_device(struct iommu_group *group,
						     struct device *dev);
static void __iommu_group_free_device(struct iommu_group *group,
				      struct group_device *grp_dev);

#define IOMMU_GROUP_ATTR(_name, _mode, _show, _store)		\
struct iommu_group_attribute iommu_group_attr_##_name =		\
	__ATTR(_name, _mode, _show, _store)

#define to_iommu_group_attr(_attr)	\
	container_of(_attr, struct iommu_group_attribute, attr)
#define to_iommu_group(_kobj)		\
	container_of(_kobj, struct iommu_group, kobj)

static LIST_HEAD(iommu_device_list);
static DEFINE_SPINLOCK(iommu_device_lock);

static const struct bus_type * const iommu_buses[] = {
	&platform_bus_type,
#ifdef CONFIG_PCI
	&pci_bus_type,
#endif
#ifdef CONFIG_ARM_AMBA
	&amba_bustype,
#endif
#ifdef CONFIG_FSL_MC_BUS
	&fsl_mc_bus_type,
#endif
#ifdef CONFIG_TEGRA_HOST1X_CONTEXT_BUS
	&host1x_context_device_bus_type,
#endif
#ifdef CONFIG_CDX_BUS
	&cdx_bus_type,
#endif
};

/*
 * Use a function instead of an array here because the domain-type is a
 * bit-field, so an array would waste memory.
 */
static const char *iommu_domain_type_str(unsigned int t)
{
	switch (t) {
	case IOMMU_DOMAIN_BLOCKED:
		return "Blocked";
	case IOMMU_DOMAIN_IDENTITY:
		return "Passthrough";
	case IOMMU_DOMAIN_UNMANAGED:
		return "Unmanaged";
	case IOMMU_DOMAIN_DMA:
	case IOMMU_DOMAIN_DMA_FQ:
		return "Translated";
	case IOMMU_DOMAIN_PLATFORM:
		return "Platform";
	default:
		return "Unknown";
	}
}

static int __init iommu_subsys_init(void)
{
	struct notifier_block *nb;

	if (!(iommu_cmd_line & IOMMU_CMD_LINE_DMA_API)) {
		if (IS_ENABLED(CONFIG_IOMMU_DEFAULT_PASSTHROUGH))
			iommu_set_default_passthrough(false);
		else
			iommu_set_default_translated(false);

		if (iommu_default_passthrough() && cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
			pr_info("Memory encryption detected - Disabling default IOMMU Passthrough\n");
			iommu_set_default_translated(false);
		}
	}

	if (!iommu_default_passthrough() && !iommu_dma_strict)
		iommu_def_domain_type = IOMMU_DOMAIN_DMA_FQ;

	pr_info("Default domain type: %s%s\n",
		iommu_domain_type_str(iommu_def_domain_type),
		(iommu_cmd_line & IOMMU_CMD_LINE_DMA_API) ?
			" (set via kernel command line)" : "");

	if (!iommu_default_passthrough())
		pr_info("DMA domain TLB invalidation policy: %s mode%s\n",
			iommu_dma_strict ? "strict" : "lazy",
			(iommu_cmd_line & IOMMU_CMD_LINE_STRICT) ?
				" (set via kernel command line)" : "");

	nb = kcalloc(ARRAY_SIZE(iommu_buses), sizeof(*nb), GFP_KERNEL);
	if (!nb)
		return -ENOMEM;

	for (int i = 0; i < ARRAY_SIZE(iommu_buses); i++) {
		nb[i].notifier_call = iommu_bus_notifier;
		bus_register_notifier(iommu_buses[i], &nb[i]);
	}

	return 0;
}
subsys_initcall(iommu_subsys_init);

static int remove_iommu_group(struct device *dev, void *data)
{
	if (dev->iommu && dev->iommu->iommu_dev == data)
		iommu_release_device(dev);

	return 0;
}

/**
 * iommu_device_register() - Register an IOMMU hardware instance
 * @iommu: IOMMU handle for the instance
 * @ops: IOMMU ops to associate with the instance
 * @hwdev: (optional) actual instance device, used for fwnode lookup
 *
 * Return: 0 on success, or an error.
 */
int iommu_device_register(struct iommu_device *iommu,
			  const struct iommu_ops *ops, struct device *hwdev)
{
	int err = 0;

	/* We need to be able to take module references appropriately */
	if (WARN_ON(is_module_address((unsigned long)ops) && !ops->owner))
		return -EINVAL;

	iommu->ops = ops;
	if (hwdev)
		iommu->fwnode = dev_fwnode(hwdev);

	spin_lock(&iommu_device_lock);
	list_add_tail(&iommu->list, &iommu_device_list);
	spin_unlock(&iommu_device_lock);

	for (int i = 0; i < ARRAY_SIZE(iommu_buses) && !err; i++)
		err = bus_iommu_probe(iommu_buses[i]);
	if (err)
		iommu_device_unregister(iommu);
	return err;
}
EXPORT_SYMBOL_GPL(iommu_device_register);

void iommu_device_unregister(struct iommu_device *iommu)
{
	for (int i = 0; i < ARRAY_SIZE(iommu_buses); i++)
		bus_for_each_dev(iommu_buses[i], NULL, iommu, remove_iommu_group);

	spin_lock(&iommu_device_lock);
	list_del(&iommu->list);
	spin_unlock(&iommu_device_lock);

	/* Pairs with the alloc in generic_single_device_group() */
	iommu_group_put(iommu->singleton_group);
	iommu->singleton_group = NULL;
}
EXPORT_SYMBOL_GPL(iommu_device_unregister);

#if IS_ENABLED(CONFIG_IOMMUFD_TEST)
void iommu_device_unregister_bus(struct iommu_device *iommu,
				 const struct bus_type *bus,
				 struct notifier_block *nb)
{
	bus_unregister_notifier(bus, nb);
	iommu_device_unregister(iommu);
}
EXPORT_SYMBOL_GPL(iommu_device_unregister_bus);

/*
 * Register an iommu driver against a single bus. This is only used by iommufd
 * selftest to create a mock iommu driver. The caller must provide
 * some memory to hold a notifier_block.
 */
int iommu_device_register_bus(struct iommu_device *iommu,
			      const struct iommu_ops *ops,
			      const struct bus_type *bus,
			      struct notifier_block *nb)
{
	int err;

	iommu->ops = ops;
	nb->notifier_call = iommu_bus_notifier;
	err = bus_register_notifier(bus, nb);
	if (err)
		return err;

	spin_lock(&iommu_device_lock);
	list_add_tail(&iommu->list, &iommu_device_list);
	spin_unlock(&iommu_device_lock);

	err = bus_iommu_probe(bus);
	if (err) {
		iommu_device_unregister_bus(iommu, bus, nb);
		return err;
	}
	return 0;
}
EXPORT_SYMBOL_GPL(iommu_device_register_bus);
#endif

static struct dev_iommu *dev_iommu_get(struct device *dev)
{
	struct dev_iommu *param = dev->iommu;

	lockdep_assert_held(&iommu_probe_device_lock);

	if (param)
		return param;

	param = kzalloc(sizeof(*param), GFP_KERNEL);
	if (!param)
		return NULL;

	mutex_init(&param->lock);
	dev->iommu = param;
	return param;
}

static void dev_iommu_free(struct device *dev)
{
	struct dev_iommu *param = dev->iommu;

	dev->iommu = NULL;
	if (param->fwspec) {
		fwnode_handle_put(param->fwspec->iommu_fwnode);
		kfree(param->fwspec);
	}
	kfree(param);
}

/*
 * Internal equivalent of device_iommu_mapped() for when we care that a device
 * actually has API ops, and don't want false positives from VFIO-only groups.
 */
static bool dev_has_iommu(struct device *dev)
{
	return dev->iommu && dev->iommu->iommu_dev;
}

static u32 dev_iommu_get_max_pasids(struct device *dev)
{
	u32 max_pasids = 0, bits = 0;
	int ret;

	if (dev_is_pci(dev)) {
		ret = pci_max_pasids(to_pci_dev(dev));
		if (ret > 0)
			max_pasids = ret;
	} else {
		ret = device_property_read_u32(dev, "pasid-num-bits", &bits);
		if (!ret)
			max_pasids = 1UL << bits;
	}

	return min_t(u32, max_pasids, dev->iommu->iommu_dev->max_pasids);
}

void dev_iommu_priv_set(struct device *dev, void *priv)
{
	/* FSL_PAMU does something weird */
	if (!IS_ENABLED(CONFIG_FSL_PAMU))
		lockdep_assert_held(&iommu_probe_device_lock);
	dev->iommu->priv = priv;
}
EXPORT_SYMBOL_GPL(dev_iommu_priv_set);

/*
 * Init the dev->iommu and dev->iommu_group in the struct device and get the
 * driver probed
 */
static int iommu_init_device(struct device *dev, const struct iommu_ops *ops)
{
	struct iommu_device *iommu_dev;
	struct iommu_group *group;
	int ret;

	if (!dev_iommu_get(dev))
		return -ENOMEM;

	if (!try_module_get(ops->owner)) {
		ret = -EINVAL;
		goto err_free;
	}

	iommu_dev = ops->probe_device(dev);
	if (IS_ERR(iommu_dev)) {
		ret = PTR_ERR(iommu_dev);
		goto err_module_put;
	}
	dev->iommu->iommu_dev = iommu_dev;

	ret = iommu_device_link(iommu_dev, dev);
	if (ret)
		goto err_release;

	group = ops->device_group(dev);
	if (WARN_ON_ONCE(group == NULL))
		group = ERR_PTR(-EINVAL);
	if (IS_ERR(group)) {
		ret = PTR_ERR(group);
		goto err_unlink;
	}
	dev->iommu_group = group;

	dev->iommu->max_pasids = dev_iommu_get_max_pasids(dev);
	if (ops->is_attach_deferred)
		dev->iommu->attach_deferred = ops->is_attach_deferred(dev);
	return 0;

err_unlink:
	iommu_device_unlink(iommu_dev, dev);
err_release:
	if (ops->release_device)
		ops->release_device(dev);
err_module_put:
	module_put(ops->owner);
err_free:
	dev->iommu->iommu_dev = NULL;
	dev_iommu_free(dev);
	return ret;
}

static void iommu_deinit_device ( struct device * dev )
{
        struct iommu_group *group = dev->iommu_group;
        const struct iommu_ops *ops = dev_iommu_ops(dev);

        lockdep_assert_held(&group->mutex);

        iommu_device_unlink(dev->iommu->iommu_dev, dev);

        /*
         * release_device() must stop using any attached domain on the device.
         * If there are still other devices in the group, they are not affected
         * by this callback.
         *
         * If the iommu driver provides release_domain, the core code ensures
         * that domain is attached prior to calling release_device. Drivers can
         * use this to enforce a translation on the idle iommu. Typically, the
         * global static blocked_domain is a good choice.
         *
         * Otherwise, the iommu driver must set the device to either an identity
         * or a blocking translation in release_device() and stop using any
         * domain pointer, as it is going to be freed.
         *
         * Regardless, if a delayed attach never occurred, then the release
         * should still avoid touching any hardware configuration either.
         */
        if (!dev->iommu->attach_deferred && ops->release_domain)
                ops->release_domain->ops->attach_dev(ops->release_domain, dev);

        if (ops->release_device)
                ops->release_device(dev);

        /*
         * If this is the last driver to use the group then we must free the
         * domains before we do the module_put().
         */
        if (list_empty(&group->devices)) {
                if (group->default_domain) {
                        iommu_domain_free(group->default_domain);
                        group->default_domain = NULL;
                }
                if (group->blocking_domain) {
                        iommu_domain_free(group->blocking_domain);
                        group->blocking_domain = NULL;
                }
                group->domain = NULL;
        }

        /* Caller must put iommu_group */
        dev->iommu_group = NULL;
        module_put(ops->owner);
        dev_iommu_free(dev);
}
DEFINE_MUTEX(iommu_probe_device_lock);

static int __iommu_probe_device(struct device *dev, struct list_head *group_list)
{
        const struct iommu_ops *ops;
        struct iommu_fwspec *fwspec;
        struct iommu_group *group;
        struct group_device *gdev;
        int ret;

        /*
         * For FDT-based systems and ACPI IORT/VIOT, drivers register IOMMU
         * instances with non-NULL fwnodes, and client devices should have been
         * identified with a fwspec by this point. Otherwise, we can currently
         * assume that only one of Intel, AMD, s390, PAMU or legacy SMMUv2 can
         * be present, and that any of their registered instances has suitable
         * ops for probing, and thus cheekily co-opt the same mechanism.
         */
        fwspec = dev_iommu_fwspec_get(dev);
        if (fwspec && fwspec->ops)
                ops = fwspec->ops;
        else
                ops = iommu_ops_from_fwnode(NULL);

        if (!ops)
                return -ENODEV;

        /*
         * Serialise to avoid races between IOMMU drivers registering in
         * parallel and/or the "replay" calls from ACPI/OF code via client
         * driver probe. Once the latter have been cleaned up we should
         * probably be able to use device_lock() here to minimise the scope,
         * but for now enforcing a simple global ordering is fine.
         */
        lockdep_assert_held(&iommu_probe_device_lock);

        /* Device is probed already if in a group */
        if (dev->iommu_group)
                return 0;

        ret = iommu_init_device(dev, ops);
        if (ret)
                return ret;

        group = dev->iommu_group;
        gdev = iommu_group_alloc_device(group, dev);
        mutex_lock(&group->mutex);
        if (IS_ERR(gdev)) {
                ret = PTR_ERR(gdev);
                goto err_put_group;
        }

        /*
         * The gdev must be in the list before calling
         * iommu_setup_default_domain()
         */
        list_add_tail(&gdev->list, &group->devices);
        WARN_ON(group->default_domain && !group->domain);
        if (group->default_domain)
                iommu_create_device_direct_mappings(group->default_domain, dev);
        if (group->domain) {
                ret = __iommu_device_set_domain(group, dev, group->domain, 0);
                if (ret)
                        goto err_remove_gdev;
        } else if (!group->default_domain && !group_list) {
                ret = iommu_setup_default_domain(group, 0);
                if (ret)
                        goto err_remove_gdev;
        } else if (!group->default_domain) {
                /*
                 * With a group_list argument we defer the default_domain setup
                 * to the caller by providing a de-duplicated list of groups
                 * that need further setup.
                 */
                if (list_empty(&group->entry))
                        list_add_tail(&group->entry, group_list);
        }
        mutex_unlock(&group->mutex);

        if (dev_is_pci(dev))
                iommu_dma_set_pci_32bit_workaround(dev);

        return 0;

err_remove_gdev:
        list_del(&gdev->list);
        __iommu_group_free_device(group, gdev);
err_put_group:
        iommu_deinit_device(dev);
        mutex_unlock(&group->mutex);
        iommu_group_put(group);

        return ret;
}

int iommu_probe_device(struct device *dev)
{
        const struct iommu_ops *ops;
        int ret;

        mutex_lock(&iommu_probe_device_lock);
        ret = __iommu_probe_device(dev, NULL);
        mutex_unlock(&iommu_probe_device_lock);
        if (ret)
                return ret;

        ops = dev_iommu_ops(dev);
        if (ops->probe_finalize)
                ops->probe_finalize(dev);

        return 0;
}

static void __iommu_group_free_device(struct iommu_group *group,
                                      struct group_device *grp_dev)
{
        struct device *dev = grp_dev->dev;

        sysfs_remove_link(group->devices_kobj, grp_dev->name);
        sysfs_remove_link(&dev->kobj, "iommu_group");

        trace_remove_device_from_group(group->id, dev);

        /*
         * If the group has become empty then ownership must have been
         * released, and the current domain must be set back to NULL or
         * the default domain.
         */
        if (list_empty(&group->devices))
                WARN_ON(group->owner_cnt ||
                        group->domain != group->default_domain);

        kfree(grp_dev->name);
        kfree(grp_dev);
}

/* Remove the iommu_group from the struct device. */
static void __iommu_group_remove_device(struct device *dev)
{
        struct iommu_group *group = dev->iommu_group;
        struct group_device *device;

        mutex_lock(&group->mutex);
        for_each_group_device(group, device) {
                if (device->dev != dev)
                        continue;

                list_del(&device->list);
                __iommu_group_free_device(group, device);
                if (dev_has_iommu(dev))
                        iommu_deinit_device(dev);
                else
                        dev->iommu_group = NULL;
                break;
        }
        mutex_unlock(&group->mutex);

        /*
         * Pairs with the get in iommu_init_device() or
         * iommu_group_add_device()
         */
        iommu_group_put(group);
}

static void iommu_release_device(struct device *dev)
{
        struct iommu_group *group = dev->iommu_group;

        if (group)
                __iommu_group_remove_device(dev);

        /* Free any fwspec if no iommu_driver was ever attached */
        if (dev->iommu)
                dev_iommu_free(dev);
}

static int __init iommu_set_def_domain_type(char *str)
{
        bool pt;
        int ret;

        ret = kstrtobool(str, &pt);
        if (ret)
                return ret;

        if (pt)
                iommu_set_default_passthrough(true);
        else
                iommu_set_default_translated(true);

        return 0;
}
early_param("iommu.passthrough", iommu_set_def_domain_type);

static int __init iommu_dma_setup(char *str)
{
        int ret = kstrtobool(str, &iommu_dma_strict);

        if (!ret)
                iommu_cmd_line |= IOMMU_CMD_LINE_STRICT;
        return ret;
}
early_param("iommu.strict", iommu_dma_setup);

void iommu_set_dma_strict(void)
{
        iommu_dma_strict = true;
        if (iommu_def_domain_type == IOMMU_DOMAIN_DMA_FQ)
                iommu_def_domain_type = IOMMU_DOMAIN_DMA;
}
static ssize_t iommu_group_attr_show(struct kobject *kobj,
                                     struct attribute *__attr, char *buf)
{
        struct iommu_group_attribute *attr = to_iommu_group_attr(__attr);
        struct iommu_group *group = to_iommu_group(kobj);
        ssize_t ret = -EIO;
        if (attr->show)
                ret = attr->show(group, buf);
        return ret;
}

static ssize_t iommu_group_attr_store(struct kobject *kobj,
                                      struct attribute *__attr,
                                      const char *buf, size_t count)
{
        struct iommu_group_attribute *attr = to_iommu_group_attr(__attr);
        struct iommu_group *group = to_iommu_group(kobj);
        ssize_t ret = -EIO;
        if (attr->store)
                ret = attr->store(group, buf, count);
        return ret;
}
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using it's own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assign ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Return the group number
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
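The get/id/put pattern shown above can be illustrated with a small userspace sketch (not kernel code): a device holds a pointer to a refcounted group object, and a caller takes a temporary reference just long enough to read the group ID. The names mirror the kernel API, but the types and the refcounting here are simplified assumptions for illustration only (the kernel uses kobject reference counting).

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-ins for the kernel structures. */
struct iommu_group {
	int id;
	int refcount;
};

struct device {
	struct iommu_group *iommu_group;
};

/* Take a reference on the group a device belongs to, if any. */
static struct iommu_group *iommu_group_get(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;

	if (group)
		group->refcount++;
	return group;
}

static int iommu_group_id(struct iommu_group *group)
{
	return group->id;
}

/* Drop a reference; the group is freed when the last one goes away. */
static void iommu_group_put(struct iommu_group *group)
{
	if (group && --group->refcount == 0)
		free(group);
}
```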
static const struct sysfs_ops iommu_group_sysfs_ops = {
	.show = iommu_group_attr_show,
	.store = iommu_group_attr_store,
};
static int iommu_group_create_file(struct iommu_group *group,
				   struct iommu_group_attribute *attr)
{
	return sysfs_create_file(&group->kobj, &attr->attr);
}
static void iommu_group_remove_file(struct iommu_group *group,
				    struct iommu_group_attribute *attr)
{
	sysfs_remove_file(&group->kobj, &attr->attr);
}

static ssize_t iommu_group_show_name(struct iommu_group *group, char *buf)
{
	return sysfs_emit(buf, "%s\n", group->name);
}
/**
 * iommu_insert_resv_region - Insert a new region in the
 * list of reserved regions.
 * @new: new region to insert
 * @regions: list of regions
 *
 * Elements are sorted by start address and overlapping segments
 * of the same type are merged.
 */
static int iommu_insert_resv_region(struct iommu_resv_region *new,
				    struct list_head *regions)
{
	struct iommu_resv_region *iter, *tmp, *nr, *top;
	LIST_HEAD(stack);

	nr = iommu_alloc_resv_region(new->start, new->length,
				     new->prot, new->type, GFP_KERNEL);
	if (!nr)
		return -ENOMEM;

	/* First add the new element based on start address sorting */
	list_for_each_entry(iter, regions, list) {
		if (nr->start < iter->start ||
		    (nr->start == iter->start && nr->type <= iter->type))
			break;
	}
	list_add_tail(&nr->list, &iter->list);

	/* Merge overlapping segments of type nr->type in @regions, if any */
	list_for_each_entry_safe(iter, tmp, regions, list) {
		phys_addr_t top_end, iter_end = iter->start + iter->length - 1;

		/* no merge needed on elements of different types than @new */
		if (iter->type != new->type) {
			list_move_tail(&iter->list, &stack);
			continue;
		}

		/* look for the last stack element of same type as @iter */
		list_for_each_entry_reverse(top, &stack, list)
			if (top->type == iter->type)
				goto check_overlap;
		list_move_tail(&iter->list, &stack);
		continue;
check_overlap:
		top_end = top->start + top->length - 1;
		if (iter->start > top_end + 1) {
			list_move_tail(&iter->list, &stack);
		} else {
			top->length = max(top_end, iter_end) - top->start + 1;
			list_del(&iter->list);
			kfree(iter);
		}
	}
	list_splice(&stack, regions);
	return 0;
}
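The sort-then-merge behaviour of iommu_insert_resv_region() can be sketched in userspace C as follows. This is a simplified illustration under stated assumptions: it keeps regions in a fixed-size sorted array (region_t and insert_and_merge are invented names, not kernel structures) and coalesces only adjacent same-type regions that overlap or touch, whereas the kernel version walks a linked list and merges same-type regions across intervening entries of other types.

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
	unsigned long long start;
	unsigned long long end;	/* inclusive, i.e. start + length - 1 */
	int type;
} region_t;

/*
 * Insert @nr into the array @r of *@n regions sorted by start address,
 * then merge overlapping or adjacent neighbours of the same type.
 * The caller's array must have room for one more element.
 */
static void insert_and_merge(region_t *r, size_t *n, region_t nr)
{
	size_t i, j;

	/* find insertion point by start address */
	for (i = 0; i < *n && r[i].start < nr.start; i++)
		;
	for (j = *n; j > i; j--)	/* shift the tail right */
		r[j] = r[j - 1];
	r[i] = nr;
	(*n)++;

	/* merge neighbours of the same type whose ranges overlap or touch */
	for (i = 0; i + 1 < *n; ) {
		if (r[i].type == r[i + 1].type &&
		    r[i + 1].start <= r[i].end + 1) {
			if (r[i + 1].end > r[i].end)
				r[i].end = r[i + 1].end;
			for (j = i + 1; j + 1 < *n; j++)	/* drop r[i+1] */
				r[j] = r[j + 1];
			(*n)--;
		} else {
			i++;
		}
	}
}
```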
static int
iommu_insert_device_resv_regions(struct list_head *dev_resv_regions,
				 struct list_head *group_resv_regions)
{
	struct iommu_resv_region *entry;
	int ret = 0;

	list_for_each_entry(entry, dev_resv_regions, list) {
		ret = iommu_insert_resv_region(entry, group_resv_regions);
		if (ret)
			break;
	}
	return ret;
}
int iommu_get_group_resv_regions(struct iommu_group *group,
				 struct list_head *head)
{
	struct group_device *device;
	int ret = 0;

	mutex_lock(&group->mutex);
	for_each_group_device(group, device) {
		struct list_head dev_resv_regions;

		/*
		 * Non-API groups still expose reserved_regions in sysfs,
		 * so filter out calls that get here that way.
		 */
		if (!dev_has_iommu(device->dev))
			break;

		INIT_LIST_HEAD(&dev_resv_regions);
		iommu_get_resv_regions(device->dev, &dev_resv_regions);
		ret = iommu_insert_device_resv_regions(&dev_resv_regions, head);
		iommu_put_resv_regions(device->dev, &dev_resv_regions);
		if (ret)
			break;
	}
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_get_group_resv_regions);
static ssize_t iommu_group_show_resv_regions(struct iommu_group *group,
					     char *buf)
{
	struct iommu_resv_region *region, *next;
	struct list_head group_resv_regions;
	int offset = 0;

	INIT_LIST_HEAD(&group_resv_regions);
	iommu_get_group_resv_regions(group, &group_resv_regions);

	list_for_each_entry_safe(region, next, &group_resv_regions, list) {
		offset += sysfs_emit_at(buf, offset, "0x%016llx 0x%016llx %s\n",
					(long long)region->start,
					(long long)(region->start +
						    region->length - 1),
					iommu_group_resv_type_string[region->type]);
		kfree(region);
	}

	return offset;
}
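The sysfs_emit_at() pattern used by iommu_group_show_resv_regions() — append a formatted line at the current offset, accumulate the returned byte count across iterations — can be sketched in userspace with snprintf(). buf_emit_at() is an illustrative stand-in, not the kernel helper (which additionally enforces the PAGE_SIZE bound and warns on misuse).

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 4096	/* stands in for PAGE_SIZE */

/*
 * Append one "start end type" line at offset @at and return the
 * number of bytes written, so the caller can accumulate the offset.
 */
static int buf_emit_at(char *buf, int at, unsigned long long start,
		       unsigned long long end, const char *type)
{
	return snprintf(buf + at, BUF_SIZE - at, "0x%016llx 0x%016llx %s\n",
			start, end, type);
}
```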
static ssize_t iommu_group_show_type(struct iommu_group *group,
				     char *buf)
{
	char *type = "unknown";

	mutex_lock(&group->mutex);
	if (group->default_domain) {
		switch (group->default_domain->type) {
		case IOMMU_DOMAIN_BLOCKED:
			type = "blocked";
			break;
		case IOMMU_DOMAIN_IDENTITY:
			type = "identity";
			break;
		case IOMMU_DOMAIN_UNMANAGED:
			type = "unmanaged";
			break;
		case IOMMU_DOMAIN_DMA:
			type = "DMA";
			break;
		case IOMMU_DOMAIN_DMA_FQ:
			type = "DMA-FQ";
			break;
		}
	}
	mutex_unlock(&group->mutex);

	return sysfs_emit(buf, "%s\n", type);
}
static IOMMU_GROUP_ATTR(name, S_IRUGO, iommu_group_show_name, NULL);

static IOMMU_GROUP_ATTR(reserved_regions, 0444,
			iommu_group_show_resv_regions, NULL);

static IOMMU_GROUP_ATTR(type, 0644, iommu_group_show_type,
			iommu_group_store_type);
static void iommu_group_release(struct kobject *kobj)
{
	struct iommu_group *group = to_iommu_group(kobj);

	pr_debug("Releasing group %d\n", group->id);
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using it's own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assign ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Returns the group number
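Taken together, a user level driver provider would typically combine these calls in a pattern like the following sketch (error handling trimmed; my_dev_callback and my_consume_group are hypothetical names for illustration, not part of the API):

```c
/* Sketch of a group consumer, e.g. a VFIO-style driver provider. */
#include <linux/device.h>
#include <linux/iommu.h>

static int my_dev_callback(struct device *dev, void *data)
{
	/* Inspect or claim each device in the group here. */
	return 0;
}

static int my_consume_group(struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);
	int ret;

	if (!group)
		return -ENODEV;	/* device has no iommu group */

	pr_info("device is in group %d\n", iommu_group_id(group));
	ret = iommu_group_for_each_dev(group, NULL, my_dev_callback);

	iommu_group_put(group);	/* drop the reference taken by iommu_group_get() */
	return ret;
}
```

Note that the reference from iommu_group_get() must always be balanced by iommu_group_put(), since group lifetime is driven by kobject refcounting.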
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
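The shape of that wrapper can be sketched as a simple iteration over the group's devices, reusing the existing per-device attach path (an illustrative sketch, not necessarily the exact patch body):

```c
/* Illustrative shape of the group-to-domain attach wrapper. */
#include <linux/device.h>
#include <linux/iommu.h>

static int iommu_group_do_attach_device(struct device *dev, void *data)
{
	struct iommu_domain *domain = data;

	return iommu_attach_device(domain, dev);
}

int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
{
	return iommu_group_for_each_dev(group, domain,
					iommu_group_do_attach_device);
}
```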
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2012-05-31 04:18:53 +08:00
	if (group->iommu_data_release)
		group->iommu_data_release(group->iommu_data);
2022-06-08 10:16:55 +08:00
	ida_free(&iommu_group_ida, group->id);
2023-06-06 08:59:43 +08:00
	/* Domains are freed by iommu_deinit_device() */
	WARN_ON(group->default_domain);
	WARN_ON(group->blocking_domain);
2015-05-29 00:41:29 +08:00
	kfree(group->name);
	kfree(group);
}
2023-02-14 11:25:53 +08:00
static const struct kobj_type iommu_group_ktype = {
	.sysfs_ops = &iommu_group_sysfs_ops,
	.release = iommu_group_release,
};
/**
 * iommu_group_alloc - Allocate a new group
 *
 * This function is called by an iommu driver to allocate a new iommu
 * group. The iommu group represents the minimum granularity of the iommu.
 * Upon successful return, the caller holds a reference to the supplied
 * group in order to hold the group until devices are added. Use
 * iommu_group_put() to release this extra reference count, allowing the
 * group to be automatically reclaimed once it has no devices or external
 * references.
 */
struct iommu_group *iommu_group_alloc(void)
2011-10-22 03:56:05 +08:00
{
	struct iommu_group *group;
	int ret;

	group = kzalloc(sizeof(*group), GFP_KERNEL);
	if (!group)
		return ERR_PTR(-ENOMEM);

	group->kobj.kset = iommu_group_kset;
	mutex_init(&group->mutex);
	INIT_LIST_HEAD(&group->devices);
2020-04-29 21:36:47 +08:00
	INIT_LIST_HEAD(&group->entry);
2022-10-31 08:59:09 +08:00
	xa_init(&group->pasid_array);
2022-06-08 10:16:55 +08:00
	ret = ida_alloc(&iommu_group_ida, GFP_KERNEL);
2016-06-30 03:13:59 +08:00
	if (ret < 0) {
		kfree(group);
2016-06-30 03:13:59 +08:00
		return ERR_PTR(ret);
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using it's own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assign ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Return the group number
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2012-05-31 04:18:53 +08:00
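The sysfs layout described in the commit message means userspace can resolve a device's group ID purely by reading the `iommu_group` symlink and parsing the basename of its target. A minimal userspace sketch of that parsing step (the helper name is illustrative, not a kernel or libc API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Extract the numeric group ID from the target of an iommu_group
 * symlink, e.g. "../../../kernel/iommu_groups/26" -> 26.
 * Returns -1 if the target does not end in a non-negative integer.
 */
static int iommu_group_id_from_link(const char *target)
{
	const char *slash = strrchr(target, '/');
	const char *base = slash ? slash + 1 : target;
	char *end;
	long id = strtol(base, &end, 10);

	if (end == base || *end != '\0' || id < 0)
		return -1;
	return (int)id;
}
```

In practice the target string would come from readlink(2) on `/sys/.../<device>/iommu_group`; only the string handling is shown here.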
}
2016-06-30 03:13:59 +08:00
	group->id = ret;
2011-10-22 03:56:05 +08:00
2012-05-31 04:18:53 +08:00
	ret = kobject_init_and_add(&group->kobj, &iommu_group_ktype,
				   NULL, "%d", group->id);
	if (ret) {
2020-05-28 05:00:19 +08:00
		kobject_put(&group->kobj);
2012-05-31 04:18:53 +08:00
		return ERR_PTR(ret);
	}

	group->devices_kobj = kobject_create_and_add("devices", &group->kobj);
	if (!group->devices_kobj) {
		kobject_put(&group->kobj); /* triggers .release & free */
		return ERR_PTR(-ENOMEM);
	}

	/*
	 * The devices_kobj holds a reference on the group kobject, so
	 * as long as that exists so will the group.  We can therefore
	 * use the devices_kobj for reference counting.
	 */
	kobject_put(&group->kobj);
2017-01-20 04:57:52 +08:00
	ret = iommu_group_create_file(group,
				      &iommu_group_attr_reserved_regions);
	if (ret) {
		kobject_put(group->devices_kobj);
		return ERR_PTR(ret);
	}

	ret = iommu_group_create_file(group, &iommu_group_attr_type);
	if (ret) {
		kobject_put(group->devices_kobj);
		return ERR_PTR(ret);
	}

	pr_debug("Allocated group %d\n", group->id);
2012-05-31 04:18:53 +08:00
	return group;
}
EXPORT_SYMBOL_GPL(iommu_group_alloc);

/**
 * iommu_group_get_iommudata - retrieve iommu_data registered for a group
 * @group: the group
 *
 * iommu drivers can store data in the group for use when doing iommu
 * operations.  This function provides a way to retrieve it.  Caller
 * should hold a group reference.
 */
void *iommu_group_get_iommudata(struct iommu_group *group)
{
	return group->iommu_data;
}
EXPORT_SYMBOL_GPL(iommu_group_get_iommudata);

/**
 * iommu_group_set_iommudata - set iommu_data for a group
 * @group: the group
 * @iommu_data: new data
 * @release: release function for iommu_data
 *
 * iommu drivers can store data in the group for use when doing iommu
 * operations.  This function provides a way to set the data after
 * the group has been allocated.  Caller should hold a group reference.
 */
void iommu_group_set_iommudata(struct iommu_group *group, void *iommu_data,
			       void (*release)(void *iommu_data))
2011-10-22 03:56:05 +08:00
{
2012-05-31 04:18:53 +08:00
	group->iommu_data = iommu_data;
	group->iommu_data_release = release;
}
EXPORT_SYMBOL_GPL(iommu_group_set_iommudata);
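The iommu_data/release pairing stores an opaque pointer together with the destructor that the group's kobject release will eventually invoke. A standalone userspace sketch of the same pattern, with `free()` standing in for whatever cleanup the driver registers (all names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdlib.h>

struct demo_group {
	void *iommu_data;
	void (*iommu_data_release)(void *iommu_data);
};

static void demo_set_iommudata(struct demo_group *group, void *data,
			       void (*release)(void *data))
{
	group->iommu_data = data;
	group->iommu_data_release = release;
}

/* Mirrors what the group's release callback does with the stored data. */
static void demo_release_group(struct demo_group *group)
{
	if (group->iommu_data_release)
		group->iommu_data_release(group->iommu_data);
	group->iommu_data = NULL;
	group->iommu_data_release = NULL;
}

static int released_flag;

static void demo_release(void *data)
{
	released_flag = *(int *)data;
	free(data);
}

/* Attach a payload, tear the group down, report what the release saw. */
static int demo_roundtrip(void)
{
	struct demo_group g = { 0 };
	int *payload = malloc(sizeof(*payload));

	*payload = 42;
	demo_set_iommudata(&g, payload, demo_release);
	demo_release_group(&g);
	return released_flag;
}
```

The point of the pattern is that the owner of the data never has to be around at teardown time: the destructor travels with the pointer.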
2011-10-22 03:56:05 +08:00
2012-05-31 04:18:53 +08:00
/**
 * iommu_group_set_name - set name for a group
 * @group: the group
 * @name: name
 *
 * Allow iommu driver to set a name for a group.  When set it will
 * appear in a name attribute file under the group in sysfs.
 */
int iommu_group_set_name(struct iommu_group *group, const char *name)
{
	int ret;

	if (group->name) {
		iommu_group_remove_file(group, &iommu_group_attr_name);
		kfree(group->name);
		group->name = NULL;
		if (!name)
			return 0;
	}

	group->name = kstrdup(name, GFP_KERNEL);
	if (!group->name)
		return -ENOMEM;

	ret = iommu_group_create_file(group, &iommu_group_attr_name);
	if (ret) {
		kfree(group->name);
		group->name = NULL;
		return ret;
	}

	return 0;
}
2012-05-31 04:18:53 +08:00
EXPORT_SYMBOL_GPL(iommu_group_set_name);
2011-10-22 03:56:05 +08:00
2023-05-11 12:42:12 +08:00
static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
					       struct device *dev)
{
	struct iommu_resv_region *entry;
	struct list_head mappings;
	unsigned long pg_size;
	int ret = 0;

	pg_size = domain->pgsize_bitmap ? 1UL << __ffs(domain->pgsize_bitmap) : 0;
	INIT_LIST_HEAD(&mappings);

	if (WARN_ON_ONCE(iommu_is_dma_domain(domain) && !pg_size))
		return -EINVAL;

	iommu_get_resv_regions(dev, &mappings);

	/* We need to consider overlapping regions for different devices */
	list_for_each_entry(entry, &mappings, list) {
		dma_addr_t start, end, addr;
		size_t map_size = 0;

		if (entry->type == IOMMU_RESV_DIRECT)
			dev->iommu->require_direct = 1;

		if ((entry->type != IOMMU_RESV_DIRECT &&
		     entry->type != IOMMU_RESV_DIRECT_RELAXABLE) ||
		    !iommu_is_dma_domain(domain))
			continue;

		start = ALIGN(entry->start, pg_size);
		end = ALIGN(entry->start + entry->length, pg_size);

		for (addr = start; addr <= end; addr += pg_size) {
			phys_addr_t phys_addr;

			if (addr == end)
				goto map_end;

			phys_addr = iommu_iova_to_phys(domain, addr);
			if (!phys_addr) {
				map_size += pg_size;
				continue;
			}

map_end:
			if (map_size) {
				ret = iommu_map(domain, addr - map_size,
						addr - map_size, map_size,
						entry->prot, GFP_KERNEL);
				if (ret)
					goto out;
				map_size = 0;
			}
		}
	}

	if (!list_empty(&mappings) && iommu_is_dma_domain(domain))
		iommu_flush_iotlb_all(domain);

out:
	iommu_put_resv_regions(dev, &mappings);

	return ret;
}

/* This is undone by __iommu_group_free_device() */
static struct group_device *iommu_group_alloc_device(struct iommu_group *group,
						     struct device *dev)
{
	int ret, i = 0;
	struct group_device *device;

	device = kzalloc(sizeof(*device), GFP_KERNEL);
	if (!device)
		return ERR_PTR(-ENOMEM);

	device->dev = dev;

	ret = sysfs_create_link(&dev->kobj, &group->kobj, "iommu_group");
	if (ret)
		goto err_free_device;

	device->name = kasprintf(GFP_KERNEL, "%s", kobject_name(&dev->kobj));
rename:
	if (!device->name) {
		ret = -ENOMEM;
		goto err_remove_link;
	}

	ret = sysfs_create_link_nowarn(group->devices_kobj,
				       &dev->kobj, device->name);
	if (ret) {
		if (ret == -EEXIST && i >= 0) {
			/*
			 * Account for the slim chance of collision
			 * and append an instance to the name.
			 */
			kfree(device->name);
			device->name = kasprintf(GFP_KERNEL, "%s.%d",
						 kobject_name(&dev->kobj), i++);
			goto rename;
		}
2017-01-16 20:58:07 +08:00
		goto err_free_name;
	}
2013-08-16 01:59:24 +08:00
	trace_add_device_to_group(group->id, dev);
2015-05-29 00:41:25 +08:00
2019-02-09 06:05:45 +08:00
	dev_info(dev, "Adding to iommu group %d\n", group->id);
2015-05-29 00:41:25 +08:00
2023-06-06 08:59:47 +08:00
	return device;
2017-01-16 20:58:07 +08:00
err_free_name:
	kfree(device->name);
err_remove_link:
	sysfs_remove_link(&dev->kobj, "iommu_group");
err_free_device:
	kfree(device);
2019-02-09 06:05:45 +08:00
	dev_err(dev, "Failed to add to iommu group %d: %d\n", group->id, ret);
2023-06-06 08:59:47 +08:00
	return ERR_PTR(ret);
}

/**
 * iommu_group_add_device - add a device to an iommu group
 * @group: the group into which to add the device (reference should be held)
 * @dev: the device
 *
 * This function is called by an iommu driver to add a device into a
 * group. Adding a device increments the group reference count.
 */
int iommu_group_add_device(struct iommu_group *group, struct device *dev)
{
	struct group_device *gdev;

	gdev = iommu_group_alloc_device(group, dev);
	if (IS_ERR(gdev))
		return PTR_ERR(gdev);

	iommu_group_ref_get(group);
	dev->iommu_group = group;

	mutex_lock(&group->mutex);
	list_add_tail(&gdev->list, &group->devices);
	mutex_unlock(&group->mutex);
	return 0;
2011-10-22 03:56:05 +08:00
}
EXPORT_SYMBOL_GPL(iommu_group_add_device);
2011-10-22 03:56:05 +08:00
/**
 * iommu_group_remove_device - remove a device from its current group
 * @dev: device to be removed
 *
 * This function is called by an iommu driver to remove the device from
 * its current group. This decrements the iommu group reference count.
 */
void iommu_group_remove_device(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;
2021-07-31 15:47:37 +08:00

	if (!group)
		return;
2019-02-09 06:05:45 +08:00
	dev_info(dev, "Removing from iommu group %d\n", group->id);
2015-05-29 00:41:25 +08:00
2023-06-06 08:59:42 +08:00
	__iommu_group_remove_device(dev);
}
EXPORT_SYMBOL_GPL(iommu_group_remove_device);
2024-02-05 19:56:07 +08:00
#if IS_ENABLED(CONFIG_LOCKDEP) && IS_ENABLED(CONFIG_IOMMU_API)
/**
 * iommu_group_mutex_assert - Check device group mutex lock
 * @dev: the device that has the group param set
 *
 * This function is called by an iommu driver to check whether it holds
 * the group mutex lock for the given device or not.
 *
 * Note that this function must be called after the device group param is set.
 */
void iommu_group_mutex_assert(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;

	lockdep_assert_held(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_group_mutex_assert);
#endif
2023-11-22 02:03:57 +08:00
static struct device *iommu_group_first_dev(struct iommu_group *group)
{
	lockdep_assert_held(&group->mutex);

	return list_first_entry(&group->devices, struct group_device, list)->dev;
}
2022-01-28 18:44:33 +08:00
/**
 * iommu_group_for_each_dev - iterate over each device in the group
 * @group: the group
 * @data: caller opaque data to be passed to the callback function
 * @fn: caller-supplied callback function
 *
 * This function is called by group users to iterate over group devices.
 * Callers should hold a reference count to the group during the callback.
 * The group->mutex is held across callbacks, which will block calls to
 * iommu_group_add/remove_device.
 */
2015-05-29 00:41:31 +08:00
int iommu_group_for_each_dev(struct iommu_group *group, void *data,
			     int (*fn)(struct device *, void *))
{
2023-05-11 12:42:14 +08:00
	struct group_device *device;
	int ret = 0;
2015-05-29 00:41:31 +08:00
	mutex_lock(&group->mutex);
2023-05-11 12:42:14 +08:00
	for_each_group_device(group, device) {
		ret = fn(device->dev, data);
		if (ret)
			break;
	}
	mutex_unlock(&group->mutex);
2015-05-29 00:41:31 +08:00
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_for_each_dev);

/**
 * iommu_group_get - Return the group for a device and increment reference
 * @dev: get the group that this device belongs to
 *
 * This function is called by iommu drivers and users to get the group
 * for the specified device. If found, the group is returned and the group
 * reference is incremented, else NULL.
 */
struct iommu_group *iommu_group_get(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;

	if (group)
		kobject_get(group->devices_kobj);

	return group;
}
EXPORT_SYMBOL_GPL(iommu_group_get);
/**
 * iommu_group_ref_get - Increment reference on a group
 * @group: the group to use, must not be NULL
 *
 * This function is called by iommu drivers to take additional references on an
 * existing group. Returns the given group for convenience.
 */
struct iommu_group *iommu_group_ref_get(struct iommu_group *group)
{
	kobject_get(group->devices_kobj);
	return group;
}
EXPORT_SYMBOL_GPL(iommu_group_ref_get);
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using its own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assigned ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Return the group number
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2012-05-31 04:18:53 +08:00
/**
 * iommu_group_put - Decrement group reference
 * @group: the group to use
 *
 * This function is called by iommu drivers and users to release the
 * iommu group. Once the reference count is zero, the group is released.
 */
void iommu_group_put(struct iommu_group *group)
{
	if (group)
		kobject_put(group->devices_kobj);
}
EXPORT_SYMBOL_GPL(iommu_group_put);

/**
 * iommu_group_id - Return ID for a group
 * @group: the group to ID
 *
 * Return the unique ID for the group matching the sysfs group number.
 */
int iommu_group_id(struct iommu_group *group)
{
	return group->id;
}
EXPORT_SYMBOL_GPL(iommu_group_id);
static struct iommu_group *get_pci_alias_group(struct pci_dev *pdev,
					       unsigned long *devfns);

/*
 * To consider a PCI device isolated, we require ACS to support Source
 * Validation, Request Redirection, Completer Redirection, and Upstream
 * Forwarding.  This effectively means that devices cannot spoof their
 * requester ID, requests and completions cannot be redirected, and all
 * transactions are forwarded upstream, even as it passes through a
 * bridge where the target device is downstream.
 */
#define REQ_ACS_FLAGS	(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)
/*
 * For multifunction devices which are not isolated from each other, find
 * all the other non-isolated functions and look for existing groups.  For
 * each function, we also need to look for aliases to or from other devices
 * that may already have a group.
 */
static struct iommu_group *get_pci_function_alias_group(struct pci_dev *pdev,
							unsigned long *devfns)
{
	struct pci_dev *tmp = NULL;
	struct iommu_group *group;

	if (!pdev->multifunction || pci_acs_enabled(pdev, REQ_ACS_FLAGS))
		return NULL;

	for_each_pci_dev(tmp) {
		if (tmp == pdev || tmp->bus != pdev->bus ||
		    PCI_SLOT(tmp->devfn) != PCI_SLOT(pdev->devfn) ||
		    pci_acs_enabled(tmp, REQ_ACS_FLAGS))
			continue;

		group = get_pci_alias_group(tmp, devfns);
		if (group) {
			pci_dev_put(tmp);
			return group;
		}
	}

	return NULL;
}
/*
 * Look for aliases to or from the given device for existing groups.  DMA
 * aliases are only supported on the same bus, therefore the search
 * space is quite small (especially since we're really only looking at pcie
 * devices, and therefore only expect multiple slots on the root complex or
 * downstream switch ports).  It's conceivable though that a pair of
 * multifunction devices could have aliases between them that would cause a
 * loop.  To prevent this, we use a bitmap to track where we've been.
 */
static struct iommu_group *get_pci_alias_group(struct pci_dev *pdev,
					       unsigned long *devfns)
{
	struct pci_dev *tmp = NULL;
	struct iommu_group *group;

	if (test_and_set_bit(pdev->devfn & 0xff, devfns))
		return NULL;

	group = iommu_group_get(&pdev->dev);
	if (group)
		return group;

	for_each_pci_dev(tmp) {
		if (tmp == pdev || tmp->bus != pdev->bus)
			continue;

		/* We alias them or they alias us */
		if (pci_devs_are_dma_aliases(pdev, tmp)) {
			group = get_pci_alias_group(tmp, devfns);
			if (group) {
				pci_dev_put(tmp);
				return group;
			}

			group = get_pci_function_alias_group(tmp, devfns);
			if (group) {
				pci_dev_put(tmp);
				return group;
			}
		}
	}

	return NULL;
}
struct group_for_pci_data {
	struct pci_dev *pdev;
	struct iommu_group *group;
};

/*
 * DMA alias iterator callback, return the last seen device.  Stop and return
 * the IOMMU group if we find one along the way.
 */
static int get_pci_alias_or_group(struct pci_dev *pdev, u16 alias, void *opaque)
{
	struct group_for_pci_data *data = opaque;

	data->pdev = pdev;
	data->group = iommu_group_get(&pdev->dev);

	return data->group != NULL;
}
/*
* Generic device_group call - back function . It just allocates one
* iommu - group per device .
*/
struct iommu_group * generic_device_group ( struct device * dev )
{
2017-06-28 18:45:31 +08:00
return iommu_group_alloc ( ) ;
2015-10-22 05:51:38 +08:00
}
2019-12-19 20:03:37 +08:00
EXPORT_SYMBOL_GPL ( generic_device_group ) ;
2015-10-22 05:51:38 +08:00
/*
 * Generic device_group call-back function. It just allocates one
 * iommu-group per iommu driver instance shared by every device
 * probed by that iommu driver.
 */
struct iommu_group *generic_single_device_group(struct device *dev)
{
	struct iommu_device *iommu = dev->iommu->iommu_dev;

	if (!iommu->singleton_group) {
		struct iommu_group *group;

		group = iommu_group_alloc();
		if (IS_ERR(group))
			return group;
		iommu->singleton_group = group;
	}
	return iommu_group_ref_get(iommu->singleton_group);
}
EXPORT_SYMBOL_GPL(generic_single_device_group);
/*
 * Use standard PCI bus topology, isolation features, and DMA alias quirks
 * to find or create an IOMMU group for a device.
 */
struct iommu_group *pci_device_group(struct device *dev)
{
	struct pci_dev *pdev = to_pci_dev(dev);
	struct group_for_pci_data data;
	struct pci_bus *bus;
	struct iommu_group *group = NULL;
	u64 devfns[4] = { 0 };

	if (WARN_ON(!dev_is_pci(dev)))
		return ERR_PTR(-EINVAL);

	/*
	 * Find the upstream DMA alias for the device.  A device must not
	 * be aliased due to topology in order to have its own IOMMU group.
	 * If we find an alias along the way that already belongs to a
	 * group, use it.
	 */
	if (pci_for_each_dma_alias(pdev, get_pci_alias_or_group, &data))
		return data.group;

	pdev = data.pdev;

	/*
	 * Continue upstream from the point of minimum IOMMU granularity
	 * due to aliases to the point where devices are protected from
	 * peer-to-peer DMA by PCI ACS.  Again, if we find an existing
	 * group, use it.
	 */
	for (bus = pdev->bus; !pci_is_root_bus(bus); bus = bus->parent) {
		if (!bus->self)
			continue;

		if (pci_acs_path_enabled(bus->self, NULL, REQ_ACS_FLAGS))
			break;

		pdev = bus->self;

		group = iommu_group_get(&pdev->dev);
		if (group)
			return group;
	}

	/*
	 * Look for existing groups on device aliases.  If we alias another
	 * device or another device aliases us, use the same group.
	 */
	group = get_pci_alias_group(pdev, (unsigned long *)devfns);
	if (group)
		return group;

	/*
	 * Look for existing groups on non-isolated functions on the same
	 * slot and aliases of those functions, if any.  No need to clear
	 * the search bitmap, the tested devfns are still valid.
	 */
	group = get_pci_function_alias_group(pdev, (unsigned long *)devfns);
	if (group)
		return group;

	/* No shared group found, allocate new */
	return iommu_group_alloc();
}
EXPORT_SYMBOL_GPL(pci_device_group);
/* Get the IOMMU group for device on fsl-mc bus */
struct iommu_group *fsl_mc_device_group(struct device *dev)
{
	struct device *cont_dev = fsl_mc_cont_dev(dev);
	struct iommu_group *group;

	group = iommu_group_get(cont_dev);
	if (!group)
		group = iommu_group_alloc();
	return group;
}
EXPORT_SYMBOL_GPL(fsl_mc_device_group);
static struct iommu_domain *
__iommu_group_alloc_default_domain(struct iommu_group *group, int req_type)
{
	if (group->default_domain && group->default_domain->type == req_type)
		return group->default_domain;
	return __iommu_group_domain_alloc(group, req_type);
}
/*
 * req_type of 0 means "auto" which means to select a domain based on
 * iommu_def_domain_type or what the driver actually supports.
 */
static struct iommu_domain *
iommu_group_alloc_default_domain(struct iommu_group *group, int req_type)
{
	const struct iommu_ops *ops = dev_iommu_ops(iommu_group_first_dev(group));
	struct iommu_domain *dom;

	lockdep_assert_held(&group->mutex);

	/*
	 * Allow legacy drivers to specify the domain that will be the default
	 * domain. This should always be either an IDENTITY/BLOCKED/PLATFORM
	 * domain. Do not use in new drivers.
	 */
	if (ops->default_domain) {
		if (req_type != ops->default_domain->type)
			return ERR_PTR(-EINVAL);
		return ops->default_domain;
	}

	if (req_type)
		return __iommu_group_alloc_default_domain(group, req_type);

	/* The driver gave no guidance on what type to use, try the default */
	dom = __iommu_group_alloc_default_domain(group, iommu_def_domain_type);
	if (!IS_ERR(dom))
		return dom;

	/* Otherwise IDENTITY and DMA_FQ defaults will try DMA */
	if (iommu_def_domain_type == IOMMU_DOMAIN_DMA)
		return ERR_PTR(-EINVAL);
	dom = __iommu_group_alloc_default_domain(group, IOMMU_DOMAIN_DMA);
	if (IS_ERR(dom))
		return dom;

	pr_warn("Failed to allocate default IOMMU domain of type %u for group %s - Falling back to IOMMU_DOMAIN_DMA\n",
		iommu_def_domain_type, group->name);
	return dom;
}
struct iommu_domain *iommu_group_default_domain(struct iommu_group *group)
{
	return group->default_domain;
}

static int probe_iommu_group(struct device *dev, void *data)
{
	struct list_head *group_list = data;
	int ret;

	mutex_lock(&iommu_probe_device_lock);
	ret = __iommu_probe_device(dev, group_list);
	mutex_unlock(&iommu_probe_device_lock);
	if (ret == -ENODEV)
		ret = 0;

	return ret;
}
static int iommu_bus_notifier(struct notifier_block *nb,
			      unsigned long action, void *data)
{
	struct device *dev = data;

	if (action == BUS_NOTIFY_ADD_DEVICE) {
		int ret;

		ret = iommu_probe_device(dev);
		return (ret) ? NOTIFY_DONE : NOTIFY_OK;
	} else if (action == BUS_NOTIFY_REMOVED_DEVICE) {
		iommu_release_device(dev);
		return NOTIFY_OK;
	}

	return 0;
}
/*
 * Combine the driver's chosen def_domain_type across all the devices in a
 * group. Drivers must give a consistent result.
 */
static int iommu_get_def_domain_type(struct iommu_group *group,
				     struct device *dev, int cur_type)
{
	const struct iommu_ops *ops = dev_iommu_ops(dev);
	int type;

	if (ops->default_domain) {
		/*
		 * Drivers that declare a global static default_domain will
		 * always choose that.
		 */
		type = ops->default_domain->type;
	} else {
		if (ops->def_domain_type)
			type = ops->def_domain_type(dev);
		else
			return cur_type;
	}
	if (!type || cur_type == type)
		return cur_type;
	if (!cur_type)
		return type;

	dev_err_ratelimited(
		dev,
		"IOMMU driver error, requesting conflicting def_domain_type, %s and %s, for devices in group %u.\n",
		iommu_domain_type_str(cur_type), iommu_domain_type_str(type),
		group->id);

	/*
	 * Try to recover, drivers are allowed to force IDENTITY or DMA,
	 * IDENTITY takes precedence.
	 */
	if (type == IOMMU_DOMAIN_IDENTITY)
		return type;
	return cur_type;
}
/*
 * A target_type of 0 will select the best domain type. 0 can be returned in
 * this case meaning the global default should be used.
 */
static int iommu_get_default_domain_type(struct iommu_group *group,
					 int target_type)
{
	struct device *untrusted = NULL;
	struct group_device *gdev;
	int driver_type = 0;

	lockdep_assert_held(&group->mutex);

	/*
	 * ARM32 drivers supporting CONFIG_ARM_DMA_USE_IOMMU can declare an
	 * identity_domain and it will automatically become their default
	 * domain. Later on ARM_DMA_USE_IOMMU will install its UNMANAGED domain.
	 * Override the selection to IDENTITY.
	 */
	if (IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)) {
		static_assert(!(IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU) &&
				IS_ENABLED(CONFIG_IOMMU_DMA)));
		driver_type = IOMMU_DOMAIN_IDENTITY;
	}

	for_each_group_device(group, gdev) {
		driver_type = iommu_get_def_domain_type(group, gdev->dev,
							driver_type);

		if (dev_is_pci(gdev->dev) && to_pci_dev(gdev->dev)->untrusted) {
			/*
			 * No ARM32 using systems will set untrusted, it cannot
			 * work.
			 */
			if (WARN_ON(IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)))
				return -1;
			untrusted = gdev->dev;
		}
	}

	/*
	 * If the common dma ops are not selected in kconfig then we cannot use
	 * IOMMU_DOMAIN_DMA at all. Force IDENTITY if nothing else has been
	 * selected.
	 */
	if (!IS_ENABLED(CONFIG_IOMMU_DMA)) {
		if (WARN_ON(driver_type == IOMMU_DOMAIN_DMA))
			return -1;
		if (!driver_type)
			driver_type = IOMMU_DOMAIN_IDENTITY;
	}

	if (untrusted) {
		if (driver_type && driver_type != IOMMU_DOMAIN_DMA) {
			dev_err_ratelimited(
				untrusted,
				"Device is not trusted, but driver is overriding group %u to %s, refusing to probe.\n",
				group->id, iommu_domain_type_str(driver_type));
			return -1;
		}
		driver_type = IOMMU_DOMAIN_DMA;
	}

	if (target_type) {
		if (driver_type && target_type != driver_type)
			return -1;
		return target_type;
	}
	return driver_type;
}
static void iommu_group_do_probe_finalize(struct device *dev)
{
	const struct iommu_ops *ops = dev_iommu_ops(dev);

	if (ops->probe_finalize)
		ops->probe_finalize(dev);
}
int bus_iommu_probe(const struct bus_type *bus)
{
	struct iommu_group *group, *next;
	LIST_HEAD(group_list);
	int ret;

	ret = bus_for_each_dev(bus, NULL, &group_list, probe_iommu_group);
	if (ret)
		return ret;

	list_for_each_entry_safe(group, next, &group_list, entry) {
		struct group_device *gdev;

		mutex_lock(&group->mutex);

		/* Remove item from the list */
		list_del_init(&group->entry);

		/*
		 * We go to the trouble of deferred default domain creation so
		 * that the cross-group default domain type and the setup of the
		 * IOMMU_RESV_DIRECT will work correctly in non-hotplug
		 * scenarios.
		 */
		ret = iommu_setup_default_domain(group, 0);
		if (ret) {
			mutex_unlock(&group->mutex);
			return ret;
		}
		mutex_unlock(&group->mutex);

		/*
		 * FIXME: Mis-locked because the ops->probe_finalize() call-back
		 * of some IOMMU drivers calls arm_iommu_attach_device() which
		 * in-turn might call back into IOMMU core code, where it tries
		 * to take group->mutex, resulting in a deadlock.
		 */
		for_each_group_device(group, gdev)
			iommu_group_do_probe_finalize(gdev->dev);
	}

	return 0;
}
/**
 * iommu_present() - make platform-specific assumptions about an IOMMU
 * @bus: bus to check
 *
 * Do not use this function. You want device_iommu_mapped() instead.
 *
 * Return: true if some IOMMU is present and aware of devices on the given bus;
 * in general it may not be the only IOMMU, and it may not have anything to do
 * with whatever device you are ultimately interested in.
 */
bool iommu_present(const struct bus_type *bus)
{
	bool ret = false;

	for (int i = 0; i < ARRAY_SIZE(iommu_buses); i++) {
		if (iommu_buses[i] == bus) {
			spin_lock(&iommu_device_lock);
			ret = !list_empty(&iommu_device_list);
			spin_unlock(&iommu_device_lock);
		}
	}
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_present);
/**
 * device_iommu_capable() - check for a general IOMMU capability
 * @dev: device to which the capability would be relevant, if available
 * @cap: IOMMU capability
 *
 * Return: true if an IOMMU is present and supports the given capability
 * for the given device, otherwise false.
 */
bool device_iommu_capable(struct device *dev, enum iommu_cap cap)
{
	const struct iommu_ops *ops;

	if (!dev_has_iommu(dev))
		return false;

	ops = dev_iommu_ops(dev);
	if (!ops->capable)
		return false;

	return ops->capable(dev, cap);
}
EXPORT_SYMBOL_GPL(device_iommu_capable);
/**
 * iommu_group_has_isolated_msi() - Compute msi_device_has_isolated_msi()
 *       for a group
 * @group: Group to query
 *
 * IOMMU groups should not have differing values of
 * msi_device_has_isolated_msi() for devices in a group. However nothing
 * directly prevents this, so ensure mistakes don't result in isolation failures
 * by checking that all the devices are the same.
 */
bool iommu_group_has_isolated_msi(struct iommu_group *group)
{
	struct group_device *group_dev;
	bool ret = true;

	mutex_lock(&group->mutex);
	for_each_group_device(group, group_dev)
		ret &= msi_device_has_isolated_msi(group_dev->dev);
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_has_isolated_msi);
/**
 * iommu_set_fault_handler() - set a fault handler for an iommu domain
 * @domain: iommu domain
 * @handler: fault handler
 * @token: user data, will be passed back to the fault handler
 *
 * This function should be used by IOMMU users which want to be notified
 * whenever an IOMMU fault happens.
 *
 * The fault handler itself should return 0 on success, and an appropriate
 * error code otherwise.
 */
void iommu_set_fault_handler(struct iommu_domain *domain,
			     iommu_fault_handler_t handler,
			     void *token)
{
	BUG_ON(!domain);

	domain->handler = handler;
	domain->handler_token = token;
}
EXPORT_SYMBOL_GPL(iommu_set_fault_handler);
static struct iommu_domain *__iommu_domain_alloc(const struct iommu_ops *ops,
						 struct device *dev,
						 unsigned int type)
{
	struct iommu_domain *domain;
	unsigned int alloc_type = type & IOMMU_DOMAIN_ALLOC_FLAGS;

	if (alloc_type == IOMMU_DOMAIN_IDENTITY && ops->identity_domain)
		return ops->identity_domain;
	else if (alloc_type == IOMMU_DOMAIN_BLOCKED && ops->blocked_domain)
		return ops->blocked_domain;
	else if (type & __IOMMU_DOMAIN_PAGING && ops->domain_alloc_paging)
		domain = ops->domain_alloc_paging(dev);
	else if (ops->domain_alloc)
		domain = ops->domain_alloc(alloc_type);
	else
		return ERR_PTR(-EOPNOTSUPP);

	/*
	 * Many domain_alloc ops now return ERR_PTR, make things easier for the
	 * driver by accepting ERR_PTR from all domain_alloc ops instead of
	 * having two rules.
	 */
	if (IS_ERR(domain))
		return domain;
	if (!domain)
		return ERR_PTR(-ENOMEM);

	domain->type = type;
	domain->owner = ops;
	/*
	 * If not already set, assume all sizes by default; the driver
	 * may override this later
	 */
	if (!domain->pgsize_bitmap)
		domain->pgsize_bitmap = ops->pgsize_bitmap;

	if (!domain->ops)
		domain->ops = ops->default_domain_ops;

	if (iommu_is_dma_domain(domain)) {
		int rc;

		rc = iommu_get_dma_cookie(domain);
		if (rc) {
			iommu_domain_free(domain);
			return ERR_PTR(rc);
		}
	}
	return domain;
}
static struct iommu_domain *
__iommu_group_domain_alloc(struct iommu_group *group, unsigned int type)
{
	struct device *dev = iommu_group_first_dev(group);

	return __iommu_domain_alloc(dev_iommu_ops(dev), dev, type);
}

static int __iommu_domain_alloc_dev(struct device *dev, void *data)
{
	const struct iommu_ops **ops = data;

	if (!dev_has_iommu(dev))
		return 0;

	if (WARN_ONCE(*ops && *ops != dev_iommu_ops(dev),
		      "Multiple IOMMU drivers present for bus %s, which the public IOMMU API can't fully support yet. You will still need to disable one or more for this to work, sorry!\n",
		      dev_bus_name(dev)))
		return -EBUSY;

	*ops = dev_iommu_ops(dev);
	return 0;
}
struct iommu_domain *iommu_domain_alloc(const struct bus_type *bus)
{
	const struct iommu_ops *ops = NULL;
	int err = bus_for_each_dev(bus, NULL, &ops, __iommu_domain_alloc_dev);
	struct iommu_domain *domain;

	if (err || !ops)
		return NULL;

	domain = __iommu_domain_alloc(ops, NULL, IOMMU_DOMAIN_UNMANAGED);
	if (IS_ERR(domain))
		return NULL;
	return domain;
}
EXPORT_SYMBOL_GPL(iommu_domain_alloc);
void iommu_domain_free(struct iommu_domain *domain)
{
	if (domain->type == IOMMU_DOMAIN_SVA)
		mmdrop(domain->mm);
	iommu_put_dma_cookie(domain);
	if (domain->ops->free)
		domain->ops->free(domain);
}
EXPORT_SYMBOL_GPL(iommu_domain_free);
/*
 * Put the group's domain back to the appropriate core-owned domain - either the
 * standard kernel-mode DMA configuration or an all-DMA-blocked domain.
 */
static void __iommu_group_set_core_domain(struct iommu_group *group)
{
	struct iommu_domain *new_domain;

	if (group->owner)
		new_domain = group->blocking_domain;
	else
		new_domain = group->default_domain;

	__iommu_group_set_domain_nofail(group, new_domain);
}
static int __iommu_attach_device(struct iommu_domain *domain,
				 struct device *dev)
{
	int ret;

	if (unlikely(domain->ops->attach_dev == NULL))
		return -ENODEV;

	ret = domain->ops->attach_dev(domain, dev);
	if (ret)
		return ret;

	dev->iommu->attach_deferred = 0;
	trace_attach_device_to_domain(dev);
	return 0;
}
/**
 * iommu_attach_device - Attach an IOMMU domain to a device
 * @domain: IOMMU domain to attach
 * @dev: Device that will be attached
 *
 * Returns 0 on success and error code on failure
 *
 * Note that EINVAL can be treated as a soft failure, indicating
 * that certain configuration of the domain is incompatible with
 * the device. In this case attaching a different domain to the
 * device may succeed.
 */
int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
	int ret;

	if (!group)
		return -ENODEV;

	/*
	 * Lock the group to make sure the device-count doesn't
	 * change while we are attaching
	 */
	mutex_lock(&group->mutex);
	ret = -EINVAL;
	if (list_count_nodes(&group->devices) != 1)
		goto out_unlock;

	ret = __iommu_attach_group(domain, group);

out_unlock:
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_attach_device);
int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
{
	if (dev->iommu && dev->iommu->attach_deferred)
		return __iommu_attach_device(domain, dev);

	return 0;
}
void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;

	if (!group)
		return;

	mutex_lock(&group->mutex);
	if (WARN_ON(domain != group->domain) ||
	    WARN_ON(list_count_nodes(&group->devices) != 1))
		goto out_unlock;
	__iommu_group_set_core_domain(group);

out_unlock:
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_detach_device);
struct iommu_domain *iommu_get_domain_for_dev(struct device *dev)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;

	if (!group)
		return NULL;

	return group->domain;
}
EXPORT_SYMBOL_GPL(iommu_get_domain_for_dev);
/*
 * For IOMMU_DOMAIN_DMA implementations which already provide their own
 * guarantees that the group and its default domain are valid and correct.
 */
struct iommu_domain *iommu_get_dma_domain(struct device *dev)
{
	return dev->iommu_group->default_domain;
}
static int __iommu_attach_group(struct iommu_domain *domain,
				struct iommu_group *group)
{
	struct device *dev;

	if (group->domain && group->domain != group->default_domain &&
	    group->domain != group->blocking_domain)
		return -EBUSY;

	dev = iommu_group_first_dev(group);
	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
		return -EINVAL;

	return __iommu_group_set_domain(group, domain);
}
/**
 * iommu_attach_group - Attach an IOMMU domain to an IOMMU group
 * @domain: IOMMU domain to attach
 * @group: IOMMU group that will be attached
 *
 * Returns 0 on success and error code on failure
 *
 * Note that EINVAL can be treated as a soft failure, indicating
 * that certain configuration of the domain is incompatible with
 * the group. In this case attaching a different domain to the
 * group may succeed.
 */
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using its own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assigned ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Return the group number
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2012-05-31 04:18:53 +08:00
int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
{
2015-05-29 00:41:31 +08:00
	int ret;

	mutex_lock(&group->mutex);
	ret = __iommu_attach_group(domain, group);
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_attach_group);
2023-07-18 02:12:09 +08:00
/**
 * iommu_group_replace_domain - replace the domain that a group is attached to
 * @new_domain: new IOMMU domain to replace with
 * @group: IOMMU group that will be attached to the new domain
 *
 * This API allows the group to switch domains without being forced to go to
 * the blocking domain in-between.
 *
 * If the currently attached domain is a core domain (e.g. a default_domain),
 * it will act just like the iommu_attach_group().
 */
int iommu_group_replace_domain(struct iommu_group *group,
			       struct iommu_domain *new_domain)
{
	int ret;

	if (!new_domain)
		return -EINVAL;

	mutex_lock(&group->mutex);
	ret = __iommu_group_set_domain(group, new_domain);
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_NS_GPL(iommu_group_replace_domain, IOMMUFD_INTERNAL);
2023-05-11 12:42:01 +08:00
static int __iommu_device_set_domain(struct iommu_group *group,
				     struct device *dev,
				     struct iommu_domain *new_domain,
				     unsigned int flags)
{
2023-05-11 12:42:01 +08:00
	int ret;
2023-08-09 20:48:02 +08:00
	/*
	 * If the device requires IOMMU_RESV_DIRECT then we cannot allow
	 * the blocking domain to be attached as it does not contain the
	 * required 1:1 mapping. This test effectively excludes the device
	 * being used with iommu_group_claim_dma_owner() which will block
	 * vfio and iommufd as well.
	 */
	if (dev->iommu->require_direct &&
	    (new_domain->type == IOMMU_DOMAIN_BLOCKED ||
	     new_domain == group->blocking_domain)) {
		dev_warn(dev,
			 "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
		return -EINVAL;
	}
2023-05-11 12:42:04 +08:00
	if (dev->iommu->attach_deferred) {
		if (new_domain == group->default_domain)
			return 0;
		dev->iommu->attach_deferred = 0;
	}
2023-05-11 12:42:01 +08:00
	ret = __iommu_attach_device(new_domain, dev);
	if (ret) {
		/*
		 * If we have a blocking domain then try to attach that in
		 * hopes of avoiding a UAF. Modern drivers should implement
		 * blocking domains as global statics that cannot fail.
		 */
		if ((flags & IOMMU_SET_DOMAIN_MUST_SUCCEED) &&
		    group->blocking_domain &&
		    group->blocking_domain != new_domain)
			__iommu_attach_device(group->blocking_domain, dev);
		return ret;
	}
	return 0;
}
2023-05-11 12:42:01 +08:00
/*
 * If 0 is returned the group's domain is new_domain. If an error is returned
 * then the group's domain will be set back to the existing domain unless
 * IOMMU_SET_DOMAIN_MUST_SUCCEED, otherwise an error is returned and the
 * group's domain is left inconsistent. It is a driver bug to fail attach
 * with a previously good domain. We try to avoid a kernel UAF because of
 * this.
 *
 * IOMMU groups are really the natural working unit of the IOMMU, but the
 * IOMMU API works on domains and devices. Bridge that gap by iterating over
 * the devices in a group. Ideally we'd have a single device which represents
 * the requestor ID of the group, but we also allow IOMMU drivers to create
 * policy-defined minimum sets, where the physical hardware may be able to
 * distinguish members, but we wish to group them at a higher level (ex.
 * untrusted multi-function PCI devices). Thus we attach each device.
 */
static int __iommu_group_set_domain_internal(struct iommu_group *group,
					     struct iommu_domain *new_domain,
					     unsigned int flags)
2015-05-29 00:41:31 +08:00
{
2023-05-11 12:42:01 +08:00
	struct group_device *last_gdev;
	struct group_device *gdev;
	int result;
2015-05-29 00:41:31 +08:00
	int ret;

2023-05-11 12:42:01 +08:00
	lockdep_assert_held(&group->mutex);

2022-05-10 00:19:19 +08:00
	if (group->domain == new_domain)
		return 0;

2023-09-13 21:43:48 +08:00
	if (WARN_ON(!new_domain))
		return -EINVAL;

2022-05-10 00:19:19 +08:00
	/*
	 * Changing the domain is done by calling attach_dev() on the new
	 * domain. This switch does not have to be atomic and DMA can be
	 * discarded during the transition. DMA must only be able to access
	 * either new_domain or group->domain, never something else.
	 */
2023-05-11 12:42:01 +08:00
	result = 0;
	for_each_group_device(group, gdev) {
		ret = __iommu_device_set_domain(group, gdev->dev, new_domain,
						flags);
		if (ret) {
			result = ret;
			/*
			 * Keep trying the other devices in the group. If a
			 * driver fails attach to an otherwise good domain,
			 * and does not support blocking domains, it should
			 * at least drop its reference on the current domain
			 * so we don't UAF.
			 */
			if (flags & IOMMU_SET_DOMAIN_MUST_SUCCEED)
				continue;
			goto err_revert;
		}
	}
2022-05-10 00:19:19 +08:00
	group->domain = new_domain;
2023-05-11 12:42:01 +08:00
	return result;

err_revert:
	/*
	 * This is called in error unwind paths. A well behaved driver should
	 * always allow us to attach to a domain that was already attached.
	 *
2023-09-13 21:43:48 +08:00
	 * A NULL domain can happen only for first probe, in which case we
	 * leave group->domain as NULL and let release clean everything up.
2023-05-11 12:42:01 +08:00
	 */
	last_gdev = gdev;
	for_each_group_device(group, gdev) {
		if (group->domain)
			WARN_ON(__iommu_device_set_domain(
				group, gdev->dev, group->domain,
				IOMMU_SET_DOMAIN_MUST_SUCCEED));
		if (gdev == last_gdev)
			break;
	}
	return ret;
2015-05-29 00:41:31 +08:00
}
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using it's own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assigned ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Returns the group number
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
{
	mutex_lock(&group->mutex);
	__iommu_group_set_core_domain(group);
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_detach_group);

phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
{
	if (domain->type == IOMMU_DOMAIN_IDENTITY)
		return iova;

	if (domain->type == IOMMU_DOMAIN_BLOCKED)
		return 0;

	return domain->ops->iova_to_phys(domain, iova);
}
EXPORT_SYMBOL_GPL(iommu_iova_to_phys);
static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
			   phys_addr_t paddr, size_t size, size_t *count)
{
	unsigned int pgsize_idx, pgsize_idx_next;
	unsigned long pgsizes;
	size_t offset, pgsize, pgsize_next;
	unsigned long addr_merge = paddr | iova;

	/* Page sizes supported by the hardware and small enough for @size */
	pgsizes = domain->pgsize_bitmap & GENMASK(__fls(size), 0);

	/* Constrain the page sizes further based on the maximum alignment */
	if (likely(addr_merge))
		pgsizes &= GENMASK(__ffs(addr_merge), 0);

	/* Make sure we have at least one suitable page size */
	BUG_ON(!pgsizes);

	/* Pick the biggest page size remaining */
	pgsize_idx = __fls(pgsizes);
	pgsize = BIT(pgsize_idx);
	if (!count)
		return pgsize;

	/* Find the next biggest supported page size, if it exists */
	pgsizes = domain->pgsize_bitmap & ~GENMASK(pgsize_idx, 0);
	if (!pgsizes)
		goto out_set_count;

	pgsize_idx_next = __ffs(pgsizes);
	pgsize_next = BIT(pgsize_idx_next);

	/*
	 * There's no point trying a bigger page size unless the virtual
	 * and physical addresses are similarly offset within the larger page.
	 */
	if ((iova ^ paddr) & (pgsize_next - 1))
		goto out_set_count;

	/* Calculate the offset to the next page size alignment boundary */
	offset = pgsize_next - (addr_merge & (pgsize_next - 1));

	/*
	 * If size is big enough to accommodate the larger page, reduce
	 * the number of smaller pages.
	 */
	if (offset + pgsize_next <= size)
		size = offset;

out_set_count:
	*count = size >> pgsize_idx;
	return pgsize;
}

static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
		       phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
{
	const struct iommu_domain_ops *ops = domain->ops;
	unsigned long orig_iova = iova;
	unsigned int min_pagesz;
	size_t orig_size = size;
	phys_addr_t orig_paddr = paddr;
	int ret = 0;

	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
		return -EINVAL;

	if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
		return -ENODEV;
	/* find out the minimum page size supported */
	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
	/*
	 * both the virtual address and the physical one, as well as
	 * the size of the mapping, must be aligned (at least) to the
	 * size of the smallest page supported by the hardware
	 */
	if (!IS_ALIGNED(iova | paddr | size, min_pagesz)) {
		pr_err("unaligned: iova 0x%lx pa %pa size 0x%zx min_pagesz 0x%x\n",
		       iova, &paddr, size, min_pagesz);
		return -EINVAL;
	}

	pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
	while (size) {
		size_t pgsize, count, mapped = 0;

		pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx count %zu\n",
			 iova, &paddr, pgsize, count);
		ret = ops->map_pages(domain, iova, paddr, pgsize, count, prot,
				     gfp, &mapped);
		/*
		 * Some pages may have been mapped, even if an error occurred,
		 * so we should account for those so they can be unmapped.
		 */
		size -= mapped;
		if (ret)
			break;

		iova += mapped;
		paddr += mapped;
	}

	/* unroll mapping in case something went wrong */
	if (ret)
		iommu_unmap(domain, orig_iova, orig_size - size);
	else
		trace_map(orig_iova, orig_paddr, orig_size);
	return ret;
}
2019-09-09 00:56:38 +08:00
2023-01-24 04:35:54 +08:00
int iommu_map ( struct iommu_domain * domain , unsigned long iova ,
phys_addr_t paddr , size_t size , int prot , gfp_t gfp )
2021-01-07 20:29:03 +08:00
{
2022-02-16 10:52:49 +08:00
const struct iommu_domain_ops * ops = domain - > ops ;
2021-01-07 20:29:03 +08:00
int ret ;
2023-01-24 04:35:54 +08:00
might_sleep_if ( gfpflags_allow_blocking ( gfp ) ) ;
/* Discourage passing strange GFP flags */
if ( WARN_ON_ONCE ( gfp & ( __GFP_COMP | __GFP_DMA | __GFP_DMA32 |
__GFP_HIGHMEM ) ) )
return - EINVAL ;
2021-02-02 09:06:23 +08:00
ret = __iommu_map ( domain , iova , paddr , size , prot , gfp ) ;
2023-09-28 22:31:35 +08:00
if ( ret = = 0 & & ops - > iotlb_sync_map ) {
ret = ops - > iotlb_sync_map ( domain , iova , size ) ;
if ( ret )
goto out_err ;
}
2021-01-07 20:29:03 +08:00
return ret ;
2010-01-08 20:35:09 +08:00
2023-09-28 22:31:35 +08:00
out_err :
/* undo mappings already done */
iommu_unmap ( domain , iova , size ) ;
2021-06-16 21:38:48 +08:00
2021-01-07 20:29:03 +08:00
return ret ;
2021-06-16 21:38:48 +08:00
}
2010-01-08 20:35:09 +08:00
EXPORT_SYMBOL_GPL ( iommu_map ) ;
static size_t __iommu_unmap(struct iommu_domain *domain,
			    unsigned long iova, size_t size,
			    struct iommu_iotlb_gather *iotlb_gather)
{
	const struct iommu_domain_ops *ops = domain->ops;
	size_t unmapped_page, unmapped = 0;
	unsigned long orig_iova = iova;
	unsigned int min_pagesz;

	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
		return 0;

	if (WARN_ON(!ops->unmap_pages || domain->pgsize_bitmap == 0UL))
		return 0;

	/* find out the minimum page size supported */
	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);

	/*
	 * The virtual address, as well as the size of the mapping, must be
	 * aligned (at least) to the size of the smallest page supported
	 * by the hardware
	 */
	if (!IS_ALIGNED(iova | size, min_pagesz)) {
		pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n",
		       iova, size, min_pagesz);
		return 0;
	}

	pr_debug("unmap this: iova 0x%lx size 0x%zx\n", iova, size);

	/*
	 * Keep iterating until we either unmap 'size' bytes (or more)
	 * or we hit an area that isn't mapped.
	 */
	while (unmapped < size) {
		size_t pgsize, count;

		pgsize = iommu_pgsize(domain, iova, iova, size - unmapped, &count);
		unmapped_page = ops->unmap_pages(domain, iova, pgsize, count,
						 iotlb_gather);
		if (!unmapped_page)
			break;

		pr_debug("unmapped: iova 0x%lx size 0x%zx\n",
			 iova, unmapped_page);

		iova += unmapped_page;
		unmapped += unmapped_page;
	}

	trace_unmap(orig_iova, size, unmapped);
	return unmapped;
}
size_t iommu_unmap(struct iommu_domain *domain,
		   unsigned long iova, size_t size)
{
	struct iommu_iotlb_gather iotlb_gather;
	size_t ret;

	iommu_iotlb_gather_init(&iotlb_gather);
	ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
	iommu_iotlb_sync(domain, &iotlb_gather);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_unmap);
size_t iommu_unmap_fast(struct iommu_domain *domain,
			unsigned long iova, size_t size,
			struct iommu_iotlb_gather *iotlb_gather)
{
	return __iommu_unmap(domain, iova, size, iotlb_gather);
}
EXPORT_SYMBOL_GPL(iommu_unmap_fast);
ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
		     struct scatterlist *sg, unsigned int nents, int prot,
		     gfp_t gfp)
{
	const struct iommu_domain_ops *ops = domain->ops;
	size_t len = 0, mapped = 0;
	phys_addr_t start;
	unsigned int i = 0;
	int ret;

	might_sleep_if(gfpflags_allow_blocking(gfp));

	/* Discourage passing strange GFP flags */
	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
				__GFP_HIGHMEM)))
		return -EINVAL;

	while (i <= nents) {
		phys_addr_t s_phys = sg_phys(sg);

		if (len && s_phys != start + len) {
			ret = __iommu_map(domain, iova + mapped, start,
					  len, prot, gfp);
			if (ret)
				goto out_err;

			mapped += len;
			len = 0;
		}

		if (sg_dma_is_bus_address(sg))
			goto next;

		if (len) {
			len += sg->length;
		} else {
			len = sg->length;
			start = s_phys;
		}

next:
		if (++i < nents)
			sg = sg_next(sg);
	}

	if (ops->iotlb_sync_map) {
		ret = ops->iotlb_sync_map(domain, iova, mapped);
		if (ret)
			goto out_err;
	}
	return mapped;

out_err:
	/* undo mappings already done */
	iommu_unmap(domain, iova, mapped);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_map_sg);
/**
 * report_iommu_fault() - report about an IOMMU fault to the IOMMU framework
 * @domain: the iommu domain where the fault has happened
 * @dev: the device where the fault has happened
 * @iova: the faulting address
 * @flags: mmu fault flags (e.g. IOMMU_FAULT_READ/IOMMU_FAULT_WRITE/...)
 *
 * This function should be called by the low-level IOMMU implementations
 * whenever IOMMU faults happen, to allow high-level users, that are
 * interested in such events, to know about them.
 *
 * This event may be useful for several possible use cases:
 * - mere logging of the event
 * - dynamic TLB/PTE loading
 * - if restarting of the faulting device is required
 *
 * Returns 0 on success and an appropriate error code otherwise (if dynamic
 * PTE/TLB loading will one day be supported, implementations will be able
 * to tell whether it succeeded or not according to this return value).
 *
 * Specifically, -ENOSYS is returned if a fault handler isn't installed
 * (though fault handlers can also return -ENOSYS, in case they want to
 * elicit the default behavior of the IOMMU drivers).
 */
int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
		       unsigned long iova, int flags)
{
	int ret = -ENOSYS;

	/*
	 * if upper layers showed interest and installed a fault handler,
	 * invoke it.
	 */
	if (domain->handler)
		ret = domain->handler(domain, dev, iova, flags,
				      domain->handler_token);

	trace_io_page_fault(dev, iova, flags);
	return ret;
}
EXPORT_SYMBOL_GPL(report_iommu_fault);
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using it's own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assign ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Return the group number
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
static int __init iommu_init(void)
{
	iommu_group_kset = kset_create_and_add("iommu_groups",
					       NULL, kernel_kobj);
	BUG_ON(!iommu_group_kset);

	iommu_debugfs_setup();
iommu: IOMMU Groups
IOMMU device groups are currently a rather vague associative notion
with assembly required by the user or user level driver provider to
do anything useful. This patch intends to grow the IOMMU group concept
into something a bit more consumable.
To do this, we first create an object representing the group, struct
iommu_group. This structure is allocated (iommu_group_alloc) and
filled (iommu_group_add_device) by the iommu driver. The iommu driver
is free to add devices to the group using it's own set of policies.
This allows inclusion of devices based on physical hardware or topology
limitations of the platform, as well as soft requirements, such as
multi-function trust levels or peer-to-peer protection of the
interconnects. Each device may only belong to a single iommu group,
which is linked from struct device.iommu_group. IOMMU groups are
maintained using kobject reference counting, allowing for automatic
removal of empty, unreferenced groups. It is the responsibility of
the iommu driver to remove devices from the group
(iommu_group_remove_device).
IOMMU groups also include a userspace representation in sysfs under
/sys/kernel/iommu_groups. When allocated, each group is given a
dynamically assigned ID (int). The ID is managed by the core IOMMU group
code to support multiple heterogeneous iommu drivers, which could
potentially collide in group naming/numbering. This also keeps group
IDs to small, easily managed values. A directory is created under
/sys/kernel/iommu_groups for each group. A further subdirectory named
"devices" contains links to each device within the group. The iommu_group
file in the device's sysfs directory, which formerly contained a group
number when read, is now a link to the iommu group. Example:
$ ls -l /sys/kernel/iommu_groups/26/devices/
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:00:1e.0 ->
../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.0 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 17 12:57 0000:06:0d.1 ->
../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1
$ ls -l /sys/kernel/iommu_groups/26/devices/*/iommu_group
[truncating perms/owner/timestamp]
/sys/kernel/iommu_groups/26/devices/0000:00:1e.0/iommu_group ->
../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.0/iommu_group ->
../../../../kernel/iommu_groups/26
/sys/kernel/iommu_groups/26/devices/0000:06:0d.1/iommu_group ->
../../../../kernel/iommu_groups/26
Groups also include several exported functions for use by user level
driver providers, for example VFIO. These include:
iommu_group_get(): Acquires a reference to a group from a device
iommu_group_put(): Releases reference
iommu_group_for_each_dev(): Iterates over group devices using callback
iommu_group_[un]register_notifier(): Allows notification of device add
and remove operations relevant to the group
iommu_group_id(): Return the group number
This patch also extends the IOMMU API to allow attaching groups to
domains. This is currently a simple wrapper for iterating through
devices within a group, but it's expected that the IOMMU API may
eventually make groups a more integral part of domains.
Groups intentionally do not try to manage group ownership. A user
level driver provider must independently acquire ownership for each
device within a group before making use of the group as a whole.
This may change in the future if group usage becomes more pervasive
across both DMA and IOMMU ops.
Groups intentionally do not provide a mechanism for driver locking
or otherwise manipulating driver matching/probing of devices within
the group. Such interfaces are generic to devices and beyond the
scope of IOMMU groups. If implemented, user level providers have
ready access via iommu_group_for_each_dev and group notifiers.
iommu_device_group() is removed here as it has no users. The
replacement is:
group = iommu_group_get(dev);
id = iommu_group_id(group);
iommu_group_put(group);
AMD-Vi & Intel VT-d support re-added in following patches.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
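The get/id/put sequence above relies on kobject-style reference counting. A minimal userspace sketch of that lifecycle, with mocked types (the mock_group_* names are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Mock of the kobject-style refcounting the commit describes:
 * get() takes a reference, id() reads the core-assigned ID, and
 * put() drops the reference, freeing the group on the last drop. */
struct mock_group {
	int id;       /* core-assigned, small integer */
	int refcount; /* kobject-style reference count */
};

static struct mock_group *mock_group_alloc(int id)
{
	struct mock_group *g = malloc(sizeof(*g));

	g->id = id;
	g->refcount = 1;
	return g;
}

static struct mock_group *mock_group_get(struct mock_group *g)
{
	g->refcount++;
	return g;
}

static int mock_group_id(struct mock_group *g)
{
	return g->id;
}

/* Returns 1 when the final reference was dropped and the group freed. */
static int mock_group_put(struct mock_group *g)
{
	if (--g->refcount == 0) {
		free(g);
		return 1;
	}
	return 0;
}
```

The real iommu_group_put() defers to kobject release callbacks rather than freeing inline; the sketch only shows the usage contract.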
2012-05-31 04:18:53 +08:00
	return 0;
2011-10-22 03:56:05 +08:00
}
2015-05-19 21:20:23 +08:00
core_initcall(iommu_init);
2012-01-27 02:40:52 +08:00
2021-04-01 23:52:52 +08:00
int iommu_enable_nesting(struct iommu_domain *domain)
{
	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
		return -EINVAL;
	if (!domain->ops->enable_nesting)
		return -EINVAL;
	return domain->ops->enable_nesting(domain);
}
EXPORT_SYMBOL_GPL(iommu_enable_nesting);
2021-04-01 23:52:55 +08:00
int iommu_set_pgtable_quirks(struct iommu_domain *domain,
			     unsigned long quirk)
{
	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
		return -EINVAL;
	if (!domain->ops->set_pgtable_quirks)
		return -EINVAL;
	return domain->ops->set_pgtable_quirks(domain, quirk);
}
EXPORT_SYMBOL_GPL(iommu_set_pgtable_quirks);
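Both functions follow the same guard pattern: reject the call unless the domain is UNMANAGED and the driver actually implements the optional op. A userspace sketch of that dispatch, with mocked types (not the kernel structures):

```c
#include <stddef.h>

/* Guard pattern: check domain type, then check for the optional op
 * before dispatching through the ops table. */
enum mock_domain_type { MOCK_DOMAIN_DMA, MOCK_DOMAIN_UNMANAGED };

struct mock_domain;

struct mock_domain_ops {
	int (*enable_nesting)(struct mock_domain *domain); /* optional */
};

struct mock_domain {
	enum mock_domain_type type;
	const struct mock_domain_ops *ops;
};

static int mock_enable_nesting(struct mock_domain *domain)
{
	if (domain->type != MOCK_DOMAIN_UNMANAGED)
		return -22; /* -EINVAL */
	if (!domain->ops->enable_nesting)
		return -22;
	return domain->ops->enable_nesting(domain);
}

/* Trivial driver op standing in for a real implementation. */
static int always_ok(struct mock_domain *domain)
{
	(void)domain;
	return 0;
}
```

Returning -EINVAL for a missing op (rather than -ENODEV) keeps callers from needing to distinguish "wrong domain type" from "driver can't do it".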
2023-07-18 02:12:00 +08:00
/**
 * iommu_get_resv_regions - get reserved regions
 * @dev: device for which to get reserved regions
 * @list: reserved region list for device
 *
 * This returns a list of reserved IOVA regions specific to this device.
 * A domain user should not map IOVA in these ranges.
 */
2017-01-20 04:57:47 +08:00
void iommu_get_resv_regions(struct device *dev, struct list_head *list)
2015-05-29 00:41:33 +08:00
{
2022-02-16 10:52:47 +08:00
	const struct iommu_ops *ops = dev_iommu_ops(dev);
2015-05-29 00:41:33 +08:00
2022-02-16 10:52:47 +08:00
	if (ops->get_resv_regions)
2017-01-20 04:57:47 +08:00
		ops->get_resv_regions(dev, list);
2015-05-29 00:41:33 +08:00
}
2023-07-18 02:12:00 +08:00
EXPORT_SYMBOL_GPL(iommu_get_resv_regions);
2015-05-29 00:41:33 +08:00
2019-12-18 21:42:01 +08:00
/**
2023-07-18 02:12:00 +08:00
 * iommu_put_resv_regions - release reserved regions
2019-12-18 21:42:01 +08:00
 * @dev: device for which to free reserved regions
 * @list: reserved region list for device
 *
2022-07-08 16:06:15 +08:00
 * This releases a reserved region list acquired by iommu_get_resv_regions().
2019-12-18 21:42:01 +08:00
 */
2022-07-08 16:06:15 +08:00
void iommu_put_resv_regions(struct device *dev, struct list_head *list)
2019-12-18 21:42:01 +08:00
{
	struct iommu_resv_region *entry, *next;
2022-06-15 18:10:36 +08:00
	list_for_each_entry_safe(entry, next, list, list) {
		if (entry->free)
			entry->free(dev, entry);
		else
			kfree(entry);
	}
2019-12-18 21:42:01 +08:00
}
2022-07-08 16:06:15 +08:00
EXPORT_SYMBOL(iommu_put_resv_regions);
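The release loop lets each entry carry its own free callback, falling back to plain kfree(). A userspace sketch of that per-entry destructor pattern, with mocked types and an array instead of the kernel list macros:

```c
#include <stdlib.h>

/* Each entry may carry its own free callback; entries without one
 * are released with plain free() (kfree() in the kernel). */
struct mock_resv_region {
	void (*free)(struct mock_resv_region *entry); /* optional */
};

static int custom_free_calls;

static void custom_free(struct mock_resv_region *entry)
{
	custom_free_calls++;
	free(entry);
}

static void mock_put_resv_regions(struct mock_resv_region **entries, int n)
{
	for (int i = 0; i < n; i++) {
		if (entries[i]->free)
			entries[i]->free(entries[i]);
		else
			free(entries[i]);
	}
}
```

The callback exists so drivers that embed a region inside a larger allocation can release the whole object, not just the region.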
2019-12-18 21:42:01 +08:00
2017-01-20 04:57:49 +08:00
struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start,
iommu: Disambiguate MSI region types
The introduction of reserved regions has left a couple of rough edges
which we could do with sorting out sooner rather than later. Since we
are not yet addressing the potential dynamic aspect of software-managed
reservations and presenting them at arbitrary fixed addresses, it is
incongruous that we end up displaying hardware vs. software-managed MSI
regions to userspace differently, especially since ARM-based systems may
actually require one or the other, or even potentially both at once
(which iommu-dma currently has no hope of dealing with at all). Let's
resolve the former user-visible inconsistency ASAP before the ABI has
been baked into a kernel release, in a way that also lays the groundwork
for the latter shortcoming to be addressed by follow-up patches.
For clarity, rename the software-managed type to IOMMU_RESV_SW_MSI, use
IOMMU_RESV_MSI to describe the hardware type, and document everything a
little bit. Since the x86 MSI remapping hardware falls squarely under
this meaning of IOMMU_RESV_MSI, apply that type to their regions as well,
so that we tell the same story to userspace across all platforms.
Secondly, as the various region types require quite different handling,
and it really makes little sense to ever try combining them, convert the
bitfield-esque #defines to a plain enum in the process before anyone
gets the wrong impression.
Fixes: d30ddcaa7b02 ("iommu: Add a new type field in iommu_resv_region")
Reviewed-by: Eric Auger <eric.auger@redhat.com>
CC: Alex Williamson <alex.williamson@redhat.com>
CC: David Woodhouse <dwmw2@infradead.org>
CC: kvm@vger.kernel.org
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2017-03-17 01:00:16 +08:00
						  size_t length, int prot,
2022-10-19 08:44:44 +08:00
						  enum iommu_resv_type type,
						  gfp_t gfp)
2017-01-20 04:57:49 +08:00
{
	struct iommu_resv_region *region;
2022-10-19 08:44:44 +08:00
	region = kzalloc(sizeof(*region), gfp);
2017-01-20 04:57:49 +08:00
	if (!region)
		return NULL;
	INIT_LIST_HEAD(&region->list);
	region->start = start;
	region->length = length;
	region->prot = prot;
	region->type = type;
	return region;
2015-05-29 00:41:33 +08:00
}
2019-12-19 20:03:37 +08:00
EXPORT_SYMBOL_GPL(iommu_alloc_resv_region);
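The allocator is a simple zero-allocate-and-fill constructor. A userspace sketch with mocked types (the MOCK_RESV_* names mirror the enum the commit message above introduces, but are illustrative only):

```c
#include <stdlib.h>

/* Zero-allocate a region and fill in the caller's parameters, ready
 * for insertion into a reserved-region list. */
enum mock_resv_type { MOCK_RESV_DIRECT, MOCK_RESV_MSI, MOCK_RESV_SW_MSI };

struct mock_resv_region {
	unsigned long long start;
	size_t length;
	int prot;
	enum mock_resv_type type;
};

static struct mock_resv_region *
mock_alloc_resv_region(unsigned long long start, size_t length, int prot,
		       enum mock_resv_type type)
{
	struct mock_resv_region *region = calloc(1, sizeof(*region));

	if (!region)
		return NULL;
	region->start = start;
	region->length = length;
	region->prot = prot;
	region->type = type;
	return region;
}
```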
2015-05-29 00:41:36 +08:00
2019-08-19 21:22:47 +08:00
void iommu_set_default_passthrough(bool cmd_line)
{
	if (cmd_line)
2021-04-01 23:52:53 +08:00
		iommu_cmd_line |= IOMMU_CMD_LINE_DMA_API;
2019-08-19 21:22:47 +08:00
	iommu_def_domain_type = IOMMU_DOMAIN_IDENTITY;
}
void iommu_set_default_translated(bool cmd_line)
{
	if (cmd_line)
2021-04-01 23:52:53 +08:00
		iommu_cmd_line |= IOMMU_CMD_LINE_DMA_API;
2019-08-19 21:22:47 +08:00
	iommu_def_domain_type = IOMMU_DOMAIN_DMA;
}
bool iommu_default_passthrough(void)
{
	return iommu_def_domain_type == IOMMU_DOMAIN_IDENTITY;
}
EXPORT_SYMBOL_GPL(iommu_default_passthrough);
2024-02-16 22:40:26 +08:00
const struct iommu_ops *iommu_ops_from_fwnode(const struct fwnode_handle *fwnode)
2016-11-21 18:01:36 +08:00
{
	const struct iommu_ops *ops = NULL;
2017-02-02 19:19:12 +08:00
	struct iommu_device *iommu;
2016-11-21 18:01:36 +08:00
2017-02-02 19:19:12 +08:00
	spin_lock(&iommu_device_lock);
	list_for_each_entry(iommu, &iommu_device_list, list)
		if (iommu->fwnode == fwnode) {
			ops = iommu->ops;
2016-11-21 18:01:36 +08:00
			break;
		}
2017-02-02 19:19:12 +08:00
	spin_unlock(&iommu_device_lock);
2016-11-21 18:01:36 +08:00
	return ops;
}
2016-09-13 17:54:14 +08:00
int iommu_fwspec_init(struct device *dev, struct fwnode_handle *iommu_fwnode,
		      const struct iommu_ops *ops)
{
2018-11-28 20:35:24 +08:00
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
2016-09-13 17:54:14 +08:00
	if (fwspec)
		return ops == fwspec->ops ? 0 : -EINVAL;
2020-03-26 23:08:31 +08:00
	if (!dev_iommu_get(dev))
		return -ENOMEM;
2020-02-13 22:00:21 +08:00
	/* Preallocate for the overwhelmingly common case of 1 ID */
	fwspec = kzalloc(struct_size(fwspec, ids, 1), GFP_KERNEL);
2016-09-13 17:54:14 +08:00
	if (!fwspec)
		return -ENOMEM;
	of_node_get(to_of_node(iommu_fwnode));
	fwspec->iommu_fwnode = iommu_fwnode;
	fwspec->ops = ops;
2018-11-28 20:35:24 +08:00
	dev_iommu_fwspec_set(dev, fwspec);
2016-09-13 17:54:14 +08:00
	return 0;
}
EXPORT_SYMBOL_GPL(iommu_fwspec_init);
void iommu_fwspec_free(struct device *dev)
{
2018-11-28 20:35:24 +08:00
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
2016-09-13 17:54:14 +08:00
	if (fwspec) {
		fwnode_handle_put(fwspec->iommu_fwnode);
		kfree(fwspec);
2018-11-28 20:35:24 +08:00
		dev_iommu_fwspec_set(dev, NULL);
2016-09-13 17:54:14 +08:00
	}
}
EXPORT_SYMBOL_GPL(iommu_fwspec_free);
2024-02-16 22:40:25 +08:00
int iommu_fwspec_add_ids(struct device *dev, const u32 *ids, int num_ids)
2016-09-13 17:54:14 +08:00
{
2018-11-28 20:35:24 +08:00
	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
2020-02-13 22:00:21 +08:00
	int i, new_num;
2016-09-13 17:54:14 +08:00
	if (!fwspec)
		return -EINVAL;
2020-02-13 22:00:21 +08:00
	new_num = fwspec->num_ids + num_ids;
	if (new_num > 1) {
		fwspec = krealloc(fwspec, struct_size(fwspec, ids, new_num),
				  GFP_KERNEL);
2016-09-13 17:54:14 +08:00
		if (!fwspec)
			return -ENOMEM;
2017-02-03 17:35:02 +08:00
2018-11-28 20:35:24 +08:00
		dev_iommu_fwspec_set(dev, fwspec);
2016-09-13 17:54:14 +08:00
	}
	for (i = 0; i < num_ids; i++)
		fwspec->ids[fwspec->num_ids + i] = ids[i];
2020-02-13 22:00:21 +08:00
	fwspec->num_ids = new_num;
2016-09-13 17:54:14 +08:00
	return 0;
}
EXPORT_SYMBOL_GPL(iommu_fwspec_add_ids);
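iommu_fwspec_add_ids() grows a flexible-array struct: preallocate for the common 1-ID case, then reallocate only when more IDs are appended. A userspace sketch of the pattern, with mocked types (realloc() stands in for krealloc()/struct_size()):

```c
#include <stdint.h>
#include <stdlib.h>

/* Flexible-array struct preallocated for the common 1-ID case and
 * grown with realloc() only when more IDs are appended. */
struct mock_fwspec {
	int num_ids;
	uint32_t ids[]; /* flexible array member */
};

static struct mock_fwspec *mock_fwspec_init(void)
{
	/* Preallocate for the overwhelmingly common case of 1 ID. */
	return calloc(1, sizeof(struct mock_fwspec) + sizeof(uint32_t));
}

static struct mock_fwspec *mock_fwspec_add_ids(struct mock_fwspec *fwspec,
					       const uint32_t *ids, int num_ids)
{
	int new_num = fwspec->num_ids + num_ids;

	if (new_num > 1) {
		fwspec = realloc(fwspec, sizeof(*fwspec) +
				 (size_t)new_num * sizeof(uint32_t));
		if (!fwspec)
			return NULL;
	}
	for (int i = 0; i < num_ids; i++)
		fwspec->ids[fwspec->num_ids + i] = ids[i];
	fwspec->num_ids = new_num;
	return fwspec;
}
```

The kernel version additionally re-publishes the possibly-moved pointer via dev_iommu_fwspec_set(), since krealloc() may return a different address.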
iommu: Add APIs for multiple domains per device
Sharing a physical PCI device at a finer granularity is
becoming an industry consensus, and IOMMU vendors are
working to support such sharing as well as possible. Among
these efforts, the capability to support finer-granularity
DMA isolation is a common requirement for security reasons.
With finer-granularity DMA isolation, subsets of a PCI
function can be isolated from each other by the IOMMU. As
a result, there is a request in software to attach multiple
domains to a physical PCI device. One example of such a
usage model is Intel Scalable IOV [1] [2]. The Intel VT-d
3.0 spec [3] introduces scalable mode, which enables
PASID-granularity DMA isolation.
This adds the APIs to support multiple domains per device.
To ease the discussion, we call it 'a domain in auxiliary
mode', or simply 'auxiliary domain', when multiple domains
are attached to a physical device.
The APIs include:
* iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_AUX)
- Detect both IOMMU and PCI endpoint devices supporting
the feature (aux-domain here) without the host driver
dependency.
* iommu_dev_feature_enabled(dev, IOMMU_DEV_FEAT_AUX)
- Check the enabling status of the feature (aux-domain
here). The aux-domain interfaces are available only
if this returns true.
* iommu_dev_enable/disable_feature(dev, IOMMU_DEV_FEAT_AUX)
- Enable/disable device specific aux-domain feature.
* iommu_aux_attach_device(domain, dev)
- Attaches @domain to @dev in the auxiliary mode. Multiple
domains could be attached to a single device in the
auxiliary mode with each domain representing an isolated
address space for an assignable subset of the device.
* iommu_aux_detach_device(domain, dev)
- Detach @domain which has been attached to @dev in the
auxiliary mode.
* iommu_aux_get_pasid(domain, dev)
- Return the ID used for finer-granularity DMA translation.
For the Intel Scalable IOV usage model, this will be
a PASID. A device that supports Scalable IOV needs to
write this ID to the device register so that DMA
requests can be tagged with the right PASID prefix.
This has been updated with the latest proposal from Joerg
posted here [5].
Many people were involved in discussions of this design:
Kevin Tian <kevin.tian@intel.com>
Liu Yi L <yi.l.liu@intel.com>
Ashok Raj <ashok.raj@intel.com>
Sanjay Kumar <sanjay.k.kumar@intel.com>
Jacob Pan <jacob.jun.pan@linux.intel.com>
Alex Williamson <alex.williamson@redhat.com>
Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Joerg Roedel <joro@8bytes.org>
and some discussions can be found here [4] [5].
[1] https://software.intel.com/en-us/download/intel-scalable-io-virtualization-technical-specification
[2] https://schd.ws/hosted_files/lc32018/00/LC3-SIOV-final.pdf
[3] https://software.intel.com/en-us/download/intel-virtualization-technology-for-directed-io-architecture-specification
[4] https://lkml.org/lkml/2018/7/26/4
[5] https://www.spinics.net/lists/iommu/msg31874.html
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Liu Yi L <yi.l.liu@intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Suggested-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Suggested-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2019-03-25 09:30:28 +08:00
/*
 * Per device IOMMU features.
 */
int iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat)
{
2023-11-22 02:03:57 +08:00
	if (dev_has_iommu(dev)) {
		const struct iommu_ops *ops = dev_iommu_ops(dev);
2019-03-25 09:30:28 +08:00
2021-03-04 01:36:11 +08:00
		if (ops->dev_enable_feat)
			return ops->dev_enable_feat(dev, feat);
	}
2019-03-25 09:30:28 +08:00
	return -ENODEV;
}
EXPORT_SYMBOL_GPL(iommu_dev_enable_feature);
/*
 * The device drivers should do the necessary cleanups before calling this.
 */
int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat)
{
2023-11-22 02:03:57 +08:00
	if (dev_has_iommu(dev)) {
		const struct iommu_ops *ops = dev_iommu_ops(dev);
2019-03-25 09:30:28 +08:00
2021-03-04 01:36:11 +08:00
		if (ops->dev_disable_feat)
			return ops->dev_disable_feat(dev, feat);
	}
2019-03-25 09:30:28 +08:00
	return -EBUSY;
}
EXPORT_SYMBOL_GPL(iommu_dev_disable_feature);
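The enable/disable pair fails with distinct errors when nothing backs the device: -ENODEV for enable (no IOMMU or no driver op), -EBUSY for disable. A userspace sketch of that contract, with mocked types and errno values spelled out as local constants:

```c
/* Enable fails with -ENODEV when no IOMMU (or driver op) backs the
 * device; disable fails with -EBUSY in the same situation. The values
 * mirror <errno.h> conventions. */
#define MOCK_ENODEV 19
#define MOCK_EBUSY  16

struct mock_dev {
	int has_iommu;
	int feat_enabled;
};

static int mock_dev_enable_feature(struct mock_dev *dev)
{
	if (dev->has_iommu) {
		dev->feat_enabled = 1;
		return 0;
	}
	return -MOCK_ENODEV;
}

static int mock_dev_disable_feature(struct mock_dev *dev)
{
	if (dev->has_iommu) {
		dev->feat_enabled = 0;
		return 0;
	}
	return -MOCK_EBUSY;
}
```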
2023-05-11 12:42:12 +08:00
/**
 * iommu_setup_default_domain - Set the default_domain for the group
 * @group: Group to change
 * @target_type: Domain type to set as the default_domain
2020-11-24 21:06:02 +08:00
 *
2023-05-11 12:42:12 +08:00
 * Allocate a default domain and set it as the current domain on the group. If
 * the group already has a default domain it will be changed to the target_type.
 * When target_type is 0 the default domain is selected based on driver and
 * system preferences.
2020-11-24 21:06:02 +08:00
 */
2023-05-11 12:42:12 +08:00
static int iommu_setup_default_domain(struct iommu_group *group,
				      int target_type)
2020-11-24 21:06:02 +08:00
{
2023-05-11 12:42:12 +08:00
	struct iommu_domain *old_dom = group->default_domain;
	struct group_device *gdev;
	struct iommu_domain *dom;
2023-05-11 12:42:13 +08:00
	bool direct_failed;
2023-05-11 12:42:12 +08:00
	int req_type;
2023-03-22 14:49:56 +08:00
	int ret;
2020-11-24 21:06:02 +08:00
2023-03-22 14:49:54 +08:00
	lockdep_assert_held(&group->mutex);
2020-11-24 21:06:02 +08:00
2023-05-11 12:42:12 +08:00
	req_type = iommu_get_default_domain_type(group, target_type);
	if (req_type < 0)
2023-03-22 14:49:56 +08:00
		return -EINVAL;
2020-11-24 21:06:02 +08:00
2023-05-11 12:42:12 +08:00
	dom = iommu_group_alloc_default_domain(group, req_type);
2023-11-02 07:28:11 +08:00
	if (IS_ERR(dom))
		return PTR_ERR(dom);
2020-11-24 21:06:02 +08:00
2023-05-11 12:42:12 +08:00
	if (group->default_domain == dom)
		return 0;
2020-11-24 21:06:02 +08:00
2023-05-11 12:42:12 +08:00
	/*
	 * IOMMU_RESV_DIRECT and IOMMU_RESV_DIRECT_RELAXABLE regions must be
	 * mapped before their device is attached, in order to guarantee
	 * continuity with any FW activity
	 */
2023-05-11 12:42:13 +08:00
	direct_failed = false;
	for_each_group_device(group, gdev) {
		if (iommu_create_device_direct_mappings(dom, gdev->dev)) {
			direct_failed = true;
			dev_warn_once(
				gdev->dev->iommu->iommu_dev->dev,
				"IOMMU driver was not able to establish FW requested direct mapping.");
		}
	}
2023-03-22 14:49:54 +08:00
2023-05-11 12:42:12 +08:00
	/* We must set default_domain early for __iommu_device_set_domain */
	group->default_domain = dom;
	if (!group->domain) {
		/*
		 * Drivers are not allowed to fail the first domain attach.
		 * The only way to recover from this is to fail attaching the
		 * iommu driver and call ops->release_device. Put the domain
		 * in group->default_domain so it is freed after.
		 */
		ret = __iommu_group_set_domain_internal(
			group, dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
		if (WARN_ON(ret))
2023-06-26 23:13:11 +08:00
			goto out_free_old;
2023-05-11 12:42:12 +08:00
	} else {
		ret = __iommu_group_set_domain(group, dom);
2023-06-26 23:13:11 +08:00
		if (ret)
			goto err_restore_def_domain;
2023-05-11 12:42:12 +08:00
	}
2020-11-24 21:06:02 +08:00
2023-05-11 12:42:13 +08:00
/*
* Drivers are supposed to allow mappings to be installed in a domain
* before device attachment , but some don ' t . Hack around this defect by
* trying again after attaching . If this happens it means the device
* will not continuously have the IOMMU_RESV_DIRECT map .
*/
if ( direct_failed ) {
for_each_group_device ( group , gdev ) {
ret = iommu_create_device_direct_mappings ( dom , gdev - > dev ) ;
if ( ret )
2023-06-26 23:13:11 +08:00
goto err_restore_domain ;
2023-05-11 12:42:13 +08:00
}
}
2020-11-24 21:06:02 +08:00
2023-06-26 23:13:11 +08:00
out_free_old :
if ( old_dom )
iommu_domain_free ( old_dom ) ;
return ret ;
err_restore_domain :
if ( old_dom )
2023-05-11 12:42:13 +08:00
__iommu_group_set_domain_internal (
group , old_dom , IOMMU_SET_DOMAIN_MUST_SUCCEED ) ;
2023-06-26 23:13:11 +08:00
err_restore_def_domain :
if ( old_dom ) {
2023-05-11 12:42:13 +08:00
iommu_domain_free ( dom ) ;
2023-06-26 23:13:11 +08:00
group - > default_domain = old_dom ;
2023-05-11 12:42:13 +08:00
}
2020-11-24 21:06:02 +08:00
return ret ;
}

/*
 * Changing the default domain through sysfs requires the users to unbind the
 * drivers from the devices in the iommu group, except for a DMA -> DMA-FQ
 * transition. Return failure if this isn't met.
 *
 * We need to consider the race between this and the device release path.
 * group->mutex is used here to guarantee that the device release path
 * will not be entered at the same time.
 */
static ssize_t iommu_group_store_type(struct iommu_group *group,
				      const char *buf, size_t count)
{
	struct group_device *gdev;
	int ret, req_type;

	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
		return -EACCES;

	if (WARN_ON(!group) || !group->default_domain)
		return -EINVAL;

	if (sysfs_streq(buf, "identity"))
		req_type = IOMMU_DOMAIN_IDENTITY;
	else if (sysfs_streq(buf, "DMA"))
		req_type = IOMMU_DOMAIN_DMA;
	else if (sysfs_streq(buf, "DMA-FQ"))
		req_type = IOMMU_DOMAIN_DMA_FQ;
	else if (sysfs_streq(buf, "auto"))
		req_type = 0;
	else
		return -EINVAL;

	mutex_lock(&group->mutex);
	/* We can bring up a flush queue without tearing down the domain. */
	if (req_type == IOMMU_DOMAIN_DMA_FQ &&
	    group->default_domain->type == IOMMU_DOMAIN_DMA) {
		ret = iommu_dma_init_fq(group->default_domain);
		if (ret)
			goto out_unlock;

		group->default_domain->type = IOMMU_DOMAIN_DMA_FQ;
		ret = count;
		goto out_unlock;
	}

	/* Otherwise, ensure that device exists and no driver is bound. */
	if (list_empty(&group->devices) || group->owner_cnt) {
		ret = -EPERM;
		goto out_unlock;
	}

	ret = iommu_setup_default_domain(group, req_type);
	if (ret)
		goto out_unlock;

	/*
	 * Release the mutex here because ops->probe_finalize() call-back of
	 * some vendor IOMMU drivers calls arm_iommu_attach_device() which
	 * in-turn might call back into IOMMU core code, where it tries to take
	 * group->mutex, resulting in a deadlock.
	 */
	mutex_unlock(&group->mutex);

	/* Make sure dma_ops is appropriately set */
	for_each_group_device(group, gdev)
		iommu_group_do_probe_finalize(gdev->dev);
	return count;

out_unlock:
	mutex_unlock(&group->mutex);
	return ret ?: count;
}

/**
 * iommu_device_use_default_domain() - Device driver wants to handle device
 *                                     DMA through the kernel DMA API.
 * @dev: The device.
 *
 * The device driver about to bind @dev wants to do DMA through the kernel
 * DMA API. Return 0 if it is allowed, otherwise an error.
 */
int iommu_device_use_default_domain(struct device *dev)
{
	/* Caller is the driver core during the pre-probe path */
	struct iommu_group *group = dev->iommu_group;
	int ret = 0;

	if (!group)
		return 0;

	mutex_lock(&group->mutex);
	if (group->owner_cnt) {
		if (group->domain != group->default_domain || group->owner ||
		    !xa_empty(&group->pasid_array)) {
			ret = -EBUSY;
			goto unlock_out;
		}
	}

	group->owner_cnt++;

unlock_out:
	mutex_unlock(&group->mutex);
	return ret;
}
/**
 * iommu_device_unuse_default_domain() - Device driver stops handling device
 *                                       DMA through the kernel DMA API.
 * @dev: The device.
 *
 * The device driver doesn't want to do DMA through kernel DMA API anymore.
 * It must be called after iommu_device_use_default_domain().
 */
void iommu_device_unuse_default_domain(struct device *dev)
{
	/* Caller is the driver core during the post-probe path */
	struct iommu_group *group = dev->iommu_group;

	if (!group)
		return;

	mutex_lock(&group->mutex);
	if (!WARN_ON(!group->owner_cnt || !xa_empty(&group->pasid_array)))
		group->owner_cnt--;

	mutex_unlock(&group->mutex);
}

static int __iommu_group_alloc_blocking_domain(struct iommu_group *group)
{
	struct iommu_domain *domain;

	if (group->blocking_domain)
		return 0;

	domain = __iommu_group_domain_alloc(group, IOMMU_DOMAIN_BLOCKED);
	if (IS_ERR(domain)) {
		/*
		 * For drivers that do not yet understand IOMMU_DOMAIN_BLOCKED
		 * create an empty domain instead.
		 */
		domain = __iommu_group_domain_alloc(group,
						    IOMMU_DOMAIN_UNMANAGED);
		if (IS_ERR(domain))
			return PTR_ERR(domain);
	}
	group->blocking_domain = domain;
	return 0;
}

static int __iommu_take_dma_ownership(struct iommu_group *group, void *owner)
{
	int ret;

	if ((group->domain && group->domain != group->default_domain) ||
	    !xa_empty(&group->pasid_array))
		return -EBUSY;

	ret = __iommu_group_alloc_blocking_domain(group);
	if (ret)
		return ret;
	ret = __iommu_group_set_domain(group, group->blocking_domain);
	if (ret)
		return ret;

	group->owner = owner;
	group->owner_cnt++;
	return 0;
}

/**
 * iommu_group_claim_dma_owner() - Set DMA ownership of a group
 * @group: The group.
 * @owner: Caller specified pointer. Used for exclusive ownership.
 *
 * This is to support backward compatibility for vfio which manages the dma
 * ownership in iommu_group level. New invocations on this interface should be
 * prohibited. Only a single owner may exist for a group.
 */
int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner)
{
	int ret = 0;

	if (WARN_ON(!owner))
		return -EINVAL;

	mutex_lock(&group->mutex);
	if (group->owner_cnt) {
		ret = -EPERM;
		goto unlock_out;
	}

	ret = __iommu_take_dma_ownership(group, owner);
unlock_out:
	mutex_unlock(&group->mutex);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_claim_dma_owner);

/**
 * iommu_device_claim_dma_owner() - Set DMA ownership of a device
 * @dev: The device.
 * @owner: Caller specified pointer. Used for exclusive ownership.
 *
 * Claim the DMA ownership of a device. Multiple devices in the same group may
 * concurrently claim ownership if they present the same owner value. Returns 0
 * on success and error code on failure.
 */
int iommu_device_claim_dma_owner(struct device *dev, void *owner)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
	int ret = 0;

	if (WARN_ON(!owner))
		return -EINVAL;

	if (!group)
		return -ENODEV;

	mutex_lock(&group->mutex);
	if (group->owner_cnt) {
		if (group->owner != owner) {
			ret = -EPERM;
			goto unlock_out;
		}
		group->owner_cnt++;
		goto unlock_out;
	}

	ret = __iommu_take_dma_ownership(group, owner);
unlock_out:
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_device_claim_dma_owner);

static void __iommu_release_dma_ownership(struct iommu_group *group)
{
	if (WARN_ON(!group->owner_cnt || !group->owner ||
		    !xa_empty(&group->pasid_array)))
		return;

	group->owner_cnt = 0;
	group->owner = NULL;
	__iommu_group_set_domain_nofail(group, group->default_domain);
}

/**
 * iommu_group_release_dma_owner() - Release DMA ownership of a group
 * @group: The group
 *
 * Release the DMA ownership claimed by iommu_group_claim_dma_owner().
 */
void iommu_group_release_dma_owner(struct iommu_group *group)
{
	mutex_lock(&group->mutex);
	__iommu_release_dma_ownership(group);
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_group_release_dma_owner);

/**
 * iommu_device_release_dma_owner() - Release DMA ownership of a device
 * @dev: The device.
 *
 * Release the DMA ownership claimed by iommu_device_claim_dma_owner().
 */
void iommu_device_release_dma_owner(struct device *dev)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;

	mutex_lock(&group->mutex);
	if (group->owner_cnt > 1)
		group->owner_cnt--;
	else
		__iommu_release_dma_ownership(group);
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_device_release_dma_owner);

/**
 * iommu_group_dma_owner_claimed() - Query group dma ownership status
 * @group: The group.
 *
 * This provides status query on a given group. It is racy and only for
 * non-binding status reporting.
 */
bool iommu_group_dma_owner_claimed(struct iommu_group *group)
{
	unsigned int user;

	mutex_lock(&group->mutex);
	user = group->owner_cnt;
	mutex_unlock(&group->mutex);

	return user;
}
EXPORT_SYMBOL_GPL(iommu_group_dma_owner_claimed);

static int __iommu_set_group_pasid(struct iommu_domain *domain,
				   struct iommu_group *group, ioasid_t pasid)
{
	struct group_device *device;
	int ret = 0;

	for_each_group_device(group, device) {
		ret = domain->ops->set_dev_pasid(domain, device->dev, pasid);
		if (ret)
			break;
	}

	return ret;
}

static void __iommu_remove_group_pasid(struct iommu_group *group,
				       ioasid_t pasid)
{
	struct group_device *device;
	const struct iommu_ops *ops;

	for_each_group_device(group, device) {
		ops = dev_iommu_ops(device->dev);
		ops->remove_dev_pasid(device->dev, pasid);
	}
}

/*
 * iommu_attach_device_pasid() - Attach a domain to pasid of device
 * @domain: the iommu domain.
 * @dev: the attached device.
 * @pasid: the pasid of the device.
 *
 * Return: 0 on success, or an error.
 */
int iommu_attach_device_pasid(struct iommu_domain *domain,
			      struct device *dev, ioasid_t pasid)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
	struct group_device *device;
	void *curr;
	int ret;

	if (!domain->ops->set_dev_pasid)
		return -EOPNOTSUPP;

	if (!group)
		return -ENODEV;

	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner ||
	    pasid == IOMMU_NO_PASID)
		return -EINVAL;

	mutex_lock(&group->mutex);
	for_each_group_device(group, device) {
		if (pasid >= device->dev->iommu->max_pasids) {
			ret = -EINVAL;
			goto out_unlock;
		}
	}

	curr = xa_cmpxchg(&group->pasid_array, pasid, NULL, domain, GFP_KERNEL);
	if (curr) {
		ret = xa_err(curr) ? : -EBUSY;
		goto out_unlock;
	}

	ret = __iommu_set_group_pasid(domain, group, pasid);
	if (ret) {
		__iommu_remove_group_pasid(group, pasid);
		xa_erase(&group->pasid_array, pasid);
	}
out_unlock:
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_attach_device_pasid);

/*
 * iommu_detach_device_pasid() - Detach the domain from pasid of device
 * @domain: the iommu domain.
 * @dev: the attached device.
 * @pasid: the pasid of the device.
 *
 * The @domain must have been attached to @pasid of the @dev with
 * iommu_attach_device_pasid().
 */
void iommu_detach_device_pasid(struct iommu_domain *domain, struct device *dev,
			       ioasid_t pasid)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;

	mutex_lock(&group->mutex);
	__iommu_remove_group_pasid(group, pasid);
	WARN_ON(xa_erase(&group->pasid_array, pasid) != domain);
	mutex_unlock(&group->mutex);
}
EXPORT_SYMBOL_GPL(iommu_detach_device_pasid);

/*
 * iommu_get_domain_for_dev_pasid() - Retrieve domain for @pasid of @dev
 * @dev: the queried device
 * @pasid: the pasid of the device
 * @type: matched domain type, 0 for any match
 *
 * This is a variant of iommu_get_domain_for_dev(). It returns the existing
 * domain attached to pasid of a device. Callers must hold a lock around this
 * function, and both iommu_attach/detach_dev_pasid() whenever a domain of
 * type is being manipulated. This API does not internally resolve races with
 * attach/detach.
 *
 * Return: attached domain on success, NULL otherwise.
 */
struct iommu_domain *iommu_get_domain_for_dev_pasid(struct device *dev,
						    ioasid_t pasid,
						    unsigned int type)
{
	/* Caller must be a probed driver on dev */
	struct iommu_group *group = dev->iommu_group;
	struct iommu_domain *domain;

	if (!group)
		return NULL;

	xa_lock(&group->pasid_array);
	domain = xa_load(&group->pasid_array, pasid);
	if (type && domain && domain->type != type)
		domain = ERR_PTR(-EBUSY);
	xa_unlock(&group->pasid_array);

	return domain;
}
EXPORT_SYMBOL_GPL(iommu_get_domain_for_dev_pasid);

ioasid_t iommu_alloc_global_pasid(struct device *dev)
{
	int ret;

	/* max_pasids == 0 means that the device does not support PASID */
	if (!dev->iommu->max_pasids)
		return IOMMU_PASID_INVALID;

	/*
	 * max_pasids is set up by vendor driver based on number of PASID bits
	 * supported but the IDA allocation is inclusive.
	 */
	ret = ida_alloc_range(&iommu_global_pasid_ida, IOMMU_FIRST_GLOBAL_PASID,
			      dev->iommu->max_pasids - 1, GFP_KERNEL);
	return ret < 0 ? IOMMU_PASID_INVALID : ret;
}
EXPORT_SYMBOL_GPL(iommu_alloc_global_pasid);

void iommu_free_global_pasid(ioasid_t pasid)
{
	if (WARN_ON(pasid == IOMMU_PASID_INVALID))
		return;

	ida_free(&iommu_global_pasid_ida, pasid);
}
EXPORT_SYMBOL_GPL(iommu_free_global_pasid);