Driver core changes for 6.3-rc1

Merge tag 'driver-core-6.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull driver core updates from Greg KH:
 "Here is the large set of driver core changes for 6.3-rc1.

  There's a lot of changes this development cycle; most of the work
  falls into two different categories:

   - fw_devlink fixes and updates. This has gone through numerous
     review cycles, with lots of review and testing on many different
     devices. Hopefully all should be good now, and Saravana will be
     keeping watch for any potential regressions on odd embedded
     systems.

   - driver core changes working toward making struct bus_type able to
     be moved into read-only memory (i.e. const). The recent work with
     Rust has pointed out a number of areas in the driver core where we
     are passing around and working with structures that really do not
     have to be dynamic at all, and they should be able to be read-only,
     making things safer overall. This is the continuation of that work
     (started last release with the kobject changes) in moving struct
     bus_type to be constant. We didn't quite make it for this release,
     but the remaining patches will be finished up for the release after
     this one; the groundwork has been laid for this effort.

  Other than that we have in here:

   - debugfs memory leak fixes in some subsystems

   - error path cleanups and fixes for some never-able-to-be-hit
     codepaths.

   - cacheinfo rework and fixes

   - Other tiny fixes, full details are in the shortlog

  All of these have been in linux-next for a while with no reported
  problems"

[ Geert Uytterhoeven points out that that last sentence isn't true, and
  that there's a pending report that has a fix that is queued up - Linus ]
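
[ To make the bus_type constification concrete, a minimal sketch of
  where the series is heading, using a hypothetical "example" bus; once
  the remaining constification patches land, a definition like this can
  be placed in read-only memory:

      #include <linux/device.h>

      static int example_match(struct device *dev, struct device_driver *drv)
      {
              /* Placeholder match logic for the sketch. */
              return 0;
      }

      /* The const is the point: the structure is never written at
       * runtime, so it can live in a read-only section. */
      static const struct bus_type example_bus_type = {
              .name  = "example",
              .match = example_match,
      };
]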

* tag 'driver-core-6.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (124 commits)
  debugfs: drop inline constant formatting for ERR_PTR(-ERROR)
  OPP: fix error checking in opp_migrate_dentry()
  debugfs: update comment of debugfs_rename()
  i3c: fix device.h kernel-doc warnings
  dma-mapping: no need to pass a bus_type into get_arch_dma_ops()
  driver core: class: move EXPORT_SYMBOL_GPL() lines to the correct place
  Revert "driver core: add error handling for devtmpfs_create_node()"
  Revert "devtmpfs: add debug info to handle()"
  Revert "devtmpfs: remove return value of devtmpfs_delete_node()"
  driver core: cpu: don't hand-override the uevent bus_type callback.
  devtmpfs: remove return value of devtmpfs_delete_node()
  devtmpfs: add debug info to handle()
  driver core: add error handling for devtmpfs_create_node()
  driver core: bus: update my copyright notice
  driver core: bus: add bus_get_dev_root() function
  driver core: bus: constify bus_unregister()
  driver core: bus: constify some internal functions
  driver core: bus: constify bus_get_kset()
  driver core: bus: constify bus_register/unregister_notifier()
  driver core: remove private pointer from struct bus_type
  ...
Linus Torvalds, 2023-02-24 12:58:55 -08:00, commit a93e884edf
182 changed files with 1541 additions and 1195 deletions


@ -0,0 +1,10 @@
What: /sys/kernel/address_bits
Date: May 2023
KernelVersion: 6.3
Contact: Thomas Weißschuh <linux@weissschuh.net>
Description:
The address size of the running kernel in bits.
Access: Read
Users: util-linux
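
[ For illustration, a userspace consumer such as util-linux could read
  the attribute as below; this is a sketch, not util-linux's actual
  code:

      #include <stdio.h>

      int main(void)
      {
              unsigned int bits;
              FILE *f = fopen("/sys/kernel/address_bits", "r");

              if (!f) {
                      perror("fopen");
                      return 1;
              }
              if (fscanf(f, "%u", &bits) != 1) {
                      fclose(f);
                      return 1;
              }
              fclose(f);
              printf("kernel address size: %u bits\n", bits);
              return 0;
      }
]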


@ -251,6 +251,7 @@ an involved disclosed party. The current ambassadors list:
IBM Z Christian Borntraeger <borntraeger@de.ibm.com>
Intel Tony Luck <tony.luck@intel.com>
Qualcomm Trilok Soni <tsoni@codeaurora.org>
Samsung Javier González <javier.gonz@samsung.com>
Microsoft James Morris <jamorris@linux.microsoft.com>
VMware


@ -4,7 +4,7 @@
extern const struct dma_map_ops alpha_pci_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
static inline const struct dma_map_ops *get_arch_dma_ops(void)
{
#ifdef CONFIG_ALPHA_JENSEN
return NULL;
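
[ This hunk is one instance of a tree-wide cleanup: the bus_type
  argument of get_arch_dma_ops() goes unused in every architecture's
  implementation, so it is dropped everywhere. The generic caller
  simplifies accordingly; a sketch paraphrasing the resulting pattern
  in include/linux/dma-map-ops.h:

      static inline const struct dma_map_ops *get_dma_ops(struct device *dev)
      {
              /* Per-device ops, if set, take precedence ... */
              if (dev->dma_ops)
                      return dev->dma_ops;
              /* ... otherwise fall back to the architecture's global ops. */
              return get_arch_dma_ops();
      }
]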


@ -46,7 +46,7 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
int init_cache_level(unsigned int cpu)
{
unsigned int ctype, level, leaves;
int fw_level;
int fw_level, ret;
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
for (level = 1, leaves = 0; level <= MAX_CACHE_LEVEL; level++) {
@ -59,13 +59,13 @@ int init_cache_level(unsigned int cpu)
leaves += (ctype == CACHE_TYPE_SEPARATE) ? 2 : 1;
}
if (acpi_disabled)
if (acpi_disabled) {
fw_level = of_find_last_cache_level(cpu);
else
fw_level = acpi_find_last_cache_level(cpu);
if (fw_level < 0)
return fw_level;
} else {
ret = acpi_get_cache_info(cpu, &fw_level, NULL);
if (ret < 0)
fw_level = 0;
}
if (level < fw_level) {
/*


@ -8,7 +8,7 @@
*/
extern const struct dma_map_ops *dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
static inline const struct dma_map_ops *get_arch_dma_ops(void)
{
return dma_ops;
}


@ -6,7 +6,7 @@
extern const struct dma_map_ops jazz_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
static inline const struct dma_map_ops *get_arch_dma_ops(void)
{
#if defined(CONFIG_MACH_JAZZ)
return &jazz_dma_ops;


@ -199,9 +199,9 @@ static struct attribute *gio_dev_attrs[] = {
};
ATTRIBUTE_GROUPS(gio_dev);
static int gio_device_uevent(struct device *dev, struct kobj_uevent_env *env)
static int gio_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct gio_device *gio_dev = to_gio_device(dev);
const struct gio_device *gio_dev = to_gio_device(dev);
add_uevent_var(env, "MODALIAS=gio:%x", gio_dev->id.id);
return 0;


@ -21,7 +21,7 @@
extern const struct dma_map_ops *hppa_dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
static inline const struct dma_map_ops *get_arch_dma_ops(void)
{
return hppa_dma_ops;
}


@ -552,7 +552,7 @@ static int parisc_generic_match(struct device *dev, struct device_driver *drv)
return match_device(to_parisc_driver(drv), to_parisc_device(dev));
}
static ssize_t make_modalias(struct device *dev, char *buf)
static ssize_t make_modalias(const struct device *dev, char *buf)
{
const struct parisc_device *padev = to_parisc_device(dev);
const struct parisc_device_id *id = &padev->id;
@ -562,7 +562,7 @@ static ssize_t make_modalias(struct device *dev, char *buf)
(u32)id->sversion);
}
static int parisc_uevent(struct device *dev, struct kobj_uevent_env *env)
static int parisc_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
const struct parisc_device *padev;
char modalias[40];


@ -396,7 +396,7 @@ static inline struct ps3_system_bus_driver *ps3_drv_to_system_bus_drv(
return container_of(_drv, struct ps3_system_bus_driver, core);
}
static inline struct ps3_system_bus_device *ps3_dev_to_system_bus_dev(
struct device *_dev)
const struct device *_dev)
{
return container_of(_dev, struct ps3_system_bus_device, core);
}


@ -161,10 +161,7 @@ static inline struct vio_driver *to_vio_driver(struct device_driver *drv)
return container_of(drv, struct vio_driver, driver);
}
static inline struct vio_dev *to_vio_dev(struct device *dev)
{
return container_of(dev, struct vio_dev, dev);
}
#define to_vio_dev(__dev) container_of_const(__dev, struct vio_dev, dev)
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_VIO_H */
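
[ The to_vio_dev() change above is the idiom used throughout this
  series: container_of_const() preserves the constness of its argument,
  so a const struct device * yields a const struct vio_dev * while a
  plain pointer stays mutable, letting one macro serve both kinds of
  caller. A sketch with a hypothetical uevent callback:

      /* dev arrives const now that bus_type.uevent() takes a const
       * pointer; to_vio_dev() hands back a const vio_dev to match. */
      static int example_vio_uevent(const struct device *dev,
                                    struct kobj_uevent_env *env)
      {
              const struct vio_dev *vdev = to_vio_dev(dev);

              return add_uevent_var(env, "MODALIAS=vio:T%s", vdev->type);
      }
]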


@ -439,7 +439,7 @@ static void ps3_system_bus_shutdown(struct device *_dev)
dev_dbg(&dev->core, " <- %s:%d\n", __func__, __LINE__);
}
static int ps3_system_bus_uevent(struct device *_dev, struct kobj_uevent_env *env)
static int ps3_system_bus_uevent(const struct device *_dev, struct kobj_uevent_env *env)
{
struct ps3_system_bus_device *dev = ps3_dev_to_system_bus_dev(_dev);


@ -426,9 +426,14 @@ static struct attribute *ibmebus_bus_device_attrs[] = {
};
ATTRIBUTE_GROUPS(ibmebus_bus_device);
static int ibmebus_bus_modalias(const struct device *dev, struct kobj_uevent_env *env)
{
return of_device_uevent_modalias(dev, env);
}
struct bus_type ibmebus_bus_type = {
.name = "ibmebus",
.uevent = of_device_uevent_modalias,
.uevent = ibmebus_bus_modalias,
.bus_groups = ibmbus_bus_groups,
.match = ibmebus_bus_bus_match,
.probe = ibmebus_bus_device_probe,


@ -1609,10 +1609,10 @@ static int vio_bus_match(struct device *dev, struct device_driver *drv)
return (ids != NULL) && (vio_match_device(ids, vio_dev) != NULL);
}
static int vio_hotplug(struct device *dev, struct kobj_uevent_env *env)
static int vio_hotplug(const struct device *dev, struct kobj_uevent_env *env)
{
const struct vio_dev *vio_dev = to_vio_dev(dev);
struct device_node *dn;
const struct device_node *dn;
const char *cp;
dn = dev->of_node;


@ -113,48 +113,6 @@ static void fill_cacheinfo(struct cacheinfo **this_leaf,
}
}
int init_cache_level(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
struct device_node *np = of_cpu_device_node_get(cpu);
struct device_node *prev = NULL;
int levels = 0, leaves = 0, level;
if (of_property_read_bool(np, "cache-size"))
++leaves;
if (of_property_read_bool(np, "i-cache-size"))
++leaves;
if (of_property_read_bool(np, "d-cache-size"))
++leaves;
if (leaves > 0)
levels = 1;
prev = np;
while ((np = of_find_next_cache_node(np))) {
of_node_put(prev);
prev = np;
if (!of_device_is_compatible(np, "cache"))
break;
if (of_property_read_u32(np, "cache-level", &level))
break;
if (level <= levels)
break;
if (of_property_read_bool(np, "cache-size"))
++leaves;
if (of_property_read_bool(np, "i-cache-size"))
++leaves;
if (of_property_read_bool(np, "d-cache-size"))
++leaves;
levels = level;
}
of_node_put(np);
this_cpu_ci->num_levels = levels;
this_cpu_ci->num_leaves = leaves;
return 0;
}
int populate_cache_leaves(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);


@ -4,7 +4,7 @@
extern const struct dma_map_ops *dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
static inline const struct dma_map_ops *get_arch_dma_ops(void)
{
/* sparc32 uses per-device dma_ops */
return IS_ENABLED(CONFIG_SPARC64) ? dma_ops : NULL;


@ -488,10 +488,7 @@ static inline struct vio_driver *to_vio_driver(struct device_driver *drv)
return container_of(drv, struct vio_driver, driver);
}
static inline struct vio_dev *to_vio_dev(struct device *dev)
{
return container_of(dev, struct vio_dev, dev);
}
#define to_vio_dev(__dev) container_of_const(__dev, struct vio_dev, dev)
int vio_ldc_send(struct vio_driver_state *vio, void *data, int len);
void vio_link_state_change(struct vio_driver_state *vio, int event);


@ -46,7 +46,7 @@ static const struct vio_device_id *vio_match_device(
return NULL;
}
static int vio_hotplug(struct device *dev, struct kobj_uevent_env *env)
static int vio_hotplug(const struct device *dev, struct kobj_uevent_env *env)
{
const struct vio_dev *vio_dev = to_vio_dev(dev);


@ -4,7 +4,7 @@
extern const struct dma_map_ops *dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
static inline const struct dma_map_ops *get_arch_dma_ops(void)
{
return dma_ops;
}


@ -1200,7 +1200,7 @@ struct class block_class = {
.dev_uevent = block_uevent,
};
static char *block_devnode(struct device *dev, umode_t *mode,
static char *block_devnode(const struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid)
{
struct gendisk *disk = dev_to_disk(dev);


@ -254,9 +254,9 @@ static void part_release(struct device *dev)
iput(dev_to_bdev(dev)->bd_inode);
}
static int part_uevent(struct device *dev, struct kobj_uevent_env *env)
static int part_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct block_device *part = dev_to_bdev(dev);
const struct block_device *part = dev_to_bdev(dev);
add_uevent_var(env, "PARTN=%u", part->bd_partno);
if (part->bd_meta_info && part->bd_meta_info->volname[0])


@ -1014,7 +1014,7 @@ static int acpi_bus_match(struct device *dev, struct device_driver *drv)
&& !acpi_match_device_ids(acpi_dev, acpi_drv->ids);
}
static int acpi_device_uevent(struct device *dev, struct kobj_uevent_env *env)
static int acpi_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
return __acpi_device_uevent_modalias(to_acpi_device(dev), env);
}


@ -133,7 +133,7 @@ static void acpi_hide_nondev_subnodes(struct acpi_device_data *data)
* -EINVAL: output error
* -ENOMEM: output is truncated
*/
static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias,
static int create_pnp_modalias(const struct acpi_device *acpi_dev, char *modalias,
int size)
{
int len;
@ -191,7 +191,7 @@ static int create_pnp_modalias(struct acpi_device *acpi_dev, char *modalias,
* only be called for devices having ACPI_DT_NAMESPACE_HID in their list of
* ACPI/PNP IDs.
*/
static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
static int create_of_modalias(const struct acpi_device *acpi_dev, char *modalias,
int size)
{
struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
@ -239,7 +239,7 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
return len;
}
int __acpi_device_uevent_modalias(struct acpi_device *adev,
int __acpi_device_uevent_modalias(const struct acpi_device *adev,
struct kobj_uevent_env *env)
{
int len;
@ -277,7 +277,7 @@ int __acpi_device_uevent_modalias(struct acpi_device *adev,
* Because other buses do not support ACPI HIDs & CIDs, e.g. for a device with
* hid:IBM0001 and cid:ACPI0001 you get: "acpi:IBM0001:ACPI0001".
*/
int acpi_device_uevent_modalias(struct device *dev, struct kobj_uevent_env *env)
int acpi_device_uevent_modalias(const struct device *dev, struct kobj_uevent_env *env)
{
return __acpi_device_uevent_modalias(acpi_companion_match(dev), env);
}


@ -120,7 +120,7 @@ int acpi_bus_register_early_device(int type);
Device Matching and Notification
-------------------------------------------------------------------------- */
struct acpi_device *acpi_companion_match(const struct device *dev);
int __acpi_device_uevent_modalias(struct acpi_device *adev,
int __acpi_device_uevent_modalias(const struct acpi_device *adev,
struct kobj_uevent_env *env);
/* --------------------------------------------------------------------------


@ -81,6 +81,7 @@ static inline bool acpi_pptt_match_type(int table_type, int type)
* acpi_pptt_walk_cache() - Attempt to find the requested acpi_pptt_cache
* @table_hdr: Pointer to the head of the PPTT table
* @local_level: passed res reflects this cache level
* @split_levels: Number of split cache levels (data/instruction).
* @res: cache resource in the PPTT we want to walk
* @found: returns a pointer to the requested level if found
* @level: the requested cache level
@ -100,6 +101,7 @@ static inline bool acpi_pptt_match_type(int table_type, int type)
*/
static unsigned int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr,
unsigned int local_level,
unsigned int *split_levels,
struct acpi_subtable_header *res,
struct acpi_pptt_cache **found,
unsigned int level, int type)
@ -113,8 +115,17 @@ static unsigned int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr,
while (cache) {
local_level++;
if (!(cache->flags & ACPI_PPTT_CACHE_TYPE_VALID)) {
cache = fetch_pptt_cache(table_hdr, cache->next_level_of_cache);
continue;
}
if (split_levels &&
(acpi_pptt_match_type(cache->attributes, ACPI_PPTT_CACHE_TYPE_DATA) ||
acpi_pptt_match_type(cache->attributes, ACPI_PPTT_CACHE_TYPE_INSTR)))
*split_levels = local_level;
if (local_level == level &&
cache->flags & ACPI_PPTT_CACHE_TYPE_VALID &&
acpi_pptt_match_type(cache->attributes, type)) {
if (*found != NULL && cache != *found)
pr_warn("Found duplicate cache level/type unable to determine uniqueness\n");
@ -135,8 +146,8 @@ static unsigned int acpi_pptt_walk_cache(struct acpi_table_header *table_hdr,
static struct acpi_pptt_cache *
acpi_find_cache_level(struct acpi_table_header *table_hdr,
struct acpi_pptt_processor *cpu_node,
unsigned int *starting_level, unsigned int level,
int type)
unsigned int *starting_level, unsigned int *split_levels,
unsigned int level, int type)
{
struct acpi_subtable_header *res;
unsigned int number_of_levels = *starting_level;
@ -149,7 +160,8 @@ acpi_find_cache_level(struct acpi_table_header *table_hdr,
resource++;
local_level = acpi_pptt_walk_cache(table_hdr, *starting_level,
res, &ret, level, type);
split_levels, res, &ret,
level, type);
/*
* we are looking for the max depth. Since it's potentially
* possible for a given node to have resources with differing
@ -165,29 +177,29 @@ acpi_find_cache_level(struct acpi_table_header *table_hdr,
}
/**
* acpi_count_levels() - Given a PPTT table, and a CPU node, count the caches
* acpi_count_levels() - Given a PPTT table, and a CPU node, count the cache
* levels and split cache levels (data/instruction).
* @table_hdr: Pointer to the head of the PPTT table
* @cpu_node: processor node we wish to count caches for
* @levels: Number of levels if success.
* @split_levels: Number of split cache levels (data/instruction) if
* success. Can be NULL.
*
* Given a processor node containing a processing unit, walk into it and count
* how many levels exist solely for it, and then walk up each level until we hit
* the root node (ignore the package level because it may be possible to have
* caches that exist across packages). Count the number of cache levels that
* exist at each level on the way up.
*
* Return: Total number of levels found.
* caches that exist across packages). Count the number of cache levels and
* split cache levels (data/instruction) that exist at each level on the way
* up.
*/
static int acpi_count_levels(struct acpi_table_header *table_hdr,
struct acpi_pptt_processor *cpu_node)
static void acpi_count_levels(struct acpi_table_header *table_hdr,
struct acpi_pptt_processor *cpu_node,
unsigned int *levels, unsigned int *split_levels)
{
int total_levels = 0;
do {
acpi_find_cache_level(table_hdr, cpu_node, &total_levels, 0, 0);
acpi_find_cache_level(table_hdr, cpu_node, levels, split_levels, 0, 0);
cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
} while (cpu_node);
return total_levels;
}
/**
@ -281,19 +293,6 @@ static struct acpi_pptt_processor *acpi_find_processor_node(struct acpi_table_he
return NULL;
}
static int acpi_find_cache_levels(struct acpi_table_header *table_hdr,
u32 acpi_cpu_id)
{
int number_of_levels = 0;
struct acpi_pptt_processor *cpu;
cpu = acpi_find_processor_node(table_hdr, acpi_cpu_id);
if (cpu)
number_of_levels = acpi_count_levels(table_hdr, cpu);
return number_of_levels;
}
static u8 acpi_cache_type(enum cache_type type)
{
switch (type) {
@ -334,7 +333,7 @@ static struct acpi_pptt_cache *acpi_find_cache_node(struct acpi_table_header *ta
while (cpu_node && !found) {
found = acpi_find_cache_level(table_hdr, cpu_node,
&total_levels, level, acpi_type);
&total_levels, NULL, level, acpi_type);
*node = cpu_node;
cpu_node = fetch_pptt_node(table_hdr, cpu_node->parent);
}
@ -602,32 +601,48 @@ static int check_acpi_cpu_flag(unsigned int cpu, int rev, u32 flag)
}
/**
* acpi_find_last_cache_level() - Determines the number of cache levels for a PE
* acpi_get_cache_info() - Determine the number of cache levels and
* split cache levels (data/instruction) for a PE.
* @cpu: Kernel logical CPU number
* @levels: Number of levels if success.
* @split_levels: Number of levels being split (i.e. data/instruction)
* if success. Can be NULL.
*
* Given a logical CPU number, returns the number of levels of cache represented
* in the PPTT. Errors caused by lack of a PPTT table, or otherwise, return 0
* indicating we didn't find any cache levels.
*
* Return: Cache levels visible to this core.
* Return: -ENOENT if no PPTT table or no PPTT processor struct found.
* 0 on success.
*/
int acpi_find_last_cache_level(unsigned int cpu)
int acpi_get_cache_info(unsigned int cpu, unsigned int *levels,
unsigned int *split_levels)
{
u32 acpi_cpu_id;
struct acpi_pptt_processor *cpu_node;
struct acpi_table_header *table;
int number_of_levels = 0;
u32 acpi_cpu_id;
*levels = 0;
if (split_levels)
*split_levels = 0;
table = acpi_get_pptt();
if (!table)
return -ENOENT;
pr_debug("Cache Setup find last level CPU=%d\n", cpu);
pr_debug("Cache Setup: find cache levels for CPU=%d\n", cpu);
acpi_cpu_id = get_acpi_id_for_cpu(cpu);
number_of_levels = acpi_find_cache_levels(table, acpi_cpu_id);
pr_debug("Cache Setup find last level level=%d\n", number_of_levels);
cpu_node = acpi_find_processor_node(table, acpi_cpu_id);
if (!cpu_node)
return -ENOENT;
return number_of_levels;
acpi_count_levels(table, cpu_node, levels, split_levels);
pr_debug("Cache Setup: last_level=%d split_levels=%d\n",
*levels, split_levels ? *split_levels : -1);
return 0;
}
/**


@ -235,9 +235,9 @@ static int amba_match(struct device *dev, struct device_driver *drv)
return amba_lookup(pcdrv->id_table, pcdev) != NULL;
}
static int amba_uevent(struct device *dev, struct kobj_uevent_env *env)
static int amba_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct amba_device *pcdev = to_amba_device(dev);
const struct amba_device *pcdev = to_amba_device(dev);
int retval = 0;
retval = add_uevent_var(env, "AMBA_ID=%08x", pcdev->periphid);


@ -736,7 +736,7 @@ void update_siblings_masks(unsigned int cpuid)
ret = detect_cache_attributes(cpuid);
if (ret && ret != -ENOENT)
pr_info("Early cacheinfo failed, ret = %d\n", ret);
pr_info("Early cacheinfo allocation failed, ret = %d\n", ret);
/* update core and thread sibling masks */
for_each_online_cpu(cpu) {
@ -825,7 +825,7 @@ __weak int __init parse_acpi_topology(void)
#if defined(CONFIG_ARM64) || defined(CONFIG_RISCV)
void __init init_cpu_topology(void)
{
int ret;
int cpu, ret;
reset_cpu_topology();
ret = parse_acpi_topology();
@ -840,6 +840,14 @@ void __init init_cpu_topology(void)
reset_cpu_topology();
return;
}
for_each_possible_cpu(cpu) {
ret = fetch_cache_info(cpu);
if (ret) {
pr_err("Early cacheinfo failed, ret = %d\n", ret);
break;
}
}
}
void store_cpu_topology(unsigned int cpuid)


@ -185,7 +185,7 @@ static int auxiliary_match(struct device *dev, struct device_driver *drv)
return !!auxiliary_match_id(auxdrv->id_table, auxdev);
}
static int auxiliary_uevent(struct device *dev, struct kobj_uevent_env *env)
static int auxiliary_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
const char *name, *p;


@ -52,8 +52,23 @@ struct subsys_private {
struct kset glue_dirs;
struct class *class;
struct lock_class_key lock_key;
};
#define to_subsys_private(obj) container_of(obj, struct subsys_private, subsys.kobj)
#define to_subsys_private(obj) container_of_const(obj, struct subsys_private, subsys.kobj)
static inline struct subsys_private *subsys_get(struct subsys_private *sp)
{
if (sp)
kset_get(&sp->subsys);
return sp;
}
static inline void subsys_put(struct subsys_private *sp)
{
if (sp)
kset_put(&sp->subsys);
}
struct driver_private {
struct kobject kobj;
@ -130,6 +145,8 @@ struct kobject *virtual_device_parent(struct device *dev);
extern int bus_add_device(struct device *dev);
extern void bus_probe_device(struct device *dev);
extern void bus_remove_device(struct device *dev);
void bus_notify(struct device *dev, enum bus_notifier_event value);
bool bus_is_registered(const struct bus_type *bus);
extern int bus_add_driver(struct device_driver *drv);
extern void bus_remove_driver(struct device_driver *drv);
@ -158,6 +175,8 @@ extern void device_block_probing(void);
extern void device_unblock_probing(void);
extern void deferred_probe_extend_timeout(void);
extern void driver_deferred_probe_trigger(void);
const char *device_get_devnode(const struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid, const char **tmp);
/* /sys/devices directory */
extern struct kset *devices_kset;

File diff suppressed because it is too large.


@ -229,8 +229,71 @@ static int cache_setup_of_node(unsigned int cpu)
return 0;
}
static int of_count_cache_leaves(struct device_node *np)
{
unsigned int leaves = 0;
if (of_property_read_bool(np, "cache-size"))
++leaves;
if (of_property_read_bool(np, "i-cache-size"))
++leaves;
if (of_property_read_bool(np, "d-cache-size"))
++leaves;
if (!leaves) {
/* The '[i-|d-|]cache-size' property is required, but
* if absent, fall back on the 'cache-unified' property.
*/
if (of_property_read_bool(np, "cache-unified"))
return 1;
else
return 2;
}
return leaves;
}
int init_of_cache_level(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
struct device_node *np = of_cpu_device_node_get(cpu);
struct device_node *prev = NULL;
unsigned int levels = 0, leaves, level;
leaves = of_count_cache_leaves(np);
if (leaves > 0)
levels = 1;
prev = np;
while ((np = of_find_next_cache_node(np))) {
of_node_put(prev);
prev = np;
if (!of_device_is_compatible(np, "cache"))
goto err_out;
if (of_property_read_u32(np, "cache-level", &level))
goto err_out;
if (level <= levels)
goto err_out;
leaves += of_count_cache_leaves(np);
levels = level;
}
of_node_put(np);
this_cpu_ci->num_levels = levels;
this_cpu_ci->num_leaves = leaves;
return 0;
err_out:
of_node_put(np);
return -EINVAL;
}
#else
static inline int cache_setup_of_node(unsigned int cpu) { return 0; }
int init_of_cache_level(unsigned int cpu) { return 0; }
#endif
int __weak cache_setup_acpi(unsigned int cpu)
@ -256,7 +319,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
struct cacheinfo *this_leaf, *sib_leaf;
unsigned int index;
unsigned int index, sib_index;
int ret = 0;
if (this_cpu_ci->cpu_map_populated)
@ -284,11 +347,13 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
if (i == cpu || !sib_cpu_ci->info_list)
continue;/* skip if itself or no cacheinfo */
sib_leaf = per_cpu_cacheinfo_idx(i, index);
if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
cpumask_set_cpu(i, &this_leaf->shared_cpu_map);
for (sib_index = 0; sib_index < cache_leaves(i); sib_index++) {
sib_leaf = per_cpu_cacheinfo_idx(i, sib_index);
if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
cpumask_set_cpu(cpu, &sib_leaf->shared_cpu_map);
cpumask_set_cpu(i, &this_leaf->shared_cpu_map);
break;
}
}
}
/* record the maximum cache line size */
@ -302,7 +367,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
static void cache_shared_cpu_map_remove(unsigned int cpu)
{
struct cacheinfo *this_leaf, *sib_leaf;
unsigned int sibling, index;
unsigned int sibling, index, sib_index;
for (index = 0; index < cache_leaves(cpu); index++) {
this_leaf = per_cpu_cacheinfo_idx(cpu, index);
@ -313,9 +378,14 @@ static void cache_shared_cpu_map_remove(unsigned int cpu)
if (sibling == cpu || !sib_cpu_ci->info_list)
continue;/* skip if itself or no cacheinfo */
sib_leaf = per_cpu_cacheinfo_idx(sibling, index);
cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
for (sib_index = 0; sib_index < cache_leaves(sibling); sib_index++) {
sib_leaf = per_cpu_cacheinfo_idx(sibling, sib_index);
if (cache_leaves_are_shared(this_leaf, sib_leaf)) {
cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map);
cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map);
break;
}
}
}
}
}
@ -326,10 +396,6 @@ static void free_cache_attributes(unsigned int cpu)
return;
cache_shared_cpu_map_remove(cpu);
kfree(per_cpu_cacheinfo(cpu));
per_cpu_cacheinfo(cpu) = NULL;
cache_leaves(cpu) = 0;
}
int __weak init_cache_level(unsigned int cpu)
@ -342,29 +408,71 @@ int __weak populate_cache_leaves(unsigned int cpu)
return -ENOENT;
}
int detect_cache_attributes(unsigned int cpu)
static inline
int allocate_cache_info(int cpu)
{
int ret;
/* Since early detection of the cacheinfo is allowed via this
* function and this also gets called as CPU hotplug callbacks via
* cacheinfo_cpu_online, the initialisation can be skipped and only
* CPU maps can be updated as the CPU online status would be updated
* if called via cacheinfo_cpu_online path.
*/
if (per_cpu_cacheinfo(cpu))
goto update_cpu_map;
if (init_cache_level(cpu) || !cache_leaves(cpu))
return -ENOENT;
per_cpu_cacheinfo(cpu) = kcalloc(cache_leaves(cpu),
sizeof(struct cacheinfo), GFP_ATOMIC);
if (per_cpu_cacheinfo(cpu) == NULL) {
if (!per_cpu_cacheinfo(cpu)) {
cache_leaves(cpu) = 0;
return -ENOMEM;
}
return 0;
}
int fetch_cache_info(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci;
unsigned int levels = 0, split_levels = 0;
int ret;
if (acpi_disabled) {
ret = init_of_cache_level(cpu);
if (ret < 0)
return ret;
} else {
ret = acpi_get_cache_info(cpu, &levels, &split_levels);
if (ret < 0)
return ret;
this_cpu_ci = get_cpu_cacheinfo(cpu);
this_cpu_ci->num_levels = levels;
/*
* This assumes that:
* - there cannot be any split caches (data/instruction)
* above a unified cache
* - data/instruction caches come in pairs
*/
this_cpu_ci->num_leaves = levels + split_levels;
}
if (!cache_leaves(cpu))
return -ENOENT;
return allocate_cache_info(cpu);
}
int detect_cache_attributes(unsigned int cpu)
{
int ret;
/* Since early initialization/allocation of the cacheinfo is allowed
* via fetch_cache_info() and this also gets called as CPU hotplug
* callbacks via cacheinfo_cpu_online, the init/alloc can be skipped
* as it will happen only once (the cacheinfo memory is never freed).
* Just populate the cacheinfo.
*/
if (per_cpu_cacheinfo(cpu))
goto populate_leaves;
if (init_cache_level(cpu) || !cache_leaves(cpu))
return -ENOENT;
ret = allocate_cache_info(cpu);
if (ret)
return ret;
populate_leaves:
/*
* populate_cache_leaves() may completely setup the cache leaves and
* shared_cpu_map or it may leave it partially setup.
@ -373,7 +481,6 @@ int detect_cache_attributes(unsigned int cpu)
if (ret)
goto free_ci;
update_cpu_map:
/*
* For systems using DT for cache hierarchy, fw_token
* and shared_cpu_map will be set up here only if they are


@ -53,6 +53,8 @@ static void class_release(struct kobject *kobj)
pr_debug("class '%s': release.\n", class->name);
class->p = NULL;
if (class->class_release)
class->class_release(class);
else
@ -64,7 +66,7 @@ static void class_release(struct kobject *kobj)
static const struct kobj_ns_type_operations *class_child_ns_type(const struct kobject *kobj)
{
struct subsys_private *cp = to_subsys_private(kobj);
const struct subsys_private *cp = to_subsys_private(kobj);
struct class *class = cp->class;
return class->ns_type;
@ -75,7 +77,7 @@ static const struct sysfs_ops class_sysfs_ops = {
.store = class_attr_store,
};
static struct kobj_type class_ktype = {
static const struct kobj_type class_ktype = {
.sysfs_ops = &class_sysfs_ops,
.release = class_release,
.child_ns_type = class_child_ns_type,
@ -97,6 +99,7 @@ int class_create_file_ns(struct class *cls, const struct class_attribute *attr,
error = -EINVAL;
return error;
}
EXPORT_SYMBOL_GPL(class_create_file_ns);
void class_remove_file_ns(struct class *cls, const struct class_attribute *attr,
const void *ns)
@ -104,6 +107,7 @@ void class_remove_file_ns(struct class *cls, const struct class_attribute *attr,
if (cls)
sysfs_remove_file_ns(&cls->p->subsys.kobj, &attr->attr, ns);
}
EXPORT_SYMBOL_GPL(class_remove_file_ns);
static struct class *class_get(struct class *cls)
{
@ -186,17 +190,21 @@ int __class_register(struct class *cls, struct lock_class_key *key)
cls->p = cp;
error = kset_register(&cp->subsys);
if (error) {
kfree(cp);
return error;
}
if (error)
goto err_out;
error = class_add_groups(class_get(cls), cls->class_groups);
class_put(cls);
if (error) {
kobject_del(&cp->subsys.kobj);
kfree_const(cp->subsys.kobj.name);
kfree(cp);
goto err_out;
}
return 0;
err_out:
kfree(cp);
cls->p = NULL;
return error;
}
EXPORT_SYMBOL_GPL(__class_register);
@ -207,6 +215,7 @@ void class_unregister(struct class *cls)
class_remove_groups(cls, cls->class_groups);
kset_unregister(&cls->p->subsys);
}
EXPORT_SYMBOL_GPL(class_unregister);
static void class_create_release(struct class *cls)
{
@ -270,6 +279,7 @@ void class_destroy(struct class *cls)
class_unregister(cls);
}
EXPORT_SYMBOL_GPL(class_destroy);
/**
* class_dev_iter_init - initialize class device iterator
@ -454,6 +464,7 @@ int class_interface_register(struct class_interface *class_intf)
return 0;
}
EXPORT_SYMBOL_GPL(class_interface_register);
void class_interface_unregister(struct class_interface *class_intf)
{
@ -476,6 +487,7 @@ void class_interface_unregister(struct class_interface *class_intf)
class_put(parent);
}
EXPORT_SYMBOL_GPL(class_interface_unregister);
ssize_t show_class_attr_string(struct class *class,
struct class_attribute *attr, char *buf)
@ -582,11 +594,3 @@ int __init classes_init(void)
return -ENOMEM;
return 0;
}
EXPORT_SYMBOL_GPL(class_create_file_ns);
EXPORT_SYMBOL_GPL(class_remove_file_ns);
EXPORT_SYMBOL_GPL(class_unregister);
EXPORT_SYMBOL_GPL(class_destroy);
EXPORT_SYMBOL_GPL(class_interface_register);
EXPORT_SYMBOL_GPL(class_interface_unregister);


@ -125,7 +125,7 @@ static void component_debugfs_add(struct aggregate_device *m)
static void component_debugfs_del(struct aggregate_device *m)
{
debugfs_remove(debugfs_lookup(dev_name(m->parent), component_debugfs_dir));
debugfs_lookup_and_remove(dev_name(m->parent), component_debugfs_dir);
}
#else
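
[ This is one of the debugfs memory leak fixes mentioned in the pull
  message: debugfs_lookup() returns the dentry with an extra reference
  that the old lookup-then-remove pattern never dropped, while
  debugfs_lookup_and_remove() looks up, removes, and drops the
  reference in one call. A sketch of the pattern in a hypothetical
  driver:

      #include <linux/debugfs.h>

      static struct dentry *example_dir;

      static void example_debugfs_del(void)
      {
              /* Old, leaky pattern:
               *   debugfs_remove(debugfs_lookup("stats", example_dir));
               * The reference taken by debugfs_lookup() was never
               * dput(). The combined helper fixes that:
               */
              debugfs_lookup_and_remove("stats", example_dir);
      }
]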


@ -54,11 +54,12 @@ static LIST_HEAD(deferred_sync);
static unsigned int defer_sync_state_count = 1;
static DEFINE_MUTEX(fwnode_link_lock);
static bool fw_devlink_is_permissive(void);
static void __fw_devlink_link_to_consumers(struct device *dev);
static bool fw_devlink_drv_reg_done;
static bool fw_devlink_best_effort;
/**
* fwnode_link_add - Create a link between two fwnode_handles.
* __fwnode_link_add - Create a link between two fwnode_handles.
* @con: Consumer end of the link.
* @sup: Supplier end of the link.
*
@ -74,35 +75,42 @@ static bool fw_devlink_best_effort;
* Attempts to create duplicate links between the same pair of fwnode handles
* are ignored and there is no reference counting.
*/
int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup)
static int __fwnode_link_add(struct fwnode_handle *con,
struct fwnode_handle *sup, u8 flags)
{
struct fwnode_link *link;
int ret = 0;
mutex_lock(&fwnode_link_lock);
list_for_each_entry(link, &sup->consumers, s_hook)
if (link->consumer == con)
goto out;
if (link->consumer == con) {
link->flags |= flags;
return 0;
}
link = kzalloc(sizeof(*link), GFP_KERNEL);
if (!link) {
ret = -ENOMEM;
goto out;
}
if (!link)
return -ENOMEM;
link->supplier = sup;
INIT_LIST_HEAD(&link->s_hook);
link->consumer = con;
INIT_LIST_HEAD(&link->c_hook);
link->flags = flags;
list_add(&link->s_hook, &sup->consumers);
list_add(&link->c_hook, &con->suppliers);
pr_debug("%pfwP Linked as a fwnode consumer to %pfwP\n",
con, sup);
out:
mutex_unlock(&fwnode_link_lock);
return 0;
}
int fwnode_link_add(struct fwnode_handle *con, struct fwnode_handle *sup)
{
int ret;
mutex_lock(&fwnode_link_lock);
ret = __fwnode_link_add(con, sup, 0);
mutex_unlock(&fwnode_link_lock);
return ret;
}
@ -121,6 +129,19 @@ static void __fwnode_link_del(struct fwnode_link *link)
kfree(link);
}
/**
* __fwnode_link_cycle - Mark a fwnode link as being part of a cycle.
* @link: the fwnode_link to be marked
*
* The fwnode_link_lock needs to be held when this function is called.
*/
static void __fwnode_link_cycle(struct fwnode_link *link)
{
pr_debug("%pfwf: Relaxing link with %pfwf\n",
link->consumer, link->supplier);
link->flags |= FWLINK_FLAG_CYCLE;
}
/**
* fwnode_links_purge_suppliers - Delete all supplier links of fwnode_handle.
* @fwnode: fwnode whose supplier links need to be deleted
@ -181,6 +202,51 @@ void fw_devlink_purge_absent_suppliers(struct fwnode_handle *fwnode)
}
EXPORT_SYMBOL_GPL(fw_devlink_purge_absent_suppliers);
/**
* __fwnode_links_move_consumers - Move consumer from @from to @to fwnode_handle
* @from: move consumers away from this fwnode
* @to: move consumers to this fwnode
*
* Move all consumer links from @from fwnode to @to fwnode.
*/
static void __fwnode_links_move_consumers(struct fwnode_handle *from,
struct fwnode_handle *to)
{
struct fwnode_link *link, *tmp;
list_for_each_entry_safe(link, tmp, &from->consumers, s_hook) {
__fwnode_link_add(link->consumer, to, link->flags);
__fwnode_link_del(link);
}
}
/**
* __fw_devlink_pickup_dangling_consumers - Pick up dangling consumers
* @fwnode: fwnode from which to pick up dangling consumers
* @new_sup: fwnode of new supplier
*
* If the @fwnode has a corresponding struct device and the device supports
* probing (that is, added to a bus), then we want to let fw_devlink create
* MANAGED device links to this device, so leave @fwnode and its descendants'
* fwnode links alone.
*
* Otherwise, move its consumers to the new supplier @new_sup.
*/
static void __fw_devlink_pickup_dangling_consumers(struct fwnode_handle *fwnode,
struct fwnode_handle *new_sup)
{
struct fwnode_handle *child;
if (fwnode->dev && fwnode->dev->bus)
return;
fwnode->flags |= FWNODE_FLAG_NOT_DEVICE;
__fwnode_links_move_consumers(fwnode, new_sup);
fwnode_for_each_available_child_node(fwnode, child)
__fw_devlink_pickup_dangling_consumers(child, new_sup);
}
static DEFINE_MUTEX(device_links_lock);
DEFINE_STATIC_SRCU(device_links_srcu);
@ -230,6 +296,12 @@ static bool device_is_ancestor(struct device *dev, struct device *target)
return false;
}
static inline bool device_link_flag_is_sync_state_only(u32 flags)
{
return (flags & ~(DL_FLAG_INFERRED | DL_FLAG_CYCLE)) ==
(DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED);
}
/**
* device_is_dependent - Check if one device depends on another one
* @dev: Device to check dependencies for.
@ -256,8 +328,7 @@ int device_is_dependent(struct device *dev, void *target)
return ret;
list_for_each_entry(link, &dev->links.consumers, s_node) {
if ((link->flags & ~DL_FLAG_INFERRED) ==
(DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED))
if (device_link_flag_is_sync_state_only(link->flags))
continue;
if (link->consumer == target)
@ -330,8 +401,7 @@ static int device_reorder_to_tail(struct device *dev, void *not_used)
device_for_each_child(dev, NULL, device_reorder_to_tail);
list_for_each_entry(link, &dev->links.consumers, s_node) {
if ((link->flags & ~DL_FLAG_INFERRED) ==
(DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED))
if (device_link_flag_is_sync_state_only(link->flags))
continue;
device_reorder_to_tail(link->consumer, NULL);
}
@ -592,7 +662,8 @@ postcore_initcall(devlink_class_init);
DL_FLAG_AUTOREMOVE_SUPPLIER | \
DL_FLAG_AUTOPROBE_CONSUMER | \
DL_FLAG_SYNC_STATE_ONLY | \
DL_FLAG_INFERRED)
DL_FLAG_INFERRED | \
DL_FLAG_CYCLE)
#define DL_ADD_VALID_FLAGS (DL_MANAGED_LINK_FLAGS | DL_FLAG_STATELESS | \
DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE)
@ -661,8 +732,6 @@ struct device_link *device_link_add(struct device *consumer,
if (!consumer || !supplier || consumer == supplier ||
flags & ~DL_ADD_VALID_FLAGS ||
(flags & DL_FLAG_STATELESS && flags & DL_MANAGED_LINK_FLAGS) ||
(flags & DL_FLAG_SYNC_STATE_ONLY &&
(flags & ~DL_FLAG_INFERRED) != DL_FLAG_SYNC_STATE_ONLY) ||
(flags & DL_FLAG_AUTOPROBE_CONSUMER &&
flags & (DL_FLAG_AUTOREMOVE_CONSUMER |
DL_FLAG_AUTOREMOVE_SUPPLIER)))
@ -678,6 +747,10 @@ struct device_link *device_link_add(struct device *consumer,
if (!(flags & DL_FLAG_STATELESS))
flags |= DL_FLAG_MANAGED;
if (flags & DL_FLAG_SYNC_STATE_ONLY &&
!device_link_flag_is_sync_state_only(flags))
return NULL;
device_links_write_lock();
device_pm_lock();
@ -942,6 +1015,21 @@ static bool dev_is_best_effort(struct device *dev)
(dev->fwnode && (dev->fwnode->flags & FWNODE_FLAG_BEST_EFFORT));
}
static struct fwnode_handle *fwnode_links_check_suppliers(
struct fwnode_handle *fwnode)
{
struct fwnode_link *link;
if (!fwnode || fw_devlink_is_permissive())
return NULL;
list_for_each_entry(link, &fwnode->suppliers, c_hook)
if (!(link->flags & FWLINK_FLAG_CYCLE))
return link->supplier;
return NULL;
}
/**
* device_links_check_suppliers - Check presence of supplier drivers.
* @dev: Consumer device.
@ -969,11 +1057,8 @@ int device_links_check_suppliers(struct device *dev)
* probe.
*/
mutex_lock(&fwnode_link_lock);
if (dev->fwnode && !list_empty(&dev->fwnode->suppliers) &&
!fw_devlink_is_permissive()) {
sup_fw = list_first_entry(&dev->fwnode->suppliers,
struct fwnode_link,
c_hook)->supplier;
sup_fw = fwnode_links_check_suppliers(dev->fwnode);
if (sup_fw) {
if (!dev_is_best_effort(dev)) {
fwnode_ret = -EPROBE_DEFER;
dev_err_probe(dev, -EPROBE_DEFER,
@ -1162,7 +1247,9 @@ static ssize_t waiting_for_supplier_show(struct device *dev,
bool val;
device_lock(dev);
val = !list_empty(&dev->fwnode->suppliers);
mutex_lock(&fwnode_link_lock);
val = !!fwnode_links_check_suppliers(dev->fwnode);
mutex_unlock(&fwnode_link_lock);
device_unlock(dev);
return sysfs_emit(buf, "%u\n", val);
}
@ -1225,16 +1312,23 @@ void device_links_driver_bound(struct device *dev)
* them. So, fw_devlink no longer needs to create device links to any
* of the device's suppliers.
*
* Also, if a child firmware node of this bound device is not added as
* a device by now, assume it is never going to be added and make sure
* other devices don't defer probe indefinitely by waiting for such a
* child device.
* Also, if a child firmware node of this bound device is not added as a
* device by now, assume it is never going to be added. Make this bound
* device the fallback supplier to the dangling consumers of the child
* firmware node because this bound device is probably implementing the
* child firmware node functionality and we don't want the dangling
* consumers to defer probe indefinitely waiting for a device for the
* child firmware node.
*/
if (dev->fwnode && dev->fwnode->dev == dev) {
struct fwnode_handle *child;
fwnode_links_purge_suppliers(dev->fwnode);
mutex_lock(&fwnode_link_lock);
fwnode_for_each_available_child_node(dev->fwnode, child)
fw_devlink_purge_absent_suppliers(child);
__fw_devlink_pickup_dangling_consumers(child,
dev->fwnode);
__fw_devlink_link_to_consumers(dev);
mutex_unlock(&fwnode_link_lock);
}
device_remove_file(dev, &dev_attr_waiting_for_supplier);
@ -1591,8 +1685,11 @@ static int __init fw_devlink_strict_setup(char *arg)
}
early_param("fw_devlink.strict", fw_devlink_strict_setup);
u32 fw_devlink_get_flags(void)
static inline u32 fw_devlink_get_flags(u8 fwlink_flags)
{
if (fwlink_flags & FWLINK_FLAG_CYCLE)
return FW_DEVLINK_FLAGS_PERMISSIVE | DL_FLAG_CYCLE;
return fw_devlink_flags;
}
@ -1630,7 +1727,7 @@ static void fw_devlink_relax_link(struct device_link *link)
if (!(link->flags & DL_FLAG_INFERRED))
return;
if (link->flags == (DL_FLAG_MANAGED | FW_DEVLINK_FLAGS_PERMISSIVE))
if (device_link_flag_is_sync_state_only(link->flags))
return;
pm_runtime_drop_link(link);
@ -1727,44 +1824,138 @@ static void fw_devlink_unblock_consumers(struct device *dev)
device_links_write_unlock();
}
/**
* fw_devlink_relax_cycle - Convert cyclic links to SYNC_STATE_ONLY links
* @con: Device to check dependencies for.
* @sup: Device to check against.
*
* Check if @sup depends on @con or any device dependent on it (its child or
* its consumer etc). When such a cyclic dependency is found, convert all
* device links created solely by fw_devlink into SYNC_STATE_ONLY device links.
* This is the equivalent of doing fw_devlink=permissive just between the
* devices in the cycle. We need to do this because, at this point, fw_devlink
* can't tell which of these dependencies is not a real dependency.
*
* Return 1 if a cycle is found. Otherwise, return 0.
*/
static int fw_devlink_relax_cycle(struct device *con, void *sup)
static bool fwnode_init_without_drv(struct fwnode_handle *fwnode)
{
struct device_link *link;
int ret;
struct device *dev;
bool ret;
if (con == sup)
return 1;
if (!(fwnode->flags & FWNODE_FLAG_INITIALIZED))
return false;
ret = device_for_each_child(con, sup, fw_devlink_relax_cycle);
if (ret)
return ret;
dev = get_dev_from_fwnode(fwnode);
ret = !dev || dev->links.status == DL_DEV_NO_DRIVER;
put_device(dev);
list_for_each_entry(link, &con->links.consumers, s_node) {
if ((link->flags & ~DL_FLAG_INFERRED) ==
(DL_FLAG_SYNC_STATE_ONLY | DL_FLAG_MANAGED))
continue;
return ret;
}
if (!fw_devlink_relax_cycle(link->consumer, sup))
continue;
static bool fwnode_ancestor_init_without_drv(struct fwnode_handle *fwnode)
{
struct fwnode_handle *parent;
ret = 1;
fw_devlink_relax_link(link);
fwnode_for_each_parent_node(fwnode, parent) {
if (fwnode_init_without_drv(parent)) {
fwnode_handle_put(parent);
return true;
}
}
return false;
}
/**
* __fw_devlink_relax_cycles - Relax and mark dependency cycles.
* @con: Potential consumer device.
* @sup_handle: Potential supplier's fwnode.
*
* Needs to be called with fwnode_lock and device link lock held.
*
* Check if @sup_handle or any of its ancestors or suppliers directly/indirectly
* depend on @con. This function can detect multiple cycles between @sup_handle
* and @con. When such dependency cycles are found, convert all device links
* created solely by fw_devlink into SYNC_STATE_ONLY device links. Also, mark
* all fwnode links in the cycle with FWLINK_FLAG_CYCLE so that when they are
* converted into a device link in the future, they are created as
* SYNC_STATE_ONLY device links. This is the equivalent of doing
* fw_devlink=permissive just between the devices in the cycle. We need to do
* this because, at this point, fw_devlink can't tell which of these
* dependencies is not a real dependency.
*
* Return true if one or more cycles were found. Otherwise, return false.
*/
static bool __fw_devlink_relax_cycles(struct device *con,
struct fwnode_handle *sup_handle)
{
struct device *sup_dev = NULL, *par_dev = NULL;
struct fwnode_link *link;
struct device_link *dev_link;
bool ret = false;
if (!sup_handle)
return false;
/*
* We aren't trying to find all cycles. Just a cycle between con and
* sup_handle.
*/
if (sup_handle->flags & FWNODE_FLAG_VISITED)
return false;
sup_handle->flags |= FWNODE_FLAG_VISITED;
sup_dev = get_dev_from_fwnode(sup_handle);
/* Termination condition. */
if (sup_dev == con) {
ret = true;
goto out;
}
/*
* If sup_dev is bound to a driver and @con hasn't started binding to a
* driver, sup_dev can't be a consumer of @con. So, no need to check
* further.
*/
if (sup_dev && sup_dev->links.status == DL_DEV_DRIVER_BOUND &&
con->links.status == DL_DEV_NO_DRIVER) {
ret = false;
goto out;
}
list_for_each_entry(link, &sup_handle->suppliers, c_hook) {
if (__fw_devlink_relax_cycles(con, link->supplier)) {
__fwnode_link_cycle(link);
ret = true;
}
}
/*
* Give priority to device parent over fwnode parent to account for any
* quirks in how fwnodes are converted to devices.
*/
if (sup_dev)
par_dev = get_device(sup_dev->parent);
else
par_dev = fwnode_get_next_parent_dev(sup_handle);
if (par_dev && __fw_devlink_relax_cycles(con, par_dev->fwnode))
ret = true;
if (!sup_dev)
goto out;
list_for_each_entry(dev_link, &sup_dev->links.suppliers, c_node) {
/*
* Ignore a SYNC_STATE_ONLY flag only if it wasn't marked as
* such due to a cycle.
*/
if (device_link_flag_is_sync_state_only(dev_link->flags) &&
!(dev_link->flags & DL_FLAG_CYCLE))
continue;
if (__fw_devlink_relax_cycles(con,
dev_link->supplier->fwnode)) {
fw_devlink_relax_link(dev_link);
dev_link->flags |= DL_FLAG_CYCLE;
ret = true;
}
}
out:
sup_handle->flags &= ~FWNODE_FLAG_VISITED;
put_device(sup_dev);
put_device(par_dev);
return ret;
}
@ -1772,7 +1963,7 @@ static int fw_devlink_relax_cycle(struct device *con, void *sup)
* fw_devlink_create_devlink - Create a device link from a consumer to fwnode
* @con: consumer device for the device link
* @sup_handle: fwnode handle of supplier
* @flags: devlink flags
* @link: fwnode link that's being converted to a device link
*
* This function will try to create a device link between the consumer device
* @con and the supplier device represented by @sup_handle.
@ -1789,10 +1980,17 @@ static int fw_devlink_relax_cycle(struct device *con, void *sup)
* possible to do that in the future
*/
static int fw_devlink_create_devlink(struct device *con,
struct fwnode_handle *sup_handle, u32 flags)
struct fwnode_handle *sup_handle,
struct fwnode_link *link)
{
struct device *sup_dev;
int ret = 0;
u32 flags;
if (con->fwnode == link->consumer)
flags = fw_devlink_get_flags(link->flags);
else
flags = FW_DEVLINK_FLAGS_PERMISSIVE;
/*
* In some cases, a device P might also be a supplier to its child node
@ -1813,7 +2011,26 @@ static int fw_devlink_create_devlink(struct device *con,
fwnode_is_ancestor_of(sup_handle, con->fwnode))
return -EINVAL;
sup_dev = get_dev_from_fwnode(sup_handle);
/*
* SYNC_STATE_ONLY device links don't block probing and support cycles.
* So cycle detection isn't necessary and shouldn't be done.
*/
if (!(flags & DL_FLAG_SYNC_STATE_ONLY)) {
device_links_write_lock();
if (__fw_devlink_relax_cycles(con, sup_handle)) {
__fwnode_link_cycle(link);
flags = fw_devlink_get_flags(link->flags);
dev_info(con, "Fixed dependency cycle(s) with %pfwf\n",
sup_handle);
}
device_links_write_unlock();
}
if (sup_handle->flags & FWNODE_FLAG_NOT_DEVICE)
sup_dev = fwnode_get_next_parent_dev(sup_handle);
else
sup_dev = get_dev_from_fwnode(sup_handle);
if (sup_dev) {
/*
* If it's one of those drivers that don't actually bind to
@ -1822,71 +2039,34 @@ static int fw_devlink_create_devlink(struct device *con,
*/
if (sup_dev->links.status == DL_DEV_NO_DRIVER &&
sup_handle->flags & FWNODE_FLAG_INITIALIZED) {
dev_dbg(con,
"Not linking %pfwf - dev might never probe\n",
sup_handle);
ret = -EINVAL;
goto out;
}
/*
* If this fails, it is due to cycles in device links. Just
* give up on this link and treat it as invalid.
*/
if (!device_link_add(con, sup_dev, flags) &&
!(flags & DL_FLAG_SYNC_STATE_ONLY)) {
dev_info(con, "Fixing up cyclic dependency with %s\n",
dev_name(sup_dev));
device_links_write_lock();
fw_devlink_relax_cycle(con, sup_dev);
device_links_write_unlock();
device_link_add(con, sup_dev,
FW_DEVLINK_FLAGS_PERMISSIVE);
if (!device_link_add(con, sup_dev, flags)) {
dev_err(con, "Failed to create device link with %s\n",
dev_name(sup_dev));
ret = -EINVAL;
}
goto out;
}
/* Supplier that's already initialized without a struct device. */
if (sup_handle->flags & FWNODE_FLAG_INITIALIZED)
/*
* Supplier or supplier's ancestor already initialized without a struct
* device or being probed by a driver.
*/
if (fwnode_init_without_drv(sup_handle) ||
fwnode_ancestor_init_without_drv(sup_handle)) {
dev_dbg(con, "Not linking %pfwf - might never become dev\n",
sup_handle);
return -EINVAL;
/*
* DL_FLAG_SYNC_STATE_ONLY doesn't block probing and supports
* cycles. So cycle detection isn't necessary and shouldn't be
* done.
*/
if (flags & DL_FLAG_SYNC_STATE_ONLY)
return -EAGAIN;
/*
* If we can't find the supplier device from its fwnode, it might be
* due to a cyclic dependency between fwnodes. Some of these cycles can
* be broken by applying logic. Check for these types of cycles and
* break them so that devices in the cycle probe properly.
*
* If the supplier's parent is dependent on the consumer, then the
* consumer and supplier have a cyclic dependency. Since fw_devlink
* can't tell which of the inferred dependencies are incorrect, don't
* enforce probe ordering between any of the devices in this cyclic
* dependency. Do this by relaxing all the fw_devlink device links in
* this cycle and by treating the fwnode link between the consumer and
* the supplier as an invalid dependency.
*/
sup_dev = fwnode_get_next_parent_dev(sup_handle);
if (sup_dev && device_is_dependent(con, sup_dev)) {
dev_info(con, "Fixing up cyclic dependency with %pfwP (%s)\n",
sup_handle, dev_name(sup_dev));
device_links_write_lock();
fw_devlink_relax_cycle(con, sup_dev);
device_links_write_unlock();
ret = -EINVAL;
} else {
/*
* Can't check for cycles or no cycles. So let's try
* again later.
*/
ret = -EAGAIN;
}
ret = -EAGAIN;
out:
put_device(sup_dev);
return ret;
@ -1914,7 +2094,6 @@ static void __fw_devlink_link_to_consumers(struct device *dev)
struct fwnode_link *link, *tmp;
list_for_each_entry_safe(link, tmp, &fwnode->consumers, s_hook) {
u32 dl_flags = fw_devlink_get_flags();
struct device *con_dev;
bool own_link = true;
int ret;
@ -1944,14 +2123,13 @@ static void __fw_devlink_link_to_consumers(struct device *dev)
con_dev = NULL;
} else {
own_link = false;
dl_flags = FW_DEVLINK_FLAGS_PERMISSIVE;
}
}
if (!con_dev)
continue;
ret = fw_devlink_create_devlink(con_dev, fwnode, dl_flags);
ret = fw_devlink_create_devlink(con_dev, fwnode, link);
put_device(con_dev);
if (!own_link || ret == -EAGAIN)
continue;
@ -1971,10 +2149,7 @@ static void __fw_devlink_link_to_consumers(struct device *dev)
*
* The function creates normal (non-SYNC_STATE_ONLY) device links between @dev
* and the real suppliers of @dev. Once these device links are created, the
* fwnode links are deleted. When such device links are successfully created,
* this function is called recursively on those supplier devices. This is
* needed to detect and break some invalid cycles in fwnode links. See
* fw_devlink_create_devlink() for more details.
* fwnode links are deleted.
*
* In addition, it also looks at all the suppliers of the entire fwnode tree
* because some of the child devices of @dev that have not been added yet
@ -1992,44 +2167,16 @@ static void __fw_devlink_link_to_suppliers(struct device *dev,
bool own_link = (dev->fwnode == fwnode);
struct fwnode_link *link, *tmp;
struct fwnode_handle *child = NULL;
u32 dl_flags;
if (own_link)
dl_flags = fw_devlink_get_flags();
else
dl_flags = FW_DEVLINK_FLAGS_PERMISSIVE;
list_for_each_entry_safe(link, tmp, &fwnode->suppliers, c_hook) {
int ret;
struct device *sup_dev;
struct fwnode_handle *sup = link->supplier;
ret = fw_devlink_create_devlink(dev, sup, dl_flags);
ret = fw_devlink_create_devlink(dev, sup, link);
if (!own_link || ret == -EAGAIN)
continue;
__fwnode_link_del(link);
/* If no device link was created, nothing more to do. */
if (ret)
continue;
/*
* If a device link was successfully created to a supplier, we
* now need to try and link the supplier to all its suppliers.
*
* This is needed to detect and delete false dependencies in
* fwnode links that haven't been converted to a device link
* yet. See comments in fw_devlink_create_devlink() for more
* details on the false dependency.
*
* Without deleting these false dependencies, some devices will
* never probe because they'll keep waiting for their false
* dependency fwnode links to be converted to device links.
*/
sup_dev = get_dev_from_fwnode(sup);
__fw_devlink_link_to_suppliers(sup_dev, sup_dev->fwnode);
put_device(sup_dev);
}
/*
@ -2312,7 +2459,7 @@ static void device_get_ownership(const struct kobject *kobj, kuid_t *uid, kgid_t
dev->class->get_ownership(dev, uid, gid);
}
static struct kobj_type device_ktype = {
static const struct kobj_type device_ktype = {
.release = device_release,
.sysfs_ops = &dev_sysfs_ops,
.namespace = device_namespace,
@ -2345,9 +2492,9 @@ static const char *dev_uevent_name(const struct kobject *kobj)
return NULL;
}
static int dev_uevent(struct kobject *kobj, struct kobj_uevent_env *env)
static int dev_uevent(const struct kobject *kobj, struct kobj_uevent_env *env)
{
struct device *dev = kobj_to_dev(kobj);
const struct device *dev = kobj_to_dev(kobj);
int retval = 0;
/* add device node properties if present */
@ -2950,7 +3097,7 @@ struct kobj_ns_type_operations *class_dir_child_ns_type(const struct kobject *ko
return dir->class->ns_type;
}
static struct kobj_type class_dir_ktype = {
static const struct kobj_type class_dir_ktype = {
.release = class_dir_release,
.sysfs_ops = &kobj_sysfs_ops,
.child_ns_type = class_dir_child_ns_type
@@ -2984,8 +3131,9 @@ static DEFINE_MUTEX(gdp_mutex);
static struct kobject *get_device_parent(struct device *dev,
struct device *parent)
{
struct kobject *kobj = NULL;
if (dev->class) {
struct kobject *kobj = NULL;
struct kobject *parent_kobj;
struct kobject *k;
@@ -3033,8 +3181,15 @@ static struct kobject *get_device_parent(struct device *dev,
}
/* subsystems can specify a default root directory for their devices */
if (!parent && dev->bus && dev->bus->dev_root)
return &dev->bus->dev_root->kobj;
if (!parent && dev->bus) {
struct device *dev_root = bus_get_dev_root(dev->bus);
if (dev_root) {
kobj = &dev_root->kobj;
put_device(dev_root);
return kobj;
}
}
if (parent)
return &parent->kobj;
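The hunk above shows the intended pattern for the new bus_get_dev_root() helper: it returns the bus's root device with a reference held (or NULL), and the caller must drop that reference with put_device(). A usage sketch, assuming those 6.3 semantics:

	struct device *root = bus_get_dev_root(bus);	/* reference taken on success */
	if (root) {
		/* ... use root (here, root->kobj) ... */
		put_device(root);			/* drop the reference when done */
	}

This replaces direct peeking at dev->bus->dev_root, which needed no reference counting but also offered no lifetime guarantee.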
@@ -3371,7 +3526,7 @@ int device_add(struct device *dev)
/* we require the name to be set before, and pass NULL */
error = kobject_add(&dev->kobj, dev->kobj.parent, NULL);
if (error) {
glue_dir = get_glue_dir(dev);
glue_dir = kobj;
goto Error;
}
@@ -3411,10 +3566,7 @@ int device_add(struct device *dev)
/* Notify clients of device addition. This call must come
* after dpm_sysfs_add() and before kobject_uevent().
*/
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_ADD_DEVICE, dev);
bus_notify(dev, BUS_NOTIFY_ADD_DEVICE);
kobject_uevent(&dev->kobj, KOBJ_ADD);
/*
@@ -3471,6 +3623,7 @@ done:
device_pm_remove(dev);
dpm_sysfs_remove(dev);
DPMError:
dev->driver = NULL;
bus_remove_device(dev);
BusError:
device_remove_attrs(dev);
@@ -3594,9 +3747,7 @@ void device_del(struct device *dev)
* before dpm_sysfs_remove().
*/
noio_flag = memalloc_noio_save();
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_DEL_DEVICE, dev);
bus_notify(dev, BUS_NOTIFY_DEL_DEVICE);
dpm_sysfs_remove(dev);
if (parent)
@@ -3627,9 +3778,7 @@ void device_del(struct device *dev)
device_platform_notify_remove(dev);
device_links_purge(dev);
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_REMOVED_DEVICE, dev);
bus_notify(dev, BUS_NOTIFY_REMOVED_DEVICE);
kobject_uevent(&dev->kobj, KOBJ_REMOVE);
glue_dir = get_glue_dir(dev);
kobject_del(&dev->kobj);
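The hunks above collapse the open-coded notifier calls into a single bus_notify() helper. The helper itself is outside this excerpt; a minimal sketch of the shape implied by the converted call sites (the real version lives with the other bus internals, and its event argument may be a dedicated enum rather than unsigned long):

	/* Sketch only -- inferred from the call sites converted above. */
	void bus_notify(struct device *dev, unsigned long value)
	{
		struct bus_type *bus = dev->bus;

		/* Absorbs the "if (dev->bus)" guard each caller used to carry. */
		if (bus)
			blocking_notifier_call_chain(&bus->p->bus_notifier, value, dev);
	}

Besides removing repetition, this keeps bus->p, the private part of the bus, out of the callers.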
@@ -3697,7 +3846,7 @@ static struct device *next_device(struct klist_iter *i)
* a name. This memory is returned in tmp and needs to be
* freed by the caller.
*/
const char *device_get_devnode(struct device *dev,
const char *device_get_devnode(const struct device *dev,
umode_t *mode, kuid_t *uid, kgid_t *gid,
const char **tmp)
{


@@ -125,17 +125,6 @@ static DEVICE_ATTR(release, S_IWUSR, NULL, cpu_release_store);
#endif /* CONFIG_ARCH_CPU_PROBE_RELEASE */
#endif /* CONFIG_HOTPLUG_CPU */
struct bus_type cpu_subsys = {
.name = "cpu",
.dev_name = "cpu",
.match = cpu_subsys_match,
#ifdef CONFIG_HOTPLUG_CPU
.online = cpu_subsys_online,
.offline = cpu_subsys_offline,
#endif
};
EXPORT_SYMBOL_GPL(cpu_subsys);
#ifdef CONFIG_KEXEC
#include <linux/kexec.h>
@@ -336,7 +325,7 @@ static ssize_t print_cpu_modalias(struct device *dev,
return len;
}
static int cpu_uevent(struct device *dev, struct kobj_uevent_env *env)
static int cpu_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
char *buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
if (buf) {
@@ -348,6 +337,20 @@ static int cpu_uevent(struct device *dev, struct kobj_uevent_env *env)
}
#endif
struct bus_type cpu_subsys = {
.name = "cpu",
.dev_name = "cpu",
.match = cpu_subsys_match,
#ifdef CONFIG_HOTPLUG_CPU
.online = cpu_subsys_online,
.offline = cpu_subsys_offline,
#endif
#ifdef CONFIG_GENERIC_CPU_AUTOPROBE
.uevent = cpu_uevent,
#endif
};
EXPORT_SYMBOL_GPL(cpu_subsys);
/*
* register_cpu - Setup a sysfs device for a CPU.
* @cpu - cpu->hotpluggable field set to 1 will generate a control file in
@@ -368,9 +371,6 @@ int register_cpu(struct cpu *cpu, int num)
cpu->dev.offline_disabled = !cpu->hotpluggable;
cpu->dev.offline = !cpu_online(num);
cpu->dev.of_node = of_get_cpu_node(num, NULL);
#ifdef CONFIG_GENERIC_CPU_AUTOPROBE
cpu->dev.bus->uevent = cpu_uevent;
#endif
cpu->dev.groups = common_cpu_attr_groups;
if (cpu->hotpluggable)
cpu->dev.groups = hotplugable_cpu_attr_groups;
@@ -610,9 +610,13 @@ static const struct attribute_group cpu_root_vulnerabilities_group = {
static void __init cpu_register_vulnerabilities(void)
{
if (sysfs_create_group(&cpu_subsys.dev_root->kobj,
&cpu_root_vulnerabilities_group))
pr_err("Unable to register CPU vulnerabilities\n");
struct device *dev = bus_get_dev_root(&cpu_subsys);
if (dev) {
if (sysfs_create_group(&dev->kobj, &cpu_root_vulnerabilities_group))
pr_err("Unable to register CPU vulnerabilities\n");
put_device(dev);
}
}
#else


@@ -257,13 +257,11 @@ static int deferred_devs_show(struct seq_file *s, void *data)
DEFINE_SHOW_ATTRIBUTE(deferred_devs);
#ifdef CONFIG_MODULES
int driver_deferred_probe_timeout = 10;
static int driver_deferred_probe_timeout = 10;
#else
int driver_deferred_probe_timeout;
static int driver_deferred_probe_timeout;
#endif
EXPORT_SYMBOL_GPL(driver_deferred_probe_timeout);
static int __init deferred_probe_timeout_setup(char *str)
{
int timeout;
@@ -372,7 +370,7 @@ late_initcall(deferred_probe_initcall);
static void __exit deferred_probe_exit(void)
{
debugfs_remove_recursive(debugfs_lookup("devices_deferred", NULL));
debugfs_lookup_and_remove("devices_deferred", NULL);
}
__exitcall(deferred_probe_exit);
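debugfs_lookup_and_remove() fixes a reference leak in the pattern it replaces: debugfs_lookup() returns the dentry with an extra reference taken, and debugfs_remove_recursive() never drops it. The new helper pairs the lookup with the release:

	/* Leaky: the reference from debugfs_lookup() is never put. */
	debugfs_remove_recursive(debugfs_lookup("devices_deferred", NULL));

	/* Fixed: looks up, removes, and drops the reference in one call. */
	debugfs_lookup_and_remove("devices_deferred", NULL);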
@@ -413,10 +411,7 @@ static void driver_bound(struct device *dev)
driver_deferred_probe_del(dev);
driver_deferred_probe_trigger();
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_BOUND_DRIVER, dev);
bus_notify(dev, BUS_NOTIFY_BOUND_DRIVER);
kobject_uevent(&dev->kobj, KOBJ_BIND);
}
@@ -435,9 +430,7 @@ static int driver_sysfs_add(struct device *dev)
{
int ret;
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_BIND_DRIVER, dev);
bus_notify(dev, BUS_NOTIFY_BIND_DRIVER);
ret = sysfs_create_link(&dev->driver->p->kobj, &dev->kobj,
kobject_name(&dev->kobj));
@@ -502,9 +495,8 @@ int device_bind_driver(struct device *dev)
device_links_force_bind(dev);
driver_bound(dev);
}
else if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
else
bus_notify(dev, BUS_NOTIFY_DRIVER_NOT_BOUND);
return ret;
}
EXPORT_SYMBOL_GPL(device_bind_driver);
@@ -695,9 +687,7 @@ dev_groups_failed:
probe_failed:
driver_sysfs_remove(dev);
sysfs_failed:
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
bus_notify(dev, BUS_NOTIFY_DRIVER_NOT_BOUND);
if (dev->bus && dev->bus->dma_cleanup)
dev->bus->dma_cleanup(dev);
pinctrl_bind_failed:
@@ -1243,10 +1233,7 @@ static void __device_release_driver(struct device *dev, struct device *parent)
driver_sysfs_remove(dev);
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_UNBIND_DRIVER,
dev);
bus_notify(dev, BUS_NOTIFY_UNBIND_DRIVER);
pm_runtime_put_sync(dev);
@@ -1260,11 +1247,8 @@ static void __device_release_driver(struct device *dev, struct device *parent)
klist_remove(&dev->p->knode_driver);
device_pm_check_callbacks(dev);
if (dev->bus)
blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
BUS_NOTIFY_UNBOUND_DRIVER,
dev);
bus_notify(dev, BUS_NOTIFY_UNBOUND_DRIVER);
kobject_uevent(&dev->kobj, KOBJ_UNBIND);
}
}


@@ -13,6 +13,8 @@
* overwrite the default setting if needed.
*/
#define pr_fmt(fmt) "devtmpfs: " fmt
#include <linux/kernel.h>
#include <linux/syscalls.h>
#include <linux/mount.h>
@@ -376,9 +378,9 @@ int __init devtmpfs_mount(void)
err = init_mount("devtmpfs", "dev", "devtmpfs", DEVTMPFS_MFLAGS, NULL);
if (err)
printk(KERN_INFO "devtmpfs: error mounting %i\n", err);
pr_info("error mounting %d\n", err);
else
printk(KERN_INFO "devtmpfs: mounted\n");
pr_info("mounted\n");
return err;
}
@@ -460,14 +462,12 @@ int __init devtmpfs_init(void)
mnt = vfs_kern_mount(&internal_fs_type, 0, "devtmpfs", opts);
if (IS_ERR(mnt)) {
printk(KERN_ERR "devtmpfs: unable to create devtmpfs %ld\n",
PTR_ERR(mnt));
pr_err("unable to create devtmpfs %ld\n", PTR_ERR(mnt));
return PTR_ERR(mnt);
}
err = register_filesystem(&dev_fs_type);
if (err) {
printk(KERN_ERR "devtmpfs: unable to register devtmpfs "
"type %i\n", err);
pr_err("unable to register devtmpfs type %d\n", err);
return err;
}
@@ -480,12 +480,12 @@ int __init devtmpfs_init(void)
}
if (err) {
printk(KERN_ERR "devtmpfs: unable to create devtmpfs %i\n", err);
pr_err("unable to create devtmpfs %d\n", err);
unregister_filesystem(&dev_fs_type);
thread = NULL;
return err;
}
printk(KERN_INFO "devtmpfs: initialized\n");
pr_info("initialized\n");
return 0;
}
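All of these printk() conversions rely on the pr_fmt() define added at the top of the file: the pr_*() macros pass their format string through pr_fmt(), so the "devtmpfs: " prefix is applied once instead of being repeated at every call site. In miniature:

	#define pr_fmt(fmt) "devtmpfs: " fmt	/* must precede the #includes */
	#include <linux/kernel.h>

	pr_info("mounted\n");			/* logs "devtmpfs: mounted" */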


@@ -224,7 +224,7 @@ int driver_register(struct device_driver *drv)
int ret;
struct device_driver *other;
if (!drv->bus->p) {
if (!bus_is_registered(drv->bus)) {
pr_err("Driver '%s' was unable to register with bus_type '%s' because the bus was not initialized.\n",
drv->name, drv->bus->name);
return -EINVAL;
@@ -274,30 +274,3 @@ void driver_unregister(struct device_driver *drv)
bus_remove_driver(drv);
}
EXPORT_SYMBOL_GPL(driver_unregister);
/**
* driver_find - locate driver on a bus by its name.
* @name: name of the driver.
* @bus: bus to scan for the driver.
*
* Call kset_find_obj() to iterate over list of drivers on
* a bus to find driver by name. Return driver if found.
*
* This routine provides no locking to prevent the driver it returns
* from being unregistered or unloaded while the caller is using it.
* The caller is responsible for preventing this.
*/
struct device_driver *driver_find(const char *name, struct bus_type *bus)
{
struct kobject *k = kset_find_obj(bus->p->drivers_kset, name);
struct driver_private *priv;
if (k) {
/* Drop reference added by kset_find_obj() */
kobject_put(k);
priv = to_driver(k);
return priv->driver;
}
return NULL;
}
EXPORT_SYMBOL_GPL(driver_find);


@@ -115,18 +115,13 @@ unsigned long __weak memory_block_size_bytes(void)
}
EXPORT_SYMBOL_GPL(memory_block_size_bytes);
/*
* Show the first physical section index (number) of this memory block.
*/
/* Show the memory block ID, relative to the memory block size */
static ssize_t phys_index_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct memory_block *mem = to_memory_block(dev);
unsigned long phys_index;
phys_index = mem->start_section_nr / sections_per_block;
return sysfs_emit(buf, "%08lx\n", phys_index);
return sysfs_emit(buf, "%08lx\n", memory_block_id(mem->start_section_nr));
}
/*


@@ -24,8 +24,11 @@ bool dev_add_physical_location(struct device *dev)
dev->physical_location =
kzalloc(sizeof(*dev->physical_location), GFP_KERNEL);
if (!dev->physical_location)
if (!dev->physical_location) {
ACPI_FREE(pld);
return false;
}
dev->physical_location->panel = pld->panel;
dev->physical_location->vertical_position = pld->vertical_position;
dev->physical_location->horizontal_position = pld->horizontal_position;


@@ -441,11 +441,9 @@ static int __platform_get_irq_byname(struct platform_device *dev,
struct resource *r;
int ret;
if (!dev->dev.of_node || IS_ENABLED(CONFIG_OF_IRQ)) {
ret = fwnode_irq_get_byname(dev_fwnode(&dev->dev), name);
if (ret > 0 || ret == -EPROBE_DEFER)
return ret;
}
ret = fwnode_irq_get_byname(dev_fwnode(&dev->dev), name);
if (ret > 0 || ret == -EPROBE_DEFER)
return ret;
r = platform_get_resource_byname(dev, IORESOURCE_IRQ, name);
if (r) {
@@ -499,6 +497,8 @@ EXPORT_SYMBOL_GPL(platform_get_irq_byname_optional);
* platform_add_devices - add a number of platform devices
* @devs: array of platform devices to add
* @num: number of platform devices in array
*
* Return: 0 on success, negative error number on failure.
*/
int platform_add_devices(struct platform_device **devs, int num)
{
@@ -883,6 +883,13 @@ static int platform_probe_fail(struct platform_device *pdev)
return -ENXIO;
}
static int is_bound_to_driver(struct device *dev, void *driver)
{
if (dev->driver == driver)
return 1;
return 0;
}
/**
* __platform_driver_probe - register driver for non-hotpluggable device
* @drv: platform driver structure
@@ -906,7 +913,7 @@ static int platform_probe_fail(struct platform_device *pdev)
int __init_or_module __platform_driver_probe(struct platform_driver *drv,
int (*probe)(struct platform_device *), struct module *module)
{
int retval, code;
int retval;
if (drv->driver.probe_type == PROBE_PREFER_ASYNCHRONOUS) {
pr_err("%s: drivers registered with %s can not be probed asynchronously\n",
@@ -932,24 +939,21 @@ int __init_or_module __platform_driver_probe(struct platform_driver *drv,
/* temporary section violation during probe() */
drv->probe = probe;
retval = code = __platform_driver_register(drv, module);
retval = __platform_driver_register(drv, module);
if (retval)
return retval;
/*
* Fixup that section violation, being paranoid about code scanning
* the list of drivers in order to probe new devices. Check to see
* if the probe was successful, and make sure any forced probes of
* new devices fail.
*/
spin_lock(&drv->driver.bus->p->klist_drivers.k_lock);
/* Force all new probes of this driver to fail */
drv->probe = platform_probe_fail;
if (code == 0 && list_empty(&drv->driver.p->klist_devices.k_list))
retval = -ENODEV;
spin_unlock(&drv->driver.bus->p->klist_drivers.k_lock);
if (code != retval)
/* Walk all platform devices and see if any actually bound to this driver.
* If not, return an error as the device should have done so by now.
*/
if (!bus_for_each_dev(&platform_bus_type, NULL, &drv->driver, is_bound_to_driver)) {
retval = -ENODEV;
platform_driver_unregister(drv);
}
return retval;
}
EXPORT_SYMBOL_GPL(__platform_driver_probe);
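The rewrite leans on the bus_for_each_dev() contract: the walk stops at the first callback that returns non-zero and that value is propagated, so !bus_for_each_dev(..., is_bound_to_driver) reads as "no platform device is bound to this driver". The same idiom in isolation (callback name hypothetical):

	static int match_bound(struct device *dev, void *data)
	{
		return dev->driver == data;	/* non-zero stops the walk */
	}

	/* Non-zero iff some device on the bus is bound to &drv->driver. */
	int bound = bus_for_each_dev(&platform_bus_type, NULL,
				     &drv->driver, match_bound);

This trades the old direct spelunking through the driver core's private klists for a public iterator.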
@@ -1353,9 +1357,9 @@ static int platform_match(struct device *dev, struct device_driver *drv)
return (strcmp(pdev->name, drv->name) == 0);
}
static int platform_uevent(struct device *dev, struct kobj_uevent_env *env)
static int platform_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct platform_device *pdev = to_platform_device(dev);
const struct platform_device *pdev = to_platform_device(dev);
int rc;
/* Some devices have extra OF data and an OF-style MODALIAS */
@@ -1416,7 +1420,9 @@ static void platform_remove(struct device *_dev)
struct platform_driver *drv = to_platform_driver(_dev->driver);
struct platform_device *dev = to_platform_device(_dev);
if (drv->remove) {
if (drv->remove_new) {
drv->remove_new(dev);
} else if (drv->remove) {
int ret = drv->remove(dev);
if (ret)
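remove_new is the transitional name for a platform remove callback that returns void: errors returned from the old remove() were largely ignored by the core anyway, so drivers are being converted to a signature that cannot pretend to fail. A converted driver looks like this (driver name hypothetical):

	static void foo_remove(struct platform_device *pdev)
	{
		/* release resources; there is no error code to return */
	}

	static struct platform_driver foo_driver = {
		.driver		= { .name = "foo" },
		.remove_new	= foo_remove,
	};

The dispatch above prefers remove_new when both callbacks are set, keeping unconverted drivers working during the migration.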


@@ -30,6 +30,7 @@ struct soc_device {
static struct bus_type soc_bus_type = {
.name = "soc",
};
static bool soc_bus_registered;
static DEVICE_ATTR(machine, 0444, soc_info_show, NULL);
static DEVICE_ATTR(family, 0444, soc_info_show, NULL);
@@ -117,7 +118,7 @@ struct soc_device *soc_device_register(struct soc_device_attribute *soc_dev_attr
const struct attribute_group **soc_attr_groups;
int ret;
if (!soc_bus_type.p) {
if (!soc_bus_registered) {
if (early_soc_dev_attr)
return ERR_PTR(-EBUSY);
early_soc_dev_attr = soc_dev_attr;
@@ -183,6 +184,7 @@ static int __init soc_bus_register(void)
ret = bus_register(&soc_bus_type);
if (ret)
return ret;
soc_bus_registered = true;
if (early_soc_dev_attr)
return PTR_ERR(soc_device_register(early_soc_dev_attr));


@@ -760,7 +760,7 @@ static void software_node_release(struct kobject *kobj)
kfree(swnode);
}
static struct kobj_type software_node_type = {
static const struct kobj_type software_node_type = {
.release = software_node_release,
.sysfs_ops = &kobj_sysfs_ops,
};
@@ -819,67 +819,6 @@ swnode_register(const struct software_node *node, struct swnode *parent,
return &swnode->fwnode;
}
/**
* software_node_register_nodes - Register an array of software nodes
* @nodes: Zero terminated array of software nodes to be registered
*
* Register multiple software nodes at once. If any node in the array
* has its .parent pointer set (which can only be to another software_node),
* then its parent **must** have been registered before it is; either outside
* of this function or by ordering the array such that parent comes before
* child.
*/
int software_node_register_nodes(const struct software_node *nodes)
{
int ret;
int i;
for (i = 0; nodes[i].name; i++) {
const struct software_node *parent = nodes[i].parent;
if (parent && !software_node_to_swnode(parent)) {
ret = -EINVAL;
goto err_unregister_nodes;
}
ret = software_node_register(&nodes[i]);
if (ret)
goto err_unregister_nodes;
}
return 0;
err_unregister_nodes:
software_node_unregister_nodes(nodes);
return ret;
}
EXPORT_SYMBOL_GPL(software_node_register_nodes);
/**
* software_node_unregister_nodes - Unregister an array of software nodes
* @nodes: Zero terminated array of software nodes to be unregistered
*
* Unregister multiple software nodes at once. If parent pointers are set up
* in any of the software nodes then the array **must** be ordered such that
* parents come before their children.
*
* NOTE: If you are uncertain whether the array is ordered such that
* parents will be unregistered before their children, it is wiser to
* remove the nodes individually, in the correct order (child before
* parent).
*/
void software_node_unregister_nodes(const struct software_node *nodes)
{
unsigned int i = 0;
while (nodes[i].name)
i++;
while (i--)
software_node_unregister(&nodes[i]);
}
EXPORT_SYMBOL_GPL(software_node_unregister_nodes);
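Both removed helpers took a zero-terminated array of struct software_node; their remaining users are converted to the node-group API documented just below, which takes a NULL-terminated array of node pointers instead. A conversion sketch (node names hypothetical):

	static const struct software_node parent = { .name = "parent" };
	static const struct software_node child = {
		.name	= "child",
		.parent	= &parent,
	};
	static const struct software_node *group[] = { &parent, &child, NULL };

	ret = software_node_register_node_group(group);
	/* ... */
	software_node_unregister_node_group(group);

The property-entry kunit test and the ipu3 cio2-bridge conversions further down in this diff follow exactly this shape.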
/**
* software_node_register_node_group - Register a group of software nodes
* @node_group: NULL terminated array of software node pointers to be registered


@@ -405,20 +405,18 @@ static void pe_test_move_inline_str(struct kunit *test)
/* Handling of reference properties */
static void pe_test_reference(struct kunit *test)
{
static const struct software_node nodes[] = {
{ .name = "1", },
{ .name = "2", },
{ }
};
static const struct software_node node1 = { .name = "1" };
static const struct software_node node2 = { .name = "2" };
static const struct software_node *group[] = { &node1, &node2, NULL };
static const struct software_node_ref_args refs[] = {
SOFTWARE_NODE_REFERENCE(&nodes[0]),
SOFTWARE_NODE_REFERENCE(&nodes[1], 3, 4),
SOFTWARE_NODE_REFERENCE(&node1),
SOFTWARE_NODE_REFERENCE(&node2, 3, 4),
};
const struct property_entry entries[] = {
PROPERTY_ENTRY_REF("ref-1", &nodes[0]),
PROPERTY_ENTRY_REF("ref-2", &nodes[1], 1, 2),
PROPERTY_ENTRY_REF("ref-1", &node1),
PROPERTY_ENTRY_REF("ref-2", &node2, 1, 2),
PROPERTY_ENTRY_REF_ARRAY("ref-3", refs),
{ }
};
@@ -427,7 +425,7 @@ static void pe_test_reference(struct kunit *test)
struct fwnode_reference_args ref;
int error;
error = software_node_register_nodes(nodes);
error = software_node_register_node_group(group);
KUNIT_ASSERT_EQ(test, error, 0);
node = fwnode_create_software_node(entries, NULL);
@ -436,7 +434,7 @@ static void pe_test_reference(struct kunit *test)
error = fwnode_property_get_reference_args(node, "ref-1", NULL,
0, 0, &ref);
KUNIT_ASSERT_EQ(test, error, 0);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &nodes[0]);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &node1);
KUNIT_EXPECT_EQ(test, ref.nargs, 0U);
/* wrong index */
@@ -447,7 +445,7 @@ static void pe_test_reference(struct kunit *test)
error = fwnode_property_get_reference_args(node, "ref-2", NULL,
1, 0, &ref);
KUNIT_ASSERT_EQ(test, error, 0);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &nodes[1]);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &node2);
KUNIT_EXPECT_EQ(test, ref.nargs, 1U);
KUNIT_EXPECT_EQ(test, ref.args[0], 1LLU);
@@ -455,7 +453,7 @@ static void pe_test_reference(struct kunit *test)
error = fwnode_property_get_reference_args(node, "ref-2", NULL,
3, 0, &ref);
KUNIT_ASSERT_EQ(test, error, 0);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &nodes[1]);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &node2);
KUNIT_EXPECT_EQ(test, ref.nargs, 3U);
KUNIT_EXPECT_EQ(test, ref.args[0], 1LLU);
KUNIT_EXPECT_EQ(test, ref.args[1], 2LLU);
@@ -470,14 +468,14 @@ static void pe_test_reference(struct kunit *test)
error = fwnode_property_get_reference_args(node, "ref-3", NULL,
0, 0, &ref);
KUNIT_ASSERT_EQ(test, error, 0);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &nodes[0]);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &node1);
KUNIT_EXPECT_EQ(test, ref.nargs, 0U);
/* second reference in the array */
error = fwnode_property_get_reference_args(node, "ref-3", NULL,
2, 1, &ref);
KUNIT_ASSERT_EQ(test, error, 0);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &nodes[1]);
KUNIT_EXPECT_PTR_EQ(test, to_software_node(ref.fwnode), &node2);
KUNIT_EXPECT_EQ(test, ref.nargs, 2U);
KUNIT_EXPECT_EQ(test, ref.args[0], 3LLU);
KUNIT_EXPECT_EQ(test, ref.args[1], 4LLU);
@@ -488,7 +486,7 @@ static void pe_test_reference(struct kunit *test)
KUNIT_EXPECT_NE(test, error, 0);
fwnode_remove_software_node(node);
software_node_unregister_nodes(nodes);
software_node_unregister_node_group(group);
}
static struct kunit_case property_entry_test_cases[] = {


@@ -155,12 +155,27 @@ static int transport_add_class_device(struct attribute_container *cont,
struct device *dev,
struct device *classdev)
{
struct transport_class *tclass = class_to_transport_class(cont->class);
int error = attribute_container_add_class_device(classdev);
struct transport_container *tcont =
attribute_container_to_transport_container(cont);
if (!error && tcont->statistics)
if (error)
goto err_remove;
if (tcont->statistics) {
error = sysfs_create_group(&classdev->kobj, tcont->statistics);
if (error)
goto err_del;
}
return 0;
err_del:
attribute_container_class_device_del(classdev);
err_remove:
if (tclass->remove)
tclass->remove(tcont, dev, classdev);
return error;
}


@@ -28,7 +28,7 @@ static DEFINE_MUTEX(bcma_buses_mutex);
static int bcma_bus_match(struct device *dev, struct device_driver *drv);
static int bcma_device_probe(struct device *dev);
static void bcma_device_remove(struct device *dev);
static int bcma_device_uevent(struct device *dev, struct kobj_uevent_env *env);
static int bcma_device_uevent(const struct device *dev, struct kobj_uevent_env *env);
static ssize_t manuf_show(struct device *dev, struct device_attribute *attr, char *buf)
{
@@ -627,9 +627,9 @@ static void bcma_device_remove(struct device *dev)
put_device(dev);
}
static int bcma_device_uevent(struct device *dev, struct kobj_uevent_env *env)
static int bcma_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct bcma_device *core = container_of(dev, struct bcma_device, dev);
const struct bcma_device *core = container_of_const(dev, struct bcma_device, dev);
return add_uevent_var(env,
"MODALIAS=bcma:m%04Xid%04Xrev%02Xcl%02X",

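This is the pattern repeated across the buses below: once a uevent callback takes a const struct device *, a plain container_of() would silently discard the const, so call sites switch to container_of_const(), which propagates constness to the containing structure. Generic shape (the foo names are hypothetical):

	static int foo_uevent(const struct device *dev, struct kobj_uevent_env *env)
	{
		const struct foo_device *fdev =
			container_of_const(dev, struct foo_device, dev);

		return add_uevent_var(env, "MODALIAS=foo:%04X", fdev->id);
	}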

@@ -124,9 +124,9 @@ out:
/*
* fsl_mc_bus_uevent - callback invoked when a device is added
*/
static int fsl_mc_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
static int fsl_mc_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
const struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev);
if (add_uevent_var(env, "MODALIAS=fsl-mc:v%08Xd%s",
mc_dev->obj_desc.vendor,


@@ -1550,9 +1550,9 @@ void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
}
EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
static int mhi_ep_uevent(struct device *dev, struct kobj_uevent_env *env)
static int mhi_ep_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
const struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
return add_uevent_var(env, "MODALIAS=" MHI_EP_DEVICE_MODALIAS_FMT,
mhi_dev->name);


@@ -1395,9 +1395,9 @@ void mhi_driver_unregister(struct mhi_driver *mhi_drv)
}
EXPORT_SYMBOL_GPL(mhi_driver_unregister);
static int mhi_uevent(struct device *dev, struct kobj_uevent_env *env)
static int mhi_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
const struct mhi_device *mhi_dev = to_mhi_device(dev);
return add_uevent_var(env, "MODALIAS=" MHI_DEVICE_MODALIAS_FMT,
mhi_dev->name);


@@ -67,9 +67,9 @@ static int mips_cdmm_match(struct device *dev, struct device_driver *drv)
return mips_cdmm_lookup(cdrv->id_table, cdev) != NULL;
}
static int mips_cdmm_uevent(struct device *dev, struct kobj_uevent_env *env)
static int mips_cdmm_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct mips_cdmm_device *cdev = to_mips_cdmm_device(dev);
const struct mips_cdmm_device *cdev = to_mips_cdmm_device(dev);
int retval = 0;
retval = add_uevent_var(env, "CDMM_CPU=%u", cdev->cpu);


@@ -172,12 +172,17 @@ static void sunxi_rsb_device_remove(struct device *dev)
drv->remove(to_sunxi_rsb_device(dev));
}
static int sunxi_rsb_device_modalias(const struct device *dev, struct kobj_uevent_env *env)
{
return of_device_uevent_modalias(dev, env);
}
static struct bus_type sunxi_rsb_bus = {
.name = RSB_CTRL_NAME,
.match = sunxi_rsb_device_match,
.probe = sunxi_rsb_device_probe,
.remove = sunxi_rsb_device_remove,
.uevent = of_device_uevent_modalias,
.uevent = sunxi_rsb_device_modalias,
};
static void sunxi_rsb_dev_release(struct device *dev)


@@ -27,7 +27,7 @@ static void cxl_memdev_release(struct device *dev)
kfree(cxlmd);
}
static char *cxl_memdev_devnode(struct device *dev, umode_t *mode, kuid_t *uid,
static char *cxl_memdev_devnode(const struct device *dev, umode_t *mode, kuid_t *uid,
kgid_t *gid)
{
return kasprintf(GFP_KERNEL, "cxl/%s", dev_name(dev));
@@ -162,7 +162,7 @@ static const struct device_type cxl_memdev_type = {
.groups = cxl_memdev_attribute_groups,
};
bool is_cxl_memdev(struct device *dev)
bool is_cxl_memdev(const struct device *dev)
{
return dev->type == &cxl_memdev_type;
}


@@ -38,7 +38,7 @@ static ssize_t devtype_show(struct device *dev, struct device_attribute *attr,
}
static DEVICE_ATTR_RO(devtype);
static int cxl_device_id(struct device *dev)
static int cxl_device_id(const struct device *dev)
{
if (dev->type == &cxl_nvdimm_bridge_type)
return CXL_DEVICE_NVDIMM_BRIDGE;
@@ -523,13 +523,13 @@ static const struct device_type cxl_port_type = {
.groups = cxl_port_attribute_groups,
};
bool is_cxl_port(struct device *dev)
bool is_cxl_port(const struct device *dev)
{
return dev->type == &cxl_port_type;
}
EXPORT_SYMBOL_NS_GPL(is_cxl_port, CXL);
struct cxl_port *to_cxl_port(struct device *dev)
struct cxl_port *to_cxl_port(const struct device *dev)
{
if (dev_WARN_ONCE(dev, dev->type != &cxl_port_type,
"not a cxl_port device\n"))
@@ -1826,7 +1826,7 @@ void cxl_driver_unregister(struct cxl_driver *cxl_drv)
}
EXPORT_SYMBOL_NS_GPL(cxl_driver_unregister, CXL);
static int cxl_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
static int cxl_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
return add_uevent_var(env, "MODALIAS=" CXL_MODALIAS_FMT,
cxl_device_id(dev));


@@ -588,8 +588,8 @@ static inline bool is_cxl_root(struct cxl_port *port)
return port->uport == port->dev.parent;
}
bool is_cxl_port(struct device *dev);
struct cxl_port *to_cxl_port(struct device *dev);
bool is_cxl_port(const struct device *dev);
struct cxl_port *to_cxl_port(const struct device *dev);
struct pci_bus;
int devm_cxl_register_pci_bus(struct device *host, struct device *uport,
struct pci_bus *bus);


@@ -72,7 +72,7 @@ cxled_to_memdev(struct cxl_endpoint_decoder *cxled)
return to_cxl_memdev(port->uport);
}
bool is_cxl_memdev(struct device *dev);
bool is_cxl_memdev(const struct device *dev);
static inline bool is_cxl_endpoint(struct cxl_port *port)
{
return is_cxl_memdev(port->uport);


@@ -18,7 +18,7 @@ struct dax_id {
char dev_name[DAX_NAME_LEN];
};
static int dax_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
static int dax_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
/*
* We only ever expect to handle device-dax instances, i.e. the


@@ -127,9 +127,9 @@ static int eisa_bus_match(struct device *dev, struct device_driver *drv)
return 0;
}
static int eisa_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
static int eisa_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct eisa_device *edev = to_eisa_device(dev);
const struct eisa_device *edev = to_eisa_device(dev);
add_uevent_var(env, "MODALIAS=" EISA_DEVICE_MODALIAS_FMT, edev->id.sig);
return 0;


@@ -133,7 +133,7 @@ static void get_ids(const u32 *directory, int *id)
}
}
static void get_modalias_ids(struct fw_unit *unit, int *id)
static void get_modalias_ids(const struct fw_unit *unit, int *id)
{
get_ids(&fw_parent_device(unit)->config_rom[5], id);
get_ids(unit->directory, id);
@@ -195,7 +195,7 @@ static void fw_unit_remove(struct device *dev)
driver->remove(fw_unit(dev));
}
static int get_modalias(struct fw_unit *unit, char *buffer, size_t buffer_size)
static int get_modalias(const struct fw_unit *unit, char *buffer, size_t buffer_size)
{
int id[] = {0, 0, 0, 0};
@@ -206,9 +206,9 @@ static int get_modalias(struct fw_unit *unit, char *buffer, size_t buffer_size)
id[0], id[1], id[2], id[3]);
}
static int fw_unit_uevent(struct device *dev, struct kobj_uevent_env *env)
static int fw_unit_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct fw_unit *unit = fw_unit(dev);
const struct fw_unit *unit = fw_unit(dev);
char modalias[64];
get_modalias(unit, modalias, sizeof(modalias));


@@ -56,9 +56,9 @@ static void ffa_device_remove(struct device *dev)
ffa_drv->remove(to_ffa_dev(dev));
}
static int ffa_device_uevent(struct device *dev, struct kobj_uevent_env *env)
static int ffa_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
const struct ffa_device *ffa_dev = to_ffa_dev(dev);
return add_uevent_var(env, "MODALIAS=arm_ffa:%04x:%pUb",
ffa_dev->vm_id, &ffa_dev->uuid);


@@ -12,6 +12,7 @@
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/of.h>
#include "common.h"
@@ -191,7 +192,7 @@ scmi_device_create(struct device_node *np, struct device *parent, int protocol,
scmi_dev->id = id;
scmi_dev->protocol_id = protocol;
scmi_dev->dev.parent = parent;
scmi_dev->dev.of_node = np;
device_set_node(&scmi_dev->dev, of_fwnode_handle(np));
scmi_dev->dev.bus = &scmi_bus_type;
scmi_dev->dev.release = scmi_device_release;
dev_set_name(&scmi_dev->dev, "scmi_dev.%d", id);
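device_set_node() is preferred over assigning dev.of_node by hand because it keeps the device's two node pointers consistent. Its shape, paraphrased from the driver core (the real version may carry extra handling):

	void device_set_node(struct device *dev, struct fwnode_handle *fwnode)
	{
		dev->fwnode = fwnode;
		dev->of_node = to_of_node(fwnode);
	}

Leaving dev->fwnode unset is what the open-coded assignment got wrong: anything that tracks devices by fwnode, fw_devlink included, could not see such a device.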


@@ -294,9 +294,9 @@ static void dfl_bus_remove(struct device *dev)
ddrv->remove(ddev);
}
static int dfl_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
static int dfl_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct dfl_device *ddev = to_dfl_dev(dev);
const struct dfl_device *ddev = to_dfl_dev(dev);
return add_uevent_var(env, "MODALIAS=dfl:t%04Xf%04X",
ddev->type, ddev->feature_id);


@@ -897,10 +897,10 @@ static const struct attribute_group *cfam_attr_groups[] = {
NULL,
};
static char *cfam_devnode(struct device *dev, umode_t *mode,
static char *cfam_devnode(const struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid)
{
struct fsi_slave *slave = to_fsi_slave(dev);
const struct fsi_slave *slave = to_fsi_slave(dev);
#ifdef CONFIG_FSI_NEW_DEV_NODE
return kasprintf(GFP_KERNEL, "fsi/cfam%d", slave->cdev_idx);
@@ -915,7 +915,7 @@ static const struct device_type cfam_type = {
.groups = cfam_attr_groups
};
static char *fsi_cdev_devnode(struct device *dev, umode_t *mode,
static char *fsi_cdev_devnode(const struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid)
{
#ifdef CONFIG_FSI_NEW_DEV_NODE


@@ -587,6 +587,13 @@ static int gpiochip_setup_dev(struct gpio_device *gdev)
{
int ret;
/*
* If fwnode doesn't belong to another device, it's safe to clear its
* initialized flag.
*/
if (gdev->dev.fwnode && !gdev->dev.fwnode->dev)
fwnode_dev_initialized(gdev->dev.fwnode, false);
ret = gcdev_register(gdev, gpio_devt);
if (ret)
return ret;
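fwnode_dev_initialized() flips the FWNODE_FLAG_INITIALIZED bit that fw_devlink consults when deciding whether a firmware node's device still needs to probe; clearing it here tells fw_devlink not to treat this fwnode as already handled. A sketch of the helper's likely shape (it is a small inline in the fwnode header):

	static inline void fwnode_dev_initialized(struct fwnode_handle *fwnode,
						  bool initialized)
	{
		if (!fwnode)
			return;

		if (initialized)
			fwnode->flags |= FWNODE_FLAG_INITIALIZED;
		else
			fwnode->flags &= ~FWNODE_FLAG_INITIALIZED;
	}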


@@ -161,9 +161,14 @@ static void dp_aux_ep_dev_release(struct device *dev)
kfree(aux_ep_with_data);
}
static int dp_aux_ep_dev_modalias(const struct device *dev, struct kobj_uevent_env *env)
{
return of_device_uevent_modalias(dev, env);
}
static struct device_type dp_aux_device_type_type = {
.groups = dp_aux_ep_dev_groups,
.uevent = of_device_uevent_modalias,
.uevent = dp_aux_ep_dev_modalias,
.release = dp_aux_ep_dev_release,
};


@@ -62,9 +62,9 @@ static int mipi_dsi_device_match(struct device *dev, struct device_driver *drv)
return 0;
}
static int mipi_dsi_uevent(struct device *dev, struct kobj_uevent_env *env)
static int mipi_dsi_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
const struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
int err;
err = of_device_uevent_modalias(dev, env);


@@ -338,7 +338,7 @@ static int host1x_device_match(struct device *dev, struct device_driver *drv)
return strcmp(dev_name(dev), drv->name) == 0;
}
static int host1x_device_uevent(struct device *dev,
static int host1x_device_uevent(const struct device *dev,
struct kobj_uevent_env *env)
{
struct device_node *np = dev->parent->of_node;


@@ -78,14 +78,14 @@ static int greybus_match_device(struct device *dev, struct device_driver *drv)
return 0;
}
static int greybus_uevent(struct device *dev, struct kobj_uevent_env *env)
static int greybus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct gb_host_device *hd;
struct gb_module *module = NULL;
struct gb_interface *intf = NULL;
struct gb_control *control = NULL;
struct gb_bundle *bundle = NULL;
struct gb_svc *svc = NULL;
const struct gb_host_device *hd;
const struct gb_module *module = NULL;
const struct gb_interface *intf = NULL;
const struct gb_control *control = NULL;
const struct gb_bundle *bundle = NULL;
const struct gb_svc *svc = NULL;
if (is_gb_host_device(dev)) {
hd = to_gb_host_device(dev);


@@ -2676,9 +2676,9 @@ static const struct attribute_group hid_dev_group = {
};
__ATTRIBUTE_GROUPS(hid_dev);
static int hid_uevent(struct device *dev, struct kobj_uevent_env *env)
static int hid_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct hid_device *hdev = to_hid_device(dev);
const struct hid_device *hdev = to_hid_device(dev);
if (add_uevent_var(env, "HID_ID=%04X:%08X:%08X",
hdev->bus, hdev->vendor, hdev->product))


@@ -361,7 +361,7 @@ static struct attribute *ishtp_cl_dev_attrs[] = {
};
ATTRIBUTE_GROUPS(ishtp_cl_dev);
static int ishtp_cl_uevent(struct device *dev, struct kobj_uevent_env *env)
static int ishtp_cl_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
if (add_uevent_var(env, "MODALIAS=" ISHTP_MODULE_PREFIX "%s", dev_name(dev)))
return -ENOMEM;


@@ -30,7 +30,7 @@ static struct attribute *hsi_bus_dev_attrs[] = {
};
ATTRIBUTE_GROUPS(hsi_bus_dev);
static int hsi_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
static int hsi_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
add_uevent_var(env, "MODALIAS=hsi:%s", dev_name(dev));


@@ -711,9 +711,9 @@ __ATTRIBUTE_GROUPS(vmbus_bus);
* representation of the device guid (each byte of the guid will be
* represented with two hex characters).
*/
static int vmbus_uevent(struct device *device, struct kobj_uevent_env *env)
static int vmbus_uevent(const struct device *device, struct kobj_uevent_env *env)
{
struct hv_device *dev = device_to_hv_device(device);
const struct hv_device *dev = device_to_hv_device(device);
const char *format = "MODALIAS=vmbus:%*phN";
return add_uevent_var(env, format, UUID_SIZE, &dev->dev_type);


@@ -185,11 +185,11 @@ static struct device_type intel_th_source_device_type = {
.release = intel_th_device_release,
};
static char *intel_th_output_devnode(struct device *dev, umode_t *mode,
static char *intel_th_output_devnode(const struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid)
{
struct intel_th_device *thdev = to_intel_th_device(dev);
struct intel_th *th = to_intel_th(thdev);
const struct intel_th_device *thdev = to_intel_th_device(dev);
const struct intel_th *th = to_intel_th(thdev);
char *node;
if (thdev->id >= 0)


@@ -205,7 +205,7 @@ struct intel_th_driver {
* INTEL_TH_SWITCH and INTEL_TH_SOURCE are children of the intel_th device.
*/
static inline struct intel_th_device *
to_intel_th_parent(struct intel_th_device *thdev)
to_intel_th_parent(const struct intel_th_device *thdev)
{
struct device *parent = thdev->dev.parent;
@@ -215,7 +215,7 @@ to_intel_th_parent(struct intel_th_device *thdev)
return to_intel_th_device(parent);
}
static inline struct intel_th *to_intel_th(struct intel_th_device *thdev)
static inline struct intel_th *to_intel_th(const struct intel_th_device *thdev)
{
if (thdev->type == INTEL_TH_OUTPUT)
thdev = to_intel_th_parent(thdev);


@@ -136,9 +136,9 @@ static int i2c_device_match(struct device *dev, struct device_driver *drv)
return 0;
}
static int i2c_device_uevent(struct device *dev, struct kobj_uevent_env *env)
static int i2c_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct i2c_client *client = to_i2c_client(dev);
const struct i2c_client *client = to_i2c_client(dev);
int rc;
rc = of_device_uevent_modalias(dev, env);


@@ -78,7 +78,7 @@ EXPORT_SYMBOL_GPL(i3c_device_do_setdasa);
*
* Retrieve I3C dev info.
*/
void i3c_device_get_info(struct i3c_device *dev,
void i3c_device_get_info(const struct i3c_device *dev,
struct i3c_device_info *info)
{
if (!info)
@@ -208,18 +208,6 @@ struct device *i3cdev_to_dev(struct i3c_device *i3cdev)
}
EXPORT_SYMBOL_GPL(i3cdev_to_dev);
/**
* dev_to_i3cdev() - Returns the I3C device containing @dev
* @dev: device object
*
* Return: a pointer to an I3C device object.
*/
struct i3c_device *dev_to_i3cdev(struct device *dev)
{
return container_of(dev, struct i3c_device, dev);
}
EXPORT_SYMBOL_GPL(dev_to_i3cdev);
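dev_to_i3cdev() is not dropped: with i3c_device_uevent() now receiving a const struct device *, an out-of-line helper taking a non-const pointer could no longer serve both const and non-const callers, so it presumably moves into the i3c header as a container_of_const() wrapper along these lines:

	/* Sketch of the header-side replacement; the exact form may differ. */
	#define dev_to_i3cdev(__dev) \
		container_of_const(__dev, struct i3c_device, dev)

container_of_const() yields a const i3c_device pointer for a const argument and a mutable one otherwise, so the const uevent path in the next file and the existing callers both keep working.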
/**
* i3c_device_match_id() - Returns the i3c_device_id entry matching @i3cdev
* @i3cdev: I3C device


@@ -273,9 +273,9 @@ static struct attribute *i3c_device_attrs[] = {
};
ATTRIBUTE_GROUPS(i3c_device);
static int i3c_device_uevent(struct device *dev, struct kobj_uevent_env *env)
static int i3c_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct i3c_device *i3cdev = dev_to_i3cdev(dev);
const struct i3c_device *i3cdev = dev_to_i3cdev(dev);
struct i3c_device_info devinfo;
u16 manuf, part, ext;


@@ -1372,7 +1372,7 @@ INPUT_DEV_STRING_ATTR_SHOW(phys);
INPUT_DEV_STRING_ATTR_SHOW(uniq);
static int input_print_modalias_bits(char *buf, int size,
char name, unsigned long *bm,
char name, const unsigned long *bm,
unsigned int min_bit, unsigned int max_bit)
{
int len = 0, i;
@@ -1384,7 +1384,7 @@ static int input_print_modalias_bits(char *buf, int size,
return len;
}
static int input_print_modalias(char *buf, int size, struct input_dev *id,
static int input_print_modalias(char *buf, int size, const struct input_dev *id,
int add_cr)
{
int len;
@@ -1432,7 +1432,7 @@ static ssize_t input_dev_show_modalias(struct device *dev,
}
static DEVICE_ATTR(modalias, S_IRUGO, input_dev_show_modalias, NULL);
static int input_print_bitmap(char *buf, int buf_size, unsigned long *bitmap,
static int input_print_bitmap(char *buf, int buf_size, const unsigned long *bitmap,
int max, int add_cr);
static ssize_t input_dev_show_properties(struct device *dev,
@@ -1524,7 +1524,7 @@ static const struct attribute_group input_dev_id_attr_group = {
.attrs = input_dev_id_attrs,
};
static int input_print_bitmap(char *buf, int buf_size, unsigned long *bitmap,
static int input_print_bitmap(char *buf, int buf_size, const unsigned long *bitmap,
int max, int add_cr)
{
int i;
@@ -1621,7 +1621,7 @@ static void input_dev_release(struct device *device)
* device bitfields.
*/
static int input_add_uevent_bm_var(struct kobj_uevent_env *env,
const char *name, unsigned long *bitmap, int max)
const char *name, const unsigned long *bitmap, int max)
{
int len;
@@ -1639,7 +1639,7 @@ static int input_add_uevent_bm_var(struct kobj_uevent_env *env,
}
static int input_add_uevent_modalias_var(struct kobj_uevent_env *env,
struct input_dev *dev)
const struct input_dev *dev)
{
int len;
@@ -1677,9 +1677,9 @@ static int input_add_uevent_modalias_var(struct kobj_uevent_env *env,
return err; \
} while (0)
static int input_dev_uevent(struct device *device, struct kobj_uevent_env *env)
static int input_dev_uevent(const struct device *device, struct kobj_uevent_env *env)
{
struct input_dev *dev = to_input_dev(device);
const struct input_dev *dev = to_input_dev(device);
INPUT_ADD_HOTPLUG_VAR("PRODUCT=%x/%x/%x/%x",
dev->id.bustype, dev->id.vendor,


@@ -895,9 +895,9 @@ static int serio_bus_match(struct device *dev, struct device_driver *drv)
return err; \
} while (0)
static int serio_uevent(struct device *dev, struct kobj_uevent_env *env)
static int serio_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct serio *serio;
const struct serio *serio;
if (!dev)
return -ENODEV;


@@ -76,9 +76,9 @@ static void ipack_bus_remove(struct device *device)
drv->ops->remove(dev);
}
static int ipack_uevent(struct device *dev, struct kobj_uevent_env *env)
static int ipack_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct ipack_device *idev;
const struct ipack_device *idev;
if (!dev)
return -ENODEV;


@@ -283,6 +283,7 @@ static int __init imx_gpcv2_irqchip_init(struct device_node *node,
* later the GPC power domain driver will not be skipped.
*/
of_node_clear_flag(node, OF_POPULATED);
fwnode_dev_initialized(domain->fwnode, false);
return 0;
}


@@ -128,12 +128,17 @@ static int macio_device_resume(struct device * dev)
return 0;
}
static int macio_device_modalias(const struct device *dev, struct kobj_uevent_env *env)
{
return of_device_uevent_modalias(dev, env);
}
extern const struct attribute_group *macio_dev_groups[];
struct bus_type macio_bus_type = {
.name = "macio",
.match = macio_bus_match,
.uevent = of_device_uevent_modalias,
.uevent = macio_device_modalias,
.probe = macio_device_probe,
.remove = macio_device_remove,
.shutdown = macio_device_shutdown,


@@ -41,9 +41,9 @@ static int mcb_match(struct device *dev, struct device_driver *drv)
return 0;
}
static int mcb_uevent(struct device *dev, struct kobj_uevent_env *env)
static int mcb_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct mcb_device *mdev = to_mcb_device(dev);
const struct mcb_device *mdev = to_mcb_device(dev);
int ret;
ret = add_uevent_var(env, "MODALIAS=mcb:16z%03d", mdev->id);


@@ -195,6 +195,19 @@ static void cio2_bridge_init_swnode_names(struct cio2_sensor *sensor)
SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0); /* And endpoint 0 */
}
static void cio2_bridge_init_swnode_group(struct cio2_sensor *sensor)
{
struct software_node *nodes = sensor->swnodes;
sensor->group[SWNODE_SENSOR_HID] = &nodes[SWNODE_SENSOR_HID];
sensor->group[SWNODE_SENSOR_PORT] = &nodes[SWNODE_SENSOR_PORT];
sensor->group[SWNODE_SENSOR_ENDPOINT] = &nodes[SWNODE_SENSOR_ENDPOINT];
sensor->group[SWNODE_CIO2_PORT] = &nodes[SWNODE_CIO2_PORT];
sensor->group[SWNODE_CIO2_ENDPOINT] = &nodes[SWNODE_CIO2_ENDPOINT];
if (sensor->ssdb.vcmtype)
sensor->group[SWNODE_VCM] = &nodes[SWNODE_VCM];
}
static void cio2_bridge_create_connection_swnodes(struct cio2_bridge *bridge,
struct cio2_sensor *sensor)
{
@@ -219,6 +232,8 @@ static void cio2_bridge_create_connection_swnodes(struct cio2_bridge *bridge,
if (sensor->ssdb.vcmtype)
nodes[SWNODE_VCM] =
NODE_VCM(cio2_vcm_types[sensor->ssdb.vcmtype - 1]);
cio2_bridge_init_swnode_group(sensor);
}
static void cio2_bridge_instantiate_vcm_i2c_client(struct cio2_sensor *sensor)
@@ -252,7 +267,7 @@ static void cio2_bridge_unregister_sensors(struct cio2_bridge *bridge)
for (i = 0; i < bridge->n_sensors; i++) {
sensor = &bridge->sensors[i];
software_node_unregister_nodes(sensor->swnodes);
software_node_unregister_node_group(sensor->group);
ACPI_FREE(sensor->pld);
acpi_dev_put(sensor->adev);
i2c_unregister_device(sensor->vcm_i2c_client);
@@ -263,7 +278,7 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
struct cio2_bridge *bridge,
struct pci_dev *cio2)
{
struct fwnode_handle *fwnode;
struct fwnode_handle *fwnode, *primary;
struct cio2_sensor *sensor;
struct acpi_device *adev;
acpi_status status;
@@ -310,7 +325,7 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
cio2_bridge_create_fwnode_properties(sensor, bridge, cfg);
cio2_bridge_create_connection_swnodes(bridge, sensor);
ret = software_node_register_nodes(sensor->swnodes);
ret = software_node_register_node_group(sensor->group);
if (ret)
goto err_free_pld;
@@ -322,7 +337,9 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
}
sensor->adev = acpi_dev_get(adev);
adev->fwnode.secondary = fwnode;
primary = acpi_fwnode_handle(adev);
primary->secondary = fwnode;
cio2_bridge_instantiate_vcm_i2c_client(sensor);
@@ -335,7 +352,7 @@ static int cio2_bridge_connect_sensor(const struct cio2_sensor_config *cfg,
return 0;
err_free_swnodes:
software_node_unregister_nodes(sensor->swnodes);
software_node_unregister_node_group(sensor->group);
err_free_pld:
ACPI_FREE(sensor->pld);
err_put_adev:


@@ -117,8 +117,9 @@ struct cio2_sensor {
struct acpi_device *adev;
struct i2c_client *vcm_i2c_client;
/* SWNODE_COUNT + 1 for terminating empty node */
struct software_node swnodes[SWNODE_COUNT + 1];
/* SWNODE_COUNT + 1 for terminating NULL */
const struct software_node *group[SWNODE_COUNT + 1];
struct software_node swnodes[SWNODE_COUNT];
struct cio2_node_names node_names;
struct cio2_sensor_ssdb ssdb;


@@ -1614,7 +1614,7 @@ static void rc_dev_release(struct device *device)
kfree(dev);
}
static int rc_dev_uevent(struct device *device, struct kobj_uevent_env *env)
static int rc_dev_uevent(const struct device *device, struct kobj_uevent_env *env)
{
struct rc_dev *dev = to_rc_dev(device);
int ret = 0;


@@ -57,10 +57,10 @@ static int memstick_bus_match(struct device *dev, struct device_driver *drv)
return 0;
}
static int memstick_uevent(struct device *dev, struct kobj_uevent_env *env)
static int memstick_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct memstick_dev *card = container_of(dev, struct memstick_dev,
dev);
const struct memstick_dev *card = container_of_const(dev, struct memstick_dev,
dev);
if (add_uevent_var(env, "MEMSTICK_TYPE=%02X", card->id.type))
return -ENOMEM;


@@ -1227,9 +1227,9 @@ ATTRIBUTE_GROUPS(mei_cldev);
*
* Return: 0 on success -ENOMEM on when add_uevent_var fails
*/
static int mei_cl_device_uevent(struct device *dev, struct kobj_uevent_env *env)
static int mei_cl_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct mei_cl_device *cldev = to_mei_cl_device(dev);
const struct mei_cl_device *cldev = to_mei_cl_device(dev);
const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl);
u8 version = mei_me_cl_ver(cldev->me_cl);


@@ -55,9 +55,9 @@ static int tifm_bus_match(struct device *dev, struct device_driver *drv)
return 0;
}
static int tifm_uevent(struct device *dev, struct kobj_uevent_env *env)
static int tifm_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct tifm_dev *sock = container_of(dev, struct tifm_dev, dev);
const struct tifm_dev *sock = container_of_const(dev, struct tifm_dev, dev);
if (add_uevent_var(env, "TIFM_CARD_TYPE=%s", tifm_media_type_name(sock->type, 1)))
return -ENOMEM;


@@ -55,9 +55,9 @@ static struct attribute *mmc_dev_attrs[] = {
ATTRIBUTE_GROUPS(mmc_dev);
static int
mmc_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
mmc_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct mmc_card *card = mmc_dev_to_card(dev);
const struct mmc_card *card = mmc_dev_to_card(dev);
const char *type;
unsigned int i;
int retval = 0;


@@ -120,9 +120,9 @@ static int sdio_bus_match(struct device *dev, struct device_driver *drv)
}
static int
sdio_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
sdio_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
struct sdio_func *func = dev_to_sdio_func(dev);
const struct sdio_func *func = dev_to_sdio_func(dev);
unsigned int i;
if (add_uevent_var(env,


@@ -577,6 +577,7 @@ static int mtd_part_of_parse(struct mtd_info *master,
{
struct mtd_part_parser *parser;
struct device_node *np;
struct device_node *child;
struct property *prop;
struct device *dev;
const char *compat;
@@ -594,6 +595,15 @@ static int mtd_part_of_parse(struct mtd_info *master,
else
np = of_get_child_by_name(np, "partitions");
/*
* Don't create devices that are added to a bus but will never get
* probed. That'll cause fw_devlink to block probing of consumers of
* this partition until the partition device is probed.
*/
for_each_child_of_node(np, child)
if (of_device_is_compatible(child, "nvmem-cells"))
of_node_set_flag(child, OF_POPULATED);
of_property_for_each_string(np, "compatible", prop, compat) {
parser = mtd_part_get_compatible_parser(compat);
if (!parser)


@@ -1330,7 +1330,7 @@ static int mdio_bus_match(struct device *dev, struct device_driver *drv)
return 0;
}
static int mdio_uevent(struct device *dev, struct kobj_uevent_env *env)
static int mdio_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
int rc;


@@ -200,7 +200,7 @@ static void xenvif_debugfs_delif(struct xenvif *vif)
* and vif variables to the environment, for the benefit of the vif-* hotplug
* scripts.
*/
static int netback_uevent(struct xenbus_device *xdev,
static int netback_uevent(const struct xenbus_device *xdev,
struct kobj_uevent_env *env)
{
struct backend_info *be = dev_get_drvdata(&xdev->dev);


@@ -28,7 +28,7 @@ static int nvdimm_bus_major;
struct class *nd_class;
static DEFINE_IDA(nd_ida);
static int to_nd_device_type(struct device *dev)
static int to_nd_device_type(const struct device *dev)
{
if (is_nvdimm(dev))
return ND_DEVICE_DIMM;
@@ -42,7 +42,7 @@ static int to_nd_device_type(struct device *dev)
return 0;
}
static int nvdimm_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
static int nvdimm_bus_uevent(const struct device *dev, struct kobj_uevent_env *env)
{
return add_uevent_var(env, "MODALIAS=" ND_DEVICE_MODALIAS_FMT,
to_nd_device_type(dev));


@@ -38,7 +38,7 @@ static const struct device_type nd_dax_device_type = {
.groups = nd_pfn_attribute_groups,
};
bool is_nd_dax(struct device *dev)
bool is_nd_dax(const struct device *dev)
{
return dev ? dev->type == &nd_dax_device_type : false;
}


@@ -572,7 +572,7 @@ static const struct device_type nvdimm_device_type = {
.groups = nvdimm_attribute_groups,
};
bool is_nvdimm(struct device *dev)
bool is_nvdimm(const struct device *dev)
{
return dev->type == &nvdimm_device_type;
}


@@ -82,14 +82,14 @@ static inline void nvdimm_security_overwrite_query(struct work_struct *work)
}
#endif
bool is_nvdimm(struct device *dev);
bool is_nd_pmem(struct device *dev);
bool is_nd_volatile(struct device *dev);
static inline bool is_nd_region(struct device *dev)
bool is_nvdimm(const struct device *dev);
bool is_nd_pmem(const struct device *dev);
bool is_nd_volatile(const struct device *dev);
static inline bool is_nd_region(const struct device *dev)
{
return is_nd_pmem(dev) || is_nd_volatile(dev);
}
static inline bool is_memory(struct device *dev)
static inline bool is_memory(const struct device *dev)
{
return is_nd_pmem(dev) || is_nd_volatile(dev);
}


@@ -599,7 +599,7 @@ static inline int nd_pfn_validate(struct nd_pfn *nd_pfn, const char *sig)
struct nd_dax *to_nd_dax(struct device *dev);
#if IS_ENABLED(CONFIG_NVDIMM_DAX)
int nd_dax_probe(struct device *dev, struct nd_namespace_common *ndns);
bool is_nd_dax(struct device *dev);
bool is_nd_dax(const struct device *dev);
struct device *nd_dax_create(struct nd_region *nd_region);
#else
static inline int nd_dax_probe(struct device *dev,
@@ -608,7 +608,7 @@ static inline int nd_dax_probe(struct device *dev,
return -ENODEV;
}
static inline bool is_nd_dax(struct device *dev)
static inline bool is_nd_dax(const struct device *dev)
{
return false;
}


@@ -839,12 +839,12 @@ static const struct device_type nd_volatile_device_type = {
.groups = nd_region_attribute_groups,
};
bool is_nd_pmem(struct device *dev)
bool is_nd_pmem(const struct device *dev)
{
return dev ? dev->type == &nd_pmem_device_type : false;
}
bool is_nd_volatile(struct device *dev)
bool is_nd_volatile(const struct device *dev)
{
return dev ? dev->type == &nd_volatile_device_type : false;
}


@@ -248,7 +248,7 @@ const void *of_device_get_match_data(const struct device *dev)
}
EXPORT_SYMBOL(of_device_get_match_data);
static ssize_t of_device_get_modalias(struct device *dev, char *str, ssize_t len)
static ssize_t of_device_get_modalias(const struct device *dev, char *str, ssize_t len)
{
const char *compat;
char *c;
@@ -372,7 +372,7 @@ void of_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
mutex_unlock(&of_mutex);
}
int of_device_uevent_modalias(struct device *dev, struct kobj_uevent_env *env)
int of_device_uevent_modalias(const struct device *dev, struct kobj_uevent_env *env)
{
int sl;

Some files were not shown because too many files have changed in this diff.