
Driver core / sysfs patches for 3.14-rc1

Here's the big driver core and sysfs patch set for 3.14-rc1.
 
 There's a lot of work here moving sysfs logic out into a "kernfs" to
 allow other subsystems to also have a virtual filesystem with the same
 attributes of sysfs (handle device disconnect, dynamic creation /
 removal as needed / unneeded, etc).  This is primarily being done for
 the cgroups filesystem, but the goal is to also move debugfs to it when
 it is ready, solving all of the known issues in that filesystem as well.
 The code isn't completed yet, but all should be stable now (there is a
 big section that was reverted due to problems found when testing.)
 
 There's also some other smaller fixes, and a driver core addition that
 allows for a "collection" of objects, that the DRM people will be using
 soon (it's in this tree to make merges after -rc1 easier.)
 
 All of this has been in linux-next with no reported issues.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iEYEABECAAYFAlLdh0cACgkQMUfUDdst+ylv4QCfeDKDgLo4LsaBIIrFSxLoH/c7
 UUsAoMPRwA0h8wy+BQcJAg4H4J4maKj3
 =0pc0
 -----END PGP SIGNATURE-----

Merge tag 'driver-core-3.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull driver core / sysfs patches from Greg KH:
 "Here's the big driver core and sysfs patch set for 3.14-rc1.

  There's a lot of work here moving sysfs logic out into a "kernfs" to
  allow other subsystems to also have a virtual filesystem with the same
  attributes of sysfs (handle device disconnect, dynamic creation /
  removal as needed / unneeded, etc)

  This is primarily being done for the cgroups filesystem, but the goal
  is to also move debugfs to it when it is ready, solving all of the
  known issues in that filesystem as well.  The code isn't completed
  yet, but all should be stable now (there is a big section that was
  reverted due to problems found when testing)

  There's also some other smaller fixes, and a driver core addition that
  allows for a "collection" of objects, that the DRM people will be
  using soon (it's in this tree to make merges after -rc1 easier)

  All of this has been in linux-next with no reported issues"

* tag 'driver-core-3.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (113 commits)
  kernfs: associate a new kernfs_node with its parent on creation
  kernfs: add struct dentry declaration in kernfs.h
  kernfs: fix get_active failure handling in kernfs_seq_*()
  Revert "kernfs: fix get_active failure handling in kernfs_seq_*()"
  Revert "kernfs: replace kernfs_node->u.completion with kernfs_root->deactivate_waitq"
  Revert "kernfs: remove KERNFS_ACTIVE_REF and add kernfs_lockdep()"
  Revert "kernfs: remove KERNFS_REMOVED"
  Revert "kernfs: restructure removal path to fix possible premature return"
  Revert "kernfs: invoke kernfs_unmap_bin_file() directly from __kernfs_remove()"
  Revert "kernfs: remove kernfs_addrm_cxt"
  Revert "kernfs: make kernfs_get_active() block if the node is deactivated but not removed"
  Revert "kernfs: implement kernfs_{de|re}activate[_self]()"
  Revert "kernfs, sysfs, driver-core: implement kernfs_remove_self() and its wrappers"
  Revert "pci: use device_remove_file_self() instead of device_schedule_callback()"
  Revert "scsi: use device_remove_file_self() instead of device_schedule_callback()"
  Revert "s390: use device_remove_file_self() instead of device_schedule_callback()"
  Revert "sysfs, driver-core: remove unused {sysfs|device}_schedule_callback_owner()"
  Revert "kernfs: remove unnecessary NULL check in __kernfs_remove()"
  kernfs: remove unnecessary NULL check in __kernfs_remove()
  drivers/base: provide an infrastructure for componentised subsystems
  ...
Linus Torvalds 2014-01-20 15:49:44 -08:00
commit d3bad75a6d
42 changed files with 4191 additions and 2919 deletions


@ -0,0 +1,116 @@
Device Driver Design Patterns
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This document describes a few common design patterns found in device drivers.
It is likely that subsystem maintainers will ask driver developers to
conform to these design patterns.
1. State Container
2. container_of()

1. State Container
~~~~~~~~~~~~~~~~~~
While the kernel contains a few device drivers that assume that they will
only be probed() once on a certain system (singletons), it is customary to assume
that the device the driver binds to will appear in several instances. This
means that the probe() function and all callbacks need to be reentrant.
The most common way to achieve this is to use the state container design
pattern. It usually has this form:
struct foo {
    spinlock_t lock; /* Example member */
    (...)
};

static int foo_probe(...)
{
    struct foo *foo;

    foo = devm_kzalloc(dev, sizeof(*foo), GFP_KERNEL);
    if (!foo)
        return -ENOMEM;
    spin_lock_init(&foo->lock);
    (...)
}
This will create an instance of struct foo in memory every time probe() is
called. This is our state container for this instance of the device driver.
Of course it is then necessary to always pass this instance of the
state around to all functions that need access to the state and its members.
For example, if the driver is registering an interrupt handler, you would
pass around a pointer to struct foo like this:
static irqreturn_t foo_handler(int irq, void *arg)
{
    struct foo *foo = arg;
    (...)
}

static int foo_probe(...)
{
    struct foo *foo;
    (...)
    ret = request_irq(irq, foo_handler, 0, "foo", foo);
}
This way you always get a pointer back to the correct instance of foo in
your interrupt handler.

2. container_of()
~~~~~~~~~~~~~~~~~
Continuing from the above example, we add an offloaded work item:
struct foo {
    spinlock_t lock;
    struct workqueue_struct *wq;
    struct work_struct offload;
    (...)
};

static void foo_work(struct work_struct *work)
{
    struct foo *foo = container_of(work, struct foo, offload);
    (...)
}

static irqreturn_t foo_handler(int irq, void *arg)
{
    struct foo *foo = arg;

    queue_work(foo->wq, &foo->offload);
    (...)
}

static int foo_probe(...)
{
    struct foo *foo;

    foo->wq = create_singlethread_workqueue("foo-wq");
    INIT_WORK(&foo->offload, foo_work);
    (...)
}
The design pattern is the same for an hrtimer or something similar that will
pass a single argument, a pointer to a struct member, to the callback.
container_of() is a macro defined in <linux/kernel.h>.
What container_of() does is to obtain a pointer to the containing struct from
a pointer to a member by a simple subtraction using the offsetof() macro from
standard C, which allows something similar to object-oriented behaviours.
Notice that the contained member must not be a pointer, but an actual member
for this to work.
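Conceptually, and leaving aside the type checking that the real macro in
<linux/kernel.h> performs, container_of() boils down to this sketch:

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

So the line in foo_work() above:

struct foo *foo = container_of(work, struct foo, offload);

simply subtracts offsetof(struct foo, offload) bytes from the struct
work_struct pointer it was handed, yielding the enclosing struct foo.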
We can see here that we avoid having global pointers to our struct foo *
instance this way, while still keeping the number of parameters passed to the
work function to a single pointer.


@ -342,7 +342,10 @@ kset use:
When you are finished with the kset, call:
void kset_unregister(struct kset *kset);
to destroy it.
to destroy it. This removes the kset from sysfs and decrements its reference
count. When the reference count goes to zero, the kset will be released.
Because other references to the kset may still exist, the release may happen
after kset_unregister() returns.
An example of using a kset can be seen in the
samples/kobject/kset-example.c file in the kernel tree.
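As a minimal sketch of that lifecycle (the kset name is illustrative; error
handling and attribute setup are omitted, see the sample referenced above for
a complete version):

#include <linux/kobject.h>
#include <linux/module.h>

static struct kset *example_kset;

static int __init example_init(void)
{
    /* appears as a directory under /sys/kernel/ */
    example_kset = kset_create_and_add("kset_example", NULL, kernel_kobj);
    if (!example_kset)
        return -ENOMEM;
    return 0;
}

static void __exit example_exit(void)
{
    /*
     * Removes the kset from sysfs and drops its reference; the memory is
     * released only once the last remaining reference goes away, which
     * may be after this call returns.
     */
    kset_unregister(example_kset);
}

module_init(example_init);
module_exit(example_exit);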


@ -433,7 +433,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
if (c->x86 >= 0x15)
snprintf(fw_name, sizeof(fw_name), "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);
if (request_firmware(&fw, (const char *)fw_name, device)) {
if (request_firmware_direct(&fw, (const char *)fw_name, device)) {
pr_debug("failed to load file %s\n", fw_name);
goto out;
}


@ -278,7 +278,7 @@ static enum ucode_state request_microcode_fw(int cpu, struct device *device,
sprintf(name, "intel-ucode/%02x-%02x-%02x",
c->x86, c->x86_model, c->x86_mask);
if (request_firmware(&firmware, name, device)) {
if (request_firmware_direct(&firmware, name, device)) {
pr_debug("data file %s load failed\n", name);
return UCODE_NFOUND;
}


@ -1,6 +1,6 @@
# Makefile for the Linux device tree
obj-y := core.o bus.o dd.o syscore.o \
obj-y := component.o core.o bus.o dd.o syscore.o \
driver.o class.o platform.o \
cpu.o firmware.o init.o map.o devres.o \
attribute_container.o transport_class.o \


@ -146,8 +146,19 @@ void bus_remove_file(struct bus_type *bus, struct bus_attribute *attr)
}
EXPORT_SYMBOL_GPL(bus_remove_file);
static void bus_release(struct kobject *kobj)
{
struct subsys_private *priv =
container_of(kobj, typeof(*priv), subsys.kobj);
struct bus_type *bus = priv->bus;
kfree(priv);
bus->p = NULL;
}
static struct kobj_type bus_ktype = {
.sysfs_ops = &bus_sysfs_ops,
.release = bus_release,
};
static int bus_uevent_filter(struct kset *kset, struct kobject *kobj)
@ -953,8 +964,6 @@ void bus_unregister(struct bus_type *bus)
kset_unregister(bus->p->devices_kset);
bus_remove_file(bus, &bus_attr_uevent);
kset_unregister(&bus->p->subsys);
kfree(bus->p);
bus->p = NULL;
}
EXPORT_SYMBOL_GPL(bus_unregister);

drivers/base/component.c (new file, 382 lines)

@ -0,0 +1,382 @@
/*
* Componentized device handling.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This is work in progress. We gather up the component devices into a list,
* and bind them when instructed. At the moment, we're specific to the DRM
* subsystem, and only handles one master device, but this doesn't have to be
* the case.
*/
#include <linux/component.h>
#include <linux/device.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
struct master {
struct list_head node;
struct list_head components;
bool bound;
const struct component_master_ops *ops;
struct device *dev;
};
struct component {
struct list_head node;
struct list_head master_node;
struct master *master;
bool bound;
const struct component_ops *ops;
struct device *dev;
};
static DEFINE_MUTEX(component_mutex);
static LIST_HEAD(component_list);
static LIST_HEAD(masters);
static struct master *__master_find(struct device *dev,
const struct component_master_ops *ops)
{
struct master *m;
list_for_each_entry(m, &masters, node)
if (m->dev == dev && (!ops || m->ops == ops))
return m;
return NULL;
}
/* Attach an unattached component to a master. */
static void component_attach_master(struct master *master, struct component *c)
{
c->master = master;
list_add_tail(&c->master_node, &master->components);
}
/* Detach a component from a master. */
static void component_detach_master(struct master *master, struct component *c)
{
list_del(&c->master_node);
c->master = NULL;
}
int component_master_add_child(struct master *master,
int (*compare)(struct device *, void *), void *compare_data)
{
struct component *c;
int ret = -ENXIO;
list_for_each_entry(c, &component_list, node) {
if (c->master)
continue;
if (compare(c->dev, compare_data)) {
component_attach_master(master, c);
ret = 0;
break;
}
}
return ret;
}
EXPORT_SYMBOL_GPL(component_master_add_child);
/* Detach all attached components from this master */
static void master_remove_components(struct master *master)
{
while (!list_empty(&master->components)) {
struct component *c = list_first_entry(&master->components,
struct component, master_node);
WARN_ON(c->master != master);
component_detach_master(master, c);
}
}
/*
* Try to bring up a master. If component is NULL, we're interested in
* this master, otherwise it's a component which must be present to try
* and bring up the master.
*
* Returns 1 for successful bringup, 0 if not ready, or -ve errno.
*/
static int try_to_bring_up_master(struct master *master,
struct component *component)
{
int ret = 0;
if (!master->bound) {
/*
* Search the list of components, looking for components that
* belong to this master, and attach them to the master.
*/
if (master->ops->add_components(master->dev, master)) {
/* Failed to find all components */
master_remove_components(master);
ret = 0;
goto out;
}
if (component && component->master != master) {
master_remove_components(master);
ret = 0;
goto out;
}
/* Found all components */
ret = master->ops->bind(master->dev);
if (ret < 0) {
master_remove_components(master);
goto out;
}
master->bound = true;
ret = 1;
}
out:
return ret;
}
static int try_to_bring_up_masters(struct component *component)
{
struct master *m;
int ret = 0;
list_for_each_entry(m, &masters, node) {
ret = try_to_bring_up_master(m, component);
if (ret != 0)
break;
}
return ret;
}
static void take_down_master(struct master *master)
{
if (master->bound) {
master->ops->unbind(master->dev);
master->bound = false;
}
master_remove_components(master);
}
int component_master_add(struct device *dev,
const struct component_master_ops *ops)
{
struct master *master;
int ret;
master = kzalloc(sizeof(*master), GFP_KERNEL);
if (!master)
return -ENOMEM;
master->dev = dev;
master->ops = ops;
INIT_LIST_HEAD(&master->components);
/* Add to the list of available masters. */
mutex_lock(&component_mutex);
list_add(&master->node, &masters);
ret = try_to_bring_up_master(master, NULL);
if (ret < 0) {
/* Delete off the list if we weren't successful */
list_del(&master->node);
kfree(master);
}
mutex_unlock(&component_mutex);
return ret < 0 ? ret : 0;
}
EXPORT_SYMBOL_GPL(component_master_add);
void component_master_del(struct device *dev,
const struct component_master_ops *ops)
{
struct master *master;
mutex_lock(&component_mutex);
master = __master_find(dev, ops);
if (master) {
take_down_master(master);
list_del(&master->node);
kfree(master);
}
mutex_unlock(&component_mutex);
}
EXPORT_SYMBOL_GPL(component_master_del);
static void component_unbind(struct component *component,
struct master *master, void *data)
{
WARN_ON(!component->bound);
component->ops->unbind(component->dev, master->dev, data);
component->bound = false;
/* Release all resources claimed in the binding of this component */
devres_release_group(component->dev, component);
}
void component_unbind_all(struct device *master_dev, void *data)
{
struct master *master;
struct component *c;
WARN_ON(!mutex_is_locked(&component_mutex));
master = __master_find(master_dev, NULL);
if (!master)
return;
list_for_each_entry_reverse(c, &master->components, master_node)
component_unbind(c, master, data);
}
EXPORT_SYMBOL_GPL(component_unbind_all);
static int component_bind(struct component *component, struct master *master,
void *data)
{
int ret;
/*
* Each component initialises inside its own devres group.
* This allows us to roll-back a failed component without
* affecting anything else.
*/
if (!devres_open_group(master->dev, NULL, GFP_KERNEL))
return -ENOMEM;
/*
* Also open a group for the device itself: this allows us
* to release the resources claimed against the sub-device
* at the appropriate moment.
*/
if (!devres_open_group(component->dev, component, GFP_KERNEL)) {
devres_release_group(master->dev, NULL);
return -ENOMEM;
}
dev_dbg(master->dev, "binding %s (ops %ps)\n",
dev_name(component->dev), component->ops);
ret = component->ops->bind(component->dev, master->dev, data);
if (!ret) {
component->bound = true;
/*
* Close the component device's group so that resources
* allocated in the binding are encapsulated for removal
* at unbind. Remove the group on the DRM device as we
* can clean those resources up independently.
*/
devres_close_group(component->dev, NULL);
devres_remove_group(master->dev, NULL);
dev_info(master->dev, "bound %s (ops %ps)\n",
dev_name(component->dev), component->ops);
} else {
devres_release_group(component->dev, NULL);
devres_release_group(master->dev, NULL);
dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
dev_name(component->dev), component->ops, ret);
}
return ret;
}
int component_bind_all(struct device *master_dev, void *data)
{
struct master *master;
struct component *c;
int ret = 0;
WARN_ON(!mutex_is_locked(&component_mutex));
master = __master_find(master_dev, NULL);
if (!master)
return -EINVAL;
list_for_each_entry(c, &master->components, master_node) {
ret = component_bind(c, master, data);
if (ret)
break;
}
if (ret != 0) {
list_for_each_entry_continue_reverse(c, &master->components,
master_node)
component_unbind(c, master, data);
}
return ret;
}
EXPORT_SYMBOL_GPL(component_bind_all);
int component_add(struct device *dev, const struct component_ops *ops)
{
struct component *component;
int ret;
component = kzalloc(sizeof(*component), GFP_KERNEL);
if (!component)
return -ENOMEM;
component->ops = ops;
component->dev = dev;
dev_dbg(dev, "adding component (ops %ps)\n", ops);
mutex_lock(&component_mutex);
list_add_tail(&component->node, &component_list);
ret = try_to_bring_up_masters(component);
if (ret < 0) {
list_del(&component->node);
kfree(component);
}
mutex_unlock(&component_mutex);
return ret < 0 ? ret : 0;
}
EXPORT_SYMBOL_GPL(component_add);
void component_del(struct device *dev, const struct component_ops *ops)
{
struct component *c, *component = NULL;
mutex_lock(&component_mutex);
list_for_each_entry(c, &component_list, node)
if (c->dev == dev && c->ops == ops) {
list_del(&c->node);
component = c;
break;
}
if (component && component->master)
take_down_master(component->master);
mutex_unlock(&component_mutex);
WARN_ON(!component);
kfree(component);
}
EXPORT_SYMBOL_GPL(component_del);
MODULE_LICENSE("GPL v2");
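The comment at the top of this file describes the intended flow; a rough
sketch of how a DRM-style driver might consume the API (the my_* names,
child_np and drm_private below are illustrative, not part of this commit):

#include <linux/component.h>
#include <linux/device.h>
#include <linux/of.h>

extern struct device_node *child_np;   /* assumed: DT node of a sub-device */
extern void *drm_private;              /* assumed: state handed to bind callbacks */

static int my_compare(struct device *dev, void *data)
{
    return dev->of_node == data;    /* match the sub-device, e.g. by DT node */
}

static int my_add_components(struct device *dev, struct master *master)
{
    /* one call per expected sub-device; fails until that device has appeared */
    return component_master_add_child(master, my_compare, child_np);
}

static int my_bind(struct device *dev)
{
    /* all components were found: bind each of them, then bring up the subsystem */
    return component_bind_all(dev, drm_private);
}

static void my_unbind(struct device *dev)
{
    component_unbind_all(dev, drm_private);
}

static const struct component_master_ops my_master_ops = {
    .add_components = my_add_components,
    .bind           = my_bind,
    .unbind         = my_unbind,
};

/*
 * The master driver's probe() calls component_master_add(dev, &my_master_ops);
 * each sub-device's probe() calls component_add(dev, &its_component_ops), whose
 * bind()/unbind() callbacks receive (component dev, master dev, the data
 * pointer passed to component_bind_all()).
 */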


@ -491,11 +491,13 @@ static int device_add_attrs(struct device *dev)
if (device_supports_offline(dev) && !dev->offline_disabled) {
error = device_create_file(dev, &dev_attr_online);
if (error)
goto err_remove_type_groups;
goto err_remove_dev_groups;
}
return 0;
err_remove_dev_groups:
device_remove_groups(dev, dev->groups);
err_remove_type_groups:
if (type)
device_remove_groups(dev, type->groups);
@ -1603,6 +1605,7 @@ device_create_groups_vargs(struct class *class, struct device *parent,
goto error;
}
device_initialize(dev);
dev->devt = devt;
dev->class = class;
dev->parent = parent;
@ -1614,7 +1617,7 @@ device_create_groups_vargs(struct class *class, struct device *parent,
if (retval)
goto error;
retval = device_register(dev);
retval = device_add(dev);
if (retval)
goto error;


@ -299,7 +299,7 @@ static int handle_remove(const char *nodename, struct device *dev)
{
struct path parent;
struct dentry *dentry;
int deleted = 1;
int deleted = 0;
int err;
dentry = kern_path_locked(nodename, &parent);


@ -96,6 +96,15 @@ static inline long firmware_loading_timeout(void)
return loading_timeout > 0 ? loading_timeout * HZ : MAX_SCHEDULE_TIMEOUT;
}
/* firmware behavior options */
#define FW_OPT_UEVENT (1U << 0)
#define FW_OPT_NOWAIT (1U << 1)
#ifdef CONFIG_FW_LOADER_USER_HELPER
#define FW_OPT_FALLBACK (1U << 2)
#else
#define FW_OPT_FALLBACK 0
#endif
struct firmware_cache {
/* firmware_buf instance will be added into the below list */
spinlock_t lock;
@ -219,6 +228,7 @@ static int fw_lookup_and_allocate_buf(const char *fw_name,
}
static void __fw_free_buf(struct kref *ref)
__releases(&fwc->lock)
{
struct firmware_buf *buf = to_fwbuf(ref);
struct firmware_cache *fwc = buf->fwc;
@ -270,21 +280,21 @@ module_param_string(path, fw_path_para, sizeof(fw_path_para), 0644);
MODULE_PARM_DESC(path, "customized firmware image search path with a higher priority than default path");
/* Don't inline this: 'struct kstat' is biggish */
static noinline_for_stack long fw_file_size(struct file *file)
static noinline_for_stack int fw_file_size(struct file *file)
{
struct kstat st;
if (vfs_getattr(&file->f_path, &st))
return -1;
if (!S_ISREG(st.mode))
return -1;
if (st.size != (long)st.size)
if (st.size != (int)st.size)
return -1;
return st.size;
}
static int fw_read_file_contents(struct file *file, struct firmware_buf *fw_buf)
{
long size;
int size;
char *buf;
int rc;
@ -820,7 +830,7 @@ static void firmware_class_timeout_work(struct work_struct *work)
static struct firmware_priv *
fw_create_instance(struct firmware *firmware, const char *fw_name,
struct device *device, bool uevent, bool nowait)
struct device *device, unsigned int opt_flags)
{
struct firmware_priv *fw_priv;
struct device *f_dev;
@ -832,7 +842,7 @@ fw_create_instance(struct firmware *firmware, const char *fw_name,
goto exit;
}
fw_priv->nowait = nowait;
fw_priv->nowait = !!(opt_flags & FW_OPT_NOWAIT);
fw_priv->fw = firmware;
INIT_DELAYED_WORK(&fw_priv->timeout_work,
firmware_class_timeout_work);
@ -848,8 +858,8 @@ exit:
}
/* load a firmware via user helper */
static int _request_firmware_load(struct firmware_priv *fw_priv, bool uevent,
long timeout)
static int _request_firmware_load(struct firmware_priv *fw_priv,
unsigned int opt_flags, long timeout)
{
int retval = 0;
struct device *f_dev = &fw_priv->dev;
@ -885,7 +895,7 @@ static int _request_firmware_load(struct firmware_priv *fw_priv, bool uevent,
goto err_del_bin_attr;
}
if (uevent) {
if (opt_flags & FW_OPT_UEVENT) {
buf->need_uevent = true;
dev_set_uevent_suppress(f_dev, false);
dev_dbg(f_dev, "firmware: requesting %s\n", buf->fw_id);
@ -911,16 +921,16 @@ err_put_dev:
static int fw_load_from_user_helper(struct firmware *firmware,
const char *name, struct device *device,
bool uevent, bool nowait, long timeout)
unsigned int opt_flags, long timeout)
{
struct firmware_priv *fw_priv;
fw_priv = fw_create_instance(firmware, name, device, uevent, nowait);
fw_priv = fw_create_instance(firmware, name, device, opt_flags);
if (IS_ERR(fw_priv))
return PTR_ERR(fw_priv);
fw_priv->buf = firmware->priv;
return _request_firmware_load(fw_priv, uevent, timeout);
return _request_firmware_load(fw_priv, opt_flags, timeout);
}
#ifdef CONFIG_PM_SLEEP
@ -942,7 +952,7 @@ static void kill_requests_without_uevent(void)
#else /* CONFIG_FW_LOADER_USER_HELPER */
static inline int
fw_load_from_user_helper(struct firmware *firmware, const char *name,
struct device *device, bool uevent, bool nowait,
struct device *device, unsigned int opt_flags,
long timeout)
{
return -ENOENT;
@ -1023,7 +1033,7 @@ _request_firmware_prepare(struct firmware **firmware_p, const char *name,
}
static int assign_firmware_buf(struct firmware *fw, struct device *device,
bool skip_cache)
unsigned int opt_flags)
{
struct firmware_buf *buf = fw->priv;
@ -1040,7 +1050,8 @@ static int assign_firmware_buf(struct firmware *fw, struct device *device,
* device may have been deleted already, but the problem
* should be fixed in devres or driver core.
*/
if (device && !skip_cache)
/* don't cache firmware handled without uevent */
if (device && (opt_flags & FW_OPT_UEVENT))
fw_add_devm_name(device, buf->fw_id);
/*
@ -1061,7 +1072,7 @@ static int assign_firmware_buf(struct firmware *fw, struct device *device,
/* called from request_firmware() and request_firmware_work_func() */
static int
_request_firmware(const struct firmware **firmware_p, const char *name,
struct device *device, bool uevent, bool nowait)
struct device *device, unsigned int opt_flags)
{
struct firmware *fw;
long timeout;
@ -1076,7 +1087,7 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
ret = 0;
timeout = firmware_loading_timeout();
if (nowait) {
if (opt_flags & FW_OPT_NOWAIT) {
timeout = usermodehelper_read_lock_wait(timeout);
if (!timeout) {
dev_dbg(device, "firmware: %s loading timed out\n",
@ -1095,16 +1106,18 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
ret = fw_get_filesystem_firmware(device, fw->priv);
if (ret) {
dev_warn(device, "Direct firmware load failed with error %d\n",
ret);
dev_warn(device, "Falling back to user helper\n");
ret = fw_load_from_user_helper(fw, name, device,
uevent, nowait, timeout);
if (opt_flags & FW_OPT_FALLBACK) {
dev_warn(device,
"Direct firmware load failed with error %d\n",
ret);
dev_warn(device, "Falling back to user helper\n");
ret = fw_load_from_user_helper(fw, name, device,
opt_flags, timeout);
}
}
/* don't cache firmware handled without uevent */
if (!ret)
ret = assign_firmware_buf(fw, device, !uevent);
ret = assign_firmware_buf(fw, device, opt_flags);
usermodehelper_read_unlock();
@ -1146,12 +1159,37 @@ request_firmware(const struct firmware **firmware_p, const char *name,
/* Need to pin this module until return */
__module_get(THIS_MODULE);
ret = _request_firmware(firmware_p, name, device, true, false);
ret = _request_firmware(firmware_p, name, device,
FW_OPT_UEVENT | FW_OPT_FALLBACK);
module_put(THIS_MODULE);
return ret;
}
EXPORT_SYMBOL(request_firmware);
#ifdef CONFIG_FW_LOADER_USER_HELPER
/**
* request_firmware_direct: - load firmware directly without usermode helper
* @firmware_p: pointer to firmware image
* @name: name of firmware file
* @device: device for which firmware is being loaded
*
* This function works pretty much like request_firmware(), but this doesn't
* fall back to usermode helper even if the firmware couldn't be loaded
* directly from fs. Hence it's useful for loading optional firmwares, which
* aren't always present, without extra long timeouts of udev.
**/
int request_firmware_direct(const struct firmware **firmware_p,
const char *name, struct device *device)
{
int ret;
__module_get(THIS_MODULE);
ret = _request_firmware(firmware_p, name, device, FW_OPT_UEVENT);
module_put(THIS_MODULE);
return ret;
}
EXPORT_SYMBOL_GPL(request_firmware_direct);
#endif
/**
* release_firmware: - release the resource associated with a firmware image
* @fw: firmware resource to release
@ -1174,7 +1212,7 @@ struct firmware_work {
struct device *device;
void *context;
void (*cont)(const struct firmware *fw, void *context);
bool uevent;
unsigned int opt_flags;
};
static void request_firmware_work_func(struct work_struct *work)
@ -1185,7 +1223,7 @@ static void request_firmware_work_func(struct work_struct *work)
fw_work = container_of(work, struct firmware_work, work);
_request_firmware(&fw, fw_work->name, fw_work->device,
fw_work->uevent, true);
fw_work->opt_flags);
fw_work->cont(fw, fw_work->context);
put_device(fw_work->device); /* taken in request_firmware_nowait() */
@ -1233,7 +1271,8 @@ request_firmware_nowait(
fw_work->device = device;
fw_work->context = context;
fw_work->cont = cont;
fw_work->uevent = uevent;
fw_work->opt_flags = FW_OPT_NOWAIT | FW_OPT_FALLBACK |
(uevent ? FW_OPT_UEVENT : 0);
if (!try_module_get(module)) {
kfree(fw_work);
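
The new entry point differs from request_firmware() only in skipping the
usermode-helper fallback; a driver-side sketch (the firmware name and the dev
pointer are illustrative):

const struct firmware *fw;
int err;

/* may wait out the udev timeout and fall back to the usermode helper */
err = request_firmware(&fw, "vendor/example.bin", dev);

/* direct filesystem lookup only; fails quickly if the file is absent */
err = request_firmware_direct(&fw, "vendor/example.bin", dev);
if (!err) {
    /* ... consume fw->data / fw->size ... */
    release_firmware(fw);
}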


@ -553,7 +553,7 @@ static const struct bin_attribute dmi_entry_raw_attr = {
static void dmi_sysfs_entry_release(struct kobject *kobj)
{
struct dmi_sysfs_entry *entry = to_entry(kobj);
sysfs_remove_bin_file(&entry->kobj, &dmi_entry_raw_attr);
spin_lock(&entry_list_lock);
list_del(&entry->list);
spin_unlock(&entry_list_lock);
@ -685,6 +685,7 @@ static void __exit dmi_sysfs_exit(void)
pr_debug("dmi-sysfs: unloading.\n");
cleanup_entry_list();
kset_unregister(dmi_kset);
kobject_del(dmi_kobj);
kobject_put(dmi_kobj);
}


@ -393,7 +393,7 @@ static const DEVICE_ATTR(value, 0644,
static irqreturn_t gpio_sysfs_irq(int irq, void *priv)
{
struct sysfs_dirent *value_sd = priv;
struct kernfs_node *value_sd = priv;
sysfs_notify_dirent(value_sd);
return IRQ_HANDLED;
@ -402,7 +402,7 @@ static irqreturn_t gpio_sysfs_irq(int irq, void *priv)
static int gpio_setup_irq(struct gpio_desc *desc, struct device *dev,
unsigned long gpio_flags)
{
struct sysfs_dirent *value_sd;
struct kernfs_node *value_sd;
unsigned long irq_flags;
int ret, irq, id;


@ -1635,7 +1635,7 @@ int bitmap_create(struct mddev *mddev)
sector_t blocks = mddev->resync_max_sectors;
struct file *file = mddev->bitmap_info.file;
int err;
struct sysfs_dirent *bm = NULL;
struct kernfs_node *bm = NULL;
BUILD_BUG_ON(sizeof(bitmap_super_t) != 256);


@ -225,7 +225,7 @@ struct bitmap {
wait_queue_head_t overflow_wait;
wait_queue_head_t behind_wait;
struct sysfs_dirent *sysfs_can_clear;
struct kernfs_node *sysfs_can_clear;
};
/* the bitmap API */


@ -106,7 +106,7 @@ struct md_rdev {
*/
struct work_struct del_work; /* used for delayed sysfs removal */
struct sysfs_dirent *sysfs_state; /* handle for 'state'
struct kernfs_node *sysfs_state; /* handle for 'state'
* sysfs entry */
struct badblocks {
@ -379,10 +379,10 @@ struct mddev {
sector_t resync_max; /* resync should pause
* when it gets here */
struct sysfs_dirent *sysfs_state; /* handle for 'array_state'
struct kernfs_node *sysfs_state; /* handle for 'array_state'
* file in sysfs.
*/
struct sysfs_dirent *sysfs_action; /* handle for 'sync_action' */
struct kernfs_node *sysfs_action; /* handle for 'sync_action' */
struct work_struct del_work; /* used for delayed sysfs removal */
@ -501,13 +501,13 @@ struct md_sysfs_entry {
};
extern struct attribute_group md_bitmap_group;
static inline struct sysfs_dirent *sysfs_get_dirent_safe(struct sysfs_dirent *sd, char *name)
static inline struct kernfs_node *sysfs_get_dirent_safe(struct kernfs_node *sd, char *name)
{
if (sd)
return sysfs_get_dirent(sd, name);
return sd;
}
static inline void sysfs_notify_dirent_safe(struct sysfs_dirent *sd)
static inline void sysfs_notify_dirent_safe(struct kernfs_node *sd)
{
if (sd)
sysfs_notify_dirent(sd);


@ -112,7 +112,7 @@ struct mic_device {
struct work_struct shutdown_work;
u8 state;
u8 shutdown_status;
struct sysfs_dirent *state_sysfs;
struct kernfs_node *state_sysfs;
struct completion reset_wait;
void *log_buf_addr;
int *log_buf_len;


@ -53,7 +53,7 @@ obj-$(CONFIG_FHANDLE) += fhandle.o
obj-y += quota/
obj-$(CONFIG_PROC_FS) += proc/
obj-$(CONFIG_SYSFS) += sysfs/
obj-$(CONFIG_SYSFS) += sysfs/ kernfs/
obj-$(CONFIG_CONFIGFS_FS) += configfs/
obj-y += devpts/

fs/kernfs/Makefile (new file, 5 lines)

@ -0,0 +1,5 @@
#
# Makefile for the kernfs pseudo filesystem
#
obj-y := mount.o inode.o dir.o file.o symlink.o

fs/kernfs/dir.c (new file, 1073 lines)

File diff suppressed because it is too large.

fs/kernfs/file.c (new file, 867 lines)

@ -0,0 +1,867 @@
/*
* fs/kernfs/file.c - kernfs file implementation
*
* Copyright (c) 2001-3 Patrick Mochel
* Copyright (c) 2007 SUSE Linux Products GmbH
* Copyright (c) 2007, 2013 Tejun Heo <tj@kernel.org>
*
* This file is released under the GPLv2.
*/
#include <linux/fs.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/poll.h>
#include <linux/pagemap.h>
#include <linux/sched.h>
#include "kernfs-internal.h"
/*
* There's one kernfs_open_file for each open file and one kernfs_open_node
* for each kernfs_node with one or more open files.
*
* kernfs_node->attr.open points to kernfs_open_node. attr.open is
* protected by kernfs_open_node_lock.
*
* filp->private_data points to seq_file whose ->private points to
* kernfs_open_file. kernfs_open_files are chained at
* kernfs_open_node->files, which is protected by kernfs_open_file_mutex.
*/
static DEFINE_SPINLOCK(kernfs_open_node_lock);
static DEFINE_MUTEX(kernfs_open_file_mutex);
struct kernfs_open_node {
atomic_t refcnt;
atomic_t event;
wait_queue_head_t poll;
struct list_head files; /* goes through kernfs_open_file.list */
};
static struct kernfs_open_file *kernfs_of(struct file *file)
{
return ((struct seq_file *)file->private_data)->private;
}
/*
* Determine the kernfs_ops for the given kernfs_node. This function must
* be called while holding an active reference.
*/
static const struct kernfs_ops *kernfs_ops(struct kernfs_node *kn)
{
if (kn->flags & KERNFS_LOCKDEP)
lockdep_assert_held(kn);
return kn->attr.ops;
}
/*
* As kernfs_seq_stop() is also called after kernfs_seq_start() or
* kernfs_seq_next() failure, it needs to distinguish whether it's stopping
* a seq_file iteration which is fully initialized with an active reference
* or an aborted kernfs_seq_start() due to get_active failure. The
* position pointer is the only context for each seq_file iteration and
* thus the stop condition should be encoded in it. As the return value is
* directly visible to userland, ERR_PTR(-ENODEV) is the only acceptable
* choice to indicate get_active failure.
*
* Unfortunately, this is complicated due to the optional custom seq_file
* operations which may return ERR_PTR(-ENODEV) too. kernfs_seq_stop()
* can't distinguish whether ERR_PTR(-ENODEV) is from get_active failure or
* custom seq_file operations and thus can't decide whether put_active
* should be performed or not only on ERR_PTR(-ENODEV).
*
* This is worked around by factoring out the custom seq_stop() and
* put_active part into kernfs_seq_stop_active(), skipping it from
* kernfs_seq_stop() if ERR_PTR(-ENODEV) while invoking it directly after
* custom seq_file operations fail with ERR_PTR(-ENODEV) - this ensures
* that kernfs_seq_stop_active() is skipped only after get_active failure.
*/
static void kernfs_seq_stop_active(struct seq_file *sf, void *v)
{
struct kernfs_open_file *of = sf->private;
const struct kernfs_ops *ops = kernfs_ops(of->kn);
if (ops->seq_stop)
ops->seq_stop(sf, v);
kernfs_put_active(of->kn);
}
static void *kernfs_seq_start(struct seq_file *sf, loff_t *ppos)
{
struct kernfs_open_file *of = sf->private;
const struct kernfs_ops *ops;
/*
* @of->mutex nests outside active ref and is just to ensure that
* the ops aren't called concurrently for the same open file.
*/
mutex_lock(&of->mutex);
if (!kernfs_get_active(of->kn))
return ERR_PTR(-ENODEV);
ops = kernfs_ops(of->kn);
if (ops->seq_start) {
void *next = ops->seq_start(sf, ppos);
/* see the comment above kernfs_seq_stop_active() */
if (next == ERR_PTR(-ENODEV))
kernfs_seq_stop_active(sf, next);
return next;
} else {
/*
* The same behavior and code as single_open(). Returns
* !NULL if pos is at the beginning; otherwise, NULL.
*/
return NULL + !*ppos;
}
}
static void *kernfs_seq_next(struct seq_file *sf, void *v, loff_t *ppos)
{
struct kernfs_open_file *of = sf->private;
const struct kernfs_ops *ops = kernfs_ops(of->kn);
if (ops->seq_next) {
void *next = ops->seq_next(sf, v, ppos);
/* see the comment above kernfs_seq_stop_active() */
if (next == ERR_PTR(-ENODEV))
kernfs_seq_stop_active(sf, next);
return next;
} else {
/*
* The same behavior and code as single_open(), always
* terminate after the initial read.
*/
++*ppos;
return NULL;
}
}
static void kernfs_seq_stop(struct seq_file *sf, void *v)
{
struct kernfs_open_file *of = sf->private;
if (v != ERR_PTR(-ENODEV))
kernfs_seq_stop_active(sf, v);
mutex_unlock(&of->mutex);
}
static int kernfs_seq_show(struct seq_file *sf, void *v)
{
struct kernfs_open_file *of = sf->private;
of->event = atomic_read(&of->kn->attr.open->event);
return of->kn->attr.ops->seq_show(sf, v);
}
static const struct seq_operations kernfs_seq_ops = {
.start = kernfs_seq_start,
.next = kernfs_seq_next,
.stop = kernfs_seq_stop,
.show = kernfs_seq_show,
};
/*
* As reading a bin file can have side-effects, the exact offset and bytes
* specified in read(2) call should be passed to the read callback making
* it difficult to use seq_file. Implement simplistic custom buffering for
* bin files.
*/
static ssize_t kernfs_file_direct_read(struct kernfs_open_file *of,
char __user *user_buf, size_t count,
loff_t *ppos)
{
ssize_t len = min_t(size_t, count, PAGE_SIZE);
const struct kernfs_ops *ops;
char *buf;
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
return -ENOMEM;
/*
* @of->mutex nests outside active ref and is just to ensure that
* the ops aren't called concurrently for the same open file.
*/
mutex_lock(&of->mutex);
if (!kernfs_get_active(of->kn)) {
len = -ENODEV;
mutex_unlock(&of->mutex);
goto out_free;
}
ops = kernfs_ops(of->kn);
if (ops->read)
len = ops->read(of, buf, len, *ppos);
else
len = -EINVAL;
kernfs_put_active(of->kn);
mutex_unlock(&of->mutex);
if (len < 0)
goto out_free;
if (copy_to_user(user_buf, buf, len)) {
len = -EFAULT;
goto out_free;
}
*ppos += len;
out_free:
kfree(buf);
return len;
}
/**
* kernfs_fop_read - kernfs vfs read callback
* @file: file pointer
* @user_buf: data to write
* @count: number of bytes
* @ppos: starting offset
*/
static ssize_t kernfs_fop_read(struct file *file, char __user *user_buf,
size_t count, loff_t *ppos)
{
struct kernfs_open_file *of = kernfs_of(file);
if (of->kn->flags & KERNFS_HAS_SEQ_SHOW)
return seq_read(file, user_buf, count, ppos);
else
return kernfs_file_direct_read(of, user_buf, count, ppos);
}
/**
* kernfs_fop_write - kernfs vfs write callback
* @file: file pointer
* @user_buf: data to write
* @count: number of bytes
* @ppos: starting offset
*
* Copy data in from userland and pass it to the matching kernfs write
* operation.
*
* There is no easy way for us to know if userspace is only doing a partial
* write, so we don't support them. We expect the entire buffer to come on
* the first write. Hint: if you're writing a value, first read the file,
* modify only the value you're changing, then write the entire buffer
* back.
*/
static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
size_t count, loff_t *ppos)
{
struct kernfs_open_file *of = kernfs_of(file);
ssize_t len = min_t(size_t, count, PAGE_SIZE);
const struct kernfs_ops *ops;
char *buf;
buf = kmalloc(len + 1, GFP_KERNEL);
if (!buf)
return -ENOMEM;
if (copy_from_user(buf, user_buf, len)) {
len = -EFAULT;
goto out_free;
}
buf[len] = '\0'; /* guarantee string termination */
/*
* @of->mutex nests outside active ref and is just to ensure that
* the ops aren't called concurrently for the same open file.
*/
mutex_lock(&of->mutex);
if (!kernfs_get_active(of->kn)) {
mutex_unlock(&of->mutex);
len = -ENODEV;
goto out_free;
}
ops = kernfs_ops(of->kn);
if (ops->write)
len = ops->write(of, buf, len, *ppos);
else
len = -EINVAL;
kernfs_put_active(of->kn);
mutex_unlock(&of->mutex);
if (len > 0)
*ppos += len;
out_free:
kfree(buf);
return len;
}
static void kernfs_vma_open(struct vm_area_struct *vma)
{
struct file *file = vma->vm_file;
struct kernfs_open_file *of = kernfs_of(file);
if (!of->vm_ops)
return;
if (!kernfs_get_active(of->kn))
return;
if (of->vm_ops->open)
of->vm_ops->open(vma);
kernfs_put_active(of->kn);
}
static int kernfs_vma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
struct file *file = vma->vm_file;
struct kernfs_open_file *of = kernfs_of(file);
int ret;
if (!of->vm_ops)
return VM_FAULT_SIGBUS;
if (!kernfs_get_active(of->kn))
return VM_FAULT_SIGBUS;
ret = VM_FAULT_SIGBUS;
if (of->vm_ops->fault)
ret = of->vm_ops->fault(vma, vmf);
kernfs_put_active(of->kn);
return ret;
}
static int kernfs_vma_page_mkwrite(struct vm_area_struct *vma,
struct vm_fault *vmf)
{
struct file *file = vma->vm_file;
struct kernfs_open_file *of = kernfs_of(file);
int ret;
if (!of->vm_ops)
return VM_FAULT_SIGBUS;
if (!kernfs_get_active(of->kn))
return VM_FAULT_SIGBUS;
ret = 0;
if (of->vm_ops->page_mkwrite)
ret = of->vm_ops->page_mkwrite(vma, vmf);
else
file_update_time(file);
kernfs_put_active(of->kn);
return ret;
}
static int kernfs_vma_access(struct vm_area_struct *vma, unsigned long addr,
void *buf, int len, int write)
{
struct file *file = vma->vm_file;
struct kernfs_open_file *of = kernfs_of(file);
int ret;
if (!of->vm_ops)
return -EINVAL;
if (!kernfs_get_active(of->kn))
return -EINVAL;
ret = -EINVAL;
if (of->vm_ops->access)
ret = of->vm_ops->access(vma, addr, buf, len, write);
kernfs_put_active(of->kn);
return ret;
}
#ifdef CONFIG_NUMA
static int kernfs_vma_set_policy(struct vm_area_struct *vma,
struct mempolicy *new)
{
struct file *file = vma->vm_file;
struct kernfs_open_file *of = kernfs_of(file);
int ret;
if (!of->vm_ops)
return 0;
if (!kernfs_get_active(of->kn))
return -EINVAL;
ret = 0;
if (of->vm_ops->set_policy)
ret = of->vm_ops->set_policy(vma, new);
kernfs_put_active(of->kn);
return ret;
}
static struct mempolicy *kernfs_vma_get_policy(struct vm_area_struct *vma,
unsigned long addr)
{
struct file *file = vma->vm_file;
struct kernfs_open_file *of = kernfs_of(file);
struct mempolicy *pol;
if (!of->vm_ops)
return vma->vm_policy;
if (!kernfs_get_active(of->kn))
return vma->vm_policy;
pol = vma->vm_policy;
if (of->vm_ops->get_policy)
pol = of->vm_ops->get_policy(vma, addr);
kernfs_put_active(of->kn);
return pol;
}
static int kernfs_vma_migrate(struct vm_area_struct *vma,
const nodemask_t *from, const nodemask_t *to,
unsigned long flags)
{
struct file *file = vma->vm_file;
struct kernfs_open_file *of = kernfs_of(file);
int ret;
if (!of->vm_ops)
return 0;
if (!kernfs_get_active(of->kn))
return 0;
ret = 0;
if (of->vm_ops->migrate)
ret = of->vm_ops->migrate(vma, from, to, flags);
kernfs_put_active(of->kn);
return ret;
}
#endif
static const struct vm_operations_struct kernfs_vm_ops = {
.open = kernfs_vma_open,
.fault = kernfs_vma_fault,
.page_mkwrite = kernfs_vma_page_mkwrite,
.access = kernfs_vma_access,
#ifdef CONFIG_NUMA
.set_policy = kernfs_vma_set_policy,
.get_policy = kernfs_vma_get_policy,
.migrate = kernfs_vma_migrate,
#endif
};
static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
{
struct kernfs_open_file *of = kernfs_of(file);
const struct kernfs_ops *ops;
int rc;
/*
* mmap path and of->mutex are prone to triggering spurious lockdep
* warnings and we don't want to add spurious locking dependency
* between the two. Check whether mmap is actually implemented
* without grabbing @of->mutex by testing HAS_MMAP flag. See the
* comment in kernfs_file_open() for more details.
*/
if (!(of->kn->flags & KERNFS_HAS_MMAP))
return -ENODEV;
mutex_lock(&of->mutex);
rc = -ENODEV;
if (!kernfs_get_active(of->kn))
goto out_unlock;
ops = kernfs_ops(of->kn);
rc = ops->mmap(of, vma);
/*
* PowerPC's pci_mmap of legacy_mem uses shmem_zero_setup()
* to satisfy versions of X which crash if the mmap fails: that
* substitutes a new vm_file, and we don't then want bin_vm_ops.
*/
if (vma->vm_file != file)
goto out_put;
rc = -EINVAL;
if (of->mmapped && of->vm_ops != vma->vm_ops)
goto out_put;
/*
* It is not possible to successfully wrap close.
* So error if someone is trying to use close.
*/
rc = -EINVAL;
if (vma->vm_ops && vma->vm_ops->close)
goto out_put;
rc = 0;
of->mmapped = 1;
of->vm_ops = vma->vm_ops;
vma->vm_ops = &kernfs_vm_ops;
out_put:
kernfs_put_active(of->kn);
out_unlock:
mutex_unlock(&of->mutex);
return rc;
}
/**
* kernfs_get_open_node - get or create kernfs_open_node
* @kn: target kernfs_node
* @of: kernfs_open_file for this instance of open
*
* If @kn->attr.open exists, increment its reference count; otherwise,
* create one. @of is chained to the files list.
*
* LOCKING:
* Kernel thread context (may sleep).
*
* RETURNS:
* 0 on success, -errno on failure.
*/
static int kernfs_get_open_node(struct kernfs_node *kn,
struct kernfs_open_file *of)
{
struct kernfs_open_node *on, *new_on = NULL;
retry:
mutex_lock(&kernfs_open_file_mutex);
spin_lock_irq(&kernfs_open_node_lock);
if (!kn->attr.open && new_on) {
kn->attr.open = new_on;
new_on = NULL;
}
on = kn->attr.open;
if (on) {
atomic_inc(&on->refcnt);
list_add_tail(&of->list, &on->files);
}
spin_unlock_irq(&kernfs_open_node_lock);
mutex_unlock(&kernfs_open_file_mutex);
if (on) {
kfree(new_on);
return 0;
}
/* not there, initialize a new one and retry */
new_on = kmalloc(sizeof(*new_on), GFP_KERNEL);
if (!new_on)
return -ENOMEM;
atomic_set(&new_on->refcnt, 0);
atomic_set(&new_on->event, 1);
init_waitqueue_head(&new_on->poll);
INIT_LIST_HEAD(&new_on->files);
goto retry;
}
/**
* kernfs_put_open_node - put kernfs_open_node
* @kn: target kernfs_node
* @of: associated kernfs_open_file
*
* Put @kn->attr.open and unlink @of from the files list. If
* reference count reaches zero, disassociate and free it.
*
* LOCKING:
* None.
*/
static void kernfs_put_open_node(struct kernfs_node *kn,
struct kernfs_open_file *of)
{
struct kernfs_open_node *on = kn->attr.open;
unsigned long flags;
mutex_lock(&kernfs_open_file_mutex);
spin_lock_irqsave(&kernfs_open_node_lock, flags);
if (of)
list_del(&of->list);
if (atomic_dec_and_test(&on->refcnt))
kn->attr.open = NULL;
else
on = NULL;
spin_unlock_irqrestore(&kernfs_open_node_lock, flags);
mutex_unlock(&kernfs_open_file_mutex);
kfree(on);
}
static int kernfs_fop_open(struct inode *inode, struct file *file)
{
struct kernfs_node *kn = file->f_path.dentry->d_fsdata;
const struct kernfs_ops *ops;
struct kernfs_open_file *of;
bool has_read, has_write, has_mmap;
int error = -EACCES;
if (!kernfs_get_active(kn))
return -ENODEV;
ops = kernfs_ops(kn);
has_read = ops->seq_show || ops->read || ops->mmap;
has_write = ops->write || ops->mmap;
has_mmap = ops->mmap;
/* check perms and supported operations */
if ((file->f_mode & FMODE_WRITE) &&
(!(inode->i_mode & S_IWUGO) || !has_write))
goto err_out;
if ((file->f_mode & FMODE_READ) &&
(!(inode->i_mode & S_IRUGO) || !has_read))
goto err_out;
/* allocate a kernfs_open_file for the file */
error = -ENOMEM;
of = kzalloc(sizeof(struct kernfs_open_file), GFP_KERNEL);
if (!of)
goto err_out;
/*
* The following is done to give a different lockdep key to
* @of->mutex for files which implement mmap. This is a rather
* crude way to avoid false positive lockdep warning around
* mm->mmap_sem - mmap nests @of->mutex under mm->mmap_sem and
* reading /sys/block/sda/trace/act_mask grabs sr_mutex, under
* which mm->mmap_sem nests, while holding @of->mutex. As each
* open file has a separate mutex, it's okay as long as those don't
* happen on the same file. At this point, we can't easily give
* each file a separate locking class. Let's differentiate on
* whether the file has mmap or not for now.
*
* Both paths of the branch look the same. They're supposed to
* look that way and give @of->mutex different static lockdep keys.
*/
if (has_mmap)
mutex_init(&of->mutex);
else
mutex_init(&of->mutex);
of->kn = kn;
of->file = file;
/*
* Always instantiate seq_file even if read access doesn't use
* seq_file or is not requested. This unifies private data access
* and readable regular files are the vast majority anyway.
*/
if (ops->seq_show)
error = seq_open(file, &kernfs_seq_ops);
else
error = seq_open(file, NULL);
if (error)
goto err_free;
((struct seq_file *)file->private_data)->private = of;
/* seq_file clears PWRITE unconditionally, restore it if WRITE */
if (file->f_mode & FMODE_WRITE)
file->f_mode |= FMODE_PWRITE;
/* make sure we have open node struct */
error = kernfs_get_open_node(kn, of);
if (error)
goto err_close;
/* open succeeded, put active references */
kernfs_put_active(kn);
return 0;
err_close:
seq_release(inode, file);
err_free:
kfree(of);
err_out:
kernfs_put_active(kn);
return error;
}
static int kernfs_fop_release(struct inode *inode, struct file *filp)
{
struct kernfs_node *kn = filp->f_path.dentry->d_fsdata;
struct kernfs_open_file *of = kernfs_of(filp);
kernfs_put_open_node(kn, of);
seq_release(inode, filp);
kfree(of);
return 0;
}
void kernfs_unmap_bin_file(struct kernfs_node *kn)
{
struct kernfs_open_node *on;
struct kernfs_open_file *of;
if (!(kn->flags & KERNFS_HAS_MMAP))
return;
spin_lock_irq(&kernfs_open_node_lock);
on = kn->attr.open;
if (on)
atomic_inc(&on->refcnt);
spin_unlock_irq(&kernfs_open_node_lock);
if (!on)
return;
mutex_lock(&kernfs_open_file_mutex);
list_for_each_entry(of, &on->files, list) {
struct inode *inode = file_inode(of->file);
unmap_mapping_range(inode->i_mapping, 0, 0, 1);
}
mutex_unlock(&kernfs_open_file_mutex);
kernfs_put_open_node(kn, NULL);
}
/*
* Kernfs attribute files are pollable. The idea is that you read
* the content and then you use 'poll' or 'select' to wait for
* the content to change. When the content changes (assuming the
* manager for the kobject supports notification), poll will
* return POLLERR|POLLPRI, and select will return the fd whether
* it is waiting for read, write, or exceptions.
* Once poll/select indicates that the value has changed, you
* need to close and re-open the file, or seek to 0 and read again.
* Reminder: this only works for attributes which actively support
* it, and it is not possible to test an attribute from userspace
* to see if it supports poll (Neither 'poll' nor 'select' return
* an appropriate error code). When in doubt, set a suitable timeout value.
*/
static unsigned int kernfs_fop_poll(struct file *filp, poll_table *wait)
{
struct kernfs_open_file *of = kernfs_of(filp);
struct kernfs_node *kn = filp->f_path.dentry->d_fsdata;
struct kernfs_open_node *on = kn->attr.open;
/* need parent for the kobj, grab both */
if (!kernfs_get_active(kn))
goto trigger;
poll_wait(filp, &on->poll, wait);
kernfs_put_active(kn);
if (of->event != atomic_read(&on->event))
goto trigger;
return DEFAULT_POLLMASK;
trigger:
return DEFAULT_POLLMASK|POLLERR|POLLPRI;
}
/**
* kernfs_notify - notify a kernfs file
* @kn: file to notify
*
* Notify @kn such that poll(2) on @kn wakes up.
*/
void kernfs_notify(struct kernfs_node *kn)
{
struct kernfs_open_node *on;
unsigned long flags;
spin_lock_irqsave(&kernfs_open_node_lock, flags);
if (!WARN_ON(kernfs_type(kn) != KERNFS_FILE)) {
on = kn->attr.open;
if (on) {
atomic_inc(&on->event);
wake_up_interruptible(&on->poll);
}
}
spin_unlock_irqrestore(&kernfs_open_node_lock, flags);
}
EXPORT_SYMBOL_GPL(kernfs_notify);
const struct file_operations kernfs_file_fops = {
.read = kernfs_fop_read,
.write = kernfs_fop_write,
.llseek = generic_file_llseek,
.mmap = kernfs_fop_mmap,
.open = kernfs_fop_open,
.release = kernfs_fop_release,
.poll = kernfs_fop_poll,
};
/**
* __kernfs_create_file - kernfs internal function to create a file
* @parent: directory to create the file in
* @name: name of the file
* @mode: mode of the file
* @size: size of the file
* @ops: kernfs operations for the file
* @priv: private data for the file
* @ns: optional namespace tag of the file
* @static_name: don't copy file name
* @key: lockdep key for the file's active_ref, %NULL to disable lockdep
*
* Returns the created node on success, ERR_PTR() value on error.
*/
struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent,
const char *name,
umode_t mode, loff_t size,
const struct kernfs_ops *ops,
void *priv, const void *ns,
bool name_is_static,
struct lock_class_key *key)
{
struct kernfs_addrm_cxt acxt;
struct kernfs_node *kn;
unsigned flags;
int rc;
flags = KERNFS_FILE;
if (name_is_static)
flags |= KERNFS_STATIC_NAME;
kn = kernfs_new_node(parent, name, (mode & S_IALLUGO) | S_IFREG, flags);
if (!kn)
return ERR_PTR(-ENOMEM);
kn->attr.ops = ops;
kn->attr.size = size;
kn->ns = ns;
kn->priv = priv;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
if (key) {
lockdep_init_map(&kn->dep_map, "s_active", key, 0);
kn->flags |= KERNFS_LOCKDEP;
}
#endif
/*
* kn->attr.ops is accessible only while holding active ref. We
* need to know whether some ops are implemented outside active
* ref. Cache their existence in flags.
*/
if (ops->seq_show)
kn->flags |= KERNFS_HAS_SEQ_SHOW;
if (ops->mmap)
kn->flags |= KERNFS_HAS_MMAP;
kernfs_addrm_start(&acxt);
rc = kernfs_add_one(&acxt, kn);
kernfs_addrm_finish(&acxt);
if (rc) {
kernfs_put(kn);
return ERR_PTR(rc);
}
return kn;
}
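
The notification scheme described in the comment above kernfs_fop_poll() looks
roughly like this from userspace (the attribute path is illustrative and error
handling is trimmed):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

static int wait_for_change(const char *path) /* e.g. a sysfs attribute */
{
    char buf[64];
    struct pollfd pfd;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    read(fd, buf, sizeof(buf));     /* read the current value first */
    pfd.fd = fd;
    pfd.events = POLLPRI;
    poll(&pfd, 1, -1);              /* wakes with POLLERR|POLLPRI on kernfs_notify() */
    lseek(fd, 0, SEEK_SET);         /* then seek back to 0 ... */
    read(fd, buf, sizeof(buf));     /* ... and read the new value */
    close(fd);
    return 0;
}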

fs/kernfs/inode.c (new file, 377 lines)

@ -0,0 +1,377 @@
/*
* fs/kernfs/inode.c - kernfs inode implementation
*
* Copyright (c) 2001-3 Patrick Mochel
* Copyright (c) 2007 SUSE Linux Products GmbH
* Copyright (c) 2007, 2013 Tejun Heo <tj@kernel.org>
*
* This file is released under the GPLv2.
*/
#include <linux/pagemap.h>
#include <linux/backing-dev.h>
#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/xattr.h>
#include <linux/security.h>
#include "kernfs-internal.h"
static const struct address_space_operations kernfs_aops = {
.readpage = simple_readpage,
.write_begin = simple_write_begin,
.write_end = simple_write_end,
};
static struct backing_dev_info kernfs_bdi = {
.name = "kernfs",
.ra_pages = 0, /* No readahead */
.capabilities = BDI_CAP_NO_ACCT_AND_WRITEBACK,
};
static const struct inode_operations kernfs_iops = {
.permission = kernfs_iop_permission,
.setattr = kernfs_iop_setattr,
.getattr = kernfs_iop_getattr,
.setxattr = kernfs_iop_setxattr,
.removexattr = kernfs_iop_removexattr,
.getxattr = kernfs_iop_getxattr,
.listxattr = kernfs_iop_listxattr,
};
void __init kernfs_inode_init(void)
{
if (bdi_init(&kernfs_bdi))
panic("failed to init kernfs_bdi");
}
static struct kernfs_iattrs *kernfs_iattrs(struct kernfs_node *kn)
{
struct iattr *iattrs;
if (kn->iattr)
return kn->iattr;
kn->iattr = kzalloc(sizeof(struct kernfs_iattrs), GFP_KERNEL);
if (!kn->iattr)
return NULL;
iattrs = &kn->iattr->ia_iattr;
/* assign default attributes */
iattrs->ia_mode = kn->mode;
iattrs->ia_uid = GLOBAL_ROOT_UID;
iattrs->ia_gid = GLOBAL_ROOT_GID;
iattrs->ia_atime = iattrs->ia_mtime = iattrs->ia_ctime = CURRENT_TIME;
simple_xattrs_init(&kn->iattr->xattrs);
return kn->iattr;
}
static int __kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr)
{
struct kernfs_iattrs *attrs;
struct iattr *iattrs;
unsigned int ia_valid = iattr->ia_valid;
attrs = kernfs_iattrs(kn);
if (!attrs)
return -ENOMEM;
iattrs = &attrs->ia_iattr;
if (ia_valid & ATTR_UID)
iattrs->ia_uid = iattr->ia_uid;
if (ia_valid & ATTR_GID)
iattrs->ia_gid = iattr->ia_gid;
if (ia_valid & ATTR_ATIME)
iattrs->ia_atime = iattr->ia_atime;
if (ia_valid & ATTR_MTIME)
iattrs->ia_mtime = iattr->ia_mtime;
if (ia_valid & ATTR_CTIME)
iattrs->ia_ctime = iattr->ia_ctime;
if (ia_valid & ATTR_MODE) {
umode_t mode = iattr->ia_mode;
iattrs->ia_mode = kn->mode = mode;
}
return 0;
}
/**
* kernfs_setattr - set iattr on a node
* @kn: target node
* @iattr: iattr to set
*
* Returns 0 on success, -errno on failure.
*/
int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr)
{
int ret;
mutex_lock(&kernfs_mutex);
ret = __kernfs_setattr(kn, iattr);
mutex_unlock(&kernfs_mutex);
return ret;
}
int kernfs_iop_setattr(struct dentry *dentry, struct iattr *iattr)
{
struct inode *inode = dentry->d_inode;
struct kernfs_node *kn = dentry->d_fsdata;
int error;
if (!kn)
return -EINVAL;
mutex_lock(&kernfs_mutex);
error = inode_change_ok(inode, iattr);
if (error)
goto out;
error = __kernfs_setattr(kn, iattr);
if (error)
goto out;
/* this ignores size changes */
setattr_copy(inode, iattr);
out:
mutex_unlock(&kernfs_mutex);
return error;
}
static int kernfs_node_setsecdata(struct kernfs_node *kn, void **secdata,
u32 *secdata_len)
{
struct kernfs_iattrs *attrs;
void *old_secdata;
size_t old_secdata_len;
attrs = kernfs_iattrs(kn);
if (!attrs)
return -ENOMEM;
old_secdata = attrs->ia_secdata;
old_secdata_len = attrs->ia_secdata_len;
attrs->ia_secdata = *secdata;
attrs->ia_secdata_len = *secdata_len;
*secdata = old_secdata;
*secdata_len = old_secdata_len;
return 0;
}
int kernfs_iop_setxattr(struct dentry *dentry, const char *name,
const void *value, size_t size, int flags)
{
struct kernfs_node *kn = dentry->d_fsdata;
struct kernfs_iattrs *attrs;
void *secdata;
int error;
u32 secdata_len = 0;
attrs = kernfs_iattrs(kn);
if (!attrs)
return -ENOMEM;
if (!strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN)) {
const char *suffix = name + XATTR_SECURITY_PREFIX_LEN;
error = security_inode_setsecurity(dentry->d_inode, suffix,
value, size, flags);
if (error)
return error;
error = security_inode_getsecctx(dentry->d_inode,
&secdata, &secdata_len);
if (error)
return error;
mutex_lock(&kernfs_mutex);
error = kernfs_node_setsecdata(kn, &secdata, &secdata_len);
mutex_unlock(&kernfs_mutex);
if (secdata)
security_release_secctx(secdata, secdata_len);
return error;
} else if (!strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN)) {
return simple_xattr_set(&attrs->xattrs, name, value, size,
flags);
}
return -EINVAL;
}
int kernfs_iop_removexattr(struct dentry *dentry, const char *name)
{
struct kernfs_node *kn = dentry->d_fsdata;
struct kernfs_iattrs *attrs;
attrs = kernfs_iattrs(kn);
if (!attrs)
return -ENOMEM;
return simple_xattr_remove(&attrs->xattrs, name);
}
ssize_t kernfs_iop_getxattr(struct dentry *dentry, const char *name, void *buf,
size_t size)
{
struct kernfs_node *kn = dentry->d_fsdata;
struct kernfs_iattrs *attrs;
attrs = kernfs_iattrs(kn);
if (!attrs)
return -ENOMEM;
return simple_xattr_get(&attrs->xattrs, name, buf, size);
}
ssize_t kernfs_iop_listxattr(struct dentry *dentry, char *buf, size_t size)
{
struct kernfs_node *kn = dentry->d_fsdata;
struct kernfs_iattrs *attrs;
attrs = kernfs_iattrs(kn);
if (!attrs)
return -ENOMEM;
return simple_xattr_list(&attrs->xattrs, buf, size);
}
static inline void set_default_inode_attr(struct inode *inode, umode_t mode)
{
inode->i_mode = mode;
inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
}
static inline void set_inode_attr(struct inode *inode, struct iattr *iattr)
{
inode->i_uid = iattr->ia_uid;
inode->i_gid = iattr->ia_gid;
inode->i_atime = iattr->ia_atime;
inode->i_mtime = iattr->ia_mtime;
inode->i_ctime = iattr->ia_ctime;
}
static void kernfs_refresh_inode(struct kernfs_node *kn, struct inode *inode)
{
struct kernfs_iattrs *attrs = kn->iattr;
inode->i_mode = kn->mode;
if (attrs) {
/*
* The kernfs_node has non-default attributes; get them from the
* persistent copy in the kernfs_node.
*/
set_inode_attr(inode, &attrs->ia_iattr);
security_inode_notifysecctx(inode, attrs->ia_secdata,
attrs->ia_secdata_len);
}
if (kernfs_type(kn) == KERNFS_DIR)
set_nlink(inode, kn->dir.subdirs + 2);
}
int kernfs_iop_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat)
{
struct kernfs_node *kn = dentry->d_fsdata;
struct inode *inode = dentry->d_inode;
mutex_lock(&kernfs_mutex);
kernfs_refresh_inode(kn, inode);
mutex_unlock(&kernfs_mutex);
generic_fillattr(inode, stat);
return 0;
}
static void kernfs_init_inode(struct kernfs_node *kn, struct inode *inode)
{
kernfs_get(kn);
inode->i_private = kn;
inode->i_mapping->a_ops = &kernfs_aops;
inode->i_mapping->backing_dev_info = &kernfs_bdi;
inode->i_op = &kernfs_iops;
set_default_inode_attr(inode, kn->mode);
kernfs_refresh_inode(kn, inode);
/* initialize inode according to type */
switch (kernfs_type(kn)) {
case KERNFS_DIR:
inode->i_op = &kernfs_dir_iops;
inode->i_fop = &kernfs_dir_fops;
break;
case KERNFS_FILE:
inode->i_size = kn->attr.size;
inode->i_fop = &kernfs_file_fops;
break;
case KERNFS_LINK:
inode->i_op = &kernfs_symlink_iops;
break;
default:
BUG();
}
unlock_new_inode(inode);
}
/**
* kernfs_get_inode - get inode for kernfs_node
* @sb: super block
* @kn: kernfs_node to allocate inode for
*
* Get inode for @kn. If such an inode doesn't exist, a new inode is
* allocated and the basics are initialized. The new inode is returned
* locked.
*
* LOCKING:
* Kernel thread context (may sleep).
*
* RETURNS:
* Pointer to allocated inode on success, NULL on failure.
*/
struct inode *kernfs_get_inode(struct super_block *sb, struct kernfs_node *kn)
{
struct inode *inode;
inode = iget_locked(sb, kn->ino);
if (inode && (inode->i_state & I_NEW))
kernfs_init_inode(kn, inode);
return inode;
}
/*
* The kernfs_node serves as both an inode and a directory entry for
* kernfs. To prevent the kernfs inode numbers from being freed
* prematurely we take a reference to kernfs_node from the kernfs inode. A
* super_operations.evict_inode() implementation is needed to drop that
* reference upon inode destruction.
*/
void kernfs_evict_inode(struct inode *inode)
{
struct kernfs_node *kn = inode->i_private;
truncate_inode_pages(&inode->i_data, 0);
clear_inode(inode);
kernfs_put(kn);
}
int kernfs_iop_permission(struct inode *inode, int mask)
{
struct kernfs_node *kn;
if (mask & MAY_NOT_BLOCK)
return -ECHILD;
kn = inode->i_private;
mutex_lock(&kernfs_mutex);
kernfs_refresh_inode(kn, inode);
mutex_unlock(&kernfs_mutex);
return generic_permission(inode, mask);
}

fs/kernfs/kernfs-internal.h

@ -0,0 +1,122 @@
/*
* fs/kernfs/kernfs-internal.h - kernfs internal header file
*
* Copyright (c) 2001-3 Patrick Mochel
* Copyright (c) 2007 SUSE Linux Products GmbH
* Copyright (c) 2007, 2013 Tejun Heo <teheo@suse.de>
*
* This file is released under the GPLv2.
*/
#ifndef __KERNFS_INTERNAL_H
#define __KERNFS_INTERNAL_H
#include <linux/lockdep.h>
#include <linux/fs.h>
#include <linux/mutex.h>
#include <linux/xattr.h>
#include <linux/kernfs.h>
struct kernfs_iattrs {
struct iattr ia_iattr;
void *ia_secdata;
u32 ia_secdata_len;
struct simple_xattrs xattrs;
};
#define KN_DEACTIVATED_BIAS INT_MIN
/* KERNFS_TYPE_MASK and types are defined in include/linux/kernfs.h */
/**
* kernfs_root - find out the kernfs_root a kernfs_node belongs to
* @kn: kernfs_node of interest
*
* Return the kernfs_root @kn belongs to.
*/
static inline struct kernfs_root *kernfs_root(struct kernfs_node *kn)
{
/* if parent exists, it's always a dir; otherwise, @kn is a dir */
if (kn->parent)
kn = kn->parent;
return kn->dir.root;
}
/*
* Context structure to be used while adding/removing nodes.
*/
struct kernfs_addrm_cxt {
struct kernfs_node *removed;
};
/*
* mount.c
*/
struct kernfs_super_info {
/*
* The root associated with this super_block. Each super_block is
* identified by the root and ns it's associated with.
*/
struct kernfs_root *root;
/*
* Each sb is associated with one namespace tag, currently the
* network namespace of the task which mounted this kernfs
* instance. If multiple tags become necessary, make the following
* an array and compare kernfs_node tag against every entry.
*/
const void *ns;
};
#define kernfs_info(SB) ((struct kernfs_super_info *)(SB->s_fs_info))
extern struct kmem_cache *kernfs_node_cache;
/*
* inode.c
*/
struct inode *kernfs_get_inode(struct super_block *sb, struct kernfs_node *kn);
void kernfs_evict_inode(struct inode *inode);
int kernfs_iop_permission(struct inode *inode, int mask);
int kernfs_iop_setattr(struct dentry *dentry, struct iattr *iattr);
int kernfs_iop_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
int kernfs_iop_setxattr(struct dentry *dentry, const char *name, const void *value,
size_t size, int flags);
int kernfs_iop_removexattr(struct dentry *dentry, const char *name);
ssize_t kernfs_iop_getxattr(struct dentry *dentry, const char *name, void *buf,
size_t size);
ssize_t kernfs_iop_listxattr(struct dentry *dentry, char *buf, size_t size);
void kernfs_inode_init(void);
/*
* dir.c
*/
extern struct mutex kernfs_mutex;
extern const struct dentry_operations kernfs_dops;
extern const struct file_operations kernfs_dir_fops;
extern const struct inode_operations kernfs_dir_iops;
struct kernfs_node *kernfs_get_active(struct kernfs_node *kn);
void kernfs_put_active(struct kernfs_node *kn);
void kernfs_addrm_start(struct kernfs_addrm_cxt *acxt);
int kernfs_add_one(struct kernfs_addrm_cxt *acxt, struct kernfs_node *kn);
void kernfs_addrm_finish(struct kernfs_addrm_cxt *acxt);
struct kernfs_node *kernfs_new_node(struct kernfs_node *parent,
const char *name, umode_t mode,
unsigned flags);
/*
* file.c
*/
extern const struct file_operations kernfs_file_fops;
void kernfs_unmap_bin_file(struct kernfs_node *kn);
/*
* symlink.c
*/
extern const struct inode_operations kernfs_symlink_iops;
#endif /* __KERNFS_INTERNAL_H */

fs/kernfs/mount.c

@ -0,0 +1,165 @@
/*
* fs/kernfs/mount.c - kernfs mount implementation
*
* Copyright (c) 2001-3 Patrick Mochel
* Copyright (c) 2007 SUSE Linux Products GmbH
* Copyright (c) 2007, 2013 Tejun Heo <tj@kernel.org>
*
* This file is released under the GPLv2.
*/
#include <linux/fs.h>
#include <linux/mount.h>
#include <linux/init.h>
#include <linux/magic.h>
#include <linux/slab.h>
#include <linux/pagemap.h>
#include "kernfs-internal.h"
struct kmem_cache *kernfs_node_cache;
static const struct super_operations kernfs_sops = {
.statfs = simple_statfs,
.drop_inode = generic_delete_inode,
.evict_inode = kernfs_evict_inode,
};
static int kernfs_fill_super(struct super_block *sb)
{
struct kernfs_super_info *info = kernfs_info(sb);
struct inode *inode;
struct dentry *root;
sb->s_blocksize = PAGE_CACHE_SIZE;
sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
sb->s_magic = SYSFS_MAGIC;
sb->s_op = &kernfs_sops;
sb->s_time_gran = 1;
/* get root inode, initialize and unlock it */
mutex_lock(&kernfs_mutex);
inode = kernfs_get_inode(sb, info->root->kn);
mutex_unlock(&kernfs_mutex);
if (!inode) {
pr_debug("kernfs: could not get root inode\n");
return -ENOMEM;
}
/* instantiate and link root dentry */
root = d_make_root(inode);
if (!root) {
pr_debug("%s: could not get root dentry!\n", __func__);
return -ENOMEM;
}
kernfs_get(info->root->kn);
root->d_fsdata = info->root->kn;
sb->s_root = root;
sb->s_d_op = &kernfs_dops;
return 0;
}
static int kernfs_test_super(struct super_block *sb, void *data)
{
struct kernfs_super_info *sb_info = kernfs_info(sb);
struct kernfs_super_info *info = data;
return sb_info->root == info->root && sb_info->ns == info->ns;
}
static int kernfs_set_super(struct super_block *sb, void *data)
{
int error;
error = set_anon_super(sb, data);
if (!error)
sb->s_fs_info = data;
return error;
}
/**
* kernfs_super_ns - determine the namespace tag of a kernfs super_block
* @sb: super_block of interest
*
* Return the namespace tag associated with kernfs super_block @sb.
*/
const void *kernfs_super_ns(struct super_block *sb)
{
struct kernfs_super_info *info = kernfs_info(sb);
return info->ns;
}
/**
* kernfs_mount_ns - kernfs mount helper
* @fs_type: file_system_type of the fs being mounted
* @flags: mount flags specified for the mount
* @root: kernfs_root of the hierarchy being mounted
* @ns: optional namespace tag of the mount
*
* This is to be called from each kernfs user's file_system_type->mount()
* implementation, which should pass through the specified @fs_type and
* @flags, and specify the hierarchy and namespace tag to mount via @root
* and @ns, respectively.
*
* The return value can be passed to the vfs layer verbatim.
*/
struct dentry *kernfs_mount_ns(struct file_system_type *fs_type, int flags,
struct kernfs_root *root, const void *ns)
{
struct super_block *sb;
struct kernfs_super_info *info;
int error;
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return ERR_PTR(-ENOMEM);
info->root = root;
info->ns = ns;
sb = sget(fs_type, kernfs_test_super, kernfs_set_super, flags, info);
if (IS_ERR(sb) || sb->s_fs_info != info)
kfree(info);
if (IS_ERR(sb))
return ERR_CAST(sb);
if (!sb->s_root) {
error = kernfs_fill_super(sb);
if (error) {
deactivate_locked_super(sb);
return ERR_PTR(error);
}
sb->s_flags |= MS_ACTIVE;
}
return dget(sb->s_root);
}
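/*
 * Example (editor's sketch, not part of the original patch): a minimal
 * file_system_type->mount() implementation built on kernfs_mount_ns().
 * "example_root" is a hypothetical kernfs_root created elsewhere with
 * kernfs_create_root(); passing NULL for @ns means no namespace
 * filtering.  The sysfs conversion later in this series follows the
 * same pattern, additionally grabbing a network namespace tag.
 */
static struct kernfs_root *example_root;

static struct dentry *example_mount(struct file_system_type *fs_type,
				    int flags, const char *dev_name,
				    void *data)
{
	return kernfs_mount_ns(fs_type, flags, example_root, NULL);
}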
/**
* kernfs_kill_sb - kill_sb for kernfs
* @sb: super_block being killed
*
* This can be used directly for file_system_type->kill_sb(). If a kernfs
* user needs extra cleanup, it can implement its own kill_sb() and call
* this function at the end.
*/
void kernfs_kill_sb(struct super_block *sb)
{
struct kernfs_super_info *info = kernfs_info(sb);
struct kernfs_node *root_kn = sb->s_root->d_fsdata;
/*
* Remove the superblock from fs_supers/s_instances
* so we can't find it, before freeing kernfs_super_info.
*/
kill_anon_super(sb);
kfree(info);
kernfs_put(root_kn);
}
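/*
 * Example (editor's sketch, not part of the original patch): a kernfs
 * user with extra per-super state provides its own kill_sb() and calls
 * kernfs_kill_sb() as described above.  Anything stored in the kernfs
 * super info (such as a namespace tag obtained via kernfs_super_ns())
 * must be fetched before kernfs_kill_sb() frees it; compare the sysfs
 * conversion later in this series.
 */
static void example_kill_sb(struct super_block *sb)
{
	/* fetch the tag before the kernfs super info is freed */
	const void *ns = kernfs_super_ns(sb);

	kernfs_kill_sb(sb);
	/* a real user would now release its reference on @ns */
	(void)ns;
}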
void __init kernfs_init(void)
{
kernfs_node_cache = kmem_cache_create("kernfs_node_cache",
sizeof(struct kernfs_node),
0, SLAB_PANIC, NULL);
kernfs_inode_init();
}

fs/kernfs/symlink.c

@ -0,0 +1,151 @@
/*
* fs/kernfs/symlink.c - kernfs symlink implementation
*
* Copyright (c) 2001-3 Patrick Mochel
* Copyright (c) 2007 SUSE Linux Products GmbH
* Copyright (c) 2007, 2013 Tejun Heo <tj@kernel.org>
*
* This file is released under the GPLv2.
*/
#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/namei.h>
#include "kernfs-internal.h"
/**
* kernfs_create_link - create a symlink
* @parent: directory to create the symlink in
* @name: name of the symlink
* @target: target node for the symlink to point to
*
* Returns the created node on success, ERR_PTR() value on error.
*/
struct kernfs_node *kernfs_create_link(struct kernfs_node *parent,
const char *name,
struct kernfs_node *target)
{
struct kernfs_node *kn;
struct kernfs_addrm_cxt acxt;
int error;
kn = kernfs_new_node(parent, name, S_IFLNK|S_IRWXUGO, KERNFS_LINK);
if (!kn)
return ERR_PTR(-ENOMEM);
if (kernfs_ns_enabled(parent))
kn->ns = target->ns;
kn->symlink.target_kn = target;
kernfs_get(target); /* ref owned by symlink */
kernfs_addrm_start(&acxt);
error = kernfs_add_one(&acxt, kn);
kernfs_addrm_finish(&acxt);
if (!error)
return kn;
kernfs_put(kn);
return ERR_PTR(error);
}
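/*
 * Example (editor's sketch, not part of the original patch): the typical
 * caller pattern for kernfs_create_link().  The "alias" name is purely
 * illustrative; compare sysfs_do_create_link_sd() later in this series.
 */
static int example_add_alias(struct kernfs_node *dir,
			     struct kernfs_node *target)
{
	struct kernfs_node *kn;

	kn = kernfs_create_link(dir, "alias", target);
	if (IS_ERR(kn))
		return PTR_ERR(kn);
	return 0;
}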
static int kernfs_get_target_path(struct kernfs_node *parent,
struct kernfs_node *target, char *path)
{
struct kernfs_node *base, *kn;
char *s = path;
int len = 0;
/* go up to the root, stop at the base */
base = parent;
while (base->parent) {
kn = target->parent;
while (kn->parent && base != kn)
kn = kn->parent;
if (base == kn)
break;
strcpy(s, "../");
s += 3;
base = base->parent;
}
/* determine end of target string for reverse fillup */
kn = target;
while (kn->parent && kn != base) {
len += strlen(kn->name) + 1;
kn = kn->parent;
}
/* check limits */
if (len < 2)
return -EINVAL;
len--;
if ((s - path) + len > PATH_MAX)
return -ENAMETOOLONG;
/* reverse fillup of target string from target to base */
kn = target;
while (kn->parent && kn != base) {
int slen = strlen(kn->name);
len -= slen;
strncpy(s + len, kn->name, slen);
if (len)
s[--len] = '/';
kn = kn->parent;
}
return 0;
}
static int kernfs_getlink(struct dentry *dentry, char *path)
{
struct kernfs_node *kn = dentry->d_fsdata;
struct kernfs_node *parent = kn->parent;
struct kernfs_node *target = kn->symlink.target_kn;
int error;
mutex_lock(&kernfs_mutex);
error = kernfs_get_target_path(parent, target, path);
mutex_unlock(&kernfs_mutex);
return error;
}
static void *kernfs_iop_follow_link(struct dentry *dentry, struct nameidata *nd)
{
int error = -ENOMEM;
unsigned long page = get_zeroed_page(GFP_KERNEL);
if (page) {
error = kernfs_getlink(dentry, (char *) page);
if (error < 0)
free_page((unsigned long)page);
}
nd_set_link(nd, error ? ERR_PTR(error) : (char *)page);
return NULL;
}
static void kernfs_iop_put_link(struct dentry *dentry, struct nameidata *nd,
void *cookie)
{
char *page = nd_get_link(nd);
if (!IS_ERR(page))
free_page((unsigned long)page);
}
const struct inode_operations kernfs_symlink_iops = {
.setxattr = kernfs_iop_setxattr,
.removexattr = kernfs_iop_removexattr,
.getxattr = kernfs_iop_getxattr,
.listxattr = kernfs_iop_listxattr,
.readlink = generic_readlink,
.follow_link = kernfs_iop_follow_link,
.put_link = kernfs_iop_put_link,
.setattr = kernfs_iop_setattr,
.getattr = kernfs_iop_getattr,
.permission = kernfs_iop_permission,
};


@ -2790,6 +2790,8 @@ void __init mnt_init(void)
for (u = 0; u < HASH_SIZE; u++)
INIT_LIST_HEAD(&mountpoint_hashtable[u]);
kernfs_init();
err = sysfs_init();
if (err)
printk(KERN_WARNING "%s: sysfs_init error: %d\n",


@ -2,4 +2,4 @@
# Makefile for the sysfs virtual filesystem
#
obj-y := inode.o file.o dir.o symlink.o mount.o group.o
obj-y := file.o dir.o symlink.o mount.o group.o



@ -18,7 +18,7 @@
#include "sysfs.h"
static void remove_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
static void remove_files(struct kernfs_node *parent, struct kobject *kobj,
const struct attribute_group *grp)
{
struct attribute *const *attr;
@ -26,13 +26,13 @@ static void remove_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
if (grp->attrs)
for (attr = grp->attrs; *attr; attr++)
sysfs_hash_and_remove(dir_sd, (*attr)->name, NULL);
kernfs_remove_by_name(parent, (*attr)->name);
if (grp->bin_attrs)
for (bin_attr = grp->bin_attrs; *bin_attr; bin_attr++)
sysfs_remove_bin_file(kobj, *bin_attr);
}
static int create_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
static int create_files(struct kernfs_node *parent, struct kobject *kobj,
const struct attribute_group *grp, int update)
{
struct attribute *const *attr;
@ -49,22 +49,20 @@ static int create_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
* re-adding (if required) the file.
*/
if (update)
sysfs_hash_and_remove(dir_sd, (*attr)->name,
NULL);
kernfs_remove_by_name(parent, (*attr)->name);
if (grp->is_visible) {
mode = grp->is_visible(kobj, *attr, i);
if (!mode)
continue;
}
error = sysfs_add_file_mode_ns(dir_sd, *attr,
SYSFS_KOBJ_ATTR,
error = sysfs_add_file_mode_ns(parent, *attr, false,
(*attr)->mode | mode,
NULL);
if (unlikely(error))
break;
}
if (error) {
remove_files(dir_sd, kobj, grp);
remove_files(parent, kobj, grp);
goto exit;
}
}
@ -78,7 +76,7 @@ static int create_files(struct sysfs_dirent *dir_sd, struct kobject *kobj,
break;
}
if (error)
remove_files(dir_sd, kobj, grp);
remove_files(parent, kobj, grp);
}
exit:
return error;
@ -88,7 +86,7 @@ exit:
static int internal_create_group(struct kobject *kobj, int update,
const struct attribute_group *grp)
{
struct sysfs_dirent *sd;
struct kernfs_node *kn;
int error;
BUG_ON(!kobj || (!update && !kobj->sd));
@ -102,18 +100,22 @@ static int internal_create_group(struct kobject *kobj, int update,
return -EINVAL;
}
if (grp->name) {
error = sysfs_create_subdir(kobj, grp->name, &sd);
if (error)
return error;
kn = kernfs_create_dir(kobj->sd, grp->name,
S_IRWXU | S_IRUGO | S_IXUGO, kobj);
if (IS_ERR(kn)) {
if (PTR_ERR(kn) == -EEXIST)
sysfs_warn_dup(kobj->sd, grp->name);
return PTR_ERR(kn);
}
} else
sd = kobj->sd;
sysfs_get(sd);
error = create_files(sd, kobj, grp, update);
kn = kobj->sd;
kernfs_get(kn);
error = create_files(kn, kobj, grp, update);
if (error) {
if (grp->name)
sysfs_remove(sd);
kernfs_remove(kn);
}
sysfs_put(sd);
kernfs_put(kn);
return error;
}
@ -203,25 +205,27 @@ EXPORT_SYMBOL_GPL(sysfs_update_group);
void sysfs_remove_group(struct kobject *kobj,
const struct attribute_group *grp)
{
struct sysfs_dirent *dir_sd = kobj->sd;
struct sysfs_dirent *sd;
struct kernfs_node *parent = kobj->sd;
struct kernfs_node *kn;
if (grp->name) {
sd = sysfs_get_dirent(dir_sd, grp->name);
if (!sd) {
WARN(!sd, KERN_WARNING
kn = kernfs_find_and_get(parent, grp->name);
if (!kn) {
WARN(!kn, KERN_WARNING
"sysfs group %p not found for kobject '%s'\n",
grp, kobject_name(kobj));
return;
}
} else
sd = sysfs_get(dir_sd);
} else {
kn = parent;
kernfs_get(kn);
}
remove_files(sd, kobj, grp);
remove_files(kn, kobj, grp);
if (grp->name)
sysfs_remove(sd);
kernfs_remove(kn);
sysfs_put(sd);
kernfs_put(kn);
}
EXPORT_SYMBOL_GPL(sysfs_remove_group);
@ -257,22 +261,22 @@ EXPORT_SYMBOL_GPL(sysfs_remove_groups);
int sysfs_merge_group(struct kobject *kobj,
const struct attribute_group *grp)
{
struct sysfs_dirent *dir_sd;
struct kernfs_node *parent;
int error = 0;
struct attribute *const *attr;
int i;
dir_sd = sysfs_get_dirent(kobj->sd, grp->name);
if (!dir_sd)
parent = kernfs_find_and_get(kobj->sd, grp->name);
if (!parent)
return -ENOENT;
for ((i = 0, attr = grp->attrs); *attr && !error; (++i, ++attr))
error = sysfs_add_file(dir_sd, *attr, SYSFS_KOBJ_ATTR);
error = sysfs_add_file(parent, *attr, false);
if (error) {
while (--i >= 0)
sysfs_hash_and_remove(dir_sd, (*--attr)->name, NULL);
kernfs_remove_by_name(parent, (*--attr)->name);
}
sysfs_put(dir_sd);
kernfs_put(parent);
return error;
}
@ -286,14 +290,14 @@ EXPORT_SYMBOL_GPL(sysfs_merge_group);
void sysfs_unmerge_group(struct kobject *kobj,
const struct attribute_group *grp)
{
struct sysfs_dirent *dir_sd;
struct kernfs_node *parent;
struct attribute *const *attr;
dir_sd = sysfs_get_dirent(kobj->sd, grp->name);
if (dir_sd) {
parent = kernfs_find_and_get(kobj->sd, grp->name);
if (parent) {
for (attr = grp->attrs; *attr; ++attr)
sysfs_hash_and_remove(dir_sd, (*attr)->name, NULL);
sysfs_put(dir_sd);
kernfs_remove_by_name(parent, (*attr)->name);
kernfs_put(parent);
}
}
EXPORT_SYMBOL_GPL(sysfs_unmerge_group);
@ -308,15 +312,15 @@ EXPORT_SYMBOL_GPL(sysfs_unmerge_group);
int sysfs_add_link_to_group(struct kobject *kobj, const char *group_name,
struct kobject *target, const char *link_name)
{
struct sysfs_dirent *dir_sd;
struct kernfs_node *parent;
int error = 0;
dir_sd = sysfs_get_dirent(kobj->sd, group_name);
if (!dir_sd)
parent = kernfs_find_and_get(kobj->sd, group_name);
if (!parent)
return -ENOENT;
error = sysfs_create_link_sd(dir_sd, target, link_name);
sysfs_put(dir_sd);
error = sysfs_create_link_sd(parent, target, link_name);
kernfs_put(parent);
return error;
}
@ -331,12 +335,12 @@ EXPORT_SYMBOL_GPL(sysfs_add_link_to_group);
void sysfs_remove_link_from_group(struct kobject *kobj, const char *group_name,
const char *link_name)
{
struct sysfs_dirent *dir_sd;
struct kernfs_node *parent;
dir_sd = sysfs_get_dirent(kobj->sd, group_name);
if (dir_sd) {
sysfs_hash_and_remove(dir_sd, link_name, NULL);
sysfs_put(dir_sd);
parent = kernfs_find_and_get(kobj->sd, group_name);
if (parent) {
kernfs_remove_by_name(parent, link_name);
kernfs_put(parent);
}
}
EXPORT_SYMBOL_GPL(sysfs_remove_link_from_group);


@ -1,331 +0,0 @@
/*
* fs/sysfs/inode.c - basic sysfs inode and dentry operations
*
* Copyright (c) 2001-3 Patrick Mochel
* Copyright (c) 2007 SUSE Linux Products GmbH
* Copyright (c) 2007 Tejun Heo <teheo@suse.de>
*
* This file is released under the GPLv2.
*
* Please see Documentation/filesystems/sysfs.txt for more information.
*/
#undef DEBUG
#include <linux/pagemap.h>
#include <linux/namei.h>
#include <linux/backing-dev.h>
#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
#include <linux/xattr.h>
#include <linux/security.h>
#include "sysfs.h"
static const struct address_space_operations sysfs_aops = {
.readpage = simple_readpage,
.write_begin = simple_write_begin,
.write_end = simple_write_end,
};
static struct backing_dev_info sysfs_backing_dev_info = {
.name = "sysfs",
.ra_pages = 0, /* No readahead */
.capabilities = BDI_CAP_NO_ACCT_AND_WRITEBACK,
};
static const struct inode_operations sysfs_inode_operations = {
.permission = sysfs_permission,
.setattr = sysfs_setattr,
.getattr = sysfs_getattr,
.setxattr = sysfs_setxattr,
};
int __init sysfs_inode_init(void)
{
return bdi_init(&sysfs_backing_dev_info);
}
static struct sysfs_inode_attrs *sysfs_init_inode_attrs(struct sysfs_dirent *sd)
{
struct sysfs_inode_attrs *attrs;
struct iattr *iattrs;
attrs = kzalloc(sizeof(struct sysfs_inode_attrs), GFP_KERNEL);
if (!attrs)
return NULL;
iattrs = &attrs->ia_iattr;
/* assign default attributes */
iattrs->ia_mode = sd->s_mode;
iattrs->ia_uid = GLOBAL_ROOT_UID;
iattrs->ia_gid = GLOBAL_ROOT_GID;
iattrs->ia_atime = iattrs->ia_mtime = iattrs->ia_ctime = CURRENT_TIME;
return attrs;
}
int sysfs_sd_setattr(struct sysfs_dirent *sd, struct iattr *iattr)
{
struct sysfs_inode_attrs *sd_attrs;
struct iattr *iattrs;
unsigned int ia_valid = iattr->ia_valid;
sd_attrs = sd->s_iattr;
if (!sd_attrs) {
/* setting attributes for the first time, allocate now */
sd_attrs = sysfs_init_inode_attrs(sd);
if (!sd_attrs)
return -ENOMEM;
sd->s_iattr = sd_attrs;
}
/* attributes were changed at least once in past */
iattrs = &sd_attrs->ia_iattr;
if (ia_valid & ATTR_UID)
iattrs->ia_uid = iattr->ia_uid;
if (ia_valid & ATTR_GID)
iattrs->ia_gid = iattr->ia_gid;
if (ia_valid & ATTR_ATIME)
iattrs->ia_atime = iattr->ia_atime;
if (ia_valid & ATTR_MTIME)
iattrs->ia_mtime = iattr->ia_mtime;
if (ia_valid & ATTR_CTIME)
iattrs->ia_ctime = iattr->ia_ctime;
if (ia_valid & ATTR_MODE) {
umode_t mode = iattr->ia_mode;
iattrs->ia_mode = sd->s_mode = mode;
}
return 0;
}
int sysfs_setattr(struct dentry *dentry, struct iattr *iattr)
{
struct inode *inode = dentry->d_inode;
struct sysfs_dirent *sd = dentry->d_fsdata;
int error;
if (!sd)
return -EINVAL;
mutex_lock(&sysfs_mutex);
error = inode_change_ok(inode, iattr);
if (error)
goto out;
error = sysfs_sd_setattr(sd, iattr);
if (error)
goto out;
/* this ignores size changes */
setattr_copy(inode, iattr);
out:
mutex_unlock(&sysfs_mutex);
return error;
}
static int sysfs_sd_setsecdata(struct sysfs_dirent *sd, void **secdata,
u32 *secdata_len)
{
struct sysfs_inode_attrs *iattrs;
void *old_secdata;
size_t old_secdata_len;
if (!sd->s_iattr) {
sd->s_iattr = sysfs_init_inode_attrs(sd);
if (!sd->s_iattr)
return -ENOMEM;
}
iattrs = sd->s_iattr;
old_secdata = iattrs->ia_secdata;
old_secdata_len = iattrs->ia_secdata_len;
iattrs->ia_secdata = *secdata;
iattrs->ia_secdata_len = *secdata_len;
*secdata = old_secdata;
*secdata_len = old_secdata_len;
return 0;
}
int sysfs_setxattr(struct dentry *dentry, const char *name, const void *value,
size_t size, int flags)
{
struct sysfs_dirent *sd = dentry->d_fsdata;
void *secdata;
int error;
u32 secdata_len = 0;
if (!sd)
return -EINVAL;
if (!strncmp(name, XATTR_SECURITY_PREFIX, XATTR_SECURITY_PREFIX_LEN)) {
const char *suffix = name + XATTR_SECURITY_PREFIX_LEN;
error = security_inode_setsecurity(dentry->d_inode, suffix,
value, size, flags);
if (error)
goto out;
error = security_inode_getsecctx(dentry->d_inode,
&secdata, &secdata_len);
if (error)
goto out;
mutex_lock(&sysfs_mutex);
error = sysfs_sd_setsecdata(sd, &secdata, &secdata_len);
mutex_unlock(&sysfs_mutex);
if (secdata)
security_release_secctx(secdata, secdata_len);
} else
return -EINVAL;
out:
return error;
}
static inline void set_default_inode_attr(struct inode *inode, umode_t mode)
{
inode->i_mode = mode;
inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
}
static inline void set_inode_attr(struct inode *inode, struct iattr *iattr)
{
inode->i_uid = iattr->ia_uid;
inode->i_gid = iattr->ia_gid;
inode->i_atime = iattr->ia_atime;
inode->i_mtime = iattr->ia_mtime;
inode->i_ctime = iattr->ia_ctime;
}
static void sysfs_refresh_inode(struct sysfs_dirent *sd, struct inode *inode)
{
struct sysfs_inode_attrs *iattrs = sd->s_iattr;
inode->i_mode = sd->s_mode;
if (iattrs) {
/* sysfs_dirent has non-default attributes
* get them from persistent copy in sysfs_dirent
*/
set_inode_attr(inode, &iattrs->ia_iattr);
security_inode_notifysecctx(inode,
iattrs->ia_secdata,
iattrs->ia_secdata_len);
}
if (sysfs_type(sd) == SYSFS_DIR)
set_nlink(inode, sd->s_dir.subdirs + 2);
}
int sysfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat)
{
struct sysfs_dirent *sd = dentry->d_fsdata;
struct inode *inode = dentry->d_inode;
mutex_lock(&sysfs_mutex);
sysfs_refresh_inode(sd, inode);
mutex_unlock(&sysfs_mutex);
generic_fillattr(inode, stat);
return 0;
}
static void sysfs_init_inode(struct sysfs_dirent *sd, struct inode *inode)
{
struct bin_attribute *bin_attr;
inode->i_private = sysfs_get(sd);
inode->i_mapping->a_ops = &sysfs_aops;
inode->i_mapping->backing_dev_info = &sysfs_backing_dev_info;
inode->i_op = &sysfs_inode_operations;
set_default_inode_attr(inode, sd->s_mode);
sysfs_refresh_inode(sd, inode);
/* initialize inode according to type */
switch (sysfs_type(sd)) {
case SYSFS_DIR:
inode->i_op = &sysfs_dir_inode_operations;
inode->i_fop = &sysfs_dir_operations;
break;
case SYSFS_KOBJ_ATTR:
inode->i_size = PAGE_SIZE;
inode->i_fop = &sysfs_file_operations;
break;
case SYSFS_KOBJ_BIN_ATTR:
bin_attr = sd->s_attr.bin_attr;
inode->i_size = bin_attr->size;
inode->i_fop = &sysfs_bin_operations;
break;
case SYSFS_KOBJ_LINK:
inode->i_op = &sysfs_symlink_inode_operations;
break;
default:
BUG();
}
unlock_new_inode(inode);
}
/**
* sysfs_get_inode - get inode for sysfs_dirent
* @sb: super block
* @sd: sysfs_dirent to allocate inode for
*
* Get inode for @sd. If such inode doesn't exist, a new inode
* is allocated and basics are initialized. New inode is
* returned locked.
*
* LOCKING:
* Kernel thread context (may sleep).
*
* RETURNS:
* Pointer to allocated inode on success, NULL on failure.
*/
struct inode *sysfs_get_inode(struct super_block *sb, struct sysfs_dirent *sd)
{
struct inode *inode;
inode = iget_locked(sb, sd->s_ino);
if (inode && (inode->i_state & I_NEW))
sysfs_init_inode(sd, inode);
return inode;
}
/*
* The sysfs_dirent serves as both an inode and a directory entry for sysfs.
* To prevent the sysfs inode numbers from being freed prematurely we take a
* reference to sysfs_dirent from the sysfs inode. A
* super_operations.evict_inode() implementation is needed to drop that
* reference upon inode destruction.
*/
void sysfs_evict_inode(struct inode *inode)
{
struct sysfs_dirent *sd = inode->i_private;
truncate_inode_pages(&inode->i_data, 0);
clear_inode(inode);
sysfs_put(sd);
}
int sysfs_permission(struct inode *inode, int mask)
{
struct sysfs_dirent *sd;
if (mask & MAY_NOT_BLOCK)
return -ECHILD;
sd = inode->i_private;
mutex_lock(&sysfs_mutex);
sysfs_refresh_inode(sd, inode);
mutex_unlock(&sysfs_mutex);
return generic_permission(inode, mask);
}


@ -14,146 +14,41 @@
#include <linux/fs.h>
#include <linux/mount.h>
#include <linux/pagemap.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/magic.h>
#include <linux/slab.h>
#include <linux/user_namespace.h>
#include "sysfs.h"
static struct vfsmount *sysfs_mnt;
struct kmem_cache *sysfs_dir_cachep;
static const struct super_operations sysfs_ops = {
.statfs = simple_statfs,
.drop_inode = generic_delete_inode,
.evict_inode = sysfs_evict_inode,
};
struct sysfs_dirent sysfs_root = {
.s_name = "",
.s_count = ATOMIC_INIT(1),
.s_flags = SYSFS_DIR | (KOBJ_NS_TYPE_NONE << SYSFS_NS_TYPE_SHIFT),
.s_mode = S_IFDIR | S_IRUGO | S_IXUGO,
.s_ino = 1,
};
static int sysfs_fill_super(struct super_block *sb, void *data, int silent)
{
struct inode *inode;
struct dentry *root;
sb->s_blocksize = PAGE_CACHE_SIZE;
sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
sb->s_magic = SYSFS_MAGIC;
sb->s_op = &sysfs_ops;
sb->s_time_gran = 1;
/* get root inode, initialize and unlock it */
mutex_lock(&sysfs_mutex);
inode = sysfs_get_inode(sb, &sysfs_root);
mutex_unlock(&sysfs_mutex);
if (!inode) {
pr_debug("sysfs: could not get root inode\n");
return -ENOMEM;
}
/* instantiate and link root dentry */
root = d_make_root(inode);
if (!root) {
pr_debug("%s: could not get root dentry!\n", __func__);
return -ENOMEM;
}
root->d_fsdata = &sysfs_root;
sb->s_root = root;
sb->s_d_op = &sysfs_dentry_ops;
return 0;
}
static int sysfs_test_super(struct super_block *sb, void *data)
{
struct sysfs_super_info *sb_info = sysfs_info(sb);
struct sysfs_super_info *info = data;
enum kobj_ns_type type;
int found = 1;
for (type = KOBJ_NS_TYPE_NONE; type < KOBJ_NS_TYPES; type++) {
if (sb_info->ns[type] != info->ns[type])
found = 0;
}
return found;
}
static int sysfs_set_super(struct super_block *sb, void *data)
{
int error;
error = set_anon_super(sb, data);
if (!error)
sb->s_fs_info = data;
return error;
}
static void free_sysfs_super_info(struct sysfs_super_info *info)
{
int type;
for (type = KOBJ_NS_TYPE_NONE; type < KOBJ_NS_TYPES; type++)
kobj_ns_drop(type, info->ns[type]);
kfree(info);
}
static struct kernfs_root *sysfs_root;
struct kernfs_node *sysfs_root_kn;
static struct dentry *sysfs_mount(struct file_system_type *fs_type,
int flags, const char *dev_name, void *data)
{
struct sysfs_super_info *info;
enum kobj_ns_type type;
struct super_block *sb;
int error;
struct dentry *root;
void *ns;
if (!(flags & MS_KERNMOUNT)) {
if (!capable(CAP_SYS_ADMIN) && !fs_fully_visible(fs_type))
return ERR_PTR(-EPERM);
for (type = KOBJ_NS_TYPE_NONE; type < KOBJ_NS_TYPES; type++) {
if (!kobj_ns_current_may_mount(type))
return ERR_PTR(-EPERM);
}
if (!kobj_ns_current_may_mount(KOBJ_NS_TYPE_NET))
return ERR_PTR(-EPERM);
}
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return ERR_PTR(-ENOMEM);
for (type = KOBJ_NS_TYPE_NONE; type < KOBJ_NS_TYPES; type++)
info->ns[type] = kobj_ns_grab_current(type);
sb = sget(fs_type, sysfs_test_super, sysfs_set_super, flags, info);
if (IS_ERR(sb) || sb->s_fs_info != info)
free_sysfs_super_info(info);
if (IS_ERR(sb))
return ERR_CAST(sb);
if (!sb->s_root) {
error = sysfs_fill_super(sb, data, flags & MS_SILENT ? 1 : 0);
if (error) {
deactivate_locked_super(sb);
return ERR_PTR(error);
}
sb->s_flags |= MS_ACTIVE;
}
return dget(sb->s_root);
ns = kobj_ns_grab_current(KOBJ_NS_TYPE_NET);
root = kernfs_mount_ns(fs_type, flags, sysfs_root, ns);
if (IS_ERR(root))
kobj_ns_drop(KOBJ_NS_TYPE_NET, ns);
return root;
}
static void sysfs_kill_sb(struct super_block *sb)
{
struct sysfs_super_info *info = sysfs_info(sb);
/* Remove the superblock from fs_supers/s_instances
* so we can't find it, before freeing sysfs_super_info.
*/
kill_anon_super(sb);
free_sysfs_super_info(info);
void *ns = (void *)kernfs_super_ns(sb);
kernfs_kill_sb(sb);
kobj_ns_drop(KOBJ_NS_TYPE_NET, ns);
}
static struct file_system_type sysfs_fs_type = {
@ -165,48 +60,19 @@ static struct file_system_type sysfs_fs_type = {
int __init sysfs_init(void)
{
int err = -ENOMEM;
int err;
sysfs_dir_cachep = kmem_cache_create("sysfs_dir_cache",
sizeof(struct sysfs_dirent),
0, 0, NULL);
if (!sysfs_dir_cachep)
goto out;
sysfs_root = kernfs_create_root(NULL, NULL);
if (IS_ERR(sysfs_root))
return PTR_ERR(sysfs_root);
err = sysfs_inode_init();
if (err)
goto out_err;
sysfs_root_kn = sysfs_root->kn;
err = register_filesystem(&sysfs_fs_type);
if (!err) {
sysfs_mnt = kern_mount(&sysfs_fs_type);
if (IS_ERR(sysfs_mnt)) {
printk(KERN_ERR "sysfs: could not mount!\n");
err = PTR_ERR(sysfs_mnt);
sysfs_mnt = NULL;
unregister_filesystem(&sysfs_fs_type);
goto out_err;
}
} else
goto out_err;
out:
return err;
out_err:
kmem_cache_destroy(sysfs_dir_cachep);
sysfs_dir_cachep = NULL;
goto out;
}
if (err) {
kernfs_destroy_root(sysfs_root);
return err;
}
#undef sysfs_get
struct sysfs_dirent *sysfs_get(struct sysfs_dirent *sd)
{
return __sysfs_get(sd);
return 0;
}
EXPORT_SYMBOL_GPL(sysfs_get);
#undef sysfs_put
void sysfs_put(struct sysfs_dirent *sd)
{
__sysfs_put(sd);
}
EXPORT_SYMBOL_GPL(sysfs_put);


@ -11,109 +11,73 @@
*/
#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/mount.h>
#include <linux/module.h>
#include <linux/kobject.h>
#include <linux/namei.h>
#include <linux/mutex.h>
#include <linux/security.h>
#include "sysfs.h"
static int sysfs_do_create_link_sd(struct sysfs_dirent *parent_sd,
struct kobject *target,
static int sysfs_do_create_link_sd(struct kernfs_node *parent,
struct kobject *target_kobj,
const char *name, int warn)
{
struct sysfs_dirent *target_sd = NULL;
struct sysfs_dirent *sd = NULL;
struct sysfs_addrm_cxt acxt;
enum kobj_ns_type ns_type;
int error;
struct kernfs_node *kn, *target = NULL;
BUG_ON(!name || !parent_sd);
BUG_ON(!name || !parent);
/*
* We don't own @target and it may be removed at any time.
* We don't own @target_kobj and it may be removed at any time.
* Synchronize using sysfs_symlink_target_lock. See
* sysfs_remove_dir() for details.
*/
spin_lock(&sysfs_symlink_target_lock);
if (target->sd)
target_sd = sysfs_get(target->sd);
if (target_kobj->sd) {
target = target_kobj->sd;
kernfs_get(target);
}
spin_unlock(&sysfs_symlink_target_lock);
error = -ENOENT;
if (!target_sd)
goto out_put;
if (!target)
return -ENOENT;
error = -ENOMEM;
sd = sysfs_new_dirent(name, S_IFLNK|S_IRWXUGO, SYSFS_KOBJ_LINK);
if (!sd)
goto out_put;
kn = kernfs_create_link(parent, name, target);
kernfs_put(target);
ns_type = sysfs_ns_type(parent_sd);
if (ns_type)
sd->s_ns = target_sd->s_ns;
sd->s_symlink.target_sd = target_sd;
target_sd = NULL; /* reference is now owned by the symlink */
if (!IS_ERR(kn))
return 0;
sysfs_addrm_start(&acxt);
/* Symlinks must be between directories with the same ns_type */
if (!ns_type ||
(ns_type == sysfs_ns_type(sd->s_symlink.target_sd->s_parent))) {
if (warn)
error = sysfs_add_one(&acxt, sd, parent_sd);
else
error = __sysfs_add_one(&acxt, sd, parent_sd);
} else {
error = -EINVAL;
WARN(1, KERN_WARNING
"sysfs: symlink across ns_types %s/%s -> %s/%s\n",
parent_sd->s_name,
sd->s_name,
sd->s_symlink.target_sd->s_parent->s_name,
sd->s_symlink.target_sd->s_name);
}
sysfs_addrm_finish(&acxt);
if (error)
goto out_put;
return 0;
out_put:
sysfs_put(target_sd);
sysfs_put(sd);
return error;
if (warn && PTR_ERR(kn) == -EEXIST)
sysfs_warn_dup(parent, name);
return PTR_ERR(kn);
}
/**
* sysfs_create_link_sd - create symlink to a given object.
* @sd: directory we're creating the link in.
* @kn: directory we're creating the link in.
* @target: object we're pointing to.
* @name: name of the symlink.
*/
int sysfs_create_link_sd(struct sysfs_dirent *sd, struct kobject *target,
int sysfs_create_link_sd(struct kernfs_node *kn, struct kobject *target,
const char *name)
{
return sysfs_do_create_link_sd(sd, target, name, 1);
return sysfs_do_create_link_sd(kn, target, name, 1);
}
static int sysfs_do_create_link(struct kobject *kobj, struct kobject *target,
const char *name, int warn)
{
struct sysfs_dirent *parent_sd = NULL;
struct kernfs_node *parent = NULL;
if (!kobj)
parent_sd = &sysfs_root;
parent = sysfs_root_kn;
else
parent_sd = kobj->sd;
parent = kobj->sd;
if (!parent_sd)
if (!parent)
return -EFAULT;
return sysfs_do_create_link_sd(parent_sd, target, name, warn);
return sysfs_do_create_link_sd(parent, target, name, warn);
}
/**
@ -164,10 +128,10 @@ void sysfs_delete_link(struct kobject *kobj, struct kobject *targ,
* sysfs_remove_dir() for details.
*/
spin_lock(&sysfs_symlink_target_lock);
if (targ->sd && sysfs_ns_type(kobj->sd))
ns = targ->sd->s_ns;
if (targ->sd && kernfs_ns_enabled(kobj->sd))
ns = targ->sd->ns;
spin_unlock(&sysfs_symlink_target_lock);
sysfs_hash_and_remove(kobj->sd, name, ns);
kernfs_remove_by_name_ns(kobj->sd, name, ns);
}
/**
@ -177,14 +141,14 @@ void sysfs_delete_link(struct kobject *kobj, struct kobject *targ,
*/
void sysfs_remove_link(struct kobject *kobj, const char *name)
{
struct sysfs_dirent *parent_sd = NULL;
struct kernfs_node *parent = NULL;
if (!kobj)
parent_sd = &sysfs_root;
parent = sysfs_root_kn;
else
parent_sd = kobj->sd;
parent = kobj->sd;
sysfs_hash_and_remove(parent_sd, name, NULL);
kernfs_remove_by_name(parent, name);
}
EXPORT_SYMBOL_GPL(sysfs_remove_link);
@ -201,130 +165,33 @@ EXPORT_SYMBOL_GPL(sysfs_remove_link);
int sysfs_rename_link_ns(struct kobject *kobj, struct kobject *targ,
const char *old, const char *new, const void *new_ns)
{
struct sysfs_dirent *parent_sd, *sd = NULL;
struct kernfs_node *parent, *kn = NULL;
const void *old_ns = NULL;
int result;
if (!kobj)
parent_sd = &sysfs_root;
parent = sysfs_root_kn;
else
parent_sd = kobj->sd;
parent = kobj->sd;
if (targ->sd)
old_ns = targ->sd->s_ns;
old_ns = targ->sd->ns;
result = -ENOENT;
sd = sysfs_get_dirent_ns(parent_sd, old, old_ns);
if (!sd)
kn = kernfs_find_and_get_ns(parent, old, old_ns);
if (!kn)
goto out;
result = -EINVAL;
if (sysfs_type(sd) != SYSFS_KOBJ_LINK)
if (kernfs_type(kn) != KERNFS_LINK)
goto out;
if (sd->s_symlink.target_sd->s_dir.kobj != targ)
if (kn->symlink.target_kn->priv != targ)
goto out;
result = sysfs_rename(sd, parent_sd, new, new_ns);
result = kernfs_rename_ns(kn, parent, new, new_ns);
out:
sysfs_put(sd);
kernfs_put(kn);
return result;
}
EXPORT_SYMBOL_GPL(sysfs_rename_link_ns);
static int sysfs_get_target_path(struct sysfs_dirent *parent_sd,
struct sysfs_dirent *target_sd, char *path)
{
struct sysfs_dirent *base, *sd;
char *s = path;
int len = 0;
/* go up to the root, stop at the base */
base = parent_sd;
while (base->s_parent) {
sd = target_sd->s_parent;
while (sd->s_parent && base != sd)
sd = sd->s_parent;
if (base == sd)
break;
strcpy(s, "../");
s += 3;
base = base->s_parent;
}
/* determine end of target string for reverse fillup */
sd = target_sd;
while (sd->s_parent && sd != base) {
len += strlen(sd->s_name) + 1;
sd = sd->s_parent;
}
/* check limits */
if (len < 2)
return -EINVAL;
len--;
if ((s - path) + len > PATH_MAX)
return -ENAMETOOLONG;
/* reverse fillup of target string from target to base */
sd = target_sd;
while (sd->s_parent && sd != base) {
int slen = strlen(sd->s_name);
len -= slen;
strncpy(s + len, sd->s_name, slen);
if (len)
s[--len] = '/';
sd = sd->s_parent;
}
return 0;
}
static int sysfs_getlink(struct dentry *dentry, char *path)
{
struct sysfs_dirent *sd = dentry->d_fsdata;
struct sysfs_dirent *parent_sd = sd->s_parent;
struct sysfs_dirent *target_sd = sd->s_symlink.target_sd;
int error;
mutex_lock(&sysfs_mutex);
error = sysfs_get_target_path(parent_sd, target_sd, path);
mutex_unlock(&sysfs_mutex);
return error;
}
static void *sysfs_follow_link(struct dentry *dentry, struct nameidata *nd)
{
int error = -ENOMEM;
unsigned long page = get_zeroed_page(GFP_KERNEL);
if (page) {
error = sysfs_getlink(dentry, (char *) page);
if (error < 0)
free_page((unsigned long)page);
}
nd_set_link(nd, error ? ERR_PTR(error) : (char *)page);
return NULL;
}
static void sysfs_put_link(struct dentry *dentry, struct nameidata *nd,
void *cookie)
{
char *page = nd_get_link(nd);
if (!IS_ERR(page))
free_page((unsigned long)page);
}
const struct inode_operations sysfs_symlink_inode_operations = {
.setxattr = sysfs_setxattr,
.readlink = generic_readlink,
.follow_link = sysfs_follow_link,
.put_link = sysfs_put_link,
.setattr = sysfs_setattr,
.getattr = sysfs_getattr,
.permission = sysfs_permission,
};


@ -8,248 +8,36 @@
* This file is released under the GPLv2.
*/
#include <linux/lockdep.h>
#include <linux/kobject_ns.h>
#include <linux/fs.h>
#include <linux/rbtree.h>
#ifndef __SYSFS_INTERNAL_H
#define __SYSFS_INTERNAL_H
struct sysfs_open_dirent;
/* type-specific structures for sysfs_dirent->s_* union members */
struct sysfs_elem_dir {
struct kobject *kobj;
unsigned long subdirs;
/* children rbtree starts here and goes through sd->s_rb */
struct rb_root children;
};
struct sysfs_elem_symlink {
struct sysfs_dirent *target_sd;
};
struct sysfs_elem_attr {
union {
struct attribute *attr;
struct bin_attribute *bin_attr;
};
struct sysfs_open_dirent *open;
};
struct sysfs_inode_attrs {
struct iattr ia_iattr;
void *ia_secdata;
u32 ia_secdata_len;
};
/*
* sysfs_dirent - the building block of sysfs hierarchy. Each and
* every sysfs node is represented by single sysfs_dirent.
*
* As long as s_count reference is held, the sysfs_dirent itself is
* accessible. Dereferencing s_elem or any other outer entity
* requires s_active reference.
*/
struct sysfs_dirent {
atomic_t s_count;
atomic_t s_active;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lockdep_map dep_map;
#endif
struct sysfs_dirent *s_parent;
const char *s_name;
struct rb_node s_rb;
union {
struct completion *completion;
struct sysfs_dirent *removed_list;
} u;
const void *s_ns; /* namespace tag */
unsigned int s_hash; /* ns + name hash */
union {
struct sysfs_elem_dir s_dir;
struct sysfs_elem_symlink s_symlink;
struct sysfs_elem_attr s_attr;
};
unsigned short s_flags;
umode_t s_mode;
unsigned int s_ino;
struct sysfs_inode_attrs *s_iattr;
};
#define SD_DEACTIVATED_BIAS INT_MIN
#define SYSFS_TYPE_MASK 0x00ff
#define SYSFS_DIR 0x0001
#define SYSFS_KOBJ_ATTR 0x0002
#define SYSFS_KOBJ_BIN_ATTR 0x0004
#define SYSFS_KOBJ_LINK 0x0008
#define SYSFS_COPY_NAME (SYSFS_DIR | SYSFS_KOBJ_LINK)
#define SYSFS_ACTIVE_REF (SYSFS_KOBJ_ATTR | SYSFS_KOBJ_BIN_ATTR)
/* identify any namespace tag on sysfs_dirents */
#define SYSFS_NS_TYPE_MASK 0xf00
#define SYSFS_NS_TYPE_SHIFT 8
#define SYSFS_FLAG_MASK ~(SYSFS_NS_TYPE_MASK|SYSFS_TYPE_MASK)
#define SYSFS_FLAG_REMOVED 0x02000
static inline unsigned int sysfs_type(struct sysfs_dirent *sd)
{
return sd->s_flags & SYSFS_TYPE_MASK;
}
/*
* Return any namespace tags on this dirent.
* enum kobj_ns_type is defined in linux/kobject.h
*/
static inline enum kobj_ns_type sysfs_ns_type(struct sysfs_dirent *sd)
{
return (sd->s_flags & SYSFS_NS_TYPE_MASK) >> SYSFS_NS_TYPE_SHIFT;
}
#ifdef CONFIG_DEBUG_LOCK_ALLOC
#define sysfs_dirent_init_lockdep(sd) \
do { \
struct attribute *attr = sd->s_attr.attr; \
struct lock_class_key *key = attr->key; \
if (!key) \
key = &attr->skey; \
\
lockdep_init_map(&sd->dep_map, "s_active", key, 0); \
} while (0)
/* Test for attributes that want to ignore lockdep for read-locking */
static inline bool sysfs_ignore_lockdep(struct sysfs_dirent *sd)
{
int type = sysfs_type(sd);
return (type == SYSFS_KOBJ_ATTR || type == SYSFS_KOBJ_BIN_ATTR) &&
sd->s_attr.attr->ignore_lockdep;
}
#else
#define sysfs_dirent_init_lockdep(sd) do {} while (0)
static inline bool sysfs_ignore_lockdep(struct sysfs_dirent *sd)
{
return true;
}
#endif
/*
* Context structure to be used while adding/removing nodes.
*/
struct sysfs_addrm_cxt {
struct sysfs_dirent *removed;
};
#include <linux/sysfs.h>
/*
* mount.c
*/
/*
* Each sb is associated with a set of namespace tags (i.e.
* the network namespace of the task which mounted this sysfs
* instance).
*/
struct sysfs_super_info {
void *ns[KOBJ_NS_TYPES];
};
#define sysfs_info(SB) ((struct sysfs_super_info *)(SB->s_fs_info))
extern struct sysfs_dirent sysfs_root;
extern struct kmem_cache *sysfs_dir_cachep;
extern struct kernfs_node *sysfs_root_kn;
/*
* dir.c
*/
extern struct mutex sysfs_mutex;
extern spinlock_t sysfs_symlink_target_lock;
extern const struct dentry_operations sysfs_dentry_ops;
extern const struct file_operations sysfs_dir_operations;
extern const struct inode_operations sysfs_dir_inode_operations;
struct sysfs_dirent *sysfs_get_active(struct sysfs_dirent *sd);
void sysfs_put_active(struct sysfs_dirent *sd);
void sysfs_addrm_start(struct sysfs_addrm_cxt *acxt);
void sysfs_warn_dup(struct sysfs_dirent *parent, const char *name);
int __sysfs_add_one(struct sysfs_addrm_cxt *acxt, struct sysfs_dirent *sd,
struct sysfs_dirent *parent_sd);
int sysfs_add_one(struct sysfs_addrm_cxt *acxt, struct sysfs_dirent *sd,
struct sysfs_dirent *parent_sd);
void sysfs_remove(struct sysfs_dirent *sd);
int sysfs_hash_and_remove(struct sysfs_dirent *dir_sd, const char *name,
const void *ns);
void sysfs_addrm_finish(struct sysfs_addrm_cxt *acxt);
struct sysfs_dirent *sysfs_find_dirent(struct sysfs_dirent *parent_sd,
const unsigned char *name,
const void *ns);
struct sysfs_dirent *sysfs_new_dirent(const char *name, umode_t mode, int type);
void release_sysfs_dirent(struct sysfs_dirent *sd);
int sysfs_create_subdir(struct kobject *kobj, const char *name,
struct sysfs_dirent **p_sd);
int sysfs_rename(struct sysfs_dirent *sd, struct sysfs_dirent *new_parent_sd,
const char *new_name, const void *new_ns);
static inline struct sysfs_dirent *__sysfs_get(struct sysfs_dirent *sd)
{
if (sd) {
WARN_ON(!atomic_read(&sd->s_count));
atomic_inc(&sd->s_count);
}
return sd;
}
#define sysfs_get(sd) __sysfs_get(sd)
static inline void __sysfs_put(struct sysfs_dirent *sd)
{
if (sd && atomic_dec_and_test(&sd->s_count))
release_sysfs_dirent(sd);
}
#define sysfs_put(sd) __sysfs_put(sd)
/*
* inode.c
*/
struct inode *sysfs_get_inode(struct super_block *sb, struct sysfs_dirent *sd);
void sysfs_evict_inode(struct inode *inode);
int sysfs_sd_setattr(struct sysfs_dirent *sd, struct iattr *iattr);
int sysfs_permission(struct inode *inode, int mask);
int sysfs_setattr(struct dentry *dentry, struct iattr *iattr);
int sysfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
struct kstat *stat);
int sysfs_setxattr(struct dentry *dentry, const char *name, const void *value,
size_t size, int flags);
int sysfs_inode_init(void);
void sysfs_warn_dup(struct kernfs_node *parent, const char *name);
/*
* file.c
*/
extern const struct file_operations sysfs_file_operations;
extern const struct file_operations sysfs_bin_operations;
int sysfs_add_file(struct sysfs_dirent *dir_sd,
const struct attribute *attr, int type);
int sysfs_add_file_mode_ns(struct sysfs_dirent *dir_sd,
const struct attribute *attr, int type,
int sysfs_add_file(struct kernfs_node *parent,
const struct attribute *attr, bool is_bin);
int sysfs_add_file_mode_ns(struct kernfs_node *parent,
const struct attribute *attr, bool is_bin,
umode_t amode, const void *ns);
void sysfs_unmap_bin_file(struct sysfs_dirent *sd);
/*
* symlink.c
*/
extern const struct inode_operations sysfs_symlink_inode_operations;
int sysfs_create_link_sd(struct sysfs_dirent *sd, struct kobject *target,
int sysfs_create_link_sd(struct kernfs_node *kn, struct kobject *target,
const char *name);
#endif /* __SYSFS_INTERNAL_H */

include/linux/component.h

@ -0,0 +1,32 @@
#ifndef COMPONENT_H
#define COMPONENT_H
struct device;
struct component_ops {
int (*bind)(struct device *, struct device *, void *);
void (*unbind)(struct device *, struct device *, void *);
};
int component_add(struct device *, const struct component_ops *);
void component_del(struct device *, const struct component_ops *);
int component_bind_all(struct device *, void *);
void component_unbind_all(struct device *, void *);
struct master;
struct component_master_ops {
int (*add_components)(struct device *, struct master *);
int (*bind)(struct device *);
void (*unbind)(struct device *);
};
int component_master_add(struct device *, const struct component_master_ops *);
void component_master_del(struct device *,
const struct component_master_ops *);
int component_master_add_child(struct master *master,
int (*compare)(struct device *, void *), void *compare_data);
#endif


@ -68,4 +68,11 @@ static inline void release_firmware(const struct firmware *fw)
#endif
#ifdef CONFIG_FW_LOADER_USER_HELPER
int request_firmware_direct(const struct firmware **fw, const char *name,
struct device *device);
#else
#define request_firmware_direct request_firmware
#endif
#endif

include/linux/kernfs.h

@ -0,0 +1,376 @@
/*
* kernfs.h - pseudo filesystem decoupled from vfs locking
*
* This file is released under the GPLv2.
*/
#ifndef __LINUX_KERNFS_H
#define __LINUX_KERNFS_H
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/idr.h>
#include <linux/lockdep.h>
#include <linux/rbtree.h>
#include <linux/atomic.h>
#include <linux/completion.h>
struct file;
struct dentry;
struct iattr;
struct seq_file;
struct vm_area_struct;
struct super_block;
struct file_system_type;
struct kernfs_open_node;
struct kernfs_iattrs;
enum kernfs_node_type {
KERNFS_DIR = 0x0001,
KERNFS_FILE = 0x0002,
KERNFS_LINK = 0x0004,
};
#define KERNFS_TYPE_MASK 0x000f
#define KERNFS_ACTIVE_REF KERNFS_FILE
#define KERNFS_FLAG_MASK ~KERNFS_TYPE_MASK
enum kernfs_node_flag {
KERNFS_REMOVED = 0x0010,
KERNFS_NS = 0x0020,
KERNFS_HAS_SEQ_SHOW = 0x0040,
KERNFS_HAS_MMAP = 0x0080,
KERNFS_LOCKDEP = 0x0100,
KERNFS_STATIC_NAME = 0x0200,
};
/* type-specific structures for kernfs_node union members */
struct kernfs_elem_dir {
unsigned long subdirs;
/* children rbtree starts here and goes through kn->rb */
struct rb_root children;
/*
* The kernfs hierarchy this directory belongs to.  This would fit
* better directly in kernfs_node but is kept here to save space.
*/
struct kernfs_root *root;
};
struct kernfs_elem_symlink {
struct kernfs_node *target_kn;
};
struct kernfs_elem_attr {
const struct kernfs_ops *ops;
struct kernfs_open_node *open;
loff_t size;
};
/*
* kernfs_node - the building block of the kernfs hierarchy. Each and
* every kernfs node is represented by a single kernfs_node. Most fields
* are private to kernfs and shouldn't be accessed directly by kernfs
* users.
*
* As long as the count reference is held, the kernfs_node itself is
* accessible. Dereferencing elem or any other outer entity requires an
* active reference.
*/
struct kernfs_node {
atomic_t count;
atomic_t active;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lockdep_map dep_map;
#endif
/* the following two fields are published */
struct kernfs_node *parent;
const char *name;
struct rb_node rb;
union {
struct completion *completion;
struct kernfs_node *removed_list;
} u;
const void *ns; /* namespace tag */
unsigned int hash; /* ns + name hash */
union {
struct kernfs_elem_dir dir;
struct kernfs_elem_symlink symlink;
struct kernfs_elem_attr attr;
};
void *priv;
unsigned short flags;
umode_t mode;
unsigned int ino;
struct kernfs_iattrs *iattr;
};
/*
* kernfs_dir_ops may be specified on kernfs_create_root() to support
* directory manipulation syscalls. These optional callbacks are invoked
* on the matching syscalls and can perform any kernfs operations which
* don't necessarily have to be the exact operation requested.
*/
struct kernfs_dir_ops {
int (*mkdir)(struct kernfs_node *parent, const char *name,
umode_t mode);
int (*rmdir)(struct kernfs_node *kn);
int (*rename)(struct kernfs_node *kn, struct kernfs_node *new_parent,
const char *new_name);
};
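/*
 * Example (editor's sketch, not part of the original header): in a kernfs
 * user's own .c file, the optional callbacks could be wired up roughly
 * like this and handed to kernfs_create_root() (declared further down).
 * The callback name and body are placeholders, not taken from this
 * patch set.
 */
static int example_mkdir(struct kernfs_node *parent, const char *name,
			 umode_t mode)
{
	/* a real user would create its backing object and kernfs dir here */
	return -ENOSYS;
}

static struct kernfs_dir_ops example_dir_ops = {
	.mkdir = example_mkdir,
};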
struct kernfs_root {
/* published fields */
struct kernfs_node *kn;
/* private fields, do not use outside kernfs proper */
struct ida ino_ida;
struct kernfs_dir_ops *dir_ops;
};
struct kernfs_open_file {
/* published fields */
struct kernfs_node *kn;
struct file *file;
/* private fields, do not use outside kernfs proper */
struct mutex mutex;
int event;
struct list_head list;
bool mmapped;
const struct vm_operations_struct *vm_ops;
};
struct kernfs_ops {
/*
* Read is handled by either seq_file or raw_read().
*
* If seq_show() is present, the seq_file path is active. Other seq
* operations are optional and, if not implemented, the behavior is
* equivalent to single_open(). @sf->private points to the
* associated kernfs_open_file.
*
* read() is bounced through a kernel buffer and a read larger than
* PAGE_SIZE results in a partial read of PAGE_SIZE bytes.
*/
int (*seq_show)(struct seq_file *sf, void *v);
void *(*seq_start)(struct seq_file *sf, loff_t *ppos);
void *(*seq_next)(struct seq_file *sf, void *v, loff_t *ppos);
void (*seq_stop)(struct seq_file *sf, void *v);
ssize_t (*read)(struct kernfs_open_file *of, char *buf, size_t bytes,
loff_t off);
/*
* write() is bounced through a kernel buffer and a write larger than
* PAGE_SIZE results in a partial write of PAGE_SIZE bytes.
*/
ssize_t (*write)(struct kernfs_open_file *of, char *buf, size_t bytes,
loff_t off);
int (*mmap)(struct kernfs_open_file *of, struct vm_area_struct *vma);
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lock_class_key lockdep_key;
#endif
};
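/*
 * Example (editor's sketch, not part of the original header): a minimal
 * read-only attribute as it would appear in a kernfs user's own .c file,
 * using only ->seq_show so the behavior matches single_open() as
 * described above.  Assumes <linux/seq_file.h> is included; the
 * attribute name and mode passed to kernfs_create_file() (declared at
 * the end of this header) are illustrative.
 */
static int example_seq_show(struct seq_file *sf, void *v)
{
	struct kernfs_open_file *of = sf->private;

	seq_printf(sf, "%s\n", of->kn->name);
	return 0;
}

static const struct kernfs_ops example_ops = {
	.seq_show = example_seq_show,
};

/* kn = kernfs_create_file(parent, "example", 0444, 0, &example_ops, priv); */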
#ifdef CONFIG_SYSFS
static inline enum kernfs_node_type kernfs_type(struct kernfs_node *kn)
{
return kn->flags & KERNFS_TYPE_MASK;
}
/**
* kernfs_enable_ns - enable namespace under a directory
* @kn: directory of interest, should be empty
*
* This is to be called right after @kn is created to enable namespace
* under it. All children of @kn must have non-NULL namespace tags and
* only the ones which match the super_block's tag will be visible.
*/
static inline void kernfs_enable_ns(struct kernfs_node *kn)
{
WARN_ON_ONCE(kernfs_type(kn) != KERNFS_DIR);
WARN_ON_ONCE(!RB_EMPTY_ROOT(&kn->dir.children));
kn->flags |= KERNFS_NS;
}
/**
* kernfs_ns_enabled - test whether namespace is enabled
* @kn: the node to test
*
* Test whether namespace filtering is enabled for the children of @kn.
*/
static inline bool kernfs_ns_enabled(struct kernfs_node *kn)
{
return kn->flags & KERNFS_NS;
}
struct kernfs_node *kernfs_find_and_get_ns(struct kernfs_node *parent,
const char *name, const void *ns);
void kernfs_get(struct kernfs_node *kn);
void kernfs_put(struct kernfs_node *kn);
struct kernfs_root *kernfs_create_root(struct kernfs_dir_ops *kdops,
void *priv);
void kernfs_destroy_root(struct kernfs_root *root);
struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
const char *name, umode_t mode,
void *priv, const void *ns);
struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent,
const char *name,
umode_t mode, loff_t size,
const struct kernfs_ops *ops,
void *priv, const void *ns,
bool name_is_static,
struct lock_class_key *key);
struct kernfs_node *kernfs_create_link(struct kernfs_node *parent,
const char *name,
struct kernfs_node *target);
void kernfs_remove(struct kernfs_node *kn);
int kernfs_remove_by_name_ns(struct kernfs_node *parent, const char *name,
const void *ns);
int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
const char *new_name, const void *new_ns);
int kernfs_setattr(struct kernfs_node *kn, const struct iattr *iattr);
void kernfs_notify(struct kernfs_node *kn);
const void *kernfs_super_ns(struct super_block *sb);
struct dentry *kernfs_mount_ns(struct file_system_type *fs_type, int flags,
struct kernfs_root *root, const void *ns);
void kernfs_kill_sb(struct super_block *sb);
void kernfs_init(void);
#else /* CONFIG_SYSFS */
static inline enum kernfs_node_type kernfs_type(struct kernfs_node *kn)
{ return 0; } /* whatever */
static inline void kernfs_enable_ns(struct kernfs_node *kn) { }
static inline bool kernfs_ns_enabled(struct kernfs_node *kn)
{ return false; }
static inline struct kernfs_node *
kernfs_find_and_get_ns(struct kernfs_node *parent, const char *name,
const void *ns)
{ return NULL; }
static inline void kernfs_get(struct kernfs_node *kn) { }
static inline void kernfs_put(struct kernfs_node *kn) { }
static inline struct kernfs_root *
kernfs_create_root(struct kernfs_dir_ops *kdops, void *priv)
{ return ERR_PTR(-ENOSYS); }
static inline void kernfs_destroy_root(struct kernfs_root *root) { }
static inline struct kernfs_node *
kernfs_create_dir_ns(struct kernfs_node *parent, const char *name,
umode_t mode, void *priv, const void *ns)
{ return ERR_PTR(-ENOSYS); }
static inline struct kernfs_node *
__kernfs_create_file(struct kernfs_node *parent, const char *name,
umode_t mode, loff_t size, const struct kernfs_ops *ops,
void *priv, const void *ns, bool name_is_static,
struct lock_class_key *key)
{ return ERR_PTR(-ENOSYS); }
static inline struct kernfs_node *
kernfs_create_link(struct kernfs_node *parent, const char *name,
struct kernfs_node *target)
{ return ERR_PTR(-ENOSYS); }
static inline void kernfs_remove(struct kernfs_node *kn) { }
static inline int kernfs_remove_by_name_ns(struct kernfs_node *kn,
const char *name, const void *ns)
{ return -ENOSYS; }
static inline int kernfs_rename_ns(struct kernfs_node *kn,
struct kernfs_node *new_parent,
const char *new_name, const void *new_ns)
{ return -ENOSYS; }
static inline int kernfs_setattr(struct kernfs_node *kn,
const struct iattr *iattr)
{ return -ENOSYS; }
static inline void kernfs_notify(struct kernfs_node *kn) { }
static inline const void *kernfs_super_ns(struct super_block *sb)
{ return NULL; }
static inline struct dentry *
kernfs_mount_ns(struct file_system_type *fs_type, int flags,
struct kernfs_root *root, const void *ns)
{ return ERR_PTR(-ENOSYS); }
static inline void kernfs_kill_sb(struct super_block *sb) { }
static inline void kernfs_init(void) { }
#endif /* CONFIG_SYSFS */
static inline struct kernfs_node *
kernfs_find_and_get(struct kernfs_node *kn, const char *name)
{
return kernfs_find_and_get_ns(kn, name, NULL);
}
static inline struct kernfs_node *
kernfs_create_dir(struct kernfs_node *parent, const char *name, umode_t mode,
void *priv)
{
return kernfs_create_dir_ns(parent, name, mode, priv, NULL);
}
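/*
 * Example (editor's sketch, not part of the original header): creating a
 * directory and immediately enabling namespace filtering on it, as the
 * kernfs_enable_ns() comment above requires.  The name and mode are
 * illustrative.
 */
static inline struct kernfs_node *
example_create_ns_dir(struct kernfs_node *parent, void *priv)
{
	struct kernfs_node *kn;

	kn = kernfs_create_dir(parent, "example", 0755, priv);
	if (!IS_ERR(kn))
		kernfs_enable_ns(kn);	/* children must now carry ns tags */
	return kn;
}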
static inline struct kernfs_node *
kernfs_create_file_ns(struct kernfs_node *parent, const char *name,
umode_t mode, loff_t size, const struct kernfs_ops *ops,
void *priv, const void *ns)
{
struct lock_class_key *key = NULL;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
key = (struct lock_class_key *)&ops->lockdep_key;
#endif
return __kernfs_create_file(parent, name, mode, size, ops, priv, ns,
false, key);
}
static inline struct kernfs_node *
kernfs_create_file(struct kernfs_node *parent, const char *name, umode_t mode,
loff_t size, const struct kernfs_ops *ops, void *priv)
{
return kernfs_create_file_ns(parent, name, mode, size, ops, priv, NULL);
}
static inline int kernfs_remove_by_name(struct kernfs_node *parent,
const char *name)
{
return kernfs_remove_by_name_ns(parent, name, NULL);
}
static inline struct dentry *
kernfs_mount(struct file_system_type *fs_type, int flags,
struct kernfs_root *root)
{
return kernfs_mount_ns(fs_type, flags, root, NULL);
}
#endif /* __LINUX_KERNFS_H */
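
Taken with the inline wrappers just above, these declarations are the whole surface a subsystem programs against. A minimal sketch, not taken from the patch itself: the foo_* names and the parent node are invented here, and the seq_show callback signature is assumed from struct kernfs_ops. Publishing a read-only attribute could look roughly like this:

#include <linux/kernfs.h>
#include <linux/seq_file.h>
#include <linux/err.h>

/* hypothetical seq_show callback; signature assumed from struct kernfs_ops */
static int foo_active_show(struct seq_file *sf, void *v)
{
	seq_puts(sf, "1\n");
	return 0;
}

static const struct kernfs_ops foo_ops = {
	.seq_show	= foo_active_show,
};

/* @parent: a directory node the caller already owns, e.g. from kernfs_create_dir() */
static int foo_publish(struct kernfs_node *parent)
{
	struct kernfs_node *kn;

	kn = kernfs_create_file(parent, "active", 0444, 0, &foo_ops, NULL);
	if (IS_ERR(kn))
		return PTR_ERR(kn);
	return 0;
}

Note that the non-_ns helpers above are nothing more than the _ns variants called with a NULL namespace tag.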

include/linux/kobj_completion.h

@@ -1,18 +0,0 @@
#ifndef _KOBJ_COMPLETION_H_
#define _KOBJ_COMPLETION_H_
#include <linux/kobject.h>
#include <linux/completion.h>
struct kobj_completion {
struct kobject kc_kobj;
struct completion kc_unregister;
};
#define kobj_to_kobj_completion(kobj) \
container_of(kobj, struct kobj_completion, kc_kobj)
void kobj_completion_init(struct kobj_completion *kc, struct kobj_type *ktype);
void kobj_completion_release(struct kobject *kobj);
void kobj_completion_del_and_wait(struct kobj_completion *kc);
#endif /* _KOBJ_COMPLETION_H_ */

include/linux/kobject.h

@@ -64,7 +64,7 @@ struct kobject {
struct kobject *parent;
struct kset *kset;
struct kobj_type *ktype;
struct sysfs_dirent *sd;
struct kernfs_node *sd;
struct kref kref;
#ifdef CONFIG_DEBUG_KOBJECT_RELEASE
struct delayed_work release;

include/linux/memory.h

@@ -35,6 +35,7 @@ struct memory_block {
};
int arch_get_memory_phys_device(unsigned long start_pfn);
unsigned long __weak memory_block_size_bytes(void);
/* These states are exposed to userspace as text strings in sysfs */
#define MEM_ONLINE (1<<0) /* exposed to userspace */

include/linux/sysfs.h

@@ -12,6 +12,7 @@
#ifndef _SYSFS_H_
#define _SYSFS_H_
#include <linux/kernfs.h>
#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/list.h>
@@ -175,8 +176,6 @@ struct sysfs_ops {
ssize_t (*store)(struct kobject *, struct attribute *, const char *, size_t);
};
struct sysfs_dirent;
#ifdef CONFIG_SYSFS
int sysfs_schedule_callback(struct kobject *kobj, void (*func)(void *),
@@ -244,12 +243,6 @@ void sysfs_remove_link_from_group(struct kobject *kobj, const char *group_name,
const char *link_name);
void sysfs_notify(struct kobject *kobj, const char *dir, const char *attr);
void sysfs_notify_dirent(struct sysfs_dirent *sd);
struct sysfs_dirent *sysfs_get_dirent_ns(struct sysfs_dirent *parent_sd,
const unsigned char *name,
const void *ns);
struct sysfs_dirent *sysfs_get(struct sysfs_dirent *sd);
void sysfs_put(struct sysfs_dirent *sd);
int __must_check sysfs_init(void);
@@ -419,22 +412,6 @@ static inline void sysfs_notify(struct kobject *kobj, const char *dir,
const char *attr)
{
}
static inline void sysfs_notify_dirent(struct sysfs_dirent *sd)
{
}
static inline struct sysfs_dirent *
sysfs_get_dirent_ns(struct sysfs_dirent *parent_sd, const unsigned char *name,
const void *ns)
{
return NULL;
}
static inline struct sysfs_dirent *sysfs_get(struct sysfs_dirent *sd)
{
return NULL;
}
static inline void sysfs_put(struct sysfs_dirent *sd)
{
}
static inline int __must_check sysfs_init(void)
{
@@ -461,10 +438,26 @@ static inline int sysfs_rename_link(struct kobject *kobj, struct kobject *target
return sysfs_rename_link_ns(kobj, target, old_name, new_name, NULL);
}
static inline struct sysfs_dirent *
sysfs_get_dirent(struct sysfs_dirent *parent_sd, const unsigned char *name)
static inline void sysfs_notify_dirent(struct kernfs_node *kn)
{
return sysfs_get_dirent_ns(parent_sd, name, NULL);
kernfs_notify(kn);
}
static inline struct kernfs_node *sysfs_get_dirent(struct kernfs_node *parent,
const unsigned char *name)
{
return kernfs_find_and_get(parent, name);
}
static inline struct kernfs_node *sysfs_get(struct kernfs_node *kn)
{
kernfs_get(kn);
return kn;
}
static inline void sysfs_put(struct kernfs_node *kn)
{
kernfs_put(kn);
}
#endif /* _SYSFS_H_ */
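
With kobj->sd now a struct kernfs_node * (see the kobject.h hunk earlier in this diff), these wrappers let callers keep the familiar sysfs_* spelling while kernfs does the work. A hedged sketch, with the "state" attribute and the mydrv_* name invented for illustration:

#include <linux/sysfs.h>
#include <linux/kobject.h>

/* wake poll()/select() waiters on a "state" attribute that just changed */
static void mydrv_signal_state_change(struct kobject *kobj)
{
	struct kernfs_node *kn;

	kn = sysfs_get_dirent(kobj->sd, "state");	/* kernfs_find_and_get() */
	if (!kn)
		return;
	sysfs_notify_dirent(kn);			/* kernfs_notify() */
	sysfs_put(kn);					/* kernfs_put() */
}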

lib/kobject.c

@@ -13,11 +13,11 @@
*/
#include <linux/kobject.h>
#include <linux/kobj_completion.h>
#include <linux/string.h>
#include <linux/export.h>
#include <linux/stat.h>
#include <linux/slab.h>
#include <linux/random.h>
/**
* kobject_namespace - return @kobj's namespace tag
@@ -65,13 +65,17 @@ static int populate_dir(struct kobject *kobj)
static int create_dir(struct kobject *kobj)
{
const struct kobj_ns_type_operations *ops;
int error;
error = sysfs_create_dir_ns(kobj, kobject_namespace(kobj));
if (!error) {
error = populate_dir(kobj);
if (error)
sysfs_remove_dir(kobj);
if (error)
return error;
error = populate_dir(kobj);
if (error) {
sysfs_remove_dir(kobj);
return error;
}
/*
@@ -80,7 +84,20 @@ static int create_dir(struct kobject *kobj)
*/
sysfs_get(kobj->sd);
return error;
/*
* If @kobj has ns_ops, its children need to be filtered based on
* their namespace tags. Enable namespace support on @kobj->sd.
*/
ops = kobj_child_ns_ops(kobj);
if (ops) {
BUG_ON(ops->type <= KOBJ_NS_TYPE_NONE);
BUG_ON(ops->type >= KOBJ_NS_TYPES);
BUG_ON(!kobj_ns_type_registered(ops->type));
kernfs_enable_ns(kobj->sd);
}
return 0;
}
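/*
 * Illustrative only, not part of the patch: the bar_* names are invented.
 * A ktype like this makes kobj_child_ns_ops() return non-NULL above, so
 * create_dir() ends up calling kernfs_enable_ns() on the new directory and
 * its children are then filtered by namespace tag (the network class sysfs
 * code is the in-tree consumer of this mechanism).
 */
extern const struct kobj_ns_type_operations net_ns_type_operations;	/* net/core/net_namespace.c */

static const struct kobj_ns_type_operations *
bar_child_ns_type(struct kobject *kobj)
{
	return &net_ns_type_operations;		/* KOBJ_NS_TYPE_NET */
}

static struct kobj_type bar_ktype = {
	.child_ns_type	= bar_child_ns_type,
	/* .release, .sysfs_ops, .default_attrs, ... */
};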
static int get_kobj_path_length(struct kobject *kobj)
@@ -247,8 +264,10 @@ int kobject_set_name_vargs(struct kobject *kobj, const char *fmt,
return 0;
kobj->name = kvasprintf(GFP_KERNEL, fmt, vargs);
if (!kobj->name)
if (!kobj->name) {
kobj->name = old_name;
return -ENOMEM;
}
/* ewww... some of these buggers have '/' in the name ... */
while ((s = strchr(kobj->name, '/')))
@@ -346,7 +365,7 @@ static int kobject_add_varg(struct kobject *kobj, struct kobject *parent,
*
* If @parent is set, then the parent of the @kobj will be set to it.
* If @parent is NULL, then the parent of the @kobj will be set to the
* kobject associted with the kset assigned to this kobject. If no kset
* kobject associated with the kset assigned to this kobject. If no kset
* is assigned to the kobject, then the kobject will be located in the
* root of the sysfs tree.
*
@@ -536,7 +555,7 @@ out:
*/
void kobject_del(struct kobject *kobj)
{
struct sysfs_dirent *sd;
struct kernfs_node *sd;
if (!kobj)
return;
@@ -625,10 +644,12 @@ static void kobject_release(struct kref *kref)
{
struct kobject *kobj = container_of(kref, struct kobject, kref);
#ifdef CONFIG_DEBUG_KOBJECT_RELEASE
pr_info("kobject: '%s' (%p): %s, parent %p (delayed)\n",
kobject_name(kobj), kobj, __func__, kobj->parent);
unsigned long delay = HZ + HZ * (get_random_int() & 0x3);
pr_info("kobject: '%s' (%p): %s, parent %p (delayed %ld)\n",
kobject_name(kobj), kobj, __func__, kobj->parent, delay);
INIT_DELAYED_WORK(&kobj->release, kobject_delayed_cleanup);
schedule_delayed_work(&kobj->release, HZ);
schedule_delayed_work(&kobj->release, delay);
#else
kobject_cleanup(kobj);
#endif
@@ -759,55 +780,6 @@ const struct sysfs_ops kobj_sysfs_ops = {
.store = kobj_attr_store,
};
/**
* kobj_completion_init - initialize a kobj_completion object.
* @kc: kobj_completion
* @ktype: type of kobject to initialize
*
* kobj_completion structures can be embedded within structures with different
* lifetime rules. During the release of the enclosing object, we can
* wait on the release of the kobject so that we don't free it while it's
* still busy.
*/
void kobj_completion_init(struct kobj_completion *kc, struct kobj_type *ktype)
{
init_completion(&kc->kc_unregister);
kobject_init(&kc->kc_kobj, ktype);
}
EXPORT_SYMBOL_GPL(kobj_completion_init);
/**
* kobj_completion_release - release a kobj_completion object
* @kobj: kobject embedded in kobj_completion
*
* Used with kobject_release to notify waiters that the kobject has been
* released.
*/
void kobj_completion_release(struct kobject *kobj)
{
struct kobj_completion *kc = kobj_to_kobj_completion(kobj);
complete(&kc->kc_unregister);
}
EXPORT_SYMBOL_GPL(kobj_completion_release);
/**
* kobj_completion_del_and_wait - release the kobject and wait for it
* @kc: kobj_completion object to release
*
* Delete the kobject from sysfs and drop the reference count. Then wait
* until any other outstanding references are also dropped. This routine
* is only necessary once other references may have been taken on the
* kobject. Typically this happens when the kobject has been published
* to sysfs via kobject_add.
*/
void kobj_completion_del_and_wait(struct kobj_completion *kc)
{
kobject_del(&kc->kc_kobj);
kobject_put(&kc->kc_kobj);
wait_for_completion(&kc->kc_unregister);
}
EXPORT_SYMBOL_GPL(kobj_completion_del_and_wait);
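/*
 * For context only (the interface above is being removed): a hypothetical
 * user, with all mydev_* names invented. Embedding the completion lets the
 * owner of the enclosing object block until the last kobject reference is
 * gone before freeing the memory, exactly as the deleted kernel-doc above
 * describes.
 */
#include <linux/kobj_completion.h>
#include <linux/slab.h>

struct mydev {
	struct kobj_completion kc;
	/* ... other driver state ... */
};

/* the ktype's release completes kc_unregister so del_and_wait() can return */
static struct kobj_type mydev_ktype = {
	.release	= kobj_completion_release,
};

/* setup would be: kobj_completion_init(&dev->kc, &mydev_ktype);
 *                 kobject_add(&dev->kc.kc_kobj, parent, "mydev");
 */

static void mydev_destroy(struct mydev *dev)
{
	/* remove from sysfs, drop our reference, wait for everyone else's */
	kobj_completion_del_and_wait(&dev->kc);
	kfree(dev);
}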
/**
* kset_register - initialize and add a kset.
* @k: kset.
@@ -835,6 +807,7 @@ void kset_unregister(struct kset *k)
{
if (!k)
return;
kobject_del(&k->kobj);
kobject_put(&k->kobj);
}
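
kset_unregister() now deletes the kset's sysfs directory right away (the added kobject_del() above) instead of leaving that to the final kobject_put(), so a registration/unregistration pair stays as simple as before. A minimal, hypothetical lifecycle with invented demo_* names:

#include <linux/kobject.h>
#include <linux/module.h>

static struct kset *demo_kset;

static int __init demo_init(void)
{
	/* creates and registers /sys/kernel/demo */
	demo_kset = kset_create_and_add("demo", NULL, kernel_kobj);
	if (!demo_kset)
		return -ENOMEM;
	return 0;
}

static void __exit demo_exit(void)
{
	/* removes the directory and drops the last reference */
	kset_unregister(demo_kset);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");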

samples/kobject/kset-example.c

@@ -262,6 +262,7 @@ baz_error:
bar_error:
destroy_foo_obj(foo_obj);
foo_error:
kset_unregister(example_kset);
return -EINVAL;
}