License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default, all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand that can be used instead of the full boilerplate text.
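For example, the many-line GPL boilerplate block at the top of a header
is replaced by a single first line:

    /* SPDX-License-Identifier: GPL-2.0 */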
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- the file had no licensing information in it,
- the file was a */uapi/* one with no licensing information in it,
- the file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and where references to a
license had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results of the output
of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to each file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
  lines of source.
- The file already had some variant of a license header in it (even if <5
  lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
  considered to have no license information in it, and the top-level
  COPYING file license was applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If the file was a */uapi/* one, it was tagged "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was tagged "GPL-2.0". The results were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it and was one
  of the */uapi/* ones, it was given the Linux-syscall-note if any
  GPL-family license was found in the file, or if it had no licensing
  in it (per the prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses), a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research, to be revisited later.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe, and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so the two are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifiers in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors; they have been fixed to
reflect the correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 patched files from the initial patch version
earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where a license was detected (500+ files) to
  ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
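For reference, the two comment types are, per the convention used for
these patches, a C++-style comment as the first line of a .c source file:

    // SPDX-License-Identifier: GPL-2.0

and a C-style block comment as the first line of a header:

    /* SPDX-License-Identifier: GPL-2.0 */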
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _FS_CEPH_MDS_CLIENT_H
#define _FS_CEPH_MDS_CLIENT_H

#include <linux/completion.h>
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/refcount.h>
#include <linux/utsname.h>
#include <linux/ktime.h>

#include <linux/ceph/types.h>
#include <linux/ceph/messenger.h>
#include <linux/ceph/mdsmap.h>
#include <linux/ceph/auth.h>

#include "metric.h"
#include "super.h"

/* The first 8 bits are reserved for old ceph releases */
enum ceph_feature_type {
        CEPHFS_FEATURE_MIMIC = 8,
        CEPHFS_FEATURE_REPLY_ENCODING,
        CEPHFS_FEATURE_RECLAIM_CLIENT,
        CEPHFS_FEATURE_LAZY_CAP_WANTED,
        CEPHFS_FEATURE_MULTI_RECONNECT,
        CEPHFS_FEATURE_DELEG_INO,
        CEPHFS_FEATURE_METRIC_COLLECT,

        CEPHFS_FEATURE_MAX = CEPHFS_FEATURE_METRIC_COLLECT,
};

/*
 * This will always have the highest feature bit value
 * as the last element of the array.
 */
#define CEPHFS_FEATURES_CLIENT_SUPPORTED {      \
        0, 1, 2, 3, 4, 5, 6, 7,                 \
        CEPHFS_FEATURE_MIMIC,                   \
        CEPHFS_FEATURE_REPLY_ENCODING,          \
        CEPHFS_FEATURE_LAZY_CAP_WANTED,         \
        CEPHFS_FEATURE_MULTI_RECONNECT,         \
        CEPHFS_FEATURE_DELEG_INO,               \
        CEPHFS_FEATURE_METRIC_COLLECT,          \
                                                \
        CEPHFS_FEATURE_MAX,                     \
}
#define CEPHFS_FEATURES_CLIENT_REQUIRED {}

/*
 * Some lock dependencies:
 *
 * session->s_mutex
 *         mdsc->mutex
 *
 *         mdsc->snap_rwsem
 *
 *         ci->i_ceph_lock
 *                 mdsc->snap_flush_lock
 *                 mdsc->cap_delay_lock
 *
 */

struct ceph_fs_client;
struct ceph_cap;

/*
 * parsed info about a single inode. pointers are into the encoded
 * on-wire structures within the mds reply message payload.
 */
struct ceph_mds_reply_info_in {
        struct ceph_mds_reply_inode *in;
        struct ceph_dir_layout dir_layout;
        u32 symlink_len;
        char *symlink;
        u32 xattr_len;
        char *xattr_data;
        u64 inline_version;
        u32 inline_len;
        char *inline_data;
        u32 pool_ns_len;
        char *pool_ns_data;
        u64 max_bytes;
        u64 max_files;
        s32 dir_pin;
        struct ceph_timespec btime;
        struct ceph_timespec snap_btime;
        u64 rsnaps;
        u64 change_attr;
};

struct ceph_mds_reply_dir_entry {
        char *name;
        u32 name_len;
        struct ceph_mds_reply_lease *lease;
        struct ceph_mds_reply_info_in inode;
        loff_t offset;
};

/*
 * parsed info about an mds reply, including information about
 * either: 1) the target inode and/or its parent directory and dentry,
 * and directory contents (for readdir results), or
 * 2) the file range lock info (for fcntl F_GETLK results).
 */
struct ceph_mds_reply_info_parsed {
        struct ceph_mds_reply_head *head;

        /* trace */
        struct ceph_mds_reply_info_in diri, targeti;
        struct ceph_mds_reply_dirfrag *dirfrag;
        char *dname;
        u32 dname_len;
        struct ceph_mds_reply_lease *dlease;

        /* extra */
        union {
                /* for fcntl F_GETLK results */
                struct ceph_filelock *filelock_reply;

                /* for readdir results */
                struct {
                        struct ceph_mds_reply_dirfrag *dir_dir;
                        size_t dir_buf_size;
                        int dir_nr;
                        bool dir_end;
                        bool dir_complete;
                        bool hash_order;
                        bool offset_hash;
                        struct ceph_mds_reply_dir_entry *dir_entries;
                };

                /* for create results */
                struct {
                        bool has_create_ino;
                        u64 ino;
                };
        };

        /* encoded blob describing snapshot contexts for certain
           operations (e.g., open) */
        void *snapblob;
        int snapblob_len;
};

/*
 * cap releases are batched and sent to the MDS en masse.
 *
 * Account for per-message overhead of mds_cap_release header
 * and __le32 for osd epoch barrier trailing field.
 */
#define CEPH_CAPS_PER_RELEASE ((PAGE_SIZE - sizeof(u32) -              \
                                sizeof(struct ceph_mds_cap_release)) / \
                                sizeof(struct ceph_mds_cap_item))

/*
 * state associated with each MDS<->client session
 */
enum {
        CEPH_MDS_SESSION_NEW = 1,
        CEPH_MDS_SESSION_OPENING = 2,
        CEPH_MDS_SESSION_OPEN = 3,
        CEPH_MDS_SESSION_HUNG = 4,
        CEPH_MDS_SESSION_RESTARTING = 5,
        CEPH_MDS_SESSION_RECONNECTING = 6,
        CEPH_MDS_SESSION_CLOSING = 7,
        CEPH_MDS_SESSION_CLOSED = 8,
        CEPH_MDS_SESSION_REJECTED = 9,
};

struct ceph_mds_session {
        struct ceph_mds_client *s_mdsc;
        int s_mds;
        int s_state;
        unsigned long s_ttl;            /* time until mds kills us */
        unsigned long s_features;
        u64 s_seq;                      /* incoming msg seq # */
        struct mutex s_mutex;           /* serialize session messages */

        struct ceph_connection s_con;

        struct ceph_auth_handshake s_auth;

        atomic_t s_cap_gen;             /* inc each time we get mds stale msg */
        unsigned long s_cap_ttl;        /* when session caps expire. protected by s_mutex */

        /* protected by s_cap_lock */
        spinlock_t s_cap_lock;
        refcount_t s_ref;
        struct list_head s_caps;        /* all caps issued by this session */
        struct ceph_cap *s_cap_iterator;
        int s_nr_caps;
        int s_num_cap_releases;
        int s_cap_reconnect;
        int s_readonly;
        struct list_head s_cap_releases; /* waiting cap_release messages */
        struct work_struct s_cap_release_work;

        /* See ceph_inode_info->i_dirty_item. */
        struct list_head s_cap_dirty;   /* inodes w/ dirty caps */

        /* See ceph_inode_info->i_flushing_item. */
        struct list_head s_cap_flushing; /* inodes w/ flushing caps */

        unsigned long s_renew_requested; /* last time we sent a renew req */
        u64 s_renew_seq;

        struct list_head s_waiting;     /* waiting requests */
        struct list_head s_unsafe;      /* unsafe requests */
        struct xarray s_delegated_inos;
};

/*
 * modes of choosing which MDS to send a request to
 */
enum {
        USE_ANY_MDS,
        USE_RANDOM_MDS,
        USE_AUTH_MDS,   /* prefer authoritative mds for this metadata item */
};

struct ceph_mds_request;
struct ceph_mds_client;

/*
 * request completion callback
 */
typedef void (*ceph_mds_request_callback_t) (struct ceph_mds_client *mdsc,
                                             struct ceph_mds_request *req);
/*
 * wait for request completion callback
 */
typedef int (*ceph_mds_request_wait_callback_t) (struct ceph_mds_client *mdsc,
                                                 struct ceph_mds_request *req);

/*
 * an in-flight mds request
 */
struct ceph_mds_request {
        u64 r_tid;                      /* transaction id */
        struct rb_node r_node;
        struct ceph_mds_client *r_mdsc;

        struct kref r_kref;
        int r_op;                       /* mds op code */

        /* operation on what? */
        struct inode *r_inode;          /* arg1 */
        struct dentry *r_dentry;        /* arg1 */
        struct dentry *r_old_dentry;    /* arg2: rename from or link from */
        struct inode *r_old_dentry_dir; /* arg2: old dentry's parent dir */
        char *r_path1, *r_path2;
        struct ceph_vino r_ino1, r_ino2;

        struct inode *r_parent;         /* parent dir inode */
        struct inode *r_target_inode;   /* resulting inode */

#define CEPH_MDS_R_DIRECT_IS_HASH  (1) /* r_direct_hash is valid */
#define CEPH_MDS_R_ABORTED         (2) /* call was aborted */
#define CEPH_MDS_R_GOT_UNSAFE      (3) /* got an unsafe reply */
#define CEPH_MDS_R_GOT_SAFE        (4) /* got a safe reply */
#define CEPH_MDS_R_GOT_RESULT      (5) /* got a result */
#define CEPH_MDS_R_DID_PREPOPULATE (6) /* prepopulated readdir */
#define CEPH_MDS_R_PARENT_LOCKED   (7) /* is r_parent->i_rwsem wlocked? */
#define CEPH_MDS_R_ASYNC           (8) /* async request */
        unsigned long r_req_flags;

        struct mutex r_fill_mutex;

        union ceph_mds_request_args r_args;
        int r_fmode;                    /* file mode, if expecting cap */
        int r_request_release_offset;
        const struct cred *r_cred;
        struct timespec64 r_stamp;

        /* for choosing which mds to send this request to */
        int r_direct_mode;
        u32 r_direct_hash;      /* choose dir frag based on this dentry hash */

        /* data payload is used for xattr ops */
        struct ceph_pagelist *r_pagelist;

        /* what caps shall we drop? */
        int r_inode_drop, r_inode_unless;
        int r_dentry_drop, r_dentry_unless;
        int r_old_dentry_drop, r_old_dentry_unless;
        struct inode *r_old_inode;
        int r_old_inode_drop, r_old_inode_unless;

        struct ceph_msg *r_request;     /* original request */
        struct ceph_msg *r_reply;
        struct ceph_mds_reply_info_parsed r_reply_info;
        int r_err;
        u32 r_readdir_offset;

        struct page *r_locked_page;
        int r_dir_caps;
        int r_num_caps;

        unsigned long r_timeout;        /* optional. jiffies, 0 is "wait forever" */
        unsigned long r_started;        /* start time to measure timeout against */
        unsigned long r_start_latency;  /* start time to measure latency */
        unsigned long r_end_latency;    /* finish time to measure latency */
        unsigned long r_request_started; /* start time for mds request only,
                                            used to measure lease durations */

        /* link unsafe requests to parent directory, for fsync */
        struct inode *r_unsafe_dir;
        struct list_head r_unsafe_dir_item;

        /* unsafe requests that modify the target inode */
        struct list_head r_unsafe_target_item;

        struct ceph_mds_session *r_session;

        int r_attempts;                 /* resend attempts */
        int r_num_fwd;                  /* number of forward attempts */
        int r_resend_mds;               /* mds to resend to next, if any */
        u32 r_sent_on_mseq;             /* cap mseq request was sent at */
        u64 r_deleg_ino;

        struct list_head r_wait;
        struct completion r_completion;
        struct completion r_safe_completion;
        ceph_mds_request_callback_t r_callback;
        struct list_head r_unsafe_item; /* per-session unsafe list item */

        long long r_dir_release_cnt;
        long long r_dir_ordered_cnt;
        int r_readdir_cache_idx;

        struct ceph_cap_reservation r_caps_reservation;
};

struct ceph_pool_perm {
        struct rb_node node;
        int perm;
        s64 pool;
        size_t pool_ns_len;
        char pool_ns[];
};

struct ceph_snapid_map {
        struct rb_node node;
        struct list_head lru;
        atomic_t ref;
        u64 snap;
        dev_t dev;
        unsigned long last_used;
};

/*
 * node for list of quotarealm inodes that are not visible from the filesystem
 * mountpoint, but required to handle, e.g. quotas.
 */
struct ceph_quotarealm_inode {
        struct rb_node node;
        u64 ino;
        unsigned long timeout;  /* last time a lookup failed for this inode */
        struct mutex mutex;
        struct inode *inode;
};

struct cap_wait {
        struct list_head list;

ceph: fix inode number handling on arches with 32-bit ino_t

Tuan and Ulrich mentioned that they were hitting a problem on s390x,
which has a 32-bit ino_t value, even though it's a 64-bit arch (for
historical reasons).

I think the current handling of inode numbers in the ceph driver is
wrong. It tries to use 32-bit inode numbers on 32-bit arches, but that's
actually not necessary. 32-bit arches can deal with 64-bit inode numbers
just fine when userland code is compiled with LFS support (the common
case these days).

What we really want to do is just use 64-bit numbers everywhere, unless
someone has mounted with the ino32 mount option. In that case, we want
to ensure that we hash the inode number down to something that will fit
in 32 bits before presenting the value to userland.

Add new helper functions that do this, and only do the conversion before
presenting these values to userland in getattr and readdir.
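One way such a helper can look (a sketch of the xor-fold approach; the
helper this patch adds is ceph_ino_to_ino32(), and the details here are
illustrative):

    static inline u32 ceph_ino_to_ino32(u64 vino)
    {
            u32 ino = vino & 0xffffffff;

            /* fold the high 32 bits into the low ones */
            ino ^= vino >> 32;
            /* 0 is not a usable inode number; pick a fixed substitute */
            if (!ino)
                    ino = 2;
            return ino;
    }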
The inode table hash value is changed to just cast the inode number to
unsigned long, as low-order bits are the most likely to vary anyway.

While it's not strictly required, we do want to put something in
inode->i_ino. Instead of basing it on BITS_PER_LONG, however, base it on
the size of the ino_t type.

NOTE: This is a user-visible change on 32-bit arches:

1/ inode numbers will be seen to have changed between kernel versions.
   32-bit arches will see large inode numbers now instead of the hashed
   ones they saw before.

2/ any really old software not built with LFS support may start failing
   stat() calls with -EOVERFLOW on inode numbers >2^32. Nothing much we
   can do about these, but hopefully the intersection of people running
   such code on ceph will be very small.

The workaround for both problems is to mount with "-o ino32".

[ idryomov: changelog tweak ]

URL: https://tracker.ceph.com/issues/46828
Reported-by: Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
Reported-and-Tested-by: Tuan Hoang1 <Tuan.Hoang1@ibm.com>
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>

        u64 ino;
        pid_t tgid;
        int need;
        int want;
};

/*
 * mds client state
 */
struct ceph_mds_client {
        struct ceph_fs_client *fsc;
        struct mutex mutex;             /* all nested structures */

        struct ceph_mdsmap *mdsmap;
        struct completion safe_umount_waiters;
        wait_queue_head_t session_close_wq;
        struct list_head waiting_for_map;
        int mdsmap_err;

        struct ceph_mds_session **sessions; /* NULL for mds if no session */
        atomic_t num_sessions;
        int max_sessions;               /* len of sessions array */
        int stopping;                   /* true if shutting down */

        atomic64_t quotarealms_count;   /* # realms with quota */
        /*
         * We keep a list of inodes we don't see in the mountpoint but that we
         * need to track quota realms.
         */
        struct rb_root quotarealms_inodes;
        struct mutex quotarealms_inodes_mutex;

        /*
         * snap_rwsem will cover cap linkage into snaprealms, and
         * realm snap contexts. (later, we can do per-realm snap
         * contexts locks..) the empty list contains realms with no
         * references (implying they contain no inodes with caps) that
         * should be destroyed.
         */
        u64 last_snap_seq;
        struct rw_semaphore snap_rwsem;
        struct rb_root snap_realms;
        struct list_head snap_empty;
        int num_snap_realms;
        spinlock_t snap_empty_lock;     /* protect snap_empty */

        u64 last_tid;                   /* most recent mds request */
        u64 oldest_tid;                 /* oldest incomplete mds request,
                                           excluding setfilelock requests */
        struct rb_root request_tree;    /* pending mds requests */
        struct delayed_work delayed_work; /* delayed work */
        unsigned long last_renew_caps;  /* last time we renewed our caps */
        struct list_head cap_delay_list; /* caps with delayed release */
        spinlock_t cap_delay_lock;      /* protects cap_delay_list */
        struct list_head snap_flush_list; /* cap_snaps ready to flush */
        spinlock_t snap_flush_lock;

        u64 last_cap_flush_tid;
        struct list_head cap_flush_list;
        struct list_head cap_dirty_migrating; /* ...that are migrating... */
        int num_cap_flushing;           /* # caps we are flushing */
        spinlock_t cap_dirty_lock;      /* protects above items */
        wait_queue_head_t cap_flushing_wq;

        struct work_struct cap_reclaim_work;
        atomic_t cap_reclaim_pending;

        /*
         * Cap reservations
         *
         * Maintain a global pool of preallocated struct ceph_caps, referenced
         * by struct ceph_caps_reservations. This ensures that we preallocate
         * memory needed to successfully process an MDS response. (If an MDS
         * sends us cap information and we fail to process it, we will have
         * problems due to the client and MDS being out of sync.)
         *
         * Reservations are 'owned' by a ceph_cap_reservation context.
         */
        spinlock_t caps_list_lock;
        struct list_head caps_list;     /* unused (reserved or unreserved) */
        struct list_head cap_wait_list;
        int caps_total_count;           /* total caps allocated */
        int caps_use_count;             /* in use */
        int caps_use_max;               /* max used caps */
        int caps_reserve_count;         /* unused, reserved */
        int caps_avail_count;           /* unused, unreserved */
        int caps_min_count;             /* keep at least this many (unreserved) */
        spinlock_t dentry_list_lock;
        struct list_head dentry_leases;     /* fifo list */
        struct list_head dentry_dir_leases; /* lru list */

        struct ceph_client_metric metric;

        spinlock_t snapid_map_lock;
        struct rb_root snapid_map_tree;
        struct list_head snapid_map_lru;

        struct rw_semaphore pool_perm_rwsem;
        struct rb_root pool_perm_tree;

        char nodename[__NEW_UTS_LEN + 1];
};

extern const char *ceph_mds_op_name(int op);

extern bool check_session_state(struct ceph_mds_session *s);
void inc_session_sequence(struct ceph_mds_session *s);

extern struct ceph_mds_session *
__ceph_lookup_mds_session(struct ceph_mds_client *, int mds);

extern const char *ceph_session_state_name(int s);

extern struct ceph_mds_session *
ceph_get_mds_session(struct ceph_mds_session *s);
extern void ceph_put_mds_session(struct ceph_mds_session *s);

extern int ceph_send_msg_mds(struct ceph_mds_client *mdsc,
                             struct ceph_msg *msg, int mds);

extern int ceph_mdsc_init(struct ceph_fs_client *fsc);
extern void ceph_mdsc_close_sessions(struct ceph_mds_client *mdsc);
extern void ceph_mdsc_force_umount(struct ceph_mds_client *mdsc);
extern void ceph_mdsc_destroy(struct ceph_fs_client *fsc);

extern void ceph_mdsc_sync(struct ceph_mds_client *mdsc);

extern void ceph_invalidate_dir_request(struct ceph_mds_request *req);
extern int ceph_alloc_readdir_reply_buffer(struct ceph_mds_request *req,
                                           struct inode *dir);
extern struct ceph_mds_request *
ceph_mdsc_create_request(struct ceph_mds_client *mdsc, int op, int mode);
extern int ceph_mdsc_submit_request(struct ceph_mds_client *mdsc,
                                    struct inode *dir,
                                    struct ceph_mds_request *req);
int ceph_mdsc_wait_request(struct ceph_mds_client *mdsc,
                           struct ceph_mds_request *req,
                           ceph_mds_request_wait_callback_t wait_func);
extern int ceph_mdsc_do_request(struct ceph_mds_client *mdsc,
                                struct inode *dir,
                                struct ceph_mds_request *req);
extern void ceph_mdsc_release_dir_caps(struct ceph_mds_request *req);
extern void ceph_mdsc_release_dir_caps_no_check(struct ceph_mds_request *req);

static inline void ceph_mdsc_get_request(struct ceph_mds_request *req)
{
        kref_get(&req->r_kref);
}

extern void ceph_mdsc_release_request(struct kref *kref);

static inline void ceph_mdsc_put_request(struct ceph_mds_request *req)
{
        kref_put(&req->r_kref, ceph_mdsc_release_request);
}

extern void send_flush_mdlog(struct ceph_mds_session *s);
extern void ceph_mdsc_iterate_sessions(struct ceph_mds_client *mdsc,
                                       void (*cb)(struct ceph_mds_session *),
                                       bool check_state);
extern struct ceph_msg *ceph_create_session_msg(u32 op, u64 seq);
extern void __ceph_queue_cap_release(struct ceph_mds_session *session,
                                     struct ceph_cap *cap);
extern void ceph_flush_cap_releases(struct ceph_mds_client *mdsc,
                                    struct ceph_mds_session *session);
extern void ceph_queue_cap_reclaim_work(struct ceph_mds_client *mdsc);
extern void ceph_reclaim_caps_nr(struct ceph_mds_client *mdsc, int nr);
extern int ceph_iterate_session_caps(struct ceph_mds_session *session,
                                     int (*cb)(struct inode *,
                                               struct ceph_cap *, void *),
                                     void *arg);
extern void ceph_mdsc_pre_umount(struct ceph_mds_client *mdsc);

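/*
 * ceph_mdsc_build_path() fills a PATH_MAX-sized __getname() buffer from the
 * end, so the pointer it returns sits (PATH_MAX - 1 - len) bytes into the
 * allocation; step back to the buffer base before freeing it.
 */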
static inline void ceph_mdsc_free_path(char *path, int len)
{
        if (!IS_ERR_OR_NULL(path))
                __putname(path - (PATH_MAX - 1 - len));
}

extern char *ceph_mdsc_build_path(struct dentry *dentry, int *plen, u64 *base,
                                  int stop_on_nosnap);

extern void __ceph_mdsc_drop_dentry_lease(struct dentry *dentry);
extern void ceph_mdsc_lease_send_msg(struct ceph_mds_session *session,
                                     struct dentry *dentry, char action,
                                     u32 seq);

extern void ceph_mdsc_handle_mdsmap(struct ceph_mds_client *mdsc,
                                    struct ceph_msg *msg);
extern void ceph_mdsc_handle_fsmap(struct ceph_mds_client *mdsc,
                                   struct ceph_msg *msg);

extern struct ceph_mds_session *
ceph_mdsc_open_export_target_session(struct ceph_mds_client *mdsc, int target);
extern void ceph_mdsc_open_export_target_sessions(struct ceph_mds_client *mdsc,
                                          struct ceph_mds_session *session);

extern int ceph_trim_caps(struct ceph_mds_client *mdsc,
                          struct ceph_mds_session *session,
                          int max_caps);

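/* wait for any in-flight async create on this inode to get its reply */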
static inline int ceph_wait_on_async_create(struct inode *inode)
{
        struct ceph_inode_info *ci = ceph_inode(inode);

        return wait_on_bit(&ci->i_ceph_flags, CEPH_ASYNC_CREATE_BIT,
                           TASK_INTERRUPTIBLE);
}

extern u64 ceph_get_deleg_ino(struct ceph_mds_session *session);
extern int ceph_restore_deleg_ino(struct ceph_mds_session *session, u64 ino);
#endif