// SPDX-License-Identifier: GPL-2.0-only
/*
 * linux/fs/lockd/svcsubs.c
 *
 * Various support routines for the NLM server.
 *
 * Copyright (C) 1996, Olaf Kirch <okir@monad.swb.de>
 */

#include <linux/types.h>
#include <linux/string.h>
#include <linux/time.h>
#include <linux/in.h>
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/sunrpc/svc.h>
#include <linux/sunrpc/addr.h>
#include <linux/lockd/lockd.h>
#include <linux/lockd/share.h>
#include <linux/module.h>
#include <linux/mount.h>
#include <uapi/linux/nfs2.h>

#define NLMDBG_FACILITY		NLMDBG_SVCSUBS

/*
 * Global file hash table
 */
#define FILE_HASH_BITS		7
#define FILE_NRHASH		(1<<FILE_HASH_BITS)
static struct hlist_head	nlm_files[FILE_NRHASH];
static DEFINE_MUTEX(nlm_file_mutex);

#ifdef CONFIG_SUNRPC_DEBUG
static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
{
	u32 *fhp = (u32*)f->data;

	/* print the first 32 bytes of the fh */
	dprintk("lockd: %s (%08x %08x %08x %08x %08x %08x %08x %08x)\n",
		msg, fhp[0], fhp[1], fhp[2], fhp[3],
		fhp[4], fhp[5], fhp[6], fhp[7]);
}

static inline void nlm_debug_print_file(char *msg, struct nlm_file *file)
{
	struct inode *inode = nlmsvc_file_inode(file);

	dprintk("lockd: %s %s/%ld\n",
		msg, inode->i_sb->s_id, inode->i_ino);
}
#else
static inline void nlm_debug_print_fh(char *msg, struct nfs_fh *f)
{
	return;
}

static inline void nlm_debug_print_file(char *msg, struct nlm_file *file)
{
	return;
}
#endif
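
/*
 * Hash a file handle into the nlm_files[] table by summing the raw
 * bytes of the handle and masking down to the table size.
 */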
static inline unsigned int file_hash(struct nfs_fh *f)
{
	unsigned int tmp = 0;
	int i;

	for (i = 0; i < NFS2_FHSIZE; i++)
		tmp += f->data[i];
	return tmp & (FILE_NRHASH - 1);
}
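
/*
 * Map a lock onto the open mode its backing struct file needs: a
 * write lock requires a file opened O_WRONLY, anything else O_RDONLY.
 * The result also serves as the index into nlm_file->f_file[].
 */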
int lock_to_openmode(struct file_lock *lock)
{
	return (lock->fl_type == F_WRLCK) ? O_WRONLY : O_RDONLY;
}

/*
 * Open the file. Note that if we're reexporting, for example,
 * this could block the lockd thread for a while.
 *
 * We have to make sure we have the right credential to open
 * the file.
 */
static __be32 nlm_do_fopen(struct svc_rqst *rqstp,
			   struct nlm_file *file, int mode)
{
	struct file **fp = &file->f_file[mode];
	__be32 nfserr;

	if (*fp)
		return 0;
	nfserr = nlmsvc_ops->fopen(rqstp, &file->f_handle, fp, mode);
	if (nfserr)
		dprintk("lockd: open failed (error %d)\n", nfserr);
	return nfserr;
}

/*
 * Lookup file info. If it doesn't exist, create a file info struct
 * and open a (VFS) file for the given inode.
 */
__be32
nlm_lookup_file(struct svc_rqst *rqstp, struct nlm_file **result,
		struct nlm_lock *lock)
{
	struct nlm_file	*file;
	unsigned int	hash;
	__be32		nfserr;
	int		mode;

	nlm_debug_print_fh("nlm_lookup_file", &lock->fh);

	hash = file_hash(&lock->fh);
	mode = lock_to_openmode(&lock->fl);

	/* Lock file table */
	mutex_lock(&nlm_file_mutex);

	hlist_for_each_entry(file, &nlm_files[hash], f_list)
		if (!nfs_compare_fh(&file->f_handle, &lock->fh)) {
			mutex_lock(&file->f_mutex);
			nfserr = nlm_do_fopen(rqstp, file, mode);
			mutex_unlock(&file->f_mutex);
			goto found;
		}
	nlm_debug_print_fh("creating file for", &lock->fh);

	nfserr = nlm_lck_denied_nolocks;
	file = kzalloc(sizeof(*file), GFP_KERNEL);
	if (!file)
		goto out_free;

	memcpy(&file->f_handle, &lock->fh, sizeof(struct nfs_fh));
	mutex_init(&file->f_mutex);
	INIT_HLIST_NODE(&file->f_list);
	INIT_LIST_HEAD(&file->f_blocks);

	nfserr = nlm_do_fopen(rqstp, file, mode);
	if (nfserr)
		goto out_unlock;

	hlist_add_head(&file->f_list, &nlm_files[hash]);

found:
	dprintk("lockd: found file %p (count %d)\n", file, file->f_count);
	*result = file;
	file->f_count++;

out_unlock:
	mutex_unlock(&nlm_file_mutex);
	return nfserr;

out_free:
	kfree(file);
	goto out_unlock;
}

/*
 * Delete a file after having released all locks, blocks and shares
 */
static inline void
nlm_delete_file(struct nlm_file *file)
{
	nlm_debug_print_file("closing file", file);
	if (!hlist_unhashed(&file->f_list)) {
		hlist_del(&file->f_list);
		if (file->f_file[O_RDONLY])
			nlmsvc_ops->fclose(file->f_file[O_RDONLY]);
		if (file->f_file[O_WRONLY])
			nlmsvc_ops->fclose(file->f_file[O_WRONLY]);
		kfree(file);
	} else {
		printk(KERN_WARNING "lockd: attempt to release unknown file!\n");
	}
}
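
/*
 * Release every byte-range lock held by @owner on this file, on both
 * the read-open and the write-open struct file. Called from
 * nlm_traverse_locks() once a matching lock owner has been found.
 * Returns 0 on success, 1 if either unlock fails.
 */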
static int nlm_unlock_files(struct nlm_file *file, fl_owner_t owner)
{
	struct file_lock lock;

	locks_init_lock(&lock);
	lock.fl_type = F_UNLCK;
	lock.fl_start = 0;
	lock.fl_end = OFFSET_MAX;
	lock.fl_owner = owner;
	if (file->f_file[O_RDONLY] &&
	    vfs_lock_file(file->f_file[O_RDONLY], F_SETLK, &lock, NULL))
		goto out_err;
	if (file->f_file[O_WRONLY] &&
	    vfs_lock_file(file->f_file[O_WRONLY], F_SETLK, &lock, NULL))
		goto out_err;
	return 0;
out_err:
	pr_warn("lockd: unlock failure in %s:%d\n", __FILE__, __LINE__);
	return 1;
}

/*
 * Loop over all locks on the given file and perform the specified
 * action.
 */
static int
nlm_traverse_locks(struct nlm_host *host, struct nlm_file *file,
			nlm_host_match_fn_t match)
{
	struct inode	 *inode = nlmsvc_file_inode(file);
	struct file_lock *fl;
	struct file_lock_context *flctx = inode->i_flctx;
	struct nlm_host	 *lockhost;

	if (!flctx || list_empty_careful(&flctx->flc_posix))
		return 0;
again:
	file->f_locks = 0;
	spin_lock(&flctx->flc_lock);
	list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
		if (fl->fl_lmops != &nlmsvc_lock_operations)
			continue;

		/* update current lock count */
		file->f_locks++;

		lockhost = ((struct nlm_lockowner *)fl->fl_owner)->host;
		if (match(lockhost, host)) {
			spin_unlock(&flctx->flc_lock);
			if (nlm_unlock_files(file, fl->fl_owner))
				return 1;
			goto again;
		}
	}
	spin_unlock(&flctx->flc_lock);

	return 0;
}
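
/*
 * Host-match callback that matches every host; used when a traversal
 * should act on all files regardless of which host holds the locks,
 * as in nlmsvc_unlock_all_by_sb().
 */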
static int
nlmsvc_always_match(void *dummy1, struct nlm_host *dummy2)
{
	return 1;
}

/*
 * Inspect a single file
 */
static inline int
nlm_inspect_file(struct nlm_host *host, struct nlm_file *file, nlm_host_match_fn_t match)
{
	nlmsvc_traverse_blocks(host, file, match);
	nlmsvc_traverse_shares(host, file, match);
	return nlm_traverse_locks(host, file, match);
}

/*
 * Quick check whether there are still any locks, blocks or
 * shares on a given file.
 */
static inline int
nlm_file_inuse(struct nlm_file *file)
{
	struct inode	 *inode = nlmsvc_file_inode(file);
	struct file_lock *fl;
	struct file_lock_context *flctx = inode->i_flctx;

	if (file->f_count || !list_empty(&file->f_blocks) || file->f_shares)
		return 1;

	if (flctx && !list_empty_careful(&flctx->flc_posix)) {
		spin_lock(&flctx->flc_lock);
		list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
			if (fl->fl_lmops == &nlmsvc_lock_operations) {
				spin_unlock(&flctx->flc_lock);
				return 1;
			}
		}
		spin_unlock(&flctx->flc_lock);
	}
	file->f_locks = 0;
	return 0;
}
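
/*
 * Close whichever of the read-open and write-open struct files exist
 * for this nlm_file.
 */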
static void nlm_close_files(struct nlm_file *file)
{
	if (file->f_file[O_RDONLY])
		nlmsvc_ops->fclose(file->f_file[O_RDONLY]);
	if (file->f_file[O_WRONLY])
		nlmsvc_ops->fclose(file->f_file[O_WRONLY]);
}

/*
 * Loop over all files in the file table.
 */
static int
nlm_traverse_files(void *data, nlm_host_match_fn_t match,
		   int (*is_failover_file)(void *data, struct nlm_file *file))
{
	struct hlist_node *next;
	struct nlm_file	*file;
	int i, ret = 0;

	mutex_lock(&nlm_file_mutex);
	for (i = 0; i < FILE_NRHASH; i++) {
		hlist_for_each_entry_safe(file, next, &nlm_files[i], f_list) {
			if (is_failover_file && !is_failover_file(data, file))
				continue;
			file->f_count++;
			mutex_unlock(&nlm_file_mutex);

			/* Traverse locks, blocks and shares of this file
			 * and update file->f_locks count */
			if (nlm_inspect_file(data, file, match))
				ret = 1;

			mutex_lock(&nlm_file_mutex);
			file->f_count--;
			/* No more references to this file. Let go of it. */
			if (list_empty(&file->f_blocks) && !file->f_locks
			 && !file->f_shares && !file->f_count) {
				hlist_del(&file->f_list);
				nlm_close_files(file);
				kfree(file);
			}
		}
	}
	mutex_unlock(&nlm_file_mutex);
	return ret;
}

/*
 * Release file. If there are no more remote locks on this file,
 * close it and free the handle.
 *
 * Note that we can't do proper reference counting without major
 * contortions because the code in fs/locks.c creates, deletes and
 * splits locks without notification. Our only way is to walk the
 * entire lock list each time we remove a lock.
 */
void
nlm_release_file(struct nlm_file *file)
{
	dprintk("lockd: nlm_release_file(%p, ct = %d)\n",
				file, file->f_count);

	/* Lock file table */
	mutex_lock(&nlm_file_mutex);

	/* If there are no more locks etc, delete the file */
	if (--file->f_count == 0 && !nlm_file_inuse(file))
		nlm_delete_file(file);

	mutex_unlock(&nlm_file_mutex);
}

/*
 * Helper functions for resource traversal
 *
 * nlmsvc_mark_host:
 *	used by the garbage collector; simply sets h_inuse only for
 *	those hosts that pass the network check.
 *	Always returns 0.
 *
 * nlmsvc_same_host:
 *	returns 1 iff the two hosts match. Used to release
 *	all resources bound to a specific host.
 *
 * nlmsvc_is_client:
 *	returns 1 iff the host is a client.
 *	Used by nlmsvc_invalidate_all
 */

static int
nlmsvc_mark_host(void *data, struct nlm_host *hint)
{
	struct nlm_host *host = data;

	if ((hint->net == NULL) ||
	    (host->net == hint->net))
		host->h_inuse = 1;
	return 0;
}

static int
nlmsvc_same_host(void *data, struct nlm_host *other)
{
	struct nlm_host *host = data;

	return host == other;
}

static int
nlmsvc_is_client(void *data, struct nlm_host *dummy)
{
	struct nlm_host *host = data;

	if (host->h_server) {
		/* we are destroying locks even though the client
		 * hasn't asked us to, so don't unmonitor the
		 * client
		 */
		if (host->h_nsmhandle)
			host->h_nsmhandle->sm_sticky = 1;
		return 1;
	} else
		return 0;
}

/*
 * Mark all hosts that still hold resources
 */
void
nlmsvc_mark_resources(struct net *net)
{
	struct nlm_host hint;

	dprintk("lockd: %s for net %x\n", __func__, net ? net->ns.inum : 0);
	hint.net = net;
	nlm_traverse_files(&hint, nlmsvc_mark_host, NULL);
}

/*
 * Release all resources held by the given client
 */
void
nlmsvc_free_host_resources(struct nlm_host *host)
{
	dprintk("lockd: nlmsvc_free_host_resources\n");

	if (nlm_traverse_files(host, nlmsvc_same_host, NULL)) {
		printk(KERN_WARNING
			"lockd: couldn't remove all locks held by %s\n",
			host->h_name);
		BUG();
	}
}

/**
 * nlmsvc_invalidate_all - remove all locks held for clients
 *
 * Release all locks held by NFS clients.
 *
 */
void
nlmsvc_invalidate_all(void)
{
	/*
	 * Previously, the code would call
	 * nlmsvc_free_host_resources for each client in
	 * turn, which is about as inefficient as it gets.
	 * Now we just do it once in nlm_traverse_files.
	 */
	nlm_traverse_files(NULL, nlmsvc_is_client, NULL);
}
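
/*
 * File-match callback passed to nlm_traverse_files() by
 * nlmsvc_unlock_all_by_sb(): true iff the file lives on the given
 * super block.
 */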
static int
nlmsvc_match_sb(void *datap, struct nlm_file *file)
{
	struct super_block *sb = datap;

	return sb == nlmsvc_file_inode(file)->i_sb;
}

/**
 * nlmsvc_unlock_all_by_sb - release locks held on this file system
 * @sb: super block
 *
 * Release all locks held by clients accessing this file system.
 */
int
nlmsvc_unlock_all_by_sb(struct super_block *sb)
{
	int ret;

	ret = nlm_traverse_files(sb, nlmsvc_always_match, nlmsvc_match_sb);
	return ret ? -EIO : 0;
}
EXPORT_SYMBOL_GPL(nlmsvc_unlock_all_by_sb);
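
/*
 * Host-match callback used by nlmsvc_unlock_all_by_ip(): true iff the
 * client's requests arrived via the given server-side address.
 */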
static int
nlmsvc_match_ip(void *datap, struct nlm_host *host)
{
	return rpc_cmp_addr(nlm_srcaddr(host), datap);
}

/**
 * nlmsvc_unlock_all_by_ip - release local locks by IP address
 * @server_addr: server's IP address as seen by clients
 *
 * Release all locks held by clients accessing this host
 * via the passed in IP address.
 */
int
nlmsvc_unlock_all_by_ip(struct sockaddr *server_addr)
{
	int ret;

	ret = nlm_traverse_files(server_addr, nlmsvc_match_ip, NULL);
	return ret ? -EIO : 0;
}
EXPORT_SYMBOL_GPL(nlmsvc_unlock_all_by_ip);