commit e16ffa838b ("staging: lustre: ptlrpc: no need to reassign mbits for replay")
Author: Niu Yawei <yawei.niu@intel.com>
Date:   2017-07-30 08:08:31 -07:00

It's not necessary to reassign and re-adjust rq_mbits for a replay
request in ptlrpc_set_bulk_mbits(); they must all have been correctly
assigned before.

Such an unnecessary reassignment could make the first matchbits not
PTLRPC_BULK_OPS_MASK aligned, which triggers the LASSERT in
ptlrpc_register_bulk():

- ptlrpc_set_bulk_mbits() is called when the request is sent for the
  first time, and rq_mbits is set from the xid, which is BULK_OPS
  aligned;

- ptlrpc_set_bulk_mbits() then adjusts the mbits for multi-bulk RPC,
  so rq_mbits is no longer aligned; rq_xid is changed accordingly if
  the client is connecting to an old server, so rq_xid becomes
  unaligned too;

- The request is replayed, and ptlrpc_set_bulk_mbits() reassigns
  rq_mbits from rq_xid, which is no longer aligned, but
  ptlrpc_register_bulk() still assumes this value is the first
  matchbits and LASSERTs that it is BULK_OPS aligned.

Signed-off-by: Niu Yawei <yawei.niu@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6808
Reviewed-on: http://review.whamcloud.com/23048
Reviewed-by: Fan Yong <fan.yong@intel.com>
Reviewed-by: Alex Zhuravlev <alexey.zhuravlev@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Lustre Parallel Filesystem Client
=================================

The Lustre file system is an open-source, parallel file system
that supports many requirements of leadership class HPC simulation
environments.
Born from a research project at Carnegie Mellon University,
the Lustre file system is a widely-used option in HPC.
The Lustre file system provides a POSIX-compliant file system interface,
and can scale to thousands of clients, petabytes of storage, and
hundreds of gigabytes per second of I/O bandwidth.

Unlike shared disk storage cluster filesystems (e.g. OCFS2, GFS, GPFS),
Lustre has independent Metadata and Data servers that clients can access
in parallel to maximize performance.

In order to use the Lustre client you will need to download the
"lustre-client" package that contains the userspace tools from
http://lustre.org/download/

You will need to install and configure your Lustre servers separately.
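
As a hypothetical example (package names and repositories vary by
distribution, and none of this is covered by this document), on an
RPM-based system with a repository providing lustre-client already
configured, the userspace tools could be installed with:

yum install lustre-client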

Mount Syntax
============
After you have installed the lustre-client tools, including the
mount.lustre binary, you can mount your Lustre filesystem with:

mount -t lustre mgs:/fsname mnt

where mgs is the host name or IP address of your Lustre MGS (management
service) and fsname is the name of the filesystem you would like to mount.
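
For example, following the syntax above with a hypothetical MGS host
mgs.example.com, a filesystem named lustre and a mount point of
/mnt/lustre, the client could be mounted with:

mkdir -p /mnt/lustre
mount -t lustre mgs.example.com:/lustre /mnt/lustre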


Mount Options
=============

  noflock
	Disable POSIX file locking (applications trying to use
	the functionality will get ENOSYS)

  localflock
	Enable local flock support, using only client-local flock
	(faster, for applications that require flock but do not run
	 on multiple nodes).

  flock
	Enable cluster-global POSIX file locking coherent across all
	client nodes.

  user_xattr, nouser_xattr
	Support "user." extended attributes (or not)

  user_fid2path, nouser_fid2path
	Enable FID to path translation by regular users (or not)

  checksum, nochecksum
	Verify data consistency on the wire and in memory as it passes
	between the layers (or not).

  lruresize, nolruresize
	Allow the lock LRU to be controlled by memory pressure on the
	server (or limit it to a fixed number of locks per CPU per server
	on this client; 100 by default, controlled by the lru_size proc
	parameter).

  lazystatfs, nolazystatfs
	Do not block in statfs() if some of the servers are down.

  32bitapi
	Shrink inode numbers to fit into 32 bits. This is necessary
	if you plan to re-export the Lustre filesystem from this client
	via NFSv4.

  verbose, noverbose
	Enable mount/umount console messages (or not)
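
These options are passed with mount's standard -o flag. For example, to
enable cluster-global flock support and user extended attributes (again
with hypothetical host, filesystem and mount point names):

mount -t lustre -o flock,user_xattr mgs.example.com:/lustre /mnt/lustre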

More Information
================
You can get more information at the Lustre website: http://wiki.lustre.org/

Source for the userspace tools and out-of-tree client and server code
is available at: http://git.hpdd.intel.com/fs/lustre-release.git

Latest binary packages:
http://lustre.org/download/