for-5.13/drivers-2021-04-27

-----BEGIN PGP SIGNATURE-----
 
 iQJEBAABCAAuFiEEwPw5LcreJtl1+l5K99NY+ylx4KYFAmCIJYcQHGF4Ym9lQGtl
 cm5lbC5kawAKCRD301j7KXHgpieWD/92qbtWl/z+9oCY212xV+YMoMqj/vGROX+U
 9i/FQJ3AIC/AUoNjZeW3NIbiaNqde5mrLlUSCHgn6RLsHK7p0GQJ4ohpbIGFG5+i
 2+Efm+vjlCxLVGrkeZEwMtsht7w/NbOYDr1Rgv9b4lQ6iWI11Mg8E337Whl1me1k
 h6bEXaioK9yqxYtsLgcn9I1qQ2p7gok0HX7zFU/XxEUZylqH6E4vQhj2+NL8UUqE
 7siFHADZE99Z7LXtOkl8YyOlGU52RCUzqDHWydvkipKjgYBi95HLXGT64Z+WCEvz
 HI54oVDRWr+uWdqDFfy+ncHm8pNeP0GV9JPhDz4ELRTSndoxB2il7wRLvp6wxV9d
 8Y4j7vb30i+8GGbM0c79dnlG76D9r5ivbTKixcXFKB128NusQR6JymIv1pKlSKhk
 H871/iOarrepAAUwVR5CtldDDJCy/q1Hks+7UXbaM3F9iNitxsJNZryQq9xdTu/N
 ThFOTz+VECG4RJLxIwmsWGiLgwr52/ybAl2MBcn+s7uC4jM/TFKpdQBfQnOAiINb
 MLlfuYRRSMg1Osb2fYZneR2ifmSNOMRdDJb+tsZGz4xWmZcj0uL4QgqcsOvuiOEQ
 veF/Ky50qw57hWtiEhvqa7/WIxzNF3G3wejqqA8hpT9Qifu0QawYTnXGUttYNBB1
 mO9R3/ccaw==
 =c0x4
 -----END PGP SIGNATURE-----

Merge tag 'for-5.13/drivers-2021-04-27' of git://git.kernel.dk/linux-block

Pull block driver updates from Jens Axboe:

 - MD changes via Song:
        - raid5 POWER fix
        - raid1 failure fix
        - UAF fix for md cluster
        - mddev_find_or_alloc() clean up
        - Fix NULL pointer deref with external bitmap
        - Performance improvement for raid10 discard requests
        - Fix missing information in /proc/mdstat

 - rsxx const qualifier removal (Arnd)

 - Expose allocated brd pages (Calvin)

 - rnbd via Gioh Kim:
        - Change maintainer
        - Change domain address of maintainers' email
        - Add polling IO mode and document update
        - Fix memory leaks and some bugs detected by static code analysis
          tools
        - Code refactoring

 - Series of floppy cleanups/fixes (Denis)

 - s390 dasd fixes (Julian)

 - kerneldoc fixes (Lee)

 - null_blk double free (Lv)

 - null_blk virtual boundary addition (Max)

 - Remove xsysace driver (Michal)

 - umem driver removal (Davidlohr)

 - ataflop fixes (Dan)

 - Removal of the ->revalidate_disk method (Christoph)

 - Bounce buffer cleanups (Christoph)

 - Mark lightnvm as deprecated (Christoph)

 - mtip32xx init cleanups (Shixin)

 - Various fixes (Tian, Gustavo, Coly, Yang, Zhang, Zhiqiang)

* tag 'for-5.13/drivers-2021-04-27' of git://git.kernel.dk/linux-block: (143 commits)
  async_xor: increase src_offs when dropping destination page
  drivers/block/null_blk/main: Fix a double free in null_init.
  md/raid1: properly indicate failure when ending a failed write request
  md-cluster: fix use-after-free issue when removing rdev
  nvme: introduce generic per-namespace chardev
  nvme: cleanup nvme_configure_apst
  nvme: do not try to reconfigure APST when the controller is not live
  nvme: add 'kato' sysfs attribute
  nvme: sanitize KATO setting
  nvmet: avoid queuing keep-alive timer if it is disabled
  brd: expose number of allocated pages in debugfs
  ataflop: fix off by one in ataflop_probe()
  ataflop: potential out of bounds in do_format()
  drbd: Fix fall-through warnings for Clang
  block/rnbd: Use strscpy instead of strlcpy
  block/rnbd-clt-sysfs: Remove copy buffer overlap in rnbd_clt_get_path_name
  block/rnbd-clt: Remove max_segment_size
  block/rnbd-clt: Generate kobject_uevent when the rnbd device state changes
  block/rnbd-srv: Remove unused arguments of rnbd_srv_rdma_ev
  Documentation/ABI/rnbd-clt: Add description for nr_poll_queues
  ...
Linus Torvalds 2021-04-28 14:39:37 -07:00
commit fc05860628
99 changed files with 2322 additions and 4074 deletions


@@ -44,3 +44,21 @@ Date:		Feb 2020
 KernelVersion:	5.7
 Contact:	Jack Wang <jinpu.wang@cloud.ionos.com> Danil Kipnis <danil.kipnis@cloud.ionos.com>
 Description:	Contains the device access mode: ro, rw or migration.
+
+What:		/sys/block/rnbd<N>/rnbd/resize
+Date:		Feb 2020
+KernelVersion:	5.7
+Contact:	Jack Wang <jinpu.wang@cloud.ionos.com> Danil Kipnis <danil.kipnis@cloud.ionos.com>
+Description:	Write the number of sectors to change the size of the disk.
+
+What:		/sys/block/rnbd<N>/rnbd/remap_device
+Date:		Feb 2020
+KernelVersion:	5.7
+Contact:	Jack Wang <jinpu.wang@cloud.ionos.com> Danil Kipnis <danil.kipnis@cloud.ionos.com>
+Description:	Remap the disconnected device if the session is not destroyed yet.
+
+What:		/sys/block/rnbd<N>/rnbd/nr_poll_queues
+Date:		Feb 2020
+KernelVersion:	5.7
+Contact:	Jack Wang <jinpu.wang@cloud.ionos.com> Danil Kipnis <danil.kipnis@cloud.ionos.com>
+Description:	Contains the number of poll-mode queues
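For illustration only (not part of the pull request): the resize attribute above takes the new size in 512-byte sectors. A minimal userspace sketch, assuming an already-mapped device that shows up under the hypothetical name rnbd0 and an example target size of 1 GiB (2097152 sectors):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      /* 2097152 sectors * 512 bytes = 1 GiB; path and size are example values. */
      const char *attr = "/sys/block/rnbd0/rnbd/resize";
      FILE *f = fopen(attr, "w");

      if (!f) {
          perror(attr);
          return EXIT_FAILURE;
      }
      fprintf(f, "2097152\n");        /* sysfs expects the whole value in one write */
      return fclose(f) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
  }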


@@ -85,6 +85,19 @@ Description:	Expected format is the following::
 		By default "rw" is used.
 
+		nr_poll_queues
+		  specifies the number of poll-mode queues. If the IO has HIPRI flag,
+		  the block-layer will send the IO via the poll-mode queue.
+		  For fast network and device the polling is faster than interrupt-based
+		  IO handling because it saves time for context switching, switching to
+		  another process, handling the interrupt and switching back to the
+		  issuing process.
+
+		  Set -1 if you want to set it as the number of CPUs.
+		  By default rnbd client creates only irq-mode queues.
+
+		  NOTICE: MUST make a unique session for a device using the poll-mode queues.
+
 		Exit Codes:
 		If the device is already mapped it will fail with EEXIST. If the input
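For illustration only (not part of the pull request): the HIPRI IO mentioned above is what userspace requests with the RWF_HIPRI flag. A minimal sketch, assuming a glibc new enough to provide preadv2() and RWF_HIPRI, and a block device at the hypothetical path /dev/rnbd0 that accepts O_DIRECT reads:

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/uio.h>
  #include <unistd.h>

  int main(void)
  {
      struct iovec iov;
      void *buf;
      int fd = open("/dev/rnbd0", O_RDONLY | O_DIRECT);

      /* O_DIRECT needs an aligned buffer; 4 KiB here is an example size. */
      if (fd < 0 || posix_memalign(&buf, 4096, 4096)) {
          perror("setup");
          return EXIT_FAILURE;
      }
      iov.iov_base = buf;
      iov.iov_len = 4096;

      /* RWF_HIPRI asks the block layer to poll for completion of this read. */
      if (preadv2(fd, &iov, 1, 0, RWF_HIPRI) < 0) {
          perror("preadv2");
          return EXIT_FAILURE;
      }
      free(buf);
      close(fd);
      return EXIT_SUCCESS;
  }

Polling only pays off when the device completes requests faster than interrupt handling and the associated context switches, which is the rationale the documentation text above gives.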


@@ -474,7 +474,6 @@ prototypes::
 	int (*direct_access) (struct block_device *, sector_t, void **,
 				unsigned long *);
 	void (*unlock_native_capacity) (struct gendisk *);
-	int (*revalidate_disk) (struct gendisk *);
 	int (*getgeo)(struct block_device *, struct hd_geometry *);
 	void (*swap_slot_free_notify) (struct block_device *, unsigned long);
 
@@ -489,7 +488,6 @@ ioctl:			no
 compat_ioctl:		no
 direct_access:		no
 unlock_native_capacity:	no
-revalidate_disk:	no
 getgeo:			no
 swap_slot_free_notify:	no	(see below)
 ======================= ===================


@@ -2756,7 +2756,6 @@ F:	Documentation/devicetree/bindings/i2c/cdns,i2c-r1p10.yaml
 F:	Documentation/devicetree/bindings/i2c/xlnx,xps-iic-2.00.a.yaml
 F:	Documentation/devicetree/bindings/spi/xlnx,zynq-qspi.yaml
 F:	arch/arm/mach-zynq/
-F:	drivers/block/xsysace.c
 F:	drivers/clocksource/timer-cadence-ttc.c
 F:	drivers/cpuidle/cpuidle-zynq.c
 F:	drivers/edac/synopsys_edac.c
@@ -15540,8 +15539,8 @@ N:	riscv
 K:	riscv
 
 RNBD BLOCK DRIVERS
-M:	Danil Kipnis <danil.kipnis@cloud.ionos.com>
-M:	Jack Wang <jinpu.wang@cloud.ionos.com>
+M:	Md. Haris Iqbal <haris.iqbal@ionos.com>
+M:	Jack Wang <jinpu.wang@ionos.com>
 L:	linux-block@vger.kernel.org
 S:	Maintained
 F:	drivers/block/rnbd/


@@ -310,14 +310,6 @@
 			xlnx,odd-parity = <0x0>;
 			xlnx,use-parity = <0x0>;
 		} ;
-		SysACE_CompactFlash: sysace@83600000 {
-			compatible = "xlnx,xps-sysace-1.00.a";
-			interrupt-parent = <&xps_intc_0>;
-			interrupts = < 4 2 >;
-			reg = < 0x83600000 0x10000 >;
-			xlnx,family = "virtex5";
-			xlnx,mem-width = <0x10>;
-		} ;
 		debug_module: debug@84400000 {
 			compatible = "xlnx,mdm-1.00.d";
 			reg = < 0x84400000 0x10000 >;


@@ -227,7 +227,6 @@ CONFIG_MTD_PHYSMAP_OF=y
 CONFIG_MTD_UBI=m
 CONFIG_MTD_UBI_GLUEBI=m
 CONFIG_BLK_DEV_FD=m
-CONFIG_BLK_DEV_UMEM=m
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_NBD=m


@@ -232,7 +232,6 @@ CONFIG_MTD_PHYSMAP_OF=y
 CONFIG_MTD_UBI=m
 CONFIG_MTD_UBI_GLUEBI=m
 CONFIG_BLK_DEV_FD=m
-CONFIG_BLK_DEV_UMEM=m
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_NBD=m


@@ -230,7 +230,6 @@ CONFIG_MTD_PHYSMAP_OF=y
 CONFIG_MTD_UBI=m
 CONFIG_MTD_UBI_GLUEBI=m
 CONFIG_BLK_DEV_FD=m
-CONFIG_BLK_DEV_UMEM=m
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_BLK_DEV_NBD=m


@@ -197,13 +197,6 @@
 				reg = <0x00fa0000 0x00060000>;
 			};
 		};
-		SysACE_CompactFlash: sysace@1,0 {
-			compatible = "xlnx,sysace";
-			interrupt-parent = <&UIC2>;
-			interrupts = <24 0x4>;
-			reg = <0x00000001 0x00000000 0x10000>;
-		};
 	};
 
 	UART0: serial@f0000200 {


@@ -28,7 +28,6 @@ CONFIG_MTD_CFI_AMDSTD=y
 CONFIG_MTD_PHYSMAP_OF=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=35000
-CONFIG_XILINX_SYSACE=y
 CONFIG_SCSI=y
 CONFIG_BLK_DEV_SD=y
 CONFIG_SCSI_CONSTANTS=y


@@ -74,7 +74,6 @@ static irqreturn_t floppy_hardint(int irq, void *dev_id)
 		int lcount;
 		char *lptr;
 
-		st = 1;
 		for (lcount = virtual_dma_count, lptr = virtual_dma_addr;
 		     lcount; lcount--, lptr++) {
 			st = inb(virtual_dma_port + FD_STATUS);


@@ -233,6 +233,7 @@ async_xor_offs(struct page *dest, unsigned int offset,
 		if (submit->flags & ASYNC_TX_XOR_DROP_DST) {
 			src_cnt--;
 			src_list++;
+			src_offs++;
 		}
 
 		/* wait for any prerequisite operations */


@@ -50,7 +50,7 @@ config MAC_FLOPPY
 
 config BLK_DEV_SWIM
 	tristate "Support for SWIM Macintosh floppy"
-	depends on M68K && MAC
+	depends on M68K && MAC && !HIGHMEM
 	help
 	  You should select this option if you want floppy support
 	  and you don't have a II, IIfx, Q900, Q950 or AV series.
@@ -121,23 +121,6 @@ source "drivers/block/mtip32xx/Kconfig"
 
 source "drivers/block/zram/Kconfig"
 
-config BLK_DEV_UMEM
-	tristate "Micro Memory MM5415 Battery Backed RAM support"
-	depends on PCI
-	help
-	  Saying Y here will include support for the MM5415 family of
-	  battery backed (Non-volatile) RAM cards.
-	  <http://www.umem.com/>
-
-	  The cards appear as block devices that can be partitioned into
-	  as many as 15 partitions.
-
-	  To compile this driver as a module, choose M here: the
-	  module will be called umem.
-
-	  The umem driver has not yet been allocated a MAJOR number, so
-	  one is chosen dynamically.
-
 config BLK_DEV_UBD
 	bool "Virtual block device"
 	depends on UML
@@ -378,12 +361,6 @@ config SUNVDC
 
 source "drivers/s390/block/Kconfig"
 
-config XILINX_SYSACE
-	tristate "Xilinx SystemACE support"
-	depends on 4xx || MICROBLAZE
-	help
-	  Include support for the Xilinx SystemACE CompactFlash interface
-
 config XEN_BLKDEV_FRONTEND
 	tristate "Xen virtual block device support"
 	depends on XEN


@@ -20,11 +20,9 @@ obj-$(CONFIG_AMIGA_Z2RAM)	+= z2ram.o
 obj-$(CONFIG_N64CART)		+= n64cart.o
 obj-$(CONFIG_BLK_DEV_RAM)	+= brd.o
 obj-$(CONFIG_BLK_DEV_LOOP)	+= loop.o
-obj-$(CONFIG_XILINX_SYSACE)	+= xsysace.o
 obj-$(CONFIG_CDROM_PKTCDVD)	+= pktcdvd.o
 obj-$(CONFIG_SUNVDC)		+= sunvdc.o
-obj-$(CONFIG_BLK_DEV_UMEM)	+= umem.o
 obj-$(CONFIG_BLK_DEV_NBD)	+= nbd.o
 obj-$(CONFIG_BLK_DEV_CRYPTOLOOP) += cryptoloop.o
 obj-$(CONFIG_VIRTIO_BLK)	+= virtio_blk.o


@@ -729,8 +729,12 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
 	unsigned long flags;
 	int ret;
 
-	if (type)
+	if (type) {
 		type--;
+		if (type >= NUM_DISK_MINORS ||
+		    minor2disktype[type].drive_types > DriveType)
+			return -EINVAL;
+	}
 
 	q = unit[drive].disk[type]->queue;
 	blk_mq_freeze_queue(q);
@@ -742,11 +746,6 @@ static int do_format(int drive, int type, struct atari_format_descr *desc)
 	local_irq_restore(flags);
 
 	if (type) {
-		if (type >= NUM_DISK_MINORS ||
-		    minor2disktype[type].drive_types > DriveType) {
-			ret = -EINVAL;
-			goto out;
-		}
 		type = minor2disktype[type].index;
 		UDT = &atari_disk_type[type];
 	}
@@ -2002,7 +2001,10 @@ static void ataflop_probe(dev_t dev)
 	int drive = MINOR(dev) & 3;
 	int type  = MINOR(dev) >> 2;
 
-	if (drive >= FD_MAX_UNITS || type > NUM_DISK_MINORS)
+	if (type)
+		type--;
+
+	if (drive >= FD_MAX_UNITS || type >= NUM_DISK_MINORS)
 		return;
 	mutex_lock(&ataflop_probe_lock);
 	if (!unit[drive].disk[type]) {


@@ -22,6 +22,7 @@
 #include <linux/fs.h>
 #include <linux/slab.h>
 #include <linux/backing-dev.h>
+#include <linux/debugfs.h>
 
 #include <linux/uaccess.h>
@@ -48,6 +49,7 @@ struct brd_device {
 	 */
 	spinlock_t		brd_lock;
 	struct radix_tree_root	brd_pages;
+	u64			brd_nr_pages;
 };
 
 /*
@@ -116,6 +118,8 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
 		page = radix_tree_lookup(&brd->brd_pages, idx);
 		BUG_ON(!page);
 		BUG_ON(page->index != idx);
+	} else {
+		brd->brd_nr_pages++;
 	}
 	spin_unlock(&brd->brd_lock);
@@ -365,11 +369,13 @@ __setup("ramdisk_size=", ramdisk_size);
  */
 static LIST_HEAD(brd_devices);
 static DEFINE_MUTEX(brd_devices_mutex);
+static struct dentry *brd_debugfs_dir;
 
 static struct brd_device *brd_alloc(int i)
 {
 	struct brd_device *brd;
 	struct gendisk *disk;
+	char buf[DISK_NAME_LEN];
 
 	brd = kzalloc(sizeof(*brd), GFP_KERNEL);
 	if (!brd)
@@ -382,6 +388,11 @@ static struct brd_device *brd_alloc(int i)
 	if (!brd->brd_queue)
 		goto out_free_dev;
 
+	snprintf(buf, DISK_NAME_LEN, "ram%d", i);
+	if (!IS_ERR_OR_NULL(brd_debugfs_dir))
+		debugfs_create_u64(buf, 0444, brd_debugfs_dir,
+				&brd->brd_nr_pages);
+
 	/* This is so fdisk will align partitions on 4k, because of
 	 * direct_access API needing 4k alignment, returning a PFN
 	 * (This is only a problem on very small devices <= 4M,
@@ -397,7 +408,7 @@ static struct brd_device *brd_alloc(int i)
 	disk->fops		= &brd_fops;
 	disk->private_data	= brd;
 	disk->flags		= GENHD_FL_EXT_DEVT;
-	sprintf(disk->disk_name, "ram%d", i);
+	strlcpy(disk->disk_name, buf, DISK_NAME_LEN);
 	set_capacity(disk, rd_size * 2);
 
 	/* Tell the block layer that this is not a rotational device */
@@ -495,6 +506,8 @@ static int __init brd_init(void)
 
 	brd_check_and_reset_par();
 
+	brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
+
 	mutex_lock(&brd_devices_mutex);
 	for (i = 0; i < rd_nr; i++) {
 		brd = brd_alloc(i);
@@ -519,6 +532,8 @@ static int __init brd_init(void)
 	return 0;
 
 out_free:
+	debugfs_remove_recursive(brd_debugfs_dir);
+
 	list_for_each_entry_safe(brd, next, &brd_devices, brd_list) {
 		list_del(&brd->brd_list);
 		brd_free(brd);
@@ -534,6 +549,8 @@ static void __exit brd_exit(void)
 {
 	struct brd_device *brd, *next;
 
+	debugfs_remove_recursive(brd_debugfs_dir);
+
 	list_for_each_entry_safe(brd, next, &brd_devices, brd_list)
 		brd_del_one(brd);


@@ -3,7 +3,7 @@
 #include <linux/rbtree_augmented.h>
 #include "drbd_interval.h"
 
-/**
+/*
  * interval_end  -  return end of @node
  */
 static inline
@@ -18,7 +18,7 @@ sector_t interval_end(struct rb_node *node)
 RB_DECLARE_CALLBACKS_MAX(static, augment_callbacks,
 			 struct drbd_interval, rb, sector_t, end, NODE_END);
 
-/**
+/*
  * drbd_insert_interval  -  insert a new interval into a tree
  */
 bool
@@ -56,6 +56,7 @@ drbd_insert_interval(struct rb_root *root, struct drbd_interval *this)
 
 /**
  * drbd_contains_interval  -  check if a tree contains a given interval
+ * @root:	red black tree root
  * @sector:	start sector of @interval
  * @interval:	may not be a valid pointer
  *
@@ -88,7 +89,7 @@ drbd_contains_interval(struct rb_root *root, sector_t sector,
 	return false;
 }
 
-/**
+/*
  * drbd_remove_interval  -  remove an interval from a tree
 */
 void
@@ -99,6 +100,7 @@ drbd_remove_interval(struct rb_root *root, struct drbd_interval *this)
 
 /**
  * drbd_find_overlap  - search for an interval overlapping with [sector, sector + size)
+ * @root:	red black tree root
 * @sector:	start sector
 * @size:	size, aligned to 512 bytes
 *


@@ -125,7 +125,7 @@ struct bio_set drbd_io_bio_set;
    member of struct page.
  */
 struct page *drbd_pp_pool;
-spinlock_t   drbd_pp_lock;
+DEFINE_SPINLOCK(drbd_pp_lock);
 int          drbd_pp_vacant;
 wait_queue_head_t drbd_pp_wait;
@@ -268,7 +268,7 @@ void tl_restart(struct drbd_connection *connection, enum drbd_req_event what)
 
 /**
  * tl_clear() - Clears all requests and &struct drbd_tl_epoch objects out of the TL
- * @device:	DRBD device.
+ * @connection:	DRBD connection.
  *
 * This is called after the connection to the peer was lost. The storage covered
 * by the requests on the transfer gets marked as our of sync. Called from the
@@ -479,7 +479,7 @@ int conn_lowest_minor(struct drbd_connection *connection)
 }
 
 #ifdef CONFIG_SMP
-/**
+/*
 * drbd_calc_cpu_mask() - Generate CPU masks, spread over all CPUs
 *
 * Forces all threads of a resource onto the same CPU. This is beneficial for
@@ -518,7 +518,6 @@ static void drbd_calc_cpu_mask(cpumask_var_t *cpu_mask)
 
 /**
 * drbd_thread_current_set_cpu() - modifies the cpu mask of the _current_ thread
- * @device:	DRBD device.
 * @thi:	drbd_thread object
 *
 * call in the "main loop" of _all_ threads, no need for any mutex, current won't die
@@ -538,7 +537,7 @@ void drbd_thread_current_set_cpu(struct drbd_thread *thi)
 #define drbd_calc_cpu_mask(A) ({})
 #endif
 
-/**
+/*
 * drbd_header_size  -  size of a packet header
 *
 * The header size is a multiple of 8, so any payload following the header is
@@ -1193,7 +1192,7 @@ static int fill_bitmap_rle_bits(struct drbd_device *device,
 	return len;
 }
 
-/**
+/*
 * send_bitmap_rle_or_plain
 *
 * Return 0 when done, 1 when another iteration is needed, and a negative error
@@ -1324,11 +1323,11 @@ void drbd_send_b_ack(struct drbd_connection *connection, u32 barrier_nr, u32 set
 
 /**
 * _drbd_send_ack() - Sends an ack packet
- * @device:	DRBD device.
+ * @peer_device:	DRBD peer device.
 * @cmd:	Packet command code.
 * @sector:	sector, needs to be in big endian byte order
 * @blksize:	size in byte, needs to be in big endian byte order
 * @block_id:	Id, big endian byte order
 */
 static int _drbd_send_ack(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
 			  u64 sector, u32 blksize, u64 block_id)
@@ -1370,9 +1369,9 @@ void drbd_send_ack_rp(struct drbd_peer_device *peer_device, enum drbd_packet cmd
 
 /**
 * drbd_send_ack() - Sends an ack packet
- * @device:	DRBD device
+ * @peer_device:	DRBD peer device
 * @cmd:	packet command code
 * @peer_req:	peer request
 */
 int drbd_send_ack(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
 		  struct drbd_peer_request *peer_req)
@@ -1882,7 +1881,7 @@ int drbd_send(struct drbd_connection *connection, struct socket *sock,
 	return sent;
 }
 
-/**
+/*
 * drbd_send_all  -  Send an entire buffer
 *
 * Returns 0 upon success and a negative error value otherwise.
@@ -2161,9 +2160,6 @@ static int drbd_create_mempools(void)
 	if (ret)
 		goto Enomem;
 
-	/* drbd's page pool */
-	spin_lock_init(&drbd_pp_lock);
-
 	for (i = 0; i < number; i++) {
 		page = alloc_page(GFP_HIGHUSER);
 		if (!page)
@@ -3509,6 +3505,7 @@ static int w_bitmap_io(struct drbd_work *w, int unused)
 * @io_fn:	IO callback to be called when bitmap IO is possible
 * @done:	callback to be called after the bitmap IO was performed
 * @why:	Descriptive text of the reason for doing the IO
+ * @flags:	Bitmap flags
 *
 * While IO on the bitmap happens we freeze application IO thus we ensure
 * that drbd_set_out_of_sync() can not be called. This function MAY ONLY be
@@ -3554,6 +3551,7 @@ void drbd_queue_bitmap_io(struct drbd_device *device,
 * @device:	DRBD device.
 * @io_fn:	IO callback to be called when bitmap IO is possible
 * @why:	Descriptive text of the reason for doing the IO
+ * @flags:	Bitmap flags
 *
 * freezes application IO while that the actual IO operations runs. This
 * functions MAY NOT be called from worker context.
@@ -3657,7 +3655,6 @@ const char *cmdname(enum drbd_packet cmd)
 		[P_RS_CANCEL]		= "RSCancel",
 		[P_CONN_ST_CHG_REQ]	= "conn_st_chg_req",
 		[P_CONN_ST_CHG_REPLY]	= "conn_st_chg_reply",
-		[P_RETRY_WRITE]		= "retry_write",
 		[P_PROTOCOL_UPDATE]	= "protocol_update",
 		[P_RS_THIN_REQ]		= "rs_thin_req",
 		[P_RS_DEALLOCATED]	= "rs_deallocated",


@@ -790,9 +790,11 @@ int drbd_adm_set_role(struct sk_buff *skb, struct genl_info *info)
 	mutex_lock(&adm_ctx.resource->adm_mutex);
 
 	if (info->genlhdr->cmd == DRBD_ADM_PRIMARY)
-		retcode = drbd_set_role(adm_ctx.device, R_PRIMARY, parms.assume_uptodate);
+		retcode = (enum drbd_ret_code)drbd_set_role(adm_ctx.device,
+						R_PRIMARY, parms.assume_uptodate);
 	else
-		retcode = drbd_set_role(adm_ctx.device, R_SECONDARY, 0);
+		retcode = (enum drbd_ret_code)drbd_set_role(adm_ctx.device,
+						R_SECONDARY, 0);
 
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
 	genl_lock();
@@ -916,7 +918,7 @@ void drbd_resume_io(struct drbd_device *device)
 	wake_up(&device->misc_wait);
 }
 
-/**
+/*
 * drbd_determine_dev_size() -  Sets the right device size obeying all constraints
 * @device:	DRBD device.
 *
@@ -1134,7 +1136,7 @@ drbd_new_dev_size(struct drbd_device *device, struct drbd_backing_dev *bdev,
 	return size;
 }
 
-/**
+/*
 * drbd_check_al_size() - Ensures that the AL is of the right size
 * @device:	DRBD device.
 *
@@ -1962,7 +1964,7 @@ int drbd_adm_attach(struct sk_buff *skb, struct genl_info *info)
 
 	drbd_flush_workqueue(&connection->sender_work);
 	rv = _drbd_request_state(device, NS(disk, D_ATTACHING), CS_VERBOSE);
-	retcode = rv;  /* FIXME: Type mismatch. */
+	retcode = (enum drbd_ret_code)rv;
 	drbd_resume_io(device);
 	if (rv < SS_SUCCESS)
 		goto fail;
@@ -2687,7 +2689,8 @@ int drbd_adm_connect(struct sk_buff *skb, struct genl_info *info)
 	}
 	rcu_read_unlock();
 
-	retcode = conn_request_state(connection, NS(conn, C_UNCONNECTED), CS_VERBOSE);
+	retcode = (enum drbd_ret_code)conn_request_state(connection,
+					NS(conn, C_UNCONNECTED), CS_VERBOSE);
 
 	conn_reconfig_done(connection);
 	mutex_unlock(&adm_ctx.resource->adm_mutex);
@@ -2800,7 +2803,7 @@ int drbd_adm_disconnect(struct sk_buff *skb, struct genl_info *info)
 	mutex_lock(&adm_ctx.resource->adm_mutex);
 	rv = conn_try_disconnect(connection, parms.force_disconnect);
 	if (rv < SS_SUCCESS)
-		retcode = rv;  /* FIXME: Type mismatch. */
+		retcode = (enum drbd_ret_code)rv;
 	else
 		retcode = NO_ERROR;
 	mutex_unlock(&adm_ctx.resource->adm_mutex);


@@ -242,9 +242,9 @@ static void conn_reclaim_net_peer_reqs(struct drbd_connection *connection)
 
 /**
 * drbd_alloc_pages() - Returns @number pages, retries forever (or until signalled)
- * @device:	DRBD device.
+ * @peer_device:	DRBD device.
 * @number:	number of pages requested
 * @retry:	whether to retry, if not enough pages are available right now
 *
 * Tries to allocate number pages, first from our own page pool, then from
 * the kernel.
@@ -1352,7 +1352,7 @@ static void drbd_flush(struct drbd_connection *connection)
 
 /**
 * drbd_may_finish_epoch() - Applies an epoch_event to the epoch's state, eventually finishes it.
- * @device:	DRBD device.
+ * @connection:	DRBD connection.
 * @epoch:	Epoch object.
 * @ev:		Epoch event.
 */
@@ -1441,9 +1441,8 @@ max_allowed_wo(struct drbd_backing_dev *bdev, enum write_ordering_e wo)
 	return wo;
 }
 
-/**
+/*
 * drbd_bump_write_ordering() - Fall back to an other write ordering method
- * @connection:	DRBD connection.
 * @wo:		Write ordering method to try.
 */
 void drbd_bump_write_ordering(struct drbd_resource *resource, struct drbd_backing_dev *bdev,
@@ -1619,11 +1618,10 @@ static void drbd_issue_peer_wsame(struct drbd_device *device,
 }
 
-/**
+/*
 * drbd_submit_peer_request()
 * @device:	DRBD device.
 * @peer_req:	peer request
- * @rw:		flag field, see bio->bi_opf
 *
 * May spread the pages to multiple bios,
 * depending on bio_add_page restrictions.
@@ -3048,7 +3046,7 @@ out_free_e:
 	return -EIO;
 }
 
-/**
+/*
 * drbd_asb_recover_0p  -  Recover after split-brain with no remaining primaries
 */
 static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device) __must_hold(local)
@@ -3131,7 +3129,7 @@ static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device) __must_hold
 	return rv;
 }
 
-/**
+/*
 * drbd_asb_recover_1p  -  Recover after split-brain with one remaining primary
 */
 static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold(local)
@@ -3188,7 +3186,7 @@ static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold
 	return rv;
 }
 
-/**
+/*
 * drbd_asb_recover_2p  -  Recover after split-brain with two remaining primaries
 */
 static int drbd_asb_recover_2p(struct drbd_peer_device *peer_device) __must_hold(local)
@@ -4672,7 +4670,7 @@ static int receive_sync_uuid(struct drbd_connection *connection, struct packet_i
 	return 0;
 }
 
-/**
+/*
 * receive_bitmap_plain
 *
 * Return 0 when done, 1 when another iteration is needed, and a negative error
@@ -4724,7 +4722,7 @@ static int dcbp_get_pad_bits(struct p_compressed_bm *p)
 	return (p->encoding >> 4) & 0x7;
 }
 
-/**
+/*
 * recv_bm_rle_bits
 *
 * Return 0 when done, 1 when another iteration is needed, and a negative error
@@ -4793,7 +4791,7 @@ recv_bm_rle_bits(struct drbd_peer_device *peer_device,
 	return (s != c->bm_bits);
 }
 
-/**
+/*
 * decode_bitmap_c
 *
 * Return 0 when done, 1 when another iteration is needed, and a negative error
@@ -5865,6 +5863,7 @@ static int got_NegRSDReply(struct drbd_connection *connection, struct packet_inf
 	switch (pi->cmd) {
 	case P_NEG_RS_DREPLY:
 		drbd_rs_failed_io(device, sector, size);
+		break;
 	case P_RS_CANCEL:
 		break;
 	default:


@@ -753,6 +753,7 @@ int __req_mod(struct drbd_request *req, enum drbd_req_event what,
 	case WRITE_ACKED_BY_PEER_AND_SIS:
 		req->rq_state |= RQ_NET_SIS;
+		fallthrough;
 	case WRITE_ACKED_BY_PEER:
 		/* Normal operation protocol C: successfully written on peer.
 		 * During resync, even in protocol != C,


@@ -904,9 +904,9 @@ out:
 * is_valid_soft_transition() - Returns an SS_ error code if the state transition is not possible
 * This function limits state transitions that may be declined by DRBD. I.e.
 * user requests (aka soft transitions).
- * @device:	DRBD device.
- * @ns:	new state.
 * @os:	old state.
+ * @ns:	new state.
+ * @connection:	DRBD connection.
 */
 static enum drbd_state_rv
 is_valid_soft_transition(union drbd_state os, union drbd_state ns, struct drbd_connection *connection)
@@ -1044,7 +1044,7 @@ static void print_sanitize_warnings(struct drbd_device *device, enum sanitize_st
 * @device:	DRBD device.
 * @os:	old state.
 * @ns:	new state.
- * @warn_sync_abort:
+ * @warn:	placeholder for returned state warning.
 *
 * When we loose connection, we have to set the state of the peers disk (pdsk)
 * to D_UNKNOWN. This rule and many more along those lines are in this function.
@@ -1696,6 +1696,7 @@ static bool lost_contact_to_peer_data(enum drbd_disk_state os, enum drbd_disk_st
 * @os:	old state.
 * @ns:	new state.
 * @flags: Flags
+ * @state_change: state change to broadcast
 */
 static void after_state_ch(struct drbd_device *device, union drbd_state os,
 			   union drbd_state ns, enum chg_state_flags flags,


@@ -145,8 +145,6 @@
 * Better audit of register_blkdev.
 */
 
-#undef  FLOPPY_SILENT_DCL_CLEAR
-
 #define REALLY_SLOW_IO
 
 #define DEBUGT 2
@@ -2399,11 +2397,10 @@ static void rw_interrupt(void)
 		probing = 0;
 	}
 
-	if (CT(raw_cmd->cmd[COMMAND]) != FD_READ ||
-	    raw_cmd->kernel_data == bio_data(current_req->bio)) {
+	if (CT(raw_cmd->cmd[COMMAND]) != FD_READ) {
 		/* transfer directly from buffer */
 		cont->done(1);
-	} else if (CT(raw_cmd->cmd[COMMAND]) == FD_READ) {
+	} else {
 		buffer_track = raw_cmd->track;
 		buffer_drive = current_drive;
 		INFBOUND(buffer_max, nr_sectors + fsector_t);
@@ -2411,27 +2408,6 @@ static void rw_interrupt(void)
 	cont->redo();
 }
 
-/* Compute maximal contiguous buffer size. */
-static int buffer_chain_size(void)
-{
-	struct bio_vec bv;
-	int size;
-	struct req_iterator iter;
-	char *base;
-
-	base = bio_data(current_req->bio);
-	size = 0;
-
-	rq_for_each_segment(bv, current_req, iter) {
-		if (page_address(bv.bv_page) + bv.bv_offset != base + size)
-			break;
-
-		size += bv.bv_len;
-	}
-
-	return size >> 9;
-}
-
 /* Compute the maximal transfer size */
 static int transfer_size(int ssize, int max_sector, int max_size)
 {
@@ -2453,7 +2429,6 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2)
 {
 	int remaining;		/* number of transferred 512-byte sectors */
 	struct bio_vec bv;
-	char *buffer;
 	char *dma_buffer;
 	int size;
 	struct req_iterator iter;
@@ -2492,8 +2467,6 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2)
 		size = bv.bv_len;
 		SUPBOUND(size, remaining);
-
-		buffer = page_address(bv.bv_page) + bv.bv_offset;
 		if (dma_buffer + size >
 		    floppy_track_buffer + (max_buffer_sectors << 10) ||
 		    dma_buffer < floppy_track_buffer) {
@@ -2509,13 +2482,13 @@ static void copy_buffer(int ssize, int max_sector, int max_sector_2)
 				pr_info("write\n");
 			break;
 		}
-		if (((unsigned long)buffer) % 512)
-			DPRINT("%p buffer not aligned\n", buffer);
 
 		if (CT(raw_cmd->cmd[COMMAND]) == FD_READ)
-			memcpy(buffer, dma_buffer, size);
+			memcpy_to_page(bv.bv_page, bv.bv_offset, dma_buffer,
+				       size);
 		else
-			memcpy(dma_buffer, buffer, size);
+			memcpy_from_page(dma_buffer, bv.bv_page, bv.bv_offset,
+					 size);
 
 		remaining -= size;
 		dma_buffer += size;
@@ -2690,54 +2663,6 @@ static int make_raw_rw_request(void)
 		raw_cmd->flags &= ~FD_RAW_WRITE;
 		raw_cmd->flags |= FD_RAW_READ;
 		raw_cmd->cmd[COMMAND] = FM_MODE(_floppy, FD_READ);
-	} else if ((unsigned long)bio_data(current_req->bio) < MAX_DMA_ADDRESS) {
-		unsigned long dma_limit;
-		int direct, indirect;
-
-		indirect =
-			transfer_size(ssize, max_sector,
-				      max_buffer_sectors * 2) - fsector_t;
-
-		/*
-		 * Do NOT use minimum() here---MAX_DMA_ADDRESS is 64 bits wide
-		 * on a 64 bit machine!
-		 */
-		max_size = buffer_chain_size();
-		dma_limit = (MAX_DMA_ADDRESS -
-			     ((unsigned long)bio_data(current_req->bio))) >> 9;
-		if ((unsigned long)max_size > dma_limit)
-			max_size = dma_limit;
-		/* 64 kb boundaries */
-		if (CROSS_64KB(bio_data(current_req->bio), max_size << 9))
-			max_size = (K_64 -
-				    ((unsigned long)bio_data(current_req->bio)) %
-				    K_64) >> 9;
-		direct = transfer_size(ssize, max_sector, max_size) - fsector_t;
-		/*
-		 * We try to read tracks, but if we get too many errors, we
-		 * go back to reading just one sector at a time.
-		 *
-		 * This means we should be able to read a sector even if there
-		 * are other bad sectors on this track.
-		 */
-		if (!direct ||
-		    (indirect * 2 > direct * 3 &&
-		     *errors < drive_params[current_drive].max_errors.read_track &&
-		     ((!probing ||
-		       (drive_params[current_drive].read_track & (1 << drive_state[current_drive].probed_format)))))) {
-			max_size = blk_rq_sectors(current_req);
-		} else {
-			raw_cmd->kernel_data = bio_data(current_req->bio);
-			raw_cmd->length = current_count_sectors << 9;
-			if (raw_cmd->length == 0) {
-				DPRINT("%s: zero dma transfer attempted\n", __func__);
-				DPRINT("indirect=%d direct=%d fsector_t=%d\n",
-				       indirect, direct, fsector_t);
-				return 0;
-			}
-			virtualdmabug_workaround();
-			return 2;
-		}
 	}
 
 	if (CT(raw_cmd->cmd[COMMAND]) == FD_READ)
@@ -2781,19 +2706,17 @@ static int make_raw_rw_request(void)
 	raw_cmd->length = ((raw_cmd->length - 1) | (ssize - 1)) + 1;
 	raw_cmd->length <<= 9;
 	if ((raw_cmd->length < current_count_sectors << 9) ||
-	    (raw_cmd->kernel_data != bio_data(current_req->bio) &&
-	     CT(raw_cmd->cmd[COMMAND]) == FD_WRITE &&
+	    (CT(raw_cmd->cmd[COMMAND]) == FD_WRITE &&
 	     (aligned_sector_t + (raw_cmd->length >> 9) > buffer_max ||
 	      aligned_sector_t < buffer_min)) ||
 	    raw_cmd->length % (128 << raw_cmd->cmd[SIZECODE]) ||
 	    raw_cmd->length <= 0 || current_count_sectors <= 0) {
 		DPRINT("fractionary current count b=%lx s=%lx\n",
 		       raw_cmd->length, current_count_sectors);
-		if (raw_cmd->kernel_data != bio_data(current_req->bio))
-			pr_info("addr=%d, length=%ld\n",
-				(int)((raw_cmd->kernel_data -
-				       floppy_track_buffer) >> 9),
-				current_count_sectors);
+		pr_info("addr=%d, length=%ld\n",
+			(int)((raw_cmd->kernel_data -
+			       floppy_track_buffer) >> 9),
+			current_count_sectors);
 		pr_info("st=%d ast=%d mse=%d msi=%d\n",
 		       fsector_t, aligned_sector_t, max_sector, max_size);
 		pr_info("ssize=%x SIZECODE=%d\n", ssize, raw_cmd->cmd[SIZECODE]);
@@ -2807,31 +2730,21 @@ static int make_raw_rw_request(void)
 		return 0;
 	}
 
-	if (raw_cmd->kernel_data != bio_data(current_req->bio)) {
-		if (raw_cmd->kernel_data < floppy_track_buffer ||
-		    current_count_sectors < 0 ||
-		    raw_cmd->length < 0 ||
-		    raw_cmd->kernel_data + raw_cmd->length >
-		    floppy_track_buffer + (max_buffer_sectors << 10)) {
-			DPRINT("buffer overrun in schedule dma\n");
-			pr_info("fsector_t=%d buffer_min=%d current_count=%ld\n",
-				fsector_t, buffer_min, raw_cmd->length >> 9);
-			pr_info("current_count_sectors=%ld\n",
-				current_count_sectors);
-			if (CT(raw_cmd->cmd[COMMAND]) == FD_READ)
-				pr_info("read\n");
-			if (CT(raw_cmd->cmd[COMMAND]) == FD_WRITE)
-				pr_info("write\n");
-			return 0;
-		}
-	} else if (raw_cmd->length > blk_rq_bytes(current_req) ||
-		   current_count_sectors > blk_rq_sectors(current_req)) {
-		DPRINT("buffer overrun in direct transfer\n");
+	if (raw_cmd->kernel_data < floppy_track_buffer ||
+	    current_count_sectors < 0 ||
+	    raw_cmd->length < 0 ||
+	    raw_cmd->kernel_data + raw_cmd->length >
+	    floppy_track_buffer + (max_buffer_sectors << 10)) {
+		DPRINT("buffer overrun in schedule dma\n");
+		pr_info("fsector_t=%d buffer_min=%d current_count=%ld\n",
+			fsector_t, buffer_min, raw_cmd->length >> 9);
+		pr_info("current_count_sectors=%ld\n",
+			current_count_sectors);
+		if (CT(raw_cmd->cmd[COMMAND]) == FD_READ)
+			pr_info("read\n");
+		if (CT(raw_cmd->cmd[COMMAND]) == FD_WRITE)
+			pr_info("write\n");
 		return 0;
-	} else if (raw_cmd->length < current_count_sectors << 9) {
-		DPRINT("more sectors than bytes\n");
-		pr_info("bytes=%ld\n", raw_cmd->length >> 9);
-		pr_info("sectors=%ld\n", current_count_sectors);
 	}
 	if (raw_cmd->length == 0) {
 		DPRINT("zero dma transfer attempted from make_raw_request\n");
@@ -3073,8 +2986,6 @@ static const char *drive_name(int type, int drive)
 /* raw commands */
 static void raw_cmd_done(int flag)
 {
-	int i;
-
 	if (!flag) {
 		raw_cmd->flags |= FD_RAW_FAILURE;
 		raw_cmd->flags |= FD_RAW_HARDFAILURE;
@@ -3082,8 +2993,7 @@ static void raw_cmd_done(int flag)
 		raw_cmd->reply_count = inr;
 		if (raw_cmd->reply_count > FD_RAW_REPLY_SIZE)
 			raw_cmd->reply_count = 0;
-		for (i = 0; i < raw_cmd->reply_count; i++)
-			raw_cmd->reply[i] = reply_buffer[i];
+		memcpy(raw_cmd->reply, reply_buffer, raw_cmd->reply_count);
 
 		if (raw_cmd->flags & (FD_RAW_READ | FD_RAW_WRITE)) {
 			unsigned long flags;
@@ -3175,7 +3085,6 @@ static int raw_cmd_copyin(int cmd, void __user *param,
 {
 	struct floppy_raw_cmd *ptr;
 	int ret;
-	int i;
 
 	*rcmd = NULL;
@@ -3194,8 +3103,7 @@ loop:
 	if (ptr->cmd_count > FD_RAW_CMD_FULLSIZE)
 		return -EINVAL;
 
-	for (i = 0; i < FD_RAW_REPLY_SIZE; i++)
-		ptr->reply[i] = 0;
+	memset(ptr->reply, 0, FD_RAW_REPLY_SIZE);
 	ptr->resultcode = 0;
 
 	if (ptr->flags & (FD_RAW_READ | FD_RAW_WRITE)) {
@@ -4317,7 +4225,7 @@ static char __init get_fdc_version(int fdc)
 	r = result(fdc);
 	if (r <= 0x00)
 		return FDC_NONE;	/* No FDC present ??? */
-	if ((r == 1) && (reply_buffer[0] == 0x80)) {
+	if ((r == 1) && (reply_buffer[ST0] == 0x80)) {
 		pr_info("FDC %d is an 8272A\n", fdc);
 		return FDC_8272A;	/* 8272a/765 don't know DUMPREGS */
 	}
@@ -4342,12 +4250,12 @@ static char __init get_fdc_version(int fdc)
 	output_byte(fdc, FD_UNLOCK);
 	r = result(fdc);
-	if ((r == 1) && (reply_buffer[0] == 0x80)) {
+	if ((r == 1) && (reply_buffer[ST0] == 0x80)) {
 		pr_info("FDC %d is a pre-1991 82077\n", fdc);
 		return FDC_82077_ORIG;	/* Pre-1991 82077, doesn't know
 					 * LOCK/UNLOCK */
 	}
-	if ((r != 1) || (reply_buffer[0] != 0x00)) {
+	if ((r != 1) || (reply_buffer[ST0] != 0x00)) {
 		pr_info("FDC %d init: UNLOCK: unexpected return of %d bytes.\n",
 			fdc, r);
 		return FDC_UNKNOWN;
@@ -4359,11 +4267,11 @@ static char __init get_fdc_version(int fdc)
 			fdc, r);
 		return FDC_UNKNOWN;
 	}
-	if (reply_buffer[0] == 0x80) {
+	if (reply_buffer[ST0] == 0x80) {
 		pr_info("FDC %d is a post-1991 82077\n", fdc);
 		return FDC_82077;	/* Revised 82077AA passes all the tests */
 	}
-	switch (reply_buffer[0] >> 5) {
+	switch (reply_buffer[ST0] >> 5) {
 	case 0x0:
 		/* Either a 82078-1 or a 82078SL running at 5Volt */
 		pr_info("FDC %d is an 82078.\n", fdc);
@@ -4379,7 +4287,7 @@ static char __init get_fdc_version(int fdc)
 		return FDC_87306;
 	default:
 		pr_info("FDC %d init: 82078 variant with unknown PARTID=%d.\n",
-			fdc, reply_buffer[0] >> 5);
+			fdc, reply_buffer[ST0] >> 5);
 		return FDC_82078_UNKN;
 	}
 }				/* get_fdc_version */
@@ -4597,7 +4505,6 @@ static int floppy_alloc_disk(unsigned int drive, unsigned int type)
 		return err;
 	}
 
-	blk_queue_bounce_limit(disk->queue, BLK_BOUNCE_HIGH);
 	blk_queue_max_hw_sectors(disk->queue, 64);
 	disk->major = FLOPPY_MAJOR;
 	disk->first_minor = TOMINOR(drive) | (type << 2);


@@ -95,9 +95,9 @@
 /* Device instance number, incremented each time a device is probed. */
 static int instance;
 
-static struct list_head online_list;
-static struct list_head removing_list;
-static spinlock_t dev_lock;
+static LIST_HEAD(online_list);
+static LIST_HEAD(removing_list);
+static DEFINE_SPINLOCK(dev_lock);
 
 /*
 * Global variable used to hold the major block device number
@@ -1213,7 +1213,7 @@ static int mtip_standby_immediate(struct mtip_port *port)
 {
 	int rv;
 	struct host_to_dev_fis fis;
-	unsigned long start;
+	unsigned long __maybe_unused start;
 	unsigned int timeout;
 
 	/* Build the FIS. */
@@ -4363,11 +4363,6 @@ static int __init mtip_init(void)
 
 	pr_info(MTIP_DRV_NAME " Version " MTIP_DRV_VERSION "\n");
 
-	spin_lock_init(&dev_lock);
-
-	INIT_LIST_HEAD(&online_list);
-	INIT_LIST_HEAD(&removing_list);
-
 	/* Allocate a major block device number to use with this driver. */
 	error = register_blkdev(0, MTIP_DRV_NAME);
 	if (error <= 0) {


@@ -84,6 +84,10 @@ enum {
 	NULL_Q_MQ	= 2,
 };
 
+static bool g_virt_boundary = false;
+module_param_named(virt_boundary, g_virt_boundary, bool, 0444);
+MODULE_PARM_DESC(virt_boundary, "Require a virtual boundary for the device. Default: False");
+
 static int g_no_sched;
 module_param_named(no_sched, g_no_sched, int, 0444);
 MODULE_PARM_DESC(no_sched, "No io scheduler");
@@ -366,6 +370,7 @@ NULLB_DEVICE_ATTR(zone_capacity, ulong, NULL);
 NULLB_DEVICE_ATTR(zone_nr_conv, uint, NULL);
 NULLB_DEVICE_ATTR(zone_max_open, uint, NULL);
 NULLB_DEVICE_ATTR(zone_max_active, uint, NULL);
+NULLB_DEVICE_ATTR(virt_boundary, bool, NULL);
 
 static ssize_t nullb_device_power_show(struct config_item *item, char *page)
 {
@@ -486,6 +491,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
 	&nullb_device_attr_zone_nr_conv,
 	&nullb_device_attr_zone_max_open,
 	&nullb_device_attr_zone_max_active,
+	&nullb_device_attr_virt_boundary,
 	NULL,
 };
@@ -539,7 +545,7 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
 static ssize_t memb_group_features_show(struct config_item *item, char *page)
 {
 	return snprintf(page, PAGE_SIZE,
-			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors\n");
+			"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors,virt_boundary\n");
 }
 
 CONFIGFS_ATTR_RO(memb_group_, features);
@@ -605,6 +611,7 @@ static struct nullb_device *null_alloc_dev(void)
 	dev->zone_nr_conv = g_zone_nr_conv;
 	dev->zone_max_open = g_zone_max_open;
 	dev->zone_max_active = g_zone_max_active;
+	dev->virt_boundary = g_virt_boundary;
 	return dev;
 }
@@ -1896,6 +1903,9 @@ static int null_add_dev(struct nullb_device *dev)
 			BLK_DEF_MAX_SECTORS);
 	blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
 
+	if (dev->virt_boundary)
+		blk_queue_virt_boundary(nullb->q, PAGE_SIZE - 1);
+
 	null_config_discard(nullb);
 
 	sprintf(nullb->disk_name, "nullb%d", nullb->index);


@@ -97,6 +97,7 @@ struct nullb_device {
 	bool memory_backed; /* if data is stored in memory */
 	bool discard; /* if support discard */
 	bool zoned; /* if device is zoned */
+	bool virt_boundary; /* virtual boundary on/off for the device */
 };
 
 struct nullb {


@@ -180,6 +180,7 @@ int null_register_zoned_dev(struct nullb *nullb)
 void null_free_zoned_dev(struct nullb_device *dev)
 {
 	kvfree(dev->zones);
+	dev->zones = NULL;
 }
 
 int null_report_zones(struct gendisk *disk, sector_t sector,


@@ -859,16 +859,6 @@ static unsigned int pd_check_events(struct gendisk *p, unsigned int clearing)
 	return r ? DISK_EVENT_MEDIA_CHANGE : 0;
 }
 
-static int pd_revalidate(struct gendisk *p)
-{
-	struct pd_unit *disk = p->private_data;
-	if (pd_special_command(disk, pd_identify) == 0)
-		set_capacity(p, disk->capacity);
-	else
-		set_capacity(p, 0);
-	return 0;
-}
-
 static const struct block_device_operations pd_fops = {
 	.owner		= THIS_MODULE,
 	.open		= pd_open,
@@ -877,7 +867,6 @@ static const struct block_device_operations pd_fops = {
 	.compat_ioctl	= pd_ioctl,
 	.getgeo		= pd_getgeo,
 	.check_events	= pd_check_events,
-	.revalidate_disk= pd_revalidate
 };
 
 /* probing */


@@ -34,6 +34,7 @@ enum {
 	RNBD_OPT_DEV_PATH	= 1 << 2,
 	RNBD_OPT_ACCESS_MODE	= 1 << 3,
 	RNBD_OPT_SESSNAME	= 1 << 6,
+	RNBD_OPT_NR_POLL_QUEUES	= 1 << 7,
 };
 
 static const unsigned int rnbd_opt_mandatory[] = {
@@ -42,12 +43,13 @@ static const unsigned int rnbd_opt_mandatory[] = {
 };
 
 static const match_table_t rnbd_opt_tokens = {
 	{RNBD_OPT_PATH,		"path=%s"		},
-	{RNBD_OPT_DEV_PATH,	"device_path=%s"},
+	{RNBD_OPT_DEV_PATH,	"device_path=%s"	},
 	{RNBD_OPT_DEST_PORT,	"dest_port=%d"		},
-	{RNBD_OPT_ACCESS_MODE,	"access_mode=%s"},
+	{RNBD_OPT_ACCESS_MODE,	"access_mode=%s"	},
 	{RNBD_OPT_SESSNAME,	"sessname=%s"		},
+	{RNBD_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d"	},
 	{RNBD_OPT_ERR,		NULL			},
 };
 
 struct rnbd_map_options {
@@ -57,6 +59,7 @@ struct rnbd_map_options {
 	char *pathname;
 	u16 *dest_port;
 	enum rnbd_access_mode *access_mode;
+	u32 *nr_poll_queues;
 };
 
 static int rnbd_clt_parse_map_options(const char *buf, size_t max_path_cnt,
@@ -68,7 +71,7 @@ static int rnbd_clt_parse_map_options(const char *buf, size_t max_path_cnt,
 	int opt_mask = 0;
 	int token;
 	int ret = -EINVAL;
-	int i, dest_port;
+	int i, dest_port, nr_poll_queues;
 	int p_cnt = 0;
 
 	options = kstrdup(buf, GFP_KERNEL);
@@ -96,7 +99,7 @@ static int rnbd_clt_parse_map_options(const char *buf, size_t max_path_cnt,
 				kfree(p);
 				goto out;
 			}
-			strlcpy(opt->sessname, p, NAME_MAX);
+			strscpy(opt->sessname, p, NAME_MAX);
 			kfree(p);
 			break;
@@ -139,7 +142,7 @@ static int rnbd_clt_parse_map_options(const char *buf, size_t max_path_cnt,
 				kfree(p);
 				goto out;
 			}
-			strlcpy(opt->pathname, p, NAME_MAX);
+			strscpy(opt->pathname, p, NAME_MAX);
 			kfree(p);
 			break;
@@ -178,6 +181,19 @@ static int rnbd_clt_parse_map_options(const char *buf, size_t max_path_cnt,
 			kfree(p);
 			break;
 
+		case RNBD_OPT_NR_POLL_QUEUES:
+			if (match_int(args, &nr_poll_queues) || nr_poll_queues < -1 ||
+			    nr_poll_queues > (int)nr_cpu_ids) {
+				pr_err("bad nr_poll_queues parameter '%d'\n",
+				       nr_poll_queues);
+				ret = -EINVAL;
+				goto out;
+			}
+			if (nr_poll_queues == -1)
+				nr_poll_queues = nr_cpu_ids;
+			*opt->nr_poll_queues = nr_poll_queues;
+			break;
+
 		default:
 			pr_err("map_device: Unknown parameter or missing value '%s'\n",
 			       p);
@@ -227,6 +243,19 @@ static ssize_t state_show(struct kobject *kobj,
 
 static struct kobj_attribute rnbd_clt_state_attr = __ATTR_RO(state);
 
+static ssize_t nr_poll_queues_show(struct kobject *kobj,
+				   struct kobj_attribute *attr, char *page)
+{
+	struct rnbd_clt_dev *dev;
+
+	dev = container_of(kobj, struct rnbd_clt_dev, kobj);
+
+	return sysfs_emit(page, "%d\n", dev->nr_poll_queues);
+}
+
+static struct kobj_attribute rnbd_clt_nr_poll_queues =
+	__ATTR_RO(nr_poll_queues);
+
 static ssize_t mapping_path_show(struct kobject *kobj,
 				 struct kobj_attribute *attr, char *page)
 {
@@ -421,6 +450,7 @@ static struct attribute *rnbd_dev_attrs[] = {
 	&rnbd_clt_state_attr.attr,
 	&rnbd_clt_session_attr.attr,
 	&rnbd_clt_access_mode.attr,
+	&rnbd_clt_nr_poll_queues.attr,
 	NULL,
 };
@@ -432,10 +462,14 @@ void rnbd_clt_remove_dev_symlink(struct rnbd_clt_dev *dev)
 	 * i.e. rnbd_clt_unmap_dev_store() leading to a sysfs warning because
 	 * of sysfs link already was removed already.
 	 */
-	if (dev->blk_symlink_name && try_module_get(THIS_MODULE)) {
-		sysfs_remove_link(rnbd_devs_kobj, dev->blk_symlink_name);
+	if (dev->blk_symlink_name) {
+		if (try_module_get(THIS_MODULE)) {
+			sysfs_remove_link(rnbd_devs_kobj, dev->blk_symlink_name);
+			module_put(THIS_MODULE);
+		}
+		/* It should be freed always. */
 		kfree(dev->blk_symlink_name);
-		module_put(THIS_MODULE);
+		dev->blk_symlink_name = NULL;
 	}
 }
@@ -456,6 +490,7 @@ static int rnbd_clt_add_dev_kobj(struct rnbd_clt_dev *dev)
 			      ret);
 		kobject_put(&dev->kobj);
 	}
+	kobject_uevent(gd_kobj, KOBJ_ONLINE);
 
 	return ret;
 }
@@ -465,7 +500,7 @@ static ssize_t rnbd_clt_map_device_show(struct kobject *kobj,
 				char *page)
 {
 	return scnprintf(page, PAGE_SIZE,
-			 "Usage: echo \"[dest_port=server port number] sessname=<name of the rtrs session> path=<[srcaddr@]dstaddr> [path=<[srcaddr@]dstaddr>] device_path=<full path on remote side> [access_mode=<ro|rw|migration>]\" > %s\n\naddr ::= [ ip:<ipv4> | ip:<ipv6> | gid:<gid> ]\n",
+			 "Usage: echo \"[dest_port=server port number] sessname=<name of the rtrs session> path=<[srcaddr@]dstaddr> [path=<[srcaddr@]dstaddr>] device_path=<full path on remote side> [access_mode=<ro|rw|migration>] [nr_poll_queues=<number of queues>]\" > %s\n\naddr ::= [ ip:<ipv4> | ip:<ipv6> | gid:<gid> ]\n",
 			 attr->attr.name);
 }
@@ -475,15 +510,11 @@ static int rnbd_clt_get_path_name(struct rnbd_clt_dev *dev, char *buf,
 	int ret;
 	char pathname[NAME_MAX], *s;
 
-	strlcpy(pathname, dev->pathname, sizeof(pathname));
+	strscpy(pathname, dev->pathname, sizeof(pathname));
while ((s = strchr(pathname, '/'))) while ((s = strchr(pathname, '/')))
s[0] = '!'; s[0] = '!';
ret = snprintf(buf, len, "%s", pathname); ret = snprintf(buf, len, "%s@%s", pathname, dev->sess->sessname);
if (ret >= len)
return -ENAMETOOLONG;
ret = snprintf(buf, len, "%s@%s", buf, dev->sess->sessname);
if (ret >= len) if (ret >= len)
return -ENAMETOOLONG; return -ENAMETOOLONG;
@ -537,6 +568,7 @@ static ssize_t rnbd_clt_map_device_store(struct kobject *kobj,
char sessname[NAME_MAX]; char sessname[NAME_MAX];
enum rnbd_access_mode access_mode = RNBD_ACCESS_RW; enum rnbd_access_mode access_mode = RNBD_ACCESS_RW;
u16 port_nr = RTRS_PORT; u16 port_nr = RTRS_PORT;
u32 nr_poll_queues = 0;
struct sockaddr_storage *addrs; struct sockaddr_storage *addrs;
struct rtrs_addr paths[6]; struct rtrs_addr paths[6];
@ -548,6 +580,7 @@ static ssize_t rnbd_clt_map_device_store(struct kobject *kobj,
opt.pathname = pathname; opt.pathname = pathname;
opt.dest_port = &port_nr; opt.dest_port = &port_nr;
opt.access_mode = &access_mode; opt.access_mode = &access_mode;
opt.nr_poll_queues = &nr_poll_queues;
addrs = kcalloc(ARRAY_SIZE(paths) * 2, sizeof(*addrs), GFP_KERNEL); addrs = kcalloc(ARRAY_SIZE(paths) * 2, sizeof(*addrs), GFP_KERNEL);
if (!addrs) if (!addrs)
return -ENOMEM; return -ENOMEM;
@ -561,12 +594,13 @@ static ssize_t rnbd_clt_map_device_store(struct kobject *kobj,
if (ret) if (ret)
goto out; goto out;
pr_info("Mapping device %s on session %s, (access_mode: %s)\n", pr_info("Mapping device %s on session %s, (access_mode: %s, nr_poll_queues: %d)\n",
pathname, sessname, pathname, sessname,
rnbd_access_mode_str(access_mode)); rnbd_access_mode_str(access_mode),
nr_poll_queues);
dev = rnbd_clt_map_device(sessname, paths, path_cnt, port_nr, pathname, dev = rnbd_clt_map_device(sessname, paths, path_cnt, port_nr, pathname,
access_mode); access_mode, nr_poll_queues);
if (IS_ERR(dev)) { if (IS_ERR(dev)) {
ret = PTR_ERR(dev); ret = PTR_ERR(dev);
goto out; goto out;
@ -639,13 +673,9 @@ cls_destroy:
return err; return err;
} }
void rnbd_clt_destroy_default_group(void)
{
sysfs_remove_group(&rnbd_dev->kobj, &default_attr_group);
}
void rnbd_clt_destroy_sysfs_files(void) void rnbd_clt_destroy_sysfs_files(void)
{ {
sysfs_remove_group(&rnbd_dev->kobj, &default_attr_group);
kobject_del(rnbd_devs_kobj); kobject_del(rnbd_devs_kobj);
kobject_put(rnbd_devs_kobj); kobject_put(rnbd_devs_kobj);
device_destroy(rnbd_dev_class, MKDEV(0, 0)); device_destroy(rnbd_dev_class, MKDEV(0, 0));

==== next file ====

@@ -110,6 +110,7 @@ static int rnbd_clt_change_capacity(struct rnbd_clt_dev *dev,
 static int process_msg_open_rsp(struct rnbd_clt_dev *dev,
 				struct rnbd_msg_open_rsp *rsp)
 {
+	struct kobject *gd_kobj;
 	int err = 0;

 	mutex_lock(&dev->lock);
@@ -128,6 +129,8 @@ static int process_msg_open_rsp(struct rnbd_clt_dev *dev,
 		 */
 		if (dev->nsectors != nsectors)
 			rnbd_clt_change_capacity(dev, nsectors);
+		gd_kobj = &disk_to_dev(dev->gd)->kobj;
+		kobject_uevent(gd_kobj, KOBJ_ONLINE);
 		rnbd_clt_info(dev, "Device online, device remapped successfully\n");
 	}
 	err = rnbd_clt_set_dev_attr(dev, rsp);
@@ -312,13 +315,11 @@ static void rnbd_rerun_all_if_idle(struct rnbd_clt_session *sess)
 static struct rtrs_permit *rnbd_get_permit(struct rnbd_clt_session *sess,
 					   enum rtrs_clt_con_type con_type,
-					   int wait)
+					   enum wait_type wait)
 {
 	struct rtrs_permit *permit;

-	permit = rtrs_clt_get_permit(sess->rtrs, con_type,
-				     wait ? RTRS_PERMIT_WAIT :
-				     RTRS_PERMIT_NOWAIT);
+	permit = rtrs_clt_get_permit(sess->rtrs, con_type, wait);
 	if (likely(permit))
 		/* We have a subtle rare case here, when all permits can be
 		 * consumed before busy counter increased.  This is safe,
@@ -344,7 +345,7 @@ static void rnbd_put_permit(struct rnbd_clt_session *sess,
 static struct rnbd_iu *rnbd_get_iu(struct rnbd_clt_session *sess,
 				   enum rtrs_clt_con_type con_type,
-				   int wait)
+				   enum wait_type wait)
 {
 	struct rnbd_iu *iu;
 	struct rtrs_permit *permit;
@@ -354,9 +355,7 @@ static struct rnbd_iu *rnbd_get_iu(struct rnbd_clt_session *sess,
 		return NULL;
 	}

-	permit = rnbd_get_permit(sess, con_type,
-				 wait ? RTRS_PERMIT_WAIT :
-				 RTRS_PERMIT_NOWAIT);
+	permit = rnbd_get_permit(sess, con_type, wait);
 	if (unlikely(!permit)) {
 		kfree(iu);
 		return NULL;
@@ -435,16 +434,11 @@ static void msg_conf(void *priv, int errno)
 	schedule_work(&iu->work);
 }

-enum wait_type {
-	NO_WAIT = 0,
-	WAIT    = 1
-};
-
 static int send_usr_msg(struct rtrs_clt *rtrs, int dir,
 			struct rnbd_iu *iu, struct kvec *vec,
 			size_t len, struct scatterlist *sg, unsigned int sg_len,
 			void (*conf)(struct work_struct *work),
-			int *errno, enum wait_type wait)
+			int *errno, int wait)
 {
 	int err;
 	struct rtrs_clt_req_ops req_ops;
@@ -476,7 +470,8 @@ static void msg_close_conf(struct work_struct *work)
 	rnbd_clt_put_dev(dev);
 }

-static int send_msg_close(struct rnbd_clt_dev *dev, u32 device_id, bool wait)
+static int send_msg_close(struct rnbd_clt_dev *dev, u32 device_id,
+			  enum wait_type wait)
 {
 	struct rnbd_clt_session *sess = dev->sess;
 	struct rnbd_msg_close msg;
@@ -530,7 +525,7 @@ static void msg_open_conf(struct work_struct *work)
 			 * If server thinks its fine, but we fail to process
 			 * then be nice and send a close to server.
 			 */
-			(void)send_msg_close(dev, device_id, NO_WAIT);
+			send_msg_close(dev, device_id, RTRS_PERMIT_NOWAIT);
 		}
 	}
 	kfree(rsp);
@@ -554,7 +549,7 @@ static void msg_sess_info_conf(struct work_struct *work)
 	rnbd_clt_put_sess(sess);
 }

-static int send_msg_open(struct rnbd_clt_dev *dev, bool wait)
+static int send_msg_open(struct rnbd_clt_dev *dev, enum wait_type wait)
 {
 	struct rnbd_clt_session *sess = dev->sess;
 	struct rnbd_msg_open_rsp *rsp;
@@ -583,7 +578,7 @@ static int send_msg_open(struct rnbd_clt_dev *dev, bool wait)
 	msg.hdr.type	= cpu_to_le16(RNBD_MSG_OPEN);
 	msg.access_mode	= dev->access_mode;
-	strlcpy(msg.dev_name, dev->pathname, sizeof(msg.dev_name));
+	strscpy(msg.dev_name, dev->pathname, sizeof(msg.dev_name));

 	WARN_ON(!rnbd_clt_get_dev(dev));
 	err = send_usr_msg(sess->rtrs, READ, iu,
@@ -601,7 +596,7 @@ static int send_msg_open(struct rnbd_clt_dev *dev, bool wait)
 	return err;
 }

-static int send_msg_sess_info(struct rnbd_clt_session *sess, bool wait)
+static int send_msg_sess_info(struct rnbd_clt_session *sess, enum wait_type wait)
 {
 	struct rnbd_msg_sess_info_rsp *rsp;
 	struct rnbd_msg_sess_info msg;
@@ -657,14 +652,18 @@ put_iu:
 static void set_dev_states_to_disconnected(struct rnbd_clt_session *sess)
 {
 	struct rnbd_clt_dev *dev;
+	struct kobject *gd_kobj;

 	mutex_lock(&sess->lock);
 	list_for_each_entry(dev, &sess->devs_list, list) {
 		rnbd_clt_err(dev, "Device disconnected.\n");

 		mutex_lock(&dev->lock);
-		if (dev->dev_state == DEV_STATE_MAPPED)
+		if (dev->dev_state == DEV_STATE_MAPPED) {
 			dev->dev_state = DEV_STATE_MAPPED_DISCONNECTED;
+			gd_kobj = &disk_to_dev(dev->gd)->kobj;
+			kobject_uevent(gd_kobj, KOBJ_OFFLINE);
+		}
 		mutex_unlock(&dev->lock);
 	}
 	mutex_unlock(&sess->lock);
@@ -687,7 +686,7 @@ static void remap_devs(struct rnbd_clt_session *sess)
 	 * be asynchronous.
 	 */

-	err = send_msg_sess_info(sess, NO_WAIT);
+	err = send_msg_sess_info(sess, RTRS_PERMIT_NOWAIT);
 	if (err) {
 		pr_err("send_msg_sess_info(\"%s\"): %d\n", sess->sessname, err);
 		return;
@@ -711,7 +710,7 @@ static void remap_devs(struct rnbd_clt_session *sess)
 			continue;

 		rnbd_clt_info(dev, "session reconnected, remapping device\n");
-		err = send_msg_open(dev, NO_WAIT);
+		err = send_msg_open(dev, RTRS_PERMIT_NOWAIT);
 		if (err) {
 			rnbd_clt_err(dev, "send_msg_open(): %d\n", err);
 			break;
@@ -801,7 +800,7 @@ static struct rnbd_clt_session *alloc_sess(const char *sessname)
 	sess = kzalloc_node(sizeof(*sess), GFP_KERNEL, NUMA_NO_NODE);
 	if (!sess)
 		return ERR_PTR(-ENOMEM);
-	strlcpy(sess->sessname, sessname, sizeof(sess->sessname));
+	strscpy(sess->sessname, sessname, sizeof(sess->sessname));
 	atomic_set(&sess->busy, 0);
 	mutex_init(&sess->lock);
 	INIT_LIST_HEAD(&sess->devs_list);
@@ -918,6 +917,7 @@ again:
 	return NULL;
 }

+/* caller is responsible for initializing 'first' to false */
 static struct
 rnbd_clt_session *find_or_create_sess(const char *sessname, bool *first)
 {
@@ -933,8 +933,7 @@ rnbd_clt_session *find_or_create_sess(const char *sessname, bool *first)
 		}
 		list_add(&sess->list, &sess_list);
 		*first = true;
-	} else
-		*first = false;
+	}
 	mutex_unlock(&sess_lock);

 	return sess;
@@ -1173,9 +1172,54 @@ static blk_status_t rnbd_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }

+static int rnbd_rdma_poll(struct blk_mq_hw_ctx *hctx)
+{
+	struct rnbd_queue *q = hctx->driver_data;
+	struct rnbd_clt_dev *dev = q->dev;
+	int cnt;
+
+	cnt = rtrs_clt_rdma_cq_direct(dev->sess->rtrs, hctx->queue_num);
+	return cnt;
+}
+
+static int rnbd_rdma_map_queues(struct blk_mq_tag_set *set)
+{
+	struct rnbd_clt_session *sess = set->driver_data;
+
+	/* shared read/write queues */
+	set->map[HCTX_TYPE_DEFAULT].nr_queues = num_online_cpus();
+	set->map[HCTX_TYPE_DEFAULT].queue_offset = 0;
+	set->map[HCTX_TYPE_READ].nr_queues = num_online_cpus();
+	set->map[HCTX_TYPE_READ].queue_offset = 0;
+	blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
+	blk_mq_map_queues(&set->map[HCTX_TYPE_READ]);
+
+	if (sess->nr_poll_queues) {
+		/* dedicated queue for poll */
+		set->map[HCTX_TYPE_POLL].nr_queues = sess->nr_poll_queues;
+		set->map[HCTX_TYPE_POLL].queue_offset = set->map[HCTX_TYPE_READ].queue_offset +
+			set->map[HCTX_TYPE_READ].nr_queues;
+		blk_mq_map_queues(&set->map[HCTX_TYPE_POLL]);
+		pr_info("[session=%s] mapped %d/%d/%d default/read/poll queues.\n",
+			sess->sessname,
+			set->map[HCTX_TYPE_DEFAULT].nr_queues,
+			set->map[HCTX_TYPE_READ].nr_queues,
+			set->map[HCTX_TYPE_POLL].nr_queues);
+	} else {
+		pr_info("[session=%s] mapped %d/%d default/read queues.\n",
+			sess->sessname,
+			set->map[HCTX_TYPE_DEFAULT].nr_queues,
+			set->map[HCTX_TYPE_READ].nr_queues);
+	}
+
+	return 0;
+}
+
 static struct blk_mq_ops rnbd_mq_ops = {
 	.queue_rq	= rnbd_queue_rq,
 	.complete	= rnbd_softirq_done_fn,
+	.map_queues	= rnbd_rdma_map_queues,
+	.poll		= rnbd_rdma_poll,
 };

 static int setup_mq_tags(struct rnbd_clt_session *sess)
@@ -1189,7 +1233,15 @@ static int setup_mq_tags(struct rnbd_clt_session *sess)
 	tag_set->flags		= BLK_MQ_F_SHOULD_MERGE |
 				  BLK_MQ_F_TAG_QUEUE_SHARED;
 	tag_set->cmd_size	= sizeof(struct rnbd_iu) + RNBD_RDMA_SGL_SIZE;
-	tag_set->nr_hw_queues	= num_online_cpus();
+
+	/* for HCTX_TYPE_DEFAULT, HCTX_TYPE_READ, HCTX_TYPE_POLL */
+	tag_set->nr_maps = sess->nr_poll_queues ? HCTX_MAX_TYPES : 2;
+	/*
+	 * HCTX_TYPE_DEFAULT and HCTX_TYPE_READ share one set of queues
+	 * others are for HCTX_TYPE_POLL
+	 */
+	tag_set->nr_hw_queues	= num_online_cpus() + sess->nr_poll_queues;
+	tag_set->driver_data    = sess;

 	return blk_mq_alloc_tag_set(tag_set);
 }
@@ -1197,18 +1249,27 @@ static int setup_mq_tags(struct rnbd_clt_session *sess)
 static struct rnbd_clt_session *
 find_and_get_or_create_sess(const char *sessname,
 			    const struct rtrs_addr *paths,
-			    size_t path_cnt, u16 port_nr)
+			    size_t path_cnt, u16 port_nr, u32 nr_poll_queues)
 {
 	struct rnbd_clt_session *sess;
 	struct rtrs_attrs attrs;
 	int err;
-	bool first;
+	bool first = false;
 	struct rtrs_clt_ops rtrs_ops;

 	sess = find_or_create_sess(sessname, &first);
 	if (sess == ERR_PTR(-ENOMEM))
 		return ERR_PTR(-ENOMEM);
-	else if (!first)
+	else if ((nr_poll_queues && !first) || (!nr_poll_queues && sess->nr_poll_queues)) {
+		/*
+		 * A device MUST have its own session to use the polling-mode.
+		 * It must fail to map new device with the same session.
+		 */
+		err = -EINVAL;
+		goto put_sess;
+	}
+
+	if (!first)
 		return sess;

 	if (!path_cnt) {
@@ -1228,8 +1289,7 @@ find_and_get_or_create_sess(const char *sessname,
 				   paths, path_cnt, port_nr,
 				   0, /* Do not use pdu of rtrs */
 				   RECONNECT_DELAY, BMAX_SEGMENTS,
-				   BLK_MAX_SEGMENT_SIZE,
-				   MAX_RECONNECTS);
+				   MAX_RECONNECTS, nr_poll_queues);
 	if (IS_ERR(sess->rtrs)) {
 		err = PTR_ERR(sess->rtrs);
 		goto wake_up_and_put;
@@ -1237,12 +1297,13 @@ find_and_get_or_create_sess(const char *sessname,
 	rtrs_clt_query(sess->rtrs, &attrs);
 	sess->max_io_size = attrs.max_io_size;
 	sess->queue_depth = attrs.queue_depth;
+	sess->nr_poll_queues = nr_poll_queues;

 	err = setup_mq_tags(sess);
 	if (err)
 		goto close_rtrs;

-	err = send_msg_sess_info(sess, WAIT);
+	err = send_msg_sess_info(sess, RTRS_PERMIT_WAIT);
 	if (err)
 		goto close_rtrs;
@@ -1352,12 +1413,12 @@ static void rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev, int idx)

 	if (!dev->rotational)
 		blk_queue_flag_set(QUEUE_FLAG_NONROT, dev->queue);
+	add_disk(dev->gd);
 }

-static int rnbd_client_setup_device(struct rnbd_clt_session *sess,
-				    struct rnbd_clt_dev *dev, int idx)
+static int rnbd_client_setup_device(struct rnbd_clt_dev *dev)
 {
-	int err;
+	int err, idx = dev->clt_device_id;

 	dev->size = dev->nsectors * dev->logical_block_size;
@@ -1380,7 +1441,8 @@ static int rnbd_client_setup_device(struct rnbd_clt_session *sess,
 static struct rnbd_clt_dev *init_dev(struct rnbd_clt_session *sess,
 				     enum rnbd_access_mode access_mode,
-				     const char *pathname)
+				     const char *pathname,
+				     u32 nr_poll_queues)
 {
 	struct rnbd_clt_dev *dev;
 	int ret;
@@ -1389,7 +1451,12 @@ static struct rnbd_clt_dev *init_dev(struct rnbd_clt_session *sess,
 	if (!dev)
 		return ERR_PTR(-ENOMEM);

-	dev->hw_queues = kcalloc(nr_cpu_ids, sizeof(*dev->hw_queues),
+	/*
+	 * nr_cpu_ids: the number of softirq queues
+	 * nr_poll_queues: the number of polling queues
+	 */
+	dev->hw_queues = kcalloc(nr_cpu_ids + nr_poll_queues,
+				 sizeof(*dev->hw_queues),
 				 GFP_KERNEL);
 	if (!dev->hw_queues) {
 		ret = -ENOMEM;
@@ -1415,6 +1482,7 @@ static struct rnbd_clt_dev *init_dev(struct rnbd_clt_session *sess,
 	dev->clt_device_id	= ret;
 	dev->sess		= sess;
 	dev->access_mode	= access_mode;
+	dev->nr_poll_queues	= nr_poll_queues;
 	mutex_init(&dev->lock);
 	refcount_set(&dev->refcount, 1);
 	dev->dev_state = DEV_STATE_INIT;
@@ -1471,14 +1539,13 @@ static bool exists_devpath(const char *pathname, const char *sessname)
 	return found;
 }

-static bool insert_dev_if_not_exists_devpath(const char *pathname,
-					     struct rnbd_clt_session *sess,
-					     struct rnbd_clt_dev *dev)
+static bool insert_dev_if_not_exists_devpath(struct rnbd_clt_dev *dev)
 {
 	bool found;
+	struct rnbd_clt_session *sess = dev->sess;

 	mutex_lock(&sess_lock);
-	found = __exists_dev(pathname, sess->sessname);
+	found = __exists_dev(dev->pathname, sess->sessname);
 	if (!found) {
 		mutex_lock(&sess->lock);
 		list_add_tail(&dev->list, &sess->devs_list);
@@ -1502,7 +1569,8 @@ struct rnbd_clt_dev *rnbd_clt_map_device(const char *sessname,
 					 struct rtrs_addr *paths,
 					 size_t path_cnt, u16 port_nr,
 					 const char *pathname,
-					 enum rnbd_access_mode access_mode)
+					 enum rnbd_access_mode access_mode,
+					 u32 nr_poll_queues)
 {
 	struct rnbd_clt_session *sess;
 	struct rnbd_clt_dev *dev;
@@ -1511,22 +1579,22 @@ struct rnbd_clt_dev *rnbd_clt_map_device(const char *sessname,
 	if (unlikely(exists_devpath(pathname, sessname)))
 		return ERR_PTR(-EEXIST);

-	sess = find_and_get_or_create_sess(sessname, paths, path_cnt, port_nr);
+	sess = find_and_get_or_create_sess(sessname, paths, path_cnt, port_nr, nr_poll_queues);
 	if (IS_ERR(sess))
 		return ERR_CAST(sess);

-	dev = init_dev(sess, access_mode, pathname);
+	dev = init_dev(sess, access_mode, pathname, nr_poll_queues);
 	if (IS_ERR(dev)) {
 		pr_err("map_device: failed to map device '%s' from session %s, can't initialize device, err: %ld\n",
 		       pathname, sess->sessname, PTR_ERR(dev));
 		ret = PTR_ERR(dev);
 		goto put_sess;
 	}
-	if (insert_dev_if_not_exists_devpath(pathname, sess, dev)) {
+	if (insert_dev_if_not_exists_devpath(dev)) {
 		ret = -EEXIST;
 		goto put_dev;
 	}
-	ret = send_msg_open(dev, WAIT);
+	ret = send_msg_open(dev, RTRS_PERMIT_WAIT);
 	if (ret) {
 		rnbd_clt_err(dev,
 			     "map_device: failed, can't open remote device, err: %d\n",
@@ -1536,7 +1604,7 @@ struct rnbd_clt_dev *rnbd_clt_map_device(const char *sessname,
 	mutex_lock(&dev->lock);
 	pr_debug("Opened remote device: session=%s, path='%s'\n",
 		 sess->sessname, pathname);
-	ret = rnbd_client_setup_device(sess, dev, dev->clt_device_id);
+	ret = rnbd_client_setup_device(dev);
 	if (ret) {
 		rnbd_clt_err(dev,
 			     "map_device: Failed to configure device, err: %d\n",
@@ -1555,14 +1623,12 @@ struct rnbd_clt_dev *rnbd_clt_map_device(const char *sessname,
 		       dev->max_hw_sectors, dev->rotational, dev->wc, dev->fua);

 	mutex_unlock(&dev->lock);
-
-	add_disk(dev->gd);
 	rnbd_clt_put_sess(sess);

 	return dev;

 send_close:
-	send_msg_close(dev, dev->device_id, WAIT);
+	send_msg_close(dev, dev->device_id, RTRS_PERMIT_WAIT);
 del_dev:
 	delete_dev(dev);
 put_dev:
@@ -1622,7 +1688,7 @@ int rnbd_clt_unmap_device(struct rnbd_clt_dev *dev, bool force,
 	destroy_sysfs(dev, sysfs_self);
 	destroy_gen_disk(dev);
 	if (was_mapped && sess->rtrs)
-		send_msg_close(dev, dev->device_id, WAIT);
+		send_msg_close(dev, dev->device_id, RTRS_PERMIT_WAIT);

 	rnbd_clt_info(dev, "Device is unmapped\n");
@@ -1656,7 +1722,7 @@ int rnbd_clt_remap_device(struct rnbd_clt_dev *dev)
 	mutex_unlock(&dev->lock);
 	if (!err) {
 		rnbd_clt_info(dev, "Remapping device.\n");
-		err = send_msg_open(dev, WAIT);
+		err = send_msg_open(dev, RTRS_PERMIT_WAIT);
 		if (err)
 			rnbd_clt_err(dev, "remap_device: %d\n", err);
 	}
@@ -1678,7 +1744,6 @@ static void rnbd_destroy_sessions(void)
 	struct rnbd_clt_dev *dev, *tn;

 	/* Firstly forbid access through sysfs interface */
-	rnbd_clt_destroy_default_group();
 	rnbd_clt_destroy_sysfs_files();

 	/*

==== next file ====

@@ -90,6 +90,7 @@ struct rnbd_clt_session {
 	int	queue_depth;
 	u32	max_io_size;
 	struct blk_mq_tag_set	tag_set;
+	u32	nr_poll_queues;
 	struct mutex		lock; /* protects state and devs_list */
 	struct list_head        devs_list; /* list of struct rnbd_clt_dev */
 	refcount_t		refcount;
@@ -118,6 +119,7 @@ struct rnbd_clt_dev {
 	enum rnbd_clt_dev_state	dev_state;
 	char			*pathname;
 	enum rnbd_access_mode	access_mode;
+	u32			nr_poll_queues;
 	bool			read_only;
 	bool			rotational;
 	bool			wc;
@@ -147,7 +149,8 @@ struct rnbd_clt_dev *rnbd_clt_map_device(const char *sessname,
 					 struct rtrs_addr *paths,
 					 size_t path_cnt, u16 port_nr,
 					 const char *pathname,
-					 enum rnbd_access_mode access_mode);
+					 enum rnbd_access_mode access_mode,
+					 u32 nr_poll_queues);
 int rnbd_clt_unmap_device(struct rnbd_clt_dev *dev, bool force,
 			  const struct attribute *sysfs_self);
@@ -159,7 +162,6 @@ int rnbd_clt_resize_disk(struct rnbd_clt_dev *dev, size_t newsize);
 int rnbd_clt_create_sysfs_files(void);

 void rnbd_clt_destroy_sysfs_files(void);
-void rnbd_clt_destroy_default_group(void);

 void rnbd_clt_remove_dev_symlink(struct rnbd_clt_dev *dev);

==== next file ====

@@ -147,10 +147,7 @@ static ssize_t rnbd_srv_dev_session_force_close_store(struct kobject *kobj,
 	}

 	rnbd_srv_info(sess_dev, "force close requested\n");
-
-	/* first remove sysfs itself to avoid deadlock */
-	sysfs_remove_file_self(&sess_dev->kobj, &attr->attr);
-	rnbd_srv_sess_dev_force_close(sess_dev);
+	rnbd_srv_sess_dev_force_close(sess_dev, attr);

 	return count;
 }

==== next file ====

@@ -114,8 +114,7 @@ rnbd_get_sess_dev(int dev_id, struct rnbd_srv_session *srv_sess)
 	return sess_dev;
 }

-static int process_rdma(struct rtrs_srv *sess,
-			struct rnbd_srv_session *srv_sess,
+static int process_rdma(struct rnbd_srv_session *srv_sess,
 			struct rtrs_srv_op *id, void *data, u32 datalen,
 			const void *usr, size_t usrlen)
 {
@@ -178,8 +177,10 @@ err:
 	return err;
 }

-static void destroy_device(struct rnbd_srv_dev *dev)
+static void destroy_device(struct kref *kref)
 {
+	struct rnbd_srv_dev *dev = container_of(kref, struct rnbd_srv_dev, kref);
+
 	WARN_ONCE(!list_empty(&dev->sess_dev_list),
 		  "Device %s is being destroyed but still in use!\n",
 		  dev->id);
@@ -198,18 +199,9 @@ static void destroy_device(struct rnbd_srv_dev *dev)
 	kfree(dev);
 }

-static void destroy_device_cb(struct kref *kref)
-{
-	struct rnbd_srv_dev *dev;
-
-	dev = container_of(kref, struct rnbd_srv_dev, kref);
-
-	destroy_device(dev);
-}
-
 static void rnbd_put_srv_dev(struct rnbd_srv_dev *dev)
 {
-	kref_put(&dev->kref, destroy_device_cb);
+	kref_put(&dev->kref, destroy_device);
 }

 void rnbd_destroy_sess_dev(struct rnbd_srv_sess_dev *sess_dev, bool keep_id)
@@ -306,7 +298,7 @@ static int create_sess(struct rtrs_srv *rtrs)
 	mutex_unlock(&sess_lock);

 	srv_sess->rtrs = rtrs;
-	strlcpy(srv_sess->sessname, sessname, sizeof(srv_sess->sessname));
+	strscpy(srv_sess->sessname, sessname, sizeof(srv_sess->sessname));

 	rtrs_srv_set_sess_priv(rtrs, srv_sess);
@@ -336,18 +328,22 @@ static int rnbd_srv_link_ev(struct rtrs_srv *rtrs,
 	}
 }

-void rnbd_srv_sess_dev_force_close(struct rnbd_srv_sess_dev *sess_dev)
+void rnbd_srv_sess_dev_force_close(struct rnbd_srv_sess_dev *sess_dev,
+				   struct kobj_attribute *attr)
 {
 	struct rnbd_srv_session	*sess = sess_dev->sess;

 	sess_dev->keep_id = true;
-	mutex_lock(&sess->lock);
+	/* It is already started to close by client's close message. */
+	if (!mutex_trylock(&sess->lock))
+		return;
+	/* first remove sysfs itself to avoid deadlock */
+	sysfs_remove_file_self(&sess_dev->kobj, &attr->attr);
 	rnbd_srv_destroy_dev_session_sysfs(sess_dev);
 	mutex_unlock(&sess->lock);
 }

-static int process_msg_close(struct rtrs_srv *rtrs,
-			     struct rnbd_srv_session *srv_sess,
+static int process_msg_close(struct rnbd_srv_session *srv_sess,
 			     void *data, size_t datalen, const void *usr,
 			     size_t usrlen)
 {
@@ -366,20 +362,18 @@ static int process_msg_close(struct rtrs_srv *rtrs,
 	return 0;
 }

-static int process_msg_open(struct rtrs_srv *rtrs,
-			    struct rnbd_srv_session *srv_sess,
+static int process_msg_open(struct rnbd_srv_session *srv_sess,
 			    const void *msg, size_t len,
 			    void *data, size_t datalen);

-static int process_msg_sess_info(struct rtrs_srv *rtrs,
-				 struct rnbd_srv_session *srv_sess,
+static int process_msg_sess_info(struct rnbd_srv_session *srv_sess,
 				 const void *msg, size_t len,
 				 void *data, size_t datalen);

-static int rnbd_srv_rdma_ev(struct rtrs_srv *rtrs, void *priv,
+static int rnbd_srv_rdma_ev(void *priv,
 			    struct rtrs_srv_op *id, int dir,
 			    void *data, size_t datalen, const void *usr,
 			    size_t usrlen)
 {
 	struct rnbd_srv_session *srv_sess = priv;
 	const struct rnbd_msg_hdr *hdr = usr;
@@ -393,19 +387,16 @@ static int rnbd_srv_rdma_ev(struct rtrs_srv *rtrs, void *priv,
 	switch (type) {
 	case RNBD_MSG_IO:
-		return process_rdma(rtrs, srv_sess, id, data, datalen, usr,
-				    usrlen);
+		return process_rdma(srv_sess, id, data, datalen, usr, usrlen);
 	case RNBD_MSG_CLOSE:
-		ret = process_msg_close(rtrs, srv_sess, data, datalen,
-					usr, usrlen);
+		ret = process_msg_close(srv_sess, data, datalen, usr, usrlen);
 		break;
 	case RNBD_MSG_OPEN:
-		ret = process_msg_open(rtrs, srv_sess, usr, usrlen,
-				       data, datalen);
+		ret = process_msg_open(srv_sess, usr, usrlen, data, datalen);
 		break;
 	case RNBD_MSG_SESS_INFO:
-		ret = process_msg_sess_info(rtrs, srv_sess, usr, usrlen,
-					    data, datalen);
+		ret = process_msg_sess_info(srv_sess, usr, usrlen, data,
+					    datalen);
 		break;
 	default:
 		pr_warn("Received unexpected message type %d with dir %d from session %s\n",
@@ -446,7 +437,7 @@ static struct rnbd_srv_dev *rnbd_srv_init_srv_dev(const char *id)
 	if (!dev)
 		return ERR_PTR(-ENOMEM);

-	strlcpy(dev->id, id, sizeof(dev->id));
+	strscpy(dev->id, id, sizeof(dev->id));
 	kref_init(&dev->kref);
 	INIT_LIST_HEAD(&dev->sess_dev_list);
 	mutex_init(&dev->lock);
@@ -598,7 +589,7 @@ rnbd_srv_create_set_sess_dev(struct rnbd_srv_session *srv_sess,
 	kref_init(&sdev->kref);

-	strlcpy(sdev->pathname, open_msg->dev_name, sizeof(sdev->pathname));
+	strscpy(sdev->pathname, open_msg->dev_name, sizeof(sdev->pathname));

 	sdev->rnbd_dev		= rnbd_dev;
 	sdev->sess		= srv_sess;
@@ -658,8 +649,7 @@ static char *rnbd_srv_get_full_path(struct rnbd_srv_session *srv_sess,
 	return full_path;
 }

-static int process_msg_sess_info(struct rtrs_srv *rtrs,
-				 struct rnbd_srv_session *srv_sess,
+static int process_msg_sess_info(struct rnbd_srv_session *srv_sess,
 				 const void *msg, size_t len,
 				 void *data, size_t datalen)
 {
@@ -700,8 +690,7 @@ find_srv_sess_dev(struct rnbd_srv_session *srv_sess, const char *dev_name)
 	return NULL;
 }

-static int process_msg_open(struct rtrs_srv *rtrs,
-			    struct rnbd_srv_session *srv_sess,
+static int process_msg_open(struct rnbd_srv_session *srv_sess,
 			    const void *msg, size_t len,
 			    void *data, size_t datalen)
 {

==== next file ====

@@ -64,7 +64,8 @@ struct rnbd_srv_sess_dev {
 	enum rnbd_access_mode		access_mode;
 };

-void rnbd_srv_sess_dev_force_close(struct rnbd_srv_sess_dev *sess_dev);
+void rnbd_srv_sess_dev_force_close(struct rnbd_srv_sess_dev *sess_dev,
+				   struct kobj_attribute *attr);

 /* rnbd-srv-sysfs.c */

 int rnbd_srv_create_dev_sysfs(struct rnbd_srv_dev *dev,

==== next file ====

@@ -392,7 +392,7 @@ static irqreturn_t rsxx_isr(int irq, void *pdata)
 }

 /*----------------- Card Event Handler -------------------*/
-static const char * const rsxx_card_state_to_str(unsigned int state)
+static const char *rsxx_card_state_to_str(unsigned int state)
 {
 	static const char * const state_strings[] = {
 		"Unknown", "Shutdown", "Starting", "Formatting",

==== next file ====

@@ -816,8 +816,6 @@ static int swim_floppy_init(struct swim_priv *swd)
 		}

 		swd->unit[drive].disk->queue = q;
-		blk_queue_bounce_limit(swd->unit[drive].disk->queue,
-				       BLK_BOUNCE_HIGH);
 		swd->unit[drive].disk->queue->queuedata = &swd->unit[drive];
 		swd->unit[drive].swd = swd;
 	}

==== next file ====

@@ -234,7 +234,6 @@ static unsigned short write_postamble[] = {
 };

 static void seek_track(struct floppy_state *fs, int n);
-static void init_dma(struct dbdma_cmd *cp, int cmd, void *buf, int count);
 static void act(struct floppy_state *fs);
 static void scan_timeout(struct timer_list *t);
 static void seek_timeout(struct timer_list *t);
@@ -404,12 +403,28 @@ static inline void seek_track(struct floppy_state *fs, int n)
 	fs->settle_time = 0;
 }

+/*
+ * XXX: this is a horrible hack, but at least allows ppc32 to get
+ * out of defining virt_to_bus, and this driver out of using the
+ * deprecated block layer bounce buffering for highmem addresses
+ * for no good reason.
+ */
+static unsigned long swim3_phys_to_bus(phys_addr_t paddr)
+{
+	return paddr + PCI_DRAM_OFFSET;
+}
+
+static phys_addr_t swim3_bio_phys(struct bio *bio)
+{
+	return page_to_phys(bio_page(bio)) + bio_offset(bio);
+}
+
 static inline void init_dma(struct dbdma_cmd *cp, int cmd,
-			    void *buf, int count)
+			    phys_addr_t paddr, int count)
 {
 	cp->req_count = cpu_to_le16(count);
 	cp->command = cpu_to_le16(cmd);
-	cp->phy_addr = cpu_to_le32(virt_to_bus(buf));
+	cp->phy_addr = cpu_to_le32(swim3_phys_to_bus(paddr));
 	cp->xfer_status = 0;
 }
@@ -441,16 +456,18 @@ static inline void setup_transfer(struct floppy_state *fs)
 	out_8(&sw->sector, fs->req_sector);
 	out_8(&sw->nsect, n);
 	out_8(&sw->gap3, 0);
-	out_le32(&dr->cmdptr, virt_to_bus(cp));
+	out_le32(&dr->cmdptr, swim3_phys_to_bus(virt_to_phys(cp)));
 	if (rq_data_dir(req) == WRITE) {
 		/* Set up 3 dma commands: write preamble, data, postamble */
-		init_dma(cp, OUTPUT_MORE, write_preamble, sizeof(write_preamble));
+		init_dma(cp, OUTPUT_MORE, virt_to_phys(write_preamble),
+			 sizeof(write_preamble));
 		++cp;
-		init_dma(cp, OUTPUT_MORE, bio_data(req->bio), 512);
+		init_dma(cp, OUTPUT_MORE, swim3_bio_phys(req->bio), 512);
 		++cp;
-		init_dma(cp, OUTPUT_LAST, write_postamble, sizeof(write_postamble));
+		init_dma(cp, OUTPUT_LAST, virt_to_phys(write_postamble),
+			 sizeof(write_postamble));
 	} else {
-		init_dma(cp, INPUT_LAST, bio_data(req->bio), n * 512);
+		init_dma(cp, INPUT_LAST, swim3_bio_phys(req->bio), n * 512);
 	}
 	++cp;
 	out_le16(&cp->command, DBDMA_STOP);
@@ -1201,7 +1218,6 @@ static int swim3_attach(struct macio_dev *mdev,
 		disk->queue = NULL;
 		goto out_put_disk;
 	}
-	blk_queue_bounce_limit(disk->queue, BLK_BOUNCE_HIGH);
 	disk->queue->queuedata = fs;

 	rc = swim3_add_device(mdev, floppy_count);

(File diff suppressed because it is too large.)

==== next file ====

@@ -1,132 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* This file contains defines for the
* Micro Memory MM5415
* family PCI Memory Module with Battery Backup.
*
* Copyright Micro Memory INC 2001. All rights reserved.
*/
#ifndef _DRIVERS_BLOCK_MM_H
#define _DRIVERS_BLOCK_MM_H
#define IRQ_TIMEOUT (1 * HZ)
/* CSR register definition */
#define MEMCTRLSTATUS_MAGIC 0x00
#define MM_MAGIC_VALUE (unsigned char)0x59
#define MEMCTRLSTATUS_BATTERY 0x04
#define BATTERY_1_DISABLED 0x01
#define BATTERY_1_FAILURE 0x02
#define BATTERY_2_DISABLED 0x04
#define BATTERY_2_FAILURE 0x08
#define MEMCTRLSTATUS_MEMORY 0x07
#define MEM_128_MB 0xfe
#define MEM_256_MB 0xfc
#define MEM_512_MB 0xf8
#define MEM_1_GB 0xf0
#define MEM_2_GB 0xe0
#define MEMCTRLCMD_LEDCTRL 0x08
#define LED_REMOVE 2
#define LED_FAULT 4
#define LED_POWER 6
#define LED_FLIP 255
#define LED_OFF 0x00
#define LED_ON 0x01
#define LED_FLASH_3_5 0x02
#define LED_FLASH_7_0 0x03
#define LED_POWER_ON 0x00
#define LED_POWER_OFF 0x01
#define USER_BIT1 0x01
#define USER_BIT2 0x02
#define MEMORY_INITIALIZED USER_BIT1
#define MEMCTRLCMD_ERRCTRL 0x0C
#define EDC_NONE_DEFAULT 0x00
#define EDC_NONE 0x01
#define EDC_STORE_READ 0x02
#define EDC_STORE_CORRECT 0x03
#define MEMCTRLCMD_ERRCNT 0x0D
#define MEMCTRLCMD_ERRSTATUS 0x0E
#define ERROR_DATA_LOG 0x20
#define ERROR_ADDR_LOG 0x28
#define ERROR_COUNT 0x3D
#define ERROR_SYNDROME 0x3E
#define ERROR_CHECK 0x3F
#define DMA_PCI_ADDR 0x40
#define DMA_LOCAL_ADDR 0x48
#define DMA_TRANSFER_SIZE 0x50
#define DMA_DESCRIPTOR_ADDR 0x58
#define DMA_SEMAPHORE_ADDR 0x60
#define DMA_STATUS_CTRL 0x68
#define DMASCR_GO 0x00001
#define DMASCR_TRANSFER_READ 0x00002
#define DMASCR_CHAIN_EN 0x00004
#define DMASCR_SEM_EN 0x00010
#define DMASCR_DMA_COMP_EN 0x00020
#define DMASCR_CHAIN_COMP_EN 0x00040
#define DMASCR_ERR_INT_EN 0x00080
#define DMASCR_PARITY_INT_EN 0x00100
#define DMASCR_ANY_ERR 0x00800
#define DMASCR_MBE_ERR 0x01000
#define DMASCR_PARITY_ERR_REP 0x02000
#define DMASCR_PARITY_ERR_DET 0x04000
#define DMASCR_SYSTEM_ERR_SIG 0x08000
#define DMASCR_TARGET_ABT 0x10000
#define DMASCR_MASTER_ABT 0x20000
#define DMASCR_DMA_COMPLETE 0x40000
#define DMASCR_CHAIN_COMPLETE 0x80000
/*
3.SOME PCs HAVE HOST BRIDGES WHICH APPARENTLY DO NOT CORRECTLY HANDLE
READ-LINE (0xE) OR READ-MULTIPLE (0xC) PCI COMMAND CODES DURING DMA
TRANSFERS. IN OTHER SYSTEMS THESE COMMAND CODES WILL CAUSE THE HOST BRIDGE
TO ALLOW LONGER BURSTS DURING DMA READ OPERATIONS. THE UPPER FOUR BITS
(31..28) OF THE DMA CSR HAVE BEEN MADE PROGRAMMABLE, SO THAT EITHER A 0x6,
AN 0xE OR A 0xC CAN BE WRITTEN TO THEM TO SET THE COMMAND CODE USED DURING
DMA READ OPERATIONS.
*/
#define DMASCR_READ 0x60000000
#define DMASCR_READLINE 0xE0000000
#define DMASCR_READMULTI 0xC0000000
#define DMASCR_ERROR_MASK (DMASCR_MASTER_ABT | DMASCR_TARGET_ABT | DMASCR_SYSTEM_ERR_SIG | DMASCR_PARITY_ERR_DET | DMASCR_MBE_ERR | DMASCR_ANY_ERR)
#define DMASCR_HARD_ERROR (DMASCR_MASTER_ABT | DMASCR_TARGET_ABT | DMASCR_SYSTEM_ERR_SIG | DMASCR_PARITY_ERR_DET | DMASCR_MBE_ERR)
#define WINDOWMAP_WINNUM 0x7B
#define DMA_READ_FROM_HOST 0
#define DMA_WRITE_TO_HOST 1
struct mm_dma_desc {
__le64 pci_addr;
__le64 local_addr;
__le32 transfer_size;
u32 zero1;
__le64 next_desc_addr;
__le64 sem_addr;
__le32 control_bits;
u32 zero2;
dma_addr_t data_dma_handle;
/* Copy of the bits */
__le64 sem_control_bits;
} __attribute__((aligned(8)));
/* bits for card->flags */
#define UM_FLAG_DMA_IN_REGS 1
#define UM_FLAG_NO_BYTE_STATUS 2
#define UM_FLAG_NO_BATTREG 4
#define UM_FLAG_NO_BATT 8
#endif

==== next file ====

@@ -1949,7 +1949,7 @@ module_param(feature_persistent, bool, 0644);
 MODULE_PARM_DESC(feature_persistent,
 		 "Enables the persistent grants feature");

-/**
+/*
 * Entry point to this code when a new device is created. Allocate the basic
 * structures and the ring buffer for communication with the backend, and
 * inform the backend of the appropriate details for those. Switch to
@@ -2075,7 +2075,7 @@ static int blkif_recover(struct blkfront_info *info)
 	return 0;
 }

-/**
+/*
 * We are reconnecting to the backend, due to a suspend/resume, or a backend
 * driver restart. We tear down our blkif structure and recreate it, but
 * leave the device-layer structures intact so that this is transparent to the
@@ -2440,7 +2440,7 @@ fail:
 	return;
 }

-/**
+/*
 * Callback received when the backend's state changes.
 */
 static void blkback_changed(struct xenbus_device *dev,

(File diff suppressed because it is too large.)

==== next file ====

@@ -583,7 +583,8 @@ static blk_status_t gdrom_readdisk_dma(struct request *req)
 	read_command->cmd[1] = 0x20;
 	block = blk_rq_pos(req)/GD_TO_BLK + GD_SESSION_OFFSET;
 	block_cnt = blk_rq_sectors(req)/GD_TO_BLK;
-	__raw_writel(virt_to_phys(bio_data(req->bio)), GDROM_DMA_STARTADDR_REG);
+	__raw_writel(page_to_phys(bio_page(req->bio)) + bio_offset(req->bio),
+		     GDROM_DMA_STARTADDR_REG);
 	__raw_writel(block_cnt * GDROM_HARD_SECTOR, GDROM_DMA_LENGTH_REG);
 	__raw_writel(1, GDROM_DMA_DIRECTION_REG);
 	__raw_writel(1, GDROM_DMA_ENABLE_REG);
@@ -789,8 +790,6 @@ static int probe_gdrom(struct platform_device *devptr)
 		goto probe_fail_requestq;
 	}

-	blk_queue_bounce_limit(gd.gdrom_rq, BLK_BOUNCE_HIGH);
-
 	err = probe_gdrom_setupqueue();
 	if (err)
 		goto probe_fail_toc;

==== next file ====

@@ -103,11 +103,11 @@ static inline void __rtrs_put_permit(struct rtrs_clt *clt,
 * up earlier.
 *
 * Context:
- *    Can sleep if @wait == RTRS_TAG_WAIT
+ *    Can sleep if @wait == RTRS_PERMIT_WAIT
 */
 struct rtrs_permit *rtrs_clt_get_permit(struct rtrs_clt *clt,
 					enum rtrs_clt_con_type con_type,
-					int can_wait)
+					enum wait_type can_wait)
 {
 	struct rtrs_permit *permit;
 	DEFINE_WAIT(wait);
@@ -174,7 +174,7 @@ struct rtrs_clt_con *rtrs_permit_to_clt_con(struct rtrs_clt_sess *sess,
 	int id = 0;

 	if (likely(permit->con_type == RTRS_IO_CON))
-		id = (permit->cpu_id % (sess->s.con_num - 1)) + 1;
+		id = (permit->cpu_id % (sess->s.irq_con_num - 1)) + 1;

 	return to_clt_con(sess->s.con[id]);
 }
@@ -1400,23 +1400,29 @@ static void rtrs_clt_close_work(struct work_struct *work);
 static struct rtrs_clt_sess *alloc_sess(struct rtrs_clt *clt,
 					const struct rtrs_addr *path,
 					size_t con_num, u16 max_segments,
-					size_t max_segment_size)
+					u32 nr_poll_queues)
 {
 	struct rtrs_clt_sess *sess;
 	int err = -ENOMEM;
 	int cpu;
+	size_t total_con;

 	sess = kzalloc(sizeof(*sess), GFP_KERNEL);
 	if (!sess)
 		goto err;

-	/* Extra connection for user messages */
-	con_num += 1;
-
-	sess->s.con = kcalloc(con_num, sizeof(*sess->s.con), GFP_KERNEL);
+	/*
+	 * irqmode and poll
+	 * +1: Extra connection for user messages
+	 */
+	total_con = con_num + nr_poll_queues + 1;
+	sess->s.con = kcalloc(total_con, sizeof(*sess->s.con), GFP_KERNEL);
 	if (!sess->s.con)
 		goto err_free_sess;

+	sess->s.con_num = total_con;
+	sess->s.irq_con_num = con_num + 1;
+
 	sess->stats = kzalloc(sizeof(*sess->stats), GFP_KERNEL);
 	if (!sess->stats)
 		goto err_free_con;
@@ -1435,9 +1441,8 @@ static struct rtrs_clt_sess *alloc_sess(struct rtrs_clt *clt,
 		memcpy(&sess->s.src_addr, path->src,
 		       rdma_addr_size((struct sockaddr *)path->src));
 	strlcpy(sess->s.sessname, clt->sessname, sizeof(sess->s.sessname));
-	sess->s.con_num = con_num;
 	sess->clt = clt;
-	sess->max_pages_per_mr = max_segments * max_segment_size >> 12;
+	sess->max_pages_per_mr = max_segments;
 	init_waitqueue_head(&sess->state_wq);
 	sess->state = RTRS_CLT_CONNECTING;
 	atomic_set(&sess->connected_cnt, 0);
@@ -1576,9 +1581,14 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 	}
 	cq_size = max_send_wr + max_recv_wr;
 	cq_vector = con->cpu % sess->s.dev->ib_dev->num_comp_vectors;
-	err = rtrs_cq_qp_create(&sess->s, &con->c, sess->max_send_sge,
-				cq_vector, cq_size, max_send_wr,
-				max_recv_wr, IB_POLL_SOFTIRQ);
+	if (con->c.cid >= sess->s.irq_con_num)
+		err = rtrs_cq_qp_create(&sess->s, &con->c, sess->max_send_sge,
+					cq_vector, cq_size, max_send_wr,
+					max_recv_wr, IB_POLL_DIRECT);
+	else
+		err = rtrs_cq_qp_create(&sess->s, &con->c, sess->max_send_sge,
+					cq_vector, cq_size, max_send_wr,
+					max_recv_wr, IB_POLL_SOFTIRQ);
 	/*
 	 * In case of error we do not bother to clean previous allocations,
 	 * since destroy_con_cq_qp() must be called.
@@ -2528,7 +2538,6 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
 				  void	(*link_ev)(void *priv,
 						   enum rtrs_clt_link_ev ev),
 				  unsigned int max_segments,
-				  size_t max_segment_size,
 				  unsigned int reconnect_delay_sec,
 				  unsigned int max_reconnect_attempts)
 {
@@ -2558,7 +2567,6 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num,
 	clt->port = port;
 	clt->pdu_sz = pdu_sz;
 	clt->max_segments = max_segments;
-	clt->max_segment_size = max_segment_size;
 	clt->reconnect_delay_sec = reconnect_delay_sec;
 	clt->max_reconnect_attempts = max_reconnect_attempts;
 	clt->priv = priv;
@@ -2628,9 +2636,9 @@ static void free_clt(struct rtrs_clt *clt)
 * @pdu_sz: Size of extra payload which can be accessed after permit allocation.
 * @reconnect_delay_sec: time between reconnect tries
 * @max_segments: Max. number of segments per IO request
- * @max_segment_size: Max. size of one segment
 * @max_reconnect_attempts: Number of times to reconnect on error before giving
 *			    up, 0 for * disabled, -1 for forever
+ * @nr_poll_queues: number of polling mode connection using IB_POLL_DIRECT flag
 *
 * Starts session establishment with the rtrs_server. The function can block
 * up to ~2000ms before it returns.
@@ -2643,8 +2651,7 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 			       size_t paths_num, u16 port,
 			       size_t pdu_sz, u8 reconnect_delay_sec,
 			       u16 max_segments,
-			       size_t max_segment_size,
-			       s16 max_reconnect_attempts)
+			       s16 max_reconnect_attempts, u32 nr_poll_queues)
 {
 	struct rtrs_clt_sess *sess, *tmp;
 	struct rtrs_clt *clt;
@@ -2652,7 +2659,7 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 	clt = alloc_clt(sessname, paths_num, port, pdu_sz, ops->priv,
 			ops->link_ev,
-			max_segments, max_segment_size, reconnect_delay_sec,
+			max_segments, reconnect_delay_sec,
 			max_reconnect_attempts);
 	if (IS_ERR(clt)) {
 		err = PTR_ERR(clt);
@@ -2662,7 +2669,7 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 		struct rtrs_clt_sess *sess;

 		sess = alloc_sess(clt, &paths[i], nr_cpu_ids,
-				  max_segments, max_segment_size);
+				  max_segments, nr_poll_queues);
 		if (IS_ERR(sess)) {
 			err = PTR_ERR(sess);
 			goto close_all_sess;
@@ -2887,6 +2894,31 @@ int rtrs_clt_request(int dir, struct rtrs_clt_req_ops *ops,
 }
 EXPORT_SYMBOL(rtrs_clt_request);

+int rtrs_clt_rdma_cq_direct(struct rtrs_clt *clt, unsigned int index)
+{
+	int cnt;
+	struct rtrs_con *con;
+	struct rtrs_clt_sess *sess;
+	struct path_it it;
+
+	rcu_read_lock();
+	for (path_it_init(&it, clt);
+	     (sess = it.next_path(&it)) && it.i < it.clt->paths_num; it.i++) {
+		if (READ_ONCE(sess->state) != RTRS_CLT_CONNECTED)
+			continue;
+
+		con = sess->s.con[index + 1];
+		cnt = ib_process_cq_direct(con->cq, -1);
+		if (cnt)
+			break;
+	}
+	path_it_deinit(&it);
+	rcu_read_unlock();
+
+	return cnt;
+}
+EXPORT_SYMBOL(rtrs_clt_rdma_cq_direct);
+
 /**
 * rtrs_clt_query() - queries RTRS session attributes
 *@clt: session pointer
@@ -2915,8 +2947,7 @@ int rtrs_clt_create_path_from_sysfs(struct rtrs_clt *clt,
 	struct rtrs_clt_sess *sess;
 	int err;

-	sess = alloc_sess(clt, addr, nr_cpu_ids, clt->max_segments,
-			  clt->max_segment_size);
+	sess = alloc_sess(clt, addr, nr_cpu_ids, clt->max_segments, 0);
 	if (IS_ERR(sess))
 		return PTR_ERR(sess);

==== next file ====

@@ -166,7 +166,6 @@ struct rtrs_clt {
 	unsigned int		max_reconnect_attempts;
 	unsigned int		reconnect_delay_sec;
 	unsigned int		max_segments;
-	size_t			max_segment_size;
 	void			*permits;
 	unsigned long		*permits_map;
 	size_t			queue_depth;

==== next file ====

@@ -101,6 +101,7 @@ struct rtrs_sess {
 	uuid_t			uuid;
 	struct rtrs_con		**con;
 	unsigned int		con_num;
+	unsigned int		irq_con_num;
 	unsigned int		recon_cnt;
 	struct rtrs_ib_dev	*dev;
 	int			dev_ref;

==== next file ====

@@ -998,7 +998,7 @@ static void process_read(struct rtrs_srv_con *con,
 	usr_len = le16_to_cpu(msg->usr_len);
 	data_len = off - usr_len;
 	data = page_address(srv->chunks[buf_id]);
-	ret = ctx->ops.rdma_ev(srv, srv->priv, id, READ, data, data_len,
+	ret = ctx->ops.rdma_ev(srv->priv, id, READ, data, data_len,
 			       data + data_len, usr_len);

 	if (unlikely(ret)) {
@@ -1051,7 +1051,7 @@ static void process_write(struct rtrs_srv_con *con,
 	usr_len = le16_to_cpu(req->usr_len);
 	data_len = off - usr_len;
 	data = page_address(srv->chunks[buf_id]);
-	ret = ctx->ops.rdma_ev(srv, srv->priv, id, WRITE, data, data_len,
+	ret = ctx->ops.rdma_ev(srv->priv, id, WRITE, data, data_len,
 			       data + data_len, usr_len);
 	if (unlikely(ret)) {
 		rtrs_err_rl(s,

==== next file ====

@@ -58,14 +58,13 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops,
 			       size_t path_cnt, u16 port,
 			       size_t pdu_sz, u8 reconnect_delay_sec,
 			       u16 max_segments,
-			       size_t max_segment_size,
-			       s16 max_reconnect_attempts);
+			       s16 max_reconnect_attempts, u32 nr_poll_queues);

 void rtrs_clt_close(struct rtrs_clt *sess);

-enum {
+enum wait_type {
 	RTRS_PERMIT_NOWAIT = 0,
-	RTRS_PERMIT_WAIT   = 1,
+	RTRS_PERMIT_WAIT   = 1
 };

 /**
@@ -81,7 +80,7 @@ enum rtrs_clt_con_type {
 struct rtrs_permit *rtrs_clt_get_permit(struct rtrs_clt *sess,
 					enum rtrs_clt_con_type con_type,
-					int wait);
+					enum wait_type wait);

 void rtrs_clt_put_permit(struct rtrs_clt *sess, struct rtrs_permit *permit);
@@ -103,6 +102,7 @@ int rtrs_clt_request(int dir, struct rtrs_clt_req_ops *ops,
 		     struct rtrs_clt *sess, struct rtrs_permit *permit,
 		     const struct kvec *vec, size_t nr, size_t len,
 		     struct scatterlist *sg, unsigned int sg_cnt);
+int rtrs_clt_rdma_cq_direct(struct rtrs_clt *clt, unsigned int index);

 /**
 * rtrs_attrs - RTRS session attributes
@@ -138,7 +138,6 @@ struct rtrs_srv_ops {
 	 *			message for the data transfer will be sent to
 	 *			the client.

-	 *	@sess:		Session
 	 *	@priv:		Private data set by rtrs_srv_set_sess_priv()
 	 *	@id:		internal RTRS operation id
 	 *	@dir:		READ/WRITE
@@ -152,7 +151,7 @@ struct rtrs_srv_ops {
 	 *	@usr:		The extra user message sent by the client (%vec)
 	 *	@usrlen:	Size of the user message
 	 */
-	int (*rdma_ev)(struct rtrs_srv *sess, void *priv,
+	int (*rdma_ev)(void *priv,
 		       struct rtrs_srv_op *id, int dir,
 		       void *data, size_t datalen, const void *usr,
 		       size_t usrlen);


@@ -4,7 +4,7 @@
#
menuconfig NVM
-	bool "Open-Channel SSD target support"
+	bool "Open-Channel SSD target support (DEPRECATED)"
	depends on BLOCK
	help
	  Say Y here to get to enable Open-channel SSDs.
@@ -15,6 +15,8 @@ menuconfig NVM
	  If you say N, all options in this submenu will be skipped and disabled
	  only do this if you know what you are doing.
+	  This code is deprecated and will be removed in Linux 5.15.
if NVM
config NVM_PBLK


@@ -1174,6 +1174,8 @@ int nvm_register(struct nvm_dev *dev)
{
	int ret, exp_pool_size;
+	pr_warn_once("lightnvm support is deprecated and will be removed in Linux 5.15.\n");
	if (!dev->q || !dev->ops) {
		kref_put(&dev->ref, nvm_free);
		return -EINVAL;
@@ -1257,7 +1259,7 @@ static long nvm_ioctl_info(struct file *file, void __user *arg)
	info = memdup_user(arg, sizeof(struct nvm_ioctl_info));
	if (IS_ERR(info))
-		return -EFAULT;
+		return PTR_ERR(info);
	info->version[0] = NVM_VERSION_MAJOR;
	info->version[1] = NVM_VERSION_MINOR;


@ -482,8 +482,7 @@ void bch_bucket_free(struct cache_set *c, struct bkey *k)
unsigned int i; unsigned int i;
for (i = 0; i < KEY_PTRS(k); i++) for (i = 0; i < KEY_PTRS(k); i++)
__bch_bucket_free(PTR_CACHE(c, k, i), __bch_bucket_free(c->cache, PTR_BUCKET(c, k, i));
PTR_BUCKET(c, k, i));
} }
int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve, int __bch_bucket_alloc_set(struct cache_set *c, unsigned int reserve,
@ -674,7 +673,7 @@ bool bch_alloc_sectors(struct cache_set *c,
SET_PTR_OFFSET(&b->key, i, PTR_OFFSET(&b->key, i) + sectors); SET_PTR_OFFSET(&b->key, i, PTR_OFFSET(&b->key, i) + sectors);
atomic_long_add(sectors, atomic_long_add(sectors,
&PTR_CACHE(c, &b->key, i)->sectors_written); &c->cache->sectors_written);
} }
if (b->sectors_free < c->cache->sb.block_size) if (b->sectors_free < c->cache->sb.block_size)


@ -804,13 +804,6 @@ static inline sector_t bucket_remainder(struct cache_set *c, sector_t s)
return s & (c->cache->sb.bucket_size - 1); return s & (c->cache->sb.bucket_size - 1);
} }
static inline struct cache *PTR_CACHE(struct cache_set *c,
const struct bkey *k,
unsigned int ptr)
{
return c->cache;
}
static inline size_t PTR_BUCKET_NR(struct cache_set *c, static inline size_t PTR_BUCKET_NR(struct cache_set *c,
const struct bkey *k, const struct bkey *k,
unsigned int ptr) unsigned int ptr)
@ -822,7 +815,7 @@ static inline struct bucket *PTR_BUCKET(struct cache_set *c,
const struct bkey *k, const struct bkey *k,
unsigned int ptr) unsigned int ptr)
{ {
return PTR_CACHE(c, k, ptr)->buckets + PTR_BUCKET_NR(c, k, ptr); return c->cache->buckets + PTR_BUCKET_NR(c, k, ptr);
} }
static inline uint8_t gen_after(uint8_t a, uint8_t b) static inline uint8_t gen_after(uint8_t a, uint8_t b)
@ -841,7 +834,7 @@ static inline uint8_t ptr_stale(struct cache_set *c, const struct bkey *k,
static inline bool ptr_available(struct cache_set *c, const struct bkey *k, static inline bool ptr_available(struct cache_set *c, const struct bkey *k,
unsigned int i) unsigned int i)
{ {
return (PTR_DEV(k, i) < MAX_CACHES_PER_SET) && PTR_CACHE(c, k, i); return (PTR_DEV(k, i) < MAX_CACHES_PER_SET) && c->cache;
} }
/* Btree key macros */ /* Btree key macros */


@ -426,7 +426,7 @@ void __bch_btree_node_write(struct btree *b, struct closure *parent)
do_btree_node_write(b); do_btree_node_write(b);
atomic_long_add(set_blocks(i, block_bytes(b->c->cache)) * b->c->cache->sb.block_size, atomic_long_add(set_blocks(i, block_bytes(b->c->cache)) * b->c->cache->sb.block_size,
&PTR_CACHE(b->c, &b->key, 0)->btree_sectors_written); &b->c->cache->btree_sectors_written);
b->written += set_blocks(i, block_bytes(b->c->cache)); b->written += set_blocks(i, block_bytes(b->c->cache));
} }
@ -1161,7 +1161,7 @@ static void make_btree_freeing_key(struct btree *b, struct bkey *k)
for (i = 0; i < KEY_PTRS(k); i++) for (i = 0; i < KEY_PTRS(k); i++)
SET_PTR_GEN(k, i, SET_PTR_GEN(k, i,
bch_inc_gen(PTR_CACHE(b->c, &b->key, i), bch_inc_gen(b->c->cache,
PTR_BUCKET(b->c, &b->key, i))); PTR_BUCKET(b->c, &b->key, i)));
mutex_unlock(&b->c->bucket_lock); mutex_unlock(&b->c->bucket_lock);


@ -50,7 +50,7 @@ void bch_btree_verify(struct btree *b)
v->keys.ops = b->keys.ops; v->keys.ops = b->keys.ops;
bio = bch_bbio_alloc(b->c); bio = bch_bbio_alloc(b->c);
bio_set_dev(bio, PTR_CACHE(b->c, &b->key, 0)->bdev); bio_set_dev(bio, b->c->cache->bdev);
bio->bi_iter.bi_sector = PTR_OFFSET(&b->key, 0); bio->bi_iter.bi_sector = PTR_OFFSET(&b->key, 0);
bio->bi_iter.bi_size = KEY_SIZE(&v->key) << 9; bio->bi_iter.bi_size = KEY_SIZE(&v->key) << 9;
bio->bi_opf = REQ_OP_READ | REQ_META; bio->bi_opf = REQ_OP_READ | REQ_META;


@ -50,7 +50,7 @@ static bool __ptr_invalid(struct cache_set *c, const struct bkey *k)
for (i = 0; i < KEY_PTRS(k); i++) for (i = 0; i < KEY_PTRS(k); i++)
if (ptr_available(c, k, i)) { if (ptr_available(c, k, i)) {
struct cache *ca = PTR_CACHE(c, k, i); struct cache *ca = c->cache;
size_t bucket = PTR_BUCKET_NR(c, k, i); size_t bucket = PTR_BUCKET_NR(c, k, i);
size_t r = bucket_remainder(c, PTR_OFFSET(k, i)); size_t r = bucket_remainder(c, PTR_OFFSET(k, i));
@ -71,7 +71,7 @@ static const char *bch_ptr_status(struct cache_set *c, const struct bkey *k)
for (i = 0; i < KEY_PTRS(k); i++) for (i = 0; i < KEY_PTRS(k); i++)
if (ptr_available(c, k, i)) { if (ptr_available(c, k, i)) {
struct cache *ca = PTR_CACHE(c, k, i); struct cache *ca = c->cache;
size_t bucket = PTR_BUCKET_NR(c, k, i); size_t bucket = PTR_BUCKET_NR(c, k, i);
size_t r = bucket_remainder(c, PTR_OFFSET(k, i)); size_t r = bucket_remainder(c, PTR_OFFSET(k, i));


@@ -19,7 +19,7 @@ struct feature {
static struct feature feature_list[] = {
	{BCH_FEATURE_INCOMPAT, BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE,
	 "large_bucket"},
-	{0, 0, 0 },
+	{0, 0, NULL },
};
#define compose_feature_string(type) \


@ -36,7 +36,7 @@ void __bch_submit_bbio(struct bio *bio, struct cache_set *c)
struct bbio *b = container_of(bio, struct bbio, bio); struct bbio *b = container_of(bio, struct bbio, bio);
bio->bi_iter.bi_sector = PTR_OFFSET(&b->key, 0); bio->bi_iter.bi_sector = PTR_OFFSET(&b->key, 0);
bio_set_dev(bio, PTR_CACHE(c, &b->key, 0)->bdev); bio_set_dev(bio, c->cache->bdev);
b->submit_time_us = local_clock_us(); b->submit_time_us = local_clock_us();
closure_bio_submit(c, bio, bio->bi_private); closure_bio_submit(c, bio, bio->bi_private);
@ -137,7 +137,7 @@ void bch_bbio_count_io_errors(struct cache_set *c, struct bio *bio,
blk_status_t error, const char *m) blk_status_t error, const char *m)
{ {
struct bbio *b = container_of(bio, struct bbio, bio); struct bbio *b = container_of(bio, struct bbio, bio);
struct cache *ca = PTR_CACHE(c, &b->key, 0); struct cache *ca = c->cache;
int is_read = (bio_data_dir(bio) == READ ? 1 : 0); int is_read = (bio_data_dir(bio) == READ ? 1 : 0);
unsigned int threshold = op_is_write(bio_op(bio)) unsigned int threshold = op_is_write(bio_op(bio))


@ -111,7 +111,7 @@ reread: left = ca->sb.bucket_size - offset;
* Check from the oldest jset for last_seq. If * Check from the oldest jset for last_seq. If
* i->j.seq < j->last_seq, it means the oldest jset * i->j.seq < j->last_seq, it means the oldest jset
* in list is expired and useless, remove it from * in list is expired and useless, remove it from
* this list. Otherwise, j is a condidate jset for * this list. Otherwise, j is a candidate jset for
* further following checks. * further following checks.
*/ */
while (!list_empty(list)) { while (!list_empty(list)) {
@ -498,7 +498,7 @@ static void btree_flush_write(struct cache_set *c)
* - If there are matched nodes recorded in btree_nodes[], * - If there are matched nodes recorded in btree_nodes[],
* they are clean now (this is why and how the oldest * they are clean now (this is why and how the oldest
* journal entry can be reclaimed). These selected nodes * journal entry can be reclaimed). These selected nodes
* will be ignored and skipped in the folowing for-loop. * will be ignored and skipped in the following for-loop.
*/ */
if (((btree_current_write(b)->journal - fifo_front_p) & if (((btree_current_write(b)->journal - fifo_front_p) &
mask) != 0) { mask) != 0) {
@ -768,7 +768,7 @@ static void journal_write_unlocked(struct closure *cl)
w->data->csum = csum_set(w->data); w->data->csum = csum_set(w->data);
for (i = 0; i < KEY_PTRS(k); i++) { for (i = 0; i < KEY_PTRS(k); i++) {
ca = PTR_CACHE(c, k, i); ca = c->cache;
bio = &ca->journal.bio; bio = &ca->journal.bio;
atomic_long_add(sectors, &ca->meta_sectors_written); atomic_long_add(sectors, &ca->meta_sectors_written);


@ -1052,6 +1052,7 @@ static int cached_dev_status_update(void *arg)
int bch_cached_dev_run(struct cached_dev *dc) int bch_cached_dev_run(struct cached_dev *dc)
{ {
int ret = 0;
struct bcache_device *d = &dc->disk; struct bcache_device *d = &dc->disk;
char *buf = kmemdup_nul(dc->sb.label, SB_LABEL_SIZE, GFP_KERNEL); char *buf = kmemdup_nul(dc->sb.label, SB_LABEL_SIZE, GFP_KERNEL);
char *env[] = { char *env[] = {
@ -1064,19 +1065,15 @@ int bch_cached_dev_run(struct cached_dev *dc)
if (dc->io_disable) { if (dc->io_disable) {
pr_err("I/O disabled on cached dev %s\n", pr_err("I/O disabled on cached dev %s\n",
dc->backing_dev_name); dc->backing_dev_name);
kfree(env[1]); ret = -EIO;
kfree(env[2]); goto out;
kfree(buf);
return -EIO;
} }
if (atomic_xchg(&dc->running, 1)) { if (atomic_xchg(&dc->running, 1)) {
kfree(env[1]);
kfree(env[2]);
kfree(buf);
pr_info("cached dev %s is running already\n", pr_info("cached dev %s is running already\n",
dc->backing_dev_name); dc->backing_dev_name);
return -EBUSY; ret = -EBUSY;
goto out;
} }
if (!d->c && if (!d->c &&
@ -1097,15 +1094,13 @@ int bch_cached_dev_run(struct cached_dev *dc)
* only class / kset properties are persistent * only class / kset properties are persistent
*/ */
kobject_uevent_env(&disk_to_dev(d->disk)->kobj, KOBJ_CHANGE, env); kobject_uevent_env(&disk_to_dev(d->disk)->kobj, KOBJ_CHANGE, env);
kfree(env[1]);
kfree(env[2]);
kfree(buf);
if (sysfs_create_link(&d->kobj, &disk_to_dev(d->disk)->kobj, "dev") || if (sysfs_create_link(&d->kobj, &disk_to_dev(d->disk)->kobj, "dev") ||
sysfs_create_link(&disk_to_dev(d->disk)->kobj, sysfs_create_link(&disk_to_dev(d->disk)->kobj,
&d->kobj, "bcache")) { &d->kobj, "bcache")) {
pr_err("Couldn't create bcache dev <-> disk sysfs symlinks\n"); pr_err("Couldn't create bcache dev <-> disk sysfs symlinks\n");
return -ENOMEM; ret = -ENOMEM;
goto out;
} }
dc->status_update_thread = kthread_run(cached_dev_status_update, dc->status_update_thread = kthread_run(cached_dev_status_update,
@ -1114,7 +1109,11 @@ int bch_cached_dev_run(struct cached_dev *dc)
pr_warn("failed to create bcache_status_update kthread, continue to run without monitoring backing device status\n"); pr_warn("failed to create bcache_status_update kthread, continue to run without monitoring backing device status\n");
} }
return 0; out:
kfree(env[1]);
kfree(env[2]);
kfree(buf);
return ret;
} }
/* /*


@@ -27,7 +27,7 @@ struct closure;
#else /* DEBUG */
-#define EBUG_ON(cond)		do { if (cond); } while (0)
+#define EBUG_ON(cond)		do { if (cond) do {} while (0); } while (0)
#define atomic_dec_bug(v)	atomic_dec(v)
#define atomic_inc_bug(v, i)	atomic_inc(v)


@ -110,13 +110,13 @@ static void __update_writeback_rate(struct cached_dev *dc)
int64_t fps; int64_t fps;
if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID) { if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID) {
fp_term = dc->writeback_rate_fp_term_low * fp_term = (int64_t)dc->writeback_rate_fp_term_low *
(c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW); (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW);
} else if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH) { } else if (c->gc_stats.in_use <= BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH) {
fp_term = dc->writeback_rate_fp_term_mid * fp_term = (int64_t)dc->writeback_rate_fp_term_mid *
(c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID); (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID);
} else { } else {
fp_term = dc->writeback_rate_fp_term_high * fp_term = (int64_t)dc->writeback_rate_fp_term_high *
(c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH); (c->gc_stats.in_use - BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH);
} }
fps = div_s64(dirty, dirty_buckets) * fp_term; fps = div_s64(dirty, dirty_buckets) * fp_term;
@ -416,7 +416,7 @@ static void read_dirty_endio(struct bio *bio)
struct dirty_io *io = w->private; struct dirty_io *io = w->private;
/* is_read = 1 */ /* is_read = 1 */
bch_count_io_errors(PTR_CACHE(io->dc->disk.c, &w->key, 0), bch_count_io_errors(io->dc->disk.c->cache,
bio->bi_status, 1, bio->bi_status, 1,
"reading dirty data from cache"); "reading dirty data from cache");
@ -510,8 +510,7 @@ static void read_dirty(struct cached_dev *dc)
dirty_init(w); dirty_init(w);
bio_set_op_attrs(&io->bio, REQ_OP_READ, 0); bio_set_op_attrs(&io->bio, REQ_OP_READ, 0);
io->bio.bi_iter.bi_sector = PTR_OFFSET(&w->key, 0); io->bio.bi_iter.bi_sector = PTR_OFFSET(&w->key, 0);
bio_set_dev(&io->bio, bio_set_dev(&io->bio, dc->disk.c->cache->bdev);
PTR_CACHE(dc->disk.c, &w->key, 0)->bdev);
io->bio.bi_end_io = read_dirty_endio; io->bio.bi_end_io = read_dirty_endio;
if (bch_bio_alloc_pages(&io->bio, GFP_KERNEL)) if (bch_bio_alloc_pages(&io->bio, GFP_KERNEL))


@@ -1722,6 +1722,8 @@ void md_bitmap_flush(struct mddev *mddev)
	md_bitmap_daemon_work(mddev);
	bitmap->daemon_lastrun -= sleep;
	md_bitmap_daemon_work(mddev);
+	if (mddev->bitmap_info.external)
+		md_super_wait(mddev);
	md_bitmap_update_sb(bitmap);
}


@ -734,78 +734,94 @@ void mddev_init(struct mddev *mddev)
} }
EXPORT_SYMBOL_GPL(mddev_init); EXPORT_SYMBOL_GPL(mddev_init);
static struct mddev *mddev_find_locked(dev_t unit)
{
struct mddev *mddev;
list_for_each_entry(mddev, &all_mddevs, all_mddevs)
if (mddev->unit == unit)
return mddev;
return NULL;
}
/* find an unused unit number */
static dev_t mddev_alloc_unit(void)
{
static int next_minor = 512;
int start = next_minor;
bool is_free = 0;
dev_t dev = 0;
while (!is_free) {
dev = MKDEV(MD_MAJOR, next_minor);
next_minor++;
if (next_minor > MINORMASK)
next_minor = 0;
if (next_minor == start)
return 0; /* Oh dear, all in use. */
is_free = !mddev_find_locked(dev);
}
return dev;
}
static struct mddev *mddev_find(dev_t unit) static struct mddev *mddev_find(dev_t unit)
{ {
struct mddev *mddev, *new = NULL; struct mddev *mddev;
if (MAJOR(unit) != MD_MAJOR)
unit &= ~((1 << MdpMinorShift) - 1);
spin_lock(&all_mddevs_lock);
mddev = mddev_find_locked(unit);
if (mddev)
mddev_get(mddev);
spin_unlock(&all_mddevs_lock);
return mddev;
}
static struct mddev *mddev_alloc(dev_t unit)
{
struct mddev *new;
int error;
if (unit && MAJOR(unit) != MD_MAJOR) if (unit && MAJOR(unit) != MD_MAJOR)
unit &= ~((1<<MdpMinorShift)-1); unit &= ~((1 << MdpMinorShift) - 1);
retry:
spin_lock(&all_mddevs_lock);
if (unit) {
list_for_each_entry(mddev, &all_mddevs, all_mddevs)
if (mddev->unit == unit) {
mddev_get(mddev);
spin_unlock(&all_mddevs_lock);
kfree(new);
return mddev;
}
if (new) {
list_add(&new->all_mddevs, &all_mddevs);
spin_unlock(&all_mddevs_lock);
new->hold_active = UNTIL_IOCTL;
return new;
}
} else if (new) {
/* find an unused unit number */
static int next_minor = 512;
int start = next_minor;
int is_free = 0;
int dev = 0;
while (!is_free) {
dev = MKDEV(MD_MAJOR, next_minor);
next_minor++;
if (next_minor > MINORMASK)
next_minor = 0;
if (next_minor == start) {
/* Oh dear, all in use. */
spin_unlock(&all_mddevs_lock);
kfree(new);
return NULL;
}
is_free = 1;
list_for_each_entry(mddev, &all_mddevs, all_mddevs)
if (mddev->unit == dev) {
is_free = 0;
break;
}
}
new->unit = dev;
new->md_minor = MINOR(dev);
new->hold_active = UNTIL_STOP;
list_add(&new->all_mddevs, &all_mddevs);
spin_unlock(&all_mddevs_lock);
return new;
}
spin_unlock(&all_mddevs_lock);
new = kzalloc(sizeof(*new), GFP_KERNEL); new = kzalloc(sizeof(*new), GFP_KERNEL);
if (!new) if (!new)
return NULL; return ERR_PTR(-ENOMEM);
new->unit = unit;
if (MAJOR(unit) == MD_MAJOR)
new->md_minor = MINOR(unit);
else
new->md_minor = MINOR(unit) >> MdpMinorShift;
mddev_init(new); mddev_init(new);
goto retry; spin_lock(&all_mddevs_lock);
if (unit) {
error = -EEXIST;
if (mddev_find_locked(unit))
goto out_free_new;
new->unit = unit;
if (MAJOR(unit) == MD_MAJOR)
new->md_minor = MINOR(unit);
else
new->md_minor = MINOR(unit) >> MdpMinorShift;
new->hold_active = UNTIL_IOCTL;
} else {
error = -ENODEV;
new->unit = mddev_alloc_unit();
if (!new->unit)
goto out_free_new;
new->md_minor = MINOR(new->unit);
new->hold_active = UNTIL_STOP;
}
list_add(&new->all_mddevs, &all_mddevs);
spin_unlock(&all_mddevs_lock);
return new;
out_free_new:
spin_unlock(&all_mddevs_lock);
kfree(new);
return ERR_PTR(error);
} }
static struct attribute_group md_redundancy_group; static struct attribute_group md_redundancy_group;
@@ -5644,29 +5660,29 @@ static int md_alloc(dev_t dev, char *name)
	 * writing to /sys/module/md_mod/parameters/new_array.
	 */
	static DEFINE_MUTEX(disks_mutex);
-	struct mddev *mddev = mddev_find(dev);
+	struct mddev *mddev;
	struct gendisk *disk;
	int partitioned;
	int shift;
	int unit;
	int error;
-	if (!mddev)
-		return -ENODEV;
-	partitioned = (MAJOR(mddev->unit) != MD_MAJOR);
-	shift = partitioned ? MdpMinorShift : 0;
-	unit = MINOR(mddev->unit) >> shift;
-	/* wait for any previous instance of this device to be
-	 * completely removed (mddev_delayed_delete).
+	/*
+	 * Wait for any previous instance of this device to be completely
+	 * removed (mddev_delayed_delete).
	 */
	flush_workqueue(md_misc_wq);
	mutex_lock(&disks_mutex);
-	error = -EEXIST;
-	if (mddev->gendisk)
-		goto abort;
+	mddev = mddev_alloc(dev);
+	if (IS_ERR(mddev)) {
+		mutex_unlock(&disks_mutex);
+		return PTR_ERR(mddev);
+	}
+	partitioned = (MAJOR(mddev->unit) != MD_MAJOR);
+	shift = partitioned ? MdpMinorShift : 0;
+	unit = MINOR(mddev->unit) >> shift;
	if (name && !dev) {
/* Need to ensure that 'name' is not a duplicate. /* Need to ensure that 'name' is not a duplicate.
@ -5678,6 +5694,7 @@ static int md_alloc(dev_t dev, char *name)
if (mddev2->gendisk && if (mddev2->gendisk &&
strcmp(mddev2->gendisk->disk_name, name) == 0) { strcmp(mddev2->gendisk->disk_name, name) == 0) {
spin_unlock(&all_mddevs_lock); spin_unlock(&all_mddevs_lock);
error = -EEXIST;
goto abort; goto abort;
} }
spin_unlock(&all_mddevs_lock); spin_unlock(&all_mddevs_lock);
@ -6524,11 +6541,9 @@ static void autorun_devices(int part)
md_probe(dev); md_probe(dev);
mddev = mddev_find(dev); mddev = mddev_find(dev);
if (!mddev || !mddev->gendisk) { if (!mddev)
if (mddev)
mddev_put(mddev);
break; break;
}
if (mddev_lock(mddev)) if (mddev_lock(mddev))
pr_warn("md: %s locked, cannot run\n", mdname(mddev)); pr_warn("md: %s locked, cannot run\n", mdname(mddev));
else if (mddev->raid_disks || mddev->major_version else if (mddev->raid_disks || mddev->major_version
@ -7821,8 +7836,7 @@ static int md_open(struct block_device *bdev, fmode_t mode)
/* Wait until bdev->bd_disk is definitely gone */ /* Wait until bdev->bd_disk is definitely gone */
if (work_pending(&mddev->del_work)) if (work_pending(&mddev->del_work))
flush_workqueue(md_misc_wq); flush_workqueue(md_misc_wq);
/* Then retry the open from the top */ return -EBUSY;
return -ERESTARTSYS;
} }
BUG_ON(mddev != bdev->bd_disk->private_data); BUG_ON(mddev != bdev->bd_disk->private_data);
@ -8153,7 +8167,11 @@ static void *md_seq_start(struct seq_file *seq, loff_t *pos)
loff_t l = *pos; loff_t l = *pos;
struct mddev *mddev; struct mddev *mddev;
if (l >= 0x10000) if (l == 0x10000) {
++*pos;
return (void *)2;
}
if (l > 0x10000)
return NULL; return NULL;
if (!l--) if (!l--)
/* header */ /* header */
@ -8575,6 +8593,26 @@ void md_write_end(struct mddev *mddev)
EXPORT_SYMBOL(md_write_end); EXPORT_SYMBOL(md_write_end);
/* This is used by raid0 and raid10 */
void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
struct bio *bio, sector_t start, sector_t size)
{
struct bio *discard_bio = NULL;
if (__blkdev_issue_discard(rdev->bdev, start, size, GFP_NOIO, 0,
&discard_bio) || !discard_bio)
return;
bio_chain(discard_bio, bio);
bio_clone_blkg_association(discard_bio, bio);
if (mddev->gendisk)
trace_block_bio_remap(discard_bio,
disk_devt(mddev->gendisk),
bio->bi_iter.bi_sector);
submit_bio_noacct(discard_bio);
}
EXPORT_SYMBOL_GPL(md_submit_discard_bio);
/* md_allow_write(mddev) /* md_allow_write(mddev)
* Calling this ensures that the array is marked 'active' so that writes * Calling this ensures that the array is marked 'active' so that writes
* may proceed without blocking. It is important to call this before * may proceed without blocking. It is important to call this before
@ -9251,11 +9289,11 @@ void md_check_recovery(struct mddev *mddev)
} }
if (mddev_is_clustered(mddev)) { if (mddev_is_clustered(mddev)) {
struct md_rdev *rdev; struct md_rdev *rdev, *tmp;
/* kick the device if another node issued a /* kick the device if another node issued a
* remove disk. * remove disk.
*/ */
rdev_for_each(rdev, mddev) { rdev_for_each_safe(rdev, tmp, mddev) {
if (test_and_clear_bit(ClusterRemove, &rdev->flags) && if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
rdev->raid_disk < 0) rdev->raid_disk < 0)
md_kick_rdev_from_array(rdev); md_kick_rdev_from_array(rdev);
@ -9569,7 +9607,7 @@ err_wq:
static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev) static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
{ {
struct mdp_superblock_1 *sb = page_address(rdev->sb_page); struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
struct md_rdev *rdev2; struct md_rdev *rdev2, *tmp;
int role, ret; int role, ret;
char b[BDEVNAME_SIZE]; char b[BDEVNAME_SIZE];
@ -9586,7 +9624,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
} }
/* Check for change of roles in the active devices */ /* Check for change of roles in the active devices */
rdev_for_each(rdev2, mddev) { rdev_for_each_safe(rdev2, tmp, mddev) {
if (test_bit(Faulty, &rdev2->flags)) if (test_bit(Faulty, &rdev2->flags))
continue; continue;


@@ -713,6 +713,8 @@ extern void md_write_end(struct mddev *mddev);
extern void md_done_sync(struct mddev *mddev, int blocks, int ok);
extern void md_error(struct mddev *mddev, struct md_rdev *rdev);
extern void md_finish_reshape(struct mddev *mddev);
+void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+			struct bio *bio, sector_t start, sector_t size);
extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio);
extern void md_super_write(struct mddev *mddev, struct md_rdev *rdev,


@ -477,7 +477,6 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
for (disk = 0; disk < zone->nb_dev; disk++) { for (disk = 0; disk < zone->nb_dev; disk++) {
sector_t dev_start, dev_end; sector_t dev_start, dev_end;
struct bio *discard_bio = NULL;
struct md_rdev *rdev; struct md_rdev *rdev;
if (disk < start_disk_index) if (disk < start_disk_index)
@ -500,18 +499,9 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
rdev = conf->devlist[(zone - conf->strip_zone) * rdev = conf->devlist[(zone - conf->strip_zone) *
conf->strip_zone[0].nb_dev + disk]; conf->strip_zone[0].nb_dev + disk];
if (__blkdev_issue_discard(rdev->bdev, md_submit_discard_bio(mddev, rdev, bio,
dev_start + zone->dev_start + rdev->data_offset, dev_start + zone->dev_start + rdev->data_offset,
dev_end - dev_start, GFP_NOIO, 0, &discard_bio) || dev_end - dev_start);
!discard_bio)
continue;
bio_chain(discard_bio, bio);
bio_clone_blkg_association(discard_bio, bio);
if (mddev->gendisk)
trace_block_bio_remap(discard_bio,
disk_devt(mddev->gendisk),
bio->bi_iter.bi_sector);
submit_bio_noacct(discard_bio);
} }
bio_endio(bio); bio_endio(bio);
} }


@@ -478,6 +478,8 @@ static void raid1_end_write_request(struct bio *bio)
		if (!test_bit(Faulty, &rdev->flags))
			set_bit(R1BIO_WriteError, &r1_bio->state);
		else {
+			/* Fail the request */
+			set_bit(R1BIO_Degraded, &r1_bio->state);
			/* Finished with this branch */
			r1_bio->bios[mirror] = NULL;
			to_put = bio;


@ -91,7 +91,7 @@ static inline struct r10bio *get_resync_r10bio(struct bio *bio)
static void * r10bio_pool_alloc(gfp_t gfp_flags, void *data) static void * r10bio_pool_alloc(gfp_t gfp_flags, void *data)
{ {
struct r10conf *conf = data; struct r10conf *conf = data;
int size = offsetof(struct r10bio, devs[conf->copies]); int size = offsetof(struct r10bio, devs[conf->geo.raid_disks]);
/* allocate a r10bio with room for raid_disks entries in the /* allocate a r10bio with room for raid_disks entries in the
* bios array */ * bios array */
@ -238,7 +238,7 @@ static void put_all_bios(struct r10conf *conf, struct r10bio *r10_bio)
{ {
int i; int i;
for (i = 0; i < conf->copies; i++) { for (i = 0; i < conf->geo.raid_disks; i++) {
struct bio **bio = & r10_bio->devs[i].bio; struct bio **bio = & r10_bio->devs[i].bio;
if (!BIO_SPECIAL(*bio)) if (!BIO_SPECIAL(*bio))
bio_put(*bio); bio_put(*bio);
@ -327,7 +327,7 @@ static int find_bio_disk(struct r10conf *conf, struct r10bio *r10_bio,
int slot; int slot;
int repl = 0; int repl = 0;
for (slot = 0; slot < conf->copies; slot++) { for (slot = 0; slot < conf->geo.raid_disks; slot++) {
if (r10_bio->devs[slot].bio == bio) if (r10_bio->devs[slot].bio == bio)
break; break;
if (r10_bio->devs[slot].repl_bio == bio) { if (r10_bio->devs[slot].repl_bio == bio) {
@ -336,7 +336,6 @@ static int find_bio_disk(struct r10conf *conf, struct r10bio *r10_bio,
} }
} }
BUG_ON(slot == conf->copies);
update_head_pos(slot, r10_bio); update_head_pos(slot, r10_bio);
if (slotp) if (slotp)
@ -1274,12 +1273,77 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
} }
} }
static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
{
int i;
struct r10conf *conf = mddev->private;
struct md_rdev *blocked_rdev;
retry_wait:
blocked_rdev = NULL;
rcu_read_lock();
for (i = 0; i < conf->copies; i++) {
struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
struct md_rdev *rrdev = rcu_dereference(
conf->mirrors[i].replacement);
if (rdev == rrdev)
rrdev = NULL;
if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
atomic_inc(&rdev->nr_pending);
blocked_rdev = rdev;
break;
}
if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
atomic_inc(&rrdev->nr_pending);
blocked_rdev = rrdev;
break;
}
if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
sector_t first_bad;
sector_t dev_sector = r10_bio->devs[i].addr;
int bad_sectors;
int is_bad;
/*
* A discard request doesn't care about the write result,
* so it doesn't need to wait for a blocked disk here.
*/
if (!r10_bio->sectors)
continue;
is_bad = is_badblock(rdev, dev_sector, r10_bio->sectors,
&first_bad, &bad_sectors);
if (is_bad < 0) {
/*
* Mustn't write here until the bad block
* is acknowledged
*/
atomic_inc(&rdev->nr_pending);
set_bit(BlockedBadBlocks, &rdev->flags);
blocked_rdev = rdev;
break;
}
}
}
rcu_read_unlock();
if (unlikely(blocked_rdev)) {
/* Have to wait for this device to get unblocked, then retry */
allow_barrier(conf);
raid10_log(conf->mddev, "%s wait rdev %d blocked",
__func__, blocked_rdev->raid_disk);
md_wait_for_blocked_rdev(blocked_rdev, mddev);
wait_barrier(conf);
goto retry_wait;
}
}
static void raid10_write_request(struct mddev *mddev, struct bio *bio, static void raid10_write_request(struct mddev *mddev, struct bio *bio,
struct r10bio *r10_bio) struct r10bio *r10_bio)
{ {
struct r10conf *conf = mddev->private; struct r10conf *conf = mddev->private;
int i; int i;
struct md_rdev *blocked_rdev;
sector_t sectors; sector_t sectors;
int max_sectors; int max_sectors;
@ -1337,8 +1401,9 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
r10_bio->read_slot = -1; /* make sure repl_bio gets freed */ r10_bio->read_slot = -1; /* make sure repl_bio gets freed */
raid10_find_phys(conf, r10_bio); raid10_find_phys(conf, r10_bio);
retry_write:
blocked_rdev = NULL; wait_blocked_dev(mddev, r10_bio);
rcu_read_lock(); rcu_read_lock();
max_sectors = r10_bio->sectors; max_sectors = r10_bio->sectors;
@ -1349,16 +1414,6 @@ retry_write:
conf->mirrors[d].replacement); conf->mirrors[d].replacement);
if (rdev == rrdev) if (rdev == rrdev)
rrdev = NULL; rrdev = NULL;
if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
atomic_inc(&rdev->nr_pending);
blocked_rdev = rdev;
break;
}
if (rrdev && unlikely(test_bit(Blocked, &rrdev->flags))) {
atomic_inc(&rrdev->nr_pending);
blocked_rdev = rrdev;
break;
}
if (rdev && (test_bit(Faulty, &rdev->flags))) if (rdev && (test_bit(Faulty, &rdev->flags)))
rdev = NULL; rdev = NULL;
if (rrdev && (test_bit(Faulty, &rrdev->flags))) if (rrdev && (test_bit(Faulty, &rrdev->flags)))
@ -1379,15 +1434,6 @@ retry_write:
is_bad = is_badblock(rdev, dev_sector, max_sectors, is_bad = is_badblock(rdev, dev_sector, max_sectors,
&first_bad, &bad_sectors); &first_bad, &bad_sectors);
if (is_bad < 0) {
/* Mustn't write here until the bad block
* is acknowledged
*/
atomic_inc(&rdev->nr_pending);
set_bit(BlockedBadBlocks, &rdev->flags);
blocked_rdev = rdev;
break;
}
if (is_bad && first_bad <= dev_sector) { if (is_bad && first_bad <= dev_sector) {
/* Cannot write here at all */ /* Cannot write here at all */
bad_sectors -= (dev_sector - first_bad); bad_sectors -= (dev_sector - first_bad);
@ -1423,35 +1469,6 @@ retry_write:
} }
rcu_read_unlock(); rcu_read_unlock();
if (unlikely(blocked_rdev)) {
/* Have to wait for this device to get unblocked, then retry */
int j;
int d;
for (j = 0; j < i; j++) {
if (r10_bio->devs[j].bio) {
d = r10_bio->devs[j].devnum;
rdev_dec_pending(conf->mirrors[d].rdev, mddev);
}
if (r10_bio->devs[j].repl_bio) {
struct md_rdev *rdev;
d = r10_bio->devs[j].devnum;
rdev = conf->mirrors[d].replacement;
if (!rdev) {
/* Race with remove_disk */
smp_mb();
rdev = conf->mirrors[d].rdev;
}
rdev_dec_pending(rdev, mddev);
}
}
allow_barrier(conf);
raid10_log(conf->mddev, "wait rdev %d blocked", blocked_rdev->raid_disk);
md_wait_for_blocked_rdev(blocked_rdev, mddev);
wait_barrier(conf);
goto retry_write;
}
if (max_sectors < r10_bio->sectors) if (max_sectors < r10_bio->sectors)
r10_bio->sectors = max_sectors; r10_bio->sectors = max_sectors;
@ -1492,7 +1509,8 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
r10_bio->sector = bio->bi_iter.bi_sector; r10_bio->sector = bio->bi_iter.bi_sector;
r10_bio->state = 0; r10_bio->state = 0;
r10_bio->read_slot = -1; r10_bio->read_slot = -1;
memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * conf->copies); memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) *
conf->geo.raid_disks);
if (bio_data_dir(bio) == READ) if (bio_data_dir(bio) == READ)
raid10_read_request(mddev, bio, r10_bio); raid10_read_request(mddev, bio, r10_bio);
@ -1500,6 +1518,304 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
raid10_write_request(mddev, bio, r10_bio); raid10_write_request(mddev, bio, r10_bio);
} }
static void raid_end_discard_bio(struct r10bio *r10bio)
{
struct r10conf *conf = r10bio->mddev->private;
struct r10bio *first_r10bio;
while (atomic_dec_and_test(&r10bio->remaining)) {
allow_barrier(conf);
if (!test_bit(R10BIO_Discard, &r10bio->state)) {
first_r10bio = (struct r10bio *)r10bio->master_bio;
free_r10bio(r10bio);
r10bio = first_r10bio;
} else {
md_write_end(r10bio->mddev);
bio_endio(r10bio->master_bio);
free_r10bio(r10bio);
break;
}
}
}
static void raid10_end_discard_request(struct bio *bio)
{
struct r10bio *r10_bio = bio->bi_private;
struct r10conf *conf = r10_bio->mddev->private;
struct md_rdev *rdev = NULL;
int dev;
int slot, repl;
/*
* We don't care the return value of discard bio
*/
if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
set_bit(R10BIO_Uptodate, &r10_bio->state);
dev = find_bio_disk(conf, r10_bio, bio, &slot, &repl);
if (repl)
rdev = conf->mirrors[dev].replacement;
if (!rdev) {
/*
* raid10_remove_disk uses smp_mb to make sure rdev is set to
* replacement before setting replacement to NULL. It can read
* rdev first without barrier protection even if replacement is NULL
*/
smp_rmb();
rdev = conf->mirrors[dev].rdev;
}
raid_end_discard_bio(r10_bio);
rdev_dec_pending(rdev, conf->mddev);
}
/*
* There are some limitations to handling a discard bio
* 1st, the discard size must be bigger than stripe_size*2.
* 2nd, if the discard bio spans reshape progress, we use the old way to
* handle the discard bio
*/
static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
{
struct r10conf *conf = mddev->private;
struct geom *geo = &conf->geo;
int far_copies = geo->far_copies;
bool first_copy = true;
struct r10bio *r10_bio, *first_r10bio;
struct bio *split;
int disk;
sector_t chunk;
unsigned int stripe_size;
unsigned int stripe_data_disks;
sector_t split_size;
sector_t bio_start, bio_end;
sector_t first_stripe_index, last_stripe_index;
sector_t start_disk_offset;
unsigned int start_disk_index;
sector_t end_disk_offset;
unsigned int end_disk_index;
unsigned int remainder;
if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
return -EAGAIN;
wait_barrier(conf);
/*
* Check reshape again to avoid reshape happens after checking
* MD_RECOVERY_RESHAPE and before wait_barrier
*/
if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
goto out;
if (geo->near_copies)
stripe_data_disks = geo->raid_disks / geo->near_copies +
geo->raid_disks % geo->near_copies;
else
stripe_data_disks = geo->raid_disks;
stripe_size = stripe_data_disks << geo->chunk_shift;
bio_start = bio->bi_iter.bi_sector;
bio_end = bio_end_sector(bio);
/*
* One discard bio may be smaller than a stripe size, or it may cross one
* stripe while the discard region is larger than one stripe size. For far
* offset layout, if the discard region is not aligned with stripe
* size, there is a hole when we submit the discard bio to a member disk.
* For simplicity, we only handle discard bios whose discard region
* is bigger than stripe_size * 2
*/
if (bio_sectors(bio) < stripe_size*2)
goto out;
/*
* Keep bio aligned with strip size.
*/
div_u64_rem(bio_start, stripe_size, &remainder);
if (remainder) {
split_size = stripe_size - remainder;
split = bio_split(bio, split_size, GFP_NOIO, &conf->bio_split);
bio_chain(split, bio);
allow_barrier(conf);
/* Resend the first split part */
submit_bio_noacct(split);
wait_barrier(conf);
}
div_u64_rem(bio_end, stripe_size, &remainder);
if (remainder) {
split_size = bio_sectors(bio) - remainder;
split = bio_split(bio, split_size, GFP_NOIO, &conf->bio_split);
bio_chain(split, bio);
allow_barrier(conf);
/* Resend the second split part */
submit_bio_noacct(bio);
bio = split;
wait_barrier(conf);
}
bio_start = bio->bi_iter.bi_sector;
bio_end = bio_end_sector(bio);
/*
* Raid10 uses a chunk as the unit to store data. It's similar to raid0.
* One stripe contains the chunks from all member disks (one chunk from
* one disk at the same HBA address). For layout details, see 'man md 4'
*/
chunk = bio_start >> geo->chunk_shift;
chunk *= geo->near_copies;
first_stripe_index = chunk;
start_disk_index = sector_div(first_stripe_index, geo->raid_disks);
if (geo->far_offset)
first_stripe_index *= geo->far_copies;
start_disk_offset = (bio_start & geo->chunk_mask) +
(first_stripe_index << geo->chunk_shift);
chunk = bio_end >> geo->chunk_shift;
chunk *= geo->near_copies;
last_stripe_index = chunk;
end_disk_index = sector_div(last_stripe_index, geo->raid_disks);
if (geo->far_offset)
last_stripe_index *= geo->far_copies;
end_disk_offset = (bio_end & geo->chunk_mask) +
(last_stripe_index << geo->chunk_shift);
retry_discard:
r10_bio = mempool_alloc(&conf->r10bio_pool, GFP_NOIO);
r10_bio->mddev = mddev;
r10_bio->state = 0;
r10_bio->sectors = 0;
memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
wait_blocked_dev(mddev, r10_bio);
/*
* For far layout it needs more than one r10bio to cover all regions.
* Inspired by raid10_sync_request, we can use the first r10bio->master_bio
* to record the discard bio. Other r10bio->master_bio record the first
* r10bio. The first r10bio is only released after all other r10bios finish.
* The discard bio returns only when the first r10bio finishes
*/
if (first_copy) {
r10_bio->master_bio = bio;
set_bit(R10BIO_Discard, &r10_bio->state);
first_copy = false;
first_r10bio = r10_bio;
} else
r10_bio->master_bio = (struct bio *)first_r10bio;
rcu_read_lock();
for (disk = 0; disk < geo->raid_disks; disk++) {
struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
struct md_rdev *rrdev = rcu_dereference(
conf->mirrors[disk].replacement);
r10_bio->devs[disk].bio = NULL;
r10_bio->devs[disk].repl_bio = NULL;
if (rdev && (test_bit(Faulty, &rdev->flags)))
rdev = NULL;
if (rrdev && (test_bit(Faulty, &rrdev->flags)))
rrdev = NULL;
if (!rdev && !rrdev)
continue;
if (rdev) {
r10_bio->devs[disk].bio = bio;
atomic_inc(&rdev->nr_pending);
}
if (rrdev) {
r10_bio->devs[disk].repl_bio = bio;
atomic_inc(&rrdev->nr_pending);
}
}
rcu_read_unlock();
atomic_set(&r10_bio->remaining, 1);
for (disk = 0; disk < geo->raid_disks; disk++) {
sector_t dev_start, dev_end;
struct bio *mbio, *rbio = NULL;
struct md_rdev *rdev = rcu_dereference(conf->mirrors[disk].rdev);
struct md_rdev *rrdev = rcu_dereference(
conf->mirrors[disk].replacement);
/*
* Now start to calculate the start and end address for each disk.
* The space between dev_start and dev_end is the discard region.
*
* For dev_start, it needs to consider three conditions:
* 1st, the disk is before start_disk, you can imagine the disk in
* the next stripe. So the dev_start is the start address of next
* stripe.
* 2nd, the disk is after start_disk, which means the disk is in the
* same stripe as the first disk
* 3rd, the first disk itself, we can use start_disk_offset directly
*/
if (disk < start_disk_index)
dev_start = (first_stripe_index + 1) * mddev->chunk_sectors;
else if (disk > start_disk_index)
dev_start = first_stripe_index * mddev->chunk_sectors;
else
dev_start = start_disk_offset;
if (disk < end_disk_index)
dev_end = (last_stripe_index + 1) * mddev->chunk_sectors;
else if (disk > end_disk_index)
dev_end = last_stripe_index * mddev->chunk_sectors;
else
dev_end = end_disk_offset;
/*
* It only handles discard bio which size is >= stripe size, so
* dev_end > dev_start all the time
*/
if (r10_bio->devs[disk].bio) {
mbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
mbio->bi_end_io = raid10_end_discard_request;
mbio->bi_private = r10_bio;
r10_bio->devs[disk].bio = mbio;
r10_bio->devs[disk].devnum = disk;
atomic_inc(&r10_bio->remaining);
md_submit_discard_bio(mddev, rdev, mbio,
dev_start + choose_data_offset(r10_bio, rdev),
dev_end - dev_start);
bio_endio(mbio);
}
if (r10_bio->devs[disk].repl_bio) {
rbio = bio_clone_fast(bio, GFP_NOIO, &mddev->bio_set);
rbio->bi_end_io = raid10_end_discard_request;
rbio->bi_private = r10_bio;
r10_bio->devs[disk].repl_bio = rbio;
r10_bio->devs[disk].devnum = disk;
atomic_inc(&r10_bio->remaining);
md_submit_discard_bio(mddev, rrdev, rbio,
dev_start + choose_data_offset(r10_bio, rrdev),
dev_end - dev_start);
bio_endio(rbio);
}
}
if (!geo->far_offset && --far_copies) {
first_stripe_index += geo->stride >> geo->chunk_shift;
start_disk_offset += geo->stride;
last_stripe_index += geo->stride >> geo->chunk_shift;
end_disk_offset += geo->stride;
atomic_inc(&first_r10bio->remaining);
raid_end_discard_bio(r10_bio);
wait_barrier(conf);
goto retry_discard;
}
raid_end_discard_bio(r10_bio);
return 0;
out:
allow_barrier(conf);
return -EAGAIN;
}
static bool raid10_make_request(struct mddev *mddev, struct bio *bio) static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
{ {
struct r10conf *conf = mddev->private; struct r10conf *conf = mddev->private;
@ -1514,6 +1830,10 @@ static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
if (!md_write_start(mddev, bio)) if (!md_write_start(mddev, bio))
return false; return false;
if (unlikely(bio_op(bio) == REQ_OP_DISCARD))
if (!raid10_handle_discard(mddev, bio))
return true;
/* /*
* If this request crosses a chunk boundary, we need to split * If this request crosses a chunk boundary, we need to split
* it. * it.
@ -3753,7 +4073,7 @@ static int raid10_run(struct mddev *mddev)
if (mddev->queue) { if (mddev->queue) {
blk_queue_max_discard_sectors(mddev->queue, blk_queue_max_discard_sectors(mddev->queue,
mddev->chunk_sectors); UINT_MAX);
blk_queue_max_write_same_sectors(mddev->queue, 0); blk_queue_max_write_same_sectors(mddev->queue, 0);
blk_queue_max_write_zeroes_sectors(mddev->queue, 0); blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9); blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);


@@ -179,5 +179,6 @@ enum r10bio_state {
	R10BIO_Previous,
	/* failfast devices did receive failfast requests. */
	R10BIO_FailFast,
+	R10BIO_Discard,
};
#endif


@@ -9,7 +9,7 @@ obj-$(CONFIG_NVME_RDMA) += nvme-rdma.o
obj-$(CONFIG_NVME_FC)			+= nvme-fc.o
obj-$(CONFIG_NVME_TCP)			+= nvme-tcp.o
-nvme-core-y				:= core.o
+nvme-core-y				:= core.o ioctl.o
nvme-core-$(CONFIG_TRACING)		+= trace.o
nvme-core-$(CONFIG_NVME_MULTIPATH)	+= multipath.o
nvme-core-$(CONFIG_NVM)			+= lightnvm.o

File diff suppressed because it is too large.


@@ -379,10 +379,8 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
	/*
	 * Set keep-alive timeout in seconds granularity (ms * 1000)
-	 * and add a grace period for controller kato enforcement
	 */
-	cmd.connect.kato = ctrl->kato ?
-		cpu_to_le32((ctrl->kato + NVME_KATO_GRACE) * 1000) : 0;
+	cmd.connect.kato = cpu_to_le32(ctrl->kato * 1000);
	if (ctrl->opts->disable_sqflow)
		cmd.connect.cattr |= NVME_CONNECT_DISABLE_SQFLOW;


@ -1708,7 +1708,7 @@ restart:
* *
* If this routine returns error, the LLDD should abort the exchange. * If this routine returns error, the LLDD should abort the exchange.
* *
* @remoteport: pointer to the (registered) remote port that the LS * @portptr: pointer to the (registered) remote port that the LS
* was received from. The remoteport is associated with * was received from. The remoteport is associated with
* a specific localport. * a specific localport.
* @lsrsp: pointer to a nvmefc_ls_rsp response structure to be * @lsrsp: pointer to a nvmefc_ls_rsp response structure to be
@ -2128,6 +2128,7 @@ nvme_fc_init_request(struct blk_mq_tag_set *set, struct request *rq,
op->op.fcp_req.first_sgl = op->sgl; op->op.fcp_req.first_sgl = op->sgl;
op->op.fcp_req.private = &op->priv[0]; op->op.fcp_req.private = &op->priv[0];
nvme_req(rq)->ctrl = &ctrl->ctrl; nvme_req(rq)->ctrl = &ctrl->ctrl;
nvme_req(rq)->cmd = &op->op.cmd_iu.sqe;
return res; return res;
} }
@ -2759,8 +2760,6 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
struct nvme_fc_ctrl *ctrl = queue->ctrl; struct nvme_fc_ctrl *ctrl = queue->ctrl;
struct request *rq = bd->rq; struct request *rq = bd->rq;
struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq); struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
struct nvme_command *sqe = &cmdiu->sqe;
enum nvmefc_fcp_datadir io_dir; enum nvmefc_fcp_datadir io_dir;
bool queue_ready = test_bit(NVME_FC_Q_LIVE, &queue->flags); bool queue_ready = test_bit(NVME_FC_Q_LIVE, &queue->flags);
u32 data_len; u32 data_len;
@ -2770,7 +2769,7 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
!nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready)) !nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
return nvmf_fail_nonready_command(&queue->ctrl->ctrl, rq); return nvmf_fail_nonready_command(&queue->ctrl->ctrl, rq);
ret = nvme_setup_cmd(ns, rq, sqe); ret = nvme_setup_cmd(ns, rq);
if (ret) if (ret)
return ret; return ret;
@ -3086,7 +3085,7 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
blk_mq_unquiesce_queue(ctrl->ctrl.admin_q); blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
ret = nvme_init_identify(&ctrl->ctrl); ret = nvme_init_ctrl_finish(&ctrl->ctrl);
if (ret || test_bit(ASSOC_FAILED, &ctrl->flags)) if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
goto out_disconnect_admin_queue; goto out_disconnect_admin_queue;
@ -3100,6 +3099,11 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
} }
/* FC-NVME supports normal SGL Data Block Descriptors */ /* FC-NVME supports normal SGL Data Block Descriptors */
if (!(ctrl->ctrl.sgls & ((1 << 0) | (1 << 1)))) {
dev_err(ctrl->ctrl.device,
"Mandatory sgls are not supported!\n");
goto out_disconnect_admin_queue;
}
if (opts->queue_size > ctrl->ctrl.maxcmd) { if (opts->queue_size > ctrl->ctrl.maxcmd) {
/* warn if maxcmd is lower than queue_size */ /* warn if maxcmd is lower than queue_size */

drivers/nvme/host/ioctl.c (new file, 481 lines)

@ -0,0 +1,481 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2011-2014, Intel Corporation.
* Copyright (c) 2017-2021 Christoph Hellwig.
*/
#include <linux/ptrace.h> /* for force_successful_syscall_return */
#include <linux/nvme_ioctl.h>
#include "nvme.h"
/*
* Convert integer values from ioctl structures to user pointers, silently
* ignoring the upper bits in the compat case to match behaviour of 32-bit
* kernels.
*/
static void __user *nvme_to_user_ptr(uintptr_t ptrval)
{
if (in_compat_syscall())
ptrval = (compat_uptr_t)ptrval;
return (void __user *)ptrval;
}
static void *nvme_add_user_metadata(struct bio *bio, void __user *ubuf,
unsigned len, u32 seed, bool write)
{
struct bio_integrity_payload *bip;
int ret = -ENOMEM;
void *buf;
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
goto out;
ret = -EFAULT;
if (write && copy_from_user(buf, ubuf, len))
goto out_free_meta;
bip = bio_integrity_alloc(bio, GFP_KERNEL, 1);
if (IS_ERR(bip)) {
ret = PTR_ERR(bip);
goto out_free_meta;
}
bip->bip_iter.bi_size = len;
bip->bip_iter.bi_sector = seed;
ret = bio_integrity_add_page(bio, virt_to_page(buf), len,
offset_in_page(buf));
if (ret == len)
return buf;
ret = -ENOMEM;
out_free_meta:
kfree(buf);
out:
return ERR_PTR(ret);
}
static int nvme_submit_user_cmd(struct request_queue *q,
struct nvme_command *cmd, void __user *ubuffer,
unsigned bufflen, void __user *meta_buffer, unsigned meta_len,
u32 meta_seed, u64 *result, unsigned timeout)
{
bool write = nvme_is_write(cmd);
struct nvme_ns *ns = q->queuedata;
struct block_device *bdev = ns ? ns->disk->part0 : NULL;
struct request *req;
struct bio *bio = NULL;
void *meta = NULL;
int ret;
req = nvme_alloc_request(q, cmd, 0);
if (IS_ERR(req))
return PTR_ERR(req);
if (timeout)
req->timeout = timeout;
nvme_req(req)->flags |= NVME_REQ_USERCMD;
if (ubuffer && bufflen) {
ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen,
GFP_KERNEL);
if (ret)
goto out;
bio = req->bio;
if (bdev)
bio_set_dev(bio, bdev);
if (bdev && meta_buffer && meta_len) {
meta = nvme_add_user_metadata(bio, meta_buffer, meta_len,
meta_seed, write);
if (IS_ERR(meta)) {
ret = PTR_ERR(meta);
goto out_unmap;
}
req->cmd_flags |= REQ_INTEGRITY;
}
}
nvme_execute_passthru_rq(req);
if (nvme_req(req)->flags & NVME_REQ_CANCELLED)
ret = -EINTR;
else
ret = nvme_req(req)->status;
if (result)
*result = le64_to_cpu(nvme_req(req)->result.u64);
if (meta && !ret && !write) {
if (copy_to_user(meta_buffer, meta, meta_len))
ret = -EFAULT;
}
kfree(meta);
out_unmap:
if (bio)
blk_rq_unmap_user(bio);
out:
blk_mq_free_request(req);
return ret;
}
static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
{
struct nvme_user_io io;
struct nvme_command c;
unsigned length, meta_len;
void __user *metadata;
if (copy_from_user(&io, uio, sizeof(io)))
return -EFAULT;
if (io.flags)
return -EINVAL;
switch (io.opcode) {
case nvme_cmd_write:
case nvme_cmd_read:
case nvme_cmd_compare:
break;
default:
return -EINVAL;
}
length = (io.nblocks + 1) << ns->lba_shift;
if ((io.control & NVME_RW_PRINFO_PRACT) &&
ns->ms == sizeof(struct t10_pi_tuple)) {
/*
* Protection information is stripped/inserted by the
* controller.
*/
if (nvme_to_user_ptr(io.metadata))
return -EINVAL;
meta_len = 0;
metadata = NULL;
} else {
meta_len = (io.nblocks + 1) * ns->ms;
metadata = nvme_to_user_ptr(io.metadata);
}
if (ns->features & NVME_NS_EXT_LBAS) {
length += meta_len;
meta_len = 0;
} else if (meta_len) {
if ((io.metadata & 3) || !io.metadata)
return -EINVAL;
}
memset(&c, 0, sizeof(c));
c.rw.opcode = io.opcode;
c.rw.flags = io.flags;
c.rw.nsid = cpu_to_le32(ns->head->ns_id);
c.rw.slba = cpu_to_le64(io.slba);
c.rw.length = cpu_to_le16(io.nblocks);
c.rw.control = cpu_to_le16(io.control);
c.rw.dsmgmt = cpu_to_le32(io.dsmgmt);
c.rw.reftag = cpu_to_le32(io.reftag);
c.rw.apptag = cpu_to_le16(io.apptag);
c.rw.appmask = cpu_to_le16(io.appmask);
return nvme_submit_user_cmd(ns->queue, &c,
nvme_to_user_ptr(io.addr), length,
metadata, meta_len, lower_32_bits(io.slba), NULL, 0);
}
static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
struct nvme_passthru_cmd __user *ucmd)
{
struct nvme_passthru_cmd cmd;
struct nvme_command c;
unsigned timeout = 0;
u64 result;
int status;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (copy_from_user(&cmd, ucmd, sizeof(cmd)))
return -EFAULT;
if (cmd.flags)
return -EINVAL;
if (ns && cmd.nsid != ns->head->ns_id) {
dev_err(ctrl->device,
"%s: nsid (%u) in cmd does not match nsid (%u) of namespace\n",
current->comm, cmd.nsid, ns->head->ns_id);
return -EINVAL;
}
memset(&c, 0, sizeof(c));
c.common.opcode = cmd.opcode;
c.common.flags = cmd.flags;
c.common.nsid = cpu_to_le32(cmd.nsid);
c.common.cdw2[0] = cpu_to_le32(cmd.cdw2);
c.common.cdw2[1] = cpu_to_le32(cmd.cdw3);
c.common.cdw10 = cpu_to_le32(cmd.cdw10);
c.common.cdw11 = cpu_to_le32(cmd.cdw11);
c.common.cdw12 = cpu_to_le32(cmd.cdw12);
c.common.cdw13 = cpu_to_le32(cmd.cdw13);
c.common.cdw14 = cpu_to_le32(cmd.cdw14);
c.common.cdw15 = cpu_to_le32(cmd.cdw15);
if (cmd.timeout_ms)
timeout = msecs_to_jiffies(cmd.timeout_ms);
status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
nvme_to_user_ptr(cmd.addr), cmd.data_len,
nvme_to_user_ptr(cmd.metadata), cmd.metadata_len,
0, &result, timeout);
if (status >= 0) {
if (put_user(result, &ucmd->result))
return -EFAULT;
}
return status;
}
static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
struct nvme_passthru_cmd64 __user *ucmd)
{
struct nvme_passthru_cmd64 cmd;
struct nvme_command c;
unsigned timeout = 0;
int status;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (copy_from_user(&cmd, ucmd, sizeof(cmd)))
return -EFAULT;
if (cmd.flags)
return -EINVAL;
if (ns && cmd.nsid != ns->head->ns_id) {
dev_err(ctrl->device,
"%s: nsid (%u) in cmd does not match nsid (%u) of namespace\n",
current->comm, cmd.nsid, ns->head->ns_id);
return -EINVAL;
}
memset(&c, 0, sizeof(c));
c.common.opcode = cmd.opcode;
c.common.flags = cmd.flags;
c.common.nsid = cpu_to_le32(cmd.nsid);
c.common.cdw2[0] = cpu_to_le32(cmd.cdw2);
c.common.cdw2[1] = cpu_to_le32(cmd.cdw3);
c.common.cdw10 = cpu_to_le32(cmd.cdw10);
c.common.cdw11 = cpu_to_le32(cmd.cdw11);
c.common.cdw12 = cpu_to_le32(cmd.cdw12);
c.common.cdw13 = cpu_to_le32(cmd.cdw13);
c.common.cdw14 = cpu_to_le32(cmd.cdw14);
c.common.cdw15 = cpu_to_le32(cmd.cdw15);
if (cmd.timeout_ms)
timeout = msecs_to_jiffies(cmd.timeout_ms);
status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
nvme_to_user_ptr(cmd.addr), cmd.data_len,
nvme_to_user_ptr(cmd.metadata), cmd.metadata_len,
0, &cmd.result, timeout);
if (status >= 0) {
if (put_user(cmd.result, &ucmd->result))
return -EFAULT;
}
return status;
}
static bool is_ctrl_ioctl(unsigned int cmd)
{
if (cmd == NVME_IOCTL_ADMIN_CMD || cmd == NVME_IOCTL_ADMIN64_CMD)
return true;
if (is_sed_ioctl(cmd))
return true;
return false;
}
static int nvme_ctrl_ioctl(struct nvme_ctrl *ctrl, unsigned int cmd,
void __user *argp)
{
switch (cmd) {
case NVME_IOCTL_ADMIN_CMD:
return nvme_user_cmd(ctrl, NULL, argp);
case NVME_IOCTL_ADMIN64_CMD:
return nvme_user_cmd64(ctrl, NULL, argp);
default:
return sed_ioctl(ctrl->opal_dev, cmd, argp);
}
}
#ifdef COMPAT_FOR_U64_ALIGNMENT
struct nvme_user_io32 {
__u8 opcode;
__u8 flags;
__u16 control;
__u16 nblocks;
__u16 rsvd;
__u64 metadata;
__u64 addr;
__u64 slba;
__u32 dsmgmt;
__u32 reftag;
__u16 apptag;
__u16 appmask;
} __attribute__((__packed__));
#define NVME_IOCTL_SUBMIT_IO32 _IOW('N', 0x42, struct nvme_user_io32)
#endif /* COMPAT_FOR_U64_ALIGNMENT */
static int nvme_ns_ioctl(struct nvme_ns *ns, unsigned int cmd,
void __user *argp)
{
switch (cmd) {
case NVME_IOCTL_ID:
force_successful_syscall_return();
return ns->head->ns_id;
case NVME_IOCTL_IO_CMD:
return nvme_user_cmd(ns->ctrl, ns, argp);
/*
* struct nvme_user_io can have different padding on some 32-bit ABIs.
* Just accept the compat version as all fields that are used are the
* same size and at the same offset.
*/
#ifdef COMPAT_FOR_U64_ALIGNMENT
case NVME_IOCTL_SUBMIT_IO32:
#endif
case NVME_IOCTL_SUBMIT_IO:
return nvme_submit_io(ns, argp);
case NVME_IOCTL_IO64_CMD:
return nvme_user_cmd64(ns->ctrl, ns, argp);
default:
if (!ns->ndev)
return -ENOTTY;
return nvme_nvm_ioctl(ns, cmd, argp);
}
}
static int __nvme_ioctl(struct nvme_ns *ns, unsigned int cmd, void __user *arg)
{
if (is_ctrl_ioctl(cmd))
return nvme_ctrl_ioctl(ns->ctrl, cmd, arg);
return nvme_ns_ioctl(ns, cmd, arg);
}
int nvme_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long arg)
{
struct nvme_ns *ns = bdev->bd_disk->private_data;
return __nvme_ioctl(ns, cmd, (void __user *)arg);
}
long nvme_ns_chr_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
struct nvme_ns *ns =
container_of(file_inode(file)->i_cdev, struct nvme_ns, cdev);
return __nvme_ioctl(ns, cmd, (void __user *)arg);
}
#ifdef CONFIG_NVME_MULTIPATH
static int nvme_ns_head_ctrl_ioctl(struct nvme_ns_head *head,
unsigned int cmd, void __user *argp)
{
struct nvme_ctrl *ctrl = nvme_find_get_live_ctrl(head->subsys);
int ret;
if (IS_ERR(ctrl))
return PTR_ERR(ctrl);
ret = nvme_ctrl_ioctl(ctrl, cmd, argp);
nvme_put_ctrl(ctrl);
return ret;
}
static int nvme_ns_head_ns_ioctl(struct nvme_ns_head *head,
unsigned int cmd, void __user *argp)
{
int srcu_idx = srcu_read_lock(&head->srcu);
struct nvme_ns *ns = nvme_find_path(head);
int ret = -EWOULDBLOCK;
if (ns)
ret = nvme_ns_ioctl(ns, cmd, argp);
srcu_read_unlock(&head->srcu, srcu_idx);
return ret;
}
int nvme_ns_head_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long arg)
{
struct nvme_ns_head *head = bdev->bd_disk->private_data;
void __user *argp = (void __user *)arg;
if (is_ctrl_ioctl(cmd))
return nvme_ns_head_ctrl_ioctl(head, cmd, argp);
return nvme_ns_head_ns_ioctl(head, cmd, argp);
}
long nvme_ns_head_chr_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
struct cdev *cdev = file_inode(file)->i_cdev;
struct nvme_ns_head *head =
container_of(cdev, struct nvme_ns_head, cdev);
void __user *argp = (void __user *)arg;
if (is_ctrl_ioctl(cmd))
return nvme_ns_head_ctrl_ioctl(head, cmd, argp);
return nvme_ns_head_ns_ioctl(head, cmd, argp);
}
#endif /* CONFIG_NVME_MULTIPATH */
static int nvme_dev_user_cmd(struct nvme_ctrl *ctrl, void __user *argp)
{
struct nvme_ns *ns;
int ret;
down_read(&ctrl->namespaces_rwsem);
if (list_empty(&ctrl->namespaces)) {
ret = -ENOTTY;
goto out_unlock;
}
ns = list_first_entry(&ctrl->namespaces, struct nvme_ns, list);
if (ns != list_last_entry(&ctrl->namespaces, struct nvme_ns, list)) {
dev_warn(ctrl->device,
"NVME_IOCTL_IO_CMD not supported when multiple namespaces present!\n");
ret = -EINVAL;
goto out_unlock;
}
dev_warn(ctrl->device,
"using deprecated NVME_IOCTL_IO_CMD ioctl on the char device!\n");
kref_get(&ns->kref);
up_read(&ctrl->namespaces_rwsem);
ret = nvme_user_cmd(ctrl, ns, argp);
nvme_put_ns(ns);
return ret;
out_unlock:
up_read(&ctrl->namespaces_rwsem);
return ret;
}
long nvme_dev_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
struct nvme_ctrl *ctrl = file->private_data;
void __user *argp = (void __user *)arg;
switch (cmd) {
case NVME_IOCTL_ADMIN_CMD:
return nvme_user_cmd(ctrl, NULL, argp);
case NVME_IOCTL_ADMIN64_CMD:
return nvme_user_cmd64(ctrl, NULL, argp);
case NVME_IOCTL_IO_CMD:
return nvme_dev_user_cmd(ctrl, argp);
case NVME_IOCTL_RESET:
dev_warn(ctrl->device, "resetting controller\n");
return nvme_reset_ctrl_sync(ctrl);
case NVME_IOCTL_SUBSYS_RESET:
return nvme_reset_subsystem(ctrl);
case NVME_IOCTL_RESCAN:
nvme_queue_scan(ctrl);
return 0;
default:
return -ENOTTY;
}
}


@@ -930,15 +930,15 @@ static int nvme_nvm_user_vcmd(struct nvme_ns *ns, int admin,
 	return ret;
 }
 
-int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg)
+int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, void __user *argp)
 {
 	switch (cmd) {
 	case NVME_NVM_IOCTL_ADMIN_VIO:
-		return nvme_nvm_user_vcmd(ns, 1, (void __user *)arg);
+		return nvme_nvm_user_vcmd(ns, 1, argp);
 	case NVME_NVM_IOCTL_IO_VIO:
-		return nvme_nvm_user_vcmd(ns, 0, (void __user *)arg);
+		return nvme_nvm_user_vcmd(ns, 0, argp);
 	case NVME_NVM_IOCTL_SUBMIT_VIO:
-		return nvme_nvm_submit_vio(ns, (void __user *)arg);
+		return nvme_nvm_submit_vio(ns, argp);
 	default:
 		return -ENOTTY;
 	}
@@ -1240,7 +1240,7 @@ static struct attribute *nvm_dev_attrs[] = {
 static umode_t nvm_dev_attrs_visible(struct kobject *kobj,
 				     struct attribute *attr, int index)
 {
-	struct device *dev = container_of(kobj, struct device, kobj);
+	struct device *dev = kobj_to_dev(kobj);
 	struct gendisk *disk = dev_to_disk(dev);
 	struct nvme_ns *ns = disk->private_data;
 	struct nvm_dev *ndev = ns->ndev;


@ -50,19 +50,19 @@ void nvme_mpath_start_freeze(struct nvme_subsystem *subsys)
* and those that have a single controller and use the controller node * and those that have a single controller and use the controller node
* directly. * directly.
*/ */
void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, bool nvme_mpath_set_disk_name(struct nvme_ns *ns, char *disk_name, int *flags)
struct nvme_ctrl *ctrl, int *flags)
{ {
if (!multipath) { if (!multipath)
sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance); return false;
} else if (ns->head->disk) { if (!ns->head->disk) {
sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance, sprintf(disk_name, "nvme%dn%d", ns->ctrl->subsys->instance,
ctrl->instance, ns->head->instance); ns->head->instance);
*flags = GENHD_FL_HIDDEN; return true;
} else {
sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance,
ns->head->instance);
} }
sprintf(disk_name, "nvme%dc%dn%d", ns->ctrl->subsys->instance,
ns->ctrl->instance, ns->head->instance);
*flags = GENHD_FL_HIDDEN;
return true;
} }
void nvme_failover_req(struct request *req) void nvme_failover_req(struct request *req)
@ -294,7 +294,7 @@ static bool nvme_available_path(struct nvme_ns_head *head)
return false; return false;
} }
blk_qc_t nvme_ns_head_submit_bio(struct bio *bio) static blk_qc_t nvme_ns_head_submit_bio(struct bio *bio)
{ {
struct nvme_ns_head *head = bio->bi_bdev->bd_disk->private_data; struct nvme_ns_head *head = bio->bi_bdev->bd_disk->private_data;
struct device *dev = disk_to_dev(head->disk); struct device *dev = disk_to_dev(head->disk);
@ -334,6 +334,71 @@ blk_qc_t nvme_ns_head_submit_bio(struct bio *bio)
return ret; return ret;
} }
static int nvme_ns_head_open(struct block_device *bdev, fmode_t mode)
{
if (!nvme_tryget_ns_head(bdev->bd_disk->private_data))
return -ENXIO;
return 0;
}
static void nvme_ns_head_release(struct gendisk *disk, fmode_t mode)
{
nvme_put_ns_head(disk->private_data);
}
const struct block_device_operations nvme_ns_head_ops = {
.owner = THIS_MODULE,
.submit_bio = nvme_ns_head_submit_bio,
.open = nvme_ns_head_open,
.release = nvme_ns_head_release,
.ioctl = nvme_ns_head_ioctl,
.getgeo = nvme_getgeo,
.report_zones = nvme_report_zones,
.pr_ops = &nvme_pr_ops,
};
static inline struct nvme_ns_head *cdev_to_ns_head(struct cdev *cdev)
{
return container_of(cdev, struct nvme_ns_head, cdev);
}
static int nvme_ns_head_chr_open(struct inode *inode, struct file *file)
{
if (!nvme_tryget_ns_head(cdev_to_ns_head(inode->i_cdev)))
return -ENXIO;
return 0;
}
static int nvme_ns_head_chr_release(struct inode *inode, struct file *file)
{
nvme_put_ns_head(cdev_to_ns_head(inode->i_cdev));
return 0;
}
static const struct file_operations nvme_ns_head_chr_fops = {
.owner = THIS_MODULE,
.open = nvme_ns_head_chr_open,
.release = nvme_ns_head_chr_release,
.unlocked_ioctl = nvme_ns_head_chr_ioctl,
.compat_ioctl = compat_ptr_ioctl,
};
static int nvme_add_ns_head_cdev(struct nvme_ns_head *head)
{
int ret;
head->cdev_device.parent = &head->subsys->dev;
ret = dev_set_name(&head->cdev_device, "ng%dn%d",
head->subsys->instance, head->instance);
if (ret)
return ret;
ret = nvme_cdev_add(&head->cdev, &head->cdev_device,
&nvme_ns_head_chr_fops, THIS_MODULE);
if (ret)
kfree_const(head->cdev_device.kobj.name);
return ret;
}
static void nvme_requeue_work(struct work_struct *work) static void nvme_requeue_work(struct work_struct *work)
{ {
struct nvme_ns_head *head = struct nvme_ns_head *head =
@ -412,9 +477,11 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
if (!head->disk) if (!head->disk)
return; return;
if (!test_and_set_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) if (!test_and_set_bit(NVME_NSHEAD_DISK_LIVE, &head->flags)) {
device_add_disk(&head->subsys->dev, head->disk, device_add_disk(&head->subsys->dev, head->disk,
nvme_ns_id_attr_groups); nvme_ns_id_attr_groups);
nvme_add_ns_head_cdev(head);
}
mutex_lock(&head->lock); mutex_lock(&head->lock);
if (nvme_path_is_optimized(ns)) { if (nvme_path_is_optimized(ns)) {
@ -602,8 +669,8 @@ static ssize_t nvme_subsys_iopolicy_show(struct device *dev,
struct nvme_subsystem *subsys = struct nvme_subsystem *subsys =
container_of(dev, struct nvme_subsystem, dev); container_of(dev, struct nvme_subsystem, dev);
return sprintf(buf, "%s\n", return sysfs_emit(buf, "%s\n",
nvme_iopolicy_names[READ_ONCE(subsys->iopolicy)]); nvme_iopolicy_names[READ_ONCE(subsys->iopolicy)]);
} }
static ssize_t nvme_subsys_iopolicy_store(struct device *dev, static ssize_t nvme_subsys_iopolicy_store(struct device *dev,
@ -628,7 +695,7 @@ SUBSYS_ATTR_RW(iopolicy, S_IRUGO | S_IWUSR,
static ssize_t ana_grpid_show(struct device *dev, struct device_attribute *attr, static ssize_t ana_grpid_show(struct device *dev, struct device_attribute *attr,
char *buf) char *buf)
{ {
return sprintf(buf, "%d\n", nvme_get_ns_from_dev(dev)->ana_grpid); return sysfs_emit(buf, "%d\n", nvme_get_ns_from_dev(dev)->ana_grpid);
} }
DEVICE_ATTR_RO(ana_grpid); DEVICE_ATTR_RO(ana_grpid);
@ -637,7 +704,7 @@ static ssize_t ana_state_show(struct device *dev, struct device_attribute *attr,
{ {
struct nvme_ns *ns = nvme_get_ns_from_dev(dev); struct nvme_ns *ns = nvme_get_ns_from_dev(dev);
return sprintf(buf, "%s\n", nvme_ana_state_names[ns->ana_state]); return sysfs_emit(buf, "%s\n", nvme_ana_state_names[ns->ana_state]);
} }
DEVICE_ATTR_RO(ana_state); DEVICE_ATTR_RO(ana_state);
@ -668,9 +735,13 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id)
if (desc.state) { if (desc.state) {
/* found the group desc: update */ /* found the group desc: update */
nvme_update_ns_ana_state(&desc, ns); nvme_update_ns_ana_state(&desc, ns);
} else {
/* group desc not found: trigger a re-read */
set_bit(NVME_NS_ANA_PENDING, &ns->flags);
queue_work(nvme_wq, &ns->ctrl->ana_work);
} }
} else { } else {
ns->ana_state = NVME_ANA_OPTIMIZED; ns->ana_state = NVME_ANA_OPTIMIZED;
nvme_mpath_set_live(ns); nvme_mpath_set_live(ns);
} }
@ -687,8 +758,10 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
{ {
if (!head->disk) if (!head->disk)
return; return;
if (head->disk->flags & GENHD_FL_UP) if (head->disk->flags & GENHD_FL_UP) {
nvme_cdev_del(&head->cdev, &head->cdev_device);
del_gendisk(head->disk); del_gendisk(head->disk);
}
blk_set_queue_dying(head->disk->queue); blk_set_queue_dying(head->disk->queue);
/* make sure all pending bios are cleaned up */ /* make sure all pending bios are cleaned up */
kblockd_schedule_work(&head->requeue_work); kblockd_schedule_work(&head->requeue_work);
@ -758,4 +831,3 @@ void nvme_mpath_uninit(struct nvme_ctrl *ctrl)
kfree(ctrl->ana_log_buf); kfree(ctrl->ana_log_buf);
ctrl->ana_log_buf = NULL; ctrl->ana_log_buf = NULL;
} }


@ -27,7 +27,6 @@ extern unsigned int admin_timeout;
#define NVME_ADMIN_TIMEOUT (admin_timeout * HZ) #define NVME_ADMIN_TIMEOUT (admin_timeout * HZ)
#define NVME_DEFAULT_KATO 5 #define NVME_DEFAULT_KATO 5
#define NVME_KATO_GRACE 10
#ifdef CONFIG_ARCH_NO_SG_CHAIN #ifdef CONFIG_ARCH_NO_SG_CHAIN
#define NVME_INLINE_SG_CNT 0 #define NVME_INLINE_SG_CNT 0
@ -276,6 +275,9 @@ struct nvme_ctrl {
u32 max_hw_sectors; u32 max_hw_sectors;
u32 max_segments; u32 max_segments;
u32 max_integrity_segments; u32 max_integrity_segments;
u32 max_discard_sectors;
u32 max_discard_segments;
u32 max_zeroes_sectors;
#ifdef CONFIG_BLK_DEV_ZONED #ifdef CONFIG_BLK_DEV_ZONED
u32 max_zone_append; u32 max_zone_append;
#endif #endif
@ -410,8 +412,12 @@ struct nvme_ns_head {
bool shared; bool shared;
int instance; int instance;
struct nvme_effects_log *effects; struct nvme_effects_log *effects;
#ifdef CONFIG_NVME_MULTIPATH
struct cdev cdev;
struct device cdev_device;
struct gendisk *disk; struct gendisk *disk;
#ifdef CONFIG_NVME_MULTIPATH
struct bio_list requeue_list; struct bio_list requeue_list;
spinlock_t requeue_lock; spinlock_t requeue_lock;
struct work_struct requeue_work; struct work_struct requeue_work;
@ -422,6 +428,11 @@ struct nvme_ns_head {
#endif #endif
}; };
static inline bool nvme_ns_head_multipath(struct nvme_ns_head *head)
{
return IS_ENABLED(CONFIG_NVME_MULTIPATH) && head->disk;
}
enum nvme_ns_features { enum nvme_ns_features {
NVME_NS_EXT_LBAS = 1 << 0, /* support extended LBA format */ NVME_NS_EXT_LBAS = 1 << 0, /* support extended LBA format */
NVME_NS_METADATA_SUPPORTED = 1 << 1, /* support getting generated md */ NVME_NS_METADATA_SUPPORTED = 1 << 1, /* support getting generated md */
@ -457,6 +468,9 @@ struct nvme_ns {
#define NVME_NS_ANA_PENDING 2 #define NVME_NS_ANA_PENDING 2
#define NVME_NS_FORCE_RO 3 #define NVME_NS_FORCE_RO 3
struct cdev cdev;
struct device cdev_device;
struct nvme_fault_inject fault_inject; struct nvme_fault_inject fault_inject;
}; };
@ -599,7 +613,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
void nvme_uninit_ctrl(struct nvme_ctrl *ctrl); void nvme_uninit_ctrl(struct nvme_ctrl *ctrl);
void nvme_start_ctrl(struct nvme_ctrl *ctrl); void nvme_start_ctrl(struct nvme_ctrl *ctrl);
void nvme_stop_ctrl(struct nvme_ctrl *ctrl); void nvme_stop_ctrl(struct nvme_ctrl *ctrl);
int nvme_init_identify(struct nvme_ctrl *ctrl); int nvme_init_ctrl_finish(struct nvme_ctrl *ctrl);
void nvme_remove_namespaces(struct nvme_ctrl *ctrl); void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
@ -623,8 +637,7 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl);
struct request *nvme_alloc_request(struct request_queue *q, struct request *nvme_alloc_request(struct request_queue *q,
struct nvme_command *cmd, blk_mq_req_flags_t flags); struct nvme_command *cmd, blk_mq_req_flags_t flags);
void nvme_cleanup_cmd(struct request *req); void nvme_cleanup_cmd(struct request *req);
blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req);
struct nvme_command *cmd);
int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
void *buf, unsigned bufflen); void *buf, unsigned bufflen);
int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
@ -640,16 +653,34 @@ int nvme_get_features(struct nvme_ctrl *dev, unsigned int fid,
int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count); int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count);
void nvme_stop_keep_alive(struct nvme_ctrl *ctrl); void nvme_stop_keep_alive(struct nvme_ctrl *ctrl);
int nvme_reset_ctrl(struct nvme_ctrl *ctrl); int nvme_reset_ctrl(struct nvme_ctrl *ctrl);
int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl);
int nvme_try_sched_reset(struct nvme_ctrl *ctrl); int nvme_try_sched_reset(struct nvme_ctrl *ctrl);
int nvme_delete_ctrl(struct nvme_ctrl *ctrl); int nvme_delete_ctrl(struct nvme_ctrl *ctrl);
void nvme_queue_scan(struct nvme_ctrl *ctrl);
int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp, u8 csi, int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp, u8 csi,
void *log, size_t size, u64 offset); void *log, size_t size, u64 offset);
struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk, struct nvme_ns *nvme_get_ns_from_disk(struct gendisk *disk,
struct nvme_ns_head **head, int *srcu_idx); struct nvme_ns_head **head, int *srcu_idx);
void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx); void nvme_put_ns_from_disk(struct nvme_ns_head *head, int idx);
bool nvme_tryget_ns_head(struct nvme_ns_head *head);
void nvme_put_ns_head(struct nvme_ns_head *head);
struct nvme_ctrl *nvme_find_get_live_ctrl(struct nvme_subsystem *subsys);
int nvme_cdev_add(struct cdev *cdev, struct device *cdev_device,
const struct file_operations *fops, struct module *owner);
void nvme_cdev_del(struct cdev *cdev, struct device *cdev_device);
int nvme_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long arg);
long nvme_ns_chr_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
int nvme_ns_head_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long arg);
long nvme_ns_head_chr_ioctl(struct file *file, unsigned int cmd,
unsigned long arg);
long nvme_dev_ioctl(struct file *file, unsigned int cmd,
unsigned long arg);
int nvme_getgeo(struct block_device *bdev, struct hd_geometry *geo);
extern const struct attribute_group *nvme_ns_id_attr_groups[]; extern const struct attribute_group *nvme_ns_id_attr_groups[];
extern const struct pr_ops nvme_pr_ops;
extern const struct block_device_operations nvme_ns_head_ops; extern const struct block_device_operations nvme_ns_head_ops;
#ifdef CONFIG_NVME_MULTIPATH #ifdef CONFIG_NVME_MULTIPATH
@ -661,8 +692,7 @@ static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
void nvme_mpath_unfreeze(struct nvme_subsystem *subsys); void nvme_mpath_unfreeze(struct nvme_subsystem *subsys);
void nvme_mpath_wait_freeze(struct nvme_subsystem *subsys); void nvme_mpath_wait_freeze(struct nvme_subsystem *subsys);
void nvme_mpath_start_freeze(struct nvme_subsystem *subsys); void nvme_mpath_start_freeze(struct nvme_subsystem *subsys);
void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, bool nvme_mpath_set_disk_name(struct nvme_ns *ns, char *disk_name, int *flags);
struct nvme_ctrl *ctrl, int *flags);
void nvme_failover_req(struct request *req); void nvme_failover_req(struct request *req);
void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl); void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl);
int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl,struct nvme_ns_head *head); int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl,struct nvme_ns_head *head);
@ -674,7 +704,6 @@ void nvme_mpath_stop(struct nvme_ctrl *ctrl);
bool nvme_mpath_clear_current_path(struct nvme_ns *ns); bool nvme_mpath_clear_current_path(struct nvme_ns *ns);
void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl); void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl);
struct nvme_ns *nvme_find_path(struct nvme_ns_head *head); struct nvme_ns *nvme_find_path(struct nvme_ns_head *head);
blk_qc_t nvme_ns_head_submit_bio(struct bio *bio);
static inline void nvme_mpath_check_last_path(struct nvme_ns *ns) static inline void nvme_mpath_check_last_path(struct nvme_ns *ns)
{ {
@ -701,16 +730,11 @@ static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
{ {
return false; return false;
} }
/* static inline bool nvme_mpath_set_disk_name(struct nvme_ns *ns, char *disk_name,
* Without the multipath code enabled, multiple controller per subsystems are int *flags)
* visible as devices and thus we cannot use the subsystem instance.
*/
static inline void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns,
struct nvme_ctrl *ctrl, int *flags)
{ {
sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance); return false;
} }
static inline void nvme_failover_req(struct request *req) static inline void nvme_failover_req(struct request *req)
{ {
} }
@ -745,7 +769,7 @@ static inline void nvme_trace_bio_complete(struct request *req)
static inline int nvme_mpath_init(struct nvme_ctrl *ctrl, static inline int nvme_mpath_init(struct nvme_ctrl *ctrl,
struct nvme_id_ctrl *id) struct nvme_id_ctrl *id)
{ {
if (ctrl->subsys->cmic & (1 << 3)) if (ctrl->subsys->cmic & NVME_CTRL_CMIC_ANA)
dev_warn(ctrl->device, dev_warn(ctrl->device,
"Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.\n"); "Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.\n");
return 0; return 0;
@ -798,7 +822,7 @@ static inline int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node); int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node);
void nvme_nvm_unregister(struct nvme_ns *ns); void nvme_nvm_unregister(struct nvme_ns *ns);
extern const struct attribute_group nvme_nvm_attr_group; extern const struct attribute_group nvme_nvm_attr_group;
int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, unsigned long arg); int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, void __user *argp);
#else #else
static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
int node) int node)
@ -808,7 +832,7 @@ static inline int nvme_nvm_register(struct nvme_ns *ns, char *disk_name,
static inline void nvme_nvm_unregister(struct nvme_ns *ns) {}; static inline void nvme_nvm_unregister(struct nvme_ns *ns) {};
static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd, static inline int nvme_nvm_ioctl(struct nvme_ns *ns, unsigned int cmd,
unsigned long arg) void __user *argp)
{ {
return -ENOTTY; return -ENOTTY;
} }


@ -224,6 +224,7 @@ struct nvme_queue {
*/ */
struct nvme_iod { struct nvme_iod {
struct nvme_request req; struct nvme_request req;
struct nvme_command cmd;
struct nvme_queue *nvmeq; struct nvme_queue *nvmeq;
bool use_sgl; bool use_sgl;
int aborted; int aborted;
@ -429,6 +430,7 @@ static int nvme_init_request(struct blk_mq_tag_set *set, struct request *req,
iod->nvmeq = nvmeq; iod->nvmeq = nvmeq;
nvme_req(req)->ctrl = &dev->ctrl; nvme_req(req)->ctrl = &dev->ctrl;
nvme_req(req)->cmd = &iod->cmd;
return 0; return 0;
} }
@ -852,7 +854,7 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
return nvme_setup_prp_simple(dev, req, return nvme_setup_prp_simple(dev, req,
&cmnd->rw, &bv); &cmnd->rw, &bv);
if (iod->nvmeq->qid && if (iod->nvmeq->qid && sgl_threshold &&
dev->ctrl.sgls & ((1 << 0) | (1 << 1))) dev->ctrl.sgls & ((1 << 0) | (1 << 1)))
return nvme_setup_sgl_simple(dev, req, return nvme_setup_sgl_simple(dev, req,
&cmnd->rw, &bv); &cmnd->rw, &bv);
@ -917,7 +919,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
struct nvme_dev *dev = nvmeq->dev; struct nvme_dev *dev = nvmeq->dev;
struct request *req = bd->rq; struct request *req = bd->rq;
struct nvme_iod *iod = blk_mq_rq_to_pdu(req); struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
struct nvme_command cmnd; struct nvme_command *cmnd = &iod->cmd;
blk_status_t ret; blk_status_t ret;
iod->aborted = 0; iod->aborted = 0;
@ -931,24 +933,24 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags))) if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
return BLK_STS_IOERR; return BLK_STS_IOERR;
ret = nvme_setup_cmd(ns, req, &cmnd); ret = nvme_setup_cmd(ns, req);
if (ret) if (ret)
return ret; return ret;
if (blk_rq_nr_phys_segments(req)) { if (blk_rq_nr_phys_segments(req)) {
ret = nvme_map_data(dev, req, &cmnd); ret = nvme_map_data(dev, req, cmnd);
if (ret) if (ret)
goto out_free_cmd; goto out_free_cmd;
} }
if (blk_integrity_rq(req)) { if (blk_integrity_rq(req)) {
ret = nvme_map_metadata(dev, req, &cmnd); ret = nvme_map_metadata(dev, req, cmnd);
if (ret) if (ret)
goto out_unmap_data; goto out_unmap_data;
} }
blk_mq_start_request(req); blk_mq_start_request(req);
nvme_submit_cmd(nvmeq, &cmnd, bd->last); nvme_submit_cmd(nvmeq, cmnd, bd->last);
return BLK_STS_OK; return BLK_STS_OK;
out_unmap_data: out_unmap_data:
nvme_unmap_data(dev, req); nvme_unmap_data(dev, req);
@ -1060,18 +1062,10 @@ static inline int nvme_process_cq(struct nvme_queue *nvmeq)
static irqreturn_t nvme_irq(int irq, void *data) static irqreturn_t nvme_irq(int irq, void *data)
{ {
struct nvme_queue *nvmeq = data; struct nvme_queue *nvmeq = data;
irqreturn_t ret = IRQ_NONE;
/*
* The rmb/wmb pair ensures we see all updates from a previous run of
* the irq handler, even if that was on another CPU.
*/
rmb();
if (nvme_process_cq(nvmeq)) if (nvme_process_cq(nvmeq))
ret = IRQ_HANDLED; return IRQ_HANDLED;
wmb(); return IRQ_NONE;
return ret;
} }
static irqreturn_t nvme_irq_check(int irq, void *data) static irqreturn_t nvme_irq_check(int irq, void *data)
@ -2178,7 +2172,7 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
if (nr_io_queues == 0) if (nr_io_queues == 0)
return 0; return 0;
clear_bit(NVMEQ_ENABLED, &adminq->flags); clear_bit(NVMEQ_ENABLED, &adminq->flags);
if (dev->cmb_use_sqes) { if (dev->cmb_use_sqes) {
@ -2653,7 +2647,7 @@ static void nvme_reset_work(struct work_struct *work)
*/ */
dev->ctrl.max_integrity_segments = 1; dev->ctrl.max_integrity_segments = 1;
result = nvme_init_identify(&dev->ctrl); result = nvme_init_ctrl_finish(&dev->ctrl);
if (result) if (result)
goto out; goto out;


@@ -314,6 +314,7 @@ static int nvme_rdma_init_request(struct blk_mq_tag_set *set,
 		NVME_RDMA_DATA_SGL_SIZE;
 
 	req->queue = queue;
+	nvme_req(rq)->cmd = req->sqe.data;
 
 	return 0;
 }
@@ -920,7 +921,7 @@ static int nvme_rdma_configure_admin_queue(struct nvme_rdma_ctrl *ctrl,
 
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 
-	error = nvme_init_identify(&ctrl->ctrl);
+	error = nvme_init_ctrl_finish(&ctrl->ctrl);
 	if (error)
 		goto out_quiesce_queue;
 
@@ -2041,7 +2042,7 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	struct request *rq = bd->rq;
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_rdma_qe *sqe = &req->sqe;
-	struct nvme_command *c = sqe->data;
+	struct nvme_command *c = nvme_req(rq)->cmd;
 	struct ib_device *dev;
 	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
 	blk_status_t ret;
@@ -2064,7 +2065,7 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
 	ib_dma_sync_single_for_cpu(dev, sqe->dma,
 			sizeof(struct nvme_command), DMA_TO_DEVICE);
 
-	ret = nvme_setup_cmd(ns, rq, c);
+	ret = nvme_setup_cmd(ns, rq);
 	if (ret)
 		goto unmap_qe;


@ -417,6 +417,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
{ {
struct nvme_tcp_ctrl *ctrl = set->driver_data; struct nvme_tcp_ctrl *ctrl = set->driver_data;
struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
struct nvme_tcp_cmd_pdu *pdu;
int queue_idx = (set == &ctrl->tag_set) ? hctx_idx + 1 : 0; int queue_idx = (set == &ctrl->tag_set) ? hctx_idx + 1 : 0;
struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx]; struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx];
u8 hdgst = nvme_tcp_hdgst_len(queue); u8 hdgst = nvme_tcp_hdgst_len(queue);
@ -427,8 +428,10 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
if (!req->pdu) if (!req->pdu)
return -ENOMEM; return -ENOMEM;
pdu = req->pdu;
req->queue = queue; req->queue = queue;
nvme_req(rq)->ctrl = &ctrl->ctrl; nvme_req(rq)->ctrl = &ctrl->ctrl;
nvme_req(rq)->cmd = &pdu->cmd;
return 0; return 0;
} }
@ -874,7 +877,7 @@ static void nvme_tcp_state_change(struct sock *sk)
{ {
struct nvme_tcp_queue *queue; struct nvme_tcp_queue *queue;
read_lock(&sk->sk_callback_lock); read_lock_bh(&sk->sk_callback_lock);
queue = sk->sk_user_data; queue = sk->sk_user_data;
if (!queue) if (!queue)
goto done; goto done;
@ -895,7 +898,7 @@ static void nvme_tcp_state_change(struct sock *sk)
queue->state_change(sk); queue->state_change(sk);
done: done:
read_unlock(&sk->sk_callback_lock); read_unlock_bh(&sk->sk_callback_lock);
} }
static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue) static inline bool nvme_tcp_queue_more(struct nvme_tcp_queue *queue)
@ -1885,7 +1888,7 @@ static int nvme_tcp_configure_admin_queue(struct nvme_ctrl *ctrl, bool new)
blk_mq_unquiesce_queue(ctrl->admin_q); blk_mq_unquiesce_queue(ctrl->admin_q);
error = nvme_init_identify(ctrl); error = nvme_init_ctrl_finish(ctrl);
if (error) if (error)
goto out_quiesce_queue; goto out_quiesce_queue;
@ -1973,6 +1976,11 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
goto destroy_admin; goto destroy_admin;
} }
if (!(ctrl->sgls & ((1 << 0) | (1 << 1)))) {
dev_err(ctrl->device, "Mandatory sgls are not supported!\n");
goto destroy_admin;
}
if (opts->queue_size > ctrl->sqsize + 1) if (opts->queue_size > ctrl->sqsize + 1)
dev_warn(ctrl->device, dev_warn(ctrl->device,
"queue_size %zu > ctrl sqsize %u, clamping down\n", "queue_size %zu > ctrl sqsize %u, clamping down\n",
@ -2269,7 +2277,7 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
u8 hdgst = nvme_tcp_hdgst_len(queue), ddgst = 0; u8 hdgst = nvme_tcp_hdgst_len(queue), ddgst = 0;
blk_status_t ret; blk_status_t ret;
ret = nvme_setup_cmd(ns, rq, &pdu->cmd); ret = nvme_setup_cmd(ns, rq);
if (ret) if (ret)
return ret; return ret;


@@ -96,7 +96,7 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
 		dev_warn(ns->ctrl->device,
 			"zone operations:%x not supported for namespace:%u\n",
 			le16_to_cpu(id->zoc), ns->head->ns_id);
-		status = -EINVAL;
+		status = -ENODEV;
 		goto free_data;
 	}
 
@@ -105,7 +105,7 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
 		dev_warn(ns->ctrl->device,
 			"invalid zone size:%llu for namespace:%u\n",
 			ns->zsze, ns->head->ns_id);
-		status = -EINVAL;
+		status = -ENODEV;
 		goto free_data;
 	}


@@ -513,7 +513,7 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req)
 	default:
 		id->nuse = id->nsze;
 		break;
 	}
 
 	if (req->ns->bdev)
 		nvmet_bdev_set_limits(req->ns->bdev, id);
@@ -919,15 +919,21 @@ void nvmet_execute_async_event(struct nvmet_req *req)
 void nvmet_execute_keep_alive(struct nvmet_req *req)
 {
 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
+	u16 status = 0;
 
 	if (!nvmet_check_transfer_len(req, 0))
 		return;
 
+	if (!ctrl->kato) {
+		status = NVME_SC_KA_TIMEOUT_INVALID;
+		goto out;
+	}
+
 	pr_debug("ctrl %d update keep-alive timer for %d secs\n",
 		ctrl->cntlid, ctrl->kato);
 
 	mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ);
-	nvmet_req_complete(req, 0);
+out:
+	nvmet_req_complete(req, status);
 }
 
@@ -940,7 +946,7 @@ u16 nvmet_parse_admin_cmd(struct nvmet_req *req)
 	if (nvmet_req_subsys(req)->type == NVME_NQN_DISC)
 		return nvmet_parse_discovery_cmd(req);
 
-	ret = nvmet_check_ctrl_status(req, cmd);
+	ret = nvmet_check_ctrl_status(req);
 	if (unlikely(ret))
 		return ret;


@@ -1149,6 +1149,12 @@ static ssize_t nvmet_subsys_attr_model_store_locked(struct nvmet_subsys *subsys,
 	if (!len)
 		return -EINVAL;
 
+	if (len > NVMET_MN_MAX_SIZE) {
+		pr_err("Model number size can not exceed %d Bytes\n",
+		       NVMET_MN_MAX_SIZE);
+		return -EINVAL;
+	}
+
 	for (pos = 0; pos < len; pos++) {
 		if (!nvmet_is_ascii(page[pos]))
 			return -EINVAL;


@ -864,10 +864,9 @@ static inline u16 nvmet_io_cmd_check_access(struct nvmet_req *req)
static u16 nvmet_parse_io_cmd(struct nvmet_req *req) static u16 nvmet_parse_io_cmd(struct nvmet_req *req)
{ {
struct nvme_command *cmd = req->cmd;
u16 ret; u16 ret;
ret = nvmet_check_ctrl_status(req, cmd); ret = nvmet_check_ctrl_status(req);
if (unlikely(ret)) if (unlikely(ret))
return ret; return ret;
@ -1190,19 +1189,19 @@ static void nvmet_init_cap(struct nvmet_ctrl *ctrl)
ctrl->cap |= NVMET_QUEUE_SIZE - 1; ctrl->cap |= NVMET_QUEUE_SIZE - 1;
} }
u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid, struct nvmet_ctrl *nvmet_ctrl_find_get(const char *subsysnqn,
struct nvmet_req *req, struct nvmet_ctrl **ret) const char *hostnqn, u16 cntlid,
struct nvmet_req *req)
{ {
struct nvmet_ctrl *ctrl = NULL;
struct nvmet_subsys *subsys; struct nvmet_subsys *subsys;
struct nvmet_ctrl *ctrl;
u16 status = 0;
subsys = nvmet_find_get_subsys(req->port, subsysnqn); subsys = nvmet_find_get_subsys(req->port, subsysnqn);
if (!subsys) { if (!subsys) {
pr_warn("connect request for invalid subsystem %s!\n", pr_warn("connect request for invalid subsystem %s!\n",
subsysnqn); subsysnqn);
req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn); req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
return NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR; goto out;
} }
mutex_lock(&subsys->lock); mutex_lock(&subsys->lock);
@ -1215,33 +1214,34 @@ u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
if (!kref_get_unless_zero(&ctrl->ref)) if (!kref_get_unless_zero(&ctrl->ref))
continue; continue;
*ret = ctrl; /* ctrl found */
goto out; goto found;
} }
} }
ctrl = NULL; /* ctrl not found */
pr_warn("could not find controller %d for subsys %s / host %s\n", pr_warn("could not find controller %d for subsys %s / host %s\n",
cntlid, subsysnqn, hostnqn); cntlid, subsysnqn, hostnqn);
req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid); req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
out: found:
mutex_unlock(&subsys->lock); mutex_unlock(&subsys->lock);
nvmet_subsys_put(subsys); nvmet_subsys_put(subsys);
return status; out:
return ctrl;
} }
u16 nvmet_check_ctrl_status(struct nvmet_req *req, struct nvme_command *cmd) u16 nvmet_check_ctrl_status(struct nvmet_req *req)
{ {
if (unlikely(!(req->sq->ctrl->cc & NVME_CC_ENABLE))) { if (unlikely(!(req->sq->ctrl->cc & NVME_CC_ENABLE))) {
pr_err("got cmd %d while CC.EN == 0 on qid = %d\n", pr_err("got cmd %d while CC.EN == 0 on qid = %d\n",
cmd->common.opcode, req->sq->qid); req->cmd->common.opcode, req->sq->qid);
return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR; return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
} }
if (unlikely(!(req->sq->ctrl->csts & NVME_CSTS_RDY))) { if (unlikely(!(req->sq->ctrl->csts & NVME_CSTS_RDY))) {
pr_err("got cmd %d while CSTS.RDY == 0 on qid = %d\n", pr_err("got cmd %d while CSTS.RDY == 0 on qid = %d\n",
cmd->common.opcode, req->sq->qid); req->cmd->common.opcode, req->sq->qid);
return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR; return NVME_SC_CMD_SEQ_ERROR | NVME_SC_DNR;
} }
return 0; return 0;
@ -1322,10 +1322,10 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
pr_warn("connect request for invalid subsystem %s!\n", pr_warn("connect request for invalid subsystem %s!\n",
subsysnqn); subsysnqn);
req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn); req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
req->error_loc = offsetof(struct nvme_common_command, dptr);
goto out; goto out;
} }
status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
down_read(&nvmet_config_sem); down_read(&nvmet_config_sem);
if (!nvmet_host_allowed(subsys, hostnqn)) { if (!nvmet_host_allowed(subsys, hostnqn)) {
pr_info("connect by host %s for subsystem %s not allowed\n", pr_info("connect by host %s for subsystem %s not allowed\n",
@ -1333,6 +1333,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(hostnqn); req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(hostnqn);
up_read(&nvmet_config_sem); up_read(&nvmet_config_sem);
status = NVME_SC_CONNECT_INVALID_HOST | NVME_SC_DNR; status = NVME_SC_CONNECT_INVALID_HOST | NVME_SC_DNR;
req->error_loc = offsetof(struct nvme_common_command, dptr);
goto out_put_subsystem; goto out_put_subsystem;
} }
up_read(&nvmet_config_sem); up_read(&nvmet_config_sem);


@@ -178,12 +178,14 @@ static void nvmet_execute_disc_get_log_page(struct nvmet_req *req)
 	if (req->cmd->get_log_page.lid != NVME_LOG_DISC) {
 		req->error_loc =
 			offsetof(struct nvme_get_log_page_command, lid);
-		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
+		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
 		goto out;
 	}
 
 	/* Spec requires dword aligned offsets */
 	if (offset & 0x3) {
+		req->error_loc =
+			offsetof(struct nvme_get_log_page_command, lpo);
 		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
 		goto out;
 	}
@@ -250,7 +252,7 @@ static void nvmet_execute_disc_identify(struct nvmet_req *req)
 	if (req->cmd->identify.cns != NVME_ID_CNS_CTRL) {
 		req->error_loc = offsetof(struct nvme_identify, cns);
-		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
+		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
 		goto out;
 	}


@@ -190,12 +190,8 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 
 	status = nvmet_alloc_ctrl(d->subsysnqn, d->hostnqn, req,
 				  le32_to_cpu(c->kato), &ctrl);
-	if (status) {
-		if (status == (NVME_SC_INVALID_FIELD | NVME_SC_DNR))
-			req->error_loc =
-				offsetof(struct nvme_common_command, opcode);
+	if (status)
 		goto out;
-	}
 
 	ctrl->pi_support = ctrl->port->pi_enable && ctrl->subsys->pi_support;
 
@@ -222,7 +218,7 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
 {
 	struct nvmf_connect_command *c = &req->cmd->connect;
 	struct nvmf_connect_data *d;
-	struct nvmet_ctrl *ctrl = NULL;
+	struct nvmet_ctrl *ctrl;
 	u16 qid = le16_to_cpu(c->qid);
 	u16 status = 0;
 
@@ -249,11 +245,12 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
 		goto out;
 	}
 
-	status = nvmet_ctrl_find_get(d->subsysnqn, d->hostnqn,
-				     le16_to_cpu(d->cntlid),
-				     req, &ctrl);
-	if (status)
+	ctrl = nvmet_ctrl_find_get(d->subsysnqn, d->hostnqn,
+				   le16_to_cpu(d->cntlid), req);
+	if (!ctrl) {
+		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
 		goto out;
+	}
 
 	if (unlikely(qid > ctrl->subsys->max_qid)) {
 		pr_warn("invalid queue id (%d)\n", qid);


@ -1020,61 +1020,76 @@ nvmet_fc_free_hostport(struct nvmet_fc_hostport *hostport)
nvmet_fc_hostport_put(hostport); nvmet_fc_hostport_put(hostport);
} }
static struct nvmet_fc_hostport *
nvmet_fc_match_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
{
struct nvmet_fc_hostport *host;
lockdep_assert_held(&tgtport->lock);
list_for_each_entry(host, &tgtport->host_list, host_list) {
if (host->hosthandle == hosthandle && !host->invalid) {
if (nvmet_fc_hostport_get(host))
return (host);
}
}
return NULL;
}
static struct nvmet_fc_hostport * static struct nvmet_fc_hostport *
nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle) nvmet_fc_alloc_hostport(struct nvmet_fc_tgtport *tgtport, void *hosthandle)
{ {
struct nvmet_fc_hostport *newhost, *host, *match = NULL; struct nvmet_fc_hostport *newhost, *match = NULL;
unsigned long flags; unsigned long flags;
/* if LLDD not implemented, leave as NULL */ /* if LLDD not implemented, leave as NULL */
if (!hosthandle) if (!hosthandle)
return NULL; return NULL;
/* take reference for what will be the newly allocated hostport */ /*
* take reference for what will be the newly allocated hostport if
* we end up using a new allocation
*/
if (!nvmet_fc_tgtport_get(tgtport)) if (!nvmet_fc_tgtport_get(tgtport))
return ERR_PTR(-EINVAL); return ERR_PTR(-EINVAL);
newhost = kzalloc(sizeof(*newhost), GFP_KERNEL);
if (!newhost) {
spin_lock_irqsave(&tgtport->lock, flags);
list_for_each_entry(host, &tgtport->host_list, host_list) {
if (host->hosthandle == hosthandle && !host->invalid) {
if (nvmet_fc_hostport_get(host)) {
match = host;
break;
}
}
}
spin_unlock_irqrestore(&tgtport->lock, flags);
/* no allocation - release reference */
nvmet_fc_tgtport_put(tgtport);
return (match) ? match : ERR_PTR(-ENOMEM);
}
newhost->tgtport = tgtport;
newhost->hosthandle = hosthandle;
INIT_LIST_HEAD(&newhost->host_list);
kref_init(&newhost->ref);
spin_lock_irqsave(&tgtport->lock, flags); spin_lock_irqsave(&tgtport->lock, flags);
list_for_each_entry(host, &tgtport->host_list, host_list) { match = nvmet_fc_match_hostport(tgtport, hosthandle);
if (host->hosthandle == hosthandle && !host->invalid) {
if (nvmet_fc_hostport_get(host)) {
match = host;
break;
}
}
}
if (match) {
kfree(newhost);
newhost = NULL;
/* releasing allocation - release reference */
nvmet_fc_tgtport_put(tgtport);
} else
list_add_tail(&newhost->host_list, &tgtport->host_list);
spin_unlock_irqrestore(&tgtport->lock, flags); spin_unlock_irqrestore(&tgtport->lock, flags);
return (match) ? match : newhost; if (match) {
/* no new allocation - release reference */
nvmet_fc_tgtport_put(tgtport);
return match;
}
newhost = kzalloc(sizeof(*newhost), GFP_KERNEL);
if (!newhost) {
/* no new allocation - release reference */
nvmet_fc_tgtport_put(tgtport);
return ERR_PTR(-ENOMEM);
}
spin_lock_irqsave(&tgtport->lock, flags);
match = nvmet_fc_match_hostport(tgtport, hosthandle);
if (match) {
/* new allocation not needed */
kfree(newhost);
newhost = match;
/* no new allocation - release reference */
nvmet_fc_tgtport_put(tgtport);
} else {
newhost->tgtport = tgtport;
newhost->hosthandle = hosthandle;
INIT_LIST_HEAD(&newhost->host_list);
kref_init(&newhost->ref);
list_add_tail(&newhost->host_list, &tgtport->host_list);
}
spin_unlock_irqrestore(&tgtport->lock, flags);
return newhost;
} }
static void static void
@ -1996,6 +2011,7 @@ nvmet_fc_handle_ls_rqst_work(struct work_struct *work)
* *
* @target_port: pointer to the (registered) target port the LS was * @target_port: pointer to the (registered) target port the LS was
* received on. * received on.
* @hosthandle: pointer to the host specific data, gets stored in iod.
* @lsrsp: pointer to a lsrsp structure to be used to reference * @lsrsp: pointer to a lsrsp structure to be used to reference
* the exchange corresponding to the LS. * the exchange corresponding to the LS.
* @lsreqbuf: pointer to the buffer containing the LS Request * @lsreqbuf: pointer to the buffer containing the LS Request


@@ -141,7 +141,7 @@ static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 	if (!nvmf_check_ready(&queue->ctrl->ctrl, req, queue_ready))
 		return nvmf_fail_nonready_command(&queue->ctrl->ctrl, req);
 
-	ret = nvme_setup_cmd(ns, req, &iod->cmd);
+	ret = nvme_setup_cmd(ns, req);
 	if (ret)
 		return ret;
 
@@ -205,8 +205,10 @@ static int nvme_loop_init_request(struct blk_mq_tag_set *set,
 		unsigned int numa_node)
 {
 	struct nvme_loop_ctrl *ctrl = set->driver_data;
+	struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(req);
 
 	nvme_req(req)->ctrl = &ctrl->ctrl;
+	nvme_req(req)->cmd = &iod->cmd;
 	return nvme_loop_init_iod(ctrl, blk_mq_rq_to_pdu(req),
 			(set == &ctrl->tag_set) ? hctx_idx + 1 : 0);
 }
 
@@ -396,7 +398,7 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
 
 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 
-	error = nvme_init_identify(&ctrl->ctrl);
+	error = nvme_init_ctrl_finish(&ctrl->ctrl);
 	if (error)
 		goto out_cleanup_queue;


@@ -27,6 +27,7 @@
 #define NVMET_ERROR_LOG_SLOTS		128
 #define NVMET_NO_ERROR_LOC		((u16)-1)
 #define NVMET_DEFAULT_CTRL_MODEL	"Linux"
+#define NVMET_MN_MAX_SIZE		40
 
 /*
  * Supported optional AENs:
@@ -428,10 +429,11 @@ void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl);
 void nvmet_update_cc(struct nvmet_ctrl *ctrl, u32 new);
 u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 		struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp);
-u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
-		struct nvmet_req *req, struct nvmet_ctrl **ret);
+struct nvmet_ctrl *nvmet_ctrl_find_get(const char *subsysnqn,
+				       const char *hostnqn, u16 cntlid,
+				       struct nvmet_req *req);
 void nvmet_ctrl_put(struct nvmet_ctrl *ctrl);
-u16 nvmet_check_ctrl_status(struct nvmet_req *req, struct nvme_command *cmd);
+u16 nvmet_check_ctrl_status(struct nvmet_req *req);
 struct nvmet_subsys *nvmet_subsys_alloc(const char *subsysnqn,
 		enum nvme_subsys_type type);


@ -29,6 +29,16 @@ static int so_priority;
module_param(so_priority, int, 0644); module_param(so_priority, int, 0644);
MODULE_PARM_DESC(so_priority, "nvmet tcp socket optimize priority"); MODULE_PARM_DESC(so_priority, "nvmet tcp socket optimize priority");
/* Define a time period (in usecs) that io_work() shall sample an activated
* queue before determining it to be idle. This optional module behavior
* can enable NIC solutions that support socket optimized packet processing
* using advanced interrupt moderation techniques.
*/
static int idle_poll_period_usecs;
module_param(idle_poll_period_usecs, int, 0644);
MODULE_PARM_DESC(idle_poll_period_usecs,
"nvmet tcp io_work poll till idle time period in usecs");
#define NVMET_TCP_RECV_BUDGET 8 #define NVMET_TCP_RECV_BUDGET 8
#define NVMET_TCP_SEND_BUDGET 8 #define NVMET_TCP_SEND_BUDGET 8
#define NVMET_TCP_IO_WORK_BUDGET 64 #define NVMET_TCP_IO_WORK_BUDGET 64
@ -119,6 +129,8 @@ struct nvmet_tcp_queue {
struct ahash_request *snd_hash; struct ahash_request *snd_hash;
struct ahash_request *rcv_hash; struct ahash_request *rcv_hash;
unsigned long poll_end;
spinlock_t state_lock; spinlock_t state_lock;
enum nvmet_tcp_queue_state state; enum nvmet_tcp_queue_state state;
@ -525,11 +537,36 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
struct nvmet_tcp_cmd *cmd = struct nvmet_tcp_cmd *cmd =
container_of(req, struct nvmet_tcp_cmd, req); container_of(req, struct nvmet_tcp_cmd, req);
struct nvmet_tcp_queue *queue = cmd->queue; struct nvmet_tcp_queue *queue = cmd->queue;
struct nvme_sgl_desc *sgl;
u32 len;
if (unlikely(cmd == queue->cmd)) {
sgl = &cmd->req.cmd->common.dptr.sgl;
len = le32_to_cpu(sgl->length);
/*
* Wait for inline data before processing the response.
* Avoid using helpers, this might happen before
* nvmet_req_init is completed.
*/
if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
len && len < cmd->req.port->inline_data_size &&
nvme_is_write(cmd->req.cmd))
return;
}
llist_add(&cmd->lentry, &queue->resp_list); llist_add(&cmd->lentry, &queue->resp_list);
queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work); queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &cmd->queue->io_work);
} }
static void nvmet_tcp_execute_request(struct nvmet_tcp_cmd *cmd)
{
if (unlikely(cmd->flags & NVMET_TCP_F_INIT_FAILED))
nvmet_tcp_queue_response(&cmd->req);
else
cmd->req.execute(&cmd->req);
}
static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd) static int nvmet_try_send_data_pdu(struct nvmet_tcp_cmd *cmd)
{ {
u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue); u8 hdgst = nvmet_tcp_hdgst_len(cmd->queue);
@ -961,7 +998,7 @@ static int nvmet_tcp_done_recv_pdu(struct nvmet_tcp_queue *queue)
le32_to_cpu(req->cmd->common.dptr.sgl.length)); le32_to_cpu(req->cmd->common.dptr.sgl.length));
nvmet_tcp_handle_req_failure(queue, queue->cmd, req); nvmet_tcp_handle_req_failure(queue, queue->cmd, req);
return -EAGAIN; return 0;
} }
ret = nvmet_tcp_map_data(queue->cmd); ret = nvmet_tcp_map_data(queue->cmd);
@ -1104,10 +1141,8 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue)
return 0; return 0;
} }
if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) && if (cmd->rbytes_done == cmd->req.transfer_len)
cmd->rbytes_done == cmd->req.transfer_len) { nvmet_tcp_execute_request(cmd);
cmd->req.execute(&cmd->req);
}
nvmet_prepare_receive_pdu(queue); nvmet_prepare_receive_pdu(queue);
return 0; return 0;
@ -1144,9 +1179,9 @@ static int nvmet_tcp_try_recv_ddgst(struct nvmet_tcp_queue *queue)
goto out; goto out;
} }
if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) && if (cmd->rbytes_done == cmd->req.transfer_len)
cmd->rbytes_done == cmd->req.transfer_len) nvmet_tcp_execute_request(cmd);
cmd->req.execute(&cmd->req);
ret = 0; ret = 0;
out: out:
nvmet_prepare_receive_pdu(queue); nvmet_prepare_receive_pdu(queue);
@ -1216,6 +1251,23 @@ static void nvmet_tcp_schedule_release_queue(struct nvmet_tcp_queue *queue)
spin_unlock(&queue->state_lock); spin_unlock(&queue->state_lock);
} }
static inline void nvmet_tcp_arm_queue_deadline(struct nvmet_tcp_queue *queue)
{
queue->poll_end = jiffies + usecs_to_jiffies(idle_poll_period_usecs);
}
static bool nvmet_tcp_check_queue_deadline(struct nvmet_tcp_queue *queue,
int ops)
{
if (!idle_poll_period_usecs)
return false;
if (ops)
nvmet_tcp_arm_queue_deadline(queue);
return !time_after(jiffies, queue->poll_end);
}
static void nvmet_tcp_io_work(struct work_struct *w) static void nvmet_tcp_io_work(struct work_struct *w)
{ {
struct nvmet_tcp_queue *queue = struct nvmet_tcp_queue *queue =
@ -1241,9 +1293,10 @@ static void nvmet_tcp_io_work(struct work_struct *w)
} while (pending && ops < NVMET_TCP_IO_WORK_BUDGET); } while (pending && ops < NVMET_TCP_IO_WORK_BUDGET);
/* /*
* We exahusted our budget, requeue our selves * Requeue the worker if idle deadline period is in progress or any
* ops activity was recorded during the do-while loop above.
*/ */
if (pending) if (nvmet_tcp_check_queue_deadline(queue, ops) || pending)
queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work); queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);
} }
@ -1434,7 +1487,7 @@ static void nvmet_tcp_state_change(struct sock *sk)
{ {
struct nvmet_tcp_queue *queue; struct nvmet_tcp_queue *queue;
write_lock_bh(&sk->sk_callback_lock); read_lock_bh(&sk->sk_callback_lock);
queue = sk->sk_user_data; queue = sk->sk_user_data;
if (!queue) if (!queue)
goto done; goto done;
@ -1452,7 +1505,7 @@ static void nvmet_tcp_state_change(struct sock *sk)
queue->idx, sk->sk_state); queue->idx, sk->sk_state);
} }
done: done:
write_unlock_bh(&sk->sk_callback_lock); read_unlock_bh(&sk->sk_callback_lock);
} }
static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue) static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
@ -1501,6 +1554,8 @@ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue)
sock->sk->sk_state_change = nvmet_tcp_state_change; sock->sk->sk_state_change = nvmet_tcp_state_change;
queue->write_space = sock->sk->sk_write_space; queue->write_space = sock->sk->sk_write_space;
sock->sk->sk_write_space = nvmet_tcp_write_space; sock->sk->sk_write_space = nvmet_tcp_write_space;
if (idle_poll_period_usecs)
nvmet_tcp_arm_queue_deadline(queue);
queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work); queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work);
} }
write_unlock_bh(&sock->sk->sk_callback_lock); write_unlock_bh(&sock->sk->sk_callback_lock);


@@ -3439,15 +3439,6 @@ static void dasd_generic_auto_online(void *data, async_cookie_t cookie)
  */
 int dasd_generic_probe(struct ccw_device *cdev)
 {
-	int ret;
-
-	ret = dasd_add_sysfs_files(cdev);
-	if (ret) {
-		DBF_EVENT_DEVID(DBF_WARNING, cdev, "%s",
-				"dasd_generic_probe: could not add "
-				"sysfs entries");
-		return ret;
-	}
 	cdev->handler = &dasd_int_handler;
 
 	/*
@@ -3488,15 +3479,13 @@ void dasd_generic_remove(struct ccw_device *cdev)
 	struct dasd_block *block;
 
 	device = dasd_device_from_cdev(cdev);
-	if (IS_ERR(device)) {
-		dasd_remove_sysfs_files(cdev);
+	if (IS_ERR(device))
 		return;
-	}
+
 	if (test_and_set_bit(DASD_FLAG_OFFLINE, &device->flags) &&
 	    !test_bit(DASD_FLAG_SAFE_OFFLINE_RUNNING, &device->flags)) {
 		/* Already doing offline processing */
 		dasd_put_device(device);
-		dasd_remove_sysfs_files(cdev);
 		return;
 	}
 	/*
@@ -3515,8 +3504,6 @@ void dasd_generic_remove(struct ccw_device *cdev)
 	 */
 	if (block)
 		dasd_free_block(block);
-
-	dasd_remove_sysfs_files(cdev);
 }
 EXPORT_SYMBOL_GPL(dasd_generic_remove);


@@ -1772,12 +1772,13 @@ static const struct attribute_group ext_pool_attr_group = {
 	.attrs = ext_pool_attrs,
 };
 
-static const struct attribute_group *dasd_attr_groups[] = {
+const struct attribute_group *dasd_dev_groups[] = {
 	&dasd_attr_group,
 	&capacity_attr_group,
 	&ext_pool_attr_group,
 	NULL,
 };
+EXPORT_SYMBOL_GPL(dasd_dev_groups);
 
 /*
  * Return value of the specified feature.
@@ -1895,18 +1896,6 @@ void dasd_path_remove_kobjects(struct dasd_device *device)
 }
 EXPORT_SYMBOL(dasd_path_remove_kobjects);
 
-int dasd_add_sysfs_files(struct ccw_device *cdev)
-{
-	return sysfs_create_groups(&cdev->dev.kobj, dasd_attr_groups);
-}
-
-void
-dasd_remove_sysfs_files(struct ccw_device *cdev)
-{
-	sysfs_remove_groups(&cdev->dev.kobj, dasd_attr_groups);
-}
-
 int
 dasd_devmap_init(void)
 {


@@ -6630,6 +6630,7 @@ static struct ccw_driver dasd_eckd_driver = {
 	.driver = {
 		.name	= "dasd-eckd",
 		.owner	= THIS_MODULE,
+		.dev_groups = dasd_dev_groups,
 	},
 	.ids	     = dasd_eckd_ids,
 	.probe	     = dasd_eckd_probe,


@@ -54,13 +54,6 @@ static struct ccw_device_id dasd_fba_ids[] = {
 
 MODULE_DEVICE_TABLE(ccw, dasd_fba_ids);
 
-static struct ccw_driver dasd_fba_driver; /* see below */
-static int
-dasd_fba_probe(struct ccw_device *cdev)
-{
-	return dasd_generic_probe(cdev);
-}
-
 static int
 dasd_fba_set_online(struct ccw_device *cdev)
 {
@@ -71,9 +64,10 @@ static struct ccw_driver dasd_fba_driver = {
 	.driver = {
 		.name	= "dasd-fba",
 		.owner	= THIS_MODULE,
+		.dev_groups = dasd_dev_groups,
 	},
 	.ids	     = dasd_fba_ids,
-	.probe	     = dasd_fba_probe,
+	.probe	     = dasd_generic_probe,
 	.remove      = dasd_generic_remove,
 	.set_offline = dasd_generic_set_offline,
 	.set_online  = dasd_fba_set_online,


@@ -854,8 +854,7 @@ void dasd_delete_device(struct dasd_device *);
 int dasd_get_feature(struct ccw_device *, int);
 int dasd_set_feature(struct ccw_device *, int, int);
 
-int dasd_add_sysfs_files(struct ccw_device *);
-void dasd_remove_sysfs_files(struct ccw_device *);
+extern const struct attribute_group *dasd_dev_groups[];
 void dasd_path_create_kobj(struct dasd_device *, int);
 void dasd_path_create_kobjects(struct dasd_device *);
 void dasd_path_remove_kobjects(struct dasd_device *);


@@ -1265,9 +1265,6 @@ rescan:
if (disk_part_scan_enabled(disk) ||
!(disk->flags & GENHD_FL_REMOVABLE))
set_capacity(disk, 0);
-} else {
-if (disk->fops->revalidate_disk)
-disk->fops->revalidate_disk(disk);
}

if (get_capacity(disk)) {
@@ -1439,10 +1436,6 @@ struct block_device *blkdev_get_by_dev(dev_t dev, fmode_t mode, void *holder)
if (ret)
return ERR_PTR(ret);

-/*
-* If we lost a race with 'disk' being deleted, try again. See md.c.
-*/
-retry:
bdev = blkdev_get_no_open(dev);
if (!bdev)
return ERR_PTR(-ENXIO);
@@ -1489,8 +1482,6 @@ abort_claiming:
disk_unblock_events(disk);
put_blkdev:
blkdev_put_no_open(bdev);
-if (ret == -ERESTARTSYS)
-goto retry;
return ERR_PTR(ret);
}
EXPORT_SYMBOL(blkdev_get_by_dev);
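These hunks drop the ->revalidate_disk call from the rescan path and the -ERESTARTSYS retry loop from blkdev_get_by_dev(), so a caller that loses a race with a disk being deleted now simply gets the error back. A rough usage sketch under those semantics; blkdev_get_by_dev(), blkdev_put() and get_capacity() are the real interfaces, while the example_* helper and its printout are made up:

#include <linux/blkdev.h>
#include <linux/err.h>
#include <linux/printk.h>

/* Illustrative helper: open a block device by dev_t for reading and
 * report its capacity.  Errors from blkdev_get_by_dev() are returned
 * as-is; there is no retry loop in the helper or inside the call.
 */
static int example_report_capacity(dev_t devt)
{
	struct block_device *bdev;
	sector_t capacity;

	bdev = blkdev_get_by_dev(devt, FMODE_READ, NULL);
	if (IS_ERR(bdev))
		return PTR_ERR(bdev);

	capacity = get_capacity(bdev->bd_disk);
	pr_info("device %u:%u has %llu sectors\n",
		MAJOR(devt), MINOR(devt), (unsigned long long)capacity);

	blkdev_put(bdev, FMODE_READ);
	return 0;
}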

View File

@@ -1862,7 +1862,6 @@ struct block_device_operations {
unsigned int (*check_events) (struct gendisk *disk,
unsigned int clearing);
void (*unlock_native_capacity) (struct gendisk *);
-int (*revalidate_disk) (struct gendisk *);
int (*getgeo)(struct block_device *, struct hd_geometry *);
int (*set_read_only)(struct block_device *bdev, bool ro);
/* this callback is with swap_lock and sometimes page table lock held */
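With the ->revalidate_disk method removed from block_device_operations, a driver's ops table shrinks to the callbacks it genuinely implements, and any capacity refresh has to happen in the driver's own open/ioctl/worker paths. A hedged sketch of a minimal ops table after this change; all example_* names and the fake geometry are illustrative:

#include <linux/blkdev.h>
#include <linux/hdreg.h>
#include <linux/module.h>

static int example_open(struct block_device *bdev, fmode_t mode)
{
	return 0;
}

static void example_release(struct gendisk *disk, fmode_t mode)
{
}

static int example_getgeo(struct block_device *bdev, struct hd_geometry *geo)
{
	/* fake CHS geometry derived from the capacity */
	geo->heads = 4;
	geo->sectors = 16;
	geo->cylinders = get_capacity(bdev->bd_disk) >> 6;
	geo->start = 0;
	return 0;
}

/* Note: no .revalidate_disk member any more as of this series. */
static const struct block_device_operations example_fops = {
	.owner = THIS_MODULE,
	.open = example_open,
	.release = example_release,
	.getgeo = example_getgeo,
};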

View File

@@ -112,10 +112,8 @@ struct nvm_dev_ops {
#ifdef CONFIG_NVM
-#include <linux/blkdev.h>
#include <linux/file.h>
#include <linux/dmapool.h>
-#include <uapi/linux/lightnvm.h>
enum {
/* HW Responsibilities */

View File

@@ -405,6 +405,16 @@ struct nvme_id_ctrl_zns {
__u8 rsvd1[4095];
};
+struct nvme_id_ctrl_nvm {
+__u8 vsl;
+__u8 wzsl;
+__u8 wusl;
+__u8 dmrl;
+__le32 dmrsl;
+__le64 dmsl;
+__u8 rsvd16[4080];
+};
enum {
NVME_ID_CNS_NS = 0x00,
NVME_ID_CNS_CTRL = 0x01,
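struct nvme_id_ctrl_nvm carries the NVM command set specific controller data; the field the host is most likely to act on is dmrsl (Dataset Management Range Size Limit), expressed in logical blocks. A hedged sketch of turning that value into a sector limit; the helper name and the treatment of zero are assumptions, only the structure layout comes from the hunk above:

#include <linux/blkdev.h>
#include <linux/kernel.h>
#include <linux/nvme.h>

/*
 * Illustrative only: derive a max_discard_sectors style value from the
 * Dataset Management Range Size Limit reported by the controller.
 * dmrsl is in logical blocks; treat 0 as "no limit reported".
 */
static unsigned int example_dmrsl_to_sectors(struct nvme_id_ctrl_nvm *id,
					     unsigned int lba_shift)
{
	u32 dmrsl = le32_to_cpu(id->dmrsl);

	if (!dmrsl)
		return UINT_MAX;

	return dmrsl << (lba_shift - SECTOR_SHIFT);
}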

View File

@@ -49,11 +49,11 @@ struct floppy_struct {
#define FDCLRPRM _IO(2, 0x41)
/* clear user-defined parameters */
#define FDSETPRM _IOW(2, 0x42, struct floppy_struct)
#define FDSETMEDIAPRM FDSETPRM
/* set user-defined parameters for current media */
#define FDDEFPRM _IOW(2, 0x43, struct floppy_struct)
#define FDGETPRM _IOR(2, 0x04, struct floppy_struct)
#define FDDEFMEDIAPRM FDDEFPRM
#define FDGETMEDIAPRM FDGETPRM
@@ -65,7 +65,7 @@ struct floppy_struct {
/* issue/don't issue kernel messages on media type change */
/*
* Formatting (obsolete)
*/
#define FD_FILL_BYTE 0xF6 /* format fill byte. */
@@ -126,13 +126,13 @@ typedef char floppy_drive_name[16];
*/
struct floppy_drive_params {
signed char cmos; /* CMOS type */
/* Spec2 is (HLD<<1 | ND), where HLD is head load time (1=2ms, 2=4 ms
* etc) and ND is set means no DMA. Hardcoded to 6 (HLD=6ms, use DMA).
*/
unsigned long max_dtr; /* Step rate, usec */
unsigned long hlt; /* Head load/settle time, msec */
unsigned long hut; /* Head unload time (remnant of
* 8" drives) */
unsigned long srt; /* Step rate, usec */
@@ -145,12 +145,12 @@ struct floppy_drive_params {
unsigned char rps; /* rotations per second */
unsigned char tracks; /* maximum number of tracks */
unsigned long timeout; /* timeout for interrupt requests */
unsigned char interleave_sect; /* if there are more sectors, use
* interleave */
struct floppy_max_errors max_errors;
char flags; /* various flags, including ftd_msg */
/*
* Announce successful media type detection and media information loss after
@@ -162,7 +162,7 @@ struct floppy_drive_params {
#define FD_BROKEN_DCL 0x20
#define FD_DEBUG 0x02
#define FD_SILENT_DCL_CLEAR 0x4
#define FD_INVERTED_DCL 0x80 /* must be 0x80, because of hardware
considerations */
char read_track; /* use readtrack during probing? */
@@ -176,8 +176,8 @@ struct floppy_drive_params {
#define FD_AUTODETECT_SIZE 8
short autodetect[FD_AUTODETECT_SIZE]; /* autodetected formats */
int checkfreq; /* how often should the drive be checked for disk
* changes */
int native_format; /* native format of this drive */
};
@@ -225,13 +225,13 @@ struct floppy_drive_struct {
* decremented after each probe.
*/
int keep_data;
/* Prevent "aliased" accesses. */
int fd_ref;
int fd_device;
unsigned long last_checked; /* when was the drive last checked for a disk
* change? */
char *dmabuf;
int bufblocks;
};
@@ -255,7 +255,7 @@ enum reset_mode {
/*
* FDC state
*/
struct floppy_fdc_state {
int spec1; /* spec1 value last used */
int spec2; /* spec2 value last used */
int dtr;
@@ -302,16 +302,16 @@ struct floppy_write_errors {
* to the user process are not counted.
*/
unsigned int write_errors; /* number of physical write errors
* encountered */
/* position of first and last write errors */
unsigned long first_error_sector;
int first_error_generation;
unsigned long last_error_sector;
int last_error_generation;
unsigned int badness; /* highest retry count for a read or write
* operation */
};
@@ -335,7 +335,7 @@ struct floppy_raw_cmd {
#define FD_RAW_DISK_CHANGE 4 /* out: disk change flag was set */
#define FD_RAW_INTR 8 /* wait for an interrupt */
#define FD_RAW_SPIN 0x10 /* spin up the disk for this command */
#define FD_RAW_NO_MOTOR_AFTER 0x20 /* switch the motor off after command
* completion */
#define FD_RAW_NEED_DISK 0x40 /* this command needs a disk to be present */
#define FD_RAW_NEED_SEEK 0x80 /* this command uses an implied seek (soft) */
@@ -353,7 +353,7 @@ struct floppy_raw_cmd {
void __user *data;
char *kernel_data; /* location of data buffer in the kernel */
struct floppy_raw_cmd *next; /* used for chaining of raw cmd's
* within the kernel */
long length; /* in: length of dma transfer. out: remaining bytes */
long phys_length; /* physical length, if different from dma length */
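The floppy uapi hunks above are comment and whitespace cleanups; the ioctl numbers and structures are unchanged. For orientation, a small userspace sketch that uses FDGETPRM from this header to read the configured geometry (device node and error handling are illustrative):

#include <fcntl.h>
#include <linux/fd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	struct floppy_struct prm;
	int fd = open("/dev/fd0", O_RDONLY | O_NDELAY);

	if (fd < 0) {
		perror("open /dev/fd0");
		return 1;
	}
	if (ioctl(fd, FDGETPRM, &prm) < 0) {
		perror("FDGETPRM");
		close(fd);
		return 1;
	}
	/* size is total sectors; track/head/sect describe the geometry */
	printf("%u sectors total: %u tracks, %u heads, %u sectors/track\n",
	       prm.size, prm.track, prm.head, prm.sect);
	close(fd);
	return 0;
}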

View File

@@ -22,7 +22,6 @@
#ifdef __KERNEL__
#include <linux/const.h>
-#include <linux/ioctl.h>
#else /* __KERNEL__ */
#include <stdio.h>
#include <sys/ioctl.h>