/*
 * raid1.c : Multiple Devices driver for Linux
 *
 * Copyright (C) 1999, 2000, 2001 Ingo Molnar, Red Hat
 *
 * Copyright (C) 1996, 1997, 1998 Ingo Molnar, Miguel de Icaza, Gadi Oxman
 *
 * RAID-1 management functions.
 *
 * Better read-balancing code written by Mika Kuoppala <miku@iki.fi>, 2000
 *
 * Fixes to reconstruction by Jakob Østergaard <jakob@ostenfeld.dk>
 * Various fixes by Neil Brown <neilb@cse.unsw.edu.au>
 *
 * Changes by Peter T. Breuer <ptb@it.uc3m.es> 31/1/2003 to support
 * bitmapped intelligence in resync:
 *
 * - bitmap marked during normal i/o
 * - bitmap used to skip nondirty blocks during sync
 *
 * Additions to bitmap code, (C) 2003-2004 Paul Clements, SteelEye Technology:
 * - persistent bitmap code
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2, or (at your option)
 * any later version.
 *
 * You should have received a copy of the GNU General Public License
 * (for example /usr/src/linux/COPYING); if not, write to the Free
 * Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/blkdev.h>
#include <linux/module.h>
#include <linux/seq_file.h>
#include <linux/ratelimit.h>
#include "md.h"
#include "raid1.h"
#include "bitmap.h"

/*
 * Number of guaranteed r1bios in case of extreme VM load:
 */
#define NR_RAID1_BIOS 256

/* when we get a read error on a read-only array, we redirect to another
 * device without failing the first device, or trying to over-write to
 * correct the read error. To keep track of bad blocks on a per-bio
 * level, we store IO_BLOCKED in the appropriate 'bios' pointer
 */
#define IO_BLOCKED ((struct bio *)1)
/* When we successfully write to a known bad-block, we need to remove the
 * bad-block marking which must be done from process context. So we record
 * the success by setting devs[n].bio to IO_MADE_GOOD
 */
#define IO_MADE_GOOD ((struct bio *)2)

#define BIO_SPECIAL(bio) ((unsigned long)bio <= 2)
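/* BIO_SPECIAL() matches the placeholder values a bios[] slot may hold
 * instead of a real bio pointer: NULL, IO_BLOCKED (1) and IO_MADE_GOOD (2).
 */
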
/* When there are this many requests queued to be written by
 * the raid1 thread, we become 'congested' to provide back-pressure
 * for writeback.
 */
static int max_queued_requests = 1024;

static void allow_barrier(struct r1conf *conf, sector_t start_next_window,
                          sector_t bi_sector);
static void lower_barrier(struct r1conf *conf);
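
/*
 * Resync I/O and normal I/O are kept apart by a sliding-window barrier
 * rather than by suspending all normal I/O: only requests that fall close
 * to the region currently being resynced have to wait.  Each r1_bio
 * records conf->start_next_window at the time wait_barrier() let it in,
 * and hands that value back via allow_barrier() so the request is removed
 * from the counter of the window it was charged against.
 */
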
static void * r1bio_pool_alloc(gfp_t gfp_flags, void *data)
{
        struct pool_info *pi = data;
        int size = offsetof(struct r1bio, bios[pi->raid_disks]);

        /* allocate a r1bio with room for raid_disks entries in the bios array */
        return kzalloc(size, gfp_flags);
}

static void r1bio_pool_free(void *r1_bio, void *data)
{
        kfree(r1_bio);
}

#define RESYNC_BLOCK_SIZE (64*1024)
#define RESYNC_DEPTH 32
#define RESYNC_SECTORS (RESYNC_BLOCK_SIZE >> 9)
#define RESYNC_PAGES ((RESYNC_BLOCK_SIZE + PAGE_SIZE-1) / PAGE_SIZE)
#define RESYNC_WINDOW (RESYNC_BLOCK_SIZE * RESYNC_DEPTH)
#define RESYNC_WINDOW_SECTORS (RESYNC_WINDOW >> 9)
#define CLUSTER_RESYNC_WINDOW (16 * RESYNC_WINDOW)
#define CLUSTER_RESYNC_WINDOW_SECTORS (CLUSTER_RESYNC_WINDOW >> 9)
#define NEXT_NORMALIO_DISTANCE (3 * RESYNC_WINDOW_SECTORS)
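
/*
 * With RESYNC_DEPTH requests of RESYNC_BLOCK_SIZE in flight at once, the
 * resync window is 32 * 64KiB = 2MiB (RESYNC_WINDOW_SECTORS = 4096), and
 * NEXT_NORMALIO_DISTANCE places start_next_window three resync windows
 * (12288 sectors) beyond next_resync; normal I/O past that point does not
 * need to wait for resync I/O at all.
 */
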
static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
{
        struct pool_info *pi = data;
        struct r1bio *r1_bio;
        struct bio *bio;
        int need_pages;
        int i, j;

        r1_bio = r1bio_pool_alloc(gfp_flags, pi);
        if (!r1_bio)
                return NULL;

        /*
         * Allocate bios : 1 for reading, n-1 for writing
         */
        for (j = pi->raid_disks ; j-- ; ) {
                bio = bio_kmalloc(gfp_flags, RESYNC_PAGES);
                if (!bio)
                        goto out_free_bio;
                r1_bio->bios[j] = bio;
        }
        /*
         * Allocate RESYNC_PAGES data pages and attach them to
         * the first bio.
         * If this is a user-requested check/repair, allocate
         * RESYNC_PAGES for each bio.
         */
        if (test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery))
                need_pages = pi->raid_disks;
        else
                need_pages = 1;
        for (j = 0; j < need_pages; j++) {
                bio = r1_bio->bios[j];
                bio->bi_vcnt = RESYNC_PAGES;

                if (bio_alloc_pages(bio, gfp_flags))
                        goto out_free_pages;
        }
        /* If not user-requested, copy the page pointers to all bios */
        if (!test_bit(MD_RECOVERY_REQUESTED, &pi->mddev->recovery)) {
                for (i = 0; i < RESYNC_PAGES; i++)
                        for (j = 1; j < pi->raid_disks; j++)
                                r1_bio->bios[j]->bi_io_vec[i].bv_page =
                                        r1_bio->bios[0]->bi_io_vec[i].bv_page;
        }

        r1_bio->master_bio = NULL;

        return r1_bio;

out_free_pages:
        while (--j >= 0) {
                struct bio_vec *bv;

                bio_for_each_segment_all(bv, r1_bio->bios[j], i)
                        __free_page(bv->bv_page);
        }

out_free_bio:
        while (++j < pi->raid_disks)
                bio_put(r1_bio->bios[j]);
        r1bio_pool_free(r1_bio, data);
        return NULL;
}

static void r1buf_pool_free(void *__r1_bio, void *data)
{
        struct pool_info *pi = data;
        int i, j;
        struct r1bio *r1bio = __r1_bio;

        for (i = 0; i < RESYNC_PAGES; i++)
                for (j = pi->raid_disks; j-- ;) {
                        if (j == 0 ||
                            r1bio->bios[j]->bi_io_vec[i].bv_page !=
                            r1bio->bios[0]->bi_io_vec[i].bv_page)
                                safe_put_page(r1bio->bios[j]->bi_io_vec[i].bv_page);
                }
        for (i = 0; i < pi->raid_disks; i++)
                bio_put(r1bio->bios[i]);

        r1bio_pool_free(r1bio, data);
}

static void put_all_bios(struct r1conf *conf, struct r1bio *r1_bio)
{
        int i;

        for (i = 0; i < conf->raid_disks * 2; i++) {
                struct bio **bio = r1_bio->bios + i;
                if (!BIO_SPECIAL(*bio))
                        bio_put(*bio);
                *bio = NULL;
        }
}

static void free_r1bio(struct r1bio *r1_bio)
{
        struct r1conf *conf = r1_bio->mddev->private;

        put_all_bios(conf, r1_bio);
        mempool_free(r1_bio, conf->r1bio_pool);
}

static void put_buf(struct r1bio *r1_bio)
{
        struct r1conf *conf = r1_bio->mddev->private;
        int i;

        for (i = 0; i < conf->raid_disks * 2; i++) {
                struct bio *bio = r1_bio->bios[i];
                if (bio->bi_end_io)
                        rdev_dec_pending(conf->mirrors[i].rdev, r1_bio->mddev);
        }

        mempool_free(r1_bio, conf->r1buf_pool);

        lower_barrier(conf);
}

static void reschedule_retry(struct r1bio *r1_bio)
{
        unsigned long flags;
        struct mddev *mddev = r1_bio->mddev;
        struct r1conf *conf = mddev->private;

        spin_lock_irqsave(&conf->device_lock, flags);
        list_add(&r1_bio->retry_list, &conf->retry_list);
        conf->nr_queued++;
        spin_unlock_irqrestore(&conf->device_lock, flags);

        wake_up(&conf->wait_barrier);
        md_wakeup_thread(mddev->thread);
}

/*
 * raid_end_bio_io() is called when we have finished servicing a mirrored
 * operation and are ready to return a success/failure code to the buffer
 * cache layer.
 */
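/*
 * A single master bio may have been split across several r1_bios; in that
 * case bio->bi_phys_segments counts the pieces still outstanding and the
 * master bio is only completed once the last piece finishes.
 */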
static void call_bio_endio(struct r1bio *r1_bio)
{
        struct bio *bio = r1_bio->master_bio;
        int done;
        struct r1conf *conf = r1_bio->mddev->private;
        sector_t start_next_window = r1_bio->start_next_window;
        sector_t bi_sector = bio->bi_iter.bi_sector;

        if (bio->bi_phys_segments) {
                unsigned long flags;
                spin_lock_irqsave(&conf->device_lock, flags);
                bio->bi_phys_segments--;
                done = (bio->bi_phys_segments == 0);
                spin_unlock_irqrestore(&conf->device_lock, flags);
                /*
                 * make_request() might be waiting for
                 * bi_phys_segments to decrease
                 */
                wake_up(&conf->wait_barrier);
        } else
                done = 1;

        if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
                bio->bi_error = -EIO;

        if (done) {
                bio_endio(bio);
                /*
                 * Wake up any possible resync thread that waits for the device
                 * to go idle.
                 */
                allow_barrier(conf, start_next_window, bi_sector);
        }
}

static void raid_end_bio_io(struct r1bio *r1_bio)
{
        struct bio *bio = r1_bio->master_bio;

        /* if nobody has done the final endio yet, do it now */
        if (!test_and_set_bit(R1BIO_Returned, &r1_bio->state)) {
                pr_debug("raid1: sync end %s on sectors %llu-%llu\n",
                         (bio_data_dir(bio) == WRITE) ? "write" : "read",
                         (unsigned long long) bio->bi_iter.bi_sector,
                         (unsigned long long) bio_end_sector(bio) - 1);

                call_bio_endio(r1_bio);
        }
        free_r1bio(r1_bio);
}

/*
 * Update disk head position estimator based on IRQ completion info.
 */
static inline void update_head_pos(int disk, struct r1bio *r1_bio)
{
        struct r1conf *conf = r1_bio->mddev->private;

        conf->mirrors[disk].head_position =
                r1_bio->sector + (r1_bio->sectors);
}

/*
 * Find the disk number which triggered given bio
 */
static int find_bio_disk(struct r1bio *r1_bio, struct bio *bio)
{
        int mirror;
        struct r1conf *conf = r1_bio->mddev->private;
        int raid_disks = conf->raid_disks;

        for (mirror = 0; mirror < raid_disks * 2; mirror++)
                if (r1_bio->bios[mirror] == bio)
                        break;

        BUG_ON(mirror == raid_disks * 2);
        update_head_pos(mirror, r1_bio);

        return mirror;
}

static void raid1_end_read_request(struct bio *bio)
{
        int uptodate = !bio->bi_error;
        struct r1bio *r1_bio = bio->bi_private;
        int mirror;
        struct r1conf *conf = r1_bio->mddev->private;

        mirror = r1_bio->read_disk;
        /*
         * this branch is our 'one mirror IO has finished' event handler:
         */
        update_head_pos(mirror, r1_bio);

        if (uptodate)
                set_bit(R1BIO_Uptodate, &r1_bio->state);
        else {
                /* If all other devices have failed, we want to return
                 * the error upwards rather than fail the last device.
                 * Here we redefine "uptodate" to mean "Don't want to retry"
                 */
                unsigned long flags;
                spin_lock_irqsave(&conf->device_lock, flags);
                if (r1_bio->mddev->degraded == conf->raid_disks ||
                    (r1_bio->mddev->degraded == conf->raid_disks-1 &&
                     test_bit(In_sync, &conf->mirrors[mirror].rdev->flags)))
                        uptodate = 1;
                spin_unlock_irqrestore(&conf->device_lock, flags);
        }

        if (uptodate) {
                raid_end_bio_io(r1_bio);
                rdev_dec_pending(conf->mirrors[mirror].rdev, conf->mddev);
        } else {
                /*
                 * oops, read error:
                 */
                char b[BDEVNAME_SIZE];
                printk_ratelimited(
                        KERN_ERR "md/raid1:%s: %s: "
                        "rescheduling sector %llu\n",
                        mdname(conf->mddev),
                        bdevname(conf->mirrors[mirror].rdev->bdev,
                                 b),
                        (unsigned long long)r1_bio->sector);
                set_bit(R1BIO_ReadError, &r1_bio->state);
                reschedule_retry(r1_bio);
                /* don't drop the reference on read_disk yet */
        }
}

static void close_write(struct r1bio *r1_bio)
{
        /* it really is the end of this request */
        if (test_bit(R1BIO_BehindIO, &r1_bio->state)) {
                /* free extra copy of the data pages */
                int i = r1_bio->behind_page_count;
                while (i--)
                        safe_put_page(r1_bio->behind_bvecs[i].bv_page);
                kfree(r1_bio->behind_bvecs);
                r1_bio->behind_bvecs = NULL;
        }
        /* clear the bitmap if all writes complete successfully */
        bitmap_endwrite(r1_bio->mddev->bitmap, r1_bio->sector,
                        r1_bio->sectors,
                        !test_bit(R1BIO_Degraded, &r1_bio->state),
                        test_bit(R1BIO_BehindIO, &r1_bio->state));
        md_write_end(r1_bio->mddev);
}

static void r1_bio_write_done(struct r1bio *r1_bio)
{
        if (!atomic_dec_and_test(&r1_bio->remaining))
                return;

        if (test_bit(R1BIO_WriteError, &r1_bio->state))
                reschedule_retry(r1_bio);
        else {
                close_write(r1_bio);
                if (test_bit(R1BIO_MadeGood, &r1_bio->state))
                        reschedule_retry(r1_bio);
                else
                        raid_end_bio_io(r1_bio);
        }
}

static void raid1_end_write_request(struct bio *bio)
{
        struct r1bio *r1_bio = bio->bi_private;
        int mirror, behind = test_bit(R1BIO_BehindIO, &r1_bio->state);
        struct r1conf *conf = r1_bio->mddev->private;
        struct bio *to_put = NULL;

        mirror = find_bio_disk(r1_bio, bio);

        /*
         * 'one mirror IO has finished' event handler:
         */
        if (bio->bi_error) {
                set_bit(WriteErrorSeen,
                        &conf->mirrors[mirror].rdev->flags);
                if (!test_and_set_bit(WantReplacement,
                                      &conf->mirrors[mirror].rdev->flags))
                        set_bit(MD_RECOVERY_NEEDED, &
                                conf->mddev->recovery);

                set_bit(R1BIO_WriteError, &r1_bio->state);
        } else {
                /*
                 * Set R1BIO_Uptodate in our master bio, so that we
                 * will return a good error code to the higher
                 * levels even if IO on some other mirrored buffer
                 * fails.
                 *
                 * The 'master' represents the composite IO operation
                 * to user-side. So if something waits for IO, then it
                 * will wait for the 'master' bio.
                 */
                sector_t first_bad;
                int bad_sectors;

                r1_bio->bios[mirror] = NULL;
                to_put = bio;
                /*
                 * Do not set R1BIO_Uptodate if the current device is
                 * rebuilding or Faulty. This is because we cannot use
                 * such device for properly reading the data back (we could
                 * potentially use it, if the current write would have fallen
                 * before rdev->recovery_offset, but for simplicity we don't
                 * check this here).
                 */
                if (test_bit(In_sync, &conf->mirrors[mirror].rdev->flags) &&
                    !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags))
                        set_bit(R1BIO_Uptodate, &r1_bio->state);

                /* Maybe we can clear some bad blocks. */
                if (is_badblock(conf->mirrors[mirror].rdev,
                                r1_bio->sector, r1_bio->sectors,
                                &first_bad, &bad_sectors)) {
                        r1_bio->bios[mirror] = IO_MADE_GOOD;
                        set_bit(R1BIO_MadeGood, &r1_bio->state);
                }
        }

        if (behind) {
                if (test_bit(WriteMostly, &conf->mirrors[mirror].rdev->flags))
                        atomic_dec(&r1_bio->behind_remaining);

                /*
                 * In behind mode, we ACK the master bio once the I/O
                 * has safely reached all non-writemostly
                 * disks. Setting the Returned bit ensures that this
                 * gets done only once -- we don't ever want to return
                 * -EIO here, instead we'll wait
                 */
                if (atomic_read(&r1_bio->behind_remaining) >= (atomic_read(&r1_bio->remaining)-1) &&
                    test_bit(R1BIO_Uptodate, &r1_bio->state)) {
                        /* Maybe we can return now */
                        if (!test_and_set_bit(R1BIO_Returned, &r1_bio->state)) {
                                struct bio *mbio = r1_bio->master_bio;
                                pr_debug("raid1: behind end write sectors"
                                         " %llu-%llu\n",
                                         (unsigned long long) mbio->bi_iter.bi_sector,
                                         (unsigned long long) bio_end_sector(mbio) - 1);
                                call_bio_endio(r1_bio);
                        }
                }
        }
        if (r1_bio->bios[mirror] == NULL)
                rdev_dec_pending(conf->mirrors[mirror].rdev,
                                 conf->mddev);

        /*
         * Let's see if all mirrored write operations have finished
         * already.
         */
        r1_bio_write_done(r1_bio);

        if (to_put)
                bio_put(to_put);
}

/*
 * This routine returns the disk from which the requested read should
 * be done. There is a per-array 'next expected sequential IO' sector
 * number - if this matches on the next IO then we use the last disk.
 * There is also a per-disk 'last known head position' sector that is
 * maintained from IRQ contexts, both the normal and the resync IO
 * completion handlers update this position correctly. If there is no
 * perfect sequential match then we pick the disk whose head is closest.
 *
 * If there are 2 mirrors in the same 2 devices, performance degrades
 * because position is mirror, not device based.
 *
 * The rdev for the device selected will have nr_pending incremented.
 */
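/*
 * When the array contains SSDs, distance from the previous request means
 * little, so the balancing below can instead prefer the disk with the
 * fewest pending requests rather than the one whose head is closest.
 */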
2011-10-11 13:49:05 +08:00
|
|
|
static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sectors)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2010-05-08 06:20:17 +08:00
|
|
|
const sector_t this_sector = r1_bio->sector;
|
2011-07-28 09:31:48 +08:00
|
|
|
int sectors;
|
|
|
|
int best_good_sectors;
|
md/raid1: read balance chooses idlest disk for SSD
SSD hasn't spindle, distance between requests means nothing. And the original
distance based algorithm sometimes can cause severe performance issue for SSD
raid.
Considering two thread groups, one accesses file A, the other access file B.
The first group will access one disk and the second will access the other disk,
because requests are near from one group and far between groups. In this case,
read balance might keep one disk very busy but the other relative idle. For
SSD, we should try best to distribute requests to as many disks as possible.
There isn't spindle move penality anyway.
With below patch, I can see more than 50% throughput improvement sometimes
depending on workloads.
The only exception is small requests can be merged to a big request which
typically can drive higher throughput for SSD too. Such small requests are
sequential reads. Unlike hard disk, sequential read which can't be merged (for
example direct IO, or read without readahead) can be ignored for SSD. Again
there is no spindle move penality. readahead dispatches small requests and such
requests can be merged.
Last patch can help detect sequential read well, at least if concurrent read
number isn't greater than raid disk number. In that case, distance based
algorithm doesn't work well too.
V2: For hard disk and SSD mixed raid, doesn't use distance based algorithm for
random IO too. This makes the algorithm generic for raid with SSD.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2012-07-31 08:03:53 +08:00
|
|
|
int best_disk, best_dist_disk, best_pending_disk;
|
|
|
|
int has_nonrot_disk;
|
2012-07-31 08:03:53 +08:00
|
|
|
int disk;
|
2011-05-11 12:34:56 +08:00
|
|
|
sector_t best_dist;
|
md/raid1: read balance chooses idlest disk for SSD
SSD hasn't spindle, distance between requests means nothing. And the original
distance based algorithm sometimes can cause severe performance issue for SSD
raid.
Considering two thread groups, one accesses file A, the other access file B.
The first group will access one disk and the second will access the other disk,
because requests are near from one group and far between groups. In this case,
read balance might keep one disk very busy but the other relative idle. For
SSD, we should try best to distribute requests to as many disks as possible.
There isn't spindle move penality anyway.
With below patch, I can see more than 50% throughput improvement sometimes
depending on workloads.
The only exception is small requests can be merged to a big request which
typically can drive higher throughput for SSD too. Such small requests are
sequential reads. Unlike hard disk, sequential read which can't be merged (for
example direct IO, or read without readahead) can be ignored for SSD. Again
there is no spindle move penality. readahead dispatches small requests and such
requests can be merged.
Last patch can help detect sequential read well, at least if concurrent read
number isn't greater than raid disk number. In that case, distance based
algorithm doesn't work well too.
V2: For hard disk and SSD mixed raid, doesn't use distance based algorithm for
random IO too. This makes the algorithm generic for raid with SSD.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2012-07-31 08:03:53 +08:00
|
|
|
unsigned int min_pending;
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev;
|
2010-09-06 12:10:08 +08:00
|
|
|
int choose_first;
|
md/raid1: prevent merging too large request
For SSD, if request size exceeds specific value (optimal io size), request size
isn't important for bandwidth. In such condition, if making request size bigger
will cause some disks idle, the total throughput will actually drop. A good
example is doing a readahead in a two-disk raid1 setup.
So when should we split big requests? We absolutly don't want to split big
request to very small requests. Even in SSD, big request transfer is more
efficient. This patch only considers request with size above optimal io size.
If all disks are busy, is it worth doing a split? Say optimal io size is 16k,
two requests 32k and two disks. We can let each disk run one 32k request, or
split the requests to 4 16k requests and each disk runs two. It's hard to say
which case is better, depending on hardware.
So only consider case where there are idle disks. For readahead, split is
always better in this case. And in my test, below patch can improve > 30%
thoughput. Hmm, not 100%, because disk isn't 100% busy.
Such case can happen not just in readahead, for example, in directio. But I
suppose directio usually will have bigger IO depth and make all disks busy, so
I ignored it.
Note: if the raid uses any hard disk, we don't prevent merging. That will make
performace worse.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2012-07-31 08:03:53 +08:00
|
|
|
int choose_next_idle;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
rcu_read_lock();
|
|
|
|
/*
|
2005-09-10 07:23:45 +08:00
|
|
|
* Check if we can balance. We can balance on the whole
|
2005-04-17 06:20:36 +08:00
|
|
|
* device if no resync is going on, or below the resync window.
|
|
|
|
* We take the first readable disk when above the resync window.
|
|
|
|
*/
|
|
|
|
retry:
|
2011-07-28 09:31:48 +08:00
|
|
|
sectors = r1_bio->sectors;
|
2011-05-11 12:34:56 +08:00
|
|
|
best_disk = -1;
|
md/raid1: read balance chooses idlest disk for SSD
SSD hasn't spindle, distance between requests means nothing. And the original
distance based algorithm sometimes can cause severe performance issue for SSD
raid.
Considering two thread groups, one accesses file A, the other access file B.
The first group will access one disk and the second will access the other disk,
because requests are near from one group and far between groups. In this case,
read balance might keep one disk very busy but the other relative idle. For
SSD, we should try best to distribute requests to as many disks as possible.
There isn't spindle move penality anyway.
With below patch, I can see more than 50% throughput improvement sometimes
depending on workloads.
The only exception is small requests can be merged to a big request which
typically can drive higher throughput for SSD too. Such small requests are
sequential reads. Unlike hard disk, sequential read which can't be merged (for
example direct IO, or read without readahead) can be ignored for SSD. Again
there is no spindle move penality. readahead dispatches small requests and such
requests can be merged.
Last patch can help detect sequential read well, at least if concurrent read
number isn't greater than raid disk number. In that case, distance based
algorithm doesn't work well too.
V2: For hard disk and SSD mixed raid, doesn't use distance based algorithm for
random IO too. This makes the algorithm generic for raid with SSD.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2012-07-31 08:03:53 +08:00
|
|
|
best_dist_disk = -1;
|
2011-05-11 12:34:56 +08:00
|
|
|
best_dist = MaxSector;
|
2012-07-31 08:03:53 +08:00
|
|
|
best_pending_disk = -1;
|
|
|
|
min_pending = UINT_MAX;
|
2011-07-28 09:31:48 +08:00
|
|
|
best_good_sectors = 0;
|
2012-07-31 08:03:53 +08:00
|
|
|
has_nonrot_disk = 0;
|
2012-07-31 08:03:53 +08:00
|
|
|
choose_next_idle = 0;
|
2011-07-28 09:31:48 +08:00
|
|
|
|
2014-08-12 23:13:19 +08:00
|
|
|
if ((conf->mddev->recovery_cp < this_sector + sectors) ||
|
|
|
|
(mddev_is_clustered(conf->mddev) &&
|
2015-06-24 22:30:32 +08:00
|
|
|
md_cluster_ops->area_resyncing(conf->mddev, READ, this_sector,
|
2014-08-12 23:13:19 +08:00
|
|
|
this_sector + sectors)))
|
|
|
|
choose_first = 1;
|
|
|
|
else
|
|
|
|
choose_first = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-07-31 08:03:53 +08:00
|
|
|
for (disk = 0 ; disk < conf->raid_disks * 2 ; disk++) {
|
2011-05-11 12:34:56 +08:00
|
|
|
sector_t dist;
|
2011-07-28 09:31:48 +08:00
|
|
|
sector_t first_bad;
|
|
|
|
int bad_sectors;
|
2012-07-31 08:03:53 +08:00
|
|
|
unsigned int pending;
|
2012-07-31 08:03:53 +08:00
|
|
|
bool nonrot;
|
2011-07-28 09:31:48 +08:00
|
|
|
|
2010-09-06 12:10:08 +08:00
|
|
|
rdev = rcu_dereference(conf->mirrors[disk].rdev);
|
|
|
|
if (r1_bio->bios[disk] == IO_BLOCKED
|
|
|
|
|| rdev == NULL
|
2011-05-11 12:34:56 +08:00
|
|
|
|| test_bit(Faulty, &rdev->flags))
|
2010-09-06 12:10:08 +08:00
|
|
|
continue;
|
2011-05-11 12:34:56 +08:00
|
|
|
if (!test_bit(In_sync, &rdev->flags) &&
|
|
|
|
rdev->recovery_offset < this_sector + sectors)
|
2005-04-17 06:20:36 +08:00
|
|
|
continue;
|
2011-05-11 12:34:56 +08:00
|
|
|
if (test_bit(WriteMostly, &rdev->flags)) {
|
|
|
|
/* Don't balance among write-mostly, just
|
|
|
|
* use the first as a last resort */
|
2015-02-23 08:00:38 +08:00
|
|
|
if (best_dist_disk < 0) {
|
2012-01-08 22:41:51 +08:00
|
|
|
if (is_badblock(rdev, this_sector, sectors,
|
|
|
|
&first_bad, &bad_sectors)) {
|
2016-03-21 19:18:32 +08:00
|
|
|
if (first_bad <= this_sector)
|
2012-01-08 22:41:51 +08:00
|
|
|
/* Cannot use this */
|
|
|
|
continue;
|
|
|
|
best_good_sectors = first_bad - this_sector;
|
|
|
|
} else
|
|
|
|
best_good_sectors = sectors;
|
2015-02-23 08:00:38 +08:00
|
|
|
best_dist_disk = disk;
|
|
|
|
best_pending_disk = disk;
|
2012-01-08 22:41:51 +08:00
|
|
|
}
|
2011-05-11 12:34:56 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
/* This is a reasonable device to use. It might
|
|
|
|
* even be best.
|
|
|
|
*/
|
2011-07-28 09:31:48 +08:00
|
|
|
if (is_badblock(rdev, this_sector, sectors,
|
|
|
|
&first_bad, &bad_sectors)) {
|
|
|
|
if (best_dist < MaxSector)
|
|
|
|
/* already have a better device */
|
|
|
|
continue;
|
|
|
|
if (first_bad <= this_sector) {
|
|
|
|
/* cannot read here. If this is the 'primary'
|
|
|
|
* device, then we must not read beyond
|
|
|
|
* bad_sectors from another device..
|
|
|
|
*/
|
|
|
|
bad_sectors -= (this_sector - first_bad);
|
|
|
|
if (choose_first && sectors > bad_sectors)
|
|
|
|
sectors = bad_sectors;
|
|
|
|
if (best_good_sectors > sectors)
|
|
|
|
best_good_sectors = sectors;
|
|
|
|
|
|
|
|
} else {
|
|
|
|
sector_t good_sectors = first_bad - this_sector;
|
|
|
|
if (good_sectors > best_good_sectors) {
|
|
|
|
best_good_sectors = good_sectors;
|
|
|
|
best_disk = disk;
|
|
|
|
}
|
|
|
|
if (choose_first)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
continue;
|
|
|
|
} else
|
|
|
|
best_good_sectors = sectors;
|
|
|
|
|
2012-07-31 08:03:53 +08:00
|
|
|
nonrot = blk_queue_nonrot(bdev_get_queue(rdev->bdev));
|
|
|
|
has_nonrot_disk |= nonrot;
|
2012-07-31 08:03:53 +08:00
|
|
|
pending = atomic_read(&rdev->nr_pending);
|
2011-05-11 12:34:56 +08:00
|
|
|
dist = abs(this_sector - conf->mirrors[disk].head_position);
|
2012-07-31 08:03:53 +08:00
|
|
|
if (choose_first) {
|
2011-05-11 12:34:56 +08:00
|
|
|
best_disk = disk;
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
|
|
|
}
|
2012-07-31 08:03:53 +08:00
|
|
|
/* Don't change to another disk for sequential reads */
|
|
|
|
if (conf->mirrors[disk].next_seq_sect == this_sector
|
|
|
|
|| dist == 0) {
|
|
|
|
int opt_iosize = bdev_io_opt(rdev->bdev) >> 9;
|
|
|
|
struct raid1_info *mirror = &conf->mirrors[disk];
|
|
|
|
|
|
|
|
best_disk = disk;
|
|
|
|
/*
|
|
|
|
* If buffered sequential IO size exceeds optimal
|
|
|
|
* iosize, check if there is idle disk. If yes, choose
|
|
|
|
* the idle disk. read_balance could already choose an
|
|
|
|
* idle disk before noticing it's a sequential IO in
|
|
|
|
* this disk. This doesn't matter because this disk
|
|
|
|
* will idle, next time it will be utilized after the
|
|
|
|
* first disk's IO size exceeds the optimal iosize. In
|
|
|
|
* this way, iosize of the first disk will be optimal
|
|
|
|
* iosize at least. iosize of the second disk might be
|
|
|
|
* small, but not a big deal since when the second disk
|
|
|
|
* starts IO, the first disk is likely still busy.
|
|
|
|
*/
|
|
|
|
if (nonrot && opt_iosize > 0 &&
|
|
|
|
mirror->seq_start != MaxSector &&
|
|
|
|
mirror->next_seq_sect > opt_iosize &&
|
|
|
|
mirror->next_seq_sect - opt_iosize >=
|
|
|
|
mirror->seq_start) {
|
|
|
|
choose_next_idle = 1;
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
/* If device is idle, use it */
|
|
|
|
if (pending == 0) {
|
|
|
|
best_disk = disk;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (choose_next_idle)
|
|
|
|
continue;
|
2012-07-31 08:03:53 +08:00
|
|
|
|
|
|
|
if (min_pending > pending) {
|
|
|
|
min_pending = pending;
|
|
|
|
best_pending_disk = disk;
|
|
|
|
}
|
|
|
|
|
2011-05-11 12:34:56 +08:00
|
|
|
if (dist < best_dist) {
|
|
|
|
best_dist = dist;
|
2012-07-31 08:03:53 +08:00
|
|
|
best_dist_disk = disk;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2010-09-06 12:10:08 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-07-31 08:03:53 +08:00
|
|
|
/*
|
|
|
|
* If all disks are rotational, choose the closest disk. If any disk is
|
|
|
|
* non-rotational, choose the disk with fewer pending requests even if the
|
|
|
|
* disk is rotational, which might/might not be optimal for raids with
|
|
|
|
* mixed rotational/non-rotational disks depending on workload.
|
|
|
|
*/
|
|
|
|
if (best_disk == -1) {
|
|
|
|
if (has_nonrot_disk)
|
|
|
|
best_disk = best_pending_disk;
|
|
|
|
else
|
|
|
|
best_disk = best_dist_disk;
|
|
|
|
}
|
|
|
|
|
2011-05-11 12:34:56 +08:00
|
|
|
if (best_disk >= 0) {
|
|
|
|
rdev = rcu_dereference(conf->mirrors[best_disk].rdev);
|
2005-09-10 07:23:45 +08:00
|
|
|
if (!rdev)
|
|
|
|
goto retry;
|
|
|
|
atomic_inc(&rdev->nr_pending);
|
2011-05-11 12:34:56 +08:00
|
|
|
if (test_bit(Faulty, &rdev->flags)) {
|
2005-04-17 06:20:36 +08:00
|
|
|
/* cannot risk returning a device that failed
|
|
|
|
* before we inc'ed nr_pending
|
|
|
|
*/
|
2006-01-06 16:20:46 +08:00
|
|
|
rdev_dec_pending(rdev, conf->mddev);
|
2005-04-17 06:20:36 +08:00
|
|
|
goto retry;
|
|
|
|
}
|
2011-07-28 09:31:48 +08:00
|
|
|
sectors = best_good_sectors;
|
2012-07-31 08:03:53 +08:00
|
|
|
|
|
|
|
if (conf->mirrors[best_disk].next_seq_sect != this_sector)
|
|
|
|
conf->mirrors[best_disk].seq_start = this_sector;
|
|
|
|
|
2012-07-31 08:03:53 +08:00
|
|
|
conf->mirrors[best_disk].next_seq_sect = this_sector + sectors;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
2011-07-28 09:31:48 +08:00
|
|
|
*max_sectors = sectors;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-05-11 12:34:56 +08:00
|
|
|
return best_disk;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2014-12-15 09:56:56 +08:00
|
|
|
static int raid1_congested(struct mddev *mddev, int bits)
|
2006-10-03 16:15:54 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2006-10-03 16:15:54 +08:00
|
|
|
int i, ret = 0;
|
|
|
|
|
2015-05-23 05:13:26 +08:00
|
|
|
if ((bits & (1 << WB_async_congested)) &&
|
2011-10-11 13:50:01 +08:00
|
|
|
conf->pending_count >= max_queued_requests)
|
|
|
|
return 1;
|
|
|
|
|
2006-10-03 16:15:54 +08:00
|
|
|
rcu_read_lock();
|
2012-02-13 11:24:05 +08:00
|
|
|
for (i = 0; i < conf->raid_disks * 2; i++) {
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
|
2006-10-03 16:15:54 +08:00
|
|
|
if (rdev && !test_bit(Faulty, &rdev->flags)) {
|
2007-07-24 15:28:11 +08:00
|
|
|
struct request_queue *q = bdev_get_queue(rdev->bdev);
|
2006-10-03 16:15:54 +08:00
|
|
|
|
2011-06-08 06:50:35 +08:00
|
|
|
BUG_ON(!q);
|
|
|
|
|
2006-10-03 16:15:54 +08:00
|
|
|
/* Note the '|| 1' - when read_balance prefers
|
|
|
|
* non-congested targets, it can be removed
|
|
|
|
*/
|
2015-05-23 05:13:26 +08:00
|
|
|
if ((bits & (1 << WB_async_congested)) || 1)
|
2006-10-03 16:15:54 +08:00
|
|
|
ret |= bdi_congested(&q->backing_dev_info, bits);
|
|
|
|
else
|
|
|
|
ret &= bdi_congested(&q->backing_dev_info, bits);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static void flush_pending_writes(struct r1conf *conf)
|
2008-03-05 06:29:29 +08:00
|
|
|
{
|
|
|
|
/* Any writes that have been queued but are awaiting
|
|
|
|
* bitmap updates get flushed here.
|
|
|
|
*/
|
|
|
|
spin_lock_irq(&conf->device_lock);
|
|
|
|
|
|
|
|
if (conf->pending_bio_list.head) {
|
|
|
|
struct bio *bio;
|
|
|
|
bio = bio_list_get(&conf->pending_bio_list);
|
2011-10-11 13:50:01 +08:00
|
|
|
conf->pending_count = 0;
|
2008-03-05 06:29:29 +08:00
|
|
|
spin_unlock_irq(&conf->device_lock);
|
|
|
|
/* flush any pending bitmap writes to
|
|
|
|
* disk before proceeding w/ I/O */
|
|
|
|
bitmap_unplug(conf->mddev->bitmap);
|
2011-10-11 13:50:01 +08:00
|
|
|
wake_up(&conf->wait_barrier);
|
2008-03-05 06:29:29 +08:00
|
|
|
|
|
|
|
while (bio) { /* submit pending writes */
|
|
|
|
struct bio *next = bio->bi_next;
|
|
|
|
bio->bi_next = NULL;
|
2012-10-11 10:28:54 +08:00
|
|
|
if (unlikely((bio->bi_rw & REQ_DISCARD) &&
|
|
|
|
!blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
|
|
|
|
/* Just ignore it */
|
2015-07-20 21:29:37 +08:00
|
|
|
bio_endio(bio);
|
2012-10-11 10:28:54 +08:00
|
|
|
else
|
|
|
|
generic_make_request(bio);
|
2008-03-05 06:29:29 +08:00
|
|
|
bio = next;
|
|
|
|
}
|
|
|
|
} else
|
|
|
|
spin_unlock_irq(&conf->device_lock);
|
2011-03-10 15:52:07 +08:00
|
|
|
}
|
|
|
|
|
2006-01-06 16:20:12 +08:00
|
|
|
/* Barriers....
|
|
|
|
* Sometimes we need to suspend IO while we do something else,
|
|
|
|
* either some resync/recovery, or reconfigure the array.
|
|
|
|
* To do this we raise a 'barrier'.
|
|
|
|
* The 'barrier' is a counter that can be raised multiple times
|
|
|
|
* to count how many activities are happening which preclude
|
|
|
|
* normal IO.
|
|
|
|
* We can only raise the barrier if there is no pending IO.
|
|
|
|
* i.e. if nr_pending == 0.
|
|
|
|
* We choose only to raise the barrier if no-one is waiting for the
|
|
|
|
* barrier to go down. This means that as soon as an IO request
|
|
|
|
* is ready, no other operations which require a barrier will start
|
|
|
|
* until the IO request has had a chance.
|
|
|
|
*
|
|
|
|
* So: regular IO calls 'wait_barrier'. When that returns there
|
|
|
|
* is no background IO happening. It must arrange to call
|
|
|
|
* allow_barrier when it has finished its IO.
|
|
|
|
* background IO calls must call raise_barrier. Once that returns
|
|
|
|
* there is no normal IO happening. It must arrange to call
|
|
|
|
* lower_barrier when the particular background IO completes.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
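To make the pairing concrete, here is a minimal sketch of the resync side of
the protocol described above. Normal IO does the mirror-image
wait_barrier()/allow_barrier() calls around each request; their prototypes in
this file carry extra window-tracking arguments, so they are not expanded here.

/* Illustrative pairing only; the real callers are the resync/recovery paths. */
static void resync_side_example(struct r1conf *conf, sector_t sector_nr)
{
	raise_barrier(conf, sector_nr);	/* waits until no normal IO is pending */
	/* ... issue resync/recovery IO for this window ... */
	lower_barrier(conf);		/* let waiting normal IO proceed */
}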
|
2014-09-10 14:01:24 +08:00
|
|
|
static void raise_barrier(struct r1conf *conf, sector_t sector_nr)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
spin_lock_irq(&conf->resync_lock);
|
2006-01-06 16:20:12 +08:00
|
|
|
|
|
|
|
/* Wait until no block IO is waiting */
|
|
|
|
wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,
|
2012-11-30 18:42:40 +08:00
|
|
|
conf->resync_lock);
|
2006-01-06 16:20:12 +08:00
|
|
|
|
|
|
|
/* block any new IO from starting */
|
|
|
|
conf->barrier++;
|
2014-09-10 14:01:24 +08:00
|
|
|
conf->next_resync = sector_nr;
|
2006-01-06 16:20:12 +08:00
|
|
|
|
raid1: Rewrite the implementation of iobarrier.
There is an iobarrier in raid1 because of contention between normal IO and
resync IO. It suspends all normal IO while resync/recovery is happening.
However, if normal IO is outside the resync window, there is no contention.
So this patch changes the barrier mechanism to only block IO that
could contend with the resync that is currently happening.
We partition the whole space into five parts.
|---------|-----------|------------|----------------|-------|
      start       next_resync  start_next_window  end_window
start + RESYNC_WINDOW = next_resync
next_resync + NEXT_NORMALIO_DISTANCE = start_next_window
start_next_window + NEXT_NORMALIO_DISTANCE = end_window
First we introduce some concepts:
1 - RESYNC_WINDOW: for resync, there are at most 32 resync requests at the
    same time. A sync request is RESYNC_BLOCK_SIZE (64*1024),
    so the RESYNC_WINDOW is 32 * RESYNC_BLOCK_SIZE, that is 2MB.
2 - NEXT_NORMALIO_DISTANCE: the distance between next_resync
    and start_next_window. It also indicates the distance between
    start_next_window and end_window.
    It is currently 3 * RESYNC_WINDOW_SIZE but could be tuned if
    this turns out not to be optimal.
3 - next_resync: the next sector at which we will do sync IO.
4 - start: a position which is at most RESYNC_WINDOW before
    next_resync.
5 - start_next_window: a position which is NEXT_NORMALIO_DISTANCE
    beyond next_resync. Normal IO after this position doesn't need to
    wait for resync IO to complete.
6 - end_window: a position which is 2 * NEXT_NORMALIO_DISTANCE beyond
    next_resync. This also doesn't need to wait, but is counted
    differently.
7 - current_window_requests: the count of normal IO between
    start_next_window and end_window.
8 - next_window_requests: the count of normal IO after end_window.
Normal IO is partitioned into four types:
NormIO1: the end sector of the bio is smaller than or equal to start
NormIO2: the start sector of the bio is larger than or equal to end_window
NormIO3: the start sector of the bio is larger than or equal to
         start_next_window.
NormIO4: the bio lies between start_next_window and end_window
|--------|-----------|--------------------|----------------|-------------|
|  start | next_resync | start_next_window |   end_window  |
 NormIO1    NormIO4        NormIO4             NormIO3        NormIO2
For NormIO1, we don't need any io barrier.
For NormIO4, we use a similar approach to the original iobarrier
mechanism: the normal IO and resync IO must be kept separate.
For NormIO2/3, we add two fields to struct r1conf: "current_window_requests"
and "next_window_requests". They indicate the count of active
requests in the two windows.
For these, we don't wait for resync IO to complete.
For the resync action, if there are NormIO4s, we must wait for them.
If not, we can proceed.
But if the resync action reaches start_next_window and
current_window_requests > 0 (that is, there are NormIO3s), we must
wait until current_window_requests becomes zero.
When current_window_requests becomes zero, start_next_window also
moves forward. Then current_window_requests is replaced by
next_window_requests.
There is the problem of when and how to change from NormIO2 to
NormIO3. Only then can the sync action progress.
We add a field "start_next_window" to struct r1conf.
A: if start_next_window == MaxSector, it means there are no NormIO2/3,
   so start_next_window = next_resync + NEXT_NORMALIO_DISTANCE
B: if current_window_requests == 0 && next_window_requests != 0, it
   means start_next_window moves to end_window
There is another problem of how to differentiate between an
old NormIO2 (which is now NormIO3) and a new NormIO2.
For example, there are many bios which are NormIO2 and one bio which is
NormIO3. The NormIO3 completes first, so the NormIO2 bios become NormIO3.
We add a field "start_next_window" to struct r1bio.
This is used to record the position of conf->start_next_window when the call
to wait_barrier() is made in make_request().
In allow_barrier(), we check conf->start_next_window.
If r1bio->start_next_window == conf->start_next_window, it means
there was no transition between NormIO2 and NormIO3.
If r1bio->start_next_window != conf->start_next_window, it means
there was a transition between NormIO2 and NormIO3. There can only
have been one transition, so it means the bio is an old NormIO2.
For one bio, there may be many r1bios, so we make sure
all the r1bio->start_next_window values are the same.
If we meet a blocked_dev in make_request(), it must call allow_barrier
and wait_barrier, so the former and later values of
conf->start_next_window will differ.
If there are many r1bios with different start_next_window values,
the relevant bio would depend on the last r1bio's value,
which would cause errors. To avoid this, we must wait for previous r1bios
to complete.
Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2013-11-15 14:55:02 +08:00
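As an illustration of the window classification above, the helper below maps a
bio onto the NormIO1..NormIO4 cases using the boundaries named in the commit
message. It is an assumed sketch, not the exact logic of need_to_wait_for_sync()
further down, which also has to cope with the case where no resync is running.

/* Illustrative classification of a normal bio against the resync windows. */
enum norm_io_type { NORM_IO1, NORM_IO2, NORM_IO3, NORM_IO4 };

static enum norm_io_type classify_bio(sector_t bio_start, sector_t bio_end,
				      sector_t start, sector_t start_next_window,
				      sector_t end_window)
{
	if (bio_end <= start)			/* entirely before the resync window */
		return NORM_IO1;
	if (bio_start >= end_window)		/* far beyond the resync window */
		return NORM_IO2;
	if (bio_start >= start_next_window)	/* past the next normal-IO boundary */
		return NORM_IO3;
	return NORM_IO4;			/* overlaps the contended region */
}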
|
|
|
/* For these conditions we must wait:
|
|
|
|
* A: while the array is in frozen state
|
|
|
|
* B: while barrier >= RESYNC_DEPTH, meaning resync reach
|
|
|
|
* the max count which is allowed.
|
|
|
|
* C: next_resync + RESYNC_SECTORS > start_next_window, meaning
|
|
|
|
* next resync will reach the window which normal bios are
|
|
|
|
* handling.
|
2014-09-10 13:01:49 +08:00
|
|
|
* D: while there are any active requests in the current window.
|
2013-11-15 14:55:02 +08:00
|
|
|
*/
|
2006-01-06 16:20:12 +08:00
|
|
|
wait_event_lock_irq(conf->wait_barrier,
|
2013-11-14 12:16:18 +08:00
|
|
|
!conf->array_frozen &&
|
2013-11-15 14:55:02 +08:00
|
|
|
conf->barrier < RESYNC_DEPTH &&
|
2014-09-10 13:01:49 +08:00
|
|
|
conf->current_window_requests == 0 &&
|
2013-11-15 14:55:02 +08:00
|
|
|
(conf->start_next_window >=
|
|
|
|
conf->next_resync + RESYNC_SECTORS),
|
2012-11-30 18:42:40 +08:00
|
|
|
conf->resync_lock);
|
2006-01-06 16:20:12 +08:00
|
|
|
|
2014-09-16 10:14:14 +08:00
|
|
|
conf->nr_pending++;
|
2006-01-06 16:20:12 +08:00
|
|
|
spin_unlock_irq(&conf->resync_lock);
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static void lower_barrier(struct r1conf *conf)
|
2006-01-06 16:20:12 +08:00
|
|
|
{
|
|
|
|
unsigned long flags;
|
2009-12-14 09:49:51 +08:00
|
|
|
BUG_ON(conf->barrier <= 0);
|
2006-01-06 16:20:12 +08:00
|
|
|
spin_lock_irqsave(&conf->resync_lock, flags);
|
|
|
|
conf->barrier--;
|
2014-09-16 10:14:14 +08:00
|
|
|
conf->nr_pending--;
|
2006-01-06 16:20:12 +08:00
|
|
|
spin_unlock_irqrestore(&conf->resync_lock, flags);
|
|
|
|
wake_up(&conf->wait_barrier);
|
|
|
|
}
|
|
|
|
|
2013-11-15 14:55:02 +08:00
|
|
|
static bool need_to_wait_for_sync(struct r1conf *conf, struct bio *bio)
|
2006-01-06 16:20:12 +08:00
|
|
|
{
|
2013-11-15 14:55:02 +08:00
|
|
|
bool wait = false;
|
|
|
|
|
|
|
|
if (conf->array_frozen || !bio)
|
|
|
|
wait = true;
|
|
|
|
else if (conf->barrier && bio_data_dir(bio) == WRITE) {
|
2014-09-10 13:56:57 +08:00
|
|
|
if ((conf->mddev->curr_resync_completed
|
|
|
|
>= bio_end_sector(bio)) ||
|
|
|
|
(conf->next_resync + NEXT_NORMALIO_DISTANCE
|
|
|
|
<= bio->bi_iter.bi_sector))
|
2013-11-15 14:55:02 +08:00
|
|
|
wait = false;
|
|
|
|
else
|
|
|
|
wait = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
return wait;
|
|
|
|
}
|
|
|
|
|
|
|
|
static sector_t wait_barrier(struct r1conf *conf, struct bio *bio)
|
|
|
|
{
|
|
|
|
sector_t sector = 0;
|
|
|
|
|
2006-01-06 16:20:12 +08:00
|
|
|
spin_lock_irq(&conf->resync_lock);
|
2013-11-15 14:55:02 +08:00
|
|
|
if (need_to_wait_for_sync(conf, bio)) {
|
2006-01-06 16:20:12 +08:00
|
|
|
conf->nr_waiting++;
|
2012-03-19 09:46:38 +08:00
|
|
|
/* Wait for the barrier to drop.
|
|
|
|
* However if there are already pending
|
|
|
|
* requests (preventing the barrier from
|
|
|
|
* rising completely), and the
|
2014-09-04 13:51:44 +08:00
|
|
|
* per-process bio queue isn't empty,
|
2012-03-19 09:46:38 +08:00
|
|
|
* then don't wait, as we need to empty
|
2014-09-04 13:51:44 +08:00
|
|
|
* that queue to allow conf->start_next_window
|
|
|
|
* to increase.
|
2012-03-19 09:46:38 +08:00
|
|
|
*/
|
|
|
|
wait_event_lock_irq(conf->wait_barrier,
|
2013-11-14 12:16:18 +08:00
|
|
|
!conf->array_frozen &&
|
|
|
|
(!conf->barrier ||
|
2014-09-04 13:51:44 +08:00
|
|
|
((conf->start_next_window <
|
|
|
|
conf->next_resync + RESYNC_SECTORS) &&
|
|
|
|
current->bio_list &&
|
|
|
|
!bio_list_empty(current->bio_list))),
|
2012-11-30 18:42:40 +08:00
|
|
|
conf->resync_lock);
|
2006-01-06 16:20:12 +08:00
|
|
|
conf->nr_waiting--;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2013-11-15 14:55:02 +08:00
|
|
|
|
|
|
|
if (bio && bio_data_dir(bio) == WRITE) {
|
2015-09-16 22:20:05 +08:00
|
|
|
if (bio->bi_iter.bi_sector >= conf->next_resync) {
|
2013-11-15 14:55:02 +08:00
|
|
|
if (conf->start_next_window == MaxSector)
|
|
|
|
conf->start_next_window =
|
|
|
|
conf->next_resync +
|
|
|
|
NEXT_NORMALIO_DISTANCE;
|
|
|
|
|
|
|
|
if ((conf->start_next_window + NEXT_NORMALIO_DISTANCE)
|
2013-10-12 06:44:27 +08:00
|
|
|
<= bio->bi_iter.bi_sector)
|
2013-11-15 14:55:02 +08:00
|
|
|
conf->next_window_requests++;
|
|
|
|
else
|
|
|
|
conf->current_window_requests++;
|
|
|
|
sector = conf->start_next_window;
|
2014-01-14 08:56:14 +08:00
|
|
|
}
|
2013-11-15 14:55:02 +08:00
|
|
|
}
|
|
|
|
|
2006-01-06 16:20:12 +08:00
|
|
|
conf->nr_pending++;
|
2005-04-17 06:20:36 +08:00
|
|
|
spin_unlock_irq(&conf->resync_lock);
|
2013-11-15 14:55:02 +08:00
|
|
|
return sector;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2013-11-15 14:55:02 +08:00
|
|
|
static void allow_barrier(struct r1conf *conf, sector_t start_next_window,
|
|
|
|
sector_t bi_sector)
|
2006-01-06 16:20:12 +08:00
|
|
|
{
|
|
|
|
unsigned long flags;
|
2013-11-15 14:55:02 +08:00
|
|
|
|
2006-01-06 16:20:12 +08:00
|
|
|
spin_lock_irqsave(&conf->resync_lock, flags);
|
|
|
|
conf->nr_pending--;
|
2013-11-15 14:55:02 +08:00
|
|
|
if (start_next_window) {
|
|
|
|
if (start_next_window == conf->start_next_window) {
|
|
|
|
if (conf->start_next_window + NEXT_NORMALIO_DISTANCE
|
|
|
|
<= bi_sector)
|
|
|
|
conf->next_window_requests--;
|
|
|
|
else
|
|
|
|
conf->current_window_requests--;
|
|
|
|
} else
|
|
|
|
conf->current_window_requests--;
|
|
|
|
|
|
|
|
if (!conf->current_window_requests) {
|
|
|
|
if (conf->next_window_requests) {
|
|
|
|
conf->current_window_requests =
|
|
|
|
conf->next_window_requests;
|
|
|
|
conf->next_window_requests = 0;
|
|
|
|
conf->start_next_window +=
|
|
|
|
NEXT_NORMALIO_DISTANCE;
|
|
|
|
} else
|
|
|
|
conf->start_next_window = MaxSector;
|
|
|
|
}
|
|
|
|
}
|
2006-01-06 16:20:12 +08:00
|
|
|
spin_unlock_irqrestore(&conf->resync_lock, flags);
|
|
|
|
wake_up(&conf->wait_barrier);
|
|
|
|
}
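As a companion to allow_barrier() above, the following is a hedged userspace
model of the window-request bookkeeping that wait_barrier() and allow_barrier()
perform together: the write path records conf->start_next_window and bumps one
of the two counters, and the completion path hands the recorded value back so
the right counter is dropped and the window can advance. Locking, waiting, the
read path and the rest of struct r1conf are omitted; MAX_SECTOR, the model_*
names and the numbers in main() are illustrative only.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t sector_t;

#define MAX_SECTOR          ((sector_t)~0ULL)   /* stand-in for MaxSector          */
#define NEXT_NORMALIO_DIST  12288ULL            /* 3 * RESYNC_WINDOW, in sectors   */

struct model_conf {
	sector_t next_resync;
	sector_t start_next_window;
	long current_window_requests;
	long next_window_requests;
};

/* Write-path accounting of wait_barrier(): returns the start_next_window
 * value the caller must later hand back to model_allow_barrier(). */
static sector_t model_wait_barrier(struct model_conf *c, sector_t bi_sector)
{
	sector_t sector = 0;

	if (bi_sector >= c->next_resync) {
		if (c->start_next_window == MAX_SECTOR)
			c->start_next_window = c->next_resync + NEXT_NORMALIO_DIST;
		if (c->start_next_window + NEXT_NORMALIO_DIST <= bi_sector)
			c->next_window_requests++;
		else
			c->current_window_requests++;
		sector = c->start_next_window;
	}
	return sector;
}

/* Completion-path accounting of allow_barrier(): drop the request from the
 * right window and advance start_next_window once the current window drains. */
static void model_allow_barrier(struct model_conf *c,
				sector_t start_next_window, sector_t bi_sector)
{
	if (!start_next_window)
		return;
	if (start_next_window == c->start_next_window) {
		if (c->start_next_window + NEXT_NORMALIO_DIST <= bi_sector)
			c->next_window_requests--;
		else
			c->current_window_requests--;
	} else
		c->current_window_requests--;

	if (!c->current_window_requests) {
		if (c->next_window_requests) {
			c->current_window_requests = c->next_window_requests;
			c->next_window_requests = 0;
			c->start_next_window += NEXT_NORMALIO_DIST;
		} else
			c->start_next_window = MAX_SECTOR;
	}
}

int main(void)
{
	struct model_conf c = { .next_resync = 8192,
				.start_next_window = MAX_SECTOR };
	sector_t w1 = model_wait_barrier(&c, 25000);   /* lands in the current window */
	sector_t w2 = model_wait_barrier(&c, 40000);   /* lands in the next window    */

	model_allow_barrier(&c, w1, 25000);
	/* Current window drained: the window shifts and the counts roll over. */
	printf("start_next_window=%llu current=%ld next=%ld\n",
	       (unsigned long long)c.start_next_window,
	       c.current_window_requests, c.next_window_requests);
	model_allow_barrier(&c, w2, 40000);
	return 0;
}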
|
|
|
|
|
2013-06-12 09:01:22 +08:00
|
|
|
static void freeze_array(struct r1conf *conf, int extra)
|
2006-01-06 16:20:19 +08:00
|
|
|
{
|
|
|
|
/* stop syncio and normal IO and wait for everything to
|
|
|
|
* go quiet.
|
2013-11-14 12:16:18 +08:00
|
|
|
* We wait until nr_pending matches nr_queued+extra
|
2008-03-05 06:29:35 +08:00
|
|
|
* This is called in the context of one normal IO request
|
|
|
|
* that has failed. Thus any sync request that might be pending
|
|
|
|
* will be blocked by nr_pending, and we need to wait for
|
|
|
|
* pending IO requests to complete or be queued for re-try.
|
2013-06-12 09:01:22 +08:00
|
|
|
* Thus the number queued (nr_queued) plus this request (extra)
|
2008-03-05 06:29:35 +08:00
|
|
|
* must match the number of pending IOs (nr_pending) before
|
|
|
|
* we continue.
|
2006-01-06 16:20:19 +08:00
|
|
|
*/
|
|
|
|
spin_lock_irq(&conf->resync_lock);
|
2013-11-14 12:16:18 +08:00
|
|
|
conf->array_frozen = 1;
|
2012-11-30 18:42:40 +08:00
|
|
|
wait_event_lock_irq_cmd(conf->wait_barrier,
|
2013-06-12 09:01:22 +08:00
|
|
|
conf->nr_pending == conf->nr_queued+extra,
|
2012-11-30 18:42:40 +08:00
|
|
|
conf->resync_lock,
|
|
|
|
flush_pending_writes(conf));
|
2006-01-06 16:20:19 +08:00
|
|
|
spin_unlock_irq(&conf->resync_lock);
|
|
|
|
}
|
2011-10-11 13:49:05 +08:00
|
|
|
static void unfreeze_array(struct r1conf *conf)
|
2006-01-06 16:20:19 +08:00
|
|
|
{
|
|
|
|
/* reverse the effect of the freeze */
|
|
|
|
spin_lock_irq(&conf->resync_lock);
|
2013-11-14 12:16:18 +08:00
|
|
|
conf->array_frozen = 0;
|
2006-01-06 16:20:19 +08:00
|
|
|
wake_up(&conf->wait_barrier);
|
|
|
|
spin_unlock_irq(&conf->resync_lock);
|
|
|
|
}
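
For context, a hedged sketch of how freeze_array()/unfreeze_array() are
meant to be paired: the caller is an error-handling path that itself
accounts for one pending request, which is why extra == 1 in that case.
handle_failed_request() below is hypothetical and exists only to
illustrate the pairing; it is not a function in this file.

/* Hypothetical usage sketch -- not an actual raid1.c function. */
static void handle_failed_request(struct r1conf *conf)
{
        freeze_array(conf, 1);  /* quiesce: wait until nr_pending == nr_queued + 1 */
        /* ... inspect or re-issue the failed request while the array is quiet ... */
        unfreeze_array(conf);   /* clear array_frozen and wake any waiters */
}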
|
|
|
|
|
2014-09-30 12:23:59 +08:00
|
|
|
/* duplicate the data pages for behind I/O
|
2010-10-19 09:54:01 +08:00
|
|
|
*/
|
2011-10-11 13:48:43 +08:00
|
|
|
static void alloc_behind_pages(struct bio *bio, struct r1bio *r1_bio)
|
2005-09-10 07:23:47 +08:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
struct bio_vec *bvec;
|
2011-07-28 09:32:10 +08:00
|
|
|
struct bio_vec *bvecs = kzalloc(bio->bi_vcnt * sizeof(struct bio_vec),
|
2005-09-10 07:23:47 +08:00
|
|
|
GFP_NOIO);
|
2011-07-28 09:32:10 +08:00
|
|
|
if (unlikely(!bvecs))
|
2011-05-11 12:51:19 +08:00
|
|
|
return;
|
2005-09-10 07:23:47 +08:00
|
|
|
|
2012-09-06 06:22:02 +08:00
|
|
|
bio_for_each_segment_all(bvec, bio, i) {
|
2011-07-28 09:32:10 +08:00
|
|
|
bvecs[i] = *bvec;
|
|
|
|
bvecs[i].bv_page = alloc_page(GFP_NOIO);
|
|
|
|
if (unlikely(!bvecs[i].bv_page))
|
2005-09-10 07:23:47 +08:00
|
|
|
goto do_sync_io;
|
2011-07-28 09:32:10 +08:00
|
|
|
memcpy(kmap(bvecs[i].bv_page) + bvec->bv_offset,
|
|
|
|
kmap(bvec->bv_page) + bvec->bv_offset, bvec->bv_len);
|
|
|
|
kunmap(bvecs[i].bv_page);
|
2005-09-10 07:23:47 +08:00
|
|
|
kunmap(bvec->bv_page);
|
|
|
|
}
|
2011-07-28 09:32:10 +08:00
|
|
|
r1_bio->behind_bvecs = bvecs;
|
2011-05-11 12:51:19 +08:00
|
|
|
r1_bio->behind_page_count = bio->bi_vcnt;
|
|
|
|
set_bit(R1BIO_BehindIO, &r1_bio->state);
|
|
|
|
return;
|
2005-09-10 07:23:47 +08:00
|
|
|
|
|
|
|
do_sync_io:
|
2011-05-11 12:51:19 +08:00
|
|
|
for (i = 0; i < bio->bi_vcnt; i++)
|
2011-07-28 09:32:10 +08:00
|
|
|
if (bvecs[i].bv_page)
|
|
|
|
put_page(bvecs[i].bv_page);
|
|
|
|
kfree(bvecs);
|
2013-10-12 06:44:27 +08:00
|
|
|
pr_debug("%dB behind alloc failed, doing sync I/O\n",
|
|
|
|
bio->bi_iter.bi_size);
|
2005-09-10 07:23:47 +08:00
|
|
|
}
|
|
|
|
|
2012-08-02 06:33:20 +08:00
|
|
|
struct raid1_plug_cb {
|
|
|
|
struct blk_plug_cb cb;
|
|
|
|
struct bio_list pending;
|
|
|
|
int pending_cnt;
|
|
|
|
};
|
|
|
|
|
|
|
|
static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
|
|
|
|
{
|
|
|
|
struct raid1_plug_cb *plug = container_of(cb, struct raid1_plug_cb,
|
|
|
|
cb);
|
|
|
|
struct mddev *mddev = plug->cb.data;
|
|
|
|
struct r1conf *conf = mddev->private;
|
|
|
|
struct bio *bio;
|
|
|
|
|
2012-11-27 09:14:40 +08:00
|
|
|
if (from_schedule || current->bio_list) {
|
2012-08-02 06:33:20 +08:00
|
|
|
spin_lock_irq(&conf->device_lock);
|
|
|
|
bio_list_merge(&conf->pending_bio_list, &plug->pending);
|
|
|
|
conf->pending_count += plug->pending_cnt;
|
|
|
|
spin_unlock_irq(&conf->device_lock);
|
2013-02-25 09:38:29 +08:00
|
|
|
wake_up(&conf->wait_barrier);
|
2012-08-02 06:33:20 +08:00
|
|
|
md_wakeup_thread(mddev->thread);
|
|
|
|
kfree(plug);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* we aren't scheduling, so we can do the write-out directly. */
|
|
|
|
bio = bio_list_get(&plug->pending);
|
|
|
|
bitmap_unplug(mddev->bitmap);
|
|
|
|
wake_up(&conf->wait_barrier);
|
|
|
|
|
|
|
|
while (bio) { /* submit pending writes */
|
|
|
|
struct bio *next = bio->bi_next;
|
|
|
|
bio->bi_next = NULL;
|
2013-04-28 18:26:38 +08:00
|
|
|
if (unlikely((bio->bi_rw & REQ_DISCARD) &&
|
|
|
|
!blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
|
|
|
|
/* Just ignore it */
|
2015-07-20 21:29:37 +08:00
|
|
|
bio_endio(bio);
|
2013-04-28 18:26:38 +08:00
|
|
|
else
|
|
|
|
generic_make_request(bio);
|
2012-08-02 06:33:20 +08:00
|
|
|
bio = next;
|
|
|
|
}
|
|
|
|
kfree(plug);
|
|
|
|
}
|
|
|
|
|
2016-01-21 05:52:20 +08:00
|
|
|
static void raid1_make_request(struct mddev *mddev, struct bio * bio)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2012-07-31 08:03:52 +08:00
|
|
|
struct raid1_info *mirror;
|
2011-10-11 13:48:43 +08:00
|
|
|
struct r1bio *r1_bio;
|
2005-04-17 06:20:36 +08:00
|
|
|
struct bio *read_bio;
|
2011-07-28 09:31:48 +08:00
|
|
|
int i, disks;
|
2008-05-24 04:04:32 +08:00
|
|
|
struct bitmap *bitmap;
|
2005-06-22 08:17:23 +08:00
|
|
|
unsigned long flags;
|
2005-11-01 16:26:16 +08:00
|
|
|
const int rw = bio_data_dir(bio);
|
2010-08-18 14:16:05 +08:00
|
|
|
const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);
|
2010-09-03 17:56:18 +08:00
|
|
|
const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA));
|
2012-10-11 10:28:54 +08:00
|
|
|
const unsigned long do_discard = (bio->bi_rw
|
|
|
|
& (REQ_DISCARD | REQ_SECURE));
|
2013-02-21 10:28:09 +08:00
|
|
|
const unsigned long do_same = (bio->bi_rw & REQ_WRITE_SAME);
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *blocked_rdev;
|
2012-08-02 06:33:20 +08:00
|
|
|
struct blk_plug_cb *cb;
|
|
|
|
struct raid1_plug_cb *plug = NULL;
|
2011-07-28 09:31:48 +08:00
|
|
|
int first_clone;
|
|
|
|
int sectors_handled;
|
|
|
|
int max_sectors;
|
raid1: Rewrite the implementation of iobarrier.
2013-11-15 14:55:02 +08:00
|
|
|
sector_t start_next_window;
|
2005-06-22 08:17:23 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Register the new request and wait if the reconstruction
|
|
|
|
* thread has put up a bar for new requests.
|
|
|
|
* Continue immediately if no resync is active currently.
|
|
|
|
*/
|
2006-05-02 03:15:47 +08:00
|
|
|
|
2005-06-22 08:17:26 +08:00
|
|
|
md_write_start(mddev, bio); /* wait on superblock update early */
|
|
|
|
|
2009-12-14 09:49:51 +08:00
|
|
|
if (bio_data_dir(bio) == WRITE &&
|
2014-06-07 15:39:37 +08:00
|
|
|
((bio_end_sector(bio) > mddev->suspend_lo &&
|
|
|
|
bio->bi_iter.bi_sector < mddev->suspend_hi) ||
|
|
|
|
(mddev_is_clustered(mddev) &&
|
2015-06-24 22:30:32 +08:00
|
|
|
md_cluster_ops->area_resyncing(mddev, WRITE,
|
|
|
|
bio->bi_iter.bi_sector, bio_end_sector(bio))))) {
|
2009-12-14 09:49:51 +08:00
|
|
|
/* As the suspend_* range is controlled by
|
|
|
|
* userspace, we want an interruptible
|
|
|
|
* wait.
|
|
|
|
*/
|
|
|
|
DEFINE_WAIT(w);
|
|
|
|
for (;;) {
|
|
|
|
flush_signals(current);
|
|
|
|
prepare_to_wait(&conf->wait_barrier,
|
|
|
|
&w, TASK_INTERRUPTIBLE);
|
2012-09-26 06:05:12 +08:00
|
|
|
if (bio_end_sector(bio) <= mddev->suspend_lo ||
|
2014-06-07 15:39:37 +08:00
|
|
|
bio->bi_iter.bi_sector >= mddev->suspend_hi ||
|
|
|
|
(mddev_is_clustered(mddev) &&
|
2015-06-24 22:30:32 +08:00
|
|
|
!md_cluster_ops->area_resyncing(mddev, WRITE,
|
2014-06-07 15:39:37 +08:00
|
|
|
bio->bi_iter.bi_sector, bio_end_sector(bio))))
|
2009-12-14 09:49:51 +08:00
|
|
|
break;
|
|
|
|
schedule();
|
|
|
|
}
|
|
|
|
finish_wait(&conf->wait_barrier, &w);
|
|
|
|
}
|
2006-05-02 03:15:47 +08:00
|
|
|
|
raid1: Rewrite the implementation of iobarrier.
2013-11-15 14:55:02 +08:00
|
|
|
start_next_window = wait_barrier(conf, bio);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2008-05-24 04:04:32 +08:00
|
|
|
bitmap = mddev->bitmap;
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* make_request() can abort the operation when READA is being
|
|
|
|
* used and no empty request is available.
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
|
|
|
|
|
|
|
|
r1_bio->master_bio = bio;
|
2013-02-06 07:19:29 +08:00
|
|
|
r1_bio->sectors = bio_sectors(bio);
|
2005-06-22 08:17:23 +08:00
|
|
|
r1_bio->state = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
r1_bio->mddev = mddev;
|
2013-10-12 06:44:27 +08:00
|
|
|
r1_bio->sector = bio->bi_iter.bi_sector;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-07-28 09:31:48 +08:00
|
|
|
/* We might need to issue multiple reads to different
|
|
|
|
* devices if there are bad blocks around, so we keep
|
|
|
|
* track of the number of reads in bio->bi_phys_segments.
|
|
|
|
* If this is 0, there is only one r1_bio and no locking
|
|
|
|
* will be needed when requests complete. If it is
|
|
|
|
* non-zero, then it is the number of not-completed requests.
|
|
|
|
*/
|
|
|
|
bio->bi_phys_segments = 0;
|
2015-07-25 02:37:59 +08:00
|
|
|
bio_clear_flag(bio, BIO_SEG_VALID);
|
2011-07-28 09:31:48 +08:00
|
|
|
|
2005-11-01 16:26:16 +08:00
|
|
|
if (rw == READ) {
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* read balancing logic:
|
|
|
|
*/
|
2011-07-28 09:31:48 +08:00
|
|
|
int rdisk;
|
|
|
|
|
|
|
|
read_again:
|
|
|
|
rdisk = read_balance(conf, r1_bio, &max_sectors);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
if (rdisk < 0) {
|
|
|
|
/* couldn't find anywhere to read from */
|
|
|
|
raid_end_bio_io(r1_bio);
|
2011-09-12 18:12:01 +08:00
|
|
|
return;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
mirror = conf->mirrors + rdisk;
|
|
|
|
|
2010-03-31 08:21:44 +08:00
|
|
|
if (test_bit(WriteMostly, &mirror->rdev->flags) &&
|
|
|
|
bitmap) {
|
|
|
|
/* Reading from a write-mostly device must
|
|
|
|
* take care not to over-take any writes
|
|
|
|
* that are 'behind'
|
|
|
|
*/
|
|
|
|
wait_event(bitmap->behind_wait,
|
|
|
|
atomic_read(&bitmap->behind_writes) == 0);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
r1_bio->read_disk = rdisk;
|
2014-09-22 08:06:23 +08:00
|
|
|
r1_bio->start_next_window = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2010-10-26 15:31:13 +08:00
|
|
|
read_bio = bio_clone_mddev(bio, GFP_NOIO, mddev);
|
2013-10-12 06:44:27 +08:00
|
|
|
bio_trim(read_bio, r1_bio->sector - bio->bi_iter.bi_sector,
|
2013-08-08 02:14:32 +08:00
|
|
|
max_sectors);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
r1_bio->bios[rdisk] = read_bio;
|
|
|
|
|
2013-10-12 06:44:27 +08:00
|
|
|
read_bio->bi_iter.bi_sector = r1_bio->sector +
|
|
|
|
mirror->rdev->data_offset;
|
2005-04-17 06:20:36 +08:00
|
|
|
read_bio->bi_bdev = mirror->rdev->bdev;
|
|
|
|
read_bio->bi_end_io = raid1_end_read_request;
|
2010-08-08 00:20:39 +08:00
|
|
|
read_bio->bi_rw = READ | do_sync;
|
2005-04-17 06:20:36 +08:00
|
|
|
read_bio->bi_private = r1_bio;
|
|
|
|
|
2011-07-28 09:31:48 +08:00
|
|
|
if (max_sectors < r1_bio->sectors) {
|
|
|
|
/* could not read all from this device, so we will
|
|
|
|
* need another r1_bio.
|
|
|
|
*/
|
|
|
|
|
|
|
|
sectors_handled = (r1_bio->sector + max_sectors
|
2013-10-12 06:44:27 +08:00
|
|
|
- bio->bi_iter.bi_sector);
|
2011-07-28 09:31:48 +08:00
|
|
|
r1_bio->sectors = max_sectors;
|
|
|
|
spin_lock_irq(&conf->device_lock);
|
|
|
|
if (bio->bi_phys_segments == 0)
|
|
|
|
bio->bi_phys_segments = 2;
|
|
|
|
else
|
|
|
|
bio->bi_phys_segments++;
|
|
|
|
spin_unlock_irq(&conf->device_lock);
|
|
|
|
/* Cannot call generic_make_request directly
|
|
|
|
* as that will be queued in __make_request
|
|
|
|
* and subsequent mempool_alloc might block waiting
|
|
|
|
* for it. So hand bio over to raid1d.
|
|
|
|
*/
|
|
|
|
reschedule_retry(r1_bio);
|
|
|
|
|
|
|
|
r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
|
|
|
|
|
|
|
|
r1_bio->master_bio = bio;
|
2013-02-06 07:19:29 +08:00
|
|
|
r1_bio->sectors = bio_sectors(bio) - sectors_handled;
|
2011-07-28 09:31:48 +08:00
|
|
|
r1_bio->state = 0;
|
|
|
|
r1_bio->mddev = mddev;
|
2013-10-12 06:44:27 +08:00
|
|
|
r1_bio->sector = bio->bi_iter.bi_sector +
|
|
|
|
sectors_handled;
|
2011-07-28 09:31:48 +08:00
|
|
|
goto read_again;
|
|
|
|
} else
|
|
|
|
generic_make_request(read_bio);
|
2011-09-12 18:12:01 +08:00
|
|
|
return;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* WRITE:
|
|
|
|
*/
|
2011-10-11 13:50:01 +08:00
|
|
|
if (conf->pending_count >= max_queued_requests) {
|
|
|
|
md_wakeup_thread(mddev->thread);
|
|
|
|
wait_event(conf->wait_barrier,
|
|
|
|
conf->pending_count < max_queued_requests);
|
|
|
|
}
|
2011-07-28 09:31:48 +08:00
|
|
|
/* first select target devices under rcu_lock and
|
2005-04-17 06:20:36 +08:00
|
|
|
* inc refcount on their rdev. Record them by setting
|
|
|
|
* bios[x] to bio
|
2011-07-28 09:31:48 +08:00
|
|
|
* If there are known/acknowledged bad blocks on any device on
|
|
|
|
* which we have seen a write error, we want to avoid writing those
|
|
|
|
* blocks.
|
|
|
|
* This potentially requires several writes to write around
|
|
|
|
* the bad blocks. Each set of writes gets its own r1bio
|
|
|
|
* with a set of bios attached.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2011-04-18 16:25:43 +08:00
|
|
|
|
2011-12-23 07:17:56 +08:00
|
|
|
disks = conf->raid_disks * 2;
|
2008-04-30 15:52:32 +08:00
|
|
|
retry_write:
|
raid1: Rewrite the implementation of iobarrier.
2013-11-15 14:55:02 +08:00
|
|
|
r1_bio->start_next_window = start_next_window;
|
2008-04-30 15:52:32 +08:00
|
|
|
blocked_rdev = NULL;
|
2005-04-17 06:20:36 +08:00
|
|
|
rcu_read_lock();
|
2011-07-28 09:31:48 +08:00
|
|
|
max_sectors = r1_bio->sectors;
|
2005-04-17 06:20:36 +08:00
|
|
|
for (i = 0; i < disks; i++) {
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
|
2008-04-30 15:52:32 +08:00
|
|
|
if (rdev && unlikely(test_bit(Blocked, &rdev->flags))) {
|
|
|
|
atomic_inc(&rdev->nr_pending);
|
|
|
|
blocked_rdev = rdev;
|
|
|
|
break;
|
|
|
|
}
|
2011-07-28 09:31:48 +08:00
|
|
|
r1_bio->bios[i] = NULL;
|
2015-04-28 14:48:34 +08:00
|
|
|
if (!rdev || test_bit(Faulty, &rdev->flags)) {
|
2011-12-23 07:17:56 +08:00
|
|
|
if (i < conf->raid_disks)
|
|
|
|
set_bit(R1BIO_Degraded, &r1_bio->state);
|
2011-07-28 09:31:48 +08:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
atomic_inc(&rdev->nr_pending);
|
|
|
|
if (test_bit(WriteErrorSeen, &rdev->flags)) {
|
|
|
|
sector_t first_bad;
|
|
|
|
int bad_sectors;
|
|
|
|
int is_bad;
|
|
|
|
|
|
|
|
is_bad = is_badblock(rdev, r1_bio->sector,
|
|
|
|
max_sectors,
|
|
|
|
&first_bad, &bad_sectors);
|
|
|
|
if (is_bad < 0) {
|
|
|
|
/* mustn't write here until the bad block is
|
|
|
|
* acknowledged */
|
|
|
|
set_bit(BlockedBadBlocks, &rdev->flags);
|
|
|
|
blocked_rdev = rdev;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
if (is_bad && first_bad <= r1_bio->sector) {
|
|
|
|
/* Cannot write here at all */
|
|
|
|
bad_sectors -= (r1_bio->sector - first_bad);
|
|
|
|
if (bad_sectors < max_sectors)
|
|
|
|
/* mustn't write more than bad_sectors
|
|
|
|
* to other devices yet
|
|
|
|
*/
|
|
|
|
max_sectors = bad_sectors;
|
2006-01-06 16:20:46 +08:00
|
|
|
rdev_dec_pending(rdev, mddev);
|
2011-07-28 09:31:48 +08:00
|
|
|
/* We don't set R1BIO_Degraded as that
|
|
|
|
* only applies if the disk is
|
|
|
|
* missing, so it might be re-added,
|
|
|
|
* and we want to know to recover this
|
|
|
|
* chunk.
|
|
|
|
* In this case the device is here,
|
|
|
|
* and the fact that this chunk is not
|
|
|
|
* in-sync is recorded in the bad
|
|
|
|
* block log
|
|
|
|
*/
|
|
|
|
continue;
|
2010-05-18 13:27:13 +08:00
|
|
|
}
|
2011-07-28 09:31:48 +08:00
|
|
|
if (is_bad) {
|
|
|
|
int good_sectors = first_bad - r1_bio->sector;
|
|
|
|
if (good_sectors < max_sectors)
|
|
|
|
max_sectors = good_sectors;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
r1_bio->bios[i] = bio;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
|
|
|
|
2008-04-30 15:52:32 +08:00
|
|
|
if (unlikely(blocked_rdev)) {
|
|
|
|
/* Wait for this device to become unblocked */
|
|
|
|
int j;
|
raid1: Rewrite the implementation of iobarrier.
2013-11-15 14:55:02 +08:00
|
|
|
sector_t old = start_next_window;
|
2008-04-30 15:52:32 +08:00
|
|
|
|
|
|
|
for (j = 0; j < i; j++)
|
|
|
|
if (r1_bio->bios[j])
|
|
|
|
rdev_dec_pending(conf->mirrors[j].rdev, mddev);
|
2011-07-28 09:31:48 +08:00
|
|
|
r1_bio->state = 0;
|
2013-10-12 06:44:27 +08:00
|
|
|
allow_barrier(conf, start_next_window, bio->bi_iter.bi_sector);
|
2008-04-30 15:52:32 +08:00
|
|
|
md_wait_for_blocked_rdev(blocked_rdev, mddev);
|
raid1: Rewrite the implementation of iobarrier.
2013-11-15 14:55:02 +08:00
|
|
|
start_next_window = wait_barrier(conf, bio);
|
|
|
|
/*
|
|
|
|
* We must make sure the multiple r1bios of this bio have
|
|
|
|
* the same value of bi_phys_segments
|
|
|
|
*/
|
|
|
|
if (bio->bi_phys_segments && old &&
|
|
|
|
old != start_next_window)
|
|
|
|
/* Wait for the former r1bio(s) to complete */
|
|
|
|
wait_event(conf->wait_barrier,
|
|
|
|
bio->bi_phys_segments == 1);
|
2008-04-30 15:52:32 +08:00
|
|
|
goto retry_write;
|
|
|
|
}
|
|
|
|
|
2011-07-28 09:31:48 +08:00
|
|
|
if (max_sectors < r1_bio->sectors) {
|
|
|
|
/* We are splitting this write into multiple parts, so
|
|
|
|
* we need to prepare for allocating another r1_bio.
|
|
|
|
*/
|
|
|
|
r1_bio->sectors = max_sectors;
|
|
|
|
spin_lock_irq(&conf->device_lock);
|
|
|
|
if (bio->bi_phys_segments == 0)
|
|
|
|
bio->bi_phys_segments = 2;
|
|
|
|
else
|
|
|
|
bio->bi_phys_segments++;
|
|
|
|
spin_unlock_irq(&conf->device_lock);
|
2005-06-22 08:17:23 +08:00
|
|
|
}
|
2013-10-12 06:44:27 +08:00
|
|
|
sectors_handled = r1_bio->sector + max_sectors - bio->bi_iter.bi_sector;
|
2005-09-10 07:23:47 +08:00
|
|
|
|
2010-10-19 09:54:01 +08:00
|
|
|
atomic_set(&r1_bio->remaining, 1);
|
2005-09-10 07:23:47 +08:00
|
|
|
atomic_set(&r1_bio->behind_remaining, 0);
|
2005-06-22 08:17:12 +08:00
|
|
|
|
2011-07-28 09:31:48 +08:00
|
|
|
first_clone = 1;
|
2005-04-17 06:20:36 +08:00
|
|
|
for (i = 0; i < disks; i++) {
|
|
|
|
struct bio *mbio;
|
|
|
|
if (!r1_bio->bios[i])
|
|
|
|
continue;
|
|
|
|
|
2010-10-26 15:31:13 +08:00
|
|
|
mbio = bio_clone_mddev(bio, GFP_NOIO, mddev);
|
2013-10-12 06:44:27 +08:00
|
|
|
bio_trim(mbio, r1_bio->sector - bio->bi_iter.bi_sector, max_sectors);
|
2011-07-28 09:31:48 +08:00
|
|
|
|
|
|
|
if (first_clone) {
|
|
|
|
/* do behind I/O ?
|
|
|
|
* Not if there are too many, or cannot
|
|
|
|
* allocate memory, or a reader on WriteMostly
|
|
|
|
* is waiting for behind writes to flush */
|
|
|
|
if (bitmap &&
|
|
|
|
(atomic_read(&bitmap->behind_writes)
|
|
|
|
< mddev->bitmap_info.max_write_behind) &&
|
|
|
|
!waitqueue_active(&bitmap->behind_wait))
|
|
|
|
alloc_behind_pages(mbio, r1_bio);
|
|
|
|
|
|
|
|
bitmap_startwrite(bitmap, r1_bio->sector,
|
|
|
|
r1_bio->sectors,
|
|
|
|
test_bit(R1BIO_BehindIO,
|
|
|
|
&r1_bio->state));
|
|
|
|
first_clone = 0;
|
|
|
|
}
|
2011-07-28 09:32:10 +08:00
|
|
|
if (r1_bio->behind_bvecs) {
|
2005-09-10 07:23:47 +08:00
|
|
|
struct bio_vec *bvec;
|
|
|
|
int j;
|
|
|
|
|
2012-09-06 06:22:02 +08:00
|
|
|
/*
|
|
|
|
* We trimmed the bio, so _all is legit
|
2005-09-10 07:23:47 +08:00
|
|
|
*/
|
2013-02-07 04:23:11 +08:00
|
|
|
bio_for_each_segment_all(bvec, mbio, j)
|
2011-07-28 09:32:10 +08:00
|
|
|
bvec->bv_page = r1_bio->behind_bvecs[j].bv_page;
|
2005-09-10 07:23:47 +08:00
|
|
|
if (test_bit(WriteMostly, &conf->mirrors[i].rdev->flags))
|
|
|
|
atomic_inc(&r1_bio->behind_remaining);
|
|
|
|
}
|
|
|
|
|
2011-07-28 09:31:48 +08:00
|
|
|
r1_bio->bios[i] = mbio;
|
|
|
|
|
2013-10-12 06:44:27 +08:00
|
|
|
mbio->bi_iter.bi_sector = (r1_bio->sector +
|
2011-07-28 09:31:48 +08:00
|
|
|
conf->mirrors[i].rdev->data_offset);
|
|
|
|
mbio->bi_bdev = conf->mirrors[i].rdev->bdev;
|
|
|
|
mbio->bi_end_io = raid1_end_write_request;
|
2013-02-21 10:28:09 +08:00
|
|
|
mbio->bi_rw =
|
|
|
|
WRITE | do_flush_fua | do_sync | do_discard | do_same;
|
2011-07-28 09:31:48 +08:00
|
|
|
mbio->bi_private = r1_bio;
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
atomic_inc(&r1_bio->remaining);
|
2012-08-02 06:33:20 +08:00
|
|
|
|
|
|
|
cb = blk_check_plugged(raid1_unplug, mddev, sizeof(*plug));
|
|
|
|
if (cb)
|
|
|
|
plug = container_of(cb, struct raid1_plug_cb, cb);
|
|
|
|
else
|
|
|
|
plug = NULL;
|
2010-10-19 09:54:01 +08:00
|
|
|
spin_lock_irqsave(&conf->device_lock, flags);
|
2012-08-02 06:33:20 +08:00
|
|
|
if (plug) {
|
|
|
|
bio_list_add(&plug->pending, mbio);
|
|
|
|
plug->pending_cnt++;
|
|
|
|
} else {
|
|
|
|
bio_list_add(&conf->pending_bio_list, mbio);
|
|
|
|
conf->pending_count++;
|
|
|
|
}
|
2010-10-19 09:54:01 +08:00
|
|
|
spin_unlock_irqrestore(&conf->device_lock, flags);
|
2012-08-02 06:33:20 +08:00
|
|
|
if (!plug)
|
2012-07-03 15:45:31 +08:00
|
|
|
md_wakeup_thread(mddev->thread);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2011-09-10 15:21:23 +08:00
|
|
|
/* Mustn't call r1_bio_write_done before this next test,
|
|
|
|
* as it could result in the bio being freed.
|
|
|
|
*/
|
2013-02-06 07:19:29 +08:00
|
|
|
if (sectors_handled < bio_sectors(bio)) {
|
2011-09-10 15:21:23 +08:00
|
|
|
r1_bio_write_done(r1_bio);
|
2011-07-28 09:31:48 +08:00
|
|
|
/* We need another r1_bio. It has already been counted
|
|
|
|
* in bio->bi_phys_segments
|
|
|
|
*/
|
|
|
|
r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
|
|
|
|
r1_bio->master_bio = bio;
|
2013-02-06 07:19:29 +08:00
|
|
|
r1_bio->sectors = bio_sectors(bio) - sectors_handled;
|
2011-07-28 09:31:48 +08:00
|
|
|
r1_bio->state = 0;
|
|
|
|
r1_bio->mddev = mddev;
|
2013-10-12 06:44:27 +08:00
|
|
|
r1_bio->sector = bio->bi_iter.bi_sector + sectors_handled;
|
2011-07-28 09:31:48 +08:00
|
|
|
goto retry_write;
|
|
|
|
}
|
|
|
|
|
2011-09-10 15:21:23 +08:00
|
|
|
r1_bio_write_done(r1_bio);
|
|
|
|
|
|
|
|
/* In case raid1d snuck in to freeze_array */
|
|
|
|
wake_up(&conf->wait_barrier);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2016-01-21 05:52:20 +08:00
|
|
|
static void raid1_status(struct seq_file *seq, struct mddev *mddev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2005-04-17 06:20:36 +08:00
|
|
|
int i;
|
|
|
|
|
|
|
|
seq_printf(seq, " [%d/%d] [", conf->raid_disks,
|
2006-10-03 16:15:52 +08:00
|
|
|
conf->raid_disks - mddev->degraded);
|
2006-09-01 12:27:36 +08:00
|
|
|
rcu_read_lock();
|
|
|
|
for (i = 0; i < conf->raid_disks; i++) {
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
|
2005-04-17 06:20:36 +08:00
|
|
|
seq_printf(seq, "%s",
|
2006-09-01 12:27:36 +08:00
|
|
|
rdev && test_bit(In_sync, &rdev->flags) ? "U" : "_");
|
|
|
|
}
|
|
|
|
rcu_read_unlock();
|
2005-04-17 06:20:36 +08:00
|
|
|
seq_printf(seq, "]");
|
|
|
|
}
|
|
|
|
|
2016-01-21 05:52:20 +08:00
|
|
|
static void raid1_error(struct mddev *mddev, struct md_rdev *rdev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
char b[BDEVNAME_SIZE];
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2015-07-27 09:48:52 +08:00
|
|
|
unsigned long flags;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If it is not operational, then we have already marked it as dead
|
|
|
|
* else if it is the last working disk, ignore the error, let the
|
|
|
|
* next level up know.
|
|
|
|
* else mark the drive as failed
|
|
|
|
*/
|
2005-11-09 13:39:31 +08:00
|
|
|
if (test_bit(In_sync, &rdev->flags)
|
2009-01-09 05:31:11 +08:00
|
|
|
&& (conf->raid_disks - mddev->degraded) == 1) {
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Don't fail the drive, act as though we were just a
|
2009-01-09 05:31:11 +08:00
|
|
|
* normal single drive.
|
|
|
|
* However don't try a recovery from this drive as
|
|
|
|
* it is very likely to fail.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2011-07-27 09:00:36 +08:00
|
|
|
conf->recovery_disabled = mddev->recovery_disabled;
|
2005-04-17 06:20:36 +08:00
|
|
|
return;
|
2009-01-09 05:31:11 +08:00
|
|
|
}
|
2011-07-28 09:31:48 +08:00
|
|
|
set_bit(Blocked, &rdev->flags);
|
2015-07-27 09:48:52 +08:00
|
|
|
spin_lock_irqsave(&conf->device_lock, flags);
|
2006-10-03 16:15:53 +08:00
|
|
|
if (test_and_clear_bit(In_sync, &rdev->flags)) {
|
2005-04-17 06:20:36 +08:00
|
|
|
mddev->degraded++;
|
2007-05-10 18:15:50 +08:00
|
|
|
set_bit(Faulty, &rdev->flags);
|
|
|
|
} else
|
|
|
|
set_bit(Faulty, &rdev->flags);
|
2015-07-27 09:48:52 +08:00
|
|
|
spin_unlock_irqrestore(&conf->device_lock, flags);
|
2014-07-31 08:16:29 +08:00
|
|
|
/*
|
|
|
|
* if recovery is running, make sure it aborts.
|
|
|
|
*/
|
|
|
|
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
|
2006-10-03 16:15:46 +08:00
|
|
|
set_bit(MD_CHANGE_DEVS, &mddev->flags);
|
2015-08-14 09:11:10 +08:00
|
|
|
set_bit(MD_CHANGE_PENDING, &mddev->flags);
|
2011-01-14 06:14:33 +08:00
|
|
|
printk(KERN_ALERT
|
|
|
|
"md/raid1:%s: Disk failure on %s, disabling device.\n"
|
|
|
|
"md/raid1:%s: Operation continuing on %d devices.\n",
|
2010-05-03 12:30:35 +08:00
|
|
|
mdname(mddev), bdevname(rdev->bdev, b),
|
|
|
|
mdname(mddev), conf->raid_disks - mddev->degraded);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static void print_conf(struct r1conf *conf)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
|
2010-05-03 12:30:35 +08:00
|
|
|
printk(KERN_DEBUG "RAID1 conf printout:\n");
|
2005-04-17 06:20:36 +08:00
|
|
|
if (!conf) {
|
2010-05-03 12:30:35 +08:00
|
|
|
printk(KERN_DEBUG "(!conf)\n");
|
2005-04-17 06:20:36 +08:00
|
|
|
return;
|
|
|
|
}
|
2010-05-03 12:30:35 +08:00
|
|
|
printk(KERN_DEBUG " --- wd:%d rd:%d\n", conf->raid_disks - conf->mddev->degraded,
|
2005-04-17 06:20:36 +08:00
|
|
|
conf->raid_disks);
|
|
|
|
|
2006-09-01 12:27:36 +08:00
|
|
|
rcu_read_lock();
|
2005-04-17 06:20:36 +08:00
|
|
|
for (i = 0; i < conf->raid_disks; i++) {
|
|
|
|
char b[BDEVNAME_SIZE];
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
|
2006-09-01 12:27:36 +08:00
|
|
|
if (rdev)
|
2010-05-03 12:30:35 +08:00
|
|
|
printk(KERN_DEBUG " disk %d, wo:%d, o:%d, dev:%s\n",
|
2006-09-01 12:27:36 +08:00
|
|
|
i, !test_bit(In_sync, &rdev->flags),
|
|
|
|
!test_bit(Faulty, &rdev->flags),
|
|
|
|
bdevname(rdev->bdev,b));
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-09-01 12:27:36 +08:00
|
|
|
rcu_read_unlock();
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static void close_sync(struct r1conf *conf)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
raid1: Rewrite the implementation of iobarrier.
2013-11-15 14:55:02 +08:00
|
|
|
wait_barrier(conf, NULL);
|
|
|
|
allow_barrier(conf, 0, 0);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
mempool_destroy(conf->r1buf_pool);
|
|
|
|
conf->r1buf_pool = NULL;
|
raid1: Rewrite the implementation of iobarrier.
2013-11-15 14:55:02 +08:00
|
|
|
|
2014-09-04 14:30:38 +08:00
|
|
|
spin_lock_irq(&conf->resync_lock);
|
2015-09-16 22:20:05 +08:00
|
|
|
conf->next_resync = MaxSector - 2 * NEXT_NORMALIO_DISTANCE;
|
raid1: Rewrite the implementation of iobarrier.
Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2013-11-15 14:55:02 +08:00
|
|
|
conf->start_next_window = MaxSector;
|
2014-09-04 14:30:38 +08:00
|
|
|
conf->current_window_requests +=
|
|
|
|
conf->next_window_requests;
|
|
|
|
conf->next_window_requests = 0;
|
|
|
|
spin_unlock_irq(&conf->resync_lock);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2011-10-11 13:47:53 +08:00
|
|
|
static int raid1_spare_active(struct mddev *mddev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
int i;
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2010-08-18 09:56:59 +08:00
|
|
|
int count = 0;
|
|
|
|
unsigned long flags;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
2014-09-30 12:23:59 +08:00
|
|
|
* Find all failed disks within the RAID1 configuration
|
2006-09-01 12:27:36 +08:00
|
|
|
* and mark them readable.
|
|
|
|
* Called under mddev lock, so rcu protection not needed.
|
2015-07-27 09:48:52 +08:00
|
|
|
* device_lock used to avoid races with raid1_end_read_request
|
|
|
|
* which expects 'In_sync' flags and ->degraded to be consistent.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2015-07-27 09:48:52 +08:00
|
|
|
spin_lock_irqsave(&conf->device_lock, flags);
|
2005-04-17 06:20:36 +08:00
|
|
|
for (i = 0; i < conf->raid_disks; i++) {
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = conf->mirrors[i].rdev;
|
2011-12-23 07:17:57 +08:00
|
|
|
struct md_rdev *repl = conf->mirrors[conf->raid_disks + i].rdev;
|
|
|
|
if (repl
|
2014-10-30 07:51:31 +08:00
|
|
|
&& !test_bit(Candidate, &repl->flags)
|
2011-12-23 07:17:57 +08:00
|
|
|
&& repl->recovery_offset == MaxSector
|
|
|
|
&& !test_bit(Faulty, &repl->flags)
|
|
|
|
&& !test_and_set_bit(In_sync, &repl->flags)) {
|
|
|
|
/* replacement has just become active */
|
|
|
|
if (!rdev ||
|
|
|
|
!test_and_clear_bit(In_sync, &rdev->flags))
|
|
|
|
count++;
|
|
|
|
if (rdev) {
|
|
|
|
/* Replaced device not technically
|
|
|
|
* faulty, but we need to be sure
|
|
|
|
* it gets removed and never re-added
|
|
|
|
*/
|
|
|
|
set_bit(Faulty, &rdev->flags);
|
|
|
|
sysfs_notify_dirent_safe(
|
|
|
|
rdev->sysfs_state);
|
|
|
|
}
|
|
|
|
}
|
2006-09-01 12:27:36 +08:00
|
|
|
if (rdev
|
2013-10-24 09:55:17 +08:00
|
|
|
&& rdev->recovery_offset == MaxSector
|
2006-09-01 12:27:36 +08:00
|
|
|
&& !test_bit(Faulty, &rdev->flags)
|
2006-10-03 16:15:53 +08:00
|
|
|
&& !test_and_set_bit(In_sync, &rdev->flags)) {
|
2010-08-18 09:56:59 +08:00
|
|
|
count++;
|
2011-07-27 09:00:36 +08:00
|
|
|
sysfs_notify_dirent_safe(rdev->sysfs_state);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
}
|
2010-08-18 09:56:59 +08:00
|
|
|
mddev->degraded -= count;
|
|
|
|
spin_unlock_irqrestore(&conf->device_lock, flags);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
print_conf(conf);
|
2010-08-18 09:56:59 +08:00
|
|
|
return count;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2011-10-11 13:47:53 +08:00
|
|
|
static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2008-06-28 06:31:33 +08:00
|
|
|
int err = -EEXIST;
|
2005-06-22 08:17:25 +08:00
|
|
|
int mirror = 0;
|
2012-07-31 08:03:52 +08:00
|
|
|
struct raid1_info *p;
|
2008-06-28 06:31:31 +08:00
|
|
|
int first = 0;
|
2011-12-23 07:17:56 +08:00
|
|
|
int last = conf->raid_disks - 1;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-07-27 09:00:36 +08:00
|
|
|
if (mddev->recovery_disabled == conf->recovery_disabled)
|
|
|
|
return -EBUSY;
|
|
|
|
|
2016-01-14 08:00:07 +08:00
|
|
|
if (md_integrity_add_rdev(rdev, mddev))
|
|
|
|
return -ENXIO;
|
|
|
|
|
2008-06-28 06:31:31 +08:00
|
|
|
if (rdev->raid_disk >= 0)
|
|
|
|
first = last = rdev->raid_disk;
|
|
|
|
|
2015-08-21 23:33:39 +08:00
|
|
|
/*
|
|
|
|
* find the disk ... but prefer rdev->saved_raid_disk
|
|
|
|
* if possible.
|
|
|
|
*/
|
|
|
|
if (rdev->saved_raid_disk >= 0 &&
|
|
|
|
rdev->saved_raid_disk >= first &&
|
|
|
|
conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
|
|
|
|
first = last = rdev->saved_raid_disk;
|
|
|
|
|
2011-12-23 07:17:57 +08:00
|
|
|
for (mirror = first; mirror <= last; mirror++) {
|
|
|
|
p = conf->mirrors+mirror;
|
|
|
|
if (!p->rdev) {
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2013-05-03 03:19:24 +08:00
|
|
|
if (mddev->gendisk)
|
|
|
|
disk_stack_limits(mddev->gendisk, rdev->bdev,
|
|
|
|
rdev->data_offset << 9);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
p->head_position = 0;
|
|
|
|
rdev->raid_disk = mirror;
|
2008-06-28 06:31:33 +08:00
|
|
|
err = 0;
|
2005-11-29 05:44:13 +08:00
|
|
|
/* As all devices are equivalent, we don't need a full recovery
|
|
|
|
* if this was recently any drive of the array
|
|
|
|
*/
|
|
|
|
if (rdev->saved_raid_disk < 0)
|
2005-06-22 08:17:25 +08:00
|
|
|
conf->fullsync = 1;
|
2005-11-09 13:39:27 +08:00
|
|
|
rcu_assign_pointer(p->rdev, rdev);
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
|
|
|
}
|
2011-12-23 07:17:57 +08:00
|
|
|
if (test_bit(WantReplacement, &p->rdev->flags) &&
|
|
|
|
p[conf->raid_disks].rdev == NULL) {
|
|
|
|
/* Add this device as a replacement */
|
|
|
|
clear_bit(In_sync, &rdev->flags);
|
|
|
|
set_bit(Replacement, &rdev->flags);
|
|
|
|
rdev->raid_disk = mirror;
|
|
|
|
err = 0;
|
|
|
|
conf->fullsync = 1;
|
|
|
|
rcu_assign_pointer(p[conf->raid_disks].rdev, rdev);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
2013-05-03 03:19:24 +08:00
|
|
|
if (mddev->queue && blk_queue_discard(bdev_get_queue(rdev->bdev)))
|
2012-10-11 10:28:54 +08:00
|
|
|
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, mddev->queue);
|
2005-04-17 06:20:36 +08:00
|
|
|
print_conf(conf);
|
2008-06-28 06:31:33 +08:00
|
|
|
return err;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2011-12-23 07:17:51 +08:00
|
|
|
static int raid1_remove_disk(struct mddev *mddev, struct md_rdev *rdev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2005-04-17 06:20:36 +08:00
|
|
|
int err = 0;
|
2011-12-23 07:17:51 +08:00
|
|
|
int number = rdev->raid_disk;
|
2012-07-31 08:03:52 +08:00
|
|
|
struct raid1_info *p = conf->mirrors + number;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-12-23 07:17:56 +08:00
|
|
|
if (rdev != p->rdev)
|
|
|
|
p = conf->mirrors + conf->raid_disks + number;
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
print_conf(conf);
|
2011-12-23 07:17:51 +08:00
|
|
|
if (rdev == p->rdev) {
|
2005-11-09 13:39:31 +08:00
|
|
|
if (test_bit(In_sync, &rdev->flags) ||
|
2005-04-17 06:20:36 +08:00
|
|
|
atomic_read(&rdev->nr_pending)) {
|
|
|
|
err = -EBUSY;
|
|
|
|
goto abort;
|
|
|
|
}
|
2010-10-26 12:46:20 +08:00
|
|
|
/* Only remove non-faulty devices if recovery
|
md: restart recovery cleanly after device failure.
When we get any IO error during a recovery (rebuilding a spare), we abort
the recovery and restart it.
For RAID6 (and multi-drive RAID1) it may not be best to restart at the
beginning: when multiple failures can be tolerated, the recovery may be
able to continue and re-doing all that has already been done doesn't make
sense.
We already have the infrastructure to record where a recovery is up to
and restart from there, but it is not being used properly.
This is because:
- We sometimes abort with MD_RECOVERY_ERR rather than just MD_RECOVERY_INTR,
which causes the recovery not to be checkpointed.
- We remove spares and then re-add them, which loses important state
information.
The distinction between MD_RECOVERY_ERR and MD_RECOVERY_INTR really isn't
needed. If there is an error, the relevant drive will be marked as
Faulty, and that is enough to ensure correct handling of the error. So we
first remove MD_RECOVERY_ERR, changing some of the uses of it to
MD_RECOVERY_INTR.
Then we cause the attempt to remove a non-faulty device from an array to
fail (unless recovery is impossible as the array is too degraded). Then
when remove_and_add_spares attempts to remove the devices on which
recovery can continue, it will fail, they will remain in place, and
recovery will continue on them as desired.
Issue: If we are halfway through rebuilding a spare and another drive
fails, and a new spare is immediately available, do we want to:
1/ complete the current rebuild, then go back and rebuild the new spare or
2/ restart the rebuild from the start and rebuild both devices in
parallel.
Both options can be argued for. The code currently takes option 2 as
a/ this requires the least code change
b/ this results in a minimally-degraded array in minimal time.
Cc: "Eivind Sarto" <ivan@kasenna.com>
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-05-24 04:04:39 +08:00
|
|
|
* is not possible.
|
|
|
|
*/
|
|
|
|
if (!test_bit(Faulty, &rdev->flags) &&
|
2011-07-27 09:00:36 +08:00
|
|
|
mddev->recovery_disabled != conf->recovery_disabled &&
|
md: restart recovery cleanly after device failure.
Signed-off-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-05-24 04:04:39 +08:00
|
|
|
mddev->degraded < conf->raid_disks) {
|
|
|
|
err = -EBUSY;
|
|
|
|
goto abort;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
p->rdev = NULL;
|
2005-05-01 23:59:04 +08:00
|
|
|
synchronize_rcu();
|
2005-04-17 06:20:36 +08:00
|
|
|
if (atomic_read(&rdev->nr_pending)) {
|
|
|
|
/* lost the race, try later */
|
|
|
|
err = -EBUSY;
|
|
|
|
p->rdev = rdev;
|
2009-08-03 08:59:47 +08:00
|
|
|
goto abort;
|
2011-12-23 07:17:57 +08:00
|
|
|
} else if (conf->mirrors[conf->raid_disks + number].rdev) {
|
|
|
|
/* We just removed a device that is being replaced.
|
|
|
|
* Move down the replacement. We drain all IO before
|
|
|
|
* doing this to avoid confusion.
|
|
|
|
*/
|
|
|
|
struct md_rdev *repl =
|
|
|
|
conf->mirrors[conf->raid_disks + number].rdev;
|
2013-06-12 09:01:22 +08:00
|
|
|
freeze_array(conf, 0);
|
2011-12-23 07:17:57 +08:00
|
|
|
clear_bit(Replacement, &repl->flags);
|
|
|
|
p->rdev = repl;
|
|
|
|
conf->mirrors[conf->raid_disks + number].rdev = NULL;
|
2013-06-12 09:01:22 +08:00
|
|
|
unfreeze_array(conf);
|
2011-12-23 07:17:57 +08:00
|
|
|
clear_bit(WantReplacement, &rdev->flags);
|
|
|
|
} else
|
2011-12-23 07:17:56 +08:00
|
|
|
clear_bit(WantReplacement, &rdev->flags);
|
2011-03-17 18:11:05 +08:00
|
|
|
err = md_integrity_register(mddev);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
abort:
|
|
|
|
|
|
|
|
print_conf(conf);
|
|
|
|
return err;
|
|
|
|
}
|
|
|
|
|
2015-07-20 21:29:37 +08:00
|
|
|
static void end_sync_read(struct bio *bio)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-10-11 13:48:43 +08:00
|
|
|
struct r1bio *r1_bio = bio->bi_private;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-10-07 11:22:55 +08:00
|
|
|
update_head_pos(r1_bio->read_disk, r1_bio);
|
2011-10-07 11:22:53 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* we have read a block, now it needs to be re-written,
|
|
|
|
* or re-read if the read failed.
|
|
|
|
* We don't do much here, just schedule handling by raid1d
|
|
|
|
*/
|
2015-07-20 21:29:37 +08:00
|
|
|
if (!bio->bi_error)
|
2005-04-17 06:20:36 +08:00
|
|
|
set_bit(R1BIO_Uptodate, &r1_bio->state);
|
2006-01-06 16:20:26 +08:00
|
|
|
|
|
|
|
if (atomic_dec_and_test(&r1_bio->remaining))
|
|
|
|
reschedule_retry(r1_bio);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2015-07-20 21:29:37 +08:00
|
|
|
static void end_sync_write(struct bio *bio)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2015-07-20 21:29:37 +08:00
|
|
|
int uptodate = !bio->bi_error;
|
2011-10-11 13:48:43 +08:00
|
|
|
struct r1bio *r1_bio = bio->bi_private;
|
2011-10-11 13:47:53 +08:00
|
|
|
struct mddev *mddev = r1_bio->mddev;
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2005-04-17 06:20:36 +08:00
|
|
|
int mirror=0;
|
2011-07-28 09:31:49 +08:00
|
|
|
sector_t first_bad;
|
|
|
|
int bad_sectors;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-10-07 11:22:53 +08:00
|
|
|
mirror = find_bio_disk(r1_bio, bio);
|
|
|
|
|
2006-03-31 18:31:57 +08:00
|
|
|
if (!uptodate) {
|
2010-10-19 07:03:39 +08:00
|
|
|
sector_t sync_blocks = 0;
|
2006-03-31 18:31:57 +08:00
|
|
|
sector_t s = r1_bio->sector;
|
|
|
|
long sectors_to_go = r1_bio->sectors;
|
|
|
|
/* make sure these bits don't get cleared. */
|
|
|
|
do {
|
2006-07-10 19:44:18 +08:00
|
|
|
bitmap_end_sync(mddev->bitmap, s,
|
2006-03-31 18:31:57 +08:00
|
|
|
&sync_blocks, 1);
|
|
|
|
s += sync_blocks;
|
|
|
|
sectors_to_go -= sync_blocks;
|
|
|
|
} while (sectors_to_go > 0);
|
2011-07-28 09:33:00 +08:00
|
|
|
set_bit(WriteErrorSeen,
|
|
|
|
&conf->mirrors[mirror].rdev->flags);
|
2011-12-23 07:17:57 +08:00
|
|
|
if (!test_and_set_bit(WantReplacement,
|
|
|
|
&conf->mirrors[mirror].rdev->flags))
|
|
|
|
set_bit(MD_RECOVERY_NEEDED, &
|
|
|
|
mddev->recovery);
|
2011-07-28 09:33:00 +08:00
|
|
|
set_bit(R1BIO_WriteError, &r1_bio->state);
|
2011-07-28 09:31:49 +08:00
|
|
|
} else if (is_badblock(conf->mirrors[mirror].rdev,
|
|
|
|
r1_bio->sector,
|
|
|
|
r1_bio->sectors,
|
2011-07-28 09:33:42 +08:00
|
|
|
&first_bad, &bad_sectors) &&
|
|
|
|
!is_badblock(conf->mirrors[r1_bio->read_disk].rdev,
|
|
|
|
r1_bio->sector,
|
|
|
|
r1_bio->sectors,
|
|
|
|
&first_bad, &bad_sectors)
|
|
|
|
)
|
2011-07-28 09:31:49 +08:00
|
|
|
set_bit(R1BIO_MadeGood, &r1_bio->state);
|
2005-08-05 03:53:34 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
if (atomic_dec_and_test(&r1_bio->remaining)) {
|
2011-07-28 09:31:49 +08:00
|
|
|
int s = r1_bio->sectors;
|
2011-07-28 09:33:00 +08:00
|
|
|
if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
|
|
|
|
test_bit(R1BIO_WriteError, &r1_bio->state))
|
2011-07-28 09:31:49 +08:00
|
|
|
reschedule_retry(r1_bio);
|
|
|
|
else {
|
|
|
|
put_buf(r1_bio);
|
|
|
|
md_done_sync(mddev, s, uptodate);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:45:26 +08:00
|
|
|
static int r1_sync_page_io(struct md_rdev *rdev, sector_t sector,
|
2011-07-28 09:33:00 +08:00
|
|
|
int sectors, struct page *page, int rw)
|
|
|
|
{
|
|
|
|
if (sync_page_io(rdev, sector, sectors << 9, page, rw, false))
|
|
|
|
/* success */
|
|
|
|
return 1;
|
2011-12-23 07:17:57 +08:00
|
|
|
if (rw == WRITE) {
|
2011-07-28 09:33:00 +08:00
|
|
|
set_bit(WriteErrorSeen, &rdev->flags);
|
2011-12-23 07:17:57 +08:00
|
|
|
if (!test_and_set_bit(WantReplacement,
|
|
|
|
&rdev->flags))
|
|
|
|
set_bit(MD_RECOVERY_NEEDED, &
|
|
|
|
rdev->mddev->recovery);
|
|
|
|
}
|
2011-07-28 09:33:00 +08:00
|
|
|
/* need to record an error - either for the block or the device */
|
|
|
|
if (!rdev_set_badblocks(rdev, sector, sectors, 0))
|
|
|
|
md_error(rdev->mddev, rdev);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:48:43 +08:00
|
|
|
static int fix_sync_read_error(struct r1bio *r1_bio)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-05-11 12:40:44 +08:00
|
|
|
/* Try some synchronous reads of other devices to get
|
|
|
|
* good data, much like with normal read errors. Only
|
|
|
|
* read into the pages we already have so we don't
|
|
|
|
* need to re-issue the read request.
|
|
|
|
* We don't need to freeze the array, because being in an
|
|
|
|
* active sync request, there is no normal IO, and
|
|
|
|
* no overlapping syncs.
|
2011-07-28 09:31:48 +08:00
|
|
|
* We don't need to check is_badblock() again as we
|
|
|
|
* made sure that anything with a bad block in range
|
|
|
|
* will have bi_end_io clear.
|
2011-05-11 12:40:44 +08:00
|
|
|
*/
|
2011-10-11 13:47:53 +08:00
|
|
|
struct mddev *mddev = r1_bio->mddev;
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2011-05-11 12:40:44 +08:00
|
|
|
struct bio *bio = r1_bio->bios[r1_bio->read_disk];
|
|
|
|
sector_t sect = r1_bio->sector;
|
|
|
|
int sectors = r1_bio->sectors;
|
|
|
|
int idx = 0;
|
|
|
|
|
|
|
|
while(sectors) {
|
|
|
|
int s = sectors;
|
|
|
|
int d = r1_bio->read_disk;
|
|
|
|
int success = 0;
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev;
|
2011-05-11 12:48:56 +08:00
|
|
|
int start;
|
2011-05-11 12:40:44 +08:00
|
|
|
|
|
|
|
if (s > (PAGE_SIZE>>9))
|
|
|
|
s = PAGE_SIZE >> 9;
|
|
|
|
do {
|
|
|
|
if (r1_bio->bios[d]->bi_end_io == end_sync_read) {
|
|
|
|
/* No rcu protection needed here; devices
|
|
|
|
* can only be removed when no resync is
|
|
|
|
* active, and resync is currently active
|
|
|
|
*/
|
|
|
|
rdev = conf->mirrors[d].rdev;
|
2011-07-27 09:00:36 +08:00
|
|
|
if (sync_page_io(rdev, sect, s<<9,
|
2011-05-11 12:40:44 +08:00
|
|
|
bio->bi_io_vec[idx].bv_page,
|
|
|
|
READ, false)) {
|
|
|
|
success = 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
d++;
|
2011-12-23 07:17:56 +08:00
|
|
|
if (d == conf->raid_disks * 2)
|
2011-05-11 12:40:44 +08:00
|
|
|
d = 0;
|
|
|
|
} while (!success && d != r1_bio->read_disk);
|
|
|
|
|
2011-05-11 12:48:56 +08:00
|
|
|
if (!success) {
|
2011-05-11 12:40:44 +08:00
|
|
|
char b[BDEVNAME_SIZE];
|
2011-07-28 09:33:42 +08:00
|
|
|
int abort = 0;
|
|
|
|
/* Cannot read from anywhere, this block is lost.
|
|
|
|
* Record a bad block on each device. If that doesn't
|
|
|
|
* work just disable and interrupt the recovery.
|
|
|
|
* Don't fail devices as that won't really help.
|
|
|
|
*/
|
2011-05-11 12:40:44 +08:00
|
|
|
printk(KERN_ALERT "md/raid1:%s: %s: unrecoverable I/O read error"
|
|
|
|
" for block %llu\n",
|
|
|
|
mdname(mddev),
|
|
|
|
bdevname(bio->bi_bdev, b),
|
|
|
|
(unsigned long long)r1_bio->sector);
|
2011-12-23 07:17:56 +08:00
|
|
|
for (d = 0; d < conf->raid_disks * 2; d++) {
|
2011-07-28 09:33:42 +08:00
|
|
|
rdev = conf->mirrors[d].rdev;
|
|
|
|
if (!rdev || test_bit(Faulty, &rdev->flags))
|
|
|
|
continue;
|
|
|
|
if (!rdev_set_badblocks(rdev, sect, s, 0))
|
|
|
|
abort = 1;
|
|
|
|
}
|
|
|
|
if (abort) {
|
2011-10-26 08:54:39 +08:00
|
|
|
conf->recovery_disabled =
|
|
|
|
mddev->recovery_disabled;
|
2011-07-28 09:33:42 +08:00
|
|
|
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
|
|
|
|
md_done_sync(mddev, r1_bio->sectors, 0);
|
|
|
|
put_buf(r1_bio);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
/* Try next page */
|
|
|
|
sectors -= s;
|
|
|
|
sect += s;
|
|
|
|
idx++;
|
|
|
|
continue;
|
2006-01-06 16:20:26 +08:00
|
|
|
}
|
2011-05-11 12:48:56 +08:00
|
|
|
|
|
|
|
start = d;
|
|
|
|
/* write it back and re-read */
|
|
|
|
while (d != r1_bio->read_disk) {
|
|
|
|
if (d == 0)
|
2011-12-23 07:17:56 +08:00
|
|
|
d = conf->raid_disks * 2;
|
2011-05-11 12:48:56 +08:00
|
|
|
d--;
|
|
|
|
if (r1_bio->bios[d]->bi_end_io != end_sync_read)
|
|
|
|
continue;
|
|
|
|
rdev = conf->mirrors[d].rdev;
|
2011-07-28 09:33:00 +08:00
|
|
|
if (r1_sync_page_io(rdev, sect, s,
|
|
|
|
bio->bi_io_vec[idx].bv_page,
|
|
|
|
WRITE) == 0) {
|
2011-05-11 12:48:56 +08:00
|
|
|
r1_bio->bios[d]->bi_end_io = NULL;
|
|
|
|
rdev_dec_pending(rdev, mddev);
|
2011-07-27 09:00:36 +08:00
|
|
|
}
|
2011-05-11 12:48:56 +08:00
|
|
|
}
|
|
|
|
d = start;
|
|
|
|
while (d != r1_bio->read_disk) {
|
|
|
|
if (d == 0)
|
2011-12-23 07:17:56 +08:00
|
|
|
d = conf->raid_disks * 2;
|
2011-05-11 12:48:56 +08:00
|
|
|
d--;
|
|
|
|
if (r1_bio->bios[d]->bi_end_io != end_sync_read)
|
|
|
|
continue;
|
|
|
|
rdev = conf->mirrors[d].rdev;
|
2011-07-28 09:33:00 +08:00
|
|
|
if (r1_sync_page_io(rdev, sect, s,
|
|
|
|
bio->bi_io_vec[idx].bv_page,
|
|
|
|
READ) != 0)
|
2011-07-27 09:00:36 +08:00
|
|
|
atomic_add(s, &rdev->corrected_errors);
|
2011-05-11 12:48:56 +08:00
|
|
|
}
|
2011-05-11 12:40:44 +08:00
|
|
|
sectors -= s;
|
|
|
|
sect += s;
|
|
|
|
idx ++;
|
|
|
|
}
|
2011-05-11 12:48:56 +08:00
|
|
|
set_bit(R1BIO_Uptodate, &r1_bio->state);
|
2015-07-20 21:29:37 +08:00
|
|
|
bio->bi_error = 0;
|
2011-05-11 12:40:44 +08:00
|
|
|
return 1;
|
|
|
|
}
|
|
|
|
|
2014-09-09 11:54:11 +08:00
|
|
|
static void process_checks(struct r1bio *r1_bio)
|
2011-05-11 12:40:44 +08:00
|
|
|
{
|
|
|
|
/* We have read all readable devices. If we haven't
|
|
|
|
* got the block, then there is no hope left.
|
|
|
|
* If we have, then we want to do a comparison
|
|
|
|
* and skip the write if everything is the same.
|
|
|
|
* If any blocks failed to read, then we need to
|
|
|
|
* attempt an over-write
|
|
|
|
*/
|
2011-10-11 13:47:53 +08:00
|
|
|
struct mddev *mddev = r1_bio->mddev;
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2011-05-11 12:40:44 +08:00
|
|
|
int primary;
|
|
|
|
int i;
|
2012-04-12 14:04:47 +08:00
|
|
|
int vcnt;
|
2011-05-11 12:40:44 +08:00
|
|
|
|
2013-07-17 13:19:29 +08:00
|
|
|
/* Fix variable parts of all bios */
|
|
|
|
vcnt = (r1_bio->sectors + PAGE_SIZE / 512 - 1) >> (PAGE_SHIFT - 9);
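		/* Worked example (assuming 4KiB pages, i.e. PAGE_SHIFT == 12):
		 * each page holds PAGE_SIZE/512 == 8 sectors, so for
		 * r1_bio->sectors == 100 this computes
		 * vcnt = (100 + 8 - 1) >> 3 = 107 >> 3 = 13 pages,
		 * rounding up to cover the partial last page.
		 */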
|
|
|
|
for (i = 0; i < conf->raid_disks * 2; i++) {
|
|
|
|
int j;
|
|
|
|
int size;
|
2015-07-20 21:29:37 +08:00
|
|
|
int error;
|
2013-07-17 13:19:29 +08:00
|
|
|
struct bio *b = r1_bio->bios[i];
|
|
|
|
if (b->bi_end_io != end_sync_read)
|
|
|
|
continue;
|
2015-07-20 21:29:37 +08:00
|
|
|
/* fixup the bio for reuse, but preserve errno */
|
|
|
|
error = b->bi_error;
|
2013-07-17 13:19:29 +08:00
|
|
|
bio_reset(b);
|
2015-07-20 21:29:37 +08:00
|
|
|
b->bi_error = error;
|
2013-07-17 13:19:29 +08:00
|
|
|
b->bi_vcnt = vcnt;
|
2013-10-12 06:44:27 +08:00
|
|
|
b->bi_iter.bi_size = r1_bio->sectors << 9;
|
|
|
|
b->bi_iter.bi_sector = r1_bio->sector +
|
2013-07-17 13:19:29 +08:00
|
|
|
conf->mirrors[i].rdev->data_offset;
|
|
|
|
b->bi_bdev = conf->mirrors[i].rdev->bdev;
|
|
|
|
b->bi_end_io = end_sync_read;
|
|
|
|
b->bi_private = r1_bio;
|
|
|
|
|
2013-10-12 06:44:27 +08:00
|
|
|
size = b->bi_iter.bi_size;
|
2013-07-17 13:19:29 +08:00
|
|
|
for (j = 0; j < vcnt ; j++) {
|
|
|
|
struct bio_vec *bi;
|
|
|
|
bi = &b->bi_io_vec[j];
|
|
|
|
bi->bv_offset = 0;
|
|
|
|
if (size > PAGE_SIZE)
|
|
|
|
bi->bv_len = PAGE_SIZE;
|
|
|
|
else
|
|
|
|
bi->bv_len = size;
|
|
|
|
size -= PAGE_SIZE;
|
|
|
|
}
|
|
|
|
}
|
2011-12-23 07:17:56 +08:00
|
|
|
for (primary = 0; primary < conf->raid_disks * 2; primary++)
|
2011-05-11 12:40:44 +08:00
|
|
|
if (r1_bio->bios[primary]->bi_end_io == end_sync_read &&
|
2015-07-20 21:29:37 +08:00
|
|
|
!r1_bio->bios[primary]->bi_error) {
|
2011-05-11 12:40:44 +08:00
|
|
|
r1_bio->bios[primary]->bi_end_io = NULL;
|
|
|
|
rdev_dec_pending(conf->mirrors[primary].rdev, mddev);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
r1_bio->read_disk = primary;
|
2011-12-23 07:17:56 +08:00
|
|
|
for (i = 0; i < conf->raid_disks * 2; i++) {
|
2011-05-11 12:48:56 +08:00
|
|
|
int j;
|
|
|
|
struct bio *pbio = r1_bio->bios[primary];
|
|
|
|
struct bio *sbio = r1_bio->bios[i];
|
2015-07-20 21:29:37 +08:00
|
|
|
int error = sbio->bi_error;
|
2011-05-11 12:40:44 +08:00
|
|
|
|
2012-09-12 02:26:12 +08:00
|
|
|
if (sbio->bi_end_io != end_sync_read)
|
2011-05-11 12:48:56 +08:00
|
|
|
continue;
|
2015-07-20 21:29:37 +08:00
|
|
|
/* Now we can 'fixup' the error value */
|
|
|
|
sbio->bi_error = 0;
|
2011-05-11 12:48:56 +08:00
|
|
|
|
2015-07-20 21:29:37 +08:00
|
|
|
if (!error) {
|
2011-05-11 12:48:56 +08:00
|
|
|
for (j = vcnt; j-- ; ) {
|
|
|
|
struct page *p, *s;
|
|
|
|
p = pbio->bi_io_vec[j].bv_page;
|
|
|
|
s = sbio->bi_io_vec[j].bv_page;
|
|
|
|
if (memcmp(page_address(p),
|
|
|
|
page_address(s),
|
2012-04-01 23:39:05 +08:00
|
|
|
sbio->bi_io_vec[j].bv_len))
|
2011-05-11 12:48:56 +08:00
|
|
|
break;
|
2006-01-06 16:20:22 +08:00
|
|
|
}
|
2011-05-11 12:48:56 +08:00
|
|
|
} else
|
|
|
|
j = 0;
|
|
|
|
if (j >= 0)
|
2012-10-11 11:17:59 +08:00
|
|
|
atomic64_add(r1_bio->sectors, &mddev->resync_mismatches);
|
2011-05-11 12:48:56 +08:00
|
|
|
if (j < 0 || (test_bit(MD_RECOVERY_CHECK, &mddev->recovery)
|
2015-07-20 21:29:37 +08:00
|
|
|
&& !error)) {
|
2011-05-11 12:48:56 +08:00
|
|
|
/* No need to write to this device. */
|
|
|
|
sbio->bi_end_io = NULL;
|
|
|
|
rdev_dec_pending(conf->mirrors[i].rdev, mddev);
|
|
|
|
continue;
|
|
|
|
}
|
2012-09-11 04:49:33 +08:00
|
|
|
|
|
|
|
bio_copy_data(sbio, pbio);
|
2011-05-11 12:48:56 +08:00
|
|
|
}
|
2011-05-11 12:40:44 +08:00
|
|
|
}
|
|
|
|
|
2011-10-11 13:48:43 +08:00
|
|
|
static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio)
|
2011-05-11 12:40:44 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2011-05-11 12:40:44 +08:00
|
|
|
int i;
|
2011-12-23 07:17:56 +08:00
|
|
|
int disks = conf->raid_disks * 2;
|
2011-05-11 12:40:44 +08:00
|
|
|
struct bio *bio, *wbio;
|
|
|
|
|
|
|
|
bio = r1_bio->bios[r1_bio->read_disk];
|
|
|
|
|
|
|
|
if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
|
|
|
|
/* ouch - failed to read all of that. */
|
|
|
|
if (!fix_sync_read_error(r1_bio))
|
|
|
|
return;
|
2011-05-11 12:50:37 +08:00
|
|
|
|
|
|
|
if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
|
2014-09-09 11:54:11 +08:00
|
|
|
process_checks(r1_bio);
|
|
|
|
|
2006-01-06 16:20:26 +08:00
|
|
|
/*
|
|
|
|
* schedule writes
|
|
|
|
*/
|
2005-04-17 06:20:36 +08:00
|
|
|
atomic_set(&r1_bio->remaining, 1);
|
|
|
|
for (i = 0; i < disks ; i++) {
|
|
|
|
wbio = r1_bio->bios[i];
|
2006-01-06 16:20:21 +08:00
|
|
|
if (wbio->bi_end_io == NULL ||
|
|
|
|
(wbio->bi_end_io == end_sync_read &&
|
|
|
|
(i == r1_bio->read_disk ||
|
|
|
|
!test_bit(MD_RECOVERY_SYNC, &mddev->recovery))))
|
2005-04-17 06:20:36 +08:00
|
|
|
continue;
|
|
|
|
|
2006-01-06 16:20:21 +08:00
|
|
|
wbio->bi_rw = WRITE;
|
|
|
|
wbio->bi_end_io = end_sync_write;
|
2005-04-17 06:20:36 +08:00
|
|
|
atomic_inc(&r1_bio->remaining);
|
2013-02-06 07:19:29 +08:00
|
|
|
md_sync_acct(conf->mirrors[i].rdev->bdev, bio_sectors(wbio));
|
2005-06-22 08:17:23 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
generic_make_request(wbio);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (atomic_dec_and_test(&r1_bio->remaining)) {
|
2005-06-22 08:17:23 +08:00
|
|
|
/* if we're here, all write(s) have completed, so clean up */
|
2012-07-19 13:59:18 +08:00
|
|
|
int s = r1_bio->sectors;
|
|
|
|
if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
|
|
|
|
test_bit(R1BIO_WriteError, &r1_bio->state))
|
|
|
|
reschedule_retry(r1_bio);
|
|
|
|
else {
|
|
|
|
put_buf(r1_bio);
|
|
|
|
md_done_sync(mddev, s, 1);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This is a kernel thread which:
|
|
|
|
*
|
|
|
|
* 1. Retries failed read operations on working mirrors.
|
|
|
|
 *	2.	Updates the raid superblock when problems are encountered.
|
2011-07-28 09:31:48 +08:00
|
|
|
* 3. Performs writes following reads for array synchronising.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static void fix_read_error(struct r1conf *conf, int read_disk,
|
2006-10-03 16:15:51 +08:00
|
|
|
sector_t sect, int sectors)
|
|
|
|
{
|
2011-10-11 13:47:53 +08:00
|
|
|
struct mddev *mddev = conf->mddev;
|
2006-10-03 16:15:51 +08:00
|
|
|
while(sectors) {
|
|
|
|
int s = sectors;
|
|
|
|
int d = read_disk;
|
|
|
|
int success = 0;
|
|
|
|
int start;
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev;
|
2006-10-03 16:15:51 +08:00
|
|
|
|
|
|
|
if (s > (PAGE_SIZE>>9))
|
|
|
|
s = PAGE_SIZE >> 9;
|
|
|
|
|
|
|
|
do {
|
|
|
|
/* Note: no rcu protection needed here
|
|
|
|
* as this is synchronous in the raid1d thread
|
|
|
|
* which is the thread that might remove
|
|
|
|
* a device. If raid1d ever becomes multi-threaded....
|
|
|
|
*/
|
2011-07-28 09:31:48 +08:00
|
|
|
sector_t first_bad;
|
|
|
|
int bad_sectors;
|
|
|
|
|
2006-10-03 16:15:51 +08:00
|
|
|
rdev = conf->mirrors[d].rdev;
|
|
|
|
if (rdev &&
|
2012-05-22 11:55:03 +08:00
|
|
|
(test_bit(In_sync, &rdev->flags) ||
|
|
|
|
(!test_bit(Faulty, &rdev->flags) &&
|
|
|
|
rdev->recovery_offset >= sect + s)) &&
|
2011-07-28 09:31:48 +08:00
|
|
|
is_badblock(rdev, sect, s,
|
|
|
|
&first_bad, &bad_sectors) == 0 &&
|
2011-01-14 06:14:33 +08:00
|
|
|
sync_page_io(rdev, sect, s<<9,
|
|
|
|
conf->tmppage, READ, false))
|
2006-10-03 16:15:51 +08:00
|
|
|
success = 1;
|
|
|
|
else {
|
|
|
|
d++;
|
2011-12-23 07:17:56 +08:00
|
|
|
if (d == conf->raid_disks * 2)
|
2006-10-03 16:15:51 +08:00
|
|
|
d = 0;
|
|
|
|
}
|
|
|
|
} while (!success && d != read_disk);
|
|
|
|
|
|
|
|
if (!success) {
|
2011-07-28 09:33:00 +08:00
|
|
|
/* Cannot read from anywhere - mark it bad */
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = conf->mirrors[read_disk].rdev;
|
2011-07-28 09:33:00 +08:00
|
|
|
if (!rdev_set_badblocks(rdev, sect, s, 0))
|
|
|
|
md_error(mddev, rdev);
|
2006-10-03 16:15:51 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
/* write it back and re-read */
|
|
|
|
start = d;
|
|
|
|
while (d != read_disk) {
|
|
|
|
if (d==0)
|
2011-12-23 07:17:56 +08:00
|
|
|
d = conf->raid_disks * 2;
|
2006-10-03 16:15:51 +08:00
|
|
|
d--;
|
|
|
|
rdev = conf->mirrors[d].rdev;
|
|
|
|
if (rdev &&
|
2014-09-18 09:09:04 +08:00
|
|
|
!test_bit(Faulty, &rdev->flags))
|
2011-07-28 09:33:00 +08:00
|
|
|
r1_sync_page_io(rdev, sect, s,
|
|
|
|
conf->tmppage, WRITE);
|
2006-10-03 16:15:51 +08:00
|
|
|
}
|
|
|
|
d = start;
|
|
|
|
while (d != read_disk) {
|
|
|
|
char b[BDEVNAME_SIZE];
|
|
|
|
if (d==0)
|
2011-12-23 07:17:56 +08:00
|
|
|
d = conf->raid_disks * 2;
|
2006-10-03 16:15:51 +08:00
|
|
|
d--;
|
|
|
|
rdev = conf->mirrors[d].rdev;
|
|
|
|
if (rdev &&
|
2014-09-18 09:09:04 +08:00
|
|
|
!test_bit(Faulty, &rdev->flags)) {
|
2011-07-28 09:33:00 +08:00
|
|
|
if (r1_sync_page_io(rdev, sect, s,
|
|
|
|
conf->tmppage, READ)) {
|
2006-10-03 16:15:51 +08:00
|
|
|
atomic_add(s, &rdev->corrected_errors);
|
|
|
|
printk(KERN_INFO
|
2010-05-03 12:30:35 +08:00
|
|
|
"md/raid1:%s: read error corrected "
|
2006-10-03 16:15:51 +08:00
|
|
|
"(%d sectors at %llu on %s)\n",
|
|
|
|
mdname(mddev), s,
|
2006-10-29 01:38:32 +08:00
|
|
|
(unsigned long long)(sect +
|
|
|
|
rdev->data_offset),
|
2006-10-03 16:15:51 +08:00
|
|
|
bdevname(rdev->bdev, b));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
sectors -= s;
|
|
|
|
sect += s;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:48:43 +08:00
|
|
|
static int narrow_write_error(struct r1bio *r1_bio, int i)
|
2011-07-28 09:32:41 +08:00
|
|
|
{
|
2011-10-11 13:47:53 +08:00
|
|
|
struct mddev *mddev = r1_bio->mddev;
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = conf->mirrors[i].rdev;
|
2011-07-28 09:32:41 +08:00
|
|
|
|
|
|
|
/* bio has the data to be written to device 'i' where
|
|
|
|
* we just recently had a write error.
|
|
|
|
* We repeatedly clone the bio and trim down to one block,
|
|
|
|
* then try the write. Where the write fails we record
|
|
|
|
* a bad block.
|
|
|
|
* It is conceivable that the bio doesn't exactly align with
|
|
|
|
* blocks. We must handle this somehow.
|
|
|
|
*
|
|
|
|
* We currently own a reference on the rdev.
|
|
|
|
*/
|
|
|
|
|
|
|
|
int block_sectors;
|
|
|
|
sector_t sector;
|
|
|
|
int sectors;
|
|
|
|
int sect_to_write = r1_bio->sectors;
|
|
|
|
int ok = 1;
|
|
|
|
|
|
|
|
if (rdev->badblocks.shift < 0)
|
|
|
|
return 0;
|
|
|
|
|
2015-02-13 01:02:09 +08:00
|
|
|
block_sectors = roundup(1 << rdev->badblocks.shift,
|
|
|
|
bdev_logical_block_size(rdev->bdev) >> 9);
|
2011-07-28 09:32:41 +08:00
|
|
|
sector = r1_bio->sector;
|
|
|
|
sectors = ((sector + block_sectors)
|
|
|
|
& ~(sector_t)(block_sectors - 1))
|
|
|
|
- sector;
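	/* Worked example (assumed values): with badblocks.shift == 3 and
	 * 512-byte logical blocks, block_sectors == 8.  For
	 * r1_bio->sector == 1003 this gives
	 * sectors = ((1003 + 8) & ~7) - 1003 = 1008 - 1003 = 5,
	 * so the first write stops at the next 8-sector boundary; later
	 * iterations of the loop below write full block_sectors-sized chunks.
	 */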
|
|
|
|
|
|
|
|
while (sect_to_write) {
|
|
|
|
struct bio *wbio;
|
|
|
|
if (sectors > sect_to_write)
|
|
|
|
sectors = sect_to_write;
|
|
|
|
		/* Write at 'sector' for 'sectors' */
|
|
|
|
|
2012-09-11 06:17:11 +08:00
|
|
|
if (test_bit(R1BIO_BehindIO, &r1_bio->state)) {
|
|
|
|
unsigned vcnt = r1_bio->behind_page_count;
|
|
|
|
struct bio_vec *vec = r1_bio->behind_bvecs;
|
|
|
|
|
|
|
|
while (!vec->bv_page) {
|
|
|
|
vec++;
|
|
|
|
vcnt--;
|
|
|
|
}
|
|
|
|
|
|
|
|
wbio = bio_alloc_mddev(GFP_NOIO, vcnt, mddev);
|
|
|
|
memcpy(wbio->bi_io_vec, vec, vcnt * sizeof(struct bio_vec));
|
|
|
|
|
|
|
|
wbio->bi_vcnt = vcnt;
|
|
|
|
} else {
|
|
|
|
wbio = bio_clone_mddev(r1_bio->master_bio, GFP_NOIO, mddev);
|
|
|
|
}
|
|
|
|
|
2011-07-28 09:32:41 +08:00
|
|
|
wbio->bi_rw = WRITE;
|
2013-10-12 06:44:27 +08:00
|
|
|
wbio->bi_iter.bi_sector = r1_bio->sector;
|
|
|
|
wbio->bi_iter.bi_size = r1_bio->sectors << 9;
|
2011-07-28 09:32:41 +08:00
|
|
|
|
2013-08-08 02:14:32 +08:00
|
|
|
bio_trim(wbio, sector - r1_bio->sector, sectors);
|
2013-10-12 06:44:27 +08:00
|
|
|
wbio->bi_iter.bi_sector += rdev->data_offset;
|
2011-07-28 09:32:41 +08:00
|
|
|
wbio->bi_bdev = rdev->bdev;
|
2015-10-21 00:09:12 +08:00
|
|
|
if (submit_bio_wait(WRITE, wbio) < 0)
|
2011-07-28 09:32:41 +08:00
|
|
|
/* failure! */
|
|
|
|
ok = rdev_set_badblocks(rdev, sector,
|
|
|
|
sectors, 0)
|
|
|
|
&& ok;
|
|
|
|
|
|
|
|
bio_put(wbio);
|
|
|
|
sect_to_write -= sectors;
|
|
|
|
sector += sectors;
|
|
|
|
sectors = block_sectors;
|
|
|
|
}
|
|
|
|
return ok;
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static void handle_sync_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
|
2011-07-28 09:38:13 +08:00
|
|
|
{
|
|
|
|
int m;
|
|
|
|
int s = r1_bio->sectors;
|
2011-12-23 07:17:56 +08:00
|
|
|
for (m = 0; m < conf->raid_disks * 2 ; m++) {
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = conf->mirrors[m].rdev;
|
2011-07-28 09:38:13 +08:00
|
|
|
struct bio *bio = r1_bio->bios[m];
|
|
|
|
if (bio->bi_end_io == NULL)
|
|
|
|
continue;
|
2015-07-20 21:29:37 +08:00
|
|
|
if (!bio->bi_error &&
|
2011-07-28 09:38:13 +08:00
|
|
|
test_bit(R1BIO_MadeGood, &r1_bio->state)) {
|
2012-05-21 07:27:00 +08:00
|
|
|
rdev_clear_badblocks(rdev, r1_bio->sector, s, 0);
|
2011-07-28 09:38:13 +08:00
|
|
|
}
|
2015-07-20 21:29:37 +08:00
|
|
|
if (bio->bi_error &&
|
2011-07-28 09:38:13 +08:00
|
|
|
test_bit(R1BIO_WriteError, &r1_bio->state)) {
|
|
|
|
if (!rdev_set_badblocks(rdev, r1_bio->sector, s, 0))
|
|
|
|
md_error(conf->mddev, rdev);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
put_buf(r1_bio);
|
|
|
|
md_done_sync(conf->mddev, s, 1);
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
|
2011-07-28 09:38:13 +08:00
|
|
|
{
|
|
|
|
int m;
|
2015-08-14 09:11:10 +08:00
|
|
|
bool fail = false;
|
2011-12-23 07:17:56 +08:00
|
|
|
for (m = 0; m < conf->raid_disks * 2 ; m++)
|
2011-07-28 09:38:13 +08:00
|
|
|
if (r1_bio->bios[m] == IO_MADE_GOOD) {
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = conf->mirrors[m].rdev;
|
2011-07-28 09:38:13 +08:00
|
|
|
rdev_clear_badblocks(rdev,
|
|
|
|
r1_bio->sector,
|
2012-05-21 07:27:00 +08:00
|
|
|
r1_bio->sectors, 0);
|
2011-07-28 09:38:13 +08:00
|
|
|
rdev_dec_pending(rdev, conf->mddev);
|
|
|
|
} else if (r1_bio->bios[m] != NULL) {
|
|
|
|
/* This drive got a write error. We need to
|
|
|
|
* narrow down and record precise write
|
|
|
|
* errors.
|
|
|
|
*/
|
2015-08-14 09:11:10 +08:00
|
|
|
fail = true;
|
2011-07-28 09:38:13 +08:00
|
|
|
if (!narrow_write_error(r1_bio, m)) {
|
|
|
|
md_error(conf->mddev,
|
|
|
|
conf->mirrors[m].rdev);
|
|
|
|
/* an I/O failed, we can't clear the bitmap */
|
|
|
|
set_bit(R1BIO_Degraded, &r1_bio->state);
|
|
|
|
}
|
|
|
|
rdev_dec_pending(conf->mirrors[m].rdev,
|
|
|
|
conf->mddev);
|
|
|
|
}
|
2015-08-14 09:11:10 +08:00
|
|
|
if (fail) {
|
|
|
|
spin_lock_irq(&conf->device_lock);
|
|
|
|
list_add(&r1_bio->retry_list, &conf->bio_end_io_list);
|
2016-02-29 23:43:58 +08:00
|
|
|
conf->nr_queued++;
|
2015-08-14 09:11:10 +08:00
|
|
|
spin_unlock_irq(&conf->device_lock);
|
|
|
|
md_wakeup_thread(conf->mddev->thread);
|
2015-10-24 13:02:16 +08:00
|
|
|
} else {
|
|
|
|
if (test_bit(R1BIO_WriteError, &r1_bio->state))
|
|
|
|
close_write(r1_bio);
|
2015-08-14 09:11:10 +08:00
|
|
|
raid_end_bio_io(r1_bio);
|
2015-10-24 13:02:16 +08:00
|
|
|
}
|
2011-07-28 09:38:13 +08:00
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static void handle_read_error(struct r1conf *conf, struct r1bio *r1_bio)
|
2011-07-28 09:38:13 +08:00
|
|
|
{
|
|
|
|
int disk;
|
|
|
|
int max_sectors;
|
2011-10-11 13:47:53 +08:00
|
|
|
struct mddev *mddev = conf->mddev;
|
2011-07-28 09:38:13 +08:00
|
|
|
struct bio *bio;
|
|
|
|
char b[BDEVNAME_SIZE];
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev;
|
2011-07-28 09:38:13 +08:00
|
|
|
|
|
|
|
clear_bit(R1BIO_ReadError, &r1_bio->state);
|
|
|
|
/* we got a read error. Maybe the drive is bad. Maybe just
|
|
|
|
* the block and we can fix it.
|
|
|
|
* We freeze all other IO, and try reading the block from
|
|
|
|
* other devices. When we find one, we re-write
|
|
|
|
	 * and check whether that fixes the read error.
|
|
|
|
* This is all done synchronously while the array is
|
|
|
|
* frozen
|
|
|
|
*/
|
|
|
|
if (mddev->ro == 0) {
|
2013-06-12 09:01:22 +08:00
|
|
|
freeze_array(conf, 1);
|
2011-07-28 09:38:13 +08:00
|
|
|
fix_read_error(conf, r1_bio->read_disk,
|
|
|
|
r1_bio->sector, r1_bio->sectors);
|
|
|
|
unfreeze_array(conf);
|
|
|
|
} else
|
|
|
|
md_error(mddev, conf->mirrors[r1_bio->read_disk].rdev);
|
2012-10-11 10:44:30 +08:00
|
|
|
rdev_dec_pending(conf->mirrors[r1_bio->read_disk].rdev, conf->mddev);
|
2011-07-28 09:38:13 +08:00
|
|
|
|
|
|
|
bio = r1_bio->bios[r1_bio->read_disk];
|
|
|
|
bdevname(bio->bi_bdev, b);
|
|
|
|
read_more:
|
|
|
|
disk = read_balance(conf, r1_bio, &max_sectors);
|
|
|
|
if (disk == -1) {
|
|
|
|
printk(KERN_ALERT "md/raid1:%s: %s: unrecoverable I/O"
|
|
|
|
" read error for block %llu\n",
|
|
|
|
mdname(mddev), b, (unsigned long long)r1_bio->sector);
|
|
|
|
raid_end_bio_io(r1_bio);
|
|
|
|
} else {
|
|
|
|
const unsigned long do_sync
|
|
|
|
= r1_bio->master_bio->bi_rw & REQ_SYNC;
|
|
|
|
if (bio) {
|
|
|
|
r1_bio->bios[r1_bio->read_disk] =
|
|
|
|
mddev->ro ? IO_BLOCKED : NULL;
|
|
|
|
bio_put(bio);
|
|
|
|
}
|
|
|
|
r1_bio->read_disk = disk;
|
|
|
|
bio = bio_clone_mddev(r1_bio->master_bio, GFP_NOIO, mddev);
|
2013-10-12 06:44:27 +08:00
|
|
|
bio_trim(bio, r1_bio->sector - bio->bi_iter.bi_sector,
|
|
|
|
max_sectors);
|
2011-07-28 09:38:13 +08:00
|
|
|
r1_bio->bios[r1_bio->read_disk] = bio;
|
|
|
|
rdev = conf->mirrors[disk].rdev;
|
|
|
|
printk_ratelimited(KERN_ERR
|
|
|
|
"md/raid1:%s: redirecting sector %llu"
|
|
|
|
" to other mirror: %s\n",
|
|
|
|
mdname(mddev),
|
|
|
|
(unsigned long long)r1_bio->sector,
|
|
|
|
bdevname(rdev->bdev, b));
|
2013-10-12 06:44:27 +08:00
|
|
|
bio->bi_iter.bi_sector = r1_bio->sector + rdev->data_offset;
|
2011-07-28 09:38:13 +08:00
|
|
|
bio->bi_bdev = rdev->bdev;
|
|
|
|
bio->bi_end_io = raid1_end_read_request;
|
|
|
|
bio->bi_rw = READ | do_sync;
|
|
|
|
bio->bi_private = r1_bio;
|
|
|
|
if (max_sectors < r1_bio->sectors) {
|
|
|
|
/* Drat - have to split this up more */
|
|
|
|
struct bio *mbio = r1_bio->master_bio;
|
|
|
|
int sectors_handled = (r1_bio->sector + max_sectors
|
2013-10-12 06:44:27 +08:00
|
|
|
- mbio->bi_iter.bi_sector);
|
2011-07-28 09:38:13 +08:00
|
|
|
r1_bio->sectors = max_sectors;
|
|
|
|
spin_lock_irq(&conf->device_lock);
|
|
|
|
if (mbio->bi_phys_segments == 0)
|
|
|
|
mbio->bi_phys_segments = 2;
|
|
|
|
else
|
|
|
|
mbio->bi_phys_segments++;
|
|
|
|
spin_unlock_irq(&conf->device_lock);
|
|
|
|
generic_make_request(bio);
|
|
|
|
bio = NULL;
|
|
|
|
|
|
|
|
r1_bio = mempool_alloc(conf->r1bio_pool, GFP_NOIO);
|
|
|
|
|
|
|
|
r1_bio->master_bio = mbio;
|
2013-02-06 07:19:29 +08:00
|
|
|
r1_bio->sectors = bio_sectors(mbio) - sectors_handled;
|
2011-07-28 09:38:13 +08:00
|
|
|
r1_bio->state = 0;
|
|
|
|
set_bit(R1BIO_ReadError, &r1_bio->state);
|
|
|
|
r1_bio->mddev = mddev;
|
2013-10-12 06:44:27 +08:00
|
|
|
r1_bio->sector = mbio->bi_iter.bi_sector +
|
|
|
|
sectors_handled;
|
2011-07-28 09:38:13 +08:00
|
|
|
|
|
|
|
goto read_more;
|
|
|
|
} else
|
|
|
|
generic_make_request(bio);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2012-10-11 10:34:00 +08:00
|
|
|
static void raid1d(struct md_thread *thread)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2012-10-11 10:34:00 +08:00
|
|
|
struct mddev *mddev = thread->mddev;
|
2011-10-11 13:48:43 +08:00
|
|
|
struct r1bio *r1_bio;
|
2005-04-17 06:20:36 +08:00
|
|
|
unsigned long flags;
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2005-04-17 06:20:36 +08:00
|
|
|
struct list_head *head = &conf->retry_list;
|
2011-04-18 16:25:41 +08:00
|
|
|
struct blk_plug plug;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
md_check_recovery(mddev);
|
2011-04-18 16:25:41 +08:00
|
|
|
|
2015-08-14 09:11:10 +08:00
|
|
|
if (!list_empty_careful(&conf->bio_end_io_list) &&
|
|
|
|
!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
|
|
|
|
LIST_HEAD(tmp);
|
|
|
|
spin_lock_irqsave(&conf->device_lock, flags);
|
|
|
|
if (!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
|
2016-02-29 23:43:58 +08:00
|
|
|
while (!list_empty(&conf->bio_end_io_list)) {
|
|
|
|
list_move(conf->bio_end_io_list.prev, &tmp);
|
|
|
|
conf->nr_queued--;
|
|
|
|
}
|
2015-08-14 09:11:10 +08:00
|
|
|
}
|
|
|
|
spin_unlock_irqrestore(&conf->device_lock, flags);
|
|
|
|
while (!list_empty(&tmp)) {
|
2015-10-02 03:17:43 +08:00
|
|
|
r1_bio = list_first_entry(&tmp, struct r1bio,
|
|
|
|
retry_list);
|
2015-08-14 09:11:10 +08:00
|
|
|
list_del(&r1_bio->retry_list);
|
2015-10-24 13:02:16 +08:00
|
|
|
if (mddev->degraded)
|
|
|
|
set_bit(R1BIO_Degraded, &r1_bio->state);
|
|
|
|
if (test_bit(R1BIO_WriteError, &r1_bio->state))
|
|
|
|
close_write(r1_bio);
|
2015-08-14 09:11:10 +08:00
|
|
|
raid_end_bio_io(r1_bio);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-04-18 16:25:41 +08:00
|
|
|
blk_start_plug(&plug);
|
2005-04-17 06:20:36 +08:00
|
|
|
for (;;) {
|
2005-06-22 08:17:23 +08:00
|
|
|
|
2012-07-31 15:08:14 +08:00
|
|
|
flush_pending_writes(conf);
|
2005-06-22 08:17:23 +08:00
|
|
|
|
2008-03-05 06:29:29 +08:00
|
|
|
spin_lock_irqsave(&conf->device_lock, flags);
|
|
|
|
if (list_empty(head)) {
|
|
|
|
spin_unlock_irqrestore(&conf->device_lock, flags);
|
2005-04-17 06:20:36 +08:00
|
|
|
break;
|
2008-03-05 06:29:29 +08:00
|
|
|
}
|
2011-10-11 13:48:43 +08:00
|
|
|
r1_bio = list_entry(head->prev, struct r1bio, retry_list);
|
2005-04-17 06:20:36 +08:00
|
|
|
list_del(head->prev);
|
2006-01-06 16:20:19 +08:00
|
|
|
conf->nr_queued--;
|
2005-04-17 06:20:36 +08:00
|
|
|
spin_unlock_irqrestore(&conf->device_lock, flags);
|
|
|
|
|
|
|
|
mddev = r1_bio->mddev;
|
2009-06-16 14:54:21 +08:00
|
|
|
conf = mddev->private;
|
2011-07-28 09:31:49 +08:00
|
|
|
if (test_bit(R1BIO_IsSync, &r1_bio->state)) {
|
2011-07-28 09:33:00 +08:00
|
|
|
if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
|
2011-07-28 09:38:13 +08:00
|
|
|
test_bit(R1BIO_WriteError, &r1_bio->state))
|
|
|
|
handle_sync_write_finished(conf, r1_bio);
|
|
|
|
else
|
2011-07-28 09:31:49 +08:00
|
|
|
sync_request_write(mddev, r1_bio);
|
2011-07-28 09:32:41 +08:00
|
|
|
} else if (test_bit(R1BIO_MadeGood, &r1_bio->state) ||
|
2011-07-28 09:38:13 +08:00
|
|
|
test_bit(R1BIO_WriteError, &r1_bio->state))
|
|
|
|
handle_write_finished(conf, r1_bio);
|
|
|
|
else if (test_bit(R1BIO_ReadError, &r1_bio->state))
|
|
|
|
handle_read_error(conf, r1_bio);
|
|
|
|
else
|
2011-07-28 09:31:48 +08:00
|
|
|
/* just a partial read to be scheduled from separate
|
|
|
|
* context
|
|
|
|
*/
|
|
|
|
generic_make_request(r1_bio->bios[r1_bio->read_disk]);
|
2011-07-28 09:38:13 +08:00
|
|
|
|
2009-10-16 12:55:32 +08:00
|
|
|
cond_resched();
|
2011-07-28 09:31:48 +08:00
|
|
|
if (mddev->flags & ~(1<<MD_CHANGE_PENDING))
|
|
|
|
md_check_recovery(mddev);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2011-04-18 16:25:41 +08:00
|
|
|
blk_finish_plug(&plug);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static int init_resync(struct r1conf *conf)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
int buffs;
|
|
|
|
|
|
|
|
buffs = RESYNC_WINDOW / RESYNC_BLOCK_SIZE;
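	/* With the sizes described in the iobarrier commit message above
	 * (RESYNC_WINDOW == 2MiB, RESYNC_BLOCK_SIZE == 64KiB) this allocates
	 * a pool of 2MiB / 64KiB == 32 resync buffers, matching the limit of
	 * 32 in-flight resync requests.
	 */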
|
2006-04-01 07:08:49 +08:00
|
|
|
BUG_ON(conf->r1buf_pool);
|
2005-04-17 06:20:36 +08:00
|
|
|
conf->r1buf_pool = mempool_create(buffs, r1buf_pool_alloc, r1buf_pool_free,
|
|
|
|
conf->poolinfo);
|
|
|
|
if (!conf->r1buf_pool)
|
|
|
|
return -ENOMEM;
|
|
|
|
conf->next_resync = 0;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* perform a "sync" on one "block"
|
|
|
|
*
|
|
|
|
* We need to make sure that no normal I/O request - particularly write
|
|
|
|
* requests - conflict with active sync requests.
|
|
|
|
*
|
|
|
|
* This is achieved by tracking pending requests and a 'barrier' concept
|
|
|
|
* that can be installed to exclude normal IO requests.
|
|
|
|
*/
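/* In rough outline (a simplified description of the mechanism, not a
 * replacement for reading the code): raise_barrier() bumps conf->barrier and
 * waits until normal IO that could contend with the coming resync window has
 * drained; wait_barrier() makes a normal request wait while such a barrier is
 * raised for a range it overlaps and counts it in nr_pending (plus the
 * per-window counters added by the iobarrier rewrite above); allow_barrier()
 * drops those counts and wakes any waiting resync.  All of this is serialised
 * by conf->resync_lock.
 */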
|
|
|
|
|
2016-01-21 05:52:20 +08:00
|
|
|
static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
|
|
|
|
int *skipped)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2011-10-11 13:48:43 +08:00
|
|
|
struct r1bio *r1_bio;
|
2005-04-17 06:20:36 +08:00
|
|
|
struct bio *bio;
|
|
|
|
sector_t max_sector, nr_sectors;
|
2006-01-06 16:20:21 +08:00
|
|
|
int disk = -1;
|
2005-04-17 06:20:36 +08:00
|
|
|
int i;
|
2006-01-06 16:20:21 +08:00
|
|
|
int wonly = -1;
|
|
|
|
int write_targets = 0, read_targets = 0;
|
2010-10-19 07:03:39 +08:00
|
|
|
sector_t sync_blocks;
|
2005-08-05 03:53:34 +08:00
|
|
|
int still_degraded = 0;
|
2011-07-28 09:31:48 +08:00
|
|
|
int good_sectors = RESYNC_SECTORS;
|
|
|
|
int min_bad = 0; /* number of sectors that are bad in all devices */
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
if (!conf->r1buf_pool)
|
|
|
|
if (init_resync(conf))
|
2005-06-22 08:17:13 +08:00
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-03-31 11:33:13 +08:00
|
|
|
max_sector = mddev->dev_sectors;
|
2005-04-17 06:20:36 +08:00
|
|
|
if (sector_nr >= max_sector) {
|
2005-06-22 08:17:23 +08:00
|
|
|
/* If we aborted, we need to abort the
|
|
|
|
* sync on the 'current' bitmap chunk (there will
|
|
|
|
		 * only be one in raid1 resync).
|
|
|
|
		 * We can find the current address in mddev->curr_resync
|
|
|
|
*/
|
2005-07-15 18:56:35 +08:00
|
|
|
if (mddev->curr_resync < max_sector) /* aborted */
|
|
|
|
bitmap_end_sync(mddev->bitmap, mddev->curr_resync,
|
2005-06-22 08:17:23 +08:00
|
|
|
&sync_blocks, 1);
|
2005-07-15 18:56:35 +08:00
|
|
|
else /* completed sync */
|
2005-06-22 08:17:23 +08:00
|
|
|
conf->fullsync = 0;
|
2005-07-15 18:56:35 +08:00
|
|
|
|
|
|
|
bitmap_close_sync(mddev->bitmap);
|
2005-04-17 06:20:36 +08:00
|
|
|
close_sync(conf);
|
2015-08-19 06:14:42 +08:00
|
|
|
|
|
|
|
if (mddev_is_clustered(mddev)) {
|
|
|
|
conf->cluster_sync_low = 0;
|
|
|
|
conf->cluster_sync_high = 0;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2006-06-26 15:27:56 +08:00
|
|
|
if (mddev->bitmap == NULL &&
|
|
|
|
mddev->recovery_cp == MaxSector &&
|
2006-08-27 16:23:50 +08:00
|
|
|
!test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
|
2006-06-26 15:27:56 +08:00
|
|
|
conf->fullsync == 0) {
|
|
|
|
*skipped = 1;
|
|
|
|
return max_sector - sector_nr;
|
|
|
|
}
|
2006-08-27 16:23:50 +08:00
|
|
|
/* before building a request, check if we can skip these blocks..
|
|
|
|
	 * This call to bitmap_start_sync doesn't actually record anything
|
|
|
|
*/
|
2005-08-05 03:53:34 +08:00
|
|
|
if (!bitmap_start_sync(mddev->bitmap, sector_nr, &sync_blocks, 1) &&
|
2005-11-09 13:39:38 +08:00
|
|
|
!conf->fullsync && !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
|
2005-06-22 08:17:23 +08:00
|
|
|
/* We can skip this block, and probably several more */
|
|
|
|
*skipped = 1;
|
|
|
|
return sync_blocks;
|
|
|
|
}
|
2006-01-06 16:20:12 +08:00
|
|
|
|
2015-08-19 06:14:42 +08:00
|
|
|
/* we are incrementing sector_nr below. To be safe, we check against
|
|
|
|
* sector_nr + two times RESYNC_SECTORS
|
|
|
|
*/
|
|
|
|
|
|
|
|
bitmap_cond_end_sync(mddev->bitmap, sector_nr,
|
|
|
|
mddev_is_clustered(mddev) && (sector_nr + 2 * RESYNC_SECTORS > conf->cluster_sync_high));
|
2010-10-26 14:41:22 +08:00
|
|
|
r1_bio = mempool_alloc(conf->r1buf_pool, GFP_NOIO);
|
2006-01-06 16:20:12 +08:00
|
|
|
|
2014-09-10 14:01:24 +08:00
|
|
|
raise_barrier(conf, sector_nr);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-01-06 16:20:21 +08:00
|
|
|
rcu_read_lock();
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
2006-01-06 16:20:21 +08:00
|
|
|
 * If we get a correctable read error during resync or recovery,
|
|
|
|
* we might want to read from a different device. So we
|
|
|
|
* flag all drives that could conceivably be read from for READ,
|
|
|
|
* and any others (which will be non-In_sync devices) for WRITE.
|
|
|
|
* If a read fails, we try reading from something else for which READ
|
|
|
|
* is OK.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
|
|
|
|
r1_bio->mddev = mddev;
|
|
|
|
r1_bio->sector = sector_nr;
|
2005-06-22 08:17:23 +08:00
|
|
|
r1_bio->state = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
set_bit(R1BIO_IsSync, &r1_bio->state);
|
|
|
|
|
2011-12-23 07:17:56 +08:00
|
|
|
for (i = 0; i < conf->raid_disks * 2; i++) {
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev;
|
2005-04-17 06:20:36 +08:00
|
|
|
bio = r1_bio->bios[i];
|
2012-09-12 02:26:12 +08:00
|
|
|
bio_reset(bio);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-01-06 16:20:21 +08:00
|
|
|
rdev = rcu_dereference(conf->mirrors[i].rdev);
|
|
|
|
if (rdev == NULL ||
|
2011-07-28 09:31:48 +08:00
|
|
|
test_bit(Faulty, &rdev->flags)) {
|
2011-12-23 07:17:56 +08:00
|
|
|
if (i < conf->raid_disks)
|
|
|
|
still_degraded = 1;
|
2006-01-06 16:20:21 +08:00
|
|
|
} else if (!test_bit(In_sync, &rdev->flags)) {
|
2005-04-17 06:20:36 +08:00
|
|
|
bio->bi_rw = WRITE;
|
|
|
|
bio->bi_end_io = end_sync_write;
|
|
|
|
write_targets++;
|
2006-01-06 16:20:21 +08:00
|
|
|
} else {
|
|
|
|
/* may need to read from here */
|
2011-07-28 09:31:48 +08:00
|
|
|
sector_t first_bad = MaxSector;
|
|
|
|
int bad_sectors;
|
|
|
|
|
|
|
|
if (is_badblock(rdev, sector_nr, good_sectors,
|
|
|
|
&first_bad, &bad_sectors)) {
|
|
|
|
if (first_bad > sector_nr)
|
|
|
|
good_sectors = first_bad - sector_nr;
|
|
|
|
else {
|
|
|
|
bad_sectors -= (sector_nr - first_bad);
|
|
|
|
if (min_bad == 0 ||
|
|
|
|
min_bad > bad_sectors)
|
|
|
|
min_bad = bad_sectors;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (sector_nr < first_bad) {
|
|
|
|
if (test_bit(WriteMostly, &rdev->flags)) {
|
|
|
|
if (wonly < 0)
|
|
|
|
wonly = i;
|
|
|
|
} else {
|
|
|
|
if (disk < 0)
|
|
|
|
disk = i;
|
|
|
|
}
|
|
|
|
bio->bi_rw = READ;
|
|
|
|
bio->bi_end_io = end_sync_read;
|
|
|
|
read_targets++;
|
2012-07-17 18:17:55 +08:00
|
|
|
} else if (!test_bit(WriteErrorSeen, &rdev->flags) &&
|
|
|
|
test_bit(MD_RECOVERY_SYNC, &mddev->recovery) &&
|
|
|
|
!test_bit(MD_RECOVERY_CHECK, &mddev->recovery)) {
|
|
|
|
/*
|
|
|
|
* The device is suitable for reading (InSync),
|
|
|
|
* but has bad block(s) here. Let's try to correct them,
|
|
|
|
* if we are doing resync or repair. Otherwise, leave
|
|
|
|
* this device alone for this sync request.
|
|
|
|
*/
|
|
|
|
bio->bi_rw = WRITE;
|
|
|
|
bio->bi_end_io = end_sync_write;
|
|
|
|
write_targets++;
|
2006-01-06 16:20:21 +08:00
|
|
|
}
|
|
|
|
}
|
2011-07-28 09:31:48 +08:00
|
|
|
if (bio->bi_end_io) {
|
|
|
|
atomic_inc(&rdev->nr_pending);
|
2013-10-12 06:44:27 +08:00
|
|
|
bio->bi_iter.bi_sector = sector_nr + rdev->data_offset;
|
2011-07-28 09:31:48 +08:00
|
|
|
bio->bi_bdev = rdev->bdev;
|
|
|
|
bio->bi_private = r1_bio;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-01-06 16:20:21 +08:00
|
|
|
rcu_read_unlock();
|
|
|
|
if (disk < 0)
|
|
|
|
disk = wonly;
|
|
|
|
r1_bio->read_disk = disk;
|
2005-06-22 08:17:23 +08:00
|
|
|
|
2011-07-28 09:31:48 +08:00
|
|
|
if (read_targets == 0 && min_bad > 0) {
|
|
|
|
/* These sectors are bad on all InSync devices, so we
|
|
|
|
* need to mark them bad on all write targets
|
|
|
|
*/
|
|
|
|
int ok = 1;
|
2011-12-23 07:17:56 +08:00
|
|
|
for (i = 0; i < conf->raid_disks * 2; i++)
|
2011-07-28 09:31:48 +08:00
|
|
|
if (r1_bio->bios[i]->bi_end_io == end_sync_write) {
|
2012-04-01 23:04:19 +08:00
|
|
|
struct md_rdev *rdev = conf->mirrors[i].rdev;
|
2011-07-28 09:31:48 +08:00
|
|
|
ok = rdev_set_badblocks(rdev, sector_nr,
|
|
|
|
min_bad, 0
|
|
|
|
) && ok;
|
|
|
|
}
|
|
|
|
set_bit(MD_CHANGE_DEVS, &mddev->flags);
|
|
|
|
*skipped = 1;
|
|
|
|
put_buf(r1_bio);
|
|
|
|
|
|
|
|
if (!ok) {
|
|
|
|
/* Cannot record the badblocks, so need to
|
|
|
|
* abort the resync.
|
|
|
|
* If there are multiple read targets, could just
|
|
|
|
* fail the really bad ones ???
|
|
|
|
*/
|
|
|
|
conf->recovery_disabled = mddev->recovery_disabled;
|
|
|
|
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
|
|
|
|
return 0;
|
|
|
|
} else
|
|
|
|
return min_bad;
|
|
|
|
|
|
|
|
}
|
|
|
|
if (min_bad > 0 && min_bad < good_sectors) {
|
|
|
|
/* only resync enough to reach the next bad->good
|
|
|
|
* transition */
|
|
|
|
good_sectors = min_bad;
|
|
|
|
}
|
|
|
|
|
2006-01-06 16:20:21 +08:00
|
|
|
if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) && read_targets > 0)
|
|
|
|
/* extra read targets are also write targets */
|
|
|
|
write_targets += read_targets-1;
|
|
|
|
|
|
|
|
if (write_targets == 0 || read_targets == 0) {
|
2005-04-17 06:20:36 +08:00
|
|
|
/* There is nowhere to write, so all non-sync
|
|
|
|
* drives must be failed - so we are finished
|
|
|
|
*/
|
2012-07-31 08:05:34 +08:00
|
|
|
sector_t rv;
|
|
|
|
if (min_bad > 0)
|
|
|
|
max_sector = sector_nr + min_bad;
|
|
|
|
rv = max_sector - sector_nr;
|
2005-06-22 08:17:13 +08:00
|
|
|
*skipped = 1;
|
2005-04-17 06:20:36 +08:00
|
|
|
put_buf(r1_bio);
|
|
|
|
return rv;
|
|
|
|
}
|
|
|
|
|
2008-02-06 17:39:52 +08:00
|
|
|
if (max_sector > mddev->resync_max)
|
|
|
|
max_sector = mddev->resync_max; /* Don't do IO beyond here */
|
2011-07-28 09:31:48 +08:00
|
|
|
if (max_sector > sector_nr + good_sectors)
|
|
|
|
max_sector = sector_nr + good_sectors;
|
2005-04-17 06:20:36 +08:00
|
|
|
nr_sectors = 0;
|
2005-06-22 08:17:24 +08:00
|
|
|
sync_blocks = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
do {
|
|
|
|
struct page *page;
|
|
|
|
int len = PAGE_SIZE;
|
|
|
|
if (sector_nr + (len>>9) > max_sector)
|
|
|
|
len = (max_sector - sector_nr) << 9;
|
|
|
|
if (len == 0)
|
|
|
|
break;
|
2005-07-15 18:56:35 +08:00
|
|
|
if (sync_blocks == 0) {
|
|
|
|
if (!bitmap_start_sync(mddev->bitmap, sector_nr,
|
2005-11-09 13:39:38 +08:00
|
|
|
&sync_blocks, still_degraded) &&
|
|
|
|
!conf->fullsync &&
|
|
|
|
!test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery))
|
2005-07-15 18:56:35 +08:00
|
|
|
break;
|
2010-10-07 08:54:46 +08:00
|
|
|
if ((len >> 9) > sync_blocks)
|
2005-07-15 18:56:35 +08:00
|
|
|
len = sync_blocks<<9;
|
2005-06-22 08:17:23 +08:00
|
|
|
}
|
2005-06-22 08:17:23 +08:00
|
|
|
|
2011-12-23 07:17:56 +08:00
|
|
|
for (i = 0; i < conf->raid_disks * 2; i++) {
|
2005-04-17 06:20:36 +08:00
|
|
|
bio = r1_bio->bios[i];
|
|
|
|
if (bio->bi_end_io) {
|
2006-01-06 16:20:26 +08:00
|
|
|
page = bio->bi_io_vec[bio->bi_vcnt].bv_page;
|
2005-04-17 06:20:36 +08:00
|
|
|
if (bio_add_page(bio, page, len, 0) == 0) {
|
|
|
|
/* stop here */
|
2006-01-06 16:20:26 +08:00
|
|
|
bio->bi_io_vec[bio->bi_vcnt].bv_page = page;
|
2005-04-17 06:20:36 +08:00
|
|
|
while (i > 0) {
|
|
|
|
i--;
|
|
|
|
bio = r1_bio->bios[i];
|
2005-07-15 18:56:35 +08:00
|
|
|
if (bio->bi_end_io == NULL)
|
|
|
|
continue;
|
2005-04-17 06:20:36 +08:00
|
|
|
/* remove last page from this bio */
|
|
|
|
bio->bi_vcnt--;
|
2013-10-12 06:44:27 +08:00
|
|
|
bio->bi_iter.bi_size -= len;
|
2015-07-25 02:37:59 +08:00
|
|
|
bio_clear_flag(bio, BIO_SEG_VALID);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
goto bio_full;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
nr_sectors += len>>9;
|
|
|
|
sector_nr += len>>9;
|
2005-06-22 08:17:23 +08:00
|
|
|
sync_blocks -= (len>>9);
|
2005-04-17 06:20:36 +08:00
|
|
|
} while (r1_bio->bios[disk]->bi_vcnt < RESYNC_PAGES);
|
|
|
|
bio_full:
|
|
|
|
r1_bio->sectors = nr_sectors;
|
|
|
|
|
2015-08-19 06:14:42 +08:00
|
|
|
if (mddev_is_clustered(mddev) &&
|
|
|
|
conf->cluster_sync_high < sector_nr + nr_sectors) {
|
|
|
|
conf->cluster_sync_low = mddev->curr_resync_completed;
|
|
|
|
conf->cluster_sync_high = conf->cluster_sync_low + CLUSTER_RESYNC_WINDOW_SECTORS;
|
|
|
|
/* Send resync message */
|
|
|
|
md_cluster_ops->resync_info_update(mddev,
|
|
|
|
conf->cluster_sync_low,
|
|
|
|
conf->cluster_sync_high);
|
|
|
|
}
|
|
|
|
|
2006-01-06 16:20:26 +08:00
|
|
|
/* For a user-requested sync, we read all readable devices and do a
|
|
|
|
* compare
|
|
|
|
*/
|
|
|
|
if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
|
|
|
|
atomic_set(&r1_bio->remaining, read_targets);
|
2012-07-09 09:34:13 +08:00
|
|
|
for (i = 0; i < conf->raid_disks * 2 && read_targets; i++) {
|
2006-01-06 16:20:26 +08:00
|
|
|
bio = r1_bio->bios[i];
|
|
|
|
if (bio->bi_end_io == end_sync_read) {
|
2012-07-09 09:34:13 +08:00
|
|
|
read_targets--;
|
2006-09-01 12:27:36 +08:00
|
|
|
md_sync_acct(bio->bi_bdev, nr_sectors);
|
2006-01-06 16:20:26 +08:00
|
|
|
generic_make_request(bio);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
atomic_set(&r1_bio->remaining, 1);
|
|
|
|
bio = r1_bio->bios[r1_bio->read_disk];
|
2006-09-01 12:27:36 +08:00
|
|
|
md_sync_acct(bio->bi_bdev, nr_sectors);
|
2006-01-06 16:20:26 +08:00
|
|
|
generic_make_request(bio);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-01-06 16:20:26 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
return nr_sectors;
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:47:53 +08:00
|
|
|
static sector_t raid1_size(struct mddev *mddev, sector_t sectors, int raid_disks)
|
2009-03-18 09:10:40 +08:00
|
|
|
{
|
|
|
|
if (sectors)
|
|
|
|
return sectors;
|
|
|
|
|
|
|
|
return mddev->dev_sectors;
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
static struct r1conf *setup_conf(struct mddev *mddev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf;
|
2009-12-14 09:49:51 +08:00
|
|
|
int i;
|
2012-07-31 08:03:52 +08:00
|
|
|
struct raid1_info *disk;
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev;
|
2009-12-14 09:49:51 +08:00
|
|
|
int err = -ENOMEM;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-10-11 13:49:05 +08:00
|
|
|
conf = kzalloc(sizeof(struct r1conf), GFP_KERNEL);
|
2005-04-17 06:20:36 +08:00
|
|
|
if (!conf)
|
2009-12-14 09:49:51 +08:00
|
|
|
goto abort;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-07-31 08:03:52 +08:00
|
|
|
conf->mirrors = kzalloc(sizeof(struct raid1_info)
|
2011-12-23 07:17:56 +08:00
|
|
|
* mddev->raid_disks * 2,
|
2005-04-17 06:20:36 +08:00
|
|
|
GFP_KERNEL);
|
|
|
|
if (!conf->mirrors)
|
2009-12-14 09:49:51 +08:00
|
|
|
goto abort;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-01-06 16:20:19 +08:00
|
|
|
conf->tmppage = alloc_page(GFP_KERNEL);
|
|
|
|
if (!conf->tmppage)
|
2009-12-14 09:49:51 +08:00
|
|
|
goto abort;
|
2006-01-06 16:20:19 +08:00
|
|
|
|
2009-12-14 09:49:51 +08:00
|
|
|
conf->poolinfo = kzalloc(sizeof(*conf->poolinfo), GFP_KERNEL);
|
2005-04-17 06:20:36 +08:00
|
|
|
if (!conf->poolinfo)
|
2009-12-14 09:49:51 +08:00
|
|
|
goto abort;
|
2011-12-23 07:17:56 +08:00
|
|
|
conf->poolinfo->raid_disks = mddev->raid_disks * 2;
|
2005-04-17 06:20:36 +08:00
|
|
|
conf->r1bio_pool = mempool_create(NR_RAID1_BIOS, r1bio_pool_alloc,
|
|
|
|
r1bio_pool_free,
|
|
|
|
conf->poolinfo);
|
|
|
|
if (!conf->r1bio_pool)
|
2009-12-14 09:49:51 +08:00
|
|
|
goto abort;
|
|
|
|
|
2009-10-16 12:55:44 +08:00
|
|
|
conf->poolinfo->mddev = mddev;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-12-23 07:17:57 +08:00
|
|
|
err = -EINVAL;
|
2008-05-15 07:05:54 +08:00
|
|
|
spin_lock_init(&conf->device_lock);
|
2012-03-19 09:46:39 +08:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2012-05-31 13:39:11 +08:00
|
|
|
struct request_queue *q;
|
2009-12-14 09:49:51 +08:00
|
|
|
int disk_idx = rdev->raid_disk;
|
2005-04-17 06:20:36 +08:00
|
|
|
if (disk_idx >= mddev->raid_disks
|
|
|
|
|| disk_idx < 0)
|
|
|
|
continue;
|
2011-12-23 07:17:57 +08:00
|
|
|
if (test_bit(Replacement, &rdev->flags))
|
2012-10-31 08:42:03 +08:00
|
|
|
disk = conf->mirrors + mddev->raid_disks + disk_idx;
|
2011-12-23 07:17:57 +08:00
|
|
|
else
|
|
|
|
disk = conf->mirrors + disk_idx;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-12-23 07:17:57 +08:00
|
|
|
if (disk->rdev)
|
|
|
|
goto abort;
|
2005-04-17 06:20:36 +08:00
|
|
|
disk->rdev = rdev;
|
2012-05-31 13:39:11 +08:00
|
|
|
q = bdev_get_queue(rdev->bdev);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
disk->head_position = 0;
|
md/raid1: prevent merging too large request
For SSDs, once a request exceeds a certain size (the optimal io size), the
request size no longer matters for bandwidth. In that situation, making
requests even bigger can leave some disks idle and actually reduce total
throughput. A good example is readahead in a two-disk raid1 setup.
So when should we split big requests? We absolutely don't want to split big
requests into very small ones; even on SSDs, large transfers are more
efficient. This patch only considers requests larger than the optimal io size.
If all disks are busy, is a split worth it? Say the optimal io size is 16k,
with two 32k requests and two disks. We can let each disk run one 32k request,
or split into four 16k requests so each disk runs two. It's hard to say which
is better; it depends on the hardware.
So we only consider the case where there are idle disks. For readahead, a
split is always better in this case, and in my test the patch below improves
throughput by more than 30% (not 100%, because the disks aren't 100% busy).
This case can occur outside readahead too, for example with direct I/O, but
direct I/O usually has a larger IO depth and keeps all disks busy, so I
ignored it.
Note: if the array contains any hard disk, we don't prevent merging, as
splitting would make performance worse there.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2012-07-31 08:03:53 +08:00
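The heuristic above can be summarized in a few lines. The sketch below is
illustrative only and uses invented names (should_split_request() is not a
raid1.c function): consider splitting only requests that exceed the optimal
I/O size, never when a rotational disk is in the array, and only when an
idle mirror could pick up the extra work.

static inline int should_split_request(unsigned int sectors,
				       unsigned int opt_iosize_sectors,
				       int idle_disks,
				       int array_has_rotational_disk)
{
	if (array_has_rotational_disk)
		return 0;	/* never split when spinning disks are involved */
	if (!opt_iosize_sectors || sectors <= opt_iosize_sectors)
		return 0;	/* below the optimal io size, keep the request whole */
	return idle_disks > 0;	/* split only if it puts an idle disk to work */
}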
|
|
|
disk->seq_start = MaxSector;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
conf->raid_disks = mddev->raid_disks;
|
|
|
|
conf->mddev = mddev;
|
|
|
|
INIT_LIST_HEAD(&conf->retry_list);
|
2015-08-14 09:11:10 +08:00
|
|
|
INIT_LIST_HEAD(&conf->bio_end_io_list);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
spin_lock_init(&conf->resync_lock);
|
2006-01-06 16:20:12 +08:00
|
|
|
init_waitqueue_head(&conf->wait_barrier);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2005-06-22 08:17:23 +08:00
|
|
|
bio_list_init(&conf->pending_bio_list);
|
2011-10-11 13:50:01 +08:00
|
|
|
conf->pending_count = 0;
|
2011-10-26 08:54:39 +08:00
|
|
|
conf->recovery_disabled = mddev->recovery_disabled - 1;
|
2005-06-22 08:17:23 +08:00
|
|
|
|
raid1: Rewrite the implementation of iobarrier.
There is an iobarrier in raid1 because of contention between normal IO and
resync IO. It suspends all normal IO when resync/recovery happens.
However, if normal IO is outside the resync window, there is no contention.
So this patch changes the barrier mechanism to only block IO that
could contend with the resync that is currently happening.
We partition the whole space into five parts.
|---------|-----------|------------|----------------|-------|
start next_resync start_next_window end_window
start + RESYNC_WINDOW = next_resync
next_resync + NEXT_NORMALIO_DISTANCE = start_next_window
start_next_window + NEXT_NORMALIO_DISTANCE = end_window
Firstly we introduce some concepts:
1 - RESYNC_WINDOW: For resync, there are 32 resync requests at most at the
same time. A sync request is RESYNC_BLOCK_SIZE(64*1024).
So the RESYNC_WINDOW is 32 * RESYNC_BLOCK_SIZE, that is 2MB.
2 - NEXT_NORMALIO_DISTANCE: the distance between next_resync
and start_next_window. It also indicates the distance between
start_next_window and end_window.
It is currently 3 * RESYNC_WINDOW_SIZE but could be tuned if
this turned out not to be optimal.
3 - next_resync: the next sector at which we will do sync IO.
4 - start: a position which is at most RESYNC_WINDOW before
next_resync.
5 - start_next_window: a position which is NEXT_NORMALIO_DISTANCE
beyond next_resync. Normal-io after this position doesn't need to
wait for resync-io to complete.
6 - end_window: a position which is 2 * NEXT_NORMALIO_DISTANCE beyond
next_resync. This also doesn't need to wait, but is counted
differently.
7 - current_window_requests: the count of normalIO between
start_next_window and end_window.
8 - next_window_requests: the count of normalIO after end_window.
NormalIO will be partitioned into four types:
NormIO1: the end sector of the bio is smaller than or equal to start
NormIO2: the start sector of the bio is larger than or equal to end_window
NormIO3: the start sector of the bio is larger than or equal to
start_next_window.
NormIO4: everything else, i.e. bios falling between start and start_next_window
|--------|-----------|--------------------|----------------|-------------|
| start | next_resync | start_next_window | end_window |
NormIO1 NormIO4 NormIO4 NormIO3 NormIO2
For NormIO1, we don't need any io barrier.
For NormIO4, we used a similar approach to the original iobarrier
mechanism. The normalIO and resyncIO must be kept separate.
For NormIO2/3, we add two fields to struct r1conf: "current_window_requests"
and "next_window_requests". They indicate the count of active
requests in the two windows.
For these, we don't wait for resync io to complete.
For resync action, if there are NormIO4s, we must wait for them.
If not, we can proceed.
But if the resync action reaches start_next_window and
current_window_requests > 0 (that is, there are NormIO3s), we must
wait until current_window_requests becomes zero.
When current_window_requests becomes zero, start_next_window also
moves forward. Then current_window_requests will be replaced by
next_window_requests.
There is a question of when and how NormIO2 becomes NormIO3;
only then can the sync action progress.
We add a field in struct r1conf "start_next_window".
A: if start_next_window == MaxSector, it means there are no NormIO2/3.
So start_next_window = next_resync + NEXT_NORMALIO_DISTANCE
B: if current_window_requests == 0 && next_window_requests != 0, it
means start_next_window moves to end_window
Another question is how to differentiate old NormIO2 (which has now
become NormIO3) from new NormIO2.
For example, many bios are NormIO2 and one bio is NormIO3. The NormIO3
bio completes first, so the NormIO2 bios become NormIO3.
We add a field in struct r1bio "start_next_window".
This is used to record the position conf->start_next_window when the call
to wait_barrier() is made in make_request().
In allow_barrier(), we check the conf->start_next_window.
If r1bio->start_next_window == conf->start_next_window, it means
there was no transition between NormIO2 and NormIO3.
If r1bio->start_next_window != conf->start_next_window, it means
there was a transition between NormIO2 and NormIO3. There can only
have been one transition, so it just means the bio is old NormIO2.
For one bio, there may be many r1bio's. So we make sure
all the r1bio->start_next_window are the same value.
If we hit a blocked_dev in make_request(), it must call allow_barrier
and wait_barrier, so the former and the latter values of
conf->start_next_window may differ.
If there are many r1bios with different start_next_window values,
the relevant bio would depend on the last r1bio's value, which
would cause errors. To avoid this, we must wait for previous r1bios
to complete.
Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2013-11-15 14:55:02 +08:00
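As a reading aid, the classification described above can be written as a
small helper. This is an illustrative sketch with invented names
(classify_normal_io(), enum norm_io_class), not code from raid1.c, and it
uses a local typedef as a stand-in for the kernel's sector_t:

#include <stdint.h>

typedef uint64_t sector_t;	/* stand-in for the kernel's sector_t in this sketch */

enum norm_io_class { NORM_IO1, NORM_IO2, NORM_IO3, NORM_IO4 };

/*
 * start             = next_resync - RESYNC_WINDOW (at most)
 * start_next_window = next_resync + NEXT_NORMALIO_DISTANCE
 * end_window        = start_next_window + NEXT_NORMALIO_DISTANCE
 */
static enum norm_io_class classify_normal_io(sector_t bio_start, sector_t bio_end,
					     sector_t start,
					     sector_t start_next_window,
					     sector_t end_window)
{
	if (bio_end <= start)
		return NORM_IO1;	/* entirely before the resync window: no barrier */
	if (bio_start >= end_window)
		return NORM_IO2;	/* counted in next_window_requests */
	if (bio_start >= start_next_window)
		return NORM_IO3;	/* counted in current_window_requests */
	return NORM_IO4;		/* overlaps the resync window: wait on the barrier */
}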
|
|
|
conf->start_next_window = MaxSector;
|
|
|
|
conf->current_window_requests = conf->next_window_requests = 0;
|
|
|
|
|
2011-12-23 07:17:57 +08:00
|
|
|
err = -EIO;
|
2011-12-23 07:17:56 +08:00
|
|
|
for (i = 0; i < conf->raid_disks * 2; i++) {
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
disk = conf->mirrors + i;
|
|
|
|
|
2011-12-23 07:17:57 +08:00
|
|
|
if (i < conf->raid_disks &&
|
|
|
|
disk[conf->raid_disks].rdev) {
|
|
|
|
/* This slot has a replacement. */
|
|
|
|
if (!disk->rdev) {
|
|
|
|
/* No original, just make the replacement
|
|
|
|
* a recovering spare
|
|
|
|
*/
|
|
|
|
disk->rdev =
|
|
|
|
disk[conf->raid_disks].rdev;
|
|
|
|
disk[conf->raid_disks].rdev = NULL;
|
|
|
|
} else if (!test_bit(In_sync, &disk->rdev->flags))
|
|
|
|
/* Original is not in_sync - bad */
|
|
|
|
goto abort;
|
|
|
|
}
|
|
|
|
|
2006-06-26 15:27:40 +08:00
|
|
|
if (!disk->rdev ||
|
|
|
|
!test_bit(In_sync, &disk->rdev->flags)) {
|
2005-04-17 06:20:36 +08:00
|
|
|
disk->head_position = 0;
|
2012-05-22 11:55:31 +08:00
|
|
|
if (disk->rdev &&
|
|
|
|
(disk->rdev->saved_raid_disk < 0))
|
2007-08-23 05:01:52 +08:00
|
|
|
conf->fullsync = 1;
|
2012-07-31 08:03:53 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2009-12-14 09:49:51 +08:00
|
|
|
|
|
|
|
err = -ENOMEM;
|
2012-07-03 13:56:52 +08:00
|
|
|
conf->thread = md_register_thread(raid1d, mddev, "raid1");
|
2009-12-14 09:49:51 +08:00
|
|
|
if (!conf->thread) {
|
|
|
|
printk(KERN_ERR
|
2010-05-03 12:30:35 +08:00
|
|
|
"md/raid1:%s: couldn't allocate thread\n",
|
2009-12-14 09:49:51 +08:00
|
|
|
mdname(mddev));
|
|
|
|
goto abort;
|
2006-10-03 16:15:52 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-12-14 09:49:51 +08:00
|
|
|
return conf;
|
|
|
|
|
|
|
|
abort:
|
|
|
|
if (conf) {
|
2015-09-13 20:15:10 +08:00
|
|
|
mempool_destroy(conf->r1bio_pool);
|
2009-12-14 09:49:51 +08:00
|
|
|
kfree(conf->mirrors);
|
|
|
|
safe_put_page(conf->tmppage);
|
|
|
|
kfree(conf->poolinfo);
|
|
|
|
kfree(conf);
|
|
|
|
}
|
|
|
|
return ERR_PTR(err);
|
|
|
|
}
|
|
|
|
|
2014-12-15 09:56:58 +08:00
|
|
|
static void raid1_free(struct mddev *mddev, void *priv);
|
2016-01-21 05:52:20 +08:00
|
|
|
static int raid1_run(struct mddev *mddev)
|
2009-12-14 09:49:51 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf;
|
2009-12-14 09:49:51 +08:00
|
|
|
int i;
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev;
|
2012-04-02 07:48:38 +08:00
|
|
|
int ret;
|
2012-10-11 10:28:54 +08:00
|
|
|
bool discard_supported = false;
|
2009-12-14 09:49:51 +08:00
|
|
|
|
|
|
|
if (mddev->level != 1) {
|
2010-05-03 12:30:35 +08:00
|
|
|
printk(KERN_ERR "md/raid1:%s: raid level not set to mirroring (%d)\n",
|
2009-12-14 09:49:51 +08:00
|
|
|
mdname(mddev), mddev->level);
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
if (mddev->reshape_position != MaxSector) {
|
2010-05-03 12:30:35 +08:00
|
|
|
printk(KERN_ERR "md/raid1:%s: reshape_position set but not supported\n",
|
2009-12-14 09:49:51 +08:00
|
|
|
mdname(mddev));
|
|
|
|
return -EIO;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
2009-12-14 09:49:51 +08:00
|
|
|
* copy the already verified devices into our private RAID1
|
|
|
|
 * bookkeeping area. [whatever we allocate in raid1_run()
|
2014-12-15 09:56:58 +08:00
|
|
|
* should be freed in raid1_free()]
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2009-12-14 09:49:51 +08:00
|
|
|
if (mddev->private == NULL)
|
|
|
|
conf = setup_conf(mddev);
|
|
|
|
else
|
|
|
|
conf = mddev->private;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-12-14 09:49:51 +08:00
|
|
|
if (IS_ERR(conf))
|
|
|
|
return PTR_ERR(conf);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2013-02-21 10:28:09 +08:00
|
|
|
if (mddev->queue)
|
md/raid1,5,10: Disable WRITE SAME until a recovery strategy is in place
There are cases where the kernel will believe that the WRITE SAME
command is supported by a block device which does not, in fact,
support WRITE SAME. This currently happens for SATA drivers behind a
SAS controller, but there are probably a hundred other ways that can
happen, including drive firmware bugs.
After receiving an error for WRITE SAME the block layer will retry the
request as a plain write of zeroes, but mdraid will consider the
failure as fatal and consider the drive failed. This has the effect
that all the mirrors containing a specific set of data are each
offlined in very rapid succession resulting in data loss.
However, just bouncing the request back up to the block layer isn't
ideal either, because the whole initial request-retry sequence should
be inside the write bitmap fence, which probably means that md needs
to do its own conversion of WRITE SAME to write zero.
Until the failure scenario has been sorted out, disable WRITE SAME for
raid1, raid5, and raid10.
[neilb: added raid5]
This patch is appropriate for any -stable since 3.7 when write_same
support was added.
Cc: stable@vger.kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2013-06-12 22:37:43 +08:00
|
|
|
blk_queue_max_write_same_sectors(mddev->queue, 0);
|
|
|
|
|
2012-03-19 09:46:39 +08:00
|
|
|
rdev_for_each(rdev, mddev) {
|
2011-06-08 06:50:35 +08:00
|
|
|
if (!mddev->gendisk)
|
|
|
|
continue;
|
2009-12-14 09:49:51 +08:00
|
|
|
disk_stack_limits(mddev->gendisk, rdev->bdev,
|
|
|
|
rdev->data_offset << 9);
|
2012-10-11 10:28:54 +08:00
|
|
|
if (blk_queue_discard(bdev_get_queue(rdev->bdev)))
|
|
|
|
discard_supported = true;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2005-06-22 08:17:23 +08:00
|
|
|
|
2009-12-14 09:49:51 +08:00
|
|
|
mddev->degraded = 0;
|
|
|
|
for (i = 0; i < conf->raid_disks; i++)
|
|
|
|
if (conf->mirrors[i].rdev == NULL ||
|
|
|
|
!test_bit(In_sync, &conf->mirrors[i].rdev->flags) ||
|
|
|
|
test_bit(Faulty, &conf->mirrors[i].rdev->flags))
|
|
|
|
mddev->degraded++;
|
|
|
|
|
|
|
|
if (conf->raid_disks - mddev->degraded == 1)
|
|
|
|
mddev->recovery_cp = MaxSector;
|
|
|
|
|
2009-06-18 06:48:06 +08:00
|
|
|
if (mddev->recovery_cp != MaxSector)
|
2010-05-03 12:30:35 +08:00
|
|
|
printk(KERN_NOTICE "md/raid1:%s: not clean"
|
2009-06-18 06:48:06 +08:00
|
|
|
" -- starting background reconstruction\n",
|
|
|
|
mdname(mddev));
|
2014-09-30 12:23:59 +08:00
|
|
|
printk(KERN_INFO
|
2010-05-03 12:30:35 +08:00
|
|
|
"md/raid1:%s: active with %d out of %d mirrors\n",
|
2014-09-30 12:23:59 +08:00
|
|
|
mdname(mddev), mddev->raid_disks - mddev->degraded,
|
2005-04-17 06:20:36 +08:00
|
|
|
mddev->raid_disks);
|
2009-12-14 09:49:51 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* Ok, everything is just fine now
|
|
|
|
*/
|
2009-12-14 09:49:51 +08:00
|
|
|
mddev->thread = conf->thread;
|
|
|
|
conf->thread = NULL;
|
|
|
|
mddev->private = conf;
|
|
|
|
|
2009-03-31 11:59:03 +08:00
|
|
|
md_set_array_sectors(mddev, raid1_size(mddev, 0, 0));
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-06-08 06:50:35 +08:00
|
|
|
if (mddev->queue) {
|
2012-10-11 10:28:54 +08:00
|
|
|
if (discard_supported)
|
|
|
|
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD,
|
|
|
|
mddev->queue);
|
|
|
|
else
|
|
|
|
queue_flag_clear_unlocked(QUEUE_FLAG_DISCARD,
|
|
|
|
mddev->queue);
|
2011-06-08 06:50:35 +08:00
|
|
|
}
|
2012-04-02 07:48:38 +08:00
|
|
|
|
|
|
|
ret = md_integrity_register(mddev);
|
2014-12-15 09:56:57 +08:00
|
|
|
if (ret) {
|
|
|
|
md_unregister_thread(&mddev->thread);
|
2014-12-15 09:56:58 +08:00
|
|
|
raid1_free(mddev, conf);
|
2014-12-15 09:56:57 +08:00
|
|
|
}
|
2012-04-02 07:48:38 +08:00
|
|
|
return ret;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2014-12-15 09:56:58 +08:00
|
|
|
static void raid1_free(struct mddev *mddev, void *priv)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2014-12-15 09:56:58 +08:00
|
|
|
struct r1conf *conf = priv;
|
2009-03-31 11:39:39 +08:00
|
|
|
|
2015-09-13 20:15:10 +08:00
|
|
|
mempool_destroy(conf->r1bio_pool);
|
2005-06-22 08:17:30 +08:00
|
|
|
kfree(conf->mirrors);
|
2013-04-24 09:42:44 +08:00
|
|
|
safe_put_page(conf->tmppage);
|
2005-06-22 08:17:30 +08:00
|
|
|
kfree(conf->poolinfo);
|
2005-04-17 06:20:36 +08:00
|
|
|
kfree(conf);
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:47:53 +08:00
|
|
|
static int raid1_resize(struct mddev *mddev, sector_t sectors)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
/* no resync is happening, and there is enough space
|
|
|
|
* on all devices, so we can resize.
|
|
|
|
* We need to make sure resync covers any new space.
|
|
|
|
* If the array is shrinking we should possibly wait until
|
|
|
|
* any io in the removed space completes, but it hardly seems
|
|
|
|
* worth it.
|
|
|
|
*/
|
2012-05-22 11:55:27 +08:00
|
|
|
sector_t newsize = raid1_size(mddev, sectors, 0);
|
|
|
|
if (mddev->external_size &&
|
|
|
|
mddev->array_sectors > newsize)
|
2009-03-31 12:00:31 +08:00
|
|
|
return -EINVAL;
|
2012-05-22 11:55:27 +08:00
|
|
|
if (mddev->bitmap) {
|
|
|
|
int ret = bitmap_resize(mddev->bitmap, newsize, 0, 0);
|
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
md_set_array_sectors(mddev, newsize);
|
2008-07-21 15:05:22 +08:00
|
|
|
set_capacity(mddev->gendisk, mddev->array_sectors);
|
2009-08-03 08:59:58 +08:00
|
|
|
revalidate_disk(mddev->gendisk);
|
2009-03-31 12:00:31 +08:00
|
|
|
if (sectors > mddev->dev_sectors &&
|
2011-05-11 13:52:21 +08:00
|
|
|
mddev->recovery_cp > mddev->dev_sectors) {
|
2009-03-31 11:33:13 +08:00
|
|
|
mddev->recovery_cp = mddev->dev_sectors;
|
2005-04-17 06:20:36 +08:00
|
|
|
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
|
|
|
|
}
|
2009-03-31 12:00:31 +08:00
|
|
|
mddev->dev_sectors = sectors;
|
2005-07-28 02:43:28 +08:00
|
|
|
mddev->resync_max_sectors = sectors;
|
2005-04-17 06:20:36 +08:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:47:53 +08:00
|
|
|
static int raid1_reshape(struct mddev *mddev)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
/* We need to:
|
|
|
|
* 1/ resize the r1bio_pool
|
|
|
|
* 2/ resize conf->mirrors
|
|
|
|
*
|
|
|
|
* We allocate a new r1bio_pool if we can.
|
|
|
|
* Then raise a device barrier and wait until all IO stops.
|
|
|
|
* Then resize conf->mirrors and swap in the new r1bio pool.
|
2005-06-22 08:17:09 +08:00
|
|
|
*
|
|
|
|
* At the same time, we "pack" the devices so that all the missing
|
|
|
|
* devices have the higher raid_disk numbers.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
|
|
|
mempool_t *newpool, *oldpool;
|
|
|
|
struct pool_info *newpoolinfo;
|
2012-07-31 08:03:52 +08:00
|
|
|
struct raid1_info *newmirrors;
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2006-03-27 17:18:13 +08:00
|
|
|
int cnt, raid_disks;
|
2006-10-03 16:15:53 +08:00
|
|
|
unsigned long flags;
|
2008-06-28 12:44:04 +08:00
|
|
|
int d, d2, err;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2006-03-27 17:18:13 +08:00
|
|
|
/* Cannot change chunk_size, layout, or level */
|
2009-06-18 06:45:27 +08:00
|
|
|
if (mddev->chunk_sectors != mddev->new_chunk_sectors ||
|
2006-03-27 17:18:13 +08:00
|
|
|
mddev->layout != mddev->new_layout ||
|
|
|
|
mddev->level != mddev->new_level) {
|
2009-06-18 06:45:27 +08:00
|
|
|
mddev->new_chunk_sectors = mddev->chunk_sectors;
|
2006-03-27 17:18:13 +08:00
|
|
|
mddev->new_layout = mddev->layout;
|
|
|
|
mddev->new_level = mddev->level;
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
|
2015-10-22 13:01:25 +08:00
|
|
|
if (!mddev_is_clustered(mddev)) {
|
|
|
|
err = md_allow_write(mddev);
|
|
|
|
if (err)
|
|
|
|
return err;
|
|
|
|
}
|
2007-01-26 16:57:11 +08:00
|
|
|
|
2006-03-27 17:18:13 +08:00
|
|
|
raid_disks = mddev->raid_disks + mddev->delta_disks;
|
|
|
|
|
2005-06-22 08:17:09 +08:00
|
|
|
if (raid_disks < conf->raid_disks) {
|
|
|
|
cnt = 0;
|
|
|
|
for (d = 0; d < conf->raid_disks; d++)
|
|
|
|
if (conf->mirrors[d].rdev)
|
|
|
|
cnt++;
|
|
|
|
if (cnt > raid_disks)
|
2005-04-17 06:20:36 +08:00
|
|
|
return -EBUSY;
|
2005-06-22 08:17:09 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
newpoolinfo = kmalloc(sizeof(*newpoolinfo), GFP_KERNEL);
|
|
|
|
if (!newpoolinfo)
|
|
|
|
return -ENOMEM;
|
|
|
|
newpoolinfo->mddev = mddev;
|
2011-12-23 07:17:56 +08:00
|
|
|
newpoolinfo->raid_disks = raid_disks * 2;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
newpool = mempool_create(NR_RAID1_BIOS, r1bio_pool_alloc,
|
|
|
|
r1bio_pool_free, newpoolinfo);
|
|
|
|
if (!newpool) {
|
|
|
|
kfree(newpoolinfo);
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
2012-07-31 08:03:52 +08:00
|
|
|
newmirrors = kzalloc(sizeof(struct raid1_info) * raid_disks * 2,
|
2011-12-23 07:17:56 +08:00
|
|
|
GFP_KERNEL);
|
2005-04-17 06:20:36 +08:00
|
|
|
if (!newmirrors) {
|
|
|
|
kfree(newpoolinfo);
|
|
|
|
mempool_destroy(newpool);
|
|
|
|
return -ENOMEM;
|
|
|
|
}
|
|
|
|
|
2013-06-12 09:01:22 +08:00
|
|
|
freeze_array(conf, 0);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/* ok, everything is stopped */
|
|
|
|
oldpool = conf->r1bio_pool;
|
|
|
|
conf->r1bio_pool = newpool;
|
2005-06-22 08:17:09 +08:00
|
|
|
|
2007-08-23 05:01:53 +08:00
|
|
|
for (d = d2 = 0; d < conf->raid_disks; d++) {
|
2011-10-11 13:45:26 +08:00
|
|
|
struct md_rdev *rdev = conf->mirrors[d].rdev;
|
2007-08-23 05:01:53 +08:00
|
|
|
if (rdev && rdev->raid_disk != d2) {
|
2011-07-27 09:00:36 +08:00
|
|
|
sysfs_unlink_rdev(mddev, rdev);
|
2007-08-23 05:01:53 +08:00
|
|
|
rdev->raid_disk = d2;
|
2011-07-27 09:00:36 +08:00
|
|
|
sysfs_unlink_rdev(mddev, rdev);
|
|
|
|
if (sysfs_link_rdev(mddev, rdev))
|
2007-08-23 05:01:53 +08:00
|
|
|
printk(KERN_WARNING
|
2011-07-27 09:00:36 +08:00
|
|
|
"md/raid1:%s: cannot register rd%d\n",
|
|
|
|
mdname(mddev), rdev->raid_disk);
|
2005-06-22 08:17:09 +08:00
|
|
|
}
|
2007-08-23 05:01:53 +08:00
|
|
|
if (rdev)
|
|
|
|
newmirrors[d2++].rdev = rdev;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
kfree(conf->mirrors);
|
|
|
|
conf->mirrors = newmirrors;
|
|
|
|
kfree(conf->poolinfo);
|
|
|
|
conf->poolinfo = newpoolinfo;
|
|
|
|
|
2006-10-03 16:15:53 +08:00
|
|
|
spin_lock_irqsave(&conf->device_lock, flags);
|
2005-04-17 06:20:36 +08:00
|
|
|
mddev->degraded += (raid_disks - conf->raid_disks);
|
2006-10-03 16:15:53 +08:00
|
|
|
spin_unlock_irqrestore(&conf->device_lock, flags);
|
2005-04-17 06:20:36 +08:00
|
|
|
conf->raid_disks = mddev->raid_disks = raid_disks;
|
2006-03-27 17:18:13 +08:00
|
|
|
mddev->delta_disks = 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2013-06-12 09:01:22 +08:00
|
|
|
unfreeze_array(conf);
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2015-07-06 10:26:57 +08:00
|
|
|
set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
|
2005-04-17 06:20:36 +08:00
|
|
|
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
|
|
|
|
md_wakeup_thread(mddev->thread);
|
|
|
|
|
|
|
|
mempool_destroy(oldpool);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:47:53 +08:00
|
|
|
static void raid1_quiesce(struct mddev *mddev, int state)
|
2005-09-10 07:23:45 +08:00
|
|
|
{
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf = mddev->private;
|
2005-09-10 07:23:45 +08:00
|
|
|
|
|
|
|
switch(state) {
|
2009-12-14 09:49:51 +08:00
|
|
|
case 2: /* wake for suspend */
|
|
|
|
wake_up(&conf->wait_barrier);
|
|
|
|
break;
|
2005-09-10 07:23:48 +08:00
|
|
|
case 1:
|
2013-11-14 12:16:18 +08:00
|
|
|
freeze_array(conf, 0);
|
2005-09-10 07:23:45 +08:00
|
|
|
break;
|
2005-09-10 07:23:48 +08:00
|
|
|
case 0:
|
2013-11-14 12:16:18 +08:00
|
|
|
unfreeze_array(conf);
|
2005-09-10 07:23:45 +08:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-10-11 13:47:53 +08:00
|
|
|
static void *raid1_takeover(struct mddev *mddev)
|
2009-12-14 09:49:51 +08:00
|
|
|
{
|
|
|
|
/* raid1 can take over:
|
|
|
|
* raid5 with 2 devices, any layout or chunk size
|
|
|
|
*/
|
|
|
|
if (mddev->level == 5 && mddev->raid_disks == 2) {
|
2011-10-11 13:49:05 +08:00
|
|
|
struct r1conf *conf;
|
2009-12-14 09:49:51 +08:00
|
|
|
mddev->new_level = 1;
|
|
|
|
mddev->new_layout = 0;
|
|
|
|
mddev->new_chunk_sectors = 0;
|
|
|
|
conf = setup_conf(mddev);
|
|
|
|
if (!IS_ERR(conf))
|
2013-11-14 12:16:18 +08:00
|
|
|
/* Array must appear to be quiesced */
|
|
|
|
conf->array_frozen = 1;
|
2009-12-14 09:49:51 +08:00
|
|
|
return conf;
|
|
|
|
}
|
|
|
|
return ERR_PTR(-EINVAL);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2011-10-11 13:49:58 +08:00
|
|
|
static struct md_personality raid1_personality =
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
.name = "raid1",
|
2006-01-06 16:20:36 +08:00
|
|
|
.level = 1,
|
2005-04-17 06:20:36 +08:00
|
|
|
.owner = THIS_MODULE,
|
2016-01-21 05:52:20 +08:00
|
|
|
.make_request = raid1_make_request,
|
|
|
|
.run = raid1_run,
|
2014-12-15 09:56:58 +08:00
|
|
|
.free = raid1_free,
|
2016-01-21 05:52:20 +08:00
|
|
|
.status = raid1_status,
|
|
|
|
.error_handler = raid1_error,
|
2005-04-17 06:20:36 +08:00
|
|
|
.hot_add_disk = raid1_add_disk,
|
|
|
|
.hot_remove_disk= raid1_remove_disk,
|
|
|
|
.spare_active = raid1_spare_active,
|
2016-01-21 05:52:20 +08:00
|
|
|
.sync_request = raid1_sync_request,
|
2005-04-17 06:20:36 +08:00
|
|
|
.resize = raid1_resize,
|
2009-03-18 09:10:40 +08:00
|
|
|
.size = raid1_size,
|
2006-03-27 17:18:13 +08:00
|
|
|
.check_reshape = raid1_reshape,
|
2005-09-10 07:23:45 +08:00
|
|
|
.quiesce = raid1_quiesce,
|
2009-12-14 09:49:51 +08:00
|
|
|
.takeover = raid1_takeover,
|
2014-12-15 09:56:56 +08:00
|
|
|
.congested = raid1_congested,
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
|
|
|
static int __init raid_init(void)
|
|
|
|
{
|
2006-01-06 16:20:36 +08:00
|
|
|
return register_md_personality(&raid1_personality);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
static void raid_exit(void)
|
|
|
|
{
|
2006-01-06 16:20:36 +08:00
|
|
|
unregister_md_personality(&raid1_personality);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
module_init(raid_init);
|
|
|
|
module_exit(raid_exit);
|
|
|
|
MODULE_LICENSE("GPL");
|
2009-12-14 09:49:58 +08:00
|
|
|
MODULE_DESCRIPTION("RAID1 (mirroring) personality for MD");
|
2005-04-17 06:20:36 +08:00
|
|
|
MODULE_ALIAS("md-personality-3"); /* RAID1 */
|
2006-01-06 16:20:51 +08:00
|
|
|
MODULE_ALIAS("md-raid1");
|
2006-01-06 16:20:36 +08:00
|
|
|
MODULE_ALIAS("md-level-1");
|
2011-10-11 13:50:01 +08:00
|
|
|
|
|
|
|
module_param(max_queued_requests, int, S_IRUGO|S_IWUSR);
|