author | NeilBrown <neilb@suse.com> | 2016-11-04 08:46:03 +0300
---|---|---
committer | Shaohua Li <shli@fb.com> | 2016-11-08 02:08:23 +0300
commit | a9ae93c8cc0b63d8283f335604362f903d2244e2 (patch) |
tree | 5db1b0b94eca87a7251d774b0a75c0bb690e4312 |
parent | 5e2c7a3611977b69ae0531e8fbdeab5dad17925a (diff) |
download | linux-a9ae93c8cc0b63d8283f335604362f903d2244e2.tar.xz |
md/raid10: abort delayed writes when device fails.
When writing to an array with a bitmap enabled, the writes are grouped
in batches which are preceded by an update to the bitmap.
It is quite likely that, if a drive develops a problem which is not
media related, the bitmap write will be the first to report an
error and cause the device to be marked faulty (as the bitmap write is
at the start of a batch).
In this case, there is no point submitting the subsequent writes to the
failed device - that just wastes time.
So re-check the Faulty state of a device before submitting a
delayed write.
This requires that we keep the 'rdev', rather than the 'bdev', in the
bio, and swap in the bdev just before final submission.
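
The pattern is easier to see in isolation: the queueing path stashes the rdev pointer where the bdev normally lives, and the flush path restores the real bdev and immediately fails the bio with -EIO if the device has gone Faulty in the meantime. The stand-alone C sketch below models that flow in userspace; it is an illustration only, and every name in it (model_rdev, pending_bio, queue_write, flush_pending) is invented rather than taken from the kernel source.

/*
 * Stand-alone userspace model of the "stash the rdev, re-check Faulty at
 * flush time" pattern.  All names here are illustrative, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_rdev {
	const char *name;
	bool faulty;      /* set when an earlier write (e.g. the bitmap) failed */
	int bdev;         /* stands in for rdev->bdev */
};

struct pending_bio {
	struct pending_bio *next;
	void *dev;        /* holds the rdev while queued, the bdev at submit time */
	int error;
};

/* Queue time: remember the rdev itself, not its bdev. */
static void queue_write(struct pending_bio **head, struct pending_bio *bio,
			struct model_rdev *rdev)
{
	bio->dev = rdev;
	bio->next = *head;
	*head = bio;
}

/*
 * Flush time: swap the real bdev back in, but abort writes to devices that
 * have become Faulty since they were queued instead of submitting them.
 */
static void flush_pending(struct pending_bio *bio)
{
	while (bio) {
		struct pending_bio *next = bio->next;
		struct model_rdev *rdev = bio->dev;

		bio->next = NULL;
		bio->dev = &rdev->bdev;            /* final submission target */
		if (rdev->faulty) {
			bio->error = -5;           /* -EIO: don't waste time */
			printf("%s: delayed write aborted\n", rdev->name);
		} else {
			printf("%s: delayed write submitted\n", rdev->name);
		}
		bio = next;
	}
}

int main(void)
{
	struct model_rdev good = { "sdb", false, 1 };
	struct model_rdev bad  = { "sdc", false, 2 };
	struct pending_bio a = { 0 }, b = { 0 };
	struct pending_bio *head = NULL;

	queue_write(&head, &a, &good);
	queue_write(&head, &b, &bad);
	bad.faulty = true;   /* e.g. the bitmap write to this device failed first */
	flush_pending(head);
	return 0;
}

As the hunks below show, the actual patch reuses bio->bi_bdev itself to hold the rdev pointer until flush time, swapping the real bdev back in just before submission.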
Reported-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
-rw-r--r-- | drivers/md/raid10.c | 22 |
1 file changed, 16 insertions(+), 6 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 25e3fd76a9db..5290be3d5c26 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -858,9 +858,14 @@ static void flush_pending_writes(struct r10conf *conf)
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
+		struct md_rdev *rdev = (void*)bio->bi_bdev;
 		bio->bi_next = NULL;
-		if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
-		    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
+		bio->bi_bdev = rdev->bdev;
+		if (test_bit(Faulty, &rdev->flags)) {
+			bio->bi_error = -EIO;
+			bio_endio(bio);
+		} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
+			    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
 			/* Just ignore it */
 			bio_endio(bio);
 		else
 			generic_make_request(bio);
@@ -1036,9 +1041,14 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
 	while (bio) { /* submit pending writes */
 		struct bio *next = bio->bi_next;
+		struct md_rdev *rdev = (void*)bio->bi_bdev;
 		bio->bi_next = NULL;
-		if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
-		    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
+		bio->bi_bdev = rdev->bdev;
+		if (test_bit(Faulty, &rdev->flags)) {
+			bio->bi_error = -EIO;
+			bio_endio(bio);
+		} else if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
+			    !blk_queue_discard(bdev_get_queue(bio->bi_bdev))))
 			/* Just ignore it */
 			bio_endio(bio);
 		else
 			generic_make_request(bio);
@@ -1357,7 +1367,7 @@ retry_write:
 			mbio->bi_iter.bi_sector	= (r10_bio->devs[i].addr+
 					   choose_data_offset(r10_bio,
 					   rdev));
-			mbio->bi_bdev = rdev->bdev;
+			mbio->bi_bdev = (void*)rdev;
 			mbio->bi_end_io	= raid10_end_write_request;
 			bio_set_op_attrs(mbio, op, do_sync | do_fua);
 			mbio->bi_private = r10_bio;
@@ -1399,7 +1409,7 @@ retry_write:
 			mbio->bi_iter.bi_sector	= (r10_bio->devs[i].addr +
 					   choose_data_offset(
 						r10_bio, rdev));
-			mbio->bi_bdev = rdev->bdev;
+			mbio->bi_bdev = (void*)rdev;
 			mbio->bi_end_io	= raid10_end_write_request;
 			bio_set_op_attrs(mbio, op, do_sync | do_fua);
 			mbio->bi_private = r10_bio;