author      NeilBrown <neilb@suse.com>        2018-04-26 07:46:29 +0300
committer   Shaohua Li <shli@fb.com>          2018-05-01 19:47:50 +0300
commit      011abdc9df559ec75779bb7c53a744c69b2a94c6 (patch)
tree        2907219095e95dd67455f209ee5d96a5b0d8068e /lib/test_bitmap.c
parent      eb81b328267b2d97d11441483f5ac9dccb505818 (diff)
download    linux-011abdc9df559ec75779bb7c53a744c69b2a94c6.tar.xz
md: fix two problems with setting the "re-add" device state.
If "re-add" is written to the "state" file for a device
which is faulty, this has an effect similar to removing
and re-adding the device. It should take up the
same slot in the array that it previously had, and
an accelerated (e.g. bitmap-based) rebuild should happen.
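
For context, each member device exposes a "state" file under sysfs, so the
re-add is triggered by a plain write from userspace. A minimal sketch of that
write, assuming an array md0 with member sdb (both names are illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            /* Array and device names are assumptions for illustration. */
            const char *path = "/sys/block/md0/md/dev-sdb/state";
            int fd = open(path, O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* With the second fix below, this write is rejected when no
             * saved slot is known, instead of appearing to succeed. */
            if (write(fd, "re-add", strlen("re-add")) < 0)
                    perror("write");
            close(fd);
            return 0;
    }
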
The slot that "it previously had" is determined by rdev->saved_raid_disk.

However, this is not set when a device fails (only when a device is added),
and it is cleared when resync completes. This means that "re-add" will
normally work once, but may not work a second time.
This patch includes two fixes (a rough model follows the list):
1/ when a device fails, record the ->raid_disk value in ->saved_raid_disk
   before clearing ->raid_disk;
2/ when "re-add" is written to a device for which ->saved_raid_disk is not
   set, fail.
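
The following is a rough userspace model of those two changes, not the kernel
diff itself: the field names mirror struct md_rdev, but the struct and
helpers here are invented for illustration.

    #include <stdio.h>

    /* Toy stand-in for the relevant struct md_rdev fields. */
    struct rdev_model {
            int raid_disk;       /* slot in the array, -1 when unassigned      */
            int saved_raid_disk; /* slot to reclaim on re-add, -1 when unknown */
            int faulty;
    };

    /* Fix 1: when a failed device loses its slot, remember that slot
     * in saved_raid_disk before clearing raid_disk. */
    static void remove_failed(struct rdev_model *rdev)
    {
            rdev->saved_raid_disk = rdev->raid_disk;
            rdev->raid_disk = -1;
    }

    /* Fix 2: accept "re-add" only when a saved slot is known, so an
     * accelerated rebuild is actually possible; otherwise fail. */
    static int re_add(struct rdev_model *rdev)
    {
            if (rdev->faulty && rdev->raid_disk == -1 &&
                rdev->saved_raid_disk >= 0) {
                    rdev->raid_disk = rdev->saved_raid_disk;
                    rdev->faulty = 0;
                    return 0;
            }
            return -1;
    }

    int main(void)
    {
            struct rdev_model r = {
                    .raid_disk = 2, .saved_raid_disk = -1, .faulty = 1
            };

            remove_failed(&r);   /* fix 1 preserves slot 2   */
            if (re_add(&r) == 0) /* fix 2's check now passes */
                    printf("re-added into slot %d\n", r.raid_disk);
            return 0;
    }

Without fix 1, saved_raid_disk would still be -1 when re_add() runs; with
fix 2, that call now fails outright instead of quietly giving up the
accelerated rebuild.
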
I think this is suitable for stable, as the bug can force a re-added device
into a full resync, which takes much longer and so puts the data at risk
for longer.
Cc: <stable@vger.kernel.org> (v4.1)
Fixes: 97f6cd39da22 ("md-cluster: re-add capabilities")
Signed-off-by: NeilBrown <neilb@suse.com>
Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>