This post describes how to replace a mirrored disk in a software RAID array. In this example we have two RAID1 arrays: /dev/md0 (system) and /dev/md1 (data), plus an EFI boot partition on each disk.
So, we start with two disks:
lsblk -f /dev/sdb /dev/sdc
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
sdb
├─sdb1 vfat FAT16 D506-F28D
├─sdb2 linux_raid_member 1.2 deer:0 32640a07-ad38-c48e-748f-c6ea53326299
│ └─md0 LVM2_member LVM2 001 kAqPYz-WS4d-EWpZ-2Cem-Hqsk-Y6Xo-f4GwAU
│ └─system_vg-system_lv ext4 1.0 d9be8abd-ed85-4696-b8f4-29f9005854c1 71.2G 17% /
└─sdb3 linux_raid_member 1.2 deer:1 7269721b-d5b4-adbe-5cfc-549e38f88bf0
└─md1 LVM2_member LVM2 001 6uUOiT-eKoG-mEUh-ukC0-Jvhf-p6Vf-tQCMJq
└─data_vg-data_lv ext4 1.0 9a0a39c1-faf3-4d86-bf18-766f887ebe14 1.1T 29% /data
sdc
├─sdc1 vfat FAT16 D506-F28D 486.5M 0% /boot/efi
├─sdc2 linux_raid_member 1.2 deer:0 32640a07-ad38-c48e-748f-c6ea53326299
│ └─md0 LVM2_member LVM2 001 kAqPYz-WS4d-EWpZ-2Cem-Hqsk-Y6Xo-f4GwAU
│ └─system_vg-system_lv ext4 1.0 d9be8abd-ed85-4696-b8f4-29f9005854c1 71.2G 17% /
└─sdc3 linux_raid_member 1.2 deer:1 7269721b-d5b4-adbe-5cfc-549e38f88bf0
└─md1 LVM2_member LVM2 001 6uUOiT-eKoG-mEUh-ukC0-Jvhf-p6Vf-tQCMJq
└─data_vg-data_lv ext4 1.0 9a0a39c1-faf3-4d86-bf18-766f887ebe14 1.1T 29% /data
And two raid arrays:
sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Feb 27 12:35:49 2020
Raid Level : raid1
Array Size : 97589248 (93.07 GiB 99.93 GB)
Used Dev Size : 97589248 (93.07 GiB 99.93 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Jan 13 18:12:07 2024
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : deer:0 (local to host deer)
UUID : 32640a07:ad38c48e:748fc6ea:53326299
Events : 4373394
Number Major Minor RaidDevice State
3 8 34 0 active sync /dev/sdc2
4 8 18 1 active sync /dev/sdb2
sudo mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Thu Feb 27 12:36:12 2020
Raid Level : raid1
Array Size : 1855226880 (1769.28 GiB 1899.75 GB)
Used Dev Size : 1855226880 (1769.28 GiB 1899.75 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Jan 13 18:21:52 2024
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : deer:1 (local to host deer)
UUID : 7269721b:d5b4adbe:5cfc549e:38f88bf0
Events : 34389
Number Major Minor RaidDevice State
3 8 35 0 active sync /dev/sdc3
4 8 19 1 active sync /dev/sdb3
Now we attach a new disk, /dev/sdd.
First of all, we have to copy the partition layout to the new disk. We could use fdisk, but this example uses sfdisk. Use the --force option on restore if sfdisk refuses with "sfdisk: I don't like these partitions - nothing changed":
# save
sudo sfdisk -d /dev/sdb > part_table
# restore
sudo sfdisk /dev/sdd < part_table
# or in one step
sudo sfdisk -d /dev/sdb | sudo sfdisk /dev/sdd
# with --force
sudo sfdisk -d /dev/sdb | sudo sfdisk --force /dev/sdd
Secondly, add the new disk's partitions to our RAID arrays, one partition per array:
sudo mdadm --manage /dev/md0 --add /dev/sdd2
sudo mdadm --manage /dev/md1 --add /dev/sdd3
After that, the new partitions will appear in the arrays with State: spare. This means that if a disk in the array fails, the spare's state changes to State: active sync and resynchronization starts automatically.
Since we want to replace the old disk now, we mark its partitions as failed so that the arrays start syncing to the spares:
sudo mdadm --manage /dev/md0 --fail /dev/sdb2
sudo mdadm --manage /dev/md1 --fail /dev/sdb3
Let’s check it:
cat /proc/mdstat
sudo mdadm --detail /dev/md0
sudo mdadm --detail /dev/md1
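The full --detail output is verbose; for a quick health check it is enough to scan /proc/mdstat for degraded members. A minimal sketch (the sample file and its contents are ours, modelled on /proc/mdstat; on a live system point the function at /proc/mdstat itself):

```shell
#!/bin/sh
# A healthy two-disk RAID1 line shows [UU]; an underscore marks a
# failed or missing member. Count arrays that are degraded.
degraded_count() {
    grep -o '\[[U_][U_]\]' "$1" | grep -c '_'
}

# demo on a snapshot file (hypothetical contents)
cat > /tmp/mdstat.sample <<'EOF'
md0 : active raid1 sdc2[3] sdb2[4](F)
      97589248 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sdc3[3] sdb3[4]
      1855226880 blocks super 1.2 [2/2] [UU]
EOF
degraded_count /tmp/mdstat.sample    # prints 1 (md0 is degraded)
```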
Once the resync has finished, all that is left is to remove the failed partitions from the arrays:
sudo mdadm --manage /dev/md0 --remove /dev/sdb2
sudo mdadm --manage /dev/md1 --remove /dev/sdb3
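As an aside, newer mdadm (3.3 and later) can do the fail-and-remove dance in one step with --replace, which rebuilds onto the new member before kicking the old one out, so redundancy is kept during the copy. A dry-run sketch for our devices that only prints the commands; drop the echo to actually run them:

```shell
#!/bin/sh
# Print the one-step replacement command for each array.
# --replace marks the old member for replacement and mdadm copies
# the data onto the --with device before failing the old one.
for spec in md0:sdb2:sdd2 md1:sdb3:sdd3; do
    array=${spec%%:*}
    rest=${spec#*:}
    old=${rest%%:*}
    new=${rest#*:}
    echo "sudo mdadm /dev/$array --replace /dev/$old --with /dev/$new"
done
```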
Before we finish, we need to take care of the boot loader (/dev/sdb1 -> /dev/sdd1). Create a FAT filesystem on the new EFI partition:
sudo mkfs.vfat /dev/sdd1
To give the new FAT partition the same UUID, we need mkdosfs from the dosfstools package:
sudo apt install dosfstools
The '-' has to be cut from the UUID D506-F28D to avoid an error:
sudo mkdosfs -i D506F28D /dev/sdd1
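The same volume ID can be derived from the lsblk/blkid output in a script; a small sketch (the variable names are ours):

```shell
#!/bin/sh
# FAT stores the UUID as a 32-bit volume ID; mkdosfs -i expects it
# without the dash that lsblk/blkid insert for display.
UUID="D506-F28D"                        # as reported by lsblk -f
VOLID=$(printf '%s' "$UUID" | tr -d '-')
echo "$VOLID"                           # D506F28D
# sudo mkdosfs -i "$VOLID" /dev/sdd1
```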
Let’s check:
lsblk -f | grep vfat
├─sdb1 vfat D506-F28D 486.5M 0% /boot/efi
├─sdc1 vfat D506-F28D
├─sdd1 vfat D506-F28D
For a Linux ext2/3/4 partition, the UUID would be set with tune2fs instead:
sudo tune2fs /dev/sdX2 -U 1cbce417-ad96-46b3-a477-641b1a315adb
Unmount /boot/efi, mount the new partition in its place, install the GRUB EFI binary, and update the initramfs and GRUB config:
umount /boot/efi/
mount /dev/sdd1 /boot/efi/
# it will return nothing
ls /boot/efi/
grub-install
Installing for x86_64-efi platform.
Installation finished. No error reported.
ls /boot/efi/EFI/proxmox/
grubx64.efi
update-initramfs -u
update-grub2
Prepare the loader path for efibootmgr: the file /boot/efi/EFI/proxmox/grubx64.efi becomes \EFI\proxmox\grubx64.efi, since the path in the boot entry is relative to the EFI system partition and uses backslashes.
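The translation can also be done in the shell; a small sketch, assuming the ESP is mounted at /boot/efi:

```shell
#!/bin/sh
# Strip the mount point prefix and flip slashes to backslashes,
# because the loader path is relative to the EFI system partition.
ESP_FILE="/boot/efi/EFI/proxmox/grubx64.efi"
LOADER=$(printf '%s' "${ESP_FILE#/boot/efi}" | tr '/' '\\')
printf '%s\n' "$LOADER"    # \EFI\proxmox\grubx64.efi
```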
Let’s create boot records:
sudo efibootmgr --create --disk /dev/sdd --part 1 --label "sdd" --loader "\EFI\proxmox\grubx64.efi"
BootCurrent: 0003
Timeout: 1 seconds
BootOrder: 0003
Boot0003* sdd HD(1,GPT,94753d5b-dc40-474b-a621-235bbc734118,0x800,0xf3800)/File(\EFI\proxmox\grubx64.efi)
Before rebooting, delete the old boot entry that points at /dev/sdb1 (efibootmgr -b <entry number> -B), then reboot.
Useful links: