Rebuild Software RAID (Linux)

This guide walks you through rebuilding a software RAID after replacing a defective hard disk.

The first step is to check the RAID status:

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [faulty]
md1 : active raid1 sdb1[1]
      4194240 blocks [2/1] [_U]

md3 : active raid1 sdb3[1]
      970470016 blocks [2/1] [_U]

unused devices: <none>

The output above shows that only the second drive (sdb) is active in each array: [2/1] means two members are expected but only one is present, and [_U] marks the first slot as missing. Checking fdisk -l confirms that /dev/sda has no partition table:

# fdisk -l
Disk /dev/ram0: 896 MiB, 939524096 bytes, 1835008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x76a7d556


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x76a7d556

Device     Boot    Start        End    Sectors   Size Id Type
/dev/sdb1           2048    8390655    8388608     4G fd Linux raid autodetect
/dev/sdb2        8390656   12584959    4194304     2G 82 Linux swap / Solaris
/dev/sdb3       12584960 1953525167 1940940208 925.5G fd Linux raid autodetect


Disk /dev/md3: 925.5 GiB, 993761296384 bytes, 1940940032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 4 GiB, 4294901760 bytes, 8388480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-usr: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-var: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-home: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

In the first step, copy the partition table manually from the second hard disk to the first one. This is done with the following command:

# sfdisk -d /dev/sdb | sfdisk /dev/sda
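Independently of the copy itself, it can be prudent to save the healthy disk's partition table to a file so it can be inspected or restored later (the filename here is just an example):

# sfdisk -d /dev/sdb > /root/sdb-partition-table.dump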

After running sfdisk, check that the exact partition layout has been duplicated on the new disk.
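One quick way to compare the two disks is to list them both with fdisk and check that the partition sizes and types match, for example:

# fdisk -l /dev/sda /dev/sdb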

If it hasn't been, the original drive is most likely using GPT, and sfdisk therefore could not copy the layout. You can confirm this by looking for "Disk label type: gpt" in the fdisk output:

[root@host ~]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 74D5FA4C-6B00-4A51-B72B-A017907EBABE


#         Start          End    Size  Type            Name
 1         2048         6143      2M  BIOS boot       primary
 2         6144      3905535    1.9G  Linux RAID      primary
 3      3905536     11718655    3.7G  Linux swap      primary
 4     11718656   3907026943    1.8T  Linux RAID      primary

You will first need to delete the broken partition table that the failed sfdisk copy left on the new disk before proceeding.
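One way to do this, assuming the new, empty disk is /dev/sda, is to clear every partition table structure (both MBR and GPT) from it with sgdisk:

# sgdisk --zap-all /dev/sda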

Then, instead of sfdisk, use sgdisk to copy the GPT layout:

# sgdisk /dev/sdb -R /dev/sda
# sgdisk -G /dev/sda

The first command replicates the partition table from /dev/sdb onto /dev/sda; the second randomizes the disk and partition GUIDs on /dev/sda so the two disks do not share identifiers.

If the sgdisk command isn't available, you will first need to install the gdisk package.
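The package is typically named gdisk. On RHEL/CentOS-style systems, for example:

# yum install gdisk

and on Debian/Ubuntu-style systems:

# apt-get install gdisk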

After that, run fdisk -l again to check that the first hard disk is now partitioned exactly like the second one:

# fdisk -l
Disk /dev/ram0: 896 MiB, 939524096 bytes, 1835008 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x76a7d556

Device     Boot    Start        End    Sectors   Size Id Type
/dev/sda1           2048    8390655    8388608     4G fd Linux raid autodetect
/dev/sda2        8390656   12584959    4194304     2G 82 Linux swap / Solaris
/dev/sda3       12584960 1953525167 1940940208 925.5G fd Linux raid autodetect


Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x76a7d556

Device     Boot    Start        End    Sectors   Size Id Type
/dev/sdb1           2048    8390655    8388608     4G fd Linux raid autodetect
/dev/sdb2        8390656   12584959    4194304     2G 82 Linux swap / Solaris
/dev/sdb3       12584960 1953525167 1940940208 925.5G fd Linux raid autodetect


Disk /dev/md3: 925.5 GiB, 993761296384 bytes, 1940940032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 4 GiB, 4294901760 bytes, 8388480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-usr: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-var: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-home: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

After restoring the partition table, activate the swap partition using the following commands:

# mkswap /dev/sda2
# swapon -p 1 /dev/sda2

*Note*

If your disk is using GPT, the swap partition is likely /dev/sda3 instead; check the fdisk -l output to confirm.
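Once activated, you can confirm that the swap space is in use by listing the active swap devices (swapon --show requires a reasonably recent util-linux; free -m works as well):

# swapon --show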

Now we can start rebuilding the RAID:

# mdadm --manage /dev/md1 --add /dev/sda1
# mdadm --manage /dev/md3 --add /dev/sda3

If your partition layout is GPT, you will be adding the partitions to md2 and md4 instead.
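Right after adding the partitions, you can check that the new members were accepted and that recovery has started; mdadm's detail view gives a per-array summary, for example:

# mdadm --detail /dev/md3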

We can now use cat /proc/mdstat to track the rebuild of the RAID.

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [faulty]
md1 : active raid1 sda1[0] sdb1[1]
      4194240 blocks [2/2] [UU]

md3 : active raid1 sda3[2] sdb3[1]
      970470016 blocks [2/1] [_U]
      [========>............]  recovery = 40.2% (390393152/970470016) finish=67.9min speed=142233K/sec

unused devices: <none>
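Rather than re-running the command by hand, you can refresh this view automatically with watch (if it is installed):

# watch cat /proc/mdstat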

GPT Example:

[root@host ~]# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda2[2] sdb2[0]
      1949632 blocks super 1.0 [2/2] [UU]

md4 : active raid1 sda4[2] sdb4[0]
      1947653952 blocks super 1.0 [2/1] [U_]
      [>....................]  recovery =  0.3% (7050752/1947653952) finish=658.0min speed=49147K/sec

unused devices: <none>

After the RAID has been rebuilt, we need to reinstall the GRUB bootloader. First, mount everything:

# mount /dev/md1 /mnt
# mount /dev/mapper/vg00-var /mnt/var
# mount /dev/mapper/vg00-usr /mnt/usr
# mount /dev/mapper/vg00-home /mnt/home
# mount --bind /dev /mnt/dev
# mount --bind /dev/pts /mnt/dev/pts
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys 
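
Before entering the chroot, it can be worth confirming that everything is mounted where expected; findmnt from util-linux can list the mounts under /mnt, for example:

# findmnt -R /mnt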

Then chroot into /mnt:

# chroot /mnt
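
If you are not sure whether the system uses legacy GRUB or GRUB 2, checking the installer version from inside the chroot is one quick way (the binary is named grub-install or grub2-install depending on the distribution):

# grub-install --version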

Installing GRUB:

# grub-install /dev/sda
# grub-install /dev/sdb

Or, for GRUB 2:

# grub2-mkconfig -o /boot/grub2/grub.cfg
# grub2-install /dev/sda
# grub2-install /dev/sdb
# grubby --update-kernel=ALL --args=rd.auto=1

Exit the chroot with exit and unmount all the disks:

# exit
# umount -a

The RAID has now been successfully rebuilt.
