RAID command “mdadm”

After creating partitions in the article “Using “gdisk” to manage HDD partition“, we’re going to build a RAID array. How many disks you need and the size of the array depend on your scenario.

In my situation, CentOS is installed on an SSD. I also have additional, larger HDDs for storing data such as the /home and /var directories. I will use my 3 disks, with 2 partitions on each, to build 2 volumes: one for home and one for var.

After installing CentOS 7 and booting the system, how do we build a software RAID on Linux? I will explain that in this article.

 

Prerequisites

Depending on your RAID level, you need to prepare enough HDDs.

You can refer to the Wikipedia articles below:
Standard RAID levels[1] or Nested RAID levels[2]


[1] https://en.wikipedia.org/wiki/Standard_RAID_levels
[2] https://en.wikipedia.org/wiki/Nested_RAID_levels
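As a quick sizing sanity check before buying or partitioning disks, RAID 5 usable capacity is (members - 1) * size of the smallest member. A minimal sketch in shell; the figure of 1740 GiB per member is an assumed round number for a 1.7T partition, not a value from this article:

```shell
# RAID 5 usable capacity = (members - 1) * size of the smallest member.
# Assumed figures: 3 members of 1740 GiB each (roughly a 1.7T partition).
members=3
member_gib=1740
usable_gib=$(( (members - 1) * member_gib ))
echo "RAID 5 usable capacity: ${usable_gib} GiB"
```

With 3 members, one member’s worth of capacity goes to parity, so roughly 3.4 TiB remains usable.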

Let’s start.

Use “lsblk” to list the devices’ status.

[nathaniel@CentOS7 /]$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0 167.7G  0 disk
├─sda1    8:1    0   512M  0 part  /boot/efi
├─sda2    8:2    0    20G  0 part  /boot
├─sda3    8:3    0 143.2G  0 part  /
└─sda4    8:4    0     4G  0 part  [SWAP]
sdb       8:16   0   1.8T  0 disk
├─sdb1    8:17   0   1.7T  0 part
└─sdb2    8:18   0   113G  0 part
sdc       8:32   0   1.8T  0 disk
├─sdc1    8:33   0   1.7T  0 part
└─sdc2    8:34   0   113G  0 part
sdd       8:48   0   1.8T  0 disk
├─sdd1    8:49   0   1.7T  0 part
└─sdd2    8:50   0   113G  0 part

How do we build a software RAID on CentOS 7? The easy way is with the “mdadm” command.

 

#mdadm --create /dev/md0 --auto=yes --level=5 --raid-devices=3 --spare-devices=1 /dev/sd{b,c,d,e}1

--level=5 means RAID 5. --raid-devices is the number of active partitions in the array. --spare-devices sets aside extra partitions for automatic rebuild; spares are counted on top of --raid-devices, so the partitions listed on the command line must total the two added together. In the example above, I list 4 partitions: 3 of them are active members of the RAID 5 array and the remaining one is the spare.
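As a rule, mdadm requires the number of partitions listed to equal --raid-devices plus --spare-devices; 3 active members plus 1 spare means 4 partitions on the command line. A quick shell check of that arithmetic, using the same device names as the example:

```shell
# mdadm expects: partitions listed = raid-devices + spare-devices
raid_devices=3
spare_devices=1
# Brace expansion produces the same 4 partition names as the mdadm example.
listed=$(echo /dev/sd{b,c,d,e}1 | wc -w)
echo "listed=${listed} required=$(( raid_devices + spare_devices ))"
```

If the counts don’t match, mdadm refuses to create the array, so this is worth checking before running the real command.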

 

In my case, I adjust the parameters as below. This volume is for the home directory.

#mdadm --create /dev/md0 --auto=yes --level=5 --raid-devices=3 --spare-devices=0 /dev/sd{b,c,d}1

And this one is for the var directory.

#mdadm --create /dev/md1 --auto=yes --level=5 --raid-devices=3 --spare-devices=0 /dev/sd{b,c,d}2

Then run the commands below to watch the sync progress in your terminal.

#cat /proc/mdstat

#top

During the initial sync, CPU usage is around 30%.

After the RAID volumes have finished syncing, check the status of /dev/md0; it should look like this.

[nathaniel@CentOS7 ~]$ sudo mdadm --detail /dev/md0
[sudo] password for nathaniel:
/dev/md0:
           Version : 1.2
     Creation Time : Tue May 28 00:35:25 2019
        Raid Level : raid5
        Array Size : 3669751808 (3499.75 GiB 3757.83 GB)
     Used Dev Size : 1834875904 (1749.87 GiB 1878.91 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Thu Jun  6 18:20:55 2019
             State : clean     #<------- especially here: clean.
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : CentOS7:0  (local to host CentOS7)
              UUID : b0025773:aa726537:b28ec4a3:b385b60d
            Events : 3788

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

That completes our software RAID configuration on CentOS 7. 😀

Check the devices again with “lsblk”.

[nathaniel@CentOS7 ~]$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0 167.7G  0 disk
├─sda1    8:1    0   512M  0 part  /boot/efi
├─sda2    8:2    0    20G  0 part  /boot
├─sda3    8:3    0 143.2G  0 part  /
└─sda4    8:4    0     4G  0 part  [SWAP]
sdb       8:16   0   1.8T  0 disk
├─sdb1    8:17   0   1.7T  0 part
│ └─md0   9:0    0   3.4T  0 raid5 
└─sdb2    8:18   0   113G  0 part
  └─md1   9:1    0 225.9G  0 raid5 
sdc       8:32   0   1.8T  0 disk
├─sdc1    8:33   0   1.7T  0 part
│ └─md0   9:0    0   3.4T  0 raid5 
└─sdc2    8:34   0   113G  0 part
  └─md1   9:1    0 225.9G  0 raid5 
sdd       8:48   0   1.8T  0 disk
├─sdd1    8:49   0   1.7T  0 part
│ └─md0   9:0    0   3.4T  0 raid5 
└─sdd2    8:50   0   113G  0 part
  └─md1   9:1    0 225.9G  0 raid5