Setting up RAID 6 on Linux

Leo Saavedra
Dec 6, 2020 · 5 min read

RAID 6 is an evolution of RAID 5: instead of a single distributed parity block per stripe, it stores two, so the array can survive the simultaneous failure of any two drives. That makes it somewhat more reliable than RAID 5, at the cost of one extra drive's worth of capacity.

https://raid.wiki.kernel.org/index.php/Linux_Raid

In this environment I am using a CentOS 7.9.2009 virtual machine with 12 additional 1 GB virtual drives, as you can see below:

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 12G 0 disk
├─sda1 8:1 0 4G 0 part [SWAP]
└─sda2 8:2 0 8G 0 part /
sdb 8:16 0 1G 0 disk
sdc 8:32 0 1G 0 disk
sdd 8:48 0 1G 0 disk
sde 8:64 0 1G 0 disk
sdf 8:80 0 1G 0 disk
sdg 8:96 0 1G 0 disk
sdh 8:112 0 1G 0 disk
sdi 8:128 0 1G 0 disk
sdj 8:144 0 1G 0 disk
sdk 8:160 0 1G 0 disk
sdl 8:176 0 1G 0 disk
sdm 8:192 0 1G 0 disk
sr0 11:0 1 1024M 0 rom

So we are going to set up RAID 6 using /dev/sd[b..m] devices.

In RAID 6, two drives' worth of capacity go to parity, so in this case we end up with a volume of ((12 × 1 GB) − (2 × 1 GB)) = 10 GB; in other words, we give up the capacity of two drives.

For more detail on how to calculate the final volume, you can use the RAID calculator at https://wintelguy.com/raidcalc.pl.
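The arithmetic generalizes: with N drives of equal size, RAID 6 yields (N − 2) drives of usable space. A quick sketch of the calculation, using the values from this setup:

```shell
# Usable RAID 6 capacity: (N - 2) x drive size.
# Values below match this article's setup (12 x 1 GB drives).
DRIVES=12
DRIVE_SIZE_GB=1
USABLE=$(( (DRIVES - 2) * DRIVE_SIZE_GB ))
echo "${USABLE} GB usable"   # prints "10 GB usable"
```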

Software needed

In order to work with software RAID on Linux, we have to install mdadm:

yum -y install mdadm

Partitioning the drives

Before partitioning the drives, always examine them to check whether a RAID superblock already exists on any of the disks:

for i in /dev/sd{b..m} ; do mdadm -E $i; done
mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sdc.
mdadm: No md superblock detected on /dev/sdd.
mdadm: No md superblock detected on /dev/sde.
mdadm: No md superblock detected on /dev/sdf.
mdadm: No md superblock detected on /dev/sdg.
mdadm: No md superblock detected on /dev/sdh.
mdadm: No md superblock detected on /dev/sdi.
mdadm: No md superblock detected on /dev/sdj.
mdadm: No md superblock detected on /dev/sdk.
mdadm: No md superblock detected on /dev/sdl.
mdadm: No md superblock detected on /dev/sdm.
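If any of those drives had shown an existing md superblock (for example, drives recycled from an old array), you could wipe it before reuse. This is destructive and assumes the data on the drive is disposable; /dev/sdX below is a placeholder:

```shell
# Remove a leftover md superblock from a recycled drive.
# WARNING: destroys the RAID metadata on the device.
# /dev/sdX is hypothetical -- substitute the actual device.
mdadm --zero-superblock /dev/sdX
```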

The next step is to create RAID partitions on /dev/sdb through /dev/sdm with the help of the following parted commands. Here, I will show how to create the partition on all drives in a single loop:

for i in /dev/sd{b..m}
> do
> parted $i mklabel gpt
> parted -a opt $i mkpart primary ext4 0% 100%
> parted $i set 1 raid on
> parted $i align-check optimal 1
> done

Information: You may need to update /etc/fstab.

Information: You may need to update /etc/fstab.

Information: You may need to update /etc/fstab.

1 aligned

[the same four messages repeat for each of the remaining 11 drives]

You can check parted(8) for the details.
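Before assembling the array, it is worth a quick sanity check that each drive now carries exactly one partition (sdb1 through sdm1):

```shell
# List the drives and their new partitions; each disk should show
# a single 'part' child before we build the array on top of them.
lsblk -o NAME,SIZE,TYPE /dev/sd{b..m}
```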

Creating the RAID 6

Now we are going to create the device /dev/md0 using the mdadm command, setting the RAID level with '--level=6' across the 12 partitions:

mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Then we have to check the RAID status. We have two options: read /proc/mdstat, or use the mdadm command:

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdm1[11] sdl1[10] sdk1[9] sdj1[8] sdi1[7] sdh1[6] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
10444800 blocks super 1.2 level 6, 512k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
[=>...................] resync = 6.1% (64896/1044480) finish=1.0min speed=16224K/sec

Using mdadm:

mdadm --query --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Dec 5 18:25:32 2020
Raid Level : raid6
Array Size : 10444800 (9.96 GiB 10.70 GB)
Used Dev Size : 1044480 (1020.00 MiB 1069.55 MB)
Raid Devices : 12
Total Devices : 12
Persistence : Superblock is persistent

Update Time : Sat Dec 5 18:26:12 2020
State : clean, resyncing
Active Devices : 12
Working Devices : 12
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : resync

Resync Status : 51% complete

Name : centos7-raid:0 (local to host centos7-raid)
UUID : de687cfb:aba5d485:46e333ba:9f7d723b
Events : 8

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
4 8 81 4 active sync /dev/sdf1
5 8 97 5 active sync /dev/sdg1
6 8 113 6 active sync /dev/sdh1
7 8 129 7 active sync /dev/sdi1
8 8 145 8 active sync /dev/sdj1
9 8 161 9 active sync /dev/sdk1
10 8 177 10 active sync /dev/sdl1
11 8 193 11 active sync /dev/sdm1

As you can see above, the volume is still resyncing, at 51% in this case.
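The array is already usable during the initial resync, but if you want to block until it finishes (for example, in a provisioning script), mdadm can wait for you; watch(1) is an alternative for interactive monitoring:

```shell
# Block until the initial resync/rebuild completes
# (returns immediately if the array is already clean).
mdadm --wait /dev/md0

# Or watch progress interactively, refreshing every 5 seconds:
# watch -n 5 cat /proc/mdstat
```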

Creating the filesystem

First we have to format the device /dev/md0. In this case I am going to use XFS, as follows:

mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=163200 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=2611200, imaxpct=25
= sunit=128 swidth=1280 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

Then we have to create the mount point:

mkdir -p /export/raid6

add it to /etc/fstab:

echo UUID=`blkid -s UUID -o value /dev/md0` /export/raid6  xfs discard,defaults 0 2 | tee -a /etc/fstab

and finally mount the filesystem:

mount /export/raid6
df -Th /export/raid6/
Filesystem Type Size Used Avail Use% Mounted on
/dev/md0 xfs 10G 33M 10G 1% /export/raid6
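One extra step worth doing before rebooting: record the array in mdadm's configuration file so it is assembled under the same /dev/md0 name at boot. On CentOS/RHEL the file is /etc/mdadm.conf (Debian-based systems use /etc/mdadm/mdadm.conf):

```shell
# Append the array definition to mdadm's configuration so it is
# assembled with a stable name at boot.
mdadm --detail --scan | tee -a /etc/mdadm.conf
```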

If you want to know how to replace a faulty drive, there is a good tutorial at https://www.redhat.com/sysadmin/raid-drive-mdadm.
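As a rough preview of what that tutorial covers, the replacement sequence with mdadm looks like the following; /dev/sdb1 stands in for whichever member actually failed:

```shell
# Mark the member as failed, then remove it from the array.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# After physically swapping the drive and partitioning it as before,
# add it back; the array rebuilds automatically.
mdadm /dev/md0 --add /dev/sdb1
```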
