How to configure RAID 1 on Debian 11.3

Introduction:

RAID 1 duplicates (mirrors) a set of data across two or more hard drives; a typical RAID 1 mirrored pair consists of two drives. Every write is sent to both the first drive and its mirror, so the drives always hold identical copies of the data. If one drive in a mirrored volume fails, the remaining drive continues to function as a single drive without any loss of information.
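Because every member holds a full copy, a mirror's usable capacity equals the size of its smallest member. A minimal shell sketch of that arithmetic (the 10 GiB figures match the two disks used later in this guide):

```shell
# RAID 1 usable capacity = size of the smallest member disk.
# Sizes below (in GiB) match the two 10 GiB disks used in this guide.
disk_a=10
disk_b=10
usable=$(( disk_a < disk_b ? disk_a : disk_b ))
echo "${usable} GiB usable out of $(( disk_a + disk_b )) GiB of raw disk"
```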

Installation Procedure:

Step 1: First, check the Debian version by using the below command

root@linuxhelp:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 11 (bullseye)
Release:        11
Codename:       bullseye

Step 2: Now list the disks by executing the below command

root@linuxhelp:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   10G  0 disk
sdb      8:16   0   10G  0 disk
sdc      8:32   0   60G  0 disk
├─sdc1   8:33   0   59G  0 part /
├─sdc2   8:34   0    1K  0 part
└─sdc5   8:37   0  975M  0 part [SWAP]
sr0     11:0    1 1024M  0 rom

Step 3: Install the prerequisite package mdadm by using the below command

root@linuxhelp:~# apt-get install mdadm
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:

  exim4-base exim4-config exim4-daemon-light gsasl-common libgnutls-dane0
  libgnutls30 libgsasl7 libmailutils7 libntlm0 mailutils mailutils-common
Suggested packages:
  exim4-doc-html | exim4-doc-info eximon4 spf-tools-perl swaks gnutls-bin
  mailutils-mh mailutils-doc dracut-core
The following NEW packages will be installed:
  exim4-base exim4-config exim4-daemon-light gsasl-common libgnutls-dane0
  libgsasl7 libmailutils7 libntlm0 mailutils mailutils-common mdadm
The following packages will be upgraded:
  libgnutls30
1 upgraded, 11 newly installed, 0 to remove and 66 not upgraded.
Need to get 5,671 kB/7,012 kB of archives.
After this operation, 12.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://deb.debian.org/debian bullseye/main amd64 mdadm amd64 4.1-11 [457 kB]
Get:2 http://security.debian.org/debian-security bullseye-security/main amd64 libgnutls-dane0 amd64 3.7.1-5+deb11u2 [395 kB]

Step 4: Check whether a RAID is already configured on the disks by using the below command

root@linuxhelp:~# sudo mdadm -E /dev/sd[a-b]
mdadm: No md superblock detected on /dev/sda.
mdadm: No md superblock detected on /dev/sdb.

Step 5: Now create a partition on /dev/sda by using the below command

root@linuxhelp:~# sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.36.1).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x918ce202.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.
Command (m for help): p
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x918ce202

Device     Boot Start      End  Sectors Size Id Type
/dev/sda1        2048 20971519 20969472  10G 83 Linux


Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): p
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x918ce202

Device     Boot Start      End  Sectors Size Id Type
/dev/sda1        2048 20971519 20969472  10G fd Linux raid autodetect
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
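The interactive dialogue above can also be scripted by piping the same keystrokes into fdisk. This is a hedged sketch, not part of the original procedure; the real invocation is left commented out because it requires root and rewrites the partition table:

```shell
# Keystrokes: n (new), p (primary), 1 (partition number), two empty lines
# (accept default first/last sector), t (change type), fd (Linux raid
# autodetect), w (write).
keys='n\np\n1\n\n\nt\nfd\nw\n'
# printf "$keys" | fdisk /dev/sda    # real run: root only, destructive!
printf "$keys" | head -n 1           # preview: the first keystroke sent
```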

Step 6: Now repeat the same steps to create a partition on /dev/sdb by using the below command

root@linuxhelp:~# sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xfeea4d66.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-20971519, default 20971519):
Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): p
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfeea4d66
Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 20971519 20969472  10G 83 Linux

Command (m for help): t
Selected partition 1

Hex code or alias (type L to list all): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.
Command (m for help): p
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VMware Virtual S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfeea4d66
Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1        2048 20971519 20969472  10G fd Linux raid autodetect
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Step 7: Verify the changes on both drives by using the same mdadm command:

root@linuxhelp:~# sudo mdadm -E /dev/sd[a-b]
/dev/sda:
   MBR Magic : aa55
Partition[0] :     20969472 sectors at         2048 (type fd)
/dev/sdb:
   MBR Magic : aa55
Partition[0] :     20969472 sectors at         2048 (type fd)

Step 8: Create the RAID 1 array by using the below command

root@linuxhelp:~# sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[a-b]1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
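To make the array assemble automatically at boot, its definition is usually persisted to /etc/mdadm/mdadm.conf. A sketch assuming Debian's default paths; the metadata version, name, and UUID below are the ones mdadm --detail reports in Step 10:

```shell
# Persist the array definition (sketch; both commands require root):
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u
# The appended line has roughly this shape:
conf_line='ARRAY /dev/md1 metadata=1.2 name=linuxhelp:1 UUID=6579951c:a2aaacab:d965eee9:69a389fc'
echo "$conf_line"
```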

Step 9: Now format the RAID array with the ext4 filesystem by executing the below command

root@linuxhelp:~# sudo mkfs.ext4 /dev/md1
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 2618880 4k blocks and 655360 inodes
Filesystem UUID: 2a2f680a-cd65-45f1-8854-a05d77dafbc4
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Step 10: Now list the RAID details by entering the below command

root@linuxhelp:~# sudo mdadm --detail /dev/md1

/dev/md1:
           Version : 1.2
     Creation Time : Wed Sep 21 00:06:41 2022
        Raid Level : raid1
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
       Update Time : Wed Sep 21 00:07:33 2022
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : resync
              Name : linuxhelp:1  (local to host linuxhelp)
              UUID : 6579951c:a2aaacab:d965eee9:69a389fc
            Events : 19
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
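For scripted monitoring, the State line of mdadm --detail is a convenient health signal. A minimal sketch, parsing a captured line rather than querying a live array (the real command is shown in the comment):

```shell
# On the real system (root): mdadm --detail /dev/md1 | grep 'State :'
detail='State : clean'   # captured from the output above, for illustration
status=degraded
case "$detail" in
    *clean*|*active*) status=healthy ;;   # a resyncing mirror also reports clean/active
esac
echo "$status"
```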

Step 11: To mount the md device permanently, first run the blkid command and copy the UUID of /dev/md1, then create the mount point directory by using the below commands

root@linuxhelp:~# sudo blkid
/dev/sdb1: UUID="6579951c-a2aa-acab-d965-eee969a389fc" UUID_SUB="044b528f-027a-7805-d466-b994433255de" LABEL="linuxhelp:1" TYPE="linux_raid_member" PARTUUID="cffac0df-01"
/dev/sda1: UUID="6579951c-a2aa-acab-d965-eee969a389fc" UUID_SUB="b91405b6-9beb-cb5d-6293-ab3329c4a0c8" LABEL="linuxhelp:1" TYPE="linux_raid_member" PARTUUID="024e9fca-01"
/dev/sdc1: UUID="b22d2091-b685-46fb-b748-67410799aa4a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="45aaaed3-01"
/dev/sdc5: UUID="54d4365d-9e75-43c7-ae26-5b707791c809" TYPE="swap" PARTUUID="45aaaed3-05"
/dev/md1: UUID="2a2f680a-cd65-45f1-8854-a05d77dafbc4" BLOCK_SIZE="4096" TYPE="ext4"
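The UUID can also be extracted programmatically. On the live system, `blkid -s UUID -o value /dev/md1` prints just the value; the sketch below parses a captured blkid-style line instead, so it runs anywhere:

```shell
# Parse the filesystem UUID out of a blkid-style line (illustration only;
# the line mirrors the /dev/md1 entry shown above)
line='/dev/md1: UUID="2a2f680a-cd65-45f1-8854-a05d77dafbc4" BLOCK_SIZE="4096" TYPE="ext4"'
uuid=$(echo "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```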

root@linuxhelp:~# mkdir /mnt/raid1

Step 12: Now open the /etc/fstab file in the vim editor and add an entry containing the copied UUID. Save and exit the file.

root@linuxhelp:~# vim /etc/fstab
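The entry added to /etc/fstab looks like the following (a sketch: the UUID is the one blkid reported for /dev/md1 above, and the mount point is the /mnt/raid1 directory created earlier):

```
# /etc/fstab — mount the RAID 1 array at boot
UUID=2a2f680a-cd65-45f1-8854-a05d77dafbc4  /mnt/raid1  ext4  defaults  0  0
```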

Step 13: Now list the current disk usage details by executing the below command

root@linuxhelp:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.5G     0  1.5G   0% /dev
tmpfs           293M  1.5M  291M   1% /run
/dev/sdc1        58G  7.3G   48G  14% /
tmpfs           1.5G     0  1.5G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           293M  104K  292M   1% /run/user/1000

Step 14: Mount all the entries in /etc/fstab and verify that the device mounted successfully by using the below command

root@linuxhelp:~# mount -av
/                        : ignored
none                     : ignored
/media/cdrom0            : ignored
/mnt/raid1               : successfully mounted

Step 15: Finally, list the disk details after the RAID configuration by using the below command.

root@linuxhelp:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda       8:0    0   10G  0 disk  
└─sda1    8:1    0   10G  0 part  
  └─md1   9:1    0   10G  0 raid1 /mnt/raid1
sdb       8:16   0   10G  0 disk  
└─sdb1    8:17   0   10G  0 part  
  └─md1   9:1    0   10G  0 raid1 /mnt/raid1
sdc       8:32   0   60G  0 disk  
├─sdc1    8:33   0   59G  0 part  /
├─sdc2    8:34   0    1K  0 part  
└─sdc5    8:37   0  975M  0 part  [SWAP]
sr0      11:0    1 1024M  0 rom   
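Once the mirror is in service, its redundancy can be exercised by deliberately failing one member and letting it resync. This is an optional sketch, not part of the original procedure; all commands require root:

```shell
# mdadm /dev/md1 --fail /dev/sdb1      # mark one member faulty
# mdadm /dev/md1 --remove /dev/sdb1    # remove it from the array
# mdadm /dev/md1 --add /dev/sdb1       # re-add it; the mirror resyncs
# cat /proc/mdstat                     # watch rebuild progress
```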

Conclusion:

We have reached the end of this article. In this guide, we walked you through the steps required to configure RAID 1 on Debian 11.3. Your feedback is most welcome.

FAQ
Q
Does Windows support RAID 1?
A
Today, Windows 10 supports three types of software RAID: RAID 0, RAID 1, and RAID 5 (the latter on Windows Server). Once you have decided on the RAID type, connect all the disks to the computer and boot the operating system to create the disk array.
Q
Is RAID 0 or 1 better?
A
RAID 0 offers the best performance and capacity but no fault tolerance. Conversely, RAID 1 offers fault tolerance but does not offer any capacity or performance benefits. While performance is an important factor, backup admins may prioritize fault tolerance to better protect data.
Q
What number of drives are needed for a RAID 1 volume?
A
A minimum of two hard drives is required to create and maintain a RAID 1 volume. Unlike some other RAID configurations, classic RAID 1 volumes are built from mirrored drive pairs, so an even number of drives is typically used.
Q
What does RAID stand for?
A
RAID originally stood for "redundant array of inexpensive disks"; industry manufacturers later redefined the acronym to stand for "redundant array of independent disks".
Q
What is RAID 1 and its use?
A
Disk mirroring, also known as RAID 1, is the replication of data to two or more disks. Disk mirroring is a good choice for applications that require high performance and high availability, such as transactional applications, email, and operating systems.