
How to Configure and Test Raid 10 On RedHat 7.6

  • 00:27 lsblk
  • 00:49 fdisk /dev/sdb
  • 01:29 fdisk /dev/sdc
  • 03:29 mdadm -C /dev/md0 -l 10 -n 4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  • 04:36 mkfs.ext4 /dev/md0
  • 05:04 mount /dev/md0 data/
  • 05:17 df -h
  • 06:09 mdadm /dev/md0 -f /dev/sdb1
  • 07:09 mdadm /dev/md0 -r /dev/sdb1
  • 07:46 mdadm /dev/md0 --add /dev/sdf1
  • 08:07 mdadm --detail /dev/md0

Introduction:

RAID 10, also known as RAID 1+0, is a RAID configuration that combines disk mirroring and disk striping to protect data. It requires a minimum of four disks and stripes data across mirrored pairs. This tutorial explains how to configure and test RAID 10 on RedHat 7.6.
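
For example, in a four-disk RAID 10 with the default near layout, the disks form two mirrored pairs and successive data blocks are striped across the pairs (the block-to-disk mapping below is illustrative):

Block A  ->  disk1 + disk2   (mirrored pair 1)
Block B  ->  disk3 + disk4   (mirrored pair 2)
Block C  ->  disk1 + disk2   (mirrored pair 1)
Block D  ->  disk3 + disk4   (mirrored pair 2)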

Configuration Process:

First and foremost, check the disk details:

root@linuxhelp:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  3.7G  0 part /swap
├─sda2   8:2    0    1K  0 part 
├─sda5   8:5    0  976M  0 part /boot
└─sda6   8:6    0 15.3G  0 part /
sdb      8:16   0   20G  0 disk 
sdc      8:32   0   20G  0 disk 
sdd      8:48   0    2G  0 disk 
sde      8:64   0    2G  0 disk 
sdf      8:80   0    2G  0 disk 
sr0     11:0    1  1.9G  0 rom  

Now I will create a partition on each of the sdb, sdc, sdd, sde, and sdf disks by executing the following command:

root@linuxhelp:~# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x4348c9fd.
Press n to create a new partition
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Press p to choose a primary partition
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Enter the size of the partition
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-41943039, default 41943039): +1G
Created a new partition 1 of type 'Linux' and of size 1 GiB.
Press t to change the partition type code
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.
Press w to write the partition table
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

I will follow the same process to create partitions on the other drives:

root@linuxhelp:~# fdisk /dev/sdc 
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
.
.
.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
root@linuxhelp:~# fdisk /dev/sdd 
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
.
.
.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
root@linuxhelp:~# fdisk /dev/sde 
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
.
.
.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
root@linuxhelp:~# fdisk /dev/sdf 
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
.
.
.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
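
Repeating the interactive fdisk dialog for every disk is tedious. As a hedged alternative, the same 1 GiB "Linux raid autodetect" partition can be created non-interactively with sfdisk (a sketch, assuming the recent util-linux shown in the fdisk banner above):

# Create one 1 GiB partition of type fd (Linux raid autodetect) on each disk
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    echo ',1G,fd' | sfdisk "$disk"
done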

After the partitions are created, list the disks to verify that they exist:

root@linuxhelp:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  3.7G  0 part /swap
├─sda2   8:2    0    1K  0 part 
├─sda5   8:5    0  976M  0 part /boot
└─sda6   8:6    0 15.3G  0 part /
sdb      8:16   0   20G  0 disk 
└─sdb1   8:17   0    1G  0 part 
sdc      8:32   0   20G  0 disk 
└─sdc1   8:33   0    1G  0 part 
sdd      8:48   0    2G  0 disk 
└─sdd1   8:49   0    1G  0 part 
sde      8:64   0    2G  0 disk 
└─sde1   8:65   0    1G  0 part 
sdf      8:80   0    2G  0 disk 
└─sdf1   8:81   0    1G  0 part 
sr0     11:0    1  1.9G  0 rom  

Now create the RAID 10 array using four of the partitions; /dev/sdf1 is kept aside and will be used as the replacement drive during the failure test later:

root@linuxhelp:~# mdadm -C /dev/md0 -l 10 -n 4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
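
The short options used above map to the long forms --create, --level, and --raid-devices, so the equivalent long-form command, plus a quick check of the kernel's view of the new array, would be:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat    # shows md0 with its four members and the initial resync progress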

After the RAID is created, format it with the ext4 file system:

root@linuxhelp:~# mkfs.ext4 /dev/md0
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 523264 4k blocks and 130816 inodes
Filesystem UUID: d24481bd-1966-4151-a056-094ac82115ab
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912
Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
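
Before mounting, the new filesystem can be verified with blkid; the UUID it reports should match the one printed by mkfs.ext4 above:

blkid /dev/md0    # prints the ext4 filesystem UUID of the array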

Now check the details of the RAID 10 array:

root@linuxhelp:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jan 21 11:50:08 2021
        Raid Level : raid10
        Array Size : 2093056 (2044.00 MiB 2143.29 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Jan 21 11:53:09 2021
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
              Name : linuxhelp:0  (local to host linuxhelp)
              UUID : 99e1980b:6f9f682f:c3cda362:1c608272
            Events : 17
    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

Now create a directory on which to mount the RAID by executing the following command:

root@linuxhelp:~# mkdir data

After the directory is created, mount the RAID on it:

root@linuxhelp:~# mount /dev/md0 data/

Now check the disk usage to verify that the RAID is mounted:

root@linuxhelp:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           391M  1.7M  390M   1% /run
/dev/sda6        16G  6.8G  7.5G  48% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda5       945M  108M  773M  13% /boot
/dev/sda1       3.7G   15M  3.4G   1% /swap
tmpfs           391M   16K  391M   1% /run/user/1000
/dev/md0        2.0G  6.0M  1.9G   1% /root/data
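
Note that neither the array assembly nor this mount persists across a reboot by default. A minimal sketch for making both permanent, assuming RHEL's /etc/mdadm.conf path and reusing the filesystem UUID reported by mkfs.ext4 earlier:

# Record the array so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
# Mount it at boot via fstab, using the filesystem UUID
echo 'UUID=d24481bd-1966-4151-a056-094ac82115ab /root/data ext4 defaults 0 0' >> /etc/fstab
mount -a    # sanity-check the fstab entry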

After the RAID is mounted, add some files to the data directory to test the RAID 10 array:

root@linuxhelp:~# cd data/
root@linuxhelp:~/data# touch linuxhelp.txt

Now edit the file using the vim editor:

root@linuxhelp:~/data# vim linuxhelp.txt 

Now list the directory

root@linuxhelp:~/data# ls -la
total 28
drwxr-xr-x 3 root root  4096 Jan 21 11:56 .
drwx------ 5 root root  4096 Jan 21 11:56 ..
-rw-r--r-- 1 root root    21 Jan 21 11:56 linuxhelp.txt
drwx------ 2 root root 16384 Jan 21 11:53 lost+found

Now I will fail one of the drives in the RAID 10 array:

root@linuxhelp:~/data# mdadm /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

After that, check the RAID 10 details to verify that the drive is marked as failed:

root@linuxhelp:~/data# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jan 21 11:50:08 2021
        Raid Level : raid10
        Array Size : 2093056 (2044.00 MiB 2143.29 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Jan 21 12:11:28 2021
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
              Name : linuxhelp:0  (local to host linuxhelp)
              UUID : 99e1980b:6f9f682f:c3cda362:1c608272
            Events : 21
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
       0       8       17        -      faulty   /dev/sdb1
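
For scripting, a hedged sketch of a quick check that flags the degraded state shown above:

# Warn if md0 is running degraded (illustrative check, not a monitoring solution)
if mdadm --detail /dev/md0 | grep -q degraded; then
    echo "WARNING: /dev/md0 is degraded" >&2
fi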

After the drive has failed, remove it from the RAID 10 array:

root@linuxhelp:~/data# mdadm /dev/md0 -r /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0

Now list the disks by using the following command:

root@linuxhelp:~/data# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda       8:0    0   20G  0 disk  
├─sda1    8:1    0  3.7G  0 part  /swap
├─sda2    8:2    0    1K  0 part  
├─sda5    8:5    0  976M  0 part  /boot
└─sda6    8:6    0 15.3G  0 part  /
sdb       8:16   0   20G  0 disk  
└─sdb1    8:17   0    1G  0 part  
sdc       8:32   0   20G  0 disk  
└─sdc1    8:33   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid10 /root/data
sdd       8:48   0    2G  0 disk  
└─sdd1    8:49   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid10 /root/data
sde       8:64   0    2G  0 disk  
└─sde1    8:65   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid10 /root/data
sdf       8:80   0    2G  0 disk  
└─sdf1    8:81   0    1G  0 part  
sr0      11:0    1  1.9G  0 rom 

Now add one new drive to the RAID 10 array by executing the following command:

root@linuxhelp:~/data# mdadm /dev/md0 --add /dev/sdf1
mdadm: added /dev/sdf1
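
Once the new member is added, mdadm immediately starts rebuilding the missing mirror onto it. Progress can be followed, and waited on, with:

watch -n 1 cat /proc/mdstat    # live view of the recovery progress
mdadm --wait /dev/md0          # blocks until the rebuild has finished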

Now check the RAID details to verify that the drive has been added:

root@linuxhelp:~/data# mdadm --detail /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jan 21 11:50:08 2021
        Raid Level : raid10
        Array Size : 2093056 (2044.00 MiB 2143.29 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
       Update Time : Thu Jan 21 12:14:11 2021
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : near=2
        Chunk Size : 512K
Consistency Policy : resync
              Name : linuxhelp:0  (local to host linuxhelp)
              UUID : 99e1980b:6f9f682f:c3cda362:1c608272
            Events : 43
    Number   Major   Minor   RaidDevice State
       4       8       81        0      active sync   /dev/sdf1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

Now check the RAID directory to verify that the data is intact and not corrupted:

root@linuxhelp:~/data# ls -la
total 28
drwxr-xr-x 3 root root  4096 Jan 21 11:56 .
drwx------ 5 root root  4096 Jan 21 11:56 ..
-rw-r--r-- 1 root root    21 Jan 21 11:56 linuxhelp.txt
drwx------ 2 root root 16384 Jan 21 11:53 lost+found

Now list the disks to verify that the drives are part of the array and mounted:

root@linuxhelp:~/data# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda       8:0    0   20G  0 disk  
├─sda1    8:1    0  3.7G  0 part  /swap
├─sda2    8:2    0    1K  0 part  
├─sda5    8:5    0  976M  0 part  /boot
└─sda6    8:6    0 15.3G  0 part  /
sdb       8:16   0   20G  0 disk  
└─sdb1    8:17   0    1G  0 part  
sdc       8:32   0   20G  0 disk  
└─sdc1    8:33   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid10 /root/data
sdd       8:48   0    2G  0 disk  
└─sdd1    8:49   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid10 /root/data
sde       8:64   0    2G  0 disk  
└─sde1    8:65   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid10 /root/data
sdf       8:80   0    2G  0 disk  
└─sdf1    8:81   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid10 /root/data
sr0      11:0    1  1.9G  0 rom   

With this, the configuration and testing of RAID 10 on RedHat 7.6 comes to an end.


Frequently asked questions

Q

What is Raid 10?

A

RAID 10, also known as RAID 1+0, is a RAID configuration that combines disk mirroring and disk striping to protect data. It requires a minimum of four disks and stripes data across mirrored pairs. As long as one disk in each mirrored pair is functional, data can be retrieved. If two disks in the same mirrored pair fail, all data will be lost because there is no parity in the striped sets.

Q

How many disks, at minimum, are required to configure a RAID 10 setup?

A

You must have at least four hard disks to configure a RAID 10 setup.

Q

If one disk fails, can the data be recovered in a RAID 10 setup?

A

Yes. RAID 10 has no parity; the data is recovered from the mirror copy on the surviving disk of the pair.

Q

Which process does RAID 10 use to recover data in case of disk failure?

A

RAID 10 does not use parity or XOR. When a disk fails, reads are served from the surviving disk of the mirrored pair, and a replacement disk is rebuilt by copying the mirror's data onto it.

Q

Can I replace my old faulty disk with a new one in RAID 10?

A

Yes, you can replace it with a new one. Add the new disk to the array with mdadm, and it will automatically resync the data.
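
Tying this answer back to the commands used in the tutorial, a hedged sketch of the full replacement workflow (device names illustrative):

mdadm /dev/md0 --fail /dev/sdb1      # mark the faulty member as failed
mdadm /dev/md0 --remove /dev/sdb1    # hot-remove it from the array
mdadm /dev/md0 --add /dev/sdf1       # add the replacement; resync starts automatically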
