How to Set Up Software RAID 1 on an Existing Linux Distribution
In this tutorial, we'll be talking about RAID; specifically, we will set up software RAID 1 on a running Linux distribution.
What is RAID?
RAID stands for Redundant Array of Inexpensive Disks. RAID allows you to turn multiple physical hard drives into a single logical hard drive. There are many RAID levels, such as RAID 0, RAID 1, RAID 5, and RAID 10.
Here we will discuss RAID 1, which is also known as disk mirroring. RAID 1 creates identical copies of data: if you have two hard drives in RAID 1, data will be written to both drives, so the two drives always hold the same data.
The nice part about RAID 1 is that if one of your hard drives fails, your computer or server will still be up and running, because you have a complete, intact copy of the data on the other hard drive. If your hardware supports hot-swapping, you can pull the failed hard drive out while the computer is running, insert a new hard drive, and the mirror will be rebuilt automatically.
The downside of RAID 1 is that you don't get any extra disk space. If your two hard drives are both 1TB, the total usable capacity is 1TB, not 2TB.
Hardware RAID vs Software RAID
To set up RAID, you can either use a hardware RAID controller or create the array in software. A hardware RAID controller is a PCIe card that you put into a computer and connect your hard drives to. When you boot the computer, you will see an option that allows you to configure the RAID. You can install an operating system on top of hardware RAID, which can increase uptime.
Software RAID, by contrast, is created from within an operating system that is already installed, so it is best suited for storing data rather than for hosting the OS itself.
Basic Steps to Create Software RAID 1 on Linux
- First you need to have a Linux distribution installed on your hard drive. In this tutorial we will name it /dev/sda.
- Then you are going to grab two hard drives, which will be named /dev/sdb and /dev/sdc in this post. These two hard drives can be of different sizes. Remember to back up your existing data before formatting your hard drives.
- Next, we will create Linux raid autodetect partitions on /dev/sdb and /dev/sdc.
- Finally, we will create the RAID 1 array using the mdadm utility.
Step 1: Partition the Hard Drives
Insert the two hard drives into your Linux computer, then open up a terminal window. Run the following command to check the device names.
sudo fdisk -l
You can see mine are /dev/sdb and /dev/sdc.
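If you find the fdisk output verbose, lsblk gives a compact view of all attached drives; a sketch (the column list here is just one reasonable choice):
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT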
Then run the following two commands to create a new MBR partition table on each of the two hard drives. (Note: this is going to wipe out all existing partitions and data on these two hard drives. Make sure your data is backed up.)
sudo parted /dev/sdb mklabel msdos
sudo parted /dev/sdc mklabel msdos
You can create a GPT partition table by replacing msdos with gpt, but for the sake of compatibility this tutorial will create an MBR partition table. Keep in mind that MBR does not support drives of 2TB or larger; for those you must use GPT (see the reader comments below for the matching fdisk type code).
Next, use the fdisk command to create a new partition on each drive and set its type to Linux raid autodetect. First do this on /dev/sdb.
sudo fdisk /dev/sdb
Follow these instructions.
- Type n to create a new partition.
- Type p to select primary partition.
- Type 1 to create /dev/sdb1.
- Press Enter to accept the default first sector.
- Press Enter to accept the default last sector. This partition will span the entire drive.
- Type p to print information about the newly created partition. By default the partition type is Linux.
- We need to change the partition type, so type t.
- Enter fd to set the partition type to Linux raid autodetect.
- Type p again to check the partition type.
- Type w to apply the above changes.
Follow the same instructions to create a Linux raid autodetect partition on /dev/sdc.
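If you prefer a non-interactive route, parted can create the same partition and flag it for RAID in one pass; a sketch, assuming the MBR labels created earlier (run the same command for /dev/sdc):
sudo parted -s /dev/sdb mkpart primary 0% 100% set 1 raid on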
Now we have two RAID member partitions: /dev/sdb1 and /dev/sdc1.
Step 2: Install mdadm
mdadm is used for managing MD (multiple devices) devices, also known as Linux software RAID.
Debian/Ubuntu: sudo apt install mdadm
CentOS/Red Hat: sudo yum install mdadm
SUSE: sudo zypper install mdadm
Arch Linux: sudo pacman -S mdadm
Let’s examine the two devices.
sudo mdadm --examine /dev/sdb /dev/sdc
You can see that both are of type fd (Linux raid autodetect). At this stage there is no RAID set up on /dev/sdb1 and /dev/sdc1, which can be confirmed with this command.
sudo mdadm --examine /dev/sdb1 /dev/sdc1
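On blank partitions, mdadm should report that no RAID metadata exists yet, with output along these lines (the exact wording may vary by version):
mdadm: No md superblock detected on /dev/sdb1.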
Step 3: Create RAID 1 Logical Drive
Execute the following command to create the RAID 1 array. The logical drive will be named /dev/md0.
sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
Note: If you see this message: “Device or resource busy”, then you may need to reboot the OS.
Now we can check it with:
cat /proc/mdstat
You can see that md0 is active and is a RAID 1 setup. To get more detailed information about /dev/md0, we can use the command below:
sudo mdadm --detail /dev/md0
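The initial mirror sync can take a long time on large drives. To monitor its progress, you can refresh /proc/mdstat every couple of seconds:
watch -n 2 cat /proc/mdstat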
To obtain detailed information about each raid device, run this command:
sudo mdadm --examine /dev/sdb1 /dev/sdc1
Step 4: Create File System on the RAID 1 Logical Drive
Let's format it as ext4.
sudo mkfs.ext4 /dev/md0
Then create a mount point /mnt/raid1 and mount the RAID 1 drive.
sudo mkdir /mnt/raid1
sudo mount /dev/md0 /mnt/raid1
You can use this command to check how much disk space you have.
df -h /mnt/raid1
Step 5: Test
Now let's go to /mnt/raid1 and create a text file.
cd /mnt/raid1
sudo nano raid1.txt
Write something like
This is raid 1 device.
Save and close the file. Next, remove one of your drives from your computer and check the status of the RAID 1 device again.
sudo mdadm --examine /dev/sdb1 /dev/sdc1
You can see that /dev/sdc1 is not available. If we check /dev/md0, we can see that one RAID device has been removed.
sudo mdadm --detail /dev/md0
However, the text file is still there.
cat /mnt/raid1/raid1.txt
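If physically pulling a drive isn't practical, you can simulate the failure in software instead; a sketch using mdadm's manage mode, assuming /dev/sdc1 is the member you want to fail:
sudo mdadm --manage /dev/md0 --fail /dev/sdc1
sudo mdadm --manage /dev/md0 --remove /dev/sdc1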
To add the failed drive (in this case /dev/sdc1) back to the RAID, run the following command.
sudo mdadm --manage /dev/md0 --add /dev/sdc1
Then check the details again:
sudo mdadm --detail /dev/md0
We can see that the RAID is rebuilding data on /dev/sdc1, and you can check the rebuild progress (Rebuild Status).
Remember that if you use disk backup software such as Clonezilla, you need to restore data to the RAID logical drive, not the physical drive.
It's very important to save our RAID 1 configuration with the command below.
sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf
Output:
ARRAY /dev/md/0 level=raid1 num-devices=2 metadata=1.2 spares=1 name=xenial:0 UUID=c7a2743d:f1e0d872:b2ad29cd:e2bee48c devices=/dev/sdb1,/dev/sdc1
On some Linux distributions, such as CentOS, the config file for mdadm is /etc/mdadm.conf. You should run the following command to generate a new initramfs image after running the above command.
sudo update-initramfs -u
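Note that update-initramfs is specific to Debian/Ubuntu-family systems. On CentOS/Red Hat, the equivalent tool is dracut; a sketch, where the --force flag overwrites the existing initramfs for the running kernel:
sudo dracut --force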
To automatically mount the RAID 1 logical drive at boot time, add an entry to the /etc/fstab file like below.
/dev/md0 /mnt/raid1 ext4 defaults 0 0
You may want to use the x-gvfs-show option, which will let you see your RAID 1 in the sidebar of your file manager.
/dev/md0 /mnt/raid1 ext4 defaults,x-gvfs-show 0 0
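Device names like /dev/md0 are not guaranteed to be stable across reboots (a couple of readers below report the array coming back as /dev/md127), so mounting by UUID is more robust. A sketch, where the UUID value is a placeholder for whatever blkid reports on your system:
sudo blkid /dev/md0
UUID=your-uuid-here /mnt/raid1 ext4 defaults,x-gvfs-show 0 0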
How to Remove the RAID
If you don't want to use the RAID anymore, run the following command to stop and deactivate the RAID array.
sudo mdadm --stop /dev/md0
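For a complete teardown, unmount the file system before stopping the array, and afterwards wipe the RAID metadata from the member partitions so the kernel no longer assembles them at boot; a sketch, assuming the mount point and member devices used in this tutorial:
sudo umount /mnt/raid1
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1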
Then edit the mdadm.conf file and comment out the RAID definition.
#ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 spares=1 name=bionic:0 UUID=76c80bd0:6b1fe526:90807435:99030af9 # devices=/dev/sda1,/dev/sdb1
Also, edit the /etc/fstab file and comment out the line that enables auto-mounting of the RAID device.
Wrapping Up
I hope this tutorial helped you create software RAID 1 on Linux. As always, if you found this post useful, subscribe to our free newsletter or follow us on Google+, Twitter or like our Facebook page.
Comments
I already have Debian installed on one 128GB SSD and I already have another 128GB SSD in the box as well. Can I create a RAID 1 mirror using the current OS drive and the other blank drive, or will this process wipe out the OS drive?
This tutorial is not applicable to your situation; it will wipe out your OS drive.
Is there a way to enable RAID 1 if you already have the OS installed on one of the drives? I want to create a RAID 1 setup for my OS drive, but this tutorial will wipe that out, so how do you have the OS installed and then set up RAID 1?
Have you found out how to do this? There is literally no guide on this use case.
To boot off of a RAID, you need a RAID defined by a hardware RAID controller, not a software-defined one like this tutorial is for. A RAID's contents are not accessible without its RAID controller, and a controller that takes the form of software running within the OS's scope can't start before the OS does. You can't boot an OS off of a resource that requires that OS to already be running before the resource becomes available.
So how was this RAID built if it shows there is an OS in place?
Not true. You can have an OS booting on a soft RAID:
https://help.ubuntu.com/community/Installation/SoftwareRAID
There’s a guide here on how to set up RAID 1 from scratch on a new install: https://feeding.cloud.geek.nz/posts/installing-ubuntu-bionic-on-encrypted-raid1/
It’s fairly complicated but well explained.
I would expect you could dd your system volume over to a disk image on a backup drive and then dd it back after you had rebuilt your system-drive array using the above procedure. Though I guess your resulting filesystem will be slightly smaller than the original one, so that might not entirely work.
I would like to know the steps to RAID 1 two external 4TB drives. Thanks.
I just did this with two 8TB USB3 drives. I followed the procedure as given, except that I had to use `gpt` instead of `msdos` when issuing the partition table creation command with `parted`. This also meant that the partition type was `29` instead of `FD`.
Hi, great article, easy to read and follow. Thank you for this. Everything worked flawlessly, and I am quite proud of myself for being able to follow Linux CLI instructions where everything works without any hiccups, as I am new to Linux (Ubuntu) and the learning curve is challenging for someone with zero programming knowledge.
My question is now that my drives are working and they are mounted, how do I actually use them?
When I try to open the raid 1 folder on /mnt, it says I have no permission.
Also, instead of having to go to /mnt/raid1, how do I create a shortcut so it shows up in the file explorer on the left-hand side (like Computer, Downloads, Pictures, or when you plug in a USB stick and it shows up on the left-hand side)?
And finally, how do I rename it from raid1 to something else like “1TB NAS RAID1”?
If you can't access the raid1 folder, run the following command to grant read and write permission to your user account. Replace username with your real username.
sudo setfacl -R -m u:username:rwx /mnt/raid1
You can go to the raid1 folder in your file explorer and press Ctrl+D to bookmark this folder.
Once you have bookmarked this folder, you can rename it by right-clicking the name in the left sidebar.
I tried
sudo setfacl -R -m u:username:rwx /mnt/raid1
and got this in return:
setfacl: Option -m: Invalid argument near character 3
I checked man setfacl and -m is a valid option; I don't know why it didn't work.
I also tried
sudo setfacl -m u:username:rwx /mnt/raid1
and got
setfacl: Option -m: Invalid argument near character 1
I also tried without -R, same result… it doesn't like -m for some reason.
Replace username with your real username.
I enjoyed your article but have one question. If I want to create a RAID 0, what would I change? I assume it would be at this stage:
Step 3: Create RAID 1 Logical Drive
Execute the following command to create RAID 1. The logical drive will be named /dev/md0.
sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
Hi,
Love the article.
What if the OS crashes, how do you remount this RAID elsewhere?
Thx
Hi,
The article seems helpful and very detailed, but I notice that you state that software RAID "is created from within an operating system that is already installed." However, if I go forward with creating the RAID, will it wipe my OS drive? How were you able to create the RAID with an OS already installed? Was there a way you backed up the boot data or partition? Also, in another comment you stated that creating the RAID would wipe any existing OS. I see in the output from sudo fdisk -l that your sdb drive has a boot partition, so did it not wipe your whole drive after creating the RAID? I have not found another article this detailed; I feel I am closer to successfully creating the RAID, but this is a big hiccup. Please let me know any thoughts on this. Thank you!
Besides the OS drive, you need at least two other drives. The OS drive can’t be added to the RAID.
My /dev/sdb in this article was a bootable USB stick with a live Linux system on it.
Hello,
What should be used instead of update-initramfs in CentOS 6?
Thanks!
Hello,
I used this to set up three 4TB drives and it works great, and I have 3.6TB available. Thanks for this page and all you do.
Chuck
Excellent tutorial – best I have seen.
Having trouble, though. Creation of md0 does not survive a reboot. I think I see a reference to md127. I can't seem to get the mirror assigned back to md0 after a reboot, which hangs on trying to assemble md0. After I clear the lines out of fstab and mdadm.conf, I get a boot, but cannot re-establish md0. What am I missing?
Awesome, just perfect, thanks a million:)
Curious if it is safe for me to delete the lost+found directory that appeared in the array as soon as it was created?
Cheers
I tried to give this tutorial 5 stars but it crashed the browser. Three times. Weird.
There is an easy way to restore a partition to pre-RAID condition: use Gparted, delete the partition, then re-make the partition in the normal way.
Great guide, very clear and detailed. One minor thing that might be an issue is that apparently partition type 0xFD is deprecated and it’s recommended to use 0xDA when creating RAID partitions under MBR (unless, as I understand it, you’re not using initrd booting, which I don’t think has happened in a very long time). I guess 0xFD can be confusing for live distros? There’s more information here: https://raid.wiki.kernel.org/index.php/Partition_Types
It's also worth mentioning that anyone using a 2TB or larger drive will have to use GPT. In `fdisk`, you'll want to select partition type 29, as the codes are different for GPT.
For anyone seeking further information, the Arch guide to RAID is very informative: https://wiki.archlinux.org/index.php/RAID#GUID_Partition_Table
Ted, this comment is gold. I was banging my head against the desk as I couldn’t figure out why the fd thing wouldn’t work. The author needs to update this.
Wish I had read your comment before I began. I did guess that GPT 29 was best, but it’s great info.
After doing a fresh install of Linux Mint on a third disk, how do I make my existing two RAID 1 disks available again?
Thanks for the excellent guide. I have a desktop with Ubuntu 20.04.1 installed using the LVM option. The primary and extended partitions are on an 80GB SSD + 1TB HDD (sda & sdb). This works for me.
Following your tutorial, I set up a RAID 1 on two 1TB HDDs (sdc & sdd). It all works, except it's only accessible from root. In the file manager I must drill down to Other Locations -> Computer -> /mnt/raid1 to write or read files.
How can I get access from /home/"user"? Specifically, I want to write scheduled backup files to this RAID 1 from the LuckyBackup app. Right now the task will not execute because it doesn't have read/write permission.
chmod the folder to 777 or chown it to your user account.
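For example, a sketch of the chown approach, where username is a placeholder for your own account:
sudo chown -R username:username /mnt/raid1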
When creating your fstab entry, you might be better off using the RAID's name instead of the device name. Using the names from the example above, you would end up with:
/dev/md/xenial:0 /mnt/raid1 ext4 defaults 0 0
If you run into RAID issues, the md0 used in this example could change, causing your array to not come up even if everything is in place.
You could also use the UUID which is probably the best bet.
Excellent tutorial! Thank you so much! I've not used the mdadm command before now, and this process was straightforward and easy to follow. I was able to use the information here to create a RAID 1 with two 3TB HDDs, with no issue, for media backup. Thank you for this article and the time it took to make it!
This is an excellent guide thank you for putting it together. It was very helpful for me!
Hello, thanks very much for this article. It helped me a lot. However, I still may need some help:
When I type "cat /proc/mdstat" to check the RAID, I get this output: …"md127 : active"… (see also the picture attached), although I named the logical device correctly (md0) following your article. Stopping and removing md127, as well as removing the superblocks and starting the procedure from the beginning, led to the same result.
I noticed the problem because, after a reboot, the file raid1.txt was not there any more; there is no md0 any more (please see picture attached).
My aim is to have a PC with one HDD (OS, Linux Focal Fossa server) and two HDDs in a RAID 1 system.
I hope you can help me with that. Thanks.
I followed the steps and everything went well.
But when I remove one HDD from the computer, after a reboot I can't find the files under raid1.
I am not sure what I missed.
Some suggestions are appreciated.
Thanks
Hey, I have the same problem. Did you resolve it?
Great article… we have a customer interested in getting a script or utility that would enable them to easily format RAID-0 drives under Ubuntu 20.04, Ubuntu 22.04.1 LTS and Zorin (eventually). Would you be available to build such a script or utility as a (paid) consultant?
Wrong instructions in another tutorial made me waste a lot of time.
Yours worked on the first try.
Thanks, Xiao