How to create a software RAID 1 from an existing installation on a live system

If you have an existing installation on a disk drive (/dev/sda) that is not part of a RAID array, and a second, unused disk drive (/dev/sdb), you can migrate the installation to a software RAID 1 array while the OS is running. The following steps accomplish this:

1. Create RAID partitions (type fd, "Linux raid autodetect") on the second disk drive, then verify the layout:
fdisk -l /dev/sdb

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2432    19535008+  fd  Linux raid autodetect
/dev/sdb2            2433        2918     3903795   fd  Linux raid autodetect
/dev/sdb3            2919        3404     3903795   fd  Linux raid autodetect
/dev/sdb4            3405      121601   949417402+  fd  Linux raid autodetect
2. Create degraded RAID 1 arrays using only the second disk, one device per array (mdadm requires --force to create a single-device RAID 1):
mdadm --create /dev/md0 --level=1 --raid-devices=1 /dev/sdb1 --force
mdadm --create /dev/md1 --level=1 --raid-devices=1 /dev/sdb2 --force
mdadm --create /dev/md2 --level=1 --raid-devices=1 /dev/sdb3 --force
mdadm --create /dev/md3 --level=1 --raid-devices=1 /dev/sdb4 --force
(mdadm also accepts a "missing" placeholder, e.g. mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing, which creates the array as a two-device array from the start and makes the later --grow step unnecessary.)
3. Create the file systems and the swap area (md0 will hold /, md2 /tmp, md3 /var; md1 becomes swap):
mkfs.ext4 /dev/md0
mkfs.ext4 /dev/md2
mkfs.ext4 /dev/md3
mkswap /dev/md1
4. Mount the new file systems and synchronize the existing data onto them. /dev/md0 must be mounted first, so that the mount points and the proc/sys placeholders can be created inside it before the other arrays are mounted on top:
mount /dev/md0 /mnt
mkdir /mnt/tmp
mkdir /mnt/var
mkdir /mnt/proc
mkdir /mnt/sys
mount /dev/md2 /mnt/tmp
mount /dev/md3 /mnt/var
rsync -av --exclude="/mnt" --exclude="/proc" --exclude="/sys" / /mnt/
rsync -av /var/ /mnt/var/
rsync -av /tmp/ /mnt/tmp/
Note the trailing slashes on the rsync sources: /tmp/ copies the contents of /tmp into /mnt/tmp, whereas /tmp would recreate the directory itself as /mnt/tmp/tmp.
5. Install grub to the new disk, which holds the RAID arrays:
grub-install /dev/sdb --root-directory=/mnt/
6. Update the grub configuration files from inside a chroot:
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
update-grub
exit
7. Reference the new devices by UUID in /mnt/etc/fstab. Get the UUIDs with blkid:
~# blkid /dev/md0 
/dev/md0: UUID="64ea9000-3df6-4699-8c2e-df48f5c3835e" TYPE="ext4" 
~# blkid /dev/md1
/dev/md1: UUID="133dabc7-7957-462e-8b5e-1594ded31ec8" TYPE="swap" 
~# blkid /dev/md2
/dev/md2: UUID="a65dcd80-b8fc-49a6-835f-b06b930824b7" TYPE="ext4" 
~# blkid /dev/md3
/dev/md3: UUID="a7cbbd28-0617-4013-9870-714cf75334f4" TYPE="ext4"

cat /mnt/etc/fstab 
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc	/proc	proc	defaults	0	0
UUID=64ea9000-3df6-4699-8c2e-df48f5c3835e	/	ext4	defaults,errors=remount-ro	0	1
UUID=133dabc7-7957-462e-8b5e-1594ded31ec8	none	swap	sw	0	0
UUID=a65dcd80-b8fc-49a6-835f-b06b930824b7	/tmp	ext4	nodev,nosuid	0	2
UUID=a7cbbd28-0617-4013-9870-714cf75334f4	/var	ext4	defaults	0	2
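If you prefer not to copy UUIDs by hand, the UUID= references can be generated from the blkid output. A minimal sketch, made self-contained here by inlining two of the sample blkid lines from above; on a real system you would pipe in `blkid /dev/md*` instead:

```shell
# Turn blkid output lines into fstab-style UUID= references.
# The sample input below is copied from the blkid output above;
# replace it with real `blkid /dev/md*` output on a live system.
blkid_out='/dev/md0: UUID="64ea9000-3df6-4699-8c2e-df48f5c3835e" TYPE="ext4"
/dev/md2: UUID="a65dcd80-b8fc-49a6-835f-b06b930824b7" TYPE="ext4"'

# For each line, keep the UUID and note the source device in a comment.
fstab_refs=$(printf '%s\n' "$blkid_out" |
  sed -n 's/^\([^:]*\): UUID="\([^"]*\)".*/UUID=\2  # \1/p')
printf '%s\n' "$fstab_refs"
```

The mount point, file system type, and options still have to be filled in by hand, as in the fstab listing below.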
8. Unmount the RAID partitions, innermost mounts first:
umount /mnt/proc
umount /mnt/sys
umount /mnt/tmp
umount /mnt/var
umount /mnt
9. Reboot the server and select the second disk drive as the boot device in the BIOS.
After the server boots again, you should see the file systems mounted from the RAID devices:
mount
/dev/md0 on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/md2 on /tmp type ext4 (rw,nosuid,nodev)
/dev/md3 on /var type ext4 (rw)
10. Mirror the partition table from the new disk (/dev/sdb) to the old, first disk (/dev/sda). Note that this overwrites the existing partition table on /dev/sda:
sfdisk -d /dev/sdb | sfdisk /dev/sda
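Afterwards it is worth confirming that every partition in the copied table carries the RAID autodetect type (Id=fd). A sketch that checks a saved `sfdisk -d` dump; the dump lines below are illustrative samples matching the layout above, not captured output, and on a real system you would use `sfdisk -d /dev/sda` instead:

```shell
# Count partitions in an sfdisk dump that are NOT type fd (Linux raid autodetect).
# Sample dump inlined for illustration; on a real system use: sfdisk -d /dev/sda
dump='/dev/sda1 : start=       63, size= 39070017, Id=fd
/dev/sda2 : start= 39070080, size=  7807590, Id=fd
/dev/sda3 : start= 46877670, size=  7807590, Id=fd
/dev/sda4 : start= 54685260, size=1898834805, Id=fd'

# grep -c exits non-zero when the count is zero, hence the || true.
non_raid=$(printf '%s\n' "$dump" | grep -cv 'Id=fd' || true)
echo "partitions without Id=fd: $non_raid"
```

A result of 0 means all partitions are ready to be added to the arrays.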
11. Grow the RAID arrays to two devices each:
mdadm --grow --raid-devices=2 /dev/md0
mdadm --grow --raid-devices=2 /dev/md1
mdadm --grow --raid-devices=2 /dev/md2
mdadm --grow --raid-devices=2 /dev/md3
12. Add the partitions from the first disk to the RAID arrays:
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2
mdadm --manage /dev/md2 --add /dev/sda3
mdadm --manage /dev/md3 --add /dev/sda4
Now the arrays should start syncing:
cat /proc/mdstat 
Personalities : [multipath] [faulty] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [linear] 
md0 : active raid1 sda1[2] sdb1[0]
      19533912 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[2] sdb2[0]
      3902759 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[2] sdb3[0]
      3902759 blocks super 1.2 [2/1] [U_]
      	resync=DELAYED

md3 : active raid1 sda4[2] sdb4[0]
      949416242 blocks super 1.2 [2/1] [U_]
      [==================>..]  recovery = 91.0% (864771072/949416242) finish=21.4min speed=65886K/sec
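The sync can be watched with `watch cat /proc/mdstat`; for scripting, the progress figure can be extracted from the mdstat text. A small sketch, made self-contained by inlining the sample recovery lines from above:

```shell
# Extract the rebuild progress percentage from /proc/mdstat output.
# A captured sample is inlined so the snippet runs anywhere;
# on a real system use: mdstat=$(cat /proc/mdstat)
mdstat='md3 : active raid1 sda4[2] sdb4[0]
      [==================>..]  recovery = 91.0% (864771072/949416242) finish=21.4min speed=65886K/sec'

# On lines reporting a recovery or resync, print the field ending in %.
progress=$(printf '%s\n' "$mdstat" |
  awk '/recovery|resync/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }')
echo "$progress"
```

The finish estimate is consistent with the other figures in the sample: (949416242 - 864771072) blocks remaining at 65886K/sec is about 1285 seconds, i.e. roughly the reported 21.4 minutes.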
13. Finally, install grub on the first disk (/dev/sda) as well, so that the system can boot from either drive:
grub-install --recheck /dev/sda

This way you can migrate to RAID 1 with almost no downtime: just one server reboot and a change of the boot drive priority.