Ubuntu: Migrate to Grub2


This howto documents a migration path from the way I used to partition my servers to my current partitioning scheme.

I used to use a USB flash disk to hold the /boot partition. Typically I would have a RAID-1 mirror with LVM occupying the entire block devices. While using an entire block device is possible, it leaves no space on the disk for a bootloader. Booting from USB had the advantage of keeping the kernel and initial ramdisk on a disk outside the RAID and LVM subsystems, which was important with grub-legacy because it did not understand LVM. Since Grub2 understands LVM and some RAID modes, it is now possible to keep everything inside the LVM, so we can ditch the USB disk requirement.

Make sure you have a backup before you attempt this procedure.

Collect Information

Get the RAID information

root@server:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sda[0] sdb[1]
     976762496 blocks [2/2] [UU]
     
unused devices: <none>

This tells us that we have a RAID-1 (mirror) across two disks, and that the array is md0. Since a RAID-1 is made up of two identical disks, we have the option of splitting the array, creating a new array with one of the disks, and then moving the LVM across to the new array.
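If you want more detail than /proc/mdstat gives (metadata version, per-device state and so on), mdadm can report on the array directly. This is an optional check, not part of the migration itself:

 root@server:~# mdadm --detail /dev/md0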

Get the LVM information

root@server:~# pvscan
 PV /dev/md0   VG lvmname   lvm2 [931.51 GiB / 714.51 GiB free]
 Total: 1 [931.51 GiB] / in use: 1 [931.51 GiB] / in no VG: 0 [0   ]

This tells us that the physical volume /dev/md0 belongs to a volume group named lvmname. The volume group is 931.51 GiB in size, but of that, 714.51 GiB is free space, so only about 217 GiB is actually allocated. The destination drive therefore needs to be at least 217 GiB.
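To double-check the allocated figure before moving anything, the standard LVM reporting tools show the same numbers in a more compact form; for example:

 root@server:~# vgs lvmname
 root@server:~# pvdisplay /dev/md0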


Split the RAID-1

Fail one of the disks. In this case it will be /dev/sdb.

root@server:~# mdadm --manage /dev/md0 --fail /dev/sdb

Remove the failed drive from the array

root@server:~# mdadm --manage /dev/md0 --remove /dev/sdb
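At this point md0 should be running degraded on /dev/sda alone. It is worth confirming this before doing anything to the removed disk:

 root@server:~# cat /proc/mdstat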

Create a new RAID-1 with the spare disk

Sanitise the removed disk by writing zeros to the first 1 MiB of the disk. This clears the old partition table.

 root@server:~# dd if=/dev/zero of=/dev/sdb bs=1M count=1
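Note that zeroing the first 1 MiB does not necessarily remove the old md superblock: with the 0.90 metadata format that arrays of this vintage typically use, the superblock lives near the end of the device. If the disk still shows up as a member of the old array, mdadm can remove the signature explicitly:

 root@server:~# mdadm --zero-superblock /dev/sdb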

Partition the removed disk. Create a single primary partition spanning the whole disk, and set its partition type to fd (Linux raid autodetect).

 root@server:~# fdisk /dev/sdb
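fdisk is interactive; the n command creates the partition and t sets its type. If you prefer a scriptable equivalent, sfdisk can do the same in one line. This is a sketch assuming a classic MBR disk label with the whole disk given to one partition:

 root@server:~# echo ',,fd' | sfdisk /dev/sdb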

Create a new RAID-1 with the removed disk

 root@server:~# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 missing
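The missing keyword creates the array in degraded mode, with the second slot left empty for /dev/sda once the data has been moved off the old array. Check that the new array has come up:

 root@server:~# mdadm --detail /dev/md1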