I don't like partitions. MS-DOS partition tables or GUID Partition
Tables (GPT) alike. We use them because you always partition a disk,
right? We use them without understanding the ramifications. The MS-DOS
partition table was designed in 1981. We still blindly use it on almost
every machine today. Do we still surf the Internet with a modem designed
in 1981?
MS-DOS partition tables cannot handle drives of more than 2 TiB. Intel
designed GPT in the 1990s to deal with this and to support more than 4
(count 'em, 4) primary partitions. It's better, and it is now part of
the EFI standard. For booting your hardware, a partition table can be
quite handy, or even required.
However, I've long advised folks not to wrap partition tables around
their dedicated storage arrays. Growing storage arrays comes up multiple
times a week for me. An outage is required if there are partition
tables in use. A longer outage is needed if we have to convert the
array to GPT to cross the 2 TiB barrier. I have always advised folks to
use raw LVM physical volumes on their storage arrays; LVM handles all
the manipulation for us in a smooth and consistent way. No reboots are
needed either, as long as you instruct Linux to rescan the SCSI bus
after a resize:
echo 1 > /sys/class/scsi_device/DEVICE/rescan
Then grow the LVM physical volume:
pvresize /dev/DEVICE
Finally, you may resize the logical volumes and your file systems:
lvresize -l+100%FREE -r /dev/mapper/Volume00-foobar
No outage to handle growth in your dedicated storage array.
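The three steps above can be wrapped into one small script. Here is a
sketch, assuming hypothetical device names and adding a DRY_RUN guard
(on by default) so you can preview the commands before running them for
real:

```shell
#!/bin/sh
# Sketch of the no-outage growth procedure. Device names below are
# hypothetical -- substitute your own. DRY_RUN=1 prints the commands
# instead of executing them; set DRY_RUN=0 to actually run them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

grow_storage() {
    device="$1"   # SCSI address, e.g. 2:0:0:0
    pv="$2"       # raw physical volume, e.g. /dev/sdb
    lv="$3"       # logical volume, e.g. /dev/mapper/Volume00-foobar

    # 1. Ask the kernel to re-read the (now larger) disk's size.
    run sh -c "echo 1 > /sys/class/scsi_device/$device/rescan"
    # 2. Grow the physical volume to fill the disk.
    run pvresize "$pv"
    # 3. Grow the logical volume and its file system together (-r).
    run lvresize -l +100%FREE -r "$lv"
}
```

Call it as, say, `grow_storage 2:0:0:0 /dev/sdb
/dev/mapper/Volume00-foobar` after the array has been grown on the
storage side.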
With so much infrastructure based on VMs (virtual machines) like KVM or
VMware, we can take these concepts further. Growing a root file system
on a VM isn't hard and shouldn't require a Linux expert to handle. Many
cloud providers have tools that do this for their customers. Take into
account the following:
- VMware and other VM tools are fully capable of doing all of the
  storage virtualization that we Linux folks commonly do with LVM.
- Certain enterprise Linux distributions turn off the kernel's feature
  to re-read partition tables, for safety. In any case, the only way to
  safely grok a modified partition table is to reboot. We can
  manipulate file systems without an outage, so why do we need an
  outage just to deal with a partition table?
- Depending on the VM's partitioning and layout, it can require a large
  amount of skill to move and resize partitions to extend file
  systems. If the VM uses LVM, that helps.
- What if, by using a standard method of deployment, your worst-case
  scenario is that your VM guys extend the file space and the customer
  is told to run one or two commands as root? What if, by using
  standard methods of deployment, you could automate this process?
I've been attempting to build a better way for us to deploy VMs. Each
of these VMs has three virtual disks:
- 512MiB for /boot. This is partitioned with an MS-DOS table so the
machine can boot and the MBR is protected for the bootstrap
procedure. Rarely does /boot need to become larger.
- Swap. 2GiB. Vary according to your needs. Not partitioned! Raw
swap. Depending on memory load, resizing swap can be done online.
- 30GiB, more or less depending on your needs. Not partitioned. This
  is a raw LVM physical volume used to build the logical volumes for
  however you would like your file systems separated out.
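As a concrete sketch of that layout, the three backing disks for a KVM
guest could be created as sparse raw images (the file names here are
hypothetical, and qcow2 images via qemu-img work just as well):

```shell
#!/bin/sh
set -e
# Three virtual disks for one guest -- names are hypothetical.
truncate -s 512M myvm-boot.img  # disk 1: /boot, will get an MS-DOS table
truncate -s 2G   myvm-swap.img  # disk 2: raw swap, no partition table
truncate -s 30G  myvm-data.img  # disk 3: raw LVM physical volume, no table
```

Sparse files cost almost nothing until the guest actually writes data,
so the 30 GiB data disk can be generous up front.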
The Red Hat installer doesn't support this, so creating and using an
image as a template for your VM farm can be very handy. But whether you
use images or, like me, use Kickstart, you need to get the installer to
actually install this layout. Below are the relevant Kickstart snippets
that will install into the above configuration -- at least with RHEL 6.
Once the above is in place, it's a simple matter to grow the third
virtual disk and use pvresize and lvresize to extend the native
file system without an outage to your systems. (My devices here are
/dev/vda, /dev/vdb, and /dev/vdc.)
clearpart --drives vda --all
part /boot --size 1 --grow --ondisk vda
volgroup Volume00 vdc --useexisting
logvol / --size 8704 --fstype ext4 --vgname=Volume00 --name=root
logvol /tmp --size 2048 --fstype ext4 --vgname=Volume00 --name=tmp
logvol /var --size 7168 --fstype ext4 --vgname=Volume00 --name=var
# Clean up any possible leftovers...
vgremove -v -f Volume00
# Wipe any partition table
dd if=/dev/zero of=/dev/vdb bs=512 count=1
dd if=/dev/zero of=/dev/vdc bs=512 count=1
# Create an LVM Volume Group
pvcreate -y /dev/vdc
vgcreate Volume00 /dev/vdc
# Create swap device
mkswap /dev/vdb -L vmswap
swapon -L vmswap
# Setup swap space in fstab
echo "LABEL=vmswap swap swap defaults 0 0" >> /etc/fstab
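For completeness, in a Kickstart file those shell commands would
typically live in %pre and %post sections. The exact placement below is
my assumption, not gospel: the volume group must exist before the
installer partitions anything, and with RHEL 6 the %post script runs
chrooted into the installed system, so writing to /etc/fstab there does
the right thing.

```
%pre
# Runs before partitioning: clean up leftovers and build the raw PV/VG
# so that "volgroup Volume00 vdc --useexisting" can find it.
vgremove -v -f Volume00
dd if=/dev/zero of=/dev/vdb bs=512 count=1
dd if=/dev/zero of=/dev/vdc bs=512 count=1
pvcreate -y /dev/vdc
vgcreate Volume00 /dev/vdc
%end

%post
# Runs chrooted in the installed system: label the raw swap disk and
# add it to fstab so it comes up on first boot.
mkswap -L vmswap /dev/vdb
echo "LABEL=vmswap swap swap defaults 0 0" >> /etc/fstab
%end
```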