LVM, Virtual Volume Management

I just got back from renewing my training experience with Red Hat Linux.

It's always full of new stuff, and this time it included details about RHEL 7.0.

We haven't fully adopted RHEL 7.0 yet, but it's on the horizon. If RHEL 5.0 were Windows XP, then RHEL 6.0 would be Windows 7 and RHEL 7.0 would be Windows 8 or 10. There is so much change.

But one of the things that hasn't changed is the use of LVM.

LVM stands for Logical Volume Management, and I had a revelation.. one I have not had before. I know not everyone 'gets it' and some even fear it. But after this 'new point of view' you might like it too.

I was sitting there in class and noticed that LVM parallels what we do with fdisk/gdisk, mkfs, and so forth. It was like the older partitioning tools were handling the 'Physical Volume Management' or 'PVM', while the 'LVM' tools were handling dicing up the Virtual Volumes that were created from the 'Physical Volumes'.

It's not really quite as simple as that.. but it forced a tunnel-vision focus.. that really LVM was just a terribly poorly named system of tools for performing this 'Virtual Volume Management' function.

LVM, truth be told, is not its full name; the current implementation is actually LVM2, which replaced an earlier LVM1. (IBM also fielded a separate project in this space, EVMS, the 'Enterprise Volume Management System'.)

Enterprise-grade volume management was needed because the 'Enterprise' could not afford down time, and needed to speed up maintenance: replacing drives, or extending or shrinking file systems, on a live system.

This is critically important in Server class hardware systems, but why is it also useful to Desktop users?

Because the same abstraction allows re-swizzling, reallocating more or less drive space to 'Volumes' which can contain either the entire '/' root file system or any compartmentalized portion of it, like a /home branch.

It's also really nice when drive sizes outstrip the ability of the BIOS or a particular version of UEFI to access a block on a physical disk, needing a shim or driver provided by the hardware vendor to make it accessible.

The crux of the 'learning curve' for newbies, however, seems to be a pathological 'need' by instructors to make it sexy or 'include more stuff'.. usually tacking MDRAID (synthetic software RAID) or more resilient journaled file systems onto the discussion. It's distracting and confusing, blending the information into a monolithic mess that leaves a lot of smart people thinking those things are inherent, part and parcel of the LVM system. They are not. They may sit on LVM to some degree.. but only as much as a file depends on a file system (any file system) for storage blocks.

So what is LVM ?

Put simply, it's fdisk for Virtual Volumes.. or fdisk for Volume Groups.

You see, to create a Virtual Volume (aka a Volume Group) you first need building blocks, called 'Physical Extents' or 'PEs'; in physical-disk terms they play the role of 'blocks' of storage. A whole disk, an MBR partition, or a GPT partition.. it doesn't matter.. is first initialized as a 'Physical Volume' (PV), and each PV is then carved up into equal-sized PEs. It's an abstraction of the 'Physical Blocks on Disk' to virtual 'Physical Extents'.

Once a whole disk or partition has been initialized as a Physical Volume, its PEs are then 'used' to compose or build a 'Volume Group'. (And you can 'size' these PEs independent of the block sizes on the physical disks underneath this abstraction: at vgcreate time you can define the PE size in bytes.)

A Volume Group then is like a 'Virtual Hard Disk' which, like a Thanksgiving turkey, needs to be carved up into smaller Virtual Partitions before you can use them, or mount them in your operating system. Those Virtual Partitions are then called not 'Logical Partitions' but 'Logical Volumes'.. I can hear you scream.. why.. but why? Are they not logically called 'Logical Partitions'? Well, that's because that term is already used down in the cellar of Physical Volume Management.. and we would not want to get confused using that term again. An MBR disk can have four and only four Primary Partitions; after that you cannot create any more.

So planning ahead you can use one of your remaining Primary partition slots to make a special 'Extended' partition.. which is never actually addressed.. except as a pointer to a chain of 'Logical Partitions' underneath. These MBR Logical Partitions have absolutely nothing to do with LVM's 'Logical Volumes'.

Sooo.. waay up topside.. on top of the Virtual Volumes (the Volume Groups).. the carved-out pieces are called 'Logical Volumes'.. ironic.. irritating.. and confusing. (They're [partitions] for cotton pick'n sake..!)

Let me state that again...

A Virtual Partition (carved out of a Volume Group.. which "really" is the Virtual Volume) is called a Logical Volume.. grrr.

The tools for performing this magic are exceedingly simple.. but hard to remember until you master their names.. and reasons for the choice of their names.. even if that reason is rather obscure and never really discussed.

First the Physical Volumes (which supply the PE building blocks) are "made" using the pvcreate tool. (Why pvcreate and not pecreate? Because the tool initializes the whole Physical Volume; the PEs inside it are carved out later, by the Volume Group.)

Then the Volume Groups (the virtual hard drives or "volumes") are "made" using the vgcreate tool.

Finally the Logical Volumes (the virtual hard drive "partitions") are "made" using the lvcreate tool.

1. pvcreate - "pv create" - makes the suppliers of virtual "Legos" or "virtual disk storage blocks"
2. vgcreate - "vhdd create" - makes "virtual hard drives"
3. lvcreate - "lpart create" - makes "virtual (logical) partitions"
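As a hedged sketch of the three steps above (the device names /dev/sdb1 and /dev/sdc1 and the vg1/lv1 names are hypothetical, the commands are destructive, and all of them need root):

```shell
# 1. Initialize two spare partitions as Physical Volumes (the PE suppliers):
pvcreate /dev/sdb1 /dev/sdc1

# 2. Pool their PEs into a Volume Group (the "virtual hard drive").
#    -s sets the PE size, here 8 MiB instead of the 4 MiB default:
vgcreate -s 8M vg1 /dev/sdb1 /dev/sdc1

# 3. Carve Logical Volumes (the "virtual partitions") out of the pool:
lvcreate -n lv1 -L 20G vg1        # 20 GiB, by size
lvcreate -n lv2 -l 100%FREE vg1   # or by extents: all remaining space
```

Note that -L takes a size while -l takes an extent count or percentage; both are just different ways of asking for some number of PEs.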

Each tool has a corresponding sister tool, "xxdisplay", to inspect the results and keep track of the "Virtual environment":

1. pvdisplay - "pv display"
2. vgdisplay - "vhdd display"
3. lvdisplay - "lpart display"
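Each display tool dumps the metadata for its own layer. A sketch, reusing the hypothetical vg1/lv1 names from earlier (read-only, but still root-only):

```shell
pvdisplay /dev/sdb1     # PE size, total/free PEs on this Physical Volume
vgdisplay vg1           # VG size, PE count, free extents left in the pool
lvdisplay /dev/vg1/lv1  # LV size, extent count, device path

# The terser report tools give one line per object:
pvs
vgs
lvs
```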

Now once these "virtual volume (logical) partitions" are created they can be accessed via /dev/<vg>/<lv> or /dev/mapper paths just like physical hardware.

And the same tools used for creating a file system on physical hardware can be used on Logical Volumes. mkfs can lay down a fresh XFS file system, and it will be handled by the kernel's xfs driver just like a file system on a physical device.

# mkfs -t xfs /dev/vg-group1/lv-volume1

(think of it "like" )

# mkfs -t xfs /dev/vhdd1/lp1


# mkfs -t xfs /dev/vg1/lv1

Then the mount command or the /etc/fstab file can be used to attach the new device and connect it to a mount point on the current file system.
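Putting those last two steps together, a hedged sketch (vg1/lv1 and the /srv/data mount point are hypothetical; needs root):

```shell
# Lay down a file system on the Logical Volume and mount it:
mkfs -t xfs /dev/vg1/lv1
mkdir -p /srv/data
mount /dev/vg1/lv1 /srv/data

# Or make it permanent with an /etc/fstab entry:
echo '/dev/vg1/lv1  /srv/data  xfs  defaults  0 0' >> /etc/fstab
mount -a    # sanity-check the new entry before the next reboot finds it
```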

Anything that happens below the "virtual volume" or "volume group" layer is hidden, transparent to the overlaid pavement of the Logical Volume (aka the virtual volume 'logical partition') and its file system. This is the 'Enterprise quality' feature which desktop users can also use.

If we need to add more space to a full Logical Volume file system.. we can simply add a hard disk, turn it into more PEs with the pvcreate command, add those to the Volume Group with vgextend, then use an LV tool called "lvextend" to make the partition "bigger" while the file system is being used (with lvextend -r, or a follow-up xfs_growfs, the file system grows along with it).. all without backing up the contents, resizing the file system, and restoring the files (a lot of maintenance down time).
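A sketch of that grow-on-the-fly sequence (the new disk /dev/sdd is hypothetical; needs root):

```shell
# A new disk arrives as /dev/sdd. Turn it into PEs and pool them:
pvcreate /dev/sdd
vgextend vg1 /dev/sdd

# Grow the LV and its mounted file system in one step; -r invokes
# the matching resizer (xfs_growfs for XFS) after extending the LV:
lvextend -r -L +50G /dev/vg1/lv1
```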

Likewise, if we need to "remove" or "replace" a disk (perhaps it failed, is failing, S.M.A.R.T. tells us it's expected to fail, or some other reason) we can use pvmove (it stands for 'Physical Volume move'.. and the name fits, since it migrates every in-use PE off the named Physical Volume) to clean out all of the PEs from one disk or partition that is part of a Volume Group.. without notifying the upper layers, like the LV or file system.. or user. This "frees up" the physical hard disk or partition (vgreduce then drops it from the Volume Group) and we can take it out of service and replace it. All while the system is running.
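The evacuation sequence, sketched with the same hypothetical names (root required; pvmove needs enough free PEs elsewhere in the group to receive the data):

```shell
# /dev/sdb1 is failing. Migrate its in-use PEs to free PEs on the
# other Physical Volumes in the Volume Group, live:
pvmove /dev/sdb1

# Drop the now-empty Physical Volume from the group and retire it:
vgreduce vg1 /dev/sdb1
pvremove /dev/sdb1
```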

The major difference between 'Enterprise' and 'Desktop' is really in the details of whether 'While the system is running' means 'hot' as in 'Live to the world' or 'warm' as in 'Being used but can be rebooted to perform some quick task then back to service'. The game is to minimize system unavailability.

MD RAID ("multiple devices" RAID, aka software RAID) can use LVs just like real physical disks or physical partitions to create fast RAID 0 stripes, resilient fail-safe RAID 1 mirrors, or anything in between.. and the reverse stacking, LVM on top of MD, is just as workable. But neither one requires the other.

LVM can also do nice things like make Copy on Write or Snapshot images possible.. but those are not fundamental reasons or purposes for LVM to exist.
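For completeness, a snapshot is just another lvcreate, sketched here with the same hypothetical names (root required):

```shell
# Snapshot lv1; the 5G is the copy-on-write area that absorbs
# changes to the origin while the snapshot exists:
lvcreate -s -n lv1-snap -L 5G /dev/vg1/lv1

# Mount the frozen image for backup or inspection (nouuid because
# an XFS snapshot keeps the original's UUID), then discard it:
mount -o ro,nouuid /dev/vg1/lv1-snap /mnt
umount /mnt
lvremove /dev/vg1/lv1-snap
```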

Including obscure things like MD RAID, CoW, and journaled file systems ('sexy' though they are) in a newbie introduction simply flies over the important details of LVM and serves to confuse newbies about a very important tool that has become essential in daily life.

The terminology is a quagmire of a historical word swamp and does nothing to make it understandable.