7/07/2014

KVM, enabling virtio after P2V


We run some virtual machines on Red Hat Enterprise Linux (RHEL) version 5 using KVM.

Kernel-based Virtual Machine (KVM) on Red Hat is managed by libvirt or other management tools, which configure, start, stop and maintain the virtual machines. It's an open specification, so there are many command line and GUI management tools available.
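
For example, with libvirt's own command line tool, virsh, you can list guests and dump their definitions (the guest name here is just illustrative):

# virsh list --all
# virsh dumpxml elm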

The tool we use is Convirture, which is a bit like oVirt or OpenStack in that a MySQL database backs a web console for overseeing the creation and management of common resources, such as shared storage and shared bridged network adapters, in order to support high-availability migrations of virtual machines between the nodes of a host cluster.



So when a virtual machine is created or provisioned, it can be installed using the native installer, or "adopted" by migrating it from a physical deployment on a dedicated server into a virtual machine. There are tools to assist with this, like disk2vhd for Windows or virt-p2v for Linux.

useful links:

virt-p2v and virt-v2v
summit talk on libguestfs virt-v2v

Inception -> Super nested KVM -> "Or how I learned to Not Fear the Matrix and Love the KVM"

After the original volumes are converted into raw or another acceptable virtual machine volume format, a virtual machine configuration, or set of definition files, can be created to wrap the volumes and support booting them as a new virtual machine.
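
For example, if the P2V tool produced a VHD image (as disk2vhd does), qemu-img can convert it to raw; the file names here are illustrative:

# qemu-img convert -f vpc -O raw elm.vhd elm-0.img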

Initially the virtual machine will only have the drivers included at the time of the original install on the physical server hardware. This means the virtual machine "emulation wrapper" must emulate hardware for which the original physical volumes happen to have drivers. These are often enough to get the virtual machine to boot, but they are not optimal for performance in the new virtual environment.

A common, relatively open alternative is virtio drivers.

M. Tim Jones:

"virtio was developed by Rusty Russell in support of his own virtualization solution called lguest. Linux is a hypervisor playground. Examples include the Kernel-based Virtual Machine (KVM), lguest, and User-mode Linux. Rather than have a variety of device emulation mechanisms (for network, block, and other drivers), virtio provides a common front end for these device emulations to standardize the interface and increase the reuse of code across platforms.
In full virtualization, the guest is unaware that it is being virtualized and requires no coding changes to work. Conversely, in paravirtualization, the guest operating system is aware that it is running on a hypervisor and includes code changes to make guest-to-hypervisor communications more efficient.
In the paravirtualization case the guest operating system is aware that it's running on a hypervisor, the coding changes are implemented by including virtio drivers that act as the interface between the guest and the hypervisor. The hypervisor implements back-end drivers to support the specialized guest device emulation. These "in-guest" and "in-hypervisor" drivers are where virtio comes in, providing a standardized interface for the development of optimized emulated device access to promote code reuse and increase efficiency.  "

Virtio drivers partner with a version of the Linux kernel that includes virtio support to bypass most of the emulation layers, providing near bare-metal speed to the virtio drivers operating in the virtualized operating system environment. Often a performance improvement of 10x or more can be obtained.

Virtio drivers for "block devices" or SCSI hard disk drives are available.

Virtio drivers for "net devices" or network adapters are available.

Taking the example of Windows 2008 R2: once it was booted and working under emulation, we shut it down and created a new disk volume, solely for the purpose of [triggering] a Plug-and-Play install of the virtio driver for the new disk volume.

First, to create the disk volume with our version of Convirt, we had to drop to the command prompt and execute the following command:

# qemu-img create -f raw elm-2.img 1G

This created a new 1 GB raw disk volume that we could attach to the virtual machine while it was shut down.

Next we modified the virtual machine [Virtual Machine Config Settings for ELM] by selecting [Edit Settings] and then the [Miscellaneous] option, so that we could add a virtual machine startup parameter that does not normally exist in most default virtual machine configuration templates.


The parameter to create has the following attribute and value, complete with the apostrophes and square brackets:

attribute:
drive

value:
['file=/vm_vol/vm_disks/elm-2.img,if=virtio,cache=writeback']

The content means nothing to the Convirture management system, but is understood by the qemu-kvm invocation that eventually takes place on the target host server to bring the virtual machine into existence.
file = designates the [raw] disk file volume

if = designates the interface or 'bus' the volume will be attached to; the host Linux kernel already understands this type of bus and makes it available to the virtual machine

cache = designates the behavior the host takes when the virtual machine attempts to perform a disk I/O operation; there are tradeoffs here, and if uncertain, do not include it
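
For reference, the string is passed through more or less verbatim, so the eventual qemu-kvm command line on the host contains something like this (the other options are omitted here):

# qemu-kvm ... -drive file=/vm_vol/vm_disks/elm-2.img,if=virtio,cache=writeback ...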

For the 'guest' operating system, in this case Windows 2008 R2 (which is 64-bit), a virtio driver will need to be obtained and made available to it once it boots, in order to add support for the if=virtio bus type.

The most conventional, if quaint, way of doing this is to provide an .iso file with the drivers included in the image.

You have to be careful to match the virtio client or 'guest' drivers to the virtio support in the host Linux kernel. Mismatches 'may' work; however, client drivers that are too far ahead of the Linux kernel they are demanding service from may overreach the current feature set of that kernel, resulting in an inaccurate report of driver compatibility, or a downright failure to service the client driver.

In other words, your Windows guest may "blue screen".

So, we are running RHEL v5.10 on this virtual machine host.

There are several sources of virtio client/guest drivers: Fedora, the linux-kvm.org project, or Red Hat if you have a current subscription. Even Microsoft offers links to them in the Windows Server Catalog.

In our case, since we have a current Red Hat subscription, we navigated to:

https://access.redhat.com/downloads/content/69/ver=/rhel---5/5.10/x86_64/packages

and typed "virtio" into the Filter box to arrive at a download link to the rpm:

virtio-win-1.0.3-0.52454.el5.noarch.rpm

Inside this rpm are many files, including

virtio-win.iso

which can be accessed either by extracting it with 7zip, cpio or rpm2cpio, or by simply installing the rpm on a host and then tracking down the iso file with an rpm -ql command.
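
For example, to get at the iso without installing the package (run from the directory holding the downloaded rpm):

# rpm2cpio virtio-win-1.0.3-0.52454.el5.noarch.rpm | cpio -idmv
# find . -name virtio-win.iso

Or, if the rpm has been installed on a host:

# rpm -ql virtio-win | grep -i iso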

Since this package is intended for guests running on the version of the kernel that comes with RHEL v5.10, we can be assured of maximum compatibility in our situation.

To get the contents into the guest, we attached the iso as a virtual cdrom, booted the virtual machine using normal procedures, and let the Plug-and-Play process attempt to run normally after it detected the new interface bus.

However..

It did not install the driver.

So we manually traversed the mounted cdrom iso image inside the virtual machine, located the correct driver for a 64-bit operating system and executed the installation program.


This installed the driver, and indeed opening the Device Manager and checking for a new SCSI device confirmed support for the Red Hat VirtIO SCSI bus.
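
If you prefer to confirm this from the guest's command line instead of Device Manager, wmic can list the SCSI controllers Windows currently sees; the new Red Hat VirtIO controller should appear alongside the emulated one:

C:\> wmic path Win32_SCSIController get Name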

However..

It did not bring the attached raw volume online.

To do that, the Windows 2008 Server Manager Storage MMC applet needed to be used to "browse" the attached storage and recognize that the volume had been detected but was "offline"; merely selecting the [tab] to the left of the drive volume and then "Online" brought it online.
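
The same can be done from the guest's command line with diskpart (the disk number here is an assumption; check the output of "list disk" first):

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk

If the disk is also flagged read-only, "attributes disk clear readonly" clears that as well.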


Mission accomplished, the virtual machine was shut down again so that the temporary volume could be removed from the virtual machine configuration, and the two existing [disks] could be converted to [drives] using the if=virtio type.


The completed [Miscellaneous] attribute/value pair is the following:

attribute:
drive

value:
['file=/vm_vol/vm_disks/elm-0.img,if=virtio,boot=on,cache=writeback', 'file=/vm_vol/vm_disks/elm-1.img,if=virtio,cache=writeback']
Take note: The first volume also has [ boot=on ], and the second drive comes up "offline" as well.

Take note: While MPIO is an option for iSCSI, it's not something to enable "accidentally" when moving from the "disk" type to the "drive" type in the virtual machine configuration. Be sure to [ remove ] the storage entries for the disk file volumes under the Convirture [ Storage ] section, or wherever appropriate in your virtual machine manager configuration.


After the first boot in this configuration, the second drive will need to be brought online manually.

After this first-boot "bring the second drive online manually" procedure, the second drive will be brought online automatically on subsequent boots.

Based on my previous experience with VirtualBox and other VM technologies, the cdrom emulation was left on the ATA/IDE bus.


Changing from the default emulation to using virtio for network adapters is a lot easier.

First, you can run the Microsoft .msi installer from the cdrom image to pre-install the drivers; then, when the emulation type of the hardware changes, Windows will simply install from the preinstalled pool of available drivers.

Next, you shut down the machine, navigate to the [Network] section of the settings and choose to modify (at least in Convirture) the [model] of the bridged network adapter. The drop-down provides many emulation types to choose from, including the [virtio] type.
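
Under the hood this again maps onto the qemu-kvm invocation; with the legacy -net syntax used by RHEL 5's qemu-kvm it comes out roughly as follows (the MAC address and tap script are illustrative, other options omitted):

# qemu-kvm ... -net nic,model=virtio,macaddr=54:52:00:12:34:56 -net tap,script=/etc/qemu-ifup ...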



Restart the virtual machine and the virtio driver will be installed to support the new hardware; the old network adapter and its TCP/IP settings will be gone.

However, if you didn't write the settings down, they will still be available for reference in the ControlSet keys of the Windows Registry.
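
One place to look, from an elevated command prompt inside the guest, is the Tcpip service parameters; for a statically configured adapter, each interface GUID subkey holds the old IPAddress, SubnetMask, DefaultGateway and NameServer values:

C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces /s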

When configuring the new adapter with the old TCP/IP settings, you may see a warning message that those settings are currently assigned to a "missing" network adapter, with an offer to remove them from that adapter so that they can be assigned to the new adapter. Generally it is okay to do this.

Benefits from virtio drivers include lower CPU utilization, faster and more responsive behavior from the operating system, and generally lower memory use.