KVM Virtualization

Kernel-based Virtual Machine (KVM) is a popular virtualization solution supported by modern Linux kernels. It takes advantage of the CPU's hardware virtualization extensions (Intel VT-x and AMD-V). You can run unmodified operating systems such as Linux, FreeBSD, and Microsoft Windows using KVM. For more information, see the compatibility list.
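
Before installing anything, it is worth checking that the host CPU actually exposes these extensions; a quick check (the flag is vmx on Intel and svm on AMD) is:

egrep -c '(vmx|svm)' /proc/cpuinfo

A result greater than 0 means the extensions are present, although they may still need to be enabled in the BIOS/UEFI. On Ubuntu, kvm-ok from the cpu-checker package performs a similar check.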

For our Ubuntu 16.04 host, we have installed the following packages via apt (see https://help.ubuntu.com/lts/serverguide/libvirt.html); a one-line install command follows the list:

  • libvirt-bin
  • qemu-kvm
  • virt-manager
  • virtinst
  • virt-viewer
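
On Ubuntu 16.04, installing all of them boils down to something like:

sudo apt-get install libvirt-bin qemu-kvm virt-manager virtinst virt-viewer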

For a CentOS 7 host, install the following packages via yum (again, a combined install command follows the list):

  • libvirt
  • qemu-kvm
  • qemu-img
  • virt-install
  • virt-manager
  • libvirt-client
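
The equivalent one-liner on CentOS 7 would be roughly:

sudo yum install libvirt qemu-kvm qemu-img virt-install virt-manager libvirt-client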

For other Linux distributions, refer to their respective manuals.

You can create virtual machines from the command line once the proper packages are installed. For example, the following creates a CentOS 7 guest with 2 virtual CPUs and 4 GiB of RAM:

virt-install -n centos-test \
        --ram 4096 \
        --vcpus 2 \
        --metadata description='CentOS - test',title='CentOS - test' \
        --cdrom /usr/local/src/dists/CentOS/7/CentOS-7-x86_64-Everything-1511.iso \
        --os-variant centos7.0 \
        --disk path=/var/lib/libvirt/images/centos-test-storage0.qcow2,size=40,format=qcow2 \
        --network bridge=br0,model=virtio \
        --graphics spice

If you want a processor topology of two sockets with two cores each, you can specify that as:

        --vcpus sockets=2,cores=2

You can verify the resulting processor topology inside the guest with the lscpu utility, part of util-linux.
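
For example, to show just the topology-related fields:

lscpu | grep -E 'Socket|Core|Thread'

With the topology above, lscpu should report Socket(s): 2 and Core(s) per socket: 2.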

The --cdrom parameter points to the installation disc image and the --disk parameter points to the final installed OS image. In this example, we also specify a bridge (the network device br0 in this case) so that the guest appears on the local network like any other host. The --graphics parameter specifies Spice as the means of connecting to the VM console.
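
Once virt-install has started the guest, you can verify it from the host and attach to its Spice console; for example (the guest name matches the -n argument above):

virsh list --all
virt-viewer centos-test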

Naturally, you can also use a GUI (virt-manager) to create the VM, but the command line is more fun, isn’t it? 😉 The virt-install man page has the requisite information on how to use it, along with more examples.

 

Mounting Raw and qcow2 Images

Mounting Raw and qcow2 images in order to inspect and use them doesn’t have to be difficult. After searching the internet, we found a couple of recommendations on how to do it. Here is what we did ourselves on an Ubuntu 16.04 Linux host.

Mounting The Raw Image

Associate the raw image with a loop device:

losetup /dev/loop0 image.raw

Map the partitions inside the image to device-mapper devices:

kpartx -a /dev/loop0
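
The mapped partitions should then appear under /dev/mapper as loop0p1, loop0p2, and so on:

ls -l /dev/mapper/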

You should be able to mount the partitions now:

mount /dev/mapper/loop0p1 /mnt/t01

where /mnt/t01 is a previously-existing mount point or directory.
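
If the mount points do not exist yet, create them first (the names here are just placeholders):

mkdir -p /mnt/t01 /mnt/t02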

For LVM partitions, determine the volume group name and activate it:

vgscan
vgchange -ay vg_volgroupname
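
To list the logical volumes that the activated group contains, lvs (or lvdisplay) can be used, for example:

lvs vg_volgroupname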

Mount the desired logical volume:

mount /dev/mapper/vg_volgroupname-lv_logicalvolumename /mnt/t02

where /mnt/t02 is another pre-existing mount point or directory.

Unmounting The Raw Image

Unmount the previously mounted partitions:

umount /mnt/t02
umount /mnt/t01

Deactivate the volume group:

vgchange -an vg_volgroupname

Undo the mapping of the partitions to the loop devices:

kpartx -d /dev/loop0

Destroy the loop:

losetup -d /dev/loop0

Mounting The qcow2 Image

Here, we shall use QEMU's network block device (NBD) support to mount the qcow2 image.

First, load the nbd driver.

modprobe nbd max_part=63

Connect nbd to the image using qemu-nbd:

qemu-nbd -c /dev/nbd0 disk1.qcow2
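
The guest's partitions should now appear as /dev/nbd0p1, /dev/nbd0p2, and so on; to list the partition table, for example:

fdisk -l /dev/nbd0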

Once you have identified the partitions with fdisk, mount the regular Linux partitions as is:

mount /dev/nbd0p1 /mnt/t01

For LVM partitions, associate a loopback device to the LVM partition:

losetup /dev/loop0 /dev/nbd0p2

See the LVM partitions under /dev/mapper:

ls -l /dev/mapper

You should also be able to display the logical volumes with lvdisplay and the volume groups with vgdisplay. Use vgchange as above to activate the volume group.
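
For example (the volume group name is a placeholder, as before):

vgdisplay
lvdisplay
vgchange -ay vg_volgroupname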

Mount the desired logical volume as usual:

mount /dev/mapper/vg_volgroupname-lv_logicalvolumename /mnt/t02

Unmounting The qcow2 Image

Unmount the partitions from the qcow2 image:

umount /mnt/t02
umount /mnt/t01

Deactivate the volume group:

vgchange -an vg_volgroupname

Remove the loopback device:

losetup -d /dev/loop0

Disconnect the nbd device:

qemu-nbd -d /dev/nbd0

Finally, remove the nbd kernel module:

rmmod nbd

We have successfully used the above procedures in mounting and unmounting raw and qcow2 images used in Linux KVM.

The procedures described above have been adapted for this article from these URLs: