Virsh Primer
Virsh is the command-line interface for managing virtual machines through the libvirt library, which supports KVM, QEMU, Xen, VMware, LXC and more.
Virsh helps you manage the layer above physical hardware but below Kubernetes and Docker. Virtual machines managed with virsh behave exactly like physical machines, with their own storage, memory and so on. These machines can be migrated while they are running and can even be moved between sites.
The layer below virsh is the physical server layer, where moving a machine without powering it down means physically carrying the server, usually within the same location. Libvirt, by contrast, lets you move the memory and storage of a running virtual machine between physical machines. Docker and the layers above let you run containerized applications on top of the virtual machines.
In this module we will deal exclusively with the virtual machine layer. When you master this layer you will be able to run any number of machines across any number of physical servers, and to scale Kubernetes and Docker on top. Mastering the virtual machine layer lets you partition your physical hardware between multiple logical machines and decouple those machines from the hardware they run on.
Installing Virsh
To get started with virsh, you need to install it along with the necessary packages on your physical server.
# Install libvirt and KVM on Ubuntu/Debian:
sudo apt update && sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
# Install libvirt and KVM on CentOS/RHEL:
sudo yum install libvirt libvirt-python libguestfs-tools qemu-kvm
# Start and enable libvirt service:
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
# Verify installation:
virsh list --all
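Before going further it is worth checking that the host supports hardware virtualization at all. Inspecting /proc/cpuinfo works everywhere; on Ubuntu/Debian the kvm-ok helper from the cpu-checker package gives a more readable verdict:
grep -Ec '(vmx|svm)' /proc/cpuinfo   # non-zero means VT-x/AMD-V is available
sudo apt install cpu-checker
sudo kvm-ok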
Creating a disk
qemu-img create -f qcow2 /var/lib/libvirt/images/ubuntu-vm.qcow2 4G
You can always grow the disk later.
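To double-check the result, qemu-img info shows the virtual size as well as how much space the image actually occupies on the host:
qemu-img info /var/lib/libvirt/images/ubuntu-vm.qcow2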
Resizing a disk
qemu-img resize image.qcow2 +10G
This only resizes the disk image itself. You still need to grow the partition, the physical/logical volume and the file system inside the guest to make use of the new space. This can all be done without rebooting the guest system.
Here are a couple of commands for inspecting the available block devices and volumes:
lsblk
pvdisplay /dev/vda3
Then you can grow the partition, the volume and the file system like this:
growpart /dev/vda 3   # grow partition 3 to fill the disk
pvresize /dev/vda3    # make LVM aware of the larger partition
lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv   # grow the logical volume
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv               # grow the ext4 file system
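After resize2fs completes, the extra space is visible immediately; df and lsblk confirm it without a reboot:
df -h /          # the root file system should now show the larger size
lsblk /dev/vda   # partition and LV sizes should match the grown disk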
Creating a virtual machine
If you are going to create multiple virtual machines, I highly recommend that you go through the initial install process once and then simply copy the resulting qcow2 image to create the other machines.
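For example, once you have a finished base image and the template machine is shut off, cloning is just a file copy (the target name web-01.qcow2 is illustrative):
sudo cp /var/lib/libvirt/images/ubuntu-base.qcow2 /var/lib/libvirt/images/web-01.qcow2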
The first step to build a machine from scratch is to define a machine with our newly created qcow2 disk as the disk image and an Ubuntu Server ISO image as the CD-ROM.
<domain type='kvm'>
<name>ubuntu-base</name>
<memory unit='MiB'>2048</memory>
<os>
<type arch='x86_64' machine="pc-i440fx-2.9">hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/ubuntu-base.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name="qemu" type="raw"/>
<source file='/var/lib/libvirt/images/ubuntu-server.iso'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/var/lib/libvirt/images/nocloud.iso'/>
<target dev='hdd' bus='ide'/>
<readonly/>
</disk>
<interface type='network'>
<source network='devops'/>
<mac address='52:54:00:10:10:01'/>
<model type='virtio'/>
</interface>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<graphics type='vnc' port='-1' listen='0.0.0.0'/>
</devices>
</domain>
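Before defining the domain, you can sanity-check the XML against the libvirt schemas with virt-xml-validate, which ships with libvirt. The file name here assumes you saved the definition as ubuntu-install.xml, matching the define command used later:
virt-xml-validate ubuntu-install.xml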
The above definition also includes a cloud-init ISO that we create from a configuration file called user-data. Cloud-init is the mechanism by which Ubuntu Server can perform a largely unattended install.
Here is a user-data file:
#cloud-config
bootcmd:
- cat /proc/cmdline > /tmp/cmdline
- sed -i'' 's/$/ autoinstall/g' /tmp/cmdline
- mount -n --bind -o ro /tmp/cmdline /proc/cmdline
autoinstall:
version: 1
keyboard:
layout: se
variant: nodeadkeys
identity:
hostname: ubuntu-base
username: user
password: "generate pass using 'mkpasswd -m sha-256'"
ssh:
install-server: true
allow-pw: false
authorized-keys:
- your ssh public key
network:
version: 2
ethernets:
enp0s3:
dhcp4: false
addresses:
- 192.168.254.254/24
gateway4: 192.168.254.1
nameservers:
addresses:
- 1.1.1.1 # Cloudflare DNS by default
packages:
- openssh-server
late-commands:
- ['curtin', 'in-target', '--', 'sh', '-c', 'echo "GRUB_CMDLINE_LINUX_DEFAULT=\"splash quiet\"" >> /etc/default/grub']
- ['curtin', 'in-target', '--', 'sh', '-c', 'echo "GRUB_CMDLINE_LINUX=\"console=tty0 console=ttyS0,115200n8 rootdelay=60\"" >> /etc/default/grub']
- ['curtin', 'in-target', '--', 'sh', '-c', 'echo "GRUB_TERMINAL=\"console serial\"" >> /etc/default/grub']
- ['curtin', 'in-target', '--', 'sh', '-c', 'echo "GRUB_SERIAL_COMMAND=\"serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1\"" >> /etc/default/grub']
- ['curtin', 'in-target', '--', 'update-grub']
data:
autoinstall:
version: 1
interactive-sections: []
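The password field expects a hash rather than plain text. On Ubuntu/Debian the mkpasswd tool mentioned above ships in the whois package:
sudo apt install whois
mkpasswd -m sha-256   # prompts for a password and prints the hash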
You can use a placeholder IP address in the template, which you must then change to the real IP on every machine you clone from it. Pick a static default that you will always remember to change.
Store the user-data file in a directory such as cloud-init/nocloud/ along with an empty meta-data file. This directory will be packed into our cloud-init ISO.
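A minimal layout, matching the cloud-init/nocloud/ path used by the mkisofs command below (this assumes user-data is in your current directory):
mkdir -p cloud-init/nocloud
cp user-data cloud-init/nocloud/user-data
touch cloud-init/nocloud/meta-data   # must exist, but may be empty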
We can now boot into the automated install like this:
UBUNTU_VERSION=24.04
UBUNTU_ISO=/var/lib/libvirt/images/ubuntu-${UBUNTU_VERSION}-live-server-amd64.iso
sudo test -f ${UBUNTU_ISO} || sudo wget -O ${UBUNTU_ISO} \
https://mirror.zetup.net/ubuntu-cd/${UBUNTU_VERSION}/ubuntu-${UBUNTU_VERSION}-live-server-amd64.iso
sudo rm -f /var/lib/libvirt/images/ubuntu-base.qcow2
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/ubuntu-base.qcow2 8G
sudo mkisofs -o /var/lib/libvirt/images/nocloud.iso \
-V cidata -J -r cloud-init/nocloud/
[ "" = "$(virsh list --all | grep ubuntu-base)" ] || \
virsh destroy ubuntu-base || \
virsh undefine ubuntu-base 2>&1 > /dev/null || true
virsh define ubuntu-install.xml
virsh start ubuntu-base
virt-viewer -c qemu:///system ubuntu-base
Note that my user-data in cloud-init is configured to explicitly trigger an unattended install. This means that all data on the disk will be wiped each time the process is started.
Without the initial bootcmd override, the ISO will ask whether you are really sure that you want to run the autoinstall. This check exists specifically to avoid wiping a production system when booting from physical media.
Once the installation finishes, you will have a disk image with Ubuntu preinstalled and configured, which you can then copy to create more virtual machines.
With virsh you can destroy, undefine and redefine virtual machines, but virsh will never destroy the disk images by default.
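In other words, a typical teardown looks like this, and the qcow2 file survives it (unless you explicitly opt in to removal with flags such as --remove-all-storage):
virsh destroy ubuntu-base    # hard power-off; the disk image stays
virsh undefine ubuntu-base   # removes the definition; the disk image stays
ls -lh /var/lib/libvirt/images/ubuntu-base.qcow2   # still there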
You can now remove the CDROM from the definition and redefine the machine.
<memory unit='MiB'>2048</memory>
<os>
<type arch='x86_64' machine="pc-i440fx-2.9">hvm</type>
- <boot dev='cdrom'/>
<boot dev='hd'/>
</os>
<devices>
@@ -13,18 +12,6 @@
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
- <disk type='file' device='cdrom'>
- <driver name="qemu" type="raw"/>
- <source file='/var/lib/libvirt/images/ubuntu-server.iso'/>
- <target dev='hdc' bus='ide'/>
- <readonly/>
- <address type='drive' controller='0' bus='1' target='0' unit='0'/>
- </disk>
- <disk type='file' device='cdrom'>
- <source file='./nocloud.iso'/>
- <target dev='hdd' bus='ide'/>
- <readonly/>
- </disk>
<interface type='network'>
<source network='devops'/>
<mac address='52:54:00:10:10:01'/>
Note that QEMU/KVM MAC addresses all start with 52:54:00, followed by 3 bytes that are available for you to modify. Here is a small helper script for generating random MAC addresses for virtio network interfaces:
#!/bin/bash
# Generate a random MAC address for a virtio network interface.
# Set the OUI (Organizationally Unique Identifier) part of the MAC address.
# A first byte of the form x2 marks the address as locally administered and unicast;
# we use 52:54:00, the prefix QEMU/KVM assigns to virtio devices by default.
oui="52:54:00"
# Generate the last three octets of the MAC address with random values
octet4=$(printf "%02x" $((RANDOM % 256)))
octet5=$(printf "%02x" $((RANDOM % 256)))
octet6=$(printf "%02x" $((RANDOM % 256)))
# Combine them into a full MAC address
random_mac="$oui:$octet4:$octet5:$octet6"
echo "Random MAC address for virtio interface: $random_mac"
Our machine definition for the basic bootable Ubuntu machine will now be:
<domain type='kvm'>
<name>ubuntu-base</name>
<memory unit='MiB'>2048</memory>
<os>
<type arch='x86_64' machine="pc-i440fx-2.9">hvm</type>
<boot dev='hd'/>
</os>
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/ubuntu-base.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<interface type='network'>
<source network='default'/>
<mac address='52:54:00:8f:9d:83'/>
<model type='virtio'/>
</interface>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<graphics type='vnc' port='-1' listen='localhost'/>
</devices>
</domain>
Notice that this machine has both a serial console and a VNC connection. It also has SSH access with the public key installed through cloud-init. The VNC connection, which mimics a VGA screen over the network, is needed only if adding the console to the kernel parameters somehow did not work. The serial console is likewise useful for debugging.
We can now access this machine using one of several methods:
virsh console ubuntu-base
virt-viewer ubuntu-base
ssh 192.168.100.254
Default Network
If we now want our machine to have access to the external network and be able to reach, for example, apt repositories, we need to make sure that our network is properly set up.
Let's create a default network:
<network>
<name>default</name>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:b6:d6:94'/>
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.100' end='192.168.100.150'/>
</dhcp>
</ip>
</network>
This creates a NAT network: guests on this network cannot be reached directly from outside, but each guest can access the rest of the internet through the bridge interface. This is a consequence of using nat mode. We could instead use route mode to open the hosts on this network to incoming traffic, or bridge mode if we want the hosts to join, for example, the LAN bridge and appear exactly as though they were on the physical LAN.
We will use additional modes when we set up networking for the GitLab server, but for now let's keep things simple.
If there is already a network called default, then we can destroy that network first:
virsh net-destroy default
virsh net-undefine default
We can then create the network from the XML file by running:
virsh net-define net-default.xml
virsh net-start default
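To verify that the network is defined, active and carrying the expected configuration:
virsh net-list --all        # "default" should be listed as active
virsh net-dumpxml default   # prints the live XML, including the DHCP range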
When our Ubuntu machine with address 192.168.100.254 now tries to communicate with an external server, the packets will be translated through 192.168.100.1, which acts as the gateway into the outside world. Replies from outside are also sent to this bridge address and are then translated back to the local address of the host.
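From inside the guest you can confirm this behaviour: the default route should point at the bridge address, and outbound traffic should work even though nothing can connect in:
ip route show default   # expect: default via 192.168.100.1 ...
ping -c 3 1.1.1.1       # outbound packets are NAT-ed through the bridge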
Virtual CPUs
Generally, the simple vcpu directive works well:
+ <vcpu>6</vcpu>
The total vCPU count across all VMs can exceed the number of CPUs actually available on the host (over-provisioning). As long as the machines do not all need their CPUs at the same time, this has no significant effect on performance.
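To see what you are over-provisioning against, check the host's CPU resources first:
nproc            # logical CPUs available on the host
virsh nodeinfo   # CPUs, memory and topology as libvirt sees them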
If your builds require specific CPU features only present on the host CPU, you can enable CPU pass-through. This fixes, for example, the Fatal glibc error: CPU does not support x86-64-v2 error that may appear during Node.js builds.
+ <cpu mode='host-passthrough'/>
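After restarting the VM, you can confirm the change from inside the guest: with pass-through the reported CPU model should match the host CPU rather than a generic QEMU model.
lscpu | grep 'Model name'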
ACPI Shutdown
Note that the very basic machine above still has no ACPI support, so we cannot call virsh shutdown on it and expect it to shut down gracefully. To be able to shut the machine down gracefully (where Linux gets to stop all services before the machine is powered off), I use the following settings:
<os>
<type arch='x86_64' machine="pc-i440fx-2.9">hvm</type>
<boot dev='hd'/>
</os>
+ <features>
+ <acpi/>
+ </features>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <pm>
+ <suspend-to-mem enabled='no'/>
+ <suspend-to-disk enabled='no'/>
+ </pm>
We don't want the VM to be suspendable to either RAM or disk. We also want libvirt to restart the VM when we run the reboot command inside Linux.
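With ACPI enabled, you can test a graceful shutdown directly from the host:
virsh shutdown ubuntu-base   # sends an ACPI power-button event to the guest
virsh list --all             # the domain should reach "shut off" after a few seconds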
Time configuration
Next I configure the real-time clock (rtc) and the programmable interval timer (pit). The tick policy for rtc is catchup, meaning that if KVM misses ticks it will deliver them at a higher rate until it catches up. The delay policy for pit means that missed ticks are simply delivered late, at the normal rate.
+ <clock offset='utc'>
+ <timer name='rtc' tickpolicy='catchup'/>
+ <timer name='pit' tickpolicy='delay'/>
+ </clock>
Running on KVM
One thing you will notice if you start the original machine is that it runs as a plain qemu process. Since we want to make full use of hardware virtualization, we can switch our VM to kvm (qemu-kvm) by explicitly specifying the emulator parameter.
<devices>
+ <emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
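You can confirm which emulator the domain is actually running by looking at the host's process list, which shows the binary and its full command line:
ps -ef | grep [u]buntu-base   # the bracket trick keeps grep from matching itself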
Memory size and name
We can also adjust the memory size at this point. If you are running services or apps, you will want at least 4-8 GB of RAM for the VM.
<domain type='kvm'>
- <name>ubuntu-base</name>
- <memory unit='MiB'>2048</memory>
+ <name>your.domain.com</name>
+ <memory unit='MiB'>8192</memory>
<os>
The final base VM
<domain type='kvm'>
<name>ubuntu-base</name>
<memory unit='MiB'>2048</memory>
<vcpu>2</vcpu>
<os>
<type arch='x86_64' machine="pc-i440fx-2.9">hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
</features>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/ubuntu-base.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<interface type='network'>
<source network='default'/>
<mac address='52:54:00:8f:9d:83'/>
<model type='virtio'/>
</interface>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<graphics type='vnc' port='-1' listen='localhost'/>
</devices>
</domain>
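Assuming you saved this definition as ubuntu-base.xml (the file name is yours to choose), bringing the machine up follows the same pattern as before:
virsh define ubuntu-base.xml
virsh start ubuntu-base
virsh console ubuntu-base   # press Ctrl+] to leave the serial console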
Summary
This concludes our quick overview of virtualisation with virsh. If you need additional help with your virtualisation and infrastructure, feel free to contact me below.