22 October 2020 Reading time: 9 minutes

Fast, lightweight and secure: LXD infrastructure containers in VMmanager

Aleksander Grishin

Product Owner of VMmanager virtualization platform


In November, VMmanager will get a new virtualization type: LXD by Canonical. LXD (Linux Container Daemon) is a system container manager with an interface similar to that of virtual machines, built on top of Linux containers (LXC).

Although LXD is container virtualization, it is quite different from the well-known Docker: an LXD container carries a full-fledged Linux system "on board". The OS works as if it were installed on a virtual or physical server. LXD and Docker are not mutually exclusive: Docker containers can run inside an LXD container.
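For example, running Docker inside an LXD system container only requires enabling nesting. A sketch with the lxc client (the container name and image are illustrative):

```shell
# Launch an Ubuntu 20.04 system container with nesting enabled,
# which lets Docker create its own containers inside it.
lxc launch ubuntu:20.04 c1 -c security.nesting=true

# Install and use Docker inside the container, as on a regular server.
lxc exec c1 -- apt-get update
lxc exec c1 -- apt-get install -y docker.io
lxc exec c1 -- docker run --rm hello-world
```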

LXD is lightweight, fast and simple virtualization that allows you to:

  • Limit container resources "live"
  • Isolate containers and projects from each other
  • Keep cluster nodes secure
  • Use node resources more flexibly and efficiently
  • Provision virtual resources in a matter of seconds



A cluster is the main abstraction for building a virtual infrastructure. A cluster combines several physical servers with homogeneous network settings and local or network storage. VMmanager will support a new cluster type that includes servers running Ubuntu 20.04. At the initial phase, it will only support IP-fabric networks with route announcement via iBGP. Other network configurations and a second network interface will not be available yet.

An LXD-cluster in VMmanager platform

Local storage

At first, the new cluster type will only support ZFS storage. Containers and images will be stored in a ZFS pool, and local backups in a directory on the node. ZFS provides high-speed access to data, controls data integrity and fragmentation, and allows you to create very large storage pools.

To connect the node, you will need to:

  • Prepare a ZFS pool on the node (e.g. using the tools from the zfsutils-linux package) and specify its name in the interface.
  • Specify the path to a file directory on the node: backups and container images will be stored there.

To create a ZFS pool, you need a disk partition or an entire disk. In this example, a disk partition is used:

zpool create tank /dev/sda4

This creates a pool named tank. You can view the list of pools and their status with zpool list. Listing the datasets inside the pool with zfs list shows the containers. For example:

NAME                USED  AVAIL  REFER
tank/containers/c1  1.01G  8.99G  1.45G

This shows how much space is allocated and used. You can also mount a particular dataset on the node and view its files:

zfs mount tank/containers/c1

At a later point, we plan to add support for other local storage types, such as DIR and LVM, and, of course, network storage: iSCSI, FC and Ceph.


Connecting a node

To connect a node, you will need a server with Ubuntu 20.04. When you connect it, VMmanager will configure it automatically using Ansible:

  1. Install the packages:
    • lxd, stable version 4.6
    • nftables — for anti-spoofing and limiting TCP connections
    • bird2 — to announce routes via iBGP
  2. Enable the modules required for IP fabric
  3. Initialize LXD on the server
  4. Configure remote access to LXD with an asymmetric key pair
  5. After that, the LXD server can be managed from the master device, for example with pylxd (the endpoint below is a placeholder for the node's address):

     from pylxd import Client
     client = Client(endpoint='https://<node-address>:8443',
                     cert=('lxd.crt', 'lxd.key'))


At the first stage, the basic actions for container lifecycle management will be available:

  • Creation and deletion
  • Start/Stop/Restart
  • Changing password
  • Changing the resources
  • Changing the disk size
  • Creating a backup
  • Recovery from a backup
  • Cloning
  • Cold migration
  • Creating an image
  • Adding and deleting IPv4/IPv6 addresses
  • I/O and IOPS weights
  • Network weight
  • Limiting the number of TCP connections
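VMmanager drives these actions through the LXD API; the same operations can be sketched with the lxc command-line client (container and file names here are examples):

```shell
# Back up a container to an archive, then restore it from that archive.
lxc export c1 c1-backup.tar.gz
lxc delete c1 --force
lxc import c1-backup.tar.gz
lxc start c1

# Clone a container and create an image from it.
lxc copy c1 c1-clone
lxc publish c1 --alias c1-image --force
```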

At the second stage, we will add the following features:

  • Limiting CPU as a percentage
  • Limiting connections to the container for specific ports
  • Live migration
  • Adding/deleting the network interface
  • Console access to the container

Changes in fine settings:

  • CPU, I/O and network weights as a value from 0 to 10
  • The read/write limit will be set in IOPS or in Mbit/s
  • CPU emulation mode will not be available
  • A CPU usage limit as a percentage will be added

In LXD, changes to resources and fine settings are applied “live”, without rebooting the container.
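These fine settings map onto LXD instance configuration keys that can be changed on a running container; a sketch with the lxc client (the container name and values are examples):

```shell
# Weights from 0 (lowest) to 10 (highest priority).
lxc config set c1 limits.cpu.priority 5
lxc config set c1 limits.disk.priority 7
lxc config set c1 limits.network.priority 3

# CPU usage limit as a percentage.
lxc config set c1 limits.cpu.allowance 50%

# Read/write limits on the root disk device: bytes per second or IOPS.
lxc config device set c1 root limits.read 100MB
lxc config device set c1 root limits.write 500iops
```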

You will get an infrastructure container that is as lightweight, fast and flexible as Docker, but more durable, isolated and secure.

The difference between an LXD infrastructure container and a Docker container

Using LXD is convenient if your IT department issues and supports isolated virtual resources on a regular basis. The technology provides high performance and service security with excellent resource utilization.

Container settings

We use cloud-init to configure newly created containers at first start-up. It allows you to easily install packages, write files, configure the network, set a login and password, and adjust other parameters. Cloud-init runs at startup, so there is no need to perform additional actions from the node or run agents inside the container's operating system: you can simply use cloud-init-enabled images.

VMmanager uses Cloud-init in two cases:

  1. For initial container setup. For example, to create a user account and install the necessary packages.
  2. To configure the network when the container restarts. For example, to assign IP addresses to the interface, specify a default route, or perform migration.
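Both cases come down to passing cloud-init data to the container. A sketch of the first case with the lxc client (the package and user names are examples):

```shell
# Prepare cloud-init user-data: create a user and install a package.
cat > user-data.yml <<'EOF'
#cloud-config
users:
  - name: demo
    groups: sudo
    shell: /bin/bash
packages:
  - nginx
EOF

# Launch a container with this user-data; cloud-init applies it on first boot.
lxc launch ubuntu:20.04 c1 --config=user.user-data="$(cat user-data.yml)"
```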

In the future, this technology will help us to provide a user-friendly interface for implementing the Infrastructure-as-Code functionality and VDC in the product.


This configuration enables isolated public IP addresses, not bound to the infrastructure, for virtual machines running on hosts in a private network.

When a node is connected to a cluster, the platform installs bird2 and configures it almost the same way as in a KVM cluster. Only the location of the config file differs: in CentOS 8 it is /etc/bird.conf, and in Ubuntu 20.04 it is /etc/bird/bird.conf.
Read more about IP fabric
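A minimal sketch of what such a bird2 configuration might look like on a node (all addresses, interface names and AS numbers below are placeholders, not values the platform actually writes):

```
# /etc/bird/bird.conf (fragment)
router id 10.0.0.2;

# Announce a container's public /32 address via its host-side interface.
protocol static {
    ipv4;
    route 203.0.113.10/32 via "vnet0";
}

# iBGP session with the core router: the same AS on both sides.
protocol bgp core {
    local as 65000;
    neighbor 10.0.0.1 as 65000;
    ipv4 { export all; };
}
```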

The next step is to implement SDN using Open vSwitch. In the future, we will add support for simpler network schemes.

Operating systems

LXD is lightweight virtualization that shares the kernel of the host operating system. Therefore, only Linux can be used as a guest operating system.

At a later point, we plan to add LXD container images from the Canonical repository to the OS list. However, as a first solution we are considering the ispsystem repository with images tested by our DevOps engineers. This guarantees the operability of our customers' services, without any surprises.

An LXD template is small and ships with cloud-init, so it can be downloaded and deployed quickly. This allows the platform to provision a container in just a few seconds.

LXD-templates from ISPsystem repository

At later stages, it will be possible to connect any repository to an LXD cluster in VMmanager.

Container licensing

At present, we license VMmanager by the number of nodes and virtual machines. LXD containers will be licensed separately. This gives flexibility: clients pay only for what they really need.

Try LXD in VMmanager

In early November, we will release basic support for LXD containers.

LXD infrastructure containers will allow a more efficient use of the company’s physical resources. You will be able to:

  • Deploy infrastructure containers for your clients quickly;
  • Host more guest operating systems per node;
  • Arrange several local isolated environments, for example, for software development or testing;
  • Ensure security by isolating LXD containers from the physical infrastructure.

Try LXD virtualization as a fast and reliable alternative to Hyper-V, VMware or KVM. With VMmanager it is convenient: simply add an empty node with Ubuntu 20.04 and create containers.

LXD management in VMmanager does not require console skills: you can configure the cluster and manage containers in the graphical interface.

Your opinions are welcome in the comments section below.

Request VMmanager demo