To use the High Availability features of Proxmox VE with consistency guarantees, we need an odd number of nodes in the cluster (at least three), but in my home laboratory I only have two nodes.
To solve this we will use what Corosync calls a QDevice to break ties and maintain quorum in the cluster. This will let our hypervisors keep operating under fault conditions that would otherwise cause outages. Using a QDevice is not recommended in production environments, but for our homelab we are going to use a Raspberry Pi as the QDevice.
Understanding the benefits
Proxmox VE uses the Corosync cluster engine in the background to communicate configuration changes between cluster nodes.
To keep nodes synchronized, Proxmox VE requires at least three nodes in the cluster, which may not be feasible in a home laboratory or test environment. In a two-node configuration, both nodes must always be operational for any change to take place, such as starting, stopping or creating virtual machines and containers.
To solve this, we can use a QDevice, whose sole purpose is to break ties while one of the nodes is down. The QDevice is not a visible component of the Proxmox VE cluster and cannot run any virtual machines or containers.
Some benefits of adding a QDevice to a cluster are:
- It allows changes on the surviving hypervisor while the other node of a two-node deployment is down.
- It breaks ties in deployments with an even number of nodes.
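The vote arithmetic behind these benefits is simple: Corosync grants quorum only to a strict majority of the configured votes. A quick sketch of that majority calculation (plain shell arithmetic, not Corosync itself):

```shell
#!/bin/sh
# Quorum is a strict majority of votes: floor(votes/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

# Two nodes: quorum of 2, so losing either node blocks all changes.
echo "two nodes:           quorum = $(quorum 2) of 2"
# Two nodes + QDevice: quorum is still 2, so one node may be down.
echo "two nodes + QDevice: quorum = $(quorum 3) of 3"
```

With only two votes the cluster needs both of them, which is exactly the limitation described above; the QDevice's third vote lets the surviving node plus the QDevice still form a majority.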
Create the QDevice
To create the QDevice I have used a Raspberry Pi 3 Model B with the Raspbian Lite distribution (here you can see how to install it). My Proxmox VE cluster is running version 6.3-6 with Corosync 3.
Installing dependencies on Raspberry Pi
We will execute the following commands on our Raspberry Pi:
sudo apt-get update
sudo apt install corosync-qnetd
sudo apt install corosync-qdevice
In my case the installation of the dependencies failed several times due to connection problems with the raspbian.raspberrypi.org repository. If the same thing happens to you, you can choose a new mirror from this list (https://www.raspbian.org/RaspbianMirrors/) and replace raspbian.raspberrypi.org inside the APT sources file.
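As an illustration, the host swap can be scripted with sed. Everything here is a sketch: mirror.example.org is a placeholder for a mirror chosen from the list above, and on a stock Raspbian image the sources file is usually /etc/apt/sources.list.

```shell
# Placeholder mirror; pick a real one from the RaspbianMirrors list.
NEW_MIRROR="mirror.example.org"

# Rewrites the repository host in an APT "deb" line.
swap_mirror() {
    sed "s|raspbian\.raspberrypi\.org|$NEW_MIRROR|g"
}

# On the Pi you would apply it in place, e.g.:
#   sudo sed -i "s|raspbian\.raspberrypi\.org|$NEW_MIRROR|g" /etc/apt/sources.list
echo "deb http://raspbian.raspberrypi.org/raspbian/ bullseye main" | swap_mirror
# → deb http://mirror.example.org/raspbian/ bullseye main
```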
We will also need to enable SSH access to the Raspberry Pi for the root user. For this we will do the following:
- We will create a password for the root user if we have not already done so:
sudo passwd root
- We will edit the file
/etc/ssh/sshd_config and add the line PermitRootLogin yes
- We will restart the Raspberry Pi and verify that we can connect to it over SSH as the root user
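The steps above can be condensed into a few commands. This is a sketch under two assumptions: a stock Raspbian sshd_config with no PermitRootLogin line already present, and systemd managing the ssh service, which lets us restart the daemon instead of rebooting.

```shell
# Set a password for root (prompts interactively).
sudo passwd root

# Allow root logins over SSH (appends to the stock sshd_config).
echo 'PermitRootLogin yes' | sudo tee -a /etc/ssh/sshd_config

# Restart the SSH daemon so the change takes effect.
sudo systemctl restart ssh
```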
With this our Raspberry Pi will be ready to join the cluster.
Create a Cluster in Proxmox VE
Important: The secondary node that we are going to add to the cluster cannot contain any virtual machines or containers. If it already has any, you will have to back them up, remove them from the node, and restore the backups after creating the cluster.
To create the cluster we will log in to the web interface of the machine that we want to have as primary in the cluster and we will click on “Datacenter”, then on “Cluster” and finally on “Create Cluster”.
When doing this, a new window will appear where we will have to choose a name for the cluster and choose the network interface for communication between nodes of the cluster.
Ideally this interface should be dedicated: an isolated subnet used only for communication between cluster nodes, separate from the network interface used by the virtual machines. Since my nodes do not have a dedicated interface, I will use the only one I have.
We will press the “Create” button and our cluster will be created, as simple as that. The Cluster screen will now change a bit, showing the name of the cluster and indicating that it has one node.
Now we will click on “Join Information” and a pop-up window will appear with the information we need to add other nodes to the cluster.
We will press the “Copy Information” button and we will log in to the web interface of the second node that we want to add to the cluster. We will click on “Datacenter”, then on “Cluster” and finally on “Join Cluster”.
A new window will appear where we will paste the information that we have copied from node 1. We will only have to select the correct network interface, fill in the administrator password and click on “Join Cluster”.
After this we will lose the connection with the web interface of node 2. We will return to node 1 and we will see that the list of cluster nodes will have been updated and will show us both nodes.
Finish the Cluster configuration
We will connect to node 1 with our SSH client and check the cluster information:
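The command in question (assuming the standard Proxmox VE cluster tool) is pvecm; its status output includes a Votequorum section with the vote counts:

```shell
# Show membership and quorum state of the cluster.
pvecm status
# At this point the Votequorum section should report 2 expected votes
# and 2 total votes: both nodes, no tie-breaker yet.
```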
We can see that our cluster is made up of two nodes and two voters, so we still need to add our third device (the Raspberry Pi) to be able to use the High Availability functions.
We will start by executing the following commands on node 1 and on node 2:
apt update
apt install corosync-qdevice
And we will configure our third “node” by executing the following on the main node, replacing the example IP with that of your Raspberry Pi. We will use “-f” to make sure any old quorum settings that may exist on the Raspberry Pi are overwritten:
pvecm qdevice setup 192.168.17.37 -f
It will ask us to accept the “fingerprint” of the Raspberry Pi and to enter the root password of the Raspberry Pi. A setup script will then run and should finish without errors.
To check that everything has gone correctly we will execute the following in node 1:
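Presumably this is the same pvecm status check as before; after the QDevice setup its Votequorum section should change:

```shell
# Verify that the QDevice now contributes a vote.
pvecm status
# Expected and total votes should now be 3: two nodes plus the QDevice.
```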
And the result should show that our cluster is made up of two nodes and three quorum votes, the third one belonging to the QDevice.
With this, our cluster of two nodes with High Availability would be ready!
If you have made it this far, thank you very much! If you have any questions, do not hesitate to leave them in the comments and I will do my best to help.
And don’t forget to subscribe to receive an email when new articles are published.