OpenDAX v4 docs

MicroK8s Cluster Scaling

OpenDAX v4 uses MicroK8s as the deployment platform for the VM-based edition of the stack. The cluster can be scaled horizontally by adding new nodes on separate VMs; any machine with MicroK8s v1.21+ installed can join.
High Availability mode requires at least three nodes (run microk8s status to check the cluster state). Once the cluster reaches three or more nodes, the datastore (Dqlite) is replicated automatically, so you can keep interacting with the cluster even if some nodes go down, eliminating the single point of failure. If a node fails, its workloads are rescheduled onto the remaining nodes.
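As a quick check, the high-availability flag can be read straight from the microk8s status output. A minimal sketch, assuming the status format used by recent MicroK8s releases (the sample text below is illustrative, not taken from a real cluster; on a real node substitute the actual command output):

```shell
# Extract the high-availability flag from `microk8s status`-style output.
# `sample` mimics the status format; on a real node, replace it with the
# live output, e.g. sample="$(microk8s status)".
sample='microk8s is running
high-availability: yes
  datastore master nodes: 10.132.0.10:19001 10.132.0.11:19001 10.132.0.12:19001
  datastore standby nodes: none'
ha=$(printf '%s\n' "$sample" | awk -F': ' '/^high-availability:/ {print $2}')
echo "high availability: $ha"
```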
The master recovery algorithm in HA mode works as follows:
  • Within the cluster, a voting process takes place between the nodes, through which a leader is elected.
  • If the leader node gets "removed" ungracefully, e.g. it crashes and never comes back, it will take up to 5 seconds for the cluster to elect a new leader.
  • Promoting a non-voter to a voter takes up to 30 seconds. This promotion takes place when a new node enters the cluster or when a voter crashes.

Scaling Instructions

To get a scaled-up OpenDAX v4 deployment:
  1. Install and run the OpenDAX v4 image.
  2. Connect to the newly created virtual machine via SSH (check the guides: AWS, DigitalOcean, GCP). Note: on AWS, connect as the ubuntu user.
  3. Run sudo microk8s add-node. This command generates a connection string in the form microk8s join <master_ip>:<port>/<token>.
  4. Create and set up a separate machine to serve as a MicroK8s worker node (MicroK8s Installation Guide), then run sudo microk8s join <master_ip>:<port>/<token> on it to join your cluster. Example: sudo microk8s join 10.132.0.10:25000/1149a0c5478da5cc854bb7f30b5a9e41/5e08c80a47ac
  5. Check that the node has successfully joined the cluster by running microk8s kubectl get nodes.
You can repeat steps 4 and 5 until the desired number of K8s cluster nodes is achieved.
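When automating steps 4 and 5 across many workers, it can help to split the connection string printed by add-node into its parts. A minimal sketch in plain POSIX shell using the example string from step 4 (parameter expansion only, no microk8s required):

```shell
# Split a `microk8s join` connection string of the form
# <master_ip>:<port>/<token> into its components.
conn='10.132.0.10:25000/1149a0c5478da5cc854bb7f30b5a9e41/5e08c80a47ac'
hostport="${conn%%/*}"       # everything before the first slash
master_ip="${hostport%%:*}"  # 10.132.0.10
port="${hostport##*:}"       # 25000
token="${conn#*/}"           # token portion after the first slash
echo "master=$master_ip port=$port token=$token"
```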
Note: if you are using GCP with the private IP addresses of its VMs, you don't need to add any firewall rules. If you use a public IP address or any other cloud provider, add a rule allowing traffic on ports 25000 (cluster agent) and 19001 (datastore).
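For example, on an Ubuntu VM with ufw the two clustering ports could be opened with rules like the following. This is a sketch, printed as a dry run (drop the echo to actually apply the rules, and adapt the commands to your cloud's own firewall tooling):

```shell
# Build the ufw commands that would open the MicroK8s clustering ports:
# 25000 (cluster agent) and 19001 (datastore). Dry run: the commands
# are printed, not executed.
rules=$(for port in 25000 19001; do
  echo "sudo ufw allow ${port}/tcp"
done)
printf '%s\n' "$rules"
```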
Useful guides for setting up a firewall in different clouds: AWS, DigitalOcean, GCP.

Removing a node

On the worker node you would like to remove from the cluster, run microk8s leave. The departing node restarts its own control plane and reverts to a standalone single-node MicroK8s installation, becoming its own master. To complete the removal, run microk8s remove-node 10.22.254.79 on the master, substituting the IP address of the removed node. After that, the node is completely empty and can be terminated or joined to a different cluster.
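The remove-node step can be wrapped in a small guard so that a typo never reaches the cluster. A hypothetical helper sketch (the IP address and the dry-run echo are illustrative, not part of MicroK8s itself):

```shell
# Validate an IPv4 address before passing it to `microk8s remove-node`.
# Dry run: the command is printed rather than executed.
is_ipv4() {
  printf '%s' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}
node_ip='10.22.254.79'
if is_ipv4 "$node_ip"; then
  echo "would run: microk8s remove-node $node_ip"
else
  echo "refusing: '$node_ip' is not an IPv4 address" >&2
fi
```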