
K3S with Kube-VIP and Ingress Nginx

🗓️ Date: 2025-01-05 · 🗺️ Word count: 1341 · ⏱️ Reading time: 7 minutes

K3s and Kube-VIP are two powerful tools that are transforming how we deploy and manage Kubernetes clusters in edge, small-scale, and resource-constrained environments. K3s, a lightweight Kubernetes distribution developed by Rancher Labs, simplifies the deployment of Kubernetes while maintaining compatibility with full Kubernetes environments. With its reduced resource footprint and optimized design, K3s is ideal for scenarios where hardware resources are limited, such as IoT devices, small cloud instances, and development environments. As Kubernetes adoption continues to grow across industries, K3s has emerged as a go-to solution for those who need the power and flexibility of Kubernetes without the complexity and overhead of a traditional installation.

In conjunction with K3s, Kube-VIP offers an elegant solution for high-availability and load balancing, crucial for ensuring a reliable and resilient Kubernetes setup. Kube-VIP is an open-source virtual IP (VIP) management tool designed for Kubernetes, providing a lightweight way to manage virtual IP addresses within the cluster. By using Kube-VIP, users can create a seamless failover mechanism for services, ensuring that traffic is always directed to the available nodes even in the event of failures. When combined, K3s and Kube-VIP provide a potent solution for building scalable, high-availability Kubernetes clusters in resource-constrained environments, making them ideal for use in modern, distributed applications.

Prerequisites

  • Each node/host must have a unique hostname.
  • Each node/host must have a static IP address.
  • Each node/host must have working internet access.
  • Each node/host must not have K3s already installed.
  • Firewalld must be stopped and disabled, or configured according to K3S’ documentation.

Assumptions

  • The nodes run a RHEL-like Linux distro,
  • 3 master nodes will be used, with IPs 192.168.123.10, .20 and .30,
  • 2 agent nodes will be used, with IPs 192.168.123.11 and .12,
  • The IP 192.168.123.80 will be used for the kube-apiserver load balancing VIP,
  • The IP range 192.168.123.240/28 will be used for the Service LoadBalancer IP resources.

Configure a static IP with NetworkManager

cat /etc/NetworkManager/system-connections/eth0.nmconnection

[connection]
id=eth0
uuid=58ce2488-e231-46b8-8a6f-72f5ae3112cc
type=ethernet
interface-name=eth0

[ethernet]

[ipv4]
dns=192.168.123.1;
# .10 is the host IP address, .1 is the gateway
address1=192.168.123.10/24,192.168.123.1
method=manual

[ipv6]
addr-gen-mode=eui64
method=ignore

[proxy]
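If the connection file was created or edited by hand, NetworkManager needs to reload it before the new addressing takes effect. A minimal sketch using nmcli, assuming the connection id is eth0 as in the file above:

# reload connection profiles from disk and re-activate the connection
sudo nmcli connection reload
sudo nmcli connection up eth0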

Steps

K3S installation

Create the K3s server configuration file below on the host that will be the first master node (create the /etc/rancher/k3s directory first if it does not exist).

cat /etc/rancher/k3s/config.yaml

write-kubeconfig-mode: "0644"
tls-san:
  - "192.168.123.80"
  - "k3s.local"
cluster-init: true
token: "secret"
disable:
  - servicelb
  - traefik
  - local-storage

  • servicelb is disabled since Kube-VIP will be used as Service LoadBalancer,
  • traefik is disabled since Nginx will be used as Ingress controller,
  • local-storage is disabled since Longhorn will be used as Storage Class.

The tls-san field lists the additional names for which the kube-apiserver certificate will be signed by the cluster CA.
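Once the cluster is up (see the install step below), the SANs on the serving certificate can be double-checked. A sketch using openssl:

# print the Subject Alternative Names of the kube-apiserver serving certificate
openssl s_client -connect 192.168.123.80:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"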

Install K3s and bootstrap the new cluster on the first master node; the configuration file will be picked up automatically.

curl -sfL https://get.k3s.io | sh -
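Once the installer finishes, a quick sanity check (assuming the default k3s service name):

# confirm the k3s service is running and the node registered as Ready
sudo systemctl status k3s --no-pager
kubectl get nodes -o wide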

Kube-VIP installation

On K3s, Kube-VIP is installed via a DaemonSet manifest.

On the first master node, create the RBAC resources for Kube-VIP:

mkdir -p /var/lib/rancher/k3s/server/manifests/
curl https://kube-vip.io/manifests/rbac.yaml -o /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml

Then, generate the custom DaemonSet manifest using the container-packaged utility, customizing the necessary variables. The flags --taint and --controlplane are required so that the virtual IP is shared only among the control-plane nodes, preventing a worker node from taking the IP that represents the apiserver control-plane.

export VIP=192.168.123.80   # <-- change!
export INTERFACE=eth0       # <-- change!

KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")

alias kube-vip="/usr/local/bin/ctr image pull ghcr.io/kube-vip/kube-vip:$KVVERSION; /usr/local/bin/ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:$KVVERSION vip /kube-vip"

kube-vip manifest daemonset \
    --interface $INTERFACE \
    --address $VIP \
    --inCluster \
    --taint \
    --controlplane \
    --services \
    --arp \
    --leaderElection

Paste the output into /var/lib/rancher/k3s/server/manifests/kube-vip-ds.yaml. As soon as the kube-vip pod starts, the master node should take the additional virtual IP and associate it with the specified network interface. In this case, the node now has both the IPs 192.168.123.10 and 192.168.123.80.

[admin@master0 ~]$ ip a show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:65:95:17 brd ff:ff:ff:ff:ff:ff
    altname enp1s0
    inet 192.168.123.10/24 brd 192.168.123.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.123.80/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe65:9517/64 scope link 
       valid_lft forever preferred_lft forever
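The kube-vip DaemonSet itself can also be checked (the resources land in kube-system; grep is used here to avoid assuming the exact names from the generated manifest):

kubectl get daemonset -n kube-system | grep kube-vip
kubectl get pods -n kube-system -o wide | grep kube-vip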

Test by copying the kubeconfig file (/etc/rancher/k3s/k3s.yaml) to your local machine and replacing 127.0.0.1 in server: https://127.0.0.1:6443 first with 192.168.123.10 and then with 192.168.123.80.
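A sketch of that test, assuming the kubeconfig is copied to ~/k3s.yaml on the local machine (the admin user and local path are assumptions):

# copy the kubeconfig locally, point it at the VIP and query the cluster
scp admin@192.168.123.10:/etc/rancher/k3s/k3s.yaml ~/k3s.yaml
sed -i 's/127.0.0.1/192.168.123.80/' ~/k3s.yaml
kubectl --kubeconfig ~/k3s.yaml get nodes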

Adding additional master nodes

Check the prerequisites on each additional node to be added to the cluster. Create the file /etc/rancher/k3s/config.yaml on each host that will be a master node.

write-kubeconfig-mode: "0644"
server: "https://192.168.123.10:6443"
tls-san:
  - "192.168.123.80"
  - "k3s.local"
token: "secret"
disable:
  - servicelb
  - traefik
  - local-storage

Try replacing server: "https://192.168.123.10:6443" with server: "https://192.168.123.80:6443": it should work.

Install K3s: curl -sfL https://get.k3s.io | sh -.

Add both a second and a third master node (using only 2 master nodes causes more issues than using just 1: if one of the 2 becomes unavailable, etcd loses quorum and the other stops responding).

After adding the third master node, check that the VIP floats among the master nodes (a small watch sketch follows the list):

  • use the VIP in the local kubeconfig file,
  • the VIP should be assigned to the first master node; check by running ip a show eth0,
  • turn off the first master node,
  • check if one of the other 2 master nodes takes the VIP.
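A minimal way to watch the failover from another machine, assuming the local kubeconfig already points at the VIP:

# the node list should keep being served through the VIP while the first master is off
watch -n 2 kubectl get nodes
# or simply check that the VIP keeps answering
ping 192.168.123.80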

Configure Kube-VIP Service LoadBalancer

In the previous steps, Kube-VIP has been configured to provide a single VIP that represents the control-plane, so that the cluster does not depend on a single node (for instance, in the kubeconfig file).

Kube-VIP can also be configured to provide virtual IPs for the LoadBalancer services, just as cloud providers do. To configure this feature follow the official documentation. In short:

kubectl apply -f https://raw.githubusercontent.com/kube-vip/kube-vip-cloud-provider/main/manifest/kube-vip-cloud-controller.yaml

# replace the range with the one that is valid for you
kubectl create configmap -n kube-system kubevip --from-literal cidr-global=192.168.123.240/28

To test, create a LoadBalancer service that exposes a deployment:

kubectl create deploy nginx --image=nginx:alpine --replicas=3
kubectl expose deploy nginx --port=80 --type=LoadBalancer --name=nginx
kubectl get svc # check svc type and IP
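# (optional) confirm traffic actually reaches the pods through the assigned address;
# 192.168.123.240 is an assumption, use the EXTERNAL-IP printed by the command above
curl http://192.168.123.240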
kubectl delete deploy nginx
kubectl delete svc nginx

Nginx Ingress controller installation

Traefik was excluded during the K3s installation, so we need to install another Ingress controller, such as Nginx.

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx; helm repo update
helm install ingress-nginx -n ingress-nginx -f ./values.yaml ingress-nginx/ingress-nginx

The values below are used to run the ingress controller pods only on the control-plane nodes. Note that they also mark the ingressClass as the default one; otherwise it would have to be specified on each Ingress resource.

controller:
  replicaCount: 3
  tolerations:
  - effect: "NoSchedule"
    operator: "Exists"
    key: "node-role.kubernetes.io/control-plane"
  nodeSelector:
    "node-role.kubernetes.io/control-plane": "true"
  ingressClassResource:
    default: true

Check if the installation worked: kubectl get svc -n ingress-nginx.

NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.43.160.199   192.168.123.240   80:31389/TCP,443:31073/TCP   6h12m
ingress-nginx-controller-admission   ClusterIP      10.43.58.157    <none>            443/TCP                      6h12m

Check whether a node gets the VIP of the LoadBalancer service associated with its network interface. Run ip a | grep eth0 -A4 on each master node and check the IPs:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:c2:67:3c brd ff:ff:ff:ff:ff:ff
    altname enp1s0
    inet 192.168.123.20/24 brd 192.168.123.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.123.240/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fec2:673c/64 scope link 
       valid_lft forever preferred_lft forever
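To confirm that the controller actually serves traffic end to end, a quick test can be run; the hostname demo.cluster is a made-up example and 192.168.123.240 is the controller's EXTERNAL-IP from the output above:

kubectl create deploy nginx --image=nginx:alpine
kubectl expose deploy nginx --port=80
kubectl create ingress nginx --class=nginx --rule="demo.cluster/*=nginx:80"
# resolve demo.cluster to the EXTERNAL-IP, or pass a Host header as below
curl -H "Host: demo.cluster" http://192.168.123.240
kubectl delete ingress nginx; kubectl delete svc nginx; kubectl delete deploy nginx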

Adding worker (agent) nodes

Check the prerequisites on each additional node that will be added to the cluster.

Install K3s in agent mode, using the VIP of the cluster.

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.123.80:6443 K3S_TOKEN=secret sh -
kubectl get nodes -w

Longhorn installation

Longhorn provides distributed storage without relying on external appliances, such as NFS servers. Each node must have the iSCSI utilities installed. On RHEL-based Linux distributions, run on each cluster node (masters and agents):

sudo dnf install iscsi-initiator-utils

Then, install Longhorn via the Helm chart and check via the web UI that it is working:

helm repo add longhorn https://charts.longhorn.io; helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
# on the local machine, browse to localhost:8080

Finally, make sure there is only one default storage class.
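To verify, a sketch (the class name OTHER-CLASS is a placeholder):

# list storage classes; exactly one should be flagged "(default)"
kubectl get storageclass
# if another class is also marked default, clear the flag on it (replace OTHER-CLASS)
kubectl patch storageclass OTHER-CLASS -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'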

Prometheus installation

By default, the Prometheus chart creates 2 PVs (for the server and Alertmanager), so installing it is also useful to check that persistent storage is working.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts; helm repo update
helm install prometheus prometheus-community/prometheus -n prometheus --create-namespace -f ./values.yaml

The values.yaml below enables an Ingress for the Prometheus server:

server:
  ingress:
    enabled: true
    hosts:
    - prometheus.cluster
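After the chart settles, a hedged check that its volume claims were actually bound by Longhorn:

kubectl get pvc -n prometheus
kubectl get pv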

Sources