My 5-node Bare Metal Kubernetes Homelab (Proxmox + HA Control Plane + Cloudflare Tunnel)
I built a 5-node bare-metal Kubernetes cluster on a single Dell micro server using Proxmox VMs, an HA control plane behind an Nginx TCP load balancer, and Cloudflare Tunnel + Nginx Ingress to expose services without opening inbound ports.
I have been wanting a "real" Kubernetes cluster at home for a while now. Not kind/minikube, not a single VM k3s (those are great), but something that feels close enough to production: multiple control plane nodes, multiple workers, proper ingress, and a sane way to expose services to the internet.
This post is about the cluster I just built, how I set it up, and the pitfalls along the way: 5-node Kubernetes on bare metal (virtualized with Proxmox), using an HA control plane endpoint behind Nginx, and Cloudflare Tunnel to publish apps without opening inbound ports.
The goal
My main requirements were:
- HA control plane (no single master VM that can take down kubectl)
- No port forwarding on my home router
Hardware and VM layout
I started from a Dell "Micro" (I wrote Multiplex in my notes but you know what I mean). First thing I did was wipe it and install Proxmox.
From there I created 6 VMs:
- 3x Kubernetes control plane nodes (masters)
- 2x Kubernetes worker nodes
- 1x Nginx VM used as a TCP load balancer for the Kubernetes API server
So yes, the cluster is "bare metal" in the sense that it runs on my own physical box, but the nodes are VMs (which I actually prefer for snapshots and rebuilds).
High-level architecture
This is the mental model that made the whole setup click for me:
- The Kubernetes API endpoint is one stable hostname:port (e.g. k8s.trile.cloud:6443)
- That hostname points to the Nginx load balancer VM
- Nginx load balances TCP 6443 to the 3 control plane nodes
- Workloads run on the workers
- External traffic to apps goes through:
Cloudflare DNS → Cloudflare Tunnel → Nginx Ingress Controller → Services
This way:
- I don’t expose NodePorts
- I don’t expose the Ingress Controller directly to the internet
- I don’t even need to open 80/443 inbound at home (tunnel is outbound)
Kubernetes bootstrap (kubeadm)
I used kubeadm because it forces me to understand what’s happening. The core flow was:
- Prep every node: install kubeadm, kubelet, CNI bits, and disable swap
- Pick the stable control-plane endpoint (the LB VM)
- kubeadm init on the first master
- Join the other masters using the control-plane join command
- Join workers
The kubeadm init command is basically the heart of HA here. I used a control-plane endpoint like:
--control-plane-endpoint "k8s.trile.cloud:6443"
That endpoint is the load balancer, not a specific master.
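One practical note: every node (and the machine you run kubectl from) has to resolve that hostname to the LB VM before kubeadm init runs. If you don't have internal DNS and don't want a public record for it, an /etc/hosts entry is the low-tech fix. A sketch, with a made-up LB IP (use your own):

# map the API hostname to the Nginx LB VM on each node
echo "192.168.0.210  k8s.trile.cloud" | sudo tee -a /etc/hosts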
Here’s the exact command shape I used on the first master (edit the endpoint + pod CIDR for your CNI choice):
swapoff -a
kubeadm init \
--control-plane-endpoint "k8s.trile.cloud:6443" \
--upload-certs \
--pod-network-cidr=10.244.0.0/16
After init, set up kubeconfig (do this as your normal user, not root):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then install a CNI that matches the CIDR you picked. I used the 10.244.0.0/16 CIDR which is commonly used with Flannel.
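If you do go with Flannel, installing it is basically one manifest (this is the upstream manifest URL from the flannel repo; double-check the current one before applying):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# pods should come up in the kube-flannel namespace
kubectl get pods -n kube-flannel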
Joining nodes is standard kubeadm:
- Other masters join using the control-plane join command kubeadm prints (it includes --control-plane, a discovery hash, and a --certificate-key).
- Workers join using the worker join command kubeadm prints (the rough shape of both is shown below).
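For reference, the two commands look roughly like this (placeholders, not real values — use the ones kubeadm prints for you):

# control-plane join (note --control-plane and --certificate-key)
kubeadm join k8s.trile.cloud:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>

# worker join
kubeadm join k8s.trile.cloud:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>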
If you lose the join command later, generate a new token on a master:
kubeadm token create --print-join-command
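One caveat: that prints the worker join command. To join another control plane node later you also need a fresh certificate key, because the one from --upload-certs expires after a couple of hours:

# prints a new key to pass as --certificate-key on the control-plane join
sudo kubeadm init phase upload-certs --upload-certs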
The Nginx TCP load balancer for the API server
This is the simplest part of the setup, but also the most important one.
The Kubernetes API server is just TCP on port 6443, so I configured Nginx with a stream {} block to load balance 6443 across my three masters.
Example (trimmed) configuration looks like this:
stream {
    upstream k8s_servers {
        hash $remote_addr consistent;
        server 192.168.0.214:6443;
        server 192.168.0.212:6443;
        server 192.168.0.213:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s_servers;
    }
}
I’m not pretending this is some enterprise-grade load balancer, but for a homelab it’s perfect: stable endpoint, zero fancy moving parts, easy to debug.
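Two small things that help here: the stream {} block has to live at the top level of nginx.conf (not inside http {}), and after any change it's worth testing, reloading, and poking the endpoint. Even an "unauthorized" JSON response from the API server proves the path through the LB works:

sudo nginx -t && sudo systemctl reload nginx
# any TLS/JSON response here means LB -> apiserver connectivity is fine
curl -k https://k8s.trile.cloud:6443/version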
Installing Nginx Ingress Controller
Once the cluster was alive, I installed the upstream Nginx Ingress Controller.
I kept it simple and used the official manifest. Then I verified pods/services in the ingress-nginx namespace.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
Publishing services safely: Cloudflare Tunnel
This is the part I like the most.
Instead of exposing LoadBalancer services (which you can’t really do cleanly on a random home network anyway) or port-forwarding my router, I used Cloudflare Tunnel.
Cloudflared runs inside the cluster (Helm chart) and makes an outbound connection to Cloudflare. Then Cloudflare routes public hostnames to the tunnel.
This part is NOT the “just install a Helm chart” experience people make it sound like, because you’re basically gluing together three layers:
- Cloudflare Tunnel auth + tunnel routing rules
- cloudflared running in Kubernetes
- Nginx Ingress routing inside the cluster
The guide that helped me the most is here (worth reading end-to-end):
https://medium.com/@nicholas5421/exposing-kubernetes-apps-to-the-internet-with-cloudflare-tunnel-ingress-controller-and-e30307c0fcb0
What I did (the practical version):
Step 1: Create the tunnel in Cloudflare
- Cloudflare dashboard → Zero Trust → Networks → Tunnels
- Create a tunnel (I named mine something like k8s-homelab)
- Download the tunnel credentials JSON (this is the file cloudflared uses to authenticate)
You should end up with:
- a tunnelId (UUID)
- a credentials.json file
Step 2: Create a namespace + secret in Kubernetes
kubectl create namespace cloudflare
kubectl create secret generic tunnel-credentials \
--from-file=credentials.json=/path/to/credentials.json \
-n cloudflare
Step 3: Install cloudflared with Helm
helm repo add cloudflare https://cloudflare.github.io/helm-charts
helm repo update
Create a values.yaml like this (replace the placeholders):
cloudflare:
  tunnelName: 'k8s-homelab'
  tunnelId: '<tunnel-id>'
  secretName: 'tunnel-credentials'
  ingress:
    - hostname: trainforge.trile.cloud
      service: http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:80
    - hostname: api-trainforge.trile.cloud
      service: http://ingress-nginx-controller.ingress-nginx.svc.cluster.local:80
    - service: http_status:404
Then install/upgrade:
helm upgrade --install cloudflared cloudflare/cloudflared \
-f values.yaml \
-n cloudflare
kubectl get pods -n cloudflare
Step 4: Create DNS records in Cloudflare
For each public hostname, create a CNAME record:
trainforge → <tunnel-id>.cfargotunnel.com
api-trainforge → <tunnel-id>.cfargotunnel.com
Orange cloud enabled.
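If you'd rather do this from a terminal (assuming you created the tunnel with cloudflared locally and are logged in, rather than through the dashboard), cloudflared can create those records for you:

cloudflared tunnel route dns k8s-homelab trainforge.trile.cloud
cloudflared tunnel route dns k8s-homelab api-trainforge.trile.cloud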
Step 5: Verify traffic is actually flowing
These are the first commands I run when “it should work” but doesn’t:
kubectl get svc -n ingress-nginx
kubectl logs -n cloudflare deploy/cloudflared --tail=200
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=200
After that, your apps are just normal Kubernetes Ingress resources.
Ingress resources (the boring but necessary glue)
With Nginx Ingress Controller and the tunnel in place, exposing an app is just:
- Create a Service
- Create an Ingress (a minimal example is below)
- Point hostnames to the tunnel CNAME in Cloudflare
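Here's a minimal sketch of that Ingress (the service name trainforge-frontend and the port are illustrative, adjust to whatever your Service actually exposes):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: trainforge-frontend
spec:
  ingressClassName: nginx   # must match the class of the installed controller
  rules:
    - host: trainforge.trile.cloud
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: trainforge-frontend
                port:
                  number: 80

The important bit for this setup is ingressClassName: nginx, since that's how the upstream controller knows to pick the resource up.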
I’m using this for my TrainForge services right now (frontend + backend), and it feels like a proper platform instead of random port forwards.
Things that tripped me up
A few gotchas that cost me time:
containerd / CRI runtime errors
At one point I got the classic kubeadm error about CRI runtime not running. My fix was basically resetting containerd config and then explicitly setting SystemdCgroup = true.
This is what it looked like for me:
sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
Then I regenerated /etc/containerd/config.toml (important bit: SystemdCgroup = true) and restarted containerd again.
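If you want the concrete commands, the standard way to regenerate the config and flip that flag looks like this (a sketch; adjust to your containerd version):

containerd config default | sudo tee /etc/containerd/config.toml
# the important line: SystemdCgroup = true under the runc options
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd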
Swap
Kubernetes really doesn’t like swap. Disabling it is easy, but making it permanent is the part people forget (update /etc/fstab).
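The whole fix is basically two commands (the sed is a sketch; it just comments out any swap line in /etc/fstab):

sudo swapoff -a
# keep swap off across reboots
sudo sed -i '/ swap / s/^#*/#/' /etc/fstab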
Debugging ingress vs tunnel
When something returns 404, it’s very easy to blame Cloudflare first. Most of the time it was just:
- Wrong hostname in my Ingress
- Wrong namespace
- IngressClass not set to nginx
Once I started checking kubectl describe ingress ... and kubectl logs for cloudflared + ingress-nginx controller, things got much easier.
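The quick checks that map to those three gotchas (placeholders where obvious):

kubectl get ingress -A                            # hostname + namespace sanity check
kubectl get ingressclass                          # is there actually an "nginx" class?
kubectl -n <namespace> describe ingress <name>    # events here usually name the real problem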
Sometimes… it’s just slow
One more thing that wasted my time because I ran this on a tiny server: sometimes nothing is “wrong”, Kubernetes is just being Kubernetes and the pods take time to get scheduled / pulled / started.
If something isn’t responding right after you apply manifests, I now do a quick sanity check before changing configs:
kubectl get pods -A -o wide
kubectl get events -A --sort-by=.metadata.creationTimestamp | tail -n 30
If you see pods stuck in Pending, it might just be resource pressure (CPU/RAM) or image pulling taking a while. Describing the pod usually tells you the real reason:
kubectl -n <namespace> describe pod <pod-name>
End
That’s the cluster. Nothing too fancy, but it hits my goals: HA-ish control plane, clean ingress, and services exposed safely without opening inbound ports.
If you’re building something similar, my strongest advice is: get the API endpoint + load balancer stable first (so kubeadm join is boring), then add ingress, then add the tunnel. Doing it in the reverse order is painful.
As always, if you spot something dumb or unsafe in my setup, feel free to email me at tri@trile.cloud or connect with me via any social link in the footer. Thanks for reading!