Today we want to talk about a test platform for Rancher and Kubernetes and begin our setup.
What do we need?
We want to create a test platform that comes close to a real production platform.
We have 3 possibilities:
- a complete local installation
- a complete installation in the cloud (AWS EC2, Azure, and more)
- a mixed installation (Rancher local, downstream cluster in the cloud)
Cloud installations incur costs, which we want to avoid in this demo.
So I chose a local installation with 7 VMs, each with 2 vCPUs and 3.5 GB RAM.
That is 24.5 GB in total, which can already be handled by a Linux KVM hypervisor with 32 GB RAM.
The VMs are all in one network.
The VMs are named node1 to node6 and the last one is named ranchernode.
It's very important that DNS resolution for the hostnames is working! The whole Rancher/Kubernetes setup relies on working name resolution!
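The DNS requirement can be checked up front with a small shell snippet. The check_dns helper is just an illustration for this setup, not part of Rancher or K3S:

```shell
#!/bin/sh
# Illustrative helper: checks whether a hostname resolves via the
# system resolver (DNS or /etc/hosts) and prints OK or FAIL.
check_dns() {
    if getent hosts "$1" > /dev/null 2>&1; then
        echo "OK $1"
    else
        echo "FAIL $1"
    fi
}

# Run it against all hosts of our setup:
for h in node1 node2 node3 node4 node5 node6 ranchernode; do
    check_dns "$h"
done
```

Run this on every VM; every line must report OK before you continue.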
The VMs node1 to node6 are used for the downstream Kubernetes cluster (this is the cluster that carries the workload later).
In principle Rancher should work with any modern Linux distribution, but it's better to choose an OS that is listed in Rancher's support matrix.
A very basic installation should do, no need for GUI.
All VMs should have a working network, a working name resolution and time synchronization.
The prerequisites for ranchernode
You can install Rancher on any certified Kubernetes distribution via a Helm chart (Helm charts are a packaging format for applications in a Kubernetes cluster).
Rancher’s recommendation is to install Rancher on K3S or RKE.
A K3S Kubernetes cluster has the advantage that it's very easy to install, while an RKE Kubernetes cluster has other advantages, such as automatic backups of the etcd database.
For production environments it's recommended to install Rancher on a highly available Kubernetes cluster, so in production you would need at least 3 nodes just for Rancher.
Here I choose a single-node K3S Kubernetes cluster. It can easily be extended to a 3-node K3S cluster later to meet the HA requirements.
We need to make sure that some elements are installed on our ranchernode:
- container-selinux (if SELinux is installed on the node)
Make sure containerd is started:
systemctl start containerd
systemctl enable containerd
K3S needs no preinstalled Docker on the nodes; it works with containerd and can easily be installed on the ranchernode via
curl -sfL https://get.k3s.io | sh -
This simple command will download K3S on the ranchernode and install it.
And that's all! Now a single-node K3S Kubernetes cluster is running on this node.
We can control that with
kubectl get node
kubectl get namespace
After the K3S installation, make sure that kubectl finds your Kubernetes config file by pointing the KUBECONFIG environment variable at it (K3S writes the file to /etc/rancher/k3s/k3s.yaml):
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
vi ~/.bash_profile (and add this export line to the end of the file)
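Instead of editing the profile by hand, the KUBECONFIG export (K3S writes its config to /etc/rancher/k3s/k3s.yaml) can also be appended non-interactively; a minimal sketch:

```shell
# Append the KUBECONFIG export to ~/.bash_profile, but only once
# (grep -qxF checks for the exact line before appending).
LINE='export KUBECONFIG=/etc/rancher/k3s/k3s.yaml'
PROFILE="$HOME/.bash_profile"
touch "$PROFILE"
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
```

Because of the grep guard the snippet is idempotent, so you can safely run it more than once.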
In the next step we need to extend our K3S Kubernetes cluster with the Helm package manager.
That can also easily be done with the following commands:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Control if Helm is working:
helm version
Install Rancher via Helm
Now it’s time to install the Rancher software in our K3S cluster:
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
Now we need to decide what kind of certificates our Rancher should use; we have 3 possibilities. For the first 2 we need cert-manager installed:
- Rancher Generated Certificates (Default)
- Let’s Encrypt
- Certificates from Files
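For reference, if you later prefer the Let's Encrypt variant, the rancher chart's documented ingress.tls.source switch can be set via a values file; the e-mail address below is a placeholder, and this is only a sketch, not what we use in this demo:

```yaml
# values.yaml sketch for "helm install rancher rancher-latest/rancher -f values.yaml"
# if you choose Let's Encrypt instead of the default Rancher-generated certificates
hostname: ranchernode.localdomain
ingress:
  tls:
    source: letsEncrypt     # default is "rancher" (self-signed)
letsEncrypt:
  email: admin@example.com  # placeholder - use a real address
```

Note that Let's Encrypt only works if the hostname is publicly resolvable, which is not the case in our local lab.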
We go here with the default Rancher-generated (self-signed) certificates, so we need cert-manager:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.4
Verify the installation of cert-manager:
kubectl get pods --namespace cert-manager
Finally we deploy the Rancher application in Kubernetes with the help of Helm:
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=ranchernode.localdomain
We can control the status of this process with the following command. Please wait until the deployment has finished successfully:
kubectl -n cattle-system rollout status deploy/rancher
deployment "rancher" successfully rolled out
And that's all! We have now successfully installed a single-node K3S Kubernetes cluster with Rancher deployed.
In an enterprise environment with 3 nodes for this cluster, we need to make sure that the Rancher hostname (here ranchernode.localdomain)
resolves to a load balancer outside of the cluster. The load balancer distributes the HTTP/HTTPS connections from clients to the IP addresses of all 3 ranchernodes; it also checks the health of these nodes and takes them out of load balancing in case of a problem.
There are also ways to handle the load balancing inside the Kubernetes cluster if no external load balancer is available.
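As an illustration, such an external load balancer could be a simple HAProxy in TCP passthrough mode; the backend IP addresses below are made-up placeholders for the 3 ranchernodes:

```
# haproxy.cfg sketch: forward HTTPS to the three Rancher nodes,
# health-check them, and remove unhealthy nodes from rotation.
frontend rancher_https
    bind *:443
    mode tcp
    default_backend rancher_servers

backend rancher_servers
    mode tcp
    balance roundrobin
    option tcp-check
    server ranchernode1 192.168.122.11:443 check
    server ranchernode2 192.168.122.12:443 check
    server ranchernode3 192.168.122.13:443 check
```

TCP passthrough keeps the TLS termination on the Rancher nodes, so the certificates generated by Rancher stay valid for the clients.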
Access the Rancher Webgui
Now we just need to open a browser at https://ranchernode.localdomain (the hostname we set during the Helm install)
and create a password for the admin user.
Rancher is now ready to deploy downstream clusters.
That will be covered in the next part of this blog.
Possible Problems with this setup
If you experience problems with this setup, try to temporarily deactivate SELinux/AppArmor and firewalld.
Depending on which change resolves your problem, you will know which component to reconfigure before activating it again.
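A sketch of that troubleshooting step (run as root on the affected node; it is guarded so it does nothing on systems where the components are absent):

```shell
# Determine the SELinux state first; getenforce is missing on systems
# without SELinux, so we fall back to "absent" in that case.
SELINUX_STATE=$(getenforce 2>/dev/null || echo "absent")
if [ "$SELINUX_STATE" = "Enforcing" ]; then
    setenforce 0    # permissive until the next reboot, not persistent
    echo "SELinux switched to permissive"
else
    echo "SELinux is $SELINUX_STATE, nothing to do"
fi

# Stop firewalld only if it is actually running.
if systemctl is-active firewalld > /dev/null 2>&1; then
    systemctl stop firewalld
    echo "firewalld stopped"
else
    echo "firewalld is not active, nothing to do"
fi
```

Remember that setenforce 0 is not persistent: after a reboot SELinux is enforcing again, so fix the underlying policy (e.g. install container-selinux) instead of leaving it off.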
CU soon again here 🙂