Let’s get you started on your Kubernetes journey by installing Kubernetes on-premises in virtual machines.
Kubernetes is a distributed system: you will create a cluster with a master node, which is in charge of all operations in the cluster, and in this walkthrough, three worker nodes, which will run our applications. This cluster topology is by no means production ready. If you’re looking for production cluster builds, check out the Kubernetes documentation here and here. The primary components that need high availability in a Kubernetes cluster are the API Server, which controls the state of the cluster, and the etcd database, which stores the persistent state of the cluster. You can learn more about Kubernetes cluster components here.
Get your infrastructure sorted
I’m using four Ubuntu virtual machines in VMware Fusion on my Mac, each with 2 vCPUs and 2GB of RAM, running Ubuntu 16.04.5. Ubuntu 18.04 requires a slightly different install, documented here: there you add the Docker repository and then install Docker from it. The instructions below get Docker from Ubuntu’s repository.
- k8s-master – 172.16.94.15
- k8s-node1 – DHCP
- k8s-node2 – DHCP
- k8s-node3 – DHCP
Ensure that each host has a unique hostname and that all nodes have network reachability to one another. Take note of the IPs, because you will need to log into each node with SSH. If you need assistance getting your environment ready, check out my training on Pluralsight to get you started here! I have courses on installation and command line basics, all the way up through advanced topics on networking and performance.
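If your VMs were cloned from a single template, their hostnames may collide. One way to give each node a unique name (using this walkthrough's naming, and assuming a systemd-based Ubuntu image) is:

```shell
# Run on each node, substituting that node's name
# (k8s-master, k8s-node1, k8s-node2, k8s-node3)
sudo hostnamectl set-hostname k8s-node1

# Optional: let every node resolve the master's static IP by name
echo "172.16.94.15 k8s-master" | sudo tee -a /etc/hosts
```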
Another requirement, which Klaus Aschenbrenner reminded me of, is that you must disable swap on any system that will run the kubelet, which in our case is all of them. To do so, turn swap off with sudo swapoff -a and edit /etc/fstab, removing or commenting out the swap volume entry.
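The two swap steps above can be sketched as follows; the sed expression assumes a standard fstab swap entry, so double-check /etc/fstab afterward:

```shell
# Turn swap off for the current boot
sudo swapoff -a

# Comment out the swap entry so it stays off after a reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Verify: the Swap line should now show all zeros
free -h
```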
Overview of the cluster creation process
- Install Kubernetes packages on all nodes
  - Add Kubernetes’ apt repositories
  - Install the required software for Kubernetes
- Download deployment files for your pod network
- Create a Kubernetes cluster on the master
  - We’re going to use a utility called kubeadm to create our cluster with a basic configuration
- Install a Pod Network
- Join our three worker nodes to our cluster
Let’s start off by installing Kubernetes onto all of the nodes in our system. This requires logging into each server via SSH, adding the Kubernetes apt repositories, and installing the correct packages. Perform the following tasks on ALL nodes in your cluster: the master and the three workers. If you add more nodes later, you will need to install these packages on those nodes as well.
Add the gpg key for the Kubernetes Apt repository to your local system
demo@k8s-master1:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
demo@k8s-master1:~$ sudo bash -c 'cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF'
demo@k8s-master1:~$ sudo apt-get update
demo@k8s-master1:~$ sudo apt-get install -y kubelet kubeadm kubectl docker.io
demo@k8s-master1:~$ sudo apt-mark hold kubelet kubeadm kubectl docker.io
- Kubelet – On each node in the cluster, this is in charge of starting and stopping pods in response to the state defined on the API Server on the master
- Kubeadm – Primary command line utility for creating your cluster
- Kubectl – Primary command line utility for working with your cluster
- Docker – Remember that Kubernetes is a container orchestrator, so we need a container runtime to run our containers. We’re using Docker; you can use other container runtimes if required
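Once the packages are installed, you can sanity-check each node with a few read-only commands. Note that it is normal for the kubelet service to restart in a loop at this point; it has no cluster configuration until kubeadm init (or kubeadm join) runs:

```shell
# Confirm the tooling landed on each node
kubeadm version
kubectl version --client
docker --version

# The kubelet will show as activating/restarting until the node
# is initialized or joined; that's expected here
systemctl status kubelet --no-pager
```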
Download the YAML files for your Pod Network
demo@k8s-master1:~$ wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
demo@k8s-master1:~$ wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Creating a Kubernetes Cluster
demo@k8s-master1:~$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
- Creates a certificate authority – Kubernetes uses certificates to secure communication between components and also to verify the identity of hosts in the cluster
- Creates configuration files – On the master, this will create configuration files for various Kubernetes cluster components
- Pulls control plane images – the services implementing the cluster components are deployed into the cluster as containers. Very cool! You can, of course, run these as local system daemons on the hosts, but Kubernetes suggests keeping them inside containers
- Bootstraps the control plane pods – kubeadm writes static Pod manifests on the master, and the kubelet starts these control plane Pods automatically whenever the master node starts up
- Taints the master to run just system pods – this means the master will run (schedule) only system Pods, not user Pods. This is ideal for production. In testing you may want to untaint the master, and you’ll definitely need to if you’re running a single node cluster. See this link for details on that.
- Generates a bootstrap token – used to join worker nodes to the cluster
- Starts any add-ons – the most common add-ons are the DNS pod and kube-proxy
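As an aside, untainting the master (mentioned above for testing and single-node clusters) is a single command once the cluster is up. The taint key below is the one kubeadm applies in the v1.12 era:

```shell
# Allow user Pods to be scheduled on the master
# (testing / single-node clusters only; not for production)
kubectl taint nodes --all node-role.kubernetes.io/master-
```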
Your Kubernetes master has initialized successfully!
You can now join any number of machines by running the following on each node
kubeadm join 172.16.94.15:6443 --token 2a71vm.aat5o5vd0eip9yrx --discovery-token-ca-cert-hash sha256:57b64257181341928e60548314f28aa0d2b15f4d81bf9ae9afdae0cee6baf247
The output from your cluster creation is very important. It gives you the commands needed to access your cluster as a non-root user, to create your Pod network, and to join worker nodes to your cluster (go ahead and copy it into a text file right now). Let’s go through each of those together.
Configuring your cluster for access from the master node as a non-privileged user
This will allow you to log into your system with a regular account and administer your cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
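To confirm the copied config works, run a couple of read-only commands as your regular (non-root) user; both should answer without errors:

```shell
# Verify the API Server endpoint is reachable with the new kubeconfig
kubectl cluster-info

# At this point only the master is listed, and it will report NotReady
# until the Pod network is deployed in the next section
kubectl get nodes
```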
Create your Pod network
Now that your cluster is created, you can deploy the YAML files for your Pod network. You must do this before adding more nodes to your cluster and certainly before starting any Pods on those nodes. We are going to use kubectl apply -f to deploy the Pod network from the YAML files we downloaded earlier.
demo@k8s-master1:~$ kubectl apply -f rbac-kdd.yaml
demo@k8s-master1:~$ kubectl apply -f calico.yaml
demo@k8s-master1:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-6ll9j 2/2 Running 0 2m5s
kube-system coredns-576cbf47c7-8dgzl 1/1 Running 0 9m59s
kube-system coredns-576cbf47c7-cc9x2 1/1 Running 0 9m59s
kube-system etcd-k8s-master1 1/1 Running 0 8m58s
kube-system kube-apiserver-k8s-master1 1/1 Running 0 9m16s
kube-system kube-controller-manager-k8s-master1 1/1 Running 0 9m16s
kube-system kube-proxy-8z9t7 1/1 Running 0 9m59s
kube-system kube-scheduler-k8s-master1 1/1 Running 0 8m55s
Joining worker nodes to your cluster
demo@k8s-node1:~$ sudo kubeadm join 172.16.94.15:6443 --token 2a71vm.aat5o5vd0eip9yrx --discovery-token-ca-cert-hash sha256:57b64257181341928e60548314f28aa0d2b15f4d81bf9ae9afdae0cee6baf247
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "172.16.94.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.16.94.15:6443"
[discovery] Requesting info from "https://172.16.94.15:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.94.15:6443"
[discovery] Successfully established connection with API Server "172.16.94.15:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
If you didn’t save the join command from the kubeadm init output, you can retrieve the current token and recompute the CA certificate hash on the master:
demo@k8s-master1:~$ kubeadm token list
demo@k8s-master1:~$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
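If you’d rather not assemble the token and hash by hand, kubeadm can also print a ready-to-use join command in one step. This flag should be available in the release series used here, but check kubeadm token create --help if it’s missing from your build:

```shell
# Print a complete 'kubeadm join ...' command, including a fresh
# token and the CA certificate hash, ready to paste on a worker
sudo kubeadm token create --print-join-command
```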
demo@k8s-master1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready master 14m v1.12.2
k8s-node1 NotReady <none> 100s v1.12.2
k8s-node2 NotReady <none> 96s v1.12.2
k8s-node3 NotReady <none> 94s v1.12.2
After a minute or two, once the Pod network containers have started on each worker, all nodes report Ready:
demo@k8s-master1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready master 15m v1.12.2
k8s-node1 Ready <none> 2m34s v1.12.2
k8s-node2 Ready <none> 2m30s v1.12.2
k8s-node3 Ready <none> 2m28s v1.12.2