- Creates a certificate authority – Kubernetes uses certificates to secure communication between components and also to verify the identity of hosts in the cluster
- Creates configuration files – On the master, this will create configuration files for various Kubernetes cluster components
- Pulls control plane images – the services implementing the cluster components are deployed into the cluster as containers. Very cool! You can, of course, run these as local system daemons on the hosts, but kubeadm's approach is to keep them inside containers
- Bootstraps the control plane Pods – writes static Pod manifests on the master so that the control plane Pods start automatically whenever the master node starts up
- Taints the master to run only system Pods – this means the scheduler will place only system Pods on the master, not user Pods. This is ideal for production. In testing, you may want to untaint the master; you'll definitely need to do this if you're running a single-node cluster.
- Generates a bootstrap token – used to join worker nodes to the cluster
- Starts any add-ons – the most common add-ons are the DNS Pod and kube-proxy
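All of the steps above come from a single kubeadm init run on the master. A minimal invocation is sketched below; the --pod-network-cidr value is an assumption here and must match the Pod network add-on you deploy later (192.168.0.0/16 is the default in the stock calico.yaml):

```shell
# Run on the master only. The CIDR below is an assumption: it must
# match the Pod network add-on deployed later (192.168.0.0/16 is
# Calico's default in the stock manifest).
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
```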
If you see this, you’re good to go! Keep that join command handy. We’ll need it in a second.
Your Kubernetes master has initialized successfully!
You can now join any number of machines by running the following on each node
kubeadm join 172.16.94.15:6443 --token 2a71vm.aat5o5vd0eip9yrx --discovery-token-ca-cert-hash sha256:57b64257181341928e60548314f28aa0d2b15f4d81bf9ae9afdae0cee6baf247
The output from your cluster creation is very important: it gives you the commands needed to access your cluster as a non-root user, to create your Pod network, and to join worker nodes to your cluster (go ahead and copy it into a text file right now). Let's go through each of those together.
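If you do lose that text file, the join command can be regenerated on the master later. Note that bootstrap tokens expire (24 hours by default), so this mints a fresh one:

```shell
# Print a complete join command with a newly created bootstrap token.
# Tokens have the form <6 chars>.<16 chars> and expire after 24 hours
# by default.
sudo kubeadm token create --print-join-command
```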
Configuring your cluster for access from the master node as a non-privileged user
This will allow you to log into your system with a regular account and administer your cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
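If you only need temporary access from a root shell, a common alternative to copying the file is to point kubectl at the admin config directly for the current session (the path below is kubeadm's default admin config location):

```shell
# Root-only, per-session alternative to copying admin.conf:
# kubectl reads its config path from the KUBECONFIG variable.
export KUBECONFIG=/etc/kubernetes/admin.conf
```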
Create your Pod network
Now that your cluster is created, you can deploy the YAML files for your Pod network. You must do this prior to adding more nodes to your cluster, and certainly before starting any Pods on those nodes. We are going to use kubectl apply -f to deploy the Pod network from the YAML files we downloaded earlier.
demo@k8s-master1:~$ kubectl apply -f rbac-kdd.yaml
demo@k8s-master1:~$ kubectl apply -f calico.yaml
Before moving forward, check for the creation of the Calico Pods and the DNS Pods; once they are created and their STATUS is Running, you can proceed. In this output you can also see the other components of your Kubernetes cluster: the containers running etcd, the API Server, the Controller Manager, kube-proxy and the Scheduler.
demo@k8s-master1:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-6ll9j 2/2 Running 0 2m5s
kube-system coredns-576cbf47c7-8dgzl 1/1 Running 0 9m59s
kube-system coredns-576cbf47c7-cc9x2 1/1 Running 0 9m59s
kube-system etcd-k8s-master1 1/1 Running 0 8m58s
kube-system kube-apiserver-k8s-master1 1/1 Running 0 9m16s
kube-system kube-controller-manager-k8s-master1 1/1 Running 0 9m16s
kube-system kube-proxy-8z9t7 1/1 Running 0 9m59s
kube-system kube-scheduler-k8s-master1 1/1 Running 0 8m55s
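Instead of eyeballing the STATUS column, you can script the check. The sketch below assumes the same kubectl context as above and simply exits non-zero if any Pod in the listing is not yet Running:

```shell
# Fail (exit non-zero) if any pod in any namespace is not Running yet.
# Column 4 of `kubectl get pods --all-namespaces --no-headers` is STATUS.
kubectl get pods --all-namespaces --no-headers |
  awk '$4 != "Running" { bad = 1 } END { exit bad }'
```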
Joining worker nodes to your cluster
Now, on each of the worker nodes, let's use kubeadm join to join the worker nodes to the cluster. Go back to the output of kubeadm init and copy the join command from that output, being sure to put sudo on the front before you run it on each node. The process below is called a TLS bootstrap: it securely joins the node to the cluster over TLS, validates the API Server against the pinned CA certificate hash, and issues the node its own client certificate via a certificate signing request.
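That long sha256 value in the join command is a pin of the cluster CA's public key; the joining node uses it to verify it is talking to the right API Server. If you ever need it again, it can be recomputed on the master. This is a sketch assuming the default kubeadm CA location:

```shell
# Recompute the discovery token CA cert hash from the cluster CA
# certificate (default kubeadm path assumed). The output is the
# 64-hex-character value that follows "sha256:" in the join command.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt |
  openssl rsa -pubin -outform der 2>/dev/null |
  openssl dgst -sha256 -hex | sed 's/^.* //'
```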
demo@k8s-node1:~$ sudo kubeadm join 172.16.94.15:6443 --token 2a71vm.aat5o5vd0eip9yrx --discovery-token-ca-cert-hash sha256:57b64257181341928e60548314f28aa0d2b15f4d81bf9ae9afdae0cee6baf247
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server “172.16.94.15:6443”
[discovery] Created cluster-info discovery client, requesting info from “https://172.16.94.15:6443”
[discovery] Requesting info from “https://172.16.94.15:6443” again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server “172.16.94.15:6443”
[discovery] Successfully established connection with API Server “172.16.94.15:6443”
[kubelet] Downloading configuration for the kubelet from the “kubelet-config-1.12” ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet] Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap…
[patchnode] Uploading the CRI Socket information “/var/run/dockershim.sock” to the Node API object “k8s-node1” as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the master to see this node join the cluster.