Install Kubernetes locally.

The purpose of this blog is to gain more hands-on knowledge of running a Kubernetes cluster in a production-grade setup. The complexity sits between installing Minikube and building a multi-master, highly available cluster. It is a sweet spot for getting a closer look at how Kubernetes actually works, instead of just clicking "Create Cluster" on GKE.

Project overview.

$ vagrant -v
Vagrant 2.2.7
$ vboxmanage -v
6.1.4r136177

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

$updatePackages = <<-SCRIPT
# yum update * -y
echo "pretending to update (for time-saving's sake)"
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.define "master" do |master|
    master.vm.box = "centos/7"
    master.vm.box_version = "1905.1"
    master.vm.hostname = "master"
    master.vm.provision :shell, inline: "sed 's/127\.0\.0\.1.*master.*/172\.42\.42\.99 master/' -i /etc/hosts"
    master.vm.network "private_network", ip: "172.42.42.99"
  end

  (1..3).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.box = "centos/7"
      node.vm.box_version = "1905.1"
      node.vm.hostname = "node#{i}"
      node.vm.provision :shell, inline: "sed 's/127\.0\.0\.1.*node#{i}.*/172\.42\.42\.#{i}0 node#{i}/' -i /etc/hosts"
      node.vm.network "private_network", ip: "172.42.42.#{i}0"
    end
  end

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
    vb.cpus = 2
  end

  config.vm.provision "shell", inline: $updatePackages
end
vagrant up
$ vagrant status
Current machine states:

master                    running (virtualbox)
node1                     not created (virtualbox)
node2                     not created (virtualbox)
node3                     not created (virtualbox)
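Only the master was up at this point; the remaining machines can be brought up afterwards. If you prefer to start them explicitly, the machine names are the ones defined in the Vagrantfile above:

$ vagrant up node1 node2 node3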
$ vagrant status
Current machine states:

master                    running (virtualbox)
node1                     running (virtualbox)
node2                     running (virtualbox)
node3                     running (virtualbox)
$ vagrant ssh master

# This will create a problem when pods are accessed via their service IP.
[vagrant@master ~]$ hostname -i
127.0.0.1

# Change from
[vagrant@master ~]$ cat /etc/hosts
127.0.0.1   master  master
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# to
[root@master vagrant]# cat /etc/hosts
172.42.42.99   master  master
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# Expected output.
[vagrant@master ~]$ hostname -i
172.42.42.99
[vagrant@node1 ~]$ hostname -i
172.42.42.10
[vagrant@node2 ~]$ hostname -i
172.42.42.20
[vagrant@node3 ~]$ hostname -i
172.42.42.30
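Instead of editing /etc/hosts by hand on every VM, a sed one-liner can apply the same fix. This is only a sketch assuming the addressing scheme above (172.42.42.99 for the master, 172.42.42.X0 for nodeX); adjust the hostname and IP per machine:

# on the master
sudo sed -i 's/^127\.0\.0\.1\s\+master.*/172.42.42.99 master master/' /etc/hosts
# on node1
sudo sed -i 's/^127\.0\.0\.1\s\+node1.*/172.42.42.10 node1 node1/' /etc/hosts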

Install a container runtime on all nodes, including the master node.

$ vagrant ssh node1
[vagrant@node1 ~]$

[vagrant@node1 ~]$ sudo su
[root@node1 vagrant]#

Install Container Runtime - Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum update -y && yum install -y \
  containerd.io \
  docker-ce \
  docker-ce-cli
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
[root@node1 vagrant]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
systemctl enable docker
systemctl daemon-reload
systemctl restart docker
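A quick sanity check (not part of the original steps) confirms Docker is running and picked up the systemd cgroup driver from daemon.json; the output should include "Cgroup Driver: systemd":

docker info | grep -i "cgroup driver"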

Prepare all VMs (master and nodes) before installing kubeadm.

$ vagrant ssh master
sudo su
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
modprobe br_netfilter
[root@master vagrant]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
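modprobe only loads the module for the current boot. To make it survive a VM restart (an extra step not shown in the original write-up), declare it in modules-load.d:

cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF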
# Only needed on distros where iptables defaults to the nftables backend;
# on stock CentOS 7 this alternative does not exist and the step can be skipped.
update-alternatives --set iptables /usr/sbin/iptables-legacy
yum install net-tools -y
netstat -ntlp | grep -e "6443.|2379|2380|10250|1025[1-2]"
netstat -ntlp | grep -e "10250|3[0-1][0-9][0-9][0-9]|32[0-6][0-9][0-9]|327[0-5][0-9]|3276[0-7]"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
systemctl daemon-reload
systemctl restart kubelet
systemctl enable docker.service
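The install command above pulls whatever version is newest in the repo. To sidestep the version-skew problem described at the end of this post, the version can be pinned explicitly (1.17.4-0 is just the version that was current at the time of writing):

yum install -y kubelet-1.17.4-0 kubeadm-1.17.4-0 kubectl-1.17.4-0 --disableexcludes=kubernetes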

Installing kubeadm and kubelet on nodes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
modprobe br_netfilter
yum install -y kubelet kubeadm --disableexcludes=kubernetes
systemctl enable --now kubelet
systemctl daemon-reload
systemctl restart kubelet

Create a single control-plane cluster with kubeadm

# Check swap.
[root@node1 vagrant]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.8G        170M        1.2G        8.5M        437M        1.4G
Swap:          2.0G          0B        2.0G

# Turn it off.
[root@node1 vagrant]# swapoff -a

# Recheck; swap should now show 0B used.
[root@node1 vagrant]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.8G        196M        1.2G        8.6M        452M        1.4G
Swap:            0B          0B          0B
[root@node1 vagrant]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Jun  1 17:13:31 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=8ac075e3-1124-4bb6-bef7-a6811bf8b870 /   xfs   defaults   0 0
/swapfile   none   swap   defaults   0 0
[root@node1 vagrant]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Jun  1 17:13:31 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=8ac075e3-1124-4bb6-bef7-a6811bf8b870 /   xfs   defaults   0 0
#/swapfile   none   swap   defaults   0 0
[root@node1 vagrant]# rm /swapfile
rm: remove regular file ‘/swapfile’? y
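The manual fstab edit and swapoff can also be scripted; a rough one-liner that comments out any swap entry and disables swap in one go (double-check /etc/fstab before trusting it):

swapoff -a && sed -i '/\sswap\s/ s/^/#/' /etc/fstab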
[root@master vagrant]# kubeadm config images pull

With Flannel Container Network Interface (CNI)

[root@master vagrant]# kubeadm init --apiserver-advertise-address=172.42.42.99 --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=172.42.42.99

[vagrant@master ~]$ mkdir -p $HOME/.kube
[vagrant@master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[vagrant@master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[vagrant@master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-bs5vv          0/1     Pending   0          79s
kube-system   coredns-6955765f44-pnz5j          0/1     Pending   0          79s
kube-system   etcd-master                       1/1     Running   0          96s
kube-system   kube-apiserver-master             1/1     Running   0          96s
kube-system   kube-controller-manager-master    1/1     Running   0          95s
kube-system   kube-proxy-t6xc6                  1/1     Running   0          78s
kube-system   kube-scheduler-master             1/1     Running   0          96s
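The CoreDNS pods stay Pending until a pod network add-on is installed. The original capture does not show the apply step, but with Flannel it typically looked like the command below at the time (the manifest URL is an assumption and has since moved to the flannel-io organisation):

[vagrant@master ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml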
[vagrant@master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-bs5vv          1/1     Running   0          21m
kube-system   coredns-6955765f44-pnz5j          1/1     Running   0          21m
kube-system   etcd-master                       1/1     Running   0          21m
kube-system   kube-apiserver-master             1/1     Running   0          21m
kube-system   kube-controller-manager-master    1/1     Running   0          21m
kube-system   kube-flannel-ds-amd64-v9cd7       1/1     Running   0          59s
kube-system   kube-proxy-t6xc6                  1/1     Running   0          21m
kube-system   kube-scheduler-master             1/1     Running   0          21m
[vagrant@master ~]$ kubectl -n kube-system describe pods/kube-flannel-ds-amd64-v9cd7
[vagrant@master ~]$ kubeadm token create --print-join-command
[root@node1 vagrant]# kubeadm join 172.42.42.99:6443 --token z8h6fl.9z3steltu2h3g388 --discovery-token-ca-cert-hash sha256:3f439f86613aa762631d80003cca3fbcffe2d9367f9e7327effc588452c7b48b
[vagrant@master ~]$ kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   32m   v1.17.4
node1    NotReady   <none>   5s    v1.17.4
[vagrant@master ~]$ kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   33m   v1.17.4
node1    Ready    <none>   65s   v1.17.4
[vagrant@master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-bs5vv          1/1     Running   0          33m
kube-system   coredns-6955765f44-pnz5j          1/1     Running   0          33m
kube-system   etcd-master                       1/1     Running   0          34m
kube-system   kube-apiserver-master             1/1     Running   0          34m
kube-system   kube-controller-manager-master    1/1     Running   0          34m
kube-system   kube-flannel-ds-amd64-v9cd7       1/1     Running   0          13m
kube-system   kube-flannel-ds-amd64-vsnf9       1/1     Running   0          91s
kube-system   kube-proxy-hcv8q                  1/1     Running   0          91s
kube-system   kube-proxy-t6xc6                  1/1     Running   0          33m
kube-system   kube-scheduler-master             1/1     Running   0          34m

Ref :

Reset k8s cluster.

[vagrant@master ~]$ kubectl drain node1 --delete-local-data --force --ignore-daemonsets
[vagrant@master ~]$ kubectl drain node2 --delete-local-data --force --ignore-daemonsets
[vagrant@master ~]$ kubectl drain node3 --delete-local-data --force --ignore-daemonsets
[vagrant@master ~]$ kubectl delete node node1 node2 node3
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

We are going to clear the cluster state so the same VMs can be reused with the Calico CNI.

[root@master vagrant]# kubeadm reset
[root@master vagrant]# rm -rf /etc/cni/net.d
[root@master vagrant]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
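The joined workers keep their own state as well, so it is worth running the same cleanup on each of them before rejoining (a sketch; run as root on node1, node2 and node3):

kubeadm reset -f
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X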

With Calico Container Network Interface (CNI)

kubeadm init --apiserver-advertise-address=172.42.42.99 --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint=172.42.42.99
...
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.42.42.99:6443 --token ov3mxl.e9z9yao71n83an17 \
    --discovery-token-ca-cert-hash sha256:5fbd51df93b3d254fdc3992abd2de652a61b4c1749f853c7b7ae4dacae6e0b23
[vagrant@master ~]$ mkdir -p $HOME/.kube
[vagrant@master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[vagrant@master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[vagrant@master ~]$ kubectl get nodes
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   4m8s   v1.17.4

[vagrant@master ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-788d6b9876-fpfxv   0/1     ContainerCreating   0          36s
kube-system   calico-node-4xv48                          0/1     PodInitializing     0          37s
kube-system   coredns-6955765f44-dbrxh                   0/1     Pending             0          8m13s
kube-system   coredns-6955765f44-wv2xt                   0/1     Pending             0          8m13s
kube-system   etcd-master                                1/1     Running             0          8m28s
kube-system   kube-apiserver-master                      1/1     Running             0          8m28s
kube-system   kube-controller-manager-master             1/1     Running             0          8m28s
kube-system   kube-proxy-r4mdf                           1/1     Running             0          8m13s
kube-system   kube-scheduler-master                      1/1     Running             0          8m28s
[vagrant@master ~]$ kubectl apply -f https://docs.projectcalico.org/v3.13/manifests/calico.yaml
[vagrant@master ~]$ kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   9m13s   v1.17.4

[vagrant@master ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-788d6b9876-fpfxv   1/1     Running   0          98s
kube-system   calico-node-4xv48                          1/1     Running   0          99s
kube-system   coredns-6955765f44-dbrxh                   1/1     Running   0          9m15s
kube-system   coredns-6955765f44-wv2xt                   1/1     Running   0          9m15s
kube-system   etcd-master                                1/1     Running   0          9m30s
kube-system   kube-apiserver-master                      1/1     Running   0          9m30s
kube-system   kube-controller-manager-master             1/1     Running   0          9m30s
kube-system   kube-proxy-r4mdf                           1/1     Running   0          9m15s
kube-system   kube-scheduler-master                      1/1     Running   0          9m30s
[vagrant@master ~]$ kubeadm token create --print-join-command
W0321 02:09:09.266806   25256 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0321 02:09:09.267153   25256 validation.go:28] Cannot validate kubelet config - no validator is available
kubeadm join 172.42.42.99:6443 --token t7gyox.3olrli90wr5vsykc --discovery-token-ca-cert-hash sha256:769c0a302fa6334518ade9353191bc2905d9db03ff967f579c353d9333c6a58e
[root@node1 vagrant]# kubeadm join 172.42.42.99:6443 --token t7gyox.3olrli90wr5vsykc --discovery-token-ca-cert-hash sha256:769c0a302fa6334518ade9353191bc2905d9db03ff967f579c353d9333c6a58e
[vagrant@master ~]$ kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   11m   v1.17.4
node1    Ready    <none>   63s   v1.17.4
[root@node2 vagrant]# kubeadm join 172.42.42.99:6443 --token vfg5sv.t7pvcufta8qp2ieh --discovery-token-ca-cert-hash sha256:5fbd51df93b3d254fdc3992abd2de652a61b4c1749f853c7b7ae4dacae6e0b23
[root@node3 vagrant]# kubeadm join 172.42.42.99:6443 --token 7z7z1i.zo26loeufmfuwysr --discovery-token-ca-cert-hash sha256:5fbd51df93b3d254fdc3992abd2de652a61b4c1749f853c7b7ae4dacae6e0b23
[vagrant@master ~]$ kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   14m     v1.17.4
node1    Ready    <none>   3m28s   v1.17.4
node2    Ready    <none>   88s     v1.17.4
node3    Ready    <none>   69s     v1.17.4
[vagrant@master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-788d6b9876-fpfxv   1/1     Running   0          8m25s
kube-system   calico-node-4xtgd                          1/1     Running   0          4m26s
kube-system   calico-node-4xv48                          1/1     Running   0          8m26s
kube-system   calico-node-ksj2w                          1/1     Running   0          104s
kube-system   coredns-6955765f44-dbrxh                   1/1     Running   0          16m
kube-system   coredns-6955765f44-wv2xt                   1/1     Running   0          16m
kube-system   etcd-master                                1/1     Running   0          16m
kube-system   kube-apiserver-master                      1/1     Running   0          16m
kube-system   kube-controller-manager-master             1/1     Running   0          16m
kube-system   kube-proxy-2rv9q                           1/1     Running   0          104s
kube-system   kube-proxy-r4mdf                           1/1     Running   0          16m
kube-system   kube-proxy-x56vn                           1/1     Running   0          4m26s
kube-system   kube-scheduler-master                      1/1     Running   0          16m

Test drive

[vagrant@master ~]$ cat webserver.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
[vagrant@master ~]$ kubectl apply -f webserver.yaml
deployment.apps/webserver created
[vagrant@master ~]$ kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
webserver-5c559d5697-9lzhj   1/1     Running   0          38s   192.168.104.1     node2   <none>           <none>
webserver-5c559d5697-ljr2l   1/1     Running   0          38s   192.168.166.129   node1   <none>           <none>
webserver-5c559d5697-qvj54   1/1     Running   0          38s   192.168.104.2     node2   <none>           <none>
[vagrant@master ~]$ curl 192.168.166.129
<!DOCTYPE html>...
[vagrant@master ~]$ curl 192.168.104.1
<!DOCTYPE html>...
[vagrant@master ~]$ cat webserver-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: web-service
  name: web-service
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
  type: NodePort
[vagrant@master ~]$ kubectl apply -f webserver-svc.yaml
service/web-service created
[vagrant@master ~]$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP        28m
web-service   NodePort    10.100.113.150   <none>        80:30695/TCP   37s
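Because web-service is a NodePort service, the nginx page should also be reachable from outside the cluster on any node's private IP at the allocated port (30695 in the listing above; the port is assigned from the NodePort range, so yours will differ):

curl http://172.42.42.99:30695
curl http://172.42.42.10:30695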


Dashboard.

Install the dashboard using the recommended manifest from the kubernetes/dashboard project:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml
[vagrant@master ~]$ kubectl -n kubernetes-dashboard get pod
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b8b58dc8b-w4bgd   1/1     Running   0          56s
kubernetes-dashboard-5f5f847d57-vtbmn        1/1     Running   0          56s
[vagrant@master ~]$ kubectl proxy --address=0.0.0.0 --accept-hosts='172.42.42.99'
Starting to serve on [::]:8001
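Once the proxy is serving, the dashboard login page is available under the standard proxy path of the kubernetes-dashboard service; with the address used above, that is roughly:

http://172.42.42.99:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/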


[vagrant@master ~]$ cat admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
[vagrant@master ~]$ kubectl get ServiceAccount --all-namespaces | grep admin-user
# no result.
[vagrant@master ~]$ kubectl apply -f admin-user.yaml
serviceaccount/admin-user created

[vagrant@master ~]$ kubectl get ServiceAccount --all-namespaces | grep admin-user
kubernetes-dashboard   admin-user   1   8s
[vagrant@master ~]$ cat admin-user-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
[vagrant@master ~]$ kubectl apply -f admin-user-role-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
[vagrant@master ~]$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-v2jt7
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 40c43c90-db39-404f-aa04-575308c20b1a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImZDd0dhQXBSVHkzWVNTOGlTUXhfOGZTYmpZRmNoaWlCUUozUHp5SlJ3N0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXYyanQ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0MGM0M2M5MC1kYjM5LTQwNGYtYWEwNC01NzUzMDhjMjBiMWEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.A4vFLxvCLWwLoS1iuQImBsKs9BBi7FsTOH-Pi_93qHRi7sJOKJhtxjQTG0_WPFX2V_belhvd9_16rrVN5uwufpzTV3jYP1fYXeVwATwAsprFlWp79-57UbL1My61oGEsOU0ClCCCI7suqY2a2Jk05emoTuIjk7FmK7m4pwJj2vqMAzgvlJrkj1O27oBVXyVM2Jm6x-Yg1wmP_2g3Y0hCXZ0mquqwMb1BNJjFEUf8gkTgtOQZFALP896cT5UkeFf85NiN6de0d2E9ArfB553uMTn6G3TwcIzY058SY9cYERYilKHMYT5R_cDb5tIxOst7o4dk2TVbc5oth_XRu-N3vg


NOTE: The dashboard should not be exposed publicly via the kubectl proxy command, as it only allows HTTP connections. For domains other than localhost and 127.0.0.1 it will not be possible to sign in; nothing will happen after clicking the Sign in button on the login page.

# Change spec.type from ClusterIP to NodePort so the dashboard is reachable outside the cluster.
kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
[vagrant@master ~]$ kubectl get pod --show-labels --all-namespaces -o wide | grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-7b8b58dc8b-w4bgd   1/1   Running   0   152m   192.168.104.3     node2   <none>   <none>   k8s-app=dashboard-metrics-scraper,pod-template-hash=7b8b58dc8b
kubernetes-dashboard   kubernetes-dashboard-5f5f847d57-vtbmn        1/1   Running   0   152m   192.168.166.130   node1   <none>   <none>   k8s-app=kubernetes-dashboard,pod-template-hash=5f5f847d57
[vagrant@master ~]$ kubectl get service -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.111.186.137   <none>        8000/TCP        157m
kubernetes-dashboard        NodePort    10.100.130.210   <none>        443:31432/TCP   157m
[root@node1 vagrant]# ip route | grep eth1
172.42.42.0/24 dev eth1 proto kernel scope link src 172.42.42.10 metric 101
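Putting the two listings together, the dashboard is now served over HTTPS on any node's private IP at the allocated NodePort (31432 here; yours will differ), and the token from the admin-user secret is used to sign in. The certificate is self-signed, so the browser has to accept it, or curl needs -k; a sketch using node1's address:

https://172.42.42.10:31432/
curl -k https://172.42.42.10:31432/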



When joining a node after the k8s packages have been updated.

Situation.

I had been using three nodes [master, node1, node2] for a while. Later on, I decided to join a new node [node3] to the cluster. By that time, the Kubernetes packages in the yum repo had moved from 1.17.4-0 to 1.18.0-0. Without realizing this, the installation of kubelet and kubeadm on node3 picked up the latest version, i.e. 1.18.0-0. When I tried to join the new node I got this error:

[root@node3 vagrant]# kubeadm join 172.42.42.99:6443 --token 0z3hm2.hwh087wzg9u1jvxu --discovery-token-ca-cert-hash sha256:769c0a302fa6334518ade9353191bc2905d9db03ff967f579c353d9333c6a58e
W0331 06:20:14.644517   31403 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase kubelet-start: cannot get Node "node3": nodes "node3" is forbidden: User "system:bootstrap:0z3hm2" cannot get resource "nodes" in API group "" at the cluster scope
To see the stack trace of this error execute with --v=5 or higher
[root@node3 vagrant]# kubeadm join 172.42.42.99:6443 --token hkb1zs.e4mcmlw5m4thfu8p --discovery-token-ca-cert-hash sha256:769c0a302fa6334518ade9353191bc2905d9db03ff967f579c353d9333c6a58e -v 5
W0331 06:34:10.201430   32071 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
I0331 06:34:10.201494   32071 join.go:371] [preflight] found NodeName empty; using OS hostname as NodeName
I0331 06:34:10.201650   32071 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0331 06:34:10.201743   32071 preflight.go:90] [preflight] Running general checks
I0331 06:34:10.201833   32071 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0331 06:34:10.201870   32071 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0331 06:34:10.201879   32071 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0331 06:34:10.201888   32071 checks.go:102] validating the container runtime
I0331 06:34:10.318510   32071 checks.go:128] validating if the service is enabled and active
I0331 06:34:10.445729   32071 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0331 06:34:10.445851   32071 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0331 06:34:10.445892   32071 checks.go:649] validating whether swap is enabled or not
I0331 06:34:10.445960   32071 checks.go:376] validating the presence of executable conntrack
I0331 06:34:10.446001   32071 checks.go:376] validating the presence of executable ip
I0331 06:34:10.446285   32071 checks.go:376] validating the presence of executable iptables
I0331 06:34:10.446297   32071 checks.go:376] validating the presence of executable mount
I0331 06:34:10.446314   32071 checks.go:376] validating the presence of executable nsenter
I0331 06:34:10.446325   32071 checks.go:376] validating the presence of executable ebtables
I0331 06:34:10.446333   32071 checks.go:376] validating the presence of executable ethtool
I0331 06:34:10.446341   32071 checks.go:376] validating the presence of executable socat
I0331 06:34:10.446352   32071 checks.go:376] validating the presence of executable tc
I0331 06:34:10.446361   32071 checks.go:376] validating the presence of executable touch
I0331 06:34:10.446381   32071 checks.go:520] running all checks
I0331 06:34:10.560088   32071 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0331 06:34:10.560280   32071 checks.go:618] validating kubelet version
I0331 06:34:10.655155   32071 checks.go:128] validating if the service is enabled and active
I0331 06:34:10.664753   32071 checks.go:201] validating availability of port 10250
I0331 06:34:10.665002   32071 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0331 06:34:10.665016   32071 checks.go:432] validating if the connectivity type is via proxy or direct
I0331 06:34:10.665049   32071 join.go:441] [preflight] Discovering cluster-info
I0331 06:34:10.665077   32071 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "172.42.42.99:6443"
I0331 06:34:10.677002   32071 token.go:116] [discovery] Requesting info from "172.42.42.99:6443" again to validate TLS against the pinned public key
I0331 06:34:10.685736   32071 token.go:133] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.42.42.99:6443"
I0331 06:34:10.685783   32071 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0331 06:34:10.685795   32071 join.go:455] [preflight] Fetching init configuration
I0331 06:34:10.685955   32071 join.go:493] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0331 06:34:10.703579   32071 interface.go:400] Looking for default routes with IPv4 addresses
I0331 06:34:10.703615   32071 interface.go:405] Default route transits interface "eth0"
I0331 06:34:10.703864   32071 interface.go:208] Interface eth0 is up
I0331 06:34:10.704020   32071 interface.go:256] Interface "eth0" has 2 addresses :[10.0.2.15/24 fe80::5054:ff:fe8a:fee6/64].
I0331 06:34:10.704045   32071 interface.go:223] Checking addr 10.0.2.15/24.
I0331 06:34:10.704056   32071 interface.go:230] IP found 10.0.2.15
I0331 06:34:10.704068   32071 interface.go:262] Found valid IPv4 address 10.0.2.15 for interface "eth0".
I0331 06:34:10.704079   32071 interface.go:411] Found active IP 10.0.2.15
I0331 06:34:10.704126   32071 preflight.go:101] [preflight] Running configuration dependant checks
I0331 06:34:10.704137   32071 controlplaneprepare.go:211] [download-certs] Skipping certs download
I0331 06:34:10.704150   32071 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0331 06:34:10.706406   32071 kubelet.go:119] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I0331 06:34:10.707463   32071 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "node3" and status "Ready"
nodes "node3" is forbidden: User "system:bootstrap:hkb1zs" cannot get resource "nodes" in API group "" at the cluster scope
cannot get Node "node3"
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runKubeletStartJoinPhase
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/kubelet.go:148
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
...
...

Solution.

[vagrant@master ~]$ yum list kubelet kubeadm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror1.ku.ac.th
 * extras: mirror1.ku.ac.th
 * updates: mirror1.ku.ac.th
Installed Packages
kubeadm.x86_64   1.17.4-0   @kubernetes
kubelet.x86_64   1.17.4-0   @kubernetes
[root@node3 vagrant]# yum list kubelet kubeadm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror2.totbb.net
 * extras: mirror2.totbb.net
 * updates: mirror2.totbb.net
Available Packages
kubeadm.x86_64   1.18.0-0   kubernetes
kubelet.x86_64   1.18.0-0   kubernetes
[root@node3 vagrant]# yum install -y kubelet-1.17.4-0 kubeadm-1.17.4-0 --disableexcludes=kubernetes
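To keep yum from silently jumping to a newer minor release on a later install or update, the packages can be excluded from regular updates; this is also why the install commands above pass --disableexcludes=kubernetes. Appending an exclude line to the kubernetes.repo file created earlier is one way to do it (yum-plugin-versionlock is an alternative):

echo "exclude=kubelet kubeadm kubectl" >> /etc/yum.repos.d/kubernetes.repo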

Creating Highly Available clusters with kubeadm

Simply follow the link below.

Creating Highly Available clusters with kubeadm