k8s 06: controller-manager

In this entry, I’m going to install controller-manager, another key component of kubernetes. Its role is to keep the cluster and its pods in the desired, healthy state: it runs the control loops (node controller, replication controller, and so on) that watch the API server and reconcile what actually exists with what was declared.

Today’s goal is to make our cluster look like this:

(diagram: controller-manager installed on the master node)

1. Install controller-manager

The kube-controller-manager binary is again in the same directory I extracted from the latest kubernetes release package.

shogo@controller-1:~$ cd work
shogo@controller-1:~/work$ sudo su
root@controller-1:/home/k_shogo/work# mv ./kubernetes/server/bin/kube-controller-manager /usr/local/bin/
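Before wiring it into systemd, a quick sanity check that the binary runs; it should print the version of the release you downloaded:

root@controller-1:/home/k_shogo/work# kube-controller-manager --version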

2. Configure controller-manager

I need to pass the API server address so that controller-manager can talk to its master. The kube-scheduler unit file already does exactly that, so I copy it and adjust the names.

root@controller-1:/home/k_shogo/work# cp /etc/systemd/system/kube-scheduler.service /etc/systemd/system/kube-controller-manager.service
root@controller-1:/home/k_shogo/work# vim /etc/systemd/system/kube-controller-manager.service
root@controller-1:/home/k_shogo/work# cat /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager  --master=http://127.0.0.1:8080
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
root@controller-1:/home/k_shogo/work# systemctl daemon-reload
root@controller-1:/home/k_shogo/work# systemctl start kube-controller-manager
root@controller-1:/home/k_shogo/work# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager Server
   Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; disabled; vendor pres
   Active: active (running) since Wed 2018-09-26 14:43:46 UTC; 9s ago
     Docs: https://github.com/kubernetes/kubernetes
Main PID: 2842 (kube-controller)
    Tasks: 6
   Memory: 76.1M
      CPU: 314ms
   CGroup: /system.slice/kube-controller-manager.service
           └─2842 /usr/local/bin/kube-controller-manager --master=http://127.0.0.1:8080
Sep 26 14:43:50 controller-1 kube-controller-manager[2842]: I0926 14:43:50.330984    2842 con
Sep 26 14:43:50 controller-1 kube-controller-manager[2842]: I0926 14:43:50.335608    2842 con
Sep 26 14:43:50 controller-1 kube-controller-manager[2842]: I0926 14:43:50.369923    2842 con
Sep 26 14:43:50 controller-1 kube-controller-manager[2842]: I0926 14:43:50.421479    2842 con
Sep 26 14:43:50 controller-1 kube-controller-manager[2842]: I0926 14:43:50.421868    2842 gar
Sep 26 14:43:50 controller-1 kube-controller-manager[2842]: I0926 14:43:50.467122    2842 con
Sep 26 14:43:51 controller-1 kube-controller-manager[2842]: I0926 14:43:51.049999    2842 con
Sep 26 14:43:51 controller-1 kube-controller-manager[2842]: I0926 14:43:51.150632    2842 con
Sep 26 14:43:52 controller-1 kube-controller-manager[2842]: I0926 14:43:52.512838    2842 con
Sep 26 14:43:52 controller-1 kube-controller-manager[2842]: I0926 14:43:52.613636    2842 con
root@controller-1:/home/k_shogo/work#
root@controller-1:/home/k_shogo/work# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /etc/systemd/system/kube-controller-manager.service.
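A note on that --master flag: http://127.0.0.1:8080 works here only because the API server is still serving its insecure port on localhost. In a properly secured cluster you would hand controller-manager a kubeconfig instead; a minimal sketch of the ExecStart in that case (the kubeconfig path is made up for illustration, the flags themselves are real):

# hypothetical path; the kubeconfig carries the API server URL and credentials
ExecStart=/usr/local/bin/kube-controller-manager \
  --kubeconfig=/var/lib/kube-controller-manager/kubeconfig \
  --leader-elect=true

--leader-elect=true makes sure only one copy is active if you later run controller-manager on more than one master.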

3. Check

So let’s check, step by step, that it works.

  • all the components look fine on the controller node
shogo@controller-1:~$ kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                 
scheduler            Healthy   ok                 
etcd-0               Healthy   {"health":"true"}
  • and the node status is correctly updated when I shut down worker-1 (the node controller inside controller-manager marks it NotReady once its heartbeats stop)
shogo@controller-1:~$ watch kubectl get nodes
Every 2.0s: kubectl get nodes
NAME        STATUS     ROLES     AGE       VERSION
worker-02   Ready      <none>    23h       v1.11.3
worker-1    NotReady   <none>    23h       v1.11.3
  • furthermore, deployments and scaling work now, as the run below shows (cleanup follows the output)
shogo@controller-1:~$ kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
worker-02   Ready     <none>    1d        v1.11.3
worker-1    Ready     <none>    1d        v1.11.3
shogo@controller-1:~$ kubectl run nginx-deploy --image=nginx
deployment.apps "nginx-deploy" created
shogo@controller-1:~$ kubectl get deploy
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1         1         1            1           9s
shogo@controller-1:~$ kubectl get rs
NAME                     DESIRED   CURRENT   READY     AGE
nginx-deploy-585856bc9   1         1         1         17s
shogo@controller-1:~$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
nginx-deploy-585856bc9-8vj4q   1/1       Running   0          21s
shogo@controller-1:~$ kubectl scale deploy/nginx-deploy --replicas=5
deployment.extensions "nginx-deploy" scaled
shogo@controller-1:~$ kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-deploy-585856bc9-544zv   1/1       Running   0          11s       172.17.0.3   worker-02
nginx-deploy-585856bc9-8mzkq   1/1       Running   0          11s       172.17.0.2   worker-1
nginx-deploy-585856bc9-8vj4q   1/1       Running   0          59s       172.17.0.2   worker-02
nginx-deploy-585856bc9-bbtrf   1/1       Running   0          11s       172.17.0.3   worker-1
nginx-deploy-585856bc9-mj2rf   1/1       Running   0          11s       172.17.0.4   worker-1
shogo@controller-1:~$
shogo@controller-1:~$ kubectl delete pods/nginx-deploy-585856bc9-544zv
pod "nginx-deploy-585856bc9-544zv" deleted
shogo@controller-1:~$ kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-deploy-585856bc9-8mzkq   1/1       Running   0          49s       172.17.0.2   worker-1
nginx-deploy-585856bc9-8vj4q   1/1       Running   0          1m        172.17.0.2   worker-02
nginx-deploy-585856bc9-bbtrf   1/1       Running   0          49s       172.17.0.3   worker-1
nginx-deploy-585856bc9-mj2rf   1/1       Running   0          49s       172.17.0.4   worker-1
nginx-deploy-585856bc9-qjkq7   1/1       Running   0          6s        172.17.0.4   worker-02
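Once the checks pass, the test deployment can be cleaned up. Deleting the deployment alone is enough: the garbage collector, another loop inside controller-manager, removes the dependent replica set and pods for me. The confirmation line should mirror the scale output above, though the exact wording varies by version:

shogo@controller-1:~$ kubectl delete deploy/nginx-deploy
deployment.extensions "nginx-deploy" deleted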

Nice, but the last output makes it obvious that pod IP addresses are duplicated across nodes, which is not ideal. These addresses are node-local and cannot be reached from outside the node. In the next entry, I will deploy kube-proxy to make these pods accessible as a service from outside.
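The duplication happens because each node runs the default docker bridge, which hands out addresses from the same 172.17.0.0/16 range independently on every node; those addresses exist only on that node’s own bridge. A quick way to convince yourself, reusing a pod IP from the output above (hypothetical session, the responses depend on your routes):

# on worker-1 itself, the pod at 172.17.0.2 answers
shogo@worker-1:~$ curl -s http://172.17.0.2 | head -n 4
# from another machine the same address times out or hits a different
# pod entirely, because 172.17.0.0/16 is not routed between nodes
shogo@controller-1:~$ curl --max-time 3 http://172.17.0.2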