In this post, we’re going to secure the communication between the worker nodes and the API server.
The communication between the services on the worker nodes and the API server flows in both directions, so both directions need to be secured: kubelet and kube-proxy call the API server, and the API server calls back into the kubelet (for example, for kubectl exec and kubectl logs).
Secure communication between Kubelet and API Server
1. Create kubeconfig files
The certificate and key embedded in each kubeconfig file are used to authenticate requests from the worker node to the API server.
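Node authorization expects each kubelet’s client certificate to carry a subject of the form O=system:nodes, CN=system:node:&lt;nodeName&gt;, so it can be worth double-checking the certificates before embedding them. A minimal pre-check with openssl, assuming the certificate file names used in the transcript below:

# expect something like: subject=/O=system:nodes/CN=system:node:worker-1
openssl x509 -in worker-1-kubelet.pem -noout -subject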
# again, node authorization requires the default user credential (system:node:xx), so we need one kubeconfig per worker node
root@controller-1:/var/lib/kubernetes/cert# for num in 1 2; do
> kubectl config set-cluster k8s-demo \
>   --certificate-authority=ca.pem \
>   --embed-certs=true \
>   --server=https://10.240.0.11:6443 \
>   --kubeconfig=worker-${num}-kubelet.kubeconfig
>
> kubectl config set-credentials system:node:worker-${num} \
>   --client-certificate=worker-${num}-kubelet.pem \
>   --client-key=worker-${num}-kubelet-key.pem \
>   --embed-certs=true \
>   --kubeconfig=worker-${num}-kubelet.kubeconfig
>
> kubectl config set-context default \
>   --cluster=k8s-demo \
>   --user=system:node:worker-${num} \
>   --kubeconfig=worker-${num}-kubelet.kubeconfig
> done
Cluster "k8s-demo" set.
User "system:node:worker-1" set.
Context "default" created.
Cluster "k8s-demo" set.
User "system:node:worker-2" set.
Context "default" created.
root@controller-1:/var/lib/kubernetes/cert#
root@controller-1:/var/lib/kubernetes/cert# kubectl config view --kubeconfig=worker-1-kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.240.0.11:6443
  name: k8s-demo
contexts:
- context:
    cluster: k8s-demo
    user: system:node:worker-1
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:node:worker-1
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
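These kubeconfig files are generated on the controller, so they still need to end up on the workers. A minimal distribution sketch (my assumption, adjust hosts and paths to your environment), targeting the /var/lib/kubelet/worker.kubeconfig path that the kubelet service file below points at:

# hypothetical copy step, assuming root SSH access to the workers
for num in 1 2; do
  scp worker-${num}-kubelet.kubeconfig root@worker-${num}:/var/lib/kubelet/worker.kubeconfig
done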
2. Modify the systemd service file
This change is required for the worker nodes to serve valid HTTPS, so that the API server can retrieve data from them.
root@worker-1:~# cat /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/

[Service]
ExecStart=/usr/bin/kubelet --kubeconfig=/var/lib/kubelet/worker.kubeconfig --client-ca-file=/var/lib/kubelet/cert/ca.pem --tls-cert-file=/var/lib/kubelet/cert/worker-1-kubelet.pem --tls-private-key-file=/var/lib/kubelet/cert/worker-1-kubelet-key.pem --cluster-dns=10.32.0.10 --cluster-domain="cluster.local"
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
root@worker-1:~# systemctl daemon-reload
root@worker-1:~# systemctl restart kubelet
root@worker-1:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2018-10-14 22:41:25 UTC; 4s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 11261 (kubelet)
    Tasks: 10
   Memory: 35.1M
      CPU: 569ms
   CGroup: /system.slice/kubelet.service
           └─11261 /usr/bin/kubelet --kubeconfig=/var/lib/kubelet/worker.kubeconfig --client-ca-file=/var/lib/kubelet/cert/ca.pem --tls-cert-file=/va

Oct 14 22:41:26 worker-1 kubelet[11261]: I1014 22:41:26.008299   11261 kubelet_node_status.go:79] Attempting to register node worker-1
Oct 14 22:41:26 worker-1 kubelet[11261]: I1014 22:41:26.021699   11261 kubelet_node_status.go:123] Node worker-1 was previously registered
Oct 14 22:41:26 worker-1 kubelet[11261]: I1014 22:41:26.022060   11261 kubelet_node_status.go:82] Successfully registered node worker-1
Oct 14 22:41:26 worker-1 kubelet[11261]: I1014 22:41:26.188147   11261 cpu_manager.go:155] [cpumanager] starting with none policy
Oct 14 22:41:26 worker-1 kubelet[11261]: I1014 22:41:26.188636   11261 cpu_manager.go:156] [cpumanager] reconciling every 10s
Oct 14 22:41:26 worker-1 kubelet[11261]: I1014 22:41:26.188799   11261 policy_none.go:42] [cpumanager] none policy: Start
Oct 14 22:41:26 worker-1 kubelet[11261]: Starting Device Plugin manager
Oct 14 22:41:26 worker-1 kubelet[11261]: W1014 22:41:26.198923   11261 pod_container_deletor.go:75] Container "74be015696350d458d07a84141e2c9d7ac7963
Oct 14 22:41:26 worker-1 kubelet[11261]: I1014 22:41:26.294844   11261 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started fo
Oct 14 22:41:26 worker-1 kubelet[11261]: I1014 22:41:26.295407   11261 reconciler.go:154] Reconciler: start to sync state
root@worker-1:~#
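To double-check that the kubelet really serves HTTPS with the new certificate, you can call its secure port (10250 by default) directly from the controller, authenticating with the API server’s client certificate. A sketch under two assumptions: worker-1 is reachable as 10.240.0.21, and its serving certificate includes that IP in its SANs:

# hypothetical check from controller-1; /pods is served on the kubelet's secure port
cd /var/lib/kubernetes/cert
curl --cacert ca.pem --cert apiserver.pem --key apiserver-key.pem \
  https://10.240.0.21:10250/pods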
3. Modify API server
Modify the API server so that it communicates with the kubelet over HTTPS.
k_shogo@controller-1:~$ cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.32.0.0/24 --insecure-bind-address=0.0.0.0 --client-ca-file=/var/lib/kubernetes/cert/ca.pem --kubelet-certificate-authority=/var/lib/kubernetes/cert/ca.pem --kubelet-client-certificate=/var/lib/kubernetes/cert/apiserver.pem --kubelet-client-key=/var/lib/kubernetes/cert/apiserver-key.pem --tls-cert-file=/var/lib/kubernetes/cert/apiserver.pem --tls-private-key-file=/var/lib/kubernetes/cert/apiserver-key.pem
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
k_shogo@controller-1:~$ sudo systemctl daemon-reload
k_shogo@controller-1:~$ sudo systemctl restart kube-apiserver
k_shogo@controller-1:~$ sudo systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2018-10-14 22:04:58 UTC; 6s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 7012 (kube-apiserver)
    Tasks: 5
   Memory: 282.8M
      CPU: 6.161s
   CGroup: /system.slice/kube-apiserver.service
           └─7012 /usr/local/bin/kube-apiserver --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.32.0.0/24 --insecure-bind-address=0.0.0.0 --disable-admission-plugins=

Oct 14 22:04:58 controller-1 kube-apiserver[7012]: W1014 22:04:58.777809    7012 authentication.go:377] AnonymousAuth is not allowed with the AllowAll authorizer. Resetting AnonymousAu
Oct 14 22:04:59 controller-1 kube-apiserver[7012]: I1014 22:04:59.266542    7012 master.go:228] Using reconciler: master-count
Oct 14 22:04:59 controller-1 kube-apiserver[7012]: W1014 22:04:59.454386    7012 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
Oct 14 22:04:59 controller-1 kube-apiserver[7012]: W1014 22:04:59.466897    7012 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Oct 14 22:04:59 controller-1 kube-apiserver[7012]: W1014 22:04:59.468811    7012 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Oct 14 22:04:59 controller-1 kube-apiserver[7012]: W1014 22:04:59.479568    7012 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
Oct 14 22:04:59 controller-1 kube-apiserver[7012]: [restful] 2018/10/14 22:04:59 log.go:33: [restful/swagger] listing is available at https://10.240.0.11:6443/swaggerapi
Oct 14 22:04:59 controller-1 kube-apiserver[7012]: [restful] 2018/10/14 22:04:59 log.go:33: [restful/swagger] https://10.240.0.11:6443/swaggerui/ is mapped to folder /swagger-ui/
Oct 14 22:05:01 controller-1 kube-apiserver[7012]: [restful] 2018/10/14 22:05:01 log.go:33: [restful/swagger] listing is available at https://10.240.0.11:6443/swaggerapi
Oct 14 22:05:01 controller-1 kube-apiserver[7012]: [restful] 2018/10/14 22:05:01 log.go:33: [restful/swagger] https://10.240.0.11:6443/swaggerui/ is mapped to folder /swagger-ui/
k_shogo@controller-1:~$
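Two pairs of flags matter here: --tls-cert-file/--tls-private-key-file are what the API server presents to its own clients on 6443, while --kubelet-client-certificate/--kubelet-client-key are what it presents to the kubelets, and --kubelet-certificate-authority makes it verify each kubelet’s serving certificate against ca.pem. As a quick sanity check on the serving side, a sketch with openssl:

# 'Verify return code: 0 (ok)' means the cert on 6443 chains to our CA
echo | openssl s_client -connect 10.240.0.11:6443 \
  -CAfile /var/lib/kubernetes/cert/ca.pem 2>/dev/null | grep 'Verify return code'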
4. Confirm everything is working fine
Now let’s check that all components are still working.
# create one busybox instance to test
k_shogo@controller-1:~$ kubectl run busybox --image=busybox -- sleep 3600
deployment.apps "busybox" created

# it seems kubelet -> API server is working well
k_shogo@controller-1:~$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
busybox-5ccc978d8d-clm4t   1/1       Running   0          4s

# as the connection from the API server to the kubelet is now HTTPS, write operations also succeed
# (with plain HTTP on the kubelet side, kubectl logs would time out)
k_shogo@controller-1:~$ kubectl exec -it busybox-5ccc978d8d-clm4t -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:0a:c8:01:03 brd ff:ff:ff:ff:ff:ff
    inet 10.200.1.3/24 brd 10.200.1.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit

# and the connection is made on port 6443 (secure port) rather than 8080 (insecure port)
root@worker-2:~# lsof -nPi tcp:6443
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kubelet 8596 root    6u  IPv4  71488      0t0  TCP 10.240.0.22:55324->10.240.0.11:6443 (ESTABLISHED)

# kube-proxy is still using port 8080
root@worker-2:~# lsof -nPi tcp:8080
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kube-prox 9281 root    9u  IPv4  76963      0t0  TCP 10.240.0.22:45702->10.240.0.11:8080 (ESTABLISHED)
kube-prox 9281 root   10u  IPv4  76964      0t0  TCP 10.240.0.22:45704->10.240.0.11:8080 (ESTABLISHED)
root@worker-2:~#
Secure communication between kube-proxy and API Server
1. Create kubeconfig file
This step is almost identical to the kubelet one, but this config file doesn’t contain any node-specific info, so a single kube-proxy.kubeconfig can be shared by all worker nodes.
root@controller-1:/var/lib/kubernetes/cert# {
> kubectl config set-cluster k8s-demo \
>   --certificate-authority=ca.pem \
>   --embed-certs=true \
>   --server=https://10.240.0.11:6443 \
>   --kubeconfig=kube-proxy.kubeconfig
>
> kubectl config set-credentials system:kube-proxy \
>   --client-certificate=kube-proxy.pem \
>   --client-key=kube-proxy-key.pem \
>   --embed-certs=true \
>   --kubeconfig=kube-proxy.kubeconfig
>
> kubectl config set-context default \
>   --cluster=k8s-demo \
>   --user=system:kube-proxy \
>   --kubeconfig=kube-proxy.kubeconfig
> }
Cluster "k8s-demo" set.
User "system:kube-proxy" set.
Context "default" created.
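One thing the transcript doesn’t show is selecting the context; if current-context ends up empty, set it explicitly. And since the file carries nothing node-specific, the same copy can be pushed to every worker. A sketch (the scp step is my assumption, adjust hosts to your environment), targeting the /var/lib/kube-proxy path that the service file below points at:

# select the context in the generated file (if not already set)
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# hypothetical copy step, assuming root SSH access to the workers
for num in 1 2; do
  scp kube-proxy.kubeconfig root@worker-${num}:/var/lib/kube-proxy/kube-proxy.kubeconfig
done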
2. Modify the systemd service file
root@worker-2:~# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy --kubeconfig=/var/lib/kube-proxy/kube-proxy.kubeconfig
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
root@worker-2:~# systemctl daemon-reload
root@worker-2:~# systemctl restart kube-proxy
root@worker-2:~# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy Server
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-10-15 23:02:11 UTC; 2s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 11447 (kube-proxy)
    Tasks: 0
   Memory: 9.7M
      CPU: 109ms
   CGroup: /system.slice/kube-proxy.service
           ‣ 11447 /usr/local/bin/kube-proxy --kubeconfig=/var/lib/kube-proxy/kube-proxy.kubeconfig

Oct 15 23:02:11 worker-2 kube-proxy[11447]: I1015 23:02:11.947620   11447 conntrack.go:52] Setting nf_conntrack_max to 131072
Oct 15 23:02:11 worker-2 kube-proxy[11447]: I1015 23:02:11.947829   11447 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Oct 15 23:02:11 worker-2 kube-proxy[11447]: I1015 23:02:11.947991   11447 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Oct 15 23:02:11 worker-2 kube-proxy[11447]: I1015 23:02:11.948606   11447 config.go:202] Starting service config controller
Oct 15 23:02:11 worker-2 kube-proxy[11447]: I1015 23:02:11.948813   11447 controller_utils.go:1019] Waiting for caches to sync for service config controller
Oct 15 23:02:11 worker-2 kube-proxy[11447]: I1015 23:02:11.949087   11447 config.go:102] Starting endpoints config controller
Oct 15 23:02:11 worker-2 kube-proxy[11447]: I1015 23:02:11.949238   11447 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
Oct 15 23:02:12 worker-2 kube-proxy[11447]: I1015 23:02:12.049560   11447 controller_utils.go:1026] Caches are synced for endpoints config controller
Oct 15 23:02:12 worker-2 kube-proxy[11447]: I1015 23:02:12.050029   11447 controller_utils.go:1026] Caches are synced for service config controller
Oct 15 23:02:12 worker-2 kube-proxy[11447]: E1015 23:02:12.068422   11447 proxier.go:1319] Failed to delete stale service IP 10.32.0.10 connections, error: error deleting connection tra
root@worker-2:~#
3. Confirmation
# now both kubelet and kube-proxy are using the secure port
root@worker-2:~# lsof -nPi tcp:6443
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kubelet    8596 root    6u  IPv4  71488      0t0  TCP 10.240.0.22:55324->10.240.0.11:6443 (ESTABLISHED)
kube-prox 11447 root    6u  IPv4  93449      0t0  TCP 10.240.0.22:59980->10.240.0.11:6443 (ESTABLISHED)
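As a final check (my addition, not in the original transcript), the insecure port should now be idle on the workers, so the same lsof query for 8080 should print nothing:

# expect no output now that both components talk to 6443
lsof -nPi tcp:8080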