k8s 04: kubectl

So far I have used various methods to deploy pods and to check the status of the nodes/pods. Using those methods is good for learning what languages the services speak, but it is not practical for a human to use directly: I would need to remember every endpoint, method, and dialect (YAML, JSON), which is not realistic.

In this entry, I install kubectl so that it can translate my intention into the language that each component (basically the API server) understands, and also show the state in a more human-readable way.

As usual, the deployment diagram is as follows:

Kubectl on master node

1. install kubectl
The procedure is summarised in the official document “Install and Set Up kubectl“. Since I’m using Ubuntu, I can use snap to install kubectl.

shogo@aio-node:~$ sudo snap install kubectl --classic
kubectl 1.11.3 from 'canonical' installed
k_shogo@aio-node:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.8", GitCommit:"7eab6a49736cc7b01869a15f9f05dc5b49efb9fc", GitTreeState:"clean", BuildDate:"2018-09-14T15:54:20Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

2. configure kubectl (not necessary)
kubectl falls back to localhost by default, so in this case I don’t really need to configure it, as I’m running the apiserver on localhost.

k_shogo@aio-node:~$ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

3. check if it works
I can check pod status using kubectl now. Note that some of the master components are reported as “Unhealthy” because I have not set them up yet.

shogo@aio-node:~$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
nginx-test-with-nodename   1/1       Running   1          2d
shogo@aio-node:~$ kubectl describe pods
Name:         nginx-test-with-nodename
Namespace:    default
Node:         aio-node/172.16.0.2
...
Status:       Running
IP:           172.17.0.2
Containers:
  nginx:
    Container ID:   docker://d2fef80d71edd29720834bde3e867dde2084841361c45310d66d5cb20fb792b7
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
    Port:           80/TCP
...
Events:
  Type     Reason             Age                From               Message
  ----     ------             ----               ----               -------
  Normal   Pulling            2d                 kubelet, aio-node  pulling image "nginx"
  Normal   Pulled             2d                 kubelet, aio-node  Successfully pulled image "nginx"
  Normal   Created            2d                 kubelet, aio-node  Created container
  Normal   Started            2d                 kubelet, aio-node  Started container
  Warning  MissingClusterDNS  2d (x29 over 2d)   kubelet, aio-node  pod: "nginx-test-with-nodename_default(ef013c5e-bc5a-11e8-90ef-4201ac100002)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
shogo@aio-node:~$ kubectl get componentstatus
NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused   
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused   
etcd-0               Healthy     {"health":"true"}

Now I have a working kubectl on the master node.

Alternatively, you can install kubectl on your local machine so that it talks to the apiserver directly, instead of logging in to the master node over SSH. Please note, though, that this is not a good option at this point in my deployment: the environment is not configured with proper certificates, so the conversation between your local machine and the apiserver (which most likely goes via the internet) would not be encrypted.
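One way around that concern, which I don’t use below but is worth noting, is an SSH tunnel: the apiserver stays bound to localhost and the traffic is encrypted by SSH. A sketch, where x.x.x.x and the user name stand in for my GCE instance’s external address and account:

```shell
# Forward local port 8080 to the apiserver's insecure port on the master,
# over SSH (encrypted), instead of exposing port 8080 to the internet.
# x.x.x.x and "shogo" are placeholders for the actual instance and user.
ssh -N -L 8080:127.0.0.1:8080 shogo@x.x.x.x &

# kubectl then talks to the tunnel endpoint as if the apiserver were local:
kubectl --server=http://127.0.0.1:8080 get nodes
```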

Kubectl on local machine

1. Modify apiserver configuration

By default, the apiserver only listens for requests from localhost. So I need to change the systemd configuration so that it passes the listening IP address when the service launches.

root@aio-node:~# cat /etc/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.0.0.0/16 \
  --insecure-bind-address=0.0.0.0
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
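Editing the unit file alone is not enough; systemd has to reload it and the apiserver has to restart for the new flag to take effect. The commands for that step would be roughly as follows (the port check assumes the insecure port is the default 8080):

```shell
# Reload systemd so it picks up the edited unit file, then restart the apiserver
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver

# Verify the apiserver is now listening on all interfaces (0.0.0.0:8080)
sudo ss -tlnp | grep 8080
```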

2. Install kubectl on local machine

Follow the official document and install kubectl on your machine. In my case I’m using a Mac, so I use brew to install kubectl.

shogokobayashi k8s-test $ brew install kubectl
Updating Homebrew...
==> Downloading https://homebrew.bintray.com/bottles-portable-ruby/portable-ruby-2.3.7.leopard_64.bottle.tar.gz
######################################################################## 100.0%
==> Pouring portable-ruby-2.3.7.leopard_64.bottle.tar.gz
==> Homebrew is run entirely by unpaid volunteers. Please consider donating:
  https://github.com/Homebrew/brew#donations
==> Auto-updated Homebrew!
...
==> Downloading https://homebrew.bintray.com/bottles/kubernetes-cli-1.11.3.high_sierra.bottle.tar.gz
==> Downloading from https://akamai.bintray.com/e1/e1859f9f893237aa977be328846481b362e147ae92c2cde3e65a7d445b02dae9?__gda__=exp=1537642715~hmac=53e4525df303fad6f353b2ffd682b4b1db60dfdcefca478dbd298b
######################################################################## 100.0%
==> Pouring kubernetes-cli-1.11.3.high_sierra.bottle.tar.gz
==> Caveats
Bash completion has been installed to:
  /usr/local/etc/bash_completion.d

zsh completions have been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
🍺  /usr/local/Cellar/kubernetes-cli/1.11.3: 196 files, 49.1MB

3. Configure context for remote connection

I need to configure kubectl on my local machine so that it looks for the remote apiserver. Since I’m using a GCE instance on GCP, I use the external IP of the instance in the “server” option here.

shogokobayashi k8s-test $ kubectl config set-cluster k8s-test-admin --server=http://x.x.x.x:8080
Cluster "k8s-test-admin" set.
shogokobayashi k8s-test $ kubectl config set-context k8s-test-admin --cluster=k8s-test-admin --user=admin
Context "k8s-test-admin" created.
shogokobayashi k8s-test $ kubectl config use-context k8s-test-admin
Switched to context "k8s-test-admin".
shogokobayashi k8s-test $ kubectl config view
apiVersion: v1
clusters:
- cluster:
    server: http://x.x.x.x:8080
  name: k8s-test-admin
contexts:
- context:
    cluster: k8s-test-admin
    user: admin
  name: k8s-test-admin
current-context: k8s-test-admin
kind: Config
preferences: {}
users: []
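The set-cluster/set-context commands above are really just editing kubectl’s config file, which lives at ~/.kube/config by default, so the same result could be written by hand. A sketch using a scratch path rather than the real config file:

```shell
# Write a kubeconfig equivalent to what "kubectl config set-cluster /
# set-context / use-context" produced above, to a scratch path.
cat <<'EOF' > /tmp/kubeconfig-sketch
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://x.x.x.x:8080
  name: k8s-test-admin
contexts:
- context:
    cluster: k8s-test-admin
    user: admin
  name: k8s-test-admin
current-context: k8s-test-admin
preferences: {}
users: []
EOF

# kubectl would pick this file up via the KUBECONFIG environment variable:
#   KUBECONFIG=/tmp/kubeconfig-sketch kubectl get nodes
grep 'current-context' /tmp/kubeconfig-sketch
# prints: current-context: k8s-test-admin
```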

4. Check it works

Once configured, I can use kubectl from my local machine just the same as on the master node.

shogokobayashi k8s-test $ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
aio-node   Ready     <none>    5d        v1.11.3
shogokobayashi k8s-test $ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
nginx-test-with-nodename   1/1       Running   1          2d