k8s 03: etcd and API server

After the previous entry, I can now deploy a pod via the kubelet directly: the kubelet just watches a specific directory for pod manifests and updates the pod status on its own.
This time, I deploy etcd and the API server.
– etcd holds all the cluster state.
– The API server is the centerpiece of Kubernetes; according to the official documentation, it is the “component on the master that exposes the Kubernetes API. It is the front-end for the Kubernetes control plane”.

I’m aiming to build something like this diagram:

Prepare etcd

1. Install etcd
As stated before, etcd is a kind of key-value database. I specify the local /work/etcd-data directory to store its data. The etcd server listens for client requests on port 2379 by default. Reference: https://github.com/etcd-io/etcd/releases

root@aio-node:~# curl -L https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz -o etcd-v3.3.9-linux-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   620    0   620    0     0    982      0 --:--:-- --:--:-- --:--:--   984
100 10.7M  100 10.7M    0     0  1953k      0  0:00:05  0:00:05 --:--:-- 2272k
root@aio-node:~# tar xzvf etcd-v3.3.9-linux-amd64.tar.gz 
root@aio-node:~# cd etcd-v3.3.9-linux-amd64/
root@aio-node:~/etcd-v3.3.9-linux-amd64# sudo mv etcd* /usr/local/bin/
root@aio-node:~# mkdir /work/etcd-data
root@aio-node:~# cat /etc/systemd/system/etcd.service 
[Service]
ExecStart=/usr/local/bin/etcd --data-dir=/work/etcd-data

root@aio-node:~# systemctl daemon-reload
root@aio-node:~# service etcd start
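To confirm etcd is actually serving before moving on, its client port answers plain HTTP. Here is a minimal Python sketch of my own (assuming the default localhost:2379 endpoint used above) that reads etcd's real /version endpoint:

```python
import json
from urllib.request import urlopen

def parse_etcd_version(body):
    """Pull the server version out of an etcd /version response body."""
    return json.loads(body)["etcdserver"]

def etcd_version(base="http://localhost:2379"):
    """Ask a running etcd for its version over plain HTTP (no auth)."""
    with urlopen(base + "/version") as resp:
        return parse_etcd_version(resp.read().decode())
```

Against the server installed above, etcd_version() should return "3.3.9".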


Prepare API server

1. Install API server

Install the API server binary, and in its systemd unit file pass the URL of the etcd server via the --etcd-servers flag (in this case, http://localhost:2379):

root@aio-node:~# curl -L https://storage.googleapis.com/kubernetes-release/release/v1.10.8/bin/linux/amd64/kube-apiserver -o kube-apiserver
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  216M  100  216M    0     0  72.0M      0  0:00:03  0:00:03 --:--:-- 72.0M
root@aio-node:~# chmod +x kube-apiserver 
root@aio-node:~# mv kube-apiserver /usr/local/bin/
root@aio-node:~# cat /etc/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --etcd-servers=http://localhost:2379

root@aio-node:~# systemctl daemon-reload
root@aio-node:~# service kube-apiserver start

2. Confirm API server
Let’s check whether the API server is working correctly.
The API server listens for non-secure requests on port 8080, so in this case we can throw a request at http://localhost:8080.

root@aio-node:~# curl http://localhost:8080/api/v1/nodes
{
  "kind": "NodeList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/nodes",
    "resourceVersion": "32"
  },
  "items": []
}


Prepare Kubelet

1. Modify Kubelet

At the moment, the kubelet only watches the manifest directory specified earlier, but now I want it to talk to the API server instead. So I need to create a kubeconfig file and modify the systemd unit to load it via the --kubeconfig flag.

root@aio-node:~# cat /var/lib/kubelet/node.kubeconfig 
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://localhost:8080
  name: local
contexts:
- context:
    cluster: local
    user: ""
  name: local
current-context: local
preferences: {}
users: []
root@aio-node:~# systemctl daemon-reload
root@aio-node:~# service kubelet restart
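The kubeconfig above is just nested YAML. To make the indirection concrete, here is a sketch of my own (plain dicts, so no PyYAML dependency) of how a client resolves the API server URL from current-context:

```python
# Sketch of kubeconfig resolution: follow current-context -> context ->
# cluster -> server. Field names match the kubeconfig file above.
def server_for_current_context(cfg):
    ctx_name = cfg["current-context"]
    ctx = next(c["context"] for c in cfg["contexts"] if c["name"] == ctx_name)
    cluster = next(c["cluster"] for c in cfg["clusters"]
                   if c["name"] == ctx["cluster"])
    return cluster["server"]

cfg = {
    "clusters": [{"cluster": {"server": "http://localhost:8080"},
                  "name": "local"}],
    "contexts": [{"context": {"cluster": "local", "user": ""},
                  "name": "local"}],
    "current-context": "local",
}
print(server_for_current_context(cfg))  # http://localhost:8080
```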

Now I can ask the API server whether the node has been registered correctly.

root@aio-node:~# curl -Ss http://localhost:8080/api/v1/nodes | jq '.items[].spec'
{
  "externalID": "aio-node"
}
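That jq filter has a straightforward Python equivalent; a self-contained sketch with a sample NodeList body inlined so it runs standalone:

```python
import json

# Python equivalent of `jq '.items[].spec'` on a NodeList response.
def node_specs(nodelist_body):
    return [n["spec"] for n in json.loads(nodelist_body).get("items", [])]

sample = json.dumps({"kind": "NodeList",
                     "items": [{"spec": {"externalID": "aio-node"}}]})
print(node_specs(sample))  # [{'externalID': 'aio-node'}]
```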


Launch Test Pod

1. Create a test Pod manifest

Now all the components are in place, and I should be able to ask the API server to launch a pod. All I need to do is send a manifest to the API server. Note that at this moment there is no scheduler yet, so the manifest has to tell the API server which node should run the pod via the nodeName field (even though there is only one node available); otherwise the pod would stay Pending, waiting for a scheduler that doesn’t exist. So the manifest looks like this:

root@aio-node:~# cat nginx-test-with-nodename.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test-with-nodename
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  nodeName: aio-node
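With no scheduler running, forgetting nodeName is the easy mistake here. A small hypothetical helper of my own (not part of any Kubernetes client library) that pins a manifest dict to a node before sending it:

```python
import json

def pin_to_node(manifest, node):
    """Return a copy of a pod manifest dict with spec.nodeName set."""
    pinned = json.loads(json.dumps(manifest))  # cheap deep copy via JSON
    pinned.setdefault("spec", {})["nodeName"] = node
    return pinned

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "nginx-test-with-nodename"},
    "spec": {"containers": [{"name": "nginx", "image": "nginx",
                             "ports": [{"containerPort": 80}]}]},
}
print(pin_to_node(pod, "aio-node")["spec"]["nodeName"])  # aio-node
```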

2. Send a request to API server

The request to the API server needs to be in JSON format, which differs from the YAML file above, so I use Python to convert and send it. Since I’m using the non-secure port (plain HTTP) this time, authentication and authorization don’t take place. Admission control is still performed, although it too can be disabled with a flag at startup (for development and testing purposes only, of course).

root@aio-node:~# python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import yaml, json, requests
>>> from pprint import pprint
>>> yf = yaml.safe_load(open('nginx-test-with-nodename.yaml','r').read())
>>> url = 'http://localhost:8080/api/v1/namespaces/default/pods'
>>> headers = {'Content-Type':'application/json'}
>>> jf = json.dumps(yf)
>>> r = requests.post(url, data=jf, headers=headers)
>>> pprint(r.json()['spec'])
{'containers': [{'image': 'nginx',
                 'imagePullPolicy': 'Always',
                 'name': 'nginx',
                 'ports': [{'containerPort': 80, 'protocol': 'TCP'}],
                 'resources': {},
                 'terminationMessagePath': '/dev/termination-log',
                 'terminationMessagePolicy': 'File'}],
 'dnsPolicy': 'ClusterFirst',
 'nodeName': 'aio-node',
 'restartPolicy': 'Always',
 'schedulerName': 'default-scheduler',
 'securityContext': {},
 'terminationGracePeriodSeconds': 30}
>>> quit()
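The requests session above can also be reproduced with only the standard library; a sketch that builds the same POST (actually sending it of course needs the API server above to be running):

```python
import json
from urllib.request import Request

def build_pod_post(manifest, base="http://localhost:8080",
                   namespace="default"):
    """Build an urllib Request that would create a pod from a manifest."""
    url = "%s/api/v1/namespaces/%s/pods" % (base, namespace)
    # Passing data= makes urllib issue a POST rather than a GET.
    return Request(url, data=json.dumps(manifest).encode(),
                   headers={"Content-Type": "application/json"})

req = build_pod_post({"apiVersion": "v1", "kind": "Pod"})
print(req.get_method(), req.full_url)
# POST http://localhost:8080/api/v1/namespaces/default/pods
```

urllib.request.urlopen(req) would then perform the same call as requests.post above.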

3. Check the status

The request seems to have succeeded, so I can ask the API server for the pod’s status. And of course I can send a GET request to the pod’s IP address to check that nginx is serving.

root@aio-node:~# curl http://localhost:8080/api/v1/namespaces/default/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/default/pods",
    "resourceVersion": "586"
  },
  "items": [
    {
      "metadata": {
        "name": "nginx-test-with-nodename",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/pods/nginx-test-with-nodename",
        "uid": "ef013c5e-bc5a-11e8-90ef-4201ac100002",
        ...
      },
      "spec": {
        "containers": [
          {
            "name": "nginx",
            "image": "nginx",
            "ports": [
              {
                "containerPort": 80,
                "protocol": "TCP"
              }
            ],
            "resources": {},
            ...
          }
        ],
        ...
      },
      "status": {
        "phase": "Running",
        ...
root@aio-node:~# curl http://<pod IP>
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
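To close the loop, the status check can be scripted the same way; a sketch of my own that maps pod names to phases from a PodList body (sample inlined so it runs standalone):

```python
import json

def pod_phases(podlist_body):
    """Map pod name -> phase from a PodList response body."""
    pods = json.loads(podlist_body)
    return {p["metadata"]["name"]: p["status"]["phase"]
            for p in pods.get("items", [])}

sample = json.dumps({"kind": "PodList", "items": [
    {"metadata": {"name": "nginx-test-with-nodename"},
     "status": {"phase": "Running"}}]})
print(pod_phases(sample))  # {'nginx-test-with-nodename': 'Running'}
```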