k8s 05: scheduler

In this entry, I will show you how to deploy the scheduler. Up to this point, I could only deploy a pod by specifying which node to use (e.g. aio-node). With the scheduler, I don’t need to specify the node any more, and I can deploy pods more intelligently because the scheduler “schedules” each pod onto the best node available based on various criteria.

Previously, I showed an image like the one below, which depicts the case where no scheduler is configured.

In this entry, it would look like the one below:

Note that I deleted the all-in-one node (aio-node) which I had been working on, and I changed the deployment a bit: I now have a dedicated master node (master-01) and two worker nodes (worker-01, worker-02) for easy visualisation. The deployment is not much different from the all-in-one deployment. The only difference is that we need to tell the kubelet on the worker nodes that the API server address is not localhost but a reachable external IP address (the internal IP address of the instance in my case, since I’m on GCE).

Deploy a master node

1. Create a compute instance (GCP)

I use Ubuntu 16.04 and put a “kube-master” tag on the instance so that it can be used for firewall rules later.
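For reference, the gcloud invocation looks roughly like this (the zone, machine type, and image family here are my assumptions; adjust them to your project):

```bash
# Create the master instance; zone and machine type are placeholders.
gcloud compute instances create master-01 \
  --zone us-central1-a \
  --machine-type n1-standard-1 \
  --image-family ubuntu-1604-lts \
  --image-project ubuntu-os-cloud \
  --tags kube-master
```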

2. Install etcd and API server 

You can refer to my previous post “k8s 03: etcd-and-api-server”.

[ etcd install ]
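As a rough sketch of that step (the etcd version and listen addresses are my assumptions; see the post above for the exact commands I used):

```bash
# Download the etcd binary and run it listening on localhost only,
# since only the API server on this machine talks to it.
ETCD_VER=v3.2.0
curl -L https://github.com/coreos/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz \
  -o etcd.tar.gz
tar xzvf etcd.tar.gz
sudo mv etcd-${ETCD_VER}-linux-amd64/etcd* /usr/local/bin/

etcd \
  --listen-client-urls http://127.0.0.1:2379 \
  --advertise-client-urls http://127.0.0.1:2379
```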

[ API server install ]

You can download the latest server binaries from here.
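Again, a sketch of the idea (the version is an assumption). The one thing that matters here, compared to the all-in-one setup, is binding the insecure port on 0.0.0.0 so the workers can reach it:

```bash
# Fetch the server binaries and start the API server on the insecure port.
K8S_VER=v1.9.0
curl -LO https://dl.k8s.io/${K8S_VER}/kubernetes-server-linux-amd64.tar.gz
tar xzvf kubernetes-server-linux-amd64.tar.gz
sudo cp kubernetes/server/bin/kube-apiserver /usr/local/bin/

# Insecure port 8080 is fine for this learning setup, not for production.
kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --service-cluster-ip-range=10.0.0.0/16
```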

3. Install kubectl
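kubectl ships in the same server tarball, so this is just a copy (assuming the tarball from the previous step is still unpacked):

```bash
sudo cp kubernetes/server/bin/kubectl /usr/local/bin/
kubectl version   # should print both client and server versions if the API server is up
```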

 

Deploy two worker nodes

1. Create the first compute instance (GCP)

I deploy one node first; then I will clone its disk to create the second node.
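Same pattern as the master (the “kube-worker” tag is my own choice here; I reference it in the firewall section later):

```bash
gcloud compute instances create worker-01 \
  --zone us-central1-a \
  --machine-type n1-standard-1 \
  --image-family ubuntu-1604-lts \
  --image-project ubuntu-os-cloud \
  --tags kube-worker
```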

2. Install container runtime and kubelet

You can refer to my previous post “k8s 02: how kubelet works”.
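The only addition here is pointing the kubelet at the master instead of localhost. A minimal sketch, assuming the master’s internal IP is 10.240.0.10 (substitute your own) and a kubelet version that takes a kubeconfig:

```bash
# Minimal kubeconfig pointing at the master's insecure port.
sudo mkdir -p /var/lib/kubelet
cat <<EOF | sudo tee /var/lib/kubelet/kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: http://10.240.0.10:8080
contexts:
- name: local
  context:
    cluster: local
current-context: local
EOF

sudo kubelet --kubeconfig=/var/lib/kubelet/kubeconfig
```

Older kubelet versions take the address directly via the (since removed) --api-servers flag instead.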

3. Clone the disk and create the second compute instance (GCP)

Now I copy the disk from the first node and create another instance with the hostname “worker-02”.
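One way to do the clone is via a snapshot (the names here are my own; note that a cloned disk can keep the old hostname, so I reset it on first boot):

```bash
gcloud compute disks snapshot worker-01 --snapshot-names worker-base --zone us-central1-a
gcloud compute disks create worker-02 --source-snapshot worker-base --zone us-central1-a
gcloud compute instances create worker-02 \
  --zone us-central1-a \
  --disk name=worker-02,boot=yes \
  --tags kube-worker

# On worker-02 itself, if the hostname didn't pick up the new instance name:
sudo hostnamectl set-hostname worker-02
```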

 

Configure GCP Firewall rules

1. Add a Firewall rule for worker -> master

For the node to register itself correctly (and eventually to get pods running), the worker node needs to communicate with the API server on TCP port 8080 (unsecured in our case; the secured port is tcp:6443 by default).

GCP has a permissive allow-all rule for internal communication if the compute instances are in the default VPC, but I’m using a custom VPC, so all internal communication is discarded if no rule is created (implicit deny).
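So I add one explicitly, using the instance tags from earlier (the VPC name here is a placeholder for my custom network):

```bash
gcloud compute firewall-rules create allow-worker-to-master \
  --network my-custom-vpc \
  --allow tcp:8080 \
  --source-tags kube-worker \
  --target-tags kube-master
```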

Confirm the setup

1. Check node status on the master

Now I can see the nodes are correctly registered on the master.
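Roughly what I expect to see (the ages and versions will differ on your setup):

```bash
kubectl get nodes
# NAME        STATUS    AGE    VERSION
# worker-01   Ready     5m     v1.9.0
# worker-02   Ready     5m     v1.9.0
```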

2. Check if a pod is deployed with nodeName

And I can see a pod can be deployed on a worker node if nodeName is specified in the manifest.
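For example, a manifest like this (the pod name and image are arbitrary) pins the pod to worker-01 and bypasses the scheduler entirely:

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pinned
spec:
  nodeName: worker-01     # explicit assignment; no scheduler involved
  containers:
  - name: nginx
    image: nginx
EOF

kubectl get pods -o wide   # the pod should show up on worker-01
```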


So this is where I left off in the last entry; now I start deploying the scheduler.

Deploy the scheduler

1. Install kube-scheduler
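kube-scheduler ships in the same server tarball as the API server, so installing it is another copy-and-run (the flag style matches the insecure setup above):

```bash
sudo cp kubernetes/server/bin/kube-scheduler /usr/local/bin/

# Point the scheduler at the local API server's insecure port.
kube-scheduler --master=http://127.0.0.1:8080
```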

2. Check

The scheduler is working now, so it should do all the hard work of scheduling/assigning requested pods onto available worker nodes.
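A quick way to see it in action: create a pod without nodeName and confirm a node gets assigned anyway (the names here are arbitrary):

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-scheduled
spec:
  containers:      # note: no nodeName this time
  - name: nginx
    image: nginx
EOF

kubectl get pods -o wide   # the NODE column should now be filled in by the scheduler
```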


So far so good.

But it still lacks a few vital parts of the deployment. One of them is the controller-manager. It monitors the node status and the deployment status, and it takes the necessary action if things are not in the desired state. Basically, all the other components just take care of what they are told to do, and they don’t care what the actual situation is. Without the controller-manager, there might be a case where the worker nodes are not working, but the master doesn’t notice it and just keeps waiting for the worker node, which from its point of view is “Ready”, to fetch the work assigned to it.

In the next entry, I will deploy the controller-manager.