Kubernetes CKA certification – Where to Start

Summary:

  • I passed the CKA exam in December 2018
  • I prepared for four months; before that I had little production experience with Kubernetes
  • Must read: Kubernetes in Action
  • Must possess: patience, curiosity
  • You cannot pass the exam just by memorizing all the commands in Kubernetes The Hard Way.
  • To check whether you are ready, look through the whole kubernetes.io documentation. If you no longer feel overwhelmed by the amount of new material, it should be a good time to give the exam a go.
Continue reading “Kubernetes CKA certification – Where to Start”

KTHW Reinvented – Agenda

From the next post onwards, I will guide you through bringing up a Kubernetes cluster locally.

I use Kubernetes The Hard Way as a guidepost, but I will re-order the procedure so that it goes component by component. If you are planning to take the CKA (Certified Kubernetes Administrator) exam, you should follow the original Kubernetes The Hard Way again after completing this agenda, so that you can improve your deployment speed.

Agenda

  1. Compute resource procurement … I use my desktop PC to host four virtual Ubuntu machines
  2. Etcd cluster bootstrap … etcd is the datastore Kubernetes uses to hold all cluster state (a flag sketch follows this list)
  3. Control plane bootstrap 01 … API server installation and investigation of its flags
  4. Control plane bootstrap 02 … deploy a load balancer for the API server
  5. Worker node bootstrap … kubelet and kube-proxy are installed on the nodes
  6. Control plane bootstrap 03 … Controller Manager installation
  7. Control plane bootstrap 04 … Scheduler installation
  8. Pod network routes … configure the network for inter-pod communication
  9. DNS … deploy CoreDNS in the cluster
  10. Data encryption at rest … keep Secrets encrypted on disk
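
To give a taste of item 2, here is a minimal sketch of bootstrapping a single etcd member; the host name, IP address, and certificate paths are placeholders from my lab layout, not prescriptive values:

etcd --name controller-1 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --initial-advertise-peer-urls https://10.240.0.11:2380 \
  --listen-peer-urls https://10.240.0.11:2380 \
  --listen-client-urls https://10.240.0.11:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://10.240.0.11:2379 \
  --initial-cluster controller-1=https://10.240.0.11:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd

In Kubernetes The Hard Way these flags live in a systemd unit file; the plain command form above is simply easier to read.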

Continue reading “KTHW Reinvented – Agenda”

Kubernetes The Hard Way Picture

One of the most popular tutorials for bootstrapping Kubernetes components is Kelsey Hightower’s “Kubernetes The Hard Way“. It is really helpful for understanding the complicated component structure of Kubernetes.

I have seen people ask whether there is an equivalent tutorial that does not use GCP (e.g. on-prem, AWS). Since Kubernetes The Hard Way uses GCP as its backend, it is no wonder they assume the tutorial is specific to GCP. In fact only a small part is specific to GCP (perhaps just the load balancer and the swap configuration), and most of it is applicable to any infrastructure.

Continue reading “Kubernetes The Hard Way Picture”

k8s ex04: Security daemon – Cisco Stealthwatch

Network security used to be deployed only to inspect north-south traffic (in other words, between external and internal networks), but as cloud and virtualization adoption progresses, east-west (intra-site) traffic now needs to be inspected as well. For this purpose an ISFW (inter-segment firewall) can be deployed on-premises, but that is quite difficult when the servers are in the cloud.

With Kubernetes it is even more difficult, because pods can connect to each other over some kind of tunnel (e.g. an overlay network), as I mentioned in the previous post. All that communication is somewhat hidden, so simple security rules/policies cannot restrict it. We use NetworkPolicy or tools like Istio to restrict such unexpected traffic (a minimal example follows). But just as in legacy networks, these restrictions still rely on manual work: we have to write the policies ourselves, and they must be updated every time a new service is provisioned. This is very difficult; developers want to deploy services as smoothly as possible, but security needs to be guaranteed even when some services are handled by another team…
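
For reference, here is a minimal sketch of such a NetworkPolicy; the labels, namespace, and port are hypothetical, not taken from my cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend           # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only pods labelled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080

Keep in mind that a NetworkPolicy only takes effect when the installed network plugin enforces it (Calico does; the default no-op setup does not).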

Cisco Stealthwatch can be used in these cases as a turnkey security monitor. Stealthwatch does not require any change to the Kubernetes configuration; it simply deploys a special host-network pod on each worker node as a DaemonSet. It visualizes inter-node and external communication dynamically and can send an alert when unexpected communication happens. Please note that I use Stealthwatch Cloud for this trial.


Deploy Stealthwatch in K8s


1. Access the Stealthwatch portal and navigate to Integration > Kubernetes. There you can find the manifest files for the Stealthwatch DaemonSet, along with the service key that identifies your account.

2. Apply the manifest files.

root@controller-1:~/work# {
> echo -n "YOUR_SECRET_KEY" > obsrvbl-service-key.txt
> kubectl create secret generic obsrvbl --from-file=service_key=obsrvbl-service-key.txt
> rm obsrvbl-service-key.txt
> }
secret "obsrvbl" created
root@controller-1:~/work# {
> kubectl create serviceaccount --generator=serviceaccount/v1 obsrvbl
> kubectl create clusterrolebinding "obsrvbl" --clusterrole="view" --serviceaccount="default:obsrvbl"
> }
serviceaccount "obsrvbl" created
clusterrolebinding.rbac.authorization.k8s.io "obsrvbl" created
root@controller-1:~/work# cat << EOF > obsrvbl-daemonset.yaml
> apiVersion: apps/v1
> kind: DaemonSet
> metadata:
>   name: obsrvbl-ona
> spec:
>   selector:
>     matchLabels:
>       name: obsrvbl-ona
>   template:
>     metadata:
>       labels:
>         name: obsrvbl-ona
>     spec:
>       serviceAccountName: obsrvbl
>       tolerations:
>         - key: node-role.kubernetes.io/master
>           effect: NoSchedule
>       hostNetwork: true
>       containers:
>         - name: ona
>           image: obsrvbl/ona:3.1
>           env:
>             - name: OBSRVBL_SERVICE_KEY
>               valueFrom:
>                 secretKeyRef:
>                   name: obsrvbl
>                   key: service_key
>             - name: OBSRVBL_KUBERNETES_WATCHER
>               value: "true"
>             - name: OBSRVBL_HOSTNAME_RESOLVER
>               value: "false"
>             - name: OBSRVBL_NOTIFICATION_PUBLISHER
>               value: "false"
> EOF
root@controller-1:~/work# kubectl apply -f obsrvbl-daemonset.yaml 
daemonset.apps "obsrvbl-ona" created
root@controller-1:~/work# kubectl get ds
NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
obsrvbl-ona   2         2         2         2            2           <none>          1m
root@controller-1:~/work# kubectl get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP            NODE
busybox-5ccc978d8d-2nzs4   1/1       Running   4          2d        10.220.0.12   worker-1
nginx-65899c769f-txgtm     1/1       Running   3          2d        10.220.1.10   worker-2
obsrvbl-ona-dm9gb          1/1       Running   0          1m        10.240.0.21   worker-1
obsrvbl-ona-fkjnz          1/1       Running   0          1m        10.240.0.22   worker-2
root@controller-1:~/work#
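
As a quick sanity check (my own hedged suggestion, not part of the official procedure), you can inspect the sensor pods before moving on; the pod name below comes from the kubectl get pods output above:

kubectl logs obsrvbl-ona-dm9gb      # startup logs of one sensor pod
kubectl describe ds obsrvbl-ona     # scheduling events, if pods fail to start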


3. After a few minutes, each node running a Stealthwatch pod comes up as a sensor.

That’s all we need to do. Note that the obsrvbl-ona pods report the node addresses (10.240.0.21/22) rather than pod-network addresses, because the DaemonSet runs with hostNetwork: true. With this setup, inter-pod and external communication is monitored, and the statistics are sent to Stealthwatch Cloud, where all the data is processed.


Watch it work


1. Once some data has been sent to Stealthwatch Cloud, the portal starts to populate with valuable information. By default the dashboard shows the endpoints (in our case this includes the pods on each node) and the total inbound/outbound traffic.

2. It shows observations and alerts as well. These are based either on static patterns (e.g. blacklisted addresses) or on behaviour (e.g. the traffic pattern is unusual compared with the last 36 days).

Because this is my initial deployment, it observes that the traffic between controller-1 (10.240.0.11) and the worker nodes is rather high. This will be learned as normal after some time.

And of course it can send a notification whenever an alert is triggered.

3. You can explore the network communication further if you like.

The good thing about this deployment is that you hardly have to manage anything yourself. Because the pod can see the traffic and can also talk to the API server, it combines all the real-time information with the historical data and neatly summarizes the outcome on the portal. You can then use that as a baseline for creating security policies.

It can also be extended to GCP (with flow logs), AWS (flow logs), and on-premises (with an onsite VM and a sensor, e.g. a Catalyst 9300) to consolidate all the behavioural monitoring. Maybe I will introduce these in another post.


k8s 14: Calico IP-in-IP

In this post, I’m going to replace the network plugin from the default “noop” to “cni”, and use Calico to connect the pods.

We follow the official installation manual, “Installing Calico for policy and networking“.

There are basically two types of installation available. One uses the Kubernetes API server (and ultimately its backing etcd) to store data; the other uses a separate etcd datastore. I use the former, to make use of the existing Kubernetes setup.
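
Before Calico can do anything, the kubelet has to be told to delegate pod networking to CNI. A minimal sketch of the relevant kubelet flags (the two directories are the conventional defaults, not anything Calico-specific; all other flags omitted):

kubelet \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin \
  ...

Calico’s installation then drops its CNI config and binaries into those directories, and every new pod gets wired up through them.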

Continue reading “k8s 14: Calico IP-in-IP”

k8s ex03: Network Plugin

Kubernetes is an orchestrator; in other words, it works like the conductor of an orchestra. To make the music happen, plenty of other services are required: not only direct service providers, such as the API server, but also indirect ones. Below are some examples:

  • Compute resource deployment … Previously I deployed the compute nodes manually. You could also use external provisioning tools such as Terraform.
  • System service installation (e.g. kubelet, container runtime) … I installed all of these manually as well; you could use automation tools such as Ansible.
  • Network management … kube-proxy provides network connectivity for Services, but it does not provide direct connectivity from one container to another. For that end-to-end connectivity, I manually added routes in GCP so that each of the three worker nodes routes its respective /24 pod network (a sketch follows this list).
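
For illustration, a hedged sketch of one such route in the style of Kubernetes The Hard Way; the network name, node IP, and pod CIDR are placeholders, not my exact values:

gcloud compute routes create kubernetes-route-10-200-0-0-24 \
  --network kubernetes-the-hard-way \
  --next-hop-address 10.240.0.20 \
  --destination-range 10.200.0.0/24

One route like this per worker node tells GCP to hand any packet destined for that node’s pod subnet to the node itself.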

In this post, we explore some fundamentals of container network management. For the routing concept, there is a good slide deck shared by Tim Hockin, “An Illustrated Guide To Kubernetes Networking“.

Continue reading “k8s ex03: Network Plugin”

k8s 13: affinity and taint/toleration

In k8s 05: scheduler, I used a node selector to choose which node a pod should launch on; once the scheduler is running, we no longer need that node selector, because the scheduler chooses the best node for each pod based on various criteria (e.g. resource balancing). But oftentimes a pod has specific requirements about the environment it should be launched in. For this purpose, Kubernetes provides two mechanisms (a sketch follows the list):

  • Labels and affinity …… used to express the “preference” of the pod being launched about which node/pod it should be launched on/with.
  • Taints and tolerations …… used to express the “requirement” a node imposes before a launching pod is allowed to be deployed on it.
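
A quick illustrative sketch; the taint key and value are hypothetical:

# Taint a node so that only pods tolerating dedicated=logging can land on it
kubectl taint nodes worker-1 dedicated=logging:NoSchedule

# The matching toleration in a pod spec
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: logging
      effect: NoSchedule

Note the asymmetry: the toleration merely allows the pod onto the tainted node, it does not force it there. To pin the pod to that node you would combine this with node affinity.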


k8s 12: Admission Controller – Service Account

In this post, I will talk about the admission controller, one of the key components of the API server (document reference). In the past few posts, we deployed a PKI to secure the communication between the components. It is time to secure and validate the requests themselves. In a previous post, I illustrated how the API server handles a request; it can be summarized as follows (a flag sketch follows the list):

  • Authentication … is the requester a valid account?
  • Authorization … is the requester allowed to do what it requests?
  • Admission control … allow/reject/modify the original request based on various criteria.
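
Admission controllers are enabled through an API server flag. A hedged sketch of its shape; the plugin list below is a common example, not necessarily the exact set used in this series:

kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  ...

The ServiceAccount plugin is the one this post focuses on: it validates each pod’s service account and mutates the pod to mount the corresponding token.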

Continue reading “k8s 12: Admission Controller – Service Account”

k8s 10: Secure kubectl communication

In the next few posts, we will secure the communication between the services one by one. In this first post, we will secure the communication between your local machine and the API server (in my case, in GCP), which goes across the internet and is considered the most vulnerable part of our cluster at this moment. After completing this post, the cluster communication will look something like the diagram below.
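
The general shape of the client-side setup is sketched below; the file names and the admin user follow the Kubernetes The Hard Way convention, and KUBERNETES_PUBLIC_ADDRESS is a placeholder for the API server’s public endpoint:

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://KUBERNETES_PUBLIC_ADDRESS:6443

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem

kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way \
  --user=admin

kubectl config use-context kubernetes-the-hard-way

With this in place, kubectl verifies the API server against the cluster CA and authenticates itself with the admin client certificate.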

Continue reading “k8s 10: Secure kubectl communication”