Certified Kubernetes Administrator (CKA 1.19) — Preparation Guide

Soumiyajit
9 min read · Sep 3, 2020

Introduction:

  1. CKA is a performance-based certification, so candidates are expected to have hands-on practice/experience with Kubernetes. My advice: the less experience you have with k8s, the more practice you need.
  2. Here we will discuss the main topics that the CKA exam tests candidates on.
  3. This article includes 2 sections:
    a. Preparation
    b. Best Practices for the exam
  4. This article is a preparation guide; detailed knowledge of every topic needs to be covered beyond the questions mentioned inline.

Preparation:

Workloads & Scheduling — 15%

  • Question 1: Pods — create a pod named webapp using the nginx image in the exam namespace.
kubectl create namespace exam
kubectl run webapp --image=nginx -n exam

Tips:
- In the interest of time, do not hand-write YAML files from scratch.
- To generate YAML files, use the --dry-run=client option.
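For example, a pod manifest can be generated and then edited rather than written by hand (a sketch; webapp.yaml is an arbitrary filename):

```shell
# Generate the manifest without creating the pod (client-side only)
kubectl run webapp --image=nginx --dry-run=client -o yaml > webapp.yaml
# Edit webapp.yaml as needed, then create the pod from it
kubectl apply -f webapp.yaml
```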

  • Question 2 : Services — create a service webapp-service with ClusterIP
kubectl expose pod webapp --name=webapp-service --port=80 --type=ClusterIP

Tips:
- Also look into the configuration required to create a NodePort service, which exposes the application on a port of each cluster node.
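A NodePort service for the webapp pod could look like this (a sketch; the nodePort value 30080 is an arbitrary choice from the default 30000–32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport
spec:
  type: NodePort
  selector:
    run: webapp
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # assumption: any free port in the 30000-32767 range
```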

  • Question 3 : Deployment — Create a deployment.yaml file to create a deployment named busybox-deployment with busybox:1.28 image to sleep for 4800 seconds and 3 replicas.
kubectl create deployment busybox-deployment --image=busybox:1.28 --dry-run=client -o yaml > deployment.yaml
- Now make the required changes in deployment.yaml
# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: busybox-deployment
  name: busybox-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox-deployment
  template:
    metadata:
      labels:
        app: busybox-deployment
    spec:
      containers:
      - image: busybox:1.28
        name: busybox
        command:
        - "sleep"
        - "4800"
  • Question 4 : Scale the deployment busybox-deployment to run 6 replicas of the pod
kubectl scale deployment busybox-deployment --replicas=6
  • Question 5: Multi-container Pods — create a multi-container pod named multiapp with the nginx and redis images:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: multiapp
  name: multiapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  • Question 6: Init containers — create a pod named myapp which has an init container that runs a busybox:1.28 image for 1000 seconds; when the init container completes, the pod runs an app container with the nginx image.
kubectl run myapp --image=nginx --dry-run=client -o yaml > init.yaml
-> Now edit init.yaml to include the init container definition.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  initContainers:
  - name: myapp-container
    image: busybox:1.28
    command:
    - "sleep"
    - "1000"
  containers:
  - image: nginx
    name: nginx

Tips:
- The pod will be in Init state for 1000 seconds, so do not worry if the pod does not reach the Running state.

  • Question 7: Rolling updates and rollbacks — create a deployment named nginx with the image nginx:1.14.2. Now roll out the image nginx:1.16.1. Then roll the deployment back to the previous image.
kubectl create deployment nginx --image=nginx:1.14.2
kubectl set image deployment/nginx nginx=nginx:1.16.1 --record
kubectl rollout history deployment.v1.apps/nginx
kubectl rollout undo deployment.v1.apps/nginx

Tips:
- Do not delete and recreate the deployment with the new image. That would not display any rollout history and would be considered incorrect.

  • Question 8: labels — create a pod webapp using the nginx image with label run=nginx
kubectl run webapp --image=nginx --labels run=nginx
  • Question 9: node selectors — create a pod named webapp with the nginx image so that the pod gets scheduled on a node with the label disktype: ssd
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: webapp
  name: webapp
spec:
  containers:
  - image: nginx
    name: webapp
  nodeSelector:
    disktype: ssd
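For the pod to be scheduled, some node must actually carry the disktype=ssd label. Assuming node01 is the intended node (the node name is an assumption), it can be labeled with:

```shell
# Label the node so the nodeSelector can match it (node01 is an assumed node name)
kubectl label nodes node01 disktype=ssd
# Verify the labels
kubectl get nodes --show-labels
```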
  • Question 10: node affinity — create a deployment nginx-na with nginx image to deploy 2 replicas of the pods to prefer a node that has a disktype=ssd label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-na
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
  • Question 11: Taints and Tolerations: Apply a taint to the node01 worker node to make it unschedulable. Now create an nginx pod to tolerate the taint applied in node01.
kubectl taint nodes node01 key1=value1:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "key1"
    operator: "Exists"
    effect: "NoSchedule"
  • Question 12: Daemonsets — Create a daemonset elasticsearch using the image quay.io/fluentd_elasticsearch/fluentd:v2.5.2 in the kube-system namespace.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
  • Question 13: Static Pods — Create an nginx pod named static-nginx so that the pod is deployed by the kubelet on worker node node01. Use /etc/kubernetes/manifest as the path for the pod definition file.
kubectl run static-nginx --image=nginx --dry-run=client -o yaml > static-nginx.yaml
- scp the static-nginx.yaml file to the /etc/kubernetes/manifest path on node01
- ssh to node01 -> systemctl status kubelet -> look for 10-kubeadm.conf
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
vi /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifest

Services & Networking — 20%

  • Question 14: CoreDNS — create an nginx pod and check the DNS entries for the service and the pod using a temporary busybox container.
kubectl run nginx --image=nginx
kubectl expose pod nginx --name=nginx-service --port=80

#nslookup for the service
kubectl run tmp --image=busybox:1.28.0 --restart=Never --rm -it -- nslookup nginx-service
#Example Output
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-service
Address 1: 10.111.182.31 nginx-service.default.svc.cluster.local
pod "tmp" deleted

#nslookup for the pod (pod ip: 192.168.184.74)
kubectl run tmp --image=busybox:1.28.0 --restart=Never --rm -it -- nslookup 192-168-184-74.default.pod
#Example Output
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: 192-168-184-74.default.pod
Address 1: 192.168.184.74 192-168-184-74.default.pod.cluster.local
pod "tmp" deleted
  • Question 15: Network Policy — Given a frontend application, a backend application and a database, create an egress policy named frontend-policy to allow traffic from the frontend to the backend and the db. The backend uses TCP port 8080 and the db uses TCP port 3306.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: db
    ports:
    - protocol: TCP
      port: 3306
  - to:
    - podSelector:
        matchLabels:
          name: backend
    ports:
    - protocol: TCP
      port: 8080
  • Question 16: Security Context — Create a pod named scc-demo with runAsUser: 1000 and fsGroup: 2000. Also add the "SYS_TIME" capability to the container.
apiVersion: v1
kind: Pod
metadata:
  name: scc-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: sec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      capabilities:
        add: ["SYS_TIME"]

Troubleshooting — 30%

  • Question 17: Remove Node — Make the worker node unschedulable and make sure all the workloads on the node move to other worker nodes.
kubectl drain node01 --ignore-daemonsets
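Once the maintenance is done, the node can be made schedulable again (not part of the question above, but worth knowing as the counterpart of drain):

```shell
# Allow new pods to be scheduled on node01 again
kubectl uncordon node01
```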
  • Question 18: etcd backup — Store the backup of the cluster in the snapshot-pre-boot.db file
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save snapshot-pre-boot.db
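The saved snapshot can then be verified with the same etcdctl binary (a quick sanity check, assuming the snapshot file is in the current directory):

```shell
# Prints the snapshot's hash, revision, total keys and total size
ETCDCTL_API=3 etcdctl snapshot status snapshot-pre-boot.db
```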

Logging / Monitoring 5%

  • Question 19: Collect Logs — create a pod nginx-logs with the nginx image and redirect the logs of the pod to the file /root/nginx-logs.txt
kubectl run nginx-logs --image=nginx
kubectl logs nginx-logs > /root/nginx-logs.txt
  • Question 20: Monitor Nodes and Pods — Find the node using the highest amount of CPU and the pod using the highest amount of memory.
kubectl top node
kubectl top pod
Sample Output:
master-01 $ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master-01 120m 6% 1221Mi 64%
node01 2096m 100% 946Mi 24%
node02 1100m 52% 1000Mi 28%
node03 1000m 50% 1022Mi 61%
master-01 $ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
nginx 317m 50Mi
test 720m 30Mi
redis 650m 20Mi

Tips:
- A monitoring application (the metrics-server) must be running in the kube-system namespace; it is what provides the node- and pod-level CPU and memory information to kubectl top.
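Before relying on kubectl top, you can confirm the monitoring component is present (a sketch, assuming the common deployment name metrics-server):

```shell
# metrics-server feeds the kubectl top node/pod commands
kubectl get deployment metrics-server -n kube-system
```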

  • Question 21: JSON Path
    a> Get ExternalIPs of all nodes
    b> List PersistentVolumes sorted by capacity
    c> Get the version label of all pods with label app=nginx
    d> Check which nodes are ready
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
kubectl get pv --sort-by=.spec.capacity.storage
kubectl get pods --selector=app=nginx -o jsonpath='{.items[*].metadata.labels.version}'
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
&& kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

Tips:
- One should know how to extract the required information from the JSON output.
- Refer to the kubectl cheat sheet for quick details and reference.

  • Question 22: Node troubleshooting — One of the worker nodes is not in Ready state. Fix the problem on the worker node.
- kubectl get nodes
- check whether all nodes are in Ready state
- ssh to the node that is in NotReady state
- systemctl status kubelet / journalctl -u kubelet
- here the kubelet service was stopped on the worker node
- systemctl start kubelet
  • Question 23: Cluster Component Failure — a manifest file is misconfigured
- check the cluster components
- kubectl get pods -n kube-system
- In many cases the fixes are for the static pod definitions in the /etc/kubernetes/manifests/ files.
- for the failing component, open its manifest file and check for typos or wrong configuration
  • Question 24: Cluster Component Failure — the config file of the kubelet service is misconfigured
journalctl -u kubelet / tail -f /var/log/*
- the logs show the misconfiguration on the worker node and the reason the kubelet service fails
- systemctl status kubelet
- look for 10-kubeadm.conf to find the KUBELET_CONFIG_ARGS pointing to the kubelet config file
- vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
- vi /var/lib/kubelet/config.yaml
[ now relate the configuration values to the errors seen in the logs ]
Examples of values to check:
- staticPodPath: /etc/kubernetes/manifests
- server: https://172.17.0.97:6443

Storage — 10%

  • Question 25: Volumes — Create a pod nginx-pd with a volume that lasts for the life of the pod. Use /tmp/cache as the mount path and emptyDir as the volume type.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pd
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /tmp/cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
  • Question 26: Persistent Volume — Create a persistent volume pv-data with access mode ReadWriteMany, storage of 1Gi and host path /tmp/pvdata
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/pvdata
  • Question 27: Persistent Volume Claim — Create a persistent volume claim for the PV created in the previous question.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-claim1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
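To actually use the claim, a pod can mount it as a volume (a sketch; the pod name and mount path here are arbitrary choices, not part of the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html   # arbitrary mount path
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pv-claim1   # the PVC from Question 27
```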

Cluster Architecture, Installation & Configuration — 25%

  • Question 28: Cluster installation: kubeadm installation of master node and worker node with calico CNI network.
# steps for both the master and worker nodes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Initialise the master node:

kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster:

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

To add a worker node to the master node:

# use the kubeadm join command generated when the bootstrapping of the master node is complete
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Tips:
- You do not need to memorize all the steps. Refer to the documentation links for the installation.
- The environment may already have cri-o/docker installed, so you may only need to install kubeadm, kubelet and kubectl.
- If a pod network YAML is provided, use that one.
- In some cases you may be provided a config file; use it for bootstrapping the cluster:
kubeadm init --config=configfile.yaml

  • Question 29: Secrets — Given a secret mysecret, create a pod named secret-test-pod that defines all of the secret's data as container environment variables, uses the busybox:1.28 image and sleeps for 1000 seconds.
# kubectl get secrets should show the mysecret already created

apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - "sleep"
    - "1000"
    envFrom:
    - secretRef:
        name: mysecret

  • Question 30: Certificate Signing Request — the key and CSR files have been created for the user john. Create the certificate signing request and approve it. Create a role developer with permissions to create, get, list, update and delete pods. Also create the role binding developer-binding-john for the new user.

cat john.csr | base64 | tr -d "\n"

apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  groups:
  - system:authenticated
  request: S0tLS0KTUlJQ1ZqQ...
  usages:
  - client auth

Verify that the CSR got created successfully and then approve the pending CSR:

kubectl get csr
kubectl certificate approve john

Create the role and the role binding:

kubectl create role developer --verb=create --verb=get --verb=list --verb=update --verb=delete --resource=pods
kubectl create rolebinding developer-binding-john --role=developer --user=john

Tips:
- Use the command line utilities to create roles and rolebindings.

Best Practices for the Exam

  • Even though candidates are allowed to access the kubernetes.io website, they should know where to find the required information; otherwise a lot of time can go into searching for it.
  • Attempt all the questions. If you hit a blocker, do not spend too much time troubleshooting it; move on to the next question. You may fix it on the next iteration.
  • You will be given multiple clusters to answer the questions, and each question names the desired cluster. Use the correct context so the right cluster is used for every question.
  • The exam has some tricky questions, so one may misunderstand a problem given the time pressure. In case you are not able to clear it on the first take, prepare well and go ahead with the retake. All the Best!!
