kata-deploy: The quick way to install Kata on your Kubernetes* Cluster
As I have learned more about Kubernetes, I have come to the realization that anything can be done with just one more daemonset. Similarly, in cloud native, who needs packages when you can just install your software through a container image? With these lessons in mind, I worked with some colleagues to create an example daemonset for installing the Kata Containers runtime in Kubernetes. Since this work is now available on Kata's GitHub, you are just two kubectl commands away from leveraging Kata Containers.
As a first step, I created a container image, kata-deploy, which contains all of the latest released Kata components. Taking a look at this image is a great way to see exactly what is needed to run Kata Containers.
The following artifacts are included and used on the host-system where Kata is installed:
- kata-runtime: The Open Containers Initiative (OCI) compliant runtime which is called by an upper level orchestrator (such as Docker* or a Kubernetes CRI-shim).
- kata-proxy: A binary which runs on the host to help multiplex messages into and demultiplex messages out of the virtual machine which hosts the Kata-enabled containers. You’ll see one of these executing per pod.
- kata-shim: A binary which runs on the host to handle the stdio and signaling for the container processes. These are presented to the upper level orchestrator. You’ll see one of these executing per container process.
- qemu-system-x86_64: A statically built QEMU which provides hardware-virtualized isolation. Because it is statically linked, this QEMU runs without issue on most x86_64 distributions. A few supporting binaries for QEMU are also included.
There are also artifacts associated with the virtual machine itself:
- kata-containers.img: A minimal rootfs, based on Clear Linux*, which simply boots and starts a single process, kata-agent, which handles container lifecycle management inside of the virtual machine. The agent currently communicates with the host over a virtio-serial interface using gRPC.
- vmlinuz.container: A minimally configured Linux kernel image. This is a vanilla upstream kernel, based on the latest LTS - currently v4.14. It is configured for minimal footprint and boot time.
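If you want to see these artifacts for yourself, you can list the contents of the kata-deploy image. The image name and artifact path below are my assumptions based on the kata-containers/packaging repository at the time of writing, so adjust them if they have moved:

```shell
# List the Kata artifacts bundled in the kata-deploy image.
# Image name and path are assumptions, not guaranteed by this post.
$ docker run --rm -it katadocker/kata-deploy ls -R /opt/kata-artifacts
```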
Leveraging daemonsets to install Kata
A more in-depth description of the daemonsets' behavior is documented on the kata-deploy GitHub page. Here I'll give directions on how to make use of them.
A couple of caveats before we get started:
- Kata will only be installed on nodes which have either CRI-O* or containerd* installed. If dockershim is in use, Kata will not be leveraged; dockershim does not support pluggable OCI-compliant runtimes today, though we are hoping to see that support added.
- If you are in a virtual machine environment, Kata Containers will only work if nested-virtualization support is available. On x86, look for the "vmx" flag in /proc/cpuinfo. For example:
$ grep vmx /proc/cpuinfo
To install Kata Containers, run the following:
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-deploy.yaml
kata-rbac.yaml defines a service account, used by kata-deploy.yaml, with permission to label nodes. kata-deploy.yaml runs two daemonsets: one to label the nodes based on the CRI shim installed, and a second to install Kata and configure the CRI shim to leverage it. Once these complete, you should see the label kata-containers.io/kata-runtime=true applied to the node.
$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
kata-k8s-crio-xenial Ready master 1d v1.10.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kata-containers.io/container-runtime=cri-o,kata-containers.io/kata-runtime=true,kubernetes.io/hostname=kata-k8s-crio-xenial,node-role.kubernetes.io/master=
At this point, you are ready to start leveraging Kata Containers on your node.
Using Kata Containers in your Kubernetes Cluster
With Kata installed, we can start pods which will make use of Kata Containers. In order to leverage Kata, existing pods should be updated to include a CRI-level annotation specifying the runtime to be used. Similarly, a node selector should be used to make sure the workload gets scheduled on a node which has kata-runtime installed. As an example, consider the following workload description:
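A minimal pod spec along these lines might look like the following. The annotation keys are the ones CRI-O and containerd used for untrusted workloads at the time; treat the exact keys, and the nginx image, as illustrative rather than authoritative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    io.kubernetes.cri-o.TrustedSandbox: "false"
    io.kubernetes.cri.untrusted-workload: "true"
  name: nginx-untrusted
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    kata-containers.io/kata-runtime: "true"
```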
Line 5 is a CRI-O-specific annotation. It states that this is not a trusted sandbox, meaning CRI-O should use its untrusted runtime, which is configured to be kata-runtime.
Line 6 is the containerd-specific annotation with the same logic, only inverted: it marks the workload as untrusted.
Lines 12–13 tell the scheduler to only run this workload on a node which has the label kata-containers.io/kata-runtime=true.
Already running workloads can be updated using kubectl patch to add these annotations and node selector. Once patched, they’ll restart using Kata Containers.
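As a sketch of that patch (the deployment name my-app is hypothetical, and the annotation keys are the assumed CRI-O/containerd ones from above), a strategic merge patch against the pod template would look like:

```shell
# Hypothetical deployment "my-app": add the untrusted-workload annotations
# and the Kata node selector to the pod template. Changing the template
# triggers a rolling restart, after which the pods run under Kata.
$ kubectl patch deployment my-app --patch '
spec:
  template:
    metadata:
      annotations:
        io.kubernetes.cri-o.TrustedSandbox: "false"
        io.kubernetes.cri.untrusted-workload: "true"
    spec:
      nodeSelector:
        kata-containers.io/kata-runtime: "true"'
```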
Removing Kata Containers from the cluster
To remove Kata from the cluster, first delete the running daemonsets, then deploy a short-lived cleanup daemonset. Please note, daemonsets can be pretty slow to delete from a cluster. The cleanup is carried out in a couple of steps. First, execute the following:
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-deploy.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-cleanup.yaml
Once the above is applied, complete the cleanup by executing the following:
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-cleanup.yaml
$ kubectl delete -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-rbac.yaml
While securing your workloads is not always straightforward, I hope that you found this deployment process easy, and that this method is an almost foolproof way to start running Kata Containers on your cluster. I put together the following screen capture to demonstrate this process. Enjoy!
Kata Containers is a fully open source project — check out Kata Containers on GitHub and join the channels below to find out how you can contribute.
IRC: #kata-dev on Freenode
Mailing list: http://lists.katacontainers.io/cgi-bin/mailman/listinfo