KubeVirt on Killercoda on KubeVirt

How KubeVirt runs on Killercoda which runs on KubeVirt

Kim Wuestkamp
ITNEXT

--

This article provides a high-level overview of how Killercoda uses KubeVirt to schedule disposable learning environments. We also talk about how KubeVirt can run on Killercoda in some kind of “nested-sandwich” architecture.

Killercoda

Killercoda is an interactive learning platform that gives everyone access to Linux and Kubernetes based environments directly in the browser.

KubeVirt

KubeVirt makes it possible to run actual VMs (virtual machines) on Kubernetes instead of the much more common containers. KubeVirt describes itself as follows:

KubeVirt technology addresses the needs of development teams that have adopted or want to adopt Kubernetes but possess existing Virtual Machine-based workloads that cannot be easily containerized. (source)

We would actually go a bit further than the statement above, because on Killercoda we used KubeVirt for a brand-new software architecture. Containers are not the solution to every problem, and being able to manage VMs like Pods inside K8s is pretty nice!

>>> Test KubeVirt on Killercoda <<<

Why use VMs instead of containers?

Killercoda uses containers for its main infrastructure, but not for the interactive learning environments that users get access to.

Containers are great for trusted workloads because they’re very fast and can share precious resources like memory. But two containers running on the same host still share the same Linux kernel, and kernel CVEs surface regularly.

VMs provide much better security and encapsulation, but they are slower and more resource-heavy. If you allow untrusted users to run applications inside your infrastructure, then you need (among many other measures) a hard layer of encapsulation, hence VMs.

How VMs work with KubeVirt

KubeVirt provides K8s resources like VirtualMachineInstance. Creating this resource causes a Pod and a connected VM to be created automatically. This is amazing because it means we can treat VMs (mostly) like normal Pods.
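As an illustration, a minimal VirtualMachineInstance manifest could look roughly like this (the name and containerDisk image are placeholders for illustration, not Killercoda’s actual configuration):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: ubuntu-demo            # hypothetical name, for illustration only
spec:
  domain:
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 1Gi
  volumes:
    - name: containerdisk
      containerDisk:           # ephemeral disk baked into a container image, no PVC needed
        image: quay.io/containerdisks/ubuntu:22.04
```

Applying this with kubectl makes KubeVirt spin up a virt-launcher Pod, which in turn starts the actual VM.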

But how does this even work? Imagine we have VMs, on which we install Kubernetes, and inside Kubernetes we install KubeVirt. Where will KubeVirt then create the new VMs that are treated like Pods? Let’s have a look at the following image:

In the image above we see a one-node Kubernetes cluster. Pod 1 and Pod 2 are normal Pods. Pod 3 and Pod 4 are KubeVirt Pods. VM Pod 3 and VM Pod 4 are nested VMs, which KubeVirt creates by communicating with KVM.

KubeVirt needs hardware virtualization support, which makes it possible to create nested VMs inside a VM. Nested virtualization is available on most dedicated servers and also on cloud providers like GCP.
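On an Intel host you can check roughly like this whether nested virtualization is enabled (a sketch; on AMD the module is kvm_amd, and paths can vary by distribution):

```shell
# "Y" or "1" means nested virtualization is enabled (Intel; use kvm_amd on AMD)
cat /sys/module/kvm_intel/parameters/nested

# /dev/kvm must exist for KubeVirt to use hardware virtualization
ls -l /dev/kvm
```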

How Killercoda uses KubeVirt

A custom killercoda-operator (written in Golang) communicates with KubeVirt the normal K8s way: by creating and deleting resources inside the cluster.

If you open, for example, the Ubuntu Playground, a request is sent to a Kubernetes cluster near your location. Inside that K8s cluster, KubeVirt runs together with custom applications.

The killercoda-operator receives the request “new Ubuntu VM” and creates a new VirtualMachineInstance resource, which is available once KubeVirt is installed. This makes KubeVirt create a nested VM.

When the user closes the browser tab, the connection abort is reported to the killercoda-operator, which simply deletes the VirtualMachineInstance resource. KubeVirt then deletes the nested VM and all underlying resources.
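The lifecycle described above boils down to ordinary resource operations. A hand-driven sketch of what the operator automates (manifest filename and resource name are hypothetical):

```shell
# create the VMI: KubeVirt reacts by starting a virt-launcher Pod plus the nested VM
kubectl apply -f ubuntu-vmi.yaml

# wait until the VM is up ("vmi" is the short name for virtualmachineinstances)
kubectl wait vmi/ubuntu-demo --for=condition=Ready --timeout=120s

# user closed the tab: deleting the VMI tears down the Pod and the nested VM
kubectl delete vmi ubuntu-demo
```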

How KubeVirt can run on Killercoda

KubeVirt runs on Killercoda runs on KubeVirt

The answer is relatively simple: Killercoda provides completely isolated VMs. It generally doesn’t matter whether you take a GCP, AWS or Killercoda VM; you just install, for example, K3s on it and then KubeVirt.
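A sketch of that flow on a fresh VM, following the official K3s and KubeVirt install instructions (the version is pinned only as an example; check for the latest release):

```shell
# install a single-node K3s cluster
curl -sfL https://get.k3s.io | sh -
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# install the KubeVirt operator and create the KubeVirt custom resource
export VERSION=v1.2.0   # example version, check the latest release
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl create -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"
```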

But Killercoda VMs don’t support nested virtualization. For such cases KubeVirt provides software emulation, which is great for testing but probably too slow for production usage.
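Software emulation can be switched on through the KubeVirt custom resource; a sketch based on KubeVirt’s documented useEmulation setting:

```shell
# tell KubeVirt to fall back to QEMU software emulation instead of KVM
kubectl -n kubevirt patch kubevirt kubevirt --type=merge \
  -p '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'
```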

Why is Killercoda so amazingly fast?

(sorry for the self loving title)

Killercoda always keeps a certain number of VMs running, for example Ubuntu or Kubernetes. We call this preheating. If you open a Killercoda scenario and a preheated VM is available, it’ll be assigned to you directly, without wait time.

If there are no preheated VMs available, then a new one is created on demand. It takes ~20 seconds to create a new running+ready Ubuntu environment and ~30 seconds to create a new running+ready two-node Kubernetes cluster.

The biggest speed and reliability factor is probably that we stay away from volumes (PV/PVC). Volume creation can be slow and is often unreliable. If you need to create and delete thousands of VMs in a short time, you have to get rid of any troublemaking components.

KubeVirt works very well together with CDI (Containerized Data Importer), and if you’re starting your journey you should definitely use it. It will probably also work great in your production use case. But on Killercoda we decided not to use CDI or volumes at all. To achieve this we actually run a customized KubeVirt version.

Customized KubeVirt version

For speed (no use of volumes) and deeper integration we maintain a customized KubeVirt version. Among other things, our custom version allows us to create VPNs between VMs. This is for example necessary in multi-node environments like the Kubernetes Playground, where we want VMs to be able to communicate with each other. More on how to build KubeVirt here.

Test KubeVirt on Killercoda!

KubeVirt is an amazing project with a great architecture and codebase. Test it now on Killercoda:

https://killercoda.com/kubevirt

The End

Thanks for reading!

--

killercoda.com | killer.sh (CKS CKA CKAD Simulator) | Software Engineer, Infrastructure Architect, Certified Kubernetes