Symfony4 Kubernetes Local Development Environment #2 Looking at Speed Issues

Kim Wuestkamp
4 min read · Apr 2, 2019

UPDATE: because of volume speed issues on OSX (and ksync not always working smoothly) and high CPU usage (1 2 3 …) when running Kubernetes locally, I sadly cannot recommend k8s for local development at the moment. This article series was my exploration into whether it's feasible. If you have tips or suggestions, please let me know. I'm back to running a docker-compose and Kubernetes combination.

In the first part of this series we built a simple Symfony4 + local Kubernetes setup and got it running. But what if we want to actually develop now? The current workflow would be:

  1. change file in IDE
  2. run/build.sh && run/down.sh && run/up.sh

This sounds slow! Yes, we would need to rebuild our containers after every change, which is a no-go for local development. So let's skip the build step and transfer our changed code directly into our pods!
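
For context, here is a rough sketch of what scripts like run/build.sh, run/down.sh and run/up.sh typically boil down to. The image names and file paths are illustrative, the real scripts live in the example repo from part 1:

# run/build.sh: rebuild the application images
docker build -t local/sf-php -f docker/php/Dockerfile .
docker build -t local/sf-nginx -f docker/nginx/Dockerfile .

# run/down.sh: tear the current deployment down
kubectl delete -f k8s/ --ignore-not-found

# run/up.sh: deploy everything again
kubectl apply -f k8s/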

TL;DR

There are various sync strategies listed over at docker-sync. The short version of what follows:

  - Volumes: bad
  - OSXFS cached volumes: bad
  - NFS: bad
  - Linux: good

The solution for Mac/Windows seems to be Ksync (which we will use in the upcoming part 3). But experiencing these speed issues yourself and playing around with the alternatives can't hurt! So let's have a look.

Use a Volume Share

We will work with the example repo git@github.com:wuestkamp/kubernetes-local-dev.git branch part2.

Wouldn't it be nice if it were that easy? I mean, it is easy! But incredibly slow.

This is the changed sf-deployment.yaml to do so. I commented a lot out, for example we no longer need the initContainer. We create a new hostPath volume and mount it into both the nginx and php containers.
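
The relevant part of the changed deployment looks roughly like this. Treat it as a sketch: the exact names, images and mount paths come from the part2 branch and may differ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sf
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sf
  template:
    metadata:
      labels:
        app: sf
    spec:
      containers:
        - name: nginx
          image: local/sf-nginx            # placeholder image name
          volumeMounts:
            - name: symfony-code
              mountPath: /var/www/symfony
        - name: php
          image: local/sf-php              # placeholder image name
          volumeMounts:
            - name: symfony-code
              mountPath: /var/www/symfony
      volumes:
        - name: symfony-code
          hostPath:
            path: /path/to/kubernetes-local-dev/symfony   # absolute path on your machine
            type: Directory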

Now run run/down.sh followed by run/up.sh and open your browser. Let's compare some numbers:

Timings compared (screenshots in the original post):

  - Kubernetes: warmed cache
  - Kubernetes: empty cache
  - Symfony's built-in server (bin/console server:run): warmed cache
  - Symfony's built-in server: empty cache

Every time files change (and assets get compiled), many files in the cache need to be regenerated and transferred via the volume mount to our host filesystem. This is slow.

Why? You can read up on it yourself or take the short explanation: you're not using Linux. These speed issues only exist on Mac and Windows, where Docker runs inside a VM and every file operation on the shared mount has to cross that boundary (osxfs on Mac), adding latency to each of those thousands of cache files.

So let’s explore some solutions next!

OSXFS Caching

With Docker you can enable osxfs caching, but this option does not (yet) exist for Kubernetes volume mounts.

Though there seems to be a trick: you mount the same directory directly via Docker, with the cached flag, into a simple dummy container, and Kubernetes will use the same cached mount as well. The order of who mounts first doesn't matter! We have these consistency options:

  - consistent: host and container are always in sync (default, slowest)
  - cached: the host's view is authoritative, reads in the container may lag slightly
  - delegated: the container's view is authoritative, writes may show up on the host with a delay

So in our case we do:

docker run -t -d -v /path/to/my/project/kubernetes-local-dev/symfony:/project:cached alpine sh

This gives us roughly 3x faster handling with a cleared cache. Without this hack we were at almost 15 seconds.

Still… not workable ;)

Linux

Tried it with Ubuntu 16.04 and Minikube running on a 2013 MacBook Air (my old development machine!).
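
One thing to keep in mind: depending on the minikube driver, the hostPath in the deployment refers to the filesystem of the minikube VM, not your laptop, so the project may first need to be shared into the VM. A sketch with placeholder paths (with --vm-driver=none this step is not needed, since pods run directly on the host):

minikube start
# keep this running in a separate terminal
minikube mount /path/to/kubernetes-local-dev/symfony:/path/to/kubernetes-local-dev/symfony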

Cleared cache and warmed cache timings (screenshots in the original post):

Linux? You nice!

NFS

Then there is the possibility of using NFS volumes. I didn't look into this solution myself because others already did: it seems to be too slow as well.
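
For completeness, an NFS-backed volume in Kubernetes would look roughly like this. The server address and export path are placeholders, not values from this article:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: symfony-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.99.1      # e.g. the host as seen from the minikube VM
    path: /exports/symfony    # NFS export containing the project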

End

Ksync does "the same thing" as Docker-Sync: it continuously syncs files from localhost into the containers in your cluster. We will look at this in PART THREE!
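
As a rough preview, based on ksync's README rather than on part 3 itself (the selector and container path below are placeholders):

# install the ksync DaemonSet into the cluster
ksync init

# keep the local watcher running in a separate terminal
ksync watch

# sync the local project folder into pods matching the selector
ksync create --selector=app=sf $(pwd)/symfony /var/www/symfony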


Kim Wuestkamp

killercoda.com | killer.sh (CKS CKA CKAD Simulator) | Software Engineer, Infrastructure Architect, Certified Kubernetes