Symfony4 Kubernetes Local Development Environment #2 Looking at Speed Issues
UPDATE: because of volume speed issues on OSX (and ksync not always working smoothly) and high CPU usage (1 2 3 …) when running Kubernetes locally, I sadly cannot recommend k8s for local development at the moment. This article series was my exploration into whether it's feasible. If you have tips or suggestions, please let me know. I'm back to running a docker-compose and Kubernetes combination.
In the first part of this series we built a simple Symfony4 + local Kubernetes setup and got it running. But what if we want to develop now? The workflow currently would be:
- change a file in the IDE
- run/build.sh && run/down.sh && run/up.sh
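For context, the run/ helper scripts from part 1 roughly wrap docker and kubectl like this (the image names and manifest paths here are my assumptions, not the repo's exact contents — check the real scripts):

```shell
# Rough sketch of what the run/ helper scripts do. Image names and
# manifest paths are placeholders; adapt them to the example repo.
build() {
  docker build -t symfony-php docker/php
  docker build -t symfony-nginx docker/nginx
}

down() {
  kubectl delete -f kube/ --ignore-not-found
}

up() {
  kubectl apply -f kube/
}
```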
This sounds slow! Yes, we would need to rebuild our containers after every change, which is a no-go for local development. So let's skip the build step and transfer our changed code directly into our pods!
There are various solutions listed here at docker-sync.
OSXFS cached volumes: bad
The solution for Mac/Win seems to be Ksync (which we will cover in the upcoming part 3). But experiencing these speed issues yourself and playing around with the alternatives can't hurt! So let's have a look.
Use a Volume Share
We will work with the example repo
Wouldn’t it be nice if it were that easy? I mean, it is easy! But incredibly slow.
This is the changed sf-deployment.yaml. I commented a lot out; for example, we don't need the initContainer anymore. We create a new hostPath volume and mount it in both the nginx and php containers.
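A minimal sketch of the relevant part of the deployment (image names and the host path are placeholders — see the example repo for the full file):

```yaml
# Sketch: one hostPath volume shared by both containers.
# Image names and the path below are placeholders.
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      volumeMounts:
        - name: code
          mountPath: /var/www/symfony
    - name: php
      image: php:7.2-fpm-alpine
      volumeMounts:
        - name: code
          mountPath: /var/www/symfony
  volumes:
    - name: code
      hostPath:
        path: /path/to/my/project/kubernetes-local-dev/symfony
        type: Directory
```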
Run run/down.sh followed by run/up.sh and open your browser. Let's compare some numbers:
- Kubernetes: warmed cache
- Kubernetes: empty cache
- Symfony's server (bin/console server:run): warmed cache
- Symfony's server: empty cache
Every time files change (and assets get compiled), many files in the cache need to be re-generated and transferred via the volume mount to our host filesystem. This is slow.
Why? You can read up on the details yourself, or just take the short explanation: you’re not using Linux. These speed issues only exist on Mac and Windows.
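You can feel this yourself with a crude micro-benchmark: writing a couple thousand small files, roughly what a Symfony cache warmup does. Run it once on the host and once inside a container whose target directory sits on an osxfs mount, then compare the timings (the directory name is just a placeholder):

```shell
#!/bin/sh
# Crude micro-benchmark: write many small files, similar to what a
# Symfony cache warmup does. Compare host vs. osxfs-mounted directory.
BENCH_DIR="./osxfs-bench"
mkdir -p "$BENCH_DIR"

start=$(date +%s)
i=1
while [ "$i" -le 2000 ]; do
  echo "cache entry $i" > "$BENCH_DIR/entry_$i.php"
  i=$((i + 1))
done
end=$(date +%s)

echo "wrote 2000 files in $((end - start))s"
```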
So let’s explore some solutions next!
Though there seems to be a trick: you mount the same volume directly via Docker in a simple dummy container, and Kubernetes will use the same cached mount as well. The order of who mounts first doesn’t matter! For osxfs mounts we have these consistency options: consistent (the default), cached, and delegated.
So in our case we do:
docker run -t -d -v /path/to/my/project/kubernetes-local-dev/symfony:/project:cached alpine sh
This gives us roughly 3x faster handling with a cleared cache. Without this hack we were at almost 15 seconds.
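If you want to use this trick regularly, two tiny helper functions keep the dummy container manageable (the container name and project path are my own inventions):

```shell
# Helpers around the dummy-container trick: as long as this container
# exists, Docker for Mac keeps a cached osxfs mount for the path, and
# Kubernetes pods mounting the same path benefit from it too.
PROJECT_DIR="${PROJECT_DIR:-$PWD/symfony}"

start_cache_container() {
  docker run --name osxfs-cache -t -d \
    -v "$PROJECT_DIR":/project:cached \
    alpine sh
}

stop_cache_container() {
  docker rm -f osxfs-cache
}
```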
Still… not workable ;)
I tried it with Ubuntu 16.04 and Minikube running on a 2013 MacBook Air (my old development machine!).
Linux? You nice!
Then there is the possibility of using NFS:
NFS Native Support
This is the error I get: ERROR: for plain-docker-nfsmount_api_1 UnixHTTPConnectionPool(host='localhost', port=None)…
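For reference, a native NFS volume in docker-compose is declared roughly like this (the address, mount options, and export path below are all placeholders):

```yaml
# Sketch of a docker-compose volume backed by NFS. Address, options,
# and the export path are placeholders.
volumes:
  symfony-code:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=host.docker.internal,rw,nolock,nfsvers=3"
      device: ":/path/to/my/project/kubernetes-local-dev/symfony"
```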
But I didn’t look into this solution yet, because others already did:
Alternatives - docker-sync 0.5.10 documentation
Dedicated container mounts a local directory via osxfs and runs Unison to synchronize this mount with a Docker volume…
It seems to be too slow as well.
Ksync does “the same thing” as Docker-Sync and uses rsync to transfer files from localhost to your cluster. We will look at this in PART THREE!