Bottom line up front:

cluster/vagrant/sync_build.sh
cluster/kubectl.sh delete -f manifests/virt-controller.yaml
cluster/kubectl.sh create -f manifests/virt-controller.yaml
When reworking code (refactoring or rewriting) you want to make sure the tests still pass. While unit tests run quickly and within the code tree, functional tests require a more dedicated setup. Since the time to deploy a full live cluster is non-trivial, we want to be able to redeploy only the component we’ve been working on. In the case of virt-controller, this means a service, a deployment, and a single pod, all defined by manifests/virt-controller.yaml.
To update a deployment, we need to make sure that the next time the containers run, they contain the new code. ./cluster/vagrant/sync_build.sh does a few things to make that happen: it compiles the Go code, rebuilds the containers, and uploads them to the image repositories on the vagrant machines.
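As a rough sketch of those three phases (the commands and the push helper here are illustrative, not the script’s actual contents):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the phases a sync/build script performs.
# The real cluster/vagrant/sync_build.sh differs in detail; this
# version dry-runs each step by printing it instead of executing it.
set -euo pipefail

run() { echo "+ $*"; }    # dry-run: print the step instead of executing it

run make build                        # 1. compile the Go code
run ./hack/build-docker.sh build      # 2. rebuild the container images
for node in master node0; do          # 3. upload to each vagrant machine
    run push-images-to "$node"        #    (push-images-to is hypothetical)
done
```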
All of these steps can be done using the single line:
make vagrant-deploy
but it will take a while. I ran it using the time command and it took 1m9.724s.
Timed individually, the steps break down like this:

make alone takes 0m5.685s,
./cluster/vagrant/sync_build.sh takes 0m24.773s,
cluster/kubectl.sh delete -f manifests/virt-controller.yaml takes 0m3.265s,
and cluster/kubectl.sh create -f manifests/virt-controller.yaml takes 0m0.203s.

Running the steps individually, I find, keeps me from getting distracted and losing the zone.
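Since those three commands form a unit, they could be wrapped in a small shell function (a sketch; the function name is my own, and it assumes you are at the top of the kubevirt tree):

```shell
# Sketch: one command to rebuild and redeploy just virt-controller.
# Assumes the current directory is the top of the kubevirt tree; the
# function name is made up for this example.
redeploy_virt_controller() {
    ./cluster/vagrant/sync_build.sh &&
    cluster/kubectl.sh delete -f manifests/virt-controller.yaml &&
    cluster/kubectl.sh create -f manifests/virt-controller.yaml
}
```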
Running make docker is very slow, as it regenerates all of the docker containers. If you don’t really care about all of them, you can generate just virt-controller by running:
./hack/build-docker.sh build virt-controller
That takes only 0m1.521s.
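All of the numbers above came from the time command; if you prefer a one-line summary per step instead of time’s real/user/sys output, a tiny helper like this works (my own convenience function, not part of the tree):

```shell
# Print a one-line elapsed-time summary for any command.
# A convenience function of my own, not something in the kubevirt tree.
# Relies on GNU date's %N (nanoseconds) format.
elapsed() {
    local start end
    start=$(date +%s%N)
    "$@"
    end=$(date +%s%N)
    echo "$* took $(( (end - start) / 1000000 )) ms" >&2
}

# Example: elapsed ./hack/build-docker.sh build virt-controller
```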
So, the gating factor seems to be the roughly 25 second run time of ./cluster/vagrant/sync_build.sh. Not ideal for rapid development, but not horrible.