It has been enjoyable learning the KubeVirt code base and coding in Go. However, unless the code gets deployed to servers, no one will use it in production. I’ve been learning OpenShift as an integration point for KubeVirt. Here are my notes for getting it up and running. This is not quite production grade, but it should help in writing a proper deployment mechanism.
Please note, as of KubeVirt v0.17 installing KubeVirt directly via manifests is deprecated. KubeVirt is now deployed using the operator pattern.
https://kubevirt.io/user-guide/docs/latest/administration/intro.html
The rest of this document is maintained for historical value only.
I deployed using openshift-ansible. I originally had to apply one patch, but it has since been merged!

Here is my inventory file for a two-node deployment.
    [all]
    munchlax
    dialga

    [all:vars]
    ansible_ssh_user=ansible
    containerized=true
    openshift_deployment_type=origin
    ansible_python_interpreter=/usr/bin/python3
    openshift_release=v1.5
    openshift_image_tag=v1.5.1
    openshift_public_ip=10.0.0.201
    openshift_master_ingress_ip_network_cidr=10.0.1.0/24

    [masters]
    munchlax

    [masters:vars]
    ansible_become=true

    [nodes]
    munchlax openshift_node_labels="{'region': 'infra'}"
    dialga openshift_node_labels="{'region': 'infra'}"

    [nodes:vars]
    ansible_become=true
    enable_excluders=false
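Before kicking off the full playbook, it is worth a quick sanity check that Ansible can reach both hosts with this inventory (this assumes the inventory file lives at the path used in the playbook command below):

    # Ping both nodes through Ansible; both should report SUCCESS.
    ansible -i ~/devel/local-openshift-ansible/inventory.ini all -m ping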
Running the playbook like this:
    ansible-playbook -i ~/devel/local-openshift-ansible/inventory.ini ~/devel/openshift-ansible/playbooks/byo/config.yml
I should have modified the inventory to make the master node schedulable, but it can be done after the fact like this:
    oadm manage-node munchlax --schedulable=true
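To verify the change took, list the nodes:

    # munchlax should now show plain "Ready" rather than
    # "Ready,SchedulingDisabled".
    oc get nodes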
I had to edit the manifests in KubeVirt. Then I moved them over to the master, in order to use the service account there to create the various resources:
    scp /home/ayoung/go/src/kubevirt.io/kubevirt/manifests/* ansible@munchlax:manifests/
Note the differences from the source:
    $ diff -u virt-api.yaml.in virt-api.yaml
    --- virt-api.yaml.in    2017-06-06 12:01:46.077594982 -0400
    +++ virt-api.yaml       2017-06-07 10:47:03.048151082 -0400
    @@ -7,7 +7,7 @@
         - port: 8183
           targetPort: virt-api
       externalIPs :
    -    - "{{ master_ip }}"
    +    - "192.168.200.2"
       selector:
         app: virt-api
     ---
    @@ -23,17 +23,18 @@
         spec:
           containers:
             - name: virt-api
    -          image: {{ docker_prefix }}/virt-api:{{ docker_tag }}
    +          image: kubevirt/virt-api:latest
               imagePullPolicy: IfNotPresent
               command:
                 - "/virt-api"
                 - "--port"
                 - "8183"
                 - "--spice-proxy"
    -            - "{{ master_ip }}:3128"
    +            - "10.0.0.30:3128"
               ports:
                 - containerPort: 8183
                   name: "virt-api"
                   protocol: "TCP"
           nodeSelector:
    -        kubernetes.io/hostname: master
    +        kubernetes.io/hostname: munchlax
    +
All of the referenced images need to be “latest” instead of “devel”.
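Rather than hand-editing every file, a small loop can handle the image substitutions. This is just a sketch, assuming all the templates follow the virt-api.yaml.in naming and use the same {{ docker_prefix }} and {{ docker_tag }} placeholders; the {{ master_ip }} edits I still did by hand, since I used different values in different places:

    # Sketch: render each *.yaml.in template, pointing the images at the
    # published "latest" tag instead of the devel build.
    for f in manifests/*.yaml.in; do
      sed -e 's|{{ docker_prefix }}|kubevirt|g' \
          -e 's/{{ docker_tag }}/latest/g' \
          "$f" > "${f%.in}"
    done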
For both libvirt and virt-handler I use the privilegeduser service account. The master node (munchlax) has a ~/.kubeconf file set up to allow operations on the kube-system namespace.
    # For libvirt we need a service user:
    oc create serviceaccount -n default privilegeduser
    oc adm policy add-scc-to-user privileged -n default -z privilegeduser
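To double-check that the grant worked, the privileged SCC’s user list should now include the service account:

    # Expect system:serviceaccount:default:privilegeduser in the users list.
    oc get scc privileged -o yaml | grep privilegeduser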
Starting the services in dependency order is not necessary, but I do it anyway:
    kubectl create -f vm-resource.yaml
    kubectl create -f migration-resource.yaml
    kubectl create -f virt-api.yaml
    kubectl create -f virt-controller.yaml
    kubectl create -f libvirt.yaml
    kubectl create -f virt-handler.yaml
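After that, all of the KubeVirt pods should come up; libvirt and virt-handler are daemon sets, so expect one of each per node. A quick way to watch them (grepping broadly, since I have not pinned down which namespace each lands in):

    # Everything should eventually reach Running.
    kubectl get pods --all-namespaces | grep virt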
As of this writing, KubeVirt only supports VMs in the default namespace. The VM launches using a few iSCSI volumes, so I need to create the iSCSI volumes in the same namespace as the VM:
    kubectl create -f iscsi-demo-target.yaml --namespace default
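A quick check that the demo target is actually up (assuming the pod and service names contain “iscsi”, following the manifest’s naming):

    # The demo target pod should be Running, and its service should have a
    # cluster IP for the VMs to mount from.
    kubectl get pods,svc -n default | grep iscsi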
I’m going to regret this, but… I’m overpowering the ayoung user to be god on the cluster.
    oc create user ayoung
    oadm policy add-role-to-user edit ayoung
    # Don't trust ayoung...
    oadm policy add-cluster-role-to-user cluster-admin ayoung
    # he can't even keep track of his car keys and cellphone, and
    # you make him admin on your cluster?
    oadm policy add-role-to-user admin system:serviceaccount:kube-system:default
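As a sanity check on the damage, oadm can report who holds a given permission, so ayoung should now show up among the people who can do just about anything:

    # cluster-admin grants everything, so this should list ayoung.
    oadm policy who-can delete nodes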
Try to create a VM:
    oc login
    kubectl config set-context $(kubectl config current-context) --namespace=default
    kubectl create -f ~/go/src/kubevirt.io/kubevirt/cluster/vm.yaml
    # wait for a bit and then
    kubectl get vms -o json | jq '. | .items[0] | .status | .phase'
    "Running"
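Rather than guessing how long “a bit” is, a small poll loop built on that same jq query (my own addition) waits for the phase to flip:

    # Poll until the first VM reports phase Running.  jq -r prints the raw
    # string, without the quotes shown above.
    until [ "$(kubectl get vms -o json | jq -r '.items[0].status.phase')" = "Running" ]; do
        echo "waiting for VM..."
        sleep 5
    done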
Still todo: Spice, console, and migrations.