Network setup for a custom Qemu Virtual Machine

After building a custom QEMU, there are a couple of ways to run a VM and get into it. The older approach to VM management is to create a block device, run the VM with a boot device, do a full install, and log in on the serial console. However, if you run the QEMU/KVM machine from the command line, hitting Ctrl-C will kill your VM, which is annoying. I have found it worthwhile to set up networking and then SSH in to the machine.

My notes here suck. I am going to try to document what I have working here, and, over time, reverse engineer how I got here.

This is the command I use to run my virtual machine. This is on an AmpereOne test machine in my lab. You probably don’t have access to AArch64 machines at this scale. Maybe someday….

../qemu/build/qemu-system-aarch64 \
        -machine virt \
        -enable-kvm \
        -m 16G \
        -cpu host \
        -smp 16 \
        -nographic \
        -monitor telnet:127.0.0.1:1234,server,nowait \
        -bios /usr/share/edk2/aarch64/QEMU_EFI.fd \
        -drive if=none,file=../virt/vms/Fedora-Cloud-Base-Generic-43-1.6.aarch64.qcow2,id=hd0 \
        -device vhost-vsock-pci,guest-cid=22 \
        -device virtio-blk-device,drive=hd0,bootindex=0 \
        -object memory-backend-file,id=mem,size=16G,mem-path=/dev/shm,share=on \
        -numa node,memdev=mem  \
        -chardev socket,id=char0,path=/tmp/virtiofs_socket  \
        -virtfs local,path=/root/adam/linux,mount_tag=mylinux,security_model=passthrough,id=fs0 \
        -netdev bridge,id=vm0,br=virbr0 \
        -device virtio-net-pci,netdev=vm0,mac=52:54:00:70:0C:01 \
        -device virtio-scsi-device \
        -qmp unix:/tmp/qmp.sock,server,nowait \
        2>&1 | tee /tmp/qemu.log
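The `-monitor` and `-qmp` flags above are what make the Ctrl-C problem survivable: instead of killing the process, you can ask the guest to power down. Here is a sketch of how those two endpoints can be used; it assumes `socat` is installed, and the telnet port and socket path match the command line above:

```shell
# Attach to the human monitor (type "quit" there to hard-stop the VM)
telnet 127.0.0.1 1234

# Or ask the guest for a clean ACPI shutdown over the QMP socket.
# QMP requires the capabilities handshake before any other command.
socat - UNIX-CONNECT:/tmp/qmp.sock <<'EOF'
{ "execute": "qmp_capabilities" }
{ "execute": "system_powerdown" }
EOF
```

Inside the guest, the `-virtfs` share can be mounted over 9p using the `mylinux` tag from the command line, e.g. `mount -t 9p -o trans=virtio mylinux /mnt`.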

The VM is running from a cloud image I downloaded from Fedora. To get the SSH keys into the machine, I started by running it with libvirt and virt-install:

virt-install --name fire43  --os-variant fedora43 --disk  ./Fedora-Cloud-Base-Generic-43-1.6.aarch64.qcow2 --import  --cloud-init root-ssh-key=/root/.ssh/id_rsa.pub
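Once cloud-init has injected the key, the libvirt domain can be stopped and removed so the same qcow2 can be handed to qemu-system-aarch64 directly. A sketch of that step, using the domain name from the virt-install call above:

```shell
virsh shutdown fire43     # ask the guest for a clean shutdown
virsh undefine fire43     # drop the libvirt definition; the qcow2 stays on disk
```

`virsh undefine` leaves the disk image in place; adding `--remove-all-storage` would delete it, which is not what we want here.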

Here is the bridge setup on the hypervisor:

5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:42:a1:5a:9b:36 brd ff:ff:ff:ff:ff:ff
    inet 10.76.112.72/24 brd 10.76.112.255 scope global dynamic noprefixroute virbr0
       valid_lft 12409sec preferred_lft 12409sec
    inet6 fe80::7098:f305:ad32:181e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

This took a bunch of trial and error to get right. I don’t know how much of it is specific to my environment, but I do know that the bridge’s IP address is the one I use to log in to the machine.

Here is how this is stored in /etc/NetworkManager/system-connections/virbr0.nmconnection:

[connection]
id=virbr0
uuid=8074697c-fdbb-48ad-888a-a64c4468e91c
type=bridge
interface-name=virbr0

[ethernet]

[bridge]

[ipv4]
method=auto

[ipv6]
addr-gen-mode=default
method=auto

[proxy]

And the ethernet connection in /etc/NetworkManager/system-connections/enP5p1s0f1np1.nmconnection:


[connection]
id=enP5p1s0f1np1
uuid=097663c4-765c-4678-aaf3-761a1af2bb72
type=ethernet
interface-name=enP5p1s0f1np1
timestamp=1760474378

[ethernet]

[ipv4]
method=auto

[ipv6]
addr-gen-mode=eui64
method=auto

[proxy]

I know I got here by running nmcli commands, but they have long since fallen out of my bash history, and I did not write them down.
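For the record, a bridge like the one above can be built with nmcli commands along these lines. This is a hedged reconstruction, not the exact history: the profiles shown above do not record a `master` for the ethernet connection, so the enslaving step in particular is a guess.

```shell
# Create the bridge and let it pick up DHCP, matching virbr0.nmconnection
nmcli connection add type bridge ifname virbr0 con-name virbr0 ipv4.method auto

# Attach the physical NIC to the bridge (guessed step -- see note above)
nmcli connection add type bridge-slave ifname enP5p1s0f1np1 master virbr0

# Bring the bridge up; the port comes up with it
nmcli connection up virbr0
```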

One thing I can tell from the IP address my VM gets is that it is talking to the same DHCP server as the hypervisor.
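Because the guest’s MAC address is pinned on the qemu command line, its lease can be found from the hypervisor once the guest has booted. A sketch, using the MAC and bridge from the command line above; the actual guest IP will depend on your DHCP server:

```shell
# Look up the guest's address in the neighbour table on the bridge
ip neigh show dev virbr0 | grep -i '52:54:00:70:0c:01'

# Then SSH in with the key cloud-init installed
ssh root@<guest-ip>
```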

I recently destroyed my previous VM, which had NFS set up. I would like to get that working again, as it let me sync the kernel between the hypervisor and the VM. But that is a tale for another day.
