Ansible, Azure, and Managed Disks

Many applications have a data directory, usually because they embed a database. For the set I work with, this includes Red Hat IdM/FreeIPA, CloudForms/ManageIQ, Ansible Tower/AWX, and OpenShift/Kubernetes. It's enough of a pattern that I have Ansible code for pairing a set of newly allocated volumes with a set of previously built virtual machines.

I’ll declare a set of variables like this:

cluster_hosts:
  - {name: idm,    flavor:  m1.medium}
  - {name: sso,    flavor:  m1.medium}
  - {name: master0, flavor:  m1.xlarge}
  - {name: master1, flavor:  m1.xlarge}
  - {name: master2, flavor:  m1.xlarge}
  - {name: node0,  flavor:  m1.medium}
  - {name: node1,  flavor:  m1.medium}
  - {name: node2,  flavor:  m1.medium}
  - {name: bastion,  flavor:  m1.small}
cluster_volumes:
  - {server_name: master0, volume_name: master0_var_volume, size: 30}
  - {server_name: master1, volume_name: master1_var_volume, size: 30}
  - {server_name: master2, volume_name: master2_var_volume, size: 30}

In OpenStack, the code looks like this:

- name: create servers
  os_server:
    cloud: "{{ cloudname }}"
    state: present
    name: "{{ item.name }}.{{ clustername }}"
    image: rhel-guest-image-7.4-0
    key_name: ayoung-pubkey
    timeout: 200
    flavor: "{{ item.flavor }}"
    security_groups:
      - "{{ securitygroupname }}"
    nics:
      -  net-id:  "{{ osnetwork.network.id }}"
         net-name: "{{ netname }}_network" 
    meta:
      hostname: "{{ item.name }}.{{ clustername }}"
      fqdn: "{{ item.name }}.{{ clustername }}"
    userdata: |
      #cloud-config
      hostname: "{{ item.name }}.{{ clustername }}"
      fqdn:  "{{ item.name }}.{{ clustername }}"
      write_files:
        - path: /etc/sudoers.d/999-ansible-requiretty
          permissions: '0440'
          content: |
            Defaults:{{ netname }} !requiretty
  with_items: "{{ cluster_hosts }}"
  register: osservers
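
As an aside, the registered osservers results are also handy for pulling the new machines into the in-memory inventory so later plays can reach them. A rough sketch, assuming the module reports the address as item.openstack.public_v4 (the exact field varies between module versions):

- name: add the new servers to the in-memory inventory
  add_host:
    name: "{{ item.item.name }}.{{ clustername }}"
    groups: cluster
    ansible_host: "{{ item.openstack.public_v4 }}"  # assumed field name; check the registered output for your version
  with_items: "{{ osservers.results }}"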

- name: create openshift var volumes
  os_volume:
    cloud: "{{ cloudname }}"
    size: "{{ item.size }}"
    display_name: "{{ item.volume_name }}"
  register: openshift_var_volume
  with_items: "{{ cluster_volumes }}"

- name: attach var volumes to the OpenShift masters
  os_server_volume:
    cloud: "{{ cloudname }}"
    state: present
    server: "{{ item.server_name }}.{{ clustername }}"
    volume:  "{{ item.volume_name }}"
    device: /dev/vdb
  with_items: "{{ cluster_volumes }}"

I wanted to do something comparable with Azure. My first take was this:

- name: Create virtual machine
  azure_rm_virtualmachine:
    resource_group: "{{ az_resources }}"
    name: "{{ item.name }}"
    admin_username: "{{ az_username }}"
    admin_password: "{{ az_password }}"
    image:
      offer: RHEL
      publisher: RedHat
      sku: '7.3'
      urn: 'RedHat:RHEL:7.3:latest'
      version: '7.3.2017090723'
  with_items: "{{ cluster_hosts }}"
  register: az_servers

- name: create additional volumes
  azure_rm_managed_disk:
    name: "{{ item.volume_name }}"
    location: eastus
    resource_group: "{{ az_resources }}"
    disk_size_gb: "{{ item.size }}"
    managed_by: "{{ item.server_name }}"
  register: az_cluster_volumes
  with_items: "{{ cluster_volumes }}"

However, when I ran that, I got the error:

“Error updating virtual machine idm – Azure Error: OperationNotAllowed
Message: Addition of a managed disk to a VM with blob based disks is not supported.
Target: dataDisk”

I was not able to reproduce the error using the CLI:

$ az vm create -g  ayoung_resources  -n IDM   --admin-password   e8f58a03-3fb6-4fa0-b7af-0F1A71A93605 --admin-username ayoung --image RedHat:RHEL:7.3:latest
{
  "fqdns": "",
  "id": "/subscriptions/362a873d-c89a-44ec-9578-73f2e492e2ae/resourceGroups/ayoung_resources/providers/Microsoft.Compute/virtualMachines/IDM",
  "location": "eastus",
  "macAddress": "00-0D-3A-1D-99-18",
  "powerState": "VM running",
  "privateIpAddress": "10.10.0.7",
  "publicIpAddress": "52.186.24.139",
  "resourceGroup": "ayoung_resources",
  "zones": ""
}
[ayoung@ayoung541 rippowam]$ az vm disk attach  -g ayoung_resources --vm-name IDM --disk  CFME-NE-DB --new --size-gb 100
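
For comparison, az vm create builds the VM with managed disks by default, which a query along these lines should confirm in the storage profile:

$ az vm show -g ayoung_resources -n IDM --query "storageProfile.osDisk.managedDisk"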

However, looking into the Ansible module source at https://github.com/ansible/ansible/blob/v2.5.0rc3/lib/ansible/modules/cloud/azure/azure_rm_virtualmachine.py#L1150:

if not data_disk.get('managed_disk_type'):
    data_disk_managed_disk = None
    disk_name = data_disk['storage_blob_name']
    data_disk_vhd = self.compute_models.VirtualHardDisk(uri=data_disk_requested_vhd_uri)

It looks like the module defaults to a blob-based disk if the “managed_disk_type” value is unset.

I added the following line:

    managed_disk_type: "Standard_LRS"

Thus, my modified Ansible task looks like this:

- name: Create virtual machine
  azure_rm_virtualmachine:
    resource_group: "{{ az_resources }}"
    name: "{{ item.name }}"
    managed_disk_type: "Standard_LRS"
    admin_username: "{{ az_username }}"
    admin_password: "{{ az_password }}"
    image:
      offer: RHEL
      publisher: RedHat
      sku: '7.3'
      urn: 'RedHat:RHEL:7.3:latest'
      version: '7.3.2017090723'
  with_items: "{{ cluster_hosts }}"
  register: az_servers

- name: create additional volumes
  azure_rm_managed_disk:
    name: "{{ item.volume_name }}"
    location: eastus
    resource_group: "{{ az_resources }}"
    disk_size_gb: "{{ item.size }}"
    managed_by: "{{ item.server_name }}"
  register: az_cluster_volumes
  with_items: "{{ cluster_volumes }}"

This time, the run completed successfully.
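
To spot-check the attachments from the command line, each disk's managedBy property should now point at its VM's resource ID; something like:

$ az disk show -g ayoung_resources -n master0_var_volume --query managedBy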
