Matching Create and Teardown in an Ansible Role

Nothing lasts forever, except some developer setups that no one seems to own and no one is willing to tear down. I’ve tried to build cleanup directly into my provisioning systems. One pattern I’ve noticed is that the same data is required both for building a cluster and for cleaning it up. When I built Ossipee, each task had both a create and a teardown stage. I want the same from Ansible. Here is how I’ve made it work thus far.

The main mechanism I use is a conditional include based on a variable. Here is the tasks/main.yml file for one of my roles:

---
- include_tasks: create.yml
  when: not teardown

- include_tasks: teardown.yml
  when: teardown
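To keep the role from failing when a calling playbook forgets to set the flag, the role can supply a default. A minimal sketch, assuming a defaults/main.yml is added to the role (this file is my suggestion, not part of the original):

```yaml
# defaults/main.yml for the provision role (hypothetical addition)
# Role defaults have the lowest variable precedence, so the
# playbooks' play-level vars (or -e on the command line) still win.
teardown: false
```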

I have two playbooks which call the same role. The playbooks/create.yml file:

---
- hosts: localhost
  vars:
    teardown: false
  roles:
    - provision

and the playbooks/teardown.yml file:

---
- hosts: localhost
  vars:
    teardown: true
  roles:
    - provision
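Because extra vars passed with -e have the highest precedence in Ansible, a single playbook could serve both purposes; the pair of playbooks above just makes the intent explicit. For example, assuming the playbooks/create.yml shown above:

```shell
# The JSON form of -e guarantees a boolean rather than the string "true".
# The extra var overrides the play-level "teardown: false", so the same
# play takes the teardown path instead.
ansible-playbook playbooks/create.yml -e '{"teardown": true}'
```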

All of the real work is done in the tasks/create.yml and tasks/teardown.yml files. For example, I need to create a bunch of Network options in Neutron in a particular (dependency driven) order. Teardown needs to be done in the reverse order. Here is the create fragment for the network pieces:

- name: int_network
  os_network:
    cloud: "{{ cloudname }}"
    state: present
    name: "{{ netname }}_network"
    external: false
  register: osnetwork

- os_subnet:
    cloud: "{{ cloudname }}"
    state: present
    network_name: "{{ netname }}_network"
    name: "{{ netname }}_subnet"
    cidr: 192.168.24.0/23
    dns_nameservers:
      - 8.8.8.7
      - 8.8.8.8

- os_router:
    cloud: "{{ cloudname }}"
    state: present
    name: "{{ netname }}_router"
    interfaces: "{{ netname }}_subnet"
    network: public

To tear this down, I can reverse the order:

- os_router:
    cloud: "{{ cloudname }}"
    state: absent
    name: "{{ netname }}_router"

- os_subnet:
    cloud: "{{ cloudname }}"
    state: absent
    network_name: "{{ netname }}_network"
    name: "{{ netname }}_subnet"

- os_network:
    cloud: "{{ cloudname }}"
    state: absent
    name: "{{ netname }}_network"
    external: false
As you can see, the two files share a naming convention: name: "{{ netname }}_network" should really be precalculated in a vars file and then used in both cases. That is a good future improvement.
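That future improvement might look something like this. A sketch; the variable names below are my own, not from the original role:

```yaml
# vars/main.yml: compute each resource name exactly once
network_name: "{{ netname }}_network"
subnet_name: "{{ netname }}_subnet"
router_name: "{{ netname }}_router"
```

Both create.yml and teardown.yml would then refer to "{{ network_name }}" and friends, so a change to the naming convention happens in a single place.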

You can see the real value when it comes to lists of objects. For example, to create a set of virtual machines:

- name: create CFME server
  os_server:
    cloud: "{{ cloudname }}"
    state: present
    name: "cfme.{{ clustername }}"
    key_name: ayoung-pubkey
    timeout: 200
    flavor: 2
    boot_volume: "{{ cfme_volume.volume.id }}"
    security_groups:
      - "{{ securitygroupname }}"
    nics:
      -  net-id:  "{{ osnetwork.network.id }}"
         net-name: "{{ netname }}_network"
    meta:
      hostname: "{{ netname }}"
  register: cfme_server

It is easy to reverse this with the list of host names. In teardown.yml:

- os_server:
    cloud: "{{ cloudname }}"
    state: absent
    name: "{{ item }}.{{ clustername }}"
  with_items: "{{ cluster_hosts }}"
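The cluster_hosts list itself is not shown in the post; it would be defined somewhere the role can see it, along these lines (the host names below are hypothetical):

```yaml
# e.g. in the role's defaults/main.yml or in group_vars
cluster_hosts:
  - cfme
  - idm
  - openshift
```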

To create the set of resources I can run:

ansible-playbook playbooks/create.yml

and to clean up

ansible-playbook playbooks/teardown.yml

This pattern scales. If you have three roles that all follow it, they can be run in forward order to set up and in reverse order to tear down. However, it tends to work at odds with Ansible’s role dependency mechanism: Ansible has no way to say that dependent roles should run in reverse order during teardown.
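In practice that means listing the roles explicitly in each playbook rather than relying on role dependencies. A sketch, assuming three roles named network, volumes, and servers (my names, not from the original):

```yaml
# playbooks/create.yml: roles in dependency order
- hosts: localhost
  vars:
    teardown: false
  roles:
    - network
    - volumes
    - servers

# playbooks/teardown.yml: the same roles, reversed
- hosts: localhost
  vars:
    teardown: true
  roles:
    - servers
    - volumes
    - network
```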
