Trial and error. It's a key part of getting work done in my field, and I make my share of errors. Today, I tried to create a virtual machine in Nova using a bad Glance image that I had converted to a bootable volume:
The error message was:
{u'message': u'Build of instance d64fdd07-748c-4e27-b212-59e8cef9d6bf aborted: Block Device Mapping is Invalid.', u'code': 500, u'created': u'2018-01-31T03:10:56Z'}
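For reference, the general shape of what I was doing looked something like this. The image, volume, server, flavor, and network names below are placeholders, not the ones I actually used:

# Convert the (bad) Glance image into a bootable volume, then boot from that volume.
$ openstack volume create --image bad-image --size 80 --bootable bad-volume
$ openstack server create --volume bad-volume --flavor m1.large --network private bad-server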
The VM would not release the volume, since it was the root device:
$ openstack server remove volume d64fdd07-748c-4e27-b212-59e8cef9d6bf de4909df-e95c-4a54-af5c-c24a26146a89
Can't detach root device volume (HTTP 403) (Request-ID: req-725ce3fa-36e5-4dd8-b10f-7521c91a5c32)
So I deleted the instance:
openstack server delete d64fdd07-748c-4e27-b212-59e8cef9d6bf
But when I went to list the volumes, the volume was still marked as attached to the now-deleted instance, and it could not be deleted:
+--------------------------------------+-------------+--------+------+---------------------------------------------------------------+
| ID                                   | Name        | Status | Size | Attached to                                                   |
+--------------------------------------+-------------+--------+------+---------------------------------------------------------------+
| de4909df-e95c-4a54-af5c-c24a26146a89 | xxxx        | in-use |   80 | Attached to d64fdd07-748c-4e27-b212-59e8cef9d6bf on /dev/vda  |
+--------------------------------------+-------------+--------+------+---------------------------------------------------------------+

$ openstack volume delete de4909df-e95c-4a54-af5c-c24a26146a89
Failed to delete volume with name or ID 'de4909df-e95c-4a54-af5c-c24a26146a89': Invalid volume: Volume status must be available or error or error_restoring or error_extending and must not be migrating, attached, belong to a group or have snapshots. (HTTP 400) (Request-ID: req-f651299d-740c-4ac9-9f52-8a603eace8f6)
1 of 1 volumes failed to delete.
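The volume is still carrying the attachment record for an instance that no longer exists. If you want to confirm that before reaching for admin powers, something like the following (the -c options just limit which columns are shown) will display the status and the stale attachment:

$ openstack volume show de4909df-e95c-4a54-af5c-c24a26146a89 -c status -c attachments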
To unwedge it, I needed to reset the volume's attach status:
$ cinder reset-state --attach-status detached de4909df-e95c-4a54-af5c-c24a26146a89
Policy doesn't allow volume_extension:volume_admin_actions:reset_status to be performed. (HTTP 403) (Request-ID: req-8bdff31a-7745-4e5e-a449-a5dac5d87f70)
ERROR: Unable to reset the state for the specified entity(s).
So, finally, I had to switch to an admin account (the admin role on any project will work, but still…):
. ~/devel/openstack/salab/rduv3-admin.rc
cinder reset-state --attach-status detached de4909df-e95c-4a54-af5c-c24a26146a89
And now (as my non-admin user):
$ openstack volume list
+--------------------------------------+-------------+-----------+------+-----------------------------------------------+
| ID                                   | Name        | Status    | Size | Attached to                                   |
+--------------------------------------+-------------+-----------+------+-----------------------------------------------+
| de4909df-e95c-4a54-af5c-c24a26146a89 | xxxx        | available |   80 |                                               |
+--------------------------------------+-------------+-----------+------+-----------------------------------------------+

$ openstack volume delete xxxx

$ openstack volume list
+--------------------------------------+-------------+--------+------+-----------------------------------------------+
| ID                                   | Name        | Status | Size | Attached to                                   |
+--------------------------------------+-------------+--------+------+-----------------------------------------------+
+--------------------------------------+-------------+--------+------+-----------------------------------------------+
I talked with the Cinder team about the policy for volume_extension:volume_admin_actions:reset_status and they seem to think that it is too unsafe for an average user to be able to perform. Thus, a “force delete” like this would need to be a new operation, or a different flag on an existing operation.
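For context, that restriction comes from Cinder's policy configuration. In the default policy.json I am familiar with, the rule is tied to the admin API; the path below is an assumption and will differ depending on how Cinder is deployed (newer releases generate policy from code):

$ # Path is an assumption; your deployment may not ship a policy file at all.
$ grep reset_status /etc/cinder/policy.json
    "volume_extension:volume_admin_actions:reset_status": "rule:admin_api",

An operator could loosen that rule, but since reset-state only rewrites the state recorded in the database without checking the backend, handing it to every user is exactly the part the Cinder team considers unsafe.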
We’ll work on it.