The word Admin is used all over the place. To administer was originally something servants did for their masters. In one of the greater inversions of linguistic history, we now use Admin as a way to indicate authority. In OpenStack, the admin role is used for almost all operations that are reserved for someone with a higher level of authority. These actions are not expected to be performed by people with the plebeian Member role.
Global versus Scoped
We have some objects that are global, and some that are scoped to projects. Global objects are typically things used to run the cloud, such as the set of hypervisor machines that Nova knows about. Everyday members are not allowed to “Enable Scheduling For A Compute Service” via the HTTP call PUT /os-services/enable.
Keystone does not have a way to do global roles. All roles are scoped to a project. This by itself is not a problem. The problem is that a resource like a hypervisor does not have a project associated with it. If Keystone can only hand out tokens scoped to projects, there is still no way to match the scoped token to the unscoped resource.
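To make the mismatch concrete, here is a minimal sketch in Python. The dictionaries and field names are illustrative, not the actual Keystone or Nova data structures: a token always carries a project_id, but a hypervisor record has none, so a pure ownership check can never succeed.

```python
# Illustrative only: not the real Keystone/Nova schemas.
scoped_token = {
    "user": "annie",
    "roles": ["admin"],
    "project_id": "infra-project-id",   # every token is scoped to a project
}

hypervisor = {
    "id": "hv-01",
    "host": "compute-node-1.example.com",
    # no project_id: hypervisors are global resources
}

def token_matches_resource(token, resource):
    """Scope check: does the token's project own this resource?"""
    return resource.get("project_id") == token["project_id"]

# There is nothing to match against, so the scope check always fails:
print(token_matches_resource(scoped_token, hypervisor))  # False
```

This is why services fall back to checking only the role, as described next.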
So, what Nova and many other services do is just look for the role. And thus our bug. How do we go about fixing this?
Let me see if I can show this.
In our initial state, we have two users. Annie is the cloud admin, responsible for maintaining the overall infrastructure, with tasks such as “Enable Scheduling For A Compute Service”. Pablo is a project manager. As such, he has to do admin-level things, but only within his project, such as setting the metadata used for servers inside that project. Both operations are currently protected by the “admin” role.
Let's look at the role-assignment object diagram. For this discussion, we are going to assume everything is inside a domain called “Default”, which I will leave out of the diagrams to simplify them.
In both cases, our users are explicitly assigned roles on a project: Annie has the Admin role on the Infra project, and Pablo has the Admin role on the Devel project.
The API call to Add Hypervisor only checks the role on the token, and enforces that it must be “Admin.” Thus, both Pablo and Annie’s scoped tokens will pass the policy check for the Add Hypervisor call.
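In policy-file terms, the vulnerable check looks something like the following sketch of a policy.json entry (the rule and target names here are illustrative; the exact names vary by service and release). Note that role:admin says nothing about which project the role was granted on:

```json
{
    "admin_api": "role:admin",
    "os_compute_api:os-services": "rule:admin_api"
}
```

Because the rule matches the admin role granted on any project, Pablo's Devel-scoped token satisfies it just as well as Annie's Infra-scoped one.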
How do we fix this?
Let's assume, for the moment, that we could instantly run a migration that added a project_id to every database table that holds a resource, and to every API that manages those resources. What would we use to populate that project_id? What value would we give it?
Let's say we add an admin project value to Keystone. When a new admin-level resource is created, it gets assigned to this admin project. All of the resources we already have should get this value, too. But how would we communicate this project ID? We don't have a Keystone instance available when running the Nova database migrations.
Turns out Nova does not need to know the actual project_id. Nova just needs to know that Keystone considers the token valid for global resources.
We've added a couple of values to the Keystone configuration file: admin_domain_name and admin_project_name. These two values specify which project Keystone treats as the admin project. When both are set, every token validation response contains a value for is_admin_project. If the token's project matches the configured domain and project name, that value is True; otherwise it is False.
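As a sketch, the configuration and the resulting validation response look roughly like this (the placement of the options and the exact JSON shape are assumptions based on the description above, not copied from a real deployment):

```ini
# keystone.conf -- which project counts as "the" admin project
admin_domain_name = Default
admin_project_name = admin
```

```json
{
    "token": {
        "is_admin_project": true,
        "project": {
            "name": "admin",
            "domain": {"name": "Default"}
        }
    }
}
```

A token scoped to any other project validates with is_admin_project set to false.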
Instead, we want the create_cell call to use a different rule. Rather than the scope check performed by admin_or_owner, it should confirm the admin role, as it did before, and also that the token has the is_admin_project flag set.
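In policy terms, the change is a sketch like the following (rule and target names illustrative): the old role-only rule gains a second condition on the flag, using the policy language's ability to match fields from the validated token:

```json
{
    "admin_and_is_admin_project": "role:admin and is_admin_project:True",
    "compute:create_cell": "rule:admin_and_is_admin_project"
}
```

With this rule, Pablo's Devel-scoped admin token is rejected for create_cell, while Annie's token, scoped to the configured admin project, passes.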
Keystone already has support for setting is_admin_project, but none of the remote services honor it yet. Why? In part because, for it to make sense for one service to do so, they all must do so. But also because we cannot predict which project would be the admin project.
If we select a project based on name (e.g. Admin) we might be selecting a project that does not exist.
If we force that project to exist, we still do not know which users to assign to it. We would have effectively broken their cloud, as no users could execute global admin-level tasks.
In the long run, the trick is to provide a transition plan for deployments where the configuration options are unset.
If no admin project is set, then every project is treated as the admin project. This is enforced by oslo-context, which is used in policy enforcement.
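A minimal sketch of that transition behavior (this is illustrative logic, not the actual oslo-context code): when Keystone sends no is_admin_project value, the context defaults it to True, so existing role-only deployments keep working.

```python
def is_admin_project_from_token(token_data):
    """Transition-period fallback, as described above.

    If an admin project is configured in Keystone, token validation
    responses carry an explicit is_admin_project value. If it is not
    configured, the value is absent, and every scoped token is treated
    as belonging to the admin project, preserving today's behavior.
    """
    # Default to True when Keystone sent no value (illustrative).
    return token_data.get("is_admin_project", True)

# Cloud with no admin project configured: every token "is admin project".
print(is_admin_project_from_token({"project_id": "devel"}))      # True
# Cloud with an admin project: only matching tokens carry the flag.
print(is_admin_project_from_token({"is_admin_project": False}))  # False
```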
Yeah, that seems surprising, but it turns out that we have just codified what every deployment already has. Look at the bug description again:
Problem: Granting a user an “admin” role on ANY tenant grants them unlimited “admin”-ness throughout the system because there is no differentiation between a scoped “admin”-ness and a global “admin”-ness.
Adding the field is a necessary precursor to solving the problem, but the real fix is the enforcement in Nova, Glance, and Cinder. Until they enforce on the flag, the bug still exists.
There is a phased plan to fix things.
- Implement the is_admin_project mechanism in Keystone, but leave it unconfigured by default.
- Add is_admin_project enforcement in the policy file for all of the services
- Enable an actual admin_project in devstack and Tempest
- After a few releases, when we are sure that people are using admin_project, remove the hack from oslo-context.
This plan was discussed and agreed upon by the policy team within Keystone, and vetted by several of the developers in the other projects, but it seems it was never fully disseminated, and thus the patches have sat in a barely reviewed state for a long while…over half a year. Meanwhile, the developers focused on this have shifted tasks.
Now’s The Time
We've got a renewed effort, and some new, energetic developers committed to making this happen. The changes have been rewritten with advice from earlier code reviews and resubmitted. This bug has been around for a long time: Bug #968696 was reported by Gabriel Hurley on 2012-03-29. It's been a hard task to come up with and execute a plan to solve it. If you are a core project reviewer, please look for the reviews for your project, or, even better, talk with us on IRC (Freenode #openstack-keystone) and help us figure out how to best adjust the default policy for your service.