Much of the future work we need to do on Keystone falls into issues of scope. I’m going to merely try to define the problems here, and avoid talking about solutions. I’ll try to address more specific aspects in future posts.
During the years I worked as a Web application developer, it seemed like every application had its own authentication mechanism. An application developer thinks in terms of the domain model for their application, whether it be eCommerce, systems management, photography, or weblogs. Identity management is a cross-cutting concern, and it is hard to get right. Why, then, do so many applications have “user” tables in their databases?
When reinstalling FreeIPA, you often get browser errors complaining of reissued certificates. Here is how you can deal with them:
As OpenStack evolves, its requirements for identity management evolve with it. In the early days, there was a single Nova server, and it stored user IDs and passwords. Once OpenStack evolved into a body of servers, copying passwords around posed too great a security risk. Keystone was first implemented as a central repository for those passwords.
Keystone tokens were originally implemented as a unique identifier. A user went to Keystone, submitted a request with their user-id and password, and received a UUID. That UUID was passed to a remote service such as the Nova API web service in place of authentication data. The remote service would then make a call to Keystone to verify the token. Thus, each remote call required, at the absolute minimum, an additional round trip to Keystone. The network traffic was exacerbated by the fact that it was driven by the command line clients, which had no way of storing the ephemeral tokens. Thus, one remote call required three round trips.
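The three round trips above can be sketched as request bodies. This is an illustrative outline only: the helper names are hypothetical, and the payload shapes are approximations of the Identity API of that era, not exact wire formats.

```python
# Round trip 1: the client trades a user-id and password for a UUID token.
# Round trip 2: the client calls the remote service (e.g. the Nova API)
#               with that token attached.
# Round trip 3: the remote service calls back to Keystone to validate it.

def build_token_request(username, password, tenant):
    """Body a client would POST to Keystone to obtain a UUID token
    (shape approximates the early Identity API)."""
    return {
        "auth": {
            "passwordCredentials": {"username": username, "password": password},
            "tenantName": tenant,
        }
    }

def build_validation_headers(service_token, user_token):
    """Headers a service would send back to Keystone to validate a token
    it received -- the extra round trip required for every remote call."""
    return {
        "X-Auth-Token": service_token,    # the service's own credential
        "X-Subject-Token": user_token,    # the token being validated
    }
```

Because the command-line clients discarded the token after each invocation, this entire exchange repeated on every command.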
The call to validate a Keystone token returns a good deal of information. In addition to the return code, which indicates the validity of the token, the response contains a collection of role assignments for the project specified by the token. These role assignments are later matched against rules assigned to each of the remote web APIs. An example rule might state that in order to perform a given action, a user must be an administrator for the associated project. While OpenStack calls this Role Based Access Control (RBAC), there is nothing in the mechanism that specifies that only roles can be used for these decisions. Any attribute in the token response could reasonably be used to grant or deny access. Thus, we speak of the token as containing authorization attributes.
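A minimal sketch of that matching step, using a hypothetical token payload and rule check (the real implementation lives in each service's policy enforcement code):

```python
# Illustrative only: match the role assignments carried in a validated
# token response against a rule like "must hold the admin role on the
# project the token is scoped to".

def check_rule(required_role, token_data):
    """Return True if the token carries the required role assignment."""
    roles = {r["name"] for r in token_data.get("roles", [])}
    return required_role in roles

# A simplified token validation response.
token_data = {
    "project": {"name": "demo"},
    "roles": [{"name": "admin"}, {"name": "member"}],
}
```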
Two technologies helped to decrease the load on Keystone. The first was client side caching, implemented using the Python-Keyring library. This allows the reuse of a token. The second was the use of Public Key based document signing to allow in process validation of the keystone tokens. This new mechanism, termed PKI tokens, used the Crypto Message Syntax that is the basis for Secure Mail (SMIME). The size of the tokens increased dramatically, but now a single web request could be performed with a single round trip.
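The caching half of that can be sketched as follows. The real clients stored tokens with the python-keyring library; here a plain dict stands in for the keyring, and all names are illustrative.

```python
import time

# Client-side token reuse: only go back to Keystone when the cached
# token has aged out.

class TokenCache:
    def __init__(self, issue_token, ttl=300):
        self._issue = issue_token   # callable that performs the Keystone round trip
        self._ttl = ttl
        self._store = {}            # user -> (token, expiry); keyring in reality

    def get(self, user):
        token, expiry = self._store.get(user, (None, 0.0))
        if time.time() < expiry:
            return token            # reuse: no round trip to Keystone
        token = self._issue(user)
        self._store[user] = (token, time.time() + self._ttl)
        return token
```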
There are multiple systems that provide a comparable document with authorization attributes that is validated using cryptography. Security Assertion Markup Language (SAML) is probably the most widely deployed. JSON Web Tokens are the equivalent for people who prefer JSON to XML. The major authentication mechanisms have their own approaches to reliably distributing authorization attributes linked to the authentication of the user. For Kerberos, it is the Privilege Attribute Certificate (PAC) mechanism. For X509, it is attribute certificates or proxy certificates.
Both UUID and PKI tokens are what is termed “Bearer Tokens.” As the OAuth bearer token specification puts it: “Any party in possession of a bearer token (a ‘bearer’) can use it to get access to the associated resources (without demonstrating possession of a cryptographic key).” A bearer token is a token that cannot be verifiably linked to the person presenting it. It is a bit like using a credit card number to establish your age on a website: if you steal someone’s credit card, you can make a false assertion of your identity. Just so, if a malicious user steals a bearer token, they can impersonate the token’s user. The complex workflows for OpenStack span multiple services. In the current system, a bearer token is attached to a request that will flow through the entire system.
The current token system has an additional drawback. A Keystone token is valid anywhere in the OpenStack system. That means that a token stolen from one system can be used on another system. To limit the damage done with tokens, many other systems limit them to a single target system. For example, in Kerberos, a user gets a service ticket that, while it is a bearer token, is only usable on a single remote system. To talk to an additional system requires an additional service ticket.
To move beyond bearer tokens requires multiple steps. In order to link the token to a user, the user needs to use a secure authentication mechanism, and then link the token to that mechanism. A mechanism for that will be present in the Havana release. Its use will be optional at first: if we disabled bearer tokens outright, we would risk breaking the entire OpenStack system. If tokens must be bound to the user that initially requested them, how can a service call a second and third service to do work on behalf of the user? If a token can only be used for a specific system, how can a workflow progress across multiple systems?
Token Revocation and Lifespan
There are other problems with the current token system. Tokens are long lived, in order to survive for the entire duration of long workflows. However, we want to be able to quickly remove privileges from a user. This means that tokens must be revocable. Token revocation places a heavy burden on the system, as remote services must either periodically fetch token revocation lists from Keystone, or return to the original scheme of online verification. In addition, since tokens can create other tokens, Keystone now has complex rules to track revocation events and properly revoke the correct set of tokens. To track these changes, the tokens are stored in a database attached to Keystone. This database cannot be ephemeral, or tokens are improperly recorded as invalid; but it must be flushed periodically to remove expired tokens, or risk filling up its storage.
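The revocation-list burden on each remote service can be sketched like this. The fetch callable stands in for the periodic call to Keystone; the names and refresh strategy are illustrative, not the actual implementation.

```python
import time

# Each remote service must keep a reasonably fresh copy of the revocation
# list and reject any token that appears on it.

class RevocationChecker:
    def __init__(self, fetch_list, refresh_every=60):
        self._fetch = fetch_list        # stand-in for the call to Keystone
        self._interval = refresh_every  # acceptable staleness window, seconds
        self._revoked = set()
        self._last_refresh = 0.0

    def is_revoked(self, token_id):
        now = time.time()
        if now - self._last_refresh >= self._interval:
            self._revoked = set(self._fetch())
            self._last_refresh = now
        return token_id in self._revoked
```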
Heat is the orchestration engine for OpenStack. Heat has a requirement that it must be able to perform an operation on behalf of a user even if that user is no longer available. To support Heat, Keystone now has a mechanism for delegation of authorization data. This mechanism is called Trusts, and it uses the same language as the legal world: the user that creates a trust is the trustor, and the one that executes the trust is the trustee. Here, the user is the trustor and Heat is the trustee. A user creates a trust with exactly the set of role assignments they want to give to the trustee. This set of role assignments is validated when the trustee uses the trust id to fetch a token. If the user is missing any of the role assignments, the trust is invalid and no token is returned. Upon success, the trustee receives a valid token. The trustee can use this token to perform work for the trustor.
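The request a trustor sends to create a trust can be sketched as below. The body shape is approximated from the Identity v3 trusts extension; field names may vary by release, so treat this as a sketch rather than a reference.

```python
def build_trust_request(trustor_id, trustee_id, project_id, role_names,
                        impersonation=True):
    """Approximate body for POST /v3/OS-TRUST/trusts: the trustor delegates
    exactly the listed role assignments on one project to the trustee."""
    return {
        "trust": {
            "trustor_user_id": trustor_id,    # the user delegating
            "trustee_user_id": trustee_id,    # e.g. the Heat service user
            "project_id": project_id,
            "impersonation": impersonation,
            "roles": [{"name": name} for name in role_names],
        }
    }
```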
Trusts are related to OAuth, with some significant differences. Probably the most important difference is that only users of the system can be trustees. In OAuth, a Consumer can be, and is expected to be, an external system. Trusts specify the format of the data that is delegated; OAuth does not. A user must specify the data necessary to create a trust, whereas in OAuth, the remote system crafts the request for the user, but then allows the user to inspect and verify the request. However, whether it is trusts, OAuth, or another comparable mechanism, the Keystone service provides, and will continue to provide, a delegation mechanism.
Building upon the current state of Keystone, we can project forward to a system that deals with these shortcomings. Instead of using long lived tokens, long running workflows can instead use a mechanism for delegation of authority. As an example, take a workflow for launching a virtual machine. This workflow needs to perform several operations across several services. It needs to fetch an image from Glance, deploy it to the compute node, communicate with Cinder to get access to the remote disk partition, start the virtual machine, mount the remote partition, and connect the virtual machine’s interface to a network in Neutron. The user specifies this workload via the Nova API server, but the services that actually perform it pull the operations out of a queue. They are scheduled by the scheduler, and mostly performed by the nova-compute service. In addition, while requests are posted to the other API servers, they are also performed by worker processes. Whenever one service calls another API service, it will need a delegation. For the example above, the delegation would be from the user requesting the new virtual machine to the nova-compute user. The nova-compute user would use the trust to request a token on behalf of the end user. That token would contain the role assignments necessary to perform the operations on the other remote services.
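The last step, the trustee exchanging a trust id for a token, can be sketched as the authentication body it would post to Keystone. The shape follows the Identity v3 API’s trust-scoped authentication, but the names here are illustrative.

```python
def build_trust_scoped_auth(trustee_id, trustee_password, trust_id):
    """Approximate body the trustee (e.g. the nova-compute service user)
    would POST to /v3/auth/tokens: authenticate as itself, but scope the
    resulting token to the trust, so it acts on the trustor's behalf."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {"id": trustee_id, "password": trustee_password},
                },
            },
            "scope": {"OS-TRUST:trust": {"id": trust_id}},
        }
    }
```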
Once delegations replace long lived tokens, we can shorten the lifespan of the tokens so that they cover multiple web round trips, but are as short as the acceptable window for processing revocation events. As a working value, assume a token will live around five minutes. Tokens with such a short lifespan will not require a revocation list. Without the need for a revocation list, we can stop recording the tokens in the backend system. Keystone can use the same cryptographic approach to validating tokens that the rest of the systems use.
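With short lifespans, validity becomes a pure timestamp check, with no database lookup and no revocation list. A minimal sketch, using the five-minute working value from above:

```python
from datetime import datetime, timedelta

# Illustrative working value from the text: a five-minute token lifespan.
TOKEN_LIFETIME = timedelta(minutes=5)

def token_is_live(issued_at, now=None):
    """A token is valid purely by its age: past the lifetime, it is
    simply dead -- no revocation machinery required."""
    if now is None:
        now = datetime.utcnow()
    return now < issued_at + TOKEN_LIFETIME
```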
Developing New Policy
When a user makes a call on a web service, they do not know the policy that will allow or deny them access. They only know the end state of success or failure. If a deployer wishes to deploy new policy, they will also need to establish a set of roles that users must have in order to satisfy that policy. For example, a deployer may wish to split access to Glance images into a reader role and a writer role. Most workflows will require only the writer role. When performing the deploy-instance workflow, the user should delegate the reader role to the Nova Compute service user, but not the writer role.
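The reader/writer split could look something like the following. The action names, role names, and rule syntax here are hypothetical, loosely modeled on OpenStack policy files, not an actual Glance policy.

```python
# Hypothetical policy splitting Glance image access into reader and
# writer roles.

POLICY = {
    "get_image":    "role:image_reader",
    "create_image": "role:image_writer",
    "delete_image": "role:image_writer",
}

def is_allowed(action, user_roles):
    """Check an action against the rule table: the user must hold the
    role the rule names."""
    rule = POLICY[action]
    required = rule.split("role:", 1)[1]
    return required in user_roles
```

Under this split, delegating only `image_reader` to the compute service lets it fetch images but never create or delete them.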
In order to create the delegations for these complex workflows, we are going to need a tool that will tell us what role assignments are required. To generate this information, we take a page from the book of SELinux: permissive policy enforcement. To test out new policy, the deployer will set up a test-bed OpenStack deployment. They will set the policy enforcement on the various services to “permissive,” which will allow all actions, but will record the operations that would not have been permitted, and the rules that would have denied access to those operations. The deployer can then perform the various workflows and generate a series of logs. From those logs, the deployer can deduce what role assignments to grant to users, and generate a work plan for delegating those roles.
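A permissive enforcer in the SELinux style might be sketched as a wrapper around the real policy check. All names here are illustrative; the point is only that every action succeeds while would-be denials are logged for later mining.

```python
# Permissive enforcement: allow everything, but record what strict
# enforcement would have denied.

class PermissiveEnforcer:
    def __init__(self, check):
        self._check = check     # the real check: (action, roles) -> bool
        self.denials = []       # (action, roles) pairs that would have failed

    def enforce(self, action, user_roles):
        if not self._check(action, user_roles):
            self.denials.append((action, sorted(user_roles)))
        return True             # permissive: the action always proceeds
```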
A workplan is the set of delegated role assignments required to perform a workflow. If a user wishes to perform a workflow, they will take the workplan and generate a series of trusts. The trust ids can be collected into a single document and attached to a routing slip, or some other message decoration that will follow the workflow throughout the system. Whenever a service needs to perform a remote operation on behalf of the user, it will get the trust id out of the routing slip and use it to fetch a token from Keystone.
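One way to sketch that translation from workplan to routing slip: each step names the service user that will act and the roles it needs, and the trust-creation call (a stand-in for the Keystone round trip here) returns the trust id to carry along. All structure here is hypothetical.

```python
# Turn a workplan into a routing slip of trust ids, one per step.

def build_routing_slip(workplan, create_trust):
    """workplan: list of (step_name, trustee_user, role_names) tuples.
    create_trust: stand-in for the Keystone call; returns a trust id."""
    return {
        step: create_trust(trustee, roles)
        for step, trustee, roles in workplan
    }
```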
OpenStack is in wide deployment today. The solutions proposed in this document cannot disrupt ongoing operations. Instead, we need a phased approach that adds features incrementally. Many of the mechanisms required are in place, but not required for operations. All mechanisms need to be optional, not only to facilitate development, but to allow for uninterrupted operations should a policy or mechanism block a critical operation. Ideally, the workplans and workflows will be implemented alongside the current use of bearer tokens. Systems will execute the delegation-based operations in conjunction with the bearer tokens, and record the failures, but allow existing workflows to continue. After a “breaking in” period where the deployers and administrators deal with the failing cases, they can switch over to enforcing the token binding and workplans.
We had a recent IRC discussion about the design of Trusts and how it compares with OAuth version 1.
Bearer tokens are vulnerable to replay attacks. OK, so what are our options? Something where the user proves, via cryptography, that they have the right to actually use the token. It doesn’t matter if it is X509, Kerberos, or something we cook up ourselves; it is going to resolve to proving you have the right to use that token.
If tokens must be validated by the owner, we effectively break the ability of OpenStack to hand around bearer tokens to get work done. We are going to have to get a lot of stuff right in order to keep from breaking things. Fortunately, we now have the tools to work around this, and to better secure an OpenStack system: Trusts and Role Based Access Control.
Something you have. Something you are. Something you know. Pick two. This is the conventional wisdom for the basis of secure authentication.
With PKI, tokens have gone from 40 bytes to a varying size of more than 3000 bytes. This, plus additional payload in Horizon, means that they no longer fit inside an HTTP cookie. How do we deal with this?
“I’ll gladly pay you Tuesday for a Hamburger Today” –Wimpy, from the Popeye Cartoon.
Sometimes you need to authorize a service to perform an action on your behalf. Often, that action takes place long after any authentication token you can provide would have expired. Currently, the only mechanism in Keystone that people can use is to share credentials. We can do better.