One of the most annoying administrative issues in Keystone is the MySQL token database filling up. While we have a flush script, it needs to be scheduled via cron. Here is a short overview of the types of tokens, why the backend is necessary, and what is being done to mitigate the problem.
DRAMATIS PERSONAE:
Amanda: The company's OpenStack system admin.
Manny: The IT manager.
ACT 1, SCENE 1: A small conference room. Manny has called a meeting with Amanda.
Manny: Hey Amanda, what are these Keystone tokens and why are they causing so many problems?
Amanda: Keystone tokens are an opaque blob used to allow caching of an authentication event and some subset of the authorization data associated with the user.
Manny: OK… back up. What does that mean?
Amanda: Authentication means that you prove that you are who you claim to be. For most of OpenStack’s history, this has meant handing over a symmetric secret.
Manny: And a symmetric secret is …?
Amanda: A password.
Manny: OK, got it. I hand in my password to prove that I am me. What is the authorization data?
Amanda: In OpenStack, it is the username and the user’s roles.
Manny: All their roles?
Amanda: No, only the roles for the scope of the token. A token can be scoped to a project, or to a domain, but in our setup only I ever need a domain-scoped token.
Manny: The domain is how I select between the customer list and our employees out of our LDAP server, right?
Amanda: Yep. There is another domain just for admin tasks, too. It has the service users for Nova and so on.
Manny: OK, so I get a token, and I can see all this stuff?
Amanda: Sort of. For most of the operations we do, you use the “openstack” command. That is the common command line, and it hides the fact that it is getting a token for most operations. But you can actually use a web tool called curl to go directly to the Keystone server and request a token. I do that for debugging sometimes. If you do that, you see the body of the token data in the response. But that is different from being able to read the token itself. The token is actually only 32 characters long. It is what is known as a UUID.
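(Amanda turns her laptop around and types out the kind of request she means. The hostname, user, password, and project here are placeholders for illustration, not our real ones.)

    # Ask Keystone (v3 API) for a project-scoped token. The token ID comes
    # back in the X-Subject-Token response header; the token data (user,
    # roles, service catalog) is in the JSON body.
    curl -si -H "Content-Type: application/json" \
      -d '{"auth": {"identity": {"methods": ["password"],
                                 "password": {"user": {"name": "amanda",
                                                       "domain": {"name": "Default"},
                                                       "password": "secret"}}},
                    "scope": {"project": {"name": "demo",
                                          "domain": {"name": "Default"}}}}' \
      http://keystone.example.com:5000/v3/auth/tokens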
Manny (slowly): UUID? Universally Unique Identifier. Right?
Amanda: Right. It’s based on a long random number generated by the operating system. UUIDs are how most of OpenStack generates remote identifiers for VMs, images, volumes, and so on.
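(She demonstrates in a terminal with uuidgen, the standard Linux utility; stripping the dashes gives the 32-character form a token ID takes.)

    # A UUID prints as 36 characters with dashes; drop the dashes to get
    # the 32 hex characters used as a token ID
    uuidgen
    uuidgen | tr -d '-'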
Manny: Then the token doesn’t really hold all that data?
Amanda: It doesn’t. The token is just a…well, a token.
Manny: Like the tokens we used to have for the toll machines on Route 93, till we all got E-ZPass!
Amanda: Yeah. Those tokens showed that you had paid for the trip. For OpenStack, a token is a remote reference to a subset of your user data. If you pass a token to Nova, it still has to go back to Keystone to validate the token. When it validates the token, it gets the data. However, our OpenStack deployment is so small that Nova and Keystone are on the same machine, so going back to Keystone does not require a “real” network round trip.
Manny: So now that we are planning on going to the multi-host setup, validating a token will require a network round trip?
Amanda: Actually, when we move to the multi-site, we are going to switch over to a different form of token that does not require a network round trip. And that is where the pain starts.
Manny: These are the PKI tokens you were talking about in the meeting?
Amanda: Yeah.
Manny: OK, I remember the term PKI was Public Key…something.
Amanda: The I is for infrastructure, but you remembered the important part.
Manny: Two keys, public versus private: you encode with one and decode with the other.
Amanda: Yes. In this case, it is the token data that is encoded with the private key and decoded with the public key.
Manny: I thought that made it huge. Do you really encode all the data?
Amanda: No, just a signature of the data: a hash. This is called message signing, and it is used in a lot of places, basically to validate that a message is both unchanged and that it comes from the person you think it comes from.
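(She sketches the same idea in miniature with openssl; the file names are made up for the example, and this is the general signing pattern rather than Keystone’s exact token format.)

    # Sign a hash of the token data with the private key...
    openssl dgst -sha256 -sign signing_key.pem -out token.sig token_data.json
    # ...and anyone holding the public key can check that the data is
    # unchanged and really came from the key's owner
    openssl dgst -sha256 -verify signing_pub.pem -signature token.sig token_data.json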
Manny: OK, so… what is the pain?
Amanda: Two things. One, the tokens are bigger, much bigger, than a UUID. They have all of the validation data in them, including the service catalog. And our service catalog is growing on the multi-site deployment, so we’ve been warned that the tokens might get so big that they cause problems.
Manny: Let’s come back to that. What is the other problem?
Amanda: OK… since a token is remotely validated, there is the possibility that something has changed on Keystone and the token is no longer valid. With our current system, Keystone knows this immediately and just dumps the token. So when Nova comes to validate it, it’s no longer valid and the user has to get another token. With remote validation, Nova has to periodically request a list of revoked tokens.
Manny: So either way Keystone needs to store data. What is the problem?
Amanda: Well, today we store our tokens in Memcached. It’s a simple key-value store, it’s local to the Keystone instance, and it just dumps old data that hasn’t been used in a while. With revocations, if you dump old data, you might lose the fact that a token was revoked.
Manny: Effectively un-revoking that token?
Amanda: Yep.
Manny: OK…so how do we deal with this?
Amanda: We have to move from storing tokens in Memcached to MySQL. According to the docs and upstream discussions, this can work, but you have to be careful to schedule a job to clean up the old tokens, or you can fill up the token database. Some of the larger sites have to run this job very frequently.
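(She pulls up the cron entry she has staged for the rollout; the hourly schedule and log path are her starting guesses, to be tuned once we see real token volume.)

    # /etc/cron.d/keystone-token-flush: purge expired tokens with the flush
    # script that ships with Keystone
    0 * * * * keystone /usr/bin/keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1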
Manny: It’s a major source of pain?
Amanda: It can be. We don’t think we’ll be at that scale at the multi-site launch, but it might happen as we grow.
Manny: OK, back to the token size thing, then. How do we deal with that?
Amanda: OK, when we go multi-site, we are going to have one of everything at each site: Keystone, Nova, Neutron, Glance. We have some jobs to synchronize the most essential things, like the Glance images and the customer database, but the rest is going to be kept fairly separate. Each site will be tagged as a region.
Manny: So the service catalog is going to be galactic, but will be sharded out by Keystone server?
Amanda: Sort of. We are actually going to make it possible to have the complete service catalog in each Keystone server, but there is an option in Keystone to specify a subset of the catalog for a given project. So when you get a token, the service catalog will be scoped down to the project in question. We’ve done some estimates of size, and we’ll be able to squeak by.
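(She shows the call she has been testing, using the endpoint filtering extension; the admin port, token variable, and IDs are placeholders.)

    # Associate one endpoint with a project via OS-EP-FILTER; tokens scoped
    # to that project then carry only the endpoints associated with it
    curl -s -X PUT -H "X-Auth-Token: $ADMIN_TOKEN" \
      http://keystone.example.com:35357/v3/OS-EP-FILTER/projects/$PROJECT_ID/endpoints/$ENDPOINT_ID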
Manny: So, what about the multi-site contracts, where a company can send their VMs to either a local or remote Nova?
Amanda: For now they will be separate projects. But for the future plans, where we are going to need to be able to put them in the same project, we are stuck.
Manny: Ugh. We can’t be the only people with this problem.
Amanda: Some people are moving back to UUID tokens, but there are issues both with replication of the token database and with cross-site network traffic. But there is some upstream work that sounds promising to mitigate that.
Manny: The lightweight thing?
Amanda: Yeah, lightweight tokens. It’s backing off the remotely validated aspect of Keystone tokens, but it doesn’t need to store the tokens themselves. They use a scheme called authenticated encryption, which puts a minimal amount of info into the token, enough to recreate the whole authorization data. But only the Keystone server can expand that data. Then all that needs to be persisted is the revocations.
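(She shows the setup commands from the upstream work; these come from the Fernet implementation this effort turned into, and the user and group names assume a standard packaged install.)

    # Create the repository of symmetric keys Keystone uses to encrypt and
    # sign the lightweight token payloads
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    # Rotate keys periodically; old keys are retained so tokens issued
    # under them still validate until they expire
    keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone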
Manny: Still?
Amanda: Yeah, and there are all the same issues there with flushing of data, but the scale of the data is much smaller. Password changes and removing roles from users are the revocations we expect to see the most. We still need a cron job to flush those.
Manny: No silver bullet, eh? Still, how will that work for multi-site?
Amanda: Since the token is validated by cryptography, the different sites will need to synchronize the keys. There was a project called Kite that was part of Keystone, and then it wasn’t, and then it was again, but it is actually designed to solve this problem. So all of the Keystone servers will share their keys to validate tokens locally.
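(Until something like Kite handles it for us, she notes, the approach people describe upstream is to rotate keys at one site and copy the key repository to the others; the directory is the default key location, and the peer hostname is a placeholder.)

    # Push the key repository from the site that does the rotation out to a
    # peer site after each rotation
    rsync -a --delete /etc/keystone/fernet-keys/ keystone@keystone2.example.com:/etc/keystone/fernet-keys/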
Manny: We’ll still need to synchronize the revocation data?
Amanda: No silver bullet.
Manny: Do we really need the revocation data? What if we just… didn’t revoke, and made the tokens short-lived?
Amanda: It’s been proposed. The problem is that a lot of the workflows were built around the idea of long-lived tokens. Tokens went from being valid for 24 hours to being valid for 1 hour by default, and that broke some things. Some people have had to crank the time back up again. We think we might be able to get away with shorter tokens, but we need to test and see what breaks.
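(She points at the relevant knob in keystone.conf; 3600 seconds is the newer one-hour default, and 86400 is the old 24-hour value some sites have cranked it back up to.)

    [token]
    # Token validity in seconds: 3600 is one hour; sites with long-running
    # workflows have raised this back toward 86400 (24 hours)
    expiration = 3600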
Manny: Yeah, I could see HA having a problem with that… wait, 24 hours… how does Heat do what it needs to? It can restart a machine a month afterwards. Do we just hand over the passwords to Heat?
Amanda: Heh… it used to. But Heat uses a delegation mechanism called trusts. A user creates a trust, and that effectively says that Heat can do something on the user’s behalf, but Heat has to get its own token first. It first proves that it is Heat, and then it uses the trust to get a token on the user’s behalf.
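(She sketches what creating a trust looks like against the OS-TRUST API; the token variable, user and project IDs, and role name are placeholders.)

    # The trustor creates a trust naming Heat's service user as the
    # trustee, delegating one role on one project
    curl -s -H "X-Auth-Token: $USER_TOKEN" -H "Content-Type: application/json" \
      -d '{"trust": {"trustor_user_id": "'$USER_ID'",
                     "trustee_user_id": "'$HEAT_USER_ID'",
                     "project_id": "'$PROJECT_ID'",
                     "impersonation": true,
                     "roles": [{"name": "Member"}]}}' \
      http://keystone.example.com:5000/v3/OS-TRUST/trusts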
Manny: So…trusts should be used everywhere?
Amanda: Something like trusts, but more lightweight. Trusts are deliberate delegation mechanisms, and are set up on a per-user basis. To really scale, it would have to be something where the admin sets up the delegation agreement as a template. If that were the case, then these long-lived workflows would not need to use the same token.
Manny: And we could get rid of the revocation events. OK, that is time, and I have a customer meeting. Thanks.
Amanda: No problem.
EXIT
Manny: Which option is used in Keystone to specify a subset of the catalog for a given project?
Look into the endpoint filtering extension.