Minimal Token Size

OpenStack Keystone tokens can become too big to fit in the headers between mod_wsgi and the WSGI applications. Compression mitigates the problem somewhat, but if token sizes continue to grow, they will eventually outpace the benefits of compression. How can we keep them to a minimal size?

There are two variables in the size of the tokens: the packaging and the data inside. The packaging for a PKIZ token has a lower bound set by the signing algorithm. An empty CMS document of compressed data is going to be no less than 650 bytes. An unscoped token with proper compression comes in at 930 bytes. Those sizes fit in the headers, but they mean we have to keep the additional data inside the token body as small as possible.

Encoding

Let’s shift gears back to the encoding. A recent proposal suggested using symmetric encryption instead of asymmetric. The idea is that a subset of the data would be encrypted by Keystone, and the data would have to be sent back to Keystone to be validated. What would this save us?

Let’s assume for a moment that we don’t want to pay any of the overhead of the CMS message format. Instead, Keystone will encrypt just the JSON and base64-encode the result. How much does that save us? It depends on the encryption algorithm. An empty token will be tiny: 33 bytes when encrypted like this:

openssl bf -salt -a -in cms/empty.json -out cms/empty.bf

According to the openssl man page, that produces Blowfish-encrypted, base64-encoded output. What about a non-trivial token? It turns out our unscoped token is quite a bit bigger: 780 bytes for the comparable call:

openssl bf -salt -a -k key.data -in cms/auth_token_unscoped.json -out cms/auth_token_unscoped.bf

Compared with the PKIZ format at 929 bytes, the benefit does not seem all that great.

What about a scoped token with role data embedded in it, but no service catalog? It turns out the compression actually makes the PKIZ format more efficient: PKIZ is 917 bytes versus 1008 for the Blowfish version.
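
For a quick sanity check of numbers like these, something along the following lines works (a sketch: the .pkiz path is a stand-in, since only the .bf files come from the commands above):

    import os

    # Compare the on-disk sizes of the encoded token files.
    for path in ("cms/auth_token_unscoped.bf",
                 "cms/auth_token_unscoped.pkiz"):
        print(path, os.path.getsize(path), "bytes")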

Content

What data is in the token?

Identification: user id and name, domain id and possibly name. This is what you would see in an unsigned token.

Scope: domain and project info.

Roles: specific to the scope.

Service catalog: the sets of services and endpoints that implement those services.

It is the service catalog that is so problematic. While we have stated that you can make tokens without a service catalog, doing so really does not allow the endpoints to make any sort of decision about where to get resources.

There is a lot of redundant data in the catalog. We’ve discussed doing ID-only service catalogs. That pushes the expansion onto the endpoint side: each endpoint needs to be able to fetch the full service catalog and then look up the entries by ID.

But let us think in terms of scale. If there is a service catalog with, say, 512 endpoints, we are still going to be sending tokens that carry 512 * length(endpoint_id) bytes of catalog data.

Can we do better? According to Jon Bentley in Programming Pearls, yes we can. We can use a bitmap. No, not the image format. Here a bitmap is an array of bits, each of which, when set, indicates inclusion of the member in the set.
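
As a rough sketch in Python (illustrative only, not Keystone code), the idea looks like this:

    # Sketch: a bitmap over an ordered service catalog. Bit i is set when
    # endpoint i of the catalog is included in the set.
    def make_bitmap(included_indexes):
        bits = 0
        for i in included_indexes:
            bits |= 1 << i
        return bits

    def includes(bits, index):
        # True when the endpoint at position `index` is a member.
        return bool(bits & (1 << index))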

We need two things. First, a cached version of the service catalog on the endpoints. But now we need to put a slightly stricter constraint on it: the token must match up exactly with a version of the service catalog, and the service catalog must carry that version number. I’d take the git approach: do a sha256 hash of the service catalog document and include that hash in the token as the version.

Second, we need to enforce ordering on the service catalog. Each endpoint must be in a repeatable location in the list. I need to be able to refer to the endpoints, not by ID, but by sequence number.
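
A sketch of those two constraints together (plain Python; the function name and the deterministic-JSON choice are mine, for illustration):

    import hashlib
    import json

    def catalog_version(catalog_entries):
        # catalog_entries: the ordered list of endpoint records.
        # Serialize deterministically so the hash is repeatable, then take
        # the git-style approach: sha256 over the whole document.
        canonical = json.dumps(catalog_entries, sort_keys=True,
                               separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()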

Now, what would the token contain? Two things:

1. The hash of the service catalog.
2. A bitmap of the included services.

Here’s a minimal service catalog:

Index | Service name | Endpoint ID
  0   | Nova         | N1
  1   | Glance       | G1
  2   | Neutron      | T1
  3   | Cinder       | C1

A service catalog that had all of the endpoints would be (b for binary) b1111 or, in hex, 0xF.

A service catalog with only Nova would be b0001 or 0x1.

Just Cinder would be b1000 or 0x8.
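
Those values fall straight out of the ordering; a quick self-contained sketch against the table above (the endpoint IDs are from the table, everything else is illustrative):

    # The ordered catalog from the table: Nova, Glance, Neutron, Cinder.
    CATALOG = ["N1", "G1", "T1", "C1"]

    def bitmap_for(endpoint_ids):
        bits = 0
        for eid in endpoint_ids:
            bits |= 1 << CATALOG.index(eid)
        return bits

    assert bitmap_for(["N1", "G1", "T1", "C1"]) == 0b1111  # 0xF
    assert bitmap_for(["N1"]) == 0b0001                    # 0x1
    assert bitmap_for(["C1"]) == 0b1000                    # 0x8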

A service catalog with 512 endpoints would need a bitmap 512 bits in length. That is 64 bytes, so packed into a string it is 64 characters long, comparable to a sha256 hex digest. A comparable list of uuids would take 16384 characters, not including the JSON overhead of commas and quotes.

I’ve done a couple of tests with token data in both the minimized and the endpoint_id-only formats. With 30 endpoint ids, the compressed token size is 1969 bytes. Adding one more ID increases the size to 1989. The minimized format comes in at 1117 bytes when built with the following data:

"minimizedServiceCatalog": { 
    "catalog_sha256": "7c7b67a0b88c271384c94ed7d93423b79584da24a712c2ece0f57c9dd2060924",
    "entrymap": "Ox2a9d590bdb724e6d888db96f846c9fd8" },

The ID-only format would scale up at roughly 20 bytes per endpoint; the minimized one would stay fairly fixed in length.

Are there other options? If a token without a catalog assumed that all endpoints were valid, and the auth_token middleware set the environment for the request appropriately, then there would be no reason to even send a catalog over.

Project filtering of endpoints could allow for definitions of service catalogs that are subsets of the overall catalog. These subordinate service catalogs could have their own IDs and be sent over in the token. This would minimize the size of the data in the token at the expense of the server: a huge number of projects, each with its own service catalog, would lead to a large synchronization effort between the endpoints and the Keystone server.

If a token were only allowed to work with a limited subset of the endpoints assigned to the project, then maintaining strictly small service catalogs in their current format would be acceptable. However, this would require a significant number of changes to how users and services request tokens from Keystone.
