Kerberos is a single sign-on solution. AFAICT, it is the only one that solves the problem completely: you confirm that you are who you say you are, and the remote side confirms that it is who you think it is. It doesn’t work over the public internet only because most corporate firewalls block the ports it needs. So we want to be able to do Kerberos, or its equivalent, from the browser.
There are two approaches: we could try to do it in the context of existing technology, or we could extend the browser. First, a really quick overview.
When using Kerberos you do two things:
1. Go to a centralized location and authenticate yourself
2. Use something from that centralized place to authenticate yourself to some other service.
Yeah, it is overly simplified. I’ll get into more details as I go.
OK: so this kind of maps to the web single sign-on technologies out there. For example, OAuth and OpenID both work through redirects to a central auth server. After authenticating yourself, you then receive a redirect back, but this time with an additional value that confirms that the centralized server recognized you.
Why are these not the same? The devil is in the details.
First off, when you go to the auth server in Kerberos (called the Key Distribution Center, or KDC), the trust is mutual. The KDC sends you something encrypted with your password, meaning the KDC has to know your password. Thus you know that the KDC is really the right auth server, or your password won’t work. WebSSO doesn’t provide this. In fact, when you get redirected to, say, Yahoo, and prompted for a UID and password, you are potentially walking into a phishing attack. You provide your auth credentials to the remote server and get your token, but that server could be configured to accept anything and to just record your password.
Of course, the fact that you are authenticating with Basic Auth (user ID and password) is pretty bad in its own right.
Kerberos uses the password, but the password doesn’t pass across the network. There is a big difference. What goes back and forth allows you and the KDC to authenticate each other, but if either side attempts to cheat, it gains no more information than it had before.
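To make that concrete, here is a minimal Python sketch of the idea. The names, the PBKDF2 key derivation, and the XOR "cipher" are my own simplifications, not the real Kerberos wire protocol (which uses string2key and AES): both sides derive a key from the password, the KDC sends a fresh session key encrypted under it, and only a client that actually knows the password can recover it. The password itself never crosses the wire.

```python
import hashlib
import os

def password_key(password: str, salt: bytes) -> bytes:
    # Stand-in for Kerberos string2key: derive a shared key from the password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (keystream = SHA-256 of the key); real Kerberos uses AES.
    stream = hashlib.sha256(key).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

SALT = b"EXAMPLE.COM/alice"                 # hypothetical principal
kdc_key = password_key("hunter2", SALT)     # the KDC knows the password

# KDC -> client: a fresh session key, encrypted under the password-derived key.
session_key = os.urandom(16)
blob = xor_encrypt(kdc_key, session_key)

# Client side: derive the same key locally and decrypt.
client_key = password_key("hunter2", SALT)
recovered = xor_encrypt(client_key, blob)
assert recovered == session_key   # only the right KDC could have produced blob

# A phisher who recorded blob (or guesses the password) learns nothing usable:
wrong = xor_encrypt(password_key("guess", SALT), blob)
assert wrong != session_key
```

Note that the mutual-trust property falls out for free: if the server at the other end were not the real KDC, it would not know the password-derived key, and the decryption would produce garbage.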
The nonce/cookie/token/whatever that you get from a web SSO solution is just a random number. In most OpenID solutions, the Auth provider does not know anything about the services that it provides auth for, so it can’t say “these are legitimate” and, more importantly, “these are not legitimate.” It is easy to extend the systems to authenticate the requester, but it isn’t done baseline by the protocols.
In Kerberos, you get a ticket specific for some service. If the service you want the ticket for is unknown to the KDC, you don’t get a service ticket.
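A sketch of that ticket issuance, again with my own simplified names and toy encryption: the KDC shares a long-term key with every registered service, and a service ticket is just the session key sealed under that service’s key. A service the KDC doesn’t know has no key on file, so the KDC simply cannot mint a ticket for it.

```python
import hashlib
import os

def seal(key: bytes, data: bytes) -> bytes:
    # Toy symmetric encryption (XOR with a hash-derived keystream); real
    # Kerberos uses AES. seal() is its own inverse here.
    stream = hashlib.sha256(key).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

# Long-term keys the KDC shares with each registered service (hypothetical realm).
service_keys = {"HTTP/wiki.example.com": os.urandom(32)}

def issue_ticket(service: str, session_key: bytes) -> bytes:
    # No entry in the database means no ticket, full stop.
    if service not in service_keys:
        raise KeyError(f"unknown service: {service}")
    return seal(service_keys[service], session_key)

session_key = os.urandom(16)
ticket = issue_ticket("HTTP/wiki.example.com", session_key)

# The service decrypts the ticket with its own long-term key.
assert seal(service_keys["HTTP/wiki.example.com"], ticket) == session_key

# An unregistered (possibly evil) service gets nothing.
try:
    issue_ticket("HTTP/evil.example.com", session_key)
    assert False
except KeyError:
    pass
```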
OK, so how could we make this work with current technology? The first step is certificates.
The auth provider should present to you a certificate, signed by a trusted third party. Thus, you know the auth provider is who it says it is. This is necessary but not sufficient. An attacker can present a certificate that says “I am me,” but you still have to check that you are talking to who you think you are talking to. Thus, you, not the requesting service, need to dictate which URL is your auth server. This is requirement #1. Once you auth against your ID server, it should provide you with a certificate with a short lifetime. This is the certificate that you would use for the remote service. This is requirement #2.
Much of this can be short circuited. If your machine has a client certificate, you can use that to authenticate to the remote server without having to go back to the auth server. Its SSL Certificate can confirm its identity to you. Since both certificates are signed by trusted third parties, both are confident that they are accurate. The question, then, is why don’t more people do this?
Cost. Getting a server that can sign other certificates is trivial; there is a great Open Source CA implementation that I happen to work on. However, getting your CA certificate signed by one of the authorities in the Mozilla or Internet Explorer known-CA list costs big bucks. $10,000 was one price I heard quoted. Let’s say it is a million. The fact is that it costs something significant.
There are other issues.  Probably the most significant is that there is no session timeout. Once you have a certificate, you are signed in to any site that accepts that certificate. No user interaction is required. Ideally, a certificate based approach would be linked with a short, non-cacheable token like “enter the first letter of the day of week followed by a 4 digit PIN”.
User certificates are complicated to administer in the browser. For example, to remove a certificate in the current version of Mozilla, you have to know to go to Edit->Preferences->Advanced->Encryption->View Certificates->Your Certificates. Also, if you accidentally use the wrong one to try to log in to a site, the only way to rectify the problem is to close the browser and go back to the site. User certificates typically have long lifespans: a year to two years is not uncommon. If a certificate times out, getting a new one can be tricky.
But there is a lot of technology to make it easy, too. OK, a little more crypto background: a certificate is really just a decorated public key. If you do SSH, you probably have had to generate a key pair and then copy the public key over to the remote site. A certificate takes that public key, and a trusted service cryptographically attests that the key is really from you. So to get a certificate, you first generate a key pair, then generate a certificate signing request (CSR), send that in to be approved and signed, and get back the certificate.
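The "decorated public key" idea fits in a few lines of Python. This is textbook RSA with tiny hard-coded primes, purely to show the shape of the thing (real CAs use 2048-bit keys, padded signatures, and X.509 structure; every name here is made up): the CA signs the pairing of a subject and a public key with its private key, and anyone holding the CA’s public key can verify the binding.

```python
import hashlib

# Toy CA using textbook RSA with tiny primes -- illustration only.
p, q = 61, 53                         # CA's secret primes
n = p * q                             # public modulus (3233)
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def digest(data: bytes) -> int:
    # Reduce a SHA-256 hash into the tiny RSA modulus.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def ca_sign(subject: str, pubkey: str) -> int:
    # The "certificate": the CA signs (subject, public key) with its private key.
    return pow(digest(f"{subject}|{pubkey}".encode()), d, n)

def verify(subject: str, pubkey: str, sig: int) -> bool:
    # Anyone holding only the CA's public key (n, e) can check the binding.
    return pow(sig, e, n) == digest(f"{subject}|{pubkey}".encode())

sig = ca_sign("alice", "ALICE-PUBKEY")      # hypothetical subject and key
assert verify("alice", "ALICE-PUBKEY", sig)
```

The CSR step in the real flow is just the client-side half of this: you send the CA your subject name and public key (plus proof you hold the private key), and the signature comes back wrapped in a certificate.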
All of this can be done by the browser. Mozilla has an internal database, protected by your master password, that holds your private keys and certificates. It is (probably) in ~/.mozilla/firefox/*.default/cert8.db.
You can see what is in there with certutil -d ~/.mozilla/firefox/*.default -L
So the mechanisms all exist to do web single sign on with Certificates, but they are painful. So we have two options: build out the mechanisms for Certificate web SSO, or make Kerberos SSO work through the current technology.
To do Kerberos SSO, here’s what you would need.
Make it possible to get a Ticket Granting Ticket (TGT) via the web browser from the Key Distribution Center (KDC). This means first setting up an SSL connection between the browser and the KDC, and then using port 443. The browser could do a POST to the KDC to request a ticket, but the most important part is that the browser pops up a dialog that is unmistakably the enter-your-Kerberos-password/kinit dialog, and not a phishing attempt. The browser needs to be involved, and should probably have a spot on the border of the browser that is not in the client area. This is a tricky design issue, and will require some smarts.
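The transport half of this is mundane, and a sketch may help. Below, a throwaway local HTTP server stands in for a KDC reachable over HTTPS on 443 (the `/kdc` path and the placeholder request/reply bytes are my inventions; a real exchange would carry an AS-REQ and AS-REP):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Stand-in for a KDC proxied over HTTP; a real deployment would be HTTPS
# on port 443, and the body would be an AS-REQ, not these placeholder bytes.
class FakeKDC(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        reply = b"AS-REP for " + body          # placeholder "ticket"
        self.send_response(200)
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):              # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeKDC)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Browser side: POST the ticket request to the KDC endpoint.
url = f"http://127.0.0.1:{server.server_port}/kdc"   # hypothetical path
req = Request(url, data=b"alice@EXAMPLE.COM", method="POST")
with urlopen(req) as resp:
    ticket = resp.read()
server.shutdown()

assert ticket == b"AS-REP for alice@EXAMPLE.COM"
```

The hard part, as the post says, is not the POST; it is giving the password prompt browser chrome that a phishing page cannot imitate.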
The mechanism to get a service ticket is pretty similar to getting the TGT, but could probably be hidden from the end user.
There should probably be a handful of notifications that are visible to the user. Getting a TGT requires interaction, but anytime something requests a new service ticket, the user should probably be notified, to prevent requests happening behind his/her back.
One shortcoming of current Kerberos implementations is that they only allow one TGT and KDC at a time. This limitation is well understood and discussed, and will need to be alleviated before Kerberos can really be a viable SSO alternative.
How to talk to the KDC via HTTP (not HTTPS) is implemented in the Heimdal tools. A compatible implementation in MIT Kerberos would be nice, of course, if that doesn’t already exist.
Also you need a way to find the KDC without configuring the name manually. The dns_lookup_* options should do that but I’m not sure about the MIT implementation and there’s no way to specify that the KDC should be accessed over HTTP except in krb5.conf. (Sample at http://www.pdc.kth.se/resources/software/login-1/firewalls-and-nat-using-kerberos)
It has to be possible to present different credentials to different sites of course, so yes, this needs to be implemented in the browser, but it really needs to integrate well with the credentials cache in the desktop login session too.
Is this different from the existing Kerberos support in IE and IIS/ISA?
“Make it possible to get a Ticket Granting Ticket (TGT) via the web browser from the Key Distribution Center (KDC). This means first setting up an SSL connection between the browser and the KDC, and then using port 443”
It sounds a bit like what Webauth (or Pubcookie and Shibboleth) does: it allows the user to get a Kerberos TGT through HTTPS, which is stored in the user’s cookies.
Kerberos authentication happens over the web all the time. Check out the SPNEGO article on Wikipedia. mod_auth_kerb has supported this for years.
Eric, I am fairly certain that Microsoft does not proxy over port 443, but I cannot say authoritatively.
Anonymous: Yes, I know that SPNEGO etc works. The problem is that it requires port 88 to be open, and most corporate firewalls block it.
Bot: yes, it is in the same domain as the Web SSO. However, without redirects, you can’t get information from one site to another, and redirects are horribly insecure. And storing a TGT in the cookie means you give the “keys to the kingdom” to the remote site, which is insecure as well.
Alexander, yes, MIT Kerberos does the DNS lookup for the KDC. I think that they would be sufficient, but you could certainly add another srv record for the HTTP KDC if it was necessary…I don’t think it is, though. I agree it needs to be integrated with the desktop login cache.
Thanks to everyone for responding. I really appreciate the feedback.
Hey Adam –
“The KDC sends you something encrypted with your password, meaning the KDC has to know your password. Thus you know that the KDC is really the right auth server, or your password won’t work.”
How does this work? How do you decrypt what the KDC sent back to you? Or does the browser do this for you? A big disadvantage of this is requiring your password to be decryptable on the KDC – if someone got a hold of the encryption key on the KDC they could get everyone’s password. Storing a one-way hash is a lot more secure.
Take a look at OAuth 2 (http://tools.ietf.org/html/draft-ietf-oauth-v2-22). It is pretty good. It’s not technically for SSO but it’s really easy to build an SSO system on top of it. I recently built one for NYSE. OAuth 2 is a substantial improvement on top of OAuth 1.
The issue with phishing is definitely real. In fact, Facebook, which uses OAuth 2 for SSO, used to have a simple DHTML popup for SSO logins, but it was too easy to fake that. So now they have a browser popup that clearly shows the address bar so the user can see what site they are typing their credentials into.
In OAuth 2 there is a concept of a client_id and a redirect_uri. The Auth server will only redirect a browser back to a redirect_uri registered with the client_id that is initiating the OAuth flow. That is how the Auth server knows the service is legitimate (it has to have the client_id registered in its database) and also how it keeps the user from getting redirected to an evil site.
After the user logs in to the Auth server it redirects them to the client site with a very short lived, one-time only code. (Also note that all of the flow requires HTTPS). The client site then exchanges that code for a time-limited access token. That exchange takes place server to server, not through the browser, so the access token is not shared with the browser at all.
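The two safeguards described above (registered redirect URIs and one-time codes) can be sketched in a few lines of Python. The registry contents, client_id, and token format here are hypothetical, and real servers add HTTPS, client secrets, expiry, and scopes:

```python
import secrets

# Hypothetical client registry: the auth server only redirects to URIs
# registered up front for a given client_id.
REGISTRY = {"nyse-portal": {"https://portal.example.com/callback"}}

codes = {}   # one-time authorization codes -> (client_id, user)

def authorize(client_id: str, redirect_uri: str, user: str) -> str:
    # Refuse to redirect anywhere that wasn't registered for this client.
    if redirect_uri not in REGISTRY.get(client_id, set()):
        raise ValueError("unregistered redirect_uri")
    code = secrets.token_urlsafe(16)
    codes[code] = (client_id, user)
    return f"{redirect_uri}?code={code}"

def exchange(code: str, client_id: str) -> str:
    # Server-to-server step: the code is single-use and bound to the client.
    owner, user = codes.pop(code)             # pop => one-time only
    if owner != client_id:
        raise ValueError("code issued to a different client")
    return f"access-token-for-{user}"         # placeholder token

location = authorize("nyse-portal", "https://portal.example.com/callback", "alice")
code = location.split("code=")[1]
assert exchange(code, "nyse-portal") == "access-token-for-alice"

# Evil redirect targets are rejected outright.
try:
    authorize("nyse-portal", "https://evil.example.com/", "alice")
    assert False
except ValueError:
    pass
```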
A lot of heavy hitters including MS, Google, Facebook, Twitter are in the OAuth 2 group. Definitely worth taking another look.
Eric,
Agreed that OAuth 2 is better than 1, but any solution based on existing web technologies is going to fall down from the issues I point out. The browser needs to be smarter: it should identify CSRF attacks, invalid auth servers, and unauthorized servers trying to use auth servers.
Kerberos does store a hash; read up on it:
http://en.wikipedia.org/wiki/Kerberos_%28protocol%29#User_Client-based_Logon
Keep in mind when reading it that the entire protocol should happen over SSL, and setting up SSL correctly requires certificates… not quite a chicken-and-egg problem, but a whole ’nother piece of infrastructure to get right. I suspect that we could get a simpler solution based on the process of Kerberos, but that uses the cryptography that is already part of the HTTP protocol.