CRLs are a critical part of security infrastructure – oh dear!

In the article “why has encrypted email not taken off” I looked at some of the barriers to the widespread adoption of secure email. Certificate revocation was one factor discussed.

In the article “How certificate revocation (doesn’t) work in practice”, Netcraft highlight a series of issues with certificate revocation lists (CRLs).

The CRL concept is relatively simple. Before you decide to trust a certificate, you should go to a web site and check if the certificate is on a black-list (stating it can no longer be trusted).
(You can use LDAP or OCSP to do this – both have essentially the same problem).
What the article reveals is that the implementation in web browsers is inconsistent, resulting in users inadvertently trusting certificates that have been flagged as untrustworthy.
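Stripped of the X.509 machinery, the check the browser is supposed to perform is just a black-list lookup. A minimal sketch, with the CRL modelled as a plain dictionary (a real CRL is a signed structure fetched from the CA’s distribution point, and the serial numbers here are made up for illustration):

```python
from datetime import datetime, timezone

# Hypothetical CRL: revoked serial numbers mapped to their revocation dates.
# In reality this list is signed by the CA and retrieved over HTTP, LDAP or
# queried via OCSP.
crl = {
    0x1A2B: datetime(2024, 3, 1, tzinfo=timezone.utc),
}

def certificate_trusted(serial: int, crl: dict) -> bool:
    """A certificate is untrustworthy if its serial appears on the CRL."""
    return serial not in crl

assert certificate_trusted(0x9999, crl)       # not on the list: trusted
assert not certificate_trusted(0x1A2B, crl)   # revoked: must not be trusted
```

The concept really is this simple; the inconsistency Netcraft describe arises in whether browsers fetch the list at all, what they do when the fetch fails, and how they surface the result to the user.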

This is a big deal, as users have been trained to look for the padlock icon in a browser to show the site can be trusted. If the padlock is shown when it should not be, the system fails.
This is not a problem with the concept, or with user perception; it is the user being let down by a poor implementation and interpretation of the X.509 standard.

CRLs have long been a problem. I recall, in the early days of the Password project (1993–95), when we implemented an S/MIME system for pilot deployment to universities across Europe, a huge debate as to what the revocation of a certificate used to encrypt an email meant semantically. Should it mean:

  • All messages encrypted using the certificate are untrustworthy
  • Messages encrypted after the CRL date are untrustworthy, but ones before are OK
  • Nothing at all, as the certificate is used for confidentiality; a separate certificate should be used to indicate integrity
  • …but what about systems that use the same certificate for confidentiality and integrity?
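The competing interpretations above can be made concrete as explicit policies. This is an illustrative sketch only (the policy names and dates are my own, not from any standard), deciding whether a given encrypted message is still trustworthy once its certificate appears on a CRL:

```python
from datetime import datetime, timezone
from enum import Enum

class RevocationPolicy(Enum):
    DISTRUST_ALL = 1         # every message encrypted with the cert is suspect
    DISTRUST_AFTER_CRL = 2   # only messages encrypted after revocation are suspect
    IGNORE_FOR_ENCRYPTION = 3  # revocation says nothing about confidentiality

def message_trusted(message_date: datetime,
                    revocation_date: datetime,
                    policy: RevocationPolicy) -> bool:
    if policy is RevocationPolicy.DISTRUST_ALL:
        return False
    if policy is RevocationPolicy.DISTRUST_AFTER_CRL:
        return message_date < revocation_date
    return True  # IGNORE_FOR_ENCRYPTION

revoked = datetime(2024, 3, 1, tzinfo=timezone.utc)
before  = datetime(2024, 2, 1, tzinfo=timezone.utc)
after   = datetime(2024, 4, 1, tzinfo=timezone.utc)

# The same message gets three different answers under the three policies:
assert not message_trusted(before, revoked, RevocationPolicy.DISTRUST_ALL)
assert message_trusted(before, revoked, RevocationPolicy.DISTRUST_AFTER_CRL)
assert not message_trusted(after, revoked, RevocationPolicy.DISTRUST_AFTER_CRL)
assert message_trusted(after, revoked, RevocationPolicy.IGNORE_FOR_ENCRYPTION)
```

The last assertion shows the fourth bullet’s problem: if one certificate serves both confidentiality and integrity, no single policy choice gives a sensible answer for both uses.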

Not easy stuff, especially when you add the variable of “who decides” – the technology or the user?

In summary, PKI, X.509, CRLs and the like are great theory that has stood the test of time as a concept, but for it to really work well in practice there needs to be better doctrine, policy, procedures and technology – otherwise users are lulled into a false sense of security.

Do you agree?