In a comment to my previous post, Daniele asked the entirely reasonable question,
Would you like to comment on why you think that DNSSEC+DANE are not a possible and much better alternative?
Where DANE fails to be a feasible alternative to the current system is that it is not “widely acknowledged to be superior in every possible way”. A weak demonstration of this is that no browser has implemented DANE support, and very few other TLS-using applications have, either. The only thing I use which has DANE support that I’m aware of is Postfix – and SMTP is an application in which the limitations of DANE have far less impact.
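For what it's worth, turning on opportunistic DANE in Postfix is a two-line affair. A minimal sketch of the relevant `main.cf` settings (these are the standard Postfix parameters; your resolver must be DNSSEC-validating for this to do anything useful):

```
# main.cf: require DNSSEC-capable lookups, and use DANE when the
# destination publishes TLSA records (falling back to opportunistic TLS)
smtp_dns_support_level = dnssec
smtp_tls_security_level = dane
```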
The limitations of DANE for large-scale deployment, as I understand them, are enumerated below.
DNS Is Awful
Quoting Google security engineer Adam Langley:
But many (~4% in past tests) of users can’t resolve a TXT record when they can resolve an A record for the same name. In practice, consumer DNS is hijacked by many devices that do a poor job of implementing DNS.
TXT records are far, far older than TLSA records, so it seems likely that TLSA lookups would fail even more than 4% of the time. Extrapolate to what that failure rate would do to the reliability of DANE verification. It would either be completely unworkable, or else would cause a whole new round of “just click through the security error” training. Ugh.
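To make the "newer record type" point concrete, here's a sketch (Python standard library only; `build_query` is an illustrative helper, not a real API) showing that on the wire a TLSA query differs from an A query only in its two-byte qtype, 52 instead of 1. It's exactly this kind of detail that badly-implemented consumer DNS proxies mishandle:

```python
import struct

# DNS record type numbers: A is ancient and universally proxied;
# TLSA (RFC 6698) is type 52 and far newer.
QTYPE_A = 1
QTYPE_TLSA = 52

def build_query(name: str, qtype: int, qid: int = 0x1234) -> bytes:
    """Encode a minimal DNS query packet for `name` with the given qtype."""
    # Header: id, flags (RD set), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qclass IN
    return header + question

# The TLSA owner name for HTTPS on example.com, per RFC 6698's naming scheme.
tlsa_query = build_query("_443._tcp.example.com", QTYPE_TLSA)
a_query = build_query("example.com", QTYPE_A)
```

The two packets are structurally identical; a proxy that answers the second but drops the first is failing purely on the unfamiliar qtype.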
This also impacts DNSSEC itself. Lots of recursive resolvers don’t validate DNSSEC, and some providers mangle DNS responses in some way, which breaks DNSSEC. Since OSes don’t support DNSSEC validation “by default” (for example, by having the name resolution APIs indicate DNSSEC validation status), browsers would essentially have to ship their own validating resolver code.
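To illustrate why: the only DNSSEC signal a stub resolver typically sees is the AD ("Authentic Data") bit in the DNS response header, and that bit arrives over an unauthenticated path from the recursive resolver. A minimal sketch of checking it (Python standard library only; `ad_bit_set` is an illustrative helper, not a real API):

```python
import struct

AD_BIT = 0x0020  # "Authentic Data" flag in the DNS header flags word

def ad_bit_set(response: bytes) -> bool:
    """True if the upstream resolver claims it DNSSEC-validated this answer.

    Trusting this bit means trusting the (usually unauthenticated) path
    to the recursive resolver -- which is why a browser that cares would
    have to ship its own validating resolver instead.
    """
    _, flags = struct.unpack(">HH", response[:4])
    return bool(flags & AD_BIT)

# Two example response headers: same answer, with and without the AD bit.
validated_hdr = struct.pack(">HH", 0x1234, 0x81A0)    # QR|RD|RA|AD
unvalidated_hdr = struct.pack(">HH", 0x1234, 0x8180)  # QR|RD|RA
```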
Some people have concerns around the “single point of control” for DNS records, too. While the “weakest link” nature of the CA model is terribad, there is a significant body of opinion that replacing it with a single, minimally-accountable organisation like ICANN isn’t a great trade.
Finally, performance is also a concern. Having to go out-of-band to retrieve TLSA records delays page generation, and nobody likes slow page loads.
DNSSEC Is Awful
Lots of people don’t like DNSSEC, for all sorts of reasons. While I don’t think it is quite as bad as people make out (I’ve deployed it for most zones I manage), there are some legitimate issues that mean browser vendors aren’t willing to rely on DNSSEC.
1024 bit RSA keys are quite common throughout the DNSSEC system. Getting rid of 1024 bit keys in the PKI has been a long-running effort; doing the same for DNSSEC is likely to take quite a while. Yes, rapid rotation is possible, by splitting key-signing and zone-signing (a good design choice), but since it can’t be enforced, it’s entirely likely that long-lived 1024 bit keys for signing DNSSEC zones are the rule, rather than the exception.
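The key sizes in question are directly visible in a zone's DNSKEY RDATA. As a sketch of how you'd measure them (Python standard library only; `rsa_dnskey_bits` is an illustrative helper, and the wire layout assumed here is the RSA public key format from RFC 3110):

```python
def rsa_dnskey_bits(public_key: bytes) -> int:
    """Infer the RSA modulus size, in bits, from a DNSKEY public key
    field laid out per RFC 3110: an exponent-length prefix, then the
    exponent, then the modulus."""
    exp_len = public_key[0]
    offset = 1
    if exp_len == 0:
        # Long form: the exponent length is in the next two bytes instead.
        exp_len = int.from_bytes(public_key[1:3], "big")
        offset = 3
    modulus = public_key[offset + exp_len:]
    return len(modulus) * 8

# A synthetic 1024-bit key: 3-byte exponent (65537) plus a 128-byte modulus.
example_key = bytes([3]) + (65537).to_bytes(3, "big") + bytes(128)
```

Surveys of deployed zones amount to running something like this over every DNSKEY they can fetch.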
DNS Providers are Awful
While we all poke fun at CAs who get compromised, consider how often
someone’s DNS control panel gets compromised. Now ponder the fact that, if
DANE is supported,
TLSA records can be manipulated in that DNS control
panel. Those records would then automatically be DNSSEC signed by the DNS
provider and served up to anyone who comes along. Ouch.
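For concreteness, a published TLSA record is just one more line in the zone, entirely under the control panel's authority (the digest below is a placeholder, not a real key hash):

```
; HTTPS pin for example.com: usage 3 (DANE-EE), selector 1 (SPKI),
; matching type 1 (SHA-256). Whoever can edit this line controls
; which certificate clients will accept.
_443._tcp.example.com. IN TLSA 3 1 1 0123456789abcdef...
```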
In theory, of course, you should choose a suitably secure DNS provider, to prevent this problem. Given that there are regular hijackings of high-profile domains (which, presumably, the owners of those domains would also want to prevent), there is something in the DNS service provider market which prevents optimal consumer behaviour. Market for lemons, perchance?
None of these problems are unsolvable, although none are trivial. I like DANE as a concept, and I’d really, really like to see it succeed. However, the problems I’ve listed above are all reasonable objections, made by people who have their hands in browser codebases, and so unless they’re fixed, I don’t see that anyone’s going to be able to rely on DANE on the Internet for a long, long time to come.