Why DANE isn't going to win

Posted: Tue, 14 July 2015 | permalink | 4 Comments

In a comment to my previous post, Daniele asked the entirely reasonable question,

Would you like to comment on why you think that DNSSEC+DANE are not a possible and much better alternative?

Where DANE fails to be a feasible alternative to the current system is that it is not “widely acknowledged to be superior in every possible way”. A weak demonstration of this is that no browser has implemented DANE support, and very few other TLS-using applications have, either. The only thing I use which has DANE support that I’m aware of is Postfix – and SMTP is an application in which the limitations of DANE have far less impact.
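To give a sense of how small the deployment surface is there, here is a sketch of what enabling opportunistic DANE for outbound SMTP looks like, assuming Postfix 2.11 or later and a local validating resolver (the parameter names are Postfix's own; everything else about your setup will vary):

```shell
# Ask the Postfix SMTP client for DNSSEC-validated lookups
# (this needs a validating resolver, e.g. a local unbound).
postconf -e "smtp_dns_support_level = dnssec"

# Use DANE TLSA records when a destination publishes them;
# fall back to opportunistic TLS for domains that don't.
postconf -e "smtp_tls_security_level = dane"

# Reload Postfix to pick up the new settings.
postfix reload
```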

The limitations of DANE for large-scale deployment, as I understand them, are enumerated below.

DNS Is Awful

Quoting Google security engineer Adam Langley:

But many (~4% in past tests) of users can’t resolve a TXT record when they can resolve an A record for the same name. In practice, consumer DNS is hijacked by many devices that do a poor job of implementing DNS.

Consider that TXT records are far, far older than TLSA records, so it seems likely that TLSA lookups would fail even more than 4% of the time. Imagine what that failure rate would do to the reliability of DANE verification: it would either be completely unworkable, or else would cause a whole new round of “just click through the security error” training. Ugh.
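For reference, when a TLSA lookup does succeed, the verification step itself is straightforward: the record’s association data is compared against the presented certificate according to the record’s selector and matching-type fields. A minimal sketch of that comparison per RFC 6698, using a made-up byte string in place of a real DER certificate:

```python
import hashlib

def tlsa_match(cert_der: bytes, selector: int, mtype: int,
               assoc_data: bytes) -> bool:
    """Compare a certificate against one TLSA record (RFC 6698).

    selector 0 = full certificate (DER).  Selector 1
    (SubjectPublicKeyInfo) is omitted here because extracting the
    SPKI needs an ASN.1 parser.
    mtype 0 = exact match, 1 = SHA-256 digest, 2 = SHA-512 digest.
    """
    if selector != 0:
        raise NotImplementedError("selector 1 (SPKI) needs ASN.1 parsing")
    if mtype == 0:
        return cert_der == assoc_data
    if mtype == 1:
        return hashlib.sha256(cert_der).digest() == assoc_data
    if mtype == 2:
        return hashlib.sha512(cert_der).digest() == assoc_data
    raise ValueError(f"unknown matching type {mtype}")

# Stand-in bytes for a real DER-encoded certificate.
cert = b"not-a-real-certificate"
# What a "3 0 1" TLSA record's association data would hold for it.
record = hashlib.sha256(cert).digest()
print(tlsa_match(cert, 0, 1, record))  # → True
```

The hard part, as the paragraph above argues, is not this comparison; it is reliably getting the record to the client in the first place.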

This also impacts DNSSEC itself. Lots of recursive resolvers don’t validate DNSSEC, and some providers mangle DNS responses in some way, which breaks DNSSEC. Since OSes don’t support DNSSEC validation “by default” (for example, by having the name resolution APIs indicate DNSSEC validation status), browsers would essentially have to ship their own validating resolver code.

Some people have concerns around the “single point of control” for DNS records, too. While the “weakest link” nature of the CA model is terribad, there is a significant body of opinion that replacing it with a single, minimally-accountable organisation like ICANN isn’t a great trade.

Finally, performance is also a concern. Having to go out-of-band to retrieve TLSA records delays page generation, and nobody likes slow page loads.


DNSSEC Is Awful

Lots of people don’t like DNSSEC, for all sorts of reasons. While I don’t think it is quite as bad as people make out (I’ve deployed it for most zones I manage), there are some legitimate issues that mean browser vendors aren’t willing to rely on DNSSEC.

1024 bit RSA keys are quite common throughout the DNSSEC system. Getting rid of 1024 bit keys in the PKI has been a long-running effort; doing the same for DNSSEC is likely to take quite a while. Yes, rapid rotation is possible, by splitting key-signing and zone-signing (a good design choice), but since it can’t be enforced, it’s entirely likely that long-lived 1024 bit keys for signing DNSSEC zones are the rule, rather than the exception.
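The key-signing/zone-signing split mentioned here looks like the following with BIND 9’s dnssec-keygen; example.com and the key sizes are illustrative:

```shell
# Key-signing key: long-lived, referenced by the parent zone's DS
# record, so make it strong (2048 bit RSA).
dnssec-keygen -a RSASHA256 -b 2048 -n ZONE -f KSK example.com

# Zone-signing key: signs the actual records, so it can be weaker
# and rotated frequently without touching the parent zone.
dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.com
```

Nothing in the protocol forces the ZSK rotation to actually happen, though, which is exactly the problem.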

DNS Providers are Awful

While we all poke fun at CAs who get compromised, consider how often someone’s DNS control panel gets compromised. Now ponder the fact that, if DANE is supported, TLSA records can be manipulated in that DNS control panel. Those records would then automatically be DNSSEC signed by the DNS provider and served up to anyone who comes along. Ouch.

In theory, of course, you should choose a suitably secure DNS provider, to prevent this problem. Given that there are regular hijackings of high-profile domains (which, presumably, the owners of those domains would also want to prevent), there is something in the DNS service provider market which prevents optimal consumer behaviour. Market for lemons, perchance?


None of these problems are unsolvable, although none are trivial. I like DANE as a concept, and I’d really, really like to see it succeed. However, the problems I’ve listed above are all reasonable objections, made by people who have their hands in browser codebases, and so unless they’re fixed, I don’t see that anyone’s going to be able to rely on DANE on the Internet for a long, long time to come.


From: Daniele
2015-07-21 13:18

Hello, thanks for your reply.

It seems to me that every issue you point out is not an intrinsic issue with the DNSSEC+DANE specification, but an issue with the current bad state of the DNS stack in most consumer products. I also include in this category the issues with the bad security record of DNS providers: users who fiddle with DNS records should be skilled enough to handle something more complex than entering a password in a web form.

On the other hand, the centralized CA model has an inherent trust issue that makes it unreliable.

The fact that the current situation is not perfect should not be a reason for client software developers to leave DNSSEC+DANE out of their products, as an option for those who want to deploy such technology.

As Google (and Mozilla?) are pushing to get SHA-1 replaced (and I believe this is not a painless change for many), they could allow the deployment of DNSSEC+DANE by enabling it in their browsers. Only in this way will the issues with the DNS stack emerge and get fixed.

Cheers, Daniele

From: Matt Palmer
2015-07-22 03:06

Hi Daniele,

You are correct that all of the issues involved are fixable. I disagree that DANE should be implemented in browsers before they’re fixed, though – there are significant security issues involved in trusting TLSA records for all domains when the records can be trivially subverted by popping the DNS provider. There are also UX concerns with trying to rely on TLSA records when 4% of visitors can’t use them. Training users to click through security warnings is never the answer.

I also disagree that the CA trust issue is unfixable. Certificate Transparency, if rolled out universally, will make it impossible for a CA to silently misissue a certificate. CT is far easier to roll out universally than DANE (no need to fix every broken DNS resolver on the planet, just for starters).

I’d also note that the trust concerns you have for CAs also exist for DNS. They’re not quite the same – the “weakest link” issue isn’t quite as bad – but having anyone in the DNS hierarchy above your domain able to silently fiddle with your TLSA records still isn’t great from a security perspective.

From: Daniele
2015-07-22 09:09

Hello Matt,

implementing DANE does not mean that it must be enabled by default, or that it must be enabled for all domains. What annoys me is that currently there is no practical way to deploy DANE for HTTP connections, and as long as it is not used, the problems cannot be found and thus cannot be fixed.

I don’t have much expertise with TLSA records, but I don’t see how someone higher up in the DNS hierarchy could modify the TLSA records for a domain without control of the associated private key.

Another point is that it is easy to monitor if the TLSA records for a domain change, while it is much harder to verify if somewhere on the Internet there is a server impersonating another with a fake certificate.

From: Matt Palmer
2015-07-23 01:28

Hi Daniele,

If you want to require users to enable DANE manually, you can just instruct your users to install the nic.cz DNSSEC/TLSA validation plugin. Problem solved.

As far as modifying the records, while someone higher up the hierarchy couldn’t forge responses with your DNS server’s private key, they could add additional DS records for a private key they control. This, and changing TLSA records, can be done for selected victims, rather than for the entire Internet, thus making simple monitoring ineffective. You need a transparency system, equivalent to CT, to ensure that you’re aware of any changes that get made.
