In a comment to my previous post, Daniele asked the entirely reasonable question,
Would you like to comment on why you think that DNSSEC+DANE are not a possible and much better alternative?
Where DANE fails to be a feasible alternative to the current system is that it is not “widely acknowledged to be superior in every possible way”. A weak demonstration of this is that no browser has implemented DANE support, and very few other TLS-using applications have, either. The only thing I use which has DANE support that I’m aware of is Postfix – and SMTP is an application in which the limitations of DANE have far less impact.
The limitations of DANE for large-scale deployment, as I understand them, are enumerated below.
DNS Is Awful
Quoting Google security engineer Adam Langley:
But many (~4% in past tests) of users can’t resolve a TXT record when they can resolve an A record for the same name. In practice, consumer DNS is hijacked by many devices that do a poor job of implementing DNS.
TXT records are far, far older than TLSA records, so it seems reasonable to assume that TLSA records would fail to be retrieved greater than 4% of the time. Imagine what that failure rate would do to the reliability of DANE verification: it would either be completely unworkable, or else would cause a whole new round of “just click through the security error” training. Ugh.
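For the curious, a TLSA lookup is just an ordinary DNS query against a port- and protocol-prefixed name; for example (the hostname here is a placeholder):

$ dig +dnssec TLSA _443._tcp.example.com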
This also impacts DNSSEC itself. Lots of recursive resolvers don’t validate DNSSEC, and some providers mangle DNS responses in some way, which breaks DNSSEC. Since OSes don’t support DNSSEC validation “by default” (for example, by having the name resolution APIs indicate DNSSEC validation status), browsers would essentially have to ship their own validating resolver code.
Some people have concerns around the “single point of control” for DNS records, too. While the “weakest link” nature of the CA model is terribad, there is a significant body of opinion that replacing it with a single, minimally-accountable organisation like ICANN isn’t a great trade.
Finally, performance is also a concern. Having to make extra DNS queries to retrieve TLSA records delays page loading, and nobody likes a slow page load.
DNSSEC Is Awful
Lots of people don’t like DNSSEC, for all sorts of reasons. While I don’t think it is quite as bad as people make out (I’ve deployed it for most zones I manage), there are some legitimate issues that mean browser vendors aren’t willing to rely on DNSSEC.
1024 bit RSA keys are quite common throughout the DNSSEC system. Getting rid of 1024 bit keys in the PKI has been a long-running effort; doing the same for DNSSEC is likely to take quite a while. Yes, rapid rotation is possible, by splitting key-signing and zone-signing (a good design choice), but since it can’t be enforced, it’s entirely likely that long-lived 1024 bit keys for signing DNSSEC zones are the rule, rather than the exception.
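For reference, the key-signing/zone-signing split looks like this with BIND’s tooling (the zone name and key sizes here are illustrative, not recommendations):

# long-lived key-signing key (KSK), whose hash is published in the parent zone
$ dnssec-keygen -f KSK -a RSASHA256 -b 2048 example.com
# short-lived zone-signing key (ZSK), cheap to rotate frequently
$ dnssec-keygen -a RSASHA256 -b 1024 example.com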
DNS Providers are Awful
While we all poke fun at CAs who get compromised, consider how often someone’s DNS control panel gets compromised. Now ponder the fact that, if DANE is supported, TLSA records can be manipulated in that DNS control panel. Those records would then automatically be DNSSEC signed by the DNS provider and served up to anyone who comes along. Ouch.
In theory, of course, you should choose a suitably secure DNS provider, to prevent this problem. Given that there are regular hijackings of high-profile domains (which, presumably, the owners of those domains would also want to prevent), there is something in the DNS service provider market which prevents optimal consumer behaviour. Market for lemons, perchance?
None of these problems are unsolvable, although none are trivial. I like DANE as a concept, and I’d really, really like to see it succeed. However, the problems I’ve listed above are all reasonable objections, made by people who have their hands in browser codebases, and so unless they’re fixed, I don’t see that anyone’s going to be able to rely on DANE on the Internet for a long, long time to come.
The Internet is going encrypted. Revelations of mass-surveillance of Internet traffic have given the Internet community the motivation to roll out encrypted services – the biggest of which is undoubtedly HTTP.
The weak point, though, is SSL Certification Authorities. These are “trusted third parties” who are supposed to validate that a person requesting a certificate for a domain is authorised to have a certificate for that domain. It is no secret that these companies have failed to do the job entrusted to them, again, and again, and again. Oh, and another one.
However, at this point, doing away with CAs and finding some other mechanism isn’t feasible. There is no clear alternative, and the inertia in the current system is overwhelming, to the point where it would take a decade or more to migrate away from the CA-backed SSL certificate ecosystem, even if there was something that was widely acknowledged to be superior in every possible way.
This is where Certificate Transparency comes in. This protocol, which works as part of the existing CA ecosystem, requires CAs to publish every certificate they issue, in order for the certificate to be considered “valid” by browsers and other user agents. While it doesn’t guarantee to prevent misissuance, it does mean that a CA can’t cover up or try to minimise the impact of a breach or other screwup – their actions are fully public, for everyone to see.
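The logs themselves are publicly auditable via a simple HTTP API (defined in RFC 6962); as a taste, fetching the current signed tree head from Google’s Pilot log (assuming that log’s well-known URL) is a one-liner:

$ curl https://ct.googleapis.com/pilot/ct/v1/get-sth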
Much of Certificate Transparency’s power, however, is diminished if nobody is looking at the certificates which are being published. That is why I have launched sslaware.com, a site for searching the database of logged certificates. At present it is rather minimalist; however, I intend to add more features, such as real-time notifications (if a new cert for your domain or organisation is logged, you’ll get an e-mail about it) and more advanced searching capabilities.
If you care about the security of your website, you should check out SSL Aware and see what certificates have been issued for your site. You may be unpleasantly surprised.
Ever worked at a company (or on a codebase, or whatever) where it seemed like, no matter what the question was, the answer was written down somewhere you could easily find it? Most people haven’t, sadly, but they do exist, and I can assure you that it is an absolute pleasure.
On the other hand, practically everyone has experienced completely undocumented systems and processes, where knowledge is shared by word-of-mouth, or lost every time someone quits.
Why are there so many more undocumented systems than documented ones out there, and how can we cause more well-documented systems to exist? The answer isn’t “people are lazy”, and the solution is simple – though not easy.
Why Johnny Doesn’t Read
When someone needs to know something, they might go look for some documentation, or they might ask someone else or just guess wildly. The behaviour “look for documentation” is often reinforced negatively, by the result “documentation doesn’t exist”.
At the same time, the behaviours “ask someone” and “guess wildly” are positively reinforced, by the results “I get my question answered” and/or “at least I can get on with my work”. Over time, people optimise their behaviour by skipping the “look for documentation” step, and just go straight to asking other people (or guessing wildly).
Why Johnny Doesn’t Write
When someone writes documentation, they’re hoping that people will read it and not have to ask them questions in order to be productive and do the right thing. Hence, the behaviour “write documentation” is negatively reinforced by the results “I still get asked questions”, and “nobody does things the right way around here, dammit!”
Worse, though, is that there is very little positive reinforcement for the author: when someone does read the docs, and thus doesn’t ask a question, the author almost certainly doesn’t know they dodged a bullet. Similarly, when someone does things the right way, it’s unlikely that anyone will notice. It’s only the mistakes that catch the attention.
Given that the experience of writing documentation tends to skew towards the negative, it’s not surprising that eventually, the time spent writing documentation is reallocated to other, more utility-producing activities.
The combination of these two situations is self-reinforcing. While a suitably motivated reader might start by strictly looking for documentation, or an author might initially be enthused enough to fully document their work, over time the “reflex” will be for readers to just go ask someone, because “there’s never any documentation!”, and for authors to not write documentation, because “nobody bothers to read what I write anyway!”.
It is important to recognise that this iterative feedback loop is the “natural state” of the reader/author ecosystem, resulting in something akin to thermodynamic entropy. To avoid the system descending into chaos, energy needs to be constantly applied to keep the system in order.
Effective methods for avoiding the vicious circle can be derived from the things that cause it. Change the forces that apply themselves to readers and authors, and they will behave differently.
On the reader’s side, the most effective way to encourage people to read documentation is for it to consistently exist. This means that those in control of a project or system mustn’t consider something “done” until the documentation is in a good state. Patches shouldn’t be landed, and releases shouldn’t be made, unless the documentation is altered to match the functional changes being made. Yes, this requires discipline, which is just a form of energy application to prevent entropic decay.
Writing documentation should be an explicit and well-understood part of somebody’s job description. Whoever is responsible for documentation needs to be given the time to do it properly. Writing well takes time and mental energy, and that time needs to be factored into the plans. Never forget that skimping on documentation, like short-changing QA or customer support, is a false economy that will cost more in the long term than it saves in the short term.
Even if the documentation exists, though, some people are going to tend towards asking people rather than consulting the documentation. This isn’t a moral failing on their part; it happens because they believe that asking someone is more beneficial to them than going to the documentation. To change the behaviour, you need to change the belief.
You could change the belief by increasing the “cost” of asking. You could fire (or hellban) anyone who ever asks a question that is answered in the documentation. But you shouldn’t. You could yell “RTFM!” at everyone who asks a question. Thankfully that’s one acronym that’s falling out of favour.
Alternately, you can reduce the “cost” of getting the answer from the documentation. Possibly the largest single productivity boost for programmers, for example, has been the existence of Google. Whatever your problem, there’s a pretty good chance that a search or two will find a solution. For your private documentation you probably don’t have the power of Google at your disposal, but decent full-text search systems exist. Use them.
Finally, authors would benefit from more positive reinforcement. If you find good documentation, let the author know! It requires a lot of effort (comparatively) to look up an author’s contact details and send them a nice e-mail. The “like” button is a more low-energy way of achieving a similar outcome – you click the button, and the author gets a warm, fuzzy feeling. If your internal documentation system doesn’t have some way to “close the loop” and let readers easily give authors a bit of kudos, fix it so it does.
Heck, even if authors just know that a page they wrote was loaded
in the past week, that’s better than the current situation, in which
deafening silence persists, punctuated by the occasional plaintive cry of
“Hey, do you know how to…?”.
Do you have any other ideas for how to encourage readers to read, and for authors to write?
You may have heard that Uber has been under a bit of fire lately for its desire to hire private investigators to dig up “dirt” on journalists who are critical of Uber. From using users’ ride data for party entertainment, putting the assistance dogs of blind passengers in the trunk, adding a surcharge to reduce the number of dodgy drivers, or even booking rides with competitors and then cancelling, or using the ride to try and convince the driver to change teams, it’s pretty clear that Uber is a textbook example of how companies are inherently sociopathic.
However, most of those examples are internal stupidities that happened to be made public. It’s a very rare company that doesn’t do all sorts of shady things, on the assumption that the world will never find out about them. Uber goes quite a bit further, though, and is so out-of-touch with the world that it blogs about analysing people’s sexual activity for amusement.
You’ll note that if you follow the above link, it sends you to the Wayback Machine, and not Uber’s own site. That’s because the original page has recently turned into a 404. Why? Probably because someone at Uber realised that bragging about how Uber employees can amuse themselves by perving on your one night stands might not be a great idea. That still leaves open the question of what sort of corporate culture makes anyone think that inspecting user data for amusement would be a good thing, let alone publicising it. It’s horrific.
Thankfully, despite Uber’s fairly transparent attempt at whitewashing (“clearwashing”?), the good ol’ Wayback Machine helps us to remember what really went on. It would be amusing if Uber tried to pressure the Internet Archive to remove their copies of this blog post (don’t bother, Uber; I’ve got a “Save As” button and I’m not afraid to use it).
In any event, I’ve never used Uber (not that I’ve got one-night stands to analyse, anyway), and I’ll certainly not be patronising them in the future. If you’re not keen on companies amusing themselves with your private data, I suggest you might consider doing the same.
Unless you’ve been living under a firewalled rock, you know that IPv6 is coming. There’s also a good chance that you’ve heard that IPv6 doesn’t have NAT. Or, if you pay close attention to the minutiae of IPv6 development, you’ve heard that IPv6 does have NAT, but you don’t have to (and shouldn’t) use it.
So let’s say we’ll skip NAT for IPv6. Fair enough. However, let’s say you have this use case:
A bunch of containers that need Internet access…
That are running in a VM…
On your laptop…
Behind your home router!
For IPv4, you’d just layer on the NAT, right? While SIP and IPsec might have kittens trying to work through three layers of NAT, for most things it’ll Just Work.
In the Grand Future of IPv6, without NAT, how the hell do you make that happen? The answer is “Prefix Delegation”, which allows routers to “delegate” management of a chunk of address space to downstream routers, and allow those downstream routers to, in turn, delegate pieces of that chunk to downstream routers.
In the case of our not-so-hypothetical containers-in-VM-on-laptop-at-home scenario, it would look like this:
My “border router” (a DNS-323 running Debian) asks my ISP for a delegated prefix, using DHCPv6. The ISP delegates a /56. One /64 out of that is allocated to the network directly attached to the internal interface, and the rest goes into “the pool” as /60 blocks (so I’ve got 15 of them to delegate, if required).
My laptop gets an address on the LAN between itself and the DNS-323 via stateless auto-addressing (“SLAAC”). It also uses DHCPv6 to request one of the /60 blocks from the DNS-323. The laptop uses one /64 from that block as the address space for the “virtual LAN” (actually a Linux bridge) that connects the laptop to all my VMs, and puts the other 15 /64 blocks into a pool for delegation.
The VM that will be running the set of containers under test gets an address on the “all VMs virtual LAN” via SLAAC, and then requests a delegated /64 to use for the “all containers virtual LAN” (another bridge, this one running on the VM itself) to which the containers will each connect.
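For the bit-counting inclined, the arithmetic behind those numbers works out as follows:

# a /56 from the ISP = 2^(60-56) = 16 /60 blocks (one holds the local /64, leaving 15 to delegate)
# each /60           = 2^(64-60) = 16 /64 networks (one for the local bridge, 15 for further delegation)
# a /48, by contrast = 2^(64-48) = 65,536 /64 networks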
Now, almost all of this Just Works. The current releases of ISC DHCP support prefix delegation just fine, and a bit of shell script plumbing between the client and server seals the deal – the client needs to rewrite the server’s config file to tell it the netblock from which it can delegate.
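For the curious, the moving parts look something like this: a prefix6 pool on the delegating side, and a dhclient invocation on the requesting side. The addresses are documentation-prefix placeholders, and the exact pool boundaries are illustrative:

# dhcpd.conf on the delegating router: hand out /60s from this range
subnet6 2001:db8:100::/56 {
    prefix6 2001:db8:100:10:: 2001:db8:100:f0:: /60;
}

# on the requesting machine: ask for a delegated prefix via eth0
$ dhclient -6 -P eth0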
Except for one teensy, tiny problem – routing. When the DHCP server delegates a netblock to a particular machine, the routing table needs to get updated so that packets going to that netblock actually get sent to the machine the netblock was delegated to. Without that, traffic destined for the containers (or the VM) won’t actually make it to its destination, and a one-way Internet connection isn’t a whole lot of use.
I cannot understand why this problem hasn’t been tripped over before. It’s absolutely fundamental to the correct operation of the delegation system. Some people advocate running a dynamic routing protocol, but that’s a sledgehammer to crack a nut if ever I saw one.
Actually, I know this problem has been tripped over before, by OpenWrt. Their solution, however, was to use a PHP script to scan logfiles and add routes. Suffice it to say, that wasn’t an option I was keen on exploring.
Instead, I decided to patch ISC DHCP so that the server can run an external script to add the necessary routes, and perhaps modify firewall rules – and also to reverse the process when the delegation is released (or expired). If anyone else wants to play around with it, I’ve put it up on GitHub. I don’t make any promises that it’s the right way to do it, necessarily, but it works, and the example script I’ve added shows how it can be used to good effect. By the way, if anyone knows how pull requests work over at ISC, drop me a line. From the look of their website, they don’t appear to accept (or at least encourage) external contributions.
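To give the flavour of the hook mechanism, a route-twiddling script needs only a few lines. This sketch is hypothetical – the argument convention here is mine, not necessarily the patch’s actual interface:

#!/bin/sh
# called when a prefix delegation is added or released
# $1 = add|del, $2 = delegated prefix, $3 = requesting router's address
case "$1" in
    add) ip -6 route add "$2" via "$3" ;;
    del) ip -6 route del "$2" via "$3" ;;
esac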
So, that’s one small patch for DHCP, one giant leap for my home network.
The standard recommendation is for ISPs to delegate each end-user customer a /48 (that’s 65,536 /64 networks); my ISP is being a little conservative in “only” giving me 256 /64s. It works fine for my purposes, but if you’re an ISP getting set for deploying IPv6, make life easy on your customers and give them a /48.
If you’re someone who doesn’t like Debian’s policy of automatically starting services on install (or its heinous cousin, the ENABLE variable in /etc/default/<service>), then running an init system other than systemd should work out nicely.
For some reason, I seem to end up writing software for very esoteric use-cases. Today, though, I think I’ve outdone myself: I sat down and wrote a Ruby library to get and set process resource limits – those things that nobody ever thinks about except when they run out of file descriptors.
I didn’t even have a direct need for it. Recently I was grovelling through the EventMachine codebase, looking at the filehandle limit code, and noticed that the pure-ruby implementation didn’t manipulate filehandle limits. I considered adding it, then realised that there wasn’t a library available to do it. Since I haven’t berked around with FFI for a while, I decided to write rlimit. Now to find the time to write that patch for EventMachine…
Since I doubt there are many people who have a burning need to manipulate rlimits in Ruby, this gem will no doubt sit quiet and undisturbed in the dark, dusty corners of rubygems.org. However, for the three people on earth who find this useful: you’re welcome.
If you’ve noticed your Chrome/Chromium on Linux having problems since you upgraded to somewhere around version 35/36, you’re not alone. Thankfully, it’s relatively easy to work around. It will hit people who keep their browser open for a long time, or who have lots of tabs (or, if you’re like me, do both).
To tell if you’re suffering from this particular problem, crack open your
~/.xsession-errors file (or wherever your system logs stdout/stderr from
programs running under X), and look for lines that look like this:
[22161:22185:0830/124533:ERROR:shared_memory_posix.cc(231)] Creating shared memory in /dev/shm/.org.chromium.Chromium.gFTQSy failed: Too many open files
[22161:22185:0830/124601:ERROR:host_shared_bitmap_manager.cc(122)] Cannot create shared memory buffer
If you see those errors, congratulations! The rest of this blog post will be of use to you.
There’s probably a myriad of bugs open about this problem, but the one I found was #367037: Shared memory-related tab crash. It turns out there’s a file handle leak in the chromium codebase somewhere, relating to shared memory handling. There’s no fix available, but the workaround is quite simple: increase the number of files that processes are allowed to have open.
System-wide, you can do this by creating a file
/etc/security/limits.d/local-nofile.conf, containing this line:
* - nofile 65535
You could also edit /etc/security/limits.conf to contain the same line, if you were so inclined. Note that this will only take effect the next time you log in, or perhaps even only when you restart X (or, at worst, your entire machine).
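A fresh login shell will show you whether the new limit has taken effect:

$ ulimit -n
65535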
This doesn’t help you if you’ve got Chromium already open and you’d like to stop it from crashing Right Now (perhaps restarting your machine would be a terrible hardship, causing you to lose your hard-won uptime record). In that case, you can use a magical tool called prlimit. The prlimit syscall is available if you’re running a Linux 2.6.36 or later kernel, with at least glibc 2.13. You’ll have a prlimit command-line program if you’ve got util-linux 2.21 or later. If not, you can use the example source code in the prlimit(2) manpage, changing RLIMIT_CPU to RLIMIT_NOFILE, and then running the compiled program like this:

prlimit <PID> 65535 65535

The <PID> argument is taken from the first number in the log messages from .xsession-errors – in the example above, it’s 22161.
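Incidentally, if you do have the util-linux prlimit program, the equivalent one-liner (using the PID from the log lines above) is:

prlimit --pid 22161 --nofile=65535:65535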
And now, you can go back to using your tabs as ersatz bookmarks, like I do.
$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]
lein downloads some random JARs from a website over HTTP[1], with, as far as I can tell, no verification that what I’m asking for is what I’m getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that’s no safety net – if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there’s no signature on the .jar itself, either.
It gets better, though. The repo1.maven.org site is served by the fastly.net[2] pseudo-CDN[3], which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.
I’ve seen Debian take a kicking more than once because packages aren’t individually signed, or because packages aren’t served over HTTPS. But at least Debian’s packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.
This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there’s no way for me (or anyone) to trust the signature – the signature was made by a key that’s signed by one other key, which itself has no signatures. If I were an attacker, it wouldn’t be hard for me to replace that key chain with one of my own devising.
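Verifying one of those signatures by hand is easy enough, for what little it’s worth given the trust problem just described – here using the artifact from the transcript above:

$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.3.0/clojure-1.3.0.jar.asc
$ gpg --verify clojure-1.3.0.jar.asc clojure-1.3.0.jar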
Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it’s pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn’t hard to do, and there’s no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.
Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they’re a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a “maven.org distribution root” key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn’t be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn’t be perfect, but don’t make the perfect the enemy of the good. Cost-effective improvements are possible here.
Yes, security is hard. But you don’t get to ignore it just because of that, when you’re creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.
[1] To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that’s just icing on the shit cake. ↩
[2] At one point in the past, my then-employer (a hosting provider) blocked Fastly’s caching servers from their network, because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn’t help me to distinguish between “I’m a professionally-run distributed proxy” and “I’m a pwned box here to hammer your site into the ground”). ↩
[3] Pretty much all of the new breed of so-called CDNs aren’t actually pro-actively distributing content; they’re just proxies. That isn’t a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they’re providing “advanced” capabilities) on a couple of AWS instances and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising. ↩
Gitolite is a popular way to manage collections of git repositories entirely from the command line – it’s configured using files stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it’s quite a valuable system.
In recent versions (3.6), it added support for configuring per-repository
git hooks from within the
gitolite-admin repo itself – something which previously required directly
jiggering around with the repo metadata on the filesystem. It allows you to
“chain” multiple hooks together, too, which is a nice touch. You can, for
example, define hooks for “validate style guidelines”, “submit patch to code
review” and “push to the CI server”. Then for each repo you can pick which
of those hooks to execute. It’s neat.
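To give a feel for it, picking hooks for a repo happens in the gitolite.conf file, something like the following – this is from memory, and the hook names are invented for illustration, so check the gitolite documentation for the exact syntax:

repo widgets
    # scripts previously dropped into the repo-specific hooks directory
    option hook.pre-receive   =   check-style
    option hook.post-receive  =   submit-for-review ping-ci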
There’s one glaring problem, though – you can use these chained, per-repo hooks on every hook except update. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a “virtual ref”; VREFs are stored in a separate configuration directory, use a different syntax in the config file, and if you’re trying to learn what they do, you’ll spend a fair bit of time puzzling over them. The documentation describes VREFs as “a mechanism to add additional constraints to a push”. The association between that and the update hook is one you get to make for yourself.
The interesting thing is that there’s no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality.
The reason I’m posting it here is that I tried to submit it to the primary gitolite developer, and was told “I’m not touching the update hook […] I’m not discussing this […] take it or leave it”. So instead, I’m publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!