MySQL replication and crash recovery

Posted: Sat, 16 July 2011

The question was recently asked, “How do I perform crash recovery on a MySQL master/slave database cluster? Because replication is asynchronous, a transaction that has been committed on the master may not have left the master before the crash happens.”

The short answer is, you don’t. Asynchronous replication, by its very nature, is prone to these sorts of issues, and MySQL doesn’t make life any easier by using query replay.

Some of the issues aren’t massive: for instance, if a slave crashes, the master will keep replication logs for it until it comes back up (provided the slave returns before the master decides some of those logs are too old and starts deleting them, of course), so if you can live without a slave for a while, you can recover and continue along your merry way.
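As a rough sketch of what that looks like in practice (the statements and the expire_logs_days variable are stock MySQL; the seven-day retention is just an example value), you’d check how long the master hangs on to its binary logs, then resume replication on the recovered slave:

    -- On the master: how long binary logs are kept before being purged.
    -- If a crashed slave stays down longer than this, it can no longer catch up.
    SHOW VARIABLES LIKE 'expire_logs_days';
    SET GLOBAL expire_logs_days = 7;   -- example: keep a week of binlogs

    -- On the recovered slave: see where it was up to, then resume.
    SHOW SLAVE STATUS\G                -- check Master_Log_File / Read_Master_Log_Pos
    START SLAVE;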

Promoting a slave to be a master, on the other hand, is a situation fraught with peril. Here, you’ve just got to hope that your chosen slave is consistent with the master’s state at death (because it’s async, you have no guarantees about that), and that all the other slaves have the same idea about what counts as the current state of the data. If your newly-promoted slave managed to apply an update that another slave didn’t, that slave will be out of sync with reality (until such time as the now-dead master comes back and replays all those queries… hahahaha). To even guarantee that the remaining slaves have a consistent view of the data as compared to the new master, you’ve got to re-replicate everything from the new master, because MySQL’s “where are you up to?” counter is master-specific.
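To make the “master-specific counter” point concrete, here’s a hedged sketch of the promotion dance (the hostname and replication user are made up; the statements themselves are standard MySQL). Note that the CHANGE MASTER TO coordinates refer to the new master’s binlog, which is exactly why each remaining slave really wants a fresh copy of the new master’s data before you point it there:

    -- On each surviving slave: see how far it got through the *old* master's binlog.
    -- These coordinates are only meaningful relative to that (now dead) master.
    SHOW SLAVE STATUS\G      -- compare Relay_Master_Log_File / Exec_Master_Log_Pos

    -- On the slave being promoted: stop applying relay logs, start a fresh binlog.
    STOP SLAVE;
    RESET MASTER;

    -- On every other slave: repoint at the new master. In practice you'd reload
    -- the slave from a dump of the new master first, since these new coordinates
    -- say nothing about whether the slave's existing data actually matches.
    CHANGE MASTER TO
      MASTER_HOST     = 'new-master.example.com',  -- hypothetical host
      MASTER_USER     = 'repl',                    -- hypothetical replication user
      MASTER_PASSWORD = 'secret',
      MASTER_LOG_FILE = 'mysql-bin.000001',
      MASTER_LOG_POS  = 4;
    START SLAVE;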

I’ve heard people who still have to deal with this sort of thing say that the Maatkit tools are handy for various issues of this type (monitoring replication, repairing the data when it gets inconsistent).

However, I prefer to keep right the hell away from MySQL replication altogether, after a year of having to gently coax an insane MySQL “tree” replication setup to do the right thing for any extended period, and watching a group of DBAs at Engine Yard go slowly insane.

My preferred alternatives are:

Never discount the value of the “Jaws” approach to scaling (“we’re gonna need a bigger boat^Wmachine”) – hardware is so ridiculously cheap, relative to the clueful sysadmin time needed to make (and keep) MySQL replication running, that you really need to be at the huge end of town (where other scaling issues are going to bite you in the arse first) before spending serious time on your database becomes a worthwhile investment. Even then, you’ll get more bang for your buck from:

than you ever will from the constant, mind-numbing grind that is keeping MySQL replication going.

  1. The term comes from a time when a colleague near-screamed “This slave cluster keeps fucking me!” at a particularly intransigent set of MySQL slaves. 

