Many businesses have entrusted their data to MySQL, and for many use cases this is a sound choice. If you need an RDBMS for transaction processing that does not involve complex analytical tasks and does not require full SQL standard compliance, then MySQL may be right for you. In fact, Facebook and YouTube have run on MySQL for years, to name just two.
However, a conundrum commonly afflicts MySQL users once their data reaches massive proportions (the terabyte range): long backups and the database unavailability that comes with them.
Simply copying the data files to a backup location results in internal inconsistency, since data may be added or changed while the copy is in progress. The alternative is to stop the database and then back it up, which renders the database unavailable! Availability is paramount, and taking the database offline is usually unacceptable. And even if, by some stroke of luck, management allows for downtime, the slowness of MySQL's native backup mechanism would eventually make anybody reconsider.
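To see why a naive file copy of a live database goes wrong, here is a toy sketch (deliberately not MySQL internals): a writer keeps two files in sync as an invariant, while a copy runs with a write interleaved between the two file copies. The file names and values are invented for illustration.

```python
# Toy illustration (not MySQL internals): a writer updates two files
# that must always hold the same value, while a naive copy reads them
# at different moments and captures a torn, inconsistent snapshot.
import os
import shutil
import tempfile

def write_pair(dirpath, value):
    # Invariant: a.txt and b.txt must always hold the same value.
    for name in ("a.txt", "b.txt"):
        with open(os.path.join(dirpath, name), "w") as f:
            f.write(str(value))

def naive_copy_interleaved(src, dst):
    # Copy a.txt, let a write sneak in, then copy b.txt -- the kind of
    # interleaving a plain file-copy backup cannot prevent.
    shutil.copy(os.path.join(src, "a.txt"), dst)
    write_pair(src, 2)  # a concurrent "transaction" commits mid-copy
    shutil.copy(os.path.join(src, "b.txt"), dst)

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
write_pair(src, 1)
naive_copy_interleaved(src, dst)

a = open(os.path.join(dst, "a.txt")).read()
b = open(os.path.join(dst, "b.txt")).read()
print(a, b)  # prints "1 2": the copy violates the invariant
```

The live directory is internally consistent at every instant; only the copy is broken, which is exactly why a hot-backup tool must also capture and replay the changes made during the copy.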
Consider an actual scenario from one of our clients, a large retail outfit in Southeast Asia. Their POS transactions feed into a MySQL database that is part of a replication cluster. The database has grown to 2 TB, and backups take 15 hours to complete, if they complete at all. The backup process encroaches on operating hours, forcing them to simply kill it. The problem escalates when the Slave server needs to be rebuilt (often because lost binary logs prevent it from synchronizing with the Master), since resetting replication means restoring a backup of the Master onto the Slave node. What to do?
Enter hot backups and Percona’s Xtrabackup!
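As a taste of what that looks like in practice, here is a minimal sketch of XtraBackup's two-phase workflow: a hot copy taken while MySQL keeps serving traffic, followed by a "prepare" step that replays the InnoDB log captured during the copy so the data files become consistent. The paths and user name are illustrative, not from the scenario above.

```shell
# Illustrative paths/credentials; adjust for your environment.
TARGET=/backups/full-$(date +%F)

# Phase 1: take a hot backup while the server stays online.
xtrabackup --backup --user=backup --target-dir="$TARGET"

# Phase 2: prepare the backup, replaying the redo log recorded
# during the copy so the files are transactionally consistent.
xtrabackup --prepare --target-dir="$TARGET"
```

The prepared directory can then be restored on the Master, or used to rebuild a Slave without taking the Master offline.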