Sunday, 9 April 2017

How to Create Hot Backups of MySQL Databases with Percona XtraBackup

Introduction

Percona XtraBackup is the world’s only open-source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases. With Percona XtraBackup, you can achieve the following benefits:
  • Backups that complete quickly and reliably
  • Uninterrupted transaction processing during backups
  • Savings on disk space and network bandwidth
  • Automatic backup verification
  • Higher uptime due to faster restore time

Percona XtraBackup makes MySQL hot backups for all versions of Percona Server, MySQL, and MariaDB. It performs streaming, compressed, and incremental MySQL backups.



A very common challenge encountered when working with active database systems is performing hot backups: creating backups without stopping the database service or making it read-only. Simply copying the data files of an active database will often result in a copy that is internally inconsistent, i.e. it will not be usable or it will be missing transactions that occurred during the copy. On the other hand, stopping the database for scheduled backups makes database-dependent portions of your application unavailable. Percona XtraBackup is an open source utility that circumvents this issue by creating consistent full or incremental backups of running MySQL, MariaDB, and Percona Server databases, also known as hot backups.

How Percona XtraBackup Works

Percona XtraBackup is based on InnoDB's crash-recovery functionality. It copies your InnoDB data files, which results in data that is internally inconsistent; it then performs crash recovery on the files to make them a consistent, usable database again.

This works because InnoDB maintains a redo log, also called the transaction log. This contains a record of every change to InnoDB data. When InnoDB starts, it inspects the data files and the transaction log, and performs two steps. It applies committed transaction log entries to the data files, and it performs an undo operation on any transactions that modified data but did not commit.

Percona XtraBackup works by remembering the log sequence number (LSN) when it starts, and then copying away the data files. It takes some time to do this, so if the files are changing, then they reflect the state of the database at different points in time. At the same time, Percona XtraBackup runs a background process that watches the transaction log files, and copies changes from it. Percona XtraBackup needs to do this continually because the transaction logs are written in a round-robin fashion, and can be reused after a while. Percona XtraBackup needs the transaction log records for every change to the data files since it began execution.

When backup locks are supported by the server, xtrabackup will first copy the InnoDB data, then run LOCK TABLES FOR BACKUP and copy the MyISAM tables and .frm files.

It will back up .frm, .MRG, .MYD, .MYI, .TRG, .TRN, .ARM, .ARZ, .CSM, .CSV, .par, and .opt files.

After that, xtrabackup will use LOCK BINLOG FOR BACKUP to block all operations that might change either the binary log position or Exec_Master_Log_Pos or Exec_Gtid_Set (i.e. master binary log coordinates corresponding to the current SQL thread state on a replication slave) as reported by SHOW MASTER/SLAVE STATUS. xtrabackup will then finish copying the REDO log files and fetch the binary log coordinates. After this is completed, xtrabackup will unlock the binary log and tables. Finally, the binary log position will be printed to STDERR and xtrabackup will exit, returning 0 if all went OK.

Note that the STDERR of xtrabackup is not written to any file. You will have to redirect it to a file yourself, e.g., xtrabackup OPTIONS 2> backupout.log.
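As an illustration, the backup command used later in this tutorial could have its progress output captured like this (the log path is an assumption, not from the original):

```shell
# Hedged example: xtrabackup writes its progress to STDERR, so redirect it
# with 2> to keep a log of the backup run (log path is illustrative).
xtrabackup --user=backup --password=XXXXXXXX --backup \
  --target-dir=/data/backups/ 2> /var/log/xtrabackup_backup.log
```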

During the prepare phase, Percona XtraBackup performs crash recovery against the copied data files, using the copied transaction log file. After this is done, the database is ready to restore and use.

Restoring a backup

To restore a backup with xtrabackup you can use the xtrabackup --copy-back or xtrabackup --move-back options.

It will copy the MyISAM tables, indexes, etc. (.frm, .MRG, .MYD, .MYI, .TRG, .TRN, .ARM, .ARZ, .CSM, .CSV, .par, and .opt files) first, InnoDB tables and indexes next, and the log files last. It will preserve the files' attributes when copying them, but you may have to change the files' ownership to mysql before starting the database server, as they will be owned by the user who created the backup.
Alternatively, the xtrabackup --move-back option may be used to restore a backup. This option is similar to xtrabackup --copy-back, with the only difference that instead of copying files it moves them to their target locations. As this option removes backup files, it must be used with caution. It is useful in cases when there is not enough free disk space to hold both the data files and their backup copies.



This tutorial will show you how to perform a full hot backup, an incremental backup, and a restore of your MySQL or MariaDB databases using Percona XtraBackup on Ubuntu 14.04.

Prerequisites
To follow this tutorial, you must have the following:
  • Superuser privileges on an Ubuntu 14.04 system
  • A running MySQL database
  • Access to the admin user (root) of your database
  • Whether xtrabackup or innobackupex is used, there are two actors involved: the user invoking the program - a system user - and the user performing actions in the database server - a database user. Note that these are different users in different places, even though they may have the same username.
  • To perform a hot backup of your database, your database system must be using the InnoDB storage engine. This is because XtraBackup relies on the transaction log that InnoDB maintains.

To show a list of all the tables in a database and their engines, use this SQL query:

$ mysql -uXXX -p XXX
> SELECT TABLE_NAME, ENGINE FROM   information_schema.TABLES WHERE  TABLE_SCHEMA = 'dbname';

The xtrabackup binary reads the [mysqld] and [xtrabackup] sections from any configuration files, in that order. That is so that it can read its options from your existing MySQL installation, such as the datadir or some of the InnoDB options. If you want to override these, just specify them in the [xtrabackup] section; because it is read later, it will take precedence. You can also simply specify options on the command line. Normally, the only thing you might find convenient to place in the [xtrabackup] section of your my.cnf is the target_dir option, for example:

[xtrabackup]
target_dir = /data/backups/mysql/

Install Percona XtraBackup

Debian and Ubuntu
Debian:
  • 7.0 (wheezy)
  • 8.0 (jessie)


Ubuntu:
  • 12.04 LTS (precise)
  • 14.04 LTS (trusty)
  • 16.04 LTS (xenial)
  • 16.10 (yakkety)

 Method 1:

Add the Percona repository key with this command:

sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A

Then add the Percona repository to your apt sources:

sudo sh -c "echo 'deb http://repo.percona.com/apt trusty main' > /etc/apt/sources.list.d/percona.list"

sudo sh -c "echo 'deb-src http://repo.percona.com/apt trusty main' >> /etc/apt/sources.list.d/percona.list"

Run this command to update your apt sources:

sudo apt-get update

OR

Method 2:

Fetch the repository packages from Percona web:

$ wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb

Install the downloaded package with dpkg. To do that, run the following commands as root or with sudo:

$ sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb

Once you install this package the Percona repositories should be added. You can check the repository setup in the /etc/apt/sources.list.d/percona-release.list file.

Remember to update the local cache:

$ sudo apt-get update

Finally, you can run this command to install XtraBackup:

$ sudo apt-get install percona-xtrabackup

For version 2.4:

$ sudo apt-get install percona-xtrabackup-24

Percona apt Testing repository

Percona offers pre-release builds from the testing repository. To enable it, just add the word testing at the end of the Percona repository definition in your repository file (default /etc/apt/sources.list.d/percona-release.list). It should look like this (in this example VERSION is the name of your distribution):

deb http://repo.percona.com/apt VERSION main testing
deb-src http://repo.percona.com/apt VERSION main testing

Uninstalling Percona XtraBackup

$ sudo apt-get remove percona-xtrabackup-24

Installing Percona XtraBackup on Red Hat Enterprise Linux and CentOS
  • CentOS 5 and RHEL 5
  • CentOS 6 and RHEL 6
  • CentOS 7 and RHEL 7
  • Amazon Linux AMI (works the same as CentOS 6)

1.      Install the Percona repository

yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm

You should see some output such as the following:

Retrieving http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
Preparing...                ########################################### [100%]
   1:percona-release        ########################################### [100%]

2.      Testing the repository

Make sure packages are now available from the repository, by executing the following command:

yum list | grep percona

3.      Install the packages

You can now install Percona XtraBackup by running:

yum install percona-xtrabackup-24

Uninstalling Percona XtraBackup

yum remove percona-xtrabackup


 First Time Preparations

Before using XtraBackup for the first time, we need to prepare the system user and the MySQL user that XtraBackup will use. This section covers the initial preparation.

Add System user 

sudo gpasswd -a username mysql

While we're at it, let's create the directory that will be used for storing the backups that XtraBackup creates:

sudo mkdir -p /data/backups
sudo chown -R mysql:mysql /data

Add MySQL Database User

XtraBackup requires a MySQL user that it will use when creating backups. Let's create one now.

Enter the MySQL console with this command:

$ mysql -u XXXXX -p XXXX

At the MySQL prompt, create a new MySQL user and assign it a password. In this example, the user is called "backup" and the password is "XXXXX". Change both of these to something secure:

mysql> CREATE USER 'backup'@'localhost' IDENTIFIED BY 'XXXXXXX';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'backup'@'localhost';

mysql> FLUSH PRIVILEGES;
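As a quick sanity check (a hedged suggestion, not part of the original steps), you can confirm that the privileges were granted before moving on:

```shell
# Verify the backup user's privileges from the shell; credentials are
# assumed to match the user created above.
mysql -u root -p -e "SHOW GRANTS FOR 'backup'@'localhost';"
```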

Perform Full Hot Backup

This section covers the steps that are necessary to create a full hot backup of a MySQL database using XtraBackup. After ensuring that the database file permissions are correct, we will use XtraBackup to create a backup, then prepare it.

Update Datadir Permissions

sudo chown -R mysql: /var/lib/mysql

sudo find /var/lib/mysql -type d -exec chmod 770 "{}" \;

These commands ensure that all of the directories in the datadir are accessible to the mysql group, and should be run prior to each backup.

Creating a backup

To create a backup, run xtrabackup with the xtrabackup --backup option.
If the target directory does not exist, xtrabackup creates it. If the directory exists and is empty, xtrabackup will succeed. xtrabackup will not overwrite existing files; it will fail with operating system error 17, file exists.

To start the backup process run:

$ xtrabackup --user=backup --password=XXXXXXXX  --backup --target-dir=/data/backups/


Using the innobackupex Perl Script:

innobackupex is a script written in Perl that wraps xtrabackup and performs the tasks where the performance and efficiency of a C program aren't needed.

$ innobackupex --user=backup  --password=XXXXXX --no-timestamp /data/backups/

You can provide a different configuration file to innobackupex with this option. The only limitation is that it has to be the first option passed:

$ innobackupex --defaults-file=/tmp/other-my.cnf --user=backup --password=XXXXX /path/to/BACKUP-DIR/

Alternatively, you may omit --no-timestamp to have XtraBackup create a backup directory named after the current timestamp.
The commands above store the backup in /data/backups/. If you specify a relative path, the target directory will be relative to the current directory.

160906 10:19:17 Finished backing up non-InnoDB tables and files
160906 10:19:17 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
xtrabackup: The latest check point (for incremental): '62988944'
xtrabackup: Stopping log copying thread.
.160906 10:19:18 >> log scanned up to (137343534)
160906 10:19:18 Executing UNLOCK TABLES
160906 10:19:18 All tables unlocked
160906 10:19:18 Backup created in directory '/data/backups/'
160906 10:19:18 [00] Writing backup-my.cnf
160906 10:19:18 [00]        ...done
160906 10:19:18 [00] Writing xtrabackup_info
160906 10:19:18 [00]        ...done
xtrabackup: Transaction log of lsn (26970807) to (137343534) was copied.
160906 10:19:18 completed OK!

The last thing you should see is something like the following, where the value of the <LSN> will be a number that depends on your system:

xtrabackup: Transaction log of lsn (<LSN>) to (<LSN>) was copied.

After the backup is finished, the target directory will contain files such as the following, assuming you have a single InnoDB table test.tbl1 and you are using MySQL’s innodb_file_per_table option:

$ ls -lh /data/backups/
total 182M
drwx------  7 root root 4.0K Sep  6 10:19 .
drwxrwxrwt 11 root root 4.0K Sep  6 11:05 ..
-rw-r-----  1 root root  387 Sep  6 10:19 backup-my.cnf
-rw-r-----  1 root root  76M Sep  6 10:19 ibdata1
drwx------  2 root root 4.0K Sep  6 10:19 mysql
drwx------  2 root root 4.0K Sep  6 10:19 performance_schema
drwx------  2 root root 4.0K Sep  6 10:19 sbtest
drwx------  2 root root 4.0K Sep  6 10:19 test
drwx------  2 root root 4.0K Sep  6 10:19 world2
-rw-r-----  1 root root  116 Sep  6 10:19 xtrabackup_checkpoints
-rw-r-----  1 root root  433 Sep  6 10:19 xtrabackup_info
-rw-r-----  1 root root 106M Sep  6 10:19 xtrabackup_logfile

Preparing a backup

The backup must be prepared before it can be restored. Data files are not point-in-time consistent until they’ve been prepared, because they were copied at different times as the program ran, and they might have been changed while this was happening. If you try to start InnoDB with these data files, it will detect corruption and crash itself to prevent you from running on damaged data. The xtrabackup --prepare step makes the files perfectly consistent at a single instant in time, so you can run InnoDB on them.

During the prepare operation, xtrabackup boots up a kind of modified InnoDB that’s embedded inside it (the libraries it was linked against). The modifications are necessary to disable InnoDB’s standard safety checks, such as complaining that the log file isn’t the right size, which aren’t appropriate for working with backups. 

Note

You can prepare a backup created with older Percona XtraBackup version with a newer one, but not vice versa. Preparing a backup on an unsupported server version should be done with the latest Percona XtraBackup release which supports that server version. 
The prepare step uses this “embedded InnoDB” to perform crash recovery on the copied data files, using the copied log file.

$ xtrabackup --user=backup --password=XXXXXXXX --prepare --target-dir=/data/backups/

For innobackupex:

$ innobackupex --user=backup --password=XXXXXXXX --apply-log \
--use-memory=4G /data/backups/

The prepare process can be sped up by letting it use more memory. The --use-memory option defaults to 100MB; how far you can raise it depends on the free RAM available on your system.
When this finishes, you should see an InnoDB shutdown with a message such as the following, where again the value of LSN will depend on your system:

InnoDB: Shutdown completed; log sequence number 137345046
160906 11:21:01 completed OK!

Subsequent prepare runs will not change the already-prepared data files; you’ll see that the output says:

xtrabackup: This target seems to be already prepared.
xtrabackup: notice: xtrabackup_logfile was already used to '--prepare'.

Note
If you intend the backup to be the basis for further incremental backups, you should use the xtrabackup --apply-log-only option when preparing the backup, or you will not be able to apply incremental backups to it.

Perform Backup Restoration

Stop the MySQL service with this command:

sudo service mysql stop

Then move or delete the contents of the datadir (/var/lib/mysql).

$ sudo mv /var/lib/mysql /var/lib/mysql_old
$ sudo mkdir  /var/lib/mysql
$ sudo chown -R mysql: /var/lib/mysql

For convenience, the xtrabackup binary has the xtrabackup --copy-back option, which will copy the backup to the server’s datadir:

$ xtrabackup --copy-back --target-dir=/data/backups/

OR

$ innobackupex --copy-back /data/backups/

If you don’t want to keep your backup, you can use the xtrabackup --move-back option, which will move the backed-up data to the datadir.
If you don’t want to use either of the above options, you can instead use rsync or cp to restore the files.

Note

The datadir must be empty before restoring the backup. Also it’s important to note that MySQL server needs to be shut down before restore is performed. You can’t restore to a datadir of a running mysqld instance (except when importing a partial backup).
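The note above can be enforced with a small guard before the restore; this is a hedged sketch using the paths from the earlier steps:

```shell
# Abort the restore if the datadir still contains files (paths assumed
# from the earlier steps in this tutorial).
datadir=/var/lib/mysql
if [ -n "$(ls -A "$datadir" 2>/dev/null)" ]; then
    echo "datadir $datadir is not empty; aborting restore" >&2
    exit 1
fi
xtrabackup --copy-back --target-dir=/data/backups/
```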

An example rsync command that can be used to restore the backup looks like this:

$ rsync -avrP /data/backup/ /var/lib/mysql/

Now we're ready to start MySQL:

sudo service mysql start

Incremental Backup

Both the xtrabackup and innobackupex tools support incremental backups, which means that they can copy only the data that has changed since the last backup.

You can perform many incremental backups between each full backup, so you can set up a backup process such as a full backup once a week and an incremental backup every day, or full backups every day and incremental backups every hour.

Incremental backups work because each InnoDB page contains a log sequence number, or LSN. The LSN is the system version number for the entire database. Each page’s LSN shows how recently it was changed.
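As a hedged illustration of such a schedule, a crontab fragment for "full backup weekly, incremental daily" might look like this (credentials and directories are illustrative assumptions):

```shell
# Hedged crontab sketch: weekly full backup (Sunday), daily incrementals
# against it (Mon-Sat). Credentials and directories are illustrative.
# Note: in a crontab, % must be escaped as \%.
0 1 * * 0 xtrabackup --user=backup --password=XXXXXXXX --backup --target-dir=/data/backups/base
0 1 * * 1-6 xtrabackup --user=backup --password=XXXXXXXX --backup --target-dir=/data/backups/inc-$(date +\%u) --incremental-basedir=/data/backups/base
```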

An incremental backup copies each page whose LSN is newer than the previous incremental or full backup’s LSN. There are two algorithms in use to find the set of such pages to be copied. The first one, available with all the server types and versions, is to check the page LSN directly by reading all the data pages. The second one, available with Percona Server, is to enable the changed page tracking feature on the server, which will note the pages as they are being changed. This information is then written out in a compact, separate, so-called bitmap file. The xtrabackup binary will use that file to read only the data pages it needs for the incremental backup, potentially saving many read requests. The latter algorithm is enabled by default if the xtrabackup binary finds the bitmap file. It is possible to specify xtrabackup --incremental-force-scan to read all the pages even if the bitmap data is available.

Incremental backups simply read the pages and compare their LSN to the last backup’s LSN. You still need a full backup to recover the incremental changes, however; without a full backup to act as a base, the incremental backups are useless.

Creating an Incremental Backup

To make an incremental backup, begin with a full backup as usual. The xtrabackup binary writes a file called xtrabackup_checkpoints into the backup’s target directory. 

$ xtrabackup --user=backup --password=XXXXXXXX --backup --target-dir=/data/backups/base

If you look at the xtrabackup_checkpoints file, you should see similar content depending on your LSN number:

backup_type = full-backuped
from_lsn = 0
to_lsn = 1626007
last_lsn = 1626007
compact = 0
recover_binlog_info = 1

Using innobackupex:

$ innobackupex /data/backups

Now that you have a full backup, you can make an incremental backup based on it. Use the following command:

$ xtrabackup  --user=backup --password=XXXXXXXX --backup --target-dir=/data/backups/inc1 \
--incremental-basedir=/data/backups/base

The /data/backups/inc1/ directory should now contain delta files, such as ibdata1.delta and test/table1.ibd.delta. 
If you examine the xtrabackup_checkpoints file in this directory, you should see similar content to the following:

backup_type = incremental
from_lsn = 1626007
to_lsn = 4124244
last_lsn = 4124244
compact = 0
recover_binlog_info = 1

Using the innobackupex script:

$ innobackupex --incremental /data/backups --incremental-basedir=BASEDIR

It’s now possible to use this directory as the base for yet another incremental backup:

$ xtrabackup --user=backup --password=XXXXXXXX --backup --target-dir=/data/backups/inc2 \
--incremental-basedir=/data/backups/inc1

This folder also contains the xtrabackup_checkpoints:

backup_type = incremental
from_lsn = 4124244
to_lsn = 6938371
last_lsn = 7110572
compact = 0
recover_binlog_info = 1

Using the innobackupex script:

$ innobackupex --incremental /data/backups --incremental-basedir=INCREMENTAL-DIR-1

Note
In this case you can see that there is a difference between to_lsn (last checkpoint LSN) and last_lsn (last copied LSN); this means that there was some traffic on the server during the backup process.

Preparing the Incremental Backups

In full backups, two types of operations are performed to make the database consistent: committed transactions are replayed from the log file against the data files, and uncommitted transactions are rolled back. You must skip the rollback of uncommitted transactions when preparing an incremental backup, because transactions that were uncommitted at the time of your backup may still be in progress, and it’s likely that they will be committed in the next incremental backup. You should use the xtrabackup --apply-log-only option to prevent the rollback phase.

Beginning with the full backup you created, you can prepare it, and then apply the incremental differences to it. Recall that you have the following backups:

/data/backups/base
/data/backups/inc1
/data/backups/inc2

To prepare the base backup, you need to run xtrabackup --prepare as usual, but prevent the rollback phase:

$ xtrabackup --user=backup --password=XXXXXXXX --prepare --apply-log-only --target-dir=/data/backups/base

The output should end with some text such as the following:

InnoDB: Shutdown completed; log sequence number 1626007
161011 12:41:04 completed OK!

Using the innobackupex script:

innobackupex --apply-log --redo-only BASE-DIR

To apply the first incremental backup to the full backup, run the following command:

$ xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \
--incremental-dir=/data/backups/inc1

This applies the delta files to the files in /data/backups/base, which rolls them forward in time to the time of the incremental backup. It then applies the redo log as usual to the result. The final data is in /data/backups/base, not in the incremental directory.

Output:

incremental backup from 1626007 is enabled.
xtrabackup: cd to /data/backups/base
xtrabackup: This target seems to be already prepared with --apply-log-only.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(4124244)
...
xtrabackup: page size for /tmp/backups/inc1/ibdata1.delta is 16384 bytes
Applying /tmp/backups/inc1/ibdata1.delta to ./ibdata1...
...
161011 12:45:56 completed OK!

Using the innobackupex script:

innobackupex --apply-log --redo-only BASE-DIR --incremental-dir=INCREMENTAL-DIR-1

If you restore the files from /data/backups/base, you should see the state of the database as of the first incremental backup.

Preparing the second incremental backup is a similar process: apply the deltas to the (modified) base backup, and you will roll its data forward in time to the point of the second incremental backup:

$ xtrabackup --prepare --target-dir=/data/backups/base \
--incremental-dir=/data/backups/inc2

Note

xtrabackup --apply-log-only should be used when merging all incrementals except the last one. That’s why the previous command doesn’t contain the xtrabackup --apply-log-only option. Even if xtrabackup --apply-log-only were used on the last step, the backup would still be consistent, but in that case the server would perform the rollback phase.
Once prepared, incremental backups are the same as full backups and can be restored the same way.

Using the innobackupex script:

innobackupex --apply-log BASE-DIR --incremental-dir=INCREMENTAL-DIR-2

Once you merge the base with all the increments, you can prepare it to roll back the uncommitted transactions:

innobackupex --apply-log BASE-DIR

Restoring Incremental Backup

innobackupex --copy-back BASE-DIR
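Put together, the whole incremental cycle with the xtrabackup binary and the directories used above looks like this sketch:

```shell
# Prepare the base without rolling back uncommitted transactions,
# apply each incremental in order (only the last one without
# --apply-log-only), then restore into an empty datadir.
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base
xtrabackup --prepare --apply-log-only --target-dir=/data/backups/base \
  --incremental-dir=/data/backups/inc1
xtrabackup --prepare --target-dir=/data/backups/base \
  --incremental-dir=/data/backups/inc2
# Stop MySQL and empty /var/lib/mysql before this step:
xtrabackup --copy-back --target-dir=/data/backups/base
```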

Incremental Streaming Backups using xbstream and tar

Incremental streaming backups can be performed with the xbstream streaming option. Currently such backups are packed in the custom xbstream format. This feature still requires a base backup to be taken first.
Taking a base backup:

innobackupex /data/backups

Taking a local backup:

innobackupex --incremental --incremental-lsn=LSN-number --stream=xbstream ./ > incremental.xbstream

Unpacking the backup:

xbstream -x < incremental.xbstream

Taking a local backup and streaming it to the remote server and unpacking it:

innobackupex --incremental --incremental-lsn=LSN-number --stream=xbstream ./ | \
ssh user@hostname "xbstream -x -C /backup-dir/"

 Compressed Backup

In order to make a compressed backup you’ll need to use the xtrabackup --compress option:

$ xtrabackup --backup --compress --target-dir=/data/compressed/

If you want to speed up the compression you can use parallel compression, which can be enabled with the xtrabackup --compress-threads option. The following example will use four threads for compression:

$ xtrabackup --backup --compress --compress-threads=4 \
--target-dir=/data/compressed/

The output should look like this:

...
170223 13:00:38 [01] Compressing ./test/sbtest1.frm to /tmp/compressed/test/sbtest1.frm.qp
170223 13:00:38 [01]        ...done
170223 13:00:38 [01] Compressing ./test/sbtest2.frm to /tmp/compressed/test/sbtest2.frm.qp
170223 13:00:38 [01]        ...done
...
170223 13:00:39 [00] Compressing xtrabackup_info
170223 13:00:39 [00]        ...done
xtrabackup: Transaction log of lsn (9291934) to (9291934) was copied.
170223 13:00:39 completed OK!

Preparing the backup

Note
Before proceeding you’ll need to make sure that qpress has been installed. It’s available from the Percona software repositories.

$ xtrabackup --decompress --target-dir=/data/compressed/

Percona XtraBackup doesn’t automatically remove the compressed files. In order to clean up the backup directory you should use the xtrabackup --remove-original option.

$ xtrabackup --prepare --target-dir=/data/compressed/

xtrabackup has the xtrabackup --copy-back option, which performs the restoration of a backup to the server’s datadir:

$ xtrabackup --copy-back --target-dir=/data/backups/

Partial Backups

Percona XtraBackup features partial backups, which means that you may backup only some specific tables or databases. The tables you back up must be in separate tablespaces, as a result of being created or altered after you enabled the innodb_file_per_table option on the server.

Creating Partial Backups

$ innobackupex --include='^mydatabase[.]mytable' /path/to/backup

Note that this option is passed to xtrabackup --tables and is matched against each table of each database; the directories of each database will be created even if they are empty.

$ echo "mydatabase.mytable" > /tmp/tables.txt
$ innobackupex --tables-file=/tmp/tables.txt /path/to/backup

The command above will create a timestamped directory with the usual files that innobackupex creates, but only containing the data-files related to the tables specified in the file.

$ innobackupex --databases="mydatabase.mytable mysql" /path/to/backup

The command above will create a timestamped directory with the usual files that innobackupex creates, but only containing the data files related to mytable in the mydatabase directory and the mysql directory with the entire mysql database.

Preparing Partial Backups

$ innobackupex --apply-log --export /path/to/partial/backup

Note that you can use the --export option with --apply-log to an already-prepared backup in order to create the .exp files.

Restoring Partial Backups

Restoring is done by copying back the prepared backup to a “clean” datadir (in that case, make sure to include the mysql database). The system database can be created with:

$ sudo mysql_install_db --user=mysql

Restoring Individual Tables

In server versions prior to 5.6, it is not possible to copy tables between servers by copying the files, even with innodb_file_per_table. However, with Percona XtraBackup, you can export individual tables from any InnoDB database and import them into Percona Server with XtraDB or into MySQL 5.6.

Exporting tables

Exporting is done in the preparation stage, not at the moment of creating the backup. Once a full backup is created, prepare it with the --export option:

$ innobackupex --apply-log --export /path/to/backup

This will create a file with the .exp extension for each InnoDB table that has its own tablespace.
Now you should see the exported files in the target directory:

$ find /data/backups/mysql/ -name export_test.*
/data/backups/mysql/test/export_test.exp
/data/backups/mysql/test/export_test.ibd
/data/backups/mysql/test/export_test.cfg

Importing tables

To import a table to another server, first create a new table with the same structure as the one that will be imported on that server:

OTHERSERVER|mysql> CREATE TABLE mytable (...) ENGINE=InnoDB;

then discard its tablespace:

OTHERSERVER|mysql> ALTER TABLE mydatabase.mytable DISCARD TABLESPACE;

After this, copy the mytable.ibd and mytable.exp (or mytable.cfg if importing to MySQL 5.6) files to the database’s directory, and import its tablespace:

OTHERSERVER|mysql> ALTER TABLE mydatabase.mytable IMPORT TABLESPACE;

Error handling example:

1.      Execute ALTER TABLE test.export_test DISCARD TABLESPACE;
        If you see the following message, then you must enable innodb_file_per_table and create the table again:
        ERROR 1030 (HY000): Got error -1 from storage engine
2.      Copy the exported files to the test/ subdirectory of the destination server’s data directory.
3.      Execute ALTER TABLE test.export_test IMPORT TABLESPACE;

The table should now be imported, and you should be able to SELECT from it and see the imported data.

Note


Persistent statistics for the imported tablespace will be empty until you run ANALYZE TABLE on the imported table. They will be empty because they are stored in the system tables mysql.innodb_table_stats and mysql.innodb_index_stats, and these aren’t updated by the server during the import.
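For example, the statistics can be rebuilt like this (a hedged suggestion reusing the table name from above):

```shell
# Rebuild persistent statistics for the imported table; credentials are
# illustrative.
mysql -u root -p -e "ANALYZE TABLE mydatabase.mytable;"
```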

Encrypted Backup

Either the xtrabackup --encrypt-key option or the xtrabackup --encrypt-key-file option can be used to specify the encryption key. An encryption key can be generated with a command like:

$ openssl rand -base64 24

Example output of that command should look like this:

GCHFLrDFVx6UAsRb88uLVbAVWbK+Yzfs

This value can then be used as the encryption key.
An example of the xtrabackup command using xtrabackup --encrypt-key looks like this:

$ xtrabackup --backup --target-dir=/data/backups --encrypt=AES256 \
--encrypt-key="GCHFLrDFVx6UAsRb88uLVbAVWbK+Yzfs"

An example of the xtrabackup command using xtrabackup --encrypt-key-file looks like this:

$ xtrabackup --backup --target-dir=/data/backups/ --encrypt=AES256 \
--encrypt-key-file=/data/backups/keyfile
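When using --encrypt-key-file, the keyfile must contain only the key itself. A minimal sketch of creating one (using /tmp/keyfile for illustration; store the real keyfile somewhere secure):

```shell
# Generate a 24-byte random key, base64-encoded (32 characters).
key=$(openssl rand -base64 24)
# Write it with printf rather than echo: a trailing newline written into
# the keyfile would be treated as part of the key.
printf '%s' "$key" > /tmp/keyfile
# Restrict permissions so only the owner can read the key.
chmod 600 /tmp/keyfile
```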

Using the innobackupex script:

$ innobackupex --encrypt=AES256 --encrypt-key="GCHFLrDFVx6UAsRb88uLVbAVWbK+Yzfs" /data/backups
$ innobackupex --encrypt=AES256 --encrypt-key-file=/data/backups/keyfile /data/backups

Decrypting Encrypted Backups

$ xtrabackup --decrypt=AES256 --encrypt-key="GCHFLrDFVx6UAsRb88uLVbAVWbK+Yzfs" \
--target-dir=/data/backups/

In Percona XtraBackup 2.4.6, the xtrabackup --remove-original option was implemented; you can use it to remove the encrypted files once they have been decrypted. To remove the files as they are decrypted, run:

$ xtrabackup --decrypt=AES256 --encrypt-key="GCHFLrDFVx6UAsRb88uLVbAVWbK+Yzfs" \
--target-dir=/data/backups/ --remove-original

innobackupex method:

$ for i in `find . -iname "*.xbcrypt"`; do \
    xbcrypt -d --encrypt-key-file=/root/secret_key --encrypt-algo=AES256 \
      < $i > $(dirname $i)/$(basename $i .xbcrypt) && rm $i; \
  done
$ innobackupex --decrypt=AES256 --encrypt-key="GCHFLrDFVx6UAsRb88uLVbAVWbK+Yzfs" /data/backups/2015-03-18_08-31-35/

Preparing and Restoring:

$ xtrabackup --prepare --target-dir=/data/backups/
$ xtrabackup --copy-back --target-dir=/data/backups/

With the innobackupex script:

$ innobackupex --apply-log /data/backups/2015-03-18_08-31-35/
$ innobackupex --copy-back /path/to/BACKUP-DIR

Compact Backup

$ innobackupex --compact /data/backups

If you check the xtrabackup_checkpoints file in the backup directory, you should see something like:

backup_type = full-backuped
from_lsn = 0
to_lsn = 2888984349
last_lsn = 2888984349
compact = 1
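The values in xtrabackup_checkpoints can also be read programmatically, for example to verify the backup type before preparing it. A small sketch (the sample contents shown above are recreated under /tmp so the snippet is self-contained; with a real backup, point at the file in the backup directory instead):

```shell
# Recreate the sample xtrabackup_checkpoints shown above.
cat > /tmp/xtrabackup_checkpoints <<'EOF'
backup_type = full-backuped
from_lsn = 0
to_lsn = 2888984349
last_lsn = 2888984349
compact = 1
EOF
# Extract fields by key name, splitting on " = ".
backup_type=$(awk -F' = ' '$1 == "backup_type" {print $2}' /tmp/xtrabackup_checkpoints)
compact=$(awk -F' = ' '$1 == "compact" {print $2}' /tmp/xtrabackup_checkpoints)
echo "type=$backup_type compact=$compact"
```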

$ innobackupex --apply-log --rebuild-indexes /data/backups/2013-02-01_10-29-48
$ innobackupex --copy-back /path/to/BACKUP-DIR

Streaming and Compressing Backups

Streaming mode, supported by Percona XtraBackup, sends the backup to STDOUT in the special tar or xbstream format instead of copying files to the backup directory.
This allows you to use other programs to filter the output of the backup, providing greater flexibility for storing it. For example, compression is achieved by piping the output to a compression utility. One of the benefits of streaming backups through Unix pipes is that the backups can be encrypted on the fly.

$ innobackupex --stream=tar /tmp

Store the complete backup directly to a single file:

$ innobackupex --stream=xbstream /root/backup/ > /root/backup/backup.xbstream

To stream and compress the backup:

$ innobackupex --stream=xbstream --compress /root/backup/ > /root/backup/backup.xbstream

To unpack the backup to the /root/backup/ directory:

$ xbstream -x <  backup.xbstream -C /root/backup/

To send the compressed backup to another host and unpack it:

$ innobackupex --compress --stream=xbstream /root/backup/ | ssh user@otherhost "xbstream -x -C /root/backup/"

Store the complete backup directly to a tar archive:

$ innobackupex --stream=tar /root/backup/ > /root/backup/out.tar

To send the tar archive to another host:

$ innobackupex --stream=tar ./ | ssh user@destination \
"cat - > /data/backups/backup.tar"

Compress with your preferred compression tool:

$ innobackupex --stream=tar ./ | gzip - > backup.tar.gz
$ innobackupex --stream=tar ./ | bzip2 - > backup.tar.bz2
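When unpacking an archive produced with --stream=tar, GNU tar needs the -i (--ignore-zeros) option, because the stream can contain zeroed blocks before the end of the archive. A sketch, demonstrated on a dummy archive under /tmp rather than a real backup:

```shell
# Create a dummy gzipped tar archive standing in for a streamed backup.
mkdir -p /tmp/demo_src /tmp/demo_dst
echo "dummy data" > /tmp/demo_src/ibdata1
tar -czf /tmp/backup.tar.gz -C /tmp/demo_src .
# Extract it; -i tells tar to ignore zeroed blocks within the stream,
# which is required for archives produced by --stream=tar.
tar -xizf /tmp/backup.tar.gz -C /tmp/demo_dst
```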


Point-In-Time recovery

Recovering up to a particular moment in the database's history can be done with innobackupex and the binary logs of the server.
To take the snapshot, we will use innobackupex for a full backup:

$ innobackupex /path/to/backup --no-timestamp
$ innobackupex --apply-log /path/to/backup

Now suppose that time has passed and you want to restore the database to a certain point in the past, keeping in mind that you cannot restore to a point earlier than the snapshot.
To find out the binary logging status of the server, execute the following queries:

mysql> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000001 |       126 |
| mysql-bin.000002 |      1306 |
| mysql-bin.000003 |       126 |
| mysql-bin.000004 |       497 |
+------------------+-----------+

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      497 |              |                  |
+------------------+----------+--------------+------------------+

The first query tells you which files contain the binary log; the second tells you which file is currently being used to record changes, and the current position within it. Those files are usually stored in the datadir (unless another location is specified when the server is started with the --log-bin= option).
To find out the position of the snapshot taken, see the xtrabackup_binlog_info file in the backup directory:

$ cat /path/to/backup/xtrabackup_binlog_info
mysql-bin.000003      57

This tells you which binary log file was in use at the moment of the backup, and the position within it. That position will be the effective one when you restore the backup:

$ innobackupex --copy-back /path/to/backup

The next step is to extract the queries from the binary log with mysqlbinlog, starting from the position of the snapshot, and redirect them to a file:

$ mysqlbinlog /path/to/datadir/mysql-bin.000003 /path/to/datadir/mysql-bin.000004 \
    --start-position=57 > mybinlog.sql
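The binlog file name and start position can also be read from xtrabackup_binlog_info in a script rather than by eye. A small sketch (the file's sample contents from above are recreated under /tmp so it can run standalone; with a real backup, read the file from the backup directory):

```shell
# Recreate the sample xtrabackup_binlog_info shown above.
printf 'mysql-bin.000003\t57\n' > /tmp/xtrabackup_binlog_info
# read splits on whitespace, yielding the binlog file name and position.
read binlog_file start_pos < /tmp/xtrabackup_binlog_info
echo "start replay from $binlog_file at position $start_pos"
```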

Inspect the file to determine which position or date corresponds to the desired point in time. Once determined, pipe the log to the server. Assuming the point is 11-12-25 01:00:00:

$ mysqlbinlog /path/to/datadir/mysql-bin.000003 /path/to/datadir/mysql-bin.000004 \
    --start-position=57 --stop-datetime="11-12-25 01:00:00" | mysql -u root -p

The database will then be rolled forward up to that point in time.
