
MySQL hot migration


0. Background

XtraBackup Advantage

  1. Online hot backup: supports hot backups of InnoDB and XtraDB without stopping the database, suitable for high-availability environments.
  2. Incremental backup: supports incremental backups, which can significantly reduce backup time and storage requirements.
  3. Stream compression: can compress the stream during the backup process, reducing the amount of data transmitted and improving transfer efficiency.
  4. Master-slave synchronization: XtraBackup makes it easier to create and maintain master-slave replication and simplifies database scaling.
  5. Low-load backup: the load placed on the source database during backup is relatively small and does not significantly affect normal business.

Backup tool selection

  • xtrabackup: Specialized for backups of InnoDB and XtraDB tables.
  • innobackupex: A script wrapper that can handle both InnoDB and MyISAM tables, but it takes table locks when processing MyISAM.

Other Backup Strategies

  • Sharding (splitting databases and tables): for very large data volumes, consider a sharding strategy to make management and backups more efficient.

  • LVM snapshots: use an LVM snapshot to obtain a near-instantaneous copy of the database, which shortens backup time and reduces load on the source instance; a minimal sketch follows below.
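
For the LVM route, a minimal sketch, assuming the MySQL datadir sits on a hypothetical logical volume /dev/vg_data/lv_mysql with free extents in the volume group (for strict consistency, hold FLUSH TABLES WITH READ LOCK while the snapshot is created):

# Create a copy-on-write snapshot (near-instant regardless of data size)
lvcreate --size 10G --snapshot --name mysql_snap /dev/vg_data/lv_mysql

# Mount it read-only and copy the frozen data directory away
mkdir -p /mnt/mysql_snap
mount -o ro /dev/vg_data/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ /backup/mysql_$(date +%F)/

# Remove the snapshot so the copy-on-write overhead stops
umount /mnt/mysql_snap
lvremove -f /dev/vg_data/mysql_snap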

I. Backup method xtrabackup

1. Installation

Software Downloads - Percona

MySQL version: 8.0.24

XtraBackup version: 8.0.25

Version compatibility

Not verified for now.

Online Installation

yum install -y /yum/
yum install -y percona-xtrabackup-80
yum list | grep percona-xtrabackup

Offline package download address


Update the yum sources on the server

sudo cp /etc// /etc//
sudo wget -O /etc// /repo/
sudo yum clean all
sudo yum makecache

Dependency Package Installation

sudo yum install epel-release
sudo yum install -y zstd
zstd --version

Offline package installation (yum localinstall still needs network access to resolve dependencies)

yum -y localinstall percona-xtrabackup-80-8.0.25-17.1.el7.x86_64.rpm

2. Backup

2.1 Description of tools

Introduction to the Xtrabackup tool

  • xtrabackup: a tool for hot backups of InnoDB and XtraDB table data; it cannot back up other table types or table structures.
  • innobackupex: a Perl script that wraps xtrabackup and adds the ability to back up MyISAM tables.

Common Options

  • --host: Specifies the host.

  • --user: Specify the user name.

  • --password: Specify the password.

  • --port: Specifies the port.

  • --databases: Specifies the database.

  • --incremental: Create an incremental backup.

  • --incremental-basedir: Specifies the directory containing the full backup.

  • --incremental-dir: Specifies the directory containing the incremental backup.

  • --apply-log: Performs a preprocessing operation on the backup.
    
    • Clarification: right after the backup completes, the data cannot yet be used for recovery, because the backup may contain transactions that were not yet committed, or committed transactions that were not yet synchronized to the data files. The main role of --apply-log is to roll back uncommitted transactions and replay committed ones, bringing the data files to a consistent state.
  • --redo-only: Do not roll back uncommitted transactions.

  • --copy-back: Copy the backup back into the data directory.

2.2 Full backup
xtrabackup --backup \
    --target-dir=/var/backups/xtrabackup \
    --datadir=/data/dstore/1.0.0.0/mysql/data \
    --parallel=4  \
    --user=root \
    --password=Hbis@123 \
    --socket=/srv/dstore/1.0.0.0/mysql/ \
    --host=localhost \
    --compress \
    --compress-threads=4 \
    --compress-chunk-size=65536

Local compressed backup with gzip

Use a streaming backup piped through gzip to compress the backup locally; append the following to the backup command:

--stream=xbstream \
	| gzip - > /data/backup/

To recover, decompress with gunzip first, then unpack with xbstream, before proceeding to the prepare phase.

# gzip decompress
gunzip

# xbstream decompress
xbstream -x --parallel=10 -C /data/backup/backup_full < ./backup1

Clarification

  • --target-dir: the backup destination directory; create it beforehand so it exists.
  • --password: it is recommended to pass the password via an environment variable or another secure method to protect sensitive information (a sketch follows this list).
  • --compress: compresses the backup; quicklz is the default compression algorithm (--compress=quicklz).
  • --compress-threads=4: use 4 threads for compression.
  • --compress-chunk-size=65536: the size of each compression thread's working buffer.
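
A hedged sketch of one way to keep the password off the command line: put the credentials in a root-only config file and point xtrabackup at it with --defaults-extra-file (the file path here is an assumption; this option must come first on the command line):

# Credentials file readable only by root (path is hypothetical)
cat > /root/.backup.cnf <<'EOF'
[client]
user=root
password=Hbis@123
EOF
chmod 600 /root/.backup.cnf

# --defaults-extra-file must be the first option given
xtrabackup --defaults-extra-file=/root/.backup.cnf \
    --backup \
    --target-dir=/var/backups/xtrabackup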

Possible error when the user lacks privileges: failed to execute query 'LOCK INSTANCE FOR BACKUP': 1227 (42000) Access denied. Grant the BACKUP_ADMIN privilege:

grant BACKUP_ADMIN on *.* to 'root'@'%';
flush privileges;

LOCK INSTANCE FOR BACKUP is a new backup-related SQL statement introduced by MySQL 8.0, which is mainly used to control access to database instances in a more granular and efficient way when performing database backups to ensure backup consistency. The working principle and characteristics of this command are as follows:

Purpose: When performing a backup operation, this command is used to acquire an instance-level lock that allows DML (Data Manipulation Language, such as INSERT, UPDATE, DELETE) operations to continue during the backup process while preventing DDL (Data Definition Language, such as CREATE, ALTER, DROP) operations that could cause inconsistencies in the snapshot of the data and certain administrative operations. This allows backups to be performed without disrupting database services, and is particularly suited to online backup scenarios where service interruptions need to be minimized.

Privilege Requirements: Execution of the LOCK INSTANCE FOR BACKUP statement requires the user to have the BACKUP_ADMIN privilege. This is a privilege level designed specifically for advanced operations related to backups.

Compatibility: This feature was introduced in MySQL 8.0 and above; MySQL versions earlier than 8.0 do not support this statement, so you may need to rely on other mechanisms (e.g., FLUSH TABLES WITH READ LOCK) to ensure consistency of backups when using older versions.

Unlocking: After performing a backup, you need to use the UNLOCK INSTANCE statement to release the lock previously obtained by LOCK INSTANCE FOR BACKUP to resume normal operation.

Comparison with traditional backup commands: compared with FLUSH TABLES WITH READ LOCK, LOCK INSTANCE FOR BACKUP has less performance impact because it does not block write operations entirely; it only restricts activities that could cause data inconsistency, making it better suited to production environments with high availability and performance requirements.
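
A minimal illustration of the statement pair, assuming a user with BACKUP_ADMIN; the lock is session-scoped, so both statements must run in the same connection (the SELECT SLEEP stands in for the physical copy phase):

mysql -uroot -p <<'EOF'
LOCK INSTANCE FOR BACKUP;  -- from here on, DML continues but DDL is blocked
SELECT SLEEP(30);          -- placeholder for the file-copy phase
UNLOCK INSTANCE;           -- release the lock once the copy finishes
EOF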

2.3 Incremental backup

xtrabackup supports incremental backups. Before taking an incremental backup, you need a full backup. xtrabackup decides whether a page needs to be backed up based on the LSN of each InnoDB page: if a page's LSN is greater than the LSN recorded by the previous backup, the page is backed up.

Start with a full backup.

xtrabackup --backup \
    --target-dir=/var/backups/full \
    --extra-lsndir=/var/backups/full \
    --datadir=/data/dstore/1.0.0.0/mysql/data \
    --parallel=4  \
    --user=root \
    --password=Hbis@123 \
    --socket=/srv/dstore/1.0.0.0/mysql/ \
    --host=localhost \
    --compress \
    --compress-threads=4 \
    --compress-chunk-size=65536 \
    --stream=xbstream \
     2>/var/backups/full/backup_full.log | gzip -  > /var/backups/full/ 

Running the backup command with the --extra-lsndir option writes xtrabackup_checkpoints to a separate directory. Incremental backups rely on the LSN recorded in xtrabackup_checkpoints; the relevant files are shown below.

[root@node83 full]# ll
total 4684
-rw-r--r-- 1 root root   43918 Sep 25 15:40 backup_full.log
-rw-r--r-- 1 root root 4741603 Sep 25 15:40 
-rw-r--r-- 1 root root     102 Sep 25 15:40 xtrabackup_checkpoints
-rw-r--r-- 1 root root     794 Sep 25 15:40 xtrabackup_info
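
The contents of xtrabackup_checkpoints look roughly like the following (the LSN values are illustrative, not from this run); to_lsn is where the next incremental backup starts:

cat /var/backups/full/xtrabackup_checkpoints
# backup_type = full-backuped
# from_lsn = 0
# to_lsn = 26112463
# last_lsn = 26112473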

Now, initiate an incremental backup.

xtrabackup --backup \
    --target-dir=/var/backups/inc1 \
    --extra-lsndir=/var/backups/inc1 \
    --datadir=/data/dstore/1.0.0.0/mysql/data \
    --user=root \
    --password=Hbis@123 \
    --socket=/srv/dstore/1.0.0.0/mysql/ \
    --host=localhost \
    --compress \
    --incremental-basedir=/var/backups/full  \
    --stream=xbstream \
    2>/var/backups/inc1/backup_inc1.log | gzip -  > /var/backups/inc1/backup_inc1.gz

  • --incremental-basedir: the directory containing the xtrabackup_checkpoints file from the full backup (or from the previous incremental backup)

Incremental backups can also be performed on top of the previous incremental backups

xtrabackup --backup \
    --target-dir=/var/backups/inc2 \
    --extra-lsndir=/var/backups/inc2 \
    --datadir=/data/dstore/1.0.0.0/mysql/data \
    --user=root \
    --password=Hbis@123 \
    --socket=/srv/dstore/1.0.0.0/mysql/ \
    --host=localhost \
    --compress \
    --incremental-basedir=/var/backups/inc1 \
    --stream=xbstream \
    | gzip -  > /var/backups/inc2/backup_inc2.gz

The structure is as follows

[root@node83 backups]# tree
.
├── full
│   ├── backup_full.log
│   ├── 
│   ├── xtrabackup_checkpoints
│   └── xtrabackup_info
├── inc1
│   ├── backup_inc1.gz
│   ├── xtrabackup_checkpoints
│   └── xtrabackup_info
└── inc2
    ├── backup_inc2.gz
    ├── xtrabackup_checkpoints
    └── xtrabackup_info

3 directories, 10 files

When restoring an incremental backup, you need to restore the base full backup first, and then restore the incremental backups in order of time.
在这里插入图片描述

Restore full backups

cd /var/backups/full

gunzip

# You need to delete these two files first, otherwise there will be a conflict when xbstream extracts the files
rm xtrabackup_checkpoints xtrabackup_info

xbstream -x -v < full

# There is another layer of compression due to the use of compress.
xtrabackup --decompress --target-dir=/var/backups/full
# Preparation phase
xtrabackup --prepare --apply-log-only --target-dir=. > prepare_full.log 2>&1

When restoring an incremental backup, first unpack and decompress it, then switch to the directory of the full backup and execute the prepare:

cd /var/backups/inc1
rm xtrabackup_checkpoints xtrabackup_info
gunzip backup_inc1.gz
xbstream -x -v < backup_inc1
# Because --compress was used, there is still a layer of qpress compression to remove
xtrabackup --decompress --target-dir=/var/backups/inc1

cd ../full
xtrabackup \
    --prepare \
    --apply-log-only \
    --incremental-dir=/var/backups/inc1 \
    --target-dir=.

That completes the application of this incremental.
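
Per the Percona documentation, every prepare step except the last one uses --apply-log-only; the final incremental in the chain (inc2 in this layout, assuming it was unpacked and decompressed the same way as inc1) is applied without it, so uncommitted transactions get rolled back:

cd /var/backups/full
# Last incremental: omit --apply-log-only so uncommitted transactions roll back
xtrabackup --prepare --incremental-dir=/var/backups/inc2 --target-dir=.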

2.4 Streaming Backup

XtraBackup supports streaming backups, sending backups to STDOUT in the specified tar or xbstream format instead of copying files directly to the backup directory.

The tar and xbstream stream formats are supported in xtrabackup version 2.4, but the tar format does not support parallel backups.

In xtrabackup 8.0, only the xbstream stream format is supported, the tar format is no longer supported.

xtrabackup --backup \
--datadir=/data/dstore/1.0.0.0/mysql/data \
--user=root \
--password=Hbis@123 \
--socket=/srv/dstore/1.0.0.0/mysql/ \
--host=localhost \
--parallel=4  \
--compress \
--compress-threads=4 \
--compress-chunk-size=65536 \
--stream=xbstream | ssh -C root@192.168.2.41 "cat > /var/backups/" 

Log in to the remote host to unzip

xbstream -x --parallel=10 -C /opt/backup  < /opt/backups/

In xbstream, -x means extract, --parallel sets the degree of parallelism, and -C specifies the directory to extract into; that directory must already exist.

Remote Backup Speed Limit

Backup directly to a remote server. If you are concerned that the backup will take up a large amount of network bandwidth, you can use the pv command to limit the speed.

--stream=xbstream | pv -q -L10m | ssh -C root@192.168.2.41 "cat > /var/backups/" 

In the pv command, -q (quiet) suppresses progress output, and -L limits the transfer rate; 10m means 10 MB/s.

2.5 Backing up specified databases

Note that when restoring a single database from a backup onto another machine, you need to create the database and table structure manually in advance.

We can backup the database schema and restore it using the same process as described above.

Use the --databases option to back up a database.

For multiple databases, specify them as a list, e.g. --databases="db1 db2 db3". The databases can also be listed in a text file used with the --databases-file option. To exclude a database from the backup, use --databases-exclude.
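
A few hedged variations on database selection (connection options such as --user and --socket are omitted for brevity; database names are placeholders):

# Back up several databases at once
xtrabackup --backup --target-dir=/var/backups/multi --databases="db1 db2 db3"

# Or keep the list in a file, one database (or database.table) per line
printf 'db1\ndb2\n' > /tmp/dblist.txt
xtrabackup --backup --target-dir=/var/backups/multi --databases-file=/tmp/dblist.txt

# Exclude a database from an otherwise full backup
xtrabackup --backup --target-dir=/var/backups/most --databases-exclude="test2"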

Use the --export option when preparing the backup.

xtrabackup \
  --defaults-file=/srv/dstore/1.0.0.0/mysql/conf/ \
  --backup \
  --target-dir=/var/backups/test2 \
  --socket=/srv/dstore/1.0.0.0/mysql/ \
  --user=root \
  --password=Hbis@123 \
  --databases="test2" \
  --stream=xbstream | gzip > 

Description:

  • --defaults-file: Specifies the MySQL configuration file.
  • --backup: Indicates that a backup operation is performed.
  • --target-dir: Specifies the destination directory for the backup file.
  • --socket: Specify the MySQL socket file path.
  • --user and --password: the user and password for connecting to the database.
  • --databases: the database to back up (here, test2).
  • --stream=xbstream: outputs the backup data in xbstream format.
  • | gzip >: pipes the output through gzip into a compressed file.

When the backup is complete, scp to the remote machine, e.g., /var/backups, and execute the import command.

gunzip
xbstream -x --parallel=10 < test2
# Preparation for implementation
xtrabackup --prepare --apply-log-only --export --target-dir=.

Now use ALTER TABLE ... DISCARD TABLESPACE to drop the tablespace of every InnoDB table in the database:

ALTER TABLE person  DISCARD TABLESPACE;

Copy all table files from the backup directory (/var/backups/test2/test/*) to the mysql data directory (/opt/mysql/data).

Attention: disable SELinux before copying the files. After copying, change the ownership of the copied files to the mysql user if the backup was made under a different user.

Finally, use ALTER TABLE ... IMPORT TABLESPACE to recover each table.

ALTER TABLE person IMPORT TABLESPACE; 

This restores the table to its state at backup time. For point-in-time recovery, the binary log can additionally be applied to the database, taking care to apply only the transactions that affect the restored tables.

The advantage of using this method is that there is no need to stop the database server. A minor disadvantage is that each table needs to be recovered separately, although it can be overcome with the help of scripts.

The script is below:

#!/bin/bash

# Check the input parameters
if [ "$#" -lt 2 ]; then
  echo "Usage: $0 <database name> <delete|import>"
  exit 1
fi

DB_NAME=$1
ACTION=$2
MYSQL_USER="root"
MYSQL_PASSWORD="Hbis@123"
MYSQL_SOCKET="/srv/dstore/1.0.0.0/mysql/"

# Get all table names in the database
TABLES=$(mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --socket="$MYSQL_SOCKET" -D "$DB_NAME" -e "SHOW TABLES;" | awk '{ print $1 }' | grep -v '^Tables_in_')

echo "Tables in database '$DB_NAME': $TABLES"

# Perform a discard or an import based on the action parameter
for TABLE in $TABLES; do
  if [ "$ACTION" == "delete" ]; then
    echo "Discarding tablespace: $TABLE"
    mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --socket="$MYSQL_SOCKET" -D "$DB_NAME" -e "ALTER TABLE $TABLE DISCARD TABLESPACE;"
  elif [ "$ACTION" == "import" ]; then
    echo "Importing tablespace: $TABLE"
    mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --socket="$MYSQL_SOCKET" -D "$DB_NAME" -e "ALTER TABLE $TABLE IMPORT TABLESPACE;"
  else
    echo "Invalid operation: $ACTION"
    exit 1
  fi
done

echo "Operation complete."

3. Restoring backups

Decompression

The default compression algorithm is quicklz, which produces .qp files, so you need the qpress command to decompress them.

yum -y install qpress

Backup sets created with --compress must be decompressed before they can be prepared; the decompression utility is qpress. Decompression does not delete the original compressed files; they can be removed with the --remove-original option, and --parallel can be combined with --decompress to decompress multiple files at once.

xtrabackup --defaults-file=/opt/mysql/conf/ --decompress --target-dir=/var/backups/xtrabackup

Preparing the backup command

The first step is the prepare phase, in which xtrabackup starts an embedded InnoDB instance to perform crash recovery. The instance's buffer pool size is set with the --use-memory parameter and defaults to 100MB; if you have memory to spare, setting it larger (e.g. --use-memory=2G) can shorten the prepare phase.

xtrabackup --defaults-file=/opt/mysql/conf/ \
    --prepare \
    --target-dir=/var/backups/xtrabackup \
    --user=root \
    --password=123456 \
    --socket=/opt/mysql/ \
    --host=localhost \
    --apply-log-only

After the Prepare phase is complete, the recovery phase follows, where you can either manually copy the files to the data directory or use the xtrabackup utility to do so.

Restore Backup Command

xtrabackup --defaults-file=/opt/mysql/conf/ \
    --copy-back \
    --parallel=10 \
    --target-dir=/var/backups/xtrabackup \
    --user=root \
    --password=Hbis@123 \
    --socket=/srv/dstore/1.0.0.0/mysql/ \
    --host=localhost 

The --apply-log-only parameter is required when preparing a full backup that will have incrementals applied on top. Without it, the final stage of the prepare rolls back uncommitted transactions, yet some of those transactions may be committed in the next incremental backup, which would make the incrementals unusable.

  • You need to ensure that the data directory is empty before restoring the backup
  • The database must be in a stopped state (a quick check follows below)
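
A hedged pre-flight check before running --copy-back (the service name and paths follow the examples above and may differ per installation):

systemctl stop mysqld                    # or however this instance is managed
ls -A /data/dstore/1.0.0.0/mysql/data    # should print nothing (empty datadir)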

After the copy-back succeeds, ownership of the data directory must be handed back to the mysql user:

 chown -R mysql:mysql /opt/mysql/data

Starting the database

mysqld_safe --defaults-file=/etc/ &

II. Testing data integrity

Mode 1

  • Simulate 4 databases with scheduled writes and random inserts.
  • Back up:
    • first a full backup
    • then an incremental backup (recording the binlog offset)
  • scp the data and restore it on the slave machine
  • Configure master-slave replication, stop the script on the master node, and compare the data for integrity
Writing scripts

Simulated data

#!/bin/bash

# MySQL Configuration
MYSQL_USER="root"
MYSQL_PASSWORD="Hbis@123"
MYSQL_SOCKET="/srv/dstore/1.0.0.0/mysql/"

# Functions to create databases and insert data
create_databases_and_insert_data() {
    for i in {1..4}; do
        DB_NAME="test_db_$i"
        TABLE_NAME="test_table"

        # Create the database
        mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --socket="$MYSQL_SOCKET" -e "CREATE DATABASE IF NOT EXISTS $DB_NAME;"

        # Create table
        mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --socket="$MYSQL_SOCKET" -D "$DB_NAME" -e "
        CREATE TABLE IF NOT EXISTS $TABLE_NAME (
            id INT AUTO_INCREMENT PRIMARY KEY,
            data_value VARCHAR(255) NOT NULL
        );"

        # Insert about 10000 pieces of data
        for ((j=1; j<=10000; j++)); do
            random_string=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 10)
            mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --socket="$MYSQL_SOCKET" -D "$DB_NAME" -e "
            INSERT INTO $TABLE_NAME (data_value) VALUES ('$random_string');"
        done

        echo "Database $DB_NAME created, 10000 data items inserted."
    done
}

# Execute create database and insert data
create_databases_and_insert_data

Data volume of a single library: 474K

Script to simulate timed inserts

#!/bin/bash

# MySQL Configuration
MYSQL_USER="root"
MYSQL_PASSWORD="Hbis@123"
MYSQL_SOCKET="/srv/dstore/1.0.0.0/mysql/"

# Name of the database and table
DB_COUNT=4
TABLE_NAME="test_table"
INSERT_COUNT=10 # Set the total number of inserts to be made

# Simulate timed insertion of data
insert_data() {
    for ((n=1; n<=INSERT_COUNT; n++)); do
        for i in $(seq 1 $DB_COUNT); do
            DB_NAME="test_db_$i"
            random_string=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 10)
            mysql --user="$MYSQL_USER" --password="$MYSQL_PASSWORD" --socket="$MYSQL_SOCKET" -D "$DB_NAME" -e "
            INSERT INTO $TABLE_NAME (data_value) VALUES ('$random_string');"
            echo "Inserting data into $DB_NAME: $random_string"
        done
        sleep 2 # Pause 2 seconds after each loop
    done
}

# Perform data insertion
insert_data

After the simulated data is generated, start the timed write script, then begin the backup:

nohup sh insert_data.sh > insert_data.log 2>&1 & 

Backup testing

First take a full backup of the data

xtrabackup --backup \
    --target-dir=/var/backups/xtrabackup \
    --datadir=/data/dstore/1.0.0.0/mysql/data \
    --parallel=4  \
    --user=root \
    --password=Hbis@123 \
    --socket=/srv/dstore/1.0.0.0/mysql/ \
    --host=localhost \
    --compress \
    --compress-threads=4 \
    --compress-chunk-size=65536

Perform another incremental backup

View the binlog offset

mysql> show master status;
+---------------+----------+--------------+------------------+----------------------------------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                            |
+---------------+----------+--------------+------------------+----------------------------------------------+
| binlog.000029 |    26321 |              |                  | 3949cf93-71b5-11ef-925f-fa163e75258c:1-40306 |
+---------------+----------+--------------+------------------+----------------------------------------------+
1 row in set (0.03 sec)

Starting an incremental backup

xtrabackup --backup \
    --target-dir=/var/backups/inc1 \
    --extra-lsndir=/var/backups/inc1 \
    --datadir=/data/dstore/1.0.0.0/mysql/data \
    --user=root \
    --password=Hbis@123 \
    --socket=/srv/dstore/1.0.0.0/mysql/ \
    --host=localhost \
    --compress \
    --incremental-basedir=/var/backups/xtrabackup

After execution, the recovery operation begins, and you need to transfer the xtrabackup directory and inc1 to the slave node

scp -r /var/backups/   192.168.2.41:`pwd`

Perform a restore operation [see section 3, Restoring backups, above].

The data migration is complete and the master-slave configuration is executed after starting the database

stop slave;

CHANGE MASTER TO 
  MASTER_HOST='192.168.2.83', 
  MASTER_USER='asyncuser', 
  MASTER_PASSWORD='Hbis@123', 
  MASTER_PORT=3306,
  MASTER_LOG_FILE='binlog.000029',
  MASTER_LOG_POS=26321;
  
start slave;
show slave status\G

After successful execution, check for data consistency

table        master  slave
test_table1  10033   10033
test_table2  10033   10033
test_table3  10033   10033
You can see that the data is consistent
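
A hedged sketch of how such counts can be gathered on both master and slave (names follow the test scripts above):

# Compare row counts across all four test databases
for i in 1 2 3 4; do
  mysql --user=root --password='Hbis@123' --socket=/srv/dstore/1.0.0.0/mysql/ \
    -N -e "SELECT 'test_db_$i', COUNT(*) FROM test_db_$i.test_table;"
done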

Mode 2

  • First install the slave and create the database and table structure information according to the master database

  • Record Offset

    binlog.000030 |   617532 
    
  • Back up test_db_1 through test_db_4 in order

  • Restore data

  • Configure master and slave to see data consistency

Before making a backup, you need to enable innodb_file_per_table
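
A hedged check that the setting is actually on (it defaults to ON in MySQL 8.0); without it, tables live in the shared system tablespace and cannot be exported or imported individually:

mysql -uroot -p -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
# If OFF, enable it for newly created tables:
mysql -uroot -p -e "SET GLOBAL innodb_file_per_table = ON;"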

Start the looping data-insert script

Back up test_db_1, test_db_2, test_db_3, and test_db_4; example command:

xtrabackup \
  --defaults-file=/srv/dstore/1.0.0.0/mysql/conf/ \
  --backup \
  --target-dir=/var/backups/db1 \
  --socket=/srv/dstore/1.0.0.0/mysql/ \
  --user=root \
  --password=Hbis@123 \
  --databases="test_db_1"

After the data backup completes, transfer it to the slave and perform the recovery operation:

xtrabackup \
    --prepare \
    --export \
    --apply-log-only \
    --target-dir=/var/backups/db1

An error was encountered:

[ERROR] [MY-012179] [InnoDB] Could not find any file associated with the tablespace ID: 10

Not investigated further for now; testing showed that exporting the data reports this error while the timed write script is running.

Restore the tables for each database in turn.

Inspection revealed unsynchronized, inconsistent data.
