Sunday, 12 March 2017

Installation and Configuration of Percona XtraDB Cluster on CentOS 6.8


Galera Cluster is a synchronous multi-master database cluster, based on synchronous replication and Oracle’s MySQL/InnoDB. When Galera Cluster is in use, you can direct reads and writes to any node, and you can lose any individual node without interruption in operations and without the need to handle complex failover procedures.

At a high level, Galera Cluster consists of a database server—that is, MySQL or MariaDB—that then uses the Galera Replication Plugin to manage replication. To be more specific, the MySQL replication plugin API has been extended to provide all the information and hooks required for true multi-master, synchronous replication. This extended API is called the Write-Set Replication API, or wsrep API.

Galera Replication Plugin is a general purpose replication plugin for any transactional system. It can be used to create a synchronous multi-master replication solution to achieve high availability and scale-out.


Through the wsrep API, Galera Cluster provides certification-based replication. A transaction for replication, the write-set, not only contains the database rows to replicate, but also includes information on all the locks that were held by the database during the transaction. Each node then certifies the replicated write-set against other write-sets in the applier queue. The write-set is then applied, if there are no conflicting locks. At this point, the transaction is considered committed, after which each node continues to apply it to the tablespace.

Benefits of Galera Cluster

Galera Cluster provides a significant improvement in high-availability for the MySQL ecosystem. The various ways to achieve high-availability have typically provided only some of the features available through Galera Cluster, making the choice of a high-availability solution an exercise in tradeoffs.
The following features are available through Galera Cluster:
     True Multi-master: Read and write to any node at any time.
     Synchronous Replication: No slave lag; no data is lost when a node crashes.
     Tightly Coupled: All nodes hold the same state. No diverged data between nodes is allowed.
     Multi-threaded Slave: For better performance, with any workload.
     No Master-Slave Failover Operations or Use of VIP.
     Hot Standby: No downtime during failover (since there is no failover).
     Automatic Node Provisioning: No need to manually back up the database and copy it to the new node.
     Supports InnoDB.
     Transparent to Applications: Requires no (or minimal) changes to the application.
     No Read and Write Splitting Needed.




This article will help you set up a three-node Percona XtraDB Cluster on CentOS 6.8. The cluster details are given below.


System Info :

processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 61
model name : Intel(R) Core(TM) i3-5005U CPU @ 2.00GHz
stepping : 4
microcode : 33
cpu MHz : 1995.381
cache size : 3072 KB

RAM : 2 GB

OS : CentOS release 6.8 (Final)

Cluster Details :

  • Node 1 has hostname  cluster1  and IP : 192.168.47.178
  • Node 2 has hostname  cluster2  and IP : 192.168.47.142
  • Node 3 has hostname  cluster3   and IP : 192.168.47.177

Prerequisites:

  • In order for Percona XtraDB Cluster to work correctly, the firewall has to be set up to allow connections on the following ports: 3306, 4444, 4567 and 4568.
  • Port 3306: Galera Cluster uses TCP for database client connections and for State Snapshot Transfers that use mysqldump (normal MySQL traffic).
  • Port 4567: Port for group communication. Galera Cluster uses TCP for replication traffic; multicast replication uses both TCP and UDP on this port.

  • Port 4568: Galera Cluster uses TCP for Incremental State Transfers.
  • Port 4444: Galera Cluster uses TCP for all other State Snapshot Transfer methods.
  • SELinux is disabled (see Step 3).
Note : Steps 1 to 11 have to be performed on all three cluster nodes; the remaining steps are node specific.
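Once the firewall is configured (Step 2) and MySQL is running, you can confirm the required ports are reachable between nodes. A minimal check using nc (installed in Step 5) and the host names from Step 1; note that 3306 and 4567 only listen while mysqld is up, and 4444 only during a state snapshot transfer:

for host in cluster1 cluster2 cluster3; do
  for port in 3306 4444 4567 4568; do
    # -z: only scan for a listener, -w 2: two-second timeout
    nc -z -w 2 $host $port && echo "$host:$port open" || echo "$host:$port closed"
  done
done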


  Step 1 : Add the host names in /etc/hosts file 

[root@cluster1 log]# cat /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 cluster1
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.47.178 cluster1
192.168.47.142 cluster2
192.168.47.177 cluster3
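A quick optional check that each host name resolves correctly on this node:

for h in cluster1 cluster2 cluster3; do getent hosts $h; done
# each line should show the expected IP next to the host name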

Step 2 : Open the firewall ports on all the cluster nodes

iptables -I INPUT -p tcp -m tcp --dport 3306 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 4567 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 4568 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 4444 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 3506 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 3606 -j ACCEPT

(Ports 3506 and 3606 are not used by Percona XtraDB Cluster; open them only if you run additional MySQL instances on those ports.)
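To confirm the rules were added and make them persistent across reboots:

iptables -L INPUT -n --line-numbers | grep -E '3306|4444|4567|4568'
service iptables save   # writes the running rules to /etc/sysconfig/iptables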

Alternatively, deactivate the firewall entirely if required (not recommended for production systems):

# service iptables save
# service iptables stop
# chkconfig iptables off
# service ip6tables save
# service ip6tables stop
# chkconfig ip6tables off

Or uncheck the [*] (Enabled) box in the firewall configuration tool:

system-config-firewall-tui

Step 3 : Disable SELinux

 Edit the /etc/sysconfig/selinux file as below :

[root@cluster1 log]# cat /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 



Restart the System and Check :

[root@cluster1 ~]# sestatus 
SELinux status:                 disabled
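If you prefer not to reboot right away, you can also switch SELinux to permissive mode for the current session; the change in /etc/sysconfig/selinux still takes care of subsequent boots:

setenforce 0    # takes effect immediately, lasts until the next reboot
getenforce      # should now report Permissive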

Step 4 : Configure passwordless remote login using SSH keys

In order for a system to accept SSH connections the system must first be running the SSH server. By default, CentOS 6 installs the SSH server so it is not usually necessary to install it. To check if it is installed and running use the following command:

Login to Node 1 :

/sbin/service sshd status

If the service is not installed, install the package and start it:

yum install openssh-server
/sbin/service sshd start

Generate an RSA key pair using the following command.

$ ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): [Press enter key]
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): [Press enter key]
Enter same passphrase again: [Press enter key]
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5f:ad:40:00:8a:d1:9b:99:b3:b0:f8:08:99:c3:ed:d3 root@cluster1
The key's randomart image is:
+--[ RSA 2048]----+
|        ..oooE.++|
|         o. o.o  |
|          ..   . |
|         o  . . o|
|        S .  . + |
|       . .    . o|
|      . o o    ..|
|       + +       |
|        +.       |
+-----------------+

$ cd

$ cd .ssh 

$ cp id_rsa.pub authorized_keys

Create the .ssh directory on the other nodes and upload the generated public key to Node 2 and Node 3:

cat .ssh/id_rsa.pub | ssh root@cluster2 'mkdir -p .ssh && cat >> .ssh/authorized_keys'
cat .ssh/id_rsa.pub | ssh root@cluster3 'mkdir -p .ssh && cat >> .ssh/authorized_keys'

Set appropriate permissions:

ssh root@cluster2 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
ssh root@cluster3 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"

Now try logging in to all the nodes.
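A quick way to confirm passwordless SSH works from cluster1 to every node (assuming the keys were distributed as above):

for h in cluster1 cluster2 cluster3; do ssh -o BatchMode=yes root@$h hostname; done
# each line should print the remote host name without prompting for a password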


Step 5 : Install all the required dependency packages

yum -y remove mysql-libs mariadb-libs

yum -y install epel-release

yum -y install perl-DBD-MySQL perl-IO-Socket-SSL perl-Time-HiRes perl-Digest-SHA \
perl-TermReadKey compat-readline5

yum -y install telnet nc libevent-devel openssl098e libev socat php

yum install -y git scons gcc gcc-c++ openssl check cmake bison \
boost-devel asio-devel libaio-devel ncurses-devel readline-devel pam-devel

Installation of the Percona XtraBackup Tool

Install the following packages (most were already installed above):

yum -y install libev perl-DBD-MySQL perl

Install XtraBackup from the Percona repository:

yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

yum install percona-xtrabackup-24
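Verify that the XtraBackup binaries are available; the xtrabackup-v2 SST method configured later depends on them:

xtrabackup --version
innobackupex --version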

Add the MySQL user and group

groupadd mysql
useradd -g mysql mysql

Step 6 : Install Percona XtraDB Cluster binaries using yum

cd /opt/data 

$ yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

$ yum install Percona-XtraDB-Cluster-56

Step 7 : Set Open File Limit 

echo "fs.file-max = 65536" >> /etc/sysctl.conf
echo "*          soft     nproc          65536" >> /etc/security/limits.conf
echo "*          hard     nproc          65536" >> /etc/security/limits.conf
echo "*          soft     nofile         65536" >> /etc/security/limits.conf
echo "*          hard     nofile         65536" >> /etc/security/limits.conf
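Apply the kernel setting now and confirm the limits after logging in again:

sysctl -p | grep fs.file-max
# log out and back in, then:
ulimit -n    # open files
ulimit -u    # max user processes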

Step 8 : Create Directories for MySQL 

cd /var/lib/mysql
mkdir -p data 
mkdir -p logs 
mkdir -p tmp 
mkdir -p binary-log
mkdir -p relay-log 
 
chown -R mysql:mysql /var/lib/mysql

Step 9 : Set up the MySQL instance

Start MySQL Service on the Node 

service mysql start 

service mysql status

Reset the Root Password :

  /usr/bin/mysqladmin -u root password 'XXXX'

Next, run the mysql_secure_installation script to improve security:

/usr/bin/mysql_secure_installation

Configure MySQL to start on system boot

chkconfig --level 345 mysql on

chkconfig --add mysql
 
chkconfig mysql on
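Confirm that the service is registered for the expected runlevels:

chkconfig --list mysql
# runlevels 3, 4 and 5 should show "on"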


Step 10 : Create the required MySQL users

 
set sql_log_bin=0;
use mysql;
  
GRANT ALL PRIVILEGES ON *.* TO 'mysql'@'%' IDENTIFIED BY 'XXXXX'  WITH GRANT OPTION  ; 

CREATE USER 'repl'@'%' IDENTIFIED BY 'XXXXXX' ;

GRANT FILE, REPLICATION SLAVE ON *.* TO 'repl'@'%'  REQUIRE SSL;

GRANT FILE, REPLICATION SLAVE ON *.* TO 'repl_user'@'%' IDENTIFIED BY 'XXXXXXX' ;

GRANT ALL PRIVILEGES ON *.* TO 'backup'@'%' IDENTIFIED BY 'backup'   ;
  
CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'XXXXX';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@localhost;

CREATE USER 'sstuser'@'%' IDENTIFIED BY 'XXXXX';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'%';

DELETE FROM mysql.user WHERE User = '';
DELETE FROM mysql.user WHERE User = 'root' AND Host != 'localhost';
DELETE FROM mysql.db WHERE Db='test' OR Db='test\\_%';
use mysql;
DROP DATABASE test;
select "List of Databases";
show databases;
select "";
select "users setup";
select user from user;
FLUSH PRIVILEGES;
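It is worth confirming that the SST account works before it is referenced by wsrep_sst_auth in the next step (the password here is the XXXXX placeholder used above):

mysql -usstuser -pXXXXX -e "SHOW GRANTS FOR CURRENT_USER();"
# should list RELOAD, LOCK TABLES, REPLICATION CLIENT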


Step 11 : Configure Galera my.cnf file 

Shutdown the Instance first 

service mysql stop

Rename the default my.cnf file and create a new my.cnf file 

mv /etc/my.cnf /etc/my.cnf_old 

Create a New my.cnf file from the template 

vi /etc/my.cnf 

Sample Template :

###### Percona-XtraDB-Cluster-server-5.6.34-79.1 ######
# Edit to your requirements.
[client]
###########################
# Listener Config Options #
###########################
port = 3306
socket = /var/lib/mysql/mysql.sock
#ssl-ca=/opt/mysql/3306/etc/ca-cert.pem
#ssl-cert=/opt/mysql/3306/etc/client-cert.pem
#ssl-key=/opt/mysql/3306/etc/client-key.pem

[mysqld]
server-id=100
datadir=/var/lib/mysql/data 
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log_bin=/var/lib/mysql/binary-log/binary-log 
log-bin-index=/var/lib/mysql/binary-log/binary-log.index
slow-query-log=1
slow_query_log_file=/var/log/slow.log
log-slave-updates = 1
expire_logs_days=4
max-connections=1024
innodb_log_group_home_dir = /var/lib/mysql/data
innodb_data_home_dir = /var/lib/mysql/data
relay-log = /var/lib/mysql/relay-log/relay-bin
relay-log-index = /var/lib/mysql/relay-log/relay-bin.index
relay-log-info-file = /var/lib/mysql/relay-log/relay-log.info
relay_log_info_repository = FILE
relay_log_recovery=0

############################################################################
#  Galera Configuration
############################################################################


# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

# Path to Galera library
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://192.168.47.178,192.168.47.142,192.168.47.177

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Number of applier (slave) threads to use
wsrep_slave_threads= 8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=192.168.47.178
# Cluster name
wsrep_cluster_name=dptsource

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=cluster1

#pxc_strict_mode (available from PXC 5.7) allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
#pxc_strict_mode=ENFORCING

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth="sstuser:XXXXX"


read_only = 0
innodb_flush_log_at_trx_commit = 0
innodb_autoinc_lock_mode=2

wsrep_on=1
wsrep_certify_nonPK=1
wsrep_provider_options="gcache.size=128M"
enforce_storage_engine=InnoDB  
wsrep_node_incoming_address= AUTO


  • The first section modifies or re-asserts MySQL settings that allow the cluster to function correctly. For example, Galera Cluster will not work with MyISAM or similar non-transactional storage engines, and mysqld must not be bound only to the localhost address. You can learn about these settings in more detail on the Galera Cluster system configuration page.
  • The Galera provider settings configure the MySQL components that provide the write-set replication API. This means Galera in our case, since Galera is a wsrep (write-set replication) provider. We specify the general parameters that configure the initial replication environment. This part does not require any customization, but you can learn more about the Galera configuration options in the documentation.
  • The cluster settings define the cluster, identifying the members by IP address or resolvable domain name and naming the cluster so that members join the correct group. You can change wsrep_cluster_name to something more meaningful than dptsource or leave it as is, but you must set wsrep_cluster_address to the addresses of your three servers. If your servers have private IP addresses, use them here.
  • The synchronization settings define how the cluster communicates and synchronizes data between members. This is used only for the state transfer that happens when a node comes online. In this setup we use the xtrabackup-v2 SST method (wsrep_sst_method), which takes a consistent copy from a donor node with Percona XtraBackup using the sstuser credentials created in Step 10.
  • The node settings record the IP address and the name of the current server. This is helpful when diagnosing problems in logs and for referencing each server. wsrep_node_address must match the address of the machine you are on, but you can choose any wsrep_node_name to help identify the node in log files. A quick sanity check of these settings is shown after this list.
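Before restarting MySQL, verify that the Galera provider library referenced by wsrep_provider exists and that the new my.cnf parses cleanly (paths assumed from the template above):

ls -l /usr/lib64/galera3/libgalera_smm.so
my_print_defaults mysqld | grep wsrep   # prints the wsrep options mysqld will pick up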


Step 12 : Configuring Node 1 of Percona XtraDB Galera Cluster 

Each node in the cluster needs to have a nearly identical configuration. Because of this, we will do all of the configuration on our first machine, and then copy it to the other nodes.

Make sure that the following parameters are already set correctly.

binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.47.178,192.168.47.142,192.168.47.177"
wsrep_cluster_name='dptsource'
wsrep_node_address='192.168.47.178'
wsrep_node_name='cluster1'
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:XXXXX"

Here wsrep_cluster_address="gcomm://first_ip,second_ip,third_ip" (the same on every node), wsrep_node_address="this_node_ip", and wsrep_node_name="this_node_name".

A normal service mysql start would fail on the first node because there are no other running nodes for it to connect with, so the first node has to be bootstrapped by passing the --wsrep-new-cluster parameter (or the bootstrap-pxc action). Here we do this through the startup script in /etc/init.d. Once the first node is up, you can start the remaining nodes normally.

Optionally, configure a static IP alias via a network interface configuration file if needed:

vi /etc/sysconfig/network-scripts/ifcfg-lo:1

DEVICE=lo:1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.0.2.10
NETMASK=255.255.255.255
NM_CONTROLLED=no

ifup lo:1

Start Node 1 using the bootstrap method

/etc/init.d/mysql bootstrap-pxc

OR

/etc/init.d/mysql start --wsrep-new-cluster

Bootstrapping PXC (Percona XtraDB Cluster) 
Starting MySQL (Percona XtraDB Cluster)....... SUCCESS! 

[root@cluster1 log]# ps -ef | grep mysql 
root      15482      1  0 22:22 pts/0    00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql/data --pid-file=/var/run/mysqld/mysqld.pid --wsrep-new-cluster
mysql     16084  15482  0 22:22 pts/0    00:00:09 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql/data --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --wsrep-provider=/usr/lib64/galera3/libgalera_smm.so --wsrep-new-cluster --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock --wsrep_start_position=908124f7-df4b-11e6-b688-0b7a78abda0f:10826


Verify the Cluster Health and Status 

CLUSTER INTEGRITY :

$ mysql -umysql -pXXXXXX -P3306

mysql > SHOW GLOBAL STATUS LIKE 'wsrep_cluster_state_uuid'; SHOW GLOBAL STATUS LIKE 'wsrep_cluster_conf_id'; SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'; SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';

 mysql> select @@hostname ;
+------------+
| @@hostname |
+------------+
| cluster1   |
+------------+
1 row in set (0.00 sec)

 +--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_state_uuid | 908124f7-df4b-11e6-b688-0b7a78abda0f |
+--------------------------+--------------------------------------+
1 row in set (0.05 sec)

+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| wsrep_cluster_conf_id | 1     |
+-----------------------+-------+
1 row in set (0.00 sec)

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
1 row in set (0.00 sec)

+----------------------+---------+
| Variable_name        | Value   |
+----------------------+---------+
| wsrep_cluster_status | Primary |
+----------------------+---------+
1 row in set (0.01 sec)


NODE STATUS

mysql> select @@hostname ;
+------------+
| @@hostname |
+------------+
| cluster1   |
+------------+
1 row in set (0.00 sec)

mysql > SHOW GLOBAL STATUS LIKE 'wsrep_ready'; SHOW GLOBAL STATUS LIKE 'wsrep_connected'; SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready   | ON    |
+---------------+-------+
1 row in set (0.00 sec)

+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| wsrep_connected | ON    |
+-----------------+-------+
1 row in set (0.00 sec)

+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+
1 row in set (0.00 sec)

REPLICATION STATUS :

mysql> select @@hostname ;
+------------+
| @@hostname |
+------------+
| cluster1   |
+------------+
1 row in set (0.00 sec)


mysql> SHOW GLOBAL STATUS LIKE 'wsrep_local_send_queue_avg'; SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused'; SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance'; 
+----------------------------+----------+
| Variable_name              | Value    |
+----------------------------+----------+
| wsrep_local_send_queue_avg | 0.000000 |
+----------------------------+----------+
1 row in set (0.00 sec)

+---------------------------+----------+
| Variable_name             | Value    |
+---------------------------+----------+
| wsrep_flow_control_paused | 0.000000 |
+---------------------------+----------+
1 row in set (0.00 sec)

+--------------------------+----------+
| Variable_name            | Value    |
+--------------------------+----------+
| wsrep_cert_deps_distance | 0.000000 |
+--------------------------+----------+
1 row in set (0.00 sec)


Create a Test Database :

create database test;
use test;

CREATE TABLE authors (id INT PRIMARY KEY, name VARCHAR(20), email VARCHAR(20));

INSERT INTO authors (id,name,email) VALUES(1,"Vivek","xuz@abc.com");
INSERT INTO authors (id,name,email) VALUES(2,"Priya","p@gmail.com");
INSERT INTO authors (id,name,email) VALUES(3,"Tom","tom@yahoo.com");

(Galera replicates only InnoDB tables, and every table should have an explicit primary key.)
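If replication is working, the same rows are visible from the other nodes almost immediately; for example, from cluster2 (credentials as created in Step 10):

[root@cluster2 ~]# mysql -umysql -pXXXXX -e "SELECT * FROM test.authors;"
# should return the three rows inserted on cluster1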


Step 13 : Configuring Node 2 of Percona XtraDB Galera Cluster 

After successfully starting the cluster on cluster1, start the configuration on cluster2. Edit the Galera server configuration file and add the following values under the [mysqld] section. All the settings are the same as on cluster1 except wsrep_node_address and wsrep_node_name.


Make sure that the following parameters are set correctly.

binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.47.178,192.168.47.142,192.168.47.177"
wsrep_cluster_name='dptsource'
wsrep_node_address='192.168.47.142'
wsrep_node_name='cluster2'
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:XXXXX"

Move the current data directory aside and create a new, empty data directory (the state transfer from the donor node will populate it):

$ service mysql stop

$ cd /var/lib/mysql 
$ mv data data.old 
$ mkdir -p data 
$ chown -R mysql:mysql data 

Start the node using the following command; it will join the cluster and perform a state transfer from a donor node:

[root@cluster2 mysql]# /etc/init.d/mysql start

Starting MySQL (Percona XtraDB Cluster)......State transfer in progress, setting sleep higher
.... SUCCESS! 


Verify the cluster health and status (cluster integrity, node status, replication, etc.) as shown in Step 12; wsrep_cluster_size should now report 2.



Step 14 : Configuring Node 3 of Percona XtraDB Galera Cluster 

After successfully starting the cluster on cluster2, start the configuration on cluster3. Edit the Galera server configuration file and add the following values under the [mysqld] section. All the settings are the same as on cluster1 except wsrep_node_address and wsrep_node_name.


Make sure that the following parameters are set correctly.

binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.47.178,192.168.47.142,192.168.47.177"
wsrep_cluster_name='dptsource'
wsrep_node_address='192.168.47.177'
wsrep_node_name='cluster3'
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth="sstuser:XXXXX"

Move the current data directory aside and create a new, empty data directory (the state transfer from the donor node will populate it):

$ service mysql stop

$ cd /var/lib/mysql 
$ mv data data.old 
$ mkdir -p data 
$ chown -R mysql:mysql data 

Start the node using the following command; it will join the cluster and perform a state transfer from a donor node:

[root@cluster3 mysql]# /etc/init.d/mysql start

Starting MySQL (Percona XtraDB Cluster)......State transfer in progress, setting sleep higher
.... SUCCESS! 


Verify the cluster health and status (cluster integrity, node status, replication, etc.); wsrep_cluster_size should now report 3.


Test replication on the Galera Cluster by reading and writing on cluster1, cluster2, and cluster3.

Script to Check the health of Galera Cluster 

a)  Input files and prerequisites 

$ MYSQL_PASS=XXXXXX ; export MYSQL_PASS ;
$ MYSQL_USER=mysql ; export MYSQL_USER ;

b) Frame the Input file 

[root@cluster1 ~]# cat hosts.txt 
cluster1:3306
cluster2:3306
cluster3:3306

c) Script for Cluster Integrity 

$ echo "****** WSREP_CLUSTER_STATE_UUID VALUE SHOULD BE SAME ACROSS ALL CLUSTER NODES *****" ; echo "****** WSREP_CLUSTER_CONF_ID VALUE SHOULD BE SAME ACROSS ALL CLUSTER NODES *****" ; echo "****** WSREP_CLUSTER_SIZE PRINTS THE NUMBER OF NODES IN THE CLUSTER *****" ;  echo "****** WSREP_CLUSTER_STATUS VALUE OTHER THAN PRIMARY INDICATES ISSUE IN THE CLUSTER *****" ; for i in `cat hosts.txt`; do IFS=':' read -a array <<< "$i" ; echo "##########${array[0]}##############"; mysql -umysql -h${array[0]} -P${array[1]} -p$MYSQL_PASS -be "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_state_uuid'; SHOW GLOBAL STATUS LIKE 'wsrep_cluster_conf_id'; SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'; SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'; "  --skip-column-names --silent  ; done

****** WSREP_CLUSTER_STATE_UUID VALUE SHOULD BE SAME ACROSS ALL CLUSTER NODES *****
****** WSREP_CLUSTER_CONF_ID VALUE SHOULD BE SAME ACROSS ALL CLUSTER NODES *****
****** WSREP_CLUSTER_SIZE PRINTS THE NUMBER OF NODES IN THE CLUSTER *****
****** WSREP_CLUSTER_STATUS VALUE OTHER THAN PRIMARY INDICATES ISSUE IN THE CLUSTER *****

##########cluster1##############
Warning: Using a password on the command line interface can be insecure.
wsrep_cluster_state_uuid 908124f7-df4b-11e6-b688-0b7a78abda0f
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_status Primary
##########cluster2##############
Warning: Using a password on the command line interface can be insecure.
wsrep_cluster_state_uuid 908124f7-df4b-11e6-b688-0b7a78abda0f
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_status Primary
##########cluster3##############
Warning: Using a password on the command line interface can be insecure.
wsrep_cluster_state_uuid 908124f7-df4b-11e6-b688-0b7a78abda0f
wsrep_cluster_conf_id 3
wsrep_cluster_size 3
wsrep_cluster_status Primary

d) Script for Node Status 

$ clear;echo "****** WSREP_READY VALUE ON INDICATES THE NODE CAN ACCEPT WRITE-SETS *****" ; echo "****** WSREP_CONNECTED OFF INDICATES THE NODE DOES NOT HAVE A CONNECTION TO OTHER CLUSTER COMPONENTS *****" ; echo "****** WSREP_LOCAL_STATE_COMMENT INITIALIZED INDICATES THAT THE NODE IS PART OF A NONOPERATIONAL COMPONENT *****" ; for i in `cat hosts.txt`; do IFS=':' read -a array <<< "$i" ; echo "##########${array[0]}##############"; mysql -umysql -h${array[0]} -P${array[1]} -p$MYSQL_PASS -be "SHOW GLOBAL STATUS LIKE 'wsrep_ready'; SHOW GLOBAL STATUS LIKE 'wsrep_connected'; SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'; "  --skip-column-names --silent  ; done

****** WSREP_READY VALUE ON INDICATES THE NODE CAN ACCEPT WRITE-SETS *****
****** WSREP_CONNECTED OFF INDICATES THE NODE DOES NOT HAVE A CONNECTION TO OTHER CLUSTER COMPONENTS *****
****** WSREP_LOCAL_STATE_COMMENT INITIALIZED INDICATES THAT THE NODE IS PART OF A NONOPERATIONAL COMPONENT *****

##########cluster1##############
Warning: Using a password on the command line interface can be insecure.
wsrep_ready ON
wsrep_connected ON
wsrep_local_state_comment Synced
##########cluster2##############
Warning: Using a password on the command line interface can be insecure.
wsrep_ready ON
wsrep_connected ON
wsrep_local_state_comment Synced
##########cluster3##############
Warning: Using a password on the command line interface can be insecure.
wsrep_ready ON
wsrep_connected ON
wsrep_local_state_comment Synced

e) Script for Replication Status 

$ clear;echo "****** WSREP_LOCAL_SEND_QUEUE_AVG VALUE HIGHER THAN 0.0 INDICATES REPLICATION THROTTLING OR NETWORK ISSUES *****" ; echo "****** WSREP_FLOW_CONTROL_PAUSED VALUE OF 0.0 INDICATES THE NODE IS NOT FALLING BEHIND THE CLUSTER *****" ; for i in `cat hosts.txt`; do IFS=':' read -a array <<< "$i" ; echo "##########${array[0]}##############"; mysql -umysql -h${array[0]} -P${array[1]} -p$MYSQL_PASS -be "SHOW GLOBAL STATUS LIKE 'wsrep_local_send_queue_avg'; SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused'; SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance'; " --skip-column-names --silent  ; done

****** WSREP_LOCAL_SEND_QUEUE_AVG VALUE HIGHER THAN 0.0 INDICATES REPLICATION THROTTLING OR NETWORK ISSUES *****
****** WSREP_FLOW_CONTROL_PAUSED VALUE OF 0.0 INDICATES THE NODE IS NOT FALLING BEHIND THE CLUSTER *****

##########cluster1##############
Warning: Using a password on the command line interface can be insecure.
wsrep_local_send_queue_avg 0.000000
wsrep_flow_control_paused 0.000000
wsrep_cert_deps_distance 0.000000
##########cluster2##############
Warning: Using a password on the command line interface can be insecure.
wsrep_local_send_queue_avg 0.000000
wsrep_flow_control_paused 0.000000
wsrep_cert_deps_distance 0.000000
##########cluster3##############
Warning: Using a password on the command line interface can be insecure.
wsrep_local_send_queue_avg 0.000000
wsrep_flow_control_paused 0.000000
wsrep_cert_deps_distance 0.000000


f) Script for Network Status

$ clear;echo "****** GREATER THAN 0.0 INDICATE REPLICATION  OR NETWORK ISSUE*****" ; for i in `cat hosts.txt`; do IFS=':' read -a array <<< "$i" ; echo "##########${array[0]}##############"; mysql -umysql -h${array[0]} -P${array[1]} -p$MYSQL_PASS -be "SHOW GLOBAL STATUS LIKE 'wsrep_local_send_queue_avg';" --skip-column-names --silent  ; done

****** GREATER THAN 0.0 INDICATE REPLICATION  OR NETWORK ISSUE*****

##########cluster1##############
Warning: Using a password on the command line interface can be insecure.
wsrep_local_send_queue_avg 0.000000
##########cluster2##############
Warning: Using a password on the command line interface can be insecure.
wsrep_local_send_queue_avg 0.000000
##########cluster3##############
Warning: Using a password on the command line interface can be insecure.
wsrep_local_send_queue_avg 0.000000


Using Sysbench to test the Cluster 

You now have a cluster up and running with three nodes. We will install sysbench to drive a load at the cluster, and myq_tools to check the flow of transactions in the cluster. Run this on all three nodes to set up sysbench and myq_tools and get them ready for our use.


a)   Configuring sysbench on Cluster 1 (Node 1) 

Download the sysbench package and install it.
  
$  yum install -y sysbench
$ yum install -y wget
$ wget https://github.com/jayjanssen/myq-tools/releases/download/v0
$ tar -xzvf myq_tools.tgz
$ cd bin
$ mv * ../
$ cd ..
$ rm -rf bin
$ ln -s /usr/local/bin/myq_status.linux-amd64 myq_status

b) Create a script for data writes and reads 

$ vi /usr/local/bin/run_sysbench_oltp.sh

sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-user=test --mysql-password=test --mysql-db=test --mysql-host=localhost --mysql-ignore-errors=all --oltp-tables-count=1 --oltp-table-size=250000 --oltp-auto-inc=off --num-threads=1 --report-interval=1 --max-requests=0 --tx-rate=10 run | grep tps
  
  
$ chmod +x run_sysbench_oltp.sh

$ sysbench --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-user=test --mysql-password=test --mysql-db=test --mysql-host=localhost --mysql-ignore-errors=all --oltp-table-size=250000 --num-threads=1 prepare
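The sysbench commands above assume a MySQL account named test with password test that has privileges on the test schema. If you have not created one, something along these lines will do (an assumed example; pick your own credentials and adjust the sysbench options to match):

mysql -umysql -pXXXXX -e "GRANT ALL PRIVILEGES ON test.* TO 'test'@'localhost' IDENTIFIED BY 'test'; FLUSH PRIVILEGES;"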
 

c) Run the Script 

[root@cluster1 ~]# cd /usr/local/bin/
[root@cluster1 bin]# ls 
run_sysbench_oltp.sh
[root@cluster1 bin]# ./run_sysbench_oltp.sh 
[   1s] threads: 1, tps: 5.00, reads: 83.98, writes: 24.00, response time: 677.22ms (95%), errors: 0.00, reconnects:  0.00
[   2s] threads: 1, tps: 14.95, reads: 198.40, writes: 55.83, response time: 541.20ms (95%), errors: 0.00, reconnects:  0.00
[   3s] threads: 1, tps: 10.03, reads: 143.44, writes: 40.12, response time: 453.18ms (95%), errors: 0.00, reconnects:  0.00
[   4s] threads: 1, tps: 14.00, reads: 189.95, writes: 55.99, response time: 126.46ms (95%), errors: 0.00, reconnects:  0.00
[   5s] threads: 1, tps: 7.99, reads: 111.80, writes: 31.94, response time: 58.29ms (95%), errors: 0.00, reconnects:  0.00
[   6s] threads: 1, tps: 13.03, reads: 194.39, writes: 52.11, response time: 85.36ms (95%), errors: 0.00, reconnects:  0.00
[   7s] threads: 1, tps: 11.96, reads: 155.54, writes: 47.86, response time: 39.51ms (95%), errors: 0.00, reconnects:  0.00
[   8s] threads: 1, tps: 11.01, reads: 168.10, writes: 44.03, response time: 41.59ms (95%), errors: 0.00, reconnects:  0.00





Troubleshooting :


ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists

$ rm -f /var/lock/subsys/mysql

