First play with Percona XtraDB Cluster

I wanted to try out Percona XtraDB Cluster as I am keen on using this technology and have played with Galera before. I have managed to set up a working three-node cluster running on a single CentOS 6 virtual machine, using instructions combined from the two following pages.

At the end of this you should end up with a three-node cluster running on ports 4000, 5000 and 6000.

Step 1 – Configure the Machine

Standard CentOS 6 build with all the up-to-date packages installed. The host has an IP address of 192.168.1.233, so the config files below will need to be changed to match your own IP address.
Added the following Percona Repos
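
Something along these lines in /etc/yum.repos.d/Percona.repo should do it (the exact baseurl is best double-checked against Percona's yum repository documentation, as it may have changed):

[percona]
name = CentOS $releasever - Percona
baseurl = http://repo.percona.com/centos/$releasever/os/$basearch/
gpgcheck = 0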

And installed XtraDB

yum install Percona-XtraDB-Cluster-server Percona-XtraDB-Cluster-client
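
The packages should also drop the Galera provider library that the my.cnf files below point at, so it is worth a quick check that it is where expected (the path comes from the wsrep_provider setting used later):

ls -l /usr/lib64/libgalera_smm.so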

Created the 3 directories to hold each node's data

mkdir -p /data/bench/d1
mkdir -p /data/bench/d2
mkdir -p /data/bench/d3

Step 2 – Set up MySQL

Created the 3 my.cnf files

vi /etc/my.4000.cnf

[mysqld]
gdb
datadir=/data/bench/d1
port = 4000
socket=/tmp/mysql.4000.sock
user=root
binlog_format=ROW
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://
wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:4010; "
wsrep_sst_receive_address=192.168.1.233:4020
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
wsrep_node_name=node4000
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2



vi /etc/my.5000.cnf

[mysqld]
gdb
datadir=/data/bench/d2
port = 5000
socket=/tmp/mysql.5000.sock
user=root
binlog_format=ROW
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.1.233:4010
wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:5010; "
wsrep_sst_receive_address=192.168.1.233:5020
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
wsrep_node_name=node5000
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2

vi /etc/my.6000.cnf

[mysqld]
gdb
datadir=/data/bench/d3
port = 6000
socket=/tmp/mysql.6000.sock
user=root
binlog_format=ROW
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.1.233:4010
wsrep_provider_options = "gmcast.listen_addr=tcp://0.0.0.0:6010; "
wsrep_sst_receive_address=192.168.1.233:6020
wsrep_slave_threads=2
wsrep_cluster_name=trimethylxanthine
wsrep_sst_method=rsync
wsrep_node_name=node6000
innodb_locks_unsafe_for_binlog=1
innodb_autoinc_lock_mode=2
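
To summarise, the three my.cnf files are identical apart from a handful of per-node values, and node4000 is the only one with an empty gcomm:// address, which is what lets it bootstrap the cluster:

# node4000: datadir=/data/bench/d1  port=4000  gmcast 4010  SST 4020  gcomm://  (bootstrap)
# node5000: datadir=/data/bench/d2  port=5000  gmcast 5010  SST 5020  gcomm://192.168.1.233:4010
# node6000: datadir=/data/bench/d3  port=6000  gmcast 6010  SST 6020  gcomm://192.168.1.233:4010
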
Install the MySQL system tables into each data directory

mysql_install_db --datadir=/data/bench/d1
mysql_install_db --datadir=/data/bench/d2
mysql_install_db --datadir=/data/bench/d3
Step 3 – Starting the Cluster

In this case 4000 has to be started first, because its wsrep_cluster_address is an empty gcomm://, which tells it to bootstrap a new cluster rather than join an existing one.

mysqld_safe --defaults-file=/etc/my.4000.cnf &

Once 4000 is up, 5000 and 6000 can be started and will join the cluster via 192.168.1.233:4010.
mysqld_safe --defaults-file=/etc/my.5000.cnf &
mysqld_safe --defaults-file=/etc/my.6000.cnf &
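
Unless log-error is set, mysqld_safe writes each node's error log into its datadir as <hostname>.err, so the bootstrap node can be watched with something along these lines:

tail -f /data/bench/d1/`hostname`.err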

The log for 4000 should contain messages similar to the ones below.
120115 16:35:52 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 0)
120115 16:35:52 [Note] WSREP: Member 0 (node4000) synced with group.
120115 16:35:52 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
120115 16:35:52 [Note] WSREP: Synchronized with group, ready for connections
120115 16:35:52 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
120115 16:35:53 [Note] WSREP: 1 (node5000): State transfer from 0 (node4000) complete.
120115 16:35:53 [Note] WSREP: Member 1 (node5000) synced with group.
120115 16:36:25 [Note] WSREP: 0 (node4000): State transfer to 2 (node6000) complete.
120115 16:36:25 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 0)
120115 16:36:25 [Note] WSREP: Member 0 (node4000) synced with group.
120115 16:36:25 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 0)
120115 16:36:25 [Note] WSREP: Synchronized with group, ready for connections
120115 16:36:25 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
120115 16:36:27 [Note] WSREP: 2 (node6000): State transfer from 0 (node4000) complete.
120115 16:36:27 [Note] WSREP: Member 2 (node6000) synced with group.
As far as I can see the important messages so far are "WSREP: Member 1 (node5000) synced with group." and "WSREP: Member 2 (node6000) synced with group.", which show that both joining nodes completed their state transfer.
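
Cluster membership can also be checked from SQL on any of the nodes; with all three running, wsrep_cluster_size should report 3:

mysql -P4000 -h 127.0.0.1 -e "show status like 'wsrep_cluster_size'"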

Step 4 – Testing the cluster

Create a table in 4000:

mysql -P4000 -h 127.0.0.1 -e "create table test.a (a int)"

and it should be created in 5000 and 6000 as well:
[root@percona2 /]# mysql -P5000 -h 127.0.0.1 test -e "show tables"
+----------------+
| Tables_in_test |
+----------------+
| a              |
+----------------+
[root@percona2 /]# mysql -P6000 -h 127.0.0.1 test -e "show tables"
+----------------+
| Tables_in_test |
+----------------+
| a              |
+----------------+
[root@percona2 /]# 
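
Writes made on the other nodes should replicate in the same way, for example inserting a row via 5000 and reading it back via 6000:

mysql -P5000 -h 127.0.0.1 -e "insert into test.a values (1)"
mysql -P6000 -h 127.0.0.1 -e "select * from test.a"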


Converting InnoDB Tables to Archive

I recently had to convert several hundred InnoDB tables to the Archive storage engine.

To do this conversion I had to strip out all the indexes and, in some cases, the primary keys, as the Archive engine does not support them (apart from an index on an AUTO_INCREMENT column). I took a mysqldump and ran the following commands against the dump file.

sed 's/ENGINE=InnoDB/ENGINE=archive/g' dump.dmp > dump2.dmp
grep -v '^  UNIQUE KEY' dump2.dmp > dump3.dmp
grep -v '^  KEY' dump3.dmp > dump4.dmp
dump4.dmp was then imported into the new Archive database.
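
The import is then just a normal restore of the modified dump, along the lines of the following (archive_db is only a placeholder for the target database name):

mysql archive_db < dump4.dmp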