Testing Tungsten Fan In replication

I need to end up with several MySQL servers replicating into a single instance, and Tungsten Replicator appears to be the only solution for this. To test it out I am using tungsten-sandbox (http://code.google.com/p/tungsten-toolbox/wiki/TungstenSandbox).



The machine needs to have MySQL Sandbox 3.0.24 installed (https://launchpad.net/mysql-sandbox/+download)
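
If it is not already installed, MySQL Sandbox can come either from the launchpad tarball above or from CPAN. A rough sketch, assuming Perl and cpan are available and that the tarball follows the usual MySQL-Sandbox-&lt;version&gt;.tar.gz naming:

cpan MySQL::Sandbox

# or, from the downloaded tarball
tar -xzf MySQL-Sandbox-3.0.24.tar.gz
cd MySQL-Sandbox-3.0.24
perl Makefile.PL
make
make test
make install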


Create the directory to host the sandboxes

mkdir $HOME/tsb2

Get the MySQL binaries (I'm using 5.5.19 on a 64-bit Ubuntu box)

mkdir -p $HOME/opt/mysql
cd $HOME/opt/mysql

tar -xvf mysql-5.5.19-linux2.6-x86_64.tar.gz
mv mysql-5.5.19-linux2.6-x86_64 5.5.19

Download and untar the latest versions of Tungsten Replicator and Tungsten Sandbox.
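
The exact tarball names depend on the release you grab, so the names below are placeholders rather than real file names; the only thing the next step relies on is that the tungsten-sandbox script ends up one level above the replicator directory (which is what the ../tungsten-sandbox call assumes):

# placeholders -- use the actual file names you downloaded
tar -xzf tungsten-replicator-<version>.tar.gz
tar -xzf tungsten-sandbox-<version>.tar.gz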


From within the Tungsten Replicator directory run

../tungsten-sandbox -n 3 --topology=fan-in --hub=3 -m 5.5.19 -p 7300 -l 12300 -r 10300

where -n is the number of nodes to create and --hub is the node to fan into.

Once it has installed, run:

cd $HOME/tsb2
./test_topology

Testing topology fan-in with 3 nodes.
Master nodes: [1 2] - Slave nodes: [3]
# node 3
1 inserted by node #1
2 inserted by node #2
appliedLastSeqno: 7
serviceName     : alpha
state           : ONLINE
appliedLastSeqno: 7
serviceName     : bravo
state           : ONLINE
appliedLastSeqno: 57
serviceName     : charlie
state           : ONLINE
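
You can also ask the replicator on the hub directly with trepctl. The commands below are standard Tungsten ones; the location of trepctl inside the sandbox tree (here assumed to be under the hub node's tungsten installation in $HOME/tsb2) is an assumption:

# list all services on the hub and their state / last applied sequence number
$HOME/tsb2/db3/tungsten/tungsten-replicator/bin/trepctl services
# full status for one of the fan-in services
$HOME/tsb2/db3/tungsten/tungsten-replicator/bin/trepctl -service alpha status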

More info
http://code.google.com/p/tungsten-toolbox/wiki/TungstenSandbox

Update:


It looks like in more recent versions of tungsten-sandbox the option --hub has been replaced by --fan-in, so the command line is now

../tungsten-sandbox -n 3 --topology=fan-in --fan-in=3 -m 5.5.19 -p 7300 -l 12300 -r 10300
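
If in doubt about which option names your copy of the script takes, it can tell you itself (assuming your version supports the usual --help switch):

../tungsten-sandbox --help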


New mysqldump options

A few new mysqldump options appeared in 5.5.3 which I have found useful. --dump-slave works in a similar way to --master-data, except it allows you to dump an existing slave to build a new slave: it writes the CHANGE MASTER info for the running donor slave's master, at the position the donor had reached, into the dump file.


So if you have a set-up like this:

[diagram: a master server replicating to slave B]


You can create C by dumping B with the following command


mysqldump --dump-slave --all-databases -r backup.dmp


When you import the data into the newly created C database, it will position the slave at the same point B was at when the dump was taken, so you end up with:

[diagram: B and the new slave C both replicating from the same master]
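
For completeness, the restore side on C is just a normal import; a sketch assuming C is reachable as c-host (a made-up hostname) and the dump was taken with --dump-slave as above:

# load the dump on C; the CHANGE MASTER TO statement baked into backup.dmp
# points C at B's master, at the position B had reached when the dump was taken
mysql -h c-host -u root -p < backup.dmp

# start replication on C (not needed if the dump also used --apply-slave-statements, see below)
mysql -h c-host -u root -p -e "START SLAVE"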




Another new useful option is --apply-slave-statements, which adds a STOP SLAVE before the CHANGE MASTER statement and a START SLAVE at the end of the dump.
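
So a dump that can be restored and started without any manual CHANGE MASTER or START SLAVE steps would look something like this:

mysqldump --dump-slave --apply-slave-statements --all-databases -r backup.dmp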




pt-archiver

I've started to use the Percona Toolkit quite a lot and have recently found pt-archiver useful for two things.


Purging Large amounts of data

pt-archiver --source h=$HOST,D=$DB,t=$TABLE,P=$PORT \
            --where "$WHERE" --no-check-charset \
            --skip-foreign-key-checks --progress 1000 --purge


The only bit that probably needs explaining is $WHERE: it is the WHERE clause, e.g. creation_date < '2010-10-11'.
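
Put together with real values it looks something like this (the host, database and table names are made up for illustration):

pt-archiver --source h=db1.example.com,D=app,t=audit_log,P=3306 \
            --where "creation_date < '2010-10-11'" \
            --no-check-charset --skip-foreign-key-checks \
            --progress 1000 --purge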


Archiving Data

pt-archiver --source h=$SOURCE_HOST,D=$SOURCE_DB,t=$SOURCE_TABLE,P=$SOURCE_PORT \
            --dest h=$DEST_HOST,D=$DEST_DB,P=$DEST_PORT \
            --where "$WHERE" --no-check-charset \
            --skip-foreign-key-checks --progress 1000




Both of these use low-impact methods to either purge or archive the data. pt-archiver needs a primary key to work from, and it has lots of options I have not played with yet.
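
The low impact comes from it working through the table in small chunks. The knobs for tuning that which I am aware of are --limit, --txn-size and --sleep; this is just a sketch of how they combine, not settings I have benchmarked:

# delete in chunks of 1000 rows, commit every 1000 rows,
# and pause for a second between fetches to keep the load down
pt-archiver --source h=$HOST,D=$DB,t=$TABLE,P=$PORT \
            --where "$WHERE" --purge \
            --limit 1000 --txn-size 1000 --sleep 1 --progress 1000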


More info


http://www.percona.com/doc/percona-toolkit/pt-archiver.html







grepping MySQL output

I found this command very useful. It allows you to filter output from the MySQL command line. In the following case I was looking to see how many records were deleted in one minute.


mysql :05:12 DEV - NY> pager grep deleted
PAGER set to 'grep deleted'
mysql :05:22 DEV - NY> show engine innodb status\G select sleep(60); show engine innodb status\G
Number of rows inserted 6312830, updated 0, deleted 5471458, read 11997007
1 row in set (0.02 sec)


1 row in set (1 min 0.00 sec)


Number of rows inserted 6312830, updated 0, deleted 5541659, read 12067208
1 row in set (0.01 sec)


The first line sets the pager so the output goes through grep with a parameter of deleted. The second then runs SHOW ENGINE INNODB STATUS twice with a sleep of 60 seconds between each run.
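
Once you are done, nopager puts the output back to normal. The same trick works for any other noisy output, for example pulling a single line out of the InnoDB status:

mysql> pager grep "History list length"
mysql> show engine innodb status\G
mysql> nopager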
