Puppet modules for Continuent Tungsten Installation

About 3 years ago, my colleague Jeff Mace and I embarked on a journey to automate installations of the Continuent Tungsten and Tungsten Replicator products (both now owned by VMware). Initially this was driven by three different requirements:

  • Assist customer deployments, reducing the load on the Support and Deployment teams
  • Standardise QA host setup; we had many hosts with different configurations on them
  • Allow quick demo setups with Vagrant, both on VirtualBox and AWS

The initial target for this was the MySQL platform using Percona Server (at the time the only variant to offer a yum/apt repository). Initially we wrote our own module for installing and maintaining MySQL, but after several months of struggling we offloaded that work to the Puppet Labs MySQL module (https://forge.puppetlabs.com/puppetlabs/mysql).

Over the past 3 years it has been expanded to install the following RDBMSs:
  • MySQL (via Puppetlabs MySQL) – Standard Oracle MySQL, MariaDB and Percona Server
  • Oracle 11g/12c
  • Vertica
  • Hadoop (Cloudera 5)

It’s now at the point where a developer can spin up a new test VM using the following commands; setting up by hand used to be a multi-hour effort and a barrier for new people.

yum install puppet
puppet module install continuent/tungsten
echo "class { 'tungsten': installSSHKeys => true, installMysql=> true }"|puppet apply


This module became a key component of the recent migration to internal VMware systems, allowing the quick deployment of around 1,000 VMs in a new vSphere environment. The deployment covered a range of MySQL flavours and versions, Oracle 11g and 12c, and a mix of Hadoop and Vertica test clusters. This module was paired with a range of internal modules which stood up the complete host (users, test toolkits, network configurations, etc.) with no real manual intervention.



The initial adoption was painful (about 6 months), with a great deal of pushback from users who couldn’t understand why Puppet was changing things back. After a period of education (and some moaning) the benefits became more apparent to them.




As part of the resurrection and migration of some of the posts, a lot of the code samples have either gone missing or had their formatting messed up. I’m working my way through them, sorting them out.


Talks at Percona Live 2014

I seem to have ended up presenting 3 different talks at Percona Live in Santa Clara this year.

Automatically Deployed MySQL Geo-Clustering for Every Situation with Continuent Tungsten along with Jeff Mace

Avoiding pain when running MySQL in the cloud

Why puppet can save your sanity

As usual Continuent is sponsoring the conference, and we have a big attendance from the team, with 17 Continuent sessions.



OpenShift Cartridges and Ports

I’ve been writing an OpenShift cartridge for deploying Tungsten in OpenShift, and below are some of my notes on using ports between cartridges.

The following ports are available between cartridges:

POP: 106, 109, 110, 995, 1109
IMAP: 143, 220, 993
DNS: 53
SSH: 22
Kerberos: 88, 750, 4444
SMTP: 25, 465, 587
FTP: 21, 990
GIT: 9418
MySQLd: 1186, 3306, 63132-63164
Mongod: 27017
PostgreSQL: 5432
MS SQL: 1433-1434
Oracle: 1521, 2483, 2484 ?????
HTTP/HTTPS: 80, 8008, 8009, 8443
HTTP Cache: 8080, 8118, 8123, 10001-10010
memcache: 11211
jacorb: 3528, 3529
JBoss Debug: 8787
JBoss Management: 4712, 4447, 7600, 9123, 9990, 9999, 18001
AMQP: 5671-5672
PulseAudio: 4713
Flash: 843, 1935
Munin: 4949
Virt Migration: 49152-49216
OCSP: 9080
Other ports: 3128, 5445, 5455, 8255, 8389, 9352, 9353, 9374, 9472, 9923, 9926, 9949, 9950, 9996, 9999, 10715, 10716, 10717, 14170, 14171, 14248, 60417

15000–35530 (which aren’t externally addressable)

Enabling Inter-Cartridge Communication

Note: the 2 ruby scripts are in

Define a port in manifest.yml

  - Private-IP-Name:   IP
    Private-Port-Name: MESSAGING_PORT
    Private-Port:      5445
    Public-Port-Name:  MESSAGING_PROXY_PORT

When a cart is launched, the following variables are available:

[51dbb0b84382ec73810000ff-narmitag.rhcloud.com bin]> env|grep MESSAGING

To start a server on the local cart, use the local port, in this case 5445:

ruby port_check.rb 5445 Y
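The post uses port_check.rb without showing it. A minimal sketch of what such a listener might look like (the OPENSHIFT_TUNGSTEN_IP variable name and the script structure are assumptions for illustration, not the original script):

```ruby
require 'socket'

# Hypothetical sketch of port_check.rb: listen on a port and greet the
# first client with a timestamp, mirroring the output shown in the post.
def serve_once(bind_ip, port)
  server = TCPServer.new(bind_ip, port)
  client = server.accept
  client.puts Time.now
  client.puts "Connected to on #{port}"
  client.close
  server.close
end

# Run from the command line as: ruby port_check.rb <port>
if ARGV.length >= 1
  port = Integer(ARGV[0])
  # On OpenShift, bind to the gear's private IP from the environment rather
  # than 0.0.0.0; gears generally cannot bind all interfaces.
  bind_ip = ENV['OPENSHIFT_TUNGSTEN_IP'] || '127.0.0.1'
  serve_once(bind_ip, port)
end
```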

Scale the cart to add another one

Connect to the server on the other cart using the proxy port:

[51dbb2c25973ca7bc800008b-narmitag.rhcloud.com tungsten]> ruby port_check_client.rb 51dbb0b84382ec73810000ff-narmitag.rhcloud.com 51191
Tue Jul  9 03:35:07 2013
Connected to on 5445
Closing the connection. Bye!
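The matching client script isn’t shown either; a minimal sketch of what port_check_client.rb might look like (again illustrative, not the original):

```ruby
require 'socket'

# Hypothetical sketch of port_check_client.rb: connect to a host/port,
# return whatever the server sends, then close the connection.
def check_port(host, port)
  sock = TCPSocket.new(host, port)
  output = sock.read
  sock.close
  output
end

# Run as: ruby port_check_client.rb <host> <proxy-port>
if ARGV.length >= 2
  puts check_port(ARGV[0], Integer(ARGV[1]))
  puts 'Closing the connection. Bye!'
end
```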

Passing the port and host details to another cartridge at startup.

Add the following to the manifest.yml

    Type: "NET_TCP:tungsten-messaging-info"
    Type: "NET_TCP:tungsten-messaging-info"
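For context, in an OpenShift v2 cartridge manifest these Type entries sit inside connection blocks under Publishes and Subscribes. A hedged sketch (the connector names here are illustrative, not taken from the actual cartridge):

```yaml
Publishes:
  publish-tungsten-messaging-info:
    Type: "NET_TCP:tungsten-messaging-info"
Subscribes:
  set-tungsten-messaging-info:
    Type: "NET_TCP:tungsten-messaging-info"
```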

Create the following 2 hooks in ~/hooks  

(Note: make sure the execute bit is set on the 2 hook scripts.)



#!/bin/bash
# Exit on any errors
set -e

# Build a comma-separated list of the IPs published by the other gears.
# The arguments after the third are key='value' pairs from the publishing
# hook; the tail of the original loop was lost in migration, so the list
# building below is a reconstruction.
kvargs=$(echo "${@:4}" | tr -d "\n")
for arg in $kvargs; do
    ip=$(echo "$arg" | cut -f 2 -d '=' | tr -d "'")
    ip=$(echo "$ip" | sed "s/:/[/g")
    if [ -z "$list" ]; then
        list="$ip"
    else
        list="$list,$ip"
    fi
done

# Persist the result where the cart reads it back later
# (the OPENSHIFT_TUNGSTEN_DIR variable is assumed from the standard
# OPENSHIFT_<CART>_DIR convention; see env/OPENSHIFT_TUNGSTEN_MESSAGING below)
echo "$list" > "${OPENSHIFT_TUNGSTEN_DIR}/env/OPENSHIFT_TUNGSTEN_MESSAGING"


#!/bin/bash
# Start the application httpd instance

# Exit on any errors
set -e

function print_help {
    echo "Usage: $0 app-name namespace uuid"
    echo "Start a running application"

    echo "$0 $@" | logger -p local0.notice -t openshift_origin_httpd_start
    exit 1
}

while getopts 'd' OPTION
do
    case $OPTION in
        d) set -x
        ;;
        ?) print_help
        ;;
    esac
done

[ $# -eq 3 ] || print_help


Start up first Cart – cartA

[51dbc6c24382ec13550001b3-narmitag.rhcloud.com tungsten]> env | grep MESSAGING

[51dbc6c24382ec13550001b3-narmitag.rhcloud.com tungsten]> cat env/OPENSHIFT_TUNGSTEN_MESSAGING

2nd cart added – cartB 

output from cartA

[51dbc6c24382ec13550001b3-narmitag.rhcloud.com tungsten]> cat env/OPENSHIFT_TUNGSTEN_MESSAGING

output from cartB

[51dbc8e0e0b8cd63e8000099-narmitag.rhcloud.com tungsten]> env | grep MESSAGING

[51dbc8e0e0b8cd63e8000099-narmitag.rhcloud.com tungsten]> cat env/OPENSHIFT_TUNGSTEN_MESSAGING
