After setting up Ajenti and Ajenti V, the next step is to configure MySQL master-to-master replication.

MySQL Master-to-Master Replication

Server 1

Set up passwordless SSH access between the two servers. On Server 1, run:

ssh-keygen -t dsa

Enter file in which to save the key (/root/.ssh/id_dsa): <ENTER>
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>

Copy the public key to Server 2 (192.168.1.101):

ssh-copy-id -i $HOME/.ssh/id_dsa.pub root@192.168.1.101

Server 2

Confirm that the key from Server 1 was added:

cat $HOME/.ssh/authorized_keys

Server 1 & 2

Edit /etc/mysql/my.cnf and comment out the bind-address line so MySQL listens on all interfaces, then restart MySQL:

nano /etc/mysql/my.cnf
#bind-address           = 127.0.0.1
service mysql restart
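If you prefer a non-interactive edit over nano, sed can comment the line out. This is only a sketch: it runs against a sample file created in /tmp, not the real /etc/mysql/my.cnf.

```shell
# Build a sample file that mimics the relevant my.cnf line.
printf 'bind-address = 127.0.0.1\n' > /tmp/my.cnf.sample

# Comment out any line that starts with bind-address.
sed -i 's/^bind-address/#bind-address/' /tmp/my.cnf.sample

cat /tmp/my.cnf.sample   # -> #bind-address = 127.0.0.1
```

On the real servers you would point sed at /etc/mysql/my.cnf instead of the sample file.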

Server 1

mysql --defaults-file=/etc/mysql/debian.cnf
GRANT REPLICATION SLAVE ON *.* TO 'username'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
quit;

Replace username and password with your own values.

Server 2

mysql --defaults-file=/etc/mysql/debian.cnf
GRANT REPLICATION SLAVE ON *.* TO 'username'@'%' IDENTIFIED BY 'secretpassword';
FLUSH PRIVILEGES;
quit;

Server 1

nano /etc/mysql/my.cnf
[mysqld]
server-id = 1
binlog-ignore-db = mysql
replicate-ignore-db = mysql
auto-increment-increment = 2
replicate-same-server-id = 0
auto-increment-offset = 1
expire_logs_days = 10
max_binlog_size = 500M
log_bin = /var/log/mysql/mysql-bin.log
service mysql restart
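The auto-increment-increment and auto-increment-offset pair above is what keeps AUTO_INCREMENT primary keys from colliding when both masters accept writes: each server advances ids by 2 and starts from its own offset. A quick shell sketch of the id sequences each server will generate:

```shell
# With increment = 2, each new row's id jumps by 2; the offset (1 or 2)
# decides which lane of numbers a server uses, so the two never overlap.
server1_ids=""
server2_ids=""
for n in 0 1 2; do
  server1_ids="$server1_ids $((1 + n * 2))"   # offset 1 -> odd ids
  server2_ids="$server2_ids $((2 + n * 2))"   # offset 2 -> even ids
done
echo "server1:$server1_ids"   # -> server1: 1 3 5
echo "server2:$server2_ids"   # -> server2: 2 4 6
```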

Server 2

nano /etc/mysql/my.cnf
[mysqld]
server-id = 2
binlog-ignore-db = mysql
replicate-ignore-db = mysql
auto-increment-increment = 2
replicate-same-server-id = 0
auto-increment-offset = 2
expire_logs_days = 10
max_binlog_size = 500M
log_bin = /var/log/mysql/mysql-bin.log
service mysql restart

Server 1

Allow Server 2's IP address through the CSF firewall on Server 1:

csf -a 192.168.1.101

Server 2

Allow Server 1's IP address through the CSF firewall on Server 2:

csf -a 192.168.1.100

Server 2

mysql --defaults-file=/etc/mysql/debian.cnf

CHANGE MASTER TO MASTER_HOST='192.168.1.100', MASTER_USER='username', MASTER_PASSWORD='password';

start slave;
show slave status\G

Now you will see:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Still on Server 2, get the binary log coordinates that Server 1 will need:

show master status;

Write down the File and Position values, for example:

File: mysql-bin.000002
Position: 107
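Instead of copying the log file name and position by hand, you can parse them with awk. The sketch below uses a hypothetical sample row in place of live server output; on the real server you would pipe in `mysql --defaults-file=/etc/mysql/debian.cnf -N -e 'SHOW MASTER STATUS'` instead.

```shell
# SHOW MASTER STATUS prints File and Position as the first two columns.
sample="mysql-bin.000002 107"
file=$(printf '%s\n' "$sample" | awk '{print $1}')
pos=$(printf '%s\n' "$sample" | awk '{print $2}')
echo "MASTER_LOG_FILE='$file', MASTER_LOG_POS=$pos"
# -> MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=107
```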

Server 1

mysql --defaults-file=/etc/mysql/debian.cnf

Then, input this:

CHANGE MASTER TO MASTER_HOST='192.168.1.101', MASTER_USER='username', 
MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=107;

start slave;
show slave status\G

Check the two indicators:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

If both indicators say Yes, the MySQL master-to-master replication setup is complete.

Test Database Replication

On MySQL Server 1, create a test database:

CREATE DATABASE testrepli1;

On MySQL Server 2:

show databases;

The testrepli1 database will be replicated and shown on Server 2. Also test the other direction: create a database testrepli2 on Server 2 and it will be replicated to Server 1.

Setup File Replication with Lsyncd

To replicate files between the two servers I will use Lsyncd. In a previous post I wrote a Lsyncd tutorial for CentOS; on Debian it is simpler:

Install Lsyncd on both Server 1 and Server 2 so files replicate in both directions, just like the master-to-master database replication. If it only runs on Server 1 and that server goes down while Server 2 handles the traffic, any data written on Server 2 will be missing when Server 1 comes back up.

Install:

aptitude -y install lsyncd

Then define the replication rules in a configuration file:

mkdir /etc/lsyncd
nano /etc/lsyncd/lsyncd.conf.lua

Paste this configuration:

settings = {
    statusFile = "/tmp/lsyncd.stat",
    statusInterval = 1,
}

sync {
    default.rsync,
    source = "/var/www",
    target = "ip-address:/var/www",
    delete = false,
    rsync = {
        compress = true,
        acls = true,
        verbose = true,
        owner = true,
        perms = true,
        group = true,
    }
}

Replace ip-address with the IP address of the other server. With delete = false, replication only copies files and never deletes them from the target. Now start Lsyncd:

/etc/init.d/lsyncd start

Alternatively, run it directly against the configuration file:

lsyncd /etc/lsyncd/lsyncd.conf.lua

To check whether Lsyncd is running, use this command:

/etc/init.d/lsyncd status

To test, create a file in /var/www on Server 1 and check that it appears in the same folder on Server 2.
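As a local sanity check of the delete = false behavior, two copy passes over temporary directories show that a file removed from the source survives on the target. This is a simulation with cp, not Lsyncd itself:

```shell
# Two sync passes that only copy, never delete -- the effect of delete = false.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/a.txt" "$src/b.txt"

cp -a "$src/." "$dst/"    # first sync: both files copied to the target
rm "$src/a.txt"           # a.txt disappears from the source...
cp -a "$src/." "$dst/"    # second sync: copies only, removes nothing

ls "$dst"                 # a.txt and b.txt are both still on the target
```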

Continue to Part 3...