
DRBD + Pacemaker & Corosync MySQL Cluster Centos7


On Both Nodes

Host file
vim /etc/hosts

10.1.2.114 db1 db1.localdomain.com
10.1.2.115 db2 db2.localdomain.com

Firewall

Option 1 Firewalld

systemctl start firewalld
systemctl enable firewalld
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload

Option 2 iptables

systemctl stop firewalld.service
systemctl mask firewalld.service
systemctl daemon-reload
yum install -y iptables-services
systemctl enable iptables.service
service iptables save

iptables config

iptables -F
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -A INPUT -s 10.1.2.0/24 -d 10.1.2.0/24 -p udp -m multiport --dports 5405 -j ACCEPT
iptables -A INPUT -s 10.1.2.0/24 -d 10.1.2.0/24 -p tcp -m multiport --dports 2224 -j ACCEPT
iptables -A INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 2224 -j ACCEPT
iptables -A INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 3121 -j ACCEPT
iptables -A INPUT -s 10.1.2.0/24 -p tcp -m multiport --dports 21064 -j ACCEPT
iptables -A INPUT -s 10.1.2.0/24 -d 10.1.2.0/24 -p tcp -m multiport --dports 7788,7789 -j ACCEPT
iptables -A INPUT -p udp -m multiport --dports 137,138,139,445 -j DROP
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -j DROP

service iptables save

Disable SELinux

vim /etc/sysconfig/selinux

SELINUX=disabled
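
The change above only takes effect after a reboot; to stop SELinux enforcing immediately on the running system you can also run:

setenforce 0
getenforce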

Pacemaker Install

Install Pacemaker and Corosync

yum install pacemaker pcs

Set a password for the hacluster user (it is used to authenticate the cluster nodes)

echo "H@xorP@assWD" | passwd hacluster --stdin

Start and enable the service

systemctl start pcsd
systemctl enable pcsd

On DB1

Authenticate the cluster nodes and generate the Corosync configuration

pcs cluster auth db1 db2 -u hacluster -p H@xorP@assWD
pcs cluster setup --name db db1 db2

Start the cluster

pcs cluster start --all
pcs cluster enable --all

Verify Corosync installation

corosync-cfgtool -s
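
You can also confirm cluster membership and overall health from pcs itself:

pcs status corosync
pcs status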

Create a new cluster configuration (CIB) file; the following pcs -f commands edit this file rather than the live cluster, so run them from /root to match the /root/mycluster path used below

pcs cluster cib mycluster

Disable the Quorum & STONITH policies in your cluster configuration file

pcs -f /root/mycluster property set no-quorum-policy=ignore
pcs -f /root/mycluster property set stonith-enabled=false

Prevent the resource from failing back after recovery, as failback might increase downtime

pcs -f /root/mycluster resource defaults resource-stickiness=300
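
To confirm the default was recorded, you can list the resource defaults from the same file:

pcs -f /root/mycluster resource defaults
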
DRBD Installation

Both Nodes

Install the DRBD package

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum install -y kmod-drbd84 drbd84-utils

Edit the DRBD config and add the two hosts it will be connecting to (DB1 and DB2)

vim /etc/drbd.conf

Make sure you are using the FQDN for the two DB nodes; DRBD matches these names against each node's hostname, which you can check with uname -n

global {
    usage-count no;
}
resource r0 {
    protocol C;
    startup {
        degr-wfc-timeout 60;
        outdated-wfc-timeout 30;
        wfc-timeout 20;
    }
    disk {
        on-io-error detach;
    }
    syncer {
        rate 100M;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "Daveisc00l123313";
    }
    on db1.localdomain.com {
        device /dev/drbd0;
        disk /dev/sdb;
        address 10.1.2.114:7789;
        meta-disk internal;
    }
    on db2.localdomain.com {
        device /dev/drbd0;
        disk /dev/sdb;
        address 10.1.2.115:7789;
        meta-disk internal;
    }
}

vim /etc/drbd.d/global_common.conf

common {
        handlers {
        }
        startup {
        }
        options {
        }
        disk {
        }
        net {
                 after-sb-0pri discard-zero-changes;
                 after-sb-1pri discard-secondary; 
                 after-sb-2pri disconnect;
         }
}
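
Before going further, it is worth checking that DRBD parses the configuration cleanly on both nodes:

drbdadm dump r0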

On DB1

Initialize the DRBD device, assign it primary on DB1, and create the filesystem

drbdadm create-md r0
systemctl start drbd
drbdadm outdate r0
drbdadm -- --overwrite-data-of-peer primary all
drbdadm primary r0
mkfs.ext4 /dev/drbd0
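
At this point db1 should report itself as Primary (the peer will show as unconnected until db2 is brought up); you can check the state with:

cat /proc/drbd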

On DB2

Configure r0 and start DRBD on db2

drbdadm create-md r0
systemctl start drbd
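
Once DRBD is running on both nodes, the initial sync from db1 to db2 should start; you can follow its progress with:

cat /proc/drbd
drbd-overview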

Pacemaker cluster resources

On DB1

Add the DRBD resource r0 to the cluster configuration

pcs -f /root/mycluster resource create r0 ocf:linbit:drbd drbd_resource=r0 op monitor interval=10s

Create a master/slave resource r0-clone so the DRBD resource can run on both nodes at the same time (one as Master, one as Slave)

pcs -f /root/mycluster resource master r0-clone r0 master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

Add DRBD filesystem resource

pcs -f /root/mycluster resource create drbd-fs Filesystem device="/dev/drbd0" directory="/data" fstype="ext4"

The filesystem resource needs to run on the same node as the r0-clone resource. Since these cluster services depend on each other, assign an INFINITY score to the colocation constraint:

pcs -f /root/mycluster constraint colocation add drbd-fs with r0-clone INFINITY with-rsc-role=Master

Add a Virtual IP resource (edit the IP to match your config)

pcs -f /root/mycluster resource create vip1 ocf:heartbeat:IPaddr2 ip=10.1.2.116 cidr_netmask=24 op monitor interval=10s

The VIP needs an active filesystem to run, so make sure the drbd-fs resource starts before the VIP:

pcs -f /root/mycluster constraint colocation add vip1 with drbd-fs INFINITY
pcs -f /root/mycluster constraint order drbd-fs then vip1

Verify your cluster and constraints configuration

pcs -f /root/mycluster resource show
pcs -f /root/mycluster constraint

And finally commit the changes

pcs cluster cib-push mycluster

Once completed, run 'pcs status' and 'drbd-overview' to check your cluster status and verify DRBD status on both nodes:
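
pcs status
drbd-overview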

On Both Nodes

Installing MySQL

wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
sudo rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum update
yum install mysql-server

Set up MySQL for the DRBD mount directory (/data/mysql)

systemctl stop mysqld
vim /etc/my.cnf

[mysqld]
back_log = 250
general_log = 1
general_log_file = /var/log/mysql.log
log-error = /var/log/mysql.error.log
slow_query_log = 0
slow_query_log_file = /var/log/mysqld.slowquery.log
max_connections = 1500
table_open_cache = 7168
table_definition_cache = 7168
sort_buffer_size = 32M
thread_cache_size = 500
long_query_time = 2
max_heap_table_size = 128M
tmp_table_size = 128M
open_files_limit = 32768
datadir=/data/mysql
socket=/data/mysql/mysql.sock
skip-name-resolve
server-id = 1
log-bin=/data/mysql/drbd
expire_logs_days = 5
max_binlog_size = 100M
max_allowed_packet = 16M

# Query Cache Configuration
query_cache_limit = 16M
query_cache_size = 256M

# INNODB CONFIG
innodb_thread_concurrency = 16
innodb_buffer_pool_size = 2G
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 50
innodb_log_buffer_size = 4M
innodb_log_file_size = 5M
innodb_flush_method = O_DIRECT
innodb_support_xa = 0
innodb_file_per_table

Create the log file referenced in my.cnf and give it the proper ownership

touch /var/log/mysql.log
chown mysql:mysql /var/log/mysql.log 
rm -rf /var/lib/mysql

On DB1

Configure the /data mount for MySQL

mkdir /data/mysql
chown mysql:mysql /data/mysql
mysql_install_db --datadir=/data/mysql --user=mysql
rm -rf /var/lib/mysql
ln -s /data/mysql /var/lib/
chown mysql:mysql /var/lib/mysql
systemctl start mysqld
mysql_secure_installation

Give grants to connect to the VIP

mysql -u root -p -h localhost

DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1');
CREATE USER 'root'@'10.1.2.%' IDENTIFIED BY 'P@SSWORD!';
CREATE USER 'root'@'10.1.2.116' IDENTIFIED BY 'P@SSWORD!';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'10.1.2.%' IDENTIFIED BY 'P@SSWORD!';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'10.1.2.116' IDENTIFIED BY 'P@SSWORD!';
DELETE FROM mysql.user WHERE User='';
FLUSH PRIVILEGES;

Verifying grants for virtual IP

select user,host,password from mysql.user; 

pcs -f /root/mycluster resource create db ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" datadir="/data/mysql" socket="/data/mysql/mysql.sock" additional_parameters="--bind-address=0.0.0.0" op start timeout=45s on-fail=restart op stop timeout=60s op monitor interval=15s timeout=30s
pcs -f /root/mycluster constraint colocation add db with vip1 INFINITY
pcs -f /root/mycluster constraint order vip1 then db
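
These resource and constraint changes were made against the /root/mycluster file like the earlier ones, so push them to the live CIB before running the cleanup:

pcs cluster cib-push mycluster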
pcs resource cleanup

Both Nodes

vim /root/.my.cnf
[client]
user=root
password=P@SSWORD!
host=localhost
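
With the client defaults in place you can quickly confirm that MySQL answers both locally and on the VIP (10.1.2.116 as configured above):

mysql -e "SELECT 1;"
mysql -h 10.1.2.116 -e "SELECT 1;"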

Then reboot db1 followed by db2, and make sure MySQL is working and the resources can fail over

NOTES 

Test failover

pcs resource move drbd-fs db2
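
The move command works by adding a location constraint; once the failover has been verified, clear it so the resource can move freely again (assuming your pcs version supports the clear subcommand):

pcs resource clear drbd-fs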

To update a resource after a commit

cibadmin --query > tmp.xml

Edit tmp.xml with vi, or run pcs -f tmp.xml <your command> against it

cibadmin --replace --xml-file tmp.xml

To delete a resource

pcs -f /root/mycluster resource delete db

Recover a split brain

On the secondary
drbdadm secondary all
drbdadm disconnect all
drbdadm invalidate all
drbdadm connect all

On Primary
drbdadm primary all
drbdadm disconnect all
drbdadm connect all

On both
drbd-overview