This blog post is a follow-on from a post I wrote ages ago and have finally got round to finishing off.
In this part of the process we will create the disks and set up the DRBD devices.
First we need to connect to the virtual machines from a terminal session, as it makes life much easier and quicker to work on them remotely.
You will need to make sure that your servers have static IP addresses.
For this document I will be using the following IP addresses for my servers.
drbdnode1 = 172.16.71.139
drbdnode2 = 172.16.71.140
drbdmstr = 172.16.71.141 (clustered IP address)
Subnet Mask = 255.255.255.0
Gateway = 172.16.71.1
DNS Servers = 8.8.8.8 and 4.4.4.4
So to set a static IP address you need to do the following.
Connect to the console of drbdnode1 and login
Now we need to edit the file that contains the network configuration for the network card.
Enter the following command and press Return
sudo nano /etc/network/interfaces
Enter the password for the user you are logged in as.
You should see the following screen
Now use the arrow keys on your keyboard to move the cursor to the section that starts with iface eth0.
Press Ctrl K to remove the line, then add the lines below with your own IP address details.
auto eth0
iface eth0 inet static
    address 172.16.71.139
    netmask 255.255.255.0
    network 172.16.71.0
    broadcast 172.16.71.255
    gateway 172.16.71.1
It should end up looking like this
Now press Ctrl X to exit
Then Y
Then press Enter to save
Now type in the following
sudo /etc/init.d/networking restart
Do the same for drbdnode2
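On drbdnode2 the file should end up the same apart from the address line, so (as a rough sketch using the addresses listed above) something like this:
auto eth0
iface eth0 inet static
    address 172.16.71.140
    netmask 255.255.255.0
    network 172.16.71.0
    broadcast 172.16.71.255
    gateway 172.16.71.1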
Now that we have given each server a static IP address we can connect to them via SSH and do the admin remotely.
To do this you need a machine with an SSH client installed. Most Linux and OS X machines already have one; if you are on Windows, look for PuTTY and use that.
So open a terminal on your machine and type in the following
ssh cluster@172.16.71.139 and press Enter.
You need to substitute the word cluster in the above command with the username you created on your server when setting it up.
You will be prompted to accept a key for the server. Type yes and press enter.
Now enter the password for the user and press enter.
You should see a screen like this
Connect to both cluster nodes now to make sure SSH works on each of them, so you are not stopped later down the line having to fix a connection problem.
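As a rough example, assuming the username cluster from above, that means running the following (in two separate terminal windows):
ssh cluster@172.16.71.139
ssh cluster@172.16.71.140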
You are now ready to work on your cluster.
First we need to create host records for the two servers
Type the following into your terminal session
sudo nano /etc/hosts
and add a record for each server. It should look something like this
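As a rough example, using the names and addresses from earlier in this post, the entries might look like this:
172.16.71.139   drbdnode1
172.16.71.140   drbdnode2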
Save the file as before and do the same for drbdnode2, but swap the names and IP addresses.
Now we need to install a few packages that will allow us to use DRBD.
In the terminal on drbdnode1 type
sudo apt-get install heartbeat drbd8-utils
and press Enter. You should see a screen like this
Press Y and then Enter to install the software. Do this on drbdnode2 as well
Now we need to create the partitions that we will use for the DRBD cluster.
To find out which disk we will be using, run the command
sudo fdisk -l
to see which disks have not yet been partitioned. Your screen should look like this.
As you can see towards the end of the output, the disk /dev/sdb does not have a partition table.
Look for the line “Disk /dev/sdb doesn’t contain a valid partition table”.
To create a partition table we need to run the following commands
sudo fdisk /dev/sdb
n (to create a new partition)
p (to select a primary partition)
1 (for the first partition)
Enter (to accept the default start cylinder)
Enter (to accept the default end cylinder)
w (to write the changes)
the screen should look like this
Do this on both servers
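If you want to double check the new partition before moving on, you can list just that disk again; the output should now show /dev/sdb1. Something like this should do it:
sudo fdisk -l /dev/sdb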
Once this is complete we need to edit the DRBD configuration file to set up our clustered filesystem.
In your terminal on drbdnode1 enter the command
sudo nano /etc/drbd.d/clusterdisk.res
Enter the password for your user and edit the file
Copy and paste the following code into your terminal screen and then change the details to match your server names and IP addresses.
resource clusterdisk { # name of the resource
  protocol C;
  on drbdnode1 { # first server hostname
    device /dev/drbd0; # name of the DRBD device
    disk /dev/sdb1; # partition to use, which was created using fdisk
    address 172.16.71.139:7788; # IP address and port number used by DRBD
    meta-disk internal; # where to store the DRBD metadata
  }
  on drbdnode2 { # second server hostname
    device /dev/drbd0;
    disk /dev/sdb1;
    address 172.16.71.140:7788;
    meta-disk internal;
  }
  disk {
    on-io-error detach;
  }
  net {
    max-buffers 2048;
    ko-count 4;
  }
  syncer {
    rate 10M;
    al-extents 257;
  }
  startup {
    wfc-timeout 0;
    degr-wfc-timeout 120; # 2 minutes
  }
}
The screen should look similar to this
Ctrl X (to exit)
Y (to save the changed file)
Enter (to overwrite the file)
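The same resource file also needs to exist on drbdnode2. One way to get it there (just a sketch, assuming the cluster user from earlier) is to copy it across and then move it into place with sudo:
scp /etc/drbd.d/clusterdisk.res cluster@172.16.71.140:/tmp/clusterdisk.res
# then, on drbdnode2:
sudo mv /tmp/clusterdisk.res /etc/drbd.d/clusterdisk.res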
Now we need to create the metadata for the DRBD resource.
Enter the following command into your terminal session on both servers
sudo drbdadm create-md clusterdisk
After running this command you should see a screen similar to this
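If the resource does not come up on its own (for example if the DRBD service was not started when the package was installed), you may need to bring it up on both nodes first with something like:
sudo drbdadm up clusterdisk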
On drbdnode1 enter the following command
sudo drbdadm -- --overwrite-data-of-peer primary all
This will make drbdnode1 the primary DRBD node.
To see if this has worked you can run the following command
sudo drbdadm status
The result should look like this on drbdnode1
and like this on drbdnode2.
You will see that drbdnode1 has a status of
cs="SyncSource"
and drbdnode2 has a status of
cs="SyncTarget"
This tells you which role each node is playing in the sync.
At the end of this line you will also see a status like resynced_percent="3.8"
which tells you how far the DRBD disk has synced.
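If you want to keep an eye on the sync as it runs, one option is to watch the kernel's DRBD status, for example:
watch -n 5 cat /proc/drbd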
Once the sync is complete connect to drbdnode1 and run the following command
sudo mkfs.ext4 /dev/drbd0
This will create an ext4 filesystem on the DRBD device, which will sync across to drbdnode2.
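If you want to check the new filesystem is usable before heartbeat takes it over, a quick optional test on drbdnode1 is to mount it somewhere temporary and then unmount it again:
sudo mount /dev/drbd0 /mnt
df -h /mnt
sudo umount /mnt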
Configuring heartbeat resource
Now we need to set up the MySQL resource in the heartbeat configuration.
First we need to create a file called authkeys. The file should be created in the /etc/ha.d directory. You can do this with the following command
sudo nano /etc/ha.d/authkeys
In this file you need to add the following text.
auth 3
3 md5 [SECRETWORD]
Replace [SECRETWORD] with a key you have generated.
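If you need a random key, one quick way (just a sketch, any random string will do) is to hash some random data, for example:
dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | cut -d' ' -f1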
This file needs to be on both servers in the /etc/ha.d directory.
After you have created the file you need to change the permissions on the file to make it more secure. This can be done with the following command
sudo chmod 600 /etc/ha.d/authkeys
Do this on both servers.
Now we need to create the /etc/ha.d/ha.cf file to store the cluster config.
You can do this with the following command
sudo nano /etc/ha.d/ha.cf
Copy and paste this code into the file
logfile /var/log/ha-log
keepalive 2
deadtime 30
udpport 695
bcast eth0
auto_failback off
stonith_host drbdnode1 meatware drbdnode2
stonith_host drbdnode2 meatware drbdnode1
node drbdnode1 drbdnode2
Do the same on both servers.
Next is the haresources file. Create the file with
sudo nano /etc/ha.d/haresources
and paste this code in there
drbdnode1 IPaddr::172.16.71.141/24/eth0 drbddisk::clusterdisk Filesystem::/dev/drbd0::/var/lib/mysql::ext4 mysql
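Once the haresources file is in place on both servers, heartbeat needs to be started (or restarted) on both nodes so it picks up the new configuration, for example:
sudo /etc/init.d/heartbeat restart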
Your cluster is now ready to roll.
All you need to do now is test the cluster, which I will show you how to do in a future blog post.
Let me know how you get on