
DRBD and NFS


Hey all. Sorry for the delay in posting this tutorial; I've been pretty busy, but I finally had some time to finish it. Enjoy :).

Well, as you may know, in previous posts (Post 1, Post 2) I've shown you how to install and configure DRBD in an active/passive configuration with automatic failover using Heartbeat. Now, I'm going to show you how to use that configuration to export the data stored in the DRBD device (i.e., make it available to other servers on the network) using NFS.

So, the first thing to do is to install NFS on both drbd1 and drbd2, stop the daemon, and remove it from the boot sequence (in other words, prevent the NFS daemon from starting automatically during boot, since Heartbeat will be the one starting it). We do this as follows on both servers:

:~$ sudo apt-get install nfs-kernel-server nfs-common
:~$ sudo /etc/init.d/nfs-kernel-server stop
:~$ sudo update-rc.d -f nfs-kernel-server remove
:~$ sudo update-rc.d -f nfs-common remove
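
To double-check that NFS is really out of the boot sequence, you can look for leftover rc links. A quick sanity check (my addition, assuming the default runlevel 2 on Ubuntu):

:~$ ls /etc/rc2.d/ | grep nfs

If the links were removed correctly, this prints nothing.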

Now, you may wonder how NFS is going to work. The NFS daemon will run only on the active (primary) server, and this will be controlled by Heartbeat. But NFS stores its state in /var/lib/nfs on each server, so we have to make sure both servers share the same state: if drbd1 goes down and drbd2 takes over while its /var/lib/nfs differs from drbd1's, NFS will stop working. So, to keep /var/lib/nfs identical on both servers, we move this directory to the DRBD device and create a symbolic link in its place. That way the state is stored on the DRBD device rather than on each server's local disk, and both servers always see the same information. To do this, we run the following on the primary server (drbd1):

:~$ sudo mv /var/lib/nfs/ /data/
:~$ sudo ln -s /data/nfs/ /var/lib/nfs
:~$ sudo mkdir /data/export
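
Note that /data/export is the directory we'll actually export over NFS, and NFS still needs an export definition for it. The original steps don't show one, so here is a minimal sketch for /etc/exports, assuming your clients are on the 172.16.0.0/24 network (the same network as the virtual IP used later); adjust the client range and options to your setup:

# /etc/exports — hypothetical example, tune to your network
/data/export 172.16.0.0/24(rw,sync,no_subtree_check)

Since /etc/exports lives on the local disk and not on the DRBD device, keep it identical on both servers.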

After that, since we already moved the NFS lib files to the DRBD device on the primary server (drbd1), we have to remove them from the secondary server (drbd2) and create the link there as well:

:~$ sudo rm -rf /var/lib/nfs/
:~$ sudo ln -s /data/nfs/ /var/lib/nfs
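
A quick way to verify the symlink on either node (a sanity check of mine, not part of the original steps):

:~$ ls -ld /var/lib/nfs

On both servers this should now report a symlink pointing to /data/nfs/. On the secondary the link will dangle until that node becomes primary and mounts /data, which is expected.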

Now, since Heartbeat is going to control the NFS daemon, we have to tell it to start nfs-kernel-server whenever it takes control of the resources. We do this in /etc/ha.d/haresources by adding nfs-kernel-server at the end of the line. The file should look like this:

drbd1 IPaddr::172.16.0.135/24/eth0 drbddisk::testing Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server
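
Heartbeat starts the resources on this line from left to right when it takes over (and stops them from right to left when it releases them), so the order matters: first the virtual IP comes up, then the DRBD device is promoted to primary, then the filesystem is mounted, and only then is NFS started. Roughly, a takeover amounts to something like this (illustration only — Heartbeat invokes the corresponding resource scripts itself):

:~$ sudo /etc/ha.d/resource.d/IPaddr 172.16.0.135/24/eth0 start
:~$ sudo /etc/ha.d/resource.d/drbddisk testing start
:~$ sudo /etc/ha.d/resource.d/Filesystem /dev/drbd0 /data ext3 start
:~$ sudo /etc/init.d/nfs-kernel-server start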

Now that we've configured everything, we have to power off both servers: first the secondary, then the primary. Then we start the primary server, and during the boot process we'll see a message asking us to type "yes" (the same message shown during the installation of DRBD in my first post). After confirming, if you have stonith configured, it is likely that drbd1 won't start its DRBD device: it will remain secondary and won't be able to mount it. This is because we first have to confirm to stonith that it's safe to take over the service (to see whether stonith is the problem, take a look at /var/log/ha-log). To do this, we run the following on the primary server (drbd1):

:~$ sudo meatclient -c drbd2

After doing this, we have to confirm once more. After the confirmation, Heartbeat will take control, switch the DRBD device to primary, and start NFS. Then we can boot up the secondary server (drbd2). Enjoy :-).
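
To verify that the takeover actually brought everything up, a couple of quick checks on the primary (my suggestion, not part of the original steps):

:~$ cat /proc/drbd
:~$ sudo exportfs -v

The first should show the DRBD resource as Primary and UpToDate; the second should list /data/export among the active exports.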

Note: I chose to power off both servers. You could also just restart them one at a time and see what happens :).
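
Finally, from any client on the network you can test the whole stack through the virtual IP. A sketch, reusing the IP and export path from above — adapt them to your network:

:~$ sudo mkdir -p /mnt/nfs-test
:~$ sudo mount -t nfs 172.16.0.135:/data/export /mnt/nfs-test

If you now power off the primary, Heartbeat should move the virtual IP and NFS over to the secondary, and after a short pause the client's mount should keep working.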

