Software used:

- DRBD http://www.drbd.org - I downloaded the CentOS SRPMs for v8.3.x and rebuilt them for Scientific Linux.
- Scientific Linux 5.4 http://www.scientificlinux.org

My test configuration
- drbd.conf
#
# please have a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#
global {
    usage-count no;
}
common {
    syncer {
        rate 33M;
        verify-alg sha1;
    }
}
resource r0 {
    protocol C;
    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "mysharedkey";
    }
    on myhost.mydomain1 {
        device /dev/drbd0;
        disk /dev/sdb2;
        address 10.0.0.1:7788;
        meta-disk internal;
    }
}

Here myhost.mydomain1 is just my first host. I want to create a one-node DRBD cluster for now, so that I can add additional hosts to the pool to replicate my data at a later point. In short, I want to run the cluster in degraded mode until I get more hosts to add redundancy for backup purposes.
/dev/sdb2 is just a small 50 GB partition for testing purposes, and 10.0.0.1 is a private IP address that I intend to bind to the second network interface on my server, so that I can later use a crossover cable to connect to my backup host.
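To bind that address to a second interface on Scientific Linux 5, a config file along these lines should do. This is just a sketch; eth1, the netmask and ONBOOT are my assumptions, so adjust for your hardware:

# /etc/sysconfig/network-scripts/ifcfg-eth1  (assumed device name)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.1
NETMASK=255.255.255.0
ONBOOT=yes

Then bring the interface up with "ifup eth1" or "service network restart".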
Once I had the above drbd.conf in place, I needed to run the following commands.
/etc/init.d/drbd restart
drbdadm create-md r0
/etc/init.d/drbd restart
drbdadm -- --overwrite-data-of-peer primary r0
/etc/init.d/drbd restart
mkfs.ext3 /dev/drbd0
mount /dev/drbd0 /mnt/r0

The above should give you a single-node DRBD setup.
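At this point it is worth confirming that the node really is up and running degraded as intended. These drbdadm query commands exist in 8.3 as far as I know; on a lone node I would expect roughly the states noted in the comments:

drbdadm state r0    # role: should be Primary/Unknown (no peer yet)
drbdadm dstate r0   # disk state: UpToDate/DUnknown
drbdadm cstate r0   # connection state: WFConnection, waiting for a peer
cat /proc/drbd      # the same information plus sync statistics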
From reading the documentation, it should be possible to add a second host to the setup by adding this to the config file:

on myhost.mydomain2 {
    device /dev/drbd0;
    disk /dev/sdb2;
    address 10.0.0.2:7788;
    meta-disk internal;
}
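Just to be explicit about where that stanza goes: both "on" sections live inside the same resource block, so the resource section of the shared drbd.conf ends up looking like this (nothing new, just the pieces from above combined):

resource r0 {
    protocol C;
    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "mysharedkey";
    }
    on myhost.mydomain1 {
        device /dev/drbd0;
        disk /dev/sdb2;
        address 10.0.0.1:7788;
        meta-disk internal;
    }
    on myhost.mydomain2 {
        device /dev/drbd0;
        disk /dev/sdb2;
        address 10.0.0.2:7788;
        meta-disk internal;
    }
}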
Then copy the updated drbd.conf to myhost.mydomain2 and issue these commands on myhost.mydomain1, assuming you want to make it the primary node:

drbdadm disconnect all
drbdadm invalidate-remote all
drbdadm primary all
drbdadm connect all

On the secondary node, myhost.mydomain2, issue these commands:
drbdadm disconnect all
drbdadm invalidate all
drbdadm connect all

Please be careful about which node you issue these commands on, or else you might end up blowing away data. Once the above commands have been issued, have a look at /proc/drbd; it will show some statistics on what's happening.
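While the initial sync runs I find it handy to keep a live view of those statistics open, e.g.:

watch -n1 cat /proc/drbd   # refresh the sync progress every second
/etc/init.d/drbd status    # the init script's status action gives a similar summary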
I'm not really using DRBD for high availability, but rather as an easy way of keeping a backup copy of my data. Note that you would probably not want to automount the DRBD partition at boot, so I have set my system not to automount it and to skip fsck.
So I now have this in my fstab for /home
/dev/drbd0 /home ext3 defaults,noauto,usrquota,grpquota 0 0

and to mount it on my primary machine, I do
drbdadm primary all
mount /home
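When I'm finished with the volume, the reverse order applies as I understand it: unmount first, then demote the resource so another node can take over:

umount /home
drbdadm secondary all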
Notes: I found that if both my disks are invalid (after some messing around to test the backups) you might get something like this:

[root@myhost.mydomain1 ~]# drbdadm primary r0
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command 'drbdsetup 0 primary' terminated with exit code 17
You can force one of the hosts' disks to be treated as valid by doing
drbdadm -- --overwrite-data-of-peer primary all

Make sure you do it on the primary node!
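One last note: since the config above sets verify-alg sha1, once a peer is connected you can double-check that the two copies really match with DRBD's online verify (a drbdadm subcommand in 8.3):

drbdadm verify r0   # compare blocks against the peer using the configured verify-alg
cat /proc/drbd      # verify progress shows up here; mismatches land in the kernel log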