
DRBD

http://www.sgenomics.org/~jtang/blog/posts/Experimenting_with_DRBD_for_data_replication_for_backup_purposes/

Software used

My test configuration

drbd.conf
#
# please have a look at the example configuration file in
# /usr/share/doc/drbd83/drbd.conf
#
 
global {
    usage-count no;
}
 
common {
    syncer { 
        rate 33M;
        verify-alg sha1;
    }
}
 
resource r0 {
    protocol C;
    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }
    net {  
        cram-hmac-alg "sha1"; 
        shared-secret "mysharedkey"; 
    }
    on myhost.mydomain1 {
        device /dev/drbd0;
        disk /dev/sdb2;
        address 10.0.0.1:7788;
        meta-disk internal;
    }
}

Here myhost.mydomain1 is just my first host. I want to create a one-node DRBD cluster for now, so that I can add additional hosts to the pool to replicate my data at a later point. In short, I want to run the cluster in degraded mode until I get more hosts to add redundancy for backup purposes.

/dev/sdb2 is just a small 50 GB partition for testing purposes, and 10.0.0.1 is a private IP address that I intend to bind to the second network interface on my server, so that I can use a crossover cable to connect to the backup host later on.

Once I had the above config file, I needed to run the following commands.

/etc/init.d/drbd restart
drbdadm create-md r0
/etc/init.d/drbd restart
drbdadm -- --overwrite-data-of-peer primary r0
/etc/init.d/drbd restart
mkfs.ext3 /dev/drbd0
mount /dev/drbd0 /mnt/r0
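The sequence above can be sketched as a small wrapper script. This is a hypothetical helper, not part of the original setup: the resource name r0, /dev/drbd0 and /mnt/r0 come from the example config, and DRY_RUN defaults to 1 so the script only prints the commands; set DRY_RUN=0 on a real node to actually execute them.

```shell
#!/bin/sh
# Sketch of the single-node bring-up described above.
# DRY_RUN defaults to 1: commands are printed, not executed.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

bring_up() {
    run /etc/init.d/drbd restart
    run drbdadm create-md r0                            # write internal metadata on /dev/sdb2
    run /etc/init.d/drbd restart
    run drbdadm -- --overwrite-data-of-peer primary r0  # force Primary with no peer present
    run /etc/init.d/drbd restart
    run mkfs.ext3 /dev/drbd0                            # filesystem goes on the drbd device, not sdb2
    run mount /dev/drbd0 /mnt/r0
}

bring_up
```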

The above should give you a single-node DRBD setup. From reading the documentation, it should be possible to add a second host to the setup by adding this to the config file:

on myhost.mydomain2 {
    device /dev/drbd0;
    disk /dev/sdb2;
    address 10.0.0.2:7788;
    meta-disk internal;
}

Then copy the updated drbd.conf file to myhost.mydomain2 and issue these commands on myhost.mydomain1, assuming you want to make it the primary node:

drbdadm disconnect all
drbdadm invalidate-remote all
drbdadm primary all
drbdadm connect all

On the secondary node, myhost.mydomain2, issue these commands:

drbdadm disconnect all
drbdadm invalidate all
drbdadm connect all
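The two command sequences above differ only in which node invalidates its own data. As a hedged sketch (not from the original post), they can be folded into one function that takes the role as an argument; DRY_RUN defaults to 1, so the commands are only printed:

```shell
#!/bin/sh
# Sketch of the connect/invalidate sequences above, parameterised by role.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

resync() {
    role="$1"   # "primary" keeps its data; "secondary" discards its own copy
    run drbdadm disconnect all
    if [ "$role" = "primary" ]; then
        run drbdadm invalidate-remote all   # mark the peer's data as stale
        run drbdadm primary all
    else
        run drbdadm invalidate all          # mark the local data as stale
    fi
    run drbdadm connect all
}

resync primary
```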

Please be careful about which node you issue these commands on, or you might end up blowing away data. Once the above commands have been issued, have a look at /proc/drbd; it shows statistics on what's happening.
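The fields worth watching in /proc/drbd are cs (connection state), ro (roles) and ds (disk states), and a small awk one-liner pulls them out. The sample below is my hand-written approximation of healthy DRBD 8.3 output, not captured from a live system; on a real node you would point awk at /proc/drbd instead of the sample file.

```shell
#!/bin/sh
# Approximate DRBD 8.3 /proc/drbd output (assumed, not from a real node).
cat > sample_proc_drbd <<'EOF'
version: 8.3.8 (api:88/proto:86-94)
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
    ns:548 nr:0 dw:548 dr:1073 al:4 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
EOF

# Print only the cs/ro/ds fields of each resource line.
# On a real node: replace sample_proc_drbd with /proc/drbd
awk '/cs:/ { for (i = 1; i <= NF; i++) if ($i ~ /^(cs|ro|ds):/) printf "%s ", $i; print "" }' sample_proc_drbd
```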

I'm not really using DRBD for high availability, but rather as an easy way of keeping a backup copy of my data. Note that you would probably not want to automount the DRBD partition on boot, so I have set my system not to automount it and to skip fsck.

So I now have this in my fstab for /home

/dev/drbd0      /home           ext3    defaults,noauto,usrquota,grpquota 0 0

and to mount it on my primary machine, I do

drbdadm primary all
mount /home
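The promote-then-mount step (and its reverse for handing the device back) can be sketched as a pair of helpers. This is my own illustrative wrapper, not from the original post; it relies on the fstab entry above supplying the device and options, and DRY_RUN defaults to 1 so the commands are only printed:

```shell
#!/bin/sh
# Sketch of mounting/unmounting the DRBD-backed /home described above.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "$*"
    else
        "$@"
    fi
}

mount_home() {
    run drbdadm primary all   # promote this node; the device is not writable as Secondary
    run mount /home           # fstab supplies the device and the noauto options
}

umount_home() {
    run umount /home
    run drbdadm secondary all # demote so the peer may become Primary later
}

mount_home
```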

Notes: I found that if both my disks are invalid (after some messing around to test the backups), you might get something like this:

[root@myhost.mydomain1 ~]# drbdadm primary r0
0: State change failed: (-2) Refusing to be Primary without at least one UpToDate disk
Command 'drbdsetup 0 primary' terminated with exit code 17

You can force one of the hosts' disks to be treated as valid by doing

drbdadm -- --overwrite-data-of-peer primary all

Make sure you run it on the node whose data you want to keep (the one becoming primary), since it overwrites the peer's data!

tudasbazis/linux/drbd.txt · Last modified: 2015.05.12 04:36 (external edit)