HAMMER PFS Slave Mirroring

Scenario

I have two 500 GB hard disks, both with the Hammer file system. I want to create a master PFS on one disk and a slave PFS on the other, and mirror the data continuously from the master PFS to the slave PFS. This will help me avoid long 'fsck' and RAID parity rewrite times after an unclean shutdown, and will also give me a setup somewhat like RAID 1.

Preparing the Disks

In this example we will be using ad4s1h and ad6s1h, but your disks will most likely be different. To find your disks you can scan through dmesg.

$ dmesg | less

You can then press / and type "ad" or "da" (without the quotes) depending on the type of controller you have.
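
If you would rather filter the output directly, a quick grep (assuming ad- or da-style device names) shows only the disk probe lines:

$ dmesg | grep -E '^(ad|da)[0-9]'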

Creating Disk Labels

If your disks are brand new they'll need either a GPT or an MBR partition table before you create the slices/partitions. Here are some resources for creating a disklabel and adding a slice/partition on each disk.

https://www.dragonflybsd.org/docs/handbook/UnixBasics/#index22h3

https://www.dragonflybsd.org/~labthug/handbook/disks-adding.html
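
As a rough sketch only (the exact steps depend on your hardware; see the links above and the fdisk(8) and disklabel64(8) man pages), setting up ad4 with an MBR slice and a disklabel64 containing an 'h' partition might look like this, repeated for ad6:

# fdisk -IB ad4
# disklabel64 -r -w ad4s1 auto
# disklabel64 -e ad4s1

The first command initializes an MBR with a single slice (ad4s1) and installs boot blocks, the second writes a fresh default label to that slice, and the third opens the label in an editor so you can add an 'h:' partition for HAMMER.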

Encryption (Optional)

This step is not necessary, but if you want to encrypt your data you'll have to do it now. You can encrypt your disks with a password, a keyfile, or both; I will show an example of each below:

Kernel Modules

You will need to load the appropriate kernel modules to be able to create and mount encrypted volumes. This step is only needed if you didn't choose to encrypt any part of your system during installation.

# echo 'dm_target_crypt_load="YES"' >> /boot/loader.conf
# kldload dm_target_crypt
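
You can confirm the module is actually loaded with kldstat:

# kldstat | grep dm_target_crypt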

Password (you will be prompted for your password at boot)

Create encrypted containers with password

# cryptsetup luksFormat /dev/ad4s1h
# cryptsetup luksFormat /dev/ad6s1h

Open encrypted container with password

# cryptsetup luksOpen /dev/ad4s1h crypt-master
# cryptsetup luksOpen /dev/ad6s1h crypt-slave
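
If the containers opened correctly, the new mapper devices should show up (the same check applies to the keyfile variant below):

# ls /dev/mapper/
# cryptsetup status crypt-master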

Keyfile (keyfiles on removable media are outside the scope of this guide.)

Create a keyfile and use it to create encrypted containers

# mkdir /root/keys && chmod 700 /root/keys
# dd if=/dev/urandom of=/root/keys/data.key bs=512 count=4
# chmod 400 /root/keys/data.key
# cryptsetup luksFormat /dev/ad4s1h /root/keys/data.key
# cryptsetup luksFormat /dev/ad6s1h /root/keys/data.key

Open encrypted containers with keyfile

# cryptsetup luksOpen /dev/ad4s1h --key-file /root/keys/data.key crypt-master
# cryptsetup luksOpen /dev/ad6s1h --key-file /root/keys/data.key crypt-slave

Locating Serial Numbers

Locate the serial number for each of your disk partitions/slices. To find the serno you can look in dmesg and match it against the entries in /dev/serno. After locating them, add them to /etc/crypttab. (If you are unable to copy and paste, you can do the following and then remove the entries you don't need.)

# ls /dev/serno >> /etc/crypttab

Edit /etc/crypttab with your editor of choice and add the following lines:

Password

crypt-master /dev/serno/WD-WCC3F0PLTCZD.s1h none none
crypt-slave  /dev/serno/WD-WCC6Y1AEVTK0.s1h none none

Keyfile

crypt-master /dev/serno/WD-WCC3F0PLTCZD.s1h /root/keys/data.key none
crypt-slave  /dev/serno/WD-WCC6Y1AEVTK0.s1h /root/keys/data.key none

Notice

If you have chosen to encrypt your data mirror, you will need to create your HAMMER file systems on the newly created encrypted volumes instead of on the slices/partitions themselves. So instead of /dev/ad4s1h and /dev/ad6s1h below, you would use /dev/mapper/crypt-master and /dev/mapper/crypt-slave respectively.
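
For example, with encryption in use the newfs_hammer commands from the next section would be run against the mapper devices, and the '/etc/fstab' entries further down would likewise reference them instead of the raw slices:

# newfs_hammer -L DATA /dev/mapper/crypt-master
# newfs_hammer -L DATA /dev/mapper/crypt-slave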

Creating HAMMER file system

# newfs_hammer -L DATA /dev/ad4s1h
# newfs_hammer -L DATA /dev/ad6s1h

Creating the master PFS on Disk 1

The Hammer file systems on Disk 1 and Disk 2 are mounted via '/etc/fstab' with the following entries.

/dev/ad4s1h             /Backup1        hammer  rw              2       2
/dev/ad6s1h             /Backup2        hammer  rw              2       2
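
If the mount points do not exist yet, create them first and then mount everything; a minimal check might look like this:

# mkdir /Backup1 /Backup2
# mount -a
# df -h | grep Backup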

Go to the Hammer file system on Disk 1. We will be creating a master PFS called 'test' and will be mounting it using a null mount. If you don't have a directory called 'pfs' under the Hammer file system you should create it.

# pwd
/Backup1
# mkdir pfs

If you already have the pfs directory under the Hammer file system you can skip the above step and continue.

# hammer pfs-master /Backup1/pfs/test
Creating PFS #3 succeeded!
/Backup1/pfs/test
sync-beg-tid=0x0000000000000001
sync-end-tid=0x000000013f644ce0
shared-uuid=9043570e-b3d9-11de-9bef-011617202aa6
unique-uuid=9043574c-b3d9-11de-9bef-011617202aa6
label=""
prune-min=00:00:00
operating as a MASTER
snapshots dir for master defaults to <fs>/snapshots

Now the master PFS 'test' is created. Make a note of its 'shared-uuid' because we will need to use that to create the slave PFS for mirroring. You can mount the PFS under the Hammer file system on Disk 1 by doing the following.

# mkdir /Backup1/test

Now edit '/etc/fstab' to contain the following line.

/Backup1/pfs/test      /Backup1/test    null    rw              0       0

Now mount the PFS by doing the following.

# mount -a
# mount |grep test
/Backup1/pfs/@@-1:00003 on /Backup1/test (null, local)
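
You can also inspect the PFS with 'hammer pfs-status' to double-check the shared-uuid noted earlier:

# hammer pfs-status /Backup1/test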

Creating the slave PFS on Disk 2

Note that we must use the 'shared-uuid' of the master PFS to enable mirroring.

# hammer pfs-slave /Backup2/pfs/test shared-uuid=9043570e-b3d9-11de-9bef-011617202aa6
Creating PFS #3 succeeded!
/Backup2/pfs/test
sync-beg-tid=0x0000000000000001
sync-end-tid=0x0000000000000001
shared-uuid=9043570e-b3d9-11de-9bef-011617202aa6
unique-uuid=97d77f53-b3da-11de-9bef-011617202aa6
slave
label=""
prune-min=00:00:00
operating as a SLAVE
snapshots directory not set for slave

The slave PFS is not mounted but a symlink can be created in the root Hammer file system to point to it.

# ln -s /Backup2/pfs/test /Backup2/test
# ls -l /Backup2/test
lrwxr-xr-x  1 root  wheel  17 Oct  8 12:07 /Backup2/test -> /Backup2/pfs/test

(This step is optional; the PFS can be read through the original magic symlink /Backup2/pfs/test.)

Copying contents from PFS on Disk 1 to PFS on Disk 2 to enable mirroring.

The slave PFS will be accessible only after the first 'mirror-copy' operation.

# touch /Backup1/test/test-file
# ls /Backup1/test/
test-file
# sync

We do the "sync" so that the file creation operation is flushed from kernel memory. Mirroring works only on operations that have been flushed from kernel memory.

# hammer mirror-copy /Backup1/test /Backup2/pfs/test
histogram range 000000013f6425fd - 000000013f644d60
Mirror-read: Mirror from 0000000000000002 to 000000013f644d60
Mirror-read /Backup1/test succeeded

# ls /Backup2/test/
test-file

Enabling continuous mirroring.

The hammer mirror-stream command will automatically restart if the connection is lost, so you only need to start it once at boot. You can do this with an @reboot entry in /etc/crontab:

@reboot root hammer mirror-stream /Backup1/test /Backup2/test
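
After a reboot you can verify that the stream is actually running, for example:

# ps ax | grep '[m]irror-stream'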