osuv.de

hcloud compress volumes

I use the Hetzner Cloud for private use, and since they introduced volumes, I use them for my production data and for my backups.
Basically I use two 10GB volumes:

  • backups
  • dockerdata

Over time, dockerdata will grow and grow. That means whenever I resize the dockerdata volume, the backups volume must follow. To prevent this (and to save money), I switched the backups volume to btrfs with ZSTD compression.

sudo mount -o discard,defaults,compress=zstd /dev/disk/by-id/scsi-0HC_Volume_1234567 /mnt/backups
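To make this survive a reboot, the same options can go into /etc/fstab. A sketch, reusing the device path and mount point from the command above:

```
# /etc/fstab - same device, mount point and options as above
/dev/disk/by-id/scsi-0HC_Volume_1234567  /mnt/backups  btrfs  discard,defaults,compress=zstd  0  0
```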

With a default mkfs.btrfs, the result is disappointing:

/dev/sdc        9.8G  7.4G  1.9G  80% /mnt/dockerdata
/dev/sdb         10G  6.0G  2.2G  74% /mnt/backups

The reason for the lousy result is that btrfs keeps data and metadata in separate block groups, and for metadata it reserved 1GB of the 10GB volume.
To solve this issue, you must use the --mixed option while formatting.
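Reformatting with mixed block groups could look like this. Note that mkfs.btrfs wipes the device, so the backup data has to be copied back afterwards; the device path is the one from the mount command above:

```shell
# Unmount, reformat with mixed data/metadata block groups, remount.
# WARNING: mkfs.btrfs destroys everything on the device!
sudo umount /mnt/backups
sudo mkfs.btrfs --mixed --label backups /dev/disk/by-id/scsi-0HC_Volume_1234567
sudo mount -o discard,defaults,compress=zstd /dev/disk/by-id/scsi-0HC_Volume_1234567 /mnt/backups
```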

Normally the data and metadata block groups are isolated. The mixed mode will remove the isolation and store both types in the same block group type.

The official recommendation is to use this option on devices smaller than 1GB; the soft recommendation is below 5GB, and somewhere else I've read below 16GB.
So the downside might be weak performance (and indeed, rsync to my backups mountpoint results in a very high CPU load!).
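If the CPU load ever becomes a real problem, btrfs also lets you pick the zstd compression level in the mount options (supported since kernel 5.1); lower levels compress less but cost much less CPU:

```
# Lighter compression: zstd level 1 instead of the default level 3
sudo mount -o discard,defaults,compress=zstd:1 /dev/disk/by-id/scsi-0HC_Volume_1234567 /mnt/backups
```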

The mixed mode may lead to degraded performance on larger filesystems, but is otherwise usable, even on multiple devices.

Since this is a small volume for now (10GB) and I don't give a shit about performance on my backup device, I simply started using it in mixed mode. This is the result:

/dev/sdc        9.8G  7.4G  1.9G  80% /mnt/dockerdata
/dev/sdb         10G  5.9G  4.2G  59% /mnt/backups

$ sudo compsize /mnt/backups/
Processed 50218 files, 58842 regular extents (58842 refs), 37152 inline.
Type       Perc     Disk Usage   Uncompressed Referenced  
TOTAL       80%      5.8G         7.1G         7.1G       
none       100%      1.4G         1.4G         1.4G       
zstd        76%      4.3G         5.7G         5.7G  

This is pretty awesome. So even when my dockerdata volume grows beyond 10GB, my backups volume can stay at 10GB.