Automating backups with restic and rclone
After setting up Gitea, Nextcloud, and a handful of other services, I realized I had no backup plan. Everything lived on one server. One dead drive, one bad rm -rf, and all of it would be gone. That is not a hypothetical. It is a matter of time.
So I set up automated backups with restic and rclone. Restic handles encrypted, deduplicated snapshots. Rclone syncs them to Backblaze B2 for offsite storage. The whole thing runs on a systemd timer and costs me less than a dollar a month.
Why restic
There are a few tools in this space. rsync is the classic option, but it does not deduplicate or encrypt. Borg is excellent and battle-tested, but restic has a few things going for it that won my vote:
- Encryption by default. Every repo is encrypted with AES-256. No opt-in required.
- Deduplication. Restic splits files into variable-length chunks and only stores unique ones. Incremental backups are fast and storage-efficient.
- Multiple backends. Local disk, SFTP, S3, rclone. You pick.
- Single binary. No dependencies, no runtime. Download it and run it.
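The deduplication point is worth a tiny illustration. Restic's real chunker is content-defined (a rolling hash picks variable-length boundaries), but this toy shell sketch with fixed 4 KiB chunks shows the core idea: repetitive data collapses to a handful of unique chunks.

```shell
# Toy dedup demo: split a highly repetitive file into fixed-size chunks
# and count how many are actually unique. (Restic uses variable-length,
# content-defined chunks, but the storage win is the same idea.)
tmp=$(mktemp -d)
awk 'BEGIN { for (i = 0; i < 100000; i++) print "the same block of data" }' > "$tmp/data"
split -b 4096 "$tmp/data" "$tmp/chunk-"
total=$(ls "$tmp"/chunk-* | wc -l)
unique=$(sha256sum "$tmp"/chunk-* | awk '{print $1}' | sort -u | wc -l)
echo "total chunks: $total, unique chunks: $unique"
rm -rf "$tmp"
```

On a ~2 MB file of repeated lines, only a few dozen chunk hashes are distinct; a chunk store that keys on the hash stores each of those once.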
As of writing, restic 0.16.0 is the current release. It brought memory usage improvements and better restore progress reporting.
Setting up a restic repo
A restic repository is where your backups live. Initialize one locally first to get familiar:
restic init --repo /mnt/backup/restic-repo

Restic asks you for a password. Do not lose this. Without it, your backups are unrecoverable. I store mine in a password manager and in a separate encrypted file.
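For unattended runs, the password also needs to live in a root-only file on disk. A minimal sketch; I use $HOME here for illustration, but on the server this would be the root-owned path referenced later via RESTIC_PASSWORD_FILE:

```shell
# Create the password file with owner-only permissions.
# ($HOME stands in for /root here; the password is a placeholder.)
umask 077
pw_file="$HOME/.restic-password"
printf '%s\n' 'replace-with-a-strong-password' > "$pw_file"
chmod 600 "$pw_file"
stat -c '%a' "$pw_file"   # 600
```

The umask makes sure the file is never created world-readable, even for the instant before chmod runs.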
For convenience, set environment variables so you do not have to pass flags every time:
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-password

Backing up Docker volumes and databases
Most of my services run in Docker, so backing up means targeting the bind mounts and volumes where data lives. A simple backup command:
restic backup /opt/docker/gitea/data /opt/docker/nextcloud/data

Databases need special treatment. Backing up a PostgreSQL data directory while the database is running can produce a corrupted snapshot. Use pg_dump instead:
docker exec postgres pg_dump -U postgres gitea > /opt/backups/dumps/gitea.sql
docker exec postgres pg_dump -U postgres nextcloud > /opt/backups/dumps/nextcloud.sql

Run the dumps before the restic backup so the SQL files are included in the snapshot. I put all of this in a single shell script:
#!/bin/bash
set -euo pipefail
DUMP_DIR=/opt/backups/dumps
mkdir -p "$DUMP_DIR"
# Dump databases
docker exec postgres pg_dump -U postgres gitea > "$DUMP_DIR/gitea.sql"
docker exec postgres pg_dump -U postgres nextcloud > "$DUMP_DIR/nextcloud.sql"
# Run restic backup
restic backup \
  /opt/docker/gitea/data \
  /opt/docker/nextcloud/data \
  "$DUMP_DIR" \
  --exclude="*.log" \
  --exclude="*.cache"

rclone to Backblaze B2
Local backups protect against accidental deletion and software failures. They do not protect against hardware failure, fire, or theft. You need an offsite copy.
Backblaze B2 is the cheapest option I found. Storage is $0.006/GB/month ($6/TB). They recently bumped the price from $5/TB, but it is still a fraction of what AWS S3 charges. Egress is free up to 3x your stored data, which is more than enough for restore scenarios.
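The arithmetic behind the "less than a dollar a month" claim, using my repo size of roughly 55GB:

```shell
# Monthly B2 cost: ~55 GB stored at $0.006/GB/month.
cost=$(awk 'BEGIN { printf "$%.2f/month", 55 * 0.006 }')
echo "$cost"   # $0.33/month
```

Even if the repo doubled, it would still be well under a dollar.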
Set up rclone with B2:
rclone config

Walk through the interactive setup: choose "Backblaze B2", enter your account ID and application key, and give the remote a name. I called mine b2.
Now tell restic to use rclone as a backend:
restic init --repo rclone:b2:my-backup-bucket/restic

That is it. Restic talks to rclone, rclone talks to B2. Your backups are encrypted before they leave your machine, so B2 never sees your data in the clear.
Update the backup script to push to both local and remote:
# Local backup
restic backup /opt/docker /opt/backups/dumps \
  --repo /mnt/backup/restic-repo

# Remote backup
restic backup /opt/docker /opt/backups/dumps \
  --repo rclone:b2:my-backup-bucket/restic

Retention policies
Without cleanup, your repo grows forever. Restic's forget command lets you define what to keep:
restic forget \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --prune

This keeps the last 7 daily snapshots, 4 weekly, and 6 monthly. The --prune flag actually deletes the data that forgotten snapshots no longer reference. Without it, forget only removes the snapshot metadata, and the repo stays the same size.
For 50GB of Docker data with daily backups, I use about 55GB of repo space after a month. Deduplication does the heavy lifting.
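To put a number on that: 30 daily backups of roughly 50GB is about 1.5TB of logical data, stored in about 55GB of repo space.

```shell
# Rough dedup ratio: logical data backed up vs. actual repo size.
ratio=$(awk 'BEGIN { printf "%.0fx", (50 * 30) / 55 }')
echo "$ratio"   # 27x
```

Roughly a 27x reduction, because day-to-day most of the data does not change.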
Automating with systemd
I prefer systemd timers over cron. They integrate with journalctl for logging and handle missed runs gracefully.
Create the service unit:
# /etc/systemd/system/restic-backup.service
[Unit]
Description=Restic backup
After=docker.service
[Service]
Type=oneshot
EnvironmentFile=/etc/restic-env
ExecStart=/opt/scripts/backup.sh

Create the timer unit:
# /etc/systemd/system/restic-backup.timer
[Unit]
Description=Run restic backup daily
[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true
RandomizedDelaySec=900
[Install]
WantedBy=timers.target

Enable and start it:
systemctl daemon-reload
systemctl enable --now restic-backup.timer

The Persistent=true line means if the server was off at 3 AM, the backup runs as soon as it boots. RandomizedDelaySec=900 adds up to 15 minutes of jitter so you do not slam all your services at the exact same time.
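A timer that fails silently is almost as bad as no timer. One optional addition: an OnFailure= hook on the service, so a failed run triggers a notification. The unit and script names below are hypothetical; wire them to whatever alerting you already use.

```ini
# In restic-backup.service, under [Unit] (addition):
OnFailure=restic-backup-failed.service

# /etc/systemd/system/restic-backup-failed.service (hypothetical notifier)
[Unit]
Description=Notify on restic backup failure

[Service]
Type=oneshot
# Replace with your alerting of choice: mail, ntfy, a webhook...
ExecStart=/opt/scripts/notify-backup-failure.sh
```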
Check the timer status:
systemctl list-timers restic-backup.timer

Testing restores
A backup you have never restored from is not a backup. It is a hope.
Test it. Pick a snapshot, restore it to a temporary directory, and verify the data:
restic snapshots
restic restore latest --target /tmp/restore-test
ls /tmp/restore-test/opt/backups/dumps/

Check that the SQL dumps are valid:
head -20 /tmp/restore-test/opt/backups/dumps/gitea.sql

If you see valid SQL, your backup pipeline works end to end. I run a restore test every few months. It takes five minutes and the peace of mind is worth it.
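If you want to go a step beyond eyeballing the output, a small check can be scripted into the restore test. This sketch relies on pg_dump's plain-text format starting with a "-- PostgreSQL database dump" comment; check_dump is a helper name I made up.

```shell
# Verify a restored file looks like a pg_dump plain-text dump.
check_dump() {
  if head -5 "$1" 2>/dev/null | grep -q 'PostgreSQL database dump'; then
    echo "OK: $1"
  else
    echo "FAIL: $1"
  fi
}
# Usage: check_dump /tmp/restore-test/opt/backups/dumps/gitea.sql
```

A truncated or corrupted dump fails the check immediately, instead of surprising you during a real restore.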
I wrote more about the broader strategy behind all of this in my post on backup strategies for self-hosted data, including the 3-2-1 rule and how to prioritize what gets backed up.