RAID IS NOT A BACKUP - RAID only improves uptime by letting the system survive a disk failure without going offline; it does nothing against accidental deletion, corruption, or ransomware, because every change is mirrored instantly. There is no tangible benefit of RAID for a home server. For an office server that must stay online, maybe, but not for a home server.
$ lsblk
# Look for the 3.7T drive with no partitions. Let's assume it's /dev/sdc
config_mirror will hold the backup of /srv/docker-data.
Run the First Backup Manually (Recommended): Run the rsync commands once manually now to make sure they work correctly and to create the initial backup. This may take a while depending on how much data you already have.
(Added -v for “verbose” output to see the files being copied.) The --delete flag ensures that a file deleted from the source is also deleted from the backup, keeping the mirror an exact copy.
Guide to the Automated Server Backup System
This guide explains how to use the backup_script.sh to create safe, automated backups of your server’s critical data and application configurations.
1. Understanding the Script’s Features
The provided script (backup_script.sh) is designed to be robust and safe. Here’s what it does:
Mounts on Demand: It only mounts the backup drive (/mnt/backup) when the script starts.
Stops Services Safely: It gracefully stops all your Docker containers to ensure data consistency, especially for databases.
Performs the Backup: It uses rsync to efficiently mirror your /mnt/data and /srv/docker-data directories to the backup drive.
Logs Everything: It creates a detailed log file at /var/log/server_backup.log with timestamps, so you can always check the status of your backups.
Guaranteed Cleanup: It uses a trap to ensure that, no matter what happens (success, failure, or cancellation), it will always restart your Docker containers and unmount the backup drive, preventing your services from being left offline.
Error Handling: The script will exit immediately if any crucial step (like mounting the drive or running rsync) fails, triggering the cleanup process.
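The guaranteed-cleanup behaviour comes from bash's `trap … EXIT`: the trapped function runs no matter how the script exits. A minimal sketch of the pattern, simulating a failure with `false`:

```shell
# Minimal sketch of the trap-on-EXIT pattern: cleanup fires even on failure
out=$(bash -c '
  set -e
  cleanup() { echo "cleanup ran"; }
  trap cleanup EXIT
  echo "starting backup"
  false              # simulated failure, e.g. the mount command failing
  echo "never reached"
' || true)
echo "$out"
```

Because `set -e` aborts the inner script at `false`, "never reached" is never printed, but the EXIT trap still runs and "cleanup ran" appears in the output.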
2. How to Set Up the Script
Step 1: Save the Script
Save the contents of backup_script.sh to a file on your server, for example, in your home directory at /home/john/backup_script.sh.
Step 2: Make the Script Executable
Open a terminal on your server and run the following command to give the script permission to be executed:
chmod +x /home/john/backup_script.sh
(Replace /home/john/ with the actual path if you saved it elsewhere.)
To mount and unmount manually:
sudo mount /mnt/backup    # Mount
sudo umount /mnt/backup   # Unmount
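For `mount /mnt/backup` to work with just the mount point, /etc/fstab needs an entry for the backup drive. If you want the drive to stay unmounted at boot and only be mounted on demand by the script, a `noauto` variant of the fstab line can be used (assumption: swap in your drive's real UUID, as found with `sudo blkid`):

```
UUID=uuid_for_backup_drive  /mnt/backup  ext4  defaults,noauto  0  0
```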
The backup_script.sh
```
#!/bin/bash
# ==============================================================================
# Robust Server Backup Script
#
# Features:
# - Mounts the backup drive on demand and unmounts it after use.
# - Safely stops Docker containers before backup for data consistency.
# - Uses a trap to GUARANTEE containers are restarted and the drive is
#   unmounted, even if the script fails.
# - Backs up both critical user data and Docker application configs.
# - Logs all actions to a dedicated log file with timestamps.
# ==============================================================================

# --- Configuration ---

# Exit immediately if a command exits with a non-zero status.
set -e

# Define paths and variables
BACKUP_MOUNT_POINT="/mnt/backup"
LOG_FILE="/var/log/server_backup.log"

# Source directories for backup
BACKUP_SRC_DATA="/mnt/data/"
BACKUP_SRC_CONFIG="/srv/docker-data/"

# Destination directories on the backup drive
BACKUP_DEST_DATA="${BACKUP_MOUNT_POINT}/data_mirror/"
BACKUP_DEST_CONFIG="${BACKUP_MOUNT_POINT}/config_mirror/"

# List of Docker Compose stack directories to stop/start
# Add or remove directories here as you add/remove services
DOCKER_STACK_DIRS=(
    "portainer"
    "jellyfin"
    "immich"
    "syncthing"
    "transmission"
    "arr-suite"
    "uptime-kuma"
    "paperless-ngx"
    "homepage"
)

# --- Functions ---

# Function to log messages with a timestamp
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | sudo tee -a "${LOG_FILE}"
}

# Function to stop all Docker services
stop_services() {
    log_message "Stopping Docker services..."
    for stack in "${DOCKER_STACK_DIRS[@]}"; do
        if [ -d "/srv/docker-data/${stack}" ]; then
            log_message "Stopping stack: ${stack}"
            (cd "/srv/docker-data/${stack}" && docker compose stop) || log_message "WARNING: Could not stop stack ${stack}. It may not be running."
        else
            log_message "WARNING: Directory for stack ${stack} not found. Skipping."
        fi
    done
    log_message "All specified Docker services stopped."
}

# Function to start all Docker services
start_services() {
    log_message "Starting Docker services..."
    for stack in "${DOCKER_STACK_DIRS[@]}"; do
        if [ -d "/srv/docker-data/${stack}" ]; then
            log_message "Starting stack: ${stack}"
            (cd "/srv/docker-data/${stack}" && docker compose start) || log_message "ERROR: Failed to start stack ${stack}."
        fi
    done
    log_message "All specified Docker services started."
}

# Cleanup function that will be called on script exit
cleanup() {
    log_message "--- Starting cleanup sequence ---"
    start_services
    # Unmount the backup drive
    if mountpoint -q "${BACKUP_MOUNT_POINT}"; then
        log_message "Unmounting backup drive: ${BACKUP_MOUNT_POINT}"
        if sudo umount "${BACKUP_MOUNT_POINT}"; then
            log_message "Backup drive unmounted successfully."
        else
            log_message "ERROR: Failed to unmount backup drive! Please check manually."
        fi
    else
        log_message "Backup drive was not mounted. Skipping unmount."
    fi
    log_message "--- Cleanup finished ---"
}

# --- Main Script ---

# Set a trap to run the cleanup function on any script exit (normal, error, or interrupt)
trap cleanup EXIT

# Start of the script
sudo touch "${LOG_FILE}"
sudo chown "$(whoami):$(whoami)" "${LOG_FILE}"

log_message "================== Starting Server Backup =================="

# Check if backup drive is already mounted (it shouldn't be)
if mountpoint -q "${BACKUP_MOUNT_POINT}"; then
    log_message "ERROR: Backup drive is already mounted. Aborting to prevent issues."
    exit 1
fi

# Mount the backup drive
log_message "Mounting backup drive: ${BACKUP_MOUNT_POINT}"
if ! sudo mount "${BACKUP_MOUNT_POINT}"; then
    log_message "ERROR: Failed to mount backup drive. Aborting."
    exit 1
fi
log_message "Backup drive mounted successfully."

# Stop Docker containers
stop_services

# Run the data backup
log_message "Starting rsync for user data..."
sudo rsync -a --delete "${BACKUP_SRC_DATA}" "${BACKUP_DEST_DATA}"
log_message "User data rsync finished."

# Run the config backup
log_message "Starting rsync for application configs..."
sudo rsync -a --delete "${BACKUP_SRC_CONFIG}" "${BACKUP_DEST_CONFIG}"
log_message "Application configs rsync finished."

log_message "================== Backup Successful =================="

# The 'trap' will automatically call the cleanup function upon this normal exit.
exit 0
```
I am, however, not planning to use the above script. I will simplify the whole setup by not stopping the containers and just taking the backup. Since these are incremental backups and very fast, this shouldn’t affect anything, and it also keeps the services up without disruptions. The trade-off is that live database files can be caught mid-write, so their copies in the backup may not be consistent.
simple_backup_script.sh
```
#!/bin/bash
# ==============================================================================
# Simplified Server Backup Script (Live Backup)
#
# Features:
# - Mounts the backup drive on demand and unmounts it after use.
# - Performs a "live" backup without stopping Docker services.
# - Uses a trap to GUARANTEE the drive is unmounted, even if the script fails.
# - Backs up both critical user data and Docker application configs.
# - Logs all actions to a dedicated log file with timestamps.
# ==============================================================================

# --- Configuration ---

# Exit immediately if a command exits with a non-zero status.
set -e

# Define paths and variables
BACKUP_MOUNT_POINT="/mnt/backup"
LOG_FILE="/var/log/server_backup.log"

# Source directories for backup
BACKUP_SRC_DATA="/mnt/data/"
BACKUP_SRC_CONFIG="/srv/docker-data/"

# Destination directories on the backup drive
BACKUP_DEST_DATA="${BACKUP_MOUNT_POINT}/data_mirror/"
BACKUP_DEST_CONFIG="${BACKUP_MOUNT_POINT}/config_mirror/"

# --- Functions ---

# Function to log messages with a timestamp
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | sudo tee -a "${LOG_FILE}"
}

# Cleanup function that will be called on script exit to unmount the drive
cleanup() {
    log_message "--- Starting cleanup sequence ---"
    # Unmount the backup drive
    if mountpoint -q "${BACKUP_MOUNT_POINT}"; then
        log_message "Unmounting backup drive: ${BACKUP_MOUNT_POINT}"
        if sudo umount "${BACKUP_MOUNT_POINT}"; then
            log_message "Backup drive unmounted successfully."
        else
            log_message "ERROR: Failed to unmount backup drive! Please check manually."
        fi
    else
        log_message "Backup drive was not mounted. Skipping unmount."
    fi
    log_message "--- Cleanup finished ---"
}

# --- Main Script ---

# Set a trap to run the cleanup function on any script exit (normal, error, or interrupt)
trap cleanup EXIT

# Start of the script
sudo touch "${LOG_FILE}"
sudo chown "$(whoami):$(whoami)" "${LOG_FILE}"

log_message "================== Starting LIVE Server Backup =================="

# Check if backup drive is already mounted (it shouldn't be)
if mountpoint -q "${BACKUP_MOUNT_POINT}"; then
    log_message "ERROR: Backup drive is already mounted. Aborting to prevent issues."
    exit 1
fi

# Mount the backup drive
log_message "Mounting backup drive: ${BACKUP_MOUNT_POINT}"
if ! sudo mount "${BACKUP_MOUNT_POINT}"; then
    log_message "ERROR: Failed to mount backup drive. Aborting."
    exit 1
fi
log_message "Backup drive mounted successfully."

# Run the data backup
log_message "Starting rsync for user data..."
sudo rsync -a --delete "${BACKUP_SRC_DATA}" "${BACKUP_DEST_DATA}"
log_message "User data rsync finished."

# Run the config backup
log_message "Starting rsync for application configs..."
sudo rsync -a --delete "${BACKUP_SRC_CONFIG}" "${BACKUP_DEST_CONFIG}"
log_message "Application configs rsync finished."

log_message "================== LIVE Backup Successful =================="

# The 'trap' will automatically call the cleanup function upon this normal exit.
exit 0
```
Automating the script
sudo crontab -e
Then edit the file that opens; cron will use it to run the script at a fixed time every day:
# Run the simplified server backup script every night at 3:00 AM
0 3 * * * /home/john/simple_backup_script.sh
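For reference, the five cron fields in that line break down as:

```
# ┌───────── minute (0)
# │ ┌─────── hour (3 → 3:00 AM)
# │ │ ┌───── day of month (* = every day)
# │ │ │ ┌─── month (* = every month)
# │ │ │ │ ┌─ day of week (* = every day of the week)
# │ │ │ │ │
  0 3 * * *  /home/john/simple_backup_script.sh
```

Because this is the root crontab (`sudo crontab -e`), the script runs as root, so the `sudo` calls inside it will not prompt for a password.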
Also good to know:
# Integrity check: run this manually once every few months. --checksum makes
# rsync read every single bit on both sides, so it takes a long time, but it
# is a great way to detect silent corruption (bit rot).
sudo rsync -av --checksum --delete /mnt/data/ /mnt/backup/data_mirror/
Understanding Your Data Structure
First, let’s recap where your critical data lives:
OS/System Files: On your SSD (/). These are the Ubuntu Server files themselves. Treat this as disposable. It’s faster to reinstall the OS than to try and back up/restore it perfectly.
Application Configuration: On your SSD, inside /srv/docker-data/. This contains:
All your docker-compose.yml and .env files.
The databases for Jellyfin, Immich, Radarr, Sonarr, Paperless, etc. (where they store their library info, settings, user accounts).
Portainer’s configuration.
This data is CRITICAL. Without it, your services will start, but they will be “empty” and need to be set up from scratch.
Bulk User Data: On your 4TB drives:
/mnt/media: Your movie and TV files. While precious, this data is often replaceable if you lose it (Radarr/Sonarr can re-download).
/mnt/data: Your Immich photos, Paperless documents, Syncthing files. This data is CRITICAL and likely irreplaceable.
Backup Data: On your 4TB drive:
/mnt/backup: This is currently empty, but we planned to put backups of /mnt/data here.
Recovery
Below are the steps (provided by AI) for restoring after a disk failure:
How Easy Will Migration/Recovery Be?
Extremely Easy, IF you implement a good backup strategy.
The beauty of Docker (and the way we’ve set it up) is that the application state (configs, databases in /srv/docker-data) is completely separate from the bulk user data (/mnt/media, /mnt/data).
Migrating/Recovering Apps: Is as simple as copying the /srv/docker-data folder to a new machine and running docker compose up -d in each subfolder. The apps will start, find their old databases, and pick up exactly where they left off.
Migrating/Recovering Bulk Data: Is as simple as moving the physical 4TB drives to the new machine and mounting them at the same locations (/mnt/media, /mnt/data).
What Are the Risks?
The main risks depend entirely on your backup strategy (or lack thereof):
Risk of No Backups:
Loss of /mnt/data drive: Irrecoverable loss of photos, documents, etc. (High Risk)
Loss of /mnt/media drive: Loss of media files, requiring potentially weeks of re-downloading via Arrs. (Low/Medium Risk)
Loss of OS SSD: Loss of all application configurations. All services need to be set up from scratch. Bulk data is fine, but apps won’t know about it. (High Risk - Major Inconvenience)
Risk with /mnt/backup Only for /mnt/data (Our Original Plan):
Loss of /mnt/data drive: You lose only the data changed since the last backup (e.g., last night). (Low Risk)
Loss of OS SSD: Still a high risk. You lose all app configs.
Risk with a Comprehensive Backup (What We Need):
If you back up both /mnt/data AND /srv/docker-data to /mnt/backup, your risks become minimal.
Loss of /mnt/data drive: Low risk (lose last night’s changes).
Loss of OS SSD: Low risk (lose last night’s app config changes).
Loss of /mnt/backup drive: High risk (you lose your safety net). This is why a 3-2-1 backup strategy (3 copies, 2 media, 1 offsite) is ideal long-term, but let’s start with local first.
Loss of multiple drives: This is where things get serious, but having /mnt/backup significantly improves your odds.
Recommendations for Seamless Transition (The Backup Plan)
This is the most crucial part. We need to upgrade our backup plan from Step 14.
Goal: Back up both critical data (/mnt/data) and critical configurations (/srv/docker-data) to the /mnt/backup drive nightly.
Tools: We’ll stick with rsync for simplicity and reliability. restic is another excellent tool, often mentioned for Docker backups, offering features like deduplication, encryption, and snapshots, but rsync is perfectly adequate for this local backup scenario.
Steps to Implement the New Backup Plan:
Create Backup Subfolders: Let’s organize the backup drive by creating the two destination folders the script expects: `sudo mkdir -p /mnt/backup/data_mirror /mnt/backup/config_mirror` (these match the script’s `BACKUP_DEST_DATA` and `BACKUP_DEST_CONFIG` paths).
With this backup plan in place, migration and recovery become straightforward.
Detailed Steps for Migration/Recovery Scenarios
Scenario A: /mnt/data Drive Failure
Power down the server.
Replace the failed 4TB drive with a new one.
Power up the server.
Identify the new drive: Use lsblk (it might be /dev/sde temporarily).
Partition and Format: Use sudo fdisk /dev/sde (create a single GPT partition) and sudo mkfs.ext4 /dev/sde1.
Update Mount Point: Find the UUID of the new partition (sudo blkid /dev/sde1). Edit /etc/fstab (sudo nano /etc/fstab) and replace the UUID for the /mnt/data line with the new UUID.
- Set up Samba password: `sudo smbpasswd -a $USER`
3. Mount the Data Drives:
- Find the UUIDs of your three 4TB drive partitions: `sudo blkid`.
- Edit fstab: `sudo nano /etc/fstab`.
- Add lines similar to these, using **your actual UUIDs**:
```
UUID=uuid_for_media_drive /mnt/media ext4 defaults 0 2
UUID=uuid_for_data_drive /mnt/data ext4 defaults 0 2
UUID=uuid_for_backup_drive /mnt/backup ext4 defaults 0 2
```
- Create the mount points: `sudo mkdir /mnt/media /mnt/data /mnt/backup`.
- Mount everything: `sudo mount -a`. Verify with `lsblk`.
4. Restore the Application Configs:
- Create the target directory: `sudo mkdir -p /srv/docker-data`.
- Restore from your backup drive:
```
sudo rsync -av /mnt/backup/config_mirror/ /srv/docker-data/
```
5. Relaunch All Docker Stacks:
- Go into _each_ application's config directory and relaunch its stack. **Order doesn't matter.**
```
cd /srv/docker-data/portainer
sudo docker compose up -d
cd /srv/docker-data/jellyfin
sudo docker compose up -d
cd /srv/docker-data/immich
sudo docker compose up -d
# ...and so on for syncthing, transmission, arr-suite, npm, uptime-kuma, paperless-ngx
```
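Typing each directory gets tedious; the same relaunch can be sketched as a loop. `relaunch_stacks` below is a hypothetical helper (not part of the original setup) that brings up every subfolder containing a docker-compose.yml; prefix `docker compose` with sudo if your user is not in the docker group:

```shell
# Hypothetical helper: relaunch every compose stack under a base directory
relaunch_stacks() {
  local base="$1"
  for dir in "$base"/*/; do
    [ -f "${dir}docker-compose.yml" ] || continue   # skip non-stack folders
    echo "Starting stack: $(basename "$dir")"
    (cd "$dir" && docker compose up -d)
  done
}
# On the server: relaunch_stacks /srv/docker-data
```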
6. Done! All your services will start up, read their restored configurations, connect to the data on /mnt/data and /mnt/media, and function exactly as they did on the old server. You may need to log back into things like Tailscale (sudo tailscale up).
This process looks long when written out, but it’s mostly straightforward Linux setup. The key is that the backup of /srv/docker-data makes restoring your applications trivial.
The backup process was modified on 202510240208 to accommodate the Immich database migration to the SSD, and to use a different way of taking the database backup instead of backing up the live database files, which would almost certainly result in a corrupt copy. Check the Immich note for more details.