2026-03-17 · 7 min read

OpenClaw Backup and Migration: How to Back Up, Restore, and Move Servers Without Losing Data

Backup · Migration · Disaster Recovery · OpenClaw · Best Practices

The Night That Created This Post

It was 2:00 AM. Sam's heartbeat cron fired — no response. Atlas's morning report didn't appear. I opened the Hetzner dashboard: server status "Failed."

The storage attachment had errored. The root volume was no longer writable. All Docker containers: stopped. The Gateway: offline.

The genuinely painful part: we had no proper backup. The Hetzner snapshots were three days old. The SOUL.md files, MEMORY.md entries, cron job configurations, the fine-tuned skills — all potentially gone.

We recovered most of the data. But it took three hours for what should have been 20 minutes.

This post is what we built afterward. You'll hopefully never be in this situation — but if you are, you'll be glad you read this.

---

What Actually Needs to Be Backed Up in OpenClaw

Before we talk tools: what do you actually lose when a server dies?

Tier 1: Critical — without this, nothing is recoverable

  • `~/.openclaw/openclaw.json` — main config (models, providers, channel tokens)
  • `.env` files per agent — all API keys
  • `SOUL.md` per agent — personality and behavior
  • `AGENTS.md` per agent — behavior rules and learned lessons

Tier 2: Important — significant quality loss without this

  • `MEMORY.md` per agent — curated long-term memory
  • `memory/YYYY-MM-DD.md` files — daily notes (last 14 days)
  • `HEARTBEAT.md` — prioritization list and active tasks
  • Installed skills (`~/.openclaw/workspace/skills/`)

Tier 3: Useful — can be reconstructed, but costs time

  • Cron job configurations (OpenClaw stores these internally)
  • Session logs (useful for debugging)
  • `memory/heartbeat-state.json` — tracking timestamps

What you do NOT need to back up:

  • The OpenClaw package itself (`npm install -g openclaw` is enough to restore)
  • Docker images (automatically pulled)
  • Node.js and system dependencies
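Before any backup tooling runs, it helps to know whether the tier 1 and tier 2 files are actually where you expect them. A minimal sketch (the `check_paths` helper and the example agent paths are our own illustration, assuming the layout above):

```shell
#!/bin/bash
# Hypothetical pre-backup sanity check: report any critical path that is
# missing, so a backup never silently archives an incomplete workspace.
check_paths() {
  local missing=0
  for p in "$@"; do
    if [ ! -e "$p" ]; then
      echo "MISSING: $p"
      missing=$((missing + 1))
    fi
  done
  echo "$missing path(s) missing"
}

# Tier 1 and tier 2 locations from the lists above (sam shown as example)
check_paths \
  "$HOME/.openclaw/openclaw.json" \
  /opt/agents/workspaces/sam/.env \
  /opt/agents/workspaces/sam/SOUL.md \
  /opt/agents/workspaces/sam/MEMORY.md
```

Anything it reports as missing is a gap in either the workspace layout or the backup plan.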
---

The Backup System: Automated, Daily, Two-Pronged

We back up in two ways simultaneously:

Method 1: Git Repository for All Workspace Files

The elegant thing about OpenClaw workspaces: they're mostly text files. Markdown, JSON, shell scripts. That's exactly the format Git was built for.

```bash
# One-time setup: initialize each workspace as a Git repository
cd /opt/agents/workspaces

# For each agent:
cd sam
git init
git remote add origin git@github.com:your-org/openclaw-workspace-sam.git

# Create .gitignore — API keys MUST NOT be committed
cat > .gitignore << 'EOF'
.env
.env.local
*.env
memory/heartbeat-state.json
EOF

git add .
git commit -m "initial: sam workspace backup"
git push -u origin main
```

Daily backup cron (runs at 03:00 UTC in our setup):

```
Schedule: 0 3 * * *

Prompt:
Run a git commit and push for all agent workspaces:
- cd /opt/agents/workspaces/sam && git add -A && git commit -m "daily backup $(date +%Y-%m-%d)" && git push origin main
- (same commands for peter, maya, alex, iris, atlas)

Report briefly: how many files changed (or "no changes" if nothing is new).
```

Advantage: full history. You can go back to any day and see what MEMORY.md contained at that point. Or roll back to a week ago if an agent starts acting strangely.
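The rollback itself is two git commands. A self-contained sketch (it builds a throwaway repo so you can try it anywhere; in production you would run the same `checkout` against a real commit inside the agent's workspace):

```shell
# Throwaway demo: roll a single file back to an earlier commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"

echo "old memory" > MEMORY.md
git add MEMORY.md && git commit -qm "backup day 1"
OLD_COMMIT=$(git rev-parse HEAD)

echo "new memory" > MEMORY.md
git add MEMORY.md && git commit -qm "backup day 2"

# The actual rollback: restore just MEMORY.md, leave everything else alone
git checkout "$OLD_COMMIT" -- MEMORY.md
cat MEMORY.md   # → old memory
```

Because the rollback targets a single file, the rest of the workspace (skills, SOUL.md, daily notes) keeps its current state.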

Method 2: Encrypted Archive of the Complete OpenClaw Config

The Git approach backs up workspaces — but not the OpenClaw configuration itself, and not the API keys (which don't belong in Git).

For those, we back up a daily encrypted archive to Hetzner Object Storage:

```bash
#!/bin/bash
# /opt/backup/openclaw-backup.sh
# Run daily — should NOT be in Git (contains encryption key reference)
set -euo pipefail

BACKUP_DATE=$(date +%Y-%m-%d)
BACKUP_NAME="openclaw-backup-$BACKUP_DATE"
BACKUP_DIR="/tmp/$BACKUP_NAME"
BACKUP_FILE="/tmp/$BACKUP_NAME.tar.gz.gpg"

# Collect the directories to back up
mkdir -p "$BACKUP_DIR"
cp -r ~/.openclaw/ "$BACKUP_DIR/openclaw-config"
cp -r /opt/agents/workspaces/ "$BACKUP_DIR/workspaces"

# Archive and encrypt (-C keeps paths relative, so restoring
# extracts to /tmp/openclaw-backup-<date>/ instead of /tmp/tmp/...)
tar czf - -C /tmp "$BACKUP_NAME" \
  | gpg --symmetric --cipher-algo AES256 --pinentry-mode loopback \
        --passphrase "$BACKUP_ENCRYPTION_KEY" --batch \
  > "$BACKUP_FILE"

# Upload to Hetzner S3-compatible Object Storage
aws s3 cp "$BACKUP_FILE" "s3://your-backup-bucket/openclaw/$BACKUP_DATE.tar.gz.gpg" --endpoint-url https://fsn1.your-objectstorage.com

# Cleanup
rm -rf "$BACKUP_DIR" "$BACKUP_FILE"

echo "Backup $BACKUP_DATE complete."
```

The `BACKUP_ENCRYPTION_KEY` lives only in the `.env` on the server — not in the script, not in Git.
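How the key reaches the script without living inside it: the cron wrapper sources the `.env` with `set -a`, so the variable is exported to child processes. A sketch of the pattern (the temp file here stands in for `/opt/backup/.env`, which is an assumption about your layout):

```shell
# Demo of loading BACKUP_ENCRYPTION_KEY from an env file at runtime.
# In production this file would be /opt/backup/.env with mode 600.
envfile=$(mktemp)
echo 'BACKUP_ENCRYPTION_KEY=example-not-a-real-key' > "$envfile"
chmod 600 "$envfile"

set -a              # export every variable the env file defines
source "$envfile"
set +a

# Child processes (like openclaw-backup.sh) now inherit the key:
bash -c 'echo "key length: ${#BACKUP_ENCRYPTION_KEY}"'   # → key length: 22
```

The `set -a`/`set +a` bracket is what makes the sourced variables visible to the backup script; a plain `source` alone would keep them shell-local.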

Retention: we keep the last 30 daily backups. Older ones are automatically deleted:

```bash
# Delete all but the newest 30 backups
aws s3 ls "s3://your-backup-bucket/openclaw/" --endpoint-url https://fsn1.your-objectstorage.com \
  | awk '{print $4}' | sort | head -n -30 \
  | xargs -I{} aws s3 rm "s3://your-backup-bucket/openclaw/{}" --endpoint-url https://fsn1.your-objectstorage.com
```

---

Restore: From Zero to Full Operation in Under 20 Minutes

This is the test that shows whether a backup system actually works. We ran our own recovery once as a "dry run" — here's the guide.

Step 1: Provision a New Server (0-5 minutes)

Create a Hetzner CX32, add your SSH key, note the IP. This takes under 3 minutes.

Step 2: Base Setup (5-8 minutes)

```bash
# Connect
ssh root@<NEW-IP>

# Update system
apt update && apt upgrade -y

# Essentials
apt install -y curl git ufw

# Firewall
ufw default deny incoming && ufw default allow outgoing
ufw allow ssh && ufw enable

# Tailscale
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up
# → Authorize in browser

# Docker
curl -fsSL https://get.docker.com | sh

# Node.js 22
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt install -y nodejs

# OpenClaw
npm install -g openclaw@latest
```

Step 3: Download and Decrypt Backup (8-12 minutes)

```bash
# Install AWS CLI (for Object Storage)
apt install -y awscli

# Download the most recent backup — if today's 03:00 run never happened,
# use yesterday's date (or list the bucket and pick the newest object)
BACKUP_DATE=$(date +%Y-%m-%d)
aws s3 cp "s3://your-backup-bucket/openclaw/$BACKUP_DATE.tar.gz.gpg" /tmp/restore.tar.gz.gpg --endpoint-url https://fsn1.your-objectstorage.com

# Decrypt and extract
# Get the encryption key from your password manager (1Password, Bitwarden, etc.)
gpg --decrypt --pinentry-mode loopback --passphrase "$BACKUP_ENCRYPTION_KEY" --batch /tmp/restore.tar.gz.gpg | tar xz -C /tmp/

# Move files to the right places ("/." copies the directory's contents,
# so the config doesn't end up nested as ~/.openclaw/openclaw-config)
mkdir -p ~/.openclaw /opt/agents
cp -r /tmp/openclaw-backup-*/openclaw-config/. ~/.openclaw/
cp -r /tmp/openclaw-backup-*/workspaces /opt/agents/workspaces
```

Step 4: Restore API Keys from Password Manager

This is the only step that stays manual — and that's intentional. Storing API keys in backups is a security risk.

We store all OpenClaw API keys in 1Password as "Secure Notes." On a restore: open 1Password, copy all keys into the new `.env`.

```bash
# Recreate .env per agent
cat > /opt/agents/workspaces/sam/.env << 'EOF'
ANTHROPIC_API_KEY=<from 1Password>
TELEGRAM_TOKEN=<from 1Password>
DISCORD_TOKEN=<from 1Password>
CLICKUP_API_TOKEN=<from 1Password>
EOF

chmod 600 /opt/agents/workspaces/sam/.env

# Repeat for all other agents
```
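If the keys live in 1Password, the copy step can be scripted with the 1Password CLI (`op read`, CLI v2). The vault, item, and field names below are placeholders, and the snippet falls back to a stub when `op` is not installed, so treat it as a sketch rather than a drop-in:

```shell
# Hypothetical scripted variant of step 4 using the 1Password CLI.
if ! command -v op >/dev/null 2>&1; then
  # Stub so the sketch runs without 1Password installed
  op() { echo "placeholder-secret"; }
fi

ENV_FILE=$(mktemp)   # in production: /opt/agents/workspaces/sam/.env
cat > "$ENV_FILE" << EOF
ANTHROPIC_API_KEY=$(op read "op://OpenClaw/sam/anthropic_api_key")
TELEGRAM_TOKEN=$(op read "op://OpenClaw/sam/telegram_token")
EOF
chmod 600 "$ENV_FILE"

grep -c '=' "$ENV_FILE"   # → 2
```

With real `op` access, this removes the last manual step from the restore while still keeping the keys out of the backup archive.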

Step 5: Start Docker Compose and Gateway (12-20 minutes)

```bash
# Start Docker Compose
cd /opt/agents
docker compose up -d

# Set up the Gateway as a systemd service (as described in the VPS setup post)
systemctl enable openclaw
systemctl start openclaw

# Test
openclaw gateway status
openclaw channels list
```

When everything is green: agents are online. MEMORY.md, SOUL.md, skills, cron jobs — all restored.

Our actual time on the real server restore after the outage: 23 minutes. With this playbook it would have been under 20.

---

Server Migration: When You're Switching Providers

Migration is technically the same as a restore — but with the old server still running, which makes it easier.

```bash
# On the old server: force a final backup
cd /opt/backup && bash openclaw-backup.sh

# On the new server: run the normal restore process (steps 1-5 above)

# Test that everything works before shutting down the old server

# Tailscale: remove the old server from the network
# → admin.tailscale.com → Devices → old server → Remove
```

The trap: if two servers are running simultaneously and both have the same Telegram/Discord bots connected, the agent receives messages twice — and responds twice. Before starting the new instance: stop the Gateway on the old server.

```bash
# On the old server, before the new one starts:
systemctl stop openclaw
cd /opt/agents && docker compose stop
```

---

Backup Verification: Trust Is Good, Testing Is Better

A backup that's never been tested isn't a backup. It's the illusion of a backup.

We run a "restore drill" every month — not on a real server, but locally in a VM:

```bash
# Locally: temporary VM or Docker container as a test environment
docker run -it --rm ubuntu:24.04 /bin/bash

# Inside the container: run through the complete restore process
# (steps 2-5 above — provisioning is the only step you skip)
# Goal: how long does it take? What's missing? What doesn't work?
```

This sounds like overhead. In practice, the drill takes 30-40 minutes once a month. The benefit: you know that in an emergency you won't be surprised.
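Part of the drill can also be automated between full runs. A sketch that round-trips a dummy archive through the same tar-and-gpg pipeline as the backup script, so a broken gpg setup or a wrong passphrase surfaces before a real emergency (paths and passphrase here are throwaway values):

```shell
# Round-trip test of the tar | gpg encrypt/decrypt pipeline.
set -e
KEY="drill-test-passphrase"
SRC=$(mktemp -d)
OUT=$(mktemp -d)
echo "canary" > "$SRC/SOUL.md"

# Encrypt the same way the backup script does
tar czf - -C "$SRC" . \
  | gpg --symmetric --cipher-algo AES256 --pinentry-mode loopback \
        --passphrase "$KEY" --batch \
  > /tmp/drill.tar.gz.gpg

# Decrypt the same way the restore guide does
gpg --decrypt --pinentry-mode loopback --passphrase "$KEY" --batch \
  /tmp/drill.tar.gz.gpg | tar xz -C "$OUT"

cat "$OUT/SOUL.md"   # → canary
```

If the canary file comes back intact, the encryption key and the pipeline both work; only the Object Storage download remains untested.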

---

What We Learned

After the server outage and everything that followed:

1. Two backup methods are better than one. Git for workspace files, an encrypted archive for everything else.

2. Never put API keys in the backup. They belong in the password manager, managed separately.

3. Daily backups, not weekly. MEMORY.md changes every day — a weekly backup means up to 7 days of data loss.

4. Test the restore at least once. An untested backup is a risk, not a safety net.

5. Hetzner snapshots aren't a substitute. They don't cover all scenarios and are slow to restore from.

Three hours of downtime and lost work taught us what three hours of backup setup would have prevented.

---

The complete backup system — including the full shell scripts, the 1Password structure for API keys, and the monthly restore drill checklist — is documented in the OpenClaw Setup Playbook.

18 chapters, based on real production experience. Including the mistakes we made.

Fully available in German too. 🇩🇪
