Running OpenClaw on a Cheap VPS — No Mac Mini Required
The Hardware Misconception
A post is spreading on X this week: "Build a Free OpenClaw Setup in 8 Minutes (No Mac Mini)". The reactions show that many people assumed a Mac Mini was a prerequisite.
It's a misconception we also encountered early on.
AI agents don't need expensive hardware — they need a reliable internet connection and a stable process to run in. That can be a Mini PC. But it can just as well be a VPS for €4/month. For us, it's the latter.
This post shows step by step how we built our 6-agent setup on a cheap Hetzner server — for under €10/month, available 24/7, without any local hardware.
---
Why VPS Instead of Local Hardware?
There are legitimate reasons for both. Here's how we thought about it:
VPS advantages:
- Runs 24/7 without any hardware at home
- Low, predictable cost (from €4/month)
- Nothing physical to maintain, power, or keep online

Local hardware advantages:
- One-time purchase instead of a monthly fee
- Data never leaves your own network
- With enough RAM, you can run local models (e.g. via Ollama)
We chose VPS because we use cloud APIs (Anthropic) and don't need local models. If you want to run Ollama locally, a Mini PC with enough RAM makes more sense.
---
Step 1: Create a Server at Hetzner
Hetzner is popular among developers — fair pricing, European data centers, solid SLAs.
Recommended config for a single-agent setup: a CX22 (2 vCPU, 4 GB RAM, 40 GB SSD) at around €4/month, with a current Ubuntu LTS as the image. That comfortably covers 1–3 agents.
After creation you'll get an IP address. First login:
```bash
ssh root@<YOUR-IP>
```
---
Step 2: Basic Hardening and Updates
Immediately after first login — before anything else:
```bash
# Update the system
apt update && apt upgrade -y
# Set up firewall (ufw)
apt install ufw -y
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh # Port 22 — keep SSH!
ufw enable
# Verify
ufw status
```
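If the firewall is active, `ufw status` should print something like this (formatting varies slightly between ufw versions):

```
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
```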
What we do NOT open: HTTP (80), HTTPS (443), or any agent ports. Our agents are only reachable via Tailscale — not a single port is open to the regular internet.
If you're not familiar with Tailscale: it's an encrypted mesh network (WireGuard-based) that connects devices as if they were on the same local network — without open ports. Free for individuals and small teams.
---
Step 3: Install Tailscale
```bash
# Install Tailscale
curl -fsSL https://tailscale.com/install.sh | sh
# Start and log in
tailscale up
# Open the displayed link in your browser and connect with your Tailscale account
```
After login, the server appears in your Tailscale dashboard. You can now reach it by its Tailscale hostname (e.g. `my-hetzner-server`) — from any device also in your Tailscale network.
Optional: Now you can even close the SSH port in UFW and only connect via Tailscale. More secure, but not required.
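If you do lock SSH down to Tailscale, the idea is roughly this — but verify that an SSH connection over the Tailscale hostname actually works before deleting the public rule, or you'll lock yourself out:

```shell
# Allow SSH only on the Tailscale interface ...
ufw allow in on tailscale0 to any port 22 proto tcp

# ... then drop the public SSH rule and verify
ufw delete allow ssh
ufw status
```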
---
Step 4: Install Docker
```bash
# Install Docker (official method)
curl -fsSL https://get.docker.com | sh
# Note: as root (as in this guide), Docker already works without sudo.
# For a non-root user, add them to the docker group and re-login:
#   usermod -aG docker <username>
# Test
docker run hello-world
```
If you see "Hello from Docker!": Docker is running.
---
Step 5: Install OpenClaw
```bash
# Install Node.js 22 (OpenClaw requirement)
curl -fsSL https://deb.nodesource.com/setup_22.x | bash -
apt install nodejs -y
# Check version
node --version # Should be v22.x.x
# Install OpenClaw
npm install -g openclaw@latest
# Check version
openclaw version
```
---
Step 6: Set Up Your First Agent
```bash
# Start the onboarding wizard
openclaw onboard
# The wizard walks you through:
# 1. Choose LLM provider (e.g. Anthropic)
# 2. Enter API key
# 3. Choose model (e.g. claude-sonnet-4-5)
# 4. Set up a channel (e.g. Telegram)
# 5. Set workspace directory
```
Tip for API keys: don't type them directly into prompts if the terminal session is being logged. Better: keep them in a `.env` file with tight permissions:
```bash
# Create .env file
mkdir -p ~/.openclaw/workspace
cat > ~/.openclaw/workspace/.env << 'EOF'
ANTHROPIC_API_KEY=sk-ant-...
TELEGRAM_TOKEN=123456:ABC...
EOF
chmod 600 ~/.openclaw/workspace/.env
```
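For manual runs, you can export everything from that file into your shell; the systemd service in the next step loads the same file via `EnvironmentFile`:

```shell
# Export every variable in the .env into the current shell,
# e.g. before running `openclaw onboard` by hand
set -a                    # auto-export all assignments while sourcing
. ~/.openclaw/workspace/.env
set +a
```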
---
Step 7: Run the Gateway as a systemd Service
This is the difference between "runs when I'm logged in" and "runs always, including after a reboot."
```bash
# Create systemd service
cat > /etc/systemd/system/openclaw.service << 'EOF'
[Unit]
Description=OpenClaw Gateway
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/root/.openclaw/workspace
EnvironmentFile=/root/.openclaw/workspace/.env
# Adjust the path if needed (check with: command -v openclaw)
ExecStart=/usr/local/bin/openclaw gateway start --foreground
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
# Enable and start the service
systemctl daemon-reload
systemctl enable openclaw
systemctl start openclaw
# Check status
systemctl status openclaw
```
From now on, the gateway starts automatically on every server reboot.
```bash
# View logs
journalctl -u openclaw -f
```
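Logs accumulate on a small VPS; journald can be capped so they don't crowd out the workspaces:

```shell
# One-off: shrink the journal to 200 MB
journalctl --vacuum-size=200M

# Persistent: cap future growth by adding SystemMaxUse=200M
# to /etc/systemd/journald.conf, then restart journald
systemctl restart systemd-journald
```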
---
Step 8: Multiple Agents with Docker Compose
For multiple agents, we use Docker Compose. Each agent gets its own container and workspace.
Directory structure:
```
/opt/agents/
├── docker-compose.yml
└── workspaces/
    ├── sam/
    │   ├── SOUL.md
    │   ├── MEMORY.md
    │   └── .env
    ├── peter/
    └── maya/
```
```yaml
# /opt/agents/docker-compose.yml
services:
  agent-sam:
    image: openclaw/openclaw:latest
    container_name: agent-sam
    restart: always
    environment:
      - OPENCLAW_WORKSPACE=/workspace
    env_file:
      - ./workspaces/sam/.env
    volumes:
      - ./workspaces/sam:/workspace:rw
    networks:
      - agents

  agent-peter:
    image: openclaw/openclaw:latest
    container_name: agent-peter
    restart: always
    environment:
      - OPENCLAW_WORKSPACE=/workspace
    env_file:
      - ./workspaces/peter/.env
    volumes:
      - ./workspaces/peter:/workspace:rw
    networks:
      - agents

  agent-maya:
    image: openclaw/openclaw:latest
    container_name: agent-maya
    restart: always
    environment:
      - OPENCLAW_WORKSPACE=/workspace
    env_file:
      - ./workspaces/maya/.env
    volumes:
      - ./workspaces/maya:/workspace:rw
    networks:
      - agents

networks:
  agents:
    driver: bridge
```
```bash
# Start all agents
cd /opt/agents
docker compose up -d
# Check status
docker compose ps
# View logs for one agent
docker compose logs -f agent-sam
```
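Day-to-day operations stay equally simple, e.g. after editing an agent's SOUL.md or when a new image is available:

```shell
cd /opt/agents

# Restart one agent so it picks up workspace changes
docker compose restart agent-sam

# Pull the latest image and recreate all agents
docker compose pull
docker compose up -d
```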
---
Resource Usage in Practice
On our CX32 (4 vCPU, 8 GB RAM) with 6 agents:
| Resource | Usage |
|----------|-------|
| Total RAM | ~2.4 GB (idle) |
| CPU idle | <5% |
| CPU under load | 20–40% briefly |
| Network | <1 GB/month |
| Disk | ~8 GB (logs + workspaces) |
The bulk of the compute happens at the LLM provider, not on the server. Our server is mainly responsible for coordination, channel communication, and file operations — all lightweight.
Bottom line: A CX22 at €4/month handles 1–3 agents comfortably. For 6 agents, go with the CX32 at €9/month.
---
Monitoring: Know What's Running
On a remote server, monitoring matters more than it does locally. Two simple measures:

- External uptime monitoring: a free service like UptimeRobot (uptimerobot.com) alerts you when the server goes down. Since we expose no HTTP port, use a ping (ICMP) monitor rather than an HTTP check.
- Disk warnings: a cron job that checks disk usage and sends a Telegram message above 80%. (Sam does this automatically as part of her heartbeat check.)
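The disk check as a minimal sketch — the threshold, the Telegram variables, and the message text are placeholders to adapt:

```shell
#!/usr/bin/env bash
# disk-alert.sh: send a Telegram message when / fills past a threshold.
# TELEGRAM_TOKEN and TELEGRAM_CHAT_ID are assumed to be set (e.g. from the .env).
THRESHOLD=80

# df --output=pcent prints e.g. " 42%"; keep only the digits
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')

if [ "$usage" -gt "$THRESHOLD" ]; then
  curl -s "https://api.telegram.org/bot${TELEGRAM_TOKEN}/sendMessage" \
    --data-urlencode "chat_id=${TELEGRAM_CHAT_ID}" \
    --data-urlencode "text=$(hostname): disk at ${usage}%" > /dev/null
fi
```

Drop it into `/etc/cron.daily/` (or a crontab entry) and it runs unattended.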
Sam as her own monitor: One of our cron jobs has Sam check the server status once a day and report anything unusual. AI agents as their own monitoring solution — that closes the loop nicely.
---
The Result
A productive 6-agent setup for under €10/month:
- Hetzner CX32: €9/month
- Tailscale: free
- No open ports, no local hardware, available 24/7
This isn't a hobby setup. It's production-ready — and cheaper than a Spotify subscription.
The full playbook documents the exact Docker configuration, workspace structure, Tailscale integration, and system prompts for each of the 6 agents.
Fully available in German too. 🇩🇪
Want to learn more?
Our playbook contains 18 detailed chapters — available in English and German.
Get the Playbook