My first homelab build: mistakes and lessons
My first homelab was a used Dell OptiPlex I bought for 80 euros. It sat under my desk, ran Ubuntu Server, and hosted Nextcloud and a few Docker containers. I made every beginner mistake possible, and I learned more from that machine than from any tutorial.
What I built
The OptiPlex had an Intel i5, 8GB of RAM, and a 256GB SSD. Not powerful, but enough to run a handful of services. I installed Ubuntu Server, set up Docker, and started deploying things.
My initial setup:
- Nextcloud for file sync
- Pi-hole for ad blocking
- A small Node.js app I was building
- Nginx as a reverse proxy
Mistake 1: no backup strategy
This is the one that hurt. I ran Nextcloud for three months without any backup beyond the single drive it sat on. When the SSD developed bad sectors, I lost some files permanently. Nothing critical, but it was a wake-up call.
What I should have done: Set up automated backups from day one. Even a simple cron job copying data to an external USB drive would have been enough.
Mistake 2: exposing services directly
I port-forwarded individual services to the internet instead of using a VPN for remote access. Every exposed port is an attack surface. I was running services with default passwords accessible from the internet.
What I should have done: Set up WireGuard first, access everything through the VPN, and only expose services through a reverse proxy with proper authentication when needed.
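For reference, a WireGuard server needs surprisingly little configuration. A sketch of /etc/wireguard/wg0.conf, assuming a 10.8.0.0/24 VPN subnet and the default port (addresses and keys here are placeholders):

```
[Interface]
# VPN address of the server itself.
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# A laptop or phone allowed to reach the homelab.
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with `wg-quick up wg0`, forward UDP 51820 on the router, and nothing else ever needs to face the internet.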
Mistake 3: no monitoring
I had no way to know if a service was down unless I tried to use it. Services would crash, Docker containers would run out of memory, and the disk would fill up. I would only discover problems days later.
What I should have done: Install Uptime Kuma on day one. It takes five minutes and tells you immediately when something breaks.
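Uptime Kuma runs as a single container. A sketch, assuming Docker is already installed (the host port and volume name are example choices):

```shell
# Uptime Kuma in one container; monitor data persists in a named volume.
docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1
```

Then open port 3001 on the server in a browser and add one monitor per service.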
Mistake 4: one big Docker Compose file
All my services lived in a single docker-compose.yml. Updating one service risked breaking everything else. A bad config change once brought down my entire stack.
What I should have done: Separate compose files per logical group. The database and its related app in one file, the reverse proxy in another, monitoring in another.
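A layout along these lines, one directory per group, each with its own compose file:

```
homelab/
├── proxy/
│   └── docker-compose.yml      # reverse proxy only
├── nextcloud/
│   └── docker-compose.yml      # Nextcloud + its database
└── monitoring/
    └── docker-compose.yml      # Uptime Kuma
```

Running `docker compose up -d` inside one directory touches only that group, so updating Nextcloud can no longer take down the proxy.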
What I got right
Docker from the start. Installing applications directly on the host would have made everything harder to manage and clean up. Docker kept things isolated and reproducible.
Learning by doing. I could have spent weeks reading documentation before starting. Instead, I installed things, broke things, and fixed things. The hands-on experience was invaluable.
Starting small. One machine, a few services. I did not try to build the perfect homelab on day one. Each problem I solved taught me what to improve next.
What I would do differently
If I were starting over today, my first weekend would look like this:
- Install Proxmox instead of bare Ubuntu (gives me VMs and containers from the start)
- Set up a Docker VM with Caddy as the reverse proxy
- Deploy WireGuard for remote access
- Deploy Uptime Kuma for monitoring
- Set up automated backups with Restic
- Then start adding services
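The Restic step in that list is only a couple of commands. A sketch, assuming a repository on a second drive at /mnt/backup/restic and data under /srv/docker (both paths are examples; Restic also needs a repository password, supplied here via RESTIC_PASSWORD):

```shell
export RESTIC_REPOSITORY=/mnt/backup/restic
export RESTIC_PASSWORD='change-me'   # example only; use a real secret store

# One-time: create the repository.
restic init

# Nightly (from cron): snapshot the data, then prune old snapshots.
restic backup /srv/docker
restic forget --keep-daily 7 --keep-weekly 4 --prune
```

Unlike a plain copy, every run is an incremental, deduplicated snapshot, so keeping weeks of history costs very little disk.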
The fundamentals (reverse proxy, VPN, monitoring, backups) come first. They are less exciting than deploying cool services, but they save you from the mistakes I made.
The homelab today
That OptiPlex has since been replaced. The homelab has grown significantly, but the lessons from that first 80-euro machine are the foundation everything else is built on.