
Containers vs virtual machines: when to use which

3 min read · DevOps

"Just use Docker" is the answer to almost every self-hosting question online. But containers and virtual machines solve different problems, and understanding when to use each saves you from shoehorning everything into Docker.

The fundamental difference

A virtual machine runs a complete operating system with its own kernel. It is fully isolated from the host, has its own network stack, and can run any OS (Linux, Windows, FreeBSD). A hypervisor like Proxmox manages the hardware allocation.

A container shares the host's kernel and isolates at the process level. It is lighter and faster but can only run the same OS family as the host (Linux containers on Linux hosts). Docker is the most common container runtime.
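The kernel-sharing difference is easy to demonstrate from the command line. A minimal sketch, assuming Docker is installed and using `alpine` purely as an example image:

```shell
# The host's kernel version:
uname -r

# A container reports the *same* kernel version, because it shares
# the host's kernel rather than booting its own:
docker run --rm alpine uname -r

# A VM, by contrast, would report whatever kernel its guest OS runs.
```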

```mermaid
graph BT
    subgraph VM["Virtual Machine"]
        VA[App] --> VG[Guest OS + Kernel]
        VG --> VH[Hypervisor]
    end
    subgraph Container
        CA[App] --> CR[Container Runtime]
        CR --> CK[Host Kernel]
    end
    VH --> HW1[Host OS / Hardware]
    CK --> HW2[Host OS / Hardware]
```

When I use containers (Docker)

Web applications. A Next.js app, a Python API, a Go service. These are perfect for containers. They start in seconds, use minimal resources, and Docker Compose makes it easy to run multiple services together.

Databases and caches. PostgreSQL, Redis, MongoDB. Running these in containers with named volumes for data persistence is simpler than installing them directly on the host.
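A minimal sketch of that pattern with Docker Compose (service names, versions, and the password are placeholders, not my actual config):

```shell
# docker-compose.yml: Postgres + Redis, each with a named volume
# so data survives container recreation
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder, not a real secret
    volumes:
      - pgdata:/var/lib/postgresql/data
  cache:
    image: redis:7
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:
EOF

docker compose up -d
```

Because the data lives in named volumes, `docker compose down && docker compose up -d` recreates the containers without losing anything.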

Utility services. Pi-hole, Uptime Kuma, Caddy, n8n. Self-hosted tools that run as a single process or a small group of processes work great in containers.
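For a single-process tool, one `docker run` is often enough. A sketch using Uptime Kuma as the example (the port and volume name are assumptions):

```shell
# Single-process utility service: one command, auto-restarts with the host
docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1
```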

When I use virtual machines

Different operating systems. If I need a Windows environment for testing, that requires a VM. Containers cannot run Windows on a Linux host.

Full isolation. When I want a complete separation between workloads, including separate kernels, VMs provide stronger isolation than containers. My development environment runs in a VM separate from my production services.

Proxmox management. Running Docker inside a Proxmox VM gives me the best of both worlds. The VM provides snapshots, live migration, and resource limits. Docker inside it handles individual services.

Testing and experimentation. Spinning up a fresh VM, breaking things, and reverting to a snapshot is faster than rebuilding a Docker environment from scratch. I use VMs for testing new distributions, experimenting with configurations, and learning new tools.
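On Proxmox that workflow is two commands with the `qm` tool. A sketch, where VM ID 100 and the snapshot name are examples:

```shell
# Take a snapshot before experimenting
qm snapshot 100 pre-experiment --description "before trying the new config"

# ...break things inside the VM...

# Roll the whole VM back in one step
qm rollback 100 pre-experiment
```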

LXC containers: the middle ground

Proxmox also supports LXC containers, which sit between Docker containers and full VMs. They run a lightweight Linux environment with their own init system but share the host kernel. They boot in seconds, use less RAM than VMs, and feel more like a lightweight server than an application container.

I use LXC for services that need a full Linux environment but do not need VM-level isolation. AdGuard Home runs in an LXC container because it needs to control its own networking stack, which is awkward in Docker.
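Creating an LXC container on Proxmox is a single `pct create`. A sketch with placeholder IDs, storage, and template names (check available templates with `pveam list local`):

```shell
# Lightweight Debian container: boots in seconds, minimal RAM
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname adguard \
  --memory 512 --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

pct start 200
pct enter 200   # drops you into a shell inside the container
```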

My setup

Proxmox Host
├── VM: Docker Host (runs most services in Docker)
├── VM: Development Environment (isolated from production)
├── VM: TrueNAS (network storage)
├── LXC: AdGuard Home (DNS)
└── LXC: WireGuard (VPN)
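Proxmox keeps both kinds of guests visible side by side:

```shell
qm list    # virtual machines
pct list   # LXC containers
```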

Docker handles the applications, VMs handle isolation and foreign operating systems, and LXC handles lightweight system services. Each tool is used where it fits best.

