
Automating your homelab with Ansible


When you have one server, SSH and manual configuration are fine. When you have three or four, repeating the same setup on each machine gets old fast. Ansible automates the boring parts: installing packages, configuring services, managing users, deploying updates.

What Ansible does

Ansible connects to your servers over SSH and runs tasks you define in YAML files called playbooks. No agent software on the servers, no central database, no complex setup. If you can SSH into a machine, Ansible can manage it.

graph LR
    C[Control Machine<br/>Playbook + Inventory] -->|SSH| S1[Server 1<br/>Run Tasks]
    C -->|SSH| S2[Server 2<br/>Run Tasks]
    C -->|SSH| S3[Server 3<br/>Run Tasks]
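The same SSH transport works for one-off commands too, no playbook required. Assuming the inventory.ini shown later in this post, a quick connectivity check looks like:

```shell
# Ping every host in the inventory over SSH (Ansible's ping module,
# which verifies SSH access and a working Python interpreter)
ansible all -i inventory.ini -m ping

# Run an arbitrary command on just the homelab group
ansible homelab -i inventory.ini -a "uptime"
```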

A simple example

Here is a playbook that installs Docker on a fresh Ubuntu server:

# playbooks/setup-docker.yml
---
- hosts: all
  become: true
  tasks:
    - name: Install prerequisites
      apt:
        name:
          - ca-certificates
          - curl
          - gnupg
        state: present
        update_cache: true
 
    - name: Create the apt keyring directory
      file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"
 
    - name: Add Docker GPG key
      get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: "0644"
 
    - name: Add Docker repository
      apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present
 
    - name: Install Docker
      apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
        state: present
 
    - name: Add user to docker group
      user:
        name: "{{ ansible_user }}"
        groups: docker
        append: true

Run it:

ansible-playbook -i inventory.ini playbooks/setup-docker.yml

Docker is now installed and configured on every server in your inventory. Run it again and nothing changes: each module checks the current state and only acts when something differs (Ansible tasks are idempotent).
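Idempotence also makes dry runs cheap. The standard ansible-playbook flags --check and --diff preview what would change without touching anything:

```shell
# Report what would change, apply nothing
ansible-playbook -i inventory.ini playbooks/setup-docker.yml --check --diff

# Apply for real, but only to a single host from the inventory
ansible-playbook -i inventory.ini playbooks/setup-docker.yml --limit docker-vm
```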

The inventory

Your inventory file lists your servers:

# inventory.ini
[homelab]
proxmox    ansible_host=192.168.1.10
nas        ansible_host=192.168.1.11
docker-vm  ansible_host=192.168.1.12
 
[homelab:vars]
ansible_user=admin
ansible_ssh_private_key_file=~/.ssh/homelab
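Before running anything, it's worth confirming Ansible parses the inventory the way you expect:

```shell
# Show the group/host tree Ansible sees
ansible-inventory -i inventory.ini --graph

# Dump every host with its resolved variables as JSON
ansible-inventory -i inventory.ini --list
```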

What I automate

Initial server setup: Create users, configure SSH (disable password auth, change port), install common packages, set timezone, configure firewall rules. This playbook runs once on every new machine.
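As a sketch, the SSH-hardening part of that setup playbook might use tasks like these (the port number and handler name are illustrative, not from the post):

```yaml
# Illustrative tasks for an initial-setup playbook
- name: Disable SSH password authentication
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: PasswordAuthentication no
  notify: Restart sshd   # handler that restarts the ssh service

- name: Change the SSH port
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?Port '
    line: Port 2222   # example port
  notify: Restart sshd
```

lineinfile edits are idempotent, so re-running the setup playbook on an already-hardened machine changes nothing.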

Docker deployments: Instead of SSH-ing into a server to update a docker-compose.yml, I have playbooks that copy the compose file and restart the service. One command updates any service across any machine.
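A minimal version of such a deployment playbook could look like this (paths and the service name are placeholders):

```yaml
# playbooks/deploy-service.yml (illustrative)
- hosts: docker-vm
  become: true
  tasks:
    - name: Copy the compose file
      copy:
        src: files/uptime-kuma/docker-compose.yml
        dest: /opt/uptime-kuma/docker-compose.yml
      register: compose_file

    - name: Recreate the stack only if the file changed
      command: docker compose up -d
      args:
        chdir: /opt/uptime-kuma
      when: compose_file.changed
```

Registering the copy result and gating the restart on `changed` keeps the playbook idempotent: unchanged compose files cause no restarts.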

System updates: A weekly playbook runs apt update && apt upgrade across all servers and reboots if needed. No more logging into each machine individually.
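The update playbook is short; a sketch of the approach (filename is illustrative) using the apt, stat, and reboot modules:

```yaml
# playbooks/update.yml (illustrative)
- hosts: homelab
  become: true
  tasks:
    - name: Upgrade all packages
      apt:
        upgrade: dist
        update_cache: true

    - name: Check whether a reboot is required
      stat:
        path: /var/run/reboot-required   # created by Ubuntu when needed
      register: reboot_flag

    - name: Reboot if needed
      reboot:
      when: reboot_flag.stat.exists
```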

Backup configuration: The backup scripts, cron jobs, and Restic configuration are all deployed via Ansible. If I rebuild a server, the backup setup comes with it automatically.
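Deploying a scheduled backup job from a playbook can be sketched like this (the script template and schedule are assumptions, not from the post):

```yaml
# Illustrative tasks: deploy a nightly Restic backup job
- name: Install the backup script from a template
  template:
    src: templates/restic-backup.sh.j2
    dest: /usr/local/bin/restic-backup
    mode: "0755"

- name: Schedule the nightly run
  cron:
    name: restic-backup
    hour: "3"
    minute: "0"
    job: /usr/local/bin/restic-backup
```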

Roles for reusability

Once your playbooks get complex, organize them into roles. A role is a bundle of tasks, templates, and variables for a specific purpose:

roles/
  common/         # Base packages, users, SSH config
  docker/         # Docker installation and config
  monitoring/     # Uptime Kuma, node_exporter
  backup/         # Restic setup and cron jobs

A server's playbook becomes a list of roles:

- hosts: docker-vm
  become: true
  roles:
    - common
    - docker
    - monitoring
    - backup
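Inside each role, Ansible resolves files by convention; tasks/main.yml is the entry point. The common role, for example, might be laid out as:

```
roles/common/
  tasks/main.yml       # entry point, loaded automatically
  handlers/main.yml    # e.g. restart sshd
  templates/           # Jinja2 templates
  defaults/main.yml    # default variables, lowest precedence
```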

When Ansible is overkill

If you have one server and a handful of services, a shell script is simpler. Ansible shines when you have multiple machines, when you rebuild servers occasionally, or when you want a documented, version-controlled record of your infrastructure configuration.

For my homelab with three machines, Ansible saves me from the "what did I install on this server again?" problem. The playbooks are the documentation.

