Set Up Your Ubuntu Server in Under 10 Minutes

Skip the manual server setup with a three-stage Ansible approach that gets you from fresh Ubuntu install to production-ready in under 10 minutes.


I’ve deployed dozens of Ubuntu servers over the last couple of years, and I can tell you this: most “quick setup” guides aren’t actually quick when you factor in all the security hardening, service configuration, and inevitable do-overs when something breaks.

After countless hours of trial and error, I’ve distilled my server setup process into three automated stages that consistently get a production-ready Ubuntu server running in under 10 minutes.

This is a follow-up to my previous article on self-hosting with Ansible and Docker, focusing specifically on the initial server preparation that everyone seems to gloss over.

The Reality Check: Why Most Quick Setup Guides Fail

Most Ubuntu server tutorials follow the same pattern: install the OS, run a few commands, maybe configure SSH, and call it done. But in practice, you’re missing crucial pieces:

  • No proper user management - still logging in as root or the default ubuntu user
  • Incomplete security hardening - SSH is wide open, no firewall rules
  • Manual Docker installation - copy-pasting curl commands from the internet
  • No network isolation - everything exposed to the world
  • Zero automation - when something breaks, you start from scratch

The tutorials that do cover these aspects take 45+ minutes and require you to stay glued to your terminal.

My Three-Stage Approach

Instead of one massive script that tries to do everything, I break server setup into three distinct phases:

  1. Stage 1: Bootstrap (2 minutes) - Secure SSH access and admin user creation
  2. Stage 2: System Hardening (5 minutes) - Security, networking, and Docker installation
  3. Stage 3: Production Ready (3 minutes) - Service deployment and monitoring

Each stage builds on the previous one, with clear validation points and the ability to resume from any stage if something goes wrong.

Stage 1: Bootstrap - Getting Secure Access (2 minutes)

The bootstrap stage solves the chicken-and-egg problem of server access. You start with a fresh Ubuntu server accessible via password authentication, and end up with proper SSH key access and a dedicated admin user.

Here’s what happens behind the scenes:

- name: Create admin user for final access
  ansible.builtin.user:
    name: '{{ admin_username }}'
    shell: /bin/bash
    groups: sudo
    append: true
    state: present

- name: Configure passwordless sudo for admin user
  ansible.builtin.lineinfile:
    path: /etc/sudoers.d/{{ admin_username }}
    line: '{{ admin_username }} ALL=(ALL) NOPASSWD: ALL'
    create: true
    validate: 'visudo -cf %s'
    mode: '0440'

- name: Install SSH public key for admin user
  ansible.builtin.authorized_key:
    user: '{{ admin_username }}'
    key: '{{ admin_ssh_public_key }}'
    state: present

- name: Disable SSH password authentication
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '{{ item.regexp }}'
    line: '{{ item.line }}'
    validate: 'sshd -t -f %s'
  loop:
    - { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication no' }
    - { regexp: '^#?PubkeyAuthentication', line: 'PubkeyAuthentication yes' }
  notify: restart sshd
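
The notify: restart sshd above assumes a matching handler is defined under handlers: in the play. A minimal version (note that on Ubuntu the service unit is ssh, not sshd):

- name: restart sshd
  ansible.builtin.service:
    name: ssh  # Ubuntu's OpenSSH service unit
    state: restarted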

Critical insight: Most people make the mistake of disabling password authentication before testing SSH key access. My bootstrap stage keeps both the original user and the new admin user accessible via SSH keys initially, giving you a safety net.

Run this stage with:

ansible-playbook -i inventory/stage1-bootstrap.yml playbooks/bootstrap.yml
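
The per-stage inventory is what lets each stage connect differently: stage 1 logs in as the stock user with a password, later stages as the new admin over SSH keys. A hypothetical stage 1 inventory - the hostname, IP, and vault variable below are placeholders, not what my repo ships:

all:
  hosts:
    fresh-server:
      ansible_host: 203.0.113.10
      ansible_user: ubuntu  # stock cloud-image user
      ansible_password: '{{ vault_initial_password }}'  # key auth isn't set up yet
      ansible_become: true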

After 2 minutes, you have an admin user with SSH keys and password authentication disabled - but you can still access the server if something goes wrong.
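
Before moving on, it’s worth confirming key-only access from your workstation (user and IP are placeholders):

ssh -o PasswordAuthentication=no admin@203.0.113.10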

Stage 2: System Hardening - Production Security (5 minutes)

This is where most DIY approaches fall apart. You need to coordinate multiple security changes without locking yourself out. My second stage handles:

  • SSH hardening with proper timeout and connection limits (see the sketch after this list)
  • UFW firewall configuration with Docker integration
  • Tailscale VPN setup for secure remote access
  • Docker installation with security configurations
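
For the SSH-hardening item, the tasks follow the same pattern as the Stage 1 snippet, just with stricter settings. A sketch - the exact values here are my assumptions, not necessarily what the repo ships:

- name: Tighten SSH timeouts and connection limits
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '{{ item.regexp }}'
    line: '{{ item.line }}'
    validate: 'sshd -t -f %s'
  loop:
    - { regexp: '^#?ClientAliveInterval', line: 'ClientAliveInterval 300' }
    - { regexp: '^#?ClientAliveCountMax', line: 'ClientAliveCountMax 2' }
    - { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
  notify: restart sshd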

The key here is the UFW-Docker integration. Standard Ubuntu setups have a massive security hole: Docker bypasses UFW rules entirely, potentially exposing container ports to the internet.

- name: Add UFW-Docker security rules
  ansible.builtin.blockinfile:
    path: /etc/ufw/after.rules
    block: |
      # BEGIN UFW AND DOCKER
      *filter
      :DOCKER-USER - [0:0]
      -A DOCKER-USER -j ufw-user-forward
      -A DOCKER-USER -j RETURN -s 10.0.0.0/8
      -A DOCKER-USER -j RETURN -s 172.16.0.0/12
      -A DOCKER-USER -j RETURN -s 192.168.0.0/16
      # Block external access to Docker containers
      -A DOCKER-USER -j DROP
      COMMIT
      # END UFW AND DOCKER
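
The rules only take effect after UFW reloads. A quick manual check that the chain landed:

sudo ufw reload
sudo iptables -L DOCKER-USER -n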

Tailscale installation is also automated, giving you secure VPN access even if the direct IP becomes unreachable:
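# The tailscale package isn't in Ubuntu's default repos, so the playbook first
# adds the official apt source. These two tasks are my sketch of that step;
# 'noble' is an assumption - match it to your Ubuntu release.
- name: Add Tailscale signing key
  ansible.builtin.get_url:
    url: https://pkgs.tailscale.com/stable/ubuntu/noble.noarmor.gpg
    dest: /usr/share/keyrings/tailscale-archive-keyring.gpg
    mode: '0644'

- name: Add Tailscale apt repository
  ansible.builtin.apt_repository:
    repo: 'deb [signed-by=/usr/share/keyrings/tailscale-archive-keyring.gpg] https://pkgs.tailscale.com/stable/ubuntu noble main'
    state: present
    update_cache: true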

- name: Install Tailscale
  ansible.builtin.apt:
    name: tailscale
    state: latest

- name: Authenticate with Tailscale
  ansible.builtin.command:
    cmd: 'tailscale up --authkey={{ tailscale_auth_key }} --hostname={{ server_hostname }} --accept-risk=lose-ssh'

- name: Reboot the server to apply updates and hardening
  ansible.builtin.reboot:

Run this stage with:

ansible-playbook -i inventory/stage2-hardened.yml playbooks/system-setup.yml

After this stage completes, the server reboots automatically. You can then access it via either its direct IP or Tailscale hostname.
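
A quick post-reboot sanity check - the user, IP, and tailnet name below are placeholders:

ssh admin@203.0.113.10 'echo direct ok'
ssh admin@myserver.your-tailnet.ts.net 'echo tailscale ok'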

Stage 3: Production Ready - Service Deployment (3 minutes)

The final stage focuses on getting your server ready for actual workloads:

  • Storage configuration (if using multiple drives)
  • Service deployment with Docker Compose
  • Monitoring setup with proper log aggregation
  • Backup system configuration

For most self-hosted setups, this means launching your core services like Portainer, Traefik, monitoring tools, etc.
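
In my repo these deployments boil down to Docker Compose projects driven from Ansible. A minimal sketch using the community.docker collection - the path is a placeholder, and the real playbook does more than this:

- name: Deploy core service stack
  community.docker.docker_compose_v2:
    project_src: /opt/stacks/core  # directory containing compose.yaml (placeholder)
    state: present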

ansible-playbook -i inventory/stage3-production.yml playbooks/services.yml

The Gotchas I Learned the Hard Way

1. Docker’s UFW Bypass
Docker creates its own iptables rules that completely bypass UFW. Without proper configuration, your containers might be exposed to the internet even with UFW enabled. The fix requires modifying /etc/ufw/after.rules to force Docker traffic through UFW’s chain.

2. SSH Configuration Testing
Always test SSH configuration syntax with sshd -t before restarting the service. I’ve locked myself out of servers because of typos in sshd_config. The playbook snippet in Stage 1 includes a validate step to prevent this.
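
Outside of Ansible, the same check is a one-liner worth running before any manual restart:

sudo sshd -t -f /etc/ssh/sshd_config && sudo systemctl restart ssh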

3. Tailscale Authentication
The --authkey flag is essential for non-interactive setup, and --accept-risk=lose-ssh is crucial when setting up Tailscale on a remote server. Without them, the command waits for an interactive confirmation that never comes. The Stage 2 playbook example includes both.

4. UFW Default Policies
Set UFW policies before enabling rules. The order matters:

ufw default deny incoming
ufw default allow outgoing
ufw enable
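
If you’d rather keep this in the playbook, the community.general.ufw module expresses the same ordering - a sketch, not necessarily the exact tasks from my repo:

- name: Set default UFW policies
  community.general.ufw:
    direction: '{{ item.direction }}'
    policy: '{{ item.policy }}'
  loop:
    - { direction: incoming, policy: deny }
    - { direction: outgoing, policy: allow }

- name: Enable UFW
  community.general.ufw:
    state: enabled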

5. Admin User Sudo
Don’t forget to add your admin user to the sudo group AND configure passwordless sudo. Otherwise, automation breaks on privilege escalation. The Stage 1 playbook example handles both for you.

Performance Insights

Total time breakdown:

  • Stage 1 (Bootstrap): ~2 minutes
  • Stage 2 (Hardening): ~5 minutes
  • Automated reboot: ~1 minute
  • Stage 3 (Services): ~3 minutes

Network transfer: ~200MB for all packages (Docker, Tailscale, system updates)

What takes the longest: Docker installation and system package updates. Using a local apt cache or mirror can cut this in half.
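
If you run apt-cacher-ng (or any caching proxy) on your LAN, pointing apt at it is a single drop-in file - the address below is a placeholder for your cache host:

# /etc/apt/apt.conf.d/01proxy
Acquire::http::Proxy "http://192.168.1.10:3142";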

Why This Approach Works

1. Idempotent Operations
Every Ansible task can be run multiple times safely. If something fails, just re-run that stage.

2. Clear Validation Points
Each stage ends with connectivity verification. You know immediately if something went wrong.

3. Rollback Safety
SSH configuration changes include syntax validation and backup creation. Firewall rules are reset to known-good states.

4. Scalable
Need to set up 10 servers? Change the inventory file and run the same playbooks.

Common Alternatives and Why They Fall Short

Cloud-init scripts: These run once during instance creation. If they fail, you’re rebuilding from scratch.

Docker containers for the OS: Adds complexity and doesn’t address the underlying security hardening needs.

Manual setup scripts: Work once, break when Ubuntu versions change or packages get updated.

Configuration management alternatives (Puppet, Chef): Overkill for initial server setup, primarily designed for ongoing configuration drift management.

Getting Started

The complete automation is available in my self-hosted repository. To adapt it for your environment:

  1. Fork and customize the inventory files with your server IPs and user details
  2. Update group_vars/all.yml with your domain and network settings
  3. Create group_vars/vault.yml with your SSH keys and Tailscale auth key
  4. Run the three stages in order
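
For orientation, the pieces referenced above sit roughly like this - a sketch of the layout, not guaranteed to match the repo file-for-file:

inventory/
  stage1-bootstrap.yml
  stage2-hardened.yml
  stage3-production.yml
group_vars/
  all.yml    # domain and network settings
  vault.yml  # SSH keys and Tailscale auth key, encrypted with ansible-vault
playbooks/
  bootstrap.yml
  system-setup.yml
  services.yml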

The key is understanding that automation isn’t just about speed - it’s about repeatability and confidence. When your server setup is fully automated, you can experiment freely, knowing you can always get back to a known-good state in under 10 minutes.


In my next post, I’ll cover the service deployment patterns I use to get applications running reliably on this hardened foundation, including proper secrets management, backup automation, and zero-downtime updates.

Have questions about the setup process or want to share your own automation approaches? Let me know on Twitter.
