How Ansible Deploys Service Configs
- The full chain
- Step 1 — Variable defined in group_vars
- Step 2 — Template references the variable
- Step 3 — Template task deploys it
- Step 4 — Handler notified
- Step 5 — Service restarts
- Step 6 — Verify task confirms it works
- Seeing what changed with --diff
- Idempotency — second run does nothing
- Tracing problems in the chain
- Rolling updates with serial:
The full chain
When Ansible deploys a config file, every step has a specific role. Understanding where each piece lives makes it easy to change the right thing and to see what broke.
group_vars/webservers.yml → defines nginx_port = 443
↓
roles/nginx/templates/nginx.conf.j2 → uses {{ nginx_port }}
↓
ansible.builtin.template task → renders and uploads to host
↓
/etc/nginx/nginx.conf on the host → the deployed config
↓
notify: Restart nginx → handler runs if file changed
↓
systemctl restart nginx → service picks up new config
↓
tasks/verify.yml → confirms service is running
Step 1 — Variable defined in group_vars
# inventories/production/group_vars/webservers.yml
---
nginx_port: 443
nginx_server_name: app.example.com
enable_tls: true
nginx_backend_host: 127.0.0.1
nginx_backend_port: 8080
Ansible loads this file automatically for any host in the [webservers] group. The variables are available in all tasks and templates for those hosts.
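A host-specific override follows Ansible's variable precedence, where host_vars beats group_vars. As a sketch, assuming a hypothetical host web03 that should serve plain HTTP:

```yaml
# inventories/production/host_vars/web03.yml  (hypothetical host)
---
nginx_port: 8080     # overrides the group default of 443 for web03 only
enable_tls: false    # host_vars take precedence over group_vars
```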
Step 2 — Template references the variable
# roles/nginx/templates/nginx.conf.j2
user nginx;
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    server {
{% if enable_tls %}
        listen {{ nginx_port }} ssl;
        ssl_certificate /etc/ssl/certs/{{ nginx_server_name }}.crt;
        ssl_certificate_key /etc/ssl/private/{{ nginx_server_name }}.key;
{% else %}
        listen {{ nginx_port }};
{% endif %}
        server_name {{ nginx_server_name }};

        location / {
            proxy_pass http://{{ nginx_backend_host }}:{{ nginx_backend_port }};
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
Ansible renders this file on the controller before uploading it. What the target host receives is already-rendered text — no Jinja2 syntax reaches the server.
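Because rendering happens on the controller, a variable that is undefined for a host fails the task before anything is uploaded. Jinja2's default filter is the usual guard; a sketch, where the fallback values are assumptions:

```jinja
listen {{ nginx_port | default(80) }};
server_name {{ nginx_server_name | default(inventory_hostname) }};
```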
Step 3 — Template task deploys it
# roles/nginx/tasks/config.yml
- name: Deploy nginx configuration
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: '0644'
    validate: nginx -t -c %s  # validate before replacing
  notify: Restart nginx
Key points about the template task:
- src: looks in roles/nginx/templates/ automatically
- validate: runs nginx -t on the rendered file before writing it to disk; if the config is invalid, Ansible fails without touching the live config
- notify: queues the handler only if the file content changed
Step 4 — Handler notified
The notify: Restart nginx in the template task adds "Restart nginx" to the list of handlers to run at the end of the play. Handlers run once per play — if 3 tasks all notify the same handler, it still only runs once.
# roles/nginx/handlers/main.yml
---
- name: Restart nginx
  ansible.builtin.service:
    name: nginx
    state: restarted

- name: Reload nginx
  ansible.builtin.service:
    name: nginx
    state: reloaded
reloaded is safer than restarted for nginx — it applies config changes without dropping existing connections. Use restarted only when a reload is insufficient (for example, after upgrading the nginx binary or changing settings a reload cannot apply).
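Handlers can also subscribe to a topic with listen, so tasks notify a description of the event rather than a specific handler name. A minimal sketch, assuming the same role layout:

```yaml
# roles/nginx/handlers/main.yml — alternative wiring (sketch)
- name: Reload nginx
  ansible.builtin.service:
    name: nginx
    state: reloaded
  listen: nginx config changed   # tasks use `notify: nginx config changed`
```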
Step 5 — Service restarts
After all tasks in the play complete, Ansible runs queued handlers. The nginx service is reloaded, picking up the new config from /etc/nginx/nginx.conf.
If the handler itself fails (e.g. the config had an error that validate missed), Ansible reports the error and the play fails. The config file was already deployed at this point — you need to fix it manually or re-run the playbook with a corrected template.
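To make that recovery easier, the template module supports backup: true, which keeps a timestamped copy of the replaced file on the host. Adding it to the task from Step 3 would look like:

```yaml
- name: Deploy nginx configuration
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    backup: true              # keeps a timestamped copy of the old config
    validate: nginx -t -c %s
  notify: Restart nginx
```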
Step 6 — Verify task confirms it works
# roles/nginx/tasks/verify.yml
- name: Flush handlers before verify
  ansible.builtin.meta: flush_handlers

- name: Check nginx is running and responding
  ansible.builtin.uri:
    url: "http://{{ ansible_default_ipv4.address }}/"
    status_code: [200, 301, 302]
  register: nginx_response

- name: Show nginx response
  ansible.builtin.debug:
    msg: "nginx responded with HTTP {{ nginx_response.status }}"
meta: flush_handlers forces all pending handlers to run immediately, before the verify tasks. Without this, handlers would run after verify — meaning you would verify before the service had restarted.
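If the service needs a moment to come back up, the HTTP check can race the reload. One option is to wait for the port to accept connections first; the timeout value here is an assumption:

```yaml
- name: Wait for nginx to accept connections
  ansible.builtin.wait_for:
    port: 443
    host: "{{ ansible_default_ipv4.address }}"
    timeout: 30   # assumption: 30 seconds is enough for a reload
```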
Seeing what changed with --diff
ansible-playbook site.yml --check --diff --limit web01
Output when the nginx server_name changes:
TASK [nginx : Deploy nginx configuration] *****
--- before: /etc/nginx/nginx.conf
+++ after: /etc/nginx/nginx.conf
@@ -5,7 +5,7 @@
     listen 443 ssl;
     ssl_certificate /etc/ssl/certs/app.example.com.crt;
     ssl_certificate_key /etc/ssl/private/app.example.com.key;
-    server_name app.example.com;
+    server_name newapp.example.com;

changed: [web01]
The diff shows exactly what the template change produces. Review it before removing --check and running for real.
Idempotency — second run does nothing
Run the same playbook again when nothing has changed:
TASK [nginx : Deploy nginx configuration]
ok: [web01] ← no change — file is identical
PLAY RECAP
web01 : ok=8 changed=0 unreachable=0 failed=0
changed=0 means nothing was modified and no handlers ran. This is the expected result on a correctly configured system — safe to run anytime.
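Idempotency is not automatic for every module, though. ansible.builtin.command reports changed on every run unless told otherwise; creates: is the usual fix. A sketch with a hypothetical task:

```yaml
- name: Generate DH parameters once (hypothetical task)
  ansible.builtin.command: openssl dhparam -out /etc/nginx/dhparam.pem 2048
  args:
    creates: /etc/nginx/dhparam.pem   # skipped when the file already exists
```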
Tracing problems in the chain
When the deployed config has the wrong value, trace backwards:
1. Check what is on the host:
   cat /etc/nginx/nginx.conf
2. Check what the template would render:
   ansible-playbook site.yml --check --diff --limit web01 --tags nginx
3. Check the variable value Ansible sees:
   ansible web01 -m debug -a "var=nginx_server_name"
4. Find where the variable is defined:
   grep -r "nginx_server_name" inventories/ roles/
Rolling updates with serial:
When you have multiple web or app servers, applying changes to all of them simultaneously risks bringing the entire fleet down at once if something goes wrong. serial: tells Ansible to process the inventory in batches.
serial: batch sizes
---
- name: Deploy nginx config — rolling
  hosts: webservers
  become: true
  serial: 1  # one host at a time (safest, slowest)
  roles:
    - nginx

# Percentage of the total hosts
serial: "25%"  # 25% of webservers at a time

# Progressive batches — canary pattern
# Run on 1 host first, then 2, then all remaining
serial:
  - 1
  - 2
  - "100%"
The progressive batch pattern (serial: [1, 2, "100%"]) is ideal for production deploys: one host acts as a canary. If it fails, Ansible aborts before touching the rest of the fleet.
max_fail_percentage — abort threshold
---
- name: Deploy with failure threshold
  hosts: webservers
  become: true
  serial: "25%"
  max_fail_percentage: 10  # abort if more than 10% of hosts fail
  roles:
    - nginx
If a batch has 4 hosts and 1 fails (25%), Ansible aborts all subsequent batches because 25% exceeds the 10% threshold. The remaining 75% of the fleet stays on the old version.
any_errors_fatal — immediate full abort
---
- name: Deploy with strict abort
  hosts: webservers
  become: true
  serial: 1
  any_errors_fatal: true  # abort entire play on first failure
  roles:
    - nginx
any_errors_fatal is stricter than max_fail_percentage — one failure on any host in any batch stops everything immediately. Use for database migrations or config changes where partial application is dangerous.
run_once and delegate_to — pre/post-flight tasks
---
- name: Deploy app with rolling restart
  hosts: appservers
  become: true
  serial: 1
  pre_tasks:
    - name: Remove host from load balancer
      ansible.builtin.uri:
        url: "http://lb.internal/api/disable/{{ inventory_hostname }}"
        method: POST
      delegate_to: localhost  # run on the control node, not the target
      run_once: false  # run for each host in the batch
  roles:
    - app
  post_tasks:
    - name: Re-add host to load balancer
      ansible.builtin.uri:
        url: "http://lb.internal/api/enable/{{ inventory_hostname }}"
        method: POST
      delegate_to: localhost
    - name: Wait for host to pass health check
      ansible.builtin.uri:
        url: "http://{{ inventory_hostname }}/health"
        status_code: 200
      register: health_check
      until: health_check.status == 200  # retries need an until condition
      retries: 10
      delay: 5
run_once: true executes a task exactly once across the entire play (useful for one-time setup like notifying a monitoring system). delegate_to: localhost runs a task on the control node for each host in the batch — different hosts, same executor.
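Combining the two gives a play-wide, controller-side notification. A sketch, with a hypothetical monitoring endpoint:

```yaml
post_tasks:
  - name: Tell monitoring the rollout finished   # hypothetical endpoint
    ansible.builtin.uri:
      url: "http://monitor.internal/api/deploy-complete"
      method: POST
    delegate_to: localhost   # runs on the control node
    run_once: true           # fires once for the entire play
```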