Ansible Handlers & Templates in Practice
- How handlers work — timing
- Multiple handlers in a role
- Multiple tasks notifying one handler
- listen — group notify targets
- flush_handlers — run before verify
- What happens when a handler fails
- Multiple templates in a role
- OS-aware templates
- Template validation
- Controlling whitespace in templates
- ansible_managed header
- {% raw %} block
- lookup() — reading files and commands
- backup: yes best practice
How handlers work — timing
Handlers are tasks that only run when notified by another task that reported a change. The critical thing to understand about timing:
- Handlers run at the end of the play, not immediately when notified
- A handler notified multiple times still only runs once
- Handlers run in the order they are defined in handlers/main.yml
- If the play fails before completion, handlers that were queued do not run
Use meta: flush_handlers to force queued handlers to run immediately instead of waiting for the end of the play.
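The rules above in one minimal, self-contained play (the service, template, and file names are illustrative, not taken from a real role):

```yaml
---
# Sketch: handlers queue until the end of the play unless flushed
- hosts: web
  tasks:
    - name: Deploy config (queues the handler only if the file changed)
      ansible.builtin.template:
        src: app.conf.j2          # hypothetical template
        dest: /etc/app/app.conf
      notify: Restart app

    - name: Run any queued handlers right now
      ansible.builtin.meta: flush_handlers

    # Tasks from here on see the restarted service

  handlers:
    - name: Restart app
      ansible.builtin.service:
        name: app
        state: restarted
```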
Multiple handlers in a role
A realistic nginx role has handlers for different situations:
---
# roles/nginx/handlers/main.yml
- name: Reload nginx
  ansible.builtin.service:
    name: nginx
    state: reloaded

- name: Restart nginx
  ansible.builtin.service:
    name: nginx
    state: restarted

- name: Validate nginx config
  ansible.builtin.command: nginx -t
  changed_when: false
Different tasks notify different handlers based on what changed:
# Changing nginx.conf → reload is enough
- name: Deploy nginx.conf
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: Reload nginx

# Changing TLS cert → requires full restart to re-read the cert
- name: Deploy TLS certificate
  ansible.builtin.copy:
    src: "{{ nginx_cert_file }}"
    dest: /etc/ssl/certs/nginx.crt
  notify: Restart nginx
Multiple tasks notifying one handler
When two or more tasks all notify the same handler, the handler runs once:
tasks:
  - name: Deploy nginx.conf
    ansible.builtin.template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: Reload nginx  # queued

  - name: Deploy default site config
    ansible.builtin.template:
      src: default.conf.j2
      dest: /etc/nginx/conf.d/default.conf
    notify: Reload nginx  # already queued — handler still runs only once

  - name: Deploy SSL config
    ansible.builtin.template:
      src: ssl.conf.j2
      dest: /etc/nginx/conf.d/ssl.conf
    notify: Reload nginx  # still just once
This is correct and expected behaviour. No matter how many config files change in one run, nginx reloads once at the end.
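Illustrative (abridged) output of such a run, assuming all three files changed — three changed tasks, but only one handler execution:

```text
TASK [Deploy nginx.conf] *****************
changed: [web01]

TASK [Deploy default site config] ********
changed: [web01]

TASK [Deploy SSL config] *****************
changed: [web01]

RUNNING HANDLER [Reload nginx] ***********
changed: [web01]
```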
listen — group notify targets
listen lets multiple handlers respond to a single notification topic. Useful when "restart the mail stack" should trigger several handlers:
---
# handlers/main.yml
- name: Restart postfix
  ansible.builtin.service:
    name: postfix
    state: restarted
  listen: restart mail stack

- name: Restart dovecot
  ansible.builtin.service:
    name: dovecot
    state: restarted
  listen: restart mail stack

# tasks/main.yml (the notifying task lives in the task list, not with the handlers)
- name: Deploy TLS cert for mail
  ansible.builtin.copy:
    src: mail.crt
    dest: /etc/ssl/certs/mail.crt
  notify: restart mail stack  # runs BOTH handlers above
flush_handlers — run before verify
The most common reason to use meta: flush_handlers is to ensure a service is restarted before a verify task checks if it is responding correctly.
---
# roles/nginx/tasks/verify.yml
# Force all pending handlers to run NOW before we verify
- name: Flush handlers
  ansible.builtin.meta: flush_handlers

# Now the service has definitely been reloaded/restarted
- name: Check nginx responds
  ansible.builtin.uri:
    url: "http://{{ ansible_default_ipv4.address }}/"
    status_code: [200, 301, 302]
Without flush_handlers, the verify task would run while the service is still on its old configuration, because the queued reload/restart handler has not fired yet. The check can then pass against the old config: a false pass that hides a broken change.
What happens when a handler fails
If a handler errors, Ansible marks the host as failed, stops running against it, and skips any remaining queued handlers for that host. The config file was already deployed, so the host is left in a mixed state: new file on disk, old config still loaded. You will need to fix it before re-running.
RUNNING HANDLER [nginx : Reload nginx]
fatal: [web01]: FAILED! => {"changed": false, "msg": "Job for nginx.service failed..."}
PLAY RECAP
web01: ok=7 changed=3 unreachable=0 failed=1
At this point on the host:
# Config was deployed but service failed to reload
nginx -t # test if the config is valid
journalctl -u nginx -n 30 # read the error
nginx -t -c /etc/nginx/nginx.conf # detailed syntax check
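A related failure mode: when a regular task fails after handlers were queued, the queued handlers are normally abandoned (per the timing rules above). The force_handlers play keyword (also settable in ansible.cfg or via --force-handlers on the command line) makes queued handlers run even on a failed host; a sketch:

```yaml
---
- hosts: web
  force_handlers: true  # queued handlers still run if a later task fails
  roles:
    - nginx
```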
Multiple templates in a role
A mail server role might manage several config files:
roles/postfix/templates/
├── main.cf.j2 # main postfix config
├── master.cf.j2 # service definitions (rarely changed)
├── sasl_passwd.j2 # relay authentication (vault-protected vars)
└── virtual.j2 # virtual alias maps
---
# roles/postfix/tasks/config.yml
- name: Deploy main.cf
  ansible.builtin.template:
    src: main.cf.j2
    dest: /etc/postfix/main.cf
    validate: postfix -c $(dirname %s) check
  notify: Reload postfix

- name: Deploy virtual alias map
  ansible.builtin.template:
    src: virtual.j2
    dest: /etc/postfix/virtual
  notify:
    - Rebuild virtual map
    - Reload postfix
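The tasks above notify a Rebuild virtual map handler that is not shown. A plausible definition, assuming the standard postmap tool compiles the alias map; it is defined before Reload postfix so it runs first (handlers run in definition order):

```yaml
---
# roles/postfix/handlers/main.yml (sketch)
- name: Rebuild virtual map
  ansible.builtin.command: postmap /etc/postfix/virtual

- name: Reload postfix
  ansible.builtin.service:
    name: postfix
    state: reloaded
```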
OS-aware templates
When a service has different config paths or package names on RHEL vs Debian:
# In tasks/main.yml — set OS-specific vars before config
- name: Set OS-specific variables
  ansible.builtin.include_vars: "{{ ansible_os_family | lower }}.yml"
# vars/redhat.yml
_nginx_config_dir: /etc/nginx/conf.d
_nginx_log_dir: /var/log/nginx
# vars/debian.yml
_nginx_config_dir: /etc/nginx/sites-enabled
_nginx_log_dir: /var/log/nginx
Or handle differences directly in the template:
# templates/nginx.conf.j2
{% if ansible_os_family == "RedHat" %}
include /etc/nginx/conf.d/*.conf;
{% else %}
include /etc/nginx/sites-enabled/*;
{% endif %}
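A third option keeps a single task and lets the first_found lookup pick an OS-specific template when one exists (the template file names here are illustrative):

```yaml
- name: Deploy nginx.conf (OS-specific template when available)
  ansible.builtin.template:
    src: "{{ lookup('ansible.builtin.first_found', nginx_template_search) }}"
    dest: /etc/nginx/nginx.conf
  vars:
    nginx_template_search:
      files:
        - "nginx.{{ ansible_os_family | lower }}.conf.j2"  # e.g. nginx.redhat.conf.j2
        - "nginx.conf.j2"                                  # generic fallback
      paths:
        - templates
```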
Template validation
Every template task that deploys a service config should use validate:. The %s is replaced by a temp file path containing the rendered content:
# nginx — validates syntax before writing
ansible.builtin.template:
  src: nginx.conf.j2
  dest: /etc/nginx/nginx.conf
  validate: nginx -t -c %s

# postfix — %s is the staged file; postfix -c expects a directory
ansible.builtin.template:
  src: main.cf.j2
  dest: /etc/postfix/main.cf
  validate: postfix -c $(dirname %s) check

# apache — validates syntax
ansible.builtin.template:
  src: httpd.conf.j2
  dest: /etc/httpd/conf/httpd.conf
  validate: apachectl -t -f %s

# sshd — validates config before writing
ansible.builtin.template:
  src: sshd_config.j2
  dest: /etc/ssh/sshd_config
  validate: sshd -t -f %s
Controlling whitespace in templates
Jinja2 control blocks ({% %}) can add unwanted blank lines to rendered configs. Use the minus sign to strip whitespace:
# Without stripping — leaves blank lines in output
{% for server in ntp_servers %}
server {{ server }} iburst
{% endfor %}
# With stripping — no extra blank lines
{%- for server in ntp_servers %}
server {{ server }} iburst
{%- endfor %}
A - immediately inside the opening delimiter ({%-) strips the whitespace before that tag; immediately inside the closing delimiter (-%}) it strips the whitespace after it, including the newline. Note that Ansible's template module enables Jinja2's trim_blocks option by default, which already removes the newline directly after a block tag, so output is often cleaner than with stock Jinja2. Many teams still add - consistently to all control blocks for predictable output.
ansible_managed header
The ansible_managed variable inserts a standard "this file is managed by Ansible" comment at the top of rendered templates. This tells operators not to edit the file manually.
# In your Jinja2 template (e.g. chrony.conf.j2):
# {{ ansible_managed }}
# Managed by: {{ playbook_dir | basename }} playbook
pool {{ chrony_pool }} iburst
driftfile /var/lib/chrony/drift
Rendered output (here with a customized ansible_managed string; the stock default is simply "Ansible managed"):
# Ansible managed: /etc/chrony.conf modified on 2024-03-15 10:23:01 by ansible on control01
# Managed by: site playbook
pool ntp.internal.example.com iburst
driftfile /var/lib/chrony/drift
Customize the message in ansible.cfg:
# ansible.cfg
[defaults]
ansible_managed = Ansible managed — do not edit manually. Source: {file} on {host}
Put # {{ ansible_managed }} as the first line of every template. It warns operators that manual edits will be silently overwritten on the next run, and it immediately answers "where does this config come from?" during incident investigation.
{% raw %} block
Some config files use {{ }} or {% %} syntax themselves — Prometheus, nginx, Grafana alerting rules. Wrapping sections in {% raw %} tells Jinja2 to pass the content through unchanged.
# prometheus_rules.yml.j2
# {{ ansible_managed }}
groups:
  - name: example
    rules:
      - alert: HighCPU
        # Raw block: Prometheus uses {{ }} for its own templating
        {% raw %}
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        annotations:
          summary: "High CPU on {{ $labels.instance }}"
        {% endraw %}
        for: 5m
        labels:
          severity: warning
Everything between {% raw %} and {% endraw %} is treated as plain text — Jinja2 does not evaluate it.
lookup() — reading files and commands
lookup() lets templates read from the control node's filesystem or run a local command at render time. Common in templates that embed keys or generate dynamic values.
# Embed a public SSH key from the control node
authorized_keys: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
# Read a file relative to the playbook directory
ssl_cert: "{{ lookup('file', playbook_dir + '/files/server.crt') }}"
# Run a local command and capture output
build_version: "{{ lookup('pipe', 'git describe --tags --always') }}"
# Read an environment variable from the control node
aws_region: "{{ lookup('env', 'AWS_DEFAULT_REGION') }}"
Using lookups in templates:
# In a template file (e.g. authorized_keys.j2)
# {{ ansible_managed }}
{% for key_file in ssh_key_files %}
{{ lookup('file', key_file) }}
{% endfor %}
Lookups always run on the control node. Use the slurp or fetch modules if you need to read files from remote hosts.
backup: yes best practice
Both template and copy modules support backup: yes, which creates a timestamped backup of the destination file before overwriting it. This gives you a one-step rollback during incidents.
- name: Deploy nginx config (with backup)
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: "0644"
    backup: yes  # creates /etc/nginx/nginx.conf.2024-03-15@10:23:01~
    validate: nginx -t -c %s
  notify: Reload nginx

- name: Deploy SSL certificate (with backup)
  ansible.builtin.copy:
    src: files/server.crt
    dest: /etc/pki/tls/certs/server.crt
    mode: "0644"
    backup: yes
Backups are stored in the same directory as the original file, named with a timestamp suffix (filename.YYYY-MM-DD@HH:MM:SS~; recent Ansible versions also insert a process ID before the timestamp). On a busy system with frequent deploys you may want to clean them up periodically.
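One way to keep backups from piling up is a periodic cleanup task; a sketch using the find module (the 30-day threshold, path, and pattern are assumptions to adjust):

```yaml
- name: Find nginx config backups older than 30 days
  ansible.builtin.find:
    paths: /etc/nginx
    patterns: "nginx.conf.*~"
    age: 30d
  register: old_nginx_backups

- name: Remove old backups
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ old_nginx_backups.files }}"
```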