Nginx Basics
What Nginx is
Nginx is a web server and reverse proxy. It can:
- Serve static files
- Terminate TLS
- Reverse proxy requests to backend applications
- Host multiple sites using server blocks (virtual hosts)
What a server block is
A server block is Nginx's configuration unit for one site or virtual host. It defines:
- Which hostname this config applies to
- Which port to listen on
- What to do with incoming requests
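For example, a minimal server block that serves static files from a directory might look like this (hostname and paths are illustrative):

```nginx
server {
    listen 80;
    server_name static.example.com;   # hypothetical hostname

    root /var/www/static;             # directory the files are served from
    index index.html;

    location / {
        # Serve the requested file if it exists, otherwise return 404
        try_files $uri $uri/ =404;
    }
}
```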
Main files
/etc/nginx/nginx.conf
/etc/nginx/conf.d/
/etc/nginx/sites-enabled/ # on some distros
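On most installs, nginx.conf pulls in the per-site files with include directives; a typical (distro-dependent) http block contains lines like:

```nginx
http {
    # Load standalone snippets and per-site server blocks
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;   # Debian/Ubuntu-style layout
}
```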
Basic reverse proxy server block
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
What this means:
- Listen on port 80
- Respond to requests for example.com
- Forward all requests to the backend app on localhost port 8080
- Pass the original host and client IP in headers to the backend
HTTPS / TLS server block
To serve HTTPS, add a separate server block listening on port 443 with the TLS certificate and key. The HTTP block on port 80 can redirect to HTTPS or be removed.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
What this does:
- Serves HTTPS on port 443, loading the cert and key from the specified paths
- Proxies requests to the backend app on localhost port 8080
- Passes X-Forwarded-Proto: https so the backend knows the original request was HTTPS
- Redirects all plain HTTP traffic permanently to HTTPS
Run nginx -t after adding or changing a server block. If the cert file does not exist or the path is wrong, the test catches it before you break a running service.
Upstream blocks and load balancing
An upstream block defines a named group of backend servers. This decouples the proxy target from the individual server block and enables load balancing.
upstream app_backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;            # round-robin by default
    server 127.0.0.1:8082 weight=3;   # receives 3x the traffic of a weight=1 server
    server 192.168.1.20:8080 backup;  # only used if the others fail
    keepalive 32;                     # keep 32 idle connections to backends
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # needed for keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Nginx's default load balancing is round-robin. For session-sticky workloads consider ip_hash (same client IP → same backend) or least_conn (fewest active connections). These are set as directives inside the upstream block.
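As a sketch, switching the upstream defined above to least-connections (or IP-hash) is a one-directive change:

```nginx
upstream app_backend {
    least_conn;        # pick the backend with the fewest active connections
    # ip_hash;         # alternative: same client IP always hits the same backend
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}
```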
Rate limiting
Rate limiting protects backends from abuse and sudden traffic spikes. It is implemented in two parts: a shared memory zone defined at the http level, and a limit applied inside a server or location block.
# In the http { } block (top of nginx.conf or an included file)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name api.example.com;

    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        # burst=20: allow up to 20 queued requests above the rate
        # nodelay: process the burst immediately rather than delaying it
        proxy_pass http://app_backend;
    }

    location /login {
        # Stricter limit for auth endpoints
        limit_req zone=api_limit burst=5 nodelay;
        proxy_pass http://app_backend;
    }
}
The zone size (10m) stores per-IP state; 10 MB holds roughly 160,000 IP addresses. Without nodelay, Nginx queues requests that exceed the rate and delays them; with nodelay it rejects requests (503 by default) as soon as the burst is full. Rejected requests are logged to the error log, at a level configurable with limit_req_log_level.
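Two related directives are worth knowing. A sketch, reusing the api_limit zone from above:

```nginx
location /api/ {
    limit_req zone=api_limit burst=20 nodelay;
    limit_req_status 429;        # return 429 Too Many Requests instead of the default 503
    limit_req_log_level warn;    # log rejections at warn instead of the default error
    proxy_pass http://app_backend;
}
```

Returning 429 is friendlier to API clients that implement retry/backoff logic, since 503 can be mistaken for a backend outage.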
Logging and monitoring
Custom log formats help downstream log analysis tools (Splunk, Loki, ELK) parse structured data.
# Define a JSON-like combined format in the http { } block
log_format main_json escape=json
    '{"time":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":$status,'
    '"body_bytes":$body_bytes_sent,'
    '"referrer":"$http_referer",'
    '"upstream_addr":"$upstream_addr",'
    '"request_time":$request_time}';

server {
    access_log /var/log/nginx/access.log main_json;
    error_log /var/log/nginx/error.log warn;
}

# Built-in status page (enable in a restricted location)
server {
    listen 127.0.0.1:8888;

    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}
stub_status outputs active connections, accepts, handled, and requests counts — useful for monitoring scripts and dashboards. Never expose it publicly.
Common tuning knobs
| Directive | Where | What it does |
|---|---|---|
| worker_processes auto; | main | One worker per CPU core |
| worker_connections 1024; | events | Max simultaneous connections per worker |
| client_max_body_size 20m; | http/server/location | Max upload size; returns 413 if exceeded |
| proxy_read_timeout 60s; | http/server/location | How long to wait for a backend response |
| proxy_connect_timeout 10s; | http/server/location | How long to wait to connect to the backend |
| gzip on; | http/server/location | Compress text responses; add gzip_types |
| server_tokens off; | http/server | Hide the Nginx version from error pages and headers |
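Put together, a trimmed nginx.conf using these knobs might look like this (the values are starting points, not recommendations):

```nginx
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    server_tokens off;
    client_max_body_size 20m;
    proxy_read_timeout 60s;
    proxy_connect_timeout 10s;
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;
}
```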
Useful commands
nginx -t
Tests config syntax. Always run this before reloading or restarting. If this reports errors, do not proceed.
nginx -T
Prints the full loaded config including all included files. Very useful when debugging where a setting actually comes from.
systemctl reload nginx
Reloads the config without dropping connections. Use this for config changes instead of a full restart when possible; a safe habit is nginx -t && systemctl reload nginx.
systemctl status nginx
journalctl -u nginx -n 50
Shows the service state and the last 50 journal lines for the Nginx unit. This is the first place to look when nginx -t passes but the service still fails to start.
Troubleshooting
- Config test first: nginx -t
- Port 80/443 listening: ss -tulpn | grep nginx
- Upstream reachable: curl -I http://127.0.0.1:8080
- DNS resolves correctly to this server
- Cert files exist and permissions are correct
- Check logs: journalctl -u nginx -n 50
- Full config dump: nginx -T to see the actual effective config