Nginx has been the default reverse proxy for self-hosted infrastructure for over a decade. Its configuration syntax is dense and its documentation is comprehensive but not always beginner-friendly. This post covers the patterns that matter most in production: TLS termination done right, rate limiting that actually protects your upstreams, upstream health checks, and the security headers that browsers now expect.
The minimal working config, explained
Before tuning anything, here's a well-structured baseline for proxying a single upstream service:
# /etc/nginx/sites-available/api.example.com

# Rate limiting zone - defined in http{} context, referenced in server{}
# Keyed on client IP, 10MB of state (handles ~160k unique IPs), 30 req/min
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/m;

upstream api_backend {
    server 127.0.0.1:3000;
    keepalive 32;  # Keep up to 32 idle connections to the upstream
}

server {
    listen 80;
    server_name api.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name api.example.com;
    http2 on;  # nginx >= 1.25.1; older versions use "listen 443 ssl http2;"

    # TLS - managed by Certbot (Let's Encrypt)
    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Security headers ("always" also sends them on error responses)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;
    add_header Referrer-Policy strict-origin-when-cross-origin always;
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

    # Rate limiting - allow a burst of 10, no delay on burst
    limit_req zone=api burst=10 nodelay;
    limit_req_status 429;

    location / {
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        # Hardcoding "upgrade" keeps WebSockets working but defeats upstream
        # keepalive for plain HTTP requests; a map on $http_upgrade avoids this
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 5s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;
    }
}
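The hardcoded Connection "upgrade" also tells the upstream to treat every plain HTTP request as an upgrade, which prevents the keepalive pool from being reused. A common variant maps the header in the http{} context (a sketch; the $connection_upgrade variable name is a convention, not built in):

```nginx
# http{} context - derive the Connection header from the client's Upgrade header
map $http_upgrade $connection_upgrade {
    default upgrade;  # client asked to upgrade (WebSocket): pass it through
    ''      '';       # ordinary request: empty Connection header, keepalive works
}

# server/location context - use the mapped value instead of a hardcoded "upgrade":
# proxy_set_header Connection $connection_upgrade;
```

The version in the Nginx documentation maps the empty case to "close"; mapping it to the empty string instead additionally preserves keepalive connections to the upstream.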
TLS: beyond just installing a certificate
Certbot with the Nginx plugin handles the basics, but there are three things it doesn't configure for you:
HSTS preloading
The Strict-Transport-Security header tells browsers to always use HTTPS for your domain. The preload directive signals consent to inclusion in the browser-maintained HSTS preload list (you still have to submit the domain yourself at hstspreload.org); once listed, browsers refuse plain-HTTP connections before they ever reach your server. Set max-age=63072000 (two years) and only add preload once you're certain your entire domain and all subdomains can run on HTTPS permanently.
TLS session resumption
# In your http{} block - enables TLS session caching across workers
ssl_session_cache shared:SSL:50m;  # 50MB shared between all worker processes
ssl_session_timeout 1d;
ssl_session_tickets off;  # Ticket keys aren't rotated by default, which weakens forward secrecy
OCSP stapling
# Nginx fetches and caches the OCSP response so clients don't have to contact the CA
ssl_stapling on;
ssl_stapling_verify on;
# Verification needs the issuer chain; Certbot writes it alongside the certificate
ssl_trusted_certificate /etc/letsencrypt/live/api.example.com/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
Rate limiting: three zones for different purposes
A single global rate limit isn't granular enough for most APIs. In production, we use three separate zones:
# http{} block - define all zones here

# General API: 60 requests per minute per IP
limit_req_zone $binary_remote_addr zone=api_general:10m rate=60r/m;

# Auth endpoints: 5 requests per minute per IP (strict, for brute-force protection)
limit_req_zone $binary_remote_addr zone=api_auth:10m rate=5r/m;

# Static assets: no rate limit needed (use CDN/caching instead)
# --- In your server{} block ---
# Apply the auth limit to login/token endpoints
location ~ ^/(auth|login|token|password-reset) {
    limit_req zone=api_auth burst=3 nodelay;
    limit_req_status 429;
    # Return proper JSON for rate-limit errors (not HTML)
    error_page 429 /429.json;
    proxy_pass http://api_backend;
    # ... proxy headers as above
}

location / {
    limit_req zone=api_general burst=20 nodelay;
    limit_req_status 429;
    proxy_pass http://api_backend;
}
Rate limiting behind a load balancer: if Nginx sits behind another load balancer (AWS ALB, Cloudflare, etc.), $remote_addr will be the load balancer's IP, so every client gets rate-limited together. Either key the zone on a header such as $http_x_forwarded_for or $http_cf_connecting_ip, or use the realip module to rewrite $remote_addr from that header - but only when you trust the upstream proxy to set the header correctly and clients can't reach Nginx directly to spoof it.
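Letting the realip module rewrite $remote_addr keeps the existing $binary_remote_addr zones working unchanged. A sketch, assuming the CIDR range below is replaced with your actual load balancer's:

```nginx
# http{} block - trust X-Forwarded-For only on connections from the LB itself
set_real_ip_from 10.0.0.0/8;   # internal ALB subnet (assumption - use your own ranges)
real_ip_header X-Forwarded-For;
real_ip_recursive on;          # walk past multiple trusted proxies in the chain
# From here on, $remote_addr and $binary_remote_addr hold the real client IP
```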
Upstream health checks and multiple backends
Nginx open-source doesn't support active health checks (that's an Nginx Plus feature), but you can configure passive health checks and use the max_fails / fail_timeout parameters to remove failing backends from rotation:
upstream api_backend {
    # Passive health: mark a backend unavailable after 3 failures within 30 seconds;
    # it is tried again after 30 seconds
    server 10.0.0.10:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.11:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:3000 max_fails=3 fail_timeout=30s backup;  # Only used if the others fail
    keepalive 64;
    keepalive_requests 1000;
    keepalive_timeout 75s;
}
For active health checks without Nginx Plus, use nginx_upstream_check_module (a third-party module that has to be compiled in) or proxy to a sidecar like Envoy that handles health checking. Alternatively - and this is what we do in most Studio deployments - run Nginx behind a cloud load balancer that handles health checking and only routes to healthy Nginx instances.
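When a cloud load balancer does the health checking, it helps to give it a cheap endpoint that Nginx answers itself. A sketch (the /healthz path is an assumption):

```nginx
# Answered by Nginx directly - proves the proxy is up without touching the backend
location = /healthz {
    access_log off;
    default_type text/plain;
    return 200 "ok\n";
}
```

Note this only proves Nginx is alive; to verify the whole path, point the load balancer's check at a route that is proxied to the upstream instead.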
Gzip and caching headers
# http{} block - enable gzip globally (text/html is always compressed when gzip is on)
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;     # 1-9; 6 is a good balance of CPU cost vs. ratio
gzip_min_length 1000;  # Don't gzip tiny responses
gzip_types text/plain text/css text/xml application/json application/javascript
           application/xml+rss text/javascript image/svg+xml;
# Cache headers for static assets (in a location block)
# "immutable" is only safe when filenames change on every deploy (e.g. content hashes)
location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico|woff|woff2|ttf)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
    access_log off;  # Don't log static asset requests
}
Logging with structured output
The default Nginx log format isn't easy to parse with tools like Loki or Elasticsearch. A JSON log format makes ingestion trivial:
# http{} block
log_format json_combined escape=json
    '{'
    '"time":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"method":"$request_method",'
    '"uri":"$uri",'
    '"status":$status,'
    '"body_bytes":$body_bytes_sent,'
    '"request_time":$request_time,'
    '"upstream_time":"$upstream_response_time",'
    '"upstream_addr":"$upstream_addr",'
    '"http_referrer":"$http_referer",'
    '"http_user_agent":"$http_user_agent",'
    '"x_forwarded_for":"$http_x_forwarded_for"'
    '}';

access_log /var/log/nginx/access.log json_combined;
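To sanity-check the format, here's a quick parse of a hypothetical log line (the values are made up) showing that the unquoted fields land as real JSON numbers:

```python
import json

# A sample line in the json_combined format above (fabricated values)
sample = (
    '{"time":"2024-05-01T12:00:00+00:00","remote_addr":"203.0.113.7",'
    '"method":"GET","uri":"/v1/users","status":200,"body_bytes":512,'
    '"request_time":0.012,"upstream_time":"0.010","upstream_addr":"127.0.0.1:3000",'
    '"http_referrer":"","http_user_agent":"curl/8.0","x_forwarded_for":""}'
)

entry = json.loads(sample)
# status and request_time parse as int/float, no post-processing needed
print(entry["status"], entry["request_time"])
```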
In our Studio deployments we ship Nginx configs as part of a Nix or Ansible role that includes all of the above: TLS with OCSP stapling, three rate-limit zones, JSON logging to stdout (collected by the host's log shipper), and Prometheus metrics via the nginx-module-vts virtual-host traffic status module. The entire config is version-controlled and applies reproducibly across environments.