Mastering Nginx: The Ultimate Guide for Developers and DevOps
Master Nginx—the powerful, high-performance web server and reverse proxy trusted by millions of websites. This comprehensive guide covers everything from architecture and installation to advanced configurations, security best practices, and troubleshooting, with easy-to-understand examples and diagrams.
Table of Contents
- Introduction
- Nginx Architecture Explained
- Installing Nginx
- Nginx Configuration Basics
- Serving Static Content
- Reverse Proxy Setup
- Load Balancing with Nginx
- SSL/TLS Configuration
- Caching and Performance Optimization
- Security Best Practices
- Advanced Use Cases
- Monitoring and Logging
- Troubleshooting Common Issues
- Conclusion
- FAQ
- Bonus: Templates and Cheatsheet
Introduction
What is Nginx?
Nginx (pronounced "engine-x") is a high-performance web server, reverse proxy, load balancer, and HTTP cache. Created by Igor Sysoev in 2004 to solve the C10K problem (handling 10,000 concurrent connections), Nginx has become one of the most popular web servers in the world, powering over 30% of all websites.
Think of Nginx as a traffic controller at a busy intersection. Just as a skilled traffic officer directs vehicles efficiently to prevent congestion, Nginx routes incoming web requests to the right destinations quickly and reliably—whether that's serving static files, forwarding requests to backend applications, or distributing load across multiple servers.
Why Nginx Matters in Modern Web Architecture
Modern web applications aren't simple anymore. They consist of:
- Multiple services: APIs, databases, caching layers, message queues
- High traffic demands: Thousands or millions of concurrent users
- Security requirements: SSL/TLS encryption, DDoS protection, rate limiting
- Performance expectations: Sub-second response times, efficient resource usage
Nginx excels in all these areas because of its:
- Asynchronous, event-driven architecture: Handles thousands of connections with minimal memory
- Versatility: Web server, reverse proxy, load balancer, API gateway—all in one
- Performance: Serves static content faster than most alternatives
- Reliability: Battle-tested by giants like Netflix, Airbnb, and WordPress.com
- Low resource footprint: Runs efficiently even on modest hardware
Brief History and Evolution
2004: Igor Sysoev creates Nginx to handle high-concurrency challenges at Rambler.ru (a major Russian search engine and web portal).
2011: Nginx Inc. is founded to provide commercial support and Nginx Plus (enterprise version).
2019: F5 Networks acquires Nginx Inc. for $670 million, validating its importance in modern infrastructure.
Today: Nginx powers some of the world's busiest sites and is the backbone of cloud-native architectures, Kubernetes ingress controllers, and API gateways.
Real-World Analogy: Nginx is like the evolution from a single-lane road (traditional web servers) to a modern highway system with multiple lanes, smart traffic management, and bypass routes—all designed to keep traffic flowing smoothly no matter how many cars are on the road.
1. Nginx Architecture Explained
Understanding Nginx's architecture is key to unlocking its power. Unlike traditional web servers, Nginx uses an innovative design that makes it incredibly efficient.
Event-Driven vs Thread-Based Models
Most traditional web servers (like Apache with its prefork MPM) use a thread-based or process-based model:
Thread-Based Model Problems:
- Each connection requires a dedicated thread or process
- Context switching between thousands of threads is expensive
- Memory usage grows linearly with connections (each thread needs stack space)
- The C10K problem: difficult to handle 10,000+ concurrent connections
Nginx uses an event-driven, asynchronous model instead:
Event-Driven Model Advantages:
- One worker process handles thousands of connections
- No context switching overhead
- Memory usage grows only modestly with connection count (each connection needs just a few kilobytes of state)
- Can handle 10,000+ concurrent connections easily
Analogy: Think of a restaurant:
- Thread-based (Apache): One waiter per table. If you have 100 tables, you need 100 waiters standing around, even if most tables are just sipping water.
- Event-driven (Nginx): Four skilled waiters manage 100 tables by responding only when customers need something (placing orders, getting refills). Much more efficient!
How Nginx Handles Concurrent Connections
Nginx's secret sauce is its ability to handle multiple connections without creating multiple threads. Here's how:
- Non-blocking I/O: When waiting for data (from disk, network, or backend), Nginx doesn't sit idle—it handles other requests
- Event notifications: The OS notifies Nginx when data is ready (using epoll on Linux, kqueue on BSD/macOS)
- State machine: Each connection is a state machine—Nginx remembers where each request is and picks up when ready
Code Example - How it Works Conceptually:
// Simplified pseudo-code of Nginx's event loop
while (true) {
events = wait_for_events(); // Wait for OS notifications
for (event in events) {
connection = event.connection;
switch (connection.state) {
case READING_REQUEST:
read_request_data(connection);
break;
case PROCESSING:
process_request(connection);
break;
case WRITING_RESPONSE:
write_response_data(connection);
break;
case PROXYING:
forward_to_backend(connection);
break;
}
}
}
Key Components: Worker Processes, Master Process, Event Loop
Nginx's architecture consists of several key components working together:
1. Master Process:
- Reads and validates configuration
- Manages worker processes (starts, stops, monitors)
- Handles signals (reload, restart, shutdown)
- Runs with root privileges (if binding to port 80/443)
2. Worker Processes:
- Handle actual client connections
- Process requests asynchronously
- Run with limited privileges for security
- Number typically matches CPU cores (one per core for optimal performance)
3. Cache Manager & Cache Loader:
- Manage cached content on disk
- Clean up expired cache entries
- Load cache metadata on startup
Configuration Example:
# Number of worker processes (auto = number of CPU cores)
worker_processes auto;
# Maximum connections per worker
events {
worker_connections 1024; # Each worker can handle 1024 connections
use epoll; # Use efficient event mechanism (Linux)
}
With 4 CPU cores and worker_connections 1024, Nginx can handle:
- 4 workers × 1024 connections = 4,096 concurrent connections
Real-World Impact:
- A single Nginx instance on a 4-core server can handle 10,000+ concurrent connections (raise worker_connections above 1024 to go beyond the 4,096 in the example above)
- Memory usage stays low and grows only slightly as connections increase
- CPU usage remains low even under heavy load
Analogy: Think of Nginx like a post office:
- Master process = Postmaster (manages everything, doesn't handle mail directly)
- Worker processes = Mail sorters (actually process and route mail)
- Event loop = Sorting system (efficiently handles thousands of letters without creating bottlenecks)
- Connections = Letters being processed (thousands can be "in progress" simultaneously)
2. Installing Nginx
Let's get Nginx up and running on your system. We'll cover installation on the most popular platforms.
Installation on Linux (Ubuntu/CentOS)
Ubuntu/Debian
Using official repositories (recommended for latest stable):
# Update package index
sudo apt update
# Install Nginx
sudo apt install nginx -y
# Start Nginx service
sudo systemctl start nginx
# Enable Nginx to start on boot
sudo systemctl enable nginx
# Check status
sudo systemctl status nginx
Using Nginx official repository (for latest mainline version):
# Install prerequisites
sudo apt install curl gnupg2 ca-certificates lsb-release ubuntu-keyring
# Import Nginx signing key
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor \
| sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null
# Set up the repository
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" \
| sudo tee /etc/apt/sources.list.d/nginx.list
# Install Nginx
sudo apt update
sudo apt install nginx -y
CentOS/RHEL/Rocky Linux
# Install Nginx from EPEL repository
sudo yum install epel-release -y
sudo yum install nginx -y
# Or using DNF (CentOS 8+)
sudo dnf install nginx -y
# Start and enable Nginx
sudo systemctl start nginx
sudo systemctl enable nginx
# Check status
sudo systemctl status nginx
# Allow HTTP and HTTPS through firewall
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
Installation on Windows and macOS
macOS (using Homebrew)
# Install Homebrew if you haven't already
# /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Nginx
brew install nginx
# Start Nginx
brew services start nginx
# Or run Nginx manually
nginx
# Check if running
curl http://localhost:8080
Note: By default, Homebrew's Nginx runs on port 8080 (not 80).
Windows
- Download pre-built binaries from nginx.org/download
- Extract the zip file (e.g., nginx-1.24.0.zip) to C:\nginx
- Run Nginx:
cd C:\nginx
start nginx
- Test by opening http://localhost in your browser
Managing Nginx on Windows:
# Stop Nginx
nginx -s stop
# Graceful shutdown
nginx -s quit
# Reload configuration
nginx -s reload
# Test configuration
nginx -t
Verifying Installation and Basic Commands
After installation, verify Nginx is running:
# Check Nginx version
nginx -v
# Output: nginx version: nginx/1.24.0
# Check version with configuration details
nginx -V
# Test configuration file syntax
sudo nginx -t
# Output: nginx: configuration file /etc/nginx/nginx.conf test is successful
# Check if Nginx is running
ps aux | grep nginx
# Check listening ports
sudo netstat -tlnp | grep nginx
# Or on newer systems:
sudo ss -tlnp | grep nginx
Common Management Commands:
# Start Nginx (if not using systemd)
sudo nginx
# Stop Nginx gracefully
sudo nginx -s quit
# Stop Nginx immediately
sudo nginx -s stop
# Reload configuration without downtime
sudo nginx -s reload
# Reopen log files (useful after log rotation)
sudo nginx -s reopen
# Using systemd (Ubuntu/CentOS)
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
sudo systemctl reload nginx
sudo systemctl status nginx
Verify Installation with curl:
curl http://localhost
# You should see the default Nginx welcome page HTML
Expected Output:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Default File Locations:
| Item | Ubuntu/Debian | CentOS/RHEL | macOS (Homebrew) | Windows |
|---|---|---|---|---|
| Config file | /etc/nginx/nginx.conf | /etc/nginx/nginx.conf | /opt/homebrew/etc/nginx/nginx.conf | C:\nginx\conf\nginx.conf |
| Site configs | /etc/nginx/sites-available/ | /etc/nginx/conf.d/ | /opt/homebrew/etc/nginx/servers/ | C:\nginx\conf\ |
| Logs | /var/log/nginx/ | /var/log/nginx/ | /opt/homebrew/var/log/nginx/ | C:\nginx\logs\ |
| Web root | /var/www/html/ | /usr/share/nginx/html/ | /opt/homebrew/var/www/ | C:\nginx\html\ |
Quick Verification Test:
# Create a simple test page
echo "<h1>Nginx is working!</h1>" | sudo tee /var/www/html/test.html
# Access it
curl http://localhost/test.html
Analogy: Installing Nginx is like setting up a new phone:
- Download and install = Get the phone
- Start the service = Turn it on
- Verify it works = Make a test call
- Learn the commands = Learn how to use the features
3. Nginx Configuration Basics
Nginx configuration is powerful but can seem complex at first. Let's break it down into digestible pieces.
Understanding nginx.conf
The main configuration file (nginx.conf) is the control center for Nginx. It's organized in a hierarchical, directive-based format.
Basic Structure:
# Global directives (affect entire Nginx instance)
user www-data;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Events context (connection handling)
events {
worker_connections 1024;
use epoll;
}
# HTTP context (web server configuration)
http {
# HTTP-level directives
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging
access_log /var/log/nginx/access.log;
# Server contexts (virtual hosts)
server {
# Server-level directives
listen 80;
server_name example.com;
# Location contexts (URL routing)
location / {
# Location-level directives
root /var/www/html;
index index.html;
}
}
}
Context hierarchy: Global directives → events and http contexts → server → location, with each inner context inheriting settings from its parent and able to override them.
Analogy: Think of nginx.conf like a company organization chart:
- Global context = Company-wide policies (applies to everyone)
- HTTP context = Department rules (applies to all teams in that department)
- Server context = Team rules (applies to specific team)
- Location context = Individual task rules (applies to specific work)
Structure: http, server, location Blocks
HTTP Block
The http block contains directives for handling HTTP/HTTPS traffic. Most of your configuration lives here.
http {
# Global HTTP settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Include additional files
include /etc/nginx/mime.types;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
# Default settings
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Gzip compression
gzip on;
gzip_vary on;
gzip_types text/plain text/css application/json;
# Server blocks go here
server {
# ...
}
}
Server Block
The server block defines a virtual host (like a website or domain).
server {
# Listen on port 80 for IPv4
listen 80;
# Listen on port 80 for IPv6
listen [::]:80;
# Domain names this server responds to
server_name example.com www.example.com;
# Document root
root /var/www/example.com;
# Default index files
index index.html index.htm index.php;
# Logging for this server
access_log /var/log/nginx/example.access.log;
error_log /var/log/nginx/example.error.log;
# Location blocks go here
location / {
try_files $uri $uri/ =404;
}
}
Multiple Server Blocks (Virtual Hosts):
http {
# First website
server {
listen 80;
server_name site1.com;
root /var/www/site1;
}
# Second website
server {
listen 80;
server_name site2.com;
root /var/www/site2;
}
# Default server (catches all other requests)
server {
listen 80 default_server;
server_name _;
return 444; # Close connection
}
}
Location Block
The location block defines how to handle specific URIs or URL patterns.
Exact Match:
# Matches exactly /about.html
location = /about.html {
root /var/www/html;
}
Prefix Match:
# Matches /images/*, /images/photos/*, etc.
location /images/ {
root /var/www;
autoindex on;
}
Regex Match (case-sensitive):
# Matches .jpg, .jpeg, .png, .gif files
location ~ \.(jpg|jpeg|png|gif)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
Regex Match (case-insensitive):
# Matches .JPG, .jpg, .PNG, .png, etc.
location ~* \.(jpg|jpeg|png|gif)$ {
expires 30d;
}
Location Matching Priority:
Priority Order (highest to lowest):
- = : exact match
- ^~ : preferential prefix match (stops the regex search)
- ~ or ~* : regex match (first matching regex wins)
- (no modifier) : prefix match (longest match wins)
Example with Priority:
server {
listen 80;
server_name example.com;
# Priority 1: Exact match
location = / {
return 200 "Exact match for root";
}
# Priority 2: Preferential prefix
location ^~ /images/ {
return 200 "Preferential prefix match for images";
}
# Priority 3: Regex match
location ~ \.(jpg|png)$ {
return 200 "Regex match for images";
}
# Priority 4: Prefix match
location / {
return 200 "Prefix match (catch-all)";
}
}
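To make the priority rules concrete, here is a toy Python model of the matching order above. The match_location helper is purely hypothetical (nginx's real algorithm is more involved), but it reproduces the same exact / preferential-prefix / regex / prefix ordering:

```python
import re

def match_location(uri, locations):
    """Toy model of nginx location matching priority.

    locations is a list of (modifier, pattern) tuples, where modifier is
    '=', '^~', '~', '~*', or '' (plain prefix).
    """
    # 1. An exact (=) match wins immediately.
    for mod, pat in locations:
        if mod == '=' and uri == pat:
            return (mod, pat)
    # 2. Find the longest matching prefix ('' or '^~').
    best = None
    for mod, pat in locations:
        if mod in ('', '^~') and uri.startswith(pat):
            if best is None or len(pat) > len(best[1]):
                best = (mod, pat)
    # 3. A ^~ prefix suppresses the regex search.
    if best and best[0] == '^~':
        return best
    # 4. Regexes are tried in order; the first match wins.
    for mod, pat in locations:
        if mod == '~' and re.search(pat, uri):
            return (mod, pat)
        if mod == '~*' and re.search(pat, uri, re.IGNORECASE):
            return (mod, pat)
    # 5. Otherwise the longest plain prefix wins.
    return best

locs = [('=', '/'), ('^~', '/images/'), ('~', r'\.(jpg|png)$'), ('', '/')]
print(match_location('/images/a.jpg', locs))  # ^~ stops the regex search
print(match_location('/photo.jpg', locs))     # regex beats plain prefix
print(match_location('/about', locs))         # plain prefix catch-all
```

Running the three sample URIs against the four locations from the config above picks the preferential prefix, the regex, and the catch-all respectively, matching the behavior described in the priority list.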
Common Directives: listen, server_name, root, index
listen Directive
Specifies which IP address and port to listen on.
# Listen on default port 80
listen 80;
# Listen on port 8080
listen 8080;
# Listen on specific IP
listen 192.168.1.100:80;
# Listen on IPv6
listen [::]:80;
# Listen with SSL
listen 443 ssl;
# Default server for this port
listen 80 default_server;
# Enable HTTP/2
listen 443 ssl http2;
server_name Directive
Defines which domain names this server block handles.
# Single domain
server_name example.com;
# Multiple domains
server_name example.com www.example.com;
# Wildcard subdomain
server_name *.example.com;
# Regex pattern
server_name ~^(www\.)?(.+)$;
# Catch-all (not recommended for production)
server_name _;
How Nginx Matches Requests:
- Exact match: server_name example.com;
- Wildcard starting with asterisk: server_name *.example.com;
- Wildcard ending with asterisk: server_name example.*;
- Regular expression: server_name ~^(.+)\.example\.com$;
- Default server: listen 80 default_server;
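The selection order above can be sketched in a few lines of Python. This pick_server helper is a simplified illustration (regex server_names are omitted for brevity), not nginx's actual code:

```python
def pick_server(host, names, default='default_server'):
    """Simplified sketch of nginx's server_name selection order."""
    # 1. Exact name
    for n in names:
        if n == host:
            return n
    # 2. Longest wildcard starting with an asterisk (e.g. *.example.com)
    starts = [n for n in names if n.startswith('*.') and host.endswith(n[1:])]
    if starts:
        return max(starts, key=len)
    # 3. Longest wildcard ending with an asterisk (e.g. example.*)
    ends = [n for n in names if n.endswith('.*') and host.startswith(n[:-1])]
    if ends:
        return max(ends, key=len)
    # 4. (regex server_names would be checked here, in config order)
    # 5. Fall back to the default_server for the port
    return default

names = ['example.com', '*.example.com', 'example.*']
print(pick_server('www.example.com', names))  # *.example.com
print(pick_server('example.org', names))      # example.*
print(pick_server('other.com', names))        # default_server
```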
root Directive
Sets the document root directory for requests.
server {
listen 80;
server_name example.com;
# Document root for entire server
root /var/www/example.com;
location /images {
# Override root for /images
root /var/www/media;
# Request to /images/photo.jpg serves /var/www/media/images/photo.jpg
}
}
root vs alias:
# Using root (appends location path)
location /images/ {
root /var/www;
# /images/photo.jpg → /var/www/images/photo.jpg
}
# Using alias (replaces location path)
location /images/ {
alias /var/www/media/;
# /images/photo.jpg → /var/www/media/photo.jpg
}
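The difference is easy to see when written as path mapping. These two hypothetical helpers just mirror the comments above (they are illustrations, not nginx internals):

```python
def resolve_root(root, uri):
    # root: the full request URI is appended to the root path
    return root + uri

def resolve_alias(location, alias, uri):
    # alias: the matched location prefix is replaced by the alias path
    return alias + uri[len(location):]

print(resolve_root('/var/www', '/images/photo.jpg'))
# /var/www/images/photo.jpg
print(resolve_alias('/images/', '/var/www/media/', '/images/photo.jpg'))
# /var/www/media/photo.jpg
```

In short: root keeps the URI intact, alias strips the location prefix first. This is why alias paths conventionally end with a trailing slash when the location does.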
index Directive
Defines default files to serve when a directory is requested.
server {
listen 80;
server_name example.com;
root /var/www/html;
# Try these files in order
index index.html index.htm index.php;
# When requesting http://example.com/
# Nginx will try:
# 1. /var/www/html/index.html
# 2. /var/www/html/index.htm
# 3. /var/www/html/index.php
}
Complete Example:
http {
# HTTP-level settings
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
# Main website
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
root /var/www/example.com;
index index.html index.htm;
# Main location
location / {
try_files $uri $uri/ =404;
}
# Static assets with caching
location ~* \.(jpg|jpeg|png|gif|css|js)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
# Block access to hidden files
location ~ /\. {
deny all;
}
}
}
Configuration Testing and Reloading:
# Always test configuration before applying
sudo nginx -t
# If test is successful, reload
sudo nginx -s reload
# Or use systemctl
sudo systemctl reload nginx
Analogy: Think of Nginx configuration like organizing a library:
- listen = Library entrance (which door is open)
- server_name = Library name (which library you're entering)
- root = Library location (where the books are stored)
- index = Catalog default (what to show when you ask for "books about cooking")
- location = Specific sections (fiction, non-fiction, reference)
4. Serving Static Content
One of Nginx's primary strengths is serving static files (HTML, CSS, JavaScript, images) incredibly fast.
Hosting HTML/CSS/JS Files
Basic Static Website Setup:
server {
listen 80;
server_name mysite.com www.mysite.com;
root /var/www/mysite.com;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
}
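The try_files $uri $uri/ =404 line deserves a closer look, since it appears in almost every static-site config. Conceptually it checks each argument in order; here is a toy Python model of that logic (checking against a set of known paths instead of the real filesystem):

```python
def try_files(existing, uri, index='index.html'):
    """Toy model of `try_files $uri $uri/ =404`.

    existing is the set of file paths that exist under the document root.
    """
    if uri in existing:                   # $uri: try the file itself
        return uri
    idx = uri.rstrip('/') + '/' + index   # $uri/: try the directory index
    if idx in existing:
        return idx
    return 404                            # =404: give up with Not Found

existing = {'/index.html', '/about.html', '/docs/index.html'}
print(try_files(existing, '/about.html'))  # /about.html
print(try_files(existing, '/docs/'))       # /docs/index.html
print(try_files(existing, '/missing'))     # 404
```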
Directory Structure:
/var/www/mysite.com/
├── index.html
├── about.html
├── css/
│ ├── style.css
│ └── responsive.css
├── js/
│ ├── app.js
│ └── vendor.js
└── images/
├── logo.png
└── banner.jpg
Enhanced Configuration with Optimizations:
server {
listen 80;
server_name mysite.com www.mysite.com;
root /var/www/mysite.com;
index index.html;
# Charset
charset utf-8;
# Main location
location / {
try_files $uri $uri/ =404;
}
# CSS and JavaScript with caching
location ~* \.(css|js)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# Images with caching
location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ {
expires 6M;
add_header Cache-Control "public, immutable";
access_log off;
}
# Fonts with caching
location ~* \.(woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# Security: block access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
Setting MIME Types
MIME types tell browsers how to handle different file types. Nginx includes a comprehensive mime.types file.
Default MIME Types File (/etc/nginx/mime.types):
types {
text/html html htm shtml;
text/css css;
text/xml xml;
image/gif gif;
image/jpeg jpeg jpg;
application/javascript js;
application/json json;
application/pdf pdf;
image/png png;
image/svg+xml svg svgz;
image/webp webp;
video/mp4 mp4;
# ... many more
}
Including MIME Types in Your Config:
http {
include /etc/nginx/mime.types;
default_type application/octet-stream; # Fallback for unknown types
# Add custom MIME types if needed
types {
application/x-custom-type custom; # extensions are listed without the leading dot
}
}
Why MIME Types Matter:
Without correct MIME types, browsers might:
- Download files instead of displaying them
- Refuse to execute JavaScript (MIME type mismatch security)
- Display images as garbled text
Example Issue Without MIME Types:
# BAD: mime.types never included anywhere, so responses fall back to
# default_type (application/octet-stream) and browsers refuse to run the JS
location /js/ {
# No MIME mapping in effect; browser gets application/octet-stream
}
# GOOD: the standard MIME map is included (normally once, at the http level)
location /js/ {
include /etc/nginx/mime.types;
# Browser gets application/javascript
}
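Under the hood, the mime.types file is just an extension-to-type lookup table with default_type as the fallback. A minimal Python sketch of that lookup (with a handful of sample entries, not the full nginx table):

```python
# A few entries from mime.types, as a plain dictionary
MIME_TYPES = {
    'html': 'text/html', 'css': 'text/css', 'js': 'application/javascript',
    'json': 'application/json', 'png': 'image/png', 'svg': 'image/svg+xml',
}

def content_type(filename, default='application/octet-stream'):
    """Look up the Content-Type by extension; fall back to default_type."""
    ext = filename.rsplit('.', 1)[-1].lower() if '.' in filename else ''
    return MIME_TYPES.get(ext, default)

print(content_type('app.js'))       # application/javascript
print(content_type('archive.zip'))  # application/octet-stream (fallback)
```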
Directory Indexing and Security Tips
Directory Indexing (AutoIndex):
server {
listen 80;
server_name files.example.com;
root /var/www/files;
location / {
autoindex on; # Enable directory listing
autoindex_exact_size off; # Show file sizes in human-readable format
autoindex_localtime on; # Show local time instead of UTC
autoindex_format html; # Format: html, xml, json, jsonp
}
}
Styled Directory Listing:
location / {
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
# Add custom header/footer (requires ngx_http_addition_module,
# which is not built in by default)
add_before_body /autoindex_header.html;
add_after_body /autoindex_footer.html;
}
Selective Directory Indexing:
server {
listen 80;
server_name example.com;
root /var/www/html;
# Disable by default
autoindex off;
# Enable only for specific directory
location /downloads/ {
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
}
# Block hidden directories
location ~ /\. {
deny all;
}
}
Security Tips for Static Content
1. Block Access to Sensitive Files:
# Block .git, .env, .htaccess, etc.
location ~ /\.(git|env|htaccess|htpasswd) {
deny all;
return 404; # Pretend file doesn't exist
}
# Block backup and config files
location ~ \.(bak|config|sql|fla|psd|ini|log|sh|swp|dist|md)$ {
deny all;
}
2. Prevent Direct Access to Specific Directories:
# Block access to uploaded files execution
location /uploads/ {
location ~ \.php$ {
deny all; # Prevent PHP execution in uploads
}
}
3. Add Security Headers:
server {
listen 80;
server_name example.com;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always; # legacy header; modern browsers ignore it
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Content Security Policy
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';" always;
}
4. Limit File Upload Sizes:
http {
# Limit request body size (prevent DoS via large uploads)
client_max_body_size 10M;
client_body_buffer_size 128k;
}
5. Disable Server Tokens:
http {
# Hide Nginx version number
server_tokens off;
}
Complete Secure Static Site Example:
server {
listen 80;
server_name example.com;
root /var/www/example.com;
index index.html;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
# Main location
location / {
try_files $uri $uri/ =404;
}
# Static assets with aggressive caching
location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
# Block hidden files and directories
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
# Block sensitive file extensions
location ~ \.(sql|bak|backup|log|env)$ {
deny all;
}
# Custom error pages
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /var/www/errors;
}
}
Testing Your Static Site:
# Create test files
sudo mkdir -p /var/www/example.com
echo "<h1>Hello from Nginx!</h1>" | sudo tee /var/www/example.com/index.html
echo "body { background: #f0f0f0; }" | sudo tee /var/www/example.com/style.css
# Test configuration
sudo nginx -t
# Reload Nginx
sudo nginx -s reload
# Test with curl
curl http://localhost
curl -I http://localhost/style.css # Check MIME type
Analogy: Serving static content is like running a library:
- MIME types = Category labels (helps browsers know what each file is)
- Directory indexing = Library catalog (shows available files)
- Security settings = Access control (prevents unauthorized access to restricted sections)
- Caching = Book checkout system (browsers can "borrow" files and keep them temporarily)
5. Reverse Proxy Setup
A reverse proxy sits between clients and backend servers, forwarding requests and responses. This is one of Nginx's most powerful features.
What is a Reverse Proxy?
Forward Proxy vs Reverse Proxy:
Forward Proxy: Acts on behalf of clients (hides client identity from servers)
- VPNs, corporate proxies
- Used by clients to access external resources
Reverse Proxy: Acts on behalf of servers (hides server details from clients)
- Nginx, HAProxy, AWS ALB
- Used by servers to distribute load, add security, cache content
Why Use a Reverse Proxy?
- Load Distribution: Spread traffic across multiple backend servers
- Security: Hide backend server details, add WAF, terminate SSL
- Caching: Cache responses to reduce backend load
- Compression: Compress responses before sending to clients
- SSL Termination: Handle SSL encryption/decryption centrally
- Single Entry Point: One public IP/domain for multiple backend services
Analogy: Nginx reverse proxy is like a hotel receptionist:
- Clients = Guests checking in
- Nginx = Receptionist (front desk)
- Backend servers = Hotel staff (housekeeping, room service, concierge)
Guests don't interact directly with staff—the receptionist routes requests to the appropriate person and brings back responses.
Proxying Requests to Backend Servers (Node.js, .NET, PHP)
Basic Reverse Proxy Configuration
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://localhost:3000; # Forward to Node.js app on port 3000
}
}
How It Works: the client sends a request to Nginx on port 80; Nginx forwards it to the application listening on localhost:3000, then relays the application's response back to the client.
Proxying to Node.js Application
server {
listen 80;
server_name nodeapp.example.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
}
Node.js WebSocket Support (e.g., Socket.IO):
location /socket.io/ {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
Proxying to .NET Application
server {
listen 80;
server_name dotnetapp.example.com;
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Buffer settings for better performance
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
}
}
.NET Core with Kestrel Best Practices:
upstream dotnet_backend {
server localhost:5000;
keepalive 32; # Keep connections alive
}
server {
listen 80;
server_name dotnetapp.example.com;
location / {
proxy_pass http://dotnet_backend;
proxy_http_version 1.1;
proxy_set_header Connection ""; # Use keepalive
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Increase if large request bodies
client_max_body_size 10M;
}
}
Proxying to PHP-FPM
server {
listen 80;
server_name phpapp.example.com;
root /var/www/phpapp;
index index.php index.html;
# Serve static files directly
location / {
try_files $uri $uri/ =404;
}
# Pass PHP scripts to PHP-FPM
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php8.1-fpm.sock; # Or 127.0.0.1:9000
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
# Block access to .php files in uploads directory
location ~ ^/uploads/.*\.php$ {
deny all;
}
}
Handling Headers and Request Forwarding
Essential Proxy Headers:
location / {
proxy_pass http://backend;
# Preserve original Host header
proxy_set_header Host $host;
# Client's real IP address
proxy_set_header X-Real-IP $remote_addr;
# IP chain for proxies
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Original protocol (http/https)
proxy_set_header X-Forwarded-Proto $scheme;
# Original hostname
proxy_set_header X-Forwarded-Host $host;
# Original port
proxy_set_header X-Forwarded-Port $server_port;
}
Why These Headers Matter:
| Header | Purpose | Example |
|---|---|---|
Host | Original hostname requested | api.example.com |
X-Real-IP | Client's actual IP address | 203.0.113.42 |
X-Forwarded-For | IP chain through proxies | 203.0.113.42, 10.0.0.1 |
X-Forwarded-Proto | Original protocol | https |
X-Forwarded-Host | Original host | api.example.com |
Backend Application Reading Headers (Node.js example):
// app.js
app.get('/debug', (req, res) => {
res.json({
clientIP: req.headers['x-real-ip'],
forwardedFor: req.headers['x-forwarded-for'],
protocol: req.headers['x-forwarded-proto'],
host: req.headers['x-forwarded-host'],
});
});
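The $proxy_add_x_forwarded_for variable is worth demystifying: it takes any X-Forwarded-For header the client sent and appends the connecting address. A quick Python sketch of that behavior (a hypothetical helper mimicking the variable, not nginx source):

```python
def proxy_add_x_forwarded_for(incoming_xff, client_addr):
    """Mimic nginx's $proxy_add_x_forwarded_for: append the connecting
    client's address to any X-Forwarded-For value already present."""
    if incoming_xff:
        return incoming_xff + ', ' + client_addr
    return client_addr

# Request arriving directly from the client:
print(proxy_add_x_forwarded_for('', '203.0.113.42'))
# 203.0.113.42
# The same request after passing through another proxy at 10.0.0.1:
print(proxy_add_x_forwarded_for('203.0.113.42', '10.0.0.1'))
# 203.0.113.42, 10.0.0.1
```

This is how the IP chain in the table above (203.0.113.42, 10.0.0.1) gets built up hop by hop.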
Removing/Hiding Headers from Response:
location / {
proxy_pass http://backend;
# Hide backend server details
proxy_hide_header X-Powered-By;
proxy_hide_header Server;
# Add custom headers to response
add_header X-Proxy-By "Nginx" always;
}
Advanced: Rewriting Headers:
location /api/ {
# Strip /api prefix when forwarding
rewrite ^/api/(.*)$ /$1 break;
proxy_pass http://backend;
# Modify Host header
proxy_set_header Host backend.internal;
}
Multiple Backend Services Example:
server {
listen 80;
server_name example.com;
# Frontend (React/Vue/Angular)
location / {
root /var/www/frontend/dist;
try_files $uri $uri/ /index.html;
}
# API Backend (Node.js)
location /api/ {
proxy_pass http://localhost:3000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
# Admin Panel (.NET)
location /admin/ {
proxy_pass http://localhost:5000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# File Upload Service (Python)
location /uploads/ {
proxy_pass http://localhost:8000/;
client_max_body_size 100M;
proxy_request_buffering off; # Stream large uploads
}
}
Reverse Proxy Flow: Client → Nginx (public entry point) → upstream backend → Nginx → Client.
Proxy Buffer Configuration:
http {
# Global proxy settings
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
proxy_temp_file_write_size 8k;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
When to Disable Buffering (streaming responses):
location /stream/ {
proxy_pass http://streaming_backend;
proxy_buffering off; # Disable for real-time data
proxy_cache off;
}
Complete Reverse Proxy Example:
upstream nodejs_backend {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

server {
    listen 80;
    server_name app.example.com;

    # Logging
    access_log /var/log/nginx/app.access.log;
    error_log /var/log/nginx/app.error.log;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;

    # Main application
    location / {
        proxy_pass http://nodejs_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }

    # WebSocket support
    location /ws {
        proxy_pass http://nodejs_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400;  # 24 hours for long-lived connections
    }
}
Testing Your Reverse Proxy:
# Test backend is running
curl http://localhost:3000
# Test through Nginx
curl http://app.example.com
# Check headers being forwarded
curl -I http://app.example.com
# Test with custom header
curl -H "X-Custom: test" http://app.example.com
6. Load Balancing with Nginx
Load balancing distributes incoming traffic across multiple backend servers, improving reliability, scalability, and performance.
Load Balancing Strategies
Nginx supports several load balancing algorithms:
1. Round-Robin (Default)
Distributes requests evenly across all servers in rotation.
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
Flow:
- Request 1 → backend1
- Request 2 → backend2
- Request 3 → backend3
- Request 4 → backend1 (cycle repeats)
2. Least Connections
Routes to the server with the fewest active connections.
upstream backend {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
Use Case: Backends with varying processing times.
3. IP Hash
Routes requests from the same client IP to the same backend server (session persistence).
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
Use Case: Applications requiring session affinity without shared session storage.
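Session affinity is easiest to picture as a hash of the client address modulo the backend count. The sketch below is a simplified model (the multiplier is illustrative, not Nginx's actual hash formula), but it captures the key property Nginx implements: only the first three octets of an IPv4 address are hashed, so every client in the same /24 network sticks to the same backend.

```python
# Simplified model of ip_hash session affinity (not Nginx's exact
# hash formula). Nginx hashes only the first three octets of an
# IPv4 address, so clients in the same /24 share a backend.
def ip_hash_pick(client_ip: str, backends: list[str]) -> str:
    first_three = client_ip.split(".")[:3]
    key = 0
    for octet in first_three:
        key = key * 113 + int(octet)   # illustrative hash constant
    return backends[key % len(backends)]

backends = ["backend1", "backend2", "backend3"]

# Same client -> same backend, every time:
assert ip_hash_pick("203.0.113.7", backends) == ip_hash_pick("203.0.113.7", backends)
# Clients in the same /24 also map to the same backend:
assert ip_hash_pick("203.0.113.7", backends) == ip_hash_pick("203.0.113.200", backends)
```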
4. Weighted Load Balancing
Assigns different weights to servers based on capacity.
upstream backend {
    server backend1.example.com weight=3;  # Gets 3x more traffic
    server backend2.example.com weight=2;  # Gets 2x more traffic
    server backend3.example.com weight=1;  # Gets 1x traffic
}
Use Case: Servers with different hardware specifications.
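Under the hood, Nginx interleaves weighted traffic using a "smooth weighted round-robin" selection rather than sending three requests in a row to the heaviest server. A minimal Python sketch of that selection loop:

```python
# Sketch of the smooth weighted round-robin algorithm Nginx uses
# for weighted upstreams: on each pick, every server's running
# counter grows by its weight; the highest counter wins and is
# then reduced by the total weight, which interleaves traffic
# instead of sending long runs to one server.
def smooth_wrr(weights: dict[str, int], n: int) -> list[str]:
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s in current:
            current[s] += weights[s]
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# With weight=3/2/1, six requests split 3:2:1 across the backends.
order = smooth_wrr({"backend1": 3, "backend2": 2, "backend3": 1}, 6)
print(order.count("backend1"), order.count("backend2"), order.count("backend3"))  # 3 2 1
```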
Health Checks and Failover
upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
    server backend3.example.com backup;  # Only used if others fail
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    }
}
Parameters:
- max_fails=3: Mark the server as down after 3 failed attempts
- fail_timeout=30s: Keep the server marked down for 30 seconds
- backup: Only used when primary servers are unavailable
- down: Temporarily remove the server from rotation
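These parameters amount to a small state machine per server: count failures, and once the threshold is reached, take the server out of rotation for fail_timeout seconds. A toy Python model of that passive health-check logic (Nginx's real bookkeeping is more involved):

```python
# Toy model of Nginx's passive health checking: after max_fails
# failures, a server is skipped for fail_timeout seconds.
import time

class Server:
    def __init__(self, name, max_fails=3, fail_timeout=30):
        self.name = name
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def record_failure(self, now=None):
        now = time.time() if now is None else now
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout  # mark down
            self.fails = 0

    def available(self, now=None):
        now = time.time() if now is None else now
        return now >= self.down_until

s = Server("backend1")
for _ in range(3):
    s.record_failure(now=100.0)
print(s.available(now=110.0))  # False: inside the 30s fail_timeout
print(s.available(now=131.0))  # True: fail_timeout has elapsed
```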
Real-World Use Cases
Complete Load Balancing Example:
upstream api_backend {
    least_conn;
    server 192.168.1.10:3000 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:3000 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:3000 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.1.13:3000 backup;
    keepalive 32;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
        proxy_connect_timeout 5s;
        proxy_send_timeout 10s;
        proxy_read_timeout 10s;
    }
}
7. SSL/TLS Configuration
Securing your website with HTTPS is essential. Nginx makes SSL/TLS configuration straightforward.
Generating and Installing SSL Certificates
Using Let's Encrypt with Certbot (Free, Automated)
# Install Certbot
sudo apt install certbot python3-certbot-nginx -y
# Obtain and install certificate
sudo certbot --nginx -d example.com -d www.example.com
# Test automatic renewal
sudo certbot renew --dry-run
Manual Certificate Installation:
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        root /var/www/html;
    }
}
Redirecting HTTP to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    # Use $host, not $server_name: with multiple names, $server_name
    # always expands to the first one, breaking redirects for www.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
    ssl_prefer_server_ciphers off;

    # HSTS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location / {
        root /var/www/html;
    }
}
8. Caching and Performance Optimization
Enabling Micro-Caching
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m use_temp_path=off;

server {
    listen 80;
    location / {
        proxy_cache my_cache;
        proxy_cache_valid 200 1m;
        proxy_cache_bypass $http_cache_control;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://backend;
    }
}
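Micro-caching works because even a 1-minute TTL absorbs almost all traffic for a hot URL: only the first request per minute reaches the backend. A toy model of the proxy_cache_valid 200 1m behavior:

```python
# Minimal model of micro-caching: a response is reused for ttl
# seconds (proxy_cache_valid 200 1m), after which the next request
# goes back to the upstream. Illustrative only.
cache: dict[str, tuple[float, str]] = {}

def fetch(url: str, now: float, ttl: float = 60.0) -> tuple[str, str]:
    if url in cache:
        stored_at, body = cache[url]
        if now - stored_at < ttl:
            return body, "HIT"          # served from cache
    body = f"response for {url}"        # pretend upstream call
    cache[url] = (now, body)
    return body, "MISS"                 # stored for the next minute

print(fetch("/api/data", now=0.0)[1])   # MISS - first request hits upstream
print(fetch("/api/data", now=30.0)[1])  # HIT  - within the 1-minute window
print(fetch("/api/data", now=61.0)[1])  # MISS - entry expired, refetched
```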
Gzip Compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml text/javascript application/javascript application/json application/xml+rss;
gzip_comp_level 6;
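The gzip_min_length directive exists because gzip adds fixed header and checksum overhead, so compressing very small responses can actually enlarge them, while typical repetitive HTML or JSON shrinks dramatically. A quick illustration using Python's gzip module:

```python
# Why gzip_min_length matters: gzip adds roughly 20 bytes of
# header/trailer overhead, so tiny payloads can grow, while
# larger repetitive text compresses extremely well.
import gzip

tiny = b"ok"
page = b"<li>item</li>" * 500          # repetitive HTML, 6500 bytes

print(len(gzip.compress(tiny)) > len(tiny))         # True: tiny payload grew
print(len(gzip.compress(page)) < len(page) // 10)   # True: over 90% smaller
```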
9. Security Best Practices
# Rate limiting (defined in the http context)
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

server {
    listen 80;

    # Apply rate limit
    location / {
        limit_req zone=general burst=20 nodelay;
        proxy_pass http://backend;
    }

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # Block bot-like user agents (caution: this broad pattern also
    # blocks legitimate crawlers such as search engine bots)
    if ($http_user_agent ~* (bot|crawler|spider)) {
        return 403;
    }
}
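limit_req implements the classic leaky-bucket algorithm: the per-client counter drains at the configured rate, up to burst excess requests are tolerated, and anything beyond that is rejected (503 by default). A rough Python model of rate=10r/s with burst=20 (Nginx's real implementation tracks millisecond state per zone entry; this only shows the shape of the logic):

```python
# Leaky-bucket model of limit_req: the excess counter drains at
# `rate` per second; a request is rejected once accepting it
# would push the excess beyond `burst`.
def simulate(timestamps, rate=10.0, burst=20):
    excess, last = 0.0, None
    results = []
    for t in timestamps:
        if last is not None:
            excess = max(0.0, excess - (t - last) * rate)  # bucket drains
        if excess + 1 > burst:
            results.append("rejected")                     # over the burst
        else:
            excess += 1
            results.append("accepted")
        last = t
    return results

# 30 requests arriving in the same instant: the first 20 are
# accepted, the remaining 10 rejected.
r = simulate([0.0] * 30)
print(r.count("accepted"), r.count("rejected"))  # 20 10
```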
10. Advanced Use Cases
Nginx with Docker
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY dist/ /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Nginx Ingress Controller (Kubernetes)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
11. Monitoring and Logging
Access and Error Logs
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;
}
Nginx Status Module
server {
    listen 8080;
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
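The stub_status page returns a small fixed-format text block: the active connection count, cumulative accepts/handled/requests counters, and the current reading/writing/waiting counts. A sketch of parsing it in a monitoring script (the sample values below are made up):

```python
# Parse the fixed-format output of Nginx's stub_status page.
# Sample text mirrors the real format; the numbers are invented.
sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""

def parse_stub_status(text: str) -> dict[str, int]:
    lines = text.strip().splitlines()
    stats = {"active": int(lines[0].split(":")[1])}
    accepts, handled, requests = (int(n) for n in lines[2].split())
    stats.update(accepts=accepts, handled=handled, requests=requests)
    parts = lines[3].split()   # ["Reading:", "6", "Writing:", "179", ...]
    stats.update(reading=int(parts[1]), writing=int(parts[3]), waiting=int(parts[5]))
    return stats

stats = parse_stub_status(sample)
print(stats["active"], stats["requests"])  # 291 31070465
```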
12. Troubleshooting Common Issues
502 Bad Gateway
Causes:
- Backend server is down
- Connection timeout
- Firewall blocking connection
Solutions:
# Check backend is running
curl http://localhost:3000
# Check Nginx error logs
sudo tail -f /var/log/nginx/error.log
# Increase timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
504 Gateway Timeout
# Increase timeout values
proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
Configuration Test
# Always test before reload
sudo nginx -t
# Reload configuration
sudo nginx -s reload
# Restart Nginx
sudo systemctl restart nginx
Conclusion
Nginx is a powerful, versatile tool that excels at web serving, reverse proxying, load balancing, and caching. Key takeaways:
- Event-driven architecture enables handling thousands of concurrent connections efficiently
- Reverse proxy capabilities allow building complex application architectures
- Load balancing improves reliability and scalability
- SSL/TLS configuration is straightforward with Let's Encrypt
- Performance optimization through caching and compression
- Security features protect against common attacks
When to Use Nginx vs Alternatives
Use Nginx when:
- High concurrency requirements
- Need reverse proxy + load balancer + web server in one
- Serving static content efficiently
- Building microservices architecture
Consider Apache when:
- Need .htaccess file support
- Extensive use of Apache modules
- Shared hosting environment
Consider Caddy when:
- Want automatic HTTPS with zero configuration
- Prefer simpler configuration syntax
- Smaller-scale deployments
Resources for Further Learning
- Official Nginx Documentation
- Nginx Admin Handbook
- DigitalOcean Nginx Tutorials
- Mozilla SSL Configuration Generator
FAQ
Q: How many connections can Nginx handle? A: With proper tuning, Nginx can handle 10,000+ concurrent connections per worker process. A 4-core server can easily handle 40,000+ connections.
Q: Is Nginx better than Apache? A: Neither is universally "better." Nginx excels at high concurrency and static content. Apache offers more features via modules and .htaccess support.
Q: Can Nginx replace a CDN? A: Nginx can cache content effectively, but a global CDN provides edge locations worldwide. Use both: CDN in front of Nginx.
Q: How do I update Nginx without downtime? A: Use nginx -s reload for configuration changes: the master process starts new workers with the new config and gracefully retires the old ones. For binary upgrades, use Nginx's on-the-fly binary replacement (the USR2/WINCH signals) or a rolling deployment behind a load balancer.
Q: What's the difference between Nginx and Nginx Plus? A: Nginx Plus is the commercial version with additional features: advanced load balancing, API-driven configuration, active health checks, and commercial support.
Q: Can Nginx run on Windows for production? A: While possible, Nginx on Windows has limitations and lower performance. Linux is recommended for production deployments.
Bonus Section
Sample nginx.conf Templates
Static Website
server {
    listen 80;
    server_name example.com;
    root /var/www/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~* \.(css|js|jpg|png)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}
Reverse Proxy with Load Balancing
upstream backend {
    least_conn;
    server 192.168.1.10:3000;
    server 192.168.1.11:3000;
    server 192.168.1.12:3000;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Nginx Command Cheatsheet
# Installation
sudo apt install nginx -y # Ubuntu/Debian
sudo yum install nginx -y # CentOS/RHEL
# Service Management
sudo systemctl start nginx # Start
sudo systemctl stop nginx # Stop
sudo systemctl restart nginx # Restart
sudo systemctl reload nginx # Reload config
sudo systemctl status nginx # Check status
# Configuration
sudo nginx -t # Test configuration
sudo nginx -T # Test and print config
sudo nginx -s reload # Reload configuration
sudo nginx -s stop # Stop Nginx
sudo nginx -s quit # Graceful shutdown
# Logs
sudo tail -f /var/log/nginx/access.log # Watch access log
sudo tail -f /var/log/nginx/error.log # Watch error log
# Common Tasks
sudo vim /etc/nginx/nginx.conf # Edit main config
sudo vim /etc/nginx/sites-available/default # Edit site config
sudo nginx -V # Show build configuration
Common Directives Reference
| Directive | Context | Purpose | Example |
|---|---|---|---|
| listen | server | Port to listen on | listen 80; |
| server_name | server | Domain names | server_name example.com; |
| root | server, location | Document root | root /var/www/html; |
| index | server, location | Index files | index index.html; |
| proxy_pass | location | Proxy destination | proxy_pass http://localhost:3000; |
| try_files | location | File lookup order | try_files $uri $uri/ =404; |
| return | server, location | Return status code | return 301 https://$host$request_uri; |
| rewrite | server, location | URL rewriting | rewrite ^/old/(.*)$ /new/$1; |
Performance Tuning Quick Reference
# Worker settings
worker_processes auto;
worker_connections 4096; # goes inside the events {} block
# Keepalive
keepalive_timeout 65;
keepalive_requests 100;
# Buffering
client_body_buffer_size 128k;
client_max_body_size 10m;
# Compression
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
# File handling
sendfile on;
tcp_nopush on;
tcp_nodelay on;
# Caching
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
You've now mastered Nginx! From basic setup to advanced configurations, you have the knowledge to deploy high-performance, secure, and scalable web applications. Start simple, iterate, and gradually add complexity as your needs grow. Happy proxying! 🚀