Getting Started
Install and configure ModPageSpeed 2.0 for your web server.
ModPageSpeed 2.0 is a self-hosted web performance optimization system. It deploys as a caching reverse proxy in front of any HTTP origin server and automatically optimizes images, minifies CSS and JavaScript, and serves optimized content with zero-copy caching — all on your own infrastructure.
Architecture
ModPageSpeed 2.0 has three components that work together:
- Caching Proxy (nginx) — A dynamic nginx module (ngx_pagespeed_module.so) that classifies incoming requests, serves cached optimized content via zero-copy mmap, and proxies cache misses to your origin server.
- Cyclone Cache — A shared disk cache file that stores both original and optimized content variants. Both nginx and the worker access it via memory-mapped I/O for sub-millisecond lookups.
- Factory Worker — A lightweight C++ worker that reads original content from the cache, performs optimizations (image transcoding, CSS/JS minification), and writes optimized variants back to the cache.
Request Flow
When a request arrives:
- Nginx classifies the client’s capabilities (image format support, viewport, transfer encoding, Save-Data preference) into a 32-bit capability mask.
- The cache is checked for an optimized variant matching that mask.
- On cache hit: The content is served directly from the memory-mapped cache file — no copies, no allocations. The response includes an X-PageSpeed: HIT header.
- On cache miss: The request is proxied to your origin. The response is stored in the cache and served to the client with an X-PageSpeed: MISS header. A notification is sent to the Factory Worker.
- The worker reads the original content, optimizes it, and writes the result back to the cache. Future requests for the same capability mask get the optimized version.
Prerequisites
- nginx 1.24+ (stable or mainline)
- Linux (Debian/Ubuntu or RHEL/Rocky) — x86_64
- Docker (if using the Docker deployment method)
Choose Your Installation Method
There are two ways to deploy ModPageSpeed 2.0:
Docker (Recommended)
The fastest way to get started. A Docker Compose setup runs nginx with the pagespeed module and the Factory Worker as separate containers sharing a cache volume.
Best for:
- New deployments
- Quick evaluation
- Kubernetes environments
- Teams that prefer containerized infrastructure
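For orientation, the Docker deployment usually takes the shape of two services sharing one cache volume. The sketch below is illustrative only: the image names (modpagespeed/nginx-proxy, modpagespeed/factory-worker) and the mount path are assumptions, not published artifacts; substitute whatever your distribution actually provides.

```yaml
# Sketch of the Compose layout: proxy and worker share a cache volume.
# Image names and the mount path are ASSUMPTIONS, not official artifacts.
services:
  proxy:
    image: modpagespeed/nginx-proxy:2.0      # assumed image name
    ports:
      - "80:80"
    volumes:
      - pagespeed-cache:/var/cache/modpagespeed
  worker:
    image: modpagespeed/factory-worker:2.0   # assumed image name
    volumes:
      - pagespeed-cache:/var/cache/modpagespeed
volumes:
  pagespeed-cache: {}
```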
nginx Module
Install the prebuilt dynamic module (.so) into your existing nginx installation and run the Factory Worker as a systemd service.
Best for:
- Existing nginx deployments you don’t want to containerize
- Bare-metal or VM-based infrastructure
- Environments where Docker isn’t available
Get started with the nginx module →
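On the nginx side, this typically comes down to loading the shared object and enabling it for a server block. In the sketch below, only load_module and the ngx_pagespeed_module.so filename come from this page; the pagespeed on and pagespeed_cache directives, the paths, and the upstream address are assumptions for illustration, so check the Configuration Reference for the real directive names.

```nginx
# Fragment only: load_module belongs at the top level of nginx.conf.
load_module modules/ngx_pagespeed_module.so;

http {
    server {
        listen 80;

        location / {
            # Directive names below are ASSUMED for illustration;
            # consult the Configuration Reference for the real ones.
            pagespeed on;
            pagespeed_cache /var/cache/modpagespeed/cyclone.cache;

            # Cache misses are proxied to the origin server.
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```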
Quick Verification
Once installed (via either method), verify that ModPageSpeed is working:
# Request a page — first request will be a cache miss
curl -I http://localhost/
# Look for the X-PageSpeed header
# X-PageSpeed: MISS (first request, proxied to origin)
# X-PageSpeed: HIT (subsequent requests, served from cache)
Request the same URL again after a moment. The second response should show X-PageSpeed: HIT, confirming that the cache is working.
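If you want to script this check, the header is easy to pull out of the response text. A minimal sketch; the captured response below is a stand-in for real `curl -I` output:

```shell
# Parse the X-PageSpeed header from a captured response.
# The response text here is illustrative, not live output.
response='HTTP/1.1 200 OK
X-PageSpeed: HIT
Content-Type: text/html'

status=$(printf '%s\n' "$response" | tr -d '\r' | awk -F': ' '/^X-PageSpeed:/ {print $2}')
echo "cache status: $status"
# prints: cache status: HIT
```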
To verify that optimizations are being applied, request an image with WebP support:
curl -H "Accept: image/webp,*/*" -o /dev/null -w "%{size_download}" \
http://localhost/image.jpg
The response size should be smaller than the original once the worker has processed it.
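As a rough worked example of what to expect, with made-up byte counts (actual savings depend entirely on the image):

```shell
# Illustrative sizes only -- real numbers come from curl's size_download.
orig=104857   # bytes reported for the original JPEG
opt=38214     # bytes reported after WebP transcoding
savings=$(( (orig - opt) * 100 / orig ))
echo "saved ${savings}% (${orig} -> ${opt} bytes)"
# prints: saved 63% (104857 -> 38214 bytes)
```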
Next Steps
- Configuration Reference — All nginx directives, worker flags, and tuning options
- Deployment Guide — Production setup, monitoring, and cache sizing
- Web Console — Inspect cache state, monitor performance, and tune configuration
- Troubleshooting — Common issues and diagnostics