Installation: Docker
Deploy ModPageSpeed 2.0 using Docker containers.
This guide walks you through deploying ModPageSpeed 2.0 with Docker Compose. You’ll run three containers — nginx with the pagespeed module, the Factory Worker, and your origin server — sharing a Cyclone cache volume.
Prerequisites
- Docker Engine 20.10+
- Docker Compose v2
- A ModPageSpeed license key (get a 14-day free trial)
Directory Structure
Create a project directory with the following layout:
```
modpagespeed/
├── docker-compose.yml
├── nginx.conf
└── entrypoint-worker.sh
```
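If you prefer, the layout can be scaffolded from the shell. This is just a convenience sketch; your-site is the sample web root mounted by the origin service in the compose file below:

```shell
# Scaffold the project layout
mkdir -p modpagespeed/your-site
touch modpagespeed/docker-compose.yml \
      modpagespeed/nginx.conf \
      modpagespeed/entrypoint-worker.sh
chmod +x modpagespeed/entrypoint-worker.sh
```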
Docker Compose Configuration
Create docker-compose.yml:
```yaml
services:
  # Your origin server — replace with your actual upstream
  origin:
    image: nginx:stable
    volumes:
      - ./your-site:/usr/share/nginx/html:ro
    expose:
      - '8081'

  # Factory Worker — optimizes cached content
  worker:
    image: modpagespeed/worker:latest
    entrypoint: /entrypoint-worker.sh
    volumes:
      - shared:/shared
      - ./entrypoint-worker.sh:/entrypoint-worker.sh:ro
    depends_on:
      - origin

  # Nginx with PageSpeed module
  nginx:
    image: modpagespeed/nginx:latest
    ports:
      - '8080:8080'
    volumes:
      - shared:/shared
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - worker

volumes:
  shared:
```
The shared volume is where the Cyclone cache file and Unix socket live. Both
the nginx and worker containers mount it at /shared.
Nginx Configuration
Create nginx.conf:
```nginx
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

load_module /usr/lib/nginx/modules/ngx_pagespeed_module.so;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;

    server {
        listen 8080;
        server_name _;

        # Enable PageSpeed
        pagespeed on;
        pagespeed_cache_path /shared/cache.vol;
        # Worker socket path is read from pagespeed-shared.conf automatically

        location / {
            proxy_pass http://origin:8081;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
```
The two pagespeed directives are all you need:

- pagespeed on — enables the module for this server block
- pagespeed_cache_path — path to the shared Cyclone cache file
The worker socket path, license key, and other shared settings are read
automatically from pagespeed-shared.conf, which the worker writes next to
the cache file.
Worker Entrypoint
Create entrypoint-worker.sh and make it executable (chmod +x):
```bash
#!/bin/bash
set -e

# Ensure shared directory is accessible by both nginx (nobody) and worker (root)
chmod 777 /shared
umask 0000

# Pre-create cache file with open permissions
touch /shared/cache.vol
chmod 666 /shared/cache.vol

exec factory_worker \
    --socket /shared/pagespeed.sock \
    --cache-path /shared/cache.vol
```
The permission setup is important: nginx worker processes run as nobody while
the Factory Worker runs as root. Both need read/write access to the cache file
and Unix socket.
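You can see the effect of these calls without a container. This sketch recreates the same permission setup in a scratch directory and prints the resulting modes:

```shell
# Recreate the entrypoint's permission setup in a scratch directory
demo=/tmp/pagespeed-perms-demo
mkdir -p "$demo"
chmod 777 "$demo"                 # world-writable, like /shared

(
  umask 0000                      # new files are created world-readable/writable
  touch "$demo/cache.vol"
  chmod 666 "$demo/cache.vol"     # explicit, in case the file already existed
)

stat -c '%a %n' "$demo" "$demo/cache.vol"
```

stat should report 777 on the directory and 666 on the cache file — exactly what the nginx worker processes (running as nobody) need in order to read and write a file created by root.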
Start the Stack
```bash
docker compose up -d
```
Check that all three containers are running:
```bash
docker compose ps
```
You should see origin, worker, and nginx all in a running state.
Verify It Works
Test with a simple request:
```bash
# First request — cache miss, proxied to origin
curl -I http://localhost:8080/
```
Look for the X-PageSpeed: MISS header. This means the module is active and the
response was proxied to your origin and cached.
```bash
# Second request — cache hit, served from cache
curl -I http://localhost:8080/
```
You should now see X-PageSpeed: HIT.
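The MISS-then-HIT check is easy to script. The helper below only parses header text, so you can try it on a captured sample before feeding it real output from curl -sI http://localhost:8080/:

```shell
# Print the value of the X-PageSpeed header from a raw header dump
pagespeed_status() {
  printf '%s\n' "$1" | tr -d '\r' |
    awk -F': ' 'tolower($1) == "x-pagespeed" { print $2 }'
}

# Example input, shaped like the output of `curl -I http://localhost:8080/`
sample='HTTP/1.1 200 OK
Server: nginx
X-PageSpeed: MISS
Content-Type: text/html'

pagespeed_status "$sample"   # prints: MISS
```

Stripping carriage returns with tr matters because curl preserves the \r\n line endings of HTTP headers, which would otherwise end up in the extracted value.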
View Logs
```bash
# All services
docker compose logs -f

# Just the worker
docker compose logs -f worker

# Just nginx
docker compose logs -f nginx
```
The worker logs show optimization activity — you’ll see messages when it processes images, CSS, and JavaScript files.
Cache Size
By default, the cache size is 1 GB. To change it, pass the --cache-size
flag to the worker (in bytes):

```bash
exec factory_worker \
    --socket /shared/pagespeed.sock \
    --cache-path /shared/cache.vol \
    --cache-size 536870912  # 512 MB
```
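Shell arithmetic saves you from typing byte counts by hand:

```shell
# --cache-size values in bytes
echo $(( 512 * 1024 * 1024 ))        # 512 MB -> 536870912
echo $(( 2 * 1024 * 1024 * 1024 ))   # 2 GB   -> 2147483648
```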
Stopping and Restarting
```bash
# Stop all containers
docker compose down

# Stop and remove the cache volume (fresh start)
docker compose down -v
```
The cache is stored in a named Docker volume. Stopping containers preserves the
cache — optimized content is still available on restart. Use down -v only if
you want to clear the cache completely.
Kubernetes
For Kubernetes deployments, run nginx and the worker as separate containers in
the same pod, sharing an emptyDir volume:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pagespeed
spec:
  containers:
    - name: nginx
      image: modpagespeed/nginx:latest
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: shared
          mountPath: /shared
    - name: worker
      image: modpagespeed/worker:latest
      command: ['/entrypoint-worker.sh']
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      emptyDir:
        sizeLimit: 512Mi
```
Both containers mount the same emptyDir at /shared, so the Unix socket and cache file are visible to each of them without additional configuration (Unix sockets live on the filesystem, so it is the shared volume, not the pod's shared network namespace, that makes this work).
Troubleshooting
No X-PageSpeed header:
Check that pagespeed on; is set in your nginx config and the module is loaded.
Verify with docker compose logs nginx.
X-PageSpeed: MISS on every request:
The cache file may not be shared correctly. Ensure both containers mount the same
volume at /shared and that permissions are set (cache file 666, socket
world-writable).
Worker not processing content:
Check worker logs with docker compose logs worker. Verify the worker is writing
pagespeed-shared.conf next to the cache file (nginx reads the socket path from
this file automatically).
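When chasing permission problems, a small audit function can rule out the common cases quickly. This is a sketch assuming the layout used throughout this guide (cache.vol and pagespeed.sock under /shared); run it inside either container via docker compose exec. The demonstration below points it at a scratch directory that deliberately lacks the socket:

```shell
# Audit a shared directory for the permissions this guide expects
check_shared() {
  dir=${1:-/shared}
  rc=0
  if [ -e "$dir/cache.vol" ]; then
    mode=$(stat -c '%a' "$dir/cache.vol")
    [ "$mode" = "666" ] || { echo "cache.vol mode is $mode, expected 666"; rc=1; }
  else
    echo "missing $dir/cache.vol"; rc=1
  fi
  [ -S "$dir/pagespeed.sock" ] || { echo "missing socket $dir/pagespeed.sock"; rc=1; }
  [ $rc -eq 0 ] && echo "shared dir looks OK"
  return $rc
}

# Demonstrate on a scratch directory that has the cache file but no socket
mkdir -p /tmp/shared-audit
touch /tmp/shared-audit/cache.vol
chmod 666 /tmp/shared-audit/cache.vol
check_shared /tmp/shared-audit
```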
Next Steps
- Configuration Reference — Tune cache size, worker threads, and other options
- Getting Started — Architecture overview and verification steps
- Helm Deployment — Deploy on Kubernetes with the official Helm chart
- Troubleshooting — Common issues and diagnostics