Deployment
Production deployment guide for ModPageSpeed 2.0.
This guide covers Docker Compose, Kubernetes (Helm), systemd service configuration, file permissions, log rotation, monitoring, and cache sizing.
Docker Compose
The recommended production setup uses two containers sharing a named volume: the Factory Worker and nginx with the PageSpeed module. Your origin server runs separately (on the same host, a different host, or behind a load balancer).
Environment File
Copy the example environment file and adjust for your setup:
cp deploy/.env.example deploy/.env
Edit deploy/.env:
# Image version tag
PAGESPEED_VERSION=latest
# License key (warn-only if missing)
PAGESPEED_LICENSE_KEY=your-license-token-here
# Your origin server
BACKEND_HOST=127.0.0.1
BACKEND_PORT=8081
# Cache size in bytes (512 MB)
CACHE_SIZE=536870912
# Exposed ports
NGINX_PORT=80
NGINX_SSL_PORT=443
Docker Compose File
The production deploy/docker-compose.yml runs the worker and nginx:
services:
worker:
image: modpagespeed/worker:${PAGESPEED_VERSION:-latest}
volumes:
- pagespeed-data:/data
environment:
- CACHE_SIZE=${CACHE_SIZE:-536870912}
- PAGESPEED_LICENSE_KEY=${PAGESPEED_LICENSE_KEY:-}
restart: unless-stopped
healthcheck:
test:
[
'CMD-SHELL',
'python3 -c "import socket; s=socket.socket(socket.AF_UNIX); s.connect(''/data/pagespeed.sock.health''); d=s.recv(256); s.close(); exit(0 if d.startswith(b''OK'') else 1)"',
]
interval: 10s
timeout: 5s
retries: 3
start_period: 5s
logging:
driver: json-file
options:
max-size: '10m'
max-file: '3'
nginx:
image: modpagespeed/nginx:${PAGESPEED_VERSION:-latest}
ports:
- '${NGINX_PORT:-80}:80'
- '${NGINX_SSL_PORT:-443}:443'
volumes:
- pagespeed-data:/data
environment:
- BACKEND_HOST=${BACKEND_HOST:-127.0.0.1}
- BACKEND_PORT=${BACKEND_PORT:-8081}
- PAGESPEED_LICENSE_KEY=${PAGESPEED_LICENSE_KEY:-}
depends_on:
worker:
condition: service_healthy
restart: unless-stopped
logging:
driver: json-file
options:
max-size: '10m'
max-file: '5'
volumes:
pagespeed-data:
driver: local
Starting the Stack
# Build images (if not using pre-built)
docker/build-release.sh 2.0.0
# Start in detached mode
docker compose -f deploy/docker-compose.yml up -d
# Verify both containers are healthy
docker compose -f deploy/docker-compose.yml ps
SSL Configuration
To enable HTTPS, mount your certificate and key files into the nginx container and provide a custom nginx config:
nginx:
volumes:
- pagespeed-data:/data
- ./nginx.conf:/etc/nginx/nginx.conf:ro
- ./ssl/cert.pem:/etc/nginx/ssl/cert.pem:ro
- ./ssl/key.pem:/etc/nginx/ssl/key.pem:ro
Updating
To upgrade to a new version:
# Pull new images
docker compose -f deploy/docker-compose.yml pull
# Recreate containers on the new images (brief interruption while nginx restarts)
docker compose -f deploy/docker-compose.yml up -d
The cache volume is preserved across restarts. Previously optimized variants remain available immediately.
Kubernetes (Helm)
A Helm chart is provided in deploy/helm/pagespeed/. It deploys the worker
and nginx as sidecar containers in a single pod, sharing an emptyDir volume
for the Cyclone cache.
Install
helm install pagespeed deploy/helm/pagespeed/ \
--set backend.host=my-origin.default.svc.cluster.local \
--set backend.port=8080
Common Overrides
helm install pagespeed deploy/helm/pagespeed/ \
--set backend.host=my-origin \
--set backend.port=8080 \
--set worker.cacheSize=1073741824 \
--set replicaCount=3 \
--set ingress.enabled=true \
--set ingress.hosts[0].host=cdn.example.com \
--set ingress.hosts[0].paths[0].path=/ \
--set ingress.hosts[0].paths[0].pathType=Prefix
License Key
Pass the license key as a secret:
helm install pagespeed deploy/helm/pagespeed/ \
--set licenseKey=your-license-token-here
Or reference an existing secret:
helm install pagespeed deploy/helm/pagespeed/ \
--set existingSecret=my-license-secret \
--set existingSecretKey=license-key
Architecture
Each pod contains two containers:
- worker — Reads/writes the shared cache, processes notifications
- nginx — Serves traffic, sends notifications to the worker
Both containers mount the same emptyDir volume at /data. The worker
creates the cache file and Unix sockets; nginx reads the cache and connects
to the sockets.
Autoscaling
Enable HPA-based autoscaling:
helm install pagespeed deploy/helm/pagespeed/ \
--set autoscaling.enabled=true \
--set autoscaling.minReplicas=2 \
--set autoscaling.maxReplicas=10
systemd Service
For bare-metal or VM deployments without Docker, run the Factory Worker as a systemd service.
Service File
Install the service file at /etc/systemd/system/pagespeed-worker.service:
[Unit]
Description=ModPageSpeed 2.0 Factory Worker
Documentation=https://modpagespeed.com/docs/configuration/
After=network.target
[Service]
Type=simple
ExecStartPre=/bin/mkdir -p /var/lib/pagespeed
ExecStartPre=/bin/chmod 777 /var/lib/pagespeed
ExecStart=/usr/local/bin/factory_worker \
--cache-path /var/lib/pagespeed/cache.vol \
--socket /var/lib/pagespeed/pagespeed.sock \
--cache-size 536870912 \
--log-format json \
--log-level info
# Permissions
UMask=0000
User=root
# Security hardening
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/pagespeed
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
# Restart behavior
Restart=on-failure
RestartSec=5
StartLimitBurst=5
StartLimitIntervalSec=60
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=pagespeed-worker
[Install]
WantedBy=multi-user.target
Enable and Start
sudo systemctl daemon-reload
sudo systemctl enable pagespeed-worker
sudo systemctl start pagespeed-worker
sudo systemctl status pagespeed-worker
Key systemd Settings
- UMask=0000 — Creates the Unix socket and cache file with open permissions so nginx workers (running as nobody or www-data) can access them.
- ProtectSystem=strict / ReadWritePaths — The worker can only write to /var/lib/pagespeed, limiting the blast radius of any vulnerability.
- LimitNOFILE=65536 — Ensures the worker can open enough file descriptors for large caches and many simultaneous connections.
- Restart=on-failure with StartLimitBurst=5 — Restarts on crash but gives up after 5 failures in 60 seconds to prevent restart loops.
File Permissions
Cross-process cache sharing between nginx and the worker requires specific
file permissions. This is because nginx worker processes typically run as
nobody or www-data, while the Factory Worker runs as root.
Required Permissions
| Path | Permission | Reason |
|---|---|---|
| /var/lib/pagespeed/ | 777 | Both processes create files in this directory |
| /var/lib/pagespeed/cache.vol | 666 | Both processes read and write cache entries |
| pagespeed.sock | world-writable | nginx sends notifications to the worker |
| pagespeed.sock.health | world-writable | Health check access |
| pagespeed.sock.mgmt | world-writable | Management socket access |
Ensuring Correct Permissions
For systemd deployments, UMask=0000 in the service file ensures all files
created by the worker (sockets, cache) are world-accessible.
For Docker deployments, the worker’s entrypoint script handles this:
#!/bin/bash
set -e
chmod 777 /data
umask 0000
touch /data/cache.vol
chmod 666 /data/cache.vol
exec factory_worker --socket /data/pagespeed.sock --cache-path /data/cache.vol
Verifying Permissions
If you see persistent MISS responses or worker processing errors, check
permissions:
ls -la /var/lib/pagespeed/
# Expected:
# drwxrwxrwx root root .
# -rw-rw-rw- root root cache.vol
# srwxrwxrwx root root pagespeed.sock
# srwxrwxrwx root root pagespeed.sock.health
# srwxrwxrwx root root pagespeed.sock.mgmt
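The same checks can be scripted for use in a deployment pipeline; a minimal sketch, assuming the systemd paths above and treating the srwxrwxrwx socket listings as mode 0777 (the path list and expected modes are taken from the permissions table, not from any tool shipped with the worker):

```python
import os
import stat

# Paths and modes from the "Required Permissions" table above.
EXPECTED = {
    "/var/lib/pagespeed": 0o777,
    "/var/lib/pagespeed/cache.vol": 0o666,
    "/var/lib/pagespeed/pagespeed.sock": 0o777,
    "/var/lib/pagespeed/pagespeed.sock.health": 0o777,
    "/var/lib/pagespeed/pagespeed.sock.mgmt": 0o777,
}

def check_permissions(expected=EXPECTED) -> list[str]:
    """Return a list of problems; an empty list means all paths look correct."""
    problems = []
    for path, want in expected.items():
        try:
            mode = stat.S_IMODE(os.stat(path).st_mode)
        except FileNotFoundError:
            problems.append(f"{path}: missing")
            continue
        if mode != want:
            problems.append(f"{path}: mode {oct(mode)}, expected {oct(want)}")
    return problems

# for problem in check_permissions():
#     print(problem)
```

An empty result means both the worker and nginx should be able to reach the cache and sockets; "missing" entries usually mean the worker has not started yet.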
Log Rotation
systemd (journald)
When using --log-format json with systemd, logs go to journald by default.
Journald manages its own rotation. View logs with:
# Follow worker logs
sudo journalctl -u pagespeed-worker -f
# Last 100 lines
sudo journalctl -u pagespeed-worker -n 100
# Since last hour
sudo journalctl -u pagespeed-worker --since "1 hour ago"
# JSON output for parsing
sudo journalctl -u pagespeed-worker -o cat | jq .
nginx Log Rotation
Install the logrotate configuration at /etc/logrotate.d/pagespeed:
/var/log/nginx/pagespeed*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0644 www-data adm
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid) || true
endscript
}
This rotates PageSpeed-related nginx logs daily, keeping 14 days of compressed history.
Docker Log Rotation
The Docker Compose configuration includes built-in log rotation:
logging:
driver: json-file
options:
max-size: '10m' # Rotate at 10 MB
max-file: '3' # Keep 3 rotated files
View Docker logs with:
docker compose -f deploy/docker-compose.yml logs -f worker
docker compose -f deploy/docker-compose.yml logs -f nginx
Monitoring
Health Check
The worker exposes a health check socket at {socket_path}.health. Connect
to get a one-line status:
python3 -c "
import socket
s = socket.socket(socket.AF_UNIX)
s.connect('/var/lib/pagespeed/pagespeed.sock.health')
print(s.recv(256).decode())
s.close()
"
Response: OK 5/128 notifs=1542 variants=986 proactive=724 errors=3 cache_entries=2048
Use this for load balancer health checks and basic uptime monitoring.
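For alerting, the status line can be split into fields; a sketch based on the sample response above, assuming the second token is an active/max connection pair and the remaining tokens are key=value counters:

```python
def parse_health(line: str) -> dict:
    """Parse a health-socket status line such as
    'OK 5/128 notifs=1542 variants=986 proactive=724 errors=3 cache_entries=2048'."""
    parts = line.split()
    status = parts[0]                      # "OK" on a healthy worker
    active, maximum = parts[1].split("/")  # assumed: active/max connections
    fields = dict(kv.split("=") for kv in parts[2:])
    return {
        "healthy": status == "OK",
        "connections_active": int(active),
        "connections_max": int(maximum),
        **{key: int(value) for key, value in fields.items()},
    }

sample = "OK 5/128 notifs=1542 variants=986 proactive=724 errors=3 cache_entries=2048"
print(parse_health(sample)["errors"])  # → 3
```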
Management Socket STATS
For detailed metrics, connect to the management socket at
{socket_path}.mgmt:
echo "STATS" | socat - UNIX-CONNECT:/var/lib/pagespeed/pagespeed.sock.mgmt
The response is a JSON object containing:
- connections — Active and max connection counts
- notifications — Total received and skipped (dedup) counts
- variants — Total written and proactively generated counts
- errors — Processing error count
- cache — Current entry count and size in bytes
- by_type — Per-content-type counts and cumulative processing time
- by_format — Per-image-format generation counts
- timing_us — Total cumulative processing time in microseconds
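If socat is not available, the same query can be issued directly from Python; a sketch, assuming (as the socat invocation implies) that the socket accepts a newline-terminated STATS command and sends the complete JSON object before closing the connection:

```python
import json
import socket

def fetch_stats(mgmt_sock="/var/lib/pagespeed/pagespeed.sock.mgmt") -> dict:
    """Send STATS to the management socket and decode the JSON reply."""
    s = socket.socket(socket.AF_UNIX)
    s.settimeout(5)
    s.connect(mgmt_sock)
    s.sendall(b"STATS\n")
    chunks = []
    while True:  # read until the worker closes the connection
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
    s.close()
    return json.loads(b"".join(chunks))

# stats = fetch_stats()
# print(stats["errors"], stats["cache"])
```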
Prometheus Metrics
The management socket also supports a METRICS command that returns stats in
Prometheus text exposition format:
echo "METRICS" | socat -t 5 - UNIX-CONNECT:/var/lib/pagespeed/pagespeed.sock.mgmt
This outputs # HELP, # TYPE, and metric lines for all counters and gauges.
See the API Reference for the full metric list.
For Prometheus scraping, create a simple exporter that connects to the management socket and exposes the output on an HTTP endpoint, or use a cron-based approach to push metrics to Pushgateway.
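A minimal sketch of such an exporter, assuming METRICS behaves like STATS above (newline-terminated command, full reply, then connection close); the listen port 9105 is an arbitrary choice, not part of the product:

```python
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

MGMT_SOCK = "/var/lib/pagespeed/pagespeed.sock.mgmt"

def scrape_metrics() -> bytes:
    """Ask the management socket for Prometheus-format metrics."""
    s = socket.socket(socket.AF_UNIX)
    s.settimeout(5)
    s.connect(MGMT_SOCK)
    s.sendall(b"METRICS\n")
    chunks = []
    while True:  # read until the worker closes the connection
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
    s.close()
    return b"".join(chunks)

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = scrape_metrics()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("0.0.0.0", 9105), MetricsHandler).serve_forever()
```

Point a Prometheus scrape job at port 9105; each scrape triggers one METRICS query against the worker.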
Monitoring Script
A simple monitoring script that collects stats periodically:
#!/bin/bash
MGMT_SOCK="/var/lib/pagespeed/pagespeed.sock.mgmt"
while true; do
STATS=$(echo "STATS" | socat -t 5 - UNIX-CONNECT:$MGMT_SOCK 2>/dev/null)
if [ $? -eq 0 ]; then
echo "$(date -Iseconds) $STATS"
else
echo "$(date -Iseconds) ERROR: cannot connect to management socket"
fi
sleep 60
done
Key Metrics to Watch
- errors increasing steadily indicates processing failures. Check worker logs for details.
- cache.size approaching --cache-size means LRU eviction is active. Consider increasing the cache size.
- notifications.skipped_dedup being a large fraction of notifications.received is normal and healthy — it means the worker is avoiding redundant work.
- connections.active near connections.max means the worker is connection-limited. Increase --max-connections.
Cache Sizing Recommendations
The cache stores both original content and optimized variants. With proactive variant generation enabled, each image can produce many variants.
Estimating Cache Size
A rough formula:
cache_size = num_unique_urls * avg_original_size * variant_multiplier
Where variant_multiplier depends on your content mix:
| Content Type | Variant Multiplier | Notes |
|---|---|---|
| HTML | 2x | Original + critical CSS variant |
| CSS / JS | 2x | Original + minified |
| Images (default) | 10-20x | Multiple formats, viewports, density, Save-Data |
| Images (minimal) | 3-4x | With all proactive flags disabled |
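The formula can be applied directly; a sketch with hypothetical numbers (1,000 unique image URLs averaging 100 KB, with a 10x multiplier), which lands near the "Medium" row of the table below:

```python
def estimate_cache_size(num_unique_urls: int,
                        avg_original_size: int,
                        variant_multiplier: float) -> int:
    """cache_size = num_unique_urls * avg_original_size * variant_multiplier"""
    return int(num_unique_urls * avg_original_size * variant_multiplier)

# 1,000 images averaging 100 KB, moderate proactive variant generation
size = estimate_cache_size(1_000, 100 * 1024, 10)
print(size)  # → 1024000000 (roughly 1 GB)
```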
Recommended Sizes
| Deployment Size | Cache Size | Flag Value |
|---|---|---|
| Small (< 100 images) | 256 MB | 268435456 |
| Medium (100-1000 images) | 1 GB | 1073741824 |
| Large (1000-10000 images) | 4 GB | 4294967296 |
| Very large (10000+ images) | 8+ GB | 8589934592 |
Signs the Cache Is Too Small
- Frequent MISS responses for previously-seen URLs (evicted content)
- cache.entries in STATS is stable while notifications.received keeps growing (cache is full, new entries evict old ones)
- Worker processing the same URL repeatedly (check logs for duplicate URLs)
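The eviction sign can be checked mechanically by comparing two STATS snapshots taken some interval apart; a heuristic sketch, assuming snapshots are dicts with the notifications.received and cache.entries fields described earlier (the threshold is an arbitrary example value):

```python
def cache_looks_too_small(before: dict, after: dict,
                          min_new_notifications: int = 100) -> bool:
    """Heuristic: the cache is likely evicting if notifications keep
    arriving while the entry count stays flat (full cache, LRU churn)."""
    new_notifs = (after["notifications"]["received"]
                  - before["notifications"]["received"])
    new_entries = after["cache"]["entries"] - before["cache"]["entries"]
    return new_notifs >= min_new_notifications and new_entries <= 0

before = {"notifications": {"received": 5000}, "cache": {"entries": 2048}}
after = {"notifications": {"received": 5400}, "cache": {"entries": 2048}}
print(cache_looks_too_small(before, after))  # → True
```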
Signs the Cache Is Oversized
- cache.size is consistently well below --cache-size — you are allocating disk space that is never used
- Not a problem per se, but the pre-allocated cache file consumes disk space
Next Steps
- Configuration Reference — All worker flags and nginx directives
- Troubleshooting — Common issues and solutions
- API Reference — IPC protocol, health endpoint, and management socket details
- HTTP API Reference — REST and WebSocket endpoints
- Web Console — Visual cache inspector and real-time monitoring
- Helm Deployment — Kubernetes-specific deployment guide