Migrating from mod_pagespeed 1.x to ModPageSpeed 2.0
By Otto Schaaf
What changed and why
mod_pagespeed 1.x was built around the RewriteDriver — over 2,000 lines of orchestration code that managed a pipeline of 60+ filters, each modifying the HTML response in sequence during the request. Filters had carefully managed inter-dependencies: combine_css needed to run before rewrite_css, which needed to run before inline_css, and so on. This ordering required careful reasoning when adding new filters, because each one could interact with every existing one.
The architecture was designed for Apache. It hooked into Apache’s output filter chain, and the nginx port (ngx_pagespeed) adapted the filter pipeline to nginx’s event-driven model. This translation layer added significant maintenance overhead, particularly around subrequest handling and connection lifecycle management.
ModPageSpeed 2.0 takes a different architectural approach. The underlying optimization libraries — the HTML parser, CSS minifier, JS minifier, and image codecs — are individually well-tested and capable (and form the foundation of 2.0). The key change is moving orchestration out of the request path entirely, into a separate worker that optimizes content asynchronously.
This is not an incremental upgrade. It is a ground-up rethink that preserves the optimization quality of the battle-tested PSOL libraries while replacing the architecture around them. If you are running mod_pagespeed 1.x today, this guide will walk you through what changed and how to migrate.
Architecture comparison
mod_pagespeed 1.x processed everything synchronously within the request:
```
Client -> Apache/Nginx -> mod_pagespeed filters (sync) -> Origin
              [RewriteDriver orchestration]
              [60+ filters in pipeline]
              [Rewrite cache for sub-resources]
```
ModPageSpeed 2.0 uses a three-component architecture where optimization happens outside the request path:
```
Client -> Nginx interceptor -> Cyclone Cache (mmap) -> Client
               |                       ^
               | (fire-and-forget)     | (write variant)
               v                       |
          Factory Worker --------------+
               [HTML: critical CSS injection]
               [CSS: four-phase minification]
               [JS: token-aware minification]
               [Images: WebP/AVIF/optimized-original]
```
The differences are significant:
Synchronous vs. asynchronous. In 1.x, every response was transformed in-flight, adding latency to every request. In 2.0, the first request gets the original response (served from cache with X-PageSpeed: MISS), and the worker optimizes it in the background. Subsequent requests get the optimized variant (X-PageSpeed: HIT) with zero processing overhead — just a memory-mapped cache read.
Single-process vs. multi-process. In 1.x, the optimization code ran inside the web server process. In 2.0, the nginx module is a thin cache interceptor, and the factory worker is a separate process. They share a single Cyclone cache file with memory-mapped directories (enable_mmap_directory = true), so writes from either process are immediately visible to the other.
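The shared-visibility property can be illustrated with a minimal sketch (this is an analogy, not the Cyclone implementation): two independent memory maps of the same file see each other's writes immediately, with no read/write syscalls on the hot path.

```python
import mmap
import os
import tempfile

# Create a small file standing in for the shared cache volume.
path = os.path.join(tempfile.mkdtemp(), "cache.vol")
with open(path, "wb") as f:
    f.truncate(4096)  # pre-size the file before mapping

writer_f = open(path, "r+b")
reader_f = open(path, "r+b")
w = mmap.mmap(writer_f.fileno(), 0)  # "factory worker" side
r = mmap.mmap(reader_f.fileno(), 0)  # "nginx interceptor" side

w[0:5] = b"HIT!!"   # worker writes a variant
print(r[0:5])       # interceptor sees it immediately -> b'HIT!!'
```

Because both mappings are shared views of the same pages, no flush or IPC round-trip is needed for the write to become visible.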
Filter pipeline vs. content-type dispatch. Instead of 60+ filters with ordering dependencies, the worker dispatches on content type: HTML, CSS, JavaScript, or Image. Each path runs independently. There is no filter interaction to reason about.
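The dispatch model can be sketched in a few lines. This is an illustrative outline, not the worker's actual code; the handler names and table are hypothetical.

```python
from typing import Callable, Dict

# One independent handler per media type; handlers never interact,
# so there is no ordering to reason about.
def optimize_html(body: bytes) -> bytes: return body   # critical CSS injection
def optimize_css(body: bytes) -> bytes: return body    # minification
def optimize_js(body: bytes) -> bytes: return body     # minification
def optimize_image(body: bytes) -> bytes: return body  # transcoding

HANDLERS: Dict[str, Callable[[bytes], bytes]] = {
    "text/html": optimize_html,
    "text/css": optimize_css,
    "application/javascript": optimize_js,
    "image/jpeg": optimize_image,
    "image/png": optimize_image,
}

def dispatch(content_type: str, body: bytes) -> bytes:
    # Strip parameters like "; charset=utf-8"; unknown types pass through.
    handler = HANDLERS.get(content_type.split(";")[0].strip())
    return handler(body) if handler else body
```

Contrast this with 1.x, where adding a filter meant reasoning about its position relative to 60+ others; here a new content type is a single new dictionary entry.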
Configuration mapping
Many mod_pagespeed 1.x directives have no direct equivalent because 2.0 applies optimizations automatically based on content type. Here is a mapping of the most commonly used directives:
| 1.x Directive | 2.0 Equivalent | Notes |
|---|---|---|
| `ModPagespeedEnableFilters` | Automatic | All optimizations enabled by default |
| `ModPagespeedDisableFilters rewrite_images` | `--disable-image` | Worker flag |
| `ModPagespeedDisableFilters rewrite_css` | `--disable-css` | Worker flag |
| `ModPagespeedDisableFilters rewrite_javascript` | `--disable-js` | Worker flag |
| `ModPagespeedDisableFilters prioritize_critical_css` | `--disable-html` | Worker flag |
| `ModPagespeedDisallow /pattern` | `pagespeed_disallow /pattern;` | nginx directive |
| `ModPagespeedCacheSizeMb` | `--cache-size` | Worker flag (bytes) |
| `ModPagespeedFileCachePath` | `pagespeed_cache_path` | nginx directive |
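Putting the worker flags together, a 1.x configuration that disabled image rewriting would translate to an invocation like the following (the flag names come from the table above; the socket and cache paths are the ones used in the Docker Compose example later in this guide):

```shell
/usr/bin/factory_worker \
  --socket /shared/pagespeed.sock \
  --cache-path /shared/cache.vol \
  --cache-size 1073741824 \
  --disable-image   # 1.x: ModPagespeedDisableFilters rewrite_images
```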
Deliberately omitted features. Several 1.x filters have no 2.0 equivalent because the 2.0 architecture makes them unnecessary or handles their use case differently:
- `combine_css` / `combine_javascript`: Not needed. HTTP/2 multiplexing eliminates the round-trip cost of multiple small files.
- `lazyload_images`: Replaced with native `loading="lazy"` attribute injection. The worker's HTML transform pipeline automatically adds `loading="lazy"` to `<img>` and `<iframe>` tags, with the LCP candidate image (or first body image as fallback) receiving `fetchpriority="high"` instead. This is enabled by default. The 1.x filter used a JavaScript-based approach; 2.0 uses the browser-native attribute, which is faster and more reliable.
- `defer_javascript`: Not implemented. The `defer` and `async` attributes on `<script>` tags are the standard solution.
- `inline_css` / `inline_javascript`: Replaced by critical CSS injection, which is more targeted (only above-the-fold rules).
- `sprite_images`: Not implemented. HTTP/2 and modern image formats make spriting counterproductive.
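To make the lazy-loading behavior concrete, here is a simplified sketch of the attribute-injection idea (not the worker's implementation, which is more robust than a regex and also handles `<iframe>` and LCP detection): the first body image gets `fetchpriority="high"`, the rest get `loading="lazy"`.

```python
import re

def inject_lazy(html: str) -> str:
    first = True
    def rewrite(m: re.Match) -> str:
        nonlocal first
        tag = m.group(0)
        # Respect tags where the author already chose a loading policy.
        if "loading=" in tag or "fetchpriority=" in tag:
            return tag
        if first:
            attr = 'fetchpriority="high"'  # first-body-image fallback for the LCP candidate
        else:
            attr = 'loading="lazy"'
        first = False
        return tag[:-1] + " " + attr + ">"
    return re.sub(r"<img\b[^>]*>", rewrite, html)

print(inject_lazy('<img src="a.jpg"><img src="b.jpg">'))
# -> <img src="a.jpg" fetchpriority="high"><img src="b.jpg" loading="lazy">
```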
Step-by-step migration
1. Back up your existing configuration
Save your current mod_pagespeed configuration before making any changes:
```shell
# Apache
cp /etc/apache2/mods-enabled/pagespeed.conf ~/pagespeed-1x-backup.conf

# Nginx (ngx_pagespeed)
cp /etc/nginx/nginx.conf ~/nginx-1x-backup.conf
```
2. Install ModPageSpeed 2.0
Deploy the three components using Docker Compose or systemd. The Docker Compose approach is recommended for initial testing:
```yaml
# docker-compose.yml (simplified)
services:
  worker:
    image: modpagespeed/worker:latest
    command: >
      /usr/bin/factory_worker
      --socket /shared/pagespeed.sock
      --cache-path /shared/cache.vol
      --cache-size 1073741824
    volumes:
      - shared:/shared
  nginx:
    image: modpagespeed/nginx:latest
    volumes:
      - shared:/shared
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - '80:80'
volumes:
  shared:
```
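If you prefer systemd for production, a unit for the worker might look like this. The unit name and layout are illustrative; only the `factory_worker` flags are taken from the Compose file above, so adjust paths to wherever your shared volume lives:

```ini
# /etc/systemd/system/pagespeed-worker.service (illustrative)
[Unit]
Description=ModPageSpeed 2.0 factory worker
After=network.target

[Service]
ExecStart=/usr/bin/factory_worker \
    --socket /shared/pagespeed.sock \
    --cache-path /shared/cache.vol \
    --cache-size 1073741824
Restart=on-failure

[Install]
WantedBy=multi-user.target
```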
3. Configure nginx
Create your nginx configuration with the PageSpeed module:
```
load_module /usr/lib/nginx/modules/ngx_pagespeed_module.so;

server {
    listen 80;
    server_name example.com;

    pagespeed on;
    pagespeed_cache_path /shared/cache.vol;
    # Worker socket, license key, and HTML toggle are read
    # automatically from pagespeed-shared.conf (written by the worker)

    # Migrate your Disallow patterns
    pagespeed_disallow /api/;
    pagespeed_disallow /admin/;
    pagespeed_disallow *.woff2;

    location / {
        proxy_pass http://your-backend:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
4. Start the services and verify
```shell
docker compose up -d

# Check the worker health endpoint
socat - UNIX-CONNECT:/shared/pagespeed.sock.health
# Expected: OK 0/128 notifs=0 variants=0 proactive=0 errors=0 cache_entries=0

# Make a test request
curl -I http://localhost/
# Look for: X-PageSpeed: MISS (first request)

# Wait a moment, then request again
curl -I http://localhost/
# Look for: X-PageSpeed: HIT (optimized variant served)
```
Testing your migration
Systematic validation ensures the migration is correct before you route production traffic through it.
Check response headers. Every response through ModPageSpeed 2.0 includes an X-PageSpeed header with either MISS (original content) or HIT (optimized variant). After warming the cache, verify that repeated requests return HIT.
Verify image format negotiation. Send requests with different Accept headers and confirm you receive the correct format:
```shell
# Request AVIF
curl -H "Accept: image/avif,image/webp,*/*" -o /dev/null -w "%{content_type}" http://localhost/image.jpg
# Expected: image/avif

# Request WebP
curl -H "Accept: image/webp,*/*" -o /dev/null -w "%{content_type}" http://localhost/image.jpg
# Expected: image/webp

# Request original
curl -H "Accept: */*" -o /dev/null -w "%{content_type}" http://localhost/image.jpg
# Expected: image/jpeg
```
Confirm CSS and JS minification. Compare content lengths between the origin and the optimized responses. The worker only writes a variant when minification actually reduced the size, so an optimized response is never larger than the original.
Monitor the health endpoint. The worker exposes statistics through its health check socket, reporting active connections, notifications received, variants written, and error counts. Set up monitoring on these counters to track optimization progress and catch issues early.
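The health line shown in step 4 is easy to turn into numeric counters for an exporter. A small helper, assuming the single-line format from the verification step above (`OK 0/128 notifs=... variants=... proactive=... errors=... cache_entries=...`):

```python
def parse_health(line: str) -> dict:
    # "OK 0/128 notifs=0 variants=3 ..." -> status, conns, then key=value pairs
    parts = line.split()
    status, conns = parts[0], parts[1]
    active, limit = (int(x) for x in conns.split("/"))
    stats = {"status": status, "active_conns": active, "conn_limit": limit}
    for kv in parts[2:]:
        key, value = kv.split("=")
        stats[key] = int(value)
    return stats

print(parse_health("OK 0/128 notifs=0 variants=3 proactive=0 errors=1 cache_entries=42"))
```

Alerting on a rising `errors` counter, or on `variants` staying at zero while traffic flows, catches most misconfigurations early.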
Gradual rollout. Start with a single backend server or a canary server block in nginx. Route a fraction of traffic through ModPageSpeed 2.0, compare Core Web Vitals in your RUM data, and expand once you are confident the optimization is correct.
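One way to split traffic, assuming you already run a load-balancing nginx in front of both stacks (the upstream names and hostnames here are placeholders for your own), is a weighted upstream pool:

```
upstream canary_pool {
    server legacy-stack:80 weight=9;        # 90%: existing unoptimized path
    server modpagespeed-nginx:80 weight=1;  # 10%: through ModPageSpeed 2.0
}

server {
    listen 80;
    location / {
        proxy_pass http://canary_pool;
    }
}
```

Raise the canary weight as your RUM data confirms the optimized variants are correct.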
FAQ
Can I run 1.x and 2.0 side by side? Yes. Deploy them on different servers or different nginx server blocks. They share no state, so there is no conflict.
Will my existing cache be preserved? No. ModPageSpeed 2.0 uses Cyclone, a completely different cache format from the 1.x file cache. The cache will be cold on first start and warm up as traffic flows through.
Do I need to change my HTML or application code? No. All optimization is transparent at the reverse-proxy level. Your application serves the same responses it always has.
What about Apache support? ModPageSpeed 2.0 uses nginx internally as its caching proxy, but it deploys in front of any HTTP origin server — including Apache. Run the Docker Compose setup with your Apache server as the backend origin. If you prefer to keep using mod_pagespeed 1.x directly in Apache, it continues to work.
What happened to specific filters? The configuration mapping table above covers the major filters. In general, filters that work around HTTP/1.1 limitations (combining, spriting, inlining) have been intentionally dropped because HTTP/2 makes them unnecessary or counterproductive. Filters that perform genuine optimization (image transcoding, CSS/JS minification, critical CSS) are built into the worker’s content-type dispatch.