mod_pagespeed Alternatives in 2026
If you landed here, you are probably running mod_pagespeed or ngx_pagespeed on a server that is overdue for an OS upgrade, and you just discovered that the module does not compile against your new nginx version. Or you are evaluating web optimization tools for the first time and wondering whether the Google project is still a viable option.
Short answer: the original Google project is effectively unmaintained. Two actively maintained successors exist, both built by the former maintainer. Here is the full picture.
The current state of Google’s mod_pagespeed
Google open-sourced mod_pagespeed in 2010. It was a full-stack optimization module that could rewrite HTML, transcode images, minify CSS and JavaScript, inline critical resources, combine files, and defer loading. All automatically, with no application code changes. The nginx port (ngx_pagespeed) followed in 2013.
The last meaningful development on the Google project happened around 2020-2021. The GitHub repositories are still public, but there have been no releases, no patch merges, and no maintainer activity since. Issues pile up. Pull requests go unreviewed. The CI pipelines have not run in years.
For Apache users, the existing module still functions if you can get it to compile. The situation is worse for nginx. ngx_pagespeed must be compiled against exact nginx source headers. It is a static module, not a dynamic one. Every nginx version upgrade requires downloading the matching source tree and recompiling the module from scratch. Modern distributions (Ubuntu 24.04, Debian 12, Rocky 9) ship nginx versions that the last ngx_pagespeed release was never built or tested against. Headers have changed. APIs have shifted. Getting it to compile is an exercise in patching, and even when it builds, you are running untested code against a production web server.
What broke and why
The project was designed for a team. At its center sits RewriteDriver: over 2,000 lines of C++ orchestrating 60+ filters through a pipeline where ordering matters, filters interact in subtle ways, and the configuration matrix spans hundreds of possible combinations. This is the kind of system that takes years and dedicated engineers to get right, and continuous effort to keep working.
Maintaining it means tracking changes across two web servers (Apache and nginx), testing the full filter matrix against new compiler versions, new OS releases, new TLS libraries, and new nginx APIs. When Google’s team moved on, that maintenance surface did not shrink. The community could file issues and submit patches, but nobody had the context to safely review changes to the filter pipeline. The interdependencies were too complex for drive-by contributions.
There was an attempt to fix this. Google and We-Amp jointly proposed mod_pagespeed as an Apache Software Foundation incubator project, hoping to build a broader maintainer community. The incubation failed. We-Amp remained the only active contributor, and no other parties stepped up to share the maintenance burden. Without a team, the project stalled.
This is not a criticism. The engineers who built mod_pagespeed solved hard problems and shipped a system that optimized billions of pages. The codebase reflects that ambition. It just requires a team to sustain.
Two successors, two approaches
I maintained the original project for years. When Google moved on, I continued the work commercially under We-Amp B.V. The result is two actively maintained products that take different approaches to the same problem.
mod_pagespeed 1.1: the direct continuation
mod_pagespeed 1.1 is the maintained fork of the original codebase. If you are running Google’s mod_pagespeed today and want the closest drop-in replacement, this is it.
What stayed the same: The filter architecture. The Apache and nginx module integration. The configuration directives. Your existing ModPagespeed* directives work. Your Disallow patterns work. The optimization pipeline you know (rewrite_images, rewrite_css, rewrite_javascript, prioritize_critical_css) is the same pipeline, actively maintained and tested.
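Concretely, an existing Apache configuration carries over as-is. A representative fragment using the original module's directives (the directive names are from Google's mod_pagespeed; the specific filter list and Disallow pattern here are just examples):

```apache
# Existing mod_pagespeed configuration works unchanged under 1.1.
ModPagespeed on
ModPagespeedEnableFilters rewrite_images,rewrite_css,rewrite_javascript
ModPagespeedEnableFilters prioritize_critical_css
ModPagespeedDisallow "*/admin/*"
```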
What changed:
- Modernized build and dependencies. The build system was migrated to Bazel and every dependency updated to current versions. No more pinned-to-2018 libraries or bespoke build scripts.
- Actively maintained against current distributions. Compiled and tested on Ubuntu 24.04, Debian 12, Rocky 9, and current nginx/Apache releases. The compatibility issues that plague the abandoned Google version do not exist here.
- Multi-port. Beyond the original Apache and nginx, 1.1 adds Envoy and IIS ports, broadening the server coverage that Google never pursued. Distribution packages for Apache and nginx are in final testing; contact us for early access.
- Cyclone cache. The file-based cache from the original was replaced with Cyclone, the same variant-aware, memory-mapped cache that powers 2.0. Faster lookups, proper LRU eviction, and shared cache format between 1.1 and 2.0.
- Unified licensing and admin console. A web-based admin console for cache management, statistics, and license activation. Ed25519 token-based licensing with auto-renewal.
Best for: Teams that already run mod_pagespeed or ngx_pagespeed, want a maintained version, and prefer a native server module over a reverse proxy. Apache users in particular: 1.1 integrates directly into Apache’s output filter chain exactly like the original.
ModPageSpeed 2.0: the ground-up rebuild
ModPageSpeed 2.0 keeps the optimization libraries from the original (the image codecs, the CSS minifier, the JavaScript minifier, the HTML parser) but replaces everything above them. RewriteDriver, the filter pipeline, the resource manager, the cache coordination: all rebuilt in C++23.
The new architecture separates the system into three components:
The interceptor. A dynamic nginx component (no recompilation required) that classifies each request into a 32-bit capability mask encoding image format support (WebP, AVIF), viewport class, pixel density, Save-Data preference, and transfer encoding. It checks the Cyclone cache for a matching variant and serves it via mmap. Zero-copy, no allocations, no processing in the request path.
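The mask itself is straightforward bit-packing. A minimal sketch of the idea in Python (the bit positions and the exact header checks here are illustrative assumptions, not the real interceptor's 32-bit layout, which also encodes viewport class and transfer encoding):

```python
# Assumed bit layout for illustration only.
WEBP      = 1 << 0   # Accept header advertises image/webp
AVIF      = 1 << 1   # Accept header advertises image/avif
SAVE_DATA = 1 << 2   # Save-Data: on
HIDPI     = 1 << 3   # device pixel ratio >= 2 (Sec-CH-DPR client hint)

def classify(headers: dict) -> int:
    """Fold request headers into a single integer capability mask."""
    mask = 0
    accept = headers.get("Accept", "")
    if "image/webp" in accept:
        mask |= WEBP
    if "image/avif" in accept:
        mask |= AVIF
    if headers.get("Save-Data", "").lower() == "on":
        mask |= SAVE_DATA
    if float(headers.get("Sec-CH-DPR", "1")) >= 2:
        mask |= HIDPI
    return mask

mask = classify({"Accept": "image/avif,image/webp,*/*", "Save-Data": "on"})
```

Because the mask is a plain integer, it doubles as the cache key suffix for the variant, which is what makes the exact-match lookup a single hash probe.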
The worker. A separate C++ process that does the actual optimization. When nginx records a cache miss, it sends a fire-and-forget notification over a Unix socket. The worker reads the original content, runs the appropriate optimization, and writes the optimized variant back to the cache.
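The fire-and-forget property is what keeps the request path non-blocking: the server sends a datagram and never waits for a reply. A small Python sketch of the pattern (the message format is hypothetical; the real worker protocol is not documented here):

```python
import os
import socket
import tempfile

# The worker binds a datagram Unix socket and waits for cache-miss notices.
sock_path = os.path.join(tempfile.mkdtemp(), "worker.sock")
worker = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
worker.bind(sock_path)

# The server side fires a notification and immediately moves on; it never
# waits for a reply, so a slow or absent worker cannot stall a request.
notifier = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
notifier.sendto(b"MISS /img/hero.jpg mask=0x07", sock_path)
notifier.close()

# The worker picks the job up asynchronously, on its own schedule.
msg, _ = worker.recvfrom(4096)
```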
Cyclone cache. The same variant-aware disk cache used in 1.1, shared between the interceptor and worker via memory-mapped I/O. Stores multiple versions of the same resource under a single URL key, each identified by its capability mask. Lookups use best-fit fallback: if no exact match exists, the cache degrades gracefully to the closest available variant.
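The best-fit fallback can be sketched in a few lines. This is an assumed reading of the semantics, not Cyclone's actual code: prefer an exact capability-mask match, otherwise serve the richest cached variant whose required capabilities are a subset of what the client supports.

```python
def lookup(variants: dict, client_mask: int):
    """variants maps capability mask -> cached payload."""
    if client_mask in variants:
        return variants[client_mask]          # exact match: one hash probe
    # Only variants the client can actually render: mask must be a subset.
    usable = [m for m in variants if m & ~client_mask == 0]
    if not usable:
        return None
    # "Closest" here = the candidate exercising the most client capabilities.
    best = max(usable, key=lambda m: bin(m).count("1"))
    return variants[best]

# Three cached variants of one URL, keyed by the mask they require.
cache = {0b000: "original.jpg", 0b001: "hero.webp", 0b011: "hero.avif"}
```

A client advertising WebP and Save-Data (`0b101`) finds no exact variant and degrades to the WebP one; a client with no capabilities gets the original.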
The practical effect: the first request gets the original content (X-PageSpeed: MISS). The worker optimizes it in the background. Subsequent requests get the optimized version (X-PageSpeed: HIT). Nginx never blocks on optimization work.
Best for: New deployments. Teams that want a reverse proxy architecture (Docker Compose in front of any HTTP origin). Sites that benefit from the 32-bit capability mask, which generates up to 36 image variants per source image, automatically matched to each visitor’s browser, viewport, density, and data-saving preference. Also available as ASP.NET Core middleware for .NET applications.
Choosing between 1.1 and 2.0
| | mod_pagespeed 1.1 | ModPageSpeed 2.0 |
|---|---|---|
| Architecture | Native server module (in-process) | Reverse proxy + async worker |
| Web servers | Apache, nginx, Envoy, IIS (packages in final testing) | Any HTTP origin (Docker Compose) |
| Configuration | ModPagespeed* directives (familiar) | Worker flags + nginx directives (new) |
| Optimization model | Synchronous, in the request path | Asynchronous, background worker |
| Image variants | Format negotiation (WebP, AVIF) | 32-bit capability mask (format × viewport × density × Save-Data = up to 36 variants) |
| Cache | Cyclone (shared) | Cyclone (shared) |
| Critical CSS | Filter-based extraction | Heuristic extraction (sub-5ms, no headless browser) |
| ASP.NET Core | No | Yes (NuGet middleware) |
| Migration from Google’s mod_pagespeed | Drop-in (same directives) | Config mapping required (migration guide) |
| Pricing | Per-server flat rate (see pricing) | Per-server flat rate (see pricing) |
If you are already running mod_pagespeed on Apache and it works, 1.1 is the path of least resistance. Same module, same config, maintained and tested on current distributions.
If you are running ngx_pagespeed and tired of recompiling against every nginx update, 2.0 deploys as a reverse proxy via Docker Compose today (no nginx module at all). 1.1 packages for nginx are in final testing.
If you are starting fresh, 2.0 is the more capable architecture. The async worker model means zero latency overhead in the request path, and the capability mask generates significantly more variants per image than format-only negotiation.
Other alternatives
Neither 1.1 nor 2.0 is the only option. Depending on your needs, other tools may be a better fit.
Image-only optimization. If your bottleneck is images and nothing else, imgproxy and thumbor are lighter self-hosted options. They handle format conversion and resizing well. They do not touch CSS, JavaScript, or HTML.
CDN-based optimization. Cloudflare Polish, Cloudinary, and imgix optimize images at the edge. They work well for globally distributed audiences and require no server-side setup. The trade-off is per-request pricing (which scales linearly with traffic), vendor lock-in via proprietary URL schemes, and routing your content through third-party infrastructure. A site serving 10 million image requests per month pays roughly $3,500/month in CDN transformation and bandwidth fees (based on typical published rates of $1-5 per 10,000 transformations plus $0.05-0.20/GB bandwidth) versus a flat per-server rate for self-hosted optimization (see pricing). See our detailed cost comparison for the full breakdown.
Manual optimization. Build scripts that run imagemin, cssnano, and terser at deploy time. This works for static sites. It does not help with dynamic content, user-uploaded images, or content from a CMS. And it does not adapt to client capabilities. Every visitor gets the same assets regardless of whether their browser supports AVIF or their connection is 3G.
The gap both products fill: automatic, full-stack optimization (images + CSS + JS + HTML + critical CSS) that runs on your infrastructure, requires no application code changes, and costs a flat rate regardless of traffic volume.
Getting started
ModPageSpeed 2.0 is available now with a 14-day free trial, no credit card required. Deploy with Docker Compose in front of your existing origin:
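A minimal compose file looks roughly like this; the image name, environment variable, and ports below are placeholders, so consult the getting started guide for the real values:

```yaml
services:
  modpagespeed:
    image: <modpagespeed-2.0-image>     # placeholder: see the getting started guide
    ports:
      - "8080:8080"
    environment:
      - ORIGIN_URL=http://origin:80     # placeholder: your existing backend
```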
```shell
docker compose up -d
curl -I http://localhost:8080/
# X-PageSpeed: MISS → first request, original content
# X-PageSpeed: HIT  → subsequent requests, optimized
```
See the getting started guide for full setup instructions, or the migration guide if you are moving from Google’s mod_pagespeed.
mod_pagespeed 1.1 distribution packages for Apache and nginx are in final testing. Contact us for early access or to discuss your migration.
Both products use flat per-server pricing. See the pricing page for current rates.