ModPageSpeed 2.0 now works with ASP.NET Core
By Otto Schaaf
ModPageSpeed 2.0 has been nginx-only until now. The nginx interceptor does zero-copy cache serving, 103 Early Hints, and sub-millisecond variant selection. It is the fastest integration we have. But it requires nginx.
If your application runs on ASP.NET Core, adding nginx as a reverse proxy just to get page optimization is a significant architectural change. You inherit nginx’s configuration language, its process model, and its deployment story. For teams that chose Kestrel deliberately, that is a hard sell.
So we built a second integration path: an ASP.NET Core middleware that calls the same optimization pipeline through a C API.
The approach
The optimization libraries in ModPageSpeed 2.0 — image transcoding, CSS extraction, HTML processing, cache management — are all C++. They compile into a shared library (`libpagespeed.so`) with a stable C API. The ASP.NET middleware calls this library via P/Invoke. Same code, same algorithms, same cache format. The difference is where the integration happens: inside your ASP.NET pipeline instead of inside nginx.
The middleware ships as a NuGet package (`WeAmp.PageSpeed.AspNetCore`). It registers a singleton cache, an HTML processor, and a background notification service. The middleware itself sits in your request pipeline and does two things: serve cached variants on HIT, and process HTML responses on MISS.
Quickstart
Here is a minimal `Program.cs`:

```csharp
using WeAmp.PageSpeed.AspNetCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddPageSpeed(options =>
{
    options.Cache.VolumePath = "/data/cache.vol";
    options.Cache.VolumeSizeBytes = 256 * 1024 * 1024;
    options.Worker.SocketPath = "/data/pagespeed.sock";
});

var app = builder.Build();

app.UsePageSpeed();
app.UseStaticFiles();
app.UseRouting();

app.Run();
```
`AddPageSpeed()` registers all services. `UsePageSpeed()` inserts the middleware. That is the entire integration.
For the full stack with image optimization, you need the worker alongside your ASP.NET app. The demo ships with a Docker Compose file:
```yaml
services:
  worker:
    build:
      target: worker-runtime
    volumes:
      - shared:/shared
  aspnet:
    build:
      target: aspnet-runtime
    environment:
      PAGESPEED_CACHE_PATH: /shared/cache.vol
      PAGESPEED_SOCKET_PATH: /shared/pagespeed.sock
    volumes:
      - shared:/shared
    depends_on:
      - worker
volumes:
  shared:
```
The shared volume is the key. Both processes open the same Cyclone cache file via memory-mapped I/O. The ASP.NET middleware writes originals, the worker writes optimized variants, and each process sees the other’s writes immediately.
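The cross-process visibility can be illustrated with a small Python sketch: two independent mappings of the same file stand in for the middleware process and the worker process. This shows only the mechanism; the actual Cyclone cache layout is internal to `libpagespeed`.

```python
import mmap
import os
import tempfile

# Create and pre-size a stand-in "cache volume" file.
path = os.path.join(tempfile.mkdtemp(), "cache.vol")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# "Middleware" handle: one mapping of the file.
fd_a = os.open(path, os.O_RDWR)
map_a = mmap.mmap(fd_a, 4096)

# "Worker" handle: a second, independent mapping of the same file.
fd_b = os.open(path, os.O_RDWR)
map_b = mmap.mmap(fd_b, 4096)

# A write through one mapping is visible through the other immediately,
# with no flush or IPC: both map the same shared pages.
map_a[0:8] = b"original"
print(map_b[0:8])  # → b'original'
```

Both mappings are shared (the default for writable `mmap` on Unix), which is what lets the middleware write originals and read the worker's optimized variants without any explicit synchronization of file contents.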
To try it:
```shell
cd samples/aspnetcore/samples/DemoSite
./run-demo.sh               # Full stack: ASP.NET + worker
./run-demo.sh --standalone  # HTML processing only, no worker
```
How it works
The request flow through the middleware:
1. **Classify.** The middleware reads the `Accept`, `User-Agent`, `Save-Data`, and `Accept-Encoding` headers and calls into the native library to produce a 32-bit capability mask. This is the same classification logic as the nginx module: same mask, same bits.
2. **Cache lookup.** The middleware calls `ReadBestAlternate(url, hostname, mask)` on the shared Cyclone cache. If a variant exists that matches (or closely matches) the client's capabilities, that is a HIT.
3. **HIT path.** The cached content is served directly. The middleware reads any stored early hints (preload URLs, preconnect origins) and adds them as `Link` response headers. The response gets `X-PageSpeed: HIT`.
4. **MISS path.** The middleware lets the request proceed through the rest of the pipeline. It captures the response body, checks that it is HTML within size limits, and runs it through the native HTML processor. The processor extracts critical CSS, adds lazy-load attributes, injects image dimensions, and collects preload hints. The processed HTML and the early hints are written to the cache, and the worker is notified asynchronously to generate image variants and compressed alternates.
5. **Worker notification.** A bounded channel queues notifications without blocking the request path. The background service drains the channel and sends each notification to the worker over a Unix socket. If the worker is not running (standalone mode), notifications are silently dropped.
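The non-blocking notification step can be sketched in Python with a bounded queue and a background drainer. The names and the queue size are illustrative (the middleware uses a .NET bounded channel); the point is the drop-on-full behavior that keeps the request path from ever blocking.

```python
import queue
import threading

notifications = queue.Queue(maxsize=2)  # bounded: the request path never blocks
delivered, dropped = [], 0

def enqueue(url):
    """Called on the request path after a MISS; must never block."""
    global dropped
    try:
        notifications.put_nowait(url)
    except queue.Full:
        dropped += 1  # queue full: silently drop, as on a missing worker

def drain():
    """Background service: forward each notification to the worker."""
    while True:
        url = notifications.get()
        if url is None:
            break
        delivered.append(url)  # stand-in for the Unix-socket send
        notifications.task_done()

t = threading.Thread(target=drain, daemon=True)
# Fill the queue before the drainer starts so the overflow path is visible.
for u in ["/a", "/b", "/c"]:
    enqueue(u)
t.start()
notifications.join()
notifications.put(None)  # sentinel to stop the drainer
t.join()
print(delivered, dropped)  # → ['/a', '/b'] 1
```

The third notification is lost rather than queued unboundedly or awaited, which is the same trade-off the middleware makes: a dropped notification only delays optimization until the next miss, while a blocked request path would hurt every response.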
Standalone vs full stack
The middleware supports two deployment modes.
**Standalone** — no worker, no image optimization. The middleware processes HTML only: critical CSS inlining, lazy-load attributes, explicit image dimensions, LCP preload hints, preconnect hints for third-party origins. These are response-time improvements that do not require background processing. Set `Worker.SocketPath` to `null` (or omit it) and the middleware operates independently.
**Full stack** — ASP.NET middleware plus worker. The middleware handles HTML processing and cache serving. The worker handles image transcoding (WebP, AVIF, SVG), viewport-based resizing, Save-Data quality variants, and pre-compressed alternates (gzip, brotli). This is the same optimization pipeline as the nginx integration, with the same cache format and the same variant selection logic.
The standalone mode is useful for applications where image optimization is handled elsewhere (a CDN, a build step) but HTML processing is not. Adding `UsePageSpeed()` gets you critical CSS and resource hints without any infrastructure changes.
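For standalone mode, the configuration can also live in `appsettings.json`. A minimal sketch, assuming the options bind from a `PageSpeed` section (the section name is an assumption; the option names mirror the quickstart above):

```json
{
  "PageSpeed": {
    "Cache": {
      "VolumePath": "/data/cache.vol",
      "VolumeSizeBytes": 268435456
    }
  }
}
```

`Worker.SocketPath` is simply omitted, which is what puts the middleware in standalone mode.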
Nginx vs middleware
An honest comparison:
| | Nginx interceptor | ASP.NET middleware |
|---|---|---|
| Cache serving | Zero-copy via mmap’d ngx_buf_t | Memory copy into response stream |
| 103 Early Hints | Sent before proxying on MISS | Not supported (Kestrel limitation) |
| Integration | Reverse proxy in front of origin | NuGet package in your pipeline |
| Configuration | nginx.conf directives | C# options / appsettings.json |
| Deployment | Separate nginx process | Same process as your app |
| HTML processing | Same pipeline | Same pipeline |
| Image optimization | Same worker | Same worker |
| Cache format | Same Cyclone cache | Same Cyclone cache |
| Variant selection | Same 32-bit mask | Same 32-bit mask |
| Recompilation | Requires nginx headers | dotnet add package |
The nginx interceptor wins on serving performance. Zero-copy mmap means cached variants go from disk to socket with no intermediate buffer. The middleware copies data through the managed memory pipeline — still fast, but not zero-copy. The nginx module also supports 103 Early Hints on cache misses, sending preload headers before the origin response arrives. Kestrel does not currently expose this capability to middleware.
The middleware wins on integration simplicity. It is a NuGet package. You add it to your pipeline in two lines of C#. There is no separate process to configure, no nginx.conf to maintain, and no reverse proxy to troubleshoot. Configuration lives in appsettings.json with hot-reload support via IOptionsMonitor.
Both integrations use the same optimization pipeline, the same cache, the same worker, and the same variant selection. The difference is the serving layer, not the optimization layer.
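The shared variant-selection idea can be sketched in Python with an invented bit layout. The real 32-bit mask's bit assignments are internal to ModPageSpeed; these bits, headers, and variant records are illustrative only.

```python
# Hypothetical capability bits (the real layout is not public).
WEBP, AVIF, BROTLI, SAVE_DATA = 1 << 0, 1 << 1, 1 << 2, 1 << 3

def classify(headers):
    """Fold Accept / Accept-Encoding / Save-Data into one integer mask."""
    mask = 0
    accept = headers.get("Accept", "")
    if "image/webp" in accept:
        mask |= WEBP
    if "image/avif" in accept:
        mask |= AVIF
    if "br" in headers.get("Accept-Encoding", ""):
        mask |= BROTLI
    if headers.get("Save-Data", "").lower() == "on":
        mask |= SAVE_DATA
    return mask

def best_variant(variants, mask):
    """Pick the most specific stored variant the client can accept:
    every bit a variant requires must be present in the client mask."""
    ok = [v for v in variants if v["requires"] & ~mask == 0]
    return max(ok, key=lambda v: bin(v["requires"]).count("1"))["name"]

variants = [
    {"name": "original.jpg", "requires": 0},
    {"name": "image.webp",   "requires": WEBP},
    {"name": "image.avif",   "requires": AVIF | SAVE_DATA},
]
mask = classify({"Accept": "image/avif,image/webp,*/*",
                 "Accept-Encoding": "gzip, br",
                 "Save-Data": "on"})
print(best_variant(variants, mask))  # → image.avif
```

Because both the nginx interceptor and the middleware compute the mask with the same native code, a variant written by the worker is selected identically no matter which serving layer handles the request.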
What’s next
The C API is language-agnostic. `libpagespeed.so` exposes plain C functions — classify a request, read from cache, process HTML, notify the worker. Any language with FFI support can call it. ASP.NET Core is the first integration beyond nginx, but the API does not assume .NET. A Go middleware, a Rust middleware, or a Python WSGI wrapper would call the same functions.
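As a sketch of what such a binding could look like from Python via `ctypes`: of these names, only `ReadBestAlternate` appears in this article; `ClassifyRequest` and both signatures are assumptions about the C API, not its documented surface.

```python
import ctypes

SIGNATURES = {
    # name: (argtypes, restype) -- hypothetical shapes for illustration
    "ClassifyRequest":   ([ctypes.c_char_p] * 4, ctypes.c_uint32),
    "ReadBestAlternate": ([ctypes.c_char_p, ctypes.c_char_p, ctypes.c_uint32],
                          ctypes.c_void_p),
}

def bind(lib):
    """Attach argtypes/restype so ctypes marshals each call correctly."""
    for name, (argtypes, restype) in SIGNATURES.items():
        fn = getattr(lib, name)
        fn.argtypes, fn.restype = argtypes, restype
        yield name, fn

try:
    pagespeed = ctypes.CDLL("libpagespeed.so")
    bound = dict(bind(pagespeed))
except OSError:
    bound = {}  # library not installed; the binding shape still stands

print(sorted(SIGNATURES))
```

A Go or Rust binding would follow the same pattern through cgo or `bindgen`: declare the C signatures once, then wrap them in idiomatic middleware for that stack.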
If you are running ASP.NET Core and want to try it, the demo site runs with a single `./run-demo.sh`. Add `?bypass=1` to any URL to compare optimized and original output side by side.