Troubleshooting

Diagnose and fix common ModPageSpeed 2.0 issues.

This page covers common issues you may encounter with ModPageSpeed 2.0 and how to diagnose and resolve them.

Cache Miss on Every Request

Symptom: Every response has X-PageSpeed: MISS, even for URLs that have been requested before.

Possible causes:

  1. Cache file not shared. The nginx module and Factory Worker must use the same cache file path. Verify that pagespeed_cache_path in your nginx config matches --cache-path in the worker’s startup command.

    # Check nginx config
    nginx -T 2>/dev/null | grep pagespeed_cache_path
    
    # Check worker process
    ps aux | grep factory_worker
  2. File permissions. The cache file must be readable and writable by both the nginx workers and the Factory Worker. Check that the cache file has mode 666 and its parent directory has mode 777:

    ls -la /var/lib/pagespeed/

    Fix with:

    sudo chmod 777 /var/lib/pagespeed
    sudo chmod 666 /var/lib/pagespeed/cache.vol
  3. Memory-mapped directory not enabled. Both processes must open the cache with mmap directory sharing. This is automatic in the worker and nginx module, but if you see persistent misses, the processes may have separate in-memory directories. Restart both to re-sync:

    sudo systemctl restart pagespeed-worker
    sudo systemctl reload nginx
  4. Cache size too small. If the cache is full, LRU eviction removes older entries before they can be served. Check cache utilization via the management socket:

    echo "STATS" | socat - UNIX-CONNECT:/var/lib/pagespeed/pagespeed.sock.mgmt

    Look at cache.size relative to your --cache-size setting. If they are close, increase the cache size.
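
The STATS response can also be checked programmatically. A minimal Python sketch, assuming the response is plain key: value lines as the examples on this page suggest (the exact wire format may differ):

```python
def parse_stats(raw: str) -> dict:
    """Parse a STATS response assumed to be 'key: value' lines."""
    stats = {}
    for line in raw.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        stats[key.strip()] = value.strip()
    return stats

def cache_utilization(stats: dict, cache_size_bytes: int) -> float:
    """Return cache.size as a fraction of the configured --cache-size."""
    return int(stats.get("cache.size", 0)) / cache_size_bytes

# Example with a canned response:
sample = "cache.size: 900000000\ncache.entries: 41231\nerrors: 0"
print(f"{cache_utilization(parse_stats(sample), 1_000_000_000):.0%}")  # 90%
```

If utilization stays above roughly 90% under normal load, eviction pressure is the likely cause of the misses.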

Worker Not Processing Content

Symptom: The cache has original content (X-PageSpeed: HIT) but images are not transcoded, CSS/JS are not minified, and no optimized variants appear.

Possible causes:

  1. Worker not running. Verify the worker process is active:

    # systemd
    sudo systemctl status pagespeed-worker
    
    # Docker
    docker compose ps worker
  2. Socket path mismatch. The worker writes its socket path to pagespeed-shared.conf (next to the cache file), which nginx reads automatically. Verify the shared config file exists and contains the correct socket path:

    # Check the shared config and socket file
    cat /var/lib/pagespeed/pagespeed-shared.conf
    ls -la /var/lib/pagespeed/pagespeed.sock
  3. Socket permissions. If the socket exists but nginx cannot connect, the socket may not be world-writable. Ensure the worker runs with UMask=0000 (systemd) or umask 0000 (Docker).

  4. Content type disabled. The worker may have processing disabled for specific content types. Check for --disable-html, --disable-css, --disable-js, or --disable-image flags in the worker’s startup command.

  5. Content too large. The worker silently skips content exceeding size limits. Check worker logs for “too large” warnings:

    sudo journalctl -u pagespeed-worker | grep "too large"

    Increase limits with --max-html-size, --max-css-size, --max-js-size, or --max-image-size as needed.

Images Not Converting to WebP/AVIF

Symptom: Image requests with Accept: image/webp still return the original JPEG or PNG.

Possible causes:

  1. Worker has not processed yet. After the first request (cache miss), the worker optimizes asynchronously. The first few requests may serve the original. Wait a moment and request again.

  2. Image too large. Images larger than --max-image-size (default 10 MB) are skipped. Check worker logs:

    sudo journalctl -u pagespeed-worker | grep "Image too large"
  3. Image transcoding disabled. Verify --disable-image is not set.

  4. Transcoding failed. Some images (corrupted, unusual color profiles, very large dimensions) may fail to transcode. Check for error messages:

    sudo journalctl -u pagespeed-worker | grep -i "transcode\|failed\|error"
  5. Decoded pixel buffer too large. The worker enforces a 50 MB limit on decoded pixel buffers to prevent out-of-memory conditions. A 10000x10000 RGBA image decodes to ~400 MB and will be rejected. There is no configuration override for this limit.

  6. Proactive variants not enabled. Without --proactive-image-variants, the worker only produces the single format matching the notification mask. WebP variants are only created when a WebP-capable client triggers the first notification.
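
The notification mask's exact encoding is internal to the worker, but the idea of deriving format capability from the Accept header can be sketched as follows (the bit values and function name are hypothetical, for illustration only):

```python
# Hypothetical bit values -- the worker's real mask encoding is internal.
WEBP = 1 << 0
AVIF = 1 << 1

def capability_mask(accept_header: str) -> int:
    """Derive an image-capability bitmask from a request's Accept header."""
    mask = 0
    types = {t.split(";")[0].strip() for t in accept_header.split(",")}
    if "image/webp" in types:
        mask |= WEBP
    if "image/avif" in types:
        mask |= AVIF
    return mask

print(capability_mask("image/avif,image/webp,*/*"))  # 3
print(capability_mask("*/*"))                        # 0
```

This is why, without proactive variants, a cache populated only by non-WebP clients never gains a WebP entry: no notification ever carries the WebP bit.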

Debug Logging

Enable debug-level logging to see detailed processing information:

# systemd: edit the service file
sudo systemctl edit pagespeed-worker --full
# Change --log-level info to --log-level debug

# Docker: set environment variable or modify command
docker compose exec worker factory_worker --log-level debug ...

Debug logging shows:

  • Every notification received (URL, content type, capability mask)
  • Deduplication decisions (which notifications are skipped)
  • Cache reads and writes (key, size, success/failure)
  • Image decode/encode details (format, dimensions, output size)
  • CSS/JS minification results (original size vs. minified size)
  • HTML scanning and critical CSS extraction details

For JSON log format (recommended for parsing):

factory_worker --log-format json --log-level debug

Parse JSON logs with jq:

sudo journalctl -u pagespeed-worker -o cat | jq 'select(.level == "ERROR")'
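
If jq is unavailable, the same triage works in Python. A small sketch, assuming each log line is a JSON object with level and message fields (the same assumption the jq filter above makes):

```python
import json
from collections import Counter

def count_errors(lines):
    """Count ERROR-level log lines by message, skipping non-JSON lines."""
    counts = Counter()
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # journald prefixes or partial lines
        if entry.get("level") == "ERROR":
            counts[entry.get("message", "")] += 1
    return counts

sample = [
    '{"level": "ERROR", "message": "transcode failed"}',
    '{"level": "INFO", "message": "variant written"}',
    'not json',
    '{"level": "ERROR", "message": "transcode failed"}',
]
print(count_errors(sample).most_common())  # [('transcode failed', 2)]
```

Piping journalctl -o cat output into this script groups recurring failures, which is handy when one bad asset floods the log.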

Management Socket Diagnostics

The management socket provides real-time insight into the worker’s state.

Check Overall Health

echo "STATS" | socat - UNIX-CONNECT:/var/lib/pagespeed/pagespeed.sock.mgmt

Look for:

  • High errors count — Processing failures. Check logs for details.
  • cache.entries is 0 — Cache may not be opening correctly.
  • notifications.received is 0 — The worker is not receiving notifications from nginx. Check socket connectivity.
  • variants.written is 0 but notifications.received is high — The worker receives notifications but fails to write variants. Check for permission issues or content processing errors.

Purge and Re-test a Specific URL

To force the worker to re-process a URL:

# Purge all variants
echo "PURGE localhost /images/photo.jpg" | socat - UNIX-CONNECT:/var/lib/pagespeed/pagespeed.sock.mgmt

# Request the URL again (triggers a cache miss and new notification)
curl -H "Accept: image/webp,*/*" http://localhost/images/photo.jpg -o /dev/null -w "%{size_download}\n"

# Wait a moment for the worker to process, then request again
sleep 2
curl -H "Accept: image/webp,*/*" http://localhost/images/photo.jpg -o /dev/null -w "%{size_download}\n"

If the second request returns a smaller size, the worker is processing correctly for that URL.

Cannot Connect to Management Socket

If socat or Python fails to connect to the management socket:

# Verify the socket file exists
ls -la /var/lib/pagespeed/pagespeed.sock.mgmt

# Verify the worker is running
sudo systemctl status pagespeed-worker

# Check socket permissions
stat /var/lib/pagespeed/pagespeed.sock.mgmt

The management socket is created when the worker starts and removed when it shuts down. If the socket file does not exist, the worker is not running or failed during initialization.
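
As an alternative to socat, a minimal Python connectivity check looks like this. It assumes the socket speaks a newline-terminated command protocol and closes the connection after replying, as the socat examples on this page suggest:

```python
import socket

def mgmt_query(sock_path: str, command: str, timeout: float = 5.0) -> str:
    """Send one command to the management socket and return the response."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect(sock_path)  # raises FileNotFoundError / ConnectionRefusedError
        s.sendall(command.encode() + b"\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closed the connection after replying
                break
            chunks.append(data)
        return b"".join(chunks).decode()

# Usage (assumes the worker is running):
# print(mgmt_query("/var/lib/pagespeed/pagespeed.sock.mgmt", "STATS"))
```

A FileNotFoundError here means the socket file does not exist; ConnectionRefusedError means the file exists but no process is listening on it, typically a stale socket from a crashed worker.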

Common Error Messages

"Failed to open cache at …"

The worker cannot open or create the cache file. Verify the directory exists and has write permissions:

sudo mkdir -p /var/lib/pagespeed
sudo chmod 777 /var/lib/pagespeed

“Failed to bind to …”

The Unix socket path is already in use by another process, or a stale socket file exists from a previous crash. The worker removes stale socket files on startup, but if another instance is running, you will see this error. Stop the other instance first.

"Max connections reached"

The worker is at its connection limit. This can happen under heavy load when nginx sends many notifications simultaneously. Increase the limit:

factory_worker --max-connections 256

“Client buffer exceeded max size”

A single notification is larger than --max-buffer-size (default 1 MB). This typically indicates a malformed message. If you have legitimate very long URLs, increase the buffer size.

"URL too long"

The notification URL exceeds --max-url-length (default 8192 bytes). Increase the limit if your site uses very long URLs, or consider whether the long URL is intentional.

Browser Analysis Issues

Browser analysis requires headless Chrome and is disabled by default. Enable it with --enable-browser-analysis. These issues only apply when browser analysis is active.

Chrome Not Found

Symptom: Worker logs Failed to start Chrome or chrome binary not found on startup with browser analysis enabled.

Fix: The worker looks for Chrome at the path specified by --chrome-binary (default: /usr/bin/chrome-headless-shell). Verify the binary exists and is executable:

ls -la /usr/bin/chrome-headless-shell
# or wherever your Chrome is installed

# If Chrome is elsewhere, specify the path:
factory_worker --enable-browser-analysis --chrome-binary /usr/bin/chromium

In Docker containers, install chrome-headless-shell or chromium. The workbench-demo Docker image includes it by default.

CDP Connection Failures

Symptom: Worker logs CDP pipe read error or Chrome pipe EOF during analysis. Browser profiles are not generated.

Possible causes:

  1. Chrome crashed. The worker automatically restarts Chrome after a 2-second delay. Check logs for the crash reason:

    sudo journalctl -u pagespeed-worker | grep -i "chrome\|crash\|exit"
  2. Memory limit exceeded. If Chrome’s RSS exceeds --chrome-max-memory (default 512 MB), the worker kills and restarts it. Increase the limit for sites with large pages:

    factory_worker --enable-browser-analysis --chrome-max-memory 1024
  3. Startup timeout. Chrome may take longer to start in resource-constrained environments. Increase --chrome-startup-timeout (default 10000 ms):

    factory_worker --enable-browser-analysis --chrome-startup-timeout 20000

Analysis Timeouts

Symptom: Worker logs session timeout for browser analysis. Some pages never get browser-validated profiles.

Fix: The per-page timeout is controlled by --chrome-page-timeout (default 60000 ms). Complex pages with many stylesheets may need more time. However, if timeouts are frequent, the root cause is often Chrome struggling with inlined CSS volume. Check the page’s stylesheet count and total CSS size.

# Check browser analysis status via management socket
echo "BROWSER-STATUS" | socat - UNIX-CONNECT:/var/lib/pagespeed/pagespeed.sock.mgmt

The response includes analysis_errors, chrome_crashes, and queue_depth counters.

Browser Analysis Not Improving Results

Symptom: Browser analysis is enabled and running, but HTML output is identical to heuristic-only mode.

Possible causes:

  1. Profile TTL too short. If --browser-profile-ttl is very short, profiles expire before they are used. The default (86400 seconds / 24 hours) works for most sites.

  2. Template mismatch. Browser profiles are keyed by DOM structure hash. If every page has a unique structure (e.g., inline content changes the DOM tree), each page gets its own profile and re-analysis runs constantly. This is normal for highly dynamic sites but reduces the benefit.

  3. Individual features disabled. Check whether --no-browser-critical-css, --no-browser-lazy-loading, --no-browser-lcp-preload, or --no-browser-image-sizing flags are set. Each disables a specific browser analysis output.

SVG Vectorization Issues

SVG auto-vectorization converts suitable raster images (logos, icons, flat illustrations) to SVG format. It runs in detect mode by default, which only evaluates candidacy without producing SVG output. Set --svg-mode auto for production serving.

No SVG Variants Produced

Symptom: The worker processes images but no SVG variants appear in the cache.

Possible causes:

  1. SVG mode is detect (the default). In detect mode, the worker evaluates images for SVG candidacy and logs scores, but does not vectorize. Set --svg-mode auto or --svg-mode preview to produce SVG output:

    factory_worker --cache-path /data/cache.vol --svg-mode auto
  2. Candidacy threshold too high. The --svg-candidacy-threshold (default 50) filters out images with low vectorization suitability. Photos and complex textures score low and are rejected. This is by design — SVG is only beneficial for simple graphics. Lower the threshold to see more candidates:

    factory_worker --svg-candidacy-threshold 30
  3. Images too large. The --svg-max-pixels flag (default 65536, about 256x256) limits which images are evaluated. Large photos are excluded because vectorization produces enormous SVGs. Increase for larger icons:

    factory_worker --svg-max-pixels 262144  # ~512x512
  4. Image processing disabled. If --disable-image is set, all image processing is skipped, including SVG vectorization.

SVG Variants Larger Than Raster

Symptom: Debug logs show svg_size_rejected counter increasing. SVGs are produced but discarded.

This is the size gate working correctly. If the vectorized SVG is larger than the raster original (which is common for photos and complex images), the SVG variant is discarded. The svg_bytes_saved stat shows cumulative savings for SVGs that did pass the gate.

SVG Path Count Exceeded

Symptom: Debug logs show svg_path_count_rejected counter increasing.

Complex images produce SVGs with many <path> elements, which can slow down browser rendering. The --svg-max-paths flag (default 500) limits the maximum path count. If you want to allow more complex SVGs:

factory_worker --svg-max-paths 1000

Be cautious: SVGs with thousands of paths can cause rendering jank on mobile devices.
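
The size and path-count gates described above can be summarized in a short sketch (the function name and structure are hypothetical; the worker's internal implementation is not public, only the documented behavior and the --svg-max-paths default):

```python
def svg_variant_accepted(svg_bytes: int, raster_bytes: int,
                         path_count: int, max_paths: int = 500) -> bool:
    """Apply the SVG path-count gate and size gate to a candidate variant.

    max_paths mirrors the --svg-max-paths default of 500.
    """
    if path_count > max_paths:
        return False  # would increment svg_path_count_rejected
    if svg_bytes >= raster_bytes:
        return False  # would increment svg_size_rejected
    return True

print(svg_variant_accepted(4_000, 18_000, 120))   # True
print(svg_variant_accepted(25_000, 18_000, 120))  # False (size gate)
print(svg_variant_accepted(4_000, 18_000, 900))   # False (path gate)
```

Both counters increasing is therefore expected on mixed content: photos fail the size gate, detailed illustrations fail the path gate, and only simple flat graphics pass.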

LCP Images Not Vectorized

Symptom: The LCP hero image qualifies for SVG but no SVG variant is produced.

By default, --svg-exclude-lcp true skips vectorization for images identified as the Largest Contentful Paint candidate. SVG path tessellation in the browser can be slower than decoding a raster image, potentially regressing LCP.

If your LCP image is a simple logo or icon that renders quickly as SVG:

factory_worker --svg-exclude-lcp false

License Activation Issues

ModPageSpeed 2.0 license enforcement is warn-only. The software functions without a valid license, but log warnings are emitted on startup and periodically.

"Invalid license signature"

Symptom: Worker logs Invalid license signature on startup.

The license token’s Ed25519 signature does not verify against the embedded public key. This means the token is corrupted, truncated, or was not issued by We-Amp.

Fix: Copy the full license token from your purchase confirmation or the web console. Ensure no whitespace or line breaks are included:

# Via CLI flag
factory_worker --license-key "eyJ0eXAiOi..."

# Via environment variable
export PAGESPEED_LICENSE_KEY="eyJ0eXAiOi..."

“License expired”

Symptom: Worker logs License valid but expired on startup.

The license token’s exp field is in the past. The software continues to run with warnings.

Fix: If you have --license-renewal-url configured with a subscription-based token, the worker renews automatically. Verify the renewal URL is reachable:

curl -I https://modpagespeed.com/api/renew

If automatic renewal is not configured, obtain a new license token from the web console or contact support.

"License key not set"

Symptom: Worker logs No license key configured on startup.

This is informational. Set the license key via CLI flag or environment variable. The software runs without restrictions during the trial period and continues to function after the trial with log warnings.

License Not Auto-Renewing

Symptom: License is near expiry but the worker does not renew it.

Auto-renewal requires:

  1. A v2 license token (contains a subscription ID sid field)
  2. --license-renewal-url set (or PAGESPEED_LICENSE_RENEWAL_URL env var)
  3. The renewal URL must be reachable from the worker
  4. curl must be available in the worker’s PATH (used via posix_spawnp)

Check the worker logs for renewal attempts:

sudo journalctl -u pagespeed-worker | grep -i "renew"
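
To inspect a token's exp and sid claims locally, the payload can be decoded without verifying the Ed25519 signature. A diagnostic sketch, assuming the token uses the standard three-part base64url JWT layout that its eyJ... prefix suggests:

```python
import base64
import json
import time

def decode_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT-style token.

    Does NOT verify the Ed25519 signature -- diagnostic use only.
    """
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

def renewal_eligible(claims: dict) -> bool:
    """Auto-renewal requires a v2 token, i.e. a subscription ID (sid)."""
    return "sid" in claims

# Usage:
# claims = decode_payload(os.environ["PAGESPEED_LICENSE_KEY"])
# print("expired:", claims.get("exp", 0) < time.time())
# print("renewable:", renewal_eligible(claims))
```

If renewal_eligible is false, the token is a v1 (non-subscription) token and will never auto-renew regardless of the renewal URL setting.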

Vary Header and Cache Poisoning

Symptom: All responses are cache misses. The cache never populates even though nginx and the worker are running correctly.

Cause: The nginx module adds a Vary header containing its own user-agent token. If emit_vary() runs before vary_uncacheable() in the header filter chain, the module’s own Vary token poisons the cacheability check, causing every response to be treated as uncacheable.

Fix: This is an internal module bug. If you encounter this symptom after upgrading, check the src/nginx/ngx_pagespeed_module.cc source to verify that vary_uncacheable() is called before emit_vary() in the header filter. If the issue persists, downgrade to the previous version and report it.

Next Steps