Helm Deployment

Deploy ModPageSpeed 2.0 on Kubernetes with the official Helm chart.

The official Helm chart deploys ModPageSpeed 2.0 as a single two-container pod: the nginx interceptor and the Factory Worker run side by side, sharing an emptyDir volume for the Cyclone cache. This page covers installation, configuration, scaling, and upgrades.

Prerequisites

  • Kubernetes 1.24+
  • Helm 3.x
  • Access to the modpagespeed/worker and modpagespeed/nginx container images
  • A running origin server accessible from the cluster

Architecture

Each pod contains two containers:

+-----------------------------------------+
|  Pod                                    |
|  +----------------+  +---------------+  |
|  |  nginx         |  |  worker       |  |
|  |  (port 80)     |  |               |  |
|  +-------+--------+  +-------+-------+  |
|          |                   |          |
|      +---+-------------------+---+      |
|      |  /data (emptyDir)         |      |
|      |  cache.vol                |      |
|      |  pagespeed.sock           |      |
|      |  pagespeed.sock.health    |      |
|      |  pagespeed.sock.mgmt      |      |
|      +---------------------------+      |
+-----------------------------------------+
  • nginx serves HTTP traffic on port 80. On cache miss, it proxies to your origin and sends a notification to the worker via Unix socket.
  • worker reads original content from the shared cache, optimizes it, and writes variants back. It creates the cache file and Unix sockets at startup.
  • emptyDir is the shared volume mounted at /data in both containers. Its lifetime is tied to the pod — cache is rebuilt after pod restarts.
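In pod-spec terms, the chart wires this up roughly like the fragment below. This is an illustrative sketch of the rendered manifest, not the chart's exact output; the sizeLimit shown is the chart default.

```yaml
spec:
  volumes:
    - name: data
      emptyDir:
        sizeLimit: 1Gi        # cache.sizeLimit
  containers:
    - name: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data          # shared cache and Unix sockets
          mountPath: /data
    - name: worker
      volumeMounts:
        - name: data          # same volume, same mount path
          mountPath: /data
```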

Installation

Quick Start

helm install pagespeed deploy/helm/pagespeed/ \
  --set backend.host=my-origin.default.svc.cluster.local \
  --set backend.port=8080

Replace my-origin.default.svc.cluster.local with the Kubernetes service name (or IP) of your origin server.

With a Custom Values File

Create a my-values.yaml:

replicaCount: 2

backend:
  host: my-origin.default.svc.cluster.local
  port: 8080

worker:
  cacheSize: "1073741824"  # 1 GB

cache:
  sizeLimit: 2Gi

ingress:
  enabled: true
  className: nginx
  hosts:
    - host: cdn.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: cdn-tls
      hosts:
        - cdn.example.com

Install with:

helm install pagespeed deploy/helm/pagespeed/ -f my-values.yaml

Configuration Reference

Backend (Origin Server)

Value         Default    Description
backend.host  127.0.0.1  Origin server hostname or IP
backend.port  8081       Origin server port

Worker

Value                    Default              Description
worker.image.repository  modpagespeed/worker  Worker container image
worker.image.tag         latest               Image tag
worker.cacheSize         1073741824 (1 GB)    Cyclone cache size in bytes
worker.extraArgs         []                   Additional CLI flags for worker
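worker.cacheSize takes a raw byte count. If you would rather not hand-compute powers of two, GNU coreutils numfmt converts IEC sizes to bytes (assuming numfmt is available on your workstation):

```shell
# Convert human-readable IEC sizes to the byte counts cacheSize expects.
numfmt --from=iec 256M 1G 4G
# -> 268435456
# -> 1073741824
# -> 4294967296
```

These are the same values used in the resource recommendations below.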

Pass additional worker flags via extraArgs:

worker:
  extraArgs:
    - --jpeg-quality
    - "80"
    - --log-level
    - debug
    - --no-proactive-savedata-variants

Nginx

Value                   Default             Description
nginx.image.repository  modpagespeed/nginx  Nginx container image
nginx.image.tag         latest              Image tag

License Key

Value              Default      Description
licenseKey         ""           License token (stored in a Secret)
existingSecret     ""           Name of a pre-existing Secret
existingSecretKey  license-key  Key within the existing Secret

Set the license key inline:

helm install pagespeed deploy/helm/pagespeed/ \
  --set licenseKey=your-token-here \
  --set backend.host=my-origin

Or reference a secret you manage yourself:

kubectl create secret generic my-license --from-literal=license-key=your-token-here

helm install pagespeed deploy/helm/pagespeed/ \
  --set existingSecret=my-license \
  --set backend.host=my-origin

The license key is optional. Without it, both containers log a warning at startup but operate normally. See the License Activation guide for details.
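Secret data is stored base64-encoded, which is worth knowing when you inspect or pre-create the license Secret by hand. A self-contained sketch with a placeholder token (not a real key):

```shell
# Placeholder token -- substitute your real license key.
token='your-token-here'

# Kubernetes stores Secret values base64-encoded.
encoded=$(printf '%s' "$token" | base64)

# Decoding recovers the original token.
printf '%s' "$encoded" | base64 -d
# -> your-token-here
```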

Cache Volume

Value            Default  Description
cache.sizeLimit  1Gi      emptyDir size limit

The sizeLimit caps the disk space the emptyDir volume can consume. Set this higher than worker.cacheSize to account for Unix socket files and any temporary data. A good rule: set sizeLimit to at least 1.5x worker.cacheSize.
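That rule is simple enough to check in shell; a quick sketch using the default cache size:

```shell
# Rule of thumb: cache.sizeLimit >= 1.5 x worker.cacheSize.
cache_size=1073741824                  # worker.cacheSize default (1 GB)
size_limit=$(( cache_size * 3 / 2 ))   # 1.5x, using integer arithmetic
echo "$size_limit"                     # -> 1610612736
numfmt --to=iec "$size_limit"          # -> 1.5G
```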

Service and Ingress

Value              Default    Description
service.type       ClusterIP  Kubernetes Service type
service.port       80         Service port
ingress.enabled    false      Create an Ingress object
ingress.className  ""         Ingress class name

Autoscaling

Value                                       Default  Description
autoscaling.enabled                         false    Enable HPA
autoscaling.minReplicas                     1        Minimum pod count
autoscaling.maxReplicas                     10       Maximum pod count
autoscaling.targetCPUUtilizationPercentage  70       CPU target for scaling
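For example, enabling the HPA in a values file (the replica floor of 2 here is a choice for this example, not the chart default):

```yaml
autoscaling:
  enabled: true
  minReplicas: 2          # keep two pods warm
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```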

Resource Recommendations

The worker is CPU-intensive (image encoding) and benefits from memory for the decode/encode pipeline. Nginx is lightweight.

Small Deployment (< 500 images)

worker:
  resources:
    requests: { cpu: 250m, memory: 256Mi }
    limits:   { cpu: "1",  memory: 512Mi }
  cacheSize: "268435456"  # 256 MB
nginx:
  resources:
    requests: { cpu: 100m, memory: 128Mi }
    limits:   { cpu: 500m, memory: 256Mi }

Medium Deployment (500-5000 images)

worker:
  resources:
    requests: { cpu: 500m, memory: 512Mi }
    limits:   { cpu: "2",  memory: 1Gi }
  cacheSize: "1073741824"  # 1 GB
nginx:
  resources:
    requests: { cpu: 100m, memory: 128Mi }
    limits:   { cpu: "1",  memory: 512Mi }

Large Deployment (5000+ images)

worker:
  resources:
    requests: { cpu: "1",  memory: 1Gi }
    limits:   { cpu: "4",  memory: 2Gi }
  cacheSize: "4294967296"  # 4 GB
nginx:
  resources:
    requests: { cpu: 250m, memory: 256Mi }
    limits:   { cpu: "1",  memory: 512Mi }
cache:
  sizeLimit: 6Gi
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10

Upgrading

To upgrade to a new chart or image version:

helm upgrade pagespeed deploy/helm/pagespeed/ -f my-values.yaml

Or update just the image tags:

helm upgrade pagespeed deploy/helm/pagespeed/ \
  --reuse-values \
  --set worker.image.tag=2.0.1 \
  --set nginx.image.tag=2.0.1

The emptyDir cache is lost on pod restart. After an upgrade, the cache rebuilds as requests arrive — the first few responses will be cache misses while the worker re-optimizes content. This is expected and resolves within minutes under normal traffic.

Uninstalling

helm uninstall pagespeed

This removes all Kubernetes resources created by the chart (Deployment, Service, Ingress, HPA, ConfigMap, Secret). The emptyDir volumes are deleted with the pods.

Verifying the Deployment

After installation, check that both containers are running:

kubectl get pods -l app.kubernetes.io/name=pagespeed

The pod's READY column should show 2/2, meaning both containers are up:

NAME                         READY   STATUS    RESTARTS   AGE
pagespeed-6d4b8f9c7-x2k9m   2/2     Running   0          45s
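If you script this check, the READY column can be parsed with awk. The sketch below runs against sample text in the format kubectl prints, so it works without a cluster; in practice, pipe in kubectl get pods --no-headers instead.

```shell
# Sample output in the format "kubectl get pods --no-headers" prints.
pods='pagespeed-6d4b8f9c7-x2k9m  2/2  Running  0  45s
pagespeed-6d4b8f9c7-q8r2n  1/2  Running  0  12s'

# Print any pod whose READY column shows fewer ready containers than total.
printf '%s\n' "$pods" |
  awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1 }'
# -> pagespeed-6d4b8f9c7-q8r2n
```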

Port-forward to test locally:

kubectl port-forward svc/pagespeed 8080:80
curl -I http://localhost:8080/
# Look for: X-PageSpeed: MISS (first request)

Next Steps