Frequently Asked Questions
I'm not running Apache or Nginx; do you support my HTTP server?
While we only support Apache and Nginx ourselves, there are community-developed ports to several other webservers.
If you run a webserver that doesn't have a port, one option is to set up another server running PageSpeed as a reverse proxy in front of it. If you're not sure which to use, we recommend PageSpeed on Nginx, but any of these servers should work well as an optimizing reverse proxy.
(And, of course, if you're interested in porting PageSpeed to a new server, that would be awesome, and anyone porting should feel free to send us lots of detailed technical questions!)
When will you support my favorite OS or protocol?
While we have no dates to announce for upcoming releases, we definitely want to know what you think we should be working on. Please search our issues for your feature. If you don't find it, open a new issue and tag it "Type-Enhancement". To get updates on an issue, click the "star" icon for it. This also lets us know how many people are interested in the issue. If your issue is Nginx-specific, consider posting on the ngx_pagespeed bug tracker.
Do you support SUSE?
We support SUSE on Apache and Nginx when building from source. For Apache, Robert Munteanu (robert.munteanu@gmail.com) has set up a repository which publishes OpenSUSE RPMs for mod_pagespeed. The repository is hosted on OpenSUSE's build service instance. The builds have seen some testing on one of Robert's servers (OpenSUSE 12.1/x86_64) but he'd appreciate anyone else testing it.
To enable the module, install it, add deflate pagespeed to your list of Apache modules in /etc/sysconfig/apache2, and restart Apache.
Please note that the module is linked dynamically with the current system libraries and as such will bring in more dependencies than the 'stock' Fedora or CentOS RPM.
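As a sketch, the module-list edit can be scripted. The APACHE_MODULES variable name and the /etc/sysconfig/apache2 path follow openSUSE conventions, but verify them on your system; the demo below operates on a temporary copy rather than the real file:

```shell
# Sketch: append "deflate pagespeed" to the APACHE_MODULES list
# (variable name and file path are openSUSE conventions; verify them
# on your system). Demonstrated on a temporary copy of the file.
conf=$(mktemp)
echo 'APACHE_MODULES="actions alias auth_basic"' > "$conf"
# Append the two modules inside the quoted list:
sed -i 's/^APACHE_MODULES="\(.*\)"$/APACHE_MODULES="\1 deflate pagespeed"/' "$conf"
cat "$conf"
# → APACHE_MODULES="actions alias auth_basic deflate pagespeed"
rm -f "$conf"
```

After editing the real file you would restart Apache (for example with systemctl restart apache2).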
Why isn't PageSpeed rewriting any of my pages?
Check the HTTP response headers from your HTML page:
curl -D- http://example.com | less
You should get something like:
- Apache:
Date: Fri, 30 Sep 2016 15:36:57 GMT
Server: Apache/2.4.7 (Ubuntu)
...
X-Mod-Pagespeed: 1.11.33.4-0
...
- Nginx:
Date: Fri, 30 Sep 2016 15:37:24 GMT
Server: nginx/1.11.4
...
X-Page-Speed: 1.11.33.4-0
If you don't see an X-Mod-Pagespeed
header (Apache)
or X-Page-Speed
header (Nginx), this means that your webserver
isn't letting PageSpeed run. This could be because it isn't actually installed,
you don't have it turned on in the configuration file, or many other reasons.
In Apache the problem might be that you have multiple
SetOutputFilter
directives: only one of those will win. See the
Apache SetOutputFilter documentation.
If you do see the header, but it doesn't look like PageSpeed is making any changes to your page, it's possible that none of the active filters are finding anything to rewrite. Try comparing your page with PageSpeed off and with the collapse_whitespace filter enabled:
curl -D- 'http://example.com?ModPagespeed=off' | less
curl -D- 'http://example.com?ModPagespeed=on&ModPagespeedFilters=collapse_whitespace' | less
If you see a change when run with collapse_whitespace
on, that
means PageSpeed is running but the filters you have selected aren't
optimizing anything. There are several reasons that could happen:
- The filters you have enabled aren't aggressive enough.
- Your resources (images, CSS, JavaScript) aren't cacheable. If PageSpeed sees Cache-Control headers such as no-cache or private it will not rewrite the resources.
- CSS, JavaScript, and image files served from a domain distinct from the HTML must have the resource domain authorized. See Domains.
- Your CSS has new CSS3 syntax or other constructions we don't support. See issue 108. The fallback_rewrite_css_urls filter may be able to help. You can also use the standalone CSS parser to help debug these issues.
- Your resources are served over HTTPS. HTTPS resources can currently only be rewritten if they are origin-mapped or loaded directly from the filesystem. See HTTPS Support.
Why am I getting "Error: Missing Dependency: httpd >= 2.2" even though I have Apache 2.2.x installed?
You are probably trying to install mod_pagespeed using yum or apt-get (the .rpm or .deb binaries), but you installed Apache using a different method (cPanel, Wordpress, etc.). This will not work because mod_pagespeed binaries depend upon Apache being installed using yum or apt-get.
Instead you must either build mod_pagespeed from source or search for mod_pagespeed + your platform to see if someone has documented an install process for that platform. For example, cPanel-based installation.
I'm using cPanel on my server, how do I install mod_pagespeed?
cPanel installs the Apache httpd server from source via the built-in EasyApache setup and build process. To enable mod_pagespeed on your server, download and install the mod_pagespeed module for cPanel WHM. Once the module is installed, you can select "mod_pagespeed" as one of the modules during the regular EasyApache build (via the online tool, or from the command line). Do not install mod_pagespeed from .deb or .rpm packages; cPanel requires that you use the EasyApache build process.
I'm using WordPress and my pages are blank. Why?
Disable compression in the WordPress plugin, so that mod_pagespeed will process uncompressed HTML. mod_pagespeed ensures that its output will be compressed by enabling mod_deflate.
PageSpeed broke my site; what do I do?
First of all, sorry about that. We put a lot of work into validating our rewriters against a large corpus of websites and we disable filters that cause problems as soon as we can, but sometimes things slip through.
Second, please upgrade to the latest version; we are continually working on bug-fixes and turning off filters that break pages.
If it's still breaking your site, please post a bug (Apache, Nginx). If you can, including the following information will make it much easier to diagnose:
- Try appending ?ModPagespeed=off to the URL. This de-activates PageSpeed. If the site is still broken, it is not a rewrite or HTML parsing problem. It might be a configuration clash; please ask us on our discussion group.
- If that fixed the site, try appending ?ModPagespeed=on&ModPagespeedFilters= to the URL. This turns on PageSpeed, but no filters. If the site is broken now, it is an HTML parsing problem. Please let us know.
- If the site still worked, try appending ?ModPagespeed=on&ModPagespeedFilters=foo for various filters "foo". For example try:
http://www.modpagespeed.com/?ModPagespeed=on&ModPagespeedFilters=extend_cache
http://www.modpagespeed.com/?ModPagespeed=on&ModPagespeedFilters=combine_css
http://www.modpagespeed.com/?ModPagespeed=on&ModPagespeedFilters=inline_css
http://www.modpagespeed.com/?ModPagespeed=on&ModPagespeedFilters=inline_javascript
http://www.modpagespeed.com/?ModPagespeed=on&ModPagespeedFilters=insert_image_dimensions
http://www.modpagespeed.com/?ModPagespeed=on&ModPagespeedFilters=rewrite_images
http://www.modpagespeed.com/?ModPagespeed=on&ModPagespeedFilters=rewrite_css
http://www.modpagespeed.com/?ModPagespeed=on&ModPagespeedFilters=rewrite_javascript
You may have to reload them a few times over several seconds to make sure they have had time to load sub-resources into cache and rewrite them. If one of these breaks your site, you now know which filter is at fault. Please let us know. You can disable that filter by adding a line to your pagespeed.conf file:
- Apache: ModPagespeedDisableFilters foo
- Nginx: pagespeed DisableFilters foo;
I am getting 404s for rewritten resources (like example.png.pagespeed.ic.LxXAhtOwRv.png) or for the mod_pagespeed_admin console
The most common reason that the rewritten resources 404 is because of
mod_rewrite RewriteCond
rules. For example:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ /404 [L,R=301]
This rule causes a 404 for all requests that don't exist on the filesystem, including mod_pagespeed rewritten resources and the mod_pagespeed admin console.
In order to fix this you must add an exception for mod_pagespeed URLs:
RewriteCond %{REQUEST_URI} !pagespeed
This will allow rewritten resources, the admin console and static resources necessary for some filters.
PageSpeed does not pick up changes when I edit CSS or JavaScript files
There are two distinct cache-times at play when you use PageSpeed:
- The origin TTL which PageSpeed uses to refresh its internal server-side cache.
- The TTL with which PageSpeed serves rewritten resources to browsers.
We recommend an origin TTL of 10 minutes, which provides reasonable responsiveness when you update a file. If you try to make it much smaller, then your server will need to refresh it more frequently. This adds server load and reduces optimization.
To see changes to your files more quickly while developing, flush the cache on your server(s).
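One way to flush the file cache is to touch a file named cache.flush inside the configured file cache path; PageSpeed then treats cache entries older than that file as expired. The path is an assumption here: use the value of your ModPagespeedFileCachePath (Apache) or pagespeed FileCachePath (Nginx) setting.

```shell
# Sketch: flush PageSpeed's file cache by touching cache.flush inside
# the configured FileCachePath. On a real server set
# PAGESPEED_CACHE_DIR to your FileCachePath value; this self-contained
# demo falls back to a temporary directory instead.
cache_dir=${PAGESPEED_CACHE_DIR:-$(mktemp -d)}
touch "$cache_dir/cache.flush"
ls "$cache_dir"
# → cache.flush
```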
If your environment allows you to enable LoadFromFile, you can get the best of both worlds because PageSpeed can eliminate its internal server-side cache.
Why is PageSpeed giving me errors in jquery or js_tinyMCE?
Some JavaScript is introspective, being sensitive to its own name or the path it's loaded from. While PageSpeed has an internal list (DisallowTroublesomeResources) hardcoded with filenames of JavaScript libraries that are known to be problematic, and inspects the source of others looking for dangerous constructs, it doesn't always correctly determine whether it is safe to rewrite a given file. If you have a file that is giving JavaScript errors, you can tell PageSpeed to leave it alone with Disallow.
What's with all these "Serf" errors in my logs? Error status=670003 (Temporary failure in name resolution)
This can happen when DNS lookups fail from the server itself, so sub-resources cannot be fetched and rewritten.
To test that this is the case, ssh
into your machine and wget
a URL:
$ ssh YOUR_SITE
$ wget http://YOUR_SITE/
If this fails, then DNS is not accessible or there is some other networking issue stopping you from accessing your host from itself.
One solution is to use origin-mapping to indicate the host from which the resources should be fetched:
- Apache:
ModPagespeedMapOriginDomain localhost www.example.com
- Nginx:
pagespeed MapOriginDomain localhost www.example.com;
This bypasses DNS lookup by telling PageSpeed to get all resources for domain www.example.com from localhost.
This can also be used to improve the performance of PageSpeed when it is sitting behind a load balancer. It may be preferable to use localhost to load the resources rather than going out to the load-balancer and back.
Can I move PageSpeed's file-based cache into RAM?
Why yes, you can. PageSpeed uses the file system for its cache implementation. There is no requirement that this be a physical disk. Disk pressure will be reduced and performance may be improved by using a memory-based file system.
Put this in /etc/fstab, with the uid and gid set to the appropriate user and group of your webserver, and set the path to your needs. Feel free to change the size; here it is 256MB:
tmpfs /path/to/pagespeed_cache tmpfs size=256m,mode=0775,uid=httpd,gid=httpd 0 0
Save it, and after that mount the tmpfs:
mount /path/to/pagespeed_cache
Why don't you allow source-installs in Apache via ./configure && make?
mod_pagespeed is dependent on several other packages that use gclient. For us to switch away from this build methodology we'd have to either:
- rewrite the functionality we get for free from other packages, or
- get these packages to switch methodologies and document for people installing from source that they must configure and make about 10 other packages before they could compile ours.
To do either of those would cost us a lot of development time that we'd prefer to spend making PageSpeed better. The benefit of gclient, besides the above, is that it lets us control explicitly which library versions we link in, out of a large number of direct and transitive dependencies, helping us create a consistent experience for our source-code builds. If we had to ask our source-code installers to configure and make multiple dependent libraries there would likely be a lot of version incompatibilities.
We do support ./configure && make in Nginx, but that only works because we package up a binary distribution of the PageSpeed Optimization Library (PSOL), and a "source" installation only builds the ngx_pagespeed-specific files from source. When you want to build PSOL from source along with ngx_pagespeed you still need to use gclient.
Why is my Google Analytics data being inflated by "Serf"?
If you track page views with a tracking image, you will need to explicitly tell PageSpeed not to try to fetch that image. For example if your tracking image were:
<img src="/ga.php?utmac=...">
you would add:
- Apache:
ModPagespeedDisallow "*/ga.php*"
- Nginx:
pagespeed Disallow "*/ga.php*";
to your configuration file.
mod_pagespeed does not rewrite pages produced from mod_php
mod_pagespeed only rewrites pages specifically marked as Content-Type: text/html (and a few other HTML types). If you dynamically generate your pages from mod_php, PHP needs to set this header correctly.
One way to do this is to use PHP's header function:
<?php header('Content-Type: text/html') ?>
This code goes at the top of your PHP file.
PageSpeed causes my page to display an XML Parsing Error message
This usually happens when using a content management or generation system (we've seen it with Munin and Magento for example). The full error message looks something like:
XML Parsing Error: mismatched tag. Expected: </li>.
Location: http://www.example.com/
Line Number 123, Column 4: </ul>
This happens when the generated content has a meta
tag that
identifies the content as XHTML but the content has markup that is not valid
XHTML, and you have configured your webserver to set the content type
to HTML, so the browser parses it as HTML and doesn't detect the invalid XHTML
errors.
However, when convert_meta_tags
is enabled (and it's a core filter
so is on by default), PageSpeed inserts a content header into the response
with the value in the meta
tag, namely XHTML
(application/xhtml+xml
to be precise), resulting in the browser
displaying the error message because it is now parsing the page as XHTML and
it rejects the invalid content.
There are three solutions, in descending order of preference:
- If the content is XHTML, write XHTML and validate it with an XHTML validator.
- If the content is not XHTML, remove the meta tag that claims it is.
- If the content is not XHTML but you can't remove the meta tag, say because your CMS doesn't let you, disable the convert_meta_tags filter in your pagespeed.conf:
- Apache: ModPagespeedDisableFilters convert_meta_tags
- Nginx: pagespeed DisableFilters convert_meta_tags;
Why do I get Permission denied errors in my log file on CentOS, RHEL, or any system using SELinux?
The symptom is many error messages in the webserver log file of the form (split across multiple lines here for readability):
[Mon Jan 01 02:03:04 2001] [error] [mod_pagespeed 1.0.22.7-2005 @1234]
/path/to/pagespeed_cache/randomgibberish.lock:0:
creating dir (code=13 Permission denied)
These are because SELinux by default restricts permissions of daemons for extra security, so you need to grant permission for the httpd daemon to write to the cache directory:
chcon -R -t httpd_sys_content_t /path/to/pagespeed_cache
This is for Apache; we're not sure what you need to do for Nginx.
My logs contain messages about missing files requested from 224.0.0.0 and resources are not optimized, what's wrong?
For security reasons, PageSpeed will only fetch from host names it is explicitly told about via its domain configuration directives and from 127.0.0.1, the loopback interface of the server it's running on. Many Apache configuration management tools, however, will configure virtual hosts to only listen on the external IP, which causes those fetches to fail.
If you are affected, the following options may be appropriate:
- Unless you have a reason not to, have your virtual hosts listen on all interfaces: change directives of the form <VirtualHost 198.51.100.1:80> to the form <VirtualHost *:80>
- For every virtual host, list its domain name(s) with the ModPagespeedDomain directive inside its own <VirtualHost> block. For example:
<VirtualHost 198.51.100.1:80>
  ServerName www.example.com
  ModPagespeedDomain www.example.com
</VirtualHost>
- For every virtual host, provide a ModPagespeedMapOriginDomain directive giving where to load its resources, for example:
<VirtualHost 198.51.100.1:80>
  ServerName www.example.com
  ModPagespeedMapOriginDomain 198.51.100.1 www.example.com
</VirtualHost>
- If you have ModPagespeedInheritVHostConfig on, you can also provide the origin mapping globally, which may be useful in combination with wildcards, for example:
ModPagespeedMapOriginDomain loadbalancer.example.com *.example.com
Warning: You do not generally want to use Domain globally, as doing so tells PageSpeed that you consider all of these domains as mutually trusting.
- If you are running a proxy or for some other reason cannot easily enumerate all virtual hosts, it is possible to disable this behavior, after taking some precautions. Please see fetch server restrictions for more information.
Why are rewritten pages sending POSTs back to my server?
Certain filters need to determine things about the page: in particular, the lazyload_images, inline_preview_images, and inline_images filters need to determine which images are above the fold, and the prioritize_critical_css filter needs to determine the CSS actually used by the page.
To do this, the filters inject JavaScript into the rewritten HTML that analyzes the page in the browser and sends data back to mod_pagespeed using a POST method. The default target is /mod_pagespeed_beacon but that can be changed using the ModPagespeedBeaconUrl directive.
How do I enable or disable beacon POSTs being sent back to my server?
Filters that use the beacon automatically inject JavaScript to send the POST back to the server, and the POST handler is always enabled in mod_pagespeed, so there's nothing to do to enable beaconing.
To disable the use of beacons by the image rewriting filters use the
ModPagespeedCriticalImagesBeaconEnabled
directive. If you disable image beacons but enable filters that use
them, the filters will work but not as well as when beacons are enabled.
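A minimal sketch of the directive usage, following the Apache/Nginx conventions used elsewhere in this FAQ (false disables the image beacons; verify the directive against your version's documentation):

```
# Apache (pagespeed.conf):
ModPagespeedCriticalImagesBeaconEnabled false

# Nginx:
pagespeed CriticalImagesBeaconEnabled false;
```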
To disable the POST handler for all filters there are two options: either
disable all the filters that use it, or use a
<Location>
directive to block it. Filters are disabled using
ModPagespeedDisableFilters. An example
<Location>
directive to block all beacon POST handling that
can be added to your pagespeed.conf
file is:
<Location /mod_pagespeed_beacon> Order allow,deny </Location>
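The block above is Apache-specific. For ngx_pagespeed, a comparable sketch (assuming the default beacon path /ngx_pagespeed_beacon; verify the path for your version) is an nginx location block that denies the beacon requests:

```
location /ngx_pagespeed_beacon {
  # Refuse all beacon POSTs; sketch only, adjust the path if you
  # changed the beacon URL in your configuration.
  deny all;
}
```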
If you block POSTs but enable filters that use beacons, depending on the filter it will either have limited functionality or have no useful effect, but in all cases pointless processing will occur in both the server and the browser, so you should disable and forbid these filters if you block POSTs.
Note: Even if you disable all filters that use beacons, someone could use tools like wget or curl to send POSTs to your server. These will have no effect but they will require processing. If you want to completely disable POST handling use a <Location> directive.
Why is PageSpeed inserting a meta refresh to /?PageSpeed=noscript or /?ModPagespeed=noscript at the top of the page?
The defer_javascript, lazyload_images, dedup_inlined_images, and local_storage_cache filters require JavaScript to render pages correctly. To support clients that have JavaScript disabled, if any of these filters are enabled, PageSpeed will insert a meta refresh inside a noscript tag at the top of the page. This meta refresh will redirect clients with JavaScript disabled to the current URL with a '?PageSpeed=noscript' query parameter appended, which disables filters that require JavaScript.
If you wish to disable this redirect, for instance if your page already requires
JS to function correctly, set the following option in your
pagespeed.conf
:
- Apache:
ModPagespeedSupportNoScriptEnabled false
- Nginx:
pagespeed SupportNoScriptEnabled false;
Why won't the collapse_whitespace filter remove newlines?
When removing whitespace from HTML, some website optimizers remove newlines entirely, but PageSpeed leaves them in. The issue isn't newlines specifically: it's that it's not generally safe to remove a run of whitespace entirely. You can turn any number of consecutive whitespace characters into a single one, and we do that, but removing the whole run can make the site render differently.
To take a simple example, consider:
<body>
  <h1>Hello World</h1>
  Lorem ipsum dolor sit amet, consectetur adipiscing elit.
  Fusce molestie ante <b>vitae</b> iaculis varius.
  ...
</body>
PageSpeed with collapse_whitespace will turn this into:
<body>
<h1>Hello World</h1>
Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Fusce molestie ante <b>vitae</b> iaculis varius.
...
</body>
If PageSpeed went further and put it all on one line, it would be converting <b>vitae</b> iaculis into <b>vitae</b>iaculis, which would change the rendering from "vitae iaculis" into "vitaeiaculis"; one word instead of two. It would have been safe to turn <body> <h1>Hello World</h1> into <body><h1>Hello World</h1>, but doing one and not the other requires understanding the CSS (and JS) to the point where we can reliably tell that one pair of elements is display: block while the other pair is display: inline.
I've got a warning saying "Serf fetch failure rate extremely high". What does this mean?
The warning means that, when PageSpeed tried to fetch resources inside your
web page for optimization, over 50% of attempts inside a 30-minute period
failed. This may just mean you have some broken resource includes in your
pages (in which case, it may be a good idea to fix them for better performance),
but might indicate that PageSpeed's fetching is not working properly. If you
have in-place resource optimization on, that can result in user requests for
.pagespeed.
URLs returning error 404 intermittently.
First of all, check to see if the log mentions anything else about fetch trouble. If what's there is not helpful, the root cause may be more obvious if you follow these steps:
- Disable in-place resource optimization temporarily.
- Clear the PageSpeed cache.
- Open a test page with a ?PageSpeedFilters=+debug query parameter, reload it a few times, and see if resources are getting optimized and, if not, whether there is an error message.
- Revert the config changes.
Most likely you will need to configure an origin domain to specify the host or IP from which to fetch resources.
You may also want to consider using LoadFromFile functionality, as that performs much better if your resources are static.
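A minimal LoadFromFile sketch, following the Apache/Nginx directive conventions used elsewhere in this FAQ (the URL prefix and directory here are hypothetical; map your own static-resource prefix to the directory that backs it):

```
# Apache:
ModPagespeedLoadFromFile "http://www.example.com/static/" "/var/www/static/"

# Nginx:
pagespeed LoadFromFile "http://www.example.com/static/" "/var/www/static/";
```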
Sometimes pages are served as partially optimized. How can I achieve a more steady optimization output?
There are two (relatively) common situations that may lead to a fluctuating level of optimization in the output:
- Low traffic, combined with short HTTP expiry times for image, CSS, and JavaScript responses: consider increasing the HTTP expiry applied to the original resources, or set up LoadFromFile to allow the module to load static files directly from disk.
- High cache churn rates: if PageSpeed's caches are sized too small, optimized assets falling out of the cache may cause frequent reoptimization. Sizing the cache to 3 to 4 times the size of the original assets of the website should allow the module to cache all the original resources plus multiple optimized variants (for serving different user-agents and screen sizes).
- If the above did not help, the admin pages offer various tools that may assist in diagnosing what happens.
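Following the 3-4x rule of thumb above, a site with roughly 250MB of original assets might size the file cache around 1GB. A sketch in this FAQ's Apache/Nginx conventions (the value is illustrative and given in KB; verify the directive against your version's documentation):

```
# Apache:
ModPagespeedFileCacheSizeKb 1024000

# Nginx:
pagespeed FileCacheSizeKb 1024000;
```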