I was just doing some testing with YSlow and it's telling me:
Grade F on Compress components with gzip: There are 10 plain text
components that should be sent compressed
I know that Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate, and so the easiest solution to remedy this is to use mod_deflate on an Apache 2 server.
However, I've checked with two shared hosting companies and one local company and they've all told me that they don't support mod_deflate.
I know that some older browsers have trouble accepting gzipped / deflated content, and I'm not suggesting it be enabled by default, but are there any negatives to making mod_deflate available? Is it just the extra load on the server's processors?
Also, are there any alternatives? I saw that if you are using a CMS like WordPress, you could potentially install a caching plugin that serves gzipped, cached versions of the pages initially generated via PHP.
Compression takes CPU time. Maybe the hosting company decided they care more about CPU than network traffic. Maybe they offer it with a more expensive package. Maybe they simply didn't add it. Only your hosting company would know.
When using PHP you can check whether your PHP setup has zlib support enabled. If it does, you can call ob_start("ob_gzhandler"); in code to enable an output buffer that compresses your data, or set zlib.output_compression in your PHP configuration, for instance with php_flag zlib.output_compression on in your .htaccess file.
http://php.net/ob_gzhandler
http://php.net/zlib.output-compression
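For instance, a minimal sketch of both approaches (assuming zlib support is compiled into PHP and that the host allows php_flag overrides in .htaccess):

<?php
// At the very top of the script, before any output is sent:
// compress the output buffer unless zlib.output_compression already does it.
if (!ini_get('zlib.output_compression')) {
    ob_start('ob_gzhandler');
}
echo '...page content...';

or, in .htaccess:

php_flag zlib.output_compression on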
I'm using nginx/1.10-3 and apache2-2.4.25-3 on Debian.
For many reasons I won't go into, I'm looking to switch from Apache2 to NGINX.
My CMS has many files that either have no extension, or have the wrong extension in terms of lining up with mime.types.
With Apache2 we rely on mod_mime_magic to override the extension and use the magic bytes to set the Content-Type correctly. However, I can't seem to find a way to do this with NGINX.
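For reference, the Apache side is roughly this (the module path and magic file location are our distro's defaults, so treat them as assumptions):

# Load mod_mime_magic and point it at the magic database, so Apache
# sniffs the leading bytes when the extension is missing or wrong
LoadModule mime_magic_module modules/mod_mime_magic.so
MimeMagicFile conf/magic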
For example, we have images that end in .img and some files with no extension at all.
The only solution I can come up with is to integrate extension rewriting/adding into the platform, changing the extensions on upload and going through the existing files. That will take a lot more time, though.
Is there a "hack" or an alternative to mime magic with NGINX?
Thanks
The SO post here explains what mod_pagespeed does, but I'm wondering if I would notice any significant difference in page load time with this installed on a server that is already using mod_deflate to compress files.
If it is worth installing, are there any special considerations with regard to configuration when running both modules, or should one replace the other? The server is running EasyApache4.
Yes, you will, because these modules do different things.
mod_deflate handles data compression
The mod_deflate module provides the DEFLATE output filter that allows
output from your server to be compressed before being sent to the
client over the network.
Simply put, its sole purpose is to reduce the number of bytes sent from your server, regardless of what kind of data is being sent.
mod_pagespeed performs optimizations that speed up the resulting page from the end user's perspective by applying a set of web page optimization best practices.
Here's a simple example:
Imagine we have one HTML page and one small external JavaScript file.
If we use mod_deflate, both of them will be gzipped, BUT the browser still needs to make 2 HTTP requests to fetch them.
mod_pagespeed may decide it's worth inlining the contents of this JS file into the .html page.
If we use mod_deflate together with mod_pagespeed, the number of bytes downloaded is roughly the same, BUT the page renders faster because only a single HTTP request is needed.
Such optimizations of the original .html page and its dependent resources can make a huge difference in load time, especially on slow mobile networks.
So the idea is to always enable mod_deflate, and then either apply these best practices manually or use mod_pagespeed, which applies them automatically.
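As a rough sketch, running both side by side in the Apache config could look like this (the MIME types and the filter list are just examples; mod_pagespeed's defaults already enable a sensible core set of filters):

# mod_deflate: compress text responses on the wire
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>

# mod_pagespeed: rewrite the pages themselves (inlining, minifying, ...)
<IfModule pagespeed_module>
    ModPagespeed on
    ModPagespeedEnableFilters inline_css,inline_javascript
</IfModule>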
Google now treats HTTP as insecure (check here), and Chrome shows warning messages when we access an HTTP site. And now we have free SSL certificates from Let's Encrypt, so I assume nearly every server will be using HTTPS.
Then I found that using gzip over SSL has a security issue, the BREACH attack. So I really wonder: how can we get the benefit of gzip while using SSL?
Especially with Angular: the built output is quite large. Right now I have a main bundle (the @angular code), a styles bundle (CSS/SCSS/whatever, bundled with Webpack), and a scripts bundle (external JavaScript files). For my application it looks like this (Angular 2.3.1, AoT, production build):
main.js: 739K
main.js.gz: 151K
styles.js: 394K
styles.js.gz: 100K
scripts.js: 1.8M
scripts.js.gz: 415K
For the main and styles files, it seems okay even without gzip. But the scripts file is really big without gzip: 1.8 megabytes... that would definitely be heavy for mobile.
But my application uses WebRTC, which requires HTTPS. So I'm kind of stuck. Is there any good solution?
The BREACH attack is only a problem for content which contains secrets the attacker would like to guess (like CSRF tokens) and where attacker-controlled data is also reflected in the content. Static JavaScript files and other static files don't have this property, so they can safely be compressed. See also Is gzipping content via TLS allowed? or Current State of BREACH (GZIP SSL Attack)?
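In practice, that can mean compressing only the static asset types and leaving the dynamically generated HTML (where secrets and reflected input can coexist) uncompressed. A sketch, assuming Apache with mod_deflate in front:

# Compress static assets only; per-request HTML is left uncompressed
# so a BREACH attacker has nothing useful to measure there
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/css application/javascript image/svg+xml
</IfModule>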
I want to use mod_disk_cache in Apache to cache my XML feeds to a folder and serve them directly from that folder.
These are feeds dynamically created by PHP, but they don't change very often.
I want the caching at the .htaccess level to avoid any strain on, or calls to, PHP and keep server stress to a minimum.
http://httpd.apache.org/docs/2.2/mod/mod_cache.html
http://httpd.apache.org/docs/2.2/mod/mod_disk_cache.html
Has anyone done this before? Did it work for you?
I'm getting my server company to install the modules I need and can then have a go myself.
I'm hoping to use something similar to:
<IfModule mod_cache.c>
  <IfModule mod_disk_cache.c>
    # where cached responses get written on disk
    CacheRoot c:/cacheroot
    # cache everything under the site root
    CacheEnable disk /
    # shape of the cache directory tree
    CacheDirLevels 5
    CacheDirLength 3
  </IfModule>
</IfModule>
I'll be sending Expires: and Last-Modified: headers with the XML too.
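Something like this at the top of the feed script, before any output (the file name and the one-hour lifetime are just placeholders):

<?php
// Hypothetical feed script: advertise freshness so the cache can
// serve the stored copy without touching PHP again
$lastModified = filemtime('feed-source.xml');  // placeholder data source
header('Content-Type: application/xml');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 3600) . ' GMT');
header('Cache-Control: public, max-age=3600');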
Do you think this will give me the desired result, filling that cache folder and avoiding calls to PHP?
Or is this approach all wrong?
Thanks in advance for any guidance
In the past I used Apache with mod_cache in a Unix environment. It worked fine with low user load, but on days with heavy load the system went down all day.
After some tests we moved to Varnish Cache and now everything works better.
The problem is that only Unix environments are supported; a new Cygwin-based Windows version of Varnish exists, but I don't know if it's suitable for a production environment:
http://varnish-cache.org/trac/wiki/VarnishOnCygwinWindows
It's not a bad approach. I used it a long time ago, and it works.
But you should know that there are now much better alternatives for handling caches in front of an Apache server. One of these nice tools is Varnish, which gives you very fine-grained tuning.
Here's a deep explanation of why Varnish is a modern tool and why this way of using the OS (not separating memory and disk, in spirit) is good: http://www.varnish-cache.org/trac/wiki/ArchitectNotes
About the headers: use them (and other signals, like URLs) to communicate with Varnish, and let the caching tool handle the final headers sent to clients.
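For example, a minimal sketch in current VCL syntax (the backend port, URL pattern and TTL are assumptions):

vcl 4.0;

# Apache stays on an internal port; Varnish listens in front of it
backend apache {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # If the feed script didn't send caching headers, cache XML
    # responses for five minutes anyway
    if (bereq.url ~ "\.xml$" && !beresp.http.Cache-Control) {
        set beresp.ttl = 5m;
    }
}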
If you have direct access to your server, and not just restricted Apache access, try it. Now, if you can only touch the Apache configuration... but wait, c:/cacheroot, you're running a Windows server in production? You'll need a Unix-like system for Varnish, preferably 64-bit.
The webserver hosting my website is not returning Last-Modified or Expires headers. I would like to rectify this to ensure my web content is cacheable.
I don't have access to the Apache config files because the site is hosted in a shared environment that I have no control over. I can, however, make configuration changes via an .htaccess file. The server (Apache 1.3) is not configured with mod_expires or mod_headers, and the company will not install them for me.
With these limitations in mind, what are my options?
Sorry for the post here. I recognise this question is not strictly a programming question, and more a sysadmin question. When Server Fault is public I'll make sure I direct questions of this nature there.
What sort of content? If it's static (HTML, images, CSS), then really the only way to attach headers is via the front-end webserver. I'm surprised the hosting company doesn't have mod_headers enabled, although they might not enable it for .htaccess. Not caching is costing them more bandwidth and CPU (i.e., money).
If it's dynamic content, then you'll have control when generating the page. How depends on your language; here's an example for PHP (it's from the PHP manual, and it's a bad example, as it should also set the response code):
if (!headers_sent()) {
    header('Location: http://www.example.com/');
    exit;
}
Oh, and one thing about setting caching headers: don't set them for too long a duration, particularly for CSS and scripts. You may not think you'll want to change these, but you don't want a broken site while people still have the old content in their browser cache. I would recommend maximum cache settings in the 4-8 hour range: good for a single user's session, or a work day, but not much more.
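For the 4-8 hour range, something like this from the page-generating script (six hours here is arbitrary):

<?php
// Let browsers and proxies reuse the response for six hours
header('Cache-Control: public, max-age=21600');
header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 21600) . ' GMT');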