Can you use gzip over SSL? And Connection: Keep-Alive headers

I'm evaluating the front-end performance of a secure (SSL) web app here at work, and I'm wondering if it's possible to compress text files (html/css/javascript) over SSL. I've done some googling around but haven't found anything specifically related to SSL. If it is possible, is it even worth the extra CPU cycles, given that responses are also being encrypted? Would compressing responses hurt performance?
Also, I want to make sure we're keeping the SSL connection alive so we're not making SSL handshakes over and over. I'm not seeing Connection: Keep-Alive in the response headers. I do see Keep-Alive: 115 in the request headers, but that only asks to keep the connection alive for 115 seconds (it seems like the app server is closing the connection after a single request is processed?). Wouldn't you want the server to set that response header for as long as the session inactivity timeout?
I understand browsers don't cache SSL content to disk, so we're serving the same files over and over on subsequent visits even though nothing has changed. The main optimization recommendations are reducing the number of HTTP requests, minification, moving scripts to the bottom, image optimization, and possibly domain sharding (though we'd need to weigh the cost of another SSL handshake), things of that nature.

Yes, compression can be used over SSL; it takes place before the data is encrypted, so it can help over slow links. Note, however, that compressing responses over SSL opens up a vulnerability (see the BREACH/CRIME discussion below), so it is generally considered a bad idea unless applied selectively.
After the initial handshake, SSL is less of an overhead than many people think* - even if the client reconnects, there's a session-resumption mechanism to continue existing sessions without renegotiating keys, resulting in less CPU usage and fewer round trips.
Load balancers can screw with the continuation mechanism, though: if requests alternate between servers then more full handshakes are required, which can have a noticeable impact (~few hundred ms per request). Configure your load balancer to forward all requests from the same IP to the same app server.
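If you want to check whether resumption is actually surviving your load balancer, here is a rough sketch using Python's standard ssl module (example.com and port 443 are placeholders for your own host):

```python
# Rough check of TLS session resumption: connect twice, offering the first
# session back on the second connection. Note that with TLS 1.3 the session
# ticket may arrive after the handshake, so results can vary.
import socket
import ssl

HOST, PORT = "example.com", 443   # placeholder: point at your own host
ctx = ssl.create_default_context()

# First connection: full handshake; remember the negotiated session.
with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        session = tls.session

# Second connection: offer the old session back to the server.
with socket.create_connection((HOST, PORT)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST, session=session) as tls:
        print("session reused:", tls.session_reused)
```

If "session reused" keeps coming back False on repeated runs, full handshakes are likely happening on every connection.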
Which app server are you using? If it can't be configured to use keep-alive, compress files and so on then consider putting it behind a reverse proxy that can (and while you're at it, relax the cache headers sent with static content - HttpWatchSupport's linked article has some useful hints on that front).
(*SSL hardware vendors will say things like "up to 5 times more CPU" but some chaps from Google reported that when Gmail went to SSL by default, it only accounted for ~1% CPU load)
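As a quick sanity check that a server actually compresses responses over HTTPS, you can advertise gzip support yourself and look at what comes back; a minimal sketch with the Python standard library (www.example.com is a placeholder for the site you want to test):

```python
# Advertise gzip support and inspect the Content-Encoding of the response.
import http.client

conn = http.client.HTTPSConnection("www.example.com")  # placeholder host
conn.request("GET", "/", headers={"Accept-Encoding": "gzip, deflate"})
resp = conn.getresponse()
print("Status:          ", resp.status)
print("Content-Encoding:", resp.getheader("Content-Encoding"))  # 'gzip' if compressed
conn.close()
```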

You should probably never use TLS compression. Some user agents (at least Chrome) will disable it anyway.
You can selectively use HTTP compression
You can always minify
Let's talk about caching too
I am going to assume you are using an HTTPS-everywhere style web site.
Scenarios (a small header sketch for each case follows this list):
Static content like css or js:
Use HTTP compression
Use minification
Long cache period (like a year)
etag is only marginally useful (due to long cache)
Include some sort of version number in the URL in your HTML pointing to this asset so you can cache-bust
HTML content with ZERO sensitive info (like an About Us page):
Use HTTP compression
Use HTML minification
Use a short cache period
Use etag
HTML content with ANY sensitive info (like a CSRF token or bank account number):
NO HTTP compression
Use HTML minification
Cache-Control: no-store, must-revalidate
etag is pointless here (due to revalidation)
some logic to redirect after session timeout (taking multiple tabs into account). If someone presses the browser's Back button, the sensitive info is not redisplayed, thanks to the cache header.
You can use HTTP compression with sensitive data IF:
You never return user input in the response (got a search box? don't use HTTP compression)
Or you do return user input in the response but randomly pad the response
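Here is the header sketch mentioned above: a minimal illustration of the three policies, using Flask purely as an example (the routes, payloads, and max-age values are made up; HTTP compression itself would typically be configured in the web server in front):

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/assets/<version>/app.js")        # version in the URL is the cache-buster
def static_asset(version):
    resp = make_response("/* bundled, minified js */")
    resp.headers["Content-Type"] = "application/javascript"
    resp.headers["Cache-Control"] = "public, max-age=31536000"   # ~1 year
    return resp

@app.route("/about")                          # HTML with zero sensitive info
def about():
    resp = make_response("<html>About us</html>")
    resp.headers["Cache-Control"] = "public, max-age=300"        # short cache
    resp.add_etag()
    return resp

@app.route("/account")                        # HTML with sensitive info
def account():
    resp = make_response("<html>balance, CSRF token, ...</html>")
    resp.headers["Cache-Control"] = "no-store, must-revalidate"  # never stored
    # Per the list above, HTTP compression stays off for this response.
    return resp
```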

Using compression with SSL opens you up to vulnerabilities like BREACH, CRIME, and other chosen-plaintext attacks.
You should disable compression, as SSL/TLS currently has no way to mitigate these length-oracle attacks.
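To make the "length oracle" concrete, here is a toy sketch (illustration only, not an exploit): when attacker-controlled input is reflected into the same compressed body as a secret, a correct guess compresses measurably better, and that difference is visible from the ciphertext length alone.

```python
import zlib

SECRET = "csrf_token=f81a0c9e"   # made-up secret embedded in the page

def compressed_length(reflected_input: str) -> int:
    # The page reflects attacker input alongside the secret, then the body
    # is compressed before encryption -- encryption hides the content,
    # but not the length.
    page = f"<html>{reflected_input} ... {SECRET}</html>"
    return len(zlib.compress(page.encode()))

print(compressed_length("csrf_token=f81a0c9e"))  # correct guess: long back-reference, shorter
print(compressed_length("csrf_token=zzzzzzzz"))  # wrong guess: extra literals, longer
# Real attacks (CRIME/BREACH) refine the guess a character at a time.
```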

To your first question: SSL works at a different layer than compression; in a sense, the two are web server features that work together without overlapping. Yes, by enabling compression you'll use more CPU on your server, but you'll send less outgoing traffic, so it's a tradeoff.
To your second question: Keep-Alive behavior really depends on the HTTP version. You could also move your static content (images, video, audio, etc.) to a non-SSL server.

Related

Benefits of mod_pagespeed over mod_deflate

The SO post here explains what mod_pagespeed does, but I'm wondering if I would notice any significant difference in page load time with this installed on a server that is already using mod_deflate to compress files.
If it is worth installing, are there any special considerations to take into account with regards to configuration when running both modules, or should one replace the other? The server is running EasyApache4.
Yes, you will, because these modules do different things.
mod_deflate handles data compression
The mod_deflate module provides the DEFLATE output filter that allows output from your server to be compressed before being sent to the client over the network.
Simply put, its sole purpose is to reduce the number of bytes your server sends, regardless of what kind of data is being sent.
mod_pagespeed performs optimizations that speed up the resulting page from the end user's perspective by following a bunch of web page optimization best practices
Here's a simple example:
imagine we have 1 html page and 1 small external javascript file
if we use mod_deflate, both of them will be gzipped, BUT the browser would need to make 2 HTTP requests to fetch them
mod_pagespeed may decide it's worth inlining the contents of this js file into the .html page
if we use mod_deflate together with mod_pagespeed, the resulting number of bytes downloaded would be the same, BUT the page would render faster as it needs only a single HTTP request
Such optimizations of the original .html page and its dependent resources can make a huge difference in load time, especially on slow mobile networks
So the idea is to always enable mod_deflate and either apply these best practices manually or use mod_pagespeed, which applies them automatically.
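As a toy illustration of the kind of rewrite mod_pagespeed does (this is not how the module is implemented, and the 2 KB threshold is arbitrary), inlining a small external script removes one request without changing the byte count much:

```python
import re
from pathlib import Path

INLINE_LIMIT = 2048  # bytes; arbitrary threshold for "small enough to inline"

def inline_small_scripts(html: str, base_dir: Path) -> str:
    """Replace <script src="small.js"></script> tags with inline <script> blocks."""
    def replace(match: re.Match) -> str:
        src = base_dir / match.group(1)
        if src.is_file() and src.stat().st_size <= INLINE_LIMIT:
            return "<script>" + src.read_text() + "</script>"
        return match.group(0)          # leave large or missing files untouched
    return re.sub(r'<script src="([^"]+)"></script>', replace, html)

# Usage sketch: one HTML page referencing one small js file becomes a single
# self-contained response, so the browser makes 1 request instead of 2.
# html = Path("page.html").read_text()
# print(inline_small_scripts(html, Path(".")))
```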

How can I use gzip with SSL, or any alternatives?

Google now treats HTTP as insecure (check here), and in Chrome we see warning messages when we access an HTTP site. And now we have free SSL certificates from Let's Encrypt, so I assume nearly every server will be using HTTPS.
Then I found that using gzip with SSL has a security issue, called the BREACH attack. So I really wonder: how can we achieve the purpose of gzip while using SSL?
This matters especially with Angular, where the built output is quite large. For now I have a main file (the Angular code itself), a styles file (CSS/SCSS/whatever bundled with Webpack), and a scripts file (external JavaScript files). For my application it looks like the following (Angular 2.3.1, AoT, production build):
main.js: 739K
main.js.gz: 151K
styles.js: 394K
styles.js.gz: 100K
scripts.js: 1.8M
scripts.js.gz: 415K
For the main and styles files, it seems okay without gzip. But the scripts file is really big without gzip: 1.8 megabytes would definitely be heavy for mobile.
And my application uses WebRTC, which requires HTTPS, so I'm kind of stuck. Is there any good solution?
The BREACH attack is only a problem for content that contains secrets the attacker would like to guess (like CSRF tokens) and in which attacker-controlled data is also reflected. Static JavaScript files and other static files don't have this property, so they can safely be compressed. See also "Is gzipping content via TLS allowed?" or "Current State of BREACH (GZIP SSL Attack)?"
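Since those Angular bundles are static, one option is to gzip them once at build time and let the web server serve the .gz siblings with Content-Encoding: gzip. A rough sketch (the dist/ path is an assumption about where the build output lives):

```python
import gzip
import shutil
from pathlib import Path

DIST = Path("dist")   # assumed Angular build output directory

# Precompress bundles once instead of compressing them on every request.
for asset in DIST.rglob("*"):
    if asset.suffix in {".js", ".css", ".html"}:
        gz_path = asset.with_name(asset.name + ".gz")
        with asset.open("rb") as src, gzip.open(gz_path, "wb") as dst:
            shutil.copyfileobj(src, dst)
        print(f"{asset} -> {gz_path}")
```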

How to forcefully flush HTTP headers in Apache httpd?

I need to periodically generate HTTP headers for clients, and those headers need to be flushed to the client directly after each header is created. I can't wait for a body or anything else: I create a header and I want Apache httpd to send it to the client.
I've already tried autoflush, manual flushing, large header data (around 8k), disabling the deflate modules, and whatever else could stand in my way, but httpd seems to ignore my wishes until all headers are created and only afterwards flushes them. Depending on how fast I generate headers, the httpd process even grows to some hundreds of megabytes of memory, so it seems to buffer all the headers.
Is there any way to get httpd to flush individual headers or is it impossible?
The answer is to use NPH scripts, which by default bypass the web server's buffer. You need to name the script nph-*, and the web server should then stop buffering headers and send them to the client exactly as they are printed, as soon as they are printed. This works in my case, though with Apache httpd one needs to be careful:
Apache2 sends two HTTP headers with a mapped "nph-" CGI
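For reference, a minimal NPH-style sketch in Python (assuming CGI execution is enabled and the file is named something like nph-stream.cgi; the X-Progress header name is made up): with the nph- prefix the server does not parse or buffer the output, so the script writes the status line itself and flushes each header as it is produced.

```python
#!/usr/bin/env python3
# Hypothetical nph-stream.cgi: because of the "nph-" prefix, the server passes
# our output through untouched, so we emit the status line ourselves and
# flush after every header we want pushed to the client right away.
import sys
import time

out = sys.stdout
out.write("HTTP/1.1 200 OK\r\n")
out.write("Content-Type: text/plain\r\n")
out.flush()

for i in range(5):
    out.write(f"X-Progress-{i}: generated\r\n")   # made-up header name
    out.flush()                                   # push this header immediately
    time.sleep(1)

out.write("\r\n")    # blank line ends the header block
out.write("done\n")
out.flush()
```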

Apache statically and dynamically compressed content

I have a website which has both dynamically generated (PHP) and static content. Setting up Apache to transparently compress everything in accordance with content negotiation is a trifle.
However, I am interested in not compressing static content that rarely, if ever, changes, but instead serving precompressed data in an "asis" manner.
The idea behind this is to reduce latency and save CPU power, and at the same time compress better. Basically, instead of compressing the same data over and over again, I would like the server to sendfile the contents without touching it, but with proper headers. And, ideally, it would work seamlessly with .html and .html.gz files, using transparent compression in one case and none in the other.
There is mod_asis, but this will not provide proper headers (most importantly the ones affecting cache and proxy operation), and it is agnostic of content negotiation. Adding a content-encoding for .gz seems to be the right thing, but it does nothing; the .html.gz pages appear as downloads (maybe this interferes with some default typemap?).
It seems that the gatling webserver does just what I want in this respect, but I'd really prefer staying with Apache, because despite anything one could blame Apache for, it's the one mainstream server that has been working kind of OK for many years.
Another workaround would be to serve the static content with another server on a different port or subdomain, but I'd prefer if it just worked "invisibly", and if the system was not made more complex than necessary.
Is there a well-known configuration idiom that makes Apache behave in the way indicated?
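To make the desired behaviour concrete, here is a toy sketch of that negotiation using only Python's standard library (not the Apache idiom being asked about; the port and the .html-only restriction are arbitrary): serve foo.html.gz as-is with Content-Encoding: gzip when the client accepts gzip, and fall back to the plain file otherwise.

```python
import http.server
from pathlib import Path

class PrecompressedHandler(http.server.SimpleHTTPRequestHandler):
    def send_head(self):
        path = Path(self.translate_path(self.path))
        gz = Path(str(path) + ".gz")
        accepts_gzip = "gzip" in self.headers.get("Accept-Encoding", "")
        if accepts_gzip and path.suffix == ".html" and gz.is_file():
            # Send the precompressed sibling untouched, with proper headers.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Encoding", "gzip")
            self.send_header("Content-Length", str(gz.stat().st_size))
            self.send_header("Vary", "Accept-Encoding")
            self.end_headers()
            return gz.open("rb")      # do_GET copies this straight to the socket
        return super().send_head()    # otherwise: normal (uncompressed) handling

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), PrecompressedHandler).serve_forever()
```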

Accept-Encoding headers being sent by browser but not received by server

I have been trying to debug this for weeks. All of the browsers on all of the clients on my home network are sending 'Accept-Encoding: gzip,deflate'. However, that header is somehow, somewhere being dropped before the request makes it to a web server. For example, http://www.whatsmyip.org/http_compression/ says 'No, your browser is not requesting compressed content'.
I've used Fiddler to make sure that all of my browsers are indeed sending the header. I've swapped out my router. I've turned off all anti-virus software.
Brighthouse/Roadrunner (the local cable ISP) says they are not doing any filtering (and I can't see why they would in this case).
Any suggestions would be most welcome!
Try it with HTTPS.
If you are browsing a site via HTTPS, nothing between your browser and the web server can alter any HTTP-level aspect of the request or response, including whether compression is enabled, without you having immediate and clear knowledge of that fact (check the site's certificate in your browser address bar and see if it's legit).
I had the Accept-Xncoding issue and determined it was CA Internet Security Suite causing it. Disabling it wasn't enough; you had to uninstall it and then clear the IE cache.
Check your antivirus software. It's probably intercepting your outbound traffic and modifying headers on the fly in order to get uncompressed content. Lazy programmers don't like to include decompression methods themselves, or deal with chunked encoding.
Norton Internet Security will overwrite the Accept-Encoding header with this line:
---------------: ----- -------
McAfee overwrites with this:
X-McProxyFilter: *************
something I haven't identified yet overwrites with this:
Accept-Xncoding: gzip, deflate
You are probably in the same boat. I read Zone Alarm wipes out the encoding header entirely (which means recalculating the size of the packet, but why should they care how much load they introduce on your system?). If you're running Zone Alarm, turn off the 'internet privacy option' or whatever it is, and try again.
Every time I've seen this problem, it has been the result of shitty antivirus software. Completely disabling someone's ability to receive compressed content without letting them know is dirty.