Google now treats HTTP as insecure (check here), and Chrome shows a warning when we access an HTTP site. And now we have free SSL certificates from Let's Encrypt. So I assume we will be using HTTPS for nearly every server.
Then I found that using gzip together with SSL has a security issue, the BREACH attack. So I really wonder: how can we get the benefit of gzip while using SSL?
Especially with Angular, the built output is quite large; right now I have a main bundle (the Angular application code), a styles bundle (the CSS/SCSS/whatever that Webpack bundles), and a scripts bundle (external JavaScript files). For my application it looks like this (Angular 2.3.1, AoT, production build):
main.js: 739K
main.js.gz: 151K
styles.js: 394K
styles.js.gz: 100K
scripts.js: 1.8M
scripts.js.gz: 415K
For the main and styles files it seems okay even without gzip. But the scripts file is really big uncompressed: 1.8 megabytes... that would definitely be heavy on mobile.
But my application uses WebRTC, which requires HTTPS, so I'm kind of stuck. Is there any good solution?
The BREACH attack is only a problem for content which contains secrets the attacker would like to guess (like CSRF tokens) and where attacker-controlled data is also reflected in the content. Static JavaScript files and other static files don't have this property, so they can safely be compressed. See also Is gzipping content via TLS allowed? or Current State of BREACH (GZIP SSL Attack)?
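In practice that means you can enable compression for static asset types and leave it off for dynamic responses that might reflect user input. A minimal Apache sketch of that idea (Apache chosen only as an example; it assumes mod_deflate and mod_setenvif are available, and the /api/ and .php patterns are just placeholders for your dynamic endpoints):

# Compress static asset types only (note that text/html is deliberately absent)
AddOutputFilterByType DEFLATE text/css application/javascript image/svg+xml

# Never compress dynamic responses that may reflect user input
SetEnvIf Request_URI "^/api/" no-gzip
SetEnvIf Request_URI "\.php$" no-gzip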
Related
I am rather new to TYPO3. Recently I noticed some very weird behavior in my installation: some CSS files in the directory typo3temp/assets/compressed got the MIME type text/html instead of the expected text/css. My browser therefore received a 403 Forbidden status code from the webserver for these resources, which resulted in some parts of the backend being shown without styling.
I tried clearing all caches and deleting the typo3temp/assets/compressed directory, but now everything in there (CSS and JS) is served with the MIME type text/html. Getting the backend without JavaScript means that I am now basically locked out of the backend. I can, however, still reach and use the install tool.
Do you have any ideas how this might happen and how to fix it?
Some details of my setup:
TYPO3 v10.4.13 (recently updated from 10.4.9)
Apache web server (I don't have access to its config and have to rely on .htaccess files)
I suggest setting
TYPO3_CONF_VARS/FE/compressionLevel=0
TYPO3_CONF_VARS/BE/compressionLevel=0
in order not to have this kind of problem. The issue is that this feature creates compressed files but relies on webserver configuration to deliver them as text/css and to NOT apply the webserver's default transport compression to them (otherwise they can end up double-compressed, and you might not even notice easily: some browsers can deal with that, others can't).
It is a kind of micro-optimization that sounded useful in times when we avoided https:// because of the processing overhead...
Here are some docs (the first statement is outdated in my opinion): https://docs.typo3.org/m/typo3/reference-skinning/master/en-us/BackendCssApi/CssCompression/Index.html
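For reference, the webserver-side part that this feature depends on looks roughly like the following in .htaccess (a sketch only; it assumes the generated files end in .gz, so check the actual file names in typo3temp/assets/compressed):

# Serve the pre-compressed files with the right type and encoding
<FilesMatch "\.css\.gz$">
    ForceType text/css
</FilesMatch>
<FilesMatch "\.js\.gz$">
    ForceType application/javascript
</FilesMatch>
AddEncoding gzip .gz

# Keep mod_deflate from compressing them a second time
SetEnvIf Request_URI "\.gz$" no-gzip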
I have a website which has both dynamically generated (PHP) and static content. Setting up Apache to transparently compress everything in accordance with content negotiation is a trifle.
However, I am interested in not compressing static content that rarely, if ever, changes, but instead serving precompressed data in an "asis" manner.
The idea behind this is to reduce latency and save CPU power, and at the same time compress better. Basically, instead of compressing the same data over and over again, I would like the server to sendfile the contents without touching it, but with proper headers. And, ideally, it would work seamlessly with .html and .html.gz files, using transparent compression in one case and none in the other.
There is mod_asis, but it does not provide proper headers (most importantly the ones affecting cache and proxy operation), and it is agnostic of content negotiation. Adding a content-encoding for .gz seems to be the right thing, but it does nothing; the .html.gz pages appear as downloads (maybe this interferes with some default typemap?).
It seems that the gatling webserver does just what I want in this respect, but I'd really prefer staying with Apache, because despite anything one could blame Apache for, it's the one mainstream server that has been working kind of OK for many years.
Another workaround would be to serve the static content with another server on a different port or subdomain, but I'd prefer if it just worked "invisibly", and if the system was not made more complex than necessary.
Is there a well-known configuration idiom that makes Apache behave in the way indicated?
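One idiom that is commonly suggested for this (a sketch only, not tested against this exact setup; it assumes mod_rewrite and mod_headers, and that each foo.html has a pre-compressed sibling foo.html.gz) is to rewrite to the .gz variant when the client accepts gzip and the file exists:

RewriteEngine On
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}\.gz -f
RewriteRule ^(.+\.html)$ $1.gz [L]

<FilesMatch "\.html\.gz$">
    ForceType text/html
    Header set Content-Encoding gzip
    Header append Vary Accept-Encoding
</FilesMatch>

Requests without gzip support fall through to the plain .html file, which Apache can still compress on the fly (or not) as configured.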
I'm evaluating the front end performance of a secure (SSL) web app here at work and I'm wondering if it's possible to compress text files (html/css/javascript) over SSL. I've done some googling around but haven't found anything specifically related to SSL. If it's possible, is it even worth the extra CPU cycles since responses are also being encrypted? Would compressing responses hurt performance?
Also, I want to make sure we're keeping the SSL connection alive so we're not doing SSL handshakes over and over. I'm not seeing Connection: Keep-Alive in the response headers. I do see Keep-Alive: 115 in the request headers, but that only keeps the connection alive for 115 seconds (it seems like the app server is closing the connection after a single request is processed?). Wouldn't you want the server to set that response header for as long as the session inactivity timeout?
I understand browsers don't cache SSL content to disk so we're serving the same files over and over and over on subsequent visits even though nothing has changed. The main optimization recommendations are reducing the number of http requests, minification, moving scripts to bottom, image optimization, possible domain sharding (though need to weigh the cost of another SSL handshake), things of that nature.
Yes, compression can be used over SSL; it takes place before the data is encrypted, so it can help over slow links. It should be noted, though, that this is generally a bad idea: it also opens a vulnerability.
After the initial handshake, SSL is less of an overhead than many people think* - even if the client reconnects, there's a mechanism to continue existing sessions without renegotiating keys, resulting in less CPU usage and fewer round-trips.
Load balancers can screw with the continuation mechanism, though: if requests alternate between servers then more full handshakes are required, which can have a noticeable impact (~few hundred ms per request). Configure your load balancer to forward all requests from the same IP to the same app server.
Which app server are you using? If it can't be configured to use keep-alive, compress files and so on then consider putting it behind a reverse proxy that can (and while you're at it, relax the cache headers sent with static content - HttpWatchSupport's linked article has some useful hints on that front).
(*SSL hardware vendors will say things like "up to 5 times more CPU" but some chaps from Google reported that when Gmail went to SSL by default, it only accounted for ~1% CPU load)
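To make the reverse-proxy suggestion concrete, here is a minimal Apache virtual-host sketch (assuming mod_proxy_http, mod_deflate and mod_expires are loaded, and that the app server listens on localhost:8080, which is just a placeholder):

KeepAlive On

# Pass requests through to the app server
ProxyPreserveHost On
ProxyPass        / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/

# Compress text responses on the way out
AddOutputFilterByType DEFLATE text/html text/css application/javascript

# Relax cache headers for static content
<LocationMatch "\.(css|js|png|jpg|gif)$">
    ExpiresActive On
    ExpiresDefault "access plus 8 hours"
</LocationMatch>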
You should probably never use TLS-level compression. Some user agents (at least Chrome) will disable it anyway.
You can selectively use HTTP compression
You can always minify
Let's talk about caching too
I am going to assume you are using an HTTPS Everywhere style web site.
Scenarios (a rough Apache sketch tying these together appears at the end of this answer):
Static content like css or js:
Use HTTP compression
Use minification
Long cache period (like a year)
etag is only marginally useful (due to long cache)
Include some sort of version number in the URL in your HTML pointing to this asset so you can cache-bust
HTML content with ZERO sensitive info (like an About Us page):
Use HTTP compression
Use HTML minification
Use a short cache period
Use etag
HTML content with ANY sensitive info (like a CSRF token or bank account number):
NO HTTP compression
Use HTML minification
Cache-Control: no-store, must-revalidate
etag is pointless here (due to revalidation)
some logic to redirect the page after session timeout (taking into account multiple tabs). If someone presses the browser's Back button, the sensitive info is not displayed due to the cache header.
You can use HTTP compression with sensitive data IF:
You never return user input in the response (got a search box? don't use HTTP compression)
Or you do return user input in the response but randomly pad the response
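Here is the rough Apache sketch mentioned above (for the main server or virtual-host config; it assumes mod_deflate, mod_expires and mod_headers are available, that assets are already minified at build time, and that sensitive pages live under a path like /account/, which is only a placeholder):

# Static assets: compress, long cache period
<FilesMatch "\.(css|js)$">
    SetOutputFilter DEFLATE
    ExpiresActive On
    ExpiresDefault "access plus 1 year"
</FilesMatch>

# Non-sensitive HTML: compress, short cache period, keep the ETag
<FilesMatch "\.html$">
    SetOutputFilter DEFLATE
    ExpiresActive On
    ExpiresDefault "access plus 10 minutes"
</FilesMatch>

# Sensitive pages: no compression, no caching
SetEnvIf Request_URI "^/account/" no-gzip
<LocationMatch "^/account/">
    Header set Cache-Control "no-store, must-revalidate"
</LocationMatch>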
Using compression with SSL opens you up to vulnerabilities like BREACH, CRIME, or other chosen-plaintext attacks.
You should disable compression, as SSL/TLS currently has no way to mitigate these length-oracle attacks.
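If you want to rule out the TLS-level part of this on Apache, there is a single mod_ssl directive for it (available in 2.4.3 and later; most modern clients refuse TLS compression anyway):

# Disable TLS-level compression (CRIME); HTTP-level compression is
# controlled separately via mod_deflate.
SSLCompression off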
To your first question: SSL works at a different layer than compression. In a sense these are two features of a web server that can work together and do not overlap. Yes, by enabling compression you'll use more CPU on your server but send less outgoing traffic, so it's a tradeoff.
To your second question: Keep-Alive behavior really depends on the HTTP version. You could also move your static content (images, video, audio, etc.) to a non-SSL server.
Is it possible to send pre-compressed files that are contained within an EAR file? More specifically, the JSP and JS files within the WAR file. I am using Apache HTTP Server as the web server, and although it is simple to turn on the deflate module and set it up to use a pre-compressed version of the files, I would like to apply this to files contained within an EAR file that is deployed to JBoss. The reason is that the content is quite static, and compressing it on the fly on each request is quite costly in terms of CPU time.
Quite frankly, I am not entirely familiar with how JBoss deploys these EAR files and 'serves' them. The gist of what I want to do is to pre-compress the files contained inside the WAR so that when they are requested they are sent back to the client with gzip as the Content-Encoding.
In theory, you could compress them before packaging them in the EAR, and then serve them up with a custom controller which adds the HTTP header telling the client they're compressed, but that seems like a lot of effort to go to.
When you say that on-the-fly compression is quite costly, have you actually measured it? Have you tried requesting a large number of uncompressed pages, measured the CPU usage, then tried it again with compressed pages? I think you may be over-estimating the impact: it uses quite low-intensity stream compression, designed to use little CPU.
You need to be very sure that you have a real performance problem before going to such lengths to mitigate it.
I don't frequent this site often and I seem to have left this thread hanging. Sorry about that. I did succeed in getting compression for my JavaScript and CSS files. What I did was pre-compress them in the Ant build process using gzip. I then had to spoof the name to get rid of the gzip extension: I had foo.js, compressed it into foo.js.gzip, and renamed foo.js.gzip back to foo.js, and that is the file that gets packaged into the WAR file. That handles the pre-compression part. To get the file served up properly, we just have to tell the browser that it is compressed, via the Content-Encoding header of the HTTP response. This was done via an output filter applied to files matching the *.js extension (some Java/JBoss, WEB-INF/web.xml if it helps; I'm not too familiar with this, so sorry guys).
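For anyone who would rather handle this in the Apache layer in front of JBoss instead of a servlet output filter, here is a rough sketch of the equivalent (it relies on the same assumption as above, namely that every *.js in the app has already been gzipped and renamed back to .js; mod_headers must be loaded, and /myapp/ is only a placeholder context path):

# All *.js under /myapp/ are assumed to be pre-gzipped files renamed to .js
<LocationMatch "^/myapp/.*\.js$">
    Header set Content-Encoding gzip
    Header append Vary Accept-Encoding
</LocationMatch>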
The webserver hosting my website is not returning last-modified or expiry headers. I would like to rectify this to ensure my web content is cacheable.
I don't have access to the apache config files because the site is hosted on a shared environment that I have no control over. I can however make configurations via an .htaccess file. The server - apache 1.3 - is not configured with mod_expires or mod_headers and the company will not install these for me.
With these limitations in mind, what are my options?
Sorry for the post here. I recognise this question is not strictly a programming question, and more a sys admin question. When serverfault is public I'll make sure I direct questions of this nature there.
What sort of content? If static (HTML, images, CSS), then really the only way to attach headers is via the front-end webserver. I'm surprised the hosting company doesn't have mod_headers enabled, although they might not enable it for .htaccess. It's costing them more bandwidth and CPU (ie, money) to not cache.
If it's dynamic content, then you'll have control when generating the page. This will depend on your language; here's an example for PHP (based on the redirect example in the PHP manual, with the response code set explicitly, which the manual's version omits):
if (!headers_sent()) {
    // Redirect and set the response code explicitly (302 Found)
    header('Location: http://www.example.com/', true, 302);
    exit;
}
Oh, and one thing about setting caching headers: don't set them for too long a duration, particularly for CSS and scripts. You may not think you want to change these, but you don't want a broken site while people still have the old content in their browsers. I would recommend maximum cache settings in the 4-8 hour range: good for a single user's session, or a work day, but not much more.