There has been a lot of discussion about gzipping Play's responses (i.e., HTML rendered from a template). Apparently the developers don't care to add it; at any rate, Page Speed reports that the response in 2.0.3 is not compressed and that my app is generating 70% unnecessary traffic.
I'm using lighttpd as a reverse proxy. mod_compress is enabled for both "text/html" and "text/html; charset=utf8", but Play's response is still not compressed.
Is there any way to do this via the reverse proxy?
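For what it's worth, lighttpd's mod_compress only compresses static files it reads from disk, so it never touches what comes back from the proxied Play app. A sketch of an alternative, assuming lighttpd 1.4.56 or newer, where mod_deflate can compress dynamic (including proxied) responses:

# mod_compress handles static files only; mod_deflate (lighttpd 1.4.56+)
# compresses dynamic responses, including those from a proxied backend
server.modules += ( "mod_deflate" )
deflate.mimetypes         = ( "text/html", "text/plain" )
deflate.allowed-encodings = ( "gzip", "deflate" )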
I have a single-page application that depends on a JavaScript bundle to work. To fetch this bundle's CDN (CloudFront) URL, I make a call to an AWS API Gateway endpoint, which returns an HTTP 302 response whose Location header is the CDN URL. The CDN URL's response carries Cache-Control headers with a sufficiently large max-age value. Other browsers such as Chrome and Firefox honor this and cache the CDN URL's response for subsequent requests, but Safari (version 12) does not. It does, however, cache the response when I request the CDN URL directly. Do I need to add more headers or some additional metadata to the 302 response to make this work in Safari?
I tried fiddling with the Cache-Control parameters, for example adding 'immutable', but nothing worked. I googled quite a lot about this issue but nothing concrete turned up.
I expected Safari to cache the response with just the max-age parameter present in the CDN's response, but it never does.
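For reference, the exchange looks roughly like this (the hostnames and values here are made up):

HTTP/1.1 302 Found
Location: https://d111111abcdef8.cloudfront.net/bundle.abc123.js

HTTP/1.1 200 OK
Content-Type: application/javascript
Cache-Control: public, max-age=31536000, immutable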
Is it possible to use non-SSL (plain HTTP) sources with HLS on a page and playlist served via HTTPS?
I have a page served over HTTPS. It uses Video.js to play a .m3u8 playlist. The playlist is fetched from the same server over HTTPS and is dynamically generated. The individual .ts segments within the playlist are stored on a CDN.
I'm finding that the overhead of an SSL handshake for each .ts GET request is high. I'd like to make the .ts GETs use plain HTTP instead -- the video content is not sensitive (and if it were, HLS supports symmetric AES encryption, which is significantly faster than the asymmetric SSL handshake).
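For illustration, the generated playlist references the segments over plain HTTP, roughly like this (the durations and the second segment name are hypothetical; the host matches the error below):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXTINF:10.0,
http://foo.com/20180110144476.ts
#EXTINF:10.0,
http://foo.com/20180110144477.ts
#EXT-X-ENDLIST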
However, Chrome is refusing to load the .ts segments from a non-SSL HTTP source:
video.js:26948 Mixed Content: The page at 'https://localhost' was
loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint
'http://foo.com/20180110144476.ts'. This request has been blocked;
the content must be served over HTTPS.
Adding a content security policy does not help:
<meta http-equiv="Content-Security-Policy" content="connect-src http://foo.com 'self';">
Since the .ts files are fetched via XMLHttpRequest, they're considered active mixed content, and modern browsers block such requests by default.
The CSP connect-src directive further restricts the origins you can connect to; it won't let you bypass the mixed-content check.
I'm afraid the only way is to serve everything over either HTTPS or HTTP.
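That said, if the CDN can also serve the segments over HTTPS, you can have the browser rewrite the insecure URLs for you with the upgrade-insecure-requests directive (everything is still fetched over HTTPS; it just spares you from regenerating the playlist):

<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">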
I have a server REST API that returns JSON responses. I want to chunk them on the server to improve response time.
Is there a way for a reverse proxy such as Apache or Nginx (or any other) to intercept the response, gzip the chunks, and send them back to the client still chunked?
I got something working by gzipping the content before chunking it directly inside my API server, and I'm just wondering if there's any other option available that would improve my server's response time.
I think this is possible, according to some other Stack Overflow questions I have seen answered.
https://serverfault.com/questions/159313/enabling-nginx-chunked-transfer-encoding/187573#187573
According to the above, you can disable proxy_buffering in your nginx configuration, and nginx will gzip the output if configured to do so.
As noted on that page, there are possible disadvantages, so test to make sure this behavior is appropriate for your setup.
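A minimal nginx sketch of that setup (the upstream name and the path are assumptions); since the compressed length isn't known up front, nginx keeps the response chunked:

location /api/ {
    proxy_pass http://backend;    # assumed upstream name
    proxy_http_version 1.1;
    proxy_buffering off;          # stream upstream chunks as they arrive
    gzip on;                      # compress on the way out
    gzip_types application/json;  # gzip JSON, not just the default text/html
}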
We have recently fixed a nagging error on our website similar to the one described in How to stop javascript injection from vodafone proxy? - basically, the Vodafone mobile network was vandalizing our pages in transit, making edits to the JavaScript that broke our viewmodels.
Adding a "Cache-Control: no-transform" header to the page that was experiencing the problem fixed it, which is great.
However, we are concerned that as we do more client-side development using JavaScript MVP techniques, we may see it again.
Is there any reason not to add this header to every page served up by our site?
Are there any useful transformations that this will prevent? Or is it basically just similar examples of carriers making ham-fisted attempts to minify things and potentially breaking them in the process?
The reason not to add this header is performance: speed and data transfer.
Some proxy/CDN services transcode media, so if your client is behind such a proxy or you are using a CDN, the client may get faster loads and use less data. This header orders the proxy/CDN not to transform the media and to leave the data as-is.
So if you don't care about that, or your app doesn't serve many media files like images or music, or you don't want any transformation of your traffic, there is no reason not to add it (on the contrary, it's recommended).
See the RFC here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5
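For reference, sending it site-wide is a one-liner in most servers; an nginx sketch (note that an add_header inside a location block replaces ones inherited from the server level):

# attach Cache-Control: no-transform to every response
add_header Cache-Control "no-transform" always;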
Google has recently introduced the Google Web Light service, so if your pages carry the "Cache-Control: no-transform" header directive you'll be opting out of having your pages transcoded when the connection comes from a mobile device on a slow connection.
More info here:
https://support.google.com/webmasters/answer/6211428?hl=en
I have my website configured to serve static content using gzip compression, like so:
<link rel='stylesheet' href='http://cdn-domain.com/css/style.css.gzip?ver=0.9' type='text/css' media='all' />
I don't see any website doing anything similar. So, the question is, what's wrong with this? Am I to expect shortcomings?
Precisely, as I understand it, most websites are configured to serve normal static files (.css, .js, etc.) and gzipped content (.css.gz, .js.gz, etc.) only if the request comes with an Accept-Encoding: gzip header. Why should they be doing this when all browsers support gzip just the same?
PS: I am not seeing any performance issues at all because all the static content is gzipped prior to uploading it to the CDN which then simply serves the gzipped files. Therefore, there's no stress/strain on my server.
Supporting Content-Encoding: gzip isn't a requirement of any current HTTP specification; that's why there is a trigger in the form of the request header.
In practice? If your audience is using a web browser and you are only worried about legitimate users, then there is very, very slim to no chance that anyone will actually be affected by only having preprocessed gzipped versions available. It's a remnant of a bygone age. Browsers these days should handle being force-fed gzipped content even if they don't request it, as long as you also provide them correct headers for the content being given to them.

It's important to realise that an HTTP request/response is a conversation, and that most of the headers in a request are just that: a request. For the most part, the server on the other end is under no obligation to honor any particular header, and as long as it returns a valid response that makes sense, the client on the other end should do its best to make sense of what was returned. This includes enabling gzip if the server responds that it has used it.
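Concretely, the "correct headers" part for a force-fed gzipped stylesheet looks like this (Vary is worth sending anyway, in case a shared cache sits in between):

HTTP/1.1 200 OK
Content-Type: text/css
Content-Encoding: gzip
Vary: Accept-Encoding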
If your target is machine consumption, however, then be a little wary. People still sometimes think it's a smart idea to write their own HTTP/SMTP/etc. parsers, even though the topic has been done to death in multiple libraries for pretty much every language out there. The libraries should all support gzip just fine, but hand-rolled parsers usually won't.