I have my website configured to serve static content using gzip compression, like so:
<link rel='stylesheet' href='http://cdn-domain.com/css/style.css.gzip?ver=0.9' type='text/css' media='all' />
I don't see any other website doing anything similar. So the question is: what's wrong with this? Should I expect any shortcomings?
Specifically, as I understand it, most websites are configured to serve normal static files (.css, .js, etc.) by default, and gzipped content (.css.gz, .js.gz, etc.) only if the request comes with an Accept-Encoding: gzip header. Why do they bother with this when all browsers support gzip just the same?
PS: I am not seeing any performance issues at all, because all the static content is gzipped prior to uploading it to the CDN, which then simply serves the gzipped files. Therefore, there's no stress/strain on my server.
Just in case it's helpful, here's the HTTP Response Header information for the gzipped CSS file:
And this for gzipped favicon.ico file:
Supporting Content-Encoding: gzip isn't a requirement of any current HTTP specification; that's why there is a trigger in the form of the request header.
In practice? If your audience is using a web browser, and you are only worried about legitimate users, then there is a very, very slim to no chance that anyone will actually be affected by only having preprocessed gzipped versions available. The negotiation is a remnant of a bygone age: browsers these days should handle being force-fed gzipped content, even if they don't request it, as long as you also provide them correct headers for the content being given to them.

It's important to realise that an HTTP request/response is a conversation, and that most of the headers in a request are just that: a request. For the most part, the server on the other end is under no obligation to honor any particular header, and as long as it returns a valid response that makes sense, the client on the other end should do its best to make sense of what was returned. This includes enabling gzip if the server responds that it has used it.
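To make that concrete, here is a minimal sketch in Node (file paths and port are made up) of serving a pre-gzipped file with the headers a browser needs to inflate it, plus an optional Accept-Encoding check for non-browser clients:

// Minimal Node sketch: serve a pre-gzipped stylesheet with the headers a
// browser needs to decode it. Paths and port are illustrative.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  if (req.url.split('?')[0] === '/css/style.css') {
    // Optional: fall back to the plain file for clients that don't send
    // Accept-Encoding: gzip (rare for browsers, common for naive parsers).
    const wantsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');
    res.writeHead(200, {
      'Content-Type': 'text/css', // the real type, not octet-stream
      ...(wantsGzip ? { 'Content-Encoding': 'gzip' } : {})
    });
    fs.createReadStream(wantsGzip ? 'css/style.css.gz' : 'css/style.css').pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);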
If your target is machine consumption, however, then be a little wary. People sometimes still think it's a smart idea to write their own HTTP/SMTP/etc. parsers, even though the topic has been done to death in multiple libraries for pretty much every language out there. Established libraries should support gzip just fine, but hand-rolled parsers usually won't.
I'm working on an app which fetches JSON from a website. Everything is working properly, and I'm using Alamofire.

But for some reason, when I post new content on the website and the JSON file changes, Alamofire doesn't get the new content. Instead, it loads the content from the cache rather than re-downloading it.

The only workaround I've found is to clear the cache, which is an approach I'd rather avoid, since the user would then have to download all the content again on every view load.

So what I'm asking is: is there a way to notify the Alamofire method about the new content so it loads it fresh, without me having to implement a method to clear the cache?
Alamofire uses the Foundation URL loading system, which relies on NSURLCache. The cache behavior for HTTP requests is determined by the contents of your HTTP response's Cache-Control headers. For example, you may wish to configure your server to specify must-revalidate:
Cache-Control: max-age=3600, must-revalidate
You should also make sure your server is specifying ETag and Content-Length headers to make it easy to tell when content has changed.
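The fix here lives on the server rather than in Alamofire itself. As a rough sketch of what that looks like, assuming an Express server (the route and payload are placeholders):

// Rough server-side sketch (Express assumed): send validators so clients
// can revalidate instead of blindly reusing cached responses.
const express = require('express');
const app = express();

app.get('/feed.json', (req, res) => {
  res.set('Cache-Control', 'max-age=3600, must-revalidate');
  // Express adds ETag and Content-Length automatically for res.json/res.send
  res.json({ posts: [] }); // placeholder payload
});

app.listen(3000);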
NSHipster's writeup on NSURLCache has a few good examples. If you're totally new to web caching, I recommend you read the very helpful section 13 of the HTTP 1.1 spec, and possibly also this caching tutorial.
Are there any specific spec'd processes that a browser client can use to dynamically encourage a server to push additional requested items into the browser cache using HTTP/2 server push, before the client actually needs to use them? (I'm not talking about server-sent events or WebSockets here, but specifically HTTP/2 server push.)
There is nothing (yet) specified formally for browsers to ask a server to push resources.
A browser could figure out which secondary resources it needs to render a primary resource, and it could send this information to the server opportunistically on a subsequent request with an HTTP header, but as I said, this is not specified yet.
[Disclaimer, I am the Jetty HTTP/2 maintainer]
Servers, on the other hand, may learn about the resources that browsers ask for, and may build a cache of correlated resources that they can push to clients.

Jetty provides a configurable PushCacheFilter that implements the strategy above, along with an HTTP/2 Push Demo.
The objective of server push is for the server to send additional files (e.g. JavaScript, CSS) along with the requested URL (e.g. an HTML page) to the browser, before the browser knows which related files are required, thus saving a round-trip and improving page load speed. If the browser already knows what resources it needs, it can request them with normal HTTP calls.
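To make the mechanism concrete, here is a sketch of a server deciding on its own to push a stylesheet alongside an HTML response, using Node's built-in http2 module (key/cert paths are placeholders; this is purely illustrative, not Jetty's implementation):

// HTTP/2 server push sketch with Node's built-in http2 module.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),   // placeholder paths
  cert: fs.readFileSync('cert.pem')
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] !== '/') {
    stream.respond({ ':status': 404 });
    stream.end();
    return;
  }
  // Push the stylesheet before the browser has parsed the HTML that needs it
  stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
    if (!err) {
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('body { margin: 0 }');
    }
  });
  stream.respond({ ':status': 200, 'content-type': 'text/html' });
  stream.end('<link rel="stylesheet" href="/style.css">');
});

server.listen(8443);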
In Sails 0.10.5, Express compression is supposed to be in the middleware for production mode by default, according to the issues on GitHub, but none of the response headers have the appropriate Content-Encoding to suggest that they have been gzipped. Furthermore, the sizes of the assets all match the uncompressed assets.
After searching for any other issues related to this, I found this SO question, which was theoretically the opposite of my problem: he had the gzipped files in place and needed the middleware, while I have the middleware (supposedly by default) but no files. His problem was (apparently) solved by adding the middleware config, which was required for compression before 0.10.5.

So I npm-installed grunt-contrib-compress and set up the config file. Now I have the gzipped files being produced successfully, but they're not being served. I tried manually requesting the gzipped version of the asset by injecting it in sails-linker instead of the regular JS, but the Content-Type on the response header was 'application/octet-stream'.
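For reference, the compress task I set up looks roughly like this (paths follow Sails' default .tmp/public layout; the task name is mine):

// tasks/config/compress.js -- rough shape of my grunt-contrib-compress task
module.exports = function (grunt) {
  grunt.config.set('compress', {
    js: {
      options: { mode: 'gzip' },
      files: [{
        expand: true,
        cwd: '.tmp/public/js',
        src: ['**/*.js'],
        dest: '.tmp/public/js',
        ext: '.js.gz'
      }]
    }
  });

  grunt.loadNpmTasks('grunt-contrib-compress');
};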
Has anyone successfully served gzipped static assets from a sails app? Am I doing anything obviously incorrectly? Even an outline of the general process would be appreciated.
I'm building a Chromecast app, where I want to stream .m3u8 files (HLS) from a streaming provider. The streaming provider does not add CORS headers to the HTTP headers, which is a requirement for building Chromecast apps.
Is there any way to route the requests through a proxy and have the proxy add the necessary headers for the .m3u8 files? AFAICS, the .m3u8 files further point to files for the different bandwidth streams, so it would be necessary to have the proxy add appropriate CORS headers to the responses for those files as well.
Here is an example of a link to a .m3u8 file that I want to be able to stream.
Hey, I realise I'm a bit late, but I thought I would post here in case others find it useful. I had the same problem when developing a Chromecast application. The simple solution I found was to include the TOMODOkorz library; this will pass all HTTP requests through its proxy.
You could host your own proxy and change the library to point to yours relatively easily.
This is actually possible by rewriting the URLs within Chromecast's Media Player Library and having these sub-playlists also proxy through a CORS proxy like http://www.corsproxy.com/.
To do this in your custom receiver, do not import the Google-hosted library:
<script type="text/javascript" src="//www.gstatic.com/cast/sdk/libs/mediaplayer/0.5.0/media_player.js"></script>
Instead, copy the obfuscated javascript directly into your receiver html page, and do the following:
Find+replace g.D.url=k with g.D.url='http://www.corsproxy.com/' + k.replace(/^(?:[a-z]+:)?\/\//i,'')
Find+replace url:k with url:('http://www.corsproxy.com/' + k.replace(/^(?:[a-z]+:)?\/\//i,''))
Now, if you send the initial contentId to the Chromecast as http://www.corsproxy.com/YOUR_M3U8_FILE_HERE, you should have a fully functional HLS-playing Chromecast app.
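The sender-side load then looks something like this (v2 sender API; `session` is assumed to be an already-established cast session, and the playlist URL is a placeholder):

// Sender-side sketch (Cast Chrome sender v2 API). `session` is assumed to be
// an established chrome.cast.Session; the proxied playlist URL is a placeholder.
var mediaInfo = new chrome.cast.media.MediaInfo(
  'http://www.corsproxy.com/example.com/master.m3u8', // proxied playlist
  'application/x-mpegurl'
);
var request = new chrome.cast.media.LoadRequest(mediaInfo);

session.loadMedia(request,
  function (media) { console.log('playing', media.media.contentId); },
  function (err) { console.error('load failed', err); }
);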
Most providers have the ability to set CORS for their customers. Akamai certainly does.
I've been able to stream HLS to Chromecast from an S3 bucket by adding a permissive CORS configuration to the bucket's permissions.
To answer my own question:
This is not possible without rebroadcasting the streams. .m3u8 files are playlists containing links to other files, which in the end point to the binary media segments. All of these responses, including the HTTP responses containing the binary data, need the CORS headers for the Chromecast to display the contents.
If you're only looking to add CORS headers to textual responses, corsproxy.com is a good alternative, along with several available open-source projects.
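If you want to host one yourself, the core of such a proxy is small. Here is a rough Node sketch (no HTTPS, no error hardening): it forwards the request upstream and stamps the CORS header onto the response. Keep in mind that segment URLs inside the playlists must also point back at the proxy, since every segment response needs the header too.

// Rough Node sketch of a CORS-adding proxy: GET /example.com/master.m3u8
// fetches http://example.com/master.m3u8 and adds the CORS header.
const http = require('http');

http.createServer((clientReq, clientRes) => {
  const target = 'http://' + clientReq.url.slice(1);
  http.get(target, (upstream) => {
    clientRes.writeHead(upstream.statusCode, Object.assign({}, upstream.headers, {
      'Access-Control-Allow-Origin': '*'
    }));
    upstream.pipe(clientRes);
  }).on('error', () => {
    clientRes.writeHead(502);
    clientRes.end();
  });
}).listen(8080);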
We have recently fixed a nagging error on our website similar to the one described in How to stop javascript injection from vodafone proxy? - basically, the Vodafone mobile network was vandalizing our pages in transit, making edits to the JavaScript which broke viewmodels.
Adding a "Cache-Control: no-transform" header to the page that was experiencing the problem fixed it, which is great.
However, we are concerned that as we do more client-side development using JavaScript MVP techniques, we may see it again.
Is there any reason not to add this header to every page served up by our site?
Are there any useful transformations that this will prevent? Or is it basically just similar examples of carriers making ham-fisted attempts to minify things and potentially breaking them in the process?
The reasons not to add this header are speed and data transfer.

Some proxy/CDN services transform the media they pass along, so if your client is behind such a proxy, or you are using a CDN service, the client may get higher speed and use less data. This header orders the proxy/CDN not to transform the media and to leave the data as-is.

So, if you don't care about that, or your app doesn't use many files like images or music, or you don't want any transformation of your traffic, there is no reason not to do this (on the contrary, it's recommended).
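Setting it site-wide is a one-liner in most stacks; as a sketch, assuming an Express app:

// Sketch (Express assumed): set no-transform on every response so
// intermediaries leave the payload alone.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Cache-Control', 'no-transform');
  next();
});

app.listen(3000);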
See the RFC here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5
Google has recently incorporated the googleweblight service, so if your pages have the Cache-Control: no-transform header directive, you'll be opting out of having your pages transcoded when the connection comes from a mobile device with a slow internet connection.
More info here:
https://support.google.com/webmasters/answer/6211428?hl=en