When downloading a production copy of jQuery, next to the link it says that the file is 32K Minified & Gzipped. I get Minified but what do they mean by Gzipped?
Is it gzipped by the web server, like Apache's mod_deflate?
Update: I found this website for checking which resources are gzipped: http://gzipwtf.com/
When your browser sends an HTTP request to a web server, it can specify the Accept-Encoding field to indicate which compression schemes it supports:
GET /scripts/jquery.min.js HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate
The server can then choose one of these schemes (but doesn't have to) and specify it in the response header:
HTTP/1.1 200 OK
Content-Encoding: gzip
etc.
So, if the web server is configured to gzip JavaScript files, and the browser supports it (the vast majority do), then the file will be "gzipped".
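If you want to verify this from code rather than with a browser or the site above, here is a minimal PHP sketch, assuming the curl extension is available and reusing the example URL from the request shown earlier:
<?php
// Minimal sketch: request the file while advertising gzip support, collect only
// the response headers, and look for "Content-Encoding: gzip" in the output.
// The URL is just an example.
$headers = '';
$ch = curl_init('https://www.example.com/scripts/jquery.min.js');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // don't dump the (compressed) body
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Accept-Encoding: gzip, deflate']);
curl_setopt($ch, CURLOPT_HEADERFUNCTION, function ($ch, $line) use (&$headers) {
    $headers .= $line;                            // accumulate raw header lines
    return strlen($line);
});
curl_exec($ch);
curl_close($ch);
echo $headers;
?>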
Yes, it uses an Apache module called mod_gzip:
http://sourceforge.net/projects/mod-gzip/
which works (in principle) just like mod_deflate.
That download link is to a hosted file that you may hot-link to in your web pages. The file itself is minified JavaScript.
When the browser requests the file from their hosting server, it is further compressed in transit using gzip, as indicated by the Content-Encoding response header. When the browser receives it, it is inflated and stored in the browser's cache.
If you were to host the minified file on your own server it would not necessarily be compressed in transit as described unless you configured your server to use compression.
Related
I have the Amazon CloudFront gzip feature enabled: "Compress Objects Automatically".
None of the files served through CloudFront are coming back gzipped, while other CSS/JS files on the site load gzipped just fine (I double-checked that my request headers accept gzip: Accept-Encoding: gzip).
I am really lost trying to figure this out, because every tutorial and Google search result leads to the same explanation of how to tick the "Compress Objects Automatically" radio button, which clearly doesn't help.
I thought maybe the files can't be gzipped because they are too small to compress, but Google's speed test clearly says they can be compressed with gzip:
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/bootstrap.min.css could save 100.3KiB (83% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/style.css could save 60.5KiB (80% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/shop/css/jquery.range.css could save 4.6KiB (83% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/font-awesome.min.css could save 21.9KiB (77% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/responsive.css could save 20KiB (80% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/general.min.js?ver=9.70 could save 232.9KiB (72% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/rcss/magnific-popup.css could save 5.7KiB (75% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/bootstrap.min.js could save 26.4KiB (73% reduction).
Compressing https://Cloudfront.cloudfront.net/…ve/static/plugins/jquery.validate.min.js could save 14KiB (67% reduction).
Compressing https://Cloudfront.cloudfront.net/…tic/plugins/jquery.magnific-popup.min.js could save 13.2KiB (63% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/plugins/jquery.range.min.js could save 3.9KiB (66% reduction).
Compressing https://Cloudfront.cloudfront.net/live/static/voting/jquery.cookie.js could save 1.2KiB (53% reduction).
What am I missing that will let me gzip these files using CloudFront?
This is what my response headers look like:
Accept-Ranges:bytes
Cache-Control:max-age=0
Connection:keep-alive
Content-Length:122540
Content-Type:text/css
Date:Sun, 23 Apr 2017 13:14:07 GMT
ETag:"2cb56af0a65d6ac432b906d085183457"
Last-Modified:Tue, 02 Aug 2016 08:49:54 GMT
Server:AmazonS3
Via:1.1 2cb56af0a65d6ac432b906d085183457.cloudfront.net (CloudFront)
X-Amz-Cf-Id:eCPcSDedADnqDZMlMbFjj08asdBSn7_lfR0imlXAT181Y8qRMtSZASDF27AiSTK8PDQ==
x-amz-meta-s3cmd-attrs:uid:123/gname:ubuntu/uname:ubuntu/gid:666/mode:666/mtime:666/atime:666/md5:2cb56af0a65d6ac432b906d085183457/ctime:666
X-Cache:RefreshHit from cloudfront
I understand the difference between 200 and 304 responses; after deleting the browser cache, it always shows a 200 response.
So there is some caching on CloudFront's side? I added my bootstrap3.min.css file under "Invalidations", but that didn't work.
Made sure the file is set to be compressed.
Added this to my website.com.conf file to enable gzip and send the Content-Length header:
DeflateBufferSize 8096
SetOutputFilter DEFLATE
DeflateCompressionLevel 9
Tried removing DeflateBufferSize 8096 from my .conf file and added <AllowedHeader>Content-Length</AllowedHeader> to the "CORS Configuration"; I do get the Content-Length, but the file is still not gzipped (following CloudFront with S3 website as origin is not serving gzipped files).
This is what I currently get:
Request URL:https://abc.cloudfront.net/live/static/rcss/bootstrap3.min.css
Request Method:GET
Status Code:200 OK
Remote Address:77.77.77.77:443
Referrer Policy:no-referrer-when-downgrade
Response Headers
Accept-Ranges:bytes
Age:1479
Connection:keep-alive
Content-Length:122555
Content-Type:text/css
Date:Wed, 26 Apr 2017 08:48:34 GMT
ETag:"83527e410cd3fff5bd1e4aab253910b2"
Last-Modified:Wed, 26 Apr 2017 08:43:05 GMT
Server:AmazonS3
Via:1.1 5fc044210ebc4ac6efddab8b0bf5a686.cloudfront.net (CloudFront)
X-Amz-Cf-Id:3ZBgDY0c1WV_Pc0o_Bjwa5cQ9D9T-Cr30QDxd_GvD30iQ8W1ImReQIH==
X-Cache:Hit from cloudfront
Request Headers
Accept:text/css,*/*;q=0.1
Accept-Encoding:gzip, deflate, sdch, br
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
Host:abc.cloudfront.net
Pragma:no-cache
Referer:https://example.com/
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.81 Safari/537.36
Following: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
If you configure CloudFront to compress content, CloudFront removes the ETag response header from the files that it compresses. When the ETag header is present, CloudFront and your origin can use it to determine whether the version of a file in a CloudFront edge cache is identical to the version on the origin server. However, after compression the two versions are no longer identical.
I get the same ETag, meaning the served CSS file doesn't go through any compression.
Thinking maybe I didn't set the compression right for this specific file, it is now set to
*/bootstrap3.min.css (since it's inside a directory)
and before that I had it set to
bootstrap3.min.css
Neither works.
My URL is: https://abc.cloudfront.net/live/static/rcss/bootstrap3.min.css
Following this, I edited my invalidation part to:
/live/static/rcss/bootstrap3.min.css
/static/rcss/bootstrap3.min.css
/rcss/bootstrap3.min.css
/bootstrap3.min.css
Can this be my actual problem?
X-Cache: RefreshHit from cloudfront
This means CloudFront checked the origin with a conditional request such as If-Modified-Since and the response was 304 Not Modified, indicating that the content at the origin server (S3) is unchanged from when CloudFront initially cached the resource, so it served the copy from cache.
...which was probably cached before you enabled "Compress Objects Automatically."
If you think about it, it would be far more efficient for CloudFront only to compress objects as they come in from the origin, not as they go out to the viewer, so files it already has would never get compressed.
This is documented:
CloudFront compresses files in each edge location when it gets the files from your origin. When you configure CloudFront to compress your content, it doesn't compress files that are already in edge locations. In addition, when a file expires in an edge location and CloudFront forwards another request for the file to your origin, CloudFront doesn't compress the file if your origin returns an HTTP status code 304, which means that the edge location already has the latest version of the file. If you want CloudFront to compress the files that are already in edge locations, you'll need to invalidate those files.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
So a cache invalidation of * is in order, to clear out the uncompressed versions.
But wait... higher up on the same page, there seems to be conflicting information:
Note
If CloudFront has an uncompressed version of the file in the cache, it still forwards a request to the origin.
Given the information above, that seems to be a discrepancy. But I believe the issue here is one of unspoken assumptions. That note most likely applies only to uncompressed copies that were cached in response to a viewer that did not send Accept-Encoding: gzip. In that case, the correct behavior on CloudFront's part would be to cache the compressed and uncompressed responses independently, and to contact the origin if no compressed copy of an object was available and the viewer had indicated that it could accept gzip-compressed objects, regardless of whether an uncompressed copy had been stored as the result of a request from a browser that did not advertise gzip support.
Or, it can be interpreted to mean that CloudFront did still send a request, but since the response was 304, it served the cached copy in spite of it being uncompressed.
Invalidate your cache, then wait for the invalidation to show that it's complete, then try again. This should be all that is needed to correct this behavior.
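If you prefer to script the invalidation rather than use the console, a rough sketch with the AWS SDK for PHP (assuming SDK v3 installed via Composer; the distribution ID is a placeholder) might look like this:
<?php
// Rough sketch, AWS SDK for PHP v3: invalidate everything ("/*") so the
// uncompressed copies are evicted and CloudFront can re-fetch and compress.
// 'EDFDVBD6EXAMPLE' is a placeholder distribution ID.
require 'vendor/autoload.php';

$client = new Aws\CloudFront\CloudFrontClient([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

$client->createInvalidation([
    'DistributionId'    => 'EDFDVBD6EXAMPLE',
    'InvalidationBatch' => [
        'CallerReference' => 'invalidate-' . time(),  // must be unique per request
        'Paths'           => [
            'Quantity' => 1,
            'Items'    => ['/*'],
        ],
    ],
]);
?>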
It might be because S3 is not sending the required Content-Length response header.
Check this answer for more details: https://stackoverflow.com/a/42448222/4005566
When I look at any page in Live HTTP Headers, the page's headers contain the following:
Accept Encoding: gzip, deflate
Content Encoding: Gzip
When I use websites to check whether it is compressed or not, they say it's not compressed. How can we be sure that a page is compressed?
For example, I tested this site in a gzip tester and it says it's not compressed, but I see Content Encoding in Live HTTP Headers.
Your headers are wrong; it should be:
Content-Encoding: gzip
So basically: dashes, not spaces, between the words in the header name.
It's your web server that needs to add those headers and do the compression; see https://httpd.apache.org/docs/2.0/mod/mod_deflate.html
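If you want to double-check from code which headers the server actually sends back, a minimal PHP sketch (the URL is a placeholder) could be:
<?php
// Minimal sketch: request the page while advertising gzip support and dump the
// exact response headers, so you can see whether the server really sends
// "Content-Encoding: gzip" (with the dash). The URL is a placeholder.
stream_context_set_default([
    'http' => ['header' => "Accept-Encoding: gzip, deflate\r\n"],
]);
$headers = get_headers('https://www.example.com/');
print_r($headers);
?>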
I've gzipped my JavaScript file using gzip and uploaded it to Amazon S3. I've set the following:
content-type: application/x-javascript
content-encoding: gzip
The file was given public permissions.
The problem is that when I point the script tag at the location of the gzipped file (the correct one, I've checked; it ends in js.gzip), the application doesn't run it. When I tried to view the file in Chrome, the browser tried to download it instead of showing it.
What am I doing wrong?
According to one answer to a similar question, there's a bug in Safari (probably WebKit) that prevents proper gzip handling when the file has the "wrong" extension.
The file extension shouldn't matter, but apparently WebKit screws it up. Try either .jgz or .gz.js.
Try to remove:
content-type: application/x-javascript
and only use:
content-encoding: gzip
in your S3 bucket's file.js file (gzipped; rename .js.gz to just .js).
Now the browser request should work without any problem.
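For reference, this is roughly what the upload could look like with the AWS SDK for PHP v3 (bucket, key, and file names are placeholders); it uploads the already-gzipped file under a plain .js key and sets only Content-Encoding, per the advice above:
<?php
// Rough sketch, AWS SDK for PHP v3 (bucket/key/file names are placeholders):
// upload the gzipped file under a .js key and set only Content-Encoding.
require 'vendor/autoload.php';

$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

$s3->putObject([
    'Bucket'          => 'my-bucket',
    'Key'             => 'js/file.js',      // gzipped content, but no .gz suffix
    'SourceFile'      => 'file.js.gz',      // the gzipped file on disk
    'ContentEncoding' => 'gzip',
    'ACL'             => 'public-read',
]);
?>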
When using mod_deflate in Apache 2, Apache will chunk gzipped content, setting the Transfer-Encoding: chunked header. While this means the response starts downloading sooner, I cannot display a progress bar.
If I handle the compression myself in PHP, I can gzip it completely first and set the Content-Length header, so that I can display a progress bar to the user.
Is there any setting that would change Apache's default behavior and have Apache set a Content-Length header instead of chunking the response, so that I don't have to handle the compression myself?
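(For reference, this is roughly what I mean by handling the compression myself in PHP; the file name is just a placeholder.)
<?php
// Buffer the whole response, gzip it in one go, and send an explicit
// Content-Length so the browser can show a progress bar.
ob_start();
readfile('whatever.js');          // ...or any other generated output
$gzipped = gzencode(ob_get_clean(), 9);

header('Content-Type: application/javascript');
header('Content-Encoding: gzip');
header('Content-Length: ' . strlen($gzipped));
echo $gzipped;
?>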
You could maybe play with SendBufferSize to get a value big enough to contain your response in one chunk.
Then, since chunked content is part of the HTTP/1.1 protocol, you could force an HTTP/1.0 response (and therefore no chunking: "A server MUST NOT send transfer-codings to an HTTP/1.0 client.") by setting force-response-1.0 in your Apache configuration. But PHP breaks this setting; it's a long-known PHP bug, and there's a workaround.
You could try to modify the request on the client side with a header preventing chunked content, but the W3C says: "All HTTP/1.1 applications MUST be able to receive and decode the "chunked" transfer-coding", so I don't think there's any Accept-like header that can prevent the server from chunking the response. You could, however, try to make the request in HTTP/1.0; it's not really a header of the request, it's the first line, and it should be possible with jQuery.
One last thing: HTTP/1.0 lacks one big feature, the Host header is not mandatory, so verify that your HTTP/1.0 requests still send the Host header if you work with name-based virtual hosts.
Edit: the technique cited in the workaround shows that you can tweak the Apache environment from PHP code. This can be used to force the 1.0 mode only for your special gzipped content, and you should use it so that you don't end up with your complete application in HTTP/1.0 (or use the request side to set HTTP/1.0 for your gzip requests only).
My site is all happily gzipped according to:
http://www.gidnetwork.com/tools/gzip-test.php
However, when I run it through YSlow I get an F for gzip, and it lists all of my scripts as components that are not gzipped.
Any ideas?
Have a look at the headers in Firebug and check that the browser is sending
Accept-Encoding: gzip,deflate
in the request header, and that
Content-Encoding: gzip
is being sent by the server in the response header (indicating that gzipping has been applied).
If you used the method in the linked pages to gzip your site, it won't have any effect on the scripts as they are not run through PHP. You'll need to either:
1) configure your web server of choice (Apache 2 uses mod_deflate)
2) serve your .js files through PHP:
<?php header('Content-Type: application/javascript'); ob_start('ob_gzhandler'); echo file_get_contents('whatever.js'); ?>
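(You would then point your script tags at that PHP file instead of the raw .js file, e.g. <script src="serve-js.php"></script>, where serve-js.php is whatever you name the wrapper above.)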