Google Cloud Storage console: set Content-Encoding to gzip

I am using the Google Cloud Storage console to upload files. I am not using any command line tool.
I want to set the Content-Encoding to gzip (the -z option) in the metadata.
Please see the screenshot below: is the value 'z' correct or not?
I have set the value 'z' for all CSS and JS files and analyzed the webpage with PageSpeed Insights.
PageSpeed Insights is still telling me to enable compression; please check the screenshot below.
I am using Nginx webserver with HttpGzipModule installed on Debian 7.
Thanks.

"-z" is a feature of the gsutil command line tool -- it compresses the data locally and uploads it to GCS with Content-Encoding: gzip. It is not a feature (or property) of the HTTP protocol or Google Cloud Storage, hence simply setting the header does not achieve what you are going for.
If you want to store (and serve) gzip-encoded data, you have two options:
Apply gzip compression locally, for instance with the gzip Unix tool. Then remove the .gz suffix from the file name and upload it with the "Content-Encoding: gzip" header and the appropriate Content-Type (e.g., "text/css" for CSS, "application/javascript" for JS); see the sketch below this list.
Use the gsutil tool with the -z flag, and it will take care of all of the above for you.
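For completeness, if you are willing to script the upload instead of using the console, a minimal sketch of the first option with the Python google-cloud-storage client might look like this (the bucket and file names are placeholders):

import gzip
import shutil
from google.cloud import storage

# Compress styles.css locally into styles.css.gz (equivalent to running "gzip styles.css").
with open("styles.css", "rb") as src, gzip.open("styles.css.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

client = storage.Client()
bucket = client.bucket("my-bucket")   # placeholder bucket name
blob = bucket.blob("styles.css")      # the object keeps the original name, without the .gz suffix

# Tell GCS that the stored bytes are gzip-encoded so it can transcode on download.
blob.content_encoding = "gzip"
blob.upload_from_filename("styles.css.gz", content_type="text/css")

The second option from the command line is simply gsutil cp -z css,js <your files> gs://my-bucket/.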

If you are using one of the Google Cloud client libraries (e.g. for Java, Go or Node.js) instead of the CLI, you can also enable a gzip setting.
For example, in JavaScript (Node.js):
bucket.upload('data.json', {
  destination: 'data.json',
  gzip: true
});
https://cloud.google.com/storage/docs/uploading-objects

Using the Google Cloud SDK in C#, there are two overloads for the UploadObject method.
You need to use the overload that takes a Google.Apis.Storage.v1.Data.Object and a Stream as parameters.
The example below assumes a JSON file that has been gzip-compressed into a stream:
var objectToBeCreated = new Google.Apis.Storage.v1.Data.Object
{
    Bucket = "bucketName",
    Name = "objectName",
    ContentType = "application/json",
    ContentEncoding = "gzip"
};

var uploadedObject = storageClient.UploadObject(objectToBeCreated, stream);


How to enable gzip compression in libwebsocket on ESP32

I am running a webserver on an ESP32 chip using the libwebsocket library. The server files are in a ROMFS partition on the ESP32.
Currently I am in the process of trying to improve the loading time by concatenating, minifying and compressing the JavaScript, HTML and CSS files.
The concatenation and minification worked properly; I now only have a concatenated.js and a concatenated.css file in my website. But the issue came when I tried to get compression working.
Initially, I thought my server would compress the files by itself before sending them; however, when I looked at the server file transfer using the Chrome developer tools, I found out that the GET request for the JavaScript file was returned with "content-type: text/javascript".
I tried several solutions I could think of, but none seem to work:
Gzip the file before creating the ROMFS (i.e. there is now only a concatenated.js.gz in my ROMFS file system).
Result: the server returns 404 when trying to access "concatenated.js".
Gzip the file before creating the ROMFS and make it live alongside the original file (I was thinking maybe libwebsocket would see that both were there and pick the most efficient one).
Result: the server only ever returns the .js file, never the .gz file.
Does anybody know how to enable gzip compression in libwebsocket? I am guessing there must be some option I don't have enabled, but it has been hard to find resources on the web. Most of them only discuss libwebsocket's ability to serve gzip content from a zipped file.
Regards,
The issue ended up coming directly from the libwebsocket code.
When opening a file on the ESP32, there was no logic in place to look for a file with the same name and ".gz" appended. The logic to look for such a file when the browser accepts gzip needed to be added to the function.
This change was done on an older version of libwebsocket, and as such may not apply to the latest version (for anybody looking at this modification). Also, I needed to include <string.h> to have access to the string manipulation functions:
libwebsocket/lib/plat/freertos/esp32/esp32-helpers.c -> function esp32_lws_fops_open
Replace
f->i = romfs_get_info(lws_esp32_romfs, filename, &len, &csum);
By
// check for the gzipped file if gzip is accepted by the browser
f->i = NULL;
if ((*flags & LWS_FOP_FLAG_COMPR_ACCEPTABLE_GZIP) == LWS_FOP_FLAG_COMPR_ACCEPTABLE_GZIP)
{
    // allocate space for ".gz" plus the null terminator
    char *filename_gz = malloc(strlen(filename) + 3 + 1);
    sprintf(filename_gz, "%s.gz", filename);
    f->i = romfs_get_info(lws_esp32_romfs, filename_gz, &len, &csum);
    free(filename_gz);
}
// if we haven't found a .gz file (not accepted or no gzipped copy), look for the regular file
if (!f->i)
{
    f->i = romfs_get_info(lws_esp32_romfs, filename, &len, &csum);
}
// otherwise, set the flag to let the library know the transferred file is gzip-compressed
else
{
    *flags |= LWS_FOP_FLAG_COMPR_IS_GZIP;
}

Cannot play audio from Google Cloud Storage

Please help. I cannot play my audio file. It is hosted in Google Cloud Storage; it works if I just run it from a localhost server, but when I use the uploaded one I often get (failed) net::ERR_CONTENT_DECODING_FAILED.
Below is how I use the audio file in my Vue.js component:
<template>
  <v-btn @click="triggerSound">Trigger Sound</v-btn>
  <audio id="notif" src="adn.wxt.com/zhuanchu.wav" />
</template>

<script>
export default {
  mounted() {
    this.notifyAudio = document.getElementById('notif')
  },
  methods: {
    async triggerSound() {
      this.notifyAudio.play()
    }
  },
}
</script>
UPDATE
It works well in Firefox
There are many reasons why you might get this error.
It typically occurs when the HTTP response headers claim that the content is gzip-encoded while it is not (see the deeper explanation below). It can sometimes be worked around by turning off gzip encoding in the browser you use.
If that didn't solve your problem, try adding this flag: gcloud alpha storage cp gs://bucket/file.gz . --no-gzip-encoding.
My last suggestion would be passing an "Accept-Encoding: gzip" header on the download via gsutil's -h option.
Deeper explanation regarding the error
Redundant Behaviour
You should not set your metadata to redundantly report the compression of the object:
gsutil setmeta -h "Content-Type:application/gzip" \
-h "Content-Encoding:gzip"
This implies you are uploading a gzip-compressed object that has been gzip-compressed a second time when that is not usually the case. When decompressive transcoding occurs on such an incorrectly reported object, the object is served identity encoded, but requesters think that they have received an object which still has a layer of compression associated with it. Attempts to decompress the object will fail.
A file that is not gzip-compressed should not be uploaded with Content-Encoding: gzip. Doing so makes the object appear to be eligible for transcoding, but when requests for the object are made, attempts at transcoding fail.
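If that is what happened to the .wav here, one possible fix is to clear the Content-Encoding metadata on the object. A minimal sketch with the Python google-cloud-storage client (the bucket and object names are placeholders, and this assumes that patching the field to None clears it server-side):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")        # placeholder bucket name
blob = bucket.get_blob("zhuanchu.wav")     # the object that fails to decode

# Drop the bogus Content-Encoding so browsers stop trying to gunzip plain WAV bytes.
blob.content_encoding = None
blob.patch()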
Double compression
Some objects, such as many video, audio, and image files, not to mention gzip files themselves, are already compressed. Using gzip on such objects offers virtually no benefit: in almost all cases, doing so makes the object larger due to gzip overhead. For this reason, using gzip on compressed content is generally discouraged and may cause undesired behaviors.
For example, while Cloud Storage allows "doubly compressed" objects (that is, objects that are gzip-compressed but also have an underlying Content-Type that is itself compressed) to be uploaded and stored, it does not allow objects to be served in a doubly compressed state unless their Cache-Control metadata includes no-transform. Instead, it removes the outer, gzip, level of compression, drops the Content-Encoding response header, and serves the resulting object. This occurs even for requests with Accept-Encoding: gzip. The file that is received by the client thus does not have the same checksum as what was uploaded and stored in Cloud Storage, so any integrity checks fail.

How can I upload a gzipped JSON file to BigQuery via the HTTP API?

When I try to upload an uncompressed JSON file, it works fine; but when I try a gzipped version of the same JSON file, the job fails with a lexical error resulting from a failure to parse the JSON content.
I gzipped the JSON file with the gzip command on Mac OS X 10.8, and I have set the sourceFormat to "NEWLINE_DELIMITED_JSON".
Did I do something incorrectly, or should a gzipped JSON file be processed differently?
I believe that with a multipart/related request it is not possible to submit binary data (such as the compressed file). However, if you don't want to use uncompressed data, you may be able to use a resumable upload.
What language are you coding in? The Python jobs.insert() API takes a media upload parameter, which you should be able to give a filename to in order to do a resumable upload (which sends your job metadata and new table data as separate streams). I was able to use this to upload a compressed file.
This is what bq.py uses, so you could look at the source code here.
If you aren't using python, the googleapis client libraries for other languages should have similar functionality.
You can upload gzipped files to Google Cloud Storage, and BigQuery will be able to ingest it with a load job:
https://developers.google.com/bigquery/loading-data-into-bigquery#loaddatagcs
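For reference, with the newer google-cloud-bigquery Python client (rather than the raw HTTP API discussed above), loading a gzipped newline-delimited JSON file from Cloud Storage might look like this; the bucket, dataset and table names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # or supply an explicit schema instead
)

# BigQuery detects the gzip compression of the .gz object automatically for JSON loads.
load_job = client.load_table_from_uri(
    "gs://my-bucket/data.json.gz",
    "my_dataset.my_table",
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish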

Vary: Accept-Encoding header for Amazon S3 hosted site

How do I add a Vary: Accept-Encoding header to the files of a static website hosted by Amazon S3?
This is the only thing keeping me from getting a 100/100 score from Google PageSpeed, I'd love to get this solved!
It's not possible to set the Vary: Accept-Encoding header for S3 objects.
Just to add some perspective, the reason for this is that S3 doesn't currently support on-the-fly compression, so it's not possible to set this header. If in the future Amazon does add automatic compression, then that header would be set automatically.
With a static site, you're limited to either:
Serving uncompressed assets and having full support, but a slower site/more bandwidth.
Serving compressed assets by compressing them manually, but making the site look like garbage to any browser that doesn't support gzip (there are very few of them now). Note that the extension would still be .html (you don't want to set it to .gz because that implies an archive) but its content would be gzipped.

Why does Amazon S3 return an Error 330 for simple files?

I have added the "Content-Encoding: gzip" header to my S3 files and now when I try to access them, I get an "Error 330 (net::ERR_CONTENT_DECODING_FAILED)".
Note that my files are simply images, JS and CSS.
How do I solve that issue?
You're going to have to manually gzip them and then upload them to S3. S3 doesn't have the ability to gzip on the fly like your web server does.
EDIT: Images are already compressed so don't gzip them.
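As a rough sketch of that workflow in Python with boto3 (the bucket name and file names are placeholders; per the edit above, only do this for text assets such as JS and CSS, not images):

import gzip
import shutil
import boto3

# Compress the asset locally; the object keeps its original key so URLs do not change.
with open("app.js", "rb") as src, gzip.open("app.js.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

s3 = boto3.client("s3")
s3.upload_file(
    "app.js.gz",
    "my-bucket",
    "app.js",
    ExtraArgs={
        "ContentType": "application/javascript",
        "ContentEncoding": "gzip",  # tells browsers to gunzip the response body
    },
)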
I don't know if you are using Grunt as a deployment tool, but if so, use this to compress your files:
https://github.com/gruntjs/grunt-contrib-compress
Then:
https://github.com/MathieuLoutre/grunt-aws-s3
to upload the compressed files to Amazon S3. Et voilà!