Please help. I cannot play my audio file. It is hosted in Google Cloud Storage; it works if I just run it on a localhost server, but when I use the uploaded one I often get (failed) net::ERR_CONTENT_DECODING_FAILED.
Below is how I use the audio file in my VueJS component:
<template>
  <v-btn @click="triggerSound">Trigger Sound</v-btn>
  <audio id="notif" src="adn.wxt.com/zhuanchu.wav" />
</template>
<script>
export default {
  mounted() {
    this.notifyAudio = document.getElementById('notif')
  },
  methods: {
    async triggerSound() {
      this.notifyAudio.play()
    }
  },
}
</script>
UPDATE
It works well in Firefox
There are many reasons why you might get this error.
This error occurs when the HTTP response headers claim that the content is gzip-encoded while it is not (see the deeper explanation below; a quick header check is sketched after these suggestions). It can sometimes be worked around by turning off gzip encoding in the browser you use.
If that doesn't solve your problem, try adding this flag: gcloud alpha storage cp gs://bucket/file.gz . --no-gzip-encoding
My last suggestion would be passing an "Accept-Encoding: gzip" header to gsutil with its -h option when downloading.
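To check which case applies, inspect the headers that Cloud Storage actually serves for the audio file. A minimal Node.js sketch (the URL is a placeholder, not the asker's real object URL):

const https = require('https')

// Placeholder: substitute the public URL of your object in Cloud Storage.
const url = 'https://storage.googleapis.com/your-bucket/zhuanchu.wav'

https.request(url, { method: 'HEAD' }, (res) => {
  // If content-encoding reports "gzip" but the stored bytes are not actually
  // gzip-compressed, browsers fail with net::ERR_CONTENT_DECODING_FAILED.
  console.log('content-type:', res.headers['content-type'])
  console.log('content-encoding:', res.headers['content-encoding'])
}).end()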
Deeper explanation regarding the error
Redundant Behaviour
You should not set your metadata to redundantly report the compression of the object:
gsutil setmeta -h "Content-Type:application/gzip" \
  -h "Content-Encoding:gzip" gs://your-bucket/your-object
This implies you are uploading a gzip-compressed object that has been gzip-compressed a second time when that is not usually the case. When decompressive transcoding occurs on such an incorrectly reported object, the object is served identity encoded, but requesters think that they have received an object which still has a layer of compression associated with it. Attempts to decompress the object will fail.
Conversely, a file that is not gzip-compressed should not be uploaded with the Content-Encoding: gzip header. Doing so makes the object appear to be eligible for transcoding, but when requests for the object are made, attempts at transcoding fail.
Double compression
Some objects, such as many video, audio, and image files, not to mention gzip files themselves, are already compressed. Using gzip on such objects offers virtually no benefit: in almost all cases, doing so makes the object larger due to gzip overhead. For this reason, using gzip on compressed content is generally discouraged and may cause undesired behaviors.
For example, while Cloud Storage allows "doubly compressed" objects (that is, objects that are gzip-compressed but also have an underlying Content-Type that is itself compressed) to be uploaded and stored, it does not allow objects to be served in a doubly compressed state unless their Cache-Control metadata includes no-transform. Instead, it removes the outer, gzip, level of compression, drops the Content-Encoding response header, and serves the resulting object. This occurs even for requests with Accept-Encoding: gzip. The file that is received by the client thus does not have the same checksum as what was uploaded and stored in Cloud Storage, so any integrity checks fail.
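Tying this back to the original question: if the .wav was uploaded with a spurious Content-Encoding: gzip, the simplest fix is usually to re-upload it with no gzip handling at all. A rough sketch using the Node.js client (@google-cloud/storage); the bucket and file names are placeholders:

const { Storage } = require('@google-cloud/storage')

async function reuploadWav() {
  const storage = new Storage()
  // Upload the audio file as-is: no gzip on upload and no Content-Encoding
  // metadata, so the browser receives exactly the bytes that were stored.
  await storage.bucket('your-bucket').upload('zhuanchu.wav', {
    destination: 'zhuanchu.wav',
    gzip: false,
    metadata: { contentType: 'audio/wav' }
  })
}

reuploadWav().catch(console.error)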
Related
I have a Cloudfront distribution which has a single S3 origin serving static files. These files all have cache-control: public,max-age=31536000 (one year) as metadata, but when I view the distribution statistics, I see a consistent 60% Miss rate.
I can see that the objects with the lowest Hit rates (usually around 50%) are all from a specific folder, which my Django app uploads thumbnails to. These files still have the same headers, though, so I can't figure out why they're different.
For example – when I load this file (S3 origin, Cloudfront link) in my browser, I see age: 1169380 and x-cache: Hit from cloudfront headers. But if I curl the same URL, I see x-cache: Miss from cloudfront and no age header – if I curl again, the Age begins incrementing from 0 (and I see a Hit).
This feels wrong to me – the cache policy I'm using is a Cloudfront default (Managed-CachingOptimized) which doesn't forward any headers or querystrings, so why does my curl command trigger a call to origin when I just loaded the same file via my browser, and got a cached response?
It's possible I've misunderstood how Cloudfront is supposed to cache these files so would appreciate any pointers.
(If it helps, this page will give a bunch of URLs which show the issue, eg. any image under https://static.scenepointblank.com/thumbs/*)
I have a Dynamics 365 instance that makes heavy use of custom front-end interfaces using a modern Nodejs-based build pipeline involving the usual suspects such as webpack/babel/etc. I'm hosting these files as webresources in Dynamics (one html file and one bundle.js file per SPA).
As my team nears production, I'm trying to set up a nice production build for our front-end stuff to reduce load times. Unfortunately, I can't find a good way to serve our bundle.js files encoded as gzip because Dynamics does not return the Content-Encoding: gzip header when a request is made, and therefore the browser doesn't decompress the file and tries to read the compressed file as plain JavaScript.
Of course, we can serve the uncompressed file just fine but we would like to provide the smaller, faster loading file if possible as it's generally about 1/3 the size.
Does anyone have any brilliant ideas for how to override the default response headers coming back from dynamics when I request a web resource? Or any other clever solutions to this problem?
Thanks, and let me know if any clarification is needed.
I don't know of any way to serve gzipped content via a web resource.
If the download size is a huge concern perhaps encode the gzipped code to base64 and store it as a string variable in JS.
Then during execution you could decode, unzip, and eval() the code.
You could also store base64 gzipped code as a file attachment via an annotation record or within an XML web resource, though those options would require an additional API call to get the code, so a string variable may be your best bet.
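A rough sketch of that idea in the browser, assuming the pako library is available for gzip decompression (the library choice and the BUNDLE_B64 variable name are mine, for illustration only):

// BUNDLE_B64 is a hypothetical string holding the base64-encoded,
// gzip-compressed bundle.js source; pako must already be loaded.
const compressedBytes = Uint8Array.from(atob(BUNDLE_B64), (c) => c.charCodeAt(0))

// Decompress the gzip stream back into JavaScript source text.
const source = pako.ungzip(compressedBytes, { to: 'string' })

// Evaluate the decompressed code, as the answer suggests.
eval(source)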
I am using Google Cloud Storage console to upload files. I am not using any command line tool.
I want to Set the Content-Encoding to gzip (-z option) in Metadata.
Please see the screenshot below; is the value 'z' correct or not?
I have set the value 'z' for all CSS and JS files and analyzed the webpage with PageSpeed Insights.
PageSpeed Insights is still telling me to enable compression; please check the screenshot below.
I am using Nginx webserver with HttpGzipModule installed on Debian 7.
Thanks.
"-z" is a feature of the gsutil command line tool -- it compresses the data locally and uploads it to GCS with Content-Encoding: gzip. It is not a feature (or property) of the HTTP protocol or Google Cloud Storage, hence simply setting the header does not achieve what you are going for.
If you want to store (and serve) gzip-encoded data, you have two options:
Apply gzip-compression locally, for instance with the gzip Unix tool. Then remove the .gz suffix from the file name and upload it with the "Content-Encoding: gzip" header and the appropriate Content-Type (e.g., "text/css" for css, "application/javascript" for js).
Use the gsutil tool with the -z flag, and it will take care of all of the above for you.
If you are using the Google Cloud SDK (e.g., Java, Go, etc.) instead of the CLI, you can also enable a gzip setting.
For example in JavaScript:
// Assumes a bucket object from the Node.js client, e.g.:
// const { Storage } = require('@google-cloud/storage');
// const bucket = new Storage().bucket('your-bucket');  // placeholder name
bucket.upload('data.json', {
  destination: 'data.json',
  gzip: true
});
https://cloud.google.com/storage/docs/uploading-objects
Using the Google Cloud SDK in C#, there are two overloads for the UploadObject method.
You need to use the overload that takes a Google.Apis.Storage.v1.Data.Object and a Stream as parameters.
The example below assumes a json file compressed with gzip into a stream:
// StorageClient comes from Google.Cloud.Storage.V1; the bucket and object
// names below are placeholders.
var storageClient = StorageClient.Create();
var objectToBeCreated = new Google.Apis.Storage.v1.Data.Object
{
    Bucket = "bucketName",
    Name = "objectName",
    ContentType = "application/json",
    ContentEncoding = "gzip"
};
var uploadedObject = storageClient.UploadObject(objectToBeCreated, stream);
How do I add a Vary: Accept-Encoding header to the files of a static website hosted by Amazon S3?
This is the only thing keeping me from getting a 100/100 score from Google PageSpeed, I'd love to get this solved!
It's not possible to set the Vary: Accept-Encoding header for S3 objects.
Just to add some perspective, the reason for this is that S3 doesn't currently support on-the-fly compressing, so it's not possible to set this header. If in the future Amazon does add automatic compression, then that header would be set automatically.
With a static site, you're limited to either:
Serving uncompressed assets and having full support, but a slower site/more bandwidth.
Serving compressed assets by compressing them manually, but making the site look like garbage to any browser that doesn't support gzip (there are very few of them now). Note that the extension would still be .html (you don't want to set it to .gz because that implies an archive) but its content would be gzipped, as sketched below.
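A rough sketch of that second option using the AWS SDK for JavaScript v3 (the SDK choice, bucket name, and file name are mine, for illustration): compress the file locally, keep its original key, and set Content-Encoding: gzip on the object.

const { readFileSync } = require('fs')
const { gzipSync } = require('zlib')
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3')

async function uploadGzippedPage() {
  const s3 = new S3Client({})
  // Compress locally but keep the .html key so existing URLs keep working.
  await s3.send(new PutObjectCommand({
    Bucket: 'your-static-site-bucket',          // placeholder bucket name
    Key: 'index.html',
    Body: gzipSync(readFileSync('index.html')),
    ContentType: 'text/html',
    ContentEncoding: 'gzip'                     // browsers will decompress it
  }))
}

uploadGzippedPage().catch(console.error)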
This is the code in my servlet:
InputStream in = new FileInputStream(file); // "file" is the file being served
byte[] bytes = new byte[8192];
int read;
while ((read = in.read(bytes)) != -1) { // read the file into bytes
    response.getOutputStream().write(bytes, 0, read);
    response.getOutputStream().flush();
    log4j.debug(response.isCommitted()); // prints true
}
If my file is 100 MB, the server must read all 100 MB into memory before the browser shows the file-download dialog.
The waiting time in the browser is terrible when my file is greater than 2 GB.
Browser compatibility problems, from Servlet Best Practices, Part 3 by The O'Reilly Java Authors:
The bad news is that although the HTTP specification provides a mechanism for file downloads (see HTTP/1.1, Section 19.5.1), many browsers second-guess the server's directives and do what they think is best rather than what they're told.
The good news is that the right combination of headers will download files well enough to be practical. With these special headers set, a compliant browser will open a Save As dialog, while a noncompliant browser will open the dialog for all content except HTML or image files.
Set the Content-Type header to a nonstandard value such as application/x-download.
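The thread is about a Java servlet, but the headers themselves are plain HTTP. As a rough illustration only, here is the same combination in a small Node.js handler (the Content-Disposition header and the file name are my additions, not quoted from the book excerpt):

const http = require('http')
const fs = require('fs')

http.createServer((req, res) => {
  // A nonstandard content type plus an attachment disposition is the usual
  // combination for prompting a Save As dialog.
  res.setHeader('Content-Type', 'application/x-download')
  res.setHeader('Content-Disposition', 'attachment; filename="report.zip"')
  fs.createReadStream('report.zip').pipe(res)   // placeholder file
}).listen(8080)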