To decompress an already gzip-compressed x3d file (.x3d.gz) - xmlhttprequest

I have an x3d file that loads a 3D model on my web page. Since the file is large, I have already gzip-compressed it (.x3d.gz). How can I decompress it in the browser when a user loads the page? Is it done through an HTTP request header? If so, how do I set that up?
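One common approach is not to decompress in script at all: serve the .x3d.gz bytes with a Content-Encoding: gzip response header, and the browser inflates the payload transparently before XMLHttpRequest ever sees it. A minimal sketch with Python's built-in test server follows; the port, file-extension check, and MIME type are assumptions for illustration, and a production server such as nginx or Apache would set the same headers in its configuration.

```python
# Minimal sketch: serve the pre-compressed file as-is, but label it with
# Content-Encoding: gzip so the browser inflates it before the page's
# XMLHttpRequest sees the bytes. Port and MIME type are assumptions.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class GzipX3DHandler(SimpleHTTPRequestHandler):
    def guess_type(self, path):
        # Report the underlying XML media type for pre-compressed x3d files.
        if path.endswith(".x3d.gz"):
            return "model/x3d+xml"
        return super().guess_type(path)

    def end_headers(self):
        if self.path.endswith(".x3d.gz"):
            # The bytes on disk are already gzip; the browser will decode them.
            self.send_header("Content-Encoding", "gzip")
        super().end_headers()

HTTPServer(("", 8000), GzipX3DHandler).serve_forever()
```

With that header in place, the XMLHttpRequest in the page receives the already-decompressed X3D text and no JavaScript-side inflation is needed.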

Related

Consume gzip files with Databricks Autoloader

I am currently unable to find a direct way to load .gz files via Autoloader. I can load the files as binary content, but I cannot extract the compressed XML files and process them further in a streaming way.
Therefore, I would like to know whether there is a way to consume the content of a gzip file via Databricks Autoloader.
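For reference, a hedged sketch of the binary-content route mentioned above: read the .gz files with Auto Loader's binaryFile format, then inflate each payload with gzip in a UDF. The input path and column names are placeholders, and each file's content is decompressed in memory rather than streamed within the file.

```python
# Sketch (assumed path and column names): load .gz files as binary content
# with Auto Loader, then decompress each payload with gzip in a UDF.
import gzip
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

@udf(StringType())
def gunzip_bytes(payload):
    # 'content' holds the raw gzip bytes of one file; inflate to UTF-8 XML text.
    return gzip.decompress(bytes(payload)).decode("utf-8")

df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "binaryFile")
      .load("/mnt/raw/xml-gz/"))          # input path is a placeholder

xml_df = df.withColumn("xml", gunzip_bytes("content"))
```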

How can I decompress a GZip-compressed file in chunks in memory?

I can decompress a small GZip file in memory, but there are memory limitations on the cloud box this will run on. I can get around this by doing it in chunks (~32 KB). Is there an easy way to split up a GZip-compressed file without reading through it?
Thanks,
Marc
Yes, you can use zlib to read a gzip file in chunks. No, you cannot split a gzip file without decoding it.
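A minimal sketch of that chunked zlib approach (the file names and the ~32 KB chunk size are illustrative):

```python
# Decompress a gzip file incrementally with zlib so only ~32 KB of compressed
# data is held in memory at a time. File names and chunk size are illustrative.
import zlib

CHUNK = 32 * 1024

def gunzip_stream(src_path, dst_path):
    # wbits=47 (32 + 15) tells zlib to auto-detect a gzip or zlib header.
    d = zlib.decompressobj(wbits=47)
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK)
            if not chunk:
                break
            dst.write(d.decompress(chunk))
        dst.write(d.flush())

gunzip_stream("input.gz", "output.txt")
```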

File MIME Type Detection

I need to identify the MIME type of a file. The file is uploaded through the browser and is read by the server in chunks asynchronously. Is it possible to detect the file's MIME type with Apache Tika from just the first few chunks rather than the whole file's data?
Is there any other tool that can detect the MIME type from the first few chunks?
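One option other than Tika is libmagic via the python-magic package, which classifies a byte buffer rather than a whole file; a sketch, with the 2048-byte chunk size and file name as assumptions:

```python
# Sketch: detect the MIME type from only the first chunk of an upload using
# python-magic (libmagic bindings) - a different tool than Apache Tika.
# 2048 bytes is an assumed chunk size; most magic numbers sit near the start.
import magic

def detect_mime(first_chunk: bytes) -> str:
    return magic.from_buffer(first_chunk, mime=True)

with open("upload.bin", "rb") as f:           # placeholder file name
    print(detect_mime(f.read(2048)))          # e.g. 'application/gzip'
```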

How can I upload a gzipped JSON file to BigQuery via the HTTP API?

When I try to upload an uncompressed JSON file, it works fine; but when I try a gzipped version of the same file, the job fails with a lexical error caused by a failure to parse the JSON content.
I gzipped the JSON file with the gzip command on Mac OS X 10.8 and set sourceFormat to "NEWLINE_DELIMITED_JSON".
Did I do something incorrectly, or should gzipped JSON files be handled differently?
I believe that with a multipart/related request it is not possible to submit binary data (such as the compressed file). However, if you don't want to use uncompressed data, you may be able to use a resumable upload.
What language are you coding in? The Python jobs.insert() API takes a media upload parameter, which you should be able to point at a filename in order to do a resumable upload (which sends your job metadata and new table data as separate streams). I was able to use this to upload a compressed file.
This is what bq.py uses, so you could look at its source code.
If you aren't using Python, the Google APIs client libraries for other languages should have similar functionality.
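A hedged sketch of that route with the google-api-python-client discovery interface; the project, dataset, table, and file names are placeholders, and credentials are assumed to come from the environment:

```python
# Sketch: resumable media upload of a gzipped newline-delimited JSON file via
# the BigQuery v2 jobs.insert endpoint (google-api-python-client).
# Project/dataset/table names and the file name are placeholders.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

service = build("bigquery", "v2")   # assumes default application credentials

job_body = {
    "configuration": {
        "load": {
            "sourceFormat": "NEWLINE_DELIMITED_JSON",
            "destinationTable": {
                "projectId": "my-project",
                "datasetId": "my_dataset",
                "tableId": "my_table",
            },
        }
    }
}

# resumable=True sends the job metadata and the table data as separate streams.
media = MediaFileUpload("data.json.gz",
                        mimetype="application/octet-stream",
                        resumable=True)

job = service.jobs().insert(projectId="my-project",
                            body=job_body,
                            media_body=media).execute()
```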
You can also upload gzipped files to Google Cloud Storage, and BigQuery will be able to ingest them with a load job:
https://developers.google.com/bigquery/loading-data-into-bigquery#loaddatagcs
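A sketch of that two-step route using the current google-cloud-storage and google-cloud-bigquery client libraries (newer than the API discussed above); bucket, table, and file names are placeholders:

```python
# Sketch: stage the gzipped JSON file in Google Cloud Storage, then run a
# BigQuery load job against the gs:// URI. All names below are placeholders.
from google.cloud import bigquery, storage

storage.Client().bucket("my-bucket").blob("data.json.gz") \
    .upload_from_filename("data.json.gz")

bq = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
)
load_job = bq.load_table_from_uri(
    "gs://my-bucket/data.json.gz",
    "my-project.my_dataset.my_table",
    job_config=job_config,
)
load_job.result()  # waits for the load to finish
```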

Why does Amazon S3 return an Error 330 for simple files?

I added the "Content-Encoding: gzip" header to my S3 files, and now when I try to access them I get "Error 330 (net::ERR_CONTENT_DECODING_FAILED)".
Note that my files are simply images, JS, and CSS.
How do I solve this issue?
You're going to have to manually gzip them and then upload them to S3. S3 doesn't have the ability to gzip on the fly like your web server does.
EDIT: Images are already compressed, so don't gzip them.
I don't know if you are using Grunt as a deployment tool, but if so, you can use this to compress your files:
https://github.com/gruntjs/grunt-contrib-compress
Then use this to upload the compressed files to Amazon S3:
https://github.com/MathieuLoutre/grunt-aws-s3
Et voilà!
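If you are not using Grunt, the same idea can be scripted, for example with Python and boto3 (a sketch; the bucket, key, and file names are assumptions): compress the text asset up front, then upload it with matching Content-Type and Content-Encoding headers.

```python
# Sketch: pre-compress a text asset and upload it to S3 with matching
# Content-Type and Content-Encoding headers (boto3). Names are placeholders.
import gzip
import boto3

with open("styles.css", "rb") as f:
    compressed = gzip.compress(f.read())

boto3.client("s3").put_object(
    Bucket="my-bucket",
    Key="assets/styles.css",
    Body=compressed,
    ContentType="text/css",
    ContentEncoding="gzip",   # must match the actual bytes; a mismatch is what
                              # triggers ERR_CONTENT_DECODING_FAILED as above
)
```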