How can I decompress a GZip compressed file in chunks in memory? - gzip

I can decompress a small GZip file in memory, but there are memory limitations on the cloud box this will run on. I can get around this by doing it in chunks (~32 KB). Is there an easy way to split up a GZip-compressed file without reading through it?
Thanks,
Marc

Yes, you can use zlib to read a gzip file in chunks. No, you cannot split a gzip file without decoding it.
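A sketch of that approach using Python's `zlib` module (the function name and chunk size are illustrative, not from the question): feed the compressed file to a `decompressobj` a fixed-size chunk at a time, so memory use stays bounded regardless of file size.

```python
import zlib

def gunzip_chunks(path, chunk_size=32 * 1024):
    """Yield decompressed data from a gzip file, reading it in
    fixed-size chunks so the whole file is never held in memory."""
    # wbits = 16 + MAX_WBITS tells zlib to expect a gzip wrapper
    # (instead of a zlib header or a raw deflate stream).
    d = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)
    with open(path, "rb") as f:
        while True:
            compressed = f.read(chunk_size)
            if not compressed:
                break
            yield d.decompress(compressed)
    # Flush any data buffered inside the decompressor.
    yield d.flush()
```

Note this handles a single gzip member; a multi-member file would need `d.unused_data` checked and a fresh `decompressobj` started on the leftover bytes.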

Related

To decompress an already gzip-compressed x3d file (.x3d.gz)

I have an already gzip-compressed x3d file (.x3d.gz) used to load a 3D model for my webpage. Since the x3d file is large, I've gzipped it. Now how can I decompress it in the browser when the user loads the page? Is it done through an HTTP request header? If so, how do I set that up?

Can I concatenate two already gzipped files (using gzip) and then gunzip them?

Can I concatenate two already gzipped files (using gzip) and then gunzip them?
As of today, I download the gzipped files from remote servers, gunzip them individually, and then cat them together to merge them.
I'm looking to make things faster by merging the gzipped files first and then gunzipping once.
Yes. The concatenation of gzip streams is also a valid gzip stream. The result of gunzipping is the concatenation of the uncompressed data.
You could have just tried it.
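Trying it in Python takes three lines; `gzip.decompress` accepts multi-member streams, so it serves as a quick check of the claim:

```python
import gzip

# Two independently gzipped payloads...
part1 = gzip.compress(b"hello ")
part2 = gzip.compress(b"world")

# ...concatenated byte-for-byte form a valid gzip stream, and
# gunzipping yields the concatenation of the uncompressed data.
combined = part1 + part2
print(gzip.decompress(combined))
```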

Most straightforward way of inflating gzip memory stream

I have a gzipped file that I need to read and decompress in my application. I just read through the zlib manual, and it appears that the zlib functions can operate on memory buffers, but the gzip interface is all file-based.
What is the most common method of dealing with gzipped files like this? Do I need to handle the gzip file format myself, pull out the deflated data, and pass it to the zlib functions?
Note: The reason file-based will not work is because the file is in an archive on a read-only medium, so I can't extract the file first and use the gzip functions from zlib. This is an embedded Linux system.
You need to "read through" the zlib manual again, this time actually reading it. inflateInit2() has an option to decompress gzip streams.
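In zlib's C API the option is passing `windowBits = 16 + MAX_WBITS` to `inflateInit2()`. Keeping this document's examples in Python, the same option is exposed through the `wbits` argument (the function name here is illustrative):

```python
import zlib

def inflate_gzip_buffer(buf: bytes) -> bytes:
    """Decompress a gzip stream held entirely in memory.

    wbits = 16 + MAX_WBITS is the Python equivalent of calling
    inflateInit2() with that windowBits value in C: it tells zlib
    to expect a gzip header rather than a zlib one."""
    return zlib.decompress(buf, wbits=zlib.MAX_WBITS | 16)
```

No temporary file is needed, which suits a read-only medium.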

Why does Amazon S3 return Error 330 for simple files?

I have added the "Content-Encoding: gzip" header to my S3 files, and now when I try to access them I get "Error 330 (net::ERR_CONTENT_DECODING_FAILED)".
Note that my files are simply images, js and css.
How do I solve that issue?
You're going to have to manually gzip them and then upload them to S3. S3 doesn't have the ability to gzip on the fly like your web server does.
EDIT: Images are already compressed so don't gzip them.
I don't know if you are using Grunt as a deployment tool, but if so, you can use this to compress your files:
https://github.com/gruntjs/grunt-contrib-compress
Then:
https://github.com/MathieuLoutre/grunt-aws-s3
to upload the compressed files to Amazon S3. Et voilà!
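If you aren't using Grunt, the pre-upload gzip step can be done with a few lines of Python. This is a hypothetical sketch (the function name, suffix list, and directory layout are assumptions, not part of the question): it gzips text assets and skips images, which are already compressed.

```python
import gzip
import shutil
from pathlib import Path

# Only gzip text assets; images (png/jpg) are already compressed
# and gzipping them is what triggered the original problem.
TEXT_SUFFIXES = {".js", ".css", ".html"}

def gzip_assets(root):
    """Write a .gz sibling next to each text asset under root.

    The .gz files can then be uploaded to S3 with the
    'Content-Encoding: gzip' header set on each object."""
    for path in Path(root).rglob("*"):
        if path.suffix in TEXT_SUFFIXES:
            with open(path, "rb") as src, \
                 gzip.open(str(path) + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
```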

Creating a gzip stream using zlib

How do I create a gzip stream using zlib? Is any example code available?
Here is the gzip file format (RFC 1952). What you need is to output the member header, followed by the raw deflate-compressed data, then the CRC-32 and uncompressed-length trailer.
I hope that's enough...
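As a sketch of those steps in Python (the function name is illustrative; recent zlib versions can also emit the gzip wrapper for you via `windowBits = 16 + MAX_WBITS` on the compression side):

```python
import struct
import zlib

def gzip_member(data: bytes) -> bytes:
    """Build a minimal gzip member by hand, per RFC 1952:
    10-byte header + raw deflate data + CRC-32/length trailer."""
    header = struct.pack(
        "<4BIBB",
        0x1F, 0x8B,  # magic bytes
        8,           # CM: deflate
        0,           # FLG: no optional fields
        0,           # MTIME: 0 = unknown
        0,           # XFL
        255,         # OS: 255 = unknown
    )
    # Negative wbits produces a raw deflate stream with no
    # zlib header or Adler-32 checksum around it.
    co = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS)
    deflated = co.compress(data) + co.flush()
    trailer = struct.pack(
        "<II",
        zlib.crc32(data) & 0xFFFFFFFF,  # CRC-32 of uncompressed data
        len(data) & 0xFFFFFFFF,         # length modulo 2**32
    )
    return header + deflated + trailer
```

The output is a valid single-member gzip stream that `gunzip` or `gzip.decompress` will accept.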