unzip a string zipped with gzip - gzip

I am using an IDS that allows me to look at the responses sent by our servers to an attacker. The problem is that the response is compressed with GZIP, and there is no way to save the text as a gzip file. Right now my only solution is to bother customer support to have them decompress the text. I have tried several online tools without success. How can I decompress a string?

From your comment, if you have the payload in hex from your tool and you are able to copy/paste it to a file, then:
you can reverse the hex to binary with the xxd tool
you can inflate the data with the gunzip tool
This will only work if you have the full payload for a query.
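For illustration, here is a minimal Python sketch of those two steps (the file name payload.hex is an assumption; it stands for wherever you pasted the hex dump). It does the same thing as piping xxd -r -p into gunzip:

```python
import gzip

# "payload.hex" is an assumed file name holding the pasted hex dump of the response.
with open("payload.hex") as f:
    hex_text = "".join(f.read().split())   # drop spaces/newlines from the dump

raw = bytes.fromhex(hex_text)   # hex -> binary, like `xxd -r -p`
body = gzip.decompress(raw)     # inflate, like `gunzip`

print(body.decode("utf-8", errors="replace"))
```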

Related

CKAN: How do I specify the encoding of my resource file?

I'm publishing my CSV files to CKAN using the API. I want to make my data easy to open in Brazilian Excel, so it must have:
semicolon ";" separated columns
coma "," as a decimal separator
use encoding cp-1252
I'm using Data Store and Data Pusher.
My problem is that if I upload my data with encoding cp-1252, Data Pusher sends it as is to the Data Store, which expects the data as UTF-8. The data preview then doesn't display the accents correctly; for example, "Março" is a value that should display correctly.
I want my users to download the data as cp-1252, so it opens easily in Excel, but I also want CKAN to display it correctly. So I need to specify the encoding of the file while uploading it.
I couldn't specify the encoding directly, but taking a look at the Data Pusher source I saw that it uses the Messy Tables library. Messy Tables obeys the locale environment settings of the host machine, so I configured it to pt_BR.UTF8 and my accents worked fine.
Now I can publish my data with commas as a decimal separator and using encoding Windows-1252. The data correctly opens in Excel when it is downloaded, and is also correctly displayed in the Data Explorer.
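For reference, here is a small Python sketch of a file in the target format (the file and column names are made up for the example): semicolon-separated columns, decimal commas, cp-1252 encoding.

```python
import csv

# Hypothetical rows; note the decimal comma inside the values.
rows = [
    ["mes", "valor"],
    ["Março", "1.234,56"],
]

# Semicolon-separated and cp-1252 encoded, the way Brazilian Excel expects it.
with open("dados.csv", "w", newline="", encoding="cp1252") as f:
    csv.writer(f, delimiter=";").writerows(rows)
```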

Using DSPSTMF to display a STMF in the browser, but it is all junk and it downloads the file instead of displaying it. Also, any idea about the CONTTYPES file?

I am using the CGI DSPSTMF command to display a STMF file in a web browser. I am copying a spool file to a STMF using CPYSPLF with the *STMF option. Once copied, I pass the IFS location to the DSPSTMF command, but the file is downloaded automatically, and when I open the downloaded file I see all junk data. Any idea why?
Also, I noticed it uses the CONTTYPES file in CGILIB, and on my server it is empty. What should the values in it be, and what should I do to show correct data instead of junk? I tried different methods to copy the file to the IFS, such as CPYTOSTMF instead of CPYSPLF, but while the file looks correct on the IFS, the downloaded version does not.
What CCSID is the resulting stream file tagged with?
Use WRKLNK and option 8=Display attributes.
If it is 65535, that tells the system the data is binary, and it won't try to translate the EBCDIC to ASCII.
The correct fix is to properly configure your IBM i so that the stream file is tagged with its correct CCSID.
Do a WRKSYSVAL QCCSID; if your system is still set to 65535, that's the start of your problem. But this isn't programming related; you can try posting to Server Fault, but you might get better responses on the Midrange mailing list.
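To see why a 65535 tag produces junk, here is a small Python sketch (CCSID 37 is assumed for the EBCDIC data; your spool data may use a different CCSID). The same bytes are readable only after translation:

```python
# "HELLO" encoded in EBCDIC (CCSID 37) -- what the browser receives if the
# stream file is tagged 65535 and the system never translates it.
ebcdic = bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6])

print(ebcdic.decode("latin-1"))  # junk, e.g. 'ÈÅÓÓÖ' -- served as-is
print(ebcdic.decode("cp037"))    # 'HELLO' -- after EBCDIC-to-ASCII translation
```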

Base64 PDF difference on different servers

I have a webservice application that returns a base64 encoded PDF file, created with Aspose. This webservice is now installed on a different (Windows) server for testing purposes. However, when I call the webservice on the new server, the base64 is different from the original base64 on the first server.
I would like to understand why the base64 on the different servers is different. I converted the base64 to PDF and checked the PDF file, and it looks the same (besides the size of the PDF file, which was originally 18 kB but is 14 kB on the new server). Later we will need to install this webservice on multiple servers, and we hope the base64 will be the same on all servers, so the base64 can be checked to verify that the response is correct.
As far as I know there shouldn't be any information about the server within the base64, so this couldn't be the difference. Besides this, the font that is used is also available on both servers. I already checked the metadata and didn't see any information there.
Could anyone help me and explain why these base64 strings are different and where the difference comes from?
Update:
I just uploaded the 2 PDF files, so it is easier to help me analyse the differences. These are the 2 PDF files:
Original server:
http://www.filedropper.com/pdforiginalserver
New server:
http://www.filedropper.com/pdfnewserver
I hope this makes it easier to help me with this problem.
The PDFs both embed a subset of the font Calibri, but on those two servers apparently different versions of that font were available to the PDF producer to create a subset from:
On the original server Calibri version 6.18 (copyrighted 2016) was used.
On the new server Calibri version 5.9.0 (copyrighted 2014) is used.
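As a side note, if you want to pin down whether two such responses really differ without eyeballing the PDFs, here is a rough Python sketch (the two file names are placeholders for the saved base64 strings) that decodes each one and compares size and hash:

```python
import base64
import hashlib

def summarize(path):
    # The file is assumed to contain the raw base64 string returned by the webservice.
    with open(path) as f:
        pdf = base64.b64decode(f.read())
    return len(pdf), hashlib.sha256(pdf).hexdigest()

for name in ("pdf_original_server.b64", "pdf_new_server.b64"):
    size, digest = summarize(name)
    print(f"{name}: {size} bytes, sha256={digest}")
```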

Is there a standard metadata key in S3 for storing MD5SUM of a large object?

S3 supports returning MD5SUMs for most objects in the ETag header. However for objects which have been uploaded in chunks the ETag is no longer the MD5SUM - it can still be used for checking integrity but I really need the MD5SUM.
I'd like to store the MD5SUM in the object metadata so I can retrieve it easily on large objects. However before I make up a key for this - is there a standard one in use by other software?
There is no standard that I've been able to identify, and frankly, too much of the software that has been written for S3 is not very well done -- S3 provides mechanisms like the Content-MD5 upload header that ensures S3 will flatly reject an upload corrupted in transit... which some developers don't seem to bother with -- so precedent might not be worth following, in any event.
But I've struggled with this same issue on multiple levels.
Note, though, that it is possible to calculate the S3 multipart etag of an S3 upload from a local file, if you know the part size used during the upload (which, again, screams out for a standard header for saving this information, which is otherwise lost if you don't preserve it or use a standard value). You take the md5 of each part, in binary (not hex), concatenate them, take the md5 of that (in hex this time), append a "-" plus the number of parts, and voilà, you have the multipart etag.
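A minimal Python sketch of that calculation (the file name and the 8 MiB part size are assumptions; use the part size that was actually used for the upload):

```python
import hashlib

def multipart_etag(path, part_size=8 * 1024 * 1024):
    """Reproduce S3's multipart ETag: md5 the concatenated binary md5s of
    each part, then append '-' and the number of parts."""
    part_md5s = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size)
            if not chunk:
                break
            part_md5s.append(hashlib.md5(chunk).digest())  # binary digest per part
    combined = hashlib.md5(b"".join(part_md5s)).hexdigest()
    return f"{combined}-{len(part_md5s)}"

print(multipart_etag("bigfile.bin"))  # prints something like "<hex>-<number of parts>"
```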
My (unreleased, internal-use) tool comically named "pedantic uploader" uses x-amz-meta-content-md5hex to store the hex-encoded md5 of the entire file, as well as x-amz-meta-content-sha256hex to store the sha256. I originally used x-amz-meta-content-md5 but that's potentially ambiguous since it could be base64-encoded.
If the object uses Content-Encoding: gzip, the attributes of the payload inside the gzip are also noted in the metadata by my code, with keys such as x-amz-meta-identity-content-md5hex, and the uncompressed byte count as x-amz-meta-identity-content-length, with "identity" referring to the unencoded payload before compression. I store the upload part size in bytes as x-amz-meta-multipart-part-size, and since I pre-calculate what S3 should generate for the etag, I save that as x-amz-meta-expect-etag.
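If you adopt those key names, here is a hedged boto3 sketch of a simple (non-multipart) upload that stores them (the bucket, key, and path are placeholders; boto3 adds the x-amz-meta- prefix to Metadata keys for you):

```python
import base64
import hashlib

import boto3

def upload_with_checksums(path, bucket, key):
    # Placeholder path/bucket/key; reads the whole object into memory for simplicity.
    with open(path, "rb") as f:
        body = f.read()

    md5 = hashlib.md5(body)
    boto3.client("s3").put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        # Content-MD5 lets S3 reject an upload corrupted in transit.
        ContentMD5=base64.b64encode(md5.digest()).decode("ascii"),
        Metadata={
            "content-md5hex": md5.hexdigest(),
            "content-sha256hex": hashlib.sha256(body).hexdigest(),
        },
    )

upload_with_checksums("bigfile.bin", "my-bucket", "bigfile.bin")
```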
Not sure if this helps.

mysql workbench not exporting data with utf8

I have a database encoded in utf8 and I want to export it to a dump file. The problem is that when I do, the data in the dump file is not encoded in utf8. Is there a way to define the encoding when creating the dump file?
Your DB, when you created it, may have been using another encoding besides UTF-8. You may want to refer to this article about how to change the encoding settings. Hopefully, once that has been changed, you will be able to export.
https://dev.mysql.com/doc/refman/5.0/en/charset-applications.html
This doc shows you how to set the encoding per table, as well as how to change your encoding via the CLI.
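If you end up scripting the export instead of using Workbench, here is a hedged Python sketch of calling mysqldump with an explicit character set (the user, database, and output file are placeholders; --default-character-set is a standard mysqldump option):

```python
import subprocess

# Placeholders: adjust the user, database name, and output path for your setup.
with open("dump.sql", "wb") as out:
    subprocess.run(
        [
            "mysqldump",
            "--user=root",
            "--password",                       # prompts for the password
            "--default-character-set=utf8mb4",  # write the dump as UTF-8
            "mydatabase",
        ],
        stdout=out,
        check=True,
    )
```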
Cheers.