What format is this "passcode" file (found alongside a keystore)?

The GlobalProtect VPN client seems to use a keystore in PFX format plus a pan_client_cert_passcode.dat file, and I want to find out what format that file is (i.e. how to decode it).
I happen to know the passphrase (8 characters), and the passcode file appears to be binary, 16 bytes long.
What format could it be?
EDIT:
To help find out: hex dump is 919e **** 77** **73 e18c fd17 d71b 1ddb and known plaintext is VD...ert (each . is a letter, each * is a hex digit)

A Google search for pan_client_cert_passcode dat gave me a result: there is information about this file on paloaltonetworks.com.
It seems you must have received a .p12 file from Palo Alto, and you then create the PFX + .dat pair from it like this:
globalprotect import-certificate --location cert.p12
See the link. The .dat file is definitely a proprietary file format used by the Palo Alto GlobalProtect program. It is probably encoded because they don't want to store plain-text passwords on disk.

Related

Need Online Tool to Convert GZip compression text to ASCII (Readable) text

I am trying to view data in a Redis database.
The data was compressed with the Lettuce 6.1.1 client library.
It uses a GZIP compression type.
I have tried several online tools to convert the GZIP text to a readable ASCII format.
The tools fail because they do not recognize the text as GZIP data. Maybe it has something to do with the compression algorithm Lettuce uses to compress the data.
Can anyone point me to a tool where I can decompress this data to readable ASCII text?
Here is an example of the compressed data:
\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x00\xABVN-\xCBLNu,JM\xF4+\xCDMJ-R\xB2R2604\xB44Q\xAA\x05\x00\x190\x9B\xD1\x1E\x00\x00\x00
This should translate to a number: 301194
Here is a second example:
1.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x01\x00\x93\xC0t\xC3\x06\x00\x00\x00
2.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB0\xB0\x04\x00o\x8D\xDE\xA4\x06\x00\x00\x00
3.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x07\x00)\x91}Z\x06\x00\x00\x00
4.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x03\x00\xBF\xA1z-\x06\x00\x00\x00
5.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x00\x00\x8A\x04\x19\xC4\x06\x00\x00\x00
6.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x02\x00\xA6e\x17*\x06\x00\x00\x00
7.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003604\xB44\x01\x00J\x05\x03\xD0\x06\x00\x00\x00
This should be a list of 7 service area numbers.
Not sure of the order but the values should be:
302090
302092
302097
302094
302096
302089
301194
I tried using this online tool:
https://codebeautify.org/gzip-decompress-online
No output appears in the translation window and no error is shown.
I also tried this website:
https://www.multiutil.com/gzip-to-text-decompress/
I get the error: Invalid compression text
UPDATE
A RedisInsight screenshot shows the key-value information; the value that is compressed as gzip is what I would like to translate.
I wanted to copy the highlighted value and decompress it so I can document what is stored in the database.
There is nothing wrong with your examples 1 through 7. They are all valid gzip streams that decompress to:
302094
302089
302097
302096
302090
302092
301194
Your first example in the question, however, has an error in the integrity check at the end. It decodes to:
{#eviceAreaNumber":"301194"}
While the deflate compressed data in the gzip stream is valid, the CRC that follows it is not. The uncompressed length after that is incorrect as well.
The online tools you point to are expecting Base64-encoded data, not the partial hex encodings you are trying there.
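If you have Python available, the standard library can decompress these directly; here is a minimal sketch using example 7 from the question (the \xNN escaped string happens to be a valid Python bytes literal):
import gzip

# example 7 from the question, pasted as a bytes literal
raw = b"\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003604\xB44\x01\x00J\x05\x03\xD0\x06\x00\x00\x00"
print(gzip.decompress(raw).decode("ascii"))  # prints 301194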

UnicodeDecodeError: ‘utf-8’ codec can’t decode byte 0xf1 in position 990: invalid continuation byte

I am comparing two CSV files to each other to produce a final file with the gathered difference information, and it is giving me this error message. I have re-saved all files as CSV encoded with UTF-8 and tried running again, but it does not work. Can someone help me?
The problem is that your file is not in UTF-8 format. Many tools will refuse to handle data that is claimed to be UTF-8, but isn’t. I’d check first if that file is actually UTF-8 or is stored in some different encoding.
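If it helps to pin down the actual encoding, here is a minimal Python sketch (input.csv is a placeholder name, and the list of candidate encodings is just a guess) that reports where each candidate fails:
candidates = ["utf-8", "cp1252", "latin-1"]
with open("input.csv", "rb") as f:  # placeholder file name
    raw = f.read()
for enc in candidates:
    try:
        raw.decode(enc)
        print(enc, "decodes without errors")
    except UnicodeDecodeError as e:
        print(enc, "fails at byte offset", e.start, "value", hex(raw[e.start]))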

SHA256 generation different for file and content of this file

I use online SHA256 converters to calculate a hash for a given file. There, I have seen an effect I don't understand.
For testing purposes, I wanted to calculate the hash for a very simple file. I named it "test.txt", and its only content is the string "abc", followed by a new line (I just pressed enter).
Now, when I put "abc" and newline into a SHA256 generator, I get the hash
edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb
But when I put the complete file into the same generator, I get the hash
552bab6864c7a7b69a502ed1854b9245c0e1a30f008aaa0b281da62585fdb025
Where does the difference come from? I used this generator (in fact, I tried several, and they all yield the same result):
https://emn178.github.io/online-tools/sha256_checksum.html
Note that this difference does not arise without newlines. If the file just contains the string "abc", the hash is
ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
for the file as well as just for the content.
As noted in my comment, the difference is caused by how newline characters are represented across different operating systems (see details here):
On UNIX and UNIX-like systems, newlines are represented by a line feed character (\n).
On DOS and Windows systems, newlines are represented by a carriage return followed by a line feed character (\r\n).
Compare the following two commands and their output, corresponding to the SHA256 values in your question:
echo -en "abc\n" | sha256sum
edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb
echo -en "abc\r\n" | sha256sum
552bab6864c7a7b69a502ed1854b9245c0e1a30f008aaa0b281da62585fdb025
The issue you are having comes from how the newline is encoded.
On Windows a newline is represented as \r\n, while on Linux it is just \n.
These are different characters with different decimal values (\r is 13 and \n is 10).
More info you can find here:
https://en.wikipedia.org/wiki/Newline
https://en.wikipedia.org/wiki/List_of_Unicode_characters
I faced the same issue, but looking at the data in hex mode helped me understand the actual behavior.
Canonicalization of the data needs to be performed before the SHA calculation, which will eliminate such issues. Canonicalization needs to be performed both on the generation side and on the verification side.
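As an illustration, here is a minimal Python sketch, assuming that normalizing line endings is the only canonicalization needed:
import hashlib

def canonical_sha256(data: bytes) -> str:
    # normalize Windows line endings (\r\n) to Unix (\n) before hashing
    canonical = data.replace(b"\r\n", b"\n")
    return hashlib.sha256(canonical).hexdigest()

# both now yield edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb
print(canonical_sha256(b"abc\n"))
print(canonical_sha256(b"abc\r\n"))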

Convert X.509 certificate from hex form to .cer format

How can I convert an X.509 certificate from a hex-dump form to .CER format? Additionally, should the blank-space separators be removed from the hex dump first? Thanks.
You could use ASN.1 Editor. It has a data converter that will convert hex data to PEM format. The source is also available, so if you need to do this in code you can look at how it is done.
Or you could use certutil.exe using command
certutil -decodehex c:\cert.hex c:\cert.cer
Normally a .cer file is either binary encoded (DER) or DER wrapped in "ASCII armor" (PEM).
If you want to create DER, you probably only have to decode the hexadecimals. In that case you obviously need to disregard any whitespace within the hexadecimals (as well as any other spurious data that may be present).
If you want PEM, you additionally need to Base64-encode the result and add a header and footer line. Or you can use an existing library that does this, of course.
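If you would rather do the conversion in code than with certutil, here is a minimal Python sketch (cert.hex, cert.cer and cert.pem are placeholder file names):
import base64, textwrap

with open("cert.hex") as f:
    hex_text = f.read()

der = bytes.fromhex("".join(hex_text.split()))  # drop all whitespace, then decode the hex digits

with open("cert.cer", "wb") as f:  # binary DER form
    f.write(der)

# PEM is just Base64 of the DER bytes wrapped in a header and footer line
b64 = base64.b64encode(der).decode("ascii")
pem_lines = ["-----BEGIN CERTIFICATE-----"] + textwrap.wrap(b64, 64) + ["-----END CERTIFICATE-----"]
with open("cert.pem", "w") as f:
    f.write("\n".join(pem_lines) + "\n")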

RSA pubkey file type detection

I have an RSA pubkey.dat file (its purpose is fairly obvious) whose contents have the following structure:
ASN1 Integer of around 1024 bits (Modulus)
ASN1 Integer (Exponent)
Blob of 256 bytes (Signature)
There are no tags like "-----BEGIN-----" or similar; it contains just the raw hex values.
Is there any way to identify its format (DER, PEM, etc.) so I can open it with Python crypto libraries or Crypto++ in C++?
(Or, if it matches a publicly standardized structure, the name of that structure so I can check.)
It seems it is not PEM, as M2Crypto can't load it.
Thanks in advance.
PEM encoding has a mandatory format:
-----BEGIN typeName-----
base64 of DER value
-----END typeName-----
where, for public keys, typeName="PUBLIC KEY" (AFAIR) so that's very easy to check with a regular expression such as the following:
/-----BEGIN [^-]+-----([A-Za-z0-9+\/=\s]+)-----END [^-]+-----/
If it's not PEM, it's usually plain DER.
The DER representation of an ASN.1 SEQUENCE always begins with 0x30, so when I have to decode a DER-or-PEM stream that I know for sure is an ASN.1 SEQUENCE (most complex values are SEQUENCEs anyway), I check the first byte: if it's 0x30, I decode it as DER, otherwise as PEM.
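A minimal Python sketch of that heuristic (pubkey.dat is the file name from the question; the "unknown" label for anything else is my own addition):
import re

def detect_format(data: bytes) -> str:
    # PEM: "-----BEGIN ...-----", Base64 body, "-----END ...-----"
    if re.search(rb"-----BEGIN [^-]+-----[A-Za-z0-9+/=\s]+-----END [^-]+-----", data):
        return "PEM"
    # DER encoding of an ASN.1 SEQUENCE starts with tag byte 0x30
    if data[:1] == b"\x30":
        return "DER (likely an ASN.1 SEQUENCE)"
    return "unknown (possibly a raw or proprietary layout)"

with open("pubkey.dat", "rb") as f:
    print(detect_format(f.read()))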
You can also check your ASN.1 data quickly using my own open-source ASN.1 parser (it's all client-side JavaScript, so I won't see your data).