What's the hash algorithm of these two hashes?

Does anyone have an idea which hash algorithm was used for these two hashes:
$S$DjzC6BKx24dNLU4UPyiCGXo6bJ3rDYbQdf/waPOwE9X36592NiFi
$S$DDLj98cyEH3azm0QvZq4E59PuczniTbfXiftWf5ED2qtcZYW5MTm
It looks a bit salted, but I cannot determine whether the salt prefix is $S$ or rather $S$D, because I only know these two. The length of these hashes without the substring $S$ is 52.

If it were salted, it would not be this easy to spot the salt.
These are probably Base64 encoded, which means each character encodes 6 bits. Since we have 51 characters apart from the prefix $S$D, that makes 51 × 6 = 306 bits, or just over 38 bytes. That is more than a bare digest, so some of it is most likely an embedded salt, with the hash itself making up the rest. Problem is: there are a gazillion hash functions out there. But I'd go with MD5, since it is so commonly used.
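The length arithmetic is easy to sanity-check by plain character counting (this makes no assumption about which algorithm actually produced the hashes):

```python
# Count the characters after the $S$ marker and work out how many bits
# a Base64-style encoding (6 bits per character) would carry.
hashes = [
    "$S$DjzC6BKx24dNLU4UPyiCGXo6bJ3rDYbQdf/waPOwE9X36592NiFi",
    "$S$DDLj98cyEH3azm0QvZq4E59PuczniTbfXiftWf5ED2qtcZYW5MTm",
]

for h in hashes:
    body = h[len("$S$"):]              # strip the $S$ marker
    after_prefix = len(body) - 1       # also drop the D after the marker
    print(len(body), after_prefix, after_prefix * 6)   # 52 51 306
```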

Related

Does the security level of the hash of multiple files differ with different approaches?

I have the task of calculating a hash over multiple files.
I also already know the hash of each individual file.
There are two approaches:
hash(f1 + f2 + f3)
hash(hash(f1) + hash(f2) + hash(f3))
In the second approach, there will be less computation since I know the hash of each file individually.
Is the security level of these two approaches different?
Which of these approaches is more secure?
I am not strong in cryptography, so I cannot objectively assess the security level of each approach.
TL;DR: use hash(hash(f1) + hash(f2) + hash(f3))
Note: in this answer, + means concatenation. It is never any kind of numerical addition. If you have numerical data, apply my answer after converting the data to byte strings.
There is a problem with hash(f1 + f2 + f3): you can (for example) move some data from the end of f1 to the beginning of f2, and that won't change the hash. Whether this is a problem depends on what constraints there are, if any, on the file formats and on how the files are used.
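A concrete sketch of that ambiguity (SHA-256 is used purely for illustration; any hash shows the same effect):

```python
import hashlib

# Two different splits of the same underlying byte stream: moving the
# trailing "C" of f1 to the front of f2 changes the files themselves
# but not their naive concatenation.
f1a, f2a, f3a = b"AAAABBBBC", b"CCC", b"DDDD"
f1b, f2b, f3b = b"AAAABBBB", b"CCCC", b"DDDD"

h_a = hashlib.sha256(f1a + f2a + f3a).hexdigest()
h_b = hashlib.sha256(f1b + f2b + f3b).hexdigest()

print(h_a == h_b)                       # True: the split is invisible to the hash
print((f1a, f2a) == (f1b, f2b))         # False: the files differ
```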
It's usually hard to make sure in a system design that this isn't a problem. So whenever you combine strings or files for hashing, you should always make sure the combination is unambiguous. There are a few different ways to do it, such as:
Use some existing format that handles the packing of the strings or files for you. For example zip, ASN.1 DER, etc.
Encode each part in a way that doesn't contain a certain byte, and use that byte as a separator. For example encode each part in Base64 and use line breaks as separators.
Define a maximum length for each part. Before each part, encode the length using a fixed-width encoding. For example, if the maximum length of a part is 2^64-1 bytes, encode the unambiguous concatenation of (f1, f2, f3) as:
8 bytes: length(f1)
length(f1) bytes: f1
8 bytes: length(f2)
length(f2) bytes: f2
8 bytes: length(f3)
length(f3) bytes: f3
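The length-prefix scheme above can be sketched like this (an illustration, not a vetted wire format; struct's ">Q" gives the fixed 8-byte big-endian length):

```python
import hashlib
import struct

def encode_parts(*parts: bytes) -> bytes:
    """Unambiguous concatenation: an 8-byte big-endian length before each part."""
    out = b""
    for p in parts:
        out += struct.pack(">Q", len(p)) + p
    return out

# Two different splits of the same byte stream now encode differently,
# because the length fields differ, so their hashes differ as well.
enc_a = encode_parts(b"AAAABBBBC", b"CCC", b"DDDD")
enc_b = encode_parts(b"AAAABBBB", b"CCCC", b"DDDD")
print(hashlib.sha256(enc_a).hexdigest() == hashlib.sha256(enc_b).hexdigest())  # False
```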
If you instead take hashes of hashes, you don't run into this problem, because here you do have a very strong constraint on the strings you're concatenating: they have a well-defined length (whatever the length of the hash algorithm is).
Taking hashes of hashes does not degrade security. It's part of a well-known technique: hash trees. Assuming the hash function is collision-resistant, if hash(hash(f1) + hash(f2) + hash(f3)) = hash(hash(g1) + hash(g2) + hash(g3)) then f1 = g1, f2 = g2 and f3 = g3.
In addition to making the construction and verification easier, this approach lets you save computation if the set of files changes. If you've already stored hash(f1) and hash(f2) and you want to add f3 to the list, you just need to calculate hash(f3), and then the hash of the new list of hashes. This is also very useful for synchronization of data sets. If Alice wants to transmit files to Bob, she can send the hashes first, then Bob verifies which hashes he already knows and tells Alice, and Alice only needs to transmit the files whose hashes Bob doesn't already have.
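A minimal sketch of the hash-of-hashes approach, including the cheap update step when a file is added (SHA-256 chosen only as an example):

```python
import hashlib

def h(data: bytes) -> bytes:
    """One fixed hash function used for both leaves and the root."""
    return hashlib.sha256(data).digest()

files = [b"file one", b"file two"]
leaf_hashes = [h(f) for f in files]     # hash(f1), hash(f2)
root = h(b"".join(leaf_hashes))         # hash(hash(f1) + hash(f2))

# Adding f3 only costs one new leaf hash plus re-hashing the short list
# of fixed-length digests; the existing files are never re-read.
leaf_hashes.append(h(b"file three"))
new_root = h(b"".join(leaf_hashes))
print(root != new_root)                 # True: the root reflects the new file
```

Because every leaf digest has the same fixed length, the concatenation of digests is unambiguous without any extra framing.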

AES-128: what padding method is used in this cipher example?

I have just implemented an AES-128 encryption algorithm, with the following message and key.
Message: "Two One Nine Two" (128 bits)
Key: "Thats my Kung Fu" (128 bits)
The cipher output for this is:
29c3505f571420f6402299b31a02d73a
which is correct when I cross-checked it with online generators.
However, the online generator output is usually longer:
29c3505f571420f6402299b31a02d73ab3e46f11ba8d2b97c18769449a89e868
I tried several padding methods (bit, zero-length, CMS, null, space), but nothing seems to produce exactly the b3e46f11ba8d2b97c18769449a89e868 part of the ciphertext.
Could anyone explain what padding method (in binary) is used to produce those numbers, please?
Thank you @Topaco, the padding is indeed PKCS#7. In this case, since the input is exactly 128 bits, an extra padding block of 128 bits must be appended, consisting of 16 bytes each with the value 16:
00010000 00010000 00010000 00010000 ... (×16)
This gives the correct ciphertext
b3e46f11ba8d2b97c18769449a89e868
for the key in this example.
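For reference, PKCS#7 padding is simple to sketch. The snippet below (standard library only; it shows the padding, not the AES encryption itself) demonstrates why a message that already fills a block gains a whole extra padding block:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """Append n bytes of value n, where n = block_size - len(data) % block_size.
    If data already fills the last block, a whole extra block is appended."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

msg = b"Two One Nine Two"          # exactly 16 bytes = one AES block
padded = pkcs7_pad(msg)

print(len(padded))                 # 32: two blocks go into the cipher
print(padded[16:])                 # 16 bytes, each with value 0x10 (16)
```

Encrypting the second, all-0x10 block is what produces the extra ciphertext block the online generators show.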

How does the Base64 decoding algorithm work?

Does anyone know how the Base64 decoding algorithm works? Many articles, journals, and books on the internet explain the Base64 encoding algorithm, but the decoding is not explained. So my question is: how does the Base64 decoding algorithm work?
Thank you, I hope for your answer.
Basically you take one character at a time and convert it to the six bits it represents. So if you find an A character, it translates into 000000, and the / character translates into 111111. Then you concatenate the bits, so you get 000000 | 111111. This however won't fit into a byte; you have to split up and shift the result to get 00000011 and 1111xxxx, where xxxx is not known yet.
Of course, in a high-performance implementation you may only be able to work with whole bytes, so you get two spurious high bits for each character (separated by a space from the bits that actually mean something).
((00 000000 << 2) & 11111100) | ((00 111111 >> 4) & 00000011) -> 00000011
((00 111111 << 4) & 11110000) | ???????? -> 1111xxxx
...
First, with the shift operator << you put the bits in place. Then with the binary AND operator & you single out the bits you want, and with the binary OR operator | you assemble the bits of the two characters.
After 4 characters you will have 3 full bytes. It may however be that your result is not a multiple of three bytes. In that case the final group has either two or three characters, possibly followed by padding (=) at the end. A single character is not possible, as that would suggest an incomplete byte with only the highest bits set. You should simply ignore the spurious trailing bits encoded by the last character.
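The procedure described above can be sketched as a minimal decoder (standard Base64 alphabet, no error handling, with the = padding simply stripped):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def b64decode(s: str) -> bytes:
    s = s.rstrip("=")                   # padding characters carry no data
    out = bytearray()
    buffer = 0
    bits = 0
    for ch in s:
        buffer = (buffer << 6) | ALPHABET.index(ch)   # append 6 new bits
        bits += 6
        if bits >= 8:                   # a full byte is now available
            bits -= 8
            out.append((buffer >> bits) & 0xFF)
    return bytes(out)                   # leftover bits (< 8) are ignored

print(b64decode("QUJD"))                # b'ABC'
print(b64decode("QA=="))                # b'@' (the single byte 0x40)
```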
Personally I like to use a state machine to do the decoding. I've already created a couple of base 64 streams that use a state machine in Java. It may be useful to only decode once you have 4 characters (3 full bytes) until you are at the end of the base 64 encoding.

Hexadecimal numbers vs. hexadecimal encoding (with Base64 as well)

Encoding with hexadecimal numbers seems to be different from using hexadecimals to represent numbers. For example, the hex number 0x40 to me should be equal to 64, or BA_{64}, but when I put it through this hex-to-Base64 converter, I get the output QA==, which to me is equal to some number times 64. Why is this?
Also, when I check the integer value of the hex string deadbeef I get 3735928559, but when I check it in other places I get: 222 173 190 239. Why is this?
Addendum: So I guess it is because it is easier to break the number into bit chunks than to treat it as a whole number when encoding? That is pretty confusing to me, but I guess I get it.
You may wish to read this:
http://en.wikipedia.org/wiki/Base64
In summary, base64 specifies a specific encoding, which involves using different values for letters than their ASCII encoding.
For the second part, one source is treating the entire string as a 32 bit integer, and the other is dividing it into bytes and giving the value of each byte.
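Both points can be verified directly (a small Python illustration using only the standard library):

```python
import base64

# 0x40 is the *number* 64, but the single *byte* 0x40 Base64-encodes to
# "QA==": hex-to-Base64 converters re-encode bytes, they do not convert
# numeric values between bases.
print(int("40", 16))                           # 64
print(base64.b64encode(bytes.fromhex("40")))   # b'QA=='

# "deadbeef" read as one 32-bit integer vs. as four separate byte values.
print(int("deadbeef", 16))                     # 3735928559
print(list(bytes.fromhex("deadbeef")))         # [222, 173, 190, 239]
```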

Why does this code encode the random salt as hexadecimal digits first?

I'm looking at some existing code that is generating a salt which is used as input into an authentication hash.
The salt is 16 bytes long, and is generated by first using an OS random number generator to get 8 bytes of random data.
Then each byte in the 8 byte buffer is used to place data into 2 bytes of the 16 byte buffer as follows:
out[j] = hexTable[data[i] & 0xF];
out[j-1] = hexTable[data[i] >> 4 & 0xF];
Where out is the 16-byte salt, data is the initial 8-byte buffer, j and i are just loop counters, and hexTable is just an array of the hex digits, i.e. 0 to F.
Why is all this being done? Why isn't the 16 byte salt just populated with random data to begin with? Why go through this elaborate process?
Is what is being done here a standard way of generating salts? What's the benefit and point of this over just generating 16 random bytes in the first place?
This is simply a conversion of your 8 random bytes into 16 hexadecimal digits.
It seems that someone misunderstood the concept of salt, or what input your hash needs, and thought it only accepts hexadecimal digits.
Maybe also the salt is stored somewhere where it is easier to store hexadecimal digits instead of pure bytes, and the programmer thought it would be good to be able to reuse the stored salt as-is (i.e. without converting it back to bytes first).
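For comparison, the loop from the question is equivalent to the following sketch (the hex table and the high-nibble-first ordering are taken from the code shown above):

```python
import os

HEX_TABLE = "0123456789abcdef"

data = os.urandom(8)                     # 8 random bytes from the OS RNG
out = ""
for byte in data:
    out += HEX_TABLE[(byte >> 4) & 0xF]  # high nibble first
    out += HEX_TABLE[byte & 0xF]         # low nibble second

print(len(out))                          # 16 hex characters
print(out == data.hex())                 # True: same as the stdlib conversion
```

Note that the result still contains only 64 bits of randomness spread over 16 characters, whereas 16 raw random bytes would contain 128 bits.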