from https://github.com/fmerg/pymerkle
Defense against second-preimage attack consists in the following security measures:
- Before computing the hash of a leaf, prepend the corresponding record with 0x00
- Before computing the hash of any interior node, prepend both of its parents' hashes with 0x01
(I understand that 0x00 and 0x01 are just "specific byte values" and they could be anything)
Why does this work?
Can't an attacker theoretically find a leaf hash of forged content that begins with 0x00?
What good does prepending the parents' hashes do?
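For illustration, here is a minimal sketch of the prefixed hashing described above (RFC 6962 uses the same convention); SHA-256 and the function names are my own choices, not taken from pymerkle:

import hashlib

def hash_leaf(record: bytes) -> bytes:
    # Leaf hashes are computed over 0x00 || record.
    return hashlib.sha256(b"\x00" + record).digest()

def hash_node(left: bytes, right: bytes) -> bytes:
    # Interior-node hashes are computed over 0x01 || left || right.
    return hashlib.sha256(b"\x01" + left + right).digest()

# The prefixes keep the two domains disjoint: a forged "record" equal to
# left || right is hashed with the 0x00 prefix, so it can never reproduce
# the value hash_node(left, right), which was computed with 0x01.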
I'm playing around with the USB controller on the raspberry pi pico. The end goal is for the pico to be able to send keystrokes as a HID keyboard.
In any case, the RP2040 datasheet says that the setup packet is always at the start of the USB controller's DPSRAM (0x50100000). I do get an interrupt signaling that a setup packet has been received, and when I read the setup packet I get the following data:
0x80 - seems to mean device to host, standard type, recipient is device
0x06 - seems to be GET_DESCRIPTOR
0x00 - low byte of wValue, means index zero
0x01 - high byte of wValue, means desc type device
0x00 - low byte of wIndex
0x00 - high byte of wIndex, means index zero
0x00 - low byte of wLength
0x00 - high byte of wLength
The first six bytes are completely what I would expect. But what does a wLength (last two bytes) of 0 mean? The device descriptor seems to have a length of 18 bytes, so I would have expected it to be 0x12, 0x00.
Is a wLength of zero when requesting a descriptor valid or is it more likely that I'm doing something wrong? If it is valid, how would I respond? With a zero length packet?
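For reference, a small sketch (Python, just for illustration) of how those 8 bytes decode when read as a standard USB setup packet; the example bytes are the ones quoted above:

import struct

# The 8 setup-packet bytes quoted above, as read from the start of DPSRAM.
raw = bytes([0x80, 0x06, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00])

# Setup packets are little-endian: bmRequestType, bRequest, wValue, wIndex, wLength.
bmRequestType, bRequest, wValue, wIndex, wLength = struct.unpack("<BBHHH", raw)

print(hex(bmRequestType))  # 0x80: device-to-host, standard, recipient device
print(hex(bRequest))       # 0x6: GET_DESCRIPTOR
print(hex(wValue))         # 0x100: descriptor type 1 (device), index 0
print(wLength)             # 0: the host asked for zero bytes back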
I use AES-128 in CTR mode for encryption, implemented for different clients (Android/Java and iOS/ObjC). The 16-byte IV used when encrypting a packet is formatted like this:
<11 byte nonce> | <4 byte packet counter> | 0
The packet counter (included in a sent packet) is increased by one for every packet sent. The last byte is used as the block counter, so that packets with fewer than 256 blocks always get a unique counter value. I was under the assumption that CTR mode specified that the counter should be increased by 1 for each block, using the last 8 bytes as a big-endian counter, or that this at least was a de facto standard. This also seems to be the case in the Sun crypto implementation.
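A minimal sketch of that IV layout (Python, for illustration; the big-endian packing of the packet counter is an assumption, not stated above):

import os
import struct

def make_iv(nonce: bytes, packet_counter: int) -> bytes:
    # <11-byte nonce> | <4-byte packet counter> | single zero block-counter byte
    assert len(nonce) == 11
    return nonce + struct.pack(">I", packet_counter) + b"\x00"

nonce = os.urandom(11)
iv = make_iv(nonce, 7)    # IV for packet number 7
assert len(iv) == 16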
I was a bit surprised when the corresponding iOS implementation (using CommonCryptor, iOS 5.1) failed to decode every block except the first when decoding a packet. It seems that CommonCryptor defines the counter in some other way. The CommonCryptor can be created in both big-endian and little-endian mode, but some vague comments in the CommonCryptor code indicate that this is not (or at least has not been) fully supported:
http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-60026/Source/API/CommonCryptor.c
/* corecrypto only implements CTR_BE. No use of CTR_LE was found so we're marking
this as unimplemented for now. Also in Lion this was defined in reverse order.
See <rdar://problem/10306112> */
By decoding block by block, each time setting the IV as specified above, it works nicely.
My question: is there a "right" way of implementing the CTR/IV mode when decoding multiple blocks in a single go, or can I expect interoperability problems when using different crypto libs? Is CommonCrypto bugged in this regard, or is it just a question of implementing CTR mode differently?
The definition of the counter is (loosely) specified in NIST recommendation SP 800-38A, Appendix B. Note that NIST only specifies how to use CTR mode with regard to security; it does not define one standard algorithm for the counter.
To answer your question directly: whatever you do, you should expect the counter to be incremented by one for each block. The counter should represent a 128-bit big-endian integer according to the NIST specifications. It may be that only the least significant (rightmost) bits are incremented, but that will usually not make a difference unless you pass the 2^32 - 1 or 2^64 - 1 value.
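A minimal sketch of that increment rule (treating the whole 16-byte block as one big-endian integer):

def increment_counter_block(block: bytes) -> bytes:
    # Interpret the full 16-byte counter block as a 128-bit big-endian
    # integer, add one, and wrap around at 2^128.
    value = (int.from_bytes(block, "big") + 1) % (1 << 128)
    return value.to_bytes(16, "big")

iv = bytes(15) + b"\xff"
print(increment_counter_block(iv).hex())   # ...0100 - the carry propagates leftward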
For the sake of compatibility you could decide to use the first (leftmost) 12 bytes as a random nonce, set the remaining bytes to zero, and then let the CTR implementation do the increments. In that case you simply use a 96-bit / 12-byte random value at the start, and there is no need for a packet counter.
You are however limited to 2^32 * 16 bytes of plaintext before the counter uses up all the available bits. It is implementation specific whether the counter wraps back to zero or carries into the nonce itself, so you may want to limit yourself to messages of 68,719,476,736 bytes = ~68 GB (yes, that's base 10; giga means 1,000,000,000). Two further points to keep in mind:
- because of the birthday problem, you should expect a nonce collision after roughly 2^48 messages (48 = 96 / 2); a fresh nonce is required for each message, not each block, so you should limit the number of messages;
- if some attacker tricks you into decrypting 2^32 packets with the same nonce, you run out of counter.
In case this is still incompatible (test!) then use the initial 8 bytes as nonce. Unfortunately that does mean that you need to limit the number of messages because of the birthday problem.
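A sketch of the 12-byte-nonce approach using the Python cryptography package (my choice of library, not one of those mentioned above), assuming the implementation increments the full block as a big-endian counter:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)      # AES-128 key
nonce = os.urandom(12)    # fresh 96-bit random nonce per message
iv = nonce + bytes(4)     # last 4 bytes start at zero; the library increments them

encryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
ciphertext = encryptor.update(b"message payload ...") + encryptor.finalize()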
Further investigation sheds some light on the CommonCrypto problem:
In iOS 6.0.1 the little-endian option is now unimplemented. Also, I have verified that CommonCrypto is bugged in that the CCCryptorReset method does not in fact change the IV as it should, instead continuing to use the pre-existing IV. The behaviour in 6.0.1 is different from 5.x.
This is potentially a security risk: if you initialize CommonCrypto with a nulled IV and reset it to the actual IV right before encrypting, all your data ends up encrypted with the same (nulled) IV, and multiple streams (that perhaps should have different IVs but use the same key) would leak data via a simple XOR of packets with corresponding counter values.
A Facebook app secret is a string of 32 characters (0-9, a-f) and thus represents a 128-bit byte array. Facebook uses this as the key to generate signed requests using HMAC-SHA256. Is this correct usage? I thought HMAC-SHA256 should use 256-bit keys.
HMAC takes the HASH(key) and uses it as the key if the length of the key is greater than the internal block size of the hash. Thus, a key larger than the internal block size of the hash provides no better security than one of equal size. Shorter keys are zero padded to be equal to the internal block size of the hash as per the HMAC specification.
It's impossible to use a 128-bit key with HMAC-SHA-256. If you mean 128 bits padded out to 512 bits with zeroes, then it's probably alright for short-term authentication. I'd recommend at least 256 bits and ideally you would want to use something equal to the internal block size of the underlying hash.
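A minimal sketch of the key preprocessing described above, for SHA-256, whose internal block size is 64 bytes (the function name is just for illustration):

import hashlib

BLOCK_SIZE = 64  # internal block size of SHA-256, in bytes

def hmac_key_block(key: bytes) -> bytes:
    # Keys longer than the block size are hashed first...
    if len(key) > BLOCK_SIZE:
        key = hashlib.sha256(key).digest()
    # ...and every key is then zero-padded up to the block size.
    return key.ljust(BLOCK_SIZE, b"\x00")

app_secret = bytes.fromhex("0123456789abcdef0123456789abcdef")  # 16 bytes / 128 bits
assert len(hmac_key_block(app_secret)) == BLOCK_SIZE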
The page says that the 256-bit signature is derived from a payload (what Facebook is signing) + your 128-bit salt.
So yes, it sounds like correct usage.
The 16-byte secret (32 characters) isn't actually a key in the sense of something used to encrypt and decrypt data. Rather, it's a bit of data (a salt) that is used to alter the result of the digital signature by changing the input ever so slightly, so that only someone who knew the exact secret and the exact payload could have created the signature.
Is it true that RSA encryption can only handle a limited payload of data? I'm confused by the theory ... theoretically there is no note regarding this ...
RSA encrypts a single message whose length is somewhat smaller than the modulus. Specifically, the message is first "padded", resulting in a sequence of bytes which is then interpreted as a big integer between 0 and n-1, where n is the modulus (part of the public key) -- so the padded message cannot be longer than the modulus, which implies a strict maximum length on the raw message.
Specifically, with the most common padding scheme (PKCS#1 "old-style", aka "v1.5"), the padding adds at least 11 bytes to the message, and the total padded message length must be equal to the modulus length, e.g. 128 bytes for a 1024-bit RSA key. Thus, the maximum message length is 117 bytes. Note that the resulting encrypted message has the same size as the modulus, so the encryption necessarily expands the message size by at least 11 bytes.
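As a sketch of those numbers (Python cryptography package for illustration; the 1024-bit key is used only to match the example above and is too small for real use):

from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
public_key = private_key.public_key()

max_len = 1024 // 8 - 11          # 117 bytes of room under PKCS#1 v1.5
message = b"x" * max_len

ciphertext = public_key.encrypt(message, padding.PKCS1v15())
print(len(ciphertext))            # 128 bytes: same size as the modulus
# A 118-byte message would raise an exception instead.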
The normal way of using RSA for encrypting a big message (say, an e-mail) is to use a hybrid scheme:
A random symmetric key K is chosen (a raw sequence of, e.g., 128 to 256 random bits).
The big message is symmetrically encrypted with K, using a proper and efficient symmetric encryption scheme such as AES.
K is asymmetrically encrypted with RSA.
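A minimal sketch of that hybrid scheme (Python cryptography package; the choices of AES-GCM and OAEP are mine, the text above only requires a proper symmetric scheme and RSA):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def hybrid_encrypt(public_key, message: bytes):
    k = AESGCM.generate_key(bit_length=128)           # 1. random symmetric key K
    nonce = os.urandom(12)
    body = AESGCM(k).encrypt(nonce, message, None)    # 2. encrypt the big message with K
    wrapped_key = public_key.encrypt(                 # 3. encrypt K with RSA
        k,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, nonce, body

wrapped_key, nonce, body = hybrid_encrypt(recipient.public_key(), b"a big message ...")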
"Splitting" a big message into so many 117-byte blocks, each to be encrypted with RSA, is not normally done, for a variety of reasons: it is difficult to do it right without adding extra weaknesses; each block would be expanded by 11 bytes, implying a non-negligible total message size increase (network bandwidth can be a scarce resource); symmetric encryption is much faster.
In the basic RSA algorithm (without padding), which is not very secure, the size of the message is limited to be smaller than the modulus.
To enhance the security of RSA you should use padding schemes as defined in PKCS#1. Depending on the scheme you choose, the maximum size of the message can be significantly smaller than the modulus.
http://en.wikipedia.org/wiki/PKCS1
I've been doing some preliminary research in the area of message digests. Specifically collision attacks of cryptographic hash functions such as MD5 and SHA-1, such as the Postscript example and X.509 certificate duplicate.
From what I can tell, in the case of the Postscript attack, specific data was generated and embedded within the header of the Postscript file (which is ignored during rendering), bringing the internal state of the MD5 computation to a state such that the modified wording of the document would lead to a final MD value equivalent to that of the original Postscript file.
The X.509 attack took a similar approach, whereby data was injected within the comment/whitespace sections of the certificate.
Ok so here is my question, and I can't seem to find anyone asking this question:
Why isn't the length of ONLY the data being consumed added as a final block to the MD calculation?
In the case of X.509 - why are the whitespace and comments being taken into account as part of the MD?
Wouldn't a simple process such as one of the following be enough to resolve the proposed collision attacks:
MD(M + |M|) = xyz
MD(M + |M| + |M| * magicseed_0 +...+ |M| * magicseed_n) = xyz
where :
M : is the message
|M| : size of the message
MD : is the message digest function (eg: md5, sha, whirlpool etc)
xyz : the pairing of the actual message digest value for the message M and |M|, i.e. <M, |M|>
magicseed_i : a set of random values generated with a seed based on the internal state prior to the size being added.
This technique should work, as to date all such collision attacks rely on adding more data to the original message.
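A sketch of the simpler of the two proposals, MD(M + |M|), with MD5 standing in for MD and an 8-byte big-endian length encoding (that encoding is my own choice):

import hashlib
import struct

def md_with_length(message: bytes) -> str:
    # Append the length of the original message as a final 8-byte
    # big-endian field, then hash the result.
    return hashlib.md5(message + struct.pack(">Q", len(message))).hexdigest()

print(md_with_length(b"some document"))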
In short, generating a collision message that:
- not only produces the same MD,
- but is also comprehensible/parsable/compliant,
- and is also the same size as the original message,
is immensely difficult, if not near impossible. Has this approach ever been discussed? Any links to papers etc. would be nice.
Further Question: What is the lower bound for collisions of messages of common length for a hash function H chosen randomly from U, where U is the set of universal hash functions?
Is it 1/N (where N is 2^(|M|)) or is it greater? If it is greater, that implies there is more than one message of length |M| that will map to the same MD value for a given H.
If that is the case, how practical is it to find these other messages? Brute force would be O(2^N); is there a method with time complexity less than brute force?
Can't speak for the rest of the questions, but the first one is fairly simple - adding length data to the input of the MD5, at any stage of the hashing process (1st block, Nth block, final block), just changes the output hash. You couldn't retrieve that length from the output hash string afterwards. It's also entirely possible that a collision could be produced from another string with the exact same length in the first place, so saying "the original string was 17 bytes" is meaningless, because the colliding string could also be 17 bytes.
e.g.
md5("abce(17bytes)fghi") = md5("abdefghi<long sequence of text to produce collision>")
is still possible.
In the case of X.509 certificates specifically, the "comments" are not comments in the programming language sense: they are simply additional attributes with an OID that indicates they are to be interpreted as comments. The signature on a certificate is defined to be over the DER representation of the entire tbsCertificate ('to be signed' certificate) structure which includes all the additional attributes.
Hash function design is pretty deep theory, though, and might be better served on the Theoretical CS Stack Exchange.
As @Marc points out, though, as long as more bits can be modified than the output of the hash function contains, then by the pigeonhole principle a collision must exist for some pair of inputs. Because cryptographic hash functions are in general designed to behave pseudo-randomly over their inputs, collisions will tend toward being uniformly distributed over possible inputs.
EDIT: Incorporating the message length into the final block of the hash function would be equivalent to appending the length of everything that has gone before to the input message, so there's no real need to modify the hash function to do this itself; rather, specify it as part of the usage in a given context. I can see where this would make some types of collision attacks harder to pull off, since if you change the message length there's a changed field "downstream" of the area modified by the attack. However, this wouldn't necessarily impede the X.509 intermediate CA forgery attack since the length of the tbsCertificate is not modified.