I'm looking into using Twofish to encrypt strings of data. Before trusting my precious data to an unknown library, I wish to verify that it agrees with the known answer tests published on Bruce Schneier's website.
To my dismay, I tried three Twofish implementations and found none that agree with the KAT. This leads me to believe that I'm doing something wrong, and I'm wondering if someone could tell me what it is.
I've made sure the mode is the same (CBC), the key length is the same (128 bits) and the IV/key/PT values are the same. Is there an additional parameter in play for Twofish encryption?
Here are the first two test entries from CBC_E_M.txt from the KAT archive:
I=0
KEY=00000000000000000000000000000000
IV=00000000000000000000000000000000
PT=00000000000000000000000000000000
CT=3CC3B181E1495D0495D652B66921DA0F
I=1
KEY=3CC3B181E1495D0495D652B66921DA0F
IV=3CC3B181E1495D0495D652B66921DA0F
PT=BE938D30FAB43B71F2E114E9C0529299
CT=695250B109C6F71D410AC38B0BBDA3D2
I interpret these to be hex-encoded, therefore 16 bytes = 128 bits long.
I tried using the following twofish implementations:
ruby: https://github.com/mcarpenter/twofish.rb
JS: https://github.com/ryanofsky/twofish/
online: http://twofish.online-domain-tools.com/
All three give the same CT for the first test, namely (hex encoded)
9f589f5cf6122c32b6bfec2f2ae8c35a
So far so good, except it does not agree with CT0 in the KAT...
For the second test the ruby library and the online tool give:
f84268f0293adf4d24e27194911a24c
While the js library gives:
fd803b310bb5388ddb76d5faf9e23dbe
And neither of these agrees with CT1 in the KAT.
Am I doing something wrong here? Any help greatly appreciated.
The online tool is easy to use, just be sure to select HEX for the key and input text. Here is the ruby code I used to generate these values (it's necessary to check out each library for this to work):
def twofish_encrypt(iv_hex, key_hex, data_hex)
  # decode the hex strings (spaces allowed) into raw byte strings
  iv = iv_hex.gsub(/ /, "").scan(/../).map { |x| x.hex.chr }.join
  key = key_hex.gsub(/ /, "").scan(/../).map { |x| x.hex.chr }.join
  data = data_hex.gsub(/ /, "").scan(/../).map { |x| x.hex.chr }.join
  tf = Twofish.new(key, :mode => :cbc, :padding => :none)
  tf.iv = iv
  enc_data = tf.encrypt(data)
  # re-encode the ciphertext bytes as hex
  enc_data.each_byte.map { |b| b.to_s(16) }.join
end
ct0 = twofish_encrypt("00000000000000000000000000000000",
"00000000000000000000000000000000",
"00000000000000000000000000000000")
puts "ct0: #{ct0}"
ct1 = twofish_encrypt("3CC3B181E1495D0495D652B66921DA0F",
"3CC3B181E1495D0495D652B66921DA0F",
"BE938D30FAB43B71F2E114E9C0529299")
puts "ct1: #{ct1}"
function twofish_encrypt(iv_hex, key_hex, data_hex) {
  // load the hex strings into 128-bit BinData values
  var iv = new BinData();
  iv.setHexNibbles(iv_hex);
  iv.setlength(16 * 8);
  var binkey = new BinData();
  binkey.setHexNibbles(key_hex);
  binkey.setlength(16 * 8);
  var key = new TwoFish.Key(binkey);
  var data = new BinData();
  data.setHexNibbles(data_hex);
  data.setlength(16 * 8);
  var cipher = new TwoFish.Cipher(TwoFish.MODE_CBC, iv);
  var enc_data = TwoFish.Encrypt(cipher, key, data);
  return enc_data.getHexNibbles(32);
}
var ct0 = twofish_encrypt("00000000000000000000000000000000",
"00000000000000000000000000000000",
"00000000000000000000000000000000");
console.log("ct0: " + ct0);
var ct1 = twofish_encrypt("3CC3B181E1495D0495D652B66921DA0F",
"3CC3B181E1495D0495D652B66921DA0F",
"BE938D30FAB43B71F2E114E9C0529299");
console.log("ct1: " + ct1);
The header of the CBC_E_M.txt file reads:
Cipher Block Chaining (CBC) Mode - ENCRYPTION
Monte Carlo Test
The confusion can be explained by this description; from the NIST description of the Monte Carlo Tests:
Each Monte Carlo Test consists of four million cycles through the candidate algorithm implementation. These cycles are divided into four hundred groups of 10,000 iterations each. Each iteration consists of processing an input block through the candidate algorithm, resulting in an output block. At the 10,000th cycle in an iteration, new values are assigned to the variables needed for the next iteration. The results of each 10,000th encryption or decryption cycle are recorded and included by the submitter in the appropriate file.
So what you get in the text file is 400 results, each representing 10,000 iterations, where each input of an iteration depends on the output of the previous iterations. This is obviously not the same as a single encryption. Monte Carlo tests basically perform many tests using randomized input; in this case a high number of block cipher encryptions is used to perform the randomization.
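To make that concrete, here is a minimal Python sketch of the CBC-encryption Monte Carlo loop. It assumes encrypt_block(key, block) is a raw one-block (ECB) Twofish encryption over 16-byte strings and a 128-bit key; the chaining and update rules are inferred from the NIST AES-candidate MCT description and the first two file entries, so treat it as illustrative rather than a verified reimplementation.

def xor16(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt_mct(encrypt_block):
    # I=0 starts from an all-zero key, IV and plaintext
    key = iv = pt = bytes(16)
    for i in range(400):                            # 400 recorded groups
        print("I=%d" % i)
        print("KEY=%s" % key.hex().upper())
        print("IV=%s" % iv.hex().upper())
        print("PT=%s" % pt.hex().upper())
        cv = iv
        for _ in range(10000):                      # 10,000 chained encryptions
            ct = encrypt_block(key, xor16(pt, cv))  # one CBC step
            pt, cv = cv, ct                         # next PT is the previous chaining value
        print("CT=%s" % ct.hex().upper())
        key = xor16(key, ct)                        # next KEY = KEY XOR CT[9999] (128-bit key)
        iv = ct                                     # next IV = CT[9999]; pt now holds CT[9998]

Run with a correct Twofish block function, this should reproduce the I=0 and I=1 entries quoted in the question.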
To test whether your CBC code is correct, just use any of the other test vectors (not the Monte Carlo ones) and assume an all-zero IV. In that case a single-block (ECB) encryption has an outcome identical to CBC mode. The same trick works for the ever more popular CTR mode, provided the plaintext block is all zeros as well (CTR encrypts the counter block and XORs the result with the plaintext).
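As a quick illustration of the zero-IV shortcut: with IV = 0 the first CBC block is C1 = E_K(P1 XOR 0) = E_K(P1), i.e. exactly the ECB encryption. The sketch below demonstrates this with AES, since the Python 'cryptography' package does not ship Twofish; the identity itself is cipher-independent.

# First CBC block with an all-zero IV equals a plain ECB encryption.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes(16)
block = bytes(16)

ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
cbc = Cipher(algorithms.AES(key), modes.CBC(bytes(16))).encryptor()

assert ecb.update(block) == cbc.update(block)  # identical first blocks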
The initial 9f589f5cf6122c32b6bfec2f2ae8c35a value that you found is correct for a 128-bit all-zero key, IV and plaintext. The f84268f0293adf4d24e27194911a24c value is correct as well, apart from the encoding: there is certainly something wrong with your hex encoder, as that result is not even a whole number of bytes long (what happens with the leading zeros of the hex encodings?). Given the results and code, I would definitely take a look at your encoding/decoding functions.
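For what it's worth, the likely bug class is per-byte hex formatting without zero padding, which silently drops leading zeros. A small Python illustration (the same applies to Ruby's b.to_s(16) in the question's encoder):

# Formatting bytes without zero padding shortens the output by one
# character for every byte whose high nibble is zero.
data = bytes([0x0A, 0x01, 0xFF])

broken = "".join(format(b, "x") for b in data)    # 'a1ff'   (4 chars, wrong)
fixed  = "".join(format(b, "02x") for b in data)  # '0a01ff' (6 chars, right)
print(broken, fixed)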
As per Java Card v3.1, a new package is defined: javacardx.security.derivation
https://docs.oracle.com/en/java/javacard/3.1/jc_api_srvc/api_classic/javacardx/security/derivation/package-summary.html
The KDF X9.63 works on three inputs: input secret, counter and shared info.
Depending on the length of the generated key material, multiple rounds of the hash are carried out to generate the final output.
I am using this KDF via the JC API to generate 64 bytes of output (which takes 2 rounds of SHA-256) for a 16-byte encryption key, a 16-byte IV, and a 32-byte MAC key.
Note: this is just pseudocode to frame my question with the necessary details.
DerivationFunction df = DerivationFunction.getInstance(DerivationFunction.ALG_KDF_ANSI_X9_63, false);
df.init(new KDFAnsiX963Spec(MessageDigest.ALG_SHA_256, input, sharedInfo, (short) 64));
SecretKey encKey = KeyBuilder.buildKey(KeyBuilder.TYPE_AES, (short)16, false);
SecretKey macKey = KeyBuilder.buildKey(KeyBuilder.TYPE_HMAC, (short)32, false);
df.nextBytes(encKey);
df.nextBytes(IVBuffer, (short)0, (short)16);
df.lastBytes(macKey);
I have the following questions:
When are the rounds of the KDF performed? Are they performed during df.init() or during df.nextBytes() & df.lastBytes()?
One KDF round generates 32 bytes of output (considering the SHA-256 algorithm), so how will df.nextBytes() & df.lastBytes() work with an expected output length < 32 bytes?
In this KDF the counter is incremented in every round, so how is the counter managed between the df.nextBytes() & df.lastBytes() calls?
When are the rounds of the KDF performed? Are they performed during df.init() or during df.nextBytes() & df.lastBytes()?
That seems to be implementation-specific to me. It will probably be faster to perform all the calculations at one time, but in that case it still makes sense to wait for the first request of the bytes. On the other hand, RAM is also often an issue, so on-demand generation makes some sense as well. That requires a somewhat trickier implementation though.
The fact that the output size is pre-specified probably indicates that the simpler method of generating all the key material at once is at least foreseen by the API designers (they probably created an implementation before subjecting it to peer review in the JCF).
One KDF round generates 32 bytes of output (considering the SHA-256 algorithm), so how will df.nextBytes() & df.lastBytes() work with an expected output length < 32 bytes?
It will commonly return the leftmost bytes (of the hash output) and likely leave the rest of the bytes in a buffer. This buffer will likely be destroyed together with the rest of the state when lastBytes is called (so don't forget to call it).
Note that the API clearly states that you have to re-initialize the DerivationFunction instance if you want to use it again. That is a very strong indication that they thought of destruction of key material (something that is required by FIPS and Common Criteria certification, not just common sense).
Other KDFs could have a different way of returning bytes, but using the leftmost bytes and then adding rounds to the right is so common you can call it universal. For the ANSI X9.63 KDF this is certainly the case; it is clearly specified that way in the standard.
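To make the round and counter mechanics concrete, here is a minimal Python sketch of the ANSI X9.63 KDF with SHA-256 (an illustration of the standard's construction, not of the Java Card implementation):

import hashlib

def x963_kdf(secret, shared_info, length):
    # each round hashes secret || 32-bit big-endian counter || sharedinfo,
    # with the counter starting at 1
    out = b""
    counter = 1
    while len(out) < length:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")
                              + shared_info).digest()
        counter += 1
    return out[:length]  # leftmost bytes; later rounds are appended to the right

# 64 bytes of output = exactly two SHA-256 rounds; slicing the result
# mirrors the nextBytes()/lastBytes() consumption in the question
okm = x963_kdf(b"\x00" * 16, b"", 64)
enc_key, iv, mac_key = okm[:16], okm[16:32], okm[32:64]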
In this KDF the counter is incremented in every round, so how is the counter managed between the df.nextBytes() & df.lastBytes() calls?
These are methods of the same class and cannot be viewed separately, so they are not separate APIs. Class instances can keep state in any way they want. The instance might simply hold the counter as a class variable, but if it decides to generate the bytes during init or during the first nextBytes / lastBytes call, then the counter is not even required anymore.
I cannot find proper test vectors to test my code for the Rabbit cipher, which I have developed according to the specification file from http://www.ecrypt.eu.org/stream/rabbitpf.html.
However, I found some test vectors in a zip file of the C source code on the page http://www.ecrypt.eu.org/stream/e2-rabbit.html.
The test vectors there have an output stream of 384 bits (128*3), but the specification file states that only a 128-bit output keystream is obtained after each round.
Any help on the procedure of testing for correctness?
You should look into the Rabbit stream cipher RFC on IETF: https://www.rfc-editor.org/rfc/rfc4503
In particular, Appendices A and B have a collection of test vectors for testing the correctness of your implementation, as well as for debugging the inner state after each iteration.
You can have a look into RabitTest.java to see how I tested my implementation.
Notice that the input is the same for every test (384 bits of zeros), which is why the output is also of that length. The key length is 128 bits, but that has nothing to do with the length of the output, because the key is only used to set up the inner state. During encryption a keystream is generated, which can be infinitely long, so the key length is not connected to the length of the output.
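If it helps, a tiny known-answer-test harness could look like the Python sketch below. rabbit_keystream(key, iv, nbytes) stands in for your implementation, and the hex strings should be copied from RFC 4503 Appendix A (deliberately not reproduced here to avoid mistyping them):

def check_vector(rabbit_keystream, key_hex, iv_hex, expected_hex):
    key = bytes.fromhex(key_hex)
    iv = bytes.fromhex(iv_hex) if iv_hex else None  # the A.1 vectors use no IV
    expected = bytes.fromhex(expected_hex)
    got = rabbit_keystream(key, iv, len(expected))
    assert got == expected, "keystream mismatch: got %s" % got.hex()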
I'm trying to test my RSA implementation's correctness against the RSACryptoPAD example here: http://www.codeproject.com/Articles/10877/Public-Key-RSA-Encryption-in-C-NET
But it always creates different encryption results. Isn't RSA just a modular exponentiation? Yet the program can decrypt all the different encrypted texts correctly. My results are the same as those from the http://nmichaels.org/rsa.py site. I think RSACryptoPAD is doing something else as well?
The code uses RSACryptoServiceProvider. The line
byte[] encryptedBytes = rsaCryptoServiceProvider.Encrypt( tempBytes, true );
tells it to encrypt using OAEP padding, which introduces randomness into the padding. The reason for doing this is so that encrypting the same plaintext will always yield different results (as you are seeing). This is a good thing, as it stops an information leak where an attacker sees that you sent the same message multiple times.
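To see the effect outside of RSACryptoPAD, here is a small Python sketch using the 'cryptography' package; it illustrates OAEP's randomized padding and is not the C# code:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = priv.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

c1 = pub.encrypt(b"same message", oaep)
c2 = pub.encrypt(b"same message", oaep)
assert c1 != c2                                   # fresh random padding each time
assert priv.decrypt(c1, oaep) == b"same message"  # both still decrypt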
There's a great historical example of why this is important. "AF is short of water"
when I call getwork on my bitcoind server, I get the following:
./bitcoind getwork
{
"midstate" : "695d56ae173bbd0fd5f51d8f7753438b940b7cdd61eb62039036acd1af5e51e3",
"data" : "000000013d9dcbbc2d120137c5b1cb1da96bd45b249fd1014ae2c2b400001511000000009726fba001940ebb5c04adc4450bdc0c20b50db44951d9ca22fc5e75d51d501f4deec2711a1d932f00000000000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000",
"hash1" : "00000000000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000010000",
"target" : "00000000000000000000000000000000000000000000002f931d000000000000"
}
This protocol does not seem to be documented. How do I compute the hash from this data? I think that this data is in little-endian, so the first step is to convert everything to big-endian? Once that is done, I calculate the SHA-256 of the data. The data can be divided into two chunks of 64 bytes each. The hash of the first chunk is given by midstate and therefore does not have to be computed.
I must therefore hash chunk #2 with SHA-256, using the midstate as the initial hash values. Once that is done, I end up with a hash of chunk #2, which is 32 bytes. I hash this 32-byte value one more time to get the final hash.
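In Python-style pseudocode, my understanding of the midstate step is roughly the following, where sha256_compress is hypothetical (hashlib neither exposes the compression function nor accepts an initial state):

import hashlib

def hash_from_midstate(midstate, chunk2, sha256_compress):
    # sha256_compress(state, block64) would run the SHA-256 compression
    # function over one 64-byte block and return the new 32-byte state
    first = sha256_compress(midstate, chunk2)  # finishes SHA-256 of the 80-byte header
    return hashlib.sha256(first).digest()      # second SHA-256 pass over the 32-byte result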
Then, do I convert everything to little endian and submit the work?
What is hash1 used for?
The hash calculation is documented at Block hashing algorithm.
Start there for the relatively simple basics. The basic data structures are documented in Protocol specification - Bitcoin Wiki. Note that the protocol definition (and the definition of work) more or less assumes that SHA-256 hashes are 256-bit little-endian values, rather than big-endian as the standard implies.
Getwork is more complicated and runs into more serious endian/byte ordering confusion.
First note that the getwork API is optimized to speed up the initial steps of mining.
The midstate and hash1 values are for these performance optimizations and can be ignored. Just look at the "data".
When a standard SHA-256 implementation is used, only the first 80 bytes (160 hex characters) of the "data" are hashed.
Unfortunately, the JSON data presented in the getwork data structure has different endian characteristics than what is needed for hashing in the block example above.
They all say to go to the source for the answer, but the C++ source can be big and confusing. A simple alternative is the poold.py code; there is discussion of it here: New mining pool for testing. You only need to look at the first few lines of the "checkwork" routine, and the "bufreverse" and "bytereverse" functions, to get the byte ordering right. In the end it is just a matter of reversing the bytes in each 32-bit segment of the data. Yes, very odd. But endian issues are tricky and can end up that way...
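In Python the byte-order handling boils down to something like the following sketch (modelled on poold.py's approach; illustrative, not a drop-in miner):

import hashlib

def swap32(b):
    # reverse the bytes inside every 4-byte word
    return b"".join(b[i:i + 4][::-1] for i in range(0, len(b), 4))

def getwork_hash(data_hex):
    header = swap32(bytes.fromhex(data_hex))[:80]  # only the 80-byte header is hashed
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return digest[::-1].hex()                      # reversed hex for target comparison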
Some other helpful information on the way "getwork" works can be found in discussions at:
Do I understand header hashing?
Stupid newbie question about the nonce
Note that finding the signal in the noise of the original Bitcoin forum is getting very hard, and there is currently an Area51 proposal for a StackExchange site dedicated to Bitcoin and cryptocurrency in general. Come join us!
That sounds right. There is a JavaScript script that calculates the hash, but I do not fully understand it, so I can't say for sure; maybe you will understand it better if you take a look.
this.tryHash = function(midstate, half, data, hash1, target, nonce) {
  data[3] = nonce;
  this.sha.reset();
  var h0 = this.sha.update(midstate, data).state;  // compute first hash
  for (var i = 0; i < 8; i++) hash1[i] = h0[i];    // place it in the h1 holder
  this.sha.reset();                                // reset to initial state
  var h = this.sha.update(hash1).state;            // compute final hash
  if (h[7] == 0) {
    var ret = [];
    for (var i = 0; i < half.length; i++)
      ret.push(half[i]);
    for (var i = 0; i < data.length; i++)
      ret.push(data[i]);
    return ret;
  } else return null;
};
SOURCE: https://github.com/jwhitehorn/jsMiner/blob/4fcdd9042a69b309035dfe9c9ddf716119831a16/engine.js#L149-165
Frankly speaking, the Bitcoin block hashing algorithm is not officially described by any source. The statement
"The hash calculation is documented at Block hashing algorithm."
should read
The hash calculation is "described" at Block hashing algorithm.
en.bitcoin.it/wiki/Block_hashing_algorithm
By the way, the example code in PHP comes with a bug (a typo), and the example code in Python generates errors when run under Python 3.3 for Windows XP 32 (missing support for string.decode; in Python 3, str has no decode method).
I've been doing some preliminary research in the area of message digests, specifically collision attacks on cryptographic hash functions such as MD5 and SHA-1, such as the PostScript example and the X.509 certificate duplicate.
From what I can tell, in the case of the PostScript attack, specially generated data was embedded within the header of the PostScript file (which is ignored during rendering). This brought the internal state of the MD5 computation to a point where the modified wording of the document leads to a final MD value equivalent to that of the original PostScript file.
The X.509 attack took a similar approach, whereby data was injected within the comment/whitespace sections of the certificate.
Ok so here is my question, and I can't seem to find anyone asking this question:
Why isn't the length of ONLY the data being consumed added as a final block to the MD calculation?
In the case of X.509 - Why is the whitespace and comments being taken into account as part of the MD?
Wouldn't a simple process such as one of the following be enough to resolve the proposed collision attacks (a sketch of the first construction follows the definitions below)?
MD(M + |M|) = xyz
MD(M + |M| + |M| * magicseed_0 +...+ |M| * magicseed_n) = xyz
where:
M : is the message
|M| : size of the message
MD : is the message digest function (eg: md5, sha, whirlpool etc)
xyz : is the pairing of the actual message digest value for the message M and |M|. <M,|M|>
magicseed_{i} : is a set of random values generated with a seed based on the internal state prior to the size being added.
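For concreteness, the first construction could be sketched in Python as below; note that hashlib's MD5 already appends the message bit length internally in its final padding block, so this only adds an explicit extra copy:

import hashlib

def md_with_length(message):
    # MD(M || |M|): append the length as a fixed-width 64-bit field
    suffix = len(message).to_bytes(8, "big")
    return hashlib.md5(message + suffix).hexdigest()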
This technique should work, as to date all such collision attacks rely on adding more data to the original message.
In short, generating a collision message such that:
it not only generates the same MD,
but is also comprehensible/parsable/compliant,
and is also the same size as the original message,
is immensely difficult, if not near impossible. Has this approach ever been discussed? Any links to papers etc. would be nice.
Further Question: What is the lower bound for collisions of messages of common length for a hash function H chosen randomly from U, where U is the set of universal hash functions ?
Is it 1/N (where N is 2^(|M|)) or is it greater? If it is greater, that implies there is more than one message of length |M| that will map to the same MD value for a given H.
If that is the case, how practical is it to find these other messages? Brute force would be O(2^(|M|)); is there a method with time complexity less than brute force?
I can't speak for the rest of the questions, but the first one is fairly simple: adding length data to the input of the MD5, at any stage of the hashing process (1st block, Nth block, final block), just changes the output hash; you couldn't retrieve that length from the output hash string afterwards. (In fact, MD5 and SHA-1 already encode the message length in their final padding block, so-called Merkle-Damgård strengthening, and collisions are found regardless.) It's also entirely possible for a collision to be produced by another string of the exact same length, so saying "the original string was 17 bytes" is meaningless, because the colliding string could also be 17 bytes.
e.g.
md5("abce(17bytes)fghi") = md5("abdefghi<long sequence of text to produce collision>")
is still possible.
In the case of X.509 certificates specifically, the "comments" are not comments in the programming language sense: they are simply additional attributes with an OID that indicates they are to be interpreted as comments. The signature on a certificate is defined to be over the DER representation of the entire tbsCertificate ('to be signed' certificate) structure which includes all the additional attributes.
Hash function design is pretty deep theory, though, and might be better served on the Theoretical CS Stack Exchange.
As @Marc points out, though, as long as more bits can be modified than the output of the hash function contains, then by the pigeonhole principle a collision must exist for some pair of inputs. Because cryptographic hash functions are in general designed to behave pseudo-randomly over their inputs, collisions will tend toward being uniformly distributed over possible inputs.
EDIT: Incorporating the message length into the final block of the hash function would be equivalent to appending the length of everything that has gone before to the input message, so there's no real need to modify the hash function to do this itself; rather, specify it as part of the usage in a given context. I can see where this would make some types of collision attacks harder to pull off, since if you change the message length there's a changed field "downstream" of the area modified by the attack. However, this wouldn't necessarily impede the X.509 intermediate CA forgery attack since the length of the tbsCertificate is not modified.