CryptoAPI: difference between CALG_* and BCRYPT_*_ALGORITHM

What is the difference between CALG_* and BCRYPT_*_ALGORITHM?
For example, SHA-256 is defined both as:
#define CALG_SHA_256 (ALG_CLASS_HASH|ALG_TYPE_ANY|ALG_SID_SHA_256)
and
#define BCRYPT_SHA256_ALGORITHM L"SHA256"

From my understanding, CALG_* are the crypto algorithm identifiers from the original CryptoAPI.
BCRYPT_* are the ones for CNG (Cryptography Next Generation), which is (slowly?) replacing the legacy CryptoAPI.
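That understanding is correct. The difference also shows in how the constants are consumed: CALG_* values are integers passed to the legacy Crypt* functions, while BCRYPT_*_ALGORITHM values are wide-character strings passed to the BCrypt* (CNG) functions. A minimal sketch of hashing the same data both ways (error handling omitted):

#include <windows.h>
#include <wincrypt.h>   /* legacy CryptoAPI: CALG_* integer identifiers */
#include <bcrypt.h>     /* CNG: BCRYPT_*_ALGORITHM wide strings */

void hash_both_ways(const BYTE *data, DWORD dataLen)
{
    BYTE digest[32];
    DWORD cb = sizeof(digest);

    /* Legacy CryptoAPI: the algorithm is selected by the CALG_SHA_256 integer. */
    HCRYPTPROV hProv = 0;
    HCRYPTHASH hHash = 0;
    CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_AES, CRYPT_VERIFYCONTEXT);
    CryptCreateHash(hProv, CALG_SHA_256, 0, 0, &hHash);
    CryptHashData(hHash, data, dataLen, 0);
    CryptGetHashParam(hHash, HP_HASHVAL, digest, &cb, 0);
    CryptDestroyHash(hHash);
    CryptReleaseContext(hProv, 0);

    /* CNG: the algorithm is selected by the L"SHA256" string. */
    BCRYPT_ALG_HANDLE hAlg = NULL;
    BCRYPT_HASH_HANDLE hCngHash = NULL;
    BCryptOpenAlgorithmProvider(&hAlg, BCRYPT_SHA256_ALGORITHM, NULL, 0);
    BCryptCreateHash(hAlg, &hCngHash, NULL, 0, NULL, 0, 0); /* CNG allocates the hash object (Windows 7+) */
    BCryptHashData(hCngHash, (PUCHAR)data, dataLen, 0);
    BCryptFinishHash(hCngHash, digest, sizeof(digest), 0);
    BCryptDestroyHash(hCngHash);
    BCryptCloseAlgorithmProvider(hAlg, 0);
}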


How to extract encryption and MAC keys using KDF (X9.63) defined by javacardx.security.derivation

As of Java Card v3.1, a new package is defined: javacardx.security.derivation
https://docs.oracle.com/en/java/javacard/3.1/jc_api_srvc/api_classic/javacardx/security/derivation/package-summary.html
The X9.63 KDF works on three inputs: the input secret, a counter, and shared info.
Depending on the length of the generated key material, multiple rounds of the hash are carried out to generate the final output.
I am using this KDF via the JC API to generate 64 bytes of output (which takes 2 rounds of SHA-256): a 16-byte encryption key, a 16-byte IV, and a 32-byte MAC key.
Note: this is just pseudocode to frame my question with the necessary details.
DerivationFunction df = DerivationFunction.getInstance(DerivationFunction.ALG_KDF_ANSI_X9_63, false);
df.init(new KDFAnsiX963Spec(MessageDigest.ALG_SHA_256, input, sharedInfo, (short) 64));
SecretKey encKey = KeyBuilder.buildKey(KeyBuilder.TYPE_AES, (short)16, false);
SecretKey macKey = KeyBuilder.buildKey(KeyBuilder.TYPE_HMAC, (short)32, false);
df.nextBytes(encKey);
df.nextBytes(IVBuffer, (short)0, (short)16);
df.lastBytes(macKey);
I have the following questions:
When are the rounds of the KDF performed? During df.init(), or during df.nextBytes() & df.lastBytes()?
One KDF round generates 32 bytes of output (with SHA-256), so how will df.nextBytes() & df.lastBytes() work with an expected output length of fewer than 32 bytes?
The counter in this KDF is incremented for every round, so how is the counter managed between the df.nextBytes() & df.lastBytes() calls?
When are the rounds of the KDF performed? During df.init(), or during df.nextBytes() & df.lastBytes()?
That seems implementation specific to me. It would probably be faster to perform all the calculations at once, but even then it makes sense to wait for the first request for bytes. On the other hand, RAM is also often an issue, so on-demand generation also makes some sense; that requires a somewhat trickier implementation, though.
The fact that the output size is pre-specified probably indicates that the simpler method of generating all the key material at once was at least foreseen by the API designers (they probably created an implementation before subjecting the API to peer review in the JCF).
One KDF round generates 32 bytes of output (with SHA-256), so how will df.nextBytes() & df.lastBytes() work with an expected output length of fewer than 32 bytes?
It will commonly return the leftmost bytes of the hash output and likely keep the remaining bytes in a buffer. That buffer will likely be destroyed together with the rest of the state when lastBytes is called (so don't forget to call it).
Note that the API clearly states that you have to re-initialize the DerivationFunction instance if you want to use it again. That is a very strong indication that the designers thought about destruction of key material (something that is required for FIPS and Common Criteria certification, not just common sense).
Other KDFs could return bytes in a different way, but taking the leftmost bytes and then adding rounds to the right is so common that you can call it universal. For the ANSI X9.63 KDF this is certainly the case, and it is clearly specified that way in the standard.
The counter in this KDF is incremented for every round, so how is the counter managed between the df.nextBytes() & df.lastBytes() calls?
These are methods of the same class and cannot be viewed separately, so they are not separate APIs. A class instance can keep state in any way it wants. It might simply hold the counter in an instance variable, but if the implementation decides to generate all the bytes during init or during the first nextBytes / lastBytes call, then the counter is not even required anymore.
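For reference, the counter handling that the standard prescribes is simple enough to show in full. Here is a minimal sketch of the ANSI X9.63 KDF with SHA-256, written against OpenSSL purely for illustration (this is not Java Card code): each round hashes secret || counter || sharedInfo with a 32-bit big-endian counter starting at 1, and the final round's output is truncated from the left.

#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

/* X9.63 KDF sketch: out = SHA256(Z || 1 || info) || SHA256(Z || 2 || info) || ... */
void x963_kdf_sha256(const uint8_t *secret, size_t secret_len,
                     const uint8_t *shared_info, size_t info_len,
                     uint8_t *out, size_t out_len)
{
    uint8_t digest[SHA256_DIGEST_LENGTH];
    uint32_t counter = 1;                 /* the counter starts at 1, not 0 */
    size_t produced = 0;

    while (produced < out_len) {
        uint8_t ctr_be[4] = {             /* 32-bit big-endian encoding */
            (uint8_t)(counter >> 24), (uint8_t)(counter >> 16),
            (uint8_t)(counter >> 8),  (uint8_t)counter
        };
        SHA256_CTX ctx;
        SHA256_Init(&ctx);
        SHA256_Update(&ctx, secret, secret_len);
        SHA256_Update(&ctx, ctr_be, sizeof(ctr_be));
        SHA256_Update(&ctx, shared_info, info_len);
        SHA256_Final(digest, &ctx);

        size_t chunk = out_len - produced;
        if (chunk > SHA256_DIGEST_LENGTH)
            chunk = SHA256_DIGEST_LENGTH;
        memcpy(out + produced, digest, chunk);  /* leftmost bytes on the final round */
        produced += chunk;
        counter++;
    }
}

Whether a Java Card implementation runs this loop eagerly in init() or lazily per nextBytes() call is exactly the implementation detail discussed above; the observable output must be the same either way.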

Shamir's Secret Sharing using Bignum or Bigint or ....?

I've got a generic cryptographic implementation using OpenSSL's BIGNUM library in C. Standard decryption is working fine, but I would also like to implement Shamir's secret sharing (SSS).
The problem I've run across is that BIGNUM only supports whole numbers, and as part of the Lagrange interpolation for SSS, I'll need to be multiplying by negative values.
Is there any way to do this? Otherwise I can do my SSS in another language (Python?), as long as it is able to interact with the BIGNUMs produced by OpenSSL.
Any suggestions? TIA!
If you look at the BIGNUM structure in OpenSSL, you'll find a flag named neg. If the BIGNUM object represents a negative number, neg is set to 1. Also, the BN_mul() function handles multiplication by a negative number correctly. So you can implement SSS with OpenSSL, no problem!
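A quick sketch to convince yourself (hypothetical values, error handling omitted):

#include <stdio.h>
#include <openssl/bn.h>

int main(void)
{
    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *a = BN_new(), *b = BN_new(), *r = BN_new();

    BN_set_word(a, 7);
    BN_set_negative(a, 1);          /* a = -7: this just sets the neg flag */
    BN_set_word(b, 3);

    BN_mul(r, a, b, ctx);           /* r = -21, sign handled automatically */
    char *hex = BN_bn2hex(r);
    printf("%s\n", hex);            /* prints "-15" (hex for -21) */
    OPENSSL_free(hex);

    BN_free(a); BN_free(b); BN_free(r);
    BN_CTX_free(ctx);
    return 0;
}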
Modular arithmetic (working in a group) only produces non-negative results, so I presume you want to use non-modular arithmetic? In that case you could simply keep a separate variable indicating whether the value is negative or not. The outcome of a multiplication of the absolute values is the same except for the sign anyway.
It's not as clean a design as possible, but for a few methods it would probably not matter that much. You could create separate methods that mimic the BN methods, plus an integer holding the sign (-1, 0 or 1).

Interoperability of AES CTR mode?

I use AES-128 in CTR mode for encryption, implemented for different clients (Android/Java and iOS/ObjC). The 16-byte IV used when encrypting a packet is formatted like this:
<11 byte nonce> | <4 byte packet counter> | 0
The packet counter (included in each sent packet) is increased by one for every packet sent. The last byte is used as the block counter, so that packets with fewer than 256 blocks always get a unique counter value. I was under the assumption that CTR mode specifies that the counter should be increased by 1 for each block, using the last 8 bytes as a big-endian counter, or that this at least was a de facto standard. This also seems to be the case in the Sun crypto implementation.
I was a bit surprised when the corresponding iOS implementation (using CommonCryptor, iOS 5.1) failed to decrypt every block except the first when decoding a packet. It seems that CommonCryptor defines the counter in some other way. The CommonCryptor can be created in both big-endian and little-endian mode, but some vague comments in the CommonCryptor code indicate that this is not (or at least has not been) fully supported:
http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-60026/Source/API/CommonCryptor.c
/* corecrypto only implements CTR_BE. No use of CTR_LE was found so we're marking
this as unimplemented for now. Also in Lion this was defined in reverse order.
See <rdar://problem/10306112> */
By decrypting block by block, each time setting the IV as specified above, it works nicely.
My question: is there a "right" way of implementing the CTR/IV handling when decrypting multiple blocks in a single go, or can I expect interoperability problems when using different crypto libraries? Is CommonCrypto buggy in this regard, or is it just a question of implementing CTR mode differently?
The definition of the counter is (loosely) specified in NIST recommendation SP 800-38A, Appendix B. Note that NIST only specifies how to use CTR mode with regard to security; it does not define one standard algorithm for the counter.
To answer your question directly: whatever you do, you should expect the counter to be incremented by one for each block. The counter should represent a 128-bit big-endian integer according to the NIST specification. It may be that only the least significant (rightmost) bits are incremented, but that will usually not make a difference unless you pass the 2^32 - 1 or 2^64 - 1 value.
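In code, the de facto big-endian increment is nothing more than this (a minimal sketch; implementations differ only in how many of the rightmost bytes they let the carry reach):

#include <stdint.h>

/* Increment a 16-byte counter block as one 128-bit big-endian integer. */
static void ctr128_increment(uint8_t counter[16])
{
    for (int i = 15; i >= 0; i--) {
        counter[i]++;
        if (counter[i] != 0)   /* stop unless this byte wrapped to 0 */
            break;             /* otherwise carry into the next byte */
    }
}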
For the sake of compatibility you could decide to use the first (leftmost) 12 bytes as a random nonce, leave the last 4 bytes zero, and then let the CTR implementation do the increments. In that case you simply use a 96-bit / 12-byte random value at the start of each message, and there is no need for a packet counter.
You are however limited to 2^32 * 16 bytes of plaintext before the counter uses up all the available bits. It is implementation specific whether the counter wraps back to zero or carries into the nonce bytes, so you may want to limit yourself to messages of 68,719,476,736 = ~68 GB (yes, that's base 10; Giga means 1,000,000,000). Two caveats:
- because of the birthday problem you should expect a nonce collision after about 2^48 messages (48 = 96 / 2); the nonce must be unique per message, not per block, so you should limit the number of messages;
- if an attacker tricks you into decrypting more than 2^32 blocks for the same nonce, you run out of counter.
In case this is still incompatible (test!), use the initial 8 bytes as the nonce. Unfortunately that does mean that you need to limit the number of messages because of the birthday problem.
Further investigation sheds some light on the CommonCrypto problem:
In iOS 6.0.1 the little-endian option is now unimplemented. Also, I have verified that CommonCrypto is buggy in that the CCCryptorReset method does not in fact change the IV as it should, and instead keeps using the pre-existing IV. The behaviour in 6.0.1 is different from 5.x.
This is potentially a security risk: if you initialize CommonCrypto with a nulled IV and reset it to the actual IV right before encrypting, all your data ends up encrypted with the same (nulled) IV, and multiple streams (that perhaps should have different IVs but use the same key) would leak data via a simple XOR of packets with corresponding counter values.
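Given that CCCryptorReset cannot be trusted here, the block-by-block workaround from the question generalizes to one fresh cryptor per packet, with the IV built explicitly. A sketch under those assumptions (the function name and the 16-byte key are illustrative):

#include <stdint.h>
#include <stddef.h>
#include <CommonCrypto/CommonCryptor.h>

/* Decrypt one packet with a fresh CTR cryptor so the IV is always applied. */
CCCryptorStatus decrypt_packet(const uint8_t key[16], const uint8_t iv[16],
                               const uint8_t *in, size_t len, uint8_t *out)
{
    CCCryptorRef cryptor = NULL;
    size_t moved = 0;

    CCCryptorStatus status = CCCryptorCreateWithMode(
        kCCDecrypt, kCCModeCTR, kCCAlgorithmAES128, ccNoPadding,
        iv, key, 16,
        NULL, 0, 0,                 /* no tweak, default rounds */
        kCCModeOptionCTR_BE,        /* big-endian counter */
        &cryptor);
    if (status != kCCSuccess)
        return status;

    status = CCCryptorUpdate(cryptor, in, len, out, len, &moved);
    CCCryptorRelease(cryptor);      /* never reused, so the reset bug is moot */
    return status;
}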

Generating Random Numbers Securely in Objective-C

Where can I find an industry-accepted secure pseudo-random number generator for Objective-C? Is there one built in to the OS X SDK?
My question is basically the same as this one, except that I'm looking for a secure PRNG.
EDIT:
Thanks everyone for the help. Here's a simple one-liner to implement the /dev/random method:
-(NSData *)getRandomBytes:(NSUInteger)length {
    return [[NSFileHandle fileHandleForReadingAtPath:@"/dev/random"] readDataOfLength:length];
}
Security.framework has a facility for doing this, called SecRandomCopyBytes(), although it really basically just copies from /dev/random.
You can use
int SecRandomCopyBytes(
    SecRandomRef rnd,
    size_t count,
    uint8_t *bytes
);
This function is declared in Security/Security.h (Security.framework).
This function reads from /dev/random to obtain an array of cryptographically-secure random bytes.
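A minimal usage sketch (the buffer size is arbitrary):

#include <Security/Security.h>
#include <stdint.h>

uint8_t bytes[32];
/* Returns 0 on success, -1 on failure (check errno). */
if (SecRandomCopyBytes(kSecRandomDefault, sizeof(bytes), bytes) != 0) {
    /* handle the error */
}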
/dev/random is a blocking interface that only returns as much random data as the system possesses at any particular time. Once the system runs out of randomness, no more can be fetched until more is generated (by listening to the user bang on the keyboard, or however the OS gathers true randomness). /dev/urandom is a non-blocking interface that always returns the requested amount of data, by using a pseudorandom generator even when true randomness is exhausted. OS X has both of these as well; however, they both act like /dev/urandom does on Linux.
Random Numbers
random(4) Mac OS X Manual Page
How good is SecRandomCopyBytes?

Using CommonCrypto to generate a salted key

This is how I have been generating my cryptographic keys until now:
unsigned char *salt;       //8 salt bytes were created earlier
unsigned char *password;   //password was obtained earlier
int passwordLength;        //password length as well
unsigned char evp_key[EVP_MAX_KEY_LENGTH] = {0};
unsigned char iv[EVP_MAX_IV_LENGTH];
EVP_BytesToKey(cipher, EVP_md5(), salt,   //cipher is also given
               password, passwordLength,
               1, evp_key, iv);
The result is a key and an "initial value" (initialization vector). I can then use these two (evp_key and iv) along with the given cipher to encrypt my data.
Now that Apple has deprecated the above (OpenSSL) code in Lion, I have the following question:
Question: How do I do the same thing with CommonCrypto? I just came across the CCKeyDerivationPBKDF() function. Is this the one I'm looking for? I can't see how, since I don't get any "initial value" back. I don't know how to compare this CommonCrypto function with the old method.
In particular: this new function doesn't even seem to support the MD5 algorithm, only SHA-1. How, then, can I create new code that is backwards compatible with my old codebase (and the files it has created)?
I found the solution. It seems impossible to derive the keys exactly the way OpenSSL does using any of Apple's methods. Instead, I had to read how OpenSSL derives the key and initialization vector in the section "Key Derivation Algorithm" on the page http://www.openssl.org/docs/crypto/EVP_BytesToKey.html and simply mimic that.
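For reference, the algorithm on that page is easy to reproduce with CommonCrypto's digest functions. Here is a sketch for the MD5, count = 1 case used above (the function name is mine, and it assumes key_len + iv_len fits in the out buffer): D_1 = MD5(password || salt), D_i = MD5(D_(i-1) || password || salt), and the concatenated digests are split into key and IV.

#include <string.h>
#include <CommonCrypto/CommonDigest.h>

/* Mimics EVP_BytesToKey(cipher, EVP_md5(), salt, pw, pwlen, 1, key, iv). */
static void evp_bytes_to_key_md5(const unsigned char *password, size_t password_len,
                                 const unsigned char salt[8],
                                 unsigned char *key, size_t key_len,
                                 unsigned char *iv, size_t iv_len)
{
    unsigned char out[64];                     /* assumes key_len + iv_len <= 64 */
    size_t need = key_len + iv_len, have = 0;

    while (have < need) {
        CC_MD5_CTX ctx;
        CC_MD5_Init(&ctx);
        if (have > 0)                          /* feed the previous digest back in */
            CC_MD5_Update(&ctx, out + have - CC_MD5_DIGEST_LENGTH,
                          CC_MD5_DIGEST_LENGTH);
        CC_MD5_Update(&ctx, password, (CC_LONG)password_len);
        CC_MD5_Update(&ctx, salt, 8);
        CC_MD5_Final(out + have, &ctx);
        have += CC_MD5_DIGEST_LENGTH;
    }
    memcpy(key, out, key_len);                 /* first bytes become the key */
    memcpy(iv, out + key_len, iv_len);         /* next bytes become the IV */
}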