I use RSACryptoServiceProvider to encrypt some small blocks of data. For the solution I'm working on, it's important that if the same piece of source data is encrypted twice with the same public key, the result (the encrypted block of data) is not the same.
I have checked this with an example and it worked as I hoped. My question now is whether this behaviour is by design and guaranteed, or whether I have to add some random part to the source data myself to guarantee that blocks containing the same data cannot be matched after encryption.
Here is the example:
byte[] data = new byte[] { 1, 7, 8, 3, 4, 5 };
RSACryptoServiceProvider encrypter = cert.PublicKey.Key as RSACryptoServiceProvider;
byte[] encryptedData = encrypter.Encrypt(data, true);
// encryptedData contains different values every time, although the source data is always
// 1,7,8,3,4,5 and the certificate is always the same (loaded from disk)
The concrete question is about .NET, but perhaps the answer can be given in general for all RSA implementations, if this behaviour is by design.
The text-book RSA encryption algorithm is deterministic:
ciphertext = plaintext ^ encryption-exponent mod modulus
(Here ^ is integer exponentiation, mod the remainder operation.)
But as you remarked, this does not provide a good security guarantee, as an attacker who can guess the plaintext can simply verify that guess by encrypting it himself and comparing the results.
For this reason, the official RSA specifications (and also all implementations used in practice) include some (partly random) padding, so we don't actually encrypt plaintext, but pad(plaintext):
ciphertext = pad(plaintext) ^ encryption-exponent mod modulus
Decryption:
plaintext = unpad( ciphertext ^ decryption-exponent mod modulus )
Only with this padding is RSA actually a secure encryption scheme.
A similar padding is also used for RSA signatures, to avoid easy forging of signatures.
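To see the effect of the random padding directly, here is a minimal sketch (using a freshly generated key rather than the asker's certificate, so the setup is illustrative only):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

class OaepDemo
{
    static void Main()
    {
        byte[] data = { 1, 7, 8, 3, 4, 5 };
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            // true selects OAEP padding, which mixes fresh random bytes into pad() on every call
            byte[] c1 = rsa.Encrypt(data, true);
            byte[] c2 = rsa.Encrypt(data, true);

            // Prints False: same key, same plaintext, different ciphertexts
            Console.WriteLine(c1.SequenceEqual(c2));
        }
    }
}
```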
Related
I am migrating from RSACryptoServiceProvider to RSACng for a new version release. However, since RSACryptoServiceProvider (CAPI) uses little-endian byte order and RSACng (the CNG API) uses big-endian, my question is: how can I decrypt data with the CNG API that was previously encrypted using RSACryptoServiceProvider (CAPI)?
I have already tried Array.Reverse(cypherText) and then decrypting with the CNG API, but it throws the error 'The parameter is incorrect'.
I have also tried decrypting only half the cypher text, because the CNG API is using RSAEncryptionPadding.OaepSHA512 padding whereas CAPI uses OAEP padding.
My RSACryptoServiceProvider method is as follows:
public static void EncryptWithSystemKeyRSACryptoService(byte[] plainBytes, bool representsUnicodeString, out string cypherText)
{
    CspParameters cp = new CspParameters();
    cp.KeyContainerName = regValue.ToString();
    cp.Flags = CspProviderFlags.UseMachineKeyStore;
    cp.KeyNumber = (int)KeyNumber.Exchange;

    byte[] encBlockData = null;
    using (RSACryptoServiceProvider rsaCSP = new RSACryptoServiceProvider(cp))
    {
        res = CryptResult.GeneralError;
        int keysize = rsaCSP.KeySize;
        // This encrypts the data with OAEP padding (fOAEP = true)
        encBlockData = rsaCSP.Encrypt(plainBytes, true);
    }

    // Should I have to reverse the byte order?
    // I am doing Array.Reverse on the encrypted data because CAPI is little-endian and the CNG API is big-endian
    Array.Reverse(encBlockData);
    cypherText = BitConverter.ToString(encBlockData);
    cypherText = cypherText.Replace("-", "");
    cypherText = cypherText.ToLower();
}
This is how I encrypt data with RSACryptoServiceProvider (CAPI).
My RSACng method is as follows:
// I am calling this to use the RSACng API to get the private keys
private static byte[] SetPrivateAndPublicKeysAndDecrypt(byte[] cyphertext)
{
    cp.KeyContainerName = regValue.ToString();
    cp.Flags = CspProviderFlags.UseMachineKeyStore;
    cp.KeyNumber = (int)KeyNumber.Exchange;

    using (RSACryptoServiceProvider rsaCSP = new RSACryptoServiceProvider(cp))
    {
        res = CryptResult.GeneralError;
        keysize = rsaCSP.KeySize;
        q = rsaCSP.ExportCspBlob(false);
        RSAp = rsaCSP.ExportParameters(true);
    }

    // Create the CngKey
    cngKey = CngKey.Import(q, CngKeyBlobFormat.GenericPublicBlob);

    // Create the RSACng instance
    RSACng rsacng = new RSACng(cngKey)
    {
        KeySize = keysize
    };
    rsacng.ImportParameters(RSAp);

    // Decrypt using the RSACng API with OaepSHA512 padding
    var plainText = rsacng.Decrypt(cyphertext, RSAEncryptionPadding.OaepSHA512);
    return plainText;
}
Expected result: plainText successfully decrypted.
Actual result: exception caught: 'The parameter is incorrect'.
RSA ciphertext is defined to use statically sized, unsigned, big endian encoding in PKCS#1 (which specifies PKCS#1 v1.5 RSA encryption and OAEP encryption as implemented by most libraries). The function is called I2OSP within that standard, and the ciphertext should have the same size (in full bytes) as the key size. If it isn't big endian, then it does not conform to RSA / OAEP, in other words.
The same goes for normal ASN.1 encoded keys: they use dynamically sized, signed, big endian encoding according to DER (distinguished encoding rules). Those keys are defined in PKCS#1, PKCS#8 and X.509 standards, although they may also be embedded in a PKCS#12 compatible key store - for instance. Sometimes the keys are then PEM encoded as well to make them compatible with protocols that require text rather than binary.
So you should never have to reverse ciphertext or keys that use one of the standard encodings. That the calculations are performed internally on little-endian values (or not) is of no concern. This is true for just about every modern cipher or other cryptographic primitive; the input / output is simply defined as bytes in a specific order, not numbers. Only very low level functions may possibly operate on e.g. words, which muddles the problem (but you won't find that in the MS APIs).
Only Microsoft's own proprietary (lame) key encodings may use little endian.
bartonjs is of course correct in the comments; you need to match the padding methods, and the default hash to be used for OAEP (or rather, the mask generation function MGF1 within OAEP) is SHA-1. There are plenty of other pitfalls to avoid, such as performing correct encoding / decoding of plaintext / ciphertext.
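For the question above, that means two concrete changes, sketched below (cngKey and cipherBytes are placeholders for however you obtain the key and the raw ciphertext bytes): do not reverse the ciphertext, and decrypt with OAEP/SHA-1, because RSACryptoServiceProvider.Encrypt(plainBytes, true) uses OAEP with SHA-1, not SHA-512.

```csharp
// Sketch: ciphertext bytes exactly as produced by CAPI (no Array.Reverse),
// and the padding matched to what Encrypt(plainBytes, true) really used.
using (var rsaCng = new RSACng(cngKey))   // cngKey: the same RSA key pair, imported as a CNG key
{
    byte[] plainBytes = rsaCng.Decrypt(cipherBytes, RSAEncryptionPadding.OaepSHA1);
}
```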
In RSA you basically have two primes for decryption and their product for encryption. Normally you make the decryption key private and the encryption key public; however, for CA signature verification the roles are reversed - the CA encrypts the signature and the browser decrypts it, so the decryption key is public. This means that the two primes are public, and once they are known everybody can multiply them together and get their dirty hands on the super-secret CA private key. What am I missing here?
Normally you make the decryption key private and the encryption key public; however, for CA signature verification the roles are reversed - the CA encrypts the signature and the browser decrypts it, so the decryption key is public.
The signature is done on the server side by using the private key only known to the server. The signature is validated by the client using the public key. This means only the public key is public and the private key stays secret at the server.
This means that your assumption that both primes are public is wrong.
CA encrypts the signature and browser decrypts it, so the decryption key is public
No, the CA signs the message with the private key; and others can verify the message using the public key.
What am I missing here?
The confusion probably comes from the way that many people learn how signing works, specifically because they learn about RSA as "encryption" is m^e % n and "decryption" is m^d % n. Then you learn that "signing" is a proof-of-private-key, done by m^d % n and "verification" is done by doing m^e % n and comparing the expected result to the digest of the message. Conclusion: signing == decryption.
The reason you get taught that is because RSA is a hard algorithm to work out on paper (and even hard to write correctly for the computer) if you are using "sensible" payload sizes (that is, any size big enough to hold even an MD5 hash (128 bits), which would require a minimum key size of 216-bit, resulting in doing ModExp with 5.26e64 < d < 1.06e65)
For RSA encryption (PKCS#1 v1.5 padding) you take your original message bytes and prepend them with
0x00
0x02
(n.Length - m.Length - 3) random non-zero values (minimum 8 of these)
0x00
So encryption is actually (00 02 padding 00 m)^e % n; or more generically pad(m)^e % n (another encryption padding option, OAEP, works very differently). Decryption now reverses that, and becomes depad(m^d % n).
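For illustration only (production code should let the crypto library do the padding), the layout just described can be sketched like this, with k being the modulus length in bytes:

```csharp
using System;
using System.Security.Cryptography;

static class Pkcs1Padding
{
    // Sketch of EME-PKCS1-v1_5: em = 0x00 0x02 <random non-zero PS> 0x00 <message>
    public static byte[] PadPkcs1Encrypt(byte[] m, int k)   // k = modulus length in bytes
    {
        if (m.Length > k - 11)
            throw new ArgumentException("message too long for this key size");

        byte[] em = new byte[k];
        em[0] = 0x00;
        em[1] = 0x02;

        int psLen = k - m.Length - 3;                        // always at least 8 bytes
        using (var rng = RandomNumberGenerator.Create())
        {
            byte[] b = new byte[1];
            for (int i = 0; i < psLen; i++)
            {
                do { rng.GetBytes(b); } while (b[0] == 0);   // padding bytes must be non-zero
                em[2 + i] = b[0];
            }
        }

        em[2 + psLen] = 0x00;
        Buffer.BlockCopy(m, 0, em, 3 + psLen, m.Length);
        return em;                                           // this value, as an integer, is raised to e mod n
    }
}
```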
Signing, on the other hand, uses different padding:
Compute T = DER-Encode(SEQUENCE(hashAlgorithmIdentifier, hash(m)))
Construct
0x00
0x01
(n.Length - T.Length - 3) padding bytes with value 0xFF
0x00
T
Again, the more generic form is just pad(m)^d % n. (RSA signatures have a second padding mode, called PSS, which is quite different)
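Sketched the same way for SHA-256 (illustration only; the 19-byte constant is the DER-encoded DigestInfo header for SHA-256):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

static class Pkcs1SignaturePadding
{
    // DER-encoded DigestInfo header for SHA-256; T = prefix || hash(m)
    static readonly byte[] Sha256DigestInfoPrefix =
    {
        0x30, 0x31, 0x30, 0x0d, 0x06, 0x09, 0x60, 0x86, 0x48, 0x01,
        0x65, 0x03, 0x04, 0x02, 0x01, 0x05, 0x00, 0x04, 0x20
    };

    // Sketch of EMSA-PKCS1-v1_5: em = 0x00 0x01 <0xFF .. 0xFF> 0x00 <T>
    public static byte[] PadPkcs1Sign(byte[] message, int k)   // k = modulus length in bytes
    {
        byte[] hash;
        using (var sha256 = SHA256.Create())
            hash = sha256.ComputeHash(message);

        byte[] t = Sha256DigestInfoPrefix.Concat(hash).ToArray();

        byte[] em = new byte[k];
        em[0] = 0x00;
        em[1] = 0x01;
        for (int i = 2; i < k - t.Length - 1; i++)
            em[i] = 0xFF;                                     // deterministic padding: no randomness at all
        em[k - t.Length - 1] = 0x00;
        Buffer.BlockCopy(t, 0, em, k - t.Length, t.Length);
        return em;                                            // raised to d mod n to produce the signature
    }
}
```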
Now signature verification deviates. The formula is verify(m^e % n). The simplest, most correct, form of verify for PKCS#1 signature padding (RSASSA-PKCS1-v1_5) is:
Run the signing padding formula again.
Verify that all the bytes are the same as what was produced as the output of the public key operation.
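In terms of the sketches above, the check is roughly the following, where PublicKeyOp is a placeholder for the raw sig^e % n computation written back out as k big-endian bytes:

```csharp
using System.Linq;

// Sketch: recompute the expected padded block and compare byte-for-byte
byte[] expected = Pkcs1SignaturePadding.PadPkcs1Sign(message, k);  // hypothetical helper from above
byte[] actual = PublicKeyOp(signature);                            // placeholder: sig^e % n as k big-endian bytes
bool signatureIsValid = expected.SequenceEqual(actual);
```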
PSS verification is very different. PSS padding a) adds randomness that PKCS#1-signature padding doesn't have, and b) has a verify formula that only reveals "correct" or "not correct" without revealing what the expected message hash should be.
So the private key was always (n, d) and the public key was always (n, e). And signing and decrypting aren't really the same thing (but they both involve "the private key operation"). (The private key can also be considered the triplet (p, q, e), or the quintuple (p, q, dp, dq, qInv), but let's keep things simple :))
For more information see RFC 8017, the most recent version of the RSA specification (which includes OAEP and PSS, as well as PKCS#1 encryption and PKCS#1 signature).
Normally you make the decryption key private and the encryption key public; however, for CA signature verification the roles are reversed - the CA encrypts the signature and the browser decrypts it, so the decryption key is public.
No. The signature is signed with the private key and verified with the public key. There is no role reversal of the keys as far as privacy is concerned. If there was, the digital signature would be worthless, instead of legally binding.
This means that the two primes are public
No it doesn't.
What am I missing here?
Most of it.
A CA works as a trusted authority that handles digital certificates. In an RSA digital signature, you have the private key to sign and the public key to verify the signature. Your browser has the public keys of all the major CAs. The browser uses these public keys to verify the digital certificate of the web server signed by a trusted CA. So the private key is not public and you can't compromise it. You can do a simple Google search to get a clearer understanding of CAs and digital certificates.
Is it a security problem if the key and IV are kept secret but stay the same for every message?
Thanks.
UPDATE: I did some research and learned why an IV is used. As mentioned in the answer below, it's a protection against frequency attacks. And there are two requirements when creating an IV: uniqueness and unpredictability.
The algorithm I thought of for creating the AES key and IV is:
Get two values (x, y) from a certain elliptic curve point agreed on beforehand, using asymmetric encryption.
Aes.Key <- HASH(x)
(string)unique <- timestamp()
(string)unpre <- y
Aes.IV <- HASH(unique + unpre)
And share data as {encryptedmessage, timestamp}
Is this logic alright?
You need a random (or at least pseudo-random) Initialization Vector for block ciphers like AES.
Ripped straight from Wikipedia:
Randomization is crucial for encryption schemes to achieve semantic security, a property whereby repeated usage of the scheme under the same key does not allow an attacker to infer relationships between segments of the encrypted message.
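In practical terms: keep the key fixed, generate a fresh unpredictable IV for every message, and send the IV along with the ciphertext (the IV does not need to be secret). A minimal .NET sketch of that pattern (the question is not tied to a particular framework, so the names here are illustrative; aes is assumed to be an Aes instance whose Key is already set to the shared key):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

static class AesWithFreshIv
{
    // Sketch: same key for every message, new random IV each time, IV prepended to the ciphertext
    public static byte[] Encrypt(Aes aes, byte[] plaintext)
    {
        aes.GenerateIV();                                    // fresh unpredictable IV per message
        using (var enc = aes.CreateEncryptor())
        {
            byte[] ct = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);
            return aes.IV.Concat(ct).ToArray();              // receiver splits off the first 16 bytes
        }
    }

    public static byte[] Decrypt(Aes aes, byte[] ivAndCiphertext)
    {
        aes.IV = ivAndCiphertext.Take(16).ToArray();         // AES block size is 16 bytes
        byte[] ct = ivAndCiphertext.Skip(16).ToArray();
        using (var dec = aes.CreateDecryptor())
            return dec.TransformFinalBlock(ct, 0, ct.Length);
    }
}
```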
I'm developing a simple secure data exchange between Server-Client and having some problems at the time of implementing AES.
I've already implemented the Shared Key exchange (with public key crypto) and it works fine. The idea in my head was (pseudocode):
SERVER
ciphertext = AES.encrypt(sharedKey,data)
send(ciphertext)
CLIENT
ciphertext = receive()
plaintext = AES.decrypt(sharedKey,ciphertext)
And voilà. When I tried to implement that, I first found that there was an IV. I first tried setting it to all zeros, like this:
self.cipher = AES.new(self.Kshared, AES.MODE_CFB, '0000000000000000')
while (there is data to send):
    ciphertext = self.cipher.encrypt(data)
    self.sendData(ciphertext)
Then, in the Client:
cipher = AES.new(Ksecreta, AES.MODE_CFB, '0000000000000000')
while (there is data to receive):
    plaintext = cipher.decrypt('0000000000000000' + data)[16:]
This works fine for the FIRST message, but not for the rest. I assume my problem might have something to do with the IV, but I have no idea. Plus, the first implementation I found used a salt to generate another key and also a random IV, but the problem is that the client has no idea which salt/IV the server is using. I guess you could send those via public key crypto, but I first want a simple working AES setup.
Thanks.
For your decryption code, there's no need to prepend the cipher text with the IV. Have you tried just plaintext = cipher.decrypt(data)?
It's safe to transmit the IV in clear text. So you can just generate it randomly and send it along with the ciphertext. Something like self.sendData(iv + ciphertext) on one side, and later iv = data[:16] and ciphertext = data[16:] on the other.
Another common thing to consider is encoding - some transport formats don't play well with sending raw byes (which can include NULL characters). It's common to encode the ciphertext into base64 for transport. If you need that, look into base64.b64encode and base64.b64decode
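Putting the last two points together, a sketch of the transport step (shown in C# for consistency with the rest of this page; in PyCrypto the same job is done with base64.b64encode / base64.b64decode and slicing; iv and ciphertext are placeholders for the values produced on the sending side):

```csharp
using System;
using System.Linq;

// Sketch: prepend the IV, base64-encode for transport, then split on receipt
string wire = Convert.ToBase64String(iv.Concat(ciphertext).ToArray());

// ... transmit 'wire' as text ...

byte[] received = Convert.FromBase64String(wire);
byte[] rxIv = received.Take(16).ToArray();           // first 16 bytes: the IV (AES block size)
byte[] rxCiphertext = received.Skip(16).ToArray();   // the rest: the actual ciphertext
```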
I want to know if RSA signatures are unique for a data.
Suppose I have a "hello" string. The method of computing the RSA signature is first to get the SHA-1 digest (these are, I know, unique for the data), then add a header with the OID and the padding scheme mentioned, and do some mathematical jiggling to give the signature.
Now assuming padding is same, will the signature generating by openSSL or Bouncy Castle be same?
If yes, my only fear is: won't it be easy to get back the "text"/data?
I actually tried to do an RSA signature of some data, and the signatures from OpenSSL and BC were different. I repeated it, but got the same signature again and again for each of them. I realized that the two signatures were different because of the difference in padding. However, I am still not sure why the signatures from each of the libs are the same every time I repeat them. Can somebody please give an easy explanation?
The "usual" padding scheme, described in PKCS#1 as the "old-style, v1.5" padding, is deterministic. It works like this:
The data to sign is hashed (e.g. with SHA-1).
A fixed header is added; that header is actually an ASN.1 structure which identifies the hash function which was just used to process the data.
Padding bytes are added (on the left): 0x00, then 0x01, then some 0xFF bytes, then 0x00. The number of 0xFF bytes is adjusted so that the resulting total length is exactly the byte length of the modulus (i.e. 128 bytes for a 1024-bit RSA key).
The padded value is converted to an integer (which is less than the modulus), which goes through the modular exponentiation which is at the core of RSA. The result is converted back to a sequence of bytes, and that's the signature.
All these operations are deterministic, there is no random, hence it is normal and expected that signing the same data with the same key and the same hash function will yield the same signature ever and ever.
However there is a slight underspecification in the ASN.1-based fixed header. This is a structure which identifies the hash function, along with "parameters" for that hash function. Usual hash functions take no parameters, hence the parameters shall be represented with either a special "NULL" value (which takes a few bytes), or be omitted altogether: both representations are acceptable (although the former is supposedly preferred). So, the raw effect is that there are two versions of the "fixed header", for a given hash function. OpenSSL and Bouncycastle do not use the same header. However, signature verifiers are supposed to accept both.
PKCS#1 also describes a newer padding scheme, called PSS, which is more complex but with a stronger security proof. PSS includes a bunch of random bytes, so you will get a distinct signature every time.
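Both behaviours can be observed directly (a small self-contained sketch; PSS needs an RSA implementation that supports it, such as the one returned by RSA.Create() on modern .NET, rather than RSACryptoServiceProvider):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class SignatureDeterminism
{
    static void Main()
    {
        byte[] msg = Encoding.UTF8.GetBytes("hello");
        using (RSA rsa = RSA.Create(2048))   // .NET Core / .NET 5+; on .NET Framework, new RSACng(2048) also works
        {
            byte[] s1 = rsa.SignData(msg, HashAlgorithmName.SHA1, RSASignaturePadding.Pkcs1);
            byte[] s2 = rsa.SignData(msg, HashAlgorithmName.SHA1, RSASignaturePadding.Pkcs1);
            Console.WriteLine(s1.SequenceEqual(s2));   // True: v1.5 padding is deterministic

            byte[] p1 = rsa.SignData(msg, HashAlgorithmName.SHA1, RSASignaturePadding.Pss);
            byte[] p2 = rsa.SignData(msg, HashAlgorithmName.SHA1, RSASignaturePadding.Pss);
            Console.WriteLine(p1.SequenceEqual(p2));   // False: PSS mixes in a random salt
        }
    }
}
```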
Signatures are not a privacy mechanism; it's not considered a problem if you can get the plaintext back out. If your message must be kept secret, then encrypt as well as sign.
Nevertheless, remember that RSA signatures are created using a signer's private key. Given such a signature, you can use the signer's public key to "undo" the RSA transform (raise the message's signature to e, mod n) and get out the SHA1 or other hash value that was provided as its input. You still can't undo the hash function to get the input plaintext corresponding to a signature that has become detached from its message.
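Here is a sketch of that "undo" step (illustrative only: rsa is assumed to hold the signer's public key, sig is a PKCS#1 v1.5 signature, and the byte reversal plus extra zero byte are needed because .NET's BigInteger is little-endian and signed):

```csharp
using System;
using System.Linq;
using System.Numerics;
using System.Security.Cryptography;

static class RecoverDigestInfo
{
    // Sketch: raise the signature to e mod n and look at the padded DigestInfo it reveals
    public static byte[] PublicKeyOperation(RSA rsa, byte[] sig)
    {
        RSAParameters pub = rsa.ExportParameters(false);

        BigInteger n = ToPositiveBigInteger(pub.Modulus);
        BigInteger e = ToPositiveBigInteger(pub.Exponent);
        BigInteger s = ToPositiveBigInteger(sig);

        BigInteger m = BigInteger.ModPow(s, e, n);

        // Back to big-endian; the leading 0x00 of the padded block is lost because it is a leading zero of the integer
        return m.ToByteArray().Reverse().ToArray();   // 01 FF .. FF 00 <DigestInfo containing the hash>
    }

    static BigInteger ToPositiveBigInteger(byte[] bigEndian) =>
        new BigInteger(bigEndian.Reverse().Concat(new byte[] { 0 }).ToArray());
}
```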
RSA for encryption is a different matter. Padding methods for encryption here do include random data in order to defeat traffic analysis.
This is why you add a salt/initialisation vector on top of your key. That way it shouldn't be possible to tell which records came from the same plaintext.