Is it true that RSA encryption can only handle a limited payload of data? I'm confused, because the theory I've read doesn't seem to mention such a limit.
RSA encrypts a single message whose length is somewhat smaller than the modulus. Specifically, the message is first "padded", resulting in a sequence of bytes which is then interpreted as a big integer between 0 and n-1, where n is the modulus (part of the public key) -- so the padded message cannot be longer than the modulus, which implies a strict maximum length on the raw message.
Specifically, with the most common padding scheme (PKCS#1 "old-style", aka "v1.5"), the padding adds at least 11 bytes to the message, and the total padded message length must be equal to the modulus length, e.g. 128 bytes for a 1024-bit RSA key. Thus, the maximum message length is 117 bytes. Note that the resulting encrypted message has the same size as the modulus, so encryption necessarily expands the message by at least 11 bytes.
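As a quick illustration (a hedged sketch using Python's cryptography package, which is an assumption about tooling rather than anything mentioned in the question), the 117-byte limit for a 1024-bit key shows up like this:

```python
# Sketch only: the "cryptography" package is an assumed dependency.
# A 1024-bit key has a 128-byte modulus, so PKCS#1 v1.5 encryption
# accepts at most 128 - 11 = 117 bytes of plaintext.
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
public_key = private_key.public_key()

ciphertext = public_key.encrypt(b"A" * 117, padding.PKCS1v15())  # fits
print(len(ciphertext))                                           # 128, the modulus size

try:
    public_key.encrypt(b"A" * 118, padding.PKCS1v15())           # one byte too many
except ValueError as exc:
    print("rejected:", exc)
```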
The normal way of using RSA to encrypt a big message (say, an e-mail) is to use a hybrid scheme:
A random symmetric key K is chosen (a raw sequence of, e.g., 128 to 256 random bits).
The big message is symmetrically encrypted with K, using a proper and efficient symmetric encryption scheme such as AES.
K is asymmetrically encrypted with RSA.
"Splitting" a big message into so many 117-byte blocks, each to be encrypted with RSA, is not normally done, for a variety of reasons: it is difficult to do it right without adding extra weaknesses; each block would be expanded by 11 bytes, implying a non-negligible total message size increase (network bandwidth can be a scarce resource); symmetric encryption is much faster.
In the basic RSA algorithm (without padding), which is not very secure, the message must be smaller than the modulus.
To enhance the security of RSA you should use a padding scheme as defined in PKCS#1. Depending on the scheme you choose, the maximum message size can be significantly smaller than the modulus.
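To see why even the unpadded ("textbook") algorithm is size-limited, here is a toy example with deliberately tiny, insecure numbers: the plaintext is an integer m, and anything not strictly smaller than the modulus cannot be recovered after decryption.

```python
# Toy textbook RSA, for illustration only (completely insecure parameters).
p, q, e = 61, 53, 17
n = p * q                          # modulus (3233)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

m = 65                             # message as an integer; must satisfy m < n
c = pow(m, e, n)                   # encryption: c = m^e mod n
assert pow(c, d, n) == m           # decryption recovers m only because m < n
```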
http://en.wikipedia.org/wiki/PKCS1
Related
I'm wondering which changes from SSL 3.0 to TLS 1.0 exactly fixed the POODLE attack. The basis for this attack is the message layout M1||MAC||PAD, where a whole block is used for MAC and padding.
My idea is that it no longer works (without downgrading) because in TLS 1.0, if the last block is all padding, it is 0x101010... (with a block size of 16) and not 0xXX...XX10 (XX = random), so it is much harder to guess 16 bytes at once instead of only the last byte.
But are there any other security parameters that fixed this problem, or did I get it right? For example, is the end of the message no longer ||MAC||PAD? Or is the padding perhaps signed (covered by the MAC) or something like that?
Regards
Julian
SSL 3.0 and TLS 1.0 differ in how they treat the padding.
See https://www.openssl.org/~bodo/ssl-poodle.pdf and this section:
The most severe problem of CBC encryption in SSL 3.0 is that its block
cipher padding is not deterministic, and not covered by the MAC
(Message Authentication Code): thus, the integrity of padding cannot
be fully verified when decrypting. Padding by 1 to L bytes (where L is
the block size in bytes) is used to obtain an integral number of
blocks before performing blockwise CBC (cipherblock chaining)
encryption. The weakness is the easiest to exploit if there’s an
entire block of padding, which (before encryption) consists of L-1
arbitrary bytes followed by a single byte of value L-1.
The messages in TLS 1.0 are still structured the same; see this structure from RFC 2246:
block-ciphered struct {
    opaque content[TLSCompressed.length];
    opaque MAC[CipherSpec.hash_size];
    uint8 padding[GenericBlockCipher.padding_length];
    uint8 padding_length;
} GenericBlockCipher;
The padding is defined as such:
Each uint8 in the padding data vector must be filled with the padding length value.
This is the crucial difference between SSL 3.0 and TLS 1.0 in that regard: it enables the receiver to check that the padding is correct and is not, in fact, a leftover copy of a valid application-data block.
(compare https://www.rfc-editor.org/rfc/rfc6101#section-5.2.3.2 for SSL 3.0 with https://www.rfc-editor.org/rfc/rfc2246.html#section-6.2.3.2 for TLS 1.0)
This is also explained on https://www.imperialviolet.org/2014/10/14/poodle.html like that:
Consider the following plaintext HTTP request, which I've broken into
8-byte blocks (as in 3DES), but the same idea works for 16-byte blocks
(as in AES) just as well:
[GET / HT][TP/1.1\r\n][Cookie: ][abcdefgh][\r\n\r\nxxxx][MAC DATA][•••••••7]
The last block contains seven bytes of padding (represented as •) and
the final byte is the length of the padding.
[..]
An attacker can't see the plaintext contents like we can in the
diagram, above. They only see the CBC-encrypted ciphertext blocks. But
what happens if the attacker duplicates the block containing the
cookie data and overwrites the last block with it? When the receiver
decrypts the last block it XORs in the contents of the previous
ciphertext (which the attacker knows) and checks the authenticity of
the data. Critically, since SSLv3 doesn't specify the contents of the
padding (•) bytes, the receiver cannot check them. Thus the record
will be accepted if, and only if, the last byte ends up as a seven.
And later:
The critical part of this attack is that SSLv3 doesn't specify the
contents of padding bytes (the •s). TLS does and so this attack
doesn't work because the attacker only has a 2^-64 or 2^-128 chance of a
duplicated block being a valid padding block.
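To make the difference concrete, here is a rough sketch in plain Python (not taken from any real TLS stack; the function names are made up) of the receiver-side padding check after CBC decryption in each protocol:

```python
# SSL 3.0: only the final padding_length byte matters; the padding bytes
# themselves are arbitrary, so a copied ciphertext block is accepted
# whenever its last byte happens to decrypt to the right length.
def sslv3_padding_ok(plaintext: bytes, block_len: int) -> bool:
    pad_len = plaintext[-1]
    return pad_len < block_len and pad_len + 1 <= len(plaintext)

# TLS 1.0: every padding byte must equal padding_length, so a duplicated
# block almost never decrypts to something that passes this check.
def tls10_padding_ok(plaintext: bytes) -> bool:
    pad_len = plaintext[-1]
    if pad_len + 1 > len(plaintext):
        return False
    return all(b == pad_len for b in plaintext[-(pad_len + 1):-1])
```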
A Facebook app secret is a string of 32 hexadecimal characters (0-9, a-f), so it represents a 128-bit (16-byte) value. Facebook uses this as the key to generate signed requests using HMAC-SHA256. Is this correct usage? I thought HMAC-SHA256 should use 256-bit keys.
HMAC uses HASH(key) as the key if the key is longer than the internal block size of the hash. Thus, a key larger than the internal block size provides no better security than one equal to that size. Shorter keys are zero-padded up to the internal block size of the hash, as per the HMAC specification.
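This behaviour is easy to check with, for example, Python's standard hmac module (SHA-256 has a 64-byte internal block size):

```python
import hashlib
import hmac

msg = b"payload to authenticate"
key16 = bytes.fromhex("0f" * 16)          # a 128-bit (16-byte) key

# A short key and the same key zero-padded to the 64-byte block size
# produce the identical MAC: the padding is implicit in HMAC.
tag_short = hmac.new(key16, msg, hashlib.sha256).digest()
tag_padded = hmac.new(key16 + b"\x00" * 48, msg, hashlib.sha256).digest()
assert tag_short == tag_padded

# A key longer than the block size is first replaced by its hash.
key_long = b"k" * 100
tag_long = hmac.new(key_long, msg, hashlib.sha256).digest()
tag_hashed = hmac.new(hashlib.sha256(key_long).digest(), msg, hashlib.sha256).digest()
assert tag_long == tag_hashed
```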
HMAC-SHA-256 never really operates on a 128-bit key: internally the key is padded out to 512 bits with zeroes, which is probably alright for short-term authentication. I'd recommend at least 256 bits, and ideally you would want to use something equal to the internal block size of the underlying hash.
The page says that the 256-bit signature is derived from a payload (what Facebook is signing) plus your 128-bit secret.
So yes, it sounds like correct usage.
The 16-byte secret (32 hex characters) isn't actually a key in the sense that it's used to encrypt and decrypt something. Rather, it's a piece of data (effectively a salt) that is used to alter the result of the digital signature by changing the input ever so slightly, so that only someone who knew the exact secret and the exact payload could have created the signature.
I want to know if RSA signatures are unique for a given piece of data.
Suppose I have the string "hello". The method of computing the RSA signature is first to compute the SHA-1 digest (which is, I know, unique to the data), then add a header with the OID and the padding scheme mentioned, and do some mathematical jiggling to produce the signature.
Now, assuming the padding is the same, will the signatures generated by OpenSSL and Bouncy Castle be the same?
If yes, my only fear is: won't it be easy to get back the "text"/data?
I actually tried to compute an RSA signature of some data, and the signatures from OpenSSL and BC were different. I repeated it and got the same signature again and again from each of them. I realized that the two signatures differed because of the difference in padding. However, I am still not sure why each library's signature is the same every time I repeat it. Can somebody please give an easy explanation?
The "usual" padding scheme, described in PKCS#1 as the "old-style, v1.5" padding, is deterministic. It works like this:
The data to sign is hashed (e.g. with SHA-1).
A fixed header is added; that header is actually an ASN.1 structure which identifies the hash function which was just used to process the data.
Padding bytes are added (on the left): 0x00, then 0x01, then some 0xFF bytes, then 0x00. The number of 0xFF bytes is adjusted so that the resulting total length is exactly the byte length of the modulus (i.e. 128 bytes for a 1024-bit RSA key).
The padded value is converted to an integer (which is less than the modulus), which goes through the modular exponentiation which is at the core of RSA. The result is converted back to a sequence of bytes, and that's the signature.
All these operations are deterministic; there is no randomness involved, hence it is normal and expected that signing the same data with the same key and the same hash function will yield the same signature every time.
However, there is a slight underspecification in the ASN.1-based fixed header. This is a structure which identifies the hash function, along with "parameters" for that hash function. Usual hash functions take no parameters, so the parameters shall be represented either with a special "NULL" value (which takes a few bytes) or omitted altogether: both representations are acceptable (although the former is supposedly preferred). The net effect is that there are two versions of the "fixed header" for a given hash function. OpenSSL and Bouncy Castle do not use the same header. However, signature verifiers are supposed to accept both.
PKCS#1 also describes a newer padding scheme, called PSS, which is more complex but with a stronger security proof. PSS includes a bunch of random bytes, so you will get a distinct signature every time.
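The difference is easy to observe; for instance, with Python's cryptography package (an illustrative choice, not something OpenSSL or Bouncy Castle requires):

```python
# Sketch: PKCS#1 v1.5 signatures are deterministic, PSS signatures are not.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data = b"hello"

sig1 = key.sign(data, padding.PKCS1v15(), hashes.SHA256())
sig2 = key.sign(data, padding.PKCS1v15(), hashes.SHA256())
assert sig1 == sig2   # same key + data + hash => byte-for-byte identical

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
# With a random salt, two PSS signatures differ (except with negligible probability).
assert key.sign(data, pss, hashes.SHA256()) != key.sign(data, pss, hashes.SHA256())
```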
Signatures are not a privacy mechanism; it's not considered a problem if you can get the plaintext back out. If your message must be kept secret, then encrypt as well as sign.
Nevertheless, remember that RSA signatures are created using a signer's private key. Given such a signature, you can use the signer's public key to "undo" the RSA transform (raise the message's signature to e, mod n) and get out the SHA1 or other hash value that was provided as its input. You still can't undo the hash function to get the input plaintext corresponding to a signature that has become detached from its message.
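As a sketch of what that "undoing" gives you (again using Python's cryptography package as an assumed tool): raising a PKCS#1 v1.5 signature to e mod n yields the padded hash, and nothing more.

```python
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data = b"hello"
sig = key.sign(data, padding.PKCS1v15(), hashes.SHA256())

nums = key.public_key().public_numbers()
em = pow(int.from_bytes(sig, "big"), nums.e, nums.n).to_bytes(256, "big")
# em is 00 01 FF..FF 00 || DigestInfo, and it ends with the raw SHA-256 of the
# data; the original message itself is not in there.
assert em.endswith(hashlib.sha256(data).digest())
```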
RSA for encryption is a different matter. Padding methods for encryption here do include random data in order to defeat traffic analysis.
This is why you add a salt/initialisation vector on top of your key. That way it shouldn't be possible to tell which records came from the same plaintext.
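A quick way to see that randomization, sketched with Python's cryptography package and OAEP:

```python
# Two encryptions of the same plaintext under the same key differ,
# yet both decrypt back to the original message.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

c1 = key.public_key().encrypt(b"same plaintext", oaep)
c2 = key.public_key().encrypt(b"same plaintext", oaep)
assert c1 != c2
assert key.decrypt(c1, oaep) == key.decrypt(c2, oaep) == b"same plaintext"
```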
I'm trying to encrypt some data using a public key derived from the exchange key pair made with the CALG_RSA_KEYX key type. I determined the block size was 512 bits using CryptGetKeyParam with KP_BLOCKLEN. It seems the maximum number of bytes I can feed CryptEncrypt is 53 (424 bits), for which I get an encrypted length of 64 bytes back. How can I determine how many bytes I can feed into CryptEncrypt? If I feed in more than 53 bytes, the call fails.
RSA using the usual PKCS#1 v.1.5 mode can encrypt a message that is at most k-11 bytes, where k is the length of the modulus in bytes. So a 512 bit key can encrypt up to 53 bytes and a 1024 bit key can encrypt up to 117 bytes.
RSA using OAEP can encrypt a message up to k-2*hLen-2, where k is the modulus byte-length and hLen is the length of the output of the underlying hash-function. So using SHA-1, a 512 bit key can encrypt up to 22 bytes and a 1024 bit key can encrypt up to 86 bytes.
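These limits are straightforward to compute; a small helper mirroring the two formulas above (a sketch, not CryptoAPI code) might look like this:

```python
# k - 11 for PKCS#1 v1.5; k - 2*hLen - 2 for OAEP (hLen = 20 bytes for SHA-1).
def max_plaintext_pkcs1_v15(modulus_bits: int) -> int:
    return modulus_bits // 8 - 11

def max_plaintext_oaep(modulus_bits: int, hash_len_bytes: int = 20) -> int:
    return modulus_bits // 8 - 2 * hash_len_bytes - 2

print(max_plaintext_pkcs1_v15(512), max_plaintext_pkcs1_v15(1024))  # 53 117
print(max_plaintext_oaep(512), max_plaintext_oaep(1024))            # 22 86
```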
You should not normally use an RSA key to encrypt your message directly. Instead you should generate a random symmetric key (e.g. an AES key), encrypt your message with the symmetric key, encrypt the symmetric key with the RSA key, and transmit both encryptions to the recipient. This is usually called hybrid encryption.
EDIT: Although this response is marked as accepted by the OP, please see Rasmus Faber's response instead, as it is a much better one. Posted 24 hours later, Rasmus's response corrects factual errors, in particular a mischaracterization of OAEP as a block cipher; OAEP is in fact a padding scheme used atop PKCS#1's encryption primitive for the purpose of key encryption. OAEP is more secure and imposes an even stricter limit on the maximum message length; that limit also depends on the underlying hash algorithm and the key length.
Another shortcoming of the following reply is its failure to stress that CALG_RSA_KEYX should be used exclusively for key exchange, after which messages of any length can be transmitted with whatever symmetric encryption algorithm is desired. The OP was aware of this; he was merely trying to "play" with the PK, and I did cover that much, albeit deep in the long remarks thread.
For the time being, I'm leaving this response here, for the record, and also as Mike D may want to refer to it, but do let me know in the remarks if you think it would be better to remove it altogether; I don't mind doing so for the sake of clarity!
-mjv- Sept 29, 2009
Original reply:
Have you checked the error code from GetLastError() following CryptEncrypt()'s FALSE return?
I suspect it might be NTE_BAD_LEN, unless there's some other issue.
Maybe you can post the code that surrounds your call to CryptEncrypt().
Bingo, upon seeing the CryptEncrypt() call.
You do not seem to be using the RSAES w/ OAEP scheme, since you do not have the CRYPT_OAEP flag on. This OAEP scheme is a block cipher based upon RSAES. This latter encryption algorithm, however, can only encrypt messages slightly smaller than its key size (expressed in bytes). This is due to the minimum padding size defined in PKCS#1; such padding helps protect the algorithm from some attacks (I think the ones based on known cleartext).
Therefore you have three options:
use the CRYPT_OAEP flag in the flags parameter to CryptEncrypt()
extend the key size to, say, 1024 bits (if you have control over it; beware that longer keys increase the time to encrypt/decrypt...)
limit yourself to clear-text messages no longer than 53 bytes
For documentation purposes, I'd like to make note of a few online resources.
- The RSA Labs web site, which is very useful for all things crypto.
- Wikipedia articles on the subject, which are also quite informative, easier to read, and yet quite factual (I think).
When in doubt, however, do consult a real crypto specialist, not someone like me :-)