Why does the POODLE attack only work after downgrading to SSL 3.0?

I'm wondering which changes from SSL 3.0 to TLS 1.0 exactly fixed the POODLE attack. The basis for this attack is the message layout M1||MAC||PAD, where a whole block can be used for the MAC and padding.
My idea is that it no longer works (without downgrading) because in TLS 1.0, if the last block is all padding, it is 0x101010... (with a block size of 16) and not 0xXX...XX10 (XX = random), so it is much harder to guess 16 bytes at once instead of only the last byte.
But are there any other security parameters that fixed this problem, or is my idea correct? For example, do the messages no longer end in ||MAC||PAD? Or is the PAD perhaps signed or something like that?
Regards
Julian

SSL 3.0 and TLS 1.0 differ in how they treat the padding.
See https://www.openssl.org/~bodo/ssl-poodle.pdf and this section:
The most severe problem of CBC encryption in SSL 3.0 is that its block
cipher padding is not deterministic, and not covered by the MAC
(Message Authentication Code): thus, the integrity of padding cannot
be fully verified when decrypting. Padding by 1 to L bytes (where L is
the block size in bytes) is used to obtain an integral number of
blocks before performing blockwise CBC (cipher-block chaining)
encryption. The weakness is the easiest to exploit if there’s an
entire block of padding, which (before encryption) consists of L-1
arbitrary bytes followed by a single byte of value L-1.
The messages in TLS 1.0 are still structured the same; see this structure from RFC 2246:
block-ciphered struct {
    opaque content[TLSCompressed.length];
    opaque MAC[CipherSpec.hash_size];
    uint8 padding[GenericBlockCipher.padding_length];
    uint8 padding_length;
} GenericBlockCipher;
The padding is defined as such:
Each uint8 in the padding data vector must be filled with the padding length value.
This is the crucial difference between SSL 3.0 and TLS 1.0 in that regard: it enables the receiver to check that the padding is correct, and not in fact leftover from valid application data blocks.
(compare https://www.rfc-editor.org/rfc/rfc6101#section-5.2.3.2 for SSL 3.0 with https://www.rfc-editor.org/rfc/rfc2246.html#section-6.2.3.2 for TLS 1.0)
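To make the difference concrete, here is a minimal receiver-side sketch in Python (the function names are my own, and real record processing involves more than this): the SSL 3.0 check can only look at the final length byte, while the TLS 1.0 check covers every padding byte.

def ssl3_padding_ok(block, block_size=16):
    # SSL 3.0: only the final padding_length byte is meaningful; the
    # padding bytes themselves are arbitrary and cannot be verified.
    return block[-1] <= block_size - 1

def tls10_padding_ok(data):
    # TLS 1.0: the padding_length byte and every padding byte must all
    # carry the same value, so leftover application data is rejected.
    pad_len = data[-1]
    if pad_len + 1 > len(data):
        return False
    return all(b == pad_len for b in data[-(pad_len + 1):])

# A block of leftover application data that merely ends in 15:
fake = bytes([0x41] * 15 + [15])
print(ssl3_padding_ok(fake))    # True  -> accepted under SSL 3.0
print(tls10_padding_ok(fake))   # False -> rejected under TLS 1.0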
This is also explained on https://www.imperialviolet.org/2014/10/14/poodle.html as follows:
Consider the following plaintext HTTP request, which I've broken into
8-byte blocks (as in 3DES), but the same idea works for 16-byte blocks
(as in AES) just as well:
[GET / HT][TP/1.1\r\n][Cookie: ][abcdefgh][\r\n\r\nxxxx][MAC DATA][•••••••7]
The last block contains seven bytes of padding (represented as •) and
the final byte is the length of the padding.
[..]
An attacker can't see the plaintext contents like we can in the
diagram, above. They only see the CBC-encrypted ciphertext blocks. But
what happens if the attacker duplicates the block containing the
cookie data and overwrites the last block with it? When the receiver
decrypts the last block it XORs in the contents of the previous
ciphertext (which the attacker knows) and checks the authenticity of
the data. Critically, since SSLv3 doesn't specify the contents of the
padding (•) bytes, the receiver cannot check them. Thus the record
will be accepted if, and only if, the last byte ends up as a seven.
And later:
The critical part of this attack is that SSLv3 doesn't specify the
contents of padding bytes (the •s). TLS does and so this attack
doesn't work because the attacker only has a 2^-64 or 2^-128 chance of a
duplicated block being a valid padding block.
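Tying the two quoted explanations together, here is a hedged simulation sketch (Python, using the third-party cryptography package). The MAC check is omitted because the duplicated block is only ever accepted or rejected on the padding-length byte; roughly 1 in 256 tampered records is accepted, and each acceptance leaks one byte of the secret block:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BS = 16

def poodle_trial(secret):
    # One "session": fresh key/IV; the attacker copies the secret's
    # ciphertext block over the final, full-padding block.
    key, iv = os.urandom(16), os.urandom(16)
    pad = os.urandom(BS - 1) + bytes([BS - 1])   # 15 arbitrary bytes + length
    record = secret + os.urandom(BS) + pad       # [secret][filler][padding]
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = enc.update(record) + enc.finalize()
    blocks = [ct[i:i + BS] for i in range(0, len(ct), BS)]
    blocks[-1] = blocks[0]                       # duplicate the secret block
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    pt = dec.update(b"".join(blocks)) + dec.finalize()
    if pt[-1] != BS - 1:     # SSL 3.0 receiver: only the length byte counts
        return None
    # Accepted, so D(C_secret)[-1] XOR C_prev[-1] == 15. The block that
    # preceded the secret block during encryption is the IV here, hence:
    return (BS - 1) ^ blocks[-2][-1] ^ iv[-1]

secret = b"Cookie: abcdefgh"   # one 16-byte block holding the target byte
hits = [b for b in (poodle_trial(secret) for _ in range(20000)) if b is not None]
print(len(hits) / 20000)                    # about 1/256
print(all(b == secret[-1] for b in hits))   # True: the last byte leaks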

Related

Interoperability of AES CTR mode?

I use AES-128 in CTR mode for encryption, implemented for different clients (Android/Java and iOS/ObjC). The 16-byte IV used when encrypting a packet is formatted like this:
<11 byte nonce> | <4 byte packet counter> | 0
The packet counter (included in a sent packet) is increased by one for every packet sent. The last byte is used as the block counter, so that packets with fewer than 256 blocks always get a unique counter value. I was under the assumption that CTR mode specified that the counter should be increased by 1 for each block, using the last 8 bytes as a big-endian counter, or that this at least was a de facto standard. This also seems to be the case in the Sun crypto implementation.
I was a bit surprised when the corresponding iOS implementation (using CommonCryptor, iOS 5.1) failed to decode every block except the first when decoding a packet. It seems that CommonCryptor defines the counter in some other way. CommonCryptor can be created in both big-endian and little-endian mode, but some vague comments in the CommonCryptor code indicate that this is not (or at least has not been) fully supported:
http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-60026/Source/API/CommonCryptor.c
/* corecrypto only implements CTR_BE. No use of CTR_LE was found so we're marking
this as unimplemented for now. Also in Lion this was defined in reverse order.
See <rdar://problem/10306112> */
By decoding block by block, each time setting the IV as specified above, it works nicely.
My question: is there a "right" way of implementing the CTR/IV mode when decoding multiple blocks in a single go, or can I expect interoperability problems when using different crypto libs? Is CommonCrypto bugged in this regard, or is it just a question of implementing CTR mode differently?
The definition of the counter is (loosely) specified in NIST recommendation SP 800-38A, Appendix B. Note that NIST only specifies how to use CTR mode with regard to security; it does not define one standard algorithm for the counter.
To answer your question directly: whatever you do, you should expect the counter to be incremented by one for each block. The counter should represent a 128-bit big-endian integer according to the NIST specifications. It may be that only the least significant (rightmost) bits are incremented, but that will usually not make a difference unless you pass the 2^32 - 1 or 2^64 - 1 value.
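To see what "a big-endian counter incremented by one per block" means concretely, here is a small sketch (Python, using the third-party cryptography package). The keystream is built by hand with AES-ECB so the increment is explicit, then cross-checked against the library's own CTR mode, which treats the full 16-byte IV as one big-endian counter:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_keystream(key, iv, n_blocks):
    # Treat the 16-byte IV as a 128-bit big-endian integer and
    # encrypt counter, counter+1, counter+2, ... with the raw cipher.
    ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    counter = int.from_bytes(iv, "big")
    out = b""
    for _ in range(n_blocks):
        out += ecb.update(counter.to_bytes(16, "big"))
        counter = (counter + 1) % (1 << 128)
    return out

key, iv = os.urandom(16), os.urandom(16)
manual = ctr_keystream(key, iv, 4)
# Encrypting zeros with the library's CTR mode yields its keystream.
ctr = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
assert manual == ctr.update(b"\x00" * 64) + ctr.finalize()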
For the sake of compatibility you could decide to use the first (leftmost) 12 bytes as a random nonce and leave the last four bytes at zero, then let the CTR implementation do the increments. In that case you simply use a 96-bit / 12-byte random value at the start, and there is no need for a packet counter.
You are however limited to 2^32 * 16 bytes of plaintext before the counter uses up all the available bits. It is implementation specific whether the counter wraps to zero or whether the nonce itself is included in the counter, so you may want to limit yourself to messages of 68,719,476,736 = ~68 GB (yes, that's base 10; Giga means 1,000,000,000). Two further caveats:
- because of the birthday problem you should expect a nonce collision after about 2^48 messages (48 = 96 / 2; a nonce is required for each message, not each block), so you should limit the number of messages;
- if some attacker tricks you into decrypting 2^32 blocks under the same nonce, you run out of counter.
In case this is still incompatible (test!), use the initial 8 bytes as the nonce. Unfortunately that does mean that you need to limit the number of messages even further because of the birthday problem.
Further investigation sheds some light on the CommonCrypto problem:
In iOS 6.0.1 the little-endian option is now unimplemented. Also, I have verified that CommonCrypto is bugged in that the CCCryptorReset method does not in fact change the IV as it should, but keeps using the pre-existing IV. The behaviour in 6.0.1 is different from 5.x.
This is potentially a security risk if you initialize CommonCrypto with a nulled IV and reset it to the actual IV right before encrypting. That would lead to all your data being encrypted with the same (nulled) IV, and multiple streams (that perhaps should have different IVs but use the same key) would leak data via a simple XOR of packets with corresponding counter values.
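The leak described in the last paragraph is easy to demonstrate. A minimal sketch (Python, using the third-party cryptography package): two streams encrypted under the same key and the same (nulled) IV cancel each other's keystream under XOR, so one known plaintext reveals the other.

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = b"\x00" * 16   # any fixed key; the problem is the repeated IV
iv = b"\x00" * 16    # the accidentally nulled IV described above

def ctr_encrypt(pt):
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return enc.update(pt) + enc.finalize()

p1 = b"attack at midnight"
p2 = b"defend the castle!"
c1, c2 = ctr_encrypt(p1), ctr_encrypt(p2)

# The keystream cancels: c1 ^ c2 == p1 ^ p2.
xor_ct = bytes(a ^ b for a, b in zip(c1, c2))
assert xor_ct == bytes(a ^ b for a, b in zip(p1, p2))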

Elgamal Or RSA Encryption in Bouncy castle is not taking large input [duplicate]

Is it true that RSA encryption can only handle a limited payload of data? ... I'm confused with the theory ... theoretically there is no note regarding this ...
RSA encrypts a single message whose length is somewhat smaller than the modulus. Specifically, the message is first "padded", resulting in a sequence of bytes which is then interpreted as a big integer between 0 and n-1, where n is the modulus (a part of the public key) -- so the padded message cannot be longer than the modulus, which implies a strict maximum length on the raw message.
Specifically, with the most common padding scheme (PKCS#1 "old-style", aka "v1.5"), the padding adds at least 11 bytes to the message, and the total padded message length must be equal to the modulus length, e.g. 128 bytes for a 1024-bit RSA key. Thus, the maximum message length is 117 bytes. Note that the resulting encrypted message has the same size as the modulus, so the encryption necessarily expands the message size by at least 11 bytes.
The normal way of using RSA for encrypting a big message (say, an e-mail) is to use a hybrid scheme (a short sketch follows the three steps below):
A random symmetric key K is chosen (a raw sequence of, e.g., 128 to 256 random bits).
The big message is symmetrically encrypted with K, using a proper and efficient symmetric encryption scheme such as AES.
K is asymmetrically encrypted with RSA.
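A minimal sketch of those three steps (Python, using the third-party cryptography package; RSA-OAEP and AES-GCM are my own choices here, the steps above do not prescribe particular algorithms):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's RSA key pair (2048 bits, for illustration).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"a big message, far larger than one RSA block... " * 100
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

k = AESGCM.generate_key(bit_length=128)               # 1. random symmetric key K
nonce = os.urandom(12)
ciphertext = AESGCM(k).encrypt(nonce, message, None)  # 2. bulk data under K
wrapped_k = public_key.encrypt(k, oaep)               # 3. K under RSA

# Recipient: unwrap K with the private key, then decrypt the bulk data.
k2 = private_key.decrypt(wrapped_k, oaep)
assert AESGCM(k2).decrypt(nonce, ciphertext, None) == message

Only the short key K ever goes through RSA; the bulk of the data only pays the (much cheaper) symmetric encryption cost.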
"Splitting" a big message into so many 117-byte blocks, each to be encrypted with RSA, is not normally done, for a variety of reasons: it is difficult to do it right without adding extra weaknesses; each block would be expanded by 11 bytes, implying a non-negligible total message size increase (network bandwidth can be a scarce resource); symmetric encryption is much faster.
In the basic RSA algorithm (without padding), which is not very secure, the size of the message is limited to be smaller than the modulus.
To enhance the security of RSA you should use a padding scheme as defined in PKCS#1. Depending on the scheme you choose, the maximum message size can be significantly smaller than the modulus.
http://en.wikipedia.org/wiki/PKCS1


Are RSA signatures unique?

I want to know if RSA signatures are unique for given data.
Suppose I have a "hello" string. The method of computing the RSA signature is first to get the SHA-1 digest (these are, I know, unique for the data), then add a header with the OID and the padding scheme mentioned, and do some mathematical jiggle to give the signature.
Now, assuming the padding is the same, will the signatures generated by OpenSSL and Bouncy Castle be the same?
If yes, my only fear is: won't it be easy to get back the "text"/data?
I actually tried to do an RSA signature of some data, and the signatures from OpenSSL and BC were different. I repeated it but got the same signature again and again from each of them. I realized that the two signatures were different because of the difference in padding. However, I am still not sure why each library's signature is the same every time I repeat the operation. Can somebody please give an easy explanation?
The "usual" padding scheme, described in PKCS#1 as the "old-style, v1.5" padding, is deterministic. It works like this:
The data to sign is hashed (e.g. with SHA-1).
A fixed header is added; that header is actually an ASN.1 structure which identifies the hash function which was just used to process the data.
Padding bytes are added (on the left): 0x00, then 0x01, then some 0xFF bytes, then 0x00. The number of 0xFF bytes is adjusted so that the resulting total length is exactly the byte length of the modulus (i.e. 128 bytes for a 1024-bit RSA key).
The padded value is converted to an integer (which is less than the modulus), which goes through the modular exponentiation which is at the core of RSA. The result is converted back to a sequence of bytes, and that's the signature.
All these operations are deterministic; there is no randomness, hence it is normal and expected that signing the same data with the same key and the same hash function will yield the same signature every time.
However, there is a slight underspecification in the ASN.1-based fixed header. This is a structure which identifies the hash function, along with "parameters" for that hash function. Usual hash functions take no parameters, hence the parameters shall either be represented with a special "NULL" value (which takes a few bytes) or be omitted altogether: both representations are acceptable (although the former is supposedly preferred). So the net effect is that there are two versions of the "fixed header" for a given hash function. OpenSSL and Bouncy Castle do not use the same header. However, signature verifiers are supposed to accept both.
PKCS#1 also describes a newer padding scheme, called PSS, which is more complex but with a stronger security proof. PSS includes a bunch of random bytes, so you will get a distinct signature every time.
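Both behaviours are easy to observe. A small sketch (Python, using the third-party cryptography package, with SHA-256 standing in for the hash): the v1.5 signature repeats byte for byte, while PSS yields a different signature on every run.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data = b"hello"

# PKCS#1 v1.5 is deterministic: same key + same data -> identical signature.
s1 = key.sign(data, padding.PKCS1v15(), hashes.SHA256())
s2 = key.sign(data, padding.PKCS1v15(), hashes.SHA256())
assert s1 == s2

# PSS mixes in random salt bytes -> a fresh signature every time.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
assert key.sign(data, pss, hashes.SHA256()) != key.sign(data, pss, hashes.SHA256())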
Signatures are not a privacy mechanism; it's not considered a problem if you can get the plaintext back out. If your message must be kept secret, then encrypt as well as sign.
Nevertheless, remember that RSA signatures are created using a signer's private key. Given such a signature, you can use the signer's public key to "undo" the RSA transform (raise the message's signature to e, mod n) and get out the SHA1 or other hash value that was provided as its input. You still can't undo the hash function to get the input plaintext corresponding to a signature that has become detached from its message.
RSA for encryption is a different matter. Padding methods for encryption here do include random data in order to defeat traffic analysis.
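That "undoing" can be seen directly. A sketch (Python, using the third-party cryptography package): raising a v1.5 signature to e mod n recovers the padded block, whose trailing bytes are the hash of the signed message.

import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"hello"
sig = key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# "Undo" the RSA transform with the public key: sig^e mod n.
pub = key.public_key().public_numbers()
m = pow(int.from_bytes(sig, "big"), pub.e, pub.n)
padded = m.to_bytes(256, "big")   # 00 01 FF..FF 00 || DigestInfo(hash)

# The message hash sits at the very end of the recovered block --
# but there is still no way back from that hash to the message itself.
assert padded.endswith(hashlib.sha256(message).digest())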
This is why you add a salt/initialisation vector on top of your key. That way it shouldn't be possible to tell which records came from the same plaintext.

crypto api - block mode encryption determining input byte count

I'm trying to encrypt some data using a public key derived from the exchange key pair made with the CALG_RSA_KEYX key type. I determined the block size was 512 bits using CryptGetKeyParam with KP_BLOCKLEN. It seems the maximum number of bytes I can feed CryptEncrypt is 53 (424 bits), for which I get an encrypted length of 64 back. How can I determine how many bytes I can feed into CryptEncrypt? If I feed in more than 53 bytes, the call fails.
RSA using the usual PKCS#1 v1.5 mode can encrypt a message of at most k-11 bytes, where k is the length of the modulus in bytes. So a 512-bit key can encrypt up to 53 bytes and a 1024-bit key can encrypt up to 117 bytes.
RSA using OAEP can encrypt a message of up to k-2*hLen-2 bytes, where k is the modulus byte-length and hLen is the length of the output of the underlying hash function. So using SHA-1, a 512-bit key can encrypt up to 22 bytes and a 1024-bit key can encrypt up to 86 bytes.
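Both limits are simple arithmetic over the modulus length; a quick sketch reproducing the numbers above:

def max_pkcs1_v15(modulus_bits):
    # PKCS#1 v1.5 encryption padding costs at least 11 bytes.
    return modulus_bits // 8 - 11

def max_oaep(modulus_bits, hash_len=20):
    # OAEP costs 2 * hash-length + 2 bytes; hash_len = 20 for SHA-1.
    return modulus_bits // 8 - 2 * hash_len - 2

for bits in (512, 1024):
    print(bits, max_pkcs1_v15(bits), max_oaep(bits))
# 512 -> 53 and 22 bytes; 1024 -> 117 and 86 bytes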
You should not normally use an RSA key to encrypt your message directly. Instead you should generate a random symmetric key (e.g. an AES key), encrypt your message with the symmetric key, encrypt that key with the RSA key, and transmit both encryptions to the recipient. This is usually called hybrid encryption.
EDIT: Although this response is marked as accepted by the OP, please see Rasmus Faber's response instead, as it is a much better one. Posted 24 hours later, Rasmus's response corrects factual errors, in particular a mis-characterization of OAEP as a block cipher; OAEP is in fact a scheme used atop PKCS#1's encoding primitive for the purpose of key encryption. OAEP is more secure and puts an even bigger limit on the maximum message length; this limit depends on the chosen hash algorithm as well as the key length.
Another shortcoming of the following reply is its failure to stress that CALG_RSA_KEYX should be used exclusively for the key exchange, after which transmission of messages of any length can take place with whatever symmetric-key encryption algorithm is desired. The OP was aware of this; he was merely trying to "play" with the PK, and I did cover that much, albeit deep in the long remarks thread.
For the time being, I'm leaving this response here, for the record, and also as Mike D may want to refer to it, but do remark-me-in if you think that it would be better to remove it altogether; I don't mind doing so for the sake of clarity!
-mjv- Sept 29, 2009
Original reply:
Have you checked the error code from GetLastError(), following CryptEncrypt()'s false return?
I suspect it might be NTE_BAD_LEN, unless there's some other issue.
Maybe you can post the code that surrounds your call to CryptEncrypt().
Bingo, upon seeing the CryptEncrypt() call.
You do not seem to be using the RSAES w/ OAEP scheme, since you do not have the CRYPT_OAEP flag set. This OAEP scheme is a block cipher based upon RSAES. The latter encryption algorithm, however, can only encrypt messages slightly shorter than its key size (expressed in bytes). This is due to the minimum padding size defined in PKCS#1; such padding helps protect the algorithm from some key attacks (I think the ones based on known cleartext).
Therefore you have three options:
use the CRYPT_OAEP flag in the dwFlags parameter to CryptEncrypt()
extend the key size to, say, 1024 bits (if you have control over it; beware that longer keys will increase the time to encode/decode...)
limit yourself to clear-text messages shorter than 54 bytes
For documentation purposes, I'd like to make note of a few online resources:
- The RSA Labs web site, which is very useful in all things crypto.
- Wikipedia articles on the subject, which are also quite informative, easier to read, and yet quite factual (I think).
When in doubt, however, do consult a real crypto specialist, not someone like me :-)