Raw keys (RSA, AES) generation vs SSL/TLS performance - ssl

I am working on a small project that aims to design secure communication between two (or more) devices, A and B, with the following limitations:
Device A has very limited resources (e.g., a smart card).
The communication should perform a minimal number of encryption/decryption operations to establish a secure connection.
Bidirectional authentication is required: A should be 100% sure of B's identity and vice versa.
The technique should use public-key cryptography (e.g., RSA) and afterwards establish a shared key for a symmetric algorithm (e.g., AES).
I know that using certificates is much easier to manage and use. However, due to the limitations, I thought of using predefined RSA keys for both entities; afterwards the devices can negotiate a new shared AES key.
My question is about the validity of such a technique, and whether it would perform better than using SSL/TLS certificates, in terms of number of steps and resource usage. Moreover, it would be really helpful if someone has a numerical analysis of raw key generation (as in my example above) versus SSL/TLS certificates.

My question is about the validity of such technique and would it have better performance than using SSL/TLS certificates
Parsing certificates is tricky because it involves a lot of ifs and elses, but even embedded CPUs can handle it. If you fully want to parse certificates, you could also look at "card verifiable certificates", which are relatively simplistic certificates designed for verification on smart cards (with similarly limited resources, such as 8-10 KiB of RAM or less).
Besides that, verification is a (relatively efficient) RSA verify operation. You can, however, avoid it by simply pinning B's certificate: for instance, test the certificate by computing its fingerprint with a cryptographic hash and comparing it against a fingerprint you stored beforehand.
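A minimal sketch of such a pinning check, using Python's standard library (the certificate bytes and function names here are stand-ins, not a real API):

```python
import hashlib
import hmac

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def is_pinned(cert_der: bytes, stored_fingerprint: str) -> bool:
    # compare_digest avoids leaking how much of the fingerprint matched.
    return hmac.compare_digest(fingerprint(cert_der), stored_fingerprint)

# Provisioning time: compute B's fingerprint once and store it on A.
cert = b"\x30\x82\x01\x0a"   # stand-in for B's real DER certificate
pin = fingerprint(cert)

# Connection time: A recomputes the fingerprint of the presented
# certificate and compares it against the stored value.
assert is_pinned(cert, pin)
assert not is_pinned(b"some other certificate", pin)
```

This replaces an RSA signature verification with a single hash over the certificate bytes, which is much cheaper on a constrained device.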
As for key generation: it of course doesn't matter for the RSA key-pair generation itself. For master-secret agreement (and subsequent session-key derivation), TLS has multiple options, such as key agreement of the master secret plus RSA authentication, or RSA encryption of the master secret. How this compares to your scheme depends on the proprietary protocol.
There are of course also symmetric options available for TLS, such as PSK (pre-shared key) and SRP. These also create session keys, but both devices need to hold only one shared key (or other token).
TLS has many options and doesn't necessarily introduce much overhead. The problem is that if you try to create your own protocol, you're bound to fail; with your current knowledge, failure is almost certain. So I'd consider (variations of) TLS before exploring anything else. If you're up to it, you can consider high-performance suites such as ChaCha20+Poly1305, possibly even paired with a Curve25519 self-signed certificate.

Related

Would there be a compelling reason for implementing integrity check in a file transfer protocol, if the channel uses TLS?

I am developing a client server pair of applications to transfer files by streaming bytes over TCP/IP and the channel would use TLS always.
(Note: Due to certain OS related limitations SFTP or other such secure file transfer protocols cannot be used)
The application level protocol involves minimum but sufficient features to get the file to the other side.
I need to decide if the application-level protocol needs to implement an integrity check (e.g., MD5).
Since TLS guarantees integrity, would this be redundant?
The use of TLS can provide you with some confidence that the data has not been changed (intentionally or otherwise) in transit, but not necessarily that the file that you intended to send is identical to the one that you receive.
There are plenty of other opportunities for the file to be corrupted/truncated/modified (such as when it's being read from the disk/database by the sender, or when it's written to disk by the receiver). Implementing your own integrity checking would help protect against those cases.
In terms of how you do the checking: if you're worried about malicious tampering, then you should be checking a cryptographic signature (using something like GPG), rather than just a hash of the file. If you're going to use a hash, then it's generally recommended to use a more modern algorithm such as SHA-256 rather than the (legacy) MD5 algorithm, although most of the issues with MD5 won't affect you if you're only concerned about accidental corruption.
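A chunked SHA-256 file digest is a minimal sketch of such a check; the sender computes it after reading the file from disk, transmits it alongside the file, and the receiver recomputes it after writing its copy:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 65536) -> str:
    """Hash a file in fixed-size chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel reads until f.read() returns b"" (EOF).
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing the two hex digests catches corruption introduced anywhere between the sender's read and the receiver's write, which TLS alone cannot see.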

in TLS/SSL, what's the purpose of staging from premaster secret to master secret and then to encryption keys? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
Why don't client and server just exchange the encryption keys directly using public key encryption or DH key exchange protocol? What the rationale behind that or what the problem it is to solve?
It's helpful to understand how keys are derived in modern SSL/TLS. Things were a bit different in early SSL (like SSLv2).
The master_secret is a common secret shared by the client and server. It is used to derive session-specific keys. The master_secret is derived from other parameters (discussed below).
Six secrets are derived from the master_secret:
Client encryption key
Server encryption key
Client MAC key
Server MAC key
Client IV
Server IV
Assuming that neither eNULL nor aNULL is used, both the client and server use an encryption key for confidentiality and an HMAC key for authenticity. Each (client and server) has its own key.
While IVs are usually considered public, SSL/TLS treats them as secret parameters.
From RFC 5246, the master_secret is derived as:
master_secret = PRF(pre_master_secret, "master secret",
ClientHello.random + ServerHello.random)
[0..47];
The pre_master_secret comes from key agreement or key transport. If it comes from key agreement, then the pre_master_secret is the result of Diffie-Hellman key agreement. In an agreement scheme, both parties contribute to the derived secret.
If the pre_master_secret comes from a key transport scheme, then the client encrypts a random value under the server's public key. In this scheme, only the client provides keying material. When only one party provides the key, it's called a key transport scheme.
What the rationale behind that or what the problem it is to solve?
The first stage, where the pre_master_secret is used, provides a "pluggable" architecture for key agreement or key transport.
The second stage, where the master_secret is derived, ensures both the client and server contribute to the keying material.
In addition, there's a label - "master secret" - that helps ensure the derivation is unique even if the same parameters are used for something else (assuming a different derivation uses a different label). The use of labels is discussed in SP800-56 and SP800-57 (among other places).
The hash used in the second stage, where the master_secret is derived, performs two functions. First, it performs a mixing function. Second, it maps elements in the group used by key exchange or key agreement into random bit patterns.
The final stage is the derivation of the 6 keys from master_secret. According to 6.3. Key Calculation, the derivation does not provide key independence. It just ensures interoperability:
To generate the key material, compute
key_block = PRF(SecurityParameters.master_secret,
"key expansion",
SecurityParameters.server_random +
SecurityParameters.client_random);
until enough output has been generated. Then, the key_block is
partitioned as follows:
client_write_MAC_key[SecurityParameters.mac_key_length]
server_write_MAC_key[SecurityParameters.mac_key_length]
client_write_key[SecurityParameters.enc_key_length]
server_write_key[SecurityParameters.enc_key_length]
client_write_IV[SecurityParameters.fixed_iv_length]
server_write_IV[SecurityParameters.fixed_iv_length]
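The two PRF stages quoted above can be sketched with the TLS 1.2 PRF (HMAC-SHA256-based P_hash from RFC 5246; earlier TLS versions split the input between MD5 and SHA-1 instead). The inputs and the key/IV lengths below are placeholders, assuming AES-128-CBC with HMAC-SHA1:

```python
import hmac
import hashlib

def p_hash(secret: bytes, seed: bytes, length: int) -> bytes:
    """P_SHA256 data expansion from RFC 5246, section 5."""
    out = b""
    a = seed                                              # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()  # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    """TLS 1.2 PRF: P_SHA256(secret, label + seed)."""
    return p_hash(secret, label + seed, length)

# Placeholder inputs; in a real handshake these come from the key
# exchange and the hello messages.
pre_master = b"\x00" * 48
client_random = b"\x01" * 32
server_random = b"\x02" * 32

# Stage 2: derive the 48-byte master secret.
master_secret = prf(pre_master, b"master secret",
                    client_random + server_random, 48)

# Stage 3: expand into the key block. Note the seed order flips to
# server_random + client_random for key expansion.
mac_len, key_len, iv_len = 20, 16, 16   # HMAC-SHA1 / AES-128-CBC
key_block = prf(master_secret, b"key expansion",
                server_random + client_random,
                2 * (mac_len + key_len + iv_len))

# Partition the key block into the six secrets, in RFC order.
i = 0
client_mac_key, i = key_block[i:i + mac_len], i + mac_len
server_mac_key, i = key_block[i:i + mac_len], i + mac_len
client_key, i = key_block[i:i + key_len], i + key_len
server_key, i = key_block[i:i + key_len], i + key_len
client_iv, i = key_block[i:i + iv_len], i + iv_len
server_iv = key_block[i:i + iv_len]
```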
The steps above are a solid design. However, when used in SSL/TLS, there are lots of devils running around. For example, the above is not enough when a feature like renegotiation is added (triple handshake attack ftw!).
I believe the reason is that if the client simply selected a random number to use as the symmetric key and encrypted it using the server's public key to send to the server, there would potentially be a vulnerability if common clients used an imperfect random number generator, leading to predictable symmetric keys and making the communications much easier to break.
The actual key exchange protocol ensures that the symmetric key contains randomized elements from both the client and the server. This means that even if the client has an imperfect random number generator, the communications are still protected if the server's random number generator is cryptographically strong. Even if both the client's and the server's random number generators have weaknesses, the attack against the combination of the two is likely to be more expensive than if only the client's random number generator were used.
The rationale is that if the secret key is never exchanged, it can never be intercepted. Key negotiation algorithms are known to be secure. An encryption is only as secure as its key.
Pre-master secret to master secret:
a random value from one side may not be truly random, but the three random values contributed by the two sides together can be.
Master secret to 6 keys:
2 for encryption, 2 for message integrity checks, and 2 as IVs to help prevent CBC attacks.

Between tls_rsa_with_aes_256_cbc_sha and tls_rsa_with_aes_128_cbc_sha256 which one is more secure?

Which of the following cipher suites is more secure?
tls_rsa_with_aes_256_cbc_sha
tls_rsa_with_aes_128_cbc_sha256
AES-256 has nearly twice the security level of AES-128, all other things being equal. SHA-256 has nearly twice the security level of SHA-1 (which is 160 bits). SHA-256 provides 128 bits of security. SHA-1 should provide 80 bits of security, but its practical security is around 65 bits.
Security levels do not equate to "twice as strong" (thanks Iridium). AES-256 has a security level of 256 bits, while AES-128 has a security level of about 128 bits. That means AES-256 needs about 2^256/2 = 2^255 operations to brute force, while AES-128 needs about 2^128/2 = 2^127 operations to brute force.
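To make the "not twice as strong" point concrete, a quick arithmetic check:

```python
# Average brute-force work is half the keyspace.
aes128_work = 2**128 // 2    # 2**127 trial decryptions
aes256_work = 2**256 // 2    # 2**255 trial decryptions

# "Nearly twice the security level" means the exponent doubles, so the
# work grows by a factor of 2**128, not by a factor of 2.
assert aes256_work // aes128_work == 2**128
```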
The use of the hashes in TLS is different than a long term signature, like the signature on a document or on a certificate. General signatures on documents and certificates need to survive for years or decades. In TLS, the signature over a record only needs to withstand as long as it takes for a packet to timeout on the network, which is a matter of minutes.
In practice for TLS, there's no effective difference between them. They are both hard, and your attacker will circumvent the encryption and try to attack you in other ways. He or she will find a weaker point in the system.
For example, the attacker might try to obtain the key used for key transport (that's the RSA in tls_rsa_*) by planting malware on the server through an injection. Then the attacker can simply calculate the 6 keys used in a TLS connection from the premaster secret that was transported and recovered under the compromised key.
As another example, the US government demanded Lavabit's private key so they could do the same. The US government could skirt the "plant malware" step through the legal system.

Authentication process in ARD

I am working on a third party client for Apple Remote Desktop. But I am stuck on its authentication process.
From Remote Desktop manual:
Authentication to Apple Remote Desktop clients uses an
authentication method which is based on a Diffie-Hellman Key
agreement protocol that creates a shared 128-bit key. This shared
key is used to encrypt both the name and password using the Advanced
Encryption Standard (AES). The Diffie-Hellman Key agreement protocol
used in ARD 2 is very similar to the Diffie-Hellman Key agreement
protocol used in personal file sharing, with both of them using a
512-bit prime for the shared key calculation. With Remote Desktop 2,
keystrokes and mouse events are encrypted when you control Mac OS X
client computers. This information is encrypted using the Advanced
Encryption Standard (AES) with the 128-bit shared key that was
derived during authentication.
Does anyone know where I can find a bit more technical information about the Authentication process in ARD? Such as which AES mode it uses and what initialization vector. Thanks
I ran into this exact problem recently. I couldn't find any detailed information beyond the high-level overview you mention, but I was able to figure out the technique based on my study of this C code from the gtk-vnc open source project. Basically, the steps are as follows:
Read the authentication material from the socket: a two-byte generator value, a two-byte key-length value, the prime modulus (keyLength bytes), and the peer's generated public key (keyLength bytes).
Generate your own Diffie-Hellman public-private key pair.
Perform Diffie-Hellman key agreement, using the generator (g), prime (p), and the peer's public key. The output will be a shared secret known to both you and the peer.
Perform an MD5 hash of the shared secret. This 128-bit (16-byte) value will be used as the AES key.
Pack the username and password into a 128-byte plaintext "credentials" structure: { username[64], password[64] }. Null-terminate each. Fill the unused bytes with random characters so that the encryption output is less predictable.
Encrypt the plaintext credentials with the 128-bit MD5 hash from step 4, using the AES 128-bit symmetric cipher in electronic codebook (ECB) mode. Use no further padding for this block cipher.
Write the ciphertext from step 6 to the stream. Write your generated DH public key to the stream.
Check for authentication pass/fail as usual.
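Steps 2-5 can be sketched as follows. The DH parameters here are toy values chosen for illustration; a real ARD handshake reads g, p, and the peer's public key from the socket, and the final AES-128-ECB encryption of the credential block (step 6) needs a third-party AES implementation (e.g., the `cryptography` package) and is not shown:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters for demonstration only; ARD supplies a
# 512-bit prime over the wire.
p = 2**127 - 1   # a Mersenne prime
g = 5

def dh_keypair():
    """Step 2: generate a DH private exponent and public key."""
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

def ard_aes_key(my_priv: int, peer_pub: int) -> bytes:
    """Steps 3-4: DH agreement, then MD5 of the shared secret -> AES-128 key."""
    shared = pow(peer_pub, my_priv, p)
    # Serialize the shared secret as a big-endian byte string of the key
    # length before hashing.
    secret_bytes = shared.to_bytes((p.bit_length() + 7) // 8, "big")
    return hashlib.md5(secret_bytes).digest()   # 16 bytes = AES-128 key

def pack_credentials(username: str, password: str) -> bytes:
    """Step 5: { username[64], password[64] }, null-terminated, random fill."""
    def field(s: str) -> bytes:
        raw = s.encode() + b"\x00"
        return raw + secrets.token_bytes(64 - len(raw))
    return field(username) + field(password)

# Both sides derive the same 128-bit AES key from the exchange.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
assert ard_aes_key(a_priv, b_pub) == ard_aes_key(b_priv, a_pub)
```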
I don't have an Objective-C implementation to share, but I have implemented this Java version which you may find useful to reference.
Not sure if anyone still needs this, but here's an Objective-C implementation of the ARD authentication process that I cobbled together a few months back and released on GitHub a few days ago.
It's based loosely on David's (thanks!) Java implementation but uses OpenSSL's encryption functions for the MD5 hashing and AES 128 encryption steps.
There's also the TinyVNC library that also implements ARD authentication, but using the Crypto++ library for encryption functions instead.

How does browser generate symmetric key during SSL handshake

I have a small confusion on SSL handshake between browser and server in a typical https web scenario:
What I have understood so far is that in the process of the SSL handshake, the client (the browser in this case) encrypts a randomly selected symmetric key with the public key (from the certificate received from the server). This is sent back to the server, and the server decrypts the symmetric key with its private key. This symmetric key is then used for the rest of the session to encrypt/decrypt the messages at both ends. One of the main reasons given for doing so is faster encryption with symmetric keys.
Questions
1) How does the browser pick and generate this "randomly" selected symmetric key?
2) Do developers (and/or browser users) have control over this mechanism of generating symmetric keys?
Here is a very good description of how HTTPS connection establishment works. I will provide a summary of how the session key is acquired by both parties (client and server); this process is known as a "key agreement protocol". Here is how it works:
The client generates the 48 byte “pre-master secret” random value.
The client pads these bytes with random data to make the input equal to 128 bytes.
The client encrypts it with server's public key and sends it to the server.
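Step 1 can be sketched as follows (the padding and RSA encryption in steps 2-3 are PKCS#1 v1.5 operations performed by the RSA implementation and are not shown here):

```python
import os

# The 48-byte pre-master secret. In TLS, its first two bytes encode the
# client's offered protocol version (0x03 0x03 for TLS 1.2, per RFC 5246);
# the remaining 46 bytes are freshly generated random data.
pre_master_secret = bytes([0x03, 0x03]) + os.urandom(46)
```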
Then the master secret is produced by both parties in the following manner:
master_secret = PRF(
pre_master_secret,
"master secret",
ClientHello.random + ServerHello.random
)
The PRF is the “Pseudo-Random Function” that’s also defined in the
spec and is quite clever. It combines the secret, the ASCII label, and
the seed data we give it by using the keyed-Hash Message
Authentication Code (HMAC) versions of both MD5 and SHA-1 hash
functions. Half of the input is sent to each hash function. It’s
clever because it is quite resistant to attack, even in the face of
weaknesses in MD5 and SHA-1. This process can feedback on itself and
iterate forever to generate as many bytes as we need.
Following this procedure, we obtain a 48 byte “master secret”.
Quoting from this great video on networking, minute 1:18:07:
Well where do you get randomness on your computer because your
computer is a deterministic device?
Well, it collects entropy like your mouse movements, your keystroke
movements and the timing of your hard disk; it tries to collect
all that randomness from the universe into a pool so that it can generate random keys just for one connection [this session]. And if that randomness is broken, and it's happened many times
in the last 30 years, then none of this works. If the adversary can
figure out what your randomness could be, then they can guess your keys. So use good randomness.
Note: the keys are created per session.