I am working on a third-party client for Apple Remote Desktop, but I am stuck on its authentication process.
From the Remote Desktop manual:
Authentication to Apple Remote Desktop clients uses an
authentication method which is based on a Diffie-Hellman Key
agreement protocol that creates a shared 128-bit key. This shared
key is used to encrypt both the name and password using the Advanced
Encryption Standard (AES). The Diffie-Hellman Key agreement protocol
used in ARD 2 is very similar to the Diffie-Hellman Key agreement
protocol used in personal file sharing, with both of them using a
512-bit prime for the shared key calculation. With Remote Desktop 2,
keystrokes and mouse events are encrypted when you control Mac OS X
client computers. This information is encrypted using the Advanced
Encryption Standard (AES) with the 128-bit shared key that was
derived during authentication.
Does anyone know where I can find more technical information about the authentication process in ARD, such as which AES mode it uses and what initialization vector? Thanks.
I ran into this exact problem recently. I couldn't find any detailed information beyond the high-level overview you mention, but I was able to figure out the technique based on my study of this C code from the gtk-vnc open source project. Basically, the steps are as follows:
Read the authentication material from the socket: a two-byte generator value, a two-byte key-length value, the prime modulus (keyLength bytes), and the peer's generated public key (keyLength bytes).
Generate your own Diffie-Hellman public-private key pair.
Perform Diffie-Hellman key agreement, using the generator (g), prime (p), and the peer's public key. The output will be a shared secret known to both you and the peer.
Perform an MD5 hash of the shared secret. This 128-bit (16-byte) value will be used as the AES key.
Pack the username and password into a 128-byte plaintext "credentials" structure: { username[64], password[64] }. Null-terminate each. Fill the unused bytes with random characters so that the encryption output is less predictable.
Encrypt the plaintext credentials with the 128-bit MD5 hash from step 4, using the AES-128 symmetric cipher in electronic codebook (ECB) mode. Use no further padding for this block cipher. (A Java sketch of steps 2-6 follows this list.)
Write the ciphertext from step 6 to the stream. Write your generated DH public key to the stream.
Check for authentication pass/fail as usual.
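To make those steps concrete, here is a minimal, untested Java sketch of steps 2-6 using only the JDK's built-in providers. The parameter names (generator, prime, peerPublicKey) stand in for the values parsed from the socket in step 1, and the credential packing assumes the username and password each fit in 63 bytes. MD5 and AES in ECB mode are weak by modern standards, but they are what the protocol specifies.

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.*;
    import javax.crypto.Cipher;
    import javax.crypto.KeyAgreement;
    import javax.crypto.spec.*;

    public class ArdAuthSketch {
        // generator, prime and peerPublicKey are parsed from the socket (step 1).
        static byte[] encryptCredentials(BigInteger generator, BigInteger prime,
                                         BigInteger peerPublicKey,
                                         String username, String password) throws Exception {
            // Step 2: generate our own DH key pair over the server's (p, g).
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("DH");
            kpg.initialize(new DHParameterSpec(prime, generator));
            KeyPair ourPair = kpg.generateKeyPair();

            // Step 3: DH key agreement with the peer's public key.
            PublicKey peerKey = KeyFactory.getInstance("DH")
                    .generatePublic(new DHPublicKeySpec(peerPublicKey, prime, generator));
            KeyAgreement ka = KeyAgreement.getInstance("DH");
            ka.init(ourPair.getPrivate());
            ka.doPhase(peerKey, true);
            // If the JDK strips leading zero bytes, left-pad the secret to keyLength bytes.
            byte[] sharedSecret = ka.generateSecret();

            // Step 4: MD5 of the shared secret is the 128-bit AES key.
            byte[] aesKey = MessageDigest.getInstance("MD5").digest(sharedSecret);

            // Step 5: pack { username[64], password[64] }, null-terminated,
            // with the remaining bytes filled with random data.
            byte[] creds = new byte[128];
            new SecureRandom().nextBytes(creds);
            byte[] u = username.getBytes(StandardCharsets.UTF_8);
            byte[] p = password.getBytes(StandardCharsets.UTF_8);
            System.arraycopy(u, 0, creds, 0, u.length);
            creds[u.length] = 0;
            System.arraycopy(p, 0, creds, 64, p.length);
            creds[64 + p.length] = 0;

            // Step 6: AES-128/ECB, no padding (the input is exactly eight blocks).
            Cipher aes = Cipher.getInstance("AES/ECB/NoPadding");
            aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey, "AES"));
            return aes.doFinal(creds);
        }
    }

Per step 7, the caller would then write this ciphertext, followed by our DH public key as an unsigned keyLength-byte value, to the stream.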
I don't have an Objective-C implementation to share, but I have implemented this Java version, which you may find useful as a fuller reference.
Not sure if anyone still needs this, but here's an Objective-C implementation of the ARD authentication process that I cobbled together a few months back and released on GitHub a few days ago.
It's based loosely on David's (thanks!) Java implementation but uses OpenSSL's encryption functions for the MD5 hashing and AES 128 encryption steps.
There's also the TinyVNC library, which implements ARD authentication as well, but uses the Crypto++ library for the encryption functions instead.
Related
I am working on a small project that aims to design secure communication between two or more devices, A and B, with the following limitations:
Device A has very limited resources (e.g., a smart card).
The communication should require a minimal number of encryption/decryption operations to establish a secure connection.
Bidirectional authentication is required, in which A should be 100% sure of B's identity and vice versa.
The technique should use public-key cryptography (e.g., RSA) and afterwards establish a shared key for a symmetric algorithm (e.g., AES).
I know that using certificates is much easier to manage and use. However, due to the limitations, I thought of using predefined RSA keys for both of the entities, after which the devices can negotiate a new shared key for AES.
My question is about the validity of such a technique, and whether it would have better performance than using SSL/TLS certificates, in terms of number of steps and resource usage. Moreover, it would be really helpful if someone has a numerical analysis of raw key usage (as in my example above) versus SSL/TLS certificates.
My question is about the validity of such a technique, and whether it would have better performance than using SSL/TLS certificates
Parsing certificates is tricky because it involves a lot of ifs/elses, but even embedded CPUs are able to do it. If you want to fully parse certificates, you could also look at "card verifiable certificates", which are relatively simplistic certificates created for verification on smart cards (with similarly limited resources, such as 8-10 KiB of RAM or less).
Besides that, verification is a (relatively efficient) RSA verification operation. You can, however, avoid it by simply pinning the certificate from B: for instance, test the certificate by computing its fingerprint with a cryptographic hash and comparing it against a fingerprint you stored beforehand.
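As a rough sketch of that pinning approach (the choice of Java, SHA-256, and the method names are mine; the expected fingerprint is whatever A stored at provisioning time):

    import java.io.ByteArrayInputStream;
    import java.security.MessageDigest;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    public class PinnedCertCheck {
        // expectedFingerprint is the SHA-256 hash of B's certificate,
        // stored on device A ahead of time.
        static boolean matchesPin(byte[] derCertificate, byte[] expectedFingerprint) throws Exception {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) cf.generateCertificate(
                    new ByteArrayInputStream(derCertificate));
            byte[] fingerprint = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
            // MessageDigest.isEqual gives a constant-time comparison.
            return MessageDigest.isEqual(fingerprint, expectedFingerprint);
        }
    }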
As for key generation: the RSA key pair generation itself will of course be the same either way. For the master secret agreement (and the session key derivation that follows) TLS has multiple options, such as key agreement for the master secret plus RSA authentication, or RSA encryption of the master secret. How this compares to your scheme depends on your proprietary protocol.
There are of course also symmetric options available for TLS, such as PSK (pre-shared key) and SRP. These also create session keys, but both devices only have to hold one shared key (or other token).
TLS has many options and doesn't necessarily introduce too much overhead. The problem is that if you try to create your own protocol, you're bound to fail; with your current knowledge, failure is almost certain. So I'd consider variations of TLS before exploring anything else. If you're up to it, you can consider high-performance suites such as ChaCha20+Poly1305, possibly even paired with a Curve25519 self-signed certificate.
Why don't the client and server just exchange the encryption keys directly, using public-key encryption or the DH key exchange protocol? What is the rationale behind that, or what problem does it solve?
It's helpful to understand how keys are derived in modern SSL/TLS. Things were a bit different in early SSL (like SSLv2).
The master_secret is a common secret shared by the client and server. It is used to derive session-specific keys. The master_secret itself is derived from other parameters (discussed below).
There are six secrets derived from the master_secret:
Client encryption key
Server encryption key
Client MAC key
Server MAC key
Client IV
Server IV
Assuming that neither eNULL nor aNULL is used, both the client and server use an encryption key for confidentiality and an HMAC key for authenticity. Each party (client and server) has its own keys.
While IVs are usually considered public, SSL/TLS treats them as secret parameters.
From RFC 5246, the master_secret is derived as:
    master_secret = PRF(pre_master_secret, "master secret",
                        ClientHello.random + ServerHello.random)[0..47];
The pre_master_secret comes from key agreement or key transport. If it comes from key agreement, then the pre_master_secret is the result of Diffie-Hellman key agreement. In an agreement scheme, both parties contribute to the derived secret.
If the pre_master_secret comes from a key transport scheme, then the client encrypts a random value under the server's public key. In this scheme, only the client provides keying material. When only one party provides the key, it's called a key transport scheme.
What is the rationale behind that, or what problem does it solve?
The first stage, where the pre_master_secret is used, provides a "pluggable" architecture for key agreement or key transport.
The second stage, where the master_secret is derived, ensures both the client and server contribute to the keying material.
In addition, there's a label - "master secret" - that helps ensure the derivation is unique even if the same parameters are used for something else (assuming a different derivation uses a different label). The use of labels is discussed in SP800-56 and SP800-57 (among other places).
The hash used in the second stage, where the master_secret is derived, performs two functions. First, it performs a mixing function. Second, it maps elements in the group used by key exchange or key agreement into random bit patterns.
The final stage is the derivation of the six keys from the master_secret. According to section 6.3, Key Calculation, the derivation does not provide key independence; it just ensures interoperability:
To generate the key material, compute

    key_block = PRF(SecurityParameters.master_secret,
                    "key expansion",
                    SecurityParameters.server_random +
                    SecurityParameters.client_random);

until enough output has been generated. Then, the key_block is
partitioned as follows:

    client_write_MAC_key[SecurityParameters.mac_key_length]
    server_write_MAC_key[SecurityParameters.mac_key_length]
    client_write_key[SecurityParameters.enc_key_length]
    server_write_key[SecurityParameters.enc_key_length]
    client_write_IV[SecurityParameters.fixed_iv_length]
    server_write_IV[SecurityParameters.fixed_iv_length]
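To make the two PRF computations above concrete, here is a minimal Java sketch of the TLS 1.2 PRF (P_SHA256, per RFC 5246) together with both derivations. The key_block length shown assumes a cipher suite like AES_128_CBC_SHA (20-byte MAC keys, 16-byte encryption keys, 16-byte IVs); other suites use other lengths.

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class Tls12Prf {
        // PRF(secret, label, seed) = P_SHA256(secret, label + seed), RFC 5246 section 5.
        static byte[] prf(byte[] secret, String label, byte[] seed, int outLen) throws Exception {
            Mac hmac = Mac.getInstance("HmacSHA256");
            hmac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] labelSeed = concat(label.getBytes(StandardCharsets.US_ASCII), seed);
            byte[] a = labelSeed;                       // A(0) = label + seed
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            while (out.size() < outLen) {
                a = hmac.doFinal(a);                    // A(i) = HMAC(secret, A(i-1))
                hmac.update(a);
                out.write(hmac.doFinal(labelSeed));     // HMAC(secret, A(i) + label + seed)
            }
            return Arrays.copyOf(out.toByteArray(), outLen);
        }

        static byte[] concat(byte[] x, byte[] y) {
            byte[] r = Arrays.copyOf(x, x.length + y.length);
            System.arraycopy(y, 0, r, x.length, y.length);
            return r;
        }

        public static void main(String[] args) throws Exception {
            byte[] preMaster = new byte[48], clientRandom = new byte[32], serverRandom = new byte[32];
            // master_secret = PRF(pre_master_secret, "master secret",
            //                     client_random + server_random)[0..47]
            byte[] master = prf(preMaster, "master secret", concat(clientRandom, serverRandom), 48);
            // key_block = PRF(master_secret, "key expansion", server_random + client_random)
            // (note the reversed random order), then partitioned as quoted above.
            byte[] keyBlock = prf(master, "key expansion", concat(serverRandom, clientRandom),
                                  2 * 20 + 2 * 16 + 2 * 16);
            System.out.println(keyBlock.length);  // 104 bytes for this suite
        }
    }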
The steps above are a solid design. However, when used in SSL/TLS, there are lots of devils running around. For example, the above is not enough when a feature like renegotiation is added (triple handshake attack ftw!).
I believe the reason is that if the client simply selected a random number to use as the symmetric key and encrypted it using the server's public key to send to the server, there would potentially be a vulnerability if common clients used an imperfect random number generator, leading to predictable symmetric keys and making the communications much easier to break.
The actual key exchange protocol ensures that the symmetric key contains randomized elements from both the client and the server. This means that even if the client has an imperfect random number generator, the communications are still protected if the server's random number generator is cryptographically strong. Even if both the client's and the server's random number generators have weaknesses, the attack against the combination of the two is likely to be more expensive than if only the client's random number generator were used.
The rationale is that if the secret key is never exchanged, it can never be intercepted. Key negotiation algorithms are known to be secure, and an encryption scheme is only as secure as its key.
Pre-master key to master key:
One side's random value may not be truly random, but combining three random values from the two sides is far more likely to be.
Master key to six keys:
Two for encryption, two for message integrity checks, and two IVs for preventing attacks on CBC mode.
I know that with RSA there are a few ways you can encrypt and decrypt data, meaning you can encrypt with either the public or the private key, and then decrypt with the other.
With Triple Des, do you need both key and iv to decrypt? Or can you do it somehow with just a key? (public key?)
Being a symmetric algorithm, DES (and 3DES) uses a shared secret key. It doesn't have public keys.
And the IV must be known to the decryptor if an IV was used during encryption.
RSA is a public-key (or asymmetric) encryption algorithm – which means that there are key pairs of public and private keys, where you encrypt with one of them and decrypt with the other.
DES and Triple-DES are block ciphers. You use them together with a mode of operation to encrypt or decrypt a message – you use the same key for encryption as for decryption. This is known as a symmetric algorithm.
Some modes of operation (all good ones) need an initialization vector, so that identical plaintexts don't lead to identical ciphertexts (and to avoid some other weaknesses as well). Normally this initialization vector should be sent/stored together with the ciphertext; it doesn't have to be secret. Depending on the mode of operation and the usage scenario, the IV should be used only once, be random, or be non-predictable.
Also, nowadays you should not use DES (its key size is too small to be secure). Triple-DES is okay, but much slower (and not more secure) than modern algorithms like AES.
3DES is no different than any other block cipher. If you are using a cipher mode which requires an IV, and you are not including the IV in the message header, you will need it to decrypt the message.
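A short Java sketch of that convention (the method names are mine; assuming DESede in CBC mode with the 8-byte IV carried as a cleartext message header):

    import java.security.SecureRandom;
    import java.util.Arrays;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.DESedeKeySpec;
    import javax.crypto.spec.IvParameterSpec;

    public class TripleDesIvDemo {
        // Encrypt: generate a random 8-byte IV and prepend it to the ciphertext.
        static byte[] encrypt(byte[] key24, byte[] plaintext) throws Exception {
            byte[] iv = new byte[8];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("DESede/CBC/PKCS5Padding");
            c.init(Cipher.ENCRYPT_MODE,
                   SecretKeyFactory.getInstance("DESede").generateSecret(new DESedeKeySpec(key24)),
                   new IvParameterSpec(iv));
            byte[] ct = c.doFinal(plaintext);
            byte[] out = Arrays.copyOf(iv, iv.length + ct.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return out;
        }

        // Decrypt: needs the same (secret) key plus the IV read from the header.
        static byte[] decrypt(byte[] key24, byte[] ivAndCiphertext) throws Exception {
            Cipher c = Cipher.getInstance("DESede/CBC/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE,
                   SecretKeyFactory.getInstance("DESede").generateSecret(new DESedeKeySpec(key24)),
                   new IvParameterSpec(Arrays.copyOf(ivAndCiphertext, 8)));
            return c.doFinal(ivAndCiphertext, 8, ivAndCiphertext.length - 8);
        }
    }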
I understand that a unique IV is important in encryption to prevent attacks like frequency analysis. The question For AES CBC encryption, what's the importance of the IV? has a pretty clear answer explaining the importance of the IV.
Would there be any security holes in sending the IV in clear text? Or would it need to be encrypted with the same public/private key that was used to send the symmetric key?
If the IV needs to be sent encrypted, then why not generate a new symmetric key each time and consider the IV as part of the key? Is it that generating a symmetric key is too costly? Or is it to minimize the amount of data transported?
The top answer to Secret vs. Non-secret Initialization Vector states:
A typical key establishment protocol will result in both involved parties computing a piece of data which they, but only they, both know. With Diffie-Hellman (or any elliptic-curve variant thereof), the said shared piece of data has a fixed length and they have no control over its value (they just both get the same seemingly random sequence of bits).
How do two entities derive the "same seemingly random sequence of bits" without having a shared piece of information? Is the assumption that the shared information was sent encrypted? And, if the shared information is sent encrypted, why not just send the IV encrypted?
Because an application needs to transport the symmetric key securely, it would seem that separating the IV from the key itself is essentially an optimization. Or am I missing something?
There is no security hole by sending the IV in cleartext - this is similar to storing the salt for a hash in plaintext: As long as the attacker has no control over the IV/salt, and as long as it is random, there is no problem.
The main difference between initialization vector and key is that the key has to be kept secret, while the IV doesn't have to be - it can be readable by an attacker without any danger to the security of the encryption scheme in question.
The idea is that you can use the same key for several messages, only using different (random) initialization vectors for each, so relations between the plain texts don't show in the corresponding ciphertexts.
That said, if you are using a key agreement scheme like Diffie-Hellman, which gives you a new shared secret for each session anyway, you can also use it to generate the first initialization vector. This does not really give much of a security advantage compared to choosing the initialization vector directly and sending it with the message, but it saves some bits of bandwidth, and some bits of entropy from your random source. And it makes the IV a bit more random in case one of the partners has a bad randomness source (though DH is not really secure in that case either).
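A tiny Java sketch of that last idea (the labels and the use of SHA-256 are my own choices; a real design would use a proper KDF such as HKDF): hash the DH shared secret under two distinct labels to get both the AES key and the first IV.

    import java.security.MessageDigest;
    import java.util.Arrays;

    public class DhToKeyAndIv {
        // Derive an AES-128 key and a 16-byte IV from the DH shared secret by
        // hashing it under two distinct labels (a simple, illustrative KDF).
        static byte[][] deriveKeyAndIv(byte[] dhSharedSecret) throws Exception {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            sha256.update("encryption key".getBytes());
            byte[] key = Arrays.copyOf(sha256.digest(dhSharedSecret), 16);  // SHA-256(label1 || secret)
            sha256.update("initial IV".getBytes());
            byte[] iv = Arrays.copyOf(sha256.digest(dhSharedSecret), 16);   // SHA-256(label2 || secret)
            return new byte[][] { key, iv };
        }
    }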
How do two entities derive the "same seemingly random sequence of bits" without having a shared piece of information?
Is the assumption that the shared information was sent encrypted? And, if the shared information is sent encrypted,
why not just send the IV encrypted?
Diffie-Hellman is based on a group-theoretic problem: Eve knows a (cyclic) group G with generator g and sees the two values g^a (transmitted from Alice to Bob) and g^b (transmitted from Bob to Alice), where a and b are random large integers chosen by Alice and Bob (each unknown to Eve and even to the other partner). The shared secret is then (g^a)^b = g^(a·b) = (g^b)^a. Obviously Bob (who knows b) can calculate the secret as (g^a)^b, while Alice (who knows a) can calculate (g^b)^a. Eve somehow needs to derive this secret to crack the protocol.
In some groups this (known as the computational Diffie-Hellman problem) seems to be a hard problem, and we use these groups in cryptography. (In the original DH, we use a subgroup of prime order of the multiplicative group of some large finite prime field; in elliptic-curve DH we use an elliptic curve group over a finite field. Other groups work, too, but some of them are weak: e.g., in the additive group of a field the problem is trivial to solve.)
Then both Alice and Bob use a key derivation function to derive the actual keying material (i.e. encryption keys for both directions, MAC keys, and the starting IVs).
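For illustration, a toy Java version of the exchange with deliberately tiny, insecure parameters (real deployments use primes of 2048 bits or more, or elliptic curves):

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class DhToyExample {
        public static void main(String[] args) {
            BigInteger p = BigInteger.valueOf(23);  // toy prime modulus (insecure!)
            BigInteger g = BigInteger.valueOf(5);   // generator of the group
            SecureRandom rnd = new SecureRandom();

            BigInteger a = new BigInteger(16, rnd).add(BigInteger.ONE);  // Alice's secret exponent
            BigInteger b = new BigInteger(16, rnd).add(BigInteger.ONE);  // Bob's secret exponent

            BigInteger ga = g.modPow(a, p);  // Alice -> Bob: g^a mod p (Eve sees this)
            BigInteger gb = g.modPow(b, p);  // Bob -> Alice: g^b mod p (Eve sees this)

            BigInteger aliceSecret = gb.modPow(a, p);  // (g^b)^a = g^(a*b)
            BigInteger bobSecret   = ga.modPow(b, p);  // (g^a)^b = g^(a*b)
            System.out.println(aliceSecret.equals(bobSecret));  // always true
        }
    }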
I have a small confusion about the SSL handshake between browser and server in a typical HTTPS web scenario:
What I have understood so far is that in the SSL handshake, the client (the browser in this case) encrypts a randomly selected symmetric key with the public key (from the certificate received from the server). This is sent back to the server, and the server decrypts the symmetric key with its private key. This symmetric key is then used for the rest of the session to encrypt/decrypt the messages at both ends. One of the main reasons given for doing so is faster encryption using symmetric keys.
Questions
1) How does the browser pick and generate this "randomly" selected symmetric key?
2) Do developers (and/or browser users) have control over this mechanism of generating symmetric keys?
Here is a very good description of how HTTPS connection establishment works. I will provide a summary of how the session key is acquired by both parties (client and server); this process is known as a "key agreement protocol". Here is how it works:
The client generates the 48-byte "pre-master secret" random value.
The client pads these bytes with random data to make the input equal to 128 bytes.
The client encrypts it with the server's public key and sends it to the server. (A sketch of these three steps follows.)
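A rough Java sketch of those three client-side steps (assuming the server's RSA public key has already been extracted from its certificate; in real TLS the first two bytes of the pre-master secret carry the protocol version, a detail omitted here):

    import java.security.PublicKey;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;

    public class PreMasterSecretSketch {
        static byte[] encryptPreMaster(PublicKey serverRsaKey) throws Exception {
            // Step 1: 48 random bytes from a cryptographically strong source.
            byte[] preMaster = new byte[48];
            new SecureRandom().nextBytes(preMaster);

            // Steps 2-3: PKCS#1 v1.5 padding expands the input to the RSA
            // modulus size (128 bytes for a 1024-bit key) with random bytes,
            // then encrypts under the server's public key.
            Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            rsa.init(Cipher.ENCRYPT_MODE, serverRsaKey);
            return rsa.doFinal(preMaster);
        }
    }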
Then the master secret is produced by both parties in the following manner:
    master_secret = PRF(pre_master_secret,
                        "master secret",
                        ClientHello.random + ServerHello.random)
The PRF is the “Pseudo-Random Function” that’s also defined in the
spec and is quite clever. It combines the secret, the ASCII label, and
the seed data we give it by using the keyed-Hash Message
Authentication Code (HMAC) versions of both MD5 and SHA-1 hash
functions. Half of the input is sent to each hash function. It’s
clever because it is quite resistant to attack, even in the face of
weaknesses in MD5 and SHA-1. This process can feed back on itself and
iterate forever to generate as many bytes as we need.
Following this procedure, we obtain a 48-byte "master secret".
Quoting from this great video, at minute 1:18:07:
Well, where do you get randomness on your computer, because your
computer is a deterministic device?
Well, it collects entropy, like your mouse movements, your keystroke
movements, and the timing of your hard disk; it tries to collect
all that randomness from the universe into a pool so that it can generate random keys just for one connection [this session]. And if that randomness is broken, and it's happened many times
in the last 30 years, then none of this works. If the adversary can
figure out what your randomness will be, then they can guess your keys. So use good randomness.
Note: the keys are created per session.