SSL is half symmetric and half asymmetric?

I am reading http://www.definityhealth.com/marketing/how_ssl_works.html
It looks like SSL uses an asymmetric algorithm to exchange the symmetric key, and after that it uses a symmetric algorithm to encrypt the data.
One question: can I use an asymmetric algorithm only? Say Alice and Bob both have certificates, and each uses the peer's public key to encrypt the data.

No, you can't use only asymmetric encryption.
TLS (SSL) does not support encryption of application data with public key algorithms because it would make no sense: it would be much less efficient yet provide no improvement to security.
Public key encryption is not harder to break than symmetric algorithms. In fact, for all we know, there may be a trick that makes breaking some asymmetric algorithms trivial, just waiting to be discovered.
Public key algorithms solve the key exchange problem, and that is how TLS and every other security protocol uses them. Symmetric algorithms are used to keep data private and protect its integrity.

As a general rule, asymmetric algorithms are much more computationally intensive than symmetric algorithms. Thus it is very common to use an asymmetric algorithm to exchange a symmetric key, which is then used to encrypt the data. This is also considered sufficiently safe security-wise.
Can you use asymmetric algorithms for everything? Surely you can.
Can you do it within SSL? I don't know.

Yes, you can, if you provide your own implementation of SSL - as this is not the original SSL design. (BTW, use TLS - it is very similar but more secure.)

A symmetric cipher uses the same key to encrypt and decrypt the data. The biggest issue with it is getting that key to the receiver securely. Therefore the use of asymmetric keys is encouraged, where each party has a private and a public key.
Symmetric keys are generally used to encrypt large amounts of data, which is faster. The symmetric key itself is then sent to the receiver encrypted with an asymmetric algorithm.
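To make the hybrid scheme concrete, here is a minimal sketch in Python (using the third-party cryptography package; the key sizes and the choice of RSA-OAEP plus AES-GCM are illustrative assumptions, not the actual TLS handshake):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Bob's long-term key pair (in TLS this would come from his certificate).
    bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Alice: encrypt the bulk data with a fresh symmetric key...
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"the application data", None)

    # ...and use the expensive asymmetric operation only on the small key.
    wrapped_key = bob_private.public_key().encrypt(session_key, oaep)

    # Bob: unwrap the symmetric key, then decrypt the bulk data cheaply.
    recovered = AESGCM(bob_private.decrypt(wrapped_key, oaep)).decrypt(
        nonce, ciphertext, None)
    assert recovered == b"the application data"

The asymmetric operation runs once per session regardless of how much data follows, which is exactly why the hybrid design is the norm.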

Related

PBKDF2 for identifier hashing (not password) in .Net core

I need to hash identifiers before storing in a database. There will be up to 1 million values overall. I need to pseudonymise these values to comply with GDPR.
I am using .Net Core and I want to stay with the core hashing functionality; I don't want to risk using external hashing implementations. The intention is to add a salt phrase to each value before hashing. These values have already been hashed by the supplier, but I will be hashing them again before storing them in the database.
I was going to use SHA256, but I have read that PBKDF2 is more secure. However, I have also read that PBKDF2 is prone to collisions. It is of the utmost importance that the hashing implementation I use has a low collision chance. Does PBKDF2 have a higher collision rate than plain SHA256? Does using a key derivation of HMACSHA512 with PBKDF2, as opposed to HMACSHA1, reduce the possibility of collisions?
I would like recommendations for a secure, low-collision one-way hash for .Net Core.
There will be up to 1 million values overall. I need to pseudonymise these values to comply with GDPR
I was going to use SHA256 but I have read that PBKDF2 is more secure.
For this use case a proper cryptographic hash is imho the best option.
PBKDF2 is a key derivation function intended to derive higher-entropy keys from relatively weak passwords. It uses a hash under the hood, so if that hash has a certain collision probability, PBKDF2 will have the same.
PBKDF2 is intended to be slow (using iterations) to mitigate the feasibility of brute-forcing the input password. You don't need that property; it may even be bad for your use case.
So: you can boldly use SHA256 to pseudonymise your data; imho it may be the best option you have today. In principle you cannot prevent hash collisions, but the probability should be negligible.
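The question asks about .Net Core, but the construction is language-independent; a minimal stdlib-only Python sketch of the salted-SHA256 approach (the salt value and identifier are placeholders):

    import hashlib

    SALT = b"application-specific salt phrase"  # placeholder; keep it secret

    def pseudonymise(identifier: str) -> str:
        # Deterministic salted SHA-256, hex-encoded for storage in the database.
        return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

    # Equal inputs map to equal 256-bit digests, so a UNIQUE column works;
    # the collision probability among ~1 million values is negligible.
    print(pseudonymise("supplier-hash-12345"))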

How expensive is JWT decryption?

I am using JWT for API authentication.
I am just curious to know how expensive it is to decrypt the JWT each time a request arrives.
It depends on the algorithm(s) used.
(Note that JWT supports signing as well as encryption - signed JWTs are the more common use case; my answer is general.)
The symmetric key algorithms (AES, HMAC) are the least expensive (very fast).
For public key algorithms, RSA-based algorithms are the most expensive, and elliptic curve algorithms (ECDH for key encryption, ECDSA for signing) are less computationally expensive but still more expensive than symmetric algorithms.
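A rough way to see the gap is to time the verification primitives directly; a Python sketch (stdlib hmac plus the third-party cryptography package; the payload and iteration count are arbitrary):

    import hashlib, hmac, time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    payload = b'{"sub":"user-1","exp":1700000000}'  # stand-in for a JWT signing input
    secret = b"shared-secret"
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    signature = key.sign(payload, padding.PKCS1v15(), hashes.SHA256())  # RS256-style
    tag = hmac.new(secret, payload, hashlib.sha256).digest()            # HS256-style

    def bench(label, fn, n=1000):
        t0 = time.perf_counter()
        for _ in range(n):
            fn()
        print(f"{label}: {(time.perf_counter() - t0) / n * 1e6:.1f} us/op")

    # HS256 verification: recompute the HMAC and compare.
    bench("HMAC-SHA256 verify", lambda: hmac.compare_digest(
        hmac.new(secret, payload, hashlib.sha256).digest(), tag))

    # RS256 verification: a public-key operation, slower (and RSA signing
    # with the private key is slower still).
    bench("RSA-2048 verify", lambda: key.public_key().verify(
        signature, payload, padding.PKCS1v15(), hashes.SHA256()))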

Benchmarking symmetric and asymmetric cryptography

In order to integrity protect a byte stream one can conceptually either use symmetric cryptography (e.g. an HMAC with SHA-1) or asymmetric cryptography (e.g. digital signature with RSA).
It is common sense that asymmetric cryptography is much more expensive than using symmetric cryptography. However, I would like to have hard numbers and would like to know whether there exist benchmark suites for existing crypto libraries (e.g. openssl) in order to gain some measurement results for symmetric and asymmetric cryptography algorithms.
The numbers I get from the built-in "openssl speed" app can, unfortunately, not be compared to each other.
Perhaps somebody already implemented a small benchmarking suite for this purpose?
Thanks,
Martin
I don't think a benchmark is useful here, because the two things you're comparing are built for different use-cases. An HMAC is designed for situations in which you have a shared secret you can use to authenticate the message, whilst signatures are designed for situations in which you don't have a shared secret, but rather want anyone with your public key to be able to verify your signature. There are very few situations in which either primitive would be equally appropriate, and when there is, there's likely to be a clear favorite on security, rather than performance, grounds.
It's fairly trivial to demonstrate that an HMAC is going to be faster, however: signing a message requires first hashing it, then computing the signature over the hash, whilst computing an HMAC requires first hashing the message, then computing the HMAC (which is merely two additional one-block hash computations). For the same reason, though, for any reasonable assumption as to message length and speed of your cryptographic primitives, the speed difference is going to be negligible, since the largest part of the cost is shared between both operations.
In short, you shouldn't choose the structure of your cryptosystem based on insignificant differences in performance.
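To see where those "two additional one-block hash computations" come from, here is HMAC written out from its definition (stdlib Python; it matches hmac.new for any key length):

    import hashlib, hmac

    def manual_hmac_sha256(key: bytes, message: bytes) -> bytes:
        # HMAC(K, m) = H((K xor opad) || H((K xor ipad) || m))
        block_size = 64  # SHA-256 block size in bytes
        if len(key) > block_size:
            key = hashlib.sha256(key).digest()
        key = key.ljust(block_size, b"\x00")
        ipad = bytes(b ^ 0x36 for b in key)
        opad = bytes(b ^ 0x5C for b in key)
        inner = hashlib.sha256(ipad + message).digest()  # cost grows with len(message)
        return hashlib.sha256(opad + inner).digest()     # fixed-size extra work

    msg = b"a reasonably long message" * 100
    assert manual_hmac_sha256(b"k", msg) == hmac.new(b"k", msg, hashlib.sha256).digest()

The message-dependent work is the same hash a signature scheme would compute over the stream; only the small constant-size steps at the end differ.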
All digital signature algorithms (RSA, DSA, ECDSA...) begin by hashing the source stream with a hash function; only the hash output is used afterwards. So the asymptotic cost of signing a long stream of data is the same as the asymptotic cost of hashing the same stream. HMAC is similar in that respect: first you input in the hash function a small fixed-size header, then the data stream; and you have an extra hash operation at the end which operates on a small fixed-size input. So the asymptotic cost of HMACing a long stream of data is the same as the asymptotic cost of hashing the same stream.
To sum up, for a suitably long data stream, a digital signature and HMAC will have the same CPU cost. Speed difference will not be noticeable (the complex part at the end of a digital signature is more expensive than what HMAC does, but a simple PC will still be able to do it in less than a millisecond).
The hash function itself can make a difference, though, at least if you can obtain the data with a high bandwidth. On a typical PC, you can hope hashing data at up to about 300 MB/s with SHA-1, but "only" 150 MB/s with SHA-256. On the other hand, a good mechanical harddisk or gigabit ethernet will hardly go beyond 100 MB/s read speed, so SHA-256 would not be the bottleneck here.
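Those throughput figures are easy to re-measure on your own hardware with a stdlib-only sketch (numbers vary by CPU; on recent processors with SHA extensions, SHA-256 can even come out ahead):

    import hashlib, time

    data = b"\x00" * (64 * 1024 * 1024)  # 64 MiB of input

    for name in ("sha1", "sha256"):
        h = hashlib.new(name)
        t0 = time.perf_counter()
        h.update(data)
        h.digest()
        elapsed = time.perf_counter() - t0
        print(f"{name}: {len(data) / elapsed / 1e6:.0f} MB/s")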

Big-O for public key encryption

I have been searching for a few days now, but I cannot find big-O complexity figures for encrypting, decrypting, or attempting to break (by brute force) a file protected with public key encryption. I am attempting to determine the big-O complexity of an idea I have developed that makes heavy use of public key encryption.
What are the big-O complexities of the following, as related to public key encryption:
A) Encrypt a file made up of N characters with an L length key
B) Decrypt that same file
C) A typical brute force algorithm to break an encrypted file with N characters and with a maximum key length of L
Any included Big-O notations for more efficient algorithms for breaking the encryption would be appreciated. Also, reference to wherever this material can be found.
Sorry to ask a question that I really should be able to find on my own, but I haven't managed to come across what I am looking for.
Standard public/private key algorithms are almost never used on large inputs, as the security properties of these algorithms are generally not suitable for bulk encryption. The most common configuration is to use a public/private key algorithm to encrypt a small (constant-size, usually 128 - 256 bit) key, then use that key for a symmetric encryption algorithm.
That being said, I'll use RSA as a test case for the rest of the questions:
A/B) Setting aside key generation, RSA encrypts and decrypts in O(n) for the size of the message. (Note that all messages must be the size of the key, so smaller messages are padded and larger messages must be broken up.) The exact speed of encryption/decryption depends on the algorithms used by your RSA implementation, but it's polynomial in key size:
http://www.javamex.com/tutorials/cryptography/rsa_key_length.shtml
C) Given a public key, RSA can be cracked by factoring the public modulus, which is currently best accomplished using the GNFS, with running time O(exp((7.1 b)^(1/3) (log b)^(2/3))) for a b-bit modulus. I don't believe there's much work on cracking RSA based on encrypted data, as the public key is a much more useful target.
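To get a feel for the "polynomial in key size" claim, one can time the private-key operation (roughly cubic in the modulus length) at a few key sizes; a sketch using the third-party cryptography package:

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    for bits in (1024, 2048, 4096):
        key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
        ct = key.public_key().encrypt(b"tiny secret key", oaep)
        t0 = time.perf_counter()
        for _ in range(50):
            key.decrypt(ct, oaep)
        print(f"RSA-{bits} decrypt: {(time.perf_counter() - t0) / 50 * 1e3:.2f} ms/op")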

Using a constant IV with single-block encryption

I have lots of small secrets that I want to store encrypted in a database. The database client will have the keys, and the database server will not deal with encryption and decryption. All of my secrets are 16 bytes or less, which means just one block when using AES. I'm using a constant IV (and key) to make the encryption deterministic and my reason for doing deterministic encryption is to be able to easily query the database using ciphertext and making sure the same secret is not stored twice (by making the column UNIQUE). As far as I can see there should be no problem doing this, as long as the key is secret. But I want to be sure: Am I right or wrong? In case I'm wrong, what attacks could be done?
BTW: Hashes are quite useless here, because of a relatively small number of possible plaintexts. With a hash it would be trivial to obtain the original plaintext.
An ideal cipher, for messages of length n bits, is a permutation of the 2^n sequences of n bits, chosen at random among the (2^n)! such permutations. The "key" is the description of which permutation was chosen.
A secure block cipher is supposed to be indistinguishable from an ideal cipher, with n being the block size. For AES, n=128 (i.e. 16 bytes). AES is supposed to be a secure block cipher.
If all your secrets have length exactly 16 bytes (or less than 16 bytes, with some padding convention to unambiguously extend them to 16 bytes), then an ideal cipher is what you want, and AES "as itself" should be fine. With common AES implementations, which want to apply padding and process arbitrarily long streams, you can get a single-block encryption by asking for ECB mode, or CBC mode with an all-zero IV.
All the issues about IV, and why chaining modes such as CBC were needed in the first place, come from multi-block messages. AES encrypts 16-byte messages (no more, no less): chaining modes are about emulating an ideal cipher for longer messages. If, in your application, all messages have length exactly 16 bytes (or are shorter, but you add padding), then you just need the "raw" AES; and a fixed IV is a close enough emulation of raw AES.
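As a concrete illustration, a single-block "raw AES" encryption via ECB mode (Python, third-party cryptography package; the key and the zero-padding convention are placeholder assumptions):

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = bytes(range(32))        # placeholder 256-bit key; use a real secret
    secret = b"card=4111111111"   # at most 16 bytes

    # Zero-pad to one AES block; unambiguous only if secrets never end in 0x00.
    block = secret.ljust(16, b"\x00")

    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = enc.update(block) + enc.finalize()  # deterministic: same in, same out

    dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    assert (dec.update(ciphertext) + dec.finalize()).rstrip(b"\x00") == secret

Because the mapping is deterministic, equal secrets produce equal ciphertexts, which is exactly the property the UNIQUE column relies on.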
Note, though, the following:
If you are storing encrypted elements in a database, and require uniqueness for the whole lifetime of your application, then your secret key is long-lived. Keeping a secret key secret for a long time can be a hard problem. For instance, long-lived secret keys need some kind of storage which survives reboots. How do you manage dead hard disks? Do you destroy them in an acid-filled cauldron?
Encryption ensures confidentiality, not integrity. In most security models, attackers can be active (i.e., if the attacker can read the database, he can probably write into it too). Active attacks open up a whole host of issues: for instance, what could happen if the attacker swaps some of your secrets within the database? Or alters some randomly? Encryption is, as always, the easy part (not that it is really "easy", but it is much easier than the rest of the job).
If the assembly is publicly available, or can become so, your key and IV can be discovered by using Reflector to expose the source code that uses it. That would be the main problem with this, if the data really were secret. It is possible to obfuscate MSIL, but that just makes it harder to trace through; it still has to be computer-consumable, so you can't truly encrypt it.
The static IV would make your implementation vulnerable to frequency attacks. See "For AES CBC encryption, what's the importance of the IV?"