asymmetric HMAC - ssl

Validating an HMAC-generated hash requires the validator to know the key, so it is symmetric. What is the analogous asymmetric solution, other than SSL? I want the signature to be small, like an MD5 hash, and I want generation and validation to be lightweight. I was looking into Rabin's signature algorithm but couldn't find any implementation or pseudocode to follow.

The smallest asymmetric signatures come from elliptic curve cryptosystems like ECDSA. ECDSA signature schemes require signatures approximately four times the length of a symmetric cipher key of equivalent security. So a scheme comparable in security to 128-bit AES would have 512-bit signatures. That's the state of the art right now -- schemes with smaller signatures but equal or greater security are not known.
If you don't need security quite that high, you could use a 192-bit curve which would result in 384-bit signatures. You can go down to 320-bit signatures (160-bit curves) and still have security comparable to 80-bit symmetric ciphers. If you really don't particularly care about security, 112-bit curves can be used, providing 224-bit signatures that are about as difficult to break as DES.
The following curves are what I would recommend for each security level:
SecP112R1: 224-bit signatures, 56-bit security level
SecP128R1: 256-bit signatures, 64-bit security level
SecP160K1: 320-bit signatures, 80-bit security level
SecP192K1: 384-bit signatures, 96-bit security level
SecP224K1: 448-bit signatures, 112-bit security level
SecP256K1: 512-bit signatures, 128-bit security level
For each curve, the private key is the same size as the curve. Public keys (in compressed form) are one bit larger than the curve size. Signatures are twice the curve size. So with SecP256K1, private keys are 256 bits, public keys are 257 bits, and signatures are 512 bits. These are the minimum sizes for the raw binary values.
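If you want to sanity-check these sizes yourself, here is a minimal Python sketch; the third-party "ecdsa" package is my assumption, any ECC library exposing SECP256k1 would do:

    # A minimal sketch, assuming the third-party "ecdsa" package
    # (pip install ecdsa).
    from ecdsa import SigningKey, SECP256k1

    sk = SigningKey.generate(curve=SECP256k1)   # 256-bit private key
    vk = sk.get_verifying_key()

    message = b"hello"
    signature = sk.sign(message)                # raw r||s encoding

    print(len(sk.to_string()))              # 32 bytes: 256-bit private key
    print(len(vk.to_string("compressed")))  # 33 bytes: 257-bit public key, rounded up
    print(len(signature))                   # 64 bytes: 512-bit signature
    assert vk.verify(signature, message)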
Caution: I would consider 160-bit curves the minimum for any purpose where security is a factor. Smaller curves might be suitable if keys are generated, used, and then thrown away in a small time frame. For long-term security, 256-bit curves should be used. The system as a whole should be evaluated by competent experts before it is relied upon.

Is password hashing using MD5 or SHA1 still valid?

Right now I'm working on a financial project. The team is thinking of using MD5 for password hashing.
But today it is easy to crack a SHA1 or MD5 password hash, even for a complex password like
My$uper$ecur3PAS$word+448: you can use an online lookup page and there it is.
Small and mid-range developers (including me) use those hashing methods, but I think they are not enough to secure the database.
(Excluding firewalls, network security, iptables, etc.)
Can someone give me a clue about a better approach to fix this vulnerability?
As per the OWASP Password Storage Cheat Sheet, the recommendation is:
Argon2 is the winner of the password hashing competition and should be considered as your first choice for new applications;
PBKDF2 when FIPS certification or enterprise support on many platforms is required;
scrypt where resisting any/all hardware accelerated attacks is necessary but support isn’t.
bcrypt where PBKDF2 or scrypt support is not available.
MD5 and SHA1 are not secure for most security-related use cases, because it is possible to find collisions with these algorithms, i.e., two different inputs that produce the same hash value.
The SHA-2 family of hashing algorithms is secure for many security use cases, but not for password hashing, because these functions are blazingly fast compared with the algorithms above. And speed is something we don't want in password hashing, because it makes it easier for an attacker to run a brute-force attack, trying a wide range of passwords in a short period of time.
The above four algorithms are therefore deliberately expensive in terms of memory, computing power, and time. Their costs are usually parameterized, so they can be tuned upward as new technologies increase computing power over time. When using these algorithms, it is therefore important to choose the work-factor values correctly; setting a very low value may defeat the purpose.
In addition to that a salt should also be used.
Again from the same OWASP source:
Generate a unique salt upon creation of each stored credential (not just per user or system wide);
Use cryptographically-strong random data;
As storage permits, use a 32 byte or 64 byte salt (actual size dependent on protection function);
Scheme security does not depend on hiding, splitting, or otherwise obscuring the salt.
Salts serve two purposes:
prevent the protected form from revealing two identical credentials and
augment entropy fed to protecting function without relying on credential complexity.
The second aims to make pre-computed lookup attacks on an individual credential and time-based attacks on a population intractable.
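To make that concrete, here is a minimal Python sketch, assuming the third-party argon2-cffi package (the question doesn't specify a stack). The library generates a unique random salt per hash and embeds the salt and work-factor parameters in the stored string:

    # A minimal sketch, assuming the third-party "argon2-cffi" package
    # (pip install argon2-cffi).
    from argon2 import PasswordHasher
    from argon2.exceptions import VerifyMismatchError

    ph = PasswordHasher()  # sensible default time/memory/parallelism costs

    stored = ph.hash("My$uper$ecur3PAS$word+448")
    print(stored)  # e.g. $argon2id$v=19$m=65536,t=3,p=4$<salt>$<hash>

    try:
        ph.verify(stored, "My$uper$ecur3PAS$word+448")  # returns True
        ph.verify(stored, "wrong password")             # raises
    except VerifyMismatchError:
        print("password rejected")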
Your thinking is correct, MD5 and SHA1 should never be used for password hashing. I would recommend the following, in order of preference:
argon2
bcrypt
scrypt
PBKDF2
If you tag your question with the language/framework you are using, I can recommend specific libraries or methods.
Also be aware that encryption is not the right word to use here. These are password hashing algorithms, not encryption algorithms.
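For example, a minimal bcrypt sketch in Python (the third-party "bcrypt" package is an assumption on my part; equivalents exist for most languages):

    # A minimal sketch, assuming the third-party "bcrypt" package
    # (pip install bcrypt). The salt and cost factor are embedded in the hash.
    import bcrypt

    password = b"My$uper$ecur3PAS$word+448"
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    print(bcrypt.checkpw(password, hashed))   # True
    print(bcrypt.checkpw(b"guess", hashed))   # False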

Why can DES be used only with a 56-bit key? And why must the plaintext be 64 bits in length?

Why can DES be used only with a 56-bit key? What happens if we use a longer key? Also, why must the plaintext be 64 bits in length?
US regulations at the time required users of keys stronger than 56 bits to submit to "key recovery" so that law enforcement could retain back-door access.
Thus DES, as a standard, was specified at the maximum allowed key length of 56 bits. If you used a longer key, you would not be compatible with other DES systems.
See: http://en.wikipedia.org/wiki/56-bit_encryption
If you are implementing a system and have a choice of encryption, more modern and stronger ciphers are absolutely recommended. The current standard is AES (the Advanced Encryption Standard), which is widely available, strong, and supports key sizes of 128, 192, or 256 bits.
For desktop or server applications, AES-256 would be a good default choice.
See: http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
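For illustration, a minimal AES-256 sketch in Python (assuming the third-party "cryptography" package; the GCM mode shown here also protects integrity):

    # A minimal AES-256 sketch, assuming the third-party "cryptography"
    # package (pip install cryptography). GCM mode also authenticates the data.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
    nonce = os.urandom(12)                      # never reuse a nonce with the same key

    ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"attack at dawn"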
When encrypting data, plaintexts must often be "padded" to a minimum size. Ciphers rely on jumbling and interactions between many bits to preserve the secrecy of the plaintext and avoid potentially revealing the key. For a short plaintext without padding, that jumbling and interaction is removed as a factor, and the mathematical complexity drops vastly.
Encrypting just a single character without padding, for example a 'y' or 'n' response, could reduce a 2^256 keyspace down to perhaps 2^24, which could be cracked in minutes. That would enable an attacker to guess large parts of the key, rapidly break it, and then (worst of all) decrypt all other traffic on the channel.
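To see padding concretely, here is a PKCS#7 sketch in Python (the third-party "cryptography" package is an assumption): a 1-byte plaintext is extended to a full 16-byte AES block.

    # A sketch of PKCS#7 padding, assuming the third-party "cryptography"
    # package: a 1-byte plaintext is extended to a full 16-byte AES block.
    from cryptography.hazmat.primitives import padding

    padder = padding.PKCS7(128).padder()        # 128-bit (16-byte) block size
    padded = padder.update(b"y") + padder.finalize()
    print(len(padded), padded)                  # 16 bytes: b'y' + 15 x b'\x0f'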

Benchmarking symmetric and asymmetric cryptography

In order to integrity-protect a byte stream, one can conceptually use either symmetric cryptography (e.g. an HMAC with SHA-1) or asymmetric cryptography (e.g. a digital signature with RSA).
It is common knowledge that asymmetric cryptography is much more expensive than symmetric cryptography. However, I would like hard numbers, and I would like to know whether there are benchmark suites for existing crypto libraries (e.g. openssl) that yield comparable measurements for symmetric and asymmetric algorithms.
The numbers I get from the built-in "openssl speed" app unfortunately cannot be compared to each other.
Perhaps somebody already implemented a small benchmarking suite for this purpose?
I don't think a benchmark is useful here, because the two things you're comparing are built for different use cases. An HMAC is designed for situations in which you have a shared secret you can use to authenticate the message, whilst signatures are designed for situations in which you don't have a shared secret, but rather want anyone with your public key to be able to verify your signature. There are very few situations in which either primitive would be equally appropriate, and when there are, there's likely to be a clear favourite on security, rather than performance, grounds.
It's fairly trivial to demonstrate that an HMAC is going to be faster, however: signing a message requires first hashing it, then computing the signature over the hash, whilst computing an HMAC requires first hashing the message, then performing merely two additional one-block hash computations. For the same reason, though, under any reasonable assumption about message length and the speed of your cryptographic primitives, the speed difference is going to be negligible, since the largest part of the cost is shared between both operations.
In short, you shouldn't choose the structure of your cryptosystem based on insignificant differences in performance.
All digital signature algorithms (RSA, DSA, ECDSA...) begin by hashing the source stream with a hash function; only the hash output is used afterwards. So the asymptotic cost of signing a long stream of data is the same as the asymptotic cost of hashing that stream. HMAC is similar in that respect: first you feed the hash function a small fixed-size header, then the data stream, and there is an extra hash operation at the end which operates on a small fixed-size input. So the asymptotic cost of HMACing a long stream of data is also the same as the asymptotic cost of hashing the stream.
To sum up, for a suitably long data stream, a digital signature and HMAC will have the same CPU cost. Speed difference will not be noticeable (the complex part at the end of a digital signature is more expensive than what HMAC does, but a simple PC will still be able to do it in less than a millisecond).
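If you still want rough numbers, here is a micro-benchmark sketch in Python (assuming the third-party "cryptography" package; absolute figures will vary with hardware):

    # A rough micro-benchmark sketch, assuming the third-party "cryptography"
    # package. The point is that both costs are dominated by hashing the stream.
    import hmac, hashlib, os, time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    data = os.urandom(10 * 1024 * 1024)         # 10 MB message
    mac_key = os.urandom(32)
    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    t0 = time.perf_counter()
    hmac.new(mac_key, data, hashlib.sha256).digest()
    t1 = time.perf_counter()
    priv.sign(data, padding.PKCS1v15(), hashes.SHA256())
    t2 = time.perf_counter()

    print(f"HMAC-SHA256 over 10 MB:   {t1 - t0:.4f} s")
    print(f"RSA-2048 sign over 10 MB: {t2 - t1:.4f} s")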
The hash function itself can make a difference, though, at least if you can obtain the data at high bandwidth. On a typical PC, you can hope to hash data at up to about 300 MB/s with SHA-1, but "only" 150 MB/s with SHA-256. On the other hand, a good mechanical hard disk or Gigabit Ethernet will hardly go beyond 100 MB/s read speed, so SHA-256 would not be the bottleneck here.

Big-O for public key encryption

I have been searching for a few days now, but I cannot find the big-O complexity of encrypting, decrypting, or attempting to break (brute force) a file protected with public-key encryption. I am attempting to determine the big-O complexity of an idea I have developed that makes heavy use of public-key encryption.
What are the big-O complexities of the following, as related to public-key encryption:
A) Encrypting a file made up of N characters with a key of length L
B) Decrypting that same file
C) A typical brute-force algorithm to break an encrypted file of N characters with a maximum key length of L
Big-O complexities for more efficient attacks on the encryption would also be appreciated, as would references to where this material can be found.
Sorry to ask a question that I really should be able to find on my own, but I haven't managed to come across what I am looking for.
Standard public/private key algorithms are almost never used on large inputs, as their security properties are generally not suitable for bulk encryption. The most common configuration is to use a public/private key algorithm to encrypt a small (constant-size, usually 128-256 bit) key, then use that key with a symmetric encryption algorithm.
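As an illustration of that hybrid pattern, here is a Python sketch (assuming the third-party "cryptography" package; names such as session_key are mine, not a standard API):

    # A sketch of hybrid encryption: RSA-OAEP protects a random 256-bit AES
    # key, and AES-GCM does the bulk encryption.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Sender: bulk-encrypt with a fresh 256-bit symmetric key...
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"a large file...", None)
    # ...and RSA-encrypt only the small session key.
    wrapped_key = recipient.public_key().encrypt(session_key, oaep)

    # Recipient: unwrap the session key, then decrypt the bulk data.
    unwrapped = recipient.decrypt(wrapped_key, oaep)
    assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == b"a large file..."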
That being said, I'll use RSA as a test case for the rest of the questions:
A/B) Setting aside key generation, RSA encrypts and decrypts in O(n) for the size of the message. (Note that all messages must be the size of the key, so smaller messages are padded and larger messages must be broken up.) The exact speed of encryption/decryption depends on the algorithms used by your RSA implementation, but it's polynomial in key size:
http://www.javamex.com/tutorials/cryptography/rsa_key_length.shtml
C) Given a public key, RSA can be cracked by factoring the modulus, which is currently best accomplished using the general number field sieve (GNFS), with heuristic running time roughly O(exp((64/9 * b)^(1/3) * (log b)^(2/3))) for a b-bit modulus. I don't believe there's much work on cracking RSA based on encrypted data, as the public key is a much more useful target.

Using a constant IV with single-block encryption

I have lots of small secrets that I want to store encrypted in a database. The database client will have the keys, and the database server will not deal with encryption and decryption. All of my secrets are 16 bytes or less, which means just one block when using AES. I'm using a constant IV (and key) to make the encryption deterministic and my reason for doing deterministic encryption is to be able to easily query the database using ciphertext and making sure the same secret is not stored twice (by making the column UNIQUE). As far as I can see there should be no problem doing this, as long as the key is secret. But I want to be sure: Am I right or wrong? In case I'm wrong, what attacks could be done?
BTW: Hashes are quite useless here, because of a relatively small number of possible plaintexts. With a hash it would be trivial to obtain the original plaintext.
An ideal cipher, for messages of length n bits, is a permutation of the 2^n sequences of n bits, chosen at random from the (2^n)! such permutations. The "key" is the description of which permutation was chosen.
A secure block cipher is supposed to be indistinguishable from an ideal cipher, with n being the block size. For AES, n=128 (i.e. 16 bytes). AES is supposed to be a secure block cipher.
If all your secrets have length exactly 16 bytes (or less than 16 bytes, with some padding convention to unambiguously extend them to 16 bytes), then an ideal cipher is what you want, and AES "as itself" should be fine. With common AES implementations, which want to apply padding and process arbitrarily long streams, you can get a single-block encryption by asking for ECB mode, or CBC mode with an all-zero IV.
All the issues about IV, and why chaining modes such as CBC were needed in the first place, come from multi-block messages. AES encrypts 16-byte messages (no more, no less): chaining modes are about emulating an ideal cipher for longer messages. If, in your application, all messages have length exactly 16 bytes (or are shorter, but you add padding), then you just need the "raw" AES; and a fixed IV is a close enough emulation of raw AES.
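A minimal sketch of that single-block, fixed-key setup in Python (assuming the third-party "cryptography" package; ECB over exactly one block is equivalent to raw AES):

    # Single-block "raw" AES via ECB over exactly one 16-byte block. The
    # result is deterministic: the same key and plaintext always give the
    # same ciphertext, which is what the UNIQUE column relies on.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)                  # in practice, a long-lived secret key
    secret = b"exactly16bytes!!"          # must be exactly one AES block
    assert len(secret) == 16

    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = enc.update(secret) + enc.finalize()

    dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    assert dec.update(ciphertext) + dec.finalize() == secret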
Note, though, the following:
If you are storing encrypted elements in a database and require uniqueness for the whole lifetime of your application, then your secret key is long-lived. Keeping a secret key secret for a long time can be a hard problem. For instance, long-lived secret keys need some kind of storage which survives reboots. How do you manage dead hard disks? Do you destroy them in an acid-filled cauldron?
Encryption ensures confidentiality, not integrity. In most security models, attackers can be active (i.e., if the attacker can read the database, he can probably write into it too). Active attacks open up a whole host of issues: for instance, what could happen if the attacker swaps some of your secrets within the database? Or alters some at random? Encryption is, as always, the easy part (not that it is really "easy", but it is much easier than the rest of the job).
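One standard mitigation for such active attacks is encrypt-then-MAC: store a MAC tag next to each ciphertext and verify it before use. A Python sketch using only the standard library (the key names and the idea of binding the row id into the tag are my assumptions, not part of the question):

    # A sketch of encrypt-then-MAC so that tampering (swapped or altered
    # rows) is detected before decryption.
    import hmac, hashlib, os

    mac_key = os.urandom(32)   # independent from the encryption key

    def protect(ciphertext: bytes, row_id: bytes) -> bytes:
        # Binding the row id into the tag defeats "swap two rows" attacks.
        return hmac.new(mac_key, row_id + ciphertext, hashlib.sha256).digest()

    def check(ciphertext: bytes, row_id: bytes, tag: bytes) -> bool:
        expected = protect(ciphertext, row_id)
        return hmac.compare_digest(expected, tag)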
If the assembly is publicly available, or can become so, your key and IV can be discovered by using Reflector to expose the source code that uses it. That would be the main problem with this, if the data really were secret. It is possible to obfuscate MSIL, but that just makes it harder to trace through; it still has to be computer-consumable, so you can't truly encrypt it.
The static IV would make your implementation vulnerable to frequency attacks: identical plaintexts always produce identical ciphertexts. See: For AES CBC encryption, what's the importance of the IV?