PBKDF2 for identifier hashing (not passwords) in .NET Core - asp.net-core

I need to hash identifiers before storing them in a database. There will be up to 1 million values overall, and I need to pseudonymise them to comply with GDPR.
I am using .NET Core and I want to stay with the built-in hashing functionality; I don't want to risk using external hashing implementations. The intention is to add a salt phrase to each value before hashing. These values have already been hashed by the supplier, but I will be hashing them again before storing them in the database.
I was going to use SHA-256, but I have read that PBKDF2 is more secure. However, I have also read that PBKDF2 is prone to collisions, and it is of the utmost importance that the hashing implementation I use has a low collision chance. Does PBKDF2 have a higher collision rate than plain SHA-256? Does using HMAC-SHA512 as the underlying function for PBKDF2, as opposed to HMAC-SHA1, reduce the possibility of collisions?
I would like recommendations for a secure, low-collision one-way hash for .NET Core.

There will be up to 1 million values overall. I need to pseudonymise these values to comply with GDPR
I was going to use SHA256 but I have read that PBKDF2 is more secure.
For this use case a proper cryptographic hash is, imho, the best option.
PBKDF2 is a key derivation function intended to derive higher-entropy keys from relatively weak passwords. It uses a hash function under the hood, so if that hash has a certain collision probability, PBKDF2 will have the same.
PBKDF2 is also intentionally slow (it applies many iterations) to make brute-forcing the input password infeasible. You don't need that property; it may even be a drawback for your use case.
So you can confidently use SHA-256 to pseudonymise your data; imho it is the best option you have today. In principle you cannot rule out a hash collision, but the probability is negligible.
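To make that concrete, here is a minimal sketch of what this could look like with the built-in .NET Core crypto APIs. It uses HMAC-SHA256 keyed with the secret "salt phrase", which is a slightly more principled way of mixing a secret into the hash than plain string concatenation; the names Pseudonymiser and PseudonymiseId are purely illustrative, and Convert.ToHexString assumes .NET 5 or later.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: pseudonymise an identifier with HMAC-SHA256 from the built-in crypto APIs.
// The secret key plays the role of the "salt phrase" in the question; keeping it out of
// the database is what stops someone re-identifying a record by hashing candidate IDs.
public static class Pseudonymiser
{
    public static string PseudonymiseId(string identifier, byte[] secretKey)
    {
        using var hmac = new HMACSHA256(secretKey);
        byte[] digest = hmac.ComputeHash(Encoding.UTF8.GetBytes(identifier));
        return Convert.ToHexString(digest); // .NET 5+; use BitConverter.ToString on older runtimes
    }
}
```

For 1 million identifiers and a 256-bit output, the birthday bound puts the chance of any accidental collision at roughly n²/2^257, on the order of 10^-65, i.e. negligible.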

Related

Is password hashing using MD5 or SHA1 still valid?

Just now I'm working on a financial project. Here, the team is thinking of using MD5 for password hashing.
But today it is easy to "reverse" a SHA1 or MD5 password hash, even for a complex password like My$uper$ecur3PAS$word+448: you can paste the hash into an online lookup page and there it is.
Small and mid-range developers (including me) use those hashing methods, but I don't think they are enough to secure the database
(excluding firewalls, network security, iptables, etc.).
Can someone give me a clue about the best approach to address this vulnerability?
As per the OWASP Password Storage Cheat Sheet, the recommendation is:
Argon2 is the winner of the password hashing competition and should be considered as your first choice for new applications;
PBKDF2 when FIPS certification or enterprise support on many platforms is required;
scrypt where resisting any/all hardware accelerated attacks is necessary but support isn’t.
bcrypt where PBKDF2 or scrypt support is not available.
MD5 and SHA1 are not secure for most security-related use cases, because it is possible to find collisions with these algorithms; in other words, it is feasible to construct two different inputs that hash to the same value.
The SHA-2 family of hashing algorithms is secure for many security use cases, but not for password hashing, because SHA-2 is blazingly fast compared with the algorithms above. Speed is exactly what we don't want for password hashing, because it makes it easier for an attacker to brute-force a wide range of passwords in a short period of time.
The above four algorithms are therefore designed to be expensive in terms of memory, computation and time. These costs are usually parameterized so that they can be tuned upwards as new technologies increase the available computing power over time. While using these algorithms it is therefore important to choose the work-factor values correctly; setting them too low may defeat the purpose.
In addition to that, a salt should also be used.
Again from the same OWASP source:
Generate a unique salt upon creation of each stored credential (not just per user or system wide);
Use cryptographically-strong random data;
As storage permits, use a 32 byte or 64 byte salt (actual size dependent on protection function);
Scheme security does not depend on hiding, splitting, or otherwise obscuring the salt.
Salts serve two purposes:
prevent the protected form from revealing two identical credentials and
augment entropy fed to protecting function without relying on credential complexity.
The second aims to make pre-computed lookup attacks on an individual credential and time-based attacks on a population intractable.
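A minimal sketch of that advice in .NET, assuming the built-in Rfc2898DeriveBytes (PBKDF2) with HMAC-SHA256, a unique 128-bit random salt per credential, and an iteration count of 100,000 that is purely illustrative and should be tuned to your own hardware budget:

```csharp
using System;
using System.Security.Cryptography;

// Sketch: PBKDF2 password hashing with a unique, random, per-credential salt.
public static class PasswordHasher
{
    private const int SaltSize = 16;        // 128-bit salt
    private const int HashSize = 32;        // 256-bit derived key
    private const int Iterations = 100_000; // illustrative; tune to your hardware budget

    public static (byte[] Salt, byte[] Hash) HashPassword(string password)
    {
        byte[] salt = RandomNumberGenerator.GetBytes(SaltSize); // .NET 6+
        using var kdf = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256);
        return (salt, kdf.GetBytes(HashSize));
    }

    public static bool Verify(string password, byte[] salt, byte[] expectedHash)
    {
        using var kdf = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256);
        byte[] actual = kdf.GetBytes(expectedHash.Length);
        return CryptographicOperations.FixedTimeEquals(actual, expectedHash); // constant-time compare
    }
}
```

Verification simply re-derives the hash with the stored salt and compares in constant time; on runtimes older than .NET 6 the salt can be filled via RandomNumberGenerator.Create().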
Your thinking is correct: MD5 and SHA1 should never be used for password hashing. I would recommend the following, in order of preference:
argon2
bcrypt
scrypt
PBKDF2
If you tag your question with the language/framework you are using, I can recommend specific libraries or methods.
Also be aware that encryption is not the right word to use here. These are password hashing algorithms, not encryption algorithms.

Why do salts have fixed lengths?

I thought the reason to use variable, random strings as salts is to force the attacker to look up every salt before using his rainbow table on the hash, which takes a long time. But most developers seem to use fixed sizes for their salts. If you look up the size of one salt, which you know because most passwords are stored alongside the salt, couldn't you just cut the salt off every password digest and then use your rainbow table? I don't see how this would be too much of an effort, and it makes me question why I should even use a variable salt, because the only information the attacker needs is the size of one salt, and then he doesn't have to look up any more salts. Wouldn't a randomized size be much more secure? The variability and randomization really seem like a 'security through obscurity' thing to me, because it's so easy to work around if you know the method, and the only reason to implement it is to cause some moments of confusion. Or am I wrong?
Salts can be of any length. The only requirement of a good salt is that it is 'sufficiently unique', as it is this property that makes a rainbow-table attack infeasible against salted hashes: it would take more time to generate the rainbow table, which would only work for a given salt, than to just attempt a brute-force attack [1].
A public salt makes it more time-consuming to crack a list of passwords. However, it does not make dictionary attacks harder when cracking a single password. - Salt (cryptography).
An easy way to generate a good salt is then to generate a large (say, 128 random bits) value from a cryptographic random number generator. This will trivially ensure salts are of the same length - but there is an insane number of unique values that can be represented by 128 bits.
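In .NET, for instance, that could be as simple as the following one-liner (RandomNumberGenerator.GetBytes exists from .NET 6; older runtimes can fill a buffer via RandomNumberGenerator.Create().GetBytes(buffer) instead):

```csharp
using System.Security.Cryptography;

// A fixed-length, 128-bit salt from a cryptographically secure RNG.
// It is stored alongside the hash; its value comes from uniqueness, not from secrecy or a variable length.
byte[] salt = RandomNumberGenerator.GetBytes(16); // 16 bytes = 128 bits (.NET 6+)
```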
Since the hash is generated using the original 'long' salt, using a trimmed salt later would just yield an invalid hash when specifying the original password - it doesn't make it any easier to find a hash collision. (Choosing a 'slow' hash like bcrypt is another requirement for a good hashed password design.)
If an attacker was able to trick a system to use a small range of salt values (or even a fixed salt value) then the salts lose the property of 'sufficiently unique'.
[1] Sadly, many people choose poor passwords - mainly passwords that are too short, based on common words/sequences, or shared between sites. While a slow hash will help a good bit, and the salt prevents the application of rainbow tables, given a large enough pool of accounts (and less time than may be expected) a brute-force attack is still likely to recover some passwords.

Benchmarking symmetric and asymmetric cryptography

In order to integrity protect a byte stream one can conceptually either use symmetric cryptography (e.g. an HMAC with SHA-1) or asymmetric cryptography (e.g. digital signature with RSA).
It is common sense that asymmetric cryptography is much more expensive than symmetric cryptography. However, I would like to have hard numbers, and I would like to know whether there are benchmark suites for existing crypto libraries (e.g. OpenSSL) in order to obtain measurements for symmetric and asymmetric algorithms.
The numbers I get from the built-in "openssl speed" app unfortunately cannot be compared with each other.
Perhaps somebody already implemented a small benchmarking suite for this purpose?
Thanks,
Martin
I don't think a benchmark is useful here, because the two things you're comparing are built for different use cases. An HMAC is designed for situations in which you have a shared secret you can use to authenticate the message, whilst signatures are designed for situations in which you don't have a shared secret, but rather want anyone with your public key to be able to verify your signature. There are very few situations in which either primitive would be equally appropriate, and when there is one, there's likely to be a clear favorite on security rather than performance grounds.
It's fairly trivial to demonstrate that an HMAC is going to be faster, however: signing a message requires first hashing it and then computing the signature over the hash, whilst computing an HMAC requires first hashing the message and then merely two additional one-block hash computations. For the same reason, though, for any reasonable assumption about message length and the speed of your cryptographic primitives, the speed difference is going to be negligible, since the largest part of the cost is shared between both operations.
In short, you shouldn't choose the structure of your cryptosystem based on insignificant differences in performance.
All digital signature algorithms (RSA, DSA, ECDSA...) begin by hashing the source stream with a hash function; only the hash output is used afterwards. So the asymptotic cost of signing a long stream of data is the same as the asymptotic cost of hashing the same stream. HMAC is similar in that respect: first you input in the hash function a small fixed-size header, then the data stream; and you have an extra hash operation at the end which operates on a small fixed-size input. So the asymptotic cost of HMACing a long stream of data is the same as the asymptotic cost of hashing the same stream.
To sum up, for a suitably long data stream, a digital signature and HMAC will have the same CPU cost. Speed difference will not be noticeable (the complex part at the end of a digital signature is more expensive than what HMAC does, but a simple PC will still be able to do it in less than a millisecond).
The hash function itself can make a difference, though, at least if you can obtain the data with a high bandwidth. On a typical PC, you can hope to hash data at up to about 300 MB/s with SHA-1, but "only" 150 MB/s with SHA-256. On the other hand, a good mechanical hard disk or gigabit Ethernet will hardly go beyond 100 MB/s read speed, so SHA-256 would not be the bottleneck here.
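If you still want rough numbers for your own machine, a small Stopwatch-based sketch along these lines is enough. It times HMAC-SHA256 against RSA-2048 signatures over the same 1 MiB buffer; the buffer size and round count are arbitrary choices, not a calibrated benchmark.

```csharp
using System;
using System.Diagnostics;
using System.Security.Cryptography;

// Rough micro-benchmark sketch: HMAC-SHA256 vs. RSA-2048 signing of the same buffer.
// Both operations hash the full message; RSA adds one private-key operation at the end.
class Bench
{
    static void Main()
    {
        byte[] data = new byte[1 << 20];                 // 1 MiB of zeroes is fine for timing
        byte[] key = RandomNumberGenerator.GetBytes(32); // .NET 6+
        using var hmac = new HMACSHA256(key);
        using var rsa = RSA.Create(2048);

        const int rounds = 200;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < rounds; i++) hmac.ComputeHash(data);
        Console.WriteLine($"HMAC-SHA256:   {sw.Elapsed.TotalMilliseconds / rounds:F3} ms/op");

        sw.Restart();
        for (int i = 0; i < rounds; i++)
            rsa.SignData(data, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        Console.WriteLine($"RSA-2048 sign: {sw.Elapsed.TotalMilliseconds / rounds:F3} ms/op");
    }
}
```

As the message grows, the fixed per-signature cost is amortised and the two numbers converge towards the raw hashing speed, which is the point made above.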

Can encrypting (MD5) multiple times improve security?

I saw someone who encrypts users' passwords multiple times with MD5 to improve security. I'm not sure whether this works, but it doesn't look good. So, does it make sense?
Let's assume the hash function you use would be a perfect one-way function. Then you can view its output like that of a "random oracle": its output values lie in a finite range (2^128 for MD5).
Now what happens if you apply the hash multiple times? The output will still stay in the same range (2^128). It's like saying "Guess my random number!" twenty times, each time thinking of a new number - that doesn't make it harder or easier to guess. There isn't anything "more random" than random. That's not a perfect analogy, but I think it helps to illustrate the problem.
Considering brute-forcing a password, your scheme doesn't add any security at all. Even worse, the only thing you could "accomplish" is to weaken security by introducing some possibility of exploiting the repeated application of the hash function. It's unlikely, but at least it's guaranteed that you won't gain anything.
So why is not all lost with this approach? It's because of the point others made about using thousands of iterations instead of just twenty. Why is this a good thing, slowing the algorithm down? It's because most attackers will try to gain access using a dictionary (or a rainbow table built from often-used passwords), hoping that one of your users was negligent enough to use one of those (I'm guilty; at least Ubuntu told me so upon installation). On the other hand, it's inhumane to require your users to remember, let's say, 30 random characters.
That's why we need some form of trade-off between passwords that are easy to remember and, at the same time, making them as hard as possible for attackers to guess. There are two common practices: salts, and slowing the process down by applying lots of iterations of some function instead of a single iteration. PKCS#5 is a good example to look into.
In your case, applying MD5 20,000 times instead of 20 would slow down attackers using a dictionary significantly, because each of their candidate passwords would have to go through the same procedure of being hashed 20,000 times in order to be useful for the attack. Note that this procedure does not affect brute-forcing, as illustrated above.
But why is using a salt still better? Because even if you apply the hash 20,000 times, a resourceful attacker could pre-compute a large database of passwords, hashing each of them 20,000 times, effectively generating a customized rainbow table specifically targeted at your application. Having done this they could quite easily attack your application or any other application using your scheme. That's why you also need a salt, so that this high cost has to be paid per password rather than once up front, making such rainbow tables impractical to use.
If you want to be on the really safe side, use something like PBKDF2 illustrated in PKCS#5.
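To make the contrast concrete, here is a sketch of both approaches in C#. It uses SHA-256 rather than MD5, the 20,000 iteration count mirrors the figure discussed above rather than being a recommendation, and the NaiveIterated helper exists only to illustrate the idea; in real code you would use the standard construction (PBKDF2) directly.

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class Stretching
{
    // Naive key stretching: repeatedly re-hash salt + previous digest.
    // Shown only to illustrate the idea; prefer a standard KDF such as PBKDF2.
    public static byte[] NaiveIterated(string password, byte[] salt, int iterations)
    {
        using var sha = SHA256.Create();
        byte[] state = Encoding.UTF8.GetBytes(password);
        for (int i = 0; i < iterations; i++)
            state = sha.ComputeHash(salt.Concat(state).ToArray());
        return state;
    }

    // The standardized way (PKCS#5 / RFC 2898): PBKDF2 with a configurable iteration count.
    public static byte[] Pbkdf2(string password, byte[] salt, int iterations)
    {
        using var kdf = new Rfc2898DeriveBytes(password, salt, iterations, HashAlgorithmName.SHA256);
        return kdf.GetBytes(32);
    }
}
```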
Hashing a password is not encryption. It is a one-way process.
Check out security.stackexchange.com and the password-related questions. They are so popular that we put together this blog post specifically to help people find useful questions and answers.
This question specifically discusses using md5 20 times in a row - check out Thomas Pornin's answer. Key points in his answer:
20 is too low; it should be 20,000 or more - password processing is still too fast
There is no salt: an attacker may attack passwords with very low per-password cost, e.g. rainbow tables - which can be created for any number of MD5 cycles
Since there is no sure test for knowing whether a given algorithm is secure or not, inventing your own cryptography is often a recipe for disaster. Don't do it.
There is such a question on crypto.SE but it is NOT public now. The answer by Paŭlo Ebermann is:
For password-hashing, you should not use a normal cryptographic hash, but something made specially to protect passwords, like bcrypt.
See How to safely store a password for details.
The important point is that password crackers don't have to brute-force the hash output space (2^160 for SHA-1), but only the password space, which is much, much smaller (depending on your password rules - and often dictionaries help). Thus we don't want a fast hash function, but a slow one. Bcrypt and friends are designed for this.
And a similar question has these answers:
The question is "Guarding against cryptanalytic breakthroughs: combining multiple hash functions"
Answer by Thomas Pornin:
Combining is what SSL/TLS does with MD5 and SHA-1, in its definition of its internal "PRF" (which is actually a Key Derivation Function). For a given hash function, TLS defines a KDF which relies on HMAC, which relies on the hash function. Then the KDF is invoked twice, once with MD5 and once with SHA-1, and the results are XORed together. The idea was to resist cryptanalytic breaks in either MD5 or SHA-1. Note that XORing the outputs of two hash functions relies on subtle assumptions. For instance, if I define SHB-256(m) = SHA-256(m) XOR C, for a fixed constant C, then SHB-256 is as good a hash function as SHA-256; but the XOR of both always yields C, which is not good at all for hashing purposes. Hence, the construction in TLS is not really sanctioned by the authority of science (it just happens not to have been broken). TLS 1.2 does not use that combination anymore; it relies on the KDF with a single, configurable hash function, often SHA-256 (which is, in 2011, a smart choice).
As @PulpSpy points out, concatenation is not a good generic way of building hash functions. This was published by Joux in 2004 and then generalized by Hoch and Shamir in 2006, for a large class of constructions involving iterations and concatenations. But mind the fine print: this is not really about surviving weaknesses in hash functions, but about getting your money's worth. Namely, if you take a hash function with a 128-bit output and another with a 160-bit output, and concatenate the results, then collision resistance will be no worse than the strongest of the two; what Joux showed is that it will not be much better either. With 128+160 = 288 bits of output, you could aim at 2^144 resistance, but Joux's result implies that you will not go beyond about 2^87.
So the question becomes: is there a way, if possible an efficient way, to combine two hash functions such that the result is as collision-resistant as the strongest of the two, but without incurring the output enlargement of concatenation? In 2006, Boneh and Boyen published a result which simply states that the answer is no, subject to the condition of evaluating each hash function only once. Edit: Pietrzak lifted the latter condition in 2007 (i.e. invoking each hash function several times does not help).
And by PulpSpy:
I'm sure @Thomas will give a thorough answer. In the interim, I'll just point out that the collision resistance of your first construction, H1(M)||H2(M), is surprisingly not that much better than just H1(M). See section 4 of this paper:
http://web.cecs.pdx.edu/~teshrim/spring06/papers/general-attacks/multi-joux.pdf
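The XOR caveat in Thomas Pornin's answer above is easy to demonstrate. If you define the hypothetical SHB-256(m) = SHA-256(m) XOR C for a fixed constant C, each function is fine on its own, but XORing the two outputs collapses to C for every message. A short sketch (the constant C is arbitrary; SHA256.HashData and Convert.ToHexString assume .NET 5+):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class XorCaveat
{
    static byte[] Xor(byte[] a, byte[] b) => a.Zip(b, (x, y) => (byte)(x ^ y)).ToArray();

    static void Main()
    {
        byte[] c = Enumerable.Repeat((byte)0xAB, 32).ToArray(); // fixed constant C

        foreach (var msg in new[] { "hello", "world", "anything at all" })
        {
            byte[] sha = SHA256.HashData(Encoding.UTF8.GetBytes(msg));
            byte[] shb = Xor(sha, c);        // "SHB-256" is as strong as SHA-256 on its own...
            byte[] combined = Xor(sha, shb); // ...but the XOR of the two always yields C
            Console.WriteLine($"{msg,-16} -> {Convert.ToHexString(combined)}");
        }
        // Every line prints the same constant: the combined output carries no information about the message.
    }
}
```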
No, it's not good practice. You must use a salt for your hashing, because otherwise the password can be cracked with those rainbow tables.

Reasons why SHA512 is superior to MD5

I was wondering if I could get reasons, or links to resources explaining, why SHA512 is a superior hashing algorithm to MD5.
It depends on your use case. You can't broadly claim "superiority". (I mean, yes you can, in some cases, but to be strict about it, you can't really).
But there are areas where MD5 has been broken:
For starters: MD5 is old and common. There are tons of rainbow tables against it, and they're easy to find. So if you're hashing passwords with MD5 (without a salt - shame on you!), you might as well not be hashing them: they're that easy to look up. The same is largely true even with simple salts.
Second, MD5 is no longer secure as a cryptographic hash function (indeed, it is not even considered a cryptographic hash function anymore, as the Forked One points out). You can generate different messages that hash to the same value. So if you've got an SSL certificate with an MD5 hash on it, I can generate a duplicate certificate that says what I want and produces the same hash. This is generally what people mean when they say MD5 is 'broken' - things like this.
Thirdly, similar to messages, you can also generate different files that hash to the same value, so using MD5 as a file checksum is 'broken'.
Now, SHA-512 is a SHA-2 family hash algorithm. SHA-1 is kind of considered 'eh' these days; I'll ignore it. SHA-2, however, has relatively few attacks against it. The major one Wikipedia talks about is a reduced-round preimage attack, which means that if you use SHA-512 in a horribly wrong way, I can break it. Obviously you're not likely to be using it that way, but attacks only get better, and it's a good springboard into more research towards breaking SHA-512 in the same way MD5 is broken.
However, out of all the hash functions available, the SHA-2 family is currently among the strongest, and the best choice considering commonness, analysis, and security. (But not necessarily speed. If you're in embedded systems, you need to perform a whole other analysis.)
MD5 has been cryptographically broken for quite some time now. This basically means that some of the properties usually guaranteed by hash algorithms no longer hold. For example, it is possible to find hash collisions in much less time than the output length would theoretically require.
SHA-512 (one of the SHA-2 family of hash functions) is, for now, secure enough, but possibly not for much longer. That's why NIST started a contest for SHA-3.
Generally, you want hash algorithms to be one-way functions. They map some input to some output. Usually the output is of a fixed length, thereby providing a "digest" of the original input. Common properties are, for example, that small changes in input yield large changes in the output (which helps detect tampering) and that the function is not easily reversible. For the latter property, the length of the output greatly helps, because it provides a theoretical upper bound for the complexity of a collision attack. However, flaws in design or implementation often result in reduced complexity for attacks. Once those are known, it's time to evaluate whether the hash function should still be used. If the attack complexity drops far enough, practical attacks easily get within range of people without specialized computing equipment.
Note: I've been talking about only one kind of attack here. The reality is much more nuanced, but also much harder to grasp. Since hash functions are very commonly used for verifying file/message integrity, the collision issue is probably the easiest one to understand and follow.
There are a couple of points not being addressed here, and I feel it is from a lack of understanding about what a hash is, how it works, and how long it takes to successfully attack one, using rainbow tables or any other method currently known to man...
Mathematically speaking, MD5 is not "broken" for this purpose: if you salt the hash and throttle attempts (even by one second), your security would be about as "broken" as a one-foot solid steel wall being slowly pelted by an attacker with a wooden spoon:
It will take thousands of years, and by then everyone involved will be dead; there are more important things to worry about.
If you lock their account by the 20th attempt... problem solved. 20 hits on your wall = a 0.0000000001% chance they got through. There is literally a better statistical chance that you are in fact Jesus.
It's also important to note that absolutely any hash function is going to be vulnerable to collisions by virtue of what a hash is: "a (small) unique id of something else".
When you increase the bit space you decrease collision rates, but you also increase the size of the id and the time it takes to compute it.
Let's do a tiny thought experiment...
A hypothetical 2-bit hash would have four total possible unique IDs for something else... 00, 01, 10 and 11. It would produce collisions, obviously. Do you see the issue here? A hash is just a generated ID of what you're trying to identify.
MD5 is actually really, really good at randomly choosing a number based on an input. SHA is not much better at that; SHA just has a massively larger space of IDs.
The method used is about 0.1% of the reason the collisions are less likely. The real reason is the larger bit space.
This is literally the only reason SHA-256 and SHA-512 are less vulnerable to collisions: they use a larger space for a unique ID.
The actual methods SHA-256 and SHA-512 use to generate the hash are in fact better, but not by much; the same rainbow attacks would work on them if they had fewer bits in their IDs, and files and even passwords can still have identical IDs under SHA-256 and SHA-512 - it's just a lot less likely because they use more bits.
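The "larger bit space" argument can be quantified with the birthday bound: for n random inputs and a b-bit digest, the probability of any accidental collision is roughly n(n-1)/2^(b+1). A small sketch of that calculation (double precision is accurate enough at these magnitudes; the value n = 1,000,000 is just an example):

```csharp
using System;

class BirthdayBound
{
    // Approximate collision probability for n items in a b-bit digest space: n(n-1) / 2^(b+1).
    static double Collision(double n, int bits) => n * (n - 1) / (2.0 * Math.Pow(2, bits));

    static void Main()
    {
        double n = 1_000_000; // e.g. one million hashed identifiers or files
        Console.WriteLine($"MD5     (128-bit): ~{Collision(n, 128):E2}");
        Console.WriteLine($"SHA-256 (256-bit): ~{Collision(n, 256):E2}");
        Console.WriteLine($"SHA-512 (512-bit): ~{Collision(n, 512):E2}");
        // All three are astronomically small for *accidental* collisions; MD5's real problem is
        // deliberately constructed collisions, which output-size arithmetic does not capture.
    }
}
```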
The REAL ISSUE is how you implement your security
If you allow automated attacks to hit your authentication endpoint 1,000 times per second, you're going to get broken into. If you throttle to 1 attempt per 3 seconds and lock the account for 24 hours after the 10th attempt, you're not.
If you store the passwords without salt (a salt is just extra data mixed into the input, which prevents bad passwords like "31337" or "password" from being identified via pre-computed tables) and have a lot of users, you're going to get hacked. If you salt them, even if you use MD5, you're not.
Considering MD5 uses 128 bits (32 hex characters, 16 raw bytes) and SHA-512 is only 4x that size but virtually eliminates the collision ratio by giving you 2^384 times more possible IDs... go with SHA-512, every time.
But if you're worried about what is really going to happen if you use MD5, and you don't understand the real, actual differences, you're still probably going to get hacked, make sense?
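For a concrete feel of the size difference mentioned above, this tiny sketch prints the MD5 and SHA-512 digests of the same input: 32 hex characters versus 128 (the static HashData and ToHexString helpers assume .NET 5+, and the sample input is arbitrary):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class DigestSizes
{
    static void Main()
    {
        byte[] input = Encoding.UTF8.GetBytes("My$uper$ecur3PAS$word+448");

        string md5Hex = Convert.ToHexString(MD5.HashData(input));        // 128 bits -> 32 hex chars
        string sha512Hex = Convert.ToHexString(SHA512.HashData(input));  // 512 bits -> 128 hex chars

        Console.WriteLine($"MD5     ({md5Hex.Length,3} hex chars): {md5Hex}");
        Console.WriteLine($"SHA-512 ({sha512Hex.Length,3} hex chars): {sha512Hex}");
    }
}
```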
From reading this:
However, it has been shown that MD5 is not collision resistant.
More information about collisions here.
MD5 has a chance of collision (http://www.mscs.dal.ca/~selinger/md5collision/) and there are numerous MD5 rainbow tables for reverse password look-up on the web and available for download.
SHA-512, by contrast, needs a much larger dictionary to map backwards and has a lower chance of collision.
It is simple: MD5 is broken ;) (see Wikipedia)
Bruce Schneier wrote of the attack that "[w]e already knew that MD5 is a broken hash function" and that "no one should be using MD5 anymore."