Can creators of RSA read all encoded messages? [closed] - cryptography

According to this page http://en.wikipedia.org/wiki/RSA_numbers each RSA version uses one single constant long number which is hard to factor.
Is this right?
For example, RSA-100 uses number
1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139
which was factored in 1991.
Meanwhile RSA-210 uses number
245246644900278211976517663573088018467026787678332759743414451715061600830038587216952208399332071549103626827191679864079776723243005600592035631246561218465817904100131859299619933817012149335034875870551067
which was not factored yet.
My question is: doesn't this mean that the CREATORS of any specific RSA version KNOW the factors and can consequently READ all encoded messages? If they don't know the factorization, then how could they generate the number?

Those numbers are just sample challenge numbers, published by RSA Laboratories to gauge the practical difficulty of factoring. The RSA asymmetric-key algorithm itself relies for its security on the difficulty of factoring numbers of a large size.
The approximate time and difficulty of factoring these challenge numbers is an indicator of how other numbers of similar size, used in actual keys, will fare against the computational power we have.
These numbers, which were published as challenges, are described as follows (quoting from the reference):
The RSA challenge numbers were generated using a secure process that guarantees that the factors of each number cannot be obtained by any method other than factoring the published value. No one, not even RSA Laboratories, knows the factors of any of the challenge numbers. The generation took place on a Compaq laptop PC with no network connection of any kind. The process proceeded as follows:
First, 30,000 random bytes were generated using a ComScire QNG hardware random number generator, attached to the laptop's parallel port.
The random bytes were used as the seed values for the B_GenerateKeyPair function, in version 4.0 of the RSA BSAFE library.
The private portion of the generated keypair was discarded. The public portion was exported, in DER format, to a disk file.
The moduli were extracted from the DER files and converted to decimal for posting on the Web page.
The laptop's hard drive was destroyed.
When it becomes fairly trivial and quick to reliably factor numbers of a particular size, it usually means it is time to move to a longer modulus.
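To make concrete why knowing the factors would matter, here is a toy sketch (with tiny, utterly insecure numbers) of how whoever knows the primes p and q behind a modulus can derive the private exponent and decrypt; all values are illustrative:

    # Toy illustration (NOT secure): knowing the factors p and q of the
    # public modulus n is exactly what yields the RSA private key.
    p, q = 61, 53                  # tiny primes, for illustration only
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)        # Euler's totient; requires knowing p and q
    e = 17                         # public exponent, coprime to phi
    d = pow(e, -1, phi)            # private exponent: modular inverse (Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)    # anyone can encrypt with the public (n, e)
    recovered = pow(ciphertext, d, n)  # only a holder of d (i.e. of p and q) can decrypt
    assert recovered == message

The challenge numbers above are not used in anyone's keys; each real key pair has its own freshly generated p and q, known only to the key's owner.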

Look at the paper "Ron was wrong, Whit is right". It is a detailed analysis of duplicate RSA key use and of RSA keys that share common factors (the problem you describe). There is a lot in the article but, to quote from its conclusion:
We checked the computational properties of millions of public keys
that we collected on the web. The majority does not seem to suffer from
obvious weaknesses and can be expected to provide the expected level
of security. We found that on the order of 0.003% of public keys is
incorrect, which does not seem to be unacceptable.
Yes, it is a problem, and the problem will continue to grow, but the sheer number of possible keys means it is not too serious, at least not yet. Note that the article does not cover the increasing ease of brute-forcing shorter RSA keys, either.
Note that this is not an issue with the RSA algorithm or the random number generators used to generate keys (although the paper does mention that seeding may still be an issue). It is the difficulty of checking a newly generated key against an ever-expanding list of existing keys from an arbitrary, sometimes disconnected device. This differs from the known weak keys for DES, for example, where the weak keys are known up front.
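The shared-factor problem the paper describes is also easy to sketch: if two public moduli were generated with a poor random number generator and happen to share one prime, a single GCD computation reveals that prime and breaks both keys. A minimal illustration with toy numbers (a real scan over millions of keys would use a batch-GCD algorithm rather than this quadratic pairwise loop):

    from math import gcd

    # Toy moduli: n1 and n2 accidentally share the prime factor 101.
    n1 = 101 * 103
    n2 = 101 * 107
    n3 = 109 * 113   # an unrelated, healthy key

    moduli = [n1, n2, n3]
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if g > 1:
                # g is a prime factor of both moduli, so both keys are now fully factored.
                print(f"keys {i} and {j} share factor {g}: "
                      f"{moduli[i]} = {g} * {moduli[i] // g}, "
                      f"{moduli[j]} = {g} * {moduli[j] // g}")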


Is truncating sha2/sha3 to 16 bytes worse than using crc32 which itself gives 16 bytes to begin with? [closed]

I am using AES128 in CBC mode, and I need a 16-byte key, so I was wondering if using sha2 or sha3 and then truncating it to 16 bytes (take first 16 bytes from the left) would make sha2/sha3 weaker than crc32 which gives me 16 bytes out of the box.
Each bit of a cryptographically secure hash is effectively random (i.e. independent of all the other bits). This is not true of non-cryptographic hashes. This property is critical for a secure key. You should always use a cryptographic hash for key derivation.
Truncating a long secure hash is a perfectly acceptable way to create a secure hash of shorter length. You may also select any subset of bits rather than just the most significant or least significant. If this weren't true, then the original hash would not itself be secure, because it would suggest some non-randomness in the output.
SHA-2 and SHA-3 are intended to be cryptographically secure hashes (and at this point, we believe they are). CRC does not even attempt to be cryptographically secure.
If the input key material is not itself random, then a fast hash like the SHA series may be subject to brute force. If so, then you need to use key stretching as well as hashing, for example with PBKDF2.
But you should never use CRC for any of this. It is not intended to be a secure hash.
For more discussion, see Should I use the first or last bits from a SHA-256 hash? and “SHA-256” vs “any 256 bits of SHA-512”, which is more secure?
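As a minimal sketch of the truncation itself (assuming, per the caveats above, that the input keying material is already high-entropy and not a guessable password):

    import hashlib
    import zlib

    ikm = b"some already high-entropy input keying material"   # assumption: truly random input

    # Truncating SHA-256: any 16 of its 32 output bytes make a fine AES-128 key;
    # taking the first 16 is simply the conventional choice.
    aes128_key = hashlib.sha256(ikm).digest()[:16]
    assert len(aes128_key) == 16

    # For contrast: CRC-32 outputs only 32 bits (4 bytes) and is a linear, easily
    # reversible checksum, so it is unusable as key material in any case.
    crc = zlib.crc32(ikm).to_bytes(4, "big")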
I am using AES128 in CBC mode, and I need a 16-byte key, so I was wondering if using sha2 or sha3 and then truncating it to 16 bytes (take first 16 bytes from the left) would make sha2/sha3 weaker than crc32 which gives me 16 bytes out of the box.
The question was not clear about how the input to the CRC or SHA-x is generated; the OP clarified further in a comment. So I've split the answer into parts.
I mean regardless of the input (say the input was even abcd ), would truncating sha2/3 to 16 bytes be more secure than using crc32.
First of all, forget CRC: it is not a cryptographic hash function.
When the input space is small, there is a special case of the pre-image attack on hash functions: the attacker can simply try all possible inputs to recover the key. You can read more details in these Cryptography.SE Q&As:
Secure hashing when the input comes from a small space
Is it easy to crack a hashed phone number?
Don't forget about the small input space: entities like Bitcoin miners or supercomputers like Summit can reach 2^64 evaluations very easily, and 2^64 corresponds to only 8 bytes of input.
One should generate a strong password with a scheme like Diceware or BIP-39. These give you passwords that are both easy to remember and strong. See also the XKCD comic on password strength.
Once you have generated a good password, you can pass it through a KDF; instead of a poor man's KDF1, better to use HKDF. Since your input material already has good entropy, you can even skip the extract step of HKDF. You can also use a password-based key derivation function like scrypt, PBKDF2, or Argon2; in that case, choose Argon2, since it won the Password Hashing Competition in July 2015.
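A sketch of the password-based route using only the Python standard library; the passphrase, salt handling, and iteration count are all illustrative, not a recommendation:

    import hashlib
    import secrets

    # A Diceware/BIP-39 style passphrase (illustrative value).
    passphrase = b"correct horse battery staple"

    # The salt is not secret, but it must be random and stored alongside the ciphertext.
    salt = secrets.token_bytes(16)

    # PBKDF2-HMAC-SHA256; tune the iteration count to your hardware and latency budget.
    aes128_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=16)
    assert len(aes128_key) == 16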
I was just trying to encrypt data like sounds for a game with AES 128, and was just wondering if using only 16 bytes of the hashed password-like key with sha2/3 was a more secure solution than using a whole crc32 output instead. I've never worked with any of the other methods you mentioned...
For proper use of CBC mode you also need an IV. You can use HKDF, PBKDF2, Argon2, etc. with a different info/context input to derive the IV, too; this is very common.
Note the following about CBC:
The IV must be unique under the same key, i.e. a (key, IV) pair must be used only once.
The CBC IV must be unpredictable; as far as I can see, though, this is not an issue in your case.
CBC is vulnerable to padding oracle attacks on the server side. This also does not apply in your case.
CBC mode only provides CPA security; there is no integrity or authentication. To get integrity and authentication, either add an HMAC with a different key, or use a combined mode.
Better yet, use an Authenticated Encryption with Associated Data (AEAD) mode such as AES-GCM or ChaCha20-Poly1305. Using GCM correctly can be hard; ChaCha20-Poly1305 or XChaCha20-Poly1305 are better choices if you want to generate nonces randomly.
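To illustrate the "different info" idea for deriving the encryption key, a MAC key, and a per-message IV from one master secret, here is a hedged sketch of HKDF-Expand (RFC 5869) built from the standard library's HMAC; the labels and lengths are made up for the example:

    import hashlib
    import hmac
    import secrets

    def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
        # Minimal HKDF-Expand (RFC 5869) over HMAC-SHA256; prk must already be a strong key.
        okm, block, counter = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([counter]), "sha256").digest()
            okm += block
            counter += 1
        return okm[:length]

    master = secrets.token_bytes(32)   # e.g. the PBKDF2/Argon2 output from the sketch above

    enc_key = hkdf_expand(master, b"example-app encryption key", 16)   # AES-128 key
    mac_key = hkdf_expand(master, b"example-app HMAC key", 32)         # separate key for the HMAC

    # Derive a fresh IV per message by mixing a never-repeating message number into info,
    # so the (key, IV) pair is never reused and the IV stays unpredictable to outsiders.
    message_number = 1
    iv = hkdf_expand(master, b"example-app CBC IV" + message_number.to_bytes(8, "big"), 16)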

How long does it take to generate large prime numbers? [closed]

Using a C implementation of bigint without any assembly, SSE, etc., running on a 2 GHz dual-core Pentium laptop: what is the average time one should expect prime number generation to take?
Is it normal for primes greater than 512 bits to take 30 seconds?
What about 2048, 4096 bits, etc.?
From security.stackexchange question 56214:
I recently generated some custom Diffie-Hellman parameters which are basically just long (in the below case 4096-bit) primes.
As the generation took roughly 2 hours it cannot be something that is generated on the fly...
Is this typical? 2 hours to generate a 4096-bit key?
No, that is definitely not typical.
Generation of large random primes depends on the following:
the speed and entropy within the random number generator
the algorithm used to test the candidates for primality
the implementation
and luck
The random number generator used is very important. Especially for long-term keys, you may require a random bit generator that contains a large amount of entropy. This can be achieved by accessing /dev/random on Linux operating systems, for instance. There is one unfortunate problem: /dev/random may block until sufficient entropy is gathered. Depending on the system, that can be very fast or very, very slow.
Now, the second factor is the algorithm. When generating new DH parameters, a method that generates a so-called safe prime is usually used. Generating safe primes is much, much harder than generating a number that is merely a probable prime. However, that prime is only used for the DH parameters, not the key pair itself, so generating a safe prime is generally not needed; you can simply use a set of pre-calculated or even named parameters.
The implementation can make a bit of a difference as well. Although it won't change the order of complexity, it may still influence the result if the implementation is a thousand times slower than a fast one. These kinds of differences are not unheard of within cryptography; a slow, interpreted language may be much slower than a hardware-accelerated version, or a version running directly on the vector instructions of the CPU or indeed on a GPU.
Furthermore, the only way to see if a number is prime is to test it. In practice there is no method of directly generating primes; you generate candidates and test them. The problem is that although there are many, many primes available, it can still take a long time to find one. This is where the luck factor comes in: it could be that the first number you test is prime, but it could also be that you run through oodles of numbers before finding one. So in the end the runtime of the procedure is non-deterministic.
For a C program, generating a safe prime of 4096 bits in over 2 hours seems a bit much. However, if it runs on a very old CPU without any SSE, it would not necessarily mean that anything is fundamentally wrong. Taking 30 seconds for a 512-bit prime, on the other hand, is very long: the OpenSSL command line takes only between 0.015 (lucky) and 1.5 (unlucky) seconds on my laptop (but that's a Core i7).
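For a ballpark on your own machine, here is a hedged sketch of the usual generate-and-test loop (random odd candidate plus Miller-Rabin) in pure Python; absolute timings will differ from a C bigint implementation, but the shape of the cost, and the luck factor, are the same:

    import random
    import time

    def is_probable_prime(n: int, rounds: int = 40) -> bool:
        # Miller-Rabin probabilistic primality test.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:
            d, r = d // 2, r + 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def random_prime(bits: int) -> int:
        # Note: use a CSPRNG (e.g. the secrets module) for real keys; random is fine for a benchmark.
        while True:
            candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # correct size, odd
            if is_probable_prime(candidate):
                return candidate

    start = time.perf_counter()
    random_prime(512)
    print(f"512-bit probable prime found in {time.perf_counter() - start:.2f} s")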
Notes:
RSA generally requires two primes that are half the key size, and these are usually not safe primes. So generating a DH key pair (with new parameters) will take much longer than generating an RSA key pair of the same size.
If possible try to use predefined DH parameters. Unfortunately the openssl command line doesn't seem to support named DH parameters; this is only supported for DSA key pairs.
If you want speed, try Elliptic Curve DH with predefined parameters. Generating a key is almost as fast as just generating 256 random bits (for the P-256 curve, for instance). And until quantum computers come of age, those keys will be much stronger than DH keys on top of it.

Tips to generate strong RSA keys [closed]

Is there any documentation with tips on generating strong RSA keys?
I mean not just "use XXX utility with -X flag".
I mean some rules in theory, for example, that the modulus n should be no less than 1024 bits, etc.
Can anybody tell me?
In answer to your question, there is such documentation:
Strong primes are required by the ANSI X9.31 standard for use in generating RSA keys for digital signatures. This makes the factorization of n = pq using Pollard's p − 1 algorithm computationally infeasible. However, strong primes do not protect against modulus factorization using newer algorithms such as Lenstra's elliptic curve factorization and the Number Field Sieve.
The version 4 RSA Laboratories’ Frequently Asked Questions About Today’s Cryptography was published in 1998 and can be found here ftp://ftp.rsa.com/pub/labsfaq/labsfaq4.pdf
Please pay attention to the following questions:
Question 3.1.4. What are strong primes and are they necessary for RSA?
In the literature pertaining to RSA, it has often been suggested that in choosing a key pair, one should use so-called "strong" primes p and q to generate the modulus n. Strong primes have certain properties that make the product n hard to factor by specific factoring methods; such properties have included, for example, the existence of a large prime factor of p-1 and a large prime factor of p+1. The reason for these concerns is some factoring methods (for instance, the Pollard p-1 and p+1 methods, see Question 2.3.4) are especially suited to primes p such that p-1 or p+1 has only small factors; strong primes are resistant to these attacks.
However, advances in factoring over the last ten years appear to have obviated the advantage of strong primes; the elliptic curve factoring algorithm is one such advance. The new factoring methods have as good a chance of success on strong primes as on "weak" primes. Therefore, choosing traditional "strong" primes alone does not significantly increase security. Choosing large enough primes is what matters. However, there is no danger in using strong, large primes, though it may take slightly longer to generate a strong prime than an arbitrary prime.
It is possible new factoring algorithms may be developed in the future which once again target primes with certain properties. If this happens, choosing strong primes may once again help to increase security.
Question 3.1.5. How large a key should be used in RSA?
The size of an RSA key typically refers to the size of the modulus n. The two primes, p and q, which compose the modulus, should be of roughly equal length; this makes the modulus harder to factor than if one of the primes is much smaller than the other. If one chooses to use a 768-bit modulus, the primes should each have length approximately 384 bits. If the two primes are extremely close (identical except for, say, 100 - 200 bits), or more generally, if their difference is close to any predetermined amount, then there is a potential security risk, but the probability that two randomly chosen primes are so close is negligible.
The best size for an RSA modulus depends on one's security needs. The larger the modulus, the greater the security, but also the slower the RSA operations. One should choose a modulus length upon consideration, first, of the value of the protected data and how long it needs to be protected, and, second, of how powerful one's potential threats might be.
As of 2010, the largest factored RSA number was 768 bits long (232 decimal digits). Its factorization, by a state-of-the-art distributed implementation, took around fifteen hundred CPU years (two years of real time, on many hundreds of computers). This means that, at this date, no larger RSA key has been factored. In practice, RSA keys are typically 1024 to 2048 bits long. Some experts believe that 1024-bit keys may become breakable in the near future; few see any way that 4096-bit keys could be broken in the foreseeable future. Therefore, it is generally presumed that RSA is secure if n is sufficiently large.
Key strength generally follows current state of the art computing power. Key size is only part of a security plan. You also need to consider secure storage of your keys and how often you change keys.
Basically, you need to pick the widest key width that is compatible with the software you'll be using.
Currently, it is a good rule of thumb to go with minimum 2048-bit RSA as of 2014. It does depend on:
Speed and frequency of use
What you are protecting
Max width supported by your software
If having your key cracked is just an inconvenience that doesn't impact your finances or health, then you can err on the side of convenience. But if you really care about privacy, use the strongest key you can stand (no less than 2048).
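If you just want to put the "2048 bits or more" advice into practice from code, here is a sketch using the pyca/cryptography package (one of several reasonable options; the passphrase is a placeholder):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # 65537 is the conventional public exponent; 2048 bits is a sensible minimum today.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Store the private key encrypted under a strong passphrase.
    pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"a-strong-passphrase"),
    )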
A good doc is the OpenPGP Best Practices
https://we.riseup.net/riseuplabs+paow/openpgp-best-practices

Encrypting(MD5) multiple times can improve security?

I saw some guy who encrypts users' passwords multiple times with MD5 to improve security. I'm not sure if this works, but it doesn't look good. So, does it make sense?
Let's assume the hash function you use would be a perfect one-way function. Then you can view its output like that of a "random oracle", its output values are in a finite range of values (2^128 for MD5).
Now what happens if you apply the hash multiple times? The output will still stay in the same range (2^128). It's like saying "Guess my random number!" twenty times, each time thinking of a new number - that doesn't make it harder or easier to guess. There isn't anything "more random" than random. That's not a perfect analogy, but I think it helps to illustrate the problem.
Considering brute-forcing a password, your scheme doesn't add any security at all. Even worse, the only thing you could "accomplish" is to weaken security by introducing some possibility of exploiting the repeated application of the hash function. It's unlikely, but at least it's guaranteed that you won't gain anything.
So why is all not lost with this approach? It's because of the point the others made about using thousands of iterations instead of just twenty. Why is slowing the algorithm down a good thing? Because most attackers will try to gain access using a dictionary (or a rainbow table of often-used passwords), hoping that one of your users was negligent enough to use one of those (I'm guilty; at least Ubuntu told me so upon installation). On the other hand, it's inhumane to require your users to remember, let's say, 30 random characters.
That's why we need some form of trade-off between easy to remember passwords but at the same time making it as hard as possible for attackers to guess them. There are two common practices, salts and slowing the process down by applying lots of iterations of some function instead of a single iteration. PKCS#5 is a good example to look into.
In your case, applying MD5 20,000 times instead of 20 would slow down attackers using a dictionary significantly, because each of their candidate passwords would have to go through the same procedure of being hashed 20,000 times in order to still be useful for the attack. Note that this procedure does not affect brute-forcing, as illustrated above.
But why is using a salt still better? Because even if you apply the hash 20000 times, a resourceful attacker could pre-compute a large database of passwords, hashing each of them 20000 times, effectively generating a customized rainbow table specifically targeted at your application. Having done this they could quite easily attack your application or any other application using your scheme. That's why you also need to generate a high cost per password, to make such rainbow tables impractical to use.
If you want to be on the really safe side, use something like PBKDF2 illustrated in PKCS#5.
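A minimal sketch of the salt-plus-many-iterations idea (PBKDF2, as in PKCS#5) with Python's standard library; the iteration count and example passwords are illustrative:

    import hashlib
    import hmac
    import secrets

    ITERATIONS = 600_000   # illustrative; tune so one hash takes a noticeable fraction of a second

    def hash_password(password: str) -> tuple:
        # Return (salt, digest); both are stored, and neither needs to stay secret on its own.
        salt = secrets.token_bytes(16)   # unique random salt per user defeats precomputed tables
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, expected)   # constant-time comparison

    salt, stored = hash_password("hunter2")
    assert verify_password("hunter2", salt, stored)
    assert not verify_password("wrong guess", salt, stored)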
Hashing a password is not encryption. It is a one-way process.
Check out security.stackexchange.com, and the password related questions. They are so popular we put together this blog post specifically to help individuals find useful questions and answers.
This question specifically discusses using md5 20 times in a row - check out Thomas Pornin's answer. Key points in his answer:
20 is too low, it should be 20000 or more - password processing is still too fast
There is no salt: an attacker may attack passwords with very low per-password cost, e.g. rainbow tables - which can be created for any number of md5 cycles
Since there is no sure test for knowing whether a given algorithm is secure or not, inventing your own cryptography is often a recipe for disaster. Don't do it
There is such a question on crypto.SE but it is NOT public now. The answer by Paŭlo Ebermann is:
For password-hashing, you should not use a normal cryptographic hash,
but something made specially to protect passwords, like bcrypt.
See How to safely store a password for details.
The important point is that password crackers don't have to brute-force the hash output space (2^160 for SHA-1), but only the password space, which is much, much smaller (depending on your password rules - and often dictionaries help). Thus we don't want a fast hash function, but a slow one. Bcrypt and friends are designed for this.
And similar question has these answers:
The question is "Guarding against cryptanalytic breakthroughs: combining multiple hash functions"
Answer by Thomas Pornin:
Combining is what SSL/TLS does with MD5 and SHA-1, in its
definition of its internal "PRF" (which is actually a Key Derivation
Function). For a given hash function, TLS defines a KDF which
relies on HMAC which relies on the hash function. Then the KDF is
invoked twice, once with MD5 and once with SHA-1, and the results are
XORed together. The idea was to resist cryptanalytic breaks in either
MD5 or SHA-1. Note that XORing the outputs of two hash functions
relies on subtle assumptions. For instance, if I define SHB-256(m) =
SHA-256(m) XOR C, for a fixed constant C, then SHB-256 is as
good a hash function as SHA-256; but the XOR of both always yields
C, which is not good at all for hashing purposes. Hence, the
construction in TLS is not really sanctioned by the authority of
science (it just happens not to have been broken). TLS-1.2 does
not use that combination anymore; it relies on the KDF with a single,
configurable hash function, often SHA-256 (which is, in 2011, a smart
choice).
As @PulpSpy points out, concatenation is not a good generic way of building hash functions. This was published by Joux in 2004 and then generalized by Hoch and Shamir in 2006, for a large class of constructions involving iterations and concatenations. But mind the fine print: this is not really about surviving weaknesses in hash functions, but about getting your money's worth. Namely, if you take a hash function with a 128-bit output and another with a 160-bit output, and concatenate the results, then collision resistance will be no worse than the strongest of the two; what Joux showed is that it will not be much better either. With 128+160 = 288 bits of output, you could aim at 2^144 resistance, but Joux's result implies that you will not go beyond about 2^87.
So the question becomes: is there a way, if possible an efficient
way, to combine two hash functions such that the result is as
collision-resistant as the strongest of the two, but without incurring
the output enlargement of concatenation ? In 2006, Boneh and
Boyen have published a result which simply states that the answer
is no, subject to the condition of evaluating each hash function only
once. Edit: Pietrzak lifted the latter condition in 2007
(i.e. invoking each hash function several times does not help).
And by PulpSpy:
I'm sure @Thomas will give a thorough answer. In the interim, I'll just point out that the collision resistance of your first construction, H1(m)||H2(m), is surprisingly not that much better than just H1(m). See section 4 of this paper:
http://web.cecs.pdx.edu/~teshrim/spring06/papers/general-attacks/multi-joux.pdf
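Thomas Pornin's SHB-256 counterexample above is easy to see concretely; a toy sketch (SHB-256 is a made-up function, exactly as in the quote, and C is an arbitrary constant):

    import hashlib

    C = bytes(range(32))   # an arbitrary fixed 32-byte constant

    def sha256(m: bytes) -> bytes:
        return hashlib.sha256(m).digest()

    def shb256(m: bytes) -> bytes:
        # SHB-256(m) = SHA-256(m) XOR C: taken on its own, just as good a hash as SHA-256.
        return bytes(a ^ b for a, b in zip(sha256(m), C))

    for m in (b"hello", b"world", b"anything at all"):
        combined = bytes(a ^ b for a, b in zip(sha256(m), shb256(m)))
        assert combined == C   # the XOR "combination" collapses to the constant C for every input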
No, it's not good practice. You must use a salt for your password hashing, because otherwise the passwords can be cracked with rainbow tables.

Are there public key cryptography algorithms that are provably NP-hard to defeat? [closed]

Should practical quantum computing become a reality, I am wondering if there are any public key cryptographic algorithms that are based on NP-complete problems, rather than integer factorization or discrete logarithms.
Edit:
Please check out the "Quantum computing in computational complexity theory" section of
the wiki article on quantum computers. It points out that the class of problems quantum computers can answer (BQP) is believed to be strictly easier than NP-complete.
Edit 2:
'Based on NP-complete' is a bad way of expressing what I'm interested in.
What I intended to ask is for a Public Key encryption algorithm with the property that any method for breaking the encryption can also be used to break the underlying NP-complete problem. This means breaking the encryption proves P=NP.
I am responding to this old thread because it is a very common and important question, and all of the answers here are inaccurate.
The short answer to the original question is an unequivocal "NO". There are no known encryption schemes (let alone public-key ones) that are based on an NP-complete problem (and hence on all of them, under polynomial-time reductions). Some are "closer" than others, though, so let me elaborate.
There is a lot to clarify here, so let's start with the meaning of "based on an NP-complete problem." The generally agreed upon interpretation of this is: "can be proven secure in a particular formal model, assuming that no polynomial-time algorithms exist for NP-complete problems". To be even more precise, we assume that no algorithm exists that always solves an NP-complete problem. This is a very safe assumption, because that's a really hard thing for an algorithm to do - it's seemingly a lot easier to come up with an algorithm that solves random instances of the problem with good probability.
No encryption schemes have such a proof, though. If you look at the literature, with very few exceptions (see below), the security theorems read like the following:
Theorem: This encryption scheme is provably secure, assuming that no
polynomial-time algorithm exists for
solving random instances of some problem X.
Note the "random instances" part. For a concrete example, we might assume that no polynomial-time algorithm exists for factoring the product of two random n-bit primes with some good probability. This is very different (less safe) from assuming that no polynomial-time algorithm exists for always factoring all products of two random n-bit primes.
The "random instances" versus "worst case instances" issue is what is tripped up several responders above. The McEliece-type encryption schemes are based on a very special random version of decoding linear codes - and not on the actual worst-case version which is NP-complete.
Pushing beyond this "random instances" issue has required some deep and beautiful research in theoretical computer science. Starting with the work of Miklós Ajtai, we have found cryptographic algorithms where the security assumption is a "worst case" (safer) assumption instead of a random case one. Unfortunately, the worst case assumptions are for problems that are not known to be NP complete, and some theoretical evidence suggests that we can't adapt them to use NP-complete problems. For the interested, look up "lattice based cryptography".
Some cryptosystems based on NP-hard problems have been proposed (such as the Merkle-Hellman cryptosystem based on the subset-sum problem, and the Naccache-Stern knapsack cryptosystem based on the knapsack problem), but they have all been broken. Why is this? Lecture 16 of Scott Aaronson's Great Ideas in Theoretical Computer Science says something about this, which I think you should take as definitive. What it says is the following:
Ideally, we would like to construct a [Cryptographic Pseudorandom Generator] or cryptosystem whose security was based on an NP-complete problem. Unfortunately, NP-complete problems are always about the worst case. In cryptography, this would translate to a statement like “there exists a message that’s hard to decode”, which is not a good guarantee for a cryptographic system! A message should be hard to decrypt with overwhelming probability. Despite decades of effort, no way has yet been discovered to relate worst case to average case for NP-complete problems. And this is why, if we want computationally-secure cryptosystems, we need to make stronger assumptions than P≠NP.
This was an open question in 1998:
On the possibility of basing Cryptography on the assumption that P != NP
by Oded Goldreich and Shafi Goldwasser (Rehovot, Israel)
From the abstract: "Our conclusion is that the question remains open".
--I wonder if that's changed in the last decade?
Edit:
As far as I can tell the question is still open, with recent progress toward an answer of no such algorithm exists.
Adi Akavia, Oded Goldreich, Shafi Goldwasser, and Dana Moshkovitz published this paper in the ACM in 2006: On basing one-way functions on NP-hardness "Our main findings are the following two negative results"
The Stanford-hosted Complexity Zoo site is helpful in deciphering what those two negative results mean.
While many forms have been broken, check out Merkle-Hellman, based on a form of the NP-complete 'Knapsack Problem'.
Lattice cryptography offers the (over)generalized take-home message that indeed one can design cryptosystems where breaking the average case is as hard as solving a particular NP-hard problem (typically the Shortest Vector Problem or the Closest Vector Problem).
I can recommend reading the introduction section of http://eprint.iacr.org/2008/521 and then chasing references to the cryptosystems.
Also, see the lecture notes at http://www.cs.ucsd.edu/~daniele/CSE207C/, and chase links for a book if you want.
Googling for NP-complete and public key encryption finds false positives... that are actually insecure. This cartoonish PDF appears to show a public key encryption algorithm based on the minimum dominating set problem. Reading further, it then admits that it lied about the algorithm being secure... the underlying problem is NP-complete, but its use in the PK algorithm does not preserve the difficulty.
Another False positive Google find: Cryptanalysis of the Goldreich-Goldwasser-Halevi cryptosystem from Crypto '97. From the abstract:
At Crypto '97, Goldreich, Goldwasser and Halevi proposed a public-key cryptosystem based on the closest vector problem in a lattice, which is known to be NP-hard. We show that there is a major flaw in the design of the scheme which has two implications: any ciphertext leaks information on the plaintext, and the problem of decrypting ciphertexts can be reduced to a special closest vector problem which is much easier than the general problem.
There is a web site that may be relevant to your interests: Post-Quantum Cryptography.
Here is my reasoning. Correct me if I'm wrong.
(i) "Breaking" a cryptosystem is necessarily a problem in NP and in co-NP. (Breaking a cryptosystem involves inverting the encryption function, which is one-to-one and computable in polynomial time. So, given the ciphertext, the plaintext is a certificate that can be verified in polynomial time. Thus recovering the plaintext from the ciphertext is in NP and in co-NP.)
(ii) If there is an NP-hard problem in NP and co-NP, then NP = co-NP. (This problem would be NP-complete and in co-NP. Since any NP language is reducible to this co-NP language, NP is a subset of co-NP. Now use symmetry: any language L in co-NP has its complement -L in NP, whence -L is in co-NP; that is, L = --L is in NP.)
(iii) I think it is generally believed that NP != co-NP, as otherwise there would be polynomial-sized proofs that boolean formulas are not satisfiable.
Conclusion: complexity-theoretic conjectures imply that NP-hard cryptosystems don't exist. (Otherwise, you would have an NP-hard problem in NP and co-NP, whence NP = co-NP, which is believed to be false.)
While RSA and other widely-used cryptographic algorithms are based on the difficulty of integer factorization (which is not known to be NP-complete), there are some public key cryptography algorithms based on NP-complete problems too. A google search for "public key" and "np-complete" will reveal some of them.
(I incorrectly said before that quantum computers would speed up NP-complete problems, but this is not true. I stand corrected.)
As pointed out by many other posters, it is possible to base cryptography on NP-hard or NP-complete problems.
However, the common methods for cryptography are going to be based on difficult mathematics (difficult to crack, that is). The truth is that it is easier to serialize numbers as a traditional key than to create a standardized string that solves an NP-hard problem. Therefore, practical crypto is based on mathematical problems that are not yet proven to be NP-hard or NP-complete (so it is conceivable that some of these problems are in P).
Breaking ElGamal encryption requires computing discrete logarithms (RSA, by contrast, rests on the hardness of integer factoring), so look at this Wikipedia article:
No efficient algorithm for computing general discrete logarithms log_b(g) is known. The naive algorithm is to raise b to higher and higher powers k until the desired g is found; this is sometimes called trial multiplication. This algorithm requires running time linear in the size of the group G and thus exponential in the number of digits in the size of the group. There is, however, an efficient quantum algorithm due to Peter Shor (http://arxiv.org/abs/quant-ph/9508027).
Computing discrete logarithms is apparently difficult. Not only is no efficient algorithm known for the worst case, but the average-case complexity can be shown to be at least as hard as the worst case using random self-reducibility.
At the same time, the inverse problem of discrete exponentiation is not (it can be computed efficiently using exponentiation by squaring, for example). This asymmetry is analogous to the one between integer factorization and integer multiplication. Both asymmetries have been exploited in the construction of cryptographic systems.
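That asymmetry is easy to see in code: the forward direction is fast modular exponentiation (square-and-multiply, Python's built-in pow), while the only generic way back is a search whose cost grows with the group size, i.e. exponentially in the bit length. The toy group below is far too small to be secure and is purely illustrative:

    # Toy illustration of the discrete-log asymmetry (insecure, tiny parameters).
    p = 2_147_483_647          # a prime modulus (2^31 - 1); real systems use much larger groups
    b = 16807                  # base (a primitive root of p)
    k = 123_456_789            # the secret exponent

    g = pow(b, k, p)           # forward direction: fast modular exponentiation

    def naive_discrete_log(b: int, g: int, p: int) -> int:
        # Trial multiplication: raise b to higher and higher powers until g appears.
        x = 1
        for exponent in range(p):   # time linear in the group size, exponential in its bit length
            if x == g:
                return exponent
            x = (x * b) % p
        raise ValueError("no discrete log found")

    # naive_discrete_log(b, g, p) would eventually return k, but even in this toy
    # 31-bit group that can mean on the order of a billion multiplications.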
These problems are widely believed to be hard, though they are not known to be NP-complete (and are generally believed not to be). Note that quantum computers can break this kind of crypto efficiently!
Since nobody really answered the question, I have to give you the hint: "McEliece". Do some searches on it. It is an encryption algorithm built on the NP-hard problem of decoding general linear codes (though, as noted above, the scheme itself relies on random instances rather than worst-case ones). It needs O(n^2) encryption and decryption time, and it has a public key of size O(n^2) too, which is bad. But there are improvements which lower all these bounds.