Why is it that if I change a single bit of an RSA-encrypted message and then try to decrypt it using the appropriate key, I get a padding error? Does anyone know?
Padding has a certain format which can be checked during decryption. If the format doesn't match due to some (malicious) manipulation then the decryption must fail.
RSA has a long history with padding schemes. Textbook RSA (no padding applied) can be attacked very easily in some scenarios. With PKCS#1 v1.5 padding we already have some static or random padding applied (depending on the version of the standard), but it uses specific marker bytes which might result in accidental padding matches during decryption. OAEP improved the situation: after a successful OAEP decryption you can be very sure that the ciphertext was not manipulated.
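To make the marker idea concrete, here is a pure-Python sketch of the PKCS#1 v1.5 padding format check (no actual RSA involved; the function names and the toy 32-byte block size are mine):

```python
import os

def pkcs1_v15_pad(message: bytes, k: int) -> bytes:
    """Toy PKCS#1 v1.5 encryption padding: 00 02 <nonzero random PS> 00 <msg>."""
    ps_len = k - 3 - len(message)
    assert ps_len >= 8, "message too long for this modulus size"
    ps = bytes(b % 255 + 1 for b in os.urandom(ps_len))  # nonzero random bytes
    return b"\x00\x02" + ps + b"\x00" + message

def pkcs1_v15_unpad(block: bytes) -> bytes:
    """Check the marker bytes; raise if the format does not match."""
    if len(block) < 11 or block[0] != 0x00 or block[1] != 0x02:
        raise ValueError("padding error")
    sep = block.find(b"\x00", 2)
    if sep == -1 or sep < 10:  # PS must be at least 8 bytes long
        raise ValueError("padding error")
    return block[sep + 1:]

block = pkcs1_v15_pad(b"secret", 32)
assert pkcs1_v15_unpad(block) == b"secret"

# Flipping one ciphertext bit scrambles the entire decrypted block, so the
# 00 02 ... 00 structure is almost certainly destroyed; simulated here by
# corrupting the padded block directly:
corrupted = bytes([block[0] ^ 0x01]) + block[1:]
try:
    pkcs1_v15_unpad(corrupted)
except ValueError as err:
    print(err)  # padding error
```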
Is it possible to look at a PDF and tell what encryption level was used to encrypt it?
We are proposing to set an AES encryption level in Word by group policy, and wanted to confirm this was actually used on the resultant document.
The default AES key size used in Word is 128 bits, which we wanted to increase to 256.
Thanks for any ideas on how to test this.
No.
With AES it is impossible to inspect encrypted data and determine the key length used to encrypt it. AES has the same 16-byte block size for all key sizes, so you will merely see n × 16 bytes of apparently random data.
So, I'm using the RijndaelManaged class (.NET 2.0) to do AES-128 CBC encryption on small strings (around a dozen characters or less) in a config file. I've got everything working properly except that when I decrypt the data, the padding bytes are not removed. I understand I can choose not to do any padding, but that is VERY insecure, and the padding bytes need to be added because that's how AES works (in discrete block sizes). Right now I'm using PaddingMode.ISO10126 to let the CryptoStream automatically append cryptographically random bytes.
What is the industry-standard way of handling this? What's the right way of getting rid of these "extra bytes" on decryption?
The best way of getting rid of padding is of course to use PKCS#7 padding instead, and let the cipher instance get rid of the padding, as GregS suggested.
The best way of performing encryption nowadays is to use CTR mode encryption instead, or preferably a mode that includes authentication/integrity protection, such as GCM. Note that with small strings you need to take care not to reveal information through the size of the ciphertext: CTR mode encryption of "yes" produces three bytes of ciphertext, while "no" produces two.
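For reference, PKCS#7 padding is simple enough to sketch in a few lines. This is Python rather than C# (in .NET you would just set PaddingMode.PKCS7 on the RijndaelManaged instance and let the CryptoStream strip it for you), but it shows why the removal is unambiguous: the last byte always tells you how many bytes to drop.

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """Append n bytes, each of value n, where 1 <= n <= block_size."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes, block_size: int = 16) -> bytes:
    """Strip the padding; the last byte says how many bytes to remove."""
    n = data[-1]
    if not 1 <= n <= block_size or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return data[:-n]

padded = pkcs7_pad(b"hello")     # appends 11 bytes, each 0x0b
assert len(padded) == 16
assert pkcs7_unpad(padded) == b"hello"
```

Note that when the input is already block-aligned, a full block of padding is added, so unpadding never has to guess whether padding is present.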
Just working on an algorithm, and so far I can encrypt and decrypt a number, which works fine. My question now is: how do I go about encrypting an image? How does the UIdata look, and should I convert the image to that before I start? Never done anything on this level in terms of encryption, and any input would be great! Thanks!
You'll probably want to encrypt in small chunks - perhaps a byte or word/int (4 bytes), maybe even a long (8 bytes) at a time depending on how your algorithm is implemented.
I don't know the signature of your algorithm (i.e. what types of input it takes and what types of output it gives), but the most common ciphers are block ciphers: algorithms which take an input of some fixed block size (nowadays 128 bits = 16 bytes is common) and produce a same-sized output, in addition to a key input (which should also be at least 128 bits).
To encrypt longer pieces of data (and actually, also for short pieces if you send multiple such pieces with the same key), you use a mode of operation (and probably additionally a padding scheme). This gives you an algorithm (or a pair of such) with an arbitrary-length plaintext input and a slightly larger ciphertext output (which the decryption algorithm then undoes).
Some hints:
Don't use ECB mode (i.e. simply encrypting each block independently of the others).
Probably you should also apply a MAC, to protect your data against malicious modifications (and also against breaking of the encryption scheme by chosen-ciphertext attacks). Some modes of operation already include a MAC.
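As a stdlib-only illustration of the MAC hint (Python; the "ciphertext" below is just a stand-in, since the standard library has no block cipher): compute an HMAC over the ciphertext and verify it before decrypting anything, i.e. encrypt-then-MAC.

```python
import hmac, hashlib, os

MAC_KEY = os.urandom(32)  # in practice, a key separate from the encryption key

def protect(ciphertext: bytes) -> bytes:
    """Encrypt-then-MAC: append an HMAC-SHA256 tag over the ciphertext."""
    return ciphertext + hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()

def verify(blob: bytes) -> bytes:
    """Recompute and compare the tag in constant time; reject on mismatch."""
    ciphertext, tag = blob[:-32], blob[-32:]
    good = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        raise ValueError("MAC check failed: ciphertext was modified")
    return ciphertext

blob = protect(b"opaque ciphertext from your cipher")
assert verify(blob) == b"opaque ciphertext from your cipher"

# Any single-bit change is detected before decryption is even attempted:
tampered = bytes([blob[0] ^ 0x01]) + blob[1:]
try:
    verify(tampered)
except ValueError as err:
    print(err)  # MAC check failed: ciphertext was modified
```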
I have been searching for a few days now, but I cannot find a big-O notation algorithm for encrypting, decrypting, or attempting to break an encrypted file (brute force) making use of public key encryption. I am attempting to determine the big-O notation of an idea I have developed that makes heavy use of public key encryption.
What are these Big-O algorithms as related to public key encryption:
A) Encrypt a file made up of N characters with an L length key
B) Decrypt that same file
C) A typical brute force algorithm to break an encrypted file with N characters and with a maximum key length of L
Any included Big-O notations for more efficient algorithms for breaking the encryption would be appreciated. Also, reference to wherever this material can be found.
Sorry to ask a question that I really should be able to find on my own, but I haven't managed to come across what I am looking for.
Standard public/private key algorithms are almost never used on large inputs, as the security properties of these algorithms are generally not suitable for bulk encryption. The most common configuration is to use a public/private key algorithm to encrypt a small (constant-size, usually 128 - 256 bit) key, then use that key for a symmetric encryption algorithm.
That being said, I'll use RSA as a test case for the rest of the questions:
A/B) Setting aside key generation, RSA encrypts and decrypts in O(n) for the size of the message. (Note that each RSA operation processes a block no larger than the key size, so smaller messages are padded and larger messages must be broken up.) The exact speed of encryption/decryption depends on the algorithms used by your RSA implementation, but it is polynomial in the key size:
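A toy textbook-RSA example makes the per-block cost visible (tiny primes for illustration only; real RSA needs large random primes and a padding scheme such as OAEP):

```python
# Classic small-number RSA example: p = 61, q = 53.
p, q = 61, 53
n = p * q                    # 3233, the modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (2753); Python 3.8+

# Each RSA operation handles one message block m < n:
m = 65
c = pow(m, e, n)             # encrypt: m^e mod n  -> 2790
assert pow(c, d, n) == m     # decrypt: c^d mod n

# A message of N blocks therefore costs N modular exponentiations: O(N)
# in the message length, with each exponentiation polynomial in key size.
```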
http://www.javamex.com/tutorials/cryptography/rsa_key_length.shtml
C) Given a public key, RSA can be cracked by factoring the modulus, which is currently best accomplished using the general number field sieve, GNFS (roughly O(exp((7.1 b)^(1/3) (log b)^(2/3))) for a b-bit modulus). I don't believe there's much work on cracking RSA based on encrypted data alone, as the public key is a much more useful target.
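To get a feel for that growth rate, you can plug key sizes into the GNFS estimate. This is a rough asymptotic formula with the o(1) term dropped, so it is only useful for comparing key sizes, not for absolute predictions:

```python
import math

def gnfs_work(bits: int) -> float:
    """Rough GNFS work factor for a b-bit modulus, using the standard
    L-notation estimate exp((64/9 * ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9 * ln_n) ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Doubling the key size multiplies the estimated work enormously:
ratio = gnfs_work(2048) / gnfs_work(1024)
print(f"2048-bit vs 1024-bit work ratio: ~2^{math.log2(ratio):.0f}")
# -> roughly 2^30: a 2048-bit key is about a billion times harder to factor
```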
When using AES encryption, plaintext must be padded to the cipher block size. Most libraries and standards use padding where the padding bytes can be determined from the unpadded plaintext length. Is there a benefit to using random padding bytes when possible?
I'm implementing a scheme for storing sensitive per-user and per-session data. The data will usually be JSON-encoded key-value pairs, and can be potentially short and repetitive. I'm looking to PKCS#5 for guidance, but I planned on using AES for the encryption algorithm rather than DES3. I was planning on a random IV per data item, and a key determined by the user ID and password or a session ID, as appropriate.
One thing that surprised me is the PKCS#5 padding scheme for the plaintext. To pad the plaintext to 8-byte blocks, 1 to 8 bytes are added at the end, with each padding byte's value equal to the number of padding bytes (i.e. 01, 0202, 030303, up to 0808080808080808). My own padding scheme was to use random bytes at the front of the plaintext, with the last byte of the padded plaintext recording the number of padding bytes added.
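The two schemes side by side, sketched in Python (function names are mine; the 8-byte block matches the PKCS#5/DES context above):

```python
import os

def pkcs5_pad(data: bytes) -> bytes:
    """Deterministic PKCS#5: append n bytes of value n (1 <= n <= 8)."""
    n = 8 - (len(data) % 8)
    return data + bytes([n]) * n

def random_front_pad(data: bytes) -> bytes:
    """The scheme described above: random bytes in front, count byte at the end."""
    n = 8 - ((len(data) + 1) % 8)
    if n == 0:
        n = 8  # always at least one random byte plus the count byte
    return os.urandom(n) + data + bytes([n])

def random_front_unpad(padded: bytes) -> bytes:
    n = padded[-1]
    return padded[n:-1]

assert pkcs5_pad(b"hello") == b"hello\x03\x03\x03"
assert random_front_unpad(random_front_pad(b"hello")) == b"hello"
```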
My reasoning was that in AES-CBC mode, each block is a function of the ciphertext of the preceding block. This way, each plaintext would have an element of randomness, giving me another layer of protection from known plaintext attacks, as well as IV and key issues. Since my plaintext is expected to be short, I don't mind holding the whole decrypted string in memory, and slicing padding off the front and back.
One drawback would be the same unpadded plaintext, IV, and key would result in different ciphertext, making unit testing difficult (but not impossible - I can use a pseudo-random padding generator for testing, and a cryptographically strong one for production).
Another would be that, to enforce random padding, I'd have to add a minimum of two bytes - a count and one random byte. For deterministic padding, the minimum is one byte, either stored with the plaintext or in the ciphertext wrapper.
Since a well-regarded standard like PKCS#5 decided to use deterministic padding, I'm wondering if there is something else I missed, or I'm judging the benefits too high.
Both, I suspect. The benefit is fairly minimal.
You have forgotten about the runtime cost of acquiring or generating cryptographic-quality random numbers. At one extreme, when only a finite supply of randomness is available (/dev/random on some systems, for instance), your code may have to wait a long time for more random bytes.
At the other extreme, when you are getting your random bytes from a PRNG, you could expose yourself to problems if you're using the same random source to generate your keys. If you're sending encrypted data to multiple recipients one after another, you have given the previous recipient a whole bunch of information about the state of the PRNG which will be used to pick the key for your next comms session. If your PRNG algorithm is ever broken, which is IMO more likely than a good plaintext attack on full AES, you're much worse off than if you had used deliberately-deterministic padding.
In either case, however you get the padding, it's more computationally intensive than PKCS#5 padding.
As an aside, it is fairly standard to compress potentially-repetitive data with e.g. deflate before encrypting it; this reduces the redundancy in the data, which can make certain attacks more difficult to perform.
One last recommendation: deriving the key with a mechanism in which only the username and password vary is very dangerous. If you are going to use it, make sure you use a hash algorithm with no known flaws (not SHA-1, not MD5), and preferably a salted, iterated key-derivation function rather than a bare hash. cf this slashdot story
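On that last point, here is a stdlib Python sketch using PBKDF2 (the password, salt size, and iteration count are illustrative): the per-user random salt prevents precomputed-table attacks, and the iteration count slows each brute-force guess.

```python
import hashlib, os

# Per-user random salt: store it alongside the ciphertext; it need not be secret.
salt = os.urandom(16)
password = b"correct horse battery staple"

# 200,000 iterations slows each guess; dklen=32 yields a 256-bit key.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000, dklen=32)
assert len(key) == 32

# Same password, different salt -> unrelated key, so identical passwords
# across users (or sessions) no longer produce identical keys:
other = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 200_000, dklen=32)
assert key != other
```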
Hope this helps.