I would like to generate a key for symmetrically encrypted (AES) communication with others. Is it secure to use a random number generator, for example /dev/urandom as provided by Linux?
Yes, that's how it's usually done. Just make sure your system is properly seeded. Most distributions do this automatically, but if you're not sure, you have two choices:
1) If you only need a few bytes and only rarely, you can use /dev/random.
2) When your program first starts up, read /proc/sys/kernel/random/entropy_avail. If it's greater than 512, you have nothing to worry about. You can read from /dev/urandom all you want and the results will be secure.
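A minimal sketch in Python of what that looks like in practice (the entropy_avail path and the 512 threshold mirror the advice above; on modern kernels, os.urandom() is safe once the system has finished booting):

```python
import os

# Optional sanity check on Linux, following the advice above; the path
# and the 512 threshold come from this answer, not from any standard.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    if int(f.read()) <= 512:
        raise RuntimeError("kernel entropy pool may not be seeded yet")

# Generate a 16-byte (128-bit) AES key from the kernel CSPRNG
# (/dev/urandom on Linux).
key = os.urandom(16)
```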
I am using AES128 in CBC mode, and I need a 16-byte key, so I was wondering if using sha2 or sha3 and then truncating it to 16 bytes (take first 16 bytes from the left) would make sha2/sha3 weaker than crc32 which gives me 16 bytes out of the box.
Each bit of a cryptographically secure hash is effectively random (i.e. independent of all the other bits). This is not true of non-cryptographic hashes. This property is critical for a secure key. You should always use a cryptographic hash for key derivation.
Truncating a long secure hash is a perfectly acceptable way to create a secure hash of shorter length. You may also select any subset of bits rather than just the most significant or least significant. If this weren't true, then the original hash would not itself be secure, because it would suggest some non-randomness in the output.
SHA-2 and SHA-3 are designed to be cryptographically secure hashes (and at this point, we believe they are). CRC is not even designed to be cryptographically secure.
If the input key material is not itself random, then a fast hash like the SHA series may be subject to brute force. If so, then you need to use key stretching as well as hashing, for example with PBKDF2.
But you should never use CRC for any of this. It is not intended to be a secure hash.
For more discussion, see Should I use the first or last bits from a SHA-256 hash? and “SHA-256” vs “any 256 bits of SHA-512”, which is more secure?
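As a rough illustration, here is a Python sketch of both points above (not from the original answers; the salt and iteration count are placeholder values):

```python
import hashlib
import os

ikm = b"high-entropy input keying material"

# Truncating a cryptographic hash: any 16 of SHA-256's 32 output bytes
# are as good as any other 16.
key = hashlib.sha256(ikm).digest()[:16]

# If the input is a low-entropy password, use key stretching instead,
# e.g. PBKDF2 as the answer suggests.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"password", salt, 600_000, dklen=16)
```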
The question did not make clear how the input to the CRC or SHA-x is generated; the OP later clarified:
I mean regardless of the input (say the input was even abcd ), would truncating sha2/3 to 16 bytes be more secure than using crc32.
First of all, forget CRC: it is not a cryptographic hash function.
When the input space is small, hash functions are subject to a special case of the pre-image attack: the attacker can simply try all possible inputs to regenerate the key. You can read more details in these Cryptography.SE Q/As:
Secure hashing when the input comes from a small space
Is it easy to crack a hashed phone number?
Don't forget about the small input space! Entities like Bitcoin miners, or supercomputers like Summit, can reach 2^64 evaluations very easily; that is enough to exhaust every possible 8-byte input.
One should generate a strong password with a scheme like Diceware or BIP-39. These provide passwords that are both strong and easy to remember. See also the relevant XKCD.
Once you have generated a good password, you can pass it to a KDF: the poor man's KDF1 at a pinch, but better to use HKDF. Since your input keying material is good, you can skip the expand part of HKDF. You can also use password-based key derivation functions like scrypt, PBKDF2, or Argon2; in that case, choose Argon2, since it won the Password Hashing Competition in July 2015.
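A minimal sketch of the HKDF route in Python, assuming the pyca/cryptography package (the passphrase and info label are placeholders; for a low-entropy password, substitute Argon2 or PBKDF2 as recommended above):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# A Diceware/BIP-39-style passphrase (illustrative placeholder).
passphrase = b"correct horse battery staple"

# Derive a 16-byte AES-128 key. The salt need not be secret, but must
# be stored so the same key can be derived again later.
salt = os.urandom(16)
key = HKDF(
    algorithm=hashes.SHA256(),
    length=16,
    salt=salt,
    info=b"aes-128-key",
).derive(passphrase)
```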
I was just trying to encrypt data like sounds for a game with AES 128, and was just wondering if using only 16 bytes of the hashed password-like key with sha2/3 was a more secure solution than using a whole crc32 output instead. I've never worked with any of the other methods you mentioned...
For the proper use of CBC mode, you also need an IV. You can use HKDF or PBKDF2, Argon2, etc. with a different info/salt input to derive the IV as well; this is very common.
Note the following about CBC:
The IV must be unique under a given key, i.e. a (key, IV) pair must be used only once.
The CBC IV must be unpredictable to the attacker; however, as far as I can see, that is not a concern in your case.
CBC is vulnerable to padding oracle attacks on the server side; this is not possible in your case either.
CBC mode can only provide CPA security; there is no integrity or authentication. To provide integrity and authentication, either use HMAC with a different key, or use a combined mode.
Better, use an Authenticated Encryption with Associated Data (AEAD) mode such as AES-GCM or ChaCha20-Poly1305. Using GCM correctly can be hard; prefer ChaCha20-Poly1305 or XChaCha20-Poly1305, which handle randomly generated nonces better (see the sketch below).
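A minimal sketch of the AEAD route in Python, assuming the pyca/cryptography package:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()   # 32-byte key
aead = ChaCha20Poly1305(key)

nonce = os.urandom(12)                  # must never repeat under this key
ciphertext = aead.encrypt(nonce, b"game sound data", b"optional metadata")

# Decryption checks the Poly1305 tag and raises InvalidTag on tampering,
# giving you the integrity and authentication that CBC alone lacks.
plaintext = aead.decrypt(nonce, ciphertext, b"optional metadata")
```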
I am a complete newbie in server-side programming. Currently I am writing a service to store users' files sent from an iOS app. I would like to generate a unique id for each file and also use it as the file name. The problem is that many of the solutions I found online, such as using a hash function, carry a risk of collision. So what is the preferred way of doing this? I know AWS S3 generates a unique id for each file. How did they implement this?
Whatever programming language you use probably has a GUID (sometimes called UUID) library; these values can be considered universally unique. See https://en.wikipedia.org/wiki/Universally_unique_identifier
Hashing will not solve this problem at all, because the point of a hash is that two identical inputs produce identical outputs. Therefore, if two users upload ThisIsAFile.pdf, both will hash to, say, a89na3, and there will be a collision.
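For instance, in Python (a sketch; the file extension is illustrative):

```python
import uuid

# uuid4() draws from a CSPRNG; the collision probability is negligible
# for any realistic number of files.
file_id = str(uuid.uuid4())   # e.g. '3fa85f64-5717-4562-b3fc-2c963f66afa6'
filename = file_id + ".pdf"
```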
A possible way is to generate some wide random id. If you generate a random name of several dozen characters, like _5E960vkoXF8_6t2yfMbEM0A_6uBsy060PxH_2YKKKmZkTR6, the collision probability can be made small enough to be negligible (e.g. your system would need to run many billions of years to observe a single collision). If you want to estimate that probability, use the birthday problem approach.
(collisions are not always an issue, if you can make their probability tiny enough)
UUIDs exploit this idea, so the simplest way is to use a library function that generates them, e.g. uuid_generate. You may want to do likewise (that is, code your own random-id generator), but you need to be careful about randomness.
At the least, use a good PRNG seeded periodically (and at startup) with some real random noise, e.g. from /dev/random (read random(4) carefully...) or getrandom(2); for security-relevant ids, prefer a cryptographically secure generator over a plain Mersenne Twister. Or you could buy a hardware random source (like OneRNG).
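In Python, the standard library already wraps a CSPRNG, so a sketch of such an id generator is one line (the length is an illustrative choice):

```python
import secrets

# 36 random bytes -> 48 URL-safe characters; by the birthday bound, a
# collision at this size is astronomically unlikely.
file_id = secrets.token_urlsafe(36)
```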
BTW, if you assume that a file's contents never change (so each file is written once, at creation time), you could name files by a cryptographic hash of their contents (like SHA-256). Then, if two distinct users upload exactly the same content (for example, the text of GPLv3), you store it only once on your disk (in some shared file). The https://www.softwareheritage.org/ project uses such a technique.
(for cardinality reasons, collisions remain theoretically possible, but highly improbable)
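A sketch of that content-addressed naming in Python (the file name is a placeholder):

```python
import hashlib

def content_id(data: bytes) -> str:
    # Identical contents map to the same name, so duplicate uploads are
    # stored only once (content-addressable storage).
    return hashlib.sha256(data).hexdigest()

with open("upload.bin", "rb") as f:
    name = content_id(f.read())
```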
You don't want to make collisions mathematically impossible. You probably do want to make them very improbable: if the probability is less than 10^-50 (or even just 10^-30, which is about 2^-100), you probably should not care (since planet Earth will vanish before that collision is likely to happen).
I have recently done some work to upgrade the SSL keys for some web services we consume. I did not initiate the work, but it was to go from 1024 to 2048 bits.
When generating SSH keys I can specify the bit length with ssh-keygen -b 2048. But what are the benefits/deficits of a higher bit value? Are there any technical limits?
Why are we not all generating SSL keys with a bit depth of 1 billion?
I'm going to assume the keys are RSA since 2048 is a common size for RSA (but non-existent for ECDSA or EdDSA).
But what are the benefits/deficits of a higher bit value?
The benefit is the "strength" of the key, to put it simply: larger keys take longer to "crack". More specifically, breaking an RSA key requires factoring a very large number, and the larger the number, the harder it is to factor. This is the extent of what we know about factoring numbers: it cannot be done in polynomial time using readily available technology.
Larger keys are slower to use and require more memory. However, 2048 bits is considered the smallest "safe" size for RSA.
Are there any technical limits?
It depends on what is using the key. Speaking from experience, keys bigger than 4096 bits start running into software problems, because many implementations do not expect keys that large.
why are we not all generating ssl keys with a bit depth of 1 billion?
Well, a key of a billion bits (roughly 125 MB) would take a lot of memory to use. Secondly, RSA keys are not completely random numbers: they are made up of two prime numbers, p and q, whose product n is the modulus. Generating primes this large is quite a difficult task.
Finally, there is little security benefit once you go beyond a certain key size.
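To get a feel for the cost, here is a sketch in Python, assuming the pyca/cryptography package; try raising key_size and watch generation time grow:

```python
from cryptography.hazmat.primitives.asymmetric import rsa

# 2048 bits generates in well under a second on commodity hardware;
# each doubling makes prime generation markedly slower.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
print(private_key.key_size)
```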
Just working on an algorithm, and so far I can encrypt and decrypt a number, which works fine. My question now is: how do I go about encrypting an image? How does the UIdata look, and should I convert the image to that before I start? I've never done anything on this level in terms of encryption, and any input would be great! Thanks!
You'll probably want to encrypt in small chunks - perhaps a byte or word/int (4 bytes), maybe even a long (8 bytes) at a time depending on how your algorithm is implemented.
I don't know the signature of your algorithm (i.e. what types of input it takes and what types of output it gives), but the most common ciphers are block ciphers, i.e. algorithms that take an input of some fixed block size (nowadays 128 bits = 16 bytes is a common size) and produce a same-sized output, in addition to a key input (which should also be at least 128 bits).
To encrypt longer pieces of data (and actually also short pieces, if you send multiple such pieces with the same key), you use a mode of operation (and probably a padding scheme in addition). This gives you an algorithm (or a pair of them) with an arbitrary-length plaintext input and a slightly bigger ciphertext output (which the decryption algorithm then undoes).
Some hints:
Don't use ECB mode (i.e. simply encrypting each block independently of the others).
You probably should also apply a MAC, to protect your data against malicious modifications (and also against breaking of the encryption scheme by chosen-ciphertext attacks). Some modes of operation already include a MAC; a sketch using one such mode follows below.
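A sketch of that approach in Python, assuming the pyca/cryptography package (the file name is a placeholder; an image is just bytes):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

with open("image.png", "rb") as f:
    plaintext = f.read()        # the image's raw bytes

nonce = os.urandom(12)          # must never repeat under this key
ciphertext = aesgcm.encrypt(nonce, plaintext, None)  # includes the MAC tag
```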
I have lots of small secrets that I want to store encrypted in a database. The database client will have the keys, and the database server will not deal with encryption or decryption. All of my secrets are 16 bytes or less, which means just one block when using AES. I'm using a constant IV (and key) to make the encryption deterministic; my reason for wanting deterministic encryption is to be able to easily query the database by ciphertext, and to make sure the same secret is not stored twice (by making the column UNIQUE). As far as I can see there should be no problem doing this, as long as the key is secret. But I want to be sure: am I right or wrong? In case I'm wrong, what attacks could be done?
BTW: Hashes are quite useless here, because of the relatively small number of possible plaintexts. With a hash, it would be trivial to recover the original plaintext.
An ideal cipher, for messages of length n bits, is a permutation of the 2^n sequences of n bits, chosen at random among the (2^n)! such permutations. The "key" is the description of which permutation was chosen.
A secure block cipher is supposed to be indistinguishable from an ideal cipher, with n being the block size. For AES, n=128 (i.e. 16 bytes). AES is supposed to be a secure block cipher.
If all your secrets have length exactly 16 bytes (or less than 16 bytes, with some padding convention to unambiguously extend them to 16 bytes), then an ideal cipher is what you want, and AES "as itself" should be fine. With common AES implementations, which want to apply padding and process arbitrarily long streams, you can get a single-block encryption by asking for ECB mode, or CBC mode with an all-zero IV.
All the issues about IV, and why chaining modes such as CBC were needed in the first place, come from multi-block messages. AES encrypts 16-byte messages (no more, no less): chaining modes are about emulating an ideal cipher for longer messages. If, in your application, all messages have length exactly 16 bytes (or are shorter, but you add padding), then you just need the "raw" AES; and a fixed IV is a close enough emulation of raw AES.
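A sketch of that raw single-block (deterministic) encryption in Python, assuming the pyca/cryptography package (the key shown is a placeholder, not a real secret):

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes(range(16))   # placeholder only; use a real 16-byte secret key

def encrypt_block(block16: bytes) -> bytes:
    # Raw AES on exactly one 16-byte block ("ECB" over a single block).
    # Deterministic: the same plaintext always yields the same ciphertext,
    # which is what makes the UNIQUE column and equality lookups work.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block16) + enc.finalize()

ct = encrypt_block(b"secret 16 bytes!")   # input must be exactly 16 bytes
```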
Note, though, the following:
If you are storing encrypted elements in a database, and require uniqueness for the whole lifetime of your application, then your secret key is long-lived. Keeping a secret key secret for a long time can be a hard problem. For instance, long-lived secret keys need some kind of storage that survives reboots. How do you manage dead hard disks? Do you destroy them in an acid-filled cauldron?
Encryption ensures confidentiality, not integrity. In most security models, attackers can be active (i.e., if the attacker can read the database, he can probably write into it too). Active attacks open up a whole host of issues: for instance, what could happen if the attacker swaps some of your secrets within the database? Or alters some at random? Encryption is, as always, the easy part (not that it is really "easy", but it is much easier than the rest of the job).
If the assembly is publicly available, or can become so, your key and IV can be discovered by using Reflector to expose the source code that uses it. That would be the main problem with this, if the data really were secret. It is possible to obfuscate MSIL, but that just makes it harder to trace through; it still has to be computer-consumable, so you can't truly encrypt it.
The static IV would make your implementation vulnerable to frequency attacks. See: For AES CBC encryption, what's the importance of the IV?