Cryptographic Agility and Key Management - cryptography

I have a design question. I have a web application that uses the .NET encryption APIs to encrypt/decrypt data. The app currently uses old crypto algorithms such as MD5 and SHA-1, and it hard-codes the encryption keys in the production code.
I would like to:
1) Update the existing old algorithms (MD5 and SHA-1) to modern ones.
2) Move the encryption keys from source code to a secure share.
3) Be able to change the encryption keys easily and regularly.
My Design;
Algorithm Update
For the algorithm update: we currently use specific .NET implementations of the crypto algorithms, with classes like MD5CryptoServiceProvider or RijndaelManaged hard-coded throughout. I am going to remove the dependency on specific algorithms and make it more agile, like this:
HashAlgorithm algo = HashAlgorithm.Create(MyPreferredHash.ToString());
algo.ComputeHash(...);
The MyPreferredHash value will be loaded from a config file so that we can change it whenever we want to.
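A minimal sketch of that idea, assuming an appSettings entry (the setting name "PreferredHashAlgorithm" is just an illustration):

using System;
using System.Configuration;
using System.Security.Cryptography;

public static class HashProvider
{
    public static byte[] ComputeHash(byte[] data)
    {
        // The algorithm name ("SHA256", "SHA512", ...) comes from configuration,
        // so it can be changed without recompiling.
        string name = ConfigurationManager.AppSettings["PreferredHashAlgorithm"] ?? "SHA256";
        using (HashAlgorithm algo = HashAlgorithm.Create(name))
        {
            if (algo == null)
                throw new InvalidOperationException("Unknown hash algorithm: " + name);
            return algo.ComputeHash(data);
        }
    }
}

Note that HashAlgorithm.Create returns null for unrecognized names, so failing loudly there avoids silently falling back to a weak default.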
Question: Upgrading the code this way is easy. However, do you see any potential issues with changing crypto algorithms? We do not store any encrypted or hashed data anywhere, and the web application is stateless. All the hash values are generated, appended to URL strings, and verified on other pages, so no data is persisted. The exception is cookies: we encrypt a cookie, send it to the user, and decrypt it when the server receives it again. If the algorithm changes, I thought of destroying the cookie and sending a new one to the client. Is this reasonable? Are there any other issues you can think of?
Key Management
The second part of the design is to move the hard-coded keys from source code to a secure share. After this, I need to be able to roll out new encryption keys. Each encryption key will be associated with an expiration date. When we roll out a new encryption key, the new key will be used for both encryption and decryption. If decryption fails, we can try the old keys. Old keys will be used for decryption or verification until their expiration date; once they pass it, they should be retired.
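A rough sketch of that decrypt-with-fallback idea, assuming AES and a simple KeyEntry type with an expiration date (all names here are illustrative):

using System;
using System.Collections.Generic;
using System.Security.Cryptography;

public class KeyEntry
{
    public byte[] Key { get; set; }
    public DateTime ExpiresUtc { get; set; }
}

public static class KeyRing
{
    // keys is ordered newest first; only the newest key is used for encryption,
    // older (not yet expired) keys are kept around for decryption only.
    public static byte[] Decrypt(byte[] ciphertext, byte[] iv, IEnumerable<KeyEntry> keys)
    {
        foreach (KeyEntry entry in keys)
        {
            if (entry.ExpiresUtc < DateTime.UtcNow)
                continue; // past its expiration date: retired, skip it

            try
            {
                using (Aes aes = Aes.Create())
                using (ICryptoTransform dec = aes.CreateDecryptor(entry.Key, iv))
                    return dec.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
            }
            catch (CryptographicException)
            {
                // Wrong key (padding check failed) - fall through to an older key.
            }
        }
        throw new CryptographicException("No active key could decrypt the data.");
    }
}

In practice it is more robust to prepend a key identifier or version number to the ciphertext, or to use authenticated encryption, rather than relying on a padding error to detect the wrong key.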
For storage, I am thinking of keeping the encryption keys in a config file on the local machine, encrypted by a master key that resides on a secure share. Anybody who doesn't have access to that share will not be able to see the master key. The master key will be loaded from the secure share into the machine registry when the machine reboots. The encryption keys on the local machine will then be loaded from the local config file and decrypted with the master key in the registry.
This storage choice means we only need to keep a single master key on the secure share, and we also get a history of changes to the encryption keys, since the config files will be stored in the version control system.
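A sketch of that loading path, assuming a startup task has already copied the master key from the secure share into a registry value and the data keys sit base64-encoded (IV prepended) in the local config file; the registry path, value name, and blob layout are assumptions, not recommendations:

using System;
using System.Security.Cryptography;
using Microsoft.Win32;

public static class KeyLoader
{
    // Hypothetical location where the boot-time task put the master key.
    private const string RegPath = @"HKEY_LOCAL_MACHINE\SOFTWARE\MyApp";

    public static byte[] DecryptDataKey(string encryptedKeyBase64)
    {
        byte[] masterKey = (byte[])Registry.GetValue(RegPath, "MasterKey", null);
        if (masterKey == null)
            throw new InvalidOperationException("Master key has not been loaded.");

        // The config file stores each data key as base64(IV || ciphertext).
        byte[] blob = Convert.FromBase64String(encryptedKeyBase64);
        using (Aes aes = Aes.Create())
        {
            byte[] iv = new byte[aes.BlockSize / 8];
            Buffer.BlockCopy(blob, 0, iv, 0, iv.Length);
            using (ICryptoTransform dec = aes.CreateDecryptor(masterKey, iv))
                return dec.TransformFinalBlock(blob, iv.Length, blob.Length - iv.Length);
        }
    }
}

DPAPI (machine scope) is another option for protecting the master key at rest once it is on the box, instead of keeping it in the registry in the clear.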
The challenging part is the key change/update.
What is the recommended key rotation approach here for a distributed web application? If we do a partial deployment after a release, not all machines will have the same config file content (e.g. a newly added encryption key). A full site deployment can take 1-2 weeks, so another concern is whether we should wait for all deployments to complete before the new keys become active.
Any other feedback?

You are quite right to design your app to be agile in the face of unknown future attacks on particular encryption algorithms.
The simplest way to future-proof your app in a robust way would seem to be to switch to a standard data format for your encrypted information, and use a standard library to do the heavy lifting. The choice of a specific standard depends on what kind of data you're working with, but there are good candidates to choose from. Then, when a future attack appears, you can just change some parameters or update to the latest version of the implementation.
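As one hedged example of that approach, .NET's CMS/PKCS#7 support (EnvelopedCms) embeds the algorithm identifiers in the message itself, so the consuming code never needs them hard-coded; a certificate-based sketch:

using System.Security.Cryptography.Pkcs;
using System.Security.Cryptography.X509Certificates;

public static class CmsExample
{
    public static byte[] Encrypt(byte[] plaintext, X509Certificate2 recipientCert)
    {
        // The encoded blob records which algorithms were used.
        var envelope = new EnvelopedCms(new ContentInfo(plaintext));
        envelope.Encrypt(new CmsRecipient(recipientCert));
        return envelope.Encode();
    }

    public static byte[] Decrypt(byte[] encoded)
    {
        var envelope = new EnvelopedCms();
        envelope.Decode(encoded);
        envelope.Decrypt(); // locates a matching private key in the certificate store
        return envelope.ContentInfo.Content;
    }
}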
Doing crypto is very tricky. Best to leave it up to the experts.

Related

Having trouble understanding where to store private keys

I am having an issue determining how to store API keys or other private information correctly. I have a helper library for a large set of internal company applications that calls an external API for emailing, so it must have access to the API key. I store this key in a shared configuration file used by these applications. If I try to further secure the key by encrypting it or moving it to a service like Azure Key Vault, I am stuck with the dilemma of having simply moved the problem, because I now have the private key for that encryption, or the key to access Azure Key Vault, that I need to secure instead. Because this type of issue is so common, I am assuming I am missing something here. Each time I try to further encrypt or otherwise secure a key, I am simply adding another layer of the same problem; I still end up with a private key sitting somewhere in plain text. Is it the case that having a plain-text key in an otherwise secure environment is just not an issue after all?
I would like to point out that I cannot use environment variables or some of the other tools I have seen for securing keys on the machine, as these applications can run on any number of terminal servers or local machines throughout the company. Most are ClickOnce .NET applications written in .NET 4.5 and can run anywhere in our environment, some by any user in our domain.
I don't use Azure, but I assume Azure Key Vault is very similar to AWS Secrets Manager, which is exactly the thing I would use (I wrote about one use case, storing Amplitude API keys, on my blog).
Why is this better than simply having the key lying around in a file?
Simplified key distribution: you don't need to distribute the key to all the machines.
Improved security: you simply load the key at runtime, so there is no need to have the key lying on disk forever.
Note there's not much point in double-encrypting the key as you mentioned. That's just increasing complexity without improving the security of the solution much.
Also, in the case of AWS, you would specify a very granular IAM policy/permissions for accessing the specific secret and attach the policy to the IAM role assigned to the instances that need to work with the key.
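If the Azure route is chosen instead, a minimal sketch of fetching a secret at runtime with the Azure.Security.KeyVault.Secrets client (the vault URL and secret name are placeholders):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class SecretLoader
{
    public static string GetEmailApiKey()
    {
        // Authenticates with the ambient identity (managed identity, environment
        // variables, or a developer login) - no secret has to live on disk.
        var client = new SecretClient(
            new Uri("https://my-vault.vault.azure.net/"),
            new DefaultAzureCredential());

        KeyVaultSecret secret = client.GetSecret("email-api-key");
        return secret.Value;
    }
}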

Securely storing encryption key in WinRT / Windows Store apps

I would like to encrypt some of my Windows 8 app data (stored in the local storage folder). This is not very sensitive data; however, I would like to prevent users from forging/modifying it (and was looking into encryption as a way to achieve that).
Is there any secure key store available that I could use to save my key for later reuse? What I would like to do is:
1) on the first run, generate the key and store it somewhere,
2) encrypt the data using the key,
3) any time I need to decrypt the data, retrieve the key from the store.
Did you have a look at the PasswordVault class yet? http://msdn.microsoft.com/en-us/library/windows/apps/windows.security.credentials.passwordvault.aspx
Basically all confidential information should go in there.
If you want to 'prevent users from forging/modifying it', the question is how much energy you are willing to put into it.
Generally speaking, data in the PasswordVault should be secure without using additional encryption. Regarding security above that level, it can be said that an attacker with physical access to the computer can do anything with it, as you'd have to store the keys on the same machine. Anything more swiftly reaches a point of diminishing returns against invested effort - i.e. just adding some obscurity instead of real security.
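A small sketch of the first-run flow from the question using PasswordVault (the resource and user names are arbitrary labels):

using System;
using Windows.Security.Credentials;
using Windows.Security.Cryptography;

public static class KeyStore
{
    private const string Resource = "MyAppDataKey"; // arbitrary labels
    private const string User = "local";

    public static string GetOrCreateKey()
    {
        var vault = new PasswordVault();
        try
        {
            // Subsequent runs: the key is already in the vault.
            PasswordCredential cred = vault.Retrieve(Resource, User);
            cred.RetrievePassword();
            return cred.Password;
        }
        catch (Exception)
        {
            // First run: generate a random 256-bit key and store it.
            string key = CryptographicBuffer.EncodeToBase64String(
                CryptographicBuffer.GenerateRandom(32));
            vault.Add(new PasswordCredential(Resource, User, key));
            return key;
        }
    }
}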
Be careful with the PasswordVault solution: I discovered that it can actually be a problem because it is a roaming setting. If you open your application on two devices at the same time, you will get two encryption keys, but after a day or so the roaming setting will override the first generated encryption key.

Correct and secure manner of storing in-app-purchases

What is the best way to store an in-app purchase on a device, so that the purchases can also be accessed offline but the security of the purchases is not compromised?
Do not store anything valuable on the device, as it cannot be trusted and can easily be compromised by someone motivated.
Now, all of this depends on the type and value of the item being purchased and what happens if it's compromised.
If it's truly valuable, then use a remote secure server for managing secure items. In-app purchases include a receipt that can be verified by your remote secure server talking to Apple's servers directly over a secure connection. See this link on verifying store receipts.
As far as I know, the most convenient way to securely store a purchased asset would be to use some form of encryption.
The user should be able to download an encrypted asset, and the app should decrypt it on the fly.
However, make sure that you store the key in a secure fashion as well, as string keys (within the app binary) can easily be recovered by a skilled hacker. A good way to secure the key would be to use some sort of authentication with a server-based system. The app would get the key off the server and keep it only for the few moments required to decrypt the asset.
This defense mechanism is not impregnable; I feel that it is sophisticated enough to discourage most users from attempting to undermine it.
To decrypt your assets on the device, a good idea would be to use CommonCrypto. It's provided by Apple (with the iOS SDK), so you don't have to build it from scratch and you don't have to provide documentation (required by US law) for your app. I find Jim Dovey's Common Crypto wrapper the easiest way to use it.
Hope that helps. :)
You'll want to encrypt the file, for which your best bet is probably Common Crypto. In order to be able to access the data offline, you need to store the encryption key on the device.
The solution is to use the keychain: Use SecRandomCopyBytes to generate a key of sufficient length, and store it in the keychain using SecItemAdd. Then use that key to encrypt the data and write it to the device's local storage in the normal manner. When it comes time to read the file back from disk, use SecItemCopyMatching to load the key from the keychain and use it to decrypt the data.

SHA1-hashing for web authentication in place of Blowfish

Being unable to locate a working PHP/JavaScript implementation of Blowfish, I'm now considering using SHA-1 hashing to implement web-based authentication, but my lack of knowledge in this particular field makes me unsure whether the chosen method is secure enough.
The planned roadmap:
User's password is stored on the server as an MD5 hash.
Server issues a public key (MD5 hash of current time in milliseconds)
Client javascript function takes user password as input, and calculates its MD5 hash
Client then concatenates public key and password hash from above, and calculates SHA1 of the resulting string
Client sends SHA1 hash to the server, where similar calculations are performed with public key and user's password MD5 hash
Server compares the hashes, a match indicates successful authentication.
A mismatch indicates authentication failure, and server issues a new public key, effectively expiring the one already used.
Now, the problematic part is concatenating the two keys before the SHA1 step: could that be prone to some kind of statistical or other attack?
Is there any specific order in which keys should be concatenated to improve the overall quality (i.e. higher bits being more important to reliability of encryption)?
Thank you in advance.
If you're only using the 'public key' (which isn't actually a public key, it's a nonce, and should really be random, unless you really want it to be usable over a certain timeframe, in which case make sure you use HMAC with a secret key to generate it so an adversary cannot predict the nonce) to prevent replay attacks, and it's a fixed size, then concatenation might not be a problem.
That said, I'm a bit concerned that you might not have a well-thought-out security model. What attack is this trying to prevent, anyway? The user's password hash is unsalted, so a break of your password database will reveal plaintext passwords easily enough anyway, and although having a time-limited nonce will mitigate replay attacks from a passive sniffer, such a passive sniffer could just steal the user's session key anyway. Speaking of which, why not just use the session key as the nonce instead of a timestamp-based system?
But really, why not just use SSL? Cryptography is really hard to get right, and people much smarter than you or I have spent decades reviewing SSL's security to get it right.
Edit: If you're worried about MITM attacks, then nothing short of SSL will save you. Period. Mallory can just replace your super-secure login form with one that sends the password in plaintext to him. Game over. And even a passive attacker can see everything going over the wire - including your session cookie. Once Eve has the session cookie, she just injects it into her browser and is already logged in. Game over.
If you say you can't use SSL, you need to take a very hard look at exactly what you're trying to protect, and what kinds of attacks you will mitigate. You're going to probably need to implement a desktop application of some sort to do the cryptography - if MITMs are going around, then you cannot trust ANY of your HTML or Javascript - Mallory can replace them at will. Of course, your desktop app will need to implement key exchange, encryption and authentication on the data stream, plus authentication of the remote host - which is exactly what SSL does. And you'll probably use pretty much the same algorithms as SSL to do it, if you do it right.
If you decide MITMs aren't in scope, but you want to protect against passive attacks, you'll probably need to implement some serious cryptography in Javascript - we're talking about a Diffie-Hellman exchange to generate a session key that is never sent across the wire (HTML5 Web storage, etc), AES in Javascript to protect the key, etc. And at this point you've basically implemented half of SSL in Javascript, only chances are there are more bugs in it - not least of which is the problem that it's quite hard to get secure random numbers in Javascript.
Basically, you have the choice between:
Not implementing any real cryptographic security (apparently not a choice, since you're implementing all these complex authentication protocols)
Implementing something that looks an awful lot like SSL, only probably not as good
Using SSL.
In short - if security matters, use SSL. If you don't have SSL, get it installed. Every platform that I know of that can run JS can also handle SSL, so there's really no excuse.
bdonlan is absolutely correct. As pointed out, an adversary only needs to replace your Javascript form with evil code, which will be trivial over HTTP. Then it's game over.
I would also suggest looking at moving your passwords to SHA-2 with salts, generated using a suitable cryptographic random number generator (i.e. NOT seeded using the server's clock). Also, perform the hash multiple times. See http://www.jasypt.org/howtoencryptuserpasswords.html sections 2 and 3.
MD5 is broken. Do not use MD5.
Your secure scheme needs to be similar to the following:
Everything happens on SSL. The authentication form, the server-side script that verifies the form, the images, etc. Nothing fancy needs to be done here, because SSL does all the hard work for you. Just a simple HTML form that submits the username/password in "plaintext" is all that is really needed, since SSL will encrypt everything.
User creates new password: you generate a random salt (NOT based off the server time, but from good crypto random source). Hash the salt + the new password many times, and store the salt & resulting hash in your database.
Verify password: your script looks up salt for the user, and hashes the salt + entered password many times. Check for match in database.
The only thing that should be stored in your database is the salt and the hash/digest.
Assuming you have a database of MD5 hashes that you need to support, then the solution might be to add database columns for new SHA-2 hashes & salts. When the user logs in, you check against the MD5 hash as you have been doing. If it works, then follow the steps in "user creates new password" to convert it to SHA-2 & salt, and then delete the old MD5 hash. User won't know what happened.
Anything that really deviates from this is probably going to have some security flaws.
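A minimal sketch of the create/verify flow described above, shown in C# purely for illustration (the question itself is PHP/JavaScript); PBKDF2 via Rfc2898DeriveBytes is one standard way to "hash many times", and the SHA-256 constructor used here needs a reasonably recent framework:

using System;
using System.Linq;
using System.Security.Cryptography;

public static class PasswordStore
{
    private const int Iterations = 100000; // "hash many times"

    // User creates a new password: random salt, derived hash, store both.
    public static void Create(string password, out byte[] salt, out byte[] hash)
    {
        salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256))
            hash = kdf.GetBytes(32);
    }

    // Verify: look up the stored salt, derive again, compare.
    public static bool Verify(string password, byte[] salt, byte[] storedHash)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations, HashAlgorithmName.SHA256))
            return kdf.GetBytes(32).SequenceEqual(storedHash); // use a constant-time compare in production
    }
}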

Saving key in application settings

I am starting to use the CryptoStream class. I may be wrong, but if you encrypt something, close the app, and then try to decrypt it, it will not work because a different key will be generated. Because I do need this functionality, I am wondering whether it's possible to save the key in the application settings and whether this is the right way to go?
If you always run your app under the same user account (it can be a local user or a domain user), the best option would be to use DPAPI. The advantage of using DPAPI is that you do not have to worry about the key (the system generates it for you). If you run the app under different user identities, then it gets more complex because the options that are available range from bad to worse (the major problem is: how do you protect your secret: key, password, passphrase, etc). Depending on what you want to do, you may not need to use encryption at all (e.g. if you want to encrypt a connection string, consider using integrated windows authentication, which does not require a password). For more info on the topic, check out this MSDN article: Safeguard Database Connection Strings and Other Sensitive Settings in Your Code; it may give you some ideas.
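A minimal DPAPI sketch using the ProtectedData class (there is no key to manage; Windows ties the protection to the user profile):

using System.Security.Cryptography;

public static class Dpapi
{
    // The key is generated and managed by Windows and tied to the current
    // user account, so nothing secret has to ship with the application.
    public static byte[] Protect(byte[] plaintext)
    {
        return ProtectedData.Protect(plaintext, null, DataProtectionScope.CurrentUser);
    }

    public static byte[] Unprotect(byte[] ciphertext)
    {
        return ProtectedData.Unprotect(ciphertext, null, DataProtectionScope.CurrentUser);
    }
}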
Lots of applications save the keys in configuration files. It's a common bad practice.
It's not secure, but all the secure options are hard to implement. There are options using different factors:
You can derive the key from a password using PBE (password-based encryption), but then you have to enter a password to start your application. This is the so-called "what you know" factor (see the sketch below).
Put the key on a smartcard. This is very secure, but you need access to the card on the machine. This is the "what you have" factor.
Ignore other schemes involving encrypting keys with yet another key. It doesn't really change the security strength.
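A sketch of the password-based option mentioned above: deriving an AES key from a passphrase with PBKDF2 (the iteration count and key size are illustrative):

using System.Security.Cryptography;

public static class PasswordBasedKey
{
    // Derives a 256-bit AES key from a passphrase entered at startup.
    // The salt is not secret and can be stored next to the encrypted data.
    public static byte[] DeriveKey(string passphrase, byte[] salt)
    {
        using (var kdf = new Rfc2898DeriveBytes(passphrase, salt, 100000, HashAlgorithmName.SHA256))
            return kdf.GetBytes(32);
    }
}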