Managing the rotation of Azure Storage account keys with an Azure Function and Key Vault

Having asked a question about Removing Secrets from Azure Function Config, this Microsoft approach was recommended for managing the rotation of keys for Azure Storage Accounts and keeping those keys secret in Azure Key Vault.
Note that we are accessing Tables in an Azure Storage Account, and Tables, unlike Blobs and Queues, do not support Managed Identity access controls.
The recommendation comes with some Azure deployment templates that would not run for me, so I decided to create the resources myself to check my understanding of the approach. After trying to follow the recommendation, I have some questions.
Existing situation:
An existing function called "OurAzureFunction" that currently has the Storage Account connection string configured with the key directly in the Function config.
An existing storage account called "ourstorageaccount" that contains the application data that "OurAzureFunction" operates on
My understanding of the recommendation is that it introduces
"keyRotationAzureFunction", an Azure function with two Httptriggers, one that responds to event grid event for secrets that are soon to expire and one that can be called to regenerate the keys on demand.
"keyRotationKeyVault", a Key Vault that is operated on by the keyRotationAzureFunction.
An Event Grid subscription that listens for the SecretNearExpiry event from "keyRotationKeyVault".
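To make the moving parts concrete, here is a minimal sketch of what the rotation trigger might look like (the function name, trigger style, and comments are my assumptions, not the sample's actual code; the recommendation itself exposes HTTP triggers that the Event Grid subscription calls):

using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class SecretNearExpiryHandler
{
    [FunctionName("SecretNearExpiryHandler")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        // The event subject identifies the Key Vault secret that is close to expiry.
        log.LogInformation($"Rotation requested for: {eventGridEvent.Subject}");

        // 1. Regenerate a key on "ourstorageaccount" (this is where the
        //    "Storage Account Key Operator Service Role" comes in).
        // 2. Write the refreshed secret back to "keyRotationKeyVault" with a new
        //    expiry date, which arms the next SecretNearExpiry event.
    }
}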
I have some issues understanding this approach. I can't see a better way than to collate them in this single Stack Overflow question rather than asking three individual questions.
Does keyRotationAzureFunction have the "Storage Account Key Operator Service Role" on "ourstorageaccount" so that it can regenerate its keys?
What configuration does "OurAzureFunction" have that allows it to create a connection to ourstorageaccount? Is it the tagged secret in "keyRotationKeyVault"?
Is the value of the secret in "keyRotationKeyVault" not used at all, just the tags related to the secret?

Yes, the function has to run as a principal that can rotate the keys, which that role provides. Key rotation can be kept as a separate role so that you can grant granular access to secrets and avoid leaks.
The function (rather, the principal) just needs "get" access to a secret used for generating SAS tokens; this is a special kind of secret whose returned value changes so that new SAS tokens are generated, and it grants access to storage. The Key Vault must be configured to manage tokens for the Storage account. See a sample I recently published at https://learn.microsoft.com/samples/azure/azure-sdk-for-net/share-link/ which I hope simplifies the problem.
The value of the secret is actually the generated SAS token for the storage account. The tags are used to figure out which secret to use for the storage account in case you have other secrets in your Key Vault, or even manage multiple function apps this way (you can identify the correct secret for the storage account key near expiry).
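For example, reading that secret from the function app with the current Azure SDK might look like this (a sketch; the secret name is an assumption, and DefaultAzureCredential stands in for whatever identity the principal uses):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Authenticate as the app's identity, which only needs "get" on the secret.
var client = new SecretClient(
    new Uri("https://keyrotationkeyvault.vault.azure.net/"),
    new DefaultAzureCredential());

// The secret's value is the current SAS token, so fetch it at connection
// time rather than caching it past its validity window.
KeyVaultSecret secret = client.GetSecret("ourstorageaccount-sas");

// The tags identify which storage account the secret belongs to.
foreach (var tag in secret.Properties.Tags)
    Console.WriteLine($"{tag.Key} = {tag.Value}");

string sasToken = secret.Value;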

I'm not sure why the ARM templates did not work for you. You need to be an owner of the Storage account and Key Vault to create the necessary permissions.
To answer your questions:
Yes.
Yes, it uses tags with Storage account information to connect and regenerate the key.
The value is not used for the connection to Storage, but it could be an alternative way to connect.
You can see more information about tags here:
https://github.com/jlichwa/KeyVault-Rotation-StorageAccountKey-PowerShell

Related

Having trouble understanding where to store private keys

I am having an issue determining how to store API keys or other private information correctly. I have a helper library, used by a large set of internal company applications, that calls an external API for emailing and must have access to the API key. I store this key in a shared configuration file used by these applications. If I wanted to further secure the key by encrypting it or moving it to a service like Azure Key Vault, I am then stuck with the dilemma of having simply complicated the problem, because I now need to secure the private key for that encryption, or the key to access Azure Key Vault. Because this type of issue is so common, I assume I am missing something here. Each time I try to further encrypt or otherwise secure a key, I am simply adding another layer of the same problem: I still end up with a private key sitting somewhere in plain text. Is it the case that having a plain-text key in an otherwise secure environment is just not an issue after all?
I would like to point out that I cannot use environment variables or some of the other tools I have seen for securing keys on the machine, as these applications can run on any number of terminal servers or local machines throughout the company. Most are ClickOnce .NET applications written in .NET 4.5 and can run anywhere in our environment, some by any user in our domain.
I don't use Azure, but I assume Azure Key Vault is very similar to AWS Secrets Manager, which is exactly the thing I would use (I wrote about one use case, storing Amplitude API keys, on my blog).
Why is this better than simply having the key lying around in a file?
Simplified key distribution: you don't need to distribute the key to all the machines.
Improved security: you simply load the key at runtime; there is no need to have the key lying on disk forever.
Note there's not much point in double-encrypting the key as you mentioned. That's just increasing complexity without improving the security of the solution much.
Also, in case of AWS, you would specify a very granular IAM policy/permissions for accessing the specific secret and attach the policy to the IAM role assigned to the instances needing to work with the key.
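In Azure terms, since the rest of this page is Azure-centric, the runtime load might look like the sketch below (the vault URL and secret name are placeholders; DefaultAzureCredential resolves to a managed identity in Azure or a developer login locally, so no key has to be pre-installed on each machine):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Load the email API key at runtime; no secret material is written to disk.
var client = new SecretClient(
    new Uri("https://my-company-vault.vault.azure.net/"),  // placeholder vault
    new DefaultAzureCredential());

string emailApiKey = client.GetSecret("email-api-key").Value.Value;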

Storing API Keys submitted by client in frontend

I know API keys need to be stored securely and should not be accessible client-side. That being said, I also know that a lot of WordPress plugins, custom sites, and the like allow users to copy-paste an API key into a text input on the admin panel.
My question is: how do you do this securely? Do they hash it and save it to their database?
Say, for example, I made a React app or WordPress plugin that allowed users to do something with the Google Maps API. I know I can go get their API key and just hard-code it in... but if I wanted to let the user update the key on their own, what would be the recommended steps?
Thanks!
If I understand you correctly, you want your application to process secrets for third-party APIs. A bit scary, but if you get the user's consent, why not? First things first: make sure the user understands what they are doing. Point out exactly what you will do with the API keys, what you will not do with them, and how they will be protected.
Personally, I would never want to store such secrets in my own database, as that would be a single point of failure: when you are hacked, everyone is hacked. Why not put such secrets in, say, local storage, so they never touch one of your servers?
OK, in case it is your server that needs to do something, you could have the API key passed in a request, do that something, but never log or persistently store the secret anywhere (a sketch of this follows below).
In case it is enough for the JavaScript to do the job, local storage is an even better solution.
One could think about encrypting the keys in local storage, but I don't believe this would improve security a lot. It would be security through obscurity and could be bypassed by anyone with physical access to the machine/browser/user agent. But if someone had such access, the API keys would probably be one of the smaller problems.
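A minimal sketch of the server-side pass-through mentioned above, as an ASP.NET Core minimal API (the endpoint, header name, and Google Maps call are illustrative assumptions):

using System;
using System.Net.Http;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The user's own Maps key arrives with each request, is used once,
// and is never logged or persisted server-side.
app.MapGet("/geocode", async (HttpRequest request) =>
{
    if (!request.Headers.TryGetValue("X-Maps-Api-Key", out var apiKey))
        return Results.BadRequest("Missing API key header.");

    using var http = new HttpClient();
    string json = await http.GetStringAsync(
        $"https://maps.googleapis.com/maps/api/geocode/json?address=Seattle&key={apiKey}");

    return Results.Content(json, "application/json");
});

app.Run();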

Handling SAS tokens using shared access policy (azure storage)

I am sending blobs to an Azure Storage account. I have one customer with three IoT clients, each of which writes to its own container.
I use a shared access policy to create a SAS URI for each container.
I am not using an expiry date when creating the shared access policy. The generated SAS URI is copied to a config file, and each of the clients uses it to write blobs to the storage.
This works fine. On the client I create the container using
CloudBlobContainer _container = new CloudBlobContainer(new Uri("https://myhubstorage.blob.core.windows.net/containername?sv=2015-04-05&sr=c&si=containername&sig=xxxxx"));
The token above is retrieved from a config file
To send blobs I use
var newBlob = _container.GetBlockBlobReference(filePath);
Now this works, but I'm not sure if this is the best approach. The reason is that I do not have an expiry on the shared access policy used to create the container SAS token. I don't want to distribute a new SAS token for the container each time it expires (I would have to update the config file).
Also, I do not want the clients to have access to the storage account key.
If a client is compromised I can revoke the shared access policy so the other clients will not be affected.
But is this the best approach to solve this regarding security? Input would be appreciated.
Using a shared access policy is the suggested approach; however, note that you can set at most five stored access policies on a container (this does not look like a problem for you, since there are only three IoT clients).
You might also want to refer to best practices for using SAS for a full list.
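For reference, a sketch of creating a stored access policy with an explicit expiry and deriving the container SAS from it, in the same legacy WindowsAzure.Storage SDK the question uses (run once, server-side, with account-key credentials; the policy name is an assumption):

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

static async Task<string> CreateContainerSasAsync(CloudBlobContainer container)
{
    // Define (or overwrite) a stored access policy with an explicit expiry.
    BlobContainerPermissions permissions = await container.GetPermissionsAsync();
    permissions.SharedAccessPolicies["iot-client-1"] = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Create | SharedAccessBlobPermissions.Write,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMonths(6)
    };
    await container.SetPermissionsAsync(permissions);

    // The SAS inherits rights and expiry from the policy, so revoking or
    // editing the policy instantly affects every token issued against it.
    return container.GetSharedAccessSignature(null, "iot-client-1");
}

Because the expiry lives in the policy rather than in the token itself, it can be extended server-side without redistributing anything to the clients' config files.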

Storing API keys on server

I have a service where users each have an API key. I need to store the keys so that they can be used to validate API requests.
If I store the keys in plaintext in my database, I'm worried about the scenario of someone getting access to the db, grabbing all the plaintext api keys, then using them to impersonate others (there will likely be bigger problems if someone got access to the db, though).
This is similar to storing user passwords, where you just store the hash and validate using that - however most APIs let you view your API keys, which means they need to be stored in some recoverable way.
Is there a best practice for this?
The threat of someone getting the database and the keys means they can use the API keys to access the data in the database, which they already have, so there is no win there.
The threat of someone accessing the database and getting the passwords means they can reuse those passwords on other websites with the same user name, because people tend to reuse their passwords.
Another reason not to have passwords in the clear or easily reversible is that someone in your company could get hold of them and start to do bad stuff acting as the user, which IS a risk you have if your API keys are in the clear.
Typically, HMAC is the solution for cryptographically computing a secure value from a single secret key and some public value.
Have a look at HMAC. With HMAC, you can load a secret key into memory with the app (from a config file, read off AWS KMS, typed in on app start, or however you want to get that secret key there).
In the database, store a token, e.g. Token = UUID(). The token should be unique to the user, could be versioned in case you need to regenerate it, and can be random (like a UUID). The token is not secret.
The API key is computed using the secret key (SK) and user token (UT) as follows:
API_SECRET = HMAC(SK, UT)
Then distribute that UT (more commonly called the API_KEY) and API_SECRET to the user; when the user tries to connect, you recompute the API_SECRET:
Get user record from database (you're probably already asking the user to provide their username)
Compute the API_SECRET from the UT in the database:
API_SECRET_DB = HMAC(SK, UT)
Compare the computed API_SECRET_DB to the one provided in the request:
if (API_SECRET_DB == API_SECRET_FROM_REQUEST){
//login user
}
Bottom line, you only protect the Secret Key, and not every single credential.
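A minimal sketch of that scheme in C# (HMAC-SHA256 is my choice of primitive; the names mirror the ones above):

using System;
using System.Security.Cryptography;
using System.Text;

static class ApiSecrets
{
    // API_SECRET = HMAC(SK, UT): derive the per-user secret from the
    // app-wide secret key and the user's public token.
    public static byte[] Compute(byte[] secretKey, string userToken)
    {
        using var hmac = new HMACSHA256(secretKey);
        return hmac.ComputeHash(Encoding.UTF8.GetBytes(userToken));
    }

    // Compare in constant time so the check does not leak timing information.
    public static bool Verify(byte[] secretKey, string userToken, byte[] presentedSecret) =>
        CryptographicOperations.FixedTimeEquals(Compute(secretKey, userToken), presentedSecret);
}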
I made an update to a library written in PHP so that it uses an Impersonation Protection Algorithm (IPA), which means the token itself is never saved in the database.
For more info, check https://github.com/vzool/api-hmac-guard
Hope it helps, thanks.

Revoke Shared Access Signatures after initial access in Azure Storage

I would like to essentially allow for one-time access to certain blob resources, requiring the user to check back with my server to get a new shared access signature before being able to access the resource again.
I have an implementation of this that I currently use, but I'm curious if there's something more ideal out there (particularly something already implemented in the Azure API that I missed).
Right now, a user can request the resource from the server. The server validates their access to it, creates a unique hash in a database, and directs the user to a link containing that hash, which they load. Once the page loads and they've completely downloaded the resource, I immediately invalidate the hash value in the database so it cannot be used again.
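Condensed, the bookkeeping side of that flow looks something like the sketch below (all names are hypothetical, and an in-memory map stands in for the database):

using System;
using System.Collections.Concurrent;

class OneTimeLinks
{
    // token -> blob name; a token's presence means it is still valid.
    private readonly ConcurrentDictionary<string, string> _tokens = new();

    public string Issue(string blobName)
    {
        string token = Guid.NewGuid().ToString("N");
        _tokens[token] = blobName;
        return $"https://example.com/download/{token}"; // placeholder host
    }

    // Atomically consume the token so a second request cannot reuse it.
    public bool TryRedeem(string token, out string blobName) =>
        _tokens.TryRemove(token, out blobName);
}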
I know that Shared Access Signatures allow for time-based expiration, but do they allow for any sort of retrieval-count-based expiration, such that once the user has completely downloaded the resource, the SAS invalidates itself? Thanks!
One-time use is not supported by SAS tokens. If you get a chance, it would be great if you could add this request to our Azure Storage User Voice backlog. I would also encourage other people with the same requirement to vote on it as well.
Thanks
Jason