Handling SAS tokens using a shared access policy (Azure Storage)

I am sending blobs to an Azure Storage account. I have one customer with three IoT clients, each of which writes to its own container.
I use a shared access policy to create a SAS URI for each container.
I am not setting an expiry date when creating the shared access policy. The generated SAS URI is copied to a config file, and each client uses it to write blobs to the storage account.
This works fine. On the client I create the container reference using
CloudBlobContainer _container = new CloudBlobContainer(new Uri("https://myhubstorage.blob.core.windows.net/containername?sv=2015-04-05&sr=c&si=containername&sig=xxxxx"));
The token above is retrieved from a config file.
To send blobs I use
var newBlob = _container.GetBlockBlobReference(filePath);
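The blob content is then uploaded with something like the line below (a sketch; UploadFromFileAsync from the classic storage SDK is assumed, and filePath is the local file):

await newBlob.UploadFromFileAsync(filePath);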
Now this works, but I'm not sure if this is the best approach. The reason is that I do not have an expiry on the shared access policy used to create the container SAS token. I don't want to distribute a new SAS token for the container each time it expires (I would have to update the config file), and I do not want the clients to have access to the storage account key.
If a client is compromised, I can revoke the shared access policy, so the other clients will not be affected.
But is this the best approach from a security standpoint? Input would be appreciated.

Using a shared access policy is the suggested approach; however, note that you can set at most five stored access policies on a container (this should not be a problem for you, since there are only three IoT clients).
You might also want to refer to the best practices for using SAS for a full list of recommendations.
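For reference, a minimal sketch of that setup with the classic Microsoft.WindowsAzure.Storage SDK (the same library as the CloudBlobContainer code in the question); the connection string, container name, and policy name are placeholders. The key point is that the SAS references the stored access policy by identifier (the si= query parameter), so revoking or editing the policy on the server side invalidates every token issued from it:

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasFromStoredPolicy
{
    static async Task Main()
    {
        // Placeholder connection string; this stays on the server that issues SAS URIs,
        // so the account key is never distributed to the IoT clients.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myhubstorage;AccountKey=<account-key>");
        CloudBlobContainer container = account.CreateCloudBlobClient()
            .GetContainerReference("containername");

        // Define (or update) a named stored access policy on the container.
        // No expiry is set here, matching the setup in the question; the lifetime is
        // controlled entirely by the policy, which can be revoked at any time.
        BlobContainerPermissions permissions = await container.GetPermissionsAsync();
        permissions.SharedAccessPolicies["containername"] = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Create | SharedAccessBlobPermissions.Write
        };
        await container.SetPermissionsAsync(permissions);

        // Generate a SAS that references the policy by identifier.
        string sasToken = container.GetSharedAccessSignature(null, "containername");
        Console.WriteLine(container.Uri + sasToken);
    }
}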

Related

How to limit access to the read only endpoint in Amazon Neptune?

I'd like to create a role that can access only the read-only endpoint.
Constructing the resource ARN as described here allows access to both the read and write endpoints.
I tried setting the resource ID of the READER instance in the ARN in these ways:
arn:aws:neptune-db:region:account-id:reader-instance-resource-id/*
arn:aws:neptune-db:region:account-id:cluster-resource-id/reader-instance-resource-id
arn:aws:neptune-db:region:account-id:cluster-resource-id/reader-instance-resource-id/*
But none of these work. Is there a way to give a role read-only access?
The roles and policies that Amazon Neptune currently supports are listed here. At present, the NeptuneReadOnlyAccess managed policy applies only to the control plane: it allows you to read but not alter configurations. It does not apply to the data plane (running queries).
It is possible that a future Amazon Neptune update may add additional access control policies.
For right now, you will need to manage access to instances and endpoints as part of your application architecture.
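For example, one application-level convention is to give read-only components a client that is configured against the cluster's reader endpoint only. A rough sketch with the Gremlin.Net driver follows (the endpoint hostname is a placeholder); note this is an application-side convention rather than an IAM-enforced control:

using System;
using Gremlin.Net.Driver;
using Gremlin.Net.Driver.Remote;
using Gremlin.Net.Process.Traversal;

class ReaderEndpointExample
{
    static void Main()
    {
        // The "-ro-" (reader) endpoint routes connections to the cluster's read replicas.
        var readerEndpoint = new GremlinServer(
            "mycluster.cluster-ro-abc123.us-east-1.neptune.amazonaws.com", 8182, enableSsl: true);

        using (var client = new GremlinClient(readerEndpoint))
        {
            GraphTraversalSource g = AnonymousTraversalSource.Traversal()
                .WithRemote(new DriverRemoteConnection(client));

            // This component only ever issues read traversals against the reader endpoint.
            long vertexCount = g.V().Count().Next();
            Console.WriteLine(vertexCount);
        }
    }
}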

Managing the rotation of Azure Storage account keys with an Azure Function and Key Vault

Having asked a question about Removing Secrets from Azure Function Config, this Microsoft approach was recommended for managing the rotation of keys for Azure Storage accounts and keeping those keys secret in Azure Key Vault.
Note that we are accessing Tables in an Azure Storage account, and Tables, unlike Blobs and Queues, do not support managed identity access controls.
The recommendation comes with some Azure deployment templates that would not run for me, so I decided to create the resources myself to check my understanding of the approach. After trying to follow the recommendation, I have some questions.
Existing situation:
An existing function called "OurAzureFunction" that currently has the Storage Account connection string configured with the key directly in the Function config.
An existing storage account called "ourstorageaccount" that contains the application data that "OurAzureFunction" operates on
My understanding of the recommendation is that it introduces:
"keyRotationAzureFunction", an Azure Function with two HTTP triggers: one that responds to the Event Grid event for secrets that are soon to expire, and one that can be called to regenerate the keys on demand.
"keyRotationKeyVault", a Key Vault that is operated on by the keyRotationAzureFunction.
An Event Grid subscription that listens for the SecretNearExpiry event from "keyRotationKeyVault".
I have some issues understanding this approach, and I can't see a better way than to collate them in this one Stack Overflow question rather than asking three individual questions.
Does keyRotationAzureFunction have the "Storage Account Key Operator Service Role" on "ourstorageaccount" so that it can regenerate its keys?
What configuration does "OurAzureFunction" have that allows it to create a connection to ourstorageaccount? Is it the tagged secret in "keyRotationKeyVault"?
Is the value of the secret in "keyRotationKeyVault" not used at all, just the tags related to the secret?
Yes, the function has to run as a principal that can rotate the keys, which that role provides. Key rotation can be kept as a separate role so that you can provide granular access to secrets to avoid leaks.
The function (rather, the principal) just needs "get" access to the secret used for generating SAS tokens (it's a special kind of secret whose returned value changes, generating new SAS tokens) that grants access to storage. The Key Vault must be configured to manage tokens for the Storage account. See a sample I published recently at https://learn.microsoft.com/samples/azure/azure-sdk-for-net/share-link/ which I hope simplifies the problem.
The value of the secret is actually the generated SAS token for the storage account. The tags are used to figure out which secret to use for the storage account in case you have other secrets in your Key Vault, or even manage multiple function apps this way (you can identify the correct secret for the storage account key near expiry).
I'm not sure why the ARM templates did not work for you. You need to be an owner of the Storage account and the Key Vault to create the necessary permissions.
To answer your questions:
Yes
Yes, it uses tags with Storage account information to connect and regenerate the key.
The value is not needed to connect to Storage, but it could be used as an alternative way to connect.
You can see more information about tags here:
https://github.com/jlichwa/KeyVault-Rotation-StorageAccountKey-PowerShell
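For illustration, here is a minimal sketch of how "OurAzureFunction" could consume the Key-Vault-managed SAS secret instead of keeping a key in its config (using the Azure.Security.KeyVault.Secrets, Azure.Identity, and Azure.Data.Tables packages; the vault URI, secret name, and table name are assumptions, not taken from the recommendation):

using System;
using Azure;
using Azure.Data.Tables;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class TableAccessViaKeyVault
{
    static void Main()
    {
        // The function's managed identity only needs "get" access to secrets in the vault.
        var secretClient = new SecretClient(
            new Uri("https://keyrotationkeyvault.vault.azure.net/"),
            new DefaultAzureCredential());

        // The secret's value is the SAS token that Key Vault generates for "ourstorageaccount".
        KeyVaultSecret sasSecret = secretClient.GetSecret("ourstorageaccount-sas");

        // Use the SAS to talk to Table storage; no account key ever appears in the function config.
        var serviceClient = new TableServiceClient(
            new Uri("https://ourstorageaccount.table.core.windows.net"),
            new AzureSasCredential(sasSecret.Value));
        TableClient table = serviceClient.GetTableClient("OurTable");

        foreach (TableEntity entity in table.Query<TableEntity>(maxPerPage: 10))
            Console.WriteLine(entity.RowKey);
    }
}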

Revoke Shared Access Signatures after initial access in Azure Storage

I would like to essentially allow for one-time access to certain blob resources, requiring the user to check back with my server to get a new shared access signature before being able to access the resource again.
I have an implementation of this that I currently use, but I'm curious if there's something more ideal out there (particularly something already implemented in the Azure API that I missed).
Right now, a user can request the resource from the server. It validates their access to it, creates a unique hash in a database, directs the user to a link with that hash and the user loads the page. Once the page loads and they've completely downloaded the resource, I immediately invalidate the hash value in the database so it cannot be used again.
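Roughly, that flow looks like the sketch below (simplified to invalidate the hash when the link is redeemed rather than after the download completes; the in-memory dictionary stands in for my database, and generateSasUri stands in for the usual SAS generation):

using System;
using System.Collections.Concurrent;

public class OneTimeLinkService
{
    // Stands in for the database table of not-yet-used hashes.
    private readonly ConcurrentDictionary<string, string> _pendingHashes =
        new ConcurrentDictionary<string, string>();

    // Called after the server has validated the user's access to the blob.
    public string CreateOneTimeLink(string blobName)
    {
        string hash = Guid.NewGuid().ToString("N");   // unique, unguessable hash
        _pendingHashes[hash] = blobName;              // remember which blob it maps to
        return "https://myserver.example.com/download/" + hash;
    }

    // Called when the link is followed; succeeds exactly once, then the hash is invalid.
    public bool TryRedeem(string hash, Func<string, Uri> generateSasUri, out Uri sasUri)
    {
        sasUri = null;
        if (!_pendingHashes.TryRemove(hash, out string blobName))
            return false;                             // already used or never issued
        sasUri = generateSasUri(blobName);            // short-lived SAS for this single download
        return true;
    }
}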
I know that Shared Access Signatures allow for time-based expiration, but do they allow for any sort of retrieval-count-based expiration, such that the user can completely download the resource and then the SAS invalidates itself? Thanks!
One-time use is not supported by SAS tokens. If you get a chance, it would be great if you could add this request to our Azure Storage User Voice backlog. I would also encourage other people with the same requirement to vote on it as well.
Thanks
Jason

Rest API design, storing access tokens

I'm trying to wrap my head around RESTful API design on a bigger scale than one simple installation.
My setup would look something like this:
The question is: after a user has been authorized to make requests, they get an access token. Should EVERY following request first go to the proxy, then to the auth server to check the token, and finally get the data from the resource server?
Consider that you need somewhere to store the user's permissions/roles governing which URIs they are allowed to use.
I was thinking of moving the tokens and the permissions/roles to the REST proxy, stored in a memory cache like Redis. When a permission/role is updated on the auth server, it would push those changes to the proxy. The proxy would then not need to make additional calls to the auth server every single time, reducing it to just one call to the resource server. Or maybe this is how everyone does it, with two internal calls on every request?
It is not a great idea to authenticate the token on every request. Instead, save the token in some fashion, either in Redis or in a map on your resource server, with an expiry time set in sync with the token's expiry time.
Using Redis, you can store these tokens along with the role against a single key, say the userId, and set the token's expiration time (by setting the expiry time of the key). This way, once the token expires, calls will automatically be redirected to the authentication server on their own.
User roles and permissions should be saved on the resource server, either as a separate set in Redis that maintains the permissions list to check against the user role (which you again pick up from Redis), or depending on how your REST framework handles setting permissions on resources, since some frameworks have built-in APIs for restricting resources via annotations. This permissions list can be updated as and when it is modified.
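A minimal sketch of that caching approach with the StackExchange.Redis client (the key naming scheme and the token|role encoding are just illustrative choices, not part of the answer above):

using System;
using StackExchange.Redis;

public class TokenCache
{
    private readonly IDatabase _db;

    public TokenCache(IConnectionMultiplexer redis) => _db = redis.GetDatabase();

    // Store the token and role against the user id; Redis evicts the key when the token expires.
    public void CacheToken(string userId, string token, string role, TimeSpan tokenLifetime)
        => _db.StringSet($"token:{userId}", $"{token}|{role}", tokenLifetime);

    // Returns false once the key has expired, which forces a fresh call to the auth server.
    public bool TryGetToken(string userId, out string tokenAndRole)
    {
        RedisValue value = _db.StringGet($"token:{userId}");
        tokenAndRole = value;
        return value.HasValue;
    }
}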

How to restrict Amazon S3 API access?

Is there a way to create a different identity (access key / secret key) to access Amazon S3 buckets via the REST API where I can restrict access (read-only, for example)?
The recommended way is to use IAM to create a new user, then apply a policy to that user.
Yes, you can. The S3 API documentation describes the Authentication and Access Control services available to you. You can set up a bucket so that another Amazon S3 account can read but not modify items in the bucket.
Check out the details at http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingAuthAccess.html (follow the link to "Using Query String Authentication"). This is a subdocument of the one Greg posted, and it describes how to generate access URLs on the fly.
This uses a hashed form of the private key and allows expiration, so you can give brief access to files in a bucket without allowing unfettered access to the rest of the S3 store.
Constructing the REST URL is quite difficult; it took me about three hours of coding to get it right, but it is a very powerful access technique.
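These days the SDKs do the signing for you. A minimal sketch of generating such a query-string-authenticated (pre-signed) URL with the AWS SDK for .NET (the bucket name, key, and expiry are placeholders):

using System;
using Amazon.S3;
using Amazon.S3.Model;

class PresignedUrlExample
{
    static void Main()
    {
        // Credentials are resolved from the usual SDK chain (environment, profile, instance role).
        var s3 = new AmazonS3Client();

        var request = new GetPreSignedUrlRequest
        {
            BucketName = "my-bucket",
            Key = "reports/summary.pdf",
            Verb = HttpVerb.GET,                      // read-only: this URL cannot be used to upload
            Expires = DateTime.UtcNow.AddMinutes(15)  // brief, time-limited access
        };

        string url = s3.GetPreSignedURL(request);
        Console.WriteLine(url);
    }
}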