How to retain private control over encryption keys while using cloud services? - encryption-asymmetric

All cloud service providers have their own key management systems (KMS), and customer keys can be imported when desired. However, I want to retain private control over the keys, so that the keys are provided externally and never imported into the KMS. In other words, the cloud provider should be able to use the external key without importing it. Another acceptable arrangement would be one where the public keys sit at the cloud provider but only the client holds the private keys externally. Any help or ideas would be appreciated.

On Google Cloud, you can do this with External Key Manager (EKM): https://cloud.google.com/kms/docs/ekm

Related

How do I configure BigQuery with Google Cloud EKM?

I want to set up BigQuery so that it interacts with an external key manager using the HYOK function.
I have not found any guides on configuring BigQuery with EKM (and thus using HYOK), only on configuring it with CMEK (which uses BYOK).
Can someone here help me?
To clarify some of the terminology in your question:
HYOK = Hold your own key = The ability to use keys outside GCP from within GCP
Cloud EKM = Cloud external key manager = Cloud EKM is the service that GCP uses to implement HYOK. Cloud EKM is used through the Cloud KMS API. EKM keys have the EXTERNAL protection level in Cloud KMS.
CMEK = Customer managed encryption keys = The ability to use keys in Cloud KMS to protect data at rest in other GCP services (such as BigQuery).
To answer your question directly: this guide applies equally to all keys available through Cloud KMS, including Cloud EKM keys. The difference when using a Cloud EKM key, as opposed to one stored inside GCP, is in how you set up the key in the Cloud KMS API (see this documentation). Once it is set up, Cloud EKM keys work the same as keys at other protection levels.
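For a concrete picture of that setup step, here is a minimal C# sketch using the Google.Cloud.Kms.V1 client. The project, location, key ring, key name, and external key URI are all hypothetical, and the field usage reflects my reading of the Cloud KMS API; treat it as a sketch of the EXTERNAL protection level flow, not a definitive recipe:

```csharp
using Google.Cloud.Kms.V1;

var client = KeyManagementServiceClient.Create();
KeyRingName ring = KeyRingName.FromProjectLocationKeyRing("my-project", "us-east1", "my-ring");

// 1. Create the key with the EXTERNAL protection level, but skip the initial
//    version: an external version needs a URI pointing at the EKM-held key.
CryptoKey key = client.CreateCryptoKey(new CreateCryptoKeyRequest
{
    ParentAsKeyRingName = ring,
    CryptoKeyId = "my-ekm-key",
    CryptoKey = new CryptoKey
    {
        Purpose = CryptoKey.Types.CryptoKeyPurpose.EncryptDecrypt,
        VersionTemplate = new CryptoKeyVersionTemplate
        {
            ProtectionLevel = ProtectionLevel.External,
            Algorithm = CryptoKeyVersion.Types.CryptoKeyVersionAlgorithm.ExternalSymmetricEncryption,
        },
    },
    SkipInitialVersionCreation = true,
});

// 2. Create a version that points at the key held in the external key manager.
CryptoKeyVersion version = client.CreateCryptoKeyVersion(new CreateCryptoKeyVersionRequest
{
    ParentAsCryptoKeyName = CryptoKeyName.FromProjectLocationKeyRingCryptoKey(
        "my-project", "us-east1", "my-ring", "my-ekm-key"),
    CryptoKeyVersion = new CryptoKeyVersion
    {
        ExternalProtectionLevelOptions = new ExternalProtectionLevelOptions
        {
            ExternalKeyUri = "https://ekm.example.com/v0/my-external-key", // from your EKM partner
        },
    },
});
```

Once the version exists, pointing BigQuery at it is ordinary CMEK configuration: you pass the key's full resource name as the KMS key name in the table's encryption configuration, exactly as the CMEK guide describes.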

ASP.NET Core Data Protection with Azure Key Vault for containerized app deployment to Azure Kubernetes Service

I have an ASP.NET Core app that I deploy in a containerized manner to Azure Kubernetes Service (AKS). When running just a single replica of the app, it is functional and works as expected.
However, when I run multiple replicas, I run into an error from the OIDC provider: "Unable to protect the message.State".
Upon further research I have figured out that using ASP.NET Core Data Protection, as described here, is the solution:
https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/configuration/overview?view=aspnetcore-5.0#persisting-keys-when-hosting-in-a-docker-container
However, the above link does not expand on how to use it while storing the key in Azure Key Vault. Assuming I have protected my keys in AKV, how do I actually use them in my app? Is there a sample or guidance on this aspect?
First of all, I would recommend that the client instance that initiates the sign-in (with AddOpenIdConnect(...)) is the same instance that handles the callback from your identity provider (/signin-oidc). The state parameter it sets when it first redirects you to the identity provider must match the returned response (for security reasons).
To make sure that the cookies issued to the user's browser remain valid at all times, you need to make sure that:
All client instances use the same data protection encryption key.
The key stays the same across redeployments.
You can, for example, store this key in Azure Key Vault, SQL Server, or somewhere else, as in the sketch below.
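Not a definitive implementation, but a minimal sketch of the Key Vault variant, assuming the Azure.Extensions.AspNetCore.DataProtection.Blobs and Azure.Extensions.AspNetCore.DataProtection.Keys packages, with hypothetical storage and vault URIs: the key ring is persisted to a blob that all replicas share, and is itself encrypted with a Key Vault key.

```csharp
using System;
using Azure.Identity;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        var credential = new DefaultAzureCredential(); // e.g. AKS workload identity

        services.AddDataProtection()
            // Every replica loads the same key ring from shared blob storage.
            .PersistKeysToAzureBlobStorage(
                new Uri("https://mystorage.blob.core.windows.net/dataprotection/keys.xml"),
                credential)
            // The key ring itself is encrypted ("wrapped") with a Key Vault key.
            .ProtectKeysWithAzureKeyVault(
                new Uri("https://myvault.vault.azure.net/keys/dataprotection"),
                credential);
    }
}
```

With this in place, every replica can unprotect state produced by any other replica, which is what resolves the multi-replica OIDC error.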
By the way, I wrote a blog post about the Data Protection API here, including how you can store the key ring in AKV.

GCP external application to app-engine endpoint authentication

We are building a small web UI using React that will be served by GCP App Engine (standard). The UI will display a carousel of images, along with some image metadata, to our client's employees when they click a link inside their internal GIS system. We want to authenticate these calls, since the App Engine endpoint will be exposed publicly, and we hope to use a GCP service account private key that the client will use to create a time-limited JSON Web Token, giving temporary access to the GIS user when they open the web UI. We are following this GCP documentation. In summary:
We create a new service account with the necessary IAM permissions in GCP, along with a key.
We share the private key with the client, who uses it to sign a JSON Web Token that is passed in the call to our endpoint when a user accesses our web UI from their GIS system.
The call is authenticated by the GCP backend (ESP/OpenAPI).
Question: is this a recommended approach for external system accessing GCP resources or is there a better pattern more applicable to this type of situation (external system accessing GCP resource)?
I believe this is the recommended approach for your use case; it is the service-account JWT pattern from the official documentation you linked, where the caller signs a short-lived JWT with the service account's private key and ESP validates it against the issuer and audiences declared in your OpenAPI spec.
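As a rough illustration of the client side, here is a C# sketch using System.IdentityModel.Tokens.Jwt. The service account email, audience, key file path, and the 15-minute lifetime are placeholder assumptions; the claims must line up with the x-google-issuer and x-google-audiences values you configure for ESP.

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.IO;
using System.Security.Claims;
using System.Security.Cryptography;
using Microsoft.IdentityModel.Tokens;

public static class GisTokenFactory
{
    public static string CreateJwt()
    {
        // Hypothetical values, taken from the service account key the client holds.
        const string saEmail = "gis-caller@my-project.iam.gserviceaccount.com";
        const string audience = "https://my-api.endpoints.my-project.cloud.goog";

        using var rsa = RSA.Create();
        rsa.ImportFromPem(File.ReadAllText("sa-private-key.pem"));

        var now = DateTime.UtcNow;
        var token = new JwtSecurityToken(
            issuer: saEmail,                     // must match x-google-issuer
            audience: audience,                  // must match x-google-audiences
            claims: new[] { new Claim("sub", saEmail) },
            notBefore: now,
            expires: now.AddMinutes(15),         // time-limited, per the question
            signingCredentials: new SigningCredentials(
                new RsaSecurityKey(rsa), SecurityAlgorithms.RsaSha256));

        // Sent to the endpoint as: Authorization: Bearer <jwt>
        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}
```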

Limiting Access to API Gateway (and AWS Lambda) in a package

We have a package that we share with our customers. In the package, a chunk of code makes HTTP request callouts to our central API Gateway. As of now, our API Gateway is open and accepts requests from everywhere, which is not good. I want to limit access to the users of our software. The only solution I have found is using IAM and providing authorization, which would require us to include our access keys in the package. Our users can install our package in any environment they want, and we have no control over that environment. So I think a viable option is to create a generic user policy with minimal access that allows our users to call our API Gateway. However, putting an access key in the code doesn't seem like a good idea. Another option is to provide our customers with access keys, but that also has overhead. What is a better alternative that is more secure and easier to maintain?
You can use built-in API Gateway API Key functionality when IAM policies aren't possible.
Since your clients could be on any infrastructure, rather than being limited to AWS, the API Gateway service provides a generic API key solution that allows you to restrict client traffic to your API Gateway by enforcing that client requests include API keys. This API key interface is part of the "API Usage Plans" feature.
This document explains how to use the console to set up an API Gateway to enforce that client traffic bears an API key:
To set up API keys, do the following:
Configure API methods to require an API key.
Create or import an API key for the API in a region.
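Once the usage plan is in place, the client side is just an extra request header. Here is a minimal C# sketch; the endpoint URL and environment variable name are hypothetical:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class VendorApiClient
{
    public static async Task CallAsync()
    {
        // Keep the key out of source; read it from the environment (or a secret store).
        string apiKey = Environment.GetEnvironmentVariable("VENDOR_API_KEY")
            ?? throw new InvalidOperationException("VENDOR_API_KEY is not set");

        using var http = new HttpClient();
        var request = new HttpRequestMessage(
            HttpMethod.Get,
            "https://abc123.execute-api.us-east-1.amazonaws.com/prod/items"); // hypothetical stage URL

        // Usage-plan keys are sent in the x-api-key header by default.
        request.Headers.Add("x-api-key", apiKey);

        using HttpResponseMessage response = await http.SendAsync(request);
        Console.WriteLine(response.StatusCode); // 403 Forbidden if the key is missing or invalid
    }
}
```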
Your clients can implement a "secret storage" solution to avoid putting their API keys into their source code. It certainly isn't wise to store API keys in plain text inside a codebase; instead, clients can keep the keys in a secret store while still giving their applications access to them.
This article describes an example solution for secure secret storage (e.g. secure API key storage) that grants an application access to its secret without putting the unencrypted secret into the source code. It uses Amazon KMS + Cryptex, but the same principle can be applied with other technologies: http://technologyadvice.github.io/lock-up-your-customer-accounts-give-away-the-key/
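As one concrete variant of that principle, for customers whose environments do run on AWS, the key could live in AWS Secrets Manager rather than the article's KMS + Cryptex combination. A short C# sketch with a hypothetical secret name:

```csharp
using System.Threading.Tasks;
using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;

public static class ApiKeyStore
{
    public static async Task<string> LoadApiKeyAsync()
    {
        // Region and credentials come from the SDK's default resolution chain.
        using var client = new AmazonSecretsManagerClient();
        GetSecretValueResponse secret = await client.GetSecretValueAsync(
            new GetSecretValueRequest { SecretId = "vendor/api-key" }); // hypothetical name
        return secret.SecretString;
    }
}
```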

Account key vs shared access signature

I'm looking for guidance on how to securely use Azure Storage in a public-facing production environment.
My simplest scenario is multiple Windows 8 Store clients uploading images to Azure. The account key is stored in app.config.
Is it OK to distribute the account key as part of my mobile application?
Or should I have a backend service that creates shared access signatures for the container/blob?
Thanks in advance.
Sharing your account key in your mobile application is not desirable, because clients then get complete access to your account and can view or modify other data. Shared access signatures (SAS) are useful in such cases, as they let you delegate access to specific storage account resources. You can grant access to a resource for a specified period of time, with a specified set of permissions; in your case, you want to grant access only to write blob content. You can find more details about SAS and how to use it here: http://msdn.microsoft.com/en-us/library/windowsazure/ee395415.aspx
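To make the backend-service option concrete, here is a minimal sketch using the current Azure.Storage.Blobs SDK (newer than the client library the linked article covers); the container name, blob naming scheme, and 15-minute lifetime are assumptions:

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

public static class SasIssuer
{
    // Backend endpoint logic: issue a short-lived, write-only upload URL
    // for a single new blob, so the client never sees the account key.
    public static Uri CreateUploadUri(string connectionString)
    {
        var blob = new BlobClient(connectionString, "images", $"{Guid.NewGuid()}.jpg");

        var sas = new BlobSasBuilder
        {
            BlobContainerName = blob.BlobContainerName,
            BlobName = blob.Name,
            Resource = "b", // this one blob only, not the whole container
            ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15),
        };
        sas.SetPermissions(BlobSasPermissions.Create | BlobSasPermissions.Write);

        // Works because the BlobClient was built from the account connection
        // string, so it holds the shared key needed to sign the SAS.
        return blob.GenerateSasUri(sas);
    }
}
```

The mobile client then uploads with an HTTP PUT to the returned URI and never handles the account key at all.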