Key Permissions in Azure Key Vault

I was exploring Azure Key Vault key permissions, and one thing I am stuck on is: what does the "List" key permission mean when setting this parameter in the Access policies section?
It would also be great if someone could provide a documentation link for all the available key permissions in the access policy. :-)

If you add the "List" key permission for a user/group/service principal in the access policies, they will be able to list the keys in your key vault.
For all the available key permissions, see the parameters section of https://learn.microsoft.com/en-us/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy?view=azps-4.1.0#parameters
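For context, a minimal sketch of what the "List" permission actually gates: the Key Vault REST operation that enumerates keys. The vault name and api-version below are illustrative placeholders, and a real call would also need an Azure AD bearer token in the Authorization header:

```python
# Sketch: the REST endpoint that the "List" key permission authorizes.
# "contoso-vault" and the api-version are illustrative; a real request
# must carry an Azure AD bearer token.

def list_keys_url(vault_name: str, api_version: str = "7.4") -> str:
    """Build the Key Vault 'list keys' URL for the given vault name."""
    return f"https://{vault_name}.vault.azure.net/keys?api-version={api_version}"

print(list_keys_url("contoso-vault"))
# https://contoso-vault.vault.azure.net/keys?api-version=7.4
```

A principal without the "List" permission calling this endpoint gets a 403, even if they hold other key permissions such as Get.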

How do I retrieve certificate definition in Azure Synapse dedicated pool?

I have created a certificate with the following definition. The certificate is created. Is there any option to retrieve the certificate definition from system objects? If we can retrieve the definition, what is the best way to restrict users from viewing the CREATE CERTIFICATE definition?
I checked the sys.sql_modules table but couldn't find anything.
CREATE CERTIFICATE xxxx_Certificate ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y' WITH SUBJECT = 'YYYY Information', EXPIRY_DATE = '20221231';
Regards,
Rajib
As far as I know, there is no proper way to retrieve an Azure Key Vault secret using T-SQL.
We can retrieve a Key Vault secret using PySpark code in a Synapse pool.
I created an Azure Key Vault and added a secret.
Then I created a Synapse Analytics workspace, opened Synapse Studio, and created a notebook.
I executed the code below to retrieve the value of the secret.
from notebookutils import mssparkutils
mssparkutils.credentials.getSecret('<keyvault_name>' , '<secret_name>')
This returned the Key Vault secret value.
You can follow this approach to retrieve a secret value from Azure Key Vault.

Amazon session token

I am using Java to upload images to Amazon S3
AwsSessionCredentials awsCreds = AwsSessionCredentials.create(ACCESS_KEY, SECRET_KEY, SESSION_TOKEN);
S3Client s3main = S3Client.builder()
        .credentialsProvider(StaticCredentialsProvider.create(awsCreds))
        .endpointOverride(URI.create(MAIN_END_POINT))
        .region(main_region)
        .build();
s3main.putObject(
        PutObjectRequest.builder().bucket(bucketName).key(img1Name).build(),
        RequestBody.fromBytes(bytesMain));
The above code works. I am passing a blank string "" as SESSION_TOKEN. Just wondering: what is the use of the session token here? What value should I pass?
You are using IAM user credentials and so you do not have a session token and your code should use AwsBasicCredentials. Session tokens are associated with short-term credentials from an assumed IAM role, in which case your code would use AwsSessionCredentials.
Background
To quote the AWS documentation:
You must provide your AWS access keys to make programmatic calls to AWS. When you create your access keys, you create the access key ID (for example, AKIAIOSFODNN7EXAMPLE) and secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY) as a set.
IAM users, for example, authenticate using an access key ID and a secret access key. These are long-lived credentials.
However, it is also possible to use short-term credentials:
You can also create and use temporary access keys, known as temporary security credentials. In addition to the access key ID and secret access key, temporary security credentials include a security token that you must send to AWS when you use temporary security credentials. The advantage of temporary security credentials is that they are short term. After they expire, they're no longer valid.
When IAM users or federated users assume an IAM role, they are given a set of credentials that comprises an access key ID, a secret access key, and a security token. These are short-lived credentials.
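As a quick way to tell the two kinds of keys apart, the access key ID itself carries a documented prefix: long-term IAM user keys begin with AKIA, while temporary STS keys begin with ASIA. A minimal sketch (the AKIA value is AWS's documented example key; the ASIA value is a made-up placeholder):

```python
# Heuristic sketch: AWS access key IDs encode their type in a 4-character
# prefix (AKIA = long-term IAM user key, ASIA = temporary STS key).
# Long-term keys have no session token; temporary keys must be sent with one.

def needs_session_token(access_key_id: str) -> bool:
    """Return True if the key ID looks like a temporary STS credential."""
    return access_key_id.startswith("ASIA")

print(needs_session_token("AKIAIOSFODNN7EXAMPLE"))  # False: long-term key, no token
```

So if your key ID starts with AKIA, passing "" as the session token is a sign you should be using AwsBasicCredentials instead.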

Github add SSH key from others will grant access to all repos?

So recently GitHub changed its policy and now only allows SSH keys for authentication.
So I added a public SSH key from an outside contributor to my account, but will this give this outside contributor full access to all my repos with read/write permissions?
This outside contributor should only have access to certain repos in my account, not the others.
So my concern is: will adding this SSH key allow her to have full access?
Please help me understand how exactly adding an SSH key will compromise the account security.
Thanks in advance.
It is not the case that GitHub has changed to allow only SSH keys for authentication. GitHub used to allow users to use a username and password over HTTPS if they were not using 2FA, a username and personal access token over HTTPS, or SSH using an SSH key. The only thing that has changed is that you can no longer use a username and password for HTTPS; you must use a personal access token instead of a password if you wish to use HTTPS.
If you give another user one of your personal access tokens or add one of their SSH keys to your account, they will have access to all of your repositories. This is insecure, and so you should not do it.
Instead, you should grant your contributor access using the Manage Access interface and make sure they can access the repository using their own account. If they are using HTTPS, they may need to either switch to SSH by changing the URL with git remote set-url origin git@github.com:owner/name.git (replacing owner and name) or follow the directions outlined in this answer.
The fact that a contributor cannot access their own account is an issue that they need to address instead of having them access your account.
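For the URL change mentioned above, here is a small sketch of rewriting a GitHub HTTPS remote URL into its SSH form. Only the two standard URL shapes are handled, and owner/name are placeholders:

```python
# Sketch: convert a GitHub HTTPS remote URL to its SSH equivalent.
# Anything that is not an https://github.com/ URL is returned unchanged.

def https_to_ssh(url: str) -> str:
    prefix = "https://github.com/"
    if url.startswith(prefix):
        path = url[len(prefix):]
        if not path.endswith(".git"):
            path += ".git"  # git accepts both forms, but .git is conventional
        return f"git@github.com:{path}"
    return url

print(https_to_ssh("https://github.com/owner/name.git"))  # git@github.com:owner/name.git
```

The SSH form is what allows the contributor's own key, registered on their own account, to authenticate the push.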
Yes, putting someone else's ssh key in your account will give them full access to all your repositories.
If you want to grant someone else access to your repositories, don't add their ssh key anywhere. Just set up the access permissions on your repositories to grant access to their github account. You can find access permissions by going to "Settings" and then selecting "Manage access" (this will take you to something like https://github.com/yourname/yourrepo/settings/access).
You'll find some documentation on this process here.

How does Azure Key Vault provide better security than encrypted configuration settings?

I have an ASP.NET Core website that is used to store and retrieve encryption keys which are, at times, used to "sign transactions" on behalf of the user.
Since I'm in Azure, research indicates the most secure way to store these keys is via Azure Key Vault.
I've looked at this article, which shows that to gain access to Azure Key Vault values, I would end up using credentials stored in the Web App's Application Configuration settings.
// Connect to Azure Key Vault using the Client Id and Client Secret (AAD) - get them from the Azure AD application.
var keyVaultEndpoint = settings["AzureKeyVault:Endpoint"];
var keyVaultClientId = settings["AzureKeyVault:ClientId"];
var keyVaultClientSecret = settings["AzureKeyVault:ClientSecret"];

if (!string.IsNullOrEmpty(keyVaultEndpoint) && !string.IsNullOrEmpty(keyVaultClientId) && !string.IsNullOrEmpty(keyVaultClientSecret))
{
    config.AddAzureKeyVault(keyVaultEndpoint, keyVaultClientId, keyVaultClientSecret, new DefaultKeyVaultSecretManager());
}
Web App Application Configuration settings are encrypted at rest and during transit, but their values can be leaked in a number of ways, hence the need for Key Vault.
My question, however, is if I have to store the Key Vault access credentials somewhere in the app configuration, doesn't that essentially limit the security of key vault values to the same level as what the configuration setting already provides? Does the extra level of indirection make a difference somehow?
What's the point of using Key Vault if someone can just access the Key Vault by reading the Key Vault credentials from the Web Config Settings?
What am I missing?

Accessing a GCS bucket from GCE without credentials using a S3 library

I am trying to migrate an existing application that was using IAM permissions to write to an S3 bucket from EC2. According to Google documentation, there is a way to keep the same code and take advantage of the compatibility of the GCS APIs with S3. However, using the same code (I am just overriding the endpoint to use storage.googleapis.com instead), I hit the following exception:
com.amazonaws.SdkClientException: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:115)
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:77)
at
Is there a way to do that without having to pass an access key and a secret key to my app?
If you want to keep using your existing API, the only way is to use a Google developer key. A simple migration always requires these two steps:
Change the request endpoint to the Cloud Storage request endpoint. As you mentioned, you have already completed this step by overriding the endpoint:
https://storage.googleapis.com/[BUCKET_NAME]/[OBJECT_NAME]
Replace the AWS access and secret key with your Google developer key:
Because you can no longer use the IAM permissions you previously set on AWS, authorization must be done using an access key and a secret key. You will need to include an Authorization request header using your Google access key, with a signature created using your Google secret key:
Authorization: AWS GOOG-ACCESS-KEY:signature
For further information, please check Authenticating in a simple migration scenario.
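To make that Authorization header concrete, here is a sketch of the v2-style signature the simple-migration scheme uses: base64 of an HMAC-SHA1 over the canonical string-to-sign. The access key, secret, date, and resource below are made-up placeholders:

```python
import base64
import hashlib
import hmac

# Sketch of the signature in "Authorization: AWS GOOG-ACCESS-KEY:signature".
# The S3-compatible (v2-style) scheme signs a canonical string:
#   VERB \n Content-MD5 \n Content-Type \n Date \n CanonicalizedResource
# (extension headers omitted here). All values below are placeholders.

def sign_request(secret_key: str, verb: str, content_md5: str,
                 content_type: str, date: str, resource: str) -> str:
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_request("GOOG-SECRET-PLACEHOLDER", "GET", "", "",
                   "Fri, 26 Mar 2021 00:00:00 GMT", "/example-bucket/example-object")
print(f"Authorization: AWS GOOG-ACCESS-KEY:{sig}")
```

That said, if you are running on GCE, also consider whether your client library can use the instance's service account directly instead of HMAC keys; the error in the question is the AWS SDK probing the EC2 metadata endpoint, which does not exist on GCE.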