Set-AzureRmKeyVaultAccessPolicy cmdlet assigning policy as user instead of application - azure-powershell

We’re seeing an issue when adding an access policy to a Key Vault to grant an Automation account's service principal permissions to secrets. We’re using the cmdlet below:
Set-AzureRmKeyVaultAccessPolicy -VaultName "KeyVaultName" -ApplicationId "0aaa8314-872d-41ef-a75e-d3a5ec5b31e6" -ObjectId "443d03a7-6b76-47d1-9406-8fb87c17bbc3" -PermissionsToSecrets recover,delete,backup,set,restore,list,get
When the cmdlet executes, we see an entry like this in the portal. Note the icon, which looks like a user.
Despite this entry appearing in the access policies, the Automation account's runbooks still fail with the error “Forbidden” when trying to access the key vault:
Get-AzureKeyVaultSecret : Operation returned an invalid status code 'Forbidden'
At C:\Modules\User\CustomModule.psm1:28 char:22
+ ... clientID = (Get-AzureKeyVaultSecret -VaultName $global:ManagementKeyV ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Get-AzureKeyVaultSecret], KeyVaultErrorException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.KeyVault.GetAzureKeyVaultSecret
We tried several ways of granting the Automation account's service principal access to the key vault using the cmdlets below, but we still get the same result.
Set-AzureRmKeyVaultAccessPolicy -VaultName "KeyVaultName" -ObjectId "443d03a7-6b76-47d1-9406-8fb87c17bbc3" -PermissionsToSecrets recover,delete,backup,set,restore,list,get -BypassObjectIdValidation
Set-AzureRmKeyVaultAccessPolicy -VaultName "KeyVaultName" -ServicePrincipalName ((Get-AzureRmADServicePrincipal -ApplicationId "0aaa8314-872d-41ef-a75e-d3a5ec5b31e6").ServicePrincipalNames[0]) -PermissionsToSecrets recover,delete,backup,set,restore,list,get
However, after manually adding the same service principal from the portal, we see a different icon for the same service principal.
Can someone please help out with this? Am I doing something wrong?
Many Thanks!

Try this:
Set-AzureRmKeyVaultAccessPolicy -VaultName "KeyVaultName" -ServicePrincipalName "0aaa8314-872d-41ef-a75e-d3a5ec5b31e6" -PermissionsToSecrets recover,delete,backup,set,restore,list,get
Here the value passed to -ServicePrincipalName is the Application Id.
I know it's confusing.
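A quick way to confirm what actually got set is to inspect the vault's access policies; this is a minimal sketch, assuming the vault name from the question:

```powershell
# Inspect the access policies on the vault. For a plain service-principal
# entry, ApplicationId should be empty and ObjectId should be the service
# principal's object id.
$vault = Get-AzureRmKeyVault -VaultName "KeyVaultName"
$vault.AccessPolicies | Format-List ObjectId, ApplicationId, PermissionsToSecrets
```

An entry with both ObjectId and ApplicationId set is a compound-identity policy rather than a plain service-principal policy, which would explain the different icon and the Forbidden error.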

As of 2022, running Azure CLI 2.43.0 in Azure Cloud Shell:
az keyvault set-policy --name myKeyVault --object-id <object-id> --secret-permissions <secret-permissions> --key-permissions <key-permissions> --certificate-permissions <certificate-permissions>
Remove the flags you don't want.
Refer to https://learn.microsoft.com/en-us/azure/key-vault/general/assign-access-policy?tabs=azure-cli
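For example, to grant a service principal get/list on secrets, you can resolve its object id from the application id first. This is a sketch; the vault name and application id are placeholders, and older Azure CLI versions expose the object id as objectId rather than id:

```shell
# Resolve the service principal's object id from its application id,
# then grant it get/list permissions on secrets.
objectId=$(az ad sp show --id "0aaa8314-872d-41ef-a75e-d3a5ec5b31e6" --query id -o tsv)
az keyvault set-policy --name myKeyVault --object-id "$objectId" --secret-permissions get list
```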

Related

Calling an API that runs on another GCP project with Airflow Composer

I'm running a task with SimpleHTTPOperator on Airflow (Cloud Composer). This task calls an API that runs on a Cloud Run service living in another project, which means I need a service account in order to access that project.
When I try to call the API, I get the following error:
{secret_manager_client.py:88} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID airflow-connections-call_to_api.
Did you add 'secretmanager.versions.access' permission?
What's the solution to this issue?
Context: Cloud Composer and Cloud Run live in two different projects.
This specific error is unrelated to the cross-project scenario. It seems that you have configured Composer/Airflow to use Secret Manager as the primary backend for connections and variables. However, according to the error message, the service account used by Composer is missing the secretmanager.versions.access permission on the connection (call_to_api) you have configured for the API.
Check this part of the documentation.
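If that is the case, granting the Composer service account the Secret Manager accessor role (which includes secretmanager.versions.access) should resolve it. This is a sketch; the project id and service account email are placeholders:

```shell
# Grant the Composer service account read access to secret versions
# in the project that hosts Secret Manager.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:composer-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"
```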

How to troubleshoot enabling API services in GCP using gcloud

When executing terraform apply, I get this error where I am being asked to enable IAM API for my project.
Error: Error creating service account: googleapi: Error 403: Identity and Access
Management (IAM) API has not been used in project [PROJECT-NUMBER] before or it is
disabled. Enable it by visiting
https://console.developers.google.com/apis/api/iam.googleapis.com/overview?
project=[PROJECT-NUMBER] then retry. If you enabled this API recently, wait a few
minutes for the action to propagate to our systems and retry.,
accessNotConfigured
When I attempt to enable it using gcloud, the service enable just hangs. Is there any way to get more information?
According to the Google Dashboard, everything is green.
I am also seeing the same issue using the UI.
$ gcloud services enable iam.googleapis.com container.googleapis.com
Error Message
ERROR: gcloud crashed (WaitException): last_result=True, last_retrial=178, time_passed_ms=1790337,time_to_wait=10000
Add --log-http to (any) gcloud command to get detailed logging of the underlying API calls. These may provide more details on where the error occurs.
You may wish to explicitly reference the project too: --project=....
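Combined, that looks something like this (a sketch; the project id is a placeholder):

```shell
# Enable the services with verbose HTTP logging and an explicit project,
# to see which underlying API call hangs or fails.
gcloud services enable iam.googleapis.com container.googleapis.com \
    --project=my-project --log-http
```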
Does IAM need to be enabled? It's such a foundational service, I'm surprised anything would work if it weren't enabled.

How to pass etcd credentials to kubernetes api-server?

I've been facing this issue for some days now, and it is the following:
I'm trying to start my Kubernetes master using hyperkube,
but the documentation doesn't explain how to pass etcd credentials to Kubernetes so that it uses a given user.
In the api-server configuration I have something like this:
- --etcd-servers=http://root:toor@etcd2-0.server:2379,http://root:toor@etcd2-1.server:2379,http://root:toor@etcd2-2.server:
Which is the only possible way to set the basic auth parameters of etcd.
(This works great for both etcdctl and etcd REST API.)
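For reference, this is roughly how the same basic-auth credentials work with etcdctl against the etcd v2 API (a sketch; host name matches the question):

```shell
# etcd v2 API: pass basic-auth credentials with -u user:password
# and list the keys under /registry.
etcdctl --endpoints http://etcd2-0.server:2379 -u root:toor ls /registry
```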
But I'm getting the following error:
F0915 17:25:35.579278 1 controller.go:86] Unable to perform initial IP allocation check: unable to persist the updated service IP allocations: 110: The request requires user authentication (Insufficient credentials) [0]
My etcd is refusing to write into /registry (which is the default folder), but root:toor are the right credentials.
I couldn't find any other configuration parameters for this, and I REALLY, REALLY need to secure my etcd with roles/users.
Please, I need some ideas / solutions if possible.

Azure Powershell command for getting resources in a Resource Group

In Azure PowerShell versions 0.8 and 0.9, there is the command
Get-AzureResource -ResourceGroupName "RGName" -OutputObjectFormat New
It returns the resources in the specified Azure resource group. It requires Azure PowerShell to be in ARM mode.
But, in the Azure PowerShell version 1.2 and above
Get-AzureRMResource -ResourceGroupName "RGName"
fails to return the resources present in a resource group. It requires additional parameters such as "ResourceId" or "ResourceName", which makes it resource-specific.
What I need is that, it should return all the resources in a resource group.
Is it a bug in the newer version, or am I missing something? Any suggestions?
You can use Find-AzureRmResource:
Find-AzureRmResource -ResourceGroupNameContains "RGName"
The Get-AzureRMResource PowerShell command is implemented on top of the REST API. If you check the REST API for listing resources in a resource group, you will see something like this:
https://management.azure.com/subscriptions/<Subscription ID>/resourceGroups/<resource group name>/resources?api-version=2015-01-01
And if you add the -debug option to Get-AzureRMResource -ResourceId <the resource id>, you will find the REST API it's using:
https://management.azure.com<the resource id>?api-version=2015-01-01
Comparing these two REST API calls, you will see that the following PowerShell command will list the resources in a resource group:
Get-AzureRmResource -ResourceId "/subscriptions/<Subscription ID>/resourceGroups/<resource group name>/resources"
I know it's tricky, but it does work.
Try
Get-AzureRmResource | where {$_.ResourceGroupName -eq "RG"}
Get the resource group as an object:
$groups = Get-AzureRmResourceGroup -Name $RG_Name
Fetch the names of all the resources in that resource group into a variable:
$t = (Find-AzureRmResource -ResourceGroupNameEquals $groups.ResourceGroupName).ResourceName

Azure API Management - Update Swagger Schema

I have imported my Swagger schema, and the management service has built out all the documentation for my API. I have now made changes and re-deployed them. Do I have to remove the API from API Management and re-import, or is there a way to update the existing one?
Ok guys, I'm going to do my duty to humanity and show you the REAL way to do this. By "real" I mean: let's face it, nobody in the real world is going to keep clicking through the portal to refresh changes to their API. What everyone wants is to automate this annoying manual task.
So I wrote this Powershell script which we are currently using in production. This will get you there for sure.
PREREQUISITE: You need a service principal to be able to automate the login. I used this guide to do that.
param(
    [String]$pass,
    [String]$swaggerUrl,
    [String]$apiId,
    [String]$apiName,
    [String]$apiServiceUrl
)
Try
{
    $azureAccountName = "[YOUR AZURE AD APPLICATION ID FOR THE SERVICE PRINCIPAL]"
    $azurePassword = ConvertTo-SecureString $pass -AsPlainText -Force
    $psCred = New-Object System.Management.Automation.PSCredential($azureAccountName, $azurePassword)
    Add-AzureRmAccount -Credential $psCred -TenantId "[YOUR AZURE TENANT ID]" -ServicePrincipal
    $azcontext = New-AzureRmApiManagementContext -ResourceGroupName "[YOUR RESOURCE GROUP NAME]" -ServiceName "[THE NAME OF YOUR API MANAGEMENT INSTANCE]"
    Import-AzureRmApiManagementApi -Context $azcontext -SpecificationFormat "Swagger" -SpecificationUrl $swaggerUrl -ApiId $apiId
    Set-AzureRmApiManagementApi -Context $azcontext -ApiId $apiId -ServiceUrl $apiServiceUrl -Protocols @("https") -Name $apiName
}
Catch
{
    Write-Host "FAILURE! An error occurred: $_ Aborting script..."
    exit
}
Obviously you'll need to replace the bracketed strings above. An explanation of the parameters:
"pass" : Your service principal's password
"swaggerUrl" : The path to your application's swagger json document
"apiId" : Get this value from your API Management instance, it will be shown in the portal's dashboard if you check that existing API
"apiName" : Whatever you want to name it
"apiServiceUrl" : The Azure App Service Url of your API (or wherever your API is)
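An invocation of the script might then look like this (the script file name and every value here are hypothetical placeholders):

```powershell
# Hypothetical invocation; replace each value with your own.
.\Update-ApimApi.ps1 -pass $env:SP_PASSWORD `
    -swaggerUrl "https://myapp.azurewebsites.net/swagger/v1/swagger.json" `
    -apiId "my-api" `
    -apiName "My API" `
    -apiServiceUrl "https://myapp.azurewebsites.net"
```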
Never mind, it turns out you just tell the import that it's an existing API and it will update. I was concerned I would end up with an error message that the operation already existed.
I am currently using the script below, a modified version that I can run locally or drop into a CI/CD solution for automated updates:
Login-AzureRmAccount
$subscriptionId =
( Get-AzureRmSubscription |
Out-GridView `
-Title "Select an Azure Subscription ..." `
-PassThru
).SubscriptionId
$subscriptionId
Select-AzureRmSubscription -SubscriptionId $subscriptionId
$apiMSName = "<input>"
$swaggerUrl = "<input>"
$apiId = "<input>"
$apiName = "<input>"
$apiServiceUrl = "<input>"
$resourceGroupName = "<input>"
$azcontext = New-AzureRmApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apiMSName
Import-AzureRmApiManagementApi -Context $azcontext -SpecificationFormat "Swagger" -SpecificationUrl $swaggerUrl -ApiId $apiId
Set-AzureRmApiManagementApi -Context $azcontext -ApiId $apiId -ServiceUrl $apiServiceUrl -Protocols @("https") -Name $apiName
Just as a reference, since I had the same challenge: you also have the option to use ARM templates and set up CI (using VSTS, Git, whatever) to deploy the template. The advantage is that if for some reason you need to delete your API Management service and create it again, that is also possible with the ARM template. If you need to change some specific configuration, or the API specification, you can make the change, deploy it, and it will update your changes.
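As a sketch, the API definition inside such an ARM template might look like the fragment below; the API Management service name parameter, API id, path, and swagger URL are all placeholder assumptions, and the exact property names vary slightly between apiVersions:

```json
{
  "type": "Microsoft.ApiManagement/service/apis",
  "apiVersion": "2019-01-01",
  "name": "[concat(parameters('apimServiceName'), '/my-api')]",
  "properties": {
    "displayName": "My API",
    "path": "myapi",
    "protocols": [ "https" ],
    "format": "swagger-link-json",
    "value": "https://myapp.azurewebsites.net/swagger/v1/swagger.json"
  }
}
```

Redeploying the template with an updated swagger URL (or updated inline definition) then updates the API in place, the same way the PowerShell import does.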