Assigning group permissions using the Azure DevOps CLI

I am trying to assign permissions to the "Build Administrators" group using the CLI. The specific permission I want to update is "Delete Team Project".
The documentation is a little difficult to put together since the information is scattered, especially the parts about security tokens and permission bits.
I am using the az devops security command. The parts I am struggling with are getting the correct token and setting the correct permission bits.
I know the namespace I want to use: the environment namespace. I found this out by first listing all the namespaces and noting the GUID for the environment namespace.
# get a list of all namespaces
az devops security permission namespace list -o table
$envnamespace = "<guid from the command above for the environment namespace>"
# first I set my org and PAT
$orgUrl = "https://dev.azure.com/<MYORG>"
$personalToken = "<MY_PERSONAL_TOKEN>"
$projectName = "<my_project>"
# log in using the PAT
$personalToken | az devops login --organization $orgUrl
# set the default organization
az devops configure --defaults organization=$orgUrl
# get the descriptor for the "Build Administrators" group
$id = az devops security group list --project $projectName --query "graphGroups[?displayName == 'Build Administrators'].descriptor | [0]" -o tsv --verbose
# now I want to add permissions for the "Build Administrators" group,
# but I am not sure what the token should be or which permission bits to use
I run the following command to list the permissions on the group. It returns some tokens, but they don't make sense to me. How am I meant to know which token maps to which permission? For example, how do I know which token is for the "Delete Team Project" permission?
az devops security permission list --namespace-id $envnamespace --subject $id
The aim is then to run the following command to update the permissions:
az devops security permission update --namespace-id $envnamespace --subject $id --token $token2 --allow-bit 4 --deny-bit 1 --verbose
I am not sure exactly what --allow-bit and --deny-bit should be to set the permission to deny.
Any advice on the correct way to do this would be appreciated.

How do I know which token is for the "Delete Team Project" permission?
Run az devops security permission namespace list; the "Delete Team Project" action lives under the "Project" namespace, not the environment namespace.
From that listing you can get both the namespace ID and the bit for the specific delete action.
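For example, here is a sketch of pulling both values with JMESPath queries; the namespace name 'Project' and the action name 'DELETE' are assumptions based on a typical listing, so verify them against your own output first:
# Assumed names: the namespace is listed as 'Project' and the delete action as
# 'DELETE' (display name "Delete team project"); check your own listing first
$projectNamespaceId = az devops security permission namespace list `
    --query "[?name == 'Project'].namespaceId | [0]" -o tsv
$deleteBit = az devops security permission namespace show `
    --namespace-id $projectNamespaceId `
    --query "[0].actions[?name == 'DELETE'].bit | [0]" -o tsv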
How am I meant to know which token is for what permissions?
For the tokens, you can refer to Security tokens for permissions management for details; it lists token examples for the different namespaces.
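For the Project namespace, that page documents project-level tokens of the form $PROJECT:vstfs:///Classification/TeamProject/<project-id>. Here is a sketch of building the token and applying a deny, assuming the variables from the previous snippet (note the backtick escaping $PROJECT so PowerShell does not expand it):
# Look up the project's ID and build the documented project-level token
$projectId = az devops project show --project $projectName --query id -o tsv
$token = "`$PROJECT:vstfs:///Classification/TeamProject/$projectId"
# Deny the DELETE bit for the group; --id is the namespace ID here, as in the
# working example further below
az devops security permission update --id $projectNamespaceId `
    --subject $id --token $token --deny-bit $deleteBit --merge true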
Another example for your reference (from jessehouwing's blog):
az login
az extension add --name "azure-devops"
# Find the group identifier of the group you want to set permissions for
$org = "gdbc2019-westeurope"
# There is a weird edge case here when an Azure DevOps organization has a Team Project
# with the same name as the org. In that case you must also add a query to filter on
# the right domain property `?@.domain == '?'`
$subject = az devops security group list `
--org "https://dev.azure.com/$org/" `
--scope organization `
--subject-types vssgp `
--query "graphGroups[?@.principalName == '[$org]\Project Collection Administrators'].descriptor | [0]"
$namespaceId = az devops security permission namespace list `
--org "https://dev.azure.com/$org/" `
--query "[?@.name == 'Git Repositories'].namespaceId | [0]"
$bit = az devops security permission namespace show `
--namespace-id $namespaceId `
--org "https://dev.azure.com/$org/" `
--query "[0].actions[?@.name == 'PullRequestBypassPolicy'].bit | [0]"
az devops security permission update `
--id $namespaceId `
--subject $subject `
--token "repoV2/" `
--allow-bit $bit `
--merge true `
--org https://dev.azure.com/$org/
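Adapted to the original question, the same pattern applies: swap in the Project namespace ID, the Build Administrators descriptor, the $PROJECT:vstfs:///Classification/TeamProject/<project-id> token, and use --deny-bit with the delete action's bit instead of --allow-bit.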

Related

Set-AzStorageContainerAcl error due to the container supposedly not being found, but it really exists

I'm trying to update an ACL through Azure PowerShell but I'm getting this weird error. The script is pretty simple, but I don't understand what is wrong.
First I get the Storage Container by name to be sure the container already exists.
Then I just try to set the ACL permission on it, but I get an error saying the container doesn't exist.
Am I missing something?
Edit: Just to avoid confusion, I have full control of this storage account resource. I created it, and I'm able to configure this setting through the Azure portal, but not with PowerShell.
Browse to the Storage Account in the Azure Portal
Access Control (IAM)
Grant Access to this resource section (Add Role Assignments)
Role: Storage Blob Data Contributor
Assign Access to: use the default values (i.e. User, Group, or Service Principal)
Select: User Name
Save
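If you prefer to script that role assignment instead of using the portal, a rough PowerShell equivalent (the sign-in name and scope are placeholders to fill in):
# Sketch: grant Storage Blob Data Contributor at the storage account scope
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"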
I tried to reproduce the same in my environment to apply ACL permissions.
Here is the script to apply the ACL permission to your container.
You can get the Azure Storage Account key from:
Azure Portal > Storage accounts > YourStorageAccount > Access keys
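You can also fetch the key from PowerShell instead of the portal; a small sketch, with the resource group name as a placeholder:
# Grab the first access key for the storage account
$key = (Get-AzStorageAccountKey -ResourceGroupName "<rg>" -Name "venkatsa")[0].Value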
# Install the Az.Storage module
Install-Module Az.Storage
# Sign in to Azure
Connect-AzAccount
# Set the context to the Storage Account
# (use the Az cmdlets consistently; mixing the older Azure/AzureRM storage
# cmdlets such as New-AzureStorageContext with Az.Storage is a common cause
# of "container not found" errors, since the context objects differ)
$StorageContext = New-AzStorageContext -StorageAccountName 'venkatsa' -StorageAccountKey 'StorageAccount-key'
$Container = Get-AzStorageContainer -Name 'thejademo' -Context $StorageContext
# Get the container's ACL permissions
Get-AzStorageContainerAcl -Container "thejademo" -Context $StorageContext
# Set the ACL permission on the container
Set-AzStorageContainerAcl -Container "thejademo" -Permission Off -PassThru -Context $StorageContext
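For reference, -Permission accepts Off, Blob, or Container; Off disables anonymous public access, which is why it is used here.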
Applied ACL permission to my container

Create a script for Azure PIM roles assigned to users

$filters = "(roleDefinitionId eq '69091246-20e8-4a56-aa4d-066075b2a7a8')" -or "(roleDefinitionId eq '3d762c5a-1b6c-493f-843e-55a3b42923d4')"
Write-Host -Message "Start ......... Script"
$getallPIMadmins = Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "fd799da1-bfc1-4234-a91c-72b3a1cb9e26" -filter $filters
can i use or condition in filter option
if yes how
i am expecting to get output from above condition if use or
I tried to reproduce the same in my environment to get the Azure AD PIM roles using a PowerShell script.
Check this script to get the Azure PIM roles assigned to users.
Note: Uninstall the AzureAD module before installing the AzureADPreview module, and log in with Azure AD Global Admin credentials.
Uninstall-Module AzureAD
Install-Module AzureADPreview
Connect-AzureAD
Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "15e217e9-19a5-4006-a9f1-f7e74d8b2a5a"
Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "15e217e9-19a5-4006-a9f1-f7e74d8b2a5a" -Filter "roleDefinitionId eq 'fdd7a751-b60b-444a-984c-02652fe8fa1c'"
Result:
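As for the or condition itself: -Filter takes a single OData filter string, so joining two strings with PowerShell's -or operator will not work (that expression just evaluates to a boolean). Put the or inside the one string instead; a sketch using the IDs from the question (I have not verified that this endpoint accepts or, so treat it as something to test):
# One OData filter string containing both role definition IDs
$filters = "roleDefinitionId eq '69091246-20e8-4a56-aa4d-066075b2a7a8' or roleDefinitionId eq '3d762c5a-1b6c-493f-843e-55a3b42923d4'"
Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "fd799da1-bfc1-4234-a91c-72b3a1cb9e26" -Filter $filters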

How to create a service account for a BigQuery dataset from the CLI

I've found instructions on how to generate credentials at the project level, but there aren't clear instructions on adding a service account to only a specific dataset using the CLI.
I tried creating the service account:
gcloud iam service-accounts create NAME
and then getting the dataset:
bq show \
--format=prettyjson \
project_id:dataset > path_to_file
and then adding a role to the access section
{
"role": "OWNER",
"userByEmail": "NAME#PROJECT.iam.gserviceaccount.com"
},
and then updating it. It seemed to work because I was able to create a table, but then I got an access denied error, "User does not have bigquery.jobs.create permission in project", when I tried loading data into the table.
When I inspected the project in the cloud console, it seemed as if my service account was added to the project rather than the dataset, which is not what I want, but that also does not explain why I don't have the correct permissions. In addition to owner permissions, I tried assigning editor and admin permissions, neither of which solved the issue.
It is not possible for a service account to have permissions only at the dataset level and then run a query. When a query is invoked, it creates a job. To create a job, the service account to be used should have the bigquery.jobs.create permission granted at the project level. See the documentation for the required permissions to run a job.
With this in mind, it is required to add bigquery.jobs.create at the project level so you can run queries on the shared dataset.
NOTE: You can use any of the following pre-defined roles as they all have bigquery.jobs.create.
roles/bigquery.user
roles/bigquery.jobUser
roles/bigquery.admin
With my example I used roles/bigquery.user. See steps below:
Create a new service account (bq-test-sa@my-project.iam.gserviceaccount.com)
Get the permissions on my dataset using bq show --format=prettyjson my-project:mydataset > info.json
Add OWNER permission for the service account in info.json:
{
"role": "OWNER",
"userByEmail": "bq-test-sa@my-project.iam.gserviceaccount.com"
},
Update the permissions using bq update --source info.json my-project:mydataset
Check BigQuery > mydataset > "SHARE DATASET" to see if the service account was added.
Add the role roles/bigquery.user to the service account using gcloud projects add-iam-policy-binding my-project --member=serviceAccount:bq-test-sa@my-project.iam.gserviceaccount.com --role=roles/bigquery.user
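To sanity-check the setup, you could impersonate the service account and run a query against the shared dataset; a sketch, with the table name as a placeholder:
# Create a key for the service account, activate it, and try a query
gcloud iam service-accounts keys create key.json --iam-account bq-test-sa@my-project.iam.gserviceaccount.com
gcloud auth activate-service-account bq-test-sa@my-project.iam.gserviceaccount.com --key-file=key.json
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM `my-project.mydataset.mytable`'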

How can I use a SystemAssigned identity when pulling an image from Azure Container Registry into Azure Container Instances?

I want to create a container (or container group) in Azure Container Instances, pulling the image(s) from Azure Container Registry, but using a SystemAssigned identity. With that I want to avoid using ACR login credentials, a service principal, or a UserAssigned identity.
When I run this script (Azure CLI in PowerShell) ...
$LOC = "westeurope"
$RG = "myresourcegroup"
$ACRNAME = "myacr"
az configure --defaults location=$LOC group=$RG
$acr = az acr show -n $ACRNAME -o json | ConvertFrom-Json -Depth 10
az container create --name app1 --image $($acr.loginServer+"/app1") `
--assign-identity --role acrpull --scope $acr.id `
--debug
... ACI does not seem to recognize that it should already be authorized for ACR, and shows this prompt:
Image registry username:
Azure CLI version: 2.14.0
Does this make sense? Is the ACI managed identity supported for ACR?
In your code, you create an Azure container with a managed identity that is created at ACI creation time and is meant to authenticate to ACR. I am afraid you cannot do that, because of this limitation:
You can't use a managed identity to pull an image from Azure Container Registry when creating a container group. The identity is only available within a running container.
Update: since January 2022, managed identity is supported on Azure Container Instances for accessing Azure Container Registry: https://learn.microsoft.com/en-us/azure/container-instances/using-azure-container-registry-mi
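Based on that doc, a sketch with a user-assigned identity; the identity name is a placeholder and the --acr-identity flag may depend on your CLI version, so check az container create --help:
# Create a user-assigned identity and grant it AcrPull on the registry
$identity = az identity create -g $RG -n aci-acr-pull | ConvertFrom-Json
# Role assignments can take a minute to propagate
az role assignment create --assignee $identity.principalId --role acrpull --scope $acr.id
# Tell ACI to use that identity when pulling the image from ACR
az container create --name app1 --image "$($acr.loginServer)/app1" `
    --assign-identity $identity.id `
    --acr-identity $identity.id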
@minus_one's solution does not work in my case (I am using a runbook to create the container from the registry). It needs more privileges than stated here:
https://github.com/Azure/azure-powershell/issues/3215
This solution will not use a managed identity, and it is important to note that we will need at least the Owner role at the resource group level.
The main idea is to use a service principal to get access using the acrpull role. See the following PowerShell script:
$resourceGroup = (az group show --name $resourceGroupName | ConvertFrom-Json )
$containerRegistry = (az acr show --name $containerRegistryName | ConvertFrom-Json)
$servicePrincipal = (az ad sp create-for-rbac `
--name "${containerRegistryName}.azurecr.io" `
--scopes $containerRegistry.id `
--role acrpull `
| ConvertFrom-Json )
az container create `
--name $containerInstanceName `
--resource-group $resourceGroupName `
--image $containerImage `
--command-line "tail -f /dev/null" `
--registry-login-server "${containerRegistryName}.azurecr.io" `
--registry-username $servicePrincipal.appId `
--registry-password $servicePrincipal.password
Please note that we have created a service principal, so we also need to remove that:
az ad sp delete --id $servicePrincipal.appId
There is documentation on how to do that:
Deploy to Azure Container Instances from Azure Container Registry
Update:
I think the --registry-login-server "${containerRegistryName}.azurecr.io" option was missing.

Prevent a user from deleting BigQuery tables

We're trying to create a very basic role that allows users to query BigQuery tables, but not delete them. The custom role we're experimenting with now has the following permissions:
- bigquery.jobs.create
- bigquery.jobs.get
- bigquery.jobs.list
- bigquery.jobs.listAll
- bigquery.readsessions.create
- bigquery.routines.get
- bigquery.routines.list
- bigquery.savedqueries.get
- bigquery.savedqueries.list
- bigquery.tables.export
- bigquery.tables.getData
- bigquery.tables.list
- bigquery.transfers.get
- resourcemanager.projects.get
We're only focusing on delete at this time, so the permissions list is a work in progress. There is only one custom role assigned to our test user with the above permissions. However, the user can still delete tables from our BigQuery dataset. Any idea on the correct combination of permissions to achieve our objective?
Thanks in advance!
You have listed 14 permissions and seem to be making the assumption that these permissions allow BQ table deletion.
This assumption looks odd (because clearly the permission bigquery.tables.delete is not on the list) and is in fact incorrect. This means the GCP IAM identity (a user or a service account) assigned the role comprised of these 14 permissions will be unable to delete BQ tables, which in turn means the identity you are testing with is assigned additional role(s) and/or permission(s) that are not accounted for.
To prove the assumption is incorrect, open the BQ Console as a project administrator and click the Cloud Shell icon to start a Cloud Shell VM. Then execute the following commands at the prompt, replacing <project-name>:
# Prove the current user is BQ admin by creating 'ds_test1' dataset,
# 'tbl_test1' table, then deleting and recreating the table
bq mk ds_test1
bq mk -t ds_test1.tbl_test1
bq rm -f -t ds_test1.tbl_test1
bq mk -t ds_test1.tbl_test1
# Create role `role_test1`
gcloud iam roles create role_test1 --project <project-name> --title "Role role_test1" --description "My custom role role_test1" --permissions bigquery.jobs.create,bigquery.jobs.get,bigquery.jobs.list,bigquery.jobs.listAll,bigquery.readsessions.create,bigquery.routines.get,bigquery.routines.list,bigquery.savedqueries.get,bigquery.savedqueries.list,bigquery.tables.export,bigquery.tables.getData,bigquery.tables.list,bigquery.transfers.get,resourcemanager.projects.get --stage GA
# Create service account 'sa-test1'
# It is a good security practice to dispose of it when testing is finished
gcloud iam service-accounts create sa-test1 --display-name "sa-test1" --description "Test SA sa-test1, delete it when not needed anymore" --project <project-name>
# Grant the role (and its permissions) to the service account
gcloud projects add-iam-policy-binding <project-name> --member=serviceAccount:sa-test1@<project-name>.iam.gserviceaccount.com --role projects/<project-name>/roles/role_test1
# Save the credential of the service account (including the security sensitive
# private key) to a disk file
gcloud iam service-accounts keys create ~/key-sa-test1.json --iam-account sa-test1@<project-name>.iam.gserviceaccount.com
# Impersonate the service account. This replaces the current permissions with
# that of the service account
gcloud auth activate-service-account sa-test1@<project-name>.iam.gserviceaccount.com --key-file=~/key-sa-test1.json
# Confirm the ability to list tables
bq ls ds_test1
# Confirm inability to delete tables
# The command fails with error: BigQuery error in rm operation: Access Denied: Table <project-name>:ds_test1.tbl_test1: User does not have bigquery.tables.delete permission for table <project-name>:ds_test1.tbl_test1.
bq rm -f -t ds_test1.tbl_test1
# Close SSH connection to the VM and logoff
exit
To see the roles granted to the service account 'sa-test1' created above, open Cloud Shell and execute:
gcloud projects get-iam-policy <project-name> --flatten="bindings[].members" --filter="bindings.members:serviceAccount:sa-test1@<project-name>.iam.gserviceaccount.com"
It should list our role projects/<project-name>/roles/role_test1.
To see the roles granted to the user who can delete tables execute:
gcloud projects get-iam-policy <project-name> --flatten="bindings[].members" --filter="bindings.members:user:<email-of-the-user>"
I did some tests on my end.
When a user has the 14 listed permissions, they are not even able to see the BigQuery datasets in the UI. To do so, the bigquery.datasets.get permission must be added to the custom role.
Even with those 15 permissions, they are unable to delete BigQuery tables, so you are on the right path.
Being able to delete tables indicates that the user either does not have the created custom role assigned or has more permissions from additional roles. Please:
Check that the roles have been set correctly (my scenario with the 15 permissions). Be sure to save changes when assigning permissions to your custom roles.
In your IAM dashboard, double-check that the user has this role linked to their account.
Also check that the user does not have additional roles like Owner, Editor, BigQuery Admin, BigQuery Data Editor, etc. If they have any of those extra roles, those permissions are what make them able to delete BigQuery tables.
Finally, double-check who is logged into the UI; you can check by clicking the photo at the top right corner of the GCP UI. The user should not be signed in with an account other than the expected one, e.g. myUser@emaildomain.com.
Hope this is helpful!