Get-AzRoleAssignment command returning users and service principals who were removed from RBAC permissions - azure-powershell

I am using Get-AzRoleAssignment to get RBAC details for a Data Lake Storage Gen1 resource.
Command :
Get-AzRoleAssignment -ResourceGroupName "test" -ResourceName "testResource" -ResourceType "Microsoft.DataLakeAnalytics/accounts"
The above command gives us the list of identities that have access to the mentioned resource. Since first using this command we have removed access for many of them, but the command still returns their names. I logged out and logged in multiple times to check whether it is a caching issue, but it made no difference.
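A hedged aside (not from the original question): role assignments whose principal has since been deleted from Azure AD are usually still listed, but with an empty DisplayName/SignInName and an ObjectType of Unknown, so inspecting those fields can show whether the lingering entries are orphaned assignments rather than stale cache. A minimal sketch, reusing the placeholders above:
$assignments = Get-AzRoleAssignment -ResourceGroupName "test" -ResourceName "testResource" -ResourceType "Microsoft.DataLakeAnalytics/accounts"
# Orphaned assignments (deleted principals) typically show ObjectType 'Unknown'
$assignments | Where-Object { $_.ObjectType -eq 'Unknown' } | Select-Object DisplayName, ObjectId, RoleDefinitionName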

Related

How to create a service account for a bigquery dataset from the cli

I've found instructions on how to generate credentials at the project level, but there aren't clear instructions on adding a service account to only a specific dataset using the CLI.
I tried creating the service account:
gcloud iam service-accounts create NAME
and then getting the dataset:
bq show \
--format=prettyjson \
project_id:dataset > path_to_file
and then adding a role to the access section
{
"role": "OWNER",
"userByEmail": "NAME#PROJECT.iam.gserviceaccount.com"
},
and then updating it. It seemed to work because I was able to create a table, but then I got an access denied error ("User does not have bigquery.jobs.create permission in project") when I tried loading data into the table.
When I inspected the project in the cloud console, it seemed as if my service account was added to the project rather than the dataset, which is not what I want, but it also does not explain why I don't have the correct permissions. In addition to owner permissions I tried assigning editor and admin permissions, neither of which solved the issue.
A service account cannot run queries with only dataset-level permissions. When a query is invoked, it creates a job, and to create a job the service account must have the bigquery.jobs.create permission granted at the project level. See the documentation for the permissions required to run a job.
With this in mind, you need to grant bigquery.jobs.create at the project level so the service account can run queries against the shared dataset.
NOTE: You can use any of the following pre-defined roles, as they all include bigquery.jobs.create (a quick way to verify this is sketched after the list).
roles/bigquery.user
roles/bigquery.jobUser
roles/bigquery.admin
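As a hedged aside (not part of the original answer), you can check that a predefined role really contains a given permission with gcloud; roles/bigquery.jobUser below is one of the three roles listed above:
# List the permissions bundled in the predefined role; the output should include bigquery.jobs.create
gcloud iam roles describe roles/bigquery.jobUser --format="value(includedPermissions)"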
With my example I used roles/bigquery.user. See steps below:
Create a new service account (bq-test-sa@my-project.iam.gserviceaccount.com)
Get the permissions on my dataset using bq show --format=prettyjson my-project:mydataset > info.json
Add OWNER permission for the service account in info.json
{
"role": "OWNER",
"userByEmail": "bq-test-sa@my-project.iam.gserviceaccount.com"
},
Update the permissions using bq update --source info.json my-project:mydataset
Check BigQuery > mydataset > "SHARE DATASET" to see if the service account was added.
Add role roles/bigquery.user to the service account using gcloud projects add-iam-policy-binding my-project --member=serviceAccount:bq-test-sa@my-project.iam.gserviceaccount.com --role=roles/bigquery.user
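As a hedged follow-up (my addition, not part of the original answer), one way to confirm the setup is to run a query as the service account; the key file and table name below are illustrative placeholders and assume a key has already been created for the account:
# Authenticate as the service account and run a test query against the shared dataset
gcloud auth activate-service-account bq-test-sa@my-project.iam.gserviceaccount.com --key-file=./bq-test-sa-key.json
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM `my-project.mydataset.mytable`'
# The query should now succeed because bigquery.jobs.create is granted at the project level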

Getting and creating stored access policy on a container results in a 404, resource not found

I have a Gen2 Azure storage account and am trying to create a stored access policy on a container using PowerShell. I am signed into the account and the relevant subscription is selected. I save the context in a variable for further use by the following statements.
Connect-AzAccount
Set-AzContext -Subscription "<subscriptionid>"
$context = New-AzStorageContext -StorageAccountName "MyStorageAccount" -UseConnectedAccount
Creating a stored access policy failed so I added one manually through the portal and tried to get a list of the policies:
Get-AzStorageContainerStoredAccessPolicy -Container "MyContainer" -Context $context
This resulted in the error "Get-AzStorageContainerStoredAccessPolicy: The specified resource does not exist.", the same error that the New-AzStorageContainerStoredAccessPolicy command yielded. Adding the name of the existing policy with the -Policy parameter did not change the outcome, and neither did changing the access level on the container.
I think I can also rule out typos as Get-AzStorageContainer "MyContainer" -Context $context gives me the details of the container as expected.
I am unclear as to what resource it is that does not exist, as the container clearly exists and it contains at least one stored access policy. Can the container stored access policy commands not be used on a Gen2 storage account, or am I missing something else?
This is because Get/New-AzStorageContainerStoredAccessPolicy isn't supported with OAuth authentication. You can find all operations supported with OAuth at https://learn.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations#microsoftstorage.
As a workaround, you can use connection string to do this:
Connect-AzAccount
Set-AzContext -Subscription "<subscriptionid>"
$context = New-AzStorageContext -ConnectionString "<Connection string>"
Get-AzStorageContainerStoredAccessPolicy -Container "<Container Name>" -Context $context
The same issue is tracked in GitHub: https://github.com/Azure/azure-powershell/issues/10391.
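As a hedged addition (not part of the original answer), the same workaround also works with the storage account key instead of a full connection string; the resource group name below is a placeholder:
$key = (Get-AzStorageAccountKey -ResourceGroupName "MyResourceGroup" -Name "MyStorageAccount")[0].Value
$context = New-AzStorageContext -StorageAccountName "MyStorageAccount" -StorageAccountKey $key
Get-AzStorageContainerStoredAccessPolicy -Container "MyContainer" -Context $context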

Prevent a user from deleting BigQuery tables

We're trying to create a very basic role that allows users to query BigQuery tables, but not delete them. The custom role we're experimenting with now has the following permissions:
- bigquery.jobs.create
- bigquery.jobs.get
- bigquery.jobs.list
- bigquery.jobs.listAll
- bigquery.readsessions.create
- bigquery.routines.get
- bigquery.routines.list
- bigquery.savedqueries.get
- bigquery.savedqueries.list
- bigquery.tables.export
- bigquery.tables.getData
- bigquery.tables.list
- bigquery.transfers.get
- resourcemanager.projects.get
We're only focusing on delete at this time, so the permissions list is a work in progress. There is only one custom role assigned to our test user, with the above permissions. However, the user can delete tables from our BigQuery dataset. Any idea on the correct combination of permissions to achieve our objective?
Thanks in advance!
You have listed 14 permissions and seem to be making an assumption these permissions allow BQ table deletion.
This assumption looks odd (clearly, bigquery.tables.delete is not on the list) and is in fact incorrect: a GCP IAM identity (a user or a service account) assigned a role comprised of these 14 permissions will be unable to delete BQ tables. This in turn means the identity you are testing with is assigned additional role(s) and/or permission(s) that are not accounted for.
To prove the assumption is incorrect, open the BigQuery console as a project administrator and click the Cloud Shell icon to start a Cloud Shell VM. Then execute the following commands at the prompt, replacing <project-name> with your project ID:
# Prove the current user is BQ admin by creating 'ds_test1' dataset,
# 'tbl_test1' table, then deleting and recreating the table
bq mk ds_test1
bq mk -t ds_test1.tbl_test1
bq rm -f -t ds_test1.tbl_test1
bq mk -t ds_test1.tbl_test1
# Create role `role_test1`
gcloud iam roles create role_test1 --project <project-name> --title "Role role_test1" --description "My custom role role_test1" --permissions bigquery.jobs.create,bigquery.jobs.get,bigquery.jobs.list,bigquery.jobs.listAll,bigquery.readsessions.create,bigquery.routines.get,bigquery.routines.list,bigquery.savedqueries.get,bigquery.savedqueries.list,bigquery.tables.export,bigquery.tables.getData,bigquery.tables.list,bigquery.transfers.get,resourcemanager.projects.get --stage GA
# Create service account 'sa-test1'
# It is a good security practice to dispose of it when testing is finished
gcloud iam service-accounts create sa-test1 --display-name "sa-test1" --description "Test SA sa-test1, delete it when not needed anymore" --project <project-name>
# Grant the role (and its permissions) to the service account
gcloud projects add-iam-policy-binding <project-name> --member=serviceAccount:sa-test1@<project-name>.iam.gserviceaccount.com --role projects/<project-name>/roles/role_test1
# Save the credential of the service account (including the security sensitive
# private key) to a disk file
gcloud iam service-accounts keys create ~/key-sa-test1.json --iam-account sa-test1@<project-name>.iam.gserviceaccount.com
# Impersonate the service account. This replaces the current permissions with
# that of the service account
gcloud auth activate-service-account sa-test1@<project-name>.iam.gserviceaccount.com --key-file=./key-sa-test1.json
# Confirm the ability to list tables
bq ls ds_test1
# Confirm inability to delete tables
# The command fails with error: BigQuery error in rm operation: Access Denied: Table <project-name>:ds_test1.tbl_test1: User does not have bigquery.tables.delete permission for table <project-name>:ds_test1.tbl_test1.
bq rm -f -t ds_test1.tbl_test1
# Close SSH connection to the VM and logoff
exit
To see the roles granted to the service account 'sa-test1' created above open Cloud Shell and execute:
gcloud projects get-iam-policy <project-name> --flatten="bindings[].members" --filter="bindings.members:serviceAccount:sa-test1@<project-name>.iam.gserviceaccount.com"
It should list our role projects/<project-name>/roles/role_test1.
To see the roles granted to the user who can delete tables execute:
gcloud projects get-iam-policy <project-name> --flatten="bindings[].members" --filter="bindings.members:user:<email-of-the-user>"
I did some tests on my end.
When a user has only the 14 listed permissions, they are not even able to see the BigQuery datasets in the UI. To do so, the bigquery.datasets.get permission must be added to the custom role.
Even with those 15 permissions, they are unable to delete BigQuery tables, so you are on the right path.
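A hedged sketch (my addition, reusing role_test1 from the previous answer as a placeholder custom role) of adding that permission from the command line:
gcloud iam roles update role_test1 --project <project-name> --add-permissions bigquery.datasets.get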
Being able to delete tables indicates that the user is not relying on the custom role alone: either the custom role has not been assigned correctly, or additional roles are granting extra permissions. Please:
Check that the Roles have been set correctly (my scenario with the 15 permissions). Be sure to save changes when assigning permissions to your Custom roles.
In your IAM Dashboard please double check that the user has this role linked to their account.
Also check that the user does not have additional roles like Owner, Editor, BigQuery Admin, or BigQuery Data Editor. If they have any of those extra roles, those permissions are what allow them to delete BigQuery tables.
Finally, double check who is logged into the UI; you can see this by clicking the avatar at the top right corner of the GCP console. The logged-in account should be your test user (for example, myUser@emaildomain.com), not a different account.
Hope this is helpful!

Can't create bucket without authentication

We updated our Couchbase from 4.6 Community Edition to 5.0.0-2873 Enterprise Edition for testing purposes, and our software using the java-client started throwing InvalidPasswordException when trying to open a bucket.
As I've found, every newly created bucket has authType='sasl' and a randomly generated saslPassword.
I've tried creating a bucket using the CLI instead of the GUI:
couchbase-cli bucket-create -c localhost:8091 -u Administrator -p password --bucket=general --bucket-ramsize=1300 --bucket-type=couchbase --bucket-password=
I got the following error:
ERROR: unrecognized arguments: --bucket-password=password
I also tried the bucket-edit function with the same result.
According to the documentation the argument should be valid.
I also tried using the REST API to change the bucket authentication (and similarly the password), but even though this didn't throw any errors, the authType and the password remained the same.
curl -X POST -u Administrator:password -d 'authType=none' http://<host>:8091/pools/default/buckets/general
Again, according to the documentation this should work.
If I query the bucket information for the saslPassword and provide that to the openBucket function, the connection works; however, we really don't want to use this approach in our system.
So, does anyone have other ideas on how to remove bucket authentication in our 5.0 EE Couchbase setup?
In Couchbase 5.0 we no longer support bucket passwords and have moved to role-based access control when connecting to buckets. This means that in 5.0 the standard (pre-production) way to connect to a bucket is by using the Administrator user and password that you created when setting up the cluster. In case you're unsure what the Administrator user is, it is the user you create when you first go through the Couchbase setup wizard, or the username and password you specify on the command line when running the couchbase-cli cluster-init command.
One thing to note is that using the Administrator user/password is the standard pre-production workflow. I would recommend that when you go into production you create separate users for your application which only have access to the cluster resources they need. You can do this by going to the Users tab in the Administration Console, creating a new user, and giving them the Full Bucket Access role, which is the standard role that applications should have.
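A hedged sketch (my addition, not from the original answer) of the CLI equivalent of that step, with placeholder user name and password and the bucket from the question:
couchbase-cli user-manage -c localhost:8091 -u Administrator -p password --set --rbac-username app_user --rbac-password 'app_password' --roles bucket_full_access[general] --auth-domain local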
You might now be saying to yourself that this all sounds great, but when I use the Administrator user/password I am still having issues. If this is the case, the reason is that you have Couchbase 5.0 but your SDK is not new enough to handle the new RBAC authentication mechanism in 5.0. The workaround for this is to create a user in the Users tab with the same name as the bucket and give that user the Full Bucket Access role. You can then use this user to authenticate.
One last thing to mention is that during an upgrade from a pre-5.0 cluster to a 5.0 cluster, Couchbase will automatically create a user for each bucket. Each user will have the same name as one of the buckets, and the password for that user will correspond to the bucket password. This is done mainly to ensure that there is no application downtime during an upgrade. After upgrading the cluster, the next step should ideally be to upgrade the Couchbase client library so it starts using RBAC authentication.
If you need to stay with the old approach and no password, you can use couchbase-cli with --rbac-username and an empty --rbac-password, but you need to explicitly specify the password as "", e.g.
./couchbase-cli user-manage -c localhost:8091 -u Admin -p password --set --rbac-username <UserForBucket> --roles bucket_full_access[<BucketName>] --rbac-password "" --auth-domain local

Using the sqlcmd command to write the result to a file in a shared path with username and password protection

I have a sqlcmd command which writes its result to a file placed in a shared folder.
exec xp_cmdshell 'sqlcmd -S $dataSource -d $dbName -i $inputFilePath -o $outputFilePath'
Now, what if the shared drive is protected and requires a username and password?
How can I supply credentials so that sqlcmd can get past the share's authentication?
xp_cmdshell will execute under the NT (Windows) credentials of:
- the impersonated login, if you are logged in using Windows credentials
- the service account, if you are logged in using SQL credentials and no explicit credential object exists
- the explicit credential, if you are logged in using SQL credentials associated with a credential (see CREATE CREDENTIAL)
If you insist on accessing a remote resource (the file share) using the default context, you're up the creek without a paddle, as impersonated access to remote resources is a 'double hop' and requires constrained delegation for at least one of the cases (logged in using NT).
A better option is to explicitly map the remote share \\server\share locally as a drive X: and then access drive X: instead. Mapping a drive locally allows persisted credentials to be stored, but you have to be careful to make sure the mapping is visible in the service account's session. Which is... basically impossible, see Map a network drive to be used by a service.
Now that you know why you cannot do this properly, and before you end up pulling your hair out from constantly fighting difficult-to-troubleshoot failures, stand back and look at the problem from a different angle: why do you want to use xp_cmdshell to call sqlcmd at all? Call sqlcmd directly, from a job/process. SQL Agent has all the support you need for this; just set the job to run under a proxy account with appropriate credentials to connect to both the remote share and the destination $dataSource.
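For completeness, a hedged sketch of the drive-mapping workaround described above, with a placeholder drive letter, share credentials and output file; every caveat about the service account session still applies, so the mapping may not be visible or persist reliably:
exec xp_cmdshell 'net use X: \\server\share sharepassword /user:DOMAIN\shareuser'
exec xp_cmdshell 'sqlcmd -S $dataSource -d $dbName -i $inputFilePath -o X:\result.txt'
exec xp_cmdshell 'net use X: /delete'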
Now that you know why you cannot do this properly and you'll be pulling your own hair, meanwhile turn white from constantly be fighting difficult to troubleshoot failures, stand back and look at the problem from a different angle: Why do you want to use xp_cmdshell to call sqlcmd? Call sqlcmd directly, from a job/process. SQL Agent has all the support for you need for this, just set the job to run under a proxy account with appropiate credentials to connect to both the remote share and the destination $datasource.