Get-AzureRmDataLakeStoreChildItem access issue - azure-powershell

I am trying to run this PowerShell cmdlet:
Get-AzureRmDataLakeStoreChildItem -AccountName "xxxx" -Path "xxxxxx"
It fails with an access error, which does not really make sense because I have complete access to the ADLS account and can browse it in the Azure portal. It does not even work with an AzureRunAsConnection from an Automation account, yet it works perfectly for my colleague. What am I doing wrong?
Error:
Operation: LISTSTATUS failed with HttpStatus:Forbidden
RemoteException: AccessControlException LISTSTATUS failed with error
0x83090aa2 (Forbidden. ACL verification failed. Either the resource
does not exist or the user is not authorized to perform the requested
operation.).
[1f6e5d40-9be1-4682-84be-d538dfca0d19][2019-01-24T21:12:27.0252648-08:00]
JavaClassName: org.apache.hadoop.security.AccessControlException.
Last encountered exception thrown after 1 tries. [Forbidden (
AccessControlException LISTSTATUS failed with error 0x83090aa2
(Forbidden. ACL verification failed. Either the resource does not
exist or the user is not authorized to perform the requested
operation.).
I don't see any firewall restrictions:

I resolved the problem by granting read and execute access on all parent folders in the path. Since ADLS uses the POSIX permission model, a folder does not inherit permissions from its parents, and every folder along a path needs at least execute permission for traversal. So even though the SPN (generated by the Automation account) I was using had read/execute access on the specific folder I was interested in, it did not have access to the other folders in that path.
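Since every segment of the path needs execute permission, the fix can be scripted with the AzureRM module. A minimal sketch, assuming placeholder values for the account name, target path, and the SPN's object ID:

```powershell
$account  = "xxxx"                                   # ADLS account name (placeholder)
$objectId = "00000000-0000-0000-0000-000000000000"   # object ID of the SPN (placeholder)
$target   = "/raw/project/data"                      # folder we actually need to list (placeholder)

# Grant Execute on the root and on every ancestor folder so the SPN can
# traverse the path down to the target.
Set-AzureRmDataLakeStoreItemAclEntry -AccountName $account -Path "/" `
    -AceType User -Id $objectId -Permissions Execute

$parts = $target.Trim('/').Split('/')
$current = ""
for ($i = 0; $i -lt $parts.Length - 1; $i++) {
    $current = "$current/$($parts[$i])"
    Set-AzureRmDataLakeStoreItemAclEntry -AccountName $account -Path $current `
        -AceType User -Id $objectId -Permissions Execute
}

# Read + Execute on the folder we want to enumerate with
# Get-AzureRmDataLakeStoreChildItem.
Set-AzureRmDataLakeStoreItemAclEntry -AccountName $account -Path $target `
    -AceType User -Id $objectId -Permissions ReadExecute
```

This only sets access ACLs on existing folders; default ACLs for items created later would need to be set separately.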

Related

gcloud instance in a project generates 403 error- for fast.ai setup

I'm trying to create an instance inside a project using the commands I've been given for setting things up for fastai, using Ubuntu 20.04 on Windows 10.
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--image-family=$IMAGE_FAMILY \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator="type=nvidia-tesla-p100,count=1" \
--machine-type=$INSTANCE_TYPE \
--boot-disk-size=200GB \
--metadata="install-nvidia-driver=True" \
--preemptible
The above request creates an error:
ERROR: (gcloud.compute.instances.create) Could not fetch resource: - The user does not have access to service account 'service-#############@compute-system.iam.gserviceaccount.com'. User: 'emailaddress@gmail.com'. Ask a project owner to grant you the iam.serviceAccountUser role on the service account
I'm kind of surprised I don't have access, as on the IAM page my email address, emailaddress@gmail.com, is listed as the owner. The active account through gcloud auth list is emailaddress@gmail.com. I tried adding --impersonate-service-account with service-##########@compute-system.iam.gserviceaccount.com to create the instance, but that was [403] forbidden with the explanation: Make sure the account that's trying to impersonate it has access to the service account itself and the "roles/iam.serviceAccountTokenCreator" role.
Even going by the https://cloud.google.com/ai-platform/deep-learning-vm/docs/cli page, these are the expected parameters. I can create an instance from https://console.cloud.google.com/home/ but I don't know how to modify the accelerator or a couple of other settings there.
There is also a warning, but I am fairly certain how to solve that issue: the fast.ai forum says to add billing to the account. I only include it for completeness, since I haven't changed it yet.
WARNING: Some requests generated warnings: - Disk size: '200 GB' is larger than image size: '50 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details
Edit in response to the comment:
Without the impersonation flag I received the same error.
With impersonation of the service-392038697283@compute-system.iam.gserviceaccount.com account I got:
ERROR: (gcloud.compute.instances.create) Error 403 (Forbidden) - failed to impersonate [service-392038697283@compute-system.iam.gserviceaccount.com]. Make sure the account that's trying to impersonate it has access to the service account itself and the "roles/iam.serviceAccountTokenCreator" role.
So I added the serviceAccountTokenCreator role to that account and got:
ERROR: (gcloud.compute.instances.create) Error 403 (Forbidden) - failed to impersonate [service-392038697283@compute-system.iam.gserviceaccount.com]. Make sure the account that's trying to impersonate it has access to the service account itself and the "roles/iam.serviceAccountTokenCreator" role.
I'm giving these roles time limits, as I think I just need to create the instance and that should be enough, but I'm still unable to create it.
Edit: as of 09 July 2020, I tried installing Ubuntu 18.04 from this FastAI forum thread, which helped with a previous error but added nothing to the current issue. I also tried going from WSL 2 to WSL 1, which didn't work either. While I would thoroughly suggest trying the suggestions in the comments, I just went with a different service.
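The first error message itself names the missing grant: the user needs roles/iam.serviceAccountUser on the service account the instance will run as. A hedged sketch of what a project owner could run (the service account email, project ID, and user email are placeholders):

```shell
# Grant the user permission to act as ("use") the service account named
# in the error. Replace the account, project, and email with real values.
gcloud iam service-accounts add-iam-policy-binding \
    service-123456789012@compute-system.iam.gserviceaccount.com \
    --project=my-project \
    --member="user:emailaddress@gmail.com" \
    --role="roles/iam.serviceAccountUser"
```

Note this binds the role on the service account itself, which is narrower than granting iam.serviceAccountUser project-wide.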

To perform this operation a successful bind must be completed on the connection

I am trying to get user attributes from LDAP using a user ID.
When I connect to the server and check conn.result, it shows success.
But when I do a search operation using
conn.search(base_dn, group_filter, subtree)
and then check conn.result again, it shows the error message
"To perform this operation a successful bind must be completed on the connection".
The odd thing is that I have code written in .NET that passes the same base_dn and group_filter, and it works there but not in Python.
Typically this error means you are using Microsoft Active Directory and have not performed a successful bind to the directory before searching.
We have Java examples for use with LDAP.
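In the ldap3 library (which the conn.search call suggests is being used), constructing a Connection does not bind by default; you must either call conn.bind() or pass auto_bind=True. A minimal sketch, assuming placeholder server, credentials, and base DN:

```python
from ldap3 import Server, Connection, SUBTREE

server = Server("ldap.example.com")  # placeholder AD host

# Without auto_bind=True (or an explicit conn.bind() call), the connection
# object exists but is NOT bound, and Active Directory rejects searches with
# "a successful bind must be completed on the connection".
conn = Connection(
    server,
    user="CN=svc_account,OU=Service,DC=example,DC=com",  # placeholder bind DN
    password="secret",                                   # placeholder password
    auto_bind=True,  # bind immediately; raises an exception on failure
)

base_dn = "DC=example,DC=com"
group_filter = "(&(objectClass=user)(sAMAccountName=jdoe))"  # placeholder filter
conn.search(base_dn, group_filter, search_scope=SUBTREE,
            attributes=["cn", "mail", "memberOf"])
print(conn.entries)
```

The .NET DirectoryEntry/LdapConnection APIs typically bind implicitly on first use, which would explain why the same base DN and filter work there while the unbound Python connection fails.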

kubernetes on gcp: removed role, account gone how to restore permissions?

Whilst 'hardening' the accounts (namely removing or toning down accounts with editor permissions on the project), I removed editor from what appears to be the Kubernetes account that Container Engine uses on the back end of gcloud commands.
Once you remove the last role from an account it vanishes from the IAM page, a hard lesson to learn!
Removed editor
serviceAccount:386242358897@cloudservices.gserviceaccount.com
It meant I initially couldn't deploy because it couldn't access container registry.
So I deleted the cluster and recreated expecting the account to get recreated. That failed due to insufficient permissions.
So I manually removed the compute instances (it wouldn't have had permissions to recreate them), then the templates, and then the cluster.
As the UI now thinks you have no clusters it looks like you are back to the beginning. So I ran my scripts and they failed.
ERROR: (gcloud.container.clusters.create) Operation [https://container.googleapis.com/v1/projects/xxxx/zones/europe-west2-b/operations/operation-xxxx'
startTime: u'2017-10-17T17:59:41.515667863Z'
status: StatusValueValuesEnum(DONE, 3)
statusMessage: u'Deploy error: "Not all instances running in IGM. Expect 1. Current actions &{Abandoning:0 Creating:0 CreatingWithoutRetries:0 Deleting:0 None:0 Recreating:1 Refreshing:0 Restarting:0 Verifying:0 ForceSendFields:[] NullFields:[]}. Errors [https://www.googleapis.com/compute/beta/projects/xxxx/zones/europe-west2-b/instances/gke-xxxx-default-pool-xxxx:PERMISSIONS_ERROR]".'
targetLink: u'https://container.googleapis.com/v1/projects/xxxx/zones/europe-west2-b/clusters/xxxx'
zone: u'europe-west2-b'>] finished with error: Deploy error: "Not all instances running in IGM. Expect 1. Current actions &{Abandoning:0 Creating:0 CreatingWithoutRetries:0 Deleting:0 None:0 Recreating:1 Refreshing:0 Restarting:0 Verifying:0 ForceSendFields:[] NullFields:[]}. Errors [https://www.googleapis.com/compute/beta/projects/xxxx/zones/europe-west2-b/instances/xxxx:PERMISSIONS_ERROR]".
Updated property [container/cluster].
when I try to create through UI I get this
Permission denied (HTTP 403): Google Compute Engine: Required 'compute.zones.get' permission for 'projects/xxxx/zones/us-central1-a'
Have done a number on it!
My problem is that I don't see a way of giving permissions back to whatever account it is trying to use (as I cannot see that account if it exists) nor can I see how to attach a new service account with permissions that are needed to whatever is doing the work under the hood.
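One possible way back, assuming the deleted binding was the editor role on the Google APIs service agent (the 386242358897@cloudservices.gserviceaccount.com account above), is to re-add the binding by project number; the account still exists even when the IAM page no longer lists it. Project ID is a placeholder:

```shell
# Look up the project number, then restore the editor binding on the
# Google APIs service agent that Container Engine relies on behind
# the scenes.
PROJECT_ID=my-project
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" \
    --format='value(projectNumber)')

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:${PROJECT_NUMBER}@cloudservices.gserviceaccount.com" \
    --role="roles/editor"
```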
UPDATE:
So ...
I recreated the account at the organisation level and gave it the service account role there, because you cannot modify the domain of the accounts at project level.
I then modified it at the project level to have editor permissions.
This means I can deploy a cluster, but I still cannot create a load balancer: insufficient permissions.
Error creating load balancer (will retry): Error getting LB for service default/bot: googleapi: Error 403: Required
'compute.forwardingRules.get' permission for 'projects/xxxx/regions/europe-west2/forwardingRules/xxxx', forbidden
the user having the problem this time is:
service-xxx@container-engine-robot.iam.gserviceaccount.com
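For the container-engine-robot account in particular, a hedged sketch of restoring its expected grant (project ID and number are placeholders; the role name assumes the current Kubernetes Engine service agent role):

```shell
# The GKE service agent normally holds roles/container.serviceAgent;
# re-adding it lets the robot account manage resources such as
# forwarding rules for load balancers.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:service-123456789012@container-engine-robot.iam.gserviceaccount.com" \
    --role="roles/container.serviceAgent"
```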
So ...
I played with recreating accounts etc. Eventually got Kubernetes working again.
A week later tried to use datastore and discovered that AppEngine was dead beyond dead.
The only recourse was to start a new project from scratch.
The answer to this question is (some may laugh at its self-evidence, but we are all in a rush at some point):
DO NOT CREATE USER ACCOUNTS OR GIVE THEM PERMISSIONS BEYOND WHAT THEY NEED BECAUSE DELETING THEM LATER IS REALLY NOT WORTH THE RISK.
Thank you for listening :D

FlowGear IMAP Watcher Node

I've configured an IMAP Watcher Node to point at a Gmail-based account. The Test of the connection returns success. However, when I run the node in the design tool, I get this error:
IMAP Watcher(v1.0.1.6): An operation requiring Environment permission
was denied. Check Node Connection and properties or consider using a
DropPoint.
I imagine the "permission" it needs is to move the processed message from the Watch folder to the Processed folder. It's connecting to the mailbox with the credentials of the mailbox owner. Why would permissions be denied?
Please help.

ORA-24247: network access denied by access control list (ACL)

I seem to be getting the above error. I tried sending a mail over the intranet as well, but to no avail.
Does the above error message mean that my mail program is correct, and the problem is with a restriction imposed on the user by the database administrator?
Taken from http://www.dba-oracle.com/t_ora_24247_network_access_denied_by_access_control_list_tips.htm:
ORA-24247: network access denied by access control list (ACL)
Cause: No access control list (ACL) has been assigned to the target
host or the privilege necessary to access the target host has not been
granted to the user in the access control list.
Action: Ensure that an access control list (ACL) has been assigned to
the target host and the privilege necessary to access the target host
has been granted to the user.
Your application will encounter an ORA-24247 error if it relies on one
of the network packages and no proper ACL has been created. For the
use of the following packages it is mandatory to have an ACL for the
application user in place in 11g:
UTL_TCP
UTL_SMTP
UTL_MAIL
UTL_HTTP
UTL_INADDR
Also read the following post by Ian Hoogeboom
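For 11g specifically, a minimal sketch of what the DBA would run to create and assign such an ACL for a mail-sending user (the user name, ACL file name, host, and port are placeholders):

```sql
-- Run as a privileged user (e.g. SYS). Creates an ACL granting the
-- application user the "connect" privilege, then assigns it to the
-- SMTP host and port the mail program uses.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'mail_server_acl.xml',
    description => 'Allow APP_USER to reach the mail server',
    principal   => 'APP_USER',          -- database user, upper case
    is_grant    => TRUE,
    privilege   => 'connect');

  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'mail_server_acl.xml',
    host       => 'smtp.example.com',
    lower_port => 25,
    upper_port => 25);

  COMMIT;
END;
/
```

The 'connect' privilege covers UTL_TCP, UTL_SMTP, UTL_MAIL, and UTL_HTTP; UTL_INADDR name resolution additionally needs the 'resolve' privilege.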