Google Cloud Storage access issue - permissions

I am a Google Cloud project owner, but I am not able to access the files in my project's buckets. I am getting the error:
You need the storage.objects.list permission to list objects in this bucket. Ask a project or bucket owner to give you this permission and try again.
I am also unable to copy files from the bucket; that fails with the error The caller does not have permission.
I have verified I'm authenticated as the right user (gcloud auth list).
What is going on here?

Somehow I had lost the Storage Object permission on my bucket. The option to modify permissions wasn't visible to me either. I had to ask another project owner to add the Storage Object Admin permission for me on that bucket, and that fixed the problem.
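For reference, this is roughly what the other project owner has to do. A minimal sketch using the google-cloud-storage Python client, with a hypothetical bucket name and account email (the same grant can also be done in the console's Permissions tab):
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-project-bucket")  # hypothetical bucket name
policy = bucket.get_iam_policy(requested_policy_version=3)
# Grant bucket-level Storage Object Admin to the locked-out account.
policy.bindings.append({
    "role": "roles/storage.objectAdmin",
    "members": {"user:locked-out-owner@example.com"},  # hypothetical email
})
bucket.set_iam_policy(policy)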

Related

Multiple users uploading into the same storage account via desktop app

I would love to hear your ideas.
In this project, multiple users (let's say 1000 users) will upload files into the same storage account (AWS S3, Azure Blob Storage or DigitalOcean Spaces) using a Windows desktop app written in C#.
The desktop app authenticates users against a Web API.
Questions
Is it correct that each user will have his/her own bucket?
What is the best way to securely introduce API key and bucket information into the desktop app so that files will be uploaded to the correct bucket and storage account?
Think about the structure of your S3 bucket and how you would later identify each object that a user uploaded.
I would create an initial key prefix for each user, under which that user is able to upload their files, e.g.
username1/object1
username1/object2
username1/objectx
username2/object1
username3/object1
usernamex/objectx
This gives you the option, if a user is deleted, of simply deleting all objects under that username's prefix as well. If you are using a generated key to identify the user, then you can also use the keyID instead of the username.
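For example, a minimal boto3 sketch of deleting everything under a removed user's prefix (bucket name and prefix are hypothetical):
import boto3

s3 = boto3.client("s3")
bucket = "my-upload-bucket"   # hypothetical bucket name
prefix = "username1/"         # prefix of the user being removed
# Walk the user's prefix page by page and delete the objects in batches.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    objects = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if objects:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": objects})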
The most interesting question is how you will secure this so that no user is able to see objects belonging to other users. If you have an underlying API, then it's "easy": give the API the access to the S3 bucket and secure the requests so that only those objects are listed whose username or keyID matches.
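In Python terms, a sketch of how such an API endpoint might scope a listing (the helper name and bucket are made up for illustration):
import boto3

s3 = boto3.client("s3")

def list_user_objects(bucket, user_key):
    # The API derives user_key from the authenticated caller, never from
    # request input, so only that caller's own prefix is ever queried.
    response = s3.list_objects_v2(Bucket=bucket, Prefix=f"{user_key}/")
    return [obj["Key"] for obj in response.get("Contents", [])]

print(list_user_objects("my-upload-bucket", "username2"))  # hypothetical names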
If you are using IAM users (or roles), then you would have to automatically generate a policy for each base key (username1 or keyID) allowing only the specific actions.
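A sketch of what such a generated per-user policy document could look like (the bucket name and the allowed actions are assumptions; adjust to the operations you actually want to permit):
import json

def user_prefix_policy(bucket, user_key):
    # Allows uploads and reads only under this user's own prefix.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": f"arn:aws:s3:::{bucket}/{user_key}/*",
        }],
    }
    return json.dumps(policy)

print(user_prefix_policy("my-upload-bucket", "username1"))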
If you set up something like that, please be really sure to harden your security, and also try to enable logging on this bucket so you can verify that user1 can't access objects from user2.
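If you go down that route, a sketch of turning on server access logging with boto3 (bucket names are hypothetical, and the target log bucket must already allow the S3 log delivery service to write to it):
import boto3

s3 = boto3.client("s3")
# Send server access logs for the upload bucket into a separate log bucket.
s3.put_bucket_logging(
    Bucket="my-upload-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-upload-bucket-logs",
            "TargetPrefix": "access-logs/",
        }
    },
)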

Trying to access GCS bucket but getting a `403 - Forbidden` error message

I'm hoping to get help with the right permission settings for accessing my files from a Colab app.
Goal
I'd like to be able to access personal images in a GCS bucket from a Colab Python notebook running the "Style Transfer for Arbitrary Styles" demo of TensorFlow.
Situation
I set up a GCS bucket, made it public, and was able to retrieve files and use them in the demo.
To avoid having the GCS bucket publicly accessible, I removed allUsers and changed it to my account/email that's tied to both Colab and GCS.
That caused the following error message:
Error Messages
Exception: URL fetch failure on https://storage.googleapis.com/01_bucket-02/Portrait-Ali-02-PXL_20220105_233524809.jpg: 403 -- Forbidden
Other Approaches
I'm trying to understand how I should approach this.
Is it a URL problem?
The 'Authenticated URL' caused the above 403 error.
https://storage.cloud.google.com/01_bucket-02/Portrait_82A6118_r01.png
And the gsutil link:
gs://01_bucket-02/Portrait_82A6118_r01.png
Returned this error message:
Exception: URL fetch failure on gs://01_bucket-02/Portrait_82A6118_r01.png: None -- unknown url type: gs
Authentication setup
For IAM
I have a service account in the project, as well as my user account (email: d#arrovox.com) that's tied to both the Colab and GCP accounts.
The service account's role is Storage Admin, inherited from the project.
My user account (my email) has the Storage Object Viewer role.
Assessment
Seems like the Authenticated URL is the right one, and it's a permissions issue.
Is this just about having the right permissions set in GCS, or do I need to call anything in the code before trying to return the image at the GCS URL?
I'd greatly appreciate any help or suggestions in how to troubleshoot this.
Thanks
doug
storage.objects.get is the permission required for viewing files in GCS, but it looks like your user account or email already has the right permission.
How can you tell whether your account has the right permission?
I think there's a simple way to figure it out:
Copy your Authenticated URL.
Paste it into any browser window and open it while signed in as that account.
If your current account doesn't have the right permission, the page will return "#Gmail-account does not have storage.objects.get access to the Google Cloud Storage object."
Alternatively, you can visit the Permissions tab in the bucket details to check whether your email and service account are listed there with the right roles.
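You can also check from the notebook itself. A minimal sketch (the Colab auth step and the project ID are assumptions about your environment; the bucket and object names are taken from the question):
from google.colab import auth
from google.cloud import storage

auth.authenticate_user()  # use the signed-in Google account's credentials

client = storage.Client(project="your-project-id")  # hypothetical project ID
bucket = client.bucket("01_bucket-02")
# Returns the subset of these permissions your account actually holds;
# an empty list means you cannot read objects in this bucket.
print(bucket.test_iam_permissions(["storage.objects.get", "storage.objects.list"]))
# If the permission is there, the image can also be fetched through the client
# library instead of the public URL:
data = bucket.blob("Portrait-Ali-02-PXL_20220105_233524809.jpg").download_as_bytes()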

Access denied on S3 objects - these objects came from another account's AWS CodeBuild project

(+)
I just found a similar question and answer, with help from petrch (thanks!), and am trying to apply it:
CodeBuild upload build artifact to S3 with ACL
I'm updating accountB's S3 bucket from accountA's CodeBuild project.
The problem is that access is denied for all the objects uploaded by accountA's CodeBuild project.
My purpose is to use this S3 bucket for static hosting.
I set up all the requirements for static hosting, and it works fine when I upload a simple index.html manually.
But the individual objects from accountA's CodeBuild project show the error attached below.
ex) index.html properties & permission
I checked the Disable artifact encryption option in the artifact setting in the CodeBuild project.
and also on the override params,
encryptionDisabled: true
This CodeBuild project works fine when I save the output to an S3 bucket in the same account
(the S3 static hosting site in accountA is working well),
but I get the access issue in accountB's S3 bucket.
Before trying to touch the KMS policy, I want to know if I missed some configuration in CodeBuild.
Please advise me on what I have to do or what I missed...
Thanks.
Upload the objects with the bucket-owner-full-control canned ACL, otherwise the objects will still be "owned" by the source account.
See:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html
It says:
Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.
When you create a bucket or an object, Amazon S3 creates a default ACL that grants the resource owner full control over the resource. This is shown in the following sample bucket ACL (the default object ACL has the same structure)
So the object has the ACL set by the source account. It's not very obvious, but you can provide an ACL during the PutObject action from the source account, so it can still be just one call.
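As a plain API call, a minimal boto3 sketch of that one-call upload (the bucket name for accountB is hypothetical; getting CodeBuild's own artifact upload to apply the ACL is what the linked question above covers):
import boto3

# Runs with accountA's credentials, writing into accountB's bucket.
s3 = boto3.client("s3")
with open("index.html", "rb") as body:
    s3.put_object(
        Bucket="accountb-static-site",       # hypothetical name of accountB's bucket
        Key="index.html",
        Body=body,
        ContentType="text/html",
        ACL="bucket-owner-full-control",     # hands full control to the bucket owner
    )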

S3 Access Denied with boto for private bucket as root user

I am trying to access a private S3 bucket that I've created in the console with boto3. However, when I try any action, e.g. listing the bucket contents, I get an error:
import boto3

boto3.setup_default_session()
s3Client = boto3.client('s3')
blist = s3Client.list_objects(Bucket=bucketName)['Contents']  # bucketName holds the bucket's name
ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I am using my default profile (no need for IAM roles). The Access Control List shown in the console states that the bucket owner has list/read/write permissions. The canonical ID listed as the bucket owner is the same as the canonical ID I get when I go to 'Your Security Credentials'.
In short, it feels like the account permissions are OK, but boto is not logging in with the right profile. In addition, running similar commands from the command line, e.g.
aws s3api list-buckets
also gives Access Denied. I have no problem running these commands at work, where I have a work log-in and IAM roles; it's only with my personal 'default' profile that they fail.
Any suggestions?
It appears that your credentials have not been stored in a configuration file.
You can run this AWS CLI command:
aws configure
It will then prompt you for an Access Key and Secret Key, and will store them in the ~/.aws/credentials file. That file is automatically used by the AWS CLI and boto3.
It is a good idea to confirm that it works via the AWS CLI first, then you will know that it should work for boto3 also.
I would highly recommend that you create IAM credentials and use them instead of root credentials. It is quite dangerous if the root credentials are compromised. A good practice is to create an IAM User for each specific application, then limit the permissions granted to that application. This avoids situations where a programming error (or a security compromise) could lead to unwanted behaviour (e.g. resources being used or data being deleted).
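Once aws configure has stored the keys, a quick sanity check from Python that the right credentials are being picked up might look like this (the profile name "default" is an assumption; STS simply reports which principal the credentials resolve to):
import boto3

# Pick the profile explicitly so there is no doubt which credentials boto3 uses.
session = boto3.Session(profile_name="default")
# Shows the account and ARN these credentials actually belong to.
print(session.client("sts").get_caller_identity())
# Retry the S3 call with the same session.
s3 = session.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])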

AppVeyor cannot upload to S3

I've got an S3 access key and secret set up. I've tried the credentials locally with the AWS CLI program. However, when run on AppVeyor, the deployment gets permission denied, as follows:
Deploying using S3 provider
Uploading artifact "NOpenType/bin/Release/NOpenType.0.1.4-ci0187.nupkg" (25,708 bytes) to S3 bucket "nrasterizer-artifacts" as "master/NOpenType/bin/Release/NOpenType.0.1.4-ci0187.nupkg"
Access Denied
How do I resolve this and let AppVeyor upload to my bucket?
This could be due to any number of reasons:
Is the S3 provider properly configured? Obvious, but please recheck the key & secret, bucket names, etc.
Does the user have appropriate permissions? You did mention that you tested the credentials locally, but it could be that there is an S3 bucket policy which restricts uploads etc. to a set of specific IP addresses.
As I was using the set_public: true setting, I needed the s3:PutObjectAcl permission in addition to s3:PutObject.
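For reference, a sketch of attaching an inline policy that grants both actions to the IAM user whose keys AppVeyor uses (the user name and policy name are hypothetical; the bucket name is taken from the log above):
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],  # PutObjectAcl is needed for set_public: true
        "Resource": "arn:aws:s3:::nrasterizer-artifacts/*",
    }],
}
boto3.client("iam").put_user_policy(
    UserName="appveyor-deploy",        # hypothetical IAM user whose keys AppVeyor uses
    PolicyName="appveyor-s3-upload",   # hypothetical inline policy name
    PolicyDocument=json.dumps(policy),
)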