How to GET data from S3 using Postman - amazon-s3

I am trying to get data from a file in an S3 bucket. In Postman I set the request method to GET for https://s3.eu-west-1.amazonaws.com/test-bucket-name/file-name and entered Access Key: xxx123, Secret Access Key: xxxx12322, AWS Region: eu-west-1, Service Name: s3.
In the response I am getting Access Denied. Do I need to do any pre-work to get the access keys working? They were generated for users and are used for CLI commands from my machine.
Many thanks in advance.

For this you have to make sure that you have an IAM user with the AmazonS3ReadOnlyAccess policy attached, so it can access the objects in the bucket.
I had a similar problem when I first started with AWS, and this link helped me:
http://raaviblog.com/how-to-get-aws-s3-bucket-object-data-using-postman/
Have a look, cheers.
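If the keys themselves are in doubt, a quick way to check them outside Postman is a short boto3 script like the sketch below (the bucket and key names are the placeholders from the question, and it assumes the same credentials are available via the environment or ~/.aws/credentials):

import boto3

# Minimal sketch: verify the same credentials can read the object,
# then generate a presigned URL that can be pasted straight into Postman.
s3 = boto3.client('s3', region_name='eu-west-1')

# Fails with AccessDenied if the IAM user lacks s3:GetObject on this object
obj = s3.get_object(Bucket='test-bucket-name', Key='file-name')
print(obj['ContentLength'], 'bytes readable')

# A presigned URL needs no AWS Signature setup in Postman at all
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'test-bucket-name', 'Key': 'file-name'},
    ExpiresIn=3600,
)
print(url)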

I was able to get the list of files in the bucket. The issue was with permissions on the file.

Related

Pass IAM credentials to a .NET Core API deployed with AWS Lambda

I have a .NET Core API that retrieves files from and uploads files to AWS S3. It works when I run it locally, since I have IAM credentials saved locally in another folder. However, when I deploy it as an AWS Lambda function and try to access S3, I get an AmazonS3Exception: "access denied". How can I set up access to IAM credentials remotely, as I have done locally?
You should be assigning an IAM role as the Lambda function's execution role. Your code should be able to pick that up and use it automatically. If your code isn't picking that up automatically then edit your question to show the relevant code.
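The question is about .NET, but for illustration the same principle looks like this in boto3: the client is created with no explicit credentials at all, and the SDK resolves them from the Lambda execution role (bucket and key names below are made up):

import boto3

# No keys passed anywhere; inside Lambda the execution role's credentials are used
s3 = boto3.client('s3')

def lambda_handler(event, context):
    # 'my-bucket' / 'my-key' are placeholders; AccessDenied here usually means
    # the execution role is missing s3:GetObject on this bucket/key
    obj = s3.get_object(Bucket='my-bucket', Key='my-key')
    return {'size': obj['ContentLength']}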

Trying to access GCS bucket but getting a `403 - Forbidden` error message

I'm hoping to get help with the right permission settings for accessing my files from a Colab app.
Goal
I'd like to be able to access personal images in a GCS bucket from a Colab Python notebook running the "Style Transfer for Arbitrary Styles" demo of TensorFlow.
Situation
I set up a GCS bucket, made it public, and was able to retrieve files and use them in the demo.
To avoid having the GCS bucket publicly accessible, I removed allUsers and changed to my account/email that's tied to both Colab and GCS.
That caused the following error message:
Error Messages
Exception: URL fetch failure on https://storage.googleapis.com/01_bucket-02/Portrait-Ali-02-PXL_20220105_233524809.jpg: 403 -- Forbidden
Other Approaches
I'm trying to understand how I should approach this.
Is it a URL problem?
The 'Authenticated URL' caused the above 403 error.
https://storage.cloud.google.com/01_bucket-02/Portrait_82A6118_r01.png
And the gsutil link:
gs://01_bucket-02/Portrait_82A6118_r01.png
Returned this error message:
Exception: URL fetch failure on gs://01_bucket-02/Portrait_82A6118_r01.png: None -- unknown url type: gs
Authentication setup
For IAM
I have a service account in the project, as well as my user account (email: d#arrovox.com) that's tied to both the Colab and GCP accounts.
The Service Account role is Storage Admin.
The Service Account inherits its permissions from the project.
My user account (my email) has the Storage Object Viewer role.
Assessment
Seems like the Authenticated URL is the right one, and it's a permissions issue.
Is this just about having the right permissions set in GCS, or do I need to call anything in the code before trying to return the image at the GCS URL?
I'd greatly appreciate any help or suggestions in how to troubleshoot this.
Thanks
doug
storage.objects.get is the permission required for viewing files in GCS, but it looks like your user account already has the right permission.
How can you check whether your account has the right permission?
There's a simple way to figure it out:
Copy your Authenticated URL.
Paste it into a browser and open it.
If your current account doesn't have the right permission, it will return #Gmail-account does not have storage.objects.get access to the Google Cloud Storage object.
Or you can open the bucket's Permissions tab to check whether your email and service account are listed there with the right roles.
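If the roles look right, the remaining piece is usually that the notebook has to present credentials instead of fetching the URL anonymously. A rough sketch of doing that in Colab with the google-cloud-storage client (the project ID below is a placeholder; the bucket and object names are taken from the question):

from google.colab import auth
from google.cloud import storage

# Opens the Colab sign-in flow for the account that has Storage Object Viewer
auth.authenticate_user()

# 'my-project-id' is a placeholder for your GCP project ID
client = storage.Client(project='my-project-id')
bucket = client.bucket('01_bucket-02')
blob = bucket.blob('Portrait_82A6118_r01.png')
blob.download_to_filename('Portrait_82A6118_r01.png')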

S3 Access Denied with boto for private bucket as root user

I am trying to access a private S3 bucket that I've created in the console with boto3. However, when I try any action, e.g. listing the bucket contents, I get:
import boto3

boto3.setup_default_session()
s3Client = boto3.client('s3')
blist = s3Client.list_objects(Bucket=bucketName)['Contents']
ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I am using my default profile (no need for IAM roles). The Access Control List on the browser states that the bucket owner has list/read/write permissions. The canonical id listed as the bucket owner is the same as the canonical id I get when I go to 'Your Security Credentials'.
In short, it feels like the account permissions are ok, but boto is not logging in with the right profile. In addition, running similar commands from the command line e.g.
aws s3api list-buckets
also gives Access Denied. I have no problem running these commands at work, where I have a work log-in and IAM roles. It's just running them on my personal 'default' profile.
Any suggestions?
It appears that your credentials have not been stored in a configuration file.
You can run this AWS CLI command:
aws configure
It will then prompt you for the Access Key and Secret Key, then store them in the ~/.aws/credentials file. That file is automatically used by the AWS CLI and boto3.
It is a good idea to confirm that it works via the AWS CLI first, then you will know that it should work for boto3 also.
I would highly recommend that you create IAM credentials and use them instead of root credentials. It is quite dangerous if the root credentials are compromised. A good practice is to create an IAM User for each specific application, then limit the permissions granted to that application. This avoids situations where a programming error (or a security compromise) could lead to unwanted behaviour (e.g. resources being used or data being deleted).
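One quick way to see which identity boto3 is actually resolving (and to force a particular profile) is a check along these lines; the profile name here is just 'default':

import boto3

# Shows the account and ARN boto3 is authenticating as; if this is not the
# identity you expect, the wrong credentials are being picked up
print(boto3.client('sts').get_caller_identity())

# Force a specific profile from ~/.aws/credentials if you have several
session = boto3.Session(profile_name='default')
s3 = session.client('s3')
print(s3.list_buckets()['Buckets'])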

Appveyor cannot upload to S3

I've got an S3 access key and secret set up. I've tried the credentials locally with the AWS CLI. However, when run on AppVeyor, the upload got Access Denied, as follows:
Deploying using S3 provider
Uploading artifact "NOpenType/bin/Release/NOpenType.0.1.4-ci0187.nupkg" (25,708 bytes) to S3 bucket "nrasterizer-artifacts" as "master/NOpenType/bin/Release/NOpenType.0.1.4-ci0187.nupkg"
Access Denied
How do I resolve this and let AppVeyor upload to my bucket?
This could be due to any number of reasons:
Is the S3 provider properly configured? Obvious, but please recheck the key & secret, bucket names, etc.
Does the user have appropriate permissions? You did mention that you tested the credentials locally, but it could be that there is an S3 bucket policy which restricts uploads to a specific set of IP addresses.
As I was using the set_public: true setting, I needed the s3:PutObjectAcl permission in addition to s3:PutObject.
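In SDK terms, the equivalent of set_public: true is passing an ACL with the upload, which is why the extra permission comes into play. A minimal boto3 sketch (the key and body are placeholders; the bucket name is the one from the log above):

import boto3

s3 = boto3.client('s3')

# Uploading with an ACL requires s3:PutObjectAcl in addition to s3:PutObject;
# without it this call fails with Access Denied even though a plain upload works
s3.put_object(
    Bucket='nrasterizer-artifacts',
    Key='master/example.nupkg',
    Body=b'example contents',
    ACL='public-read',
)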

Setting different S3 read permissions based on uploader

I'm trying to arrive at a situation where:
one class of users can upload files that are subsequently not publicly available,
another class of users can upload files that are publicly available.
I think I need to use two IAM users:
the first has putObject permissions only, and I bake its secret key into the JavaScript (I use the AWS SDK's putObject here, with the first secret key baked in);
for the other, I keep the secret key on the server and provide signatures for uploads to signed-in users of the right category (I ended up using a POST request with multipart form data for this, as I could not work out how to do it with the SDK other than baking in the second secret key, which would be bad, as files could then be uploaded and downloaded).
But I'm struggling to set up bucket permissions that allow some files to be publicly available while others are not available at all.
Is there a way, or do I need to use separate buckets?
Update
Based on the first comment, I tried adding "acl": "public-read" to my policy and to the POST form data fields. The signatures match correctly, but I now get a Forbidden response from AWS, which I don't get when this field is absent (though then the uploads are not publicly visible).
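For the server-signed upload path, one way to keep the acl field and the signature consistent is to have the server generate a presigned POST that pins the ACL. A rough sketch in boto3 (Python here purely for illustration; the bucket and key are placeholders), noting that, as in the AppVeyor answer above, the uploading identity also needs s3:PutObjectAcl for the ACL to be accepted:

import boto3

s3 = boto3.client('s3')

# The acl field must appear both in Fields (sent with the form) and in
# Conditions (covered by the signature), otherwise S3 rejects the POST
post = s3.generate_presigned_post(
    Bucket='my-upload-bucket',
    Key='uploads/my-file.png',
    Fields={'acl': 'public-read'},
    Conditions=[{'acl': 'public-read'}],
    ExpiresIn=3600,
)
print(post['url'])
print(post['fields'])  # include these as form fields alongside the file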