Google Cloud Storage transfer from Amazon S3 - Invalid access key

I'm trying to create a transfer from my S3 bucket to Google Cloud - it's basically the same problem as in this question, but none of the answers work for me. Whenever I try to make a transfer, I get the following error:
Invalid access key. Make sure the access key for your S3 bucket is correct, or set the bucket permissions to Grant Everyone.
I've tried the following policies, without success:
First policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    }
  ]
}
Second policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Third policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
I've also made sure to grant the 'List' permission to 'Everyone'. I've tried this on buckets in two different locations - Sao Paulo and Oregon. I'm starting to run out of ideas; I hope you can help.

I know this question is over a year old but I just encountered the same error when trying to do the transfer via the console. I worked around it by running the transfer with the gsutil command-line tool instead.
After installing and configuring the tool, simply run:
gsutil cp -r s3://sourcebucket/* gs://targetbucket
Hope this is helpful!
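As a side note, gsutil reads the S3-side credentials from its boto configuration file (typically ~/.boto); a minimal sketch, with placeholder values for the access key pair:

[Credentials]
# Placeholder values - replace with the access key pair of an IAM user that can read the source bucket.
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY

With that in place, the same cp command can read from s3:// URLs and write to gs:// URLs.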

Related

minio - s3 - bucket policy explanation

In MinIO, when you set the bucket policy to download with the mc command like this:
mc policy set download server/bucket
The bucket's policy changes to:
{
  "Statement": [
    {
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Resource": [
        "arn:aws:s3:::public-bucket"
      ]
    },
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Resource": [
        "arn:aws:s3:::public-bucket/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
I understand that in the second statement we give anonymous users read access so they can download the files by URL. What I don't understand is why we need to allow them the s3:GetBucketLocation and s3:ListBucket actions.
Can anyone explain this?
Thanks in advance
GetBucketLocation is required to find the location of a bucket in some setups, and is required for compatibility with standard S3 tools such as the AWS CLI and mc.
ListBucket is required to list the objects in a bucket. Without this permission you can still download objects, but you cannot list and discover them anonymously.
These are standard permissions that are safe to use and are set up automatically by the mc anonymous command (previously called mc policy). It is generally not necessary to change them, though you can do so by calling the PutBucketPolicy API directly.
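If you do want to set a custom policy directly, one option (a sketch, assuming a local MinIO server listening on http://localhost:9000, AWS CLI credentials configured for it, and the desired policy saved as policy.json) is the S3-compatible PutBucketPolicy call through the AWS CLI:

# Endpoint, bucket name, and policy file below are placeholders.
aws --endpoint-url http://localhost:9000 s3api put-bucket-policy --bucket public-bucket --policy file://policy.json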

ECS Task Access Denied to S3

I have an IAM role set for my task with the following permissions, yet I get access denied trying to access the buckets.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket/Templates/*",
        "arn:aws:s3:::bucket/*",
        "arn:aws:s3:::anotherBucket/*"
      ]
    }
  ]
}
The container instance has a role with the standard AmazonEC2ContainerServiceforEC2Role policy.
I seem to be able to read and write to folders under bucket/, like bucket/00001, BUT I can't read from bucket/Templates.
I've redeployed the permissions and the tasks repeatedly (using Terraform) but nothing changes. I've added logging to the app to ensure it's using the correct bucket and path / keys.
I'm stumped. Anyone got a clue what I might have missed here?
Thanks
PS: It just occurred to me - the files in the buckets I can't access are copied there using a script. This is done using credentials other than the ones the task is using.
aws s3 cp ..\Api\somefiles\000000000001\ s3://bucket/000000000001 --recursive --profile p
aws s3 cp ..\Api\somefiles\Templates\000000000001\ s3://bucket/Templates/000000000001 --recursive --profile p
I was using --acl bucket-owner-full-control on the cp command, but I removed that to see if it would help - it didn't. Maybe I need something else?
It works now because you changed the Resource to match "*".
Try adding the bucket itself as a resource, along with the /* pattern:
"Version": "2012-10-17",
"Statement": [
{
"Sid": "sid1",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:HeadBucket"
],
"Resource": "*"
},
{
"Sid": "sid2",
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket",
"arn:aws:s3:::bucket/*",
"arn:aws:s3:::anotherBucket"
"arn:aws:s3:::anotherBucket/*",
]
}
]
Solved. Found an old sample from a previous employer :) I needed a permission for List* explicitly, separate from the other permissions. I also needed to define the sids.
"Version": "2012-10-17",
"Statement": [
{
"Sid": "sid1",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:HeadBucket"
],
"Resource": "*"
},
{
"Sid": "sid2",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
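To double-check what a running task can actually do, one option (a sketch; the object key below is a placeholder) is to run the AWS CLI from inside the container with the task's credentials:

# Confirm the task is using the expected role rather than the instance role.
aws sts get-caller-identity
# Listing requires s3:ListBucket on arn:aws:s3:::bucket.
aws s3 ls s3://bucket/Templates/
# Downloading requires s3:GetObject on arn:aws:s3:::bucket/Templates/*.
aws s3 cp s3://bucket/Templates/000000000001/example.json .

If get-caller-identity reports the instance role instead of the task role, the task definition's taskRoleArn is not being picked up.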

Amazon S3 Can't Delete Object via API

I'm setting up a new policy so my website can store images on S3, and I'm trying to keep it as secure as possible.
I can put an object and read it, but cannot delete it, even though it appears I've followed the recommendations from Amazon. I am not using versioning.
What am I doing wrong?
Here's my policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:PutLifecycleConfiguration",
        "s3:DeleteObject",
        "s3:ListObjects"
      ],
      "Resource": "*"
    }
  ]
}
After screwing around with multiple permission actions, it turns out I needed to add s3:ListBucket and s3:ListObjects. Once added, I can now delete objects.
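As a quick check after updating the policy, deleting an object from the command line exercises the same permission (a sketch; the bucket and key are placeholders):

# Placeholder bucket and key.
aws s3api delete-object --bucket my-website-images --key uploads/example.jpg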

Amazon s3 user policies

I'm trying to define a policy for a specific user.
I have several buckets in S3, but I want to give the user access to only some of them.
I created the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::examplebucket"]
    }
  ]
}
When I try to add a list of resources like this:
"Resource":["arn:aws:s3:::examplebucket1","arn:aws:s3:::examplebucket2"]
I get access denied.
The only option that works for me (I get the bucket list) is:
"Resource": ["arn:aws:s3:::*"]
What's the problem?
Some Amazon S3 API calls operate at the Bucket-level, while some operate at the Object-level. Therefore, you will need a policy like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::test"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::test/*"]
    }
  ]
}
See: AWS Security Blog - Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
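To try this out, the policy can be attached to the user as an inline policy, for example with the AWS CLI (a sketch; the user name, policy name, and policy.json file are placeholders):

# policy.json holds the document above; the names are placeholders.
aws iam put-user-policy --user-name example-user --policy-name s3-bucket-access --policy-document file://policy.json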
I found that it's an AWS limitation.
There is no option to get a filtered list of buckets.
Once you grant ListAllMyBuckets permissions like this:
{
  "Sid": "AllowUserToSeeBucketListInTheConsole",
  "Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::*"]
}
you get the list of all buckets (including buckets you don't have permission to access).
More info can be found here: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
A few workarounds can be found here: Is there an S3 policy for limiting access to only see/access one bucket?

Amazon S3 bucket permission for unauthenticated cognito role user

I have set up an unauthenticated role under an Amazon Cognito identity pool. My goal is that guest users of my mobile app can upload debugging logs (small text files) to my S3 bucket so I can troubleshoot issues. I noticed that I get "Access Denied" from S3 if I don't modify my S3 bucket permissions. If I allow "Everyone" to have the "Upload/Delete" privilege, the file upload succeeds. My concern is that someone could then upload large files to my bucket and cause a security issue. What is the recommended configuration for my needs above? I am a newbie to S3 and Cognito.
I am using Amazon AWS SDK for iOS but I suppose this question is platform neutral.
Edit:
My policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:GetUser",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:DeleteObject",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::import-to-ec2-*", "arn:aws:s3:::<my bucket name>/*"]
    }
  ]
}
You don't need to modify the S3 bucket permission, but rather the IAM role associated with your identity pool. Try the following:
Visit the IAM console.
Find the role associated with your identity pool.
Attach a policy similar to the following to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::MYBUCKET/*"]
    }
  ]
}
Replace MYBUCKET with your bucket name.
Access your bucket as normal from your application using the iOS SDK and Cognito.
You may want to consider limiting permissions further, including ${cognito-identity.amazonaws.com:sub} to partition your users, but the above policy will get you started.
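For example, a statement along these lines (a sketch; MYBUCKET and the uploads/ prefix are hypothetical) limits each unauthenticated identity to writing under its own key prefix:

{
  "Effect": "Allow",
  "Action": ["s3:PutObject"],
  "Resource": ["arn:aws:s3:::MYBUCKET/uploads/${cognito-identity.amazonaws.com:sub}/*"]
}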
The answer above is incomplete as of 2015: you need to authorize BOTH the role AND the bucket policy in S3 to allow that role to write to the bucket. Use s3:PutObject in both cases. The console has wizards for both.
As @einarc said (I cannot comment yet), to make it work I had to edit both the role and the bucket policy. This is good enough for testing:
Bucket Policy:
{
  "Id": "Policy1500742753994",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1500742752148",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::admin1.user1",
      "Principal": "*"
    }
  ]
}
Authenticated role's policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}