s3 access point: The authorization mechanism you have provided is not supported. Please use Signature Version 4

When I access my S3 object directly in the browser it works fine, but when I try to reach it via my access point, I get the following error:
<Error>
<Code>InvalidRequest</Code>
<Message>The authorization mechanism you have provided is not supported. Please use Signature Version 4.</Message>
<RequestId>EK0XC8M8N16CDR45</RequestId>
<HostId>KkuhlYIxABLkWyExmQetiM79WbFTfajARZE/z17CjOCcdiG2JcsUtmefn9OUvawMrQF8LwwpsTo=</HostId>
</Error>
My permission looks like the following:
{
  "Version": "2012-10-17",
  "Id": "ExamplePolicy01",
  "Statement": [
    {
      "Sid": "ExampleStatement01",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::testobjects1234/*",
        "arn:aws:s3:::testobjects1234"
      ]
    }
  ]
}
The existing question "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256" did not answer my question.
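For context, the error means the access point endpoint rejected a request that was unsigned (or signed with the older Signature Version 2); access point requests are expected to be signed with Signature Version 4 (AWS4-HMAC-SHA256), e.g. by an SDK or the aws CLI. A stdlib-only sketch of the SigV4 signing-key derivation step, with placeholder secret key and date values:

```python
import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    # One step of the SigV4 HMAC chain.
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signature_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    # Derive the SigV4 signing key: HMAC over the date, then the region,
    # then the service, then the literal string "aws4_request".
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

key = signature_key("EXAMPLE-SECRET-KEY", "20240101", "us-east-1", "s3")
print(len(key))  # 32 (HMAC-SHA256 output length)
```

In practice you would not hand-roll this; the point is that a plain browser GET sends no such signature, which is why the access point refuses it while the public bucket URL still works.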

Related

minio - s3 - bucket policy explanation

In MinIO, when you set a bucket policy to download with the mc command like this:
mc policy set download server/bucket
The policy of bucket changes to:
{
  "Statement": [
    {
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Resource": [
        "arn:aws:s3:::public-bucket"
      ]
    },
    {
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Resource": [
        "arn:aws:s3:::public-bucket/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
I understand that the second statement gives anonymous users read access so they can download files by URL. What I don't understand is why we also need to allow the actions s3:GetBucketLocation and s3:ListBucket.
Can anyone explain this?
Thanks in advance.
GetBucketLocation is required to find the region of a bucket in some setups, and is needed for compatibility with standard S3 tools such as the awscli and mc.
ListBucket is required to list the objects in a bucket. Without this permission you are still able to download objects, but you cannot list and discover them anonymously.
These are standard permissions that are safe to use and are set up automatically by the mc anonymous command (previously called mc policy). It is generally not necessary to change them, though you can do so by calling the PutBucketPolicy API directly.

Amazon S3 Can't Delete Object via API

I'm setting up a new policy so my website can store images on S3, and I'm trying to keep it as secure as possible.
I can put an object and read it, but cannot delete it, even though it appears I've followed Amazon's recommendations. I am not using versioning.
What am I doing wrong?
Here's my policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObjectAcl",
        "s3:GetObject",
        "s3:DeleteObjectVersion",
        "s3:PutLifecycleConfiguration",
        "s3:DeleteObject",
        "s3:ListObjects"
      ],
      "Resource": "*"
    }
  ]
}
After experimenting with multiple permission actions, it turned out I needed to add s3:ListBucket (s3:ListObjects is not an actual IAM action). Once added, I can now delete objects.
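For reference, a sketch of a tighter policy that should allow deletes (the bucket name my-website-images is a placeholder): note that s3:ListBucket must be granted on the bucket ARN itself, while the object-level actions apply to the objects under it.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-website-images/*"
    },
    {
      "Sid": "BucketAccess",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-website-images"
    }
  ]
}
```

Scoping the resources to the one bucket, rather than "Resource": "*", keeps the website's credentials from touching any other bucket in the account.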

Google Cloud Storage transfer from Amazon S3 - Invalid access key

I'm trying to create a transfer from my S3 bucket to Google Cloud - it's basically the same problem as in this question, but none of the answers work for me. Whenever I try to make a transfer, I get the following error:
Invalid access key. Make sure the access key for your S3 bucket is correct, or set the bucket permissions to Grant Everyone.
I've tried the following policies, to no success:
First policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:GetBucketLocation"
      ],
      "Resource": "*"
    }
  ]
}
Second policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Third policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket-name",
        "arn:aws:s3:::my-bucket-name/*"
      ]
    }
  ]
}
I've also made sure to grant the 'List' permission to 'Everyone'. Tried this on buckets in two different locations - Sao Paulo and Oregon. I'm starting to run out of ideas, hope you can help.
I know this question is over a year old, but I just encountered the same error when trying to do the transfer via the console. I worked around it by running the copy via the gsutil command-line tool instead.
After installing and configuring the tool, simply run:
gsutil cp -r s3://sourcebucket gs://targetbucket
Hope this is helpful!

IAM configuration to access jgit on S3

I am trying to create IAM permissions so jgit can access a directory in one of my buckets.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<mybucket>/<mydir>/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::<mybucket>/<mydir>"]
    }
  ]
}
Unfortunately it throws an error. I am not sure what other allow actions need to happen for this to work. (A little new at IAM).
Caused by: java.io.IOException: Reading of '<mydir>/packed-refs' failed: 403 Forbidden
at org.eclipse.jgit.transport.AmazonS3.error(AmazonS3.java:519)
at org.eclipse.jgit.transport.AmazonS3.get(AmazonS3.java:289)
at org.eclipse.jgit.transport.TransportAmazonS3$DatabaseS3.open(TransportAmazonS3.java:284)
at org.eclipse.jgit.transport.WalkRemoteObjectDatabase.openReader(WalkRemoteObjectDatabase.java:365)
at org.eclipse.jgit.transport.WalkRemoteObjectDatabase.readPackedRefs(WalkRemoteObjectDatabase.java:423)
... 13 more
Caused by: java.io.IOException:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>...</RequestId><HostId>...</HostId></Error>
at org.eclipse.jgit.transport.AmazonS3.error(AmazonS3.java:538)
... 17 more
The 403 Forbidden is obviously the error but not sure what needs to be added to the IAM. Any ideas?
[Should have added, too, that I tried this out in the policy simulator and it appeared to work there.]
The "403" error may simply mean that the key <mydir>/packed-refs doesn't exist. According to https://forums.aws.amazon.com/thread.jspa?threadID=56531:
Amazon S3 will return an AccessDenied error when a nonexistent key is requested and the requester is not allowed to list the contents of the bucket.
If you're pushing for the first time, that folder might not exist, and you would need s3:ListBucket privileges to get the proper NoSuchKey response. Note that s3:ListBucket applies to the bucket ARN itself, not to object paths, so try changing that first statement to:
{
  "Effect": "Allow",
  "Action": ["s3:ListBucket"],
  "Resource": ["arn:aws:s3:::<mybucket>"]
}
I also noticed that jgit push s3 refs/heads/master worked when jgit push s3 master did not.
To future folk: if all you want is to set up a bucket for git repositories with its own user, the following policy seems to be good enough:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>/*"
      ]
    }
  ]
}
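As a side note, jgit's S3 transport reads its credentials from a properties file (typically ~/.jgit, named in the remote URL, e.g. amazon-s3://.jgit@<bucketname>/<mydir>). A minimal sketch with placeholder values; the exact key names are from jgit's AmazonS3 transport documentation:

```
accesskey: AKIAEXAMPLEKEYID
secretkey: exampleSecretAccessKey
acl: private
```

The IAM user whose keys go in this file is the one that needs the policy above.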

I cannot access S3 even though I am allowed to

I am using AWS. I have this IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SSSSSS",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket1-name",
        "arn:aws:s3:::bucket1-name/*"
      ]
    }
  ]
}
I want to get an image from the bucket, but I am getting an Access Denied error. What is the problem here?
It turned out the image I was trying to get did not exist in the bucket. When I request an image that does exist in the specified bucket, I no longer get Access Denied. So the problem is that Amazon S3 does not distinguish between Access Denied and Object Not Found here: without s3:ListBucket permission, a request for a missing key returns 403 Access Denied rather than 404 Not Found.
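The behavior described above can be summarized as a small truth table. The helper below is a toy model (the function name is mine, not an AWS API) of how S3 chooses its error code for a GetObject on a missing key:

```python
def expected_s3_error(key_exists: bool, caller_can_list: bool):
    # Model of S3's documented masking behavior: a GetObject on a
    # missing key only reveals NoSuchKey (404) when the caller holds
    # s3:ListBucket on the bucket; otherwise S3 answers AccessDenied
    # (403) so that key names cannot be probed by unauthorized callers.
    if key_exists:
        return None  # request succeeds (s3:GetObject permission assumed)
    return "NoSuchKey" if caller_can_list else "AccessDenied"

print(expected_s3_error(key_exists=False, caller_can_list=False))  # AccessDenied
```

So if you want a missing object to surface as a clear 404 instead of a misleading Access Denied, grant s3:ListBucket on the bucket ARN in addition to s3:GetObject on the objects.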