How to enable access to S3 bucket? - amazon-s3

I am trying to serve a static website from an S3 bucket.
Unfortunately, a 403 error with an Access Denied message is shown.
You can check it with the following endpoint:
http://sosanimalesco.s3-website-us-east-1.amazonaws.com/
Details:
My objects are public.
I've tried an Access Point policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Principal": "*",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
But it throws an 'Unsupported Resource ARN In Policy: The resource ARN is not supported for the resource-based policy attached to resource type S3 Access Point.' error.
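For reference, the bucket policy examples I've seen for website hosting attach directly to the bucket (not to an access point) and scope the Resource to the bucket ARN rather than "*"; a minimal public-read sketch, with the bucket name assumed from the endpoint above, looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sosanimalesco/*"
    }
  ]
}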
How do I enable access to the S3 bucket?

Related

Imagekit EACCES - Access denied by AWS S3. Check attached IAM policy on AWS

After setting up Imagekit to connect to the S3 bucket with an IAM policy that grants s3:GetObject on the bucket, I got an error when accessing the image through the Imagekit URL.
The error message is
EACCES - Access denied by AWS S3. Check attached IAM policy on AWS
Imagekit actually needs more than just the s3:GetObject action in the policy if the objects in your S3 bucket are server-side encrypted. It needs kms:Decrypt as well. This is not in their documentation as of 2022/06/16.
My IAM policy looks like the following, which lets Imagekit access the objects correctly.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ImagekitObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::[imagekit-bucket-name]/*"
      ]
    },
    {
      "Sid": "ImagekitObjectEncryptingKeyAccess",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": [
        "arn:aws:kms:us-east-1:187681360541:key/[object-encrypting-key-id]"
      ]
    }
  ]
}

AWS create user only with S3 permissions

I know how to create a user through the AWS Console in IAM, but I wonder where or how I should set the permissions so that the user can only:
upload/delete files to a specific folder in a specific S3 bucket
I have this bucket:
So I wonder if I have to set up the permissions in that interface, or directly on the user in the IAM service.
I'm creating a Group there with this policy:
but for "Write" and "Read" there are a lot of actions; which ones do I need just to write/read files in a specific bucket?
Edit: Currently I have this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::static.XXXXXX.com/images/carousel/*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "true"
        }
      }
    }
  ]
}
I wonder if that is enough to:
log in to the AWS Console
go to S3 buckets and delete/read objects in the folder of the bucket that I want
You can attach a custom policy to that user (Doc).
There you can choose the service, the actions you want to allow, and the resources that are whitelisted.
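If you prefer the CLI, a rough sketch of attaching such a policy inline to the user might look like this (the user name, policy name, and policy file are placeholders):
# Attach an inline policy from a local JSON file to an IAM user.
# "s3-uploader", "S3FolderAccess" and policy.json are placeholder names.
aws iam put-user-policy \
  --user-name s3-uploader \
  --policy-name S3FolderAccess \
  --policy-document file://policy.json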
You can use either a resource-based policy attached to the S3 bucket or an identity-based policy attached to an IAM user, group, or role.
Identity-based policies and resource-based policies
You can attach the identity policy below to the user to upload/delete files in a specific folder of a specific S3 bucket.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::SAMPLE-BUCKET-NAME/foldername/*"
    }
  ]
}
For more details, refer to Grant Access to User-Specific Folders in an Amazon S3 Bucket.
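If the user should also be able to browse that folder in the S3 console, a sketch of an extra ListBucket statement limited to the folder prefix (bucket and folder names are the same placeholders as above) could be added to the same policy:
{
  "Sid": "AllowListingOfFolder",
  "Effect": "Allow",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::SAMPLE-BUCKET-NAME",
  "Condition": {
    "StringLike": {
      "s3:prefix": [
        "foldername/*"
      ]
    }
  }
}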

ECS task accessing S3 bucket website with Block Public Access enabled: "Access Denied"

I have an ECS task configured to run an nginx container that I want to use as a reverse proxy to a S3 bucket website.
For security purposes, Block Public Access is turned on for the bucket, so I am looking for a way to give read access only to the ECS task.
I want my ECS task running an nginx reverse proxy to have s3:GetObject access to my website bucket. The bucket cannot be public, so I want to restrict access to the ECS task, using the ECS task IAM role as the Principal.
IAM role:
arn:aws:iam:::role/ was configured with an attached policy that allows all S3 actions on the bucket and its objects:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::<BUCKET>",
        "arn:aws:s3:::<BUCKET>/*"
      ]
    }
  ]
}
In Trusted Entities, I added permission for ECS tasks to assume the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The issue is that the EC2 target group health check is always returning Access Denied to the bucket and its objects.
[08/Jun/2020:20:33:19 +0000] "GET / HTTP/1.1" 403 303 "-" "ELB-HealthChecker/2.0"
I also tried to grant it permission by adding the bucket policy below, but I believe it is not needed, since the IAM role already has access to it…
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allowNginxProxy",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::<BUCKET>/*",
        "arn:aws:s3:::<BUCKET>"
      ]
    }
  ]
}
I have also tried using "AWS": "arn:aws:iam::<ACCOUNT_NUMBER>:role/<ECS_TASK_ROLE>" as the Principal.
Any suggestions?
Another possibility here:
Check whether your S3 objects are encrypted. If they are, your ECS task role also needs permission to decrypt them; otherwise you will also get a permission denied exception. One example can be found here.
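As a sketch, assuming the objects are encrypted with a customer-managed KMS key, the additional statement on the ECS task role might look like this (region, account number, and key ID are placeholders):
{
  "Sid": "AllowDecryptOfBucketObjects",
  "Effect": "Allow",
  "Action": "kms:Decrypt",
  "Resource": "arn:aws:kms:<REGION>:<ACCOUNT_NUMBER>:key/<KEY_ID>"
}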

Amazon S3 - Returns 403 error instead of 404 despite GetObject allowance

I've set up my S3 bucket with this tutorial to only accept requests from specific IP addresses. But even though those IPs are allowed to do GetObject, they get 403 errors instead of 404 for any files that are missing.
My updated bucket policy is (with fictitious IP addresses):
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.bucketname.com/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "100.100.100.0/22",
            "101.101.101.0/22"
          ]
        }
      }
    },
    {
      "Sid": "ListItems",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::www.bucketname.com",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "100.100.100.0/22",
            "101.101.101.0/22"
          ]
        }
      }
    }
  ]
}
(Updated with the ListBucket command, as pointed out by Mark B.)
I've found several related questions here on SO (like this and this), but their solutions are based on giving everyone permission to access the bucket's contents.
And that approach works, because if I lift my IP filter then 404 errors are given for missing files instead of 403. But that defeats the purpose of an IP filter.
I learned here that:
S3 returns a 403 instead of a 404 when the user doesn't have permission to list the bucket contents.
But I cannot find a way to have the bucket return 404 error codes for missing files without removing my IP whitelist filter. And that is with including the GetObject action for retrieving objects and ListBucket for listing them.
My reasoning is as follows: if the IP addresses are allowed to access the bucket's content, then shouldn't S3 generate a 404 error for these IPs instead of 403? How do I do that without removing my existing filter?
Note the documentation you quoted:
S3 returns a 403 instead of a 404 when the user doesn't have permission to list the bucket contents.
The GetObject permission you have granted only gives permission to get an object that exists; it does not give permission to list all the objects in a bucket. You would need to add the ListBucket permission to your bucket policy. See this page for the full list of S3 IAM permissions and the S3 operations they cover.
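As a sketch, reusing the fictitious bucket name and IP ranges from the question, an Allow statement for listing restricted to the whitelisted IPs could look like the following; allowing ListBucket is what lets S3 return 404 for missing keys:
{
  "Sid": "AllowListFromWhitelistedIPs",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::www.bucketname.com",
  "Condition": {
    "IpAddress": {
      "aws:SourceIp": [
        "100.100.100.0/22",
        "101.101.101.0/22"
      ]
    }
  }
}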
I've solved the problem of S3 issuing 403 instead of 404 errors not by changing the bucket policy, but by simply adding an 'Everyone' listing policy in the bucket settings:
I feel it's less elegant than setting the bucket policy, but it at least works now.
My accompanying bucket policy is still based on whitelisting only a few IPs:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::website-bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": [
            "10.1.1.0/22",
            "11.1.1.0/22"
          ]
        }
      }
    }
  ]
}
My issue was that my computer's clock was not set correctly (because of DST issues).

Copying data from S3 to Redshift - Access denied

We are having trouble copying files from S3 to Redshift. The S3 bucket in question allows access only from a VPC in which we have a Redshift cluster. We have no problems copying from public S3 buckets. We tried both the key-based and the IAM role-based approach, but the result is the same: we keep getting 403 Access Denied from S3. Any idea what we are missing? Thanks.
EDIT:
Queries we use:
1. (using IAM role):
copy redshift_table from 's3://bucket/file.csv.gz' credentials 'aws_iam_role=arn:aws:iam::123456789:role/redshift-copyunload' delimiter '|' gzip;
2. (using access keys):
copy redshift_table from 's3://bucket/file.csv.gz' credentials 'aws_access_key_id=xxx;aws_secret_access_key=yyy' delimiter '|' gzip;
The S3 policy for the IAM role (first query) and the IAM user (second query) is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt123456789",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
The bucket has a policy denying access from anywhere other than the VPC (the Redshift cluster is in this VPC):
{
  "Version": "2012-10-17",
  "Id": "VPCOnlyPolicy",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket/*",
        "arn:aws:s3:::bucket"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "vpc-123456"
        }
      }
    }
  ]
}
We have no problem loading from publicly accessible buckets, and if we remove this bucket policy we can copy the data with no problems.
The bucket is in the same region as the Redshift cluster.
When we run the IAM role (redshift-copyunload) through the policy simulator, it returns "permission allowed".
Enable "Enhanced VPC Routing" on your Redshift. Without the "Enhanced VPC Routing" your Redshift traffic will be coming via Internet and your S3 bucket policy will deny access. See here:
https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-enabling-cluster.html
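For example, on an existing cluster this can be done from the AWS CLI roughly like this (the cluster identifier is a placeholder):
# Turn on Enhanced VPC Routing for an existing cluster ("my-cluster" is a placeholder).
aws redshift modify-cluster \
  --cluster-identifier my-cluster \
  --enhanced-vpc-routing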
1. Check the encryption of the bucket. According to the doc https://docs.aws.amazon.com/en_us/redshift/latest/dg/c_loading-encrypted-files.html, the COPY command automatically recognizes and loads files encrypted using SSE-S3 and SSE-KMS.
2. Check the kms: permissions on your key/role.
3. If the files come from EMR, check the EMR security configurations for S3.