Creating an S3 policy with Terraform

I am trying to create an S3 bucket and apply a policy to it. Bucket creation works fine, but when I apply the policy below it fails, and I am not able to find the bug in this tf file.
The Terraform version is Terraform v0.12.23.
{
  "Sid": "DenyUnEncryptedConnection",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "*",
  "Resource": [
    "arn:aws:s3:::${var.s3_bucketName}",
    "arn:aws:s3:::${var.s3_bucketName}/*"
  ],
  "Condition": {
    "Bool": {
      "aws:SecureTransport": "false"
    }
  }
}
In my main.tf file, this is what I am passing to the variables:
module "s3-bucket-policy" {
  source        = "../s3-policy/"
  s3_bucketName = "${aws_s3_bucket.s3_bucket.id}"
  bucket_arn    = "${aws_s3_bucket.s3_bucket.arn}"
  ....
The terraform plan command gives me the plan below (running it through a Jenkins job; copied out of the Jenkins log):
# module.s3_bucket.module.s3-bucket-policy.aws_s3_bucket_policy.communication_policy[0] will be created
  + resource "aws_s3_bucket_policy" "communication_policy" {
      + bucket = (known after apply)
      + id     = (known after apply)
      + policy = (known after apply)
    }
But when I try to apply it, I get the error below and I am not sure how to proceed further.
Error: Error putting S3 policy: MalformedPolicy: Action does not apply to any resource(s) in statement
  status code: 400
Any pointers on this will be very much appreciated

You need to supply a proper Action compatible with your bucket. Change your policy to the following and it should work:
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-bucket"
acl = "public-read"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DenyUnEncryptedConnection",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
],
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
]
}
EOF
}
(I changed the resource to use bucket_arn but it should work with s3_bucketName the way you did it too.)
Note the "Action": "s3:*", this policy explicitly denies all actions on the bucket and objects when the request meets the condition "aws:SecureTransport": "false" (i.e. it's not an HTTPS connection).

Related

AWS Cognito IAM policy - How to limit access to S3 folder (.NET SDK ListObjectsV2)

I am trying to limit access for a Cognito user to specific folders in a bucket. The final target is to reach what is described here but I've simplified it for debugging.
The structure is as follows:
MYBUCKET/111/content_111_1.txt
MYBUCKET/111/content_111_2.txt
MYBUCKET/222/content_222_1.txt
MYBUCKET/333/
I am performing a simple "list" call via the SDK.
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
I am authenticating via Cognito, so I am updating the IAM policy linked to Cognito's authenticated role.
The following policy returns an S3 exception "Access Denied":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "111",
            "111/*"
          ]
        }
      }
    }
  ]
}
The following policy returns all results (as expected).
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET"
    }
  ]
}
This is supposed to be super straightforward (see here). There are other similar questions (such as this one and others) but with no final answer.
How should I write the IAM policy so that authenticated users can only access the contents of the folder "111"?
Best regards,
Andrej
I hope I now understand what I got wrong: "s3:prefix" is not some form of filter that will only return the objects matching the prefix; it is a parameter that forces the caller to provide specific prefix information when executing the operation (this is described in the S3 documentation).
To answer my own question, starting from the IAM policy above:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::MYBUCKET",
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "111",
            "111/*"
          ]
        }
      }
    }
  ]
}
If I call the SDK with the code below, I will indeed get an "Access Denied" because I have not specified a prefix that matches the IAM policy.
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
But if I do specify the prefix in my SDK call, S3 will return the expected results, i.e. only the ones starting with "111".
using (AmazonS3Client s3Client = new AmazonS3Client(cognitoAWSCredentials))
{
    ListObjectsV2Request listRequest = new()
    {
        BucketName = "MYBUCKET",
        Prefix = "111"
    };
    ListObjectsV2Response listResponse = await s3Client.ListObjectsV2Async(listRequest);
}
In other words, my problem was not in the way I had written the IAM policy but in the way I was expecting the "s3:prefix" to work.
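For completeness, the original goal (letting the user read only the contents of folder "111") usually needs a second statement granting s3:GetObject on the folder's objects, alongside the prefix-conditioned s3:ListBucket. A sketch of the combined policy, written as Terraform purely for illustration (the role name here is hypothetical; the question manages the role through Cognito/IAM directly):
# Hypothetical: role and policy names are assumed; the statements mirror the question.
resource "aws_iam_role_policy" "cognito_s3_folder" {
  name = "s3-folder-111"
  role = "my-cognito-authenticated-role"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Listing succeeds only when the caller passes a Prefix matching these values.
        Effect   = "Allow"
        Action   = "s3:ListBucket"
        Resource = "arn:aws:s3:::MYBUCKET"
        Condition = {
          StringLike = {
            "s3:prefix" = ["111", "111/*"]
          }
        }
      },
      {
        # Reading the objects under the folder.
        Effect   = "Allow"
        Action   = "s3:GetObject"
        Resource = "arn:aws:s3:::MYBUCKET/111/*"
      }
    ]
  })
}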

How to enable encryption in transit via Terraform to AWS?

Our AWS S3 bucket policy requires encryption in transit in order to place objects within S3. I have a Terraform configuration written out that will write its state file to our S3 bucket. Unfortunately, it is not allowing me to do this because the script does not use encryption in transit.
Does anyone know if this is possible to achieve through Terraform?
Edit: Adding in the bucket policy.
{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
Edit: Adding in backend tfstate
terraform {
  backend "s3" {
    bucket = "test/inf/"
    key    = "s3_vpc_endpoint.tfstate"
    region = "us-east-1"
  }
}
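Note that the two bucket policy statements above actually gate on the server-side encryption header (s3:x-amz-server-side-encryption), i.e. encryption at rest; transport to S3 is HTTPS by default. The S3 backend has an encrypt option that makes Terraform request AES256 server-side encryption for the state object, which is what this policy checks. A sketch of the backend block with that flag (also note that a bucket name cannot contain slashes, so the path portion belongs in key; bucket and key values are assumed from the snippet above):
terraform {
  backend "s3" {
    bucket  = "test"                        # bucket name only; slashes are not allowed here
    key     = "inf/s3_vpc_endpoint.tfstate" # path portion moved into the key
    region  = "us-east-1"
    encrypt = true                          # request AES256 server-side encryption for the state object
  }
}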

How to get S3 objects from a CodeBuild buildspec? (AccessDenied)

I have a CodePipeline pipeline with a CodeBuild stage.
Here is my buildspec:
{
  "version": "0.2",
  "phases": {
    "build": {
      "commands": [
        "echo \"Hello, CodeBuild!\"",
        "echo \"ca marche\" > test.txt",
        "mkdir site-content",
        "aws s3 sync s3://my-super-bucket-name site-content",
        "ls - al"
      ]
    }
  },
  "artifacts": {
    "files": [
      "test.txt"
    ]
  }
}
The build project's service role is defined with a default CDK-generated policy, plus this one:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-super-bucket-name",
        "arn:aws:s3:::my-super-bucket-name/* "
      ],
      "Effect": "Allow"
    }
  ]
}
And codebuild.amazonaws.com is a trusted entity for the role.
On the bucket side, I have this bucket policy:
{
  "Version": "2012-10-17",
  "Id": "PolicyXXXXXXXXXXXXX",
  "Statement": [
    {
      "Sid": "StmtYYYYYYYYYYYYY",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::12345678910:user/a-user-for-another-process"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-super-bucket-name"
    }
  ]
}
But the build project fails with this:
[Container] 2021/02/03 09:57:43 Running command aws s3 sync s3://my-super-bucket-name site-content
download failed: s3://my-super-bucket-name/test.txt to site-content/test.txt An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
Completed 4 Bytes/13.7 KiB (0 Bytes/s) with 4 file(s) remaining
Help!
EDIT:
I just added this statement to the bucket policy:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::XXXXXXXXXXXXX:role/my-role"
  },
  "Action": "s3:*",
  "Resource": "arn:aws:s3:::my-super-bucket-name"
}
But I have the same error :(
EDIT 2:
Silly me! It was:
"Resource": "arn:aws:s3:::my-super-bucket-name*"
Now it works!
You should modify the bucket policy to grant explicit access to your CodeBuild role, as privileges are checked first against the bucket policy. If there were no bucket policy attached to the bucket, then the way you were trying it would have worked.
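The EDIT 2 wildcard works because arn:aws:s3:::my-super-bucket-name* happens to match both the bucket ARN and the object ARNs. Listing the two resources explicitly says the same thing more readably; a sketch as a Terraform bucket policy, purely illustrative since the question provisions with CDK (account ID kept masked as in the question):
resource "aws_s3_bucket_policy" "codebuild_access" {
  bucket = "my-super-bucket-name"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::XXXXXXXXXXXXX:role/my-role" }
        Action    = "s3:*"
        # The bare bucket ARN covers ListBucket; the /* ARN covers GetObject and friends.
        Resource = [
          "arn:aws:s3:::my-super-bucket-name",
          "arn:aws:s3:::my-super-bucket-name/*"
        ]
      }
    ]
  })
}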

Connecting Aspera on Cloud with an S3 bucket

I used this policy on AWS to try connecting AoC with an S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::880559705280:role/atp-aws-us-east-1-ts-atc-node"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "sts:ExternalId": "crn:v1:bluemix:public:aspservice-service:global:a/2dd2425e9a424641a12855a1fd5e85ee:70740386-6ca4-4473-bf9b-69a1fd22be12:::c1893698-abfa-4934-a7ca-1a6d837df5e0"
        }
      }
    }
  ]
}
but when I copy it into the bucket policy, I receive "Error: Statement is missing required element."
What is wrong?
You need to paste this policy into the trust relationship of the IAM role (the Trust relationships tab on the role), not into the bucket policy. It is an sts:AssumeRole trust policy; a bucket policy additionally requires a Resource element, which is why S3 reports a missing required element.

S3 Invalid Resource in bucket policy

I'm trying to make my entire S3 bucket public, but when I try to add the policy:
{
  "Id": "Policy1454540872039",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1454540868094",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::sneakysnap/*",
      "Principal": {
        "AWS": [
          "985506495298"
        ]
      }
    }
  ]
}
It tells me that my "Resource is invalid", but that is definitely the right ARN and that is definitely the right bucket name. Does anyone know what's going on?
I had this "problem" when I was trying to set a policy on the wrong bucket. That is, my arn in the policy was reading arn:aws:s3:::my-bucket-A/* but I was attempting to set it on my-bucket-B
I solved the problem with this:
arn:aws:s3:::your-bucket-name-here/*
If you are creating a policy for an access point, it appears that AWS will only accept the following format:
i) the account ID and region must be specified; and
ii) the literal string object must be included (object is not my bucket name):
arn:aws:s3:region:accountid:accesspoint/myaccesspointname/object/*
I found this answer here: https://forums.aws.amazon.com/thread.jspa?threadID=315596
I faced the same issue, and the following fixed the error; I hope this helps anyone facing the same. For ELB access logs, you need to specify as the principal the account ID that corresponds to the region of your load balancer and bucket:
"Principal": {
  "AWS": [
    "*********"
See the Bucket Permissions section of Access Logs for Your Application Load Balancer and update the account ID accordingly.
I also had the same problem! I was using the wrong bucket name, so I corrected it, and it worked for me. Best of luck!
I was getting this error as well. The following change fixed it, though I have no idea why.
This bucket threw the error: bleeblahblo-stuff
This worked: bleeblahblostuff
Maybe it was the dash, maybe the bucket length, or maybe a combination of the two? Both buckets had the same settings. Hmmm.
I was facing the same problem: I was not using the correct resource name. I changed the resource name to exactly that of the bucket I was creating the bucket policy for, e.g.
"Resource": "arn:aws:s3:::abc/*"
to
"Resource": "arn:aws:s3:::abc12/*"
My problem was that when I created my S3 bucket, by default the following were true:
Manage public access control lists (ACLs):
  Block new public ACLs and uploading public objects (Recommended): True
  Remove public access granted through public ACLs (Recommended): True
Manage public bucket policies:
  Block new public bucket policies (Recommended): True
  Block public and cross-account access if bucket has public policies (Recommended): True
I had to set these all to false in order for me to change my bucket policy.
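If the bucket is managed with Terraform, as elsewhere on this page, these four console switches correspond to the aws_s3_bucket_public_access_block resource; a sketch, assuming a bucket resource named example:
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id # assumes a bucket resource named "example"

  # Each flag mirrors one of the console switches listed above.
  block_public_acls       = false
  ignore_public_acls      = false
  block_public_policy     = false
  restrict_public_buckets = false
}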
If you're trying the AWS startup workshop, try closing the website-bucket-policy.json file and re-opening it. That worked for me; I guess the update of the JSON file is not saved automatically unless you close it.
Check whether the bucket name you are specifying in Resource actually exists. The answer above from Vitaly solved my issue.
The problem I realized I had was that my bucket name had a ".com" extension, which needs to be included in the ARN.
To add to iamsohel's answer: I had this same issue when trying to set an S3 policy for enabling Elastic Load Balancer access logs using Terraform.
Here's the policy I was trying to set, from Access Logs for Your Application Load Balancer:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::elb-account-id:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::bucket-name"
    }
  ]
}
But I wanted to add some variables to the policy. My initial policy looked like this:
bucket_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}"
    }
  ]
}
EOF
But this was throwing an error:
Error: Error putting S3 policy: MalformedPolicy: Policy has invalid resource
│ status code: 400, request id: 3HHH9QK9SKB1V9Z0, host id: 8mOrnGi/nsHIcz59kryeriVExU7v+XgGpTw64GHfhjgkwhA3WKSfG7eNbgkMgBMA8qYlyUTLYP8=
│
│ with module.s3_bucket_policy_1.aws_s3_bucket_policy.main,
│ on ../../../../modules/aws/s3-bucket-policy/main.tf line 1, in resource "aws_s3_bucket_policy" "main":
│ 1: resource "aws_s3_bucket_policy" "main" {
All I had to do was add /* to the end of the ARN for the bucket resource:
bucket_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}/*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::${var.bucket_name.2}"
    }
  ]
}
EOF
In my case it was the missing GovCloud partition in the ARN, so the resource had to be
"arn:aws-us-gov:s3:::grcsimpletest"
rather than
"arn:aws:s3:::grcsimpletest"
Strangely, the policy that failed was from an AWS doc.... That said, it kind of clicked when I edited the policy in the S3 console and it showed the bucket ARN on the edit screen.
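To avoid hard-coding the partition, Terraform can look it up with the aws_partition data source; a small sketch using the bucket name from this answer:
data "aws_partition" "current" {}

locals {
  # Yields arn:aws-us-gov:s3:::grcsimpletest in GovCloud and arn:aws:s3:::grcsimpletest in commercial AWS.
  bucket_arn = "arn:${data.aws_partition.current.partition}:s3:::grcsimpletest"
}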