Our AWS S3 bucket policy requires uploads to be encrypted in order to place objects into S3 (see the bucket policy below, which denies PutObject requests that do not carry the right server-side encryption header). I have a Terraform configuration that writes its state file to this S3 bucket. Unfortunately, the state upload is being denied because the request does not satisfy the encryption requirement.
Does anyone know if this is possible to achieve through Terraform?
Edit: adding the bucket policy.
{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
Edit: adding the backend (tfstate) configuration.
terraform {
  backend "s3" {
    bucket = "test/inf/"
    key    = "s3_vpc_endpoint.tfstate"
    region = "us-east-1"
  }
}
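For reference, the S3 backend can do this: setting encrypt = true makes Terraform write the state object with the x-amz-server-side-encryption: AES256 header, which is exactly what both Deny statements in the bucket policy above check for (the backend also talks to S3 over HTTPS, so an aws:SecureTransport condition would be satisfied as well). A minimal sketch, assuming the bucket is actually named test and inf/ is meant to be a key prefix, since the bucket argument takes only the bucket name:
terraform {
  backend "s3" {
    # Bucket name only; the path goes into the key (assumption: the bucket
    # is named "test" and "inf/" is a key prefix).
    bucket = "test"
    key    = "inf/s3_vpc_endpoint.tfstate"
    region = "us-east-1"

    # Write the state object with SSE-S3 (AES256), satisfying the
    # DenyIncorrectEncryptionHeader and DenyUnEncryptedObjectUploads statements.
    encrypt = true
  }
}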
Related
I am trying to write VPC Flow Logs (from account 1) to an S3 bucket (in account 2) using Terraform:
resource "aws_flow_log" "security_logs" {
log_destination = "arn:aws:s3:::my_vpcflowlogs_bucket"
log_destination_type = "s3"
vpc_id = var.vpc_id
traffic_type = "ALL"
}
resource "aws_iam_role" "vpc_flow_logs" {
name = "vpc_flow_logs"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "vpc-flow-logs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "write_vpc_flow_logs" {
name = "write_vpc_flow_logs"
role = aws_iam_role.vpc_flow_logs.id
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogDelivery",
"logs:DeleteLogDelivery"
],
"Resource": "arn:aws:s3:::my_vpcflowlogs_bucket"
}
]
})
}
Accounts 1 and 2 belong to the same organisation.
I am getting the following response:
Error creating Flow Log for (vpc-xxxxxxxxxxxx), error: Access Denied for LogDestination: my_vpcflowlogs_bucket. Please check LogDestination permission
How can I make this work? This bucket contains sensitive information, so I have restricted every kind of public access.
I am guessing that there is a way to allow certain principals to write into the bucket even from different accounts, but I am not sure how.
S3 Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryWrite",
      "Effect": "Allow",
      "Principal": {"Service": "delivery.logs.amazonaws.com"},
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my_vpcflowlogs_bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control",
          "aws:SourceAccount": <account-1-id>
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:logs::<account-1-id>:*"
        }
      }
    },
    {
      "Sid": "AWSLogDeliveryCheck",
      "Effect": "Allow",
      "Principal": {"Service": "delivery.logs.amazonaws.com"},
      "Action": ["s3:GetBucketAcl", "s3:ListBucket"],
      "Resource": "arn:aws:s3:::my_vpcflowlogs_bucket",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": <account-1-id>
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:logs::<account-1-id>:*"
        }
      }
    }
  ]
}
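For an S3 destination, the flow logs are delivered by the log delivery service (delivery.logs.amazonaws.com), so the bucket policy in account 2 is what grants write access; the IAM role and role policy shown above are only used for CloudWatch Logs destinations. If the destination bucket's policy is managed with Terraform from account 2, a hedged sketch could look like the following (resource names are placeholders, the region is assumed, <account-1-id> stays a placeholder, and it may be worth double-checking that aws:SourceArn includes the region):
# Hypothetical sketch, applied in account 2. Names and the region (us-east-1)
# are assumptions; <account-1-id> is kept as a placeholder.
resource "aws_s3_bucket_policy" "vpcflowlogs" {
  bucket = "my_vpcflowlogs_bucket"

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AWSLogDeliveryWrite",
        "Effect": "Allow",
        "Principal": { "Service": "delivery.logs.amazonaws.com" },
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my_vpcflowlogs_bucket/*",
        "Condition": {
          "StringEquals": {
            "s3:x-amz-acl": "bucket-owner-full-control",
            "aws:SourceAccount": "<account-1-id>"
          },
          "ArnLike": {
            "aws:SourceArn": "arn:aws:logs:us-east-1:<account-1-id>:*"
          }
        }
      },
      {
        "Sid": "AWSLogDeliveryCheck",
        "Effect": "Allow",
        "Principal": { "Service": "delivery.logs.amazonaws.com" },
        "Action": ["s3:GetBucketAcl", "s3:ListBucket"],
        "Resource": "arn:aws:s3:::my_vpcflowlogs_bucket",
        "Condition": {
          "StringEquals": { "aws:SourceAccount": "<account-1-id>" },
          "ArnLike": { "aws:SourceArn": "arn:aws:logs:us-east-1:<account-1-id>:*" }
        }
      }
    ]
  })
}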
I am trying to make an S3 bucket policy that only allows GetObject through CloudFront but still allows PutObject directly to the bucket.
I have tried several combinations, but none of them worked.
Here is my latest attempt.
Block All Public Access is set to ALL OFF.
Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1604429581591",
  "Statement": [
    {
      "Sid": "Stmt1605554261786",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MYBUCKET/*"
    },
    {
      "Sid": "Stmt1605557746418",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MYBUCKET/*"
    },
    {
      "Sid": "Stmt1605557857544",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:MYCLOUDFRONT"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MYBUCKET/*"
    }
  ]
}
This allows me to PutObject to the bucket, but GetObject through the CloudFront URL gets access denied. If I remove
{
  "Sid": "Stmt1605557746418",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::MYBUCKET/*"
}
I can GetObject through CloudFront, but also directly from the bucket.
Please help!
Found the solution.
First, follow the instructions here to set up CloudFront access to S3: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/. The key point is:
5. For Restrict Bucket Access, select Yes.
I was using Multer-S3 to upload my image files. The ACL needs to be set to:
acl: 'authenticated-read',
I am also using server-side encryption. In the S3 bucket's Properties => Default encryption:
Default encryption: Enabled
Server-side encryption: Amazon S3 master key (SSE-S3)
Multer-S3 config
serverSideEncryption: 'AES256',
Under the S3 bucket's permissions, Block all public access is turned off and the ACL only grants the bucket owner's permissions.
The final bucket policy I have is:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E36AHAEXL422P3"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MYBUCKET/*"
    },
    {
      "Sid": "Stmt1605745908405",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MYBUCKET/*",
      "Condition": {
        "StringEqualsIfExists": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
With all of the above configuration, this allows PutObject from anyone as long as the request uses server-side encryption with 'AES256'. GetObject requests made directly to the bucket are blocked; all GetObject requests need to go through CloudFront.
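If the bucket and distribution happen to be managed with Terraform, a rough sketch of the same Origin Access Identity setup could look like this (resource names and MYBUCKET are placeholders; the OAI's cloudfront_access_identity_path would then be referenced from the distribution's s3_origin_config block):
# Hypothetical sketch; MYBUCKET and the resource names are placeholders.
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "OAI for MYBUCKET"
}

resource "aws_s3_bucket_policy" "private_content" {
  bucket = "MYBUCKET"

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        # Only the CloudFront OAI may read objects.
        "Sid": "AllowCloudFrontRead",
        "Effect": "Allow",
        "Principal": { "AWS": aws_cloudfront_origin_access_identity.oai.iam_arn },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::MYBUCKET/*"
      },
      {
        # Anyone may upload, as long as the request uses SSE with AES256.
        "Sid": "AllowEncryptedUploads",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::MYBUCKET/*",
        "Condition": {
          "StringEqualsIfExists": { "s3:x-amz-server-side-encryption": "AES256" }
        }
      }
    ]
  })
}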
I am trying to ensure that objects uploaded to an S3 bucket are encrypted with ONLY a specific KMS key.
I have created a policy with separate deny conditions, but it does not seem to work. Can somebody suggest where I could be going wrong?
I tested this policy with AWS CLI -
aws s3api put-object --bucket test-buck --key testimage.jpg --body testimage.jpg --ssekms-key-id arn:aws:kms:us-east-1:account-id:key/NOT-MY-key-id --server-side-encryption aws:kms
And I'm able to upload testimage.jpg using another key from my account, despite the deny statements below.
The same policy works if I put it in the bucket policy, but here I want the policy attached to the role that is used to access S3.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test-buck/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test-buck/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:<account-id>:key/<my-default-kmskey-id>"
        }
      }
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Deny",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::test-buck/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
Also, how can I test whether non-SSL requests are being denied? I cannot use the AWS CLI because I think it uses SSL when communicating with AWS services by default.
Thanks in advance.
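One thing to keep in mind is that an identity policy attached to the role only affects requests made with that role's credentials, whereas a bucket policy applies to every principal; if the test upload was made with other credentials (for example an admin user rather than the role), the deny statements on the role never come into play. For reference, a hedged Terraform sketch of attaching the first two deny statements to a role (the role reference and resource names are placeholders, and <account-id>/<my-default-kmskey-id> stay placeholders as above):
# Hypothetical sketch; the role and the names are placeholders.
resource "aws_iam_role_policy" "deny_wrong_kms_key" {
  name = "deny_wrong_kms_key"
  role = aws_iam_role.s3_upload.id # placeholder role

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "DenyNonKmsUploads",
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::test-buck/*",
        "Condition": {
          "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" }
        }
      },
      {
        "Sid": "DenyWrongKmsKey",
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::test-buck/*",
        "Condition": {
          "StringNotEquals": {
            "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:<account-id>:key/<my-default-kmskey-id>"
          }
        }
      }
    ]
  })
}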
I have made an S3 bucket policy for read-only access from CloudFront. I want to block public access to the S3 bucket and only allow requests coming through CloudFront from specific referrer URLs; however, the bucket policy does not seem to be taking effect.
{
  "Version": "2012-10-17",
  "Id": "http referer",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.def.com and dev.def.com.",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::devmb/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.def.com/*",
            "http://dev.def.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXX"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::devmb/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://dev.xyx.com/*",
            "http://blog.xyz.com/*"
          ]
        }
      }
    }
  ]
}
My content is public right now.
aws:Referer does not work when using CloudFront because, by default, CloudFront does not forward the viewer's Referer header to the S3 origin, so the header never appears in the request that the bucket policy evaluates. You'll have to use S3 without CloudFront if you wish to restrict access using an aws:Referer condition in the bucket policy.
Is it possible (via IAM, bucket policy, or otherwise) to force Amazon S3 to only serve content over HTTPS/SSL and deny all regular, unencrypted HTTP access?
I believe this can be achieved using a bucket policy. Deny all HTTP requests to the bucket in question using the condition aws:SecureTransport: false.
The following is not tested but it should give you an idea of how to set it up for your case.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Deny",
      "Principal": "*",
      "Resource": "arn:aws:s3:::bucketname/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
Here you allow your incoming traffic but refuse the non-SSL requests. If you want to go back, just remove the second statement:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourbucketnamehere/*"
    },
    {
      "Sid": "DenyInsecureGetObject",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::yourbucketnamehere/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
Don't forget to put your own bucket name in place of yourbucketnamehere.
Now you need to install an SSL certificate. All the information can be found here.
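If the bucket happens to be managed with Terraform, a rough equivalent of the HTTPS-only rule could be expressed as below (the bucket name is a placeholder; the deny here is applied to both the bucket and its objects so that bucket-level operations over plain HTTP are refused as well):
# Hypothetical sketch; yourbucketnamehere is a placeholder.
resource "aws_s3_bucket_policy" "https_only" {
  bucket = "yourbucketnamehere"

  policy = jsonencode({
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
          "arn:aws:s3:::yourbucketnamehere",
          "arn:aws:s3:::yourbucketnamehere/*"
        ],
        "Condition": {
          "Bool": { "aws:SecureTransport": "false" }
        }
      }
    ]
  })
}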