How to write VPC flow logs to an S3 bucket on another AWS account? - amazon-s3

I am trying to write VPC Flow logs (from account 1) to an S3 bucket (on account 2), using terraform:
resource "aws_flow_log" "security_logs" {
log_destination = "arn:aws:s3:::my_vpcflowlogs_bucket"
log_destination_type = "s3"
vpc_id = var.vpc_id
traffic_type = "ALL"
}
resource "aws_iam_role" "vpc_flow_logs" {
name = "vpc_flow_logs"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "vpc-flow-logs.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "write_vpc_flow_logs" {
name = "write_vpc_flow_logs"
role = aws_iam_role.vpc_flow_logs.id
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogDelivery",
"logs:DeleteLogDelivery"
],
"Resource": "arn:aws:s3:::my_vpcflowlogs_bucket"
}
]
})
}
Account 1 & 2 belong to the same organisation.
I am getting the following response:
Error creating Flow Log for (vpc-xxxxxxxxxxxx), error: Access Denied for LogDestination: my_vpcflowlogs_bucket. Please check LogDestination permission
How can I make this work? This bucket contains sensitive information, so I have blocked every kind of public access to it.
I am guessing there is a way to allow certain principals to write to the bucket even from other accounts, but I don't know how.
S3 Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryWrite",
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my_vpcflowlogs_bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control",
          "aws:SourceAccount": <account-1-id>
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:logs::<account-1-id>:*"
        }
      }
    },
    {
      "Sid": "AWSLogDeliveryCheck",
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": ["s3:GetBucketAcl", "s3:ListBucket"],
      "Resource": "arn:aws:s3:::my_vpcflowlogs_bucket",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": <account-1-id>
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:logs::<account-1-id>:*"
        }
      }
    }
  ]
}
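For what it's worth, a minimal sketch of how a bucket policy like the one above could be attached from the same Terraform configuration, assuming a second provider alias for account 2 (which owns the bucket) and that the JSON above is saved next to the module as vpcflowlogs_bucket_policy.json; both names are illustrative, not from the original post:

# Hypothetical provider alias for account 2, the bucket owner.
provider "aws" {
  alias  = "logs_account"
  region = "us-east-1" # assumed region of the bucket
  # credentials / assume_role for account 2 go here
}

# Attach the delivery.logs.amazonaws.com bucket policy in account 2.
resource "aws_s3_bucket_policy" "vpcflowlogs" {
  provider = aws.logs_account
  bucket   = "my_vpcflowlogs_bucket"
  policy   = file("${path.module}/vpcflowlogs_bucket_policy.json")
}

With the bucket policy granting the delivery.logs.amazonaws.com service principal write access, the aws_flow_log resource in account 1 only needs the bucket ARN; the IAM role in the snippet above is typically only needed for CloudWatch Logs destinations, not S3.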

Related

How to enable encryption in transit via Terraform to AWS?

Our AWS S3 bucket policy requires encryption in transit in order to place objects in S3. I have a Terraform configuration written out that writes the state file to our S3 bucket. Unfortunately, it is not allowing me to do this because the upload does not have the required encryption.
Does anyone know if this is possible to achieve through Terraform?
Edit: Adding in the bucket policy.
{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
Edit: Adding in backend tfstate
terraform {
  backend "s3" {
    bucket = "test/inf/"
    key    = "s3_vpc_endpoint.tfstate"
    region = "us-east-1"
  }
}
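If the policy shown above is what is actually blocking the upload (it checks the AES256 server-side-encryption header rather than transport encryption; the backend already talks to S3 over HTTPS by default), the S3 backend does expose an encrypt flag that requests server-side encryption for the state object. A minimal sketch, assuming the bucket is actually named test and inf/ is meant to be part of the object key, since the bucket argument takes only the bucket name:

terraform {
  backend "s3" {
    bucket  = "test"                        # bucket name only, no path segments
    key     = "inf/s3_vpc_endpoint.tfstate" # the object path belongs in the key
    region  = "us-east-1"
    encrypt = true                          # request server-side encryption for the state object
  }
}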

Cannot put object to Amazon S3 bucket

I've been fighting with this for the last couple of days and I'm hopeless.
I use the Zend S3 library to access S3 via an IAM account.
I'm able to list and create buckets, but I cannot put any object, nor read the info ($s3->getInfo()) of a sample file I uploaded via the console.
I set my IAM account to full access (AmazonS3FullAccess) and also added my own policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccessPoint",
        "s3:PutAccountPublicAccessBlock",
        "s3:GetAccountPublicAccessBlock",
        "s3:ListAllMyBuckets",
        "s3:ListAccessPoints",
        "s3:ListJobs",
        "s3:CreateJob",
        "s3:HeadBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::*",
        "arn:aws:s3:*:*:accesspoint/*",
        "arn:aws:s3:*:*:job/*"
      ]
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::*/*"
    }
  ]
}
This is my PHP code:
$amazonKey = AWS_ACCESS_KEY;
$amazonSecret = AWS_SECRET_ACCESS_KEY;
$s3 = new Zend_Service_Amazon_S3($amazonKey, $amazonSecret);

// Listing available buckets works:
$this->view->content .= "-----------Available buckets:-------------<br>";
foreach ($s3->getBuckets() as $bucket)
    $this->view->content .= $bucket . "<br>";

// This doesn't work:
$amazonBucket = 'map-markers';
$object = $amazonBucket . "/myobject";
$result = $s3->putObject($object, "somedata");
if ($result === false)
    $this->view->content .= "putObject: " . $object . " FAIL!";
else
    $this->view->content .= "putObject: " . print_r($result, true) . "<br>";
I also tried the C++ Builder library for AWS, TAmazonStorageService:
if (s3->UploadObject(BUCKET_NAME, OBJ_NAME, AnsiString("test").BytesOf(), true, 0,
                     amzbaPrivate, amzrNotSpecified, ResponseInfo))
{
    TVarRec args[1] = {ResponseInfo->StatusMessage};
    Console("UploadObject: Upload to " + AnsiString(BUCKET_NAME) + " " + OBJ_NAME + " OK!");
}
else
{
    TVarRec args[1] = {ResponseInfo->StatusMessage};
    Console(Format(AnsiString("UploadObject Failure! %s"), args, 0));
}
UploadObject returns true along with "HTTP 200" in ResponseInfo, but the object is not created.
The bucket permissions are like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3Permissions",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::map-markers/*",
        "arn:aws:s3:::map-markers"
      ]
    }
  ]
}
Additionally, I unblocked all public access to the bucket.
I have no idea what else I could do.
Best regards,
Tom

S3 bucket policy multiple conditions

I'm looking to grant access to a bucket so that instances in my VPC have full access to it, along with machines in our data center. Without the aws:SourceIp line, I can restrict access to machines inside the VPC.
I need the policy to work so that the bucket is only accessible from machines within the VPC AND from my office.
{
  "Version": "2012-10-17",
  "Id": "Policy1496253408968",
  "Statement": [
    {
      "Sid": "Stmt1496253402061",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::xyz-sam-test/*",
        "arn:aws:s3:::xyz-sam-test"
      ],
      "Condition": {
        "StringLike": {
          "aws:sourceVpc": "vpc-dcb634bf",
          "aws:SourceIp": "<MY PUBLIC IP>"
        }
      }
    }
  ]
}
You can write a policy whose Effect is to Deny access to the bucket when a StringNotLike condition matches both keys, i.e. when the request comes neither from the VPC nor from your office IP.
{
  "Version": "2012-10-17",
  "Id": "Policy1496253408968",
  "Statement": [
    {
      "Sid": "Stmt1496253402061",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::xyz-sam-test/*",
        "arn:aws:s3:::xyz-sam-test"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:sourceVpc": "vpc-dcb634bf",
          "aws:SourceIp": "<MY PUBLIC IP>"
        }
      }
    }
  ]
}
The second condition could also be separated into its own statement. AWS applies a logical OR across the statements:
{
  "Version": "2012-10-17",
  "Id": "Policy1496253408968",
  "Statement": [
    {
      "Sid": "Stmt1496253402061",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::xyz-sam-test/*",
        "arn:aws:s3:::xyz-sam-test"
      ],
      "Condition": {
        "StringLike": {
          "aws:sourceVpc": "vpc-dcb634bf"
        }
      }
    },
    {
      "Sid": "Stmt1496253402062",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::xyz-sam-test/*",
        "arn:aws:s3:::xyz-sam-test"
      ],
      "Condition": {
        "StringLike": {
          "aws:SourceIp": "<MY PUBLIC IP>"
        }
      }
    }
  ]
}
AWS has predefined condition operators and keys (like aws:CurrentTime). Individual AWS services also define service-specific keys.
As an example, assume that you want to let user John access your Amazon SQS queue under the following conditions:
The time is after 12:00 p.m. on 7/16/2019
The time is before 3:00 p.m. on 7/16/2019
The request comes from an IP address within the range 192.0.2.0 to 192.0.2.255 or 203.0.113.0 to 203.0.113.255.
Your condition block has three separate condition operators, and all three of them must be met for John to have access to your queue, topic, or resource.
The following shows what the condition block looks like in your policy. The two values for aws:SourceIp are evaluated using OR. The three separate condition operators are evaluated using AND.
"Condition" : {
"DateGreaterThan" : {
"aws:CurrentTime" : "2019-07-16T12:00:00Z"
},
"DateLessThan": {
"aws:CurrentTime" : "2019-07-16T15:00:00Z"
},
"IpAddress" : {
"aws:SourceIp" : ["192.0.2.0/24", "203.0.113.0/24"]
}
}
reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_multi-value-conditions.html
This is an old question, but I think there is a better solution using newer AWS capabilities. In particular, I don't really like the Deny / StringNotLike combination, because a Deny in an S3 bucket policy can have unexpected effects, such as locking yourself out of your own bucket (which can only be fixed from the root account, and that may not be easily accessible in a professional context).
So the solution I have in mind is to use ForAnyValue in your condition, e.g. something like this:
{
  "Version": "2012-10-17",
  "Id": "Policy1496253408968",
  "Statement": [
    {
      "Sid": "Stmt1496253402061",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::xyz-sam-test/*",
        "arn:aws:s3:::xyz-sam-test"
      ],
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:sourceVpc": [
            "vpc-dcb634bf",
            "<MY PUBLIC IP>"
          ]
        }
      }
    }
  ]
}

Amazon S3 : How to allow access to a specific path, only for a specific referer?

I have an Amazon S3 bucket, mybucket, and only want to enable access to content in a specific nested folder (or, in S3 terms, with a specific "prefix").
I tried the following S3 bucket policy, but it doesn't work. After adding the condition I started getting access-denied errors in the browser.
{
  "Version": "2012-10-17",
  "Id": "Policy for mybucket",
  "Statement": [
    {
      "Sid": "Allow access to public content only from my.domain.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/public/content/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://my.domain.com/*"
          ]
        }
      }
    }
  ]
}
What should the policy look like to achieve this?
You need to split the policy into two statements: one to allow access to the folder (prefix), and one to deny access when the referer is not one of the whitelisted domains:
{
  "Version": "2012-10-17",
  "Id": "Policy for mybucket",
  "Statement": [
    {
      "Sid": "Allow access to public content",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/public/content/*"
    },
    {
      "Sid": "Deny access to public content when not on my.domain.com",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/public/content/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://my.domain.com/*"
          ]
        }
      }
    }
  ]
}

s3 bucket policy to add exception

Hi, I am trying to write the permissions policy for access to my bucket.
I want to deny access to a particular user agent and allow access to all other user agents. With the policy below, access is denied to everyone.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1456658595000",
      "Effect": "Deny",
      "Action": [
        "s3:*"
      ],
      "Condition": {
        "StringLike": {
          "aws:UserAgent": "NSPlayer"
        }
      },
      "Resource": [
        "arn:aws:s3:::bucket/"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
Please let me know how I should write the policy so that all user agents except that one can access the bucket.
It has to be written this way!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SID",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:UserAgent": "NSPlayer"
        }
      },
      "Resource": [
        "*"
      ]
    }
  ]
}
This solution works if bucket objects are not public read/write. One related answer is here: Deny access to user agent to access a bucket in AWS S3