Terraform: error complying with Lambda function name restrictions - amazon-s3

In Terraform, I am trying to add an S3 bucket as a trigger to my Lambda function and grant the required permissions. For this use case I create the S3 resource and reference the Lambda function in the notification logic, but the code fails with the error below.
# Creating Lambda resource
resource "aws_lambda_function" "test_lambda" {
  filename      = "output/welcome.zip"
  function_name = var.function_name
  role          = var.role_name
  handler       = var.handler_name
  runtime       = var.run_time
}

# Creating S3 resource for invoking the Lambda function
resource "aws_s3_bucket" "bucket" {
  bucket = "source-bucktet-testing"
  acl    = "private"

  tags = {
    Name        = "source-bucktet-testing"
    Environment = "Dev"
  }
}

# Adding S3 bucket as trigger to my lambda and giving the permissions
resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = aws_s3_bucket.bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.test_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::${aws_s3_bucket.bucket.id}"
}
Error Message:

The value passed to the aws_lambda_function resource for function_name is invalid. An AWS Lambda function name can only contain letters, numbers, hyphens, or underscores with no spaces. You need to change the value of var.function_name to align with these restrictions.

Your var.function_name value must be invalid.
The allowed function name format, together with the ARN form, is documented for the Lambda FunctionName parameter:
The length constraint applies only to the full ARN. If you specify only the function name, it is limited to 64 characters in length.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 140.
Pattern: (arn:(aws[a-zA-Z-]*)?:lambda:)?([a-z]{2}(-gov)?-[a-z]+-\d{1}:)?(\d{12}:)?(function:)?([a-zA-Z0-9-_]+)(:(\$LATEST|[a-zA-Z0-9-_]+))?
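If you want Terraform to catch this before the provider does, a custom validation block on the variable is one option. A minimal sketch, assuming Terraform 0.13+ and using a simplified name-only form of the pattern above (the description text is illustrative):
variable "function_name" {
  type        = string
  description = "Lambda function name (letters, numbers, hyphens, underscores only)"

  validation {
    # Simplified name-only check: up to 64 chars, no ARN prefix or qualifier
    condition     = can(regex("^[a-zA-Z0-9_-]{1,64}$", var.function_name))
    error_message = "Function name may only contain letters, numbers, hyphens, or underscores, up to 64 characters."
  }
}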

Related

Terraform S3 notification for multiple suffixes

I have one S3 bucket and want to trigger the same Lambda for two different object-created scenarios.
I tried declaring multiple aws_s3_bucket_notification blocks, but only one was created; the Terraform docs note:
S3 Buckets only support a single notification configuration. Declaring multiple aws_s3_bucket_notification resources to the same S3 Bucket will cause a perpetual difference in configuration. See the example "Trigger multiple Lambda functions" for an option.
I tried an OR expression, but Terraform didn't like that either:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = var.my_bucket_id
depends_on = [aws_lambda_permission.lambda_perm]
lambda_function {
lambda_function_arn = "my functions arn"
events = ["s3:ObjectCreated:*"]
filter_prefix = "content/"
filter_suffix = "-webpage.html" || "-image.png"
}
}
I need a way to match two different suffixes. Thanks.
You need multiple lambda_function blocks:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = var.my_bucket_id
depends_on = [aws_lambda_permission.lambda_perm]
lambda_function {
lambda_function_arn = "my functions arn"
events = ["s3:ObjectCreated:*"]
filter_prefix = "content/"
filter_suffix = "-webpage.html"
}
lambda_function {
lambda_function_arn = "my functions arn"
events = ["s3:ObjectCreated:*"]
filter_prefix = "content/"
filter_suffix = "-image.png"
}
}
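If the list of suffixes grows, a dynamic block over a local list avoids repeating the lambda_function block by hand. A minimal sketch under that assumption (the local name is illustrative, and the ARN placeholder is kept from the answer above):
locals {
  notification_suffixes = ["-webpage.html", "-image.png"]
}

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket     = var.my_bucket_id
  depends_on = [aws_lambda_permission.lambda_perm]

  # One lambda_function block is generated per suffix in the list
  dynamic "lambda_function" {
    for_each = local.notification_suffixes
    content {
      lambda_function_arn = "my functions arn"
      events              = ["s3:ObjectCreated:*"]
      filter_prefix       = "content/"
      filter_suffix       = lambda_function.value
    }
  }
}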

Multi S3 bucket Definition with versioning limitations

I was reading this post: Terraform - creating multiple buckets
and was wondering how I could add a condition to enable bucket versioning on one of the buckets and disable it on the rest, using Terraform conditionals or anything else that would make it work.
I was trying something like this, but it is not working:
variable "s3_bucket_name" {
type = "list"
default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}
resource "aws_s3_bucket" "henrys_bucket" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
force_destroy = "true"
var.s3_bucket_name[count.index] != "target-bucket-name" versioning { enabled = true } : versioning { enabled = false }
}
You can use a list of objects instead of just a list of bucket names. Each object can hold the bucket name and a versioning_enabled flag, and the resource then reads both fields.
Something like:
bucket = var.s3_buckets[count.index].bucket_name
For versioning, add a dynamic block driven by var.s3_buckets[count.index].versioning_enabled, like below:
dynamic "versioning" {
for_each = var.s3_buckets[count.index].versioning_enabled== true ? [1] : []
content {
enabled = true
}
}
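Put together, a minimal sketch of that approach, assuming the pre-4.0 AWS provider where versioning is still an inline block (the bucket names below are illustrative):
variable "s3_buckets" {
  type = list(object({
    bucket_name        = string
    versioning_enabled = bool
  }))
  default = [
    { bucket_name = "prod-bucket-example", versioning_enabled = true },
    { bucket_name = "stage-bucket-example", versioning_enabled = false },
    { bucket_name = "qa-bucket-example", versioning_enabled = false },
  ]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = length(var.s3_buckets)
  bucket        = var.s3_buckets[count.index].bucket_name
  acl           = "private"
  force_destroy = true

  # Versioning block is only rendered for buckets flagged with versioning_enabled
  dynamic "versioning" {
    for_each = var.s3_buckets[count.index].versioning_enabled ? [1] : []
    content {
      enabled = true
    }
  }
}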

I am trying to add custom validation for variables in my Terraform script using map but I am facing an error

I am trying to add custom validation for the variables in my Terraform script for an S3 bucket, but I am getting the error below:
Reference to undeclared input variable
on main.tf line 2, in resource "aws_s3_bucket" "gouth_bucket_1_apr_2021":
2: bucket = var.bucket #"terraform-s3-bucket"
An input variable with the name "bucket" has not been declared. This variable
can be declared with a variable "bucket" {} block.
Can anyone help me with this? Please let me know which file needs the necessary changes and how.
Thanks in advance.
Below is my code:
main.tf:
resource "aws_s3_bucket" "gouth_bucket_1_apr_2021" {
bucket = var.bucket
acl = "private"
tags= var.tags
}
s3.tfvars:
bucket = "first-bucket-gouth"

# Variables of Tags
tags = {
  name        = "s3bucket",
  account_id  = "1234567",
  owner       = "abc#def.com",
  os          = "windows",
  backup      = "N",
  application = "abc",
  description = "s3 bucket",
  env         = "dev",
  ticketid    = "101",
  marketami   = "NA",
  patching    = "NA",
  dc          = "bangalore"
}
validation.tf:
variable "tags" {
  type = map(string)

  validation {
    condition     = length(var.tags["env"]) > 0
    error_message = "Environment tag is required !!"
  }
  validation {
    condition     = length(var.tags["owner"]) > 0
    error_message = "Owner tag is required !!"
  }
  validation {
    condition     = length(var.tags["dc"]) > 0
    error_message = "DC tag is required !!"
  }
  validation {
    condition     = can(var.tags["account_id"])
    error_message = "Account ID tag is required!!"
  }
}
I can see two potential issues.
You are referencing var.bucket in your resource, but you are not declaring that variable anywhere in your configuration. The declaration could simply look like:
variable "bucket" {}
You may not be picking up your tfvars file. If you run Terraform with the tfvars file passed explicitly, like terraform plan -var-file=s3.tfvars, then that's fine; otherwise rename the file to something.auto.tfvars or terraform.tfvars so it is loaded automatically. (See https://www.terraform.io/docs/language/values/variables.html#variable-definitions-tfvars-files)
I hope this answers your question.
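For completeness, a minimal sketch of the missing declaration with a type constraint (the description text is illustrative); with this in place, terraform plan -var-file=s3.tfvars supplies the value from s3.tfvars:
variable "bucket" {
  type        = string
  description = "Name of the S3 bucket to create"
}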

error within if condition - 'encrypt' expected type 'bool', got unconvertible type 'string'

I'm trying to define a config block for two environments, local and cloud, using an if/else condition, but I get an error for the encrypt attribute of the S3 backend: 'encrypt' expected type 'bool', got unconvertible type 'string'.
If I remove the if/else condition block it works, but I need to choose between the two environments, so I have to use the condition.
The config block code:
config = local.is_local_environment ? {
  # Local configuration
  path = "${path_relative_to_include()}/terraform.tfstate"
} : {
  # Cloud configuration
  bucket         = "my-bucket"
  key            = "terraform/${path_relative_to_include()}/terraform.tfstate"
  region         = local.region
  encrypt        = true
  dynamodb_table = "terraform-lock"
}
The issue is that the local backend doesn't take any configuration here; use null:
config = local.is_local_environment ? null : {
  # Cloud configuration
  bucket         = "my-bucket"
  key            = "terraform/${path_relative_to_include()}/terraform.tfstate"
  region         = local.region
  encrypt        = true
  dynamodb_table = "terraform-lock"
}
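Since path_relative_to_include() is a Terragrunt function, this config presumably lives inside a Terragrunt remote_state block. For context, a minimal sketch of the enclosing block with the backend type switched the same way (the backend line is an assumption, not part of the original snippet):
remote_state {
  # Switch the backend type together with the config shape
  backend = local.is_local_environment ? "local" : "s3"

  config = local.is_local_environment ? null : {
    bucket         = "my-bucket"
    key            = "terraform/${path_relative_to_include()}/terraform.tfstate"
    region         = local.region
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}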

How to concatenate S3 bucket name in Terraform variable and pass it to main tf file

I'm writing Terraform templates to create two S3 buckets. My requirement is to concatenate their names in vars.tf and then pass them to the main tf file. Below are the vars.tf and s3.tf files.
vars.tf:
variable "TENANT_NAME" {
  default = "Mansing"
}

variable "BUCKET_NAME" {
  type    = "list"
  default = ["bh.${var.TENANT_NAME}.o365.attachments", "bh.${var.TENANT_NAME}.o365.eml"]
}
s3.tf:
resource "aws_s3_bucket" "b" {
bucket = "${element(var.BUCKET_NAME, 2)}"
acl = "private"
}
When I do terraform plan I get an error indicating that variables may not be used here:
Error: Variables not allowed
on vars.tf line 10, in variable "BUCKET_NAME":
10: default = ["bh.${var.TENANT_NAME}.o365.attachments", "bh.${var.TENANT_NAME}.o365.eml"]
Variables may not be used here.
Error: Variables not allowed
on vars.tf line 10, in variable "BUCKET_NAME":
10: default = ["bh.${var.TENANT_NAME}.o365.attachments", "bh.${var.TENANT_NAME}.o365.eml"]
Variables may not be used here.
I tried replacing var in the vars file with locals, but that did not work.
You can use a Terraform locals block to concatenate the variable values in the s3.tf file:
locals {
  BUCKET_NAME = [
    "bh.${var.TENANT_NAME}.o365.attachments",
    "bh.${var.TENANT_NAME}.o365.eml",
  ]
}

resource "aws_s3_bucket" "b" {
  bucket = "${element(local.BUCKET_NAME, 2)}"
  acl    = "private"
}
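Note that element() wraps around, so index 2 on a two-element list just returns the first name again. If the goal is to create both buckets, a count over the local list does that; a minimal sketch replacing the single resource above:
resource "aws_s3_bucket" "b" {
  # One bucket per entry in local.BUCKET_NAME
  count  = length(local.BUCKET_NAME)
  bucket = local.BUCKET_NAME[count.index]
  acl    = "private"
}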