Write a dynamic Terraform block for a load balancer listener rule

I'm new to dynamic blocks and am having some trouble writing rules for listeners on a load balancer that was created using for_each.
Below are the resources I created:
resource "aws_lb_listener" "app_listener_forward" {
for_each = toset(var.app_listener_ports)
load_balancer_arn = aws_lb.app_alb.arn
port = each.value
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
certificate_arn = var.ssl_cert
default_action {
type = "forward"
forward {
dynamic "target_group" {
for_each = aws_lb_target_group.app_tg
content {
arn = target_group.value["arn"]
}
}
stickiness {
enabled = true
duration = 86400
}
}
}
}
resource "aws_lb_listener_rule" "app_https_listener_rule" {
for_each = toset(var.app_listener_ports)
listener_arn = aws_lb_listener.app_listener_forward[each.value].arn
action {
type = "forward"
forward {
dynamic "target_group" {
for_each = aws_lb_target_group.app_tg
content {
arn = target_group.value["arn"]
}
}
}
}
dynamic "condition" {
for_each = var.images
path_pattern {
content {
values = condition.value["paths"]
}
}
}
}
resource "aws_lb_target_group" "app_tg" {
for_each = var.images
name = each.key
port = each.value.port
protocol = "HTTP"
target_type = "ip"
vpc_id = aws_vpc.app_vpc.id
health_check {
interval = 130
timeout = 120
healthy_threshold = 10
unhealthy_threshold = 10
}
stickiness {
type = "lb_cookie"
cookie_duration = 86400
}
}
Below is how the variables are defined:
variable "images" {
  type = map(object({
    app_port = number
    paths    = set(string)
  }))
  default = {
    "app-one" = {
      app_port = 3000
      paths = [
        "/appOne",
        "/appOne/*"
      ]
    }
    "app-two" = {
      app_port = 4000
      paths = [
        "/appTwo",
        "/appTwo/*"
      ]
    }
  }
}

variable "app_listener_ports" {
  type = list(string)
  default = [
    80, 443, 22, 7999, 8999
  ]
}
Upon executing, I get an error about the path_pattern block being unexpected:
Error: Unsupported block type
│
│ on alb.tf line 78, in resource "aws_lb_listener_rule" "app_https_listener_rule":
│ 78: path_pattern {
│
│ Blocks of type "path_pattern" are not expected here.
I've tried a few ways to get this dynamic block working but am having some difficulty. Any advice would be appreciated.
Thank you!

Try it like this:
dynamic "condition" {
for_each = var.images
content {
path_pattern {
values = condition.value.paths
}
}
}
And change the type of paths from set(string) to list(string).
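To see what this does: with the default value of var.images from the question, Terraform should expand the dynamic block into roughly one condition per map entry, i.e.:
condition {
  path_pattern {
    values = ["/appOne", "/appOne/*"]
  }
}
condition {
  path_pattern {
    values = ["/appTwo", "/appTwo/*"]
  }
}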

This is also completely acceptable:
dynamic "condition" {
for_each = var.images
content {
path_pattern {
values = condition.value["paths"]
}
}
}
However, in my opinion it's better not to use a dynamic block for the condition here, for the sake of readability and maintainability.
condition {
  path_pattern {
    values = [
      "/appOne",
      "/appOne/*" ## can also use variables if you prefer !!
    ]
  }
}
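If you do want to keep the paths in a variable while still writing the condition statically, something like this should also work (assuming the var.images structure from the question):
condition {
  path_pattern {
    values = var.images["app-one"].paths
  }
}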
I have already answered your original post about the problem you ran into after fixing the dynamic syntax.
Post URL: Error when creating dynamic terraform rule for alb listener rule

Related

terraform variable using block with no argument

I have sample Terraform code below, but I'm having trouble declaring a variable whose value is a whole block. In dev it is
basic {}
and moving to production it will be something like
dedicated {
  cku = 2
}
DEV
resource "confluent_kafka_cluster" "basic" {
display_name = "basic_kafka_cluster"
availability = "SINGLE_ZONE"
cloud = "GCP"
region = "us-central1"
basic {} <<<< # I want this block to be declared as variable
# Calling the variable
local.cluster_type["dev"] <<<< # this approach is not supported. how can I call the variable directly if there is no argument?
}
PROD
resource "confluent_kafka_cluster" "dedicated" {
display_name = "dedicated_kafka_cluster"
availability = "MULTI_ZONE"
cloud = "GCP"
region = "us-central1"
# For Production it is using a different block
dedicated {
cku = 2
}
# Calling the variable
local.cluster_type["prod"] <<<<< # this approach is not supported. how can I call the variable directly if there is no argument?
}
Local variables
locals {
  cluster_type = {
    prod = "dedicated {
      cku = 2
    }"
    dev = "basic {}"
  }
}
You have some issues with your script:
confluent_kafka_cluster is deprecated; it should be replaced by confluentcloud_kafka_cluster.
To use the environment, you can create a confluentcloud_environment:
resource "confluentcloud_environment" "env" {
display_name = var.environment
}
To solve the issue of the block, you can use dynamic with conditions, like this:
dynamic "basic" {
for_each = var.environment == "dev" ? [1] : []
content {}
}
dynamic "dedicated" {
for_each = var.environment == "prod" ? [1] : []
content {
cku = 2
}
}
Your code can be like this:
resource "confluentcloud_environment" "env" {
  display_name = var.environment
}

resource "confluentcloud_kafka_cluster" "basic" {
  display_name = "basic_kafka_cluster"
  availability = "SINGLE_ZONE"
  cloud        = "GCP"
  region       = "us-central1"

  dynamic "basic" {
    for_each = var.environment == "dev" ? [1] : []
    content {}
  }

  dynamic "dedicated" {
    for_each = var.environment == "prod" ? [1] : []
    content {
      cku = 2
    }
  }

  environment {
    id = confluentcloud_environment.env.id
  }
}

variable "environment" {
  default = "dev"
}
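With this setup, switching the cluster type is just a matter of overriding the variable at apply time, for example:
terraform apply -var="environment=prod"   # renders dedicated { cku = 2 }
terraform apply                           # defaults to dev, so only basic {} is rendered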

terraform dynamic block using list of map

I have a terraform variable:
variable "volumes" {
default = [
{
"name" : "mnt",
"value" : "/mnt/cvdupdate/"
},
{
"name" : "efs",
"value" : "/var"
},
]
}
and I am trying to create a dynamic block
dynamic "volume" {
for_each = var.volumes == "" ? [] : [true]
content {
name = volume["name"]
}
}
but I get an error when I run plan
name = volume["name"]
│
│ The given key does not identify an element in this collection value.
the desired output would be:
volume {
  name = "mnt"
}
volume {
  name = "efs"
}
what is wrong with my code?
Since you are using for_each, you should reference volume.value. Also, your condition is incorrect. It should all be:
dynamic "volume" {
  for_each = var.volumes == "" ? [] : var.volumes
  content {
    name = volume.value["name"]
  }
}
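Note that var.volumes == "" compares a list with a string, so it always evaluates to false and the guard never actually returns []; since for_each over an empty list simply renders no blocks, iterating the variable directly is enough as well:
dynamic "volume" {
  for_each = var.volumes # an empty list just renders no volume blocks
  content {
    name = volume.value["name"]
  }
}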
As you are creating an if-else-like condition to feed the for_each loop, the condition needs to produce a collection to iterate over: https://developer.hashicorp.com/terraform/language/meta-arguments/for_each
You need to replace [true] with var.volumes to pass the actual values:
for_each = var.volumes == "" ? [] : var.volumes
Then read each element in the content block through .value:
content {
  name = volume.value["name"]
}
The final working code is below, as Marcin posted:
dynamic "volume" {
  for_each = var.volumes == "" ? [] : var.volumes
  content {
    name = volume.value["name"]
  }
}
You can simply use for_each = var.volumes[*]:
dynamic "volume" {
for_each = var.volumes[*]
content {
name = volume.value["name"]
}
}
or:
dynamic "volume" {
for_each = var.volumes[*]
content {
name = volume.value.name # <------
}
}
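For context, the question doesn't show the parent resource, but assuming the dynamic block sits in something like an aws_ecs_task_definition (where volume blocks take a name and an optional host_path), the full picture might look like this sketch:
resource "aws_ecs_task_definition" "example" {
  family                = "example"                 # hypothetical name
  container_definitions = file("containers.json")   # hypothetical file

  dynamic "volume" {
    for_each = var.volumes
    content {
      name      = volume.value["name"]
      host_path = volume.value["value"] # reuses the second key from var.volumes
    }
  }
}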

Terraform Object Lock Configuration: AccessDenied

I have this Terraform script that works perfectly fine for the whole S3 module, but it cannot create the Object Lock configuration resource and returns the message:
error creating S3 bucket (bucket-name) Object lock configuration: AccessDenied: AccessDenied
Status code 403, request id: ..., host id: ...
Despite the message, the S3 bucket is actually created, but I still get this error. Maybe there is something missing in the policy?
Here is my code.
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.4.0"
bucket = local.bucket_name
...
object_lock_enabled = true
attach_policy = true
policy = data.aws_iam_policy_document.voucher_s3_bucket.json
versioning = {
status = var.status
mfa_delete = var.mfa_delete
}
server_side_encryption_configuration = {
rule = {
apply_server_side_encryption_by_default = {
kms_master_key_id = aws_kms_key.voucher_s3_bucket.arn
sse_algorithm = "aws:kms"
}
}
}
}
data "aws_iam_policy_document" "s3_bucket_kms_key" {
statement {
sid = "AllowPutRoles"
effect = "Allow"
actions = ["kms:GenerateDataKey"]
principals {
identifiers = local.put_object_roles #we can use event_gateway iam_role for now
type = "AWS"
}
resources = ["*"]
}
statement {
sid = "AllowAdmin"
effect = "Allow"
actions = [
"kms:*",
]
principals {
identifiers = [data.aws_iam_role.admin_role.arn, data.aws_iam_role.default_role.arn, data.aws_iam_role.automation_role.arn]
type = "AWS"
}
resources = ["*"]
}
}
resource "aws_kms_key" "s3_bucket" {
tags = {
"s3_bucket" = local.bucket_name
}
enable_key_rotation = true
policy = data.aws_iam_policy_document.voucher_s3_bucket_kms_key.json
}
resource "aws_s3_bucket_object_lock_configuration" "s3_bucket_object_lock_configuration" {
bucket = local.bucket_name
rule {
default_retention {
mode = "GOVERNANCE"
years = 10
}
}
}
data "aws_iam_policy_document" "voucher_s3_bucket" {
statement {
sid = "DenyNoKMSEncryption"
effect = "Deny"
actions = ["s3:PutObject"]
principals {
identifiers = ["*"]
type = "*"
}
resources = ["${module.voucher_s3_bucket.s3_bucket_arn}/*"]
condition {
test = "StringNotEqualsIfExists"
values = ["aws:kms"]
variable = "s3:x-amz-server-side-encryption"
}
condition {
test = "Null"
values = ["false"]
variable = "s3:x-amz-server-side-encryption"
}
}
statement {
sid = "DenyWrongKMSKey"
effect = "Deny"
actions = ["s3:PutObject"]
principals {
identifiers = ["*"]
type = "*"
}
resources = ["${module.s3_bucket.s3_bucket_arn}/*"]
condition {
test = "StringNotEquals"
values = [aws_kms_key.voucher_s3_bucket.arn]
variable = "s3:x-amz-server-side-encryption-aws-kms-key-id"
}
}
statement {
sid = "AllowAdminDefault"
effect = "Allow"
actions = ["s3:*"]
principals {
identifiers = [data.aws_iam_role.admin_role.arn, data.aws_iam_role.default_role.arn]
type = "AWS"
}
resources = [
"${module.voucher_s3_bucket.s3_bucket_arn}/*",
module.voucher_s3_bucket.s3_bucket_arn,
]
}
statement {
sid = "DenyDeleteActions"
effect = "Deny"
actions = ["s3:DeleteBucket", "s3:DeleteObject", "s3:DeleteObjectVersion", "s3:PutBucketObjectLockConfiguration"]
principals {
identifiers = ["*"]
type = "AWS"
}
resources = [
"${module.s3_bucket.s3_bucket_arn}/*",
module.s3_bucket.s3_bucket_arn,
]
}
}

Dynamic AWS IAM policy document with principals

I am converting a static AWS IAM policy document ("FROM" below) to a dynamic one ("TO" below), but the principals part gives "An argument named "principals" is not expected here".
If I delete "principals" from the aws_iam_policy_document it works. Any suggestion would be helpful.
FROM
data "aws_iam_policy_document" "bucket_policy" {
statement {
principals {
type = "AWS"
identifiers = [
"arn:aws:iam::sdfsdfsdeploy",
"arn:aws:iam::sdfsdfsdeploy/OrganizationAccountAccessRole"
]
}
actions = [
"s3:GetObject",
"s3:PutObject"
]
resources = formatlist("arn:aws:s3:::%s/*", var.bucket_name)
}
}
TO
This code is in source = "../../modules/s3/main.tf":
data "aws_iam_policy_document" "bucket_policy" {
dynamic "statement" {
for_each = var.policies_list
iterator = role
content {
effect = lookup(role.value, "effect", null)
principals = lookup(role.value, "principals", null)
actions = lookup(role.value, "actions", null)
resources = lookup(role.value, "resources", null)
}
}
}
module "s3_test" {
source = "../../modules/s3"
region = var.region
policies_list = [
{
effect = "Allow"
principals = {
type = "AWS"
identifiers = [
"arn:aws:iam::3ssdfsdfy",
"arn:aws:iam::3ssdfsdfy:role/OrganizationAccountAccessRole"
]
}
actions = [
"s3:GetObject",
"s3:PutObject"
]
resources = formatlist("arn:aws:s3:::%s/*", "teskjkjsdkfkjskdjhkjfhkjhskjdf")
}
]
}
Found it.
variable "policies_list" {
description = "nested block: s3_aws_iam_policy_document"
type = set(object(
{
actions = list(string)
effect = string
principals = set(object(
{
type = string
identifiers = list(string)
}
))
resources = list(string)
}
))
default = []
}
data "aws_iam_policy_document" "bucket_policy" {
dynamic "statement" {
for_each = var. policies_list
iterator = role
content {
effect = lookup(role.value, "effect", null)
actions = lookup(role.value, "actions", null)
dynamic "principals" {
for_each = role.value.principals
content {
type = principals.value["type"]
identifiers = principals.value["identifiers"]
}
}
resources = lookup(role.value, "resources", null)
}
}
}
based on
https://github.com/niveklabs/tfwriter/blob/1ea629ed386bbe6a8f21617a430dae19ba536a98/google-beta/r/google_storage_bucket.md
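To actually use the rendered document, it can be attached to the bucket the usual way; a minimal sketch, assuming var.bucket_name from the module above holds a single bucket name:
resource "aws_s3_bucket_policy" "bucket_policy" {
  bucket = var.bucket_name
  policy = data.aws_iam_policy_document.bucket_policy.json
}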

AWS S3 Object Lifecycle Exclusion

I'm working in Terraform, and am creating an S3 object/folder-with-content. I would like to exclude that object from my lifecycle policy, but I'm not sure how to exclude the object (folder-object/sample) from the lifecycle policy (Terraform code below):
resource "aws_s3_bucket" "s3_test" {
bucket = "test-bucket-upload"
acl = "private"
key = "folder-object/sample"
tags {
Name = "test-bucket"
Environment = "lab"
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
lifecycle_rule {
id = "glacier-transfer"
enabled = true
transition {
days = 360
storage_class = "GLACIER"
}
}
}
Instead of excluding, use prefix to identify the objects your lifecycle rule should apply to. For example, the rule below would only apply to objects in the new_objects folder in your bucket:
...
lifecycle_rule {
  id      = "glacier-transfer"
  enabled = true
  prefix  = "new_objects/"

  transition {
    days          = 360
    storage_class = "GLACIER"
  }
}
...
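If you are on version 4 or later of the AWS provider, where lifecycle_rule inside aws_s3_bucket is deprecated, the same rule can be expressed with the standalone resource; a rough equivalent:
resource "aws_s3_bucket_lifecycle_configuration" "glacier_transfer" {
  bucket = aws_s3_bucket.s3_test.id

  rule {
    id     = "glacier-transfer"
    status = "Enabled"

    filter {
      prefix = "new_objects/"
    }

    transition {
      days          = 360
      storage_class = "GLACIER"
    }
  }
}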