I have one S3 bucket and want to trigger the same Lambda for two different object-created scenarios.
I tried declaring multiple aws_s3_bucket_notification blocks, but only one was created. The Terraform docs note:
S3 Buckets only support a single notification configuration. Declaring multiple aws_s3_bucket_notification resources to the same S3 Bucket will cause a perpetual difference in configuration. See the example "Trigger multiple Lambda functions" for an option.
I tried doing an OR expression, but Terraform didn't like that either:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = var.my_bucket_id
depends_on = [aws_lambda_permission.lambda_perm]
lambda_function {
lambda_function_arn = "my functions arn"
events = ["s3:ObjectCreated:*"]
filter_prefix = "content/"
filter_suffix = "-webpage.html" || "-image.png"
}
}
I need a way for the suffix filter to match two different strings. Thanks.
You need multiple lambda_function blocks:
resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = var.my_bucket_id
depends_on = [aws_lambda_permission.lambda_perm]
lambda_function {
lambda_function_arn = "my functions arn"
events = ["s3:ObjectCreated:*"]
filter_prefix = "content/"
filter_suffix = "-webpage.html"
}
lambda_function {
lambda_function_arn = "my functions arn"
events = ["s3:ObjectCreated:*"]
filter_prefix = "content/"
filter_suffix = "-image.png"
}
}
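Since the two blocks differ only in the suffix, you could also generate them with a dynamic block (a sketch, assuming the function ARN and prefix really are identical for both):

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket     = var.my_bucket_id
  depends_on = [aws_lambda_permission.lambda_perm]

  # one lambda_function block per suffix
  dynamic "lambda_function" {
    for_each = ["-webpage.html", "-image.png"]
    content {
      lambda_function_arn = "my functions arn"
      events              = ["s3:ObjectCreated:*"]
      filter_prefix       = "content/"
      filter_suffix       = lambda_function.value
    }
  }
}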
I was reading this post: Terraform - creating multiple buckets, and was wondering how I could add a filter to enable bucket versioning on one of the buckets and disable versioning on the rest, using Terraform conditionals or anything else that would make it work.
I was trying something like this, but it is not working:
variable "s3_bucket_name" {
type = "list"
default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}
resource "aws_s3_bucket" "henrys_bucket" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
force_destroy = "true"
var.s3_bucket_name[count.index] != "target-bucket-name" versioning { enabled = true } : versioning { enabled = false }
}
You can use a list of objects instead of just a list of bucket names. Each object can contain the bucket name and a versioning_enabled flag, and you then use both when creating the bucket.
Something like:
bucket = var.s3_buckets[count.index].bucket_name
And for versioning, add a dynamic block based on var.s3_buckets[count.index].versioning_enabled, like below:
dynamic "versioning" {
for_each = var.s3_buckets[count.index].versioning_enabled== true ? [1] : []
content {
enabled = true
}
}
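Put together, a minimal sketch (the variable shape and bucket names here are assumptions; note that S3 bucket names can't contain underscores, so the sketch uses hyphens):

variable "s3_buckets" {
  type = list(object({
    bucket_name        = string
    versioning_enabled = bool
  }))
  default = [
    { bucket_name = "prod-bucket", versioning_enabled = true },
    { bucket_name = "stage-bucket", versioning_enabled = false },
    { bucket_name = "qa-bucket", versioning_enabled = false },
  ]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = length(var.s3_buckets)
  bucket        = var.s3_buckets[count.index].bucket_name
  acl           = "private"
  force_destroy = true

  # the versioning block is emitted only when the flag is set
  dynamic "versioning" {
    for_each = var.s3_buckets[count.index].versioning_enabled ? [1] : []
    content {
      enabled = true
    }
  }
}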
I am facing a problem with a Terraform workflow that automates the creation of storage accounts, key vaults, and access policies.
What I am trying to achieve is as follows:
I have a storage account resource that runs with a for_each loop:
//==================================================
// Automation storage accounts
//==================================================
resource "azurerm_storage_account" "storage-foreach" {
for_each = var.storage-foreach
access_tier = "Hot"
account_kind = "StorageV2"
account_replication_type = "LRS"
account_tier = "Standard"
location = var.location
name = each.value
resource_group_name = azurerm_resource_group.tenant-testing-hamza.name
depends_on = [azurerm_key_vault_key.client-key]
identity {
type = "SystemAssigned"
}
lifecycle {
prevent_destroy = false
}
}
This storage account resource loops over the following variable to create the storage accounts:
variable "storage-foreach" {
type = map(string)
default = { "storage1" = "storage1", "storage2" = "storage2", "storage3" = "storage3", "storage4" = "storage4"}
}
So far everything works smoothly. Then I wanted to add those storage accounts' object IDs to my key vault access policy, as follows:
resource "azurerm_key_vault_access_policy" "storage" {
for_each = var.storage-foreach
key_vault_id = azurerm_key_vault.tenantsnbshared.id
tenant_id = "<tenant-id"
object_id = azurerm_storage_account.storage-foreach[each.key].identity.0.principal_id
key_permissions = ["get", "Create", "List", "Restore", "Recover", "Unwrapkey", "Wrapkey", "Purge", "Encrypt", "Decrypt", "Sign", "Verify"]
secret_permissions = ["get", "set", "list", "delete", "recover"]
}
So far everything works just fine while creating the resources; all the access policies are in place. If I remove, for example, storage1 from my variable, that storage account gets deleted along with the access policies related to it, which is good.
And here is the main issue I am facing. If I add the same storage account back to the variable and run terraform apply, the 3 existing policies get removed and the access policy for the re-added storage account gets created. If I run terraform apply one more time, the logic gets inverted: it deletes the first storage account's access policy and adds the other 3.
I can't find a solution that just updates my access policies according to the elements I have set in my variable.
With this code, I'm planning on having multiple variables like "vms". These are key/value maps of VM names and resource groups; I for_each over the data source lookup (backupvm) to get each VM's id. Then I can add these VMs, using that id, as backup items in the vault.
data "azurerm_virtual_machine" "backupvm" {
for_each = var.vms
name = each.key
resource_group_name = each.value
}
variable "vms" {
type = map
default = {
# "vm name" = "resource group name"
"vaulttestvm1" = "vaulttestrg"
"vaulttestvm2" = "vaulttestrg"
"vaulttestvm4" = "vaulttestrg"
}
}
resource "azurerm_resource_group" "rg" {
name = var.rg_name
location = var.location
}
resource "azurerm_recovery_services_vault" "vault" {
name = var.vault_name
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
sku = "Standard"
# change later
soft_delete_enabled = false
}
resource "azurerm_backup_protected_vm" "vms" {
for_each = var.vms
recovery_vault_name = azurerm_recovery_services_vault.vault.name
resource_group_name = azurerm_recovery_services_vault.vault.resource_group_name
source_vm_id = data.azurerm_virtual_machine.backupvm[each.key].id
backup_policy_id = azurerm_backup_policy_vm.VMBackupPolicy.id
}
I need a way to reference var.vms in both the data source and the resource, so I could swap in another map variable based on some logic. If this were PowerShell, it would be something like:
name = if ($env -eq "prod") { $var.vms } elseif ($env -eq "stage") { $var.vmsstage } elseif ($env -eq "dev") { $var.vmsdev }
I've spent a day trying different things but haven't really got close. I may be somewhat restricted because I need to look up the VM id with the data source first, then loop through my resource (azurerm_backup_protected_vm) dropping that id in. A solution to this would save me from having multiple data sources and resources of the same type. Thanks!
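One way to express that if/elseif in Terraform is a local map keyed by environment (a sketch; var.env and the vmsstage/vmsdev variables are assumptions based on the question):

variable "env" {
  type = string # "prod", "stage", or "dev"
}

locals {
  # select the VM map for the current environment
  vms_by_env = {
    prod  = var.vms
    stage = var.vmsstage
    dev   = var.vmsdev
  }
  selected_vms = local.vms_by_env[var.env]
}

# both blocks can then iterate the same selection:
# data "azurerm_virtual_machine" "backupvm"  { for_each = local.selected_vms ... }
# resource "azurerm_backup_protected_vm" "vms" { for_each = local.selected_vms ... }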
In Terraform, I'm trying to add an S3 bucket as a trigger to my Lambda and grant the permissions. For this use case, I'm creating an S3 resource and referring to the Lambda function in the trigger logic. But when I do, the code fails with the error below.
# Creating Lambda resource
resource "aws_lambda_function" "test_lambda" {
  filename      = "output/welcome.zip"
  function_name = var.function_name
  role          = var.role_name
  handler       = var.handler_name
  runtime       = var.run_time
}

# Creating s3 resource for invoking to lambda function
resource "aws_s3_bucket" "bucket" {
  bucket = "source-bucktet-testing"
  acl    = "private"
  tags = {
    Name        = "source-bucktet-testing"
    Environment = "Dev"
  }
}

# Adding S3 bucket as trigger to my lambda and giving the permissions
resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = "aws_s3_bucket.bucket.id"
  lambda_function {
    lambda_function_arn = "aws_lambda_function.test_lambda.arn"
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = "aws_lambda_function.test_lambda.function_name"
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::aws_s3_bucket.bucket.id"
}
Error message:
The value passed to the aws_lambda_function resource for function_name is invalid. An AWS Lambda function name can only contain letters, numbers, hyphens, or underscores with no spaces. You need to change the value of var.function_name to align with these restrictions.
Your var.function_name must be invalid.
The allowed function name format, along with the ARN format, is explained in the AWS documentation:
The length constraint applies only to the full ARN. If you specify only the function name, it is limited to 64 characters in length.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 140.
Pattern: (arn:(aws[a-zA-Z-]*)?:lambda:)?([a-z]{2}(-gov)?-[a-z]+-\d{1}:)?(\d{12}:)?(function:)?([a-zA-Z0-9-_]+)(:(\$LATEST|[a-zA-Z0-9-_]+))?
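Separately, note that several references in the config above are quoted, which makes them literal strings rather than references; even with a valid function name, the notification and permission would not wire up. Unquoted, they would look roughly like this (a sketch based on the question's resource names):

resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = aws_s3_bucket.bucket.id # a reference, not the string "aws_s3_bucket.bucket.id"
  lambda_function {
    lambda_function_arn = aws_lambda_function.test_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.bucket.arn # the bucket's arn attribute, not a hand-built string
}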
I'm using Terraform 0.12. I have an s3 module that outputs a list of buckets, which I would like to use as an input for a cloudfront module that I've got.
The problem I'm facing is that when I run terraform plan/apply I get the following error: count.index is 0 | var.redirect-buckets is tuple with 1 element.
I've tried all kinds of splats, moving the count.index call around, to no avail. My sample code is below.
module.s3
resource "aws_s3_bucket" "redirect" {
  count  = length(var.redirects)
  bucket = element(var.redirects, count.index)
}
module.s3.output
output "redirect-buckets" {
  value = [aws_s3_bucket.redirect.*]
}
module.cdn.variables
...
variable "redirect-buckets" {
description = "Redirect buckets"
default = []
}
....
The error is thrown down here
module.cdn
resource "aws_cloudfront_distribution" "redirect" {
  count = length(var.redirect-buckets)
  default_cache_behavior {
    // Line below throws the error, one amongst many
    target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
    ....
    //Another error throwing line
    target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
Any help is greatly appreciated.
module.s3
resource "aws_s3_bucket" "redirects" {
  for_each = var.redirects
  bucket   = each.value
}
Your variable definition for redirects needs to change to something like this:
variable "redirects" {
type = map(string)
}
module.s3.output:
output "redirect_buckets" {
value = aws_s3_bucket.redirects
}
module.cdn
resource "aws_cloudfront_distribution" "redirects" {
  for_each = var.redirect_buckets
  default_cache_behavior {
    target_origin_id = "cloudfront-distribution-origin-${each.value.id}.s3.amazonaws.com"
  }
  # ... the other required cloudfront_distribution arguments ...
}
Your variable definition for redirect-buckets needs to change to something like this (note the underscores; using kebab-case names is going to behave strangely in some cases, and isn't worth it):
variable "redirect_buckets" {
type = map(object(
{
id = string
}
))
}
root module
module "s3" {
source = "../s3" // or whatever the path is
redirects = {
site1 = "some-bucket-name"
site2 = "some-other-bucket"
}
}
module "cdn" {
source = "../cdn" // or whatever the path is
redirects_buckets = module.s3.redirect_buckets
}
From an example perspective this is interesting, but you don't need to use outputs from the s3 module here, since you could just hand the cdn module the same map of redirects and use for_each on that.
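That alternative wiring would look something like this (a sketch; the cdn module would then derive origin IDs from the bucket names rather than from bucket outputs):

module "cdn" {
  source = "../cdn" // or whatever the path is
  redirects = {
    site1 = "some-bucket-name"
    site2 = "some-other-bucket"
  }
}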
There is a tool called Terragrunt which wraps Terraform and supports dependencies.
https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/#dependencies-between-modules
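For illustration, a Terragrunt dependency between the two modules looks roughly like this (a sketch; the paths and output name are assumptions):

# terragrunt.hcl in the cdn module's directory
dependency "s3" {
  config_path = "../s3"
}

inputs = {
  redirect_buckets = dependency.s3.outputs.redirect_buckets
}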