Terraform update access policies - automation

I am facing a problem with a Terraform workflow that automates the creation of storage accounts, key vaults and access policies.
What I am trying to achieve is as follows:
I have a storage account resource that runs with a for_each loop:
//==================================================
// Automation storage accounts
//==================================================
resource "azurerm_storage_account" "storage-foreach" {
  for_each                 = var.storage-foreach

  access_tier              = "Hot"
  account_kind             = "StorageV2"
  account_replication_type = "LRS"
  account_tier             = "Standard"
  location                 = var.location
  name                     = each.value
  resource_group_name      = azurerm_resource_group.tenant-testing-hamza.name
  depends_on               = [azurerm_key_vault_key.client-key]

  identity {
    type = "SystemAssigned"
  }

  lifecycle {
    prevent_destroy = false
  }
}
This storage account resource loops through the following variable to create the storage accounts:
variable "storage-foreach" {
  type    = map(string)
  default = {
    "storage1" = "storage1"
    "storage2" = "storage2"
    "storage3" = "storage3"
    "storage4" = "storage4"
  }
}
So far everything works smoothly. Then I wanted to add those storage accounts' object IDs to my key vault access policy, as follows:
resource "azurerm_key_vault_access_policy" "storage" {
  for_each     = var.storage-foreach
  key_vault_id = azurerm_key_vault.tenantsnbshared.id
  tenant_id    = "<tenant-id>"
  object_id    = azurerm_storage_account.storage-foreach[each.key].identity.0.principal_id

  key_permissions    = ["Get", "Create", "List", "Restore", "Recover", "UnwrapKey", "WrapKey", "Purge", "Encrypt", "Decrypt", "Sign", "Verify"]
  secret_permissions = ["Get", "Set", "List", "Delete", "Recover"]
}
So far everything works just fine while creating the resources; all the access policies are in place. But if I remove, for example, storage1 from my variable, that storage account gets deleted along with the access policies related to it, which is good.
And here is the main issue I am facing. If I add the same storage account back to the variable and run terraform apply, the three existing policies get removed and the access policy for the re-added storage account gets created. If I run terraform apply one more time, the logic gets inverted: it deletes the first storage account's access policy and adds the other three.
I can't find a solution that simply updates my access policies according to the elements I have set in my variable.
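One common cause of this alternating behavior is managing access policies in two places at once: inline access_policy blocks on the azurerm_key_vault resource and separate azurerm_key_vault_access_policy resources. The two then fight over the vault's policy list on alternating applies. A minimal sketch of one way around it, assuming the vault is the tenantsnbshared resource referenced above and that no azurerm_key_vault_access_policy resources remain, is to manage every policy in a single place with a dynamic block on the vault itself:

```hcl
resource "azurerm_key_vault" "tenantsnbshared" {
  # ... existing vault arguments unchanged ...

  # Manage all storage-account policies here, so no separate
  # azurerm_key_vault_access_policy resource competes over the same list.
  dynamic "access_policy" {
    for_each = var.storage-foreach
    content {
      tenant_id = "<tenant-id>"
      object_id = azurerm_storage_account.storage-foreach[access_policy.key].identity.0.principal_id

      key_permissions    = ["Get", "Create", "List", "Restore", "Recover", "UnwrapKey", "WrapKey", "Purge", "Encrypt", "Decrypt", "Sign", "Verify"]
      secret_permissions = ["Get", "Set", "List", "Delete", "Recover"]
    }
  }
}
```

The inverse also works: keep the azurerm_key_vault_access_policy resources and remove every inline access_policy block from the vault. Either way, exactly one resource should own the vault's policy list.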

Related

Multi S3 bucket Definition with versioning limitations

I was reading this post: Terraform - creating multiple buckets
and was wondering how I could add a filter to enable bucket versioning on one of the buckets and disable versioning on the rest, using Terraform conditionals or anything else that would allow it to work.
I was trying something like this, but it is not working:
variable "s3_bucket_name" {
  type    = "list"
  default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = "${length(var.s3_bucket_name)}"
  bucket        = "${var.s3_bucket_name[count.index]}"
  acl           = "private"
  force_destroy = "true"
  var.s3_bucket_name[count.index] != "target-bucket-name" ? versioning { enabled = true } : versioning { enabled = false }
}
You can use a list of objects instead of just a list of bucket names. Each object can contain the bucket name and a versioning_enabled flag; then use both fields from the object.
Something like:
bucket = var.s3_buckets[count.index].bucket_name
And for versioning, add dynamic block based on var.s3_buckets[count.index].versioning_enabled like below:
dynamic "versioning" {
  for_each = var.s3_buckets[count.index].versioning_enabled == true ? [1] : []
  content {
    enabled = true
  }
}
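Putting the answer's pieces together, a minimal sketch might look like the following (the s3_buckets variable name and the bucket names are illustrative; note S3 bucket names cannot contain underscores, so hyphens are used throughout):

```hcl
variable "s3_buckets" {
  type = list(object({
    bucket_name        = string
    versioning_enabled = bool
  }))
  default = [
    { bucket_name = "prod-bucket", versioning_enabled = true },
    { bucket_name = "stage-bucket", versioning_enabled = false },
    { bucket_name = "qa-bucket", versioning_enabled = false },
  ]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = length(var.s3_buckets)
  bucket        = var.s3_buckets[count.index].bucket_name
  acl           = "private"
  force_destroy = true

  # Emit the versioning block only for buckets whose flag is set.
  dynamic "versioning" {
    for_each = var.s3_buckets[count.index].versioning_enabled ? [1] : []
    content {
      enabled = true
    }
  }
}
```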

Terraform - Conditional expression for an AWS role

I have the following in my module to create a role:
resource "aws_iam_role" "default" {
  name                 = var.name
  assume_role_policy   = var.assume_role_policy
  permissions_boundary = var.account_id != "" ? var.permissions_boundary : "arn:aws:iam::${data.aws_caller_identity.default.account_id}:policy/BoundedPermissionsPolicy"
}
Problem: I want the permissions_boundary argument to use the account ID if present, and if it is not specified, to fall back to the default policy ARN (arn:aws:iam::${data.aws_caller_identity.default.account_id}:policy/BoundedPermissionsPolicy).
The code above does not work when I try to use the account ID.
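One hedged way to express "use the caller's value if given, otherwise a default" is to key the condition on the boundary variable itself rather than on the account ID. This sketch assumes var.permissions_boundary defaults to null when unspecified, which is an assumption about the module's inputs:

```hcl
variable "permissions_boundary" {
  type    = string
  default = null
}

resource "aws_iam_role" "default" {
  name               = var.name
  assume_role_policy = var.assume_role_policy

  # coalesce() returns the first argument that is not null or empty, so a
  # caller-supplied boundary wins and the account-default ARN is the fallback.
  permissions_boundary = coalesce(
    var.permissions_boundary,
    "arn:aws:iam::${data.aws_caller_identity.default.account_id}:policy/BoundedPermissionsPolicy"
  )
}
```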

How to generate SAS token using Access policy for a container of ADLS gen 2

How can I generate a SAS token using an access policy for a folder in a container of ADLS Gen2, exactly like the image below but for ADLS Gen2 containers or folders? Thank you in advance.
To generate a SAS token using an access policy on ADLS containers, you need to create an access policy first. You can create an access policy through the Azure portal (please check this link) or through Storage Explorer.
Based on your attached screenshot you are using Microsoft Storage Explorer, so here are the steps to create an access policy:
1) Go to your container and right-click on the container.
2) Select Manage Access Policies.
3) Click Add. There you can provide the access policy ID and the permissions you need to grant on the container, such as read and write (click the check boxes), then click Save.
4) Once the access policy is created, you can create the SAS based on it. Right-click on the container, select Get Shared Access Signature, select the access policy from the dropdown, and click Create.
Generate the SAS using Terraform:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
  }
  required_version = ">= 0.14.9"
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "terraformtest"
  location = "West Europe"
}

resource "azurerm_storage_account" "storage" {
  name                     = "storage name" # placeholder: must be a valid, globally unique name
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  allow_blob_public_access = true
}

resource "azurerm_storage_container" "container" {
  name                  = "terraformcont"
  storage_account_name  = azurerm_storage_account.storage.name
  container_access_type = "private"
}

data "azurerm_storage_account_blob_container_sas" "example" {
  connection_string = azurerm_storage_account.storage.primary_connection_string
  container_name    = azurerm_storage_container.container.name
  https_only        = true
  start             = "Date"
  expiry            = "Date"

  permissions {
    read   = true
    add    = true
    create = false
    write  = false
    delete = true
    list   = true
  }
}

output "sas_url_query_string" {
  value     = data.azurerm_storage_account_blob_container_sas.example.sas
  sensitive = true
}
After running terraform apply you will find the generated SAS in the output; because the output is marked sensitive, the value is stored in terraform.tfstate rather than printed to the console.
For more information, check this link.

Terraform - reference different variable within resource or data source dependent on some logic

With this code, I'm planning on having multiple variables like "vms". These are key/value maps of VMs and resource groups, where I for_each on the data source lookup (backupvm) to get their IDs. Then I can add these VMs, using the ID, as backup items in the vault.
data "azurerm_virtual_machine" "backupvm" {
  for_each            = var.vms
  name                = each.key
  resource_group_name = each.value
}

variable "vms" {
  type = map
  default = {
    # "vm name" = "resource group name"
    "vaulttestvm1" = "vaulttestrg"
    "vaulttestvm2" = "vaulttestrg"
    "vaulttestvm4" = "vaulttestrg"
  }
}

resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location
}

resource "azurerm_recovery_services_vault" "vault" {
  name                = var.vault_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku                 = "Standard"
  # change later
  soft_delete_enabled = false
}

resource "azurerm_backup_protected_vm" "vms" {
  for_each            = var.vms
  recovery_vault_name = azurerm_recovery_services_vault.vault.name
  resource_group_name = azurerm_recovery_services_vault.vault.resource_group_name
  source_vm_id        = data.azurerm_virtual_machine.backupvm[each.key].id
  backup_policy_id    = azurerm_backup_policy_vm.VMBackupPolicy.id
}
I need a way to reference var.vms in both the data source and the resource, so I could swap in another map variable based on some logic. If this were PowerShell, it would be something like:
name = if ($env -eq "prod") { $var.vms } elseif ($env -eq "stage") { $var.vmsstage } elseif ($env -eq "dev") { $var.vmsdev }
I've spent a day trying different things but haven't really got close. I may be somewhat restricted because I need to look up the VM ID using the data source first, then loop through my resource (azurerm_backup_protected_vm) dropping that ID in. A solution to this would prevent me from having to use multiple data sources and resources of the same type. Thanks!
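One hedged sketch of this pattern is to index a map of maps in a local value, then for_each over the selected map in both the data source and the resource. The variable names vmsstage, vmsdev and env are assumptions carried over from the PowerShell pseudocode above:

```hcl
variable "env" {
  type    = string
  default = "prod"
}

locals {
  # Pick the VM map for the current environment once, then reuse it.
  vms_by_env = {
    prod  = var.vms
    stage = var.vmsstage
    dev   = var.vmsdev
  }
  selected_vms = local.vms_by_env[var.env]
}

data "azurerm_virtual_machine" "backupvm" {
  for_each            = local.selected_vms
  name                = each.key
  resource_group_name = each.value
}

resource "azurerm_backup_protected_vm" "vms" {
  for_each            = local.selected_vms
  recovery_vault_name = azurerm_recovery_services_vault.vault.name
  resource_group_name = azurerm_recovery_services_vault.vault.resource_group_name
  source_vm_id        = data.azurerm_virtual_machine.backupvm[each.key].id
  backup_policy_id    = azurerm_backup_policy_vm.VMBackupPolicy.id
}
```

Because both blocks iterate over the same local, changing var.env swaps every lookup and backup item in one place.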

Terraform: Error complying with function name restrictions

In Terraform, I'm trying to add an S3 bucket as a trigger to my Lambda and give it the permissions. For this use case I'm creating an S3 resource and referring to the Lambda function in the trigger logic, but the code fails with the error below.
# Creating Lambda resource
resource "aws_lambda_function" "test_lambda" {
  filename      = "output/welcome.zip"
  function_name = var.function_name
  role          = var.role_name
  handler       = var.handler_name
  runtime       = var.run_time
}

# Creating S3 resource for invoking the Lambda function
resource "aws_s3_bucket" "bucket" {
  bucket = "source-bucktet-testing"
  acl    = "private"
  tags = {
    Name        = "source-bucktet-testing"
    Environment = "Dev"
  }
}

# Adding S3 bucket as trigger to my Lambda and giving the permissions
resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = "aws_s3_bucket.bucket.id"
  lambda_function {
    lambda_function_arn = "aws_lambda_function.test_lambda.arn"
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = "aws_lambda_function.test_lambda.function_name"
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::aws_s3_bucket.bucket.id"
}
Error Message :
The value passed to the aws_lambda_function resource for function_name is invalid. An AWS Lambda function name can only contain letters, numbers, hyphens, or underscores with no spaces. You need to change the value of var.function_name to align with these restrictions.
Your var.function_name must be invalid.
The allowed function name format, along with the ARN format, is explained here:
The length constraint applies only to the full ARN. If you specify only the function name, it is limited to 64 characters in length.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 140.
Pattern: (arn:(aws[a-zA-Z-]*)?:lambda:)?([a-z]{2}(-gov)?-[a-z]+-\d{1}:)?(\d{12}:)?(function:)?([a-zA-Z0-9-_]+)(:(\$LATEST|[a-zA-Z0-9-_]+))?
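Separately from var.function_name, the snippet above also passes several resource references as quoted literal strings, for example function_name = "aws_lambda_function.test_lambda.function_name". Terraform treats those as plain text, and a literal string containing dots would itself violate the Lambda name pattern quoted above. In Terraform 0.12+ these should be bare expressions; a hedged sketch of the corrected references:

```hcl
resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = aws_s3_bucket.bucket.id # expression, not the string "aws_s3_bucket.bucket.id"

  lambda_function {
    lambda_function_arn = aws_lambda_function.test_lambda.arn
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.test_lambda.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.bucket.arn # reference the bucket ARN instead of hard-coding it
}
```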