How to use the same variable with different values in different environments in a Terraform module

I am using the Terraform S3 module https://github.com/terraform-aws-modules/terraform-aws-s3-bucket. I created a wrapper module around it for creating S3 buckets. Currently there are 3 different AWS accounts in which this wrapper module is used to create buckets. Since there are 3 different accounts, the grant and owner values are different for each account. Currently, I have hardcoded these values for the buckets that I create.
s3_bucket.tf
module "sample_bucket" {
source = "../../../../modules/aws/data/s3_bucket"
bucket = "sample_bucket"
lifecycle_rule = [
rule here
]
}
../../../../modules/aws/data/s3_bucket/main.tf
module "s3_bucket" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.6.0"
bucket = var.bucket
attach_public_policy = var.attach_public_policy
server_side_encryption_configuration = var.server_side_encryption_configuration
grant = var.grant
owner = var.owner
cors_rule = var.cors_rule
lifecycle_rule = var.lifecycle_rule
tags = var.tags
versioning = var.versioning
replication_configuration = var.replication_configuration
force_destroy = var.force_destroy
}
../../../../modules/aws/data/s3_bucket/variable.tf

variable "grant" {
  description = "An ACL policy grant. Conflicts with `acl`"
  type        = any
  default     = []
}

variable "owner" {
  description = "Bucket owner's display name and ID. Conflicts with `acl`"
  type        = map(string)
  default     = {}
}
I want to set the grant and owner values once per account, instead of hardcoding them in s3_bucket.tf for every bucket I create. Can someone help me with how to use the same grant and owner variables with different values for each account?

You can use Terraform Workspaces for different environments. An example of how you could use workspaces:
module "sample_bucket" {
source = "../../../../modules/aws/data/s3_bucket"
bucket = "sample_bucket"
grant = "${var.grant}-${terraform.workspace}"
owner = "${var.owner}-${terraform.workspace}"
lifecycle_rule = [
rule here
]
}
P.S You might have to handle grant and owner differently from the above code since they are list and map
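For instance, you could key the values by workspace in the root module's variables (a minimal sketch; the workspace names and canonical-user IDs are hypothetical):

variable "grant" {
  description = "ACL policy grants, keyed by workspace"
  type        = any
  default = {
    dev = [
      {
        id          = "dev-account-canonical-user-id" # hypothetical
        type        = "CanonicalUser"
        permissions = ["FULL_CONTROL"]
      },
    ]
    # stg = [...], prod = [...]
  }
}

variable "owner" {
  description = "Bucket owner per workspace"
  type        = map(map(string))
  default = {
    dev = { id = "dev-account-canonical-user-id" } # hypothetical
    # stg = { ... }, prod = { ... }
  }
}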
Update:
If you use a separate folder for each environment, you can instead define a terraform.tfvars file per environment and pass the values in as needed. Read more about Input Variables in the Terraform documentation.
Within each folder you can run terraform plan/apply, and Terraform will automatically pick up that folder's terraform.tfvars file.
s3_bucket
│   main.tf
│
├───dev
│       terraform.tfvars
│
├───stg
│       terraform.tfvars
│
└───prod
        terraform.tfvars
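For example, dev/terraform.tfvars could hold that account's values (the IDs are hypothetical):

grant = [
  {
    id          = "dev-account-canonical-user-id"
    type        = "CanonicalUser"
    permissions = ["FULL_CONTROL"]
  },
]

owner = {
  id = "dev-account-canonical-user-id"
}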

Related

Multi S3 bucket Definition with versioning limitations

I was reading this post: Terraform - creating multiple buckets, and was wondering how I could add a condition to enable bucket versioning on one of the buckets and disable versioning on the rest, using Terraform conditionals or anything else that would make it work.
I was trying something like this, but it is not working:
variable "s3_bucket_name" {
type = "list"
default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}
resource "aws_s3_bucket" "henrys_bucket" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
force_destroy = "true"
var.s3_bucket_name[count.index] != "target-bucket-name" versioning { enabled = true } : versioning { enabled = false }
}
You can use a list of objects instead of just a list of bucket names. Each object can contain the bucket name and a versioning_enabled flag; then use both fields.
Something like:
bucket = var.s3_buckets[count.index].bucket_name
And for versioning, add a dynamic block based on var.s3_buckets[count.index].versioning_enabled, like below:
dynamic "versioning" {
for_each = var.s3_buckets[count.index].versioning_enabled== true ? [1] : []
content {
enabled = true
}
}
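Putting it together, a minimal sketch (the shape of the s3_buckets variable is assumed):

variable "s3_buckets" {
  type = list(object({
    bucket_name        = string
    versioning_enabled = bool
  }))
  default = [
    { bucket_name = "prod-bucket",  versioning_enabled = true },
    { bucket_name = "stage-bucket", versioning_enabled = false },
    { bucket_name = "qa-bucket",    versioning_enabled = false },
  ]
}

resource "aws_s3_bucket" "henrys_bucket" {
  count         = length(var.s3_buckets)
  bucket        = var.s3_buckets[count.index].bucket_name
  acl           = "private"
  force_destroy = true

  # Versioning block is emitted only for buckets flagged in the variable.
  dynamic "versioning" {
    for_each = var.s3_buckets[count.index].versioning_enabled ? [1] : []
    content {
      enabled = true
    }
  }
}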

Load local CSV file into BigQuery table with Terraform?

I'm new to Terraform. Is it possible to load the content of a CSV file into a BigQuery table without uploading it to GCS first?
I've studied the document below, but the approach doesn't seem to work with local files:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/bigquery_job
Question:
Is it possible somehow to do this without uploading the file into Google's environment?
resource "google_bigquery_table" "my_tyable" {
dataset_id = google_bigquery_dataset.bq_config_dataset.dataset_id
table_id = "my_tyable"
schema = file("${path.cwd}/path/to/schema.json")
}
resource "google_bigquery_job" "load_data" {
job_id = "load_data"
load {
source_uris = [
#"gs://cloud-samples-data/bigquery/us-states/us-states-by-date.csv", # this would work
"${path.cwd}/path/to/data.csv", # this is not working
]
destination_table {
project_id = google_bigquery_table.my_tyable.project
dataset_id = google_bigquery_table.my_tyable.dataset_id
table_id = google_bigquery_table.my_tyable.table_id
}
skip_leading_rows = 0
schema_update_options = ["ALLOW_FIELD_RELAXATION", "ALLOW_FIELD_ADDITION"]
write_disposition = "WRITE_APPEND"
autodetect = true
}
}
I was trying this in my own project, and I don't think it is possible, based on the error message I am seeing:
│ Error: Error creating Job: googleapi: Error 400: Source URI must be a Google Cloud Storage location: [REDACTED].csv, invalid
│
│ with module.[REDACTED].google_bigquery_job.load_data,
│ on modules\[REDACTED]\main.tf line 73, in resource "google_bigquery_job" "load_data":
│ 73: resource "google_bigquery_job" "load_data" {
│
I ended up putting the CSV file into the same bucket as the Terraform state, under the prefix data/.
Probably the best option is to read the local file with the file function:
file("${path.module}/data.csv")

How to generate SAS token using Access policy for a container of ADLS gen 2

How do I generate a SAS token using an access policy for a folder in a container of ADLS Gen2?
Exactly like the image below, but for ADLS Gen2 containers or folders. Thank you in advance.
To generate a SAS token using an access policy on ADLS containers, you need to create an access policy first. You can create an access policy through the Azure portal (please check this link) or Storage Explorer.
Based on your attached screenshot you are using Microsoft Azure Storage Explorer, so here are the steps to create an access policy:
1) Go to your container and right-click on it.
2) Select Manage Access Policies.
3) Click Add. There you can provide the access policy ID and the permissions you need to grant on the container, such as read and write (click the checkboxes), then click Save.
4) Once the access policy is created, you can create the SAS based on it. Right-click the container, select Get Shared Access Signature, choose the access policy from the dropdown, and click Create.
Generate SAS using Terraform:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
  }
  required_version = ">= 0.14.9"
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "terraformtest"
  location = "West Europe"
}

resource "azurerm_storage_account" "storage" {
  name                     = "storagename" # your storage account name
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  allow_blob_public_access = true
}

resource "azurerm_storage_container" "container" {
  name                  = "terraformcont"
  storage_account_name  = azurerm_storage_account.storage.name
  container_access_type = "private"
}

data "azurerm_storage_account_blob_container_sas" "example" {
  connection_string = azurerm_storage_account.storage.primary_connection_string
  container_name    = azurerm_storage_container.container.name
  https_only        = true
  start             = "Date"
  expiry            = "Date"

  permissions {
    read   = true
    add    = true
    create = false
    write  = false
    delete = true
    list   = true
  }
}

output "sas_url_query_string" {
  value     = data.azurerm_storage_account_blob_container_sas.example.sas
  sensitive = true
}
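The start and expiry values above are placeholders; they take ISO 8601 dates, for example (illustrative values):

  start  = "2023-01-01"
  expiry = "2023-12-31"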
After running terraform apply you will get the output in terraform.tfstate (it is marked sensitive, so run terraform output sas_url_query_string to print it).
For more information, check the azurerm_storage_account_blob_container_sas documentation.

Self link modules in terraform

I have the following Terraform code snippet where I'm trying to use a self_link in the subnetwork's network argument that references the network resource.
main.tf
resource "google_compute_network" "demo-vpc-network" {
auto_create_subnetworks = "false"
delete_default_routes_on_create = "false"
name = var.GCP_COMPUTE_NETWORK_NAME
project = var.GCP_PROJECT_NAME
routing_mode = "REGIONAL"
}
resource "google_compute_subnetwork" "demo-subnet" {
ip_cidr_range = "10.200.0.0/24"
name = "kubernetes"
network = google_compute_network.vpc_network.self.link
private_ip_google_access = "false"
project = var.GCP_PROJECT_NAME
region = "us-west1"
}
However, I get the following error.
Error: Reference to undeclared resource
on main.tf line 77, in resource "google_compute_subnetwork" "demo-subnet":
77: network = google_compute_network.vpc_network.self.link
A managed resource "google_compute_network" "vpc_network" has not been
declared in the root module.
google_compute_network.vpc_network.self.link
won't work because google_compute_network.vpc_network doesn't exist.
It's easy to fix because google_compute_network.demo-vpc-network does exist.
Update: Also, as you've noted in your comment self-link (with a hyphen) won't work and needs to be self_link (with an underscore).
Here's the second resource block with the bugs fixed:
resource "google_compute_subnetwork" "demo-subnet" {
  ip_cidr_range             = "10.200.0.0/24"
  name                      = "kubernetes"
  network                   = google_compute_network.demo-vpc-network.self_link
  private_ip_google_access  = "false"
  project                   = var.GCP_PROJECT_NAME
  region                    = "us-west1"
}
Alternatively, the error occurs because the reference expects the network resource to be declared as:
resource "google_compute_network" "vpc_network"
If you rename the resource that way, you can still set the network's actual name with the property:
name = "demo-vpc-network"
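In other words, the alternative fix keeps the original reference and renames the resource instead (a sketch based on the blocks above):

resource "google_compute_network" "vpc_network" {
  auto_create_subnetworks         = "false"
  delete_default_routes_on_create = "false"
  name                            = "demo-vpc-network"
  project                         = var.GCP_PROJECT_NAME
  routing_mode                    = "REGIONAL"
}

# The subnetwork's original reference now resolves:
# network = google_compute_network.vpc_network.self_link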
Check the google_compute_network documentation for more details.

Workaround for `count.index` in Terraform Module

I need a workaround for using count.index inside a module block for some input variables. I have a habit of over-complicating problems, so maybe there's a much easier solution.
File/Folder Structure:
modules/
  main.tf
  ignition/
    main.tf
    modules/
      files/
        main.tf
      template_files/
        main.tf
End Goal: Create an Ignition file for each instance I'm deploying. Each Ignition file has instance-specific info like hostname, IP address, etc.
All of this code works if I use a static value or a variable without count.index. I need help coming up with a workaround for the address, gateway, and hostname variables specifically. If I need to process count.index inside one of the child modules, that's totally fine, but I can't seem to wrap my brain around that. I've tried null_data_source and null_resource blocks from the child modules to achieve it, but so far no luck.
Variables:
workers = {
  Lab1 = {
    "lab1k8sc8r001" = "192.168.17.100/24"
  }
  Lab2 = {
    "lab2k8sc8r001" = "192.168.18.100/24"
  }
}

gateway = {
  Lab1 = [
    "192.168.17.1",
  ]
  Lab2 = [
    "192.168.18.1",
  ]
}
From modules/main.tf, I'm calling the ignition module:
module "ignition_workers" {
source = "./modules/ignition"
virtual_machines = var.workers[terraform.workspace]
ssh_public_keys = var.ssh_public_keys
files = [
"files_90-disable-auto-updates.yaml",
"files_90-disable-console-logs.yaml",
]
template_files = {
"files_eth0.nmconnection.yaml" = {
interface-name = "eth0",
address = element(values(var.workers[terraform.workspace]), count.index),
gateway = element(var.gateway, count.index % length(var.gateway)),
dns = join(";", var.dns_servers),
dns-search = var.domain,
}
"files_etc_hostname.yaml" = {
hostname = element(keys(var.workers[terraform.workspace]), count.index),
}
"files_chronyd.yaml" = {
ntp_server = var.ntp_server,
}
}
}
From modules/ignition/main.tf I take the files and template_files variables to build the Ignition config:
module "ingition_file_snippets" {
source = "./modules/files"
files = var.files
}
module "ingition_template_file_snippets" {
source = "./modules/template_files"
template_files = var.template_files
}
data "ct_config" "fedora-coreos-config" {
count = length(var.virtual_machines)
content = templatefile("${path.module}/assets/files_ssh_authorized_keys.yaml", {
ssh_public_keys = var.ssh_public_keys
})
pretty_print = true
snippets = setunion(values(module.ingition_file_snippets.files), values(module.ingition_template_file_snippets.files))
}
I am not quite sure what you are trying to achieve, so I can not give a detailed example.
But modules in Terraform do not support count or for_each yet, so you also cannot use count.index inside a module block.
You might want to change your module to take lists/maps as input and create those lists/maps with for expressions, transforming them from your input variables.
You can combine for with if to create a filtered subset of your source list/map, as in:
[for s in var.list : upper(s) if s != ""]
I hope this helps you work around the missing count support.
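For example, instead of count.index in the module call, you could build one map entry per virtual machine with a for expression and pass that in (a sketch based on the workers and gateway variables above; the local name is assumed):

locals {
  # One config object per VM, derived from the current workspace's workers map.
  worker_configs = {
    for hostname, address in var.workers[terraform.workspace] : hostname => {
      hostname = hostname
      address  = address
      gateway  = var.gateway[terraform.workspace][0]
    }
  }
}

The ignition module can then receive local.worker_configs as a single map input and iterate over it internally, instead of relying on count.index at the call site.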