I need a workaround for using count.index inside a module block for some input variables. I have a habit of over-complicating problems, so maybe there's a much easier solution.
File/Folder Structure:
modules/
  main.tf
  ignition/
    main.tf
    modules/
      files/
        main.tf
      template_files/
        main.tf
End Goal: Create an Ignition file for each instance I'm deploying. Each Ignition file has instance-specific info like hostname, IP address, etc.
All of this code works if I use a static value or a variable without count.index. I need help coming up with a workaround for the address, gateway, and hostname variables specifically. If I need to process count.index inside one of the child modules, that's totally fine; I just can't seem to wrap my brain around how. I've tried null_data_source and null_resource blocks from the child modules to achieve that, but so far no luck.
Variables:
workers = {
  Lab1 = {
    "lab1k8sc8r001" = "192.168.17.100/24"
  }
  Lab2 = {
    "lab2k8sc8r001" = "192.168.18.100/24"
  }
}

gateway = {
  Lab1 = [
    "192.168.17.1",
  ]
  Lab2 = [
    "192.168.18.1",
  ]
}
From modules/main.tf, I'm calling the ignition module:
module "ignition_workers" {
source = "./modules/ignition"
virtual_machines = var.workers[terraform.workspace]
ssh_public_keys = var.ssh_public_keys
files = [
"files_90-disable-auto-updates.yaml",
"files_90-disable-console-logs.yaml",
]
template_files = {
"files_eth0.nmconnection.yaml" = {
interface-name = "eth0",
address = element(values(var.workers[terraform.workspace]), count.index),
gateway = element(var.gateway, count.index % length(var.gateway)),
dns = join(";", var.dns_servers),
dns-search = var.domain,
}
"files_etc_hostname.yaml" = {
hostname = element(keys(var.workers[terraform.workspace]), count.index),
}
"files_chronyd.yaml" = {
ntp_server = var.ntp_server,
}
}
}
From modules/ignition/main.tf I take the files and template_files variables to build the Ignition config:
module "ingition_file_snippets" {
source = "./modules/files"
files = var.files
}
module "ingition_template_file_snippets" {
source = "./modules/template_files"
template_files = var.template_files
}
data "ct_config" "fedora-coreos-config" {
count = length(var.virtual_machines)
content = templatefile("${path.module}/assets/files_ssh_authorized_keys.yaml", {
ssh_public_keys = var.ssh_public_keys
})
pretty_print = true
snippets = setunion(values(module.ingition_file_snippets.files), values(module.ingition_template_file_snippets.files))
}
I am not quite sure what you are trying to achieve, so I can't give a detailed example.
But modules in Terraform do not support count or for_each yet (module-level count/for_each only arrives with Terraform 0.13), so you also cannot use count.index inside a module block.
You might want to change your module to take lists/maps as input and build those lists/maps with for expressions, transforming them from your existing input variables.
You can combine for with if to create a filtered subset of a source list/map, as in:
[for s in var.list : upper(s) if s != ""]
I hope this helps you work around the missing count support.
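For example, you could build one entry per virtual machine with a for expression in modules/main.tf and hand the whole map to the module, so no count.index is needed at the call site. This is only a rough sketch based on the variables shown above: local.per_vm_template_files is a made-up name, and it assumes each workspace has a single gateway.

locals {
  # One template_files map per VM, keyed by hostname.
  per_vm_template_files = {
    for hostname, address in var.workers[terraform.workspace] :
    hostname => {
      "files_eth0.nmconnection.yaml" = {
        interface-name = "eth0"
        address        = address
        gateway        = var.gateway[terraform.workspace][0]
        dns            = join(";", var.dns_servers)
        dns-search     = var.domain
      }
      "files_etc_hostname.yaml" = {
        hostname = hostname
      }
      "files_chronyd.yaml" = {
        ntp_server = var.ntp_server
      }
    }
  }
}

module "ignition_workers" {
  source           = "./modules/ignition"
  virtual_machines = var.workers[terraform.workspace]
  ssh_public_keys  = var.ssh_public_keys
  files = [
    "files_90-disable-auto-updates.yaml",
    "files_90-disable-console-logs.yaml",
  ]
  template_files = local.per_vm_template_files
}

Inside modules/ignition, each ct_config instance can then pick its own snippets by hostname, for example with element(keys(var.virtual_machines), count.index) on 0.12, or with for_each on the resources themselves.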
I need help with a Terraform module that I've created. It works perfectly, but I need to add some automation.
I created a module which creates multiple private endpoints, but I always have to fill in the variable values manually.
This is the module:
resource "azurerm_private_endpoint" "endpoint" {
for_each = try({ for endpoint in var.endpoints : endpoint.name => endpoint }, toset([]))
name = each.key
location = var.location
resource_group_name = var.resource_group_name
subnet_id = each.value.subnet_id
dynamic "private_service_connection" {
for_each = each.value.private_service_connection
content {
name = each.key
private_connection_resource_id = private_service_connection.value.private_connection_resource_id
is_manual_connection = false
subresource_names = var.subresource_name ### see values on : https://learn.microsoft.com/fr-fr/azure/private-link/private-endpoint-overview#private-link-resource
}
}
lifecycle {
ignore_changes = [
private_dns_zone_group
]
}
tags = var.tags
}
I need to have:
1 - For the private endpoint name, I need it to be generated automatically: "pendp-(the subresource_name value in lower case)-(my resource_name => mysql server, for example)".
2 - For the private connection name, I need the value to be generated automatically: "connection-(the subresource_name value in lower case)-(my resource_name => mysql server, for example)".
3 - Some automation to detect the subresource_name automatically (if I create a private endpoint for a blob, a mariadb, or a mysqlserver, the module should detect it).
Terraform version:
terraform {
  required_version = "~> 1"
  required_providers {
    azurerm = "~> 3.0"
  }
}
The easiest way to combine values automatically would be to use the Terraform string join() function to join multiple strings together. For lower case strings, you can use the lower() function.
Some examples:
name = join("-", ["pandp", lower(var.subresource_name)])
...
name = join("-", ["connection", lower(var.subresource_name), lower(each.key)])
For your third rule, you want to use a conditional expression to determine if it's a blob, or mariadb, or mysqlserver.
In this example, we set an example_name local with a value some-blob-value if var.subresource_name contains a string that starts with "blob", and set it to something-else if the condition is false:
locals {
example_name = startswith(lower(var.subresource_name), "blob") ? "some-blob-value" : "something-else"
}
There are many ways to write a conditional that checks whether the value passed in matches what you expect and then derives a result from it. Exactly what you want isn't clear from the question, but hopefully this gets you pointed in the right direction.
Terraform also has several helper functions that can help when you only need part of a string, such as startswith(), endswith(), or contains(), depending on your needs.
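Putting the pieces together, one possible pattern for the automatic detection is a small lookup map combined with the join()/lower() calls above. This is only a sketch: var.target_resource and var.resource_name are hypothetical inputs, and the map only covers the three types you mentioned.

locals {
  # Hypothetical lookup from a short resource type to its private-link subresource name.
  subresource_by_type = {
    blob        = "blob"
    mariadb     = "mariadbServer"
    mysqlserver = "mysqlServer"
  }

  subresource_name = lookup(local.subresource_by_type, lower(var.target_resource), "blob")

  endpoint_name   = join("-", ["pendp", lower(local.subresource_name), lower(var.resource_name)])
  connection_name = join("-", ["connection", lower(local.subresource_name), lower(var.resource_name)])
}

You would then use [local.subresource_name] for subresource_names and local.endpoint_name / local.connection_name for the two names.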
I'm running Terraform from the VS Code editor, which uses PowerShell as the default shell, and I get the same error when I try to validate it or run terraform init/plan/apply from VS Code, an external PowerShell window, or CMD.
The code ran without any issues until I added the virtual machine creation code. I have included variables.tf, terraform.tfvars, and the main Terraform code below.
terraform.tfvars
web_server_location = "West US 2"
resource_prefix = "web-server"
web_server_address_space = "1.0.0.0/22"
web_server_address_prefix = "1.0.1.0/24"
Environment = "Test"
variables.tf
variable "web_server_location" {
type = string
}
variable "resource_prefix" {
type = string
}
variable "web_server_address_space" {
type = string
}
#variable for network range
variable "web_server_address_prefix" {
type = string
}
#variable for Environment
variable "Environment" {
type = string
}
terraform_example.tf
# Configure the Azure Provider
provider "azurerm" {
# whilst the `version` attribute is optional, we recommend pinning to a given version of the Provider
version = "=2.0.0"
features {}
}
# Create a resource group
resource "azurerm_resource_group" "example_rg" {
name = "${var.resource_prefix}-RG"
location = var.web_server_location
}
# Create a virtual network within the resource group
resource "azurerm_virtual_network" "example_vnet" {
name = "${var.resource_prefix}-vnet"
resource_group_name = azurerm_resource_group.example_rg.name
location = var.web_server_location
address_space = [var.web_server_address_space]
}
# Create a subnet within the virtual network
resource "azurerm_subnet" "example_subnet" {
name = "${var.resource_prefix}-subnet"
resource_group_name = azurerm_resource_group.example_rg.name
virtual_network_name = azurerm_virtual_network.example_vnet.name
address_prefix = var.web_server_address_prefix
}
# Create a Network Interface
resource "azurerm_network_interface" "example_nic" {
name = "${var.resource_prefix}-NIC"
location = azurerm_resource_group.example_rg.location
resource_group_name = azurerm_resource_group.example_rg.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.example_subnet.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip.example_public_ip.id
}
}
# Create a Public IP
resource "azurerm_public_ip" "example_public_ip" {
name = "${var.resource_prefix}-PublicIP"
location = azurerm_resource_group.example_rg.location
resource_group_name = azurerm_resource_group.example_rg.name
allocation_method = var.Environment == "Test" ? "Static" : "Dynamic"
tags = {
environment = "Test"
}
}
# Creating resource NSG
resource "azurerm_network_security_group" "example_nsg" {
name = "${var.resource_prefix}-NSG"
location = azurerm_resource_group.example_rg.location
resource_group_name = azurerm_resource_group.example_rg.name
# Security rule can also be defined with resource azurerm_network_security_rule, here just defining it inline.
security_rule {
name = "RDPInbound"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "3389"
source_address_prefix = "*"
destination_address_prefix = "*"
}
tags = {
environment = "Test"
}
}
# NIC and NSG association
resource "azurerm_network_interface_security_group_association" "example_nsg_association" {
network_interface_id = azurerm_network_interface.example_nic.id
network_security_group_id = azurerm_network_security_group.example_nsg.id
}
# Creating Windows Virtual Machine
resource "azurerm_virtual_machine" "example_windows_vm" {
name = "${var.resource_prefix}-VM"
location = azurerm_resource_group.example_rg.location
resource_group_name = azurerm_resource_group.example_rg.name
network_interface_ids = [azurerm_network_interface.example_nic.id]
vm_size = "Standard_B1s"
delete_os_disk_on_termination = true
storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServerSemiAnnual"
sku = "Datacenter-Core-1709-smalldisk"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
storage_account_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "adminuser"
admin_password = "Password1234!"
}
os_profile_windows_config {
disable_password_authentication = false
}
tags = {
environment = "Test"
}
}
Error:
PS C:\Users\e5605266\Documents\MyFiles\Devops\Terraform> terraform init
There are some problems with the configuration, described below.
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
Error: Invalid character
on terraform_example.tf line 89, in resource "azurerm_virtual_machine" "example_windows_vm":
89: location = azurerm_resource_group.example_rg.location
This character is not used within the language.
Error: Invalid expression
on terraform_example.tf line 89, in resource "azurerm_virtual_machine" "example_windows_vm":
89: location = azurerm_resource_group.example_rg.location
Expected the start of an expression, but found an invalid expression token.
Error: Argument or block definition required
on terraform_example.tf line 90, in resource "azurerm_virtual_machine" "example_windows_vm":
90: resource_group_name = azurerm_resource_group.example_rg.name
An argument or block definition is required here. To set an argument, use the
equals sign "=" to introduce the argument value.
Error: Invalid character
on terraform_example.tf line 90, in resource "azurerm_virtual_machine" "example_windows_vm":
90: resource_group_name = azurerm_resource_group.example_rg.name
This character is not used within the language.
I've encountered this problem myself in several different contexts, and it does have a common solution which is no fun at all: manually typing the code back in...
This resource block seems to be where it runs into problems:
resource "azurerm_virtual_machine" "example_windows_vm" {
name = "${var.resource_prefix}-VM"
location = azurerm_resource_group.example_rg.location
resource_group_name = azurerm_resource_group.example_rg.name
network_interface_ids = [azurerm_network_interface.example_nic.id]
vm_size = "Standard_B1s"
delete_os_disk_on_termination = true
storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServerSemiAnnual"
sku = "Datacenter-Core-1709-smalldisk"
version = "latest"
}
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
storage_account_type = "Standard_LRS"
}
os_profile {
computer_name = "hostname"
admin_username = "adminuser"
admin_password = "Password1234!"
}
os_profile_windows_config {
disable_password_authentication = false
}
tags = {
environment = "Test"
}
}
Try copying that back into your editor as-is. I cannot see any problematic characters in it, and ironically Stack Overflow may have done you a solid and filtered them out. Literally copy/pasting it over the existing block may remedy the situation.
I have seen Terraform examples online with curly (typographic) double quotes, which aren't ASCII double quotes and won't work, many times. That may be what you are seeing.
Beyond that, you'd need to push your code to GitHub or similar so I can see the raw bytes for myself.
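If you have grep available (for example via Git Bash or WSL), a quick way to hunt for non-ASCII bytes yourself is something like the following; the file name is just the one from your example:

grep -nP '[^\x00-\x7F]' terraform_example.tf

Each matching line is printed with its line number, which should correspond to the lines Terraform complains about.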
In the off-chance this helps someone who runs into this error and comes across it on Google, I just thought I would post my situation and how I fixed it.
I have an old demo Terraform infrastructure that I revisited after months and, long story short, I issued this command two days ago and forgot about it:
terraform plan -out=plan.tf
This creates a zip archive of the plan. Upon coming back two days later and running terraform init, my terminal scrolled garbage and "This character is not used within the language." errors for about 7 seconds. Because of the .tf extension, Terraform was trying to read the zip data as configuration and promptly choking on it.
Through moving individual tf files to a temp directory and checking their validity with terraform init, I found the culprit, deleted it, and functionality was restored.
Be careful when exporting your plan files, folks!
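An easy way to avoid this in the first place is to give the plan output a name that does not end in .tf, so Terraform never tries to parse it as configuration (the file name below is just a common convention, not a requirement):

terraform plan -out=tfplan
terraform apply tfplan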
I ran into the same problem and found this page.
I solved the issue and decided to post here.
I opened my plan file in Notepad++ and selected View-Show all symbols.
I removed all the TAB characters and replaced them with spaces.
In my case, the problem was fully resolved by this.
In my case, when I ran into the same problem ("This character is not used within the language"), I found the encoding of the files was UTF-16 (it was a generated file from PS). Changing the file encoding to UTF-8 (as mentioned in this question) solved the issue.
I found I got this most often when I move files from Windows to Linux. The *.tf file does not like the Windows tabs and line breaks.
I tried some of the same tools I use when I have this problem with *.sh files, but so far I've resorted to manually cleaning up the lines flagged in the error.
In my case, the .tf file was generated by the command terraform show -no-color > my_problematic.tf, and the file's encoding was "UTF-16 LE BOM"; converting it to UTF-8 fixed my issue.
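If the file is generated from Windows PowerShell, one possible way to avoid this in the first place (plain > redirection there defaults to UTF-16) is to write the file with an explicit encoding; the file name is just the one from the example above:

terraform show -no-color | Out-File -Encoding utf8 my_problematic.tf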
I'm using Terraform 0.12. I have an S3 module that outputs a list of buckets, which I would like to use as an input for a CloudFront module that I've got.
The problem I'm facing is that when I run terraform plan/apply I get the following error: count.index is 0; var.redirect-buckets is tuple with 1 element.
I've tried all kinds of splats and moving the count.index call around, to no avail. My sample code is below.
module.s3
resource "aws_s3_bucket" "redirect" {
  count  = length(var.redirects)
  bucket = element(var.redirects, count.index)
}
module.s3.output
output "redirect-buckets" {
  value = [aws_s3_bucket.redirect.*]
}
module.cdn.variables
...
variable "redirect-buckets" {
  description = "Redirect buckets"
  default     = []
}
....
The error is thrown in the block below.
module.cdn
resource "aws_cloudfront_distribution" "redirect" {
  count = length(var.redirect-buckets)
  default_cache_behavior {
    // Line below throws the error, one amongst many
    target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
    ....
    //Another error throwing line
    target_origin_id = "cloudfront-distribution-origin-${var.redirect-buckets[count.index]}.s3.amazonaws.com"
Any help is greatly appreciated.
module.s3
resource "aws_s3_bucket" "redirects" {
  for_each = var.redirects
  bucket   = each.value
}
Your variable definition for redirects needs to change to something like this:
variable "redirects" {
type = map(string)
}
module.s3.output:
output "redirect_buckets" {
value = aws_s3_bucket.redirects
}
module.cdn
resource "aws_cloudfront_distribution" "redirects" {
for_each = var.redirect_buckets
default_cache_behavior {
target_origin_id = "cloudfront-distribution-origin-${each.value.id}.s3.amazonaws.com"
}
Your variable definition for redirect-buckets needs to change to something like this (note the underscores; kebab-case names behave strangely in some contexts and aren't worth it):
variable "redirect_buckets" {
type = map(object(
{
id = string
}
))
}
root module
module "s3" {
  source = "../s3" // or whatever the path is
  redirects = {
    site1 = "some-bucket-name"
    site2 = "some-other-bucket"
  }
}
module "cdn" {
  source           = "../cdn" // or whatever the path is
  redirect_buckets = module.s3.redirect_buckets
}
From an example perspective, this is interesting, but you don't need to use outputs from S3 here since you could just hand the cdn module the same map of redirects and use for_each on those.
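A minimal sketch of that alternative, with the output dropped entirely (the module paths are placeholders, and it assumes both modules accept the same redirects map):

variable "redirects" {
  type = map(string)
}

module "s3" {
  source    = "../s3"
  redirects = var.redirects
}

module "cdn" {
  source    = "../cdn"
  # The cdn module can for_each over the same map and build its origin IDs from the bucket names.
  redirects = var.redirects
}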
There is a tool called Terragrunt which wraps Terraform and supports dependencies.
https://terragrunt.gruntwork.io/docs/features/execute-terraform-commands-on-multiple-modules-at-once/#dependencies-between-modules
I'm trying to use fetchgit to download source repos from my lab's private GitLab server, which currently self-signs its SSL certificate.
default.nix:
with (import <nixpkgs> {});
{
  test-pkg = callPackage ./test-pkg.nix {
    buildPythonPackage = python35Packages.buildPythonPackage;
  };
}
test-pkg.nix:
{ buildPythonPackage, fetchgit }:
buildPythonPackage rec {
  pname = "test-pkg";
  version = "0.2.1";
  src = fetchgit {
    url = "https://gitlabserver/experiment-deployment/test-pkg";
    rev = "refs/tags/v${version}";
    sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
  };
}
Which results in the following error when I call nix-shell:
fatal: unable to access 'https://gitlabserver/experiment-deployment/test-pkg/': SSL certificate problem: self signed certificate
Looking at build-support/fetchgit, it seems that fetchgit is made with mkDerivation, so I tried to make a new fetchgit using overrideAttrs. I pass in the git environment variable that makes git ignore SSL verification, expecting that the variable will be initialized during the setup phase.
revised default.nix:
with (import <nixpkgs> {});
let
  fetchgit-no-verify = fetchgit.overrideAttrs { GIT_SSL_NO_VERIFY = true; };
in rec {
  test-pkg = callPackage ./test-pkg.nix {
    buildPythonPackage = python35Packages.buildPythonPackage;
    fetchgit = fetchgit-no-verify;
  };
}
I thought I was really clever when I came up with this over the weekend, only to discover, once I implemented it, that my new error states:
error: attribute 'overrideAttrs' missing, at [...]/default.nix:2:26
Inspecting fetchgit in nix repl shows that it is a functor attribute set. I tried for a little while to get at overrideAttrs, without success. Trying again, I saw that git could be passed to fetchgit,
re-revised default.nix:
with (import <nixpkgs> {});
let
  git = git.overrideAttrs { GIT_SSL_NO_VERIFY = true; };
  fetchgit-no-verify = fetchgit.override { git = git-no-verify; };
in rec {
  test-pkg = callPackage ./test-pkg.nix {
    buildPythonPackage = python35Packages.buildPythonPackage;
    fetchgit = fetchgit-no-verify;
  };
}
but the new error:
error: attempt to call something which is not a function but a set, at /nix/store/jmynn33vcn3mcscsch0zf46fz9wsw05y-nixpkgs-20.03pre193309.c4196cca9ac/nixpkgs/pkgs/stdenv/generic/make-derivation.nix:318:55
Finally, on to my questions. Is there a way to add the environment variable to the fetchgit or git derivations? Is there perhaps another way to connect, some builtin option I missed? I could use a private repository over SSH instead of HTTPS, but due to how we deploy experiments I'd like to avoid that.
I was able to make this work with this ugly thing.
default.nix:
with (import <nixpkgs> {});
let
  fetchgit-no-verify = fetchgit // {
    __functor = self: args:
      (fetchgit.__functor self args).overrideAttrs (oldAttrs: { GIT_SSL_NO_VERIFY = true; });
  };
in rec {
  test-pkg = callPackage ./test-pkg.nix {
    buildPythonPackage = python35Packages.buildPythonPackage;
    fetchgit = fetchgit-no-verify;
  };
}
fetchgit-no-verify starts from the fetchgit functor set and overwrites its __functor attribute with a new function. The new functor just applies its arguments and then calls overrideAttrs on the result.
This works, but I'm happy to award the answer to anybody who can add some insight or comes up with another solution. For one, I'd like to know how the fetchgit derivation becomes a functor. Is this something callPackage does?
There is something I am not understanding about Terraform variables. I am getting prompted for two variables when I run "terraform apply". I don't think I should be prompted for any, since I defined them all in terraform.tfvars. I am getting prompted for applicationNamespace and staticIpName, but I am not sure why. What am I misunderstanding?
I created a file (terraform.tfvars):
#--------------------------------------------------------------
# General
#--------------------------------------------------------------
cluster = "reddiyo-development"
project = "<MYPROJECTID>"
region = "us-central1"
credentialsLocation = "<MYCERTLOCATION>"
bucket = "reddiyo-terraform-state"
vpcLocation = "us-central1-b"
network = "default"
staticIpName = "dev-env-ip"
#--------------------------------------------------------------
# Specific To NODE
#--------------------------------------------------------------
terraformPrefix = "development"
mainNodeName = "primary-pool"
nodeMachineType = "n1-standard-1"
#--------------------------------------------------------------
# Specific To Application
#--------------------------------------------------------------
applicationNamespace = "application"
I also have a Terraform script:
variable "cluster" {}
variable "project" {}
variable "region" {}
variable "bucket" {}
variable "terraformPrefix" {}
variable "mainNodeName" {}
variable "vpcLocation" {}
variable "nodeMachineType" {}
variable "credentialsLocation" {}
variable "network" {}
variable "applicationNamespace" {}
variable "staticIpName" {}
data "terraform_remote_state" "remote" {
backend = "gcs"
config = {
bucket = "${var.bucket}"
prefix = "${var.terraformPrefix}"
}
}
provider "google" {
//This needs to be updated to wherever you put your credentials
credentials = "${file("${var.credentialsLocation}")}"
project = "${var.project}"
region = "${var.region}"
}
resource "google_container_cluster" "gke-cluster" {
name = "${var.cluster}"
network = "${var.network}"
location = "${var.vpcLocation}"
remove_default_node_pool = true
# node_pool {
# name = "${var.mainNodeName}"
# }
node_locations = [
"us-central1-a",
"us-central1-f"
]
//Get your credentials for the newly created cluster so that microservices can be deployed
provisioner "local-exec" {
command = "gcloud config set project ${var.project}"
}
provisioner "local-exec" {
command = "gcloud container clusters get-credentials ${var.cluster} --zone ${var.vpcLocation}"
}
}
resource "google_container_node_pool" "primary_pool" {
name = "${var.mainNodeName}"
cluster = "${var.cluster}"
location = "${var.vpcLocation}"
node_count = "2"
node_config {
machine_type = "${var.nodeMachineType}"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/trace.append",
]
}
management {
auto_repair = true
auto_upgrade = true
}
autoscaling {
min_node_count = 2
max_node_count = 10
}
}
# //Reserve a Static IP
resource "google_compute_address" "ip_address" {
name = "${var.staticIpName}"
}
//Install Ambassador
module "ambassador" {
source = "modules/ambassador"
applicationNamespace = "${var.applicationNamespace}"
}
You can try to force it to read your variables by using:
terraform apply -var-file=<path_to_your_vars>
For reference, read below if anybody faces a similar issue.
terraform.tfvars is the default variable file name, from which Terraform reads variables automatically.
If any other file name is used, it needs to be passed on the command line, i.e.: terraform plan -var-file=whateverName.tfvars
Also, the order in which Terraform loads variables:
Environment variables
terraform.tfvars
terraform.tfvars.json
Any *.auto.tfvars or *.auto.tfvars.json files
Any -var or -var-file options on the command line
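So, for example, a variables file with a non-default name can either be passed explicitly or renamed so it is picked up automatically (the file names here are just placeholders):

terraform apply -var-file=development.tfvars
# or rename it to development.auto.tfvars and simply run:
terraform apply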