What data does a terraform null_resource store in state? - ssl

In short, I am generating a key/cert pair on a local machine in order to keep the keys out of Terraform state. Will my keys end up in Terraform state via the apply_ssl_tls.sh script inside a null_resource?
variable "site" {}
module "zone" {
source = "../../../../shared/terraform/zone.tf"
site = var.site
}
module "dns" {
source = "../../../../shared/terraform/dns.tf"
site = var.site
}
# Keys are generated on the local machine via a script in order to keep keys out of state.
# Will a null_resource keep keys out of state?
resource "null_resource" "ssl-tls" {
provisioner "local-exec" {
command = "../../../../shared/scripts/apply_ssl_tls.sh"
}
provisioner "local-exec" {
when = destroy
command = "../../../../shared/scripts/destroy_ssl_tls.sh"
}
}
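For reference, once an apply has run you can check exactly what Terraform recorded for this resource; a quick inspection sketch, assuming the resource address null_resource.ssl-tls from the configuration above:

# Shows every attribute stored in state for this resource
terraform state show null_resource.ssl-tls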

Related

How to create a Hashicorp Vault user using Terraform

I am trying to create a Vault user in Terraform but can't seem to find the appropriate command to do so. I've searched the Terraform Registry and also performed some online searches but all to no avail.
All I'm looking to do is create a user, using the Terraform equivalent of the Vault CLI command below:
vault write auth/userpass/users/bob password="passworld123" policies="default"
Any suggestions?
@hitman126 I guess you can make use of the 'vault' provider and the 'vault_auth_backend' resource block. Your code should look something like the below:
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.5.0"
    }
  }
}

provider "vault" {
}

resource "vault_auth_backend" "example" {
  type = "userpass"
}

resource "vault_generic_secret" "developer_sample_data" {
  path = "secret/foo"

  data_json = <<EOT
{
  "username": "bob",
  "password": "passworld123"
}
EOT
}
In the above code block, path is the full logical path where we write the given data. To write data into the "generic" secret backend mounted in Vault by default, this should be prefixed with 'secret/'.
This might not be a full-fledged solution, but you can try something like this.
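If the goal is to create the userpass user itself (rather than storing a copy of the credentials in the KV backend), another option worth looking at is the vault_generic_endpoint resource from the same hashicorp/vault provider, which writes to an arbitrary Vault API path. A minimal sketch, reusing the vault_auth_backend.example resource and the example credentials from above; treat it as a starting point rather than a drop-in solution:

resource "vault_generic_endpoint" "bob" {
  # Make sure the userpass auth method exists before writing the user
  depends_on           = [vault_auth_backend.example]
  path                 = "auth/userpass/users/bob"
  ignore_absent_fields = true

  # Equivalent of:
  #   vault write auth/userpass/users/bob password="passworld123" policies="default"
  data_json = <<EOT
{
  "password": "passworld123",
  "policies": ["default"]
}
EOT
}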
Solution 2:
If you have Vault installed on the machine and you would like to achieve the above use case using the vault command alone (i.e., you don't want to use the terraform-vault provider), then you can try something like the below.
Create a small sh script containing the above vault command (vault-write.sh):
touch vault-write.sh
The content of the script can be similar to the below:
#!/bin/sh
vault write auth/userpass/users/bob password="passworld123" policies="default"
chmod +x vault-write.sh
Create a .tf file with a null_resource and a local-exec provisioner, and invoke this sh script.
touch vault.tf
The contents of the vault.tf file can be similar to the below:
terraform {
  required_version = "~> 1.1.1"
}

resource "null_resource" "vault_write" {
  provisioner "local-exec" {
    command = "/bin/sh vault-write.sh"
  }
}

How to Output Terraform Module Variable Names

I'm fairly new to Terraform and I have a question.
I have a bunch of terraform modules calling a main module to create a number of s3 buckets.
module "s3_1" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["one"]
}
module "s3_2" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["two"]
}
module "s3_3" {
source = "../modules/s3-arc"
ENVIRONMENT = var.ENV
bucket_name = var.s3_dep["three"]
}
It so happens that the policies are being created separately, and so there appears to be a race condition resulting in a NoSuchBucket: The specified bucket does not exist error, because the policies are being created first.
I feel like, in order to resolve this, I need to add an explicit dependency using depends_on, but I can't seem to figure out how to output the bucket names being created by modules s3_1, s3_2, and s3_3 so that I can add the depends_on under the policy section.
How do I output these bucket names please?
Inside your module you can declare an output value which returns some attribute of the S3 bucket, and optionally any other objects that contribute to the functionality of the bucket.
For example:
terraform {
  required_providers {
    aws = {
      # I'm using resource types introduced in v4
      # below, so we'll need at least that version.
      source  = "hashicorp/aws"
      version = ">= 4.0.0"
    }
  }
}

variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "example" {
  bucket = var.bucket_name
  # ...
}

resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.bucket
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.bucket

  versioning_configuration {
    status = "Enabled"
  }
}

output "bucket" {
  value = {
    name = aws_s3_bucket.example.bucket
    arn  = aws_s3_bucket.example.arn
  }

  # The bucket won't be "ready to use" until
  # these other resources are created, so
  # these are "hidden dependencies" as described
  # in the documentation for depends_on
  depends_on = [
    aws_s3_bucket_acl.example,
    aws_s3_bucket_versioning.example,
  ]
}
Using depends_on with an output value means that any object which refers to this output value in the calling module indirectly depends on those other resources too, and so all three of the S3-related resources must be created completely before anything in the caller can make use of the S3 bucket.
When you separately declare a policy for one of these buckets in the root module, you'd refer to the bucket name or ARN via the bucket output value, which therefore completes the necessary dependency edges to get a correct ordering:
module "s3_1" {
source = "../modules/s3-arc"
bucket_name = var.s3_dep["one"]
}
resource "aws_s3_bucket_policy" "example" {
# This reference to module.s3_1.bucket.name establishes
# the needed dependency relationships.
bucket = module.s3_1.bucket.name
policy = jsonencode({
# ...
})
}
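As a side note, if you would rather make the ordering explicit instead of relying only on the reference, Terraform 0.13 and later also accept depends_on with a whole module as the target; a sketch of the same policy resource written that way:

resource "aws_s3_bucket_policy" "example" {
  bucket = module.s3_1.bucket.name

  policy = jsonencode({
    # ...
  })

  # Redundant with the reference above, but states the dependency explicitly.
  depends_on = [module.s3_1]
}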

Access denied for s3 bucket for terraform backend

My terraform code is as below:
# PROVIDERS
provider "aws" {
  profile = var.aws_profile
  region  = var.region
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0.4"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "terraform-backend-20200102"
    key    = "test.tfstate"
  }
}

# DATA
data "aws_availability_zones" "available" {}

data "template_file" "public_cidrsubnet" {
  count = var.subnet_count

  template = "$${cidrsubnet(vpc_cidr,8,current_count)}"

  vars = {
    vpc_cidr      = var.network_address_space
    current_count = count.index
  }
}

# RESOURCES
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  name    = var.name
  version = "2.62.0"

  cidr            = var.network_address_space
  azs             = slice(data.aws_availability_zones.available.names, 0, var.subnet_count)
  public_subnets  = []
  private_subnets = data.template_file.public_cidrsubnet[*].rendered

  tags = local.common_tags
}
However, when I run terraform init, it gives me an error.
$ terraform.exe init -reconfigure
Initializing modules...
Initializing the backend...
region
AWS region of the S3 Bucket and DynamoDB Table (if used).
Enter a value: ap-southeast-2
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: AccessDenied: Access Denied
status code: 403, request id: A2EB50094A12E22F, host id: JFwXo11eiAW3N0JL1Yoi/i1k03aqzSIwj34NOgMT/ScgmBEC/nncjsK/GKik0SFIT6Ym8Mr/j6U=
$ aws s3 ls --profile=tcp-aws-sandbox-31
2020-11-02 23:05:48 terraform-backend-20200102
Do note that I can list my bucket with the aws s3 ls command, so why does Terraform have any issue?!
P.S.: I tried switching to the local state file, hence I commented out the backend block, but it still gives me an error. Please assist.
# terraform {
#   backend "s3" {
#     bucket = "terraform-backend-20200102"
#     key    = "test.tfstate"
#   }
# }
Ran aws configure and then it worked.
For some reason it was picking up the wrong account even though I had set the correct AWS profile in the ~/.aws/credentials file.
The way I realized it was using the wrong account was by running terraform apply after export TF_LOG=DEBUG.
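If the wrong-account problem comes back, one thing worth trying is telling the S3 backend which profile to use explicitly, since the backend configuration does not pick up the provider block's profile argument. A sketch, assuming the tcp-aws-sandbox-31 profile and ap-southeast-2 region mentioned in the question:

terraform {
  backend "s3" {
    bucket  = "terraform-backend-20200102"
    key     = "test.tfstate"
    region  = "ap-southeast-2"
    # Assumed profile name, taken from the aws s3 ls command above
    profile = "tcp-aws-sandbox-31"
  }
}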

Terraform wants to replace existing resources

TF Version: 0.12.28 and 0.13.3
My Goal:
Have an AWS S3 bucket for PROD env to store tf state
Have an AWS S3 bucket for NONPROD env to store tf state
Following this tutorial I successfully accomplished the following:
an AWS S3 bucket and a DynamoDB table from a folder called TEST:
provider "aws" {
region = var.aws_region_id
}
resource "aws_s3_bucket" "terraform_state" {
bucket = var.aws_bucket_name
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
resource "aws_dynamodb_table" "terraform_locks" {
name = var.aws_bucket_name
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
terraform {
backend "s3" {
bucket = "test-myproject-poc"
key = "global/s3/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "test-myproject-poc"
encrypt = true
}
}
Up to this point everything was successfully deployed.
However, when I wanted to have another S3 bucket/DynamoDB table for the PROD env, the following happened:
I went to another folder called PRODUCTION and ran terraform init (initialization was OK).
I copied the same module I have in TEST to this folder and renamed TEST to PROD to match the env.
terraform plan now says it wants to replace my actual deployment to create the new one:
➜ S3 tf plan
Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
aws_dynamodb_table.terraform_locks: Refreshing state... [id=test-myproject-poc]
aws_s3_bucket.terraform_state: Refreshing state... [id=test-myproject-poc]
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
Terraform will perform the following actions:
# aws_dynamodb_table.terraform_locks must be replaced
-/+ resource "aws_dynamodb_table" "terraform_locks" {
~ arn = "arn:aws:dynamodb:us-east-1:1234567890:table/test-myproject-poc" -> (known after apply)
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
~ id = "test-myproject-poc" -> (known after apply)
~ name = "test-myproject-poc" -> "prod-myproject-poc" # forces replacement
The state is actually on global/s3/terraform.tfstate
I'm not using workspaces
What is the proper way to create S3_PROD without deleting the first one?
I solved the issue! Just found out that I needed to remove this block:
terraform {
  backend "s3" {
    bucket         = "test-myproject-poc"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "test-myproject-poc"
    encrypt        = true
  }
}
Then I dropped the .terraform folder and ran init again.
After doing these steps, plan ran as expected (it didn't try to remove my deployment).
What I think, though I'm not sure, is that it was trying to use the same state file as the previous deployment. So I just let Terraform create the new bucket and DynamoDB table, and then stored the new state of the new folder (PROD) in S3.
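For completeness, a sketch of what the backend block in the PRODUCTION folder might end up looking like once the new bucket and table exist; the prod-myproject-poc names are assumptions based on the plan output above:

terraform {
  backend "s3" {
    bucket         = "prod-myproject-poc"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "prod-myproject-poc"
    encrypt        = true
  }
}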
HTH

Cannot get file provisioner working with Terraform on DigitalOcean

I am trying to use Terraform to create a DigitalOcean node on which Consul is installed.
I'm using the following .tf file, but it hangs and does not copy the Consul .zip file onto the droplet.
I got the following error message after a couple of minutes:
ssh: handshake failed: ssh: unable to authenticate, attempted methods
[none publickey], no supported methods remain
The droplets are created correctly though. I can log in on the command line with the key I specified (thus not specifying a password). I'm guessing the connection part might be faulty, but I'm not sure what I'm missing.
Any idea?
variable "do_token" {}
# Configure the DigitalOcean Provider
provider "digitalocean" {
token = "${var.do_token}"
}
# Create nodes
resource "digitalocean_droplet" "consul" {
count = "1"
image = "ubuntu-14-04-x64"
name = "consul-${count.index+1}"
region = "lon1"
size = "1gb"
ssh_keys = ["7b:51:d3:e3:ae:6e:c6:e2:61:2d:40:56:17:54:fc:e3"]
connection {
type = "ssh"
user = "root"
agent = true
}
provisioner "file" {
source = "consul_0.7.1_linux_amd64.zip"
destination = "/tmp/consul_0.7.1_linux_amd64.zip"
}
provisioner "remote-exec" {
inline = [
"sudo unzip -d /usr/local/bin /tmp/consul_0.7.1_linux_amd64.zip"
]
}
}
Terraform requires that you specify the private SSH key to use for the connection with private_key. You can create a new variable containing the path to your private key for use with Terraform's file interpolation function:
connection {
  type        = "ssh"
  user        = "root"
  agent       = true
  private_key = "${file("${var.private_key_path}")}"
}
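The variable referenced above also needs to be declared and supplied; a minimal sketch, where private_key_path is just the name assumed in the connection block:

variable "private_key_path" {
  description = "Path to the private SSH key matching the public key added to the droplet"
}

It can then be passed on the command line, e.g. terraform apply -var "private_key_path=/home/you/.ssh/id_rsa".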
You face this issue because you have an SSH key protected by a passphrase. To solve this issue you should generate a key without a passphrase.
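For example, one way to generate a key pair with an empty passphrase (the file name do_terraform is just an example) and wire it in:

# -N "" sets an empty passphrase; -f picks the output file name
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/do_terraform
# Add ~/.ssh/do_terraform.pub to your DigitalOcean account, then reference
# ~/.ssh/do_terraform via private_key in the connection block.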