How to change the path for local backend state when using workspaces in Terraform?

What is the expected configuration for using terraform workspaces with the local backend?
The local backend supports workspaces, but it does not appear that you have much control over where the state is actually stored.
When you are not using workspaces, you can supply a path parameter to the local backend to control where the state files are stored.
# Either in main.tf
terraform {
  backend "local" {
    path = "/path/to/terraform.tfstate"
  }
}
# Or as a flag
terraform init -backend-config="path=/path/to/terraform.tfstate"
I expected analogous functionality when using workspaces: you would supply a directory as path, and the workspaces would be created under that directory.
For example:
terraform workspace new first
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
terraform workspace new second
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
would result in the state
/path/to/terraform.tfstate.d/first/terraform.tfstate
/path/to/terraform.tfstate.d/second/terraform.tfstate
This does not appear to be the case, however. It looks like the local backend ignores the path parameter for workspaces and puts the workspace state in the working directory.
Am I missing something or are you unable to control local backend workspace state?

There is an undocumented argument for the local backend, workspace_dir, that solves this issue.
The documentation task is tracked here:
terraform {
  backend "local" {
    workspace_dir = "/path/to/terraform.tfstate.d"
  }
}
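With that argument set, the workspace commands behave the way the question expected (a quick sketch using the question's example paths):
terraform init
terraform workspace new first
terraform apply
# state is written to /path/to/terraform.tfstate.d/first/terraform.tfstate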

How to use terraform to ignore previous execution (state) [duplicate]

I'm a bit of a newbie with Terraform and still working my way through the documentation. I have not yet been able to find a way to accommodate the setup I need for a specific solution, and I'm hoping that some kind soul may be able to give me a push in the right direction.
I'm trying to manage a single set of parameterised templates which deploy everything needed to support a new application we are working on in GCP. What I am trying to achieve is to deploy those templates to three different environments, each environment being in its own distinct GCP project.
The plan is, as per recommendations, to run terraform and pass in:
a) the specific .tfvars file depending on the environment/project being deployed to (dev/test/prod), and
b) the -chdir parameter to tell Terraform to pick up all the templates from the 'infra-common' folder.
The tricky part is that we want each environment (GCP project) to host its own state file in GCS storage.
I had been looking at workspaces, but it appears that workspaces just create state subfolders on a single backend.
Question: Can this be done or is there a better way to do it?
Thanks!
You can use --backend-config for this. Here's how you can achieve the desired behavior:
Create a .config file for each environment (dev.config, test.config, prod.config), each containing the name of the GCS bucket (which must already exist) for the respective environment
Specify the common backend in a single remote_state.tf file
Here's how it would look:
config/dev.config:
bucket = "tf-state-dev"
config/test.config:
bucket = "tf-state-test"
config/prod.config:
bucket = "tf-state-prod"
remote_state.tf:
terraform {
  backend "gcs" {
    prefix = "terraform/state"
  }
}
Then you can run the init. For example, for dev this would look like:
$ terraform init --backend-config=config/dev.config
Then you can create a workspace for the environment:
$ terraform workspace new dev
With this approach, you can use a single set of templates (you can in fact configure dynamic variables based on the current workspace).
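For example, here is a minimal sketch of workspace-driven configuration; the variable names are hypothetical:
locals {
  instance_counts = {
    dev  = 1
    test = 2
    prod = 3
  }
  # terraform.workspace resolves to the currently selected workspace
  instance_count = local.instance_counts[terraform.workspace]
}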
What you could do (we have a project with a similar setup on a different cloud provider) is:
use infra-common as a module
instead of working with .tfvars files per environment, use a separate root module per environment which invokes infra-common as a sub-module.
Your folder structure could look like:
project
|-- dev
|   `-- main.tf
|-- modules
|   `-- infra-common
|       |-- main.tf
|       `-- variables.tf
|-- test
|   `-- main.tf
`-- prod
    `-- main.tf
dev/main.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-dev"
    prefix = "terraform/state"
  }
}

module "stage" {
  source   = "../modules/infra-common"
  env      = "dev"
  some_var = "value"
}
prod/main.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}

module "stage" {
  source   = "../modules/infra-common"
  env      = "prod"
  some_var = "value"
}
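Each environment is then its own root module, so you run Terraform from the corresponding folder, for example:
$ cd dev
$ terraform init    # picks up the tf-state-dev backend from dev/main.tf
$ terraform apply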

How to run tflint rules all at once for a specific provider?

I installed the tflint plugin, but when I run tflint on the root module I get nothing. When I specify a rule with --enable-rule, I get some warnings. How can I run the whole ruleset for Azure or AWS at once?
For Azure, you can install the plugin by adding a config to .tflint.hcl and running tflint --init:
plugin "azurerm" {
enabled = true
version = "0.14.0"
source = "github.com/terraform-linters/tflint-ruleset-azurerm"
}
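Once the plugin is declared, a plain run applies the whole ruleset, so no per-rule --enable-rule flags are needed:
$ tflint --init   # downloads the azurerm ruleset plugin
$ tflint          # runs all enabled azurerm rules at once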
If you want to add a new rule to this ruleset, you can use the generator:
$ go run ./rules/generator
You can find the list of available rules for tflint-ruleset-azurerm here.

Debugging terragrunt dependency block resulting in s3 permission error

I'm trying to use a dependency block for the first time, but I get AWS S3 list-object permission-denied errors and am having trouble debugging the issue.
The setup is as follows, using an S3 backend for storing Terraform state:
A git repo containing the terraform modules:
archive
s3_inventory
Instantiations of the above:
prod/eu/archive/terragrunt.hcl:
terraform {
  source = "git::ssh://git@my_server//archive?ref=v1.0.0"
}
include {
  path = find_in_parent_folders()
}
dependency "s3-inventory" {
  config_path = "../s3-inventory/"
}
prod/eu/s3_inventory/terragrunt.hcl:
terraform {
  source = "git::ssh://git@my_server//s3_inventory?ref=v1.0.0"
}
include {
  path = find_in_parent_folders()
}
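(For context: a dependency block like the one above is typically consumed through its outputs; the input and output names in this sketch are hypothetical:)
inputs = {
  inventory_bucket = dependency.s3-inventory.outputs.bucket_name
}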
Running terragrunt apply in prod/eu/archive works just fine when I remove the dependency block from the hcl file; it fails when I add the dependency block back in.
Running terragrunt output -json in prod/eu/s3-inventory also works just fine.
With debugging flags on I still don't seem to get enough info as to why it's failing.
terragrunt apply --terragrunt-log-level debug --terragrunt-debug in prod/eu/archive results in something like this:
...<omitted>...
DEBU[0000] Detected module /Users/tim.kersten/prod/eu/s3-inventory/terragrunt.hcl is already init-ed. Retrieving outputs directly from working directory. prefix=[/Users/tim.kersten/prod/eu/s3-inventory]
DEBU[0000] Running command: terraform output -json prefix=[/Users/tim.kersten/prod/eu/s3-inventory]
Failed to load state: AccessDenied: Access Denied
status code: 403, request id: ABC123DEF456GHI, host id: WW91J3JlIHRlcnJpYmx5IG5vc2UgZm9yIHRyeWluZyB0byBsb29rIGF0IG15IGhvc3QK
ERRO[0003] exit status 1
Something is clearly different, but the debugging options I set on terragrunt don't seem to give me enough info to understand what's different.
Anyone understand what's going on here?
Edit:
terragrunt version: 0.28.6

VueJS place multiple .env in folder

Hello, I'm using VueJS 2 and I have multiple .env files in my project.
My app has a .env file for each company to select that company's configuration (skin color / files...)
Currently I have all my .env files in the root folder:
.env.company1-dev
.env.company1-staging
.env.company1-prod
.env.company2-dev
.env.company2-staging
.env.company2-prod
.env.company3-dev
.env.company3-staging
.env.company3-prod
So by the time I get to 20 companies my root folder will become confusing. Is it possible to create a folder where I can place all my .env files?
The idea:
/environments/company1/
.env.dev
.env.staging
.env.prod
/environments/company2/
.env.dev
.env.staging
.env.prod
/environments/company3/
.env.dev
.env.staging
.env.prod
In your vue.config.js file you can add:
const dotenv = require("dotenv");
const path = require("path");

let envfile = ".env";
if (process.env.NODE_ENV) {
  envfile += "." + process.env.NODE_ENV;
}

const result = dotenv.config({
  path: path.resolve(`environments/${process.env.VUE_APP_COMPANY}`, envfile)
});

// optional: check for errors
if (result.error) {
  throw result.error;
}
Then, before running, you can set VUE_APP_COMPANY to a company name and run your app.
Note: it's important to put this code in vue.config.js and not in main.js, because dotenv uses fs to read files.
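For example, assuming a Unix shell and the folder layout above, a staging build for company1 would be:
VUE_APP_COMPANY=company1 NODE_ENV=staging npm run build
# vue.config.js will then load environments/company1/.env.staging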
References
https://github.com/motdotla/dotenv#path
https://github.com/vuejs/vue-cli/issues/787
https://cli.vuejs.org/guide/mode-and-env.html#environment-variables
We have also used the accepted answer's approach in the past, but I found a better solution for handling different environments. Using the npm package dotenv-flow allows not only the use of different environments but also has some more benefits, like:
local overwriting of variables by using .env.local or .env.staging.local and so on
definition of defaults using .env.defaults
In combination we have set up our projects with this configuration:
.env
.env.defaults
.env.development
.env.production
.env.staging
.env.test
And the only thing you have to do in your vue.config.js, nuxt.config.js, or other entry point is:
require('dotenv-flow').config()
Reference: https://www.npmjs.com/package/dotenv-flow
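As a sketch of the layering (the file contents and variable name here are made up): dotenv-flow loads .env.defaults first, then lets the NODE_ENV-specific file and any .local file override it:
# .env.defaults
VUE_APP_API_URL=https://api.example.com
# .env.development (wins over defaults when NODE_ENV=development)
VUE_APP_API_URL=http://localhost:3000
# .env.development.local (wins over both; typically git-ignored)
VUE_APP_API_URL=http://localhost:8080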
The PowerShell solution
I was handling exactly the same problem. The accepted solution is kind of OK, but it did not solve all the differences between companies. Also, if you are using npm, your scripts can look nasty. So if you have PowerShell, here is what I suggest: get rid of the .env files :)
You can keep the structure from the question; just convert the env files to ps1.
/build/company1/
build-dev.ps1
build-stage.ps1
build-prod.ps1
/build/company2/
build-dev.ps1
build-stage.ps1
build-prod.ps1
Inside each of those, you can fully customize all env variables, run the build process, and apply some advanced post-build logic (like careful auto-deploy, publishing, merging with an API project, ...).
So for example company1\build-stage.ps1 can look like this:
# You can pass some arguments to the script
param (
    [string]$appName = "company1"
)

# Set environment variables for the vue pipeline
$env:VUE_APP_ENVIRONMENT = "company1-stage";
$env:NODE_ENV = "development";
$env:VUE_APP_NAME = $appName;
$env:VUE_APP_API_BASE_URL = "https://company1.stage.mycompany.com"

# Run the vue pipeline build
vue-cli-service build;

# Any additional logic e.g.
# Copy-Item -Path "./dist" -Destination "my-server/my-app" -Recurse
The last part is easy: just call it (manually or from an integration service like TeamCity). Or you can put it inside package.json.
...
"scripts": {
"build-company1-stage": "#powershell -Command ./build/company1/build-stage.ps1 -appName Company-One",
}
...
Then you can run the whole build process just by calling
npm run build-company1-stage
Similarly, you can create localhost, dev, prod, test, and any other environments. Let the JavaScript handle building the app itself; for other advanced work, use PowerShell. I think this solution gives you much more flexibility in the configuration and build process.
P.S.
I know that this way I'm merging configuration and the build process, but you can always extract the configuration out of the file if it gets bigger.

Terraform Modules - Variables and .tfvars

I'm having an issue with Terraform modules and variables, and I am at a loss as to what I am doing wrong.
I have a folder structure that looks like this;
Accounts
|_____Account1
|       Main.tf
|       terraform.tfvars
|_____Account2
|_____Account3
|_____Modules
        |________VPC
                    Main.tf
                    Variables.tf
In my Modules folder I have my main.tf and variables.tf; under the accounts I also have my main.tf (calling the VPC module) and terraform.tfvars.
How can I use my terraform.tfvars to pass secure credentials to my main.tf, within my accounts folder?
Variables in my Variables.tf within the VPC module look like so:
variable "aws_access_key" {
  default = ""
}
Within my account folders, in the Main.tf, I am trying to call the tfvars this way:
variable "aws_access_key" {}
module "VPC" {
  source         = "/Accounts/Modules/VPC"
  aws_access_key = "${var.aws_access_key}"
}
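For reference, the terraform.tfvars sitting next to that Main.tf only needs to assign the variable declared at the root; Terraform auto-loads it from the working directory (placeholder value shown):
# Accounts/Account1/terraform.tfvars
aws_access_key = "YOUR_ACCESS_KEY"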
I can run terraform init without any issues, but when I try terraform plan it just comes up in red and fails to run. This does work if I enter the variables into my main.tf within the account folder manually, but I want to strip anything sensitive out into a .tfvars file that will end up elsewhere.
I hope I am doing something obviously wrong! I have also tried the -var-file=terraform.tfvars switch from within account1's folder.
Any ideas would be great, as everything I read tends to imply this should be working.
Thanks
Stephen
Just ran this from a fresh folder, taking only part of my config, and it is running fine, so it must be something in the file. Leaving this here in case anyone else comes across this.