I'm a bit of a newbie with Terraform and still working my way through the documentation. I have not yet been able to find a way to accommodate the setup I need for a specific solution, and I'm hoping that some kind soul may be able to give me a push in the right direction.
I'm trying to manage a single set of parameterised templates which deploy everything needed to support a new application we are working on in GCP. What I am trying to achieve is being able to deploy those templates to three different environments, each environment living in its own distinct GCP project.
The plan is, as per recommendations, to run Terraform and pass in:
a) the specific .tfvars file for the environment/project being deployed to (dev/test/prod), and
b) the -chdir parameter to tell Terraform to pick up all the templates from the 'infra-common' folder.
The tricky part is that we want each environment (GCP project) to host its own state file in GCS.
I had been looking at workspaces, but it appears that workspaces just create state subfolders on a single backend.
Question: Can this be done or is there a better way to do it?
Thanks!
You can use --backend-config for this. Here's how you can achieve the desired behavior:
Create a .config file for each environment (dev.config, test.config, prod.config), each of which contains the name of the GCS bucket (which must already exist) for the respective environment
Specify the common backend in a single remote_state.tf file
Here's how it would look:
config/dev.config:
bucket = "tf-state-dev"
config/test.config:
bucket = "tf-state-test"
config/prod.config:
bucket = "tf-state-prod"
remote_state.tf:
terraform {
  backend "gcs" {
    prefix = "terraform/state"
  }
}
Then you can run the init. For example, for dev this would look like:
$ terraform init --backend-config=config/dev.config
Then you can create a workspace for the environment:
$ terraform workspace new dev
With this approach, you can use a single set of templates (you can in fact configure dynamic variables based on the current workspace).
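For example, a minimal sketch of workspace-based values (the variable name and machine types here are hypothetical, not part of the original setup):

locals {
  # Hypothetical per-environment values keyed by the current workspace
  machine_type_by_env = {
    dev  = "e2-small"
    test = "e2-medium"
    prod = "e2-standard-4"
  }

  machine_type = local.machine_type_by_env[terraform.workspace]
}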
What you could do (we have a project with a similar setup with a different cloud provider), is:
use infra-common as a module
instead of working with .tfvars files per environment, use a separate root module per environment which invokes infra-common as a sub-module.
Your folder structure could look like:
project
|-- dev
| `-- main.tf
|-- modules
| `-- infra-common
| |-- main.tf
| `-- variables.tf
|-- test
| `-- main.tf
`-- prod
`-- main.tf
dev/main.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-dev"
    prefix = "terraform/state"
  }
}

module "stage" {
  source = "../modules/infra-common"

  env      = "dev"
  some_var = "value"
}
prod/main.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}

module "stage" {
  source = "../modules/infra-common"

  env      = "prod"
  some_var = "value"
}
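With this layout you then run Terraform from inside the environment folder you want to deploy. For example, for dev (the plan file name is just an example):

$ cd dev
$ terraform init
$ terraform plan -out=dev.tfplan
$ terraform apply dev.tfplan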
Terraform v0.14.8
I get this problem when I try to run terraform init: the provider registry.terraform.io/hashicorp/aci is not found.
I want to use my provider: registry.terraform.io/ciscodevnet/aci
$ terraform providers
Providers required by configuration:
.
├── provider[registry.terraform.io/ciscodevnet/aci] 0.5.4
└── module.bride_domain_2001
└── provider[registry.terraform.io/hashicorp/aci]
My question: how do I force registry.terraform.io/ciscodevnet/aci on the module?
How I call my module:
module "bride_domain_2001" {
source = "./modules/bride_domain_2001"
aci_vrf_vrf_training_id= aci_vrf.vrf_training.id
aci_tenant_tenant_training_id= aci_tenant.tenant_training.id
}
Expected Behavior
The in-house provider should be inherited from the parent and used
Actual Behavior
Terraform doesn't inherit the provider from the parent module
Thanks
It seems that your child module bride_domain_2001 is missing a required_providers entry to specify that it depends on ciscodevnet/aci, which is causing Terraform's backward compatibility behavior to assume you meant hashicorp/aci.
To fix it, add a required_providers entry to your child module:
terraform {
  required_providers {
    aci = {
      source = "ciscodevnet/aci"
      # (possibly also a >= version constraint)
    }
  }
}
Once you add this, Terraform will see that the root module and the child module both depend on this same provider ciscodevnet/aci, and so your configuration for the provider should then be inherited by resources belonging to that provider in the child module.
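If you ever need to be explicit about which provider configuration the child module receives (for example once you start using provider aliases), you can also pass it via the providers meta-argument. A sketch based on your module call (the explicit mapping is optional when the names match, since default configurations are inherited automatically):

module "bride_domain_2001" {
  source = "./modules/bride_domain_2001"

  # Map the parent's default ciscodevnet/aci configuration onto the
  # child module's "aci" provider requirement.
  providers = {
    aci = aci
  }

  aci_vrf_vrf_training_id       = aci_vrf.vrf_training.id
  aci_tenant_tenant_training_id = aci_tenant.tenant_training.id
}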
Hello, I'm using Vue.js 2 and I have multiple .env files in my project.
My app has a .env for each company to select that company's configuration (skin color / files...).
Currently I have all my .env files in the root folder:
.env.company1-dev
.env.company1-staging
.env.company1-prod
.env.company2-dev
.env.company2-staging
.env.company2-prod
.env.company3-dev
.env.company3-staging
.env.company3-prod
So once I get to 20 companies my root folder will become confusing. Is it possible to create a folder where I can place all my .env files?
The idea :
/environments/company1/
.env.dev
.env.staging
.env.prod
/environments/company2/
.env.dev
.env.staging
.env.prod
/environments/company3/
.env.dev
.env.staging
.env.prod
In your vue.config.js file you can add:
const dotenv = require("dotenv");
const path = require("path");
let envfile = ".env";
if (process.env.NODE_ENV) {
envfile += "." + process.env.NODE_ENV;
}
const result = dotenv.config({
path: path.resolve(`environments/${process.env.VUE_APP_COMPANY}`, envfile)
});
// optional: check for errors
if (result.error) {
throw result.error;
}
Then, before running, you can set VUE_APP_COMPANY to a company name and run your app.
Note: it's important to put this code in vue.config.js and not in main.js, because dotenv uses fs to read files.
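For example (a sketch, assuming a company named company1, the dev configuration, and the default Vue CLI serve script; on Windows you could use cross-env or set the variables separately):

VUE_APP_COMPANY=company1 NODE_ENV=dev npm run serve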
References
https://github.com/motdotla/dotenv#path
https://github.com/vuejs/vue-cli/issues/787
https://cli.vuejs.org/guide/mode-and-env.html#environment-variables
We also used the accepted answer's approach in the past, but I found a better solution for handling different environments. The npm package dotenv-flow not only allows different environments but has some more benefits, like:
local overwriting of variables by using .env.local or .env.staging.local and so on
definition of defaults using .env.defaults
In combination we have set up our projects with this configuration:
.env
.env.defaults
.env.development
.env.production
.env.staging
.env.test
And the only thing you have to do in your vue.config.js, nuxt.config.js or other entry points is
require('dotenv-flow').config()
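For example (a sketch, assuming the file set above and the default Vue CLI serve script):

NODE_ENV=staging npm run serve
# dotenv-flow then loads .env.defaults, .env and .env.staging
# (plus .env.staging.local if present), with the more specific files taking precedence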
Reference: https://www.npmjs.com/package/dotenv-flow
The PowerShell solution
I was handling exactly the same problem. The accepted solution is kind of OK, but it did not solve all the differences between companies. Also, if you are using npm, your scripts can look nasty. So if you have PowerShell, here is what I suggest: get rid of the .env files :)
You can keep your structure like you want in the question. Just convert the env files to ps1.
/build/company1/
build-dev.ps1
build-stage.ps1
build-prod.ps1
/build/company2/
build-dev.ps1
build-stage.ps1
build-prod.ps1
Inside each of those, you can fully customize all env variables, run the build process and apply some advanced post-build logic (like careful auto-deploy, publishing, merging with an API project, ...).
So for example company1\build-stage.ps1 can look like this:
# You can pass some arguments to the script
param (
    [string]$appName = "company1"
)
# Set environment variables for vue pipeline
$env:VUE_APP_ENVIRONMENT = "company1-stage";
$env:NODE_ENV="development";
$env:VUE_APP_NAME=$appName;
$env:VUE_APP_API_BASE_URL="https://company1.stage.mycompany.com"
# Run the vue pipeline build
vue-cli-service build;
# Any additional logic e.g.
# Copy-Item -Path "./dist" -Destination "my-server/my-app" -Recurse
The last part is easy: just call it (manually or from an integration service like TeamCity). Or you can put it inside package.json.
...
"scripts": {
"build-company1-stage": "#powershell -Command ./build/company1/build-stage.ps1 -appName Company-One",
}
...
Then you can run the whole build process just by calling
npm run build-company1-stage
Similarly, you can create localhost, dev, prod, test and any other environments. Let the JavaScript handle building the app itself; for other advanced work, use PowerShell. I think this solution gives you much more flexibility for configuration and the build process.
P.S.
I know that this way I'm merging configuration and build process, but you can always extract the configuration outside the file if it gets bigger.
What is the expected configuration for using terraform workspaces with the local backend?
The local backend supports workspaces, but it does not appear that you have much control over where the actual state is stored.
When you are not using workspaces you can supply a path parameter to the local backend to control where the state files are stored.
# Either in main.tf
terraform {
  backend "local" {
    path = "/path/to/terraform.tfstate"
  }
}
# Or as a flag
terraform init -backend-config="path=/path/to/terraform.tfstate"
I expected analogous functionality when using workspaces, in that you would supply a directory for path and the workspaces would get created under that directory.
For example:
terraform workspace new first
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
terraform workspace new second
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
would result in the state
/path/to/terraform.tfstate.d/first/terraform.tfstate
/path/to/terraform.tfstate.d/second/terraform.tfstate
This does not appear to be the case however. It looks like the local backend ignores the path parameter and puts the workspace configuration in the working directory.
Am I missing something or are you unable to control local backend workspace state?
There is an undocumented option for the local backend, workspace_dir, that solves this issue.
The documentation task is tracked here
terraform {
  backend "local" {
    workspace_dir = "/path/to/terraform.tfstate.d"
  }
}
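A quick sketch of what this looks like in practice (paths assumed from the configuration above):

$ terraform init
$ terraform workspace new first
$ terraform workspace new second
# state now ends up in:
#   /path/to/terraform.tfstate.d/first/terraform.tfstate
#   /path/to/terraform.tfstate.d/second/terraform.tfstate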
I'm having an issue with Terraform modules and variables and I am at a loss as to what I am doing wrong.
I have a folder structure that looks like this;
Accounts
|_____Account1
| Main.tf
| terraform.tfvars
|_____Account2
|_____Account3
|_____Modules
|________VPC
|            Main.tf
|            Variables.tf
In my Modules folder I have my main.tf and variables.tf; under the account folders I also have my main.tf (calling the VPC module) and terraform.tfvars.
How can I use my terraform.tfvars to pass secure credentials to my main.tf within my accounts folder?
The variables in my Variables.tf within the VPC module look like so:
variable "aws_access_key" {
default = ""
}
Within my account folders, in the Main.tf, I am trying to call the tfvars this way:
variable "aws_access_key" {}
module "VPC" {
source = "/Accounts/Modules/VPC"
aws_access_key = "${var.aws_access_key}"
}
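For reference, the terraform.tfvars alongside it just contains the value itself (placeholder shown, not my real key):

aws_access_key = "AKIAXXXXXXXXXXXXXXXX"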
I can run terraform init without any issues, but when I try to run terraform plan it just comes up in red and fails to run. This does work if I enter the variables into my main.tf within the account folder manually, but I want to strip anything sensitive out into a .tfvars file that will end up elsewhere.
I hope I am doing something obviously wrong! I have also tried the -var-file=terraform.tfvars switch from within Account1's folder.
Any ideas would be great, as everything I read tends to imply this should be working.
Thanks
Stephen
Just ran this from a fresh folder, taking only part of my config, and it is running fine, so it must be something in the file. Leaving this here in case anyone else comes across this.
When I try to run a test using the Apache LDAP API, I get the following error. I set up a Maven project, and my pom.xml has many dependencies for the Apache Directory Server and API artifacts. My code (an example I copied and pasted just to get up and running, so that I can explore) all builds fine. However, when I run it (as a JUnit test), I get the following...
Can anyone help me? Maybe even just provide an example of where the Apache LDAP API is being used successfully, and maybe give me the pom.xml with the correct dependencies also? (The Apache LDAP API documentation seems to be out of date.)
I am currently starting the test using the embedded Apache Directory Server, using the following...
@RunWith(FrameworkRunner.class)
@CreateLdapServer(transports =
    {
        @CreateTransport(protocol = "LDAP"),
        @CreateTransport(protocol = "LDAPS") })
// disable changelog, for more info see DIRSERVER-1528
@CreateDS(enableChangeLog = false, name = "PasswordPolicyTest")
public class PasswordPolicyIT extends AbstractLdapTestUnit
{ ... etc }
So an alternative approach would be to tailor some of the tests to just connect to a local Directory Server instance that I have running on my machine. I assume that this would stop the error messages that I am getting below. Again, if anyone could provide a code snippet there, it would be useful.
Many Thanks
> 2013-06-20 16:05:10 ERROR FrameworkRunner:287 - Problem locating LDIF
> file in schema repository Multiple copies of resource named
> 'schema/ou=schema/cn=apachemeta/ou=matchingrules/m-oid=1.3.6.1.4.1.18060.0.4.0.1.3.ldif'
> located on classpath at urls
> jar:file:/Users/rk/.m2/repository/org/apache/directory/api/api-ldap-client-all/1.0.0-M17/api-ldap-client-all-1.0.0-M17.jar!/schema/ou%3dschema/cn%3dapachemeta/ou%3dmatchingrules/m-oid%3d1.3.6.1.4.1.18060.0.4.0.1.3.ldif
> jar:file:/Users/rk/.m2/repository/org/apache/directory/shared/shared-ldap-schema-data/1.0.0-M7/shared-ldap-schema-data-1.0.0-M7.jar!/schema/ou%3dschema/cn%3dapachemeta/ou%3dmatchingrules/m-oid%3d1.3.6.1.4.1.18060.0.4.0.1.3.ldif
> jar:file:/Users/rk/.m2/repository/org/apache/directory/server/apacheds-all/2.0.0-M12/apacheds-all-2.0.0-M12.jar!/schema/ou%3dschema/cn%3dapachemeta/ou%3dmatchingrules/m-oid%3d1.3.6.1.4.1.18060.0.4.0.1.3.ldif
You need to exclude the shared-ldap-schema-data dependency from apacheds-all. Take a look at this comment
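A sketch of what that exclusion could look like in your pom.xml (coordinates and versions taken from the error output above):

<dependency>
    <groupId>org.apache.directory.server</groupId>
    <artifactId>apacheds-all</artifactId>
    <version>2.0.0-M12</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.directory.shared</groupId>
            <artifactId>shared-ldap-schema-data</artifactId>
        </exclusion>
    </exclusions>
</dependency>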