I'm having an issue with Terraform modules and variables and I am at a loss as to what I am doing wrong.
I have a folder structure that looks like this:
Accounts
|_____Account1
|       Main.tf
|       terraform.tfvars
|_____Account2
|_____Account3
|_____Modules
|________VPC
|           Main.tf
|           Variables.tf
In my Modules/VPC folder I have the module's main.tf and variables.tf; under each account I also have my main.tf (calling the VPC module) and a terraform.tfvars.
How can I use my terraform.tfvars to pass secure credentials to my main.tf within my accounts folder?
The variables in Variables.tf within the VPC module look like so:
variable "aws_access_key" {
default = ""
}
Within my account folders, in the Main.tf, I am trying to pull in the tfvars values this way:
variable "aws_access_key" {}
module "VPC" {
source = "/Accounts/Modules/VPC"
aws_access_key = "${var.aws_access_key}"
}
I can run terraform init without any issues, but when I try to run terraform plan it just comes up in red and fails to run. It does work if I enter the variables into my main.tf within the account folder manually, but I want to strip anything sensitive out into a .tfvars file that will end up elsewhere.
I hope I am doing something obviously wrong! I have also tried the -var-file=terraform.tfvars switch from within account1's folder.
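For reference, what I expect Terraform to pick up automatically is a terraform.tfvars sitting next to the account's Main.tf, along these lines (the value here is just a placeholder):
# Accounts/Account1/terraform.tfvars
aws_access_key = "AKIAxxxxxxxxxxxxxxxx"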
Any ideas would be great, as everything I read tends to imply this should be working.
Thanks
Stephen
Update: I just ran this from a fresh folder, taking only part of my config, and it runs fine, so it must be something in the original file. Posting this in case anyone else comes across it.
Related
I'm a bit of a newbie with Terraform and still working my way through the documentation. I have not yet been able to find a way to accommodate the setup I need for a specific solution, and I'm hoping that some kind soul may be able to give me a push in the right direction.
I'm trying to manage a single set of parameterised templates which deploy everything needed to support a new application we are working on in GCP. What I am trying to achieve is being able to deploy those templates to three different environments, each environment being in its own distinct GCP project.
The plan is, as per recommendations, to run terraform and:
a) pass in the specific .tfvars file depending on the environment/project being deployed to (dev/test/prod);
b) use the -chdir parameter to tell Terraform to pick up all the templates from the 'infra-common' folder (roughly as sketched below).
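Roughly, the invocation I had in mind would look something like this (the paths are purely illustrative):
terraform -chdir=infra-common plan -var-file=../env/dev.tfvars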
The tricky part is that we want each environment (GCP project) to host its own state file in GCS.
I had been looking at workspaces but it appears that workspaces will just create state subfolders on a single backend.
Question: Can this be done or is there a better way to do it?
Thanks!
You can use --backend-config for this. Here's how you can achieve the desired behavior:
Create a .config file for each environment (dev.config, test.config, prod.config), each containing the name of the GCS bucket (which must already exist) for the respective environment
Specify the common backend in a single remote_state.tf file
Here's how it would look:
config/dev.config:
bucket = "tf-state-dev"
config/test.config:
bucket = "tf-state-test"
config/prod.config:
bucket = "tf-state-prod"
remote_state.tf:
terraform {
  backend "gcs" {
    prefix = "terraform/state"
  }
}
Then you can run the init. For example, for dev this would look like:
$ terraform init --backend-config=config/dev.config
Then you can create a workspace for the environment:
$ terraform workspace new dev
With this approach, you can use a single set of templates (you can in fact configure dynamic variables based on the current workspace).
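For example, here is a rough sketch of what a workspace-driven value could look like (the names and values are made up for illustration):
locals {
  # terraform.workspace holds the name of the currently selected workspace (dev/test/prod)
  machine_types = {
    dev  = "e2-small"
    test = "e2-medium"
    prod = "e2-standard-4"
  }
  machine_type = local.machine_types[terraform.workspace]
}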
What you could do (we have a project with a similar setup, with a different cloud provider) is:
use infra-common as a module
instead of working with .tfvars files per environment, use a separate root module per environment which invokes infra-common as a sub-module.
Your folder structure could look like:
project
|-- dev
| `-- main.tf
|-- modules
| `-- infra-common
| |-- main.tf
| `-- variables.tf
|-- test
| `-- main.tf
`-- prod
`-- main.tf
dev/main.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-dev"
    prefix = "terraform/state"
  }
}

module "stage" {
  source   = "../modules/infra-common"
  env      = "dev"
  some_var = "value"
}
prod/main.tf
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}

module "stage" {
  source   = "../modules/infra-common"
  env      = "prod"
  some_var = "value"
}
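For completeness, modules/infra-common/variables.tf then declares the inputs those root modules pass in. A minimal sketch (some_var is just the placeholder name from the example above):
variable "env" {
  description = "Target environment, e.g. dev, test or prod"
  type        = string
}

variable "some_var" {
  description = "Placeholder for any other input the module needs"
  type        = string
}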
Hello, I'm using Vue.js 2 and I have multiple .env files in my project.
My app has a .env file for each company to select the company configuration (skin color / files...).
Currently I have all my .env files in the root folder:
.env.company1-dev
.env.company1-staging
.env.company1-prod
.env.company2-dev
.env.company2-staging
.env.company2-prod
.env.company3-dev
.env.company3-staging
.env.company3-prod
So when I get to 20 companies my root folder will become confusing, so is it possible to create a folder where I can place all my .env files?
The idea:
/environments/company1/
    .env.dev
    .env.staging
    .env.prod
/environments/company2/
    .env.dev
    .env.staging
    .env.prod
/environments/company3/
    .env.dev
    .env.staging
    .env.prod
In your vue.config.js file you can add:
const dotenv = require("dotenv");
const path = require("path");

let envfile = ".env";
if (process.env.NODE_ENV) {
  envfile += "." + process.env.NODE_ENV;
}

const result = dotenv.config({
  path: path.resolve(`environments/${process.env.VUE_APP_COMPANY}`, envfile)
});

// optional: check for errors
if (result.error) {
  throw result.error;
}
Then, before you run, set VUE_APP_COMPANY to a company name and run your app.
Note: It's important to put this code in vue.config.js and not in main.js, because dotenv uses fs to read files.
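For example, assuming the folder layout from the question, a company1 staging build could be started like this (illustrative; on Windows you may need cross-env or to set the variables separately):
VUE_APP_COMPANY=company1 NODE_ENV=staging npx vue-cli-service build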
References
https://github.com/motdotla/dotenv#path
https://github.com/vuejs/vue-cli/issues/787
https://cli.vuejs.org/guide/mode-and-env.html#environment-variables
We have also used the accepted answer in the past, but I found a better solution for handling different environments. Using the npm package dotenv-flow not only allows the use of different environments but has some more benefits, such as:
local overriding of variables by using .env.local or .env.staging.local and so on
definition of defaults using .env.defaults
In combination we have set up our projects with this configuration:
.env
.env.defaults
.env.development
.env.production
.env.staging
.env.test
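As a quick illustration of how dotenv-flow combines these files (the values are made up; files later in the chain override earlier ones):
# .env.defaults: committed defaults shared by every environment
API_URL=https://api.example.com

# .env.development: values used when NODE_ENV=development
API_URL=http://localhost:3000

# .env.development.local: personal overrides, usually git-ignored
API_URL=http://localhost:8080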
And the only thing you have to do in your vue.config.js, nuxt.config.js or other entry points is
require('dotenv-flow').config()
Reference: https://www.npmjs.com/package/dotenv-flow
The PowerShell solution
I was handling exactly the same problem. The accepted solution is kind of OK, but it did not solve all the differences between companies. Also, if you are using npm, your scripts can look nasty. So if you have PowerShell, here is what I suggest: get rid of the .env files :)
You can keep the structure you want from the question; just convert the env files to .ps1.
/build/company1/
    build-dev.ps1
    build-stage.ps1
    build-prod.ps1
/build/company2/
    build-dev.ps1
    build-stage.ps1
    build-prod.ps1
Inside each of those, you can fully customize all env variables, run the build process and apply some advanced post-build logic (like careful auto-deploy, publishing, merging with an API project, ...).
So for example company1\build-stage.ps1 can look like this:
# You can pass some arguments to the script
param (
  [string]$appName = "company1"
)

# Set environment variables for the Vue pipeline
$env:VUE_APP_ENVIRONMENT = "company1-stage";
$env:NODE_ENV = "development";
$env:VUE_APP_NAME = $appName;
$env:VUE_APP_API_BASE_URL = "https://company1.stage.mycompany.com";

# Run the Vue pipeline build
vue-cli-service build;

# Any additional logic, e.g.
# Copy-Item -Path "./dist" -Destination "my-server/my-app" -Recurse
The last part is easy: just call it (manually or from an integration service like TeamCity). Or you can put it inside package.json.
...
"scripts": {
  "build-company1-stage": "#powershell -Command ./build/company1/build-stage.ps1 -appName Company-One",
}
...
Then you can call the whole build process just by running
npm run build-company1-stage
Similarly, you can create localhost, dev, prod, test and any other environment. Let the JavaScript handle the part of building the app itself; for other advanced work, use PowerShell. I think this solution gives you much more flexibility for configuration and the build process.
P.S.
I know that this way I'm merging configuration and the build process, but you can always extract the configuration out of the file if it gets bigger.
What is the expected configuration for using terraform workspaces with the local backend?
The local backend supports workspacing, but it does not appear you have much control over where the actual state is stored.
When you are not using workspaces you can supply a path parameter to the local backend to control where the state files are stored.
# Either in main.tf
terraform {
  backend "local" {
    path = "/path/to/terraform.tfstate"
  }
}
# Or as a flag
terraform init -backend-config="path=/path/to/terraform.tfstate"
I expected analogous functionality when using workspaces, in that you would supply a directory for path and the workspaces would get created under that directory.
For example:
terraform workspace new first
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
terraform workspace new second
terraform init -backend-config="path=/path/to/terraform.tfstate.d"
terraform apply
would result in the state
/path/to/terraform.tfstate.d/first/terraform.tfstate
/path/to/terraform.tfstate.d/second/terraform.tfstate
This does not appear to be the case however. It looks like the local backend ignores the path parameter and puts the workspace configuration in the working directory.
Am I missing something or are you unable to control local backend workspace state?
There is an undocumented option for the local backend, workspace_dir, that solves this issue.
The documentation task is tracked here
terraform {
  backend "local" {
    workspace_dir = "/path/to/terraform.tfstate.d"
  }
}
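With that in place, the workflow from the question should behave as expected (a sketch):
terraform init
terraform workspace new first
terraform apply
terraform workspace new second
terraform apply
# state now ends up under:
#   /path/to/terraform.tfstate.d/first/terraform.tfstate
#   /path/to/terraform.tfstate.d/second/terraform.tfstate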
I have a zip file containing the following structure (this is the root of the archive, not nested in a top-level folder, which I understand is a common cause of errors for aws-s3-lambda deployments):
- support/
  - shared.js
- one.js
- two.js
and then in one.js and two.js:
var shared = require("./support/shared");
// ...
When I run this code locally, it works. I use the aws-sdk to upload the zip file to AWS-S3 and then use aws.lambda.createFunction() to create a function with that name and handler and everything. The created function DOES show up in my Lambda dashboard, but when I test it, I get "Cannot find module './support/shared'". I have also tried var shared = require("./support/shared.js"); and that gives "Cannot find module './support/shared.js'".
This is for runtime node6.10. The filename cases are correct for case-sensitive lambda.
Shouldn't this work?? What's the gotcha?
Is there a way to verify the file structure that Lambda is working in to show that the additional ./support/shared.js file actually made it to the working directory or whatever it uses?
The gotcha is that a zip file created on a Windows machine can have the wrong file permissions set in it for when AWS unpacks it. The files are there but inaccessible, and Node just gives a generic "cannot find module" error instead of saying that access to the folder is denied.
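If the archive has to be built on Windows, one workaround (assuming a unix-like shell such as WSL or Git Bash is available) is to normalise the permissions and rebuild the zip there:
# make files readable and directories traversable, then zip from the archive root
chmod -R u=rwX,go=rX .
zip -r function.zip one.js two.js support/
A quick way to check what actually landed in the function's directory is to log require('fs').readdirSync(__dirname) from the handler.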
I've just set up a Capistrano deploy for our application and I keep running into this error:
* executing ["ls /path/to/app/shared/assets/manifest*"]
servers: ["web03"]
[web03] executing command
[err :: web03] ls: /path/to/app/shared/assets/manifest*
[err :: web03] : No such file or directory
If I manually create a manifest file with touch /path/to/app/shared/assets/manifest.yml, the deploy script works fine. However, this feels all sorts of sketchy.
I've googled the heck out of this and the most I can gather is that the manifest file it's looking for is a product of the asset pipeline. I checked and I do, in fact, have the pipeline enabled (config.assets.enabled = true), so I'm at a loss.
Could someone please help me understand 1) what this manifest file is and how it's created; and 2) why one isn't being created for my application?
Update: I think I'm closing in on the answer and I think it has something to do with this line:
config.assets.prefix = "/some_other_path"
We needed to rename the "asset" path because we have Asset objects in our system and I'm guessing Cap might be getting confused because of it. Any suggestions?
My suspicion was right: this was a problem with the renamed asset directory. Cap didn't know to look in public/some_other_path instead of public/assets.
In other words, because this line is in my application.rb:
config.assets.prefix = "some_other_path"
I had to add this line to my deploy.rb:
set :assets_prefix, "some_other_path"
Then, Cap knows where to look for a manifest, copies it into shared/assets, and finishes correctly.
It'd be handy to have the deploy.rb reference the config variable instead of having to hard-code the path a second time, but that's outside the scope of this question.
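For anyone wondering about question 1: the manifest is written by the asset precompile step, something like this (Rails 3.x sketch):
bundle exec rake assets:precompile RAILS_ENV=production
# writes compiled assets plus a manifest file under public/<assets.prefix>/,
# e.g. public/some_other_path/manifest.yml in this setup
Capistrano's asset tasks then look for that manifest when deploying.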
If you are configuring this with AWS, here is how it should look...
appname/config/environments/production.rb
config.action_controller.asset_host = "//#{ENV['FOG_DIRECTORY']}.s3.amazonaws.com"
config.assets.prefix = "/#{ENV['APP_NAME']}/assets"
appname/config/deploy.rb
...
set :keep_releases, 5
set :assets_prefix, ->{ "#{fetch(:application)}/assets" }
set :whenever_identifier, -> { "#{fetch(:application)}_#{fetch(:stage)}" }
...