Migrating from remote-state to backend in Terraform 0.9 - amazon-s3

I downloaded Terraform 0.9 and tried to follow the migration guide to move from remote-state to backend, but it doesn't seem to work. I replaced:
data "terraform_remote_state" "state" {
backend = "s3"
config {
bucket = "terraform-state-${var.environment}"
key = "network/terraform.tfstate"
region = "${var.aws_region}"
}
}
with
terraform {
  backend "s3" {
    bucket = "terraform-backend"
    key    = "network/terraform.tfstate"
    region = "us-west-2"
  }
}
yet when I run terraform init in one of my environment folders, I get:
Deprecation warning: This environment is configured to use legacy
remote state. Remote state changed significantly in Terraform 0.9.
Please update your remote state configuration to use the new 'backend'
settings. For now, Terraform will continue to use your existing
settings. Legacy remote state support will be removed in Terraform
0.11.
You can find a guide for upgrading here:
https://www.terraform.io/docs/backends/legacy-0-8.html
I also had to drop the variable interpolation, since this is no longer allowed. Does that mean that a single S3 bucket is used for multiple environments? What have I missed here?

Per the upgrade guide (https://www.terraform.io/docs/backends/legacy-0-8.html), after terraform init you also have to run terraform plan to finalize the migration, which will update the remote state file in S3.
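In other words, the migration for each environment folder boils down to two commands (a sketch; run them from the folder that now contains the backend block):
terraform init   # detects the legacy remote state and offers to migrate it to the new backend
terraform plan   # finalizes the migration and updates the state file in S3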
As for configuring multiple environments, we ended up using a wrapper shell script that passes in parameters for ${application_name}/${env}/${component}, together with partial configuration.
For a project structure like this:
├── projects
│   └── application-name
│       ├── dev
│       │   ├── bastion
│       │   ├── db
│       │   ├── vpc
│       │   └── web-cluster
│       ├── prod
│       │   ├── bastion
│       │   ├── db
│       │   ├── vpc
│       │   └── web-cluster
│       └── backend.config
└── run-tf.sh
For each application_name/env/component folder (e.g. dev/vpc) we added a placeholder backend configuration file like this:
backend.tf:
terraform {
  backend "s3" {
  }
}
The folder content for each component will look like this:
│       ├── prod
│       │   ├── vpc
│       │   │   ├── backend.tf
│       │   │   ├── main.tf
│       │   │   ├── outputs.tf
│       │   │   └── variables.tf
At "application_name/" or "application_name/env" level we added a backend.config file, like this:
bucket = "BUCKET_NAME"
region = "region_name"
lock = true
lock_table = "lock_table_name"
encrypt = true
Our wrapper shell script expects the parameters application-name, environment, component, and the actual terraform command to run.
The content of the run-tf.sh script (simplified):
#!/bin/bash
set -euo pipefail

application=$1
envir=$2
component=$3
cmd=$4

# Shared partial backend settings (bucket, region, locking, encryption)
tf_backend_config="root_path/$application/$envir/$component/backend.config"

# Run Terraform from the component folder
cd "root_path/$application/$envir/$component"

# Supply the missing backend settings, plus a per-component state key
terraform init -backend=true \
  -backend-config="$tf_backend_config" \
  -backend-config="key=tfstate/${application}/${envir}/${component}.json"
terraform get
terraform "$cmd"
Here is what a typical run-tf.sh invocation looks like:
$ run-tf.sh application_name dev vpc plan
$ run-tf.sh application_name prod bastion apply

You have confused the terraform remote command with the terraform_remote_state data source. You don't have to change any terraform_remote_state data sources in your .tf files.
Instead of configuring your remote state with the terraform remote command, use the backend configuration block mentioned in the migration link.
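For context, the legacy 0.8-style command that the backend block replaces looked roughly like this (a sketch using the bucket and key values from the question):
terraform remote config -backend=s3 \
  -backend-config="bucket=terraform-backend" \
  -backend-config="key=network/terraform.tfstate" \
  -backend-config="region=us-west-2"
In 0.9 that step goes away: you keep the terraform { backend "s3" { ... } } block in your .tf files and run terraform init instead.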
See the second GitHub comment in this link; it has a nice step-by-step procedure for the migration.
https://github.com/hashicorp/terraform/issues/12792

Related

Static files deleted when deploying a new version to AWS Elastic Beanstalk

I deploy a Node.js application with Express to Elastic Beanstalk, and my structure is like this:
project
├── app.js
├── client
├── apps
│   ├── app1
│   │   ├── models
│   │   ├── controllers
│   │   └── routers
│   └── app2
├── utils
├── views
├── logs
├── scripts
├── public
├── node_modules
├── .elasticbeanstalk
├── .ebextensions
I serve my static files in the public directory with:
{
  "option_settings": [
    {
      "option_name": "/public",
      "namespace": "aws:elasticbeanstalk:container:nodejs:staticfiles",
      "value": "/public"
    }
  ]
}
in the AWS config files in the .ebextensions directory.
But when I update the code with
eb deploy
AWS replaces all of /var/app/current/ with the project directory, so all static files on the server are deleted.
Would AWS pipelines help me, or is there another solution?
Thank you for your help.

Does CMake have an alternative to QMake's .qmake.conf (a file automatically included from a parent directory), or other means to achieve a similar result?

Let's say we have repository structure like this (note the .qmake.conf files):
repo/
├── libraries
│   └── libFoo
│       └── libFoo.pri
├── projects
│   ├── ProjectX
│   │   ├── apps
│   │   │   └── AppX
│   │   │       └── AppX.pro
│   │   ├── libs
│   │   │   └── libX
│   │   │       └── libX.pri
│   │   └── .qmake.conf
│   └── ProjectY
│       ├── apps
│       │   └── AppY
│       │       └── AppY.pro
│       └── .qmake.conf
├── qmake
│   └── common.pri
└── .qmake.conf
QMake supports .qmake.conf files, where you can declare useful variables; the file is automatically included in your .pro file if found in a parent directory.
This helps avoid dealing with ../../.. relative paths. For example:
The root repo/.qmake.conf file declares REPO_ROOT=$$PWD.
The project also has its own repo/projects/ProjectX/.qmake.conf, which contains include(../../.qmake.conf) and declares PROJECT_ROOT=$$PWD.
The project's application .pro file (repo/projects/ProjectX/apps/AppX/AppX.pro) can then avoid writing ../../ and include all dependencies from sibling and parent directories like this:
include($${REPO_ROOT}/qmake/common.pri)
include($${REPO_ROOT}/libraries/libFoo/libFoo.pri)
include($${PROJECT_ROOT}/libs/libX/libX.pri)
This is convenient and tidy. You DO have to write ../../ once (and update it if the repository tree changes), but only once per new .qmake.conf, and afterwards you can use the variables to refer to various useful relative paths in the repository from any number of .pro files.
Is there a similar technique in CMake? How could this kind of variable organization be achieved with CMake, in the most convenient way?
In CMake you can achieve a similar result, somewhat differently (regarding the management of "useful variables").
CMake knows about three types of variables:
Variables with directory scope: if you define one in some folder, it is automatically visible in all subfolders. In brief, a variable defined in the root CMakeLists.txt is visible in all project subfolders. Example of defining a directory-scope variable:
# outside any function
set(MY_USEFUL_VAR SOME_VALUE)
Variables with function scope: these are defined within a function and are visible in the current function scope and all scopes initiated from it. Example of a function-scope variable:
function(my_function)
# note that the definition is within the function
set(MY_LOCAL_VAR SOME_VALUE)
# rest of the function body...
endfunction()
Cache variables may be considered "global variables"; they are also stored in the CMakeCache.txt file in the root build folder. Cache variables are defined as follows (adding a new string variable):
set(MY_CACHE_VAR "this is some string value" CACHE STRING "explanation of MY_CACHE_VAR")
Also, as already suggested in the comments, you can place variable definitions into various "include files" and pull them in with CMake's include statement.
Finally, here is the documentation for the set and include CMake commands.
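Putting this together, the .qmake.conf pattern maps naturally onto directory-scope variables. A minimal sketch (the layout follows the repository tree above; common.cmake and the target names are assumptions for illustration):
# repo/CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(Repo)
set(REPO_ROOT "${CMAKE_CURRENT_SOURCE_DIR}")   # directory scope: visible in all subdirectories
include("${REPO_ROOT}/cmake/common.cmake")     # counterpart of qmake/common.pri (assumed file)
add_subdirectory(projects/ProjectX)

# repo/projects/ProjectX/CMakeLists.txt
set(PROJECT_ROOT "${CMAKE_CURRENT_SOURCE_DIR}")  # counterpart of the project-level .qmake.conf
add_subdirectory(apps/AppX)

# repo/projects/ProjectX/apps/AppX/CMakeLists.txt
add_executable(AppX main.cpp)
# No ../../.. needed: REPO_ROOT and PROJECT_ROOT are inherited from the parent directories
target_include_directories(AppX PRIVATE
    "${REPO_ROOT}/libraries/libFoo"
    "${PROJECT_ROOT}/libs/libX")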

webpack: loading AMD module path dependencies

I am testing the loading of modules in webpack. How would you indicate the path of a dependency in an AMD module?
My project looks something like this:
├── modules
│   ├── mod1.js
│   ├── mod2.js
│   └── others
│       └── mod3.js
├── public
│   └── bundle.js
├── src
│   └── app
│       └── app.js
└── webpack.config.js
In app.js I import only mod3.js; webpack must therefore compile all three JS files (mod1, mod2, mod3), since mod3.js depends on the other two.
I have an "others" directory. Every time I create a folder, do I have to include the following line in webpack.config.js?
path.resolve(__dirname, 'modules/others'),
Is it not possible to indicate the path of the dependency in the module itself, instead of webpack relying on the paths hard-coded in the config?
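For reference, the kind of configuration being described would look roughly like this (a sketch; the webpack 2 resolve.modules option is assumed, and the entry/output paths are taken from the tree above):
// webpack.config.js
const path = require('path');

module.exports = {
  entry: './src/app/app.js',
  output: {
    path: path.resolve(__dirname, 'public'),
    filename: 'bundle.js'
  },
  resolve: {
    // every folder that bare module ids like "mod3" are resolved against
    modules: [
      path.resolve(__dirname, 'modules'),
      path.resolve(__dirname, 'modules/others'),
      'node_modules'
    ]
  }
};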
Thank you

Folder structure in corefx projects

I am trying to understand the folder structure of a corefx project, here System.IO. Here is how the System.IO folder appears on OS X:
sameer:System.IO BLACKSTAR$ pwd
/Users/BLACKSTAR/dotnet/corefx/src/System.IO
sameer:System.IO BLACKSTAR$ tree
.
├── System.IO.sln
├── ref
│   ├── System.IO.Manual.cs
│   ├── System.IO.cs
│   ├── System.IO.csproj
│   ├── bin
│   │   └── Debug
│   │       └── dotnet
│   │           ├── ref.dll
│   │           └── ref.xml
│   ├── project.json
│   └── project.lock.json
├── src
│   ├── Resources
│   │   └── Strings.resx
│   ├── System
│   │   └── IO
│   │       └── InvalidDataException.cs
│   ├── System.IO.csproj
│   ├── project.json
│   └── project.lock.json
Here is what I am trying to figure out:
What is in the ref folder?
What is in the src folder?
What is the connection between ref and src?
Ref targets dotnet but src targets the dnxcore50 framework. What does this imply?
I was able to build the project in the ref folder, but I couldn't build the project in src using dnu build, though dnu restore ran successfully. What am I doing wrong?
sameer:System.IO BLACKSTAR$ dnvm list
Active Version     Runtime Architecture OperatingSystem Alias
------ -------     ------- ------------ --------------- -----
       1.0.0-beta7 coreclr x64          darwin
  *    1.0.0-beta7 mono                 linux/osx       default
sameer:System.IO BLACKSTAR$
What you see here is a NuGet package for a namespace which is in reality part of the CLR. Some types are needed very early, like file I/O and elementary data types, so they are part of the CLR distribution. You can find these in the CoreCLR GitHub project.
So...
Ref contains empty implementations for design time. They are there to define the types.
Src is the dnxcore50-based implementation... essentially empty.
Ref vs. src: ref is used for lookup of the types; binding to the implementation (either in CoreCLR or mscorlib) is done by some PCL type forwards.
Src is the pseudo-implementation for CoreCLR, maybe just the missing types. Ref targets dotnet since all modern SDKs have type forwards for System.IO.
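For illustration, such a type forward looks roughly like this (a hypothetical sketch, not the actual corefx source):
// Redirects compile-time references to System.IO.InvalidDataException
// to the assembly that actually implements it.
using System.Runtime.CompilerServices;

[assembly: TypeForwardedTo(typeof(System.IO.InvalidDataException))]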
I have no idea how they are built.
Sorry for the missing details. It is not very well documented by MS.

rails3 asset pipeline and file collisions

I'm updating an existing Rails 2 app to Rails 3 and having some trouble understanding the asset pipeline. I have read through the guide, and as I understand it, files in any of the following directories will resolve to /assets:
app/assets
lib/assets
vendor/assets
and you can access them using helpers, e.g.
image_tag('logo.png')
But what I don't understand is how collisions are handled. For example, what if the following files exist:
app/assets/images/logo.png
lib/assets/images/logo.png
If I go to myapp.com/assets/images/logo.png, which file will be returned? I could check for collisions manually within my app, but this becomes a pain point when using gems that rely on the asset pipeline.
Based on what I've found, you can't have duplicate files; Rails will just return the first one found.
This seems like a bit of a design flaw, as a gem may not namespace its own assets.
Why not take advantage of the index manifest and organize your app/assets into decoupled modules? You can then reference a particular image, image_tag('admin/logo.png'), and get a more meaningfully organised UI codebase for free. You could even promote a complex component, such as a Single Page Application, into its own module and reuse it from different parts of the app.
Let's say your app is composed of three modules: the public side, an admin UI and, e.g., a CRM that lets your agents track the selling process at your company:
app/assets/
├── coffeescripts
│   ├── admin
│   │   ├── components
│   │   ├── index.coffee
│   │   └── initializers
│   ├── application
│   │   ├── components
│   │   ├── index.coffee
│   │   └── initializers
│   └── crm
│       ├── components
│       ├── index.coffee
│       └── initializers
├── images
│   ├── admin
│   ├── application
│   └── crm
└── stylesheets
    ├── admin
    │   ├── components
    │   └── index.sass
    ├── application
    │   ├── components
    │   └── index.sass
    └── crm
        ├── components
        └── index.sass
21 directories, 6 files
Don't forget to update your application.rb so they will be precompiled properly:
config.assets.precompile = %w(admin.js application.js crm.js
                              admin.css application.css crm.css)
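For reference, each index manifest is an ordinary Sprockets manifest that pulls in its module. app/assets/stylesheets/admin/index.sass might look roughly like this (a sketch; the components directory holds whatever partials the module contains):
// Sprockets directives inside Sass comments
//= require_self
//= require_tree ./components
Sprockets resolves a directory name to its index file, so stylesheet_link_tag 'admin' and the admin.css precompile entry both end up pointing at admin/index.sass.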