Terraform Outputs across modules - variables

I am struggling to work out how to pass outputs from one module and consume them in another.
My folder structure:
.
├── main.tf
├── modules
│   ├── cloudwatch-event
│   │   ├── basic_event_rule.tf
│   │   ├── basic_event_target.tf
│   │   └── variables.tf
│   └── lambda
│       ├── basic_lambda.tf
│       ├── outputs.tf
│       ├── lambda.py
│       └── variables.tf
├── lambda
│   ├── main.tf
│   └── variables.tf
└── terraform.tfvar
In order to add scheduling to the Lambda, I need to consume the Lambda ARN in the CloudWatch module.
The lambda - basic_lambda.tf:
resource "aws_lambda_function" "lambda_function" {
  # ...
}
The lambda - outputs.tf:
output "lambda_arn" {
  value = "${aws_lambda_function.lambda_function.arn}"
}
In my lambda application module, I have this in lambda/main.tf:
module "cloudwatch-event" {
  source     = "../modules/cloudwatch-event"
  lambda_arn = "${module.lambda.lambda_arn}"
}

module "lambda" {
  source = "../modules/lambda"
}
My lambda/variables.tf includes the lambda_arn variable as a string:
variable "lambda_arn" {
  type = "string"
}
The root main file looks like this:
provider "aws" {
region = var.aws_region
}
module "accesskey-lambda" {
source = "./lambda/"
}
Running Terraform, I get this:
Error: Missing required argument

  on main.tf line 5, in module "accesskey-lambda":
   5: module "accesskey-lambda" {

The argument "lambda_arn" is required, but no definition was found.
Then adding it to the root main file doesn't resolve my issue.
Thanks
Nick

Solved, I had a typo.
In cloudwatch-event/basic_event_target.tf:
arn = "${var.lambda_arn}"
Then in cloudwatch-event/variables.tf:
variable "lambda_arn" {
  type = string
}
The module block then needed:
module "cloudwatch-event" {
  source     = "../modules/cloudwatch-event"
  lambda_arn = "${module.lambda.lambda_arn}"
}
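For completeness, here is roughly how those pieces fit together inside the cloudwatch-event module. This is a hedged sketch rather than the exact code from the question: the rule name, schedule expression, and resource labels are illustrative assumptions.

# modules/cloudwatch-event/basic_event_rule.tf (illustrative)
resource "aws_cloudwatch_event_rule" "schedule" {
  name                = "lambda-schedule"  # assumed name
  schedule_expression = "rate(5 minutes)"  # assumed schedule
}

# modules/cloudwatch-event/basic_event_target.tf (illustrative)
resource "aws_cloudwatch_event_target" "lambda" {
  rule = "${aws_cloudwatch_event_rule.schedule.name}"
  arn  = "${var.lambda_arn}"  # the ARN wired in from the lambda module's output
}

Note that a matching aws_lambda_permission resource is also typically required so that CloudWatch Events is allowed to invoke the function.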

Terraform BigQuery replaces table & deletes the data when schema gets updated

I have roughly the following folder structure:
.
├── locals.tf
├── main.tf
├── modules
│   ├── bigquery
│   │   ├── main.tf
│   │   ├── schema
│   │   └── variables.tf
│   ├── bigquery_tables
│   │   ├── main.tf
│   │   ├── schema
│   │   └── variables.tf
│   ├── bigquery_views
│   │   ├── main.tf
│   │   ├── queries
│   │   ├── schema
│   │   └── variables.tf
│   ├── cloud_composer
│   └── list_projects
├── providers.tf
├── storage.tf
├── variables.tf
└── versions.tf
My main.tf in bigquery_tables is:
resource "google_bigquery_table" "bq_tables" {
  for_each = { for table in var.bigquery_dataset_tables : table.table_id => table }

  project             = var.project_id
  dataset_id          = each.value.dataset_id
  table_id            = each.value.table_id
  schema              = file(format("${path.module}/schema/%v.json", each.value.file_name))
  deletion_protection = false

  dynamic "time_partitioning" {
    for_each = try(each.value.time_partition, false) == false ? [] : [each.value.time_partition]
    content {
      type  = each.value.partitioning_type
      field = each.value.partitioning_field
    }
  }
}
The issue I am facing: since we are in the development stage, the schema changes frequently. Each schema change causes Terraform to replace the BigQuery table, which also leads to data loss in BigQuery.
Can someone suggest what I should add to my resource block to avoid the table being replaced and the data being lost?
I am also unsure how I can add "external_data_configuration" to my current block, as per https://github.com/hashicorp/terraform-provider-google/issues/10919.
Unfortunately, if you change the schema structure of the BigQuery table, you will get this behaviour.
As for the deletion_protection param, I think it's better to set it to true to prevent data loss, or not to set it at all (it's true by default).
Since you are in dev mode, one solution I have seen is to use Terraform workspaces and run your schema updates in a separate workspace. Each apply then creates a new dataset or table, for example:
locals.tf file:
locals {
  workspace = terraform.workspace != "default" ? "${terraform.workspace}_" : ""
}
main.tf file:
resource "google_bigquery_table" "bq_tables" {
  for_each = { for table in var.bigquery_dataset_tables : table.table_id => table }

  project             = var.project_id
  dataset_id          = "${local.workspace}${each.value.dataset_id}"
  table_id            = each.value.table_id
  schema              = file(format("${path.module}/schema/%v.json", each.value.file_name))
  deletion_protection = false

  dynamic "time_partitioning" {
    for_each = try(each.value.time_partition, false) == false ? [] : [each.value.time_partition]
    content {
      type  = each.value.partitioning_type
      field = each.value.partitioning_field
    }
  }
}
In this example, I used the workspace as a prefix on the dataset ID: if the default workspace is used, the prefix is empty; otherwise it is the workspace name followed by an underscore:
dataset=mydataset
- workspace=default => dataset=mydataset
- workspace=evolfeature1 => dataset=evolfeature1_mydataset
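To try this out, you would switch workspaces with the Terraform CLI before applying; the workspace name below is only an example:

terraform workspace new evolfeature1     # create and switch to a throwaway dev workspace
terraform apply                          # creates the evolfeature1_-prefixed dataset/tables
terraform workspace select default       # switch back to the unprefixed resources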

How to add another page to vuepress blog?

I want to add another page to VuePress with the blog plugin. The new page that I added does not show its content; I expect the about page to show the content.
I use the basic vuePressBlog template. My tree structure is:
├── examples
│   ├── about
│   │   └── Readme.md
│   └── _posts
│       ├── 2018-4-4-intro-to-vuepress.md
│       ├── 2019-6-8-intro-to-vuepress-next.md
│       ├── 2019-6-8-shanghai.md
│       ├── 2019-6-8-summary.md
│       └── 2019-6-8-vueconf.md
├── index.js
├── layouts
│   ├── GlobalLayout.vue
│   ├── Layout.vue
│   ├── Post.vue
│   └── Tag.vue
├── package.json
├── package-lock.json
└── README.md
I added the following lines to ./example/.vuepress/config.js:
module.exports = {
  title: "SlimBlog",
  theme: require.resolve("../../"),
  themeConfig: {
    // Please keep looking down to see the available options.
    nav: [
      {
        text: "Home",
        link: "/",
      },
      {
        text: "about",
        link: "/about/",
      },
      {},
    ],
  },
};
I assume there might be a layout missing, but I am unable to find the config for this.
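One thing worth checking, given that hunch: in VuePress, a page can select one of the theme's registered layout components through its frontmatter. A minimal sketch for examples/about/Readme.md, assuming the theme exposes Layout.vue under the name Layout:

---
layout: Layout
title: About
---

Content of the about page goes here.

If the blog theme only wires its layouts to posts and the home page, pointing the page at an explicit layout like this is a quick way to test whether a missing layout is really the cause.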

List objects with AWS S3 SDK for Java 2.x

I have a bucket (logs) in Amazon S3 (us-east-1) with, unsurprisingly, logs, partitioned by application and date:
logs
├── peacekeepers
│   └── year=2018
│       ├── month=11
│       │   ├── day=01
│       │   ├── day=…
│       │   └── day=30
│       └── month=12
│           ├── day=01
│           ├── day=…
│           └── day=19
│               ├── 00:00 — 01:00.log
│               ├── …
│               └── 23:00 — 00:00.log
├── rep-hunters
├── retro-fans
└── rubber-duckies
I want to list all the objects (logs) for a particular date, month, year…
How do I do that with AWS SDK for Java 2.x?
The new SDK makes it easy to work with paginated results:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Response;
import software.amazon.awssdk.services.s3.model.S3Object;
import software.amazon.awssdk.services.s3.paginators.ListObjectsV2Iterable;

S3Client client = S3Client.builder().region(Region.US_EAST_1).build();

ListObjectsV2Request request =
        ListObjectsV2Request
                .builder()
                .bucket("logs")
                .prefix("peacekeepers/year=2018/month=12")
                // .prefix("peacekeepers/year=2018/month=12/day=19")
                .build();

// The iterable issues follow-up requests lazily as you iterate over pages
ListObjectsV2Iterable response = client.listObjectsV2Paginator(request);
for (ListObjectsV2Response page : response) {
    for (S3Object object : page.contents()) {
        // Consume the object
        System.out.println(object.key());
    }
}
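If you only need the partition "folders" rather than every log object, a variation worth knowing (a sketch, not part of the original answer): setting a delimiter makes S3 roll keys up into common prefixes, so you can enumerate the day= partitions without paging through the objects underneath them.

ListObjectsV2Request byDay = ListObjectsV2Request.builder()
        .bucket("logs")
        .prefix("peacekeepers/year=2018/month=12/")
        .delimiter("/")  // group keys by the next "/" after the prefix
        .build();

client.listObjectsV2Paginator(byDay).stream()
        .flatMap(page -> page.commonPrefixes().stream())
        .forEach(p -> System.out.println(p.prefix()));  // e.g. .../day=19/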

Read Karate config from YAML

I would like to define environment-specific properties in a .yml/.yaml file. Therefore I created the following test.yaml:
baseUrl: 'http://localhost:1234'
Next, I wrote this karate-config.js:
function() {
  var env = karate.env;
  if (!env) {
    env = 'test'; // default is test
  }
  // config = read(env + '.yaml')
  var config = read('/home/user/git/karate-poc/src/test/java/test.yaml');
  // var config = read('test.yaml');
  // var config = read('classpath:test.yaml');
  return config;
}
As seen at https://github.com/intuit/karate#reading-files, the read() function should be known to Karate; however, I'm not sure whether this applies only to .feature files or to karate-config.js too.
Unfortunately, none of the above read() calls work; I'm getting this error:
Caused by: com.intuit.karate.exception.KarateException: javascript function call failed: could not find or read file: /home/user/git/karate-poc/src/test/java/test.yaml, prefix: NONE
at com.intuit.karate.Script.evalFunctionCall(Script.java:1602)
I'm sure that the file exists and is readable.
Am I doing something wrong or is my approach not supported? If it's not supported, what would be the recommended way to read the configuration based on the environment from a YAML file (once) in order to use it in (multiple) .feature files?
Thank you very much
Edit: Tree structure of the project:
.
├── build.gradle
├── gradle
│   └── wrapper
│       ├── gradle-wrapper.jar
│       └── gradle-wrapper.properties
├── gradle.properties
├── gradlew
├── gradlew.bat
└── src
    └── test
        └── java
            ├── karate
            │   └── rest
            │       ├── rest.feature
            │       └── RestRunner.java
            ├── karate-config.js
            └── test.yaml
Run with ./gradlew test
In JS, use the karate object, which is explained here: https://github.com/intuit/karate#the-karate-object
So this should work:
var config = karate.read('classpath:test.yaml');
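Putting it together, the karate-config.js from the question would then look something like this (a sketch; the classpath: lookup assumes src/test/java is on the test classpath, which it is in the tree above):

function() {
  var env = karate.env;
  if (!env) {
    env = 'test'; // default is test
  }
  // resolves to classpath:test.yaml by default
  var config = karate.read('classpath:' + env + '.yaml');
  return config;
}

Values from the YAML file are then available as variables in every feature file, e.g. Given url baseUrl.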

Using baseController inside Alloy widget

I would like to extend my controller within a simple widget.
I've created two files:
app/widgets/mywidget/controllers/base.js
app/widgets/mywidget/controllers/index.js
I start my controller file with the line exports.baseController = 'base'; and on Android it crashes with this exception:
/V8Exception(19693): Exception occurred at ti:/module.js:280: Uncaught Error: Requested module not found: alloy/controllers//glass/parent
Project tree looks like this:
app
├── README
├── alloy.js
├── assets
├── config.json
├── controllers
│   ├── base.js
│   ├── index.js
│   └── view.js
├── lib
│   └── user.js
├── models
├── styles
│   ├── app.tss
│   └── index.tss
├── views
│   ├── index.xml
│   └── view.xml
└── widgets
    └── mywidget
        ├── controllers
        │   ├── base.js
        │   ├── index.js
        │   └── view.js
        ├── styles
        ├── views
        └── widget.json
index.js and view.js inside app/controllers use base.js as their baseController. index.js and view.js inside app/widgets/mywidget/controllers use the base.js in the same directory as their baseController; I am not trying to extend the app's baseController from inside the widget.