Can I bind a Lambda Layer directly to a static ARN instead of a zip file - pandas

I want to use an AWS-provided layer in a Lambda function. In Terraform, what is the preferred way to bind it? Also, can the ARN be bound directly to the layers property of the module, bypassing the need to define the layer at all?
resource "aws_lambdas_layer" "lambda_layer"{
#filename = "python32-pandas.zip"
layer_name= "aws-pandas-py38-layer"
arn = "arn:aws:lambda:us-east-1:xxxxxx:AWSSDKPandas-Python38:1" #? Is this valid
}
module "lambda_test" {
source = "git::https://git.my-custom-aws-lambda.git"
application = var.application
service = "${var.service}-test"
file_path = "lambda_function.zip"
publicly_accessible = false
data_classification = "confidential"
handler = "lambda_function.lambda_handler"
runtime = "python3.8"
tfs_releasedefinitionname = ""
tfs_releasename = "0"
vpc_enabled = true
vpc_application_tag = "aws-infra"
promote = true
rollback = false
create_cwl_group = true
cwl_prefix = "my-project"
create_cwl_subscription = false
#Could layers an arn?
layers = [aws_lambda_layer_version.lambda_layer.arn]
timeout = 600 ####10 mins
memory_size = 1024 #### 1GB
environment = {
variables = {
destination_bucket_name = "us-east-1-my-sbx-${terraform.workspace}"
}
}
}

Doh! The layers property is a list. Minor lapse of reading comprehension on my part :/
The solution is to bind layers to a list of ARN strings pointing at the AWS-provided or custom layer(s); no aws_lambda_layer_version resource is needed:
layers = ["arn:aws:lambda:us-east-1:336392948345:layer:AWSSDKPandas-Python39:1"]
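For anyone not using a wrapper module, a minimal sketch with a plain aws_lambda_function resource shows the same idea (the function name, role, and zip below are placeholders, not from the original setup):

resource "aws_lambda_function" "pandas_test" {
  function_name = "pandas-test"                # hypothetical name
  role          = aws_iam_role.lambda_exec.arn # assumes an existing execution role
  filename      = "lambda_function.zip"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.9"

  # AWS-managed layer bound directly by ARN; no layer resource required
  layers = ["arn:aws:lambda:us-east-1:336392948345:layer:AWSSDKPandas-Python39:1"]
}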

How to reference OpenAPI Generator task properties

I am attempting to reference the assigned property generatorName when setting outputDir.
I tried referencing the generatorName property using the same syntax as other task properties (i.e. $buildDir), and also tried fully qualifying it as openApiGenerator.generatorName.
openApiGenerate {
    verbose = false
    generatorName = "html2" // assignment to property
    inputSpec = "$buildDir/swagger/testing.yml".toString()
    //outputDir = "$buildDir/generated".toString()
    outputDir = "$buildDir/generated/$generatorName".toString() // fails
    apiPackage = "org.openapi.example.api"
    invokerPackage = "org.openapi.example.invoker"
    modelPackage = "org.openapi.example.model"
    // debugging code
    println " buildDir: $buildDir".toString()
    println " generatorName: $generatorName".toString() // this fails
}
Output from the debugging code shows the failure to reference the generatorName property:
> Configure project :
buildDir: C:\Users\jgunchy\repos\testingproject\build
generatorName: property(class java.lang.String, fixed(class java.lang.String, html2))
generatorName is a Gradle Property&lt;String&gt; (a lazy provider), not a plain String. You can access the underlying string using .get(), like this:
openApiGenerate {
    verbose = false
    generatorName = "html2"
    inputSpec = "$buildDir/swagger/testing.yml".toString()
    outputDir = "$buildDir/generated/${generatorName.get()}".toString()
    apiPackage = "org.openapi.example.api"
    invokerPackage = "org.openapi.example.invoker"
    modelPackage = "org.openapi.example.model"
}
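If you would rather keep the configuration lazy instead of calling .get() at configuration time, a sketch using Provider.map should also work, assuming the extension's outputDir is itself a lazy Property (as the plugin's other properties appear to be):

openApiGenerate {
    verbose = false
    generatorName = "html2"
    inputSpec = "$buildDir/swagger/testing.yml".toString()
    // derive outputDir from generatorName lazily; evaluated only when needed
    outputDir = generatorName.map { "$buildDir/generated/$it".toString() }
    apiPackage = "org.openapi.example.api"
    invokerPackage = "org.openapi.example.invoker"
    modelPackage = "org.openapi.example.model"
}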
Another option would be to use a project property rather than reading the extension's own property back. For instance, add to gradle.properties:
generatorName=html2
Then your configuration would look like this (note project.property(...), since a bare generatorName inside the block resolves to the extension's property, and gradle.properties entries are project properties rather than ext properties):
openApiGenerate {
    verbose = false
    generatorName = project.property('generatorName')
    inputSpec = "$buildDir/swagger/testing.yml".toString()
    outputDir = "$buildDir/${project.property('generatorName')}".toString()
    apiPackage = "org.openapi.example.api"
    invokerPackage = "org.openapi.example.invoker"
    modelPackage = "org.openapi.example.model"
}
$buildDir resolves to a getter on the Project instance returning a File, whose toString() happens to output the file path, which is why it behaves differently from the lazy Property&lt;String&gt;.
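To see the difference in isolation, here is a minimal sketch using Gradle's Property API directly (in any build.gradle; objects.property is the standard factory method):

def name = project.objects.property(String)
name.set("html2")

println name        // prints something like property(class java.lang.String, fixed(class java.lang.String, html2))
println name.get()  // prints html2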

Redis | Terraform | Recreated from scratch on every terraform apply

I am creating Redis in AWS using Terraform. The first terraform apply creates it without issues, but every subsequent terraform apply destroys the Redis cluster and re-creates it, instead of recognizing that it already exists and moving on to newly added resources.
Is this expected behaviour?
Adding the terraform plan output to the question:
-/+ resource "aws_elasticache_replication_group" "redis" {
        apply_immediately              = true
        at_rest_encryption_enabled     = true
        auto_minor_version_upgrade     = false
        automatic_failover_enabled     = true
      + configuration_endpoint_address = (known after apply)
        engine                         = "redis"
        engine_version                 = "5.0.4"
      ~ id                             = "dev-af-redis" -> (known after apply)
        maintenance_window             = "sun:06:00-sun:07:00"
      ~ member_clusters                = [
          - "ca-cng-dev-af-redis-001",
          - "ca-cng-dev-af-redis-002",
        ] -> (known after apply)
        node_type                      = "cache.t2.medium"
      ~ number_cache_clusters          = 2 -> (known after apply)
        parameter_group_name           = "default.redis5.0"
        port                           = 6379
      ~ primary_endpoint_address       = "master.dev-af-redis.qxyj8a.euc1.cache.amazonaws.com" -> (known after apply)
        replication_group_description  = "Airflow Cluster"
        replication_group_id           = "dev-af-redis"
        security_group_ids             = [
            "sg-094175ad3062da04d",
        ]
      ~ security_group_names           = [] -> (known after apply)
      - snapshot_retention_limit       = 0 -> null
      ~ snapshot_window                = "02:30-03:30" -> (known after apply)
        subnet_group_name              = "dev-subnet-group-airflow"
        tags                           = {
            "Application"    = "project"
            "BusinessUnit"   = "subproject"
            "Classification" = "private"
            "Environment"    = "development"
            "Name"           = "dev-airflow-redis"
            "TechnicalOwner" = "ops"
            "Tier"           = "orchestration"
        }
        transit_encryption_enabled     = true

      + cluster_mode {
          + num_node_groups         = 1
          + replicas_per_node_group = 1 # forces replacement
        }
    }
Plan: 1 to add, 0 to change, 1 to destroy.
The Terraform code that creates Redis:
resource "aws_elasticache_replication_group" "cng_redis" {
replication_group_description = "Cluster"
replication_group_id = "dev-af-redis"
engine = "redis"
engine_version = "5.0.4"
node_type = "cache.t2.medium "
port = 6379
subnet_group_name = "dev-subnet-group-airflow"
security_group_ids = ["${aws_security_group.airflow_sg.id}"]
parameter_group_name = "default.redis5.0"
at_rest_encryption_enabled = true
transit_encryption_enabled = true
maintenance_window = "sun:06:00-sun:07:00"
auto_minor_version_upgrade = false
apply_immediately = true
automatic_failover_enabled = true
cluster_mode {
num_node_groups = "1"
replicas_per_node_group = "1"
}
tags = merge(
var.common_tags,
map("Classification", "private"),
map("Name", "airflow-redis")
)
}
Here is a solution (a "this is not a bug, it's a feature" case, I suppose ;) ): https://github.com/terraform-providers/terraform-provider-aws/issues/4817#issuecomment-463993424
I tested it and it works.
You have to use a parameter group with cluster-enabled set to yes. I'm using Redis 5.0.5, so in my aws_elasticache_replication_group I set:
resource "aws_elasticache_replication_group" "elc-rep-group" {
...
automatic_failover_enabled = true #this is required, when cluster-enabled parameter is on
parameter_group_name = "default.redis5.0.cluster.on"
...
}
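If you need a custom parameter group rather than the AWS-managed default, a sketch along these lines should work (the group name here is a placeholder):

resource "aws_elasticache_parameter_group" "redis_cluster_on" {
  name   = "redis5-cluster-on" # hypothetical name
  family = "redis5.0"

  # cluster-enabled must be yes whenever the replication group uses cluster_mode
  parameter {
    name  = "cluster-enabled"
    value = "yes"
  }
}

Then reference it with parameter_group_name = aws_elasticache_parameter_group.redis_cluster_on.name.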

Terraform Variables prompting me when defined in tfvars

There is something that I am not understanding about Terraform variables. I am getting prompted for two variables when I run "terraform apply". I don't think I should be prompted for any, since I defined them all in terraform.tfvars. I am being prompted for applicationNamespace and staticIpName, but I am not sure why. What am I misunderstanding?
I created a file (terraform.tfvars):
#--------------------------------------------------------------
# General
#--------------------------------------------------------------
cluster = "reddiyo-development"
project = "<MYPROJECTID>"
region = "us-central1"
credentialsLocation = "<MYCERTLOCATION>"
bucket = "reddiyo-terraform-state"
vpcLocation = "us-central1-b"
network = "default"
staticIpName = "dev-env-ip"
#--------------------------------------------------------------
# Specific To NODE
#--------------------------------------------------------------
terraformPrefix = "development"
mainNodeName = "primary-pool"
nodeMachineType = "n1-standard-1"
#--------------------------------------------------------------
# Specific To Application
#--------------------------------------------------------------
applicationNamespace = "application"
I also have a Terraform script:
variable "cluster" {}
variable "project" {}
variable "region" {}
variable "bucket" {}
variable "terraformPrefix" {}
variable "mainNodeName" {}
variable "vpcLocation" {}
variable "nodeMachineType" {}
variable "credentialsLocation" {}
variable "network" {}
variable "applicationNamespace" {}
variable "staticIpName" {}
data "terraform_remote_state" "remote" {
backend = "gcs"
config = {
bucket = "${var.bucket}"
prefix = "${var.terraformPrefix}"
}
}
provider "google" {
//This needs to be updated to wherever you put your credentials
credentials = "${file("${var.credentialsLocation}")}"
project = "${var.project}"
region = "${var.region}"
}
resource "google_container_cluster" "gke-cluster" {
name = "${var.cluster}"
network = "${var.network}"
location = "${var.vpcLocation}"
remove_default_node_pool = true
# node_pool {
# name = "${var.mainNodeName}"
# }
node_locations = [
"us-central1-a",
"us-central1-f"
]
//Get your credentials for the newly created cluster so that microservices can be deployed
provisioner "local-exec" {
command = "gcloud config set project ${var.project}"
}
provisioner "local-exec" {
command = "gcloud container clusters get-credentials ${var.cluster} --zone ${var.vpcLocation}"
}
}
resource "google_container_node_pool" "primary_pool" {
name = "${var.mainNodeName}"
cluster = "${var.cluster}"
location = "${var.vpcLocation}"
node_count = "2"
node_config {
machine_type = "${var.nodeMachineType}"
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/service.management.readonly",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/trace.append",
]
}
management {
auto_repair = true
auto_upgrade = true
}
autoscaling {
min_node_count = 2
max_node_count = 10
}
}
# //Reserve a Static IP
resource "google_compute_address" "ip_address" {
name = "${var.staticIpName}"
}
//Install Ambassador
module "ambassador" {
source = "modules/ambassador"
applicationNamespace = "${var.applicationNamespace}"
}
You can try to force it to read your variables by using:
terraform apply -var-file=<path_to_your_vars>
For reference, read below if anybody faces a similar issue.
terraform.tfvars is the default variable file name, from which Terraform reads variables automatically.
If any other file name is used, it needs to be passed on the command line, e.g.: terraform plan -var-file=whateverName.tfvars
Also, the order in which Terraform loads variables:
Environment variables (TF_VAR_name)
terraform.tfvars
terraform.tfvars.json
Any *.auto.tfvars or *.auto.tfvars.json files
Any -var and -var-file options on the command line, in the order they are provided
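As a side note, a variable declared with a default is never prompted for. A minimal sketch, using one of the variables from the question:

variable "applicationNamespace" {
  type    = string
  default = "application" # a default suppresses the interactive prompt when no value is supplied
}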

How to pass a variable from another Lua file?

How do I pass a variable from another Lua file? I'm trying to pass the title text variable to b.lua.
a.lua
local options = {
  title = "Easy - Addition",
  backScene = "scenes.operationMenu",
}
b.lua
local score_label_2 = display.newText({parent=uiGroup, text=title, font=native.systemFontBold, fontSize=128, align="center"})
There are a couple of ways to do this, but the most straightforward is to treat a.lua as a module and import it into b.lua via require.
For example, in a.lua:
-- a.lua
local options =
{
  title = "Easy - Addition",
  backScene = "scenes.operationMenu",
}
return options
and in b.lua:
-- b.lua
local options = require 'a'
local score_label_2 = display.newText
{
  parent = uiGroup,
  text = options.title,
  font = native.systemFontBold,
  fontSize = 128,
  align = "center"
}
You can import the file a.lua into a variable, then use it as an ordinary table. Note that require takes the module name without the .lua extension, and a.lua must return its options table (as in the answer above).
in b.lua
local a = require("a")
print(a.title)
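If you want a.lua to expose more than one value, a common pattern (a sketch, not from the original code) is to return a module table that wraps options:

-- a.lua
local M = {}

M.options = {
  title = "Easy - Addition",
  backScene = "scenes.operationMenu",
}

return M

-- b.lua
local a = require("a")
print(a.options.title) --> Easy - Addition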

How do you configure UdpInput to work with a heka-flood UDP test

I am trying to test sending data to heka's UdpInput with no success, so I decided to use the heka-flood tool to mimic UDP traffic, also with no success. I am using version 0.10 of heka. My heka.toml:
[UdpInput]
address = "127.0.0.1:4880"
net = "udp"
splitter = "udp_splitter"
decoder = "ProtobufDecoder"
set_hostname = true
# I have also tried not setting this as well
[udp_splitter]
type = "HekaFramingSplitter"
[ProtobufDecoder]
[LogOutput]
type = "LogOutput"
message_matcher = "Logger == 'UdpInput'"
encoder = "PayloadEncoder"
and my flood.toml:
[udp_proto]
ip_address = "127.0.0.1:4880"
sender = "udp"
pprof_file = ""
encoder = "protobuf"
num_messages = 1000
corrupt_percentage = 0.0001
signed_percentage = 0.00011
variable_size_messages = false
ascii_only = true
max_message_size = 32000
If I add another input, say a log tailer, and add it to the message matcher for the LogOutput, those messages end up being logged. I never see anything from the UdpInput. What am I doing wrong?