Create CloudFormation stack from AWS Lambda function, passing API Gateway parameters

I am unable to get parameters into my Lambda function. If I hard-code the parameter values in the Lambda it works fine, but when I remove the parameter values from the Lambda function and invoke it from API Gateway or a Lambda test, it processes the default parameter values. Please help.
My Lambda function is:
import boto3
import time
import json

datetime = time.strftime("%Y%m%d%H%M%S")
stackname = 'myec2'

client = boto3.client('cloudformation')
response = client.create_stack(
    StackName=(stackname + '-' + datetime),
    TemplateURL='https://testnaeem.s3.amazonaws.com/ec2tags.yaml',
    Parameters=[
        {
            "ParameterKey": "MyInstanceName",
            "ParameterValue": " "
        },
        {
            "ParameterKey": "MyInstanceType",
            "ParameterValue": " "
        }
    ]
)

def lambda_handler(event, context):
    return response
My CloudFormation template is:
---
Parameters:
  MyInstanceType:
    Description: Instance type description
    Type: String
  MyInstanceName:
    Description: Instance type description
    Type: String
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      AvailabilityZone: us-east-1a
      ImageId: ami-047a51fa27710816e
      InstanceType: !Ref MyInstanceType
      KeyName: miankeyp
      Tags:
        - Key: Name
          Value: !Ref MyInstanceName
        - Key: app
          Value: demo
Please help me understand what changes are required in the Lambda function.
My test values are:
{
    "MyInstanceName": "demott",
    "MyInstanceType": "t2.micro"
}

I modified the code of your Lambda function. Please check the comments in the code for clarification:
import boto3
import time
import json

datetime = time.strftime("%Y%m%d%H%M%S")
stackname = 'myec2'

client = boto3.client('cloudformation')

def lambda_handler(event, context):
    # Print the event to check what it actually is; it will be printed out in
    # CloudWatch Logs for your function. Check what the event actually looks
    # like and adjust event['MyInstanceName'] and event['MyInstanceType'] in
    # the following code accordingly.
    print(event)
    response = client.create_stack(
        StackName=(stackname + '-' + datetime),
        TemplateURL='https://testnaeem.s3.amazonaws.com/ec2tags.yaml',
        Parameters=[
            {
                "ParameterKey": "MyInstanceName",
                "ParameterValue": event['MyInstanceName']
            },
            {
                "ParameterKey": "MyInstanceType",
                "ParameterValue": event['MyInstanceType']
            }
        ]
    )
    return response
By the way, a function like this behind API Gateway can spin up a lot of EC2 instances very quickly, so be aware of that.
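One more thing worth checking: with Lambda proxy integration, API Gateway delivers the request payload as a JSON string under event['body'], while a Lambda console test event passes the keys at the top level. A small sketch that handles both shapes (extract_parameters is a hypothetical helper; the keys match your test event):

import json

def extract_parameters(event):
    # Proxy integration wraps the request payload in event['body'] as a JSON
    # string; a console test event passes the keys at the top level.
    if isinstance(event.get('body'), str):
        payload = json.loads(event['body'])
    else:
        payload = event
    return payload['MyInstanceName'], payload['MyInstanceType']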

Related

Pipeline generation - passing in simple data structures like lists/arrays

For a code repository project in Palantir Foundry, I am struggling with re-using some of my transformation logic.
It seems almost trivial, but: is there a way to send an Input to a Transform that is not a dataset/dataframe reference?
In my case I want to pass in strings or lists/arrays.
This is my code:
from pyspark.sql import functions as F
from transforms.api import Transform, Input, Output

def my_computation(result, customFilter, scope, my_categories, my_mappings):
    scope_df = scope.dataframe()
    my_categories_df = my_categories.dataframe()
    my_mappings_df = my_mappings.dataframe()

    filtered_cat_df = (
        my_categories_df
        .filter(F.col('CAT_NAME').isin(customFilter))
    )
    # ... more logic

def generateTransforms(config):
    transforms = []
    for key, value in config.items():
        o = {}
        for outKey, outValue in value['outputs'].items():
            o[outKey] = Output(outValue)
        i = {}
        for inpKey, inpValue in value['inputs'].items():
            i[inpKey] = Input(inpValue)
        i['customFilter'] = Input(value['my_custom_filter'])
        transforms.append(Transform(my_computation, inputs=i, outputs=o))
    return transforms

config = {
    "transform_one": {
        "my_custom_filter": {
            "foo",
            "bar"
        },
        "inputs": {
            "scope": "/my-project/input/scope",
            "my_categories": "/my-project/input/my_categories",
            "my_mappings": "/my-project/input/my_mappings"
        },
        "outputs": {
            "result": "/my-project/output/result"
        }
    }
}

TRANSFORMS = generateTransforms(config)
The concrete question is: how can I send in the values from my_custom_filter into customFilter in the transformation function my_computation?
If I execute it like above, I get the error "TypeError: unhashable type: 'set'"
This looks like a Python issue; any chance you can point out which line is causing the error?
Reading through your code, I would guess it's this line:
i['customFilter'] = Input(value['my_custom_filter'])
Your Python logic is wrong: if we unpack your code, you're effectively making this call:
i['customFilter'] = Input({"foo", "bar"})
Input() expects a dataset path (a string), not a Python set, which is what triggers the "TypeError: unhashable type: 'set'".
Edit, to answer the comment on how to create a Python transform that locks a variable in a closure:

from pyspark.sql import functions as F
from transforms.api import transform

def create_transform(inputs, outputs, my_other_var):
    @transform(**inputs, **outputs)
    def compute(input_foo, input_bar, output_foobar, ctx):
        df = input_foo.dataframe()
        df = df.withColumn("mycol", F.lit(my_other_var))
        output_foobar.write_dataframe(df)
    return compute

and now you can call this:

transforms.append(create_transform(inputs, outputs, "foobar"))
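Applied to the generateTransforms loop from the question, the same closure idea could look roughly like this (a sketch; make_my_computation is a hypothetical helper, and the filter is captured as a plain Python value instead of being wrapped in Input()):

from pyspark.sql import functions as F
from transforms.api import Transform, Input, Output

def make_my_computation(custom_filter):
    # custom_filter is closed over as a plain Python value,
    # so it never goes through Input().
    def my_computation(result, scope, my_categories, my_mappings):
        my_categories_df = my_categories.dataframe()
        filtered_cat_df = my_categories_df.filter(
            F.col('CAT_NAME').isin(list(custom_filter))
        )
        # ... more logic, then write to result
    return my_computation

def generateTransforms(config):
    transforms = []
    for key, value in config.items():
        o = {outKey: Output(outValue) for outKey, outValue in value['outputs'].items()}
        i = {inpKey: Input(inpValue) for inpKey, inpValue in value['inputs'].items()}
        compute = make_my_computation(value['my_custom_filter'])
        transforms.append(Transform(compute, inputs=i, outputs=o))
    return transforms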

How to define Lambda variables to query Redshift over API Gateway

I'm facing a problem with how to set parameters for API Gateway to query Amazon Redshift with a Lambda function.
My connection is working properly, but I get the full table in the response every time.
I need to define variables so that a user can query specific parameters, values, and schemas.
Can someone suggest an example of how to set this up?
My config is:
#!/usr/bin/env python
import psycopg2
import logging
import traceback
import json
from os import environ

query = "SELECT * from public"

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def make_connection():
    conn = psycopg2.connect(dbname='database', host='redshift-cluster.amazonaws.com',
                            port='5439', user='user', password='password')
    conn.autocommit = True
    return conn

def log_err(errmsg):
    logger.error(errmsg)
    return {"body": errmsg, "headers": {}, "statusCode": 400,
            "isBase64Encoded": "false"}

logger.info("Cold start complete.")
print('Loading Function')

def handler(event, context):
    try:
        cnx = make_connection()
        cursor = cnx.cursor()
        try:
            cursor.execute(query)
        except:
            return log_err("ERROR: Cannot execute cursor.\n{}".format(
                traceback.format_exc()))
        try:
            results_list = []
            for result in cursor:
                results_list.append(result)
            print(results_list)
            cursor.close()
        except:
            return log_err("ERROR: Cannot retrieve query data.\n{}".format(
                traceback.format_exc()))
        return {"body": str(results_list), "headers": {}, "statusCode": 200,
                "isBase64Encoded": "false"}
    except:
        return log_err("ERROR: Cannot connect to database from handler.\n{}".format(
            traceback.format_exc()))
    finally:
        try:
            cnx.close()
        except:
            pass

if __name__ == "__main__":
    handler(None, None)
Rather than using psycopg2, I would recommend using the Amazon Redshift Data API. It provides a much easier way to run a query on Amazon Redshift.
First, send the query with execute_statement(), then retrieve the results with get_statement_result().
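A minimal sketch of that approach with boto3 (the cluster identifier, database, user, and the city filter are placeholder assumptions; the SQL uses a named parameter so callers can restrict the query instead of always getting the full table back):

import time
import boto3

client = boto3.client('redshift-data')

def query_redshift(schema, table, city):
    # Identifiers (schema/table) cannot be bound as parameters, so validate
    # them against an allow-list before interpolating them into the SQL.
    response = client.execute_statement(
        ClusterIdentifier='redshift-cluster',   # placeholder
        Database='database',                    # placeholder
        DbUser='user',                          # placeholder
        Sql='SELECT * FROM {}.{} WHERE city = :city'.format(schema, table),
        Parameters=[{'name': 'city', 'value': city}],
    )
    statement_id = response['Id']

    # Simple polling; production code would add a timeout.
    while True:
        status = client.describe_statement(Id=statement_id)['Status']
        if status in ('FINISHED', 'FAILED', 'ABORTED'):
            break
        time.sleep(0.5)

    if status != 'FINISHED':
        raise RuntimeError('Query ended with status {}'.format(status))

    return client.get_statement_result(Id=statement_id)['Records']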

error within if condition - 'encrypt' expected type 'bool', got unconvertible type 'string'

I'm trying to define a config block for two environments, local and cloud, using an if/else condition, but I get an error message for the encrypt attribute of the S3 bucket: 'encrypt' expected type 'bool', got unconvertible type 'string'.
If I remove the if/else condition block it works, but I need to choose between the two environments, so I have to use the if/else condition.
The config block code:
config = local.is_local_environment ? {
  # Local configuration
  path = "${path_relative_to_include()}/terraform.tfstate"
} : {
  # Cloud configuration
  bucket         = "my-bucket"
  key            = "terraform/${path_relative_to_include()}/terraform.tfstate"
  region         = local.region
  encrypt        = true
  dynamodb_table = "terraform-lock"
}
The issue is that local backends don't take any of this configuration, and the two branches of a conditional must resolve to a single common type, so Terraform coerces the object values to strings (which is why encrypt arrives as a string). Use null for the local case:
config = local.is_local_environment ? null : {
  # Cloud configuration
  bucket         = "my-bucket"
  key            = "terraform/${path_relative_to_include()}/terraform.tfstate"
  region         = local.region
  encrypt        = true
  dynamodb_table = "terraform-lock"
}

Terraform: error complying with Lambda function name restrictions

In Terraform, I'm trying to add an S3 bucket as a trigger to my Lambda function and grant the required permissions. For this use case I'm creating an S3 resource and referring to the Lambda function in the trigger logic, but the code fails with the error below.
# Creating Lambda resource
resource "aws_lambda_function" "test_lambda" {
  filename      = "output/welcome.zip"
  function_name = var.function_name
  role          = var.role_name
  handler       = var.handler_name
  runtime       = var.run_time
}

# Creating s3 resource for invoking to lambda function
resource "aws_s3_bucket" "bucket" {
  bucket = "source-bucktet-testing"
  acl    = "private"

  tags = {
    Name        = "source-bucktet-testing"
    Environment = "Dev"
  }
}

# Adding S3 bucket as trigger to my lambda and giving the permissions
resource "aws_s3_bucket_notification" "aws-lambda-trigger" {
  bucket = "aws_s3_bucket.bucket.id"

  lambda_function {
    lambda_function_arn = "aws_lambda_function.test_lambda.arn"
    events              = ["s3:ObjectCreated:*"]
    filter_prefix       = "file-prefix"
    filter_suffix       = "file-extension"
  }
}

resource "aws_lambda_permission" "test" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = "aws_lambda_function.test_lambda.function_name"
  principal     = "s3.amazonaws.com"
  source_arn    = "arn:aws:s3:::aws_s3_bucket.bucket.id"
}
Error message:
The value passed to the aws_lambda_function resource for function_name is invalid. An AWS Lambda function name can only contain letters, numbers, hyphens, or underscores with no spaces. You need to change the value of var.function_name to align with these restrictions.
Your var.function_name must be invalid.
The allowed function name format is explained here, along with the ARN form:
The length constraint applies only to the full ARN. If you specify only the function name, it is limited to 64 characters in length.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 140.
Pattern: (arn:(aws[a-zA-Z-]*)?:lambda:)?([a-z]{2}(-gov)?-[a-z]+-\d{1}:)?(\d{12}:)?(function:)?([a-zA-Z0-9-_]+)(:(\$LATEST|[a-zA-Z0-9-_]+))?
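A quick way to sanity-check a candidate value for var.function_name against the documented pattern (a sketch in Python; the regex is the one quoted above, and the length check is the simplified 1-140 constraint, while name-only values are additionally limited to 64 characters):

import re

# The pattern quoted above from the Lambda FunctionName documentation.
PATTERN = re.compile(
    r"(arn:(aws[a-zA-Z-]*)?:lambda:)?"
    r"([a-z]{2}(-gov)?-[a-z]+-\d{1}:)?"
    r"(\d{12}:)?"
    r"(function:)?"
    r"([a-zA-Z0-9-_]+)"
    r"(:(\$LATEST|[a-zA-Z0-9-_]+))?"
)

def is_valid_function_name(value):
    if not 1 <= len(value) <= 140:
        return False
    return PATTERN.fullmatch(value) is not None

print(is_valid_function_name("my_lambda-function"))  # True
print(is_valid_function_name("my lambda function"))  # False: spaces are not allowed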

Properly accessing cluster_config '__default__' values

I have a cluster.json file that looks like this:
{
    "__default__":
    {
        "queue": "normal",
        "memory": "12288",
        "nCPU": "1",
        "name": "{rule}_{wildcards.sample}",
        "o": "logs/cluster/{wildcards.sample}/{rule}.o",
        "e": "logs/cluster/{wildcards.sample}/{rule}.e",
        "jvm": "10240m"
    },
    "aln_pe":
    {
        "memory": "61440",
        "nCPU": "16"
    },
    "GenotypeGVCFs":
    {
        "jvm": "102400m",
        "memory": "122880"
    }
}
In my Snakefile I have a few rules that try to access the cluster_config object in their params:
params:
    memory=cluster_config['__default__']['jvm']
But this will give me a 'KeyError'
KeyError in line 27 of home/bwubb/projects/Germline/S0330901/haplotype.snake:
'__default__'
Does this have something to do with '__default__' being a special object? It pprints as a visually appealing dictionary, whereas the others are labeled OrderedDict, but when I look at the JSON they look the same.
If nothing is wrong with my JSON, should I refrain from accessing '__default__'?
The default values are accessed via the keyword "cluster", not "__default__".
See this example in the tutorial:
{
    "__default__":
    {
        "account": "my account",
        "time": "00:15:00",
        "n": 1,
        "partition": "core"
    },
    "compute1":
    {
        "time": "00:20:00"
    }
}
The JSON at the URL above (and listed above) is the one being accessed in this example; it's unfortunate they are not on the same page.
To access time, J.K. uses the following call:
#!/usr/bin/env python3
import os
import sys

from snakemake.utils import read_job_properties

jobscript = sys.argv[1]
job_properties = read_job_properties(jobscript)

# do something useful with the threads
threads = job_properties["threads"]

# access a property defined in the cluster configuration file (Snakemake >=3.6.0)
time_limit = job_properties["cluster"]["time"]

os.system("qsub -t {threads} {script}".format(threads=threads, script=jobscript))
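If you do want to read these values inside the Snakefile itself (as in the params example above), a pattern that avoids the KeyError for rules without their own entry is to fall back from the rule-specific section to __default__. This is only a sketch, and it assumes cluster_config is actually populated, i.e. Snakemake was started with --cluster-config cluster.json (an empty cluster_config would also explain a KeyError on '__default__'):

def cluster_value(cluster_config, rule_name, key):
    # Look up a cluster setting for a rule, falling back to __default__.
    rule_section = cluster_config.get(rule_name, {})
    default_section = cluster_config.get("__default__", {})
    return rule_section.get(key, default_section.get(key))

# e.g. in a rule:
# params:
#     memory=cluster_value(cluster_config, "GenotypeGVCFs", "jvm")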