writing a cfn template to trigger fargate via cloudwatch events

I am unable to get TaskDefinitionArn into a variable.
I am trying to do the below:
cloudwatchTriggerForLambdaFunction:
  Type: 'AWS::Events::Rule'
  Properties:
    Description: 'Trigger Lambda function according to the specified schedule'
    ScheduleExpression: !Ref CronExpression
    State: ENABLED
    Targets:
      - Arn: !Sub '${LambdaFunction.Arn}'
        Id: cloudwatchTriggerForLambdaFunction
      - Arn: !GetAtt FargateLauncher.Arn
        Id: fargate-launcher
        Input: !Sub |
          {
            "taskDefinition": "${TaskDefinitionArn}"
          }
but the above throws an error like the one below:
An error occurred (ValidationError) when calling the CreateStack operation: Template error: instance of Fn::Sub references invalid resource attribute TaskDefinitionArn.
I cannot pass the value of TaskDefinitionArn in as a parameter, as the task definition is going to be created at runtime, so I must get it like above. Please suggest a solution. Thanks in advance.

I had to change my approach a bit.
I am now using a direct CloudWatch trigger on the Fargate task instead of running a Lambda function to trigger the Fargate task.
That way, this query becomes invalid.
If you try this approach, do try to create the ARN manually, like
**arn:aws:ecs:${AWS::Region}:${AWS::AccountId}**
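For reference, a minimal sketch of what such a direct rule can look like. The EcsParameters key names come from the AWS::Events::Rule docs; everything else is an assumption: EcsCluster, TaskDefinition, and EcsEventsRole are placeholder logical IDs, and the subnet ID is fake.

ScheduledFargateTask:
  Type: 'AWS::Events::Rule'
  Properties:
    ScheduleExpression: !Ref CronExpression
    State: ENABLED
    Targets:
      - Arn: !GetAtt EcsCluster.Arn             # the rule targets the ECS cluster
        Id: scheduled-fargate-task
        RoleArn: !GetAtt EcsEventsRole.Arn      # role trusted by events.amazonaws.com with ecs:RunTask
        EcsParameters:
          TaskDefinitionArn: !Ref TaskDefinition  # Ref on AWS::ECS::TaskDefinition returns its ARN
          TaskCount: 1
          LaunchType: FARGATE
          NetworkConfiguration:
            AwsVpcConfiguration:
              Subnets:
                - subnet-00000000               # placeholder
              AssignPublicIp: ENABLED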

Since your resource name is TaskDefinition, you should reference it by name.

{
  "taskDefinition": "${TaskDefinition}"
}
But according to this AWS documentation, the ECS event rule target should be defined as below:

{
  "Group" : String,
  "LaunchType" : String,
  "NetworkConfiguration" : NetworkConfiguration,
  "PlatformVersion" : String,
  "TaskCount" : Integer,
  "TaskDefinitionArn" : String
}

therefore the key should be TaskDefinitionArn, not taskDefinition. Please have a look at the reference.
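Putting the two corrections together, the Input block from the question might look like this (a sketch only; TaskDefinition is the assumed logical ID of the AWS::ECS::TaskDefinition resource, and the key name is whatever your launcher Lambda reads):

Input: !Sub |
  {
    "TaskDefinitionArn": "${TaskDefinition}"
  }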

I agree - but I am using the approach in the link below to run a Fargate task with a CloudWatch trigger, where TaskDefinitionArn is used as a parameter, which I don't want to do. I want to get the value of the ARN itself while running my task.
Let me know if you did not get my query.
creating a 'Target' for a cloudwatch event rule via cloudformation for a fargate launchtype task

Related

error creating Application AutoScaling Target: ValidationException: Unsupported service namespace, resource type or scalable dimension

I'm trying to enable ECS autoscaling for some Fargate services and run into the error in the title:
error creating Application AutoScaling Target: ValidationException: Unsupported service namespace, resource type or scalable dimension
The error happens on line 4 here:
resource "aws_appautoscaling_target" "autoscaling" {
max_capacity = var.max_capacity
min_capacity = 1
resource_id = var.resource_id
// <snip... a bunch of other vars not relevant to question>
I call the custom autoscaling module like so:
module "myservice_autoscaling" {
source = "../autoscaling"
resource_id = aws_ecs_service.myservice_worker.id
// <snip... a bunch of other vars not relevant to question>
My service is a normal ECS service block starting with:
resource "aws_ecs_service" "myservice_worker" {
After poking around online, I thought maybe I should construct the "service/clusterName/serviceName" sort of "manually", like so:
resource_id = "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}"
But that leads to a different error:
The argument "cluster_name" is required, but no definition was found.
I created cluster_name in the variables.tf of my calling module (i.e. the myservice ECS module that calls my new autoscaling module). And I have cluster_name in the outputs.tf of our cluster module, where we're setting up the ECS cluster. I must still be missing some linking.
Any ideas? Thanks!
Edit: here's the solution that got it working for me
Yes, you do need to construct the resource_id in the form of "service/yourClusterName/yourServiceName". Mine ended up looking like: "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}"
You need to make sure you have access to the cluster name and service name variables. In my case, though I had the variable defined in my ECS service's variables.tf and had added it to my cluster module's outputs.tf, I was failing to pass it down from the root module to the service module. This fixed that:
module "myservice" {
source = "./modules/myservice"
cluster_name = module.cluster.cluster_name // the line I added
(the preceding snippet goes in the main.tf of your root module (a level above your service module)
You are on the right track constructing the "service/${var.cluster_name}/${aws_ecs_service.myservice_worker.name}" string. It looks like you simply aren't referencing the cluster name correctly.
And I have cluster_name in the outputs.tf of our cluster module
So you need to reference that module output, instead of referencing a non-existent variable:
"service/${module.my_cluster_module.cluster_name}/${aws_ecs_service.myservice_worker.name}"
Change "my_cluster_module" to whatever name you gave the module that is creating your ECS cluster.

Have the same lambda configurations for multiple events in CloudFormation

I'm writing CloudFormation code to create and configure an S3 bucket.
As part of the configuration, I'm adding a Lambda trigger for 2 events. It's the same Lambda.
How can I write this code? Should I duplicate the section or can I map the two events to the same behavior?
Here is the code
MyBucket:
  Condition: CreateNewBucket
  Type: AWS::S3::Bucket
  Properties:
    ## ...Bucket Config comes here... ##
    ## The interesting part ##
    NotificationConfiguration:
      LambdaConfigurations:
        - Event: 's3:ObjectCreated:Put'
          Function: My Lambda name
          Filter:
            S3Key:
              Rules:
                - Name: prefix
                  Value: 'someValue'
Is there an option to write:

LambdaConfigurations:
  - Events: ['s3:ObjectCreated:Put', 's3:ObjectCreated:Post']

Or maybe:

LambdaConfigurations:
  - Event: 's3:ObjectCreated:Put'
  - Event: 's3:ObjectCreated:Post'
  ...

Or do I need to copy-paste the block twice?
I can't find an example of this behavior.
Sorry if this is a trivial question, I'm new to CloudFormation.
Thanks!
It depends on exactly what you want to trigger the Lambda function. If you want the event to fire on all object-create events (s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, and s3:ObjectCreated:CompleteMultipartUpload), you can simply use 's3:ObjectCreated:*' as the value for Event.
If you want Put and Post specifically, you'll need to supply two event configurations, one for Put and another for Post. The CloudFormation parser will accept multiple Event elements in a LambdaConfiguration, but only the last one to appear is applied to the event notification.
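For the Put/Post case, the duplicated block would look something like this (a sketch; the function ARN and filter value are placeholders):

NotificationConfiguration:
  LambdaConfigurations:
    - Event: 's3:ObjectCreated:Put'
      Function: arn:aws:lambda:us-east-1:123456789012:function:my-function   # placeholder ARN
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: 'someValue'
    - Event: 's3:ObjectCreated:Post'
      Function: arn:aws:lambda:us-east-1:123456789012:function:my-function   # same function, second event
      Filter:
        S3Key:
          Rules:
            - Name: prefix
              Value: 'someValue'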
This is an interesting divergence in console/API functionality vs CloudFormation/CDK functionality. The PutBucketNotificationConfiguration API operation accepts LambdaFunctionConfiguration arguments that support multiple individual events. PutBucketNotificationConfiguration replaces PutBucketNotification, which accepted CloudFunctionConfiguration, which has a deprecated Event element. So I wonder if CloudFormation still refers to the older API operation.

How to mix env variables with output variables in the environment declaration

So I have an env.yml file which lets me have different variables for each stage:

provider:
  name: aws
  environment: ${file(env.yml):${opt:stage}}
I also need to share some output variables to Lambda, which are declared like so:

Outputs:
  UserPoolId:
    Value:
      Ref: QNABUserPool
    Export:
      Name: ${self:provider.stage}-UserPoolId
  UserPoolClientId:
    Value:
      Ref: QNABUserPoolClient
    Export:
      Name: ${self:provider.stage}-UserPoolClientId
I've seen I can do this by adding this to my provider, but it conflicts with my env.yml:

environment:
  COGNITO_USER_POOL_ID: ${cf:${self:service}-${self:provider.stage}.UserPoolId}
  COGNITO_USER_POOL_CLIENT_ID: ${cf:${self:service}-${self:provider.stage}.UserPoolClientId}
I tried putting these into the env.yml but that didn't work:
Trying to request a non exported variable from CloudFormation. Stack name: "XXXX-alpha" Requested variable: "UserPoolId".
I tried using custom instead of environment and it deployed but the Lambda functions no longer had access to the variables.
So how can I mix these two together?
Thank you so much!
You can reference the Output values from your current service using the Fn::ImportValue function.
The Serverless Framework adds sls-[service_name] to the variable, but you can find them in the Outputs area of the CloudFormation stack.
Navigate to CloudFormation --> Stacks --> [select your service] --> Outputs (tab). From there you'll see a column called Export name.
Use that Export name for the import.
E.g. you have a WebSocket service and you need the service endpoint. If you look in the tab it will have an export like sls-wss-[your_service_name]-[stage]-ServiceEndpointWebsocket. Thus, you can import that into an environment variable:
Environment:
  Variables:
    ENDPOINT:
      Fn::ImportValue: sls-wss-[your_service_name]-${opt:stage}-ServiceEndpointWebsocket
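In this question's case, one way to mix the two (an untested sketch; myFunction is a hypothetical function name) is to keep the stage file at the provider level and add the imports per function, since Serverless merges function-level environment on top of provider-level environment:

provider:
  name: aws
  environment: ${file(env.yml):${opt:stage}}    # stage-specific variables stay here

functions:
  myFunction:                                    # hypothetical function name
    handler: handler.main
    environment:
      # merged on top of the provider-level environment at deploy time
      COGNITO_USER_POOL_ID:
        Fn::ImportValue: ${self:provider.stage}-UserPoolId
      COGNITO_USER_POOL_CLIENT_ID:
        Fn::ImportValue: ${self:provider.stage}-UserPoolClientId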

Referencing Serverless Output returns "[object Object]" instead of arn

I'm using State Machines for the first time, and I'm having trouble referencing the State Machine ARN in my Lambda function. I've tried following this article and the docs, but I must be missing something, because instead of the ARN, I'm getting "[object Object]".
Environment variable:

environment:
  EMAIL_STATE_MACHINE: ${self:resources.Outputs.EmailQueueStateMachine.Value}
Output:

Outputs:
  EmailQueueStateMachine:
    Description: The ARN of the email delivery state machine
    Value:
      Ref: EmailQueueStateMachine
State machine:

stepFunctions:
  stateMachines:
    ReportDeliveryEmailQueueStateMachine:
      name: emailQueueStateMachine
      id: EmailQueueStateMachine
      dependsOn:
        - EmailTemplatesDynamoDbTable
      definition:
        <definition>
Everything I've read has this same setup, so I know I must be missing something obvious.
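A likely cause, for anyone landing here: ${self:resources.Outputs.EmailQueueStateMachine.Value} resolves to the YAML object {Ref: EmailQueueStateMachine}, which gets stringified to "[object Object]" when dropped into an environment variable. A sketch of the usual workaround is to skip the Output lookup and use the Ref intrinsic directly, which CloudFormation resolves to the state machine's ARN at deploy time:

environment:
  EMAIL_STATE_MACHINE:
    Ref: EmailQueueStateMachine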

What is "hellostepfunc1" in the serverless documenation for setup AWS stepfunctions?

In this documentation from the serverless website - How to manage your AWS Step Functions with Serverless and GitHub - serverless-step-functions - we can find the word hellostepfunc1: in the serverless.yml file. I don't understand what it is, and I can't find any reference to it, even after the state machine was created in AWS.
If I delete it, I get the following error:

Cannot use 'in' operator to search for 'role' in myStateMachine

But if I change its name to someName, for example, I get no error and the state machine works fine.
I assume it is only an identifier, but I'm not sure.
Where can I find a reference to it?
This is quite specific to the library you are using and how it names the state machine being created, based on whether the name: field is provided under hellostepfunc1: or not.
Have a look at the test cases here and here to understand better.
In short, a .yml like

stateMachines:
  hellostepfunc1:
    definition:
      Comment: 'comment 1'
      .....

has a state machine named hellostepfunc1StepFunctionsStateMachine, as no name was specified.
Whereas for a .yml like

stateMachines:
  hellostepfunc1:
    name: 'alpha'
    definition:
      Comment: 'comment 1'
      .....

the name of the state machine is alpha, as a name was specified.