Serverless Framework: How to properly define parallel branches?

I am trying to translate the following Amazon Step Functions definition from JSON to Serverless YML.
Here is the JSON version (which is working fine):
{
  "Comment": "Parallel Evaluation of multiple Book Pricing attributes.",
  "StartAt": "VerifyBookPricingAttributes",
  "States": {
    "VerifyBookPricingAttributes": {
      "Type": "Parallel",
      "Next": "ReturnCombinedData",
      "Branches": [{
          "StartAt": "ConfirmBookAvailability",
          "States": {
            "ConfirmBookAvailability": {
              "Type": "Task",
              "Comment": "This state will query DynamoDB table representing RS catalog. If the Book is found - availability will be confirmed",
              "Resource": "arn:aws:lambda:us-east-1:000000000000:function:ConfirmBookAvailability",
              "ResultPath": "$.BookAvailability",
              "End": true
            }
          }
        },
        {
          "StartAt": "ConfirmBookPriceIsValid",
          "States": {
            "ConfirmBookPriceIsValid": {
              "Type": "Task",
              "Comment": "This state will query DynamoDB table representing Book Prices. If the input BookPrice matches the Dynamo value - the pricing will be confirmed",
              "Resource": "arn:aws:lambda:us-east-1:000000000000:function:ConfirmBookPriceIsValid",
              "ResultPath": "$.IsBookPriceValid",
              "End": true
            }
          }
        }
      ]
    },
    "ReturnCombinedData": {
      "Type": "Pass",
      "Parameters": {
        "comment": "Combining the result",
        "CombinedDetails": {
          "BookAvailability.$": "$[0].BookAvailability",
          "IsBookPriceValid.$": "$[1].IsBookPriceValid"
        }
      },
      "End": true
    }
  }
}
The thing to note is the Parallel state type with its Branches array.
I've started translating this into Serverless YML:
stepFunctions:
  stateMachines:
    Process-BookPricingCreated-StateMachine:
      name: myStateMachine
      definition:
        StartAt: VerifyBookPricingAttributes
        States:
          VerifyBookPricingAttributes:
            Type: Parallel
            Next: ReturnCombinedData
            Branches:
              StartAt: ConfirmBookAvailability
              States:
                ConfirmBookAvailability:
                  Type: Task
                  Resource: arn:aws:lambda:us-east-1:000000000000:function:ConfirmBookAvailability
                  ResultPath": $.BookAvailability
                  End: true
              StartAt: ConfirmBookPriceIsValid
              States:
                ConfirmBookPriceIsValid:
                  Type: Task
                  Resource: arn:aws:lambda:us-east-1:000000000000:function:ConfirmBookPriceIsValid
                  ResultPath: $.IsBookPriceValid
                  End: true
I am running into an issue where Serverless complains about StartAt and States being duplicate keys (since those are meant to be separate parallel branches).
How do I properly deal with the Parallel Branches using Serverless Framework?

Branches should be a list, and it looks like you have an extra " in your first ResultPath:
stepFunctions:
  stateMachines:
    Process-BookPricingCreated-StateMachine:
      name: myStateMachine
      definition:
        StartAt: VerifyBookPricingAttributes
        States:
          VerifyBookPricingAttributes:
            Type: Parallel
            Next: ReturnCombinedData
            Branches:
              - StartAt: ConfirmBookAvailability
                States:
                  ConfirmBookAvailability:
                    Type: Task
                    Resource: arn:aws:lambda:us-east-1:000000000000:function:ConfirmBookAvailability
                    ResultPath: $.BookAvailability
                    End: true
              - StartAt: ConfirmBookPriceIsValid
                States:
                  ConfirmBookPriceIsValid:
                    Type: Task
                    Resource: arn:aws:lambda:us-east-1:000000000000:function:ConfirmBookPriceIsValid
                    ResultPath: $.IsBookPriceValid
                    End: true
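Note that the definition above still needs the ReturnCombinedData state that Next points to. Translating the Pass state from the original JSON along the same lines, it sits under States at the same level as VerifyBookPricingAttributes (a sketch following the same conventions):
        States:
          # ... VerifyBookPricingAttributes as above ...
          ReturnCombinedData:
            Type: Pass
            Parameters:
              comment: Combining the result
              CombinedDetails:
                BookAvailability.$: $[0].BookAvailability
                IsBookPriceValid.$: $[1].IsBookPriceValid
            End: true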

Related

BigQuery: Routine deployment failing with error "Unknown option: description"

We use terraform to deploy BigQuery objects (datasets, tables, routines etc.) to region europe-west2 in GCP. We do this many times a day, and all of a sudden, at "2021-08-18T21:15:44.033910202Z", our deployments started failing when attempting to deploy BigQuery routines. They are all failing with errors of the form:
status: {
  code: 3
  message: "Unknown option: description"
}
Here is the first log message I can find pertaining to this error (I have redacted project names):
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 3,
"message": "Unknown option: description"
},
"authenticationInfo": {
"principalEmail": "deployer-dev#myadminproject.iam.gserviceaccount.com",
"serviceAccountDelegationInfo": [
{
"firstPartyPrincipal": {
"principalEmail": "deployer-dev#myadminproject.iam.gserviceaccount.com"
}
}
]
},
"requestMetadata": {
"callerIp": "10.51.0.116",
"callerSuppliedUserAgent": "Terraform/0.14.7 (+https://www.terraform.io) Terraform-Plugin-SDK/2.5.0 terraform-provider-google/3.69.0,gzip(gfe)",
"callerNetwork": "//compute.googleapis.com/projects/myadminproject/global/networks/__unknown__",
"requestAttributes": {},
"destinationAttributes": {}
},
"serviceName": "bigquery.googleapis.com",
"methodName": "google.cloud.bigquery.v2.RoutineService.InsertRoutine",
"authorizationInfo": [
{
"resource": "projects/myproject/datasets/p00003818_dp_model",
"permission": "bigquery.routines.create",
"granted": true,
"resourceAttributes": {}
}
],
"resourceName": "projects/myproject/datasets/p00003818_dp_model/routines/UserProfile_Events_AllCarData_Deployment",
"metadata": {
"routineCreation": {
"routine": {
"routineName": "projects/myproject/datasets/p00003818_dp_model/routines/UserProfile_Events_AllCarData_Deployment"
},
"reason": "ROUTINE_INSERT_REQUEST"
},
"#type": "type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata"
}
},
"insertId": "ak27xdbke",
"resource": {
"type": "bigquery_dataset",
"labels": {
"dataset_id": "p00003818_dp_model",
"project_id": "myproject"
}
},
"timestamp": "2021-08-18T21:15:43.109609Z",
"severity": "ERROR",
"logName": "projects/myproject/logs/cloudaudit.googleapis.com%2Factivity",
"receiveTimestamp": "2021-08-18T21:15:44.033910202Z"
}
The fact that this occurred without any changes on our side indicates that this is a problem at the Google end. I also observe that, whilst we witnessed this in a few projects, it occurred first in one project and then a few minutes later in another; that may or may not be helpful information.
Posting here in case anyone else hits this problem and also hoping it might catch the attention of a googler.
UPDATE! I have reproduced the problem using the REST API: https://cloud.google.com/bigquery/docs/reference/rest/v2/routines/insert
A payload that does not include a description successfully creates a routine. However, if I include a description, which the API reference lists as a valid parameter, then the request fails with the same "Unknown option: description" error.
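For reference, the kind of minimal routines.insert request body involved looks roughly like this (a sketch with a hypothetical routine name and definition; the description field is the one that triggers the error):
{
  "routineReference": {
    "projectId": "myproject",
    "datasetId": "p00003818_dp_model",
    "routineId": "my_test_routine"
  },
  "routineType": "SCALAR_FUNCTION",
  "language": "SQL",
  "definitionBody": "x * 2",
  "arguments": [
    { "name": "x", "dataType": { "typeKind": "INT64" } }
  ],
  "returnType": { "typeKind": "INT64" },
  "description": "Some description"
}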

dotnet-monitor and OpenTelemetry?

I'm learning OpenTelemetry and I wonder how dotnet-monitor is connected with OpenTelemetry (Meter). Are those things somehow connected, or is dotnet-monitor just a custom MS tool that does not use the OpenTelemetry standards (API, SDK and exporters)?
If you run dotnet-monitor on your machine, it exposes the dotnet metrics in Prometheus format, which means you can configure the OpenTelemetry Collector to scrape those metrics.
For example, in an opentelemetry-collector-contrib configuration:
receivers:
  prometheus_exec:
    exec: dotnet monitor collect
    port: 52325
Please note that for dotnet-monitor to run you need to create a settings.json file in this path:
$XDG_CONFIG_HOME/dotnet-monitor/settings.json
If $XDG_CONFIG_HOME is not defined, create the file in this path:
$HOME/.config/dotnet-monitor/settings.json
If you want to identify the process by its PID, write this into settings.json (change Value to your PID):
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessId",
      "Value": "1"
    }]
  }
}
If you want to identify the process by its name, write this into settings.json (change Value to your process name):
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessName",
      "Value": "iisexpress"
    }]
  }
}
In my example I used this configuration:
{
  "DefaultProcess": {
    "Filters": [{
      "Key": "ProcessId",
      "Value": "1"
    }]
  },
  "Metrics": {
    "Providers": [
      {
        "ProviderName": "System.Net.Http"
      },
      {
        "ProviderName": "Microsoft-AspNetCore-Server-Kestrel"
      }
    ]
  }
}
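If dotnet-monitor is already running on its own (for example started separately with dotnet monitor collect), another option is a plain prometheus receiver that scrapes its metrics endpoint directly; a minimal sketch, assuming the default port 52325 and the default /metrics path:
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: dotnet-monitor
          scrape_interval: 10s
          static_configs:
            - targets: ['localhost:52325']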

How to pass AWS Lambda error in AWS SNS notification through AWS Step Functions?

I have created an AWS Step Function which triggers Lambda Python code, terminates without error if the Lambda succeeds, and otherwise calls an SNS topic to message the subscribed users if the Lambda fails. It is running, but the notification message is fixed (hard-coded). The Step Function JSON is as follows:
{
  "StartAt": "Lambda Trigger",
  "States": {
    "Lambda Trigger": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-2:xxxxxxxxxxxx:function:helloworldTest",
      "End": true,
      "Catch": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "ResultPath": "$.error",
          "Next": "Notify Failure"
        }
      ]
    },
    "Notify Failure": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "Message": "Batch job submitted through Step Functions failed with the following error, $.error",
        "TopicArn": "arn:aws:sns:us-east-2:xxxxxxxxxxxx:lambda-execution-failure"
      },
      "End": true
    }
  }
}
The only thing is, I want to append the failure error details to my message string. I tried the above, but it is not working as expected: the mail I receive contains the literal "$.error" text rather than the actual error. How do I go about it?
I could solve the problem using "Error.$": "$.Cause".
The following is a working example of the failure portion of state machine:
"Job Failure": {
"Type": "Task",
"Resource": "arn:aws:states:::sns:publish",
"Parameters": {
"Subject": "Lambda Job Failed",
"Message": {
"Alarm": "Lambda Job Failed",
"Error.$": "$.Cause"
},
"TopicArn": "arn:aws:sns:us-east-2:xxxxxxxxxxxx:Job-Run-Notification"
},
"End": true
}
Hope this helps!
Here is the full version of the code
{
  "Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
  "StartAt": "HelloWorld",
  "States": {
    "HelloWorld": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:StepFunctionTest",
      "End": true,
      "Catch": [
        {
          "ErrorEquals": [
            "States.ALL"
          ],
          "Next": "NotifyFailure"
        }
      ]
    },
    "NotifyFailure": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "Subject": "[ERROR]: Task failed",
        "Message": {
          "Alarm": "Batch job submitted through Step Functions failed with the following error",
          "Error.$": "$.Cause"
        },
        "TopicArn": "arn:aws:sns:us-east-1:XXXXXXXXXXXXX:Notificaiton"
      },
      "End": true
    }
  }
}
This line already appends the exception object to the 'error' path:
"ResultPath": "$.error"
We just need to pass '$' as Message.$ in the SNS task; both the input and the error details will then be sent to SNS.
{
  "TopicArn": "${SnsTopic}",
  "Message.$": "$"
}
If we don't want the input to the Lambda to be included in the email, we should skip ResultPath, or use just '$' as the ResultPath, so the input object is replaced by the error details:
"ResultPath": "$"

Is there a way to get Step Functions input values into EMR step Args

We are running batch spark jobs using AWS EMR clusters. Those jobs run periodically and we would like to orchestrate those via AWS Step Functions.
As of November 2019, Step Functions supports EMR natively. When adding a Step to the cluster, we can use the following config:
"Some Step": {
"Type": "Task",
"Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
"Parameters": {
"ClusterId.$": "$.cluster.ClusterId",
"Step": {
"Name": "FirstStep",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args": [
"spark-submit",
"--class",
"com.some.package.Class",
"JarUri",
"--startDate",
"$.time",
"--daysToLookBack",
"$.daysToLookBack"
]
}
}
},
"Retry" : [
{
"ErrorEquals": [ "States.ALL" ],
"IntervalSeconds": 1,
"MaxAttempts": 1,
"BackoffRate": 2.0
}
],
"ResultPath": "$.firstStep",
"End": true
}
Within the Args List of the HadoopJarStep we would like to set arguments dynamically. e.g. if the input of the state machine execution is:
{
"time": "2020-01-08",
"daysToLookBack": 2
}
The strings in the config starting with "$." should be replaced accordingly when executing the State Machine, and the step on the EMR cluster should run command-runner.jar spark-submit --class com.some.package.Class JarUri --startDate 2020-01-08 --daysToLookBack 2. But instead it runs command-runner.jar spark-submit --class com.some.package.Class JarUri --startDate $.time --daysToLookBack $.daysToLookBack.
Does anyone know if there is a way to do this?
Parameters allow you to define key-value pairs, so since the value for the "Args" key is an array, you won't be able to dynamically reference a specific element in the array; you would need to reference the whole array instead, for example "Args.$": "$.Input.ArgsArray".
So for your use-case the best way to achieve this would be to add a pre-processing state before calling this state. In the pre-processing state you can either call a Lambda function and format your input/output through code, or, for something as simple as adding a dynamic value to an array, you can use a Pass state to reformat the data. Then, inside your Task state Parameters, you can use JSONPath to get the array which you defined in the pre-processor. Here's an example:
{
  "Comment": "A Hello World example of the Amazon States Language using Pass states",
  "StartAt": "HardCodedInputs",
  "States": {
    "HardCodedInputs": {
      "Type": "Pass",
      "Parameters": {
        "cluster": {
          "ClusterId": "ValueForClusterIdVariable"
        },
        "time": "ValueForTimeVariable",
        "daysToLookBack": "ValueFordaysToLookBackVariable"
      },
      "Next": "Pre-Process"
    },
    "Pre-Process": {
      "Type": "Pass",
      "Parameters": {
        "FormattedInputsForEmr": {
          "ClusterId.$": "$.cluster.ClusterId",
          "Args": [
            {
              "Arg1": "spark-submit"
            },
            {
              "Arg2": "--class"
            },
            {
              "Arg3": "com.some.package.Class"
            },
            {
              "Arg4": "JarUri"
            },
            {
              "Arg5": "--startDate"
            },
            {
              "Arg6.$": "$.time"
            },
            {
              "Arg7": "--daysToLookBack"
            },
            {
              "Arg8.$": "$.daysToLookBack"
            }
          ]
        }
      },
      "Next": "Some Step"
    },
    "Some Step": {
      "Type": "Pass",
      "Parameters": {
        "ClusterId.$": "$.FormattedInputsForEmr.ClusterId",
        "Step": {
          "Name": "FirstStep",
          "ActionOnFailure": "CONTINUE",
          "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args.$": "$.FormattedInputsForEmr.Args[*][*]"
          }
        }
      },
      "End": true
    }
  }
}
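The "$.FormattedInputsForEmr.Args[*][*]" path selects every value inside every single-key object of the Args array, so with the hard-coded inputs above "Args.$" should resolve to a flat array along the lines of:
["spark-submit", "--class", "com.some.package.Class", "JarUri", "--startDate", "ValueForTimeVariable", "--daysToLookBack", "ValueFordaysToLookBackVariable"]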
You can use the States.Array() intrinsic function. Your Parameters becomes:
"Parameters": {
"ClusterId.$": "$.cluster.ClusterId",
"Step": {
"Name": "FirstStep",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args.$": "States.Array('spark-submit', '--class', 'com.some.package.Class', 'JarUri', '--startDate', $.time, '--daysToLookBack', '$.daysToLookBack')"
}
}
}
Intrinsic functions are documented here but I don't think it explains the usage very well. The code snippets provided in the Step Functions console are more useful.
Note that you can also do string formatting on the args using States.Format(). For example, you could construct a path using an input variable as the final path segment:
"Args.$": "States.Array('mycommand', '--path', States.Format('my/base/path/{}', $.someInputVariable))"

Cloudformation S3bucket creation

Here's the CloudFormation template I wrote to create a simple S3 bucket. How do I specify the name of the bucket? Is this the right way?
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Simple S3 Bucket",
  "Parameters": {
    "OwnerService": {
      "Type": "String",
      "Default": "CloudOps",
      "Description": "Owner or service name. Used to identify the owner of the vpc stack"
    },
    "ProductCode": {
      "Type": "String",
      "Default": "cloudops",
      "Description": "Lowercase version of the product code (i.e. jem). Used for tagging"
    },
    "StackEnvironment": {
      "Type": "String",
      "Default": "stage",
      "Description": "Lowercase version of the environment name (i.e. stage). Used for tagging"
    }
  },
  "Mappings": {
    "RegionMap": {
      "us-east-1": {
        "ShortRegion": "ue1"
      },
      "us-west-1": {
        "ShortRegion": "uw1"
      },
      "us-west-2": {
        "ShortRegion": "uw2"
      },
      "eu-west-1": {
        "ShortRegion": "ew1"
      },
      "ap-southeast-1": {
        "ShortRegion": "as1"
      },
      "ap-northeast-1": {
        "ShortRegion": "an1"
      },
      "ap-northeast-2": {
        "ShortRegion": "an2"
      }
    }
  },
  "Resources": {
    "JenkinsBuildBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": {
          "Fn::Join": [
            "-",
            [
              {
                "Ref": "ProductCode"
              },
              {
                "Ref": "StackEnvironment"
              },
              "deployment",
              {
                "Fn::FindInMap": [
                  "RegionMap",
                  {
                    "Ref": "AWS::Region"
                  },
                  "ShortRegion"
                ]
              }
            ]
          ]
        },
        "AccessControl": "Private"
      },
      "DeletionPolicy": "Delete"
    }
  },
  "Outputs": {
    "DeploymentBucket": {
      "Description": "Bucket Containing Chef files",
      "Value": {
        "Ref": "JenkinsBuildBucket"
      }
    }
  }
}
Here's a really simple CloudFormation template that creates an S3 bucket, including defining the bucket name.
AWSTemplateFormatVersion: '2010-09-09'
Description: create a single S3 bucket
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: sample-bucket-0827-cc
You can also leave the "Properties: BucketName" lines off if you want AWS to name the bucket for you. Then it will look like $StackName-SampleBucket-$uniqueIdentifier.
Hope this helps.
Your code has the BucketName already specified:
"BucketName": {
"Fn::Join": [
"-",
[
{
"Ref": "ProductCode"
},
{
"Ref": "StackEnvironment"
},
"deployment",
{
"Fn::FindInMap": [
"RegionMap",
{
"Ref": "AWS::Region"
},
"ShortRegion"
]
}
]
]
},
The BucketName is a string, and since you are using 'Fn Join', it will be combined of the functions you are joining.
"The intrinsic function Fn::Join appends a set of values into a single value, separated by the specified delimiter. If a delimiter is the empty string, the set of values are concatenated with no delimiter."
Your bucket name, if you don't change the defaults, is:
cloudops-stage-deployment-<ShortRegion>
For example, in us-east-1 that is cloudops-stage-deployment-ue1. If you change the default parameters, then both cloudops and stage can be changed; deployment is hard-coded, and the region suffix is pulled from where the stack is running and returned in short format via the Mapping.
To extend 'cloudquiz's answer, this is what it would look like in YAML format:
Resources:
  SomeS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::Join: ["-", ["yourbucketname", {'Fn::Sub': '${AWS::Region}'}, {'Fn::Sub': '${Stage}'}]]