AWS CloudFormation - Fn::Equals error: Conditions can only be boolean operations on parameters and other conditions

I am trying to create ACLs based on the environment and have the following condition.
Global:
  Env: stage
  Region: us-west-1
Conditions:
  IsStage: Fn::Equals [!Ref "Env", "stage"]
Resources:
  publicIngressVpc:
    Type: AWS::EC2::NetworkAclEntry
    Condition: IsStage
    Properties:
      NetworkAclId:
        Fn::ImportValue:
          !Sub ${VpcStack}-publicNetworkAclId
      RuleNumber: 150
      Protocol: -1 # -1 = all protocols
      RuleAction: allow
      CidrBlock: Some VPC
      PortRange:
        From: 1024
        To: 65535
I am getting the following error:
Template format error: Conditions can only be boolean operations on parameters and other conditions

Try this instead:
Conditions:
  IsStage:
    !Equals [ !Ref Env, 'stage' ]
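Note that the error message is literal: a condition can only be built from template Parameters (via Ref) and other Conditions. If Env only exists under the custom Global: section shown above, and nothing preprocesses it into a parameter, CloudFormation cannot see it. A minimal sketch of the shape CloudFormation expects, assuming Env is meant to be a template parameter (the AllowedValues are my illustration):

Parameters:
  Env:
    Type: String
    Default: stage
    AllowedValues: [stage, prod]  # illustrative values, not from the question
Conditions:
  IsStage: !Equals [!Ref Env, stage]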

Related

Serverless: TypeError: Cannot read property 'stage' of undefined

frameworkVersion: '2'
plugins:
  - serverless-step-functions
  - serverless-python-requirements
  - serverless-parameters
  - serverless-pseudo-parameters
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'}
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
package:
  exclude:
    - node_modules/**
    - venv/**
# Lambda functions
functions:
  generateAlert:
    handler: handler.generateAlert
  generateData:
    handler: handler.generateDataHandler
    timeout: 600
  approveDenied:
    handler: handler.approveDenied
    timeout: 600
stepFunctions:
  stateMachines:
    "claims-etl-and-insight-generation-${self:provider.stage}":
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: ["ETLStepFunctionLogGroup", Arn]
      name: "claims-etl-and-insight-generation-${self:provider.stage}"
      definition:
        Comment: "${self:provider.stage} ETL Workflow"
        StartAt: RawQualityJob
        States:
          # Raw Data Quality Check Job Start
          RawQualityJob:
            Type: Task
            Resource: arn:aws:states:::glue:startJobRun.sync
            Parameters:
              JobName: "data_quality_v2_${self:provider.stage}"
              Arguments:
                "--workflow-name": "${self:provider.stage}-Workflow"
                "--dataset_id.$": "$.datasetId"
                "--client_id.$": "$.clientId"
            Next: DataQualityChoice
            Retry:
              - ErrorEquals: [States.ALL]
                MaxAttempts: 2
                IntervalSeconds: 10
                BackoffRate: 5
            Catch:
              - ErrorEquals: [States.ALL]
                Next: GenerateErrorAlertDataQuality
          # End Raw Data Quality Check Job
          DataQualityChoice:
            Type: Task
            Resource:
              Fn::GetAtt: [approveDenied, Arn]
            Next: Is Approved ?
          Is Approved ?:
            Type: Choice
            Choices:
              - Variable: "$.quality_status"
                StringEquals: "Denied"
                Next: FailState
            Default: HeaderLineJob
          FailState:
            Type: Fail
            Cause: "Denied status"
          # Header Line Job Start
          HeaderLineJob:
            Type: Parallel
            Branches:
              - StartAt: HeaderLineIngestion
                States:
                  HeaderLineIngestion:
                    Type: Task
                    Resource: arn:aws:states:::glue:startJobRun.sync
                    Parameters:
                      JobName: headers_lines_etl_rs_v2
                      Arguments:
                        "--workflow-name.$": "$.Arguments.--workflow-name"
                        "--dataset_id.$": "$.Arguments.--dataset_id"
                        "--client_id.$": "$.Arguments.--client_id"
                    End: True
            Retry:
              - ErrorEquals: [States.ALL]
                MaxAttempts: 2
                IntervalSeconds: 10
                BackoffRate: 5
            Catch:
              - ErrorEquals: [States.ALL]
                Next: GenerateErrorAlertHeaderLine
            End: True
          # Header Line Job End
          GenerateErrorAlertDataQuality:
            Type: Task
            Resource:
              Fn::GetAtt: [generateAlert, Arn]
            End: true
resources:
  Resources:
    # Cloudwatch Log
    "ETLStepFunctionLogGroup":
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: "ETLStepFunctionLogGroup_${self:provider.stage}"
This is what my serverless.yml file looks like.
When I run the command:
sls deploy --stage staging
It shows:
Type Error ----------------------------------------------
TypeError: Cannot read property 'stage' of undefined
at Variables.getValueFromOptions (/snapshot/serverless/lib/classes/Variables.js:648:37)
at Variables.getValueFromSource (/snapshot/serverless/lib/classes/Variables.js:579:17)
at /snapshot/serverless/lib/classes/Variables.js:539:12
Your Environment Information ---------------------------
Operating System: linux
Node Version: 14.4.0
Framework Version: 2.30.3 (standalone)
Plugin Version: 4.5.1
SDK Version: 4.2.0
Components Version: 3.7.4
How can I fix this? I tried different versions of Serverless.
There is an error in the yamlParser file, which is provided by serverless-step-functions.
Above is my Serverless config file.
It looks like a $ sign is missing from your provider -> stage?
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'} # $ sign is missing?
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
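For context on why the $ matters (my own illustration, not part of the original answer): ${opt:stage, 'dev'} is Serverless variable syntax that reads the --stage CLI option and falls back to 'dev'. Without the leading $, the text is not treated as a variable at all, which would be consistent with the getValueFromOptions frame in the stack trace above. A minimal sketch of the working form:

provider:
  stage: ${opt:stage, 'dev'}  # "sls deploy --stage staging" resolves this to "staging"
custom:
  # hypothetical key, just to show the resolved value being reused
  logGroupName: "ETLStepFunctionLogGroup_${self:provider.stage}"  # -> ETLStepFunctionLogGroup_staging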

CloudWatch Event Rule doesn't invoke when source state machine fails

I have a step function that runs 2 separate lambdas. If the step function fails or times out, I want to get an email via SNS telling me the step function failed. I created the event rule using CloudFormation and specified the state machine ARN in the event pattern. When the step function fails, no email is sent out. If I remove the stateMachineArn parameter and run my step function, I get the failure email. I've double-checked numerous times that I'm entering the correct ARN for the state machine. CF for the Event Rule is below (in YAML format). Thanks.
FailureEvent:
  Type: AWS::Events::Rule
  DependsOn:
    - StateMachine
  Properties:
    Name: !Ref FailureRuleName
    Description: "EventRule"
    EventPattern:
      detail-type:
        - "Step Functions Execution Status Change"
      detail:
        status:
          - "FAILED"
          - "TIMED_OUT"
      stateMachineArn: ["arn:aws:states:region:account#:stateMachine:statemachine"]
    Targets:
      - Arn:
          Ref: SNSARN
        Id: !Ref SNSTopic
I did get this fixed and expanded on it to invoke a Lambda that publishes a custom SNS email. My alignment was off in my EventPattern section: stateMachineArn belongs under detail. See below. Thanks to @Marcin.
FailureEvent:
  Type: AWS::Events::Rule
  DependsOn:
    - FMIStateMachine
  Properties:
    Description: !Ref FailureRuleDescription
    Name: !Ref FailureRuleName
    State: "ENABLED"
    RoleArn:
      'Fn::Join': ["", ['arn:aws:iam::', !Ref 'AWS::AccountId', ':role/', !Ref LambdaExecutionRole]]
    EventPattern:
      detail-type:
        - "Step Functions Execution Status Change"
      detail:
        status:
          - "FAILED"
          - "TIMED_OUT"
        stateMachineArn: [!Ref StateMachine]
    Targets:
      - Arn:
          'Fn::Join': ["", ['arn:aws:lambda:', !Ref 'AWS::Region', ':', !Ref 'AWS::AccountId', ':function:', !Ref FailureLambda]]
        Id: !Ref FailureLambda
        Input: !Sub '{"failed_service": "${StateMachineName}","sns_arn": "arn:aws:sns:${AWS::Region}:${AWS::AccountId}:${SNSTopic}"}'
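One piece the fixed template does not show (my addition; the resource name is hypothetical): when the target is a Lambda function, CloudWatch Events/EventBridge also needs a resource-based permission to invoke it, otherwise the rule matches but the function never runs. A minimal sketch:

FailureLambdaInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref FailureLambda
    Action: lambda:InvokeFunction
    Principal: events.amazonaws.com      # CloudWatch Events / EventBridge service principal
    SourceArn: !GetAtt FailureEvent.Arn  # restrict invocation to this rule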

Unable to add NotificationConfiguration to S3 bucket

I created a CloudFormation template to create a bucket with a notification.
Following is the code:
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  CBRS3ToS3IADelay:
    Description: Number of days before an S3 object is transitioned from S3 to S3-IA
    Type: Number
    Default: 365
  CBRS3ToGlacierDelay:
    Description: Number of days before an S3-IA object is transitioned from S3-IA to Glacier.
    Type: Number
    Default: 1460
  CBRBucketName:
    Description: S3 bucket name
    Type: String
    Default: "my-bucket-test0011"
Resources:
  CBRS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Ref: CBRBucketName
      AccessControl: Private
      LifecycleConfiguration:
        Rules:
          - Id: CbrCertReportGlacierArchiveRule
            Status: Enabled
            Transitions:
              - StorageClass: STANDARD_IA
                TransitionInDays: !Ref CBRS3ToS3IADelay
              - StorageClass: GLACIER
                TransitionInDays: !Ref CBRS3ToGlacierDelay
      NotificationConfiguration:
        LambdaConfigurations:
          - Function: "arn:aws:lambda:xxxx:xxxx:function:xxxx"
            Event: "s3:ObjectCreated:Put"
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: ".gz"
      Tags:
        - Key: PRODUCT
          Value: CRAWS
      VersioningConfiguration:
        Status: Enabled
The code works without the notification block, but the above template is not working with the notification.
I am getting the following error:
Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument)
I am able to set this up from the console.
Can anyone help me fix this issue?
This is late, so I am mostly answering my own question (I just managed to fix the same problem): it fails because of a preliminary check that S3 is allowed to invoke the Lambda function. We will need this:
CBRS3BucketCanInvokeFunctionX:
  Type: 'AWS::Lambda::Permission'
  Properties:
    FunctionName: ARN_OF_FUNCTION_X
    Action: 'lambda:InvokeFunction'
    Principal: s3.amazonaws.com
    SourceAccount: !Ref 'AWS::AccountId'
    SourceArn: !Sub 'arn:aws:s3:::${CBRBucketName}'
Your CBRS3Bucket will also need to let the above resource be created first:
CBRS3Bucket:
  Type: AWS::S3::Bucket
  DependsOn: CBRS3BucketCanInvokeFunctionX
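A detail worth calling out in the fix above (my observation, not part of the original answer): SourceArn is built from the CBRBucketName parameter with !Sub rather than by referencing the bucket resource. Because the bucket now carries DependsOn: CBRS3BucketCanInvokeFunctionX, pointing the permission back at the bucket with !Ref CBRS3Bucket would create a circular dependency; constructing the ARN from the name string breaks that cycle.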
Try taking the .gz and putting in just gz.

Serverless Framework: Request must be smaller than 69905067 bytes for the UpdateFunctionCode operation

I am packaging and uploading a zipped file like this:
frameworkVersion: "=1.27.3"
service: recipes
provider:
name: aws
endpointType: REGIONAL
runtime: python3.6
stage: dev
region: eu-central-1
memorySize: 512
deploymentBucket:
name: dfki-meta
versionFunctions: false
stackTags:
Project: DFKIAPP
# Allows updates to all resources except deleting/replacing EC2 instances
stackPolicy:
- Effect: Allow
Principal: "*"
Action: "Update:*"
Resource: "*"
- Effect: Deny
Principal: "*"
Action:
- Update: Replace
- Update: Delete
Resource: "*"
Condition:
StringEquals:
ResourceType:
- AWS::EC2::Instance
# Access to RDS and S3 Bucket
iamRoleStatements:
- Effect: "Allow"
Action: "s3:ListBucket"
Resource: "*"
package:
individually: true
functions:
# get_recipes:
# handler: handler.get_recipes
# module: recipes_crud
# package:
# individually: true
# timeout: 30
# events:
# - http:
# path: recipes
# method: get
# request:
# parameters:
# querystring:
# persona: true
get_recommendation:
handler: handler.get_recommendation
module: recipes_ml
package:
artifact: zipped_dir.zip
timeout: 30
events:
- http:
path: recipes/{id}
method: get
request:
parameters:
paths:
id: true
querystring:
schaerfe_def: true
saettig_def: true
erfahrung_def: true
schaerfe_wunsch: true
saettig_wunsch: true
erfahrung_wunsch: true
gericht_wunsch: true
stimmung_wunsch: true
I cannot understand this error; isn't 52.18 MB under 69905067 bytes?
(node:50928) ExperimentalWarning: The fs.promises API is experimental
Serverless: Packaging function: get_recommendation...
Serverless: Uploading function: get_recommendation (52.18 MB)...
Serverless Error ---------------------------------------
Request must be smaller than 69905067 bytes for the UpdateFunctionCode operation
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 10.1.0
Serverless Version: 1.27.3
The package size should be lower than 50 MB according to the docs:
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
From this blog post:
The 20 MB addition presumably is there to account for request
overhead involved with the AWS API (e.g. base64 encoding of the zip
file data). So far the 50 MB limit holds true-ish. But, we're not
defeated yet.
The arithmetic checks out: 69905067 bytes is 50 MiB (52,428,800 bytes) times 4/3, the growth factor of base64 encoding, so the 52.18 MB zip in the log above is already past the effective 50 MB limit.
This seems to be an issue only when uploading an individual Lambda function with Serverless; if you don't pass the --function parameter and deploy the full stack, it works fine. A full deploy uploads the artifact to the deployment bucket in S3 and lets CloudFormation pull it from there, instead of pushing the zip inline through the UpdateFunctionCode API call named in the error.
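If the artifact itself has to shrink below the limit, one possible route (a sketch, not from this thread; it relies on the serverless-python-requirements plugin mentioned elsewhere on this page, and the exclude globs are illustrative) is to stop shipping dependencies uncompressed:

plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    zip: true   # ship Python dependencies as a compressed archive, unzipped at runtime
package:
  individually: true
  exclude:      # keep local-only directories out of every artifact
    - venv/**
    - node_modules/**

With zip: true, that plugin's README has the handler import an unzip_requirements shim before any dependency imports.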

AWS CloudFormation environment conditional for SES role

I'm trying to make a reusable CloudFormation template and would like to do some kind of conditional where, if the Environment parameter is "test" (or any environment other than "prod"), SES emails are sent only to gmail accounts (i.e., corporate accounts), but for "prod", SES emails can go anywhere. Would I have to create two different roles and put conditions on each one? Or is there a way to do this inside just the one role below? Thanks for any help!
Parameters:
  Environment:
    Description: Environment, which can be "test", "stage", "prod", etc.
    Type: String
Resources:
  Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: myRole
      Path: /
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "ecs.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Policies:
        - PolicyName: "ses-policy"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "ses:SendEmail"
                  - "ses:SendRawEmail"
                Resource: "*"
                Condition:
                  "ForAllValues:StringLike":
                    "ses:Recipients":
                      - "*@gmail.com"
Conditions are perfectly suited for adding this sort of conditional logic to CloudFormation Resource Properties. In your example, you could use the Fn::If Intrinsic Function to include the existing Policy Condition (not to be confused with the CloudFormation Condition!) if the environment is not prod, and AWS::NoValue otherwise (removing the Policy Condition entirely when environment is prod):
Parameters:
  Environment:
    Description: Environment, which can be "test", "stage", "prod", etc.
    Type: String
    AllowedValues: [test, stage, prod]
Conditions:
  IsProdEnvironment: !Equals [ !Ref Environment, prod ]
Resources:
  Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: myRole
      Path: /
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "ecs.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Policies:
        - PolicyName: "ses-policy"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "ses:SendEmail"
                  - "ses:SendRawEmail"
                Resource: "*"
                Condition: !If
                  - IsProdEnvironment
                  - !Ref AWS::NoValue
                  - "ForAllValues:StringLike":
                      "ses:Recipients":
                        - "*@gmail.com"