I am using the npm package https://www.npmjs.com/package/serverless-step-functions-offline to run step functions offline. However, I get the following output from serverless:
Function "SaveSlotDetails" does not presented in serverless.yml
I have followed the steps exactly as per the documentation, but I am not able to run the step function locally.
Below is the relevant content of my serverless.yml file:
custom:
  stepFunctionsOffline:
    SaveSlotDetails: CreateSubscription

functions: # add 4 functions for CRUD
  createSubscription:
    handler: handlers/subscriptions.create
    name: CreateSubscription
    events:
      - http:
          path: subscriptions # path will be domain.name.com/dev/subscriptions
          method: post
          cors: true

stepFunctions:
  stateMachines:
    SlotCheckingMachine:
      name: ProcessSlotAvailabilityStateMachine
      definition:
        StartAt: SaveSlotDetails
        TimeoutSeconds: 3600
        States:
          SaveSlotDetails:
            Type: Task
            Resource: "arn:aws:lambda:us-east-1:269266452438:function:CreateSlot"
            Next: "SearchSubscriptions"
I have tried both function names, createSubscription and CreateSubscription, but nothing helps. I checked previously raised issues, but they didn't help much. I have tried versions 2.1.2 and 2.1.1 of the plugin, but neither works. Any help would be appreciated.
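To be explicit, these are the two shapes of the mapping I tried, since I'm unsure whether the plugin wants the function key from the functions block or the deployed name:

custom:
  stepFunctionsOffline:
    SaveSlotDetails: createSubscription # the function key from the functions block

custom:
  stepFunctionsOffline:
    SaveSlotDetails: CreateSubscription # the "name" property / deployed function name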
I have a deployment that deploys based on the stage (sls deploy --stage staging), and I want to export the Lambda ARN from that staging function within the Outputs. I expected CloudFormation to honor the Condition, but it seems it doesn't and tries to create the export in every environment regardless of the Condition.
Is there a way to do it?
service: test
frameworkVersion: '^2.2.0'

plugins:
  ...

provider:
  name: aws
  runtime: python3.8
  stage: ${opt:stage}

resources:
  Conditions:
    IsStaging:
      Fn::Equals:
        - ${self:provider.stage}
        - staging
  Resources:
    ...
  Outputs:
    MyLambdaFunction:
      Condition: IsStaging
      Value: !GetAtt [ MyLambdaFunction, Arn ]
      Export:
        Name: MyLambdaFunction
I couldn't find a plugin that mentions conditional Outputs. Is this something that works in a more up-to-date framework version, or does it just not work?
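EDIT: for reference, plain CloudFormation does honor a Condition on an Output (the Output and its Export are simply skipped when the condition is false), so I wonder whether my logical ID is the problem: Serverless names the generated Lambda resource after the normalized function key plus a LambdaFunction suffix. A sketch of what I mean, assuming the function is declared as myLambdaFunction under functions:

resources:
  Conditions:
    IsStaging:
      Fn::Equals:
        - ${self:provider.stage}
        - staging
  Outputs:
    MyLambdaFunction:
      Condition: IsStaging
      Value: !GetAtt MyLambdaFunctionLambdaFunction.Arn # generated logical ID, not the function name
      Export:
        Name: MyLambdaFunction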
I'm trying to define my serverless framework deployment bucket.
My serverless.yml looks like this:
provider:
  name: aws
  runtime: nodejs14.x
  region: us-east-1
  stage: dev
  deploymentBucket:
    name: ${self:environment.DEPLOYMENT_BUCKET}
  environment:
    ${file(../evn.${opt:stage, 'dev'}.json)}
and the evn.dev.json file looks like this:
{
  "DEPLOYMENT_BUCKET": "myBucketName"
}
(both of these files have irrelevant parts removed)
I'm getting a "Cannot resolve variable at 'provider.deploymentBucket.name'" error when trying to deploy.
How do I reference the DEPLOYMENT_BUCKET variable in the serverless.yml file?
EDIT: Other attempts, and the errors they produced:

${environment}:DEPLOYMENT_BUCKET
-> Could not locate deployment bucket. Error: The specified bucket is not valid

name: ${environment:DEPLOYMENT_BUCKET}
-> Unrecognized configuration variable sources: "environment"

name: ${self:provider.environment:DEPLOYMENT_BUCKET}
and
name: ${self:environment:DEPLOYMENT_BUCKET}
-> Cannot resolve serverless.yml: Variables resolution errored with - Cannot resolve variable at "provider.deploymentBucket.name": Value not found at "self" source
I was able to solve the problem with this:

${file(../evn.${opt:stage, 'dev'}.json):DEPLOYMENT_BUCKET}

But 'reading' that file twice, both here and in the environment section, seems to somewhat defeat the purpose of the environment section.
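EDIT: a shape that avoids the double read, assuming the framework can resolve self-references to a file-loaded object (recent versions appear to), is to load the file once into custom and point both places at it:

custom:
  env: ${file(../evn.${opt:stage, 'dev'}.json)}

provider:
  deploymentBucket:
    name: ${self:custom.env.DEPLOYMENT_BUCKET}
  environment: ${self:custom.env}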
I have a serverless.common.yml with properties that should be shared by all the services:
service: ixxxx

custom:
  stage: ${opt:stage, self:provider.stage}
  resourcesStages:
    prod: prod
    dev: dev
  resourcesStage: ${self:custom.resourcesStages.${self:custom.stage}, self:custom.resourcesStages.dev}
  lambdaPolicyXRay:
    Effect: Allow
    Action:
      - xray:PutTraceSegments
      - xray:PutTelemetryRecords
    Resource: "*"
And another serverless.yml inside a services folder, which uses properties from the common file:
...
custom: ${file(../../serverless.common.yml):custom}
...
environment:
  stage: ${self:custom.stage}
...
That way, I can access the custom variables from the common file without a problem.
Now I want to keep importing this file into custom while adding new variables related to this service, so I tried this:
custom:
  common: ${file(../../serverless.common.yml):custom}
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux
And it seems those values are still accessible, for example:

environment:
  stage: ${self:custom.common.stage}
But now, I'm receiving the error:

Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration 'self:custom.stage' could not be found.

Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration 'self:custom.stage' could not be found.

Serverless Error ---------------------------------------
Trying to populate non string value into a string for variable ${self:custom.stage}. Please make sure the value of the property is a string.
What am I doing wrong?
References inside serverless.common.yml must be written as if they were part of the importing serverless.yml. Once the common custom block is nested under custom.common, ${self:custom.stage} no longer exists, but ${self:custom.common.stage} does:
service: ixxxx

custom:
  stage: ${opt:stage, self:provider.stage}
  resourcesStages:
    prod: prod
    dev: dev
  # note: the fallback must also go through custom.common, for the same reason
  resourcesStage: ${self:custom.common.resourcesStages.${self:custom.common.stage}, self:custom.common.resourcesStages.dev}
  lambdaPolicyXRay:
    Effect: Allow
    Action:
      - xray:PutTraceSegments
      - xray:PutTelemetryRecords
    Resource: "*"
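With the common block nested under custom.common, the service-level file then reads every shared value through that prefix, for example:

environment:
  stage: ${self:custom.common.stage}
  resourcesStage: ${self:custom.common.resourcesStage}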
So I'm pretty new to CloudFormation and also to the Serverless framework. I've been working through some exercises (such as an automatic thumbnail generator) and creating some simple projects that I can hopefully generalize for my own purposes.
Right now I'm attempting to create a stack/function that creates two S3 buckets and has the Lambda function take a CSV file from one, perform some simple transformations, and place it in the other, receiving bucket.
Building off the exercises I've done, I created a YAML file with the following code:
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  profile: serverless-admin
  timeout: 10
  memorySize: 128
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "*"

custom:
  assets:
    targets:
      - bucket1: csvbucket1-08-16-2020
        pythonRequirements:
          dockerizePip: true
      - bucket2: csvbucket2-08-16-2020
        pythonRequirements:
          dockerizePip: true

functions:
  protomodel-readcsv:
    handler: handler.readindata
    events:
      s3:
        - bucket: ${self:custom.bucket1}
          event: s3:ObjectCreated:*
          suffix: .csv
        - bucket: ${self:custom.bucket2}

plugins:
  - serverless-python-requirements
  - serverless-s3-deploy
However, when I run serverless deploy from my command prompt, I get:
Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration 'self:custom.bucket1' could not be found.

Serverless Warning --------------------------------------
A valid service attribute to satisfy the declaration 'self:custom.bucket2' could not be found.

Serverless Error ---------------------------------------
Events for "protomodel-readcsv" must be an array, not an object
I've tried to turn the events entry in protomodel-readcsv into an array by adding a -, but I then get a bad-indentation error that for some reason I cannot reconcile. More fundamentally, I'm not exactly sure why that item needs to be an array anyway, and I wasn't clear about the bucket warnings either.
Sorry for a pretty newbie question, but running tutorials/examples online leaves a lot to figure out when trying to generalize/customize them.
custom:
  assets:
    targets:
      - bucket1

I guess you need self:custom.assets.targets.bucket1; I'm not sure whether this nested assets lookup will work.
Please check the example below, which is supposed to work:
service: MyService

custom:
  deploymentBucket: s3_my_bucket

provider:
  name: aws
  deploymentBucket: ${self:custom.deploymentBucket}
  stage: dev
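As for the "must be an array" error: each entry under events has to be a list item whose key is the event type, and S3 prefix/suffix filters go under rules in the documented syntax. A sketch using the bucket names from the question (inlined here instead of the unresolved ${self:custom.bucketN} references):

functions:
  protomodel-readcsv:
    handler: handler.readindata
    events:
      - s3:
          bucket: csvbucket1-08-16-2020
          event: s3:ObjectCreated:*
          rules:
            - suffix: .csv
      - s3:
          bucket: csvbucket2-08-16-2020
          event: s3:ObjectCreated:*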
I have the following semver setup:
- name: version
  type: semver
  source:
    driver: gcs
    bucket: my-ci
    json_key: ((my.serviceaccount))
    key: version/version.txt
    initial_version: 0.0.0
In my publish job, I have the following:
- name: publish
  serial_groups: [version]
  plan:
    - get: version
      passed: [build]
      trigger: true
So basically, the publish job is triggered after the build job passes (i.e. the version is updated).
Now, in the publish job I am creating a Docker image and pushing it to GCR.
- put: my-gcr
  params:
    additional_tags: my/ci/tags
    build: mycode
  get_params: {skip_download: true}
Here, the image is correctly tagged based on the values in the tags file. However, I want to set these tags dynamically based on the current version, which can be retrieved as described here:
https://concoursetutorial.com/miscellaneous/versions-and-buildnumbers/#display-version
How can I use this version number to tag my Docker image?
I solved it using the following code:
- put: artifacts
  params:
    additional_tags: version/number
    build: mycode
  get_params: {skip_download: true}
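This works because the get: version step materializes the current version as files in the version/ directory (version/number holds the raw version string), and the docker-image resource's additional_tags param takes a path to a file of whitespace-separated tags. Pointing additional_tags at version/number therefore tags the image with the current version; this assumes the artifacts resource is of type docker-image (registry-image has a similar additional_tags param).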