Failing to invoke a step function using the Serverless Framework ("stepf" is not a valid sub command)

After installing the Serverless Framework and the serverless-step-functions plugin
npm install -g serverless
npm install -g serverless-step-functions
... and successfully deploying the step function with
serverless deploy
... I then try to invoke the step function:
serverless invoke stepf --name ${sf} --data '${OUTPUT}'
... and I get the following error:
Serverless Error ---------------------------------------
"stepf" is not a valid sub command. Run "serverless invoke" to see a more helpful error message for this command.
Why can't I use the functionality from the serverless-step-functions plugin to invoke a step function?
The invoke command is described on the serverless-step-functions GitHub page:
https://github.com/serverless-operations/serverless-step-functions#invoke
The version of the serverless-step-functions plugin used is 2.21.1.
Edit
An important piece of information is that the invoke command was executed from a folder that did not contain a serverless.yml file.

The invoke command was executed from a directory that did not contain a serverless.yml file. Adding the following minimal serverless.yml activated the plugin:
service: some-step-function
provider:
  name: aws
  region: eu-north-1
  runtime: java11
  timeout: 30
plugins:
  - serverless-step-functions
But in order to run:
serverless invoke stepf --name ${sf} --data '${input}'
... the name parameter passed to invoke must be the state machine key defined in the serverless.yml file.
In the example below, the correct value for the name parameter is aStateMachine. I first made the incorrect assumption that it was the same as the name property under the state machine.
service: some-step-function
provider:
  name: aws
  region: eu-north-1
  runtime: java11
  timeout: 30
...
stepFunctions:
  stateMachines:
    aStateMachine:
      name: thisIsNotTheName
plugins:
  - serverless-step-functions
Assuming that you are in the same directory as the above serverless.yml file, a working invocation of the step function could look something like:
serverless invoke stepf --name aStateMachine --data '{}'
The above example explains the error message in the question.
It is, however, much more convenient to build a solution where the invoke command is executed from the directory that contains the serverless.yml file.
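For example, a minimal sketch (the service directory name and the input JSON are assumptions, not from the original post; only the documented --name and --data flags are used):
# assuming the serverless.yml above lives in ./some-step-function
cd some-step-function
serverless invoke stepf --name aStateMachine --data '{"someKey": "someValue"}'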

Related

apptainer/singularity multi-stage build with different registries

I'm building an apptainer/singularity multi-stage recipe in a GitLab CI environment.
The first stage of the recipe is built from an image hosted in a private registry, whereas the second is built from an image hosted on Docker Hub. Something like this:
# First stage
BootStrap: docker
Registry: <my_private_registry>
From: <my_image>
Stage: base
%files
...
%post
...
# Second stage
BootStrap: docker
Registry: index.docker.io
From: continuumio/miniconda3
Stage: final
%files from base
...
%post
...
Since the first registry is private, in the GitLab CI instance I'm setting the variables APPTAINER_DOCKER_USERNAME and APPTAINER_DOCKER_PASSWORD, as suggested here for a CI/CD workflow.
This allows the first stage of the recipe to be built successfully.
Unfortunately, when the build of the second stage starts, it fails with:
> FATAL: While performing build: conveyor failed to get: unable to retrieve auth token: invalid username/password: unauthorized: incorrect username or password
I think this is because the credentials for my private registry are also being passed to Docker Hub in the second stage.
How can I log in to different registries in multi-stage builds? Any idea on how to deal with this problem?
I found a way to accomplish what I wanted. The issue was that environment variables override the other login modes.
So I deleted the APPTAINER_DOCKER_USERNAME and APPTAINER_DOCKER_PASSWORD environment variables and, using this method, added the following before_script field to my .gitlab-ci.yml:
apptainer:
  stage: deploy
  image:
    name: kaczmarj/apptainer:1.1.3
    entrypoint: [""]
  tags:
    - privileged
  before_script:
    - echo "$DOCKER_REGISTRY_TOKEN" | apptainer remote login --username <my_username> --password-stdin docker://$CI_REGISTRY
This way, both the private registry (stored in $CI_REGISTRY) and the public one (Docker Hub) are available.
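If the image pulled in the second stage also required authentication, the same approach should extend to several logins, since apptainer remote login stores credentials per registry. A minimal sketch (the DOCKERHUB_TOKEN variable and the Docker Hub username are hypothetical, not part of the original setup):
before_script:
  # private registry used by the first stage (as above)
  - echo "$DOCKER_REGISTRY_TOKEN" | apptainer remote login --username <my_username> --password-stdin docker://$CI_REGISTRY
  # hypothetical second login, only needed if the Docker Hub image were private too
  - echo "$DOCKERHUB_TOKEN" | apptainer remote login --username <dockerhub_username> --password-stdin docker://docker.io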

Drone CI - How to set pipeline env var to result of CLI output

I recognize that within a pipeline step I can run a simple export, like:
commands:
  - export MY_ENV_VAR=$(my-command)
... but if I want to use this env var throughout the whole pipeline, is it possible to do something like this?
environment:
  MY_ENV_VAR: $(my-command)
When I do this, I get yaml: unmarshal errors: line 23: cannot unmarshal !!seq into map[string]*yaml.Variable, which suggests this isn't possible. My end goal is to write a Drone plugin that accepts the output of $(...) as one of its settings. I'd prefer the plugin not run the command itself, but just use its output.
I've also attempted to use step dependencies to export an env var, but its state doesn't carry over between steps:
- name: export
  image: bash
  commands:
    - export MY_VAR=$(my-command)
- name: echo
  image: bash
  depends_on:
    - export
  commands:
    - echo $MY_VAR # empty
Writing the command output to a script file might be a better way to do what you want, since filesystem changes are persisted between individual steps.
---
kind: pipeline
type: docker

steps:
- name: generate-script
  image: bash
  commands:
    # - my-command > plugin-script.sh
    - printf "echo Fetching Google;\n\ncurl -I https://google.com/" > plugin-script.sh

- name: test-script-1
  image: curlimages/curl
  commands:
    - sh plugin-script.sh

- name: test-script-2
  image: curlimages/curl
  commands:
    - sh plugin-script.sh
From Drone's Docker pipeline documentation:
Workspace
Drone automatically creates a temporary volume, known as your workspace, where it clones your repository. The workspace is the current working directory for each step in your pipeline.
Because the workspace is a volume, filesystem changes are persisted between pipeline steps. In other words, individual steps can communicate and share state using the filesystem.
⚠ Workspace volumes are ephemeral. They are created when the pipeline starts and destroyed after the pipeline completes.
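Because the workspace persists between steps, the same idea also works for plain environment variables: write them to a file in one step and source that file in later steps. A minimal sketch (the step names and the .env-vars filename are assumptions; my-command is the command from the question):
steps:
- name: capture
  image: bash
  commands:
    # evaluate the command once and persist the result in the workspace
    - echo "export MY_ENV_VAR=$(my-command)" > .env-vars
- name: use
  image: bash
  commands:
    # source the file to recreate the variable in this step's shell
    - . ./.env-vars
    - echo "$MY_ENV_VAR"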
If you can't execute a command inside environment at all, maybe you can define a "command string" in the environment block instead, like:
environment:
  MY_ENV_VAR: 'echo "this is the command to execute"' # note the single quotes
Then, in the commands block:
commands:
  - eval $MY_ENV_VAR
It's worth a try.
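To make the eval approach concrete, a minimal sketch (the date command is just a hypothetical stand-in for whatever you actually need to run):
environment:
  MY_ENV_VAR: 'date +%Y-%m-%d'        # command string, single-quoted
commands:
  - eval "$MY_ENV_VAR"                # runs: date +%Y-%m-%d
  - TODAY=$(eval "$MY_ENV_VAR")       # or capture its output within this step
  - echo "$TODAY"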

docker-compose using cached file with pytest

I've configured IntelliJ to use Python via a stack I've defined in docker-compose, and I've configured my project to execute pytest via docker-compose so that I can use the debugger. However, I've discovered that after the initial run, when I change my code and re-run my tests, pytest does not see my changes, but rather executes a cached version of the code.
The only way I've found to get around this is to invoke the File menu option Invalidate Caches and Restart, which is annoying.
This is my compose file:
networks:
  app: {}
services:
  item-set-definitions:
    build:
      context: /Users/kudrykma/ghoildex/kudrykma/analytics/sa-item-set-definitions
      target: build
    command:
      - /bin/bash
    image: item-sets:test
    networks:
      app: {}
    volumes:
      - source: /Users/kudrykma/ghoildex/kudrykma/analytics/sa-item-set-definitions
        target: /project
        type: bind
version: '3.9'
In the pytest run configuration I've tried adding the --force-recreate option in the docker-compose "Command and options" field, but IntelliJ won't recognize it.
Does anyone know how I can configure IntelliJ not to cache any of my source files, so that pytest will see my changed code?
Thank you.
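No answer is shown for this one, but one hypothetical sanity check (not from the original thread) is to confirm that the bind mount in the compose file above actually reflects local edits, so the problem can be narrowed down to IntelliJ's caching rather than the volume configuration:
# run a one-off container for the service defined above and inspect the mounted source
docker compose run --rm item-set-definitions /bin/bash -c 'ls -l /project | head'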

trigger pipeline fails in gitlab-ci

I'm trying to trigger a pipeline in another project:
trigger job:
  stage: triggers
  needs: [test_01]
  trigger:
    include:
      - project: voodoo212/ourlordandsavior
        file: .gitlab-ci.yml
    # strategy: depend
The remote pipeline runs fine when run separately, but fails when triggered from another pipeline.
Anything I am missing here? The triggered pipeline does start running, but it seems to be the same error I got when not passing the cache path:
$ ./configure.sh
/bin/bash: line 114: ./configure.sh: No such file or directory
I just realized I had the syntax wrong!
There is no need to use include to run it, just:
trigger job:
  stage: deploy
  needs: [test_01]
  trigger:
    project: voodoo212/ourlordandsavior
    strategy: depend
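If you also need to pass values to the downstream pipeline, variables defined on the trigger job are forwarded to it. A minimal sketch (the DEPLOY_ENV variable and its value are hypothetical):
trigger job:
  stage: deploy
  needs: [test_01]
  variables:
    DEPLOY_ENV: staging   # forwarded to the triggered pipeline
  trigger:
    project: voodoo212/ourlordandsavior
    strategy: depend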
You can also trigger it through the API:
script:
  - curl --request POST --form "token=$CI_JOB_TOKEN" --form ref=master "https://gitlab.example.com/api/v4/projects/9/trigger/pipeline"

Is there a way to define build stage in drone.yml?

I have some stages defined in my drone.yml file. Is there a way to specify which stage should be run through a command line parameter? For example, below is my drone.yml file. I want to build the buildOnContainer1 and buildOnContainer2 stages separately, so I am looking for a command such as drone exec buildOnContainer1 that only runs the commands under buildOnContainer1.
buildOnContainer1:
  image: container1
  pull: true
  commands:
    - npm test:uat
buildOnContainer2:
  image: container2
  pull: true
  commands:
    - npm test:dev
My first thought for implementing that level of granularity is through environment variables.
Drone provides the ability to substitute environment variables at runtime. This gives us the ability to use dynamic build or commit details in our pipeline configuration.
You can pass environment variables on the command line and also to your Drone server using secrets. Check the Drone docs on ENV interpolation and the drone exec command.
You should build customized images for container1 and container2 that run the commands or skip them based on the values of specific environment variables.
A dirty example would be something like the following .drone.yml:
buildOnContainer1:
  image: container1
  pull: true
  environment:
    - SKIP=${skip.buildOnContainer1}
  commands:
    - ./myscript.sh test:uat
buildOnContainer2:
  image: container2
  pull: true
  environment:
    - SKIP=${skip.buildOnContainer2}
  commands:
    - ./myscript.sh test:dev
Your custom image should contain a myscript.sh bash script in your working directory. The script could check whether the value of the SKIP environment variable is true or false: if true, it would do nothing; if false, it would execute the npm command with whatever args you had passed to the script.
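A minimal sketch of such a script (the exact behaviour is an assumption; adapt it to your own npm scripts):
#!/usr/bin/env bash
# myscript.sh: run or skip the npm command based on the SKIP environment variable
set -e
if [ "$SKIP" = "true" ]; then
  echo "SKIP=true, skipping: npm $*"
  exit 0
fi
npm "$@"   # e.g. ./myscript.sh test:uat runs: npm test:uat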
Then you could execute the build locally:
drone exec --secret skip.buildOnContainer1=true --secret skip.buildOnContainer2=false