I'm trying to trigger a pipeline in another project.
trigger job:
  stage: triggers
  needs: [test_01]
  trigger:
    include:
      - project: voodoo212/ourlordandsavior
        file: .gitlab-ci.yml
    # strategy: depend
The remote pipeline runs fine when run on its own, but fails when triggered from another pipeline.
Is there anything I am missing here? The triggered pipeline does start running, but it seems to hit the same error I got when not passing the cache path:
$ ./configure.sh
/bin/bash: line 114: ./configure.sh: No such file or directory
I just realized I had the syntax wrong!
There is no need to use include to run it, just:
trigger job:
  stage: deploy
  needs: [test_01]
  trigger:
    project: voodoo212/ourlordandsavior
    strategy: depend
You can trigger it through the API:
script:
  - curl --request POST --form "token=$CI_JOB_TOKEN" --form ref=master "https://gitlab.example.com/api/v4/projects/9/trigger/pipeline"
I recognize that within a pipeline step I can run a simple export, like:
commands:
  - export MY_ENV_VAR=$(my-command)
...but if I want to use this env var throughout the whole pipeline, is it possible to do something like this:
environment:
  MY_ENV_VAR: $(my-command)
When I do this, I get yaml: unmarshal errors: line 23: cannot unmarshal !!seq into map[string]*yaml.Variable, which suggests this isn't possible. My end goal is to write a Drone plugin that accepts the output of $(...) as one of its settings. I'd prefer that the Drone plugin not run the command itself, but just use its output.
I've also attempted to use step dependencies to export an env var; however, its state doesn't carry over between steps:
- name: export
  image: bash
  commands:
    - export MY_VAR=$(my-command)
- name: echo
  image: bash
  depends_on:
    - export
  commands:
    - echo $MY_VAR # empty
Writing the command output to a script file might be a better way to do what you want, since filesystem changes are persisted between individual steps.
---
kind: pipeline
type: docker

steps:
- name: generate-script
  image: bash
  commands:
    # - my-command > plugin-script.sh
    - printf "echo Fetching Google;\n\ncurl -I https://google.com/" > plugin-script.sh
- name: test-script-1
  image: curlimages/curl
  commands:
    - sh plugin-script.sh
- name: test-script-2
  image: curlimages/curl
  commands:
    - sh plugin-script.sh
From Drone's Docker pipeline documentation:
Workspace
Drone automatically creates a temporary volume, known as your workspace, where it clones your repository. The workspace is the current working directory for each step in your pipeline.
Because the workspace is a volume, filesystem changes are persisted between pipeline steps. In other words, individual steps can communicate and share state using the filesystem.
⚠ Workspace volumes are ephemeral. They are created when the pipeline starts and destroyed after the pipeline completes.
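The same workspace mechanism works for plain variables too: one step writes the value to a file, and a later step sources it. A sketch under the question's setup (my-command and the step names come from the question; the .env filename is my choice):

```yaml
kind: pipeline
type: docker

steps:
- name: export
  image: bash
  commands:
    # write the value into the shared workspace instead of the step's environment
    - echo "MY_VAR=$(my-command)" > .env
- name: echo
  image: bash
  depends_on:
    - export
  commands:
    # re-load the value from the workspace in a later step
    - . ./.env
    - echo $MY_VAR
```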
You can't execute a command in the environment block, period.
But maybe you can define a "command string" in the environment block, like:
environment:
  MY_ENV_VAR: 'echo "this is command to execute"' # note the single quotes
then, in the commands block:
commands:
  - eval $MY_ENV_VAR
Worth a try.
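Putting those two fragments together as one complete Drone step (the step name and bash image are my choices; the command string is the one from the suggestion above):

```yaml
- name: eval-env
  image: bash
  environment:
    # the "command" is stored as a plain string here...
    MY_ENV_VAR: 'echo "this is command to execute"'
  commands:
    # ...and only executed inside the step, via eval
    - eval $MY_ENV_VAR
```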
So I have a quite simple gitlab-ci.yml script:
test stage:
  stage: build
  artifacts:
    paths:
      - result/
  script:
    …
The problem is that when it gets to “Uploading artifacts for successful job”, it prints “Missing /usr/local/bin/gitlab-runner. Uploading artifacts is disabled”.
I tried changing the owner and group of the gitlab-runner file to “gitlab-runner”, and even gave it 777 permissions, but nothing helped.
Any ideas where I’m wrong?
If you are using the latest GitLab version, gitlab-runner is installed at /usr/bin/gitlab-runner, but your job is trying to use /usr/local/bin/gitlab-runner.
Following the GitLab documentation for !reference tags, even this simplest example doesn't work.
.gitlab-ci.yml:
include:
  - local: shared.yml

this-doesnt-work:
  script:
    - !reference [.test, script]
shared.yml:
.test:
  script:
    - echo from shared
But GitLab doesn't seem to resolve the reference; it tries to execute the literal ".test":
Executing "step_script" stage of the job script
$ .test
/bin/bash: line 106: .test: command not found
Your code works perfectly fine. Which GitLab version are you using? The !reference feature was introduced in GitLab 13.9; maybe you are running an older version?
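On a version that has the feature (13.9+), GitLab expands the tag when the pipeline is created, so the job from the question effectively becomes:

```yaml
this-doesnt-work:
  script:
    - echo from shared
```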
I'm trying to include a file in which I declare some repetitive jobs, using extends.
I always get this error: did not find expected key while parsing a block.
this is the template file
.deploy_dev:
stage: deploy
  image: nexus
  script:
    - ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no" sellerbot@sb-dev -p 10290 'sudo systemctl restart mail.service'
  only:
    - dev
this is the main file
include:
- project: 'sellerbot/gitlab-ci'
ref: master
file: 'deploy.yml'
deploy_dev:
extends: .deploy_dev
Can anyone help me, please?
It looks like just stage: deploy has to be indented. In cases like this it's a good idea to use the GitLab CI lint tool to check that the pipeline code is valid, or just a YAML validator. When I checked the section from the template file in a YAML linter, I got:
(<unknown>): mapping values are not allowed in this context at line 3 column 8
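For comparison, the template parses once stage: deploy is indented to the same level as the other job keys (same content as the question's template, with sellerbot@sb-dev assumed as the ssh target):

```yaml
.deploy_dev:
  stage: deploy
  image: nexus
  script:
    - ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no" sellerbot@sb-dev -p 10290 'sudo systemctl restart mail.service'
  only:
    - dev
```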
After installing the serverless-step-functions plugin:
npm install -g serverless
npm install -g serverless-step-functions
... and successfully deploying the step function through
serverless deploy
... I then try to run serverless invoke stepf:
serverless invoke stepf --name ${sf} --data '${OUTPUT}'
Serverless Error ---------------------------------------
"stepf" is not a valid sub command. Run "serverless invoke" to see a more helpful error message for this command.
... And I get "stepf" is not a valid sub command
Why can't I use the functionality from the serverless-step-functions plugin to invoke a step function?
The invoke command is described on the serverless-step-functions git-hub page:
https://github.com/serverless-operations/serverless-step-functions#invoke
The version of the serverless-step-functions plugin in use is 2.21.1.
Edit
An important piece of information is that the invoke command was executed from a folder that did not contain a serverless.yml file
The invoke command was executed from a directory which did not have a serverless.yml file.
Adding this minimal YAML file activated the plugin:
service: some-step-function

provider:
  name: aws
  region: eu-north-1
  runtime: java11
  timeout: 30

plugins:
  - serverless-step-functions
But in order to run:
serverless invoke stepf --name ${sf} --data '${input}'
... the name parameter passed to invoke must be the name described in the serverless.yml file.
In the example below, the correct value for the name parameter is aStateMachine. I first made the incorrect assumption that it was the same as the name parameter under the state machine.
service: some-step-function

provider:
  name: aws
  region: eu-north-1
  runtime: java11
  timeout: 30
...
stepFunctions:
  stateMachines:
    aStateMachine:
      name: thisIsNotTheName

plugins:
  - serverless-step-functions
Assuming that you are in the same directory as the above serverless.yml file, a working invoke of a step function could look something like:
serverless invoke stepf --name aStateMachine --data '{}'
The above example explains the error message in the question.
It is, however, much more convenient to build a solution where the invoke command is executed from the directory containing the serverless.yml file.