GitLab CI exits with code 1 even though the command succeeds - gitlab-ci

I have a step in my GitLab CI pipeline that runs the PHP code sniffer (phpcs). I use a custom base image for this step.
The step exits with code 1 and fails.
I checked this by starting a container from my Docker image: the phpcs command works like a charm inside the base image.
It seems like GitLab CI throws this exit code even though the job succeeded.
This is the output from GitLab CI.
I compared the artifact file's row count with the CLI command's row count (inside the Docker container); they are the same.
I could set allow_failure, but this error is strange.
if [[ -f "phpstan.txt" && -s "phpstan.txt" ]]; then echo "exists and not empty"; fi
I tried to allow the failure inside a bash script: I wrote a small check like the one above and placed it after the phpcs command in my .gitlab-ci.yml, but the job fails before this script runs.
GitLab version: v11.9.1
Docker image: custom, based on php:7.2
My GitLab CI step:
phpcs:
  stage: analysis
  script:
    - phpcs --standard=PSR2 --extensions=php --severity=5 -s src | tee phpcs.txt
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - phpcs.txt
I don't think this is about phpcs itself. I have a similar step named phpstan, also an analysis mechanism, and it throws exactly the same error on the same line of the script.
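For what it's worth, phpcs exits with a non-zero code when it reports violations, and depending on the runner's shell options (e.g. pipefail) that status can survive the pipe into tee and fail the job. A minimal sketch of a workaround, assuming that is what is happening here:

phpcs:
  stage: analysis
  script:
    # keep the report as an artifact, but don't let phpcs's violation
    # exit code fail the job
    - phpcs --standard=PSR2 --extensions=php --severity=5 -s src | tee phpcs.txt || true
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - phpcs.txt

With || true the job always passes, so pair it with a later check (like the phpstan.txt test above) if the report should still gate the pipeline.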

Related

Transferring strings across GitLab CI stages using variables

I want to store the output from a script in a variable, for use in subsequent commands, from within GitLab CI.
Here is the script:
image: ...

build c-ares:
  variables:
    CARES_ARTIFACTS_DIR: "-"
  script:
    - CARES_ARTIFACTS_DIR=$(./build-c-ares.sh)
  after_script:
    - echo $CARES_ARTIFACTS_DIR
  artifacts:
    name: CARES_ARTIFACTS
    paths:
      - $CARES_ARTIFACTS_DIR
My intention is to:
first declare the variable CARES_ARTIFACTS_DIR with global scope,
set the variable's value using the output of the build-c-ares.sh script,
then recover the output of build-c-ares.sh in a later command using the variable.
My code does not behave as intended: on dereferencing the variable, I find it still contains the original value it was assigned at declaration:
$ CARES_ARTIFACTS_DIR=$(./build-c-ares.sh)
Cloning into 'c-ares'...
Running after_script
00:01
Running after script...
$ echo $CARES_ARTIFACTS_DIR
-
Uploading artifacts for successful job
00:00
Uploading artifacts...
WARNING: -: no matching files. Ensure that the artifact path is relative to the working directory
ERROR: No files to upload
It is probably easier to just redirect the script output to a file and define that as an artifact.
Something similar to:
image: ...

build c-ares:
  script:
    - ./build-c-ares.sh > script_output
    - cat script_output
  artifacts:
    paths:
      - script_output
Regarding the specific issue: the variables used in the artifacts section will again use the variable initialisation defined for the job. Both the artifacts section and the script section of the job start with the custom CARES_ARTIFACTS_DIR variable set to the value "-":
build c-ares:
  variables:
    CARES_ARTIFACTS_DIR: "-"
  script:
    # $CARES_ARTIFACTS_DIR == "-"
    - CARES_ARTIFACTS_DIR=$(./build-c-ares.sh)
    # $CARES_ARTIFACTS_DIR == "hello from build-c-ares.sh"
    - echo $CARES_ARTIFACTS_DIR # prints "hello from build-c-ares.sh"
  after_script:
    # $CARES_ARTIFACTS_DIR == "-"
    - echo $CARES_ARTIFACTS_DIR # prints "-"
Fundamentally, GitLab variables cannot feed information across job steps as intended in the original post. My subjective opinion is to keep steps independent where possible, and to restrict input to artifacts from upstream jobs or to variables explicitly defined in the pipeline script or settings.
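As an aside: if the goal is to pass a value from one job to a later job (rather than between sections of the same job), newer GitLab versions support dotenv report artifacts. A minimal sketch, assuming a GitLab version with artifacts:reports:dotenv and a consuming job in a later stage:

build c-ares:
  script:
    # write KEY=VALUE pairs to a dotenv file
    - echo "CARES_ARTIFACTS_DIR=$(./build-c-ares.sh)" >> build.env
  artifacts:
    reports:
      dotenv: build.env

use c-ares dir:
  script:
    - echo $CARES_ARTIFACTS_DIR # populated from the dotenv report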

Drone CI - How to set pipeline env var to result of CLI output

I recognize that within a pipeline step I can run a simple export, like:
commands:
  - export MY_ENV_VAR=$(my-command)
...but if I want to use this env var throughout the whole pipeline, is it possible to do something like this:
environment:
  MY_ENV_VAR: $(my-command)
When I do this, I get yaml: unmarshal errors: line 23: cannot unmarshal !!seq into map[string]*yaml.Variable, which suggests this isn't possible. My end goal is to write a Drone plugin that accepts the output of $(...) as one of its settings. I'd prefer that the Drone plugin not run the command itself, but just use its output.
I've also attempted to use step dependencies to export an env var; however, its state doesn't carry over between steps:
- name: export
  image: bash
  commands:
    - export MY_VAR=$(my-command)

- name: echo
  image: bash
  depends_on:
    - export
  commands:
    - echo $MY_VAR # empty
Writing the command output to a script file might be a better way to do what you want, since filesystem changes are persisted between individual steps.
---
kind: pipeline
type: docker

steps:
- name: generate-script
  image: bash
  commands:
    # - my-command > plugin-script.sh
    - printf "echo Fetching Google;\n\ncurl -I https://google.com/" > plugin-script.sh

- name: test-script-1
  image: curlimages/curl
  commands:
    - sh plugin-script.sh

- name: test-script-2
  image: curlimages/curl
  commands:
    - sh plugin-script.sh
From Drone's Docker pipeline documentation:
Workspace
Drone automatically creates a temporary volume, known as your workspace, where it clones your repository. The workspace is the current working directory for each step in your pipeline.
Because the workspace is a volume, filesystem changes are persisted between pipeline steps. In other words, individual steps can communicate and share state using the filesystem.
⚠ Workspace volumes are ephemeral. They are created when the pipeline starts and destroyed after the pipeline completes.
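Applied to the original goal, a minimal sketch of passing a command's output between steps through the workspace (assuming my-command prints the value to stdout):

steps:
- name: export
  image: bash
  commands:
    - my-command > .my_var # written to the workspace volume

- name: echo
  image: bash
  commands:
    - echo "$(cat .my_var)" # read back in a later step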
If you can't execute the command in the environment block at all, maybe you can define a "command string" in the environment block, like:
environment:
  MY_ENV_VAR: 'echo "this is the command to execute"' # note the single quotes
then, in the commands block:
commands:
  - eval $MY_ENV_VAR
Worth a try.

gitlab CI/CD: How to enter into a container for testing, i.e. getting an interactive shell

In Docker we can enter a container and get an interactive shell with:
docker-compose exec containername /bin/bash
Can we similarly get an interactive shell inside the script of a GitLab CI/CD job?
Eg:
build:
  stage: build
  script:
    - pwd; ls -al
    # HERE I WANT TO HAVE AN INTERACTIVE SHELL SO THAT I CAN CHECK FEW THINGS
I think we need to take a small detour here and explain how jobs work in GitLab CI.
Each job is an encapsulated Docker container. The container only executes what you want executed, via the script directive. By default, jobs on shared runners use a Ruby container image.
If you want to check what you have available within your image, or you want to try things out locally, you can do so by running a container from this image locally and mounting your project folder into it.
docker run --rm -v "$(pwd):/build/project" -w "/build/project" -it <the job image> /bin/bash # or /bin/sh, or whatever shell is available in the image
# -v mounts the current directory into /build/project in the container
# -w changes the working directory to the mount point
# /bin/bash starts the shell; other shells may be available in the image
If you want to use a different Docker image, let's say because you are running some other build tool, you can specify it with the image directive, like:
build:
  image: maven:latest
  script:
    - echo "some output"
Within the job you have exactly the functionality that the image provides, as the job runs inside a container of that image.
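For example, to recreate that job's environment locally using the same pattern as the docker run above:

docker run --rm -v "$(pwd):/build/project" -w "/build/project" -it maven:latest /bin/bash
# then run the job's script by hand inside the container:
echo "some output"
mvn --version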
You can even use tools like https://github.com/firecow/gitlab-ci-local to verify this locally. But in the end these are just Docker images, and you can easily recreate the flow on your own.

CI-pipeline ignore any commands that fail in a given step

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
  - logger

logger-commands:
  stage: logger
  allow_failure: true
  script:
    - echo 'Examining environment'
    - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
    - git --version
    - echo --------------------------------------------------------------------------------
    - env
    - echo --------------------------------------------------------------------------------
    - npm --version
    - node --version
    - java -version
    - mvn --version
    - kaniko --version
    - echo --------------------------------------------------------------------------------
The problem is that the Java command fails because Java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the java -version line, but I'm trying to come up with a canned logger stage that I could use in all my CI pipelines, covering Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize that some of those commands will fail because they are not found.
Searching for a solution got me close:
GitLab CI: How to continue job even when script fails - which did help. By adding allow_failure: true I found that even if the logger job failed, the remaining stages would still run (which is desirable). That answer also suggests a syntax to wrap commands in:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]
then
  ./do_something.sh
fi
So that is helpful, but my question is this:
Is there anything built into GitLab's CI pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one of them fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests adding the UNIX bash OR syntax as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- java -version || echo java failed
That is a little cleaner syntax, but I'm trying to make it simpler still.
The answers already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always treats the command as successful.
If the command did fail, the command is printed along with its non-zero exit code.
#!/bin/sh
# File: runit
"$@" # run the given command with its arguments
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
  echo "CMD: $@"
  echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0
Notice that even though the command failed, ExitCode = 0 is what the CI pipeline will see.
To use it in the pipeline, I have to make that shell script available to the CI runner job; I'll research how best to include it. For example:
stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
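If the extra file is the main objection, the same idea can be sketched inline: define the helper as a shell function in before_script (which runs in the same shell as script in GitLab CI), so nothing extra needs to be committed. An untested sketch of that variant:

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  before_script:
    # "$@" runs the command; on failure, print it with the ignored exit code
    - runit() { "$@" || echo "CMD: $* -- ignored exit code $?"; }
  script:
    - runit npm --version
    - runit java -version
    - runit mvn --version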

Is there a way to define build stage in drone.yml?

I have some stages defined in a drone.yml file. Is there a way to specify which stage should run via a command line parameter? For example, below is my drone.yml file. I want to build the buildOnContainer1 and buildOnContainer2 stages separately, so I am looking for a command such as drone exec buildOnContainer1 that only runs the commands under buildOnContainer1.
buildOnContainer1:
  image: container1
  pull: true
  commands:
    - npm test:uat

buildOnContainer2:
  image: container2
  pull: true
  commands:
    - npm test:dev
My first thought for implementing that level of granularity is environment variables.
Drone provides the ability to substitute environment variables at runtime, which gives us the ability to use dynamic build or commit details in our pipeline configuration.
You can pass environment variables on the command line and also to your Drone server using secrets. Check the Drone docs on ENV interpolation and the drone exec command.
You could build customized images for container1 and container2 that run the commands or skip them based on the values of specific environment variables.
A dirty example would be something like the following .drone.yml:
buildOnContainer1:
  image: container1
  pull: true
  environment:
    - SKIP=${skip.buildOnContainer1}
  commands:
    - ./myscript.sh test:uat

buildOnContainer2:
  image: container2
  pull: true
  environment:
    - SKIP=${skip.buildOnContainer2}
  commands:
    - ./myscript.sh test:dev
Your custom images should contain a myscript.sh bash script in the working directory. The script checks whether the SKIP env var is set to true or false: if true, it does nothing; if false, it executes the npm command with whatever args were passed to the script.
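A minimal sketch of such a myscript.sh (hypothetical, just matching the description above):

#!/bin/bash
# myscript.sh - run or skip the npm command based on the SKIP env var
if [ "$SKIP" = "true" ]; then
  echo "SKIP=true, not running: npm $*"
else
  npm "$@" # e.g. ./myscript.sh test:uat -> npm test:uat
fi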
Then you could execute the build locally:
drone exec --secret skip.buildOnContainer1=true --secret skip.buildOnContainer2=false