How to handle an asynchronous request/response as part of a Gitlab CI/CD test - gitlab-ci

I am looking to migrate from Jenkins to GitLab CI/CD. We currently use the BlazeMeter plugin for Jenkins to run GUI functional tests on BlazeMeter as part of a Jenkins job.
Unfortunately BlazeMeter doesn't have a plugin for GitLab, but they do have a simple JSON API to start tests.
Because the tests can be long-running, the BlazeMeter API is asynchronous: one curl endpoint is used to start a test and another is used to poll for the results (passing an ID returned by the first call).
What is the best way to handle this asynchronous process as part of a GitLab CI pipeline job, and what would the corresponding .gitlab-ci.yml look like?

GitLab has a pipeline trigger feature that you can invoke from anywhere via a webhook, and BlazeMeter can send notifications via webhooks. Combining the two solves your problem without keeping one long-running job open until the test completes.
test-trigger:
  stage: test
  script:
    - echo "curl command to invoke the test goes here"
  except:
    - triggers
test-completion:
  stage: test
  script:
    - echo "reporting script goes here"
  only:
    - triggers
The following resources will help you get started.
https://docs.gitlab.com/ee/ci/triggers/
https://guide.blazemeter.com/hc/en-us/articles/360001859058-Notifications-Overview-Notifications-Overview#webhook
https://blog.runscope.com/posts/how-to-send-runscope-webhook-notifications-to-google-hangouts-chat-with-eventn
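For the trigger side, here is a minimal sketch of how a webhook receiver (or whatever relays the BlazeMeter notification) could kick off the test-completion pipeline via GitLab's pipeline trigger API. The project ID, trigger token, ref, and the BLAZEMETER_TEST_ID variable are placeholders for this example, not part of the original answer:
# Hypothetical call to GitLab's pipeline trigger API; PROJECT_ID, TRIGGER_TOKEN,
# the ref and the variable name are placeholders to adapt to your setup.
curl --request POST \
  --form "token=${TRIGGER_TOKEN}" \
  --form "ref=main" \
  --form "variables[BLAZEMETER_TEST_ID]=${TEST_ID}" \
  "https://gitlab.com/api/v4/projects/${PROJECT_ID}/trigger/pipeline"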

The general solution is to use a shell script inside the job to manage the polling loop for gitlab-ci.
stages:
  - runBlazeMeter

build:
  stage: runBlazeMeter
  script:
    - |
      echo "START test run"
      echo "use curl to initiate test"
      COUNT=0
      while
        COUNT=$((COUNT + 1))
        echo "use curl to query test completion"
        RES=$(curl --silent https://jsonplaceholder.typicode.com/users/${COUNT} | wc -c)
        [ "$RES" -gt 3 ]
      do :; done
      echo "END test run"

You can use two stages to achieve this, using artifacts to pass the test ID from one stage to the next:
stages:
  - start-test
  - test

start-test:
  stage: start-test
  artifacts:
    untracked: true
  script:
    - curl http://run/the/test > testid.json

test:
  stage: test
  dependencies:
    - start-test
  script:
    - |
      TESTID=$(cat testid.json)
      until curl --silent https://test/status/${TESTID} | grep -q "COMPLETE"; do
        sleep 1000
      done
      echo "TEST COMPLETE"

Related

Gitlab CI exit 1 even if it is successful

I have a step in my GitLab CI that runs PHP code sniffing (phpcs). I use a custom base image for this step.
This step exits with code 1 and fails.
I checked this by starting a container with my Docker image: the phpcs command works like a charm inside the base image.
It seems like gitlab-ci throws this exit code even though the job succeeded.
This is the output from gitlab-ci.
I compared the row count of the artifact file with the row count of the CLI command output (inside the Docker container); they are the same.
I could allow failure, but this error is strange.
if [[ -f "phpstan.txt" && -s "phpstan.txt" ]]; then echo "exist and not empty";
I tried to allow failure inside bash script. I write a small custom control as above and place it after phpcs command inside my .gitlab-ci.yml. But job is failed before this script.
GitLab version: v11.9.1
Docker image: custom, based on php:7.2
My GitLab CI step:
phpcs:
  stage: analysis
  script:
    - phpcs --standard=PSR2 --extensions=php --severity=5 -s src | tee phpcs.txt
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - phpcs.txt
I think this is not specific to phpcs. I have a similar analysis step named phpstan, and it throws exactly the same error on the same line of the script.
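For reference, a minimal sketch of the two workarounds the question mentions (job-level allow_failure, or tolerating the exit status inside the script); this does not explain the root cause, it only keeps the report artifact while preventing the job from going red:
phpcs:
  stage: analysis
  allow_failure: true   # workaround 1: mark the job as allowed to fail
  script:
    # workaround 2: tolerate a non-zero exit status from the phpcs pipeline
    - phpcs --standard=PSR2 --extensions=php --severity=5 -s src | tee phpcs.txt || true
  artifacts:
    when: always
    expire_in: 1 week
    paths:
      - phpcs.txt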

Conditional variables in gitlab-ci.yml

Depending on the branch the build comes from, I need to use slightly different command-line arguments. In particular, I would like to upload snapshot Nexus artifacts when building from a branch, and a release artifact when building off master.
Is there a way to conditionally alter variables?
I tried to use except/only keywords like this
stages:
  - stage

variables:
  TYPE: Release

.upload_common:
  stage: stage
  tags: ["Win"]
  script:
    - echo Uploading %TYPE%

.upload_snapshot:
  variables:
    TYPE: "Snapshot"
  except:
    - master

upload:
  extends:
    - .upload_common
    - .upload_snapshot
Unfortunately it skips the whole upload step when building off master.
The reason I am using the 'extends' pattern here is that I have Windows and Mac platforms, which use slightly different variable substitution syntax ($ vs %). I also have a few different build configurations: Debug/Release, 32-bit/64-bit.
The code below actually works, but I had to duplicate the steps for release and snapshot, with only one enabled at a time.
stages:
  - stage

.upload_common:
  stage: stage
  tags: ["Win"]
  script:
    - echo Uploading %TYPE%

.upload_snapshot:
  variables:
    TYPE: "Snapshot"
  except:
    - master

.upload_release:
  variables:
    TYPE: "Release"
  only:
    - master

upload_release:
  extends:
    - .upload_common
    - .upload_release

upload_snapshot:
  extends:
    - .upload_common
    - .upload_snapshot
The code gets much larger when the snapshot/release configuration is multiplied by Debug/Release, Mac/Win, and 32/64-bit. I would like to keep the number of configurations to a minimum.
The ability to conditionally alter just a few variables would help me reduce this code a lot.
Another addition in GitLab 13.7 is rules:variables. This allows some logic when setting variables:
job:
  variables:
    DEPLOY_VARIABLE: "default-deploy"
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables:                              # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "deploy-production"  # at the job level.
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      variables:
        IS_A_FEATURE: "true"                  # Define a new variable.
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"
Unfortunately, neither YAML anchors nor GitLab CI's extends seem to allow combining entries in the script array of commands as of today.
I use the built-in variable CI_COMMIT_REF_NAME in combination with a global or job-level before_script to solve similar tasks without repeating myself.
Here is an example of my workaround for setting an environment variable to different values dynamically for PROD and DEV during delivery or deployment:
.provide ssh private deploy key: &provide_ssh_private_deploy_key
  before_script:
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - |
      if [ "$CI_COMMIT_REF_NAME" == "master" ]; then
        echo "$SSH_PRIVATE_DEPLOY_KEY_PROD" > ~/.ssh/id_rsa
        MY_DYNAMIC_VAR="we are in master (PROD)"
      else
        echo "$SSH_PRIVATE_DEPLOY_KEY_DEV" > ~/.ssh/id_rsa
        MY_DYNAMIC_VAR="we are NOT in master (DEV)"
      fi
    - chmod 600 ~/.ssh/id_rsa

deliver-via-ssh:
  stage: deliver
  <<: *provide_ssh_private_deploy_key
  script:
    - echo Stage is deliver
    - echo $MY_DYNAMIC_VAR
    - ssh ...
Also consider this workaround for concatenating "script" commands: https://stackoverflow.com/a/57209078/470108
Hopefully it helps.
A nice way to prepare variables for other jobs is the dotenv report artifact. Unfortunately, these variables cannot be used in every place, but if you only need to access them from other jobs' scripts, this is the way:
# prepare environment variables for other jobs
env:
  stage: .pre
  script:
    # Set application version from GIT tag or fake it
    - echo "APPVERSION=${CI_COMMIT_TAG:-0.1-dev-$CI_COMMIT_REF_SLUG}+$CI_COMMIT_SHORT_SHA" | tee -a .env
  artifacts:
    reports:
      dotenv: .env
In the script of this job you can conditionally prepare and write your environment values into a file, then make a dotenv artifact out of it. Subsequent - or better yet dependent - jobs will pick up the variables for their scripts from there.
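As an illustration, a minimal sketch of a consuming job (the job name build and its stage are just assumptions for this example); thanks to the dotenv report, it sees APPVERSION as a regular environment variable:
# Hypothetical consumer of the dotenv artifact produced by the env job above.
build:
  stage: build
  needs:
    - env   # make the dependency explicit so the dotenv variables are injected
  script:
    - echo "Building version $APPVERSION"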

Using multiple runners in one gitlab-ci

I want to run a CI pipeline with 2 jobs:
one job will boot up a Docker image with a Docker runner and run tests inside Docker,
the other job will run under an SSH runner and pull code on a remote server.
Is it possible?
Yes, it's possible. You need to:
Register two GitLab runners with the needed executors (docker and shell), each with a different tag (or at least one of them tagged).
Declare the specific tag for each job in your .gitlab-ci.yml.
Shell runner registration:
[root@jsc00mca ~]# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://example.com/
Please enter the gitlab-ci token for this runner:
1a2b3c
Please enter the gitlab-ci description for this runner:
[jsc00mca.example.com]: my-shell-runner
Please enter the gitlab-ci tags for this runner (comma separated):
shell
Whether to run untagged builds [true/false]:
[false]:
Whether to lock the Runner to current project [true/false]:
[true]:
Registering runner... succeeded runner=ajgHxcNz
Please enter the executor: virtualbox, docker+machine, kubernetes, docker, shell, ssh, docker-ssh+machine, docker-ssh, parallels:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Docker runner registration:
[root@jsc00mca ~]# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://example.com/
Please enter the gitlab-ci token for this runner:
1a2b3c
Please enter the gitlab-ci description for this runner:
[jsc00mca.example.com]: my-docker-runner
Please enter the gitlab-ci tags for this runner (comma separated):
docker
Whether to run untagged builds [true/false]:
[false]:
Whether to lock the Runner to current project [true/false]:
[true]:
Registering runner... succeeded runner=ajgHxcNz
Please enter the executor: virtualbox, docker+machine, kubernetes, docker, shell, ssh, docker-ssh+machine, docker-ssh, parallels:
docker
Please enter the default Docker image (e.g. ruby:2.1):
alpine:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
.gitlab-ci.yml
buildWithShell:
  stage: build
  tags:
    - shell
  script:
    - echo 'Building with the shell executor...'

buildWithDocker:
  image: alpine:latest
  stage: build
  tags:
    - docker
  script:
    - echo 'Building with the docker executor...'
Yes, you can trigger different/mixed runners from a single gitlab-ci pipeline.
First, register a shell runner on the target host and give it a tag (output truncated):
$ gitlab-runner register
...
Please enter the gitlab-ci tags for this runner (comma separated):
my_shell_runner
...
Please enter the executor: virtualbox, docker+machine, docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh:
shell
Within your .gitlab-ci.yml, something like this should work.
The 'test' job runs your test command in a docker container based on the image NAME_OF_IMAGE.
If that succeeds, the 'deploy' job chooses your shell runner based on the tag 'my_shell_runner' and will execute all commands within the script tag on the runner's host (truncated):
test:
  stage: test
  services:
    - docker:dind
  tags:
    - docker-executor
  script:
    - docker run --rm NAME_OF_IMAGE sh -c "TEST_COMMAND_TO_RUN"

deploy:
  stage: deploy
  tags:
    - my_shell_runner
  script:
    - COMMAND_TO_RUN
    - COMMAND_TO_RUN
    - COMMAND_TO_RUN

Can't get tests to pass on Gitlab CI

I've been trying to get our tests to pass on our GitLab CI, but can't. I'm using the stock pipeline configuration that comes with GitLab; all I've had to do is provide the GitLab YAML file to configure the CI.
This is what we're using:
image: maven:3.5.0-jdk-8-alpine

services:
  - postgres:latest

variables:
  POSTGRES_DB: my_test_db
  POSTGRES_USER: my_test_user
  POSTGRES_PASSWORD: ""
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"
  ACTIVE_ENV: test

connect:
  image: postgres
  script:
    # official way to provide password to psql: http://www.postgresql.org/docs/9.3/static/libpq-envars.html
    - export PGPASSWORD=$POSTGRES_PASSWORD
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 'OK' AS status;"

stages:
  - test

test:
  stage: test
  script:
    - "mvn -Denvironments=test -B db-migrator:migrate; mvn -Denvironments=test -DACTIVE_ENV=test -B test"
Everything works perfectly up to the point where the tests run. Then they all error out with similar messages:
383 [main] WARN org.javalite.activeweb.DBSpecHelper - no DB connections are configured, none opened
456 [main] WARN org.javalite.activeweb.DBSpecHelper - no DB connections are configured, none opened
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.528 sec <<< FAILURE! - in app.models.RoleTest
validatePresenceOfUsers(app.models.RoleTest) Time elapsed: 0.071 sec <<< ERROR!
org.javalite.activejdbc.DBException: Failed to retrieve metadata from DB, connection: 'default' is not available
I have one database.properties file that is checked in and is for tests only (our dev and prod envs use jndi). It looks like so:
test.driver=org.postgresql.Driver
test.username=my_test_user
test.password=
test.url=jdbc:postgresql://postgres/edv_test
Again, migrations run using all this exact same config. I just can't figure out why the tests won't run. I understand why it's saying there's no default db, but I don't get why it's not seeing the test settings and configuring that connection as expected.
Just so you know, the Maven environments flag (mvn test -Denvironments=test) only works for the DB-Migrator, not for the tests. Any JavaLite application, whether running normally or as a test, will be looking at ACTIVE_ENV. If this is not set, it will assume development. In test mode, it will be looking at the database.properties block development.test.xxx=yyy, as described in http://javalite.io/database_configuration#property-file-configuration.
Think of it as the "development" environment in "test" mode.
Additionally, DbConfig is not involved in tests, as database connections in tests get special treatment (rolled-back transactions); see: http://javalite.io/testing_with_db_connection
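To illustrate the naming convention the answer refers to, here is a hedged sketch of what the test block in database.properties could look like when ACTIVE_ENV is left at its development default; the values are copied from the question's test.* block and may need adjusting:
# Hypothetical database.properties block for the "development" environment in "test" mode.
development.test.driver=org.postgresql.Driver
development.test.username=my_test_user
development.test.password=
development.test.url=jdbc:postgresql://postgres/edv_test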

Gitlab CI Different executor per stage

Is it possible to have 2 stages in gitlab-ci.yml, one run with a Docker runner and the other run with shell?
Imagine I want to run tests in a Docker container, but I want to run the deploy stage locally in a shell.
Not exactly per stage, but you can have different jobs run by different runners using the tags configuration option, which should give you exactly what you want.
Add the tag docker to the Docker runner and the tag shell to the shell runner (either during runner creation or later under Project settings -> Runners). Then you can set the tags in your .gitlab-ci.yml file:
stages:
  - test
  - deploy

tests:
  stage: test
  tags:
    - docker
  script:
    - [test routine]

deployment:
  stage: deploy
  tags:
    - shell
  script:
    - [deployment routine]