GitLab CI: how to make the next stage dynamic? - gitlab-ci

I have a pipeline case. There are 2 files:
.base_integration_test.yml - integration tests without Kafka
.base_integration_test_with_kafka.yml - integration tests with Kafka
include:
  # PRODUCT
  - project: 'gitlabci/integration-test'
    ref: dev_v2
    file:
      - 'spark/.base_integration_test.yml'
      - 'spark/.base_integration_test_with_kafka.yml'
The scenario is selected in a preliminary step; I need to choose either
.base_integration_test:
  variables:
    COVERAGE_SOURCE: "./src"
  extends: .base_integration_test
or
.base_integration_test_with_kafka:
  variables:
    COVERAGE_SOURCE: "./src"
  extends: .base_integration_test_with_kafka
How to do it better?
P.S. Here is what I tried. I made a stage:
prepare_test:
  image: $CI_REGISTRY/platform/docker-images/vault:1.8
  stage: prepare_test
  script:
    - export CICD_KAFKA_HOST=$(cat test/fixtures.py | grep KAFKA_HOST)
    - >
      if [ "$CICD_KAFKA_HOST" != "" ]; then
        export CICD_KAFKA_HOST="true"
      else
        export CICD_KAFKA_HOST="false"
      fi
    - echo "CICD_KAFKA_HOST=$CICD_KAFKA_HOST" >> dotenv.env
    - env | sort -f
  artifacts:
    reports:
      dotenv: dotenv.env
    expire_in: 6000 seconds
And in the next stage:
integration_test:
  variables:
    COVERAGE_SOURCE: "./src"
  extends: .base_integration_test
  dependencies:
    - prepare_test
  rules:
    - if: $CICD_KAFKA_HOST == "false"
    - when: never
but integration_test doesn't even show up when the pipeline starts.

Related

Passing variables between jobs in an Azure Pipeline with an empty result

I am writing an Azure Pipelines YAML that needs to pass variables between jobs, but the variables are not passing through; the second job receives an empty variable.
Here is my pipeline:
jobs:
  - job: UpdateVersion
    variables:
      terraformRepo: ${{ parameters.terraformRepo }}
    pool:
      vmImage: ubuntu-latest
    steps:
      - checkout: self
        persistCredentials: true
      - checkout: ${{ parameters.terraformRepo }}
      - task: AzureCLI@2
        displayName: PerformVerUpdate
        inputs:
          azureSubscription: ${{ parameters.azureSubscriptionName }}
          scriptType: bash
          scriptLocation: inlineScript
          inlineScript: |
            echo Step 3 result
            echo "Reponame $Reponame"
            echo "notify $notify"
            echo "pullRequestId $pullRequestId"
            echo "##vso[task.setvariable variable=pullRequestId;isOutput=true;]$pullRequestId"
            echo "##vso[task.setvariable variable=Reponame;isOutput=true;]$Reponame"
            echo "##vso[task.setvariable variable=notify;isOutput=true;]true"
        Name: PerformVerUpdate
  - job: SlackSuccessNotification
    dependsOn: UpdateVersion
    condition: and(succeeded(), eq(dependencies.UpdateVersion.outputs['PerformVerUpdate.notify'], 'true'))
    pool:
      vmImage: 'ubuntu-latest'
    variables:
      - group: platform-alerts-webhooks
      - name: notify_J1
        value: $[ dependencies.UpdateVersion.outputs['PerformVerUpdate.notify'] ]
      - name: pullRequestId_J1
        value: $[ dependencies.UpdateVersion.outputs['PerformVerUpdate.pullRequestId'] ]
      - name: Reponame_J1
        value: $[ dependencies.UpdateVersion.outputs['PerformVerUpdate.Reponame'] ]
    steps:
      - task: AzurePowerShell@5
        displayName: Slack Notification
        inputs:
          pwsh: true
          azureSubscription: ${{ parameters.azureSubscriptionName }}
          ScriptType: 'InlineScript'
          TargetAzurePs: LatestVersion
          inline: |
            write-host "Reponame $(Reponame_J1)"
            write-host "pullRequest $(pullRequestId_J1)"
I've tried many different syntaxes, but the variables still don't pass between the two jobs - e.g. the condition passes a null result to the second job (Expanded: and(True, eq(Null, 'true'))). Could anyone help with this?
Firstly, 'Name' should be lowercase 'name':
name: PerformVerUpdate
The rest of the syntax seems fine (I tested it with a Bash task, because I do not have an Azure subscription).
If renaming 'Name' does not help, I suppose the problem may be that your bash script runs inside the AzureCLI@2 task rather than a plain Bash task.
Maybe as a workaround you could add a new Bash task right after AzureCLI@2 and set the output variable for the next job there, as sketched below.
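A minimal sketch of that workaround, assuming the AzureCLI@2 step first publishes the value to later steps with a non-output setvariable (the step name OutputVars is illustrative, not from the original pipeline):

      - task: AzureCLI@2
        inputs:
          azureSubscription: ${{ parameters.azureSubscriptionName }}
          scriptType: bash
          scriptLocation: inlineScript
          inlineScript: |
            # make the value available to later steps in this same job
            echo "##vso[task.setvariable variable=pullRequestId]$pullRequestId"
      - bash: |
          # re-emit the values as job outputs from a plain Bash step
          echo "##vso[task.setvariable variable=pullRequestId;isOutput=true]$(pullRequestId)"
          echo "##vso[task.setvariable variable=notify;isOutput=true]true"
        name: OutputVars

The second job would then read dependencies.UpdateVersion.outputs['OutputVars.notify'] instead of PerformVerUpdate.notify.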

GitLab CI: Unable to test Socket.IO endpoints when FF_NETWORK_PER_BUILD is 1

I use GitLab CI to run E2E tests against Socket.IO endpoints, and it worked correctly until I set FF_NETWORK_PER_BUILD to 1 in the .gitlab-ci.yml file.
No specific error is thrown; what is the problem?
This is how I connect to the socket server in the Jest test:
const address = app.listen().address(); // returns { address: '::', family: 'IPv6', port: 42073 }
const baseAddress = `http://${address.host}:${address.port}`;
socket = io(baseAddress, {
  transports: ['websocket'],
  auth: { token: response.body.token },
  forceNew: true,
});
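One thing worth noting: Node's server.address() returns { address, family, port } and has no host property, so address.host above is undefined. A minimal sketch of deriving the host from the actual fields (falling back to loopback for the wildcard binds '::' and '0.0.0.0') would be:

const server = app.listen();
const { address, port } = server.address(); // e.g. { address: '::', family: 'IPv6', port: 42073 }
// '::' and '0.0.0.0' are wildcard bind addresses, not dialable hosts
const host = address === '::' || address === '0.0.0.0' ? '127.0.0.1' : address;
const baseAddress = `http://${host}:${port}`;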
.gitlab-ci.yml
image: node:16
stages:
  - dependencies
  - e2e
  - build
cache:
  paths:
    - node_modules
dependency_job:
  stage: dependencies
  script:
    - npm ci
test_e2e:
  stage: e2e
  variables:
    NODE_ENV: test
    PORT: 3000
    THROTTLE_TTL: 60000
    THROTTLE_LIMIT: 1
    SESSION_SECRET: somesecret
    DOMAIN: console.okhtapos.com
    POSTGRES_PASSWORD: password
    DATABASE_URL: "postgresql://postgres:password@postgres:5432/postgres?schema=public"
    REDIS_HOST: redis
    REDIS_PORT: 6379
    REDIS_VOLATILE_HOST: redis
    REDIS_VOLATILE_PORT: 6379
    SESSIONS_REDIS_PREFIX: "sess:"
    SESSIONS_SECRET: somesecret
    SESSIONS_NAME: omni-session
    GRPC_URL: localhost:5004
    OCTO_CENTRAL_GRPC_URL: localhost:5005
    CUSTOMERS_ATTRIBUTES_DATABASE_NAME: omnichannel-customer-attributes
    CUSTOMERS_ATTRIBUTES_MAX: 100
    CUSTOMERS_ATTRIBUTES_MAX_TOTAL_SIZE_IN_BYTES: 5000
    MONGODB_URL: mongodb://mongo:27017
    CUSTOMERS_ATTRIBUTES_MAX_QUARANTINE_DAYS: 7
    KAFKA_BROKERS: kafka:9092
    KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092,INTERNAL://localhost:9093"
    KAFKA_BROKER_ID: "1"
    KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"
    KAFKA_LISTENERS: "PLAINTEXT://0.0.0.0:9092,INTERNAL://0.0.0.0:9093"
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    LABELS_MAX_QUARANTINE_DAYS: 7
    LABELS_MAX: 100
    DEPARTMENTS_MAX: 100
    AUTHENTICATION_GATEWAY_SECRET: someGatewaySecret
    CASSANDRA_CONTACT_POINTS: cassandra:9042
    CASSANDRA_LOCAL_DATA_CENTER: datacenter1
    CASSANDRA_KEYSPACE: omnichannel
    FF_NETWORK_PER_BUILD: 1
    ZOOKEEPER_CONNECT: zookeeper:2181
    BROKER_ID: 1
  services:
    - redis:latest
    - postgres:15beta3
    - mongo:latest
    - cassandra:latest
    - name: debezium/zookeeper:2.0.0.Beta1
      alias: zookeeper
    - name: debezium/kafka:2.0.0.Beta1
      alias: kafka
  script:
    - npx prisma generate
    - npx prisma migrate deploy
    - npm run test:e2e
build_job:
  stage: build
  script:
    - npm run build

How to return an artifact from a child job to the parent pipeline?

I use a trigger to dynamically select the test job:
prepare_test:
  image: $CI_REGISTRY/platform/docker-images/vault:1.8
  variables:
    CONTEXT_TEST: |
      include:
        # PRODUCT
        - project: 'gitlabci/integration-test'
          ref: dev_v2
          file:
            - 'spark/.base_integration_test.yml'
            - 'spark/.base_integration_test_with_kafka.yml'
    INTEGRATION_TEST: |
      $CONTEXT_TEST
      integration_test:
        variables:
          COVERAGE_SOURCE: "./src"
        extends: .base_integration_test
    INTEGRATION_TEST_WITH_KAFKA: |
      $CONTEXT_TEST
      integration_test:
        variables:
          COVERAGE_SOURCE: "./src"
        extends: .base_integration_test_with_kafka
  stage: prepare_test
  script:
    - export CICD_KAFKA_HOST=$(cat test/fixtures.py | grep KAFKA_HOST)
    - >
      if [ "$CICD_KAFKA_HOST" != "" ]; then
        export CICD_KAFKA_HOST="true"
        echo "$INTEGRATION_TEST_WITH_KAFKA" >> test.yml
      else
        export CICD_KAFKA_HOST="false"
        echo "$INTEGRATION_TEST" >> test.yml
      fi
    - env | sort -f
  artifacts:
    paths:
      - test.yml
    expire_in: 6000 seconds
# --------------- Integration test --------------- ###
integration_test:
  stage: test
  trigger:
    include:
      - artifact: test.yml
        job: prepare_test
    strategy: depend
After the child integration_test completes, it creates coverage-report.xml.
How do I return coverage-report.xml to the parent pipeline?
You can use the GitLab API:
my-job:
  image: ...
  stage: ...
  script:
    - >
      export CI_CHILD_PIPELINE_ID=$(curl --header "PRIVATE-TOKEN: $GITLAB_USER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/bridges" | jq ".[].downstream_pipeline.id")
    - echo $CI_CHILD_PIPELINE_ID
    - >
      export CI_CHILD_JOB_ID=$(curl --header "PRIVATE-TOKEN: $GITLAB_USER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_CHILD_PIPELINE_ID/jobs" | jq '.[].id')
    - echo $CI_CHILD_JOB_ID
    - 'curl --output artifacts.zip --header "PRIVATE-TOKEN: $GITLAB_USER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/$CI_CHILD_JOB_ID/artifacts"'
    - unzip artifacts.zip
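Note that jq '.[].id' prints the IDs of every job in the child pipeline; if the child pipeline contains more than one job, you would likely want to select the job by name, for example (using the integration_test job name from above):

    - >
      export CI_CHILD_JOB_ID=$(curl --header "PRIVATE-TOKEN: $GITLAB_USER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_CHILD_PIPELINE_ID/jobs" | jq '.[] | select(.name == "integration_test") | .id')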

Drone template not triggering build

The following is how our .drone.yml looks (the templates are also listed below); this is an example configuration very close to what we want in production. The reason we are using templates is that our staging and production have similar configurations with different values (hence the circuit template), and we wanted to remove the duplication using the template circuit.yaml.
But currently we are unable to do so. If I don't define the test.yaml template and import the test step without a template (keeping the circuit template to avoid the duplicate declaration of the staging and production builds), the drone build fails with:
"template converter: template name given not found"
If I define the test step as a template, the test step works, but on creating a tag I see the following error:
{"commit":"28ac7ad3a01728bd1e9ec2992fee36fae4b7c117","event":"tag","level":"info","msg":"trigger: skipping build, no matching pipelines","pipeline":"test","ref":"refs/tags/v1.4.0","repo":"meetme2meat/drone-example","time":"2022-01-07T19:16:15+05:30"}
---
kind: template
load: test.yaml
data:
  commands:
    - echo "machine github.com login $${GITHUB_LOGIN} password $${GITHUB_PASSWORD}" > /root/.netrc
    - chmod 600 /root/.netrc
    - go clean -testcache
    - echo "Running test"
    - go test -race ./...
---
kind: template
load: circuit.yaml
data:
  deploy: deploy
  create_tags:
    commands:
      - echo "Deploying version $DRONE_SEMVER"
      - echo -n "$DRONE_SEMVER,latest" > .tags
  backend_image:
    version: ${DRONE_SEMVER}
    tags:
      - '${DRONE_SEMVER}'
      - latest
And the templates are below.
test.yaml
kind: pipeline
type: docker
name: test
steps:
  - name: test
    image: golang:latest
    environment:
      GITHUB_LOGIN:
        from_secret: github_username
      GITHUB_PASSWORD:
        from_secret: github_token
    commands:
      {{range .input.commands }}
      - {{ . }}
      {{end}}
    volumes:
      - name: deps
        path: /go
  - name: build
    image: golang:alpine
    commands:
      - go build -v -o out .
    volumes:
      - name: deps
        path: /go
volumes:
  - name: deps
    temp: {}
trigger:
  branch:
    - main
  event:
    - push
    - pull_request
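As an aside, Drone skips any pipeline whose trigger does not match the incoming event, and the trigger above only matches push and pull_request, which lines up with the "no matching pipelines" message for the refs/tags/v1.4.0 tag event. A minimal sketch of a trigger that also matches tags, assuming tag builds are wanted for this pipeline, could be:

trigger:
  event:
    - push
    - pull_request
    - tag

Note that a branch filter such as branch: main can exclude tag events entirely, since a tag ref carries no branch.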
circuit.yaml
kind: pipeline
type: docker
name: {{ .input.deploy }}
steps:
  - name: create-tags
    image: alpine
    commands:
      {{range .input.create_tags.commands }}
      - {{ . }}
      {{end}}
  - name: build
    image: plugins/docker
    environment:
      GITHUB_LOGIN:
        from_secret: github_username
      GITHUB_PASSWORD:
        from_secret: github_token
      VERSION: {{ .input.backend_image.version }}
      SERVICE: circuits
    settings:
      auto_tag: false
      repo: ghcr.io/meetme2meat/drone-ci-example
      registry: ghcr.io

Why is this rule preventing my GitLab stage from running?

In my .gitlab-ci.yml file I have this stage, which uses environment variables from a previous stage's artifacts:
build_dev_containers:
  stage: build_dev_containers
  variables:
    CI_DEBUG_TRACE: "true"
  script:
    - whoami
…and it outputs the following debug information:
++ DEV_CONTAINERS=true
If I change it by adding the following rule, the stage no longer runs:
rules:
  - if: '$DEV_CONTAINERS == "true"'
Any idea what I could be doing wrong?
Not sure if this information adds any value, but just in case:
My previous stage outputs a .env file in its artifacts, and it contains the value
DEV_CONTAINERS=true
Here is the complete file. The PowerShell script creates package.env in the root path:
image: microsoft/dotnet:latest
variables:
  GIT_RUNNER_PATH: 'C:\GitLab'
  SCRIPTS_PATH: '.\Lava-Tools\BuildAndDeploy\BuildServer'
stages:
  - dev_deploy
  - build_dev_containers
dev_deploy:
  stage: dev_deploy
  tags:
    - lava
  variables:
    GIT_CLONE_PATH: '$GIT_RUNNER_PATH/builds/d/$CI_COMMIT_SHORT_SHA/$CI_PROJECT_NAME'
  script:
    - 'powershell -noprofile -noninteractive -executionpolicy Bypass -command ${SCRIPTS_PATH}\createdevdeployvars.ps1 -Branch "${CI_COMMIT_REF_NAME}" -ShortCommitHash "${CI_COMMIT_SHORT_SHA}"'
  artifacts:
    reports:
      dotenv: package.env
build_dev_containers:
  stage: build_dev_containers
  image: docker.repo.ihsmarkit.com/octo/alpine/build/dotnet:latest
  tags:
    - lava-linux-containers
  variables:
    CI_DEBUG_TRACE: "true"
  script:
    - whoami
  rules:
    - if: '$DEV_CONTAINERS == "true"'
Rules are evaluated before any job runs, so a rule cannot evaluate output produced by an earlier job.
As a workaround I used if statements in the script: section:
build_dev_containers:
  stage: build_dev_containers
  image: docker.repo.ihsmarkit.com/octo/alpine/build/dotnet:latest
  tags:
    - lava-linux-containers
  script:
    - if [ "$DEV_CONTAINERS" == "true" ]; then echo "DEV_CONTAINERS is true - running"; else echo "DEV_CONTAINERS is not true - skipping"; exit 0; fi
    - whoami
deploy_dev_containers:
  stage: deploy_dev_containers
  tags:
    - lava
  script:
    - |
      if ( "$DEV_CONTAINERS" -eq "true" ) {
        Write-Output "DEV_CONTAINERS is true - running"
      }
      else {
        Write-Output "DEV_CONTAINERS is not true - skipping"
        exit 0
      }
    - ls