Using more than one dependency in a job in GitLab CI - Selenium

GitLab CI: here is the pipeline of a project with some stages:
stages:
  - prepare
  - build
  - deploy
  - build_test
  - test
Some stages have more than one job to execute, e.g. one job for each Oracle database environment (aca, spt, fin, ...).
The question is:
My pipeline skipped a test job (test:aca). I understand this happened because a job in a previous stage failed; in this case the job deploy:spt failed, but my test:aca was skipped.
Look at the test:aca job script:
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
    - deploy:aca
It doesn't have a dependency on deploy:spt, only on test:build_test and deploy:aca. How can I get the test:aca job to run?

Have you tried removing deploy:aca and only using test:build_test as a dependency?
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
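If you need to keep deploy:aca, a sketch of an alternative (not from the answer above) is the needs keyword available in newer GitLab versions: it builds a job graph so that test:aca waits only for the jobs it actually needs, instead of the whole previous stage.
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  # With needs, this job starts once test:build_test and deploy:aca succeed,
  # even if an unrelated job such as deploy:spt fails.
  needs:
    - test:build_test
    - deploy:aca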

'app-deploy' job needs 'app-verify' job but 'app-verify' is not in any previous stage

I am seeing:
Found errors in your .gitlab-ci.yml:
'app-deploy' job needs 'app-verify' job
but 'app-verify' is not in any previous stage
You can also test your .gitlab-ci.yml in CI Lint
whereas both stages are defined.
The cache is set as below:
cache:
  key: ${CI_PIPELINE_ID}
  paths:
    - $CI_PROJECT_DIR/
    - $CI_PROJECT_DIR/$CONTEXT/
The stages are defined as below (snippets):
app-build:
  stage: build
  # Extending the maven-build function from maven.yaml
  extends: .maven-build

app-deploy:
  stage: deploy
  extends: .docker-publish
  cache:
    key: ${CI_PIPELINE_ID}
    paths:
      - $CI_PROJECT_DIR/
      - $CI_PROJECT_DIR/$CONTEXT/
  variables:
    DOCKERFILE: Dockerfile
    CONTEXT: app
    OUTPUT: app.war
  needs:
    - app-build
    - app-verify
  dependencies:
    - app-build
    - app-verify
How do I resolve the above error? The error should go away and the pipeline should run without errors.
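For reference, this error usually means that no job named app-verify is defined in a stage that runs before app-deploy's stage. A minimal sketch of the shape GitLab expects, assuming app-verify belongs in its own verify stage (the stage name and script are placeholders, not from the question):
stages:
  - build
  - verify   # app-verify's stage must be listed before deploy
  - deploy

app-verify:
  stage: verify
  script:
    - echo "verify"   # placeholder; the real job would extend a verification template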

Dynamic child pipelines and stop action not working

After adding dynamic child pipelines to our CI pipeline, the on_stop action (e.g. when deleting a branch) stopped working.
In the stop job we delete the created k8s resources, so it is important that it gets executed.
What I noticed is that defining an environment in the child pipeline is the probable cause (without the environment section, the on_stop action works).
Any ideas?
The gitlab-ci.yaml looks like this:
stages:
  ....
  - deploy
  - tests_prepare
  - maintenance_tests
  ....

deploy_branch_to_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    on_stop: stop_deployed_branch_in_k8s
  script:
    - deploy to k8s

stop_deployed_branch_in_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  when: manual
  dependencies: []
  variables:
    GIT_STRATEGY: none
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    action: stop
  script:
    - delete k8s resources

generate_config_tests:
  stage: tests_prepare
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  ....
  script:
    - python3 ./utils/generate-jobs-config.py > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

create_maintenance_tests_pipeline:
  stage: maintenance_tests
  only:
    - branches
  except:
    - master
  trigger:
    strategy: depend
    include:
      - artifact: generated-config.yml
        job: generate_config_tests
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID
The generated-config.yml looks something like this:
stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: generate_config_maintenance_tests
  script:
    - deploy and run tests in k8s
If I'm not wrong, you should skip the needs part altogether in the child pipeline, since it is only used for jobs in the same pipeline. Its upstream will be the parent pipeline anyway.
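Following that suggestion, a sketch of the generated child config with the needs block simply dropped (everything else kept as in the question):
stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  script:
    - deploy and run tests in k8s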

Gitlab CI: How to fail a pipeline while still running subsequent stages/jobs

I am trying to run some jobs in a "test" stage followed by one job in a "monitor" stage.
The trouble is that if the unit tests fail in the test stage, the entire pipeline fails and my job in the monitor stage is skipped altogether.
I can set the unit tests to allow failure, which lets the monitor stage run, but the pipeline will pass if the unit tests fail, and I don't want that.
How do I have the monitor stage run its job while still having the pipeline fail if the unit tests fail?
Here is the relevant configuration:
include:
  - project: templates/kubernetes
    ref: master
    file: /.kube-api-version-checks.yaml
  - local: .choose-runner.yaml
    ref: master

.run_specs_script: &run_specs_script |
  ./kubernetes/integration/specs/run-specs.sh $CI_COMMIT_SHA $TEST_NAMESPACE $ECR_BASE_URL/test/$IMAGE_NAME $PROCESSES ${UNIT_TEST_INSTANCE_TYPE:-c5d.12xlarge}

.base_unit_tests:
  image: XXX
  stage: test
  coverage: '/TOTAL\sCOVERAGE:\s\d+\.\d+%/'
  variables:
    GIT_DEPTH: 1
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz

unit_tests:
  extends:
    - .base_unit_tests
    - .integration

unit_tests_dependency_update:
  extends:
    - .base_unit_tests
    - .low-priority

unit_tests_dependencies_next:
  image: XXX
  stage: test
  allow_failure: true
  except:
    - web
    - triggers
  tags:
    - integration-green-kube-runner
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^hint\/upgrade/
  variables:
    GIT_DEPTH: 1
    DEPENDENCIES_NEXT: 1
    IMAGE_NAME: next
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz

unit_tests_datadog:
  extends:
    - .integration
  stage: monitor
  image: node
  variables:
    DD_API_KEY: XXX
  before_script:
    - npm install -g @datadog/datadog-ci
  script:
    - DD_ENV=ci DATADOG_API_KEY="$DD_API_KEY" DATADOG_SITE=datadoghq.com datadog-ci junit upload --service <service> ./tmp
  dependencies:
    - unit_tests
  when: always
This might not be the best solution, but you could add a dedicated stage after your monitor stage that only runs on_failure (https://docs.gitlab.com/ee/ci/yaml/index.html#artifactswhen) and exits with a non-zero code.
fail_pipeline:
  stage: status
  image: bash
  script:
    - exit 1
  when: on_failure
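Note that the new stage has to be declared after monitor for this job to be scheduled; a minimal sketch of the stages list, assuming the stage names from the question:
stages:
  - test
  - monitor
  - status   # fail_pipeline runs here, only when an earlier job failed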

How can I schedule jobs at different times in GitLab

I have build and test jobs in my GitLab CI YAML.
I want to trigger the build job every evening at 16:00, and the test jobs every morning at 4:00 on GitLab.
I know about GitLab CI/CD - Schedules - New Schedule, but I don't know how to write this in the GitLab CI YAML so that it works.
I have included my GitLab CI YAML file. Can you show me, please?
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - test

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'
    - pwd
  artifacts:
    paths:
      - 'Output'

test_job:
  stage: test
  only:
    - schedules
  script:
    - 'Output\bin\Debug\NewProject.exe'
Did you try only:variables / except:variables?
First you need to set the proper variable in your schedule, then add only:variables to your YAML config. Example:
...

build_job:
  ...
  only:
    variables:
      - $SCHEDULED_BUILD == "True"

test_job:
  ...
  only:
    variables:
      - $SCHEDULED_TEST == "True"
If you always want a 12-hour delay between them, you could use just one schedule and add when: delayed:
when: delayed
start_in: 12 hours
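Applied to the test job from the question, a sketch (assuming the single schedule fires the build at 16:00, so the delayed test starts around 4:00):
test_job:
  stage: test
  only:
    - schedules
  when: delayed
  start_in: 12 hours
  script:
    - 'Output\bin\Debug\NewProject.exe'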
UPDATE: as requested in the comments, here is a complete example of a simple pipeline configuration. The build job should run when SCHEDULED_BUILD is set to True, and the test job should run when SCHEDULED_TEST is set to True:
build:
  script:
    - echo only build
  only:
    variables:
      - $SCHEDULED_BUILD == "True"

test:
  script:
    - echo only test
  only:
    variables:
      - $SCHEDULED_TEST == "True"
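To map this back to the times in the question, you would create two pipeline schedules under CI/CD > Schedules, each with its own cron pattern and variable; a sketch (the schedule names and exact cron values are assumptions):
# Schedule "evening build":  cron 0 16 * * *  with variable SCHEDULED_BUILD = "True"
# Schedule "morning tests":  cron 0 4 * * *   with variable SCHEDULED_TEST = "True"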

GitLab: after a tag is pushed

Is it possible to configure the project's gitlab-ci.yml in such a way that, after a tag has been pushed, it runs several commands? If so, how do you do it? I would also like to define the variables that I want to use in these commands.
My gitlab-ci looks like this:
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - composer install --no-ansi
    - vendor/bin/phar-composer build
  artifacts:
    paths:
      - example.phar
  tags:
    - php:7.0

deploy:
  stage: deploy
  only:
    - tags
  dependencies:
    - build
  script:
    - cp example.phar /opt/example/
  tags:
    - php:7.0
It's about running example.phar bin/console command1 $VARIABLE1 $VARIABLE2 $VARIABLE3 $VARIABLE4.
Please help me because I am not completely familiar with these matters.
You can trigger a job when a tag is pushed using the only parameter:
build:
  stage: build
  image: alpine:3.6
  script:
    - echo "A tag has been pushed!"
  only:
    - tags
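To connect this to the command from the question, a sketch of a tag-only deploy job that defines the variables and runs the console command (the variable values are placeholders, not from the question):
deploy:
  stage: deploy
  only:
    - tags
  variables:
    VARIABLE1: "value1"   # placeholder values; set these to whatever command1 expects
    VARIABLE2: "value2"
    VARIABLE3: "value3"
    VARIABLE4: "value4"
  dependencies:
    - build
  script:
    - cp example.phar /opt/example/
    - example.phar bin/console command1 $VARIABLE1 $VARIABLE2 $VARIABLE3 $VARIABLE4
  tags:
    - php:7.0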