'app-deploy' job needs 'app-verify' job but 'app-verify' is not in any previous stage

I am seeing the following error:
Found errors in your .gitlab-ci.yml:
'app-deploy' job needs 'app-verify' job
but 'app-verify' is not in any previous stage
You can also test your .gitlab-ci.yml in CI Lint
whereas both stages are defined.
The cache is set as below:
cache:
  key: ${CI_PIPELINE_ID}
  paths:
    - $CI_PROJECT_DIR/
    - $CI_PROJECT_DIR/$CONTEXT/
The jobs are defined as below (snippets):
app-build:
  stage: build
  # Extending the maven-build function from maven.yaml
  extends: .maven-build

app-deploy:
  stage: deploy
  extends: .docker-publish
  cache:
    key: ${CI_PIPELINE_ID}
    paths:
      - $CI_PROJECT_DIR/
      - $CI_PROJECT_DIR/$CONTEXT/
  variables:
    DOCKERFILE: Dockerfile
    CONTEXT: app
    OUTPUT: app.war
  needs:
    - app-build
    - app-verify
  dependencies:
    - app-build
    - app-verify
How can I resolve this error so that it goes away and the pipeline runs without errors?
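For reference, GitLab resolves needs against the jobs in the final merged configuration, and every needed job must be defined in a stage that runs before the job that needs it. A minimal sketch of a configuration that would satisfy the linter, assuming a verify stage between build and deploy and a hypothetical .maven-verify template analogous to .maven-build:

stages:
  - build
  - verify
  - deploy

app-verify:
  stage: verify            # must sit in a stage before deploy
  extends: .maven-verify   # hypothetical template, named here only for illustration

app-deploy:
  stage: deploy
  needs:
    - app-build
    - app-verify           # now resolvable: app-verify exists in an earlier stage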

Related

Cannot run Quarkus tests on GitLab CI

Everything works on my machine. The problem is that when I run the tests on GitLab CI I get this error:
AuthenticationResourceTest > signinuser exists => returns a jwt token FAILED
java.lang.RuntimeException at QuarkusTestExtension.java:626
Caused by: java.lang.reflect.InvocationTargetException at NativeMethodAccessorImpl.java:-2
Caused by: java.util.concurrent.CompletionException at CompletableFuture.java:314
Caused by: java.lang.RuntimeException at TestResourceManager.java:457
Caused by: java.lang.IllegalStateException at DockerClientProviderStrategy.java:15
As my project is using Lombok, could this be related to it? (But why does it work on my machine?)
Here is my .gitlab-ci.yml:
stages:
  - build
  - test

build:
  stage: build
  image: openjdk:16
  script: ./gradlew --build-cache quarkusBuild
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: push
    paths:
      - build
      - .gradle

test:
  stage: test
  image: openjdk:16
  script: ./gradlew check
  artifacts:
    name: coverage
    paths:
      - $CI_PROJECT_DIR/build/jacoco-report
    reports:
      junit: jacoco.xml
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: pull
    paths:
      - build
      - .gradle
Really struggling here.
Found the error.
I forgot that Testcontainers needs the Docker dind service.
For those who struggle with this: Testcontainers needs to run a Docker container in order to create a test database.
That's the purpose of the dind service (Docker in Docker).
From
https://www.testcontainers.org/supported_docker_environment/continuous_integration/gitlab_ci/
here is a sample .gitlab-ci.yml that executes tests with Gradle:
# DinD service is required for Testcontainers
services:
  - name: docker:dind
    # explicitly disable tls to avoid docker startup interruption
    command: ["--tls=false"]

variables:
  # Instruct Testcontainers to use the daemon of DinD.
  DOCKER_HOST: "tcp://docker:2375"
  # Instruct Docker not to start over TLS.
  DOCKER_TLS_CERTDIR: ""
  # Improve performance with overlayfs.
  DOCKER_DRIVER: overlay2

test:
  image: gradle:5.0
  stage: test
  script: ./gradlew test
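Applied to the question's pipeline, the fix amounts to attaching the docker:dind service and the Testcontainers-related variables to the existing jobs; a minimal sketch for the test job, reusing the question's openjdk:16 image and ./gradlew check script (cache and artifacts omitted for brevity):

services:
  - name: docker:dind
    command: ["--tls=false"]

variables:
  DOCKER_HOST: "tcp://docker:2375"   # point Testcontainers at the dind daemon
  DOCKER_TLS_CERTDIR: ""             # disable TLS so dind starts without certificates

test:
  stage: test
  image: openjdk:16
  script: ./gradlew check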

Dynamic child pipelines and stop action not working

After adding dynamic child pipelines to our CI pipeline, the on_stop action (e.g. on deleting a branch) stopped working.
In the stop job we delete the created k8s resources, so it is important that it gets executed.
What I noticed is that defining an environment in the child pipeline is the probable cause (without the environment section, the on_stop action works).
Any ideas?
The gitlab-ci.yaml looks like this:
stages:
  ....
  - deploy
  - tests_prepare
  - maintenance_tests
  ....

deploy_branch_to_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    on_stop: stop_deployed_branch_in_k8s
  script:
    - deploy to k8s

stop_deployed_branch_in_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  when: manual
  dependencies: []
  variables:
    GIT_STRATEGY: none
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    action: stop
  script:
    - delete k8s resources

generate_config_tests:
  stage: tests_prepare
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  ....
  script:
    - python3 ./utils/generate-jobs-config.py > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

create_maintenance_tests_pipeline:
  stage: maintenance_tests
  only:
    - branches
  except:
    - master
  trigger:
    strategy: depend
    include:
      - artifact: generated-config.yml
        job: generate_config_tests
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID
generated-config.yml looks something like this:
stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: generate_config_maintenance_tests
  script:
    - deploy and run tests in k8s
If I'm not wrong, you should skip the needs part altogether in the child pipeline, since it is only used for jobs in the same pipeline. Its upstream will be the parent pipeline anyway.
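In other words, the generated child config would keep the job but drop the needs block; a minimal sketch based on the question's generated-config.yml:

stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  script:
    - deploy and run tests in k8s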

GitLab CI: How to fail a pipeline while still running subsequent stages/jobs

I am trying to run some jobs in a "test" stage followed by one job in a "monitor" stage.
The trouble is that if the unit tests fail in the test stage, the entire pipeline fails and my job in the monitor stage is skipped altogether.
I can set the unit tests to allow failure, which lets the monitor stage run, but then the pipeline passes even when the unit tests fail, and I don't want that.
How do I have the monitor stage run its job while still having the pipeline fail if the unit tests fail?
Here is the relevant configuration:
include:
  - project: templates/kubernetes
    ref: master
    file: /.kube-api-version-checks.yaml
  - local: .choose-runner.yaml
    ref: master

.run_specs_script: &run_specs_script |
  ./kubernetes/integration/specs/run-specs.sh $CI_COMMIT_SHA $TEST_NAMESPACE $ECR_BASE_URL/test/$IMAGE_NAME $PROCESSES ${UNIT_TEST_INSTANCE_TYPE:-c5d.12xlarge}

.base_unit_tests:
  image: XXX
  stage: test
  coverage: '/TOTAL\sCOVERAGE:\s\d+\.\d+%/'
  variables:
    GIT_DEPTH: 1
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz

unit_tests:
  extends:
    - .base_unit_tests
    - .integration

unit_tests_dependency_update:
  extends:
    - .base_unit_tests
    - .low-priority

unit_tests_dependencies_next:
  image: XXX
  stage: test
  allow_failure: true
  except:
    - web
    - triggers
  tags:
    - integration-green-kube-runner
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^hint\/upgrade/
  variables:
    GIT_DEPTH: 1
    DEPENDENCIES_NEXT: 1
    IMAGE_NAME: next
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz

unit_tests_datadog:
  extends:
    - .integration
  stage: monitor
  image: node
  variables:
    DD_API_KEY: XXX
  before_script:
    - npm install -g @datadog/datadog-ci
  script:
    - DD_ENV=ci DATADOG_API_KEY="$DD_API_KEY" DATADOG_SITE=datadoghq.com datadog-ci junit upload --service <service> ./tmp
  dependencies:
    - unit_tests
  when: always
This might not be the best solution, but you could add a dedicated stage after your monitor stage with a job that only runs on_failure (https://docs.gitlab.com/ee/ci/yaml/index.html#artifactswhen) and exits with a non-zero status.
fail_pipeline:
  stage: status
  image: bash
  script:
    - exit 1
  when: on_failure
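Note that the extra stage also has to be declared last in the top-level stages list so fail_pipeline runs after monitor; a sketch, assuming the stage names used in the question and the answer:

stages:
  - test
  - monitor
  - status   # only reached to mark the pipeline as failed after monitor has run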

Having a script run only when a manually triggered job fails in GitLab

I have the following script that pulls from a remote template. The remote template has the following stages: build, test, code_analysis, compliance, deploy.
The deploy step is manually triggered and executes the AWS CLI to deploy a SAM project.
I want to add an additional step such that, when the deploy step fails, it executes a script to roll back the CloudFormation stack to its last operational state.
I created a "cleanup-cloudformation-stack-failure" job and tried adding "extends: .deploy", but that didn't work.
I then added an additional stage called "cloudformation_stack_rollback" in the serverless-template.yml file and tried to use a mix of rules and when to get it to trigger on failure, but I'm getting errors flagged by GitLab's linter.
Does anyone know what I'm doing wrong?
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  rules:
    - if: '$CI_JOB_MANUAL == true'
      when: on_failure
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}
You forgot double quotes around true; however, you can use Directed Acyclic Graphs (needs) to execute jobs conditionally:
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  needs:
    - deploy-qas
  when: on_failure
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}

Using more than one dependency in a job in GitLab CI

GitLab CI: here is the pipeline of a project with some stages:
stages:
  - prepare
  - build
  - deploy
  - build_test
  - test
Some stages have more than one job to execute, e.g. one job for each Oracle database environment (aca, spt, fin, ...).
The question is:
My pipeline skipped a test job (test:aca). I understand that happened because another job in the same stage as its dependencies failed: in that screenshot the job deploy:spt failed, and test:aca was skipped.
Look at the test:aca job definition:
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
    - deploy:aca
It doesn't have a dependency on deploy:spt, only on test:build_test and deploy:aca.
How can I get the test:aca job to run?
Have you tried removing deploy:aca and only using test:build_test as a dependency?
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
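Keep in mind that dependencies only controls which artifacts a job downloads; whether test:aca runs at all is still decided by stage order, so any failed job in an earlier stage causes it to be skipped. On GitLab versions that support the needs keyword (DAG), an alternative sketch would let test:aca wait only for its actual upstream jobs:

test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  needs:              # test:aca no longer waits for unrelated jobs such as deploy:spt
    - test:build_test
    - deploy:aca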