I am working on Azure Pipelines with a Windows self-hosted agent.
I need my pipeline to run a PowerShell script; if the script succeeds, the next stage does the deployment, otherwise the task fails, we fix something, and resume the task.
I'll describe what the pipeline does from the start, as it might help understanding.
First, the pipeline calls a template with parameters:
stages:
- template: release.yml@templates
  parameters:
    dbConnectionString: ''

The template release.yml@templates is below:
parameters:
- name: 'dbConnectionString'
  default: ''
  type: string
The first stage simply builds the project; it works fine:

stages:
- stage: Build
  jobs:
  - job: Build_Project
    steps:
    - checkout: none
    - template: build.yml
The second stage depends on the result of the previous one.
For some cases of the template there is no DB to check, so I run the job only if a parameter is provided.
Then I want to run the CompareFile script only if the DB check was successful or there was no parameter.
- stage: Deploy
  dependsOn:
  - Build
  condition: eq(dependencies.Build.result, 'Succeeded')
  jobs:
  - job: CheckDb
    condition: ne('${{ parameters.dbConnectionString }}', '')
    steps:
    - checkout: none
    - template: validate-db.yml@templates
      parameters:
        ConnectionString: '${{ parameters.dbConnectionString }}'
  - job: CompareFiles
    dependsOn: CheckDb
    condition: or(eq(dependencies.CheckDb.result, 'Succeeded'), eq('${{ parameters.dbConnectionString }}', ''))
    steps:
    - checkout: none
    - task: PowerShell@2
      name: compareFiles
      inputs:
        targetType: filePath
        filePath: 'compareFile.ps1'
  - deployment: Deploy2
    dependsOn: CompareFiles
    environment: 'Env ST'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: PowerShell@2
            inputs:
              targetType: filePath
              filePath: 'File.ps1'
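One thing worth checking when a downstream job is unexpectedly skipped: Deploy2 runs with the implicit condition succeeded(), and custom conditions combined with a skipped upstream job (CheckDb) can interact with that in surprising ways. As a hedged sketch (not verified against this exact pipeline), you can spell out the condition on the deployment so the result you actually care about is explicit:

```yaml
# Hypothetical variant: make the condition explicit instead of relying on
# the implicit succeeded(), checking only the direct dependency's result.
- deployment: Deploy2
  dependsOn: CompareFiles
  condition: in(dependencies.CompareFiles.result, 'Succeeded', 'SucceededWithIssues')
  environment: 'Env ST'
```

Inspecting what dependencies.CompareFiles.result actually evaluates to in a failing run (via the expanded YAML in the pipeline logs) usually reveals why the default condition skipped the job.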
The next job compares the files; compareFile.ps1 is below.
The script tries to make the task fail or succeed, but I don't know PowerShell well enough.
I found somewhere that $host.SetShouldExit(10) could make the task fail, so I tried 10 for failure and 0 for success.
I also tried exit values, but for now, testing with $equal = $true, the deployment job "Deploy2" is skipped, so I am blocked:
[CmdletBinding()]
param ()

$equal = $true
if ($equal) {
    # make the pipeline succeed
    $host.SetShouldExit(0)
    exit 0
}
else {
    # make the pipeline fail
    $host.SetShouldExit(10)
    exit 10
}
Would you have ideas why the deployment job is skipped?
I was able to use these exit values to make the pipeline task succeed or fail:

if ($equal) {
    # make the pipeline succeed
    exit 0
}
else {
    # make the pipeline fail
    exit 1
}
I used the PowerShell script in its own stage instead of in a job and it worked: when the task fails, I can do the required manual actions and run the stage again.
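The layout described above can be sketched roughly like this (stage and file names are taken from the question; the exact wiring is an assumption, since only the idea was described):

```yaml
# Hypothetical sketch: the comparison script gets its own stage, so a
# failure can be fixed manually and just that stage re-run.
- stage: CompareFiles
  dependsOn: Build
  jobs:
  - job: Compare
    steps:
    - checkout: none
    - task: PowerShell@2
      inputs:
        targetType: filePath
        filePath: 'compareFile.ps1'
- stage: Deploy
  dependsOn: CompareFiles
```

A failed stage can then be retried from the pipeline run page once the underlying issue is fixed, which matches the "fix something and resume" workflow.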
Cheers,
Claude
If you want to fail more *elegantly*, you could format an error message like:
echo "##vso[task.logissue type=error]Something went very wrong."
exit 1
Basically, it allows you to print a correctly formatted error message in addition to failing the task.
And here is the explanation of why exit 1 is optional:
exit 1 is optional, but is often a command you'll issue soon after an
error is logged. If you select Control Options: Continue on error,
then the exit 1 will result in a partially successful build instead of
a failed build.
The source can be found here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=bash#example-log-an-error
Related
I need to configure a GitLab CI job to be re-executed when it fails, more specifically the deploy job. I set up the job with a retry value and tried to force it to fail to test it, but I couldn't get the job to start again. Here is an example of what I'm trying to do:

deploy:
  stage: deploy
  retry: 2
  script:
    - echo "running..."
    - exit 1
  only: [qa_branch]
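For completeness, retry also takes an expanded form where you can cap the attempts and scope which failure types trigger a retry; a sketch following the GitLab retry keyword syntax (not tested against this particular pipeline):

```yaml
deploy:
  stage: deploy
  retry:
    max: 2                 # retry up to 2 times
    when: script_failure   # only retry when the script itself fails
  script:
    - echo "running..."
    - exit 1
  only: [qa_branch]
```

Note that retries happen automatically and silently as new job attempts; checking the job's attempt list in the UI is the easiest way to confirm the retry actually ran.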
I have a pipeline with teardown and destroy as the last two stages, as you see below. Teardown deletes the application from Kubernetes, and destroy deletes the Kubernetes cluster and other resources as a whole. I have set them to run automatically with allow_failure: true. But I want the last destroy stage to become manual if the teardown stage fails, so that I can cross-check and resume the job later. If teardown passes successfully, destroy should run automatically. How do I set that up?
Because .gitlab-ci.yml does not allow rules with when: on_failure and when: manual at the same time in the same job rule, I used parent-child pipelines to solve the problem. Here is an example.
Create a cleanup job template:

# file name: cleanup_job_tmpl.yml
.cleanup job tmpl:
  script:
    - echo "run cleanup job"
Create the auto cleanup job:

# file name: cleanup_auto.yml
include: cleanup_job_tmpl.yml

cleanup job auto:
  extends: .cleanup job tmpl
  before_script:
    - echo "auto run job"
Create the manual cleanup job:

# file name: cleanup_manual.yml
include: cleanup_job_tmpl.yml

cleanup job manual:
  extends: .cleanup job tmpl
  when: manual
  before_script:
    - echo "manual run job"
And finally .gitlab-ci.yml:

stages:
  - "teardown"
  - "cleanup"

default:
  image: ubuntu:20.04

teardown job:
  stage: teardown
  script:
    - echo "run teardown job and exit 10"
    - exit 10
  artifacts:
    reports:
      dotenv: cleanup.env

cleanup trigger auto:
  stage: cleanup
  when: on_success # (the default is on_success)
  trigger:
    include: cleanup_auto.yml

cleanup trigger manual:
  stage: cleanup
  when: on_failure
  trigger:
    include: cleanup_manual.yml
When the teardown job runs exit 10, the cleanup trigger manual job is triggered; when I remove the exit 10 from the teardown job, the cleanup trigger auto job is triggered.
Here is my GitLab repo demonstrating this case: manual-job-on-previous-stagefailure. It contains example pipelines for both paths: a success pipeline where the auto job runs, and a failure pipeline where the manual job runs.
I am trying to skip a GitLab CI job based on the results of the previous job; however, as a result, the job never runs. I have the impression that rules are evaluated at the beginning of the pipeline, not at the beginning of the job. Is there any way to make this work?
cache:
  paths:
    - .images

stages:
  - prepare
  - build

dirs:
  stage: prepare
  image:
    name: docker.image.me/run:latest
  script:
    - rm -rf .images/*
    - '[ $(($RANDOM % 2)) -eq 1 ] && touch .images/DESKTOP'

desktop:
  stage: build
  needs: ["dirs"]
  image:
    name: docker.image.me/run:latest
  rules:
    - exists:
        - .images/DESKTOP
      when: always
  script:
    - echo "Why is this never launched?"
Dynamically created jobs could be a solution (https://docs.gitlab.com/ee/ci/parent_child_pipelines.html#dynamic-child-pipelines).
In the script section of your "dirs" job, you could generate a YAML file containing your "desktop" job whenever .images/DESKTOP is created; otherwise, the generated YAML file should be empty. The generated file can then be triggered by a separate job that runs after the "dirs" job. I use jsonnet (https://jsonnet.org/) for creating dynamic child pipelines.
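A hedged sketch of such a generator step (the file names and job contents are made up for illustration): write a child pipeline containing the desktop job only when the marker file exists, and a harmless no-op pipeline otherwise.

```shell
# Generate a child pipeline depending on whether the marker file exists.
mkdir -p .images
if [ -f .images/DESKTOP ]; then
  cat > child-pipeline.yml <<'EOF'
desktop:
  script:
    - echo "building desktop"
EOF
else
  cat > child-pipeline.yml <<'EOF'
noop:
  script:
    - echo "nothing to build"
EOF
fi
echo "generated child-pipeline.yml"
```

The generating job would expose child-pipeline.yml as an artifact, and a downstream trigger job would include it via trigger:include with artifact/job references.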
Rules evaluation happens at the beginning of a GitLab pipeline.
Quoting from the GitLab docs (https://docs.gitlab.com/ee/ci/yaml/#rules):
Rules are evaluated when the pipeline is created, and evaluated in order until the first match. When a match is found, the job is either included or excluded from the pipeline, depending on the configuration.
Here the problem seems to be the usage of the exists keyword.
Quoting from the GitLab docs (https://docs.gitlab.com/ee/ci/yaml/#rulesexists):
Use exists to run a job when certain files exist in the repository.
But here it seems that .images/DESKTOP is in the GitLab runner's cache, not in your repository:
cache:
  paths:
    - .images
I have a GitLab CI pipeline like the one below, and a bash script notify.sh that sends different notifications depending on the build stage's result (success or failed). Currently I use a --result argument to control the logic, and I write two jobs for the notify stage (success-notify and failed-notify), assigning the value of --result manually. Is there a way to get the build stage's result directly (something like STAGE_BUILD_STATE) instead of using when?
---
stages:
  - build
  - notify

build:
  stage: build
  script:
    - build something

success-notify:
  stage: notify
  script:
    - bash notify.sh --result success

failed-notify:
  stage: notify
  script:
    - bash notify.sh --result failed
  when: on_failure
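For reference, the notify.sh side of this pattern could look something like the sketch below (entirely hypothetical, since the real script isn't shown; echo stands in for the actual notification call):

```shell
# Parse --result and pick the message to send.
notify() {
  result=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --result) result="$2"; shift 2 ;;
      *) shift ;;
    esac
  done
  case "$result" in
    success) echo "build succeeded" ;;
    failed)  echo "build failed" ;;
    *)       echo "unknown result" ;;
  esac
}

notify --result success
```

As far as I know there is no built-in variable that exposes a previous stage's result to a later job (CI_JOB_STATUS only reflects the current job, in after_script), which is why the two-job when: on_failure pattern above is the common approach.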
I have build and test jobs in my GitLab CI YAML. I want to trigger the build job every evening at 16:00 and the test jobs every morning at 4:00 on GitLab. I know about GitLab CI/CD - Schedules - New Schedule, but I don't know how to write this so it works in the GitLab CI YAML. I have uploaded my GitLab CI YAML file below. Can you show me, please?
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - test

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'
    - pwd
  artifacts:
    paths:
      - 'Output'

test_job:
  stage: test
  only:
    - schedules
  script:
    - 'Output\bin\Debug\NewProject.exe'
Did you try only:variables / except:variables?
First you need to set the proper variable in your schedule, then add only:variables to your YAML config. Example:

...
build_job:
  ...
  only:
    variables:
      - $SCHEDULED_BUILD == "True"

test_job:
  ...
  only:
    variables:
      - $SCHEDULED_TEST == "True"
If you always want a 12-hour delay, you could use just one schedule and add when: delayed:

when: delayed
start_in: 12 hours

UPDATE: As requested in the comments, here is a complete example of a simple pipeline configuration. The build job should run when SCHEDULED_BUILD is set to True, and the test job should run when SCHEDULED_TEST is set to True:

build:
  script:
    - echo only build
  only:
    variables:
      - $SCHEDULED_BUILD == "True"

test:
  script:
    - echo only test
  only:
    variables:
      - $SCHEDULED_TEST == "True"