Use pool name as variable from variable group for deployment task - azure-pipelines-yaml

This question stems from the fact that you can only use global-level variables to define a pool for deployment jobs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops#schema
"Use only global level variables for defining a pool name. Stage/job level variables are not supported to define pool name."
But my config is a bit more complex. I have a main pipeline.yml with several stages, and each stage's jobs are defined in a template, so I have the following structure:
# pipeline.yml (some cuts)
....
stages:
- stage: Build
  displayName: Build all
  jobs:
  - template: ../templates/build.yml
    parameters:
      stage: 'DEV'
- stage: Dev
  displayName: Deploy Dev
  variables:
  - group: GROUP-DEV
  pool:
    name: '$(env._global.agentPool)' # env._global.agentPool is defined in the variable group
  jobs:
  - template: ../templates/deployment.yml
    parameters:
      stage: 'DEV'
# other stages here ....
# templates/deployment.yml
parameters:
- name: stage
  displayName: 'Stage'
  type: string
jobs:
- deployment: Deploy${{ parameters.stage }}
  displayName: Deploy ${{ parameters.stage }}
  environment: ${{ parameters.stage }}
  strategy:
    runOnce:
      deploy:
        steps:
        - download: current
          artifact: art_${{ parameters.stage }}
          displayName: 'Download art_${{ parameters.stage }}'
        # more steps here.....
Now the problem: with this config I get the error:
##[error]Pipeline does not have permissions to use the referenced pool(s) . For authorization details, refer to https://aka.ms/yamlauthz.
which means we hit the limitation mentioned in the first quote (proof: https://developercommunity.visualstudio.com/t/pool-name-in-deployment-job-cannot-be-in-a-variabl/940174)
As soon as I either change the pool name to a hardcoded value or remove the deployment job and use regular steps, everything works fine.
Question: given that, is there any way to rework the templates so they still take the agent pool name from a group variable? I need the deployment job in order to reference the environment.
What was tried already:
passing the pool name as a parameter of templates/deployment.yml and setting the pool inside the job. No luck, same story.
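One option that stays within the documented limitation is to hoist the pool variable to the global (pipeline) level, since only global-level variables may define a pool name for deployment jobs. A sketch, assuming a single pool variable can be shared across stages (the commented-out devAgentPool alternative is a hypothetical plain global variable):

```yaml
# pipeline.yml -- pool variable hoisted to the global (pipeline) level
variables:
- group: GROUP-DEV          # exposes env._global.agentPool globally
# - name: devAgentPool      # alternatively, a hypothetical plain global variable
#   value: 'my-dev-pool'

stages:
- stage: Dev
  displayName: Deploy Dev
  pool:
    name: '$(env._global.agentPool)'
  jobs:
  - template: ../templates/deployment.yml
    parameters:
      stage: 'DEV'
```

The trade-off is that per-stage variable groups can no longer each carry their own pool name; if the stages genuinely need different pools, a distinct global variable per stage would be required.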

Related

'app-deploy' job needs 'app-verify' job but 'app-verify' is not in any previous stage

Seeing:
Found errors in your .gitlab-ci.yml:
'app-deploy' job needs 'app-verify' job
but 'app-verify' is not in any previous stage
You can also test your .gitlab-ci.yml in CI Lint
whereas both stages are defined.
Cache is set as below:
cache:
  key: ${CI_PIPELINE_ID}
  paths:
    - $CI_PROJECT_DIR/
    - $CI_PROJECT_DIR/$CONTEXT/
Stages are defined as below (snippets):
app-build:
  stage: build
  # Extending the maven-build function from maven.yaml
  extends: .maven-build

app-deploy:
  stage: deploy
  extends: .docker-publish
  cache:
    key: ${CI_PIPELINE_ID}
    paths:
      - $CI_PROJECT_DIR/
      - $CI_PROJECT_DIR/$CONTEXT/
  variables:
    DOCKERFILE: Dockerfile
    CONTEXT: app
    OUTPUT: app.war
  needs:
    - app-build
    - app-verify
  dependencies:
    - app-build
    - app-verify
How can I resolve the above error? The error should go away, with no errors in the pipeline run.
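The needs: keyword requires that the referenced job actually exists in an earlier stage. A minimal sketch of a fix (the verify stage name and the job's contents are assumptions, since app-verify is not shown in the snippets above):

```yaml
stages:
  - build
  - verify
  - deploy

app-verify:
  stage: verify            # must come before app-deploy's deploy stage
  extends: .maven-build    # hypothetical; reuse whichever template runs your checks
  script:
    - mvn verify
```

With app-verify defined in a stage that precedes deploy, the "not in any previous stage" lint error should go away.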

Bitbucket Pipeline custom variable is not showing its value in step script

I have created a pipeline for running an automation script from Bitbucket. When I trigger the build with the Docker container, all custom variables work (the correct values are passed), but when I run the same project on the self-hosted machine and pass the variables, they show blank values.
Below is a sample of the pipeline file:
# This is an example Starter pipeline configuration
# Use a skeleton to build, test and deploy using manual and parallel steps
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2

pipelines:
  pull-requests: # Trigger pipeline project on each pull request
    '**':
      - step:
          name: 'version check'
          image: maven:3.3.9
          caches:
            - maven
          script:
            - echo "Your versions here..."
            - mvn --version
      - step:
          name: 'Clean and Build'
          script:
            - mvn -f TestAutomation/pom.xml clean compile
  custom:
    Automation-Run: # Name of this pipeline
      - variables:
          - name: testngFile
            default: "src/test/resources/testng"
          - name: browser
            default: "CHROME"
            allowed-values: # optionally restrict variable values
              - "CHROME"
              - "FIREFOX"
          - name: environment
            default: STAGING
            allowed-values:
              - "STAGING"
              - "QA"
              - "PROD"
          - name: forkCount
            default: "0"
          - name: tags
            default: Regression
      - step:
          name: 'Connect to Runner & Automation-Run'
          runs-on:
            - self.hosted
            - windows
            - testautomation
          script:
            - cd TestAutomation
            - .\Example1.bat ${testngFile} ${browser} ${environment} ${tags}
          artifacts:
            - /target/surefire-reports/**.xml
Inside the Example1.bat file I have the following command:
mvn clean test -Dproject.testngFile=%1 -Dbrowser.name=%2 -Dproject.environment=%3 -Dcucumber.filter.tags=#%4
I expect that when the user enters any values, they are picked up, stored in the variables, and passed to Example1.bat as runtime command-line arguments.
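One possible cause, assuming Bitbucket's Windows self-hosted runners execute the script in PowerShell: there, ${testngFile} is parsed as a PowerShell variable rather than substituted as a pipeline variable, so it expands to blank. Referencing the variables through the environment instead may help. A sketch of the last step only:

```yaml
- step:
    name: 'Connect to Runner & Automation-Run'
    runs-on:
      - self.hosted
      - windows
      - testautomation
    script:
      - cd TestAutomation
      # In PowerShell, pipeline variables are exposed as environment variables
      - .\Example1.bat $env:testngFile $env:browser $env:environment $env:tags
```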

Having a script run only when a manually triggered job fails in GitLab

I have the following script that pulls from a remote template. The remote template has the following stages: build, test, code_analysis, compliance, deploy.
The deploy step is manually triggered and executes the AWS CLI to deploy a SAM project.
I want to add an additional step such that, when the deploy step fails, it executes a script to roll back the CloudFormation stack to its last operational state.
I created a "cleanup-cloudformation-stack-failure" job and tried adding "extends: .deploy", but that didn't work.
I then added an additional stage called "cloudformation_stack_rollback" in the serverless-template.yml file and tried to use a mix of rules and when to get it to trigger on failure, but I'm getting errors flagged by GitLab's linter.
Does anyone know what I'm doing wrong?
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  rules:
    - if: '$CI_JOB_MANUAL == true'
      when: on_failure
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}
You forgot the double quotes around true (CI/CD variables compare as strings, so it should be '$CI_JOB_MANUAL == "true"'). However, you can also use directed acyclic graphs (needs) to execute jobs conditionally:
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  needs:
    - deploy-qas
  when: on_failure
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}

Use gitlab environment(or branch) name in sources

Good day, all.
In the Serverless Framework's serverless.yml I have a database variable:
environment:
  DATABASE_NAME: ${'test-db'}
In GitLab CI I am trying to replace the database name with the branch or environment name in the serverless.yml file. The serverless deploy command uses the serverless.yml content to deploy resources.
I tried:
DATABASE_NAME: ${ CI_ENVIRONMENT_NAME }
DATABASE_NAME: ${ $CI_ENVIRONMENT_NAME }
.gitlab-ci.yml:
image: ~some-nodejs-image

stages:
  - deploy

deploy_development:
  stage: deploy
  script:
    - serverless deploy -v
  environment:
    name: development
  only:
    - develop
I think you are conflating bash syntax with the variable syntax allowed in serverless.yml.
Try:
DATABASE_NAME: ${env:CI_ENVIRONMENT_NAME}
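Putting that together in serverless.yml, a sketch (the 'test-db' fallback after the comma is an assumption; Serverless variable syntax allows a default value there):

```yaml
provider:
  environment:
    # Falls back to 'test-db' when CI_ENVIRONMENT_NAME is not set (e.g. local runs)
    DATABASE_NAME: ${env:CI_ENVIRONMENT_NAME, 'test-db'}
```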

How to create multiple stages in serverless framework

I'm trying to create multiple stages in Serverless with no success.
Here is my serverless.yml:
service: some-cache-updater
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
functions:
  scheduledUpdater:
    handler: handler.scheduledUpdater
    timeout: 120
What I wish to add is a prod stage with a different timeout.
Can I do it in the same YAML?
An example or a reference would be helpful... thanks.
Use Serverless' $self reference interpolation which can include further interpolation.
Define a custom variable where necessary. You can also use a default value if the variable doesn't exist.
Example:
service: some-cache-updater

custom:
  functimeout:
    prod: 120
    uat: 60

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:stage, 'dev'}

functions:
  scheduledUpdater:
    handler: handler.scheduledUpdater
    # Look up the stage key from custom.functimeout. If it doesn't exist,
    # default to 10.
    timeout: ${self:custom.functimeout.${self:provider.stage}, '10'}
Then, when you deploy, you can pass the --stage prod or --stage uat argument. In this example, not setting the stage will default to dev.
serverless.yml:
...
provider:
  stage: ${opt:stage, 'dev'}
...
Command line:
sls deploy --stage prod
${opt:stage, 'dev'} takes the value passed via the command-line --stage option, in this case prod. If no option is passed, dev is used as the default.
More info here:
https://serverless.com/framework/docs/providers/aws/guide/variables/#recursively-reference-properties