Want to run two separate pipelines from one .gitlab-ci.yml file against two separate branches, deploying into two separate AWS accounts - amazon-s3

I currently run all pipelines against AWS account "A" for all branches. Now I want the pipeline for AWS account "B" to run against the master branch only, while the pipelines for all other branches keep targeting account "A". Please guide me on how to configure the pipeline to run AWS CLI commands against two accounts from the same .gitlab-ci.yml file. I just want to host some files in S3 through "aws s3" CLI commands.
Regards
Aniket

Use only:refs and except:refs inside .gitlab-ci.yml:
job_for_master_only:
  only:
    refs:
      - master
  script:
    - command_for_B_account

job_for_other_branches:
  except:
    refs:
      - master
  script:
    - command_for_A_account
Reference: https://docs.gitlab.com/ee/ci/yaml/#onlyrefsexceptrefs
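For the S3 use case specifically, the script lines could look something like this (a sketch, not from the original answer; the bucket names and the B_/A_-prefixed credential variables are assumptions, stored as project CI/CD variables):
job_for_master_only:
  only:
    refs:
      - master
  script:
    # Hypothetical credentials for account B, stored as CI/CD variables
    - export AWS_ACCESS_KEY_ID="$B_AWS_ACCESS_KEY_ID"
    - export AWS_SECRET_ACCESS_KEY="$B_AWS_SECRET_ACCESS_KEY"
    - aws s3 sync ./public s3://example-bucket-account-b/

job_for_other_branches:
  except:
    refs:
      - master
  script:
    # Hypothetical credentials for account A
    - export AWS_ACCESS_KEY_ID="$A_AWS_ACCESS_KEY_ID"
    - export AWS_SECRET_ACCESS_KEY="$A_AWS_SECRET_ACCESS_KEY"
    - aws s3 sync ./public s3://example-bucket-account-a/
The AWS CLI picks up the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables automatically, so each job talks to its own account.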
Edit after comment
If the commands are the same with only different configurations, then you can use a job template. A job template needs to have a name that starts with a '.'. Use the extends keyword to utilize the template:
.job_template:
  script:
    - some_command_that_use_environment_variable_ACCOUNT_ID

job_for_master_only:
  extends:
    - .job_template
  only:
    refs:
      - master
  variables:
    ACCOUNT_ID: B

job_for_other_branches:
  extends:
    - .job_template
  except:
    refs:
      - master
  variables:
    ACCOUNT_ID: A
Reference: https://docs.gitlab.com/ee/ci/yaml/#extends
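Applied to the AWS question above, the template approach might look like this (again a sketch; the amazon/aws-cli image, the TARGET_BUCKET variable, and the credential variable names are assumptions). Note that the amazon/aws-cli image sets aws as its entrypoint, so the entrypoint is cleared to let the job run a script:
.s3_deploy_template:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  script:
    # The bucket and credentials come from the concrete job's variables
    - aws s3 sync ./public "s3://$TARGET_BUCKET/"

deploy_to_account_b:
  extends:
    - .s3_deploy_template
  only:
    refs:
      - master
  variables:
    TARGET_BUCKET: example-bucket-account-b
    AWS_ACCESS_KEY_ID: $B_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $B_AWS_SECRET_ACCESS_KEY

deploy_to_account_a:
  extends:
    - .s3_deploy_template
  except:
    refs:
      - master
  variables:
    TARGET_BUCKET: example-bucket-account-a
    AWS_ACCESS_KEY_ID: $A_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $A_AWS_SECRET_ACCESS_KEY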

Related

Azure DevOps yaml trigger pipeline

Is there a way to trigger a pipeline when a particular variables yaml file is updated in Azure source control, and have the pipeline use exactly that updated variables yaml file's values? I know how to gather values from a variables yaml, but I cannot figure out how to get the main pipeline to use exactly that particular variables yaml file. For example, the folder hierarchy in source control:
Variables
  var1.yaml
  var2.yaml
  var3.yaml
My main pipeline is just a draft:
trigger:
  batch: true
  branches:
    include:
      - master
  paths:
    include:
      - variables/var*

pool:
  vmImage: ubuntu-latest

stages:
  - stage: stage1
    displayName: 'My stage'
    jobs:
      - job: Job1
        displayName: 'Run something'
        variables:
          - template: ../variables/...(not sure about this part yet)
        steps:
          - task: PowerShell@2
            displayName: 'Run PoSH command'
            inputs:
              targetType: 'inline'
              script: |
                write-host '${{ variables.varx }}' (again not sure here yet)
Basically, is there a way to run the main pipeline based on the variables yaml template that triggered it? There will be many variables yaml files in source control, and once a particular yaml, for example var2.yaml, is updated, the main pipeline should get triggered and use var2.yaml's values.
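Worth noting: template references like the one in the draft above are resolved at compile time, before any step runs, so they cannot be chosen based on which file triggered the pipeline. A runtime workaround is to detect the changed file in a script step and surface its values as pipeline variables. A rough sketch, assuming simple key: value files and at least two commits of git history in the checkout:
steps:
  - checkout: self
    fetchDepth: 2
  - powershell: |
      # Hypothetical: find the variables file changed by the triggering commit
      $changed = git diff --name-only HEAD~1 HEAD -- 'Variables/*.yaml' | Select-Object -First 1
      Write-Host "Pipeline was triggered by changes to: $changed"
      # Expose each top-level key: value pair as a pipeline variable
      Get-Content $changed | ForEach-Object {
        if ($_ -match '^(\w+):\s*(.+)$') {
          Write-Host "##vso[task.setvariable variable=$($Matches[1])]$($Matches[2])"
        }
      }
    displayName: 'Load values from the triggering variables file'
Later steps in the same job can then read those variables with the usual $(varName) syntax.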

How to trigger downstream pipeline with single parent and multiple dynamic child in gitlab?

I would like to trigger multiple downstream pipelines dynamically from a single parent stage.
.gitlab-ci.yml
Stages:
  - gen-yml
  - run

gen-yml:
  stage: gen-yml
  script: bash gen_yml_env_wise.sh

trigger-downdtream:
  include:
    - local: dynamic-gen.yml
  strategy: depend
dynamic-gen.yml
Env1:
  stage: test
  Variable:
    env: "tst"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    ref:
      - feature1

Env2:
  stage: uat
  Variable:
    env: "uat"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    ref:
      - feature1
I get an error when running the pipeline, due to the multiple triggers I guess:
Error in .gitlab.yml: job: Env1 config contains unknown keys: trigger
I am creating the dynamic-gen.yml file dynamically based on the environments available for testing, and I am only passing the env variable to the downstream pipeline. Here I would like to run two downstream pipelines, for tst and uat respectively; the set of environments is dynamic and may be one or more.
For reference purposes, I am looking for a solution like the one below (dynamically triggered downstream jobs).
Hope I will get a solution; thanks in advance.
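Two things stand out in the config above: the include/strategy pair in the parent job likely needs to sit under a trigger: keyword, and since dynamic-gen.yml is generated at runtime it has to be handed to the trigger as an artifact of the generating job. A minimal sketch of the usual dynamic child pipeline layout (the trigger-downstream job name is hypothetical):
stages:
  - gen-yml
  - run

gen-yml:
  stage: gen-yml
  script: bash gen_yml_env_wise.sh
  artifacts:
    paths:
      - dynamic-gen.yml

trigger-downstream:
  stage: run
  trigger:
    include:
      # Use the file generated by the gen-yml job in this same pipeline
      - artifact: dynamic-gen.yml
        job: gen-yml
    strategy: depend
Inside dynamic-gen.yml, the keywords also need their canonical lowercase spelling (variables: rather than Variable:, and only: refs: rather than only: ref:).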

How to generate the report of API changes on the pipeline?

I have manually generated the report of my API changes using swagger-diff.
I can automate it on a local machine using a makefile or a script, but what if I want to implement it in the GitLab pipeline? How can I generate the report whenever someone pushes changes to the API endpoints?
java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
Note: all the project code is also being containerized.
Configure a job in the pipeline to run when there are changes to your API routes, and save the output as an artifact. If you also need the diff published, you could either do the publishing in that job or create a dependent job that uses the artifact to publish the diff to a GitLab page or an external provider.
If you have already automated the process locally in a shell script or something similar, then most of the work is done.
Example:
This example assumes that your API routes are defined in customer/api/routes/ and internal/api/routes/, and that you want to generate the diff when a commit or MR is pushed to the dev branch.
ApiDiff:
  stage: build
  image: java:<some-tag>
  script:
    - java -jar bin/swagger-diff.jar -old https://url/v1/swagger.json -new https://url2/v2/swagger.json -v 2.0 -output-mode html > changes.html
  artifacts:
    expire_in: 1 day
    name: api-diff
    when: on_success
    paths:
      - changes.html
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
And then the job to publish the diff, if you want one. This could also be done in the same job that generates the diff.
PublishDiff:
  stage: deploy
  needs:
    - job: "ApiDiff"
      optional: false
      artifacts: true
  image: someimage:latest
  script:
    - <some script to publish the report>
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"
      changes:
        - customer/api/routes/*
        - internal/api/routes/*
    - when: never
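If GitLab Pages is an acceptable target, the publishing job could be as small as this (a sketch; Pages serves whatever the special pages job puts into the public/ directory):
pages:
  stage: deploy
  needs:
    - job: "ApiDiff"
      artifacts: true
  script:
    # Serve the diff as the Pages index
    - mkdir -p public
    - cp changes.html public/index.html
  artifacts:
    paths:
      - public
  rules:
    - if: "$CI_COMMIT_REF_NAME == 'dev'"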

GitLab CI job stuck because the runner tag value hasn't been assigned

I have a CI/CD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - ${RUNNER_TAG}
  stage: lint
  script: |
    echo "Add linting here!"
....
In short, which runner to choose depends on which file was changed, hence the runner tag has to be decided conditionally. However, these jobs never execute and the variable's value never gets assigned, as I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: ${RUNNER_TAG}
Any idea why it behaves this way, and what can I do to resolve it?
gitlab-runner --version
Version: 14.7.0
Git revision: 98daeee0
Git branch: 14-7-stable
GO version: go1.17.5
Built: 2022-01-19T17:11:48+0000
OS/Arch: linux/amd64
Tags map jobs to runners. I tag my runners with the type of executor they use, e.g. shell, docker.
Based on the error message, you do not have any runners with the literal tag ${RUNNER_TAG}, which means the variable is not being resolved the way you want it to be.
Instead of combining rules like this, make separate jobs for each case, with a rule on each that says when to trigger it.
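A sketch of that suggestion, with the tags hardcoded per job instead of resolved from a variable (job names are hypothetical; the tags, hosts, and file paths are taken from the question):
lint dev script1:
  stage: lint
  tags:
    - somerunner1
  variables:
    DESTINATION_HOST: somehost1
  rules:
    - changes:
        - file/dev/script1.txt
  script: |
    echo "Add linting here!"

lint test script1:
  stage: lint
  tags:
    - somerunner2
  variables:
    DESTINATION_HOST: somehost2
  rules:
    - changes:
        - file/test/script1.txt
  script: |
    echo "Add linting here!"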
I have faced this issue, and similar issues, many times while trying to build dynamic pipelines for a multi-client environment.
The config you have above should work for your purposes to the best of my knowledge, but since it does not, there is another way to accomplish this with trigger jobs.
Create a trigger job for each possible runner tag. You can use extends to reduce the total code required for this.
gitlab-ci.yml
stages:
  - trigger
  - lint

.trigger:
  stage: trigger
  trigger:
    include:
      - local: ./lint-job.yml
    strategy: depend

trigger-lint-script1:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner1
  rules:
    - changes:
        - file/dev/script1.txt

trigger-lint-script2:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner2
  rules:
    - changes:
        - file/dev/script2.txt
Create a trigger job with associated rules for each possible tag; this way you can change more than one of the specified files in a single commit with no issues. Define the triggered job in lint-job.yml:
lint-job.yml
deploy scripts 1/6:
  tags: [$RUNNER_TAG]
  stage: lint
  script: |
    echo "Add linting here!"
There are other ways to accomplish this, but this method is by far the simplest and cleanest for this particular use.

Variables in gitlab CI

I just began implementing CI jobs using gitlab-ci, and I'm trying to create a job template. Basically, the jobs use the same image, tags, and script, where I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic, and I was wondering if I could also use variables in the services and artifacts sections, so I could add a few more lines to my template like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of this, and the docs only talk about using variables in scripts. I know that variables are like environment variables for jobs, but I'm not sure whether they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further, I would recommend using include instead of YAML anchors, so you can reuse your templates in different pipelines.
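For example, the anchor template above could be moved into its own file and reused via include (a sketch; the templates/e2e.yml path is an assumption). Unlike anchors, extends works across included files:
# templates/e2e.yml
.job_e2e_template:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$JOB_BROWSER docker/selenium/tests/*.side;

# .gitlab-ci.yml
include:
  - local: templates/e2e.yml

test-chrome:
  extends: .job_e2e_template
  variables:
    JOB_BROWSER: "chrome"
  services:
    - selenium-hub
    - node-$JOB_BROWSER
  artifacts:
    paths:
      - tests/
      - out_$JOB_BROWSER/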