I am working on database deployment using GitLab CI/CD. There are two databases, e.g. ABC and XYZ. One team is working on DB ABC and we are working on DB XYZ. The logic is the same, but we need to pass the DB name according to the team in the GitLab pipeline. What is the process for that? For example, if team 1 is working, they select DB ABC and all changes are applied to ABC, and the same goes for the other team. I have already set up variables in gitlab-ci.yml, but the task is manual: one team has to overwrite the other team's DB name, and when it merges to master it changes the variable value every time, which is hard to manage.
variables:
  DB_NAME_dev: DEMO_DB
  DB_NAME_qa: DEMO_DB
  DB_NAME_prod: DEMO_DB
Now if team 2 wants to work on their pipeline, they have to change the value of DB_NAME_dev to their database, which is a manual task. Is there a smarter way to select the DB name so the pipeline runs only for that database, rather than manually editing the DB name?
How do you pass variables in GitLab?
An alternative is to use GitLab CI/CD variables. Go to your project page, Settings tab -> CI/CD, find Variables and click the Expand button. There you can define variable names and values, which will be automatically passed into the GitLab pipelines and are available as environment variables there.
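As a quick sketch (assuming a project-level variable named DB_NAME_dev has been defined there), a job script can then read it like any other environment variable:
# DB_NAME_dev is assumed to be defined under Settings -> CI/CD -> Variables
echo "Deploying to database: ${DB_NAME_dev}"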
You can also use the git branch method. Let's say the 'ABC' and 'XYZ' teams push their code to team-specific branches (e.g. branches starting with 'abc' or 'xyz'). For those, you can export the variables in before_script and restrict the jobs with the only keyword.
Create branch-specific jobs in your CI file:
abc-dev-job:
  before_script:
    - export DB_NAME_dev=$DEMO_DB_abc
    - export DB_NAME_qa=$DEMO_DB_abc
    - export DB_NAME_prod=$DEMO_DB_abc
  script:
    - echo "Deploying to $DB_NAME_dev"   # placeholder, replace with your actual deployment steps
  only:
    - /^abc.*$/
xyz-dev-job:
  before_script:
    - export DB_NAME_dev=$DEMO_DB_xyz
    - export DB_NAME_qa=$DEMO_DB_xyz
    - export DB_NAME_prod=$DEMO_DB_xyz
  script:
    - echo "Deploying to $DB_NAME_dev"   # placeholder, replace with your actual deployment steps
  only:
    - /^xyz.*$/
These jobs will only run when team 'XYZ' or 'ABC' pushes code to their team-specific branches, which start with the prefix xyz or abc (e.g. xyz-dev, xyz/dev, abc-dev, etc.).
Each job will then use its team's variables accordingly.
Note: you need to define the $DEMO_DB_abc and $DEMO_DB_xyz variables in the project's CI/CD settings.
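If you would rather keep a single job, a rough alternative sketch (still assuming the $DEMO_DB_abc and $DEMO_DB_xyz variables from the CI/CD settings) is to pick the DB name in before_script based on the branch name:
# Sketch: select the DB from the branch prefix; CI_COMMIT_REF_NAME is a predefined GitLab variable
case "$CI_COMMIT_REF_NAME" in
  abc*) export DB_NAME_dev="$DEMO_DB_abc" ;;
  xyz*) export DB_NAME_dev="$DEMO_DB_xyz" ;;
  *)    echo "No team prefix matched; keeping the default DB name" ;;
esac
echo "Selected database: $DB_NAME_dev"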
Thank you!
Related
I'm trying to set the tag of a job to that job's unique ID:
some-cool-job:
  tags:
    - $CI_JOB_ID
However, it doesn't seem to resolve the variable; it just sets the tag to "$CI_JOB_ID". Similarly, $CI_PIPELINE_ID doesn't work.
Using $CI_JOB_NAME or $CI_PIPELINE_IID instead works fine.
Hence I assume that the ID just doesn't exist at the time the tags are parsed.
Following this, how else can I uniquely identify a job using variables available at this time?
GitLab assigns a number of predefined environment variables for you. One of these is CI_JOB_ID. You can view the value by printing it within a script.
some-cool-job:
  script:
    - echo $CI_JOB_ID
In the context of a .gitlab-ci.yml file, tags map jobs to runners. For instance, I tag my runners with names reflecting the executor being used (e.g. shell or docker), and then I tag jobs within my .gitlab-ci.yml file that need a shell executor with shell.
May I ask, what is the desired outcome of tagging a job with the job ID, in your case?
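In the meantime, if the goal is simply a unique label for the job, one hedged workaround is to combine the variables that do resolve (as noted above) inside the script itself:
# Assumption: a unique-ish identifier is needed at runtime, not at the time tags are parsed
JOB_UID="${CI_PIPELINE_IID}-${CI_JOB_NAME}"
echo "Job identifier: ${JOB_UID}"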
to be continuous is a set of advanced, ready-to-use templates for GitLab CI.
By default, every to be continuous template considers master the default production branch and develop the default integration branch.
Can this default behavior be changed? For instance, can main be used instead of master as the production branch?
Sure you can.
Production and integration branches are variabilized using regular expressions:
variables:
  # default production ref name (pattern)
  PROD_REF: '/^master$/'
  # default integration ref name (pattern)
  INTEG_REF: '/^develop$/'
Simply overriding them changes the behavior.
Example in your .gitlab-ci.yml file:
variables:
  # my production branch
  PROD_REF: '/^main$/'
You could even decide that every branch with the format prod-xxx should be considered production.
Using a regex here helps:
variables:
  # my production branch(es)
  PROD_REF: '/^prod-.*/'
/!\ $PROD_REF and $INTEG_REF are used to implement pattern matching in GitLab CI rules, so beware of this GitLab bug.
If you have a close look at the issue, the conclusion is that only 3 regex patterns are working:
pattern1: '/^abcde$/'
pattern5: '/^abcde.*/'
pattern6: '/^abcde/'
So make sure you're using one of those.
I have a build pipeline, let's say A, that stores a file (this file holds a variable value set within that build pipeline) in a folder. Pipeline A triggers another Pipeline B that publishes the folder as an artifact using the Publish artifact task. But the folder name is dynamic, as it is fetched from that file within Pipeline A. I need to pass the file with that variable value from Pipeline A to Pipeline B while triggering it. Is there any way to do this in Azure DevOps without using YAML pipelines?
I have a fairly complex set of pipelines that I set up using Classic mode, and converting them all to YAML would take a long time, so I would like to know if there is any workaround for this.
There are a few workarounds:
Create a variable group, set the variable value there from Pipeline A via the REST API, and have Pipeline B use this variable.
During Pipeline A, update the Pipeline B definition with the new value via the REST API.
In Pipeline A, trigger Pipeline B with the Trigger Build Task; there you can pass the variable value to Pipeline B (in the "Build Parameters" field).
I don't think there's a clean way to do this if you need to trigger the build by adding Pipeline A under the triggers section of Pipeline B.
Consider triggering Pipeline B when Pipeline A completes using the REST API. That way, you can have your 'file path' as a variable on Pipeline B and pass it in the parameters collection.
Something like:
POST https://dev.azure.com/{organization}/{project}/_apis/build/builds?ignoreWarnings={ignoreWarnings}&checkInTicket={checkInTicket}&sourceBuildId={sourceBuildId}&api-version=5.0
{
  "definition": {
    "id": 1234
  },
  "parameters": "{\"fileName\":\"yourfilename\"}"
}
fileName would be the name of your variable in Pipeline B.
Have a look at the Builds - Queue documentation for more info.
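As a rough sketch of how that request could be sent from a script step in Pipeline A (the organization, project, definition id, and the AZDO_PAT variable are placeholders):
# Sketch: queue Pipeline B via the Builds - Queue REST API; AZDO_PAT is a hypothetical
# secret variable holding a personal access token (used as the Basic auth password).
curl -sS -u ":${AZDO_PAT}" \
  -H "Content-Type: application/json" \
  -d '{"definition": {"id": 1234}, "parameters": "{\"fileName\":\"yourfilename\"}"}' \
  "https://dev.azure.com/{organization}/{project}/_apis/build/builds?api-version=5.0"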
I know committer_email, author_name, and a load of other variables are part of the notification event. Is it possible to get access to them in earlier phases like before_script or after_script?
I would like to get access to this information and add it directly to my test results. Having build information, test result information, and GitHub repo information in the same file would be great.
You can extract committer e-mail, author name, etc. to environment variables using git log with --pretty, e.g.
export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
On Travis, one would put this in the before_install or before_script phase.
The TRAVIS_COMMIT environment variable is provided by default.
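To get build, repo, and commit information into one file next to the test results, a minimal sketch (build-info.json is a hypothetical file name; TRAVIS_REPO_SLUG and TRAVIS_BUILD_NUMBER are standard Travis variables):
# Sketch: collect repo and build metadata into a single JSON file
COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
cat > build-info.json <<EOF
{
  "repo": "${TRAVIS_REPO_SLUG}",
  "build_number": "${TRAVIS_BUILD_NUMBER}",
  "commit": "${TRAVIS_COMMIT}",
  "committer_email": "${COMMITTER_EMAIL}",
  "author_name": "${AUTHOR_NAME}"
}
EOF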
I have a lot of Bamboo variables defined, due to the fact that I have a system with a lot of legacy code and config in places where it does not belong. Getting rid of all this will take a while on the roadmap, so I need to find a way to auto-replace all these values.
To give an idea of the scale: there are 8 customer config files, each with about 100 variables. Indeed, someone added all of those to Bamboo because, as you might have guessed, most of them vary per environment.
At the moment I want to automate the deployment process, and all is going fine except for the fact that I need to replace 100 variables, and I don't want to maintain them in my script all the time.
I am looking for a way to retrieve all the variables in an array so I can just iterate through the keys and replace them in the config files.
echo "${bamboo.application.myvalue}" resolves the value as expected. The only problem is: how can I get all the keys under bamboo.*?
I tried the following commands, all without success:
printenv
env
declare
How can I retrieve a list of all those variables from an inline script in Bamboo?
Thanks a lot!
I think it is not possible to change the values of the variables on the fly. Instead, you can use the "Inject Bamboo variables" task in order to be able to change the variable values.
This task reads a file to create the variables. So all you have to do is create this file with the values you need, and then use these variables.
E.g. creating the file from a PowerShell script:
$path = 'bambooVariaveis.properties'
$connectionstringX = 'connectionstring="Data Source=XXXX;"'
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding($False)
[System.IO.File]::WriteAllLines($path, $connectionstringX, $Utf8NoBomEncoding)
E.g. the Inject Bamboo variables task configuration:
Using it (in a subsequent script task):
echo ${bamboo.inject.connectionstring}
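Back to the original goal of iterating over many keys: one hedged option (a sketch with hypothetical file and placeholder names) is to loop over the generated properties file in a script task and substitute each key into the config files:
# Sketch: read key=value pairs from the injected properties file and replace
# ${key} placeholders in a config file. File and placeholder names are hypothetical.
while IFS='=' read -r key value; do
  [ -z "$key" ] && continue
  sed -i "s|\${${key}}|${value}|g" customer1.config
done < bambooVariaveis.properties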