How to pass branch name dynamically to Bamboo plan?

Is there a way to pass/change the branch name to the bamboo plan before 'source checkout' task?
I tried using a 'script' task and exporting the branch-related Bamboo variables, but that did not work.
Example:
export bamboo_planRepository_branch=my_branch_name
export bamboo_planRepository_1_branch=my_branch_name
export bamboo_repository_git_branch=my_branch_name
export bamboo_repository_branch_name=my_branch_name
echo $bamboo_planRepository_branch
echo $bamboo_planRepository_1_branch
echo $bamboo_repository_git_branch
echo $bamboo_repository_branch_name

I guess it's not possible to pass a branch name dynamically into the Source Checkout task.
You might want to use the "Automatic branch management" feature (go to Plan Configuration - Branches), which creates plan branches for newly created repository branches. So Bamboo will check out the new branch for you automatically.
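If you do need to create a plan branch for a specific VCS branch from a script, one possible route is Bamboo's REST API. The sketch below is an assumption based on Bamboo's plan-branch endpoint, so verify it against your Bamboo version; the host, credentials, and PROJ-PLAN key are placeholders:
# Hedged sketch: ask Bamboo to create a plan branch named "my-branch"
# tracking the VCS branch "feature/my-branch" on plan PROJ-PLAN.
curl -X PUT \
  --user "$BAMBOO_USER:$BAMBOO_PASSWORD" \
  "https://bamboo.example.com/rest/api/latest/plan/PROJ-PLAN/branch/my-branch?vcsBranch=feature/my-branch"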

You can check out as many repositories as you want, but you should never try to repurpose your build plan: each release branch should be tied to its own plan. You can clone and delete plans as you go.

Related

How to pass environment variables in gitlab dynamically?

I am working on database deployment using GitLab CI/CD. There are two databases, e.g. ABC and XYZ. One team is working on DB ABC and we are working on DB XYZ. The logic is the same, but we need to pass the DB name according to the team in the GitLab pipeline. What's the process for that? For example, if team 1 is working, they will select DB ABC and all changes will be reflected on ABC, and the same for the other team. I have already set up variables in gitlab-ci.yml, but the task is manual: one team has to overwrite the DB name of the other team, and when it merges to master it changes the variable value every time, which is hard to manage.
variables:
  DB_NAME_dev: DEMO_DB
  DB_NAME_qa: DEMO_DB
  DB_NAME_prod: DEMO_DB
Now if team 2 wants to work on their pipeline, they have to change the value of DB_NAME_dev to their database, which is a manual task. Is there a smart way to select the DB name so that the pipeline runs only for that database, rather than manually editing the DB name?
How do you pass variables in GitLab?
An alternative is to use GitLab variables. Go to your project page, Settings tab -> CI/CD, find Variables and click the Expand button. There you can define variable names and values, which are automatically passed into the GitLab pipelines and are available as environment variables there.
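For example, if you define a variable named DB_NAME there, any job can read it as a normal environment variable (the job and variable names below are just illustrative):
deploy-db:
  script:
    - echo "Deploying changes to database $DB_NAME"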
You can also use the git branch method. Let's say the 'ABC' and 'XYZ' teams push their code to team-specific branches (e.g. branches starting with 'abc' or 'xyz'). For those branches, you export the variables in before_script and restrict each job with the only parameter.
Create branch-specific jobs in your CI file:
abc-dev-job:
  before_script:
    - export DB_NAME_dev=$DEMO_DB_abc
    - export DB_NAME_qa=$DEMO_DB_abc
    - export DB_NAME_prod=$DEMO_DB_abc
  only:
    - /^abc.*$/
xyz-dev-job:
  before_script:
    - export DB_NAME_dev=$DEMO_DB_xyz
    - export DB_NAME_qa=$DEMO_DB_xyz
    - export DB_NAME_prod=$DEMO_DB_xyz
  only:
    - /^xyz.*$/
This pipeline will only run when team 'XYZ' or 'ABC' pushes code to their team-specific branches, which start with the prefix xyz or abc (e.g. xyz-dev, xyz/dev, abc-dev, etc.).
And it will use variables accordingly.
Note: you need to define variables in CI/CD settings.
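As an aside, on recent GitLab versions the same branch filtering can be written with rules:, the successor of only:/except:. A sketch for the abc job (the xyz job would mirror it):
abc-dev-job:
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^abc/'
  script:
    - export DB_NAME_dev=$DEMO_DB_abc
    - echo "Running against $DB_NAME_dev"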
Thank you!

How to specify model schema when referencing another dbt project as a package? (dbt multi-repo setup)

We're using a dbt multi-repo setup with different projects for different business areas. We have several projects, something like this:
dbt_dwh
dbt_project1
dbt_project2
The dbt_dwh project contains models which we plan to reference in projects 1 and 2 (we have ~10 projects that would reference the dbt_dwh project) by way of installing git packages. Ideally, we'd like to be able to just reference the models in the dbt_dwh project (e.g.
SELECT * FROM {{ ref('dbt_dwh', 'model_1') }}). However, each of our projects sits in its own database schema, and this causes issues upon dbt run because dbt uses the target schema from dbt_project_x, where these objects don't exist. I've included example set-up info below, for clarity.
packages.yml file for dbt_project1:
packages:
  - git: https://git/repo/url/here/dbt_dwh.git
    revision: master
profiles.yml for dbt_dwh:
dbt_dwh:
  target: dwh_dev
  outputs:
    dwh_dev:
      <config rows here>
    dwh_prod:
      <config rows here>
profiles.yml for dbt_project1:
dbt_project1:
  target: project1_dev
  outputs:
    project1_dev:
      <config rows here>
    project1_prod:
      <config rows here>
sf_orders.sql in dbt_dwh:
{{
  config(
    materialized = 'table',
    alias = 'sf_orders'
  )
}}
SELECT * FROM {{ source('salesforce', 'orders') }} WHERE uid IS NOT NULL
revenue_model1.sql in dbt_project1:
{{
  config(
    materialized = 'table',
    alias = 'revenue_model1'
  )
}}
SELECT * FROM {{ ref('dbt_dwh', 'sf_orders') }}
My expectation here was that dbt would examine the sf_orders model and see that the default schema for the project it sits in (dbt_dwh) is dwh_dev, so it would construct the object reference as dwh_dev.sf_orders.
However, if you use command dbt run -m revenue_model1 then the default dbt behaviour is to assume all models are located in the default target for dbt_project1, so you get something like:
11:05:03 1 of 1 START sql table model project1_dev.revenue_model1 .................... [RUN]
11:05:04 1 of 1 ERROR creating sql table model project1_dev.revenue_model1 ........... [ERROR in 0.89s]
11:05:05
11:05:05 Completed with 1 error and 0 warnings:
11:05:05
11:05:05 Runtime Error in model revenue_model1 (folder\directory\revenue_model1.sql)
11:05:05 404 Not found: Table database_name.project1_dev.sf_orders was not found
I've got several questions here:
How do you force dbt to use a specific schema on runtime when using dbt ref function?
Is it possible to force dbt to use the default parameters/settings for models inside the dbt_dwh project when this Git repo is installed as a package in another project?
Some points to note:
All objects & schemas listed above sit in the same database
I know that many people recommend mono-repo set-up to avoid exactly this type of scenario, but switching to a mono-repo structure is not feasible right now, as we are already fully invested in multi-repo setup
Although it would be feasible to create sources.yml files in each of the dbt projects to reference the output objects of the dbt_dwh project, this feels like duplication of effort and could result in different versions of the same sources.yml file across projects
I appreciate it is possible to hard-code the output schema in the dbt config block, but this removes our ability to test in dev environment/schema for dbt_dwh project
I managed to find a solution, so I'll answer my own question in case anybody else runs up against the same issue. Unfortunately this is not documented anywhere that I can find; however, a throw-away comment in the dbt Slack workspace sparked an idea that allowed me to find the/a solution (I'll post the message if I manage to find it again, to give credit where it's due).
To fix this is actually very simple: you just need to add the imported project under the models: block of your dbt_project.yml file and specify the schema. For our use case this is fine, as we only have one schema we use.
dbt_project.yml for dbt_project1:
models:
  dbt_project1:
    <configs here>
  dbt_dwh:
    +schema: [schema you want these models to run into]
    <configs here>
The advantages with this approach are:
When you generate/serve dbt docs it allows you to see the upstream lineage from the upstream project
If there are any upstream dependencies in your upstream project you can run this using dbt run -m +model_name (this can be super handy)
If you don't want this behaviour then you can use dbt run -m +model_name --exclude dbt_dwh (for example) to prevent models in your upstream project from running.
I haven't yet figured out if it is possible to use the default parameters/settings for models inside the upstream project (in this case dbt_dwh) but I will edit this answer if I find a way.
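One extra caveat: by default dbt builds custom schemas as <target_schema>_<custom_schema>, so a +schema: value on the dbt_dwh package would end up as e.g. project1_dev_<value>. If you want the configured schema used verbatim, the usual approach is to override the generate_schema_name macro in your project, roughly like the variant in the dbt docs:
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- set default_schema = target.schema -%}
    {%- if custom_schema_name is none -%}
        {{ default_schema }}
    {%- else -%}
        {{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}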

Use `needs` keyword in GitLab CI with variable stage name

My pipeline has two different contexts, so to speak. If a developer is running on a branch other than main, a job called scan_sandbox is created on the pipeline of the merge request to scan the Dockerfile the developer is currently working on.
When the branch is merged into main, a scan_production job is created, implying that the image is going to be pushed to the registry and later used in production environment.
My problem is dealing with this variable job name, either scan_sandbox or scan_production, in the needs statement, in order to fetch and publish the scan results. I've tried...
needs: ["scan_production", "scan_sandbox"]
But it returns an error, since both jobs are never declared in the same pipeline context. Also tried...
needs: ["container_scan"]
Which is the name of the stage where both scans run, but GitLab CI doesn't interpret it this way either.
Anyone has any ideas?
You can mark a dependency as optional in GitLab CI YAML. If it's optional, GitLab won't fail when the needed job was not created. This was specifically added to handle jobs with rules, only, or except conditions.
So you can specify
needs:
  - job: scan_sandbox
    optional: true
  - job: scan_production
    optional: true
Notes:
This will work for GitLab version >= 13.9
Doc link: https://docs.gitlab.com/ee/ci/yaml/#needsoptional
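Putting it all together, a minimal pipeline sketch could look like the following; the rules conditions and the stage/job names other than scan_sandbox and scan_production are assumptions for illustration:
stages:
  - container_scan
  - publish

scan_sandbox:
  stage: container_scan
  rules:
    - if: '$CI_COMMIT_BRANCH != "main"'
  script:
    - echo "Scanning sandbox image"

scan_production:
  stage: container_scan
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    - echo "Scanning production image"

publish_results:
  stage: publish
  needs:
    - job: scan_sandbox
      optional: true
    - job: scan_production
      optional: true
  script:
    - echo "Publishing whichever scan results exist"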

Can I change default production branch and/or integration branch?

to be continuous is a set of advanced, ready-to-use templates for GitLab CI.
By default, every to be continuous template considers master the default production branch and develop the default integration branch.
Can this default behavior be changed? For instance, use main instead of master as the production branch?
Sure you can.
Production and integration branches are controlled by variables containing regular expressions:
variables:
  # default production ref name (pattern)
  PROD_REF: '/^master$/'
  # default integration ref name (pattern)
  INTEG_REF: '/^develop$/'
Simply overriding them changes the behavior.
Example in your .gitlab-ci.yml file:
variables:
  # my production branch
  PROD_REF: '/^main$/'
You could even decide that every branch with the format prod-xxx should be considered a production branch.
Using a regex here helps:
variables:
  # my production branch(es)
  PROD_REF: '/^prod-.*$/'
/!\ $PROD_REF and $INTEG_REF are used to implement pattern matching in GitLab CI rules, so beware of this GitLab bug.
If you have a close look at the issue, the conclusion is that only 3 regex patterns work:
pattern1: '/^abcde$/'
pattern5: '/^abcde.*/'
pattern6: '/^abcde/'
So make sure you're using one of those.
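For context, the templates use these variables in pattern-matching rules roughly like the sketch below (the job name is made up, and the actual template rules are more involved), which is why the regex limitation above matters:
deploy-prod:
  rules:
    - if: '$CI_COMMIT_REF_NAME =~ $PROD_REF'
  script:
    - echo "Deploying production from $CI_COMMIT_REF_NAME"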

Travis-CI, how to get committer_email, author_name, within after_script command

I know committer_email, author_name, and a load of other variables are part of the notification event. Is it possible to get access to them in earlier events like before_script or after_script?
I would like to get access of the information and add it directly to my test results. Having build information, test result information, and github repo information in the same file would be great.
You can extract the committer e-mail, author name, etc. into environment variables using git log with --pretty, e.g.
export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
On Travis you'd put this in the before_install or before_script stage.
The TRAVIS_COMMIT environment variable is provided by default.
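So, for the use case in the question, a .travis.yml sketch that stamps the metadata into a test-results file could look like this (results.json, its fields, and the test command are made up for illustration):
before_script:
  - export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
  - export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
script:
  - ./run_tests.sh   # hypothetical: your actual test command
after_script:
  - echo "{\"commit\":\"$TRAVIS_COMMIT\",\"author\":\"$AUTHOR_NAME\",\"email\":\"$COMMITTER_EMAIL\"}" >> results.json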