How to get pull request key, source and target branch in Bamboo 6.6.3?

When a PR is created in Bitbucket, Bamboo creates a build. In version 6.6.3 the following variables are not available:
bamboo.repository.pr.key
bamboo.repository.pr.sourceBranch
bamboo.repository.pr.targetBranch
I need these values in order to pass them to Sonar for PR analysis. What is the workaround to get the above values?
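For context, these three values map onto SonarQube's pull request analysis parameters; the scanner invocation would look roughly like this (the parameter names come from the SonarQube PR analysis docs, while the shell variables are placeholders for whatever workaround ends up supplying the values):
# PR_KEY, PR_SOURCE_BRANCH and PR_TARGET_BRANCH are placeholders for the missing Bamboo values
sonar-scanner \
  -Dsonar.pullrequest.key="${PR_KEY}" \
  -Dsonar.pullrequest.branch="${PR_SOURCE_BRANCH}" \
  -Dsonar.pullrequest.base="${PR_TARGET_BRANCH}"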


How to specify model schema when referencing another dbt project as a package? (dbt multi-repo setup)

We're using a dbt multi-repo setup with different projects for different business areas. We have several projects, something like this:
dbt_dwh
dbt_project1
dbt_project2
The dbt_dwh project contains models which we plan to reference in projects 1 and 2 (we have ~10 projects that would reference the dbt_dwh project) by way of installing git packages. Ideally, we'd like to be able to just reference the models in the dbt_dwh project (e.g. SELECT * FROM {{ ref('dbt_dwh', 'model_1') }}). However, each of our projects sits in its own database schema, and this causes issues upon dbt run because dbt uses the target schema from dbt_project_x, where these objects don't exist. I've included example set-up info below, for clarity.
packages.yml file for dbt_project1:
packages:
  - git: https://git/repo/url/here/dbt_dwh.git
    revision: master
profiles.yml for dbt_dwh:
dbt_dwh:
  target: dwh_dev
  outputs:
    dwh_dev:
      <config rows here>
    dwh_prod:
      <config rows here>
profiles.yml for dbt_project1:
dbt_project1:
  target: project1_dev
  outputs:
    project1_dev:
      <config rows here>
    project1_prod:
      <config rows here>
sf_orders.sql in dbt_dwh:
{{
    config(
        materialized = 'table',
        alias = 'sf_orders'
    )
}}
SELECT * FROM {{ source('salesforce', 'orders') }} WHERE uid IS NOT NULL
revenue_model1.sql in dbt_project1:
{{
    config(
        materialized = 'table',
        alias = 'revenue_model1'
    )
}}
SELECT * FROM {{ ref('dbt_dwh', 'sf_orders') }}
My expectation here was that dbt would examine the sf_orders model and see that the default schema for the project it sits in (dbt_dwh) is dwh_dev, so it would construct the object reference as dwh_dev.sf_orders.
However, if you use the command dbt run -m revenue_model1, then the default dbt behaviour is to assume all models are located in the default target schema for dbt_project1, so you get something like:
11:05:03 1 of 1 START sql table model project1_dev.revenue_model1 .................... [RUN]
11:05:04 1 of 1 ERROR creating sql table model project1_dev.revenue_model1 ........... [ERROR in 0.89s]
11:05:05
11:05:05 Completed with 1 error and 0 warnings:
11:05:05
11:05:05 Runtime Error in model revenue_model1 (folder\directory\revenue_model1.sql)
11:05:05 404 Not found: Table database_name.project1_dev.sf_orders was not found
I've got several questions here:
How do you force dbt to use a specific schema at runtime when using the dbt ref function?
Is it possible to force dbt to use the default parameters/settings for models inside the dbt_dwh project when this Git repo is installed as a package in another project?
Some points to note:
All objects & schemas listed above sit in the same database
I know that many people recommend a mono-repo set-up to avoid exactly this type of scenario, but switching to a mono-repo structure is not feasible right now, as we are already fully invested in the multi-repo setup
Although it would be feasible to create sources.yml files in each of the dbt projects to reference the output objects of the dbt_dwh project, this feels like duplication of effort and could result in different versions of the same sources.yml file across projects
I appreciate it is possible to hard-code the output schema in the dbt config block, but this removes our ability to test in dev environment/schema for dbt_dwh project
I managed to find a solution, so I'll answer my own question in case anybody else runs up against the same issue. Unfortunately this is not documented anywhere that I can find; however, a throw-away comment in the dbt Slack workspace sparked an idea that allowed me to find the/a solution (I'll post the message if I manage to find it again, to give credit where it's due).
To fix this is actually very simple: you just need to add the project being imported to the models: block of your dbt_project.yml file and specify the schema there. For our use case this is fine, as we only have one schema we use.
dbt_project.yml for dbt_project1:
models:
  dbt_project1:
    <configs here>
  dbt_dwh:
    +schema: [[schema you want these models to run into]]
    <configs here>
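As a quick check (command and model name from the question), the upstream model should now resolve into the schema configured via +schema rather than into project1_dev; note that dbt's default generate_schema_name macro appends a custom schema to the target schema (giving <target_schema>_<custom_schema>) unless that macro is overridden:
dbt run -m revenue_model1
# ref('dbt_dwh', 'sf_orders') should no longer compile to <database>.project1_dev.sf_orders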
The advantages with this approach are:
When you generate/serve dbt docs it allows you to see the upstream lineage from the upstream project
If there are any upstream dependencies in your upstream project you can run this using dbt run -m +model_name (this can be super handy)
If you don't want this behaviour then you can use dbt run -m +model_name --exclude dbt_dwh (for example) to prevent models in your upstream project from running.
I haven't yet figured out if it is possible to use the default parameters/settings for models inside the upstream project (in this case dbt_dwh) but I will edit this answer if I find a way.

Use `needs` keyword in GitLab CI with variable stage name

My pipeline has two different contexts, so to speak. If a developer is running on a branch other than main, a job called scan_sandbox is created in the merge request pipeline to scan the Dockerfile the developer is currently working on.
When the branch is merged into main, a scan_production job is created, implying that the image is going to be pushed to the registry and later used in the production environment.
My problem is dealing with this variable job name, either scan_sandbox or scan_production, in the needs statement, in order to fetch and publish the scan results. I've tried...
needs: ["scan_production", "scan_sandbox"]
But it returns an error, since both jobs are never declared in the same pipeline. Also tried...
needs: ["container_scan"]
Which is the name of the stage where both scans run, but GitLab CI doesn't interpret it that way either.
Does anyone have any ideas?
You can specify a dependency as optional in GitLab CI YAML. If it's optional, GitLab won't fail when the job was not created. This was specifically added to handle jobs with rules, only, or except conditions.
So you can specify
needs:
  - job: scan_sandbox
    optional: true
  - job: scan_production
    optional: true
Notes:
This works for GitLab version >= 13.9
Doc link: https://docs.gitlab.com/ee/ci/yaml/#needsoptional
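Putting it together, a minimal sketch of how the pieces could fit (job and stage names are taken from the question; the publish job, scripts, and rules conditions are illustrative assumptions):
stages:
  - container_scan
  - publish

scan_sandbox:
  stage: container_scan
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'  # assumption: sandbox scan runs on MR pipelines
  script:
    - ./scan.sh                                           # placeholder for the actual scan command
  artifacts:
    paths:
      - scan-results.json

scan_production:
  stage: container_scan
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'                   # assumption: production scan runs on main
  script:
    - ./scan.sh
  artifacts:
    paths:
      - scan-results.json

publish_scan_results:                                      # hypothetical downstream job
  stage: publish
  needs:
    - job: scan_sandbox
      optional: true
    - job: scan_production
      optional: true
  script:
    - ./publish.sh scan-results.json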

What is the correct way to pull a remote branch to a local branch using libgit2sharp? [duplicate]

I'm trying to check out a remote branch via LibGit2Sharp. In git itself you would use these commands:
git fetch origin
git checkout -b test origin/test
In newer versions it is just:
git fetch
git checkout test
So I tried this code:
repo.Fetch("origin");
repo.Checkout("origin/" + Name);
The Fetch and Checkout run without any problems, but there is no local copy of the remote branch.
Does anyone have an idea how to check out the remote branch with other methods?
My alternative would be to create the branch in the repository and push it to the remote:
Branch newBranch = repo.Branches.Add(Name, repo.Branches["master"].Commits.First());
repo.Network.Push(newBranch);
But I get this exception:
The branch 'Test1' ("refs/heads/Test1") that you are trying to push does not track an upstream branch.
Maybe I could set the branch to track an upstream branch, but I don't know how.
Edit: I haven't explained it properly, so I'll try to describe better what the Fetch and Checkout do in my program. The Fetch command is performed correctly. Now if I use the checkout command it should create a local branch from the remote branch, but it doesn't. I've also tried repo.Checkout(name), without "origin/", but it throws an exception: No valid git object identified by '...' exists in the repository.
If I correctly understand your question, you're willing to create a local branch which would be configured to track the fetched remote tracking branch.
In other words, once you fetch a repository, your references contains remote tracking branches (eg. origin/theBranch) and you'd like to create a local branch bearing the same name (eg. theBranch).
The following example should demonstrate how to do this:
const string localBranchName = "theBranch";
// The local branch doesn't exist yet
Assert.Null(repo.Branches[localBranchName]);
// Let's get a reference on the remote tracking branch...
const string trackedBranchName = "origin/theBranch";
Branch trackedBranch = repo.Branches[trackedBranchName];
// ...and create a local branch pointing at the same Commit
Branch branch = repo.CreateBranch(localBranchName, trackedBranch.Tip);
// The local branch is not configured to track anything
Assert.False(branch.IsTracking);
// So, let's configure the local branch to track the remote one.
Branch updatedBranch = repo.Branches.Update(branch,
    b => b.TrackedBranch = trackedBranch.CanonicalName);
// Bam! It's done.
Assert.True(updatedBranch.IsTracking);
Assert.Equal(trackedBranchName, updatedBranch.TrackedBranch.Name);
Note: More examples can be found in the BranchFixture.cs test suite.
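To actually switch the working directory to that branch afterwards, a checkout call along these lines should work (in older LibGit2Sharp versions checkout was a method on the repository object; newer versions expose it via the Commands class):
// Check out the newly created local branch that now tracks origin/theBranch.
Commands.Checkout(repo, updatedBranch);
// Older LibGit2Sharp versions used: repo.Checkout(updatedBranch);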

How to get discarded builds and the date of build execution from Jenkins API?

I'd like to retrieve information about all builds of a specified job from the Jenkins API.
In the job configuration I've set Discard old builds:
Days to keep builds - 20
Max # of builds to keep - 10
1) Is it possible to get information about builds that have been discarded?
Also, I can get the following information about each build that has not been discarded:
"builds" : [
{
"_class" : "hudson.model.FreeStyleBuild",
"number" : 34,
"url" : "http://myUrl/view/Myjob/34/"
}
]
I use the following URL for this: http://MyUrl/view/MyJob/api/json?tree=builds[url,number]&pretty=true
2) Is it possible to get the date of build execution?
To discard a build means, well, to discard it, i.e. its information is gone.
I can imagine a solution (sketched below) where you:
collect, store and update(=extend) information about all existing builds on a regular basis
at times you'd like to know which builds have been discarded:
collect the information of all existing builds at this very moment
don't update your store, but ...
create a diff between the current (C) and the information previously stored in your store (S). The builds that are in S but not in C are the discarded builds.
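A minimal sketch of that idea in Python, assuming the JSON API endpoint from the question (no authentication handled) and a local file as the store; note that the tree query can also request the timestamp field, which gives the build execution time in epoch milliseconds and answers question 2:
import json
import urllib.request

# Assumed endpoint, based on the URL in the question; timestamp added to the tree query.
API = "http://MyUrl/view/MyJob/api/json?tree=builds[number,url,timestamp]"
STORE = "known_builds.json"  # local store, updated (extended) on a regular basis

def fetch_builds():
    # C: the builds Jenkins currently still knows about, keyed by build number
    with urllib.request.urlopen(API) as resp:
        return {b["number"]: b for b in json.load(resp)["builds"]}

def load_store():
    # S: all builds ever seen, keyed by build number
    try:
        with open(STORE) as f:
            return {int(k): v for k, v in json.load(f).items()}
    except FileNotFoundError:
        return {}

def update_store():
    stored = load_store()
    stored.update(fetch_builds())  # extend the store, never remove entries
    with open(STORE, "w") as f:
        json.dump({str(k): v for k, v in stored.items()}, f)

def discarded_builds():
    current = fetch_builds()
    stored = load_store()
    # Builds that are in S but not in C have been discarded.
    return [stored[n] for n in sorted(stored.keys() - current.keys())]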

Travis-CI, how to get committer_email, author_name, within after_script command

I know committer_email, author_name, and a load of other variables are part of the notification event. Is it possible to get access to them in earlier events like before_script or after_script?
I would like to get access to this information and add it directly to my test results. Having build information, test result information, and GitHub repo information in the same file would be great.
You can extract committer e-mail, author name, etc. to environment variables using git log with --pretty, e.g.
export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
On Travis, one would put this in the before_install or before_script stage.
The TRAVIS_COMMIT environment variable is provided by default.
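To get this information into the same file as the test results, one option is to write a small metadata file during the build; a minimal sketch, assuming a hypothetical build-metadata.json file name and Travis' default TRAVIS_REPO_SLUG and TRAVIS_BUILD_NUMBER variables (merging it with the actual test report depends on whatever format the tests produce):
# Sketch for before_script or after_script: write commit/build metadata next to the test results
export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
cat > build-metadata.json <<EOF
{
  "repo": "${TRAVIS_REPO_SLUG}",
  "build_number": "${TRAVIS_BUILD_NUMBER}",
  "commit": "${TRAVIS_COMMIT}",
  "committer_email": "${COMMITTER_EMAIL}",
  "author_name": "${AUTHOR_NAME}"
}
EOF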