Bamboo Spec YAML and location of shared artifacts - bamboo

In the context of using Gradle to drive build, test, and further jobs/stages on a Bamboo server (version 7.2.1), I've configured the environment variable GRADLE_USER_HOME to save the downloaded Gradle binary to a project-local path, with the intent to share it with downstream jobs/stages.
But unfortunately Bamboo ignores the "source" (location) folder of the artifact. Excerpt from our bamboo.yaml:
Build Java application artifact:
  tasks:
    - script:
        scripts:
          - "export GRADLE_USER_HOME=${bamboo.build.working.directory}/GradleUserHome"
          - ./gradlew --no-daemon assemble
          - "echo GRADLE USER HOME content; ls -al $GRADLE_USER_HOME/; echo '---'" # DEBUG
  artifacts:
    - name: "Gradle Wrapper installation"
      location: GradleUserHome
      pattern: '**/*.*'
      required: true
      shared: true
The debugging output of the echo command shows the expected content.
But the next downstream job shows that the content of the artifact "Gradle Wrapper installation" is installed relative to the project's workspace, not in the sub-folder ./GradleUserHome as denoted by the location key (just as if the location config item were simply ignored for downstream jobs/stages).
Any ideas how to fix this?
Thanks
PS: The next downstream job's log contains something like the following:
Preparing artifact 'Gradle Wrapper installation' for use at /var/atlassian/bamboo-agent02-home/xml-data/build-dir/[...] (location: )
Take notice of the empty location!
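One avenue worth checking, sketched here under the assumption that the plan uses Bamboo Specs YAML throughout (the job name is a placeholder): a downstream job can declare an explicit artifact subscription with a destination directory, rather than relying on the producing job's location being replayed.

```yaml
# Hypothetical downstream job consuming the shared artifact.
# artifact-subscriptions lets the consumer choose where the
# artifact content is unpacked, relative to the working directory.
Run tests with shared Gradle home:
  tasks:
    - script:
        scripts:
          - "export GRADLE_USER_HOME=${bamboo.build.working.directory}/GradleUserHome"
          - ./gradlew --no-daemon check
  artifact-subscriptions:
    - artifact: "Gradle Wrapper installation"
      destination: GradleUserHome
```

This is only a sketch; whether the producer-side location is honored without a subscription may depend on the Bamboo version.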

Related

How do I see the logs added to the Xcode Build Phase Scripts in Azure DevOps pipeline?

I have a React Native app within an Nx monorepo that runs, archives, and builds successfully on my local machine.
I am trying to accomplish the same with an Azure DevOps pipeline using the following Xcode build task.
The Azure DevOps Xcode build task looks like this:
#Your build pipeline references an undefined variable named ‘Parameters.scheme’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
#Your build pipeline references an undefined variable named ‘Parameters.xcodeVersion’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
#Your build pipeline references an undefined variable named ‘APPLE_CERTIFICATE_SIGNING_IDENTITY’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
#Your build pipeline references an undefined variable named ‘APPLE_PROV_PROFILE_UUID’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
steps:
- task: Xcode@5
  displayName: 'Xcode Build to Generate the signed IPA'
  inputs:
    actions: 'clean build -verbose'
    xcWorkspacePath: 'apps/my-app/ios/MyApp.xcworkspace'
    scheme: '$(Parameters.scheme)'
    xcodeVersion: '$(Parameters.xcodeVersion)'
    packageApp: true
    exportOptions: specify
    exportMethod: 'ad-hoc'
    signingOption: manual
    signingIdentity: '$(APPLE_CERTIFICATE_SIGNING_IDENTITY)'
    provisioningProfileUuid: '$(APPLE_PROV_PROFILE_UUID)'
In the pipeline logs, I observed that it runs a command close to this:
xcodebuild -sdk iphoneos -configuration Release -workspace ios/MyApp.xcworkspace -scheme MyApp clean build -verbose
I modified the paths as above, ran the command in a local terminal, and it builds successfully. It prints the logs I set in Xcode > Target (MyApp) > Build Phases > Bundle React Native code and images, as shown below:
echo "\n 0. ⚛️🍀 DEBUG PIPELINE: Bundle React Native code and images \n"
echo "\n 1. ⚛️🍀 cd \$PROJECT_DIR/.."
pwd
ls
cd $PROJECT_DIR/..
export NODE_BINARY=node
./node_modules/react-native/scripts/react-native-xcode.sh
echo "\n 0. 🩸 DEBUG PIPELINE: Bundle React Native code and images::SCRIPT COMPLETED \n"
None of these logs show up in the pipeline, even when I enable system diagnostics before running the pipeline with:
☑️ Enable system diagnostics
I have seen the related questions and answers, and my attempt here is at troubleshooting to see what gets run.
Question: Does the Azure DevOps Xcode build task above use the same build phase script? Does it remove the logs? Does it use another build phase script? How can I see the logs added to build phase scripts in the Azure Pipelines logs?
Thank you.

dbt: how can I run ad_reporting model (only with google_ads source) from fivetran transformation?

I have a dbt project in a Bitbucket repo, which I connected to a Fivetran transformation.
My deployment.yml file contains:
jobs:
  - name: daily
    targetName: dev
    schedule: 0 12 * * * # Define when this job should run, using cron format. This example will run every day at 12:00pm (according to your warehouse timezone).
    steps:
      - name: run models # Give each step in your job a name. This will enable you to track the steps in the logs.
        command: dbt run
My dbt_project.yml file is:
name: 'myproject'
version: '1.0.0'
config-version: 2

# This setting configures which "profile" dbt uses for this project.
profile: 'fivetran'

# These configurations specify where dbt should look for different types of files.
# The `model-paths` config, for example, states that models in this project can be
# found in the "models/" directory. You probably won't need to change these!
model-paths: ["models"]
analysis-paths: ["analysis"]
test-paths: ["tests"]
seed-paths: ["data"]
macro-paths: ["macros"]
snapshot-paths: ["snapshots"]

target-path: "target" # directory which will store compiled SQL files
clean-targets: # directories to be removed by `dbt clean`
  - "target"
  - "dbt_modules"

vars:
  ad_reporting__pinterest_enabled: False
  ad_reporting__microsoft_ads_enabled: False
  ad_reporting__linkedin_ads_enabled: False
  ad_reporting__google_ads_enabled: True
  ad_reporting__twitter_ads_enabled: False
  ad_reporting__facebook_ads_enabled: False
  ad_reporting__snapchat_ads_enabled: False
  ad_reporting__tiktok_ads_enabled: False
  api_source: google_ads ## adwords by default and is case sensitive!
  google_ads_schema: google_ads
  google_ads_database: fivetran

models:
  # disable all models except google_ads
  linkedin:
    enabled: False
  linkedin_source:
    enabled: False
  twitter_ads:
    enabled: False
  twitter_ads_source:
    enabled: False
  snapchat_ads:
    enabled: False
  snapchat_ads_source:
    enabled: False
  pinterest:
    enabled: False
  pinterest_source:
    enabled: False
  facebook_ads:
    enabled: False
  facebook_ads_source:
    enabled: False
  microsoft_ads:
    enabled: False
  microsoft_ads_source:
    enabled: False
  tiktok_ads:
    enabled: False
  tiktok_ads_source:
    enabled: False
  google_ads:
    enabled: True
  google_ads_source:
    enabled: True
My packages.yml file is:
packages:
  - package: fivetran/ad_reporting
    version: 0.7.0
Bottom line:
I have a dbt project that eventually needs to run from a Fivetran transformation.
That means I cannot push the dbt_packages folder; instead I have the packages.yml file, which installs the needed packages via the command dbt deps.
After installing the packages, the dbt run command runs, and since packages.yml contains the ad_reporting package, the run command will cause the ad_reporting model to run.
And since in dbt_project.yml we disabled all sources except google_ads, only google_ads will be triggered from ad_reporting.
Now all I want is to run the dbt ad_reporting model with only the google_ads source.
This option is built in and should work.
However, when I run this command LOCALLY:
dbt run --select ad_reporting
I get this error:
Compilation Error
  dbt found two resources with the name "google_ads__url_ad_adapter". Since these resources have the same name,
  dbt will be unable to find the correct resource when ref("google_ads__url_ad_adapter") is used. To fix this,
  change the name of one of these resources:
  - model.google_ads.google_ads__url_ad_adapter (models\url_adwords\google_ads__url_ad_adapter.sql)
  - model.google_ads.google_ads__url_ad_adapter (models\url_google_ads\google_ads__url_ad_adapter.sql)
When I manually renamed this file:
dbt_packages\google_ads\models\url_google_ads\google_ads__url_ad_adapter.sql
from google_ads__url_ad_adapter.sql to google_ads1__url_ad_adapter.sql
(just to avoid duplicate file names, as I read in the dbt documentation that file names should be unique even if they are in different folders), everything worked just fine.
But, as I said before, I need this project to run from a Fivetran transformation, not locally.
And when I push this project to its repo, I don't push the dbt_packages folder, since a dbt project should be at most 30 MB in size.
Then, according to the packages.yml file, the dbt deps command is executed, and the project can run. BUT, as I showed, I needed to change a file name MANUALLY, and now, since I can't push the dbt_packages folder, dbt deps downloads the files again, and as you saw, there is a bug: two files come from the installation with the same name.
That's why, when the Fivetran transformation tries to run the command dbt run, I get this error again:
Compilation Error
  dbt found two resources with the name "google_ads__url_ad_adapter". Since these resources have the same name,
  dbt will be unable to find the correct resource when ref("google_ads__url_ad_adapter") is used. To fix this,
  change the name of one of these resources:
  - model.google_ads.google_ads__url_ad_adapter (models/url_google_ads/google_ads__url_ad_adapter.sql)
  - model.google_ads.google_ads__url_ad_adapter (models/url_adwords/google_ads__url_ad_adapter.sql)
What can I do to enable an ad_reporting run from a Fivetran transformation without this compilation error? And how is it possible that dbt produces these duplicate file names, after stating in its documentation that file names should be unique?
I found a solution.
As I said, the problem was the unnecessary file dbt_packages\google_ads\models\url_google_ads\google_ads__url_ad_adapter.sql.
So in the deployment.yml file, I added a step that deletes the file.
Now the deployment.yml file looks like this (I added the step called 'delete unnecessary file'):
jobs:
  - name: daily
    targetName: dev
    schedule: 0 12 * * * # Define when this job should run, using cron format. This example will run every day at 12:00pm (according to your warehouse timezone).
    steps:
      - name: delete unnecessary file
        command: dbt clean
      - name: run models
        command: dbt run # Enter the dbt command that should run in this step. This example will run all your models.
I also had to configure the dbt clean command, which removes the paths declared in dbt_project.yml, so I added the problematic folder path, like this (I added the last line):
clean-targets: # directories to be removed by `dbt clean`
  - "target"
  - "dbt_modules"
  - "dbt_packages/google_ads/models/url_adwords"
Now, after pushing the project, when the Fivetran transformation runs it, the first step deletes the duplicate file, and then the dbt run command runs just fine.
Problem solved :)
For the ad_reporting package to run, it needs at least 2 data sources.
For more info on setting up ad_reporting, look at this answer from the Fivetran team:
https://github.com/fivetran/dbt_ad_reporting/issues/48

"__built-in-schema.yml (Line: 2012, Col: 24): Expected a mapping" after V2-V3 pipeline converstion

UPDATE: Here's the contents of azure_pipelines.yml:
resources:
  repositories:
    - repository: pf
      type: git
      name: _
      ref: refs/tags/3.6.6

trigger: none

stages:
- template: __
  parameters:
    project:
      - name: "__" # Must be unique within the list of projects
        type: "msbuild" # Used with Publish/Deploy. Options: adla, dacpac, dotnet, egg, maven, msbuild, node, nuget, python, sap, ssis
        path: "__" # Project Path
        file: "__.csproj" # Project file
        toolset: "msbuild" # Used with Build/Package. Options: adla, dotnet, maven, msbuild, node, python, sap, ssis
        playbook: "___.yml"
        sonarqube:
          name: scan
          scan: true
          sqExclusions: ""
          additionalProperties: "" #|
          # sonar.branch.name=master
          # sonar.branch.target=master
        fortify:
          fortifyApp: "_______"
          fortifyVersion: "____"
        sast: true
        dast: false
        buildConfiguration: $(BuildConfiguration) #release, debug
        buildPlatform: $(BuildPlatform) #any cpu, x86, x64
      - name: "__" # Must be unique within the list of projects
        type: "msbuild" # Used with Publish/Deploy. Options: adla, dacpac, dotnet, egg, maven, msbuild, node, nuget, python, sap, ssis
        path: "___" # Project Path
        file: "___.csproj" # Project file
        toolset: "msbuild" # Used with Build/Package. Options: adla, dotnet, maven, msbuild, node, python, sap, ssis
        playbook: "_.yml"
        sonarqube:
          name: scan
          scan: true
          sqExclusions: ""
          additionalProperties: "" #|
          # sonar.branch.name=master
          # sonar.branch.target=master
        fortify:
          fortifyApp: "______"
          fortifyVersion: "__"
        sast: true
        dast: false
        buildConfiguration: $(BuildConfiguration) #release, debug
        buildPlatform: $(BuildPlatform) #any cpu, x86, x64
I'm attempting to run the ADO pipeline for a .NET Framework 4.7 app that has undergone the V2 to V3 pipeline conversion. I get the following error message, and the build doesn't even try to run:
__built-in-schema.yml: Maximum object depth exceeded
__built-in-schema.yml (Line: 2012, Col: 24): Expected a mapping
__built-in-schema.yml: Maximum object depth exceeded
That's using the msbuild toolset. I have an API project and a web app project in the solution; if I remove either of them and then run the pipeline, it progresses past this stage but gets hung up on another error. (It's not really necessary to go into detail about that error, though, because it's directly related to my having removed the other project.)
With the dotnet toolset, it runs, but then complains about a missing reference:
> D:\a\1\s\___\___.csproj(350,3): error MSB4019: The imported project "C:\Program Files\dotnet\sdk\3.1.101\Microsoft\VisualStudio\v16.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the expression in the Import declaration "C:\Program Files\dotnet\sdk\3.1.101\Microsoft\VisualStudio\v16.0\WebApplications\Microsoft.WebApplication.targets" is correct, and that the file exists on disk.
0 Warning(s)
I've tried commenting out the reference to Microsoft.WebApplication.targets in the .csproj files for the app, but it just adds them back in when I run the build locally or via the pipeline.
I've also tried changing the build pool to VS 2017 and Hosted Windows 2019 with VS2019, thinking the VS targets would be available in those pools, but no luck.
I also tried installing MSBuild.Microsoft.VisualStudio.Web.targets via NuGet, but end up with the same results.
What am I missing here?
According to the error log:
error MSB4019: The imported project "C:\Program Files\dotnet\sdk\3.1.101\Microsoft\VisualStudio\v16.0\WebApplications\Microsoft.WebApplication.targets" was not found
we can tell that Microsoft.WebApplication.targets is being resolved from the dotnet SDK instead of MSBuild. The correct path for Microsoft.WebApplication.targets should be:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\MSBuild\Microsoft\VisualStudio\v16.0\WebApplications\Microsoft.WebApplication.targets
So we need to use the MSBuild toolset instead of the dotnet toolset. And since it uses MSBuild 16.0, we need to use the build agent pool Hosted Windows 2019 with VS2019 to build the project.
For the error Maximum object depth exceeded, it seems your YAML expands into too deeply nested a structure; please try shortening or flattening it to see if this resolves the issue.
If the above info does not help you, please share more info about your issue, like your project type (is it a Python project?), and share your build pipeline (YAML).
Hope this helps.
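As a rough sketch of the suggestion above (the solution glob and variable names are assumptions; adapt them to your template's parameters), building with MSBuild 16.0 on the VS2019 hosted image might look like:

```yaml
pool:
  vmImage: 'windows-2019'   # Hosted Windows 2019 with VS2019

steps:
- task: VSBuild@1           # uses MSBuild from the VS2019 install,
  inputs:                   # which can locate Microsoft.WebApplication.targets
    solution: '**/*.sln'
    configuration: '$(BuildConfiguration)'
    platform: '$(BuildPlatform)'
```

Since the question uses a shared stage template, the equivalent change there is setting toolset to msbuild and pointing the pipeline at the VS2019 pool.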

How to fail task if there are no artifacts

I have a step in my .gitlab-ci.yml that runs a script that generates some artifacts. Under normal circumstances, the directory contains artifacts and they are picked up as such by GitLab CI. But I'm trying to set things up so that the task fails if there are no artifacts. All I get now is a warning in the log telling me there are no artifacts. I want to treat this warning as an error and fail the task. Is there a way to do this?
I suppose I could just update my bash script to exit non-zero if the artifacts aren't present, but I'd like to do it in the GitLab task definition if possible.
rpm_build:
  stage: build
  script: ./scripts/build_rpms.sh
  artifacts:
    paths:
      - my/RPMS/
    expire_in: 3 days
I've looked at the documentation on the artifacts section, but couldn't find anything.
https://docs.gitlab.com/ce/ci/yaml/#artifacts
At the moment, this is still an open issue in GitLab: https://gitlab.com/gitlab-org/gitlab-ce/issues/35641
Therefore you will have to resort to updating your script so that it returns a non-zero exit status when the artifacts are missing.
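Until that issue is resolved, the check has to live in the script itself. A minimal sketch, using the my/RPMS/ path from the job above (the function name is just a placeholder):

```shell
#!/bin/sh
# fail_if_empty DIR: return non-zero when DIR is missing or contains no
# files, so the CI job fails before GitLab's "no artifacts" warning.
fail_if_empty() {
    if [ -z "$(ls -A "$1" 2>/dev/null)" ]; then
        echo "ERROR: no artifacts found in $1" >&2
        return 1
    fi
}
```

Calling `fail_if_empty my/RPMS/` as the last line of scripts/build_rpms.sh makes the job fail whenever the directory ends up empty.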

Gradle uploadArchives via scp prompts for password multiple times

I am trying out Gradle and am trying to upload jars to my Nexus repo using Wagon SCP as described in the Gradle user guide. I have taken the build file as specified in the user guide:
configurations {
    deployerJars
}
repositories {
    mavenCentral()
}
dependencies {
    deployerJars "org.apache.maven.wagon:wagon-ssh:1.0-beta-2"
}
uploadArchives {
    repositories.mavenDeployer {
        name = 'sshDeployer' // optional
        configuration = configurations.deployerJars
        repository(url: "scp://repos.mycompany.com/releases") {
            authentication(userName: "me", password: "myPassword")
        }
    }
}
(Of course with the exception that the URL and credentials are adapted to my repo.)
Now, when running gradle uploadArchives, the build freezes after a while. I cancelled the build and restarted it with info logging turned on, and found that the script was prompting me for a password:
gradle -i uploadArchives
Starting Build
Settings evaluated using empty settings file.
Projects loaded. Root project using build file '/Users/developer/Slask/ex24/build.gradle'.
Included projects: [root project 'ex24']
Evaluating root project 'ex24' using build file '/Users/developer/Slask/ex24/build.gradle'.
All projects evaluated.
Selected primary task 'uploadArchives'
Tasks to be executed: [task ':compileJava', task ':processResources', task ':classes', task ':jar', task ':uploadArchives']
:compileJava
Executing task ':compileJava' due to:
No history is available for task ':compileJava'.
[ant:javac] Compiling 1 source file to /Users/developer/Slask/ex24/build/classes/main
[ant:javac] warning: [options] bootstrap class path not set in conjunction with -source 1.5
[ant:javac] 1 warning
:processResources
Skipping task ':processResources' as it has no source files.
:processResources UP-TO-DATE
:classes
Skipping task ':classes' as it has no actions.
:jar
Executing task ':jar' due to:
No history is available for task ':jar'.
:uploadArchives
Task ':uploadArchives' has not declared any outputs, assuming that it is out-of-date.
Publishing configuration: configuration ':archives'
:: loading settings :: url = jar:file:/usr/local/Cellar/gradle/1.0-milestone-7/libexec/lib/ivy-2.2.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Publishing to Resolver org.gradle.api.publication.maven.internal.ant.DefaultGroovyMavenDeployer#53c0f47a
[ant:null] Deploying to scp://192.168.0.100/mynexusrepo
[INFO] Retrieving previous build number from remote
Password::
Apparently, the password configured in the build script is ignored.
Anyway, I entered the password and then got prompted a few more times, where I obliged and re-entered the password.
Finally, the build completed successfully.
Afterwards I checked my repo, and the artifact had been uploaded successfully.
So uploading jars to a repo works.
However, Gradle prompting me for a password does not work for me, since I planned to use this in an automated build process with Jenkins.
NOW TO MY QUESTION:
Does anyone know if there is a way to turn this password prompting off?
I don't know why it's prompting you for the password. That may be something fixed in a newer version of Wagon. I do know that you can use this to avoid the need for a password:
repository(url: 'scp://example.com/var/repos') {
    authentication(userName: "me", privateKey: "/home/me/.ssh/id_rsa")
}
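For a Jenkins setup, the key path can also be resolved at build time instead of being hard-coded. A sketch, assuming the agent exposes the path in an environment variable (SSH_KEY_PATH is a placeholder name, not a standard variable):

```groovy
repository(url: 'scp://example.com/var/repos') {
    // Fall back to the default key location when the agent
    // doesn't set SSH_KEY_PATH; the key must be passphrase-less
    // (or loaded into an agent) for a fully unattended build.
    def keyPath = System.getenv('SSH_KEY_PATH') ?:
            "${System.properties['user.home']}/.ssh/id_rsa"
    authentication(userName: 'me', privateKey: keyPath)
}
```

This keeps credentials out of the build script entirely, which also avoids checking a password into version control.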