How do I see the logs added to the Xcode Build Phase Scripts in Azure DevOps pipeline? - react-native

I have a react native app within an nx monorepo that runs, archives, and builds successfully on my local machine.
I am trying to accomplish the same with an Azure DevOps pipeline using the following Xcode build task.
The Azure DevOps Xcode build task looks like this...
#Your build pipeline references an undefined variable named ‘Parameters.scheme’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
#Your build pipeline references an undefined variable named ‘Parameters.xcodeVersion’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
#Your build pipeline references an undefined variable named ‘APPLE_CERTIFICATE_SIGNING_IDENTITY’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
#Your build pipeline references an undefined variable named ‘APPLE_PROV_PROFILE_UUID’. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972
steps:
- task: Xcode@5
  displayName: 'Xcode Build to Generate the signed IPA'
  inputs:
    actions: 'clean build -verbose'
    xcWorkspacePath: 'apps/my-app/ios/MyApp.xcworkspace'
    scheme: '$(Parameters.scheme)'
    xcodeVersion: '$(Parameters.xcodeVersion)'
    packageApp: true
    exportOptions: specify
    exportMethod: 'ad-hoc'
    signingOption: manual
    signingIdentity: '$(APPLE_CERTIFICATE_SIGNING_IDENTITY)'
    provisioningProfileUuid: '$(APPLE_PROV_PROFILE_UUID)'
In the pipeline logs, I observed that it runs a command close to this...
xcodebuild -sdk iphoneos -configuration Release -workspace ios/MyApp.xcworkspace -scheme MyApp clean build -verbose
I modified the paths as above, ran the same command in a local terminal, and it builds successfully. It prints the logs I set in Xcode > Target (MyApp) > Build Phases > Bundle React Native code and images, shown below:
echo "\n 0. ⚛️🍀 DEBUG PIPELINE: Bundle React Native code and images \n"
echo "\n 1. ⚛️🍀 cd \$PROJECT_DIR/.."
pwd
ls
cd $PROJECT_DIR/..
export NODE_BINARY=node
./node_modules/react-native/scripts/react-native-xcode.sh
echo "\n 0. 🩸 DEBUG PIPELINE: Bundle React Native code and images::SCRIPT COMPLETED \n"
None of these logs show up in the pipeline, even when I enable system diagnostics before running the pipeline via:
☑️ Enable system diagnostics
I have seen related questions and answers; my attempt here is to troubleshoot and see what actually gets run.
Question: Does the Azure DevOps Xcode build task above use the same build phase script? Does it remove the logs? Does it use another build phase script? How can I see the logs added to the Build Phase scripts in the Azure Pipelines logs?
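As a troubleshooting step, I am considering adding a plain script step that runs the same xcodebuild invocation directly, on the assumption that its stdout (including the Build Phase echo lines) would then stream straight into the pipeline log. This is only a sketch; the paths and scheme are copied from my setup above:

- task: Bash@3
  displayName: 'Run xcodebuild directly to surface Build Phase output'
  inputs:
    targetType: inline
    script: |
      # Roughly the same command the Xcode@5 task appears to run; running it
      # directly should make the "Bundle React Native code and images" echo
      # lines land in this step's log.
      xcodebuild -sdk iphoneos -configuration Release \
        -workspace apps/my-app/ios/MyApp.xcworkspace -scheme MyApp \
        clean build -verbose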
Thank you.

Related

How to Use Docker Build Secrets with Kaniko

Context
Our current build system builds docker images inside of a docker container (Docker in Docker). Many of our docker builds need credentials to be able to pull from private artifact repositories.
We've handled this with Docker secrets: passing the secret to the docker build command and, in the Dockerfile, referencing the secret in the RUN command where it's needed. This means we're using Docker BuildKit. This article explains it.
We are moving to a different build system (GitLab) and the admins have disabled Docker in Docker (security reasons) so we are moving to Kaniko for docker builds.
Problem
Kaniko doesn't appear to support secrets the way Docker does (there are no command-line options to pass a secret through to the Kaniko executor).
The credentials the docker build needs are stored in GitLab variables. For DinD, you simply add those variables to the docker build as a secret:
DOCKER_BUILDKIT=1 docker build . \
--secret=type=env,id=USERNAME \
--secret=type=env,id=PASSWORD \
And then in docker, use the secret:
RUN --mount=type=secret,id=USERNAME --mount=type=secret,id=PASSWORD \
USER=$(cat /run/secrets/USERNAME) \
PASS=$(cat /run/secrets/PASSWORD) \
./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
Without the --secret flag to the kaniko executor, I'm not sure how to take advantage of docker secrets... nor do I understand the alternatives. I also want to continue to support developer builds. We have a 'build.sh' script that takes care of gathering credentials and adding them to the docker build command.
Current Solution
I found this article and was able to sort out a working solution. I want to ask the experts if this is valid or what the alternatives might be.
I discovered that when the kaniko executor runs, it appears to mount a volume into the image that's being built at: /kaniko. That directory does not exist when the build is complete and does not appear to be cached in the docker layers.
I also found out that if the Dockerfile secret is not passed in via the docker build command, the build still executes.
So my gitlab-ci.yml file has this excerpt (the REPO_USER/REPO_PWD variables are GitLab CI variables):
- echo "${REPO_USER}" > /kaniko/repo-credentials.txt
- echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
- /kaniko/executor
--context "${CI_PROJECT_DIR}/docker/target"
--dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
--destination "${IMAGE_NAME}:${BUILD_TAG}"
The key piece here is echoing the credentials to a file in the /kaniko directory before calling the executor. That directory is (temporarily) mounted into the image the executor is building, and since all this happens inside the Kaniko image, that file disappears when the Kaniko (GitLab) job completes.
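For context, the excerpt above sits inside an otherwise ordinary GitLab CI Kaniko job. A minimal sketch of the surrounding job, assuming the usual debug executor image (the job name, stage, and image tag are assumptions, not from my actual config):

build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug   # debug tag so the job has a shell for the echo lines
    entrypoint: [""]
  script:
    - echo "${REPO_USER}" > /kaniko/repo-credentials.txt
    - echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}/docker/target"
      --dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
      --destination "${IMAGE_NAME}:${BUILD_TAG}"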
The developer build script (snip):
# To keep it simple, this assumes that the developer has their credentials
# cached in a file (ignored by git) called dev-credentials.txt
DOCKER_BUILDKIT=1 docker build . \
--secret id=repo-creds,src=dev-credentials.txt
Basically same as before. Had to put it in a file instead of environment variables.
The dockerfile (snip):
RUN --mount=type=secret,id=repo-creds,target=/kaniko/repo-credentials.txt \
    USER=$(sed '1q;d' /kaniko/repo-credentials.txt) \
    PASS=$(sed '2q;d' /kaniko/repo-credentials.txt) \
    ./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
This Works!
In the Dockerfile, mounting the secret into the /kaniko folder means it works with both the DinD developer build and the CI Kaniko executor.
For dev builds, the DinD secret works as always (I had to change it to a file rather than environment variables, which I didn't love).
When the build is run by Kaniko, I suppose that since the secret in the RUN command is not found, it doesn't even try to write the temporary credentials file (which I expected would fail the build). Instead, because I wrote the variables directly to the temporarily mounted /kaniko directory, the rest of the RUN command was happy.
Advice
To me this seems kludgier than expected, and I want to find out about other/alternative solutions. Finding out that the /kaniko folder is mounted into the image at build time seems to open a lot of possibilities.

Bamboo Spec YAML and location of shared artifacts

In the context of using Gradle to drive build, testing, and further jobs/stages on Bamboo server (version 7.2.1), I've configured the environment variable GRADLE_USER_HOME to save the downloaded Gradle binary to a project-local path, with the intent of sharing it with further downstream jobs/stages.
But unfortunately Bamboo ignores the "source" or location folder of the artifact. Excerpt from our bamboo.yaml:
Build Java application artifact:
  tasks:
    - script:
        scripts:
          - "export GRADLE_USER_HOME=${bamboo.build.working.directory}/GradleUserHome"
          - ./gradlew --no-daemon assemble
          - "echo GRADLE USER HOME content; ls -al $GRADLE_USER_HOME/; echo '---'" # DEBUG
  artifacts:
    - name: "Gradle Wrapper installation"
      location: GradleUserHome
      pattern: '**/*.*'
      required: true
      shared: true
Debugging output of the echo command shows expected content.
But the next downstream job shows that the content of the artifact "Gradle Wrapper installation" is installed relative to the project's workspace, not in the sub-folder ./GradleUserHome denoted by the location key (just as if the location config item were simply ignored for downstream jobs/stages).
Any ideas how to fix this?
Thanks
PS: The next downstream job shows in its log messages something like the following:
Preparing artifact 'Gradle Wrapper installation' for use at /var/atlassian/bamboo-agent02-home/xml-data/build-dir/[...] (location: )
Note the empty location!
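For what it's worth, one direction I am looking at (a hedged sketch; I have not confirmed the exact Specs behaviour): the location key may only control where the artifact is collected from in the producing job, while the consuming job controls placement through an artifact subscription's destination, along these lines (the downstream job name is illustrative):

Consume Gradle Wrapper installation:
  artifact-subscriptions:
    - artifact: "Gradle Wrapper installation"
      destination: GradleUserHome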

Testing a Docker Image

I'm working on writing tests for a project and I want to test and verify a Docker image build, but I don't want to push the image.
I want the image to build on a CI (like Taskcluster) and run tests.
You would need to use taskcluster/docker-worker, which is a Docker worker detailed in the reference documentation.
That worker includes test suites: you can see an example in taskcluster/mozilla-taskcluster.
Run tests on the source code rather than on the final image.
Create a build Docker image with exactly the same environment as the deployment image.
Mount the source code into the build container and run the test cases inside it. Only if the test cases succeed do you build the deployment image and push it, as sketched below.
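A hedged sketch of that flow in shell (the image names, file names, and test command are illustrative assumptions, not from the original answer):

#!/bin/sh
set -e  # abort immediately if any step fails

# 1. Build an image that mirrors the deployment environment.
docker build -t myapp-build -f Dockerfile.build .

# 2. Mount the source code into that image and run the test suite inside it.
docker run --rm -v "$(pwd)":/src -w /src myapp-build ./run_tests.sh

# 3. Only reached if the tests passed: build the deployment image.
docker build -t myapp:candidate .

# No docker push here; publishing stays a separate, explicit step.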

iOS - How to pass build params in fastlane snapshot

I am using the fastlane snapshot tool for taking snapshots of app screens.
According to the fastlane community, I need to run:
fastlane snapshot init
Then, after configuring the project's UI test target, I need to run:
fastlane snapshot
But if I want to provide some build parameters, like xcodebuild's test -only-testing params, how can I do that? For example, I want to build like:
xcodebuild test -workspace <path> \
  -scheme <name> \
  -destination <specifier> \
  -only-testing:TestBundleA/TestSuiteA/TestCaseA \
  -only-testing:TestBundleB/TestSuiteB \
  -only-testing:TestBundleC
I have looked at:
fastlane snapshot --help
Then I added this to my Snapfile:
xcargs -only-testing:TestBundleB/TestSuiteB
But this gives the error:
(eval):36: syntax error, unexpected tSYMBEG, expecting keyword_do or
'{' or '(' only-testing:TestBundleB/TestSuiteB
How can I solve this error?
I am not familiar with running snapshot from the command line, so if you can, I would recommend creating a fastlane/Fastfile (or editing it if it already exists) with a lane that calls snapshot with the options you are looking for. You can call it with its various parameters as explained in the docs.
The example shows you how snapshot could be called, and the Parameters table describes the other parameters you can pass to the fastlane Action.
To pass xcargs via the Snapfile, try xcargs "-only-testing:TestBundleB/TestSuiteB" (with the quotes) in your Snapfile; see this Issue.
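The quoting matters because the Snapfile is evaluated as Ruby, so an unquoted -only-testing:... token is what appears to trigger the tSYMBEG syntax error. A minimal Snapfile sketch (only the xcargs line comes from the question; keep whatever else you already have in your Snapfile):

# Snapfile
xcargs "-only-testing:TestBundleB/TestSuiteB"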

Go, Golang : travis error for main program, go get -v

In my repo's subdirectory, I have some scripts with package main to show some example usage of my package. But this gives me the following errors when being tested on Travis.
repo
  example-dir
    sub-dir
      main.go // this gives me error like the following
github.com/~/directory-for-main-program
The command "go get -v ./..." failed. Retrying, 2 of 3.
I see this error only on Travis, not on my local machine with go test.
Is there any way to separate the main program and still be able to pass the Travis testing?
Either use the correct path in your main.go, which is the proper way, or use build constraints to disable that file:
// +build local

package main

// other code
Then, to build it locally, use go build -tags local or go run -tags local.
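For illustration, a minimal sketch of such a tag-guarded example file (the contents below are an assumption, not taken from the repo):

// +build local

// Example-usage program excluded from normal builds; Travis's default
// "go get -v ./..." skips this file because the "local" tag is not set.
package main

import "fmt"

func main() {
	fmt.Println("example usage of the package")
}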