How do I create an RTC baseline on a component in a stream using scm.exe?

After building software from the files in an RTC stream, I want to create a baseline of the component as a record of the state of the files.
It needs to be automated, hence the use of SCM.
I want to create it in a single step if I can, i.e., not create a baseline on the component in a workspace and then deliver it.
I can create a baseline on a component in a workspace using:
scm.exe create baseline -r Repository -u username -P password "workspace" "Baseline name" "Component name"
Alternatively, how can I automatically deliver the baseline in the workspace above, or should I be using snapshots?

I am not aware of a way to create the baseline in a single step.
If you create it in a dedicated workspace whose default flow target is the right stream, you should be able to call scm deliver, as mentioned here:
scm deliver -r <repo> -s <source_stream> -t <target_stream> -b <baseline_uuid_or_name>
Or simply:
scm deliver -C <component>
since that would deliver all change sets and baselines for that component.
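Putting the two steps together, a minimal automation sketch (the repository URI, credentials, workspace, baseline, and component names below are placeholders, not from the original question):

# 1. Create the baseline on the component in the dedicated build workspace
scm.exe create baseline -r https://ccm.example.com/ccm -u builduser -P secret "BuildWorkspace" "Build_1234" "MyComponent"

# 2. Deliver the component (change sets and the new baseline) to the default flow target
scm.exe deliver -r https://ccm.example.com/ccm -u builduser -P secret -s "BuildWorkspace" -C "MyComponent"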

How to Use Docker Build Secrets with Kaniko

Context
Our current build system builds Docker images inside a Docker container (Docker in Docker). Many of our Docker builds need credentials to be able to pull from private artifact repositories.
We've handled this with Docker build secrets: passing the secret to the docker build command and, in the Dockerfile, referencing the secret in the RUN command where it's needed. This means we're using Docker BuildKit. This article explains it.
We are moving to a different build system (GitLab) and the admins have disabled Docker in Docker (security reasons) so we are moving to Kaniko for docker builds.
Problem
Kaniko doesn't appear to support secrets the way Docker does (there are no command-line options to pass a secret through to the Kaniko executor).
The credentials the Docker build needs are stored in GitLab variables. For DinD, you simply add those variables to the docker build as secrets:
DOCKER_BUILDKIT=1 docker build . \
--secret=type=env,id=USERNAME \
--secret=type=env,id=PASSWORD
And then in the Dockerfile, use the secret:
RUN --mount=type=secret,id=USERNAME --mount=type=secret,id=PASSWORD \
USER=$(cat /run/secrets/USERNAME) \
PASS=$(cat /run/secrets/PASSWORD) \
./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
Without the --secret flag to the Kaniko executor, I'm not sure how to take advantage of Docker secrets, nor do I understand the alternatives. I also want to continue to support developer builds. We have a build.sh script that takes care of gathering credentials and adding them to the docker build command.
Current Solution
I found this article and was able to sort out a working solution. I want to ask the experts if this is valid or what the alternatives might be.
I discovered that when the Kaniko executor runs, it appears to mount a volume into the image that's being built at /kaniko. That directory does not exist when the build is complete and does not appear to be cached in the Docker layers.
I also found out that if the Dockerfile secret is not passed in via the docker build command, the build still executes.
So my gitlab-ci.yml file has this excerpt (the REPO_USER/REPO_PWD variables are GitLab CI variables):
- echo "${REPO_USER}" > /kaniko/repo-credentials.txt
- echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
- /kaniko/executor
--context "${CI_PROJECT_DIR}/docker/target"
--dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
--destination "${IMAGE_NAME}:${BUILD_TAG}"
The key piece here is echoing the credentials to a file in the /kaniko directory before calling the executor. That directory is (temporarily) mounted into the image which the executor is building. And since all this happens inside the Kaniko image, that file disappears when the Kaniko (GitLab) job completes.
The developer build script (snip):
# to keep it simple, this assumes that the developer has their credentials
# cached in a file (ignored by git) called dev-credentials.txt
DOCKER_BUILDKIT=1 docker build . \
--secret id=repo-creds,src=dev-credentials.txt
Basically the same as before; I had to put the credentials in a file instead of environment variables.
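To keep the developer flow close to the old environment-variable approach, a hypothetical helper line in build.sh could materialize the variables into that file (the REPO_USER/REPO_PWD names are illustrative, not from the original script):

# username on line 1, password on line 2, matching what the Dockerfile reads
printf '%s\n%s\n' "${REPO_USER}" "${REPO_PWD}" > dev-credentials.txt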
The dockerfile (snip):
RUN --mount=type=secret,id=repo-creds,target=/kaniko/repo-credentials.txt \
    USER=$(sed '1q;d' /kaniko/repo-credentials.txt) \
    PASS=$(sed '2q;d' /kaniko/repo-credentials.txt) \
    ./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
This Works!
In the Dockerfile, mounting the secret in the /kaniko subfolder makes it work with both the DinD developer build and the CI Kaniko executor.
For dev builds, the DinD secret works as always (I had to change it to a file rather than environment variables, which I didn't love).
When the build is run by Kaniko, I suppose that since the secret in the RUN command is not found, it doesn't even try to write the temporary credentials file (which I expected would fail the build). Instead, because I wrote the variables directly to the temporarily mounted /kaniko directory, the rest of the RUN command was happy.
Advice
To me this seems kludgier than expected, and I want to find out about other or alternative solutions. Finding out that the /kaniko folder is mounted into the image at build time seems to open a lot of possibilities.

Is there a way to test a fully-managed Cloud Run revision before sending traffic to it?

I use Google's Cloud Run (fully managed) to run an app that I'm building. When I deploy a new revision, I'd like to be able to verify that various health checks are ok before I start sending it traffic, but I haven't been able to find a URL for individual (traffic-less) revisions. Is there anything similar to what I'm looking for?
This is possible using "Revision tags", a feature currently in alpha:
By creating a tag latest that always points to the latest revision, you will be able to access it under the URL https://latest---<SERVICE>-<HASH>.a.run.app.
To do so, use this command:
gcloud alpha run services update-traffic <SERVICE> --update-tags latest=LATEST
When deploying, make sure to not migrate traffic to the new revision with:
gcloud run deploy --image ... --no-traffic
After testing the newly created revision, send 10% of the traffic to it with:
gcloud alpha run services update-traffic <SERVICE> --to-tags latest=10
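Once the tagged revision has been tested, traffic can also be migrated completely; a minimal sketch, assuming current gcloud flags:

# route 100% of traffic to the most recently created revision
gcloud run services update-traffic <SERVICE> --to-latest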
Yes, you can test a new revision before sending traffic to it.
In this example, the service currently has one revision, "editor-v1-0-0".
First, to test a new revision by opening its URL, you need to add a tag to it. To do so, add the flag shown below to the command that creates the new revision (it's also possible to add a tag with both the command line and the GUI even after the revision has been created):
--tag <tag>
Now, I'll add the tag "green" to a revision:
--tag green
Second, so that no traffic is sent to the new revision after creating it, you also need to add the flag shown below (you cannot use this flag if no revisions exist yet when creating a new revision):
--no-traffic
Then, including the two flags above, I run the full command, following "Shipping the public editor service" in the Securing Cloud Run services tutorial, to create a new revision with the "editor:2.0.0" image:
gcloud run deploy editor --image gcr.io/myproject-318173/editor:2.0.0 \
--service-account editor-identity \
--set-env-vars EDITOR_UPSTREAM_RENDER_URL=https://renderer-4bdlubpdxq-an.a.run.app \
--allow-unauthenticated \
--revision-suffix v2-0-0 \
--tag green \
--no-traffic
Now the new revision "editor-v2-0-0" is created with the tag "green" and "0% Traffic".
Clicking on the "green" tag of the new revision "editor-v2-0-0" opens it, so you can test the new revision before sending any traffic to it. In this example, the tagged URL is:
https://green---editor-4bdlubpdxq-an.a.run.app
In the GUI, by clicking on "🖋️", you can change the tag, for example from "green" to "blue", add another tag such as "yellow", or remove tags. But if you remove the tag, you can no longer open and test the new revision this way.
In addition, you can also change, add, and remove tags by clicking on "⋮" and then "Manage revision URLs (tags)".
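Tags can also be managed from the command line; a small sketch, assuming current gcloud flags (the service and revision names are from this example):

# remove the "green" tag, then point a "blue" tag at the same revision
gcloud run services update-traffic editor --remove-tags green
gcloud run services update-traffic editor --update-tags blue=editor-v2-0-0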
Lastly, I posted another answer explaining more about tags, so see it if you want to know more.

Delete or reset Gitlab CI builds

Is it possible to delete old builds in Gitlab CI?
I tested a few things and now have about 20 builds that are useless (most of them failed anyway).
It also shows stages that I don't have anymore which kinda clutters the Pipelines page and some of the uploaded artifacts are a bit big.
I wasn't able to find any documentation on this, only that disabling CI in the settings doesn't remove the builds.
Using Gitlab 8.10 Community (hosted by Gitlab.com)
There is currently no option in the GUI to completely get rid of a build other than erasing the related data from the build (the "erase" option on the build page).
If you have a local installation, you could modify the database directly, but I would advise caution. (I'll put the guide here for completeness' sake.)
Log in to the GitLab database. If you use the default PostgreSQL:
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d gitlabhq_production
Check if there is a table ci_builds. In psql: \dt
Delete the builds with normal SQL. For example: DELETE FROM ci_builds WHERE id = 2
(Optional) If you want to clean up the list of commits which triggered a build, you need to modify the table ci_commits.
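For example, a minimal sketch from the shell, reusing the psql invocation above (the ids are placeholders; inspect before you delete anything):

# list recent builds to find candidates for deletion
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql \
  -d gitlabhq_production -c "SELECT id, status, created_at FROM ci_builds ORDER BY id DESC LIMIT 20;"

# delete the unwanted builds by id
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql \
  -d gitlabhq_production -c "DELETE FROM ci_builds WHERE id IN (2, 3, 4);"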

Run Method everyday at certain time

I am writing a menu bar application, and I need to run a method every day at a certain time. I would like it to run even if the user is not logged in. I know I need to create a helper tool and register it with launchd. Is there a good tutorial on this? I'm not new to programming, but I am new to helper tools and launchd. I have been doing some reading and came across SMJob, and I know I can use it to create helper tools, just not how to use it. I just need some direction with this.
Take a look at Daemons and Services Programming Guide
The solution is to create a command-line utility, put a launchd plist file in the /Library/LaunchDaemons directory (note that it must be owned by root:wheel and have 0644 mode), and load the job via sudo launchctl load -w /Library/LaunchDaemons/your.plist (the -w flag forces your job to launch at every boot). To run your job periodically, set the StartInterval or StartCalendarInterval key in your plist (see "Creating Launch Daemons and Agents" -> "Creating a launchd Property List File" -> "Running a Job Periodically" in the guide for an example).
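As a minimal sketch (the label, utility path, and 09:00 schedule below are placeholder values), a daemon that runs a command-line utility every day at a fixed time could be installed like this:

sudo tee /Library/LaunchDaemons/com.example.dailytask.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.dailytask</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/mytask</string>
    </array>
    <!-- run every day at 09:00 -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>9</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
EOF

# must be owned by root:wheel with 0644 mode, then loaded with -w
sudo chown root:wheel /Library/LaunchDaemons/com.example.dailytask.plist
sudo chmod 0644 /Library/LaunchDaemons/com.example.dailytask.plist
sudo launchctl load -w /Library/LaunchDaemons/com.example.dailytask.plist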

Scoping a flow target of repository workspace in RTC Command line Interface

I have a repository workspace which has a default flow target. I want to edit the flow target and make it scoped to only a few components. This is possible from the RTC Eclipse client. How can I achieve the same from the RTC command-line interface? Please answer with reference to RTC 3.0.1.3.
I am not sure if that API works for 3.x or only for 4.x, but this sequence of lscm commands seems to produce a scoped flow target:
# Set a component as the flow target
$ lscm workspace flowtarget TestWorkspace1 TestStream1 -C TestComp2 -r lo
Successfully updated the flow target.
# View workspace flow target that was scoped to specific components
$ lscm workspace flowtarget TestWorkspace1 TestStream1 -r <repo>
(1352) "TestStream1" (scoped) (current)
The following components flow from/to this flow target:
(1351) "TestComp2"
You can see that command introduced in the Rational Team Concert 4.0.1 M4 milestone, so it is possible it isn't available in RTC 3.x.