I am building the WebRTC library using Travis CI.
The build works, but it takes a long time, and more and more often it ends with the message:
The job exceeded the maximum time limit for jobs, and has been
terminated.
You can consult a Travis log of a failed build. During the gclient sync:
_______ running 'download_from_google_storage --directory --recursive --num_threads=10 --no_auth --quiet --bucket chromium-webrtc-resources src/resources' in '/home/travis/build/mpromonet/webrtc-streamer/webrtc'
...
Hook 'download_from_google_storage --directory --recursive --num_threads=10 --no_auth --quiet --bucket chromium-webrtc-resources src/resources' took 1255.11 secs
I disabled the tests, so I think this download is useless, and it takes a lot of time.
Is there any way to pass some arguments or set some variables to avoid this time-costly task?
A way to avoid downloading the chromium-webrtc-resources defined in the DEPS dependencies file,
{
  # Download test resources, i.e. video and audio files from Google Storage.
  'pattern': '.',
  'action': ['download_from_google_storage',
             '--directory',
             '--recursive',
             '--num_threads=10',
             '--no_auth',
             '--quiet',
             '--bucket', 'chromium-webrtc-resources',
             'src/resources'],
},
is to patch it, either by removing this section or by adding a condition that evaluates to false.
To apply the patch, I used the following command:
sed -i -e "s|'src/resources'],|'src/resources'],'condition':'rtc_include_tests==true',|" src/DEPS
This saves about 20 minutes and allows the Travis build to stay below the timeout.
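If it helps, here is roughly where the patch fits into the fetch/sync sequence (a sketch only; the exact depot_tools commands in your .travis.yml may differ):
# sketch: sync without hooks, patch DEPS, then run the hooks
fetch --nohooks webrtc          # or: gclient sync --nohooks on an existing checkout
sed -i -e "s|'src/resources'],|'src/resources'],'condition':'rtc_include_tests==true',|" src/DEPS
gclient runhooks                # the chromium-webrtc-resources hook is now skipped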
You can bake the entire toolchain into a docker image and run your actual tests/builds in that. Delegate the docker image update to another automated process (a travis-ci cron job, for example).
An additional benefit is that you now have full control over when parts of your toolchain change. I find that very important.
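For illustration, a rough sketch of how the split could look (the image name, Dockerfile name, and build entry point are placeholders, not taken from the question):
# scheduled job (e.g. a travis-ci cron build): rebuild and publish the toolchain image
docker build -t myorg/build-toolchain:latest -f Dockerfile.toolchain .
docker push myorg/build-toolchain:latest
# regular CI job: run the actual build inside the prebuilt toolchain image
docker run --rm -v "$PWD":/src -w /src myorg/build-toolchain:latest ./build.sh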
Edit:
Some resources to read.
The official travis docs for using docker
Building & deploying images on travis
Dockerhub automated builds
Context
Our current build system builds docker images inside of a docker container (Docker in Docker). Many of our docker builds need credentials to be able to pull from private artifact repositories.
We've handled this with docker secrets: passing the secret to the docker build command and, in the Dockerfile, referencing the secret in the RUN command where it's needed. This means we're using docker BuildKit. This article explains it.
We are moving to a different build system (GitLab) and the admins have disabled Docker in Docker (security reasons) so we are moving to Kaniko for docker builds.
Problem
Kaniko doesn't appear to support secrets the way docker does (there are no command-line options to pass a secret through the Kaniko executor).
The credentials the docker build needs are stored in GitLab variables. For DinD, you simply add those variables to the docker build as a secret:
DOCKER_BUILDKIT=1 docker build . \
--secret=type=env,id=USERNAME \
--secret=type=env,id=PASSWORD \
And then in docker, use the secret:
RUN --mount=type=secret,id=USERNAME --mount=type=secret,id=PASSWORD \
USER=$(cat /run/secrets/USERNAME) \
PASS=$(cat /run/secrets/PASSWORD) \
./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build..
Without the --secret flag to the kaniko executor, I'm not sure how to take advantage of docker secrets... nor do I understand the alternatives. I also want to continue to support developer builds. We have a 'build.sh' script that takes care of gathering credentials and adding them to the docker build command.
Current Solution
I found this article and was able to sort out a working solution. I want to ask the experts if this is valid or what the alternatives might be.
I discovered that when the kaniko executor runs, it appears to mount a volume into the image being built at /kaniko. That directory does not exist when the build is complete and does not appear to be cached in the docker layers.
I also found out that if the Dockerfile secret is not passed in via the docker build command, the build still executes.
So my gitlab-ci.yml file has this excerpt (the REPO_USER/REPO_PWD variables are GitLab CI variables):
- echo "${REPO_USER}" > /kaniko/repo-credentials.txt
- echo "${REPO_PWD}" >> /kaniko/repo-credentials.txt
- /kaniko/executor
  --context "${CI_PROJECT_DIR}/docker/target"
  --dockerfile "${CI_PROJECT_DIR}/docker/target/Dockerfile"
  --destination "${IMAGE_NAME}:${BUILD_TAG}"
The key piece here is echoing the credentials to a file in the /kaniko directory before calling the executor. That directory is (temporarily) mounted into the image the executor is building. And since all of this happens inside the kaniko image, the file disappears when the kaniko (GitLab) job completes.
The developer build script (snip):
# To keep it simple, this assumes that the developer has their credentials
# cached in a file (ignored by git) called dev-credentials.txt
DOCKER_BUILDKIT=1 docker build . \
--secret id=repo-creds,src=dev-credentials.txt
Basically the same as before; I had to put the credentials in a file instead of environment variables.
The dockerfile (snip):
RUN --mount=type=secret,id=repo-creds,target=/kaniko/repo-credentials.txt \
    USER=$(sed '1q;d' /kaniko/repo-credentials.txt) \
    PASS=$(sed '2q;d' /kaniko/repo-credentials.txt) \
    ./scriptThatUsesTheseEnvVarCredentialsToPullArtifacts
...rest of build...
This Works!
In the Dockerfile, by mounting the secret into the /kaniko folder, the same RUN command works with both the DinD developer build and the CI Kaniko executor.
For dev builds, the DinD secret works as always (I had to change it to a file rather than environment variables, which I didn't love).
When the build is run by Kaniko, I suppose that since the secret in the RUN command is not found, it doesn't even try to write the temporary credentials file (which I expected would fail the build). Instead, because I wrote the variables directly to the temporarily mounted /kaniko directory, the rest of the RUN command was happy.
Advice
To me this seems kludgier than expected, and I want to find out about other/alternative solutions. Discovering that the /kaniko folder is mounted into the image at build time seems to open up a lot of possibilities.
I am trying to create a deployment pipeline for Gitlab-CI on a react project. The build is working fine and I use artifacts to store the dist folder from my yarn build command. This is working fine as well.
The issue is with my deployment command: aws s3 sync dist/ 's3://bucket-name'.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket; however, I do not know why I get an error on the deployment job.
When I run aws s3 sync dist/ 's3://bucket-name' locally, everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean at least one or more files marked for transfer were skipped during the transfer process. However, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special device, FIFO's, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
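If skipped files are actually acceptable in your case, one workaround (a sketch, not part of Anton's answer) is to tolerate exit code 2 in the deploy script while still failing on anything else:
# treat "some files were skipped" (exit code 2) as a warning, fail on anything else
# ($BUCKET_NAME is a placeholder for your bucket)
rc=0
aws s3 sync dist/ "s3://$BUCKET_NAME" || rc=$?
if [ "$rc" -eq 2 ]; then
  echo "warning: aws s3 sync skipped some files"
elif [ "$rc" -ne 0 ]; then
  exit "$rc"
fi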
There is no yarn build command. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer was the problem. The solution was removing special characters from a couple of SVG filenames. I suspect uploading the dist folder as an artifact (zip) might have changed some of the file names, which confused S3. Removing ® and + from the filenames resolved the issue.
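If you run into the same thing, a quick way to spot offending file names before uploading (a sketch, assuming the build output lives in dist/):
# list files whose names contain characters outside the printable ASCII range
LC_ALL=C find dist/ -name '*[! -~]*'
# list files whose names contain a literal plus sign
find dist/ -name '*+*'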
I have an ECS task that runs some test cases. I have it running in Fargate. Yay!
Now I want to download the test results file(s) from the container. I have the task and container IDs handy. I can find the exit code with
aws ecs describe-tasks --cluster Fargate --tasks <my-task-id>
How do I download the log and/or files produced?
It looks like, as of right now, the only way to get test results off of my server is to send the results to S3 before the container shuts down.
From this thread, there's no way to mount a volume / EFS onto a Fargate container.
Here's my bash script for running my tests (in build.sh) and then uploading the results to S3:
#!/bin/bash
echo Running tests...
pushd ~circleci/project/
export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY
commandToRun="~/project/.circleci/build_scripts/build.sh";
# Run the command, teeing its output to a log file
# ($FEATURE is expected to be provided as an environment variable)
eval $commandToRun 2>&1 | tee /tmp/build-$FEATURE.log
# Get the exit code of the command itself, not of tee
exitCode=${PIPESTATUS[0]}
# Ship the log to S3 before the container shuts down
aws s3 cp /tmp/build-$FEATURE.log s3://$CICD_BUCKET/build.log \
  --storage-class REDUCED_REDUNDANCY \
  --region us-east-1
exit ${exitCode}
Of course, you'll have to pass in the AWS_ACCESS_KEY, AWS_SECRET_KEY and CICD_BUCKET environment variables. The bucket you choose needs to be created in advance, but any directory structure below it does NOT need to be created in advance.
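One way to pass those variables is with container overrides when launching the task; a sketch (the task definition, container name, subnet, and bucket are placeholders):
aws ecs run-task \
  --cluster Fargate \
  --launch-type FARGATE \
  --task-definition my-tests-task \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-12345],assignPublicIp=ENABLED}' \
  --overrides '{
    "containerOverrides": [{
      "name": "my-tests-container",
      "environment": [
        {"name": "AWS_ACCESS_KEY", "value": "..."},
        {"name": "AWS_SECRET_KEY", "value": "..."},
        {"name": "CICD_BUCKET", "value": "my-cicd-bucket"}
      ]
    }]
  }'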
You probably want to look at using CodeBuild for this use case, which can automatically copy artifacts to S3.
It's actually quite easy to orchestrate the following using a simple bash script and the AWS CLI (a rough sketch follows the list):
Idempotently Create/Update a CodeBuild project (using a simple CloudFormation template you can define in your source repository)
Run a CodeBuild job that builds a given revision of your source repository (again using a buildspec.yml specification defined in your source repository)
Attach to the CloudWatch logs log group for your CodeBuild job and stream log output
Finally detect when the job has completed successfully or not and then download any artifacts locally using S3
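A rough sketch of that orchestration, using only standard AWS CLI calls (the project name, artifact bucket, and the $GIT_COMMIT revision variable are placeholders; log streaming is omitted here):
# start a build for the given revision and capture its id
BUILD_ID=$(aws codebuild start-build \
  --project-name my-codebuild-project \
  --source-version "$GIT_COMMIT" \
  --query 'build.id' --output text)
# poll until the build leaves the IN_PROGRESS state
while true; do
  STATUS=$(aws codebuild batch-get-builds --ids "$BUILD_ID" \
    --query 'builds[0].buildStatus' --output text)
  [ "$STATUS" != "IN_PROGRESS" ] && break
  sleep 10
done
# download the artifacts the buildspec uploaded to S3
aws s3 cp --recursive s3://my-artifact-bucket/my-codebuild-project/ ./artifacts/
# fail the wrapper script if the build did not succeed
[ "$STATUS" = "SUCCEEDED" ]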
I use this approach to run builds in CodeBuild, with Bamboo as the overarching continuous delivery system.
I use Drone as CI and want to know how I can disable simultaneous builds. What's happening is that when I push two commits to the git repo, Drone triggers a build for each commit. How can I make the second build wait until the first one finishes?
Regarding the open-source version of Drone: set the DOCKER_MAX_PROCS environment variable of your drone agent to 1, i.e. docker run -e DOCKER_MAX_PROCS=1 [...] drone/drone:0.5 agent. The agent will then run only one build at a time; other builds will queue up.
See the Installation Reference section in the readme for more info.
I am trying to use the Publish Over SSH plugin to publish many kinds of build artifact to an external server. Examples of build artifacts are compiled builds, XML output from testing, and JSON output from linting.
If testing or linting results in errors, the build will fail or be marked unstable. In the case of a failed build, the Publish Over SSH plugin will not copy the build artifacts, writing to the console:
SSH: Current build result is [FAILURE], not going to run.
I see no reason why I wouldn't want to publish this information if it exists, and I would like to continue to report errors as build failures. So, is there any way to force Jenkins to publish build artifacts even if the job is marked as a failure?
I thought I could use the Flexible Publish plugin to force this, by wrapping the Publish Over SSH step in an "always" condition, but this gave the same output as before on a build failure.
I can think of a couple of work-arounds:
a) Store the build status in an environment variable, force the status to SUCCESS, perform the publish step, then restore the build status from the environment variable using java -jar jenkins-cli.jar set-build-status $STORED_STATUS
OR
b) Write a bash script to perform the publishing step manually using SSH, cutting out the Publish Over SSH plugin altogether
Before I push forward with either of these solutions (neither of which I like), is there any piece of configuration that I'm missing?
The solution I ended up using was to copy the files manually with rsync/ssh in a post-build script. I configured this in my Jenkins Job Builder YAML like so:
- publisher:
    name: publish-to-archive
    publishers:
      - post-tasks:
          - matches:
              - log-text: ".*"
            script: |
              ssh -i ${{HOME}}/.ssh/id_rsa jenkins@archiver "mkdir -p {archive_path}"
              rsync -Pravdtze "ssh -i ${{HOME}}/.ssh/id_rsa" {source_path} jenkins@archiver:{archive_path}
Quoting old hooky on jenkinsci-users:
How can I force Publish Over SSH to work even if the build has been marked a failure?
Use "Send files or execute commands over SSH after the build runs" in the configuration section "Build environment" (Job configuration / Build Environment / Send files or execute commands over SSH after the build runs) instead of using a post-build step or build step.