Travis CI: Uploading artifacts to S3 results in "The bucket you are attempting to access must be addressed using the specified endpoint" - amazon-s3

I have a Travis CI build that is configured to upload the build artifacts to S3. I've followed the Travis artifacts documentation, but when the build completes I get the following error (and the S3 bucket is empty):
ERROR: failed to upload: /home/travis/build/jonburney/KingsgateMediaPlayer-Android/
app/build/outputs/apk/app-release-unsigned.apk
err: The bucket you are attempting to access must be addressed using the specified
endpoint. Please send all future requests to this endpoint.
I have tried to specify the "endpoint" option in the configuration but it was ignored. It appears to be attempting to upload the file to
https://s3.amazonaws.com/kmp-build-output/jonburney/KingsgateMediaPlayer-Android/30/30.1/app/build/outputs/apk/app-release-unsigned.apk.
Here is a copy of the relevant section from my .travis.yml file
addons:
  artifacts: true
  s3_region: "us-west-2"

artifacts:
  paths:
    - $(git ls-files -o app/build/outputs | tr "\n" ":")
Have I missed a configuration option for this scenario? Any help is appreciated!

This was fixed after an email to the Travis CI support team and some investigation. With the original layout, the s3_region setting never took effect, so the uploader addressed the bucket through the default endpoint rather than us-west-2, which is exactly what the error message complains about. The .travis.yml file was modified to ensure that "artifacts" was only present once, with everything nested beneath it, like so:
addons:
  artifacts:
    s3_region: "us-west-2"
    paths:
      - $(git ls-files -o app/build/outputs | tr "\n" ":")
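If you hit this error, it also helps to confirm which region the bucket actually lives in, since the message means the client addressed the bucket through the wrong regional endpoint. A quick check (a sketch, assuming the AWS CLI is configured with credentials that can see the bucket):

aws s3api get-bucket-location --bucket kmp-build-output
# => { "LocationConstraint": "us-west-2" }  -- this value is what s3_region must match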

Related

Append the package.json version number to my build artifact in aws-codebuild

I really don't know if this is a simple (it must be), common, or complex task.
I have a buildspec.yml file in my CodeBuild project, and I am trying to append the version written in the package.json file to the output artifact.
I have already seen a lot of tutorials that teach how to append the date (not really useful to me), and others that tell me to execute a version.sh file with this
echo $(sed -nr 's/^\s*"version": "([0-9]{1,}.[0-9]{1,}.*)",$/\1/p' package.json)
and set it in a variable (it doesn't work).
I'm ending up with a build folder called "my-project-$(version.sh)".
The CodeBuild environment uses Ubuntu and Node.js.
Update (solved):
My version.sh file:
#!/usr/bin/env bash
echo $(sed -nr 's/^\s*\"version": "([0-9]{1,}\.[0-9]{1,}.*)",$/\1/p' package.json)
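To illustrate what the sed expression extracts, here is a worked example against a typical package.json (note the regex requires the trailing comma, so it will not match if "version" is the last key in the file):

$ cat package.json
{
  "name": "my-project",
  "version": "1.0.0",
  "private": true
}
$ ./version.sh
1.0.0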
Then I found out two things:
Make version.sh executable in the repository:
git update-index --add --chmod=+x version.sh
Declare a variable in any phase of the buildspec; I did it in the build phase (just to make sure the repository has already been copied into the environment):
TAG=$($CODEBUILD_SRC_DIR/version.sh)
Then reference it in the artifact's versioned name:
artifacts:
  files:
    - '**/*'
  name: workover-frontend-$TAG
As a result, my build artifact's name is myproject-1.0.0.
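As an aside, if jq is available on the build image (an assumption; on the Ubuntu image it can be installed with apt-get if missing), reading the version is less fragile than line-matching with sed:

TAG=$(jq -r .version package.json)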
In my case this script would not fetch the data from package.json. On my local machine it worked great, but on AWS it didn't. I also had to use chmod in a different way, because I got a message saying I didn't have the right permissions. My buildspec:
version: 0.2
env:
  variables:
    latestTag: ""
phases:
  pre_build:
    commands:
      - "echo sed version"
      - sed --version
  build:
    commands:
      - chmod +x version.sh
      - latestTag=$($CODEBUILD_SRC_DIR/version.sh)
      - "echo $latestTag"
artifacts:
  files:
    - '**/*'
  discard-paths: yes
And the results in the console: (CodeBuild console screenshot)
I should also note that when I put just, for example, echo 222 into the version.sh file, I got the right answer in the CodeBuild console.
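Given that the pre_build sed --version check above suggests a suspicion of GNU-vs-BusyBox sed differences, one fallback worth trying avoids sed's -r flag entirely (a sketch, assuming the usual one-line "version" entry in package.json):

latestTag=$(grep -m1 '"version"' package.json | cut -d '"' -f4)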

GitLab CI/CD: Could I get the artifacts' real path in the runner and then send files with scp?

I'm learning GitLab CI/CD. When the build has finished, I want to send the files in the artifacts somewhere. Is this idea possible?
image: maven:3.8.1-jdk-11

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - mvn clean install
  artifacts:
    paths:
      - "*/target/*.jar"

deploy:
  stage: deploy
  script:
    - scp -r <artifacts_path> root@test.com:~/Deploy
Could I get the artifacts' real path in the runner and then send the files with scp?
Generally speaking, no. You must rely on the artifact restoration process. Keep in mind that (1) artifacts are generally not stored on the runner and (2) docker runners execute jobs inside of a docker container and typically would not have access to files on the runner host, even if artifacts were stored there.
When jobs start, artifacts from previous stages are restored into the workspace.
So, as an alternative solution, you can simply start with an empty workspace (don't checkout the repo), then upload all files in the workspace, which should be only the restored artifacts, assuming there are no file-based variables.
deploy:
  variables: # prevent checkout of repository
    GIT_STRATEGY: none
  stage: deploy
  script:
    - ls -laht # list files, which should be just the restored artifacts
    - scp -r ./ root@test.com:~/Deploy
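One caveat the snippet above glosses over: for scp to authenticate non-interactively, the job needs SSH credentials. A common pattern (a sketch; SSH_PRIVATE_KEY is an assumed CI/CD variable holding a private key authorized on the target host):

deploy:
  variables:
    GIT_STRATEGY: none
  stage: deploy
  before_script:
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan test.com >> ~/.ssh/known_hosts
  script:
    - scp -r ./ root@test.com:~/Deploy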
Another way might be to just use the same glob pattern used in the artifacts:paths: to find the files and upload them.
variables:
  ARTIFACTS_PATTERN: "*/target/*.jar"

build:
  # ...
  artifacts:
    paths:
      - $ARTIFACTS_PATTERN

deploy:
  script: # something like this. Not sure if scp supports glob patterns, hence rsync
    - rsync -a -m --include="*/" --include="$ARTIFACTS_PATTERN" --exclude="*" ./ user@remote:~/Deploy
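(A note on the rsync invocation: --include on its own does not restrict the transfer, so the extra --include="*/" and --exclude="*" filters above, which are my addition, are needed to send only the matching jars; -m then prunes directories that end up empty.)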

Persistent Bitbucket pipeline build artifacts greater than 14 days

I have a pipeline which loses build artifacts after 14 days. That is, after 14 days, without S3 or Artifactory integration, the pipeline of course loses the "Deploy" button functionality: it becomes greyed out since the build artifact has been removed. I understand this is intentional on BB/Atlassian's part to reduce costs etc. (detail in the link below).
Please check the last section of this page, "Artifact downloads and Expiry": https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
If you need artifact storage for longer than 14 days (or more than 1
GB), we recommend using your own storage solution, like Amazon S3 or a
hosted artifact repository like JFrog Artifactory.
Question:
Is anyone able to provide advice or sample code on how to approach BB Pipeline integration with Artifactory (or S3) in order to retain artifacts? Is the Artifactory generic upload/download pipe approach the only way, or is the quote above hinting at a more native BB "repository setting" to provide integration with S3 or Artifactory? https://www.jfrog.com/confluence/display/RTF6X/Bitbucket+Pipelines+Artifactory+Pipes
Bitbucket gives an example of linking to an S3 bucket on their site:
https://support.atlassian.com/bitbucket-cloud/docs/publish-and-link-your-build-artifacts/
The key is Step 4 where you link the artefact to the build.
However, the example doesn't actually create an artefact that is linked to S3, but rather adds a build status with a description that links to the uploaded items in S3. To use these in further steps you would then have to download the artefacts.
This can be done using the AWS CLI and an image that has it installed, for example amazon/aws-sam-cli-build-image-nodejs14.x (SAM was required in my case).
The following is an example that:
Creates an artefact (a txt file) and uploads it to an AWS S3 bucket
Creates a "link" as a build status against the commit that triggered the pipeline, as per Atlassian's suggestion (this is just added for reference after the 14 days... meh)
Carries out a "deployment", whereby the artefact is downloaded from AWS S3; in this stage I also set the downloaded S3 artefact as a Bitbucket artefact. I mean, why not... it may expire after 14 days, but if I've just re-deployed then I may want this available for another 14 days...
image: amazon/aws-sam-cli-build-image-nodejs14.x

pipelines:
  branches:
    main:
      - step:
          name: Create artefact
          script:
            - mkdir -p artefacts
            - echo "This is an artefact file..." > artefacts/buildinfo.txt
            - echo "Generating Build Number:\ ${BITBUCKET_BUILD_NUMBER}" >> artefacts/buildinfo.txt
            - echo "Git Commit Hash:\ ${BITBUCKET_COMMIT}" >> artefacts/buildinfo.txt
            - aws s3api put-object --bucket bitbucket-artefact-test --key ${BITBUCKET_BUILD_NUMBER}/buildinfo.txt --body artefacts/buildinfo.txt
      - step:
          name: Link artefact to AWS S3
          script:
            - export S3_URL="https://bitbucket-artefact-test.s3.eu-west-2.amazonaws.com/${BITBUCKET_BUILD_NUMBER}/buildinfo.txt"
            - export BUILD_STATUS="{\"key\":\"doc\", \"state\":\"SUCCESSFUL\", \"name\":\"DeployArtefact\", \"url\":\"${S3_URL}\"}"
            - curl -H "Content-Type:application/json" -X POST --user "${BB_AUTH_STRING}" -d "${BUILD_STATUS}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/commit/${BITBUCKET_COMMIT}/statuses/build"
      - step:
          name: Test - Deployment
          deployment: Test
          script:
            - mkdir artifacts
            - aws s3api get-object --bucket bitbucket-artefact-test --key ${BITBUCKET_BUILD_NUMBER}/buildinfo.txt artifacts/buildinfo.txt
            - cat artifacts/buildinfo.txt
          artifacts:
            - artifacts/**
Note:
I've got the following secrets/variables against the repository:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
BB_AUTH_STRING
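(BB_AUTH_STRING is presumably in username:app_password form, since it is passed to curl's --user flag for basic authentication against the Bitbucket API.)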

Downloading artifacts from coordinator forbidden

I am building a jar app with GitLab CI, and after the build the jar is sent to the next job as an artifact.
Mavenbuild:artifact:
  stage: mavenbuild
  image:
    name: maven:3.6.0-jdk-8
  tags:
    - docker
  script:
    - mvn clean install -pl batch-o365 -am -q
  artifacts:
    paths:
      - batch-o365/app

Dockerbuild:ok:
  stage: dockerbuild
  image:
    name: ekino/docker-buildbox:latest-dind-aws
  dependencies:
    - Mavenbuild:artifact
  tags:
    - docker
  script:
    - docker build .
The artifact is uploaded successfully:
Uploading artifacts...
batch-o365/app: found 3 matching files
Uploading artifacts to coordinator... ok id=11969 responseStatus=201 Created token=xxx
But when I try to retrieve it in the next job, I get this error:
Downloading artifacts for Mavenbuild:artifact (11969)...
ERROR: Downloading artifacts from coordinator... forbidden id=11969 responseStatus=403 Forbidden status=403 Forbidden token=xxx
FATAL: permission denied
ERROR: Job failed: exit code 1
I already use artifacts in another project on this GitLab server and they work well.
Has anyone here had this issue with artifacts?
I found the solution.
We are using internal proxies and I had forgotten to exclude the GitLab URL.
With this modification:
Dockerbuild:ok:
  stage: dockerbuild
  image:
    name: ekino/docker-buildbox:latest-dind-aws
  variables:
    HTTP_PROXY: http://proxy:8000
    HTTPS_PROXY: http://proxy:8000
    NO_PROXY: 169.254.169.254,gitlab.xxx.com
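(For context: 169.254.169.254 is the EC2 instance metadata endpoint, which also needs to bypass the proxy; the fix here is adding the GitLab host to NO_PROXY so requests to the coordinator no longer go through the proxy.)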
The artifact is now retrieved correctly by the job:
Downloading artifacts for Mavenbuild:artifact (11989)...
Downloading artifacts from coordinator... ok id=11989 responseStatus=200 OK token=--xxx

How to publish docker images to docker hub from gitlab-ci

GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, then select .gitlab-ci.yml and Docker). The file looks like this, and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to gitlab's registry. How can we publish to docker hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry url
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry URL can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in gitlab settings
I wanted to publish gableroux/unity3d images using gitlab-ci; here's what I used in GitLab's Project > Settings > CI/CD > Variables:
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>.
The registry URL needs to be updated, so use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer it verbose, so I kept the full registry URL.
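For instance, pushing by hand shows the two forms are equivalent (a sketch using the image from this answer):

docker tag my-local-image gableroux/unity3d:latest
docker push gableroux/unity3d:latest   # same as pushing index.docker.io/gableroux/unity3d:latest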
This answer should also cover other registries, just update environment variables accordingly. 🙌
To expand on GabLeRoux's answer:
I had issues on the pushing stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index.), I was able to push successfully.