Difficulties deploying CircleCI v2 to AWS S3 - amazon-s3

I'm a newbie to CI/CD and I've been trying for a couple of days to deploy an application to our bucket on AWS S3.
I tried this:
https://medium.freecodecamp.org/how-to-set-up-continuous-deployment-to-aws-s3-using-circleci-in-under-30-minutes-a8e268284098
this:
https://circleci.com/docs/1.0/continuous-deployment-with-amazon-s3/
And this:
https://medium.com/@zlwaterfield/circleci-s3-upload-dbffa0956b6f
But so far I haven't been able to get it working. CircleCI says my build was successful, but no deploy happens and no error message appears. My AWS permissions are set, so this task has been really frustrating.
Here's my final file:
jobs:
  build:
    docker:
      - image: circleci/openjdk:8-jdk
        environment:
          JVM_OPTS: -Xmx3200m
          TERM: dumb
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "build.gradle" }}
            - v1-dependencies-
      - run: gradle dependencies
      - save_cache:
          key: v1-dependencies-{{ checksum "build.gradle" }}
          paths:
            - ~/.gradle
      - run: gradle test
    working_directory: ~/repo
  deploy:
    machine:
      enabled: true
    steps:
      - run:
          command: aws s3 sync ${myAppName}/ s3://${myBucketName} --region us-west-2
          name: Deploy
    working_directory: ~/repo
version: 2
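A likely culprit, for context: in CircleCI 2.0, when the config has no workflows section, only the job named build runs, so the deploy job above is never invoked. A minimal sketch of a workflows block that would chain the two jobs defined above (job names taken from that config):

workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build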

Update: I was able to find a way. Here's my solution in case anyone needs it:
jobs:
  build:
    docker:
      - image: circleci/openjdk:8-jdk
        environment:
          JVM_OPTS: -Xmx3200m
          TERM: dumb
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "build.gradle" }}
            - v1-dependencies-
      - run: gradle dependencies
      - save_cache:
          key: v1-dependencies-{{ checksum "build.gradle" }}
          paths:
            - ~/.gradle
      - run: gradle build
      - run: gradle test
      - run:
          command: sudo apt-get -y -qq install awscli
          name: Install awscli
      - run:
          command: aws configure list
          name: Show credentials
      - run:
          command: aws s3 ls
          name: List all buckets
      - run:
          command: aws s3 sync /tmp/app/myProject/build/libs s3://my-aws-bucket
          name: Deploy to my AWS bucket
    working_directory: /tmp/app
version: 2
workflows:
  version: 2
  build-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - /development.*/
                - /staging.*/
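One assumption worth spelling out: the aws commands above only work if the CLI can find credentials. Nothing in the config provides them, so they are presumably set as CircleCI project environment variables (the names below are the standard AWS CLI variables, not something from the original post):

# Assumed setup, not shown in the config: in CircleCI, under
# Project Settings -> Environment Variables, define
#   AWS_ACCESS_KEY_ID
#   AWS_SECRET_ACCESS_KEY
#   AWS_DEFAULT_REGION   # e.g. us-west-2
# The aws CLI picks these up automatically.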

Related

Upload artifacts to AWS from CircleCI

I have created this YAML file to create a binary image for my IoT board with CircleCI.
version: 2.1
orbs:
  python: circleci/python@1.4.0
jobs:
  build:
    executor: python/default
    steps:
      - checkout # checkout source code to working directory
      - run:
          name: Install PlatformIO
          command: pip install --upgrade platformio
      - run:
          name: Compile Project
          command: pio run
      - run:
          name: Creating Dummy Artifacts
          command: |
            cd .pio/build/esp32dev
            echo "firmare.bin" > /tmp/art-1;
            mkdir /tmp/artifacts;
            echo "my artifact files in a dir" > /tmp/artifacts/art-2;
      - store_artifacts:
          path: /tmp/art-1
          destination: artifact-file
      - store_artifacts:
          path: /tmp/artifacts
workflows:
  main:
    jobs:
      - build
I would like to store the firmware.bin artifact in a bucket on AWS.
Do you know how to do it, or is there a similar example that I can check and modify?
Thanks a lot
I guess the simple option is to use CircleCI's circleci/aws-s3 orb.
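A minimal sketch of what that could look like for this project, assuming the build job from the question (the orb version, bucket name, and firmware path are placeholders; the orb expects AWS credentials such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the project's environment variables):

version: 2.1
orbs:
  python: circleci/python@1.4.0
  aws-s3: circleci/aws-s3@3.0
jobs:
  build:
    executor: python/default
    steps:
      - checkout
      - run: pip install --upgrade platformio
      - run: pio run
      # Upload the compiled firmware to S3; the bucket name is a placeholder.
      - aws-s3/copy:
          from: .pio/build/esp32dev/firmware.bin
          to: s3://my-firmware-bucket/firmware.bin
workflows:
  main:
    jobs:
      - build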

Selenium-Java Maven project CI/CD pipeline using CircleCI

I tried to create a CI/CD pipeline for a Selenium Maven project using CircleCI, but I got a WebDriverException. I've attached a screenshot and my circleci.yml file:
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/openjdk:11-jdk
    working_directory: ~/demoProject
    environment:
      # Customize the JVM maximum heap limit
      MAVEN_OPTS: -Xmx3200m
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run: mvn dependency:go-offline
      - run:
          name: Running X virtual framebuffer
          command: Xvfb :0 -ac &
      - run:
          name: Run Tests
          command: |
            export DISPLAY=:99
      - save_cache:
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}
      # run tests!
      - run: mvn clean test
      - store_artifacts:
          path: target/surefire-reports
          destination: tr1
      - store_test_results:
          path: target/surefire-reports
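One thing that stands out in that config (an observation, not part of the original post): Xvfb is started on display :0 while DISPLAY is exported as :99, so the browser never finds a framebuffer, which typically surfaces as a WebDriverException. A sketch of those steps made consistent:

- run:
    name: Running X virtual framebuffer
    command: Xvfb :99 -ac
    background: true
- run:
    name: Run Tests
    command: |
      export DISPLAY=:99
      mvn clean test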

GitLab pipeline optimization - overcoming release-cli problems

I've been trying to continuously improve our GitLab pipeline for a small project, add more automation, and speed up deployment time. Currently, there are several challenges I've faced.
We are using release-cli to create specific tags for every environment - prod, stage, and dev. Because of the way it works, every run creates a new tag, which in turn starts a new pipeline. So after every job we need to manually trigger the "package" stage from the new pipeline (see attached screenshot). I wonder how we can use release-cli more effectively and what I should add to the pipeline to fully automate its runs, at least for the production environment?
Here's the code:
stages:
  - install
  - release
  - package
  - deploy

### FOR MAIN BRANCH
.main-branch:
  only:
    - main

.using-cache:
  cache:
    key: main
    paths:
      - node_modules
    policy: pull

install prod:
  extends: .main-branch
  tags:
    - canary
  stage: install
  script:
    - echo "Install"

### RELEASE
patch release prod:
  tags:
    - canary
  extends: .main-branch
  needs: [ "install prod" ]
  stage: release
  before_script:
    - TAG=$(git describe origin/$CI_COMMIT_REF_NAME --abbrev=0 --tags)
    - NEW_VERSION=${TAG%.${TAG##*.}}.$CI_COMMIT_SHORT_SHA
  script:
    - echo "Running the release job. Set tag $NEW_VERSION"
    - release-cli create
      --name "Auto-release v$NEW_VERSION"
      --description "Auto-release v$NEW_VERSION"
      --tag-name "$NEW_VERSION"

package prod:
  extends: .main-branch
  tags:
    - canary
  stage: package
  when: manual
  script:
    - docker-compose build
    - docker-compose push
  only:
    - tags

deploy production:
  tags:
    - canary
  only:
    - tags
  needs: [ "package prod" ]
  extends: .main-branch
  stage: deploy
  environment:
    name: production
    url: https://saharok.store/
    action: start
  script:
    - kubectl config use-context zefir-projects/saharok-monorepo:saharok
    - kubectl create secret generic server-stable --from-env-file=$PROD_ENV_FILE -o yaml --dry-run=client | kubectl apply -f -
    - helm upgrade client-stable ./.helm/client -i --set image.tag=$CI_COMMIT_TAG --kube-context zefir-projects/saharok-monorepo:saharok
    - helm upgrade server-stable ./.helm/server -i --set image.tag=$CI_COMMIT_TAG --kube-context zefir-projects/saharok-monorepo:saharok

### FOR STAGE BRANCH
.stage-branch:
  only:
    - stage

.using-cache stage:
  cache:
    key: stage
    paths:
      - node_modules
    policy: pull

install stage:
  extends: .stage-branch
  tags:
    - canary
  stage: install
  script:
    - echo "Install"

patch release stage:
  tags:
    - canary
  extends: .stage-branch
  stage: release
  needs: [ "install stage" ]
  before_script:
    - TAG=$(git describe origin/$CI_COMMIT_REF_NAME --abbrev=0 --tags)
    - NEW_VERSION=${TAG%.${TAG##*.}}.$CI_COMMIT_SHORT_SHA
  script:
    - echo "Running the release job. Set tag $NEW_VERSION"
    - release-cli create
      --name "Auto-release v$NEW_VERSION"
      --description "Auto-release v$NEW_VERSION"
      --tag-name "$NEW_VERSION"

package stage:
  extends: .stage-branch
  tags:
    - canary
  stage: package
  when: manual
  script:
    - docker-compose build
    - docker-compose push
  only:
    - tags

deploy staging:
  tags:
    - canary
  only:
    - tags
  needs: [ "package stage" ]
  extends: .stage-branch
  stage: deploy
  when: manual
  environment:
    name: staging
    url: https://staging.saharok.store/
    action: start
  script:
    - kubectl config use-context zefir-projects/saharok-monorepo:saharok
    - kubectl create secret generic server-staging --from-env-file=$STAGE_ENV_FILE -o yaml --dry-run=client | kubectl apply -f -
    - helm upgrade client-staging ./.helm/client -i --set image.tag=$CI_COMMIT_TAG -f ./.helm/client/values.staging.yaml --kube-context zefir-projects/saharok-monorepo:saharok
    - helm upgrade server-staging ./.helm/server -i --set image.tag=$CI_COMMIT_TAG -f ./.helm/server/values.staging.yaml --kube-context zefir-projects/saharok-monorepo:saharok

### FOR DEV BRANCH
install dev:
  tags:
    - canary
  stage: install
  when: manual
  script:
    - echo "Install"

patch release dev:
  tags:
    - canary
  stage: release
  needs: [ "install dev" ]
  before_script:
    - TAG=$(git describe origin/$CI_COMMIT_REF_NAME --abbrev=0 --tags --always)
    - NEW_VERSION=${TAG%.${TAG##*.}}.$CI_COMMIT_SHORT_SHA
  script:
    - echo "Running the release job. Set tag $NEW_VERSION"
    - release-cli create
      --name "Auto-release v$NEW_VERSION for dev"
      --description "Auto-release v$NEW_VERSION for dev"
      --tag-name "$NEW_VERSION"

package dev:
  tags:
    - canary
  stage: package
  when: manual
  script:
    - docker-compose build
    - docker-compose push
  only:
    - tags

deploy dev:
  tags:
    - canary
  only:
    - tags
  stage: deploy
  needs: [ "package dev" ]
  environment:
    name: dev
    url: https://dev.saharok.store/
    action: start
  script:
    - kubectl config use-context zefir-projects/saharok-monorepo:saharok
    - kubectl create secret generic server-dev --from-env-file=$DEV_ENV_FILE -o yaml --dry-run=client | kubectl apply -f -
    - helm upgrade --install client-dev ./.helm/client --set image.tag=$CI_COMMIT_TAG -f ./.helm/client/values.dev.yaml --kube-context zefir-projects/saharok-monorepo:saharok
    - helm upgrade --install server-dev ./.helm/server --set image.tag=$CI_COMMIT_TAG -f ./.helm/server/values.dev.yaml --kube-context zefir-projects/saharok-monorepo:saharok
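As a sketch of one possible direction (an assumption on my part, not something from the thread): since release-cli creates a tag and that tag starts the new pipeline, the package job could run automatically in tag pipelines instead of being manual. GitLab does not allow mixing only/except with rules, so the job would switch entirely to rules, for example:

package prod:
  tags:
    - canary
  stage: package
  # Run automatically in pipelines started by a tag, never otherwise.
  # This replaces both 'when: manual' and 'only: tags'; note that 'rules'
  # cannot be combined with 'only'/'except' (including via 'extends').
  rules:
    - if: $CI_COMMIT_TAG
      when: on_success
  script:
    - docker-compose build
    - docker-compose push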

GitHub CI: Push React build to another repo

I've set up a GitHub Action that makes a build of my React application.
I need that build to be pushed to another repo that I'm using to keep track of the builds.
This is the action that is actually running:
on:
  push:
    branches: [master]
jobs:
  build:
    name: create-package
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        name: Use Node.js 14
        with:
          node-version: ${{ matrix.node-version }}
      #- name: Install dependencies
      - run: npm ci
      - run: npm run build --if-present
        env:
          CI: false
  copy:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Copy to another repo
        uses: andstor/copycat-action@v3
        with:
          personal_token: ${{ secrets.API_TOKEN_GITHUB }}
          src_path: build
          dst_path: /.
          dst_owner: federico-arona
          dst_repo_name: test-build
          dst_branch: main
However, when the action runs the copy job, it fails with the following message:
cp: can't stat 'origin-repo/build': No such file or directory
What am I doing wrong?
For anyone that needs an answer on this:
The problem was related to the fact that I was using two different jobs, one to run the build and one to copy that build to another repo.
This won't work because each job has its own runner and its own file system, meaning that data isn't shared between jobs.
To avoid this problem I did everything in one job. Another solution is to pass the build between jobs as an artifact (see the sketch below):
https://docs.github.com/en/actions/guides/storing-workflow-data-as-artifacts#passing-data-between-jobs-in-a-workflow
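A minimal sketch of that artifact approach, assuming the build job above (the artifact name react-build is arbitrary; actions/upload-artifact and actions/download-artifact are the standard actions for this):

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      - run: npm run build --if-present
      # Make the build directory available to later jobs.
      - uses: actions/upload-artifact@v2
        with:
          name: react-build
          path: build
  copy:
    runs-on: ubuntu-latest
    needs: build
    steps:
      # Download into ./build on this job's fresh file system.
      - uses: actions/download-artifact@v2
        with:
          name: react-build
          path: build
      # ...then push the build directory to the other repo.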
Another problem was related to the copy action I was using. For some reason that action didn't find the build directory, probably because it assumes a different working directory. I switched to another action.
Here's the final result:
on:
  push:
    branches: [master]
jobs:
  build:
    name: create-package
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        name: Use Node.js 14
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run build --if-present
        env:
          CI: false
      - run: ls
      - name: Copy to another repo
        uses: andstor/copycat-action@v3
        with:
          personal_token: ${{ secrets.API_TOKEN_GITHUB }}
          src_path: build
          dst_path: /.
          dst_owner: federico-arona
          dst_repo_name: test-build
          dst_branch: main

CircleCI Error migrating config to version 2

I am trying to migrate a CircleCI 1.0 config to 2.0,
and I got this error:
in job ‘build’: steps is not a list
Can someone help me with what the reason is?
version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.2.3-jessie
    environment:
      AWS_REGION: eu-central-1
    steps:
      - checkout
      - run: echo "Tests are skipped because of static site."
      - run: mkdir -p /tmp/test-data
  deploy:
    production:
      branch: master
      commands:
        - bundle exec middleman s3_sync
The indentation of the array below steps seems to be off. Try this:
steps:
  - checkout
  - run: echo "Tests are skipped because of static site."
  - run: mkdir -p /tmp/test-data