Build process in GitHub Actions completed with exit code 2 -- AWS S3 sync

The following GitHub Actions workflow returns error code 2. The last three lines of the workflow log appear to show the aws s3 sync completing successfully, and the same aws s3 sync CLI command works correctly when run locally.
Workflow results:
609 Completed 52.0 MiB/52.0 MiB (5.9 MiB/s) with 1 file(s) remaining
610 upload: services/v1/myfile.py to s3://bucket/dev/backend/services/v1/myfile.py
611 Error: Process completed with exit code 2.
GitHub Action:
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-2
      - name: Deploy to S3 bucket
        run: aws s3 sync . s3://bucket/dev/backend --exclude 'venv/*' --exclude '.aws-sam/*' --exclude '.git/*' --exclude '.gitignore'

OK, the answer is that the AWS CLI itself (for s3 sync) is returning 2. To check this I ran the following locally:
aws --version
aws-cli/2.1.30 Python/3.8.8 Darwin/20.6.0 exe/x86_64 prompt/off
aws s3 sync . s3://bucket/dev/backend --exclude 'venv/*' --exclude '.aws-sam/*' --exclude '.git/*' --exclude '.gitignore' --exclude 'services/ts_service1/.aws-sam/*' --profile ABC123 --delete
echo $?
2
And then checked the following:
aws s3 ls
echo $?
0
So there is something going on with aws s3 sync specifically: both commands appear to complete, yet sync exits with 2 while ls exits with 0. Per the AWS CLI documentation below, an exit code of 2 from an s3 transfer command can mean that one or more files marked for transfer were skipped, even though the remaining files transferred successfully.
See https://docs.aws.amazon.com/cli/latest/topic/return-codes.html
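If you want the workflow to tolerate the "files skipped" case while still failing on real errors, one option is to inspect the exit code in the step yourself. A minimal sketch (not from the original post), reusing the sync command from the workflow above:
      - name: Deploy to S3 bucket
        run: |
          # Disable errexit so a non-zero exit from sync doesn't kill the step immediately
          set +e
          aws s3 sync . s3://bucket/dev/backend --exclude 'venv/*' --exclude '.aws-sam/*' --exclude '.git/*' --exclude '.gitignore'
          code=$?
          # Exit code 2 from s3 commands can mean "one or more files were skipped";
          # treat only other non-zero codes as a failed deployment
          if [ "$code" -ne 0 ] && [ "$code" -ne 2 ]; then
            exit "$code"
          fi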

Related

github actions s3 push gives various errors

I'm trying to trigger a push to an S3 bucket when I git push locally to GitHub. In the yml file I use s3 sync, and that seems to be the troublesome line. It either says the path isn't found or, if I use the --recursive flag, it says that's an unknown option. I'm also using the --delete flag.
I've tried the local path with the exact path location for the directory I want to push, and I've also tried it like ./public/ as suggested here (https://dev.to/johnkevinlosito/deploy-static-website-to-s3-using-github-actions-). Then I saw in the AWS documentation that to push a whole directory you need the --recursive flag, so I tried adding that before the --delete flag and so on.
My yaml file looks like this:
name: Upload Website
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-1
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./public/ s3://BUCKET_NAME --delete
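For what it's worth, aws s3 sync copies a directory tree recursively by default (the --recursive flag belongs to aws s3 cp, not sync), and a "path isn't found" error usually means the source directory doesn't exist on the runner, e.g. because the site was never built in that job. A small debugging sketch of that step, assuming the same ./public/ source and placeholder bucket name:
      - name: Deploy static site to S3 bucket
        run: |
          # Show what the runner actually has; a missing ./public/ explains "path not found"
          ls -la ./public/
          # sync is recursive by default, so no --recursive flag is needed
          aws s3 sync ./public/ s3://BUCKET_NAME --delete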

.gitlab-ci.yml pipeline runs only on one branch

I have a .gitlab-ci.yml file. When I push to the stage branch it runs the stage commands (only: stage), but when I merge to main it still runs the "only stage" job.
What am I missing?
variables:
  DOCKER_REGISTRY: 036470204880.dkr.ecr.us-east-1.amazonaws.com
  AWS_DEFAULT_REGION: us-east-1
  APP_NAME: apiv6
  APP_NAME_STAGE: apiv6-test
  DOCKER_HOST: tcp://docker:2375

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
    - aws ecs update-service --cluster apiv6 --service apiv6 --force-new-deployment
  only:
    - main

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME_STAGE:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME_STAGE:latest
    - aws ecs update-service --cluster apiv6-test --service apiv6-test-service --force-new-deployment
  only:
    - stage
Itamar, I believe this is a YAML limitation. See this GitLab issue as reference.
The problem is that you have two jobs with the same name. But when the YAML file is parsed, you're actually overriding the first job.
Also, from the official GitLab documentation:
Use unique names for your jobs. If multiple jobs have the same name, only one is added to the pipeline, and it's difficult to predict which one is chosen.
Please, try renaming one of your jobs and test it again.
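For instance, a minimal sketch of that fix (the job names publish-prod and publish-stage are illustrative; pulling the shared settings into a hidden template job via extends is optional):
.publish-template:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker

publish-prod:
  extends: .publish-template
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
    - aws ecs update-service --cluster apiv6 --service apiv6 --force-new-deployment
  only:
    - main

publish-stage:
  extends: .publish-template
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME_STAGE:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME_STAGE:latest
    - aws ecs update-service --cluster apiv6-test --service apiv6-test-service --force-new-deployment
  only:
    - stage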

Download from S3 into an Actions workflow

I'm working on 2 github actions workflows:
Train a model and save it to s3 (monthly)
Download the model from s3 and use it in predictions (daily)
Using https://github.com/jakejarvis/s3-sync-action I was able to complete the first workflow. I train a model and then sync a dir, 'models', with a bucket on S3.
I had planned on using the same action to download the model for use in prediction, but it looks like this action is one-directional: upload only, no download.
I found out the hard way by creating a workflow and attempting to sync with the runner:
retreive-model-s3:
  runs-on: ubuntu-latest
  steps:
    - name: checkout current repo
      uses: actions/checkout@master
    - name: make dir to sync with s3
      run: mkdir models
    - name: checkout s3 sync action
      uses: jakejarvis/s3-sync-action@master
      with:
        args: --follow-symlinks
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_S3_ENDPOINT: ${{ secrets.AWS_S3_ENDPOINT }}
        AWS_REGION: 'us-south' # optional: defaults to us-east-1
        SOURCE_DIR: 'models' # optional: defaults to entire repository
    - name: dir after
      run: |
        ls -l
        ls -l models
    - name: Upload model as artifact
      uses: actions/upload-artifact@v2
      with:
        name: xgb-model
        path: models/regression_model_full.rds
At the time of running, when I log in to the UI I can see the object regression_model_full.rds is indeed there; it's just not downloading. I'm still unsure if this is expected or not (the name of the action, 'sync', is what's confusing me).
For our S3 we must use the parameter AWS_S3_ENDPOINT. I found another action, AWS S3, here, but unlike the sync action I started out with there's no option to add AWS_S3_ENDPOINT. Looking at the repo, too, it's two years old except for an update to the readme 8 months ago.
What's the 'prescribed' or conventional way to download from s3 during a workflow?
So I had the same problem as you. I was trying to download from S3 to update a directory folder in GitHub.
What I learned from Actions is that if you're updating some files in the repo, you must follow the normal approach as if you were doing it locally, e.g. checkout, make changes, push.
So for your particular workflow you must check out your repo in the workflow using actions/checkout@master, and after you sync with a particular directory, the main step I was missing was pushing the changes back to the repo. This allowed me to update my folder daily.
Anyway, here is my script, and I hope you find it useful. I am using the AWS S3 action you mention towards the end.
# This is a basic workflow to help you get started with Actions
name: Fetch data.

# Controls when the workflow will run
on:
  schedule:
    # Runs "at hour 6 past every day" (see https://crontab.guru)
    - cron: '00 6 * * *'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: keithweaver/aws-s3-github-action@v1.0.0 # Verifies the recursive flag
        name: sync folder
        with:
          command: sync
          source: ${{ secrets.S3_BUCKET }}
          destination: ./data/
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: ${{ secrets.AWS_REGION }}
          flags: --delete
      - name: Commit changes
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add .
          git diff-index --quiet HEAD || git commit -m "{commit message}" -a
          git push origin main:main
Sidenote: the --delete flag keeps your current folder in sync with your S3 folder by deleting any files that are no longer present in the S3 folder.
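Another common pattern, if you'd rather avoid third-party actions, is to configure credentials with aws-actions/configure-aws-credentials (used elsewhere on this page) and run aws s3 sync in the download direction with the AWS CLI that ships on the runner. This is only a sketch: the bucket/prefix is a placeholder, and the custom endpoint is passed through --endpoint-url.
retrieve-model-s3:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ secrets.AWS_REGION }}
    - name: Download model from S3
      # Direction is S3 -> runner; s3://my-bucket/models is a placeholder
      run: aws s3 sync s3://my-bucket/models ./models --endpoint-url "$AWS_S3_ENDPOINT"
      env:
        AWS_S3_ENDPOINT: ${{ secrets.AWS_S3_ENDPOINT }}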

How to set cache-control to max-age=0 with github s3 sync action

I am uploading a Vue app to my S3 bucket on every merge to master. My problem is that the invalidation of the cache does not fully work. My next step is to add metadata to the index.html object on every push. For that I wanted to ask how to add it in the GitHub action jakejarvis/s3-sync-action (https://github.com/marketplace/actions/s3-sync)?
Or do I have to use another github action to accomplish that?
My workflow looks like this at the moment:
name: Build
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: npm install
        run: |
          npm ci
      - name: build
        run: |
          npm run build
      - name: Deploy to S3
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_STAGING_BUCKET_NAME }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          SOURCE_DIR: 'dist'
      - name: Invalidate cloudfront
        uses: muratiger/invalidate-cloudfront-and-wait-for-completion-action@master
        env:
          DISTRIBUTION_ID: ${{ secrets.AWS_STAGING_DISTRIBUTION_ID }}
          PATHS: '/*'
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
The result I am looking for is that index.html carries the additional metadata Cache-Control: max-age=0 after every deployment. At the moment I am adding it by hand in the S3 management console, which is not a good solution for me because the metadata is gone after every deployment.
I found answers on how to do it with the aws-cli, but I don't know if it is possible to add that in my action.
aws s3 cp s3://[mybucket]/index.html s3://[mybucket]/index.html --metadata-directive REPLACE \
--expires 2034-01-01T00:00:00Z --acl public-read --cache-control max-age=0,public
PS: I know I need to write tests 🙈
In your example:
with:
  args: --acl public-read --delete
args takes effect because it's passed through as-is to the aws s3 invocation, which has $* at the end:
sh -c "aws s3 sync ${SOURCE_DIR:-.} s3://${AWS_S3_BUCKET}/${DEST_DIR} \
--profile s3-sync-action \
--no-progress \
${ENDPOINT_APPEND} $*"
If you'd also like to set --cache-control max-age=0,public then add it to the args:
with:
  args: --acl public-read --delete --cache-control max-age=0,public
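Keep in mind that --cache-control passed this way applies to every file the sync uploads. If you want only index.html to carry max-age=0, one option (a sketch, not part of the original answer; the bucket name is a placeholder and the credential env vars are assumptions) is to leave the sync as-is and add a follow-up step that re-copies just index.html, using the aws s3 cp command from the question:
      - name: Set no-cache metadata on index.html
        run: |
          # Copy the object onto itself, replacing its metadata with a no-cache policy
          aws s3 cp s3://my-bucket/index.html s3://my-bucket/index.html \
            --metadata-directive REPLACE --acl public-read \
            --cache-control max-age=0,public
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_REGION }}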

Travis AWS S3 SDK set cache header for particular file

In my Travis script, is there a way, when uploading contents to an S3 bucket as follows:
deploy:
  provider: script
  skip_cleanup: true
  script: "~/.local/bin/aws s3 sync dist s3://mybucket --region=eu-west-1 --delete"
before_deploy:
  - npm run build
  - pip install --user awscli
I also want to set a no-cache header on a particular file in that bucket (i.e. sw.js). Is that currently possible in the SDK?
I'm afraid this is not possible with a single s3 sync command, but you can try executing two commands using the exclude and include options: one to sync everything except sw.js, and another just for sw.js.
script: ~/.local/bin/aws s3 sync dist s3://mybucket --include "*" --exclude "sw.js" --region eu-west-1 --delete ; ~/.local/bin/aws s3 sync dist s3://mybucket --exclude "*" --include "sw.js" --region eu-west-1 --delete --cache-control "no-cache" --metadata-directive REPLACE
Note: the --metadata-directive REPLACE option is necessary for non-multipart copies.
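If readability of that long one-liner is a concern, the same two commands could be folded into the deploy block from the question (a sketch, assuming the dist directory and placeholder bucket shown above):
deploy:
  provider: script
  skip_cleanup: true
  # Same two sync commands as above, chained so the deploy fails if either one fails
  script: >-
    ~/.local/bin/aws s3 sync dist s3://mybucket --include "*" --exclude "sw.js"
    --region eu-west-1 --delete &&
    ~/.local/bin/aws s3 sync dist s3://mybucket --exclude "*" --include "sw.js"
    --region eu-west-1 --delete --cache-control "no-cache" --metadata-directive REPLACE
before_deploy:
  - npm run build
  - pip install --user awscli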