github actions s3 push gives various errors

I'm trying to trigger a push to an s3 bucket when I git push locally to GitHub. In the yml file I use s3 sync, and that seems to be the troublesome line. It either says the path isn't found or, if I use the --recursive flag, it says that's an unknown option. I'm also using the --delete flag.
I've tried the local path with the exact location of the directory I want to push, and I've also tried writing it as ./public/ as suggested here (https://dev.to/johnkevinlosito/deploy-static-website-to-s3-using-github-actions-). Then I saw in the AWS docs that to push a whole directory you need the --recursive flag, so I tried adding that before the --delete flag.
My yaml file looks like this
name: Upload Website
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-1
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./public/ s3://BUCKET_NAME --delete
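As an aside, aws s3 sync copies directories recursively by default and does not accept a --recursive flag at all (that flag belongs to aws s3 cp), which is why the CLI rejects it as unknown. A quick sketch of the distinction, reusing the ./public/ directory and BUCKET_NAME placeholder from above:

# sync is recursive by default and supports --delete
aws s3 sync ./public/ s3://BUCKET_NAME --delete
# cp needs --recursive to copy a directory, but has no --delete
aws s3 cp ./public/ s3://BUCKET_NAME --recursive

If the error is that the path isn't found, check that ./public/ actually exists in the runner's workspace, i.e. that the checkout (and any build step that generates it) ran before the sync.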

Related

Download from s3 into an actions workflow

I'm working on 2 github actions workflows:
1. Train a model and save it to s3 (monthly)
2. Download the model from s3 and use it in predictions (daily)
Using https://github.com/jakejarvis/s3-sync-action I was able to complete the first workflow. I train a model and then sync a dir, 'models', with a bucket on s3.
I had planned on using the same action to download the model for use in prediction, but it looks like this action is one-directional: upload only, no download.
I found out the hard way by creating a workflow and attempting to sync with the runner:
retrieve-model-s3:
  runs-on: ubuntu-latest
  steps:
    - name: checkout current repo
      uses: actions/checkout@master
    - name: make dir to sync with s3
      run: mkdir models
    - name: checkout s3 sync action
      uses: jakejarvis/s3-sync-action@master
      with:
        args: --follow-symlinks
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_S3_ENDPOINT: ${{ secrets.AWS_S3_ENDPOINT }}
        AWS_REGION: 'us-south' # optional: defaults to us-east-1
        SOURCE_DIR: 'models' # optional: defaults to entire repository
    - name: dir after
      run: |
        ls -l
        ls -l models
    - name: Upload model as artifact
      uses: actions/upload-artifact@v2
      with:
        name: xgb-model
        path: models/regression_model_full.rds
At the time of running, when I log in to the UI I can see the object regression_model_full.rds is indeed there; it's just not downloading. I'm still unsure whether this is expected or not (the name of the action, 'sync', is what confuses me).
For our s3 we must use the AWS_S3_ENDPOINT parameter. I found another action, AWS S3, but unlike the sync action I started out with, there's no option to add AWS_S3_ENDPOINT. Looking at the repo, it's also two years old except for an update to the readme 8 months ago.
What's the 'prescribed' or conventional way to download from s3 during a workflow?
So, I had the same problem as you: I was trying to download from S3 to update a directory in a GitHub repo.
What I learned from Actions is that if you're updating files in the repo, you must follow the normal approach as if you were doing it locally, e.g. checkout, make changes, push.
So for your particular workflow you must check out your repo in the workflow using actions/checkout@master, sync with the particular directory, and then (the main step I was missing) push the changes back to the repo! This allowed me to update my folder daily.
Anyway, here is my script; I hope you find it useful. I am using the AWS S3 action you mention towards the end.
# This is a basic workflow to help you get started with Actions
name: Fetch data.
# Controls when the workflow will run
on:
  schedule:
    # Runs "at hour 6 past every day" (see https://crontab.guru)
    - cron: '00 6 * * *'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: keithweaver/aws-s3-github-action@v1.0.0 # Verifies the recursive flag
        name: sync folder
        with:
          command: sync
          source: ${{ secrets.S3_BUCKET }}
          destination: ./data/
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: ${{ secrets.AWS_REGION }}
          flags: --delete
      - name: Commit changes
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add .
          git diff-index --quiet HEAD || git commit -m "{commit message}" -a
          git push origin main:main
Sidenote: the --delete flag keeps your current folder up to date with your s3 folder by deleting any files that are no longer present in your s3 folder.
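If you'd rather avoid a third-party action for the download, the plain AWS CLI (preinstalled on ubuntu-latest runners) syncs in either direction; the download case just swaps source and destination. A minimal sketch, assuming the same secrets as above and us-east-1 as the region:

- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Download models from S3
  # bucket is the source, local dir the destination; add
  # --endpoint-url ${{ secrets.AWS_S3_ENDPOINT }} if you need a custom endpoint
  run: aws s3 sync s3://${{ secrets.AWS_S3_BUCKET }}/models ./models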

Build process in Github Actions completed with exit code 2. -- AWS S3 sync

The following GitHub Actions workflow returns exit code 2. The last 3 lines of the workflow log seem to show the aws sync completed successfully, and the aws s3 sync CLI command works correctly locally.
Workflow results:
Completed 52.0 MiB/52.0 MiB (5.9 MiB/s) with 1 file(s) remaining
upload: services/v1/myfile.py to s3://bucket/dev/backend/services/v1/myfile.py
Error: Process completed with exit code 2.
GitHub Action:
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-2
      - name: Deploy to S3 bucket
        run: aws s3 sync . s3://bucket/dev/backend --exclude 'venv/*' --exclude '.aws-sam/*' --exclude '.git/*' --exclude '.gitignore'
OK, the answer is that the AWS CLI (for s3 sync) is returning 2. To check this I ran the following locally:
aws --version
aws-cli/2.1.30 Python/3.8.8 Darwin/20.6.0 exe/x86_64 prompt/off
aws s3 sync . s3://bucket/dev/backend --exclude 'venv/*' --exclude '.aws-sam/*' --exclude '.git/*' --exclude '.gitignore' --exclude 'services/ts_service1/.aws-sam/*' --profile ABC123 --delete
echo $?
2
And then checked the following:
aws s3 ls
echo $?
0
So there is something going on with the AWS CLI: both commands executed correctly, yet s3 sync returned 2 while s3 ls returned 0. Per the return-codes documentation below, for s3 commands an exit code of 2 can simply mean that one or more files marked for transfer were skipped.
See https://docs.aws.amazon.com/cli/latest/topic/return-codes.html
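If the skipped files are expected (symlinks, permission oddities and the like), one workaround is to treat exit code 2 as a soft failure in the deploy step. A sketch, not an official recommendation, reusing a shortened version of the sync command above:

- name: Deploy to S3 bucket
  run: |
    # s3 sync returns 2 when some files were skipped; tolerate that, fail on anything else
    aws s3 sync . s3://bucket/dev/backend --exclude 'venv/*' --exclude '.git/*' || code=$?
    if [ "${code:-0}" -ne 0 ] && [ "${code:-0}" -ne 2 ]; then exit "$code"; fi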

How to set cache-control to max-age=0 with github s3 sync action

I am uploading a vue app to my s3 bucket on every merge to master. My problem is that the invalidation of the cache does not fully work. My next step is adding metadata to the index.html object on every push. For that I wanted to ask how to add it in the github action jakejarvis/s3-sync-action (https://github.com/marketplace/actions/s3-sync)?
Or do I have to use another github action to accomplish that?
My workflow looks like this at the moment:
name: Build
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: npm install
        run: |
          npm ci
      - name: build
        run: |
          npm run build
      - name: Deploy to S3
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_STAGING_BUCKET_NAME }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          SOURCE_DIR: 'dist'
      - name: Invalidate cloudfront
        uses: muratiger/invalidate-cloudfront-and-wait-for-completion-action@master
        env:
          DISTRIBUTION_ID: ${{ secrets.AWS_STAGING_DISTRIBUTION_ID }}
          PATHS: '/*'
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
The result I am looking for is that index.html has the additional metadata Cache-Control: max-age=0 after every deployment. At the moment I am adding it by hand in the s3 management console, which is not a good solution for me because the metadata is gone after every deployment.
I found answers on how to do it with the aws-cli, but I don't know if it is possible to add that in my action.
aws s3 cp s3://[mybucket]/index.html s3://[mybucket]/index.html --metadata-directive REPLACE \
--expires 2034-01-01T00:00:00Z --acl public-read --cache-control max-age=0,public
PS: I know I need to write tests 🙈
In your example:
with:
  args: --acl public-read --delete
args takes effect because it's passed through as-is to the aws s3 invocation, which has $* at the end:
sh -c "aws s3 sync ${SOURCE_DIR:-.} s3://${AWS_S3_BUCKET}/${DEST_DIR} \
       --profile s3-sync-action \
       --no-progress \
       ${ENDPOINT_APPEND} $*"
If you'd also like to set --cache-control max-age=0,public then add it to the args:
with:
  args: --acl public-read --delete --cache-control max-age=0,public
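One caveat: because the args are appended to the aws s3 sync call itself, --cache-control set this way applies to every synced file, not just index.html. If only index.html should get max-age=0, a follow-up step built from the cp command in the question should do it; a sketch, assuming the same secrets as the workflow above:

- name: Set cache-control on index.html
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_REGION: ${{ secrets.AWS_REGION }}
  run: |
    # copy the object onto itself, replacing its metadata
    aws s3 cp s3://${{ secrets.AWS_STAGING_BUCKET_NAME }}/index.html \
      s3://${{ secrets.AWS_STAGING_BUCKET_NAME }}/index.html \
      --metadata-directive REPLACE --acl public-read --cache-control max-age=0,public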

Github actions to deploy static site to AWS S3

I am trying to deploy static content to AWS S3 from GitHub Actions. I created the AWS ID and secret environment variables and have this as main.yml:
name: S3CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Build static site
      - run: yarn install && npm run-script build
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./dist/ s3://awss3-blog --delete
But GitHub Actions failed with the error:
Invalid Workflow File
DETAILS
every step must define a uses or run key
Usually (always, actually, in my own experience) GitHub clearly shows the invalid part of the YAML. In my case, it almost always complains about tabs instead of spaces, and yes, I'm very mad about it!!!
In your case, as @smac89 already mentioned, it is the line starting with - run, which because of that dash is wrongly NOT associated with the previous - name, so the - name becomes an orphan as well.
On the point of deploying to S3: I warmly suggest (as I already did somewhere else) doing it just with the CLI, without any additional action/plugin.
It is as simple as:
- name: Deploy static site to S3 bucket
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  run: aws s3 sync ./dist/ s3://awss3-blog --delete
As you can see, it is exactly the same effort from the secrets perspective, but simpler, independent, cleaner etc. BTW, region is not required and may safely be omitted.
It has to do with this line:
- run: yarn install && npm run-script build
But it is specifically complaining about this step:
- name: Build static site
Remove the - in front of the run if you want the above step to use that run command.
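For clarity, a sketch of that step with the stray dash removed, so the run command belongs to the named step:

- name: Build static site
  run: yarn install && npm run-script build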
This is a full example. Just pay attention to the variables that you have to set, and remove the CloudFront invalidation if you don't need it. This repo: https://github.com/caiocsgomes/caiogomes.me has it implemented, building a static website with Hugo and deploying it to s3.
# Workflow name
name: S3 Deploy
on:
  workflow_dispatch:
  push:
    paths:
      - 'app/**'
      - '.github/workflows/deploy.yml'
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: sa-east-1
      BUCKET_NAME: caiogomes.me
    steps:
      - name: Install hugo
        run: sudo apt install hugo
      - name: Install aws cli
        id: install-aws-cli
        uses: unfor19/install-aws-cli-action@v1
        with:
          version: 2
          verbose: false
          arch: amd64
          rootdir: ""
          workdir: ""
      - name: Set AWS credentials
        run: export AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} && export AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          submodules: 'true'
      - name: Build
        run: cd app/ && hugo
      - name: Upload files to S3
        run: aws s3 sync app/public/ s3://${{ env.BUCKET_NAME }}/ --exact-timestamps --delete
  create-cloudfront-invalidation:
    needs: build-and-deploy
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: sa-east-1
      CLOUDFRONT_DISTRIBUTION_ID: ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}
    steps:
      - name: Install aws cli
        id: install-aws-cli
        uses: unfor19/install-aws-cli-action@v1
        with:
          version: 2
          verbose: false
          arch: amd64
          rootdir: ""
          workdir: ""
      - name: Set AWS credentials
        run: export AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} && export AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Invalidate cloudfront distribution
        run: aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"

upload single file to s3 via travis instead of directory

I am trying to upload a file to an S3 bucket via Travis; however, I am not able to do that. Here is the snippet below.
deploy:
  - provider: s3
    bucket: test-S#
    region: eu-west-1
    upload-dir: test-s2/application
    local-dir: target/latest.tar.gz
    skip_cleanup: true
The error I am getting is Not a directory - target/latest.tar.gz. I want to know how I can upload a single file instead of a whole directory to S3 with Travis. Is there any way to do that?
To upload a single file, we need to perform one extra step. Use the before_deploy stage to achieve it.
before_deploy:
  - cp -Rf YOUR_FILE_LOCATION file_name.tar.gz
Now, instead of specifying the local directory, use the glob option from the s3 provider.
Please see the example below.
deploy:
  - provider: s3
    bucket: bucket_name
    region: us-east-2
    skip_cleanup: true
    upload-dir: 's3_upload_directory'
    glob: 'file_name.tar.gz'
In the end, your travis file will look like the below.
before_deploy:
  - cp -Rf YOUR_FILE_LOCATION file_name.tar.gz
deploy:
  - provider: s3
    bucket: bucket_name
    region: us-east-2
    skip_cleanup: true
    upload-dir: 's3_upload_directory'
    glob: 'file_name.tar.gz'