Travis skipping S3 deployment because branch is not permitted - amazon-s3

I have a new issue with a Travis build. In brief, my .travis.yml file contains:
deploy:
  provider: s3
  access_key_id: mYacc3ssKeyID
  secret_access_key:
    secure: mYacc3ssKey
  bucket: my-bucket-staging
  skip_cleanup: true
  local_dir: dist/
  acl: public_read
  on:
    branch: staging
deploy:
  provider: s3
  access_key_id: mYOtheracc3ssKeyID
  secret_access_key:
    secure: mYOtheracc3ssKey
  bucket: my-bucket
  skip_cleanup: true
  local_dir: dist/
  acl: public_read
  on:
    branch: master
Until August 16, this setup worked as intended (the staging branch was deployed to the my-bucket-staging bucket, the master branch was deployed to the my-bucket bucket, and all other branches were ignored). My .travis.yml file hasn't changed since July 13, but on August 16 the staging branch stopped deploying with the message "Skipping a deployment with the s3 provider because this branch is not permitted". My last known successful deployment was on August 15.
It's also worth noting that the master deployment still works as expected; it's just the staging branch I'm having issues with.
Since I haven't changed anything on my end (the staging branch is the same branch, the .travis.yml file is the same, etc.), I'm wondering if there was a change with Travis that I missed. Does anyone know why this would (seemingly) just stop working?
I reread through Travis's documentation and didn't notice anything different, but I feel like something must have changed at some point or I'm missing something terribly obvious.

The YAML above defines the key deploy twice, so only the last occurrence takes effect; as far as your .travis.yml is concerned, there is no deployment provider with on.branch: staging at all.
If you want to define 2 deployment providers that work on different branches, you need a 2-element array under deploy:
deploy:
  - provider: s3
    access_key_id: mYacc3ssKeyID
    secret_access_key:
      secure: mYacc3ssKey
    bucket: my-bucket-staging
    skip_cleanup: true
    local_dir: dist/
    acl: public_read
    on:
      branch: staging
  - provider: s3
    access_key_id: mYOtheracc3ssKeyID
    secret_access_key:
      secure: mYOtheracc3ssKey
    bucket: my-bucket
    skip_cleanup: true
    local_dir: dist/
    acl: public_read
    on:
      branch: master
It is not clear to me how it could have been working before with your original configuration as indicated. I would be interested to see the working Travis CI build log.
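As a quick illustration of the last-wins behavior described above: most mapping parsers silently keep only the final occurrence of a duplicated key. Python's stdlib json parser (used here purely as a stand-in for a YAML loader; the values are illustrative) behaves the same way:

```shell
# Duplicate keys in a mapping: typical parsers silently keep the last value,
# which is why the staging deploy block above is discarded.
python3 -c 'import json; print(json.loads("{\"deploy\": \"staging\", \"deploy\": \"master\"}"))'
# prints {'deploy': 'master'} - the first "deploy" entry is gone
```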

Related

github actions s3 push gives various errors

I'm trying to trigger a push to an S3 bucket when I git push locally to GitHub. In the yml file I use s3 sync, and that seems to be the troublesome line. It either says the path isn't found or, if I use the --recursive flag, it says that's an unknown option. I'm also using the --delete flag.
I've tried the local path with the exact path location for the directory I want to push, and I've also tried it like ./public/ as suggested here (https://dev.to/johnkevinlosito/deploy-static-website-to-s3-using-github-actions-). Then I saw in the AWS documentation that to push a whole directory you need the --recursive flag, so I tried adding that before the --delete flag.
My yaml file looks like this
name: Upload Website
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-1
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./public/ s3://BUCKET_NAME --delete
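A note not from the original thread, just on how the CLI behaves: aws s3 sync copies directories recursively by default and has no --recursive flag (that flag belongs to aws s3 cp and aws s3 rm), and the local path is resolved against the runner's working directory. A sketch of the intended step, assuming ./public/ exists after the checkout/build steps and BUCKET_NAME is a placeholder:

```shell
# aws s3 sync is recursive by default; --recursive is only valid for cp/rm.
ls -d ./public/                                 # fails fast if the path is wrong
aws s3 sync ./public/ s3://BUCKET_NAME --delete
```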

Download from s3 into a actions workflow

I'm working on two GitHub Actions workflows:
Train a model and save it to S3 (monthly)
Download the model from S3 and use it in predictions (daily)
Using https://github.com/jakejarvis/s3-sync-action I was able to complete the first workflow. I train a model and then sync a dir, 'models', with a bucket on S3.
I had planned on using the same action to download the model for use in prediction, but it looks like this action is one-directional: upload only, no download.
I found out the hard way by creating a workflow and attempting to sync with the runner:
retrieve-model-s3:
  runs-on: ubuntu-latest
  steps:
    - name: checkout current repo
      uses: actions/checkout@master
    - name: make dir to sync with s3
      run: mkdir models
    - name: checkout s3 sync action
      uses: jakejarvis/s3-sync-action@master
      with:
        args: --follow-symlinks
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_S3_ENDPOINT: ${{ secrets.AWS_S3_ENDPOINT }}
        AWS_REGION: 'us-south' # optional: defaults to us-east-1
        SOURCE_DIR: 'models' # optional: defaults to entire repository
    - name: dir after
      run: |
        ls -l
        ls -l models
    - name: Upload model as artifact
      uses: actions/upload-artifact@v2
      with:
        name: xgb-model
        path: models/regression_model_full.rds
At the time of running, when I log in to the UI I can see the object regression_model_full.rds is indeed there; it's just not downloading. I'm still unsure if this is expected or not (the name of the action, 'sync', is what's confusing me).
For our S3 we must use the parameter AWS_S3_ENDPOINT. I found another action, AWS S3, here, but unlike the sync action I started out with, there's no option to add AWS_S3_ENDPOINT. Looking at the repo, it's also two years old apart from an update to the readme 8 months ago.
What's the 'prescribed' or conventional way to download from s3 during a workflow?
So, I had the same problem as you: I was trying to download from S3 to update a directory in GitHub.
What I learned from Actions is that if you're updating files in the repo, you must follow the same approach as if you were doing it locally, e.g. check out, make changes, push.
So for your particular workflow, you must check out your repo in the workflow using actions/checkout@master, and after you sync with a particular directory, the main step I was missing was pushing the changes back to the repo! This allowed me to update my folder daily.
Anyway, here is my script and hope you find it useful. I am using the AWS S3 action you mention towards the end.
# This is a basic workflow to help you get started with Actions
name: Fetch data.
# Controls when the workflow will run
on:
  schedule:
    # Runs "at hour 6 past every day" (see https://crontab.guru)
    - cron: '00 6 * * *'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: keithweaver/aws-s3-github-action@v1.0.0 # Verifies the recursive flag
        name: sync folder
        with:
          command: sync
          source: ${{ secrets.S3_BUCKET }}
          destination: ./data/
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: ${{ secrets.AWS_REGION }}
          flags: --delete
      - name: Commit changes
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add .
          git diff-index --quiet HEAD || git commit -m "{commit message}" -a
          git push origin main:main
Sidenote: the --delete flag keeps your local folder up to date with your S3 folder by deleting any files that are no longer present in the S3 folder.
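The git diff-index line in the workflow above is what makes the daily run safe when nothing changed: commit only runs if the staged tree differs from HEAD. A minimal sketch in a throwaway repository (file names and messages are illustrative):

```shell
# Demonstrate the "commit only if something changed" idiom in a scratch repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "Example"
echo v1 > model.txt
git add . && git commit -qm "initial"

# Nothing changed: diff-index exits 0, so the || branch (commit) is skipped.
git add .
git diff-index --quiet HEAD || git commit -qm "sync from s3"

# Now the file differs from HEAD: diff-index exits non-zero, so the commit runs.
echo v2 > model.txt
git add .
git diff-index --quiet HEAD || git commit -qm "sync from s3"
git rev-list --count HEAD   # 2 commits: initial + the conditional one
```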

Deploy a specific directory to npm with Travis-CI

I want to deploy the dist folder after a successful build, but instead it keeps deploying the whole repository.
What I want to achieve is the same effect as:
npm publish dist
Here is the related part from my .travis.yml:
deploy:
  provider: npm
  email: sa.alemdar@hotmail.com
  api_key:
    secure: MyApiKey
  skip_cleanup: true
  file_glob: true
  file: "dist/**/*"
  on:
    tags: true
    repo: salemdar/angular2-cookie
The solution is to use a before_deploy step and cd into your folder.
Just make sure you have included your package.json in that folder and set the skip_cleanup option to true.
Here is a functional solution:
language: node_js
node_js:
  - '5'
  - '4'
after_success:
  - npm run build # make a dist folder
before_deploy:
  - cd dist
deploy:
  provider: npm
  email: email@gmail.com
  skip_cleanup: true
  api_key:
    secure: ##your_secure_key
  on:
    branch: master
    tags: true
    repo: loveindent/stateful-api-mock-server

Travis CI: Uploading artifacts to S3 results in "The bucket you are attempting to access must be addressed using the specified endpoint"

I have a Travis CI build that is configured to upload the build artifacts to S3. I've followed the Travis artifacts documentation, but when the build completes I get the following error (and the S3 bucket is empty).
ERROR: failed to upload: /home/travis/build/jonburney/KingsgateMediaPlayer-Android/
app/build/outputs/apk/app-release-unsigned.apk
err: The bucket you are attempting to access must be addressed using the specified
endpoint. Please send all future requests to this endpoint.
I have tried to specify the "endpoint" option in the configuration but it was ignored. It appears to be attempting to upload the file to
https://s3.amazonaws.com/kmp-build-output/jonburney/KingsgateMediaPlayer-Android/30/30.1/app/build/outputs/apk/app-release-unsigned.apk.
Here is a copy of the relevant section from my .travis.yml file
addons:
  artifacts: true
  s3_region: "us-west-2"
artifacts:
  paths:
    - $(git ls-files -o app/build/outputs | tr "\n" ":")
Have I missed a configuration option for this scenario? Any help is appreciated!
This was fixed after an email to the Travis-CI support team and some investigation. The code in my .travis.yml file was modified to ensure that "artifacts" was only present once, like so:
addons:
  artifacts:
    s3_region: "us-west-2"
    paths:
      - $(git ls-files -o app/build/outputs | tr "\n" ":")
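As a side note (an observation of mine, not from the support email): that error is S3's standard response when a request is sent to an endpoint in the wrong region, which is consistent with the fix of nesting s3_region under artifacts. The bucket's actual region can be checked with the AWS CLI (bucket name taken from the upload path in the question):

```shell
# Ask S3 which region the bucket actually lives in; the result should match
# s3_region in .travis.yml. A null LocationConstraint means us-east-1.
aws s3api get-bucket-location --bucket kmp-build-output
```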

upload single file to s3 via travis instead of directory

I am trying to upload a file to an S3 bucket via Travis, but I am not able to do so. Here is the snippet below.
deploy:
  - provider: s3
    bucket: test-S#
    region: eu-west-1
    upload-dir: test-s2/application
    local-dir: target/latest.tar.gz
    skip_cleanup: true
The error I am getting is Not a directory - target/latest.tar.gz. I want to know how I can upload a single file, instead of a whole directory, to S3 with Travis. Is there any way to do that?
To upload a single file, we need to perform one extra step: use the before_deploy stage to achieve it.
before_deploy:
  - cp -Rf YOUR_FILE_LOCATION file_name.tar.gz
Now, instead of specifying the local directory, use the glob option of the s3 provider.
Please see the example below.
deploy:
  - provider: s3
    bucket: bucket_name
    region: us-east-2
    skip_cleanup: true
    upload-dir: 's3_upload_directory'
    glob: 'file_name.tar.gz'
In the end, your Travis file will look like this:
before_deploy:
  - cp -Rf YOUR_FILE_LOCATION file_name.tar.gz
deploy:
  - provider: s3
    bucket: bucket_name
    region: us-east-2
    skip_cleanup: true
    upload-dir: 's3_upload_directory'
    glob: 'file_name.tar.gz'