Upload a single file to S3 via Travis instead of a directory - amazon-s3

I am trying to upload a file to an S3 bucket via Travis, but I am not able to do so. Here is the snippet:
deploy:
  - provider: s3
    bucket: test-S#
    region: eu-west-1
    upload-dir: test-s2/application
    local-dir: target/latest.tar.gz
    skip_cleanup: true
The error I am getting is Not a directory - target/latest.tar.gz. How can I upload a single file instead of a whole directory to S3 with Travis? Is there any way to do that?

To upload a single file, we need to perform one extra step: use the before_deploy stage to copy the file into the working directory.
before_deploy:
  - cp -Rf YOUR_FILE_LOCATION file_name.tar.gz
Now, instead of specifying the local directory, use the glob option of the s3 provider. See the example below.
deploy:
  - provider: s3
    bucket: bucket_name
    region: us-east-2
    skip_cleanup: true
    upload-dir: 's3_upload_directory'
    glob: 'file_name.tar.gz'
In the end, your .travis.yml file will look like this:
before_deploy:
  - cp -Rf YOUR_FILE_LOCATION file_name.tar.gz
deploy:
  - provider: s3
    bucket: bucket_name
    region: us-east-2
    skip_cleanup: true
    upload-dir: 's3_upload_directory'
    glob: 'file_name.tar.gz'

Related

GitHub Actions S3 push gives various errors

I'm trying to trigger a push to an S3 bucket when I git push from local to GitHub. In the YAML file I use s3 sync, and that seems to be the troublesome line. It either says the path isn't found or, if I use the --recursive flag, it says that's an unknown option. I'm also using the --delete flag.
I've tried the local path with the exact path location for the directory I want to push, and I've also tried it as ./public/ as suggested here (https://dev.to/johnkevinlosito/deploy-static-website-to-s3-using-github-actions-). Then I saw in the AWS documents that to push a whole directory you need the --recursive flag, so I tried adding that before the --delete flag.
My YAML file looks like this:
name: Upload Website
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-1
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./public/ s3://BUCKET_NAME --delete
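As a side note (not from the original post): aws s3 sync copies a directory tree recursively by default, so it does not take a --recursive flag, which would explain the "unknown" error. To check whether ./public/ actually exists in the checked-out workspace before syncing, a debugging step like the following could be added; this is only a sketch, not part of the original workflow.
      - name: Check that ./public exists
        run: ls -la ./public/   # fails fast with a clear error if the directory is missing before s3 sync runs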

Dotnet Core publish code push to S3 as zip from GitLab CI/CD

How can I zip the artifacts before copying them to the S3 bucket? This is needed because Elastic Beanstalk requires a zip file to update the application.
I want to deploy the dotnet publish output to Beanstalk. I am using GitLab CI/CD to trigger the build when new changes are pushed to the GitLab repo.
In my .gitlab-ci.yml file, this is what I am doing:
1. Build and publish the code using dotnet publish
2. Copy the published folder artifact to the S3 bucket as a zip
3. Create a new Beanstalk application version
4. Update the Beanstalk environment to reflect the new changes
I was able to perform all the steps except step 3. Can anyone please help me with how I can zip the published folder and copy that zip to the S3 bucket? Please find my relevant code below:
build:
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - dotnet publish -c Release -o /builds/maskkk/samplewebapplication/publish/
  stage: build
  artifacts:
    paths:
      - /builds/maskkk/samplewebapplication/publish/
deployFile:
  image: python:latest
  stage: deploy
  script:
    - pip install awscli
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set region us-east-2
    - aws s3 cp --recursive /builds/maskkk/samplewebapplication/publish/ s3://elasticbeanstalk-us-east-2-654654456/JBGood-$CI_PIPELINE_ID
    - aws elasticbeanstalk create-application-version --application-name Test5 --version-label JBGood-$CI_PIPELINE_ID --source-bundle S3Bucket=elasticbeanstalk-us-east-2-654654456,S3Key=JBGood-$CI_PIPELINE_ID
    - aws elasticbeanstalk update-environment --application-name Test5 --environment-name Test5-env --version-label JBGood-$CI_PIPELINE_ID
I got the answer to this issue: we can simply run
zip -r ../published.zip *
This will create a zip file, which can then be uploaded to S3.
Please let me know if there is any better solution to this.
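For completeness, here is a sketch of how that zip step could be folded into the deployFile job above; the zip package install and the exact paths are assumptions carried over from the snippets in this question, not a tested configuration.
deployFile:
  image: python:latest
  stage: deploy
  script:
    - apt-get update && apt-get install -y zip   # zip is usually not preinstalled in the python image (assumption)
    - pip install awscli
    # aws configure steps from the original job go here
    - cd /builds/maskkk/samplewebapplication/publish/
    - zip -r ../published.zip *                  # bundle the publish output into a single archive
    - aws s3 cp ../published.zip s3://elasticbeanstalk-us-east-2-654654456/JBGood-$CI_PIPELINE_ID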

How to upload a directory to an AWS S3 bucket along with a KMS ID through the CLI?

I want to upload a directory (a folder consisting of other folders and .txt files) to a folder (partition) in a specific S3 bucket, along with a given KMS ID, via the CLI. I found the following command, which uploads a jar file to an S3 bucket.
The command I found for uploading a jar:
aws s3 sync /?? s3://???-??-dev-us-east-2-813426848798/build/tmp/snapshot --sse aws:kms --sse-kms-key-id alias/nbs/dev/data --delete --region us-east-2 --exclude "*" --include "*.?????"
Suppose:
Location (Bucket Name with folder name) - "s3://abc-app-us-east-2-12345678/tmp"
KMS-id - https://us-east-2.console.aws.amazon.com/kms/home?region=us-east-2#/kms/keys/aa11-123aa-45/
Directory to be uploaded - myDirectory
And I want to know:
Whether the same command can be used to upload a directory with a bunch of files and folders in it?
If so, how should this command be changed?
The cp command works this way:
aws s3 cp ./localFolder s3://awsexamplebucket/abc --recursive --sse aws:kms --sse-kms-key-id a1b2c3d4-e5f6-7890-g1h2-123456789abc
I haven't tried the sync command with KMS, but the way you use sync is:
aws s3 sync ./localFolder s3://awsexamplebucket/remotefolder
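For what it's worth, sync accepts the same server-side encryption options as cp in recent AWS CLI versions, so the directory upload with a KMS key would presumably look like the line below (untested here; it reuses the placeholder bucket and folder from this question and the example key ID from the cp command above):
# sync uploads the whole directory tree; --sse/--sse-kms-key-id apply KMS encryption to each uploaded object
aws s3 sync ./myDirectory s3://abc-app-us-east-2-12345678/tmp --sse aws:kms --sse-kms-key-id a1b2c3d4-e5f6-7890-g1h2-123456789abc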

CircleCI Deploy to AWS S3: What is the path to my files?

My deployment fails in CircleCI.
In my config file, I have the following:
deploy:
  docker:
    - image: circleci/python:2.7-jessie
  working_directory: ~/circleci-docs
  steps:
    - run:
        name: Install awscli
        command: sudo pip install awscli
    - run:
        name: Deploy to s3
        command: aws s3 sync <filepath> s3://BUCKET-NAME/ --delete
It fails on the deploy step, and I get the error:
The user provided path does not exist
I have tried a few different file paths:
/
~/applicationname
~/working-directoryname
~/
But they all give the same error.
Then I tried using the working_directory name and also /home/circleci/working_directory_name.
Both seem to succeed, yet no files appear in the bucket.
What is the path that I should be using for the filepath?
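One debugging approach (not from the original question): add a listing step before the sync so the job output shows exactly what exists at the job's working directory, and pick the sync path from that output. A minimal sketch:
    - run:
        name: Show working directory contents
        command: pwd && ls -la   # prints the path CircleCI is actually in and the files available to sync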

Travis skipping S3 deployment because branch is not permitted

I have a new issue with a Travis build. In brief, my .travis.yml file contains:
deploy:
  provider: s3
  access_key_id: mYacc3ssKeyID
  secret_access_key:
    secure: mYacc3ssKey
  bucket: my-bucket-staging
  skip_cleanup: true
  local_dir: dist/
  acl: public_read
  on:
    branch: staging
deploy:
  provider: s3
  access_key_id: mYOtheracc3ssKeyID
  secret_access_key:
    secure: mYOtheracc3ssKey
  bucket: my-bucket
  skip_cleanup: true
  local_dir: dist/
  acl: public_read
  on:
    branch: master
Until August 16, this setup worked as intended (the staging branch was deployed to the my-bucket-staging bucket, the master branch was deployed to the my-bucket bucket, and all other branches were ignored). My .travis.yml file hasn't changed since July 13, but on August 16 the staging branch stopped deploying with the message Skipping a deployment with the s3 provider because this branch is not permitted. My last known successful deployment was on August 15.
It's also worth noting that the master deployment still works as expected, it's just the staging branch I'm having issues with.
Since I haven't changed anything on my end (the staging branch is the same branch, .travis.yml file is the same, etc), I'm wondering if there was a change with Travis that I missed? Does anyone know why this would (seemingly) just stop working?
I reread through Travis's documentation and didn't notice anything different, but I feel like something must have changed at some point or I'm missing something terribly obvious.
The above YAML segment defines the key deploy twice, so only the last one takes effect; in other words, as far as your .travis.yml is concerned, there is no deployment provider defined with on.branch: staging.
If you want to define two deployment providers that work on different branches, you need a two-element array under deploy:
deploy:
  - provider: s3
    access_key_id: mYacc3ssKeyID
    secret_access_key:
      secure: mYacc3ssKey
    bucket: my-bucket-staging
    skip_cleanup: true
    local_dir: dist/
    acl: public_read
    on:
      branch: staging
  - provider: s3
    access_key_id: mYOtheracc3ssKeyID
    secret_access_key:
      secure: mYOtheracc3ssKey
    bucket: my-bucket
    skip_cleanup: true
    local_dir: dist/
    acl: public_read
    on:
      branch: master
It is not clear to me how it could have been working before with your original configuration as indicated. I would be interested to see the working Travis CI build log.