Is there a way to push only changed files to S3 using Bitbucket Pipelines? - amazon-s3

Here is an example of a pipeline; what changes are needed to complete this task?
deployment: production
script:
  - pipe: atlassian/aws-s3-deploy:0.3.8
    variables:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
      AWS_DEFAULT_REGION: 'us-east-1'
      S3_BUCKET: 'bbci-example-s3-deploy'
      LOCAL_PATH: 'artifact'
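One option, sketched here rather than taken from an answer: skip the pipe and call aws s3 sync from a script step. sync compares each local file's size and modification time against the object already in the bucket and uploads only the ones that differ, so only changed files are pushed. The image below is an assumption (any image that ships the AWS CLI works), and the bucket and local path reuse the values from the question:
- step:
    name: Sync only changed files to S3
    deployment: production
    # atlassian/pipelines-awscli is just one image with the AWS CLI; the
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY repository variables are
    # picked up from the environment automatically
    image: atlassian/pipelines-awscli
    script:
      # sync uploads only files whose size or timestamp differ from the bucket copy
      - aws s3 sync artifact/ s3://bbci-example-s3-deploy --region us-east-1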

Related

GitHub Actions S3 push gives various errors

I'm trying to trigger a push to an S3 bucket when I git push locally to GitHub. In the YAML file I use s3 sync, and that seems to be the troublesome line. It either says the path isn't found or, if I use the --recursive flag, it says that flag is unknown. I'm also using the --delete flag.
I've tried the local path with the exact path location for the directory I want to push, and I've also tried writing it as ./public/ as suggested here (https://dev.to/johnkevinlosito/deploy-static-website-to-s3-using-github-actions-). Then I saw in the AWS documentation that to push a whole directory you need the --recursive flag, so I tried adding that before the --delete flag (see the note after the workflow below).
My YAML file looks like this:
name: Upload Website
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-1
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./public/ s3://BUCKET_NAME --delete
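A note not in the original question: aws s3 sync copies a directory tree recursively by default and does not accept a --recursive flag at all, which is why the CLI reports it as unknown; --recursive belongs to aws s3 cp. Either of these steps (the bucket name is kept as the placeholder from the question) pushes the whole directory:
# sync is recursive by default, uploads only new/changed files, and --delete removes stale objects
- name: Deploy static site to S3 bucket
  run: aws s3 sync ./public/ s3://BUCKET_NAME --delete

# cp needs --recursive to copy a directory and always re-uploads everything
- name: Copy static site to S3 bucket
  run: aws s3 cp ./public/ s3://BUCKET_NAME --recursive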

Bitbucket Pipelines EKS container image change

I need to deploy the ECR image to EKS via Bitbucket Pipelines.
So I have created the step below, but I am not sure about the correct KUBECTL_COMMAND to change (set) the deployment image to the new one in a namespace in the EKS cluster:
- step:
    name: 'Deployment to Production'
    deployment: Production
    trigger: 'manual'
    script:
      - pipe: atlassian/aws-eks-kubectl-run:2.2.0
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          CLUSTER_NAME: 'xxx-zaferu-dev'
          KUBECTL_COMMAND: 'set image deployment.apps/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest'
      - echo "Deployment has been finished successfully..."
So I am looking for the correct way to write this step!
If this is not the best way to do the CI/CD deployment, I am planning to use a basic command to change the container image:
image: python:3.8
pipelines:
  default:
    - step:
        name: Update EKS deployment
        script:
          - aws eks update-kubeconfig --name <cluster-name>
          - kubectl set image deployment/<deployment-name> <container-name>=<new-image>:<tag> -n <namespace>
          - aws eks describe-cluster --name <cluster-name>
I tried to use:
KUBECTL_COMMAND: 'set image deployment.apps/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest'
but it gives an error:
INFO: Successfully updated the kube config.
Error from server (NotFound): deployments.apps "xxx-app" not found
Sorry, I found my bug: a missing namespace :)
- kubectl set image deployment/<deployment-name> <container-name>=<new-image>:<tag> -n <namespace>
I forgot to add -n, and then I realized.
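For completeness, a sketch of the working pipe step with the namespace appended via -n (the namespace name xxx-namespace is a placeholder; everything else mirrors the step above):
- step:
    name: 'Deployment to Production'
    deployment: Production
    trigger: 'manual'
    script:
      - pipe: atlassian/aws-eks-kubectl-run:2.2.0
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          CLUSTER_NAME: 'xxx-zaferu-dev'
          # -n targets the namespace the deployment actually lives in; without it,
          # kubectl looks in "default" and reports NotFound
          KUBECTL_COMMAND: 'set image deployment/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest -n xxx-namespace'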

How to pass a GitHub secret to an ASP.NET Core app deployed using GitHub Actions?

I want to use GitHub secrets inside the repository code.
The repository is an ASP.NET Core web app. I am deploying the app to Azure App Service using GitHub Actions.
I have tried declaring an env variable in the workflow like this:
# Docs for the Azure Web Apps Deploy action: https://go.microsoft.com/fwlink/?linkid=2134798
# More GitHub Actions for Azure: https://go.microsoft.com/fwlink/?linkid=2135048
name: Azure App Service - TestJuzer(Production), Build and deploy DotnetCore app
on:
  push:
    branches:
      - master
env:
  TEST_STRING: ${{ secrets.TEST_STRING }}
jobs:
  build-and-deploy:
    runs-on: windows-latest
    steps:
      # checkout the repo
      - name: 'Checkout Github Action'
        uses: actions/checkout@master
      - name: Set up Node.js '12.x'
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: Set up .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '5.0.x'
      - name: Build with dotnet
        run: dotnet build --configuration Release
        env:
          TEST_STRING: ${{ secrets.TEST_STRING }}
      - name: dotnet publish
        run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp
      - name: Run Azure webapp deploy action using publish profile credentials
        uses: azure/webapps-deploy@v2
        with:
          app-name: TestJuzer
          slot-name: Production
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_461CFB6B8D42419FA7F58944D621BA78 }}
          package: ${{env.DOTNET_ROOT}}/myapp
        env:
          TEST_STRING: ${{ secrets.TEST_STRING }}
and accessing the env variable in .NET like this:
Environment.GetEnvironmentVariable("TEST_STRING")
"TEST_STRING" is the name of the secret, but I am getting null.
I want to pass the secret as an environment variable in the workflow and use it in the deployed app.
Any help appreciated, thanks.
You set the env variable on the agent (build) machine, but here you should rather set it on the App Service, since App Service application settings are exposed to the deployed app as environment variables. You could use the Azure App Service Settings action.
Here is a sample workflow:
# .github/workflows/configureAppSettings.yml
on: [push]
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: azure/login@v1
        with:
          creds: '${{ secrets.AZURE_CREDENTIALS }}'
      - uses: azure/appservice-settings@v1
        with:
          app-name: 'my-app'
          slot-name: 'staging'  # Optional and needed only if the settings have to be configured on the specific deployment slot
          app-settings-json: '${{ secrets.APP_SETTINGS }}'
          connection-strings-json: '${{ secrets.CONNECTION_STRINGS }}'
          general-settings-json: '{"alwaysOn": "false", "webSocketsEnabled": "true"}'  # General configuration settings as key-value pairs
        id: settings
      - run: echo "The webapp-url is ${{ steps.settings.outputs.webapp-url }}"
      - run: |
          az logout
In your case it could look like this:
- name: Set Web App settings
  uses: Azure/appservice-settings@v1
  with:
    app-name: 'node-rnc'
    app-settings-json: |
      [
        {
          "name": "TEST_STRING",
          "value": "${{ secrets.TEST_STRING }}",
          "slotSetting": false
        }
      ]

How to set Cache-Control to max-age=0 with the GitHub S3 sync action

I am uploading a Vue app to my S3 bucket on every merge to master. My problem is that the invalidation of the cache does not fully work. My next step is adding metadata to the object index.html on every push. For that I wanted to ask how to add it with the GitHub action jakejarvis/s3-sync-action (https://github.com/marketplace/actions/s3-sync)?
Or do I have to use another GitHub action to accomplish that?
My workflow looks like this at the moment:
name: Build
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js 12.x
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: npm install
        run: |
          npm ci
      - name: build
        run: |
          npm run build
      - name: Deploy to S3
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_STAGING_BUCKET_NAME }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          SOURCE_DIR: 'dist'
      - name: Invalidate cloudfront
        uses: muratiger/invalidate-cloudfront-and-wait-for-completion-action@master
        env:
          DISTRIBUTION_ID: ${{ secrets.AWS_STAGING_DISTRIBUTION_ID }}
          PATHS: '/*'
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
The result I am looking for is that index.html has the additional metadata Cache-Control: max-age=0 after every deployment. At the moment I am adding it by hand in the S3 management console, which is not a good solution for me, because the metadata is gone after every deployment.
I found answers on how to do it with the aws-cli, but I don't know if it is possible to add that in my action.
aws s3 cp s3://[mybucket]/index.html s3://[mybucket]/index.html --metadata-directive REPLACE \
--expires 2034-01-01T00:00:00Z --acl public-read --cache-control max-age=0,public
PS: I know I need to write tests 🙈
In your example:
with:
  args: --acl public-read --delete
args takes effect because it's passed through as-is to the aws s3 invocation, which has $* at the end:
sh -c "aws s3 sync ${SOURCE_DIR:-.} s3://${AWS_S3_BUCKET}/${DEST_DIR} \
--profile s3-sync-action \
--no-progress \
${ENDPOINT_APPEND} $*"
If you'd also like to set --cache-control max-age=0,public then add it to the args:
with:
  args: --acl public-read --delete --cache-control max-age=0,public
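If only index.html should carry max-age=0 while the rest of the site keeps its default caching, one option, sketched here by reusing the aws s3 cp command already quoted in the question, is an extra step after the sync that rewrites just that object's metadata (ubuntu-latest runners ship the AWS CLI, and the bucket is taken from the same secret as the sync step):
- name: Set Cache-Control on index.html
  run: |
    # copy the object onto itself with REPLACE so only the metadata changes
    aws s3 cp s3://"$AWS_S3_BUCKET"/index.html s3://"$AWS_S3_BUCKET"/index.html \
      --metadata-directive REPLACE \
      --cache-control max-age=0,public \
      --acl public-read
  env:
    AWS_S3_BUCKET: ${{ secrets.AWS_STAGING_BUCKET_NAME }}
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: ${{ secrets.AWS_REGION }}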

GitHub Actions to deploy static site to AWS S3

I am trying to deploy static content to AWS S3 from GitHub Actions. I created AWS access key ID and secret environment variables, and have this as main.yml:
name: S3CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Build static site
      - run: yarn install && npm run-script build
      - name: Deploy static site to S3 bucket
        run: aws s3 sync ./dist/ s3://awss3-blog --delete
But GitHub Actions failed with this error:
Invalid Workflow File
DETAILS
every step must define a uses or run key
Usually (always, actually, in my experience) GitHub shows the invalid part of the YAML clearly. In my case it almost always complains about tabs instead of spaces, and yes, I'm very mad about it!
In your case, as @smac89 already mentioned, it is the line starting with - run, which is wrongly NOT associated with the previous - name because of that dash, so the - name became an orphan as well.
On the point of deploying to S3: I warmly suggest (as I already did somewhere else) doing it just with the CLI, without any additional action/plugin.
It is as simple as:
- name: Deploy static site to S3 bucket
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  run: aws s3 sync ./dist/ s3://awss3-blog --delete
As you can see, it is exactly the same effort from the secrets perspective, but simpler, independent, cleaner, etc. BTW, the region is not required and may safely be omitted.
It has to do with this line:
- run: yarn install && npm run-script build
But it is specifically complaining about this step:
- name: Build static site
Remove the - in front of the run if you want the above step to use that run command.
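Applied to the workflow above, the fixed step would read (only the stray dash changes, so run belongs to the Build static site step):
- name: Build static site
  run: yarn install && npm run-script build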
This is a full example. Just pay attention to the variables that you have to set, and remove the CloudFront invalidation if you don't need it. This repo: https://github.com/caiocsgomes/caiogomes.me has it implemented, building a static website with Hugo and deploying to S3.
# Workflow name
name: S3 Deploy
on:
  workflow_dispatch:
  push:
    paths:
      - 'app/**'
      - '.github/workflows/deploy.yml'
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: sa-east-1
      BUCKET_NAME: caiogomes.me
    steps:
      - name: Install hugo
        run: sudo apt install hugo
      - name: Install aws cli
        id: install-aws-cli
        uses: unfor19/install-aws-cli-action@v1
        with:
          version: 2
          verbose: false
          arch: amd64
          rootdir: ""
          workdir: ""
      - name: Set AWS credentials
        run: export AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} && export AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          submodules: 'true'
      - name: Build
        run: cd app/ && hugo
      - name: Upload files to S3
        run: aws s3 sync app/public/ s3://${{ env.BUCKET_NAME }}/ --exact-timestamps --delete
  create-cloudfront-invalidation:
    needs: build-and-deploy
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: sa-east-1
      CLOUDFRONT_DISTRIBUTION_ID: ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}
    steps:
      - name: Install aws cli
        id: install-aws-cli
        uses: unfor19/install-aws-cli-action@v1
        with:
          version: 2
          verbose: false
          arch: amd64
          rootdir: ""
          workdir: ""
      - name: Set AWS credentials
        run: export AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} && export AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Invalidate cloudfront distribution
        run: aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"
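A side note on this example, not part of the original answer: each run: step starts a fresh shell, so the export in the "Set AWS credentials" steps does not carry over to later steps. The credentials the CLI actually uses come from the job-level env: block, so those export steps can most likely be dropped. A trimmed sketch of the same job:
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      # every run: step in this job sees these variables, so the AWS CLI
      # picks them up without any explicit export
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: sa-east-1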