(gitlab to aws s3): did not find expected key while parsing a block mapping at line 1 column 1 - amazon-s3

This GitLab CI configuration is invalid: (): did not find expected key while parsing a block mapping at line 1 column 1.
I have the .gitlab-ci.yml file below, which shows an error in the pipeline:
deploy:
  image:
  stage: deploy
    name: banst/awscli
    entrypoint: [""]
  script:
    - aws configure set region us-east-1
    - aws s3 sync . s3://$S3_BUCKET/
  only:
    main
What I have done so far:
- registered a GitLab runner, which is up and running
- added the S3 bucket name, AWS access key ID, and AWS secret access key as CI/CD variables
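Judging by the layout above, the parser is most likely failing because stage: deploy interrupts the image: mapping, leaving name: and entrypoint: at an indentation level that belongs to no key. A corrected layout would be the following (a minimal sketch reusing the question's own values):

deploy:
  image:
    name: banst/awscli
    entrypoint: [""]
  stage: deploy
  script:
    - aws configure set region us-east-1
    - aws s3 sync . s3://$S3_BUCKET/
  only:
    - main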

Related

Bitbucket Pipelines EKS Container Image Change

I need to deploy an ECR image to EKS via Bitbucket Pipelines, so I have created the step below. But I am not sure about the correct KUBECTL_COMMAND to change (set) the deployment image to the new one in a namespace in the EKS cluster:
- step:
    name: 'Deployment to Production'
    deployment: Production
    trigger: 'manual'
    script:
      - pipe: atlassian/aws-eks-kubectl-run:2.2.0
        variables:
          AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
          AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          CLUSTER_NAME: 'xxx-zaferu-dev'
          KUBECTL_COMMAND: 'set image deployment.apps/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest'
      - echo "Deployment has been finished successfully..."
So I am looking for the correct way to write this step! If this is not the best approach for CI/CD deployment, I am planning to use a basic command to change the container image:
image: python:3.8

pipelines:
  default:
    - step:
        name: Update EKS deployment
        script:
          - aws eks update-kubeconfig --name <cluster-name>
          - kubectl set image deployment/<deployment-name> <container-name>=<new-image>:<tag> -n <namespace>
          - aws eks describe-cluster --name <cluster-name>
I tried to use:
KUBECTL_COMMAND: 'set image deployment.apps/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest'
but it gives an error:
INFO: Successfully updated the kube config.
Error from server (NotFound): deployments.apps "xxx-app" not found
Sorry, I found my bug: a missing namespace :)
- kubectl set image deployment/<deployment-name> <container-name>=<new-image>:<tag> -n <namespace>
I had forgotten to add -n, and then I realized it.
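With the namespace appended, the pipe step from the question would presumably become the following (a sketch assuming the pipe passes the full KUBECTL_COMMAND string to kubectl; <namespace> is a placeholder):

- pipe: atlassian/aws-eks-kubectl-run:2.2.0
  variables:
    AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
    CLUSTER_NAME: 'xxx-zaferu-dev'
    KUBECTL_COMMAND: 'set image deployment.apps/xxx-dev xxx-application=123456789.dkr.ecr.eu-west-1.amazonaws.com/ci-cd-test:latest -n <namespace>'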

.gitlab-ci.yml pipeline runs only on one branch

I have a .gitlab-ci.yml file. When I push to the stage branch it runs the stage commands (only: stage), but when I merge to main it still runs only the stage commands.
What am I missing?
variables:
  DOCKER_REGISTRY: 036470204880.dkr.ecr.us-east-1.amazonaws.com
  AWS_DEFAULT_REGION: us-east-1
  APP_NAME: apiv6
  APP_NAME_STAGE: apiv6-test
  DOCKER_HOST: tcp://docker:2375

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
    - aws ecs update-service --cluster apiv6 --service apiv6 --force-new-deployment
  only:
    - main

publish:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
    - aws --version
    - docker --version
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME_STAGE:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME_STAGE:latest
    - aws ecs update-service --cluster apiv6-test --service apiv6-test-service --force-new-deployment
  only:
    - stage
Itamar, I believe this is a YAML limitation. See this GitLab issue as a reference.
The problem is that you have two jobs with the same name, so when the YAML file is parsed, the second job actually overrides the first one.
Also, from the official GitLab documentation:
Use unique names for your jobs. If multiple jobs have the same name, only one is added to the pipeline, and it’s difficult to predict which one is chosen.
Please try renaming one of your jobs and test it again.
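For example, the two jobs could be given unique names while sharing their common configuration via GitLab's extends keyword (a sketch; the names publish-main and publish-stage are illustrative, and the variables block from the question is assumed to still be present):

.publish-template:
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker

publish-main:
  extends: .publish-template
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:latest
    - aws ecs update-service --cluster apiv6 --service apiv6 --force-new-deployment
  only:
    - main

publish-stage:
  extends: .publish-template
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME_STAGE:latest .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME_STAGE:latest
    - aws ecs update-service --cluster apiv6-test --service apiv6-test-service --force-new-deployment
  only:
    - stage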

Persistent Bitbucket pipeline build artifacts greater than 14 days

I have a pipeline which loses build artifacts after 14 days. That is, after 14 days, without S3 or Artifactory integration, the pipeline loses the "Deploy" button functionality: the button becomes greyed out because the build artifact has been removed. I understand this is intentional on Atlassian's part to reduce costs (details in the link below).
Please check the last section of this page, "Artifact downloads and Expiry": https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
If you need artifact storage for longer than 14 days (or more than 1 GB), we recommend using your own storage solution, like Amazon S3 or a hosted artifact repository like JFrog Artifactory.
Question:
Is anyone able to provide advice or sample code on how to approach Bitbucket Pipelines integration with Artifactory (or S3) in order to retain artifacts? Is the Artifactory generic upload/download pipe approach the only way, or is the quote above hinting at a more native Bitbucket "repository setting" that provides integration with S3 or Artifactory? https://www.jfrog.com/confluence/display/RTF6X/Bitbucket+Pipelines+Artifactory+Pipes
Bitbucket gives an example of linking to an S3 bucket on their site:
https://support.atlassian.com/bitbucket-cloud/docs/publish-and-link-your-build-artifacts/
The key is Step 4, where you link the artefact to the build.
However, the example doesn't actually create an artefact that is linked to S3; rather, it adds a build status with a description that links to the uploaded items in S3. To use these in further steps you would then have to download the artefacts.
This can be done using the AWS CLI and an image that has it installed, for example amazon/aws-sam-cli-build-image-nodejs14.x (SAM was required in my case).
The following is an example that:
- Creates an artefact (a txt file) and uploads it to an AWS S3 bucket
- Creates a "link" as a build status against the commit that triggered the pipeline, as per Atlassian's suggestion (this is just added for reference after the 14 days... meh)
- Carries out a "deployment", whereby the artefact is downloaded from AWS S3. In this stage I also set the downloaded S3 artefact as a Bitbucket artefact; I mean, why not... it may expire after 14 days, but if I've just redeployed then I may want it available for another 14 days...
image: amazon/aws-sam-cli-build-image-nodejs14.x

pipelines:
  branches:
    main:
      - step:
          name: Create artefact
          script:
            - mkdir -p artefacts
            - echo "This is an artefact file..." > artefacts/buildinfo.txt
            - echo "Generating Build Number: ${BITBUCKET_BUILD_NUMBER}" >> artefacts/buildinfo.txt
            - echo "Git Commit Hash: ${BITBUCKET_COMMIT}" >> artefacts/buildinfo.txt
            - aws s3api put-object --bucket bitbucket-artefact-test --key ${BITBUCKET_BUILD_NUMBER}/buildinfo.txt --body artefacts/buildinfo.txt
      - step:
          name: Link artefact to AWS S3
          script:
            - export S3_URL="https://bitbucket-artefact-test.s3.eu-west-2.amazonaws.com/${BITBUCKET_BUILD_NUMBER}/buildinfo.txt"
            - export BUILD_STATUS="{\"key\":\"doc\", \"state\":\"SUCCESSFUL\", \"name\":\"DeployArtefact\", \"url\":\"${S3_URL}\"}"
            - curl -H "Content-Type:application/json" -X POST --user "${BB_AUTH_STRING}" -d "${BUILD_STATUS}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/commit/${BITBUCKET_COMMIT}/statuses/build"
      - step:
          name: Test - Deployment
          deployment: Test
          script:
            - mkdir artifacts
            - aws s3api get-object --bucket bitbucket-artefact-test --key ${BITBUCKET_BUILD_NUMBER}/buildinfo.txt artifacts/buildinfo.txt
            - cat artifacts/buildinfo.txt
          artifacts:
            - artifacts/**
Note:
I've got the following secrets/variables against the repository:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
BB_AUTH_STRING
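For context, BB_AUTH_STRING is assumed here to be a Bitbucket username and app password joined as <username>:<app_password>, which is the form curl's --user flag expects. A quick way to verify the pair is valid might be:

# Hypothetical sanity check; <username>:<app_password> is a placeholder pair
export BB_AUTH_STRING="<username>:<app_password>"
curl --user "${BB_AUTH_STRING}" https://api.bitbucket.org/2.0/user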

Dotnet Core publish output pushed to S3 as a zip from GitLab CI/CD

How can I zip the artifacts before copying them to the S3 bucket? This is needed because Elastic Beanstalk requires a zip file for updates.
I wanted to deploy the dotnet publish output to Beanstalk. I am using GitLab CI/CD to trigger the build when new changes are pushed to the GitLab repo.
In my .gitlab-ci.yml file, what I am doing is:
1. build and publish the code using dotnet publish
2. copy the published folder artifact to the S3 bucket as a zip
3. create a new Beanstalk application version
4. update the Beanstalk environment to reflect the new changes
Here I was able to perform all the steps except step 2. Can anyone please help me with how to zip the published folder and copy that zip to the S3 bucket? Please find my relevant code below:
build:
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - dotnet publish -c Release -o /builds/maskkk/samplewebapplication/publish/
  stage: build
  artifacts:
    paths:
      - /builds/maskkk/samplewebapplication/publish/

deployFile:
  image: python:latest
  stage: deploy
  script:
    - pip install awscli
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set region us-east-2
    - aws s3 cp --recursive /builds/maskkk/samplewebapplication/publish/ s3://elasticbeanstalk-us-east-2-654654456/JBGood-$CI_PIPELINE_ID
    - aws elasticbeanstalk create-application-version --application-name Test5 --version-label JBGood-$CI_PIPELINE_ID --source-bundle S3Bucket=elasticbeanstalk-us-east-2-654654456,S3Key=JBGood-$CI_PIPELINE_ID
    - aws elasticbeanstalk update-environment --application-name Test5 --environment-name Test5-env --version-label JBGood-$CI_PIPELINE_ID
I found the answer to this issue: we can simply run
zip -r ../published.zip *
from the publish folder. This creates a zip file, which can then be uploaded to S3.
Please let me know if there is a better solution to this.
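Folding that into the question's deploy job might look like this (a sketch based on the paths and names above; the .zip key suffix and the zip install step are my assumptions):

deployFile:
  image: python:latest
  stage: deploy
  script:
    - pip install awscli
    - apt-get update && apt-get install -y zip   # assuming zip is not preinstalled in python:latest
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set region us-east-2
    # create published.zip one level above the publish folder, then upload the single zip
    - cd /builds/maskkk/samplewebapplication/publish/ && zip -r ../published.zip * && cd -
    - aws s3 cp /builds/maskkk/samplewebapplication/published.zip s3://elasticbeanstalk-us-east-2-654654456/JBGood-$CI_PIPELINE_ID.zip
    - aws elasticbeanstalk create-application-version --application-name Test5 --version-label JBGood-$CI_PIPELINE_ID --source-bundle S3Bucket=elasticbeanstalk-us-east-2-654654456,S3Key=JBGood-$CI_PIPELINE_ID.zip
    - aws elasticbeanstalk update-environment --application-name Test5 --environment-name Test5-env --version-label JBGood-$CI_PIPELINE_ID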

Can't push to gcr with drone plugins/docker

I have been trying out Drone and have been unsuccessful in pushing the Docker image to GCR.
pipeline:
  build:
    image: plugins/docker
    dockerfile: docker/Dockerfile
    registry: gcr.io
    repo: gcr.io/<REPO>
    tags: "${DRONE_COMMIT_SHA}"
    insecure: true
    debug: true
The following is the error message:
denied: Unable to access the repository; please check that you have permission to access it.
I have been trying to follow the documentation, but I always get this error. Need help. Thanks.
The first step is to store your credentials (we call them secrets) in Drone. You can do this using the command-line utility or the user interface.
drone secret add <github_repo> --name=docker_username --value=<username>
drone secret add <github_repo> --name=docker_password --value=<password>
Once you have stored your credentials, you must update your YAML configuration file to request access to the named secrets using the secrets attribute (this seems to be missing in your example). Example configuration:
pipeline:
  build:
    image: plugins/docker
    dockerfile: docker/Dockerfile
    registry: gcr.io
    repo: gcr.io/<REPO>
    secrets: [ docker_username, docker_password ]
For reference, please see the following secrets documentation, which uses the Docker plugin as the primary example: http://docs.drone.io/manage-secrets/
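One GCR-specific detail worth adding (based on standard GCR authentication, not on the answer above): gcr.io accepts the literal username _json_key with the contents of a GCP service-account JSON key as the password, so the secrets could be populated like this:

# key.json is assumed to be a service-account key with push access to the registry
drone secret add <github_repo> --name=docker_username --value=_json_key
drone secret add <github_repo> --name=docker_password --value="$(cat key.json)"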