I am completing the Cloud DevOps Nanodegree program with Udacity.
I am working on the third project, Give Your Application Auto-Deploy Superpowers.
I am stuck on the deploy-backend job because I am getting random characters in my CircleCI pipeline output.
This is the end result in the CircleCI pipeline:
Here is the deploy-backend job in my config.yml:
deploy-backend:
docker:
- image: python:3.11-rc-alpine
steps:
- checkout
- add_ssh_keys:
fingerprints: [ 'xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx' ]
- attach_workspace:
at: ~/
- run:
name: Install dependencies
command: |
apk add --update ansible
apk add --update tar gzip nodejs npm
apk add --update --no-cache python3 py3-pip
/usr/local/bin/python -m pip install --upgrade pip
pip install awscli
- run:
name: Configure Env File
command: |
echo NODE_ENV=production >> backend/.env
echo VERSION=1 >> backend/.env
echo TYPEORM_CONNECTION=postgres >> backend/.env
echo TYPEORM_MIGRATIONS_DIR=./src/migrations >> backend/.env
echo TYPEORM_ENTITIES=./src/modules/domain/**/*.entity.ts >> backend/.env
echo TYPEORM_MIGRATIONS=./src/migrations/*.ts >> backend/.env
echo TYPEORM_HOST=$TYPEORM_HOST >> "backend/.env"
echo TYPEORM_PORT=$TYPEORM_PORT >> "backend/.env"
echo TYPEORM_USERNAME=$TYPEORM_USERNAME >> "backend/.env"
echo TYPEORM_PASSWORD=$TYPEORM_PASSWORD >> "backend/.env"
echo TYPEORM_DATABASE=$TYPEORM_DATABASE >> "backend/.env"
cat backend/.env
- run:
name: Deploy backend
command: |
cd backend
cd ..
tar -C backend -czvf artifact.tar.gz .
ls
mkdir -p /root/project
mv artifact.tar.gz /root/project/artifact.tar.gz
cd .circleci/ansible
echo "Contents of the inventory.txt file is -------"
cat inventory.txt
ansible-playbook -i inventory.txt deploy-backend.yml
- destroy-environment
- revert-migrations
These are my deploy tasks:
---
- name: "update apt packages."
become: true
apt:
update_cache: yes
- name: "upgrade packages"
become: true
apt:
upgrade: yes
- name: "install dependencies."
become: true
apt:
name: ["nodejs", "npm"]
state: latest
update_cache: yes
- name: "install pm2"
become: true
npm:
name: pm2
global: yes
production: yes
state: present
- name: Creates directory
file:
path: /home/ubuntu/backend
state: directory
- name: Copy artifact.tar.gz file
template:
src: /root/project/artifact.tar.gz
dest: /home/ubuntu/backend
- name: Uncompress Backend
shell: |
cd /home/ubuntu/backend
tar xvzf artifact.tar.gz -C .
ls -la
- name: Build
become: true
shell: |
cd /home/ubuntu/backend
npm install
npm run build
- name: Start PM2
shell: |
cd /home/ubuntu/backend
pm2 start npm --name backend -- start
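A hedged observation on the deploy tasks above: Ansible's template module renders its src file through Jinja2 as text, so pointing it at a binary artifact.tar.gz can corrupt the archive, and the resulting error output often contains raw bytes, which would match the random characters in the pipeline. The copy module is the usual choice for moving a binary file as-is; a minimal sketch of just that one task (everything else unchanged):

- name: Copy artifact.tar.gz file
  copy:
    # copy transfers the file verbatim instead of rendering it as a Jinja2 template
    src: /root/project/artifact.tar.gz
    dest: /home/ubuntu/backend/artifact.tar.gz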
Related
I'm using GitLab CI (13.9) to test and build a React project.
On the branch develop everything works fine.
On the branch validation, the build job can't install a private package:
[2/5] Resolving packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/@company%2fname-of-my-package: Not found".
info If you think this is a bug, please open a bug report with the information provided in "/builds/code/conference/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
error Command failed with exit code 1.
The .gitlab-ci.yml is the same for both branches:
variables:
DOCKER_DRIVER: overlay2
GIT_SSL_NO_VERIFY: 'true'
DOCKER_TLS_CERTDIR: ''
stages:
- install
- test
- build
install_dependencies:
image: node:lts-alpine
stage: install
before_script:
- apk update && apk add git openssh-client
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
- echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
- echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}'>.npmrc
artifacts:
expire_in: 1 hour
paths:
- node_modules/
script:
- yarn install
test-job:
image: node:lts-alpine
stage: test
script:
- yarn run test
build-job:
image: node:lts-alpine
stage: build
only:
- develop
- validation
artifacts:
expire_in: 1 hour
paths:
- dist/
script:
- yarn run build
The package.json is the same for both branches.
Both branches are protected.
develop is the project default branch.
There is no error log available at /builds/code/conference/yarn-error.log.
There is no branch-specific variable configuration in .gitlab-ci.yml for develop.
What could cause this to fail?
I managed to make my CI pass on the validation branch by copying the ssh/npmrc configuration into my build-job:
variables:
DOCKER_DRIVER: overlay2
GIT_SSL_NO_VERIFY: 'true'
DOCKER_TLS_CERTDIR: ''
stages:
- install
- test
- build
- docker-build-push
install_dependencies:
image: node:lts-alpine
stage: install
before_script:
- apk update && apk add git openssh-client
# run ssh agent
- eval $(ssh-agent -s)
# add ssh key stored in gitlab ci variables
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
- echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
- echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}'>.npmrc
artifacts:
expire_in: 1 hour
paths:
- node_modules/
- .npmrc
script:
- yarn install
test-job:
image: node:lts-alpine
stage: test
script:
- yarn run test
build-job:
image: node:lts-alpine
stage: build
only:
- develop
- validation
artifacts:
expire_in: 1 hour
paths:
- dist/
before_script:
- apk update && apk add git openssh-client
# run ssh agent
- eval $(ssh-agent -s)
# add ssh key stored in gitlab ci variables
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
- mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
- echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
- echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}'>.npmrc
script:
- yarn run build
docker-job:
services:
- docker:dind
image: docker:18.09.9
stage: docker-build-push
only:
- develop
- validation
before_script:
- apk update && apk add git rsync curl jq
- docker login -u gitlab-ci-token -p ${PUBLISH_KEY} registry.apizee.com
script:
- docker login -u gitlab-ci-token -p ${PUBLISH_KEY} registry.apizee.com
- /bin/sh docker/init.sh
- docker push registry.apizee.com/docker/apizee-rancher/conf4:${CI_COMMIT_REF_NAME}
- '[[ -f "docker/deploy.sh" ]] && sh docker/deploy.sh "${CI_COMMIT_REF_NAME}"'
So could there be a default cache/artifacts setting that applies to the default branch but not to other branches?
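If duplicating the before_script across jobs feels brittle, GitLab CI also supports YAML anchors, so the SSH/npm setup could be defined once and reused. A minimal sketch, assuming the same SSH_PRIVATE_KEY, SSH_KNOWN_HOSTS and NPM_TOKEN variables as above:

.ssh_npm_setup: &ssh_npm_setup
  # shared setup: SSH agent plus npm registry auth, written fresh in every job
  - apk update && apk add git openssh-client
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
  - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
  - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}' > .npmrc

install_dependencies:
  before_script: *ssh_npm_setup

build-job:
  before_script: *ssh_npm_setup

Whether or not an anchor is used, the underlying point stands: each job runs in a fresh environment, so any job that needs the registry credentials has to either recreate .npmrc or receive it via artifacts.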
The following example workflow runs without issues:
on: [push]
jobs:
create_release:
runs-on: ubuntu-latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Create release
run: hub release create -m "$(date)" "v$(date +%s)"
However, some of my CI/CD code needs to run in a container:
on: [push]
jobs:
create_release:
runs-on: ubuntu-latest
container:
image: ubuntu:latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Install dependencies
run: apt update && apt install -y git hub
- name: Checkout
uses: actions/checkout@v2
- name: Create release
run: hub release create -m "$(date)" "v$(date +%s)"
Now, hub suddenly doesn't work anymore:
Run hub release create -m "$(date)" "v$(date +%s)"
hub release create -m "$(date)" "v$(date +%s)"
shell: sh -e {0}
env:
GITHUB_TOKEN: ***
Error creating release: Unauthorized (HTTP 401)
Bad credentials
Error: Process completed with exit code 1.
The issue was actually mismatched versions: hub on the native ubuntu-latest GitHub Actions runner was the (as of now) most recent version, 2.14.2, while apt install in the ubuntu:latest container installed only version 2.7.0 (from Dec 28, 2018!).
The solution is to install the latest hub binary directly from their GitHub releases page instead of using apt:
on: [push]
jobs:
create_release:
runs-on: ubuntu-latest
container:
image: ubuntu:latest
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Install dependencies
run: |
apt update && apt install -y git wget
url="$(wget -qO- https://api.github.com/repos/github/hub/releases/latest | tr '"' '\n' | grep '.*/download/.*/hub-linux-amd64-.*.tgz')"
wget -qO- "$url" | tar -xzvf- -C /usr/bin --strip-components=2 --wildcards "*/bin/hub"
- name: Checkout
uses: actions/checkout@v2
- name: Create release
run: hub release create -m "$(date)" "v$(date +%s)"
After adding sudo, it works for me.
- name: Install Deps
run: |
sudo apt-get update 2> /dev/null || true
sudo apt-get install -y git
sudo apt-get install -y wget
url="$(sudo wget -qO- https://api.github.com/repos/github/hub/releases/latest | tr '"' '\n' | grep '.*/download/.*/hub-linux-amd64-.*.tgz')"
sudo wget -qO- "$url" | sudo tar -xzvf- -C /usr/bin --strip-components=2 --wildcards "*/bin/hub"
I have been trying to build my project and deploy it to a remote server using a GitLab CI runner, following this tutorial as a reference:
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-continuous-deployment-pipeline-with-gitlab-ci-cd-on-ubuntu-18-04
After running the pipeline, the publish stage fails with an error about the Docker tag:
$ docker build -t $TAG_COMMIT -t $TAG_LATEST .
invalid argument "/patch-9:64a25b49" for "-t, --tag" flag: invalid reference format
I have tried writing the docker build tags in different formats but still could not work out what causes the error.
I have tried changing the tagging to
TAG_LATEST: ${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_NAME}:latest
TAG_COMMIT: ${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_NAME}:${CI_COMMIT_SHORT_SHA}
but I still get the error:
$ cd $GOPATH/src/$REPO/$NAMESPACE/$PROJECT
$ docker build -t $TAG_COMMIT -t $TAG_LATEST .
invalid argument "/patch-10:fbf4855b" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
Can anyone help me solve this problem?
My .gitlab-ci.yml file looks like this:
image: golang:1.15.3
variables:
REPO: github.com
NAMESPACE: daniel
PROJECT: danapp
TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
before_script:
- mkdir -p $GOPATH/src/$REPO/$NAMESPACE/$PROJECT
- cp -r -v $CI_PROJECT_DIR $GOPATH/src/github.com/daniel
- cd $GOPATH/src/$REPO/$NAMESPACE/$PROJECT
stages:
- build
- publish
- deploy
compile:
stage: build
script:
- go build -race -ldflags "-extldflags '-static'" -o $CI_PROJECT_DIR/danapp
artifacts:
paths:
- danapp
publish:
image: docker:latest
stage: publish
services:
- docker:dind
script:
- docker build -t $TAG_COMMIT -t $TAG_LATEST .
- docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
- docker push $TAG_COMMIT
- docker push $TAG_LATEST
deploy:
image: alpine:latest
stage: deploy
tags:
- deployment
before_script:
- apk update && apk add openssh-client
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
script:
- chmod og= $SSH_PRIVATE_KEY
- apk update && apk add openssh-client
- ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no admin@192.168.x.x "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
- ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no admin@192.168.x.x "docker pull $TAG_COMMIT"
- ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no admin@192.168.x.x "docker container rm -f danapp || true"
- ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no admin@192.168.x.x "docker run -d -p 20005:20005 --name danapp $TAG_COMMIT"
environment: stagging
only:
- master
I suggest you change $CI_COMMIT_REF_NAME to $CI_COMMIT_REF_SLUG.
Maybe that solves it.
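For reference, that suggestion applied to the variables block would look roughly like this (a sketch; CI_COMMIT_REF_SLUG is the ref name lower-cased and sanitized, so it is safe inside an image tag):

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA

Note also that the leading slash in "/patch-10:fbf4855b" suggests $CI_REGISTRY_IMAGE expanded to an empty string, which typically happens when the project's Container Registry is not enabled, so that is worth checking as well.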
Here is how the pipeline works:
when a push lands on master (works as expected): build the project && push the jar to dev
when a tag is created (doesn't work as expected):
1. build the project
2. increment the pom.xml version and push pom.xml back to master
3. push the jar to the server
But when I do step 2, it retriggers another build in the CI.
How can I push without triggering the job again in this case?
Here is the full gitlab-ci.yml:
image: maven:3.6.0-jdk-10
variables:
APP_NAME: demo
MAVEN_OPTS: -Dmaven.repo.local=/cache/maven.repository
stages:
- build
- deploy_dev
- deploy_prod
build:
stage: build
script:
- mvn package -P build
- mv target/*.jar target/$APP_NAME.jar
artifacts:
untracked: true
deploy_dev:
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- ssh-add <(echo "$SSH_PRIVATE_KEY")
- '[[ -f /.dockerenv ]] && mkdir -p ~/.ssh && echo "$KNOWN_HOST" > ~/.ssh/known_hosts'
- mkdir -p ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
stage: deploy_dev
environment:
name: dev
url: http://devsb01:9999
dependencies:
- build
only:
- master
except:
- tags
script:
- ssh root@devsb01 "service $APP_NAME stop"
- scp target/$APP_NAME.jar root@devsb01:/var/apps/$APP_NAME/
- ssh root@devsb01 "service $APP_NAME start"
deploy_prod:
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- ssh-add <(echo "$SSH_PRIVATE_KEY")
- '[[ -f /.dockerenv ]] && mkdir -p ~/.ssh && echo "$KNOWN_HOST" > ~/.ssh/known_hosts'
- mkdir -p ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
stage: deploy_prod
environment:
name: production
dependencies:
- build
only:
- tags
except:
- branches
script:
- mvn versions:set -DnewVersion=$CI_COMMIT_REF_NAME
- git config --global user.name "gitlab-ci"
- git config --global user.email "gitlab-ci@unc.nc"
- git --version
- git status
- git add pom.xml
- git commit -m "increment pom version"
- git push http://gitlab-ci:${GITLABCI_PWD}@gitlab.unc.nc/dsi-infogestion/demo.git HEAD:master
- git status
- ssh root@prodsb01 "service $APP_NAME stop"
- scp target/$APP_NAME.jar root@prodsb01:/var/apps/$APP_NAME/
- ssh root@prodsb01 "service $APP_NAME start"
Putting the string '[ci skip]' in the commit message works:
git commit -m "increment pom version [ci skip]"
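In context, the relevant lines of the deploy_prod script become something like this (a sketch; GitLab also supports the ci.skip push option, git push -o ci.skip, which has the same effect without touching the commit message):

- git add pom.xml
# [ci skip] in the commit message tells GitLab not to create a pipeline for this push
- git commit -m "increment pom version [ci skip]"
- git push http://gitlab-ci:${GITLABCI_PWD}@gitlab.unc.nc/dsi-infogestion/demo.git HEAD:master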
I'm looking to do a series of tasks if a specific apt package is missing.
For example, if graphite-carbon is NOT installed, do:
- apt: name=debconf-utils state=present
- shell: echo 'graphite-carbon/postrm_remove_databases boolean false' | debconf-set-selections
- apt: name=debconf-utils state=absent
Another example: if statsd is NOT installed, do:
- file: path=/tmp/build state=directory
- shell: cd /tmp/build ; git clone https://github.com/etsy/statsd.git ; cd statsd ; dpkg-buildpackage
- shell: dpkg -i /tmp/build/statsd*.deb
How would I begin to crack this?
I'm thinking maybe I can run something like shell: dpkg -l | grep <package name> and capture the return code somehow.
You can use the package_facts module (requires Ansible 2.5):
- name: Gather package facts
package_facts:
manager: apt
- name: Install debconf-utils if graphite-carbon is absent
apt:
name: debconf-utils
state: present
when: '"graphite-carbon" not in ansible_facts.packages'
...
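Applied to the second example from the question, the same fact-based guard might look like this (a sketch that reuses the build commands from the question and assumes the package_facts task above has already run and the build dependencies — git, devscripts, debhelper — are installed):

- name: Build and install statsd if it is absent
  shell: |
    mkdir -p /tmp/build
    cd /tmp/build
    git clone https://github.com/etsy/statsd.git
    cd statsd
    dpkg-buildpackage
    dpkg -i /tmp/build/statsd*.deb
  when: '"statsd" not in ansible_facts.packages'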
It looks like my solution is working.
This is an example of how I have it working:
- shell: dpkg-query -W 'statsd'
ignore_errors: True
register: is_statd
- name: create build dir
file: path=/tmp/build state=directory
when: is_statd|failed
- name: install dev packages for statd build
apt: name={{ item }}
with_items:
- git
- devscripts
- debhelper
when: is_statd|failed
- shell: cd /tmp/build ; git clone https://github.com/etsy/statsd.git ; cd statsd ; dpkg-buildpackage
when: is_statd|failed
....
Here is another example:
- name: test if create_superuser.sh exists
stat: path=/tmp/create_superuser.sh
ignore_errors: True
register: f
- name: create graphite superuser
command: /tmp/create_superuser.sh
when: f.stat.exists == True
...and one more
- stat: path=/tmp/build
ignore_errors: True
register: build_dir
- name: destroy build dir
shell: rm -fvR /tmp/build
when: build_dir.stat.isdir is defined and build_dir.stat.isdir
I think you're on the right track with the dpkg | grep approach, except that the return code will be 0 in either case. But you can simply check the output.
- shell: dpkg-query -l '<package name>'
register: dpkg_result
- do_something:
when: dpkg_result.stdout != ""
I'm a bit late to this party, but here's another example that uses exit codes; make sure you explicitly match the desired status text in the dpkg-query output. Note that the pipe requires the shell module rather than command, and the check needs ignore_errors so a non-zero exit code doesn't abort the play before the conditional runs:
- name: Check if SystemD is installed
  shell: dpkg-query -s systemd | grep 'install ok installed'
  register: dpkg_check
  ignore_errors: true
  tags: ntp
- name: Update repositories cache & install SystemD if it is not installed
  apt:
    name: systemd
    update_cache: yes
  when: dpkg_check.rc == 1
  tags: ntp