Ansible Do Task If Apt Package Is Missing

I'm looking to do a series of tasks if a specific apt package is missing.
For example, if graphite-carbon is NOT installed, do:
- apt: name=debconf-utils state=present
- shell: echo 'graphite-carbon/postrm_remove_databases boolean false' | debconf-set-selections
- apt: name=debconf-utils state=absent
Another example: if statsd is NOT installed, do:
- file: path=/tmp/build state=directory
- shell: cd /tmp/build ; git clone https://github.com/etsy/statsd.git ; cd statsd ; dpkg-buildpackage
- shell: dpkg -i /tmp/build/statsd*.deb
How would I begin to crack this?
I'm thinking maybe I can do a shell: dpkg -l | grep <package name> task and capture the return code somehow.
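For reference, a minimal sketch of that idea, registering the result of a package query and keying later tasks off its return code (dpkg-query is a bit more script-friendly than dpkg -l | grep; the when expression is the part that matters):

- name: Check whether graphite-carbon is installed
  shell: dpkg-query -W graphite-carbon
  register: graphite_check
  ignore_errors: True

- name: Install debconf-utils only if graphite-carbon is missing
  apt:
    name: debconf-utils
    state: present
  when: graphite_check.rc != 0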

You can use the package_facts module (requires Ansible 2.5):
- name: Gather package facts
  package_facts:
    manager: apt

- name: Install debconf-utils if graphite-carbon is absent
  apt:
    name: debconf-utils
    state: present
  when: '"graphite-carbon" not in ansible_facts.packages'
...
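If several tasks share the same condition, they can also be grouped under one when using a block (a sketch that continues the example above; the preseed command is taken from the question):

- name: Gather package facts
  package_facts:
    manager: apt

- name: Preseed debconf only when graphite-carbon is absent
  when: '"graphite-carbon" not in ansible_facts.packages'
  block:
    - apt:
        name: debconf-utils
        state: present
    - shell: echo 'graphite-carbon/postrm_remove_databases boolean false' | debconf-set-selections
    - apt:
        name: debconf-utils
        state: absent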

It looks like my solution is working. Here is an example of how I have it set up:
- shell: dpkg-query -W 'statsd'
  ignore_errors: True
  register: is_statd

- name: create build dir
  file: path=/tmp/build state=directory
  when: is_statd|failed

- name: install dev packages for statd build
  apt: name={{ item }}
  with_items:
    - git
    - devscripts
    - debhelper
  when: is_statd|failed

- shell: cd /tmp/build ; git clone https://github.com/etsy/statsd.git ; cd statsd ; dpkg-buildpackage
  when: is_statd|failed
....
Here is another example:
- name: test if create_superuser.sh exists
  stat: path=/tmp/create_superuser.sh
  ignore_errors: True
  register: f

- name: create graphite superuser
  command: /tmp/create_superuser.sh
  when: f.stat.exists == True
...and one more
- stat: path=/tmp/build
  ignore_errors: True
  register: build_dir

- name: destroy build dir
  shell: rm -fvR /tmp/build
  when: build_dir.stat.isdir is defined and build_dir.stat.isdir
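A side note for newer Ansible versions: the |failed filter used above has since been deprecated in favour of the is failed test, and failed_when: false is a quieter alternative to ignore_errors. A sketch of the first check rewritten along those lines, testing the return code directly:

- shell: dpkg-query -W 'statsd'
  register: is_statd
  failed_when: false    # a non-zero exit simply means "not installed"
  changed_when: false

- name: create build dir
  file:
    path: /tmp/build
    state: directory
  when: is_statd.rc != 0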

I think you're on the right track with the dpkg -l | grep approach, except that the return code will be 0 either way. But you can simply check the output:
- shell: dpkg-query -l '<package name>'
  register: dpkg_result

- do_something:
  when: dpkg_result.stdout != ""
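One caveat with testing for non-empty output: dpkg-query -l also reports packages that were removed but not purged, so matching the status text is a bit stricter. A sketch of that variant (nginx is just a placeholder package name):

- shell: dpkg-query -W -f='${Status}' nginx
  register: dpkg_result
  failed_when: false
  changed_when: false

- name: Install nginx when it is not in the "install ok installed" state
  apt:
    name: nginx
    state: present
  when: "'install ok installed' not in dpkg_result.stdout"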

I'm a bit late to this party, but here's another example that uses exit codes; make sure you explicitly match the desired status text in the dpkg-query output. Note that the pipe needs the shell module rather than command, and the check has to be allowed to return 1 without failing the play:
- name: Check if SystemD is installed
  shell: dpkg-query -s systemd | grep 'install ok installed'
  register: dpkg_check
  failed_when: dpkg_check.rc > 1   # rc 1 just means "not installed"
  tags: ntp

- name: Update repositories cache & install SystemD if it is not installed
  apt:
    name: systemd
    update_cache: yes
  when: dpkg_check.rc == 1
  tags: ntp

Related

Weird characters when deploying a backend server using Ansible, Node.js and an artifact

I am completing my Cloud DevOps Nanodegree program with Udacity.
I am doing my third project, Give Your Application Auto-Deploy Superpowers.
I am getting stuck on the Deploy-Backend job because I am getting random characters in my CircleCI pipeline.
This is my end result in the CircleCI pipeline:
Here is my deploy-backend job in my config.yml:
deploy-backend:
  docker:
    - image: python:3.11-rc-alpine
  steps:
    - checkout
    - add_ssh_keys:
        fingerprints: [ 'xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx' ]
    - attach_workspace:
        at: ~/
    - run:
        name: Install dependencies
        command: |
          apk add --update ansible
          apk add --update tar gzip nodejs npm
          apk add --update --no-cache python3 py3-pip
          /usr/local/bin/python -m pip install --upgrade pip
          pip install awscli
    - run:
        name: Configure Env File
        command: |
          echo NODE_ENV=production >> backend/.env
          echo VERSION=1 >> backend/.env
          echo TYPEORM_CONNECTION=postgres >> backend/.env
          echo TYPEORM_MIGRATIONS_DIR=./src/migrations >> backend/.env
          echo TYPEORM_ENTITIES=./src/modules/domain/**/*.entity.ts >> backend/.env
          echo TYPEORM_MIGRATIONS=./src/migrations/*.ts >> backend/.env
          echo TYPEORM_HOST=$TYPEORM_HOST >> "backend/.env"
          echo TYPEORM_PORT=$TYPEORM_PORT >> "backend/.env"
          echo TYPEORM_USERNAME=$TYPEORM_USERNAME >> "backend/.env"
          echo TYPEORM_PASSWORD=$TYPEORM_PASSWORD >> "backend/.env"
          echo TYPEORM_DATABASE=$TYPEORM_DATABASE >> "backend/.env"
          cat backend/.env
    - run:
        name: Deploy backend
        command: |
          cd backend
          cd ..
          tar -C backend -czvf artifact.tar.gz .
          ls
          mkdir -p /root/project
          mv artifact.tar.gz /root/project/artifact.tar.gz
          cd .circleci/ansible
          echo "Contents of the inventory.txt file is -------"
          cat inventory.txt
          ansible-playbook -i inventory.txt deploy-backend.yml
    - destroy-environment
    - revert-migrations
These are my deploy tasks:
---
- name: "update apt packages."
  become: true
  apt:
    update_cache: yes

- name: "upgrade packages"
  become: true
  apt:
    upgrade: yes

- name: "install dependencies."
  become: true
  apt:
    name: ["nodejs", "npm"]
    state: latest
    update_cache: yes

- name: "install pm2"
  become: true
  npm:
    name: pm2
    global: yes
    production: yes
    state: present

- name: Creates directory
  file:
    path: /home/ubuntu/backend
    state: directory

- name: Copy artifact.tar.gz file
  template:
    src: /root/project/artifact.tar.gz
    dest: /home/ubuntu/backend

- name: Uncompress Backend
  shell: |
    cd /home/ubuntu/backend
    tar xvzf artifact.tar.gz -C .
    ls -la

- name: Build
  become: true
  shell: |
    cd /home/ubuntu/backend
    npm install
    npm run build

- name: Start PM2
  shell: |
    cd /home/ubuntu/backend
    pm2 start npm --name backend -- start

Getting an x509 error in the release job in my local GitLab pipeline

I am running a local GitLab server with a self-signed certificate. My pipeline builds my application and creates a release, but I get an x509 error. I tried the workaround mentioned in the GitLab documentation, but it doesn't work. Everything works fine when tested on gitlab.com.
To summarize: first I build my application to generate a WAR file as an artifact, then the artifact is uploaded using the GitLab API to generate a URL and file path, and after that the release job adds the tag and generates the release page.
My gitlab-ci.yaml:
---
variables:
  PACKAGE_VERSION: "V7"
  GENERIC_WAR: "mypackage-${PACKAGE_VERSION}.war"
  PACKAGE_REGISTRY_URL: "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/${CI_PROJECT_NAME}/${PACKAGE_VERSION}"

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: always
      variables:
        SERVER: "${PROD_SERVER}"
    - if: $CI_COMMIT_BRANCH == "test"
      when: always
      variables:
        SERVER: "${TEST_SERVER}"
    - if: $CI_COMMIT_BRANCH == "feature/release"
      when: always
      variables:
        SERVER: "${TEST_SERVER}"

stages:
  - build
  - upload
  - prepare
  - release
  - deploy

build-application:
  stage: build
  image: maven:3.8.4-jdk-8
  script:
    - mvn clean package -U -DskipTests=true
    - echo $CI_COMMIT_TAG
  artifacts:
    expire_in: 2h
    when: always
    paths:
      - target/*.war

upload:
  stage: upload
  image: curlimages/curl:latest
  needs:
    - job: build-application
      artifacts: true
  # rules:
  #   - if: $CI_COMMIT_TAG
  script:
    - |
      curl -k --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --upload-file target/*.war "${PACKAGE_REGISTRY_URL}/${GENERIC_WAR}"

prepare_job:
  stage: prepare
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH == "feature/release"
  script:
    - echo "EXTRA_DESCRIPTION=some message" >> variables.env # Generate the EXTRA_DESCRIPTION and TAG environment variables
    - echo "TAG=v$(cat VERSION)" >> variables.env
  artifacts:
    reports:
      dotenv: variables.env

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  needs:
    - job: prepare_job
      artifacts: true
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH == "feature/release"
  before_script:
    - apk --no-cache add openssl ca-certificates
    - mkdir -p /usr/local/share/ca-certificates/extra
    - openssl s_client -connect ${CI_SERVER_HOST}:${CI_SERVER_PORT} -servername ${CI_SERVER_HOST} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | tee "/usr/local/share/ca-certificates/${CI_SERVER_HOST}.crt" >/dev/null
    - update-ca-certificates
  script:
    - echo 'running release_job for $TAG'
  release:
    name: "Release $TAG"
    description: "Created using the release-cli $EXTRA_DESCRIPTION"
    tag_name: "$TAG"
    ref: "$CI_COMMIT_SHA"
    assets:
      links:
        - name: "{$GENERIC_WAR}"
          url: "${PACKAGE_REGISTRY_URL}"
          filepath: "/${GENERIC_WAR}"
Release job execution
Running with gitlab-runner 14.5.2 (e91107dd)
on Shared-Docker mdaS6_cA
Preparing the "docker" executor
00:03
Using Docker executor with image registry.gitlab.com/gitlab-org/release-cli:latest ...
Pulling docker image registry.gitlab.com/gitlab-org/release-cli:latest ...
Using docker image sha256:c2d3a3c3b9ad5ef63478b6a6b757632dd7994d50e603ec69999de6b541e1dca8 for registry.gitlab.com/gitlab-org/release-cli:latest with digest registry.gitlab.com/gitlab-org/release-cli@sha256:68e201226e1e76cb7edd327c89eb2d5d1a1d2b0fd4a6ea5126e24184d9aa4ffc ...
Preparing environment
00:01
Running on runner-mdas6ca-project-32-concurrent-0 via Docker-Server1...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/Saiida/backend-endarh/.git/
Checking out 7735e9ea as feature/release...
Removing target/
Removing variables.env
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:02
Using docker image sha256:c2d3a3c3b9ad5ef63478b6a6b757632dd7994d50e603ec69999de6b541e1dca8 for registry.gitlab.com/gitlab-org/release-cli:latest with digest registry.gitlab.com/gitlab-org/release-cli@sha256:68e201226e1e76cb7edd327c89eb2d5d1a1d2b0fd4a6ea5126e24184d9aa4ffc ...
$ apk --no-cache add openssl ca-certificates
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/2) Installing ca-certificates (20191127-r5)
(2/2) Installing openssl (1.1.1l-r0)
Executing busybox-1.32.1-r6.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 7 MiB in 16 packages
$ mkdir -p /usr/local/share/ca-certificates/extra
$ openssl s_client -connect ${CI_SERVER_HOST}:${CI_SERVER_PORT} -servername ${CI_SERVER_HOST} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | tee "/usr/local/share/ca-certificates/${CI_SERVER_HOST}.crt" >/dev/null
$ update-ca-certificates
Warning! Cannot copy to bundle: /usr/local/share/ca-certificates/extra
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-extra.pem does not contain exactly one certificate or CRL: skipping
$ echo 'running release_job for $TAG'
running release_job for $TAG
Executing "step_release" stage of the job script
00:01
$ release-cli create --name "Release $TAG" --description "Created using the release-cli $EXTRA_DESCRIPTION" --tag-name "$TAG" --ref "$CI_COMMIT_SHA" --assets-link "{\"url\":\"${PACKAGE_REGISTRY_URL}\",\"name\":\"{$GENERIC_WAR}\",\"filepath\":\"/${GENERIC_WAR}\"}"
time="2021-12-23T08:47:48Z" level=info msg="Creating Release..." cli=release-cli command=create name="Release v" project-id=32 ref=7735e9ea9422e20b09cae2072c692843b118423a server-url="https://gitlab.endatamweel.tn" tag-name=v version=0.10.0
time="2021-12-23T08:47:48Z" level=fatal msg="run app" cli=release-cli error="failed to create release: failed to do request: Post \"https://gitlab.endatamweel.tn/api/v4/projects/32/releases\": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0" version=0.10.0
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
I managed to get it to work by replacing the YAML release: section of the release job with the release-cli command and its arguments, and setting the --insecure-https option (not suitable for production, of course):
release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  needs:
    - job: prepare_job
      artifacts: true
  rules:
    - if: $CI_COMMIT_TAG
      when: never # Do not run this job when a tag is created manually
    - if: $CI_COMMIT_BRANCH == "feature/release" # Run this job when commits are pushed or merged to the default branch
  script:
    - |
      release-cli --insecure-https=true create --name "Release $TAG" --tag-name $TAG --ref $CI_COMMIT_SHA \
        --assets-link "{\"name\":\"${GENERIC_WAR}\",\"url\":\"${PACKAGE_REGISTRY_URL}/${GENERIC_WAR}\", \"link_type\":\"package\"}"

Gitlab-ci private package install fails

I'm using GitLab CI (13.9) to test and build a React project.
On the branch develop everything works fine.
On the branch validation, the build job can't install a private package:
[2/5] Resolving packages...
error An unexpected error occurred: "https://registry.yarnpkg.com/@company%2fname-of-my-package: Not found".
info If you think this is a bug, please open a bug report with the information provided in "/builds/code/conference/yarn-error.log".
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
error Command failed with exit code 1.
The .gitlab-ci.yml is the same for both branches:
variables:
  DOCKER_DRIVER: overlay2
  GIT_SSL_NO_VERIFY: 'true'
  DOCKER_TLS_CERTDIR: ''

stages:
  - install
  - test
  - build

install_dependencies:
  image: node:lts-alpine
  stage: install
  before_script:
    - apk update && apk add git openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}'>.npmrc
  artifacts:
    expire_in: 1 hour
    paths:
      - node_modules/
  script:
    - yarn install

test-job:
  image: node:lts-alpine
  stage: test
  script:
    - yarn run test

build-job:
  image: node:lts-alpine
  stage: build
  only:
    - develop
    - validation
  artifacts:
    expire_in: 1 hour
    paths:
      - dist/
  script:
    - yarn run build
The package.json is the same for both branches.
Both branches are protected.
develop is the project default branch.
There is no error log available /builds/code/conference/yarn-error.log
There is no specific variable config in .gitlab-ci for develop
What could cause this to fail?
I managed to make my CI pass on the branch validation by copying the SSH/.npmrc configuration into my build-job:
variables:
  DOCKER_DRIVER: overlay2
  GIT_SSL_NO_VERIFY: 'true'
  DOCKER_TLS_CERTDIR: ''

stages:
  - install
  - test
  - build
  - docker-build-push

install_dependencies:
  image: node:lts-alpine
  stage: install
  before_script:
    - apk update && apk add git openssh-client
    # run ssh agent
    - eval $(ssh-agent -s)
    # add ssh key stored in gitlab ci variables
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}'>.npmrc
  artifacts:
    expire_in: 1 hour
    paths:
      - node_modules/
      - .npmrc
  script:
    - yarn install

test-job:
  image: node:lts-alpine
  stage: test
  script:
    - yarn run test

build-job:
  image: node:lts-alpine
  stage: build
  only:
    - develop
    - validation
  artifacts:
    expire_in: 1 hour
    paths:
      - dist/
  before_script:
    - apk update && apk add git openssh-client
    # run ssh agent
    - eval $(ssh-agent -s)
    # add ssh key stored in gitlab ci variables
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}'>.npmrc
  script:
    - yarn run build

docker-job:
  services:
    - docker:dind
  image: docker:18.09.9
  stage: docker-build-push
  only:
    - develop
    - validation
  before_script:
    - apk update && apk add git rsync curl jq
    - docker login -u gitlab-ci-token -p ${PUBLISH_KEY} registry.apizee.com
  script:
    - docker login -u gitlab-ci-token -p ${PUBLISH_KEY} registry.apizee.com
    - /bin/sh docker/init.sh
    - docker push registry.apizee.com/docker/apizee-rancher/conf4:${CI_COMMIT_REF_NAME}
    - '[[ -f "docker/deploy.sh" ]] && sh docker/deploy.sh "${CI_COMMIT_REF_NAME}"'
So is there perhaps a default cache/artifacts setting that applies to the default branch but not to other branches?
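One way to avoid copying the SSH/.npmrc setup into every job is a YAML anchor, so the before_script lives in one place (a sketch against the config above; the hidden key name .npm-auth-script is arbitrary):
.npm-auth-script: &npm-auth-script
  - apk update && apk add git openssh-client
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
  - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
  - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  - echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}'>.npmrc

install_dependencies:
  image: node:lts-alpine
  stage: install
  before_script: *npm-auth-script
  script:
    - yarn install

build-job:
  image: node:lts-alpine
  stage: build
  before_script: *npm-auth-script
  script:
    - yarn run build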

GitHub Actions with hub results in Unauthorized (HTTP 401) Bad credentials

The following example workflow runs without issues:
on: [push]
jobs:
  create_release:
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Create release
        run: hub release create -m "$(date)" "v$(date +%s)"
However, some of my CI/CD code needs to run in a container:
on: [push]
jobs:
  create_release:
    runs-on: ubuntu-latest
    container:
      image: ubuntu:latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Install dependencies
        run: apt update && apt install -y git hub
      - name: Checkout
        uses: actions/checkout@v2
      - name: Create release
        run: hub release create -m "$(date)" "v$(date +%s)"
Now, hub suddenly doesn't work anymore:
Run hub release create -m "$(date)" "v$(date +%s)"
hub release create -m "$(date)" "v$(date +%s)"
shell: sh -e {0}
env:
GITHUB_TOKEN: ***
Error creating release: Unauthorized (HTTP 401)
Bad credentials
Error: Process completed with exit code 1.
The issue was actually with mismatching versions: hub on the native ubuntu-latest runner was (as of now) the most recent version, 2.14.2, while apt install on the ubuntu:latest container installed only version 2.7.0 (from Dec 28, 2018!).
The solution is to install the latest hub binary directly from their GitHub releases page instead of using apt:
on: [push]
jobs:
  create_release:
    runs-on: ubuntu-latest
    container:
      image: ubuntu:latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Install dependencies
        run: |
          apt update && apt install -y git wget
          url="$(wget -qO- https://api.github.com/repos/github/hub/releases/latest | tr '"' '\n' | grep '.*/download/.*/hub-linux-amd64-.*.tgz')"
          wget -qO- "$url" | tar -xzvf- -C /usr/bin --strip-components=2 --wildcards "*/bin/hub"
      - name: Checkout
        uses: actions/checkout@v2
      - name: Create release
        run: hub release create -m "$(date)" "v$(date +%s)"
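If the unauthenticated call to api.github.com ever becomes a problem (rate limits) or you want reproducible builds, the same step can pin a known version instead of resolving "latest" (a sketch; 2.14.2 is simply the version mentioned above, and the URL follows hub's release naming):
      - name: Install pinned hub release
        run: |
          apt update && apt install -y git wget
          version=2.14.2   # pin instead of querying the GitHub API for "latest"
          wget -qO- "https://github.com/github/hub/releases/download/v${version}/hub-linux-amd64-${version}.tgz" \
            | tar -xzvf- -C /usr/bin --strip-components=2 --wildcards "*/bin/hub"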
After adding sudo, it works for me.
- name: Install Deps
  run: |
    sudo apt-get update 2> /dev/null || true
    sudo apt-get install -y git
    sudo apt-get install -y wget
    url="$(sudo wget -qO- https://api.github.com/repos/github/hub/releases/latest | tr '"' '\n' | grep '.*/download/.*/hub-linux-amd64-.*.tgz')"
    sudo wget -qO- "$url" | sudo tar -xzvf- -C /usr/bin --strip-components=2 --wildcards "*/bin/hub"

Install Rbenv using Ansible

I am trying to install Rbenv on my server using Ansible but getting this error:
TASK: [rbenv | create temporary directory] ********************
fatal: [localhost] => Conditional expression must evaluate to True or False: is_failed($rbuild_present)
FATAL: all hosts have already failed -- aborting
My playbook is:
---
- name: rbenv | update rbenv repo
  git: repo=git://github.com/sstephenson/rbenv.git
       dest=$rbenv_root
       version=v0.4.0

- name: rbenv | add rbenv to path
  file: path=/usr/local/bin/rbenv
        src=${rbenv_root}/bin/rbenv
        state=link

- name: rbenv | add rbenv initialization to profile
  template: src=templates/rbenv.sh.j2
            dest=/etc/profile.d/rbenv.sh
            owner=root
            group=root
            mode=0755

- name: rbenv | check ruby-build installed
  command: test -x /usr/local/bin/ruby-build
  register: rbuild_present
  ignore_errors: yes

- name: rbenv | create temporary directory
  shell: mktemp -d
  register: tempdir
  when_failed: $rbuild_present

- name: rbenv | clone ruby-build repo
  git: repo=git://github.com/sstephenson/ruby-build.git
       dest=${tempdir.stdout}/ruby-build
  when_failed: $rbuild_present

- name: rbenv | install ruby-build
  command: ./install.sh
           chdir=${tempdir.stdout}/ruby-build
  when_failed: $rbuild_present

- name: rbenv | remove temporary directory
  file: path=${tempdir.stdout} state=absent
  when_failed: $rbuild_present

- name: rbenv | check ruby $ruby_version installed
  shell: RBENV_ROOT=${rbenv_root} rbenv versions | grep $ruby_version
  register: ruby_installed
  ignore_errors: yes

- name: rbenv | install ruby $ruby_version
  shell: RBENV_ROOT=${rbenv_root} rbenv install $ruby_version
  when_failed: $ruby_installed

- name: rbenv | set global ruby $ruby_version
  shell: RBENV_ROOT=${rbenv_root} rbenv global $ruby_version
  when_failed: $ruby_installed

- name: rbenv | rehash
  shell: RBENV_ROOT=${rbenv_root} rbenv rehash
  when_failed: $ruby_installed

- name: rbenv | set gemrc
  copy: src=files/gemrc
        dest=/root/.gemrc
        owner=root
        group=root
        mode=0644
Any ideas?
I suspect you are using the latest Ansible, or at least something newer than 1.3.x. The when_* syntax has been deprecated; when you run your playbook it should give you a warning. Instead use something like:
when: ruby_installed|failed
or something like:
when: 'not ($ruby_installed)'
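For example, the failing "create temporary directory" task from the playbook above would become something like this (a sketch using the older |failed filter to match the rest of the playbook; on current Ansible you would write rbuild_present is failed instead):
- name: rbenv | check ruby-build installed
  command: test -x /usr/local/bin/ruby-build
  register: rbuild_present
  ignore_errors: yes

- name: rbenv | create temporary directory
  shell: mktemp -d
  register: tempdir
  when: rbuild_present|failed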