Gitlab CI/CD Pipeline keeps failing for terraform plan/apply (terraform layered architecture)

So currently I have a layered architecture with Terraform. Here is what my directory looks like:
.
├── README.md
├── application-layer
│   ├── datasources.tf
│   ├── ec2.tf
│   ├── providers.tf
│   ├── terraform.tfvars
│   └── variables.tf
└── network-layer
    ├── gateways.tf
    ├── outputs.tf
    ├── providers.tf
    ├── routes.tf
    ├── securitygroups.tf
    ├── subnet.tf
    ├── terraform.tfvars
    ├── variables.tf
    └── vpc.tf
I'm having trouble getting the GitLab CI pipeline to succeed at the plan/apply stages, specifically for the application-layer. The application layer pulls the state from the network layer so that the EC2 instances in the application-layer can reference security groups in the network-layer. The network-layer needs to be applied first; only then can the application-layer be applied, otherwise even running the terraform commands manually will fail. However, I'm having a hard time understanding how to translate that into the GitLab CI YAML. This is what I currently have, but the pipeline passes at plan-network and fails at plan-app.
stages:
  - format
  - validate
  - plan
  - apply

image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - terraform init
  - terraform --version
  - export AWS_ACCESS_KEY=${AWS_ACCESS_KEY_ID}
  - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}

fmt-network:
  stage: format
  before_script:
    - cd network-layer
    - terraform init
  script:
    - terraform fmt
  allow_failure: false

validate-network:
  stage: validate
  before_script:
    - cd network-layer
    - terraform init
  script:
    - terraform validate

plan-network:
  stage: plan
  before_script:
    - cd network-layer
    - terraform init
  script:
    - terraform plan
  dependencies:
    - validate-network

apply-network:
  stage: apply
  before_script:
    - cd network-layer
    - terraform init
  script:
    - terraform apply --auto-approve
  dependencies:
    - plan-network
  artifacts:
    when: always

fmt-app:
  stage: format
  before_script:
    - cd application-layer
    - terraform init
  script:
    - terraform fmt
  allow_failure: false

validate-app:
  stage: validate
  before_script:
    - cd application-layer
    - terraform init
  script:
    - terraform validate

plan-app:
  stage: plan
  before_script:
    - cd application-layer
    - terraform init
  script:
    - terraform plan
  dependencies:
    - validate-app

apply-app:
  stage: apply
  before_script:
    - cd application-layer
    - terraform init
  script:
    - terraform apply --auto-approve
  dependencies:
    - plan-app
I have tried adding an artifacts section with a path and having the jobs go into the folders that way instead of cd-ing into each one, but that didn't work.
I have also tried adding a needs section to plan-app that reads:
plan-app:
  stage: plan
  before_script:
    - cd application-layer
    - terraform init
  needs: ["apply-network"]
  script:
    - terraform plan
However, I then get this error: "plan-app job: need apply-network is not defined in current or prior stages".
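The error occurs because needs can only reference jobs defined in the current or prior stages, and apply-network lives in the apply stage, which comes after plan. One possible way around it, sketched below, is to give each layer its own plan/apply stages so the network layer's apply has already run by the time the application layer plans. This assumes both layers keep their state in a shared remote backend (for example the GitLab-managed Terraform state or S3), since each CI job starts from a clean checkout; the format/validate jobs and before_script details are trimmed here:

stages:
  - plan-network
  - apply-network
  - plan-app
  - apply-app

plan-network:
  stage: plan-network
  script:
    - cd network-layer
    - terraform init
    - terraform plan

apply-network:
  stage: apply-network
  script:
    - cd network-layer
    - terraform init
    - terraform apply --auto-approve

plan-app:
  stage: plan-app
  needs: ["apply-network"]   # apply-network is now in a prior stage, so this is valid
  script:
    - cd application-layer
    - terraform init
    - terraform plan

apply-app:
  stage: apply-app
  script:
    - cd application-layer
    - terraform init
    - terraform apply --auto-approve

With the stages ordered this way the needs entry is strictly optional, since stage ordering alone already forces apply-network to finish before plan-app starts.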

Related

Defining multiple cases for an Ansible variable based on multiple conditions

I have this variable here, set in a .yaml variables file
patch_plan: 'foo-{{ patch_plan_week_and_day }}-bar'
I want my patch_plan_week_and_day variable to be set dynamically, based on role and environment, which are two other variables set elsewhere (doesn't matter now), outside this variables file.
For instance, I will explain 3 cases:
If role = 'master' and environment = 'srvb' then patch_plan_week_and_day = 'Week1_Monday' and thus the end result of patch_plan = 'foo-Week1_Monday-bar'.
If role != 'master' and environment = 'srvb' then patch_plan_week_and_day = 'Week1_Tuesday' and thus the end result of patch_plan = 'foo-Week1_Tuesday-bar'.
If role = 'slave' and environment = 'pro' then patch_plan_week_and_day = 'Week3_Wednesday' and thus the end result of patch_plan = 'foo-Week3_Wednesday-bar'.
This is the idea of the code:
patch_plan: 'foo-{{ patch_plan_week_and_day }}-bar'
# Patch Plans
## I want something like this:
# case 1
patch_plan_week_and_day: Week1_Monday
when: role == 'master' and environment == 'srvb'
# case 2
patch_plan_week_and_day: Week1_Tuesday
when: role != 'master' and environment == 'srvb'
# case 3
patch_plan_week_and_day: Week3_Wednesday
when: role == 'slave' and environment == 'pro'
I have 14 cases in total.
Put the logic into a dictionary. For example,
patch_plan_week_and_day_dict:
  srvb:
    master: Week1_Monday
    default: Week1_Tuesday
  pro:
    slave: Week3_Wednesday
    default: WeekX_Wednesday
Create the project for testing
shell> tree .
.
├── ansible.cfg
├── hosts
├── pb.yml
└── roles
    ├── master
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── non_master
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    └── slave
        ├── defaults
        │   └── main.yml
        └── tasks
            └── main.yml
10 directories, 9 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
roles_path = $PWD/roles
retry_files_enabled = false
stdout_callback = yaml
shell> cat hosts
localhost
shell> cat pb.yml
- hosts: localhost
  vars:
    patch_plan_week_and_day_dict:
      srvb:
        master: Week1_Monday
        default: Week1_Tuesday
      pro:
        slave: Week3_Wednesday
        default: WeekX_Wednesday
  roles:
    - "{{ my_role }}"
The code of all roles is identical
shell> cat roles/master/defaults/main.yml
patch_plan_role: "{{ (my_role in patch_plan_week_and_day_dict[env].keys()|list)|
                     ternary(my_role, 'default') }}"
patch_plan_week_and_day: "{{ patch_plan_week_and_day_dict[env][patch_plan_role] }}"
shell> cat roles/master/tasks/main.yml
- debug:
    var: patch_plan_week_and_day
Example 1.
shell> ansible-playbook pb.yml -e env=srvb -e my_role=master
...
patch_plan_week_and_day: Week1_Monday
Example 2.
shell> ansible-playbook pb.yml -e env=srvb -e my_role=non_master
...
patch_plan_week_and_day: Week1_Tuesday
Example 3.
shell> ansible-playbook pb.yml -e env=pro -e my_role=slave
...
patch_plan_week_and_day: Week3_Wednesday
A lot of considerations here ...
It seems you are trying to use Ansible as a programming language, which it isn't. You've started to implement something without any description of your use case or of what the actual problem is. The given example looks like an anti-pattern.
... set dynamically, based on role and environment ...
It is in fact "static" and based on the properties of the systems. You only try to generate the values at runtime. Timeslots when patches can or should be applied (Patch Window) are facts about the system and usually configured within the Configuration Management Database (CMDB). So this kind of information should be already there, either in a database or within the Ansible inventory or as a Custom fact on the system itself.
... which are 2 other variables set elsewhere (doesn't matter now) outside this variables file. ...
Probably it does matter and maybe you could configure the Patch Cycle or Patch Window there.
By pursuing your approach further you'll mix up Playbook Logic with Infrastructure Description or Configuration Properties, quickly leading to code that is less readable and probably unmaintainable in the future. You'll deny yourself the opportunity to maintain the system configuration within a Version Control System (VCS), CMDB or the inventory.
Therefore avoid CASE, SWITCH and IF THEN ELSE ELSEIF structures and describe the desired state of your systems instead.
Some Further Readings
In addition to the sources already given.
Best Practices - Content Organization
General tips
In the end, this is what fixed it. Thank you, everyone:
patch_plan: 'foo-{{ patch_plan_week_and_day[environment][role] }}-bar'
patch_plan_week_and_day:
  srvb:
    master: Week1_Monday
    slave: Week1_Tuesday
  pre:
    master: Week1_Sunday
    slave: Week1_Friday
  pro:
    master: Week1_Thursday
    slave: Week1_Wednesday
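A minimal, self-contained sketch of how that final mapping resolves; my_env and my_role below are illustrative stand-ins for the environment and role variables that are set elsewhere in the real setup:

# Illustrative only: my_env / my_role stand in for the environment and role
# variables set elsewhere.
- hosts: localhost
  gather_facts: false
  vars:
    my_env: srvb
    my_role: master
    patch_plan_week_and_day:
      srvb:
        master: Week1_Monday
        slave: Week1_Tuesday
      pro:
        master: Week1_Thursday
        slave: Week1_Wednesday
    patch_plan: "foo-{{ patch_plan_week_and_day[my_env][my_role] }}-bar"
  tasks:
    - debug:
        var: patch_plan   # prints: foo-Week1_Monday-bar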

Execute GitLab job only if files have changed in a subdirectory, otherwise use cached artefact in following job

I have a simple pipeline, comparable to this one:
image: docker:20

variables:
  GIT_STRATEGY: clone

stages:
  - Building - Frontend
  - Building - Backend

include:
  - local: /.ci/extensions/ci-variables.yml
  - local: /.ci/extensions/docker-login.yml

Build Management:
  stage: Building - Frontend
  image: node:14-buster
  script:
    # Install needed dependencies for building
    - apt-get update
    - apt-get -y upgrade
    - apt-get install -y build-essential
    - yarn global add @quasar/cli
    - yarn global add @vue/cli
    # Install required modules
    - cd ${CI_PROJECT_DIR}/resources/js/management
    - npm ci --cache .npm --prefer-offline
    # Build project
    - npm run build
    # Create archive
    - tar czf ${CI_PROJECT_DIR}/dist-resources-js-management.tar.gz *
  cache:
    policy: pull-push
    key:
      files:
        - ./resources/js/management/package-lock.json
    paths:
      - ./resources/js/management/.npm/
  artifacts:
    paths:
      - dist-resources-js-management.tar.gz

Build Docker:
  stage: Building - Backend
  needs: [Build Management, Build Administration]
  dependencies:
    - Build Management
    - Build Administration
  variables:
    CI_REGISTRY_IMAGE_COMMIT_SHA: !reference [.ci-variables, variables, CI_REGISTRY_IMAGE_COMMIT_SHA]
    CI_REGISTRY_IMAGE_REF_NAME: !reference [.ci-variables, variables, CI_REGISTRY_IMAGE_REF_NAME]
  before_script:
    - !reference [.docker-login, before_script]
  script:
    - mkdir -p ${CI_PROJECT_DIR}/public/static/management
    - tar xzf ${CI_PROJECT_DIR}/dist-resources-js-management.tar.gz --directory ${CI_PROJECT_DIR}/public/static/management
    - docker build
      --pull
      --label "org.opencontainers.image.title=$CI_PROJECT_TITLE"
      --label "org.opencontainers.image.url=$CI_PROJECT_URL"
      --label "org.opencontainers.image.created=$CI_JOB_STARTED_AT"
      --label "org.opencontainers.image.revision=$CI_COMMIT_SHA"
      --label "org.opencontainers.image.version=$CI_COMMIT_REF_NAME"
      --tag "$CI_REGISTRY_IMAGE_COMMIT_SHA"
      -f .build/Dockerfile
      .
I now want the first job to be executed under the following conditions:
Something has changed in the directory ${CI_PROJECT_DIR}/resources/js/management
This job has not yet created an artifact.
The last job should therefore always be able to access an artifact. If nothing has changed in the directory, it does not have to be created anew each time. If it did not exist before, it must of course be created.
Is there a way to map this in GitLab CI?
If I currently specify the dependencies and then work with only:changes: for the first job, GitLab complains if the job is not executed. Likewise with needs:.
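One possible direction, assuming a GitLab version with needs:optional (13.10 or later): run the build job only via rules:changes and mark the need as optional on the Docker job, so the pipeline stays valid even when the build job is not created. A trimmed sketch based on the jobs above; note this covers the "something changed" condition, but reusing an artifact from an earlier pipeline when the build is skipped would still need an extra mechanism (for example the cache, or fetching the artifact of a previous pipeline through the API):

Build Management:
  stage: Building - Frontend
  rules:
    - changes:
        - resources/js/management/**/*
  script:
    - echo "build the frontend as before"
  artifacts:
    paths:
      - dist-resources-js-management.tar.gz

Build Docker:
  stage: Building - Backend
  needs:
    - job: Build Management
      optional: true      # pipeline remains valid if Build Management was not created
      artifacts: true
  script:
    - echo "build the Docker image as before"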

Gitlab CI Include is not including

I don't get it. I have two repos: one for infrastructure and the other for project code. Inside the project code repo, I have a .gitlab-ci.yml file that includes one file defining a job for creating env variables, and another include file that adds the other stages of the pipeline. Every other stage is triggered, but the job from the first include is never triggered, no matter what I do. What am I doing wrong?
Project gitlab-ci
# Stages
stages:
  - pre-build test
  - build
  - post-build test
  - deploy
  - environment
  - e2e

# Main Variables
variables:
  GIT_SUBMODULE_STRATEGY: normal
  IMAGE_VERSION: "latest"
  CARGO_HOME: $CI_PROJECT_DIR/.cargo
  FF_USE_FASTZIP: "true"
  ARTIFACT_COMPRESSION_LEVEL: "fast"
  CACHE_COMPRESSION_LEVEL: "fastest"
  STAGING_BRANCH: "master"
  VARIABLES_FILE: variables.txt

# Include main CI files from infrastructure repository
include:
  - project: 'project/repo-one'
    ref: master
    file: '/gitlab-ci/env/ci-app-env.yml'
  - project: 'project/repo-one'
    ref: master
    file: '/gitlab-ci/app/ci-merge-request.yml'
env ci file
env mr:
  stage: .pre
  before_script:
    - TEST_VAR="TEST"
    - IMAGE_PATH="/var/www"
  script:
    - echo "export TEST_VAR=$TEST_VAR" > $VARIABLES_FILE
    - echo "export IMAGE_PATH=$IMAGE_PATH" >> $VARIABLES_FILE
    - cat $VARIABLES_FILE
  artifacts:
    paths:
      - $VARIABLES_FILE

Deploy to Maven Central from a Gitlab CI workflow

I maintain a few Java library projects on GitLab, which I build with a GitLab CI workflow and currently deploy to a GitLab Maven repository. Now I would like to deploy them to Maven Central instead, but have been unable to find any tutorials, examples or boilerplate code for doing so.
My current .gitlab-ci.yml looks like this:
variables:
  MAVEN_OPTS: "-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"
image: maven:3.6.3-openjdk-8

cache:
  key: "$CI_JOB_NAME"
  paths:
    - .m2/repository
    - public

.verify: &verify
  stage: test
  script:
    - 'mvn $MAVEN_CLI_OPTS verify'
  except:
    - master
    - dev

verify:jdk8:
  <<: *verify

# Deploy to GitLab's Maven Repository for `master` and `dev` branch
deploy:jdk8:
  stage: deploy
  script:
    - if [ ! -f ci_settings.xml ];
      then echo "CI settings missing\! If deploying to GitLab Maven Repository, please see https://docs.gitlab.com/ee/user/project/packages/maven_repository.html#creating-maven-packages-with-gitlab-cicd for instructions.";
      fi
    - 'mvn $MAVEN_CLI_OPTS deploy -s ci_settings.xml'
  only:
    - master
    - dev

# Deploy Javadoc to GitLab Pages
pages:
  stage: deploy
  script:
    - 'mvn javadoc:javadoc -DadditionalJOption=-Xdoclint:none'
    - if [ -e public/javadoc/$CI_COMMIT_REF_NAME ];
      then rm -Rf public/javadoc/$CI_COMMIT_REF_NAME ;
      fi
    - 'mkdir -p public/javadoc/$CI_COMMIT_REF_NAME'
    - 'cp -r target/site/apidocs/* public/javadoc/$CI_COMMIT_REF_NAME/'
  artifacts:
    paths:
      - public
  only:
    - master
    - dev
The closest I found was an instruction on deploying to Artifactory, but nothing about Maven Central. What is the procedure for Maven Central?
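As a rough sketch of the usual approach: publishing to Maven Central via Sonatype OSSRH generally requires a distributionManagement section and the GPG signing plugin in the POM, plus Sonatype credentials and a signing key supplied as CI variables. The job below is only illustrative; central_settings.xml, OSSRH_USERNAME, OSSRH_PASSWORD, GPG_PRIVATE_KEY and GPG_PASSPHRASE are assumed names, not anything GitLab or Maven defines:

# Illustrative sketch only. Assumes the POM already declares distributionManagement
# for OSSRH and the maven-gpg-plugin, and that central_settings.xml in the repo maps
# a <server> id to the credential variables below (all names are placeholders).
deploy:central:
  stage: deploy
  script:
    # Import the signing key from a CI variable (placeholder name)
    - echo "$GPG_PRIVATE_KEY" | gpg --batch --import
    # OSSRH_USERNAME / OSSRH_PASSWORD are read inside central_settings.xml
    - 'mvn $MAVEN_CLI_OPTS -s central_settings.xml -Dgpg.passphrase="$GPG_PASSPHRASE" deploy'
  only:
    - master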

Gitlab run pipeline job only when previous job ran

I'm trying to create a pipeline with a production and a development deployment. In both environments the application should be built with Docker, but only when something has changed in the corresponding directory.
For example:
When something changed in the frontend directory, the frontend should be built and deployed
When something changed in the backend directory, the backend should be built and deployed
At first I didn't have the needs: keyword. The pipeline always executed deploy_backend and deploy_frontend even when the build jobs were not executed.
Now I've added the needs: keyword, but GitLab says the YAML is invalid when there was only a change in one directory. When there is a change in both directories the pipeline works fine. When there is, for example, a change to the README.md outside the two directories, it also says the YAML is invalid.
Does anyone know how I can create a pipeline that only runs when there is a change in a specified directory, and only runs the corresponding deploy job when the build job has run?
gitlab-ci.yml:
stages:
  - build
  - deploy

build_frontend:
  stage: build
  only:
    refs:
      - master
      - development
    changes:
      - frontend/*
  script:
    - cd frontend
    - docker build -t frontend .

build_backend:
  stage: build
  only:
    refs:
      - master
      - development
    changes:
      - backend/*
  script:
    - cd backend
    - docker build -t backend .

deploy_frontend_dev:
  stage: deploy
  only:
    refs:
      - development
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]

deploy_backend_dev:
  stage: deploy
  only:
    refs:
      - development
      - pipeline
  script:
    - "echo deploy backend"
  needs: ["build_backend"]
The problem here is that your deploy jobs require the previous build jobs to actually exist.
However, because of the only.changes rule, those build jobs only exist if something actually changed within the corresponding directories.
So when only something in the frontend folder changed, the build_backend job is not generated at all, but the deploy_backend_dev job still is, and it is then missing its dependency.
A quick fix is to add the only.changes configuration to the deployment jobs as well, like this:
deploy_frontend_dev:
  stage: deploy
  only:
    refs:
      - development
    changes:
      - frontend/*
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]

deploy_backend_dev:
  stage: deploy
  only:
    refs:
      - development
      - pipeline
    changes:
      - backend/*
  script:
    - "echo deploy backend"
  needs: ["build_backend"]
This way, both deploy jobs are only created if the corresponding build job is created as well, and the YAML is no longer invalid.
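As a side note, the same behaviour can also be expressed with the newer rules: keyword, which supersedes only/except; a sketch for the frontend pair, reusing the paths and job names from the question:

build_frontend:
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "master" || $CI_COMMIT_BRANCH == "development"'
      changes:
        - frontend/**/*
  script:
    - cd frontend
    - docker build -t frontend .

deploy_frontend_dev:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "development"'
      changes:
        - frontend/**/*
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]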