How to build AWS services using Terraform with Bamboo

Can someone outline the steps for setting up a CI/CD pipeline in Bamboo that creates AWS services using Terraform?

You probably want to create a build plan that looks as follows:
STAGE: Plan
  JOB: Plan
    TASK: Script, terraform init -input=false
    TASK: Script, terraform validate
    TASK: Script, terraform plan -out=tfplan -input=false
STAGE: Apply
  JOB: Apply
    TASK: Script, terraform apply -input=false tfplan
The 'Apply' stage should be configured as a 'Manual stage', meaning it needs manual approval before it runs. This allows you to review the Terraform plan produced in the first stage before applying it to your infrastructure. The build plan itself can be linked to, and triggered by, the repository containing your Terraform configuration.
In terms of connecting to AWS, you can provide the required Terraform variables as environment variables in your scripts, which themselves refer to regular Bamboo variables. For example:
export AWS_ACCESS_KEY_ID="${bamboo.AwsAccessKeyId}"
terraform plan ...
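A slightly fuller sketch of such a script task, assuming the secret key and region are also stored as Bamboo variables (the names AwsSecretAccessKey and AwsRegion are illustrative, not Bamboo defaults):
# illustrative variable names; match them to your plan's Bamboo variables
export AWS_ACCESS_KEY_ID="${bamboo.AwsAccessKeyId}"
export AWS_SECRET_ACCESS_KEY="${bamboo.AwsSecretAccessKey}"
export AWS_DEFAULT_REGION="${bamboo.AwsRegion}"
terraform plan -out=tfplan -input=false
These are the standard environment variables the AWS provider reads, so no credentials need to appear in the Terraform configuration itself.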
If you want to fetch these variable values from a secrets manager such as AWS Secrets Manager, you might be interested in this plugin: https://marketplace.atlassian.com/apps/1221965/secret-managers-for-bamboo (note that I am affiliated).

Related

How can I trigger a specific stage in azure release pipeline according to the build configuration in build pipeline

In my Selenium automation suite I have different config files for different environments (App.Dev.config, App.QA.config, and so on). Currently I have two Azure pipelines: a build pipeline and a release pipeline. If I want to run the UI automation tests in the QA environment, what I do now is change the buildconfiguration variable in the build pipeline to 'QA', run the build pipeline, and then, once it succeeds, run the QA stage in the release pipeline manually. Is there a way to trigger this automatically?
Install the Release Orchestrator extension to trigger the release: https://marketplace.visualstudio.com/items?itemName=dmitryserbin.release-orchestrator&targetId=ca4e4e67-3099-4c62-9ea9-bef80e0cc70a&utm_source=vstsproduct&utm_medium=ExtHubManageList
Set a variable in your build pipeline:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: 'echo "##vso[task.setvariable variable=doThing]QA"'
Use the Release Orchestrator task in your build pipeline to trigger the desired stage in your release pipeline with a condition:
- task: releaseorchestrator@2
  inputs:
    endpointType: 'integrated'
    projectName: '{ProjectName}'
    definitionName: 'TestReleaseExtension'
    releaseStrategy: 'create'
    definitionStage: 'Stage 2'
    approvalRetry: 60
    updateInterval: 5
  condition: eq(variables['doThing'], 'QA')
Make sure the Build Service Account has the permission to create releases.

Pass services to child pipeline in GitLab

I am trying to generalize the CI/CD setup of our GitLab projects.
I am planning to create a cicd-templates repo containing general jobs that I run in multiple projects.
For example, I have a Terraform template that accepts input variables and runs an init, validate, plan, and apply job.
I am now trying to create a similar template for our python-nox sessions. The issue is that, for our integration tests, we need two services.
I would prefer not to include the services in the template, since they are not needed for the integration tests of other projects (though other services might be).
So I was wondering how I could include a CI template (from another project) and pass the needed service images from the parent pipeline.
What is not working:
Parent/project pipeline:
trigger-nox-template:
  variables:
    IMAGE: "registry.gitlab.com/path/to/my/image:latest"
  trigger:
    include:
      - project: cicd-templates
        file: /nox_tests.yml
    strategy: depend
  services:
    - name: mcr.microsoft.com/mssql/server:2017-latest
      alias: db
    - name: mcr.microsoft.com/azure-storage/azurite:3.13.1
      alias: storage
cicd-templates/nox_tests.yml:
variables:
  IMAGE: "registry.gitlab.com/path/to/a/default/image:latest"

integration:
  image: '$IMAGE'
  script:
    - python -m nox -s integration
As I said, I could hardcode the services in the template as well, but they might vary based on the parent pipeline, so I'm looking for a more dynamic solution.
P.S. The way I implemented the image does work, but if there is a more elegant way, that would be appreciated as well.
Thanks in advance!
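One possible direction, sketched here as an untested assumption (it relies on GitLab expanding variables in services:name the same way it does in image:), is to parametrize the service images in the template and override them with trigger variables from the parent pipeline:
# cicd-templates/nox_tests.yml -- service images as overridable variables (names illustrative)
variables:
  IMAGE: "registry.gitlab.com/path/to/a/default/image:latest"
  DB_SERVICE_IMAGE: "mcr.microsoft.com/mssql/server:2017-latest"
  STORAGE_SERVICE_IMAGE: "mcr.microsoft.com/azure-storage/azurite:3.13.1"

integration:
  image: '$IMAGE'
  services:
    - name: '$DB_SERVICE_IMAGE'
      alias: db
    - name: '$STORAGE_SERVICE_IMAGE'
      alias: storage
  script:
    - python -m nox -s integration

# parent pipeline -- variables on the trigger job are passed down to the child pipeline
trigger-nox-template:
  variables:
    IMAGE: "registry.gitlab.com/path/to/my/image:latest"
    DB_SERVICE_IMAGE: "mcr.microsoft.com/mssql/server:2017-latest"
  trigger:
    include:
      - project: cicd-templates
        file: /nox_tests.yml
    strategy: depend
Note that this sketch can only swap a service's image, not remove a service entirely; projects with different service needs would still share the same number of service slots.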

Invoke GitLab CI jobs from inside other jobs

I have many different GitLab CI jobs in my repository, and depending on variables that a user sets in a config file, I want to execute different sequences of jobs. My approach is to create a scheduler job that analyzes the config file and executes jobs accordingly. However, I cannot figure out how to execute another job from within a job.
Any help is appreciated!
This would be a good use case for dynamic child pipelines. This is pretty much the only way to customize a pipeline based on the outcome of another job.
From the docs:
generate-config:
  stage: build
  script: generate-ci-config > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

child-pipeline:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-config
In your case, the script generate-ci-config would be the analysis of your config file, conditionally emitting a job configuration based on its contents.
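For illustration, a minimal sketch of what such a generator job might look like, with an inline shell loop standing in for generate-ci-config (the config file name, job names, and scripts are all assumptions):
generate-config:
  stage: build
  script:
    - |
      # emit one job per step listed in the user's config file (illustrative format)
      for step in $(cat user-config.txt); do
      cat >> generated-config.yml <<EOF
      run-$step:
        script: ./run-$step.sh
      EOF
      done
  artifacts:
    paths:
      - generated-config.yml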

How can I write a commit hash to a file using .gitlab-ci before build and deploy

I want to add an endpoint to my server that returns the commit hash currently in production. I am using GitLab CI (.gitlab-ci.yml). I want the pipeline to write the commit hash to a file before the "build and deploy" step, so that I can read this file on request to return the latest deployed version. Can anyone help me with the steps and examples? Thanks in advance!
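For reference, a minimal sketch of the direct approach described above, using GitLab's predefined CI_COMMIT_SHA variable (the job name, file name, and deploy script are illustrative):
build-and-deploy:
  script:
    # record the commit being deployed, then build and deploy as usual
    - echo "$CI_COMMIT_SHA" > version.txt
    - ./build-and-deploy.sh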
I would offer an alternative to this: use GitLab's environments and deployments features, which, in part, consider this exact use case.
In your CI/CD configuration (.gitlab-ci.yml), you can specify an environment: key that will record deployments to your environment(s).
For example:
deploy:
  script:
    - echo "your deployment script here"
  environment:
    name: "production"
Now, when this job runs, GitLab will record it as a deployment that can be queried later.
Then you can use the deployments API or the environments API to get the latest deployment information, which will include, among other things, the commit hash of the deployment.
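For example, a sketch of querying the deployments API for the most recent production deployment (the host, token, and project ID are placeholders); the sha field in the response is the deployed commit hash:
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project-id>/deployments?environment=production&order_by=created_at&sort=desc&per_page=1"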

Gitlab CI does not support variable expansion in needs keyword, is there any solution?

I'm creating a template for all the deploy jobs, and I need to be able to use the needs keyword with a different value for each deploy job, but as far as I know GitLab CI does not support using variables in the needs keyword. Is there any workaround?
This is what I need to do:
# Deploy template
.deploy:
  stage: deploy
  only:
    - develop
  tags:
    - deploy
  needs: ["build:$PROJECT_NAME"]

# Deploy jobs
deploy:package1:
  extends: .deploy
  variables:
    PROJECT_NAME: 'package1'
  # needs: ['build:package1']

deploy:package2:
  extends: .deploy
  variables:
    PROJECT_NAME: 'package2'
  # needs: ['build:package2']
You can't do this. needs: does not support variables.
However, if the template you're making does not contain the job it depends on, the best approach is probably not to use needs: at all; otherwise you greatly increase the likelihood that including your template will produce an invalid pipeline configuration.
So, your options would be either to (1) include the jobs you depend on in the same template and designate needs: explicitly, or (2) rely on users to provide the needs: key in the deploy job if they want it.
For example, a user can do this:
include:
  - "your template"

# job originates in the project configuration
my_project_job:
  script: "..."

your_deploy_template_job:
  needs: ["my_project_job"] # add the key to the included template job
Or, if you provide both jobs in your template, you can use rules: to keep the jobs from running by default, and let users enable them and override their script configurations to implement builds.
# your template yaml
your_template_build_job:package1:
  script: "echo 'override me in the project'"
  rules:
    - if: '$PACKAGE1_ENABLED'
      when: on_success
    - when: never

your_template_deploy_job:package1:
  script: "your deploy script"
  rules:
    - if: '$PACKAGE1_ENABLED'
  needs: ["your_template_build_job:package1"]
# ...
Then a user might just do this:
# user project yaml
include:
  - "your template"

variables:
  PACKAGE1_ENABLED: 'true'

your_template_build_job:package1:
  script: "my project build script"
When the user doesn't explicitly enable a job, neither the build job nor the deploy job will be in the pipeline configuration. However, the user only needs to enable the build job (via the variable), and the needs: configuration for the deploy job will already be in place.
Neither of these approaches is particularly perfect for very flexible use of templates, unfortunately. But there may be another option...
Workaround: Dynamic child pipelines
As a possible workaround, users could use dynamic child pipelines to generate an entire pipeline configuration with correct needs: based on a minimal configuration. Almost anything is possible with dynamic child pipelines because you can generate the YAML programmatically on the fly, though it may be more trouble than it's worth.
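As a rough illustration (an untested sketch; the package list, scripts, and job names are all assumptions), a parent pipeline could generate build/deploy pairs with the right needs: wiring and trigger them as a child pipeline:
generate-pipeline:
  stage: build
  script:
    - |
      # emit a build/deploy pair per package, wired together with needs:
      for pkg in package1 package2; do
      cat >> generated.yml <<EOF
      build:$pkg:
        script: ./build.sh $pkg
      deploy:$pkg:
        needs: ["build:$pkg"]
        script: ./deploy.sh $pkg
      EOF
      done
  artifacts:
    paths:
      - generated.yml

run-generated:
  stage: deploy
  trigger:
    include:
      - artifact: generated.yml
        job: generate-pipeline
    strategy: depend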