Pass input values from a GitHub Actions workflow to a Terraform variables file

As part of provisioning Google Cloud resources with GitHub Actions using Terraform, I need to pass some input values into a Terraform variables file; the issue is that HCL does not support the Go-template ${{ ... }} syntax.
I have tried to do the following:
Create a GitHub Actions workflow with:
workflow_dispatch:
  inputs:
    new_planet:
      description: 'Bucket Name'
      required: true
      default: 'some bucket'
At the end of the workflow there is:
- name: terraform plan
  id: plan
  run: |
    terraform plan -var-file=variables.tf
In the variables.tf:
variable "backend_bucket" {
type = string
default = ${{ github.event.inputs.new_planet }}
description = "The backend bucket name"
I would appreciate any ideas on how to pass the input values from the workflow into Terraform.

You can use the -backend-config option on the command line [1]. You would first need to configure the backend (e.g., by creating a backend.tf file) and add this:
terraform {
  backend "s3" {
  }
}
This way, you would be prompted for input every time you run terraform init. However, there is an additional CLI option, -input=false, which prevents Terraform from asking for input. The snippet below will move into the directory where the Terraform code is (depending on the name of the repo, the directory name will be different) and run terraform init with the -backend-config options as well as -input set to false:
- name: Terraform Init
  id: init
  run: |
    cd terraform-code
    terraform init -backend-config="bucket=${{ secrets.STATE_BUCKET_NAME }}" \
      -backend-config="key=${{ secrets.STATE_KEY }}" \
      -backend-config="region=${{ secrets.AWS_REGION }}" \
      -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY_ID }}" \
      -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
      -input=false -no-color
Since you presumably don't want the bucket name and other sensitive values hardcoded, I suggest using GitHub Actions secrets [2].
After you set this up, you can run terraform plan without having to specify variables for the backend config. In addition, you could create a terraform.tfvars file in one of the previous steps so it can be consumed by the plan step. Here is one of my examples:
- name: Terraform Tfvars
  id: tfvars
  run: |
    cd terraform-code
    cat << EOF > terraform.tfvars
    profile    = "profilename"
    aws_region = "us-east-1"
    EOF
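To tie this back to the question above, a minimal sketch of the same idea (assuming the variable is the backend_bucket declared in variables.tf and that the code lives in terraform-code, as in the steps above) would write the workflow_dispatch input into that terraform.tfvars file:
- name: Terraform Tfvars from input
  id: tfvars
  run: |
    cd terraform-code
    cat << EOF > terraform.tfvars
    backend_bucket = "${{ github.event.inputs.new_planet }}"
    EOF
Terraform picks up terraform.tfvars automatically during plan, so the ${{ }} expression stays in the workflow file where GitHub Actions can expand it, and no Go-template syntax ever appears in the HCL.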
You would finish off with the following snippet (note the -input=false again):
- name: Terraform Plan
  id: plan
  run: |
    cd terraform-code
    terraform plan -no-color -input=false
  continue-on-error: true
All of the Terraform tooling is available through the GitHub Action provided by HashiCorp [3].
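For completeness, a minimal sketch of wiring that action into the workflow (the version and inputs here are illustrative; check the action's README for the exact options you need):
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_version: 1.3.0
This installs the terraform binary on the runner so the init and plan steps above can call it directly.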
[1] https://www.terraform.io/docs/language/settings/backends/configuration.html#partial-configuration
[2] https://docs.github.com/en/actions/security-guides/encrypted-secrets
[3] https://github.com/hashicorp/setup-terraform

Related

Can GitHub Actions edit parts of README.md?

I want to update ONLY the first header of a readme so it always has the repo's name.
I know how to get the repo name in github actions by doing something like:
name: CI
on:
  push:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run a one-line script
        run: echo "REPO_NAME=${{ github.event.repository.name }}" >> $GITHUB_ENV
However, I want to access my readme.md and add the 'github.event.repository.name' to the header.
So I would make a Readme with something like:
Introduction for: {{ github.event.repository.name }}
hoping I can get something like this with GitHub Actions:
Introduction for: RepoName
I tried using some marketplace GitHub Actions, but the one I tried does not seem to support variables, and it updates the whole readme.md, not just the header: https://github.com/marketplace/actions/dynamic-readme
Here is the failed example for this marketplace plugin:
name: Dynamic Template
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  update_templates:
    name: "Update Templates"
    runs-on: ubuntu-latest
    steps:
      - name: "📥 Fetching Repository Contents"
        uses: actions/checkout@main
      # Runs a single command using the runners shell
      - name: Run a one-line script
        run: echo "REPO_NAME=${{ github.event.repository.name }}" >> $GITHUB_ENV
      # Runs a set of commands using the runners shell
      - name: Update GitHub Profile README
        uses: theboi/github-update-readme@v1.0
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          REPO_NAME: ${{ github.event.repository.name }}
        with:
          header: $REPO_NAME
Is there any way to make the readme.md file have the repo name dynamically with a variable in github actions?
EDIT: I think I am very close, but I don't know how to commit code in GitHub Actions. I figured I can do this manually by using the sed command in bash in GitHub Actions. It seems to work, but I think I have to commit the code to make it save. Here is the code I have so far:
name: Dynamic Template
on:
  push:
    branches:
      - main
  workflow_dispatch:
jobs:
  update_templates:
    name: "Update Templates"
    runs-on: ubuntu-latest
    steps:
      - name: "📥 Fetching Repository Contents"
        uses: actions/checkout@main
      # Runs a single command using the runners shell
      - name: Run a one-line script
        run: echo "REPO_NAME=${{ github.event.repository.name }}" >> $GITHUB_ENV
      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          ls
          sed -i 's/<reponame>/$REPO_NAME/' README.md
          cat README.md
          echo $REPO_NAME
I figured it out. You can do it manually by using the sed command in bash within a runner in GitHub Actions. Set up your README.md with a placeholder that you want to replace, like <reponame>, then use GitHub Actions to find that string and replace it with something you want (for me, the repo name).
name: Dynamic Template
on:
  create:
jobs:
  update_templates:
    name: "Update Templates"
    runs-on: ubuntu-latest
    steps:
      - name: "📥 Fetching Repository Contents"
        uses: actions/checkout@main
      # Runs a set of commands using the runners shell
      - name: Update README.md
        run: |
          sed -i 's/<reponame>/'${{ github.event.repository.name }}'/' README.md
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config user.name "github-actions[bot]"
          git commit -am "Automated report"
          git push
The email I used is the github-actions[bot] address mentioned here: https://github.com/orgs/community/discussions/26560#discussioncomment-3531273
This method needs four files:
README.md
replace_tag.py (python script that replaces the tags)
github_action.yml (runs the replacement automatically)
readme.json (defines the variables)
@PythonKiddieScripterX provides a good idea. It is based on the fact that content inside <> (like <information will be hidden>) is hidden when the Markdown is rendered, so we can update our readme files by replacing those <> tags with the text we want.
Example of my README.md file
This answer is version `1.0`. It will be updated automatically by loading `json` file and using GitHub Action.
In my readme.json file
{
  "VERSION": "1.0"
}
Then I only need a script to do the replacement. In this example, I use Python, saved as replace_tag.py:
import re
import os
import json

# read all .md files
readmefiles = []
for root, dirs, files in os.walk("."):
    for file in files:
        if file.endswith(".md"):
            readmefiles.append(os.path.join(root, file))

# load variable json
with open('readme.json') as f:
    var_dic = json.load(f)

# match pattern (<variable-*-tag>)(`*`)
# example: (<variable-VERSION-tag>)(`1.1`)
for filename in readmefiles:
    with open(filename, "r") as f:
        content = f.read()
    # update readme variables
    for key, value in var_dic.items():
        pattern = r"(<variable-{}-tag>)(`.*?`)".format(key)
        replacement = r"\1`{}`".format(value)
        content = re.sub(pattern, replacement, content)
    with open(filename, "w") as f:
        f.write(content)
Then the final thing is to use a GitHub Action to run this Python script every time there is a change. The yml file could be:
name: README Dynamic Update
on:
  push:
    paths:
      - '*.md'
      - 'readme.json'
      - '**/*.md'
  workflow_dispatch:
jobs:
  update_templates:
    name: "Update Templates"
    runs-on: ubuntu-latest
    steps:
      - name: "📥 Update GitHub Readme files"
        uses: actions/checkout@main
      # Runs a set of commands using the runners shell
      - name: Update README.md
        run: |
          python replace_tag.py
      - name: pull-request
        uses: repo-sync/pull-request@v2
        with:
          destination_branch: "main"
          github_token: ${{ secrets.GITHUB_TOKEN }}
      - name: commit
        run: |
          git config --global user.email youremail
          git config --global user.name yourusername
          git add .
          git commit -m "README update Automation" -a
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
Also, for text inside code blocks, we can use <pre></pre> to work around the hiding issue.
Note that you need to add the tag in this format: <variable-YOURTAG-tag>`variable`
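For example (an illustrative line, not from the original answer), a README sentence carrying the tag could look like this in the raw Markdown:
This answer is version <variable-VERSION-tag>`1.0`.
When rendered, the <variable-VERSION-tag> part is hidden, so the reader only sees the backticked value, and the regex in replace_tag.py rewrites that value whenever readme.json changes.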

Giving a container the same permissions as the workflow in GitHub Actions

I am using workload identity federation to provide some permissions to my workflow.
This seems to be working fine
- name: authenticate to gcp
  id: auth
  uses: 'google-github-actions/auth@v0'
  with:
    token_format: 'access_token'
    workload_identity_provider: ${{ env.WORKLOAD_IDENTITY_PROVIDER }}
    service_account: ${{ env.SERVICE_ACCOUNT_EMAIL }}
- run: gcloud projects list
i.e. the gcloud projects list command is successful.
However, in a subsequent step I am running the same command in a container:
- name: run container
  run: docker run my-image:latest
and the process fails (I don't have access to the logs for the moment, but it definitely fails).
Is there a way to give the created container the same auth context as the workflow?
Do I need to bind mount some generated token, perhaps?
Export the credentials (an option provided by the auth action):
- name: authenticate to gcp
  id: auth
  uses: 'google-github-actions/auth@v0'
  with:
    token_format: 'access_token'
    workload_identity_provider: ${{ env.WORKLOAD_IDENTITY_PROVIDER }}
    service_account: ${{ env.SERVICE_ACCOUNT_EMAIL }}
    create_credentials_file: true
Make the credentials file readable:
# needed in the docker volume creation so that it is read
# by the user with which the image runs (not root)
- name: change permissions of credentials file
  shell: bash
  run: chmod 775 $GOOGLE_GHA_CREDS_PATH
Mount the credentials file and perform a gcloud auth login using this file in the container
- name: docker run
  run: |
    docker run \
      -v $GOOGLE_GHA_CREDS_PATH:${{ env.CREDENTIALS_MOUNT_PATH }} \
      --entrypoint sh \
      ${{ env.CLUSTER_SCALING_IMAGE }} \
      -c "gcloud auth login --cred-file=${{ env.CREDENTIALS_MOUNT_PATH }} && do whatever"
The entrypoint can of course be modified accordingly to support the case above.
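If the image uses Google Cloud client libraries rather than the gcloud CLI, a variation of the same idea (my assumption, not part of the answer above) is to mount the file and point GOOGLE_APPLICATION_CREDENTIALS at it, so Application Default Credentials pick it up without an explicit login:
- name: docker run
  run: |
    docker run \
      -v $GOOGLE_GHA_CREDS_PATH:${{ env.CREDENTIALS_MOUNT_PATH }} \
      -e GOOGLE_APPLICATION_CREDENTIALS=${{ env.CREDENTIALS_MOUNT_PATH }} \
      ${{ env.CLUSTER_SCALING_IMAGE }}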

Drone CI - How to set pipeline env var to result of CLI output

I recognize that within a pipeline step I can run a simple export, like:
commands:
  - export MY_ENV_VAR=$(my-command)
...but if I want to use this env var throughout the whole pipeline, is it possible to do something like this:
environment:
  MY_ENV_VAR: $(my-command)
When I do this, I get yaml: unmarshal errors: line 23: cannot unmarshal !!seq into map[string]*yaml.Variable, which suggests this isn't possible. My end goal is to write a Drone plugin that accepts the output of $(...) as one of its settings. I'd prefer the plugin not run the command itself, but just use the output.
I've also attempted to use step dependencies to export an env var; however, its state doesn't carry over between steps:
- name: export
  image: bash
  commands:
    - export MY_VAR=$(my-command)
- name: echo
  image: bash
  depends_on:
    - export
  commands:
    - echo $MY_VAR # empty
Writing the command output to a script file might be a better way to do what you want, since filesystem changes are persisted between individual steps.
---
kind: pipeline
type: docker

steps:
  - name: generate-script
    image: bash
    commands:
      # - my-command > plugin-script.sh
      - printf "echo Fetching Google;\n\ncurl -I https://google.com/" > plugin-script.sh
  - name: test-script-1
    image: curlimages/curl
    commands:
      - sh plugin-script.sh
  - name: test-script-2
    image: curlimages/curl
    commands:
      - sh plugin-script.sh
From Drone's Docker pipeline documentation:
Workspace
Drone automatically creates a temporary volume, known as your workspace, where it clones your repository. The workspace is the current working directory for each step in your pipeline.
Because the workspace is a volume, filesystem changes are persisted between pipeline steps. In other words, individual steps can communicate and share state using the filesystem.
⚠ Workspace volumes are ephemeral. They are created when the pipeline starts and destroyed after the pipeline completes.
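Building on that, a minimal sketch (step and file names are mine, and my-command is the placeholder from the question) that persists a single value via the workspace instead of a whole script:
steps:
  - name: capture-value
    image: bash
    commands:
      # write the command output into a file in the shared workspace
      - my-command > .my_env_var
  - name: use-value
    image: bash
    commands:
      # read it back in a later step
      - export MY_VAR=$(cat .my_env_var)
      - echo $MY_VAR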
If you can't execute a command in the environment block directly, maybe you can define a "command string" in the environment block, like:
environment:
  MY_ENV_VAR: 'echo "this is command to execute"' # note the single quotes
then in the commands block:
commands:
  - eval $MY_ENV_VAR
Worth a try.

Download from S3 in a GitHub Actions workflow

I'm working on 2 github actions workflows:
Train a model and save it to s3 (monthly)
Download the model from s3 and use it in predictions (daily)
Using https://github.com/jakejarvis/s3-sync-action I was able to complete the first workflow. I train a model and then sync a dir, 'models' with a bucket on s3.
I had planned on using the same action to download the model for use in prediction, but it looks like this action is one-directional: upload only, no download.
I found out the hard way by creating a workflow and attempting to sync with the runner:
retreive-model-s3:
  runs-on: ubuntu-latest
  steps:
    - name: checkout current repo
      uses: actions/checkout@master
    - name: make dir to sync with s3
      run: mkdir models
    - name: checkout s3 sync action
      uses: jakejarvis/s3-sync-action@master
      with:
        args: --follow-symlinks
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_S3_ENDPOINT: ${{ secrets.AWS_S3_ENDPOINT }}
        AWS_REGION: 'us-south' # optional: defaults to us-east-1
        SOURCE_DIR: 'models' # optional: defaults to entire repository
    - name: dir after
      run: |
        ls -l
        ls -l models
    - name: Upload model as artifact
      uses: actions/upload-artifact@v2
      with:
        name: xgb-model
        path: models/regression_model_full.rds
At the time of running, when I log in to the UI I can see the object regression_model_full.rds is indeed there; it's just not downloading. I'm still unsure if this is expected or not (the name of the action, 'sync', is what's confusing me).
For our s3 we must use the parameter AWS_S3_ENDPOINT. I found another action, AWS S3, here, but unlike the sync action I started out with there's no option to add AWS_S3_ENDPOINT. Looking at the repo, too, it's two years old except for an update to the readme 8 months ago.
What's the 'prescribed' or conventional way to download from s3 during a workflow?
So, I had the same problem as you. I was trying to download from S3 to update a directory in GitHub.
What I learned from Actions is that if you're updating some files in the repo, you must follow the normal approach as if you were doing it locally, e.g., checkout, make changes, commit, push.
So for your particular workflow, you must check out your repo in the workflow using actions/checkout@master, and after you sync with a particular directory, the main thing I was missing was pushing the changes back to the repo! This allowed me to update my folder daily.
Anyway, here is my script; I hope you find it useful. I am using the AWS S3 action you mention towards the end.
# This is a basic workflow to help you get started with Actions
name: Fetch data.
# Controls when the workflow will run
on:
  schedule:
    # Runs "at hour 6 past every day" (see https://crontab.guru)
    - cron: '00 6 * * *'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: keithweaver/aws-s3-github-action@v1.0.0 # Verifies the recursive flag
        name: sync folder
        with:
          command: sync
          source: ${{ secrets.S3_BUCKET }}
          destination: ./data/
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: ${{ secrets.AWS_REGION }}
          flags: --delete
      - name: Commit changes
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add .
          git diff-index --quiet HEAD || git commit -m "{commit message}" -a
          git push origin main:main
Sidenote: the --delete flag keeps your current folder up to date with your s3 folder by deleting any files that are no longer present in your s3 folder.
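If the missing AWS_S3_ENDPOINT option is a blocker for you, another possibility (my suggestion, not part of the answer above; bucket path and step name are illustrative) is to skip the action and call the AWS CLI directly in a run step, since it is preinstalled on ubuntu-latest runners and accepts a custom endpoint:
- name: download model from s3
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: ${{ secrets.AWS_REGION }}
  run: |
    # copy just the model object into the models dir
    aws s3 cp "s3://${{ secrets.AWS_S3_BUCKET }}/regression_model_full.rds" models/ \
      --endpoint-url "${{ secrets.AWS_S3_ENDPOINT }}"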

How to use a Variable in Ansible aws_ec2 plugin

I want to filter EC2 instances according to the Environment tag, which I define when I run the scripts, i.e. ansible-playbook start.yml -e env=dev
However, it seems that the plugin is not parsing variables. Any idea on how to achieve this task?
my aws_ec2.yml:
---
plugin: aws_ec2
regions:
  - eu-central-1
filters:
  tag:Secure: 'yes'
  tag:Environment: "{{ env }}"
hostnames:
  - private-ip-address
strict: False
groups:
keyed_groups:
  - key: tags.Function
    separator: ''
Edit
There is no error message when running the playbook. The only problem is that Ansible handles the variable literally, as the string tag:Environment: "{{ env }}", instead of as the value tag:Environment: dev.