Giving a container the same permissions as the workflow in GitHub Actions - authentication

I am using workload identity federation to provide some permissions to my workflow.
This seems to be working fine:
- name: authenticate to gcp
  id: auth
  uses: 'google-github-actions/auth@v0'
  with:
    token_format: 'access_token'
    workload_identity_provider: ${{ env.WORKLOAD_IDENTITY_PROVIDER }}
    service_account: ${{ env.SERVICE_ACCOUNT_EMAIL }}
- run: gcloud projects list
i.e. the gcloud projects list command is successful.
However, in a subsequent step I am running the same command in a container:
- name: run container
  run: docker run my-image:latest
and the process fails (I don't have access to the logs for the moment, but it definitely fails).
Is there a way to give the created container the same auth context as the workflow?
Do I need to bind mount some generated token, perhaps?

Export the credentials (an option provided by the auth action):
- name: authenticate to gcp
  id: auth
  uses: 'google-github-actions/auth@v0'
  with:
    token_format: 'access_token'
    workload_identity_provider: ${{ env.WORKLOAD_IDENTITY_PROVIDER }}
    service_account: ${{ env.SERVICE_ACCOUNT_EMAIL }}
    create_credentials_file: true
Make the credentials file readable:
# needed so that the file mounted into the docker container can be read
# by the user the image runs as (not root)
- name: change permissions of credentials file
  shell: bash
  run: chmod 775 $GOOGLE_GHA_CREDS_PATH
Mount the credentials file and perform a gcloud auth login using this file in the container:
- name: docker run
  run: |
    docker run \
      -v $GOOGLE_GHA_CREDS_PATH:${{ env.CREDENTIALS_MOUNT_PATH }} \
      --entrypoint sh \
      ${{ env.CLUSTER_SCALING_IMAGE }} \
      -c "gcloud auth login --cred-file=${{ env.CREDENTIALS_MOUNT_PATH }} && do whatever"
The entrypoint can, of course, be modified accordingly to support the case above.
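For illustration, the image's entrypoint could activate the mounted credentials itself before handing off to the real command. A minimal sketch of such an entrypoint, assuming the gcloud CLI is installed in the image and that the mount path is passed in as an environment variable (both the variable and the script name are hypothetical):

#!/bin/sh
# entrypoint.sh (hypothetical): activate mounted GCP credentials, then run the real command
set -e

# CREDENTIALS_MOUNT_PATH is assumed to be set to the path the credentials file
# was mounted at, e.g. with:
#   docker run -v $GOOGLE_GHA_CREDS_PATH:/tmp/creds.json -e CREDENTIALS_MOUNT_PATH=/tmp/creds.json my-image:latest
if [ -n "$CREDENTIALS_MOUNT_PATH" ]; then
  gcloud auth login --cred-file="$CREDENTIALS_MOUNT_PATH" --quiet
fi

# hand over to whatever command the container was asked to run
exec "$@"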

Download from S3 into an Actions workflow

I'm working on two GitHub Actions workflows:
Train a model and save it to s3 (monthly)
Download the model from s3 and use it in predictions (daily)
Using https://github.com/jakejarvis/s3-sync-action I was able to complete the first workflow. I train a model and then sync a dir, 'models', with a bucket on S3.
I had planned on using the same action to download the model for use in prediction, but it looks like this action is one-directional: upload only, no download.
I found out the hard way by creating a workflow and attempting to sync with the runner:
retreive-model-s3:
  runs-on: ubuntu-latest
  steps:
    - name: checkout current repo
      uses: actions/checkout@master
    - name: make dir to sync with s3
      run: mkdir models
    - name: checkout s3 sync action
      uses: jakejarvis/s3-sync-action@master
      with:
        args: --follow-symlinks
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_S3_ENDPOINT: ${{ secrets.AWS_S3_ENDPOINT }}
        AWS_REGION: 'us-south' # optional: defaults to us-east-1
        SOURCE_DIR: 'models' # optional: defaults to entire repository
    - name: dir after
      run: |
        ls -l
        ls -l models
    - name: Upload model as artifact
      uses: actions/upload-artifact@v2
      with:
        name: xgb-model
        path: models/regression_model_full.rds
At the time of running, when I log in to the UI I can see the object regression_model_full.rds is indeed there; it's just not downloading. I'm still unsure if this is expected or not (the name of the action, 'sync', is what's confusing me).
For our S3 we must use the parameter AWS_S3_ENDPOINT. I found another action, AWS S3 (here), but unlike the sync action I started out with, there's no option to add AWS_S3_ENDPOINT. Looking at the repo, it's also two years old except for an update to the readme 8 months ago.
What's the 'prescribed' or conventional way to download from s3 during a workflow?
So, I had the same problem as you: I was trying to download from S3 to update a directory in GitHub.
What I learned from Actions is that if you're updating some files in the repo, you must follow the normal approach, as if you were doing it locally: checkout, make changes, push.
So for your particular workflow you must check out your repo in the workflow using actions/checkout@master, sync with a particular directory, and then, the main thing I was not doing, push the changes back to the repo! This allowed me to update my folder daily.
Anyway, here is my script; I hope you find it useful. I am using the AWS S3 action you mention towards the end.
# This is a basic workflow to help you get started with Actions
name: Fetch data.
# Controls when the workflow will run
on:
  schedule:
    # Runs "at hour 6 past every day" (see https://crontab.guru)
    - cron: '00 6 * * *'
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: keithweaver/aws-s3-github-action@v1.0.0 # Verifies the recursive flag
        name: sync folder
        with:
          command: sync
          source: ${{ secrets.S3_BUCKET }}
          destination: ./data/
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: ${{ secrets.AWS_REGION }}
          flags: --delete
      - name: Commit changes
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git add .
          git diff-index --quiet HEAD || git commit -m "{commit message}" -a
          git push origin main:main
Sidenote: the --delete flag keeps your current folder up to date with your S3 folder by deleting any files that are no longer present in your S3 folder.
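If you would rather not rely on a third-party action at all, the same download can usually be done with the AWS CLI that is preinstalled on the ubuntu-latest runner. A minimal sketch, assuming the same secrets as above; the models prefix is illustrative, and the custom --endpoint-url is only needed for a non-AWS S3 provider:

- name: download models from s3
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: ${{ secrets.AWS_REGION }}
  run: |
    # sync the bucket's models prefix into the local models/ directory
    aws s3 sync "s3://${{ secrets.AWS_S3_BUCKET }}/models" ./models \
      --endpoint-url "${{ secrets.AWS_S3_ENDPOINT }}"

The --delete flag can be appended here too if the local copy should mirror the bucket exactly.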

Pass input values from a GitHub Actions workflow to a Terraform variables file

As part of provisioning Google Cloud resources with GitHub Actions using Terraform, I need to pass some input values via a Terraform variables file; the issue is that HCL does not support Golang-style templating (the ${{ }} syntax).
I have tried to do the following:
Create a GitHub Actions workflow with:
workflow_dispatch:
  inputs:
    new_planet:
      description: 'Bucket Name'
      required: true
      default: 'some bucket'
At the end of the workflow there is:
- name: terraform plan
  id: plan
  run: |
    terraform plan -var-file=variables.tf
In the variables.tf:
variable "backend_bucket" {
type = string
default = ${{ github.event.inputs.new_planet }}
description = "The backend bucket name"
I would appreciate any ideas on how to pass the input values from the workflow into Terraform.
You can use the -backend-config option on the command line [1]. You would first need to configure the backend (e.g., by creating a backend.tf file) and add this:
terraform {
  backend "s3" {
  }
}
This way, you would be prompted for input every time you run terraform init. However, there is an additional CLI option, -input=false, which prevents Terraform from asking for input. The snippet below will move into the directory where the Terraform code is (depending on the name of the repo, the directory name will be different) and run terraform init with the -backend-config options as well as -input set to false:
- name: Terraform Init
  id: init
  run: |
    cd terraform-code
    terraform init -backend-config="bucket=${{ secrets.STATE_BUCKET_NAME }}" \
      -backend-config="key=${{ secrets.STATE_KEY }}" \
      -backend-config="region=${{ secrets.AWS_REGION }}" \
      -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY_ID }}" \
      -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
      -input=false -no-color
Since I suppose you don't want the name of the bucket and other sensitive values to be hardcoded, I suggest using GitHub Actions secrets [2].
After you set this up, you can run terraform plan without having to specify variables for the backend config. On the other hand, you could create a terraform.tfvars file in one of the previous steps so it can be consumed by the plan step. Here is one of my examples:
- name: Terraform Tfvars
  id: tfvars
  run: |
    cd terraform-code
    cat << EOF > terraform.tfvars
    profile = "profilename"
    aws_region = "us-east-1"
    EOF
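To feed the workflow_dispatch input from the question into Terraform the same way, the tfvars step can interpolate it directly; a minimal sketch reusing the question's backend_bucket variable and new_planet input (the step name is arbitrary):

- name: Terraform Tfvars from input
  id: tfvars-input
  run: |
    cd terraform-code
    cat << EOF > terraform.tfvars
    backend_bucket = "${{ github.event.inputs.new_planet }}"
    EOF

Alternatively, the value can be passed without a file at all, e.g. terraform plan -var="backend_bucket=${{ github.event.inputs.new_planet }}".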
You would finish off with the following snippet (note the -input=false again):
- name: Terraform Plan
  id: plan
  run: |
    cd terraform-code
    terraform plan -no-color -input=false
  continue-on-error: true
All of the Terraform part is available through the GitHub Action provided by HashiCorp [3].
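For reference, wiring that action in is a single step; a minimal sketch (the version pin is just an example):

- name: Setup Terraform
  uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: 1.0.0

After this step, the terraform binary is available on the PATH for the init and plan steps shown above.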
[1] https://www.terraform.io/docs/language/settings/backends/configuration.html#partial-configuration
[2] https://docs.github.com/en/actions/security-guides/encrypted-secrets
[3] https://github.com/hashicorp/setup-terraform

How to send passphrase for ssh-add with GitHub Actions?

My goal is to store private key with passphrase in GitHub secrets, but I don't know how to enter the passphrase through GitHub actions.
What I've tried:
I created a private key without a passphrase and stored it in GitHub secrets.
.github/workflows/docker-build.yml
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          eval $(ssh-agent -s)
          echo "${{ secrets.SSH_PRIVATE_KEY }}" | ssh-add -
          ssh -o StrictHostKeyChecking=no root@${{ secrets.HOSTNAME }} "rm -rf be-bankaccount; git clone https://github.com/kidfrom/be-bankaccount.git; cd be-bankaccount; docker build -t be-bankaccount .; docker-compose up -d;"
I finally figured this out because I didn't want to go to the trouble of updating all my servers with a passphrase-less authorized key. Ironically, it probably took me longer to do this but now I can save you the time.
The two magic ingredients are: using SSH_AUTH_SOCK to share the agent between GH Action steps, and using ssh-add with DISPLAY=None and SSH_ASKPASS set to an executable script that prints your passphrase.
For your question specifically, you do not need SSH_AUTH_SOCK because all your commands run within a single job step. However, for more complex workflows, you'll need it set.
Here's an example workflow:
name: ssh with passphrase example
env:
  # Use the same ssh-agent socket value across all jobs
  # Useful when a GH action is using SSH behind-the-scenes
  SSH_AUTH_SOCK: /tmp/ssh_agent.sock
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      # Start ssh-agent but set it to use the same ssh_auth_sock value.
      # The agent will be running in all steps after this, so it
      # should be one of the first.
      - name: Setup SSH passphrase
        env:
          SSH_PASSPHRASE: ${{secrets.SSH_PASSPHRASE}}
          SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
        run: |
          ssh-agent -a $SSH_AUTH_SOCK > /dev/null
          echo 'echo $SSH_PASSPHRASE' > ~/.ssh_askpass && chmod +x ~/.ssh_askpass
          echo "$SSH_PRIVATE_KEY" | tr -d '\r' | DISPLAY=None SSH_ASKPASS=~/.ssh_askpass ssh-add - >/dev/null
      # Debug print out the added identities. This will prove SSH_AUTH_SOCK
      # is persisted across job steps
      - name: Print ssh-add identities
        run: ssh-add -l
  job2:
    # NOTE: SSH_AUTH_SOCK will be set, but the agent itself is not
    # shared across jobs, each job is a new container sandbox
    # so you still need to setup the passphrase again
    steps: ...
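For the single-step case from the question, the same askpass trick works inside one run block and the shared SSH_AUTH_SOCK is not needed; a rough sketch adapted to the question's script (secrets follow the naming used above, and the remote command is shortened):

- name: Run a multi-line script with a passphrase-protected key
  env:
    SSH_PASSPHRASE: ${{ secrets.SSH_PASSPHRASE }}
    SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
  run: |
    eval $(ssh-agent -s)
    # askpass helper that prints the passphrase when ssh-add asks for it
    echo 'echo $SSH_PASSPHRASE' > ~/.ssh_askpass && chmod +x ~/.ssh_askpass
    echo "$SSH_PRIVATE_KEY" | tr -d '\r' | DISPLAY=None SSH_ASKPASS=~/.ssh_askpass ssh-add - > /dev/null
    # the agent now holds the key, so plain ssh commands work as in the question
    ssh -o StrictHostKeyChecking=no root@${{ secrets.HOSTNAME }} "docker-compose up -d"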
Resources I referenced:
SSH_AUTH_SOCK setting: https://www.webfactory.de/blog/use-ssh-key-for-private-repositories-in-github-actions
GitLab and Ansible using passphrase: How to run an ansible-playbook with a passphrase-protected-ssh-private-key?
You could try the webfactory/ssh-agent action, which comes from the study done in "Using a SSH deploy key in GitHub Actions to access private repositories" by Matthias Pigulla.
GitHub Actions only have access to the repository they run for. So, in order to access additional private repositories, create an SSH key with sufficient access privileges.
Then, use this action to make the key available with ssh-agent on the Action worker node. Once this has been set up, git clone commands using ssh URLs will just work.
# .github/workflows/my-workflow.yml
jobs:
  my_job:
    ...
    steps:
      - uses: actions/checkout@v1
      # Make sure the @v0.4.1 matches the current version of the
      # action
      - uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - ... other steps

Why is my static site broken when using a GitHub Action and the Azure CLI to deploy?

I'm trying to deploy my static site to Azure Storage but have been having issues getting the site to open correctly, even though the GitHub Action executes without errors and the files seem to be in place. In the browser, index.html seems to load along with the CSS and JS, but the site does not run properly; the console shows a failure in the JS.
The odd thing is that I don't have any issues using the Azure Storage extension in VS Code or using the Azure CLI:
az storage blob upload-batch --account-name <ACCOUNT_NAME> -d '$web' -s ./dist --connection-string '<CONNECTION_STRING>'
when I deploy from my laptop.
My GitHub Action looks like this:
name: Blob storage website CI
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: npm install
        run: |
          npm install
      - name: npm build
        run: |
          npm run build
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Azure CLI script
        uses: azure/CLI@v1
        with:
          azcliversion: latest
          inlineScript: |
            az storage blob upload-batch --account-name <ACCOUNT_NAME> -d '$web' -s ./dist --connection-string '${{ secrets.BLOB_STORAGE_CONNECTION_STRING }}'
      # Azure logout
      - name: logout
        run: |
          az logout
based on this article here.
I thought that it might be due to the Azure CLI version, but none of the versions I've tried have made a difference.
Any ideas why my site is broken when using a GitHub Action and the Azure CLI to deploy?
For anyone interested: I was missing environment variables during the build process in the GitHub Action. I was able to pass these without checking in the .env files by using GitHub secrets.
There's now a step in the action to create a .env file,
- name: Set Environment Variables
  run: |
    touch .env
    echo ENVIRONMENT_VARIABLE=${{secrets.ENVIRONMENT_VARIABLE}} >> .env
and another to remove it:
- name: Remove Environment Variables
  run: |
    rm .env
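If the build tool can read ordinary environment variables, an alternative is to skip the .env file entirely and expose the secret on the build step itself; a minimal sketch (the variable name is just an example):

- name: npm build
  env:
    # hypothetical variable name; use whatever your bundler expects
    ENVIRONMENT_VARIABLE: ${{ secrets.ENVIRONMENT_VARIABLE }}
  run: |
    npm run build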

How to use ssh identity file with Github Actions

I'm in the throes of setting up a GitHub Action that should run an SSH command to connect to a private server. The private server's connection settings I have specify an identityFile, which I do own. After this connection, I will then run a ProxyCommand, so this is essentially a connection through a bastion, for context.
What I cannot quite figure out at this point is how, or which, GitHub Action supports this configuration. I see the options on this one (similar to others): https://github.com/appleboy/ssh-action/blob/master/action.yml and there is no mention of an identityFile property. Is there another way to execute this, or an ssh command that can make this possible?
Would appreciate some pointers, thanks!
If you need some explanation of how to write your action, you can read this article: How to create Github Actions to run tests with docker services.
You just have to create your workflow file and use the appleboy action in your steps:
- name: executing remote ssh commands using password
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.HOST }}
    username: ${{ secrets.USERNAME }}
    key: ${{ secrets.KEY }}
    key_path: ${{ secrets.KEY_PATH }}
    password: ${{ secrets.PASSWORD }}
    port: ${{ secrets.PORT }}
    script: whoami
With the script line, you can execute what you want to do on the server, connecting with the parameters set above. For multiple lines, do it like this:
script: |
  pwd
  ls -al
Hope it will help.
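If you would rather stay with plain ssh and reproduce the identityFile plus bastion ProxyCommand setup from the question, one option is to write an ~/.ssh/config in a run step and let ordinary ssh pick it up. A rough sketch, assuming a passphrase-less key stored in a secret; the host alias, secret names, and ProxyJump target are placeholders:

- name: configure ssh identity file and bastion
  env:
    SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
  run: |
    mkdir -p ~/.ssh
    echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    chmod 600 ~/.ssh/id_rsa
    cat << EOF > ~/.ssh/config
    Host private-server
      HostName ${{ secrets.PRIVATE_SERVER_HOST }}
      User ${{ secrets.SSH_USER }}
      IdentityFile ~/.ssh/id_rsa
      ProxyJump ${{ secrets.BASTION_USER_AND_HOST }}
      StrictHostKeyChecking no
    EOF
- name: run remote command through the bastion
  run: ssh private-server "whoami"

ProxyJump is the modern shorthand for a ProxyCommand-based bastion hop; a literal ProxyCommand line works in the same config file if more control is needed.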