Inappropriate IOCTL for device when using capistrano in github-actions - ssh

I have the following YAML file set up for GitHub Actions:
name: Build, Test, and Deploy to Staging
on:
  push:
    branches:
      - develop
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    env:
      DB_DATABASE: foamfactory_stage
      DB_ROOT_USER: root
      DB_ROOT_PASSWORD: password
      DB_USER: admin
      DB_PASSWORD: ${{ secrets.MYSQL_USER_PASSWORD }}
    steps:
      - name: Set up MySQL
        run: |
          sudo systemctl start mysql.service
          mysql -e 'CREATE DATABASE ${{ env.DB_DATABASE }};' -u${{ env.DB_ROOT_USER }} -p${{ env.DB_ROOT_PASSWORD }}
          mysql -e "CREATE USER '${{ env.DB_USER }}'@'localhost' IDENTIFIED BY '${{ env.DB_PASSWORD }}';" -u${{ env.DB_ROOT_USER }} -p${{ env.DB_ROOT_PASSWORD }}
          mysql -e "CREATE DATABASE IF NOT EXISTS ${{ env.DB_DATABASE }};" -u${{ env.DB_ROOT_USER }} -p${{ env.DB_ROOT_PASSWORD }}
          mysql -e "GRANT ALL PRIVILEGES ON ${{ env.DB_DATABASE }}.* to '${{ env.DB_USER }}'@'localhost';" -u${{ env.DB_ROOT_USER }} -p${{ env.DB_ROOT_PASSWORD }}
          mysql -e "FLUSH PRIVILEGES;" -u${{ env.DB_ROOT_USER }} -p${{ env.DB_ROOT_PASSWORD }}
      - name: Install SSH key to Server
        uses: shimataro/ssh-key-action@v2
        with:
          key: ${{ secrets.STAGE_API_DEPLOY_KEY }}
          name: github-actions
          known_hosts: ${{ secrets.STAGE_API_HOST_KEY }}
          config: |
            host stage.api.example.com
              IdentityFile ~/.ssh/github-actions
              IdentitiesOnly yes
              ForwardAgent yes
      - uses: actions/checkout@v2
      - name: Set up Ruby Environment
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: 2.6.1
          bundler-cache: true
        env:
          RAILS_ENV: staging
      - name: Setup Database
        env:
          RAILS_ENV: staging
        run: bundle exec rake db:setup
      - name: Perform Database Migrations
        env:
          RAILS_ENV: staging
        run: bundle exec rake db:migrate
      - name: Run specs
        env:
          RAILS_ENV: staging
        run: bundle exec rails spec
  deploy-staging:
    needs: build-and-test
    runs-on: ubuntu-latest
    steps:
      - name: Install SSH Host Key
        uses: shimataro/ssh-key-action@v2
        with:
          key: ${{ secrets.STAGE_API_DEPLOY_KEY }}
          name: github-actions
          known_hosts: ${{ secrets.STAGE_API_HOST_KEY }}
          config: |
            host stage.api.example.com
              IdentityFile ~/.ssh/github-actions
              IdentitiesOnly yes
              ForwardAgent yes
      - uses: actions/checkout@v2
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Install SSH Key
        run: |
          eval "$(ssh-agent -s)"
          ssh-add -D
          ssh-add ~/.ssh/github-actions
      - name: Check SSH Key Viability
        run: |
          echo "ls -al" | ssh deploy@stage.api.example.com
      - name: Deploy to staging
        run: |
          bundle exec cap staging deploy
The last step, 'Deploy to staging' is failing with the following output:
Run bundle exec cap staging deploy
bundle exec cap staging deploy
shell: /usr/bin/bash -e {0}
#<Thread:0x000055e7af018820@/home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/runners/parallel.rb:10 run> terminated with exception (report_on_exception is true):
/home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/runners/parallel.rb:15:in `rescue in block (2 levels) in execute': Exception while executing as deploy@stage.api.example.com: Inappropriate ioctl for device (SSHKit::Runner::ExecuteError)
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/runners/parallel.rb:11:in `block (2 levels) in execute'
/home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/net-ssh-6.1.0/lib/net/ssh/prompt.rb:45:in `noecho': Inappropriate ioctl for device (Errno::ENOTTY)
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/net-ssh-6.1.0/lib/net/ssh/prompt.rb:45:in `ask'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/net-ssh-6.1.0/lib/net/ssh/authentication/methods/password.rb:68:in `ask_password'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/net-ssh-6.1.0/lib/net/ssh/authentication/methods/password.rb:21:in `authenticate'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/net-ssh-6.1.0/lib/net/ssh/authentication/session.rb:86:in `block in authenticate'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/net-ssh-6.1.0/lib/net/ssh/authentication/session.rb:72:in `each'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/net-ssh-6.1.0/lib/net/ssh/authentication/session.rb:72:in `authenticate'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/net-ssh-6.1.0/lib/net/ssh.rb:255:in `start'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/connection_pool.rb:63:in `call'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/connection_pool.rb:63:in `with'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/netssh.rb:177:in `with_ssh'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/netssh.rb:130:in `execute_command'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/abstract.rb:148:in `block in create_command_and_execute'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/abstract.rb:148:in `tap'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/abstract.rb:148:in `create_command_and_execute'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/abstract.rb:61:in `test'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/capistrano-passenger-0.2.1/lib/capistrano/tasks/passenger.cap:43:in `block (3 levels) in <top (required)>'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/abstract.rb:31:in `instance_exec'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/backends/abstract.rb:31:in `run'
from /home/runner/work/api/api/vendor/bundle/ruby/2.6.0/gems/sshkit-1.21.2/lib/sshkit/runners/parallel.rb:12:in `block (2 levels) in execute'
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy@stage.api.example.com: Inappropriate ioctl for device
Caused by:
Errno::ENOTTY: Inappropriate ioctl for device
Tasks: TOP => rvm:hook => passenger:rvm:hook => passenger:test_which_passenger
(See full trace by running task with --trace)
deploy@stage.api.example.com's password:
Error: Process completed with exit code 1.
It appears that there is some lack of communication between the ssh-agent and the Capistrano task, which is presumably why it is asking for a password. However, in the previous step, 'Check SSH Key Viability', it's clear that the SSH key is usable and working:
Run echo "ls -al" | ssh deploy#stage.api.example.com
echo "ls -al" | ssh deploy#stage.api.example.com
shell: /usr/bin/bash -e {0}
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added the ECDSA host key for IP address 'XXX.XXX.XXX.XXX' to the list of known hosts.
<output of ls command>
I'm not sure what I'm doing incorrectly here; could someone give me a hint as to why this isn't deploying from GitHub Actions?

The reason this was failing was that an incorrect SSH key was being used to authenticate to GitHub from the server (in this case, stage.api.example.com). On stage.api.example.com, my ~/.ssh/config file showed:
Host github.com
  HostName github.com
  IdentityFile ~/.ssh/github-actions
It was using that file - ~/.ssh/github-actions - and not id_rsa as the private key to authenticate to GitHub. As such, I needed to add a deploy key to the appropriate repository containing the corresponding ~/.ssh/github-actions.pub key.
Further, I had to change the line in my config/deploy/staging.rb file that previously used only id_rsa so that it would use either id_rsa or github-actions:
set :ssh_options, {
  keys: %w(~/.ssh/id_rsa ~/.ssh/github-actions),
  forward_agent: true,
}
Re-deploying then alleviated the errors in question.
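As a debugging aid, a step placed just before 'Deploy to staging' can confirm which identity the server presents to GitHub before Capistrano tries to clone. This is only a sketch using the host and user from this workflow; note that ssh -T git@github.com exits non-zero even when authentication succeeds, hence the || true:
- name: Check GitHub authentication from the server
  run: |
    # GitHub's greeting names the account or deploy key the server authenticates as
    ssh deploy@stage.api.example.com 'ssh -T git@github.com' || true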

Related

Error in libcrypto on Github Actions SSH command

I am setting up automatic deployment to my testing server via SSH in GitHub Actions. I set up the connection using a private key. It works correctly locally (tested in the ubuntu:latest Docker image), but when I push my code to the repository I get an error.
Run ssh -i ~/.ssh/private.key -o "StrictHostKeyChecking no" ***@*** -p *** whoami
Warning: Permanently added '[***]:***' (ED25519) to the list of known hosts.
Load key "/home/runner/.ssh/private.key": error in libcrypto
Permission denied, please try again.
Permission denied, please try again.
***@***: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Error: Process completed with exit code 255.
My workflow code:
name: Testing deploy
on:
  push:
    branches:
      - develop
      - feature/develop-autodeploy
jobs:
  build:
    name: Build and deploy
    runs-on: ubuntu-latest
    steps:
      - run: mkdir -p ~/.ssh/
      - run: echo "{{ secrets.STAGING_KEY }}" > ~/.ssh/private.key
      - run: chmod 600 ~/.ssh/private.key
      - run: ssh -i ~/.ssh/private.key -o "StrictHostKeyChecking no" ${{ secrets.STAGING_USER }}@${{ secrets.STAGING_HOST }} -p ${{ secrets.STAGING_PORT }} whoami
I also tried third-party actions, e.g. D3rHase/ssh-command-action and appleboy/ssh-action, with other errors.
Resolved. The line where I create the private.key file was missing the $ character. My bad.
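For reference, the corrected step (a minimal sketch, keeping the same secret name) interpolates the secret with the ${{ }} expression syntax, so the actual key material rather than the literal text is written to the file:
# The leading $ makes this a workflow expression instead of a literal string
- run: echo "${{ secrets.STAGING_KEY }}" > ~/.ssh/private.key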

github actions: SSH into droplet and run code

I want to deploy a GitHub project automatically through GitHub Actions when I push my code to GitHub. My YAML file looks like this:
name: push-and-deploy-to-server
on:
  push:
    branches: [ main ]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: appleboy/scp-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          port: 22
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          source: "."
          target: "."
      - uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.SSH_HOST }}
          port: 22
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            npm install
            pm2 restart index.js
I have a server with an SSH keypair. The public key is added to the server authorized_keys, and I can SSH through my terminal to the server.
When I push code to the github repo, the action runs. I get the following error:
drone-scp error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
The weird thing is: after this error, I'm not able to SSH into my server anymore; even from my own terminal I get "Permission denied (publickey)". So before running the GitHub action everything works, and after it, it fails.
The IP address of the server is in SSH_HOST, the username I use to SSH into the server is in SSH_USERNAME, and the private key (the same one I use on my laptop to SSH into the server) is in SSH_PRIVATE_KEY.
Has anyone encountered this problem before? I really have no clue what's going on here.
Edit: extra information: it's a private repository.

Get error x509 in release job in my local gitlab pipeline

I am running a local GitLab server with a self-signed certificate. My pipeline builds my application and creates a release, but I get an x509 error. I tried the workaround mentioned in the GitLab documentation, but it doesn't work. Everything works fine when tested on gitlab.com.
To summarize: first I build my application to generate a WAR file as an artifact, then the artifact is uploaded using the GitLab API to generate the URL and file path, and after that the release job adds the tag and generates the release page.
My gitlab-ci.yaml:
---
variables:
  PACKAGE_VERSION: "V7"
  GENERIC_WAR: "mypackage-${PACKAGE_VERSION}.war"
  PACKAGE_REGISTRY_URL: "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/${CI_PROJECT_NAME}/${PACKAGE_VERSION}"
workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: always
      variables:
        SERVER: "${PROD_SERVER}"
    - if: $CI_COMMIT_BRANCH == "test"
      when: always
      variables:
        SERVER: "${TEST_SERVER}"
    - if: $CI_COMMIT_BRANCH == "feature/release"
      when: always
      variables:
        SERVER: "${TEST_SERVER}"
stages:
  - build
  - upload
  - prepare
  - release
  - deploy
build-application:
  stage: build
  image: maven:3.8.4-jdk-8
  script:
    - mvn clean package -U -DskipTests=true
    - echo $CI_COMMIT_TAG
  artifacts:
    expire_in: 2h
    when: always
    paths:
      - target/*.war
upload:
  stage: upload
  image: curlimages/curl:latest
  needs:
    - job: build-application
      artifacts: true
  # rules:
  #   - if: $CI_COMMIT_TAG
  script:
    - |
      curl -k --header "JOB-TOKEN: ${CI_JOB_TOKEN}" --upload-file target/*.war "${PACKAGE_REGISTRY_URL}/${GENERIC_WAR}"
prepare_job:
  stage: prepare
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH == "feature/release"
  script:
    - echo "EXTRA_DESCRIPTION=some message" >> variables.env # Generate the EXTRA_DESCRIPTION and TAG environment variables
    - echo "TAG=v$(cat VERSION)" >> variables.env
  artifacts:
    reports:
      dotenv: variables.env
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  needs:
    - job: prepare_job
      artifacts: true
  rules:
    - if: $CI_COMMIT_TAG
      when: never
    - if: $CI_COMMIT_BRANCH == "feature/release"
  before_script:
    - apk --no-cache add openssl ca-certificates
    - mkdir -p /usr/local/share/ca-certificates/extra
    - openssl s_client -connect ${CI_SERVER_HOST}:${CI_SERVER_PORT} -servername ${CI_SERVER_HOST} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | tee "/usr/local/share/ca-certificates/${CI_SERVER_HOST}.crt" >/dev/null
    - update-ca-certificates
  script:
    - echo 'running release_job for $TAG'
  release:
    name: "Release $TAG"
    description: "Created using the release-cli $EXTRA_DESCRIPTION"
    tag_name: "$TAG"
    ref: "$CI_COMMIT_SHA"
    assets:
      links:
        - name: "{$GENERIC_WAR}"
          url: "${PACKAGE_REGISTRY_URL}"
          filepath: "/${GENERIC_WAR}"
Release job execution
Running with gitlab-runner 14.5.2 (e91107dd)
on Shared-Docker mdaS6_cA
Preparing the "docker" executor
00:03
Using Docker executor with image registry.gitlab.com/gitlab-org/release-cli:latest ...
Pulling docker image registry.gitlab.com/gitlab-org/release-cli:latest ...
Using docker image sha256:c2d3a3c3b9ad5ef63478b6a6b757632dd7994d50e603ec69999de6b541e1dca8 for registry.gitlab.com/gitlab-org/release-cli:latest with digest registry.gitlab.com/gitlab-org/release-cli@sha256:68e201226e1e76cb7edd327c89eb2d5d1a1d2b0fd4a6ea5126e24184d9aa4ffc ...
Preparing environment
00:01
Running on runner-mdas6ca-project-32-concurrent-0 via Docker-Server1...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/Saiida/backend-endarh/.git/
Checking out 7735e9ea as feature/release...
Removing target/
Removing variables.env
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:02
Using docker image sha256:c2d3a3c3b9ad5ef63478b6a6b757632dd7994d50e603ec69999de6b541e1dca8 for registry.gitlab.com/gitlab-org/release-cli:latest with digest registry.gitlab.com/gitlab-org/release-cli@sha256:68e201226e1e76cb7edd327c89eb2d5d1a1d2b0fd4a6ea5126e24184d9aa4ffc ...
$ apk --no-cache add openssl ca-certificates
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/2) Installing ca-certificates (20191127-r5)
(2/2) Installing openssl (1.1.1l-r0)
Executing busybox-1.32.1-r6.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 7 MiB in 16 packages
$ mkdir -p /usr/local/share/ca-certificates/extra
$ openssl s_client -connect ${CI_SERVER_HOST}:${CI_SERVER_PORT} -servername ${CI_SERVER_HOST} -showcerts </dev/null 2>/dev/null | sed -e '/-----BEGIN/,/-----END/!d' | tee "/usr/local/share/ca-certificates/${CI_SERVER_HOST}.crt" >/dev/null
$ update-ca-certificates
Warning! Cannot copy to bundle: /usr/local/share/ca-certificates/extra
WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
WARNING: ca-cert-extra.pem does not contain exactly one certificate or CRL: skipping
$ echo 'running release_job for $TAG'
running release_job for $TAG
Executing "step_release" stage of the job script
00:01
$ release-cli create --name "Release $TAG" --description "Created using the release-cli $EXTRA_DESCRIPTION" --tag-name "$TAG" --ref "$CI_COMMIT_SHA" --assets-link "{\"url\":\"${PACKAGE_REGISTRY_URL}\",\"name\":\"{$GENERIC_WAR}\",\"filepath\":\"/${GENERIC_WAR}\"}"
time="2021-12-23T08:47:48Z" level=info msg="Creating Release..." cli=release-cli command=create name="Release v" project-id=32 ref=7735e9ea9422e20b09cae2072c692843b118423a server-url="https://gitlab.endatamweel.tn" tag-name=v version=0.10.0
time="2021-12-23T08:47:48Z" level=fatal msg="run app" cli=release-cli error="failed to create release: failed to do request: Post \"https://gitlab.endatamweel.tn/api/v4/projects/32/releases\": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0" version=0.10.0
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
I managed to get it to work by replacing the YAML release section of the release job with the release-cli command and its arguments, setting the --insecure-https option (not suitable for production, of course):
release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  needs:
    - job: prepare_job
      artifacts: true
  rules:
    - if: $CI_COMMIT_TAG
      when: never # Do not run this job when a tag is created manually
    - if: $CI_COMMIT_BRANCH == "feature/release" # Run this job when commits are pushed or merged to the default branch
  script:
    - |
      release-cli --insecure-https=true create --name "Release $TAG" --tag-name $TAG --ref $CI_COMMIT_SHA \
        --assets-link "{\"name\":\"${GENERIC_WAR}\",\"url\":\"${PACKAGE_REGISTRY_URL}/${GENERIC_WAR}\", \"link_type\":\"package\"}"

How to send passphrase for ssh-add with GitHub Actions?

My goal is to store a private key with a passphrase in GitHub secrets, but I don't know how to enter the passphrase through GitHub Actions.
What I've tried:
I created a private key without a passphrase and stored it in GitHub secrets.
.github/workflows/docker-build.yml
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          eval $(ssh-agent -s)
          echo "${{ secrets.SSH_PRIVATE_KEY }}" | ssh-add -
          ssh -o StrictHostKeyChecking=no root@${{ secrets.HOSTNAME }} "rm -rf be-bankaccount; git clone https://github.com/kidfrom/be-bankaccount.git; cd be-bankaccount; docker build -t be-bankaccount .; docker-compose up -d;"
I finally figured this out because I didn't want to go to the trouble of updating all my servers with a passphrase-less authorized key. Ironically, it probably took me longer to do this but now I can save you the time.
The two magic ingredients are: an ssh-agent socket shared between GH Action steps via SSH_AUTH_SOCK, and ssh-add run with DISPLAY=None and SSH_ASKPASS set to an executable script that prints your passphrase.
For your question specifically, you do not need SSH_AUTH_SOCK because all your commands run within a single job step. However, for more complex workflows, you'll need it set.
Here's an example workflow:
name: ssh with passphrase example
env:
  # Use the same ssh-agent socket value across all jobs
  # Useful when a GH action is using SSH behind-the-scenes
  SSH_AUTH_SOCK: /tmp/ssh_agent.sock
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      # Start ssh-agent but set it to use the same ssh_auth_sock value.
      # The agent will be running in all steps after this, so it
      # should be one of the first.
      - name: Setup SSH passphrase
        env:
          SSH_PASSPHRASE: ${{secrets.SSH_PASSPHRASE}}
          SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
        run: |
          ssh-agent -a $SSH_AUTH_SOCK > /dev/null
          echo 'echo $SSH_PASSPHRASE' > ~/.ssh_askpass && chmod +x ~/.ssh_askpass
          echo "$SSH_PRIVATE_KEY" | tr -d '\r' | DISPLAY=None SSH_ASKPASS=~/.ssh_askpass ssh-add - >/dev/null
      # Debug print out the added identities. This will prove SSH_AUTH_SOCK
      # is persisted across job steps
      - name: Print ssh-add identities
        run: ssh-add -l
  job2:
    # NOTE: SSH_AUTH_SOCK will be set, but the agent itself is not
    # shared across jobs, each job is a new container sandbox
    # so you still need to setup the passphrase again
    steps: ...
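For the single-step case from the question, the same askpass trick works without a shared SSH_AUTH_SOCK. The following is only a sketch: it reuses the secret names above and the question's root@HOSTNAME target, with a simplified remote command for illustration:
      - name: Run a multi-line script with a passphrase-protected key
        env:
          SSH_PASSPHRASE: ${{ secrets.SSH_PASSPHRASE }}
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          eval $(ssh-agent -s)
          # askpass helper that prints the passphrase when ssh-add has no terminal
          echo 'echo $SSH_PASSPHRASE' > ~/.ssh_askpass && chmod +x ~/.ssh_askpass
          echo "$SSH_PRIVATE_KEY" | tr -d '\r' | DISPLAY=None SSH_ASKPASS=~/.ssh_askpass ssh-add - > /dev/null
          ssh -o StrictHostKeyChecking=no root@${{ secrets.HOSTNAME }} "cd be-bankaccount && docker-compose up -d"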
Resources I referenced:
SSH_AUTH_SOCK setting: https://www.webfactory.de/blog/use-ssh-key-for-private-repositories-in-github-actions
GitLab and Ansible using passphrase: How to run an ansible-playbook with a passphrase-protected-ssh-private-key?
You could try the webfactory/ssh-agent action, which comes from the write-up "Using a SSH deploy key in GitHub Actions to access private repositories" by Matthias Pigulla.
GitHub Actions only have access to the repository they run for. So, in order to access additional private repositories, create an SSH key with sufficient access privileges.
Then, use this action to make the key available with ssh-agent on the Action worker node. Once this has been set up, git clone commands using ssh URLs will just work.
# .github/workflows/my-workflow.yml
jobs:
  my_job:
    ...
    steps:
      - actions/checkout@v1
      # Make sure the @v0.4.1 matches the current version of the
      # action
      - uses: webfactory/ssh-agent@v0.4.1
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - ... other steps

Integration of circleci in ruby on rails

Got stuck creating the config file for CircleCI. This is my config file, created under the .circleci folder (.circleci/config.yml):
version: 2.0
jobs:
  build:
    working_directory: ~/electrik_backend
    docker:
      - image: circleci/ruby:2.4.1-node-browsers
      - image: postgres:9.6.2-alpine
        environment:
          POSTGRES_USER: postgres
          POSTGRES_DB: postgres_test
    steps:
      - checkout
      # Bundle install dependencies
      - run:
          name: Install dependencies
          command: bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs 4 --retry 3
      # Restore bundle cache
      - restore_cache:
          keys:
            - rails-demo-{{ checksum "Gemfile.lock" }}
            - rails-demo-
      # Store bundle cache
      - save_cache:
          key: rails-demo-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && sudo tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for db
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      # Setup the database
      - run: bundle exec rake db:create db:migrate
      - run: rails db:test:prepare
      - run: rspec
Until the database setup, everything works, but at the database setup step I get an error:
rake aborted!
Cannot load Rails.application.database_configuration:
Could not load database configuration. No such file - ["config/database.yml"]
My database.yml file is:
development:
  adapter: postgresql
  encoding: unicode
  database: electrik_development
  host: localhost
  pool: 5
  username: postgres
  password: test123
test:
  adapter: postgresql
  encoding: unicode
  database: electrik_test
  host: localhost
  pool: 5
  username: postgres
  password: test123
Modify the database setup and run the tests as follows:
# Setup the database
- run: mv config/database.yml.sample config/database.yml
- run: RAILS_ENV=test bundle exec rake db:create
- run: bundle exec rake db:setup
# Run the tests
- type: shell
  command: |
    bundle exec rspec --profile 10 \
      --out test_results/rspec.xml \
      --format progress \
      $(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)
I also prefer to set the host to 127.0.0.1 for CircleCI.
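For example, the test section of config/database.yml would then look like this (a sketch based on the file shown in the question, with only the host changed):
test:
  adapter: postgresql
  encoding: unicode
  database: electrik_test
  host: 127.0.0.1  # use the IPv4 loopback address rather than the "localhost" hostname
  pool: 5
  username: postgres
  password: test123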