I'm trying to automatically deploy my app to DigitalOcean through Bitbucket Pipelines. Here are the steps my deployment follows:
1. connect to the remote DigitalOcean droplet using SSH
2. clone my repository by running git clone over SSH
3. launch my application with docker-compose
I have successfully set up SSH access to my remote. I have also configured SSH access to my repository and can successfully execute git clone from my remote server.
However, in the pipeline, while the connection to the remote server is successful, the git clone command fails with the following error:
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Does anybody have an idea of what is going on here?
Here is my bitbucket-pipelines.yml
image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        deployment: production
        script:
          - cat deploy.sh | ssh $USER_NAME@$HOST
          - echo "Deploy step finished"
And the deployment script deploy.sh
#!/usr/bin/env sh
git clone git@bitbucket.org:<username>/<my_repo>.git
cd <my_repo>
docker-compose up -d
Logs for the git clone ssh commands within the droplet and from the pipeline
Git uses your default SSH key unless told otherwise.
You can override the SSH command used by git by setting the GIT_SSH_COMMAND environment variable. You can add the -i argument to use a different SSH key.
export GIT_SSH_COMMAND="ssh -i ~/.ssh/<key>"
git clone git@bitbucket.org:<username>/<my_repo>.git
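Applied to the deployment script from the question, a minimal sketch of deploy.sh could then look like this (the key path is a placeholder):
#!/usr/bin/env sh
# point git at the key that is authorised for the Bitbucket repository
export GIT_SSH_COMMAND="ssh -i ~/.ssh/<key>"
git clone git@bitbucket.org:<username>/<my_repo>.git
cd <my_repo>
docker-compose up -d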
From the git documentation:
GIT_SSH
GIT_SSH_COMMAND
If either of these environment variables is set then git fetch and git push will use the specified command instead of ssh when they need to connect to a remote system. The command-line parameters passed to the configured command are determined by the ssh variant. See ssh.variant option in git-config[1] for details.
$GIT_SSH_COMMAND takes precedence over $GIT_SSH, and is interpreted by the shell, which allows additional arguments to be included. $GIT_SSH on the other hand must be just the path to a program (which can be a wrapper shell script, if additional arguments are needed).
Usually it is easier to configure any desired options through your personal .ssh/config file. Please consult your ssh documentation for further details.
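For example, a hedged sketch of adding a host entry on the remote server, so that a plain git clone picks up the right key automatically (the key path is a placeholder):
cat >> ~/.ssh/config <<'EOF'
Host bitbucket.org
    IdentityFile ~/.ssh/<key>
    IdentitiesOnly yes
EOF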
Since I need to use a self-hosted runner, existing marketplace SSH Actions are not a viable option for me: they rely on Docker and build an image on GitHub Actions, which fails because of a lack of permissions for an unknown user. Existing GitHub SSH Actions such as appleboy fail for me because of that reliance on Docker.
Please note: appleboy and other similar actions work great when using GitHub-hosted runners.
Since it is a self-hosted runner, I have access to it all the time, so setting up an SSH profile and then running plain ssh commands works pretty well.
To create an SSH profile & use it in GitHub Actions:
1. Create a "config" file in "~/.ssh/" (if it doesn't already exist).
2. Add a profile in "~/.ssh/config". For example:
Host dev
    HostName qa.bluerelay.com
    User ec2-user
    Port 22
    IdentityFile /home/ec2-user/mykey/something.pem
3. Now, to test it, run this on the self-hosted runner:
ssh dev ls
This should run the ls command inside the dev server.
4. Now that the SSH profile is set up, you can easily run remote SSH commands using GitHub Actions. For example:
name: Test Workflow

on:
  push:
    branches: ["main"]

jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Download Artifacts into Dev
        run: |
          ssh dev 'cd GADeploy && pwd && sudo wget https://somelink.amazon.com/web.zip'
      - name: Deploy Web Artifact to Dev
        run: |
          ssh dev 'cd GADeploy && sudo ./deploy-web.sh web.zip'
This works like a charm!!
Independently of the runner itself, you would first need to check that SSH works from your server:
curl -v telnet://github.com:22
# Assuming your ~/.ssh/id_rsa.pub is copied to your GitHub profile page
git ls-remote git#github.com:you/YourRepository
Then you can install your runner, without Docker.
Finally, your runner script can execute jobs.<job_id>.steps[*].run commands, using git with SSH URLs.
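For instance, one of those run steps could simply clone over SSH on the self-hosted runner (repository name reused from the check above):
git clone git@github.com:you/YourRepository.git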
I have been using the docker build --ssh flag to give builds access to my keys from ssh-agent.
When I try the same thing with podman it does not work. I am working on macOS Monterey 12.0.1. Intel chip. I have also reproduced this on Ubuntu and WSL2.
❯ podman --version
podman version 3.4.4
This is an example Dockerfile:
FROM python:3.10
RUN mkdir -p -m 0600 ~/.ssh \
&& ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git
When I run DOCKER_BUILDKIT=1 docker build --ssh default . it works i.e. the build succeeds, the repo is cloned and the ssh key is not baked into the image.
When I run podman build --ssh default . the build fails with:
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Error: error building at STEP "RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git": error while running runtime: exit status 128
I have just begun playing around with podman. Looking at the docs, that flag does appear to be supported. I have tried playing around with the format a little, specifying the id directly for example but no variation of specifying the flag or the mount has worked so far. Is there something about how podman works that I may be missing that explains this?
Adding this line as suggested in the comments:
RUN --mount=type=ssh ssh-add -l
Results in this error:
STEP 4/5: RUN --mount=type=ssh ssh-add -l
Could not open a connection to your authentication agent.
Error: error building at STEP "RUN --mount=type=ssh ssh-add -l": error while running runtime: exit status 2
Edit:
I believe this may have something to do with this issue in buildah. A fix has been merged but has not been released yet, as far as I can see.
To me, the error while running runtime: exit status 2 does not necessarily appear to be related to SSH or --ssh for podman build. It is hard to say, really; I have successfully used --ssh the way you are trying to, with some minor differences that I cannot relate to the error.
I am also not sure that running ssh-add as part of building the container is what you really meant to do. If you want it to talk to an agent, you need two environment variables exported in the environment in which you run ssh-add; they define where to find the agent to talk to and are as follows:
SSH_AUTH_SOCK, specifying the path to a socket file that a program uses to communicate with the agent
SSH_AGENT_PID, specifying the PID of the agent
Again, without these two variables present in the set of exported environment variables, the agent is not discoverable and might as well not exist at all, so ssh-add will fail.
Since your agent is probably running as part of the same set of processes that your podman build belongs to, at a minimum the PID denoted by SSH_AGENT_PID should be valid in that namespace (meaning it is normally invalid in the set of processes that container building is isolated to, so defining the variable as part of building the container would be a mistake). It is a similar story with SSH_AUTH_SOCK: the path to the socket file created by starting the agent program would not normally refer to a file that exists in the mount namespace of the container being built.
Now, you can run both the agent and ssh-add as part of building a container, but ssh-add reads keys from ~/.ssh and if you had key files there as part of the container image being built you wouldn't need --ssh in the first place, would you?
The value of --ssh lies in allowing you to transfer your authority to talk to remote services, defined through your keys on the host, to the otherwise very isolated container-building procedure, using nothing but an SSH agent designed for this very purpose. That removes the need to do things like copying key files into the container. Keys should also normally not be part of the built container, especially if they were only to be used during building. The agent, on the other hand, runs on the host, securely encapsulates the keys you add to it, and since the host is where you have your keys, that is where you are supposed to run ssh-add to add them to the agent.
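As a minimal sketch of that flow on the host, assuming an id_ed25519 key (adjust the path to your own key):
eval "$(ssh-agent -s)"            # exports SSH_AUTH_SOCK and SSH_AGENT_PID into this shell
ssh-add ~/.ssh/id_ed25519         # hand the host key to the agent
podman build --ssh default .      # the RUN --mount=type=ssh step can now reach the agent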
I'm trying to run a CircleCI test job locally by running
circleci local execute --job test
However, I'm getting this error message:
go: github.com/some/repo@v0.0.0-20180921204022-800easdf7ec: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /go/pkg/mod/cache/vcs/52f8e69c46f5a1cc77e6bf: exit status 128:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
I would basically like to do the equivalent of https://circleci.com/docs/2.0/add-ssh-key/ for the local CircleCI environment, but there is no way to go to Project Settings -> Checkout SSH Keys as described in that documentation. I've read the documentation at https://circleci.com/docs/2.0/local-cli/#run-a-job-in-a-container-on-your-machine but wasn't able to find a way to do this.
Any idea how I can check out code from private Github repos in the local CircleCI environment?
Have you tried adding the key to your SSH agent?
ssh-add /path/to/your/ssh/key
This should add it to the agent, and CircleCI local should pick it up.
You can use the --checkout-key argument:
circleci local execute build --checkout-key id_rsa
Note: id_rsa should be in the same folder
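For example, combined with the job from the question, something along these lines should work, assuming your CLI version supports the flag and the key file sits in the folder you run the command from:
circleci local execute --job test --checkout-key id_rsa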
Version Control > GitHub > "Test Successful"
(screenshot)
Git version: 2.18.0
SSH executable: Native
(screenshot)
BUT!!
git clone fails
(screenshot)
I don't know why this happens. Any help would be appreciated.
You are cloning via HTTPS, so the SSH executable setting is not involved.
Check whether it works on the command line first. It could be that a git credential.helper somehow saved wrong credentials and git is trying to use them.
You could use SSH instead, but make sure your SSH keys are registered on GitHub and, since you want to use the native SSH client, that the key is added to ssh-agent or has no passphrase, because IntelliJ is not a terminal and cannot handle interactive passphrase prompts.
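A minimal sketch of that setup from a terminal, assuming the key lives at ~/.ssh/id_rsa:
eval "$(ssh-agent -s)"       # start an agent if one is not already running
ssh-add ~/.ssh/id_rsa        # add the key that is registered on GitHub
ssh -T git@github.com        # should answer with a greeting instead of "Permission denied"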
Thanks, everybody!
I resolved it in the following way.
Step 1. Create an SSH key (on the local PC)
My PC runs Windows, so I use Git Bash.
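For reference, a hedged sketch of the key generation in Git Bash (the file name XXX_rsa matches the placeholder used in Step 2):
ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -f ~/.ssh/XXX_rsa
cat ~/.ssh/XXX_rsa.pub       # this output is what gets registered on GitHub in Step 2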
Step 2. Register the key on GitHub
Personal settings > SSH and GPG keys
New SSH Key > paste the contents of XXX_rsa.pub (the public key file created on the local PC)
Step 3. In IntelliJ, clone with "Use SSH" instead of "Use HTTPS".
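To double-check the key outside the IDE first, a quick test from Git Bash (repository name is a placeholder):
git clone git@github.com:<username>/<repo>.git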
I want to write a Dockerfile which exports a directory from a remote Subversion repository into the build context so I can work with these files in subsequent commands. The repository is secured with user/password authentication.
That Dockerfile could look like this:
# base image
FROM ubuntu
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# export my repository
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# further commands, e.g. on container start run a file just downloaded from the repository
CMD ["/bin/bash", "path/to/file.sh"]
However, this has the drawback of printing my username and password on the screen or any logfile where the stdout is directed, as in Step 2 : RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory. In my case, this is a Jenkins build log which is also accessible by other people who are not supposed to see the credentials.
What would be the easiest way to hide the echo of username and password in the output?
So far, I have not found any way to execute RUN commands in a Dockerfile silently when building the image. Could the password maybe be imported from somewhere else and attached to the command beforehand so it does not have to be printed anymore? Or are there any methods for password-less authentication in Subversion that would work in the Dockerfile context (in terms of setting them up without interaction)?
The Subversion Server is running remotely in my company and not on my local machine or the Docker host. To my knowledge, I have no access to it except for accessing my repository via username/password authentication, so copying any key files as root to some server folders might be difficult.
The Dockerfile RUN command is always executed and cached when the docker image is built, so the variables that svn needs to authenticate must be provided at build time. To avoid this kind of problem, you can move the svn export call to when docker run is executed. To do that, create a bash script, declare it as the docker entrypoint, and pass the username and password as environment variables. Example:
# base image
FROM ubuntu
ENV REPOSITORY_URL http://subversion.myserver.com/path/to/directory
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# make it executable before you add it here, otherwise docker will complain
ADD docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
docker-entrypoint.sh
#!/bin/bash
# maybe add some validation here that the variables $REPO_USER and $REPO_PASSWORD exist
svn export --username="$REPO_USER" --password="$REPO_PASSWORD" "$REPOSITORY_URL"
# continue execution
path/to/file.sh
Run your image:
docker run -e REPO_USER=jane -e REPO_PASSWORD=secret your/image
Or you can put the variables in a file:
.svn-credentials
REPO_USER=jane
REPO_PASSWORD=secret
Then run:
docker run --env-file .svn-credentials your/image
Remove the .svn-credentials file when you're done.
Maybe using SVN with SSH is a solution for you? You could generate a public/private key pair. The private key could be added to the image whereas the public key gets added to the server.
For more details you could have a look at this stackoverflow question.
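A rough sketch of that approach, assuming the server accepts svn+ssh connections and you are able to install the public key there (paths and URL are placeholders):
ssh-keygen -t ed25519 -N "" -f ./svn_deploy_key      # key pair dedicated to the build
# ...install svn_deploy_key.pub on the Subversion server...
export SVN_SSH="ssh -i /path/to/svn_deploy_key"      # tell the svn+ssh tunnel which key to use
svn export svn+ssh://subversion.myserver.com/path/to/directory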
One solution is to ADD the entire SVN directory that you previously checked out on your builder file-system (or added as an svn:externals if your Dockerfile is itself in an SVN repository, like this: svn propset svn:externals 'external_svn_directory http://subversion.myserver.com/path/to/directory' ., then run svn up).
Then in your Dockerfile you can simply have this:
ADD external_svn_directory /tmp/external_svn_directory
RUN svn export /tmp/external_svn_directory /path/where/to/export/to
RUN rm -rf /tmp/external_svn_directory
Subversion stores authentication details on the client side (unless this is disabled in the configuration) and reuses the stored username and password for subsequent operations on the same URL.
Thus, you only have to run a (successful) svn export with the username and password once in the Dockerfile, and can let SVN use the cached credentials afterwards (i.e., remove the auth options from later command lines).
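As a rough sketch of that idea, reusing the REPO_USER, REPO_PASSWORD and REPOSITORY_URL placeholders from the answer above:
# the first call authenticates and lets Subversion cache the credentials client-side
svn export --username "$REPO_USER" --password "$REPO_PASSWORD" "$REPOSITORY_URL" exported_dir
# later operations against the same URL reuse the cached credentials
svn list "$REPOSITORY_URL"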