I have 2 PCs, each with SourceTree installed. On each machine, I ran ssh-keygen -t rsa to generate the public and private keys, and I placed them in these folders:
G:/.ssh/PC1
G:/.ssh/PC2
Under each folder there are 3 files: id_rsa, id_rsa.pub, known_hosts.
I copied the content of each id_rsa.pub to create an SSH key on the server. On each machine, in SourceTree, I specify the "SSH Client Configuration" so that
SSH Key points to G:\.ssh\PC1\id_rsa & G:\.ssh\PC2\id_rsa
OpenSSH is used as the SSH client on both machines.
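To take SourceTree out of the equation, the key can be tested directly from a shell. A minimal check, assuming the GitLab server is reachable at gitlab.example.com (a placeholder for the real host):

ssh -i G:/.ssh/PC2/id_rsa -T git@gitlab.example.com   # -i points at the same private key SourceTree uses
# A working key prints a greeting with your GitLab username instead of a permission error.

If this works in a plain shell but not in SourceTree, the problem lies with the bundled ssh-agent rather than with the key itself.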
But it appears that only one of the PCs can access GitLab properly in SourceTree, i.e., push/pull work; on the other PC, launching SourceTree shows this alert:
'ssh-agent' failed with code -1: System.Exception:Unable to start 'C:\Users\xxx\AppData\Local\Atlassian\SourceTree\git_local\bin\ssh-agent.exe' check the git installation.
Further, on the failing PC I tried a fresh checkout from GitLab into a new folder, and after that it starts working. But later it stops working again and gives the same alert.
When I try to pull from the repository, it errors:
git -c diff.mnemonicprefix=false -c core.quotepath=false fetch origin
C:\Users\xxx\AppData\Local\Atlassian\SourceTree\git_local\bin\sh.exe: *** fork: can't reserve memory for stack 0x2E60000 - 0x3060000, Win32 error 0
0 [main] sh 11020 sync_with_child: child 7124(0x238) died before initialization with status code 0x1
13 [main] sh 11020 sync_with_child: *** child state waiting for longjmp
C:\Program Files (x86)\Atlassian\SourceTree\tools\openssh_wrapper.sh: fork: Resource temporarily unavailable
fatal: Could not read from remote repository.
Related
I saw a few other posts about this (in particular this one), but they are from last year and I still have the issue right now. I opened the Preview features from the User settings, but I can't turn off this feature.
My pipelines use an SSH connection to run some commands on a virtual machine (basically, pulling a Docker image).
All my pipelines are failing. How can I fix it or update the SSH connections?
Update
I set up the Service connection
and I use it in my pipelines with this YAML code:
- task: SSH@0
  displayName: 'SSH: stop shinyproxy'
  inputs:
    sshEndpoint: $(server)
    commands: |
      echo $(pwd) | sudo -S docker stop shinyproxy
    failOnStdErr: false
  continueOnError: true
All pipelines, new and old, get the same error:
##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: All configured authentication methods failed
at doNextAuth (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:803:21)
at tryNextAuth (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:993:7)
at USERAUTH_FAILURE (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:373:11)
at 51 (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/handlers.misc.js:337:16)
at Protocol.onPayload (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:2025:10)
at AESGCMDecipherNative.decrypt (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/crypto.js:987:26)
at Protocol.parsePacket [as _parse] (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:1994:25)
at Protocol.parse (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:293:16)
at Socket.<anonymous> (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:713:21)
at Socket.emit (node:events:527:28) {
level: 'client-authentication'
}
I have never had this issue before.
Based on the other post, highlighted by Antonia, the solution has to be applied on the Ubuntu machine.
To fix it, open a terminal, edit /etc/ssh/sshd_config, and add the needed line at the end of the file.
After that, restart the SSH service. It is working for me.
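For reference, a sketch of what that edit usually looks like for this particular ssh2 error; the PubkeyAcceptedKeyTypes line is an assumption (it re-enables RSA/SHA-1 keys, which newer Ubuntu releases reject by default), so verify it against the linked post:

# Append to /etc/ssh/sshd_config (assumed line; re-enables ssh-rsa public keys)
echo 'PubkeyAcceptedKeyTypes=+ssh-rsa' | sudo tee -a /etc/ssh/sshd_config
# Restart sshd so the change takes effect
sudo systemctl restart ssh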
I have been using the docker build --ssh flag to give builds access to my keys from ssh-agent.
When I try the same thing with podman it does not work. I am working on macOS Monterey 12.0.1. Intel chip. I have also reproduced this on Ubuntu and WSL2.
❯ podman --version
podman version 3.4.4
This is an example Dockerfile:
FROM python:3.10
RUN mkdir -p -m 0600 ~/.ssh \
    && ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git
When I run DOCKER_BUILDKIT=1 docker build --ssh default . it works i.e. the build succeeds, the repo is cloned and the ssh key is not baked into the image.
When I run podman build --ssh default . the build fails with:
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Error: error building at STEP "RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git": error while running runtime: exit status 128
I have just begun playing around with podman. Looking at the docs, that flag does appear to be supported. I have tried playing around with the format a little, for example specifying the id directly, but no variation of specifying the flag or the mount has worked so far. Is there something about how podman works that I may be missing that explains this?
Adding this line as suggested in the comments:
RUN --mount=type=ssh ssh-add -l
Results in this error:
STEP 4/5: RUN --mount=type=ssh ssh-add -l
Could not open a connection to your authentication agent.
Error: error building at STEP "RUN --mount=type=ssh ssh-add -l": error while running runtime: exit status 2
Edit:
I believe this may have something to do with this issue in buildah. A fix has been merged but, as far as I can see, has not been released yet.
The error while running runtime: exit status 2 does not appear to me to be necessarily related to SSH or --ssh for podman build. It's hard to say, really, and I've successfully used --ssh the way you are trying to, with some minor differences that I can't relate to the error.
I am also not sure that running ssh-add as part of building the container is what you really meant to do. If you want it to talk to an agent, two environment variables must be exported in the environment where you run ssh-add; they define where to find the agent and are as follows:
SSH_AUTH_SOCK, specifying the path to a socket file that a program uses to communicate with the agent
SSH_AGENT_PID, specifying the PID of the agent
Again, without these two variables present in the set of exported environment variables, the agent is not discoverable and might as well not exist at all, so ssh-add will fail.
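A quick way to confirm the agent is discoverable on the host (a sketch; the socket path shown is only an example):

echo "$SSH_AUTH_SOCK"   # e.g. /tmp/ssh-XXXXXXXX/agent.1234 when an agent is running
echo "$SSH_AGENT_PID"   # PID of the agent process
ssh-add -l              # lists key fingerprints if the agent is reachable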
Since your agent is probably running as part of the set of processes to which your podman build also belongs, at the minimum the PID denoted by SSH_AGENT_PID should be valid in that namespace (meaning it's normally invalid in the set of processes that container building is isolated to, so defining the variable as part of building the container would be a mistake). It's a similar story with SSH_AUTH_SOCK: the path to the socket file created by starting the agent program would not normally refer to a file that exists in the mount namespace of the container being built.
Now, you can run both the agent and ssh-add as part of building a container, but ssh-add reads keys from ~/.ssh, and if you had key files there as part of the container image being built, you wouldn't need --ssh in the first place, would you?
The value of --ssh lies in allowing you to transfer your authority to talk to remote services, defined through your keys on the host, to the otherwise very isolated container-building procedure, using nothing but an SSH agent designed for this very purpose. That removes the need to do things like copying key files into the container; keys should normally not end up in the built container anyway, especially if they were only needed during the build. The agent, on the other hand, runs on the host, securely encapsulates the keys you add to it, and since the host is where you have your keys, that's where you're supposed to run ssh-add to add them to the agent.
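Putting that together, a minimal host-side flow looks like this (a sketch; the key path is an example):

eval "$(ssh-agent -s)"        # start an agent; exports SSH_AUTH_SOCK and SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa         # add the key that has access to github.com
podman build --ssh default .  # "default" forwards the agent socket to RUN --mount=type=ssh steps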
I am trying to start minishift on my machine. It successfully creates the minishift VM but then throws a timeout error.
Configuration:
Minishift version: v1.34.0+f5db7cb
OS: Windows 10
Hypervisor: Virtual Box v6.0.10
PS C:\WINDOWS\system32> minishift start
-- Starting OpenShift cluster .......................................................................Error during 'cluster up' execution: Error starting the cluster. ssh command error:
command : /var/lib/minishift/bin/oc cluster up --image 'openshift/origin-${component}:v3.11.0' --public-hostname 192.168.99.100 --routing-suffix 192.168.99.100.nip.io --base-dir /var/lib/minishift/base
err : exit status 1
output : Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11.0 is available ...
Pulling image openshift/origin-cli:v3.11.0
E0725 17:15:42.919928 5316 helper.go:173] Reading docker config from /home/docker/.docker/config.json failed: open /home/docker/.docker/config.json: no such file or directory, will attempt to pull image docker.io/openshift/origin-cli:v3.11.0 anonymously
Image pull complete
E0725 17:15:44.643860 5316 helper.go:173] Reading docker config from /home/docker/.docker/config.json failed: open /home/docker/.docker/config.json: no such file or directory, will attempt to pull image docker.io/openshift/origin-node:v3.11.0 anonymously
Pulling image openshift/origin-node:v3.11.0
Pulled 5/6 layers, 85% complete
Pulled 6/6 layers, 100% complete
Extracting
Image pull complete
Checking type of volume mount ...
Determining server IP ...
Using public hostname IP 192.168.99.100 as the host IP
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11.0 is available ...
I0725 17:16:20.775520 5316 config.go:40] Running "create-master-config"
Starting OpenShift using openshift/origin-control-plane:v3.11.0 ...
I0725 17:16:31.108342 5316 config.go:46] Running "create-node-config"
I0725 17:16:35.237968 5316 flags.go:30] Running "create-kubelet-flags"
I0725 17:16:36.785234 5316 run_kubelet.go:49] Running "start-kubelet"
I0725 17:16:37.288388 5316 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
E0725 17:21:37.300062 5316 run_self_hosted.go:571] API server error: Get https://192.168.99.100:8443/healthz?timeout=32s: dial tcp 192.168.99.100:8443: connect: connection refused ()
Error: timed out waiting for the condition
Expected result: it should provide me, without errors, a link to open the web console.
This happens to me sometimes too.
Solutions include (see the sketch after this list):
minishift stop && minishift start (turn it off and on again)
restart Windows (perhaps VBox has corrupted itself again)
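A minimal recovery sequence (the delete step is an assumption for when the VM is truly wedged; it destroys the cluster state):

minishift stop && minishift start   # turn it off and on again
# If the VM is wedged, recreate it (destroys all cluster state):
minishift delete --force
minishift start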
More info on my minishift setup is here:
http://divby0.blogspot.com/2019/07/configuring-minishift-for-use-with.html
For what it's worth, I use a combination of Linux shells in Windows 10 to interact with minishift / the Docker daemon:
Git Bash (usually the best)
Docker Toolbox (plan B when something won't run in a Git Bash shell)
WSL (Ubuntu-based, plan C in desperation)
I installed GitLab 6 on my MBP running OS X 10.8.5, and it works fine. I can create projects and users, commit, push ... but only from there.
I tried to push a project from my iMac (I did git config, init, add, commit), and I also generated the SSH keys.
When I test the connection with ssh -T git@my_server, it gives "Welcome to GitLab, Anonymous".
But when I issue git push -u origin master I get:
Access denied
fatal: Could not read from remote repository
Please make sure you have the correct access rights and the repository exists.
Is it something related to SSH or to GitLab itself?
In GitLab 6 I added a second key to the project: my public key (generated on the iMac).
On my MBP I added the content of the iMac's id_rsa.pub to the authorized_keys and known_hosts files (on the MBP).
Thanks.
Though I have followed the usual steps for using the dotCloud CLI under Cygwin, dotcloud push fails in all cases: --rsync, --hg, and --git.
I am on Windows 8 and Cygwin.
How can I push successfully?
Sample output:
me@host /cygdrive/d/project
$ dotcloud push --rsync
==> Pushing code with rsync from "./" to application myapp
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /home/lapo/package/rsync-3.0.9-1/src/rsync-3.0.9/io.c(605) [sender=3.0.9]
me@host /cygdrive/d/project
$ dotcloud push --git
==> Pushing code with git from "./" to application myapp
Permission denied (publickey,password).
fatal: The remote end hung up unexpectedly
me@host /cygdrive/d/project
$ dotcloud push --hg
==> Pushing code with mercurial from "./" to application myapp
abort: no suitable response from remote hg!
Error: Mercurial returned a fatal error
You may be running into a bug in Cygwin's group permissions. Vineet Gupta gives a workaround in his blog. The problem comes from the very strict permissions ssh expects on the keys, and the solution is to set the permissions on the ssh key properly (600: read/write by owner only). Cygwin seems to need the group to be set manually.
Updating the steps to get the dotCloud CLI installed, including setting the permissions, leads to:
Start the Cygwin Setup.
Select default choices until you reach the package selection dialog.
Enable the following packages:
net/openssh
net/rsync
devel/git
devel/mercurial
python/python (make sure it’s at least 2.6!)
web/wget
After the installation, you should have a Cygwin icon on your desktop. Start it: you will get a command-line shell.
Download easy_install
wget http://peak.telecommunity.com/dist/ez_setup.py
Install easy_install
python ez_setup.py
You now have easy_install; let’s use it to install pip:
easy_install pip
Now install dotcloud (the CLI)
pip install dotcloud
Set up the CLI with your credentials. This will also download the ssh key.
dotcloud setup
New step: update the permissions on your dotCloud key:
chgrp Users ~/.dotcloud_cli/dotcloud.key
chmod 600 ~/.dotcloud_cli/dotcloud.key
Now you should be able to run dotcloud push.
If you have multiple dotCloud accounts, you will need to repeat this process for each account, since each account has its own key. Also note that you shouldn't have to set these permissions manually, but the group ownership sometimes gets the wrong default in Cygwin. Linux and OS X don't seem to show this problem, though the permissions must be 600 on all OSes, so it is worth checking.
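A quick way to verify the result (a sketch; GNU stat, as shipped with Cygwin, is assumed):

ls -l ~/.dotcloud_cli/dotcloud.key                 # should show -rw------- (600)
stat -c '%a %U:%G' ~/.dotcloud_cli/dotcloud.key    # prints e.g. "600 me:Users"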