I successfully followed these instructions from GitHub on how to generate SSH keys, and my connection to GitHub is successful.
But when I later check my SSH key following these instructions, ssh-add -l does not show the fingerprint I see on my GitHub SSH keys settings page.
Instead of the SSH key fingerprint I get the message "The agent has no identities." Why? And what does it mean?
This means you haven't successfully added your key to your agent. Use ssh-add to do so, as given in step 3, part 2 of your first link.
Note that this needs to be done for each ssh-agent instance; thus, if you log out and back in, you need to ssh-add your key again. Similarly, if you start ssh-agent twice, in two different terminal windows, they won't have shared private keys between them, so you would need to ssh-add once in each window (or, better, configure your system in such a way as to have an agent shared between all running applications in your desktop session).
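For example, a minimal check after logging back in might look like this (assuming your key is at the default ~/.ssh/id_rsa path):
eval "$(ssh-agent -s)"   # only needed if no agent is already running
ssh-add ~/.ssh/id_rsa    # prompts for the key's passphrase
ssh-add -l               # should now list the key's fingerprint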
Modern desktop environments generally provide an SSH keyring for you, so if your agent is configured that way you shouldn't need to start ssh-agent yourself, and the agent instance provided is shared across your entire session. gnome-keyring behaves this way, as do Apple's Keychain and KDE's Wallet (with ksshaskpass enabled).
I am trying to run a script that clones a repository and then builds it inside my Docker build.
It is a private repository, so I have copied the SSH keys into the Docker image.
But the command below does not seem to work:
yes yes | git clone (ssh link to my private repository)
When I manually try to run the script on my local system it shows the same prompt, but it works fine for other commands.
I do have access to the repository, because I can type yes and it works.
But I can't type yes during docker build.
Any help will be appreciated.
This is purely an ssh issue. When ssh is connecting to a host for the "first time",1 it obtains a "host fingerprint" and prints it, then opens /dev/tty to interact with the human user so as to obtain a yes/no answer about whether it should continue connecting. You cannot defeat this by piping to its standard input.
Fortunately, ssh has about a billion options, including:
the option to obtain the host fingerprint in advance, using ssh-keyscan, and
the option to verify a host key via DNS.
The first is the one to use here: run ssh-keyscan and create a known_hosts file in the .ssh directory. Security considerations will tell you how careful to be about this (i.e., you must decide how paranoid to be).
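As a sketch, assuming the repository is hosted on github.com and the clone runs as root inside the image (substitute your own host and user), the Dockerfile could pre-populate known_hosts like this:
RUN mkdir -p /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts
With the host key already present, the later git clone over ssh no longer stops to ask for confirmation.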
1"First" is determined by whether there's a host key in your .ssh/known_hosts file. Since you're spinning up a Docker image that you then discard, every time is the first time. You could set up a docker image that has the file already in it, so that no time is the first time.
I used Capistrano to deploy my project using my local RSA key located at ~/.ssh/id_rsa. This always worked as expected.
Now I have installed the development environment on a new computer, and when I run cap ... deploy, I get this error:
OpenSSH keys only supported if ED25519 is available (NotImplementedError)
net-ssh requires the following gems for ed25519 support:
ed25519 (>= 1.2, < 2.0)
bcrypt_pbkdf (>= 1.0, < 2.0)
I found plenty of questions about this while googling. Most suggest running ssh-add ~/.ssh/id_rsa to add the key to the ssh agent; some suggest installing the two listed gems and using an ed25519 key.
I understand from there that Capistrano looks for a key stored in the ssh agent, and then falls back to using an ed25519 key. What I need is for Capistrano to simply use the local ssh key located at ~/.ssh/id_rsa.
I did not find out how to tell Capistrano to use the local ssh key ~/.ssh/id_rsa instead of the ssh agent.
Notes
I am using Cygwin on Windows, and installing a permanent ssh agent there is tricky. I found lengthy instructions, but did not get them to work.
As a workaround, I run these commands before cap ... deploy
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
Long question short
How can I configure Capistrano or ssh so that cap ... deploy picks up the RSA key at ~/.ssh/id_rsa to connect to the remote server?
This seems to be an SSH issue rather than a Capistrano one. First, some explanations:
ssh-agent securely stores your decrypted keys in memory, and "there is no reasonable and safe way to preserve the decrypted keys among reboots/re-logins"
ssh-add just adds these keys to your agent
With that said, it seems that your operating system isn't loading your keys into your ssh agent automatically when it boots, so the solution is to automate this task and set it up to run when you start your session.
I'm not a Windows user and I don't have any way to test this answer, but hope this solve your problem.
On desktop, right click and "New" > "Shortcut"
When it asks for "What item would you like to create a shortcut for?", enter this: "start-ssh-agent" (with quotation marks included). Then click "Next"
On "What would you like to name the shortcut?" enter any name, for example: autoloadssh.exe (must be an executable). Click "Save"
Copy this shortcut and paste it into your startup folder, located at "C:\Users\[YOUR_USER]\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup"
Reboot your system, and after it loads you should be able to run "cap ... deploy"
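If the shortcut route doesn't work out under Cygwin, a shell-only sketch of the same idea is to put the two commands from your workaround into ~/.bashrc so they run whenever a Cygwin session starts (untested on Windows; paths assumed):
eval "$(ssh-agent -s)" > /dev/null
ssh-add ~/.ssh/id_rsa
Note that this still starts a separate agent per terminal window unless you also persist the agent's environment variables somewhere.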
Take a look at this thread.
I have installed Ansible on one of my GCE Debian VM instances (1). Now I want to connect to another GCE Debian VM instance (2).
I have generated the public key on instance 1 and copied the .pub key manually to the authorized_keys file of instance 2.
But when I try to ssh from 1 to 2, I get permission denied.
Is there any other way around this? I am a little new to this and trying to learn.
Is there any step-by-step guide available? And also, what is the exact IP address to ssh to: will it be the internal IP or the external IP assigned by GCE when the instance is started?
I'm an Ansible user too and I manage a set of compute engine servers. My scenario is pretty close to yours so hopefully this will work for you as well. To get this to work smoothly, you just need to realise that ssh public keys are metadata and can be used to tell GCE to create user accounts on instance creation.
SSH public keys are project-wide metadata
To get what you want the ssh public key should be added to the Metadata section under Compute Engine. My keys look like this:
ssh-rsa AAAAB3<long key sequence shortened>Uxh bob
Every time I get GCE to create an instance, it creates /home/bob and puts the key into the .ssh/authorized_keys section with all of the correct permissions set. This means I can ssh into that server if I have the private key. In my scenario I keep the Private Key only in two places, LastPass and my .ssh directory on my work computer. While I don't recommend it, you could also copy that private key to the .ssh directory on each server that you want to ssh from but I really recommend getting to grips with ssh-agent
Getting it to work with Ansible
The core of this is to tell Ansible to skip host key checking and to connect as the user specified in the key (bob in this example). To do that you need to set some ssh options when calling ansible:
ansible-playbook --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -u bob
Now Ansible will connect to the servers mentioned in your playbook and try to use the local private key to negotiate the ssh connection which should work as GCE will have set things up for you when the VM is created. Also, since hostname checking is off, you can rebuild the VM as often as you like.
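If you'd rather not pass those flags on every run, the same settings can live in an ansible.cfg next to your playbook, roughly like this (a sketch; adjust the user name to whatever is in your key's comment):
[defaults]
remote_user = bob
host_key_checking = False
Ansible picks this file up automatically, so a plain ansible-playbook invocation then behaves like the command above.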
Saying it again
I really recommend that you run ansible from a small number of secure computers and not put your private key onto cloud servers. If you really need to ssh between servers, look into how ssh-agent passes identity around. A good place to start is this article.
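As a small illustration of that identity passing (agent forwarding), a hypothetical host entry in ~/.ssh/config on your work computer could look like:
Host staging-box
  HostName 203.0.113.5
  ForwardAgent yes
With ForwardAgent enabled, an ssh session from your workstation carries your agent along, so the remote machine can authenticate onward without ever holding your private key.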
Where did you say the metadata was?
I kind of glossed over that bit but here's an image to get you started.
From there you just follow the options for adding a public key. Don't forget that this works because the third part of the key is the username that you want GCE and Ansible to use when running plays.
It's quite simple: if you have two instances in Google Cloud Platform, you automatically have the guest environment installed (the gcloud command line), and with it you can ssh to any instance inside your project.
Just run the following command from inside instance A to reach instance B:
[user@Instance(1)]$ gcloud compute ssh Instance(2) --zone [zone]
That's it. If it's not working let me know, and verify that your firewall rules allow internal traffic.
We have a 'master' Mercurial server on our network that we use for a local staging box. Our team does all of our pushes and pulls to/from this one box. I'm having trouble with the implementation I'm using, but I'm also second guessing whether what I want to do is even a good idea...
We also want to start using BitBucket, but only as a secondary server. I'd like to use a hook to automatically push to Bitbucket, but I can't get it working right...
Here's the HGRC from the 'master' repo:
[hooks]
changegroup =
changegroup.update = hg update
changegroup.bitbucket = hg push ssh://hg@bitbucket.org/account/repo
If I manually fire off the above push, everything works perfectly. However, as a hook it fails:
warning: changegroup.bitbucket hook exited with status 255
I followed this guide to get SSH working: Set up SSH for Git and Mercurial on Mac OSX/Linux
I get my keys generated, I run ssh-agent, and I ssh-add the key. But ssh-agent doesn't seem to be doing anything, and as soon as I exit the SSH session it seems to leave memory. Additionally, when I test it out with ssh -Tv hg@bitbucket.org it prompts me for my password. I thought the whole point of this was for it not to do that?
But taking a step back, maybe this is a terrible idea to begin with. If I give my public key to Bitbucket, wouldn't that theoretically mean that if someone got hold of it, they could SSH into my box without a password?
And if so, what alternative do I have for forwarding commits to Bitbucket? I'd rather not use HTTPS because it would require putting our Bitbucket password as plain text in the .hg/hgrc file...
Maybe there's some more obvious way to do this that I'm missing? For the developers, I'd rather keep things the way they are now (everyone pushes to master) instead of reconfiguring everyone's developer box to have a private key and push to Bitbucket instead...
As always, thanks for any help you guys can provide.
Woah, there are a lot of questions there. I'll hit a few of 'em:
But ssh-agent doesn't seem to be doing anything, and as soon as I exit the SSH session it seems to leave memory.
You're correct. ssh-agent is for interactive sessions, not for automation. In most setups it's killed when you log out, but even if that weren't the case it wouldn't work as you imagine, because when someone does that hg push they're running a new, non-interactive session that wouldn't have access to the ssh-agent anyway.
Additionally, when I test it out with ssh -Tv hg@bitbucket.org it prompts me for my password.
Testing it out like that isn't valid. That's saying "I want to log into an interactive session at Bitbucket with the username hg", but that's not what they authorize you to do. If you send them your public key, they let you log in as the user hg only for the purpose of running non-interactive hg commands.
If I give my public key to Bitbucket, wouldn't that theoretically mean that if someone got hold of it, they could SSH into my box without a password?
No, public keys are meant to be public -- you can list anyone's on GitHub, for example. The public key just says "anyone who has the private key that matches this is authorized to...", so any site that wants your private key is run by crooks, but any site that wants your public key is just offering you a way to use something better than a password.
One thing you may be missing about hooks is "who" the hook runs as. When people push to your "centralish" repo over ssh, the hook runs as their unix user, and if they push over http, the hook runs as the web server's user.
If you had:
a private ssh key with no password on it
the public key matching that private key setup on bitbucket
the unix user running the hook using that private key for access to bitbucket.org
then what you're trying to do would work.
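A rough sketch of that setup, assuming a hypothetical www-data user running the hook, with its home at /var/www (adjust both to your server):
sudo -u www-data mkdir -p /var/www/.ssh
sudo -u www-data ssh-keygen -t rsa -N "" -f /var/www/.ssh/id_rsa_bitbucket
Upload the resulting id_rsa_bitbucket.pub to the Bitbucket account, then point ssh at the key in /var/www/.ssh/config:
Host bitbucket.org
  IdentityFile /var/www/.ssh/id_rsa_bitbucket
With that in place, the changegroup.bitbucket hook can push without any interactive prompt.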
Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However, these SSH calls fail with host verification exceptions because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured, and therefore can't generate the required public key to place on the target server.
Any suggestions for any of the following:
A way to force Jenkins to use a user I define
A way to enable SSH for the default Jenkins user
A way to fetch the password for the default 'jenkins' user
Ideally I would like to be able to do all of the above; any help greatly appreciated.
Solution: I was able to access the default Jenkins user with an SSH request from the target server. Once I was logged in as the jenkins user, I was able to generate the public/private RSA keys, which then allowed for password-free access between the servers.
Because with numerous slave machines it can be hard to anticipate which of them a build will be executed on, rather than explicitly calling ssh I highly suggest using the existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your Jenkins instance (master or slave). Depending on your installation, this user may have been generated either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out which user your Jenkins instance runs as, open http://$JENKINS_SERVER/systemInfo, available from your Manage Jenkins menu.
There you will find your user.home and user.name. E.g. in my case on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information you will need to log onto that jenkins server as the user running jenkins and ssh into those remote servers to accept the ssh fingerprints.
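Concretely, on a typical Linux master that would look something like this (the user name and target host are placeholders):
sudo su -s /bin/bash jenkins
ssh your_remote_server
Answer "yes" once at the fingerprint prompt and it gets stored in that user's ~/.ssh/known_hosts for all later builds.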
An alternative (that I've never tried) would be to use a custom Jenkins job to accept those fingerprints, for example by running the following command in an SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
just fill the dots from ssh-keyscan -t rsa {ip}, after you verify it.
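For example, with a placeholder IP, the dots come from the output of:
ssh-keyscan -t rsa 203.0.113.10
Paste the verified line into the echo above (or append with >> instead of > if the known_hosts file already has other entries you want to keep).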
That's correct, pipeline jobs will normally run as the user jenkins, which means that SSH access needs to be set up for this account for it to work in the pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each individual configuration could be different, so check under "System Information" or similar, in "Manage Jenkins" on the web UI. There should be a user.home and a user.name for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as the user jenkins, in our case. Because this is an auto-generated service account, the shell is not enabled by default. Assuming you can log in as root, or preferably some other user (in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that you entered the right home directory:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may only be keys created with the hostname. See if you can use them:
ssh-copy-id user#host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and no passphrase; if it prompts for a location, put the new keys under the home directory without overwriting the existing ones. Now you can run ssh-copy-id again.
It's a good idea to test it with something like
ssh user#host_ip_address ls
If it works, so should ssh, scp, rsync etc. in the Jenkins jobs. Otherwise, check the console output to see the error messages and try those exact commands on the shell as done above.