I understand that when running an SSH command with public key authentication, the client will try all the SSH keys it knows about until the host accepts one (https://security.stackexchange.com/questions/182804/how-does-ssh-know-which-public-key-to-use-from-authorized-keys).
When running an Ansible command on a host using SSH there does not seem to be this capability: Ansible requires an SSH private key file to be specified explicitly in ansible.cfg:
private_key_file = /user/.ssh/id_rsa_mykey
In my use case, Ansible is running inside a docker container on Lando. All SSH keys are imported from the user's ssh config directory to a known path in the container. However, I don't necessarily know the name of the one that's needed by Ansible because this is something individual users configure.
Is there a way to make SSH commands issued by Ansible try multiple keys like SSH is designed to do?
"Ansible requires an SSH private key file to be specified explicitly in ansible.cfg"
Ansible does not require that you provide a private key file in your ansible.cfg. Since ansible is just calling out to ssh, the preferred place to configure connection credentials is in your ~/.ssh/config file. There, you can configure multiple host-specific keys:
Host host1
IdentityFile ~/.ssh/key-for-host1
Host host2
IdentityFile ~/.ssh/key-for-host2
Or you can configure it to try multiple keys in sequence:
Host *.example.com
IdentityFile ~/.ssh/maybe-this-one
IdentityFile ~/.ssh/okay-how-about-this-instead
And of course ssh will use any keys present in your ssh agent.
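In the container scenario from the question, where key names aren't known in advance, a rough sketch (assuming the imported keys live under ~/.ssh inside the container) is to load everything into an agent before invoking Ansible:
eval "$(ssh-agent -s)"
# add every private key in the directory; skip public halves and non-key files
for key in ~/.ssh/*; do
    case "$key" in
        *.pub|*config*|*known_hosts*) ;;
        *) ssh-add "$key" 2>/dev/null ;;
    esac
done
ssh will then offer each loaded key in turn, exactly as with multiple IdentityFile entries.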
Here is my situation. I want to access a server through a jumpbox/bastion host.
So I log in as a normal user on the jumpbox, switch to root, and then log in to the remote server as root. I don't have direct access to root on the jumpbox.
$ ssh user@jumpbox
user@jumpbox:~$ su - root
Enter Password:
root@jumpbox:~# ssh root@remoteserver
Enter Password:
root@remoteserver:~#
Above is the manual workflow. I want to achieve this in ansible.
I have seen something like this.
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@jumpbox"'
This does not work when we need to switch to root and then log in to the remote server.
There are a few things to unpack here:
General Design / Issue:
This isn't an Ansible issue, it's an ssh issue/proxy misconfiguration.
A bastion host/ssh proxy isn't meant to be logged into and have commands run directly on it interactively (like su - root, enter password, then ssh...). That's not really a bastion; that's just a server you're logging into and running commands on. It isn't filling an actual ssh proxy/bastion/jump role. At that point you might as well just run Ansible on that host.
That's why things like ProxyJump and ProxyCommand aren't working. They are designed to work with ssh proxies that are configured as ssh proxies (bastions).
Running Ansible Tasks as Root:
Ansible can run with sudo during task execution (it's called "become" in Ansible lingo), so you should never need to SSH as the literal root user with Ansible (you shouldn't really ever ssh as root).
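As a minimal sketch (the host and task are illustrative), a play that connects as an unprivileged user and escalates with become looks like this:
- hosts: remoteserver
  become: true              # escalate with sudo during task execution
  tasks:
    - name: confirm the task runs as root
      ansible.builtin.command: whoami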
Answering the question:
There are a lot of workarounds for this, but the straightforward answer here is to configure the jump host as a proper bastion and your issue will go away. An example...
As the bastion "user", create an ssh key pair, or use an existing one.
On the bastion, edit the user's ~/.ssh/config file to access the target server with the private key and desired user.
EXAMPLE user@bastion's ~/.ssh/config (I cringe seeing root here)...
Host remote-server
User root
IdentityFile ~/.ssh/my-private-key
Add the public key created in step 1 to the target server's ~/.ssh/authorized_keys file for the user you're logging in as.
After that type of config, your jump host works as a regular ssh proxy. You can then use ProxyCommand or ProxyJump as you originally tried, without issue.
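For instance, once the bastion acts as a real proxy, the variable from the question should work as-is (host names are examples; ProxyJump needs a reasonably recent OpenSSH):
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@jumpbox"'
# or, more simply:
ansible_ssh_common_args: '-o ProxyJump=user@jumpbox'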
I had been using VSCode's remote-ssh to access my virtual machines running on Google Cloud. This had been working perfectly fine until I made a snapshot of my most recent instance and created a new instance out of it on a larger VM. Now when I try to connect (through any method) I get: "Permission denied (publickey).". I have spent countless hours deleting, re-adding, and recreating my ssh keys to no avail. Before, I simply ran "gcloud compute config-ssh" and this created a working config file, but now this no longer works. Please help; I have tried everything and there is simply no way for me to ssh. On the website I can click the ssh button to open up their shell, but I cannot do it from my terminal.
The problem may be that your SSH private key is not being identified during the connection in VSCode. You can specify your private key by adding the IdentityFile option pointing to it in your host entries in the SSH configuration file:
Host vm_name
HostName external_ip
IdentityFile /path/to/ssh_private_key
Port port_number
Here is the long story, if you or someone else needs more information.
You can start from scratch to make sure that compromised SSH keys are not the origin of the problem.
Create SSH Key
First, create new ssh keys. On the computer that you will use to access your remote host (that is, the Google VM instance), open your terminal or cmd and go to the ssh folder to generate the keys.
My ssh config and keys are under my user directory, /home/my_user/.ssh on Linux or C:\Users\my_user\.ssh on Windows.
Then cd to one of these paths, depending on which operating system you are using at the moment.
Linux:
cd /home/my_user/.ssh
Windows:
cd C:\Users\my_user\.ssh
Command to generate SSH key
ssh-keygen -t rsa -f my_ssh_key -C user
my_ssh_key: the name of your key; use whatever name identifies it best.
user: must be the user that you want to use to connect to your Google VM instance.
This will generate a private key named my_ssh_key and a public key named my_ssh_key.pub.
Alternatively, stay in any directory of the operating system and pass the absolute path where the keys should be generated:
Linux:
ssh-keygen -t rsa -f /home/my_user/.ssh/my_ssh_key -C user
Windows:
ssh-keygen -t rsa -f C:\Users\my_user\.ssh\my_ssh_key -C user
Copy the public key into your Google Cloud VM's authorized_keys file
/home/my_user/.ssh/authorized_keys
Note: do not overwrite any public key that already exists; just append to the authorized_keys file.
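For example, from a session that still works (such as the browser-based SSH in the Cloud Console), appending rather than overwriting looks like this (key material abbreviated):
# paste the full contents of my_ssh_key.pub in place of the placeholder
echo "ssh-rsa AAAA...rest-of-public-key... user" >> ~/.ssh/authorized_keys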
Add new ssh Host entry for remote connection
Click on the Remote SSH manager (the icon at the bottom right of VS Code), click on the Remote SSH: Open Configuration File option, and choose your ssh configuration file to add another SSH entry for a remote connection.
The config file must be under the SSH directory, the same path used in the SSH key generation step.
Linux:
/home/my_user/.ssh/config
Windows:
C:\Users\my_user\.ssh\config
To add another host, write the following, making the appropriate changes:
Host vm_name
HostName external_ip
IdentityFile /path/to/ssh_private_key
Port port_number
vm_name: an alias used to connect conveniently with the ssh command; it can be whatever you want.
external_ip: the external IP of your Google VM instance; you can get it in the VM instances panel at https://console.cloud.google.com/
IdentityFile: the path to your private SSH key, i.e. the generated file that does not have the .pub extension.
Linux:
/home/my_user/.ssh/my_ssh_key
Windows:
C:\Users\my_user\.ssh\my_ssh_key
Port: the port number used by SSH on your Google VM instance; 22 is the default port.
Now just choose this host to connect to your Google VM instance.
For more details about SSH settings on Google Cloud Platform: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#linux-and-macos_1
I have enabled two factor authentication for ssh using duosecurity (using this playbook https://github.com/CoffeeAndCode/ansible-duo ).
How can I use ansible to manage the server now? The SSH calls fail at gathering facts because of this. I want the person running the playbook to enter the two-factor code before the playbook is run.
Disabling two-factor for the deployment user is a possible solution, but it creates a security issue which I would like to avoid.
It's a hack, but you can tunnel a non-2fac Ansible SSH connection through a 2fac-enabled SSH connection.
Overview
We will set up two users: ansible will be the user Ansible uses. It should be authenticated in a way that's supported by Ansible (i.e., not 2fac). This user will be restricted so it cannot connect from anywhere but 127.0.0.1, so it is not accessible from outside the machine.
The second user, ansible_tunnel will be open to the outside world, but will be authenticated by two factors, and will only allow tunneling of SSH connections to the local machine.
You must be able to configure 2-factor authentication only for some users (not all).
Some info on SSH tunnels.
On the target machine:
Create two users: ansible and ansible_tunnel
Put your public key in ~/.ssh/authorized_keys of both users
Set the shell of ansible_tunnel to /bin/false, or lock the user - it will be used for tunneling exclusively, not running commands
Add the following to /etc/ssh/sshd_config:
AllowTcpForwarding no
AllowUsers ansible@127.0.0.1 ansible_tunnel
Match User ansible_tunnel
AllowTcpForwarding yes
PermitOpen 127.0.0.1:22
ForceCommand echo 'This account can only be used for tunneling SSH sessions'
Setup 2-factor authentication only for ansible_tunnel
Restart sshd
On the machine running Ansible:
Before running Ansible, run the following (on the Ansible machine, not the target):
ssh -N -L 8022:127.0.0.1:22 ansible_tunnel@<host>
You will be authenticated using two factors.
Once the tunnel is up (check with netstat), run Ansible with ansible_ssh_user=ansible, ansible_ssh_port=8022 and ansible_ssh_host=localhost.
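A minimal inventory line under these assumptions (the alias target is illustrative) would be:
target ansible_ssh_host=127.0.0.1 ansible_ssh_port=8022 ansible_ssh_user=ansible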
Recap
Only ansible_tunnel can connect from the outside, and it will be authenticated using two factors
Once the tunnel is set up, connecting to port 8022 on the local machine is the same as connecting to sshd on the remote machine
We're allowing ansible to connect over SSH only when it is done through the localhost, so only connections that are tunneled are allowed
Scale
This will not scale well for multiple servers, due to the need to open a separate tunnel for each machine, which requires manual action. However, if you've chosen 2-factor authentication for your servers, you're already willing to do some manual action to connect to each server, and this solution only adds a little overhead with some script-wrapping.
[EDITED TO ADD]
Bonus
For convenience, we may want to log into the maintenance account directly to do some manual work, without going through the process of setting up a tunnel. We can configure SSH to require 2fac authentication in this case, while maintaining the ability to connect without 2fac through the tunnel:
# All users must authenticate using two factors
AuthenticationMethods publickey,keyboard-interactive
# Allow both maintenance user and tunnel user with no restrictions
AllowUsers ansible ansible_tunnel
# The maintenance user is allowed to authenticate using a single factor only
# when connecting from a local address - it should be impossible to connect to
# this user using a single factor from the outside (the only way to do that is
# having an existing access to the machine, or use the two-factor tunnel)
Match User ansible Address 127.0.0.1
AuthenticationMethods publickey
I can use ansible with ssh and 2FA using the ControlMaster feature of ssh and ansible.
My local ssh client is configured to dump a ControlPath socket for multiplexing connection. Ansible is configured to use the same socket.
Local ssh client
This configuration enables multiplexing for all connections. I personally store this configuration in ~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p.socket
ControlPersist 1m
When a connection is established, a socket appears in the $HOME/.ssh directory. This socket persists for one minute after disconnection.
Configure ansible
Ansible is configured to re-use the local socket.
Add this in your ansible configuration file (for instance, ~/.ansible.cfg):
[ssh_connection]
control_path=~/.ssh/master-%%r@%%h:%%p.socket
Note the double % for variable substitution.
Usage
Connect to your server using a regular ssh command (ssh user@server), and perform 2FA;
Launch your ansible command as usual.
Step 2 must be performed within the ControlPersist window; alternatively, keep an ssh connection open in one terminal while you launch the ansible command in another.
You can also force the connection to close when you no longer need it, using: ssh -O exit user@server.
Note that if you open a third terminal and run ssh user@server, you will not be asked for credentials: the connection established in step 1 will be re-used.
Drawbacks
In case of bad network conditions
Sometimes, when you lose the connection, the socket persists, and every further connection hangs. You must manually close the stale connection, using ssh -O exit user@server. This is the only known drawback of this method.
References:
Ansible parameter ANSIBLE_SSH_CONTROL_PATH
About ssh multiplexing (an old blog post that introduced me to it: https://blog.scottlowe.org/2015/12/11/using-ssh-multiplexing/)
Solution using a Bastion Host
Even using an ssh bastion host, it took me quite a while to get this working. In case it helps anyone else, here's what I came up with. It uses the ControlMaster ssh config options, and since ansible uses regular ssh, it can be configured to use the same ssh features and re-use the connection to the bastion host regardless of how many connections it opens to remote hosts. I've seen these Control options recommended in general (presumably for performance reasons if you have a lot of hosts), but not in the context of 2FA to a bastion host.
With this approach you don't need any sshd config changes, so you'll want AuthenticationMethods publickey,keyboard-interactive as the only authentication-method setting on the bastion server, and publickey only for all the other servers that you're reaching through the bastion. Since the bastion host is the only one that accepts external connections from the internet, it's the only one that requires 2FA; internal hosts rely on agent forwarding for public key authentication and don't use 2FA.
On the client, I created a new ssh config file for my ansible environment in the top-level directory that I run ansible from (so sibling of ansible.cfg) called ssh.config. It contains:
Host bastion-persistent-connection
HostName <bastion host>
ForwardAgent yes
IdentityFile ~/.ssh/my-key
ControlMaster auto
ControlPath ~/.ssh/ansible-%r@%h:%p
ControlPersist 10m
Host 10.0.*.*
ProxyCommand ssh -W %h:%p bastion-persistent-connection -F ./ssh.config
IdentityFile ~/.ssh/my-key
Then in ansible.cfg I have:
[ssh_connection]
ssh_args = -F ./ssh.config
A few things to note:
My private subnet in this case is 10.0.0.0/16 which maps to the host wildcard option above. The bastion proxies all ssh connections to servers on this subnet.
This is a bit brittle in that I can only run my ssh or ansible commands in this directory, because the ProxyCommand passes the local path to this config file. Unfortunately, I don't think there's an ssh variable that expands to the current config file, so that I could pass the same config file to the ProxyCommand automatically. Depending on your environment, it might be better to use an absolute path for this.
The one gotcha is it makes running ansible more complex. Unfortunately, from what I can tell ansible has no support whatsoever for 2FA. So if you have no existing ssh connection to the bastion, ansible will print out Verification code: once for every private server it's connecting to, but it's not actually listening for the input so no matter what you do the connections will fail.
So I first run: ssh -F ssh.config bastion-persistent-connection
This creates the socket file in ~/.ssh/ansible-*, and the ssh client will close & remove that socket after the configured idle time (which I have set to 10m).
Once the socket is open I can run ansible commands like normal, e.g. ansible all -m ping and they succeed.
I manually deploy websites through SSH; I manage source code in GitHub/Bitbucket. For every new site I'm currently generating a new keypair on the server and adding it to GitHub/Bitbucket, so that I can pull changes from the server.
I came across a feature in capistrano to use local machine's key pair for pulling updates to server, which is ssh_options[:forward_agent] = true
How can I do something like this and forward my local machine's keypair to the server I'm SSH-ing into, so that I can avoid adding keys to GitHub/Bitbucket for every new site?
This turned out to be very simple; a complete guide is here: Using SSH Forwarding.
In essence, you need to create a ~/.ssh/config file, if it doesn't exist.
Then, add the hosts (either domain name or IP address) in the file and set ForwardAgent yes.
Sample Code:
Host example.com
ForwardAgent yes
Makes SSH life a lot easier.
Create ~/.ssh/config
Fill it with the following (the host address is the address of the host you want to allow credentials to be forwarded to):
Host [host address]
ForwardAgent yes
If you haven't already run ssh-agent, run it:
ssh-agent
Take the output from that command and paste it into the terminal. This will set the environment variables that need to be set for agent forwarding to work. Optionally, you can replace this and step 3 with:
eval "$(ssh-agent)"
Add the key you want forwarded to the ssh agent:
ssh-add [path to key if there is one]/[key_name].pem
Log into the remote host:
ssh -A [user]@[hostname]
From here, if you log into another host that accepts that key, it will just work:
ssh [user]@[hostname]
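To verify that forwarding is active, you can list the identities visible on the remote host (a quick check using standard agent tooling):
# run on the remote host: lists forwarded keys if -A worked
ssh-add -l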
To use it simply with the default identity (id_rsa) you can use the following couple of commands:
ssh-add
ssh -A [username]@[server-address]
The configuration file is very helpful, but the trick for agent forwarding is the ssh-add command. It seems that this has to be triggered once before any remote connections, or again after a restart of the computer. To permanently add the key, try the following solution from the user daminetreg:
Add private key permanently with ssh-add on Ubuntu
It is very useful:
ssh -i [private-key] -A [user]@[host]
You can set up this command in ~/.bash_aliases or another shell startup file.
I want to use multiple private keys to connect to different servers or different portions of the same server (my uses are system administration of the server, administration of Git, and normal Git usage within the same server). I tried simply stacking the keys in the id_rsa files to no avail.
Apparently a straightforward way to do this is to use the command
ssh -i <key location> login@server.example.com
That is quite cumbersome.
Any suggestions as to how to go about doing this a bit easier?
From my .ssh/config:
Host myshortname realname.example.com
HostName realname.example.com
IdentityFile ~/.ssh/realname_rsa # private key for realname
User remoteusername
Host myother realname2.example.org
HostName realname2.example.org
IdentityFile ~/.ssh/realname2_rsa # different private key for realname2
User remoteusername
Then you can use the following to connect:
ssh myshortname
ssh myother
And so on.
You can instruct ssh to try multiple keys in succession when connecting. Here's how:
$ cat ~/.ssh/config
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/id_rsa_old
IdentityFile ~/.ssh/id_ed25519
# ... and so on
$ ssh server.example.com -v
....
debug1: Next authentication method: publickey
debug1: Trying private key: /home/example/.ssh/id_rsa
debug1: read PEM private key done: type RSA
debug1: Authentications that can continue: publickey
debug1: Trying private key: /home/example/.ssh/id_rsa_old
debug1: read PEM private key done: type RSA
....
[server ~]$
This way you don't have to specify what key works with which server. It'll just use the first working key.
Also, you will only be asked for a passphrase if a given server is willing to accept the key. As seen above, ssh didn't ask for the passphrase of .ssh/id_rsa even though it has one.
Surely it doesn't beat a per-server configuration as in other answers, but at least you won't have to add a configuration for each and every server you connect to!
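One caveat worth adding (standard OpenSSH behavior, not from the original answer): with many keys listed, a server may reject you with "Too many authentication failures" before the right key is tried. A per-host override, such as the sketch below with an example hostname, restricts which keys are offered:
Host fussy.example.com
IdentityFile ~/.ssh/id_rsa_old
IdentitiesOnly yes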
The answer from Randal Schwartz almost helped me all the way.
I have a different username on the server, so I had to add the User keyword to my file:
Host friendly-name
HostName long.and.cumbersome.server.name
IdentityFile ~/.ssh/private_ssh_file
User username-on-remote-machine
Now you can connect using the friendly-name:
ssh friendly-name
More keywords can be found on the OpenSSH man page. NOTE: Some of the keywords listed might already be present in your /etc/ssh/ssh_config file.
The previous answers have properly explained the way to create a configuration file to manage multiple ssh keys. I think the important thing that also needs to be explained is the replacement of a host name with an alias while cloning the repository.
Suppose your company's GitHub account's username is abc1234.
And suppose your personal GitHub account's username is jack1234.
And suppose you have created two RSA keys, namely id_rsa_company and id_rsa_personal. Your configuration file will then look like below:
# Company account
Host company
HostName github.com
PreferredAuthentications publickey
IdentityFile ~/.ssh/id_rsa_company
# Personal account
Host personal
HostName github.com
PreferredAuthentications publickey
IdentityFile ~/.ssh/id_rsa_personal
Now, when you are cloning the repository (named demo) from the company's GitHub account, the repository URL will be something like:
Repo URL: git@github.com:abc1234/demo.git
Now, while doing git clone, you should modify the above repository URL as:
git@company:abc1234/demo.git
Notice how github.com is now replaced with the alias "company" as we have defined in the configuration file.
Similarly, you have to modify the clone URL of the repository in the personal account, depending upon the alias provided in the configuration file.
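If a repository is already cloned, you can retarget its remote instead of re-cloning (URLs follow the example above):
git remote set-url origin git@company:abc1234/demo.git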
ssh-add ~/.ssh/xxx_id_rsa
Make sure you test it before adding with:
ssh -i ~/.ssh/xxx_id_rsa username@example.com
If you have any problems with errors, sometimes changing the permissions of the file helps:
chmod 0600 ~/.ssh/xxx_id_rsa
Generate an SSH key:
$ ssh-keygen -t rsa -C <email1@example.com>
Generate another SSH key:
$ ssh-keygen -t rsa -f ~/.ssh/accountB -C <email2@example.com>
Now, two public keys (id_rsa.pub, accountB.pub) should exist in the ~/.ssh/ directory.
$ ls -l ~/.ssh # see the files of '~/.ssh/' directory
Create configuration file ~/.ssh/config with the following contents:
$ nano ~/.ssh/config
Host bitbucket.org
User git
Hostname bitbucket.org
PreferredAuthentications publickey
IdentityFile ~/.ssh/id_rsa
Host bitbucket-accountB
User git
Hostname bitbucket.org
PreferredAuthentications publickey
IdentitiesOnly yes
IdentityFile ~/.ssh/accountB
Clone from default account.
$ git clone git@bitbucket.org:username/project.git
Clone from the accountB account.
$ git clone git@bitbucket-accountB:username/project.git
Note: Because of the User git directive, you can omit the git@ portion of the repo URL, shortening your clone command like so:
$ git clone bitbucket-accountB:username/project.git
This is the only purpose of that directive. If you don't need it (e.g. you always copy-paste the git clone command from the website), you can leave it out of the config.
See More Here
I would agree with Tuomas about using ssh-agent. I also wanted to add a second private key for work and this tutorial worked like a charm for me.
Steps are as below:
$ ssh-agent bash
$ ssh-add /path/to/private/key (e.g. ssh-add ~/.ssh/id_rsa)
Verify by $ ssh-add -l
Test it with $ ssh -v <host url>, e.g. ssh -v git@assembla.com
Now, with recent versions of Git, we can specify sshCommand in the repository-specific Git configuration file:
[core]
repositoryformatversion = 0
filemode = true
bare = false
logallrefupdates = true
sshCommand = ssh -i ~/.ssh/id_rsa_user
[remote "origin"]
url = git@bitbucket.org:user/repo.git
fetch = +refs/heads/*:refs/remotes/origin/*
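Equivalently, you can set this from inside the repository without editing the file by hand (core.sshCommand is available in Git 2.10 and later):
git config core.sshCommand "ssh -i ~/.ssh/id_rsa_user"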
For me on macOS, the only working solution was to simply add this to the file ~/.ssh/config:
Host *
IdentityFile ~/.ssh/your_ssh_key
IdentityFile ~/.ssh/your_ssh_key2
IdentityFile ~/.ssh/your_ssh_key3
AddKeysToAgent yes
your_ssh_key is the private key file, without any extension. Don't use the .pub file.
I ran into this issue a while back, when I had two Bitbucket accounts and had to store separate SSH keys for both. This is what worked for me.
I created two separate ssh configurations as follows.
Host personal.bitbucket.org
HostName bitbucket.org
User git
IdentityFile /Users/username/.ssh/personal
Host work.bitbucket.org
HostName bitbucket.org
User git
IdentityFile /Users/username/.ssh/work
Now when I had to clone a repository from my work account - the command was as follows.
git clone git@bitbucket.org:teamname/project.git
I had to modify this command to:
git clone git@work.bitbucket.org:teamname/project.git
Similarly the clone command from my personal account had to be modified to
git clone git@personal.bitbucket.org:name/personalproject.git
Refer to this link for more information.
Use ssh-agent for your keys.
Here is the solution that I used, inspired by sajib-khan's answer. The default configuration is not explicitly set; it is my personal account on GitLab, and the other host specified is my company account. Here is what I did:
Generate the SSH key
ssh-keygen -t rsa -f ~/.ssh/company -C "name.surname@company.com"
Edit the SSH configuration
nano ~/.ssh/config
Host company.gitlab.com
HostName gitlab.com
PreferredAuthentications publickey
IdentityFile ~/.ssh/company
Delete the cached SSH key(s)
ssh-add -D
Test it!
ssh -T git@company.gitlab.com
Welcome to GitLab, @hugo.sohm!
ssh -T git@gitlab.com
Welcome to GitLab, @HugoSohm!
Use it!
Company account
git clone git@company.gitlab.com:group/project.git
Personal/default account
git clone git@gitlab.com:username/project.git
Here is the source that I used.
For those who are working with AWS, I would highly recommend working with EC2 Instance Connect.
Amazon EC2 Instance Connect provides a simple and secure way to connect to your instances using Secure Shell (SSH).
With EC2 Instance Connect, you use AWS Identity and Access Management (IAM) policies and principals to control SSH access to your instances, removing the need to share and manage SSH keys.
After installing the relevant package (pip install ec2instanceconnectcli, or cloning the repo directly), you can connect very easily to multiple EC2 instances by just changing the instance id:
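For example (the instance id is illustrative; mssh is the wrapper command installed by the ec2instanceconnectcli package):
mssh i-0123456789abcdef0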
What is happening behind the scenes?
When you connect to an instance using EC2 Instance Connect, the Instance Connect API pushes a one-time-use SSH public key to the instance metadata where it remains for 60 seconds. An IAM policy attached to your IAM user authorizes your IAM user to push the public key to the instance metadata.
The SSH daemon uses AuthorizedKeysCommand and AuthorizedKeysCommandUser, which are configured when Instance Connect is installed, to look up the public key from the instance metadata for authentication, and connects you to the instance.
(*) Amazon Linux 2 2.0.20190618 or later and Ubuntu 20.04 or later come preconfigured with EC2 Instance Connect.
For other supported Linux distributions, you must set up Instance Connect for every instance that will support using Instance Connect. This is a one-time requirement for each instance.
Links:
Set up EC2 Instance Connect
Connect using EC2 Instance Connect
Securing your bastion hosts with Amazon EC2 Instance Connect
You can create a configuration file named config in your ~/.ssh folder. It can contain:
Host aws
HostName *yourip*
User *youruser*
IdentityFile *idFile*
This will allow you to connect to the machine like this:
ssh aws
As mentioned on an Atlassian blog page,
generate a config file within the .ssh folder, including the following text:
#user1 account
Host bitbucket.org-user1
HostName bitbucket.org
User git
IdentityFile ~/.ssh/user1
IdentitiesOnly yes
#user2 account
Host bitbucket.org-user2
HostName bitbucket.org
User git
IdentityFile ~/.ssh/user2
IdentitiesOnly yes
Then you can simply check out using the suffixed host, and within each project you can configure the author name, etc. locally.
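For example (repository path and identity values are illustrative):
git clone git@bitbucket.org-user1:team/repo.git
cd repo
git config user.name "User One"
git config user.email "user1@example.com"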
Multiple key pairs on GitHub
1.0 SSH configuration file
1.1 Create ~/.ssh/config
1.2 chmod 600 ~/.ssh/config (must)
1.3 Input the following into the file:
Host pizza
HostName github.com
PreferredAuthentications publickey # optional
IdentityFile ~/.ssh/privatekey1
Case A: Fresh new Git clone
Use this command to Git clone:
$ git clone git@pizza:yourgitusername/pizzahut_repo.git
Note: If you want to change the host name “pizza” in .ssh/config in the future, go into the cloned folder and edit the URL line of the .git/config file (see case B).
Case B: Already have Git clone folder
2.1 Go to the cloned folder, and then go into the .git folder
2.2 Edit configuration file
2.3 Update the URL from old to new:
(Old) URL = git@github.com:yourgitusername/pizzahut_repo.git
(New) URL = git@pizza:yourgitusername/pizzahut_repo.git
IMPORTANT: You must start ssh-agent
You must start ssh-agent (if it is not running already) before using ssh-add as follows:
eval `ssh-agent -s` # start the agent
ssh-add id_rsa_2 # Where id_rsa_2 is your new private key file
Note that the eval command starts the agent on Git Bash on Windows. Other environments may use a variant to start the SSH agent.
On Ubuntu 18.04 (Bionic Beaver) there is nothing to do.
After you have successfully created a second SSH key, the system will try to find a matching SSH key for each connection.
Just to be clear you can create a new key with these commands:
# Generate key make sure you give it a new name (id_rsa_server2)
ssh-keygen
# Make sure ssh agent is running
eval `ssh-agent`
# Add the new key
ssh-add ~/.ssh/id_rsa_server2
# Get the public key to add it to a remote system for authentication
cat ~/.ssh/id_rsa_server2.pub
I love the approach of setting the following in the file ~/.ssh/config:
# Configuration for GitHub to support multiple GitHub keys
Host github.com
HostName github.com
User git
# UseKeychain adds each keys passphrase to the keychain so you
# don't have to enter the passphrase each time.
UseKeychain yes
# AddKeysToAgent would add the key to the agent whenever it is
# used, which might lead to debugging confusion since then
# sometimes the one repository works and sometimes the
# other depending on which key is used first.
# AddKeysToAgent yes
# I only use my private id file so all private
# repositories don't need the environment variable
# `GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa"` to be set.
IdentityFile ~/.ssh/id_rsa
Then in your repository you can create a .env file which contains the ssh command to be used:
GIT_SSH_COMMAND="ssh -i ~/.ssh/your_ssh_key"
If you then use e.g. dotenv, the environment variable is exported automatically and whoop whoop, you can specify the key you want per project/directory. The passphrase is asked for only once since it is added to the keychain.
This solution works perfectly with Git and is designed to work on a Mac (due to UseKeychain).
On CentOS 6.5 running OpenSSH_5.3p1 and OpenSSL 1.0.1e-fips, I solved the problem by renaming my key files so that none of them had the default name.
My .ssh directory contains id_rsa_foo and id_rsa_bar, but no id_rsa, etc.
You can try this sshmulti npm package for maintaining multiple SSH keys.