known_hosts has two hosts - ssh

I'm trying to delete a known_hosts entry with ssh-keygen -R; however, I have two hosts on one line, like this: [slsapp.com]:1234,[108.163.203.146]:1234. Should I just go in and do it by hand?

Your client is clever and stores both the hostname and the IP address that the hostname resolves to (via DNS), which saves an extra verification prompt if you later connect to the IP instead of the hostname.
If you changed the host keys on the server, you probably want to remove both entries, since they point to the same machine.
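For example, a minimal sketch (the quotes keep the shell from interpreting the brackets, and the bracketed [host]:port form must match the known_hosts entry exactly; running -R for each name is harmless even if the first call already removed the shared line):
ssh-keygen -R '[slsapp.com]:1234'
ssh-keygen -R '[108.163.203.146]:1234'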

Related

SSH: Can I always trust a remote server with dynamic IP?

I have a server that I need to connect to sometimes. It has a public IP, but the IP is renewed quite often. To combat that, I have a script that will update a DNS record to map ssh.myurl.com to my new IP.
This setup works, but once the IP changes, I get the error:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
Then I have to delete the key from my client, using ssh-keygen -R ssh.myurl.com, and it works again.
But this seems redundant, as I can't really see how this adds any security. Can I configure this setup so that it always trusts this connection?
The setup uses ProxyJump, if that changes anything.
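One thing that might help, assuming the server's host key itself stays the same and only its IP changes, is to keep verifying the key by hostname but stop recording and checking the IP, via CheckHostIP in ~/.ssh/config (a sketch, not a confirmed fix for this exact setup):
Host ssh.myurl.com
    CheckHostIP no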

SSH protocol: determine hostname used by client

I am attempting to create an SSH server (using Paramiko, but that's mostly irrelevant here). I am wondering if there is any way to determine the hostname that the SSH client requested. For example, if I were to connect with
ssh user@example.com
but I also had a CNAME record that pointed to the same server so I could also connect with
ssh user@foo.com
then I would like the server to know in the first case the user requested example.com and in the second, foo.com.
I have been reading through SSH protocol documents like:
https://www.rfc-editor.org/rfc/rfc4253
https://www.rfc-editor.org/rfc/rfc4252
But I cannot find out whether there is a way to do this.
In general, the ssh protocol does not support this. It's possible that a given ssh client may send an environment variable that gives you a hint, but that would happen after key exchange and user authentication, which would be far later than you'd want the information. It happens that if you were using Kerberos authentication via one of the ssh GSS-API mechanisms described in RFC 4462, you would get the hostname the user requested as part of the GSS exchange. That almost certainly doesn't help you, but it happens to be the only case I'm aware of where this information is sent.
For ssh virtual hosting you're going to need to dedicate an IP address or port for each virtual host. Take a look at port sharing or IPv6 as possibilities for your application.
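For instance, a sketch of the port-per-name workaround (the port numbers are placeholders): run one sshd listener per published name, and let clients pick the right one in ~/.ssh/config, so the server can tell the names apart by the port that was connected to.
Host example.com
    Port 2201
Host foo.com
    Port 2202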

Host key verification failed - Amazon EC2

I am working with Windows 7 and Git Bash, as well as an Amazon EC2 instance. I tried to log into my instance:
$ ssh -i f:mykey.pem ubuntu@ec2-52-10-**-**.us-west-2.compute.amazonaws.com
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
71:00:d7:d8:a------------------26.
Please contact your system administrator.
Add correct host key in /m/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /m/.ssh/known_hosts:27
ECDSA host key for ec2-52-10-**-**.us-west-2.compute.amazonaws.com has changed and you have requested strict checking.
Host key verification failed.
Logging in like this has worked fine in the past, but this problem started after I rebooted my EC2 instance. How can I get this working again?
edit:
$ ssh -i f:tproxy.pem ubuntu@ec2-52-10-**-**.us-west-2.compute.amazonaws.com
ssh: connect to host ec2-52-10-**-**.us-west-2.compute.amazonaws.com port 22: Bad file number
tried again:
The authenticity of host 'ec2-52-10-**-**.us-west-2.compute.amazonaws.com (52.10.**-**)' can't be established.
ECDSA key fingerprint is d6:c4:88:-----------fd:65.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
Warning: Permanently added 'ec2-52-10-**-**.us-west-2.compute.amazonaws.com,52.10.**-**' (ECDSA) to the list of known hosts.
Permission denied (publickey).
what should I do now?
The host has a new SSH key, so ssh tells you that something has changed.
The hint is here:
Offending ECDSA key in /m/.ssh/known_hosts:27
If you're sure the server on the other side is authentic, you should delete line 27 in /m/.ssh/known_hosts.
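For example, a sketch (the hostname is a placeholder for the one in the warning, and 27 is the line number the warning reports; either command alone is enough):
ssh-keygen -R your-instance.us-west-2.compute.amazonaws.com
sed -i '27d' /m/.ssh/known_hosts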
This error says that something has changed since your last login to this server, and that the server you are trying to ssh to might not be the server you think it is.
One thing to be aware of...
When you create an EC2 instance, no fixed IP is assigned to it.
When you start the instance, it gets a (dynamic) IP address and a DNS name based on that IP.
If you shut down the instance and start it again a few hours later, it might get a new IP and a new DNS name.
If you are still trying to access the old DNS name/IP, you are actually trying to access a server that might not belong to you.
This ends with the same error message you got.
(It can happen because you pointed a DNS entry to the old IP, or you are using scripts that try to access the old DNS name/IP, or you are just repeating the ssh command from your history...)
If this is the case, the solution is to use an Elastic IP.
You can assign an Elastic IP to your server, which forces it to keep its public IP address even when it is stopped and started.
An Elastic IP is free while the attached server is running,
but it incurs a small fee while the attached server is down.
This is done to make sure you are not "reserving" an IP that you are not using or do not need.
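A sketch with the AWS CLI, in case it helps (the instance ID is a placeholder; allocate-address prints the allocation ID to pass to the second command):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0example1234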
In a Beanstalk environment, the issue is that the client refers to the key stored in known_hosts for the respective IP, but the server's key has changed, so the stored key no longer matches.
Removing the entry for that IP from ~/.ssh/known_hosts and then connecting by ssh works.
(Basically, when there is no entry in ~/.ssh/known_hosts, ssh creates a new one, and thus resolves the conflict.)
Type the following command to set the permissions. Replace ~/mykeypair.pem with the location and file name of your key pair private key file.
chmod 400 ~/mykeypair.pem
In your case, mykeypair.pem is tproxy.pem.
I was facing the same issue, and it was fixed after making the .pem file private.
Here is some more information on SSH Key Permissions.

known_hosts domain can switch keys

Solaris systems: the local system has a known_hosts file with the key added automatically; the target systems have authorized_keys2.
The target system is one of two servers configured active-passive; they switch back and forth monthly (or on failover). The target servers have two different hostnames on the same domain and two different IP addresses.
The VIP configuration is set up with one virtual hostname so that connections always land on the active server of the pair. Let's say the host is bar and the remote user is foo.
The source system originates the scp or ssh connection to the VIP domain:
ssh foo@bar.subdom.dom.com, or just ssh foo@bar
Once the key is stored in known_hosts, all is fine for the month, but when the VIP switches to the opposite node, it fails with:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED
blah blah
RSA Host key for dokes has changed and you have requested strict checking
Any suggestions for allowing two different keys for one VIP server name?
Thanks
JimR
You can copy the host keys from one machine to the other to get rid of the message; that is what you sometimes do when serving SSH behind a load balancer as well.
I do not know about Solaris, but the keys might be located at /etc/ssh/ssh_host_rsa_key and /etc/ssh/ssh_host_rsa_key.pub, or somewhere similar.
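A sketch of that approach, assuming the usual OpenSSH layout (the node name is a placeholder, and the restart command differs by platform; on Solaris 10+ it is typically svcadm):
scp /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_rsa_key.pub passive-node:/etc/ssh/
ssh passive-node 'svcadm restart ssh'   # restart sshd so it serves the copied keys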
It seems that I can have two entries in known_hosts for the same VIP address but with different keys; I manually edited known_hosts to add the second key.
I put that in place today, and it works on the first node. I will confirm next month, after the automated switchover, that it continues to work on the opposite node.
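Roughly what that looks like in ~/.ssh/known_hosts (the key material is shortened and the IP is a placeholder; ssh accepts the connection if the presented key matches any line for that name):
bar.subdom.dom.com,10.0.0.5 ssh-rsa AAAAB3NzaC1yc2E...key-of-node-A
bar.subdom.dom.com,10.0.0.5 ssh-rsa AAAAB3NzaC1yc2E...key-of-node-B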
JimR

ssh-config by host subnet

So I have a whole bunch of machines on my 10.10.10.x subnet, all of them essentially configured in the same way. I differentiate these from machines on my 10.10.11.x subnet, which serves a different purpose.
I'd like to be able to type 'ssh 10.x' to connect to machines on the 10. network and 'ssh 11.x' to connect to machines on the 11 network.
I know I can set up individual machines to allow access via the full IP, or a shorthand version, like this in my ~/.ssh/config:
Host 10.10.10.11 10.11
HostName 10.10.10.11
User root
This can get pretty repetitive for lots of hosts on my network, so my question is: is there a way to specify this as a pattern for the entire subnet, something like:
Host 10.10.10.x
User root
Host 10.x
HostName 10.10.10.x
User root
Thanks
These lines will provide the desired functionality:
Host 192.168.1.*
IdentityFile KeyFile
If you attempt to connect to a server whose IP is in this subnet, you will be able to establish an ssh connection.
From the ssh_config(5) Manpage:
A pattern consists of zero or more non-whitespace characters, ‘*’ (a wildcard that matches zero or more characters), or ‘?’ (a wildcard that matches exactly one character). For example, to specify a set of declarations for any host in the “.co.uk” set of domains, the following pattern could be used:
Host *.co.uk
The following pattern would match any host in the 192.168.0.[0-9] network
range:
Host 192.168.0.?
A pattern-list is a comma-separated list of patterns. Patterns within
pattern-lists may be negated by preceding them with an exclamation mark
(‘!’). For example, to allow a key to be used from anywhere within an
organisation except from the “dialup” pool, the following entry (in
authorized_keys) could be used:
from="!*.dialup.example.com,*.example.com"
So you can just use Host 10.*
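Concretely, a sketch for ~/.ssh/config matching the subnets from the question (the shorthand block relies on the %h token in HostName, which recent OpenSSH releases support; the first value obtained wins, so the full-address block comes first):
# Full addresses on either subnet: only the user needs setting.
Host 10.10.10.* 10.10.11.*
    HostName %h
    User root
# Shorthand: "ssh 10.23" goes to 10.10.10.23 and "ssh 11.23" to 10.10.11.23.
Host 10.* 11.*
    HostName 10.10.%h
    User root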