Occasionally see WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED - ssh

We have a Kubernetes cluster with multiple worker nodes. The only difference between the nodes is their IP addresses and hostnames. I used to ssh into all of them without any problem.
I understand that there are a lot of similar questions about this warning, but I don't think any of them matches mine. Specifically, this warning has recently started appearing occasionally when I ssh into one of the worker nodes. That is to say, sometimes I can ssh into the machine successfully, but sometimes I see this warning. This is the full warning message:
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:95IISBgdGFLH2SEeU+E6YO+S9qnfEXfJqblNqfY/SFE.
Please contact your system administrator.
Add correct host key in /home/xxx/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/xxx/.ssh/known_hosts:55
remove with:
ssh-keygen -f "/home/xxx/.ssh/known_hosts" -R "10.210.10.12"
RSA host key for 10.210.10.12 has changed and you have requested strict checking.
Host key verification failed.
This is what I have done so far:
Checked the MAC address of the machine with the arp -a command; it belongs to the targeted machine.
Removed the 55th line of known_hosts, but after some time the warning came back.
Re-plugged the cable; I could ssh in for a while, but after some time it happened again.
Any idea?
Thanks in advance.

From your question, I understand that you removed the 55th line without understanding why. That line contained the previous host identification of the 10.210.10.12 machine. Why did you do that? The warning exists to alert you to something suspicious, and you should figure out what happened before dismissing it.
Checking the MAC address is a good idea, but MAC addresses can be spoofed far more easily than SSH host keys, so a matching MAC doesn't tell you much; a changed MAC, on the other hand, would be a bad sign. You need to figure out what happened, and so far you've found nothing.
If your company has a security team, contact them. If not, get assistance.
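One way to dig in (a sketch, assuming an OpenSSH client on a Linux box; the IP and path are taken from the warning above) is to compare the fingerprint the host is presenting right now with the one already recorded in known_hosts:

# Ask the host which keys it is presenting right now and print their fingerprints
# (ssh-keyscan only fetches public keys; it never authenticates)
ssh-keyscan -t rsa,ecdsa 10.210.10.12 2>/dev/null | ssh-keygen -lf /dev/stdin

# Print the fingerprint(s) already recorded for that address in known_hosts
ssh-keygen -lF 10.210.10.12 -f /home/xxx/.ssh/known_hosts

If repeated scans return different fingerprints on different attempts, two devices are most likely answering on the same IP address.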

After days of struggle, I finally worked it out. It turns out that the Supermicro motherboard's IPMI interface claims an IP address of its own, and that address was also 10.210.10.12.
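For anyone who hits the same thing, a duplicate IP like this can usually be spotted from another Linux machine on the same subnet with arping. A rough sketch (the interface name eth0 is an assumption; substitute your own):

# Ask who answers for 10.210.10.12; if two different MAC addresses reply,
# two devices (here the node's OS and the IPMI/BMC interface) share the address
sudo arping -I eth0 -c 4 10.210.10.12

# Many arping builds also have a duplicate address detection mode
sudo arping -D -I eth0 -c 2 10.210.10.12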

Related

SSH: Can I always trust a remote server with dynamic IP?

I have a server that I need to connect to sometimes. It has a public IP, but the IP is renewed quite often. To combat that, I have a script that will update a DNS record to map ssh.myurl.com to my new IP.
This setup works, but once the IP changes, I get the error:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
Then I have to delete the key from my client, using ssh-keygen -R ssh.myurl.com, and it works again.
But this seems redundant, as I can't really see how this adds any security. Can I configure this setup so that it always trusts this connection?
The setup uses ProxyJump, if that changes anything.
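One approach worth sketching here (a suggestion based on assumptions about your setup, not a quote from the thread): tell ssh not to tie the host key to the changing IP at all, and verify it only against the DNS name. In ~/.ssh/config (the user and jump host names are placeholders for whatever you already use):

Host ssh.myurl.com
    User myuser            # placeholder
    ProxyJump jumphost     # placeholder for your existing jump host
    # Check the host key against the name only, not the (changing) IP address,
    # so an IP renewal no longer trips the known_hosts check
    CheckHostIP no

The host key itself stays pinned to ssh.myurl.com, so you keep the protection against a swapped server; only the IP-based bookkeeping goes away. ProxyJump does not change this.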

Influxdb over SSL connection

I'm a little confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. The machine also runs Apache2, but for now I don't intend to use it as a web server to serve pages to clients; I want to use it as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt; indeed, the welcome page at https://datavm.bo.cnr.it works properly over an encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step though), edited influxdb.conf with https-enabled = true, and set the paths for https-certificate and https-private-key (fullchain.pem for both; is that right?). Then systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is much appreciated! Thank you.
I figured out at least part of the problem. It was related to permissions on the *.pem files. This looks weird, because if I type the following, as the documentation says, it does not connect.
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second line with 644, everything works perfectly. But this way I'm giving anyone permission to read the private key! I can't figure this part out.
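A likely explanation (an assumption here, since it depends on how InfluxDB was installed): influxd runs as its own unprivileged user, so a root-owned key with mode 600 is unreadable to it, and 644 only "works" by making the key world-readable. Handing the key file to the service user keeps it private:

# "influxdb" is the service user created by the Debian/Ubuntu package; adjust if yours differs
sudo chown influxdb:influxdb /etc/ssl/<private-key-file>
sudo chmod 600 /etc/ssl/<private-key-file>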
UPDATE
If I put symlinks in /etc/ssl/ that point to the .pem files living in /etc/letsencrypt/live/hostname, the connection is refused. Only if I put a copy of the files there does the SSL connection work.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
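One way to keep automatic renewal without the symlinks (which fail because they point into /etc/letsencrypt/live, which only root can read, so the influxdb user cannot follow them) is a certbot deploy hook that copies fresh files into /etc/ssl after every renewal. A sketch, with the copied file names and the influxdb service user as assumptions; note that the private key is privkey.pem, while fullchain.pem contains only the certificates:

#!/bin/bash
# /etc/letsencrypt/renewal-hooks/deploy/influxdb.sh  (make it executable)
set -e
LIVE=/etc/letsencrypt/live/datavm.bo.cnr.it
cp "$LIVE/fullchain.pem" /etc/ssl/influxdb-cert.pem
cp "$LIVE/privkey.pem"   /etc/ssl/influxdb-key.pem
chown influxdb:influxdb /etc/ssl/influxdb-cert.pem /etc/ssl/influxdb-key.pem
chmod 644 /etc/ssl/influxdb-cert.pem
chmod 600 /etc/ssl/influxdb-key.pem
systemctl restart influxdb

Then point https-certificate at the cert copy and https-private-key at the key copy in influxdb.conf. Certbot runs everything in renewal-hooks/deploy after each successful renewal, so the copies stay current.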

Why does SSH seem to remember my valid connection settings even though they're now invalid?

I'm troubleshooting some stuff with an application I'm working on that uses SFTP. Along the way, I'm using the openSSH command line client to connect to a separate SFTP server, using a configuration file (~/.ssh/config). In between tests, I'm changing the configurations, and at times I try to deliberately test an invalid configuration.
Currently, I just changed my config file to remove the IdentityFile line. Without this, it shouldn't know what key file to use to try and make the connection, and as such, the connection should fail. However, every time I ssh to that hostname, the connection succeeds without even so much as a password prompt.
This is BAD. My server requires the use of the keyfile; I know this because my application cannot connect without one. Yet it's almost like SSH is remembering an old, valid configuration for the server even though my current configuration is invalid.
What can I do to fix this? I don't want SSH to be hanging onto old configurations like this.
If you don't specify IdentityFile, ssh will use the keys in the default locations (~/.ssh/id_{rsa,dsa,ecdsa}), as described in the ssh manual page:
IdentityFile
Specifies a file from which the user's DSA, ECDSA, Ed25519 or RSA authentication identity is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa,
~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/id_rsa for protocol version 2. [...]
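To see which key is actually offered on a given attempt, a couple of commands may help (a sketch, not part of the quoted manual; sftphost stands for whatever Host alias you use in ~/.ssh/config):

# Verbose output shows which config files are read and which identity files are offered
ssh -v sftphost 2>&1 | grep -iE 'reading configuration|identity file|offering'

# Confirm the server really requires a key by disabling public-key auth for one attempt
ssh -o PubkeyAuthentication=no sftphost

ssh does not cache old configurations; it re-reads ~/.ssh/config on every invocation, so a connection that still succeeds without IdentityFile is almost certainly picking up one of the default keys listed above.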

Host key verification failed - amazon EC2

I am working with Windows 7 and Git Bash, as well as an Amazon EC2 instance. I tried to log into my instance:
$ ssh -i f:mykey.pem ubuntu@ec2-52-10-**-**.us-west-2.compute.amazonaws.com
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
71:00:d7:d8:a------------------26.
Please contact your system administrator.
Add correct host key in /m/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /m/.ssh/known_hosts:27
ECDSA host key for ec2-52-10-**-**.us-west-2.compute.amazonaws.com has changed and you have requested strict checking.
Host key verification failed.
Logging in like this has worked fine in the past, but this problem started after I rebooted my EC2 instance. How can I get this working again?
edit:
$ ssh -i f:tproxy.pem ubuntu@ec2-52-10-**-**.us-west-2.compute.amazonaws.com
ssh: connect to host ec2-52-10-**-**.us-west-2.compute.amazonaws.com port 22: Bad file number
tried again:
The authenticity of host 'ec2-52-10-**-**.us-west-2.compute.amazonaws.com (52.10.**-**)' can't be established.
ECDSA key fingerprint is d6:c4:88:-----------fd:65.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
Warning: Permanently added 'ec2-52-10-**-**.us-west-2.compute.amazonaws.com,52.10.**-**' (ECDSA) to the list of known hosts.
Permission denied (publickey).
what should I do now?
The hostname has a new ssh key, so ssh tells you something has changed.
The hint is here:
Offending ECDSA key in /m/.ssh/known_hosts:27
If you're sure the server on the other side is authentic, you should delete line 27 in /m/.ssh/known_hosts.
This error says that something has changed since your last login to this server, and that the server you are trying to ssh to might not be the server you think it is.
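Concretely, the stale entry can be cleared either by host name or by line number. A sketch using the file path from the error message (both commands work in Git Bash):

# Remove every entry for that hostname from the known_hosts file named in the error
ssh-keygen -f /m/.ssh/known_hosts -R "ec2-52-10-**-**.us-west-2.compute.amazonaws.com"

# Or delete exactly line 27, which the error points at
sed -i '27d' /m/.ssh/known_hosts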
One thing to be aware of...
When you create an EC2 instance, no fixed IP is assigned to it.
When you start the instance, it gets a (dynamic) IP address and a DNS name based on that IP.
If you shut the instance down and start it again a few hours later, it might get a new IP and a new DNS name.
If you are still trying to access the old DNS name/IP, you are actually trying to access a server that might not belong to you.
This will end with the same error message you got.
(It can happen because you pointed a DNS entry to the old IP, or you are using scripts that try to access the old DNS name/IP, or you are just repeating the ssh command from your shell history...)
If this is the case, the solution is to use an Elastic IP.
You can assign an Elastic IP to your server, which forces it to keep its IP address between reboots.
An Elastic IP is free while the attached server is up, but it costs a small fee while the attached server is down.
This is done to make sure you are not "reserving" an IP that you are not actually using.
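For reference, the allocation can also be done from the AWS CLI; a sketch (the instance and allocation IDs are placeholders):

# Allocate a new Elastic IP in the VPC; note the AllocationId in the output
aws ec2 allocate-address --domain vpc

# Attach it to your instance so the address survives stop/start cycles
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0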
In an Elastic Beanstalk environment, the issue is that ssh refers to the key stored in known_hosts for the respective IP, but that key has changed, so the stored entry no longer matches.
Removing the key for that IP from ~/.ssh/known_hosts and then connecting by ssh works.
(Basically, when there is no entry in ~/.ssh/known_hosts, ssh creates a new one, which resolves the conflict.)
Type the following command to set the permissions. Replace ~/mykeypair.pem with the location and file name of your key pair private key file.
chmod 400 ~/mykeypair.pem
In your case, mykeypair.pem is tproxy.pem.
I was facing the same issue, and after making the .pem file private it was fixed.
Here is some more information on SSH Key Permissions

known_hosts domain can switch keys

These are Solaris systems; the local system has a known_hosts file with keys added automatically. The target systems have authorized_keys2.
The target system is one of two servers configured active-passive; they switch back and forth monthly (or on failover). The target servers have two different hostnames on the same domain and two different IP addresses.
The VIP configuration uses one virtual hostname, so a connection always lands on the active server of the pair. Let's say the hostname is bar and the remote user is foo.
The source system originates an scp or ssh connection to the VIP name:
ssh foo@bar.subdom.dom.com, or just ssh foo@bar
Once the key is stored in known_hosts, all is fine for the month, but when the VIP switches to the opposite node, it fails with:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED
blah blah
RSA Host key for dokes has changed and you have requested strict checking
Any suggestions for allowing two different keys for one VIP server name?
Thanks
JimR
You can copy the host keys from one machine to the other to get rid of the message; that is what is sometimes done when serving ssh behind a load balancer as well.
I don't know about Solaris, but the keys are probably located at /etc/ssh/ssh_host_rsa_key and /etc/ssh/ssh_host_rsa_key.pub, or somewhere similar.
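A sketch of what that copy could look like (the file locations and the Solaris service name are assumptions; verify them on your systems):

# On the active node: copy all host key pairs to the passive node
scp /etc/ssh/ssh_host_*key* root@passive-node:/etc/ssh/

# On the passive node: keep the private keys private and restart sshd
chmod 600 /etc/ssh/ssh_host_*_key
chmod 644 /etc/ssh/ssh_host_*_key.pub
svcadm restart ssh    # Solaris SMF; on Linux this would be: systemctl restart sshd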
It seems that I can have two entries in known_hosts with the same VIP name but different keys. I manually edited known_hosts to add the second key.
I put that in place today, and it works on the first node. I will confirm next month, with the automated switchover, that it continues to work on the opposite node.
JimR
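If the goal is just to record both nodes' keys under the VIP name without editing known_hosts by hand, ssh-keyscan can do it. A sketch, where node1 and node2 stand for the two real hostnames behind the VIP:

# Fetch each physical node's key and store it under the VIP name,
# so ssh accepts whichever node is currently active
ssh-keyscan node1.subdom.dom.com 2>/dev/null | sed 's/^node1.subdom.dom.com/bar.subdom.dom.com/' >> ~/.ssh/known_hosts
ssh-keyscan node2.subdom.dom.com 2>/dev/null | sed 's/^node2.subdom.dom.com/bar.subdom.dom.com/' >> ~/.ssh/known_hosts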