I have a server that I need to connect to sometimes. It has a public IP, but the IP is renewed quite often. To combat that, I have a script that will update a DNS record to map ssh.myurl.com to my new IP.
This setup works, but once the IP changes, I get the error:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
Then I have to delete the key from my client, using ssh-keygen -R ssh.myurl.com, and it works again.
But this seems redundant, as I can't really see how this adds any security. Can I configure this setup so that it always trusts this connection?
The setup uses ProxyJump, if that changes anything.
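For reference, the behaviour being asked about can be expressed as a per-host client configuration. A minimal sketch, assuming an OpenSSH client (CheckHostIP stops ssh from pinning the frequently-changing IP; accept-new, available since OpenSSH 7.6, auto-accepts keys for names not yet in known_hosts but still refuses changed keys):

# ~/.ssh/config on the client (sketch)
Host ssh.myurl.com
    CheckHostIP no                    # don't warn when only the IP behind the name changes
    StrictHostKeyChecking accept-new  # still refuse if a known host's key changes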
The scenario is as follows: I have a VPS (Droplet) on DigitalOcean (DO) that I connect to via PuTTY over ssh. However, I must keep another user enabled with root privileges and password access (no ssh key), because when there are connection problems through PuTTY I have to log in through my DO account and open the droplet console as that user, with its password, to fix the problem. This usually happens every time I restart the server: no user can connect from PuTTY, the connection is rejected. The fix is simple: restart ufw and everything is solved.
However, this leaves a door open for hackers, who could break that all-privileged user's password. The idea is to allow this user to connect only from my personal IP, but the Ubuntu firewall only allows IP/port/application rules; a rule cannot reference a user. How could I solve this problem?
After much research and testing (and more testing), specifically with the telnet and login commands, I discovered something I did not know: when the ssh service is active, only ssh connections with a private key are allowed; no other kind of connection is accepted, not even ssh with a password. I am not sure whether this behaviour is built into Ubuntu or set up by DigitalOcean; I would guess the former.
Given this, the problem raised in this question does not really arise: no one can connect to the server unless they have the private key, and if you additionally allow ssh connections only from a specific IP, security is very good. Configuring the firewall in this simple way is sufficient:
ufw status verbose
To           Action     From
--           ------     ----
8000         ALLOW IN   Anywhere
6666/tcp     ALLOW IN   15.15.15.15
8000 (v6)    ALLOW IN   Anywhere (v6)
Port 8000 is for incoming requests from HTTP and HTTPS clients, which are handled by Django; for ssh I use a port other than the default 22 and restrict it to my own IP, so I can only connect from my computer, and only with the corresponding private key. You also have to edit the ssh daemon's configuration file, /etc/ssh/sshd_config, to replace port 22 and set PasswordAuthentication no, and then restart the service with service ssh restart.
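A minimal sketch of the corresponding /etc/ssh/sshd_config changes, assuming the non-default port 6666 from the ufw rules above:

# /etc/ssh/sshd_config (excerpt)
Port 6666                  # move sshd off the default port 22
PasswordAuthentication no  # key-based logins only

After saving, restart the daemon with service ssh restart as described above.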
I am attempting to create an SSH server (using Paramiko, but that's mostly irrelevant here). I am wondering if there is any way to determine the hostname that the SSH client requested. For example, if I were to connect with
ssh user@example.com
but I also had a CNAME record that pointed to the same server so I could also connect with
ssh user@foo.com
then I would like the server to know in the first case the user requested example.com and in the second, foo.com.
I have been reading through SSH protocol documents like:
https://www.rfc-editor.org/rfc/rfc4253
https://www.rfc-editor.org/rfc/rfc4252
But I cannot find out whether there is a way to do this.
In general, the ssh protocol does not support this. It's possible that a given ssh client may send an environment variable that gives you a hint, but that would happen after key exchange and user authentication, which would be far later than you'd want the information. It happens that if you were using Kerberos authentication via one of the ssh GSS-API mechanisms described in RFC 4462, you would get the hostname the user requested as part of the GSS exchange. That almost certainly doesn't help you, but it happens to be the only case I'm aware of where this information is sent.
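To illustrate the environment-variable hint (with the caveat above that it arrives only after key exchange and authentication): a hedged sketch assuming an OpenSSH client of 7.8 or newer, with the variable name SSH_REQUESTED_HOST invented for this example. A Paramiko server would see it as an "env" channel request rather than via AcceptEnv:

# client side, ~/.ssh/config: send the name that was dialed
Host example.com
    SetEnv SSH_REQUESTED_HOST=example.com
Host foo.com
    SetEnv SSH_REQUESTED_HOST=foo.com

# server side, if it were OpenSSH sshd, the variable would have to be whitelisted:
# AcceptEnv SSH_REQUESTED_HOST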
For ssh virtual hosting you're going to need to dedicate an IP address or port for each virtual host. Take a look at port sharing or IPv6 as possibilities for your application.
Let's say I have a DigitalOcean droplet at 68.456.72.184.
When ssh-ing into my remote server, I'd rather not have to type out the whole ssh command:
ssh 68.456.72.184
The host's name is Stormtrooper - how do I make it so that client machines can ssh into the server via
ssh Stormtrooper
I imagine this requires some sort of configuration on the local client machine that's connecting? In what order does a client machine search for host names? I imagine there's some local setting where it looks up Stormtrooper's IP address, and if it's not found there it looks on the local network, and then on the "global" network (i.e. public DNS).
I'm not quite sure how that lookup process works, so an explanation there would be great as well.
You can create a local ssh_config at ~/.ssh/config with the following content:

Host Stormtrooper
    Hostname 68.456.72.184
And then you can ssh to that server using ssh Stormtrooper (even tab completion will work for you).
Connecting using a FQDN will work too if you have DNS set up correctly. If you have a domain Stormtrooper.tld pointing to this IP, you can ssh using
ssh Stormtrooper.tld
For name resolution on your local network, you would need a local DNS server (or a hosts-file entry) to do this translation for you.
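If you don't want to run a DNS server, a hosts-file entry on each client gives the same result, since the resolver consults /etc/hosts before DNS on most systems. A sketch reusing the address from the question:

# /etc/hosts on the client machine
68.456.72.184   Stormtrooper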
I am working with Windows 7 and Git Bash, as well as an Amazon EC2 instance. I tried to log into my instance:
$ ssh -i f:mykey.pem ubuntu@ec2-52-10-**-**.us-west-2.compute.amazonaws.com
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
71:00:d7:d8:a------------------26.
Please contact your system administrator.
Add correct host key in /m/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /m/.ssh/known_hosts:27
ECDSA host key for ec2-52-10-**-**.us-west-2.compute.amazonaws.com has changed and you have requested strict checking.
Host key verification failed.
Logging in like this has worked fine in the past, but this problem started after I rebooted my EC2 instance. How can I get this working again?
edit:
$ ssh -i f:tproxy.pem ubuntu@ec2-52-10-**-**.us-west-2.compute.amazonaws.com
ssh: connect to host ec2-52-10-**-**.us-west-2.compute.amazonaws.com port 22: Bad file number
tried again:
The authenticity of host 'ec2-52-10-**-**.us-west-2.compute.amazonaws.com (52.10.**-**)' can't be established.
ECDSA key fingerprint is d6:c4:88:-----------fd:65.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
Warning: Permanently added 'ec2-52-10-**-**.us-west-2.compute.amazonaws.com,52.10.**-**' (ECDSA) to the list of known hosts.
Permission denied (publickey).
what should I do now?
The host behind that hostname presented a new ssh key, so ssh tells you something has changed.
The hint is here:
Offending ECDSA key in /m/.ssh/known_hosts:27
If you're sure the server on the other side is authentic, you should delete line 27 in /m/.ssh/known_hosts.
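Two equivalent ways to drop the stale entry, sketched here; the sed variant assumes GNU sed and the exact line number from the error message:

# remove all cached keys for that host name from the given file
ssh-keygen -R ec2-52-10-**-**.us-west-2.compute.amazonaws.com -f /m/.ssh/known_hosts

# or delete just the offending line 27
sed -i '27d' /m/.ssh/known_hosts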
This error says that something has changed since your last login to this server, and that the server you are trying to ssh to might not be the server you think it is.
One thing to be aware of...
When you create an EC2 instance, no fixed IP is assigned to it.
When you start the instance, it gets a (dynamic) IP address and a DNS name based on that IP.
If you shut down the instance and start it again a few hours later, it might get a new IP and a new DNS name.
If you are still trying to access the old DNS name/IP, you are actually trying to access a server that might not belong to you.
This will end with the same error message you saw.
(It can happen because you pointed a DNS entry to the old IP, or you are using scripts that try to access the old DNS name/IP, or you are just repeating the ssh command from your history...)
If this is the case, the solution is to use an Elastic IP.
You can assign an Elastic IP to your server, which forces it to keep the same IP address between reboots.
An Elastic IP is free while the server it is attached to is up, but it incurs a small fee while that server is down.
This is done to make sure you are not "reserving" an IP you are not actually using.
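A minimal sketch of doing this with the AWS CLI, assuming it is installed and configured; the instance and allocation IDs below are placeholders:

# allocate a new Elastic IP in your VPC
aws ec2 allocate-address --domain vpc

# attach it to your instance, using the eipalloc ID returned above
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0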
In an Elastic Beanstalk environment, the issue is that ssh refers to the key stored in known_hosts for the respective IP, but that key has changed, so the cached key no longer matches.
Removing the entry for that IP from ~/.ssh/known_hosts and then connecting by ssh works.
(Basically, when there is no entry in ~/.ssh/known_hosts, ssh will create a new one, and thus resolve the conflict.)
Type the following command to set the permissions. Replace ~/mykeypair.pem with the location and file name of your key pair private key file.
chmod 400 ~/mykeypair.pem
In your case, mykeypair.pem is tproxy.pem.
I was facing the same issue, and after making the .pem file private it was fixed.
Here is some more information on SSH key permissions.
I am currently trying to work out how to SSH to servers behind firewalls that deny all incoming connections. The servers can SSH out, so I am wondering if there is a way to get the server behind the firewall to create an SSH tunnel to my workstation, then allow my workstation to send commands back to the server through it?
I have looked into tunneling / reverse tunneling, but these appear to be port forwarding solutions, which will not work as the firewall denies all connections on all ports.
Ideally, I would like to do this in Ruby (using the Net::SSH gem), such that instead of opening a new connection like:
Net::SSH.start('host', 'user', :password => "password")
I could somehow bind to an existing tunnel.
Thanks!
This is fairly simple if you have control over the server. I'll give the command-line version, and you can work that into any framework you like:
server$ ssh -R 9091:localhost:22 client.example.egg
client$ ssh -p 9091 localhost
The server establishes a connection to the client first. That connection starts a listener on the "R"emote end (i.e. on the client) on port 9091 (a number I just made up) and forwards incoming connections to localhost:22, i.e. to the ssh server on the server itself.
The client then just needs to connect to its own local port 9091, which is transparently forwarded to the server's ssh server.
This will usually wreak havoc on your host key checking (and the security that goes with it!), because the client's ssh client doesn't know that localhost:9091 is the same as server:22. If your client is Putty, there is an option to provide the "real" server name somewhere so that the host key can be looked up properly.
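With a plain OpenSSH client, the equivalent of that Putty option is HostKeyAlias, which tells ssh which known_hosts name the presented key should be checked against. A sketch, where server.example.egg stands in for the server's real name (a hypothetical name here):

client$ ssh -o HostKeyAlias=server.example.egg -p 9091 localhost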
Unless you can first create (and maintain) a tunnel out from the host you're trying to connect to (which would then allow you to connect through that tunnel), no, you can't. That's the point of a firewall: to prevent unauthorised access to a network.
However, the firewall shouldn't block an outbound tunnel, although that depends on exactly how the tunnel is managed. A port-forwarding tunnel set up using ssh's tunneling features would subvert the firewall, though it may also get you in trouble with the administrator of the remote network.
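For the "maintain" part, autossh is the usual tool: it restarts the tunnel whenever it drops. A sketch assuming autossh is installed on the firewalled server, reusing the names from the answer above (-M 0 disables autossh's monitor port in favour of ssh's own keepalives):

server$ autossh -M 0 -N -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -R 9091:localhost:22 user@client.example.egg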
So ultimately, you'd need to speak to the network administrator to get the firewall rules relaxed so you can connect without a tunnel, or at least to get authorisation to run one.