Amazon EC2 ssh timeout due to inactivity - ssh

I am able to issue commands to my EC2 instances via SSH, and these commands log answers which I'm supposed to keep watching for a long time. The bad thing is that the SSH command is closed after some time due to my inactivity and I'm no longer able to see what's going on with my instances.
How can I disable/increase timeout in Amazon Linux machines?
The error looks like this:
Read from remote host ec2-50-17-48-222.compute-1.amazonaws.com: Connection reset by peer

You can set a keep alive option in your ~/.ssh/config file on your computer's home dir:
ServerAliveInterval 50
Amazon AWS usually drops your connection after only 60 seconds of inactivity, so this option will ping the server every 50 seconds and keep you connected indefinitely.

Assuming your Amazon EC2 instance is running Linux (and the very likely case that you are using SSH-2, not 1), the following should work pretty handily:
Remote into your EC2 instance.
ssh -i <YOUR_PRIVATE_KEY_FILE>.pem <INTERNET_ADDRESS_OF_YOUR_INSTANCE>
Add a "client-alive" directive to the instance's SSH-server configuration file.
echo 'ClientAliveInterval 60' | sudo tee --append /etc/ssh/sshd_config
Restart or reload the SSH server, for it to recognize the configuration change.
The command for that on Ubuntu Linux would be..
sudo service ssh restart
On any other Linux, though, the following is probably correct..
sudo service sshd restart
Disconnect.
logout
The next time you SSH into that EC2 instance, those super-annoying frequent connection freezes/timeouts/drops should hopefully be gone.
This also helps with Google Compute Engine instances, which come with similarly annoying default settings.
Warning: Do note that TCPKeepAlive settings (which also exist) are subtly, yet distinctly different from the ClientAlive settings that I propose above, and that changing TCPKeepAlive settings from the default may actually hurt your situation rather than help.
More info here: http://man.openbsd.org/?query=sshd_config
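An added example, not part of the original answer: sshd uses the first value it reads for a keyword and treats lines after a Match line as conditional, so a line appended with tee --append can be ignored or land inside a Match block. The function name set_sshd_option and the sample configuration are constructed for illustration.

```shell
#!/bin/sh
# Added example (not from the original answer). sshd keeps the FIRST value it
# reads for a keyword, and lines after a "Match" line apply only to that block.

# set_sshd_option FILE KEYWORD VALUE: rewrite FILE so KEYWORD has VALUE globally.
set_sshd_option() {
    cfg=$1 key=$2 val=$3
    tmp=$(mktemp) || return 1
    awk -v key="$key" -v val="$val" '
        {
            line = $0
            sub(/^[ \t]+/, "", line)          # keywords may be indented
            split(line, f, /[ \t=]+/)
            kw = tolower(f[1])                # keywords are case-insensitive
            if (kw == "match" && !in_match) {
                in_match = 1                  # global section ends here
                if (!done) { print key " " val; done = 1 }
            } else if (kw == tolower(key) && !in_match) {
                if (!done) { print key " " val; done = 1 }
                next                          # drop the old line and duplicates
            }
            print
        }
        END { if (!done) print key " " val }
    ' "$cfg" > "$tmp" || { rm -f "$tmp"; return 1; }
    cat "$tmp" > "$cfg"                       # keeps the file's owner and mode
    rc=$?
    rm -f "$tmp"
    return "$rc"
}

# Constructed sample: keyword already set, a commented copy, and a Match block.
sample_config() {
    printf '%s\n' 'Port 22' 'ClientAliveInterval 0' '#ClientAliveInterval 300' \
        'Match User deploy' '    ClientAliveInterval 15'
}

demo=$(mktemp) && sample_config > "$demo" &&
    set_sshd_option "$demo" ClientAliveInterval 60 && cat "$demo"
rm -f "$demo"
```

On a real instance the function has to run as root against /etc/ssh/sshd_config; running sudo sshd -t before the restart catches a syntax error that would otherwise lock you out, and sudo sshd -T prints the effective settings, including values that come from included files.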

Consider using screen or byobu and the problem will likely go away. What's more, even if the connection is lost, you can reconnect and restore access to the same terminal screen you had before, via screen -r or byobu -r.
byobu is an enhancement for screen, and has a wonderful set of options, such as an estimate of EC2 costs.

I know for PuTTY you can utilize a keepalive setting so it will send some activity packet every so often so as not to go "idle" or "stale".
http://the.earth.li/~sgtatham/putty/0.55/htmldoc/Chapter4.html#S4.13.4
If you are using another client, let me know.

You can use MobaXterm, a free tabbed SSH terminal, with the settings below:
Settings -> Configuration -> SSH -> SSH keepalive
Remember to restart the MobaXterm app after changing the setting.

I have 10+ custom AMIs all based on Amazon Linux AMIs and I've never run into any timeout issues due to inactivity on an SSH connection. I've had connections stay open more than 24 hrs, without running a single command. I don't think there are any timeouts built into the Amazon Linux AMIs.

Related

google colab ssh connection times out

I am trying to follow the numerous tutorials and gists such as:
https://gist.github.com/creotiv/d091515703672ec0bf1a6271336806f0
https://stackoverflow.com/questions/48459804/how-can-i-ssh-to-google-colaboratory-vm/53252985#53252985
When I run the steps, it seems like everything went fine (I get the root password), but I do see this:
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
but unfortunately, after all the steps when I do the following on my local machine:
ssh -p17057 root@0.tcp.ngrok.io
I get:
ssh: connect to host 0.tcp.ngrok.io port 17057: Connection timed out
I am on vanilla Debian Buster - any pointers to why this is happening would be incredibly useful to debug.
Thank you.
I also had trouble ssh'ing into the ngrok tcp tunnel. I was using my local laptop as the access point to the colab VM. What I did was fire up an EC2 instance on AWS and use that instead. Also, I used ssh reverse tunnel and dropped the need for ngrok altogether since the ec2 machine already had a public IP. Check my answer here: https://stackoverflow.com/a/63186681/4126114

How do I keep my daemon open through ssh tunnel?

I have been working on an HTTP server which accepts connections and then, based on the host name, loads up the right project from a .so, generates the page the client is asking for, then sends it back.
Now that I have several working projects, I am interested in making them available to others, but here is my problem:
I am connecting to my dedicated server through ssh, and starting my daemon from there, but after a while, the pages are no longer accessible because my program is no longer running.
I also get kicked by the server after a while. I wonder:
How do I keep my server running? Does the fact that I keep getting kicked out by ssh after a little idle time explain why my daemon is being shut down?
Thanks in advance to whoever will be able to give me some element of answer.
When your SSH session times out, SIGHUP is sent to the sub-processes forked from the current interactive shell. That's why the processes were terminated (server no longer running).
To avoid an idle SSH connection being kicked by the server, set ServerAliveInterval to send a request for a response from the server (e.g. in ~/.ssh/config):
Host *
ServerAliveInterval 30
To avoid shell sub-process termination, refer to
https://askubuntu.com/questions/348836/keep-the-running-processes-alive-when-disconneting-the-remote-connection/348921#348921
https://askubuntu.com/questions/349262/run-a-nohup-command-over-ssh-then-disconnect
In short, there are 3 options:
nohup
disown / setsid
start the servers in CLI in tmux or screen session on the server
NOTE: If the server instances are already properly daemonized, try looking at monit or supervisord to keep them running ;-D
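An added example, not part of the original answer, showing the SIGHUP behaviour described above for the nohup option only: the same stand-in process is started with and without nohup and then sent the signal a dropped session delivers. The function name hup_outcome is hypothetical and sleep 30 stands in for the real server.

```shell
#!/bin/sh
# Added example (not from the original answer): what SIGHUP does to a child
# started with and without nohup. "sleep 30" stands in for the real daemon.

# hup_outcome MODE: start the stand-in (MODE is "plain" or "nohup"), send it
# the SIGHUP a dropped SSH session would deliver, and print "dead" or "alive".
hup_outcome() {
    case $1 in
        plain) sleep 30 >/dev/null 2>&1 </dev/null & ;;
        nohup) nohup sleep 30 >/dev/null 2>&1 </dev/null & ;;
        *)     echo "usage: hup_outcome plain|nohup" >&2; return 2 ;;
    esac
    pid=$!
    sleep 1                                 # let nohup install its SIGHUP ignore
    kill -HUP "$pid" 2>/dev/null || true
    sleep 1
    kill -TERM "$pid" 2>/dev/null || true   # clean up a survivor
    status=0
    wait "$pid" || status=$?
    # 128+1 = killed by SIGHUP, 128+15 = survived SIGHUP and took our SIGTERM.
    case $status in
        129) echo dead ;;
        143) echo alive ;;
        *)   echo "unexpected exit status $status" ;;
    esac
}

for mode in plain nohup; do
    echo "$mode: $(hup_outcome "$mode")"
done
```

Run from an ordinary shell, the plain child is reported dead and the nohup child alive. For a real server, redirect output to a log file instead of /dev/null so that startup errors are not lost.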

Start ipython cluster using ssh on windows machine

I have a problem setting up an IPython cluster on a Windows server and connecting to this ipcluster using an ssh connection. I tried following the tutorial on https://ipython.org/ipython/doc/dev/parallel/parallel_process.html#ssh, but I have problems understanding what the options mean exactly and which parameters to use...
Could anyone help a total noob to set up an ipcluster? (Let's say the remote machine has ip 192.168.0.1 and the local machine has 192.168.0.2)
If you scroll roughly to the middle of the page https://ipython.org/ipython-doc/dev/parallel/parallel_process.html#ssh you will find this:
Current limitations of the SSH mode of ipcluster are:
Untested and unsupported on Windows. Would require a working ssh on Windows. Also, we are using shell scripts to setup and execute commands on remote hosts.
That means, there is no easy way to build an ipcluster with ssh connection on windows (if it works at all).
Do you really need to connect the machines with an ssh connection? I guess it's possible with an ssh client on each Windows machine, but if you are in a trusted local network you can also decide not to use the loopback interface and just expose the ports...
Sure you can start controller and engine separately! For further examples about ports (if you have problems with firewalls) see also How to setup ssh tunnel for ipython cluster (ipcluster)

Google compute engine - getting blocked after accessing SSH a few times

I have a google compute engine VM, running ubuntu, and utilising Laravel Forge.
I seem to get blocked by the VM after accessing SSH a few times (2-4), even if I'm logging in correctly. Restarting the VM unblocks me.
I first noticed the issue as I was having trouble logging into SSH; after a few attempts it would become unreachable. My website hosted on it also wouldn't resolve. After restarting the VM, I could try to log into SSH again and my website worked. This happened a couple of times before I figured out how to correctly log in with SSH.
Next, trying to log in to the database with HeidiSQL, which uses plink, I log in fine. But it seems to keep reconnecting via SSH every time I do something, and after 2-4 of these reconnects, I get the same problem with the VM being unreachable by SSH and my website hosted on it being down.
Using SQLyog, which seems to maintain the one SSH connection, rather than constantly reconnecting like HeidiSQL, I have no problems.
When my website is down, I use those "down for everyone or just me" websites to see if it is down, and apparently it's just down for me, so I must be getting blocked.
So I guess my questions are:
1. Is this normal?
2. Can I unblock myself without restarting the VM?
3. Can I make blocking occur in a less strict way?
4. Why does HeidiSQL keep reconnecting via SSH rather than maintaining the one connection like SQLyog seems to?
You have encountered sshguard, which is enabled by default on the GCE Ubuntu images (at least on the 14.10 image, where I encountered it myself). There is a whitelist file at /etc/sshguard/whitelist.
The sshguard default configuration on my VM has a "dangerousness" threshold of 40. Most "attacks" that sshguard detects incur dangerousness of 10, so getting blocked after 4 reconnects sounds about right.
The attack signatures are listed here: http://www.sshguard.net/docs/reference/attack-signatures/
I would bet that you are connecting from an IP that has an invalid reverse DNS configuration (I was). Four connects like that and the default config blocks you for 20 minutes.

Google Cloud server (GCE), custom image, SSH login issue

I'm playing with Google Compute Engine (GCE) as I'm planning to migrate the cloud service provider from Rackspace (reason: GCE has good upgrade plans with the best discount price).
I have a few issues with GCE and one of them is that the Ubuntu OS/image is not supported by default. But there is an alternate method to run any Linux distro in GCE, which is called Building an image from scratch, for uploading custom images and creating instances (servers) from the uploaded image.
I was able to create and run instances from the Ubuntu image I uploaded to GCE following the link hagikuratakeshi.hatenablog.com. This is simply running Ubuntu in general. I didn't face any problem, but Google's gcutil tool prompts for an ssh passphrase and adds the key to the GCE metadata but accepts only password logins (then why does it prompt for a passphrase?).
I want to strictly follow Building an image from scratch as recommended by Google. But after following all the steps, I was not able to log in to my server instance via SSH. I guess this happens when I install the Google Compute Engine image packages: google-startup-scripts_1.1.2-1_all.deb, google-compute-daemon_1.1.2-1_all.deb & python-gcimagebundle_1.1.2-1_all.deb. These packages/scripts make some changes to the instance at startup and also to the SSH configuration, which are strongly recommended. Once I strictly follow the link, or once I install these packages, I am not able to establish an SSH connection after the instance is rebooted. An error message similar to the one below is shown while trying to connect:
test@machine1:~$ gcutil --service_version="v1" --project="mypro-555" ssh --zone="asia-east1-a" "server-instance-1"
INFO: Running command line: ssh o UserKnownHostsFile=/dev/null -o CheckHostIP=no -o StrictHostKeyChecking=no -i /home/test/.ssh/google_compute_engine -A -p 22 test@101.167.xxx.xxx -
ssh: connect to host 101.167.xxx.xxx port 22: Connection refused
NOTE: The user account test is available and common on both the local machine and the GCE server!
My main problem is the SSH connection when I strictly follow the steps. If I upload the fresh image and then follow the recommended steps for connecting over SSH, I cannot SSH in again once I restart the instance; or, if I set up everything in the image before uploading it, the created instance will be running but I am not able to connect even once, and the error is the same.
Is anybody using GCE with a custom image? Are you able to connect even after following the recommended settings? Has anyone already fixed this SSH issue? Please post your comments!
EDIT 1
I could not figure out from the logs and here is the output of gcutil getserialportoutput server-instance-1.
The key here is that your ssh client says "connection refused". This indicates that there is indeed a machine at that IP address, but it's not accepting SSH connections. There are a few possible explanations:
The ssh daemon isn't running, or is listening on the wrong interface
Your instance is configured with a firewall that's denying SSH traffic
The GCE firewall rule to allow SSH traffic has been removed