How do I keep my daemon running through an SSH tunnel?

I have been working on an HTTP server which accepts connections, loads the right project from a .so file based on the host name, generates the page the client is asking for, and sends it back.
Now that I have several working projects, I am interested in making them available to others, but here is my problem:
I connect to my dedicated server through SSH and start my daemon from there, but after a while the pages are no longer accessible because my program is no longer running.
I also get kicked off by the server after a while. I wonder:
How do I keep my server running? Does the fact that I keep getting kicked out by SSH after a little idle time explain why my daemon is being shut down?
Thanks in advance to whoever can give me some element of an answer.

When your SSH session times out, SIGHUP is sent to the sub-processes forked from the current interactive shell. That's why the processes were terminated (and the server is no longer running).
To avoid an idle SSH connection being kicked by the server, set ServerAliveInterval so that the client periodically requests a response from the server (e.g. in ~/.ssh/config):
Host *
ServerAliveInterval 30
To avoid shell sub-process termination, refer to
https://askubuntu.com/questions/348836/keep-the-running-processes-alive-when-disconneting-the-remote-connection/348921#348921
https://askubuntu.com/questions/349262/run-a-nohup-command-over-ssh-then-disconnect
In short, there are 3 options (sketched together after the list):
nohup
disown / setsid
start the servers in CLI in tmux or screen session on the server
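For example, assuming the daemon binary is ./mydaemon (a hypothetical name), each option looks roughly like this:
# Option 1: nohup ignores SIGHUP and redirects output to a file
nohup ./mydaemon > mydaemon.log 2>&1 &
# Option 2: background the process, then remove it from the shell's job table
./mydaemon > mydaemon.log 2>&1 &
disown
# Option 3: run it inside a tmux session that survives the disconnect
tmux new-session -d -s mydaemon './mydaemon'
# reattach later with: tmux attach -t mydaemon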
NOTE: If the server instances are already properly daemonized, try looking at monit or supervisord to keep them running ;-D
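As a hedged example, a minimal supervisord entry might look like the following (the program name, binary path, and conf.d location are assumptions; check your distribution):
; /etc/supervisor/conf.d/mydaemon.conf
[program:mydaemon]
command=/home/me/bin/mydaemon
autostart=true
autorestart=true
stdout_logfile=/var/log/mydaemon.log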

SSH client does not time out when the connection to the server has been disconnected

I think there is a simple answer to this question, but everything I find online is about preventing SSH client connections from timing out.
In this case, the client has established a connection to the server, and remains connected. Then the connection is disrupted, say the ethernet cable is unplugged, or the router is powered off.
When this happens, the client connection is not dropped.
The ssh client connection is part of a script and the line that performs the ssh login looks like this:
ssh -Nn script@example.com
The .ssh/config contains the following parameters:
Host *
ServerAliveInterval 60
ServerAliveCountMax 2
When these disconnects occur, I'd like the client ssh connection to time out and allow the script to attempt to reconnect...
Thanks!
I guess I was wrong about this being a simple question, since no one was able to provide an answer.
My further reading and asking led to one reply on the openssh IRC channel, around 2022-06-06. I was advised that the options:
ServerAliveInterval 60
ServerAliveCountMax 2
often don't disconnect the client as one might expect.
The ssh_config man page:
ServerAliveCountMax
Sets the number of server alive messages (see below) which may be sent without ssh(1) receiving any messages back from the server. If this threshold is reached while server alive messages are being sent, ssh will disconnect from the server, terminating the session...
The default value is 3. If, for example, ServerAliveInterval (see below) is set to 15 and ServerAliveCountMax is left at the default, if the server becomes unresponsive, ssh will disconnect after approximately 45 seconds.
Seems to pretty conclusively state that disconnecting on lack of server response is the intention of these parameters. However, in practice this doesn't happen in all cases. Maybe the caveat here is: "while server alive messages are being sent"?
If the application calls for a reliable client disconnect when the server becomes unresponsive, the advice was to implement an external method, separate from the ssh client login script, that monitors server responsiveness, and kills the ssh client process on timeout.
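A rough sketch of that advice (the host matches the example above; the timings, and the script itself, are my own assumptions, not from the IRC discussion):
#!/bin/sh
# Watchdog: probe the server out-of-band and kill the ssh client when probes fail.
# Assumes exactly one matching ssh process is running.
SSH_PID=$(pgrep -f 'ssh -Nn script@example.com')
while true; do
    # BatchMode/ConnectTimeout make the probe fail fast instead of hanging
    if ! ssh -o BatchMode=yes -o ConnectTimeout=10 script@example.com true; then
        echo "server unresponsive, killing ssh client pid $SSH_PID"
        kill "$SSH_PID"
        exit 1
    fi
    sleep 60
done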

Why would Redis allow client to shutdown server?

I installed Redis on my computer and opened one redis-server and two redis-cli sessions. If I type the "shutdown save" command in the first redis-cli terminal, it closes both the server and that redis-cli. The second redis-cli then can't communicate with the redis-server anymore, because it has already been shut down by the other client. That just doesn't make sense to me. IMO, a server is a standalone service and should always be running; a client should be able to connect to and disconnect from a server, but never disable it. Why would Redis allow a client to disable a server that could be shared by many other clients? If the Redis server is on a remote machine and the clients are on other machines, wouldn't it be very dangerous that one client shutting down the remote server affects all the others?
If you don't want clients to execute the SHUTDOWN command (or any other, for that matter), you can use the rename-command configuration directive.
As of the upcoming Redis v6, ACLs are expected to provide a finer degree of control over admin and application commands.
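For example, in redis.conf (the replacement name below is made up):
# disable SHUTDOWN entirely
rename-command SHUTDOWN ""
# ...or instead hide it behind a hard-to-guess name that only admins know
rename-command SHUTDOWN e61a9b_shutdown_0db3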
No, I think you are getting it wrong. It's the application's responsibility to allow or disallow specific actions on the remote server. You can simply disallow certain commands so that a single cli cannot take down the redis-server.

Stressing out SSHD server with multiple requests from same server

I am doing a customer demo where I need to stress an sshd server with repeated sequential requests, so I wrote a small shell script with a loop in it (sketched below). The first connection is successful, but sshd refuses connections immediately after the first one, so all my subsequent requests fail.
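A minimal version of that loop might look like this (host, port, and count are hypothetical):
#!/bin/sh
# 100 sequential connections against the containerized sshd
for i in $(seq 1 100); do
    ssh -p 2222 testuser@localhost true || echo "connection $i refused"
done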
Right now sshd is running in a Docker container and I am running the script from the host, so no external factor such as a network proxy is in the picture here.
So far I have checked the following things:
The sshd config file contains the following line (I bumped up the value):
MaxStartups 100:300:600
Checked everything here: http://edoceo.com/notabene/ssh-exchange-identification
I have been googling around for what could be the problem (too many links to post here). Any ideas?
OK, so the sshd daemon was being spawned in debug mode, in which it does not fork and gets killed after one connection. I put it back in regular mode and now the test is flying :)
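For reference (assuming OpenSSH), the difference is between the debug flag, which serves exactly one connection and exits, and plain foreground mode, which still forks a child per connection:
# Debug mode: does not fork, handles a single connection, then exits
/usr/sbin/sshd -d
# Foreground mode (handy in Docker): no daemonizing, but still forks per connection
/usr/sbin/sshd -D -e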

Can the GUI of an RDP session remain active after disconnect

I'm running automated testing procedures that emulate keystrokes and mouse clicks 24/7.
Although it runs fine locally, in an RDP session it stops running once the session is minimized or disconnected. Apparently, the GUI doesn't exist if you can't physically see it on the screen.
There is a registry workaround for keeping the GUI active when the window is minimized, but I know of no way to keep it alive after disconnect.
Ideally, I would run this in the server's Windows console session, which would not care about being disconnected, but in a hosted environment (I tried Amazon and GoDaddy) there is no way to access the console session.
Does anyone know how I can get around this? Basically, any solution that allows me to run my application on a VPS: I need the reliability of a host but the flexibility to run it as if I were sitting right in front of it.
Yes, you can.
There are two types of sessions in Windows: the "console" session, which is always active and of which there can be at most one, and "terminal" sessions, à la RDP. Using "rdpwrap" from GitHub, you can have an unlimited number of terminal sessions.
RDP sessions become "deactivated" when there is no connection to them. Programs will still run, but anything that depends on GUI interaction will break badly.
Luckily, instead of disconnecting from Remote Desktop normally, we can "convert" a terminal session into a console session by running the following command from inside the terminal session (batch-file syntax; use %s instead of %%s when typing it directly at a prompt):
for /f "skip=1 tokens=3" %%s in ('query user %USERNAME%') do (tscon.exe %%s /dest:console)
This will disconnect you from the session, but it will keep running with full graphical context, which answers your question. You can reconnect to it and it will become a terminal session again; you can do this indefinitely. And, of course, AutoHotkey works perfectly.
But, what if you need more than one persistent, graphics-enabled session?
To get an unlimited amount of graphics-persistent sessions, you can run Remote Desktop and start terminal sessions from within the "main" session described above. Normally Remote Desktop prevents this "loopback" behavior, but if you specify "127.0.0.2" for the destination, you will be able to start a terminal session with any number of the users on the remote machine.
Graphics persistence will only be present in terminal sessions if they are not minimized, unless you create and set RemoteDesktop_SuppressWhenMinimized to 2 at the following registry location:
HKEY_LOCAL_MACHINE\Software\Microsoft\Terminal Server Client
With this you can get an unlimited number of completely independent graphics-persistent remote sessions from a single machine.
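Setting that value from a command prompt might look like this (a sketch; the command writes to HKLM, so run it as administrator):
reg add "HKLM\Software\Microsoft\Terminal Server Client" /v RemoteDesktop_SuppressWhenMinimized /t REG_DWORD /d 2 /f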
This could be a workaround, although I have not tried it myself, and it involves having another machine.
Let's assume that at the moment you are creating a session to myserver.com
Local Client ----> myserver.com
Instead of doing that, you could try having a separate server (let's call it myslave.com) and use that to establish a session
Local Client ----> myslave.com ----> myserver.com
Then, if you disconnect the Local Client ----> myslave.com session, the GUI of the session between myslave.com ----> myserver.com should remain active.
It will work only if you are connected to the console session of myslave.com.
I found a similar way. I had the same problem, so I downloaded RDP Wrapper, which lets you configure a multi-session RDP server, and one of the included tools (rdpchecker.exe) lets you connect to localhost, so you can connect to your server from the server itself and you don't need that middle client.
Regarding the workaround above that involves another machine: if you are using a Windows server, you don't even need one.
1) Connect to the server with the Remote Desktop connection (#con1).
2) Create a new alias for your server system, like "127.0.0.2", in Windows\System32\drivers\etc\hosts (see the example line after this list).
3) Now establish a new Remote Desktop connection from your Windows server (inside #con1) to itself (#con2).
4) Finally, start your GUI-dependent application, e.g. UiPath, in #con2 and then close #con1.
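For step 2, the hosts entry is a single line mapping a made-up alias name to the extra loopback address, for example:
127.0.0.2    myserver-loopback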
I ran into the same problem and noticed that using VNC (TightVNC) to take over the remote machine seems to solve the issue. I guess VNC uses the console screen. Once activated and logged in, it stays logged in, even after a VNC disconnect. Make sure that the screen never turns off in the power options.
Note that keeping the console logged in on a VPS is in general not recommended.

Amazon EC2 ssh timeout due inactivity

I am able to issue commands to my EC2 instances via SSH, and these commands log output which I'm supposed to keep watching for a long time. The bad thing is that the SSH connection is closed after some idle time and I'm no longer able to see what's going on with my instances.
How can I disable or increase this timeout on Amazon Linux machines?
The error looks like this:
Read from remote host ec2-50-17-48-222.compute-1.amazonaws.com: Connection reset by peer
You can set a keep-alive option in the ~/.ssh/config file in your home directory on your own computer:
ServerAliveInterval 50
Amazon AWS usually drops your connection after only 60 seconds of inactivity, so this option will ping the server every 50 seconds and keep you connected indefinitely.
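If you prefer to scope it rather than apply it globally, the stanza might look like this (the host pattern is an assumption; adjust it to your instances):
Host *.compute-1.amazonaws.com
    ServerAliveInterval 50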
Assuming your Amazon EC2 instance is running Linux (and the very likely case that you are using SSH-2, not SSH-1), the following should work pretty handily:
Remote into your EC2 instance.
ssh -i <YOUR_PRIVATE_KEY_FILE>.pem <INTERNET_ADDRESS_OF_YOUR_INSTANCE>
Add a "client-alive" directive to the instance's SSH-server configuration file.
echo 'ClientAliveInterval 60' | sudo tee --append /etc/ssh/sshd_config
Restart or reload the SSH server so it picks up the configuration change.
The command for that on Ubuntu Linux would be:
sudo service ssh restart
On any other Linux, though, the following is probably correct:
sudo service sshd restart
Disconnect.
logout
The next time you SSH into that EC2 instance, those super-annoying frequent connection freezes/timeouts/drops should hopefully be gone.
This also helps with Google Compute Engine instances, which come with similarly annoying default settings.
Warning: note that the TCPKeepAlive settings (which also exist) are subtly yet distinctly different from the ClientAlive settings I propose above, and changing TCPKeepAlive from its default may actually hurt rather than help your situation.
More info here: http://man.openbsd.org/?query=sshd_config
Consider using screen or byobu and the problem will likely go away. What's more, even if the connection is lost, you can reconnect and restore access to the same terminal screen you had before, via screen -r or byobu -r.
byobu is an enhancement for screen, and has a wonderful set of options, such as an estimate of EC2 costs.
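A typical round trip with screen looks like this (the session name is made up):
screen -S watch-logs     # start a named session on the instance
# run the long-lived command, then detach with Ctrl-a d
screen -r watch-logs     # reattach later, even after a dropped connection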
I know that for PuTTY you can use a keepalive setting so it sends an activity packet every so often, so the connection doesn't go "idle" or "stale":
http://the.earth.li/~sgtatham/putty/0.55/htmldoc/Chapter4.html#S4.13.4
If you are using another client, let me know.
You can use MobaXterm, a free tabbed SSH terminal, with the following setting:
Settings -> Configuration -> SSH -> SSH keepalive
Remember to restart the MobaXterm app after changing the setting.
I have 10+ custom AMIs, all based on Amazon Linux AMIs, and I've never run into any timeout issues due to inactivity on an SSH connection. I've had connections stay open for more than 24 hours without running a single command. I don't think there are any timeouts built into the Amazon Linux AMIs.