How can I start a Go server permanently? - ssh

I've got a problem with my Go server.
When I'm connected to my NAS via SSH and run ./gogs web, the server starts. But when I close the SSH connection, the server stops.
How can I start my Go server permanently?

You have scripts in gogs allowing you to launch the server as a daemon:
scripts/init/debian/gogs (recently fixed with issue 519)
scripts/init/centos/gogs
That would allow the process to keep running after the session is closed.
You have other options in issue 172.
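For example, on a Debian-based NAS, installing the init script usually looks something like this (a sketch; adjust the paths to your setup):

sudo cp scripts/init/debian/gogs /etc/init.d/gogs
sudo chmod +x /etc/init.d/gogs
sudo update-rc.d gogs defaults
sudo /etc/init.d/gogs start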

This is not a Go-specific problem. What is happening is that the Go program is still attached to your terminal, and when you log out, the kernel sends a SIGHUP to every process still attached to that terminal session.
Your best option is probably to use nohup ./gogs web.
The second-best option would be to rewrite main so that it intercepts and handles SIGHUP, stopping the signal from killing your program. However, doing so requires handling quite a few things properly (you really should close stdin, stdout and stderr; make sure all your logging goes through the log library; ...)
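A minimal sketch of that second approach (this is not gogs' actual code; the handler body is illustrative):

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Intercept SIGHUP so the default action (terminate) never fires.
	hup := make(chan os.Signal, 1)
	signal.Notify(hup, syscall.SIGHUP)
	go func() {
		for range hup {
			log.Println("received SIGHUP, ignoring")
		}
	}()

	// ... start the web server as usual ...
}

Remember to point the log output at a file, since the controlling terminal disappears when you log out.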

Related

Laravel keeping remote connection until all commands have finished

Toolset:
Laravel 5.2.*
LaravelCollective remote package ^5.2
Let's say I have a route http://example.com/npm. When I hit this route, I process some request parameters and then SSH into a remote server using the LaravelCollective remote package.
After some time I see in my logs that the connection is closed. I know this because that message is logged after the SSH command, so my application tells me that my command executed successfully.
But when I go and check the server, there is no node_modules folder; yet after hitting the route about ten times, it is suddenly there.
That made me think that my connection is closed even though the commands were not finished. To be sure about that, I started monitoring the processes on the server with the following command:
ps aux
This showed that I got my success message while the process was still running on the server, which means the output I get is not correct, and it causes a follow-up command (gulp production) to fail.
I dug a bit into the source code to see whether there is a way to keep that connection open, but no luck so far.
The question: can I keep this connection open until the commands are definitely finished so that my response to the end user is correct?
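One hedged sketch of a fix, using the package's documented SSH facade (the connection name, paths and logging here are illustrative): chaining the steps with && inside a single run() call makes the remote shell block until every step has actually finished, and the callback captures the real output as it arrives:

SSH::into('production')->run([
    'cd /var/www/app && npm install && gulp production',
], function ($line) {
    // stream the remote output into the Laravel log line by line
    \Log::info(trim($line));
});

If any step is backgrounded on the remote side (e.g. with a trailing &), run() can return before the work is done.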

Run a binary service in OpenShift and control it

I made a simple binary application, like a clock, and ran it on the server, but when I close the SSH connection, the application is closed too.
I want the clock to run all the time, for example.
Then I made a simple service and I want to run it on the server, but I do not know how to install it, control it, and resume it after I close the SSH connection.
Try adding an & after it to put it into the background; then you can close your SSH session and leave it running. You could also (if the binary is in git) use an action hook to run it (but you still need to include the &), something like:
$OPENSHIFT_DATA_DIR/clock &
If it was located in your $OPENSHIFT_DATA_DIR directory.
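If you go the action-hook route, a minimal sketch of a start hook (OpenShift v2 conventions; the file name and log path are assumptions):

#!/bin/bash
# .openshift/action_hooks/start -- runs whenever the gear starts
nohup $OPENSHIFT_DATA_DIR/clock > $OPENSHIFT_LOG_DIR/clock.log 2>&1 &

The nohup and the output redirect keep the process alive and its output captured once there is no terminal attached.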

Vagrant starts but cannot connect until I do a 'force provision'

I've got a Windows 7 machine set up with Vagrant/VirtualBox. Each morning when I try to access my development site, I always get an 'Unable to connect' error message in my Firefox browser.
Even though I have booted the machine using the 'vagrant up' command, for some reason it only seems to be accessible via the browser once I have run the 'vagrant provision' command. This is obviously annoying, as it starts doing things from scratch, e.g. installing my MySQL database again.
Can anyone shed any light on this, and on why it seems to fail to connect all the time? The provision command is only a temporary fix, as I'll need to amend the DB every time, which is only feasible in the short term.
Might just be my own setup, but I noticed nginx needed restarting, so I have to restart it each time.
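If that is the cause, one likely fix is to make those services start on boot inside the guest instead of relying on provisioning; a sketch, assuming an Ubuntu/Debian guest:

# sysvinit guests:
sudo update-rc.d nginx defaults
sudo update-rc.d mysql defaults
# systemd guests:
sudo systemctl enable nginx mysql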

What happens to a process in an EC2 instance when I get a 'Broken Pipe' error on ssh?

I am using some EC2 instances to run some large jobs that I cannot run locally. The issue I am seeing is that after a while (X hours since the process started), my shell gives me a broken pipe error:
ubuntu@ip-10-122-xxx-xxx:~/stratto/ode$ Write failed: Broken pipe
The instance is still there, because I can reconnect with no problems. But how can I reconnect and get back to seeing the logs of the process as before the 'Broken Pipe'?
Any tip much appreciated,
Thanks!
Redirect your output to a file and then run the program with "nohup ..." to ensure the disconnect doesn't kill it. Use "tail -f" to monitor the redirected file.
Note: I originally said to use "tee", but that won't work. I think a straight redirect and then tail on the file works.
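A sketch of that workflow (./myjob and job.log are placeholder names):

nohup ./myjob > job.log 2>&1 &   # survives the SSH session ending
tail -f job.log                  # follow the output; re-run after reconnecting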
You can use screen to run processes in the cloud even when you are not connected to the server.
sudo apt install screen
To specifically address the issue described in the original post (e.g. connecting to AWS EC2 instances), I posted a basic example and a more advanced example of using screen.
You can use "screen". Detach from it and ping to google.com. So there ssh session will be active through out the installation.

Tornado stopped running on AWS immediately after I terminate my remote session

I'm using SSH to remotely launch Tornado on Amazon Web Services. It works fine when I launch it with:
python startTornado.py
However, after my SSH session times out or is terminated, the Tornado server also stops immediately, so I can't access the webpage anymore. I did quite a bit of searching but couldn't find an answer on Google.
How can I keep Tornado and the site running after my SSH session terminates?
The process will shut down when you log out if it's running in the foreground, or if it tries to write to stdout and the terminal it's outputting to no longer exists. Try starting the server with
nohup python startTornado.py &
The nohup command redirects output to a file (nohup.out by default), and the & at the end runs the command in the background. Alternatively, you can use the screen utility, which allows you to detach a terminal and reattach it in a different SSH session (see the screen man page for details).
While all the above solutions solve the immediate problem, what you might really need in order to run such processes in production and control them (start/restart/stop) is supervisor. It is Python-based, and it is especially useful when you have to run multiple instances of Tornado behind nginx.
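A minimal supervisord program section for this case might look like the following (the paths are assumptions; on Debian-style layouts it would go in /etc/supervisor/conf.d/):

[program:tornado]
command=python /home/ubuntu/startTornado.py
directory=/home/ubuntu
autostart=true
autorestart=true
stdout_logfile=/var/log/tornado.out.log
stderr_logfile=/var/log/tornado.err.log

With autorestart=true, supervisor also restarts the server if it crashes, which nohup and screen will not do.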
In addition to nohup, as Kevin has mentioned, you can also use the disown builtin if you are using bash:
disown <job-id>
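For example (assuming the server is job %1, and redirecting output first, since the terminal will go away):

python startTornado.py > tornado.log 2>&1 &
disown %1

Once disowned, the shell no longer sends the job a SIGHUP when you log out.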