Toolset:
Laravel 5.2.*
LaravelCollective remote package ^5.2
Let's say I have a route http://example.com/npm. When I hit this route, I process some request parameters and then SSH into a remote server using the LaravelCollective remote package.
After some time I see in my logs that the connection is closed. I know this because that message is logged after the SSH command, so my application tells me that my command was executed successfully.
But when I go and check the server there is no node_modules folder, yet after hitting the route about 10 times it is suddenly there.
That made me think that my connection is closed even though the commands have not finished. To be sure, I started monitoring the processes on the server with the following command:
ps aux
This showed that I get my success message while the process is still running on the server, which means the output I get is not correct, and it causes the follow-up command (gulp production) to fail.
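For reference, a filtered version of that check (assuming the remote command is an npm install; the brackets keep grep itself out of the results):
ps aux | grep '[n]pm'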
I dug a bit into the source code to see whether there is a way to keep that connection open, but no luck so far.
The question: can I keep this connection open until the commands are definitely finished so that my response to the end user is correct?
Related
I've got a problem with my Go server.
When I'm connected to my NAS via SSH and do ./gogs web, the server starts. But when I close the SSH connection, the server stops.
How can I start my Go server permanently?
You have scripts in gogs allowing you to launch the server as a daemon:
scripts/init/debian/gogs (recently fixed with issue 519)
scripts/init/centos/gogs
That would allow the process to keep running after the session is closed.
You have other options in issue 172.
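A minimal sketch of installing the Debian script listed above, assuming the standard Gogs repository layout and a classic init.d setup (adjust paths to your install):
sudo cp scripts/init/debian/gogs /etc/init.d/gogs
sudo chmod +x /etc/init.d/gogs
sudo update-rc.d gogs defaults
sudo service gogs start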
This is not a Go-specific problem: what is happening is that the Go program is still attached to your terminal, and when you log out, the kernel sends a SIGHUP to every process still connected to that terminal session.
Your best option is probably to use nohup ./gogs web.
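For example, assuming you also want the output kept in a log file rather than lost:
nohup ./gogs web > gogs.log 2>&1 &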
The second-best option would be to rewrite main so that it intercepts and handles SIGHUP, stopping it from killing your program. However, doing so requires handling quite a few things properly (you really should close stdin, stdout, and stderr; make sure all your logging is done through the log library; ...).
I am trying to collect the locking code from each server in a farm automatically.
So, I have echoid.exe and a batch file on each remote server.
The batch file simply executes echoid.exe and writes its output into a text file which I can parse.
The problem is that when I trigger the .bat file remotely, it seems like echoid.exe is executed on the host I am sending the command from (through psexec, for example) rather than on the remote host, meaning the locking code output is wrong. If the same .bat file is executed locally (and manually), the results are OK.
Any idea why? Does anyone know how I can run echoid.exe remotely and get the correct results?
I have tried several remote approaches and all failed or brought wrong results :(
Please help!
BTW, all remote machines run Windows.
I made a simple binary application (like a clock) and ran it on the server, but when I close the SSH connection the application is closed as well.
But I want the clock to run all the time, for example.
Then I made a simple service and I want to run it on the server, but I do not know how to install, control, and resume it after I close the SSH connection.
Try adding an & after it to put it into the background; then you can close your SSH session and leave it running. You could also (if the binary is in git) use an action hook to run it (but you still need to include the &), something like:
$OPENSHIFT_DATA_DIR/clock &
If it was located in your $OPENSHIFT_DATA_DIR directory.
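If you go the action hook route, a minimal sketch of what the hook script might contain, assuming a post_start hook and the standard $OPENSHIFT_LOG_DIR variable (the log file name is a placeholder):
#!/bin/bash
# .openshift/action_hooks/post_start -- make sure it is executable
nohup $OPENSHIFT_DATA_DIR/clock > $OPENSHIFT_LOG_DIR/clock.log 2>&1 &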
I've got a Windows 7 machine set up with Vagrant/VirtualBox. Each morning when I try to access my development site I always get an 'Unable to connect' error message in my Firefox browser.
Even though I have booted the machine using the 'vagrant up' command, for some reason the site only seems to be accessible via the browser once I have run the 'vagrant provision' command. This is obviously annoying, as it starts doing everything from scratch, e.g. installing my MySQL database again.
Can anyone shed some light on this and why it seems to fail to connect every time? The provision command is only a temporary fix, as I'll need to amend the DB every time, which is only feasible in the short term.
It might just be my own setup, but I noticed nginx was being restarted, so I have to restart it each time.
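If it really is just nginx that needs kicking, a lighter workaround than a full provision might be something like (service name assumed):
vagrant ssh -c "sudo service nginx restart"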
I am using some EC2 instances to run some large jobs I cannot run locally. The issue I am seeing is that after a while (X hours since the process started) my shell connection gives me a broken pipe error:
ubuntu@ip-10-122-xxx-xxx:~/stratto/ode$ Write failed: Broken pipe
The instance is still there, because I can reconnect with no problems, but how can I reconnect and get back to seeing the logs of the process as they were before the 'Broken pipe'?
Any tip much appreciated,
Thanks!
Redirect your output to a file and run the program with "nohup ..." to ensure the disconnect doesn't kill it. Use "tail -f" to monitor the redirected file.
Note: I originally said to use "tee", but that won't work. I think a straight redirect and then tail on the file works.
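A minimal sketch of that pattern, with a placeholder job script and log file:
nohup ./run_job.sh > job.log 2>&1 &
tail -f job.log
# after reconnecting, just run tail -f job.log again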
You can use screen to run processes in the cloud even when you are not connected to the server.
sudo apt install screen
To specifically address the issue described in the original post (e.g. connecting to AWS EC2 instances), I have put together a basic example and a more advanced example of using screen.
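A minimal sketch of a basic workflow, with a placeholder session name and command:
screen -S bigjob         # start a named session
./run_large_job.sh       # start the long-running work inside it
# detach with Ctrl-a d, then disconnect from SSH safely
screen -r bigjob         # reattach after reconnecting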
You can use "screen". Detach from it and ping to google.com. So there ssh session will be active through out the installation.