I have a problem keeping my program running over SSH when my laptop (a Mac) loses its Wi-Fi connection. I was running a Python program remotely over SSH on a server, and before running the code I created a new screen session by entering 'screen'. I then ran the program and pressed Ctrl+A, D to detach from the screen. Everything looked fine, and the program kept running while the laptop was closed (in a place with Wi-Fi). However, when I took my laptop outside for several minutes and reopened it, it showed 'write fail: broken pipe' and the program stopped. I guess this happened because the laptop lost its network connection. Is there any way to fix this so that I can take my laptop anywhere and keep my program running?
Start screen on the remote server after SSHing in, so the persistent session lives there, not on your local box.
If you did that, note that you'll still be disconnected if you lose the connection, but you can then SSH in again and re-attach to the screen session to get back to work.
local$ ssh remote.server
remote$ screen -ls # list screens
remote$ screen -dr <screen name> # force reconnect to screen session
edit:
With screen you get a persistent session that you can restore. The session lives on the machine where you start it, so if you want to keep something running on the remote server, first SSH in and then start screen on the remote.
If you lose the connection, only your SSH connection is terminated: you'll be detached from your screen session, but the session itself keeps running. You can SSH in again and reconnect to it.
Try this:
local$ ssh remote.server
remote$ screen -S date
# screen starts a session named 'date'. If it's the first time you start screen on
# this box, it may display a welcome message where you need to press enter
remote-screen$ while true; do date; sleep 1; done
# this will show the time every second
# disconnect your network: the ssh connection will be terminated
# open console again and continue
local$ ssh remote.server
remote$ screen -dr date
After re-connecting to the screen session you should see the dates still going without any pause.
I use SSH to connect to my AWS EC2 instances and run code that takes a long time to complete. I find that if my local computer sleeps (or even if I leave it unattended for a bit) the SSH connection hangs up (which is not fatal in itself) but this seems to terminate the code on the EC2 instance that I launched using SSH.
Also, I use SSH to locally monitor the execution of my remote code, so even if there's a way to tell the remote process to stay alive after SSH has gone, I still need a way to locally see the output of the process as it continues to run (without SSH).
How do I keep code running on my AWS EC2 instance after SSH has hung up; how can I monitor the output of such a process?
When you close your tty (your SSH connection, in this case) your process gets a SIGHUP, and the default action on SIGHUP is to terminate. To avoid that, you can launch your command with nohup so that it ignores SIGHUP, or trap SIGHUP in your code and ignore it.
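For example, a minimal sketch (long_job.py is a hypothetical stand-in for your own program):
remote$ nohup python long_job.py &
# nohup makes the command ignore SIGHUP, so it survives the SSH session closing;
# with no redirection, nohup appends the command's output to nohup.out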
There are a bunch of ways to track a background process, but perhaps the easiest is to have it write to a file, and in another SSH session you can read that file. If your process is a command on the command line, you can redirect its standard output and standard error to a file. When such a file keeps getting new content, it's annoying to keep re-reading it, which is where "tail -f" comes in handy.
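Putting the two together (job.log is just an assumed file name):
remote$ nohup python long_job.py > job.log 2>&1 &
# later, from a new SSH session:
remote$ tail -f job.log
# tail -f follows the file and prints new lines as they are appended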
Here is how you can configure your SSH connection to stay alive:
vi ~/.ssh/config # on your client side
add this line to send a "null packet" every 120 seconds:
ServerAliveInterval 120
If you own the server side, make a similar change:
vi /etc/ssh/sshd_config
add these lines at the bottom of the config file:
ClientAliveInterval 120
ClientAliveCountMax 720
This is for Linux; YMMV with these settings on other operating systems.
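If you'd rather not edit the config file, the same client option can be passed for a single connection (this is a standard OpenSSH flag):
local$ ssh -o ServerAliveInterval=120 remote.server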
Use screen
local> ssh ...
remote> screen
remote+screen> python long_running.py ...
You can then detach from screen and even disconnect from SSH, and when you return by SSHing back in again, you can
remote> screen -r
to reconnect to your running code.
I want to rsync files from my home computer to a cloud server. I am able to set up a continuous rsync with the following:
#!/bin/bash
while :
do
    rsync -rav * --include='*.bz2' --exclude='*.*' --exclude=ZIP.sh --exclude=UPLOAD.sh \
        --chmod=a+rwx user@server.com:/home/user/date
    sleep 180
done
This of course will run continuously if I set up SSH keys as described here. I want to run the rsync continuously, entering the password only the first time, and after that have it run until I press Ctrl+C. Is there a way to do this?
Yes, using SSH connection sharing:
Add this to the top of your ~/.ssh/config file:
ControlMaster auto
ControlPath /tmp/ssh_%r@%n:%p
ControlPersist 8h
Connection sharing means that all your SSH connections to the same server will share the same connection. This means you can skip the authentication process for all but the first connection. The ControlPersist setting controls how long the connection will idle for before being closed (8 hours means I can login in the morning, and the connection will still be active at the end of the day, but will have expired before the next day).
The ControlPath specifies where the cached sockets will live. It can be anywhere you like, and they can be called anything you like, but the /tmp directory will do fine, and the name must be unique to each user, server, and port you wish to use, or else you'll get clashes.
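With that in place, authenticate once by hand, and every later ssh/rsync/scp to the same server reuses the master connection (host and user are just examples):
local$ ssh user@server.com true # enter the password once; the master connection persists
local$ ssh -O check user@server.com # optional: confirm the master is still running
After that, the rsync loop above runs without prompting for a password until the master connection expires.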
Incidentally, you should probably check out the lsyncd tool as an alternative to continuous active scanning. It uses kernel notifications to watch the file-system, and launches rsync only when something actually changes.
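A minimal invocation looks something like this (syntax may vary by lsyncd version; paths are examples):
local$ lsyncd -rsyncssh /path/to/local/dir user@server.com /home/user/date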
I've got a problem with my Go server.
When I'm connected to my NAS via SSH and run ./gogs web, the server starts. But when I close the SSH connection, the server stops.
How can I keep my Go server running permanently?
You have scripts in gogs allowing you to launch the server as a daemon:
scripts/init/debian/gogs (recently fixed with issue 519)
scripts/init/centos/gogs
That would allow the process to keep running after the session is closed.
You have other options in issue 172.
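For example, on a Debian-based system, installing the init script might look like this (a sketch; you may need to adjust paths inside the script to match where gogs lives):
remote$ sudo cp scripts/init/debian/gogs /etc/init.d/gogs
remote$ sudo chmod +x /etc/init.d/gogs
remote$ sudo update-rc.d gogs defaults
remote$ sudo service gogs start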
This is not a Go-specific problem. What is happening is that the Go program is still attached to your terminal, and when you log out, the kernel sends a SIGHUP to every process still attached to that terminal session.
Your best option is probably to use nohup ./gogs web.
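For example (the log file name is just a suggestion):
remote$ nohup ./gogs web > gogs.log 2>&1 &
# the trailing & puts the server in the background;
# nohup keeps the logout-time SIGHUP from killing it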
The second-best option would be to rewrite main so that it intercepts and handles SIGHUP, stopping it from killing your program. However, doing so requires handling quite a few things properly (you really should close stdin, stdout and stderr; make sure all your logging goes through the log package; ...).
I use PuTTY sessions to talk to an embedded device running QNX 6.4.1 using SSH over TCP/IP.
Today, one of my systems mysteriously won't allow me to have more than one PuTTY session open at a time. If I try to start a second session, I can authenticate with user name and password, but the sign on banner prints out with an extra blank line between each line and messes with my ability to hit enter. I can do nothing that looks remotely valid except Control-C or close the PuTTY window.
I suspected the text file that contains the banner had bad line endings, but it does not.
I suspected terminal setting issues, but if I have one session open it works. With no changes to settings, just trying to open a second session, it does not.
I wondered if the .profile was getting mangled, but that doesn't seem to be the case either.
Now I'm down to "perhaps ssh is messed up and rebooting would fix it?" But I am hesitant to reboot it, because if we lose the TCP/IP connection to it, it's several hours' worth of work (physical labor) to restore.
Any thoughts about what is going wrong and how I can fix it?
I'm connecting using PuTTY 0.62 from 64-bit Windows 7 to QNX 6.4.1. The openssh/openssl version is modern.
UPDATE
The issue came back a few days later. Using Guntram Blohm's suggestion below, I was able to at least get past the "Press enter once you've read the banner" screen. I then typed stty sane followed by Ctrl+J, as he recommended. Here is the output of stty:
Bad, after running stty sane Ctrl+J (and hand-reformatted to be readable):
Name: /dev/ttyp1
Type: pseudo
Opens: 3
+raw +echo
+osflow
intr=^C quit=^\ erase=^? kill=^U eof=^D start=^Q stop=^S susp=^Z
lnext=^V min=01 time=00 pr1=^[ pr2=5B left=44 right=43 up=41
down=42 ins=40 del=50 home=48 end=59
I then opened another PuTTY session immediately after this, and it worked properly. It confuses me how it can work sometimes and not others. How can that happen? What is different?
Good
Name: /dev/ttyp2
Type: pseudo
Opens: 2
+edit
+osflow
intr=^C quit=^\ erase=^? kill=^U eof=^D start=^Q stop=^S susp=^Z
lnext=^V min=01 time=00 pr1=^[ pr2=5B left=44 right=43 up=41
down=42 ins=40 del=50 home=48 end=59
So right now I have a good PuTTY terminal open, and a bad one. What else can I do to isolate this issue?
It's probably another process that uses the pseudo-terminal, puts it into a special state, then crashes without restoring the state. vi comes to mind, or maybe a file upload/download program. These programs change the terminal mode to read each character individually, instead of line by line, and tweak a few other things as well. Normally, logging out and back in SHOULD fix that, but I'm not sure QNX handles it correctly.
One thing you could do to copy the parameters of a working terminal to the messed-up one is to run stty -g on the good one, then paste its output onto the command line of the bad one. Like this (on Linux; I don't have a QNX box at the moment):
(on the good terminal)
gbl@bermuda$ stty -g
500:5:bf:8a3b:3:1c:7f:15:4:0:1:0:11:13:1a:0:12:f:17:16:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0
(on the bad one)
gbl@bermuda$ stty 500:5:bf:8a3b:3:1c:7f:15:4:0:1:0:11:13:1a:0:12:f:17:16:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0
These terminal modes are kept per pseudo tty device, that's why your /dev/ttyp1 can be messed up, while the /dev/ttyp2 that's allocated for the next ssh connection is ok.
I am using some EC2 instances to run large jobs I cannot run locally. The issue I am seeing is that after a while (X hours since the process started) my shell gives me a broken pipe error:
ubuntu@ip-10-122-xxx-xxx:~/stratto/ode$ Write failed: Broken pipe
The instance is still there, since I can reconnect with no problems, but how can I reconnect and get back to seeing the logs of the process as they were before the 'Broken pipe'?
Any tip much appreciated,
Thanks!
Redirect your output to a file, and run the program under nohup to ensure the disconnect doesn't kill it. Use tail -f to monitor the redirected file.
Note: this originally said to use tee, but that won't work; a straight redirect and then tail on the file does.
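A concrete sketch (script and log names are hypothetical):
remote$ nohup ./run_job.sh > job.log 2>&1 &
remote$ exit
# after reconnecting:
remote$ tail -f job.log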
You can use screen to run processes in the cloud even when you are not connected to the server.
sudo apt install screen
To specifically address the issue described in the original post (e.g. connecting to AWS EC2 instances), here is a basic example of using screen; more advanced workflows build on the same commands.
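A minimal sketch (the instance address, session name, and script are examples):
local$ ssh ubuntu@your-ec2-instance
remote$ screen -S job # start a named session
remote-screen$ python long_running.py
# press Ctrl+A then D to detach; the job keeps running
local$ ssh ubuntu@your-ec2-instance # later, reconnect
remote$ screen -r job # re-attach and see your output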
You can use screen here as well: start your process inside a screen session and detach. The session stays active on the server throughout (for example, during a long installation), even if your SSH connection drops.