I am using a GPU server over a remote SSH connection, with the 'screen' command to keep Jupyter running even when the SSH console window is closed.
In principle it should keep running while I'm away. Nonetheless, the scripts running in Jupyter stop whenever I leave my seat. Some of these scripts need many hours to produce their end results, but I can never get them to finish.
The GPU server itself has NOT turned off, so I don't think memory is the issue.
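For reference, my workflow is roughly the following (the session name and port are just examples):
# start a named screen session on the GPU server
screen -S jupyter
# inside the session, launch Jupyter without opening a browser
jupyter notebook --no-browser --port 8888
# detach with Ctrl-A then D, close the SSH window, and later re-attach with:
screen -r jupyter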
Has anyone got a better understanding of this matter and a way to fix it?
Kindly respond. Thank you.
Related
I'm using JetBrains Gateway with IntelliJ to work on code on a remote Linux machine.
I connect via SSH. It connects and works fine initially, with latency around 150 ms, but it keeps disconnecting. I realized it disconnects when I stop coding and spend some time on other things (browser, etc.). When I want to reconnect, it asks for the SSH key passphrase; I have to enter it multiple times, and the 'Save Permanently' option doesn't work.
Is there a setting I can change to keep it connected?
Could this be related to my company's network? If so, is there a setting to increase the timeout or retry on failure?
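In case it matters, one thing I've been experimenting with is SSH keep-alives in my ~/.ssh/config, though I'm not sure Gateway honours this file (the host alias and values below are just guesses):
# placeholder alias for the remote machine
Host remote-dev
    # send a keep-alive probe every 30 seconds and tolerate a few missed replies
    ServerAliveInterval 30
    ServerAliveCountMax 10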
This is Max from the remote development team at JetBrains.
I'm sorry to hear you are having trouble with Gateway. Please create a ticket here https://youtrack.jetbrains.com/newIssue?project=GTW describing your issue and attaching your logs. This will help us understand what's going on and resolve your issue.
Thanks!
I started using the Remote Development feature of IntelliJ, connecting to the same remote machine, as described here: https://www.jetbrains.com/help/idea/remote-development-starting-page.html
It does not disconnect; I can work the whole day (tested 12+ hours) without any connection issues.
I'm using Google Colab to run a server with ngrok and it's amazing, but every time I leave, it disconnects and my server stops for good. It makes sense for that to happen, but is there a way or a loophole around it? Is there a device I can keep this running on? I've used the while True: pass method and it works, but it requires me to keep the tab open, and I leave my computer closed a lot. Is there a web hosting service that can keep a webpage running on a server forever?
I suggest looking at this topic.
Also note that with the free version of Colab, your maximum connection time is 12 hours, no matter what. If you upgrade to the Pro version, that should be extended to a maximum of 24 hours. Look here for more details.
I'm newly moving from a Linux working environment to Windows, and I'm mainly using local port forwarding + PyCharm to run my Python code on a server that is a double hop away from my laptop.
I am able to establish the SSH tunnel through Windows cmd, the MobaXterm local terminal, or the MobaXterm tunneling tool. It works fine in PyCharm when I check it from Tools / Deployment / Configuration / Test Connection, and I can also see the files on the remote server. But every time I start PyCharm, it shows two background processes, "updating python interpreter" and "updating pycharm helper", and the progress bar simply does not move at all! And I cannot run Python on the remote server, because PyCharm says I lack the Python helpers.
And weirdest of all, while it is running these two processes, my terminal for the local port forwarding freezes and I cannot type commands on the jump server. And when I try to recheck the connection, it turns out that the connection fails.
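For reference, the double-hop forwarding I set up from the Windows side looks roughly like this (user, host names and port are placeholders):
# forward local port 2222 through the jump host to SSH on the target server
ssh -L 2222:target-server:22 user@jump-host
# PyCharm's deployment/interpreter configuration then points at localhost:2222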
My SSH tunneling + PyCharm deployment used to work fine on my Ubuntu machine. Thanks to anybody who can shed light on my confusion!
Well, thanks everyone, I have solved this problem.
The reason is simple: I just did not notice that the size of ~/.pycharm_helper was actually changing during the process, even though the GUI progress bar was not moving.
So it was due to the inconvenient double hop and the low Internet speed. I left it running in my dorm for a whole night while I was out, and it came out just fine.
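In case anyone hits the same thing: you can confirm the upload is still progressing by watching the size of that directory on the server (path as on my setup; it may differ):
# re-run du every 10 seconds to see the helper directory grow
watch -n 10 du -sh ~/.pycharm_helper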
I am facing a peculiar problem: I run a job that performs deep learning model training under TensorFlow, and that job dies prematurely without any apparent warning. There is no syntax error in the code, but the job dies about half an hour after it starts. The syslog does not show anything pertaining to the problem, but I do see large gaps in the syslog timestamps around the time my job fails.
I am connected to an Ubuntu 18.04 LTS server through SSH. Whether I log out or stay connected to the server, my job dies after 30-40 minutes. The one thing I see consistently in the syslog around the time my job fails is the Airflow_Temperature_Cel warning.
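For reference, this is how I'm pulling those warnings out of the log (standard syslog path on Ubuntu 18.04):
# show every occurrence of the warning, with line numbers
grep -n Airflow_Temperature_Cel /var/log/syslog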
I'd appreciate any help regarding this freak issue I am facing.
This has happened to me on multiple occasions and I can't seem to pinpoint the cause of it.
Whenever I try to shut down or reboot the Raspberry Pi via an SSH connection, the system broadcasts that it's halting, but doesn't close the SSH connection. Instead it's left hanging until I type something a minute or so later and get a "Broken pipe" error.
The weird thing about this is that it's random across installs.
On my Pi B, Rev 1, the connection closes. Initially this was also the case on my Pi 3, but after a reinstall of Raspbian it stopped closing. Another reinstall fixed it, but yesterday I reinstalled again and the problem came back.
It seems that I'm the only one who has this problem (or at least the only one who has asked about it online), so I thought I'd pick the brains of whoever stumbles upon this question. Anyone have any idea why this happens?
P.S. It doesn't happen on my other servers, only on the Pis.
This probably happens as a result of the order of the steps performed during system shutdown.
The recommended solution is installing libpam-systemd and dbus and making sure that UsePAM is enabled in sshd_config:
apt-get install libpam-systemd dbus
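Assuming the default config location, check that /etc/ssh/sshd_config contains:
UsePAM yes
and restart the SSH service afterwards (e.g. systemctl restart ssh) or reboot.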
See the following links for a more detailed explanation:
https://serverfault.com/questions/706475/ssh-sessions-hang-on-shutdown-reboot
https://unix.stackexchange.com/questions/216950/after-sending-shutdown-command-ssh-session-doesnt-terminate