We are using an AIX box and we observe session disconnections after about an hour, whether they are normal PuTTY connections or connections via PL/SQL Developer or any other utility.
Is there any parameter that we can change or check for this issue?
You can check TMOUT (often set to 3600 seconds; I prefer 0, which disables the timeout).
grep TMOUT /etc/prof* $HOME/.profile
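If TMOUT is being set, a minimal fix (assuming the system profile does not mark the variable read-only) is to override it in your own profile:
# in $HOME/.profile: disable the shell's idle timeout for this user
TMOUT=0
export TMOUT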
I use SSH to connect to my AWS EC2 instances and run code that takes a long time to complete. I find that if my local computer sleeps (or even if I leave it unattended for a while) the SSH connection hangs up (which is not fatal in itself), but this seems to terminate the code on the EC2 instance that I launched using SSH.
Also, I use SSH to locally monitor the execution of my remote code, so even if there is a way to tell the remote process to stay alive after SSH has gone, I still need a way to see the output of the process locally as it continues to run (without SSH).
How do I keep code running on my AWS EC2 instance after SSH has hung up; how can I monitor the output of such a process?
When you close your tty (an SSH disconnect, in your case), your process receives a SIGHUP, and the default action on SIGHUP is to terminate. To avoid that, you can launch your command with nohup, which makes it ignore SIGHUP, or trap and ignore SIGHUP in your own code.
There are a bunch of ways to track a background process, but perhaps the easiest is to have it write to a file and read that file from another SSH session. If your process is a command on the command line, you can redirect its standard output and standard error to a file. When such a file keeps getting new content, it is annoying to re-read it after every update, which is where the command tail -f comes in handy.
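For example, a minimal sketch (long_running.py stands in for your actual command, and run.log is an arbitrary file name):
# start the job immune to hangups, capturing stdout and stderr in run.log
nohup python long_running.py > run.log 2>&1 &
# later, from another SSH session, follow the output as it grows
tail -f run.log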
Here is how you can configure your SSH connection to stay alive:
vi ~/.ssh/config # on your client side
add this line to have the client send a keep-alive "null packet" every 120 seconds:
ServerAliveInterval 120
If you control the server side, make a similar change:
vi /etc/ssh/sshd_config
add these lines at the bottom of the config file:
ClientAliveInterval 120
ClientAliveCountMax 720
This is for Linux; YMMV with the equivalent settings on other operating systems. With the values above, the server probes the client every 120 seconds and tolerates 720 unanswered probes, so an unresponsive session is kept alive for up to 24 hours before being dropped.
Use screen
local> ssh ...
remote> screen
remote+screen> python long_running.py ...
You can then detach from screen (Ctrl-A d) and even disconnect from SSH, and when you return by SSHing back in again, you can
remote> screen -r
to reconnect to your running code.
I want to rsync files from my home computer to a cloud server. I am able to set up a continuous rsync with the following:
#!/bin/bash
while :
do
    # upload only *.bz2 archives; skip everything else, including these two scripts
    rsync -rav * --include=*.bz2 --exclude=*.* --exclude=ZIP.sh --exclude=UPLOAD.sh \
        --chmod=a+rwx user@server.com:/home/user/date
    sleep 180
done
This of course will run continuously if I set up an SSH key pair, as described here. But I want to run the rsync continuously while entering the password only the first time, and after that have it keep running until I press CTRL+C. Is there a way to do this?
Yes, using SSH connection sharing:
Add this to the top of your ~/.ssh/config file:
ControlMaster auto
ControlPath /tmp/ssh_%r@%n:%p
ControlPersist 8h
Connection sharing means that all your SSH connections to the same server will share the same connection. This means you can skip the authentication process for all but the first connection. The ControlPersist setting controls how long the connection will idle for before being closed (8 hours means I can login in the morning, and the connection will still be active at the end of the day, but will have expired before the next day).
The ControlPath specifies where the cached sockets live. They can be anywhere and be called anything you like, but the /tmp directory will do fine, and the name must be unique for each user, server, and port you wish to use, or else you'll get clashes.
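With that in place, only the first connection prompts for a password; everything after it, including the connections rsync opens, reuses the master socket. A sketch, assuming the user/host from the script above and that UPLOAD.sh is that script:
# authenticate once; this connection becomes the shared master
ssh user@server.com true
# the rsync loop now piggybacks on the master socket with no password prompt
./UPLOAD.sh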
Incidentally, you should probably check out the lsyncd tool as an alternative to continuous active scanning. It uses kernel notifications to watch the file-system, and launches rsync only when something actually changes.
Ansible seems to be sending SIGHUP signals at the end of (certain?) tasks. This is a problem as the tasks call a bash script which in turn starts a server instance.
Now if the closing of Ansible's SSH session sends a SIGHUP, this will actually shut down the server - the start of which was the key point of the Ansible task in the first place.
So, is there a way I can guarantee that Ansible will use SSH in a way that will not send a SIGHUP signal when closing the task/session?
I could theoretically start the bash script using nohup. But this seems like a dirty workaround, as I know that SSH is capable of doing what I want: if I manually compose and pass the script command via SSH like this:
ssh user@server "scriptToStartMyServer.sh params"
... then it works fine and no SIGHUP is sent (so the server survives and is not shut down immediately after being started).
Edit:
Sadly, we cannot avoid using these bash scripts to start the servers, and we cannot really change them, as they were given to us by the customer.
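For reference, a variant of the workaround sketched in the question uses setsid to detach the script from the controlling terminal entirely, so no SIGHUP can reach it when Ansible's SSH session closes (the log path is an assumption):
# run the customer's script in its own session, detached from the tty
setsid scriptToStartMyServer.sh params < /dev/null > /tmp/myserver.log 2>&1 &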
I've got a problem with my Go server.
When I'm connected to my NAS via SSH and run ./gogs web, the server starts. But when I close the SSH connection, the server stops.
How can I keep my Go server running permanently?
You have scripts in gogs allowing you to launch the server as a daemon:
scripts/init/debian/gogs (recently fixed with issue 519)
scripts/init/centos/gogs
That allows the process to keep running after the session is closed.
You have other options in issue 172.
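For example, on a Debian-based system the shipped script could be installed roughly like this (the target paths and service commands are assumptions for a classic sysvinit setup):
sudo cp scripts/init/debian/gogs /etc/init.d/gogs
sudo chmod +x /etc/init.d/gogs
sudo update-rc.d gogs defaults
sudo service gogs start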
This is not a Go-specific problem. What is happening is that the Go program is still attached to your terminal, and when you log out, the kernel sends a SIGHUP to every process still attached to that terminal session.
Your best option is probably to use nohup ./gogs web.
The second-best option would be to rewrite main so that it intercepts and handles SIGHUP, stopping it from killing your program. However, doing so requires handling quite a few things properly (you really should close stdin, stdout, and stderr; make sure all your logging goes through the log library; ...).
I'm using MediaTemple's Grid Server (shared/grid hosting) to run some MySQL/PHP sites I'm writing and noticed that I wasn't closing one of my MySQL connections, which caused my site to error out:
"Too Many Connections"
I can't log in anywhere to close the connections manually.
Is there any way to close the open connections using a script or some other type of command?
Should I just wait?
If you can't log into MySQL at all, you will probably have to contact your hosting provider to kill the connections.
If you can use the MySQL shell, you can use the SHOW PROCESSLIST command to view the connections, then use the KILL command to remove them.
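For instance, from the command line (admin is a placeholder account with the PROCESS privilege, and 12345 stands for an Id taken from the processlist output):
mysql -u admin -p -e "SHOW PROCESSLIST"
mysql -u admin -p -e "KILL 12345"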
It's been my experience that hung SQL connections tend to stay that way, unfortunately.
Blindly going in and terminating connections is not the way to solve this problem. First you need to understand why you are running out of connections: is your max_connections setting sized correctly for the maximum/anticipated number of users? Are you using persistent connections when you really don't need them? Etc.
Make sure that you're closing the connections with your PHP code. Also, you could increase the maximum connections allowed in /etc/my.cnf.
max_connections=500
Finally, you can log in to a mysql prompt and type SHOW STATUS or SHOW PROCESSLIST to view various statistics about your server.
If all else fails, restarting the server daemon should clear the persistent connections.
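A sketch of that last resort (the service name and init mechanism vary by distribution):
sudo /etc/init.d/mysql restart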
Well, if you cannot ever sneak in with a connection, I don't know, but if you can occasionally sneak in, in Ruby it would be close to:
require 'mysql'

mysql = Mysql.new(ip, user, pass)
processlist = mysql.query("show full processlist")
killed = 0
processlist.each do |process|
  mysql.query("KILL #{process[0].to_i}")  # first column of the processlist is the Id
  killed += 1
end
puts "#{Time.new} -- killed: #{killed} connections"
If you can access the command line with enough privileges, restart the MySQL server or the Apache server (assuming that you use Apache), because it is probably keeping the connections open. After you have successfully closed the connections, make sure that you are not using persistent connections from PHP. The general opinion seems to be that they don't bring any significant performance gain, but they cause all kinds of problems, like the one you've experienced, and in some cases (like using them with PostgreSQL) they can even significantly slow down your site!