SSH forward all X connections from DISPLAY

Right now at work we have a login machine where our home area is located, and all tools are run on the compute farm. To run in GUI mode, I believe the job is submitted to the farm and the selected machine runs the command with the DISPLAY variable set to whatever was in our local environment. This seems to work only with VNC right now; is there any way I can use SSH and still end up with a valid DISPLAY setting?

If you establish an SSH session with X protocol tunneling, you can query the value of the DISPLAY environment variable on the remote side. For example:
client$ ssh -X server
server$ echo $DISPLAY
localhost:17.0
This value is going to be different for each SSH session.
If I'm understanding your environment correctly, you'd need to pre-establish ssh sessions to all of the nodes in the compute farm. Then, when the job runs on a particular compute-farm node, it would have to set DISPLAY to the value belonging to the session you pre-established to that node.
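For instance, a rough sketch of that idea (the node names and the file path are placeholders, and it assumes your home area is shared with the farm nodes):
# Sketch only: open an X-forwarded session to every farm node, record the
# DISPLAY each one was assigned, and keep the sessions alive in the background.
for node in farm01 farm02 farm03; do
    ssh -X -f "$node" 'echo "$(hostname) $DISPLAY" >> ~/farm-displays.txt; sleep infinity'
done
# A job that lands on, say, farm02 could then look up its DISPLAY:
#   export DISPLAY=$(awk '/^farm02 /{print $2}' ~/farm-displays.txt)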

Related

Transparently connect to SSH machine using expect script

I have an expect script which runs ssh and eventually results in a shell. Is it possible to run this script whenever someone ssh's to the matching machine, instead of just using ssh? (ideally by setting some config in ~/.ssh/config)
I'd prefer not to create an alias or wrapper script for ssh that checks what the host is and then runs the appropriate shell.
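If the expect script's job is just to hop through an intermediate host before landing in a shell, one possible alternative (host names here are made up) is to let OpenSSH do the hop natively, which needs nothing but ~/.ssh/config:
# Sketch only: "ssh target" then tunnels through the gateway automatically
Host target
    HostName target.internal.example
    ProxyJump gateway.example.com
This doesn't cover everything expect can do (e.g. answering interactive prompts), but it removes the wrapper for the plain log-in-through-a-gateway case.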

How to properly set up a remote server through nested tmux sessions

I'm trying to set up my remote server so that I can ssh in, start a python process, then detach, logout, and shut down my local computer. I've been able to do this before using nested tmux sessions:
local host -> tmux -> ssh into remote
remote host -> tmux -> start python process -> detach remote tmux
remote host -> "exit" and close down the remote session
This was working just fine for me. I can detach the remote session, close down ssh, shut down everything locally, then boot up, relogin via ssh, and then reattach the remote tmux session.
My issue is that my remote server is now in a lab setting (I now run a lab with multiple people, whereas before it was just me), and I don't want different users to be logging in while a process is running. I'm trying to prevent someone who doesn't know the server is in use from logging in, starting a process, and disrupting (or diverting memory from) a process being run by another user.
My way around this was to set up a generic login user and password that everyone in the lab uses. Then, for that generic user, I edited /etc/security/limits.conf to set maxlogins to 1. While this works in practice (no other user can log in when one is already logged in), it means I can no longer log back in myself.
Now I get:
local host -> tmux -> ssh into remote
remote host -> tmux -> start python process -> remote detach
remote host -> exit ssh
local host -> tmux -> ssh into remote:
Too many logins for 'lab2'.
It appears that, with the process still running, the login stays active, so my attempt to log back in counts as a second login on top of the ongoing one; since I've set the max to 1, I cannot. Does anyone have advice for how to fix this?
Thanks!
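One workaround, assuming it's the detached tmux session that keeps the login count pinned at 1, is to budget for it in /etc/security/limits.conf:
# Sketch only: one slot for the detached tmux session, one for re-attaching
lab2    -    maxlogins    2
The trade-off is that a second independent login also becomes possible, so this buys back re-attachment at the cost of strict exclusivity.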

SSH chaining using PHPSeclib (ssh machine 1, machine 1->machine2, interact)

We've brought up this topic before, but curious if anyone has any new information on this issue.
We use multiple servers that sit behind a "management server", so when we SSH in we have to log in there first and then, from there, log into our destination machine: always at least 2 SSH connections. We currently use port forwarding on the management server by using : which takes us directly through to the server of interest behind the scenes, so it appears as though we're ssh'ing directly into each one.
The issue is that this requires specific setup, and in a scalable environment where servers can be added and removed, the maintenance is cumbersome. Ideally we'd just be able to ssh into multiple machines using phpseclib and run commands.
Has anyone run into this, or does anyone have advice on a solution at the scripting level? Basically we need to chain ssh: ssh into machine 1, then into machine 2 from machine 1, and run commands on and interact with machine 2.
include('Net/SSH2.php');         // phpseclib 1.x; assumes it's on the include path
$ssh = new Net_SSH2('machine1'); // connect to the first hop
$ssh->login('user', 'pass');
$ssh->setTimeout(10);            // don't block forever in read()
$ssh->enablePTY();               // request a PTY so the hop behaves like an interactive shell
$ssh->exec('ssh machine2');      // hop from machine1 to machine2
echo $ssh->read();
At this point (assuming you're using RSA authentication and your private key is in ~/.ssh/id_rsa on machine1) the prompt you get back should be machine2's.
You could connect to a machine3 as well by doing this:
$ssh = new Net_SSH2('machine1');
$ssh->login('user', 'pass');
$ssh->setTimeout(10);
$ssh->enablePTY();
$ssh->exec('ssh machine2');
echo $ssh->read();
$ssh->write("ssh machine3\n"); // type into machine2's shell rather than opening a new exec channel on machine1
echo $ssh->read();

Start an ipython cluster using ssh on a Windows machine

I have a problem setting up an ipython cluster on a Windows server and connecting to this ipcluster over an ssh connection. I tried following the tutorial at https://ipython.org/ipython-doc/dev/parallel/parallel_process.html#ssh, but I have trouble understanding what the options mean exactly and which parameters to use...
Could anyone help a total noob to set up an ipcluster? (Let's say the remote machine has ip 192.168.0.1 and the local machine has 192.168.0.2)
If you scroll roughly to the middle of the page https://ipython.org/ipython-doc/dev/parallel/parallel_process.html#ssh you will find this:
Current limitations of the SSH mode of ipcluster are:
Untested and unsupported on Windows. Would require a working ssh on Windows. Also, we are using shell scripts to setup and execute commands on remote hosts.
That means there is no easy way to build an ipcluster with an ssh connection on Windows (if it works at all).
Do you really need to connect the machines over ssh? I guess it's possible with an ssh client on each Windows machine, but if you are in a trusted local network you can also decide not to bind only to the loopback interface and just expose the ports...
Sure, you can start the controller and engines separately! For further examples about ports (if you have problems with firewalls), see also How to setup ssh tunnel for ipython cluster (ipcluster)
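A rough sketch of that manual approach, using the addresses from the question (and assuming ipcontroller/ipengine are on the PATH of both machines):
# On the remote machine (192.168.0.1): bind the controller to the LAN address
# instead of loopback, so other hosts can reach it.
ipcontroller --ip=192.168.0.1
# Copy the connection file the controller writes (ipcontroller-engine.json,
# under the profile's security/ directory) to the engine machine, then:
ipengine --file=ipcontroller-engine.json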

Amazon EC2 ssh timeout due to inactivity

I am able to issue commands to my EC2 instances via SSH, and these commands log output that I'm supposed to keep watching for a long time. The bad thing is that the SSH connection is closed after some time due to my inactivity, and I'm no longer able to see what's going on with my instances.
How can I disable/increase timeout in Amazon Linux machines?
The error looks like this:
Read from remote host ec2-50-17-48-222.compute-1.amazonaws.com: Connection reset by peer
You can set a keepalive option in the ~/.ssh/config file in your home directory on your local computer:
ServerAliveInterval 50
Amazon AWS usually drops your connection after only 60 seconds of inactivity, so this option makes the client ping the server every 50 seconds and keeps you connected indefinitely.
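A minimal stanza might look like this (ServerAliveCountMax is optional; it caps how many unanswered keepalives the client tolerates before giving up):
# applies to every host; narrow the pattern if you only want it for EC2
Host *
    ServerAliveInterval 50
    ServerAliveCountMax 3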
Assuming your Amazon EC2 instance is running Linux (and the very likely case that you are using SSH-2, not 1), the following should work pretty handily:
Remote into your EC2 instance.
ssh -i <YOUR_PRIVATE_KEY_FILE>.pem <INTERNET_ADDRESS_OF_YOUR_INSTANCE>
Add a "client-alive" directive to the instance's SSH-server configuration file.
echo 'ClientAliveInterval 60' | sudo tee --append /etc/ssh/sshd_config
Restart or reload the SSH server, for it to recognize the configuration change.
The command for that on Ubuntu Linux would be:
sudo service ssh restart
On any other Linux, though, the following is probably correct:
sudo service sshd restart
Disconnect.
logout
The next time you SSH into that EC2 instance, those super-annoying frequent connection freezes/timeouts/drops should hopefully be gone.
This also helps with Google Compute Engine instances, which come with similarly annoying default settings.
Warning: Do note that TCPKeepAlive settings (which also exist) are subtly, yet distinctly different from the ClientAlive settings that I propose above, and that changing TCPKeepAlive settings from the default may actually hurt your situation rather than help.
More info here: http://man.openbsd.org/?query=sshd_config
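One extra precaution before the restart step above: sshd can syntax-check its own configuration, so a typo in /etc/ssh/sshd_config can't lock you out (the binary may live in /usr/sbin on some systems):
sudo sshd -t    # test mode: prints nothing if the config parses cleanly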
Consider using screen or byobu and the problem will likely go away. What's more, even if the connection is lost, you can reconnect and restore access to the same terminal screen you had before, via screen -r or byobu -r.
byobu is an enhancement for screen, and has a wonderful set of options, such as an estimate of EC2 costs.
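A minimal usage sketch for screen (the session name is arbitrary):
screen -S watcher    # start a named session; run the long command inside it
# press Ctrl-a d to detach; later, after reconnecting over SSH:
screen -r watcher    # reattach exactly where you left off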
I know that for PuTTY you can use a keepalive setting so it sends an activity packet every so often, to keep the connection from going "idle" or "stale":
http://the.earth.li/~sgtatham/putty/0.55/htmldoc/Chapter4.html#S4.13.4
If you are using another client, let me know.
You can use MobaXterm, a free tabbed SSH terminal, with the following setting:
Settings -> Configuration -> SSH -> SSH keepalive
Remember to restart the MobaXterm app after changing the setting.
I have 10+ custom AMIs, all based on Amazon Linux AMIs, and I've never run into any timeout issues due to inactivity on an SSH connection. I've had connections stay open for more than 24 hours without running a single command. I don't think there are any such timeouts built into the Amazon Linux AMIs.