IPython interface to distributed Dask workers over ssh yields "Connection refused"

Today, I thought I would attempt to get to know my workers better by spawning an IPython kernel on each. Doing so seemed easy enough using the handy
client.start_ipython_workers()
I was able to get the connection information and then wrote a script to dump it to JSON. I then configured some port forwarding to connect to a worker; however, the worker does not seem to accept connections.
connect channel 2: open failed: connect failed: Connection refused
It is possible that I still have some configuration problems with ssh; however, I have been successfully connecting to my Jupyter notebook kernel through a similar channel. Is there some reason why the worker would be blocking connections?
import os
import json

winfo = client.start_ipython_workers()
for worker in winfo:
    # the signing key comes back as bytes; JSON needs a string
    winfo[worker]['key'] = winfo[worker]['key'].decode('utf8')
    # name the file after the worker's original IP, then point it at the tunnel
    path = os.path.join('/home/centos/kernels/', 'kernel-' + winfo[worker].pop('ip') + '.json')
    with open(path, 'w') as f:
        winfo[worker]['ip'] = '127.0.0.1'
        json.dump(winfo[worker], f, indent=2)
#!/bin/bash
# usage: <this script> <ssh-host> <kernel-connection-file>
# forward every *_port in the kernel JSON to the same local port
for port in $(grep '_port' "$2" | grep -o '[0-9]\+'); do
    echo "establishing tunnel to $port"
    ssh "$1" -f -N -L "$port:127.0.0.1:$port"
done
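Once the tunnels are up, connecting to a forwarded kernel should just be a matter of pointing jupyter console at one of the dumped connection files (a sketch; the path and the worker IP in the filename are assumptions matching the script above):
# attach to a worker kernel through the local tunnels
# (assumes the worker's original IP was 10.0.0.5 when the file was written)
jupyter console --existing /home/centos/kernels/kernel-10.0.0.5.json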

Related

Autokill broken reverse ssh tunnels

I have one server which is behind a NAT and a firewall, and another server in a different location that is accessible via a domain. The server behind the NAT and firewall runs in a cloud environment and is designed to be disposable, i.e. if it breaks we can simply redeploy it with a single script (in this case it is OpenStack, using a heat template). When that server fires up, it runs the following command to create a reverse SSH tunnel to the server outside the NAT and firewall, allowing us to connect via port 8080 on that server. The issue I am having is that if that OpenSSH tunnel gets broken (say the server goes down), the tunnel remains, meaning that when we redeploy the heat template to launch the server again, it will no longer be able to connect to that port unless I kill the ssh process on the server outside the NAT beforehand.
Here is the command I am currently using to start the reverse tunnel:
sudo ssh -f -N -T -R 9090:localhost:80 user@example.com
I had a similar issue, and fixed it this way:
First, at the server, I created in the home directory a script called .kill_tunnel_ssh.sh with these contents:
# this finds the process that has port 9090 open, extracts its PID, and kills it
sudo netstat -ltpun | grep 9090 | grep 127 | awk -F ' ' '{print $7}' | awk -F '/' '{print $1}' | xargs kill -9
Then, at the client, I created a script called connect_ssh.sh with these contents:
# open an ssh connection, run the remote script .kill_tunnel_ssh.sh, and exit
ssh user@remote.com "./.kill_tunnel_ssh.sh"
# open an ssh connection that establishes the reverse tunnel
ssh user@remote.com -R 9090:localhost:80
Now, I always use connect_ssh.sh to open the SSH connection, instead of using the ssh command directly.
It requires the user at the remote host to have sudo configured not to prompt for a password when executing the netstat command.
Maybe (probably) there is a better way to accomplish it, but that is working for me.
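One such better way (a sketch, untested against the setup above) is to let ssh itself detect and tear down dead tunnels, so the port is never left held by a stale process:
# client side: refuse to background if the remote port can't be bound,
# and drop the connection after ~90 seconds of silence
sudo ssh -f -N -T \
    -o ExitOnForwardFailure=yes \
    -o ServerAliveInterval=30 \
    -o ServerAliveCountMax=3 \
    -R 9090:localhost:80 user@example.com
On the server side, setting ClientAliveInterval and ClientAliveCountMax in sshd_config lets sshd notice a dead client and release the -R port on its own, which would remove the need for the kill script.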

Using dispy with port forwarding via ssh tunnel

I have dispynode running on a remote server. I'm trying to open an SSH tunnel from my computer (the client) and configure the dispy JobCluster to use this tunnel, but it's not working. Am I not configuring this right? Here's how I'm doing it:
(P.S. I don't have deep knowledge of distributed and parallel computing or networking; I'm a civil engineer, so please excuse me if I don't always use the right technical words.)
SSH tunnel:
plink -v -ssh -L 61:localhost:21 user@myserver.net
This will forward connections to local port 61 to localhost:21 on the server, where dispynode is running.
dispynode:
sudo dispynode.py -d --ext_ip_addr localhost -p 21 -i localhost
This will listen on port 21 and transmit using localhost, which leads back through the tunnel to the client.
with this dispyClient JobCluster code :
cluster = dispy.JobCluster(runCasterDispyWorker,
                           nodes=[('localhost', 61)],
                           ip_addr='localhost',
                           ext_ip_addr='localhost',
                           port=61,
                           node_port=21,
                           recover_file='recover.rec')
When I launch the dispy script, I get the following error in the command prompt from which I opened the SSH tunnel:
Opening connection to localhost:21 for forwarding from 127.0.0.1:64027
Forwarded port closed
At least I guess this means that dispy is trying to access the opened SSH tunnel, but I'm not sure what's happening server side; it seems that dispynode receives nothing. Running a quick traffic capture with tcpdump on the server confirms it. For some unknown reason, the port changes to 64027.
I have also tried to open 2 SSH tunnels simultaneously:
One for client-to-server communications:
plink -v -ssh -L 61:localhost:21 user@myserver.net
One for server-to-client communications:
plink -v -ssh -R 20:localhost:60 user@myserver.net
but with no luck. I'm not even sure whether remote or local forwarding is the better choice.
I tried this solution, suggested by the developer of dispy himself, but it didn't work for me:
http://sourceforge.net/p/dispy/discussion/1771151/thread/bcad6eaa/
Is the configuration I used above wrong? Should I use remote or local forwarding? Why does the port change automatically; could it be my company's firewall blocking the connection on the ports I'm trying to use? Has anyone managed to run dispy through an SSH tunnel before?
This worked for me. It should work for you :
SSH tunnel (I'm using PuTTY's plink.exe to create the tunnel):
plink -v -ssh -R 51347:localhost:51347 [username on server]@[server's public IP or domain name] -pw [user password on server] -N
dispynode (running on the server - linux):
sudo dispynode.py -d --ext_ip_addr [public IP or domain name of server]
JobCluster (dispy client):
import os
import dispy

def Worker():
    os.system('echo hello')  # prints hello on the server running dispynode
    return 0

cluster = dispy.JobCluster(Worker,
                           nodes=['IP public or domain name of server'],
                           ext_ip_addr='localhost',
                           recover_file='recoverdispy.rec')
job = cluster.submit()
print('waiting for job completion')
job()  # waits for the job to finish
print('status: %s\nstdout: %s\nstderr: %s\nexception: %s'
      % (job.status, job.stdout, job.stderr, job.exception))
Try this piece of code. Make sure the required ports are allowed to be used.
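A couple of quick checks on the server can confirm whether that is the case (a sketch; 51347 is the port used in the tunnel above):
# is the reverse tunnel actually listening on the server?
ss -tlnp | grep 51347
# is the firewall letting traffic through to that port?
sudo iptables -L -n | grep 51347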

View ActiveMQ Messages with Jolokia and Hawt.io

Through browsing several websites and here on Stack Overflow, there seems to be a way to view the messages in an ActiveMQ queue using Jolokia and Hawt.io, but I have been unsuccessful to this point.
We are running ActiveMQ (version 5.12.0) as an embedded service in our Spring webapp and exposed the Jolokia web services as explained on this page:
https://jolokia.org/reference/html/agents.html#agent-war-programmatic
When looking at the Jolokia web services via Hawt.io, I cannot figure out how to actually view the messages in the queue.
[screenshot showing the queue size]
So, how can I view the messages in an ActiveMQ queue using Jolokia and Hawt.io?
The solution we ended up going with didn't actually use Jolokia or Hawt.io; we ended up using JConsole.
When looking at ActiveMQ queues, if you put a Java-serialized object in the queue, the data won't be very readable, but if you serialize your object to JSON, it is quite easy to see what is in the queue.
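That said, if you do want to pull the messages through Jolokia itself: ActiveMQ's QueueViewMBean exposes a browse() operation, which Jolokia can invoke through its exec endpoint. Something like the following should return the queued messages as JSON (a sketch; the host, context path, credentials, brokerName, and queue name are all assumptions that depend on your deployment):
# browse a queue's messages through Jolokia's exec endpoint
curl -u admin:admin 'http://localhost:8080/jolokia/exec/org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=MY.QUEUE/browse()'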
It is terribly important to read these directions all the way through, carefully.
These instructions discuss SSH tunneling, and it is quite easy to mess something up; there are not very good log messages when things go wrong.
Remote Debugging
For security reasons, we have closed all the open debug ports on our remote virtual machines.
To get remote debugging to work, we will need to use SSH Tunneling to access the remote virtual machine debugging ports.
Remote Application Setup
The application that you want to remotely debug must have the JPDA Transport connector enabled.
After Java 1.4, to enable the JPDA Transport, add the following vm parameter when starting your java virtual machine:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=<remote_port_number>
The above attributes are hard to describe briefly, but the line presented above works well. More information about these attributes can be found on the Connection and Invocation Details page.
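For example, a launch command might look like this (the jar name and port 10001 are placeholders):
# start the JVM with JDWP listening on port 10001
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=10001 -jar myapp.jar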
Local IDE Setup
In Intellij to connect to a remote java virtual machine, open the "Run/Debug Configurations" window.
Then select a new "Remote" configuration.
Enter the following values:
Debugger mode: Attach to remote JVM
Host: localhost
Port: <local_port_number>*
Use module classpath: <local_package>**
* The <local_port_number> should be the local port number of the SSH tunneling session that you will be starting. It is recommended that <remote_port_number> and <local_port_number> have the same value.
** This value should be whatever your local project is named.
SSH Tunneling
To actually connect to the remote debugging port, we'll need to use SSH Tunneling.
Run the following command via a terminal command line:
$ ssh -L <local_port_number>:localhost:<remote_port_number> -f <username>@<remote_server_name> -N
Example:
$ ssh -L 10001:localhost:10001 -f <your_username>@<your.server.com> -N
This command does the following:
Starts an ssh session with the <remote_server_name>.
Connects your <local_port_number> to the <remote_port_number> on the remote machine's localhost. In this case, we're saying: connect to localhost:10001 of the <your.server.com> machine.
Start remote debugging in the Intellij IDE and you should then be connected to the remote java virtual machine.
Resources
Intellij IDEA remotely debug java console program
Remote debug of a Java App using SSH tunneling (without opening server ports)
Remote JMX
We use JMX to look at the Spring Integration KahaDB queues.
Remote Application Setup
Add the following vm parameters:
-Dcom.sun.management.jmxremote.port=64250
-Dcom.sun.management.jmxremote.rmi.port=64250
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=127.0.0.1
The jmxremote.port and jmxremote.rmi.port can be any numbers, and they can even be different values; it just helps if they are the same value when doing the SSH tunneling below.
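Put together, the launch command would look something like this (the jar name is a placeholder):
# expose JMX on port 64250, with RMI pinned to the same port and bound to localhost
java -Dcom.sun.management.jmxremote.port=64250 \
     -Dcom.sun.management.jmxremote.rmi.port=64250 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -Djava.rmi.server.hostname=127.0.0.1 \
     -jar myapp.jar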
SSH Tunneling
$ ssh -L 64250:localhost:64250 -f <your_username>@<your.server.com> -N
JConsole Setup
This is done in a new terminal window.
$ jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=64250 service:jmx:rmi:///jndi/rmi://127.0.0.1:64250/jmxrmi
Resources
Why Java opens 3 ports when JMX is configured?
Clean Up
To close the ssh processes above:
$ lsof -i tcp | grep ^ssh
Then perform a kill on the process id.
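Or, as a one-liner (a sketch; it assumes the tunnel was started with one of the -L commands shown above):
# kill any backgrounded ssh carrying the -L forward on 64250
pkill -f 'ssh -L 64250'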
Using jps and jstack to Help Debug
List all java processes running on a machine:
$ sudo jps
List the threads of a running application:
$ sudo -u <process_owner> jstack <process_id>
Example:
$ sudo -u tomcat jstack <pid>

Remote debugging using gdbserver over ssh

I want to debug a process running on a remote box from my host box (I built the code on the host machine).
Both have Linux-type operating systems.
It seems I can only communicate with the remote box from the host box via ssh (I tested using telnet).
I have followed the following steps to set this up:
On the Remote box:
Stop the firewall service:
service firewall_service stop
Attach the process to gdbserver:
gdbserver --attach :remote_port process_id
On the Host box:
Set up port forwarding via ssh:
sudo ssh remote_username@remote_ip -L host_port:localhost:remote_port -f sleep 60m
Set up gdb to attach to a remote process:
gdb file.debug
(gdb) target remote remote_ip:remote_port
When I try to start the debugging on the host by running 'target remote remote_ip:remote_port', I get a 'Connection timed out' error.
If you can see anything I am doing wrong, anything to check, or an alternative way to debug remotely over ssh, I would be grateful.
Thanks
This command:
sudo ssh remote_username@remote_ip -L host_port:localhost:remote_port ...
forwards local host_port to remote_port on remote_ip's localhost. This is useful only if you cannot connect to remote_ip:remote_port directly (for example, if that port is blocked by a firewall).
This command:
(gdb) target remote remote_ip:remote_port
asks GDB to connect to remote_port on remote_ip. But you said that you can only reach remote_ip via ssh, so it's not surprising that GDB times out.
What you want:
ssh remote_username@remote_ip -L host_port:localhost:remote_port ...
(gdb) target remote :host_port
In other words, you connect to local host_port, and ssh forwards that local connection to remote_ip:remote_port, where gdbserver is listening for it.
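Putting the corrected sequence together (a sketch; port 9999 and the names are placeholders for your actual ports, paths, and hosts):
# on the remote box: attach gdbserver to the running process
gdbserver --attach :9999 process_id
# on the host box: forward local port 9999 to the remote gdbserver
ssh remote_username@remote_ip -L 9999:localhost:9999 -f -N
# on the host box: connect gdb to the local end of the tunnel
gdb file.debug
(gdb) target remote :9999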

"Connection to localhost closed by remote host." when rsyncing over ssh

I'm trying to set up an automatic rsync backup (using cron) over an SSH tunnel, but am getting the error "Connection to localhost closed by remote host.". I'm running Ubuntu 12.04. I've searched for help and tried many solutions, such as adding ALL:ALL to /etc/hosts.allow, checking for #MaxStartups 10:30:60 in sshd_config, setting UsePrivilegeSeparation no in sshd_config, and creating /var/empty/sshd, but none have fixed the problem.
I have autossh running to make sure the tunnel is always there:
autossh -M 25 -t -L 2222:destination.address.edu:22 pbeyersdorf@intermediate.address.edu -N -f
This seems to be running fine, and I've been able to use the tunnel for various rsync tasks; in fact, the first time I ran the following rsync task via cron it succeeded:
rsync -av --delete-after /tank/Documents/ peteman@10.0.1.5://Volumes/TowerBackup/tank/Documents/
with the status of each file and the output
sent 7331634 bytes received 88210 bytes 40215.96 bytes/sec
total size is 131944157313 speedup is 17782.61
Ever since that first success, every attempt gives me the following output:
building file list ... Connection to localhost closed by remote host.
rsync: connection unexpectedly closed (8 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(605) [sender=3.0.9]
An rsync operation of a smaller subdirectory works as expected. I'd appreciate any ideas on what could be the problem.
It seems the issue is related to autossh. If I create my tunnel via ssh instead of autossh, it works fine. I suspect I could tweak the environment variables that affect the autossh configuration, but for my purposes I've solved the problem by wrapping the rsync command in a script that first opens a tunnel via ssh, executes the backup, then kills the ssh tunnel, thereby eliminating the need for the always-open tunnel created by autossh:
#!/bin/sh
#Start SSH tunnel
ssh -t -L 2222:destination.address.edu:22 pbeyersdorf@intermediate.address.edu -N -f
#execute backup commands
rsync -a /tank/Documents/ peteman@localhost://Volumes/TowerBackup/tank/Documents/ -e "ssh -p 2222"
#Kill SSH tunnel
pkill -f "ssh.*destination.address"
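For reference, the autossh tweaks alluded to above would probably look something like this (a sketch, untested here): disable autossh's monitor port in favour of ssh's own keepalives, which is the usual fix for tunnels that die silently.
# -M 0 turns off autossh's monitor channel; ssh keepalives detect dead tunnels instead
AUTOSSH_GATETIME=0 autossh -M 0 \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -L 2222:destination.address.edu:22 pbeyersdorf@intermediate.address.edu -N -f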