parallel ssh (pssh) with output stream - ssh

I have 3 servers and I want to run a command on all of them in parallel from a client and see the output as it streams.
I have tried pssh, but it shows output only when the command exits. What I want is the output from all the servers on my client's stdout as it is produced, before the command exits.
For example, when I run "ping google.com" on all the servers, I get output only after I hit Ctrl+C.
My command looks like this:
pssh -h server_list -l userName -i pemFile.pem 'ping google.com'
How can I see the ping output from all 3 servers as it is produced?

I was trying to achieve the same thing, and the best way for me was to specify an output directory and then follow the stream on the output files, like so:
pssh -h server_list -l userName -i pemFile.pem -o /tmp/out -t 0 'ping google.com'
We add -o /tmp/out -t 0 so that the output of each host goes to a file in the specified directory and the command never times out.
Leave that running, and then follow the streams. Assuming you have host1, host2, host3, and host4 in your server_list, you can do the following:
tail -f /tmp/out/host{1,2,3,4}
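If you'd rather not list each host by name, you can follow every file in the output directory instead. Note that the shell expands the glob once, when tail starts, so launch this after pssh has created the files (a small sketch, assuming the same /tmp/out directory as above):
tail -f /tmp/out/*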


iperf connects but does not report output

I am using iperf to test network bandwidth between two Ubuntu 16.04.2 hosts (10.0.0.1 and 10.0.0.51). I ran "iperf -s" on 10.0.0.51 and then ran "iperf -c 10.0.0.51 -T 10" on 10.0.0.1. I do see the connection being established (i.e. local 10.0.0.51 port 5001 connected with 10.0.0.1 port 37680) on both sides, but I do not get the results; it just hangs. Any help is highly appreciated. Thanks
With iperf3, you can see the output in JSON format.
The command:
iperf3 -c <server-ip> -w 4000 -t 10 -i 2 -f MBytes -V -J --logfile test.log
Note: make sure the client and the server run the same iperf version, i.e. version 3 on both ends.
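If the versions do match, a minimal matched pair of invocations (using the IPs from the question; all optional flags omitted) would look like this, starting the server first:
iperf3 -s                        # on 10.0.0.51 (server)
iperf3 -c 10.0.0.51 -t 10 -i 2   # on 10.0.0.1 (client)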
Check for firewalls or packet filters; e.g. on Linux use iptables -L to list the rules and iptables -F to flush them all. Also, which version of iperf? You might want to display interval reports (-i 1) and see what they are reporting. Note too that in iperf2 the test duration flag is lowercase -t; uppercase -T sets the multicast TTL.

ssh -L forward multiple ports

I'm currently running a bunch of:
sudo ssh -L PORT:IP:PORT root@IP
where IP is the target of a secured machine, and PORT represents the ports I'm forwarding.
This is because I use a lot of applications which I cannot access without this forwarding. After performing this, I can access through localhost:PORT.
The main problem occurred now that I actually have 4 of these ports that I have to forward.
My solution is to open 4 shells and constantly search my history backwards to look for exactly which ports need to be forwarded etc, and then run this command - one in each shell (having to fill in passwords etc).
If only I could do something like:
sudo ssh -L PORT1+PORT2+PORT3:IP:PORT1+PORT2+PORT3 root@IP
then that would already really help.
Is there a way to make it easier to do this?
The -L option can be specified multiple times within the same command, each time with different ports, i.e. ssh -L localPort0:ip:remotePort0 -L localPort1:ip:remotePort1 ...
Exactly what NaN answered, you specify multiple -L arguments. I do this all the time. Here is an example of multi port forwarding:
ssh remote-host -L 8822:REMOTE_IP_1:22 -L 9922:REMOTE_IP_2:22
Note: this is the same as -L localhost:8822:REMOTE_IP_1:22; if you don't specify a bind address, it defaults to localhost.
With this, you can now (from another terminal) do:
ssh localhost -p 8822
to connect to REMOTE_IP_1 on port 22
and similarly
ssh localhost -p 9922
to connect to REMOTE_IP_2 on port 22
Of course, there is nothing stopping you from wrapping this in a script or automating it if you have many different hosts/ports to forward, as sketched below.
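A minimal sketch of such a wrapper (the host name and the local:remote pairs are the made-up values from the example above; adjust to taste):
#!/bin/bash
# build one -L flag per local:remote pair and hand them all to a single ssh
PAIRS=("8822:REMOTE_IP_1:22" "9922:REMOTE_IP_2:22")
ARGS=()
for p in "${PAIRS[@]}"; do
  ARGS+=(-L "$p")
done
ssh remote-host "${ARGS[@]}"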
People who are forwarding multiple ports through the same host can set up something like this in their ~/.ssh/config:
Host all-port-forwards
    Hostname 10.122.0.3
    User username
    LocalForward PORT_1 IP:PORT_1
    LocalForward PORT_2 IP:PORT_2
    LocalForward PORT_3 IP:PORT_3
    LocalForward PORT_4 IP:PORT_4
and it becomes a simple ssh all-port-forwards away.
You can use the following bash function (just add it to your ~/.bashrc):
function pfwd {
  # forward each given port over ssh to the same port on the host ($1)
  for i in "${@:2}"
  do
    echo "Forwarding port $i"
    ssh -N -L "$i:localhost:$i" "$1" &
  done
}
Usage example:
pfwd hostname {6000..6009}
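Each tunnel runs as a background job, so Ctrl+C won't stop them. One way to tear them down, assuming you are still in the shell that started them, is to kill that shell's background jobs:
kill $(jobs -p)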
jbchichoko and yuval have given viable solutions, but jbchichoko's answer isn't as flexible as a function, and the tunnels opened by yuval's answer cannot be shut down with Ctrl+C because they run in the background. My solution below fixes both flaws.
Define a function in ~/.bashrc or ~/.zshrc:
# fsshmap multiple ports
function fsshmap() {
  # map remote ports $1..($2-1) on host $3 to local ports 1$1..1($2-1)
  mkdir -p "$HOME/sh"
  echo -n "-L 1$1:127.0.0.1:$1 " > "$HOME/sh/sshports.txt"
  for ((i=$1+1; i<$2; i++))
  do
    echo -n "-L 1$i:127.0.0.1:$i " >> "$HOME/sh/sshports.txt"
  done
  line=$(head -n 1 "$HOME/sh/sshports.txt")
  cline="ssh $3 $line"
  echo "$cline"
  eval "$cline"
}
An example of running the function:
fsshmap 6000 6010 hostname
Result of this example:
You can access 127.0.0.1:16000~16009 the same as hostname:6000~6009
In my company, my team members and I need access to 3 ports of an unreachable "target" server, so I created a permanent tunnel (that is, a tunnel that can run in the background indefinitely; see the -f and -N params) from a reachable server to the target one. On the command line of the reachable server I executed:
ssh root@reachableIP -f -N -L *:8822:targetIP:22 -L *:9006:targetIP:9006 -L *:9100:targetIP:9100
I used user root but your own user will work. You will have to enter the password of the chosen user (even if you are already connected to the reachable server with that user).
Now port 8822 of the reachable machine corresponds to port 22 of the target one (for ssh/PuTTY/WinSCP) and ports 9006 and 9100 on the reachable machine correspond to the same ports of the target one (they host two web services in my case).
Another one-liner that I use, which works on Debian:
ssh user@192.168.1.10 $(for j in $(seq 20000 1 20100) ; do echo " -L$j:127.0.0.1:$j " ; done | tr -d "\n")
One of the benefits of logging into a server with port forwarding is that it facilitates the use of Jupyter Notebook. This link provides an excellent description of how to do it. Here I would like to summarize and expand on it for reference.
Situation 1. Login from a local machine named Host-A (e.g. your own laptop) to a remote work machine named Host-B.
ssh user@Host-B -L port_A:localhost:port_B
jupyter notebook --NotebookApp.token='' --no-browser --port=port_B
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-B but see it in Host-A.
Situation 2. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B and from there login to the remote work machine named Host-C. This is usually the case for most analytical servers within universities and can be achieved by using two ssh -L connected with -t.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C
jupyter notebook --NotebookApp.token='' --no-browser --port=port_C
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-C but see it in Host-A.
Situation 3. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B, from there login to the remote work machine named Host-C, and finally login to the remote work machine Host-D. This is not usually the case but might happen sometimes. It's an extension of Situation 2, and the same logic can be applied to more machines.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C -t ssh -L port_C:localhost:port_D user@Host-D
jupyter notebook --NotebookApp.token='' --no-browser --port=port_D
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-D but see it in Host-A.
Note that port_A, port_B, port_C, and port_D can be arbitrary numbers, as long as they avoid the common well-known port numbers. In Situation 1, port_A and port_B can be the same to simplify the procedure.
Here is a solution inspired by the one from Yuval Atzmon.
It has a few benefits over the initial solution:
- it creates a single background process, not one per port
- it generates an alias that allows you to kill your tunnels
- it binds only to 127.0.0.1, which is a little more secure
You may use it as:
tnl your.remote.com 1234
tnl your.remote.com {1234,1235}
tnl your.remote.com {1234..1236}
And finally kill them all with tnlkill.
function tnl {
  # collect one -L flag per port, then open them all in a single background ssh
  TUNNEL="ssh -N"
  echo "Port forwarding for ports:"
  for i in "${@:2}"
  do
    echo " - $i"
    TUNNEL="$TUNNEL -L 127.0.0.1:$i:localhost:$i"
  done
  TUNNEL="$TUNNEL $1"
  $TUNNEL &
  PID=$!
  alias tnlkill="kill $PID && unalias tnlkill"
}
An alternative approach is to tell ssh to work as a SOCKS proxy using the -D flag.
That way you can connect to any remote network address/port accessible through the ssh server, as long as the client applications are able to go through a SOCKS proxy (or work with something like socksify).
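A minimal sketch (the local port 1080 and the host/URL are arbitrary placeholders; curl is just one example of a SOCKS-aware client):
ssh -D 1080 -N user@host
curl --socks5-hostname 127.0.0.1:1080 http://some.internal.host:8080/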
If you want a simple solution that runs in the background and is easy to kill, use a control socket:
# start (SOCKET is any path for the control socket, e.g. ~/.ssh/tunnel.sock; HOST is the ssh destination)
$ ssh -f -N -M -S $SOCKET -L localhost:9200:localhost:9200 $HOST
# stop
$ ssh -S $SOCKET -O exit $HOST
I've developed loco to help with ssh forwarding. It can be used to expose remote ports 5000 and 7000 locally, at the same ports:
pip install loco
loco listen SSHINFO -r 5000 -r 7000
First, it can be done with parallel execution via xargs -P 0.
Create a file holding the port bindings, e.g. a file named port-forward (used in the redirect below) containing:
localhost:8080:localhost:8080
localhost:9090:localhost:8080
then run
xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE> < port-forward
or you can do a one-liner
echo localhost:{8080,9090} | tr ' ' '\n' | sed 's/.*/&:&/' | xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE>
Pros: the ssh port forwardings are independent of each other, avoiding a single point of failure.
Cons: each port forwarding is forked as a separate ssh process, which is somewhat inefficient.
Second, it can be done using the curly-bracket (brace) expansion feature in bash:
echo "ssh -vNTC $(echo localhost:{10,20,30,40,50} | perl -lpe 's/[^ ]+/-L $&:$&/g') <REMOTE>"
# output
ssh -vNTC -L localhost:10:localhost:10 -L localhost:20:localhost:20 -L localhost:30:localhost:30 -L localhost:40:localhost:40 -L localhost:50:localhost:50 <REMOTE>
A real example:
echo "-vNTC $(echo localhost:{8080,9090} | perl -lpe 's/[^ ]+/-L $&:$&/g') gitlab" | xargs ssh
This forwards ports 8080 and 9090 to the gitlab server.
Pros: a single fork, so it is efficient.
Cons: closing this one ssh process closes all the forwardings, a single point of failure.
You can use this zsh function (it probably works with bash, too). Put it in ~/.zshrc:
ashL () {
  local a=() i
  for i in "${@[2,-1]}"
  do
    a+=(-L "${i}:localhost:${i}")
  done
  autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NT "$1" "${a[@]}"
}
Examples:
ashL db@114.39.161.24 6480 7690 7477
ashL db@114.39.161.24 {6000..6050} # Forwards the whole range. This is simply shell syntax sugar.

How to save a log of all psql queries AND the results

This question is similar to:
psql - write a query and the query's output to a file
However, their syntax doesn't work.
When I open a psql session from the command line, I'd like to save both the queries sent and the result.
The command below saves the queries, but not their output:
psql -h host -U username -p port -d database -L ~/file_to_save_output.txt
You can just redirect the output (stdout) using the > symbol, like below. Redirection works in both Unix shells and the Windows command prompt.
psql -h host -U username -p port -d database -L ~/file_to_save_output.txt > output.txt
From the Postgres docs:
--echo-queries
Copy all SQL commands sent to the server to standard output as well.
This is equivalent to setting the variable ECHO to queries.
So, to get queries and query results into a single file:
psql -h host -U username -p port -d database --echo-queries -L output_queries_and_results.txt
Additionally you can save queries and query results in separate files,
psql -h host -U username -p port -d database --echo-queries -L output_queries_only.txt -o output_results_only.txt
Note: the first method will still show queries and query results in the terminal; the second writes all results to the file and shows nothing in the terminal.
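The same effect can be approximated from inside an interactive psql session (a sketch based on the ECHO variable mentioned in the doc quote above; \o redirects query output to a file):
\set ECHO queries
\o output_results_only.txt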

Get incoming ssh forwarded connection port number

I have a server that forwards connections to a set of other servers.
Here I forward all incoming connections on:
my.tunnel.com:33199 to my.server2.com:52222
And..
my.tunnel.com:33200 to my.server3.com:52222
.. until
my.tunnel.com:XXXXX to my.serverN.com:52222
I'm initiating this with the following command on each server except the tunnel server my.tunnel.com:
ssh -o StrictHostKeyChecking=no -l root -i /etc/ssh/id_rsa -R *:33199:127.0.0.1:22 -p 443 my.tunnel.com 0 33199
...
ssh -o StrictHostKeyChecking=no -l root -i /etc/ssh/id_rsa -R *:XXXXX:127.0.0.1:22 -p 443 my.tunnel.com 0 XXXXX
Well, this works fine!
But!
At the point of launching each of these commands, I'd like to check on my.tunnel.com that my.server2.com is asking my.tunnel.com to forward exactly port 33199, and not another port! So at this point I'd like to get this port number.
Please let me know if the problem is still not exposed clearly enough.
Thanks!
To get the forwarded port
There is no such information in the environment variables, so you must pass it yourself:
ssh -R 33199:127.0.0.1:22 my.tunnel.com "export MY_FWD_PORT=33199; my_command"
(my_command is the script you want to run on the server). More information about passing variables - https://superuser.com/q/163167/93604
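Building on that idea, a hypothetical my_command on my.tunnel.com could validate the announced port before doing anything else (the expected port 33199 and the script itself are illustrative only):
#!/bin/sh
# reject the session if the client announced an unexpected forwarded port
case "$MY_FWD_PORT" in
  33199) echo "forwarding port $MY_FWD_PORT accepted" ;;
  *) echo "unexpected forwarded port: $MY_FWD_PORT" >&2; exit 1 ;;
esac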
To get the source port
Look at the environment variable SSH_CONNECTION in man ssh(1). Its meaning is:
source_ip source_port dest_ip dest_port
You probably want source_port, so just get the second part of it:
echo $SSH_CONNECTION | awk '{ print $2 }'
or
echo $SSH_CONNECTION | cut -d" " -f 2

alternative to tail -f | grep server logs

Currently, I'm making curl calls, checking the response, and sometimes doing ssh HOSTNAME "tail -f LOGFILE" | grep PATTERN. Is there a tool out there that streamlines/generalizes this process of making a request and checking both the response and the server logs for certain patterns? (Oh, and getting statistics like response time would be a plus.)
I've only got an answer to part of your question. To get good stats out of cURL, try something like this:
curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null -s http://www.google.com/
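For the log half, there isn't a single standard tool, but a rough wrapper can tie the two together (HOST, LOGFILE, PATTERN, and URL are placeholders; timeout assumes GNU coreutils on the server):
#!/bin/bash
# watch the remote log for PATTERN while the request runs, then report curl timing
ssh HOST "timeout 15 tail -f LOGFILE" | grep --line-buffered PATTERN &
watcher=$!
curl -s -o /dev/null -w 'Total time:\t%{time_total}\n' URL
wait $watcher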