How can two or more applications use a PF_RING ZC cluster?

The picture below shows that many applications can use PF_RING clusters.
I have tested this with the following commands:
./zcount -i eth0 -c 55 ---- ok
./zcount1 -i eth0 -c 99 ----- ok
Now suppose I want to use PF_RING ZC. As far as I know, if you open a device using a PF_RING-aware driver in zero copy (e.g. pfcount -i zc:eth1), the device becomes unavailable to standard networking, as it is accessed in zero copy through kernel bypass, as happened with the predecessor DNA. Once the application accessing the device is closed, standard networking activities can take place again.
I have 2 questions:
Question 1: As long as an application is attached to the NIC via ZC (e.g. pfcount -i zc:eth1), the NIC is inaccessible to other applications; in such a situation, no one else can use zero copy:
$ pfcount2 -i zc:eth1 -------- error
If I am wrong, is it possible to use PF_RING ZC like PF_RING? The following picture shows what I am trying to say.
Question 2: Is it possible to use PF_RING ZC as shown below? If the answer is yes, how? Is there an API?
Thanks in advance

cardigliano, an ntop member, answered my question:
Yes, you can do both with zbalance_ipc:
zbalance_ipc -i zc:ethX -c 99 -m 0 -n
Please take a look at zbalance_ipc -h for more options, and at the zbalance_ipc output to see how to attach consumers to the cluster. There is also a README.examples with a few examples.
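For reference, here is a minimal sketch of that flow, assuming the stock PF_RING ZC demo applications (the -n 2 queue count and the zc:99@queue consumer syntax are assumptions based on standard zbalance_ipc usage; the cluster id 99 comes from the commands above):

# distribute packets arriving on eth1 into ZC cluster 99 across 2 egress queues
zbalance_ipc -i zc:eth1 -c 99 -n 2 -m 0
# in separate shells, attach one consumer application per queue
pfcount -i zc:99@0
pfcount -i zc:99@1

This is how two or more applications can read from the same ZC-mode NIC: they attach to the cluster's queues rather than to zc:eth1 directly.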

Related

Iptables masquerade not working on Debian VM

I have a VM in VirtualBox with Debian 10, and I'm trying to NAT-masquerade its output interface (enp0s8) so that its clients (VMs connected to it) can access the Internet.
All interfaces in the system have an IP. I've already enabled forwarding with:
echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl -w net.ipv4.ip_forward=1
And then I executed:
iptables -t nat -A POSTROUTING -o enp0s8 -j MASQUERADE
However, whenever I execute the above, the following happens:
And no matter how many times I run iptables --flush -t nat and repeat the process, the result is always the same: the rule I want to apply never seems to be saved properly, and the clients' IPs are never masqueraded.
What is the issue here? Almost all tutorials say this is the correct way to set up masquerading.
I've also tried using nftables, without success.
It is already showing the right output. To show the rules with the interface details, you need to use:
iptables -t nat -L -n -v
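With the single MASQUERADE rule added above, the listing should look roughly like this (the counters will vary):

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  *      enp0s8  0.0.0.0/0            0.0.0.0/0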
And by the way, if you have set up (VirtualBox) NAT networking, connecting to the outside is already taken care of.
And have you set the default gateway of your clients to this box?
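For example, on each client VM (the address below is a placeholder for the router VM's internal interface):

# point the client's default route at the masquerading box
ip route add default via 192.168.56.1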

Unable to run PBS script on multiple nodes using GNU parallel

I have been trying to use multiple nodes in my PBS script to run several independent jobs. Each individual job is supposed to use 8 cores and each node in the cluster has 32 cores. So, I would like to have each node run 4 jobs. My PBS script is as follows.
#!/usr/bin/env bash
#PBS -l nodes=2:ppn=32
#PBS -l mem=128gb
#PBS -l walltime=01:00:00
#PBS -j oe
#PBS -V
#PBS -l gres=ccm
sort -u $PBS_NODEFILE > nodelist.dat
#cat ${PBS_NODEFILE} > nodelist.dat
export JOBS_PER_NODE=4
PARALLEL="parallel -j $JOBS_PER_NODE --sshloginfile nodelist.dat --wd $PBS_O_WORKDIR"
$PARALLEL -a input_files.dat sh test.sh {}
input_files.dat contains the names of the job files. I have successfully used this script to run parallel jobs on one node (in which case I remove --sshloginfile nodelist.dat and sort -u $PBS_NODEFILE > nodelist.dat from the script). However, whenever I try to run this script on more than one node, I get the following error:
ssh: connect to host 922 port 22: Invalid argument
ssh: connect to host 901 port 22: Invalid argument
ssh: connect to host 922 port 22: Invalid argument
ssh: connect to host 901 port 22: Invalid argument
Here, 922 and 901 are the numbers corresponding to the assigned nodes and are included in the nodelist.dat ($PBS_NODEFILE) file.
I tried to search for this problem but couldn't find much, as everyone else seems to be doing fine with the --sshloginfile argument, so I am not sure whether this is a system-specific problem.
Edit:
As @Ole Tange mentioned in his answer and comments, I need to modify the "node number" produced by $PBS_NODEFILE, which I do in the following way inside the PBS script.
# provides a unique number (say, 900) associated with the node.
sort -u $PBS_NODEFILE > nodelist.dat
# changes the contents of nodelist.dat from "900" to "username@w-900.cluster.uni.edu"
sed -i -r "s/([0-9]+)/username@w-\1.cluster.uni.edu/g" nodelist.dat
I verified that nodelist.dat contains only one line, viz. username@w-900.cluster.uni.edu.
Edit-2:
It seems like the cluster's architecture is responsible for the error I am getting. I ran the same script on a different cluster (say, cluster_2), and it finished without any errors. In my sysadmin's words, the reason why it works on cluster_2 is: "cluster_2 is a single machine. Once your job starts, you are actually on the head node of your PBS job like you would expect."
The variable $PARALLEL is used by GNU Parallel for options, so when you also use it, it is likely to cause confusion. It does not seem to be the root cause here, though, but do yourself a favor and use another variable name (or use it as described in the man page).
The problem here seems to be that ssh will not accept a bare number as a hostname:
$ ssh 8
ssh: connect to host 8 port 22: Invalid argument
Add the domain name, and ssh will see it as a hostname:
$ ssh 8.pi.dk
<<connects>>
If I were you I would talk to your cluster admin and ask if the worker nodes could be renamed to w-XXX, where XXX is their current name.
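In the meantime, a quick way to sanity-check the rewritten nodelist.dat is GNU Parallel's --nonall option, which runs a command once per sshlogin without arguments; if this prints each worker's hostname, the ssh layer is working:

parallel --nonall --sshloginfile nodelist.dat hostname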

Apache server running at nearly 100%

We have just moved our web apps to a self-hosted site on DigitalOcean, vs. our previous web host. The instance is getting hammered with requests according to New Relic, but we are seeing very few page views. Throughput is around 400 requests per minute, whereas we only have about 1 page view per minute.
When I look at the access log, it is getting hammered by what I am guessing are spambots trying to access a non-existent downloads folder. It's causing my CPU to run at 95%, even though nothing is actually happening.
How can I stop this spam traffic?
So far I have created a downloads folder and put a Deny All in an .htaccess file inside it. That appeared to cool things down, but now it's getting worse again (hence the desperate post).
Find a pattern in the malevolent requests and restrict the IPs they come from.
Require a hashed header to be provided with each request to verify the identity of the person/group wanting access.
Restrict more than N downloads from any one IP over a time threshold M.
Distribute traffic load via DNS proxying to multiple hosts/web servers.
Switch to NGINX. NGINX is more performant than Apache in most cases with "high levels" of requests. See DigitalOcean's article: https://www.digitalocean.com/community/tutorials/apache-vs-nginx-practical-considerations.
Make sure your firewall employs a whitelist of hosts/ports, NOT *.
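For the .htaccess approach mentioned in the question, a roughly equivalent server-level block (Apache 2.4 syntax; the DocumentRoot path is an assumption) avoids the per-request .htaccess lookups:

# in the vhost or server config: reject everything under the downloads folder
<Directory "/var/www/html/downloads">
    Require all denied
</Directory>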
I'd use iptables to drop any connection from the spam bots' IP addresses.
Find which IPs are connected to your Apache server:
netstat -tn 2>/dev/null | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head
You should get something like:
20 49.237.134.0
15 166.137.246.0
10 31.187.6.0
Once you find the bot IP addresses (probably the ones with the highest number of connections), use iptables to DROP further connections from them:
iptables -A INPUT -s 49.237.134.0 -p tcp --destination-port 80 -j DROP
iptables -A INPUT -s 31.187.6.0 -p tcp --destination-port 80 -j DROP
iptables -A INPUT -s 166.137.246.0 -p tcp --destination-port 80 -j DROP
Note:
Make sure you're not dropping connections from search engine bots like Google, Yahoo, etc.
You can use www.infobyip.com to get detailed information about a specific IP address.
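Note also that rules added with iptables -A are lost on reboot. On Debian/Ubuntu-style systems (an assumption about your setup), the iptables-persistent package reloads rules saved with:

iptables-save > /etc/iptables/rules.v4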

ssh -L forward multiple ports

I'm currently running a bunch of:
sudo ssh -L PORT:IP:PORT root@IP
where IP is the address of a secured machine, and PORT represents the ports I'm forwarding.
This is because I use a lot of applications which I cannot access without this forwarding. After performing it, I can access them through localhost:PORT.
The main problem occurred now that I actually have 4 of these ports to forward.
My solution is to open 4 shells and constantly search my history backwards to look for exactly which ports need to be forwarded etc., and then run one command in each shell (having to fill in passwords etc.).
If only I could do something like:
sudo ssh -L PORT1+PORT2+PORT3:IP:PORT1+PORT2+PORT3 root@IP
then that would already really help.
Is there a way to make this easier?
The -L option can be specified multiple times within the same command, each time with different ports, i.e. ssh -L localPort0:ip:remotePort0 -L localPort1:ip:remotePort1 ...
Exactly as NaN answered: you specify multiple -L arguments. I do this all the time. Here is an example of multi-port forwarding:
ssh remote-host -L 8822:REMOTE_IP_1:22 -L 9922:REMOTE_IP_2:22
Note: this is the same as -L localhost:8822:REMOTE_IP_1:22; the bind address defaults to localhost if you don't specify one.
With this, you can now (from another terminal) do:
ssh localhost -p 8822
to connect to REMOTE_IP_1 on port 22
and similarly
ssh localhost -p 9922
to connect to REMOTE_IP_2 on port 22
Of course, there is nothing stopping you from wrapping this into a script or automating it if you have many different hosts/ports to forward, including certain specific ones.
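For example, a small wrapper script (reusing the hypothetical hosts and ports from above):

#!/usr/bin/env bash
# forward a fixed set of ports through remote-host; -N skips the remote shell
ssh -N remote-host \
    -L 8822:REMOTE_IP_1:22 \
    -L 9922:REMOTE_IP_2:22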
People who are forwarding multiple ports through the same host can set up something like this in their ~/.ssh/config:
Host all-port-forwards
Hostname 10.122.0.3
User username
LocalForward PORT_1 IP:PORT_1
LocalForward PORT_2 IP:PORT_2
LocalForward PORT_3 IP:PORT_3
LocalForward PORT_4 IP:PORT_4
and it becomes a simple ssh all-port-forwards away.
You can use the following bash function (just add it to your ~/.bashrc):
function pfwd {
  # $1 is the host; the remaining arguments are the ports to forward
  for i in "${@:2}"
  do
    echo "Forwarding port $i"
    ssh -N -L "$i:localhost:$i" "$1" &
  done
}
Usage example:
pfwd hostname {6000..6009}
jbchichoko and yuval have given viable solutions, but jbchichoko's answer isn't as flexible as a function, and the tunnels opened by yuval's answer cannot be shut down with Ctrl+C because they run in the background. My solution below fixes both flaws:
Define a function in ~/.bashrc or ~/.zshrc:
# fsshmap: forward remote ports [$1, $2) to local ports prefixed with "1"
function fsshmap() {
  mkdir -p "$HOME/sh"   # make sure the output directory exists
  echo -n "-L 1$1:127.0.0.1:$1 " > "$HOME/sh/sshports.txt"
  for ((i=$1+1; i<$2; i++))
  do
    echo -n "-L 1$i:127.0.0.1:$i " >> "$HOME/sh/sshports.txt"
  done
  line=$(head -n 1 "$HOME/sh/sshports.txt")
  cline="ssh $3 $line"
  echo "$cline"
  eval "$cline"
}
An example of running the function:
fsshmap 6000 6010 hostname
Result of this example:
You can access 127.0.0.1:16000~16009 the same as hostname:6000~6009
In my company, my team members and I need access to 3 ports on a non-reachable "target" server, so I created a permanent tunnel (that is, a tunnel that can run in the background indefinitely; see params -f and -N) from a reachable server to the target one. On the command line of the reachable server I executed:
ssh root@reachableIP -f -N -L *:8822:targetIP:22 -L *:9006:targetIP:9006 -L *:9100:targetIP:9100
I used the user root, but your own user will work. You will have to enter the password of the chosen user (even if you are already connected to the reachable server with that user).
Now port 8822 of the reachable machine corresponds to port 22 of the target one (for ssh/PuTTY/WinSCP), and ports 9006 and 9100 on the reachable machine correspond to the same ports on the target one (they host two web services in my case).
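So, from any machine that can reach the gateway (the user name here is a placeholder):

# lands on targetIP's sshd via the tunnel on the reachable machine
ssh -p 8822 someuser@reachableIP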
Another one-liner that I use, which works on Debian:
ssh user@192.168.1.10 $(for j in $(seq 20000 1 20100) ; do echo " -L$j:127.0.0.1:$j " ; done | tr -d "\n")
One of the benefits of logging into a server with port forwarding is that it facilitates the use of Jupyter Notebook. This link provides an excellent description of how to do it. Here I would like to summarize and expand on it for you to refer to.
Situation 1. Login from a local machine named Host-A (e.g. your own laptop) to a remote work machine named Host-B.
ssh user@Host-B -L port_A:localhost:port_B
jupyter notebook --NotebookApp.token='' --no-browser --port=port_B
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-B but see it in Host-A.
Situation 2. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B and from there login to the remote work machine named Host-C. This is usually the case for most analytical servers within universities and can be achieved by using two ssh -L connected with -t.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C
jupyter notebook --NotebookApp.token='' --no-browser --port=port_C
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-C but see it in Host-A.
Situation 3. Login from a local machine named Host-A (e.g. your own laptop) to a remote login machine named Host-B and from there login to the remote work machine named Host-C and finally login to the remote work machine Host-D. This is not usually the case but might happen sometime. It's an extension of Situation 2 and the same logic can be applied on more machines.
ssh -L port_A:localhost:port_B user@Host-B -t ssh -L port_B:localhost:port_C user@Host-C -t ssh -L port_C:localhost:port_D user@Host-D
jupyter notebook --NotebookApp.token='' --no-browser --port=port_D
Then you can open a browser and enter: http://localhost:port_A/ to do your work on Host-D but see it in Host-A.
Note that port_A, port_B, port_C, and port_D can be arbitrary port numbers, as long as they avoid the common port numbers listed here. In Situation 1, port_A and port_B can be the same to simplify the procedure.
Here is a solution inspired by the one from Yuval Atzmon.
It has a few benefits over the initial solution:
First, it creates a single background process instead of one per port.
It generates an alias that allows you to kill your tunnels.
It binds only to 127.0.0.1, which is a little more secure.
You may use it as:
tnl your.remote.com 1234
tnl your.remote.com {1234,1235}
tnl your.remote.com {1234..1236}
And finally kill them all with tnlkill.
function tnl {
  TUNNEL="ssh -N"
  echo "Port forwarding for ports:"
  # $1 is the host; the remaining arguments are the ports
  for i in "${@:2}"
  do
    echo " - $i"
    TUNNEL="$TUNNEL -L 127.0.0.1:$i:localhost:$i"
  done
  TUNNEL="$TUNNEL $1"
  $TUNNEL &
  PID=$!
  alias tnlkill="kill $PID && unalias tnlkill"
}
An alternative approach is to tell ssh to work as a SOCKS proxy using the -D flag.
That way you will be able to connect to any remote network address/port accessible through the ssh server, as long as the client applications are able to go through a SOCKS proxy (or work with something like socksify).
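A minimal sketch (the port number 1080 is an arbitrary choice); afterwards, point your applications at socks5://127.0.0.1:1080:

# dynamic (SOCKS) forwarding; -N because no remote shell is needed
ssh -D 1080 -N user@sshserver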
If you want a simple solution that runs in the background and is easy to kill, use a control socket:
# start
$ ssh -f -N -M -S $SOCKET -L localhost:9200:localhost:9200 $HOST
# stop
$ ssh -S $SOCKET -O exit $HOST
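Here $SOCKET is any unused filesystem path you choose, and the standard -O check control command tells you whether the master is still alive:

SOCKET=$HOME/.ssh/tunnel-9200.sock   # arbitrary path for the control socket
ssh -S "$SOCKET" -O check "$HOST"    # prints "Master running" while the tunnel is up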
I've developed loco to help with ssh forwarding. It can be used to expose remote ports 5000 and 7000 locally, at the same ports:
pip install loco
loco listen SSHINFO -r 5000 -r 7000
First, it can be done with parallel execution via xargs -P 0.
Create a file holding the port bindings, one per line (this is the port-forward file read below), e.g.
localhost:8080:localhost:8080
localhost:9090:localhost:8080
then run
xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE> < port-forward
or you can do a one-liner
echo localhost:{8080,9090} | tr ' ' '\n' | sed 's/.*/&:&/' | xargs -P 0 -I xxx ssh -vNTCL xxx <REMOTE>
Pros: the ssh port-forwards are independent of each other, avoiding a single point of failure.
Cons: each port-forward forks a separate ssh process, which is somewhat inefficient.
Second, it can be done using bash's brace expansion feature:
echo "ssh -vNTC $(echo localhost:{10,20,30,40,50} | perl -lpe 's/[^ ]+/-L $&:$&/g') <REMOTE>"
# output
ssh -vNTC -L localhost:10:localhost:10 -L localhost:20:localhost:20 -L localhost:30:localhost:30 -L localhost:40:localhost:40 -L localhost:50:localhost:50 <REMOTE>
A real example:
echo "-vNTC $(echo localhost:{8080,9090} | perl -lpe 's/[^ ]+/-L $&:$&/g') gitlab" | xargs ssh
This forwards ports 8080 and 9090 to the gitlab server.
Pros: one single fork, so it's efficient.
Cons: closing this one ssh process closes all the forwards, a single point of failure.
You can use this zsh function (the array slicing is zsh-specific). Put it in ~/.zshrc:
ashL () {
  local a=() i
  # $1 is the host; the remaining arguments are the ports
  for i in "${@[2,-1]}"
  do
    a+=(-L "${i}:localhost:${i}")
  done
  autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -NT "$1" "${a[@]}"
}
Examples:
ashL db@114.39.161.24 6480 7690 7477
ashL db@114.39.161.24 {6000..6050} # forwards the whole range; this is simply shell syntax sugar

Cygrunsrv & autossh: A way to embed remote commands in the command line?

I'm using cygrunsrv and autossh on Windows XP to create a service that builds a tunnel to a remote server, but I also want to create another tunnel from the remote server to another server.
I can achieve this with the following command line:
autossh -M 5432 serverA -t 'autossh -M 4321 serverB -N'
but when I set it up in Cygwin through cygrunsrv to make it work as a service:
cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_NTSERVICE=yes -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30
it is not fully working: the service creates the tunnel to serverA correctly, but it does not send the autossh command "autossh -M 4321 serverB -N" to serverA.
I tried escaping the quotes, but none of my attempts made any difference, and I'm not seeing any command sent in the autossh logs.
I think the problem is related to the pseudo-terminal, which is not created through cygrunsrv.
I'd like to know if there's a way to fix my cygrunsrv command line to make it work, or should I consider a different approach?
Lionel, try removing the AUTOSSH_NTSERVICE=yes from the cygrunsrv invocation. As /usr/share/doc/autossh/README.Cygwin explains:
Setting AUTOSSH_NTSERVICE=yes in the calling environment ... change[s] autossh's behavior in three useful ways:
(1) Add an -N flag to each invocation of ssh, thus disabling shell
access. The idea is that if you're running autossh as a system
service, you're using it to forward ports; it wouldn't make sense to
run a shell session as a system service. (If you think this reasoning
is wrong, please send a bug report to the author or Cygwin maintainer,
and tell us what you're trying to do.)
Despite what the above says, it seems that you may have a good reason for not wanting -N (which suppresses command execution) in your service's ssh invocation. Removing AUTOSSH_NTSERVICE=yes should take care of it. It will have a couple of other minor disadvantages, but you can probably live with it. Read the rest of README.Cygwin for the details.
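Under that suggestion, the invocation would become (an untested sketch, simply the command from the question minus the variable):

cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30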