Why does pushing more than 3 MPLS headers on a packet result in the packet not being forwarded? - openvswitch

MPLS header stacks are limited to size 3. Pushing more than 3 MPLS headers on a packet results in the packet not being forwarded in Open vSwitch.
sudo mn --topo single,2 --switch ovsk
mininet> h1 ping h2
I installed a minimal set of flow entries on s1:
sudo ovs-ofctl -O OpenFlow13 add-flow s1 in_port=1,actions=push_mpls:0x8847,push_mpls:0x8847,push_mpls:0x8847,push_mpls:0x8847,output:2
sudo ovs-ofctl -O OpenFlow13 add-flow s1 in_port=2,actions=push_mpls:0x8847,push_mpls:0x8847,push_mpls:0x8847,push_mpls:0x8847,output:1
Flow entries are correctly matched.
sudo ovs-ofctl -O OpenFlow13 dump-flows s1 | grep -o "n_packets=\w*"
Yet no packets leave s1, as confirmed by
sudo tcpdump -ni s1-eth2
Any explanation would be appreciated.

For version 2.4.0, lib/flow.h contains a macro called FLOW_MAX_MPLS_LABELS which defines the maximum number of MPLS labels supported in a stack. The value is set to 3:
https://github.com/openvswitch/ovs/blob/v2.4.0/lib/flow.h
#define FLOW_MAX_MPLS_LABELS 3
For later versions you should check the source, but they will probably take a similar approach, limiting the stack to 3 MPLS labels in the code.
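If you want to verify the limit for the version you are actually running, a quick sketch (the tag below is illustrative; substitute the one matching your installed version):
git clone https://github.com/openvswitch/ovs.git
cd ovs && git checkout v2.4.0
grep -n "FLOW_MAX_MPLS_LABELS" lib/flow.h
The grep should print the #define line holding the limit in effect for that release.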

Related

Hyperledger Fabric - How to clear out the DEV environment after each blockchain network test?

Fabric version 1.4.3. Blockchain network with 1 Orderer (solo) + 1 Org, running on Docker.
Trying to instantiate chaincode fails due to a PANIC error on PEER0, and the peer crashes.
It is impossible to instantiate the chaincode because PEER0 crashes during the process.
At CLI docker prompt, I did this command sequence:
1) $> peer channel create -o $ORDERERNAME -c $CHANNELNAME -f $CONFIGTXFOLDER/devchannel.tx --tls --cafile=$ORDERER_TLSCACERT
Result in Cli: UTC [cli.common] readBlock -> INFO 04e Received block: 0
2) $> peer channel join -o $ORDERERNAME -b $CONFIGTXFOLDER/devgenesis.block --tls --cafile=$ORDERER_TLSCACERT
Result in Cli: UTC [channelCmd] executeJoin -> INFO 03e Successfully submitted proposal to join channel
3) $> peer chaincode install -n $CHCODENAME -p $CHCODEPATH -v $CHCODEVERSION -l node --tls --cafile $ADMIN_PEER_TLSCACERT
Result in Cli: UTC [chaincodeCmd] install -> INFO 04a Installed remotely response:<status:200 payload:"OK" >
4) $> peer chaincode instantiate -C $CHANNELNAME -n $CHCODENAME -v $CHCODEVERSION -o $ORDERERNAME -c '{"Args":["init","a","100","b","200"]}' -P "AND ('GuaraniMSP.admin')" --tls --cafile $ORDERER_TLSCACERT --tlsRootCertFiles $CORE_PEER_TLS_ROOTCERT_FILE
Result before crash in PEER0:
UTC [gossip.state] commitBlock -> ERRO 87e Got error while committing(unexpected Previous block hash. Expected PreviousHash = [c87a4b77e4c790f78b0c2e3c97d97de9907a09daf5dc2f039c7e3b3e1440f5d1], PreviousHash referred in the latest block= [953e31164a84d6d1b9b446130d1e7d5af8ede818284e8fa7c315b2125b519e38]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
[...]
UTC [gossip.state] deliverPayloads -> PANI 87f Cannot commit block to the ledger due to unexpected Previous block hash. Expected PreviousHash = [c87a4b77e4c790f78b0c2e3c97d97de9907a09daf5dc2f039c7e3b3e1440f5d1], PreviousHash referred in the latest block= [953e31164a84d6d1b9b446130d1e7d5af8ede818284e8fa7c315b2125b519e38]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
[...]
/opt/go/src/runtime/asm_amd64.s:1333
panic: Cannot commit block to the ledger due to unexpected Previous block hash. Expected PreviousHash = [c87a4b77e4c790f78b0c2e3c97d97de9907a09daf5dc2f039c7e3b3e1440f5d1], PreviousHash referred in the latest block= [953e31164a84d6d1b9b446130d1e7d5af8ede818284e8fa7c315b2125b519e38]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
[...]
I opened an issue at Hyperledger Fabric JIRA about this situation, and I received the information that I need to clear out the environment to ensure there are no artifacts from prior trials.
Issue FABB-147 at Hyperledger Fabric (My logs are there).
I saw a problem documented close to mine, but with differences, at "peers get crash after anchor peer update".
Now I have to verify that my Hyperledger Fabric environment is clean. But how?
Is there any documented procedure or checklist to verify if there are no artifacts from prior trials in Hyperledger Fabric?
If I suppress the ORDERER_GENERAL_GENESISFILE and ORDERER_GENERAL_GENESISPROFILE settings in orderer.yaml and docker-compose.yaml, will the orderer start normally, and will the network then understand that it should use the genesis.block provided when creating the new channel?
Thanks in advance for your help.
Run this command:
docker system prune
Then start your network again.
Hope this helps you as it helped me.
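A more thorough cleanup sketch if pruning alone is not enough (the chaincode image filter and the artifacts path are assumptions; adjust them to your setup):
docker rm -f $(docker ps -aq)
docker volume prune -f
docker rmi -f $(docker images -q 'dev-*')
rm -rf ./channel-artifacts/*.block ./channel-artifacts/*.tx
Removing the volumes is what discards the stale ledger data behind the "unexpected Previous block hash" panic on the next run.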
While not an answer to cleaning your environment, here's a possible solution to your underlying problem.
The peer channel create command will generate a $CHANNELNAME.block file in its current working directory. This is the block you need to use when running peer channel join, not the orderer's system genesis block.
$> peer channel create ... -c $CHANNELNAME
$> peer channel join ... -b $CONFIGTXFOLDER/devgenesis.block
This will ruin your day by causing your peers to crash most of the time, and it will only rarely work after recreating the ledger from a new genesis block. Try this instead:
$> peer channel create ... -c $CHANNELNAME
$> peer channel join ... -b $CHANNELNAME.block
I was using the genesis block from the system fabric network, loaded on the Orderer (from orderer.yaml or docker-compose.yaml), to join the channel, and that was wrong.
The source of my error was that I was using the system genesis block instead of the channel genesis block generated by the peer channel create command.
I now understand that the peer channel create command takes a channel creation transaction (.tx) as input, and as output it generates the channel's genesis block (.block), which must be used as input to the peer channel join command.
The correct sequence is:
Step 1) $> peer channel create -o $ORDERERNAME -c $CHANNELNAME -f $CONFIGTXFOLDER/devchanneltrack.tx --outputBlock $CHANNELFOLDER/devchannelgen.block --tls --cafile=$ORDERER_TLSCACERT
Step 2) $> peer channel join -o $ORDERERNAME -b $CHANNELFOLDER/devchannelgen.block --tls --cafile=$ORDERER_TLSCACERT
I made corrections to the scripts, and now I can join the channel. I then installed and instantiated new chaincodes on the channel, and they worked perfectly.
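As a quick sanity check after joining (a suggestion beyond the original scripts), you can list the channels the peer has joined:
$> peer channel list
The new channel name should appear in the output.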

Is it possible to generate CAN message in one device and dump CAN message in another device using SocketCAN?

I am using SocketCAN and the CANtact toolkit to send and receive CAN messages. I am using two CANtact toolkits, one to send CAN messages and the other to receive them. The two CANtact toolkits are connected by a DB9 female to DB9 female cable, and the other ends are connected to USB ports on the laptop.
I first configured them with the following SocketCAN commands:
sudo modprobe can
sudo modprobe can_raw
sudo modprobe slcan
sudo slcand -o -s6 -t hw -S 3000000 /dev/ttyACM0 slcan0
sudo ip link set slcan0 up
The above commands are for the first CANtact toolkit. Then I connected the second CANtact toolkit and configured it with the following commands:
sudo modprobe can
sudo modprobe can_raw
sudo modprobe slcan
sudo slcand -o -s6 -t hw -S 3000000 /dev/ttyACM1 slcan1
sudo ip link set slcan1 up
I performed these steps in two different terminals.
In the first terminal, I gave
cangen -v slcan0
In the second terminal, I gave
candump slcan1
I don't receive any CAN messages in terminal 2, but if I give
cangen -v slcan0
in the first terminal and
candump slcan0
in the second terminal, I am able to view the CAN messages sent.
This means the CAN messages are not communicated between the two CANtact toolkits. How can this be resolved? Or am I making a mistake?
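A couple of diagnostics that may help narrow this down (a suggestion, not a confirmed fix): check the interface statistics on both sides and watch the kernel log while sending.
ip -details -statistics link show slcan0
ip -details -statistics link show slcan1
dmesg | grep -i -e can -e ttyACM
If the counters on slcan1 never move, the frames are not arriving over the DB9 link, which usually points at the bus wiring or missing 120 ohm termination rather than at the SocketCAN configuration.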

Installing video4linux Beaglebone

I'm working with a BeagleBone Black on an image processing project. For this purpose, I need the video4linux application.
However, I'm not able to share my internet connection with the BeagleBone, which is why "sudo apt-get install v4l-utils" isn't working. Whenever I change the internet sharing settings, I'm unable to ssh into the local IP of the BeagleBone.
Hence, I want a method to install video4linux without internet connectivity.
It will be very complicated to install a package without internet, as you need to install all the dependent packages before the actual package.
Each package may have dependencies on other packages.
Still, you can download the package from: v4l-utils
Using the WinSCP tool, you can then copy it to the BeagleBone.
Other dependent packages for Debian can be found at: Debian Packages
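A sketch of the offline route (the extra package name below is an assumption; check the actual dependency list on the Debian Packages page for your release). On an internet-connected Debian machine:
apt-get download v4l-utils libv4l-0
Copy the downloaded .deb files to the BeagleBone (e.g. with WinSCP or scp), then on the BeagleBone run:
sudo dpkg -i *.deb
dpkg will name any still-missing dependencies, which you can fetch the same way.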
You can also share internet from your host to the BeagleBone via USB.
Follow this shell code:
On the BeagleBone
ifconfig usb0 192.168.7.2
route add default gw 192.168.7.1
On the Linux computer:
sudo su
#eth0 is my internet facing interface, eth3 is the BeagleBone USB connection
ifconfig eth3 192.168.7.1
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface eth3 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward
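One step often missing from this recipe (an assumption about your image; skip it if DNS already resolves): the BeagleBone also needs a DNS server configured, or apt-get will fail to resolve package hosts.
echo "nameserver 8.8.8.8" >> /etc/resolv.conf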

Can't use tc in docker container

I am using tc to limit the send rate in a docker container.
I added the script below to my Dockerfile:
tc qdisc add dev eth0 root handle 1: htb default 2
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 2mbit ceil 2mbit prio 2
tc qdisc add dev eth0 parent 1:2 handle 2: sfq perturb 10
tc filter add dev eth0 protocol ip parent 1:0 u32 match ip dst 192.168.1.124 flowid 1:2
I run docker under the root account via this command:
docker run --cap-add=NET_ADMIN --name lqt_build -d -p 8443:8443 -p 443:443 -p 3478:3478 lqt_build
But it still shows this error:
Step 25 : RUN cd /usr/share/ta/ && sudo ./tt rate
---> Running in fb6a4477ad6c
RTNETLINK answers: Operation not permitted
RTNETLINK answers: Operation not permitted
RTNETLINK answers: Operation not permitted
RTNETLINK answers: Operation not permitted
We have an error talking to the kernel
[8] System error: read parent: connection reset by peer
It seems the kernel prevents apps in the container from changing some kernel settings, even though they're running as root. I guess the container doesn't have its own kernel but runs on a kernel shared with (potentially) many other containers, so it can't be allowed to touch settings of the underlying kernel. Does anyone have experience with this issue?
The root cause was running tc from the Dockerfile: the NET_ADMIN capability does not take effect at build time. The tc commands work fine once the docker container is running. Thanks user2915097.
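A minimal sketch of the fix (the entrypoint file name is illustrative): apply the tc rules from an entrypoint script at container start, when --cap-add=NET_ADMIN is in effect, instead of from a RUN step at build time.
In the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
# shape egress at container startup, then hand off to the image's CMD
tc qdisc add dev eth0 root handle 1: htb default 2
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 2mbit ceil 2mbit prio 2
tc qdisc add dev eth0 parent 1:2 handle 2: sfq perturb 10
tc filter add dev eth0 protocol ip parent 1:0 u32 match ip dst 192.168.1.124 flowid 1:2
exec "$@"
With ENTRYPOINT set this way, the image's CMD is passed to the script as "$@", so the main process still starts normally after the shaping is applied.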

ssh client (dropbear on a router) does no output when put in background

I'm trying to automate some things on remote Linux machines with bash scripting on a Linux machine, and I have a working command (the braces are a relic from command concatenations):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"')
But if an ampersand is appended to execute it in the background, it seems to execute, but no output is printed, neither on stdout nor on stderr, and even a redirection to a file (inside the braces) does not work:
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"') &
By the way, I'm running the ssh client dropbear v0.52 in BusyBox v1.17.4 on Linux 2.4.37.10 (TomatoUSB build on a WRT54G).
Is there a way to get the output anyway? What's the reason for this behaviour?
EDIT:
For convenience, here's the plain ssh help output (on my TomatoUSB):
Dropbear client v0.52
Usage: ssh [options] [user@]host[/port][,[user@]host/port],...] [command]
Options are:
-p <remoteport>
-l <username>
-t Allocate a pty
-T Don't allocate a pty
-N Don't run a remote command
-f Run in background after auth
-y Always accept remote host key if unknown
-s Request a subsystem (use for sftp)
-i <identityfile> (multiple allowed)
-L <listenport:remotehost:remoteport> Local port forwarding
-g Allow remote hosts to connect to forwarded ports
-R <listenport:remotehost:remoteport> Remote port forwarding
-W <receive_window_buffer> (default 12288, larger may be faster, max 1MB)
-K <keepalive> (0 is never, default 0)
-I <idle_timeout> (0 is never, default 0)
-B <endhost:endport> Netcat-alike forwarding
-J <proxy_program> Use program pipe rather than TCP connection
Amendment after 1 day:
The braces do not hurt; with and without them it's the same result. I wanted to put the ssh authentication in the background as well, so the -f option (which backgrounds only after auth) is not a solution. Interesting side note: if an unexpected option is specified (like -v), the error message WARNING: Ignoring unknown argument '-v' is displayed, even when put in the background, so getting output from background processes generally works in my environment.
I tried the regular ssh client on x86 Ubuntu: it works. I also tried dbclient on x86 Ubuntu: it works, too. So this problem seems to be specific to the TomatoUSB build, or there was an unnoticed fix inside "dropbear v0.52" between the TomatoUSB build and the one Ubuntu provides (the only difference in the help output is the doubled default receive window buffer on Ubuntu)... how can a process know whether it was put in the background? Is there a solution to the problem?
I had a similar problem on my OpenWRT router. The Dropbear SSH client does not write anything to output if there is no stdin, e.g. when run by cron. I presume that & has the same effect on the process's stdin (no input).
I found a workaround on the author's bugtracker: try redirecting input from /dev/zero.
Like:
ssh -i yourkey user@remotehost "echo 123" </dev/zero &
It worked for me, as I tried to describe on my blog page.
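For completeness, a variant of the workaround applied to the original command, also capturing the output to a file (the log path is illustrative):
(ssh -i /path/to/private_key user@remoteHost 'sh -c "echo 1; echo 2; echo 3; uname -a"') </dev/zero >/tmp/ssh-out.log 2>&1 &
wait
cat /tmp/ssh-out.log
Here wait blocks until the background job finishes, and the log file then holds both stdout and stderr.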