Redirect "ofp_packet_in" packet among multiple controllers in POX? - sdn

I am trying to redirect an ofp_packet_in message among multiple controllers. For example, suppose there are two controllers c1, c2 and one switch s1, and s1 is assigned to c1. Now c1 receives a Packet_In from switch s1. Normally, c1 would process this Packet_In itself. What I am trying to do is forward this Packet_In to c2 and let c2 process it.
I tried to implement this idea in POX, but I ran into some problems.
This is the code of c1; only the Packet_In handling is shown:

import socket

def _handle_PacketIn(self, event):
    log.debug("Switch %s has a PacketIn: [port: %d, ...]", event.dpid, event.port)
    self._redirect_packet(event)

def _redirect_packet(self, event):
    log.debug("Send packet to 6634!")
    TCP_IP = '10.0.2.15'
    TCP_PORT = 6634
    BUFFER_SIZE = 1024
    packet = event.ofp
    # I attach the whole payload of the OpenFlow Packet_In to the new packet
    MESSAGE = packet.pack()
    # MESSAGE = MESSAGE + 'Hello world'
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((TCP_IP, TCP_PORT))
    s.send(MESSAGE)
    # s.close()
Then I start Mininet and build the topology. (The topology differs slightly from the description above; it is adapted from the Mininet example controllers2.py.)
from mininet.net import Mininet
from mininet.node import Controller, OVSSwitch
from mininet.cli import CLI
from mininet.log import setLogLevel
from mininet.node import RemoteController

def multiControllerNet():
    "Create a network from semi-scratch with multiple controllers."
    net = Mininet( controller=Controller, switch=OVSSwitch, autoSetMacs=True )

    print "*** Creating (reference) controllers"
    # c1 = net.addController( 'c1', port=6633 )
    # c2 = net.addController( 'c2', port=6634 )
    c1 = net.addController( 'c1', controller=RemoteController, ip='10.0.2.15', port=6633 )
    c2 = net.addController( 'c2', controller=RemoteController, ip='10.0.2.15', port=6634 )

    print "*** Creating switches"
    s1 = net.addSwitch( 's1' )
    s2 = net.addSwitch( 's2' )

    print "*** Creating hosts"
    hosts1 = [ net.addHost( 'h%d' % n ) for n in 3, 4 ]
    hosts2 = [ net.addHost( 'h%d' % n ) for n in 5, 6 ]

    print "*** Creating links"
    for h in hosts1:
        net.addLink( s1, h )
    for h in hosts2:
        net.addLink( s2, h )
    net.addLink( s1, s2 )

    print "*** Starting network"
    net.build()
    # c1.start()
    c2.start()
    s1.start( [ c1 ] )
    # s1.start([c2])
    s2.start( [ c2 ] )
    # s2.start([c2])

    # print "*** Testing network"
    # net.pingAll()

    print "*** Running CLI"
    CLI( net )

    print "*** Stopping network"
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )  # for CLI output
    multiControllerNet()
Then I start two controllers on my host, listening on different ports, 6633 and 6634. Start c1:
../pox.py openflow.of_01 --port=6633 --address=10.0.2.15 openflow_test log.level --DEBUG
and start c2:
../pox.py openflow.of_01 --port=6634 --address=10.0.2.15 openflow_test_2 log.level --DEBUG
c1 has only the _handle_PacketIn handler shown above; c2 has no functionality.
I ping from h3 (controlled by c1) to h5 (controlled by c2) in order to trigger the _handle_PacketIn handler.
I used Wireshark to capture both the original ofp_packet_in packet and the new redirected packet (screenshots omitted). It is clear that they have the same payload (the OpenFlow packet).
However, c2 does not accept this packet, and warns that this is the dummy OpenFlowNexus.
My guess: even though c1 sends a legal OpenFlow ofp_packet_in to c2, c2 has no idea who c1 is, because c1 has never registered with c2 via the OpenFlow handshake (ofp_hello, ofp_features_request, ...). Therefore c2 discards the ofp_packet_in sent by c1 and reports the dummy nexus.
I only want to let c2 process the Packet_In redirected by c1. That way, c2 can calculate and install table entries for table-miss events that happen at s1.
Maybe I could use other controllers, like Floodlight or ONOS, to solve this problem, or maybe it cannot be solved at all. Thank you for sharing your ideas, best wishes.
I am using POX 0.2.0 (carp).

You connect c1 to c2 as if c1 were a switch, and you expect c2 to treat it like one. However, c1 never represented itself that way; in fact, c1 did not introduce itself to c2 in any way. Each switch that connects to a controller follows a certain identification protocol (the OpenFlow handshake), and you can see the result of that in the ConnectionUp handler. c1 never triggers this handler at c2.
c1 could identify itself as a switch to c2 and then redirect packets to it. However, to my mind, this is too cumbersome. I would recommend instead that c1 connect to c2 out-of-band and communicate with c2 using their own protocol (rules). For example, c2 listens for a connection on a different port; when c1 connects, c2 waits for redirected packets. c1 redirects a packet together with the required context to c2, and c2 decides on an action upon receiving the packet and the context.
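A minimal sketch of that out-of-band idea, assuming c2 runs a plain TCP listener on an arbitrary extra port (6700 here, not a POX default) and that each message is framed with a 4-byte length prefix so consecutive packed ofp_packet_in payloads do not run together on the stream; redirect_packet, serve_redirects and the dpid-as-context choice are illustrative, not part of POX:

import socket
import struct

REDIRECT_PORT = 6700  # arbitrary out-of-band port, chosen for this sketch

def _recv_exactly(conn, n):
    # TCP gives a byte stream, not messages, so read exactly n bytes.
    buf = b''
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise RuntimeError('peer closed the connection')
        buf += chunk
    return buf

# c1 side: send one packed ofp_packet_in together with its context (here, the dpid).
def redirect_packet(dpid, ofp_data, host='10.0.2.15', port=REDIRECT_PORT):
    payload = struct.pack('!Q', dpid) + ofp_data
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    s.sendall(struct.pack('!I', len(payload)) + payload)  # 4-byte length prefix
    s.close()

# c2 side: accept a connection and decode frames until the peer disconnects.
def serve_redirects(handle, port=REDIRECT_PORT):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('', port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        length = struct.unpack('!I', _recv_exactly(conn, 4))[0]
        payload = _recv_exactly(conn, length)
        dpid = struct.unpack('!Q', payload[:8])[0]
        handle(dpid, payload[8:])  # raw ofp bytes; c2's own logic decides the action

On the c2 side, handle() could parse the raw bytes back into a Packet_In with POX's OpenFlow message parser and then compute and push flow entries for s1; the explicit framing and context are the important parts, since TCP alone gives you no message boundaries.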


Store targets as collections that handle logic operation

I think my title is somewhat unclear, but I don't know how to phrase it otherwise.
My problem is:
We have users that belong to groups. There are many types of groups, and every user belongs to exactly one group of each type.
Example: with group types A, B and C, containing respectively the groups (A1; A2; A3), (B1; B2) and (C1; C2; C3),
every user must have a list of groups like [A1, B1, C1] or [A1, B2, C3], but never [A1, A2, B1] or [A1, C2].
We have messages that are targeted at certain groups, but not just as a union; the target can be a more complex collection operation.
Example: we can have a message intended for [A1, B1, C3], [A1, *, *], [A1|A2, *, *] or even ([A1, B1, C2] | [A2, B2, C1])
(* = any group of the type, | = or).
Messages are stored in a SQL DB, and users can retrieve all messages intended for their groups.
How can I store the messages and write my query to reproduce this behavior?
An option could be to encode both the user's groups and the message's targets in a (big) integer built on powers of 2, and then base your query on a bitwise AND between the user's group code and the message's target code.
The idea is: group 1 is 1, group 2 is 2, group 3 is 4, and so on.
Level 1:
Assumptions:
you know in advance how many group types you have, and you have very few of them
you don't have more than 64 groups per type (assuming you work with 64-bit integers)
the message has only one target: A1|A2,B..,C... is ok, A*,B...,C... is ok, (A1,B1,C1)|(A2,B2,C2) is not.
Solution:
Encode each user group as the corresponding power of 2
Encode each message target as the sum of the allowed values: if groups 1 and 3 are allowed (A1|A3) the code will be 1+4=5, if all groups are allowed (A*) the code will be 2**64-1
you will have a User table and a Message table, and both will have one field for each group type code
The query will be WHERE (u.g1 & m.g1) * (u.g2 & m.g2) * ... * (u.gN & m.gN) <> 0
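To make the Level 1 arithmetic concrete, here is a tiny Python sketch of the same test the WHERE clause performs (the g1/g2/g3 fields mirror the per-type code columns above; the sample values are made up for illustration):

# User in A1, B1, C1 -> per-type codes 1, 1, 1
user = {'g1': 1, 'g2': 1, 'g3': 1}

# Message targeted at A1|A3, B*, C2 -> codes 1+4=5, 2**64-1, 2
message = {'g1': 1 + 4, 'g2': 2**64 - 1, 'g3': 2}

# Same test as WHERE (u.g1 & m.g1) * (u.g2 & m.g2) * (u.g3 & m.g3) <> 0
match = ((user['g1'] & message['g1'])
         * (user['g2'] & message['g2'])
         * (user['g3'] & message['g3'])) != 0
print(match)  # False: the user is in C1 but the message targets only C2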
Level 2:
Assumptions:
you have some more group types, and/or you don't know in advance how many they are, or how they are composed
you don't have more than 64 groups in total (e.g. 10 for the first type, 12 for the second, ...)
the message still has only one target as above
Solution:
encode each user group and each message target as a single integer, taking care of the offset: if the first type has 10 groups they will be encoded from 1 to 1023 (2**10-1); then if the second type has 12 groups they will go from 1024 (2**10) to 4194303 (2**(10+12)-1), and so on
you will still have a User table and a Message table, and both will have one single field for the cumulative code
you will need to define a function which is able to check the user group vs the message target separately by each range; this can be difficult to do in SQL, and depends on which engine you are using
The following is a Python implementation of both the encoding and the check:
class IdEncoder:
    def __init__(self, sizes):
        self.sizes = sizes
        self.grouplimits = {}
        offset = 0
        for i, size in enumerate(sizes):
            self.grouplimits[i] = (2**offset, 2**(offset + size) - 1)
            offset += size

    def encode(self, vals):
        n = 0
        for i, val in enumerate(vals):
            if val == '*':
                g = self.grouplimits[i][1] - self.grouplimits[i][0] + 1
            else:
                svals = val.split('|')
                g = 0
                for sval in svals:
                    g += 2**(int(sval) - 1)
                if i > 0:
                    g *= self.grouplimits[i][0]
            # print(g)  # debug output
            n += g
        return n

    def check(self, user, message):
        res = False
        for i, size in enumerate(self.sizes):
            if user % 2**size & message % 2**size == 0:
                break
            if i < len(self.sizes) - 1:
                user >>= size
                message >>= size
            else:
                res = True
        return res
>>> c = IdEncoder([10, 12, 10])
>>> m3 = c.encode(['1|2', '*', '*'])
>>> u1 = c.encode(['1', '1', '1'])
>>> c.check(u1, m3)
True
>>> u2 = c.encode(['4', '1', '1'])
>>> c.check(u2, m3)
False
Level 3:
Assumptions:
you adopt one of the above solutions, but you need multiple targets for each message
Solution:
You will need a third table, MessageTarget, containing the target code fields as above and a FK linking to the message
The query will search for all the MessageTarget rows compatible with the User group code(s) and show the related Message data
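In application code, the Level 3 check then amounts to "does the user match any of the message's target rows"; a short sketch reusing the IdEncoder class above (the sample targets are made up):

c = IdEncoder([10, 12, 10])
user = c.encode(['1', '1', '1'])
# Two MessageTarget rows belonging to the same message:
targets = [c.encode(['2', '2', '*']), c.encode(['1|2', '*', '*'])]
print(any(c.check(user, t) for t in targets))  # True: the second target matches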
So you have 3 main tables:
Messages
Users
Groups
You then create 2 relationship tables:
Message-Group
User-Group
If you want to limit users to have access to just "their" messages then you join:
User > User-Group > Message-Group > Message

nextflow .collect() method in RNA-seq example workflow

I understand we have to use collect() when we run a process that takes as input two channels, where the first channel has one element and the second one has more than one element:
#!/usr/bin/env nextflow
nextflow.enable.dsl=2

process A {
    input:
    val(input1)

    output:
    path 'index.txt', emit: foo

    script:
    """
    echo 'This is an index' > index.txt
    """
}

process B {
    input:
    val(input1)
    path(input2)

    output:
    path("${input1}.txt")

    script:
    """
    cat <(echo ${input1}) ${input2} > \"${input1}.txt\"
    """
}

workflow {
    A( Channel.from( 'A' ) )
    // This would only run for one element of the first channel:
    // B( Channel.from( 1, 2, 3 ), A.out.foo )
    // and this for all of them as intended:
    B( Channel.from( 1, 2, 3 ), A.out.foo.collect() )
}
Now the question: Why can this line in the example workflow from nextflow-io (https://github.com/nextflow-io/rnaseq-nf/blob/master/modules/rnaseq.nf#L15) work without using collect() or toList()?
It is the same situation: a channel with one element (the index) and a channel with more than one (the fastq pairs) are used by the same process (quant), and it runs on all fastq files. What am I missing compared to my dummy example?
You need to create the first channel with a value factory, which never exhausts the channel.
Your linked example implicitly creates a value channel, which is why it works. The same happens when you call .collect() on A.out.foo.
Channel.from (or the more modern Channel.of) creates a queue channel, which can be exhausted, which is why both A and B run only once.
So
A( Channel.value('A') )
is all you need.

Snakemake multiple input files with expand but no repetitions

I'm new to snakemake and I don't know how to solve this problem.
I've got my rule which has two inputs:
I've got a rule with two inputs:
rule test:
    input:
        input_file1=f1,
        input_file2=f2
f1 is in [A{1}$, A{2}£, B{1}€, B{2}¥]
f2 is in [C{1}, C{2}]
The numbers are wildcards that come from an expand call. I need a way to pass f1 and f2 a pair of files whose numbers match exactly. For example:
f1 = A1
f2 = C1
or
f1 = B1
f2 = C1
I have to avoid combinations such as:
f1 = A1
f2 = C2
I would create a function that makes this kind of match between the files, but it would have to manage input_file1 and input_file2 at the same time. I thought of making a function that creates a dictionary with the different allowed combinations, but how would I "iterate" over it during the expand?
Thanks
Assuming rule test gives you an output file named {f1}.{f2}.txt, you need some mechanism that correctly pairs f1 and f2 and creates a list of {f1}.{f2}.txt files.
How you create this list is up to you; expand is just a convenience function for that, and in this case you may want to avoid it.
Here's a super simple example:
import re

fin1 = ['A1$', 'A2£', 'B1€', 'B2¥']
fin2 = ['C1', 'C2']

outfiles = []
for x in fin1:
    for y in fin2:
        ## Here you pair f1 and f2. This is a very trivial way of doing it:
        if y[1] in x:
            outfiles.append('%s.%s.txt' % (x, y))

wildcard_constraints:
    f1 = '|'.join([re.escape(x) for x in fin1]),
    f2 = '|'.join([re.escape(x) for x in fin2]),

rule all:
    input:
        outfiles,

rule test:
    input:
        input_f1 = '{f1}.txt',
        input_f2 = '{f2}.txt',
    output:
        '{f1}.{f2}.txt',
    shell:
        r"""
        cat {input} > {output}
        """
This pipeline will execute the following commands
cat A2£.txt C2.txt > A2£.C2.txt
cat A1$.txt C1.txt > A1$.C1.txt
cat B1€.txt C1.txt > B1€.C1.txt
cat B2¥.txt C2.txt > B2¥.C2.txt
If you create the starting input files with touch 'A1$.txt' 'A2£.txt' 'B1€.txt' 'B2¥.txt' 'C1.txt' 'C2.txt', you should be able to run this example.
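If you prefer the dictionary idea mentioned in the question, the same outfiles list can be built from an explicit mapping instead of scanning for matching digits (the pairing below is hypothetical, using the same files as above):

# Map each f1 to its matching f2 explicitly.
pairs = {'A1$': 'C1', 'A2£': 'C2', 'B1€': 'C1', 'B2¥': 'C2'}
outfiles = ['%s.%s.txt' % (f1, f2) for f1, f2 in pairs.items()]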

CDA Authentication Parameters

For CDA authentication, the EMV terminal sends a GENERATE AC command like:
80 AE P1 00 LC DATA 00
CLA = 80
INS = AE
P1 = ?
P2 = 00
LC = ?
DATA = ?
LE = 00
Where do the parameters P1, LC and DATA come from?
P1 defines the type of cryptogram you expect the chip to generate for you. It also has a bit to specify that the data has to be returned inside a CDA jacket (see the P1 coding table in EMVCo Book 3).
So P1 = 0x00 means you expect an AAC,
0x80 an ARQC, and
0x40 a TC.
Turn on bit 5 as well, and you get the data inside a certificate.
Keep in mind that you will not always get the expected cryptogram type back from the card. The hierarchy is TC > ARQC > AAC: when requesting a TC, you can get a TC, an ARQC or an AAC; when an ARQC is requested, you can get an ARQC or an AAC, but not a TC; when an AAC is requested, the response is always an AAC, never a TC or an ARQC.
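To tie the pieces together, here is a small Python sketch of how the P1 coding above combines with the APDU layout from the question (apdu_generate_ac is a hypothetical helper; the DATA content is a placeholder, since its real layout and length are dictated by the card):

# Cryptogram types requested via P1, using the values quoted above.
AAC, TC, ARQC = 0x00, 0x40, 0x80
CDA_BIT = 0x10  # bit 5: ask for the response inside a CDA signature

def apdu_generate_ac(p1, data):
    # 80 AE P1 00 LC DATA 00 -- LC is simply the length of DATA.
    return bytes([0x80, 0xAE, p1, 0x00, len(data)]) + data + b'\x00'

data = bytes(29)  # placeholder data, all zeros for illustration
apdu = apdu_generate_ac(ARQC | CDA_BIT, data)  # request an ARQC inside a CDA jacket
print(apdu.hex())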

Header in Transformation File (SAP BPC) getting evaluated even when not desired

I am facing an issue with my flat file: the BAdI is processing the header row as part of the body of the flat file. Because of this, the TIMEID column, which is required to be a year belonging to 'Q1', produces an error. If I replace the TIMEID label with 2014.Q1 (which belongs to Q1), then it works fine, but if I use the label "TIMEID" in the header row, it gets evaluated and gives the error "time member TIMEID does not belong to Q1". This also rejects all the subsequent records. It happens regardless of whether HEADER in the transformation file is set to YES (with SKIP=1) or NO.
Because of this, the cl_ujk_query=>query() function is not returning any data.
Following is the flat file (c is for header data and r is for the records, both of which are valid):

c1   c2   c3   c4   c5   TIMEID    c7   c8   c9   c10
r11  r12  r13  r14  r15  2014.Q1   r17  r18  r19  r20
r21  r22  r23  r24  r25  2013.Q1   r27  r28  r29  r30
Following is the transformation file:

***OPTIONS
FORMAT = DELIMITED
HEADER = YES
DELIMITER = ,
SKIP = 1
SKIPIF =
VALIDATERECORDS=YES
CREDITPOSITIVE=YES
MAXREJECTCOUNT= -1
ROUNDAMOUNT=
STARTROUTINE=ZNAME_TIME

*MAPPING
A=*COL(1)
B=*STR(OC_) + *COL(8)
TIME=*COL(6)
D=*STR(NOBUYER)
E=*STR(CC)
F=*STR(INPUT)
G=*COL(5)
H=*COL(2)
I=*COL(4)
J=*STR(NO_J)
K=*COL(7)

*CONVERSION
**
You have to set HEADER = NO in the transformation file.
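For reference, the ***OPTIONS block with the suggested change would look like this (only HEADER changes; the remaining options are kept as in the question):

***OPTIONS
FORMAT = DELIMITED
HEADER = NO
DELIMITER = ,
SKIP = 1
SKIPIF =
VALIDATERECORDS=YES
CREDITPOSITIVE=YES
MAXREJECTCOUNT= -1
ROUNDAMOUNT=
STARTROUTINE=ZNAME_TIME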