What happens if there are multiple forwarding rules for the same flow in an OpenFlow switch?

I am trying to use the POX controller to control the path of flows. I know that Open vSwitch will choose the forwarding rule that has the highest priority. But what happens if I insert a new forwarding rule for an existing flow with the same priority? Will Open vSwitch randomly choose one rule to match?

The OpenFlow 1.3 specification says:
If there are multiple matching flow entries with the same highest priority, the selected flow entry is explicitly undefined.
The older OpenFlow 1.0 specification states that:
If multiple entries have the same priority, the switch is free to choose any ordering.
The Open vSwitch docs and this other source here say:
OpenFlow leaves behavior undefined when two or more flows with the same priority can match a single packet. Some users expect "sensible" behavior, such as more specific flows taking precedence over less specific flows, but OpenFlow does not specify this and Open vSwitch does not implement it. Users should therefore take care to use priorities to ensure the behavior that they expect.
It is unclear, I know, but based on these sources it is up to the user to deal with situations where flow entries overlap at the same priority. The user should take care to set the right priorities, and the switch is free to handle overlaps however the vendor desires. The switch may, for instance, select the newest flow entry, as you said happened in your case.
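For illustration, with POX you can both give each flow-mod an explicit priority and set the OpenFlow 1.0 OFPFF_CHECK_OVERLAP flag, which asks the switch to reject (with an error) a new entry that overlaps an existing entry at the same priority instead of accepting it silently. A minimal sketch; the match fields and output port are placeholders:

import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import IPAddr

def install_flow(connection):
    # Give the entry an explicit priority so overlaps are resolved predictably.
    msg = of.ofp_flow_mod()
    msg.priority = 200
    # Ask the switch to refuse overlapping entries at the same priority.
    msg.flags |= of.OFPFF_CHECK_OVERLAP
    msg.match.dl_type = 0x0800              # IPv4
    msg.match.nw_dst = IPAddr("10.0.0.1")   # placeholder destination
    msg.actions.append(of.ofp_action_output(port=2))  # placeholder port
    connection.send(msg)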

Related

What is the difference between "write metadata" and "set metadata" in OVS?

I mean, write-metadata is implemented as an instruction in OpenFlow; on the other hand, a set-field action can also set the metadata. What is the difference between them?
As far as I can see, WRITE_METADATA and SET_FIELD for metadata do the same thing in Open vSwitch.
I'm guessing both are exposed by Open vSwitch to follow the OpenFlow specifications as much as possible. OpenFlow has a clear distinction between Actions and Instructions (cf. Sections 5.5 and 5.6 of OpenFlow v1.5.1): Instructions are attached to rules and applied at the end of each table, whereas Actions are attached to packets (using the Write-Actions Instruction) and applied at the end of the pipeline (or before if the Apply-Actions Instruction is executed). In Open vSwitch, the distinction is not as clear: Actions can be attached to both packets and rules.
Thus, while WRITE_METADATA differs from SET_FIELD in the OpenFlow specification, because the first is an Instruction and the second an Action, you can achieve the same effect as WRITE_METADATA with a SET_FIELD Action.
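For illustration, here is a small Ryu (OpenFlow 1.3) sketch showing both forms side by side; the metadata value and mask are arbitrary placeholders:

from ryu.ofproto import ofproto_v1_3 as ofp
from ryu.ofproto import ofproto_v1_3_parser as parser

# Instruction form: write metadata (under a mask) as part of the rule.
write_md = parser.OFPInstructionWriteMetadata(0x1, 0xff)

# Action form: a SET_FIELD action on the metadata field, wrapped in an
# Apply-Actions instruction so it executes immediately.
set_md = parser.OFPActionSetField(metadata=0x1)
apply_md = parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, [set_md])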

Interoperability in DDS

I am new to the DDS domain and need help with the following.
How do you publish common topics between two vendors to achieve interoperability in DDS?
The scenario is:
Suppose there are two vendor products, V1 and V2. V1 has a publisher which publishes on topic T1. V2 wants to subscribe to this topic. How will the subscriber (V2) know that there exists a topic T1?
I have a similar doubt at the domain level: how will a subscriber know which domain it has to participate in?
I am using OpenDDS.
Thanks
Interoperability between vendors is possible, and regularly tested/demonstrated by the main vendors.
You will need to configure your DDS implementation to use RTPS (I think RTPS2 currently), rather than any proprietary transport that vendors may use. This might be enabled by default.
In terms of which domain to participate in, you programmatically create a domain participant in a particular domain (which domain it connects to might be controlled by a config file), and all further entities (publishers, subscribers, etc.) that you create then belong to that domain participant and therefore operate in that domain.
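As a sketch of what that looks like in code (using the Eclipse Cyclone DDS Python binding purely as an illustration; OpenDDS itself exposes the same entity model through C++/Java, and the type and topic names here are made up):

from dataclasses import dataclass
from cyclonedds.domain import DomainParticipant
from cyclonedds.topic import Topic
from cyclonedds.pub import Publisher, DataWriter
from cyclonedds.idl import IdlStruct

@dataclass
class SensorReading(IdlStruct, typename="SensorReading"):
    sensor_id: int
    value: float

dp = DomainParticipant(0)               # both vendors must use the same domain ID
topic = Topic(dp, "T1", SensorReading)  # topic name and data type must match too
writer = DataWriter(Publisher(dp), topic)
writer.write(SensorReading(sensor_id=1, value=21.5))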
To build on rcs's answer a bit... the actual amount of work you have to do can depend on the DDS implementations (OpenDDS, RTI, PrismTech...) because they'll have different defaults. If you use the same implementation on both ends, your configuration becomes a lot simpler, since defaults should line up for things like domain and RTPS.
You will need to make sure the following match:
Domain ID
Domain Partition
Transport (I recommend RTPS; FWIW, the version difference between 2.1 and 2.2 hasn't mattered in my experience)
TCP or UDP
Discovery port and data port - this will matter more or less depending on which implementations you use and whether or not you're using the same one on both ends of the connection; if you are using the same, they'll have the same defaults
Make sure the topic one publishes matches the topic the other subscribes to; this applies to both the Topic and the Type (see more here)
Serialization of the data
Discovery (unicast vs. multicast; make sure whatever setup you choose is valid, e.g. both devices are in the same multicast group)
QoS settings will need to line up, though I think defaults will likely work (read more here)
Get the Shapes demo working between the machines you're working on first; this does some basic sanity checking to confirm that communication is possible with the given configuration and network setup. Every vendor/implementation that I've seen has a Shapes demo to run; for example, here is RTI's.
That's all I can think of right now; hope that helps. I have found the DDS documentation to be really good, especially once you know when you can (and when you can't) apply one vendor's documentation to your implementation (e.g. whether an answer found in RTI's docs or forum works for your OpenDDS application). Often the solutions are similar, but you'll find RTI supports the most, and RTI + PrismTech have some of the best documentation.
The DDS RTPS protocol exchanges discovery information so that different applications participating in the same domain (!) know who is out there, and what they are offering/requesting. You need to make sure that the two applications are using the same domain ID (specified on the domain participant). Also, as some implementations allow for different transport options, make sure to use RTPS (sometimes called DDSI) networking.
The RTPS specification contains a mapping from domain ID to port numbers, so if applications from different vendors use the same ID, it should just work. Implementations might, however, override port numbers with configuration.
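Since the mapping is plain arithmetic, you can compute the default RTPS v2 port numbers yourself; the constants below are the defaults from the RTPS specification, which, as noted, implementations may override:

# Default RTPS v2 port mapping constants (PB, DG, PG, d0..d3 from the spec).
PB, DG, PG = 7400, 250, 2
d0, d1, d2, d3 = 0, 10, 1, 11

def rtps_ports(domain_id, participant_id=0):
    return {
        "discovery_multicast": PB + DG * domain_id + d0,
        "discovery_unicast": PB + DG * domain_id + d1 + PG * participant_id,
        "user_multicast": PB + DG * domain_id + d2,
        "user_unicast": PB + DG * domain_id + d3 + PG * participant_id,
    }

print(rtps_ports(0))  # domain 0: discovery multicast 7400, unicast 7410, user data 7401/7411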
To maximize the chance that the applications communicate properly, ensure they use the same IDL data model. Vendors have different approaches to type evolution and to mapping types that don't exactly match, and not all of them implement the XTypes specification (yet).
Also, as some implementations are stricter than others, ensure that you stay within the bounds of the specification. This means that a topic name should only contain alphanumeric characters (I sometimes see ':' used to indicate scoping; that is not allowed).
Things that will definitely not work between vendors are TRANSIENT/PERSISTENT durability and communication over TCP, as neither has been standardized yet. TRANSIENT_LOCAL should work. The difference between TRANSIENT_LOCAL and TRANSIENT is that with TRANSIENT_LOCAL, data is no longer aligned after a publisher (writer) leaves the system, whereas with TRANSIENT that data will still be available.
Also note that for API-level interoperability between vendors, your best chance is to use the new isocpp API, since that one has been implemented pretty consistently across the vendor implementations I've seen.
Hope that helps!

(MPLS) tunneling in OpenFlow

We have a network consisting of multiple interconnected OpenFlow 1.0 and 1.3 compatible switches. Each switch is connected to one or more other switches such that there is a route from every switch to every other switch, though not necessarily a direct one (so packets might have to pass through multiple switches to reach their destination).
What I need to do is to get some form of tunneling system, where I can create a flow that passes packets through all these switches to the target machine.
What I know is possible is to push and pop MPLS labels on the packet. So I figured I might push two labels at the ingress. The outer label identifies the target switch and the inner label identifies the target port. This way I only need flows on each switch to pass packets with matching labels toward the target switch first, and then to the target port once they have reached the target switch.
The problem here is only that I found no way of matching on MPLS labels. Does anyone know if there is a way to match on these labels? Or is there any other way of doing what I want to do?
Thanks a lot in advance!
Yes, you can do:
match = parser.OFPMatch(in_port=inPort, eth_type=ether.ETH_TYPE_MPLS, mpls_label=m_label)
That's how you can match MPLS labels and then apply whatever actions you want.
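Building on that, the two-label scheme from the question can be set up with OpenFlow 1.3 push-MPLS actions, again in Ryu. A hedged sketch, where SWITCH_LABEL, PORT_LABEL, and next_hop_port are placeholders and parser is the OpenFlow 1.3 parser as above:

from ryu.ofproto import ether

SWITCH_LABEL = 100   # placeholder: identifies the target switch
PORT_LABEL = 200     # placeholder: identifies the target port
next_hop_port = 1    # placeholder: port toward the next switch

# Ingress switch: push the inner (port) label first, then the outer
# (switch) label on top of it.
actions = [
    parser.OFPActionPushMpls(ether.ETH_TYPE_MPLS),
    parser.OFPActionSetField(mpls_label=PORT_LABEL),
    parser.OFPActionPushMpls(ether.ETH_TYPE_MPLS),
    parser.OFPActionSetField(mpls_label=SWITCH_LABEL),
    parser.OFPActionOutput(next_hop_port),
]
# At the target switch, pop the outer label with OFPActionPopMpls
# (ethertype still ETH_TYPE_MPLS, since the inner label remains), match
# the inner label, pop it, and output on the target port.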

Prevent subscribers from reading certain samples temporarily

We have a situation where there are two modules, one having a publisher and the other a subscriber. The publisher is going to publish some samples using key attributes. Is it possible for the publisher to prevent the subscriber from reading certain samples? This case would arise when the module with the publisher is currently updating a sample and does not want anybody else to read it until it is done. Something like a mutex.
We are planning on using OpenSplice DDS, but please give your input even if it is not specific to OpenSplice.
Thanks.
RTI Connext DDS supplies an option to coordinate writes (described in the documentation as a "coherent write"; see Section 6.3.10 and the PRESENTATION QoS):
myPublisher->begin_coherent_changes();
// writers in that publisher do their writes; data is captured at the publisher
myPublisher->end_coherent_changes();  // all writes now leave
Regards,
rip
If I understand your question properly, then there is no native DDS mechanism to achieve what you are looking for. You wrote:
This case would arise when the module with the publisher is currently updating the sample, which it does not want anybody else to read till it is done. Something like a mutex.
There is no such thing as a "global mutex" in DDS.
However, I suspect you can achieve your goal by adding some information to the data model and adjusting your application logic. For example, you could add an enumeration field to your data; let's say you add a field called status that can take one of the values CALCULATING or READY.
On the publisher side, instead of "taking the mutex", your application could publish a sample with the status value set to CALCULATING. When the calculation is finished, the new sample can be written with the value of status set to READY.
On the subscriber side, you could use a QueryCondition with status=READY as its expression. Read or take actions should only be done through the QueryCondition, using read_w_condition() or take_w_condition(). Whenever the status is not equal to READY, the subscribing side will not see any samples. This approach takes advantage of the mechanism that newer samples overwrite older ones, assuming that your history depth is set to the default value of 1.
If this results in the behaviour that you are looking for, then there are two remaining disadvantages to this approach. First, the application logic gets somewhat polluted by the use of the status field and the QueryCondition. This could easily be hidden by an abstraction layer, though; it would even be possible to hide it behind some lock/unlock-like interface. The second disadvantage is the extra sample going over the wire when setting the status field to CALCULATING. But extra communication cannot be avoided anyway if you want to implement distributed mutex-like functionality, and it is only an issue if your samples are pretty big and/or high-frequency. In that case, you might have to resort to a dedicated, small Topic for the single purpose of simulating the locking mechanism.
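To make the pattern concrete, here is a small vendor-neutral Python sketch of just the data-model idea (this is not a DDS API; the class and field names are illustrative). It mimics a KEEP_LAST(1) history with a read filter equivalent to the QueryCondition expression status=READY:

CALCULATING, READY = "CALCULATING", "READY"

class LastValueCache:
    def __init__(self):
        self._latest = {}  # key -> latest sample (history depth 1)

    def write(self, key, status, payload=None):
        self._latest[key] = {"status": status, "payload": payload}

    def read_ready(self):
        # Counterpart of read_w_condition() with expression status = 'READY'.
        return {k: s for k, s in self._latest.items() if s["status"] == READY}

cache = LastValueCache()
cache.write("sensor-1", CALCULATING)          # publisher starts updating
assert cache.read_ready() == {}               # subscriber sees nothing yet
cache.write("sensor-1", READY, payload=42.0)  # update finished
assert "sensor-1" in cache.read_ready()       # now visible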
The PRESENTATION QoS is not specific to RTI Connext DDS; it is part of the OMG DDS specification. That said, the ability to write coherent changes on multiple DataWriters/Topics (as opposed to using a single DataWriter) is part of one of the optional profiles (the object model profile), so not all DDS implementations necessarily support it.
Gerardo

Metadata in OpenFlow rule

This question is an extension of the following:
OpenFlow Rule Metadata
I would like to have my question about metadata clarified.
Let us say I have an OpenFlow rule, as below:
Cookie=0x8000001, duration=228925.445s, table=17, n_packets=350, n_bytes=32424, priority=10,metadata=0xc000f30000000000/0xffffff0000000000 actions=goto_table:19
I want to understand the following:
Do we have a certain rule/algorithm to determine this metadata from a packet? Because the packet in OVS is actually switched based on matching the metadata; is that correct? (At least according to the above flow rule.)
The packet itself does not carry the metadata, so how exactly does a packet hit a flow that is matched against the metadata?
If I understood correctly, packets traversing between the flow tables stay within the OVS application itself, i.e. they are handled at the OVS application level until an egress port has been determined. So in that case, the metadata is handled at the OVS application level until the packet is sent via the egress port. Is this correct?
Finally, which module in ODL is responsible for determining the metadata? I would like to understand from the code how exactly it is done.
The OpenFlow metadata field starts with a value of zero for every packet. Tables can then write to this field, and you can match on it in subsequent tables. It is only used to carry information from one table to the next, as explained in the OpenFlow specifications:
Metadata: a maskable register that is used to carry information from one table to the next.
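In Ryu terms (OpenFlow 1.3, with parser being ryu.ofproto.ofproto_v1_3_parser), the write side and the match side of the rule you quoted look roughly like this sketch, reusing your value and mask:

# Earlier table: write the metadata, then send the packet on to table 17.
write_and_goto = [
    parser.OFPInstructionWriteMetadata(0xc000f30000000000, 0xffffff0000000000),
    parser.OFPInstructionGotoTable(17),
]

# Table 17: match the metadata under the same mask and go to table 19,
# just like the quoted rule does.
match = parser.OFPMatch(metadata=(0xc000f30000000000, 0xffffff0000000000))
goto_19 = [parser.OFPInstructionGotoTable(19)]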
First of all, you can try Ryu instead; its code is easier to read and understand.
Then, I think metadata/instructions/actions belong to the processing of OVS forwarding, but they need to attach to something, and that is the packet that OVS received. About the question "Do we have a certain rule/algorithm to determine this metadata from a packet?": I think the value of the metadata is determined by the controller, which means that it depends on how you design your own network instance using some controller application (e.g. Ryu).