I have created a simple network topology using Mininet and integrated it with OpenDaylight. Now I want to set up queues on a switch and get the flow stats from it. I haven't found a step-by-step tutorial anywhere. Any kind of help or suggestions will be appreciated.
A picture of my network topology is attached.
Assuming your switches support OpenFlow 1.3, you can make use of the OpenFlow meter table [1].
The current way to flush a meter to a switch with OpenDaylight looks like this (full details in [2]):
1. Create an MD-SAL modeled meter and commit it into the datastore using a two-phase commit.
2. The FRM (Forwarding Rules Manager) gets notified and invokes the corresponding RPC (addMeter) on the particular service provider (if a suitable provider is registered for the given node).
3. The provider (the plugin in this case) transforms the MD-SAL modeled meter into an OF-API modeled meter.
4. The OF-API modeled meter is then flushed into the OFLibrary.
5. The OFLibrary encodes the meter into the particular version of the wire protocol and sends it to the particular switch.
6. Check on the Mininet side whether the meter is installed.
[1] - https://www.cs.princeton.edu/courses/archive/fall13/cos597E/papers/openflow-spec-v1.3.2.pdf (Section 5.7)
[2] - https://wiki.opendaylight.org/view/OpenDaylight_OpenFlow_Plugin:End_to_End_Meters#Learn_End_to_End_for_Inventory
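If you drive this through ODL's RESTCONF interface (which performs the datastore commit for you), a minimal sketch might look like the following. The controller address, credentials, meter id, and the exact XML fields are assumptions based on the examples in [2], so adjust them to your release:

```python
# Sketch: push a drop meter to switch openflow:1 via OpenDaylight RESTCONF.
# Assumes a local controller on port 8181 with default admin/admin credentials.
import requests

METER_XML = """
<meter xmlns="urn:opendaylight:flow:inventory">
    <meter-id>1</meter-id>
    <meter-name>sample-meter</meter-name>
    <flags>meter-kbps</flags>
    <meter-band-headers>
        <meter-band-header>
            <band-id>0</band-id>
            <drop-rate>1024</drop-rate>
            <drop-burst-size>512</drop-burst-size>
            <meter-band-types>
                <flags>ofpmbt-drop</flags>
            </meter-band-types>
        </meter-band-header>
    </meter-band-headers>
</meter>
"""

url = ("http://localhost:8181/restconf/config/"
       "opendaylight-inventory:nodes/node/openflow:1/meter/1")
resp = requests.put(url, data=METER_XML,
                    headers={"Content-Type": "application/xml"},
                    auth=("admin", "admin"))
print(resp.status_code)

# Step 6, on the Mininet side: dump the switch's meters to verify, e.g.
#   ovs-ofctl -O OpenFlow13 dump-meters s1
```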
I work for a company that manufactures large scientific instruments, with a single instrument having 100+ components: pumps, temperature sensors, valves, switches and so on. I write the WPF desktop software that customers use to control their instrument, which is connected to the PC via a serial or TCP connection. The concept is the same though - to change a pump's speed, for example, I would send a "command" to the instrument, where an FPGA and custom firmware would take care of handling that command. The desktop software also needs to display dozens of "readback" values (temperatures, pressures, valve states, etc.), which are again retrieved by issuing a "command" to request a particular readback value from the instrument.
We're considering implementing some kind of telemetry service, whereby the desktop application will record maybe a couple of dozen readback values, each having its own interval - weekly, daily, hourly, per minute or per second.
Now, I could write my own telemetry solution, whereby I record the data locally to disk then upload to a server (say) once a week, but I've been wondering if I could utilise Azure IoT for collecting the data instead. After wading through the documentation and concepts I'm still none the wiser! I get the feeling it is designed for "physical" IoT devices that are directly connected to the internet, rather than data being sent from a desktop application?
Assuming this is feasible, I'd be grateful for any pointers to the relevant areas of Azure IoT. Also, how would I map a single instrument and all its components (valves, pumps, etc) to an Azure IoT "device"? I'm assuming each component would be a device, in which case is it possible to group multiple devices together to represent one customer instrument?
Finally, how is the collected data reported on? Is there something built-in to Azure, or is it essentially a glorified database that would require bespoke software to analyse the recorded data?
Azure IoT would give you:
Device SDKs for connecting (MQTT or AMQP), sending telemetry, receiving commands, receiving messages, reporting properties, and receiving property update requests.
An HA/DR service (IoT Hub) for managing devices and their authentication, configuring telemetry routes (where to route the incoming messages).
Service SDKs for managing devices, sending commands, requesting property updates, and sending messages.
If it matches your solution, you could also make use of the Device Provisioning Service, where devices connect and are assigned an IoT hub. This would make sense, for instance, if you have devices around the world and wish to have them connect to the closest IoT hub you have deployed.
Those are the building blocks. You'd integrate the device SDK into your WPF app. It doesn't have to be a physical device, but the fact that it has access to sensor data makes it behave like one, and that seems like a good fit. Then you'd build a service app using the Service SDKs to manage the fleet of WPF apps (each representing an instrument with components, right?). For monitoring telemetry, it would depend on how you choose to route it. By default, it goes to an Event Hub instance created for you. You'd use the Event Hub SDK to subscribe to those messages. Alternatively, or in addition, those telemetry messages could be routed to Azure Storage, where you could perform historical analysis. There are other routing options.
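As an illustration, the device-side integration can be quite small. Here is a minimal sketch using the Python flavor of the device SDK (azure-iot-device); your WPF app would use the equivalent C# SDK calls, and the connection string and reading below are placeholders:

```python
# Sketch: send one telemetry reading to IoT Hub with azure-iot-device
# (pip install azure-iot-device). Connection string and payload are
# placeholders.
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = ("HostName=<your-hub>.azure-devices.net;"
                     "DeviceId=<instrument-id>;SharedAccessKey=<key>")

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# One readback value, tagged with the component it came from.
reading = {"component": "pump1", "metric": "speed_rpm", "value": 1200}
msg = Message(json.dumps(reading), content_type="application/json",
              content_encoding="utf-8")
client.send_message(msg)

client.shutdown()
```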
Does that help?
I have a question regarding how to enable multihop in LoRa (that is, communicating between two end devices without a LoRaWAN gateway). I have tried doing it using transparent bridging, but it doesn't work.
Although it works with LoRaBlink, the issue is flooding: as the number of devices increases, channel utilization and performance degrade rapidly.
Can someone please suggest another way to do it, or how to do it efficiently through LoRaBlink?
Thanks
If you check the wiki of the RadioHead library, you will find RHRouter and RHMesh under the topic Managers, with the following descriptions:
RHRouter Multi-hop delivery of RHReliableDatagrams from source node to destination node via 0 or more intermediate nodes, with manual, pre-programmed routing.
RHMesh Multi-hop delivery of RHReliableDatagrams with automatic route discovery and rediscovery.
There are also raw LoRa libraries for mesh networking. One is implemented on Pycom devices; the library for it is called PyMesh. The technology is based on Thread, by the Thread Group.
We have around 9000 devices in the field.
These devices are deployed in groups of 1-100 at customers' premises.
The devices are not capable of azure-iot-sdk integration.
The devices have a web service API.
The devices should appear as first-class devices in Azure.
We like the IoT Edge module provisioning feature.
We want to evaluate whether modules could gather data from the devices and send it to IoT Hub for further processing.
We found this feature overview of IoT Edge: https://learn.microsoft.com/de-de/azure/iot-edge/iot-edge-as-gateway
The transparent and protocol translation patterns are out of scope due to the above facts. The identity translation pattern seems to fit.
We want a 1:1 relationship between a module and a real device.
Therefore we assume the following POC, hoping for clarification and best practices:
1. We implement an IoT Edge module (azure-iot-sdk-java).
2. We open a module connection to IoT Edge and subscribe to desired properties.
3. The module identity receives, as desired properties, the IP of the real device and the Azure device identity connection string.
4. We open a device connection to IoT Edge by adding GatewayHostName to the device connection string, as described here: https://learn.microsoft.com/de-de/azure/iot-edge/iot-edge-as-gateway
5. We request data from the real device and send it via the Azure device identity.
This somewhat mixes up two patterns and seems kind of odd to us.
Can you point out best practices and risks with this approach?
Yes, I agree that the identity translation pattern could fit your scenario.
There are three patterns for using an IoT Edge device as a gateway: transparent, protocol translation, and identity translation; you can refer to this link for more introduction to these three patterns.
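To make the POC concrete, here is a minimal sketch of such a module, shown with the Python flavor of the SDK for brevity (the Java SDK you mentioned has equivalent APIs). The desired-property names, gateway hostname, and the device's web service path are assumptions:

```python
# Sketch: an IoT Edge module that reads its desired properties (assumed to
# contain the real device's IP and its Azure device connection string),
# polls the device's web service, and forwards the data through the edge
# hub under the device's own identity (identity translation).
import json
import requests
from azure.iot.device import IoTHubModuleClient, IoTHubDeviceClient, Message

module_client = IoTHubModuleClient.create_from_edge_environment()
module_client.connect()

twin = module_client.get_twin()
desired = twin["desired"]
device_ip = desired["deviceIp"]                      # assumed property name
device_conn_str = desired["deviceConnectionString"]  # assumed property name

# Connect as the leaf device, but through the edge gateway, by appending
# GatewayHostName to the device connection string.
gateway_hostname = "my-edge-gateway.local"           # assumed hostname
device_client = IoTHubDeviceClient.create_from_connection_string(
    device_conn_str + ";GatewayHostName=" + gateway_hostname)
device_client.connect()

# Pull one reading from the real device's web service API (path assumed)
# and forward it under the device's identity.
data = requests.get(f"http://{device_ip}/api/telemetry").json()
device_client.send_message(Message(json.dumps(data)))

device_client.shutdown()
module_client.shutdown()
```

Note that the leaf-device connection only succeeds if the client trusts the edge gateway's root CA certificate, and a production module would subscribe to desired-property patches rather than reading the twin once.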
According to the description at http://flowgrammable.org/sdn/openflow/message-layer/flowmod/ and in the OpenFlow switch specifications, the flow_mod message is not acknowledgeable.
Is there any way for the controller (POX, ODL, or any other) to receive confirmation that a flow was installed, or to retrieve the installed flows from the switch's flow table?
Thank you
There is a concept in OpenFlow called a "barrier": the controller can send a barrier request, and the switch's barrier reply confirms that all preceding messages, including the flow_mod, have been processed.
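For example, with a Ryu-based controller (a sketch; the match, actions, and priority are illustrative):

```python
# Sketch: send a flow_mod followed by a barrier request (Ryu, OpenFlow 1.3).
# The switch may only answer with a barrier reply after all preceding
# messages, our flow_mod included, have been processed.
def add_flow_with_barrier(datapath, match, actions, priority=10):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=priority,
                                        match=match, instructions=inst))
    datapath.send_msg(parser.OFPBarrierRequest(datapath))
```

A handler for ryu.controller.ofp_event.EventOFPBarrierReply then gives you the confirmation point.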
In OpenDaylight, the default openflowplugin stats collection will poll the connected switches and store the config (including the flow table) in OpenDaylight's operational store.
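So one practical way to confirm installation with OpenDaylight is to read the flow table back from the operational store over RESTCONF. A sketch, assuming a local controller with default admin/admin credentials and switch openflow:1 (the exact URL form can vary slightly across ODL releases):

```python
# Sketch: read the flows that openflowplugin's stats collection has stored
# in the operational datastore for table 0 of switch openflow:1.
import requests

url = ("http://localhost:8181/restconf/operational/"
       "opendaylight-inventory:nodes/node/openflow:1/"
       "flow-node-inventory:table/0")
resp = requests.get(url, auth=("admin", "admin"))
table = resp.json().get("flow-node-inventory:table", [{}])[0]
for flow in table.get("flow", []):
    print(flow.get("id"), flow.get("priority"))
```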
This question is an extension of the following:
OpenFlow Rule Metadata
I would like to have my question about metadata clarified.
Let us say I have an OpenFlow rule, as below:
Cookie=0x8000001, duration=228925.445s, table=17, n_packets=350, n_bytes=32424, priority=10,metadata=0xc000f30000000000/0xffffff0000000000 actions=goto_table:19
I want to understand the following:
1. Is there a certain rule/algorithm used to determine this metadata from a packet? I ask because the packet in OVS is actually switched based on matching the metadata; is that correct? (At least according to the above flow rule.)
2. Since the packet itself does not carry the metadata, how exactly does the packet hit a flow that matches against the metadata?
3. If I understood correctly, packets traversing between the flow tables stay within the OVS application itself, i.e., they are handled at the OVS application level until an egress port has been determined. In that case, the metadata is also handled at the OVS application level until the packet is sent out via the egress port. Is this correct?
4. Finally, which module in ODL is responsible for determining the metadata? I would like to understand from the code how exactly it is done.
The OpenFlow metadata field starts with a value of zero for every packet. Tables can then write to this field, and you can match on it in subsequent tables. It is only used to carry information from one table to the next, as explained in the OpenFlow specification:
Metadata: a maskable register that is used to carry information from one table to the next.
First of all, you can try Ryu instead; its code is easier to read and understand.
Then, I think metadata/instructions/actions belong to the processing of OVS forwarding, but they need to attach to something, and that is the packet that OVS received. About the question "Is there a certain rule/algorithm used to determine this metadata from a packet?": I think the value of the metadata is determined by the controller, which means it depends on how you design your own network instance using some controller application (e.g., Ryu).
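To make the write/match mechanics concrete, here is a sketch against Ryu's OpenFlow 1.3 API (the table numbers, metadata value, and flood action are illustrative, not taken from your rule dump):

```python
# Sketch: in table 0, write metadata 0x1 and go to table 1; in table 1,
# match on that metadata and flood the packet. Ryu, OpenFlow 1.3.
def install_metadata_pipeline(datapath):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    # Table 0: tag every packet with metadata 0x1, then send it to table 1.
    instructions = [
        parser.OFPInstructionWriteMetadata(
            metadata=0x1, metadata_mask=0xffffffffffffffff),
        parser.OFPInstructionGotoTable(table_id=1),
    ]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, table_id=0,
                                        priority=10, match=parser.OFPMatch(),
                                        instructions=instructions))

    # Table 1: match the metadata written by table 0 and flood the packet.
    # The metadata never leaves the switch; it only exists between tables.
    match = parser.OFPMatch(metadata=0x1)
    actions = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, table_id=1,
                                        priority=10, match=match,
                                        instructions=inst))
```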