Suppose I have a topology like <host1--switch1--switch2--switch3--controller>, so the physical path between switch1 and the controller consists of switch2 and switch3. Does the control traffic from switch1 to the controller go through switch2 and switch3? I mean, do the OpenFlow messages between switch1 and the controller go to switch2 first and then to switch3 until they reach the controller? Am I right?
OpenFlow switches have separate management ports to connect to the controller. If you use an out-of-band connection (a direct connection to the controller via the management port), the switch communicates with the controller directly. In that case you can still use the topology you mentioned for data flows.
But if there is no management connection between switch1 and the controller, then it comes down to an "in-band" connection. In that case switch1 sends OpenFlow messages via a data port, and switch2 encapsulates these messages and sends them to the controller.
From the OpenFlow specification:
The specification of the networks used for the OpenFlow channels is outside the scope of the present specification. It may be a separate dedicated network, or the OpenFlow channel may use the network managed by the OpenFlow switch (in-band controller connection). The only requirement is that it should provide TCP/IP connectivity.
A dedicated network means an out-of-band connection.
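For illustration, here is a minimal Mininet sketch of the out-of-band case for the topology in the question, assuming a remote controller listening on 127.0.0.1:6653 (address and port are assumptions); Mininet connects every switch to the controller over a management channel that is separate from the data-plane links:

```python
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.cli import CLI

# Data-plane topology: host1--switch1--switch2--switch3
net = Mininet(controller=None, switch=OVSSwitch)

# Out-of-band control: every switch talks to the controller directly
# over the management channel, not through the other switches.
c0 = net.addController('c0', controller=RemoteController,
                       ip='127.0.0.1', port=6653)

h1 = net.addHost('h1')
s1 = net.addSwitch('s1')
s2 = net.addSwitch('s2')
s3 = net.addSwitch('s3')

net.addLink(h1, s1)
net.addLink(s1, s2)
net.addLink(s2, s3)

net.start()
CLI(net)
net.stop()
```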
To learn some details about in-band connections, you can look at this documentation:
The important part is:
In this setup, control traffic sent by switch A will be seen by switch B, which will send it to the controller as part of an OFPT_PACKET_IN message. Switch A will then see the OFPT_PACKET_IN message's packet, re-encapsulate it in another OFPT_PACKET_IN, and send it to the controller. Switch B will then see that OFPT_PACKET_IN, and so on in an infinite loop.
I just finished reading sections 1-6.2 of the OpenFlow specification here.
Section 6.1.2 says:
Packet-in events can be configured to buffer packets. For packet-in generated by an output action in a flow entries or group bucket, it can be specified individually in the output action itself (see 7.2.6.1), for other packet-in it can be configured in the switch configuration (see 7.3.2). If the packet-in event is configured to buffer packets and the switch has sufficient memory to buffer them, the packet-in event contains only some fraction of the packet header and a buffer ID to be used by a controller when it is ready for the switch to forward the packet. Switches that do not support internal buffering, are configured to not buffer packets for the packet-in event, or have run out of internal buffering, must send the full packet to controllers as part of the event. Buffered packets will usually be processed via a Packet-out or Flow-mod message from a controller, or automatically expired after some time.
This makes it sound like for every packet that hits the OpenFlow switch, an asynchronous message must be sent to the controller to make a forwarding decision. However, Chapter 5 makes it sound like a switch has a set of OpenFlow flow entries that, at the end of the pipeline, produce an action set determining what should be done with a packet, and the packet is only forwarded to the controller when there is a flow table miss.
Under what conditions is a packet sent to the controller for a decision? Is it always? Or is it only circumstantial?
Packets will be sent to the OpenFlow controller any time the output port is set to the controller.
PACKET_IN events occur when a packet doesn't match any flow on the switch; those packets are then sent to the controller. Otherwise no event is created: the switch simply forwards the packet according to its flow rules and the controller is none the wiser.
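As a concrete illustration, here is a minimal Ryu sketch (OpenFlow 1.3 assumed; class and handler names are made up for the example) that installs a lowest-priority table-miss entry whose output action is the controller port, so in this sketch only packets that miss every other flow entry generate a packet-in:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # Lowest-priority table-miss entry: send unmatched packets to the
        # controller (without buffering them on the switch).
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        # Fires only for packets that missed every higher-priority entry
        # (or that some entry explicitly sent to the controller).
        self.logger.info("packet-in: %d bytes, reason=%d",
                         ev.msg.total_len, ev.msg.reason)
```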
According to the description here
http://flowgrammable.org/sdn/openflow/message-layer/flowmod/
and in the OpenFlow switch specification, the flow_mod message is not acknowledged by the switch.
Is there any way for the controller (POX, ODL, or any other) to receive confirmation that a flow was installed, or to retrieve the flows installed in the switch's flow table?
Thank you
There is a concept in OpenFlow called a "barrier": the controller can send a barrier request, and the switch must finish processing all previously received messages (including the flow_mod) before it answers with a barrier reply, which effectively acknowledges the flow_mod.
In OpenDaylight, the default openflowplugin statistics collection will poll the connected switches and store their configuration (including the flow table) in OpenDaylight's operational store.
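For example, with Ryu (OpenFlow 1.3 assumed; POX and OpenDaylight expose the same OpenFlow messages through their own APIs), a rough sketch of both approaches, a barrier request after the flow_mod and a flow stats request to read back the installed flows, could look like this:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FlowModConfirm(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def add_flow_and_confirm(self, datapath, flow_mod):
        parser = datapath.ofproto_parser
        datapath.send_msg(flow_mod)
        # The switch must finish processing everything sent before the
        # barrier (including the flow_mod) before it sends the reply.
        datapath.send_msg(parser.OFPBarrierRequest(datapath))
        # Independently, ask the switch to report its installed flows.
        datapath.send_msg(parser.OFPFlowStatsRequest(datapath))

    @set_ev_cls(ofp_event.EventOFPBarrierReply, MAIN_DISPATCHER)
    def barrier_reply_handler(self, ev):
        self.logger.info("barrier reply: earlier flow_mods have been processed")

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def flow_stats_reply_handler(self, ev):
        # One entry per flow currently installed in the switch's tables.
        for stat in ev.msg.body:
            self.logger.info("installed flow: %s", stat)
```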
OpenFlow allows a controller to request port statistics from a switch using a request message, and the controller in return receives a reply with the statistics.
For example, in Ryu we can use ryu.ofproto.ofproto_v1_3_parser.OFPPortStatsRequest for this purpose.
Is there a way to get the port statistics from a switch without issuing a request message from the controller, but possibly as an action by the switch on receipt of a particular type of packet?
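For reference, a rough Ryu sketch of the controller-driven request/reply pattern described above (OpenFlow 1.3 assumed; the datapath handle would come from your app's switch-features or state-change handler):

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PortStatsExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def request_port_stats(self, datapath):
        ofp = datapath.ofproto
        parser = datapath.ofproto_parser
        # Ask for statistics of every port on this switch.
        req = parser.OFPPortStatsRequest(datapath, 0, ofp.OFPP_ANY)
        datapath.send_msg(req)

    @set_ev_cls(ofp_event.EventOFPPortStatsReply, MAIN_DISPATCHER)
    def port_stats_reply_handler(self, ev):
        # The reply carries one stats entry per port.
        for stat in ev.msg.body:
            self.logger.info("port %d: rx=%d tx=%d",
                             stat.port_no, stat.rx_packets, stat.tx_packets)
```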
I'm currently trying to make 3 Arduinos talk to each other over ZigBee, and it's kind of working.
But I currently use AT mode on the XBees, and it's a bit of a hassle when I have to switch the destination address on the Coordinator of the network (1 Coordinator and 2 Routers).
Can I put the Coordinator in API mode (to make it easier to switch addresses with xbee-api for Arduino) but still be able to communicate with the AT routers and be able to send/receive data from them?
Thanks for your answer :)
Absolutely, and it's common to set up a network like that. You can have AT routers connected to "dumb" hosts that just send streams of serial data, and an API coordinator that receives from multiple routers, identifying the source of each message using the headers of the API frames, and able to send unicast messages back to individual routers or broadcast messages to all routers.
Make use of the 0x10 Transmit Request API frame to send from the coordinator to the routers. You'll receive either 0x90 or 0x91 frames (depending on the setting of ATAO).
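To make the frame layout concrete, here is a small Python sketch that builds a 0x10 Transmit Request frame by hand, assuming API mode 1 (no byte escaping); the 64-bit destination address and payload are placeholders, and on the Arduino side the xbee-api library would normally build the same bytes for you:

```python
def transmit_request(dest_addr64, payload, frame_id=0x01):
    """Build an XBee 0x10 Transmit Request API frame (API mode 1, no escaping)."""
    frame_data = bytes([0x10, frame_id])   # frame type, frame ID
    frame_data += dest_addr64              # 64-bit destination address (8 bytes)
    frame_data += bytes([0xFF, 0xFE])      # 16-bit address: unknown
    frame_data += bytes([0x00, 0x00])      # broadcast radius, options
    frame_data += payload                  # RF data
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    return (bytes([0x7E, len(frame_data) >> 8, len(frame_data) & 0xFF])
            + frame_data + bytes([checksum]))

# Example: send "hello" to a router; the 64-bit address here is a placeholder.
frame = transmit_request(bytes.fromhex("0013A20000000000"), b"hello")
print(frame.hex())
```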
How do I set up a signaling server for WebRTC when the systems are connected in a local area network? Is it mandatory to use a STUN and TURN server for signaling?
To make WebRTC run on a LAN, you will need a signaling server in that LAN. A signaling server is any web server that allows your web clients to exchange the SDP offer/answer and the ICE candidates generated by the WebRTC PeerConnection. This can be done using AJAX or WebSockets.
I have listed some top sources for information about WebRTC. Please go through some of the links on that page to better understand how WebRTC signaling works.
You will not need a STUN/TURN server, as your WebRTC clients (i.e. web browsers) will be in the LAN and reachable by each other. FYI, STUN/TURN servers are not part of the signaling but part of the media leg, and are usually required for NAT traversal of media.
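As one possible starting point, here is a minimal signaling relay sketch in Python using the third-party websockets package (the port and the broadcast-to-everyone behaviour are assumptions); it simply forwards each client's SDP and ICE JSON messages to every other connected client:

```python
import asyncio
import websockets  # third-party package, >= 10.1 assumed

clients = set()

async def relay(websocket):
    # Register the new client and forward everything it sends
    # (SDP offers/answers, ICE candidates) to every other client.
    clients.add(websocket)
    try:
        async for message in websocket:
            for peer in list(clients):
                if peer is not websocket:
                    await peer.send(message)
    finally:
        clients.discard(websocket)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())
```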
WebRTC needs some kind of signalling system for the initial negotiation: transferring SDP, exchanging ICE candidates, sending and receiving offers, and so on; the rest is done over the peer-to-peer connection. For the initial signalling you can use any technique, such as AJAX calls or socket.io.
STUN and TURN servers are required for NAT traversal, which matters because it determines the path between peers. You can use Google's public STUN server address stun:stun.l.google.com:19302, or you can configure your own TURN server, for example the rfc5766-turn-server.
Making a signalling server for WebRTC is quite easy.
I used PHP, MySQL and AJAX to maintain the signalling data.
Suppose A wants to call B.
Then A creates an offer using the createOffer method. This method returns an offer object.
You have to transfer this offer object to user B; this is part of the signalling process.
Now create a MySQL table with the columns:
caller, callee, offer, answer, callerICE and calleeICE
The offer created by A is stored in the offer column with the help of an AJAX call.
(Remember to JSON.stringify the JS object before "POSTing" it to the server.)
User B then polls this offer column filled in by caller A, again with the help of an AJAX call.
In this way, the offer object created at user A can arrive at user B.
Now user B responds to the offer by calling the createAnswer method. This method returns an answer object, which can again be stored in the "answer" column of the database.
Caller A then polls this "answer" column filled in by callee B.
In this way, the answer object created by B can arrive at A.
To store the iceCandidate objects describing caller A, use the "callerICE" column of the MySQL table. Callee B polls "callerICE" to learn the candidates of caller A (and likewise with "calleeICE" in the other direction).
In this way we can transfer the iceCandidate objects of each peer to the other.
Once the iceCandidate exchange is complete, the connectionState property becomes "connected", indicating that the two peers are connected.
If you have any questions, let me know!
Cheers! You can now share the local media stream with the remote peer.
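For readers who prefer Python over PHP, here is a rough sketch of the same polling-based signalling store using Flask and an in-memory dict in place of the MySQL table (all route names are assumptions, and a real server would need one row per call plus cleanup):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# One "row" per call, with fields mirroring the columns described above:
# offer, answer, callerICE, calleeICE.
calls = {}

@app.route("/signal/<call_id>/<field>", methods=["POST"])
def store(call_id, field):
    # Caller/callee POST their JSON.stringify'd offer, answer or ICE here.
    calls.setdefault(call_id, {})[field] = request.get_json()
    return jsonify(ok=True)

@app.route("/signal/<call_id>/<field>", methods=["GET"])
def poll(call_id, field):
    # The other side polls (the "scan" step above) until the field appears.
    value = calls.get(call_id, {}).get(field)
    return jsonify(value=value)

if __name__ == "__main__":
    app.run(port=5000)
```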