Ryu controller: block traffic - OpenFlow

I am creating a simple network with Mininet, and I want to block traffic from one host to another with the controller. I want to know which Ryu API is useful for this: parser.OFPMatch or another Ryu API?

Something you may find useful for working with the Ryu controller is Ryuretic. It provides an additional layer of abstraction over the Ryu controller, so all you have to worry about is the incoming packet.
The Ryuretic backend presents every event to the user as a pkt (a dictionary object), and the contents of the pkt are retrieved by providing the header field of interest (e.g., pkt['srcmac'], pkt['dstmac'], pkt['ethtype'], pkt['inport'], pkt['srcip'], etc.). Using the information in the pkt, the user can choose which fields to match and what action (fwd, drop, redirect, mirror, craft) to take when a match is found.
To install Ryuretic, simply copy the [files](https://github.com/Ryuretic/RyureticLabs/tree/master/ryu/ryu/app/Ryuretic) to the directory /ryu/ryu/app/Ryuretic. If you have installed Ryu, then you already have the /ryu/ryu/app directory; you just need to create the Ryuretic directory and copy the files there.
Ryuretic Labs provides setup instructions and some use cases for implementing security features on SDNs using Ryuretic. It also provides a Mininet testbed for testing your network applications on the VM provided by SDN Hub.

Related

Floodlight controller routing algorithm

I recently started working with SDN and the Floodlight controller. I want to change the routing algorithm in Floodlight by replacing the existing one (Dijkstra's algorithm). Any link for that?
The Floodlight controller has a topology manager that maintains the
network graph and provides the functionality for finding routes through the topology. Dijkstra's algorithm runs inside the topology manager, so you will need to update the TopologyManager and its related dependencies to suit your requirement.
The link below is a good starting point:
https://github.com/floodlight/floodlight/blob/master/src/main/java/net/floodlightcontroller/topology/TopologyManager.java
Building on Dijkstra, Yen's algorithm computes the k shortest paths, in order of increasing cost, and these are stored in the pathCache for later use. The topology manager exposes this path information to other modules.
Application modules like the Forwarding module retrieve a path from the topology manager and then insert flows along that path based on the learned source and destination.
Controller modules like TopologyService help maintain the topology-related information for the controller and find routes in the network.
You may need to update these modules for your algorithm. There are a few additional details at https://floodlight.atlassian.net/wiki/spaces/floodlightcontroller/pages/1343513/How+to+Write+a+Module
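To see the shape of what you would be replacing, here is an illustrative sketch of Dijkstra's algorithm. Floodlight's real implementation is in Java inside TopologyManager; this Python model only shows the computation, not Floodlight code:

```python
# Illustrative sketch of Dijkstra's algorithm, the shortest-path
# computation that Floodlight's TopologyManager runs over the
# network graph. Not Floodlight code; a model of the algorithm only.
import heapq

def dijkstra(graph, src):
    """graph: {node: {neighbor: link_cost}} -> ({node: dist}, {node: prev})."""
    dist = {src: 0}
    prev = {src: None}
    heap = [(0, src)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Tiny example topology: s1 -> s2 -> s3 is cheaper than the direct s1 -> s3 link.
topo = {'s1': {'s2': 1, 's3': 4}, 's2': {'s3': 1}, 's3': {}}
dist, prev = dijkstra(topo, 's1')
print(dist['s3'], prev['s3'])  # 2 s2
```

Yen's algorithm, mentioned above, repeatedly runs a shortest-path search like this with links removed in order to build the ranked k-shortest-path list that ends up in the pathCache.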

opendaylight: how to deploy different applications on different switches?

I am using OpenDaylight (Carbon) to develop an SDN application consisting of multiple switches that connect to a controller. I want to be able to deploy different applications on different switches when they connect. How can I specify this? For example, when openflow:1 connects, I want to deploy an L2 switch on it, and when openflow:2 connects, I want to deploy a different application on it. Thanks.
First of all, you do not deploy applications on switches. Applications run on the controller; you can add logic that programs only selected switches.
What you essentially want to do is reactive programming: wait for events and act accordingly. This can easily be achieved by hooking event listeners to the nodes in the application's YANG model. Any change to these nodes will then be notified to your application, which can do selective network programming.
In your example, you will need an InstanceIdentifier object to identify which data model's events you are interested in listening to:
InstanceIdentifier<Node> nodeID = InstanceIdentifier.builder(Nodes.class)
        .child(Node.class, new NodeKey(node.getId()))
        .augmentation(FlowCapableNode.class)
        .build();
Now simply register a listener for this IID using the DataBroker object's registerDataChangeListener method:
db.registerDataChangeListener(LogicalDatastoreType.CONFIGURATION, nodeID,
        this, AsyncDataBroker.DataChangeScope.SUBTREE);
From this point on, you will be notified of any update (addition/modification/deletion) to the switches you have registered for.
Finally, to catch the event, implement the DataChangeListener interface's onDataChanged method and add your fancy logic there.
The same approach can be made more fine-grained to listen for activity on particular flow tables, flow rules, etc. on the switch.

omnet++ Inet - Simulating dynamic access point behaviour

I have to create a particular simulation for a college project. The simulation should feature several mobile nodes cyclically switching between 802.11 access point and station modes. While in station mode, nodes should read the SSIDs of access points around them, and then they should change their own SSID in AP mode accordingly. There is no need for connections or data exchange between the nodes besides the SSID reading.
Now, I've been through Omnet/Inet tutorials/documentation (all two of them), and I feel pretty much stuck.
What I could use right now is someone confirming my understanding of the framework and giving me some directions on how exactly I should proceed.
From what I understand, INET does not implement any direct/easy way to do what I'm trying to do. Most examples have fixed connections declared in NED files and hosts with a fixed role (AP or STA) defined in the .ini file.
So my question is basically how to do this: do I need to extend a module (say, WirelessHost) and modify its runtime behaviour, or should I implement a new application (like UDPApp) to have my node read other SSIDs and change its own accordingly? And what is the best way to access a host's SSID?
You may utilize two radios for each mobile node, e.g. **.mobilenode[*].numRadios = 2 (see also the example in /inet/examples/wireless/multiradio/).
The first radio operates as an AP (**.mobilenode[*].wlan[0].mgmtType = "Ieee80211MgmtAPSimplified") and has to adapt its SSID.
The second radio serves as a STA (**.mobilenode[*].wlan[1].mgmtType = "Ieee80211MgmtSTA"). Now you have to sub-class Ieee80211AgentSTA, which handles the SSID scanning procedure, and have it change the first radio's SSID upon detecting a new SSID. Then use the adapted sub-class in the simulation. Finally, active scanning has to be enabled: **.mobilenode[*].wlan[1].agent.activeScan = true.
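Put together, the relevant omnetpp.ini fragment would look something like this (a sketch using only the settings mentioned above; the SSID-changing logic itself lives in your Ieee80211AgentSTA sub-class, not in the ini file):

```ini
# Each mobile node gets two radios: one AP-mode, one STA-mode.
**.mobilenode[*].numRadios = 2

# Radio 0: acts as an access point whose SSID your agent sub-class updates.
**.mobilenode[*].wlan[0].mgmtType = "Ieee80211MgmtAPSimplified"

# Radio 1: acts as a station that scans for surrounding SSIDs.
**.mobilenode[*].wlan[1].mgmtType = "Ieee80211MgmtSTA"
**.mobilenode[*].wlan[1].agent.activeScan = true
```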

LabVIEW: error in accessing server address space

I am trying to access the server address space and I am getting this error:
LabVIEW: (Hex 0xFFFA8EBB) The node path refers to a node that does not exist in the server address space
The server is on a PLC and I am connected via LAN. The information I have is:
Server-URL: opc.tcp://192.168.1.135:4840
Namespace-URI: urn:B&R/pv/
I have tried different things, but I am not sure how to access the variables in the address space. Any suggestions would be helpful.
B&R publishes the endpoints of your data in a fairly consistent manner. If you use an OPC UA browsing tool, you will find that the address space visible to LabVIEW should start with
PLC.Modules.<Default>
B&R Automation Studio requires that you complete the default OPC UA configuration. Within that configuration you need to enable the nodes/endpoints in question. You can then access these nodes in LabVIEW.
You should check the following:
Under your controller, confirm that you have enabled OPC UA in the configuration view.
Next, check that you have added an OPC UA Default View File to your configuration for the hardware you are running.
Finally, in that file, ensure that you have enabled the endpoint/variable and that it has at least the read permission. The quickest and most expedient way is to go to the top level of the OPC UA Default View File, add the Everyone role, and enable Read. This will cascade down to all enabled endpoints.
Save this and make sure it has been built and transferred to your controller. You should then be able to access the endpoints.
For example, if I have a program called "LampController" running in B&R with a variable called switchState it would be addressed by:
PLC.Modules.<Default>.LampController.switchState
You need to use %26 in place of the ampersand. The ampersand is used to separate parameters within a URI's query segment, so it's pretty unusual to even have a raw ampersand in a URI. Are you sure you typed it right?

How to efficiently deploy content types to a Content Type Hub

I have set up a Content Type Hub and tested the syndication is working correctly by creating a test content type and watching it be published to the client site.
Then I deployed the content types I am actually interested in publishing to the hub (by way of a feature) along with the site columns they depend on.
I get the error
Content type '...' cannot be published to this site because feature '...' is not enabled.
I want to deploy content types with features for upgradability and ease of porting between dev, qual and prod environments, but I am left not understanding what the benefit of the Hub is.
If I have to activate the deploying feature, the content types will already be on the site before publishing takes place. If I have to manually create the content types on the Hub site with the web UI (yuck!), I have the problem of trying to keep three landscapes manually synchronized.
Is there a way to efficiently manage content type deployment to the Hub while still using the Hub to publish the content types?
The advantage of using the Content Type Hub is that it allows you to use and reuse your content types across multiple site collections and web applications throughout your farm.
Because all of your site collections now use instances of the same syndicated content types, if in the future you need to add/remove/rename columns within the content types, it is as easy as updating the content type and resubscribing (then allowing SharePoint to run its timer jobs, and double-checking that the changes propagated, because you're a careful SharePoint administrator).
I am not sure which error you are receiving; there simply isn't enough context in your post. However, I think you may be slightly confused about how syndicated content types are published. First, you turn on the content syndication hub publishing feature on the site collection that holds all of the content types you are going to reuse throughout your farm. Next, you configure the managed metadata service so that SharePoint loads each of your content types "into memory", more or less.
After this step, you get to choose which site collections you want to subscribe to the syndication hub. To do this, you need to turn on the content type publishing site collection feature. Note: If you use blank templates for your sites you may receive a feature error like you've described, due to a "flaw" with blank templates. See my post at: http://www.thesharepointblog.net/Lists/Posts/Post.aspx?ID=109
Only AFTER you've turned on the subscribing feature, and the Content Type Hub timer job has run, AND the subscribing timer job has run, will your site collection see the available content types.
As for manually creating content types on the hub site, the only OOB way of doing this is to use the UI. Personally, I wrote a utility that does everything I just described for me, from creating the initial content types, to creating the syndication hub, publishing them to all of the site collections, and, most time-consumingly, associating them with all of the lists and libraries on the subscribing site collections. I had intended for my employer to sell it, but as they don't seem interested, I could open-source it if there is enough interest.
Hope this was helpful.
This looks like a shortcoming of the hub, indeed.
I've witnessed it before.
If you've deployed your content type to the hub, please check whether the Inherits attribute of the ContentType element is set to TRUE. Otherwise it won't work in a hub.
<ContentType ID="xxxxx"
Name="xxxx"
Group="xxxx"
Description="xxxx"
Inherits="TRUE"
Version="0">
</ContentType>
Don't forget that you can actually synchronize the content types BETWEEN farms -- this is especially valuable when you are developing on a separate farm and don't want to hassle with the PnP Framework for managing your content types. In some cases, the content type may already exist on the production farm and you need a copy of it on dev and/or test.