Porting RIP on SDN as an application

I have been asked to port RIP onto SDN as my final year project.
But my doubt is this: when the controller already has routing intelligence, why run RIP on SDN? Is there any other advantage to using RIP as an application in SDN? Is this a valid project to proceed with?

I think that in SDN you can use a link-state routing protocol, because in SDN you have information about all nodes in the network; but you can use such distributed algorithms between SDN controllers. Frameworks like HyperFlow (for more information, see the paper "HyperFlow: A Distributed Control Plane for OpenFlow") use this idea for a distributed control plane.

You would need to run the path-finding logic of RIP (or any other routing protocol) over the topology discovered through SDN southbound protocols to find a path for routing traffic. Then you need to install the rules that reflect this path on the data plane.
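To make that concrete, here is a minimal sketch of the distance-vector computation RIP performs, run over a topology the controller has already discovered (plain Python; the adjacency list is a made-up stand-in for what LLDP/OpenFlow discovery would return):

    # Minimal distance-vector (RIP-style) computation over a discovered topology.
    # Hop counts of 1 per link mirror RIP's metric; 16 means unreachable.

    INFINITY = 16  # RIP treats 16 hops as unreachable

    topology = {          # adjacency list: node -> neighbors (hypothetical)
        's1': ['s2', 's3'],
        's2': ['s1', 's4'],
        's3': ['s1', 's4'],
        's4': ['s2', 's3'],
    }

    def rip_routes(topology, source):
        """Bellman-Ford with hop-count metric: returns {dest: (next_hop, cost)}."""
        dist = {n: INFINITY for n in topology}
        next_hop = {n: None for n in topology}
        dist[source] = 0
        for _ in range(len(topology) - 1):      # relax all edges |V|-1 times
            for u, neighbors in topology.items():
                for v in neighbors:
                    if dist[u] + 1 < dist[v]:
                        dist[v] = dist[u] + 1
                        # remember the first hop out of 'source' toward v
                        next_hop[v] = v if u == source else next_hop[u]
        return {d: (next_hop[d], dist[d]) for d in topology if d != source}

    print(rip_routes(topology, 's1'))
    # e.g. {'s2': ('s2', 1), 's3': ('s3', 1), 's4': ('s2', 2)}

The resulting next-hop table is what you would then translate into flow rules and push down via the southbound protocol.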


RESTful for Axon Repositories

PROBLEM: The application uses Axon Framework with org.axonframework.eventsourcing.EventSourcingRepository, and responses need to contain _links in HAL format.
RESEARCH: This can be done with Spring HATEOAS, but a lot has to be hand-coded in the REST controller. Spring Data REST offers auto-generation of links with a single annotation on a CRUD repository, but the project is not RDBMS- and JPA-based, so Spring Data REST is not an option.
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
Gotcha, so you are essentially looking to expose a service's capabilities when it comes to which commands can be handled by a given Command Handling Component, disregarding whether that component is an Aggregate or an External Command Handler.
Note that the interaction between a component which dispatches commands and one which handles them resides within the CommandBus. When an Axon application starts up, it is the CommandBus that receives all the registrations for known command handlers.
That way, the CommandBus provides location transparency for this part of the application. And it is location transparency that gives you clear and cleanly segregated components; essentially, what will help you take an evolutionary microservices approach (as AxonIQ describes here).
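Axon's actual API is Java, but the registration-and-dispatch idea is easy to illustrate. A minimal, language-agnostic sketch of the command-bus pattern being described (all names are illustrative, not Axon's):

    # Minimal sketch of the command-bus pattern described above
    # (illustrative names only; this is not Axon's actual API).

    class CommandBus:
        def __init__(self):
            self._handlers = {}  # command type -> handler callable

        def register(self, command_type, handler):
            # On startup, every command-handling component registers here.
            self._handlers[command_type] = handler

        def dispatch(self, command):
            # Dispatchers never know *where* the handler lives:
            # that is the location transparency referred to above.
            handler = self._handlers[type(command)]
            return handler(command)

    class CreateOrder:                     # a command (hypothetical)
        def __init__(self, order_id):
            self.order_id = order_id

    bus = CommandBus()
    bus.register(CreateOrder, lambda cmd: f"order {cmd.order_id} created")
    print(bus.dispatch(CreateOrder(42)))   # -> "order 42 created"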
I would thus question the necessity of sharing all known command handlers of a given service/aggregate through REST.
Regardless, whether it makes sense is always a question of "it depends". I, for one, have created a means to share the commands a service can handle as a JSON schema, as you can see here in a sample project I helped build between AxonIQ and Pivotal.
So, to come round to your question:
QUESTION: Does Axon offer any RESTful solutions out of the box, or is there a better auto-configured alternative to Spring HATEOAS?
No, Axon does not provide something like this out of the box, as it expects you to use the CommandBus for communication. I do know you might need a starting point somewhere, for which REST makes sense; but even then, exposing all known commands can be regarded as exposing your internal domain to the outside world. In the majority of scenarios that would be undesirable, but as stated, this highly "depends" on your use case.

PLC (Programmable Logic Controller) Protocols

I'd like to integrate a PLC with a computer: set outputs and read inputs. I've looked at Modbus, and it's simple, although if I want to act on a change in an input I would need to poll the input to detect the change. Are there any open and common protocols used by PLCs that push an update on a sensor/input change rather than requiring polling?
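For reference, the polling approach the question wants to avoid looks roughly like this (a sketch assuming the Python pymodbus library; the address, input range, and poll interval are made up, and the client API differs slightly between pymodbus versions):

    # Edge detection by polling Modbus discrete inputs: the approach the
    # question is trying to avoid. Sketch only; assumes pymodbus.
    import time
    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient("192.168.0.10")   # hypothetical PLC address
    client.connect()

    previous = None
    while True:
        rr = client.read_discrete_inputs(0, count=8)   # inputs 0..7
        current = rr.bits[:8]
        if previous is not None and current != previous:
            changed = [i for i in range(8) if current[i] != previous[i]]
            print("inputs changed:", changed)          # react to the edge here
        previous = current
        time.sleep(0.1)    # poll interval: latency vs. CPU/network trade-off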
OPC UA (Unified Architecture) is an open protocol standard implemented on many PLCs with many PC client implementations available. It supports both "subscription" and "event" mechanisms, in addition to polling and other communication services.
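As a sketch of what an OPC UA subscription looks like from the PC side (assuming the Python asyncua library; the endpoint URL and node id are hypothetical):

    # OPC UA data-change subscription: the server pushes updates, no polling.
    # Sketch assuming the asyncua library; URL and node id are made up.
    import asyncio
    from asyncua import Client

    class ChangeHandler:
        # asyncua calls this for every data-change notification
        def datachange_notification(self, node, val, data):
            print(f"{node} changed to {val}")

    async def main():
        async with Client("opc.tcp://192.168.0.10:4840") as client:
            node = client.get_node("ns=2;s=Inputs.Sensor1")
            sub = await client.create_subscription(500, ChangeHandler())  # 500 ms publish interval
            await sub.subscribe_data_change(node)   # server now pushes changes
            await asyncio.sleep(60)                 # keep the session alive

    asyncio.run(main())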
Open and common, and also simple to implement? I don't think there are any.
You should look for terms like "report by exception" and "unsolicited reporting". DNP3, for example, has this feature and is widely used in electrical applications, but it is neither simple to implement nor open.
Depending on your controller, you could also look at MQTT; there is support for Arduinos and RPis, as well as industrial controllers like the WISE-5231.
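A minimal subscriber sketch, assuming the Python paho-mqtt library (1.x-style callback API; broker address and topic are made up):

    # MQTT is publish/subscribe: the broker pushes messages to you,
    # so there is no polling loop. Sketch assuming paho-mqtt 1.x.
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # called whenever the PLC (or a gateway) publishes a change
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()                  # paho-mqtt 1.x style constructor
    client.on_message = on_message
    client.connect("broker.local", 1883)    # hypothetical broker
    client.subscribe("plc/inputs/#")        # topic layout is up to you
    client.loop_forever()                   # push-based: no polling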
The two previous answers are decent. As Nelson mentioned, you haven't specified which controller you are using. You also haven't mentioned what on the computer you'd like to integrate with the PLC. Beckhoff's TwinCAT PLCs can use MQTT and OPC UA, as well as a host of other protocols. Beckhoff also offers libraries for their ADS protocol.
With ADS, you can set up an ADS server on your machine (it's very easy) and have your PLCs write to that server. The more typical way, though, is to subscribe to variables/structures in the PLC using the ADS mechanism from within your program's runtime. An event is fired when the variable or struct changes (you can specify by how much it should have changed, if it is an analog value).
The method you pick is probably dictated by your architecture. If you have many PLCs, I would set up an ADS server on your computer; if you have a handful, subscribe from your program. Of course, you can mix and match these approaches too.
Here is a page of examples: https://infosys.beckhoff.com/english.php?content=../content/1033/tc3_adssamples_net/html/tcsample_net_intro.htm&id=8269274592628480035
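For the subscription route, a sketch of an ADS device notification using the third-party Python pyads library (AMS net id, variable name, and type are hypothetical; Beckhoff's own .NET samples at the link above are the canonical reference):

    # ADS device notification: the PLC pushes a callback on value change.
    # Sketch assuming pyads; net id, port, variable, and type are made up.
    import ctypes
    import pyads

    plc = pyads.Connection("192.168.0.10.1.1", pyads.PORT_TC3PLC1)
    plc.open()

    def on_change(notification, data):
        # decode the raw notification payload into a typed value
        handle, timestamp, value = plc.parse_notification(
            notification, pyads.PLCTYPE_INT)
        print(f"{data} changed to {value} at {timestamp}")

    attrib = pyads.NotificationAttrib(ctypes.sizeof(pyads.PLCTYPE_INT))
    handles = plc.add_device_notification("GVL.nSensorValue", attrib, on_change)
    # ... the callback now fires on change; clean up with
    # plc.del_device_notification(*handles) and plc.close() when done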

Composing several bounded contexts in DDD

We're developing a comprehensive domain model encompassing 7(!) models/bounded contexts and spanning several teams. We have yet to decide whether each of the BCs is entirely disconnected from the others (being orchestrated by a layer above) or whether they are going to communicate via domain events.
The application under development is, for all intents and purposes, a single-threaded SWT/Swing application, so no fancy distributed mumbo jumbo between the different BCs is needed.
Yet a big question remains: how do we integrate all those different models? Should the Application Layer undertake the task? If so, and since in some (hopefully few) cases the wiring and ordering end up being complex, isn't the Application Layer the wrong place to do that?
For instance, consider the use case of assembling a very complex, synthetically created human (AssembleHumanoid). We have bounded contexts relating to the circulatory system, the bone structure, the nervous system, the ventilation system, coordination, the immunological and mental systems, and also the sensory system (this was all just made up, as you might imagine).
Wiring up all that stuff in the Application Layer feels kind of wrong. The obvious solution seems to be to create a second Domain Layer just for orchestration matters. I've looked it up, but Vernon's Implementing Domain-Driven Design doesn't directly touch the issue (although he gets near it on p. 531, "Composing Multiple Bounded Contexts").
What are your thoughts on the matter?
I'm tackling the same questions as you right now. My role in my project is architect, and we have identified 5 BCs. But we are one team and intend to develop these BCs within one large application. So our BCs are modules within a larger insurance application, where each BC speaks its own ubiquitous language (treaty, reinsurance, security, medical risk assessment, premium).
I have given this a lot of thought, and I think we'll send updates to the other BCs through domain events. Our client is an MVC site that consumes our service layer. My intention is that the application layer has that kind of granularity, so it manages to perform the main task for the client without making the client MVC project coordinate with the other BCs.
We use a shared kernel between some BCs, but not for communication. We do use the DDD integration pattern where we reference other BCs through Value Objects. We also have some BCs acting like factories; for example, the Security BC creates different user roles for the other BCs.
But when it comes to executing a use case that actually needs to do some final task in other BCs, domain events come to the rescue.
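To illustrate the idea, a minimal in-process domain-event dispatcher of the kind that works fine in a single-threaded application (a Python sketch; all names, including the insurance-flavored event, are illustrative):

    # Minimal in-process domain-event dispatcher for communicating between
    # bounded contexts in a single-threaded application (illustrative names).
    from collections import defaultdict

    class DomainEvents:
        _subscribers = defaultdict(list)   # event type -> list of handlers

        @classmethod
        def subscribe(cls, event_type, handler):
            cls._subscribers[event_type].append(handler)

        @classmethod
        def publish(cls, event):
            for handler in cls._subscribers[type(event)]:
                handler(event)

    class PremiumCalculated:               # event owned by the Premium BC
        def __init__(self, treaty_id, amount):
            self.treaty_id, self.amount = treaty_id, amount

    # The Reinsurance BC reacts without the Premium BC knowing about it:
    DomainEvents.subscribe(PremiumCalculated,
                           lambda e: print(f"recalculating share for {e.treaty_id}"))
    DomainEvents.publish(PremiumCalculated("T-17", 1000.0))

The publishing BC stays decoupled from its subscribers, which is exactly the property that lets each BC keep its own ubiquitous language.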

WCF Service Layer in n-layered application: performance considerations

When I went to university, teachers used to say that in a well-structured application you have a presentation layer, a business layer, and a data layer. This is what I heard for more than 5 years.
When I started working, I discovered that this is true, but that sometimes it is better to have more than just three layers. Two or three days ago I discovered this article by John Papa that explains how to use Entity Framework in a layered application. According to that article, you should have:
UI Layer and Presentation Layer (Model View Pattern)
Service Layer (WCF)
Business Layer
Data Access Layer
The Service Layer is, to me, one of the best ideas I've heard since I started working. Your UI is then completely "disconnected" from the Business and Data Layers. Now, when I went deeper by looking into the provided source code, I began to have some questions. Can you help me answer them?
Question #0: is this a good enterprise application template in your opinion?
Question #1: where should I host the service layer? Should it be a Windows Service or what else?
Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
Question #3: if you agree with me at Question 2, which kind of binding would you use?
Looking forward to hearing from you. Have a nice weekend!
Marco
Question #0: is this a good enterprise application template in your opinion?
Yes, for most middle-of-the-road line-of-business applications, it's probably a good starting point.
Question #1: where should I host the service layer? Should it be a Windows Service or what else?
If you're serious about using WCF services, then yes, I would recommend self-hosting them in a Windows service. Why? You don't need to have IIS on the server, you don't have to rely on IIS to host your service, you can choose your service address as you wish, and you have complete control over your options.
Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
No, the most interoperable would be a basicHttpBinding with no security. Any SOAP stack will be able to connect to that. Or a webHttpBinding for a RESTful service; for that you don't even need SOAP, just an HTTP stack.
What do we use?
Internally, if intranet scenarios are in play (server and clients behind the corporate firewall): always netTcp. It's the best, fastest, most versatile binding. It doesn't work well over the internet, though (you need to open ports on firewalls, which is always a hassle).
Externally: webHttpBinding or basicHttpBinding, mostly because of their ease of integration with non-.NET platforms.
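To illustrate that interoperability point: consuming a WCF service exposed over basicHttpBinding from a non-.NET platform takes only a few lines with any SOAP stack. A sketch using the Python zeep library (the service URL and operation name are hypothetical):

    # Interoperability demo: calling a WCF basicHttpBinding endpoint
    # from Python. Sketch assuming the zeep SOAP library; the WSDL
    # address and operation name (GetData) are made up.
    from zeep import Client

    client = Client("http://server:8000/MyService.svc?wsdl")  # endpoint WSDL
    result = client.service.GetData(value=42)                 # plain SOAP 1.1 call
    print(result)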
Here are my 5 cents:
0: yes
1: I would start by hosting it in IIS because it's very easy and gets you somewhere fast.
2: If you need security then definitely yes, go with WSHttpBinding (or maybe even wsFederationHttpBinding if you want some fancier security). It performs quite fast in practice even though, as you say, it does have some overhead, and it can be quite hard to call from other platforms (such as Java).
3: N/A
Finally, remember to define your services' data-contract objects in a separate assembly that can be referenced both from the service DLL and from the consumer in your UI layer.
Did your teachers also tell you why you should create such an architecture ;-)? What I am missing in your question are your requirements. Before any of us can tell you whether this is a good architecture or template, we have to know the requirements of the application. The non-functional requirements, or "-ilities", of an application should drive the design of its architecture.
I would like to know: what is the most important non-functional requirement of your application (maintainability, portability, reliability, ...)? For example, take a look at http://en.wikipedia.org/wiki/ISO/IEC_9126 or http://www.serc.nl/quint-book/
I think that we architects should create architectures based on requirements from the business. This means that we architects should make the business more aware of the importance of non-functional requirements.
Question #0: is this a good enterprise application template in your opinion?
You use the layers architecture pattern, which means that layers can evolve independently of each other more easily. It is one of the most used architecture patterns; note that this pattern also has disadvantages (performance, traceability).
Question #1: where should I host the service layer? Should it be a Windows Service or what else?
Difficult to answer. Hosting a service in IIS has two advantages: it scales more easily, and traceability is easier (WCF in IIS has loads of monitoring options). Hosting a service in a Windows Service gives you more binding options (named pipe binding / TCP binding).
Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance (due to serialization and deserialization of objects). Do you agree?
Performance-wise, WSHttpBinding costs more, but it scores high on interoperability. So the choice depends on your non-functional requirements.
Question #3: if you agree with me at Question 2, which kind of binding would you use?
Named pipe and TCP bindings are very fast. Named pipe binding should only be used for communication within a single machine. TCP binding could be an option, but you have to open a dedicated port in the firewall.
I know this question is old, but I found it while searching for my current architectural problem: refactoring a service layer that feeds a web application. While googling, I found these much more modern guidelines by Microsoft. Hoping this helps somebody, here are the links:
about the business layer and the discouraged anemic domain model: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/microservice-domain-model
about the data persistence layer: https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/infrastructure-persistence-layer-design
The entire pattern book is downloadable as a PDF.
[EDIT]
Reading through the documentation, I found a technique that, in my experience, has been useful for avoiding lots of switch cases and for applying powerful patterns to solve complex problems. The suggested implementation is better than mine (I had to use older C# versions): https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/enumeration-classes-over-enum-types
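The linked technique replaces bare enum values with full objects that carry behavior. A rough transposition of the idea to Python (the CardType example follows the article; the fee behavior is made up for illustration):

    # "Enumeration class" sketch: each member is an object with behavior,
    # so callers dispatch polymorphically instead of switching on a value.

    class CardType:
        def __init__(self, id, name, fee_rate):
            self.id, self.name, self.fee_rate = id, name, fee_rate

        def fee(self, amount):               # behavior lives on the member
            return amount * self.fee_rate

    CardType.AMEX = CardType(1, "Amex", 0.030)
    CardType.VISA = CardType(2, "Visa", 0.025)

    print(CardType.VISA.fee(100.0))          # 2.5, no switch/case needed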

Object Oriented module/definition for networking devices/topology?

Is there any module/definition available for a class/schema representing the topology, connections, access details, etc. of networking devices? The intent is to use this for automation, and to manage routers/servers as objects rather than as Tcl keyed lists/arrays, which get unwieldy.
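For illustration, the kind of object model the question describes might start like this (a Python sketch using dataclasses; every field name here is hypothetical):

    # Illustrative object model: devices, access details, and explicit links.
    from dataclasses import dataclass, field

    @dataclass
    class Device:
        name: str
        mgmt_ip: str
        username: str
        password: str                      # access details for automation
        role: str = "router"               # router / switch / server ...
        interfaces: dict = field(default_factory=dict)  # ifname -> peer name

    @dataclass
    class Link:
        a: Device
        a_if: str
        b: Device
        b_if: str

    @dataclass
    class Topology:
        devices: dict = field(default_factory=dict)    # name -> Device
        links: list = field(default_factory=list)

        def connect(self, a, a_if, b, b_if):
            self.links.append(Link(a, a_if, b, b_if))
            a.interfaces[a_if] = b.name
            b.interfaces[b_if] = a.name

    topo = Topology()
    r1 = Device("r1", "10.0.0.1", "admin", "secret")
    r2 = Device("r2", "10.0.0.2", "admin", "secret")
    topo.devices = {"r1": r1, "r2": r2}
    topo.connect(r1, "eth0", r2, "eth0")
    print(r1.interfaces)   # {'eth0': 'r2'}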
Look at SNMP (Simple Network Management Protocol). Most network devices and services, from IIS to Cisco routers, provide some sort of SNMP interface that may provide the capabilities for which you are searching. Specific implementations and capabilities vary between vendors and devices, but the protocol is standardized and very widely implemented.
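As a sketch of what reading device data over SNMP looks like (assuming the Python pysnmp library's classic synchronous API; host, community string, and OID are examples):

    # Reading a device description over SNMP. Sketch assuming pysnmp;
    # newer pysnmp releases expose an asyncio variant of this API.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),            # SNMPv2c
        UdpTransportTarget(("192.0.2.1", 161)),        # device address
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(f"{name} = {value}")                 # e.g. device model/OS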
The word topology, in the context of a communication network, refers to the way in which devices are connected over the network. Its important types are
BUS
RING
STAR
etc.
Look into MIB-2 (SNMP-based). Note that there are dozens of different MIBs representing various networking technologies/solutions. You can even devise your own private MIB to suit your needs.
You should refer to the relevant IETF drafts explaining the nomenclature used in MIBs (when I find the reference, I'll post it).
I would also suggest searching for keywords such as "OSS", "Network Management", and "NMS".