NFV/SDN in CloudStack

I am new to NFV and SDN technologies. I have downloaded OpenDaylight and CloudStack, and I have a Mininet network as the underlying physical topology. I want to set up a multi-cloud that must contain CloudStack and another IaaS technology, and finally manage the interconnection of resources created on these clouds. I have already integrated OpenDaylight with CloudStack, but I still don't have a clear picture of how to start.
My questions are:
Which technology should guide me in realizing a multi-cloud, NFV or SDN? Also, is OpenDaylight the solution for this, or are there other frameworks or projects that can help me better?
I would be grateful for any information that could get me started on this project.

It depends on what you want to achieve.
OpenDaylight already supports inter-domain routing through BGP, so having two OpenDaylight instances talking to each other through BGP will let you carry L3 (IP-based) traffic back and forth, which is sufficient to interconnect L3-as-a-Service tenants between the two cloud systems.
BGP (as it is today in ODL) will not cut it for L2-as-a-Service or complex multi-cloud deployments. To achieve connectivity across cloud domains for L2aaS / complex tenants, you will need:
Control Plane: an extension to the East-West signaling between the SDN controllers (SDNc) of each cloud to handle L2aaS service requirements (OpenDaylight supports multiple options here)
Data Plane:
A cloud fabric that can carry L2aaS (you don't want to lose the L2aaS identifiers when you move from one domain to the other).
An anchor node (e.g., a DC-GW) that the SDNc can use to configure the data-plane L2 fabric cross-connects (through interfaces such as OVSDB, ML2, or others).
The above two bullets are not trivial work; don't expect them to be done without some customization. Not to mention that the DC-GW vendor's compatibility with ODL (its ML2 plugin capabilities) will define a lot of what can and cannot be done.
Final point: there are a couple of companies building their SDN go-to-market around the problem you are trying to solve (Cisco, Arista, Nokia, Ericsson, etc.). Keep us posted on the progress you make on that front; you may end up laying the foundation for a new framework in the industry.
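To make the BGP-peering option concrete, here is a minimal sketch of adding a BGP neighbor to OpenDaylight over RESTCONF from Python. The URL layout follows the ODL BGPCEP project's openconfig style, but the exact YANG paths, the RIB instance name ("bgp-example"), addresses, and AS numbers vary per release and deployment, so treat all of them as assumptions and verify against your ODL version's documentation:

```python
import requests

ODL = "http://localhost:8181"        # assumed controller address
AUTH = ("admin", "admin")            # default ODL credentials

# Hypothetical neighbor entry: the SDN controller of the *other* cloud.
neighbor = {
    "neighbor": [{
        "neighbor-address": "192.0.2.2",
        "config": {"peer-as": 64512},
    }]
}

# Path modeled on the ODL BGPCEP user guide; verify it for your release.
url = (ODL + "/restconf/config/openconfig-network-instance:network-instances/"
       "network-instance/global-bgp/protocols/protocol/"
       "openconfig-policy-types:BGP/bgp-example/bgp/neighbors/"
       "neighbor/192.0.2.2")

resp = requests.put(url, json=neighbor, auth=AUTH)
resp.raise_for_status()
print("neighbor configured:", resp.status_code)
```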

I encountered such a situation with a master's student three years ago. She was trying to do inter-cloud computing work, where many resources from two or more providers needed to be managed or outsourced.
She was working on OpenNebula.
To answer your specific questions: SDN is a network controller, no more!
It is responsible for installing the forwarding path in the underlying switches so that two hosts can communicate with each other.
NFV is responsible for managing the network functions installed in the network. These can be integrated with SDN or used in a plain cloud computing environment.
As you can see, neither of them by itself will help you interconnect two cloud computing environments; they are only responsible for managing network components.
You could give us more information about the requirements you are trying to implement.
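To make the "installing the path in the underlying switches" point concrete, here is a minimal sketch of pushing one OpenFlow rule through OpenDaylight's RESTCONF API from Python. The node/table/flow URL follows the classic ODL OpenFlow-plugin layout, but the node ID, port number, and addresses are placeholders for a Mininet-style setup:

```python
import requests

ODL = "http://localhost:8181"
AUTH = ("admin", "admin")

# A single flow: match IPv4 traffic to 10.0.0.2/32 and send it out port 2.
flow = {
    "flow": [{
        "id": "1",
        "table_id": 0,
        "priority": 100,
        "match": {
            "ethernet-match": {"ethernet-type": {"type": 2048}},  # IPv4
            "ipv4-destination": "10.0.0.2/32",
        },
        "instructions": {"instruction": [{
            "order": 0,
            "apply-actions": {"action": [{
                "order": 0,
                "output-action": {"output-node-connector": "2"},
            }]},
        }]},
    }]
}

url = (ODL + "/restconf/config/opendaylight-inventory:nodes/"
       "node/openflow:1/table/0/flow/1")
resp = requests.put(url, json=flow, auth=AUTH)
resp.raise_for_status()
```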

Related

What is a BNO (Business Network Operator) in Corda?

I am new to this. Could you please help me understand the concept in simple words?
A Corda network involves a variety of machines and resources that need to be sized, deployed (in the cloud or on premises), architected, tested, managed, and monitored to ensure the stability and communication of the various participants in the network.
From Corda documentation:
The Business Network Operator is responsible for the infrastructure of the business network, they maintain the network map and identity services that allow parties to communicate, and - in many deployments - also operate the notary service.

Is Multihop possible with LoRa?

I have a question regarding how to enable multi-hop in LoRa (that is, how to communicate between two end devices without a LoRaWAN gateway). I have tried doing it using transparent bridging, but it doesn't work.
Although it works with LoRaBlink, the issue there is flooding: as the number of devices increases, channel utilization as well as performance goes down rapidly.
Can someone please suggest if there is any other way to do it or how to do it efficiently through LoRaBlink?
Thanks
If you check the wiki of the RadioHead library, you will find RHRouter and RHMesh under the topic Managers, with the following descriptions:
RHRouter: Multi-hop delivery of RHReliableDatagrams from source node to destination node via 0 or more intermediate nodes, with manual, pre-programmed routing.
RHMesh: Multi-hop delivery of RHReliableDatagrams with automatic route discovery and rediscovery.
There are also raw LoRa libraries for mesh networking. One is implemented on Pycom devices; the library for it is called PyMesh. The technology is based on Thread, by the Thread Group.
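RadioHead itself is a C++/Arduino library, so purely as a language-neutral illustration, here is a toy Python sketch of the distinction the wiki draws: RHRouter-style manual next-hop tables versus RHMesh-style automatic route discovery (the topology and route tables are invented):

```python
from collections import deque

# A made-up radio topology: node -> set of neighbors in range.
LINKS = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

# RHRouter style: manually pre-programmed next hops, per node and destination.
STATIC_ROUTES = {1: {4: 2}, 2: {4: 3}, 3: {4: 4}}

def route_static(src, dst):
    """Follow pre-programmed next-hop entries, as RHRouter would."""
    path = [src]
    while path[-1] != dst:
        path.append(STATIC_ROUTES[path[-1]][dst])
    return path

def discover_route(src, dst):
    """RHMesh-style discovery: broadcast a route request outward
    (modeled here as a BFS) and keep the first path that reaches
    the destination."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nbr in LINKS[path[-1]] - seen:
            seen.add(nbr)
            queue.append(path + [nbr])
    return None  # destination unreachable

print(route_static(1, 4))    # [1, 2, 3, 4]
print(discover_route(1, 4))  # [1, 2, 3, 4]
```

The trade-off mirrors the flooding problem mentioned above: discovery costs broadcast traffic up front but adapts when nodes move, while static routes cost nothing on air but break silently when the topology changes.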

Notifying WCF Subscribers in Cloud Computing

I have been reading a lot about WCF lately and whenever the subject of implementing a subscriber-broadcast mechanism comes up (as in an instant messaging system), the solution invariably is to use a static dictionary to hold your subscriber channels.
An example can be found in the answer to the following question, but it is a common practice.
Making a list of subscribers available across calls to a service
This seems like a very good solution for "traditional" web programming, but how is this handled in the cloud? Specifically, how do we get around the fact that every computer in the grid has different "static" variables?
I know very little about the different Cloud platforms. Are there different solutions for Azure, Amazon Web Services and VMWare?
For broadcasting/push-type notifications, please look at SignalR (http://signalr.net/). Microsoft is making that part of the ASP.NET platform:
http://channel9.msdn.com/Events/Build/2012/3-034
It has some really nice functionality, like gracefully falling back on other mechanisms when advanced transports such as WebSockets are not supported by the server/client. While it is doable, you would have to code all of that yourself in WCF.
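The way SignalR-style scale-out addresses the "different static variables on every machine" problem from the question is a shared backplane: nodes publish notifications to a common channel instead of keeping subscriber lists in local memory. A minimal Python sketch of that pattern, assuming a Redis server on localhost as the backplane (the channel name and delivery helper are made up):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Publisher side: any node can broadcast without knowing which
# node holds a given subscriber's connection.
def broadcast(message: str) -> None:
    r.publish("chat-notifications", message)  # hypothetical channel

# Subscriber side: every node listens on the same channel and
# forwards messages to the clients connected to *it*.
def listen() -> None:
    pubsub = r.pubsub()
    pubsub.subscribe("chat-notifications")
    for event in pubsub.listen():
        if event["type"] == "message":
            deliver_to_local_clients(event["data"].decode())

def deliver_to_local_clients(message: str) -> None:
    # Placeholder for pushing to this node's open connections
    # (e.g., its WebSocket or WCF callback channels).
    print("delivering:", message)
```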
There are pretty big differences between the cloud vendors' platforms. I could post multiple links, but the vendors you mention are changing VERY rapidly in what they offer. Your commitment to a particular cloud vendor is for the long term, so don't think of it simply as "Vendor A has something Vendor B doesn't." There are differences like that, by the way: Amazon, for example, has specialized VMs (high I/O, high memory, high CPU), while Azure has a much better-designed VM layer.
I think of it this way (my opinion): Microsoft is a company that owns .NET, ASP.NET, and server platforms such as SQL Server, Windows Server, SharePoint, Office services, etc. They are very well positioned against someone like Amazon or VMware, which do not have rich product portfolios like this. Plus, Microsoft can price those servers into its cloud, while Amazon/RackSpace/VMware have to pay Microsoft a premium for them. You seem to be talking about WCF/.NET, which would favor the Microsoft Azure platform.
On Azure you can run Linux VMs and code in Python, Java, etc., but it favors the Microsoft stack. Conversely, on AWS you can run .NET/Microsoft workloads, but it favors the Linux/open-source stack. Think of it long term, because in two years both major cloud vendors will be making commitments in those areas. For example, RackSpace is going all-in with their OpenStack platform; they have no choice.
The Windows Azure Service Bus has a couple of options that you can use for broadcasting WCF events.
The Relay service has a netEventRelayBinding, which allows subscribing service instances to receive a one-way service call when a client calls an endpoint. If the clients are disconnected, they will not receive any messages.
http://msdn.microsoft.com/en-us/wazplatformtrainingcourse_eventingonservicebusvs2010_topic2.aspx
Brokered Messaging has topics and subscriptions, where a message can be broadcast to up to 2,000 subscribers. The messages are stored durably, so if a client is disconnected, it will receive all the messages when it reconnects.
http://www.cloudcasts.net/devguide/Default.aspx?id=12044
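As a rough sketch of the topics-and-subscriptions pattern described above, here is what publish and subscribe look like with the current Python Service Bus SDK (azure-servicebus); the connection string, topic, and subscription names are placeholders:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN = "Endpoint=sb://..."  # placeholder connection string
TOPIC, SUBSCRIPTION = "broadcasts", "client-a"

# Publisher: one send fans out to every subscription on the topic.
with ServiceBusClient.from_connection_string(CONN) as client:
    with client.get_topic_sender(topic_name=TOPIC) as sender:
        sender.send_messages(ServiceBusMessage("price update"))

# Subscriber: messages are stored durably, so a client that was
# offline picks them up when it reconnects.
with ServiceBusClient.from_connection_string(CONN) as client:
    with client.get_subscription_receiver(
            topic_name=TOPIC, subscription_name=SUBSCRIPTION) as receiver:
        for msg in receiver.receive_messages(max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)
```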
Regards,
Alan
It might be worth looking into something like RabbitMQ on AppHarbor. It's something I keep meaning to look at but can't find the time. I only mention it because nobody else has ;)

Does DDS have a Broker?

I've been trying to read up on the DDS standard, and OpenSplice in particular and I'm left wondering about the architecture.
Does DDS require that a broker be running, or any particular daemon to manage message exchange and coordination between different parties?
If I just launch a single process publishing data for a topic, and launch another process subscribing for the same topic, is this sufficient? Is there any reason one might need another process running?
Alternatively, does it use UDP multicast to provide some sort of automatic discovery between publishers and subscribers?
In general, I'm trying to contrast this to traditional queue architectures such as MQ Series or EMS.
I'd really appreciate it if anybody could help shed some light on this.
Thanks,
Faheem
DDS doesn't have a central broker; it uses a multicast-based discovery protocol. OpenSplice has a model with a service on each node, but that is an implementation detail; if you check RTI DDS, for example, they don't have that.
The DDS specification is designed so that implementations are not required to have any central daemons, but of course that is an implementation choice.
Implementations like RTI DDS, MilSOFT DDS, and CoreDX DDS have decentralized architectures, which are peer-to-peer and do not need any daemons. (Discovery is done with multicast on LANs.) This design has many advantages, such as fault tolerance, low latency, and good scalability. It also makes the middleware really easy to use, since there is no need to administer daemons: you just run the publishers and subscribers, and the rest is handled automatically by DDS.
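As an illustration of broker-less discovery (a toy sketch, not any vendor's actual discovery protocol), here is how peers can announce themselves over UDP multicast in Python, the same mechanism DDS participant discovery relies on within a LAN; the group address and port are arbitrary:

```python
import socket
import struct

GROUP, PORT = "239.255.0.1", 7400  # arbitrary multicast group and port

def announce(name: str) -> None:
    """Broadcast this participant's presence to the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(name.encode(), (GROUP, PORT))

def listen() -> None:
    """Hear other participants' announcements; no broker involved."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(1024)
        print(f"discovered {data.decode()} at {addr}")
```

Every peer both announces and listens, so publishers and subscribers find each other without any central process, which is the property the answers above are describing.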
OpenSplice DDS used to require daemon services running on each node, but a new feature in v6 means you don't need daemons anymore. (They still support the daemon option.)
OpenDDS is also peer-to-peer, but as far as I know it needs a central daemon running for discovery.
I think it's indeed good to differentiate between a "centralized broker" architecture (where that broker could be or become a single point of failure) and a service/daemon on each machine that manages the traffic flows based on DDS QoS policies such as importance (DDS: transport-priority) and urgency (DDS: latency-budget).
It's interesting to notice that most people think it's absolutely necessary to have a (real-time) process scheduler on a machine to manage the CPU as a critical shared resource (based on time-slicing, priority classes, etc.), yet when it comes to DDS, which is all about distributing information (rather than processing application code), people often find it "strange" that a "network scheduler" would come in handy: one that manages the network interface as a shared resource and schedules traffic based on QoS-policy-driven "packing" and the utilization of multiple traffic-shaped priority lanes.
And this is exactly what OpenSplice does when utilizing its (optional) federated-architecture mode, where multiple applications that run on a single machine can share data using a shared-memory segment, and where there is a networking service (daemon) for each physical network interface that schedules the in- and outbound traffic based on the actual QoS policies with respect to urgency and importance. The fact that such a service has access to all nodal information also lets it combine samples from different topics and different applications into (potentially large) UDP frames, maybe even exploiting some of the available latency budget for this "packing", and thus properly balance efficiency (throughput) against determinism (latency/jitter). End-to-end determinism is furthermore facilitated by scheduling the traffic over pre-configured, traffic-shaped "priority lanes" with dedicated Rx/Tx threads and DiffServ settings.
So having a network-scheduling daemon per node certainly has some advantages, also because it decouples the network from faulty applications that could be either "over-productive" (blowing up the system) or "under-reactive" (causing system-wide retransmissions). That aspect is often forgotten when arguing that a network-scheduling daemon could be viewed as a "single point of failure"; the other view is that without any arbitration, any standalone application that talks directly to the wire is a potential system threat when it starts misbehaving, as described above, for any reason.
Anyhow, it's always a controversial discussion; that's why OpenSplice DDS (as of v6) supports both deployment modes: federated and non-federated (also called "standalone" or "single process").
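The "packing" idea above is essentially deadline-driven batching: hold samples back to fill a frame, but never longer than the latency budget allows. A toy Python sketch of that trade-off (the thresholds are invented, and this is not OpenSplice's actual algorithm):

```python
import time

class Packer:
    """Buffer samples and flush them as one frame when either the
    frame is full (throughput) or the oldest sample's latency budget
    is about to expire (determinism)."""

    def __init__(self, max_bytes=1400, latency_budget=0.005):
        self.max_bytes = max_bytes            # roughly one UDP frame
        self.latency_budget = latency_budget  # seconds
        self.buffer, self.oldest = [], None

    def submit(self, sample: bytes) -> None:
        if self.oldest is None:
            self.oldest = time.monotonic()
        self.buffer.append(sample)
        if sum(map(len, self.buffer)) >= self.max_bytes:
            self.flush()

    def poll(self) -> None:
        # Called periodically: flush if the budget is nearly spent.
        if self.oldest is not None and \
                time.monotonic() - self.oldest >= self.latency_budget:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            send_udp_frame(b"".join(self.buffer))  # placeholder transport
        self.buffer, self.oldest = [], None

def send_udp_frame(frame: bytes) -> None:
    print(f"sending {len(frame)}-byte frame")
```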
Hope this is somewhat helpful.

Skype protocol and supernodes

I have a question about the skype protocol.
Supposedly, according to wiki, the supernodes in Skype are used in UDP hole punching. The supernodes are nodes without firewalls/NATs.
My question is: how is this reliable? Isn't the vast majority of internet users behind NAT?
And if I were to create a P2P application using this technique, what happens if there are no peers without firewalls? I don't understand how you can launch an application that relies on there eventually being some peers without NAT.
Thanks
I can't comment on Skype specifically, but I have some experience with this (http://wiki.squeak.org/squeak/5629). We called our supernodes "big friendly giants" or BFGs :).
The idea behind supernodes is that while you hope they pop up in the network, giving new users more options for NAT hole punching, you, as the P2P network operator, provide a minimal set yourself (it could be just one or two machines; they are only needed for the initial hole punching, since real traffic will get routed directly anyway). As far as I'm aware, Skype does that as well: they run a minimal set of supernodes themselves.
When Skype had issues earlier this year, a lot of people tried to reconnect and the supernodes got overloaded, resulting in a domino effect. Skype added supernodes, but the number of people trying to reconnect at that time was so massive that it took quite a while before the network rebuilt itself. It's quite funny (we saw this with the above project as well) that a P2P network can be extremely resilient until it gets pushed over some edge, and then the whole thing crumbles.
[disclaimer: I work for eBay, former owner of Skype, but this is all my personal opinion and based on public information]
Read the papers on libjingle, which discuss services like STUN. When both parties are behind NAT, an external service is often required to relay traffic or to assist in punching a hole open on one side or the other.
http://code.google.com/apis/talk/libjingle/important_concepts.html
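To make the hole-punching mechanics concrete, here is a bare-bones Python sketch of the technique the supernode/STUN discussion refers to: a publicly reachable rendezvous host tells two NATed clients each other's public endpoints, and both then send outbound packets so their NATs open mappings. All addresses are placeholders, and real NATs (symmetric ones especially) complicate this considerably:

```python
import socket

RENDEZVOUS = ("203.0.113.10", 9999)  # placeholder public server address

def rendezvous_server(port: int = 9999) -> None:
    """Runs on a public host: waits for two clients, then tells each
    one the other's public (NAT-mapped) address and port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    _, addr_a = sock.recvfrom(64)   # client A registers
    _, addr_b = sock.recvfrom(64)   # client B registers
    sock.sendto(f"{addr_b[0]}:{addr_b[1]}".encode(), addr_a)
    sock.sendto(f"{addr_a[0]}:{addr_a[1]}".encode(), addr_b)

def client() -> None:
    """Registers with the server, learns the peer's public endpoint,
    then punches toward it: each outbound packet opens or refreshes a
    mapping in this client's NAT that the peer's packets can traverse."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"register", RENDEZVOUS)
    data, _ = sock.recvfrom(64)
    host, port = data.decode().split(":")
    peer = (host, int(port))
    for _ in range(5):
        sock.sendto(b"punch", peer)
    data, addr = sock.recvfrom(64)   # peer's packets now get through
    print("received", data, "from", addr)
```

The rendezvous host plays the supernode role from the question: it only brokers the introduction, and once the mappings exist, traffic flows peer to peer.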