I have a question about how to enable multihop in LoRa, that is, how to communicate between two end devices without a LoRaWAN gateway. I have tried doing it with transparent bridging, but it doesn't work.
It does work with LoRaBlink, but the issue there is flooding: as the number of devices increases, the channel becomes congested and performance degrades rapidly.
Can someone suggest another way to do this, or a way to do it efficiently with LoRaBlink?
Thanks
If you check the wiki of the RadioHead library, you will find RHRouter and RHMesh under the Managers topic, with the following descriptions:
RHRouter: Multi-hop delivery of RHReliableDatagrams from source node to destination node via 0 or more intermediate nodes, with manual, pre-programmed routing.
RHMesh: Multi-hop delivery of RHReliableDatagrams with automatic route discovery and rediscovery.
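RHMesh in particular fits the use case in the question: end devices relay for each other, and routes are discovered on demand instead of flooding every frame the way LoRaBlink does. Here is a minimal sketch of a node (assuming an SX127x-class radio handled by the RH_RF95 driver; the addresses and timing are placeholders):

```cpp
#include <RHMesh.h>
#include <RH_RF95.h>

#define NODE_ADDRESS 1   // unique address of this node
#define DEST_ADDRESS 3   // final recipient, possibly several hops away

RH_RF95 rf95;                        // SX127x LoRa driver (default SPI pins)
RHMesh manager(rf95, NODE_ADDRESS);  // mesh manager on top of the driver

void setup() {
    if (!manager.init()) {
        // radio initialization failed; check wiring and frequency
    }
}

void loop() {
    uint8_t msg[] = "hello";
    // sendtoWait discovers a route to DEST_ADDRESS if none is cached,
    // then delivers the message hop by hop with per-hop acknowledgements,
    // so only the nodes on the route transmit (no network-wide flooding).
    if (manager.sendtoWait(msg, sizeof(msg), DEST_ADDRESS) == RH_ROUTER_ERROR_NONE) {
        // message was acknowledged along the route
    }
    delay(5000);
}
```

Every node that should relay traffic runs the same sketch with its own NODE_ADDRESS; RHMesh handles the forwarding on the intermediate nodes.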
There are also raw-LoRa libraries for mesh networking. One is implemented on Pycom devices and is called PyMesh; the technology is based on Thread, by the Thread Group.
In a CANopen network, all devices (clients, formerly called "slaves") communicate with a central controller (master). Hence, by default, no slave listens to the process data objects (PDOs), and therefore to the CAN identifiers, of another slave. Using PDO linking, PDOs can be exchanged directly without a master; for this, the CAN identifiers have to be adjusted accordingly.
Although there are many sources about PDO linking on the internet, I did not find any concrete examples (e.g., a schematic linking of two client devices). Can you recommend any resources on PDO linking (books, articles, websites, ...)?
Sources: Beckhoff Information System
The topic of PDO linking is discussed on a couple of websites, but only at a very abstract level. Useful resources for understanding the theory are:
CANopen Solutions: PDO services
Micro Control: Identifier Usage in CANopen Networks
embedded-communication: CANopen PDO Linking (German)
Beckhoff Information System: Process Data Objects (PDO)
Emotas: Process DataLinker
Vogel: Optimierung der PDO-Kommunikation in CANopen-Netzwerken (German)
I would be grateful for any recommendations of hands-on examples!
Just looking at your source: you need to adapt the TPDO of device 1 and the RPDO of device 2 accordingly.
Make sure the COB-ID of the TPDO is the same as the COB-ID of the RPDO.
Also make sure that you map the data properly.
In that case, device 2 will be able to receive data directly from device 1, as sketched below.
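To make that concrete, here is a hedged sketch of the SDO writes that would link TPDO1 of device 1 (node ID 1) to RPDO1 of device 2 (node ID 2). The object dictionary indices (0x1800/0x1400 for the communication parameters, 0x1A00/0x1600 for the mapping) are the standard CiA 301 entries; `can_send` is a hypothetical helper for your CAN driver:

```cpp
#include <cstdint>

// Hypothetical helper: transmits one 8-byte CAN frame with the given identifier.
void can_send(uint16_t cobId, const uint8_t (&data)[8]);

// Expedited SDO download (write) of a 32-bit value to index:subindex on the
// given node; the client->server SDO identifier is 0x600 + node ID (CiA 301).
void sdoWriteU32(uint8_t node, uint16_t index, uint8_t sub, uint32_t value) {
    uint8_t frame[8] = {
        0x23,                                        // expedited write, 4 data bytes
        uint8_t(index & 0xFF), uint8_t(index >> 8),  // index, little-endian
        sub,
        uint8_t(value), uint8_t(value >> 8),         // data, little-endian
        uint8_t(value >> 16), uint8_t(value >> 24),
    };
    can_send(0x600 + node, frame);
}

int main() {
    const uint32_t linkedCobId = 0x181;  // TPDO1 of node 1 (0x180 + node ID)

    // Device 1: TPDO1 transmits on linkedCobId (communication parameter 0x1800:01).
    // Real devices usually require invalidating the PDO first (set bit 31 of the
    // COB-ID), writing the new value, then clearing bit 31 again.
    sdoWriteU32(1, 0x1800, 0x01, linkedCobId);

    // Device 2: RPDO1 listens on the SAME identifier (0x1400:01), so it now
    // receives device 1's PDO directly, with no master involved.
    sdoWriteU32(2, 0x1400, 0x01, linkedCobId);

    // The mapping entries (0x1A00 on device 1, 0x1600 on device 2) must then
    // describe the same data layout; they are written the same way via SDO.
    return 0;
}
```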
I am new to WebRTC technology.
I want to create a video chat / video conferencing application with one broadcaster and many viewers (more than 1000).
I have read a lot of documentation:
https://medium.com/linagora-engineering/scalability-in-video-conferencing-part-1-276f52b4acac
https://webrtcglossary.com/sfu/
But I still don't know which is the best solution in my case: a Selective Forwarding Unit (SFU) or a Multipoint Control Unit (MCU).
Can you help me understand?
I think the best approach is an MCU, but I am not sure.
Second question:
Can you suggest some sources and links that could help me set up such an architecture? Currently my project works perfectly peer-to-peer (mesh), but that is not the right solution at this scale. I have absolutely no idea how to set this up.
Thank you so much
It is possible to implement this using an SFU. The more peers connect, the more processing power you need to handle them; you can add capacity by using more threads and/or by forwarding streams to other machines.
With mediasoup you have control over this. It gives you routers that peers connect to in order to receive the stream. A router runs on a worker, which can serve only a limited number of receiving peers (depending on CPU capacity). To allow more peers, you can forward the stream to other routers, which expands the total capacity; a conceptual sketch follows.
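mediasoup itself is a Node.js library, so the following is only a language-agnostic sketch of the forwarding principle (all names are hypothetical): an SFU never decodes media, it receives each RTP packet once from the broadcaster and fans the same bytes out to every subscriber, whereas an MCU would decode, mix, and re-encode, which costs far more CPU per viewer.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical transport handle for one connected viewer.
struct PeerTransport {
    void sendRtp(const uint8_t* data, std::size_t len) {
        // write the packet to this peer's socket (omitted)
        (void)data; (void)len;
    }
};

// Core SFU behavior: forward one incoming RTP packet to every subscriber.
// No decoding, mixing, or re-encoding happens here; that is what keeps the
// per-viewer cost of an SFU low, whereas an MCU would decode all inputs,
// compose one mixed stream, and re-encode it for everyone.
void forwardPacket(const uint8_t* rtpPacket, std::size_t len,
                   std::vector<PeerTransport>& subscribers) {
    for (auto& peer : subscribers) {
        peer.sendRtp(rtpPacket, len);  // same bytes fanned out N times
    }
}
```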
Useful links:
https://mediasoup.org/documentation/v3/scalability/#one-to-many-broadcasting
https://mediasoup.org/documentation/v3/mediasoup/design/#architecture
https://mediasoup.discourse.group/t/scalability-in-mediasoup-example/793/2?u=dirvann
I am new to NFV and SDN technologies. I have downloaded OpenDaylight and CloudStack, and I have a Mininet network as the underlying physical topology. I want to set up a multi-cloud that contains CloudStack and another IaaS technology, and finally manage the interconnection of resources created on these clouds. I have already integrated OpenDaylight with CloudStack, but I still don't have a clear picture of how to start.
My questions are:
Which technology can guide me in realizing a multi-cloud, NFV or SDN? Also, is OpenDaylight the solution for this, or are there other frameworks or projects that could help me better?
I would be grateful for any information that could get me started on this project.
It depends on what you want to achieve.
OpenDaylight already supports inter-domain routing through BGP, so having two OpenDaylight instances talk to each other through BGP will let you move L3 (IP-based) traffic back and forth, which is sufficient to interconnect L3-as-a-Service tenants between the two cloud systems.
BGP (as it exists today in ODL) will not cut it for L2-as-a-Service or complex multi-cloud deployments. To achieve connectivity across cloud domains for L2aaS / complex tenants, you will need:
Control plane: an extension to the east-west signaling between the SDN controllers (SDNc) of each cloud to handle L2aaS service requirements (OpenDaylight supports multiple options here).
Data plane:
A cloud fabric that can carry L2aaS (you don't want to lose the L2aaS identifiers when you move from one domain to the other).
An anchor node (e.g., a DC gateway, DC-GW) that lets the SDNc configure the data-plane L2 fabric cross-connects (through interfaces such as OVSDB, ML2, or others).
The above two bullets are not trivial work; don't expect them to be done without some customization. Moreover, the DC-GW vendor's compatibility with ODL (ML2 plugin capabilities) will determine a lot of what can and can't be done.
Final point: several companies are building their SDN go-to-market around the problem you are trying to solve (Cisco, Arista, Nokia, Ericsson, etc.). Keep us posted on the progress you make on that front; you may end up laying the foundation for a new framework in the industry.
I encountered such a situation with a master's student three years ago. She was trying to do inter-cloud computing work, where many resources from two or more providers need to be managed or outsourced.
She was working with OpenNebula.
To answer your specific questions: SDN is a network controller, no more. It is responsible for installing paths in the underlying switches so that two hosts can communicate with each other.
NFV is responsible for managing the network functions installed in the network. These could be integrated with SDN or deployed in a plain cloud computing environment.
As you can see, neither of them by itself interconnects two cloud computing environments; they are only responsible for managing network components.
You could give us more information about the requirements you are trying to implement.
In my current scenario, I'm using the NETLIGHT pin (pin 64) of the SIM800 module with my PIC microcontroller to know whether the module is registered or not.
This is how I built the circuit: I just removed the LED from VBAT, then connected the collector of the NPN transistor to a PIC input pin.
I want to know: is there an easy way, using AT commands, to find the network registration status of the SIM800?
Unfortunately, it seems this is not really possible (or rather, this state can only be detected in one way).
I use the SIM800 and let it run for hours, and I have seen many cases where the network was lost while AT+CREG? kept reporting that everything was OK.
Also, even with the network down, the SIM800 keeps reporting the operator name and the signal strength.
The only way I've found is to monitor the serial port: when the SIM800 loses the network, it sends two messages:
+PDP: DEACT and
+SAPBR 1: DEACT
I suggest you have a look at the document "SIM800 Series AT Command Manual", especially chapter 19.3, "Summary of Unsolicited Result Codes". You'll find +PDP and other interesting codes (such as the under-voltage warning and DNS failure) and see that some of these messages are not tied to any AT command.
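A minimal sketch of that monitoring approach (the UART helpers are hypothetical placeholders for your PIC driver; the +CREG responses and the DEACT URCs are as documented in the SIM800 AT command manual):

```cpp
#include <cstring>

// Hypothetical UART helpers; replace with your own driver.
// uartReadLine() returns the next complete line from the SIM800,
// or nullptr when nothing is pending.
const char* uartReadLine();
void uartSendLine(const char* cmd);  // sends a command followed by CR

bool networkUp = false;

void pollModem() {
    uartSendLine("AT+CREG?");  // reply "+CREG: 0,1" (home) or "+CREG: 0,5" (roaming) = registered
    while (const char* line = uartReadLine()) {
        if (std::strstr(line, "+CREG:")) {
            const char* comma = std::strchr(line, ',');
            networkUp = comma && (comma[1] == '1' || comma[1] == '5');
        }
        // Unsolicited result codes: these arrive without any command and,
        // as described above, are the reliable sign that the link dropped,
        // even while +CREG? still claims everything is fine.
        if (std::strstr(line, "+PDP: DEACT") || std::strstr(line, "+SAPBR 1: DEACT")) {
            networkUp = false;
        }
    }
}
```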
I've been trying to read up on the DDS standard, and OpenSplice in particular, and I'm left wondering about the architecture.
Does DDS require that a broker be running, or any particular daemon to manage message exchange and coordination between different parties?
If I just launch a single process publishing data for a topic, and launch another process subscribing for the same topic, is this sufficient? Is there any reason one might need another process running?
Alternatively, does it use UDP multicast to provide some sort of automated discovery between publishers and subscribers?
In general, I'm trying to contrast this to traditional queue architectures such as MQ Series or EMS.
I'd really appreciate it if anybody could help shed some light on this.
Thanks,
Faheem
DDS doesn't have a central broker; it uses a multicast-based discovery protocol. OpenSplice has a model with a service on each node, but that is an implementation detail: if you look at RTI DDS, for example, it doesn't have that.
The DDS specification is designed so that implementations are not required to have any central daemons; whether to use one is an implementation choice.
Implementations like RTI DDS, MilSOFT DDS, and CoreDX DDS have decentralized, peer-to-peer architectures and do not need any daemons (discovery is done with multicast on LANs). This design has many advantages, such as fault tolerance, low latency, and good scalability. It also makes the middleware really easy to use, since there are no daemons to administer: you just run the publishers and subscribers, and the rest is handled automatically by DDS.
OpenSplice DDS used to require daemon services running on each node, but v6 added a feature so that you no longer need daemons. (The daemon option is still supported.)
OpenDDS is also peer-to-peer, but it needs a central daemon running for discovery as far as I know.
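To illustrate how little is needed in the decentralized case, here is a minimal publisher sketch in the ISO C++ DDS API (the `SensorData` type and its `value` accessor stand in for whatever your IDL generates; vendor-specific setup is omitted). You start this and a matching subscriber as two ordinary processes, and discovery pairs them up with no broker in between:

```cpp
#include <dds/dds.hpp>      // ISO C++ DDS PSM header (vendor-provided)
#include "SensorData.hpp"   // hypothetical type generated from an IDL file

int main() {
    // Joining a domain triggers the peer-to-peer discovery protocol;
    // no broker or central daemon is contacted.
    dds::domain::DomainParticipant participant(0);

    dds::topic::Topic<SensorData> topic(participant, "SensorTopic");
    dds::pub::Publisher publisher(participant);
    dds::pub::DataWriter<SensorData> writer(publisher, topic);

    SensorData sample;
    sample.value(42.0);
    writer.write(sample);   // delivered to any discovered subscribers
    return 0;
}
```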
I think it's indeed good to differentiate between a 'centralized broker' architecture (where that broker could be or become a single point of failure) and a service/daemon on each machine that manages the traffic flows based on DDS QoS policies such as importance (DDS transport_priority) and urgency (DDS latency_budget).
It's interesting that most people consider it absolutely necessary to have a (real-time) process scheduler on a machine to manage the CPU as a critical/shared resource (based on time-slicing, priority classes, etc.), yet when it comes to DDS, which is all about distributing information (rather than processing application code), people often find it 'strange' that a 'network scheduler' would come in handy to manage the network interface as a shared resource and schedule traffic (based on QoS-policy-driven 'packing' and the use of multiple traffic-shaped priority lanes).
And this is exactly what OpenSplice does in its (optional) federated-architecture mode, where multiple applications running on a single machine share data via a shared-memory segment and where there is a networking service (daemon) per physical network interface that schedules in- and outbound traffic based on the actual QoS policies with respect to urgency and importance. The fact that such a service has access to all of the node's information also lets it combine samples from different topics and different applications into (potentially large) UDP frames, perhaps even exploiting some of the available latency budget for this 'packing', and thus balance properly between efficiency (throughput) and determinism (latency/jitter). End-to-end determinism is further facilitated by scheduling the traffic over pre-configured, traffic-shaped 'priority lanes' with private Rx/Tx threads and DiffServ settings.
So having a network-scheduling daemon per node certainly has advantages, not least because it decouples the network from faulty applications that could be either 'over-productive' (blowing up the system) or 'under-reactive' (causing system-wide retransmissions). This aspect is often forgotten when people argue that a network-scheduling daemon is a 'single point of failure'; the other view is that, without any arbitration, any standalone application that talks directly to the wire is a potential system threat once it starts misbehaving as described above, for any reason.
Anyhow, it's always a controversial discussion, which is why OpenSplice DDS (as of v6) supports both deployment modes: federated and non-federated (also called 'standalone' or 'single process').
Hope this is somewhat helpful.