PDO Linking in CANopen

In a CANopen network, all devices (clients, formerly called "slaves") communicate with a central controller (master). By default, therefore, no client listens to the process data objects (PDOs), and hence to the CAN identifiers, of another client. With PDO Linking, PDOs can be exchanged directly between clients without a master; to make this work, the CAN identifiers have to be adjusted accordingly.
Even though there are many sources on PDO Linking on the internet, I did not find any concrete examples (e.g. a schematic linking of two client devices). Can you recommend any resources on PDO Linking (books, articles, websites, ...)?
Sources: Beckhoff Information System

PDO linking is discussed on a number of websites, but only at a very abstract level. Useful resources for understanding the theory are:
CANopen Solutions: PDO services
Micro Control: Identifier Usage in CANopen Networks
embedded-communication: CANopen PDO Linking (German)
Beckhoff Information System: Process Data Objects (PDO)
Emotas: Process DataLinker
Vogel: Optimierung der PDO-Kommunikation in CANopen-Netzwerken (German)
I would be grateful for any recommendations of hands-on examples!

Just looking at your source:
you need to adapt the TPDO of Device 1 and the RPDO of Device 2 accordingly.
Make sure that the COB-ID of the TPDO is the same as the COB-ID of the RPDO.
Also make sure that the data is mapped identically on both sides.
Device 2 will then be able to receive data directly from Device 1.
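For concreteness, here is a minimal sketch of that procedure using the python-canopen package, assuming both devices allow their PDO communication and mapping parameters to be written via SDO. The node IDs, EDS files, the COB-ID 0x181 and the mapped object 0x6000:01 are placeholders, not values from your devices:

    import canopen

    network = canopen.Network()
    network.connect(channel="can0", bustype="socketcan")
    node1 = network.add_node(1, "device1.eds")  # producer (TPDO)
    node2 = network.add_node(2, "device2.eds")  # consumer (RPDO)

    COB_ID = 0x181  # the shared CAN identifier that links the two PDOs

    # Disable both PDOs while reconfiguring (bit 31 of the COB-ID entry).
    node1.sdo[0x1800][1].raw = 0x80000000 | COB_ID  # TPDO1 communication parameter
    node2.sdo[0x1400][1].raw = 0x80000000 | COB_ID  # RPDO1 communication parameter

    # Map the same object layout on both sides: index 0x6000, sub 1, 8 bits.
    node1.sdo[0x1A00][0].raw = 0           # clear TPDO1 mapping
    node1.sdo[0x1A00][1].raw = 0x60000108  # (index << 16) | (sub << 8) | bits
    node1.sdo[0x1A00][0].raw = 1
    node2.sdo[0x1600][0].raw = 0           # clear RPDO1 mapping
    node2.sdo[0x1600][1].raw = 0x60000108
    node2.sdo[0x1600][0].raw = 1

    # Re-enable both PDOs: Device 2 now receives Device 1's TPDO directly.
    node1.sdo[0x1800][1].raw = COB_ID
    node2.sdo[0x1400][1].raw = COB_ID

Note that a configuration tool (here, the PC running the script) is still needed once to set the link up; after that, the process data flows from client to client without a master at runtime.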


ApiRTC - Media always sent to the cloud, even with meshOnlyEnabled

As a follow-up to my previous post (ApiRTC - Behaviour with meshModeEnabled and meshOnlyEnabled)
Hello,
You say that an SFU is necessary for any activity that requires centralizing all the streams (recording, bandwidth optimization, ...). However, in MESH mode, the files/media exchanged still manage to be recorded on the Apizee media server even though I don't go through the SFU. How is this possible?
Can this behaviour be disabled so that the exchanged documents never leave the MESH stream?
I have not found anything about this in the documentation.
By the way, the documentation often mentions the term "MCU"; does this mean that ApiRTC also uses an MCU server in addition to the SFU?
Thanks in advance.
Can this behaviour be disabled so that the exchanged documents never leave the MESH stream?
Concerning a recording of all the streams in the conversation (via the startRecording method of the Conversation object, see https://apirtc.github.io/references/apirtc-js/Conversation.html#startRecording__anchor):
--> The composition of multiple streams into one video file is done server-side by the SFU (v4.4.8).
Concerning the files (sent through the conversation.pushData method):
--> We manage the file transfer by uploading the file to a storage service and sharing the URI with all parties of the conversation. P2P transfer is not available (v4.4.8).
To exchange data in a P2P mode, you can use the Conversation.sendData method to send raw data across all participants.
Regarding your question about the MCU: no, ApiRTC doesn't use any MCU server to date (v4.4.8). The documentation refers to the MCU only for a very specific on-premise deployment, which is not supported for ApiRTC users.
Cheers,
Romain

Is Multihop possible with LoRa?

I have a question about how to enable multihop in LoRa (that is, communicating between two end devices without the LoRaWAN gateway). I have tried doing it using transparent bridging, but it won't work.
It does work with LoRaBlink, but the issue there is flooding: as the number of devices increases, the channel becomes congested and performance drops rapidly.
Can someone please suggest another way to do it, or how to do it efficiently with LoRaBlink?
Thanks
If you check the wiki of the RadioHead library, you will find RHRouter and RHMesh under the topic "Managers", with the following descriptions:
RHRouter: multi-hop delivery of RHReliableDatagrams from source node to destination node via 0 or more intermediate nodes, with manual, pre-programmed routing.
RHMesh: multi-hop delivery of RHReliableDatagrams with automatic route discovery and rediscovery.
There are also raw-LoRa libraries for mesh networking. One is implemented on the Pycom devices, where the library is called PyMesh; the technology is based on Thread, by the Thread Group.
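If you stay with a flooding approach like LoRaBlink, the usual way to keep channel load in check is a hop limit (TTL) plus duplicate suppression. Here is a minimal sketch; the radio interface (send_frame) and the 4-byte header layout are my own placeholders, to be backed by whatever raw-LoRa driver you use:

    import struct

    MAX_TTL = 3   # hop limit: bounds how far a frame is re-broadcast
    seen = set()  # (origin, seq) pairs already handled, to suppress duplicates

    def originate(my_id, seq, payload, send_frame):
        seen.add((my_id, seq))
        send_frame(struct.pack("!BHB", my_id, seq, MAX_TTL) + payload)

    def handle_frame(frame, my_id, send_frame):
        origin, seq, ttl = struct.unpack("!BHB", frame[:4])
        payload = frame[4:]
        if origin == my_id or (origin, seq) in seen:
            return  # our own frame, or one already forwarded: drop it
        seen.add((origin, seq))
        print("received:", payload)  # hand the payload to the application
        if ttl > 0:
            # Re-broadcast with a decremented TTL, so each message dies
            # after MAX_TTL hops instead of echoing around the network.
            send_frame(struct.pack("!BHB", origin, seq, ttl - 1) + payload)

For anything beyond a handful of nodes, though, the routed managers mentioned above (RHMesh, PyMesh) scale better than flooding.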

IoT Edge best practices

We have around 9000 devices in the field.
These devices are deployed in groups of 1-100 on customers' premises.
The devices are not capable of azure-iot-sdk integration.
The devices have a webservice API.
The devices should appear as first-class devices in Azure.
We like the IoT Edge module provisioning feature.
We want to evaluate whether modules could gather data from the devices and send it to IoT Hub for further processing.
We found this feature overview of IoT Edge: https://learn.microsoft.com/de-de/azure/iot-edge/iot-edge-as-gateway
The transparent and protocol translation patterns are out of scope due to the facts above. The identity translation pattern seems to fit.
We want a 1-to-1 relationship between module and real device.
We therefore assume the following POC, in the hope of clarification and best practices:
we implement an IoT Edge module (azure-iot-sdk-java)
we open a module connection to IoT Edge and subscribe to desired properties
the module identity receives as desired properties the IP of the real device and the Azure device identity connection string
we open a device connection to IoT Edge by adding GatewayHostName to the device connection string, as described here: https://learn.microsoft.com/de-de/azure/iot-edge/iot-edge-as-gateway
we request data from the real device and send it via the Azure device identity
This somehow mixes up two patterns and seems kind of odd to us.
Can you point out best practices and risks of this approach?
Yes, I agree that the identity translation pattern could fit your scenario.
There are three patterns for using an IoT Edge device as a gateway: transparent, protocol translation, and identity translation; the link you posted gives a fuller introduction to all three.
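To make the flow concrete, here is a sketch of the identity translation POC you describe, written with the Python device SDK for brevity (azure-iot-sdk-java exposes the same concepts: module client, device client, twin desired properties). The desired property names (deviceIp, deviceConnectionString), the gateway hostname and the device API path are assumptions for illustration:

    import requests
    from azure.iot.device import IoTHubModuleClient, IoTHubDeviceClient, Message

    # 1. The module connects to the local edgeHub and reads its twin.
    module_client = IoTHubModuleClient.create_from_edge_environment()
    module_client.connect()
    desired = module_client.get_twin()["desired"]

    # 2. Desired properties carry the real device's IP and its Azure identity.
    device_ip = desired["deviceIp"]
    conn_str = desired["deviceConnectionString"]

    # 3. Open the device connection through the gateway by appending
    #    GatewayHostName; the TLS connection must trust the IoT Edge root CA.
    device_client = IoTHubDeviceClient.create_from_connection_string(
        conn_str + ";GatewayHostName=my-edge-device.local")
    device_client.connect()

    # 4. Poll the real device's webservice API and forward the reading
    #    upstream under the device's own (first-class) identity.
    reading = requests.get(f"http://{device_ip}/api/telemetry").text
    device_client.send_message(Message(reading))

One risk to weigh: with this design the device connection strings travel through module twins, so anyone with access to the twins can read them; handing the module only what it strictly needs, and provisioning the device identities through the registry or DPS, reduces that exposure.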

NFV/SDN in cloudstack

I am new to NFV and SDN technologies. I have downloaded OpenDaylight and CloudStack, and I have a Mininet network as the underlying physical topology. I want to set up a multi-cloud that must contain CloudStack and another IaaS technology, and finally manage the interconnection of resources created on these clouds. I have already integrated OpenDaylight with CloudStack but still don't have a clear picture of how to start.
My points of confusion are:
Which technology can guide me in realizing a multi-cloud, NFV or SDN? And is OpenDaylight the solution for this, or are there other frameworks or projects that can help me better?
I shall be grateful for any information that could get me started on this project.
It depends on what you want to achieve.
OpenDaylight already supports inter-domain routing through BGP, so having two OpenDaylight instances talking to each other through BGP will allow you to get L3 (IP-based) traffic back and forth, which is sufficient to interconnect L3-as-a-Service tenants between the two cloud systems.
BGP (as it is today in ODL) will not cut it for L2-as-a-Service or complex multi-cloud deployments. To achieve connectivity across cloud domains for L2aaS / complex tenants, you will need:
Control plane: an extension to the East-West signaling between the SDN controllers (SDNc) of each cloud to handle L2aaS service requirements (OpenDaylight supports multiple options here).
Data plane:
A cloud fabric that can carry L2aaS (you don't want to lose the L2aaS identifiers when you move from one domain to the other).
An anchor node (e.g. a DC gateway) through which the SDNc configures the data-plane L2 fabric cross-connects (via interfaces such as OVSDB, ML2 or others).
The above two bullets are not trivial work, and don't expect them to be done without some customization. Not to mention that DC-GW vendor compatibility with ODL (ML2 plugin capabilities) will define a lot of what can and cannot be done.
A final point: there are a couple of companies building their SDN go-to-market around the problem you are trying to fix (Cisco, Arista, Nokia, Ericsson, etc.). Keep us posted on the progress you make on this front; you may end up laying the foundation for a new framework in the industry.
I encountered such a situation with a master's student three years ago. She was trying to do inter-cloud computing work, where many resources from two or more providers needed to be managed or outsourced.
She was working on OpenNebula.
To answer your specific questions: SDN is a network controller, no more!
It is responsible for installing the paths in the underlying switches so that two hosts can communicate with each other.
NFV is responsible for managing the network functions installed in the network. These can be integrated with SDN or used in a plain cloud computing environment.
As you can see, neither of them by itself will help you interconnect two cloud computing environments; they are only responsible for managing network components.
You could provide us with more information about the requirements you are trying to implement.

Interoperability in DDS

I am new to the DDS domain and need help with the following.
How do you publish common topics between two vendors to achieve interoperability in DDS?
The scenario is:
Suppose there are two vendor products, V1 and V2. V1 has a publisher which publishes on topic T1. V2 wants to subscribe to this topic. How will the subscriber (V2) know that there exists a topic T1?
I have a similar doubt at the domain level: how will a subscriber know which domain it has to participate in?
I am using OpenDDS.
Thanks
Interoperability between vendors is possible, and regularly tested/demonstrated by the main vendors.
You will need to configure your DDS implementation to use RTPS (I think RTPS 2 currently), rather than any proprietary transport that vendors may use. This might be enabled by default.
In terms of which domain to participate in: you programmatically create a domain participant in a particular domain (which domain it connects to might be controlled by a config file), and all further entities (publishers, subscribers, etc.) that you create then belong to that domain participant and therefore operate in that domain.
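As an illustration of that participant/domain relationship, here is a minimal publisher using Eclipse Cyclone DDS's Python binding (the structure is the same in OpenDDS's C++ API; the topic and type here are made up):

    from dataclasses import dataclass
    from cyclonedds.domain import DomainParticipant
    from cyclonedds.idl import IdlStruct
    from cyclonedds.pub import Publisher, DataWriter
    from cyclonedds.topic import Topic

    @dataclass
    class Sensor(IdlStruct, typename="Sensor"):
        id: int
        value: float

    participant = DomainParticipant(0)        # every entity below lives in domain 0
    topic = Topic(participant, "T1", Sensor)  # name and type must match the subscriber
    writer = DataWriter(Publisher(participant), topic)
    writer.write(Sensor(id=1, value=21.5))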
To build on rcs's answer a bit: the actual amount of work you have to do can depend on the DDS implementations (OpenDDS, RTI, PrismTech, ...) because they'll have different defaults. If you use the same implementation on both ends, your configuration becomes a lot simpler, since defaults should line up for things like domain and RTPS.
You will need to make sure the following match:
Domain ID
Domain partition
Transport (I recommend RTPS; FWIW, the version difference between 2.1 and 2.2 hasn't mattered in my experience)
TCP or UDP
Discovery port and data port; this will matter more or less depending on which implementations you use and whether or not you're using the same one on both ends of the connection (if you are, they'll have the same defaults)
Make sure the topic one end publishes matches the topic the other end subscribes to; this applies to both the Topic and the Type (see more here)
Serialization of the data
Discovery (unicast vs. multicast; make sure whatever setup you choose is valid, e.g. that both devices are in the same multicast group)
QoS settings will need to line up, though I think defaults will likely work (read more here)
Get the Shapes demo working between the machines you're working on first; this does some basic sanity checking to confirm that communication is possible with the given configuration and network setup. Every vendor/implementation that I've seen has a Shapes demo to run; for example, here is RTI's.
That's all I can think of right now; hope that helps. I have found DDS documentation to be really good, especially once you know when you can (and when you can't) apply one vendor's documentation to your implementation (e.g. whether an answer found in RTI's docs or forum works for your OpenDDS application). Often the solutions are similar, but you'll find that RTI supports the most, and RTI and PrismTech have some of the best documentation.
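Since most of the list above boils down to "the two ends must agree", it can help to keep both configurations in one comparable structure. A toy illustration in Python (the field names are mine, not any vendor's API; the real settings live in each implementation's QoS/config files):

    from dataclasses import dataclass, fields

    @dataclass
    class EndpointConfig:
        domain_id: int
        partition: str
        transport: str   # e.g. "rtps" vs. a proprietary transport
        topic_name: str
        type_name: str
        discovery: str   # e.g. "multicast:239.255.0.1"

    def mismatches(a, b):
        """Names of the settings that differ between the two ends."""
        return [f.name for f in fields(EndpointConfig)
                if getattr(a, f.name) != getattr(b, f.name)]

    pub = EndpointConfig(0, "A", "rtps", "T1", "Sensor", "multicast:239.255.0.1")
    sub = EndpointConfig(0, "A", "rtps", "T1", "Sensor", "multicast:239.255.0.1")
    assert mismatches(pub, sub) == []  # both ends agree, so discovery can succeed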
The DDS RTPS protocol exchanges discovery information so that different applications participating in the same domain (!) know who is out there, and what they are offering/requesting. You need to make sure that the two applications are using the same domain ID (specified on the domain participant). Also, as some implementations allow for different transport options, make sure to use RTPS (sometimes called DDSI) networking.
The RTPS specification contains a mapping from domain ID to port numbers, so if applications from different vendors use the same ID, it should just work. Implementations might, however, override port numbers through configuration.
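For reference, the default mapping (RTPS v2.x, section 9.6.1.1) computes the well-known ports from the domain ID like this:

    # RTPS default port mapping parameters (spec section 9.6.1.1).
    PB, DG, PG = 7400, 250, 2      # port base, domain gain, participant gain
    d0, d1, d2, d3 = 0, 10, 1, 11  # fixed offsets

    def rtps_ports(domain_id, participant_id=0):
        base = PB + DG * domain_id
        return {
            "discovery_multicast": base + d0,
            "discovery_unicast":   base + d1 + PG * participant_id,
            "user_multicast":      base + d2,
            "user_unicast":        base + d3 + PG * participant_id,
        }

    # Domain 0, participant 0: discovery on 7400/7410, user data on 7401/7411.
    print(rtps_ports(0))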
To maximize the chance that the applications communicate properly, ensure they use the same IDL data model. Vendors have different approaches to type evolution and to mapping types that don't exactly match, and not all of them implement the XTypes specification (yet).
Also, as some implementations are stricter than others, ensure that you stay within the bounds of the specification. This means that a topic name should only contain alphanumeric characters (I sometimes see ':' used to indicate scoping; that is not allowed).
Things that will definitely not work between vendors are TRANSIENT/PERSISTENT durability and communication over TCP, as neither has been standardized yet. TRANSIENT_LOCAL should work. The difference between TRANSIENT_LOCAL and TRANSIENT is that with TRANSIENT_LOCAL, data is no longer aligned after its publisher (writer) leaves the system, whereas with TRANSIENT that data will still be available.
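For the portable case, requesting TRANSIENT_LOCAL looks like this in Cyclone DDS's Python binding (the QoS policy names mirror the DDS specification, so other vendors expose equivalents; the type is made up):

    from dataclasses import dataclass
    from cyclonedds.core import Qos, Policy
    from cyclonedds.domain import DomainParticipant
    from cyclonedds.idl import IdlStruct
    from cyclonedds.pub import DataWriter
    from cyclonedds.topic import Topic

    @dataclass
    class Sensor(IdlStruct, typename="Sensor"):
        id: int
        value: float

    dp = DomainParticipant(0)
    topic = Topic(dp, "T1", Sensor)
    # TRANSIENT_LOCAL: the writer itself keeps history for late-joining
    # readers; unlike TRANSIENT, that history is gone once the writer leaves.
    writer = DataWriter(dp, topic, qos=Qos(Policy.Durability.TransientLocal))
    writer.write(Sensor(id=1, value=21.5))

Remember that a late-joining reader must itself request TRANSIENT_LOCAL (or stronger) durability to receive that history, since durability is a requested/offered QoS.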
Also note that for API-level interoperability between vendors, your best chance is to use the new isocpp API, since that one has been implemented pretty consistently across the vendor implementations I've seen.
Hope that helps!