What should the DDS communication structure be? - data-distribution-service

I have 2 existing C++ projects, a Sender and a Receiver. They are connected over a UDP socket and built with CMake (CMakeLists.txt):
Root
|→ Sender
| |→ src
| |→ include
| |→ CMakeLists.txt
|→ Receiver
| |→ src
| |→ include
| |→ CMakeLists.txt
|→ CMakeLists.txt
Now I want to replace UDP with DDS, with the data types defined in IDL. Should the structure be like this:
Root
|→ Sender
| |→ src
| | |→ Publisher.cpp
| |→ include
| |→ CMakeLists.txt
|→ Receiver
| |→ src
| | |→ Subscriber.cpp
| |→ include
| |→ CMakeLists.txt
|→ CMakeLists.txt
or like this, with a separate DDSCom component:
Root
|→ Sender
| |→ src
| |→ include
| |→ CMakeLists.txt
|→ Receiver
| |→ src
| |→ include
| |→ CMakeLists.txt
|→ DDSCom
| |→ src
| | |→ Publisher.cpp
| | |→ Subscriber.cpp
| |→ include
| |→ CMakeLists.txt
|→ CMakeLists.txt
What should my structure be?

Project Structure (source files):
The project structure you present seems reasonable. In general, I would recommend that the structure of your source code be informed more by the architecture and logical structure of the application, and not so much by the choice of communication mechanism.
However, there are some specific considerations when integrating DDS into a project. In particular, if you are using IDL (or XML) to define your DDS data types for the application, then it often makes sense to locate those files in a 'common' area. The DDS IDL compiler will generate code from those type definition files, and this generated code can be compiled into a library, or simply compiled into each application.
Also, if you are using a version control system (git, svn, etc.), then I would recommend that the IDL file(s) be controlled, but not the generated code. [There are some arguments for controlling the generated code too, but I think that it almost always causes more harm than good.] So, specifically for your project, I would expect to find one or more IDL (or XML) files under DDSCom/src, DDSCom/include, or perhaps DDSCom/idl, as you prefer. A CMake rule can be created to generate type-specific source code from the IDL as part of the build process. This guarantees that the generated code is kept up-to-date with the data types, and with upgrades of the DDS implementation.
This approach should apply regardless of the DDS implementation in use; for example, CoreDX DDS, OpenSplice, or RTI.
Source Code Structure:
Regarding the internal code structure (not the 'directory' structure), there are many ways to architect an application that uses DDS for communication. The DDS API allows for synchronous and asynchronous communication patterns.
In general, a data producer creates several DDS entities in order to write data (for example, a DDS::DomainParticipant, a DDS::Publisher, a DDS::Topic, and a DDS::DataWriter). The DataWriter entity supports the 'write' call, and the other entities provide various configuration points that affect the communication behavior and structure.
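As a concrete illustration, the following is a minimal sketch of that producer-side setup using the classic DDS C++ API. Everything type-specific here is an assumption for the example: the SensorData IDL struct, the topic name, and the generated SensorDataTypeSupport / SensorDataDataWriter classes. The exact header names, QoS-default constants, and registration calls differ slightly between DDS implementations (CoreDX, OpenSplice, RTI, etc.).

// Assumed IDL (e.g. DDSCom/idl/SensorData.idl), purely illustrative:
//   struct SensorData { long id; double value; };
// The DDS IDL compiler generates SensorData, SensorDataTypeSupport and
// SensorDataDataWriter from it (classic C++ mapping; names vary by vendor).
#include "SensorDataSupport.h"   // generated header; actual name is vendor-specific

int main()
{
    // Factory -> DomainParticipant -> (type registration) -> Topic -> Publisher -> DataWriter
    DDS::DomainParticipantFactory* factory =
        DDS::DomainParticipantFactory::get_instance();
    DDS::DomainParticipant* participant =
        factory->create_participant(0 /* domain id */, PARTICIPANT_QOS_DEFAULT,
                                    NULL, DDS::STATUS_MASK_NONE);

    // Register the generated type support, then create a Topic of that type.
    // (Some vendors expose register_type as a static call instead.)
    SensorDataTypeSupport type_support;
    type_support.register_type(participant, "SensorData");
    DDS::Topic* topic =
        participant->create_topic("SensorTopic", "SensorData",
                                  TOPIC_QOS_DEFAULT, NULL, DDS::STATUS_MASK_NONE);

    // Publisher and DataWriter; narrow to the type-specific writer to get write().
    DDS::Publisher* publisher =
        participant->create_publisher(PUBLISHER_QOS_DEFAULT, NULL,
                                      DDS::STATUS_MASK_NONE);
    DDS::DataWriter* writer =
        publisher->create_datawriter(topic, DATAWRITER_QOS_DEFAULT, NULL,
                                     DDS::STATUS_MASK_NONE);
    SensorDataDataWriter* sensor_writer = SensorDataDataWriter::_narrow(writer);

    // Publish one sample.
    SensorData sample;
    sample.id = 1;
    sample.value = 42.0;
    sensor_writer->write(sample, DDS::HANDLE_NIL);

    // Cleanup; most implementations provide delete_contained_entities().
    participant->delete_contained_entities();
    factory->delete_participant(participant);
    return 0;
}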
Similarly, a data consumer creates corresponding DDS entities that enable it to 'read' data (for example, a DDS::DomainParticipant, a DDS::Subscriber, a DDS::Topic, and a DDS::DataReader). The DataReader supports many different variants of the 'read' operation to provide access to available data.
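The consumer side mirrors this. Here is a hedged sketch using a simple polling 'take' (listener callbacks or WaitSets are the asynchronous alternatives); the SensorDataDataReader and SensorDataSeq names again follow the classic C++ mapping for the assumed SensorData type:

// Reuses a participant and topic created as in the publisher sketch above.
DDS::Subscriber* subscriber =
    participant->create_subscriber(SUBSCRIBER_QOS_DEFAULT, NULL,
                                   DDS::STATUS_MASK_NONE);
DDS::DataReader* reader =
    subscriber->create_datareader(topic, DATAREADER_QOS_DEFAULT, NULL,
                                  DDS::STATUS_MASK_NONE);
SensorDataDataReader* sensor_reader = SensorDataDataReader::_narrow(reader);

// Poll for data: 'take' removes samples from the reader cache, 'read' leaves them in place.
SensorDataSeq samples;
DDS::SampleInfoSeq infos;
if (sensor_reader->take(samples, infos, DDS::LENGTH_UNLIMITED,
                        DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE,
                        DDS::ANY_INSTANCE_STATE) == DDS::RETCODE_OK)
{
    for (DDS::ULong i = 0; i < samples.length(); ++i) {
        if (infos[i].valid_data) {
            // Use samples[i].id / samples[i].value here.
        }
    }
    sensor_reader->return_loan(samples, infos);  // loaned sequences must be returned
}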
Each of the DDS entities acts as a factory for other entities, and each can be configured with various Quality of Service (QoS) policy settings. These QoS settings give DDS a very rich set of configuration options that impact communications. The 'Topic' entity defines a logical grouping of data identified by name, and further specifies the "type" of the data contained within the collection.
In a small project, you may find it easier to create the DDS entities in place where they are needed (for example, right in the main() routine). Alternatively, for larger systems, it is often beneficial to encapsulate the DDS entities within a component that can be reused across different applications.
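For the encapsulation approach, a sketch of what such a reusable component's interface might look like (the class and member names are invented for the example, and the implementation is omitted):

// Hypothetical wrapper that hides the DDS entity creation behind a small API.
// Sender and Receiver would only depend on this class (e.g. from DDSCom).
class SensorDataPublisher {
public:
    explicit SensorDataPublisher(int domain_id);   // creates participant/topic/publisher/writer
    ~SensorDataPublisher();                        // tears the DDS entities down again
    void publish(const SensorData& sample);        // forwards to the DataWriter's write()
private:
    DDS::DomainParticipant* participant_;
    DDS::Publisher*         publisher_;
    DDS::Topic*             topic_;
    SensorDataDataWriter*   writer_;
};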

Related

Objective-C plugin and CoreML model failing after email or Box transfer

I have a plugin written in Objective-C which incorporates a CoreML model. The plugin and ML model compile and run fine locally. If I email the plugin and CoreML model or transfer them via Box, my plugin crashes and throws a "damaged" error. I can get the plugin to function by removing extended attributes in Terminal (xattr -cr me/myplugin.plugin), but the ML section of the code still fails.
If I monitor in Xcode, I notice the following when the CoreML model fails:
[coreml] Input feature input_layer required but not passed to neural network.
[coreml] Failure verifying inputs.
Is there some signature or attached attribute that would lead to this issue when transferring via email/box?
Is there some signature or attached attribute that would lead to this issue when transferring via email/box?
Since you have access to both versions of each file (before and after emailing or transferring via Box), go to both versions of each file and do the following:
ls -la
If a file has extended attributes, there will be an @ symbol at the end of its permissions field. For example:
drwxr-xr-x@ 254 hoakley staff 8636 24 Jul 18:39 miscDocs
If the versions after transfer do not have an @ symbol, then they do not have extended attributes.
Then for each file (both versions) do:
xattr -l filepath
This will display the extended attributes of each file.
You should compare the attributes of both versions of each file and see the difference. This should answer your question. If there is no difference, then no extended attribute has been added or removed.
Read: https://eclecticlight.co/2017/08/14/show-me-your-metadata-extended-attributes-in-macos-sierra/

Determining ONVIF Protocol S support

Is there a way to determine whether Profile S is supported by looking at the ONVIF responses/profiles in the device? Do I just assume that if the profile includes a Video Source configuration, Profile S is supported?
As you can read in section 9 of the ONVIF_Profile_-S_Specification_v1-1-1,
A device compliant to this specification shall additionally include the specific scope parameter as presented in Table 1: Scope parameters. Apart from this pre-defined parameter, it shall be possible to set any scope parameter as defined by the device owner. Scope parameters can be listed and set through the commands provided by the Device service, defined in the ONVIF Core Specification.
+--------+--------------+---------------------------------------------------------+
|Category|Defined values|Description |
+--------+--------------+---------------------------------------------------------+
|Profile |Streaming |The Streaming scope indicates if the device is |
| | |compliant to the Profile S. A device compliant to the |
| | |Profile S shall include a scope entry with this value in |
| | |its scope list. |
+--------+--------------+---------------------------------------------------------+
So, in order to know if the device claims Profile S support, you need to check if the scope onvif://www.onvif.org/Profile/Streaming is present.
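As a small illustration, here is a self-contained C++ sketch of that check. The scope URIs are assumed to have already been fetched from the device (for example via the Device service's GetScopes operation); how you actually retrieve them depends on your ONVIF/SOAP client library.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Returns true if the device advertises Profile S via its scope list.
// 'scopes' is assumed to contain the scope URIs reported by the device.
bool supportsProfileS(const std::vector<std::string>& scopes)
{
    const std::string streaming = "onvif://www.onvif.org/Profile/Streaming";
    return std::find(scopes.begin(), scopes.end(), streaming) != scopes.end();
}

int main()
{
    // Hypothetical scope list, as it might come back from GetScopes.
    std::vector<std::string> scopes = {
        "onvif://www.onvif.org/type/video_encoder",
        "onvif://www.onvif.org/Profile/Streaming"
    };
    std::cout << (supportsProfileS(scopes) ? "Profile S claimed" : "Profile S not claimed")
              << std::endl;
    return 0;
}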
You can create an ONVIF Media client with:
https://www.onvif.org/ver10/media/wsdl/media.wsdl
This client can request the supported profiles with 'GetProfiles'.

Link to external types in ExDoc

I recently created my first Hex package, Ecto.Rut, and I'm now working on its documentation. Since it uses Ecto.Repo under the hood and returns Ecto.Schema and Ecto.Changeset types, I wanted to link them in the @specs.
Internal and Elixir core types (such as Keyword.t) are automatically linked, but ex_doc doesn't link external types defined in the Ecto modules. How do I make that happen?
I've currently tried specifying the complete module name in the #spec but that doesn't work:
@callback all(opts :: Keyword.t) :: [Ecto.Schema.t] | no_return
After some discussion on ElixirForum, José added this feature. From ExDoc v0.14.2 onwards, it supports auto-linking for modules from external dependencies.
From the GitHub page:
By referring to a module, function, type or callback from any of your dependencies, such as MyDep, ExDoc will automatically link to that dependency documentation on hexdocs.pm (the link can be configured with the :deps option in your mix.exs)
This means that simply mentioning the complete module name auto-links types, callbacks, modules, and functions. So, after updating to the latest ExDoc, my existing code now auto-links:
@callback all(opts :: Keyword.t) :: [Ecto.Schema.t] | no_return

.scr file and APDU

I am using jCardSim with Java Card version 2.2.2 (the Java Card simulator in the NetBeans IDE; I am not using an actual smart card), and I want to know how the .scr file is associated with the .java file.
If someone can provide me with some useful links on how these two files are related, I would greatly appreciate it.
I have looked through the following links, but they were not specifically helpful in illustrating how I can modify the .scr file in association with my .java file
Chapter 5 - Converting Java Class Files
How to write a Java Card applet: A developer's guide - JavaWorld
Basically, what I am trying to do is create a test applet that does not need .scr files to send and receive APDUs from my other files:
- I want to be able to read an APDU which contains the parameters for a function in my process method
- That function will then create another APDU as its output, which another function will read as one of its parameters
As far as I understand, the .scr file is used to send command APDUs that are read by the applet, but there is no way to write to the .scr file.
How can I create my own .java test file that sends and receives APDUs instead of relying on the .scr?
I can provide more details of what my code looks like, if required.
Thanks
You can communicate with the simulator using the method described in the jCardSim quick start guide, which also describes how to select an applet using the correct AID. The inherited process(APDU) method will receive any APDU sent using the methods described in the quick start guide, starting with the SELECT by NAME APDU (INS = A4). After that it is normal APDU processing.

Getting EPG info from DVB-T

I'm interested in grabbing the EPG data from DVB-T streams. Does anyone know of any C libraries or an alternative means of getting the data?
tv_grab_dvb can do this. See the subversion repository for sources.
tv_grab_dvb is made to work with the stream grabbed from the DVB-T card using dvbtools on Linux, but it may be portable to other platforms - I think it just works with the raw data from the stream.
...a new answer to an old question:
I wrote a utility called dvbtee that can be used as a C++ library, a cross-platform command-line utility, or a Node.js module.
(Despite it being a C++ library, one could still link to it from C code.)
The command-line utility will parse your streams and output the EPG; depending on the arguments you specify, it can generate plain text or a JSON block of data.
dvbtee: a digital television streamer / parser / service information aggregator supporting various interfaces including telnet CLI & http control
The Node.js module will emit events containing the PSIP table data (along with EPG info).
node-dvbtee: MPEG2 transport stream parser for Node.js with support for television broadcast PSIP tables