RTI DDS creating own data types

I am working on a .NET example where I define my own data type using RTI Connext DDS.
Instead of creating the application from scratch, I started from the source code of the hello_world_xml_dynamic example in the rti_workspace directory. I have made several changes to the USER_QOS_PROFILES.xml file to create my own data type, and I changed its name to MY_PROFILES.xml.
But when I compile the application and run it from the command line, I get the following error:
DDS_DomainParticipantFactory_create_participant_from_config_w_paramsI:ERROR: Profile library 'MyParticipantLibrary::PublicationParticipant' not found
! Unable to create DDS domain participant
The code that catches the error:
if (this.participant == null)
{
    this.participant = DDS.DomainParticipantFactory.get_instance().
        create_participant_from_config(
            "MyParticipantLibrary::PublicationParticipant");
    if (this.participant == null)
    {
        Console.Error.WriteLine("! Unable to create DDS domain participant");
        return;
    }
}
This is the configuration file MY_PROFILES.xml:
<!--
  RTI Data Distribution Service Deployment
-->
<dds xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:noNamespaceSchemaLocation="http://community.rti.com/schema/6.0.1/rti_dds_profiles.xsd">
    <!-- Qos Library -->
    <qos_library name="qosLibrary">
        <qos_profile name="DefaultProfile">
        </qos_profile>
    </qos_library>
    <!-- types -->
    <types>
        <struct name="FlightData">
            <member name="Latitude" type="double"/>
            <member name="Longitude" type="double"/>
            <member name="Altitude" type="double"/>
        </struct>
    </types>
    <!-- Domain Library -->
    <domain_library name="MyDomainLibrary">
        <domain name="FlightDataDomain" domain_id="0">
            <register_type name="FlightDataType" type_ref="FlightData"/>
            <topic name="FlightDataTopic" register_type_ref="FlightDataType">
                <topic_qos name="FlightData_qos" base_name="qosLibrary::DefaultProfile"/>
            </topic>
        </domain>
    </domain_library>
    <!-- Participant library -->
    <domain_participant_library name="MyParticipantLibrary">
        <domain_participant name="PublicationParticipant"
                            domain_ref="MyDomainLibrary::FlightDataDomain">
            <publisher name="MyPublisher">
                <data_writer name="FlightDataWriter" topic_ref="FlightDataTopic"/>
            </publisher>
        </domain_participant>
        <domain_participant name="SubscriptionParticipant"
                            domain_ref="MyDomainLibrary::FlightDataDomain">
            <subscriber name="MySubscriber">
                <data_reader name="FlightDataReader" topic_ref="FlightDataTopic">
                    <datareader_qos name="FlightData_reader_qos"
                                    base_name="qosLibrary::DefaultProfile"/>
                </data_reader>
            </subscriber>
        </domain_participant>
    </domain_participant_library>
</dds>
Where am I making a mistake?

Your XML file looks correct. From the 'not found' error message, it seems that you may not have taken the steps needed to instruct your application to load the profile file MY_PROFILES.xml and actually learn about your desired participant. You can easily verify this by introducing an error in your XML file (for example, by misspelling a tag) and rerunning your application. If it does not complain about the syntax or schema of the XML, then your file did not get loaded and this hypothesis is correct.
If that indeed turns out to be your problem, you have several options to fix it. They are listed in the User's Manual, section 18.5, "How to Load XML-Specified QoS Settings".
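One common cause: Connext automatically loads a file named USER_QOS_PROFILES.xml from the current working directory, so after renaming it to MY_PROFILES.xml it is no longer picked up unless you point the factory at it explicitly, for example through the NDDS_QOS_PROFILES environment variable or through the DomainParticipantFactory QoS. Below is a minimal C# sketch of the programmatic route; the exact sequence method names are assumed from the classic RTI Connext .NET API, so verify them against the API reference for your version:
// Tell the DomainParticipantFactory where to find MY_PROFILES.xml before
// creating the participant from the XML configuration.
DDS.DomainParticipantFactory factory = DDS.DomainParticipantFactory.get_instance();

DDS.DomainParticipantFactoryQos factoryQos = new DDS.DomainParticipantFactoryQos();
factory.get_qos(factoryQos);

// Add the profile file to the list of URLs the factory loads.
factoryQos.profile.url_profile.ensure_length(1, 1);
factoryQos.profile.url_profile.set_at(0, "MY_PROFILES.xml");
factory.set_qos(factoryQos);

// Now the participant defined in MyParticipantLibrary can be resolved.
this.participant = factory.create_participant_from_config(
    "MyParticipantLibrary::PublicationParticipant");
Alternatively, set the NDDS_QOS_PROFILES environment variable to the path of MY_PROFILES.xml before launching the application, or keep the default USER_QOS_PROFILES.xml name in the working directory so the file is picked up automatically.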

Related

Why does this DASH manifest keep the player stuck until the streams are downloaded?

I have the manifest file below. The issue is that the player waits for the streams to download completely before it starts playing, which is bad for the user experience. Any idea how to fix it? I expected the player to issue range requests and feed the media source with partial responses instead of waiting for the streams to download completely.
<MPD xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:mpeg:dash:schema:mpd:2011" xmlns:xlink="http://www.w3.org/1999/xlink" xsi:schemaLocation="urn:mpeg:DASH:schema:MPD:2011 http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-DASH_schema_files/DASH-MPD.xsd" profiles="urn:mpeg:dash:profile:isoff-live:2011" type="static" mediaPresentationDuration="PT30M67.6S" minBufferTime="PT2S">
    <ProgramInformation></ProgramInformation>
    <Period id="0" start="PT0.0S">
        <AdaptationSet id="0" contentType="video" segmentAlignment="true" bitstreamSwitching="true" lang="und">
            <Representation id="0" mimeType="video/webm" codecs="vp9" bandwidth="770153" width="854" height="480" frameRate="23421/1000">
                <BaseURL>https://liveradio.s3.eu-central-1.amazonaws.com/video.webm</BaseURL>
                <SegmentList duration="1840613" startNumber="1">
                    <Initialization range="0-219"/>
                    <SegmentURL indexRange="220-6592"/>
                </SegmentList>
            </Representation>
        </AdaptationSet>
        <AdaptationSet id="1" contentType="audio" segmentAlignment="true" bitstreamSwitching="true" lang="und">
            <Representation id="1" mimeType="audio/webm" codecs="opus" bandwidth="115412" audioSamplingRate="48000">
                <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
                <BaseURL>https://liveradio.s3.eu-central-1.amazonaws.com/audio.webm</BaseURL>
                <SegmentList duration="1840641" startNumber="1">
                    <Initialization range="0-258"/>
                    <SegmentURL indexRange="259-3444"/>
                </SegmentList>
            </Representation>
        </AdaptationSet>
    </Period>
</MPD>
You seem to be using a mix of the DASH 'live' profile approach and the 'on-demand' profile one; you can see the profile in the profiles="urn:mpeg:dash:profile:isoff-live:2011" attribute at the top of your manifest.
At a very high level, the difference is:
'live' profile manifests contain a list of URLs for each segment to be downloaded.
'on-demand' profile manifests contain a URL to a file plus an index of where the segments can be found in that file, so the client can download chunks as it wants.
DASH is a complex specification, and some players may accept certain mixes of profiles while others will not; not all players support all features either. For example, Shaka Player claims not to support 'indexRange' (or did in 2017: https://github.com/google/shaka-player/issues/765).

Registration-Free COM for standalone VB.NET application - complex case

I need your help.
I have read everything I could find about Registration-Free COM/DLLs, but my problem is more complex.
I'm preparing an application in VB.NET which will be used in an environment where users don't have admin rights, so I can't simply install it or register the COM component. The COM component is the LogParser library from Microsoft.
The DLL doesn't have to be embedded (that would be nice); it may also be extracted from the exe during startup. I'm OK with that approach.
In the main form I've got a button which opens another form via:
LogParser_Form.Show()
That form has 'Imports MSUtil', which refers to Interop.MSUtil.dll and is embedded into the exe by the Fody Costura add-in.
The form also contains a class with multiple declarations of variables whose types are defined in the COM library, e.g.:
Dim IISW3CLOG As New COMIISW3CInputContextClass
(there is more than one)
But this DLL refers to the bigger LogParser.dll, which is actually a COM component that requires registration, so my LogParser_Form doesn't appear when the button is clicked; instead an exception is thrown saying the COM component was not found.
Unfortunately, neither Fody Costura nor ILMerge works for the COM DLL.
I tried multiple tricks with manifest files, etc., but no luck.
You are my last hope. How can I use this COM DLL from my exe without registering it?
I suppose that properly used manifest files may help, but I haven't found a way to use them successfully.
Getting Registration-Free COM to work can be tricky, but it does work when configured properly. The key task is creating manifests that document all required dependencies. In your case, you'll need two manifests:
A client manifest for your application.
A server manifest for the LogParser library. This part requires a tool for analyzing type libraries, such as the OLE/COM Object Viewer (oleview.exe), which lets you look into the type library embedded inside LogParser.dll.
Let's take the (slightly modified) C# example, which is documented in the LogParser help file. The client is named "logqryclient.exe" in this case, and the Runtime Callable Wrapper has been created via the type library importer (tlbimp).
using System;
using Interop.MSUtil;

namespace logqryclient
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                // Instantiate the LogQuery object
                ILogQuery oLogQuery = new LogQueryClassClass();

                // Create the query
                string query = @"SELECT TOP 50 SourceName, EventID, Message FROM System";

                // Execute the query
                ILogRecordset oRecordSet = oLogQuery.Execute(query, null);

                // Browse the recordset
                for (; !oRecordSet.atEnd(); oRecordSet.moveNext())
                {
                    ILogRecord rec = oRecordSet.getRecord();
                    Console.WriteLine(rec.toNativeString(","));
                }

                // Close the recordset
                oRecordSet.close();
            }
            catch (System.Runtime.InteropServices.COMException exc)
            {
                Console.WriteLine("Unexpected error: " + exc.Message);
            }
        }
    }
}
To use this code without registering the COM classes, you'll first need to place the LogParser.dll into the same directory as the client executable.
Next, you'll need to create an accompanying server manifest (named "LogParser.manifest" here). This documents all necessary classes and marshalling information for the interfaces (required for thread switching). As mentioned earlier, you'll need a type library analyzer to gain access to the class and interface identifiers.
In the above case, you'll need identifiers for:
ILogQuery interface & LogQueryClass class
ILogRecordset interface
ILogRecord interface
Hence, the server manifest could look as follows:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
    <assemblyIdentity type="win32" name="LogParser" version="1.0.0.0" />
    <file name="LogParser.dll">
        <!-- LogQueryClass -->
        <comClass
            clsid="{8CFEBA94-3FC2-45CA-B9A5-9EDACF704F66}"
            threadingModel="Apartment" />
        <!-- Embedded type library -->
        <typelib
            tlbid="{A7E75D86-41CD-4B6E-B4BD-CC2ED34B3FB0}"
            version="1.0"
            helpdir="" />
    </file>
    <!-- Marshalling information for interfaces -->
    <comInterfaceExternalProxyStub
        name="ILogQuery"
        iid="{3BDE06BC-89E4-42FD-BE64-832A5F33D7D3}"
        proxyStubClsid32="{00020424-0000-0000-C000-000000000046}"
        baseInterface="{00000000-0000-0000-C000-000000000046}"
        tlbid="{A7E75D86-41CD-4B6E-B4BD-CC2ED34B3FB0}" />
    <comInterfaceExternalProxyStub
        name="ILogRecordset"
        iid="{C9452B1B-093C-4842-ABD1-F81410926874}"
        proxyStubClsid32="{00020424-0000-0000-C000-000000000046}"
        baseInterface="{00000000-0000-0000-C000-000000000046}"
        tlbid="{A7E75D86-41CD-4B6E-B4BD-CC2ED34B3FB0}" />
    <comInterfaceExternalProxyStub
        name="ILogRecord"
        iid="{185FFF88-E24A-4984-9621-AA41BEAE8513}"
        proxyStubClsid32="{00020424-0000-0000-C000-000000000046}"
        baseInterface="{00000000-0000-0000-C000-000000000046}"
        tlbid="{A7E75D86-41CD-4B6E-B4BD-CC2ED34B3FB0}" />
</assembly>
To allow the client to find the server manifest and ultimately the LogParser library, embed the following client manifest into the "logqryclient.exe" client:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
    <assemblyIdentity
        type="win32"
        name="logqryclient"
        version="1.0.0.0" />
    <dependency>
        <dependentAssembly>
            <assemblyIdentity
                type="win32"
                name="LogParser"
                version="1.0.0.0" />
        </dependentAssembly>
    </dependency>
</assembly>
Now all the required information is located in the manifests, so you can run the code in a registration-free configuration.
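A note on the asker's scenario of extracting the DLL at startup (for example with Fody Costura): the process's default activation context is built from the embedded client manifest before any files are extracted, so it may be more robust to activate the server manifest explicitly at runtime via the Win32 activation context API once the files exist on disk. The sketch below is a hypothetical C# illustration of that approach, not part of the original answer; the P/Invoke declarations target kernel32's CreateActCtx/ActivateActCtx/DeactivateActCtx, and RunWithManifest is an invented helper name:
using System;
using System.Runtime.InteropServices;

static class RegFreeCom
{
    // Minimal ACTCTX layout; only lpSource is used here (dwFlags = 0).
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct ACTCTX
    {
        public int cbSize;
        public uint dwFlags;
        public string lpSource;                 // path to the server manifest
        public ushort wProcessorArchitecture;
        public ushort wLangId;
        public string lpAssemblyDirectory;
        public string lpResourceName;
        public string lpApplicationName;
        public IntPtr hModule;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr CreateActCtx(ref ACTCTX actctx);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ActivateActCtx(IntPtr hActCtx, out IntPtr lpCookie);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool DeactivateActCtx(uint dwFlags, IntPtr lpCookie);

    // Runs 'action' with the given manifest active, so COM activation of the
    // classes it lists resolves against LogParser.dll without registry entries.
    public static void RunWithManifest(string manifestPath, Action action)
    {
        var ctx = new ACTCTX
        {
            cbSize = Marshal.SizeOf(typeof(ACTCTX)),
            lpSource = manifestPath             // e.g. the extracted LogParser.manifest
        };

        IntPtr hActCtx = CreateActCtx(ref ctx);
        if (hActCtx == new IntPtr(-1))          // INVALID_HANDLE_VALUE
            throw new System.ComponentModel.Win32Exception();

        IntPtr cookie;
        if (!ActivateActCtx(hActCtx, out cookie))
            throw new System.ComponentModel.Win32Exception();
        try
        {
            action();                           // create the COM objects here
        }
        finally
        {
            DeactivateActCtx(0, cookie);
        }
    }
}
The same P/Invoke declarations can be translated to VB.NET; the idea is simply to wrap the creation of the LogParser COM objects (e.g. New COMIISW3CInputContextClass) inside the activated context. The manifest contents themselves stay exactly as shown above.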

Custom Code Templates for SuiteCloud IDE

I'm planning on making my own code templates for when I generate new scripts. Since SuiteCloud IDE is only configured for SuiteScript 1.0, I was hoping to create new templates for SuiteScript 2.0.
I've got to the part where I can specify the directory for my custom templates, and I've gone ahead and created my templates. However, since I'm lacking the templates.xml file, SuiteCloud IDE doesn't recognise my custom templates.
NetSuite Help doesn't really help much beyond stating that the file exists; it doesn't say what it should contain, or even the structure of the data.
If anyone can help out here, it'd be much appreciated. TIA.
We've done the same exercise long ago for SuiteScript 1.0, and I've just recently done it for our SuiteScript 2.0 set.
You can find the default templates inside of P2_POOL_HOME/plugins/com.netsuite.ide.core_2016.2.0.e4.jar/templates/ where P2_POOL_HOME is usually ~/.p2/pool/
The general format of templates.xml is:
<configuration>
    <templates>
        <template label="TEXT YOU WANT IN DROPDOWN"
                  defaultFilename="DEFAULT NAME FOR FILE"
                  typesControl="radio|checkbox"
                  headerFilename="PATH/TO/FILE/HEADER"
                  startFilename="PATH/TO/START/FILE"
                  endFilename="PATH/TO/END/FILE"
                  rename="false">
            <types>
                <files label="TEXT LABEL FOR CHECKBOX" bodyFilename="PATH/TO/FILE/WHEN/SELECTED" />
            </types>
        </template>
    </templates>
</configuration>
Here are two examples from our templates:
<configuration>
    <templates>
        <template label="2.0 Portlet"
                  defaultFilename="360CUSTOMER_PROJECT_PL_DESCRIPTION.js"
                  typesControl="radio"
                  headerFilename="header.ss2.js"
                  startFilename="portlet_start.ss2.js"
                  endFilename="portlet_end.ss2.js"
                  rename="false">
            <types>
                <files label="Render" bodyFilename="portlet.ss2.js" />
            </types>
        </template>
        <template label="2.0 RESTlet"
                  defaultFilename="360CUSTOMER_PROJECT_RECORDTYPE_RL_DESCRIPTION.js"
                  typesControl="checkbox"
                  headerFilename="header.ss2.js"
                  startFilename="RESTlet_start.ss2.js"
                  endFilename="RESTlet_end.ss2.js"
                  rename="true">
            <types>
                <files label="GET" bodyFilename="RESTlet_get.ss2.js" />
                <files label="POST" bodyFilename="RESTlet_post.ss2.js" />
                <files label="PUT" bodyFilename="RESTlet_put.ss2.js" />
                <files label="DELETE" bodyFilename="RESTlet_delete.ss2.js" />
            </types>
        </template>
    </templates>
</configuration>
For script types that have only one entry point method (e.g. Suitelet, Portlet, Scheduled), you use radio for the typesControl setting and have a single <files> tag under <types>. For those that have multiple entry points to choose from (e.g. Client, Map/Reduce, User Event), you use checkbox for typesControl and then list each option you want as its own <files> tag under <types>.
I do not actually know what the rename setting does.
The basic file structure of the generated file will be:
/* CONTENTS OF HEADER FILE */
/* CONTENTS OF START FILE */
/* CONTENTS OF ENTRY POINT 1 FILE */
/* CONTENTS OF ENTRY POINT 2 FILE */
/* ... */
/* CONTENTS OF ENTRY POINT N FILE */
/* CONTENTS OF END FILE */
I tried the same thing with the 2017 version. You will find some JS and template files inside the jar file; I modified the ss_2.0_suitelet.js and ss_header.js files.
Just take those files from the jar, place them in your own local library, and modify them. Don't forget to point to the template directory in Preferences -> NetSuite in the Eclipse IDE.
You can also add the author and the date, but I'm not sure how to add the filename as a variable in the comment. Here is a sample:
Version Date Author Remarks
1.00 ${date} ${author} Initial version

How to include JavaScript libraries in a WireCloud operator

I am trying to develop a WireCloud operator, but I don't know how to include a JavaScript library (e.g. jQuery) in the config.xml in addition to the main.js file. I tried to include the jQuery library in config.xml the same way as main.js, using a different wire:index, but it did not work.
Is there any way to include a second JS library?
Yes, you can have more than one JavaScript file in operators. I think your problem is related to the RDF syntax, which is a bit weird. Anyway, the following snippet is an example of how to include jQuery and a main.js file using RDF/XML:
<usdl:utilizedResource>
    <usdl:Resource rdf:about="js/jquery.min.js">
        <wire:index>0</wire:index>
    </usdl:Resource>
</usdl:utilizedResource>
<usdl:utilizedResource>
    <usdl:Resource rdf:about="js/main.js">
        <wire:index>1</wire:index>
    </usdl:Resource>
</usdl:utilizedResource>
Alternatively, if you are using the Mashup portal at FIWARE Lab, you can make use of the new XML format that is going to be available in WireCloud 0.7.0 (currently the Mashup portal is running a release candidate of that version). This is an example of the new format:
<?xml version='1.0' encoding='UTF-8'?>
<operator xmlns="http://wirecloud.conwet.fi.upm.es/ns/macdescription/1" vendor="CoNWeT" name="ngsi-source" version="3.0">
    <details>
        <title>NGSI source</title>
        <homepage>https://github.com/wirecloud-fiware/ngsi-source</homepage>
        <authors>Álvaro Arranz García <aarranz@conwet.com></authors>
        <email>aarranz@conwet.com</email>
        <image>images/catalogue.png</image>
        <description>Retrieve Orion Context Broker entities and their updates in real time.</description>
        <longdescription>DESCRIPTION.md</longdescription>
        <license>AGPLv3+ w/linking exception</license>
        <licenseurl>http://www.gnu.org/licenses/agpl-3.0.html</licenseurl>
        <doc>doc/userguide.md</doc>
        <changelog>doc/changelog.md</changelog>
    </details>
    <requirements>
        <feature name="NGSI"/>
    </requirements>
    <preferences>
        <preference name="ngsi_server" type="text" label="NGSI server URL" description="URL of the Orion Context Broker to use for retrieving entity information" default="http://orion.lab.fi-ware.org:10026/"/>
        <preference name="ngsi_proxy" type="text" label="NGSI proxy URL" description="URL of the Orion Context Broker proxy to use for receiving notifications about changes" default="http://mashup.lab.fi-ware.org:3000/"/>
        <preference name="ngsi_entities" type="text" label="NGSI entity types" description="A comma separated list of entity types to use for filtering entities from the Orion Context broker. This field cannot be empty." default="Node, AMMS, Regulator"/>
        <preference name="ngsi_id_filter" type="text" label="Id pattern" description="Id pattern for filtering entities. This preference can be empty, in that case, entities won't be filtered by id." default=""/>
        <preference name="ngsi_update_attributes" type="text" label="Monitored NGSI Attributes" description="Attributes to monitor for updates. Currently, the Orion Context Broker requires a list of attributes to monitor for changes, so this field cannot be empty." default="Latitud, Longitud, presence, batteryCharge, illuminance, ActivePower, ReactivePower, electricPotential, electricalCurrent"/>
    </preferences>
    <wiring>
        <outputendpoint name="entityOutput" type="text" label="Provide entity" description="Every change over each entity fires an event" friendcode="entity"/>
    </wiring>
    <scripts>
        <script src="js/other.dependency.js"/>
        <script src="js/main.js"/>
    </scripts>
</operator>
You can find more documentation about this new format at this link.
In the end, I managed to include other JS files using RDF/XML in a WireCloud operator. First, I used the following code in the config.xml file:
<usdl-core:utilizedResource rdf:about="js/jquery-1.10.2.min.js">
    <wire:index>0</wire:index>
</usdl-core:utilizedResource>
<usdl-core:utilizedResource rdf:about="js/main.js">
    <wire:index>1</wire:index>
</usdl-core:utilizedResource>
Also, the included JS files have to be declared as follows:
<wire:Operator rdf:about="http://wirecloud.conwet.fi.upm.es/ns/widget#Operator">
    ...
    <usdl-core:utilizedResource rdf:resource="js/jquery-1.10.2.min.js"/>
    <usdl-core:utilizedResource rdf:resource="js/main.js"/>
    ...
</wire:Operator>

Simulating the Maven2 filter mechanism using Ant

I have a properties file, let's say my-file.properties.
In addition to that, I have several configuration files for my application in which some values must be filled in from the content of the my-file.properties file.
my-file.properties:
application.version=1.0
application.build=42
user.name=foo
user.password=bar
Thus, in my configuration files there are placeholders such as ${application.version} and ${user.name} that must be replaced by their values taken from the properties file.
When I build my application using Maven2, I only need to specify the properties file and say that my resource files are filtered (as in this answer to another problem). However, I need to achieve the same thing using only Ant.
I've seen that Ant offers a filter task. However, it forces me to use the pattern @property.key@ (i.e. @user.name@ instead of ${user.name}) in my configuration files, which is not acceptable in my case.
How can I solve my problem?
I think expandproperties is what you are looking for. This acts just like Maven2's resource filters.
INPUT
For instance, suppose your src directory contains (among many other files) src/test.txt:
<link href="${css.files.remote}/css1.css"/>
PROCESS
And your Ant build file (build.xml) contains:
<project default="default">
    <!-- The remote location of any CSS files -->
    <property name="css.files.remote" value="/css/theCSSFiles" />
    ...
    <target name="ExpandPropertiesTest">
        <mkdir dir="./filtered"/>
        <copy todir="./filtered">
            <filterchain>
                <expandproperties/>
            </filterchain>
            <fileset dir="./src" />
        </copy>
    </target>
</project>
OUTPUT
When you run the ExpandPropertiesTest target, you will have the following in filtered/test.txt:
<link href="/css/theCSSFiles/css1.css"/>
You can define a custom FilterReader. So you have a couple of choices:
Extend/copy the org.apache.tools.ant.filters.ReplaceTokens class and define a Map property that references another properties file containing all the replacements. This is still a bit of a chore as you have to define all the replacements.
Extend/copy the org.apache.tools.ant.filters.ReplaceTokens class with additional processing that just substitutes the matched token with a version with the correct garnish. Of course you'd have to be really careful where you use this type as it will match anything with the begin and end token.
So in the read() method of ReplaceTokens, replace:
final String replaceWith = (String) hash.get(key.toString());
with a call to a getReplacement() method:
...
final String replaceWith = getReplacement(key.toString());
...

private String getReplacement(String key) {
    // first check if we have an explicit replacement defined
    if (hash.containsKey(key)) {
        return (String) hash.get(key);
    }
    // otherwise fall back to the built-in rule and restore the original
    // garnish (use a StringBuilder if you want to be tidy)
    return "$" + key + "}";
}
To use this, you'd ensure your class is packaged and on Ant's path and modify your filter:
<filterreader classname="my.custom.filters.ReplaceTokens">
    <!-- Define the begin and end tokens -->
    <param type="tokenchar" name="begintoken" value="$"/>
    <param type="tokenchar" name="endtoken" value="}"/>
    <!-- Can still define explicit tokens; any not defined
         explicitly will be replaced by the generic rule -->
</filterreader>
One horrible way to make this work, inspired by Mnementh's solution, is with the following code:
<!-- Read the property file -->
<property file="my-file.properties"/>

<copy todir="${dist-files}" overwrite="true">
    <fileset dir="${src-files}">
        <include name="*.properties"/>
    </fileset>
    <filterchain>
        <filterreader classname="org.apache.tools.ant.filters.ReplaceTokens">
            <!-- Define the begin and end tokens -->
            <param type="tokenchar" name="begintoken" value="$"/>
            <param type="tokenchar" name="endtoken" value="}"/>
            <!-- Define one token per entry in my-file.properties. Arggh -->
            <param type="token" name="{application.version" value="${application.version}"/>
            <param type="token" name="{user.name" value="${user.name}"/>
            ...
        </filterreader>
    </filterchain>
</copy>
Explanations:
I am using the ReplaceTokens reader to look for every $...} pattern. I cannot search for ${...} patterns, as the begintoken is a char and not a String. So I define the list of tokens with names starting with a { (i.e. {user.name instead of user.name). Fortunately, I have "only" about 20 lines in my-file.properties, so I "only" need to define 20 tokens in my Ant file...
Is there any simple and stupid solution to this simple and stupid problem?
Ant has a concept called filter chains that is useful here. Use the ReplaceTokens filter and specify the begintoken and endtoken as empty (normally they're '@'). That should do the trick.