How to write ESF/Kura Assets (Wires) - assets

Can anyone please provide links to tutorials or documentation on how to write Assets (Wires) to be deployed in the ESF Admin. (I am using a Eurotech edge computing device.)
I have successfully written and deployed a Java API (ConfigurableComponent) as a Bundle. I can see that it is Active. I just need help with how to write a Java API that becomes an Asset.
Thanks.

I'm not familiar with ESF, but since Kura is similar and compatible, let me provide an answer based on that.
At least in Kura, there is only one Asset implementation (org.eclipse.kura.wire.WireAsset), and it is not meant to be replaced or extended. What you can do instead is create a Driver with a custom configuration for the Asset's variables. In most cases this is the best option and more than enough to support any additional connection.
Creating a Driver is too involved to summarize here, but I suggest the following references:
Official documentation on how to create a configurable component from scratch.
The S7 driver in the Kura project, to go from the configurable component to a full Driver implementation.
In general terms, once you have created a configurable component, you must implement the Driver interface in the component. Don't forget to declare it in the OSGI-INF XML file; use the S7 example as a guide.
You can shape the definition of your Asset by returning a Channel Descriptor from the Driver's getChannelDescriptor method, where you describe the variables, as in the S7 descriptor example.
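In rough terms, getChannelDescriptor returns a list of attribute definitions, one per channel variable, which ESF/Kura uses to render the Asset's channel configuration table. The sketch below imitates only that shape with plain stand-in types; the real Kura code builds these from the org.eclipse.kura metatype classes (see the S7 descriptor for the actual API), so every name here is illustrative, not the real interface.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a channel attribute definition; the real Kura code builds
// these from metatype classes. Names here are illustrative only.
class ChannelAttribute {
    final String id;
    final String type;
    final String defaultValue;

    ChannelAttribute(String id, String type, String defaultValue) {
        this.id = id;
        this.type = type;
        this.defaultValue = defaultValue;
    }
}

public class MyChannelDescriptor {
    // Mirrors the role of Driver.getChannelDescriptor(): describe the
    // variables that each channel of the Asset exposes in the Admin UI.
    public static List<ChannelAttribute> getDescription() {
        List<ChannelAttribute> attributes = new ArrayList<>();
        // Each entry becomes a per-channel field in the Asset's
        // configuration table (hypothetical example attributes).
        attributes.add(new ChannelAttribute("memory.address", "STRING", "0"));
        attributes.add(new ChannelAttribute("data.type", "STRING", "INT16"));
        return attributes;
    }
}
```

The point is only the pattern: the Driver publishes a description of its channel variables, and the framework turns that description into the Asset configuration form.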

Related

OpenTest custom test actors

I'm really impressed with the OpenTest project. I found it highly intriguing how many ideas this project shares with some projects I created and worked on, like your epic architecture with actors pulling tasks... and many others :)
Did you think about including other automation technologies to base Actors on?
I could see two main groups:
1. Established test automation tooling like TestCafe (support for non-Selenium GUI testing could strengthen the whole solution a lot).
2. Custom tooling needed for specific tasks. It would be great to have an actor with some domain-specific capabilities. As far as I can see, this could currently be achieved by introducing another layer of execution workers called by an actor via a REST API. What I mean is the possibility of using/including them as new 'actor types' with related custom keywords.
Thank you for your nice words. We spent a lot of time thinking through the architecture and implementation of OpenTest and it's very rewarding to see that people understand and appreciate the design.
Implementing new keywords (test actions) can be done without creating custom test actors, by creating a new Java class that inherits from the TestAction base class and override its run method. For a simple example, you can take a look at the implementation of the Delay test action. You can then package the new test action in a JAR and drop it (along with any dependencies) in the user-jars subdirectory in your test actor's working directory. The test actor will dynamically load all the JARs it finds in there and will find the new test action class (using reflection) so you can make use of it in your tests. Some useful info and things to look out for:
Your Java project is going to have to define a dependency on the opentest-base project (which is where the TestAction base class is implemented).
When you copy the JAR to where your test actor is, make sure to copy any dependency JARs along with it. Please note that many of the dependencies you might need are already included with the core test actor binaries (you can have a look at the pom.xml to see what they are).
If you happen to have any dependencies that conflict with the other JARs included with the core test actor binaries, you can apply a technique called shading to "hide" the conflicting classes under a different package name. Most of the time you're not going to need this, but if you do and you get stuck, let me know and I'll give you some pointers.
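The inherit-and-override pattern described above can be sketched as follows. Note that the TestAction class here is a minimal stand-in written just for this example; the real base class lives in the opentest-base project and has a richer API, so treat all member names as illustrative assumptions rather than the actual OpenTest interface.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for OpenTest's TestAction base class (the real one is
// in the opentest-base project and has a richer API); illustrative only.
abstract class TestAction {
    protected final Map<String, Object> arguments = new HashMap<>();
    private final Map<String, Object> output = new HashMap<>();

    public void setArgument(String name, Object value) { arguments.put(name, value); }
    protected void writeOutput(String name, Object value) { output.put(name, value); }
    public Object getOutput(String name) { return output.get(name); }

    // Custom keywords override this method with their own logic.
    public abstract void run();
}

// A hypothetical custom keyword: reads an argument and publishes a value
// that subsequent test steps could consume.
public class Greet extends TestAction {
    @Override
    public void run() {
        String name = (String) arguments.getOrDefault("name", "world");
        writeOutput("greeting", "Hello, " + name + "!");
    }
}
```

Packaged into a JAR and dropped into user-jars, a class following this pattern is what the test actor would discover via reflection and expose as a keyword in your tests.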
Here's a sample project that demonstrates how to build an OpenTest extension that creates a couple of custom keywords: https://github.com/adrianth/opentest-extension-sample
And here's an extensive video tutorial about creating custom OpenTest keywords: https://getopentest.org/tutorials/custom-keywords.html

Folder specific cucumber-reporting without parallel run?

I was wondering if I could set up cucumber-reporting for specific folders.
For example, following https://github.com/intuit/karate#naming-conventions, can I set up the third-party cucumber-reporting in CatsRunner.java without parallel execution? Please advise or direct me on how to set it up.
Rationale: it's easier to read and helps me with debugging.
Using the 3rd-party Cucumber Reporting library is always recommended when you want HTML reports. And you can create as many Java classes similar to DemoTestParallel as you like, in different packages. The first argument to CucumberRunner.parallel() needs to be a Java class; by default, the same package and sub-folders will be scanned for *.feature files.
BUT I think your question is simply how to easily debug a single feature in dev-mode. Easy, just use the @RunWith(Karate.class) JUnit runner: video | documentation
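A folder-specific runner following the DemoTestParallel pattern might look roughly like this; the package name, class name, and report directory are assumptions, and the code presumes the karate-junit4 and cucumber-reporting dependencies from that era of Karate are on the classpath.

```java
package animals.cats; // assumed package containing the *.feature files

import com.intuit.karate.cucumber.CucumberRunner;
import com.intuit.karate.cucumber.KarateStats;
import org.junit.Assert;
import org.junit.Test;

public class CatsParallelRunner {

    @Test
    public void testCats() {
        // Scans this class's package (animals.cats) and its sub-folders
        // for *.feature files; a thread count of 1 keeps execution
        // effectively serial. The JSON written to the report directory is
        // what the 3rd-party cucumber-reporting library consumes to build
        // the HTML report.
        KarateStats stats = CucumberRunner.parallel(getClass(), 1, "target/cats-reports");
        Assert.assertTrue("there are scenario failures", stats.getFailCount() == 0);
    }
}
```

One runner per package gives you one report per folder, which is the folder-specific separation asked about.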

Mule best practice?

I would like to build a component that other developers can plug in to MuleStudio and use to process files. It will expose a variety of methods which process the incoming file and return a new file. I want to make sure I'm going in the right direction with my implementation, and would appreciate any advice about best practices.
From my reading, it seems that I should use the Mule DevKit to create a Module. This module can contain a variety of Processor methods. I then package it with Maven, and it can be installed as a plugin.
Some specific questions:
- Should I use Processors or Transformers? Is there any difference in this case?
- Should I create multiple modules, each with one Processor/Transformer, or one module with all the Processors/Transformers?
- I would like the file to be supplied generically (from an email, HTTP, the local file system, etc.). What should the parameters and return type of my Processors be? Can I use InputStream as a parameter and OutputStream as my return type, and then expect users to use the proper endpoints/transformers to provide the InputStream? Or should I supply a variety of methods that take a variety of parameters, and perform the conversion myself?
Looking at your requirements, I suggest you go ahead with the MuleSoft Connector DevKit, which contains many useful features out of the box and is easy to build with and install.
You can give it a try and achieve your business needs:
https://docs.mulesoft.com/anypoint-connector-devkit/v/3.7/
Creating an Anypoint Connector project:
https://docs.mulesoft.com/anypoint-connector-devkit/v/3.7/creating-an-anypoint-connector-project
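On the stream question specifically: accepting and returning streams and leaving transport concerns to the surrounding flow is a reasonable shape for a processor method. Below is a framework-free sketch of such a method; in a real DevKit module the class would carry @Module and the method @Processor annotations, and the method name and behavior here are made up purely for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Sketch of a processor method that accepts a generic stream and returns
// a new one; callers use whatever endpoint/transformer they like (email,
// HTTP, file) to produce the InputStream. In a real DevKit module this
// class would be annotated @Module and the method @Processor.
public class FileProcessorModule {

    public InputStream toUpperCase(InputStream content) throws IOException {
        // Read the whole incoming stream into memory.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int read;
        while ((read = content.read(chunk)) != -1) {
            buffer.write(chunk, 0, read);
        }
        // Transform and hand back a fresh stream for the next step in
        // the flow to consume.
        String upper = buffer.toString(StandardCharsets.UTF_8.name()).toUpperCase();
        return new ByteArrayInputStream(upper.getBytes(StandardCharsets.UTF_8));
    }
}
```

Keeping the signature stream-to-stream means the module stays agnostic about where the file came from, which matches the "supplied generically" requirement in the question.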

Tutorials for local storage?

I am looking for places to learn how to use local storage in chrome extensions.
More specifically:
I want to use an options page and local storage of variables to run different CSS content scripts depending on a stored variable.
Look at the chrome.* API reference, particularly at the chrome.storage section here. They also provide a few examples, here's one.
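As a rough sketch of the options-plus-storage pattern for this use case (the "theme" key, file names, and stylesheet layout are all assumptions; the manifest needs the "storage" permission, and the CSS files must be listed under web_accessible_resources):

```javascript
// options.js (hypothetical): persist the user's choice from the options page.
chrome.storage.sync.set({ theme: "dark" }, () => {
  console.log("theme saved");
});

// content-script.js (hypothetical): read the stored value, falling back to
// a default, and inject the matching stylesheet into the page.
chrome.storage.sync.get({ theme: "light" }, (items) => {
  const link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = chrome.runtime.getURL(items.theme + ".css");
  document.head.appendChild(link);
});
```

chrome.storage.sync syncs across the user's browsers; chrome.storage.local works the same way if you only need per-machine storage.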

Cocoa/Objective-C Plugins Collisions

My application has a plugin system that allows my users to write their own plugins that get loaded at runtime. Usually this is fine, but in some cases two plugins use the same libraries, which causes a collision between the two.
Example:
Plugin A wants to use TouchJSON for working with JSON, so its creator adds the TouchJSON code to the plugin source, where it gets compiled and linked into the plugin binary. Later, Plugin B also wants to use the same library and does exactly the same. Now when my app loads these two plugins it detects this and spits out a warning like this:
Class CJSONScanner is implemented in both [path_to_plugin_a] and [path_to_plugin_b]. One of the two will be used. Which one is undefined.
Since my app just loads plugins and makes sure they conform to a certain protocol, I have no control over which plugins are loaded or whether two or more use the same library.
As long as both plugins use the exact same version of the library this will probably work, but as soon as the API changes in one plugin, a bunch of problems will arise.
Is there anything I can do about this?
The bundle loading system provides no means to peacefully resolve name conflicts. In fact, we're told to ensure ourselves that the problem doesn't happen, rather than told what to do when it does. (Obviously, in your case, that's not possible.)
You could file a bug report with this issue.
If this is absolutely critical to your application, you may want to have bundles live in separate processes and use some kind of IPC, possibly NSDistantObject, to pass the data from your program to the plugin hosts. However, I'm fairly sure this is a bag of hurt, so if you don't have very clearly-defined interfaces that allow for distribution into different processes, it might be quite an undertaking.
In a single-process model, the only way to deal with this is to ensure that the shared code (more precisely, the shared Objective-C classes) is loaded once. There are two ways to do this:
Put the shared code in a framework.
Put the shared code in a loadable bundle, and load the bundle when the plug-in is loaded if the relevant classes aren’t already available (check using NSClassFromString()). The client code would also have to use NSClassFromString() rather than referring to classes directly.
Of course, if you aren’t in control of the plug-ins you can’t enforce either of these schemes. The best you can do is provide appropriate guidelines and possibly infrastructure; for instance, in the second case the loading could be handled by the application, perhaps by specifying a class to check for and the name of an embedded bundle to load if it isn’t available in the plug-in’s Info.plist.