Is there any documented example of using a custom sensor, say a TMP35, with Cumulocity using Java?

I am having a hard time understanding where exactly we bind the hardware (for example, a TMP35 temperature sensor) to the software (i.e. the Java API).
Is there any documented example for this or any custom sensor (where the driver isn't already available)?
Or can anyone outline the approach to accomplish this?
Do I need to extend the c8y.lx.driver.Driver class?
Any pointers appreciated.
I believe the TMP35 itself has no means of communicating with the Cumulocity server. So could anyone please show how to link a custom sensor (one that does have a means of communication and is Java-enabled) with Cumulocity? That is what I am interested in knowing.
I know that there are some certified devices which are supported out of the box.

There are two steps:
Get the data from your analogue sensor with Java.
Send the data to Cumulocity.
Step 1 is unrelated to Cumulocity. You need an ADC, and Google provides a few examples on how to connect those (like http://www.lediouris.net/RaspberryPI/ADC/readme.html).
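For instance, on a Raspberry Pi you could wire the TMP35 to an MCP3008 ADC on the SPI bus and read it with the Pi4J library. The following is an untested sketch along those lines; the channel, the 3.3 V reference and the Pi4J calls are assumptions, not something from the linked article.

import com.pi4j.io.spi.SpiChannel;
import com.pi4j.io.spi.SpiDevice;
import com.pi4j.io.spi.SpiFactory;

import java.io.IOException;

// Reads one MCP3008 channel over SPI and converts the voltage to degrees Celsius for a TMP35.
public class Tmp35Reader {
    private final SpiDevice spi;

    public Tmp35Reader() throws IOException {
        this.spi = SpiFactory.getInstance(SpiChannel.CS0);
    }

    public double readCelsius(int channel) throws IOException {
        // MCP3008 single-ended read: start bit, then SGL/DIFF + channel bits, then a padding byte.
        byte[] request = new byte[] { 0x01, (byte) ((0x08 | channel) << 4), 0x00 };
        byte[] response = spi.write(request);
        int raw = ((response[1] & 0x03) << 8) | (response[2] & 0xFF); // 10-bit conversion result
        double volts = raw * 3.3 / 1023.0; // assuming a 3.3 V reference
        return volts * 100.0;              // TMP35: 10 mV per degree Celsius
    }
}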
Step 2 is then quite simple. Create a subclass of "MeasurementPollingDriver" and implement run(). Inside run(), query the sensor using the method from Step 1 and convert that into a measurement. Send that measurement using super.sendMeasurement(measurement). Here is an example.
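A minimal, untested sketch of such a driver is below. The class name Tmp35Driver, the readTemperatureFromAdc() helper and the constructor arguments are illustrative; check the exact MeasurementPollingDriver signatures in the agent version you use.

import java.math.BigDecimal;

import c8y.TemperatureMeasurement;
import c8y.lx.driver.MeasurementPollingDriver;

public class Tmp35Driver extends MeasurementPollingDriver {

    public Tmp35Driver() {
        // measurement type, fragment prefix and polling interval in milliseconds (illustrative values)
        super("c8y_TemperatureSensor", "c8y.tmp35", 5000);
    }

    @Override
    public void run() {
        // Step 1: read the sensor through the ADC (see above).
        double celsius = readTemperatureFromAdc();

        // Step 2: wrap the reading in a measurement fragment and send it to Cumulocity.
        TemperatureMeasurement measurement = new TemperatureMeasurement();
        measurement.setTemperature(BigDecimal.valueOf(celsius));
        super.sendMeasurement(measurement);
    }

    private double readTemperatureFromAdc() {
        return 25.0; // placeholder for the ADC code from Step 1
    }
}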
If you have a device library with callbacks, you could instead just copy the relevant code from MeasurementPollingDriver:
TemperatureMeasurement measurement = ...; // the value read from the sensor
MeasurementRepresentation measurementRep = new MeasurementRepresentation();
measurementRep.setSource(mo);            // mo: the device's ManagedObjectRepresentation
measurementRep.set(measurement);         // attach the measurement fragment
measurementRep.setTime(new Date());
measurements.create(measurementRep);     // measurements: the platform's MeasurementApi

Related

Media Foundation - Custom Media Source & Sensor Profile

I am writing an application for previewing, capturing and snapshotting camera input. To this end I am using Media Foundation for the input. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Media Foundation is unfortunately unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own MediaSource.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a Sensor Profile, and what should I use it for? There is almost no documentation about this.
Can somebody explain how implementing a custom media source works, i.e. what actually happens on the inside? Am I simply defining my own format, or does it allow me to pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
Using WPF with C# but I can write C++ and use it in C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (BlackMagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivator for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivator can tell me that the device should output a UYVY format.
My last resort is using the DeckLinkAPI, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.

Obtaining Sensor Data and motor control values?

I am currently using Webots and am new to the software framework. I need to implement a robot and get the sensor data and motor control values from it. The robot is a self-made robot, not one of the robots already implemented in the tutorials. Can someone elaborate on how to get those values? I am trying to implement it in C++, so help with the syntax for obtaining the values would be appreciated.
You should start by following the Webots tutorials. There is one specifically about controllers which explains exactly what you are trying to do, and it is available in C++: https://cyberbotics.com/doc/guide/tutorial-4-more-about-controllers?tab-language=c++
There is one tutorial for building your own robot too: https://cyberbotics.com/doc/guide/tutorial-6-4-wheels-robot?tab-language=c++
In any case, I would recommend following at least tutorials 1 to 6 to get familiar with Webots.
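For reference, the basic controller loop those tutorials teach looks roughly like the sketch below. It is shown in Java (which Webots also supports, with an API mirroring the C++ one, so the same calls exist under the same names in C++); the device names "ds0" and "left wheel motor" are placeholders for whatever your own robot's devices are called.

import com.cyberbotics.webots.controller.DistanceSensor;
import com.cyberbotics.webots.controller.Motor;
import com.cyberbotics.webots.controller.Robot;

public class MyRobotController {
    public static void main(String[] args) {
        Robot robot = new Robot();
        int timeStep = (int) Math.round(robot.getBasicTimeStep());

        // Device names must match the "name" fields of the devices in your robot model.
        DistanceSensor sensor = robot.getDistanceSensor("ds0");
        Motor leftMotor = robot.getMotor("left wheel motor");
        sensor.enable(timeStep);

        // Velocity control: set an "infinite" position target, then drive with setVelocity().
        leftMotor.setPosition(Double.POSITIVE_INFINITY);
        leftMotor.setVelocity(0.0);

        while (robot.step(timeStep) != -1) {
            double distance = sensor.getValue();                 // sensor reading for this time step
            leftMotor.setVelocity(distance > 500.0 ? 2.0 : 0.0); // example reaction to the reading
        }
    }
}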

RealityKit How to create custom meshes at runtime?

RealityKit has a bunch of useful functionality like built-in multiuser synchronization over a network to support shared worlds, but I can’t seem to find much documentation regarding mesh / object creation at runtime. RealityKit has some basic mesh generation functions (box, sphere, etc.) but I’d like to create my own procedural meshes at runtime (vertices and indices), and likely regenerate them every frame immediate-mode rendering style.
Firstly, is there a way to do this, or is RealityKit too closed-in without a way to do much custom rendering?
Secondly, would there be an alternative solution that might let me use some of RealityKit’s synchronization? For example, is that part really just another library I can use with ARKit 3? What is it called? I’d also like to be able to synchronize arbitrary data between users’ devices, so the built-in system would be helpful there too.
I can’t really test this because I don’t have any devices that can support the beta software at the moment. I am trying to learn whether I’ll be able to do what I want for my program(s) if I do get the necessary hardware, but the documentation is sparse.
Update (Feb 2022):
As of macOS 12 / iOS 15, RealityKit includes API to allow you to provide your own procedurally generated meshes, primarily through the following methods:
generate(from:)
generateAsync(from:)
These provide the means to create MeshResource instances, synchronously or asynchronously, either by constructing the models and instances yourself or by providing a list of MeshDescriptor instances that you create yourself.
The Apple documentation (as I'm writing this) is non-existent, but the APIs themselves are reasonably well documented if you look into the generated Swift interfaces. Max Cobb has an article on Medium, Getting Started with RealityKit: Procedural Geometries, that describes how to use a MeshDescriptor to build a surface mesh, and he also has a Swift package with some additional geometries that use this technique, RealityGeometries, which is not hard to read through to see examples of it in action.
As far as I know, RealityKit can only use primitives or usdz files as models. You can generate usdz files using ModelIO on the device, but that isn't feasible for your use case.
The synchronization, however, is built into ARKit, although you have to do a little more work when you are not using RealityKit:
Create a MultipeerConnectivity session between the devices (that's something you need to do for RealityKit as well).
Configure your ARSession and set isCollaborationEnabled, which makes your session output collaboration data via the session(_:didOutputCollaborationData:) delegate callback.
Send this data using your MultipeerConnectivity session.
When receiving data from other users, integrate it into your session using update(with:).
To send arbitrary information between users, you can either send it via MultipeerConnectivity independently from ARKit or use custom ARAnchors, which is the preferred option when you're dealing with positional data, e.g. when a user has placed an object at a specific location.
Instead of adding objects directly (by using something like scene.rootNode.addChildNode() in SceneKit), you create a special ARAnchor subclass with all the information needed to add your model, and add it to your session.
Then you add the object in the renderer(_:didAdd:for:) callback. This has the benefit of better tracking around your object (because you added an anchor to the position, indicating to ARKit that it should remember it), and it means you don't need to do anything special for multiuser experiences, because ARKit calls the renderer(_:didAdd:for:) method both for manually added anchors and for automatically added ones, for example when it receives collaboration data.

I'm a bit new to Modbus communications and am having a hard time figuring out what functions to use on the master's side of "free modbus"

I'm a bit new to Modbus communications and I've started reading about the "Free Modbus" library. Now, I understand how to use it to implement the slave side of the Modbus communications, but I just can't seem to find how to use the library on the master's side. For example, what function should I call on the master's side to read discrete input number 3 of slave 19 (for instance)?
Thank you in advance for the help.
By the way, I'm writing in C and am programming for a MSP430 microcontroller.
It's not stated directly on the website, but the FreeMODBUS library supports only the slave side. For example, in the init function (http://www.freemodbus.org/api/group_modbus.html#ga0) one of the parameters is "ucSlaveAddress", the address of your (slave) device.
The guy that created FreeMODBUS now works on commercial libs, and there is a library for master mode: http://www.embedded-solutions.at/index.php/en/products/modbus-master

Making my own application for my USB MIDI device

I want to try and make my own application for my Novation Nocturn, which is a USB DJ controller surface. The application software interacts with it to send out MIDI messages to software like Traktor, Ableton and Cubase.
I'm aware of libusb, but that's as far as I've got. I've successfully installed it to interact with my device but stopped there.
I'm after some suitable reading material basically. USB specs, MIDI specs and such. If I'm honest the full USB 2.0 spec looks like it holds loads of stuff I don't need.
Just looking for something interesting to do now that I've finished my degree (Computer Science). My current programming knowledge is C++ and mainly C#.
Could do with some direction on how to get stuck into this task.
Edit: updated to include some info from the Device Manager on the Nocturn.
Hardware IDs:
USB\VID_1235&PID_000A&REV_0009
USB\VID_1235&PID_000A
Compatible IDs:
USB\Class_FF&SubClass_00&Prot_00
USB\Class_FF&SubClass_00
USB\Class_FF
Device Class:
MEDIA
USB MIDI is probably one abstraction layer lower than you want to deal with. I'd suggest finding a good MIDI framework and interacting with the device via MIDI instead.
For C++, Juce is probably the way to go, as you didn't mention a target platform or any other specific requirements.
If you want to go the .NET route, the easiest way to get started is with the C# MIDI Toolkit code:
http://www.codeproject.com/KB/audio-video/MIDIToolkit.aspx
In there, you'll find all the basics for opening a device, reading input, and writing output. Alternatively, NAudio has some MIDI classes, but they are somewhat incomplete.
As you develop, you'll want a reference for the MIDI spec handy.
A tool that you will find invaluable is MIDI-OX. In fact, I suggest that before you start coding, you fire up MIDI-OX and use it to sniff the messages coming from the Novation. It will give you a good idea of what the Novation sends. You can use it in conjunction with MIDI Yoke (a configurable virtual MIDI port) to insert it between the Novation and Ableton Live (or whatever software you normally use with your Novation) so you can see all of the messages in normal use.
Done... Kidding, but I've started on this in Python - I personally want linux support. I am teaching myself python, but I only dabble in programming.
You can see basic functionality at https://github.com/dewert/nocturn-linux-midi. The guy who reverse engineered it (i.e. the leap I wouldn't have been able to make myself) doesn't seem to be doing any more with it. His code is at https://github.com/timoahummel/nocturn-game
I am using PyPortMIDI and PyUSB, both of which I believe are wrappers for the C equivalents. I think this is all ok on Windows, but haven't tried.
What is currently on my github is crap, but it is proof-of-concept. I'm working on doing it properly now, with threading and proper configuration options.
The driver for the Nocturn makes it appear to the system as a MIDI device, even though it isn't a USB MIDI device at the hardware level. The Automap software works entirely at the MIDI level, receiving MIDI instructions and sending different instructions in response; it is separate from the driver and not necessary.
Alternatively, look at https://github.com/timoahummel/nocturn-game for an example of talking to it directly over USB from Python. You can probably port this to another language with libusb bindings.
Old thread, but I've just recently started looking into this.
I had a look at the Python application that dewert has written. Interestingly, it turns out that the data the Nocturn emits is in fact MIDI, although it doesn't register itself as a USB MIDI device.
Looking at the actual data coming from the device, it emits control change messages (0xB0 controller value) for everything. The control commands sent to it are also control change messages, albeit only the data bytes, since the Nocturn seems to support MIDI running status (i.e. when sending multiple control change messages, it is not necessary to repeat the status byte).
Indeed, looking at the magical initialization data, it is actually just a bunch of control changes: it starts with 0xb0 and from there on the data comes in twos. For instance, the last two bytes in the init string are 0x7f 0x00, which simply turn off the LED for the rightmost forward button. (There is something subtle happening as a result of the initialization being sent though, as the Nocturn sometimes emits messages which appear to be some form of timeout events, and that behaviour changes depending on whether the initialization string has been sent or not.)
Using MIDI-like messages makes sense, as Novation would be well aware of the MIDI protocol, so it would be easiest for them to use it for the communication even if the device is not strictly a MIDI device.
Note though that the incrementors just send the values 1 or 127, i.e. +1 or -1 step, so even with some trivial mapping software it's not really useful as it is. (Actually, if turned quickly, one can get 3 or 125 for instance, with the 125 corresponding to -3.) The only controller which sends a continuous value is the slider, which emits an 8 bit value when moved.
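To make the byte-level description above concrete, here is a rough sketch of how such a stream could be decoded, in plain Java over raw bytes and independent of any MIDI library. Treating values above 64 as negative steps follows the 127/125 observation above and only applies to the incrementors (the slider sends absolute values); the class and method names are illustrative.

// Minimal decoder sketch for a control-change byte stream with running status:
// an optional 0xB0 status byte followed by (controller, value) pairs.
public class ControlChangeDecoder {
    private int lastStatus = -1;

    public void feed(byte[] data) {
        int i = 0;
        while (i < data.length) {
            int b = data[i] & 0xFF;
            if (b >= 0x80) {             // status byte (0xB0 = control change on channel 1)
                lastStatus = b;
                i++;
                continue;
            }
            if (lastStatus != 0xB0 || i + 1 >= data.length) {
                i++;                     // not a complete control change pair; skip
                continue;
            }
            int controller = data[i] & 0x7F;
            int value = data[i + 1] & 0x7F;
            i += 2;

            // Incrementors send deltas: 1, 3, ... = +1, +3, ... and 127, 125, ... = -1, -3, ...
            int delta = value < 64 ? value : value - 128;
            System.out.printf("controller %d: value %d (delta %+d)%n", controller, value, delta);
        }
    }
}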
I suppose you'll want to know about USB classes in general and the USB MIDI class in particular. The latter is the best you can hope for, assuming the device doesn't use some proprietary protocol you don't possess documentation for.