Multiple managed objects in one SCADA widget - Cumulocity

I would like to present some devices with their measurements on one single SVG file to show a graphical view of a room.
But from what I see, we can't link the properties of multiple devices to one SCADA widget. It seems a bit limited, but maybe I'm missing something?

Since this question was asked, the feature has been added (Cumulocity v7.25).
SCADA widgets can now access properties of multiple devices.

Can I program an AI that reads what is on the screen and then does some simple tasks on the computer?

So I was wondering: is there a way I can program an AI that reads something (mostly numbers, in a font and a particular area of the screen that I will specify) and then performs some clicks on the screen according to what it read? The data (numbers) will change constantly, and the AI will have to look out for these changes and act accordingly. I am not asking exactly how to do that. I am asking whether it is possible, and if yes, which approach I should take (for example Python or something else) and where to start.
You need an OCR library to recognize the digits; OpenCV can handle the image processing, and an engine such as Tesseract does the actual text recognition. The rest should be regular programming.
It is quite likely that your operating system doesn't allow you access to parts of the screen that are not owned by your application, so you are either blocked at this point, or you are restricted to parts of the screen owned by your application. (If I enter my details on the screen into my banking app, I definitely don't want another app to be able to read it).
Next you'd need to find a way to read the pixels on the screen programmatically. That will be very different from OS to OS, so it is very unlikely to be built into your language's standard library. You might be able to interface with whatever is available on your OS, or find a library that does it for you. This will give you an image, made of pixels.
Then you need some OCR software to read the text. AI doesn't seem to be involved in any of this.
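If you want to see what the OCR step looks like in practice, here is a minimal C++ sketch using OpenCV for preprocessing and the Tesseract C++ API for recognition. It assumes the screen region has already been captured to an image file (the capture itself is OS-specific, as explained above); the file name "region.png" is hypothetical.
// Minimal OCR sketch: read a pre-captured screenshot region and extract digits.
// Assumes OpenCV and Tesseract are installed; "region.png" is hypothetical.
#include <opencv2/opencv.hpp>
#include <tesseract/baseapi.h>
#include <iostream>

int main()
{
    // Load the captured screen region (capturing it is OS-specific).
    cv::Mat img = cv::imread("region.png");
    if (img.empty()) return 1;

    // Preprocess: grayscale + Otsu threshold usually improves digit recognition.
    cv::Mat gray, bw;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Run Tesseract, restricted to digits only.
    tesseract::TessBaseAPI ocr;
    if (ocr.Init(nullptr, "eng")) return 1;  // nonzero return means init failed
    ocr.SetVariable("tessedit_char_whitelist", "0123456789");
    ocr.SetImage(bw.data, bw.cols, bw.rows, 1, static_cast<int>(bw.step));

    char* text = ocr.GetUTF8Text();
    std::cout << "Recognized: " << text << std::endl;
    delete[] text;
    ocr.End();
    return 0;
}
Run this in a loop on freshly captured images to watch for changes; the clicking part can then be done with whatever input-synthesis API your OS offers.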

Rendering different content for each eye in Mixed Reality

I'm currently looking to see if there is a way, within a Windows 10 Universal App intended for Mixed Reality, to display different content for each eye.
What would be ideal is if this could be done entirely in XAML, for example...
<mixedreality target="lefteye">
...
</mixedreality>
<mixedreality target="righteye">
...
</mixedreality>
I know this can be done in a Unity App but I want this to be done within a Window in the Mixed Reality portal.
Thanks in advance.
Nick.

Can I sync multiple live radio streams with video?

Is it possible to sync multiple live radio streams to a pre-recorded video simultaneously and vary the volume at defined time-indexes throughout? Ultimately for an embedded video player.
If so, what tools/programming languages would be best suited for doing this?
I've looked at Gstreamer, WebChimera and ffmpeg but am unsure which route to go down.
This can be done with WebChimera, as it is open source and extremely flexible.
The best possible implementation of this is in QML by modifying the .qml files from WebChimera Player directly with any text editor.
The second best implementation of this is in JavaScript with the Player JS API.
The difference between these two methods would firstly be resource consumption.
The second method, using only JavaScript, would require adding one <object> tag for the video, and one more for each audio file you need to play. So for every media source you add to the page, you will need to load a new instance of the plugin.
The first method, made only in QML (mostly knowing JavaScript would be needed here too, as that handles the logic part behind QML), would load all your media sources in one plugin instance, with multiple VlcVideoSurface components, each with its own Plugin QML API.
The biggest problem I can foresee for what you want to do is the buffering state, as all media sources need to be paused as soon as one video/audio starts buffering. Synchronizing them by time should not be too difficult, though.
WebChimera Wiki is a great place to start, it has lots of demos and examples. And at WebChimera Questions we've helped developers modify WebChimera Player to suit even the craziest of needs. :)
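WebChimera itself is driven through QML/JavaScript, but since it wraps libVLC, the underlying synchronization idea can be sketched against the plain libVLC C API (3.x). This is only an illustration, not WebChimera code; the stream URLs, file name, and cue times below are made up.
// Sketch of the idea with the plain libVLC 3.x C API (which WebChimera wraps):
// one video player as the master clock plus one player per live radio stream,
// with volume changes at predefined time indexes. All URLs/times hypothetical.
#include <vlc/vlc.h>
#include <chrono>
#include <cstddef>
#include <initializer_list>
#include <thread>
#include <vector>

int main()
{
    libvlc_instance_t* vlc = libvlc_new(0, nullptr);

    // The pre-recorded video acts as the master clock.
    libvlc_media_t* vm = libvlc_media_new_path(vlc, "video.mp4");
    libvlc_media_player_t* video = libvlc_media_player_new_from_media(vm);
    libvlc_media_release(vm);

    // One extra player per live radio stream.
    std::vector<libvlc_media_player_t*> radios;
    for (const char* url : { "http://example.com/stream1", "http://example.com/stream2" }) {
        libvlc_media_t* m = libvlc_media_new_location(vlc, url);
        radios.push_back(libvlc_media_player_new_from_media(m));
        libvlc_media_release(m);
    }

    libvlc_media_player_play(video);
    for (auto* r : radios) libvlc_media_player_play(r);

    // Poll the master clock and apply volume cues (time in ms, volume 0-100).
    struct Cue { libvlc_time_t at; int player; int volume; };
    std::vector<Cue> cues = { { 5000, 0, 20 }, { 10000, 1, 80 } };
    std::size_t next = 0;
    while (next < cues.size()) {
        libvlc_time_t now = libvlc_media_player_get_time(video);
        while (next < cues.size() && now >= cues[next].at) {
            libvlc_audio_set_volume(radios[cues[next].player], cues[next].volume);
            ++next;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    // Cleanup (release players and the instance) omitted for brevity.
    return 0;
}
The buffering problem mentioned above would be handled in the same polling loop: pause all players whenever one of them reports buffering, then resume together.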

What is the nature of the gestures needed in Windows 8?

Most touchpads on laptops don't handle multitouch, and hence are not able to send swipe gestures to the OS.
Would it be possible to send some gestures to Windows from an external device, like a Teensy or a recent Arduino, which can already emulate a keyboard and a mouse? I could send buttons 4 and 5 (mouse wheel up and down), but I would like to send a real swipe gesture (for example with a flex sensor...).
One of the ways that you could work with the Arduino and similar boards is to use the Microsoft .NET Micro Framework, which is open source and available at no cost from: Micro Framework
There are other frameworks available for the Arduino that you might want to use. So if you have a great idea on how to utilize the sensor hardware, the output must meet certain specifications.
To be able to connect to your hardware that reads gestures, you will need to understand how drivers are created, so take a look at this: Info on drivers.
To find that type of information you would need to take a look at the above link. It is for sensors, which would appear to be not quite what you are looking for, since you want to use "gestures", but first you have to be able to make the connection to your device, and this guide MIGHT help. I have reviewed it for other reasons.
There is a bunch of stuff to dig through, but the first step, imo, is to understand how to get your software to communicate with Windows 8. Let me know if you have any other questions. I am not the best person for this; you might want to refer to the community at the Micro Framework link shown above.
Good luck.
That's perfectly possible. What you're effectively suggesting is that you want to create your own input peripheral, like a trackpad, and use that to send inputs. As long as Windows recognizes this device as an input source, it will work.
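As a baseline, something like the following Arduino sketch (C++, for a board with native USB such as a Leonardo or a Teensy) maps a flex sensor to mouse-wheel scrolling, which Windows 8 understands out of the box. The pin number and thresholds are hypothetical; a real swipe gesture would require the device to enumerate as a HID touch digitizer rather than a plain mouse.
// Arduino sketch: flex sensor -> mouse-wheel scrolling via USB mouse emulation.
// Pin number and thresholds are hypothetical; tune them to your sensor.
#include <Mouse.h>

const int FLEX_PIN = A0;   // hypothetical analog pin wired to the flex sensor
const int CENTER   = 512;  // resting reading of the sensor
const int DEADZONE = 60;   // ignore small movements around the resting point

void setup() {
  Mouse.begin();
}

void loop() {
  int reading = analogRead(FLEX_PIN);
  int delta = reading - CENTER;

  // Bending the sensor past the dead zone scrolls up or down.
  if (delta > DEADZONE) {
    Mouse.move(0, 0, 1);   // third argument is the scroll wheel
  } else if (delta < -DEADZONE) {
    Mouse.move(0, 0, -1);
  }
  delay(50);
}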

Windows 8 manipulate tiles

Can I manipulate the tiles from a C++ application (through WinAPI/WinRT, the registry, or something else)?
Your question is very unspecific. Look at these samples 1 and 2 to get started with Tiles and C++. If you've got a more specific/detailed question feel free to ask.
Edit: (method names and namespaces are from C#, but should be roughly equal in C++)
Primary tiles can only be deleted by the user.
You can access limited WinRT functionality from a non-Metro app. But as far as I know, there is no way to delete foreign tiles through that "limited" WinRT.
If you want to delete secondary tiles, you have to use Windows.UI.StartScreen.SecondaryTile.FindAllAsync("id of the app tiles belong to"); and then call tile.RequestDeleteAsync() for each tile in the result you want to remove. Note that every delete request must be accepted by the user. As already mentioned, the Windows.UI.StartScreen namespace is only available from within Metro apps.
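For reference, the same calls in C++/CX might look roughly like this, run from inside a Metro (Windows Store) app; the application id string is hypothetical.
// C++/CX sketch of the same calls from within a Metro app.
// The application id string is hypothetical.
#include <ppltasks.h>
#include <collection.h>

using namespace concurrency;
using namespace Windows::Foundation::Collections;
using namespace Windows::UI::StartScreen;

void DeleteSecondaryTiles()
{
    create_task(SecondaryTile::FindAllAsync("App.Id.Tiles.Belong.To"))
        .then([](IVectorView<SecondaryTile^>^ tiles)
    {
        for (SecondaryTile^ tile : tiles)
        {
            // Each request shows a confirmation the user must accept.
            create_task(tile->RequestDeleteAsync())
                .then([](bool deleted)
            {
                // deleted == true if the user accepted the removal.
            });
        }
    });
}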