Complete open source software stack which can be used for building digital twins?

For a PoC project, we would like to build a digital twin of a physical device, e.g. a coffee machine. We would like to know which open source software components can be used for this. Based on the information available, some candidate components are listed below:
Eclipse Hono as the IoT platform / IoT gateway
Eclipse Vorto for describing information models
Eclipse Ditto for the digital twin representation. It provides an abstract representation of the device's last state via HTTP or WebSocket APIs
Blender / Unreal Engine for 3D models
Protégé as the ontology editor
I have the following questions:
Are we missing any software components needed to create a digital twin of a physical asset?
Assuming 3D models are available and sensor data is also available, how can we feed live sensor data into the 3D models, e.g. changing the color of a water tank based on the real sensor reading of the water tank level? We are not able to understand how real-time sensor data will be connected to the 3D models.
How will an ontology be helpful in creating 3D models?

So you have a 3D model and sensor information, and you want to change some properties of the 3D model to reflect the sensor information? You shouldn't need to use five different tools for something like that. I would suggest looking into game development tools like Unity3D or Unreal Engine 4.
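As a rough illustration of the data flow, here is a minimal Python sketch that polls the twin's last state (assuming Eclipse Ditto exposes it over HTTP; the thing ID, feature name and credentials are hypothetical placeholders) and maps the water-tank level to a color. The last step is engine specific: in Blender it would be a bpy material assignment, in Unity or Unreal the equivalent material/shader call.

```python
# Minimal sketch: poll the twin's water-tank level and turn it into a color.
# The Ditto URL, thing ID, feature name and credentials are placeholders.
import time
import requests

DITTO_URL = ("http://localhost:8080/api/2/things/"
             "org.example:coffee-machine/features/waterTank/properties/level")

def level_to_color(level):
    """Map a 0-100 % fill level to a simple red -> green gradient."""
    t = max(0.0, min(level, 100.0)) / 100.0
    return (1.0 - t, t, 0.0)  # (R, G, B)

while True:
    level = float(requests.get(DITTO_URL, auth=("ditto", "ditto")).json())
    r, g, b = level_to_color(level)
    # Engine-specific step, e.g. inside Blender this could be:
    #   bpy.data.materials["TankMaterial"].diffuse_color = (r, g, b, 1.0)
    print(f"tank level {level:.1f}% -> color ({r:.2f}, {g:.2f}, {b:.2f})")
    time.sleep(1)
```

For a truly live view you would subscribe to Ditto's WebSocket (or an MQTT topic) instead of polling, but the mapping step stays the same.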

Related

AR representation without 3rd party apps

I am stuck on a very interesting question regarding AR. Is there a way to display an augmented reality 3D object with Apple's built-in camera application, without downloading any 3rd party apps, just by pointing the camera at it?
AR applications use ARKit, which enables world tracking through a technique called visual-inertial odometry.
Using the iPhone or iPad's camera and motion sensors, ARKit finds a set of points in the environment and tracks them as you move the phone. Once you pin the object in the real world, the 3D model is rendered.
These operations are not supported by the default camera application, since it does not leverage ARKit internally by default.

Brickschema Information

I am working on an IoT gateway (e.g. a Raspberry Pi), and I want to use the Brick schema for data modelling and normalisation of sensor data. I have gone through a paper (link) and found some theory about the Brick schema, but nothing about the implementation side. Could you tell me how I can start working with the Brick schema using Python libraries, and what I need to integrate with Brick to model the sensor data? Please share an example of implementing the Brick schema for my IoT gateway. Thanks in advance.
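There is a brickschema package on PyPI that builds on rdflib; since its helper APIs vary between versions, here is a minimal sketch in plain rdflib (the room/sensor names and the example.org namespace are made up) showing how one temperature sensor on the gateway could be described with Brick classes and relationships:

```python
# Minimal sketch of a Brick model for one sensor using plain rdflib
# (the brickschema package builds on rdflib and adds validation/inference).
from rdflib import Graph, Literal, Namespace, RDF, RDFS

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("http://example.org/building#")  # hypothetical site namespace

g = Graph()
g.bind("brick", BRICK)
g.bind("bldg", BLDG)

# A room and the temperature sensor that measures it.
g.add((BLDG["room_101"], RDF.type, BRICK["Room"]))
g.add((BLDG["temp_sensor_1"], RDF.type, BRICK["Air_Temperature_Sensor"]))
g.add((BLDG["temp_sensor_1"], BRICK["isPointOf"], BLDG["room_101"]))
g.add((BLDG["temp_sensor_1"], RDFS.label, Literal("Gateway temperature sensor 1")))

# Serialize the model; the gateway can store or publish this Turtle file.
print(g.serialize(format="turtle"))
```

The gateway can then tag the readings it forwards with the sensor's URI (here bldg:temp_sensor_1), so consumers can look up in the Brick graph what each value means.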

Making a 3D scan model using Intel RealSense D435 point clouds

Earlier this week I received the Intel RealSense D435 camera, and now I am discovering its capabilities. After a few hours of research, I discovered that the previous version of the SDK had a 3D model scan example application. Since SDK 2.0, this example application is no longer present, making it harder to create 3D models with the camera.
I have managed to create various point cloud (.ply) files with the camera, and I am now trying to use CloudCompare to generate a 3D model from them, so far without success. Since my knowledge of computer vision is rather basic, I am reaching out to the community to ask how a 3D model scan can be accomplished using only point clouds. The model can be rough, but preferably most of the noisy data should be removed.
Try RecFusion 1.7.3 for scanning (99 euro).
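If you want to stay with open source tools, one route worth trying is Open3D: estimate normals on the point cloud, then run Poisson surface reconstruction. A minimal sketch (the file name and parameter values are placeholders you will have to tune for your scans):

```python
# Minimal sketch: point cloud (.ply) -> rough mesh with Open3D.
# File name and parameter values are placeholders to tune for your scans.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

# Drop some of the noisy points before reconstruction.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson reconstruction needs per-point normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```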

Functional and Non-Functional Requirements For Train Tracking Project

I am currently developing a project document for my assignment. The title of my project is 'Systematic Train Tracker'.
Let me describe my system:
The GPS receiver on the train will collect the train's information and pass it to the control server over the GSM network. The control server will then transmit the information to the train administrative office for monitoring purposes. The information will also be passed to the relevant stations to display it to passengers.
So what would the functional and non-functional requirements for my project be? And what are the possible constraints?
Please help.

Libfreenect VS OpenNI

So I know this question has been asked before, but most of the earlier answers date from when both OpenNI and libfreenect were still being developed. My questions are:
1) I want to know what state they are in now.
2) What are the differences between the two (pros, cons and anything else)?
3) Specifically for skeleton tracking, which is better and gives more data about the skeleton (for example, the Microsoft SDK provides data for 20 joints; is it the same with these two, more, less)?
Libfreenect is mainly a driver which exposes the Kinect device's features:
- depth stream
- IR stream
- color(RGB) stream
- motor control
- LED control
- accelerometer
It does not provide any advanced processing features like scene segmentation, skeleton tracking, etc.
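For example, grabbing the raw streams from Python is roughly this (a minimal sketch assuming the freenect Python bindings are installed and a Kinect is attached); everything beyond the raw data is up to you:

```python
# Minimal sketch: grab one depth and one RGB frame through libfreenect's
# Python wrapper (assumes the `freenect` bindings and a connected Kinect).
import freenect

depth, _ = freenect.sync_get_depth()  # depth frame as a numpy array
rgb, _ = freenect.sync_get_video()    # RGB frame as a numpy array

print("depth frame:", depth.shape, "rgb frame:", rgb.shape)
```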
On the other hand, OpenNI allows generic access to the Kinect's features (mainly the image streams), but also provides rich processing features such as:
- scene segmentation
- skeleton tracking
- hand detection and tracking
- gesture recognition
- user interface elements
etc.
but no low-level control of device features like the motor/LED/accelerometer.
As opposed to libfreenect, which AFAIK works only with the Kinect sensor, OpenNI works with the Kinect but also with other sensors such as the Asus Xtion Pro, Carmine, etc.
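By way of comparison, opening a depth stream through the OpenNI2 Python bindings looks roughly like this (a minimal sketch assuming the `openni` package is installed; skeleton tracking would additionally need the NiTE middleware on top of OpenNI):

```python
# Minimal sketch: read one depth frame through the OpenNI2 Python bindings
# (`pip install openni`); skeleton tracking needs the NiTE middleware on top.
from openni import openni2

openni2.initialize()  # may need the path to the OpenNI2 redistributable
dev = openni2.Device.open_any()

depth_stream = dev.create_depth_stream()
depth_stream.start()

frame = depth_stream.read_frame()
print("depth frame:", frame.width, "x", frame.height)

depth_stream.stop()
openni2.unload()
```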
You've mentioned the Kinect SDK. It's good to bear in mind that there are multiple Kinect sensors:
- Kinect for Xbox
- Kinect for Windows
The Kinect for Windows sensor, for example, supports a near mode and has a longer range.
I don't know how the skeleton tracking differs.
Also, there is an MS Kinect-OpenNI bridge project, and OpenNI2 plays nicely with the Kinect.