Is there delta serialization when saving a Blueprint to a .uasset file in Unreal Engine?

In Unreal Engine, there is net delta serialization for network replication. I want to know whether UE also uses delta serialization when saving a Blueprint to a .uasset file on disk. Does the undo/redo system (classes such as FTransactionObjectDeltaChange) perform the delta serialization?
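For reference, the "net delta serialization" mentioned above is usually encountered through the fast-array hook. A minimal sketch of that hook is shown below; the type names FMyItem/FMyItemArray and the header name are hypothetical, and this only illustrates the replication-side mechanism the question refers to, not the .uasset save path.

// Sketch of the net delta serialization hook (UE fast-array pattern).
#include "Net/Serialization/FastArraySerializer.h"   // UE5 path; older UE4 uses Engine/NetSerialization.h
#include "MyTypes.generated.h"                        // hypothetical header name, required by UnrealHeaderTool

USTRUCT()
struct FMyItem : public FFastArraySerializerItem
{
    GENERATED_BODY()

    UPROPERTY()
    int32 Value = 0;
};

USTRUCT()
struct FMyItemArray : public FFastArraySerializer
{
    GENERATED_BODY()

    UPROPERTY()
    TArray<FMyItem> Items;

    // Called by the replication system; only items that changed since the last
    // acknowledged state are serialized (the "delta").
    bool NetDeltaSerialize(FNetDeltaSerializeInfo& DeltaParms)
    {
        return FFastArraySerializer::FastArrayDeltaSerialize<FMyItem, FMyItemArray>(Items, DeltaParms, *this);
    }
};

template<>
struct TStructOpsTypeTraits<FMyItemArray> : public TStructOpsTypeTraitsBase2<FMyItemArray>
{
    enum { WithNetDeltaSerializer = true };
};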

Related

Which file types are supported by the Microsoft Mesh app?

The Microsoft Mesh app on HoloLens 2 allows loading files from the local device. I want to load 3D models into my workspace, but I don't know which 3D model formats are supported. I have tried .fbx files but had no success.
I have found the answer:
"For 3D content, only .glb files are supported at this time. We currently have a file size limit of 75MB, or maximum 300,000 vertex count for 3D models. If these limits are exceeded, you will fail to load your content, and get a warning: "This model is too complex"."
https://learn.microsoft.com/en-us/mesh/mesh-app/use-mesh/import-content#import-user-content

How can I write raw data to an SD card without a filesystem by using a DSP?

I'm new to embedded electronics/programming, but I have a project. For this project, I want to write raw data provided by a sensor, in 512-byte buffers, to an SD card (SPI mode) with no filesystem.
I'm trying to do it using the low-level functions of the FatFs library. I've heard that I have to create my own format and rewrite functions to do it, but I'm lost in all those lines of code... I suppose that I first have to initialize the SD card using a command sequence (CMD0, CMD8, CMD55, ACMD41...).
I'm not sure about the next steps, whether I have to open a file with the fopen function and then use the fwrite function...
If somebody could explain to me how it works for an SD card without a filesystem and guide me through the steps to follow, I would be very grateful.
Keep in mind that cards with a memory capacity > 2 TB are not supported in SPI mode (Physical Layer Simplified Specification).
If you don't want to use a file system, then fopen and fwrite are irrelevant. You simply use the low-level block driver. If you are referring to http://www.elm-chan.org/fsw/ff/00index_e.html, then the API subset you need is:
disk_status - Get device status
disk_initialize - Initialize device
disk_read - Read data
disk_write - Write data
disk_ioctl - Control device dependent functions
However, since you then have to manage the blocks somehow so that you know which blocks are available and where to write next (like one large file), you will essentially be writing your own filesystem (albeit a very simple and limited one). That raises the question of why you would not just use an existing filesystem.
There are many reasons for not using FAT in an embedded system, but few of them will be resolved by implementing your own "raw" filesystem without doing a lot of work reinventing the filesystem! Consider something more robust and designed for resource-constrained embedded systems with a potentially unreliable power source, such as littlefs.
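If you do stay with the raw-block approach, a minimal sketch of how those disk_* calls might be used is shown below. It assumes FatFs's diskio.h, physical drive 0, and a hypothetical scheme that simply writes one 512-byte sensor buffer per sector, starting at sector 0; error handling is reduced to pass/fail.

#include "diskio.h"   // FatFs low-level media access interface

// Hypothetical raw logger: one 512-byte sensor buffer per sector, no filesystem.
static const BYTE DRIVE = 0;        // physical drive number
static DWORD next_sector = 0;       // where the next buffer goes (sector 0 upwards)

bool raw_log_init(void)
{
    // disk_initialize() performs the card bring-up (CMD0/CMD8/ACMD41 ...) for you.
    return (disk_initialize(DRIVE) & STA_NOINIT) == 0;
}

bool raw_log_append(const BYTE buf512[512])
{
    // Write exactly one 512-byte sector at the current position.
    if (disk_write(DRIVE, buf512, next_sector, 1) != RES_OK)
        return false;
    ++next_sector;                  // tracking this yourself is exactly the bookkeeping discussed above
    return true;
}

Note that next_sector has to survive power cycles somehow (for example by reserving a header sector for it), which is precisely the kind of work that ends up reinventing a filesystem.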

Complete open source software stack which can be used for building digital twins?

For a PoC project, we would like to build a digital twin of a physical device, e.g. a coffee machine. We would like to know which open-source software components can be used for this. Some software components, based on the information available, are listed below:
Eclipse Hono as the IoT platform / IoT gateway
Eclipse Vorto for describing information models
Eclipse Ditto for the digital twin representation. It provides an abstract representation of the device's last state in the form of HTTP or WebSocket APIs
Blender / Unreal Engine for 3D models
Protege as the ontology editor
I have the following questions:
Are we missing any software components needed to create a digital twin of a physical asset?
Assuming we have the 3D models available and sensor data is also available, how can we feed live sensor data into the 3D models, e.g. changing the color of a water tank based on the real sensor reading of the tank's water level? We are not able to understand how real-time sensor data will be connected to the 3D models.
How will an ontology be helpful in creating 3D models?
So you have a 3D model and sensor information, and you want to change some properties of the 3D model to reflect the sensor information? You shouldn't need to use 5 different tools for something like that. I would suggest looking into video game development tools like Unity3D or Unreal Engine 4.
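To make the second question more concrete, here is a hedged sketch of how live sensor data could drive a 3D model inside Unreal Engine 4: an actor periodically polls the twin's state over HTTP and pushes the value into a dynamic material instance. The AWaterTankActor class, the Ditto-style URL, and the "FillLevel" material parameter are all hypothetical; only the UE HTTP and material APIs used are standard.

// Hypothetical UE4 actor member functions; assumes the "Http" module is enabled
// in the .Build.cs file and that TankMaterial is a UMaterialInstanceDynamic
// created in BeginPlay() and declared in the actor's header.
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"
#include "Materials/MaterialInstanceDynamic.h"

void AWaterTankActor::PollTwinState()
{
    auto Request = FHttpModule::Get().CreateRequest();
    // Hypothetical Eclipse Ditto endpoint exposing the tank's water level.
    Request->SetURL(TEXT("http://ditto.local:8080/api/2/things/my.ns:tank/features/waterLevel/properties/value"));
    Request->SetVerb(TEXT("GET"));
    Request->OnProcessRequestComplete().BindUObject(this, &AWaterTankActor::OnTwinStateReceived);
    Request->ProcessRequest();
}

void AWaterTankActor::OnTwinStateReceived(FHttpRequestPtr Request, FHttpResponsePtr Response, bool bSucceeded)
{
    if (!bSucceeded || !Response.IsValid())
        return;

    // Assume the endpoint returns a plain numeric payload in the range 0.0 .. 1.0.
    const float Level = FCString::Atof(*Response->GetContentAsString());

    // Drive the visual: e.g. a material that changes color based on "FillLevel".
    TankMaterial->SetScalarParameterValue(TEXT("FillLevel"), Level);
}

Polling on a timer is the simplest wiring; Ditto's WebSocket API could push changes instead, but the engine-side update stays the same single material-parameter call.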

Postprocess Depth Image to get Skeleton using the Kinect sdk / other tools?

The short question: I am wondering whether the Kinect SDK / NiTE can be exploited to build depth-image-in, skeleton-out software.
The long question: I am trying to dump the depth, RGB, and skeleton data streams captured from a Kinect v2 into rosbags. However, to the best of my knowledge, capturing the skeleton stream on Linux with ROS and a Kinect v2 isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process these to get the skeleton stream.
I can capture all three streams on Windows using the Microsoft Kinect v2 SDK, but dumping them to rosbags with all the metadata (camera_info, sync info, etc.) would be painful (correct me if I am wrong).
It's been quite some time since I worked with NiTE (and I only used the Kinect v1), so maybe someone else can give a more up-to-date answer, but from what I remember, this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record them with rosbag and play them back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
One issue you may encounter is that if you record Kinect data, the bag files quickly become very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with very little RAM. If you only want to play the file back, or if you have enough RAM, this should not really be a problem, though.
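As an illustration of why recorded data is interchangeable with live data, a minimal node that consumes the depth stream is sketched below; it behaves identically whether the topic is fed by the live driver or by rosbag play. The topic name /kinect2/sd/image_depth is an assumption based on the iai_kinect2 bridge and would need to match your setup.

// Minimal roscpp subscriber; works the same on live and bag-replayed data.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

static void depthCallback(const sensor_msgs::ImageConstPtr& msg)
{
    ROS_INFO("depth frame %ux%u at t=%.3f", msg->width, msg->height, msg->header.stamp.toSec());
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "depth_listener");
    ros::NodeHandle nh;
    // Hypothetical topic name; replace with whatever your Kinect bridge publishes.
    ros::Subscriber sub = nh.subscribe("/kinect2/sd/image_depth", 1, depthCallback);
    ros::spin();
    return 0;
}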
Indeed, it is possible to perform NiTE2 skeleton tracking on any depth image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension, one can add a virtual device which allows manipulating each pixel of the depth stream. This device can then be used to create a UserTracker object. Skeleton tracking can be done as long as the right device name is provided:
\OpenNI2\VirtualDevice\Kinect
but consider the usage limits:
NiTE is only allowed to be used with "Authorized Hardware"
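A hedged sketch of that setup with the OpenNI2/NiTE2 C++ API might look like the following, assuming the VirtualDeviceForOpenNI2 module from the links above is installed and registered under the device URI shown; error handling is kept minimal.

// Open the virtual depth device and run NiTE2 skeleton tracking on it.
#include <OpenNI.h>
#include <NiTE.h>

int main()
{
    openni::OpenNI::initialize();
    nite::NiTE::initialize();

    openni::Device device;
    // Device URI provided by the VirtualDeviceForOpenNI2 module (see links above).
    if (device.open("\\OpenNI2\\VirtualDevice\\Kinect") != openni::STATUS_OK)
        return 1;

    nite::UserTracker userTracker;
    if (userTracker.create(&device) != nite::STATUS_OK)
        return 1;

    // Each frame: feed depth in through the virtual device, read skeletons out here.
    nite::UserTrackerFrameRef frame;
    while (userTracker.readFrame(&frame) == nite::STATUS_OK)
    {
        const nite::Array<nite::UserData>& users = frame.getUsers();
        for (int i = 0; i < users.getSize(); ++i)
        {
            if (users[i].isNew())
                userTracker.startSkeletonTracking(users[i].getId());
        }
    }
    return 0;
}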

Create skeleton data using depth files

I have a bunch of JPEG and depth (raw) files saved on disk using the Kinect SDK.
Is there a way to create the skeleton data (joint positions) from these files with OpenNI?
If so, how could it be done?
Thanks!!
OpenNI does not handle the skeleton tracking itself. Rather, it is done through the NiTE middleware layer that plugs into OpenNI. NiTE, and the algorithms that handle the skeleton generation, are closed source and not available for dissection.
I am not aware of an API call to push a raw image into the skeleton process for pulling out the skeleton data. I'd bet that movement within the stream actually plays a part in the algorithm, making single-image processing very imprecise.