Which file types are supported by the Microsoft Mesh app?

The Microsoft Mesh app on HoloLens 2 allows loading files from the local device. I want to load 3D models into my workspace, but I don't know which types of 3D models are supported. I have tried .fbx files but had no success.

I have found the answer:
"For 3D content, only .glb files are supported at this time. We currently have a file size limit of 75MB, or maximum 300,000 vertex count for 3D models. If these limits are exceeded, you will fail to load your content, and get a warning: "This model is too complex"."
https://learn.microsoft.com/en-us/mesh/mesh-app/use-mesh/import-content#import-user-content
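If you want to check a model against those limits before importing it, a quick script can report the file size and total vertex count. This is just a sketch assuming Python with the third-party trimesh library installed; the file name is a placeholder:

```python
# Rough pre-flight check against the documented Mesh limits (75 MB, 300,000 vertices).
# Sketch using the third-party `trimesh` library; the file name is just an example.
import os
import trimesh

GLB_PATH = "model.glb"          # placeholder path to your exported model
MAX_BYTES = 75 * 1024 * 1024    # 75 MB file size limit
MAX_VERTICES = 300_000          # vertex count limit

size_ok = os.path.getsize(GLB_PATH) <= MAX_BYTES

# Load the .glb as a scene and sum the vertex counts of all meshes it contains.
scene = trimesh.load(GLB_PATH, force="scene")
vertex_count = sum(len(g.vertices) for g in scene.geometry.values())

print(f"size ok: {size_ok}, vertices: {vertex_count}, "
      f"vertices ok: {vertex_count <= MAX_VERTICES}")
```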

Related

3D models (.obj, .fbx, or .glb) do not load in the HoloLens 2 3D Viewer

I am exporting simple 3D models as .obj, .fbx, or .glb using Blender, and I can successfully display them in the 3D Viewer app on a HoloLens 2.
As soon as the models are more complex (for example, created with MakeHuman), the exports can no longer be displayed in the HoloLens 2 3D Viewer app. The error message says that the models are not optimised for Windows Mixed Reality.
I found some documentation on the limitations of HoloLens 1 .glb files, but I cannot find the specification for HoloLens 2 and the three file formats.
In addition: should I reduce the complexity in the Blender models, or during the export, or are there even tools to post-process 3D models for HoloLens 2 / Windows Mixed Reality?
You can use the following link as a guide for optimizing your models - Optimize your 3D models
For the asset requirements of the pre-installed 3D Viewer app on HoloLens 2, please see the Asset requirements overview for more details. Here is a quote of the main points:
Exporting - Assets must be delivered in the .glb file format (binary glTF)
Modeling - Assets must be less than 10k triangles, have no more than 64 nodes and 32 submeshes per LOD
Materials - Textures can't be larger than 4096 x 4096 and the smallest mip map should be no larger than 4 on either dimension
Animation - Animations can't be longer than 20 minutes at 30 FPS (36,000 keyframes) and must contain <= 8192 morph target vertices
Optimizing - Assets should be optimized using the WindowsMRAssetConverter. Required on Windows OS versions <= 1709 and recommended on Windows OS versions >= 1803
As for other tools to post-process 3D models, you can easily optimize any glTF 2.0 model using the Windows Mixed Reality Asset Converter available on GitHub. It includes a command-line tool that performs the required conversion steps in sequence to prepare a glTF 2.0 core asset for use in the Windows Mixed Reality home.
From my experience, only the simplest models will successfully open in 3D Viewer, whether on HoloLens 1 or 2. A primary reason is that even a model that "looks simple" may well consist of far more than 10,000 polygons. For instance, a simple model of a screw, originally modeled in a CAD application, could be 10,000 polygons on its own. So imagine how many polygons the whole product model would be!
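If you'd rather reduce the complexity directly in Blender before exporting, a Decimate modifier applied via the Python API is one way to get under the triangle limit. This is only a sketch; the object name, decimation ratio, and output path are placeholders for your own scene:

```python
# Sketch using Blender's Python API (bpy): decimate a mesh and export a .glb.
# The object name "Body", the ratio, and the output path are placeholders.
import bpy

obj = bpy.data.objects["Body"]            # hypothetical object, e.g. a MakeHuman export
bpy.context.view_layer.objects.active = obj

# Add a Decimate modifier and reduce the triangle count to ~20% of the original.
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.2
bpy.ops.object.modifier_apply(modifier=mod.name)

# Export as binary glTF (.glb), the format the HoloLens 3D Viewer expects.
bpy.ops.export_scene.gltf(filepath="/tmp/body_decimated.glb", export_format='GLB')
```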

Download OpenStreetMap tiles as .png

I'm trying to use offline OpenStreetMap tiles in a React Native application. For that reason, and according to react-native-maps, I need to store the tiles in a specific format:
The path template of the locally stored tiles. The patterns {x} {y} {z} will be replaced at runtime
For example, /storage/emulated/0/mytiles/{z}/{x}/{y}.png
I tried to download the tiles from tile servers, but found out that it would take a lot of time (it is almost impossible). I also looked at the proposed ways to download tiles, but I don't know those files' extensions and whether I could convert any of them to .png. Therefore, I wonder if there is an open-source/free way to do that.
I also found this software, but I can only use it up to zoom = 13; beyond that it is not free.
Bulk downloads are usually forbidden. See the tile usage policy. Quoting the important parts:
OpenStreetMap’s own servers are run entirely on donated resources.
OpenStreetMap data is free for everyone to use. Our tile servers are not.
Bulk downloading is strongly discouraged. Do not download tiles unnecessarily.
In particular, downloading significant areas of tiles at zoom levels 17 and higher for offline or later usage is forbidden [...]
You can render your own raster tiles by installing rendering software such as TileMill or by running your own tile server. Alternatively, take a look at Commercial OSM software and services.
Alternatively switch to vector tiles. Obtaining raw OSM data is rather easy. Vector tiles allow you to render tiles on your device on the fly.
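For reference, the {z}/{x}/{y}.png layout that react-native-maps expects follows the standard OSM "slippy map" tile numbering, so once you render your own tiles you can compute where each one belongs. A small sketch in plain Python (the base directory is a placeholder):

```python
# Convert lat/lon + zoom to the standard OSM "slippy map" tile indices,
# and build the {z}/{x}/{y}.png path expected by react-native-maps.
import math
import os

def deg2tile(lat_deg, lon_deg, zoom):
    """Standard OSM tile numbering: x from longitude, y from latitude (Web Mercator)."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: which locally stored tile covers central Paris at zoom 13?
base_dir = "/storage/emulated/0/mytiles"   # placeholder, matches the path template above
z = 13
x, y = deg2tile(48.8566, 2.3522, z)
print(os.path.join(base_dir, str(z), str(x), f"{y}.png"))
```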

Making a 3D scan model using Intel RealSense D435 point clouds

Earlier this week I received the Intel RealSense D435 camera, and now I am discovering its capabilities. After a few hours of research, I discovered that the previous version of the SDK had a 3D model scan example application. Since SDK 2.0, this example application is no longer present, making it harder to create 3D models with the camera.
I have managed to create various point cloud (.ply) files with the camera, and I am now trying to use CloudCompare to generate a 3D model from them, so far without success. Since my knowledge of computer vision is rather basic, I am reaching out to the community to ask how a 3D model scan can be accomplished using only point clouds. The model can be rough, but preferably most of the noisy data should be removed.
Try RecFusion 1.7.3 for scanning. It costs 99 euros.
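If you want a free, scriptable alternative, the Open3D library can remove noisy points from a .ply point cloud and reconstruct a rough surface. This is only a sketch assuming Python with Open3D installed; file names and parameter values are placeholders to tune for your scans:

```python
# Sketch: denoise a RealSense .ply point cloud and build a rough mesh with Open3D.
# File names and parameter values are placeholders to tune for your own scans.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

# Remove statistical outliers (stray noisy points far from their neighbors).
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Normals are required for Poisson surface reconstruction.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Reconstruct a rough surface; higher depth means more detail (and more noise).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```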

Postprocess depth image to get skeleton using the Kinect SDK / other tools?

The short question: I am wondering if the Kinect SDK / NiTE can be exploited to build depth-image-in, skeleton-out software.
The long question: I am trying to dump the depth, RGB, and skeleton data streams captured from a Kinect v2 into rosbags. However, to the best of my knowledge, capturing the skeleton stream on Linux with ROS and a Kinect v2 isn't possible yet. Therefore, I was wondering if I could dump rosbags containing the RGB and depth streams, and then post-process these to get the skeleton stream.
I can capture all three streams on Windows using the Microsoft Kinect v2 SDK, but then dumping them to rosbags, with all the metadata (camera_info, sync info, etc.), would be painful (correct me if I am wrong).
It's been quite some time since I worked with NiTE (and I only used Kinect v1), so maybe someone else can give a more up-to-date answer, but from what I remember, this should easily be possible.
As long as all relevant data is published via ROS topics, it is quite easy to record them with rosbag and play them back afterwards. Every node that can handle live data from the sensor will also be able to do the same work on recorded data coming from a bag file.
One issue you may encounter is that bag files containing Kinect data quickly become very large (several gigabytes). This can be problematic if you want to edit the file afterwards on a machine with very little RAM. If you only want to play the file back, or if you have enough RAM, this should not really be a problem, though.
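To illustrate the record-then-post-process idea, here is a minimal sketch using the rosbag Python API to iterate over recorded RGB and depth messages. The bag name and topic names are assumptions and depend on which Kinect v2 driver you record with (e.g. iai_kinect2):

```python
# Sketch: iterate over RGB/depth messages recorded in a bag for offline post-processing.
# The bag file name and topic names are assumptions; they depend on your Kinect v2 driver.
import rosbag

BAG_PATH = "kinect_session.bag"                 # hypothetical recording
TOPICS = ["/kinect2/qhd/image_color",           # example iai_kinect2-style topic names
          "/kinect2/qhd/image_depth_rect"]

with rosbag.Bag(BAG_PATH) as bag:
    for topic, msg, t in bag.read_messages(topics=TOPICS):
        # Each msg is a sensor_msgs/Image; hand it to your skeleton estimator here.
        print(t.to_sec(), topic, msg.width, msg.height, msg.encoding)
```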
Indeed it is possible to perform NiTE2 skeleton tracking on any depth-image stream.
Refer to:
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/How-to-use
and
https://github.com/VIML/VirtualDeviceForOpenNI2/wiki/About-PrimeSense-NiTE
With this extension one can add a virtual device which allows you to manipulate each pixel of the depth stream. This device can then be used to create a UserTracker object. As long as the right device name is provided, skeleton tracking can be done:
\OpenNI2\VirtualDevice\Kinect
But consider the usage limits:
NiTE is only allowed to be used with "Authorized Hardware".
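To give a rough idea of what the resulting tracking loop looks like, here is a sketch using the community primesense Python bindings for OpenNI2/NiTE2 (the C++ API is analogous). The wrapper names and the use of open_any() are assumptions and may differ in your installation; with the virtual-device extension you would open the device URI shown above instead:

```python
# Rough sketch: create a UserTracker on a depth device using the community
# `primesense` Python bindings for OpenNI2/NiTE2. Names are assumptions; compare
# with the official C++ samples for the authoritative API.
from primesense import openni2, nite2

openni2.initialize()
nite2.initialize()

# Open a device; with the virtual-device extension you would open the
# \OpenNI2\VirtualDevice\Kinect URI instead of a physical sensor.
dev = openni2.Device.open_any()
user_tracker = nite2.UserTracker(dev)

for _ in range(300):                      # process ~300 depth frames
    frame = user_tracker.read_frame()
    for user in frame.users:
        if user.is_new():
            # Ask NiTE to start estimating a skeleton for this user.
            user_tracker.start_skeleton_tracking(user.id)
        print("user", user.id, "skeleton state:", user.skeleton.state)

nite2.unload()
openni2.unload()
```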

Create skeleton data using depth files

I have a bunch of JPEG and raw depth files saved on disk using the Kinect SDK.
Is there a way to create the skeleton data (joint points) from these files with OpenNI?
If so how it could be done?
Thanks!!
OpenNI does not handle the skeleton tracking itself. Rather, it is done through the NiTE middleware layer that plugs into OpenNI. NiTE, and the algorithms that handle the skeleton generation, are closed source and not available for dissection.
I am not aware of an API call to push a raw image into the skeleton process to pull out the skeleton data. I'd bet that movement within the stream actually plays a part in the algorithm, making single-image processing very imprecise.