Invert a depth image - Blender

I have a depth image generated from a mesh using Blender.
I want to invert it and get the shape back in Blender. How can I do that?
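
No answer is shown for this question, but one common way to reconstruct relief geometry from a depth render is to drive a Displace modifier on a densely subdivided grid, using the depth image as a texture. A minimal bpy sketch, assuming the depth map was saved as a grayscale image at a placeholder path (flip the strength sign or adjust the mid level to match how your depth is encoded):

```python
import bpy

depth_path = "/tmp/depth.png"  # placeholder: path to the rendered depth image

# Load the depth image into an image texture
img = bpy.data.images.load(depth_path)
tex = bpy.data.textures.new("DepthTex", type='IMAGE')
tex.image = img

# A densely subdivided grid carries the displaced geometry (UVs are generated by default)
bpy.ops.mesh.primitive_grid_add(x_subdivisions=256, y_subdivisions=256, size=2)
grid = bpy.context.active_object

# Displace each vertex along its normal by the depth value sampled through the UVs
mod = grid.modifiers.new("DepthDisplace", type='DISPLACE')
mod.texture = tex
mod.texture_coords = 'UV'
mod.mid_level = 0.0
mod.strength = -1.0  # negative strength "inverts" the depth; scale to taste
```

Apply the modifier if you need real mesh data rather than a live modifier, and keep in mind this only recovers a height field seen from the camera, not any occluded geometry.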

Related

Blender remove UV coordinates from map

Suppose I created a sphere mesh, UV-unwrapped it, and created 1000 texture maps around those unwrapped coordinates. Now I realize that I want some parts of the sphere to be "untextured", with the option to texture them with another random texture. How would I remove the UV coordinates from the sphere so those parts don't get textured, or at least move them to another UV map without changing the position of the unwrapped coordinates?
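
No answer is included here either. One hedged bpy sketch of the "move them to another UV map" reading: duplicate the active UV map so the unwrap is preserved, then collapse the UVs of the faces selected in Edit Mode in the copy; a texture that should ignore those faces can sample the edited layer while the original layer keeps the untouched coordinates. The layer name below is a placeholder, and actually leaving faces untextured is usually done by assigning them a second material.

```python
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='OBJECT')  # selection flags are only up to date in Object Mode
me = obj.data

# do_init=True copies the active UV map, so the unwrapped positions are preserved
dst = me.uv_layers.new(name="UVMap_untextured", do_init=True)

# Collapse the UVs of the selected faces in the new layer only
for poly in me.polygons:
    if poly.select:
        for li in poly.loop_indices:
            dst.data[li].uv = (0.0, 0.0)
```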

What algorithm do I need to convert a 2D image file into a representative 2D triangle mesh file?

I am looking for some advice to point me in the direction of the algorithm I would need to convert an image file into a mesh. Note that I am not asking to convert from 2D into 3D - the output mesh is not required to have any depth.
By image file I mean a black-and-white image of a relatively simple shape, such as a stick figure, stored in an easy-to-read uncompressed bitmap file. The shape would have high contrast between the black and white areas so that an algorithm can detect its edges.
By the static mesh I mean the data needed to construct a typical indexed triangle mesh (a list of vertices and a list of indices) in a modern 3D game engine such as Unreal. The mesh would need to represent the shape of the image in 2D but is not required to have any 3D depth, i.e. zero thickness. It will ultimately be used in a 3D environment like a cardboard cut-out; for example, imagine it standing on a ground plane.
The conversion is not required to run in real time: it can be batch processed, with the resulting mesh data then read in by the game engine.
Thanks in advance.
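
No answer is included in this excerpt. One standard pipeline for this (a hedged sketch, not taken from the thread) is: threshold the bitmap, trace the shape's outline with OpenCV, simplify the contour, and run a constrained Delaunay triangulation over the resulting polygon. The snippet assumes the third-party `triangle` package (a Python wrapper around Shewchuk's Triangle) and a placeholder filename:

```python
import cv2
import numpy as np
import triangle  # third-party wrapper around Shewchuk's Triangle

# Placeholder input: white shape on a black background
img = cv2.imread("stick_figure.bmp", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Trace the outline of the largest blob
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)

# Simplify so the mesh is not one triangle per pixel step
outline = cv2.approxPolyDP(outline, epsilon=2.0, closed=True).reshape(-1, 2)

# Constrained Delaunay triangulation of the polygon interior ('p' = planar straight line graph)
n = len(outline)
segments = np.column_stack([np.arange(n), np.roll(np.arange(n), -1)])
mesh = triangle.triangulate({"vertices": outline.astype(float), "segments": segments}, "p")

vertices = mesh["vertices"]   # (V, 2) floats -> your vertex list (append z = 0 for the engine)
indices = mesh["triangles"]   # (T, 3) ints   -> your index list
```

For shapes with holes you would also pass hole markers to `triangulate`, and you may want to scale the pixel coordinates into engine units before exporting.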

Bounding box coordinates via video object detection

Is it possible to get the coordinates of a detected object through video object detection? I used OpenCV to split the video into images and LabelImg to add bounding boxes to them. For the output, I want to be able to read, for a video file, the coordinates of the bounding box so I can get the centre of the box.
Object detection works on a per-image (per-frame) basis.
A basic object detection model takes an image as input and returns the detected bounding boxes.
Once your model is trained, read the video frame by frame (or skip every n frames), pass each frame to the object detector, get the box coordinates back, and draw them onto the output video frame.
That is how object detection is applied to a video.
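A hedged OpenCV sketch of that loop; `detect()` is a placeholder for whatever trained model you use, and the filenames, frame rate, and skip interval are assumptions:

```python
import cv2

def detect(frame):
    """Placeholder: run your trained detector on one frame and return
    a list of (x1, y1, x2, y2) bounding boxes in pixel coordinates."""
    raise NotImplementedError

cap = cv2.VideoCapture("input.mp4")
writer = None
frame_skip = 5   # run the detector only every n-th frame
boxes = []
idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
    if idx % frame_skip == 0:
        boxes = detect(frame)
    for (x1, y1, x2, y2) in boxes:
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2   # centre of the box
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.circle(frame, (cx, cy), 3, (0, 0, 255), -1)
    writer.write(frame)
    idx += 1

cap.release()
writer.release()
```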
Please refer to the links below for reference:
https://github.com/tensorflow/models/tree/master/research/object_detection
https://www.edureka.co/blog/tensorflow-object-detection-tutorial/

convert RGBD image to a polygon mesh

I have seen this post on how to convert a depth image into a point cloud. What I need is to convert it into a PLY file with triangles and vertices (a full triangular mesh).
Is this even possible without any special algorithm?
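
It is possible without a sophisticated reconstruction algorithm: treat every pixel as a vertex, back-project it with the camera intrinsics, and split each 2x2 block of pixels into two triangles (plain grid triangulation). A hedged sketch that writes an ASCII PLY; the intrinsics, depth scale, and filenames are assumptions:

```python
import numpy as np
import cv2

# Assumed pinhole intrinsics and depth scale; adjust to your sensor or render settings
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth_scale = 1000.0  # depth stored in millimetres

depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / depth_scale
h, w = depth.shape

# Back-project every pixel to a 3D vertex (one vertex per pixel)
u, v = np.meshgrid(np.arange(w), np.arange(h))
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
verts = np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Grid triangulation: split each pixel quad into two triangles,
# skipping quads whose corners have no valid depth
idx = np.arange(h * w).reshape(h, w)
faces = []
for i in range(h - 1):
    for j in range(w - 1):
        a, b, c, d = idx[i, j], idx[i, j + 1], idx[i + 1, j], idx[i + 1, j + 1]
        if z[i, j] > 0 and z[i + 1, j + 1] > 0:
            if z[i, j + 1] > 0:
                faces.append((a, b, d))
            if z[i + 1, j] > 0:
                faces.append((a, d, c))

# Write an ASCII PLY with vertices and triangle faces
with open("mesh.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(verts)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write(f"element face {len(faces)}\nproperty list uchar int vertex_indices\nend_header\n")
    for p in verts:
        f.write(f"{p[0]} {p[1]} {p[2]}\n")
    for tri in faces:
        f.write(f"3 {tri[0]} {tri[1]} {tri[2]}\n")
```

Pixels with no depth are simply skipped, so silhouette edges stay open; a proper surface reconstruction (e.g. Poisson) would be needed for a watertight mesh.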

QSGGeometryNode depth (z) problems with 3 vertices

I am drawing 3D geometry (Point3D vertices) in a QML scene graph with a custom QSGGeometryNode and QSGTransformNode. This works, except that the 3D model is cut off at a certain z-coordinate (z is the depth axis in QML). At first I suspected the problem was due to intersection with the QML 2D plane, but I tried moving the model along the z axis and it always gets cut off (as if there were a local model frustum clipping plane).
What could be the source of this problem?
Unfortunately you can't "just" render 3D content inside the scene, as the scene graph will compress your Z values to make them honour proper stacking of the items.
If you have a 3D object, you may want to use QQuickFramebufferObject instead (see also this blog post).