How to export a reconstructed 3D model with texture using insight3d - blender

I am using the open-source structure-from-motion program insight3d to reconstruct a 3D model from a set of 2D images. I was able to export the resulting model as a VRML (.wrl) file, but the texture and texture coordinates were not included when I inspected the resulting file in Blender/MeshLab. Has anyone been successful in exporting a textured model with insight3d? Thanks!

Related

How do you export lighting and material as .obj or .glb in blender?

I have a model in Blender with lighting and materials (credit to Polygon Runway)
When I export this as an OBJ, I just get the geometry.
How do I bake(?) what I see in the shaded model into textures and automatically apply them to the model, so that when I export it as an OBJ with textures the result looks similar?
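One way to approach this is to bake with Blender's Python API and then export. Below is a minimal sketch, assuming the object is UV-unwrapped and currently active, and that Cycles is available; the image size, node type and output path are placeholders, not part of the original question:

import bpy

obj = bpy.context.active_object  # the model to bake (assumed selected and UV-unwrapped)

# Create an image that will receive the bake (resolution is a placeholder)
baked = bpy.data.images.new("baked_texture", width=2048, height=2048)

# Add an Image Texture node to every material and make it the active node,
# so the bake operator knows where to write the result
for slot in obj.material_slots:
    mat = slot.material
    if mat is None:
        continue
    mat.use_nodes = True
    node = mat.node_tree.nodes.new("ShaderNodeTexImage")
    node.image = baked
    mat.node_tree.nodes.active = node

# Baking is a Cycles feature; COMBINED bakes lighting and materials together
bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(type='COMBINED', use_clear=True)

# Save the baked texture next to the .blend file
baked.filepath_raw = "//baked_texture.png"
baked.file_format = 'PNG'
baked.save()

After baking, the exported OBJ/glTF material still has to reference baked_texture.png (for OBJ this ends up in the accompanying .mtl file), since those formats store simple texture references rather than Blender's node graph.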

What algorithm do I need to convert a 2D image file into a representative 2D triangle mesh file?

I am looking for some advice to point me in the direction of the algorithm I would need to convert an image file into a mesh. Note that I am not asking to convert from 2D into 3D - the output mesh is not required to have any depth.
By "image file" I mean a black-and-white image of a relatively simple shape, such as a stick figure, stored in an easy-to-read uncompressed bitmap file. The shape would have high contrast between the black and white areas to help an algorithm detect its edges.
By "mesh" I mean the data needed to construct a typical indexed triangle mesh (a list of vertices and a list of indices) in a modern 3D game engine such as Unreal. The mesh needs to represent the shape of the image in 2D but is not required to have any 3D depth of its own, i.e. zero thickness. It will ultimately be used in a 3D environment like a cardboard cut-out, for example standing on a ground plane.
This conversion is not required to run in real time - it can be batch processed, and the resulting mesh data is then read in by the game engine.
Thanks in advance.
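One possible pipeline, sketched below under the assumption that OpenCV (4.x) is used for contour extraction and the `triangle` package (a Python wrapper around Shewchuk's Triangle) is acceptable for the constrained triangulation; the file name and simplification tolerance are placeholders:

import cv2
import numpy as np
import triangle  # assumption: installed via `pip install triangle`

# Load the black-and-white bitmap and threshold it to a binary mask
img = cv2.imread("stick_figure.bmp", cv2.IMREAD_GRAYSCALE)  # placeholder file name
_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# Extract the outer contour of the shape (OpenCV 4.x return signature)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)

# Simplify the outline so the mesh is not one triangle per pixel
outline = cv2.approxPolyDP(contour, 2.0, True).reshape(-1, 2).astype(float)

# Build a closed loop of boundary segments (a planar straight-line graph)
n = len(outline)
segments = np.column_stack([np.arange(n), (np.arange(n) + 1) % n])

# Constrained triangulation of the polygon interior ('p' = triangulate a PSLG)
mesh = triangle.triangulate({"vertices": outline, "segments": segments}, "p")

vertices = mesh["vertices"]   # (V, 2) floats -> use as x/y, set z = 0 in the engine
indices = mesh["triangles"]   # (T, 3) ints  -> index buffer for the engine
print(len(vertices), "vertices,", len(indices), "triangles")

This sketch handles only the outer outline; shapes with holes would additionally need hole markers passed to the triangulator. The vertex and index arrays can then be written out in whatever batch format your Unreal import step reads.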

Grayscale input image for SSD detector in TensorFlow Detection API

I'm creating a dataset of images to train a detector using the TensorFlow Detection API (SSD/MobileNet).
My images are grayscale, but it seems the input should be an RGB image.
Do I need to convert the grayscale images to three-channel RGB by copying the single channel into the other two channels (and if so, is there software for doing this), or should the other two channels be left empty?
Best regards.
Yes, you have to convert your grayscale images to RGB images.
A possible solution is to use OpenCV:
import cv2
# suppose that gray_img is your single-channel grayscale image
rgb_img = cv2.cvtColor(gray_img, cv2.COLOR_GRAY2RGB)
Now you can use rgb_img as a valid input image for your model.

Why put the whole image in a tfrecord file? Why not just crop according to the bounding-box and put the cropped object in the tfrecord file?

Why do we put the whole image in a tfrecord file? Why not just crop the image according to the bounding-box and put the cropped object in the tfrecord file? This should greatly reduce the size of that file.
Because you want the network to learn where the object is in the image. In image classification, you would crop out the objects as you proposed and the network would output "car" or "not car". In object detection, the network outputs the bounding boxes for the objects along with their classes ("car is at x1-x2-y1-y2"). It learns this from the whole picture, with the bounding boxes feeding into the loss function.
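For illustration, here is a sketch of how one record typically pairs the full encoded image with its normalized boxes, following the feature-key convention used by the TensorFlow Object Detection API's dataset tools (treat the exact key names as an assumption to verify against your version):

import tensorflow as tf

def make_example(encoded_jpeg, width, height, boxes, labels):
    # boxes: list of (xmin, ymin, xmax, ymax) in absolute pixels; labels: list of int class ids
    feature = {
        "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpeg])),
        "image/format": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"jpeg"])),
        "image/height": tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        "image/width": tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        # box coordinates are stored normalized to [0, 1] relative to the full image
        "image/object/bbox/xmin": tf.train.Feature(float_list=tf.train.FloatList(value=[b[0] / width for b in boxes])),
        "image/object/bbox/ymin": tf.train.Feature(float_list=tf.train.FloatList(value=[b[1] / height for b in boxes])),
        "image/object/bbox/xmax": tf.train.Feature(float_list=tf.train.FloatList(value=[b[2] / width for b in boxes])),
        "image/object/bbox/ymax": tf.train.Feature(float_list=tf.train.FloatList(value=[b[3] / height for b in boxes])),
        "image/object/class/label": tf.train.Feature(int64_list=tf.train.Int64List(value=labels)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

Keeping the whole image with its boxes is also what lets the detector learn from the background around the objects; a pre-cropped object would lose that negative context.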

Get the 2d image that was used to render a specific face in the 3D model

I have a 3D cube model that is rendered with multiple images, similar to [this] tutorial.
When I click on a point in the 3D model, I want to get the image that corresponds to the face containing the selected point.
Is this possible?
Thanks