How to convert a colored point cloud into a textured mesh?

I have a .ply file which contains a colored point cloud:
I need to convert it to a textured mesh. I can create a blank mesh by doing:
Filters -> Point Set -> Surface Reconstruction: Poisson
But the result is a white mesh
It seems that all the information about the color gets lost. Any advice?
Thanks

If you want vertex colors use Filters -> Sampling -> Vertex Attribute Transfer, click the appropriate boxes to transfer color, and select the appropriate source and target meshes.
If you want a texture, you first need UV coordinates - the easiest and messiest way for an arbitrary mesh is Filters -> Texture -> Parameterization: Trivial per triangle, followed by Filters -> Texture -> Transfer: Vertex Attributes to Texture. This can cause seams to appear. For my messy meshes I use Smart UV Project in Blender and export the result as a .obj, then import it into MeshLab and use the aforementioned Transfer: Vertex Attributes to Texture filter.
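If you prefer to script the vertex-color route, a minimal pymeshlab sketch follows; the filter and keyword names used here are assumptions from one pymeshlab release (they change between versions), so verify them against your installation's filter list before relying on them.

    # Minimal pymeshlab sketch of the vertex-color route described above.
    # NOTE: the filter names and keyword arguments below are assumptions;
    # they differ between pymeshlab releases, so check your version's docs.
    import pymeshlab

    ms = pymeshlab.MeshSet()
    ms.load_new_mesh('colored_point_cloud.ply')    # layer 0: the colored points
    ms.apply_filter('surface_reconstruction_screened_poisson')  # layer 1: white mesh
    ms.apply_filter('vertex_attribute_transfer',
                    sourcemesh=0, targetmesh=1,
                    colortransfer=True)            # copy colors onto the mesh
    ms.save_current_mesh('colored_mesh.ply')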

Related

Blender remove UV coordinates from map

Suppose I created a sphere mesh, UV-unwrapped it, and created 1000 texture maps around those unwrapped coordinates. Now I realize that I want some parts of the sphere to be "untextured", with the option to texture them with another random texture. How would I remove the UV coordinates from the sphere so they don't get textured, or at least move them to another UV map without changing the position of the unwrapped coordinates?

What algorithm do I need to convert a 2D image file into a representative 2D triangle mesh file?

I am looking for some advice to point me in the direction of the algorithm I would need to convert an image file into a mesh. Note that I am not asking to convert from 2D into 3D - the output mesh is not required to have any depth.
By "image file" I mean a black and white image of a relatively simple shape, such as a stick figure, stored in a simple-to-read uncompressed bitmap file. The shape would have high contrast between the black and white areas of the image to help an algorithm detect its edges.
By "static mesh" I mean the data that can be used to construct a typical indexed triangle mesh (a list of vertices and a list of indices) in a modern 3D game engine such as Unreal. The mesh would need to represent the shape of the image in 2D but is not required to have any 3D depth in itself, i.e. zero thickness. The mesh will ultimately be used in a 3D environment like a cardboard cut-out shape; for example, imagine it standing on a ground plane.
This conversion is not required to work in any real-time environment - it can be batch processed, and the mesh data is then intended to be read in by the game engine.
Thanks in advance.

How to optimize a mesh which I manually aligned

I manually aligned a mesh to another mesh. The two meshes have different topologies. The resulting mesh does not have a flat and smooth surface, as illustrated in the first picture.
My question is whether there exist algorithms or tools, such as a function inside MeshLab or Blender, that can smooth and optimize my mesh.
This is my mesh.
And I want to optimize it such that it is smooth like this:
I don't see the relation between "I manually aligned a mesh" and "the resulting mesh does not have a flat and smooth surface". The aligned mesh should be similar to the input mesh.
Despite that, try applying the MeshLab filter Taubin Smooth to your mesh; it smooths the surface without introducing big deformations.
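For reference, the lambda/mu scheme behind Taubin Smooth is small enough to sketch directly. This is a minimal NumPy illustration of the idea under assumed (V, 3) vertex and (T, 3) face arrays, not MeshLab's actual implementation, and the default factors are just typical values.

    import numpy as np

    def taubin_smooth(vertices, faces, lam=0.5, mu=-0.53, iterations=10):
        """Alternate a shrinking Laplacian step (lam > 0) with an inflating
        step (mu < 0), smoothing the surface with little overall shrinkage."""
        v = vertices.astype(float).copy()

        # Build vertex adjacency from the triangle list.
        neighbors = [set() for _ in range(len(v))]
        for a, b, c in faces:
            neighbors[a].update((b, c))
            neighbors[b].update((a, c))
            neighbors[c].update((a, b))
        neighbors = [np.fromiter(s, dtype=int) for s in neighbors]

        def laplacian_step(pts, factor):
            out = pts.copy()
            for i, nbrs in enumerate(neighbors):
                if len(nbrs):
                    out[i] += factor * (pts[nbrs].mean(axis=0) - pts[i])
            return out

        for _ in range(iterations):
            v = laplacian_step(v, lam)   # smoothing pass
            v = laplacian_step(v, mu)    # inflating pass compensates shrinkage
        return v

    # usage: smoothed_vertices = taubin_smooth(verts, faces)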

How can I fill the circle when pressing on F in blender 2.83?

The tutorial uses Blender 2.82.
I added a circle, and when I switch to Edit Mode and press the F key, instead of filling the circle it gives me this error:
This is the circle in Object Mode:
Then switching to Edit Mode and pressing F:
In the tutorial, when he presses F, the result is:
I didn't see in the tutorial that he selected any points, or how to do it.
TLDR: use Add -> Mesh -> Circle, and always use mesh unless otherwise stated.
You added Curve -> Circle, but you should add Mesh -> Circle.
There are many different types of objects in Blender. Mesh and Curve are different and have different purposes:
A Mesh is the basic and most used type of geometry in Blender. It is made of vertices (points), edges (straight lines between points) and faces (at least 3 edges forming a plane).
A Curve is made from control points with handles, connected by a curve generated according to the handles and the type of each point. It is used to create some basic round shapes (like paths and motion guides) and can optionally be converted to a Mesh.
You can convert a curve to a mesh from the menu Object -> Convert to -> Mesh from Curve [...].
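If you want to do the same from Blender's Python console instead of the menus, a minimal sketch using standard bpy operators (the radius value is just an example) looks like this:

    import bpy

    # Add a *mesh* circle that is already filled with a single n-gon face
    # (fill_type can also be 'NOTHING' or 'TRIFAN').
    bpy.ops.mesh.primitive_circle_add(radius=1.0, fill_type='NGON')

    # If you already added a Curve circle, convert the active object to a
    # mesh first; then select its vertices in Edit Mode and press F to fill.
    # bpy.ops.object.convert(target='MESH')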
When adding a circle, choose it from Mesh, not Curve.
I don't know why either but that worked for me.

Can I specify per face normal in OpenGL ES and achieve non-smooth/flat shading?

I want to display mesh models in OpenGL ES 2.0 in a way that clearly shows the actual mesh, so I don't want smooth shading across each primitive/triangle. The only two options I can think of are:
Each triangle has its own set of normals, all perpendicular to the triangle's surface (but then I guess I can't share vertices among the triangles with this option)
Indicate triangle/primitive edges using black lines and stick to the normal way with shared vertices and one normal for each vertex
Does it have to be like this? Why can't I simply read in primitives without specifying any normals and somehow let OpenGL ES 2.0 produce flat shading on each face?
Similar Stack Overflow question, but with no suggested solution.
Because in order to have any shading on your mesh (smooth or flat), you need a lighting model, and OpenGL ES can't guess it. There is no fixed-function pipeline in GL ES 2, so you can't rely on any built-in function to do the job for you (using a built-in lighting model).
In flat shading, the whole triangle will be drawn with the same color, computed from the angle between its normal and the light source (Yes, you also need a light source, which could simply be the origin of the perspective view). This is why you need at least one normal per triangle.
Then, a GPU works in a very parallelized way, processing several vertices (and then fragments) at the same time. To be efficient, it can't share data among vertices. This is why you need to replicate normals for each vertex.
Also, your mesh can't share vertices among triangles anymore, as you said, because triangles share only the vertex position, not the vertex normal. So you need to put 3 * NbTriangles vertices in your buffer, each one having one position and one normal. You also lose the benefit of triangle strips/fans, because none of your faces will have a common vertex with another one (because, again, of different normals).
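To make that concrete, here is a minimal NumPy sketch of the unrolling described above; the function name and the interleaved [position, normal] layout are illustrative choices, not something OpenGL ES mandates.

    import numpy as np

    def unroll_flat_shaded(vertices, indices):
        """Expand an indexed triangle mesh into 3 * n_triangles unshared
        vertices, each carrying its face's normal, ready to upload as a
        single vertex buffer with no index buffer."""
        tris = vertices[indices]                     # (T, 3, 3) triangle corners
        # Per-face normal: normalized cross product of two edge vectors.
        n = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        normals = np.repeat(n, 3, axis=0)            # replicate onto 3 corners
        positions = tris.reshape(-1, 3)
        return np.hstack([positions, normals]).astype(np.float32)

    # Example: a quad (two triangles) becomes 6 vertices, none shared.
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    faces = np.array([[0, 1, 2], [0, 2, 3]])
    buffer_data = unroll_flat_shaded(verts, faces)   # shape (6, 6)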