I have seen this post on how to convert a depth image into a point cloud. What I need is to convert it into a PLY file with vertices and triangles (a full triangular mesh).
Is this even possible without any special algorithm?
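It is possible without any special reconstruction algorithm, because a depth image is already a regular grid: back-project every pixel to a 3D point and stitch each 2×2 block of pixels together with two triangles. Below is a minimal C++ sketch of that idea; the intrinsics (fx, fy, cx, cy), image size, and depth buffer are placeholders for your own data.

```cpp
// Minimal sketch: triangulate a depth image over its own pixel grid and write
// an ASCII PLY. Intrinsics and the depth buffer below are placeholders.
#include <cstdio>
#include <vector>

int main() {
    const int W = 640, H = 480;                                     // image size (placeholder)
    const float fx = 525.f, fy = 525.f, cx = W / 2.f, cy = H / 2.f; // hypothetical pinhole intrinsics
    std::vector<float> depth(W * H, 1.0f);                          // replace with real depth values

    // Back-project every pixel (u, v, depth) to a 3D vertex.
    std::vector<float> verts;
    for (int v = 0; v < H; ++v)
        for (int u = 0; u < W; ++u) {
            float z = depth[v * W + u];
            verts.push_back((u - cx) * z / fx);
            verts.push_back((v - cy) * z / fy);
            verts.push_back(z);
        }

    // Two triangles per 2x2 pixel quad; indices follow the row-major grid.
    std::vector<int> tris;
    for (int v = 0; v + 1 < H; ++v)
        for (int u = 0; u + 1 < W; ++u) {
            int i = v * W + u;
            tris.insert(tris.end(), { i, i + W, i + 1 });
            tris.insert(tris.end(), { i + 1, i + W, i + W + 1 });
        }

    // Plain ASCII PLY: header, vertex list, then face list.
    FILE* f = std::fopen("mesh.ply", "w");
    std::fprintf(f, "ply\nformat ascii 1.0\nelement vertex %d\n"
                    "property float x\nproperty float y\nproperty float z\n"
                    "element face %zu\nproperty list uchar int vertex_indices\nend_header\n",
                 W * H, tris.size() / 3);
    for (std::size_t i = 0; i < verts.size(); i += 3)
        std::fprintf(f, "%f %f %f\n", verts[i], verts[i + 1], verts[i + 2]);
    for (std::size_t i = 0; i < tris.size(); i += 3)
        std::fprintf(f, "3 %d %d %d\n", tris[i], tris[i + 1], tris[i + 2]);
    std::fclose(f);
}
```

In practice you would also skip triangles that touch invalid (zero-depth) pixels or that span large depth discontinuities, so the mesh doesn't bridge foreground and background.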
My 3D mesh contains T-vertices. I want to keep all vertices, but automatically subdivide the edges of triangles that run through/past such a vertex.
Here's an image showing triangles with T-vertices, and the result after the tessellation that I'm looking for.
I started implementing some code, but I really think this must already exist.
The question is whether your input mesh consists of plain triangles (in which case you can't have well-defined neighbors, because each edge can have several neighboring triangles) or of "triangle-shaped polygons".
If your input is plain triangles, MeshLab won't solve your problem.
If you have polygons, you can use the "convert mesh into pure triangles" filter.
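If you do have plain triangles and end up writing the subdivision yourself, here is a minimal brute-force sketch of one pass, assuming a simple vertex/index representation. All names and the tolerance are my own; it is O(triangles × vertices), so a spatial index would be needed for large meshes.

```cpp
// Sketch: split triangles whose edges pass through a T-vertex, assuming the
// mesh is a plain vertex/index soup. For every edge of every triangle, look
// for a vertex lying on the open segment and fan it to the opposite corner.
#include <cmath>
#include <vector>

struct V3 { float x, y, z; };

// Returns true if p lies on the open segment (a, b), within tolerance eps.
static bool onSegment(const V3& a, const V3& b, const V3& p, float eps = 1e-5f) {
    V3 ab{b.x - a.x, b.y - a.y, b.z - a.z}, ap{p.x - a.x, p.y - a.y, p.z - a.z};
    float lab2 = ab.x * ab.x + ab.y * ab.y + ab.z * ab.z;
    float t = (ap.x * ab.x + ap.y * ab.y + ap.z * ab.z) / lab2; // projection parameter
    if (t <= eps || t >= 1.f - eps) return false;               // endpoints don't count
    V3 q{a.x + t * ab.x - p.x, a.y + t * ab.y - p.y, a.z + t * ab.z - p.z};
    return q.x * q.x + q.y * q.y + q.z * q.z < eps * eps * lab2; // close enough to the edge
}

// One pass: returns a new triangle list with one T-vertex per triangle stitched
// in. Repeat until no triangle is split if an edge can carry several T-vertices.
std::vector<int> splitTVertices(const std::vector<V3>& verts, const std::vector<int>& tris) {
    std::vector<int> out;
    for (std::size_t t = 0; t < tris.size(); t += 3) {
        int i[3] = { tris[t], tris[t + 1], tris[t + 2] };
        bool split = false;
        for (int e = 0; e < 3 && !split; ++e) {               // edge (a,b), opposite corner c
            int a = i[e], b = i[(e + 1) % 3], c = i[(e + 2) % 3];
            for (int v = 0; v < (int)verts.size() && !split; ++v) {
                if (v == a || v == b || v == c) continue;
                if (onSegment(verts[a], verts[b], verts[v])) {
                    out.insert(out.end(), { a, v, c });       // fan around the T-vertex,
                    out.insert(out.end(), { v, b, c });       // preserving the winding
                    split = true;
                }
            }
        }
        if (!split) out.insert(out.end(), { i[0], i[1], i[2] });
    }
    return out;
}
```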
I am looking for some advice to point me in the direction of the algorithm I would need to convert an image file into a mesh. Note that I am not asking to convert from 2D into 3D - the output mesh is not required to have any depth.
By image file I mean a black-and-white image of a relatively simple shape, such as a stick figure, stored in a simple-to-read uncompressed bitmap file. The shape would have a high contrast between the black and white areas to help an algorithm detect its edges.
By the static mesh I mean the data that can be used to construct a typical indexed triangle mesh (a list of vertices and a list of indices) in a modern 3D game engine such as Unreal. The mesh would need to represent the shape of the image in 2D but is not required to have any 3D depth in itself, i.e. zero thickness. The mesh will ultimately be used in a 3D environment like a cardboard cut-out shape; for example, imagine it standing on a ground plane.
This conversion is not required to work in any real-time environment - it can be batch processed, with the mesh data then read in by the game engine.
Thanks in advance.
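For what it's worth, one batch-friendly pipeline is: threshold the bitmap, extract the outer contour, simplify it, and ear-clip the resulting polygon into an indexed triangle list (with z = 0 for every vertex). The C++/OpenCV sketch below assumes a single simple outline without holes; file names and thresholds are placeholders.

```cpp
// Sketch: turn a high-contrast bitmap into a flat indexed triangle mesh.
// Pipeline: threshold -> outer contour -> polygon simplification -> ear clipping.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// 2D cross product of (a - o) x (b - o); positive for a counter-clockwise turn.
static float cross2(const cv::Point2f& o, const cv::Point2f& a, const cv::Point2f& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

static bool pointInTri(const cv::Point2f& p, const cv::Point2f& a,
                       const cv::Point2f& b, const cv::Point2f& c) {
    float d1 = cross2(a, b, p), d2 = cross2(b, c, p), d3 = cross2(c, a, p);
    bool neg = d1 < 0 || d2 < 0 || d3 < 0, pos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(neg && pos); // inside (or on an edge) if all signs agree
}

// Ear-clipping triangulation over vertex indices; expects counter-clockwise winding.
std::vector<int> earClip(const std::vector<cv::Point2f>& poly) {
    std::vector<int> idx(poly.size()), tris;
    for (std::size_t i = 0; i < poly.size(); ++i) idx[i] = (int)i;
    while (idx.size() > 3) {
        bool clipped = false;
        for (std::size_t i = 0; i < idx.size() && !clipped; ++i) {
            int ia = idx[(i + idx.size() - 1) % idx.size()];
            int ib = idx[i], ic = idx[(i + 1) % idx.size()];
            if (cross2(poly[ia], poly[ib], poly[ic]) <= 0) continue; // reflex, not an ear
            bool ear = true;
            for (int j : idx)
                if (j != ia && j != ib && j != ic &&
                    pointInTri(poly[j], poly[ia], poly[ib], poly[ic])) { ear = false; break; }
            if (!ear) continue;
            tris.insert(tris.end(), { ia, ib, ic }); // clip the ear
            idx.erase(idx.begin() + i);
            clipped = true;
        }
        if (!clipped) break; // degenerate polygon; bail out
    }
    if (idx.size() == 3) tris.insert(tris.end(), { idx[0], idx[1], idx[2] });
    return tris;
}

int main() {
    cv::Mat img = cv::imread("shape.bmp", cv::IMREAD_GRAYSCALE), bin;
    cv::threshold(img, bin, 128, 255, cv::THRESH_BINARY);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point> approx;                    // simplify the outline
    cv::approxPolyDP(contours[0], approx, 2.0, true); // assumes one contour was found
    std::vector<cv::Point2f> poly(approx.begin(), approx.end());
    if (cv::contourArea(approx, true) < 0)            // enforce counter-clockwise winding
        std::reverse(poly.begin(), poly.end());

    std::vector<int> indices = earClip(poly);         // poly + indices = the engine mesh data
}
```

The points in poly plus the index list are exactly the vertex/index data an engine like Unreal expects for an indexed triangle mesh.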
I have a Poliigon Texture Demo C4D file. The file includes a sphere with a texture that renders correctly (bottom sphere in image). However, when I create a sphere (top sphere in image), convert it to a polygonal object, and apply the same texture, the texture is stretched horizontally.
I can fix this by changing the "Length U" setting to 50% in the Texture Tag, but I notice that the sphere below does not need this modification, so I was wondering how to convert the top sphere to a polygonal object the same way the bottom sphere was.
Cinema 4D example
I have included a screengrab. The only notable difference is that the sphere below has additional diagonal division.
I am quite new to 3D so hope this all makes sense.
I think you only need to change the sphere's Type to a triangular type, like the sphere at the bottom.
If this helps, please consider up-voting and marking your question as solved.
I would like to plot a 2D vector field in a single picture using the hue & brightness method, i.e., hue maps to direction (or, say, phase) and brightness to magnitude.
Such a method is often used to visualize, e.g., magnetic domains and vortices reconstructed from Lorentz microscopy.
As input, I have two images of size 1024×1024 whose pixels contain the magnitudes of the X and Y components of the vector field.
Since DM does not natively support an HSL color scheme, I think one should first use a group of self-defined functions to convert HSL to RGB...
You can only use RGB images in DigitalMicrograph, so you will have to do the conversion from HSB to RGB in your script code and then create the corresponding RGB image.
Luckily, there is a demonstration script on the Gatan script resources webpage which does exactly that! You can basically use the script as it is shown there.
Gatan Script Resources
Link to script-file:
Display as HSB
Note that the script uses complex images as input - just as a convenient container to combine two images into a single one. The test function demonstrates this, though.
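If you end up writing the conversion yourself instead of porting the demo script, the per-pixel mapping is short. Here is a hedged C++ sketch of the same idea (hue from the atan2 of the components, brightness from the normalized magnitude, saturation fixed at 1); the function names are my own, and the math carries over to DM script directly.

```cpp
// Sketch of the per-pixel mapping: direction -> hue, magnitude -> brightness,
// saturation fixed at 1. The HSB -> RGB formula is the standard one.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB { std::uint8_t r, g, b; };

// Standard HSB (a.k.a. HSV) -> RGB conversion; h in [0, 360), s and v in [0, 1].
static RGB hsb2rgb(float h, float s, float v) {
    float c = v * s;
    float x = c * (1.f - std::fabs(std::fmod(h / 60.f, 2.f) - 1.f));
    float m = v - c, r = 0, g = 0, b = 0;
    if      (h <  60) { r = c; g = x; }
    else if (h < 120) { r = x; g = c; }
    else if (h < 180) { g = c; b = x; }
    else if (h < 240) { g = x; b = c; }
    else if (h < 300) { r = x; b = c; }
    else              { r = c; b = x; }
    return { std::uint8_t((r + m) * 255), std::uint8_t((g + m) * 255),
             std::uint8_t((b + m) * 255) };
}

// fx, fy: X and Y components of the field, row-major, both of size w * h.
std::vector<RGB> fieldToImage(const std::vector<float>& fx,
                              const std::vector<float>& fy, int w, int h) {
    float maxMag = 1e-20f; // avoids division by zero for an all-zero field
    for (std::size_t i = 0; i < fx.size(); ++i)
        maxMag = std::max(maxMag, std::hypot(fx[i], fy[i]));
    std::vector<RGB> out(static_cast<std::size_t>(w) * h);
    for (std::size_t i = 0; i < out.size(); ++i) {
        float hue = std::atan2(fy[i], fx[i]) * 180.f / 3.14159265f; // (-180, 180]
        if (hue < 0) hue += 360.f;                                  // wrap to [0, 360)
        out[i] = hsb2rgb(hue, 1.f, std::hypot(fx[i], fy[i]) / maxMag);
    }
    return out;
}
```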
I have created an experimental fast rectangular-object tracking system; it will be used for head tracking and for controlling objects in a 3D engine (Ogre3D).
For now I am able to show the webcam any kind of brightly colored rectangle (text markers are good objects), and the system registers the basic properties of this object (hue/value/lightness and the initial width and height at 0 degrees rotation).
After I have registered the trackable object, I do some simple frame processing to create a grayscale probability map.
So now I have 2 known things:
1) 4 corners for the last object position (it's always a rectangle but it may be rotated)
2) a pretty rectangular (but still far from perfect) blob which is the brightest region in the frame. I can get the coordinates of any point of the blob without problems; point detection is stable enough.
I can find a bounding rectangle of the object without problems, but I have a problem with detecting the object corners themselves.
I need the simplest possible (quick & dirty would be great) algorithm to scan the image starting from some known coordinates (a point inside the blob) and detect the 4 new (x, y) coordinates of the corners of the "blobish" rectangle (not the corners of the bounding box, but the corners of the rectangular blob itself).
A ready-to-use C++ function would be awesome, but somehow Google doesn't like me today :(
I think it would be overkill to use some complicated function from the OpenCV library just to extract the 4 corner points of a single rectangular blob. But if you know a quick and efficient way to do it using OpenCV (it must be real-time and light on the CPU, because I'll be running the 3D engine at the same time), then I would be really grateful.
You can apply a Hough transform to the segmented image to detect lines. Using the detected lines, you can calculate their intersections to find the corner coordinates of the blob.
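For illustration, a sketch of that pipeline with OpenCV in C++; the file name and all thresholds are placeholders to tune. cv::HoughLines returns lines as (rho, theta) pairs, and two lines x·cosθ + y·sinθ = ρ intersect where the corresponding 2×2 linear system is solved.

```cpp
// Sketch: detect the blob's edge lines with a Hough transform and intersect
// them pairwise to recover candidate corners. Parameters are placeholders.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main() {
    cv::Mat prob = cv::imread("probability.png", cv::IMREAD_GRAYSCALE), bin, edges;
    cv::threshold(prob, bin, 200, 255, cv::THRESH_BINARY); // keep the bright blob
    cv::Canny(bin, edges, 50, 150);

    std::vector<cv::Vec2f> lines;                 // each line is (rho, theta)
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 60);

    // Intersect every pair of sufficiently non-parallel lines:
    // solve x*cos(t) + y*sin(t) = rho for both lines (Cramer's rule).
    std::vector<cv::Point2f> corners;
    for (std::size_t i = 0; i < lines.size(); ++i)
        for (std::size_t j = i + 1; j < lines.size(); ++j) {
            float t1 = lines[i][1], t2 = lines[j][1];
            if (std::fabs(std::sin(t1 - t2)) < 0.5f) continue; // near-parallel pair
            float r1 = lines[i][0], r2 = lines[j][0];
            float det = std::cos(t1) * std::sin(t2) - std::sin(t1) * std::cos(t2);
            corners.emplace_back((r1 * std::sin(t2) - r2 * std::sin(t1)) / det,
                                 (r2 * std::cos(t1) - r1 * std::cos(t2)) / det);
        }
    // 'corners' now holds candidate corner points; cluster/average nearby ones
    // and keep the four that fall inside the blob.
}
```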