I've managed to implement the Marching Cubes algorithm in C#. So far I've only used it to render a sphere, which is an easy case because the density function is not very complex to code.
Now I want to take the algorithm further and render some interesting terrains for games, so I need proper density functions for that.
The first thing that comes to mind is volumetric Perlin noise. That's OK, but I'm looking for terrain without overhangs; I mean, no caves and similar geometry for the moment.
OK, I know that a simple heightmap can do that job, but I want a voxel-generated terrain. What kind of density function or pseudocode would I need to implement it?
You can easily convert a heightmap into voxel terrain. Each pixel in your heightmap corresponds to a column of voxels in your voxel world. For a given pixel, read the height from the heightmap, then iterate over each voxel in the corresponding column and set it to 'solid' if it lies below that height or 'empty' if it lies above it.
Here is some sample code using the PolyVox library.
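Roughly, it looks like the sketch below. This assumes PolyVox's 0.2-style SimpleVolume/setVoxelAt API (adjust the names for your version), and the heightmapAt function is just a stand-in for however you read your heightmap image.

```cpp
#include "PolyVoxCore/SimpleVolume.h"
#include <cmath>
#include <cstdint>

// Stand-in for reading the heightmap pixel at (x, z); replace with your image lookup.
int heightmapAt(int x, int z)
{
    return static_cast<int>(20 + 10 * std::sin(x * 0.1f) * std::cos(z * 0.1f));
}

void fillVolumeFromHeightmap(PolyVox::SimpleVolume<uint8_t>& volume)
{
    PolyVox::Region region = volume.getEnclosingRegion();
    PolyVox::Vector3DInt32 lower = region.getLowerCorner();
    PolyVox::Vector3DInt32 upper = region.getUpperCorner();

    for (int x = lower.getX(); x <= upper.getX(); ++x)
    {
        for (int z = lower.getZ(); z <= upper.getZ(); ++z)
        {
            int terrainHeight = heightmapAt(x, z);   // column height for this pixel
            for (int y = lower.getY(); y <= upper.getY(); ++y)
            {
                // Solid below the terrain height, empty above it.
                uint8_t voxel = (y < terrainHeight) ? 255 : 0;
                volume.setVoxelAt(x, y, z, voxel);
            }
        }
    }
}
```

Once the volume is filled this way, you can run your Marching Cubes (or PolyVox's surface extractor) over it as usual.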
I am attempting to come up with a quick and efficient means of translating a 3d mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity. I understand it has something to do with tan, but frankly it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2d coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader informing where to complete certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It also could be rewritten to use polygons, but I am trying to keep calculations to a minimum in the shader.
Certain solutions I have tried before have worked, somewhat, but this must be robust. The camera interacting with the 3d object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3d model, with points both in front and behind.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
I am trying to use MATLAB's camera calibrator to calibrate an infrared camera. I was able to get the intrinsic matrix by just feeding around 100 images to the calibrator. But I'm struggling with how to get the extrinsic matrix [R|t].
Because the extrinsic matrix maps the world frame to the camera frame, in theory, when the camera (or object) is moving there will be many extrinsic matrices.
In the picture below, if the intrinsic matrix is determined using 50 images, then there are 50 extrinsic matrices, one corresponding to each image. Am I correct?
You are right. Usually, a by-product of an intrinsic calibration is the extrinsic matrix for each pattern observed; this is mostly used to draw the patterns with respect to the camera as in the picture you posted.
What you usually do afterwards is define some external reference frame that makes sense for your application, also known as the 'world' reference frame, and compute the pose of the camera with respect to it. That's the extrinsic matrix you always hear about.
For this, you:
Define the reference frame and take some points with known 3D coordinates on it; this can be a grid drawn on the floor, for example.
Take a picture of the 3D points with the calibrated camera and get a list of the corresponding 2D (image) coordinates of the points.
Use a pose estimation function that takes the camera intrinsic parameters, the 3D points, and the corresponding 2D image points. I am more familiar with OpenCV, but the Matlab function that seems to do the job is: https://www.mathworks.com/help/vision/ref/estimateworldcamerapose.html
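In OpenCV the equivalent step looks roughly like this. It is only a sketch: the 3D/2D point values and the intrinsic matrix below are placeholder numbers, and in practice you would use many more correspondences.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

int main()
{
    // 3D points in your chosen 'world' frame (e.g. corners of a grid drawn on the floor).
    std::vector<cv::Point3f> objectPoints = { {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0} };
    // Their measured 2D image coordinates in the picture taken with the calibrated camera.
    std::vector<cv::Point2f> imagePoints  = { {320, 240}, {400, 245}, {405, 320}, {318, 322} };

    // Intrinsics from your calibration step (placeholder values here).
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                             0, 800, 240,
                                             0,   0,   1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    cv::Mat rvec, tvec;   // rotation (Rodrigues vector) and translation
    cv::solvePnP(objectPoints, imagePoints, K, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);   // convert to a 3x3 rotation matrix
    // [R | tvec] is the extrinsic matrix: it maps world coordinates into the camera frame.
    return 0;
}
```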
I'm using CGAL 2D Delaunay triangulation to define a terrain. I can't use the terrain class because my triangulation has constraints and they can't be used on terrain or 3D triangulations. (That's what I see so far, since there are no terrain properties or 3D triangulation classes). Due to the constraints I'm using the make_conforming_delaunay_2 function to refine the triangulation. I have a problem when using this function. Everything is compiling and running OK, but the problem is with the results:
The function is inserting some points outside of any existing triangle face. Is this correct?
Since it is a terrain, I need the elevation of these inserted points. Is there any way to make CGAL tell me which triangle face these inserted points fall in, so that I can calculate their elevation? I expected the points to be only inside existing triangle faces.
Is there any way, even in a 2D triangulation, to use 3D points? (So that the interpolated points would come with the elevation already calculated.)
You can use the class CGAL::Projection_traits_xy_3 like in this example.
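A minimal sketch of that approach is below; the point values are placeholders. The traits class projects the points onto the xy-plane for the triangulation predicates, while each vertex keeps its original z, so the elevation travels with the triangulation (you may also be able to use the same traits with the constrained triangulation classes, but check the traits requirements for your configuration).

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Projection_traits_xy_3.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Projection_traits_xy_3<K>                     Gt;       // triangulate in xy, keep z
typedef CGAL::Delaunay_triangulation_2<Gt>                  Delaunay;
typedef K::Point_3                                          Point;

int main()
{
    // Terrain samples: (x, y, elevation). Placeholder values.
    std::vector<Point> points = { Point(0, 0, 10), Point(1, 0, 12),
                                  Point(0, 1,  9), Point(1, 1, 11) };

    Delaunay dt(points.begin(), points.end());
    // Each vertex of dt stores the full 3D point, so dt.finite_vertices carry their elevation.
    return 0;
}
```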
I'm creating heightmaps using Fractal Brownian Motion. I'm then coloring it based on the heights and mapping it to a sphere. My problem is that the heightmap doesn't wrap seamlessly. I've used the Diamond Square algorithm and it's pretty easy to make things seamless using it, but I can't seem to figure out how to do it with fBm and I seem to be having trouble finding an explanation for it on the web.
To clarify, by "seamless", I mean that when I map it to a sphere, it creates a seamless map on the sphere.
Instead of calculating the height per pixel of the heightmap, calculate it in 3D space based on each point on the sphere and then map that value to an image pixel. You're going to have trouble wrapping a 2D, rectangular heightmap like that onto a sphere without getting ugly results at the poles unless you start your calculations from the sphere.
fBm generalizes to 3 dimensions, so given a point on the sphere you can get the height at that point, and then you can do the math to map that value to where it should be stored in the heightmap image.
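A rough sketch of that idea, filling an equirectangular heightmap by sampling on the unit sphere. The fbm3 function here is only a trivial stand-in so the example compiles; swap in your real 3D fBm (a sum of noise octaves).

```cpp
#include <cmath>
#include <vector>

// Stand-in for a real 3D fBm function; replace with your octave-summed 3D noise.
float fbm3(float x, float y, float z)
{
    return 0.5f * std::sin(3.0f * x) + 0.3f * std::sin(5.0f * y) + 0.2f * std::sin(7.0f * z);
}

// Fill an equirectangular heightmap by evaluating the noise at points on the unit sphere.
std::vector<float> buildHeightmap(int width, int height)
{
    std::vector<float> heights(width * height);
    const float PI = 3.14159265358979f;

    for (int j = 0; j < height; ++j)
    {
        float lat = PI * (float(j) / height - 0.5f);      // latitude: -pi/2 .. pi/2
        for (int i = 0; i < width; ++i)
        {
            float lon = 2.0f * PI * float(i) / width;      // longitude: 0 .. 2*pi

            // Point on the unit sphere corresponding to this pixel.
            float x = std::cos(lat) * std::cos(lon);
            float y = std::sin(lat);
            float z = std::cos(lat) * std::sin(lon);

            heights[j * width + i] = fbm3(x, y, z);
        }
    }
    return heights;
}
```

Because neighbouring pixels across the left/right edge of the image correspond to neighbouring points on the sphere, the result wraps seamlessly by construction.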
Or you could use one of the traditional map projections. A cylindrical projection (x, y) -> (x, sin y) would give you a seam along just one meridian, which you could rotate to the back. Or you could "antialias" the edge by one means or another.
With a stereographic projection (x,y,z)->(x/(z+1),y/(z+1)), there's only one sour point (the projection point itself).
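For reference, both mappings are only a couple of lines each; this sketch assumes the input is a point on the unit sphere (for the cylindrical case, a longitude/latitude pair).

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Stereographic projection from the point (0, 0, -1); only that single point has no image.
Vec2 stereographic(const Vec3& p)
{
    return { p.x / (p.z + 1.0f), p.y / (p.z + 1.0f) };
}

// Simple cylindrical projection: longitude is kept, latitude is mapped through sin,
// so the only seam is a single meridian.
Vec2 cylindrical(float longitude, float latitude)
{
    return { longitude, std::sin(latitude) };
}
```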
I'm using a 3d engine and need to translate between 3d world space and 2d screen space using perspective projection, so I can place 2d text labels on items in 3d space.
I've seen a few posts of various answers to this problem but they seem to use components I don't have.
I have a Camera object, and I can only set its current position and lookat position; it cannot roll. The camera is moving along a path, and a certain target object may appear in its view and then disappear.
I have only the following values:
lookat position
position
vertical FOV
Z far
Z near
and obviously the position of the target object.
Can anyone please give me an algorithm that will do this using just these components?
Many thanks.
All graphics engines use matrices to transform between different coordinate systems. Indeed, OpenGL and DirectX use them, because they are the standard way to do it.
Cameras usually construct the matrices using the parameters you have:
View matrix (transforms the world so that you are looking at it from the camera position); it uses the lookat position and the camera position (plus the up vector, which is usually 0,1,0).
Projection matrix (transforms from 3D coordinates to 2D coordinates); it uses the FOV, near, far, and aspect ratio.
You can find information on how to construct these matrices on the internet by searching for the OpenGL functions that create them:
gluLookAt: creates the view matrix
gluPerspective: creates the projection matrix
But I can't imagine an engine that doesn't let you get these matrices; I can assure you they are there somewhere, because the engine is using them.
Once you have those matrices, you multiply them to get the view-projection matrix. This matrix transforms from world coordinates to clip coordinates. So just multiply that matrix by the position you want to know (as a 4-component vector, with the 4th component set to 1.0).
But wait: the result will be in homogeneous coordinates. You need to divide the X, Y, Z of the resulting vector by W, and then you have the position in normalized device coordinates (0 means the center, 1 means right, -1 means left, etc.).
From here it is easy to get pixel coordinates by scaling by the screen width and height.
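Putting the whole pipeline together, here is a minimal sketch. It uses the GLM math library purely as an example (GLM is not something your engine necessarily exposes); any vector/matrix types with lookAt/perspective equivalents would work the same way.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Returns true and fills 'pixel' if the world position is in front of the camera.
bool worldToScreen(const glm::vec3& worldPos,
                   const glm::vec3& cameraPos, const glm::vec3& lookAtPos,
                   float verticalFovDegrees, float zNear, float zFar,
                   float screenWidth, float screenHeight,
                   glm::vec2& pixel)
{
    // View matrix from camera position and lookat position (up vector assumed 0,1,0).
    glm::mat4 view = glm::lookAt(cameraPos, lookAtPos, glm::vec3(0.0f, 1.0f, 0.0f));
    // Projection matrix from vertical FOV, aspect ratio, and near/far planes.
    glm::mat4 proj = glm::perspective(glm::radians(verticalFovDegrees),
                                      screenWidth / screenHeight, zNear, zFar);

    glm::vec4 clip = proj * view * glm::vec4(worldPos, 1.0f);   // world -> clip space
    if (clip.w <= 0.0f)
        return false;                                           // point is behind the camera

    glm::vec3 ndc = glm::vec3(clip) / clip.w;                   // perspective divide -> [-1, 1]

    pixel.x = (ndc.x * 0.5f + 0.5f) * screenWidth;              // NDC -> pixels
    pixel.y = (1.0f - (ndc.y * 0.5f + 0.5f)) * screenHeight;    // flip y: screen origin top-left
    return true;
}
```

The behind-the-camera check matters in your case, since the camera moves along a path and the target can leave the view; otherwise points behind you would project to bogus on-screen positions.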
I have some slides explaining all this here: https://docs.google.com/presentation/d/13crrSCPonJcxAjGaS5HJOat3MpE0lmEtqxeVr4tVLDs/present?slide=id.i0
Good luck :)
P.S.: when you work with 3D, it is really important to understand the three matrices (model, view, and projection); otherwise you will stumble every time.
"so I can place 2d text labels on items in 3d space"
Have you looked up "billboard" techniques? Sometimes just knowing the right term to search under is all you need. This refers to polygons (typically rectangles) that always face the camera, regardless of camera position or orientation.