I can wrap my head around using a 2D Perlin noise function to generate the height value but I don't understand why a 3D Perlin noise function would be used. In Notch's blog, he mentioned using a 3D Perlin noise function for the terrain generation on Minecraft. Does anyone know how that would be done and why it would be useful? If you are passing x, y, and z values doesn't that imply you already have the height?
The article says exactly why he used 3D noise:
I used a 2D Perlin noise heightmap... ...but the disadvantage of being rather dull. Specifically, there's no way for this method to generate any overhangs. So I switched the system over into a similar system based off 3D Perlin noise. Instead of sampling the "ground height", I treated the noise value as the "density", where anything lower than 0 would be air, and anything higher than or equal to 0 would be ground.
Well, Minecraft is about Mines. So, what Notch tried to solve was: "How do I get holes / overhangs in my world?"
Since 2D Perlin noise generates nice, smooth-looking hills, 3D Perlin noise will generate nice, smooth hills plus nice holes in your 3D voxel grid.
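Concretely, the density idea can be sketched in a few lines of Python. Note that `fake_noise3` below is a hypothetical stand-in for a real 3D Perlin noise function; any smooth function returning values in roughly [-1, 1] is enough to illustrate the thresholding:

```python
import math

def fake_noise3(x, y, z):
    # Placeholder for a real 3D Perlin noise function: just a smooth
    # function of (x, y, z) returning values in roughly [-1, 1].
    return math.sin(0.3 * x) * math.cos(0.3 * y) - 0.05 * z

def generate_chunk(size_x, size_y, size_z):
    # Treat the noise value as density: >= 0 is ground, < 0 is air.
    # Because density is sampled per voxel (not per column, as with a
    # 2D heightmap), caves and overhangs emerge naturally wherever the
    # sign flips along the vertical axis.
    chunk = {}
    for x in range(size_x):
        for y in range(size_y):
            for z in range(size_z):
                chunk[(x, y, z)] = fake_noise3(x, y, z) >= 0.0
    return chunk

chunk = generate_chunk(8, 8, 8)
```

With a real Perlin implementation you would also scale the input coordinates (frequency) and bias the density by height so that terrain becomes denser toward the bottom of the world.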
An implementation can be found here (though that one is an N-dimensional solution).
In other use cases, the z component of a 3D Perlin noise function is set to the current time. This gives you a smooth transition between successive 2D noise slices, which can serve as the groundwork for animated fluid textures.
You should look at the Minetest source, specifically at the files noise.cpp and map.cpp.
If you are still confused: I actually had the same question, figured it out, and made what may be the only tutorial video on the subject on YouTube!
My Video Explaining 3D Perlin Noise:
https://youtu.be/plLVPJJCL8w
Related
I'm working on a project to detect the position and orientation of a paper plane.
To collect the data, I'm using an Intel Realsense D435, which gives me accurate, clean depth data to work with.
Now I have arrived at the problem of detecting the 2D paper plane silhouette in the 3D point cloud data.
Here is an example of the data (I put the plane on a stick for testing, this will not be in the final implementation):
https://i.stack.imgur.com/EHaEr.gif
Basically, I have:
A 3D point cloud with points on the plane
A 2D shape of the plane
I would like to calculate what rotations/translations are needed to align the 2D shape to the 3D point cloud as accurately as possible.
I've searched online, but couldn't find a good way to do it. One way would be to use Iterative Closest Point (ICP) to first take a calibration pointcloud of the plane in a known orientation, and align it with the current orientation. But from what I've heard, ICP doesn't perform well if the pointclouds aren't kind of already closely aligned at the start.
Any help is appreciated! Coding language doesn't matter.
Does your 3d point cloud have outliers? How many in what way?
How did you use ICP exactly?
One way would be to use ICP with a hand-crafted initial guess, applied via
pcl::transformPointCloud (*cloud_in, *cloud_icp, transformation_matrix);
(to mitigate the problem that ICP needs a close starting alignment to work).
What you actually want is the plane model that describes the position and orientation of your point cloud, right?
A good estimator of the underlying plane can be found with PCL's RANSAC facilities, e.g. pcl::SACSegmentation configured with a SACMODEL_PLANE model.
You can then retrieve the computed model coefficients.
Now finding the correct transformation is just: How to calculate transformation matrix from one plane to another?
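As a sketch of that last step (illustrative pure Python, not PCL code): once you have the plane's normal from the model coefficients, Rodrigues' rotation formula gives the matrix that rotates it onto a target normal. Here the normal is taken from three exact points for simplicity; in practice it comes from the RANSAC fit:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(c*c for c in v))
    return tuple(c / n for c in v)

def plane_normal(p0, p1, p2):
    # Unit normal of the plane through three non-collinear points.
    u = tuple(b - a for a, b in zip(p0, p1))
    w = tuple(b - a for a, b in zip(p0, p2))
    return normalize(cross(u, w))

def rotation_between(n_from, n_to):
    # Rodrigues' formula: 3x3 rotation matrix taking unit vector
    # n_from onto unit vector n_to (R = I + K + K^2 / (1 + c)).
    v = cross(n_from, n_to)
    c = sum(a*b for a, b in zip(n_from, n_to))
    if abs(1.0 + c) < 1e-9:        # opposite vectors: 180-degree turn,
        raise ValueError("degenerate case, pick an explicit axis")
    k = 1.0 / (1.0 + c)
    x, y, z = v
    return [
        [c + k*x*x,   k*x*y - z,  k*x*z + y],
        [k*x*y + z,   c + k*y*y,  k*y*z - x],
        [k*x*z - y,   k*y*z + x,  c + k*z*z],
    ]
```

The translation part of the final transform is then just the offset that moves the plane's centroid onto the target position.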
This is again a question about the CGAL 3D surface mesher.
http://doc.cgal.org/latest/Surface_mesher/index.html#Chapter_3D_Surface_Mesh_Generation
With the definition
Surface_3 surface(sphere_function, // pointer to function
Sphere_3(CGAL::ORIGIN, 64.0)); // bounding sphere
(as also given in the example code) I define an implicit surface given by sphere_function and a Sphere_3 of radius 8 (the second constructor argument is the squared radius).
The difference now is that the zeros of sphere_function are (contrary to its now misleading name) no longer bounded and inside the Sphere_3. Instead, sphere_function represents an unbounded surface (think of x^2 + y^2 - z^2 - 1 = 0), and my intention is to triangulate the part of it that lies inside the Sphere_3.
In my examples so far this worked quite well, apart from one annoying problem I do not know how to overcome: the boundaries where the implicit surface meets the sphere are rough and jagged to a more than acceptable degree.
I already tried the 'Manifold_with_boundary_tag()', but it gave no improvements.
One road to improving the output that I am contemplating is converting the triangulated mesh (a C2t3) into a Polyhedron_3, then into a Nef_polyhedron, and intersecting that with a Nef_polyhedron closely approximating a slightly smaller sphere. But this seems a bit like shooting sparrows with cannons; nevertheless, I currently have no better idea, and googling gave me no hint either. So my question: what can be done about this problem? Can it be done with CGAL (and moderate programming effort), or is it necessary or better to use another system?
(Just to explain what I need this for: I am trying to develop a program that constructs 3D-printable models of algebraic surfaces, and a triangulation that is smooth, including at the boundaries, is the last missing step before I can hand the surface over to OpenSCAD to generate a solid body of constant thickness.)
The only solution I see is to use the 3D Mesh Generation with sharp feature preservation and no criteria on the cells. You will have to provide the intersection of the bounding sphere with the surface yourself.
There is one example with two intersecting spheres in the user manual.
I figured someone probably asked this question before but I wasn't able to find an answer.
I'm writing a physics library for my game engine (2d, currently in actionscript3, but easily translatable to C based languages).
I'm having problems finding a good formula to calculate the inertia of my game objects.
The thing is, there are plenty of proven formulas to calculate inertia around a centroid of a convex polygon, but my structure is slightly different: I have game-objects with their own local space. You can add convex shapes such as circles and convex polygons to this local space to form complex objects. The shapes themselves again have their own local space. So there are three layers: World, object & shape space.
I would have no problem calculating the inertia of each individual polygon in a shape with the formulas provided in the moment of inertia Wikipedia article, or the ones provided in an awesome collision detection & response article.
But I'm wondering how to relate this to my object structure: do I simply add up the inertias of the object's shapes? That's what another writer does to calculate the inertia of triangulated polygons: he adds the moments of inertia of the triangles. Or is there more to it?
I find this whole inertia concept quite difficult to understand, as I don't have a strong physics background. So if anyone could provide me with an answer, preferably with the logic behind inertia around a given centroid, I would be very thankful. I actually study IT - Game Development at my university, but to my great frustration none of the teachers there are experienced in the area of physics.
Laurens, the physics is much simpler if you stay in two dimensional space. In 2D space, rotations are described by a scalar, resistance to rotation (moment of inertia) is described by a scalar, and rotations are additive and commutative. Things get hairy (much, much hairier) in three dimensional space.
When you connect two objects, the combined object has its own center of mass. To calculate the moment of inertia of this combined object, you need to sum the moments of inertia of the individual objects and also add on offset term given by the Steiner parallel axis theorem for each individual object. This offset term is the mass of the object times the square of the distance to the composite center of mass.
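A minimal sketch of that summation (the function name and tuple layout are my own, for illustration; in 2D each local moment of inertia is a scalar about the part's own center of mass):

```python
def composite_inertia(parts):
    # parts: list of (mass, (cx, cy), I_local), where I_local is the
    # moment of inertia of the part about its own center of mass.
    total_mass = sum(m for m, _, _ in parts)
    comx = sum(m * c[0] for m, c, _ in parts) / total_mass
    comy = sum(m * c[1] for m, c, _ in parts) / total_mass
    # Parallel axis theorem: I = I_local + m * d^2 for each part,
    # where d is the distance from the part's center to the composite
    # center of mass.
    inertia = 0.0
    for m, (cx, cy), i_local in parts:
        d2 = (cx - comx) ** 2 + (cy - comy) ** 2
        inertia += i_local + m * d2
    return (comx, comy), inertia
```

For example, two unit point masses at (-1, 0) and (1, 0) give a composite center of mass at the origin and a total moment of inertia of 2 about it.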
The primary reason you need to know the moment of inertia is so that you can simulate the response to torques that act on your object. This is fairly straightforward in 2D physics. Rotational behavior is an analog to Newton's second law. Instead of F=ma you use T=Iα. (Things once again are much hairier in 3D space.) You need to find the external forces and torques, solve for linear acceleration and rotational acceleration, and then integrate numerically.
A good beginner's book on game physics is probably in order. You can find a list of recommended texts in this question at the gamedev sister site.
For linear motion you can just add them. Inertia is proportional to mass. Adding the masses of your objects and calculating the inertia of the sum is equivalent to adding their individual inertias.
For rotation it gets more complicated, you need to find the centre of mass.
Read up on Newton's laws of motion. You'll need to understand them if you're writing a physics engine. The laws themselves are very short but understanding them requires more context so google around.
You should specifically try to understand the concepts: Mass, Inertia, Force, Acceleration, Momentum, Velocity, Kinetic energy. They're all related.
I am looking for an algorithm that receives a 3D surface mesh (i.e. composed of 3D triangles that discretize some manifold) and generates tetrahedra inside the mesh's volume.
In other words, I want the 3D equivalent of this 2D problem: given a closed curve, triangulate its interior.
I am sorry if this is unclear, it's the best way I could think of explaining it.
For the 2D case there's Triangle. For the 3D case I could find nothing.
pygalmesh (a project of mine based on CGAL) can do just that.
pygalmesh-volume-from-surface elephant.vtu out.vtk --cell-size 1.0 --odt
https://github.com/nschloe/pygalmesh/#volume-meshes-from-surface-meshes
I found GRUMMP which seems to answer all the needs mentioned in the question, and more...
I haven't had any experience using GRUMMP, but as far as a 3D version of Triangle goes, there is TetGen. If you know Triangle's switches, TetGen is built to resemble them. It also has fairly decent documentation and a Python wrapper (MeshPy) for both it and Triangle.
http://wias-berlin.de/software/tetgen/
http://mathema.tician.de/software/meshpy/
Recently I've started developing a voxel engine. What I need is only colorful voxels without textures, but in very large amounts (each voxel much smaller than Minecraft's), and the question is: how do I draw the scene very fast? I'm using C#/XNA, but in my opinion that is not very important here; let's talk about the general case. Look at these two games:
http://www.youtube.com/watch?v=EKdRri5jSMs
http://www.youtube.com/watch?v=in0bavLJ8KQ
In particular, I think video #2 demonstrates great optimization methods (my graphics card starts choking at just 192 x 192 x 64). How do they achieve this?
What I would like to have in the engine:
colorful voxels without texture, but shaded
many, many voxels, say minimum 512 x 512 x 128 to achieve something like video #2
shadows (smooth shadows will be great but this is not necessary)
optional: dynamic lighting (for example from fireballs flying, which light up near voxel structures)
framerate minimum 40 FPS
the camera has 3 degrees of freedom (movement along the x, y, and z axes); no camera rotation is needed
finally optional feature may be Depth of Field (it will be sweet ^^ )
Optimizations I already know about:
remove unseen voxels that reside inside a voxel structure (covered from all six directions by other voxels)
remove unseen faces of voxels - because the camera has no rotation and always looks slantwise forward as in TPP games, if we split the screen with a vertical cut, the left and right voxels each show only 3 faces
keep voxels in a Dictionary instead of a 3-dimensional array - iterating over an array of size 512 x 512 x 128 takes milliseconds, which is unacceptable, but a dictionary mapping int to color, where the int describes a packed 3D position, is much, much faster
use instancing where applicable
occlusion culling? (how to do this?)
space partitioning / octree (is it a good idea?)
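For reference, the packed-key idea from the dictionary point can be sketched in Python (the same bit layout carries straight over to C#; 10 bits per axis is an assumption that comfortably covers 512 x 512 x 128):

```python
def pack(x, y, z):
    # Pack a 3D position into a single int key: 10 bits per axis
    # supports coordinates 0..1023 per axis.
    return (x << 20) | (y << 10) | z

def unpack(key):
    # Recover (x, y, z) by shifting and masking 10 bits at a time.
    return (key >> 20) & 0x3FF, (key >> 10) & 0x3FF, key & 0x3FF

world = {}
world[pack(511, 300, 127)] = 0xFF8800  # packed position -> RGB color
```

A single int key hashes faster and uses less memory than a tuple or a dense 3D array when the world is mostly empty.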
I'll be very thankful if someone can give me tips on improving the existing optimizations listed above or share ideas for new improvements. Thanks!
1) Voxatron uses a software renderer rather than the GPU. You can read some details about it if you read the comments in this blog post:
http://www.lexaloffle.com/bbs/?tid=201
I haven't looked in detail myself so can't tell you much more than that.
2) I've never played 3D Dot Game Heroes but I don't have any reason to believe it uses voxels at all. I mean, I don't see any cubes being added or deleted. Most likely it is just a static polygon mesh with a nice texture applied.
As for implementing it yourself, do not try to draw the world by rendering cubes as this is very slow. Instead you should process the volume and generate meshes lying on the intersection of solid voxels and empty ones. Break the volume into suitable sized regions (e.g. 32x32x32) and generate a mesh for each.
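The boundary-extraction step above can be sketched as follows (a naive version in Python purely for illustration, since the engine itself is C#/XNA; real implementations would then merge these faces into larger quads per 32x32x32 region):

```python
def exposed_faces(solid):
    # solid: set of (x, y, z) occupied voxel coordinates.
    # A face is emitted only where a solid voxel borders an empty one;
    # fully buried voxels contribute no geometry at all.
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = []
    for (x, y, z) in solid:
        for dx, dy, dz in neighbours:
            if (x + dx, y + dy, z + dz) not in solid:
                # Record the voxel and the outward face direction.
                faces.append(((x, y, z), (dx, dy, dz)))
    return faces
```

For a solid 2x2x2 block, every voxel has exactly 3 exposed faces, so only 24 faces are generated instead of the 48 that rendering each cube individually would produce.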
I have written a book chapter about this which you might find useful. It's actually about smooth voxel terrain, but a lot of the principles still apply.
You can read it on Google books here: http://books.google.com/books?id=WNfD2u8nIlIC&lpg=PR1&dq=game%20engine%20gems&pg=PA39#v=onepage&q&f=false
And you can find the associated source code here: http://www.thermite3d.org
Since you are using XNA, you can just use instancing to get the desired effect: http://www.float4x4.net/index.php/2010/06/hardware-instancing-in-xna/
http://roecode.wordpress.com/2008/03/17/xna-framework-gameengine-development-part-19-hardware-instancing-pc-only/
The underlying concept is instancing: this feature lets you specify some amount of repeating data and some amount of varying data in a single DrawIndexedPrimitive call. In your case, the instance stream would be a single solid box, and the other stream would be the transform and color information.