Creating seamless worldmaps with Fractal Brownian Motion

I'm creating heightmaps using Fractal Brownian Motion (fBm), coloring them based on height, and mapping them onto a sphere. My problem is that the heightmap doesn't wrap seamlessly. With the Diamond-Square algorithm it's pretty easy to make things seamless, but I can't figure out how to do it with fBm, and I'm having trouble finding an explanation on the web.
To clarify, by "seamless", I mean that when I map it to a sphere, it creates a seamless map on the sphere.

Instead of calculating the height per pixel of the heightmap, calculate it in 3D space based on each point on the sphere and then map that to an image pixel. You're going to have trouble wrapping a 2D, rectangular heightmap onto a sphere without getting ugly results at the poles unless you start your calculations from the sphere.
fBm generalizes to three dimensions, so given a point on the sphere you can get the height at that point, and then do the math to map that value to where it should be stored in the heightmap image.
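A minimal sketch of this in Python, assuming the third-party noise package for the 3D Perlin basis (the octave loop is the fBm part). Heights are sampled on the unit sphere and written into an equirectangular image, so the left and right edges meet on the same meridian:

```python
import numpy as np
from noise import pnoise3  # assumed: third-party "noise" package for the 3D basis

def fbm3(x, y, z, octaves=6, lacunarity=2.0, gain=0.5):
    """Fractal Brownian Motion: a sum of noise octaves with rising
    frequency and falling amplitude."""
    value, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        value += amplitude * pnoise3(x * frequency, y * frequency, z * frequency)
        amplitude *= gain
        frequency *= lacunarity
    return value

def sphere_heightmap(width, height):
    """Sample fBm at points on the unit sphere and store the heights in an
    equirectangular image. Column 0 and column `width` land on the same
    meridian, so the map wraps seamlessly when mapped back to the sphere."""
    img = np.zeros((height, width))
    for row in range(height):
        lat = np.pi * (row + 0.5) / height - np.pi / 2  # latitude in (-pi/2, pi/2)
        for col in range(width):
            lon = 2.0 * np.pi * col / width             # longitude in [0, 2*pi)
            x = np.cos(lat) * np.cos(lon)
            y = np.sin(lat)
            z = np.cos(lat) * np.sin(lon)
            img[row, col] = fbm3(x, y, z)
    return img
```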

Or you could use one of the traditional map projections. A cylindrical projection (x, y) -> (x, sin y) would give you a seam of just one meridian, which you could rotate to the back, or you could "antialias" the edge by one means or another.
With a stereographic projection (x, y, z) -> (x/(z+1), y/(z+1)), there is only one singular point (the projection point itself).
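For illustration, a sketch of that stereographic projection; the singular point is where the denominator vanishes:

```python
import numpy as np

def stereographic(p):
    """Project a unit-sphere point to the plane using the formula above;
    the only singular point is (0, 0, -1), where z + 1 = 0."""
    x, y, z = p
    return np.array([x / (z + 1.0), y / (z + 1.0)])
```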

Related

How to keep a Gmsh mesh inside the bounding curves?

I am quite a beginner with Gmsh and am trying to create a mesh for a hydrodynamic simulation from coastlines. I used splines for the complex coastline for simplicity, but the produced mesh crosses over the coastlines. What should I do to keep the mesh from crossing the bounding curves?
Image for reference
Your mesh is simply too coarse at the moment. The corner points of each triangle in the mesh lie on the real geometry/coastline, but the edges between them are straight lines and do not care about the geometry.
To refine the mesh, try pressing Mesh -> Refine by Splitting a couple of times to split the current cells. The mesh should get finer and should not violate the geometry border by as much as it does now.
But this only makes the issue less obvious. On a small enough scale you will always see mesh cells that lie partly "outside" the geometry border; you cannot prevent this with a concave boundary like the one you have here. Only with something convex, like a circle, will all elements lie strictly inside the geometry border.
So as a first step, make a finer mesh until you are satisfied with the match between geometry and mesh.
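If you drive Gmsh from a script rather than the GUI, the same steps look roughly like this (the file names are hypothetical, and the size value depends on your model's units):

```python
import gmsh  # the Gmsh Python API mirrors the GUI's Mesh menu

gmsh.initialize()
gmsh.open("coastline.geo")  # hypothetical geometry file with the spline coastline

# Cap the element size so triangle edges follow the splines more closely.
# "Mesh.MeshSizeMax" is the option name in recent Gmsh versions.
gmsh.option.setNumber("Mesh.MeshSizeMax", 0.01)

gmsh.model.mesh.generate(2)  # generate the 2D mesh
gmsh.model.mesh.refine()     # same effect as Mesh -> Refine by Splitting
gmsh.write("coastline.msh")
gmsh.finalize()
```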

How does Blender calculate vertex normals?

I'm attempting to calculate vertex normals for various game assets. The normals I calculate are used for "inflating" the model (drawn behind the real model to produce a thick outline).
I currently compute the normal for each face and average all of them (several other questions on Stack Overflow suggest this approach). However, this doesn't work for sharp corners like this one (adjacent faces' normals marked in orange, the normal I'm trying to calculate is outlined in green).
The object looks like a small pedestal and we're looking at the front-left corner. There are three adjoining faces (the bottom face isn't visible; its normal points straight down).
Blender computes an excellent normal that lies squarely in the middle of the three faces' normals; it seems like it somehow calculates a normal that has minimum rotation to each of the three face normals. Blender's normal also doesn't change when the quads are triangulated differently.
Averaging the faces' normals gives me a different normal that points slightly upward in the Z-axis (-0.45, -0.89, +0.08). Inflating my model this way doesn't produce a good outline because the bottom face of the outline is shifted up and doesn't enclose the original model.
I attempted to look at the Blender source code but couldn't find what I was looking for. If anyone can point me to the algorithm in the Blender source, I'd accept that also.
Weight the face normals by the angle each face subtends at the shared vertex. This is common practice in surface rendering (see the discussion here: http://www.bytehazard.com/code/vertnorm.html), and it ensures that your bottom face is weighted more strongly than the two slanted side faces. I don't know if Blender does exactly this, but you should give it a try.
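A sketch of angle-weighted vertex normals in Python, assuming an indexed triangle mesh as input (this is the technique from the link, not necessarily what Blender does internally):

```python
import numpy as np

def angle_weighted_normals(vertices, faces):
    """Per-vertex normals where each face contributes in proportion to the
    angle it subtends at that vertex."""
    normals = np.zeros_like(vertices, dtype=float)
    for i0, i1, i2 in faces:
        v0, v1, v2 = vertices[i0], vertices[i1], vertices[i2]
        n = np.cross(v1 - v0, v2 - v0)
        n = n / np.linalg.norm(n)  # unit face normal
        # Accumulate the face normal at each corner, scaled by the corner angle.
        for vi, a, b in ((i0, v1 - v0, v2 - v0),
                         (i1, v2 - v1, v0 - v1),
                         (i2, v0 - v2, v1 - v2)):
            cos_a = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
            normals[vi] += angle * n
    # Normalize the accumulated sums back to unit length.
    lens = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lens, 1e-12)
```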

Proper density functions for Voxel-based terrains?

I've managed to implement the Marching Cubes algorithm in C#. So far I've only used the algorithm to render a sphere. That's an easy one because its density function is not very complex to code.
But now I want to get the algorithm to go further and render some interesting terrains for games. So I would need proper density functions for this task.
The first thing that comes to mind is volumetric Perlin noise. That's OK, but for the moment I'm looking for terrain without caves and similar geometry.
I know a simple heightmap could do the job, but I want a voxel-generated terrain. What kind of density function or pseudocode would I need to implement it?
You can easily convert a heightmap into voxel terrain. Each pixel in your heightmap corresponds to a column of voxels in your voxel world. For a given pixel in the heightmap, read the height, then iterate over each voxel in the corresponding column and set it to 'solid' if it is below that reference height or 'empty' if it is above it.
The original answer included some sample code using the PolyVox library (not reproduced here).
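As a stand-in, here is a minimal Python sketch of the same column-filling idea; the array layout and the max_height parameter are assumptions:

```python
import numpy as np

def heightmap_to_voxels(heightmap, max_height):
    """Convert a 2D heightmap into a boolean solid/empty voxel grid.
    Heightmap values are assumed to be integers in [0, max_height]."""
    w, d = heightmap.shape
    voxels = np.zeros((w, max_height, d), dtype=bool)
    for x in range(w):
        for z in range(d):
            h = heightmap[x, z]
            voxels[x, :h, z] = True  # solid below the surface, empty above
    return voxels
```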

Can I specify per face normal in OpenGL ES and achieve non-smooth/flat shading?

I want to display mesh models in OpenGL ES 2.0 in a way that clearly shows the actual mesh, so I don't want smooth shading across each primitive/triangle. The only two options I can think of are:
Give each triangle its own set of normals, all perpendicular to the triangle's surface (but then I guess I can't share vertices among triangles)
Indicate triangle/primitive edges with black lines and stick to the normal way, with shared vertices and one normal per vertex
Does it have to be like this? Why can't I simply read in primitives without specifying any normals and somehow let OpenGL ES 2.0 produce a flat shade on each face?
There is a similar Stack Overflow question, but with no suggested solution.
Because in order to have any shading on your mesh, smooth or flat, you need a lighting model, and OpenGL ES can't guess it. There is no fixed pipeline in GL ES 2, so you can't rely on a built-in function to do the job for you with a built-in lighting model.
With flat shading, the whole triangle is drawn with the same color, computed from the angle between its normal and the light source (yes, you also need a light source, which could simply be the origin of the perspective view). This is why you need at least one normal per triangle.
Also, a GPU works in a highly parallel way, processing several vertices (and then fragments) at the same time. To be efficient, it can't share data among vertices. This is why you need to replicate the normal for each vertex.
And, as you said, your mesh can't share vertices among triangles anymore, because shared vertices would share only the position, not the normal. So you need to put 3 * NbTriangles vertices in your buffer, each with one position and one normal. You also lose the benefit of triangle strips/fans, because no face has a vertex in common with another (again, because of the differing normals).
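A sketch of that buffer expansion in Python, assuming an indexed triangle mesh as input; every face gets three unshared vertices carrying the face normal:

```python
import numpy as np

def flatten_for_flat_shading(vertices, faces):
    """Expand an indexed mesh into 3 * n_faces unshared vertices, each
    carrying its face's normal (what flat shading in GL ES 2 requires)."""
    positions, normals = [], []
    for i0, i1, i2 in faces:
        v0, v1, v2 = vertices[i0], vertices[i1], vertices[i2]
        n = np.cross(v1 - v0, v2 - v0)
        n = n / np.linalg.norm(n)
        positions.extend([v0, v1, v2])
        normals.extend([n, n, n])  # the same normal replicated per corner
    # Interleave as position+normal rows, ready to upload to a VBO.
    return np.hstack([np.array(positions), np.array(normals)]).astype(np.float32)
```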

World space to screen space (perspective projection)

I'm using a 3d engine and need to translate between 3d world space and 2d screen space using perspective projection, so I can place 2d text labels on items in 3d space.
I've seen a few posts of various answers to this problem but they seem to use components I don't have.
I have a Camera object and can only set its current position and lookat position; it cannot roll. The camera moves along a path, and a certain target object may appear in its view and then disappear.
I have only the following values
lookat position
position
vertical FOV
Z far
Z near
and obviously the position of the target object.
Can anyone please give me an algorithm that will do this using just these components?
Many thanks.
All graphics engines use matrices to transform between different coordinate systems; indeed, OpenGL and DirectX use them, because they are the standard way.
Cameras usually construct the matrices using the parameters you have:
view matrix (transforms the world so that you are looking at it from the camera position); it uses the lookat position and the camera position (plus an up vector, which is usually (0, 1, 0))
projection matrix (transforms from 3D coordinates to 2D coordinates); it uses the FOV, near, far, and the aspect ratio
You can find out how to construct these matrices by searching the web for the OpenGL functions that create them:
gluLookAt creates a view matrix
gluPerspective creates a projection matrix
But I can't imagine an engine that doesn't let you get these matrices; I can assure you they are in there somewhere, because the engine is using them.
Once you have those matrices, multiply them to get the view-projection matrix. This matrix transforms from world coordinates to screen coordinates, so just multiply it with the position you want to project (as a 4-component vector, with the 4th component set to 1.0).
But wait: the result will be in homogeneous coordinates. You need to divide the X, Y, Z of the resulting vector by W, which gives you the position in normalized device coordinates (0 means the center, 1 means right, -1 means left, and so on).
From there it is easy to get pixel coordinates by scaling by the viewport width and height.
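Putting the answer together, a minimal Python sketch assuming right-handed, OpenGL-style conventions (the function names here are mine, not the engine's):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Right-handed view matrix from the camera position and lookat point."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)               # forward
    s = np.cross(f, up)
    s /= np.linalg.norm(s)               # right
    u = np.cross(s, f)                   # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def perspective(fov_y, aspect, z_near, z_far):
    """OpenGL-style projection matrix (what gluPerspective builds)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    proj = np.zeros((4, 4))
    proj[0, 0] = f / aspect
    proj[1, 1] = f
    proj[2, 2] = (z_far + z_near) / (z_near - z_far)
    proj[2, 3] = (2.0 * z_far * z_near) / (z_near - z_far)
    proj[3, 2] = -1.0
    return proj

def world_to_screen(point, eye, target, fov_y, aspect, z_near, z_far, width, height):
    """World position -> pixel coordinates, via clip space and the w-divide."""
    clip = perspective(fov_y, aspect, z_near, z_far) @ look_at(eye, target) @ np.append(point, 1.0)
    ndc = clip[:3] / clip[3]                   # homogeneous divide
    x = (ndc[0] * 0.5 + 0.5) * width
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height  # flip Y: screen origin is top-left
    return x, y, ndc[2]                        # ndc z is handy for visibility checks
```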
I have some slides explaining all this here: https://docs.google.com/presentation/d/13crrSCPonJcxAjGaS5HJOat3MpE0lmEtqxeVr4tVLDs/present?slide=id.i0
Good luck :)
P.S.: when you work with 3D it is really important to understand the three matrices (model, view, and projection); otherwise you will stumble every time.
"so I can place 2d text labels on items in 3d space"
Have you looked up "billboard" techniques? Sometimes just knowing the right term to search for is all you need. Billboards are polygons (typically rectangles) that always face the camera, regardless of camera position or orientation.
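A tiny sketch of the idea, assuming a view matrix like the one built by look_at() in the sketch above: the camera's right and up vectors are the first two rows of its rotation part, and they span the billboard plane:

```python
import numpy as np

def billboard_quad(center, half_w, half_h, view):
    """Four corners of a camera-facing quad. `view` is a 4x4 view matrix;
    its first two rows hold the camera's right and up directions in
    world space, so the quad always faces the camera."""
    center = np.asarray(center, dtype=float)
    right = view[0, :3] * half_w
    up = view[1, :3] * half_h
    return [center - right - up,
            center + right - up,
            center + right + up,
            center - right + up]
```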