What does having an inertia tensor of zero do in Bullet? - physics

In the Bullet Physics library, when constructing a rigid body the default argument for the inertia tensor is the zero vector.
My understanding of inertia is fairly elementary but from the equation
torque = inertia * angular_acceleration
I would expect angular acceleration on an object with zero inertia to be undefined.
The documentation for constructing rigid bodies says
For dynamic objects, you can use the collision shape to approximate
the local inertia tensor, otherwise use the zero vector (default
argument)
So what happens with this zero inertia? Have I misunderstood the equation? Or is having zero inertia in Bullet similar to having zero mass in defining an object to be static with respect to orientation?

To start, let us define inertia.
Inertia is the resistance of any physical object to a change in its state of motion, or the tendency of an object to resist any change in its motion.
The off-diagonal elements of the inertia tensor are called the products of inertia. The products of inertia are zero when the body is symmetric about the axes of rotation, such as a rectangular box or a cylinder rotating about its symmetry axis. I would imagine that the inertia tensor used in your physics engine is always diagonal to avoid complexity.
I suppose that in the case of your ballistic physics engine, a situation where this torque might come into play is an object spinning around the z-axis with angular velocity (0, 0, ωr) [in cylindrical coordinates (r, θ, z)]; you might then want to find the torque required to stop this rotation in some time t (i.e. a rotational acceleration of magnitude -ωr/t). Here you would use the equation you stated above.
The interpretation of the zero matrix would be to represent zero 'inertia', i.e. an object with no mass, and what I have said above again holds.
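For reference, a minimal sketch of the two construction paths the Bullet documentation quote describes, using the usual btRigidBody construction-info API (the sphere shape and default motion states here are just placeholders):

#include <btBulletDynamicsCommon.h>

void createBodies()
{
    btCollisionShape* shape = new btSphereShape(1.0f);   // placeholder shape
    btScalar mass = 1.0f;

    // Dynamic body: approximate the local inertia tensor from the collision shape.
    btVector3 localInertia(0, 0, 0);
    shape->calculateLocalInertia(mass, localInertia);     // fills localInertia from the shape
    btRigidBody::btRigidBodyConstructionInfo withInertia(
        mass, new btDefaultMotionState(), shape, localInertia);
    btRigidBody* bodyWithInertia = new btRigidBody(withInertia);

    // Alternative: leave the local inertia as the zero vector (the default argument).
    btRigidBody::btRigidBodyConstructionInfo zeroInertia(
        mass, new btDefaultMotionState(), shape, btVector3(0, 0, 0));
    btRigidBody* bodyWithZeroInertia = new btRigidBody(zeroInertia);
}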


Related

Problems converting axis angle to quaternion

I have a vector which points from my camera to an object I want to point to, so I want to make a quaternion which rotates the camera so that it points along that vector. So I do this (using glm):
glm::quat rotation=glm::angleAxis(0.0f,vector);
If I understand that function correctly, in the case of a vector of, for example, (0.0f, 0.0f, -1.0f), the camera should point forward along the depth axis, with no roll. But I don't know why, in that case, it creates this quaternion:
x:-0
y:-0
z:-0
w:1
which does nothing, and the x, y, z values only differ from 0 if the angle parameter passed to angleAxis() is different from 0. Why does this happen? If I pass angle 0, shouldn't it simply create a rotation with only yaw and pitch, and no roll?
Already got the answer (on the GameDev forum, from the user clb):
No, your understanding is off. Angle-axis rotation generates a rotation about the given axis, by the given angle. If you don't specify an angle, you'll always get the identity back. This is because a rotation by a zero angle, about any axis, of course does not rotate at all.
What you are thinking about is the LookAt rotation. I am not sure what the function signature in glm is, but I'm sure they have a similar function. Use a LookAt rotation to orient one direction vector to face towards another vector.
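To make that concrete, a hedged sketch of building such a rotation by hand with glm (the lookAlong name and the OpenGL-style "camera looks down -Z" convention are assumptions; recent glm versions also ship glm::quatLookAt, which does essentially this):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat lookAlong(const glm::vec3& direction, const glm::vec3& worldUp = glm::vec3(0.0f, 1.0f, 0.0f))
{
    glm::vec3 forward = glm::normalize(direction);
    glm::vec3 right   = glm::normalize(glm::cross(forward, worldUp));
    glm::vec3 up      = glm::cross(right, forward);
    // The columns of the rotation matrix are the camera's basis vectors in world space;
    // the camera faces down its local -Z axis, so the third column is -forward.
    glm::mat3 basis(right, up, -forward);
    return glm::quat_cast(basis);
}

// Usage: glm::quat rotation = lookAlong(targetPosition - cameraPosition);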

Proper density functions for Voxel-based terrains?

I've managed to implement the Marching Cubes algorithm in C#. Up to now I've tried the algorithm to render a sphere. That's an easy one because the density function is not very complex to code.
But now I want to get the algorithm to go further and render some interesting terrains for games. So I would need proper density functions for this task.
First thing that comes to mind is volumetric Perlin noise. That's OK, but I am looking for a terrain without overhangs, I mean, no caves and similar geometry for the moment.
OK, I know that a simple height map can do the job there, but I want a voxel-generated terrain. What type of density function or pseudocode would I need to implement this?
You can easily convert a heightmap into voxel terrain. Each pixel in your heightmap corresponds to a column of voxels in your voxel world. For a given pixel in the heightmap, read the height. Then iterate over each voxel in the corresponding column and set it to 'solid' if it is below that height or 'empty' if it is above it.
Here is some sample code using the PolyVox library.
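As a library-agnostic stand-in, a minimal sketch of that column-filling loop using a plain array of solid/empty flags (all of the type and function names here are illustrative, not PolyVox classes):

#include <vector>

struct VoxelWorld
{
    int width, height, depth;
    std::vector<bool> solid;                       // solid/empty flag per voxel
    VoxelWorld(int w, int h, int d) : width(w), height(h), depth(d), solid(w * h * d, false) {}
    void set(int x, int y, int z, bool s) { solid[(y * depth + z) * width + x] = s; }
};

// heights is a width x depth heightmap flattened row by row, values in [0, world.height].
void heightmapToVoxels(const std::vector<int>& heights, VoxelWorld& world)
{
    for (int x = 0; x < world.width; ++x)
        for (int z = 0; z < world.depth; ++z)
        {
            int columnHeight = heights[z * world.width + x];
            for (int y = 0; y < world.height; ++y)
                world.set(x, y, z, y < columnHeight);   // solid below the height, empty above
        }
}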

Creating seamless worldmaps with Fractal Brownian Motion

I'm creating heightmaps using Fractal Brownian Motion. I'm then coloring it based on the heights and mapping it to a sphere. My problem is that the heightmap doesn't wrap seamlessly. I've used the Diamond Square algorithm and it's pretty easy to make things seamless using it, but I can't seem to figure out how to do it with fBm and I seem to be having trouble finding an explanation for it on the web.
To clarify, by "seamless", I mean that when I map it to a sphere, it creates a seamless map on the sphere.
Instead of calculating the heightmap per pixel on the heightmap, calculate the heightmap in 3D space based on each point on the sphere and then map that to an image pixel. You're going to have trouble wrapping a 2D, rectangular heightmap like that onto a sphere without getting ugly results at the poles unless you start your calculations from the sphere.
fBm generalizes to three dimensions, so given a point on the sphere you can get the height at that point, and then do the math to map that value to the pixel where it should be stored in the heightmap image.
Or you could use one of the traditional map projections. A cylindrical projection (x, y) -> (x, sin y) would give you a seam of just one meridian, which you could rotate to the back. Or you could "antialias" the edge by one means or another.
With a stereographic projection (x, y, z) -> (x/(z+1), y/(z+1)), there's only one singular point (the projection point itself).
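To make the 3D-sampling suggestion concrete, a short sketch that fills an equirectangular heightmap by evaluating 3D fBm at each point on the unit sphere (noise3 is a placeholder for whatever 3D noise function you already have, e.g. Perlin or simplex):

#include <cmath>
#include <vector>

float noise3(float x, float y, float z);   // assumed: your existing 3D noise, roughly in [-1, 1]

// Standard fBm: sum several octaves of noise at increasing frequency and decreasing amplitude.
float fbm3(float x, float y, float z, int octaves = 6)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i)
    {
        sum += amplitude * noise3(x * frequency, y * frequency, z * frequency);
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return sum;
}

// Fill a width x height equirectangular map; each pixel corresponds to a direction on the unit sphere.
void buildSphereHeightmap(std::vector<float>& map, int width, int height)
{
    const float PI = 3.14159265358979f;
    map.resize(width * height);
    for (int py = 0; py < height; ++py)
        for (int px = 0; px < width; ++px)
        {
            float lon = (px / float(width)) * 2.0f * PI;          // 0 .. 2*pi
            float lat = (py / float(height)) * PI - 0.5f * PI;    // -pi/2 .. pi/2
            float x = std::cos(lat) * std::cos(lon);
            float y = std::sin(lat);
            float z = std::cos(lat) * std::sin(lon);
            map[py * width + px] = fbm3(x, y, z);                 // seamless: depends only on the 3D point
        }
}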

World space to screen space (perspective projection)

I'm using a 3d engine and need to translate between 3d world space and 2d screen space using perspective projection, so I can place 2d text labels on items in 3d space.
I've seen a few posts of various answers to this problem but they seem to use components I don't have.
I have a Camera object, and can only set its current position and look-at position; it cannot roll. The camera is moving along a path, and a certain target object may appear in its view and then disappear.
I have only the following values
lookat position
position
vertical FOV
Z far
Z near
and obviously the position of the target object.
Can anyone please give me an algorithm that will do this using just these components?
Many thanks.
All graphics engines use matrices to transform between different coordinate systems. Indeed, OpenGL and DirectX use them, because they are the standard way.
Cameras usually construct the matrices using the parameters you have:
view matrix (transforms the world so that you are looking at it from the camera position); it uses the look-at position and the camera position (also the up vector, which is usually 0, 1, 0)
projection matrix (transforms from 3D coordinates to 2D coordinates); it uses the FOV, near, far and aspect ratio.
You can find information on how to construct these matrices on the internet by searching for the OpenGL functions that create them:
gluLookAt: creates the view matrix
gluPerspective: creates the projection matrix
But I can't imagine an engine that doesn't let you get these matrices, because I can assure you they are in there somewhere; the engine is using them.
Once you have those matrices, you multiply them to get the view-projection matrix. This matrix transforms from world coordinates to screen coordinates. So just multiply that matrix with the position you want to project (as a 4-component vector, with the 4th component set to 1.0).
But wait, the result will be in homogeneous coordinates: you need to divide the X, Y, Z of the resulting vector by W, and then you have the position in normalized screen coordinates (0 means the center, 1 means right, -1 means left, etc.).
From there it is easy to get pixel coordinates by multiplying by the width and height.
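Putting those steps together, a hedged sketch with glm (glm::lookAt and glm::perspective mirror the glu functions mentioned above; the worldToScreen name, the fixed up vector and the viewport-size parameters are assumptions):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec2 worldToScreen(const glm::vec3& target,
                        const glm::vec3& cameraPos, const glm::vec3& lookAtPos,
                        float verticalFovRadians, float zNear, float zFar,
                        float screenWidth, float screenHeight)
{
    glm::mat4 view = glm::lookAt(cameraPos, lookAtPos, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 proj = glm::perspective(verticalFovRadians, screenWidth / screenHeight, zNear, zFar);

    glm::vec4 clip = proj * view * glm::vec4(target, 1.0f);   // homogeneous coordinates
    // Note: if clip.w <= 0 the target is behind the camera and should not be labelled.
    glm::vec3 ndc = glm::vec3(clip) / clip.w;                  // normalized coordinates, -1..1

    // Map to pixels (y flipped so that 0 is the top of the screen).
    float x = (ndc.x * 0.5f + 0.5f) * screenWidth;
    float y = (1.0f - (ndc.y * 0.5f + 0.5f)) * screenHeight;
    return glm::vec2(x, y);
}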
I have some slides explaining all this here: https://docs.google.com/presentation/d/13crrSCPonJcxAjGaS5HJOat3MpE0lmEtqxeVr4tVLDs/present?slide=id.i0
Good luck :)
P.S.: when you work with 3D, it is really important to understand the three matrices (model, view and projection); otherwise you will stumble every time.
so I can place 2d text labels on items in 3d space
Have you looked up "billboard" techniques? Sometimes just knowing the right term to search under is all you need. This refers to polygons (typically rectangles) that always face the camera, regardless of camera position or orientation.

How to render a 2d side-scroller game

I do not really understand how I'm supposed to render a side-scroller. How do I know what to render when my character moves? What kind of positioning should I use for the characters?
I hope my question is clear.
The easiest way I've found to do it is to have a characterX and characterY variable [integer or float, whatever you want]. Then have a cameraX and cameraY variable. Every object in the scene is drawn at theObjectX - cameraX, theObjectY - cameraY...
cameraX/cameraY are tweened by a similar-to-midpoint formula so that eventually they reach playerX/playerY [Cx = (Cx*99 + Px)/100].
By doing this, every object moves in the stage's space and is transformed only on render [saving you from headaches].
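A minimal sketch of that idea (the names and the drawSpriteAt call are illustrative, standing in for whatever your framework provides):

struct Camera { float x = 0.0f, y = 0.0f; };

// Ease the camera toward the player every frame (the "similar-to-midpoint" tween above).
void updateCamera(Camera& cam, float playerX, float playerY)
{
    cam.x = (cam.x * 99.0f + playerX) / 100.0f;
    cam.y = (cam.y * 99.0f + playerY) / 100.0f;
}

// Draw everything at its world position minus the camera offset.
void drawObject(const Camera& cam, float objectX, float objectY)
{
    float screenX = objectX - cam.x;
    float screenY = objectY - cam.y;
    // drawSpriteAt(screenX, screenY);   // placeholder for your framework's draw call
}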
Use a matrix to define a camera reference frame.
Use space partitioning to split up your level into screens/windows.
Think of your player sprite as any other entity, like enemies and interactive objects.
Now what you want is the abstraction of a camera. You can define a camera as a 3x3 matrix with this layout:
[rotX_X, rotY_X, 0]
[rotX_Y, rotY_Y, 0]
[transX, transY, 1]
The 2x2 sub-matrix in the top-left corner is a rotation matrix. transX and transY define the translation part, i.e. the origin. You also get scaling for free: simply scale the rotation part by a scalar, and you have yourself a zoom.
For this to work properly with rotation, your sprites need to be polygons/primitives, say like triangles or quads; you can't just apply the matrix to the positions of the sprites when drawing. If you don't need rotation, just transforming the center point will work fine.
If you want the camera to follow the player, use the player's position as the camera origin, that is, the translation vector [transX, transY].
So how do you apply the matrix to entity positions and model vertices? You do a vector-matrix multiplication.
v' = vM^-1, where v' is the new vector, v is the old vector, and M^-1 is the matrix inverse. A camera needs to be an inverse transform because it defines a local coordinate system. An analogy: if you are standing in front of me and I turn to my left in my reference frame, then from your reference frame I am turning to your right. This applies to all affine and linear transformations, such as scaling, rotation and translation.
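A short sketch of that camera transform using glm's 3x3 matrices (note that glm works with column vectors, so the translation sits in the last column, i.e. the transpose of the row-vector layout shown above; the function names are illustrative):

#include <cmath>
#include <glm/glm.hpp>

// Camera at 'origin', rotated by 'angle' radians, zoomed by 'scale'.
glm::mat3 makeCamera(const glm::vec2& origin, float angle, float scale)
{
    float c = std::cos(angle) * scale;
    float s = std::sin(angle) * scale;
    return glm::mat3(   c,        s,     0.0f,    // column 0: rotated, scaled x axis
                       -s,        c,     0.0f,    // column 1: rotated, scaled y axis
                     origin.x, origin.y, 1.0f);   // column 2: translation
}

// The camera matrix maps camera space to world space, so world points are
// transformed by its inverse, as described above.
glm::vec2 worldToCamera(const glm::mat3& camera, const glm::vec2& worldPos)
{
    glm::vec3 p = glm::inverse(camera) * glm::vec3(worldPos, 1.0f);   // homogeneous 2D point
    return glm::vec2(p);
}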
Split up your level into sub-parts so you can cull objects and scenery that do not need to be rendered. Your viewport is of a certain size/resolution. Only render scenery and entities that intersect your viewport. Instead of checking each and every entity against the viewport bounds, assign each entity to a certain sub-screen and test the bounds of the sub-screen against the viewport and camera bounds. If you divide your levels into parts that are the same size as your viewport, then the maximum number of screens visible at any particular time is:
2 if your camera only scrolls left and right.
4 if your camera scrolls left, right, up and down.
4 if your camera scrolls in any direction, and additionally can be rotated.
A screen-change is an event you can use to activate entities belonging to that screen. That could be enemies, background animations, doors or whatever you like.
If this is your first foray into writing a side-scroller, I'd suggest considering an existing game engine (like Construct or GameMaker or XNA or whatever fits your experience level) so you don't have to worry about what order to render things in and how to make it all work. Mess with that a bit, probably exploring a few of them, to get a feel for how they do things, then venture out on your own once you've gotten used to it.
Not that there's anything wrong with baptism by fire but it can get pretty overwhelming in my opinion.