Compute direction of gravity vector - gps

As I understand it, to compute the gravity vector it is not sufficient to compute the normal to the ellipsoid; we need to compute the normal to the geoid?
But how do we compute the normal to the geoid? How is the geoid defined?
The Wikipedia article says it's represented by spherical harmonics.

You do not need to know anything about geoids to compute the vector of gravity.
Newtonian gravity is a monopole force, so it acts between the centers of mass of the two objects; the shapes of the objects don't matter at all. Assume you have two objects 1 and 2, and that the coordinates of the objects' centers of mass are:
r_1 = (x_1,y_1,z_1) and
r_2 = (x_2,y_2,z_2)
The direction of gravitational force on object 1 from object 2 is then simply the difference between the vectors (gravity is always attractive):
r = r_2 - r_1 = (x_2 - x_1, y_2 - y_1, z_2 - z_1)
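As a minimal sketch (plain C++; the struct and function names are just illustrative):

#include <cmath>

struct Vec3 { double x, y, z; };

// Unit vector pointing from object 1 toward object 2, i.e. the direction
// of the gravitational force acting on object 1.
Vec3 gravityDirection(const Vec3& r1, const Vec3& r2)
{
    Vec3 r = { r2.x - r1.x, r2.y - r1.y, r2.z - r1.z };
    double len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    return { r.x / len, r.y / len, r.z / len };
}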
If object 1 is something sitting on the surface of Earth and you are looking for the normal force caused by the Earth's surface pushing back on the bottom of object 1, that normal force vector is given by the normal to the surface at that point.

pose estimation: determine whether rotation and translation matrix are right

Recently I've been struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. I then use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the resulting vectors are right?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector, and the world 3D coordinates of a certain point to obtain that point's coordinates in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y, and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (with the sign then determined by which side of the camera the point is on)?
The figure below is in world space, while the axes depicted are in the camera system.

======== How the camera and the point are placed in world space ========

Camera --------------------------> z-axis
  |
  |      } Xw ?
  |    P(Xw, Yw, Zw)
  |
  v
x-axis
My rvec and tvec results seem both right and wrong. For a specified point the z value seems reasonable; I mean, if the point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, given the location of the point, I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image looks like this:
But using the point coordinates calculated in the camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern, but it has moved from the bottom right to the top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which seems to be right according to this post.
In the case of camera pose estimation, "the values seem reasonable" can be quantified as the backprojection error. That's a measure of how well your resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);

// Use the computed solution to project the 3D pattern into the image.
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);

// Compute the error of each 2D-3D correspondence.
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints[i].x - projectedPattern[i].x;
    float dy = pixelPoints[i].y - projectedPattern[i].y;
    // Euclidean distance between the projected and the measured pixel.
    errors.push_back(std::sqrt(dx * dx + dy * dy));
}
// Finally, compute the max or average of "errors".
An average backprojection error for a calibrated camera might be in the range of 0-2 pixels. Judging by your two pictures, yours would be far larger. To me, it looks like a scaling problem. If I am right, you computed the projection yourself; maybe try cv::projectPoints() once and compare.
When it comes to transformations, I learned not to follow my imagination :) The first thing I do with the returned rVec and tVec is usually to create a 4x4 rigid transformation matrix out of them (I once posted code for this here). This makes things even less intuitive, but it is compact and handy.
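For reference, a minimal sketch of that conversion with OpenCV (my own sketch, not necessarily the linked code; it assumes rVec and tVec are CV_64F):

cv::Mat R;
cv::Rodrigues(rVec, R);                     // 3x3 rotation from the rotation vector
cv::Mat T = cv::Mat::eye(4, 4, CV_64F);     // 4x4 rigid transformation matrix
R.copyTo(T(cv::Rect(0, 0, 3, 3)));          // top-left 3x3 block = rotation
tVec.copyTo(T(cv::Rect(3, 0, 1, 3)));       // last column = translation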
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates of a point in the camera system are calculated as (xc, yc, zc). Then xc is the distance between the camera and the point in the real world along the camera's x direction.
Next, how to determine whether the output matrices are right?
1. As #eidelen points out, the backprojection error is one indicative measure.
2. Calculate the camera-system coordinates of the points from their world coordinates and the matrices, then check whether those coordinates are reasonable.
So why did I get a wrong result (the pattern remained but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic parameters. In the camera matrix, you should use width/2 and height/2 for cx and cy, whereas I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything worked fine.
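For illustration, a hedged sketch of building the corrected intrinsic matrix in OpenCV (the image size and focal lengths are placeholder values):

double imageWidth = 640.0, imageHeight = 480.0;   // example image size
double fx = 1000.0, fy = 1000.0;                  // focal lengths in pixels, from calibration
double cx = imageWidth / 2.0;                     // principal point at the image center,
double cy = imageHeight / 2.0;                    // not at (width, height)
cv::Mat camIntrinsics = (cv::Mat_<double>(3, 3) <<
    fx, 0.0, cx,
    0.0, fy, cy,
    0.0, 0.0, 1.0);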

Where can I find a detailed resource on object state prediction for use with dead reckoning?

I have a server and a client.
I have 40 OpenGL cubes. Their state is described by a 3D vector for position and a 3x3 rotation matrix (or a quaternion).
How can I send intermediate packets and predict the object state on the client between those packets (extrapolation)?
For object position I can use a linear predictor on velocity.
How do I predict quaternion states?
The easiest thing, parallel to what you're doing with linear velocity, is to use a linear predictor on angular velocity.
If you have two quaternions, q_0 and q_t, representing global orientations that are t seconds apart, you can compute the finite difference between the two quaternions and use that to find an angular velocity that can be used for extrapolation.
Make sure that the inner product of q_0 and q_t is non-negative. If it's negative, negate all the components of one of the quaternions; this ensures that we're not trying to go the long way around. If your bodies are rotating really fast relative to your sampling rate, this is a problem, and you'll need a more complicated model that accounts for the previous angular velocity and makes assumptions about the maximum possible acceleration. We'll assume that's not the case.
Then we compute the relative difference quaternion, dq = q_t * q_0' (where q_0' is the quaternion rotational inverse/conjugate). If you have the luxury of fixed-size steps, you can stop here and predict the next orientation t seconds into the future: q_2t = dq * q_t.
If we can't step forward by integer multiples of t, we compute the angle of rotation from dq. Quaternions and angular velocities are both variations on "axis-angle" representations of changes in orientation. If you rotate by Θ around the unit-length axis [x,y,z], then the quaternion representation of that is q = [cos(Θ/2), sin(Θ/2)x, sin(Θ/2)y, sin(Θ/2)z] (using the quaternion convention where the w component comes first). If you rotate by Θ/t around the axis [x,y,z], then the angular velocity is v = Θ[x,y,z]/t. So v = Θ[q.x,q.y,q.z]/(t||[q.x,q.y,q.z]||). We can compute the angle in two ways: Θ = 2acos(q.w) = 2asin(||[q.x,q.y,q.z]||). These will always be the same because of step 1. Numerically it's nicer to use the sine, since we need to find m = ||[q.x,q.y,q.z]|| anyway for the next step.
If m is large enough, then we just find the angular velocity:
v = 2asin(m)[dq.x,dq.y,dq.z]/(m*t)
However, if m is not large enough, we'll face numeric issues trying to divide by a near-zero value. So in that case a common trick is to use the Taylor expansion of the sinc() function around zero, which happens to be very accurate here. Remember that m = sin(Θ/2). With m < 1e-4, we can accurately compute asin(m)/m ≈ 6/(6 - m*m). Then you just need to multiply the result by 2[dq.x,dq.y,dq.z]/t and you have your angular velocity. Phew.
Extrapolating is then a matter of multiplying your angular velocity by the time that has passed. Then you go backwards, converting the angular change to a quaternion and multiplying it onto q_t.
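Putting the steps above together, a minimal C++ sketch (the Quat struct and all names are illustrative, and the inputs are assumed to be unit quaternions):

#include <cmath>

struct Quat { float w, x, y, z; };

Quat conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

// Hamilton product a*b, w component first.
Quat mul(const Quat& a, const Quat& b)
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Angular velocity (axis * radians/second) from two orientations t seconds apart.
void angularVelocity(Quat q0, Quat qt, float t, float v[3])
{
    // Step 1: take the short way around.
    float dot = q0.w*qt.w + q0.x*qt.x + q0.y*qt.y + q0.z*qt.z;
    if (dot < 0.f) { qt.w = -qt.w; qt.x = -qt.x; qt.y = -qt.y; qt.z = -qt.z; }
    // Step 2: relative difference quaternion dq = q_t * q_0'.
    Quat dq = mul(qt, conjugate(q0));
    float m = std::sqrt(dq.x*dq.x + dq.y*dq.y + dq.z*dq.z);   // m = sin(theta/2)
    // Step 3: angle from the sine; guard the near-zero case with asin(m)/m ~ 6/(6 - m*m).
    float k = (m < 1e-4f) ? 2.f * (6.f / (6.f - m*m)) / t
                          : 2.f * std::asin(m) / (m * t);
    v[0] = k * dq.x; v[1] = k * dq.y; v[2] = k * dq.z;
}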
It seems like there must be an easier way...

Gravitational Pull

Does anyone know of a tutorial that deals with the gravitational pull of two objects? E.g. a satellite being drawn toward the moon (and possibly slingshotting past it).
I have a small Java game that I am working on, and I would like to implement this feature in it.
I have the formula for gravitational attraction between two bodies, but when I try to use it in my game, nothing happens.
There are two objects on the screen, one of which is always stationary while the other moves in a straight line at a constant speed until it comes within the detection range of the stationary object, at which point it should be drawn toward it.
First I calculate the distance between the two objects, and depending on their masses and this distance, I update the x and y coordinates.
But like I said, nothing happens. Am I not implementing the formula correctly?
I have included some code to show what I have so far.
This is the instance when the particle enters the gate's detection range and should start being pulled towards it:
for (int i = 0; i < particle.length; i++)
{
    // **************************************************************************************************
    // GATE COLLISION
    // **************************************************************************************************
    // Getting the instance when a Particle collides with a Gate
    if (getDistanceBetweenObjects(gate.getX(), particle[i].getX(), gate.getY(), particle[i].getY()) <=
        sumOfRadii(particle[i].getRadius(), barrier.getRadius()))
    {
        particle[i].calcGravPull(particle[i].getMass(), barrier.getMass(),
            getDistanceBetweenObjects(gate.getX(), particle[i].getX(), gate.getY(), particle[i].getY()));
    }
}
And the method in my Particle class that does the movement:
// Calculate the gravitational pull between objects
public void calcGravPull(int mass1, int mass2, double distBetweenObjects)
{
    double gravityPull;
    gravityPull = GRAV_CONSTANT * ((mass1 * mass2) / (distBetweenObjects * distBetweenObjects));
    x += gravityPull;
    y += gravityPull;
}
Your formula has problems. You're calculating the gravitational force and then applying it as if it were an acceleration. Acceleration is force divided by mass, so you need to divide the force by the moving object's mass; therefore GRAV_CONSTANT * (mass1 / (distBetweenObjects * distBetweenObjects)) is the formula for the acceleration of mass2.
Then you're using the acceleration as if it were a positional adjustment, rather than a velocity adjustment (which is what an acceleration is). Keep track of the velocity of the moving mass, use that to adjust its position, and use the acceleration to change that velocity.
Finally, you're using the acceleration as a scalar when it's really a vector. Calculate the angle from the moving mass to the stationary mass; if you're representing it as the angle from the positive x-axis, multiply the acceleration by the cosine of that angle to get the x component, and by the sine to get the y component.
That will give you a correct representation of gravity.
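A hedged sketch of that corrected update (written here in C++ for brevity; the same structure carries over to Java, and the names mirror the question):

#include <cmath>

struct Body { double x, y, vx, vy, mass; };   // position, velocity, mass

// Accelerate the moving body toward the stationary one over timestep dt.
void applyGravity(Body& moving, const Body& fixed, double GRAV_CONSTANT, double dt)
{
    double dx = fixed.x - moving.x;
    double dy = fixed.y - moving.y;
    double dist2 = dx * dx + dy * dy;
    double dist = std::sqrt(dist2);
    // Acceleration of the moving body: G * m_fixed / r^2 (its own mass cancels out).
    double a = GRAV_CONSTANT * fixed.mass / dist2;
    // Resolve the scalar acceleration into x and y components.
    moving.vx += a * (dx / dist) * dt;
    moving.vy += a * (dy / dist) * dt;
    // Use the velocity, not the acceleration, to update the position.
    moving.x += moving.vx * dt;
    moving.y += moving.vy * dt;
}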
If it still does nothing, check the coordinates to see what is happening. Make sure the stationary mass is large enough to have an effect: gravity is a very weak force, and you'll see no significant effect with anything much smaller than a planetary mass.
Also, make sure you're using the correct gravitational constant for your units. The constant you find in books is for the MKS system (meters, kilograms, and seconds). If you're using kilometers as the unit of length, you need to rescale the constant accordingly, or alternatively multiply the lengths by a thousand before plugging them into the formula.
Your algorithm is correct. Probably the gravitational pull you compute is just too small to be seen. I'd remove GRAV_CONSTANT and try again.
BTW, you can gain a bit of speed by storing the result of getDistanceBetweenObjects() in a temporary variable.

Continuous collision detection between two moving tetrahedra

My question is fairly simple. I have two tetrahedra, each with a current position, a linear velocity in space, an angular velocity, and a center of mass (center of rotation, actually).
Given this data, I am trying to find a (fast) algorithm which precisely determines (1) whether they will collide at some point in time, and if so, (2) after how much time they collide, and (3) the point of collision.
Most people would solve this by doing triangle-triangle collision detection, but this would waste a few CPU cycles on redundant operations, such as checking the same edge of one tetrahedron against the same edge of the other while checking different triangles. This only means I'll optimize things a bit; nothing to worry about.
The problem is that I am not aware of any public CCD (continuous collision detection) triangle-triangle algorithm which takes self-rotation into account.
Therefore, I need an algorithm which takes the following input data:
vertex data for the two triangles
position and center of rotation/mass
linear velocity and angular velocity
And would output the following:
Whether there is a collision
After how much time the collision occurs
At which point in space the collision occurs
Thanks in advance for your help.
The commonly used discrete collision detection would check the triangles of each shape for collision at successive discrete points in time. While straightforward to compute, it can miss a fast-moving object hitting another one, because the collision may happen between the discrete time points tested.
Continuous collision detection instead first computes the volume traced by each triangle over a span of time. For a triangle moving at constant speed and without rotation, this volume would look like a triangular prism. CCD would then check for collision between the volumes, and finally trace back whether and at what time the triangles actually shared the same space.
When angular velocity is introduced, the volume traced by each triangle no longer looks like a prism. It might look more like the shape of a screw, like a strand of DNA, or some other non-trivial shapes you might get by rotating a triangle around some arbitrary axis while dragging it linearly. Computing the shape of such volume is no easy feat.
One approach might be to first compute the sphere that contains an entire tetrahedron rotating at the given angular velocity, as if it were not moving linearly. You can compute a rotation circle for each vertex and derive the sphere from that. Given such a sphere, we can approximate the extruded CCD volume as a cylinder with the radius of the sphere, progressing along the linear velocity vector. Finding collisions of such cylinders gives us a first approximation of an area to search for collisions in.
A second, complementary approach might attempt to approximate the actual volume traced by each triangle by breaking it down into small, almost-prismatic sub-volumes. It would take the triangle positions at two increments of time and add the surfaces generated by tracing the triangle vertices between those moments. It's an approximation because it connects the positions with straight lines rather than the actual curves. For the approximation to avoid gross errors, the duration between successive moments needs to be short enough that the triangle completes only a small fraction of a rotation. The duration can be derived from the angular velocity.
The second approach creates many more polygons! You can use the first approach to limit the search volume, and then use the second to get higher precision.
If you're solving this for a game engine, you might find the above precision sufficient (I would still shudder at the computational cost). If, rather, you're writing a CAD program or working on your thesis, you might find it less than satisfying. In the latter case, you might want to refine the second approach, perhaps by a better geometric description of the volume occupied by a turning, moving triangle, when limited to a small turn angle.
I have spent quite a lot of time wondering about geometry problems like this one, and it seems that accurate solutions, despite their simple statements, are way too complicated to be practical, even for analogous 2D cases.
But intuitively I can see that such solutions do exist when you consider constant translational and angular velocities. I don't think you'll find the answer on the web or in any book, because what we're talking about here are special yet complex cases. An iterative solution is probably what you want anyway; the rest of the world is satisfied with those, so why shouldn't you be?
If you were trying to collide non-rotating tetrahedra, I'd suggest taking the Minkowski sum and performing a ray check, but that won't work with rotation.
The best I can come up with is to perform swept-sphere collision using their bounding spheres, to give you a range of times to check using bisection or what-have-you.
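As a sketch of that pre-check (plain C++, names are mine): with linearly moving centers, the earliest contact time reduces to a quadratic in t:

#include <cmath>

// Returns true and sets tHit if spheres (c, v, r) touch during t in [0,1].
bool sweptSpheres(const double c1[3], const double v1[3], double r1,
                  const double c2[3], const double v2[3], double r2,
                  double& tHit)
{
    double d[3] = { c2[0]-c1[0], c2[1]-c1[1], c2[2]-c1[2] };  // relative position
    double w[3] = { v2[0]-v1[0], v2[1]-v1[1], v2[2]-v1[2] };  // relative velocity
    double R = r1 + r2;
    // |d + w*t|^2 = R^2  =>  a*t^2 + 2*b*t + c = 0
    double a = w[0]*w[0] + w[1]*w[1] + w[2]*w[2];
    double b = d[0]*w[0] + d[1]*w[1] + d[2]*w[2];
    double c = d[0]*d[0] + d[1]*d[1] + d[2]*d[2] - R*R;
    if (c <= 0.0) { tHit = 0.0; return true; }   // already overlapping
    if (a == 0.0) return false;                  // no relative motion
    double disc = b*b - a*c;
    if (disc < 0.0) return false;                // they never get close enough
    double t = (-b - std::sqrt(disc)) / a;       // earliest root
    if (t < 0.0 || t > 1.0) return false;
    tHit = t;
    return true;
}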
Here's an outline of a closed-form mathematical approach. Each element of this is easy to express individually, and the final combination of these would be a closed-form expression if one could ever write it out:
1) The equation of motion for each point of a tetrahedron is fairly simple in its own coordinate system. The center of mass (CM) just moves smoothly along a straight line, and the corner points rotate around an axis through the CM, assumed here to be the z-axis, so the equation for each corner point (parameterized by time t) is p = v t + x + r (sin(wt + s) i + cos(wt + s) j), where v is the velocity vector of the center of mass; r is the radius of the projection onto the x-y plane; i, j, and k are the x, y, and z unit vectors; and x and s account for the starting position and phase of rotation at t = 0.
2) Note that each object has its own coordinate system in which the motion is easy to represent, but to compare the objects you'll need to rotate each into a common coordinate system, which might as well be the coordinate system of the screen. (Note, though, that the different coordinate systems are fixed in space and do not travel with the tetrahedra.) So determine the rotation matrices and apply them to each trajectory (i.e. the points and the CM of each tetrahedron).
3) Now you have an equation for each trajectory in the same coordinate system, and you need to find the times of the intersections. This can be found by testing whether any of the line segments from the points to the CM of one tetrahedron intersects any of the triangles of the other. This also has a closed-form expression, as can be found here.
Layering these steps makes for terribly ugly equations, but it wouldn't be hard to solve them numerically (although with the rotation of the tetrahedra you need to be careful not to get stuck in a local minimum). Another option might be to plug it all into something like Mathematica to do the cranking for you. (Not all problems have easy answers.)
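As a small illustration of the trajectory in step 1 (plain C++; all names are mine, and the rotation axis is taken to be the z-axis as above):

#include <cmath>

// Corner position at time t: p = v*t + x0 + r*(sin(w*t + s), cos(w*t + s), 0)
void cornerAt(double t, const double v[3], const double x0[3],
              double r, double w, double s, double p[3])
{
    p[0] = v[0] * t + x0[0] + r * std::sin(w * t + s);
    p[1] = v[1] * t + x0[1] + r * std::cos(w * t + s);
    p[2] = v[2] * t + x0[2];
}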
Sorry, I'm not a math boffin and have no idea what the correct terminology is; I hope my poor terms don't hide my meaning too much.
Pick some arbitrary timestep.
For each timestep, compute the bounds of each shape in the two dimensions perpendicular to the axis it is moving along.
If the swept bounds of any two objects intersect, halve the timestep and recurse into that interval.
This is a kind of binary search of increasingly fine precision to discover the point at which a finite intersection occurs.
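A rough sketch of that refinement (C++; boundsOverlap() is a hypothetical helper that conservatively tests the swept bounds of the two objects over an interval):

bool boundsOverlap(double t0, double t1);   // assumed to exist elsewhere

double findContactTime(double t0, double t1, double eps)
{
    if (t1 - t0 < eps)
        return t0;                            // precise enough: report contact time
    double mid = 0.5 * (t0 + t1);
    if (boundsOverlap(t0, mid))               // contact in the first half?
        return findContactTime(t0, mid, eps);
    return findContactTime(mid, t1, eps);     // otherwise it is in the second half
}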
Your problem can be cast as a linear programming problem and solved exactly.
First, suppose (p0,p1,p2,p3) are the vertices at time t0 and (q0,q1,q2,q3) are the vertices at time t1 for the first tetrahedron. Then in 4D space-time they fill the following closed 4D volume:
V = { (r,t) | (r,t) = a0 (p0,t0) + … + a3 (p3,t0) + b0 (q0,t1) + … + b3 (q3,t1) }
Here the parameters a0…a3 and b0…b3 are in the interval [0,1] and sum to 1:
a0+a1+a2+a3+b0+b1+b2+b3=1
The second tetrahedron similarly fills a convex polytope (add a ' to everything above to define V', the 4D volume for that moving tetrahedron).
Now the intersection of two convex polytopes is a convex polytope, and the first time the two volumes intersect satisfies the following linear programming problem:
If (p0,p1,p2,p3) moves to (q0,q1,q2,q3)
and (p0’,p1’,p2’,p3’) moves to (q0’,q1’,q2’,q3’)
then the first time of intersection happens at points/times (r,t):
Minimize t0*(a0+a1+a2+a3)+t1*(b0+b1+b2+b3) subject to
0 <= ak <= 1, 0 <= bk <= 1, 0 <= ak' <= 1, 0 <= bk' <= 1, for k = 0…3
a0*(p0,t0) + … + a3*(p3,t0) + b0*(q0,t1) + … + b3*(q3,t1)
= a0’*(p0’,t0) + … + a3’*(p3’,t0) + b0’*(q0’,t1) + … + b3’*(q3’,t1)
The last is actually 4 equations, one for each dimension of (r,t).
This is a total of 20 linear constraints on the 16 values ak, bk, ak', and bk'.
If there is a solution, then
(r,t) = a0*(p0,t0) + … + a3*(p3,t0) + b0*(q0,t1) + … + b3*(q3,t1)
is a point of first intersection. Otherwise they do not intersect.
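Written out compactly (just a restatement of the constraints above, including the two convex-combination equalities from the definitions of V and V'):

\begin{aligned}
\min_{a,\,b,\,a',\,b'} \quad & t_0 (a_0 + a_1 + a_2 + a_3) + t_1 (b_0 + b_1 + b_2 + b_3) \\
\text{subject to} \quad & 0 \le a_k,\, b_k,\, a'_k,\, b'_k \le 1, \qquad k = 0,\dots,3, \\
& \textstyle\sum_k a_k + \sum_k b_k = 1, \qquad \sum_k a'_k + \sum_k b'_k = 1, \\
& \textstyle\sum_k a_k (p_k, t_0) + \sum_k b_k (q_k, t_1) = \sum_k a'_k (p'_k, t_0) + \sum_k b'_k (q'_k, t_1).
\end{aligned}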
I thought about this in the past but lost interest... The best way to go about solving it would be to abstract out one object.
Make a coordinate system in which the first tetrahedron is the center (barycentric coords, or a skewed system with one point as the origin) and abstract out the rotation by making the other tetrahedron rotate around the center. This should give you parametric equations if you make the rotation a function of time.
Add the movement of the center of mass towards the first, and its spin, and you have a set of equations for movement relative to the first (distance).
Solve for t where the distance equals zero.
Obviously, with this method the more effects you add (like wind resistance), the messier the equations get, but it's still probably the simplest approach (almost every other collision technique uses this kind of abstraction). The biggest problem is that if you add any effects that have feedback with no analytical solution, the whole equation becomes unsolvable.
Note: if you go the route of a skewed system, watch out for pitfalls with distance: you must be in the right octant! This method favors vectors and quaternions, while the barycentric coords favor matrices; so pick whichever your system uses most effectively.

Solving for optimal alignment of 3d polygonal mesh

I'm trying to implement a geometry templating engine. One of its parts takes a prototypical polygonal mesh and aligns an instantiation of it with some points in the larger object.
So the problem is this: given 3D point positions for some (perhaps all) of the verts in a polygonal mesh, find a scaled rotation that minimizes the difference between the transformed verts and the given point positions. I also have a center point that can remain fixed, if that helps. The correspondence between the verts and the 3D locations is fixed.
I'm thinking this could be done by solving for the coefficients of a transformation matrix, but I'm a little unsure how to set up the system to solve.
An example of this is a cube. The prototype would be the unit cube, centered at the origin, with vert indices:
4----5
|\    \
| 6----7
| |    |
0 |  1 |
 \|    |
  2----3
An example of the vert locations to fit:
v0: 1.243,2.163,-3.426
v1: 4.190,-0.408,-0.485
v2: -1.974,-1.525,-3.426
v3: 0.974,-4.096,-0.485
v5: 1.974,1.525,3.426
v7: -1.243,-2.163,3.426
So, given that prototype and those points, how do I find the single scale factor and the rotation about x, y, and z that minimize the distance between the verts and those positions? It would be best if the method generalized to an arbitrary mesh, not just a cube.
Assuming you have all the points and their correspondences, you can fine-tune your match by solving the least squares problem:
minimize Norm(T*V - M)
where T is the transformation matrix you are looking for, V are the vertices to fit, and M are the vertices of the prototype. Norm refers to the Frobenius norm. M and V are 3xN matrices where each column is a 3-vector of a vertex of the prototype and the corresponding vertex of the fitted vertex set. T is a 3x3 transformation matrix. The transformation matrix that minimizes the mean squared error is then M*transpose(V)*inverse(V*transpose(V)). The resulting matrix will in general not be orthogonal (you wanted one with no shear), so you can solve an orthogonal Procrustes problem to find the nearest orthogonal matrix with the SVD.
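A short sketch of both steps using Eigen (my choice of library; V and M are the 3xN matrices defined above):

#include <Eigen/Dense>

// Least-squares fit of T*V ~ M, followed by projection onto a scaled rotation.
Eigen::Matrix3d fitScaledRotation(const Eigen::Matrix3Xd& V,
                                  const Eigen::Matrix3Xd& M)
{
    // Normal-equations solution: T = M * V^T * (V * V^T)^-1.
    Eigen::Matrix3d T = M * V.transpose() * (V * V.transpose()).inverse();
    // Nearest orthogonal matrix via the SVD (the Procrustes step).
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(T, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixU() * svd.matrixV().transpose();
    // Recover a single uniform scale as the mean of the singular values.
    double scale = svd.singularValues().mean();
    return scale * R;
}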
Now, if you don't know which given points will correspond to which prototype points, the problem you want to solve is called surface registration. This is an active field of research. See for example this paper, which also covers rigid registration, which is what you're after.
If you want to create a mesh on an arbitrary 3D geometry, this is not the way it's typically done.
You should look at octree mesh generation techniques. You'll have better success if you work with a true 3D primitive, which means tetrahedra instead of cubes.
If your geometry is a 3D body, all you'll have to start with is a surface description. Determining "optimal" interior points isn't meaningful, because you don't have any. You'll want them arranged in such a way that the tetrahedra inside aren't too distorted, but that's the best you'll be able to do.