Path mapping using VectorNav VN100 IMU to map a route between two GPS coordinates

I'm trying to use a VectorNav VN100 IMU to map a path through an underground tunnel (GPS denied environment) and am wondering what is the best approach to take to do this.
I get lots of data points from the VN100; these include orientation/pose (Euler angles, quaternions) and acceleration and gyroscope values in three dimensions. The acceleration and gyro values are given in raw and filtered formats, where the filtered outputs have been filtered using an onboard Kalman filter.
In addition to the IMU measurements I also measure GPS-RTK coordinates in three dimensions at the start and end points of the tunnel.
How should I approach this mapping problem? I'm quite new to this area and do not know how to extract position from the acceleration and orientation data. I know acceleration can be integrated once to give velocity and that in turn can be integrated again to get position, but how do I combine this together with the orientation data (quaternions) to get the path?

In robotics, mapping means representing the environment using a perception sensor (such as a 2D/3D laser scanner or cameras).
Once you have the map, the robot can use it to determine its location (localization). The map is also used to find a path between locations so the robot can move from one place to another (path planning).
In your case you need a perception sensor to get a better location estimate. With only an IMU you can track the position using an Extended Kalman Filter (EKF), but it drifts quickly.
The Robot Operating System (ROS) has an EKF implementation you can refer to.

OK, so I came across a solution that gets me somewhat closer to my goal of finding the path travelled underground. Although it is by no means the final solution, I'm posting my algorithm here in the hope that it helps someone else.
My method is as follows:
Rotate the acceleration vector A = [Ax, Ay, Az] output by the VectorNav VN100 into the North, East, Down (NED) frame using the quaternion output Q = [q0, q1, q2, q3]. How to multiply a vector by a quaternion is outlined in this other post.
Basically you take the acceleration vector and add a fourth component to act as the scalar term, then multiply by the quaternion and its conjugate (N.B. the scalar term should be in the same position in both; in this case the quaternion's scalar term comes first, so a zero scalar term should be added to the start of the acceleration vector, i.e. A = [0, Ax, Ay, Az]). Then perform the following multiplication:
A_ned = Q A Q*
where Q* is the complex conjugate of the quaternion (i, j, and k terms are negated).
Integrate the rotated acceleration vector to get the velocity vector: V_ned
Integrate the velocity vector to get the position in north, east, down: R_ned
There is substantial drift in the velocity and position due to sensor bias. This can be corrected for somewhat if we know the start and end velocities and the start and end positions. In this case the start and end velocities were zero, so I used this to correct the drift in the velocity vector.
(Plots: uncorrected velocity vs. drift-corrected velocity)
My final comparison of the IMU position vs. GPS is shown here (read: there's still a long way to go).
(Plot: GPS-RTK data vs. VectorNav IMU data)
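For anyone who wants to experiment, here is a minimal Python/NumPy sketch of the pipeline above (rotate into NED, integrate twice, linear drift correction). It assumes a scalar-first quaternion that maps body to NED as in A_ned = Q A Q*, that gravity has already been removed from the NED acceleration, and a fixed sample interval dt; treat it as an illustration rather than the exact code I used.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in [q0, q1, q2, q3] (scalar-first) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def rotate_to_ned(a_body, q):
    """A_ned = Q A Q*, with a zero scalar term prepended to the acceleration."""
    a = np.concatenate(([0.0], np.asarray(a_body, float)))
    q = np.asarray(q, float)
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, a), q_conj)[1:]   # drop the scalar term

def path_from_imu(accels_body, quats, dt):
    """Rotate each sample into NED, integrate twice, and remove the linear
    velocity drift so the velocity is zero at both ends (start/end at rest)."""
    a_ned = np.array([rotate_to_ned(a, q) for a, q in zip(accels_body, quats)])
    v = np.cumsum(a_ned, axis=0) * dt                       # velocity (with drift)
    ramp = np.linspace(0.0, 1.0, len(v))[:, None] * v[-1]   # linear drift model
    v_corr = v - ramp                                       # corrected velocity
    r_ned = np.cumsum(v_corr, axis=0) * dt                  # position in NED
    return v_corr, r_ned
```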
Now I just need to come up with a sensor fusion algorithm to try to improve the position estimation...

Related

Solidworks Feature Recognition on a fill pattern/linear pattern

I am currently creating a feature and patterning it across a flat plane to get the maximum number of features to fit on the plane. I do this frequently enough to warrant building some sort of macro for this if possible. The issue I run into is that I still have to manually set the spacing between the parts. I want to be able to create a feature and have it determine the "best" fit spacing for a given area while avoiding overlaps. I have had very little luck finding any resources describing this. Any information or links to potentially helpful resources would be much appreciated!
Thank you.
Before you start the linear pattern bit:
Select the face2 of that feature2 and get the outermost loop2 of edges. You can test for that using loop2.IsOuter.
Now:
if the loop has one edge: that means it's a circle and the spacing must be greater than the circle's radius
if the loop has more than one edge, then you need to calculate all the distances between the vertices and assume that the largest distance is the safest spacing.
Note: if one of the edges is a spline, then you need a different strategy:
You would need to convert the face into a sketch and find the coordinates of that spline to calculate the largest distances.
Example: the distance between the edges is lower than the distance between the summits of the splines. If the linear pattern has a vertical direction, then the spacing has to be greater than the distance between the summits.
When I say distance, I mean the distance projected on the linear pattern direction.
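To illustrate the rule, here is a minimal Python sketch (not SolidWorks API code; the vertex list and pattern direction are assumed to come from your macro) that combines the largest-distance idea with the projection onto the pattern direction described above:

```python
import math
from itertools import combinations

def safe_spacing(vertices, direction):
    """Largest pairwise vertex distance projected onto the pattern direction.

    vertices: list of (x, y, z) points gathered from the feature's outer loop.
    direction: (x, y, z) vector of the linear pattern direction.
    """
    dx, dy, dz = direction
    norm = math.sqrt(dx*dx + dy*dy + dz*dz)
    d = (dx / norm, dy / norm, dz / norm)          # unit pattern direction
    best = 0.0
    for (ax, ay, az), (bx, by, bz) in combinations(vertices, 2):
        # distance between two vertices, projected on the pattern direction
        proj = abs((bx - ax) * d[0] + (by - ay) * d[1] + (bz - az) * d[2])
        best = max(best, proj)
    return best
```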

Calculating distance in m in xyz between GPS coordinates that are close together

I have a set of GPS coordinates and I want to find the speed required for a UAV to travel between them. I'm doing this by calculating the distance in x, y, z and then dividing by the travel time to get m/s.
I know about the great-circle distance, but I assume this will be inaccurate since the points are all relatively close together (within 10 m)?
Is there an accurate way to do this?
For small distances you can use the haversine formula without a relevant loss of accuracy compared to, for example, Vincenty's formula. Plus, it's designed to be accurate for very small distances. You can read up on this here if you are interested.
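For reference, a minimal Python haversine sketch (inputs in decimal degrees, Earth radius assumed to be 6,371 km):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius=6371000.0):
    """Great-circle distance in metres between two lat/lon points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2)**2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2)**2
    return 2 * radius * math.atan2(math.sqrt(a), math.sqrt(1 - a))
```

Dividing the result by the time between the two fixes then gives the speed in m/s.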
You can do this by converting lat/long/alt into XYZ format for both points. Then, figure out the rotation angles to move one of those points (usually the oldest point) so that it would be at lat=0, long=0, alt=0. Rotate the second position report (the newest point) by the same rotation angles. If you do it all correctly, you will find X equals the east offset, Y equals the north offset, and Z equals the up offset. You can use the Pythagorean theorem with the X and Y (east and north) offsets to determine the horizontal distance traveled. Normally, you just ignore the altitude differences and work with horizontal data only.
All of this assumes you are using accurate formulas to convert lat/lon/alt into XYZ. It also assumes you have enough precision in the lat/lon/alt values to be accurate. Approximations are not good if you want good results. Normally, you need about 6 decimal digits of precision in lat/lon values to compute positions down to the meter level of accuracy.
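As a sketch of that conversion, here are hand-rolled WGS84 formulas in Python (lat/lon/alt to ECEF, then an east/north/up offset about the reference point); treat this as an illustration rather than survey-grade code:

```python
import math

A = 6378137.0              # WGS84 semi-major axis (m)
E2 = 6.69437999014e-3      # WGS84 first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    """lat/lon in degrees, alt in metres -> ECEF X, Y, Z in metres."""
    lat, lon = math.radians(lat), math.radians(lon)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat)**2)
    x = (n + alt) * math.cos(lat) * math.cos(lon)
    y = (n + alt) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt) * math.sin(lat)
    return x, y, z

def enu_offset(lat0, lon0, alt0, lat1, lon1, alt1):
    """East/north/up offset in metres of point 1 relative to reference point 0."""
    x0, y0, z0 = geodetic_to_ecef(lat0, lon0, alt0)
    x1, y1, z1 = geodetic_to_ecef(lat1, lon1, alt1)
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    lat, lon = math.radians(lat0), math.radians(lon0)
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy
             + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy
          + math.sin(lat) * dz)
    return east, north, up
```

The horizontal distance is then math.hypot(east, north), which is the Pythagorean step described above.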
Keep in mind that this method doesn't work very well if you haven't moved far (greater than about 10 or 20 meters, more is better). There is enough noise in the GPS position reports that you are going to get jumpy velocity values that you will need to further filter to get good accuracy. The math approach isn't the problem here, it's the inherent noise in the GPS position reports. When you have good reports, you will get good velocity.
A GPS receiver doesn't normally use this approach to determine velocity. It looks at the way doppler values change for each satellite and factors in the current position to know what the velocity is. This works reasonably well when the vehicle is moving. It is a much faster way to detect changes in velocity (for instance, to release a position clamp). The normal user doesn't have access to the internal doppler values and the math gets very complicated, so it's not something you can do.

Maya Mel project lookAt target into place after motion capture import

I have a facial animation rig which I am driving in two different manners: it has an artist UI in the Maya viewports as is common for interactive animating, and I've connected it up with the FaceShift markerless motion capture system.
I envision a workflow where a performance is captured, imported into Maya, sample data is smoothed and reduced, and then an animator takes over for finishing.
Our face rig has the eye gaze controlled by a mini-hierarchy of three objects (global lookAtTarget and a left and right eye offset).
Because the eye gazes are controlled by this LookAt setup, they need to be disabled when eye-gaze-including motion capture data is imported.
After the motion capture data is imported, the eye gazes are now set with motion capture rotations.
I am seeking a short Mel routine that does the following: it marches through the motion capture eye rotation samples, back-calculates and sets each eye's LookAt target position, and averages the two to get the global LookAt target's position.
After that Mel routine is run, I can turn the eyes' LookAt constraints back on, the eye gaze control returns to the rig, nothing has changed visually, and the animator will have their eye UI working in the Maya viewport again.
I'm thinking this should be common logic for anyone doing facial mocap. Anyone got anything like this already?
How good is the eye tracking in the mocap? There may be issues if the targets are far away: depending on the sampling of the data, you may get 'crazy eyes' which seem not to converge, or jumpy data. If that's the case you may need to junk the eye data altogether, or smooth it heavily before retargeting.
To find the convergence of the two eyes, you can try this (like #julian, I'm using locators, etc. since doing all the math in Mel would be irritating).
1) constrain a locator to one eye so that one axis is oriented along the look vector and the other is in the plane of the second eye. Let's say the eye aims down Z and the second eye is in the XZ plane
2) make a second locator, parented to the first, and constrained to the second eye in the same way: pointing down Z, with the first eye in the XZ plane
3) the local Y rotation of the second locator is the angle of convergence between the two eyes.
4) Figure out the focal distance using the law of sines and a cheat for the offset of the second eye relative to the first. The local X distance of the second eye is one leg of a right triangle. The angles of the triangle are the convergence angle from #3 and 90 minus the convergence angle. In other words:
focal distance / sin(90 - eye_locator2.ry) = eye_locator2.tx / sin(eye_locator2.ry)
so algebraically:
focal distance = eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)
You'll have to subtract the local Z of eye2, since the triangle we're solving is shifted backwards or forwards by that much:
focal distance = (eye_locator2.tx * sin(90 - eye_locator2.ry) / sin( eye_locator2.ry)) - eye_locator2.tz
5) Position the target along the local Z direction of the eye locator at the distance derived above. It sounds like the actual control uses two look targets that can be moved apart to avoid cross-eyes - it's kind of a judgment call to know how much to use that vs the actual convergence distance. For lots of real-world data the convergence may be way too far away for animator convenience: a target 30 meters away is pretty impractical to work with, but might be simulated with a target 10 meters away with a big spread. Unfortunately there's no empirical answer for that one - it's a judgment call.
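Putting steps 4 and 5 together, here is a small Python sketch of the focal distance calculation (assuming the locator attributes have already been sampled; remember Maya rotation channels are in degrees while math.sin expects radians):

```python
import math

def focal_distance(tx, ry_degrees, tz):
    """Focal distance from eye_locator2's local transform, per steps 4-5 above.

    tx, tz: eye_locator2's local X and Z translation.
    ry_degrees: eye_locator2's local Y rotation (the convergence angle).
    Note: ry near zero means the eyes are effectively parallel (target at infinity).
    """
    ry = math.radians(ry_degrees)
    return (tx * math.sin(math.pi / 2.0 - ry) / math.sin(ry)) - tz
```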
I don't have this script but it would be fairly simple. Can you provide an example maya scene? You don't need any math. Here's how you could go about it:
Assume the axis pointing through the pupil is positive X, and focal length is 10 units.
Create 2 locators. Parent one to each eye. Set their translations to (10, 0, 0).
Create 2 more locators in worldspace. Point constrain them to the others.
Create a plusMinusAverage node.
Connect the worldspace locators' translations to plusMinusAverage1 inputs 1 and 2
Create another locator (the lookAt)
Connect the output of plusMinusAverage1 to the translation of the lookAt locator.
Bake the translation of the lookAt locator.
Delete the other 4 locators.
Aim constrain the eyes' X axes to the lookAt.
This can all be done in a script using commands: spaceLocator, createNode, connectAttr, setAttr, bakeSimulation, pointConstraint, aimConstraint, delete.
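A hypothetical Python (maya.cmds) sketch of those steps follows; the node names and the averaging setup are made up for illustration, the eyes are assumed to aim down +X as above, and the frame range and eye transform names would need adapting to your rig:

```python
import maya.cmds as cmds

def bake_lookat(left_eye, right_eye, start, end, focal=10.0):
    # 1-2) locators parented to each eye, pushed out along +X by the focal length
    local = []
    for eye in (left_eye, right_eye):
        loc = cmds.spaceLocator(name=eye + '_aimLocal')[0]
        cmds.parent(loc, eye)
        cmds.setAttr(loc + '.translate', focal, 0, 0, type='double3')
        local.append(loc)

    # 3) worldspace locators point-constrained to the local ones
    world = []
    for loc in local:
        w = cmds.spaceLocator(name=loc + '_world')[0]
        cmds.pointConstraint(loc, w)
        world.append(w)

    # 4-7) average the two worldspace positions into the lookAt locator
    avg = cmds.createNode('plusMinusAverage', name='lookAt_avg')
    cmds.setAttr(avg + '.operation', 3)   # 3 = average
    cmds.connectAttr(world[0] + '.translate', avg + '.input3D[0]')
    cmds.connectAttr(world[1] + '.translate', avg + '.input3D[1]')
    look = cmds.spaceLocator(name='lookAt')[0]
    cmds.connectAttr(avg + '.output3D', look + '.translate')

    # 8-9) bake the lookAt translation and delete the helpers
    cmds.bakeResults(look, time=(start, end), simulation=True,
                     attribute=['tx', 'ty', 'tz'])
    cmds.delete(local + world + [avg])

    # 10) aim the eyes' X axes back at the baked lookAt
    for eye in (left_eye, right_eye):
        cmds.aimConstraint(look, eye, aimVector=(1, 0, 0))
    return look
```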
The solution ended up being quite simple. The situation is motion capture data on the rotation nodes of the eyes, while simultaneously wanting (non-technical) animator over-ride control for the eye gaze. Within Maya, constraints have a weight factor: a parametric 0-1 value controlling the influence of the constraint. The solution is for the animator to simply key the eyes' lookAt constraint weight to 1 when they want control over the eye gaze, key those same weights to 0 when they want the motion captured eye gaze, and use a smooth transition of those constraint weights to mask the transition. This is better than my original idea described above, because the original motion capture data remains in place, available as reference, allowing the animator to switch back and forth if need be.

Where can I find detailed resources on object state prediction for use with dead reckoning?

I have a server and a client.
I have 40 OpenGL cubes. Their state is described by a 3D vector for position and a 3x3 rotation matrix (or a quaternion).
How can I send intermediate packets and predict the object state on the client between those packets (extrapolation)?
For the object position I can use a linear predictor on velocity.
How to predict quaternion states?
The easiest thing, parallel to what you're doing with linear velocity, is to use a linear predictor on angular velocity.
If you have two quaternions, q_0 and q_t, representing global orientations that are t seconds apart, you can compute the finite difference between the two quaternions and use that to find an angular velocity that can be used for extrapolation.
Make sure that the inner-product between q_0 and q_t is non-negative. If it's negative, negate all the components of one of the quaternions. This makes sure that we're not trying to go the long way around. If your bodies are rotating really fast relative to your sampling, this is a problem and you'll need a more complicated model that accounts for the previous angular velocity and makes assumptions about maximum possible acceleration. We'll assume that's not the case.
Then we compute the relative difference quaternion, dq = q_t * q_0' (where q_0' is the quaternion rotational inverse/conjugate). If you have the luxury of fixed-sized steps, you can stop here and predict the next orientation t seconds into the future: q_2t = dq * q_t.
If we can't step forward by integer multiples of t, we compute the angle of rotation from dq. Quaternions and angular velocities are both variations on "axis-angle" representations of changes in orientation. If you rotate by Θ around unit-length axis [x,y,z], then the quaternion representation of that is q = [cos(Θ/2), sin(Θ/2)x, sin(Θ/2)y, sin(Θ/2)z] (using the quaternion convention where the w component comes first). If you rotate by Θ/t around axis [x,y,z], then the angular velocity is v = [Θx,Θy,Θz]/t. So v = Θ[q.x,q.y,q.z]/(t||[q.x,q.y,q.z]||). We can compute the angle two ways: Θ = 2acos(q.w) = 2asin(||[q.x,q.y,q.z]||). These will always be the same because of step 1. Numerics make it nicer to use sine since we need to find m = ||[q.x,q.y,q.z]|| anyway for the next step.
If m is large enough, then we just find the angular velocity:
v = 2asin(m)[dq.x,dq.y,dq.z]/(m*t)
However, if m's not large enough, we'll face numeric issues trying to divide by near-zero. So programmers will use the Taylor expansion of the sinc() function around zero, which happens to be very accurate in this case. Remember that m = sin(Θ/2). With m<1e-4, we can accurately compute asin(m)/m = 6/(6-m*m). Then you just need to multiply the result by 2*[dq.x,dq.y,dq.z]/t and you have your angular velocity. Phew.
Extrapolating is then a matter of multiplying your angular velocity times the time that has passed. Then you go backwards, converting the angular change to a quaternion and multiplying it onto q_t.
It seems like there must be an easier way...
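Easier or not, here is a minimal Python/NumPy sketch of the recipe above (unit quaternions in w-first order assumed, angular velocity expressed in the global frame):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in [w, x, y, z] order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def angular_velocity(q0, qt, t):
    """Finite-difference angular velocity (rad/s) from two unit quaternions t seconds apart."""
    q0, qt = np.asarray(q0, float), np.asarray(qt, float)
    if np.dot(q0, qt) < 0.0:            # step 1: take the short way around
        qt = -qt
    dq = quat_mul(qt, quat_conj(q0))    # relative rotation over the interval
    m = np.linalg.norm(dq[1:])          # m = sin(theta/2)
    if m < 1e-4:                        # near-zero rotation: asin(m)/m ~= 6/(6 - m*m)
        scale = 2.0 * (6.0 / (6.0 - m * m)) / t
    else:
        scale = 2.0 * np.arcsin(m) / (m * t)
    return scale * dq[1:]

def extrapolate(qt, w, dt):
    """Predict the orientation dt seconds after qt, given angular velocity w (rad/s)."""
    qt, w = np.asarray(qt, float), np.asarray(w, float)
    angle = np.linalg.norm(w) * dt
    if angle < 1e-12:
        return qt
    axis = w / np.linalg.norm(w)
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))
    q = quat_mul(dq, qt)
    return q / np.linalg.norm(q)        # renormalise to fight drift
```

Usage would be w = angular_velocity(q0, qt, t) from the last two received orientations, then q_pred = extrapolate(qt, w, dt) each frame until the next packet arrives.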

Continuous collision detection between two moving tetrahedra

My question is fairly simple. I have two tetrahedra, each with a current position, a linear speed in space, an angular velocity and a center of mass (center of rotation, actually).
Having this data, I am trying to find a (fast) algorithm which would precisely determine (1) whether they will collide at some point in time, and if so, (2) after how much time the collision occurs and (3) the point of collision.
Most people would solve this by doing triangle-triangle collision detection, but this would waste a few CPU cycles on redundant operations, such as checking the same edge of one tetrahedron against the same edge of the other tetrahedron when checking different triangles. This only means I'll optimize things a bit. Nothing to worry about.
The problem is that I am not aware of any public CCD (continuous collision detection) triangle-triangle algorithm which takes self-rotation in account.
Therefore, I need an algorithm which takes the following data as input:
vertex data for three triangles
position and center of rotation/mass
linear velocity and angular velocity
And would output the following:
Whether there is a collision
After how much time the collision occurred
In which point in space the collision occurred
Thanks in advance for your help.
The commonly used discrete collision detection would check the triangles of each shape for collision, over successive discrete points in time. While straightforward to compute, it could miss a fast moving object hitting another one, due to the collision happening between discrete points in time tested.
Continuous collision detection would first compute the volumes traced by each triangle over an interval of time. For a triangle moving at constant speed and without rotation, this volume could look like a triangular prism. CCD would then check for collision between the volumes, and finally trace back if and at what time the triangles actually shared the same space.
When angular velocity is introduced, the volume traced by each triangle no longer looks like a prism. It might look more like the shape of a screw, like a strand of DNA, or some other non-trivial shapes you might get by rotating a triangle around some arbitrary axis while dragging it linearly. Computing the shape of such volume is no easy feat.
One approach might first compute the sphere that contains an entire tetrahedron when it is rotating at the given angular velocity vector, if it was not moving linearly. You can compute a rotation circle for each vertex, and derive the sphere from that. Given a sphere, we can now approximate the extruded CCD volume as a cylinder with the radius of the sphere and progressing along the linear velocity vector. Finding collisions of such cylinders gets us a first approximation for an area to search for collisions in.
A second, complementary approach might attempt to approximate the actual volume traced by each triangle by breaking it down into small, almost-prismatic sub-volumes. It would take the triangle positions at two increments of time, and add surfaces generated by tracing the triangle vertices at those moments. It's an approximation because it connects a straight line rather than an actual curve. For the approximation to avoid gross errors, the duration between successive moments needs to be short enough that the triangle only completes a small fraction of a rotation. The duration can be derived from the angular velocity.
The second approach creates many more polygons! You can use the first approach to limit the search volume, and then use the second to get higher precision.
If you're solving this for a game engine, you might find the precision of above sufficient (I would still shudder at the computational cost). If, rather, you're writing a CAD program or working on your thesis, you might find it less than satisfying. In the latter case, you might want to refine the second approach, perhaps by a better geometric description of the volume occupied by a turning, moving triangle -- when limited to a small turn angle.
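As a sketch of the first (bounding-sphere) approach, here is a Python/NumPy version that fixes a rotation-invariant sphere per tetrahedron and then finds the earliest contact of the two swept spheres, assuming constant linear velocities; it only narrows down a time window, it does not give the exact triangle contact:

```python
import numpy as np

def bounding_radius(vertices, com):
    """Radius of the sphere, centred on the centre of mass, that contains the
    tetrahedron in every orientation (any rotation about the CM)."""
    return max(np.linalg.norm(np.asarray(v, float) - np.asarray(com, float))
               for v in vertices)

def sphere_sweep_contact(c1, v1, r1, c2, v2, r2):
    """Earliest time t >= 0 at which two spheres moving at constant velocity
    come within touching distance, or None if they never do."""
    dc = np.asarray(c1, float) - np.asarray(c2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    r = r1 + r2
    a = dv @ dv
    b = 2.0 * (dc @ dv)
    c = dc @ dc - r * r
    if c <= 0.0:
        return 0.0                      # already overlapping at t = 0
    if a == 0.0:
        return None                     # no relative motion, never touch
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                     # closest approach never reaches r1 + r2
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None      # contact would have been in the past
```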
I have spent quite a lot of time wondering about geometry problems like this one, and it seems like accurate solutions, despite their simple statements, are way too complicated to be practical, even for analogous 2D cases.
But intuitively I see that such solutions do exist when you consider linear translation velocities and linear angular velocities. Don't think you'll find the answer on the web or in any book because what we're talking about here are special, yet complex, cases. An iterative solution is probably what you want anyway -- the rest of the world is satisfied with those, so why shouldn't you be?
If you were trying to collide non-rotating tetrahedra, I'd suggest taking the Minkowski sum and performing a ray check, but that won't work with rotation.
The best I can come up with is to perform swept-sphere collision using their bounding spheres to give you a range of times to check using bisection or what-have-you.
Here's an outline of a closed-form mathematical approach. Each element of this will be easy to express individually, and the final combination of these would be a closed form expression if one could ever write it out:
1) The equation of motion for each point of the tetrahedra is fairly simple in its own coordinate system. The motion of the center of mass (CM) will just move smoothly along a straight line, and the corner points will rotate around an axis through the CM, assumed to be the z-axis here, so the equation for each corner point (parameterized by time, t) is p = v t + x + r (sin(w t + s) i + cos(w t + s) j), where v is the vector velocity of the center of mass; r is the radius of the projection onto the x-y plane; i, j, and k are the x, y and z unit vectors; and x and s account for the starting position and phase of rotation at t = 0.
2) Note that each object has its own coordinate system to easily represent the motion, but to compare them you'll need to rotate each into a common coordinate system, which may as well be the coordinate system of the screen. (Note though that the different coordinate systems are fixed in space and not traveling with the tetrahedra.) So determine the rotation matrices and apply them to each trajectory (i.e. the points and CM of each of the tetrahedra).
3) Now you have an equation for each trajectory, all within the same coordinate system, and you need to find the times of the intersections. This can be found by testing whether any of the line segments from the points to the CM of a tetrahedron intersects any of the triangles of the other. This also has a closed-form expression, as can be found here.
Layering these steps will make for terribly ugly equations, but it wouldn't be hard to solve them computationally (although with the rotation of the tetrahedra you need to be sure not to get stuck in a local minimum). Another option might be to plug it into something like Mathematica to do the cranking for you. (Not all problems have easy answers.)
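A minimal Python sketch of steps 1 and 2, evaluating one corner's trajectory and rotating it into the common frame (the rotation matrix R and the motion parameters v, x, r, w, s are assumed to be known for that corner):

```python
import numpy as np

def corner_position(t, v, x, r, w, s, R):
    """p(t) = v t + x + r (sin(w t + s) i + cos(w t + s) j), rotated by R.

    v, x: 3-vectors (CM velocity and offset) in the body's fixed frame.
    r, w, s: the corner's rotation radius, angular rate and phase.
    R: 3x3 matrix rotating the body's fixed frame into the common (screen) frame.
    """
    v, x = np.asarray(v, float), np.asarray(x, float)
    R = np.asarray(R, float)
    i_hat = np.array([1.0, 0.0, 0.0])
    j_hat = np.array([0.0, 1.0, 0.0])
    p_local = v * t + x + r * (np.sin(w * t + s) * i_hat + np.cos(w * t + s) * j_hat)
    return R @ p_local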
Sorry I'm not a math boff and have no idea what the correct terminology is. Hope my poor terms don't hide my meaning too much.
Pick some arbitrary timestep.
Compute the bounds of each shape in two dimensions perpendicular to the axis it is moving on for the timestep.
For a timestep:
If the shafts swept by those bounds for any two objects intersect, halve the timestep and recurse into it.
It's a kind of binary search of increasingly fine precision to discover the point at which a finite intersection occurs.
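A rough Python sketch of that recursive halving, where bounds_overlap is a hypothetical callback that conservatively tests whether the swept bounds of the two objects intersect over a time window:

```python
def first_contact_time(bounds_overlap, t0, t1, tol=1e-5):
    """Recursively halve the time window to bracket the earliest possible contact.

    bounds_overlap(a, b): returns True if the conservative swept bounds of the
    two objects intersect anywhere in [a, b]. Returns an approximate contact
    time, or None if the bounds never intersect in [t0, t1].
    """
    if not bounds_overlap(t0, t1):
        return None                       # no possible contact in this window
    if t1 - t0 < tol:
        return t0                         # window fine enough: report contact time
    mid = 0.5 * (t0 + t1)
    early = first_contact_time(bounds_overlap, t0, mid, tol)
    if early is not None:
        return early                      # earliest contact lies in the first half
    return first_contact_time(bounds_overlap, mid, t1, tol)
```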
Your problem can be cast into a linear programming problem and solved exactly.
First, suppose (p0,p1,p2,p3) are the vertexes at time t0, and (q0,q1,q2,q3) are the vertexes at time t1 for the first tetrahedron, then in 4d space-time, they fill the following 4d closed volume
V = { (r,t) | (r,t) = a0 (p0,t0) + … + a3 (p3,t0) + b0 (q0,t1) + … + b3 (q3,t1) }
Here the a0...a3 and b0…b3 parameters are in the interval [0,1] and sum to 1:
a0+a1+a2+a3+b0+b1+b2+b3=1
The second tetrahedron similarly fills a convex volume (add a ’ to everything above to define V’, the 4d volume for that moving tetrahedron).
Now the intersection of two convex polytopes is a convex polytope. The first time this happens would satisfy the following linear programming problem:
If (p0,p1,p2,p3) moves to (q0,q1,q2,q3)
and (p0’,p1’,p2’,p3’) moves to (q0’,q1’,q2’,q3’)
then the first time of intersection happens at points/times (r,t):
Minimize t0*(a0+a1+a2+a3)+t1*(b0+b1+b2+b3) subject to
0 <= ak <= 1, 0 <= bk <= 1, 0 <= ak’ <= 1, 0 <= bk’ <= 1, k = 0..3
a0*(p0,t0) + … + a3*(p3,t0) + b0*(q0,t1) + … + b3*(q3,t1)
= a0’*(p0’,t0) + … + a3’*(p3’,t0) + b0’*(q0’,t1) + … + b3’*(q3’,t1)
The last is actually 4 equations, one for each dimension of (r,t).
This is a total of 20 linear constraints on the 16 values ak, bk, ak', and bk'.
If there is a solution, then
(r,t)= a0*(p0,t0) + … + a3*(p3,t0) + b0*(q0,t1) + … + b3*(q3,t1)
Is a point of first intersection. Otherwise they do not intersect.
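A sketch of that linear program using scipy.optimize.linprog (variable names follow the formulation above; like the formulation itself, it assumes each vertex moves in a straight line from its t0 position to its t1 position, so rotation within the interval is not captured):

```python
import numpy as np
from scipy.optimize import linprog

def first_intersection(P, Q, Pp, Qp, t0, t1):
    """4D space-time intersection of two moving tetrahedra as an LP.

    P, Q: (4, 3) vertex arrays of tetrahedron 1 at times t0 and t1.
    Pp, Qp: the same for the primed tetrahedron.
    Returns (r, t) of the earliest shared space-time point, or None.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    Pp, Qp = np.asarray(Pp, float), np.asarray(Qp, float)

    # Variable order: a0..a3, b0..b3, a0'..a3', b0'..b3'
    c = np.concatenate([np.full(4, t0), np.full(4, t1), np.zeros(8)])  # minimize t

    A_eq, b_eq = [], []
    # Convexity: a/b weights sum to 1, and a'/b' weights sum to 1.
    A_eq.append(np.concatenate([np.ones(8), np.zeros(8)])); b_eq.append(1.0)
    A_eq.append(np.concatenate([np.zeros(8), np.ones(8)])); b_eq.append(1.0)
    # Spatial coincidence: both convex combinations give the same point r.
    for d in range(3):
        A_eq.append(np.concatenate([P[:, d], Q[:, d], -Pp[:, d], -Qp[:, d]]))
        b_eq.append(0.0)
    # Temporal coincidence: both combinations give the same time t.
    A_eq.append(np.concatenate([np.full(4, t0), np.full(4, t1),
                                np.full(4, -t0), np.full(4, -t1)]))
    b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * 16)
    if not res.success:
        return None                              # swept volumes never intersect
    a, b = res.x[:4], res.x[4:8]
    r = a @ P + b @ Q                            # point of first intersection
    t = t0 * a.sum() + t1 * b.sum()              # time of first intersection
    return r, t
```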
Thought about this in the past but lost interest... The best way to go about solving it would be to abstract out one object.
Make a coordinate system where the first tetrahedron is the center (barycentric coords or a skewed system with one point as the origin) and abstract out the rotation by making the other tetrahedron rotate around the center. This should give you parametric equations if you make the rotation a function of time.
Add the movement of the center of mass towards the first and its spin and you have a set of equations for movement relative to the first (distance).
Solve for t where the distance equals zero.
Obviously with this method the more effects you add (like wind resistance) the messier the equations get, but it's still probably the simplest (almost every other collision technique uses this method of abstraction). The biggest problem is that if you add any effects that have feedback with no analytical solution, the whole equation becomes unsolvable.
Note: if you go the route of a skewed system, watch out for pitfalls with distance. You must be in the right octant! This method favors vectors and quaternions, though, while barycentric coords favor matrices. So pick whichever your system uses most effectively.