How to convert relative GPS coordinates to a "local custom" (x, y, z) coordinate system?

Let's say I know two people are standing at GPS locations A and B, and A is looking at B.
I would like to know B's (x, y, z) coordinates relative to A, where the +y axis is the direction from A to B (since A is looking at B) and +z points vertically to the sky (therefore +x is to A's right-hand side).
I know how to convert a GPS coordinate to UTM, but in this case a coordinate-system rotation and translation also seem needed. I am going to work out the calculation myself, but before that, is there existing code to look at?
I think this must be handled by many applications, but I could not find any so far.

1. Convert both points to 3D Cartesian
GPS suggests WGS84, so see How to convert a spherical velocity coordinates into cartesian.
2. Construct a transform matrix with your desired axes
See Understanding 4x4 homogenous transform matrices. You need 3 perpendicular unit vectors. Y is the view direction, so
Y = normalize(B-A);
One of the axes will most likely be the up vector, so you can use the approximation
Z = normalize(A);
and as the origin you can use point A directly. Now just exploit the cross product to create X perpendicular to both, and also make Y perpendicular to X and Z (so up stays up). For more info see Representing Points on a Circular Radar Math approach.
3. Transform B to B' by that matrix
Again, the QA linked in #1 shows how to do it. It is a simple matrix/vector multiplication.
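Here is a minimal C sketch of steps #2 and #3, assuming A and B have already been converted to geocentric (ECEF) Cartesian coordinates per step #1; the sample coordinates are made-up placeholders.

/* Build the local frame X (right), Y (view direction), Z (approximate up)
   at A and express B in it. A and B are assumed to be in ECEF meters. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { return (vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 cross(vec3 a, vec3 b) {
    return (vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static vec3 normalize(vec3 a) {
    double l = sqrt(dot(a, a));
    return (vec3){a.x / l, a.y / l, a.z / l};
}

int main(void) {
    vec3 A = {4198945.0, 174747.0, 4781887.0};  /* placeholder ECEF points */
    vec3 B = {4199000.0, 174800.0, 4781900.0};

    vec3 Y = normalize(sub(B, A));    /* view direction                */
    vec3 Z = normalize(A);            /* approximate up (radial)       */
    vec3 X = normalize(cross(Y, Z));  /* right = forward x up          */
    Y = cross(Z, X);                  /* re-orthogonalize; up stays up */

    /* rotate/translate B into the local frame with origin A */
    vec3 d = sub(B, A);
    printf("B' = (%f, %f, %f)\n", dot(d, X), dot(d, Y), dot(d, Z));
    return 0;
}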

Related

Find the furthest y Cartesian coordinate for a 6 DoF robot in joint coordinates

I've got a robotic arm with 6 DoF. I add the constraint that the x and z Cartesian coordinates and the orientation are exactly specified. I would like to get the joint coordinates at Cartesian position [x, y_max, z], where y_max is the maximum y Cartesian coordinate reachable by the end-effector of the robotic arm.
For example:
I set x to be 0.5 and z to be 1.0, and I want to find joint coordinates such that, after forward kinematics, the robot's end-effector is at Cartesian coordinates [0.5, maximum reachable coordinate, 1.0].
I know that if I know the Cartesian position and orientation I can find the joint coordinates by inverse kinematics and check that the end-effector is at the desired coordinates by forward kinematics, but what if I don't know one of the Cartesian axes, and how far it is possible to move depends on the robot? As far as I know, inverse kinematics can be solved analytically or numerically, but to solve it I need to know the whole frame of the target coordinate.
Moreover, I would like to have the orientation depend on the y coordinate (for example, I would like to guarantee that the end-effector is always looking at coordinates [0.5, 0, 0]).
You could use a numerical task-based inverse kinematics with a task such as:
Orientation: the orientation you have specified
Position in (x, z): the coordinates you have specified
Position in y: something very far away
The behavior of a task-based approach (with proper damping) when a target is not feasible is to "stretch" the robot as far as it can without violating its constraints. [Illustration: a humanoid robot running three such tasks.]
(for example, I would like to guarantee that the end-effector is always looking at coordinates [0.5, 0, 0])
This should be possible with a proper task as well. For example, in C++ the mc_rtc framework has a LookAtTask to keep a frame looking at a desired point.
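To make the idea concrete, here is a minimal numerical sketch in C of damped least-squares IK, on a toy planar 3R arm rather than the asker's 6 DoF robot; the link lengths, damping value and error clamping are all illustrative assumptions. The y target is placed very far away, so the arm stretches toward it as far as it can without diverging.

/* Damped least-squares IK on a planar 3R arm: hold x near tx while
   pulling y toward a far-away target. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double L[3] = {1.0, 1.0, 1.0};  /* link lengths (assumed) */
    double q[3] = {0.3, 0.3, 0.3};        /* joint angles, initial guess */
    const double tx = 0.5, ty = 100.0;    /* x fixed, y "very far away" */
    const double lambda2 = 0.01;          /* damping factor squared */
    double x = 0.0, y = 0.0;

    for (int it = 0; it < 500; ++it) {
        /* forward kinematics of the end-effector */
        double a1 = q[0], a2 = q[0] + q[1], a3 = q[0] + q[1] + q[2];
        x = L[0]*cos(a1) + L[1]*cos(a2) + L[2]*cos(a3);
        y = L[0]*sin(a1) + L[1]*sin(a2) + L[2]*sin(a3);

        /* task error; clamp its norm so the far target acts as a direction */
        double ex = tx - x, ey = ty - y;
        double n = sqrt(ex*ex + ey*ey);
        if (n > 0.1) { ex *= 0.1 / n; ey *= 0.1 / n; }

        /* 2x3 Jacobian of (x, y) with respect to the joint angles */
        double J[2][3] = {
            {-L[0]*sin(a1) - L[1]*sin(a2) - L[2]*sin(a3),
             -L[1]*sin(a2) - L[2]*sin(a3),
             -L[2]*sin(a3)},
            { L[0]*cos(a1) + L[1]*cos(a2) + L[2]*cos(a3),
              L[1]*cos(a2) + L[2]*cos(a3),
              L[2]*cos(a3)}
        };

        /* damped least squares: dq = J^T (J J^T + lambda^2 I)^(-1) e */
        double A = lambda2, B = 0.0, C = lambda2;  /* 2x2 J J^T + damping */
        for (int j = 0; j < 3; ++j) {
            A += J[0][j]*J[0][j];
            B += J[0][j]*J[1][j];
            C += J[1][j]*J[1][j];
        }
        double det = A*C - B*B;
        double u = ( C*ex - B*ey) / det;
        double v = (-B*ex + A*ey) / det;
        for (int j = 0; j < 3; ++j) q[j] += J[0][j]*u + J[1][j]*v;
    }
    printf("joints: %f %f %f  end-effector: (%f, %f)\n",
           q[0], q[1], q[2], x, y);
    return 0;
}

With the orientation and (x, z) tasks of a real solver added on top, the same stretching behavior carries over to the 6 DoF case.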

Is there a simple math solution to sample a disk area light? (Raytracing)

I'm trying to implement different types of lights in my ray-tracer coded in C. I have successfully implemented spot, point, directional and rectangular area lights.
For a rectangular area light I define two vectors (U and V) in space and use them to move within the virtual (delimited) rectangle they form.
Depending on the intensity of the light I take several samples on the rectangle, then I calculate the amount of light reaching a point as though each sample were a single spot light.
With rectangles it is very easy to find the position of the various samples, but things get complicated when I try to do the same with a disk light.
I found little documentation about this, and most of it uses ready-made functions.
The only interesting thing I found is this document (https://graphics.pixar.com/library/DiskLightSampling/paper.pdf), but I'm unable to exploit it.
Would you know how to help me achieve a similar result (to the following image) with vector operations? (e.g. given the origin, orientation and radius of the disk and the number of samples)
Any advice or documentation in this regard would help me a lot.
This question reduces to:
How can I pick a uniformly-distributed random point on a disk?
A naive approach would be to generate random polar coordinates and transform them to Cartesian coordinates:
1. Randomly generate an angle θ between 0 and 2π
2. Randomly generate a distance d between 0 and the radius r of your disk
3. Transform to Cartesian coordinates with x = d cos θ and y = d sin θ
This is incorrect because it causes the points to bunch up in the center: d is uniform, so half the samples fall within r/2 of the center, an area that is only a quarter of the disk.
A correct, but inefficient, way to do this is rejection sampling (a short C sketch follows this list):
1. Uniformly generate random x and y, each over [−1, 1]
2. If sqrt(x^2 + y^2) < 1, return the point (scaled by your radius r)
3. Otherwise, go back to step 1
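For illustration, the rejection loop is only a few lines in C (frand() is a hypothetical uniform helper built on rand()):

#include <stdlib.h>

static double frand(void) { return rand() / (double)RAND_MAX; }

/* Keep drawing points in the [-1, 1] square until one lands in the unit disk. */
static void reject_sample_unit_disk(double *x, double *y) {
    do {
        *x = 2.0 * frand() - 1.0;
        *y = 2.0 * frand() - 1.0;
    } while (*x * *x + *y * *y >= 1.0);  /* outside the disk: retry */
}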
The correct way is to take the square root of a uniform variate for the distance:
1. Randomly generate an angle θ between 0 and 2π
2. Randomly generate a value u uniformly between 0 and 1, and set the distance d = r·sqrt(u)
3. Transform to Cartesian coordinates with x = d cos θ and y = d sin θ
The square root compensates for the fact that the area of a thin ring grows linearly with its distance from the center.
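Putting this together in C for the asker's use case (origin, orientation, radius and number of samples): a sketch, where the basis construction and the rand()-based variates are assumptions rather than any standard API.

/* Uniformly sample points on a 3D disk given its center, unit normal
   and radius. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y, z; } vec3;

static double frand(void) { return rand() / (double)RAND_MAX; }

/* Build two unit vectors u, v spanning the plane perpendicular to n. */
static void disk_basis(vec3 n, vec3 *u, vec3 *v) {
    /* pick any vector not parallel to n, then take cross products */
    vec3 a = fabs(n.x) < 0.9 ? (vec3){1, 0, 0} : (vec3){0, 1, 0};
    u->x = n.y*a.z - n.z*a.y;
    u->y = n.z*a.x - n.x*a.z;
    u->z = n.x*a.y - n.y*a.x;
    double len = sqrt(u->x*u->x + u->y*u->y + u->z*u->z);
    u->x /= len; u->y /= len; u->z /= len;
    v->x = n.y*u->z - n.z*u->y;  /* v = n x u, already unit length */
    v->y = n.z*u->x - n.x*u->z;
    v->z = n.x*u->y - n.y*u->x;
}

static vec3 sample_disk(vec3 center, vec3 u, vec3 v, double radius) {
    double theta = 2.0 * M_PI * frand();
    double d = radius * sqrt(frand());  /* sqrt => uniform over the area */
    double su = d * cos(theta), sv = d * sin(theta);
    return (vec3){ center.x + su*u.x + sv*v.x,
                   center.y + su*u.y + sv*v.y,
                   center.z + su*u.z + sv*v.z };
}

int main(void) {
    vec3 center = {0, 0, 0}, normal = {0, 0, 1}, u, v;
    disk_basis(normal, &u, &v);
    for (int i = 0; i < 8; ++i) {  /* number of samples */
        vec3 p = sample_disk(center, u, v, 2.0);
        printf("%f %f %f\n", p.x, p.y, p.z);
    }
    return 0;
}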

GLKView GLKMatrix4MakeLookAt description and explanation

For the modelview matrix I understand how to form translation and scale matrices, but I am unable to understand how to form the view matrix using GLKMatrix4MakeLookAt. Can anyone explain how it works and how to give values to its parameters (eye, center and up X/Y/Z)?
GLK_INLINE GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
                                           float centerX, float centerY, float centerZ,
                                           float upX, float upY, float upZ)
GLKMatrix4MakeLookAt creates a viewing matrix (in the same way gluLookAt does, in case you look at other OpenGL code). As the parameters suggest, it considers the position of the viewer's eye, the point in space they're looking at (e.g., a point on an object), and the up vector, which specifies which direction is "up" (e.g., pointing towards the sky). The viewing matrix it generates is the combination of a rotation matrix (composed of a set of orthonormal basis vectors) and a translation.
Logically, the matrix is basically constructed in a few steps:
1. Compute the line-of-sight vector, which is the normalized vector going from the eye's position to the point you're looking at (the center point).
2. Compute the cross product of the line-of-sight vector with the up vector, and normalize the resulting vector.
3. Compute the cross product of the vector computed in step 2 with the line-of-sight vector to complete the orthonormal basis.
4. Create a 3x3 rotation matrix by setting the first row to the vector created in step 2, the middle row to the vector from step 3, and the bottom row to the negated, normalized line-of-sight vector.
These steps produce a rotation matrix that rotates the world coordinate system into eye coordinates (a coordinate system where the eye is located at the origin and the line of sight runs down the -z axis). The final viewing matrix is computed by multiplying this rotation by a translation to the negated eye position, which moves the "world-coordinate eye" to the origin of eye coordinates.
Here's a related question showing the code of GLKMatrix4MakeLookAt, and here's a question with more detail about eye coordinates and related coordinate systems: What exactly are eye space coordinates?
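For reference, a minimal C sketch of those four steps (column-major, as OpenGL expects); this follows gluLookAt's documented construction and is an illustration, not Apple's GLKit source.

#include <math.h>

typedef struct { float x, y, z; } v3;

static v3 v3sub(v3 a, v3 b) { return (v3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static float v3dot(v3 a, v3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static v3 v3cross(v3 a, v3 b) {
    return (v3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static v3 v3norm(v3 a) {
    float l = sqrtf(v3dot(a, a));
    return (v3){a.x / l, a.y / l, a.z / l};
}

/* Writes a column-major 4x4 viewing matrix into m[16]. */
void look_at(float m[16], v3 eye, v3 center, v3 up) {
    v3 f = v3norm(v3sub(center, eye));  /* step 1: line of sight   */
    v3 s = v3norm(v3cross(f, up));      /* step 2: side = f x up   */
    v3 u = v3cross(s, f);               /* step 3: true up = s x f */
    /* step 4: rotation rows s, u, -f plus translation by -eye */
    m[0] = s.x;  m[4] = s.y;  m[8]  = s.z;  m[12] = -v3dot(s, eye);
    m[1] = u.x;  m[5] = u.y;  m[9]  = u.z;  m[13] = -v3dot(u, eye);
    m[2] = -f.x; m[6] = -f.y; m[10] = -f.z; m[14] =  v3dot(f, eye);
    m[3] = 0;    m[7] = 0;    m[11] = 0;    m[15] = 1;
}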

How to get a 3D point on a plane (which is represented as normal and offset from origin)?

I know how to get the intersection point between a ray and a plane if I know the ray, a point on the plane, and the plane normal.
In the code I use, the plane is represented as a signed offset from the origin plus a normal, and I need to get some (any) point on the plane. How do I do this?
So, the plane equation is Ax + By + Cz + D = 0. I know A, B and C, which form the normal of the plane, and I know D, which is the signed distance from the origin. Given that, how do I get some 3D point on the plane?
Thanks
If (A, B, C) is a normalized vector, the point on the plane closest to the origin is simply:
(-A·D, -B·D, -C·D)
This follows directly from your description: (A, B, C) is the plane normal, and D is the distance between the plane and the origin.
This method is simple and does not need any branching.
[Figure: point on the plane closest to the origin.]
You get a plane point by intersecting the plane with a ray (line) :-)
Choose some point P = (x, y, z) and calculate w = Ax + By + Cz.
If w = -D, then P is on the plane.
For w != -D, choose some direction Q = (dx, dy, dz) for which l = A·dx + B·dy + C·dz != 0; Q = (A, B, C) always works, since then l = A² + B² + C² > 0. Then the point P - ((w + D)/l)·Q is on the plane, because substituting it into the plane equation gives w - (w + D) + D = 0.
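A small C sketch combining both answers: starting from P = (0, 0, 0) and marching along Q = (A, B, C) gives t = -D / (A² + B² + C²), which reduces to the closest point (-AD, -BD, -CD) when the normal has unit length. The function name is mine.

#include <stdio.h>

typedef struct { double x, y, z; } point3;

/* Return a point on the plane Ax + By + Cz + D = 0 (no branching). */
point3 point_on_plane(double A, double B, double C, double D) {
    double l = A*A + B*B + C*C;  /* squared length of the normal */
    double t = -D / l;
    return (point3){ t*A, t*B, t*C };
}

int main(void) {
    point3 p = point_on_plane(0.0, 0.0, 2.0, -4.0);  /* plane z = 2 */
    printf("%f %f %f\n", p.x, p.y, p.z);             /* prints 0 0 2 */
    return 0;
}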

How to find the point of collision between an irregular shape (built out of 3 circles) and a line

I'm making a program in which many weird shapes are drawn onto a canvas. Right now I'm trying to implement the last, and possibly hardest, one.
For this particular shape I need a way to find the location (on a 2D canvas) where a line hits the shape. The following image is an example of what I have right now.
The black dots are the points that are known to me (I also have the location of the center of the three open circles and the radius of these circles). Each of the three outer lines needs a line towards the center dot, ending at the point where it hits the circle. This shape can be turned 90, 180 or 270 degrees.
The shape should look something like the following:
If you need any other information, please ask me in the comments. I'm not very good at math so please be gentle, thanks!
If A and B are points forming a line, then you can describe any point on that line using coordinates:
x = t·Ax + (1−t)·Bx
y = t·Ay + (1−t)·By
0 ≤ t ≤ 1
You can also describe the circle with center M and radius r as
(x − Mx)² + (y − My)² = r²
So take the x and y from the equations of the line, and plug them into the equation of the circle. You obtain a quadratic equation in t. Its two solutions describe the two points of intersection between the line and circle. In your example, only one of them lies on the line segment, i.e. satisfies 0 ≤ t ≤ 1. The other describes a point on the extension of the segment past its endpoint. Take the correct value for t back to the equations of the line, and you obtain the x and y coordinates of the point of intersection.
If you don't know up front which circle you want to intersect with a given line, then intersect all three and choose the most appropriate point afterwards. Probably that is the point closest to the outside starting point of the line segment; the same goes for cases where both points of intersection lie on the segment.
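Here is a small C sketch of that recipe (the helper name segment_circle_hit is mine): it plugs the parametric line into the circle equation, solves the quadratic in t, and keeps the root on the segment nearest the outer point A.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } pt2;

/* Intersect the segment P(t) = t*A + (1-t)*B, 0 <= t <= 1, with the circle
   of center M and radius r. Returns 1 and writes the hit nearest A, or 0. */
int segment_circle_hit(pt2 A, pt2 B, pt2 M, double r, pt2 *hit) {
    double dx = A.x - B.x, dy = A.y - B.y;  /* direction D = A - B */
    double ex = B.x - M.x, ey = B.y - M.y;  /* offset E = B - M */
    double a = dx*dx + dy*dy;               /* quadratic a t^2 + b t + c = 0 */
    double b = 2.0 * (dx*ex + dy*ey);
    double c = ex*ex + ey*ey - r*r;
    double disc = b*b - 4.0*a*c;
    if (disc < 0.0) return 0;               /* line misses the circle */
    double s = sqrt(disc);
    double t1 = (-b - s) / (2.0*a), t2 = (-b + s) / (2.0*a);
    /* keep roots on the segment; prefer the one nearest t = 1, i.e. A */
    double best = -1.0;
    if (t1 >= 0.0 && t1 <= 1.0) best = t1;
    if (t2 >= 0.0 && t2 <= 1.0 && t2 > best) best = t2;
    if (best < 0.0) return 0;
    hit->x = best*A.x + (1.0 - best)*B.x;
    hit->y = best*A.y + (1.0 - best)*B.y;
    return 1;
}

int main(void) {
    pt2 A = {3, 0}, B = {0, 0}, M = {0, 0}, hit;
    if (segment_circle_hit(A, B, M, 1.0, &hit))
        printf("hit at (%f, %f)\n", hit.x, hit.y);  /* (1, 0) */
    return 0;
}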