How to convert quaternion to angle? - objective-c

Is there a formula to convert a quaternion to an angle?
Looking to do something on the iPhone using the Core Motion API and the gyro so that based on data I receive from it (in the form of quaternions) I can project a UIView on the screen.
Thanks

Yes, see Quaternions and spatial rotation. The unit quaternion (w, x, y, z) represents a rotation about the axis (x, y, z) by an angle of 2*arccos(w).
Note that this is only true of unit quaternions. Non-unit quaternions do not represent rotations.
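As a minimal sketch of that formula (plain C++ math here rather than the Core Motion types; in an iPhone app the w, x, y, z values would come from the attitude quaternion, e.g. a CMQuaternion):

#include <cmath>

// Axis-angle result: unit axis (x, y, z) plus angle in radians.
struct AxisAngle { double x, y, z, angle; };

// Convert a *unit* quaternion (w, x, y, z) to axis-angle.
AxisAngle toAxisAngle(double w, double x, double y, double z) {
    double angle = 2.0 * std::acos(w);      // angle = 2 * arccos(w)
    double s = std::sqrt(1.0 - w * w);      // = sin(angle / 2)
    if (s < 1e-9) {                         // angle ~ 0: any axis will do
        return {1.0, 0.0, 0.0, angle};
    }
    return {x / s, y / s, z / s, angle};    // normalize (x, y, z) to get the axis
}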

Related

Find the furthest y cartesian coordinate for 6 DoF robot in joint coordinates

I've got a robotic arm with 6 DoF. I constrain the x and z Cartesian coordinates and the orientation to be exactly specified. I would like to get the joint coordinates at the Cartesian position [x, y_max, z], where y_max is the maximum y Cartesian coordinate reachable by the end-effector of the robotic arm.
For example:
I set x to be 0.5, z to be 1.0, and I want to find joint coordinates such that, after forward kinematics, the robot's end-effector is at Cartesian coordinates [0.5, maximum reachable coordinate, 1.0].
I know that if I know the Cartesian position and orientation I can find joint coordinates by inverse kinematics and check with forward kinematics that the end-effector is at the desired coordinates, but what if I don't know one of the Cartesian axes, and how far it is possible to move along it depends on the robot? As far as I know, inverse kinematics can be solved analytically or numerically, but to solve it I need to know the whole target frame.
Moreover, I would like the orientation to depend on the y coordinate (for example, I would like to guarantee that the end-effector is always looking at coordinates [0.5, 0, 0]).
You could use numerical task-based inverse kinematics with tasks such as:
Orientation: the orientation you have specified
Position in (x, z): the coordinates you have specified
Position in y: something very far away
The behavior of a task-based approach (with proper damping) when a target is not feasible is to "stretch" the robot as far as it can without violating its constraints.
(for example I would like to guarantee that end-effector is always looking at coordinates [0.5, 0, 0])
This should be possible with a proper task as well. For example, in C++ the mc_rtc framework has a LookAtTask to keep a frame looking at a desired point.
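For illustration, here is a minimal sketch of a single damped least-squares step of such a task-based differential IK, using Eigen. It is not mc_rtc code; the stacked task error and task Jacobian are assumed to come from your own robot model, computed however your kinematics library allows:

#include <Eigen/Dense>

// One damped least-squares (Levenberg-Marquardt style) IK step.
// q         : current joint angles
// taskError : stacked task error (orientation, x/z position, far-away y target)
// J         : stacked task Jacobian evaluated at q
// lambda    : damping factor, keeps the step finite near singularities
Eigen::VectorXd ikStep(const Eigen::VectorXd &q,
                       const Eigen::VectorXd &taskError,
                       const Eigen::MatrixXd &J,
                       double lambda = 1e-2) {
    Eigen::MatrixXd JJt = J * J.transpose();
    Eigen::MatrixXd damped =
        JJt + lambda * lambda * Eigen::MatrixXd::Identity(JJt.rows(), JJt.cols());
    // dq = J^T (J J^T + lambda^2 I)^{-1} * error  -- damped pseudo-inverse
    Eigen::VectorXd dq = J.transpose() * damped.ldlt().solve(taskError);
    return q + dq;
}

The damping factor trades tracking accuracy for stability: with the y target placed very far away the error never reaches zero, and the damping is what lets the iteration settle at the "stretched as far as possible" configuration instead of blowing up near the singularity.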

How to get the orientation vector of the camera given its rotation matrix / quaternion?

When I have the rotation matrix or quaternion representation of a camera's pose, is there a way to obtain the orientation vector of the camera?
Here the orientation vector means a 3D vector in the world coordinate (WC) that represents an orientation.
I read through the commonly used representations like Euler angles and axis-angle, but I didn't find any representation that captures the orientation of the camera in WC.
Could anyone help? Thank you!
You probably want the 3x1 Rodrigues vector. Just plug in the SO(3) rotation matrix of the camera orientation in world coordinates, and you will get a vector representation. To be clear, pose and orientation are different: pose is orientation + position. If you want the position as well, it can be represented as a 3x1 vector t = [x y z]' (using Matlab notation).
A typical representation of the pose is a 4x4 matrix in SE(3) (Special Euclidean Group), which is just:
T = [R t; 0 0 0 1]
Where R is the rotation matrix in SO(3).
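For a concrete sketch, OpenCV's cv::Rodrigues performs exactly this matrix-to-vector conversion; the rotation and translation values below are made-up examples:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

int main() {
    // Example camera orientation in world coordinates: 90 degrees about Z.
    cv::Mat R = (cv::Mat_<double>(3, 3) <<
                 0, -1, 0,
                 1,  0, 0,
                 0,  0, 1);
    cv::Mat rvec;
    cv::Rodrigues(R, rvec);   // 3x1 Rodrigues vector = axis * angle, here ~ (0, 0, pi/2)

    // If the position t is also needed, the full SE(3) pose is T = [R t; 0 0 0 1]:
    cv::Mat t = (cv::Mat_<double>(3, 1) << 1.0, 2.0, 3.0);
    cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(T(cv::Rect(0, 0, 3, 3)));
    t.copyTo(T(cv::Rect(3, 0, 1, 3)));
    return 0;
}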

How to convert relative GPS coordinates to a "local custom" x, y, z coordinate?

Let's say I know two persons are standing at GPS location A and B. A is looking at B.
I would like to know B's (x, y, z) coordinates relative to A, where the +y axis is the direction to B (since A is looking at B) and +z points vertically to the sky (therefore +x is to the right-hand side of A).
I know how to convert a GPS coordinate to UTM, but in this case a coordinate system rotation and translation seem to be needed. I am going to come up with a calculation, but before that, is there some code to look at?
I think this must be handled by many applications, but I could not find anything so far.
1. Convert both points to 3D Cartesian
GPS suggests WGS84, so see How to convert a spherical velocity coordinates into cartesian.
2. Construct a transform matrix from your desired axes
See Understanding 4x4 homogenous transform matrices. You need 3 perpendicular unit vectors. Y is the view direction, so
Y = normalize(B-A);
One of the axes will most likely be the up vector, so you can use the approximation
Z = normalize(A);
and as the origin you can use point A directly. Now just use the cross product to create X perpendicular to both, and also make Y perpendicular to X and Z (so up stays up). For more info see Representing Points on a Circular Radar Math approach.
3. Transform B to B' by that matrix
Again, the QA linked in #1 shows how to do it. It is simple matrix/vector multiplication. A sketch of steps 2 and 3 is shown below.
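Here is a minimal sketch of steps 2 and 3 in C++, using Eigen just for the vector math (the linked QAs do not assume any particular library). It assumes A and B have already been converted to 3D Cartesian (e.g. WGS84/ECEF) coordinates, with +Y toward B, +Z approximately up, +X to the right, and the origin at A:

#include <Eigen/Dense>

// Express B in A's local frame: +Y looks from A toward B, +Z ~ up, origin at A.
Eigen::Vector3d toLocal(const Eigen::Vector3d &A, const Eigen::Vector3d &B) {
    Eigen::Vector3d Y = (B - A).normalized();     // view direction
    Eigen::Vector3d Z = A.normalized();           // approximate up vector
    Eigen::Vector3d X = Y.cross(Z).normalized();  // right-hand side of A
    Y = Z.cross(X);                               // re-orthogonalize so up stays up
    Eigen::Matrix3d R;                            // rows are the local axes
    R.row(0) = X.transpose();
    R.row(1) = Y.transpose();
    R.row(2) = Z.transpose();
    return R * (B - A);                           // B' = B expressed in the local frame
}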

Convert grid of dots in XY plane from camera coordinates to real world coordinates

I am writing a program. I have, say, a grid of dots on a piece of paper. I fix one end and bend the paper toward the screen, giving me a trapezoidal shape from the camera's point of view. I have the (x,y) camera coordinate of each dot. Is there a simple way I can change these (x,y) to real life (x,y) which should give me a rectangle? I have the camera/real (x,y) of the original flat sheet of paper pre-bend if that helps.
I have looked at 3D Camera coordinates to world coordinates (change of basis?) and Transforming screen coordinates from security camera to real world coordinates.
Look up "homography". The transformation from a plane in 3D space to its image as captured by an ideal pinhole camera is a homography. It can be represented as a 3x3 matrix H that transforms the 3D coordinates X of points in the world to their corresponding homogeneous image coordinates x:
x = H * X
where X is a 3x1 vector of the world point's homogeneous coordinates, and x = [u, v, w]^T is the image point in homogeneous coordinates.
Given a minimum of 4 matches between world and image points (e.g. the corners of a rectangle) you can estimate the parameters of the matrix H. For details, look up "DLT algorithm". In OpenCV the routine to use is findHomography.
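If OpenCV is available, a minimal sketch of estimating and applying the homography could look like the following. The corner correspondences and dot positions are made-up example values, and H is estimated here in the image-to-world direction so that applying it maps the detected dots back onto the paper's rectangle:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

int main() {
    // Image (camera) coordinates of the sheet's four corners, in pixels.
    std::vector<cv::Point2f> imageCorners = {
        {120, 80}, {520, 95}, {560, 400}, {100, 420}};
    // Corresponding real-world coordinates on the flat sheet, e.g. in mm (A4 here).
    std::vector<cv::Point2f> worldCorners = {
        {0, 0}, {210, 0}, {210, 297}, {0, 297}};

    // Estimate H so that world ~ H * image (in homogeneous coordinates).
    cv::Mat H = cv::findHomography(imageCorners, worldCorners);

    // Map the detected dots into the world (paper) coordinate system.
    std::vector<cv::Point2f> dotsImage = {{200, 150}, {300, 160}};
    std::vector<cv::Point2f> dotsWorld;
    cv::perspectiveTransform(dotsImage, dotsWorld, H);
    return 0;
}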

Not getting how the property rotation works in SceneKit

When you specify a rotation for an object, you do something like this :
_earthNode.rotation = SCNVector4Make(1, 0, 0, M_PI/2);
What I am not getting is how to specify a specific rotation for each axis. Let's say that I wanted to rotate my node by PI on x, PI/2 on y, and PI/4 on z; how would I do that? I thought that I could do something like this:
_earthNode.rotation = SCNVector4Make(1, 0.5, 0.25, M_PI);
But it doesn't change anything....
How does this property work?
The rotation vector in Scene Kit is specified as the axis of rotation (first 3 components) followed by the angle (4th component), called the axis-angle representation.
The format you are trying to specify (the different angles along each axis) is called Euler angles (unless I'm remembering wrong).
Translating between the two representations is just a bit of trigonometry. A quick online search for "Euler angles to axis angle" led to this page, which shows how to do it in Java.
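If you do want to convert by hand, here is a minimal sketch of the Euler-to-axis-angle math using Eigen; the intrinsic Z*Y*X rotation order is an assumption, so match it to whatever convention your scene actually uses. The resulting axis and angle are the four numbers SCNVector4Make expects for the rotation property:

#include <cmath>
#include <Eigen/Geometry>

int main() {
    const double rx = M_PI, ry = M_PI / 2, rz = M_PI / 4;  // desired per-axis angles

    // Compose the three elemental rotations into one quaternion (Z * Y * X order).
    Eigen::Quaterniond q =
        Eigen::AngleAxisd(rz, Eigen::Vector3d::UnitZ()) *
        Eigen::AngleAxisd(ry, Eigen::Vector3d::UnitY()) *
        Eigen::AngleAxisd(rx, Eigen::Vector3d::UnitX());

    // Extract the equivalent single axis and angle.
    Eigen::AngleAxisd aa(q);
    Eigen::Vector3d axis = aa.axis();   // unit rotation axis
    double angle = aa.angle();          // rotation angle in radians
    // Feed these into SCNVector4Make(axis.x(), axis.y(), axis.z(), angle).
    return 0;
}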
SCNNode has an eulerAngles property that allows you to do just that.