How to get camera rotation in libGDX?

I want to get the rotation of the OrthographicCamera in libGDX.
I'm currently using this formula, copied from another Stack Overflow post:
float camRotation = -(float)Math.atan2(cam.up.x, cam.up.y) * MathUtils.radiansToDegrees;
This returns -0.0 if I don't rotate.
If I rotate by 1 with cam.rotate(1f);, camRotation prints -1.0,
and if I rotate by -1 with cam.rotate(-1f);, camRotation prints 1.0.
I'm confused by the math. What's the proper way to get camera rotation in libGDX?

I think it is only the minus sign at the beginning of
-(float)Math.atan2(cam.up.x, cam.up.y) * MathUtils.radiansToDegrees
that is causing your confusion; if you remove it, the code should work as you expect.
atan2(b, a) gives you the angle between the positive x-axis and the point (a, b). Note that calling it with (b, a) gives the angle to the point (a, b), not to the point (b, a).
In your example code, atan2 is called with cam.up.x as the first argument and cam.up.y as the second, where cam.up is a unit vector indicating which way is up.
So for an unrotated camera the up vector is (0, 1) (it's actually a 3-dimensional vector, but we can ignore the z-axis for now). Plugging that in, atan2(0, 1) gives the angle between the positive x-axis (1, 0) and the point (1, 0), which is zero.
So using atan2 to compare the up vector to the positive x-axis is a valid way of finding the rotation of the camera.
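As a minimal sketch, assuming a standard OrthographicCamera whose up vector starts at (0, 1, 0) (the class and method names here are my own):

import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.math.MathUtils;

public class CameraUtils {
    // Rotation of the camera in degrees; 0 for an unrotated camera,
    // and the sign matches cam.rotate() as described in the question.
    public static float getRotation(OrthographicCamera cam) {
        return (float) Math.atan2(cam.up.x, cam.up.y) * MathUtils.radiansToDegrees;
    }
}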

Related

Method ViroARScene.getCameraOrientationAsync() returns strange values in rotation array

I'm developing a PoC with the ViroReact lib, but I'm getting strange values for the camera rotation.
Environment:
Device: Android 10. Xiaomi Mi 9
ViroReact 2.20.2
The ViroARScene.getCameraOrientationAsync() method returns unexpected values in the rotation array when I rotate the device around the y-axis while trying to keep the x and z axes fixed.
Specifically, when the y rotation reaches -90º, the x/z values jump to +/-180º, and from that point on the y values head back towards 0: for instance, instead of -135º the y value is -45º, with the x/z values at +/-180º. In other words, the absolute value of the y rotation NEVER exceeds 90.
Some examples (values have an error margin of about 6 degrees):
Rotation expected: [0, -90, 0]. Returned rotation: [+/-180, -90, +/-180]
Rotation expected: [0, -135, 0]. Returned rotation: [+/-180, -45, +/-180]
Rotation expected: [0, -180, 0]. Returned rotation: [+/-180, 0, +/-180]
Questions:
Why is the absolute value of the y rotation never greater than 90?
Why do the x/z values change to +/-180º past some point (+/-90º on the y-axis) if I'm only rotating the device around the y-axis?
Is this the expected behavior? If so, could anyone please explain these values?
The code to retrieve the values:
<ViroARScene onTrackingUpdated={this._onInitialized} anchorDetectionTypes={"PlanesVertical"}>
...
</ViroARScene>

_onInitialized(state, reason) {
  if (state === ViroConstants.TRACKING_NORMAL && reason === ViroConstants.TRACKING_REASON_NONE) {
    console.log('Tracking initiated');
    this._scene.getCameraOrientationAsync().then(
      (orientation) => {
        console.log('Cam rot:', round(orientation.rotation));
      });
  }
}
I've also created a GitHub issue with some mockups to show the rotation values expected and returned: https://github.com/ViroCommunity/viro/issues/13
I think what you're coming up against might be gimbal lock, which is the reason a lot of 3D rotations are expressed as quaternions instead of the x/y/z (aka Euler, pronounced "oiler") angles you are using now. It's probably expected behaviour for your system.
I'm not familiar with your platform, but it may have built-in helpers or alternative methods that let you work with quaternions instead. If not, a solution for you might be to install a library (or write some code) that translates between Euler angles and quaternions, so that your calculations make more sense, if you are going to be spending time around the y = +/-90 region.
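To see why the middle angle is clamped, here is a minimal round-trip sketch in plain Java (no Viro dependency; it assumes a conventional axis order where y sits in the middle, and Viro's internal order may differ, but the same wrap-around applies). It builds the quaternion for a pure -135º rotation about y and converts it back to Euler angles:

public class GimbalDemo {
    // quaternion (w, x, y, z) -> Euler angles in degrees, x/y/z order,
    // using the standard formulas: the middle angle comes from asin,
    // whose range is [-90, 90], so its magnitude can never exceed 90
    static double[] toEuler(double w, double x, double y, double z) {
        double ex = Math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y));
        double ey = Math.asin(Math.max(-1, Math.min(1, 2 * (w * y - z * x))));
        double ez = Math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z));
        return new double[] { Math.toDegrees(ex), Math.toDegrees(ey), Math.toDegrees(ez) };
    }

    public static void main(String[] args) {
        // a pure -135 degree rotation about y, as a quaternion
        double half = Math.toRadians(-135) / 2;
        double[] e = toEuler(Math.cos(half), 0, Math.sin(half), 0);
        // prints approximately 180.0, -45.0, 180.0 -- exactly the
        // [+/-180, -45, +/-180] triple reported in the question
        System.out.println(e[0] + ", " + e[1] + ", " + e[2]);
    }
}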

Not getting how the property rotation works in SceneKit

When you specify a rotation for an object, you do something like this:
_earthNode.rotation = SCNVector4Make(1, 0, 0, M_PI/2);
What I am not getting is how to specify a specific rotation for each axis. Let's say that I wanted to rotate my node by PI on x, PI/2 on y, and PI/4 on z; how would I do that? I thought that I could do something like this:
_earthNode.rotation = SCNVector4Make(1, 0.5, 0.25, M_PI);
But it doesn't change anything.
How does this property work?
The rotation vector in SceneKit is specified as the axis of rotation (first 3 components) followed by the angle (4th component), called the axis-angle representation.
The format you are trying to specify (a separate angle for each axis) is called Euler angles (unless I'm remembering wrong).
Translating between the two representations is just a bit of trigonometry. A quick online search for "Euler angles to axis angle" leads to this page, which shows how to do it in Java.
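For reference, here is a rough Java sketch of that conversion, going through a quaternion (assuming the rotations are applied in x, then y, then z order; other orders permute the formulas):

public class EulerToAxisAngle {
    // Euler angles in radians -> {axisX, axisY, axisZ, angle}
    static double[] convert(double rx, double ry, double rz) {
        double cx = Math.cos(rx / 2), sx = Math.sin(rx / 2);
        double cy = Math.cos(ry / 2), sy = Math.sin(ry / 2);
        double cz = Math.cos(rz / 2), sz = Math.sin(rz / 2);
        // compose the three axis rotations into one quaternion (q = qz * qy * qx)
        double w = cz * cy * cx + sz * sy * sx;
        double x = cz * cy * sx - sz * sy * cx;
        double y = cz * sy * cx + sz * cy * sx;
        double z = sz * cy * cx - cz * sy * sx;
        // quaternion -> axis-angle
        double angle = 2 * Math.acos(Math.max(-1, Math.min(1, w)));
        double s = Math.sqrt(Math.max(0, 1 - w * w));
        if (s < 1e-9) return new double[] { 1, 0, 0, 0 }; // no rotation: any axis will do
        return new double[] { x / s, y / s, z / s, angle };
    }

    public static void main(String[] args) {
        // the rotation from the question: PI on x, PI/2 on y, PI/4 on z
        double[] aa = convert(Math.PI, Math.PI / 2, Math.PI / 4);
        System.out.printf("axis (%.3f, %.3f, %.3f), angle %.3f rad%n", aa[0], aa[1], aa[2], aa[3]);
    }
}

The four resulting numbers are exactly the shape SCNVector4Make expects.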
SCNNode has an eulerAngles property that allows you to do just that.

How can I find the points in a line - Objective-C?

Consider a line from point A (x, y) to point B (p, q).
The method CGContextMoveToPoint(context, x, y); moves to the point (x, y), and the method CGContextAddLineToPoint(context, p, q); draws the line from point A to point B.
My question is: can I find all the points that the line covers?
Actually I need to know the exact point which is x points before the end point B.
Refer to this image:
The line above is just for reference; the actual line may be at any angle. I need the 5th point, which lies on the line just before point B.
Thank you
You should not think in terms of pixels. Coordinates are floating-point values. The geometric point at (x, y) does not need to be a pixel at all. In fact, you should think of pixels as rectangles in your coordinate system.
This means that "x pixels before the end point" does not really make sense. If a pixel is a rectangle, "x pixels" is a different quantity if you move horizontally than if you move vertically, and if you move in any other direction it's even harder to decide what it means.
Depending on what you are trying to do, it may or may not be easy to translate your concepts into pixel terms. It's probably better, however, to do the opposite: stop thinking in terms of pixels and translate everything you currently express in pixel terms into non-pixel terms.
Also remember that exactly what a pixel is depends on the system, and you may or may not, in general, be able to query the system about it (especially once you take into account things like retina displays and resolution-independent functionality).
Edit:
I see you edited your question, but "points" is no more precise than "pixels".
However, I'll try to give you a workable solution, at least once you reformulate your problem in the right terms.
Your question, correctly formulated, should be:
Given two points A and B in a Cartesian space and a distance delta, what are the coordinates of a point C such that C is on the line passing through A and B and the length of the segment BC is delta?
Here's a solution to that question:
// Assuming point A has coordinates (x,y) and point B has coordinates (p,q).
// Also assuming the distance from B to C is delta. We want to find the
// coordinates of C. (sqrt and fabs come from <math.h>.)
// I'll rename the coordinates for legibility.
double ax = x;
double ay = y;
double bx = p;
double by = q;
// this is what we want to find
double cx, cy;
// we need to establish a limit to acceptable computational precision
double epsilon = 0.000001;
if ( fabs(bx - ax) < epsilon && fabs(by - ay) < epsilon ) {
    // the two points are too close to compute a reliable result
    // this is an error condition. handle the error here (throw
    // an exception or whatever).
} else {
    // compute the vector from B to A and its length
    double bax = ax - bx;
    double bay = ay - by;
    double balen = sqrt( pow(bax, 2) + pow(bay, 2) );
    // compute the vector from B to C (same direction as the vector from
    // B to A but with length delta)
    double bcx = bax * delta / balen;
    double bcy = bay * delta / balen;
    // and now add that vector to the vector OB (with O being the origin)
    // to find the solution: C lies between A and B, delta away from B
    cx = bx + bcx;
    cy = by + bcy;
}
You need to make sure that points A and B are not too close, or the computations will be imprecise and the result will be different from what you expect. That's what epsilon is for (you may or may not want to change its value).
Ideally, a suitable value for epsilon is not related to the smallest number representable in a double, but to the level of precision a double gives you for values in the order of magnitude of the coordinates.
I have hardcoded epsilon, which is a common way to define its value, as you generally know the order of magnitude of your data in advance; but there are also 'adaptive' techniques that compute an epsilon from the actual values of the arguments (the coordinates of A and B and the delta, in this case).
Also note that I have coded for legibility (the compiler should be able to optimize it anyway). Feel free to recode it if you wish.
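As a quick sanity check of the formula (a Java transliteration with made-up values: A = (0, 0), B = (10, 0), delta = 2, so C should land at (8, 0), between A and B):

public class PointBeforeB {
    public static void main(String[] args) {
        double ax = 0, ay = 0;    // point A
        double bx = 10, by = 0;   // point B
        double delta = 2;         // distance back from B towards A
        double bax = ax - bx, bay = ay - by;    // vector from B to A
        double balen = Math.hypot(bax, bay);    // its length
        double cx = bx + bax * delta / balen;   // C = B + delta * unit(B->A)
        double cy = by + bay * delta / balen;
        System.out.println(cx + ", " + cy);     // prints 8.0, 0.0
    }
}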
It's not so hard: translate your segment into the equation of a line, treat "x points" as the radius of a circle centred at B, and solve the system to find where they intersect. You get two solutions; take the point that is closer to A.
This is the code you can use:
float distanceFromPx2toP3 = 1300.0;
float mag = sqrt(pow((px2.x - px1.x), 2) + pow((px2.y - px1.y), 2));
float P3x = px2.x + distanceFromPx2toP3 * (px2.x - px1.x) / mag;
float P3y = px2.y + distanceFromPx2toP3 * (px2.y - px1.y) / mag;
CGPoint P3 = CGPointMake(P3x, P3y);
// Note: as written this steps past px2, away from px1; to find a point
// before px2 (towards px1), use (px1.x - px2.x) and (px1.y - px2.y) instead.
Alternatively, you can follow this link, which gives a detailed description:
How to find a third point using two other points and their angle.
You can find as many of the line's points as you want this way.

How to calculate the quaternion that represents a triangle's 3D rotation?

Or to look at it another way: say we have two same-size triangles located and oriented at different parts of 3D space. How do you calculate the quaternion that describes the rotation such that applying it to triangle A would have it sit at triangle B? It is difficult to see how finding the normals of A and B and calculating the quaternion from those alone would work, because a normal vector does not contain the full rotation information (or rather, it says nothing about the rotation about the normal itself, throwing away valuable information). It seems you would need to use the vectors between each triangle's vertices (a, b, c) and somehow construct a quaternion out of those. That is way beyond me; could any mathematicians please dumb it down?
First orient the normal vectors, then the plane.

Source = (s1, s2, s3)
Target = (t1, t2, t3)
NormSource = (s1 - s2) cross (s1 - s3)
NormTarget = (t1 - t2) cross (t1 - t3)
Quat1 = getRotationTo(NormSource, NormTarget)
Quat2 = getRotationTo(Quat1 * (s1 - s2), (t1 - t2))
QuatFinal = Quat2 * Quat1
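Here is a rough sketch of that recipe in Java using libGDX's math classes, where Quaternion.setFromCross plays the role of getRotationTo (s1..s3 and t1..t3 are the triangle vertices; degenerate cases such as exactly opposite normals are not handled):

import com.badlogic.gdx.math.Quaternion;
import com.badlogic.gdx.math.Vector3;

public class TriangleRotation {
    public static Quaternion rotationBetween(Vector3 s1, Vector3 s2, Vector3 s3,
                                             Vector3 t1, Vector3 t2, Vector3 t3) {
        // step 1: rotate the source triangle's normal onto the target's
        Vector3 normSource = new Vector3(s1).sub(s2).crs(new Vector3(s1).sub(s3)).nor();
        Vector3 normTarget = new Vector3(t1).sub(t2).crs(new Vector3(t1).sub(t3)).nor();
        Quaternion quat1 = new Quaternion().setFromCross(normSource, normTarget);
        // step 2: align an in-plane edge, after rotating it by quat1
        Vector3 edge = quat1.transform(new Vector3(s1).sub(s2)).nor();
        Quaternion quat2 = new Quaternion().setFromCross(edge, new Vector3(t1).sub(t2).nor());
        // compose: quat2 applied after quat1
        return quat2.mul(quat1);
    }
}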

Finding the coordinate on an arc for the next position of an orbiting camera

The best example I can give is located at:
http://www.mathopenref.com/arclength.html
In that Java applet, imagine C is the object to be rotated around and A is the camera. I wish to move the camera to point B, but I do not know how to work out B's coordinates. How do you do it? In my case, I know the positions of C and A, and the angle theta to rotate by.
I know you can use:
x = Xcentre + radius * sin(theta)
y = Ycentre + radius * cos(theta)
but this fails to take into account the camera's current position.
I can't help but feel there's some simple solution I'm missing.
Solved by using the equations listed above and reversing the calculation to derive theta, then applying a check so that full 360-degree rotations work (otherwise only 180 degrees would be possible).
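For reference, a minimal sketch of that fix in Java (the variable names and sample values are mine): recover theta from the camera's current position with atan2, which covers the full 360 degrees, then advance it and plug it back into the same equations.

public class OrbitStep {
    public static void main(String[] args) {
        double centreX = 0, centreY = 0;        // C, the object being orbited
        double camX = 10, camY = 0;             // A, the camera's current position
        double rotateBy = Math.toRadians(30);   // theta, the angle to rotate by
        double dx = camX - centreX, dy = camY - centreY;
        double radius = Math.hypot(dx, dy);
        // invert x = Xcentre + radius*sin(t), y = Ycentre + radius*cos(t)
        double t = Math.atan2(dx, dy);
        t += rotateBy;
        double newX = centreX + radius * Math.sin(t); // B, the camera's
        double newY = centreY + radius * Math.cos(t); // next position
        System.out.println(newX + ", " + newY);
    }
}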