Convert quaternions between two coordinate systems where the x axis becomes -x

How do I compute the new quaternion when the x axis becomes -x?
In short, I need to find the new quaternion when the rotation around y becomes 180 - y.

If the angle around y is 30 degrees, around x is 20 degrees, and around z is 70 degrees, then the angle around y should become 180 - 30 degrees, because x becomes -x.
In quaternions:
The new y angle in the -x system should be (180-30)*pi/180, and its quaternion is found as follows (based on https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles, but for a different coordinate system):
a = 180 - 30; // mirrored y angle, in degrees
ax = 20 * Math.PI/180;
ay = a * Math.PI/180;
az = 70 * Math.PI/180;
t0 = Math.cos(ay * 0.5); // yaw
t1 = Math.sin(ay * 0.5);
t2 = Math.cos(az * 0.5); // roll
t3 = Math.sin(az * 0.5);
t4 = Math.cos(ax * 0.5); // pitch
t5 = Math.sin(ax * 0.5);
t024 = t0 * t2 * t4;
t025 = t0 * t2 * t5;
t034 = t0 * t3 * t4;
t035 = t0 * t3 * t5;
t124 = t1 * t2 * t4;
t125 = t1 * t2 * t5;
t134 = t1 * t3 * t4;
t135 = t1 * t3 * t5;
x = t025 + t134;
y =-t035 + t124;
z = t034 + t125;
w = t024 - t135;
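A side note that may save the Euler-angle round trip entirely: mirroring a rotation across the yz-plane (x -> -x) conjugates its rotation matrix by diag(-1, 1, 1), and for a quaternion q = (w, x, y, z) that works out to simply negating the y and z components, i.e. q' = (w, x, -y, -z). Whether that reproduces the "y becomes 180 - y" rule above depends on the exact Euler-angle convention in use, so the C++ sketch below is meant as an illustration of the general identity, not as a verified drop-in for the snippet above:

#include <cstdio>

struct Quat { double w, x, y, z; };

// Reflecting a rotation across the yz-plane (x -> -x): if R(q) is the rotation
// matrix of q and M = diag(-1, 1, 1), then M * R(q) * M is the rotation matrix
// of (w, x, -y, -z).
Quat mirrorX(const Quat& q) {
    return { q.w, q.x, -q.y, -q.z };
}

int main() {
    Quat q = { 0.822, 0.200, 0.360, 0.392 };  // example (roughly unit-length) quaternion
    Quat m = mirrorX(q);
    std::printf("mirrored: w=%g x=%g y=%g z=%g\n", m.w, m.x, m.y, m.z);
    return 0;
}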

Related

From world coordinates to camera coordinates

I have a 3D point in world coordinates, (-140, -500, 0), where z is the up axis, x is the depth axis, and y is the horizontal axis.
Now I want to convert this point to camera coordinates.
I know that I need to calculate a rotation matrix and a translation.
I have the roll, pitch, and yaw, and the camera position.
I want to know if I am calculating the point in camera coordinates correctly.
//ax, ay and az are the position of the point in the real world
//cx, cy and cz are the position of the camera
//67.362312316894531 is pitch
//89.7135009765625 is roll
//0.033716827630996704 is yaw
double x = ax - cx;
double y = ay - cy;
double z = az - cz;
double cosx = cos(67.362312316894531);
double sinx = sin(67.362312316894531);
double cosy = cos(89.7135009765625);
double siny = sin(89.7135009765625);
double cosz = cos(0.033716827630996704);
double sinz = sin(0.033716827630996704);
dx = cosy * (sinz * y + cosz * x) - siny * z;
dy = sinx * (cosy * z + siny * (sinz * y + cosz * x)) + cosx * (cosz * y - sinz * x);
dz = cosx * (cosy * z + siny * (sinz * y + cosz * x)) - sinx * (cosz * y - sinz * x);
I know that the other method is to calculate the rotation matrix
//where yp is pitch, thet is roll and k is yaw
double rotxm[9] = { 1,0,0,0,cos(yp),-sin(yp),0,sin(yp),cos(yp) };
double rotym[9] = { cos(thet),0,sin(thet),0,1,0,-sin(thet),0,cos(thet) };
double rotzm[9] = { cos(k),-sin(k),0,sin(k),cos(k),0,0,0,1};
cv::Mat rotx = Mat{ 3,3,CV_64F,rotxm };
cv::Mat roty = Mat{ 3,3,CV_64F,rotym };
cv::Mat rotz = Mat{ 3,3,CV_64F,rotzm };
cv::Mat rotationm = rotz * roty * rotx;
My question: are these two methods correct, or at least one of them? How can I make sure of that?
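One way to sanity-check either method is to build the rotation matrix numerically, apply it to p - c, and compare the result against the hand-expanded formulas for a few test points; if the two are meant to be the same rotation, they should agree to floating-point precision. A minimal C++ sketch of that check (two assumptions: the pitch/roll/yaw values look like degrees, so they are converted to radians here before calling cos/sin, which the first snippet above does not do, and the camera position used below is made up for the test):

#include <cmath>
#include <cstdio>

// Multiply two 3x3 matrices stored row-major.
static void mul3(const double a[9], const double b[9], double out[9]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[3 * i + j] = a[3 * i] * b[j] + a[3 * i + 1] * b[3 + j] + a[3 * i + 2] * b[6 + j];
}

int main() {
    const double deg = 3.14159265358979323846 / 180.0;
    double yp   = 67.362312316894531 * deg;    // pitch, rotation about x
    double thet = 89.7135009765625 * deg;      // roll, rotation about y
    double k    = 0.033716827630996704 * deg;  // yaw, rotation about z

    double rotx[9] = { 1, 0, 0,  0, std::cos(yp), -std::sin(yp),  0, std::sin(yp), std::cos(yp) };
    double roty[9] = { std::cos(thet), 0, std::sin(thet),  0, 1, 0,  -std::sin(thet), 0, std::cos(thet) };
    double rotz[9] = { std::cos(k), -std::sin(k), 0,  std::sin(k), std::cos(k), 0,  0, 0, 1 };

    double tmp[9], R[9];
    mul3(rotz, roty, tmp);
    mul3(tmp, rotx, R);  // R = Rz * Ry * Rx, same order as the OpenCV version above

    // Hypothetical camera at (10, 20, 1); the point is (-140, -500, 0).
    double p[3] = { -140.0 - 10.0, -500.0 - 20.0, 0.0 - 1.0 };
    double cam[3];
    for (int i = 0; i < 3; ++i)
        cam[i] = R[3 * i] * p[0] + R[3 * i + 1] * p[1] + R[3 * i + 2] * p[2];
    std::printf("camera coords: %f %f %f\n", cam[0], cam[1], cam[2]);

    // Compare cam[] against dx, dy, dz from the expanded formulas (fed the same
    // radian angles) to see whether the two methods encode the same rotation.
    return 0;
}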

Accurately calculate moon phases

For a new project I'd like to calculate the moon phases. So far I haven't seen any code that does that, and I don't want to rely on online services for this.
I have tried some functions, but they are not 100% reliable. Functions I have tried:
NSInteger r = iYear % 100;
r %= 19;
if (r>9){ r -= 19;}
r = ((r * 11) % 30) + iMonth + iDay;
if (iMonth<3){r += 2;}
r -= ((iYear<2000) ? 4 : 8.3);
r = floor(r+0.5);
Another one:
float n = floor(12.37 * (iYear -1900 + ((1.0 * iMonth - 0.5)/12.0)));
float RAD = 3.14159265/180.0;
float t = n / 1236.85;
float t2 = t * t;
float as = 359.2242 + 29.105356 * n;
float am = 306.0253 + 385.816918 * n + 0.010730 * t2;
float xtra = 0.75933 + 1.53058868 * n + ((1.178e-4) - (1.55e-7) * t) * t2;
xtra = xtra + (0.1734 - 3.93e-4 * t) * sin(RAD * as) - 0.4068 * sin(RAD * am);
float i = (xtra > 0.0 ? floor(xtra) : ceil(xtra - 1.0));
float j1 = [self julday:iYear iMonth:iMonth iDay:iDay];
float jd = (2415020 + 28 * n) + i;
jd = fmodf((j1-jd + 30), 30);
And the last one:
NSInteger thisJD = [self julday:iYear iMonth:iMonth iDay:iDay];
float degToRad = 3.14159265 / 180;
float K0, T, T2, T3, J0, F0, M0, M1, B1, oldJ = 0.0;
K0 = floor((iYear-1900)*12.3685);
T = (iYear-1899.5) / 100;
T2 = T*T; T3 = T*T*T;
J0 = 2415020 + 29*K0;
F0 = 0.0001178*T2 - 0.000000155*T3 + (0.75933 + 0.53058868*K0) - (0.000837*T + 0.000335*T2);
M0 = 360*[self getFrac:((K0*0.08084821133)) + 359.2242 - 0.0000333*T2 - 0.00000347*T3];
M1 = 360*[self getFrac:((K0*0.07171366128)) + 306.0253 + 0.0107306*T2 + 0.00001236*T3];
B1 = 360*[self getFrac:((K0*0.08519585128)) + 21.2964 - (0.0016528*T2) - (0.00000239*T3)];
NSInteger phase = 0;
NSInteger jday = 0;
while (jday < thisJD) {
float F = F0 + 1.530588*phase;
float M5 = (M0 + phase*29.10535608)*degToRad;
float M6 = (M1 + phase*385.81691806)*degToRad;
float B6 = (B1 + phase*390.67050646)*degToRad;
F -= 0.4068*sin(M6) + (0.1734 - 0.000393*T)*sin(M5);
F += 0.0161*sin(2*M6) + 0.0104*sin(2*B6);
F -= 0.0074*sin(M5 - M6) - 0.0051*sin(M5 + M6);
F += 0.0021*sin(2*M5) + 0.0010*sin(2*B6-M6);
F += 0.5 / 1440;
oldJ=jday;
jday = J0 + 28*phase + floor(F);
phase++;
}
float jd = fmodf((thisJD-oldJ), 30);
All are working more or less, but none gives the correct dates of the full moon for 2017 and 2018.
Does anyone have a function that will calculate the moon phases correctly - also based on time zone?
EDIT:
I only want the function for the moon phases. SwiftAA offers a lot more and would only add unneeded overhead to the app.
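Not a fix for the snippets above, but for comparison: if an accuracy of roughly half a day is acceptable, the phase can be approximated from the mean synodic month alone, counting days from a known new moon. A minimal sketch (in C++ rather than Objective-C; the reference new moon of 2000-01-06, about JD 2451550.1, and the mean synodic month of 29.530588853 days are the assumptions). It returns the phase as a fraction of the lunation (0 = new, 0.5 = full) and will not reproduce exact full-moon instants the way a corrected series like the ones above is meant to:

#include <cmath>
#include <cstdio>

// Julian day number at 0h UTC for a Gregorian calendar date.
double julianDay(int y, int m, int d) {
    if (m <= 2) { y -= 1; m += 12; }
    int a = y / 100;
    int b = 2 - a + a / 4;
    return std::floor(365.25 * (y + 4716)) + std::floor(30.6001 * (m + 1)) + d + b - 1524.5;
}

// Phase as a fraction of the synodic month: 0 = new moon, 0.5 = full moon.
double moonPhaseFraction(int y, int m, int d) {
    const double knownNewMoonJD = 2451550.1;     // new moon of 2000-01-06 (approx.)
    const double synodicMonth   = 29.530588853;  // mean length in days
    double age = std::fmod(julianDay(y, m, d) - knownNewMoonJD, synodicMonth);
    if (age < 0) age += synodicMonth;
    return age / synodicMonth;
}

int main() {
    // 2017-12-03 was a full moon; this prints roughly 0.48.
    std::printf("phase fraction: %f\n", moonPhaseFraction(2017, 12, 3));
    return 0;
}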

Finding intersection points of line and circle

I'm trying to understand what this function does. It was given by my teacher, and I just can't understand the logic behind the formulas for finding the x and y coordinates. From my math class I know the formulas for finding intersections, but it's confusing once translated into code. So I have trouble seeing how they defined the formulas for a, b, and c, and how they find the coordinates x and y.
void Intersection::getIntersectionPoints(const Arc& arc, const Line& line) {
double a, b, c, mu, det;
std::pair<double, double> xPoints;
std::pair<double, double> yPoints;
std::pair<double, double> zPoints;
// (m^2+1)x^2 + 2(mc - mq - p)x + (q^2 - r^2 + p^2 - 2cq + c^2) = 0
// a = m^2 + 1;
// b = 2 * (mc - mq - p);
// c = q^2 - r^2 + p^2 - 2cq + c^2;
a = pow((line.end().x - line.start().x), 2) + pow((line.end().y - line.start().y), 2) + pow((line.end().z - line.start().z), 2);
b = 2 * ((line.end().x - line.start().x)*(line.start().x - arc.center().x)
+ (line.end().y - line.start().y)*(line.start().y - arc.center().y)
+ (line.end().z - line.start().z)*(line.start().z - arc.center().z));
c = pow((arc.center().x), 2) + pow((arc.center().y), 2) +
pow((arc.center().z), 2) + pow((line.start().x), 2) +
pow((line.start().y), 2) + pow((line.start().z), 2) -
2 * (arc.center().x * line.start().x + arc.center().y * line.start().y +
arc.center().z * line.start().z) - pow((arc.radius()), 2);
det = pow(b, 2) - 4 * a * c;
/* Tangent to the circle */
if (Math<double>::isEqual(det, 0.0, 0.00001)) {
if (!Math<double>::isEqual(a, 0.0, 0.00001))
mu = -b / (2 * a);
else
mu = 0.0;
// x = h + t * ( p − h )
xPoints.second = xPoints.first = line.start().x + mu * (line.end().x - line.start().x);
yPoints.second = yPoints.first = line.start().y + mu * (line.end().y - line.start().y);
zPoints.second = zPoints.first = line.start().z + mu * (line.end().z - line.start().z);
}
if (Math<double>::isGreater(det, 0.0, 0.00001)) {
// first intersection
mu = (-b - sqrt(pow(b, 2) - 4 * a * c)) / (2 * a);
xPoints.first = line.start().x + mu * (line.end().x - line.start().x);
yPoints.first = line.start().y + mu * (line.end().y - line.start().y);
zPoints.first = line.start().z + mu * (line.end().z - line.start().z);
// second intersection
mu = (-b + sqrt(pow(b, 2) - 4 * a * c)) / (2 * a);
xPoints.second = line.start().x + mu * (line.end().x - line.start().x);
yPoints.second = line.start().y + mu * (line.end().y - line.start().y);
zPoints.second = line.start().z + mu * (line.end().z - line.start().z);
}
Denoting the line's start point as A, its end point as B, the circle's center as C, the circle's radius as r, and the intersection point as P, we can write P as
P = (1-t)*A + t*B = A + t*(B-A)    (1)
Point P also lies on the circle, therefore
|P-C|^2 = r^2    (2)
Plugging equation (1) into equation (2), you get
|B-A|^2 * t^2 + 2*((B-A)·(A-C))*t + (|A-C|^2 - r^2) = 0    (3)
This is how you get the formulas for a, b and c in the program you posted. After solving for t, you obtain the intersection point(s) from equation (1). Since equation (3) is quadratic, you might get 0, 1 or 2 values for t, which correspond to the geometric configurations where the line does not intersect the circle, is exactly tangent to it, or passes through it at two locations.
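To make the correspondence with the code concrete, here is a compact, self-contained C++ sketch of the same derivation: a = |B-A|^2, b = 2*(B-A)·(A-C), c = |A-C|^2 - r^2, solve the quadratic for t, and substitute back into P = A + t*(B-A). The type and function names are made up for illustration; since the posted code works in 3D, this is really a line-sphere intersection, and it uses exact-zero discriminant checks where the original uses epsilon comparisons:

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 pointAt(Vec3 a, Vec3 b, double t) {        // A + t*(B - A)
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

struct Intersections { int count; Vec3 p1, p2; };       // 0, 1 or 2 points

static Intersections lineSphere(Vec3 A, Vec3 B, Vec3 C, double r) {
    Vec3 d  = sub(B, A);                    // B - A
    Vec3 ac = sub(A, C);                    // A - C
    double a = dot(d, d);                   // |B-A|^2
    double b = 2.0 * dot(d, ac);            // 2 (B-A)·(A-C)
    double c = dot(ac, ac) - r * r;         // |A-C|^2 - r^2
    double det = b * b - 4.0 * a * c;
    if (det < 0.0) return { 0, {}, {} };    // line misses the sphere
    double sq = std::sqrt(det);
    Vec3 p1 = pointAt(A, B, (-b - sq) / (2.0 * a));
    Vec3 p2 = pointAt(A, B, (-b + sq) / (2.0 * a));
    return { det == 0.0 ? 1 : 2, p1, p2 };  // tangent point, or two crossings
}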

Distance using WGS84-ellipsoid

Consider points P1 (60°N, 20°E, 0) and P2 (60°N, 22°E, 0) on the surface of the Earth.
What is the shortest distance between the points P1 and P2 when the shape of the Earth is modeled using the WGS-84 ellipsoid?
Unfortunately, Vincenty's algorithm fails to converge for some inputs.
GeographicLib provides an alternative which always converges (and
is also more accurate). Implementations in C++, C, Fortran, Javascript, Python, Java, and Matlab are provided. E.g., using the
Matlab package:
format long;
geoddistance(60,20,60,22)
->
111595.753650629
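The C++ package exposes the same solver. A minimal sketch, assuming GeographicLib is installed and its Geodesic class behaves as in recent releases:

#include <GeographicLib/Geodesic.hpp>
#include <cstdio>

int main() {
    const GeographicLib::Geodesic& geod = GeographicLib::Geodesic::WGS84();  // WGS-84 ellipsoid
    double s12 = 0.0;                              // geodesic distance in metres
    geod.Inverse(60.0, 20.0, 60.0, 22.0, s12);     // P1 = (60N, 20E), P2 = (60N, 22E)
    std::printf("%.6f\n", s12);                    // should print roughly 111595.753651
    return 0;
}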
As pointed out in a comment to your question, you should use Vincenty's formula for the inverse problem.
The answer to your question is: 111595.75 metres (or 60.257 nautical miles).
A JavaScript implementation of Vincenty's inverse formula, as copied from http://jsperf.com/vincenty-vs-haversine-distance-calculations:
/**
* Calculates geodetic distance between two points specified by latitude/longitude using
* Vincenty inverse formula for ellipsoids
*
* @param {Number} lat1, lon1: first point in decimal degrees
* @param {Number} lat2, lon2: second point in decimal degrees
* @returns {Number} distance in metres between points
*/
function distVincenty(lat1, lon1, lat2, lon2) {
var a = 6378137,
b = 6356752.314245,
f = 1 / 298.257223563; // WGS-84 ellipsoid params
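// Note: the script this was copied from extends Number.prototype with a degrees-to-radians
// helper used below; if your environment does not define it, something along the lines of
//   Number.prototype.toRad = function () { return this * Math.PI / 180; };
// is assumed.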
var L = (lon2 - lon1).toRad();
var U1 = Math.atan((1 - f) * Math.tan(lat1.toRad()));
var U2 = Math.atan((1 - f) * Math.tan(lat2.toRad()));
var sinU1 = Math.sin(U1),
cosU1 = Math.cos(U1);
var sinU2 = Math.sin(U2),
cosU2 = Math.cos(U2);
var lambda = L,
lambdaP, iterLimit = 100;
do {
var sinLambda = Math.sin(lambda),
cosLambda = Math.cos(lambda);
var sinSigma = Math.sqrt((cosU2 * sinLambda) * (cosU2 * sinLambda) + (cosU1 * sinU2 - sinU1 * cosU2 * cosLambda) * (cosU1 * sinU2 - sinU1 * cosU2 * cosLambda));
if (sinSigma == 0) return 0; // co-incident points
var cosSigma = sinU1 * sinU2 + cosU1 * cosU2 * cosLambda;
var sigma = Math.atan2(sinSigma, cosSigma);
var sinAlpha = cosU1 * cosU2 * sinLambda / sinSigma;
var cosSqAlpha = 1 - sinAlpha * sinAlpha;
var cos2SigmaM = cosSigma - 2 * sinU1 * sinU2 / cosSqAlpha;
if (isNaN(cos2SigmaM)) cos2SigmaM = 0; // equatorial line: cosSqAlpha=0 (§6)
var C = f / 16 * cosSqAlpha * (4 + f * (4 - 3 * cosSqAlpha));
lambdaP = lambda;
lambda = L + (1 - C) * f * sinAlpha * (sigma + C * sinSigma * (cos2SigmaM + C * cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM)));
} while (Math.abs(lambda - lambdaP) > 1e-12 && --iterLimit > 0);
if (iterLimit == 0) return NaN // formula failed to converge
var uSq = cosSqAlpha * (a * a - b * b) / (b * b);
var A = 1 + uSq / 16384 * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)));
var B = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)));
var deltaSigma = B * sinSigma * (cos2SigmaM + B / 4 * (cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM) - B / 6 * cos2SigmaM * (-3 + 4 * sinSigma * sinSigma) * (-3 + 4 * cos2SigmaM * cos2SigmaM)));
var s = b * A * (sigma - deltaSigma);
s = s.toFixed(3); // round to 1mm precision
return s;
}
The haversine formula is commonly used (error < 0.5%).
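For reference, a minimal haversine sketch (in C++ for brevity), assuming a spherical Earth with mean radius 6371 km; against the ellipsoidal 111595.75 m above it is off by roughly 0.4%:

#include <cmath>
#include <cstdio>

// Great-circle distance on a sphere of mean radius 6371 km (haversine formula).
double haversineMetres(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371000.0;                        // mean Earth radius in metres
    const double rad = 3.14159265358979323846 / 180.0;
    double dphi = (lat2 - lat1) * rad;
    double dlmb = (lon2 - lon1) * rad;
    double a = std::sin(dphi / 2) * std::sin(dphi / 2) +
               std::cos(lat1 * rad) * std::cos(lat2 * rad) *
               std::sin(dlmb / 2) * std::sin(dlmb / 2);
    return 2.0 * R * std::atan2(std::sqrt(a), std::sqrt(1.0 - a));
}

int main() {
    std::printf("%f\n", haversineMetres(60, 20, 60, 22));  // same P1/P2 as above
    return 0;
}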

Return CATransform3D to map quadrilateral to quadrilateral

I'm trying to derive a CATransform3D that will map a quad with 4 corner points to another quad with 4 new corner points. I've spent a bit of time researching this, and it seems the steps involve converting the original quad to a square and then converting that square to the new quad. My methods look like this (code borrowed from here):
- (CATransform3D)quadFromSquare_x0:(float)x0 y0:(float)y0 x1:(float)x1 y1:(float)y1 x2:(float)x2 y2:(float)y2 x3:(float)x3 y3:(float)y3 {
float dx1 = x1 - x2, dy1 = y1 - y2;
float dx2 = x3 - x2, dy2 = y3 - y2;
float sx = x0 - x1 + x2 - x3;
float sy = y0 - y1 + y2 - y3;
float g = (sx * dy2 - dx2 * sy) / (dx1 * dy2 - dx2 * dy1);
float h = (dx1 * sy - sx * dy1) / (dx1 * dy2 - dx2 * dy1);
float a = x1 - x0 + g * x1;
float b = x3 - x0 + h * x3;
float c = x0;
float d = y1 - y0 + g * y1;
float e = y3 - y0 + h * y3;
float f = y0;
CATransform3D mat;
mat.m11 = a;
mat.m12 = b;
mat.m13 = 0;
mat.m14 = c;
mat.m21 = d;
mat.m22 = e;
mat.m23 = 0;
mat.m24 = f;
mat.m31 = 0;
mat.m32 = 0;
mat.m33 = 1;
mat.m34 = 0;
mat.m41 = g;
mat.m42 = h;
mat.m43 = 0;
mat.m44 = 1;
return mat;
}
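For reference, this first method is the standard projective mapping of the unit square onto a quad (the formulation usually attributed to Paul Heckbert): with the eight coefficients a..h computed above, a point (u, v) in the unit square is sent to

x' = (a*u + b*v + c) / (g*u + h*v + 1)
y' = (d*u + e*v + f) / (g*u + h*v + 1)

so that (0,0), (1,0), (1,1) and (0,1) land on (x0,y0), (x1,y1), (x2,y2) and (x3,y3) respectively, assuming the corners are passed in that order. The second method below then inverts this 3x3 homography through its adjugate, as its "invert through adjoint" comment says.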
- (CATransform3D)squareFromQuad_x0:(float)x0 y0:(float)y0 x1:(float)x1 y1:(float)y1 x2:(float)x2 y2:(float)y2 x3:(float)x3 y3:(float)y3 {
CATransform3D mat = [self quadFromSquare_x0:x0 y0:y0 x1:x1 y1:y1 x2:x2 y2:y2 x3:x3 y3:y3];
// invert through adjoint
float a = mat.m11, d = mat.m21, /* ignore */ g = mat.m41;
float b = mat.m12, e = mat.m22, /* 3rd col*/ h = mat.m42;
/* ignore 3rd row */
float c = mat.m14, f = mat.m24;
float A = e - f * h;
float B = c * h - b;
float C = b * f - c * e;
float D = f * g - d;
float E = a - c * g;
float F = c * d - a * f;
float G = d * h - e * g;
float H = b * g - a * h;
float I = a * e - b * d;
// Probably unnecessary since 'I' is also scaled by the determinant,
// and 'I' scales the homogeneous coordinate, which, in turn,
// scales the X,Y coordinates.
// Determinant = a * (e - f * h) + b * (f * g - d) + c * (d * h - e * g);
float idet = 1.0f / (a * A + b * D + c * G);
mat.m11 = A * idet; mat.m21 = D * idet; mat.m31 = 0; mat.m41 = G * idet;
mat.m12 = B * idet; mat.m22 = E * idet; mat.m32 = 0; mat.m42 = H * idet;
mat.m13 = 0 ; mat.m23 = 0 ; mat.m33 = 1; mat.m43 = 0 ;
mat.m14 = C * idet; mat.m24 = F * idet; mat.m34 = 0; mat.m44 = I * idet;
return mat;
}
After calculating both matrices, multiplying them together, and assigning to the view in question, I end up with a transformed view, but it is wildly incorrect. In fact, it seems to be sheared like a parallelogram no matter what I do. What am I missing?
UPDATE 2/1/12
It seems the reason I'm running into issues may be that I need to account for FOV and focal length in the model view matrix (which is the only matrix I can alter directly in Quartz). I'm not having any luck finding documentation online on how to calculate the proper matrix, though.
I was able to achieve this by porting and combining the quad warping and homography code from these two URLs:
http://forum.openframeworks.cc/index.php/topic,509.30.html
http://forum.openframeworks.cc/index.php?topic=3121.15
UPDATE: I've open sourced a small class that does this: https://github.com/dominikhofmann/DHWarpView