Return CATransform3D to map quadrilateral to quadrilateral - objective-c

I'm trying to derive a CATransform3D that will map a quad with 4 corner points to another quad with 4 new corner points. I've spent a little bit of time researching this and it seems the steps involve converting the original Quad to a Square, and then converting that Square to the new Quad. My methods look like this (code borrowed from here):
- (CATransform3D)quadFromSquare_x0:(float)x0 y0:(float)y0 x1:(float)x1 y1:(float)y1 x2:(float)x2 y2:(float)y2 x3:(float)x3 y3:(float)y3 {
    float dx1 = x1 - x2, dy1 = y1 - y2;
    float dx2 = x3 - x2, dy2 = y3 - y2;
    float sx = x0 - x1 + x2 - x3;
    float sy = y0 - y1 + y2 - y3;
    float g = (sx * dy2 - dx2 * sy) / (dx1 * dy2 - dx2 * dy1);
    float h = (dx1 * sy - sx * dy1) / (dx1 * dy2 - dx2 * dy1);
    float a = x1 - x0 + g * x1;
    float b = x3 - x0 + h * x3;
    float c = x0;
    float d = y1 - y0 + g * y1;
    float e = y3 - y0 + h * y3;
    float f = y0;

    CATransform3D mat;
    mat.m11 = a;
    mat.m12 = b;
    mat.m13 = 0;
    mat.m14 = c;
    mat.m21 = d;
    mat.m22 = e;
    mat.m23 = 0;
    mat.m24 = f;
    mat.m31 = 0;
    mat.m32 = 0;
    mat.m33 = 1;
    mat.m34 = 0;
    mat.m41 = g;
    mat.m42 = h;
    mat.m43 = 0;
    mat.m44 = 1;
    return mat;
}
- (CATransform3D)squareFromQuad_x0:(float)x0 y0:(float)y0 x1:(float)x1 y1:(float)y1 x2:(float)x2 y2:(float)y2 x3:(float)x3 y3:(float)y3 {
    CATransform3D mat = [self quadFromSquare_x0:x0 y0:y0 x1:x1 y1:y1 x2:x2 y2:y2 x3:x3 y3:y3];

    // invert through adjoint
    float a = mat.m11, d = mat.m21, /* ignore */  g = mat.m41;
    float b = mat.m12, e = mat.m22, /* 3rd col */ h = mat.m42;
    /* ignore 3rd row */
    float c = mat.m14, f = mat.m24;

    float A = e - f * h;
    float B = c * h - b;
    float C = b * f - c * e;
    float D = f * g - d;
    float E = a - c * g;
    float F = c * d - a * f;
    float G = d * h - e * g;
    float H = b * g - a * h;
    float I = a * e - b * d;

    // Probably unnecessary since 'I' is also scaled by the determinant,
    // and 'I' scales the homogeneous coordinate, which, in turn,
    // scales the X,Y coordinates.
    // Determinant = a * (e - f * h) + b * (f * g - d) + c * (d * h - e * g);
    float idet = 1.0f / (a * A + b * D + c * G);

    mat.m11 = A * idet; mat.m21 = D * idet; mat.m31 = 0; mat.m41 = G * idet;
    mat.m12 = B * idet; mat.m22 = E * idet; mat.m32 = 0; mat.m42 = H * idet;
    mat.m13 = 0       ; mat.m23 = 0       ; mat.m33 = 1; mat.m43 = 0       ;
    mat.m14 = C * idet; mat.m24 = F * idet; mat.m34 = 0; mat.m44 = I * idet;
    return mat;
}
After calculating both matrices, multiplying them together, and assigning to the view in question, I end up with a transformed view, but it is wildly incorrect. In fact, it seems to be sheared like a parallelogram no matter what I do. What am I missing?
UPDATE 2/1/12
It seems the reason I'm running into issues may be that I need to account for FOV and focal length in the model-view matrix (which is the only matrix I can alter directly in Quartz). I'm not having any luck finding documentation online on how to calculate the proper matrix, though.

I was able to achieve this by porting and combining the quad warping and homography code from these two URLs:
http://forum.openframeworks.cc/index.php/topic,509.30.html
http://forum.openframeworks.cc/index.php?topic=3121.15
UPDATE: I've open sourced a small class that does this: https://github.com/dominikhofmann/DHWarpView
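In rough outline, the combined transform maps the source quad to the unit square and then the unit square to the destination quad. A minimal sketch of that combination step (plain C; squareFromQuad/quadFromSquare are hypothetical C ports of the two methods above, and the CATransform3DConcat order assumes Core Animation's row-vector convention, so it may need to be swapped):
#include <QuartzCore/QuartzCore.h>

// Hypothetical C ports of the two Objective-C methods above (not shown here).
CATransform3D squareFromQuad(float x0, float y0, float x1, float y1,
                             float x2, float y2, float x3, float y3);
CATransform3D quadFromSquare(float x0, float y0, float x1, float y1,
                             float x2, float y2, float x3, float y3);

CATransform3D transformFromQuadToQuad(CGPoint src[4], CGPoint dst[4]) {
    // Source quad -> unit square.
    CATransform3D srcToSquare = squareFromQuad(src[0].x, src[0].y, src[1].x, src[1].y,
                                               src[2].x, src[2].y, src[3].x, src[3].y);
    // Unit square -> destination quad.
    CATransform3D squareToDst = quadFromSquare(dst[0].x, dst[0].y, dst[1].x, dst[1].y,
                                               dst[2].x, dst[2].y, dst[3].x, dst[3].y);
    // Row-vector convention: apply srcToSquare first, then squareToDst.
    return CATransform3DConcat(srcToSquare, squareToDst);
}
// Then, in Objective-C: view.layer.transform = transformFromQuadToQuad(src, dst);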

Related

Look-at quaternion using up vector

I have a camera (in a custom 3D engine) that accepts a quaternion for the rotation transform. I have two 3D points representing a camera and an object to look at. I want to calculate the quaternion that looks from the camera to the object, while respecting the world up axis.
This question asks for the same thing without the "up" vector. All three answers result in the camera pointing in the correct direction, but rolling (as in yaw/pitch/roll; imagine leaning your head onto your ear while looking at something).
I can calculate an orthonormal basis of vectors that match the desired coordinate system by:
lookAt = normalize(target - camera)
sideaxis = cross(lookAt, worldUp)
rotatedup = cross(sideaxis, lookAt)
How can I create a quaternion from those three vectors? This question asks for the same thing... but unfortunately the only (and accepted) answer says, roughly, "let's assume you don't care about roll", and then goes on to ignore the up axis. I do care about roll. I don't want to ignore the up axis.
A previous answer has given a valid solution using angles. This answer will present an alternative method.
The orthonormal basis vectors, renaming them F = lookAt, R = sideaxis, U = rotatedup, directly form the columns of the 3x3 rotation matrix which is equivalent to your desired quaternion:
Multiplication with a vector is equivalent to using said vector's components as the coordinates in the camera's basis.
A 3x3 rotation matrix can be converted into a quaternion without conversion to angles / use of costly trigonometric functions. Below is a numerically stable C++ snippet which does this, returning a normalized quaternion:
inline void CalculateRotation( Quaternion& q ) const {
    float trace = a[0][0] + a[1][1] + a[2][2];
    if( trace > 0 ) {
        float s = 0.5f / sqrtf(trace + 1.0f);
        q.w = 0.25f / s;
        q.x = ( a[2][1] - a[1][2] ) * s;
        q.y = ( a[0][2] - a[2][0] ) * s;
        q.z = ( a[1][0] - a[0][1] ) * s;
    } else {
        if ( a[0][0] > a[1][1] && a[0][0] > a[2][2] ) {
            float s = 2.0f * sqrtf( 1.0f + a[0][0] - a[1][1] - a[2][2] );
            q.w = ( a[2][1] - a[1][2] ) / s;
            q.x = 0.25f * s;
            q.y = ( a[0][1] + a[1][0] ) / s;
            q.z = ( a[0][2] + a[2][0] ) / s;
        } else if ( a[1][1] > a[2][2] ) {
            float s = 2.0f * sqrtf( 1.0f + a[1][1] - a[0][0] - a[2][2] );
            q.w = ( a[0][2] - a[2][0] ) / s;
            q.x = ( a[0][1] + a[1][0] ) / s;
            q.y = 0.25f * s;
            q.z = ( a[1][2] + a[2][1] ) / s;
        } else {
            float s = 2.0f * sqrtf( 1.0f + a[2][2] - a[0][0] - a[1][1] );
            q.w = ( a[1][0] - a[0][1] ) / s;
            q.x = ( a[0][2] + a[2][0] ) / s;
            q.y = ( a[1][2] + a[2][1] ) / s;
            q.z = 0.25f * s;
        }
    }
}
Source: http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion
Converting this to suit your situation is of course just a matter of swapping the matrix elements with the corresponding vector components:
// your code from before
F = normalize(target - camera);   // lookAt
R = normalize(cross(F, worldUp)); // sideaxis
U = cross(R, F);                  // rotatedup

// note that R needed to be re-normalized
// since F and worldUp are not necessarily perpendicular
// so we must remove the sin(angle) factor of the cross-product
// same not true for U because dot(R, F) = 0

// adapted source
Quaternion q;
double trace = R.x + U.y + F.z;
if (trace > 0.0) {
    double s = 0.5 / sqrt(trace + 1.0);
    q.w = 0.25 / s;
    q.x = (U.z - F.y) * s;
    q.y = (F.x - R.z) * s;
    q.z = (R.y - U.x) * s;
} else {
    if (R.x > U.y && R.x > F.z) {
        double s = 2.0 * sqrt(1.0 + R.x - U.y - F.z);
        q.w = (U.z - F.y) / s;
        q.x = 0.25 * s;
        q.y = (U.x + R.y) / s;
        q.z = (F.x + R.z) / s;
    } else if (U.y > F.z) {
        double s = 2.0 * sqrt(1.0 + U.y - R.x - F.z);
        q.w = (F.x - R.z) / s;
        q.x = (U.x + R.y) / s;
        q.y = 0.25 * s;
        q.z = (F.y + U.z) / s;
    } else {
        double s = 2.0 * sqrt(1.0 + F.z - R.x - U.y);
        q.w = (R.y - U.x) / s;
        q.x = (F.x + R.z) / s;
        q.y = (F.y + U.z) / s;
        q.z = 0.25 * s;
    }
}
(And needless to say swap y and z if you're using OpenGL.)
Assume you initially have three orthonormal vectors: worldUp, worldFront and worldSide, and let's use your equations for lookAt, sideAxis and rotatedUp. The worldSide vector will not be necessary to achieve the result.
Break the operation into two steps. First, rotate around worldUp. Then rotate around sideAxis, which will now actually be parallel to the rotated worldSide.
Axis1 = worldUp
Angle1 = (see below)
Axis2 = cross(lookAt, worldUp) = sideAxis
Angle2 = (see below)
Each of these rotations corresponds to a quaternion using:
Q = cos(Angle/2) + i * Axis_x * sin(Angle/2) + j * Axis_y * sin(Angle/2) + k * Axis_z * sin(Angle/2)
Multiply Q1 and Q2 together and you get the desired quaternion.
Details for the angles:
Let P(worldUp) be the projection matrix onto the worldUp direction, i.e., P(worldUp)·v = dot(worldUp, v)·worldUp, or in bra-ket notation, P(worldUp) = |worldUp><worldUp|. Let I be the identity matrix.
Project lookAt in the plane perpendicular to worldUp and normalize it.
tmp1 = (I - P(worldUp)).lookAt
n1 = normalize(tmp1)
Angle1 = arccos(dot(worldFront,n1))
Angle2 = arccos(dot(lookAt,n1))
EDIT1:
Notice that there is no need to compute transcendental functions. Since the dot product of a pair of normalized vectors is the cosine of the angle between them, assuming cos(t) = x we have the trigonometric identities:
cos(t/2) = sqrt((1 + x)/2)
sin(t/2) = sqrt((1 - x)/2)
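Putting the two rotations together, a minimal sketch might look like the following (plain C; Vec3, Quat and the helper names are mine, not from the answer above). It builds Q1 and Q2 with the half-angle identities instead of acos/cos/sin; as in the construction above, the sign of each rotation angle is not handled, and the multiplication order may need to be swapped for your convention:
#include <math.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { double w, x, y, z; } Quat;

static Vec3   v3_sub(Vec3 a, Vec3 b)     { return (Vec3){a.x - b.x, a.y - b.y, a.z - b.z}; }
static double v3_dot(Vec3 a, Vec3 b)     { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3   v3_scale(Vec3 a, double s) { return (Vec3){a.x*s, a.y*s, a.z*s}; }
static Vec3   v3_cross(Vec3 a, Vec3 b)   { return (Vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static Vec3   v3_norm(Vec3 a)            { return v3_scale(a, 1.0 / sqrt(v3_dot(a, a))); }

/* Rotation about a unit axis, given cos(angle); uses the half-angle identities. */
static Quat quat_from_axis_cos(Vec3 axis, double cosAngle) {
    double c = sqrt((1.0 + cosAngle) * 0.5);   /* cos(angle/2) */
    double s = sqrt((1.0 - cosAngle) * 0.5);   /* sin(angle/2) */
    return (Quat){c, axis.x * s, axis.y * s, axis.z * s};
}

static Quat quat_mul(Quat a, Quat b) {
    return (Quat){
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

Quat lookAtQuat(Vec3 camera, Vec3 target, Vec3 worldUp, Vec3 worldFront) {
    Vec3 lookAt = v3_norm(v3_sub(target, camera));
    /* n1 = lookAt projected onto the plane perpendicular to worldUp, normalized */
    Vec3 n1 = v3_norm(v3_sub(lookAt, v3_scale(worldUp, v3_dot(worldUp, lookAt))));
    Quat q1 = quat_from_axis_cos(worldUp, v3_dot(worldFront, n1));   /* Angle1 about worldUp   */
    Vec3 sideAxis = v3_norm(v3_cross(lookAt, worldUp));
    Quat q2 = quat_from_axis_cos(sideAxis, v3_dot(lookAt, n1));      /* Angle2 about sideAxis  */
    return quat_mul(q2, q1);   /* composition order may need to be swapped */
}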
If somebody is searching for a C# version that handles every matrix edge case (not input edge cases!), here it is:
public static SoftQuaternion LookRotation(SoftVector3 forward, SoftVector3 up)
{
    forward = SoftVector3.Normalize(forward);

    // First matrix column
    SoftVector3 sideAxis = SoftVector3.Normalize(SoftVector3.Cross(up, forward));
    // Second matrix column
    SoftVector3 rotatedUp = SoftVector3.Cross(forward, sideAxis);
    // Third matrix column
    SoftVector3 lookAt = forward;

    // Sums of matrix main diagonal elements
    SoftFloat trace1 = SoftFloat.One + sideAxis.X - rotatedUp.Y - lookAt.Z;
    SoftFloat trace2 = SoftFloat.One - sideAxis.X + rotatedUp.Y - lookAt.Z;
    SoftFloat trace3 = SoftFloat.One - sideAxis.X - rotatedUp.Y + lookAt.Z;

    // If the orthonormal vectors form the identity matrix, then return the identity rotation
    if (trace1 + trace2 + trace3 < SoftMath.CalculationsEpsilon)
    {
        return Identity;
    }

    // Choose largest diagonal
    if (trace1 + SoftMath.CalculationsEpsilon > trace2 && trace1 + SoftMath.CalculationsEpsilon > trace3)
    {
        SoftFloat s = SoftMath.Sqrt(trace1) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (SoftFloat)0.25f * s,
            (rotatedUp.X + sideAxis.Y) / s,
            (lookAt.X + sideAxis.Z) / s,
            (rotatedUp.Z - lookAt.Y) / s);
    }
    else if (trace2 + SoftMath.CalculationsEpsilon > trace1 && trace2 + SoftMath.CalculationsEpsilon > trace3)
    {
        SoftFloat s = SoftMath.Sqrt(trace2) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (rotatedUp.X + sideAxis.Y) / s,
            (SoftFloat)0.25f * s,
            (lookAt.Y + rotatedUp.Z) / s,
            (lookAt.X - sideAxis.Z) / s);
    }
    else
    {
        SoftFloat s = SoftMath.Sqrt(trace3) * (SoftFloat)2.0f;
        return new SoftQuaternion(
            (lookAt.X + sideAxis.Z) / s,
            (lookAt.Y + rotatedUp.Z) / s,
            (SoftFloat)0.25f * s,
            (sideAxis.Y - rotatedUp.X) / s);
    }
}
This implementation is based on a deeper understanding of this conversation, and was tested against many edge-case scenarios.
P.S.
The quaternion's constructor is (x, y, z, w).
SoftFloat is a software float type, so you can easily change it to the built-in float if needed.
For a fully edge-case-safe implementation (including input), check this repo.
lookAt
sideaxis
rotatedup
If you normalize these 3 vectors, they are the components of a 3x3 rotation matrix. So just convert this rotation matrix to a quaternion.

Accurately calculate moon phases

For a new project I'd like to calculate the moon phases. So far I haven't seen any code that does that, and I don't want to rely on online services for this.
I have tried some functions, but they are not 100% reliable. Functions I have tried:
NSInteger r = iYear % 100;
r %= 19;
if (r>9){ r -= 19;}
r = ((r * 11) % 30) + iMonth + iDay;
if (iMonth<3){r += 2;}
r -= ((iYear<2000) ? 4 : 8.3);
r = floor(r+0.5);
Another one:
float n = floor(12.37 * (iYear -1900 + ((1.0 * iMonth - 0.5)/12.0)));
float RAD = 3.14159265/180.0;
float t = n / 1236.85;
float t2 = t * t;
float as = 359.2242 + 29.105356 * n;
float am = 306.0253 + 385.816918 * n + 0.010730 * t2;
float xtra = 0.75933 + 1.53058868 * n + ((1.178e-4) - (1.55e-7) * t) * t2;
xtra = xtra + (0.1734 - 3.93e-4 * t) * sin(RAD * as) - 0.4068 * sin(RAD * am);
float i = (xtra > 0.0 ? floor(xtra) : ceil(xtra - 1.0));
float j1 = [self julday:iYear iMonth:iMonth iDay:iDay];
float jd = (2415020 + 28 * n) + i;
jd = fmodf((j1-jd + 30), 30);
And the last one:
NSInteger thisJD = [self julday:iYear iMonth:iMonth iDay:iDay];
float degToRad = 3.14159265 / 180;
float K0, T, T2, T3, J0, F0, M0, M1, B1, oldJ = 0.0;
K0 = floor((iYear-1900)*12.3685);
T = (iYear-1899.5) / 100;
T2 = T*T; T3 = T*T*T;
J0 = 2415020 + 29*K0;
F0 = 0.0001178*T2 - 0.000000155*T3 + (0.75933 + 0.53058868*K0) - (0.000837*T + 0.000335*T2);
M0 = 360*[self getFrac:((K0*0.08084821133)) + 359.2242 - 0.0000333*T2 - 0.00000347*T3];
M1 = 360*[self getFrac:((K0*0.07171366128)) + 306.0253 + 0.0107306*T2 + 0.00001236*T3];
B1 = 360*[self getFrac:((K0*0.08519585128)) + 21.2964 - (0.0016528*T2) - (0.00000239*T3)];
NSInteger phase = 0;
NSInteger jday = 0;
while (jday < thisJD) {
    float F = F0 + 1.530588*phase;
    float M5 = (M0 + phase*29.10535608)*degToRad;
    float M6 = (M1 + phase*385.81691806)*degToRad;
    float B6 = (B1 + phase*390.67050646)*degToRad;
    F -= 0.4068*sin(M6) + (0.1734 - 0.000393*T)*sin(M5);
    F += 0.0161*sin(2*M6) + 0.0104*sin(2*B6);
    F -= 0.0074*sin(M5 - M6) - 0.0051*sin(M5 + M6);
    F += 0.0021*sin(2*M5) + 0.0010*sin(2*B6-M6);
    F += 0.5 / 1440;
    oldJ = jday;
    jday = J0 + 28*phase + floor(F);
    phase++;
}
float jd = fmodf((thisJD-oldJ), 30);
All of them work more or less, but none really gives the correct dates of the full moon for 2017 and 2018.
Does anyone have a function that will calculate the moon phases correctly - also based on time zone?
EDIT:
I only want the function for the moon phases. SwiftAA offers a lot more and would only add unneeded overhead in the app.
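For comparison, the crudest possible approach just counts days from a known new moon and reduces modulo the mean synodic month. It ignores the Moon's elliptical orbit (which is what the correction terms in the snippets above try to model), so it is only good to within roughly a day, but it shows the basic idea. A minimal C sketch, assuming the reference new moon of 2000-01-06 18:14 UTC; the function and constant names are mine:
#include <math.h>
#include <stdio.h>

/* Gregorian calendar date -> Julian Day at 00:00 UTC (Fliegel & Van Flandern). */
static double julianDay(int y, int m, int d) {
    long a   = (14 - m) / 12;
    long yy  = y + 4800 - a;
    long mm  = m + 12 * a - 3;
    long jdn = d + (153 * mm + 2) / 5 + 365L * yy + yy / 4 - yy / 100 + yy / 400 - 32045;
    return jdn - 0.5;   /* JDN refers to noon; shift to midnight UTC */
}

/* Days since the last new moon (0 = new, ~14.77 = full), mean synodic month only.
   Reference new moon: 2000-01-06 18:14 UTC (JD ~2451550.26). Accuracy roughly +/- 1 day. */
static double moonAgeDays(int y, int m, int d) {
    const double synodicMonth = 29.530588853;
    const double refNewMoonJD = 2451550.26;
    double age = fmod(julianDay(y, m, d) - refNewMoonJD, synodicMonth);
    if (age < 0.0) age += synodicMonth;
    return age;
}

int main(void) {
    /* ~13.9 days, i.e. close to full (there was a full moon on 2018-01-31). */
    printf("Moon age on 2018-01-31: %.1f days\n", moonAgeDays(2018, 1, 31));
    return 0;
}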

Finding intersection points of line and circle

I'm trying to understand what this function does. It was given by my teacher, and I just can't understand the logic behind the formulas for finding the x and y coordinates. From my math class I know the formulas for finding intersections, but it's confusing once translated into code. So I have some problems understanding how they defined the formulas for a, b, c and for finding the coordinates x and y.
void Intersection::getIntersectionPoints(const Arc& arc, const Line& line) {
    double a, b, c, mu, det;
    std::pair<double, double> xPoints;
    std::pair<double, double> yPoints;
    std::pair<double, double> zPoints;

    //(m2+1)x2+2(mc−mq−p)x+(q2−r2+p2−2cq+c2)=0.
    //a= m2;
    //b= 2 * (mc - mq - p);
    //c= q2−r2+p2−2cq+c2
    a = pow((line.end().x - line.start().x), 2) + pow((line.end().y - line.start().y), 2) + pow((line.end().z - line.start().z), 2);
    b = 2 * ((line.end().x - line.start().x)*(line.start().x - arc.center().x)
           + (line.end().y - line.start().y)*(line.start().y - arc.center().y)
           + (line.end().z - line.start().z)*(line.start().z - arc.center().z));
    c = pow((arc.center().x), 2) + pow((arc.center().y), 2) +
        pow((arc.center().z), 2) + pow((line.start().x), 2) +
        pow((line.start().y), 2) + pow((line.start().z), 2) -
        2 * (arc.center().x * line.start().x + arc.center().y * line.start().y +
             arc.center().z * line.start().z) - pow((arc.radius()), 2);

    det = pow(b, 2) - 4 * a * c;

    /* Line tangent to the circle */
    if (Math<double>::isEqual(det, 0.0, 0.00001)) {
        if (!Math<double>::isEqual(a, 0.0, 0.00001))
            mu = -b / (2 * a);
        else
            mu = 0.0;
        // x = h + t * ( p − h )
        xPoints.second = xPoints.first = line.start().x + mu * (line.end().x - line.start().x);
        yPoints.second = yPoints.first = line.start().y + mu * (line.end().y - line.start().y);
        zPoints.second = zPoints.first = line.start().z + mu * (line.end().z - line.start().z);
    }

    if (Math<double>::isGreater(det, 0.0, 0.00001)) {
        // first intersection
        mu = (-b - sqrt(pow(b, 2) - 4 * a * c)) / (2 * a);
        xPoints.first = line.start().x + mu * (line.end().x - line.start().x);
        yPoints.first = line.start().y + mu * (line.end().y - line.start().y);
        zPoints.first = line.start().z + mu * (line.end().z - line.start().z);
        // second intersection
        mu = (-b + sqrt(pow(b, 2) - 4 * a * c)) / (2 * a);
        xPoints.second = line.start().x + mu * (line.end().x - line.start().x);
        yPoints.second = line.start().y + mu * (line.end().y - line.start().y);
        zPoints.second = line.start().z + mu * (line.end().z - line.start().z);
    }
}
Denote the line's start point as A, its end point as B, the circle's center as C, the circle's radius as r, and the intersection point as P. Then we can write P as
P=(1-t)*A + t*B = A+t*(B-A) (1)
Point P will also locate on the circle, therefore
|P-C|^2 = r^2 (2)
Plugging equation (1) into equation (2), you will get
|B-A|^2 * t^2 + 2*dot(B-A, A-C) * t + (|A-C|^2 - r^2) = 0 (3)
This is how you get the formulas for a, b and c in the program you posted. After solving for t, you obtain the intersection point(s) from equation (1). Since equation (3) is quadratic, you might get 0, 1 or 2 values for t, which correspond to the geometric configurations where the line does not intersect the circle, is exactly tangent to the circle, or passes through the circle at two locations.
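Written out as a small self-contained function, the same derivation looks like this (plain C, 2D and with my own names for brevity; the posted code is the 3D version of exactly these three coefficients):
#include <math.h>

typedef struct { double x, y; } Pt;

/* Intersect the line P = A + t*(B-A) with the circle of center C and radius r.
   Returns the number of solutions (0, 1 or 2) and the t values in t[0], t[1].
   Assumes A != B (otherwise a == 0 and the division is invalid). */
static int lineCircleIntersect(Pt A, Pt B, Pt C, double r, double t[2]) {
    Pt d = { B.x - A.x, B.y - A.y };   /* B - A */
    Pt f = { A.x - C.x, A.y - C.y };   /* A - C */

    double a = d.x * d.x + d.y * d.y;           /* |B-A|^2         */
    double b = 2.0 * (d.x * f.x + d.y * f.y);   /* 2*dot(B-A, A-C) */
    double c = f.x * f.x + f.y * f.y - r * r;   /* |A-C|^2 - r^2   */

    double det = b * b - 4.0 * a * c;
    if (det < 0.0) return 0;                                         /* no intersection */
    if (det == 0.0) { t[0] = t[1] = -b / (2.0 * a); return 1; }      /* tangent         */

    double s = sqrt(det);
    t[0] = (-b - s) / (2.0 * a);   /* first crossing  */
    t[1] = (-b + s) / (2.0 * a);   /* second crossing */
    return 2;
}
Each returned t plugs back into P = A + t*(B-A), which is equation (1); the posted code does the same thing with mu and the xPoints/yPoints/zPoints pairs.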

opencl workitem run parallel

This is a question about speed / optimizing the code: a kernel for Sobel edge detection on a grayscale image.
When I run the program without any processing, just showing the input video and the output (same as the input), I get about 70 frames per second; with the processing (the Sobel kernel on the GPU) it drops to about 20 fps.
Does anyone have an idea of how to speed up this code? I used local memory instead of global memory, but the change is small.
How can I make all work items process the image?
The Sobel kernel:
__kernel void hello_kernel(const __global uchar *input, __global uchar *output, const uint width, const uint height)
{
    int x = get_global_id(0);
    int y = get_global_id(1);
    int index = width * y + x;

    float a, b, c, d, e, f, g, h, i;
    float8 v; // (unused)
    float sobelX = 0;
    float sobelY = 0;

    //if(index > width && index < (height*width)-width && (index % width-1) > 0 && (index % width-1) < width-1){

    // horizontal gradient (Gx)
    a = input[index-1-width] * -1.0f;
    b = input[index-0-width] *  0.0f;
    c = input[index+1-width] * +1.0f;
    d = input[index-1]       * -2.0f;
    e = input[index-0]       *  0.0f;
    f = input[index+1]       * +2.0f;
    g = input[index-1+width] * -1.0f;
    h = input[index-0+width] *  0.0f;
    i = input[index+1+width] * +1.0f;
    sobelX = a+b+c+d+e+f+g+h+i;

    // vertical gradient (Gy)
    a = input[index-1-width] * -1.0f;
    b = input[index-0-width] * -2.0f;
    c = input[index+1-width] * -1.0f;
    d = input[index-1]       *  0.0f;
    e = input[index-0]       *  0.0f;
    f = input[index+1]       *  0.0f;
    g = input[index-1+width] * +1.0f;
    h = input[index-0+width] * +2.0f;
    i = input[index+1+width] * +1.0f;
    sobelY = a+b+c+d+e+f+g+h+i;

    // gradient magnitude
    output[index] = sqrt(pow(sobelX, 2) + pow(sobelY, 2));
}
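Regarding "how can I make all work items process the image": the usual pattern is to launch one work item per pixel by passing the image dimensions as the 2D global work size from the host. A rough sketch (plain C host code; queue and kernel are assumed to be already created with the kernel arguments set, and error handling is omitted):
#include <CL/cl.h>
#include <stddef.h>

/* Enqueue the Sobel kernel with one work item per pixel: the global work size
   equals the image size, rounded up to a multiple of the work-group size. */
static cl_int enqueueSobel(cl_command_queue queue, cl_kernel kernel,
                           size_t width, size_t height) {
    size_t local[2]  = { 16, 16 };   /* work-group size; tune per device */
    size_t global[2] = {
        ((width  + local[0] - 1) / local[0]) * local[0],
        ((height + local[1] - 1) / local[1]) * local[1]
    };
    return clEnqueueNDRangeKernel(queue, kernel, 2, NULL,
                                  global, local, 0, NULL, NULL);
}
With a rounded-up global size, the kernel needs an if (x >= width || y >= height) return; guard at the top; the border handling that the commented-out index check above was attempting is a separate concern.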

Maya-like camera implementation

I am working on a Maya-like camera implementation. I've implemented the track and dolly functions correctly, but I just cannot implement tumble.
I am working in the PhiloGL engine (WebGL-based), so I would really appreciate some help with code for this engine.
I've looked at how Maya's camera actually works, but I cannot figure it out. Here is my code so far:
if(mode == "rot")
{
var angleX = diffx / 150;
var angleY = diffy / 150;
//var angleZ = sign * Math.sqrt((diffx * diffx)+(diffy * diffy)) / 150;
e.stop();
//axe Z
//camera.position.x = x * Math.cos(angleX) - y * Math.sin(angleX);
//camera.position.y = x * Math.sin(angleX) + y * Math.cos(angleX);
//axe X
//camera.position.y = y * Math.cos(angleY) - z * Math.sin(angleY);
//camera.position.z = y * Math.sin(angleY) + z * Math.cos(angleY);
//camera.update();
//axe Y
camera.position.z = z * Math.cos(angleX) - x * Math.sin(angleX);
camera.position.x = z * Math.sin(angleX) + x * Math.cos(angleX);
camera.update();
position.x = e.x;
position.y = e.y;
position.z = e.z;
}
This isn't working, and I don't know what I'm doing wrong.
Any clues?
I use this in inka3d (www.inka3d.com) but it does not depend on inka3d. The output is a 4x4 matrix. Can you make use of that?
// turntable like camera, y is up-vector
// tx, ty and tz are camera target position
// rx, ry and rz are camera rotation angles (rad)
// di is camera distance from target
// fr is an array where the resulting view matrix is written into (16 values, row major)
control.cameraY = function(tx, ty, tz, rx, ry, rz, di, fr)
{
    var a = rx * 0.5;
    var b = ry * 0.5;
    var c = rz * 0.5;
    var d = Math.cos(a);
    var e = Math.sin(a);
    var f = Math.cos(b);
    var g = Math.sin(b);
    var h = Math.cos(c);
    var i = Math.sin(c);
    var j = f * e * h + g * d * i;
    var k = f * -e * i + g * d * h;
    var l = f * d * i - g * e * h;
    var m = f * d * h - g * -e * i;
    var n = j * j;
    var o = k * k;
    var p = l * l;
    var q = m * m;
    var r = j * k;
    var s = k * l;
    var t = j * l;
    var u = m * j;
    var v = m * k;
    var w = m * l;
    var x = q + n - o - p;
    var y = (r + w) * 2.0;
    var z = (t - v) * 2.0;
    var A = (r - w) * 2.0;
    var B = q - n + o - p;
    var C = (s + u) * 2.0;
    var D = (t + v) * 2.0;
    var E = (s - u) * 2.0;
    var F = q - n - o + p;
    var G = di;
    var H = -(tx + D * G);
    var I = -(ty + E * G);
    var J = -(tz + F * G);
    fr[0] = x;
    fr[1] = A;
    fr[2] = D;
    fr[3] = 0.0;
    fr[4] = y;
    fr[5] = B;
    fr[6] = E;
    fr[7] = 0.0;
    fr[8] = z;
    fr[9] = C;
    fr[10] = F;
    fr[11] = 0.0;
    fr[12] = x * H + y * I + z * J;
    fr[13] = A * H + B * I + C * J;
    fr[14] = D * H + E * I + F * J;
    fr[15] = 1.0;
};