How to detect whether a CGAffineTransform is flipped? - objective-c

I want to know how to detect whether a view's transform is flipped or not.
I've read this question, but none of the answers were what I expected.
I also tried to detect it using the 3x3 matrix of the CGAffineTransform, but I'm not good at math and this kind of matrix.
Could anyone please help me?

A transformation is not flipped; a view is flipped, and to do that a transformation is applied. So the transformation flips the view, it is not flipped itself.
You can read a view's transformation and check whether it is the flip transformation (a scale transformation with a scaling factor of 1 for the x-axis and -1 for the y-axis). But many transformations may be applied to the view, so you might not get the "pure" flip transformation. And at the end of the day, it depends on what you call "flipped": is a rotation of 180° a flip? (What is your real problem?)
However, the most robust way to check seems to be to take the transformation and transform a point. The result gives you a hint about what is done: if the sign of the y-coordinate changes but the x-coordinate stays the same, it looks like a flip, at least at first glance.
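A minimal sketch of that check, written in Python just to show the arithmetic a CGAffineTransform performs (it maps (x, y) to (a*x + c*y + tx, b*x + d*y + ty); the sample values below are placeholders for a pure vertical flip):
def apply_affine(a, b, c, d, tx, ty, x, y):
    # Apply a CGAffineTransform-style matrix (a, b, c, d, tx, ty) to a point
    return (a * x + c * y + tx, b * x + d * y + ty)

# Probe the transform with a test point off the x-axis
x0, y0 = 0.0, 1.0
x1, y1 = apply_affine(1, 0, 0, -1, 0, 0, x0, y0)   # example values for a vertical flip

# x unchanged but y changes sign: looks like a flip
print(x1 == x0 and y1 == -y0)   # True for these example values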

CGAffineTransform has properties you can use to detect that:
aTransformObj.a : represents the X scale
aTransformObj.b : represents the Y skew
aTransformObj.c : represents the X skew
aTransformObj.d : represents the Y scale
aTransformObj.tx : represents the X translation
aTransformObj.ty : represents the Y translation
You can check whether those values are negative or not:
1) If the .a value goes from positive to negative, the view is flipped horizontally.
2) If the .d value goes from positive to negative, the view is flipped vertically.
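A minimal sketch of those sign checks, written in Python with plain values standing in for the CGAffineTransform fields. Because a single-field sign check can mislead when rotation or skew is mixed in, a determinant check on the linear part (an additional, rotation-proof variant, not mentioned above) is also included:
def is_flipped_horizontally(a):
    return a < 0          # .a went from positive to negative

def is_flipped_vertically(d):
    return d < 0          # .d went from positive to negative

def is_mirrored(a, b, c, d):
    # Determinant of the 2x2 linear part: negative means the transform
    # reverses orientation (some flip), while a 180-degree rotation keeps it positive
    return (a * d - b * c) < 0

print(is_flipped_vertically(-1.0))        # True
print(is_mirrored(1.0, 0.0, 0.0, -1.0))   # True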

Related

How can I plot a portion of a surface in a specified region?

I have a parametric surface in 3D. I would like to observe parts of this surface, specifically the part with z > 0 and the part with x² + y² + z² < c.
A few methods that I tried:
Naïvely throwing away the rest of the data, for instance setting X[Z<0] = nan, etc. Since this does not line up with the parametrization that I chose, it creates ragged edges. Is there some sort of "antialiasing" or interpolation option that I can choose? I would be grateful for a pointer to the relevant docs for numpy or plotly.
Trying to set the alpha of the color scale. This sort of works, but it seems to introduce some incorrect rendering: in the picture below, the dark green lump should be in front of the light green disk. Is there something that I did wrong?
On the other hand, I couldn't find a way in the manual to set "two-dimensional" color scales, so that I can simultaneously set the opacity according to the z value and the hue according to some other quantity of interest. Is this possible?
Is there a convenient method to achieve my goal? Or can I improve my attempts above? Any help is appreciated!
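A minimal sketch of the first (NaN-masking) approach with plotly, using a torus as a placeholder parametrization and an arbitrary bound c; plotly drops NaN cells from the mesh, and the surfacecolor argument lets the hue follow a separate quantity (finer sampling of the parameter grid makes the cut edges less ragged):
import numpy as np
import plotly.graph_objects as go

u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 300),
                   np.linspace(0, 2 * np.pi, 300))
X = (2 + np.cos(v)) * np.cos(u)        # placeholder torus parametrization
Y = (2 + np.cos(v)) * np.sin(u)
Z = np.sin(v)

c = 6.0                                 # example bound for x² + y² + z² < c
keep = (Z > 0) & (X**2 + Y**2 + Z**2 < c)
Z_masked = np.where(keep, Z, np.nan)    # NaN cells are not drawn

# Colour by some other quantity of interest (here X² + Y² as a stand-in)
fig = go.Figure(go.Surface(x=X, y=Y, z=Z_masked, surfacecolor=X**2 + Y**2))
fig.show()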

Fast check if polygon contains point between dataframes

I have two dataframes. One contains a column of Polygons, taken from an image of polygon shapes. Each polygon has a set of coordinates. This dataframe also has a "segment-id" column. I have another dataframe, containing a column of Points, also with coordinates. These Points represent pixels from the same image of Polygon shapes, and therefore have the same coordinate system. I want to give every Point the "segment-id" of the Polygon which contains it. Every Polygon contains at least one Point.
Currently, I achieve this by using a nested for loop:
for i, row in enumerate(point_df.itertuples(), 0):
    point = pixel_df.at[i, 'geometry']
    for j in range(len(polygon_df)):
        polygon = polygon_df.iat[j, 0]
        if polygon.contains(point):
            pixel_df.at[i, 'segment_id'] = polygon_df.at[j, 'segment_id']
This is extremely slow. For 100 Points, it takes around 10 seconds. I need a faster way of doing this. I have tried using apply but it is still super slow.
Hope someone can help me out, thanks very much.
For fast "is point inside polygon":
Preparation: in the code that obtains the data describing the polygons, use all the vertices to find the minimum and maximum y-coord and the minimum and maximum x-coord, and store these with the polygon's data.
1) Using the point's coords and the polygon's pre-determined "minimum and maximum x and y", do a "bounding box" test. This is just a fast way to find out if the point is definitely not inside the polygon (so you can skip the more expensive steps most of the time).
2) Set a "yes/no" flag to "no".
3) For each edge in the polygon, determine whether a horizontal line passing through the point would intersect the edge, and if it does, determine the x-coord of the intersection. If the x-coord of the intersection is less than the point's x-coord, toggle (with NOT) the "yes/no" flag. Ignore the "horizontal line passes through a vertex" case during this step.
4) For each vertex, compare its y-coord with the point's y-coord. If they are the same, look at both edges coming from that vertex to see whether their other endpoints lie on the same side in y. If they do (the edges form a 'V' or upside-down 'V' shape), ignore the vertex. Otherwise (the edges form a '<' or '>' shape), if the vertex's x-coord is less than the point's x-coord, toggle the "yes/no" flag.
After all this is done, the "yes/no" flag tells you whether the point was inside the polygon.
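A compact Python version of this recipe might look like the sketch below; the bounding box is recomputed here for brevity rather than stored during preparation, and the half-open comparison (y1 > py) != (y2 > py) folds the vertex cases of step 4 into the edge test of step 3:
def point_in_polygon(px, py, vertices):
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    # 1) bounding-box rejection
    if px < min(xs) or px > max(xs) or py < min(ys) or py > max(ys):
        return False
    # 2) start with the flag set to "no"
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # 3)/4) does this edge cross the horizontal line through the point?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross < px:
                inside = not inside
    return inside

# Example: unit square, point inside it
print(point_in_polygon(0.25, 0.25, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True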

Pose estimation: determine whether the rotation and translation matrices are right

Recently I've been struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. I then use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the resulting vectors are right?
Right now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain the coordinates of that point in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (with the sign determined by which side of the camera the point is on)?
The figure below is drawn in world space, while the axes depicted are those of the camera system.
======== How the camera and the point are placed in world space ========

 Camera --------------------------> Z axis
   |              |
   |              | } Xw?
   |              P(Xw, Yw, Zw)
   |
   v
 x-axis
My rvec and tvec results seem partly right and partly wrong. For a given point the z value seems reasonable: if the point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, based on the location of the point I think they should be positive, yet they are negative. What's more, the pattern detected in the original image looks like this:
But using the point coordinates calculated in the camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern, but it has moved from the bottom right to the top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which seems to be confirmed by this post.
In the case of camera pose estimation, "the value seems reasonable" can be quantified as the backprojection error. That is a measure of how well your resulting rotation and translation map the 3D points to their 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);

// Use the computed solution to project the 3D pattern onto the image
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);

// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints.at(i).x - projectedPattern.at(i).x;
    float dy = pixelPoints.at(i).y - projectedPattern.at(i).y;
    // Euclidean distance between the projected and the measured pixel
    errors.push_back(std::sqrt(dx * dx + dy * dy));
}
// Here, compute the max or average of your "errors"
An average backprojection error for a calibrated camera might be in the range of 0-2 pixels. Judging from your two pictures, yours would be much more than that. To me, it looks like a scaling problem. If I am right, you compute the projection yourself; maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I learned not to follow my imagination :) The first thing I usually do with the returned rVec and tVec is to create a 4x4 rigid transformation matrix out of them (I once posted code for that here). This makes things even less intuitive, but it is compact and handy.
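For reference, a short Python/OpenCV sketch of that 4x4 rigid transformation, with placeholder rvec, tvec and world_point standing in for the solvePnP inputs and outputs:
import numpy as np
import cv2

# Placeholder pose; in practice rvec and tvec come from cv2.solvePnP
rvec = np.array([0.0, 0.0, 0.1])
tvec = np.array([0.0, 0.0, 1.0])
world_point = np.array([0.2, 0.0, 0.0])

R, _ = cv2.Rodrigues(rvec)      # 3x1 rotation vector -> 3x3 rotation matrix
T = np.eye(4)                   # 4x4 rigid transform, world -> camera
T[:3, :3] = R
T[:3, 3] = tvec

# A world point mapped into the camera frame via homogeneous coordinates
p_cam = T @ np.append(world_point, 1.0)
print(p_cam[:3])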
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
If the coordinates of a point in the camera system are (xc, yc, zc), then xc is the distance between the camera and the point in the real world along the camera's x direction.
Next, how to determine whether the output matrices are right?
1. As @eidelen points out, the backprojection error is one indicative measure.
2. Calculate the coordinates of the points in the camera system from their world coordinates and the matrices, and check whether they are reasonable.
So why did I get a wrong result (the pattern remained, but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic parameters. In the camera matrix you should use width/2 and height/2 for cx and cy (the principal point), while I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
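For illustration, the intrinsic matrix layout this refers to, with placeholder focal lengths (cx and cy are the principal point, roughly the image centre):
import numpy as np

width, height = 640, 480              # image size in pixels
fx = fy = 800.0                       # focal lengths in pixels, from calibration
cx, cy = width / 2.0, height / 2.0    # principal point: half the size, not the full size

cameraMatrix = np.array([[fx, 0.0, cx],
                         [0.0, fy, cy],
                         [0.0, 0.0, 1.0]])
print(cameraMatrix)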

Three.js camera tilt up or down and keep horizon level

camera.rotation.y pans left or right in a predictable manner.
camera.rotation.x looks up or down predictably when camera.rotation.y is at 180 degrees.
But when I change the value of camera.rotation.y to some new value and then change the value of camera.rotation.x, the horizon rotates.
I've looked for an algorithm to adjust for the horizon rotation after camera.rotation.x is changed, but haven't found one.
In three.js, an object's orientation can be specified by its Euler rotation vector object.rotation. The three components of the rotation vector represent the rotation in radians around the object's internal x-axis, y-axis, and z-axis respectively.
The order in which the rotations are performed is specified by object.rotation.order. The default order is "XYZ" -- rotation around the x-axis occurs first, then the y-axis, then the z-axis.
Rotations are performed with respect to the object's internal coordinate system -- not the world coordinate system. This is important. So, for example, after the x-rotation occurs, the object's y- and z- axes will generally no longer be aligned with the world axes. Rotations specified in this way are not unique.
So, for example, if in code you specify,
camera.rotation.y = y_radians; // Y first
camera.rotation.x = x_radians; // X second
camera.rotation.z = 0;
the rotations are applied in the object's rotation.order, not in the order you specified them.
In your case, you may find it more intuitive to set rotation.order to "YXZ", which is equivalent to "heading, pitch, and roll".
For more information about Euler angles, see the Wikipedia article. Three.js follows the Tait–Bryan convention, as explained in the article.
three.js r.61
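To see numerically why "YXZ" keeps the horizon level, here is a small check using scipy's Rotation as a stand-in for three.js's Euler math (world up is +Y, as in three.js; with no roll, the camera's local "right" axis should keep a zero y-component):
import numpy as np
from scipy.spatial.transform import Rotation as R

yaw, pitch = 40.0, 25.0               # arbitrary heading and look-up angles, in degrees
right = np.array([1.0, 0.0, 0.0])     # camera's local "right" axis

# Intrinsic yaw-then-pitch, i.e. the suggested "YXZ" order (roll left at 0)
r_yxz = R.from_euler("YX", [yaw, pitch], degrees=True)
# Intrinsic pitch-then-yaw, i.e. the default "XYZ" order (roll left at 0)
r_xyz = R.from_euler("XY", [pitch, yaw], degrees=True)

print(r_yxz.apply(right))   # y-component is 0: the horizon stays level
print(r_xyz.apply(right))   # y-component is non-zero: the horizon appears tilted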
I've been looking for the same info for a few days now. The trick is: use the regular rotateX to look up/down, but use rotateOnWorldAxis(new THREE.Vector3(0.0, 1.0, 0.0), angle) for the horizontal turn (https://discourse.threejs.org/t/vertical-camera-rotation/15334).

Translate a coordinate from one triangle to a triangle with a different perspective

How do I calculate point D for triangle 2?
I have the following coordinates for triangle 1:
a(0,0) b(0,78) c(18,39)
Point D is located at (0,39) in triangle 1.
Now I change the perspective on my triangle by, for example, moving coordinates b and c.
The new triangle formed is called triangle 2, with coordinates:
a(0,0) b(11,72) c(37,42)
As you can see, point D is not in the middle of line a<-->b because of the change in perspective/skew.
How do I calculate point D? I have the coordinates a, b, c of triangles 1 & 2.
Preferably answer in program code rather than using math signs, since I am not a hero at reading math :)
You need to convert point D to barycentric coordinates using the original triangle coordinates, then convert it back to cartesian coordinates using the modified triangle coordinates.
This looks like a good introduction to triangular barycentric coordinates: http://blogs.msdn.com/b/rezanour/archive/2011/08/07/barycentric-coordinates-and-point-in-triangle-tests.aspx
Also, explicit formulae for converting a point in a triangle to barycentric coordinates are given at the end of the Converting to Barycentric Coordinates section of the Wikipedia article “Barycentric coordinate system”.
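A small Python/numpy sketch of that recipe, using the triangle coordinates from the question (the conversion to barycentric coordinates below uses a standard dot-product formulation):
import numpy as np

def barycentric(p, a, b, c):
    # Barycentric coordinates (u, v, w) of p with respect to triangle (a, b, c)
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

# Triangle 1 and the known point D
a1, b1, c1 = np.array([0.0, 0.0]), np.array([0.0, 78.0]), np.array([18.0, 39.0])
u, v, w = barycentric(np.array([0.0, 39.0]), a1, b1, c1)   # (0.5, 0.5, 0.0) here

# Triangle 2: same weights, new vertex positions
a2, b2, c2 = np.array([0.0, 0.0]), np.array([11.0, 72.0]), np.array([37.0, 42.0])
d2 = u * a2 + v * b2 + w * c2
print(d2)   # D expressed in triangle 2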
I guess there are more ways of translating a coordinate from one perspective to another.
More on the triangle way was written by culebrón here: Transforming captured co-ordinates into screen co-ordinates.
At the same link there is another way: using SVD to calculate an H-matrix, which can be used to translate any coordinate from one perspective to another. I am going to use this approach because I could solve it this way in MATLAB. The next step is Objective-C! I had some trouble calculating the same thing in Objective-C; more on that here: calculate the V from A = USVt in objective-C with SVD from LAPACK in xcode.
I would like to know how to solve it the triangle way too! I could not figure out what a1 and a2 were in culebrón's post: https://stackoverflow.com/a/1690300/1568532 and neither did the width and height make much sense to me.
Also, I would like to know how to calculate the eye's point of view on a triangle or quadrangle based on 3 or 4 coordinates, if you know the original size of the object.
Any ideas on this?
When I search for the eye's or camera's point of view, there are loads of results about photography.
What do I need to use in order to calculate this? Maybe someone has an example?