Convert point coordinates to a new coordinate system - objective-c

Let's say I have a point with the coordinates (50,100), where (0,0) is the upper left corner of a view.
How can I get the coordinates of the same point if I want the origin of the coordinate system to be the center of the screen (i.e. width/2, height/2)?
Note that I am implementing a custom view, I am drawing inside it, and I just want to convert the coordinates within that same view. I am basically implementing a graphing calculator and I need my coordinate system to start in the middle of the screen so the graphs look better.

I notice you tagged this as an iOS problem, so you can use the method Apple has built into UIView:
- (CGPoint)convertPoint:(CGPoint)point fromView:(UIView *)view;

Find the midpoint you will be using, so for a 100x100 screen, this would be (50,50). Then take the point you need to convert and subtract the midpoint X value from the point X value, and then subtract the point Y value from the midpoint Y value. Notice that you are not doing the same operation on both values.
So if the point is (30,25) the new point would be (-20,25) because 30 - 50 = -20 and 50 - 25 = 25.
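As a small illustration of that arithmetic, here is a sketch in Swift (the function name and sizes are mine, not from the question):

import CoreGraphics

// Convert a point given in top-left-origin view coordinates to a system
// whose origin is the view's centre and whose y axis points upwards
func centeredCoordinates(of point: CGPoint, inViewOfSize size: CGSize) -> CGPoint {
    let midX = size.width / 2
    let midY = size.height / 2
    return CGPoint(x: point.x - midX, y: midY - point.y)
}

// The example above: (30, 25) in a 100x100 view becomes (-20, 25)
let converted = centeredCoordinates(of: CGPoint(x: 30, y: 25),
                                    inViewOfSize: CGSize(width: 100, height: 100))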

Related

How can I implement degrees round the drawn compass

I am developing a GPS waypoint application. I have started by drawing my compass, but I am finding it difficult to place the degree text around the circle. Can anyone help me with a solution? The first image shows the circle of the compass I have drawn.
The second image shows what I want to achieve, that is, degree text placed around the compass.
Assuming you're doing this in a custom view, you need to use one of the drawText methods on the Canvas passed in to onDraw.
You'll have to do a little trigonometry to get the x, y position of the text. Basically, if the text origins sit on a circle with radius r (i.e. r is how far out from the centre they are), and you're placing one at angle θ:
x = r * cosθ
y = r * sinθ
The sin and cos functions take a value in radians, so you'll have to convert that if you're using degrees:
val radians = (degrees.toDouble() / 360.0) * (2.0 * Math.PI)
Also, 0 degrees is at 3 o'clock on the circle, not 12, so you'll have to subtract 90 degrees from your usual compass positions (e.g. 90 degrees on the compass is 0 degrees in the local coordinates). The negative values you get are fine; -90 is the same as 270. If you're trying to replicate the image you posted (where the numbers and everything else rotate while the needle stays at the top), you'll have to apply an angle offset anyway!
These x and y values are distances from the centre of the circle, which probably needs to be the centre of your view (and which you've probably already calculated in order to draw your circle). You'll also need to account for the extra space required to draw those labels, scaling everything so it all fits in the view.
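The placement maths itself is platform-independent; here is a small sketch of it in Swift, purely to illustrate the trigonometry (the names and numbers are mine, not from the question):

import Foundation
import CoreGraphics

// Position for a label at a given compass bearing, on a circle of radius r
// around `center`. 0 degrees in cos/sin sits at 3 o'clock, so subtract 90
// degrees to put 0 (north) at the top; y grows downwards on screen.
func labelPosition(compassDegrees: Double, radius: Double, center: CGPoint) -> CGPoint {
    let radians = (compassDegrees - 90.0) * Double.pi / 180.0
    let x = Double(center.x) + radius * cos(radians)
    let y = Double(center.y) + radius * sin(radians)
    return CGPoint(x: x, y: y)
}

// Labels every 30 degrees: 0, 30, ..., 330
let positions = stride(from: 0.0, to: 360.0, by: 30.0).map {
    labelPosition(compassDegrees: $0, radius: 140, center: CGPoint(x: 160, y: 160))
}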

pose estimation: determine whether rotation and translation matrix are right

Recently I'm struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. Then I use solvePnP to get the rotation and translation vectors. The problem is, how can I determine whether the vectors are right results?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain the coordinates of that point in Camera system. Then all I have to do is to determine whether the coordinates are reasonable. I think I know the directions of x, y and z axes of Camera system.
Is Camera center the origin of the Camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (with the sign determined by which side of the camera the point lies on)?
The figure below is in world space, while the axes depicted are in Camera system.
======== How the camera and the point are placed in world space ========

Camera --------------------------> Z axis
   |        |} Xw?
   |        P(Xw, Yw, Zw)
   |
   v
 X axis
My rvec and tvec results seem right and wrong at the same time. For a specific point, the z value seems reasonable; I mean, if this point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y: judging from the location of the point, I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image looks like this:
But using the points coordinates calculated in Camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern. But it moved from bottom right to top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which seems to be right according to this post as well.
In the case of camera pose estimation, "the values seem reasonable" can be quantified as the backprojection error. That's a measure of how well your resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);
// Use the computed solution to project the 3D pattern back onto the image
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);
// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints.at(i).x - projectedPattern.at(i).x;
    float dy = pixelPoints.at(i).y - projectedPattern.at(i).y;
    // Euclidean distance between the projected and the measured pixel
    float err = std::sqrt(dx * dx + dy * dy);
    errors.push_back(err);
}
// Here, compute max or average of your "errors"
An average backprojection error for a calibrated camera might be in the range of 0 - 2 pixels. Judging from your two pictures, yours would be much higher than that. To me it looks like a scaling problem. If I am right, you compute the projection yourself; maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I learned not to follow my imagination :) The first thing I usually do with the returned rVec and tVec is to create a 4x4 rigid transformation matrix out of them (I once posted code for that here). This makes things even less intuitive, but it is compact and handy.
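For reference, and assuming cv::Rodrigues is used to turn rVec into a 3x3 rotation matrix R, the 4x4 rigid transformation mentioned above has roughly this shape:

    | r11 r12 r13 tx |
T = | r21 r22 r23 ty |
    | r31 r32 r33 tz |
    |  0   0   0   1 |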
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates of the point in the camera system are (xc, yc, zc). Then xc should be the distance between the camera and the point in the real world along the camera's x direction.
Next, how to determine whether the output matrices are right?
1. As #eidelen points out, the backprojection error is one indicative measure.
2. Calculate the coordinates of the points in the camera system from their world coordinates and the matrices, and check whether those coordinates are reasonable.
So why did I get a wrong result (the pattern remained, but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic parameters. In the camera matrix you should use width/2 and height/2 for cx and cy, whereas I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything looks fine.
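For reference, the cameraMatrix expected by solvePnP() is the standard pinhole intrinsic matrix, with the principal point (cx, cy) normally near the image centre:

    | fx  0  cx |
K = |  0  fy cy |        with cx ≈ imageWidth / 2, cy ≈ imageHeight / 2
    |  0   0  1 |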

How could I make an "iPod Wheel" type control on iPhone?

I want to create a sort of "iPod Wheel" control in a Swift project that I'm doing. I've got everything drawn out, but now it's time to actually make this thing work.
What would be the best way to recognize "spinning" so to speak, or to describe that more clearly, when the user is actively pressing the wheel and spinning his/her thumb around the wheel in a clockwise or counter-clockwise direction.
I will no doubt want to use touchesBegan/touchesMoved/touchesEnded. What's the best way to figure out spinning though?
I'm thinking
a) determine in touchesMoved whether the user's touch is within the circle, by computing its distance from the center point. The center point and radius are easily obtainable. Using these, however, how can I determine the outer edge of the circle/wheel... to know whether the user is within the actual circle (their touch could still be in the view, but outside the actual wheel portion)?
b) Determine the current angle and how it has changed from the previous angle. By that I mean... I would use the center point of the circle as one point, and the user's current touch as the second point. This gives me my vector. I would also have a baseline angle, likely from the center point to 12 o'clock. I would compare the two vectors (I already have a VectorMath class for this from something else I'm doing) and see that my angle is 0. If the user's touch were at 3 o'clock and I compared it to our baseline angle... I would see the angle is 90 degrees. I would continually calculate the angle, and perhaps every 5 degrees of change... would warrant a change in the control's output (depending on desired sensitivity).
Does this seem like the best way to do this? I think this would be an ideal way, but I am still not sure how to calculate the circle's outer edge, and how to determine whether a user's touch is within it.
You are on the right track. I think approach b) will work.
1. Remember the starting position of the finger at the touchesBegan event.
2. Imagine a line from the finger position to the middle of the button circle.
3. For the touchesMoved event, again imagine a virtual line from the new position to the center of the circle.
4. Using the formula from http://mathworld.wolfram.com/Line-LineAngle.html (or some code) you can determine the angle between the two lines. If it's a negative angle the user is turning the wheel counter-clockwise, otherwise it's clockwise.
To determine whether the touch event was inside the ring, calculate the distance from the center of the circle to the point of touch. It should be between the minimum and the maximum distance (the inner and outer circle radii). Calculating the distance between two points is explained at https://www.mathsisfun.com/algebra/distance-2-points.html. Both checks are sketched in the code below.
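A minimal sketch of both checks in Swift, assuming a hypothetical UIView subclass whose ring geometry is given by innerRadius and outerRadius (all names here are mine, not from the question):

import UIKit

class WheelView: UIView {
    // Hypothetical ring geometry; adjust to match your own drawing code
    var innerRadius: CGFloat = 40
    var outerRadius: CGFloat = 120
    private var previousAngle: CGFloat = 0

    private var wheelCenter: CGPoint {
        CGPoint(x: bounds.midX, y: bounds.midY)
    }

    // True when the touch lies on the ring, i.e. between the two radii
    private func isOnRing(_ point: CGPoint) -> Bool {
        let dx = point.x - wheelCenter.x
        let dy = point.y - wheelCenter.y
        let distance = sqrt(dx * dx + dy * dy)
        return distance >= innerRadius && distance <= outerRadius
    }

    // Angle of the touch around the centre, in radians
    private func angle(of point: CGPoint) -> CGFloat {
        atan2(point.y - wheelCenter.y, point.x - wheelCenter.x)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self), isOnRing(point) else { return }
        previousAngle = angle(of: point)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self), isOnRing(point) else { return }
        let current = angle(of: point)
        var delta = current - previousAngle
        // Wrap into (-pi, pi] so crossing the +/-180 degree line does not jump
        if delta > .pi { delta -= 2 * .pi }
        if delta < -.pi { delta += 2 * .pi }
        previousAngle = current
        // In UIKit's y-down coordinates a positive delta means clockwise rotation
        print("wheel rotated by \(delta) radians")
    }
}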
I think you're almost there, although I'd do something slightly different on your point b.
If you think about it, when you start "spinning" on your iPod, you don't need to start from a precise position, you start spinning from "where you started", therefore I wouldn't set my "baseline angle" at π/2, I'd set my baseline (or 0°) angle at the point the user taps for the first time, and starting from then, I'd count the offset angles, clockwise and counterclockwise.
I don't think there would be much difference between the two approaches in practice, except maybe in some of the calculations you'll do on the angles; it just makes more sense, imho, to start counting from the first input rather than setting a baseline at π/2 and counting the first angle from there.
I am answering in parts.
// Get a position based on the angle
float xPosition = center.x + (radiusX * sinf(angleInRadians)) - (CGRectGetWidth([cell frame]) / 2);
float yPosition = center.y + (radiusY * cosf(angleInRadians)) - (CGRectGetHeight([cell frame]) / 2);
float scale = 0.75f + 0.25f * (cosf(angleInRadians) + 1.0);
next
[cell setTransform:CGAffineTransformScale(CGAffineTransformMakeTranslation(xPosition, yPosition), scale, scale)];
// Tweak alpha using the same system as applied for scale, this
// time with 0.3 the minimum and a semicircle range of 0.5
[cell setAlpha:(0.3f + 0.5f * (cosf(angleInRadians) + 1.0))];
and
- (void)spin:(SpinGestureRecognizer *)recognizer
{
    CGFloat angleInRadians = -[recognizer rotation];
    CGFloat degrees = 180.0 * angleInRadians / M_PI; // Radians to degrees
    [self setCurrentAngle:[self currentAngle] + degrees];
    [self setAngle:[self currentAngle]];
}
Again, check the WheelView.m of PhotoWheel on GitHub.

How to change the anchor point from the top-left corner of a transformation matrix to the bottom-left corner?

Say, I have an image on an HTML page.
I apply an affine transformation to the image using CSS3 matrix function.
It looks like:
img#myimage {
transform: matrix(a, b, c, d, tx, ty);
/* use -webkit-transform, -moz-transform etc. */
}
The origin of an HTML page is the top-left corner and the y-axis is inverted.
I'm trying to put the same image in an environment (cocos2d) where the origin is the bottom-left corner and the y-axis is upright.
To get the same result in the other environment, I need to transform the origin somehow and reflect that in the resulting CGAffineTransform.
It would be great if I can get some help with the matrix math that goes here. (I'm not so good with matrices.)
The following formula works for converting a position from CSS3 to cocos2d:
cocosY = screenHeight - cssY - objectHeight
Explanation:
To make the origin of the cocos2d environment match the CSS3 environment, we would only have to add the screen height to the cocos2d body's y coordinate.
E.g. the screen size is (100,100) and the body is a point object: if you place it at (0,0) in CSS3, it sits at the top-left corner. If we add the screen height to the y coordinate for cocos2d, the object is placed at (0,100), which is the top-left corner for cocos2d as well.
To make the coordinates the same, since the Y axis is inverted, we have to subtract the CSS3 "y" coordinate from the screen height for cocos2d. Suppose we place the same point object from the previous example at (0,10) in CSS3; we would place it at (0, 100 - 10) in cocos2d, which is the same position on the screen.
Since our body will NOT always be a point object, we also have to take care of its anchor point. Suppose the body's height is 20 and we place it at (0,10) in CSS3; it would then be placed at the top-left position and extend downwards, because the Y axis is inverted.
Hence we also have to subtract the body's total height from the screen height and "y" coordinate, which places it at the same position, (0, 100 - 10 - 20), in the cocos2d environment. A small sketch of the whole conversion follows below.
I hope I am correct and clear :)
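As a small illustration of the conversion above, here is a sketch in Swift (the names are illustrative, not from the answer):

// Convert a CSS3 position (top-left origin, y pointing down) to a cocos2d
// position (bottom-left origin, y pointing up); x is unchanged
func cocosPosition(cssX: Double, cssY: Double, objectHeight: Double,
                   screenHeight: Double) -> (x: Double, y: Double) {
    let y = screenHeight - cssY - objectHeight
    return (cssX, y)
}

// The example above: a 20-high body at CSS3 (0, 10) on a 100x100 screen
// lands at (0, 100 - 10 - 20) = (0, 70) in cocos2d
let p = cocosPosition(cssX: 0, cssY: 10, objectHeight: 20, screenHeight: 100)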

Quartz scaling sprite vertical range but not horizontal when go to fullscreen mode / increase window size

I have created a Quartz composition for use in a Mac OS program, as part of my interface.
I am relying on the fact that within the composition, sprite movement (a text bullet point in my case) is limited in both the X and Y planes to a minimum of -1 and a maximum of +1.
When I scale up the window / make my window full screen, I find that the horizontal plane (X axis) remains the same, with -1 being my far left point and +1 being my far right point. However, the vertical plane (Y axis) changes: in full-screen mode it goes from -0.7 to +0.7.
This scaling is screwing with my calculations. Is there any way to get the application to keep the scale at -1 to +1 for both the horizontal and vertical planes? Or is there a way of determining the upper and lower limits?
Appreciate any help/pointers
Quartz Composer viewer Y limits are usually -0.75 to 0.75, but it's only a matter of aspect ratio. The X limits are always -1 to 1; the Y limits depend on them.
You might want to dynamically assign custom width and height variables, capturing the context's bounds size. For example:
double myWidth = context.bounds.size.width;
double myHeight = context.bounds.size.height;
Where "context" is your viewer context object.
If you're working directly with the QC viewer, you should use the Rendering Destination Dimensions patch, which will give you the width and the height. Divide the height by 2, then multiply the result by -1 to get the other side.
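To illustrate the aspect-ratio relationship described above, a small sketch in Swift (my own helper, assuming the X limits really are fixed at -1 to +1):

// In Quartz Composer units X spans -1...+1, so the half-height in units
// follows the aspect ratio of the rendering destination
func yLimits(pixelWidth: Double, pixelHeight: Double) -> (min: Double, max: Double) {
    let yMax = pixelHeight / pixelWidth
    return (min: -yMax, max: yMax)
}

// A 4:3 viewer such as 800x600 gives (-0.75, 0.75), matching the usual limits
let limits = yLimits(pixelWidth: 800, pixelHeight: 600)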