How to find the coordinates of a point?

I have a finite set of points in 2-dimensional Euclidean space (I know the coordinates of these points).
Let's say I pick points A(x1,y1) and B(x2,y2), so I have a line AB. I need to find the coordinates of a point C (actually, I need to find whether point C is in my set of points) such that length AB = length AC and the lines AB and AC form a right angle. (Actually, two points satisfy these conditions: one on each side of the line AB.)
This should be done in constant time.

You basically just want to rotate point B around point A by 90 degrees, right? If so, first translate A to the origin, then rotate, then translate back.
C = [-(y2-y1) + x1,  (x2-x1) + y1]; // rotate +90 deg
C = [ (y2-y1) + x1, -(x2-x1) + y1]; // rotate -90 deg
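As a minimal sketch in plain C (the struct and function names are my own, not from the question), computing both candidates looks like this; checking membership in your point set in constant time would then be a hash-set lookup keyed on the computed coordinates:
#include <stdio.h>

typedef struct { double x, y; } Point;

/* Rotate B around A by +90 and -90 degrees: translate A to the
   origin, rotate the vector (dx, dy), then translate back. */
void perpendicular_points(Point a, Point b, Point *c_plus, Point *c_minus) {
    double dx = b.x - a.x;
    double dy = b.y - a.y;
    c_plus->x  = a.x - dy;   /* +90 deg: (dx, dy) -> (-dy, dx) */
    c_plus->y  = a.y + dx;
    c_minus->x = a.x + dy;   /* -90 deg: (dx, dy) -> (dy, -dx) */
    c_minus->y = a.y - dx;
}

int main(void) {
    Point a = {1.0, 2.0}, b = {4.0, 2.0}, c1, c2;
    perpendicular_points(a, b, &c1, &c2);
    /* With A=(1,2), B=(4,2): C+ = (1,5) and C- = (1,-1),
       both at distance |AB| = 3 from A and perpendicular to AB. */
    printf("C+ = (%g, %g), C- = (%g, %g)\n", c1.x, c1.y, c2.x, c2.y);
    return 0;
}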

Related

How to find the point of collision between an irregular shape (built out of 3 circles) and a line

I'm making a program in which many weird shapes are drawn onto a canvas. Right now I'm trying to implement the last, and possibly hardest, one.
For this particular shape I need a way to find the location (on a 2D canvas) where the line hits the shape. The following image is an example of what I have right now.
The black dots are the points that are known to me (I also have the location of the center of the three open circles and the radius of these circles). Each of the three outer lines needs a line towards the center dot, ending at the point where it hits the circle. This shape can be turned 90, 180 or 270 degrees.
The shape should look something like the following:
If you need any other information, please ask me in the comments. I'm not very good at math so please be gentle, thanks!
If A and B are points forming a line, then you can describe any point on that line using a parameter t:
x = t·Ax + (1−t)·Bx
y = t·Ay + (1−t)·By
0 ≤ t ≤ 1
You can also describe the circle with center M and radius r as
(x − Mx)^2 + (y − My)^2 = r^2
So take the x and y from the equations of the line, and plug them into the equation of the circle. You obtain a quadratic equation in t. Its two solutions describe the two points of intersection between the line and circle. In your example, only one of them lies on the line segment, i.e. satisfies 0 ≤ t ≤ 1. The other describes a point on the extension of the segment past its endpoint. Take the correct value for t back to the equations of the line, and you obtain the x and y coordinates of the point of intersection.
If you don't know up front which circle you want to intersect with a given line, then intersect all three and choose the most appropriate point afterwards. Probably that is the point closest to the outside starting point of the line segment. The same goes in cases where both points of intersection lie on the segment.
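A minimal C sketch of that substitution (all names here are my own illustration): plugging the parametric line into the circle equation gives a quadratic a·t^2 + b·t + c = 0 whose roots are the intersection parameters.
#include <math.h>

/* Intersect the segment P(t) = t*A + (1-t)*B, 0 <= t <= 1, with the
   circle of center M and radius r. Assumes A != B. Returns the number
   of on-segment intersections (0, 1 or 2); their t values go in ts. */
int segment_circle(double ax, double ay, double bx, double by,
                   double mx, double my, double r, double ts[2]) {
    double dx = ax - bx, dy = ay - by;   /* P(t) = B + t*(A - B) */
    double fx = bx - mx, fy = by - my;   /* B relative to the center */
    double a = dx * dx + dy * dy;
    double b = 2.0 * (dx * fx + dy * fy);
    double c = fx * fx + fy * fy - r * r;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return 0;            /* line misses the circle */
    double sq = sqrt(disc);
    double roots[2] = { (-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a) };
    int n = 0;
    for (int i = 0; i < 2; i++)
        if (roots[i] >= 0.0 && roots[i] <= 1.0)
            ts[n++] = roots[i];          /* keep only on-segment hits */
    return n;
}
For each returned t, the point of intersection is (t·Ax + (1−t)·Bx, t·Ay + (1−t)·By).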

How can I find the points on a line in Objective-C?

Consider a line from point A (x,y) to point B (p,q).
The method CGContextMoveToPoint(context, x, y); moves to the point (x,y), and the method CGContextAddLineToPoint(context, p, q); will draw the line from point A to B.
My question is: can I find all the points that the line covers?
Actually, I need to know the exact point which is x points before the end point B.
Refer to this image:
The line above is just for reference. The line may be at any angle. I need the 5th point, which is on the line before point B.
Thank you
You should not think in terms of pixels. Coordinates are floating point values. The geometric point at (x,y) does not need to be a pixel at all. In fact you should think of pixels as being rectangles in your coordinate system.
This means that "x pixels before the end point" does not really make sense. If a pixel is a rectangle, "x pixels" is a different quantity if you move horizontally than if you move vertically. And if you move in any other direction it's even harder to decide what it means.
Depending on what you are trying to do, it may or may not be easy to translate your concepts into pixel terms. It's probably better, however, to do the opposite: stop thinking in terms of pixels and translate everything you are currently expressing in pixel terms into non-pixel terms.
Also remember that exactly what a pixel is depends on the system, and you may or may not, in general, be able to query the system about it (especially once you take into consideration things like retina displays and resolution-independent functionality).
Edit:
I see you edited your question, but "points" is not more precise than "pixels".
However I'll try to give you a workable solution. At least it will be workable once you reformulate your problem in the right terms.
Your question, correctly formulated, should be:
Given two points A and B in a cartesian space and a distance delta, what are the coordinates of a point C such that C is on the line passing through A and B and the length of the segment BC is delta?
Here's a solution to that question:
// Assuming point A has coordinates (x,y) and point B has coordinates (p,q).
// Also assuming the distance from B to C is delta. We want to find the
// coordinates of C. (Requires math.h for fabs, sqrt and pow.)
// I'll rename the coordinates for legibility.
double ax = x;
double ay = y;
double bx = p;
double by = q;
// this is what we want to find
double cx, cy;
// we need to establish a limit to acceptable computational precision
double epsilon = 0.000001;
if ( fabs(bx - ax) < epsilon && fabs(by - ay) < epsilon ) {
    // the two points are too close to compute a reliable result
    // this is an error condition. handle the error here (throw
    // an exception or whatever).
} else {
    // compute the vector from B to A and its length
    double bax = ax - bx;
    double bay = ay - by;
    double balen = sqrt( pow(bax, 2) + pow(bay, 2) );
    // compute the vector from B to C (same direction as the vector from
    // B to A but with length delta)
    double bcx = bax * delta / balen;
    double bcy = bay * delta / balen;
    // and now add that vector to the vector OB (with O being the origin)
    // to find the solution
    cx = bx + bcx;
    cy = by + bcy;
}
You need to make sure that points A and B are not too close, or the computations will be imprecise and the result will differ from what you expect. That's what epsilon is for (you may or may not want to change its value).
Ideally a suitable value for epsilon is not related to the smallest number representable in a double but to the level of precision that a double gives you for values in the order of magnitude of the coordinates.
I have hardcoded epsilon, which is a common way to define its value, as you generally know the order of magnitude of your data in advance; but there are also 'adaptive' techniques to compute an epsilon from the actual values of the arguments (the coordinates of A and B and the delta, in this case).
Also note that I have coded for legibility (the compiler should be able to optimize anyway). Feel free to recode if you wish.
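As a sketch of one such adaptive technique (my own illustration, not part of the answer above): scale the machine epsilon by the magnitude of the values actually involved.
#include <float.h>
#include <math.h>

/* One possible adaptive epsilon: scale DBL_EPSILON (the relative
   spacing of doubles near 1.0) by the largest input magnitude,
   with an arbitrary safety factor. */
double adaptive_epsilon(double ax, double ay, double bx, double by) {
    double mag = fmax(fmax(fabs(ax), fabs(ay)),
                      fmax(fabs(bx), fabs(by)));
    return 64.0 * DBL_EPSILON * fmax(mag, 1.0);
}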
It's not so hard: translate your segment into a mathematical line equation, translate "x pixels" into the radius of a circle with center at B, and solve the system to find where they intersect. You get two solutions; take the point that is closer to A.
This is the code you can use
float distanceFromPx2toP3 = 1300.0;
float mag = sqrt(pow((px2.x - px1.x), 2) + pow((px2.y - px1.y), 2));
// step back from px2 toward px1 by the requested distance
float P3x = px2.x - distanceFromPx2toP3 * (px2.x - px1.x) / mag;
float P3y = px2.y - distanceFromPx2toP3 * (px2.y - px1.y) / mag;
CGPoint P3 = CGPointMake(P3x, P3y);
Alternatively, you can follow this link; it gives a detailed description:
How to find a third point using two other points and their angle.
You can find as many points on the line as you want this way.

Find the x and y coordinates of a certain point of a moving object

If you understand Objective-C very well, then just read the last two sentences. The rest of this just summarizes them:
So I have two sprites, the lower arm and the upper arm. I set the anchor points to ccp(0.5f,0.0f). So let's say that the following dashes represent the lower arm; the anchor point is the dash in parentheses: (-)------ . So the object is rotating around this point (the CGPoint at the moment is ccp(100,55)).
What I need is: if the lower arm is rotating around the dash in parentheses, (-)-----o, the circle represents the point I want. I'm basically connecting the two arms and trying to make the movement look nice. Both arms are 17 pixels long (which means that if the lower arm is pointing straight up, the CGPoint of the circle is ccp(100,72), and if the arm is pointing straight down, the circle is ccp(100,38)).
What equation would I use to set the position of the upper arm equal to the position of the lower arm's rotating CGPoint (represented as a circle in the second paragraph of this question)? Like _,/ : the _ represents the lower arm, the comma represents the point I want, and the / represents the upper arm.
So the lower and upper arm are both 17 pixels long and the anchor point for both is (0.5f,0.0f); how do I find the point opposite the anchor point for the lower arm?
x = 100 + 17 * cos(θ)
y = 55 + 17 * sin(θ)
You need to find what the angle of rotation is. I'm not that familiar with Objective-C, but if you're using a rotation function there's most likely an angle component somewhere you can reference.
From there you can use trigonometry to find the components of your x and y change.
For x it will be: (anchor x) + (length of arm) * cosine(angle of rotation)
And for y it will be: (anchor y) + (length of arm) * sine(angle of rotation)
Also, make sure you know whether the angle is in radians or degrees, you might have to convert based on the sine/cosine functions.
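A small C sketch of those two formulas (the anchor (100, 55) and arm length 17 come from the question; converting degrees to radians is the step that is easy to forget):
#include <math.h>

typedef struct { double x, y; } Point2D;

/* Tip of an arm of the given length, rotated by angleDegrees
   around the anchor point. */
Point2D arm_tip(Point2D anchor, double length, double angleDegrees) {
    double theta = angleDegrees * M_PI / 180.0;  /* degrees -> radians */
    Point2D tip = { anchor.x + length * cos(theta),
                    anchor.y + length * sin(theta) };
    return tip;
}

/* With anchor (100, 55) and length 17:
   arm_tip(anchor, 17, 90)  -> (100, 72)  (arm pointing straight up)
   arm_tip(anchor, 17, -90) -> (100, 38)  (arm pointing straight down) */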

Extract transform and rotation matrices from homography?

I have 2 consecutive images from a camera and I want to estimate the change in camera pose:
I calculate the optical flow:
Const MAXFEATURES As Integer = 100
imgA = New Image(Of [Structure].Bgr, Byte)("pic1.bmp")
imgB = New Image(Of [Structure].Bgr, Byte)("pic2.bmp")
grayA = imgA.Convert(Of Gray, Byte)()
grayB = imgB.Convert(Of Gray, Byte)()
imagesize = cvGetSize(grayA)
pyrBufferA = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
(imagesize.Width + 8, imagesize.Height / 3)
pyrBufferB = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
(imagesize.Width + 8, imagesize.Height / 3)
features = MAXFEATURES
featuresA = grayA.GoodFeaturesToTrack(features, 0.01, 25, 3)
grayA.FindCornerSubPix(featuresA, New System.Drawing.Size(10, 10),
New System.Drawing.Size(-1, -1),
New Emgu.CV.Structure.MCvTermCriteria(20, 0.03))
features = featuresA(0).Length
Emgu.CV.OpticalFlow.PyrLK(grayA, grayB, pyrBufferA, pyrBufferB, _
featuresA(0), New Size(25, 25), 3, _
New Emgu.CV.Structure.MCvTermCriteria(20, 0.03D),
flags, featuresB(0), status, errors)
pointsA = New Matrix(Of Single)(features, 2)
pointsB = New Matrix(Of Single)(features, 2)
For i As Integer = 0 To features - 1
pointsA(i, 0) = featuresA(0)(i).X
pointsA(i, 1) = featuresA(0)(i).Y
pointsB(i, 0) = featuresB(0)(i).X
pointsB(i, 1) = featuresB(0)(i).Y
Next
Dim Homography As New Matrix(Of Double)(3, 3)
cvFindHomography(pointsA.Ptr, pointsB.Ptr, Homography, HOMOGRAPHY_METHOD.RANSAC, 1, 0)
and it looks right, the camera moved leftwards and upwards:
Now I want to find out how much the camera moved and rotated. If I declare my camera position and what it's looking at:
' Create camera location at origin and lookat (straight ahead, 1 in the Z axis)
Location = New Matrix(Of Double)(2, 3)
location(0, 0) = 0 ' X location
location(0, 1) = 0 ' Y location
location(0, 2) = 0 ' Z location
location(1, 0) = 0 ' X lookat
location(1, 1) = 0 ' Y lookat
location(1, 2) = 1 ' Z lookat
How do I calculate the new position and lookat?
If I'm doing this all wrong or if there's a better method, any suggestions would be very welcome, thanks!
For pure camera rotation, R = A^-1·H·A. To prove this, consider the plane-to-image homographies H1 = A and H2 = A·R, where A is the camera intrinsic matrix. Then the image-to-image homography is H12 = H2·H1^-1 = A·R·A^-1, from which you can recover R = A^-1·H12·A.
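As a minimal C sketch of that pure-rotation case (the intrinsic parameters fx, fy, cx, cy are placeholders that would come from your camera calibration, and zero skew is assumed):
/* C = X * Y for 3x3 row-major matrices */
static void mat3_mul(const double X[3][3], const double Y[3][3],
                     double C[3][3]) {
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                C[i][j] += X[i][k] * Y[k][j];
        }
}

/* R = A^-1 * H * A, valid only if the camera purely rotated.
   A = [fx 0 cx; 0 fy cy; 0 0 1], whose inverse has a closed form.
   Note that the H returned by cvFindHomography is only defined up
   to scale; it is common to normalize it so H[2][2] == 1 first. */
void rotation_from_homography(const double H[3][3],
                              double fx, double fy, double cx, double cy,
                              double R[3][3]) {
    double A[3][3]    = {{fx, 0, cx}, {0, fy, cy}, {0, 0, 1}};
    double Ainv[3][3] = {{1/fx, 0, -cx/fx}, {0, 1/fy, -cy/fy}, {0, 0, 1}};
    double tmp[3][3];
    mat3_mul(Ainv, H, tmp);  /* tmp = A^-1 * H */
    mat3_mul(tmp, A, R);     /* R   = A^-1 * H * A */
}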
Camera translation is harder to estimate. If the camera translates, you have to find the fundamental matrix first (not a homography): x'^T·F·x = 0, and then convert it into an essential matrix, E = A^T·F·A. Then you can decompose E into rotation and translation, E = [t]x·R, where [t]x is the skew-symmetric cross-product matrix of the translation vector. The decomposition is not obvious; see this.
The rotation you get will be exact, while the translation vector can be found only up to scale. Intuitively, this scaling means that from the two images alone you cannot really say whether the objects are close and small or far away and large. To disambiguate we may use objects of familiar size, a known distance between two points, etc.
Finally, note that the human visual system has a similar problem: though we "know" the distance between our eyes, when they are converged on an object the disparity is always zero, and from disparity alone we cannot say what the distance is. Human vision relies on triangulation from the eyes' vergence signal to figure out absolute distance.
Well, what you're looking at is, in simple terms, a Pythagorean theorem problem: a^2 + b^2 = c^2. However, when it comes to camera-based applications, things are not very easy to determine accurately. You have found half of the detail you need for "a", but finding "b" or "c" is much harder.
The Short Answer
Basically, it can't be done with a single camera. But it can be done with two cameras.
The Long-Winded Answer (thought I'd explain in more depth, no pun intended)
I'll try to explain. Say we select two points within our image and move the camera left. We know the distance from the camera of each point: B1 is 20mm and B2 is 40mm. Now let's assume that we process the image and our measurements are that A1 is (0,2) and A2 is (0,4); these relate to B1 and B2 respectively. Now A1 and A2 are not measurements; they are pixels of movement.
What we now have to do is multiply the change in A1 and A2 by a calculated constant, which will be the real-world distance at B1 and B2. NOTE: each of these is different according to the distance B*. This all relates to the angle of view, more commonly called the field of view in photography, at different distances. You can calculate the constant accurately if you know the size of each pixel on the camera's CCD and the f-number of the lens inside the camera.
I expect this isn't the case, so at different distances you have to place an object whose length you know and see how many pixels it takes up. Close up you can use a ruler to make things easier. With these measurements you take the data and form a curve with a line of best fit, where the X-axis is the distance of the object and the Y-axis is the pixel-to-distance ratio that you must multiply your movement by.
So how do we apply this curve? Well, it's guesswork. In theory, the larger the measurement of movement A*, the closer the object is to the camera. In our example, the ratios for A1 and A2 are, say, 5mm and 3mm per pixel respectively, so we would now know that point B1 has moved 10mm (2 × 5mm) and B2 has moved 12mm (4 × 3mm). But let's face it: we will never know B, and we will never be able to tell whether a movement of 20 pixels is an object close up not moving far or an object far away moving a much greater distance. This is why things like the Xbox Kinect use additional sensors to get depth information that can be tied to the objects within the image.
What you're attempting could be done with two cameras: since the distance between the cameras is known, the movement can be calculated much more accurately (effectively without using a depth sensor). The maths behind this is extremely complex and I would suggest looking up some journal papers on the subject. If you would like me to explain the theory, I can attempt to.
All my experience comes from designing high-speed video acquisition and image processing for my PhD, so trust me: it can't be done with one camera, sorry. I hope some of this helps.
Cheers
Chris
[EDIT]
I was going to add a comment but this is easier due to the bulk of information:
Since it is the Kinect, I will assume you have some relevant depth information associated with each point; if not, you will need to figure out how to get this.
The equation you will need to start of with is for the Field of View (FOV):
o/d = i/f
Where:
f is the focal length of the lens, usually given in mm (e.g. 18, 28, 30, 50 are standard examples)
d is the object's distance from the lens, gathered from the Kinect data
o is the object dimension (or "field of view" perpendicular to and bisected by the optical axis).
i is the image dimension (or "field stop" perpendicular to and bisected by the optical axis).
We need to calculate i, since o is our unknown. For i (which is a diagonal measurement) we need the size of a pixel on the CCD. This will be in micrometres (µm) and you will need to find this information out; for now we will take it as 14µm, which is standard for a midrange area-scan camera.
So first we need to work out the horizontal dimension of i (ih), which is the number of pixels across the width of the sensor multiplied by the size of a CCD pixel (we will use 640 × 320):
so: ih = 640 * 14µm = 8960µm
= 8960/1000 = 8.96mm
Now we need the vertical dimension of i (iv), same process but with the height:
so: iv = (320 * 14µm) / 1000 = 4.48mm
Now i is found by the Pythagorean theorem, a^2 + b^2 = c^2:
so: i = sqrt(ih^2 + iv^2)
= 10.02 mm
Now we will assume we have a 28mm lens. Again, this exact value will have to be found out. So our equation, rearranged to give us o, is:
o = (i * d) / f
Remember o will be diagonal (we will assume the object or point is 50mm away):
o = (10.02mm * 50mm) / 28mm
= 17.89mm
Now we need to work out the horizontal dimension of o (oh) and the vertical dimension of o (ov), as these will give us the distance per pixel that the object has moved. Now, as the FOV is proportional to the CCD size (i is directly proportional to o), we will work out a ratio k:
k = i/o
= 10.02 / 17.89
= 0.56
so:
o horizontal dimension (oh):
oh = ih / k
= 8.96mm / 0.56 = 16mm per pixel
o vertical dimension (ov):
ov = iv / k
= 4.48mm / 0.56 = 8mm per pixel
Now that we have the constants we require, let's use them in an example. If our object at 50mm moves from position (0,0) to (2,4), then the measurements in real life are:
(2*16mm , 4*8mm) = (32mm,32mm)
Again, the Pythagorean theorem: a^2 + b^2 = c^2
Total distance = sqrt(32^2 + 32^2)
= 45.25mm
Complicated, I know, but once you have this in a program it's easier. So for every point you will have to repeat at least half the process, as d, and therefore o, will change for every point you're examining.
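A compact C sketch of the steps above, following the example's per-pixel interpretation (the 14µm pixel size, 640×320 resolution and 28mm lens are the placeholder values from the worked example; substitute your camera's real figures):
#include <math.h>
#include <stdio.h>

/* Real-world distance moved by a point, per the FOV method above.
   pixel_um: CCD pixel size in micrometres
   res_w, res_h: sensor resolution in pixels
   f_mm: lens focal length in mm
   d_mm: distance of the point from the lens (e.g. from Kinect depth)
   dx_px, dy_px: observed movement in pixels */
double real_movement_mm(double pixel_um, int res_w, int res_h,
                        double f_mm, double d_mm,
                        double dx_px, double dy_px) {
    double ih = res_w * pixel_um / 1000.0;  /* horizontal CCD size, mm */
    double iv = res_h * pixel_um / 1000.0;  /* vertical CCD size, mm   */
    double i  = sqrt(ih * ih + iv * iv);    /* diagonal CCD size, mm   */
    double o  = i * d_mm / f_mm;            /* diagonal FOV at d, mm   */
    double k  = i / o;                      /* CCD-to-FOV ratio        */
    double mx = dx_px * (ih / k);           /* real horizontal movement */
    double my = dy_px * (iv / k);           /* real vertical movement   */
    return sqrt(mx * mx + my * my);         /* total distance moved     */
}

int main(void) {
    /* Worked example: 14um pixels, 640x320, 28mm lens, point 50mm
       away, movement of (2,4) pixels -> about 45.25mm. */
    printf("%.2f mm\n", real_movement_mm(14, 640, 320, 28, 50, 2, 4));
    return 0;
}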
Hope this gets you on your way,
Cheers
Chris

kinect object measuring

I am currently trying to figure out a way to calculate the size of a given object with the Kinect,
since I have the following data:
angular field of view of the lens
distance
and width in pixels at an 800×600 resolution
I believe this calculation should be possible. Does anyone have the math skills to give me a little help?
With some trigonometry, it should be possible to approximate.
If you draw a right triangle ABC, with the camera at vertex A and the object along the far edge BC, where the right angle is at C, then the height of the object is the length of leg BC. The distance to the pixel might be the distance of leg AC or AB; the Kinect sensor specifications will determine which. If you get the distance to the center of a pixel, it will be AC; if you have distances to pixel corners, the distance will be AB.
With A representing the angle at the camera that the pixel takes up, d the distance along the hypotenuse, and y the length of the far leg (edge BC):
sin(A) = y / d
y = d sin(A)
y is the length of the pixel projected onto the object plane. You calculate it by multiplying the sine of the angle by the distance to the object.
Here I confess I do not know the API of the Kinect and what level of detail it provides. You say you have the angle of the field of view. You might assume each pixel of your 800×600 pixel grid takes up an equal angle of the camera's field of view. If you do, then you can break that field of view up into equal pieces to measure the linear size of your object in each pixel.
You also mentioned that you have the distance to the object. I was assuming that you have a distance map for each pixel of the 800×600 grid. If this is incorrect, some calculations can be done to approximate a distance grid for the pixels covering the object of interest, if you make some assumptions about the object being measured.
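Putting that together as a small C sketch (the 57-degree horizontal field of view is purely an assumed example value; substitute your sensor's actual figure):
#include <math.h>
#include <stdio.h>

/* Approximate object size, assuming each pixel subtends an equal
   angle of the horizontal field of view, as suggested above. */
double object_width_mm(double fov_deg,  /* horizontal field of view    */
                       int res_w,       /* horizontal resolution (800) */
                       int object_px,   /* object width in pixels      */
                       double d_mm) {   /* distance to the object      */
    double fov_rad = fov_deg * M_PI / 180.0;
    double angle = fov_rad * object_px / res_w;  /* angle the object subtends */
    return d_mm * sin(angle);                    /* y = d * sin(A)            */
}

int main(void) {
    /* Example: 57 deg FOV, 800 px wide, object spans 100 px at 1500mm. */
    printf("%.1f mm\n", object_width_mm(57.0, 800, 100, 1500.0));
    return 0;
}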