How to calculate the interior angles total given 4 points in order? - objective-c

I am using Objective-C, and I would like to calculate the total of the interior angles, given 4 points in order. Does Objective-C have this kind of maths library to do so? Thanks.

It is 180*(n-2), where n is the number of sides (= number of vertices) of the polygon; for your 4 points that is 180*2 = 360 degrees.
Reference is here.

Objective-C uses the standard C maths library, math.h. This has the trig and sqrt functions you would be likely to need.

I have just recently solved this problem in Java; there must be a good library for this. However, if you are looking to calculate the angle between three points, then you simply need to use the dot product of the two vectors, which would be produced thus: for
x_1, y_1, x_2, y_2, x_3, y_3
define
a_x = x_2 - x_1
a_y = y_2 - y_1
b_x = x_3 - x_2
b_y = y_3 - y_2
Then
dot_product = a_x * b_x + a_y * b_y
This allows you to calculate the value of cos_theta via the relation
cos_theta = dot_product / sqrt((a_x * a_x + a_y * a_y) * (b_x * b_x + b_y * b_y))
When you calculate the inverse cos of cos_theta you will get the smaller of the two possible solutions, i.e. the values which are less than or equal to 180 degrees or PI radians.
I am not sure what you mean by the sum of the interior angles, but if you sum the values derived from the above algorithm I think you will get what you want.
If you need to get the "angles on the left" or "the angles on the right" you will need to add a cross product to this algorithm.
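There is no dedicated polygon routine in math.h, but the dot-product recipe above is easy to code up directly, and the acos and sqrt calls translate straight to C/Objective-C. A minimal sketch, in Python rather than Objective-C purely to illustrate the arithmetic (the function names and sample points are mine; it assumes a convex polygon, where the interior angle at a vertex is pi minus the angle between the two edges that meet there):

import math

def angle_between_edges(p1, p2, p3):
    # Edge vectors a = p2 - p1 and b = p3 - p2, exactly as in the answer above.
    ax, ay = p2[0] - p1[0], p2[1] - p1[1]
    bx, by = p3[0] - p2[0], p3[1] - p2[1]
    dot = ax * bx + ay * by
    cos_theta = dot / math.sqrt((ax * ax + ay * ay) * (bx * bx + by * by))
    return math.acos(cos_theta)          # radians, always the smaller solution (<= pi)

def interior_angle_sum(points):
    # For a convex polygon the interior angle at a vertex is pi minus the
    # turning angle between the two edges meeting there.
    n = len(points)
    total = 0.0
    for i in range(n):
        theta = angle_between_edges(points[i - 1], points[i], points[(i + 1) % n])
        total += math.pi - theta
    return math.degrees(total)

# Convex quadrilateral: expect 180 * (4 - 2) = 360 degrees.
print(interior_angle_sum([(0, 0), (4, 0), (5, 3), (1, 4)]))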

Related

How do I determine the distance between v and PQ when v =[2,1,2] and PQ = [1,0,3]? P = [0,0,0] Q = [1,0,3]

What I have tried already: d = |v||PQ|sin("Theta")
Now, I need to determine what theta is, so I set up a position on a makeshift graph, the graph I made was on the xy plane only as the z plane complicates things needlessly for finding theta. So, I ended up with an acute angle, and if the angle is acute, then I have to find theta which according to dot product facts is greater than 0.
I do not have access to theta, so I used the same principles from the dot product: u * v = |u||v|cos("theta"), but in this case, u and v are PQ and v. A vector is a vector, right?
So now I have theta = acos((v*PQ)/(|v||PQ|)).
With that I get cos(theta) = (4sqrt(10))/15, so theta = 32.5125173162 degrees.
So, now that I have theta, I plug it into my distance formula |v||PQ|sin(32.5125173162)
3*sqrt(10)*sin(32.5125173162) = 5.0990195136
or for the sake of simplicity, 5.1
I however want to know if this question is correct.
If it is NOT correct, what can I do to correct it? At what points did I use incorrect information?
This is not a question with a definitive answer in the back of the book, it's a question on the side of a page that said: "try this!"
There are a couple of problems with this question.
From the context it looks like you mean for both v and PQ to be vectors. The "distance" between two vectors is an awkward (not well defined) question because vectors are not position bound.
You are using the cross product formula and I have no idea why:
|AxB| = |A||B|Sin(theta)
I think what you are actually trying to do is calculate the distance between the terminal points of the vectors, (2, 1, 2) and (1, 0, 3). Just use the Pythagorean Theorem (extended to 3D) for this.
d = sqrt( (x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2 )
d = sqrt( (2 - 1)^2 + (1 - 0)^2 + (2 - 3)^2 )
d = sqrt( 1^2 + 1^2 + (-1)^2 )
d = sqrt(3)
Edit:
If what you need really is the magnitude of the cross product, |AxB| then just find the cross product (using the determinant) and then calculate the magnitude of the result. There is no need for the formula you were using.
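For reference, here is a small Python sketch of both quantities mentioned above (the helper names are mine). Note that |v x PQ| comes out to sqrt(26) ≈ 5.099, which is the number computed in the question:

import math

def distance(p, q):
    # Pythagorean Theorem extended to 3D.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def cross_magnitude(a, b):
    # |A x B| from the determinant form of the cross product.
    cx = a[1] * b[2] - a[2] * b[1]
    cy = a[2] * b[0] - a[0] * b[2]
    cz = a[0] * b[1] - a[1] * b[0]
    return math.sqrt(cx * cx + cy * cy + cz * cz)

v = (2, 1, 2)
PQ = (1, 0, 3)
print(distance(v, PQ))         # sqrt(3), distance between the terminal points
print(cross_magnitude(v, PQ))  # |v x PQ|, if that is what is actually wanted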

Transform a vector to another frame of reference

I have a green vehicle which will shortly collide with a blue object (which is 200 away from the cube).
It has a Kinect depth camera D at [-100,0,200] which sees the corner of the cube (grey sphere).
The measured depth is 464 at 6.34° in the X plane and 12.53° in the Y plane.
I want to calculate the position of the corner as it would appear if there was a camera F at [150,0,0], which would see this:
in other words transform the red vector into the yellow vector. I know that this is achieved with a transformation matrix but I can't find out how to compute the matrix from the D-F vector [250,0,-200] or how to use it; my high-school maths dates back 40 years.
math.se has a similar question but it doesn't cover my problem, and I can't find anything on robotics.se either.
I realise that I should show some code that I've tried, but I don't know where to start. I would be very grateful if somebody could help me to solve this.
ROS provides the tf library which allows you to transform between frames. You can simply set a static transform between the pose of your camera and the pose of your desired location. Then, you can get the pose of any point detected by your camera in the reference frame of your desired point on your robot. ROS tf will do everything you need and everything I explain below.
The longer answer is that you need to construct a transformation tree. First, compute the static transformation between your two poses. A pose is a 7-dimensional transformation including a translation and orientation. This is best represented as a quaternion and a 3D vector.
Now, for all poses in the reference frame of your kinect, you must transform them to your desired reference frame. Let's call this frame base_link and your camera frame camera_link.
I'm going to go ahead and decide that base_link is the parent of camera_link. Technically these transformations are bidirectional, but because you may need a transformation tree, and because ROS cares about this, you'll want to decide who is the parent.
To convert rotation from camera_link to base_link, you need to compute the rotational difference. This can be done by multiplying the quaternion of base_link's orientation by the conjugate of camera_link's orientation. Here's a super quick Python example:
def rotDiff(self, q1: Quaternion, q2: Quaternion) -> Quaternion:
    """Finds the quaternion that, when applied to q1, will rotate an element to q2"""
    conjugate = Quaternion(q2.qx*-1, q2.qy*-1, q2.qz*-1, q2.qw)
    return self.rotAdd(q1, conjugate)

def rotAdd(self, q1: Quaternion, q2: Quaternion) -> Quaternion:
    """Finds the quaternion that is the equivalent to the rotation caused by both input quaternions applied sequentially."""
    w1 = q1.qw
    w2 = q2.qw
    x1 = q1.qx
    x2 = q2.qx
    y1 = q1.qy
    y2 = q2.qy
    z1 = q1.qz
    z2 = q2.qz
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
    z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
    return Quaternion(x, y, z, w)
Next, you need to add the vectors. The naive approach is to simply add the vectors, but you need to account for rotation when calculating these. What you really need is a coordinate transformation. The position of camera_link relative to base_link is some 3D vector. Based on your drawing, this is [-250, 0, 200]. Next, we need to reproject the vectors to your points of interest into the rotational frame of base_link. I.e., all the points your camera sees at 12.53 degrees that appear at the z = 0 plane to your camera are actually on a 12.53 degree plane relative to base_link and you need to find out what their coordinates are relative to your camera as if your camera was in the same orientation as base_link.
For details on the ensuing math, read this PDF (particularly starting at page 9).
To accomplish this, we need to find your vector's components in base_link's reference frame. I find that it's easiest to read if you convert the quaternion to a rotation matrix, but there is an equivalent direct approach.
To convert a quaternion to a rotation matrix:
def Quat2Mat(self, q: Quaternion) -> rotMat:
    m00 = 1 - 2 * q.qy**2 - 2 * q.qz**2
    m01 = 2 * q.qx * q.qy - 2 * q.qz * q.qw
    m02 = 2 * q.qx * q.qz + 2 * q.qy * q.qw
    m10 = 2 * q.qx * q.qy + 2 * q.qz * q.qw
    m11 = 1 - 2 * q.qx**2 - 2 * q.qz**2
    m12 = 2 * q.qy * q.qz - 2 * q.qx * q.qw
    m20 = 2 * q.qx * q.qz - 2 * q.qy * q.qw
    m21 = 2 * q.qy * q.qz + 2 * q.qx * q.qw
    m22 = 1 - 2 * q.qx**2 - 2 * q.qy**2
    result = [[m00, m01, m02], [m10, m11, m12], [m20, m21, m22]]
    return result
Now that your rotation is represented as a rotation matrix, it's time to do the final calculation.
Following the MIT lecture notes from my link above, I'll arbitrarily name the vector to your point of interest from the camera A.
Find the rotation matrix that corresponds with the quaternion that represents the rotation between base_link and camera_link and simply perform a matrix multiplication. If you're in Python, you can use numpy to do this, but in the interest of being explicit, here is the long form of the multiplication:
def coordTransform(self, M: RotMat, A: Vector) -> Vector:
    """
    M is my rotation matrix that represents the rotation between my frames
    A is the vector of interest in the frame I'm rotating from
    APrime is A, but in the frame I'm rotating to.
    """
    APrime = []
    for i in range(3):
        # Each output component is the dot product of row i of M with A.
        APrime.append(A[0] * M[i][0] + A[1] * M[i][1] + A[2] * M[i][2])
    return APrime
Now, the vectors from camera_link are represented as if camera_link and base_link share an orientation.
Now you may simply add the static translation between camera_link and base_link (or subtract base_link -> camera_link) and the resulting vector will be your point's new translation.
Putting it all together, you can now gather the translation and orientation of every point your camera detects relative to any arbitrary reference frame to gather pose data relevant to your application.
You can put all of this together into a function simply called tf() and stack these transformations up and down a complex transformation tree. Simply add all the transformations up to a common ancestor and subtract all the transformations down to your target node in order to find the transformation of your data between any two arbitrary related frames.
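Putting the pieces above together, here is a rough end-to-end sketch (standalone functions instead of the class methods shown above, since that keeps it self-contained; the camera orientation quaternion is a made-up example because the real one isn't given, while the [-250, 0, 200] translation is the one from the drawing):

import math

def quat_to_mat(qx, qy, qz, qw):
    # Same conversion as Quat2Mat above, written as a standalone function.
    return [
        [1 - 2*qy*qy - 2*qz*qz, 2*qx*qy - 2*qz*qw,     2*qx*qz + 2*qy*qw],
        [2*qx*qy + 2*qz*qw,     1 - 2*qx*qx - 2*qz*qz, 2*qy*qz - 2*qx*qw],
        [2*qx*qz - 2*qy*qw,     2*qy*qz + 2*qx*qw,     1 - 2*qx*qx - 2*qy*qy],
    ]

def camera_to_base(point_in_camera, camera_quat, camera_translation):
    # Rotate the point into base_link's orientation, then add the static
    # translation of camera_link expressed in base_link.
    M = quat_to_mat(*camera_quat)
    rotated = [sum(M[i][j] * point_in_camera[j] for j in range(3)) for i in range(3)]
    return [rotated[i] + camera_translation[i] for i in range(3)]

# Hypothetical example: camera rotated 90 degrees about z relative to base_link,
# sitting at [-250, 0, 200] in base_link's frame.
s, c = math.sin(math.pi / 4), math.cos(math.pi / 4)
print(camera_to_base([100, 0, 50], (0, 0, s, c), [-250, 0, 200]))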
Edit: Hendy pointed out that it's unclear what Quaternion() class I refer to here.
For the purposes of this answer, this is all that's necessary:
class Quaternion():
    def __init__(self, qx: float, qy: float, qz: float, qw: float):
        self.qx = qx
        self.qy = qy
        self.qz = qz
        self.qw = qw
But if you want to make this class super handy, you can define __mul__(self, other: Quaternion) and __rmul__(self, other: Quaternion) to perform quaternion multiplication (order matters, so make sure to do both!). conjugate(self), toEuler(self), toRotMat(self), normalize(self) may also be handy additions.
Note that due to quirks in Python's typing, the above other: Quaternion is only for clarity. You'll need a longer-form if type(other) != Quaternion: raise TypeError('You can only multiply quaternions with other quaternions') error handling block to make that into valid Python :)
The following definitions are not necessary for this answer, but they may prove useful to the reader.
import numpy as np

def __mul__(self, other):
    if type(other) != Quaternion:
        print("Quaternion multiplication only works with other quats")
        raise TypeError
    r1 = self.qw
    r2 = other.qw
    v1 = [self.qx, self.qy, self.qz]
    v2 = [other.qx, other.qy, other.qz]
    rPrime = r1*r2 - np.dot(v1, v2)
    vPrimeA = np.multiply(r1, v2)
    vPrimeB = np.multiply(r2, v1)
    vPrimeC = np.cross(v1, v2)
    vPrimeD = np.add(vPrimeA, vPrimeB)
    vPrime = np.add(vPrimeD, vPrimeC)
    x = vPrime[0]
    y = vPrime[1]
    z = vPrime[2]
    w = rPrime
    return Quaternion(x, y, z, w)

def __rmul__(self, other):
    if type(other) != Quaternion:
        print("Quaternion multiplication only works with other quats")
        raise TypeError
    r1 = other.qw
    r2 = self.qw
    v1 = [other.qx, other.qy, other.qz]
    v2 = [self.qx, self.qy, self.qz]
    rPrime = r1*r2 - np.dot(v1, v2)
    vPrimeA = np.multiply(r1, v2)
    vPrimeB = np.multiply(r2, v1)
    vPrimeC = np.cross(v1, v2)
    vPrimeD = np.add(vPrimeA, vPrimeB)
    vPrime = np.add(vPrimeD, vPrimeC)
    x = vPrime[0]
    y = vPrime[1]
    z = vPrime[2]
    w = rPrime
    return Quaternion(x, y, z, w)

def conjugate(self):
    return Quaternion(self.qx*-1, self.qy*-1, self.qz*-1, self.qw)

Determine angle of a straight line in 3D space

I have a straight line in space with a start and end point (x,y,z) and I am attempting to get the angle between this vector and the plane defined by z=0. I am using VB.NET.
Here is a picture of the line in my 3d environment (the line I'm interested in is circled in red):
It is set to an angle of 70 degrees right now.
You need 2 rays to define an angle.
If you want the angle between a vector and a plane, it is defined for any vector in that plane. However, there is only one minimal value for that, which is the angle between a vector and its projection onto said plane.
Therefore, that minimal value is the one we take when we speak of the angle between a vector and a plane.
This value is also π/2 minus the angle between your vector and the vector that is normal to the plane. You can read more about it all on this site.
With v your vector (thus v.x = end.x - start.x and idem for y and z), n the normal to the plane and a the angle you are looking for, we know from the definition of a scalar product that:
<v,n> = ||v|| * ||n|| * cos(π/2 - a)
We know cos(π/2 - a) = sin(a), and the normal to the z=0 plane is simply the vector n = (0, 0, 1). Thus both the scalar product, v.x * n.x + v.y * n.y + v.z * n.z, and the norm of n, ||n|| = 1, can be simplified a lot. We get the following expression:
sin(a) = v.z / ||v||
Thus finally, the formula, taking the inverse of the sine and writing out the norm of v explicitly:
a = Asin(v.z / sqrt( v.x*v.x + v.y*v.y + v.z*v.z ))
According to this documentation the Asin function exists in your System.Math class. It does, however, return the value in radians:
Return Value
Type: System.Double
An angle, θ, measured in radians, such that -π/2 ≤ θ ≤ π/2
-or-
NaN if d < -1 or d > 1 or d equals NaN.
Luckily the same System.Math class contains the value of π so that you can do the conversion:
a *= 180 / Math.PI
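The whole calculation in one place, sketched here in Python purely to show the arithmetic (in VB.NET the same steps use Math.Asin, Math.Sqrt and Math.PI from System.Math; the function name and sample points are mine):

import math

def angle_to_z0_plane(start, end):
    # Direction vector of the line.
    vx, vy, vz = end[0] - start[0], end[1] - start[1], end[2] - start[2]
    # sin(a) = v.z / ||v||, as derived above.
    a = math.asin(vz / math.sqrt(vx*vx + vy*vy + vz*vz))
    return a * 180 / math.pi   # convert from radians to degrees

# A line rising at 45 degrees out of the z = 0 plane.
print(angle_to_z0_plane((0, 0, 0), (1, 0, 1)))   # 45.0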

Plot a third point past the two previously plotted points. Cocos2d

Ok so let me try to explain this the best way that I can.
I have two points plotted, 'A' and 'B', and I am trying to plot a third point 'C' so that it is past point 'B' but along the same slope. I have the angle of the line, and I would post some code but I really have no idea where to begin.
Any help would be awesome!
Just a little code that I do have:
CGPoint vector = ccpSub(touchedPoint, fixedPoint);
CGFloat rotateAngle = -ccpToAngle(vector);
Assuming that by this you mean you need a 3rd point C added such that all the points are collinear, all you need to do is calculate the vector that takes you from A to B, and then generate a new point by adding multiples of this vector to the point B. Choose the multiple based on the distance you want C to be from B.
As an example, say A = (2,2), B = (4,3). Then the vector from A to B is given by (2,1).
All you need to do then is work out how far your new point is from B and add a multiple K*(2,1) to your point B, where K is chosen to meet the requirements of your distance.
I am assuming you are in 2D, but the same method would apply in higher dimensions.
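A minimal sketch of that vector approach, in Python just to show the arithmetic (the helper name and the 5-unit distance are mine; Cocos2d's ccpSub, ccpNormalize, ccpMult and ccpAdd helpers do the same operations on CGPoints):

import math

def point_past(a, b, distance):
    # Vector from A to B, scaled so that C lands `distance` units beyond B.
    vx, vy = b[0] - a[0], b[1] - a[1]
    k = distance / math.hypot(vx, vy)
    return (b[0] + k * vx, b[1] + k * vy)

# A = (2, 2), B = (4, 3): a point C five units past B along the same line.
print(point_past((2, 2), (4, 3), 5.0))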
My math is rusty, but the linear equation is generally represented as y=m*x+b, where m is the slope, and b is the y-intercept. You can get m, the slope, by taking the difference of the y values and dividing that by the difference in the x values, e.g., if A = (2,2) and B = (4,3), then m is (3-2)/(4-2) or 0.5. Then, you can solve the linear equation for b, the y-intercept, i.e. b=y-m*x and then plug in either of the data points, e.g. if we plug in the x and y values for point A, you get b = 2 - 0.5 * 2 = 1. Now knowing the slope, m (0.5 in this example), and the y-intercept, b (1 in this example), you can calculate the y for any x value using y=m*x+b, in this case y=0.5*x+1.
So, if touchedPoint and fixedPoint are CGPoint, you can calculate the slope and y-intercept from fixedPoint and touchedPoint like so:
double m = (fixedPoint.y - touchedPoint.y) / (fixedPoint.x - touchedPoint.x);
double b = fixedPoint.y - m * fixedPoint.x;
Now, you don't say how you want to determine where this third point, C, is. But if you, for example, knew the x coordinate for this new point C, you can calculate the y coordinate that falls on the same line as follows:
CGPoint pointC;
pointC.x = 400; // or set this to whatever you want
pointC.y = m * pointC.x + b;

Is there an iterative way to calculate radii along a scanline?

I am processing a series of points which all have the same Y value, but different X values. I go through the points by incrementing X by one. For example, I might have Y = 50 and X is the integers from -30 to 30. Part of my algorithm involves finding the distance to the origin from each point and then doing further processing.
After profiling, I've found that the sqrt call in the distance calculation is taking a significant amount of my time. Is there an iterative way to calculate the distance?
In other words:
I want to efficiently calculate r[n] = sqrt(x[n]*x[n] + y*y). I can save information from the previous iteration. Each iteration changes by incrementing x, so x[n] = x[n-1] + 1. I cannot use sqrt or trig functions because they are too slow except at the beginning of each scanline.
I can use approximations as long as they are good enough (less than 0.1% error) and the errors introduced are smooth (I can't bin to a pre-calculated table of approximations).
Additional information:
x and y are always integers between -150 and 150
I'm going to try a couple ideas out tomorrow and mark the best answer based on which is fastest.
Results
I did some timings
Distance formula: 16 ms / iteration
Pete's interpolating solution: 8 ms / iteration
wrang-wrang pre-calculation solution: 8 ms / iteration
I was hoping the test would decide between the two, because I like both answers. I'm going to go with Pete's because it uses less memory.
Just to get a feel for it, for your range y = 50, x = 0 gives r = 50 and y = 50, x = +/- 30 gives r ~= 58.3. You want an approximation good to +/- 0.1%, or +/- 0.05 absolute. That's a lot lower accuracy than most library sqrts provide.
Two approximate approaches - you calculate r based on interpolating from the previous value, or use a few terms of a suitable series.
Interpolating from previous r
r = (x^2 + y^2)^(1/2)
dr/dx = (1/2) * 2x * (x^2 + y^2)^(-1/2) = x/r
double r = 50;
for ( int x = 0; x <= 30; ++x ) {
    double r_true = Math.sqrt ( 50*50 + x*x );
    System.out.printf ( "x: %d r_approx: %f r_true: %f error: %f%%\n", x, r, r_true, 100 * Math.abs ( r_true - r ) / r );
    r = r + ( x + 0.5 ) / r;
}
Gives:
x: 0 r_approx: 50.000000 r_true: 50.000000 error: 0.000000%
x: 1 r_approx: 50.010000 r_true: 50.009999 error: 0.000002%
....
x: 29 r_approx: 57.825065 r_true: 57.801384 error: 0.040953%
x: 30 r_approx: 58.335225 r_true: 58.309519 error: 0.044065%
which seems to meet the requirement of 0.1% error, so I didn't bother coding the next one, as it would require quite a bit more calculation steps.
Truncated Series
The Taylor series for sqrt(1 + x) for x near zero is
sqrt(1 + x) = 1 + (1/2) x - (1/8) x^2 + ... + (-1/2)^(n+1) x^n
Using r = y sqrt(1 + (x/y)^2), you're looking for a term t = (-1/2)^(n+1) 0.36^n with magnitude less than 0.001: log(0.002) > n log(0.18), or n > 3.6, so taking terms to x^4 should be OK.
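Pete didn't code this variant, but a sketch of what it might look like (my own, not his; it keeps terms of the series out to u^4 where u = (x/y)^2, which is more than the estimate above strictly requires):

import math

def r_series(x, y):
    # Truncated Taylor series for y * sqrt(1 + u), with u = (x/y)^2.
    u = (x / y) ** 2
    return y * (1 + u/2 - u**2/8 + u**3/16 - 5*u**4/128)

# Worst relative error over the question's range (y = 50, x = 0..30).
worst = max(abs(r_series(x, 50) - math.hypot(x, 50)) / math.hypot(x, 50) for x in range(31))
print(worst)   # comfortably below the 0.1% target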
Y = 10000
Y2 = Y*Y
for x = 0..Y2 do
    D[x] = sqrt(Y2 + x*x)

norm(x, y) =
    if (y == 0) x
    else if (x > y) norm(y, x)
    else {
        s = Y/y
        D[round(x*s)]/s
    }
If your coordinates are smooth, then the idea can be extended with linear interpolation. For more precision, increase Y.
The idea is that s*(x,y) is on the line y=Y, which you've precomputed distances for. Get the distance, then divide it by s.
I assume you really do need the distance and not its square.
You may also be able to find a general sqrt implementation that sacrifices some accuracy for speed, but I have a hard time imagining that beating what the FPU can do.
By linear interpolation, I mean to change D[round(x)] to:
f=floor(x)
a=x-f
D[f]*(1-a)+D[f+1]*a
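A runnable Python version of that idea, with the linear-interpolation refinement folded in (names follow the pseudocode above; the abs handling for negative x is my addition):

import math

Y = 10000
D = [math.sqrt(Y*Y + x*x) for x in range(Y + 1)]   # distances along the line y = Y

def norm(x, y):
    x, y = abs(x), abs(y)
    if y == 0:
        return x
    if x > y:
        x, y = y, x
    s = Y / y                  # s*(x, y) lies on the precomputed line y = Y
    t = x * s
    f = int(t)                 # linear interpolation between adjacent table entries
    a = t - f
    d = D[f] * (1 - a) + D[min(f + 1, Y)] * a
    return d / s               # undo the scaling

print(norm(30, 50), math.hypot(30, 50))   # the two should agree to well under 0.1%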
This doesn't really answer your question, but may help...
The first questions I would ask would be:
"do I need the sqrt at all?".
"If not, how can I reduce the number of sqrts?"
then yours: "Can I replace the remaining sqrts with a clever calculation?"
So I'd start with:
Do you need the exact radius, or would radius-squared be acceptable? There are fast approximations to sqrt, but probably not accurate enough for your spec.
Can you process the image using mirrored quadrants or eighths? By processing all pixels at the same radius value in a batch, you can reduce the number of calculations by 8x.
Can you precalculate the radius values? You only need a table that is a quarter (or possibly an eighth) of the size of the image you are processing, and the table would only need to be precalculated once and then re-used for many runs of the algorithm.
So clever maths may not be the fastest solution.
Well, there's always trying to optimize your sqrt; the fastest one I've seen is the old Carmack Quake 3 fast inverse square root:
http://betterexplained.com/articles/understanding-quakes-fast-inverse-square-root/
That said, since sqrt is non-linear, you're not going to be able to do simple linear interpolation along your line to get your result. The best idea is to use a table lookup since that will give you blazing fast access to the data. And, since you appear to be iterating by whole integers, a table lookup should be exceedingly accurate.
Well, you can mirror around x=0 to start with (you need only compute n>=0, and then dupe those results to the corresponding n<0). After that, I'd take a look at using the derivative on sqrt(a^2+b^2) (or the corresponding sin) to take advantage of the constant dx.
If that's not accurate enough, may I point out that this is a pretty good job for SIMD, which will provide you with a reciprocal square root op on both SSE and VMX (and shader model 2).
This is sort of related to a HAKMEM item:
ITEM 149 (Minsky): CIRCLE ALGORITHM
Here is an elegant way to draw almost circles on a point-plotting display:
NEW X = OLD X - epsilon * OLD Y
NEW Y = OLD Y + epsilon * NEW(!) X
This makes a very round ellipse centered at the origin with its size determined by the initial point. epsilon determines the angular velocity of the circulating point, and slightly affects the eccentricity. If epsilon is a power of 2, then we don't even need multiplication, let alone square roots, sines, and cosines! The "circle" will be perfectly stable because the points soon become periodic.
The circle algorithm was invented by mistake when I tried to save one register in a display hack! Ben Gurley had an amazing display hack using only about six or seven instructions, and it was a great wonder. But it was basically line-oriented. It occurred to me that it would be exciting to have curves, and I was trying to get a curve display hack with minimal instructions.
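For the curious, the HAKMEM recurrence is tiny in code; a quick Python sketch (epsilon, the starting point, and the iteration count are arbitrary choices of mine):

# Minsky's "almost circle": note that the freshly updated x is used to update y.
eps = 1.0 / 64                 # a power of two, so in fixed point only shifts are needed
x, y = 100.0, 0.0
points = []
for _ in range(1000):
    x = x - eps * y
    y = y + eps * x            # NEW x, not the old one; this keeps the orbit stable
    points.append((x, y))
print(points[:3])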