Converting horizontal forces to elliptical - physics

I'm trying to express the force equations at point C, but not along the usual vertical and horizontal X or Y axes; I want to express them as a function of the angle "alpha".
I think that
Mg ==> Mg * cos(Alpha)
is correct, but I can't understand how to express the normal force:
N = ?

The projection of the Mg vector onto the radius is Mg * cos(Beta), where Beta is the angle between Mg and the radius. Given that Alpha + Beta + 90 = 180, we get Beta = 90 - Alpha. So the answer is Mg * cos(90 - Alpha) = Mg * sin(Alpha), directed from C toward the center O.
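As a quick numerical check of that last identity (a sketch in Python; the mass, g, and the 30-degree angle are arbitrary values chosen for illustration):

import math

M, g = 2.0, 9.81          # arbitrary mass (kg) and gravitational acceleration (m/s^2)
alpha = math.radians(30)  # arbitrary angle

# projection of the weight onto the radius: Mg*cos(90 deg - alpha) ...
radial = M * g * math.cos(math.pi / 2 - alpha)
# ... which is the same as Mg*sin(alpha)
assert math.isclose(radial, M * g * math.sin(alpha))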


Can I use the Postgres functions to find points inside a rotating rectangle of fixed size?

I'm using Postgres 9.5 and I've just installed PostGIS for some extended functions. I have a table with (x,y) points and I want to find the rectangle that fits the maximum number of points. The constraint is that the rectangle side lengths are fixed. So far I'm counting how many points are in the box without rotation. My points are centered around the origin, (0,0).
SELECT Sum(CASE
             WHEN x > -5
                  AND x < 5
                  AND y > -10
                  AND y < 10 THEN 1
             ELSE 0
           END) AS inside_points,
       Count(1) AS total_points
FROM   track_t;
This query gives me the count of points inside a rectangle centered on the origin (0,0) with side lengths x = 10 and y = 20.
From here I would create a helper table of rotated rectangle corner points (angle, x1, y1, x2, y2), then cross join to my data, and count over the points per angle, while GROUP BY angle. Then I can select which angle gives me the most points inside the rectangle.
But this seems a little old fashioned, and perhaps non-performant. Additionally, counting points inside a rotated rectangle is not a trivial calculation.
Are there more efficient and elegant ways, perhaps using Postgres Geometric Datatypes or PostGIS Box2D, to rotate a rectangle with fixed side lengths, and then to count the number of points inside? The geometric functions look good, but they seem to provide minimum bounding boxes and not the other way around.
In addition to Postgresql, I'm using a Python framework that could be used in case SQL can't make this work.
Update: One thing I tried is to use Geometric Types, specifically BOX:
SELECT deg, Box(Point(-5, -10), Point(5, 10)) * Point(1, Radians(deg))
FROM Generate_series(0, 360, 90) AS deg
Unfortunately, rotation by a Point doesn't work for Polygons. (Note also that * multiplies points as complex numbers, so a true rotation multiplier would be Point(Cos(Radians(deg)), Sin(Radians(deg))) rather than Point(1, Radians(deg)).)
I ended up generating the rectangle vertices, rotating those vertices, and then comparing the area of the rectangle (constant) with the total area of the four triangles made by joining the test point to each pair of adjacent corners.
This technique is based on the parsimonious answer:
Make triangles. Suppose abcd is the rectangle and x is the point; then if area(abx) + area(bcx) + area(cdx) + area(dax) equals area(abcd), the point is inside it.
The rectangles are defined by
A bottom left (-x/2,-y/2)
B top left (-x/2,+y/2)
C top right (+x/2,+y/2)
D bottom right (+x/2,-y/2)
This code then checks whether point (qx,qy) is inside a rectangle of width x = 10 and height y = 20, rotated around the origin (0,0) by angles from 0 to 180 degrees in 10-degree steps.
Here's the code. It takes 9 minutes to check 750k points, so there is definite room for improvement. Additionally, it can be parallelized once I upgrade to 9.6.
with t as (select 10*0.5 as x, 20*0.5 as y, 17.0 as qx, -3.0 as qy)
select
z.angle
-- ABC area
--,abs(0.5*(z.ax*(z.by-z.cy)+z.bx*(z.cy-z.ay)+z.cx*(z.ay-z.by)))
-- CDA area
--,abs(0.5*(z.cx*(z.dy-z.ay)+z.dx*(z.ay-z.cy)+z.ax*(z.cy-z.dy)))
-- ABCD area
,abs(0.5*(z.ax*(z.by-z.cy)+z.bx*(z.cy-z.ay)+z.cx*(z.ay-z.by))) + abs(0.5*(z.cx*(z.dy-z.ay)+z.dx*(z.ay-z.cy)+z.ax*(z.cy-z.dy))) as abcd_area
-- ABQ area
--,abs(0.5*(z.ax*(z.by-z.qy)+z.bx*(z.qy-z.ay)+z.qx*(z.ay-z.by)))
-- BCQ area
--,abs(0.5*(z.bx*(z.cy-z.qy)+z.cx*(z.qy-z.by)+z.qx*(z.by-z.cy)))
-- CDQ area
--,abs(0.5*(z.cx*(z.dy-z.qy)+z.dx*(z.qy-z.cy)+z.qx*(z.cy-z.dy)))
-- DAQ area
--,abs(0.5*(z.dx*(z.ay-z.qy)+z.ax*(z.qy-z.dy)+z.qx*(z.dy-z.ay)))
-- total area of triangles with question point (ABQ + BCQ + CDQ + DAQ)
,abs(0.5*(z.ax*(z.by-z.qy)+z.bx*(z.qy-z.ay)+z.qx*(z.ay-z.by)))
+ abs(0.5*(z.bx*(z.cy-z.qy)+z.cx*(z.qy-z.by)+z.qx*(z.by-z.cy)))
+ abs(0.5*(z.cx*(z.dy-z.qy)+z.dx*(z.qy-z.cy)+z.qx*(z.cy-z.dy)))
+ abs(0.5*(z.dx*(z.ay-z.qy)+z.ax*(z.qy-z.dy)+z.qx*(z.dy-z.ay))) as point_area
from
(
SELECT
a.id as angle
-- bottom left (A)
,(-t.x) * cos(radians(a.id)) - (-t.y) * sin(radians(a.id)) as ax
,(-t.x) * sin(radians(a.id)) + (-t.y) * cos(radians(a.id)) as ay
--top left (B)
,(-t.x) * cos(radians(a.id)) - (t.y) * sin(radians(a.id)) as bx
,(-t.x) * sin(radians(a.id)) + (t.y) * cos(radians(a.id)) as by
--top right (C)
,(t.x) * cos(radians(a.id)) - (t.y) * sin(radians(a.id)) as cx
,(t.x) * sin(radians(a.id)) + (t.y) * cos(radians(a.id)) as cy
--bottom right (D)
,(t.x) * cos(radians(a.id)) - (-t.y) * sin(radians(a.id)) as dx
,(t.x) * sin(radians(a.id)) + (-t.y) * cos(radians(a.id)) as dy
-- point to check (Q)
,t.qx as qx
,t.qy as qy
FROM generate_series(0,180,10) AS a(id), t
) z
;
The results are then:
angle;abcd_area;point_area
0;200;440
10;200;424.42
20;200;398.97
30;200;375.43
40;200;354.14
50;200;322.1
60;200;280.26
70;200;270.01
80;200;272.63
90;200;270
100;200;280.34
110;200;322.16
120;200;354.19
130;200;375.45
140;200;399.02
150;200;424.45
160;200;440.02
170;200;445.25
180;200;440
A point is inside the rotated rectangle exactly when point_area equals abcd_area (up to floating-point tolerance). Here point_area exceeds abcd_area at every angle, so the test point (17, -3) is never inside: it lies about 17.26 units from the origin, while the rectangle's corners reach only sqrt(5^2 + 10^2) ≈ 11.18 units.
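As an aside, an equivalent and much cheaper test is to rotate each point by -angle instead of rotating the rectangle: a point lies inside the rotated rectangle exactly when its inverse-rotated image lies inside the axis-aligned box. A minimal sketch of that check in Python (the function name is mine; the same expressions translate directly into SQL against x, y, and the angle series):

import math

def inside_rotated_rect(px, py, half_x, half_y, angle_deg):
    """True if (px, py) lies inside the rectangle [-half_x, half_x] x [-half_y, half_y]
    rotated about the origin by angle_deg."""
    t = math.radians(angle_deg)
    # rotating the point by -angle is equivalent to rotating the rectangle by +angle
    rx = px * math.cos(t) + py * math.sin(t)
    ry = -px * math.sin(t) + py * math.cos(t)
    return abs(rx) <= half_x and abs(ry) <= half_y

print(inside_rotated_rect(17.0, -3.0, 5, 10, 120))  # False: this point is outside at every angle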

Ground longitude/latitude under a satellite (cartesian coordinates) at a specific epoch

The script I want to develop uses a satellite's cartesian coordinates (XYZ) and, in conjunction with the range, elevation and azimuth from a ground location, takes the satellite's orbital information and gets the ground longitude/latitude under that satellite at a given time.
One step further from this: imagine the signal from a satellite piercing the atmosphere at exactly 300 km above sea level. At this particular point, where the altitude is 300 km, I need to calculate the ground longitude/latitude.
In the pyephem module there appears to be a method (ephem.readtle) that can already achieve this, but for TLE (two line element) data only. I'd like to use a satellite's cartesian coordinates instead. Is there such a method already out there? Or perhaps somebody with experience in this domain can point me in the right direction.
A similar question already exists referring to ECEF from Azimuth, Elevation, Range and Observer Lat,Lon,Alt, but it's not the same problem.
Here's what I have developed already:
- satellite cartesian coordinates, XYZ
- azimuth, elevation and range of satellite from ground station
- ground station coordinates in lat, long, height above sea level
Here's what I need:
- ground longitude/latitude under a satellite at a specific epoch, and in particular where the piercing point in the atmosphere (the point which the signal from the satellite pierces the atmosphere) is 300km altitude.
I found what I was looking for via this:
import numpy as np

def ionospheric_pierce_point(self, dphi, dlambda, ele, azi):
    Re = 6378136.3  # Earth semi-major axis in meters
    h = cs.SHELL_HEIGHT * 10**3  # height of the pierce point in meters, where maximum electron density is assumed (cs is the author's constants module)
    coeff = Re / (Re + h)
    lat_rx = dphi
    long_rx = dlambda
    # degrees-to-radians conversions
    ele_rad = np.deg2rad(ele)
    azi_rad = np.deg2rad(azi)
    lat_rx_rad = np.deg2rad(lat_rx)
    long_rx_rad = np.deg2rad(long_rx)
    # Earth central angle between the user and the Earth projection of the pierce point, in radians
    psi_pp = (np.pi / 2) - ele_rad - np.arcsin(coeff * np.cos(ele_rad))
    psi_pp_deg = np.rad2deg(psi_pp)
    # latitude of the pierce point, in radians
    lat_pp = np.arcsin(np.sin(lat_rx_rad) * np.cos(psi_pp) +
                       np.cos(lat_rx_rad) * np.sin(psi_pp) * np.cos(azi_rad))
    # longitude of the pierce point, with the near-polar cases handled separately
    if (lat_rx > 70 and ((np.tan(psi_pp) * np.cos(azi_rad)) > np.tan((np.pi / 2) - lat_rx_rad))) or \
       (lat_rx < -70 and ((np.tan(psi_pp) * np.cos(azi_rad + np.pi)) > np.tan((np.pi / 2) + lat_rx_rad))):
        long_pp = long_rx_rad + np.pi - np.arcsin((np.sin(psi_pp) * np.sin(azi_rad)) / np.cos(lat_pp))
    else:
        long_pp = long_rx_rad + np.arcsin((np.sin(psi_pp) * np.sin(azi_rad)) / np.cos(lat_pp))
    lat_pp_deg = np.rad2deg(lat_pp)
    long_pp_deg = np.rad2deg(long_pp)
    return lat_pp_deg, long_pp_deg
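For reference, a minimal usage sketch (the receiver position, elevation, and azimuth below are made-up values, and cs.SHELL_HEIGHT is assumed to be 300 to match the 300 km piercing altitude; self is unused inside the function, so None suffices for a standalone call):

class _Constants:
    SHELL_HEIGHT = 300  # km, assumed shell height for this example

cs = _Constants()

# hypothetical receiver at 45 N, 9 E observing a satellite at 30 deg elevation, 120 deg azimuth
lat_pp, long_pp = ionospheric_pierce_point(None, 45.0, 9.0, 30.0, 120.0)
print(lat_pp, long_pp)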

Transform a vector to another frame of reference

I have a green vehicle which will shortly collide with a blue object (which is 200 units away from the cube).
It has a Kinect depth camera D at [-100,0,200] which sees the corner of the cube (grey sphere)
The measured depth is 464 at 6.34° in the X plane and 12.53° in the Y plane.
I want to calculate the position of the corner as it would appear to a camera F at [150,0,0]; in other words, transform the red vector into the yellow vector. I know that this is achieved with a transformation matrix, but I can't find out how to compute the matrix from the D-F vector [250,0,-200] or how to use it; my high-school maths dates back 40 years.
math.se has a similar question, but it doesn't cover my problem, and I can't find anything on robotics.se either.
I realise that I should show some code that I've tried, but I don't know where to start. I would be very grateful if somebody could help me to solve this.
ROS provides the tf library, which allows you to transform between frames. You can simply set a static transform between the pose of your camera and the pose of your desired location. Then you can get the pose of any point detected by your camera in the reference frame of your desired point on your robot. ROS tf will do everything you need and everything I explain below.
The longer answer is that you need to construct a transformation tree. First, compute the static transformation between your two poses. A pose is a 7-dimensional quantity comprising a translation and an orientation, best represented as a 3D vector and a quaternion.
Now, for all poses in the reference frame of your kinect, you must transform them to your desired reference frame. Let's call this frame base_link and your camera frame camera_link.
I'm going to go ahead and decide that base_link is the parent of camera_link. Technically these transformations are bidirectional, but because you may need a transformation tree, and because ROS cares about this, you'll want to decide who is the parent.
To convert rotation from camera_link to base_link, you need to compute the rotational difference. This can be done by multiplying the quaternion of base_link's orientation by the conjugate of camera_link's orientation. Here's a super quick Python example:
def rotDiff(self, q1: Quaternion, q2: Quaternion) -> Quaternion:
    """Finds the quaternion that, when applied to q1, will rotate an element to q2"""
    conjugate = Quaternion(q2.qx * -1, q2.qy * -1, q2.qz * -1, q2.qw)
    return self.rotAdd(q1, conjugate)

def rotAdd(self, q1: Quaternion, q2: Quaternion) -> Quaternion:
    """Finds the quaternion equivalent to the rotation caused by both input quaternions applied sequentially."""
    w1 = q1.qw
    w2 = q2.qw
    x1 = q1.qx
    x2 = q2.qx
    y1 = q1.qy
    y2 = q2.qy
    z1 = q1.qz
    z2 = q2.qz
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
    z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
    return Quaternion(x, y, z, w)
Next, you need to add the vectors. The naive approach is to simply add them, but you need to account for rotation when doing so. What you really need is a coordinate transformation. The position of camera_link relative to base_link is some 3D vector; based on your drawing, this is [-250, 0, 200]. Next, we need to reproject the vectors to your points of interest into the rotational frame of base_link. That is, all the points your camera sees at 12.53 degrees that appear on the z = 0 plane to your camera are actually on a 12.53-degree plane relative to base_link, and you need to find their coordinates relative to your camera as if the camera were in the same orientation as base_link.
For details on the ensuing math, read this PDF (particularly starting at page 9).
To accomplish this, we need to find your vector's components in base_link's reference frame. I find that it's easiest to read if you convert the quaternion to a rotation matrix, but there is an equivalent direct approach.
To convert a quaternion to a rotation matrix:
def Quat2Mat(self, q: Quaternion) -> RotMat:
    m00 = 1 - 2 * q.qy**2 - 2 * q.qz**2
    m01 = 2 * q.qx * q.qy - 2 * q.qz * q.qw
    m02 = 2 * q.qx * q.qz + 2 * q.qy * q.qw
    m10 = 2 * q.qx * q.qy + 2 * q.qz * q.qw
    m11 = 1 - 2 * q.qx**2 - 2 * q.qz**2
    m12 = 2 * q.qy * q.qz - 2 * q.qx * q.qw
    m20 = 2 * q.qx * q.qz - 2 * q.qy * q.qw
    m21 = 2 * q.qy * q.qz + 2 * q.qx * q.qw
    m22 = 1 - 2 * q.qx**2 - 2 * q.qy**2
    result = [[m00, m01, m02], [m10, m11, m12], [m20, m21, m22]]
    return result
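As a quick sanity check (a sketch, using the Quaternion class given at the end of this answer; self is unused in Quat2Mat, so None suffices for a standalone call): the identity quaternion (0, 0, 0, 1) should produce the identity matrix.

q_identity = Quaternion(0.0, 0.0, 0.0, 1.0)
M = Quat2Mat(None, q_identity)
assert M == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]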
Now that your rotation is represented as a rotation matrix, it's time to do the final calculation.
Following the MIT lecture notes from my link above, I'll arbitrarily name the vector to your point of interest from the camera A.
Find the rotation matrix that corresponds with the quaternion that represents the rotation between base_link and camera_link and simply perform a matrix multiplication. If you're in Python, you can use numpy to do this, but in the interest of being explicit, here is the long form of the multiplication:
def coordTransform(self, M: RotMat, A: Vector) -> Vector:
    """
    M is the rotation matrix that represents the rotation between the frames.
    A is the vector of interest in the frame we're rotating from.
    APrime is A, but in the frame we're rotating to.
    """
    APrime = []
    for i in range(3):
        # dot product of row i of M with the full vector A
        APrime.append(A[0] * M[i][0] + A[1] * M[i][1] + A[2] * M[i][2])
    return APrime
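If numpy is available, the same matrix-vector product collapses to a one-liner (a sketch; M is the 3x3 nested list and A the length-3 vector from above):

import numpy as np

APrime = (np.array(M) @ np.array(A)).tolist()  # identical to the loop in coordTransform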
Now, the vectors from camera_link are represented as if camera_link and base_link share an orientation.
Now you may simply add the static translation between camera_link and base_link (or subtract base_link -> camera_link) and the resulting vector will be your point's new translation.
Putting it all together, you can now gather the translation and orientation of every point your camera detects relative to any arbitrary reference frame to gather pose data relevant to your application.
You can put all of this together into a function simply called tf() and stack these transformations up and down a complex transformation tree. Simply add all the transformations up to a common ancestor and subtract all the transformations down to your target node in order to find the transformation of your data between any two arbitrary related frames.
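For a single parent-child pair, such a tf() might look like the following sketch (the function and parameter names here are mine, not ROS's):

def tf(self, A: Vector, parentRotation: Quaternion, parentTranslation: Vector) -> Vector:
    """Express vector A, measured in the child frame (e.g. camera_link),
    in the parent frame (e.g. base_link)."""
    M = self.Quat2Mat(parentRotation)    # orientation of the child frame relative to the parent
    rotated = self.coordTransform(M, A)  # re-project A into the parent's orientation
    # then add the static translation between the frames
    return [rotated[i] + parentTranslation[i] for i in range(3)]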
Edit: Hendy pointed out that it's unclear what Quaternion() class I refer to here.
For the purposes of this answer, this is all that's necessary:
class Quaternion():
    def __init__(self, qx: float, qy: float, qz: float, qw: float):
        self.qx = qx
        self.qy = qy
        self.qz = qz
        self.qw = qw
But if you want to make this class super handy, you can define __mul__(self, other: Quaternion) and __rmul__(self, other: Quaternion) to perform quaternion multiplication (order matters, so make sure to do both!). conjugate(self), toEuler(self), toRotMat(self), and normalize(self) may also be handy additions.
Note that due to quirks in Python's typing, the above other: Quaternion is only for clarity. You'll need a longer-form if type(other) != Quaternion: raise TypeError('You can only multiply quaternions with other quaternions') error-handling block to make that into valid Python :)
The following definitions are not necessary for this answer, but they may prove useful to the reader.
import numpy as np

def __mul__(self, other):
    if type(other) != Quaternion:
        print("Quaternion multiplication only works with other quats")
        raise TypeError
    r1 = self.qw
    r2 = other.qw
    v1 = [self.qx, self.qy, self.qz]
    v2 = [other.qx, other.qy, other.qz]
    rPrime = r1 * r2 - np.dot(v1, v2)
    vPrimeA = np.multiply(r1, v2)
    vPrimeB = np.multiply(r2, v1)
    vPrimeC = np.cross(v1, v2)
    vPrimeD = np.add(vPrimeA, vPrimeB)
    vPrime = np.add(vPrimeD, vPrimeC)
    x = vPrime[0]
    y = vPrime[1]
    z = vPrime[2]
    w = rPrime
    return Quaternion(x, y, z, w)

def __rmul__(self, other):
    if type(other) != Quaternion:
        print("Quaternion multiplication only works with other quats")
        raise TypeError
    r1 = other.qw
    r2 = self.qw
    v1 = [other.qx, other.qy, other.qz]
    v2 = [self.qx, self.qy, self.qz]
    rPrime = r1 * r2 - np.dot(v1, v2)
    vPrimeA = np.multiply(r1, v2)
    vPrimeB = np.multiply(r2, v1)
    vPrimeC = np.cross(v1, v2)
    vPrimeD = np.add(vPrimeA, vPrimeB)
    vPrime = np.add(vPrimeD, vPrimeC)
    x = vPrime[0]
    y = vPrime[1]
    z = vPrime[2]
    w = rPrime
    return Quaternion(x, y, z, w)

def conjugate(self):
    return Quaternion(self.qx * -1, self.qy * -1, self.qz * -1, self.qw)

Determine angle of a straight line in 3D space

I have a straight line in space with a start and end point (x,y,z), and I am attempting to get the angle between this vector and the plane defined by z=0. I am using VB.NET.
Here is a picture of the line in my 3D environment (the line I'm interested in is circled in red):
It is set to an angle of 70 degrees right now.
You need 2 rays to define an angle.
If you want the angle between a vector and a plane, it is defined for any vector in that plane. However, there is only one minimal value of it, which is the angle between the vector and its projection onto said plane.
Therefore, that minimal value is the one we take when we speak of the angle between a vector and a plane.
This value is also π/2 minus the angle between your vector and the vector that is normal to the plane. You can read more about it all on this site.
With v your vector (thus v.x = end.x - start.x, and similarly for y and z), n the normal to the plane, and a the angle you are looking for, we know from the definition of the scalar product that:
<v,n> = ||v|| * ||n|| * cos(π/2 - a)
We know cos(π/2 - a) = sin(a), and the normal to the z=0 plane is simply the vector n = (0, 0, 1). Thus both the scalar product, v.x * n.x + v.y * n.y + v.z * n.z, and the norm of n, ||n|| = 1, can be simplified a lot. We get the following expression:
sin(a) = v.z / ||v||
Thus finally, by taking the inverse sine and writing out the norm of v, the formula is:
a = Asin(v.z / sqrt( v.x*v.x + v.y*v.y + v.z*v.z ))
According to this documentation the Asin function exists in your System.Math class. It does, however, return the value in radians:
Return Value
Type: System.Double
An angle, θ, measured in radians, such that -π/2 ≤ θ ≤ π/2
-or-
NaN if d < -1 or d > 1 or d equals NaN.
Luckily the same System.Math class contains the value of π so that you can do the conversion:
a *= 180 / Math.PI
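For illustration, here is the whole computation as a short Python sketch (the endpoints are made-up values; in VB.NET the corresponding calls are Math.Asin, Math.Sqrt, and Math.PI):

import math

def line_plane_angle_deg(start, end):
    """Angle between the line from start to end and the z = 0 plane, in degrees."""
    vx, vy, vz = (end[i] - start[i] for i in range(3))
    return math.degrees(math.asin(vz / math.sqrt(vx * vx + vy * vy + vz * vz)))

print(line_plane_angle_deg((0, 0, 0), (1, 0, 1)))  # ~45.0: this line rises at 45 degrees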

Equation to find average of multiple velocities?

I need to find the average (Edit: total) 2D velocity given multiple 2D velocities (speed and direction). A few examples:
Example 1
Velocity 1 is 90° at a speed of 10 pixels or units per second.
Velocity 2 is 270° at a speed of 5 pixels or units per second.
The average velocity is 90° at 5 pixels or units per second.
Example 2
Velocity 1 is 0° at a speed of 10 pixels or units per second
Velocity 2 is 180° at a speed of 10 pixels or units per second
Velocity 3 is 90° at a speed of 8 pixels or units per second
The average velocity is 90° at 8 pixels or units per second
Example 3
Velocity 1 is 0° at 10 pixels or units per second
Velocity 2 is 90° at 10 pixels or units per second
The average velocity is 45° at 14.142 pixels or units per second
I am using JavaScript but it's mostly a language-independent question and I can convert it to JavaScript if necessary.
If you're going to be using a bunch of angles, I would just calculate each velocity's components,
vx = v * cos(theta),
vy = v * sin(theta)
then sum the x velocities and the y velocities separately as vector components and, for an average, divide each sum by the number of velocities n,
sum(vx) / n, sum(vy) / n
and then finally calculate the final speed and direction from the resulting vx and vy. The magnitude of the speed can be found by a simple application of the Pythagorean theorem, and the final angle is tan-1(vy/vx) (use atan2(vy, vx) to get the correct quadrant).
Per example #3
vx = 10 * cos(90) + 10 * cos(0) = 10,
vy = 10 * sin(90) + 10 * sin(0) = 10
so, tan-1(10/10) = tan-1(1) = 45
then a final magnitude of sqrt(10^2 + 10^2) = 14.142
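The same recipe as a small Python sketch (the function and variable names are mine; angles are in degrees):

import math

def total_velocity(velocities):
    """velocities: iterable of (speed, angle_degrees) pairs; returns (speed, angle_degrees)."""
    vx = sum(v * math.cos(math.radians(a)) for v, a in velocities)
    vy = sum(v * math.sin(math.radians(a)) for v, a in velocities)
    # Pythagorean theorem for the magnitude; atan2 picks the correct quadrant
    return math.hypot(vx, vy), math.degrees(math.atan2(vy, vx))

print(total_velocity([(10, 0), (10, 90)]))  # example 3: (14.142..., 45.0)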
These are vectors, and you should use vector addition to add them. So right and up are positive, while left and down are negative.
Add your left-to-right vectors (x axis).
Example 1 = -10+5 = -5
Example 2 = -8 = -8
Example 3 = 10 = 10. (90 degrees is generally 90 degrees to the right)
Add your ups and downs similarly and you get these velocities, with left-to-right first in the brackets and up-to-down second.
(-5, 0)
(-8,0)
(10, 10)
These vectors contain all the information you need to plot the motion of an object; you do not need to calculate angles for that. If for some reason you would rather use speeds (magnitudes, which lack the direction a velocity carries) and angles, then you must first calculate the vectors as above, then use the Pythagorean theorem to find the speed and simple trigonometry to get the angle. Something like this:
var speed = Math.sqrt(x * x + y * y);
var tangent = y / x;
var angleRadians = Math.atan(tangent); // Math.atan2(y, x) avoids the division and gets the quadrant right
var angleDegrees = angleRadians * (180 / Math.PI);
I'll warn you that you should probably talk to someone who knows trigonometry and test this well. There is potential for misleading bugs in work like this.
From your examples it sounds like you want addition of 2-dimensional vectors, not averages.
E.g. example 2 can be represented as
(0,10) + (0,-10) + (-8, 0) = (-8,0)
The speed is then equal to the length of the vector:
sqrt(x^2+y^2)
To get an average speed: add the speeds, then divide by the number of speeds.
(10mph + 20mph) / 2 = 15mph
(12mph + 14mph + 13mph + 16mph) / 4 = 13.75mph
This is not so much average as it is just basic vector addition. You're finding multiple "pixel vectors" and adding them together. If you have a velocity vector of 2 pixels to the right, and 1 up, and you add it to a velocity vector of 3 pixels to the left and 2 down, you will get a velocity vector of 1 pixel left, and 1 down.
So the speed components are defined by
pij = pixels going up or (-)down
pii = pixels going right or (-)left
speedi = pii1 + pii2 = 2-3 = -1 (1 pixel left)
speedj = pij1 + pij2 = 1-2 = -1 (1 pixel down)
From there, you need to decide which directions are positive, and which are negative. I recommend that left is negative, and down is negative (like a mathematical graph).
The angle of the vector would be arctan(speedj/speedi):
arctan(-1/-1) = 45 degrees
(though with both components negative here, the actual direction is 45 + 180 = 225 degrees; atan2 handles this quadrant bookkeeping automatically).