How to find the intersection of two arcs on a sphere with CGAL

I have two arcs on the unit sphere in 3-dimensional space, and I want to know whether or not they intersect, using CGAL. I know that I must use the do_intersect function, but I don't understand how to define the arcs (which parameters I must pass to Circular_arc_point_3).
For example, I have two points in spherical coordinates:
phi = 0, psi = 0, r = 1
phi = 45, psi = 45, r = 1
(They define an arc on the unit sphere.)
Which parameters must I set in:
Circular_arc_point_3 p = Circular_arc_point_3(?, ?, ?);

You can construct a Circular_arc_3 from its supporting circle and its two endpoints.
See the following documentation pages:
Circular_arc_3
Circular_arc_point_3
Circle_3
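While wiring up the CGAL types above, it can help to sanity-check the geometry independently. The following is not the CGAL API: just a minimal numpy sketch, assuming both arcs are minor (shorter than a half circle) great-circle arcs on the unit sphere and that the two supporting great circles are distinct. It also shows the spherical-to-Cartesian conversion you need, since Circular_arc_point_3 is built from Cartesian coordinates (e.g. via a Point_3), not from (phi, psi, r):

import numpy as np

def sph_to_cart(phi_deg, psi_deg, r=1.0):
    # (phi, psi, r) -> (x, y, z); these Cartesian values are what you
    # would feed to a Point_3 / Circular_arc_point_3.
    phi, psi = np.radians(phi_deg), np.radians(psi_deg)
    return r * np.array([np.cos(psi) * np.cos(phi),
                         np.cos(psi) * np.sin(phi),
                         np.sin(psi)])

def on_minor_arc(x, p, q, tol=1e-9):
    # x lies on the minor arc p->q iff the angles p-x and x-q sum to p-q.
    ang = lambda u, v: np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    return abs(ang(p, x) + ang(x, q) - ang(p, q)) < tol

def arcs_intersect(p1, q1, p2, q2):
    n1 = np.cross(p1, q1)            # normal of the first supporting circle
    n2 = np.cross(p2, q2)            # normal of the second
    d = np.cross(n1, n2)             # the two great circles meet at +-d
    if np.linalg.norm(d) < 1e-12:
        raise ValueError("arcs lie on the same great circle")
    d /= np.linalg.norm(d)
    return any(on_minor_arc(x, p1, q1) and on_minor_arc(x, p2, q2)
               for x in (d, -d))

p1, q1 = sph_to_cart(0, 0), sph_to_cart(45, 45)   # the arc from the question
p2, q2 = sph_to_cart(45, 0), sph_to_cart(0, 45)   # a second, crossing arc
print(arcs_intersect(p1, q1, p2, q2))             # True

With CGAL itself, the construction is the one described above: build a Circle_3 for the supporting great circle, wrap the converted endpoints as Circular_arc_point_3, construct the Circular_arc_3, and call do_intersect on the two arcs.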

Related

Projecting a vector in a given plane using numpy

Using numpy, how can I compute the orthogonal projection of, for example, the vector np.array([0.3,0.5,0.2]) onto the plane 3x+2y-2z=0?
EDIT:
I think one may simply use numpy.linalg.lstsq to find the orthogonal projection?
Your hyperplane is defined by the set of x such that <a,x> = 0, where a is a vector orthogonal to the plane. In your example,
a = (3,2,-2).
The projection of a point p onto the hyperplane is the point p_proj such that p - p_proj is orthogonal to the plane. This means that it is parallel to a, or in other words p - p_proj = lambda*a. So
p_proj = p - lambda*a. (1)
Since p_proj lies in the hyperplane, <p_proj,a> = 0, so taking the inner product of (1) with a gives
lambda = <p,a>/<a,a>.
Substituting back into (1), you get
Projection(p) = p_proj = p - (<p,a>/<a,a>)*a
which can be done easily in numpy using np.dot(v_1,v_2) wherever we encounter <v_1,v_2>:
import numpy as np

def projection(p, a):
    lambda_val = np.dot(p, a) / np.dot(a, a)  # lambda = <p,a>/<a,a>
    return p - lambda_val * a
(Note that this is one step of the Gram-Schmidt process.)
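Applied to the vector and plane from the question, using the projection function above:

import numpy as np

p = np.array([0.3, 0.5, 0.2])
a = np.array([3.0, 2.0, -2.0])   # normal of the plane 3x+2y-2z=0
p_proj = projection(p, a)
print(p_proj)                    # approx [0.035  0.324  0.376]
print(np.dot(p_proj, a))         # ~0: p_proj lies in the plane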

simulate Boolean Model with R

I am a complete newbie at programming with R, especially with the library spatstat, so I hope that someone can help me.
I want to simulate a Boolean model. In my case, this is a Poisson point process with closed discs of radius r around the points of the point process.
With X = rpoispp(100) I can already simulate the point process, but I have no idea how to generate the circles around the points.
My Google research was unfortunately not successful.
Thanks for help,
Perry
Googling 'Boolean model spatstat' gave me
this helpfile as one of the
first hits. In the examples section there is an example of a Boolean model
in the unit square with disc radius 0.2:
library(spatstat)
X <- discs(runifpoint(15) %mark% 0.2, npoly=16)
plot(X, main = "", col = "gray")
Is this what you need?
Above X is a polygonal approximation of the union of discs. Alternatively
the Boolean model is often represented simply as a marked point
pattern with the disc radius as the mark. What is more convenient depends on
the context.
FYI: There are several equivalent ways of attaching a vector of marks m
to a point pattern X: marks(X) <- m and X %mark% m.
Simply make a marked point pattern and use the correct value of markscale
when plotting with plot.ppp:
library(spatstat)
X <- ppp(0.5, 0.5, window = owin()) # Single point in middle of unit square
marks(X) <- 0.5
plot.ppp(X, markscale = 1, legend = FALSE) # mark interpreted directly as diameter of circle
plot.ppp(X, markscale = 2, legend = FALSE) # 2*mark interpreted as diameter of circle (i.e. mark = radius)
Now, after a little self-study, I came to the following results:
> X=rpoispp(8)
> m=1
> marks(X)=m
> plot(X)
I get this picture: [image]
Now I want to scale the radii of the marks, but I do not understand how R sets the radii of the circles. I found this on a tutorial site: "If markscale is given, then a mark value of m is plotted as a circle of radius m * markscale (if m is positive)". I tested it and was surprised by the result. If I understood correctly, the radius of the markers should stay the same with
> plot(X, markscale = 1)
and should double with
> plot(X, markscale = 2)
but it does not.
PS: Sorry for my bad English.
Perry

Kinetic Theory Model

Edit: I've now fixed the problem I asked about. The spheres were leaving the box at the corners, where the if statements (in the while loop shown below) got confused. In the bits of code that reverse the individual components of velocity on contact with the walls, some elif statements were used. As far as I can tell, when elif is used and a sphere exceeds more than one position limit at a time, the program only reverses the velocity component for one of them. This is rectified by replacing elif with simply if. I'm not sure I quite understand the reason behind this, so hopefully someone cleverer than I will comment with such information, but for now, if anyone has the same problem, I hope my limited input helps!
Some context first:
I'm trying to build a model of the kinetic theory of gases in VPython, as a revision exercise for my (Physics) degree. This involves me building a hollow box and putting a bunch of spheres in it, randomly positioned throughout the box. I then need to assign each of the spheres its own random velocity and then use a loop to adjust the position of each sphere with reference to its velocity vector and a time step.
The spheres should also undergo elastic collisions with each wall and all other spheres.
When a sphere meets a wall in the x-direction, its x-velocity component is reversed and similarly in the y and z directions.
When a sphere meets another sphere, they swap velocities.
Currently, my code works as far as creating the right number of spheres, distributing them randomly, and giving each sphere its own random velocity. The spheres also move as they should, except for collisions. The spheres should all stay inside the box, since they should bounce off all the walls. They appear to be bouncing off each other; however, occasionally a sphere or two will go straight through the box.
I am extremely new to programming and I don't quite understand what's going on here or why it's happening but I'd be very grateful if someone could help me.
Below is the code I have so far (I've tried to comment what I'm doing at each step):
##########################################################
# This code is meant to create an empty box and then create
# a certain number of spheres (num_spheres) that will sit inside
# the box. Each sphere will then be assigned a random velocity vector.
# A loop will then adjust the position of each sphere to make them
# move. The spheres will undergo elastic collisions with the box walls
# and also with the other spheres in the box.
##########################################################
from visual import *
import random as random
import numpy as np

num_spheres = 15
fps = 24        # fps of while loop (later)
dt = 1.0/fps    # time step
l = 40          # length of box
w = 2           # width of box
radius = 0.5    # radius of spheres

##########################################################
# Creating an empty box with sides length/height l, width w
wallR = box(pos=(l/2.0, 0, 0), size=(w, l, l), color=color.white, opacity=0.25)
wallL = box(pos=(-l/2.0, 0, 0), size=(w, l, l), color=color.white, opacity=0.25)
wallU = box(pos=(0, l/2.0, 0), size=(l, w, l), color=color.white, opacity=0.25)
wallD = box(pos=(0, -l/2.0, 0), size=(l, w, l), color=color.white, opacity=0.25)
wallF = box(pos=(0, 0, l/2.0), size=(l, l, w), color=color.white, opacity=0.25)
wallB = box(pos=(0, 0, -l/2.0), size=(l, l, w), color=color.white, opacity=0.25)

# defining a function that creates a list of 'num' randomly positioned spheres
def create_spheres(num):
    global l, radius
    particles = []              # create an empty list
    for i in range(0, num):     # loop i from 0 to num-1
        v = np.random.rand(3)
        # pos such that spheres start inside the box; each sphere is
        # given an index for ease of referral later
        particles.append(sphere(pos=(3.0/4.0*l) * (v - 0.5),
                                radius=radius, color=color.red, index=i))
    return particles

# defining a global variable: the list of velocities for the spheres
velarray = []

# defining a function that gives each sphere a random velocity
def velocity_spheres(sphere_list):
    global velarray
    for sphere in sphere_list:
        # making the sign of each velocity component random
        rand = random.randint(0, 1)
        if rand == 1:
            sign = 1
        else:
            sign = -1
        mu = 10      # mean of the normal distribution
        sigma = 0.1  # standard deviation of the normal distribution
        # 3 random numbers form the velocity vector
        vel = vector(sign*random.normalvariate(mu, sigma),
                     sign*random.normalvariate(mu, sigma),
                     sign*random.normalvariate(mu, sigma))
        velarray.append(vel)

spheres = create_spheres(num_spheres)  # creating some spheres
velocity_spheres(spheres)              # invoking the velocity function

while True:
    rate(fps)
    for sphere in spheres:
        # incrementing sphere position by reference to its own velocity vector
        sphere.pos += velarray[sphere.index]*dt
        if abs(sphere.pos.x) > (l/2.0)-w-radius:
            # reversing x-velocity on contact with a side wall
            velarray[sphere.index][0] = -velarray[sphere.index][0]
        elif abs(sphere.pos.y) > (l/2.0)-w-radius:
            # reversing y-velocity on contact with a side wall
            velarray[sphere.index][1] = -velarray[sphere.index][1]
        elif abs(sphere.pos.z) > (l/2.0)-w-radius:
            # reversing z-velocity on contact with a side wall
            velarray[sphere.index][2] = -velarray[sphere.index][2]
        for sphere2 in spheres:      # checking other spheres
            if sphere2 != sphere:    # not checking the sphere against itself
                # if the two spheres are touching, swap their velocities
                if abs(sphere2.pos - sphere.pos) < (sphere.radius + sphere2.radius):
                    v1 = velarray[sphere.index]
                    velarray[sphere.index] = velarray[sphere2.index]
                    velarray[sphere2.index] = v1
Thanks again for any help!
The elif statements within the while loop in the code given in the original question were the cause of the problem. An elif branch is only evaluated if the preceding if condition is not satisfied. The circumstance wherein a sphere meets the corner of the box satisfies at least two of the conditions for reversing velocity components. This means that, while one would expect (at least) two velocity components to be reversed, only one is: the component in the if statement is reversed, whereas the component(s) in the elif statement(s) are not, because the first condition has been satisfied and hence the elif branches are skipped.
If each elif is changed to a separate if statement, the code works as intended.
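To see the difference concretely, here is a minimal standalone sketch (with made-up values) of a corner hit, where a sphere has crossed both the x and y limits in the same step:

# Hypothetical corner hit: both the x and y limits are exceeded at once.
vx, vy = 5.0, 5.0
x_out, y_out = True, True

# elif version: only the first matching branch runs, so vy is NOT reversed
# and the sphere keeps drifting out of the box in y.
if x_out:
    vx = -vx
elif y_out:
    vy = -vy
print(vx, vy)   # -5.0 5.0

# independent-if version: every exceeded limit reverses its own component.
vx, vy = 5.0, 5.0
if x_out:
    vx = -vx
if y_out:
    vy = -vy
print(vx, vy)   # -5.0 -5.0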

Calculating 2D resultant forces for vehicles in games

I am trying to calculate the forces that will act on circular objects in the event of a collision. Unfortunately, my mechanics is slightly rusty, so I'm having a bit of trouble.
I have an agent class with members
vector position // (x,y)
vector velocity // (x,y)
vector forward // (x,y)
float radius // radius of the agent (all circles)
float mass
So suppose we have two agents A and B, and in the next time step each velocity is going to change its position. If a collision is going to occur, I want to work out the force that will act on the objects.
I know Line1 = (B.position - A.position) is needed to work out the angle of the resultant force, but how to calculate it is baffling me when I have to take into account the current velocity of the vehicle along with the angle of collision.
arctan(L1.y, L1.x) is an angle for the force (direction can be determined)
sin/cos of that angle give the height/width components
Also, I know that to calculate the rotated axes I need to use
x = cos(T)*vel.x + sin(T)*vel.y
y = cos(T)*vel.y - sin(T)*vel.x
This is where my brain can't cope anymore.. Any help would be appreciated.
As I say, the aim is to work out the vector force applied to the objects as I have already taken into account basic physics.
Added a little pseudocode to show where I was starting to go with it:
A, B : Agent
Agent {
    vector position, velocity, front;
    float radius, mass;
}

vector dist = B.position - A.position;
float distMag = dist.magnitude();
if (distMag < A.radius + B.radius) { // collision
    float theta  = arctan(dist.y, dist.x);
    float sine   = sin(theta);
    float cosine = cos(theta);
    vector newAxis = new vector;
    newAxis.x = cosine * dist.x + sine * dist.y;
    newAxis.y = cosine * dist.y - sine * dist.x;
    // velocities rotated into the collision-axis frame
    vector[] vTemp = { new vector(), new vector() };
    vTemp[0].x = cosine * A.velocity.x + sine * A.velocity.y;
    vTemp[0].y = cosine * A.velocity.y - sine * A.velocity.x;
    vTemp[1].x = cosine * B.velocity.x + sine * B.velocity.y;
    vTemp[1].y = cosine * B.velocity.y - sine * B.velocity.x;
}
Here's to hoping there's a curious maths geek on stack..
Let us assume, without loss of generality, that we are in the second object's reference frame before the collision.
Conservation of momentum:
m1*vx1 = m1*vx1' + m2*vx2'
m1*vy1 = m1*vy1' + m2*vy2'
Solving for vx1', vy1':
vx1' = vx1 - (m2/m1)*vx2'
vy1' = vy1 - (m2/m1)*vy2'
Secretly, I will remember the fact that vx1'*vx1' + vy1'*vy1' = v1'*v1'.
Conservation of energy (one of the things elastic collisions give us is that angle of incidence is angle of reflection):
m1*v1*v1 = m1*v1'*v1' + m2*v2'*v2'
Solving for v1' squared:
v1'*v1' = v1*v1 - (m2/m1)*v2'*v2'
Combine with the momentum equations to eliminate v1':
(1 + m2/m1)*v2'*v2' = 2*(vx2'*vx1 + vy2'*vy1)
Now, if you've ever seen a stationary poolball hit, you know that it flies off in the direction of the contact normal (this is the same as your theta).
v2x' = v2'cos(theta)
v2y' = v2'sin(theta)
Therefore:
v2' = 2/(1 + m2/m1)*(vx1*cos(theta) + vy1*sin(theta))
Now you can solve for v1' (either use v1'=sqrt(v1*v1-(m2/m1)*v2'*v2') or solve the whole thing in terms of the input variables).
Let's call phi = arctan(vy1/vx1). The angle of incidence relative to the tangent line of the circle at the point of contact is 90-phi-theta (pi/2-phi-theta if you prefer). Reflecting about the tangent adds that again; converting back to an angle relative to the horizontal, call the outgoing angle psi = 180-phi-2*theta (pi-phi-2*theta). Or,
psi = (180 or pi) - arctan(vy1/vx1) - 2*arctan(dy/dx)
So:
vx1' = v1'*cos(psi)
vy1' = v1'*sin(psi)
Consider: if these circles are supposed to be solid 3D spheres, then use a mass proportional to radius-cubed for each one (note that the proportionality constant cancels out). If they are supposed to be disklike, use mass proportional to radius-squared. If they are rings, just use radius.
Next point to consider: Since the computer updates at discrete time events, you actually have overlapping objects. You should back out the objects so that they don't overlap before computing the new location of each object. For extra credit, figure out the time that they should have intersected, then move them in the new direction for that amount of time. Note that this time is just the overlap / old velocity. The reason that this is important is that you might imagine a collision that is computed that causes the objects to still overlap (causing them to collide again).
Next point to consider: to translate the original problem into this problem, just subtract object 2's velocity from object 1's (component-wise). After the computation, remember to add it back.
Final point to consider: I probably made an algebra error somewhere along the line. You should seriously consider checking my work.
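Pulling the above together (change to object 2's frame, find the contact normal, exchange the normal components scaled by the mass ratio), here is a minimal numpy sketch of one way to code it. It is a sketch under those assumptions, not a drop-in for the Agent class, and the function name and signature are made up for illustration:

import numpy as np

def resolve_collision(pA, vA, mA, pB, vB, mB):
    """Elastic collision of two circles; returns the new velocities."""
    n = pB - pA
    n = n / np.linalg.norm(n)     # contact normal, pointing from A to B
    u = np.dot(vA - vB, n)        # approach speed along the normal (B's frame)
    if u <= 0:
        return vA, vB             # already separating: no impulse
    # standard 1D elastic-collision result, applied along the normal only;
    # the tangential components are untouched
    vA_new = vA - (2.0*mB/(mA + mB)) * u * n
    vB_new = vB + (2.0*mA/(mA + mB)) * u * n
    return vA_new, vB_new

For solid spheres, pass masses proportional to radius cubed; for disks, radius squared, as noted above.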

Extract transform and rotation matrices from homography?

I have 2 consecutive images from a camera and I want to estimate the change in camera pose:
I calculate the optical flow:
Const MAXFEATURES As Integer = 100
imgA = New Image(Of [Structure].Bgr, Byte)("pic1.bmp")
imgB = New Image(Of [Structure].Bgr, Byte)("pic2.bmp")
grayA = imgA.Convert(Of Gray, Byte)()
grayB = imgB.Convert(Of Gray, Byte)()
imagesize = cvGetSize(grayA)
pyrBufferA = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
(imagesize.Width + 8, imagesize.Height / 3)
pyrBufferB = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
(imagesize.Width + 8, imagesize.Height / 3)
features = MAXFEATURES
featuresA = grayA.GoodFeaturesToTrack(features, 0.01, 25, 3)
grayA.FindCornerSubPix(featuresA, New System.Drawing.Size(10, 10),
New System.Drawing.Size(-1, -1),
New Emgu.CV.Structure.MCvTermCriteria(20, 0.03))
features = featuresA(0).Length
Emgu.CV.OpticalFlow.PyrLK(grayA, grayB, pyrBufferA, pyrBufferB, _
featuresA(0), New Size(25, 25), 3, _
New Emgu.CV.Structure.MCvTermCriteria(20, 0.03D),
flags, featuresB(0), status, errors)
pointsA = New Matrix(Of Single)(features, 2)
pointsB = New Matrix(Of Single)(features, 2)
For i As Integer = 0 To features - 1
    pointsA(i, 0) = featuresA(0)(i).X
    pointsA(i, 1) = featuresA(0)(i).Y
    pointsB(i, 0) = featuresB(0)(i).X
    pointsB(i, 1) = featuresB(0)(i).Y
Next
Dim Homography As New Matrix(Of Double)(3, 3)
cvFindHomography(pointsA.Ptr, pointsB.Ptr, Homography, HOMOGRAPHY_METHOD.RANSAC, 1, 0)
and it looks right, the camera moved leftwards and upwards:
Now I want to find out how much the camera moved and rotated. If I declare my camera position and what it's looking at:
' Create camera location at origin and lookat (straight ahead, 1 in the Z axis)
location = New Matrix(Of Double)(2, 3)
location(0, 0) = 0 ' X location
location(0, 1) = 0 ' Y location
location(0, 2) = 0 ' Z location
location(1, 0) = 0 ' X lookat
location(1, 1) = 0 ' Y lookat
location(1, 2) = 1 ' Z lookat
How do I calculate the new position and lookat?
If I'm doing this all wrong or if there's a better method, any suggestions would be very welcome, thanks!
For pure camera rotation, R = A^-1*H*A. To prove this, consider the plane-to-image homographies H1 = A and H2 = A*R, where A is the camera intrinsic matrix. Then H12 = H2*H1^-1 = A*R*A^-1, from which you can obtain R.
Camera translation is harder to estimate. If the camera translates, you have to find a fundamental matrix first (not a homography), x2^T*F*x1 = 0 for corresponding points x1, x2, and then convert it into an essential matrix, E = A^T*F*A. Then you can decompose E into rotation and translation, E = [t]x*R, where [t]x denotes the cross-product (skew-symmetric) matrix of the translation vector. The decomposition is not obvious; see this.
The rotation you get will be exact, while the translation vector can be found only up to scale. Intuitively, this scaling means that from the two images alone you cannot really tell whether the objects are close and small or far away and large. To disambiguate we may use objects of familiar size, a known distance between two points, etc.
Finally, note that the human visual system has a similar problem: though we "know" the distance between our eyes, when they converge on an object the disparity is always zero, and from disparity alone we cannot say what the distance is. Human vision relies on triangulation using the eyes' vergence signal to figure out absolute distance.
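In numpy, the pure-rotation recovery is then only a couple of lines. A small sketch, assuming you already have the 3x3 homography H between the images and the intrinsic matrix A (the intrinsic values below are made-up placeholders):

import numpy as np

A = np.array([[700.0,   0.0, 320.0],   # fx,  0, cx  (hypothetical intrinsics)
              [  0.0, 700.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def rotation_from_homography(H, A):
    R = np.linalg.inv(A) @ H @ A
    # H is only known up to scale, so rescale until det(R) = 1
    R = R / np.cbrt(np.linalg.det(R))
    # optionally snap to the nearest true rotation matrix (SVD projection)
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt

The new look-at direction is then R applied to the old one, e.g. R @ np.array([0.0, 0.0, 1.0]).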
Well, what you're looking at is, in simple terms, a Pythagorean-theorem problem, a^2 + b^2 = c^2. However, when it comes to camera-based applications, things are not very easy to determine accurately. You have found half of the detail you need for "a"; however, finding "b" or "c" is much harder.
The Short Answer
Basically, it can't be done with a single camera. But it can be done with two cameras.
The Long Winded Answer (Thought I'd explain in more depth, no pun intended)
I'll try to explain. Say we select two points within our image and move the camera left. We know the distance of each point from the camera: B1 is 20mm and B2 is 40mm. Now let's assume that we process the image and our measurements are that A1 is (0,2) and A2 is (0,4); these relate to B1 and B2 respectively. Now A1 and A2 are not measurements; they are pixels of movement.
What we now have to do is multiply the change in A1 and A2 by a calculated constant, which will be the real-world distance per pixel at B1 and B2. NOTE: each of these is different according to the distance B*. This all relates to the angle of view, more commonly called the field of view in photography, at different distances. You can calculate the constant accurately if you know the size of each pixel on the camera CCD and the focal length of the lens inside the camera.
I expect this isn't the case, so at different distances you have to place an object whose length you know and see how many pixels it takes up. Close up you can use a ruler to make things easier. With these measurements, you take the data and fit a curve with a line of best fit, where the X-axis is the distance of the object and the Y-axis is the pixel-to-distance constant that you must multiply your movement by.
So how do we apply this curve? Well, it's guesswork. In theory, the larger the measured movement A*, the closer the object is to the camera. In our example, suppose the ratios at B1 and B2 are 5mm and 3mm per pixel respectively; we would then know that point B1 has moved 10mm (2 x 5mm) and B2 has moved 12mm (4 x 3mm). But let's face it - we will never know B, and we will never be able to tell whether a movement of 20 pixels is an object close up moving a short distance or an object far away moving a much greater distance. This is why things like the Xbox Kinect use additional sensors to get depth information that can be tied to the objects within the image.
What you are attempting could be done with two cameras: since the distance between the cameras is known, the movement can be calculated much more accurately (effectively without using a depth sensor). The maths behind this is extremely complex, and I would suggest looking up some journal papers on the subject. If you would like me to explain the theory, I can attempt to.
All my experience comes from designing high-speed video acquisition and image processing for my PhD, so trust me, it can't be done with one camera, sorry. I hope some of this helps.
Cheers
Chris
[EDIT]
I was going to add a comment but this is easier due to the bulk of information:
Since it is the Kinect, I will assume you have some relevant depth information associated with each point; if not, you will need to figure out how to get this.
The equation you will need to start off with is for the field of view (FOV):
o/d = i/f
Where:
f is equal to the focal length of the lens, usually given in mm (e.g. 18, 28, 30, 50 are standard examples)
d is the object's distance from the lens, gathered from the Kinect data
o is the object dimension (or "field of view" perpendicular to and bisected by the optical axis).
i is the image dimension (or "field stop" perpendicular to and bisected by the optical axis).
We need to calculate i, where o is our unknown. For i (which is a diagonal measurement) we will need the size of a pixel on the CCD. This will be in micrometres (µm), and you will need to find this information out; for now we will take it as 14µm, which is standard for a midrange area-scan camera.
So first we need to work out the i horizontal dimension (ih), which is the number of pixels across the width of the sensor multiplied by the size of a CCD pixel (we will use 640 x 320):
ih = 640 * 14µm = 8960µm
   = 8960/1000 = 8.96mm
Now we need the i vertical dimension (iv), same process but with the height:
iv = (320 * 14µm) / 1000 = 4.48mm
Now i is found by the Pythagorean theorem, a^2 + b^2 = c^2:
i = sqrt(ih^2 + iv^2)
  = 10.02mm
Now we will assume we have a 28mm lens. Again, this exact value will have to be found out. So our equation, rearranged to give us o, is:
o = (i * d) / f
Remember o will be diagonal (we will assume our object or point is 50mm away):
o = (10.02mm * 50mm) / 28mm
  = 17.89mm
Now we need to work out the o horizontal dimension (oh) and the o vertical dimension (ov), as these will give us the distance per pixel that the object has moved. Since the FOV is directly proportional to the CCD, i is directly proportional to o, so we work out a ratio k:
k = i/o
  = 10.02 / 17.89
  = 0.56
so:
o horizontal dimension (oh):
oh = ih / k
   = 8.96mm / 0.56 = 16mm, the full horizontal field of view at this distance,
so each of the 640 pixels covers 16mm / 640 = 0.025mm
o vertical dimension (ov):
ov = iv / k
   = 4.48mm / 0.56 = 8mm, the full vertical field of view,
so each of the 320 pixels covers 8mm / 320 = 0.025mm
Now we have the constants we require, let's use them in an example. If our object at 50mm moves from position (0,0) to (2,4), then the measurements in real life are:
(2 * 0.025mm, 4 * 0.025mm) = (0.05mm, 0.1mm)
Again, the Pythagorean theorem: a^2 + b^2 = c^2
Total distance = sqrt(0.05^2 + 0.1^2)
               = 0.112mm
Complicated, I know, but once you have this in a program it's easier. For every point you will have to repeat at least half the process, as d, and therefore o, will change for every point you're examining.
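For example, the whole recipe above fits in a few lines of Python (all values hypothetical: 14µm pixels, a 640 x 320 sensor, a 28mm lens; d comes from your Kinect depth data):

import math

PIXEL_MM = 0.014                   # 14 um CCD pixel, in mm
W_PX, H_PX = 640, 320              # sensor resolution
F_MM = 28.0                        # lens focal length

def mm_per_pixel(d_mm):
    """Real-world size of one pixel for an object d_mm from the lens."""
    ih = W_PX * PIXEL_MM           # image width (mm)
    iv = H_PX * PIXEL_MM           # image height (mm)
    i = math.hypot(ih, iv)         # image diagonal (mm)
    o = i * d_mm / F_MM            # object-space diagonal, from o/d = i/f
    k = i / o                      # image-to-object ratio
    return (ih / k) / W_PX, (iv / k) / H_PX

# object 50mm away moves 2 pixels horizontally, 4 pixels vertically
mmx, mmy = mm_per_pixel(50.0)
dx, dy = 2 * mmx, 4 * mmy
print(math.hypot(dx, dy))          # total movement, about 0.11mm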
Hope this gets you on your way,
Cheers
Chris